POSSIBLE TYPES OF STATISTICAL INFORMATION ABOUT A SYSTEM
Our statistical information about a system may always be expressed by giving the expectation values of all observables. Now the expectation value of an arbitrary observable F, for a state whose wave function is φ, is

⟨F⟩ = (φ, Fφ)   (1)
If we do not know the state of the system, but know that wi are the respective probabilities of its being in states whose wave functions are φi, then we must assign as the expectation value of F the weighted average of its expectation values for the states φi. Thus,

⟨F⟩ = Σi wi (φi, Fφi)   (2)
This formula for ⟨F⟩ is the appropriate one when our system is one of an ensemble of systems of which numbers proportional to wi are in the states φi. It must not be confused with any such formula as

⟨F⟩ = (ψ, Fψ), where ψ = Σi ciφi,

which corresponds to the system’s having a wave function which is a linear combination of the φi. This last formula is of the type of (1), while (2) is of an altogether different type.
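The generic difference between the mixture average (2) and the expectation value of a single superposed wave function can be sketched numerically. The 2×2 observable, the basis states, and the equal weights below are illustrative assumptions, chosen only to show that the two prescriptions give different numbers (a minimal sketch using NumPy):

```python
import numpy as np

# Illustrative Hermitian observable F (any 2x2 Hermitian matrix would do).
F = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Two orthonormal states phi_1, phi_2 with equal weights w_i = 1/2.
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])
weights = [0.5, 0.5]

# Formula (2): the mixture assigns the weighted average of the
# pure-state expectation values (phi_i, F phi_i).
exp_mixture = sum(w * np.vdot(phi, F @ phi).real
                  for w, phi in zip(weights, [phi1, phi2]))

# By contrast, a single superposed wave function psi = sum_i c_i phi_i
# (here c_i = sqrt(w_i)) is a pure state of type (1); its cross terms
# (interference) contribute to the expectation value.
psi = np.sqrt(0.5) * phi1 + np.sqrt(0.5) * phi2
exp_superposition = np.vdot(psi, F @ psi).real

print(exp_mixture)        # ~0.0: no cross terms
print(exp_superposition)  # ~0.5: cross terms shift the average
```

The diagonal contributions are identical in both cases; the entire difference here comes from the off-diagonal matrix elements of F, which only the superposition picks up.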
An alternative way of expressing our statistical information is to give the probability that measurement of an arbitrary observable F will give as result an arbitrary one of its eigenvalues, say δ. When the system is in the state φ, this probability is

P(δ) = |(χδ, φ)|²   (1’)
where χδ is the eigenfunction of F corresponding to the eigenvalue δ. When we know only that wi are the probabilities of the system’s being in the states φi, the probability in question is

P(δ) = Σi wi |(χδ, φi)|²   (2’)
Formula (2’) is not the same as any special case of (1’) such as

P(δ) = |(χδ, Σi ciφi)|².
It differs generically from (1’) as (2) does from (1).
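The same generic difference shows up in the probability formulas. The eigenfunction χ and the states below are illustrative assumptions, with weights wi = 1/2 chosen so that the mixture (2’) and the corresponding superposed pure state of type (1’) give visibly different probabilities (NumPy sketch):

```python
import numpy as np

# Illustrative eigenfunction chi_delta of some observable.
chi = np.array([1.0, 1.0]) / np.sqrt(2)

# States phi_1, phi_2 with weights w_i = 1/2.
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])
weights = [0.5, 0.5]

# Formula (2'): weighted sum of the pure-state probabilities |(chi, phi_i)|^2
# -- the probabilities are averaged.
p_mixture = sum(w * abs(np.vdot(chi, phi)) ** 2
                for w, phi in zip(weights, [phi1, phi2]))

# A special case of (1'): the single superposed state psi = sum_i sqrt(w_i) phi_i
# -- the amplitudes add before being squared.
psi = np.sqrt(0.5) * phi1 + np.sqrt(0.5) * phi2
p_pure = abs(np.vdot(chi, psi)) ** 2

print(p_mixture)  # ~0.5
print(p_pure)     # ~1.0: the amplitudes interfere constructively
```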
When such equations as (1), (1’) hold, we say that the system is in the “pure state” whose wave function is φ. The situation represented by Eqs. (2), (2’) is called a “mixture” of the states φi with the weights wi. It can be shown that the most general type of statistical information about a system is represented by a mixture. A pure state is a special case, with only one non-vanishing wi. The term mixture is usually reserved for cases in which there is more than one non-vanishing wi. It must again be emphasized that a mixture in this sense is essentially different from any pure state whatever.”
Now we quote from a recent Quantum Reality Web site the same description of “pure state” and “quantum state”:
“The statistical properties of both systems before measurement, however, could be described by a density matrix. So for an ensemble system such as this the density matrix is a better representation of the state of the system than the vector.
So how do we calculate the density matrix? The density matrix is defined as the weighted sum of the tensor products over all the different states:

ρ = p│ψ><ψ│ + q│φ><φ│

where p and q refer to the relative probability of each state. For the example of particles in a box, p would represent the number of particles in state │ψ>, and q would represent the number of particles in state │φ>.
Let’s imagine we have a number of qubits in a box (these can take the value │0> or │1>).
Let’s say all the qubits are in the following superposition state: 0.6│0> +0.8i│1>.
In other words, the ensemble system is in a pure state, with all of the particles in an identical quantum superposition of states │0> and│1>. As we are dealing with a single, pure state, the construction of the density matrix is particularly simple: we have a single probability p, which is equal to 1.0 (certainty), while q (and all the other probabilities) are equal to zero. The density matrix then simplifies to: │ψ><ψ│
This state can be written as a column (“ket”) vector. Note the imaginary component (the expansion coefficients are in general complex numbers):

│ψ> = [ 0.6  ]
      [ 0.8i ]
In order to generate the density matrix we need to use the Hermitian conjugate (or adjoint) of this column vector (the transpose of the complex conjugate of │ψ>). So in this case the adjoint is the following row (“bra”) vector:

<ψ│ = [ 0.6   −0.8i ]

The density matrix of this pure state is then

ρ = │ψ><ψ│ = [ 0.36    −0.48i ]
             [ 0.48i    0.64  ]
What does this density matrix tell us about the statistical properties of our pure state ensemble quantum system? For a start, the diagonal elements tell us the probabilities of finding the particle in the │0> or │1> eigenstate. For example, the 0.36 component informs us that there will be a 36% probability of the particle being found in the │0> state after measurement. Of course, that leaves a 64% chance that the particle will be found in the │1> state (the 0.64 component).
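The construction just described can be checked directly. Using the site’s numbers (the ket 0.6│0> + 0.8i│1>), a short NumPy sketch builds ρ = │ψ><ψ│ and reads the measurement probabilities off the diagonal:

```python
import numpy as np

# The ket |psi> = 0.6|0> + 0.8i|1> as a column vector.
psi = np.array([[0.6], [0.8j]])

# Density matrix of the pure state: the column vector times its
# Hermitian conjugate (the "bra" row vector [0.6, -0.8i]).
rho = psi @ psi.conj().T
# rho is approximately [[0.36, -0.48i],
#                       [0.48i, 0.64 ]]

# Diagonal elements: probabilities of finding |0> and |1>.
print(rho[0, 0].real)      # ~0.36
print(rho[1, 1].real)      # ~0.64
print(np.trace(rho).real)  # ~1.0: the probabilities sum to one
```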
The way the density matrix is calculated, the diagonal elements can never have imaginary components (this is similar to the way the eigenvalues are always real). However, the off-diagonal terms can have imaginary components (as shown in the above example). These imaginary components have an associated phase (complex numbers can be written in polar form). It is the phase differences of these off-diagonal elements which produce interference (for more details, see the book Quantum Mechanics Demystified). The off-diagonal elements are characteristic of a pure state. A mixed state is a classical statistical mixture and therefore has no off-diagonal terms and no interference.
So how do the off-diagonal elements (and related interference effects) vanish during decoherence?
The off-diagonal (imaginary) terms have a completely unknown relative phase factor which must be averaged over during any calculation, since it is different for each separate measurement (each particle in the ensemble). As the phases of these terms are not correlated (not coherent), the sums cancel out to zero. The matrix becomes diagonalised (all off-diagonal terms become zero). Interference effects vanish. The quantum state of the ensemble system is then apparently “forced” into one of the diagonal eigenstates (the overall state of the system becomes a mixture state), with the probability of a particular eigenstate selection predicted by the value of the corresponding diagonal element of the density matrix.
Consider the following density matrix for a pure state ensemble in which the off-diagonal terms have a phase factor of θ:

ρ = [ 0.36          0.48 e^(−iθ) ]
    [ 0.48 e^(iθ)   0.64         ]
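The decoherence mechanism described above – uncorrelated phases averaging the off-diagonal terms to zero – can be sketched numerically. The magnitudes 0.36, 0.48, and 0.64 come from the running example; drawing each particle’s phase θ uniformly at random is the modelling assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(theta):
    # Pure-state density matrix whose off-diagonal terms carry a phase theta.
    return np.array([[0.36, 0.48 * np.exp(-1j * theta)],
                     [0.48 * np.exp(1j * theta), 0.64]])

# Each particle in the ensemble carries an uncorrelated random phase,
# so the ensemble density matrix is the average over many phases.
thetas = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
rho_avg = sum(rho(t) for t in thetas) / len(thetas)

# The diagonal (0.36 and 0.64) survives; the off-diagonal terms, and
# with them the interference, average toward zero: the matrix diagonalises.
print(np.round(rho_avg, 2))
```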
The above statement can be written in a simplified manner as follows: Selection of a particular eigenstate is governed by a purely probabilistic process. This requires a large number of readings. For this purpose, we must consider an ensemble – a large number of quantum particles in a similar state – and treat them as a single quantum system. Then we measure each particle to ascertain a particular value, say color. We tabulate the results in a statement called the density matrix. Before measurement, each of the particles is in the same state with the same state vector. In other words, they are all in the same superposition state. Hence this is called a pure state. After measurement, all particles are in different classical states – the state (color) of each particle is known. Hence it is called a mixed state.
In common-sense language, what it means is this: if we take a box of, say, 100 billiard balls of random colors – blue and green – then before counting the balls of each color we could not say what percentage of the balls are blue and what percentage green. But after we count the balls of each color and tabulate the results, we know that (in the above example) 36% of the balls belong to one color and 64% belong to the other. If we have to describe the balls after counting, we will give the above percentages, or say that 36 balls are blue and 64 balls are green. That would be a pure statement. But before such measurement, we can only describe the balls as 100 balls of blue and green color. This would be a mixed state.
As can be seen, our common-sense description is the opposite of the quantum mechanical classification, written by two scientists about 75 years apart and accepted by all scientists unquestioningly. Thus, it is no wonder that one scientist jokingly said: “A good working definition of quantum mechanics is that things are the exact opposite of what you thought they were. Empty space is full, particles are waves, and cats can be both alive and dead at the same time.”
We quote another example from the famous EPR argument of Einstein and others (Phys. Rev. 47, 777 (1935)): “To illustrate the ideas involved, let us consider the quantum-mechanical description of the behavior of a particle having a single degree of freedom. The fundamental concept of the theory is the concept of state, which is supposed to be completely characterized by the wave function ψ, which is a function of the variables chosen to describe the particle’s behavior. Corresponding to each physically observable quantity A there is an operator, which may be designated by the same letter.
If ψ is an eigenfunction of the operator A, that is, if ψ’ ≡ Aψ = aψ (1)
where a is a number, then the physical quantity A has with certainty the value a whenever the particle is in the state given by ψ. In accordance with our criterion of reality, for a particle in the state given by ψ for which Eq. (1) holds, there is an element of physical reality corresponding to the physical quantity A”.
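Eq. (1) of the quoted passage can be illustrated numerically. The 2×2 Hermitian matrix below stands in for the operator A and is an illustrative assumption; the point is that when ψ is an eigenfunction of A, the quantity A has its eigenvalue with certainty (zero variance):

```python
import numpy as np

# Illustrative Hermitian operator standing in for the observable A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# psi = (1, 1)/sqrt(2) is an eigenfunction of A with eigenvalue a = 3.
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Eq. (1): A psi = a psi.
print(A @ psi)  # equals 3 * psi, up to float rounding

# For this state the quantity A has the value 3 with certainty:
# the mean is 3 and the variance vanishes (up to rounding).
mean = np.vdot(psi, A @ psi).real
variance = np.vdot(psi, A @ (A @ psi)).real - mean ** 2
print(mean)
print(variance)
```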
We can write the above statement and the concept behind it in various ways that will be far easier to understand by the common man. We can also give various examples to demonstrate the physical content of the above statement. However, such statements and examples will be difficult to twist and interpret differently when necessary. Putting the concept in an ambiguous format helps in its subsequent manipulation, as is explained below citing from the same example:
“In accordance with quantum mechanics we can only say that the relative probability that a measurement of the coordinate will give a result lying between a and b is

P(a, b) = ∫ₐᵇ ψ̄ψ dx = b − a.
Since this probability is independent of a, but depends only upon the difference b – a, we see that all values of the coordinate are equally probable”.
The above conclusion has been arrived at based on the following logic: “More generally, it is shown in quantum mechanics that, if the operators corresponding to two physical quantities, say A and B, do not commute, that is, if AB ≠ BA, then the precise knowledge of one of them precludes such a knowledge of the other. Furthermore, any attempt to determine the latter experimentally will alter the state of the system in such a way as to destroy the knowledge of the first”.
The above statement is highly misleading. The law of commutation is a special case of non-linear accumulation, as explained below. All interactions involve application of force, which leads to accumulation and corresponding reduction. Where such accumulation is between similars, it is linear accumulation, and its mathematics is called addition. If such accumulation is not fully between similars, but between partial similars (partially similar and partially dissimilar), it is non-linear accumulation, and its mathematics is called multiplication. For example, 10 cars and another 10 cars are twenty cars through addition. But if there are 10 cars in a row and there are two rows of cars, then “rows of cars” is common to both statements, yet one statement shows the number of cars in a row while the other shows the number of rows of cars. Because of this partial dissimilarity, the mathematics has to be multiplication: 10 × 2 or 2 × 10. We are free to use either of the two sequences and the result will be the same. This is the law of commutation. However, no multiplication is possible if the two factors are not partially similar. In such cases, the two factors are said to be non-commutable. If the two terms are mutually exclusive, i.e., one of the terms will always be zero, the result of their multiplication will always be zero. Hence they may be said to be not commutable, though in reality they are commutable; the result of their multiplication is simply always zero. This implies that the knowledge of one precludes the knowledge of the other. The commutability or otherwise depends on the nature of the quantities – whether they are partially related and partially non-related to each other or not.
Position is a fixed co-ordinate in a specific frame of reference. Momentum is a mobile co-ordinate in the same frame of reference. Fixity and mobility are mutually exclusive. If a particle has a fixed position, its momentum is zero. If it has momentum, it does not have a fixed position. Since “particle” is similar in both the above statements, i.e., since both are related to the particle, they can be multiplied, hence commutable. But since one or the other factor is always zero, the result will always be zero and the equation AB ≠ BA does not hold. In other words, while uncertainty is established due to other reasons, the equation Δx·Δp ≥ h is a mathematically wrong statement, as mathematically the answer will always be zero. The validity of a physical statement is judged by its correspondence to reality or, as Einstein and others put it, “by the degree of agreement between the conclusions of the theory and human experience”. Since in this case the degree of agreement between the conclusions of the theory and human experience is zero, it cannot be a valid physical statement either. Hence, it is no wonder that Heisenberg’s uncertainty relation is still a hypothesis and not proven. In later pages we have discussed this issue elaborately.