Basis Entropy

Author

  • Xing Chen
Abstract

Projective measurement can increase the entropy of a state ρ. The size of the increase depends not only on the basis of the projective measurement but also on the properties of the state itself. In this paper we define this entropy increase as the basis entropy. We then demonstrate the usefulness of this new concept by applying it to the success probability of Grover's algorithm and to the existence of quantum discord. As shown in the paper, the concept can also be used to describe decoherence.

Introduction — Projective measurement can increase the entropy of a state ρ [1][3]. The increase differs from state to state, and it depends on two factors: (1) the orthogonal projectors of the projective measurement, and (2) the state itself. Every state thus has a definite entropy increase after a given projective measurement. This increase is a useful physical quantity, and the aim of this paper is to demonstrate its usefulness through several theorems about it. A notable merit of this quantity is that it is closely related to quantum discord, and it can be used to explain the existence of quantum discord.

Basis entropy — First we should give this entropy increase a proper name. Since a projective measurement is determined by its basis, we suggest the name "basis entropy". It quantifies the ignorance of the result of a projective measurement on a state, given knowledge of the state ("knowledge" here meaning information equal to the von Neumann entropy of the state). From this definition, the basis entropy can be computed as

    BE = S\left(\sum_i P_i \rho P_i\right) - S(\rho),    (1)

where \{P_i\} is a complete set of orthogonal projectors.

A good example illustrating the physical meaning of basis entropy is Grover's algorithm [4]. Let us first review its procedure [1]:

1. |0\rangle^{\otimes n}|1\rangle
2. \longrightarrow \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle \left[\frac{|0\rangle - |1\rangle}{\sqrt{2}}\right]
3. \longrightarrow \left[(2|\psi\rangle\langle\psi| - I)O\right]^R \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle \left[\frac{|0\rangle - |1\rangle}{\sqrt{2}}\right] \approx |x_0\rangle \left[\frac{|0\rangle - |1\rangle}{\sqrt{2}}\right]
4. \longrightarrow x_0

The first line is the initial state, where n is the number of qubits. Applying H^{\otimes n} to the first n qubits gives the state in step 2. Applying the Grover iteration (2|\psi\rangle\langle\psi| - I)O [6] about R \approx \lceil \pi\sqrt{2^n}/4 \rceil times gives the state in step 3, from which a measurement yields the desired x_0; for the details of Grover's algorithm, see [1].

In Grover's algorithm we effectively search the database state \sum_{x=0}^{2^n-1} |x\rangle / \sqrt{2^n} and measure each qubit with the projectors \{|0\rangle\langle 0|, |1\rangle\langle 1|\}. Before any Grover iteration, the basis entropy of the database state is n. This is our ignorance of the measurement result: without applying the Grover iteration, we need n bits to describe the measurement result. After one Grover iteration the basis entropy decreases, meaning our ignorance of the measurement result decreases, so the probability of finding the target state |x_0\rangle increases. As shown in Figure 1, the success probability rises as the basis entropy falls. Figure 1 shows the evolution of the basis entropy in Grover's algorithm for n = 20: the more times the Grover iteration is applied (up to the desired number R \approx \lceil \pi\sqrt{2^n}/4 \rceil, see Appendix A), the smaller the basis entropy and the higher the success probability. In short, basis entropy is the ignorance of the measurement result given knowledge of the state.
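As a concrete illustration, Eq. (1) and the Grover example above can be checked numerically. The following is a minimal sketch, not the paper's own code: it assumes NumPy, and the function names and the choice n = 10 are ours. It computes the basis entropy of a density matrix for a given set of projectors, then tracks how the basis entropy of the database state falls while the success probability rises over the Grover iterations. For a pure state measured in the computational basis, Eq. (1) reduces to the Shannon entropy of the outcome distribution, which the Grover part exploits to avoid building a 2^n-dimensional density matrix.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def basis_entropy(rho, projectors):
    """Eq. (1): BE = S(sum_i P_i rho P_i) - S(rho)."""
    dephased = sum(P @ rho @ P for P in projectors)
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

# Sanity check: |+> measured in the computational basis has BE = 1 bit.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
be_plus = basis_entropy(np.outer(plus, plus),
                        [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])])

# Grover's algorithm on n qubits (n = 10 to keep the run fast).  For a
# pure state measured in the computational basis, Eq. (1) reduces to the
# Shannon entropy of the outcome probabilities |<x|psi>|^2.
def shannon_bits(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

n = 10
N = 2 ** n
target = 3                                # arbitrary marked item
psi = np.full(N, 1.0 / np.sqrt(N))        # uniform database state
be_initial = shannon_bits(np.abs(psi) ** 2)   # equals n for this state

R = int(np.ceil(np.pi * np.sqrt(N) / 4))  # R ~ ceil(pi sqrt(2^n) / 4)
for _ in range(R):
    psi[target] *= -1                     # oracle O: flip marked amplitude
    psi = 2 * psi.mean() - psi            # diffusion (2|psi><psi| - I)

p_success = float(psi[target] ** 2)
be_final = shannon_bits(np.abs(psi) ** 2)
```

After the R iterations the basis entropy has dropped from n bits to a small fraction of a bit while the success probability is close to 1, mirroring the trade-off plotted in Figure 1.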
[Figure 1: Basis entropy and success probability of Grover's algorithm as a function of the iteration number (n = 20).]

A state's basis entropy depends on the measurement projectors, so a single state has infinitely many basis entropies; we are interested only in the maximum and the minimum. For the maximum basis entropy we have the following theorem.

Theorem 1. A state's maximal basis entropy is \log_2 D if and only if the state is pure, where D is the dimension of the Hilbert space.

Suppose first that a state's maximal basis entropy is \log_2 D, i.e.

    BE_{max} = S\left(\sum_i P_i \rho P_i\right) - S(\rho) = \log_2 D.    (2)

Since entropy is non-negative and the von Neumann entropy of any state in a D-dimensional Hilbert space is at most \log_2 D [1], S(\rho) must be zero, which means the state is pure.

Now let us prove the other direction. At first sight only pure states such as

    |\psi\rangle = \frac{1}{\sqrt{D}} \sum_i |i\rangle    (3)

appear to have basis entropy \log_2 D, while for a pure state such as (\sqrt{3}|0\rangle + |1\rangle)/2 the basis entropy in the computational basis is smaller than \log_2 D. We now show that every pure state attains the maximal basis entropy \log_2 D, provided the right projectors are chosen. For simplicity we prove the case D = 2. Any state can be written as

    \rho = \frac{1}{2} I + a\sigma_1 + b\sigma_2 + c\sigma_3,    (4)

where a, b and c are real coefficients and \sigma_1, \sigma_2, \sigma_3 are the Pauli matrices. We only need to prove that

    S_{max}\left(\sum_i P_i \rho P_i\right) = 1,    (5)

where

    \{P_k = V|k\rangle\langle k|V^{\dagger} : k = 0, 1\}    (6)

is a complete set of orthogonal projectors and V is a 2-dimensional unitary transformation. It has been proven that for any 2-dimensional pure state the maximal basis entropy is \log_2 2 = 1 (for the details, see Appendix B). This completes the proof of Theorem 1.
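Theorem 1 can be checked numerically for D = 2 by a grid search over the measurement bases of Eq. (6). The sketch below is ours, not the paper's: it assumes NumPy, parameterizes the first column of V by Bloch angles, and confirms that the pure state (\sqrt{3}|0\rangle + |1\rangle)/2 reaches a maximal basis entropy of 1 even though its computational-basis value is only H(3/4) ≈ 0.811, while a mixed state stays strictly below 1.

```python
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def basis_entropy(rho, projectors):
    dephased = sum(P @ rho @ P for P in projectors)
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

def max_basis_entropy_qubit(rho, steps=120):
    """Grid search over the bases {P_k = V|k><k|V^dagger} of Eq. (6),
    with the first column of V parameterized by Bloch angles."""
    best = 0.0
    for theta in np.linspace(0.0, np.pi, steps):
        for phi in np.linspace(0.0, 2.0 * np.pi, steps):
            v0 = np.array([np.cos(theta / 2),
                           np.exp(1j * phi) * np.sin(theta / 2)])
            P0 = np.outer(v0, v0.conj())
            P1 = np.eye(2) - P0
            best = max(best, basis_entropy(rho, [P0, P1]))
    return best

# Pure state (sqrt(3)|0> + |1>)/2: maximal basis entropy reaches 1.
psi = np.array([np.sqrt(3.0), 1.0]) / 2.0
max_pure = max_basis_entropy_qubit(np.outer(psi, psi))

# Mixed state: the maximal basis entropy stays strictly below 1
# (analytically it equals 1 - S(rho) for a qubit).
max_mixed = max_basis_entropy_qubit(np.diag([0.75, 0.25]))
```

The best basis for a pure qubit state lies perpendicular to its Bloch vector, where both outcomes become equally likely; for the mixed example the maximum is about 1 - H(1/4) ≈ 0.189.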
The proof extends straightforwardly to higher-dimensional Hilbert spaces. From Theorem 1 it is natural to obtain the following corollary:

Corollary 1. Only states of the form \rho = \sum_i |i\rangle\langle i|/D have zero basis entropy; that is, no projective measurement can increase their entropy.

This corollary is easy to prove: for a state \rho = \sum_i |i\rangle\langle i|/D, the von Neumann entropy has already reached its maximum in the Hilbert space, so no projective measurement can increase it. We can therefore use the maximal basis entropy to judge whether a state is mixed. By Theorem 1, if a state's maximal basis entropy equals \log_2 D, the state is pure; if it is smaller than \log_2 D, the state is mixed; and if it is zero, the state is \rho = \sum_i |i\rangle\langle i|/D.

Basis entropy and quantum discord — For most states, pure or mixed, the minimal basis entropy is zero: there exists a complete set of orthogonal projectors under which we can obtain full knowledge of the state. But for some states the minimal basis entropy is nonzero, which means that no matter which projectors we use to measure them, some information remains inaccessible. This inaccessible information, as we show next, is quantum discord. Quantum discord arises from projective measurement [7], just as basis entropy does. For a Bell state the basis entropy is a constant, so its minimal basis entropy is easy to calculate; let us therefore take a Bell state as an example to explain why quantum discord is related to the minimal basis entropy. For the Bell state (|00\rangle + |11\rangle)/\sqrt{2} with subsystems A and B, the discord is

    \delta(A:B)_{\{\Pi_i^B\}} = I(A:B) - J(A:B)_{\{\Pi_i^B\}}
                             = S(A) + S(B) - S(A,B) - S(A) + S(A|\{\Pi_i^B\})
                             = S(B) + S(A|\{\Pi_i^B\})
                             = S(B)
                             = 1,    (7)

since S(A,B) = 0 for a pure state and the conditional entropy S(A|\{\Pi_i^B\}) vanishes for a Bell state.
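Corollary 1 and the Bell-state calculation of Eq. (7) can likewise be verified numerically. The sketch below is ours (NumPy; function names are assumptions), and it reads the basis entropy of the Bell state as referring to measurements on subsystem B alone, with projectors I ⊗ Π_k, matching the measured-B discord of Eq. (7). It checks that the maximally mixed state has zero basis entropy, and that the basis entropy of (|00\rangle + |11\rangle)/\sqrt{2} equals 1 for every choice of local basis on B, the discord value.

```python
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def basis_entropy(rho, projectors):
    dephased = sum(P @ rho @ P for P in projectors)
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

# Corollary 1: the maximally mixed state has zero basis entropy.
be_mm = basis_entropy(np.eye(2) / 2,
                      [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])])

# Bell state (|00> + |11>)/sqrt(2); measure subsystem B only, with
# projectors I (x) Pi_k for an arbitrary single-qubit basis {|v0>, |v1>}.
bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2)
rho_bell = np.outer(bell, bell)

def local_be(theta, phi):
    v0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    P0 = np.outer(v0, v0.conj())
    P1 = np.eye(2) - P0
    projs = [np.kron(np.eye(2), P) for P in (P0, P1)]
    return basis_entropy(rho_bell, projs)

# Sample local bases on B: the basis entropy should be 1 everywhere,
# so the minimal basis entropy equals the discord delta(A:B) = S(B) = 1.
samples = [local_be(t, p)
           for t in np.linspace(0.0, np.pi, 15)
           for p in np.linspace(0.0, 2.0 * np.pi, 15)]
min_be, max_be = min(samples), max(samples)
```

That the value is constant over all local bases reflects the maximal entanglement of the Bell state: every local measurement on B dephases it into an equal mixture of two orthogonal product states.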




Journal:
  • CoRR

Volume: abs/1606.01505

Pages: -

Publication date: 2016