Information theory of open fragmenting systems

An information theory description of finite systems explicitly evolving in time is presented. We impose a MaxEnt variational principle on the Shannon entropy at a given time, while the constraints are set at a former time. The resulting density matrix contains explicit time-odd components in the form of collective flows. As a specific application we consider the dynamics of the expansion in connection with heavy ion experiments. Lattice gas and classical molecular dynamics simulations are shown.


I. INTRODUCTION
The microscopic foundations of thermodynamics are well established using the Gibbs hypothesis of statistical ensembles maximizing the Shannon entropy [1]. At the thermodynamic limit, the various Gibbs ensembles converge to a unique thermodynamic equilibrium. However, most of the systems studied in physics do not correspond to this limit [2]. In finite systems the various Gibbs ensembles are not equivalent [3], and the physical meaning and relevance of these different equilibria have to be investigated.
A common interpretation of a statistical ensemble for a finite system is given by the Boltzmann ergodic assumption. In this interpretation the statistical ensemble represents the collection of successive snapshots of a physical system evolving in time, and the state variables are identified with the conserved observables. This interpretation suffers from important drawbacks. First, even for a truly ergodic Hamiltonian, a finite time experiment may very well achieve ergodicity only on a subspace of the total accessible phase space [4]. Moreover, ergodicity applies to confined systems and thus requires the definition of boundary conditions. The statistical ensemble then directly depends on the boundary conditions, and we will argue that an exact knowledge of the boundary corresponds to an infinite information and is therefore hardly compatible with the very principles of statistical mechanics. Finally, the systems experimentally accessible are often not confined but freely evolve in the vacuum, as is notably the case for heavy ion collisions. The concept of a stationary equilibrium defined by the variables conserved by the dynamics in a hypothetical constraining box is not useful for these systems.
However statistical approaches, expressing the reduction of the available information to a limited number of collective observables, are still pertinent to complex systems even if the dynamics does not allow at any time a complete and even exploration of the energy shell [5]. In this interpretation the MaxEnt postulate has to be interpreted as a minimum information postulate, which finds its justification in the complexity of the dynamics, independent of any time scale [1,5].
This information theory approach is a very powerful extension of the classical Gibbs equilibrium: any arbitrary observable can act as a state variable, and all statistical quantities can be unambiguously defined for any number of particles [6]. The price to be paid for such a generalization is that the constraining variables as well as the density matrix continuously evolve in time. The time dependence of the process naturally leads to the appearance of new time-odd constraints, or collective flows. In the case of an ideal gas of particles or clusters we will show that the ensemble of constraints forms a closed algebra, and that the information at the initial time is sufficient to calculate the exact density matrix at any successive time.

II. STATISTICAL EQUILIBRIA
When the system is characterized by L observables known on average, <Â_ℓ> = Tr(D̂ Â_ℓ), statistical equilibrium corresponds to the maximization of the constrained entropy

S - Σ_ℓ λ_ℓ <Â_ℓ>,

where S = -Tr(D̂ log D̂) is the Shannon entropy, D̂ is the density matrix, and λ = {λ_ℓ} are Lagrange multipliers. Gibbs equilibrium is then given by

D̂_λ = Z_λ^(-1) exp(-Σ_ℓ λ_ℓ Â_ℓ), (1)

where Z_λ = Tr exp(-Σ_ℓ λ_ℓ Â_ℓ) is the associated partition sum. It should be noticed that microcanonical thermodynamics [7] can also be obtained from the variation of the Shannon entropy in the special case of a fixed energy subspace. In this case the maximum of the Shannon entropy can be identified with the Boltzmann entropy, max(S) = log W(E), where W is the total density of states at the energy E. In the following we shall confine ourselves to the Gibbs formulation (1), which is more general than the microcanonical ansatz. Indeed the microcanonical density matrix corresponds to an even occupation of the whole energy shell, while non-ergodic components can already be included within the Gibbs formalism through the introduction of extra constraints.
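The variational principle behind eq. (1) can be made concrete with a minimal numerical sketch. The following Python fragment (using a made-up discrete spectrum in arbitrary units; it is an illustration, not part of the models discussed below) solves for the Lagrange multiplier that matches a prescribed average energy by bisection, exploiting the fact that <E> is strictly decreasing in the multiplier:

```python
import math

def gibbs_weights(levels, lam):
    """Gibbs distribution p_n proportional to exp(-lam * E_n) over a discrete spectrum."""
    w = [math.exp(-lam * e) for e in levels]
    z = sum(w)                      # partition sum Z_lam
    return [x / z for x in w]

def mean_energy(levels, lam):
    return sum(p * e for p, e in zip(gibbs_weights(levels, lam), levels))

def solve_lagrange(levels, target, lo=-50.0, hi=50.0):
    """Bisection for lam such that <E> = target.
    <E> is strictly decreasing in lam, so the bracket shrinks monotonically."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_energy(levels, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [0.0, 1.0, 2.0, 3.0]       # made-up spectrum, arbitrary units
lam = solve_lagrange(levels, target=1.0)
p = gibbs_weights(levels, lam)
shannon = -sum(pi * math.log(pi) for pi in p)   # entropy of the MaxEnt state
```

Since the target average lies below the unconstrained mean, the solver returns a positive multiplier and the resulting entropy is strictly below the uniform-distribution maximum log 4, as expected for a constrained MaxEnt state.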

A. Boundary condition problem in finite systems
The statistical physics formalism recalled above is valid for any system size. However, as soon as one of the Â_ℓ contains differential operators, such as a kinetic energy, eq. (1) is not defined unless boundary conditions are specified. Only at the thermodynamic limit are boundary conditions irrelevant, as only in this limit are surface effects negligible. The definition of any density with a finite number of particles requires the definition of a finite volume. To this aim, a fictitious container is generally introduced [8]. The volume and shape of this unphysical box have no influence on the thermodynamics of self-bound systems, but in the presence of continuum states the situation is different. Let us consider the standard case of the vanishing of the wavefunction on the surface S of a containing box V. Introducing the projector P̂_S over S and its exterior, the boundary conditions read P̂_S Ψ^(n) = 0 for all microstates (n). Using P̂_S² = P̂_S, we can see that this condition imposes an extra constraint on the statistical ensemble, <P̂_S> = Tr(D̂ P̂_S) = 0. The density matrix then reads

D̂ = Z^(-1) exp(-Σ_ℓ λ_ℓ Â_ℓ - λ_S P̂_S), (2)

which shows that the thermodynamics of the system depends on the whole surface S. For the very same global features, such as the same average particle density or energy, we will have as many different thermodynamics as boundary conditions. More important, to specify the density matrix the projector P̂_S has to be exactly known, and this is in fact impossible. The nature of P̂_S is intrinsically different from that of the usual global observables Â_ℓ. Not only is it a many-body operator, but P̂_S requires the exact knowledge of each point of the boundary surface, while no or few parameters are sufficient to define the Â_ℓ. This infinity of points corresponds to an infinite amount of information to be known in order to define the density matrix (2). This requirement is in contradiction with the statistical mechanics principle of minimum information.
Thus eq. (2) is unphysical, and the same is true for the standard (N, T, V) or (N, E, V) ensembles when dealing with finite unconfined systems.

B. Incomplete knowledge on the boundaries
One way to get around these difficulties is to take into account our incomplete knowledge of the boundaries by introducing a hierarchy of observables describing the size and shape of the matter distribution.
For example, if only the average system size <R̂²> is known, the minimum information principle implies

D̂ = Z^(-1) exp(-βĤ - λ_R² R̂²), (3)

which is akin to an isobar canonical ensemble, since the additional Lagrange multiplier λ_R² imposing the size information has the dimension of a pressure when divided by a typical length scale R₀ and by the inverse temperature β.

A typical application of this concept is the so-called freeze-out hypothesis in nuclear collisions: at a given time t₀ the main evolution (i.e. the main entropy creation) is assumed to stop, and partitions are supposed to be essentially frozen. Typically thermal and chemical equilibrium is assumed, meaning that the information at t₀ on the energetics and particle numbers is limited to the observables <Ĥ> and <N̂_f> for the different species f [8,9]. Freeze-out occurs when the system has expanded to a finite size; then at least one measure of the system's compactness should be included. The limited knowledge of the system extension leads to a minimum biased density matrix given by eq. (3) [10].

III. MULTIPLE TIME STATISTICAL ENSEMBLES
As soon as one of the constraining observables Â_ℓ is not a constant of the motion, the statistical ensemble (1) is not stationary. A single-time description may still look appropriate in the freeze-out configuration discussed in the last section. Indeed in many physical cases one can clearly identify a specific time at which the information concentrated in a given observable is frozen (i.e. the observable expectation value ceases to evolve). However this freeze-out time is in general fluctuating and different for different observables. For example, for ultrarelativistic heavy ion reactions two freeze-out times are discussed [9], one for the chemistry and one for the thermal agitation. We therefore need to define a statistical ensemble constrained by information coming from different times.
Let us now suppose that the different pieces of information on the system, <Â_ℓ>, are known at different times t_ℓ: <Â_ℓ>_{t_ℓ} = Tr(D̂(t_ℓ) Â_ℓ). A generalization of the Gibbs idea is that at a time t the least biased state of the system is the maximum of the Shannon entropy, considering all these pieces of information as constraints.
The maximization of the entropy at time t with the various constraints <Â_ℓ>_{t_ℓ} known at former times t_ℓ corresponds to the free maximization of

S - Σ_ℓ λ_ℓ <Â_ℓ>_{t_ℓ}, (4)

where the λ_ℓ are the Lagrange parameters associated with all the constraints. This maximization leads to a density matrix which can be considered as a generalization to time-dependent processes of the Gibbs ensembles (1).
Let us consider the case of a deterministic evolution,

dD̂/dt = {Ĥ, D̂}, (5)

where {·,·} is the Poisson bracket in classical physics and the commutator divided by iħ in quantum physics. The minimum biased density matrix is given by [11]

D̂(t) = Z^(-1) exp(-Σ_ℓ λ_ℓ Â'_ℓ), (6)

where the Â'_ℓ represent the time evolution of the constraining observables Â_ℓ in the Heisenberg representation, Â'_ℓ = Â_ℓ(t_ℓ - t). Eq. (6) can be interpreted as the introduction of additional constraints B̂^(p)_ℓ and additional Lagrange parameters ν^(p)_ℓ associated with the time evolution of the system,

D̂(t) = Z^(-1) exp(-Σ_ℓ Σ_p ν^(p)_ℓ B̂^(p)_ℓ), with B̂^(0)_ℓ = Â_ℓ, B̂^(p+1)_ℓ = {B̂^(p)_ℓ, Ĥ}, ν^(p)_ℓ = λ_ℓ (t_ℓ - t)^p / p!. (7)

Eq. (7) is an exact solution of the complete many-body evolution problem eq. (5) with a minimum information hypothesis at the final time t, having made a few observations <Â_ℓ> at previous times t_ℓ, which shows the wide domain of applicability of information theory. A generalization of this theory to non-deterministic evolutions can be found in ref. [11]. We can see from eq. (7) that in general an infinite amount of information, i.e. an infinite number of Lagrange multipliers, is needed if we want to follow the system evolution for a long time. However, different interesting physical situations exist for which the series can be analytically summed up. In this case a limited information (the knowledge of a small number of average observables) is sufficient to describe the whole density matrix at any time, under the unique hypothesis that the information was finite at a given time.
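The summability of the series hinges on the bracket hierarchy B̂^(p+1) = {B̂^(p), Ĥ} terminating. A small classical Python check (one particle, arbitrary made-up phase-space values; illustrative only) verifies that for a free Hamiltonian H = p²/2m the Taylor series of r² truncated at second order already reproduces the exact Heisenberg evolution, i.e. all brackets beyond p = 2 vanish:

```python
import random

m = 1.0  # particle mass, arbitrary units

def heisenberg_r2(r, p, tau):
    """Exact free evolution: r(tau) = r + tau*p/m, then squared."""
    return sum((ri + tau * pi / m) ** 2 for ri, pi in zip(r, p))

def truncated_series_r2(r, p, tau):
    """A + tau*B1 + (tau^2/2)*B2 with B1 = {r^2, H} = 2 r.p/m and
    B2 = {B1, H} = 2 p^2/m^2; since {B2, H} = 0 the truncation is exact."""
    r2 = sum(ri * ri for ri in r)
    b1 = 2.0 * sum(ri * pi for ri, pi in zip(r, p)) / m
    b2 = 2.0 * sum(pi * pi for pi in p) / m ** 2
    return r2 + tau * b1 + 0.5 * tau ** 2 * b2

random.seed(0)
r = [random.gauss(0.0, 1.0) for _ in range(3)]
p = [random.gauss(0.0, 1.0) for _ in range(3)]
# agreement for several times, positive and negative
err = max(abs(heisenberg_r2(r, p, tau) - truncated_series_r2(r, p, tau))
          for tau in (-3.0, 0.5, 2.5))
```

The two expressions agree to machine precision for any tau, which is the discrete analogue of the statement that a quadratic observable evolved with a quadratic Hamiltonian needs only a finite number of Lagrange multipliers.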

IV. THE DYNAMICS OF THE EXPANSION
Let us now apply the above formalism to transient unconfined systems. Let us concentrate on a scenario often encountered experimentally: a finite system of loosely interacting particles with a finite extension in an open space. We shall assume that at a given freeze-out time t₀ the system can be modeled as a non-interacting ensemble of n = 1, ..., N particles or fragments, and that a definite value of the mean square radius <R̂²> (with R̂² = Σ_n r̂_n²) characterizes the ensemble of states. Then we have to introduce the constraining observable Â = R̂², associated with a Lagrange multiplier λ₀. If time is not taken into account, the maximum entropy solution is given by

D̂₀ = Z^(-1) exp(-βĤ - λ₀ R̂²). (8)

Eq. (8) is akin to a system of non-interacting particles trapped in a harmonic oscillator potential with a spring constant k = 2λ₀/β. From the partition sum the equations of state are easily derived; for the ideal gas they read <Ĥ> = 3N/(2β) and <R̂²> = 3N/(2λ₀). Since λ₀R̂² is not an external confining potential but only a finite size constraint, the minimum biased distribution (8) is not stationary. To take into account the time evolution, we must introduce the additional constraining observables

B̂^(1)_R = {R̂², Ĥ} = (1/m) Σ_n (r̂_n·p̂_n + p̂_n·r̂_n), B̂^(2)_R = {B̂^(1)_R, Ĥ} = (2/m²) Σ_n p̂_n².

Since {B̂^(2)_R, Ĥ} = 0, all the other B̂^(p)_R with p > 2 are zero. The density matrix is given by

D̂(t) = Z^(-1) exp(-β_eff(t) Ĥ + (ν₀(t)/2) Σ_n (r̂_n·p̂_n + p̂_n·r̂_n) - λ₀ R̂²), (9)

with

β_eff(t) = β + 2λ₀(t - t₀)²/m ; ν₀(t) = 2λ₀(t - t₀)/m. (10)

The density matrix (9) can be interpreted as a radially expanding ideal gas. Indeed the distribution can be written as

D̂(t) = Z^(-1) exp(-Σ_n [ β_eff(t) (p̂_n - mα(t) r̂_n)² / (2m) + λ_eff(t) r̂_n² ]), (11)

where α(t) = ν₀(t)/β_eff(t) represents a Hubblian factor and the confining Lagrange multiplier is transformed into

λ_eff(t) = λ₀ - m ν₀²(t)/(2 β_eff(t)) = λ₀ β/β_eff(t). (12)

The term mα(t) r̂_n correcting the momentum in eq. (11) can be interpreted as a momentum produced by a radial velocity α(t) r̂_n. This proportionality of the velocity to r̂_n shows that the motion is self-similar. As a consequence, when this collective motion is subtracted from the particle momentum, the density matrix (11) corresponds at any time to a standard equilibrium (8) in the local rest frame.
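This reasoning can be checked numerically. The Monte Carlo sketch below (illustrative parameter values in arbitrary units, not taken from the paper's simulations) samples the freeze-out ensemble (8), free-streams each particle, and compares the measured mean square radius per particle with the prediction 3/(2λ_eff(t)) obtained from eqs. (10)-(12):

```python
import random

random.seed(42)
m, beta, lam0 = 1.0, 1.0, 0.5   # illustrative freeze-out parameters, arbitrary units
N = 20000                        # Monte Carlo sample size

def sample_frozen_state():
    """Draw (r, p) for one particle from the Gaussian ensemble (8):
    each component r ~ N(0, 1/(2*lam0)) and p ~ N(0, m/beta)."""
    r = [random.gauss(0.0, (1.0 / (2.0 * lam0)) ** 0.5) for _ in range(3)]
    p = [random.gauss(0.0, (m / beta) ** 0.5) for _ in range(3)]
    return r, p

def mean_r2_after_streaming(t):
    """Free-stream every particle for a time t and measure <r^2> per particle."""
    tot = 0.0
    for _ in range(N):
        r, p = sample_frozen_state()
        tot += sum((ri + t * pi / m) ** 2 for ri, pi in zip(r, p))
    return tot / N

t = 2.0
beta_eff = beta + 2.0 * lam0 * t ** 2 / m      # eq. (10)
lam_eff = lam0 * beta / beta_eff               # eq. (12)
predicted = 3.0 / (2.0 * lam_eff)              # <r^2> per particle from eq. (11)
measured = mean_r2_after_streaming(t)
```

The agreement is exact up to Monte Carlo noise: the free evolution of a Gaussian phase-space distribution stays Gaussian, which is the sampling counterpart of the closed algebra of constraints.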
In this case the infinite information which is a priori needed to follow the time evolution of the density matrix according to eq. (7) reduces to the three observables R̂² = Σ_n r̂_n², P̂² = Σ_n p̂_n², and Σ_n (r̂_n·p̂_n + p̂_n·r̂_n). Indeed these operators form a closed Lie algebra, and the exact evolution of (11) preserves its algebraic structure. The description of the time evolution of unconfined finite systems has thus introduced a new phenomenon: the expansion. One should then consider a more general equilibrium of a finite-size expanding system, with β′, α and λ′₀ as free parameters. Then, if the observed minimum biased distribution at time t originates from a confined system at time t₀, the three parameters β′, α and λ′₀ are linked to the time t₀, the initial temperature β⁻¹ and the initial λ₀ by equations (10) and (12) [12].
The important consequence is that radial flow is a necessary ingredient of any statistical description of unconfined finite systems: the static (canonical or microcanonical) Gibbs ansatz in a confining box which is often employed [8] misses this crucial point. On the other hand, if a radial flow is observed in the experimental data, the formalism we have developed allows one to associate this flow observation with a distribution at a former time when flow was absent. This initial distribution corresponds to a standard static Gibbs equilibrium in a confining harmonic potential, i.e. to an isobar ensemble.

V. NUMERICAL SIMULATIONS
As we have already mentioned in section IV, under the hypothesis of negligible interaction between the system's constituents the expansion is self-similar, implying that the situation is equivalent to a standard Gibbs equilibrium in the local rest frame. In the expanding ensemble the total average kinetic energy per particle is simply the sum of the thermal energy e_th = 3/(2β) and the radial flow energy e_fl = mα²<r²>/2.
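The additivity of the two terms can be illustrated with a short Monte Carlo sketch (made-up parameter values in arbitrary units): thermal momenta are drawn in the local rest frame and boosted by the self-similar velocity field α r̂_n, and the resulting kinetic energy per particle is compared with e_th + e_fl:

```python
import random

random.seed(1)
m, beta, alpha = 1.0, 1.0, 0.4   # made-up values, arbitrary units
sigma_r = 1.5                    # per-component spread of particle positions
N = 20000

tot_kin, tot_r2 = 0.0, 0.0
for _ in range(N):
    r = [random.gauss(0.0, sigma_r) for _ in range(3)]
    # thermal momentum in the local rest frame plus the self-similar boost m*alpha*r
    p = [random.gauss(0.0, (m / beta) ** 0.5) + m * alpha * ri for ri in r]
    tot_kin += sum(pi * pi for pi in p) / (2.0 * m)
    tot_r2 += sum(ri * ri for ri in r)

e_kin = tot_kin / N                          # measured kinetic energy per particle
e_th = 3.0 / (2.0 * beta)                    # thermal part
e_fl = 0.5 * m * alpha ** 2 * (tot_r2 / N)   # flow part m*alpha^2*<r^2>/2
```

The cross term between the thermal momentum and the collective boost averages to zero, so e_kin converges to e_th + e_fl; this is the decomposition used in the flow analyses quoted below.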
This scenario is often invoked in the literature [8] to justify the treatment of flow as a collective radial velocity superimposed on thermal motion; however eq. (11) also contains an additional term ∝ r̂_n² which corresponds to an outgoing pressure. The phase diagram and fragment observables are therefore expected to be modified by the presence of flow even in the self-similar approximation. To quantify this statement, we have performed calculations in the Lattice Gas Model [12], and the results are shown in Figure 1.
The effective pressure λ_eff as well as the associated average volume are shown in the upper left part of figure 1 as a function of the collective radial flow for a given pressure and temperature. The Lagrange parameter λ_eff being a decreasing function of α, the critical point is moved towards higher pressures in the presence of flow [17]. However one can see that the effect is very small up to mα ≈ 0.6 (which corresponds to a flow contribution ∆p²/p² = e_fl/e_th ≈ 67%). In this regime the cluster size distributions displayed in the upper right part of figure 1 are only slightly affected. On the other hand, if the collective flow overcomes a threshold value ∆p²/p² ≈ 100%, the average volume shows an exponential increase and the outgoing flow pressure leads to a complete fragmentation of the system (dashed grey line in the lower part of fig. 1). We can also observe that an oriented motion is systematically less effective than a random one at breaking up the system. This is shown in the lower part of figure 1, which compares, for a given λ, distributions with and without radial flow at the same total deposited energy: for any value of radial flow, equilibrium in the standard microcanonical ensemble corresponds to more fragmented configurations. The above results are relevant for the experimental situation if and only if the inter-fragment interactions can be neglected when the system is still compact enough to bear pertinent information on the phase diagram. Indeed only in this case can the series (7) be analytically summed up and the expansion dynamics reduced to a self-similar flow [11]. The validity of the ideal gas approximation eq. (9) for the expansion dynamics is tested in Figure 2 [14] in the framework of classical molecular dynamics [15]. A Lennard-Jones system is initially confined in a small volume and subsequently expands freely in the vacuum.
We can see that after a first phase, of the order of 10 Lennard-Jones time units, in which interparticle interactions cannot be neglected, the time evolution predicted by eq. (9) is remarkably well fulfilled for all total energies. This result is due to the fact that the system's size and dynamics are dominated by the free particles, while deviations from a self-similar flow can be seen if the analysis is restricted to bound particles [14]. We expect eq. (9) to describe the system evolution even better if the degrees of freedom n are changed from particles to clusters, as suggested by the Fisher model of condensation [16].

VI. CONCLUSIONS
In this paper we have introduced an extension of Gibbs ensembles to time-dependent constraints. This formalism gives a statistical description of a system observed at a time at which the entropy has not yet reached its saturating value, as may be the case in intermediate energy heavy ion reactions [13]. Another physical application concerns systems for which the relevant observables pertain to different times, as in high energy nuclear collisions [9].
Our most important result is that any statistical description of a finite unbound system must necessarily contain a local collective velocity term. Indeed the knowledge of the average spatial extension of the system at a given time naturally produces a flow constraint at any successive time. Conversely, a collective flow measurement at a given time can be translated into information on the system density at a former time.