Negative probability

The probability of the outcome of an experiment is never negative, but quasiprobability distributions can be defined that allow a negative probability, or quasiprobability, for some events. These distributions may apply to unobservable events or conditional probabilities.

Physics and mathematics

In 1942, Paul Dirac wrote the paper "The Physical Interpretation of Quantum Mechanics",[1] in which he introduced the concept of negative energies and negative probabilities.
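One of the simplest concrete examples, Székely's "half-coin" described below, can be checked numerically: its quasi-distribution assigns negative values to the positive even outcomes, yet convolving it with itself yields an ordinary fair coin. A minimal sketch (the truncation length N is an arbitrary choice, since the true half-coin has infinitely many sides):

```python
import math

N = 64  # arbitrary truncation of the infinitely many sides

# A fair coin has generating function (1 + x)/2; a half-coin has
# generating function sqrt((1 + x)/2), whose Taylor coefficients c[k]
# are the quasi-probabilities of outcome k.  By the binomial series,
# c[k] = C(1/2, k) / sqrt(2), computed here by the ratio recurrence.
c = [1.0 / math.sqrt(2.0)]
for k in range(1, N):
    c.append(c[-1] * (0.5 - (k - 1)) / k)

# The positive even outcomes carry negative quasi-probabilities:
assert c[1] > 0 and c[3] > 0
assert c[2] < 0 and c[4] < 0

# Flipping two independent half-coins = convolving the quasi-distribution
# with itself; the result is an ordinary fair coin on {0, 1}.
s = [sum(c[j] * c[n - j] for j in range(n + 1)) for n in range(N)]
```

Up to floating-point error, s[0] and s[1] both equal 1/2 and every other entry vanishes, which is the sense in which two half-coins make a complete coin.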
The idea of negative probabilities later received increased attention in physics, particularly in quantum mechanics. Richard Feynman argued[2] that no one objects to using negative numbers in calculations: although "minus three apples" is not a valid concept in real life, negative money is. He argued that, in the same way, negative probabilities, as well as probabilities above unity, could be useful intermediate quantities in probability calculations. Negative probabilities have since been proposed to resolve several problems and paradoxes.[3]

Half-coins provide simple examples of negative probabilities. These strange coins were introduced in 2005 by Gábor J. Székely.[4] A half-coin has infinitely many sides numbered 0, 1, 2, ..., and the positive even numbers carry negative probabilities. Two half-coins make a complete coin in the sense that if two half-coins are flipped, the sum of the outcomes is 0 or 1, each with probability 1/2, exactly as if a single fair coin had been flipped.

In "Convolution quotients of nonnegative definite functions"[5] and "Algebraic Probability Theory",[6] Imre Z. Ruzsa and Gábor J. Székely proved that if a random variable X has a signed (quasi) distribution in which some of the probabilities are negative, then one can always find two random variables, Y and Z, with ordinary (not signed) distributions such that X and Y are independent and X + Y = Z in distribution. Thus X can always be interpreted as the "difference" of two ordinary random variables, Z and Y. If Y is interpreted as a measurement error of X and the observed value is Z, then the negative regions of the distribution of X are masked, or shielded, by the error Y.

Another example is the Wigner distribution in phase space, introduced by Eugene Wigner in 1932 to study quantum corrections, which often takes negative values.[7] For this reason it later became better known as the Wigner quasiprobability distribution. In 1945, M. S. Bartlett worked out the mathematical and logical consistency of such negative values.[8] The Wigner distribution function is routinely used in physics today and is a cornerstone of phase-space quantization. Its negative features are an asset to the formalism, and often indicate quantum interference. The negative regions of the distribution are shielded from direct observation by the quantum uncertainty principle: the moments of such a non-positive-semidefinite quasiprobability distribution are highly constrained, preventing direct measurement of the negative regions. Nevertheless, these regions contribute negatively and crucially to the expected values of observable quantities computed through such distributions.

Engineering

The concept of negative probabilities has also been proposed for reliable facility location models in which facilities are subject to negatively correlated disruption risks, and facility locations, customer allocations, and backup service plans are determined simultaneously.[9][10] Li et al.[11] proposed a virtual-station structure that transforms a facility network with positively correlated disruptions into an equivalent one with added virtual supporting stations subject to independent disruptions, reducing a problem with correlated disruptions to one without. Xie et al.[12] later showed that negatively correlated disruptions can be addressed by the same modeling framework, except that a virtual supporting station may now be disrupted with a "failure propensity" that can fall outside the conventional range [0, 1].
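A toy calculation illustrates why negative correlation forces a station "probability" outside [0, 1]. The decomposition below is a minimal sketch under an assumed shared-station structure (each facility fails if a shared virtual station or its own private station fails, all stations independent); it is an illustration, not necessarily the exact construction of Xie et al.:

```python
# Observable facility statistics (hypothetical numbers):
p_i, p_j = 0.3, 0.3   # marginal disruption probabilities
p_both = 0.05         # joint disruption probability; 0.05 < 0.09 = p_i * p_j,
                      # so the disruptions are negatively correlated

# Matching the marginals and the joint under the shared-station assumption
# forces the shared station's "probability" to be
#   s = (p_both - p_i*p_j) / P(both facilities up),
# which is negative exactly when the correlation is negative.
s = (p_both - p_i * p_j) / (1 - p_i - p_j + p_both)
a = (p_i - s) / (1 - s)  # private-station probabilities: ordinary, in [0, 1]
b = (p_j - s) / (1 - s)

assert s < 0                  # the shared station needs a negative probability
assert 0 <= a <= 1 and 0 <= b <= 1
# The decomposition reproduces the observable facility statistics exactly:
assert abs(s + (1 - s) * a - p_i) < 1e-12
assert abs(s + (1 - s) * a * b - p_both) < 1e-12
```

The facility-level probabilities remain ordinary; only the unobservable intermediary station carries a value outside [0, 1], matching the quasi-probability interpretation discussed next.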
This finding paves the way for using compact mixed-integer mathematical programs to optimally design reliable service facility locations under site-dependent and positive, negative, or mixed disruption correlations.[13] The "propensity" concept proposed in Xie et al.[12] turns out to be what Feynman and others referred to as a "quasi-probability": when a quasi-probability is larger than 1, its complement, 1 minus that value, is a negative probability. In the reliable facility location context, the physically verifiable observations are the facility disruption states, whose probabilities are guaranteed to lie in the conventional range [0, 1]; there is no direct information on the station disruption states or their corresponding probabilities. Hence the disruption "probabilities" of the stations, interpreted as probabilities of imagined intermediary states, may exceed unity, and are therefore referred to as quasi-probabilities.

Finance

Negative probabilities have more recently been applied to mathematical finance. In quantitative finance most probabilities are not real probabilities but pseudo-probabilities, often the so-called risk-neutral probabilities.[14] These are not real probabilities but theoretical "probabilities" under a series of assumptions that simplify calculations, and they can be negative in certain cases, as first pointed out by Espen Gaarder Haug in 2004.[15] A rigorous mathematical definition of negative probabilities and their properties was derived by Mark Burgin and Gunter Meissner (2011), who also showed how negative probabilities can be applied to financial option pricing.[14]

Some problems in machine learning use graph- or hypergraph-based formulations in which edges are assigned weights, most commonly positive. A positive weight from one vertex to another can be interpreted, in a random walk, as the probability of moving from the former vertex to the latter.
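This random-walk reading amounts to row-normalizing each vertex's outgoing weights into transition probabilities; once a weight is negative, the same normalization can produce values outside [0, 1]. A small sketch on a hypothetical toy graph:

```python
# Hypothetical toy graph: positive edge weights per vertex.
weights = {
    "a": {"b": 2.0, "c": 1.0},
    "b": {"a": 1.0, "c": 3.0},
    "c": {"a": 1.0, "b": 1.0},
}

# Row normalization turns weights into random-walk transition probabilities.
transitions = {
    u: {v: w / sum(nbrs.values()) for v, w in nbrs.items()}
    for u, nbrs in weights.items()
}

# Each row is a genuine probability distribution over the neighbors:
assert all(abs(sum(row.values()) - 1.0) < 1e-12 for row in transitions.values())

# In a signed graph (e.g. correlation clustering) a weight may be negative;
# the same normalization then yields entries outside [0, 1]:
weights["a"]["c"] = -1.0
row = {v: w / sum(weights["a"].values()) for v, w in weights["a"].items()}
# row == {"b": 2.0, "c": -1.0}: the row still sums to 1, but "c" carries
# a negative "probability".
```

This is why signed weights are usually read as correlations rather than probabilities, as discussed next.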
In a Markov chain, the probability of each event depends only on the state attained in the previous event. Some machine learning problems, e.g. correlation clustering, naturally deal with a signed graph, where the edge weight indicates whether two nodes are similar (correlated, with a positive edge weight) or dissimilar (anticorrelated, with a negative edge weight). Interpreting a graph weight as the probability that the two vertices are related is here replaced by interpreting it as a correlation, which can of course be legitimately negative as well as positive. Positive and negative graph weights are uncontroversial when interpreted as correlations rather than probabilities, but they raise analogous issues, e.g. challenges in normalizing the graph Laplacian and in explaining spectral clustering for signed graph partitioning.[16]

Similarly, in spectral graph theory, the eigenvalues of the Laplacian matrix represent frequencies and its eigenvectors form what is known as a graph Fourier basis, substituting for the classical Fourier transform in graph-based signal processing. In imaging applications, the graph Laplacian is formulated analogously to the anisotropic diffusion operator, where a Gaussian-smoothed image is interpreted as a single time slice of the solution of the heat equation with the original image as its initial condition. A negative graph weight would correspond to a negative conductivity in the heat equation, stimulating heat concentration at the graph vertices connected by the edge, rather than the normal heat dissipation. Although negative heat conductivity is unphysical, the effect is useful for edge-enhancing image smoothing, e.g. sharpening corners of one-dimensional signals, when used in graph-based edge-preserving smoothing.[17]
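The edge-enhancing effect of a negative weight can be sketched on a one-dimensional signal: one explicit Euler step of graph diffusion blurs a jump when all weights are positive, but sharpens it when the weight across the jump is negative. The graph, signal, and step size below are illustrative choices:

```python
def heat_step(u, w, tau=0.1):
    """One explicit Euler step of graph diffusion, u <- u - tau * L u,
    where (L u)[i] = sum over neighbors j of w[i,j] * (u[i] - u[j])."""
    out = list(u)
    for (i, j), wij in w.items():
        out[i] -= tau * wij * (u[i] - u[j])
        out[j] -= tau * wij * (u[j] - u[i])
    return out

# A 1-D signal with a jump between nodes 2 and 3, on a 6-node path graph.
u = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

# Ordinary positive weights: diffusion blurs the jump (normal dissipation).
w_pos = {(i, i + 1): 1.0 for i in range(5)}
blurred = heat_step(u, w_pos)

# A negative weight across the jump ("negative conductivity") concentrates
# values on either side instead, enhancing the edge.
w_neg = dict(w_pos)
w_neg[(2, 3)] = -1.0
sharpened = heat_step(u, w_neg)

assert blurred[3] - blurred[2] < 1.0      # jump shrinks: smoothing
assert sharpened[3] - sharpened[2] > 1.0  # jump grows: edge enhancement
```

With these numbers the jump of height 1 shrinks to 0.8 under positive weights and grows to 1.2 under the negative weight, the discrete analogue of the edge-enhancing behavior described above.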
References