The Counterintuitive Behavior of High-Dimensional Gaussian Distributions
Date: 2018-03-13
Tags: statistics, math
We all love Gaussian distributions. In statistical modeling, we love additive Gaussian noise. In Bayesian inference, we have Gaussian priors. In deep learning, we initialize weights using Gaussian distributions. In theoretical derivations, we make Gaussian assumptions. We are familiar with what 2D and 3D Gaussian distributions look like:
Because we cannot easily visualize higher dimensional spaces, we have the impression that Gaussian samples should concentrate around the mean, and that the overall shape formed by these samples is a (hyper)ellipsoid determined by the covariance matrix.
In practice, we usually use Gaussian distributions with much higher dimensions. Does our intuition really hold for high-dimensional Gaussian distributions? Well, not really. Check the short write-up by Ferenc Huszár first:
Counterintuitively, Gaussian distributions look like soap bubbles in high-dimensional spaces. To see this, let us generate some standard normal samples ($x \sim \mathcal{N}(0, I_d)$) and plot the distributions of their norms ($\lVert x \rVert_2$):
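A minimal sketch of how such norm histograms could be generated (the dimensions, sample count, and plotting details are illustrative assumptions, not necessarily those used for the original figure):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_samples = 10_000

# Draw standard normal samples in several dimensions and plot the
# distribution of their Euclidean norms.
for d in [1, 2, 10, 100, 500]:
    x = rng.standard_normal((n_samples, d))  # x ~ N(0, I_d)
    norms = np.linalg.norm(x, axis=1)        # ||x||_2 for each sample
    plt.hist(norms, bins=100, density=True, histtype="step", label=f"d = {d}")

plt.xlabel(r"$\|x\|_2$")
plt.ylabel("density")
plt.legend()
plt.show()
```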
We can observe that for higher-dimensional Gaussian samples, the mode of the distribution of their norms shifts far away from the origin. This implies that most of these samples are concentrated around a sphere, forming a "soap bubble". In fact, for this simple case, we can mathematically derive the distribution of the norms. Given $x \sim \mathcal{N}(0, I_d)$, we have
$$\lVert x \rVert_2 \sim \chi_d.$$
Here $\chi_d$ denotes the chi distribution with $d$ degrees of freedom. The mode of $\chi_d$ is given by $\sqrt{d - 1}$ for $d \ge 1$, which explains why the samples are concentrated around a sphere in higher dimensions.
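As a quick sanity check (a sketch, assuming SciPy is available; the dimension $d = 100$ is an arbitrary choice), we can compare the empirical norms against the chi distribution with $d$ degrees of freedom:

```python
import numpy as np
from scipy.stats import chi

rng = np.random.default_rng(0)
d = 100
norms = np.linalg.norm(rng.standard_normal((100_000, d)), axis=1)

print("theoretical mode sqrt(d - 1):", np.sqrt(d - 1))  # ~9.95, far from the origin
print("empirical mean of the norms: ", norms.mean())
print("mean of the chi(d) density:  ", chi(d).mean())   # should closely match the line above
```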
This counterintuitive behavior has another implication: using the Euclidean distance for clustering in high-dimensional spaces may not be a good idea. Let us consider a simple example, where we have three clusters of Gaussian samples generated from the following three distributions:
If we project the samples along the first two dimensions, we obtain the following:
Clearly, these three 2D clusters can be easily identified by common clustering algorithms using the Euclidean distance. If we follow our original impression of Gaussian distributions, we may have the following thoughts: in higher-dimensional spaces, these three clusters are three concentrated balls just like in the 2D case, and we should be able to distinguish them using the Euclidean distance without any problem. However, due to the counterintuitive behavior of high-dimensional Gaussian distributions, these thoughts are no longer valid.
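The exact parameters of the three distributions are not reproduced here, so the sketch below uses hypothetical clusters whose means differ only along the first dimension (consistent with the remark later in this post that the useful information is concentrated along the first dimension); adjust the means and sample counts to match your own setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_clusters(d, n_per_cluster=500):
    """Three hypothetical Gaussian clusters whose means differ only along
    the first dimension; all remaining dimensions are pure noise."""
    data, labels = [], []
    for k, mu in enumerate([-5.0, 0.0, 5.0]):  # assumed means along the first dimension
        mean = np.zeros(d)
        mean[0] = mu
        data.append(rng.standard_normal((n_per_cluster, d)) + mean)
        labels.append(np.full(n_per_cluster, k))
    return np.vstack(data), np.concatenate(labels)

X, y = make_clusters(d=500)
```

Plotting `X[:, 0]` against `X[:, 1]` gives a picture like the 2D projection above.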
Let us first plot the distributions of Euclidean distances from these Gaussian samples to the origin:
When the number of dimensions is three, we can clearly observe three different distributions. However, as the number of dimensions increases, these three distributions shift away from the origin and merge into each other. When the number of dimensions is 500, the three distributions are hardly distinguishable.
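Using the hypothetical `make_clusters` sketch from above, these histograms can be reproduced roughly as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

for d in [3, 10, 100, 500]:
    X, y = make_clusters(d)            # hypothetical clusters from the sketch above
    dists = np.linalg.norm(X, axis=1)  # Euclidean distance of each sample to the origin
    plt.figure()
    for k in range(3):
        plt.hist(dists[y == k], bins=50, density=True, histtype="step", label=f"cluster {k}")
    plt.title(f"d = {d}")
    plt.xlabel("distance to the origin")
    plt.legend()
plt.show()
```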
We can further plot the distance matrix of these Gaussian samples as follows:
We can observe that in lower-dimensional spaces, the three clusters are clearly distinguishable. However, as the number of dimensions increases, the distance matrix gradually loses its structure. When the number of dimensions is set to 500, the pairwise distances are almost the same and the three clusters are almost indistinguishable. If we run clustering algorithms using the Euclidean distance in these cases, we will clearly get unsatisfactory results.
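A sketch of such a distance-matrix plot, again under the hypothetical cluster setup above (the sample count is kept small so the matrix remains readable):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform

X, y = make_clusters(d=500, n_per_cluster=100)  # hypothetical clusters from the sketch above
order = np.argsort(y)                           # group the samples by cluster
D = squareform(pdist(X[order]))                 # pairwise Euclidean distances

plt.imshow(D)
plt.colorbar(label="Euclidean distance")
plt.show()
```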
The above observations are also counterintuitive. We can clearly identify the three clusters when projecting the samples along the first two dimensions! Well, now you have learned the importance of dimensionality reduction. In fact, for our contrived example, the useful information is concentrated along the first dimension, while the remaining dimensions can be regarded as "noise dimensions". Instead of blindly applying clustering algorithms to the original data, we should apply PCA first. (Here we have Gaussian samples, so PCA naturally fits. When working with real-world samples, you may need non-linear dimensionality reduction methods, which leads to another interesting topic: manifold learning.)
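A sketch of the suggested workflow with scikit-learn (the number of retained components and the clustering algorithm are illustrative choices):

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = make_clusters(d=500)  # hypothetical clusters from the sketch above

# Reduce the data to a few principal components before clustering.
X_reduced = PCA(n_components=2).fit_transform(X)
pred = KMeans(n_clusters=3, n_init=10).fit_predict(X_reduced)
```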
For further reading, I recommend the following:
- Counterintuitive Properties of High Dimensional Space
- C. C. Aggarwal, A. Hinneburg, and D. A. Keim, "On the surprising behavior of distance metrics in high dimensional space," in International Conference on Database Theory, 2001, pp. 420–434.
The Jupyter notebook generating all the figures in this article can be downloaded here.