Normal-Distribution

Mahalanobis distance between two bivariate distributions with different covariances

  • March 5, 2011

The question is pretty much contained in the title. What is the Mahalanobis distance between two distributions with different covariance matrices? What I have found so far assumes the same covariance $\Sigma$ for both distributions, i.e., something of this sort:

$$D_M(\mu_1, \mu_2) = \sqrt{(\mu_1 - \mu_2)^\top \, \Sigma^{-1} \, (\mu_1 - \mu_2)}$$

What if I have two different $\Sigma$s?
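For concreteness, a minimal Python sketch of that shared-$\Sigma$ formula (the means and the covariance below are made-up illustration values):

```python
import numpy as np

def mahalanobis(mu1, mu2, cov):
    """Mahalanobis distance between two means under one shared covariance."""
    diff = mu1 - mu2
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

mu1 = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])  # twice the spread along x as along y

# Two displacements of equal Euclidean length get different distances:
# the high-variance x direction is down-weighted by the inverse covariance.
print(mahalanobis(mu1, np.array([3.0, 0.0]), cov))  # 1.5
print(mahalanobis(mu1, np.array([0.0, 3.0]), cov))  # 3.0
```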

Note: the problem is this: there are two bivariate distributions that have the same dimensions but are rotated and translated with respect to each other (sorry, I come from a pure mathematics background, not a statistics one). I need to measure their degree of overlap/distance.

**Update:** What might or might not be implicit in what I'm asking is that I need a distance between the means of the two distributions. I know where the means are, but since the two distributions are rotated with respect to one another, I need to assign different weights to different orientations, so a simple Euclidean distance between the means does not work. As I understand it, the Mahalanobis distance cannot be used here if the distributions are differently shaped (apparently it works for two multivariate normal distributions with identical covariances, but not in the general case). Is there a good measure that weights different orientations differently in this way?

There are many notions of distance between probability distributions; which one to use depends on your goals. Total variation distance is a natural way of measuring overlap between distributions. If you are working with multivariate normals, the Kullback-Leibler divergence is mathematically convenient. Though it is not actually a distance (it is not symmetric and does not obey the triangle inequality), it upper-bounds the total variation distance; see Pinsker's inequality.
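To make that concrete: the KL divergence between two multivariate normals has a closed form, and Pinsker's inequality bounds total variation by $\mathrm{TV} \le \sqrt{\mathrm{KL}/2}$. A minimal sketch (the helper name `kl_mvn` and the example distributions are my own illustration, not from the original answer):

```python
import numpy as np

def kl_mvn(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ), closed form for multivariate normals."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    trace_term = np.trace(cov1_inv @ cov0)
    quad_term = diff @ cov1_inv @ diff
    # log det(cov1) - log det(cov0), computed stably via slogdet
    logdet_term = np.linalg.slogdet(cov1)[1] - np.linalg.slogdet(cov0)[1]
    return 0.5 * (trace_term + quad_term - k + logdet_term)

# Two bivariate normals, rotated and translated relative to each other
mu0, cov0 = np.array([0.0, 0.0]), np.array([[2.0, 0.0], [0.0, 0.5]])
mu1, cov1 = np.array([1.0, 1.0]), np.array([[0.5, 0.0], [0.0, 2.0]])

kl = kl_mvn(mu0, cov0, mu1, cov1)
print(f"KL = {kl:.4f}, TV <= {np.sqrt(kl / 2):.4f}")  # Pinsker's bound
```

Since the KL divergence is asymmetric, if you need a symmetric number you can average the two directions (the Jeffreys divergence), at the cost of the direct probabilistic interpretation.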

Source: https://stats.stackexchange.com/questions/7912
