Vectorizing the Computation of MUSIC Spectrum
MUtiple SIgnal Classification (MUSIC) is a widely used subspace-based direction finding algorithm in array signal processing. MUSIC works in both 1D and multi-dimensional direction finding problems. Given the noise subspace estimate, $\hat{\mathbf{E}}_\mathrm{n}$, and the search grid of size $n$, the MUSIC spectrum is computed using the following formula:

$$P_\mathrm{MUSIC}(\theta_i) = \frac{1}{\mathbf{a}^H(\theta_i) \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}(\theta_i)}, \quad i = 1, 2, \ldots, n,$$

where $\mathbf{a}(\theta_i)$ is the steering vector of the $i$-th grid point.
Let $\mathbf{A} = [\mathbf{a}(\theta_1), \mathbf{a}(\theta_2), \ldots, \mathbf{a}(\theta_n)]$ (which is actually the "steering matrix" of the search grid $\{\theta_1, \theta_2, \ldots, \theta_n\}$). A straightforward implementation in Python would be:
```python
# Assume A and En are already computed. We also omit the hat over En in the
# code.
# Output spectrum.
sp = np.zeros((n,))
for i in range(n):
    sp[i] = 1.0 / (A[:, i].T.conj() @ (En @ En.T.conj()) @ A[:, i]).real
```
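The snippet above assumes `A` and `En` already exist. For a concrete picture, here is a hedged sketch of producing them for a uniform linear array; the array geometry, parameter values, and signal model are all illustrative assumptions, not part of the derivation:

```python
import numpy as np

# Illustrative setup (all values are assumptions): a uniform linear array
# with half-wavelength spacing, two sources, and a coarse search grid.
m, k, n = 8, 2, 180                     # sensors, sources, grid size
grid = np.linspace(-np.pi / 2, np.pi / 2, n)

def steering(theta):
    # ULA steering vector(s) for angle(s) theta (spacing = 0.5 wavelength).
    return np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(theta)))

A = steering(grid)                      # m x n steering matrix of the grid

# Simulate snapshots from sources at -20 and 30 degrees plus noise.
rng = np.random.default_rng(0)
doas = np.deg2rad([-20.0, 30.0])
S = rng.standard_normal((k, 200)) + 1j * rng.standard_normal((k, 200))
X = steering(doas) @ S + 0.1 * (rng.standard_normal((m, 200))
                                + 1j * rng.standard_normal((m, 200)))

# Noise subspace estimate: eigenvectors of the sample covariance with the
# m - k smallest eigenvalues (np.linalg.eigh sorts them in ascending order).
R = X @ X.conj().T / X.shape[1]
_, E = np.linalg.eigh(R)
En = E[:, : m - k]
```

With `A` and `En` built this way, the loop above produces a spectrum whose peaks sit near the true directions of arrival.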
This implementation, while functional, is not very efficient for the following reasons:

- A for-loop is used where vectorization is possible;
- $\hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H$ is repeatedly computed.
Note: The repeated computation can be avoided by caching the result of $\hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H$. However, the vectorized implementation does not need caching at all, and it deals with a smaller matrix, which further speeds up the computation.
Now we derive the vectorized implementation. First, $\mathbf{a}^H \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}$ is actually $(\hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a})^H (\hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a})$. Let $\mathbf{v} = \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}$. We have

$$\mathbf{a}^H \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a} = \mathbf{v}^H \mathbf{v} = \|\mathbf{v}\|^2.$$
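The identity above is easy to verify numerically. A quick sanity check (the dimensions and random data here are arbitrary assumptions for illustration):

```python
import numpy as np

# Check that a^H (En En^H) a equals ||En^H a||^2 on random complex data.
rng = np.random.default_rng(0)
m, k = 20, 6
a = rng.standard_normal(m) + 1j * rng.standard_normal(m)
En = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))

lhs = (a.conj() @ (En @ En.conj().T) @ a).real  # quadratic form
v = En.conj().T @ a                              # v = En^H a
rhs = np.linalg.norm(v) ** 2                     # ||v||^2

assert np.isclose(lhs, rhs)
```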
Given a complex vector $\mathbf{v}$, we have $\mathbf{v}^H \mathbf{v} = \mathbf{1}^T (\mathbf{v}^* \odot \mathbf{v})$, where $\mathbf{v}^*$ is the conjugate of $\mathbf{v}$, $\odot$ denotes the Hadamard product (element-wise product), and $\mathbf{1}$ is a $k \times 1$ vector of ones. Therefore,

$$\mathbf{a}^H \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a} = \mathbf{1}^T (\mathbf{v}^* \odot \mathbf{v}),$$

where $k$ is the number of sources (the number of columns in $\hat{\mathbf{E}}_\mathrm{n}$).
Let us define $\mathbf{p}$ as

$$\mathbf{p} = \mathbf{1}^T (\mathbf{V}^* \odot \mathbf{V}),$$

where $\mathbf{V} = \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{A} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n]$. Once we get $\mathbf{p}$, we just need to compute the reciprocals of its elements to obtain the final MUSIC spectrum.
Now let us translate the above steps into Python code:
- Compute $\mathbf{V} = \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{A}$: use `V = En.T.conj() @ A`.
- Compute $\mathbf{V}^* \odot \mathbf{V}$: use `V * V.conj()`.
- Compute $\mathbf{p} = \mathbf{1}^T (\mathbf{V}^* \odot \mathbf{V})$: use `np.sum(V * V.conj(), axis=0).real`. Note that `np.sum(..., axis=0)` just computes the column-wise sums. While $\mathbf{p}$ is real mathematically, we need to force real output here due to floating-point errors.
- Compute the reciprocals of the elements in $\mathbf{p}$: use `np.reciprocal()`.
Combining them together we get the vectorized implementation:
```python
# Assume A and En are already computed. We also omit the hat over En in the
# code.
V = En.T.conj() @ A
sp = np.reciprocal(np.sum(V * V.conj(), axis=0).real)
```
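Before benchmarking, it is worth confirming that the vectorized version agrees with the naive loop. A quick self-contained check (the random data and dimensions are assumptions for illustration):

```python
import numpy as np

# Verify element-wise agreement between the naive loop and the vectorized
# formula on random complex data.
rng = np.random.default_rng(0)
m, k, n = 20, 6, 360
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
En = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))

# Naive loop.
sp_loop = np.zeros((n,))
for i in range(n):
    sp_loop[i] = 1.0 / (A[:, i].conj() @ (En @ En.conj().T) @ A[:, i]).real

# Vectorized.
V = En.conj().T @ A
sp_vec = np.reciprocal(np.sum(V * V.conj(), axis=0).real)

assert np.allclose(sp_loop, sp_vec)
```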
Not convinced? Let us do a simple benchmark with the following three implementations:
```python
def music(A, En):
    '''Naive implementation.'''
    n = A.shape[1]
    sp = np.zeros((n,))
    for i in range(n):
        sp[i] = 1.0 / (A[:, i].T.conj() @ (En @ En.T.conj()) @ A[:, i]).real
    return sp

def music_caching(A, En):
    '''Naive implementation with caching.'''
    n = A.shape[1]
    sp = np.zeros((n,))
    Q = En @ En.T.conj()
    for i in range(n):
        sp[i] = 1.0 / (A[:, i].T.conj() @ Q @ A[:, i]).real
    return sp

def music_vectorized(A, En):
    '''Vectorized implementation.'''
    V = En.T.conj() @ A
    return np.reciprocal(np.sum(V * V.conj(), axis=0).real)
```
For benchmarking purposes, we just fill $\mathbf{A}$ and $\hat{\mathbf{E}}_\mathrm{n}$ with complex Gaussian entries:
```python
n = 360  # search grid size
m = 20   # number of sensors
k = 6    # number of sources

# Fill the noise subspace and steering matrices with random numbers.
# This is for benchmarking purposes only. You can swap them with real
# ones.
A = np.random.randn(m, n) + 1j * np.random.randn(m, n)
En = np.random.randn(m, k) + 1j * np.random.randn(m, k)
```
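The timings below look like output from IPython's `%timeit`. Outside IPython, the standard-library `timeit` module gives comparable numbers; a self-contained sketch (the loop count is an arbitrary assumption, and absolute figures depend on the machine):

```python
import timeit
import numpy as np

# Time two of the implementations with the standard-library timeit module.
n, m, k = 360, 20, 6
A = np.random.randn(m, n) + 1j * np.random.randn(m, n)
En = np.random.randn(m, k) + 1j * np.random.randn(m, k)

def music_caching(A, En):
    sp = np.zeros((A.shape[1],))
    Q = En @ En.T.conj()
    for i in range(A.shape[1]):
        sp[i] = 1.0 / (A[:, i].T.conj() @ Q @ A[:, i]).real
    return sp

def music_vectorized(A, En):
    V = En.T.conj() @ A
    return np.reciprocal(np.sum(V * V.conj(), axis=0).real)

for fn in (music_caching, music_vectorized):
    elapsed = timeit.timeit(lambda: fn(A, En), number=100)
    print(f'{fn.__name__}: {1e6 * elapsed / 100:.1f} us per call')
```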
The benchmark environment:

- i7-9700K CPU and 32GB RAM
- Windows 10 Pro
- Python 3.6
- numpy 1.16.4
```
music:            2.18 ms ± 6.91 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
music_caching:    1.09 ms ± 5.12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
music_vectorized: 50.4 µs ± 507 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, Mar. 1986. ↩
Here is a reference on vectorization in MATLAB: https://www.mathworks.com/help/matlab/matlab_prog/vectorization.html. The same idea also applies to NumPy. ↩
Actually the speed-up not only comes from vectorizing the for-loops. To see why, try to derive the computational complexity of the three approaches. ↩
If possible, consider using root-MUSIC instead of the classical spectrum-base approach. It does not require a search grid and is much faster when the search grid is dense. ↩