# MUltiple SIgnal Classification (MUSIC) in the Browser

Date: 2017-07-29

Tags: direction-of-arrival, javascript, web

After successfully getting the conventional (Bartlett) beamformer and the MVDR (Capon) beamformer working in the browser,
I have been trying to get the MUltiple SIgnal Classification (MUSIC) algorithm^{[1]}
working. MUSIC is a classical subspace-based algorithm whose details can be
briefly described as follows. Consider the following far-field narrow-band
observation model:

$$\mathbf{y}(t) = \mathbf{A}(\mathbf{\theta}) \mathbf{x}(t) + \mathbf{n}(t), \tag{1}$$

where $\mathbf{x}(t) \in \mathbb{C}^K$ denotes the source signals, $\mathbf{A}(\mathbf{\theta}) \in \mathbb{C}^{M\times K}$ denotes the steering matrix of an $M$-sensor array, and $\mathbf{n}(t) \in \mathbb{C}^{M}$ denotes the additive noise. Assume that the additive noise is spatially and temporally uncorrelated white circularly-symmetric Gaussian with variance $\sigma^2$, and that it is uncorrelated with the sources. The covariance matrix of the measurement vector $\mathbf{y}(t)$ is then given by

$$\mathbf{R} = \mathbb{E}[\mathbf{y}(t)\mathbf{y}^H(t)] = \mathbf{A}(\mathbf{\theta}) \mathbf{P} \mathbf{A}^H(\mathbf{\theta}) + \sigma^2 \mathbf{I}, \tag{2}$$

where $\mathbf{P} = \mathbb{E}[\mathbf{x}(t)\mathbf{x}^H(t)]$ is the source covariance matrix.
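In practice the expectation is replaced by the sample covariance $\hat{\mathbf{R}} = \frac{1}{N} \sum_{t=1}^{N} \mathbf{y}(t) \mathbf{y}^H(t)$. Since JavaScript has no native complex type, one way to sketch this (illustrative only, not the matrix library used in the demo) is to carry the real and imaginary parts separately:

```javascript
// Sample covariance R̂ = (1/N) Σ_t y(t) y(t)^H of N complex snapshots.
// Yre and Yim are N×M arrays holding the real and imaginary parts of the
// snapshots; the result is returned as separate M×M real/imaginary parts.
function sampleCovariance(Yre, Yim) {
  const N = Yre.length, M = Yre[0].length;
  const Rre = [], Rim = [];
  for (let i = 0; i < M; i++) {
    Rre.push(new Array(M).fill(0));
    Rim.push(new Array(M).fill(0));
  }
  for (let t = 0; t < N; t++) {
    for (let i = 0; i < M; i++) {
      for (let j = 0; j < M; j++) {
        // accumulate y_i(t) * conj(y_j(t)) / N
        Rre[i][j] += (Yre[t][i] * Yre[t][j] + Yim[t][i] * Yim[t][j]) / N;
        Rim[i][j] += (Yim[t][i] * Yre[t][j] - Yre[t][i] * Yim[t][j]) / N;
      }
    }
  }
  return { re: Rre, im: Rim };
}
```

The split-into-parts representation is clumsy but keeps everything in plain arrays, which is convenient for a small self-contained demo.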

Assume that $\mathbf{P}$ is full-rank and that $K < M$. Then $\mathbf{A} \mathbf{P} \mathbf{A}^H$ in (2) is rank-deficient, and the eigendecomposition of the covariance matrix admits the following form:

$$\mathbf{R} = \mathbf{E}_\mathrm{s} \mathbf{\Lambda}_\mathrm{s} \mathbf{E}_\mathrm{s}^H + \sigma^2 \mathbf{E}_\mathrm{n} \mathbf{E}_\mathrm{n}^H, \tag{3}$$

where the columns of $\mathbf{E}_\mathrm{s}$ span the $K$-dimensional signal subspace, which coincides with the column space of $\mathbf{A}$, and the columns of $\mathbf{E}_\mathrm{n}$ span the $(M-K)$-dimensional noise subspace. By orthogonality, $\mathbf{E}_\mathrm{n}^H \mathbf{A} = \mathbf{0}$, which implies that $\mathbf{E}_\mathrm{n}^H \mathbf{a}(\theta) = \mathbf{0}$ if $\theta$ corresponds to one of the DOAs (here we assume that $\mathbf{A}$ is unambiguous). Therefore, we can obtain the DOAs by searching for the peaks of the following pseudo-spectrum:

$$P_\mathrm{MUSIC}(\theta) = \frac{1}{\mathbf{a}^H(\theta) \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}(\theta)}, \tag{4}$$

where $\hat{\mathbf{E}}_\mathrm{n}$ is the estimated noise subspace obtained from the sample covariance matrix $\hat{\mathbf{R}}$.
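Once an estimate of the noise subspace is in hand, evaluating the pseudo-spectrum over a grid of angles is straightforward. Below is an illustrative JavaScript fragment for a half-wavelength ULA (not the demo's actual code; complex values are again split into real/imaginary parts):

```javascript
// Steering vector a(θ) of an M-sensor ULA with half-wavelength spacing:
// a_m(θ) = exp(jπ m sin θ), stored as separate real/imaginary arrays.
function steeringVector(thetaRad, M) {
  const re = new Array(M), im = new Array(M);
  const phase = Math.PI * Math.sin(thetaRad);
  for (let m = 0; m < M; m++) {
    re[m] = Math.cos(m * phase);
    im[m] = Math.sin(m * phase);
  }
  return { re, im };
}

// MUSIC pseudo-spectrum 1 / ||En^H a(θ)||² evaluated over a grid of angles.
// En is an array of noise-subspace basis vectors, each {re, im} of length M.
function musicSpectrum(En, M, thetaGrid) {
  return thetaGrid.map((theta) => {
    const a = steeringVector(theta, M);
    let denom = 0;
    for (const e of En) {
      let pr = 0, pi = 0; // inner product e^H a
      for (let m = 0; m < M; m++) {
        pr += e.re[m] * a.re[m] + e.im[m] * a.im[m];
        pi += e.re[m] * a.im[m] - e.im[m] * a.re[m];
      }
      denom += pr * pr + pi * pi;
    }
    return 1 / denom;
  });
}
```

With an exactly orthogonal steering vector the denominator vanishes and the spectrum blows up at the true DOA, which is why finite-sample estimates yield sharp but finite peaks.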

From the above we can see that implementing MUSIC is quite simple if we have access to eigendecomposition-related subroutines for complex matrices. In MATLAB this is trivial. In JavaScript, it is a different story: the major obstacle to getting MUSIC working in the browser is precisely the lack of such subroutines. With some effort, I managed to port a subset of the EISPACK subroutines, which are written in Fortran, to JavaScript and merge them into my own work-in-progress JavaScript matrix library.
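For the curious, there is also a self-contained alternative to porting EISPACK: a Hermitian matrix $\mathbf{C} = \mathbf{A} + j\mathbf{B}$ (with $\mathbf{A}$ symmetric and $\mathbf{B}$ antisymmetric) embeds into the real symmetric matrix $\begin{bmatrix}\mathbf{A} & -\mathbf{B}\\ \mathbf{B} & \mathbf{A}\end{bmatrix}$, whose eigenvalues are those of $\mathbf{C}$, each repeated twice, so a plain Jacobi iteration for real symmetric matrices suffices. A rough sketch (illustrative only, not the EISPACK port used here):

```javascript
// Eigenvalues of a real symmetric matrix via cyclic Jacobi rotations.
function symmetricEigenvalues(S) {
  const n = S.length;
  const a = S.map((row) => row.slice());
  for (let sweep = 0; sweep < 50; sweep++) {
    let off = 0;
    for (let p = 0; p < n; p++)
      for (let q = p + 1; q < n; q++) off += a[p][q] * a[p][q];
    if (off < 1e-20) break; // off-diagonal mass is negligible
    for (let p = 0; p < n; p++) {
      for (let q = p + 1; q < n; q++) {
        if (Math.abs(a[p][q]) < 1e-15) continue;
        // Rotation angle that zeroes the (p, q) entry
        const phi = 0.5 * Math.atan2(2 * a[p][q], a[q][q] - a[p][p]);
        const c = Math.cos(phi), s = Math.sin(phi);
        for (let k = 0; k < n; k++) { // update columns p and q
          const akp = a[k][p], akq = a[k][q];
          a[k][p] = c * akp - s * akq;
          a[k][q] = s * akp + c * akq;
        }
        for (let k = 0; k < n; k++) { // update rows p and q
          const apk = a[p][k], aqk = a[q][k];
          a[p][k] = c * apk - s * aqk;
          a[q][k] = s * apk + c * aqk;
        }
      }
    }
  }
  return a.map((row, i) => row[i]).sort((x, y) => x - y);
}

// Eigenvalues of the Hermitian matrix C = A + jB via the real embedding
// [[A, -B], [B, A]]; each eigenvalue of C appears twice in the embedded
// matrix, so keep one copy of each pair after sorting.
function hermitianEigenvalues(A, B) {
  const n = A.length;
  const S = Array.from({ length: 2 * n }, () => new Array(2 * n).fill(0));
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) {
      S[i][j] = A[i][j];
      S[i][j + n] = -B[i][j];
      S[i + n][j] = B[i][j];
      S[i + n][j + n] = A[i][j];
    }
  return symmetricEigenvalues(S).filter((_, k) => k % 2 === 0);
}
```

MUSIC of course needs the eigenvectors as well, not just the eigenvalues; the same embedding also yields them (in paired form), at the cost of doubling the problem size, which is one reason a direct complex-arithmetic port is attractive.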

The resulting interactive figure is shown below (again it also works on mobile
devices). The underlying array is a uniform linear array with half-wavelength
inter-element spacing. The snapshots are generated according to the
unconditional/stochastic model^{[2]}. They are regenerated when
any of the parameters changes. You can tinker with the sliders to see how the
MUSIC spectrum responds as the parameters change. For comparison, I also
included the pseudo-spectrum from the MVDR beamformer. It can be observed that
under most circumstances MUSIC produces sharper peaks than the MVDR beamformer.
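For reference, generating unconditional-model snapshots is itself only a few lines. The sketch below is illustrative, not the demo's actual generator; it assumes unit-power uncorrelated sources, a half-wavelength ULA, and a small seeded PRNG so that the experiment is reproducible:

```javascript
// Tiny seeded LCG (assumption: adequate for a demo, not for serious
// Monte Carlo work), returning uniform values in [0, 1).
function lcg(seed) {
  let s = seed >>> 0;
  return () => ((s = (1664525 * s + 1013904223) >>> 0) / 4294967296);
}

// Standard normal deviate via the Box-Muller transform.
function gauss(rand) {
  let u = 0;
  while (u === 0) u = rand();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * rand());
}

// N unconditional-model snapshots for a half-wavelength ULA: the source
// amplitudes x_k(t) are redrawn i.i.d. CN(0, 1) for every snapshot, and the
// noise power is set from the SNR (in dB) relative to unit source power.
function generateSnapshots(thetas, M, N, snrDb, rand) {
  const sigma = Math.sqrt(Math.pow(10, -snrDb / 10));
  const Y = { re: [], im: [] };
  for (let t = 0; t < N; t++) {
    const x = thetas.map(() => ({
      re: Math.SQRT1_2 * gauss(rand),
      im: Math.SQRT1_2 * gauss(rand),
    }));
    const yr = new Array(M).fill(0), yi = new Array(M).fill(0);
    for (let m = 0; m < M; m++) {
      thetas.forEach((theta, k) => {
        const p = m * Math.PI * Math.sin(theta); // a_m(θ) = exp(jπ m sin θ)
        yr[m] += Math.cos(p) * x[k].re - Math.sin(p) * x[k].im;
        yi[m] += Math.cos(p) * x[k].im + Math.sin(p) * x[k].re;
      });
      yr[m] += sigma * Math.SQRT1_2 * gauss(rand); // CN(0, σ²) noise
      yi[m] += sigma * Math.SQRT1_2 * gauss(rand);
    }
    Y.re.push(yr);
    Y.im.push(yi);
  }
  return Y;
}
```

Redrawing the source amplitudes at every snapshot is exactly what distinguishes the unconditional/stochastic model from the conditional one, where the source waveforms are treated as fixed unknowns.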

*(Interactive figure omitted: parameter sliders with default settings 0 dB, 50, 12, 6, and a [-60°, 60°] range.)*

1. R. Schmidt, "Multiple emitter location and signal parameter estimation," *IEEE Transactions on Antennas and Propagation*, vol. 34, no. 3, pp. 276–280, Mar. 1986. ↩
2. P. Stoica and A. Nehorai, "Performance study of conditional and unconditional direction-of-arrival estimation," *IEEE Transactions on Acoustics, Speech and Signal Processing*, vol. 38, no. 10, pp. 1783–1795, Oct. 1990. ↩