# Mianzhi Wang

Ph.D. in Electrical Engineering

# MUtiple SIgnal Classification (MUSIC) in the Browser

After successfully getting the conventional (Bartlett) beamformer and the MVDR (Capon) beamformer working in the browser, I have been trying to get the MUltiple SIgnal Classification (MUSIC) algorithm[1] working. MUSIC is a classical subspace-based algorithm, briefly described as follows. Consider the following far-field narrow-band observation model:

$\mathbf{y}(t) = \mathbf{A}(\mathbf{\theta}) \mathbf{x}(t) + \mathbf{n}(t),$
(1)

where $\mathbf{x}(t) \in \mathbb{C}^K$ denotes the source signals, $\mathbf{A}(\mathbf{\theta}) \in \mathbb{C}^{M\times K}$ denotes the steering matrix of an $M$-sensor array, and $\mathbf{n}(t) \in \mathbb{C}^{M}$ denotes the additive noise. We assume that the additive noise is spatially and temporally white circularly-symmetric Gaussian, and that it is uncorrelated with the sources. The covariance matrix of the measurement vector $\mathbf{y}(t)$ is then given by

$\mathbf{R} = \mathbb{E}[\mathbf{y}(t)\mathbf{y}^H(t)] =\mathbf{A} \mathbf{P} \mathbf{A}^H + \sigma^2 \mathbf{I},$
(2)

where $\mathbf{P} = \mathbb{E}[\mathbf{x}(t)\mathbf{x}^H(t)]$ is the source covariance matrix.
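The structure of (2) can be verified numerically. Below is a minimal NumPy sketch (for illustration only; the actual in-browser implementation is in JavaScript) that builds the steering matrix of a half-wavelength uniform linear array and forms the ideal covariance matrix for two hypothetical unit-power, uncorrelated sources:

```python
import numpy as np

def ula_steering(doas_rad, m):
    """Steering matrix of an m-sensor ULA with half-wavelength spacing.

    Column i is a(theta_i) = [1, e^{j*pi*sin(theta_i)}, ...,
    e^{j*pi*(m-1)*sin(theta_i)}]^T.
    """
    sensors = np.arange(m)[:, None]  # m x 1 sensor indices
    return np.exp(1j * np.pi * sensors * np.sin(np.atleast_1d(doas_rad))[None, :])

# Hypothetical example: two unit-power uncorrelated sources at -20 and 30 degrees.
doas = np.deg2rad([-20.0, 30.0])
m, sigma2 = 8, 0.1
A = ula_steering(doas, m)
P = np.eye(len(doas))                      # source covariance: uncorrelated, unit power
R = A @ P @ A.conj().T + sigma2 * np.eye(m)  # covariance matrix as in (2)
```

Since $\mathbf{A}\mathbf{P}\mathbf{A}^H$ has rank $K = 2 < M = 8$, the smallest $M - K$ eigenvalues of `R` all equal `sigma2`, which is exactly the structure MUSIC exploits.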

Assume that $\mathbf{P}$ is full-rank. If $K < M$, then $\mathbf{A} \mathbf{P} \mathbf{A}^H$ in (2) is rank-deficient, with rank $K$. Therefore, the eigendecomposition of the covariance matrix admits the following form:

$\mathbf{R} = \mathbf{E}_\mathrm{s}\mathbf{\Lambda}_\mathrm{s}\mathbf{E}_\mathrm{s}^H + \sigma^2 \mathbf{E}_\mathrm{n}\mathbf{E}_\mathrm{n}^H,$
(3)

where $\mathbf{E}_\mathrm{s}$ corresponds to the $K$-dimensional signal subspace spanned by the columns of $\mathbf{A}$, and $\mathbf{E}_\mathrm{n}$ corresponds to the $(M-K)$-dimensional noise subspace. By orthogonality, $\mathbf{E}_\mathrm{n}^H \mathbf{A} = \mathbf{0}$, which implies that $\mathbf{E}_\mathrm{n}^H \mathbf{a}(\theta)=\mathbf{0}$ if $\theta$ corresponds to one of the DOAs (here we assume that $\mathbf{A}$ is unambiguous). Therefore, we can obtain the DOAs by searching for the peaks of the following pseudo-spectrum:
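The orthogonality $\mathbf{E}_\mathrm{n}^H \mathbf{A} = \mathbf{0}$ is easy to check numerically. A sketch (again in NumPy rather than the JavaScript used in the browser, and reusing the hypothetical two-source ULA setup from above):

```python
import numpy as np

# Hypothetical setup: 8-sensor half-wavelength ULA, 2 sources, P = I.
m, k, sigma2 = 8, 2, 0.1
doas = np.deg2rad([-20.0, 30.0])
sensors = np.arange(m)[:, None]
A = np.exp(1j * np.pi * sensors * np.sin(doas)[None, :])
R = A @ A.conj().T + sigma2 * np.eye(m)

# eigh returns eigenvalues in ascending order, so the first m - k
# eigenvectors (eigenvalue sigma^2) span the noise subspace E_n.
w, E = np.linalg.eigh(R)
En = E[:, :m - k]

# En^H A vanishes up to numerical precision, confirming (3).
residual = np.linalg.norm(En.conj().T @ A)
```

With the exact covariance matrix, `residual` is at machine-precision level; with a sample covariance estimate it is only approximately zero, which is why (4) below produces sharp but finite peaks.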

$P_\mathrm{MUSIC}(\theta) = \frac{1}{\mathbf{a}^H(\theta) \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}(\theta)},$
(4)

where $\hat{\mathbf{E}}_\mathrm{n}$ is the estimated noise subspace obtained from $\hat{\mathbf{R}}$.
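Putting the pieces together, the spectral search in (4) can be sketched in a few lines of NumPy. This is an illustrative stand-in for the JavaScript implementation described below, evaluated here on the exact covariance of two hypothetical sources:

```python
import numpy as np

def music_spectrum(R_hat, k, grid_rad):
    """MUSIC pseudo-spectrum (4) on a DOA grid, for a half-wavelength ULA."""
    m = R_hat.shape[0]
    w, E = np.linalg.eigh(R_hat)   # eigenvalues in ascending order
    En = E[:, :m - k]              # estimated noise subspace
    sensors = np.arange(m)[:, None]
    A_grid = np.exp(1j * np.pi * sensors * np.sin(grid_rad)[None, :])
    # Denominator of (4): ||En^H a(theta)||^2 for each grid point.
    proj = En.conj().T @ A_grid
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Exact covariance of two sources at -20 and 30 degrees (8-sensor ULA).
doas = np.deg2rad([-20.0, 30.0])
m, sigma2 = 8, 0.1
sensors = np.arange(m)[:, None]
A = np.exp(1j * np.pi * sensors * np.sin(doas)[None, :])
R = A @ A.conj().T + sigma2 * np.eye(m)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))  # 0.1 degree grid
spec = music_spectrum(R, k=2, grid_rad=grid)

# Pick the two largest local maxima as the DOA estimates.
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(spec[peak_idx])[-2:]]
est_deg = np.sort(np.rad2deg(grid[top2]))
```

Note that the peak search, not just the spectrum evaluation, is part of the estimator: the pseudo-spectrum itself is not a true spatial power spectrum, which is why it is called "pseudo".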

From the above we observe that the implementation of MUSIC is quite simple, provided that we have access to eigendecomposition subroutines for complex matrices. In MATLAB this is trivial; in JavaScript it is a different story. The major obstacle to getting MUSIC working in the browser is therefore the lack of eigendecomposition subroutines for complex matrices. With some effort, I managed to port a subset of the EISPACK subroutines, which are written in Fortran, to JavaScript and merged them into my own work-in-progress JavaScript matrix library.

The resulting interactive figure is shown below (again, it also works on mobile devices). The underlying array is a uniform linear array with half-wavelength inter-element spacing. The snapshots are generated according to the unconditional/stochastic model[2], and are regenerated whenever any of the parameters changes. You can tinker with the sliders to see how the spectra respond as the parameters change. For comparison, I also included the pseudo-spectrum of the MVDR beamformer. It can be observed that under most circumstances MUSIC produces sharper peaks than the MVDR beamformer.
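For readers who want to reproduce the experiment offline, here is one way the snapshots might be generated under the unconditional/stochastic model, where both the source signals and the noise are drawn as circularly-symmetric Gaussian vectors. This is a NumPy sketch using the figure's default parameters, not the JavaScript code driving the figure itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_gaussian(rng, shape):
    """Unit-variance circularly-symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Defaults from the interactive figure: 12 sensors, 6 sources,
# 50 snapshots, 0 dB SNR, sources spread over [-60, 60] degrees.
m, k, n_snapshots, snr_db = 12, 6, 50, 0.0
doas = np.deg2rad(np.linspace(-60.0, 60.0, k))
sensors = np.arange(m)[:, None]
A = np.exp(1j * np.pi * sensors * np.sin(doas)[None, :])  # half-wavelength ULA

# Unconditional model: x(t) and n(t) are both redrawn for every snapshot.
sigma2 = 10.0 ** (-snr_db / 10.0)   # noise power for unit-power sources
X = complex_gaussian(rng, (k, n_snapshots))
N = np.sqrt(sigma2) * complex_gaussian(rng, (m, n_snapshots))
Y = A @ X + N                        # y(t) = A x(t) + n(t), per (1)

R_hat = (Y @ Y.conj().T) / n_snapshots  # sample covariance estimate
```

Feeding `R_hat` (rather than the exact covariance) into the MUSIC spectrum shows the finite-sample behavior that the sliders expose: fewer snapshots or lower SNR broaden and eventually merge the peaks.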

[Interactive figure: MUSIC and MVDR pseudo-spectra, with sliders for SNR (default 0 dB), number of snapshots (50), number of sensors (12), number of sources (6), and source range ([-60°, 60°]).]

1. R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, Mar. 1986.

2. P. Stoica and A. Nehorai, "Performance study of conditional and unconditional direction-of-arrival estimation," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, no. 10, pp. 1783–1795, Oct. 1990.