Mianzhi Wang

Ph.D. in Electrical Engineering

MUtiple SIgnal Classification (MUSIC) in the Browser


After successfully getting the conventional (Bartlett) beamformer and the MVDR (Capon) beamformer working in the browser, I have been trying to get the MUtiple SIgnal Classification (MUSIC) algorithm[1] working. MUSIC is a classical subspace based algorithm whose details are briefly described as follows. Consider the following far-field narrow-band observation model:

$$\mathbf{y}(t) = \mathbf{A}(\mathbf{\theta}) \mathbf{x}(t) + \mathbf{n}(t), \tag{1}$$

where $\mathbf{x}(t) \in \mathbb{C}^K$ denotes the source signals, $\mathbf{A}(\mathbf{\theta}) \in \mathbb{C}^{M \times K}$ denotes the steering matrix of an $M$-sensor array, and $\mathbf{n}(t) \in \mathbb{C}^M$ denotes the additive noise. We assume that the additive noise is spatially and temporally uncorrelated white circularly-symmetric Gaussian, and that it is uncorrelated with the sources. The covariance matrix of the measurement vector $\mathbf{y}(t)$ is then given by
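To make the notation concrete, the steering vector of a uniform linear array with half-wavelength spacing (the geometry used in the demo) is $a_m(\theta) = e^{j\pi m \sin\theta}$. A minimal JavaScript sketch follows; the function name and the separate real/imaginary arrays are illustrative choices, not the actual library's API, since JavaScript has no native complex type:

```javascript
// Steering vector of an M-sensor ULA with half-wavelength spacing.
// Complex values are stored as separate real/imaginary arrays.
function steeringVector(M, theta) {
  const re = new Array(M);
  const im = new Array(M);
  for (let m = 0; m < M; m++) {
    // Phase delay at the m-th sensor: pi * m * sin(theta).
    const phase = Math.PI * m * Math.sin(theta);
    re[m] = Math.cos(phase);
    im[m] = Math.sin(phase);
  }
  return { re, im };
}
```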

$$\mathbf{R} = \mathbb{E}[\mathbf{y}(t)\mathbf{y}^H(t)] = \mathbf{A} \mathbf{P} \mathbf{A}^H + \sigma^2 \mathbf{I}, \tag{2}$$

where $\mathbf{P} = \mathbb{E}[\mathbf{x}(t)\mathbf{x}^H(t)]$ is the source covariance matrix.
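In practice $\mathbf{R}$ is unknown and is replaced with the sample covariance $\hat{\mathbf{R}} = \frac{1}{N}\sum_{t=1}^{N} \mathbf{y}(t)\mathbf{y}^H(t)$ computed from $N$ snapshots. A minimal sketch, again with complex data stored as real/imaginary array pairs (not the actual library code):

```javascript
// Sample covariance R_hat = (1/N) * sum_t y(t) y(t)^H.
// Each snapshot is an {re, im} pair of length-M arrays; the result is
// a pair of M x M matrices stored flat in row-major order.
function sampleCovariance(snapshots, M) {
  const N = snapshots.length;
  const Rre = new Float64Array(M * M);
  const Rim = new Float64Array(M * M);
  for (const { re, im } of snapshots) {
    for (let i = 0; i < M; i++) {
      for (let j = 0; j < M; j++) {
        // Accumulate y_i * conj(y_j) / N.
        Rre[i * M + j] += (re[i] * re[j] + im[i] * im[j]) / N;
        Rim[i * M + j] += (im[i] * re[j] - re[i] * im[j]) / N;
      }
    }
  }
  return { re: Rre, im: Rim };
}
```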

Assume that $\mathbf{P}$ is full-rank. If $K < M$, then $\mathbf{A} \mathbf{P} \mathbf{A}^H$ in (2) is rank-deficient. Therefore, the eigendecomposition of the covariance matrix admits the following form:

$$\mathbf{R} = \mathbf{E}_\mathrm{s}\mathbf{\Lambda}_\mathrm{s}\mathbf{E}_\mathrm{s}^H + \sigma^2 \mathbf{E}_\mathrm{n}\mathbf{E}_\mathrm{n}^H, \tag{3}$$

where $\mathbf{E}_\mathrm{s}$ corresponds to the $K$-dimensional signal subspace spanned by $\mathbf{A}$, and $\mathbf{E}_\mathrm{n}$ denotes the $(M-K)$-dimensional noise subspace. By orthogonality, $\mathbf{E}_\mathrm{n}^H \mathbf{A} = \mathbf{0}$, which implies that $\mathbf{E}_\mathrm{n}^H \mathbf{a}(\theta) = \mathbf{0}$ if $\theta$ corresponds to one of the DOAs (here we assume that $\mathbf{A}$ is unambiguous). Therefore, we can obtain the DOAs by searching for the peaks of the following pseudo-spectrum:
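Once the eigendecomposition of $\hat{\mathbf{R}}$ is available, extracting the estimated noise subspace is just a matter of keeping the eigenvectors attached to the $M - K$ smallest eigenvalues (the eigenvalues of a Hermitian covariance matrix are real). A sketch, where the eigenpair format is a hypothetical one rather than the library's:

```javascript
// Pick the noise subspace from an array of { value, vector } eigenpairs:
// sort by eigenvalue ascending and keep the first M - K eigenvectors.
function noiseSubspace(eigenpairs, K) {
  const sorted = eigenpairs.slice().sort((a, b) => a.value - b.value);
  return sorted.slice(0, eigenpairs.length - K).map(p => p.vector);
}
```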

$$P_\mathrm{MUSIC}(\theta) = \frac{1}{\mathbf{a}^H(\theta) \hat{\mathbf{E}}_\mathrm{n} \hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}(\theta)}, \tag{4}$$

where $\hat{\mathbf{E}}_\mathrm{n}$ is the estimated noise subspace obtained from the sample covariance matrix $\hat{\mathbf{R}}$.
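The denominator of (4) is just $\|\hat{\mathbf{E}}_\mathrm{n}^H \mathbf{a}(\theta)\|^2$, i.e. the sum of squared magnitudes of the inner products between $\mathbf{a}(\theta)$ and each noise eigenvector, which diverges toward a peak as $\theta$ approaches a true DOA. A sketch of evaluating the pseudo-spectrum at one angle (assuming a half-wavelength ULA and the real/imaginary-array representation used above; not the actual library code):

```javascript
// MUSIC pseudo-spectrum 1 / || En^H a(theta) ||^2 at a single angle.
// En is an array of noise eigenvectors, each an {re, im} pair.
function musicSpectrum(En, M, theta) {
  // Steering vector a(theta) for a half-wavelength ULA.
  const a = { re: [], im: [] };
  for (let m = 0; m < M; m++) {
    const phase = Math.PI * m * Math.sin(theta);
    a.re.push(Math.cos(phase));
    a.im.push(Math.sin(phase));
  }
  let denom = 0;
  for (const e of En) {
    // Inner product e^H a = sum_m conj(e_m) * a_m.
    let pre = 0, pim = 0;
    for (let m = 0; m < M; m++) {
      pre += e.re[m] * a.re[m] + e.im[m] * a.im[m];
      pim += e.re[m] * a.im[m] - e.im[m] * a.re[m];
    }
    denom += pre * pre + pim * pim;
  }
  return 1 / denom;
}
```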

From the above, we can see that implementing MUSIC is quite simple if we have access to eigendecomposition-related subroutines for complex matrices. In MATLAB this is trivial; in JavaScript it is a different story. The major obstacle to getting MUSIC working in the browser was therefore the lack of eigendecomposition routines for complex matrices. With some effort, I managed to port a subset of the relevant subroutines in EISPACK, which are written in Fortran, to JavaScript and merge them into my own work-in-progress JavaScript matrix library.

The resulting interactive figure is shown below (again, it also works on mobile devices). The underlying array is a uniform linear array with half-wavelength inter-element spacing. The snapshots are generated according to the unconditional/stochastic model[2] and are regenerated whenever any of the parameters changes. You can tinker with the sliders to see how the pseudo-spectra respond as the parameters change. For comparison, I also included the pseudo-spectrum from the MVDR beamformer. It can be observed that under most circumstances MUSIC produces sharper peaks than the MVDR beamformer.
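For reference, circularly-symmetric complex Gaussian samples of the kind the stochastic model calls for can be drawn with the Box-Muller transform. A sketch of one way this could be done (not the code actually driving the figure):

```javascript
// One standard circularly-symmetric complex Gaussian sample via the
// Box-Muller transform: real and imaginary parts are each N(0, 1/2),
// so the total complex variance is 1.
function complexGaussian() {
  const u1 = Math.random() || Number.MIN_VALUE; // avoid log(0)
  const u2 = Math.random();
  // sqrt(-2 ln u1) gives a unit-variance real Gaussian amplitude;
  // dividing by sqrt(2) (folded into the formula) yields variance 1/2
  // per component.
  const r = Math.sqrt(-Math.log(u1));
  return {
    re: r * Math.cos(2 * Math.PI * u2),
    im: r * Math.sin(2 * Math.PI * u2),
  };
}
```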

[Interactive figure: MUSIC and MVDR pseudo-spectra with parameter sliders; settings shown include 0 dB, 50, 12, 6, and a DOA range of [-60°, 60°].]

  1. R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, Mar. 1986.

  2. P. Stoica and A. Nehorai, "Performance study of conditional and unconditional direction-of-arrival estimation," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, no. 10, pp. 1783–1795, Oct. 1990.