Mianzhi Wang

Ph.D. in Electrical Engineering

MUltiple SIgnal Classification in TensorFlow


While TensorFlow is mainly used in deep learning, it can also be used for other numerical computation tasks. Perhaps because I have done too much statistical array processing, today I suddenly got the idea of implementing MUltiple SIgnal Classification (MUSIC)[1] in TensorFlow. After checking the APIs, I found that it is indeed possible because TensorFlow supports complex matrices. Here is how.

Disclaimer: This implementation is just a demonstration of TensorFlow's rich API for random sampling, linear algebra, and complex numbers. Here we do not need the computation graph to provide us free gradient computations (actually, I do not think TensorFlow conducts gradient computations for complex operations). It is easier (and probably faster) to implement MUSIC directly in numpy.
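To substantiate that last point, here is a minimal numpy sketch of the core MUSIC computation for a uniform linear array (the function name and argument conventions are mine, chosen to mirror the TensorFlow version developed below):

import numpy as np

def music_numpy(R, sensor_positions, n_doas, wavelength, n_grid_points=360):
    # np.linalg.eigh returns eigenvalues in ascending order
    _, E = np.linalg.eigh(R)
    # noise subspace: eigenvectors of the smallest eigenvalues
    En = E[:, :R.shape[0] - n_doas]
    # evaluate the pseudospectrum over a grid of candidate angles
    grid = np.linspace(-np.pi / 2, np.pi / 2, n_grid_points)
    A = np.exp(2j * np.pi / wavelength
               * np.outer(sensor_positions, np.sin(grid)))
    sp = 1.0 / np.sum(np.abs(En.conj().T @ A)**2, axis=0)
    return grid, sp / sp.max()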

You can view the HTML version of the Jupyter notebook here, or download the notebook here.

Generating Snapshots

Before implementing MUSIC, we first need to generate snapshots. For simplicity, let us consider the following far-field narrow-band observation model:

$$\mathbf{y}(t) = \mathbf{A}(\mathbf{\theta}) \mathbf{x}(t) + \mathbf{n}(t), \tag{1}$$

where $\mathbf{y}(t)$ collects the sensor outputs, $\mathbf{A}(\mathbf{\theta})$ is the steering matrix determined by the directions of arrival $\mathbf{\theta}$, $\mathbf{x}(t)$ is the vector of source signals, and $\mathbf{n}(t)$ is the additive noise.

Furthermore, we consider the stochastic model[2] for the source signals and the additive noise. In this model, both the source signals and the additive noise are circularly-symmetric complex Gaussian. Generating them requires a random number generator that produces normally distributed samples, and tf.distributions.Normal has us covered. Here is a snippet that draws samples from a zero-mean Gaussian with standard deviations 1 and 2 along the two coordinates (note that the scale parameter of tf.distributions.Normal is the standard deviation, so the covariance matrix is $\mathbf{\Sigma} = \mathrm{diag}(1, 4)$):

import tensorflow as tf
import matplotlib.pyplot as plt

# generate some Gaussian samples using tf.distributions
mu = tf.constant([0., 0.])
sigma = tf.constant([1., 2.])  # standard deviations
gaussian_dist = tf.distributions.Normal(loc=mu, scale=sigma)
x_samples = gaussian_dist.sample(500)

with tf.Session() as sess:
    x, = sess.run([x_samples])
    plt.scatter(x[:, 0], x[:, 1])
    plt.axis('equal')
    plt.show()

[Figure: scatter plot of the generated Gaussian samples]

Now we can generate circularly-symmetric complex Gaussian samples for the source signals and the additive noise:

# Generates samples from a white circularly-symmetric complex Gaussian distribution.
def sample_complex_white_gaussian(dist, shape, variance):
    # draw independent real and imaginary parts and split the variance evenly
    return tf.cast(tf.sqrt(variance / 2), complexx) * tf.complex(dist.sample(shape), dist.sample(shape))
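To see why the variance is halved: if $x, y \sim \mathcal{N}(0, 1)$ are independent, then $z = \sqrt{\sigma^2/2}\,(x + jy)$ satisfies $\mathbb{E}[|z|^2] = \frac{\sigma^2}{2}(\mathbb{E}[x^2] + \mathbb{E}[y^2]) = \sigma^2$, so the generated samples have exactly the requested variance.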

Note that complexx is a global that will be set to either tf.complex64 or tf.complex128, depending on the desired precision.
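The snippets in this post also rely on a handful of globals defined earlier in the notebook: floatx, zero, one, imaginary_unit, tfd, and the scenario parameters wavelength, n_sensors, n_snapshots, SNR, doas, and n_doas. A minimal setup might look like the following (the specific values are my own choices for illustration, not those from the original notebook):

import math
import numpy as np
import tensorflow as tf
tfd = tf.distributions

# precision settings used throughout the snippets
floatx = tf.float32
complexx = tf.complex64
zero = tf.constant(0., dtype=floatx)
one = tf.constant(1., dtype=floatx)
imaginary_unit = tf.constant(1j, dtype=complexx)

# hypothetical scenario parameters (example values only)
wavelength = 1.0                  # normalized carrier wavelength
n_sensors = 16                    # number of ULA sensors
n_snapshots = 200                 # number of snapshots
SNR = 10.                         # signal-to-noise ratio in dB
n_doas = 3                        # number of sources
doas = tf.constant([-math.pi/6, 0., math.pi/5], dtype=floatx)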

Next, we want to generate the snapshots $\mathbf{y}(t)$ and compute the sample covariance matrix for MUSIC. Here is the code that does it:

# create the ULA
d_0 = wavelength / 2.0 # inter-element spacing
sensor_positions = tf.cast(tf.linspace(zero, n_sensors - one, n_sensors) * d_0, floatx)

# compute the steering matrix
A = steering_matrix(sensor_positions, doas, wavelength)

# generate source samples and noise samples
wn_dist = tfd.Normal(loc=zero, scale=one)
source_power = tf.constant(1.0, dtype=floatx)
noise_power = tf.constant(10.**(-SNR/10.), dtype=floatx)
S = sample_complex_white_gaussian(wn_dist, [n_doas, n_snapshots], source_power)
N = sample_complex_white_gaussian(wn_dist, [n_sensors, n_snapshots], noise_power)

# compute the sample covariance matrix
Y = tf.matmul(A, S) + N
R = tf.matmul(Y, Y, adjoint_b=True) / n_snapshots
R = 0.5 * (R + tf.conj(tf.transpose(R))) # ensure Hermitian
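This implements the usual estimate $\hat{\mathbf{R}} = \frac{1}{N} \sum_{t=1}^{N} \mathbf{y}(t) \mathbf{y}^H(t) = \frac{1}{N} \mathbf{Y} \mathbf{Y}^H$, where $N$ is the number of snapshots; the last line symmetrizes the result to guard against numerical round-off.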

The steering_matrix function creates the steering matrix $\mathbf{A}$ given the directions of arrival and the sensor locations:

# Creates a steering matrix.
def steering_matrix(sensor_positions, doas, wavelength):
    # here both `sensor_positions` and `doas` are row vectors.
    k = imaginary_unit * tf.cast(2 * math.pi / wavelength, dtype=complexx)
    return tf.exp(k * tf.cast(tf.reshape(sensor_positions, (-1, 1)) * tf.sin(doas), dtype=complexx))
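For reference, with sensor positions $p_m$ along the array axis, the code above computes the entries $[\mathbf{A}]_{m,k} = \exp\left(j \frac{2\pi}{\lambda} p_m \sin\theta_k\right)$, where $\theta_k$ denotes the $k$-th direction of arrival measured from the array broadside.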

MUSIC!

Now we are ready to implement MUSIC. It turns out to be quite straightforward, thanks to TensorFlow's linear algebra API.

# Generates the MUSIC spectrum.
def music(R, sensor_positions, n_doas, wavelength, n_grid_points=360):
    # MUSIC
    n_sensors = tf.size(sensor_positions)
    # eigendecomposition: v contains the eigenvalues, E the eigenvectors
    v, E = tf.self_adjoint_eig(R)
    # extract the noise subspace, i.e., the eigenvectors associated with the
    # (n_sensors - n_doas) smallest eigenvalues
    # (TensorFlow orders the eigenvalues from the smallest to the largest)
    En = E[:, 0:(n_sensors - n_doas)]
    # compute the spectrum
    doa_grid = tf.cast(tf.linspace(-math.pi/2, math.pi/2, n_grid_points), floatx)
    A_grid = steering_matrix(sensor_positions, doa_grid, wavelength)
    EhA = tf.matmul(En, A_grid, adjoint_a=True)
    music_sp = tf.real(tf.reduce_sum(tf.conj(EhA) * EhA, axis=0))
    music_sp = tf.reciprocal(music_sp)
    # normalize
    music_sp = music_sp / tf.reduce_max(music_sp)
    return doa_grid, music_sp

Here tf.self_adjoint_eig computes the eigenvalues and eigenvectors of symmetric/Hermitian matrices.
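The returned spectrum is the classical MUSIC pseudospectrum

$$P_{\mathrm{MUSIC}}(\theta) = \frac{1}{\mathbf{a}^H(\theta) \mathbf{E}_n \mathbf{E}_n^H \mathbf{a}(\theta)},$$

normalized so that its maximum equals one. The denominator approaches zero whenever $\mathbf{a}(\theta)$ is nearly orthogonal to the noise subspace, producing sharp peaks near the true directions of arrival.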

Finally, we can perform direction-of-arrival estimation with MUSIC by launching a new session:

# MUSIC!
doa_grid, music_sp = music(R, sensor_positions, n_doas, wavelength)

# run
with tf.Session() as sess:
    true_doas, doa_grid_out, music_sp_out = sess.run([doas, doa_grid, music_sp])
    plt.figure()
    plt.plot(doa_grid_out, music_sp_out)
    plt.stem(true_doas, np.ones(true_doas.shape), '--', basefmt=' ')
    plt.ylabel('Normalized Spectrum')
    plt.xlabel('Angle (rad)')
    plt.legend(['MUSIC', 'True DOAs'], loc=1, bbox_to_anchor=(1.3, 1))

It looks like all the sources are identified correctly:

[Figure: normalized MUSIC spectrum with the true DOAs marked]


  1. R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, Mar. 1986.

  2. P. Stoica and A. Nehorai, "Performance study of conditional and unconditional direction-of-arrival estimation," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, no. 10, pp. 1783–1795, Oct. 1990.