# Sparse Vector Estimation

## Program

The code for this example is in `examples/sparseEstim/sparseAWGN.m` in the directory `gampmatlab/trunk/code`. To run the code, simply `cd` to that directory and run `sparseAWGN` at the MATLAB prompt. You will see a plot of a sparse vector and its estimate.

## Problem Description

The previous Gaussian example illustrated the basic syntax of the `gampEst` function for the standard problem of estimating a Gaussian random vector. In this example, we consider a more interesting problem and show how to use the code for a simple problem in *compressed sensing*. Compressed sensing concerns the estimation of sparse vectors from a small number of random linear measurements. A vector $ \mathbf{x} \in \mathbb{R}^n $ is *sparse* if only a small fraction of its components are non-zero. We consider the problem of estimating a random sparse vector $ \mathbf{x} $ through random linear measurements of the form

$ \quad \mathbf{y}=\mathbf{z} + \mathbf{w}, \quad \mathbf{z}= \mathbf{A}\mathbf{x}, \quad (1) $

where $ \mathbf{A} $ is a Gaussian random matrix and $ \mathbf{w} $ is Gaussian noise. The AWGN observation model (1) is identical to the Gaussian example. The difference in this example is the distribution of the components of the unknown vector $ \mathbf{x} $. The gampmatlab package can incorporate many distributions for sparse random vectors. For illustration, we consider a simple model for the vector $ \mathbf{x} $ in which the components are i.i.d. and follow a Gaussian-Bernoulli distribution

$ x_j \sim \left\{ \begin{array}{ll} 0 & \mbox{with probability } 1-\rho \\ {\mathcal N}(0,1) & \mbox{with probability } \rho \end{array} \right. $

where $ \rho $ represents the average fraction of non-zero components. The problem is to estimate the vector $ \mathbf{x} $ from the measurement vector $ \mathbf{y} $.

## MATLAB Program Description

### Generation of the Data

The beginning of the code is similar to the Gaussian example: it adds the main directory to the MATLAB path and sets the problem dimensions:

```matlab
% Set path to the main directory
addpath('../../main/');

% Parameters
nx = 200;         % Number of input components (dimension of x)
nz = 100;         % Number of output components (dimension of y)
sparseRat = 0.1;  % Fraction of components of x that are non-zero
snr = 20;         % SNR in dB
```

We see one additional parameter, `sparseRat`, which represents the sparsity ratio $ \rho $. Note that the number of measurements, `nz`, is less than the signal dimension, `nx`; that is, the system is under-determined. A key property of sparse estimation is that, due to the sparsity constraint, we will still be able to estimate $ \mathbf{x} $ well. Again, feel free to try out different values.

Most of the rest of the initial part of the code is identical to the Gaussian example. The one change is the generation of the Gauss-Bernoulli random vector $ \mathbf{x} $, which is performed by the code

```matlab
xmean0 = 0;
xvar0 = 1;
x0 = normrnd(xmean0, sqrt(xvar0), nx, 1);  % A dense Gaussian vector
x = x0.*(rand(nx,1) < sparseRat);          % Insert zeros
```

where the final line zeros out a random subset of the components, making the vector sparse. As in the Gaussian example, the matrix $ \mathbf{A} $ and the noise vector $ \mathbf{w} $ are generated with i.i.d. Gaussian components, with the noise scaled according to the parameter `snr`:

```matlab
% Create a random measurement matrix
A = (1/sqrt(nx))*randn(nz,nx);

% Compute the noise level based on the specified SNR. Since the components
% of A have power 1/nx, E|z(i)|^2 = E|x(j)|^2 = sparseRat.
wvar = 10^(-0.1*snr)*xvar0*sparseRat;
w = normrnd(0, sqrt(wvar), nz, 1);
y = A*x + w;
```

### Solution via GAMP

The command to estimate $ \mathbf{x} $ from the measurement vector $ \mathbf{y} $ is identical to the Gaussian example and uses the `gampEst` function, which appears near the bottom of the file:

```matlab
[xhat, xvar] = gampEst(inputEst, outputEst, A, opt);
```

The only argument that needs to be changed from the Gaussian example is the `EstimIn` class `inputEst`, which must now specify the Gauss-Bernoulli distribution.

An `EstimIn` estimator class for a sparse distribution that arises from a mixture of a point mass at zero and some other distribution can be created via the `SparseScaEstim` class. The estimator class is created in two steps. First, we create an input estimator corresponding to the distribution of the non-zero components. Since, in this example, the non-zero components are Gaussian, we use the pre-built `AwgnEstimIn` class with the line:

```matlab
inputEst0 = AwgnEstimIn(xmean0, xvar0);
```

This step is identical to the Gaussian example. Then, we build the `inputEst` class from `inputEst0` with the call

```matlab
inputEst = SparseScaEstim( inputEst0, sparseRat );
```

which creates a new estimator class corresponding to a random vector with components that are zero with probability `1-sparseRat` and follow the distribution `inputEst0` with probability `sparseRat`.

The function `SparseScaEstim` can be used to "sparsify" any distribution. That is, suppose you start with an input estimator class `inputEst0` corresponding to some distribution $ P_0(x) $ on the components of the vector $ \mathbf{x} $. Then, the call `inputEst = SparseScaEstim( inputEst0, sparseRat )` creates a new input estimator class corresponding to components that are zero with probability `1-sparseRat` and follow $ P_0(x) $ with probability `sparseRat`. The ability to rapidly compose new and complex distributions in this way is one of the benefits of the object-oriented approach.
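As a sketch of this composition, one could sparsify a Gaussian prior with a different mean, variance, and sparsity level. The parameter values below are purely illustrative; only the `AwgnEstimIn` and `SparseScaEstim` classes used above are assumed:

```matlab
% Illustrative values (not from the example script): non-zero components
% drawn from N(2, 0.5), with only 5% of components non-zero.
inputEstDense  = AwgnEstimIn(2, 0.5);                % Base distribution P_0(x)
inputEstSparse = SparseScaEstim(inputEstDense, 0.05); % Sparsified prior
```

The same pattern applies to any other `EstimIn` class used as the base distribution.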

The output estimator class `outputEst` is created identically to the Gaussian example using the pre-built `AwgnEstimOut` class.
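For concreteness, this construction might look as follows, assuming the same constructor arguments as in the Gaussian example (the measurement vector and the noise variance):

```matlab
% Output estimator for the AWGN model y = z + w, w ~ N(0, wvar),
% with constructor arguments as in the Gaussian example.
outputEst = AwgnEstimOut(y, wvar);
```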

With the estimator classes defined, the program then calls `gampEst`, which returns the estimate vector `xhat`. The program concludes by plotting `x` and `xhat`. You should see that GAMP recovers the sparse vector accurately, despite the under-determined measurements.
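A minimal sketch of such a comparison plot and an accompanying error metric is shown below. This is illustrative code under the variable names defined earlier, not the exact plotting code in `sparseAWGN.m`:

```matlab
% Compare the true vector and the GAMP estimate (illustrative sketch).
plot(1:nx, x, 'bo', 1:nx, xhat, 'g+');
legend('True', 'GAMP estimate');
xlabel('Component index'); ylabel('Value');

% Report the normalized MSE in dB.
nmseGAMP = 20*log10( norm(x - xhat)/norm(x) );
fprintf('GAMP NMSE: %5.1f dB\n', nmseGAMP);
```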