Photon-Limited Image Reconstruction


Photon-limited imaging arises in a wide variety of applications, including night vision, medical imaging, space weather, astronomy, and spectral imaging. Night vision capabilities have played a critical role in ensuring US success in many recent conflict arenas. A fundamental understanding of space weather is essential to protecting aircraft and their communication systems, as “solar flares send off little bursts of radiation that travel down to the Earth, and can impact communications systems, GPS systems, satellites, power grids, aviation interests and polar regions” (Roth, 2010). (“Little” is relative; these radiation bursts can cause significant damage.) Scientists expect a new solar activity cycle to start in 2013, making accurate prediction of solar events critical for many Air Force missions. While most solar data is not considered photon-limited, in high-energy bands the small number of photons emitted can provide crucial information about the underlying solar physics. Spectral images distinguish materials by their characteristic spectra. A mathematically similar problem outside the conventional imaging arena is the accurate prediction of IED placement from historical data.

In these and other applications, our goal is to reconstruct spatially distributed and dynamic phenomena from data collected by counting discrete independent events, such as photons hitting a detector. Challenges associated with low photon count data are often circumvented by designing systems that aggregate photons into fixed bins across space and time (i.e., creating low-resolution, low-frame-rate cameras). If the bins are large enough, then photon statistics no longer dominate reconstruction errors, and off-the-shelf image processing methods based on squared-error measures, or which assume a “noise-free” model, may be applied. However, the resulting loss of spatial and temporal resolution cannot be overcome, even with the most sophisticated interpolation methods. This is illustrated in Figure 2. The original image in Figure 2(a) is from a recent article in the New York Times depicting “United States Marines during a gunbattle in northern Helmand Province, Afghanistan” (Mazzetti, Perlez, Schmitt, and Lehren, 2010). Figure 2(b) corresponds to a low-resolution detector array measuring the scene with relatively little noise (31 photons per pixel). Notice that the snipers are not distinguishable. This low-resolution image can easily be “denoised” using a variety of off-the-shelf techniques, but it will never yield an accurate reconstruction of the snipers on the hillside; that critical information was lost in the sampling process.
[Figure 2: the scene sensed with a low-resolution detector array versus a high-resolution, photon-limited detector array.]
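The tradeoff between per-pixel photon counts and spatial resolution is easy to reproduce in simulation. The following is a minimal sketch (not the PI's code; the scene, array sizes, and binning factor are illustrative assumptions) of sensing the same intensity with a fine detector array versus 10x10 aggregated bins, matching the 0.31 versus 31 photons-per-pixel regimes described above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scene: a 250x250 intensity map with ~0.31 photons/pixel on average.
f = 0.62 * rng.random((250, 250))

# High-resolution sensing: one Poisson draw per fine pixel (mostly zeros and ones).
y_high = rng.poisson(f)

# Low-resolution sensing: photons aggregated over 10x10 blocks before detection,
# so each coarse pixel collects ~31 photons and looks nearly noise-free.
f_low = f.reshape(25, 10, 25, 10).sum(axis=(1, 3))
y_low = rng.poisson(f_low)

# The coarse data have benign noise statistics, but the fine spatial detail
# inside each block is gone before any denoising or interpolation is applied.
print(y_high.mean(), y_low.mean())   # roughly 0.31 vs. 31 photons per pixel
```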


High-resolution observations, in contrast, will exhibit significant non-Gaussian noise, since each pixel will record either a zero or a one, and conventional algorithms which neglect the effects of Poisson noise will fail. However, reconstruction methods based on the Poisson statistics of the data can achieve minimax optimal reconstructions even when each detector observes zero or one photon. (These methods for static images will be described in Section 2.) Figure 2(c) corresponds to a high-resolution detector array with only 0.31 photons per pixel. This high-resolution data can be denoised using adaptive partitioning methods (Figure 2(d)) designed for photon-limited data by the PI (Willett, 2007; Willett and Nowak, 2003, 2007) to produce the accurate reconstruction in Figure 2(e). This reconstruction clearly shows the presence of snipers, making it possible to perform mission-critical image analysis that would be precluded by conventional, low-resolution sensing paradigms for low-light settings. In this case, the resolution of the estimate is dictated by the statistics of the observations, not by artificial limits associated with a fixed detector size.
As shown above, operating at the resolution limit results in very noisy, non-Gaussian data, which necessitates specialized reconstruction methods. In particular, the Poisson noise model is critical to accurate analysis of photon-limited image data. To see the challenge of applying traditional reconstruction methods (such as wavelet denoising) to this problem, consider the following experiment. An original intensity image, f, with intensities ranging from zero to 0.2, is displayed in Figure 3(a); the Poisson observations of this intensity image, y, are displayed in Figure 3(b). Note that most of the observations consist of zero or one photon, with a mean of 0.06 photons per pixel.

Simply transforming Poisson data to produce data with Gaussian noise (via, for instance, the variance-stabilizing Anscombe transform) is only effective when the number of photon counts is sufficiently high. Furthermore, using the Anscombe transform to estimate intensities does not always preserve photon flux, so the overall intensity of the estimate may be very different from the overall intensity of the scene, potentially leading to critical errors in quantitative analysis of scenes. The results of applying wavelet estimation in conjunction with the Anscombe transform are displayed in Figure 3(e-f); these images correspond to the undecimated Haar and undecimated D6 wavelets, respectively. The photon flux of the estimate is significantly smaller than the photon flux of the source. In addition, as pointed out by Kolaczyk (1999), this approach yields significant over-smoothing of the image.
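The flux mismatch is easy to see numerically. Below is a minimal sketch (assuming numpy; the constant intensity and image size are illustrative) of applying the Anscombe transform z = 2*sqrt(y + 3/8) at 0.06 photons per pixel and then algebraically inverting an idealized "denoised" result; even with perfect denoising, the recovered flux falls well short of the truth:

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.full((512, 512), 0.06)        # constant intensity, 0.06 photons/pixel
y = rng.poisson(f)

# Anscombe variance-stabilizing transform: approximately unit-variance
# Gaussian noise, but only when counts are moderately large per pixel.
z = 2.0 * np.sqrt(y + 3.0 / 8.0)

# Pretend denoising is perfect (replace z by its mean), then invert algebraically.
z_hat = np.full_like(z, z.mean())
f_hat = (z_hat / 2.0) ** 2 - 3.0 / 8.0

# At these count levels the estimate's total intensity is substantially
# smaller than the true photon flux (roughly 0.04 vs. 0.06 per pixel here).
print(f.sum(), f_hat.sum())
```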

To address these challenges, the PI has developed data-adaptive binning and smoothing strategies based on a novel multiresolution analysis framework for photon density estimation. This data-adaptive partitioning gives our estimators the ability to vary the resolution spatially, automatically increasing the smoothing in very regular regions of the image while preserving detailed structure in less regular regions. In (Willett and Nowak, 2003, 2007), we demonstrated that fast and practical tree-based methods effectively address both model accuracy and computational efficiency challenges, nearly achieve theoretical lower bounds on the best possible error convergence rates, and perform very well in practice on static images. This approach is demonstrated in Figure 2(e), in which very few photon observations are used to reconstruct an underlying intensity with high accuracy using adaptive binning and constant model fits.
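To illustrate the tree-based idea, here is a minimal sketch (a simplification with an assumed penalty value, not the estimator from the cited papers) of adaptive binning with constant fits: each dyadic square is either fit with its maximum-likelihood constant intensity or split into four quadrants, whichever achieves the higher penalized Poisson log-likelihood:

```python
import numpy as np

def fit(counts, pen):
    """Recursively choose between one constant-intensity fit to a dyadic
    square of Poisson counts and a split into four quadrants, by comparing
    penalized Poisson log-likelihoods. Returns (score, intensity estimate)."""
    n, a = counts.sum(), counts.size
    lam = n / a                                        # MLE constant intensity
    loglik = n * np.log(lam) - n if n > 0 else 0.0     # Poisson log-likelihood (up to constants)
    keep = (loglik - pen, np.full(counts.shape, lam))  # one leaf, one penalty
    if counts.shape[0] == 1:                           # single pixel: cannot split
        return keep
    h = counts.shape[0] // 2
    score, est = 0.0, np.empty(counts.shape)
    for r, c in [(0, 0), (0, h), (h, 0), (h, h)]:
        s, e = fit(counts[r:r + h, c:c + h], pen)
        score += s
        est[r:r + h, c:c + h] = e
    return (score, est) if score > keep[0] else keep

# Hypothetical usage on a piecewise-constant scene: the fitted partition uses
# large cells in flat regions and small cells near the intensity edge.
rng = np.random.default_rng(2)
f = np.zeros((64, 64)); f[16:48, 16:48] = 0.3
_, f_hat = fit(rng.poisson(f), pen=2.0)
```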

In particular, the multiscale methods pioneered by the PI and her collaborators result in significantly improved reconstructions, as displayed in Figure 3(g-h). These estimates exhibit few spurious artifacts, improved estimation in edge regions of the image, and preserved photon flux. This example illustrates the impact of incorporating the Poisson observation model when designing reconstruction algorithms for photon-limited data. If we were to rely upon standard image processing methods, even optimal methods for Gaussian noise settings, the results would have significant artifacts with serious implications for image analysis in defense settings. As displayed in Figure 3, methods designed with Poisson noise in mind are essential for accurate photon-limited image analysis.
[Figure 3: Saturn example — true intensity, Poisson observations, Anscombe-based wavelet estimates, and multiscale estimates.]

Patch-based Poisson PCA for Image Denoising



Sparse Poisson Intensity Reconstruction Algorithms


Theory and Practice


Software available here.



The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
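As a concrete sketch of the core iteration (a simplified instance, not the released SPIRAL code: it fixes the step size rather than using a Barzilai-Borwein choice, and assumes sparsity in the canonical basis rather than a general basis, total variation, or partition-based penalties), each step linearizes the Poisson negative log-likelihood and applies a soft-threshold followed by a nonnegativity projection:

```python
import numpy as np

def spiral_l1(y, A, tau, step=1.0, n_iters=500, eps=1e-10):
    """Sketch: minimize 1'Af - y'log(Af) + tau*||f||_1 subject to f >= 0."""
    f = np.ones(A.shape[1])                       # positive initialization
    ones = np.ones(A.shape[0])
    for _ in range(n_iters):
        grad = A.T @ (ones - y / (A @ f + eps))   # gradient of the Poisson negative log-likelihood
        g = f - step * grad                       # minimizer of the separable quadratic surrogate
        f = np.maximum(g - step * tau, 0.0)       # soft-threshold + nonnegativity projection
    return f

# Hypothetical usage: y ~ Poisson(A f*) for a nonnegative sensing matrix A.
# f_hat = spiral_l1(y, A, tau=0.1)
```

The separable quadratic approximation is what makes each subproblem cheap: the soft-threshold-and-project step decouples across coordinates, so its cost is linear in the number of unknowns.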


Platelets


(This work was done in collaboration with Dr. Robert Nowak.)

Much of my research has been focused on multiresolution methods for signal recovery from noisy observations, an important theoretical problem with a large number of diverse and critical applications. In the past decade, wavelet analysis and related methods have proved to be practical tools with a number of favorable theoretical properties, and many investigators have considered the use of wavelet representations for image denoising, deblurring, and tomographic image reconstruction. The effectiveness and theoretical optimality of these methods, however, are limited by two common phenomena: (1) edges in images, or lower-dimensional manifolds embedded in higher-dimensional observation spaces, and (2) the presence of non-Gaussian noise. Both are particularly prevalent in photon-limited imaging, which arises routinely in applications such as bioimaging and astrophysics. The localization of singularity structures in multi-dimensional data is critical to many information processing tasks, including estimation, compression, and classification. Ignoring edges and singularities can severely hamper performance, but computationally tractable and accurate extraction of sub-manifolds is a challenging open problem. Similarly, ignoring the effects of non-Gaussian noise can be a significant source of error, but more precise noise models can lead to highly non-linear computational problems. Both model accuracy and computational efficiency must be considered in the development of effective, practical information processing tools. In collaboration with Prof. Robert Nowak, I demonstrated that fast and practical platelet methods effectively address both of these problems, can nearly achieve theoretical lower bounds on the best possible error convergence rates, and perform very well in practice.

Platelets are localized basis functions at various locations, scales, and orientations that can produce highly accurate, piecewise-linear approximations to images consisting of smooth regions separated by smooth boundaries. Moreover, platelet representations are especially well-suited to the analysis of Poisson data, unlike most other multiscale image representations, and they can be rapidly computed. I have shown that platelet-based estimators nearly achieve theoretical lower bounds on the minimax error convergence rate and can significantly outperform conventional wavelet and wedgelet approximations.
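As a toy illustration of the approximation idea (a least-squares stand-in; the platelet estimators in the cited work fit such surfaces under the Poisson likelihood on the cells of an adapted partition), the sketch below fits a single affine surface a + b*row + c*col to an image block, capturing a smooth ramp that a constant fit could only approximate with many small cells:

```python
import numpy as np

def plane_fit(block):
    """Least-squares fit of an affine surface a + b*row + c*col to a 2-D block."""
    rows, cols = np.indices(block.shape)
    X = np.column_stack([np.ones(block.size), rows.ravel(), cols.ravel()])
    coeffs, *_ = np.linalg.lstsq(X, block.ravel(), rcond=None)
    return (X @ coeffs).reshape(block.shape)

# A smooth intensity ramp is represented exactly by one platelet-like surface.
ramp = np.fromfunction(lambda r, c: 0.1 + 0.002 * r + 0.001 * c, (32, 32))
print(np.abs(plane_fit(ramp) - ramp).max())   # ~0, up to numerical error
```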




Spectral Imaging



Hyperspectral imaging is the process of capturing the same scene at many different wavelengths, yielding a hypercube with two spatial dimensions and one spectral dimension. In simple terms, there is a spectrum associated with every spatial location, as illustrated in Figure 1. This spectrum conveys important information about the physics of the underlying system being studied. Hyperspectral imaging is extensively used in astronomical imaging, fluorescence microscopy, and remote sensing.

[Figure 1: a spectrum is associated with every spatial location in the hypercube.]
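Concretely, the hypercube is just a three-dimensional array indexed by two spatial coordinates and one spectral coordinate; a hypothetical example (the shape and indices are made up for illustration):

```python
import numpy as np

cube = np.zeros((128, 128, 64))   # (rows, cols, spectral bands)
spectrum = cube[40, 75, :]        # the spectrum at spatial location (40, 75)
band_image = cube[:, :, 38]       # the spatial image at spectral band 38
```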

A central task for the signal processing engineer is to extract useful information from the measurements made at the detector. Often, the measurements are corrupted by noise and by the limitations of the measuring device. In addition, the number of photons emitted by the objects of interest may be very low, which severely limits the accuracy of the reconstructed hypercube. In such situations, the imaging process is said to be photon-limited. We are interested in such photon-limited hyperspectral imaging applications.

In particular, we are investigating hyperspectral imaging in astronomical applications. The key challenge in such applications is that the spectrum has rich structure and wide dynamic range (the photon counts in different spectral bands vary drastically), as shown in Figures 2d, 2e, and 2f. This makes the reconstruction problem particularly challenging. We address these challenges by using a multiscale framework in conjunction with maximum penalized likelihood estimation. The proposed approach outperforms the widely used Richardson-Lucy algorithm, as is evident from the results shown in Figures 2g through 2l.
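For reference, the Richardson-Lucy baseline is the multiplicative EM iteration for Poisson data. A minimal sketch follows (written for a generic nonnegative distortion operator A, which is an assumption of this sketch, with regularization coming only from early stopping):

```python
import numpy as np

def richardson_lucy(y, A, n_iters=50, eps=1e-12):
    """EM (Richardson-Lucy) iteration for y ~ Poisson(A f), f >= 0."""
    f = np.full(A.shape[1], max(y.mean(), eps))   # flat nonnegative start
    norm = A.sum(axis=0)                          # A^T 1, the sensitivity image
    for _ in range(n_iters):
        ratio = y / (A @ f + eps)                 # observed / predicted counts
        f = f * (A.T @ ratio) / (norm + eps)      # multiplicative update keeps f >= 0
    return f
```

Unregularized Richardson-Lucy relies on early stopping to suppress noise amplification, whereas the proposed approach makes the smoothing explicit through multiscale penalized likelihood estimation.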


Figure 2: Experiments with simulated data. (a-c) True intensity at spectral bands 6, 38, and 53. (d-f) Noisy and distorted observations at spectral bands 6, 38, and 53. (g-i) Reconstructions of spectral bands 6, 38, and 53 using the Richardson-Lucy algorithm. (j-l) Reconstructions of spectral bands 6, 38, and 53 using the proposed approach.