## Abstract

A new imaging technique that combines compressive sensing and super-resolution is presented. Compressive sensing is accomplished by optically capturing a set of Radon projections. Super-resolution measurements are taken simply by introducing a slanted two-dimensional sensor array into the optical system. The goal of the technique is to overcome the resolution limitation that arises in imaging scenarios where dense sensor arrays with a large number of pixels are unavailable or cannot be used. With the presented imaging technique, owing to the compressive sensing approach, we were able to reconstruct images with significantly more pixels than measured, and owing to the super-resolution design we were able to achieve resolution significantly beyond that limited by the sensor's pixel size.

© 2013 Optical Society of America

## 1. Introduction

High quality imaging requires imagers with large space-bandwidth products. For digital imagers this implies that the sensor array needs (a) a sufficiently large number of pixels, and (b) to be dense enough that the pixel size is sufficiently small to provide high resolution. There are imaging conditions and scenarios in which these two requirements are difficult to fulfill together. This typically occurs in imaging applications outside the visible regime, where the number of pixels is cost-limited and their size cannot be arbitrarily reduced. Even in the visible spectrum, despite advanced sensor manufacturing capabilities, there are applications that require relatively large pixels, such as imaging under low illumination conditions. In this work we present an image acquisition method designed to overcome these practical physical limitations. The presented method and system combine compressive sensing (CS) and super-resolution (SR) techniques, where CS is used to reduce the total number of measurements and SR is used to overcome the pixel size limitation.

The recently introduced CS framework [1, 2] allows sampling and reconstruction of signals from a small number of measurements, significantly lower than that required by the Shannon/Nyquist sampling theory. Due to its appealing advantages, researchers have pursued many applications of CS for optical imaging and spectral imaging (see, for example, [3–9]). The core idea behind CS is that for certain types of signals, a small number of linear non-adaptive measurements carries sufficient information for a good approximation of the signal. The class of signals to which CS is applicable is that of signals having sparse mathematical representations. Fortunately, most natural images are essentially sparse, or at least highly compressible, in some mathematical representation domain. In CS the acquisition process is accomplished by taking an appropriate set of linear measurements. Contrary to conventional imaging, which seeks to map each object point onto a (preferably) single image point, optical CS systems are designed to project multiple image points with different weights onto each single sensor pixel. A brief overview of CS is provided in subsection 2.1. In this work we use a compressive imaging (CI) method based on that presented in [4, 6]. This method is briefly described in subsection 2.3.

Super-resolution [10–13] refers to a class of techniques that attempt to reconstruct a single high resolution (HR) scene image from a set of observed lower resolution (LR) images. The most common SR approach considers the case in which the LR images are obtained by downsampling subpixel-shifted HR images. A brief introduction to SR is given in subsection 2.2.

In this work we generalize the CI method published in [4, 6] to perform geometrical SR, that is, to obtain resolution below the sensor limit in addition to the optical compression. Following the approach in [4], the acquired measurements are a set of Radon projections (RP), which can be obtained optically using an anamorphic optical setup. Such a system is figuratively described in Fig. 1(a). Each pixel in the line array of sensors *S* integrates the intensities over a line in the object plane, so that the entire sensor captures an RP of the scene. Compressive imaging is implemented in [4] by rotating the cylindrical lens together with the line array of sensors around the optical axis. During the rotational scanning, multiple projections are taken and the image is reconstructed with an appropriate non-linear reconstruction algorithm. In [6] we demonstrated Mpixel size reconstruction with a compression ratio of up to $\times 20$ using this approach. Here we extend this method to perform SR by utilizing the fact that the point spread function (PSF) of the anamorphic setup is a line [6, 14], as shown in Fig. 1(a). A straightforward way to extend the technique to perform geometrical SR is to replace the single line array of sensors in Fig. 1(a) with multiple staggered line arrays of sensors (Fig. 1(b)). This way, sub-pixel shifted data can be captured, which can then be exploited with conventional SR techniques designed for multichannel measurements with subpixel shifts [10]. The same type of measurements can be obtained more practically by placing a regular two-dimensional (2D) array that is slightly rotated around the optical axis with respect to the cylindrical lens coordinates, as shown in Fig. 1(c). As is evident from Fig. 1(c), the PSF crosses the columns of the 2D sensor array at sub-pixel shifts; therefore the columns of the slanted 2D array capture similar information to the staggered sensor. Hence, SR numerical reconstruction algorithms can be applied to the set of columns of the slanted sensor array in order to reconstruct a high resolution RP.

One of the main motivations for using our SR compressive imaging (SRCI) system is the simultaneous utilization of the advantages of SR systems and of CS systems. The CS part of the system allows image restoration from only a small total number of samples. The SR part of the system allows resolution improvement beyond that limited by the detector size. Besides the CS and SR benefits, the method is relatively easy to implement, since a complicated 2D scanning mechanism is avoided. In Sec. 5 we present a detailed list of the advantages of this SRCI system.

The paper is organized as follows. In section 2, we give a brief background on CS and SR, and we provide a description of the optical RP acquisition system. In section 3, we describe the SRCI method. In section 4, we show results obtained with simulated and experimental imaging system. In section 5, we conclude and list the advantages of the technique.

## 2. Background

#### 2.1 Compressive sensing

Compressive sensing is a relatively new sampling approach that allows reconstruction of signals from only a few measurements [1, 2]. Compressive sensing relies on the assumption that the acquired signal has some mathematical representation in which it is sparse. In order to avoid an obvious loss of information, the acquisition process must occur in some encoded space. The CS process can be described by:

$$g=\Phi f+\epsilon =\Phi \Psi \alpha +\epsilon ,\tag{1}$$

where $f\in {\Re}^{N}$ is the signal that we want to measure, $g\in {\Re}^{M}$ is the captured signal, $\Phi \in {\Re}^{M\times N}$ represents the sampling operator, $\alpha \in {\Re}^{N}$ is a sparse representation of the signal obtained by $\alpha ={\Psi}^{-1}f$, where $\Psi \in {\Re}^{N\times N}$ is the inverse of the sparsifying transform operator, and $\epsilon \in {\Re}^{M}$ is additive noise. In the CS framework, the number of samples $M$ is smaller than $N$; therefore the system sampling matrix $\Phi $ is ill conditioned and not invertible. Hence, in order to estimate the signal $f$ we must apply appropriate inverse problem techniques, which take into account the sensing model and the sparsity assumption.

Compressive imaging (CI) is a natural application of CS in the field of optical imaging. CI has been applied to the acquisition of 2D images, videos, holograms [7] and hyperspectral images [8]. Here we focus on CI of 2D images. In this case $f$ is an image of size $n\times n$, which, in order to comply with Eq. (1), is lexicographically reordered as a column vector of size $N\times 1$, where $N={n}^{2}$. Most natural images have a sparse representation that may be achieved by some sparsifying transform ${\Psi}^{-1}$ (such as the discrete Fourier transform, discrete cosine transform, Hadamard transform or discrete wavelet transform).
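The measurement model of Eq. (1) can be sketched numerically. In the following minimal sketch the Gaussian sampling matrix and the identity sparsifying basis are illustrative assumptions for brevity; the optical system in this paper realizes $\Phi$ with Radon projections instead.

```python
import numpy as np

# Sketch of the CS measurement model of Eq. (1): g = Phi f + eps.
# Gaussian Phi and identity Psi are assumptions made for illustration.
rng = np.random.default_rng(0)

N, M, k = 256, 64, 5                 # signal length, measurements (M < N), sparsity
alpha = np.zeros(N)                  # sparse representation of the signal
alpha[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

Psi = np.eye(N)                      # sparsifying basis (identity for brevity)
f = Psi @ alpha                      # the signal to be measured
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # sampling operator
eps = 0.01 * rng.standard_normal(M)  # additive noise

g = Phi @ f + eps                    # the M << N captured samples
print(g.shape)
```

Note that $g$ has only $M = 64$ entries while $f$ has $N = 256$; recovery therefore hinges on the sparsity of $\alpha$.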

The reconstruction step is based on a search for the sparsest coefficients vector $\alpha $. A common way is to solve the following minimization problem:

$${\alpha}_{est}=\underset{\alpha}{\mathrm{arg}\mathrm{min}}\left\{{\Vert g-\Phi \Psi \alpha \Vert}_{2}^{2}+\lambda {\Vert \alpha \Vert}_{1}\right\}.\tag{2}$$

#### 2.2 Super-resolution

Super-resolution refers to a class of techniques that allow reconstruction of an HR signal from a set of LR signals, typically representing different views of the same scene [9–13, 15–17]. The relation between the HR signal and the LR signals can be described through the following multichannel model:

$${z}_{j}={T}_{j}g,\text{\hspace{1em}}j=1,\dots ,b,\tag{3}$$

where $g\in {\Re}^{M}$ is the HR signal and ${T}_{j}\in {\Re}^{{S}_{j}\times M}$ is a matrix that describes the relation between the HR signal and the ${j}^{th}$ LR signal, in general representing geometrical transformations (such as displacements, rotations, warping), downsampling and blurring. The vector ${z}_{j}\in {\Re}^{{S}_{j}}$ is the measured LR signal, and $b$ is the number of different LR signals. We shall denote by $S$ the total number of measurements, that is, $S={\displaystyle {\sum}_{j=1}^{j=b}{S}_{j}}$, where ${S}_{j}$ is the number of measurements in the ${j}^{th}$ channel.

In order to comply with the notation in Eq. (1), we rewrite Eq. (3) in matrix-vector form:

$$z=Tg,\text{\hspace{1em}}z={\left[{z}_{1}^{T}\cdots {z}_{b}^{T}\right]}^{T},\text{\hspace{1em}}T={\left[{T}_{1}^{T}\cdots {T}_{b}^{T}\right]}^{T}.\tag{4}$$

Most common SR acquisition schemes are based on subpixel translations; that is, the set of LR images ${\left\{{z}_{j}\right\}}_{j=1}^{j=b}$ is captured by generating sub-pixel shifts between the image captured by the camera and the object. This involves inconvenient 2D mechanical scanning. In our work we achieve the same effect by using rotational scanning. Our acquisition system is described in section 3.
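The multichannel model ${z}_{j}={T}_{j}g$ can be sketched in one dimension. In the sketch below, each ${T}_{j}$ shifts the HR signal by $j$ HR samples and then averages every $b$ consecutive samples into one LR sample; this averaging form of ${T}_{j}$ is an assumption made for illustration, not the exact operator of this paper.

```python
import numpy as np

# Illustrative sketch of the multichannel SR model z_j = T_j g: each LR
# channel is a subpixel-shifted, b-fold downsampled view of the HR signal.
def T_j(M, b, j):
    S = M // b                       # LR samples per channel
    T = np.zeros((S, M))
    for i in range(S):
        start = i * b + j            # shift channel j by j HR samples
        T[i, start:min(start + b, M)] = 1.0 / b   # average b HR samples
    return T

M, b = 16, 4
g_hr = np.arange(M, dtype=float)     # toy HR signal
z = [T_j(M, b, j) @ g_hr for j in range(b)]
print([zj.shape for zj in z])        # b LR channels of length M // b
```

Because the $b$ channels sample the HR grid at different sub-pixel phases, together they retain information that any single channel loses to downsampling.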

#### 2.3 The anamorphic optical setup

Figure 2 details the optical setup shown in Fig. 1. A similar optical setup was used in [4, 6, 14]. The core of the system is a cylindrical lens, which, in conjunction with the spherical lens, has power in the $y\text{'}$ direction. Here we added the sensor coordinate system $\left(x\text{'}\text{'},y\text{'}\text{'}\right)$, which, unlike in [4, 6, 14], is rotated relative to the cylindrical lens coordinates $\left(x\text{'},y\text{'}\right)$. In Fig. 2 the measurement plane is placed at the imaging plane of the spherical lens, and the cylindrical lens can be placed at some other plane, so as to ensure homogeneity of the data in the spreading direction.

In order to collect a sufficient amount of data for imaging we need to acquire multiple RP. For this we must simultaneously rotate the sensor together with the cylindrical lens in the same direction around the optical axis by some angle ${\theta}_{i}$ (see Fig. 1). An alternative way to achieve multiple RP is to rotate only the object. A third option is to place an image rotating component between the cylindrical lens and the object (see [6], for example). In our experiments (sec. 4) we used the third option.

If we denote by ${N}_{p}$ the number of projections, the measured signal $g$ in Eq. (1) is written as $g={[{g}_{{\theta}_{1}}^{T}\cdots {g}_{{\theta}_{i}}^{T}\cdots {g}_{{\theta}_{{N}_{p}}}^{T}]}^{T},$ where ${g}_{{\theta}_{i}}$ represents the projection at angle ${\theta}_{i}$. The overall system matrix is $\Phi ={[{\Phi}_{{\theta}_{1}}^{T}\text{\hspace{1em}}{\Phi}_{{\theta}_{2}}^{T}\cdots {\Phi}_{{\theta}_{{N}_{p}}}^{T}]}^{T},$ where ${\Phi}_{{\theta}_{i}}$ is the matrix describing the ${i}^{th}$ projection.

Every row of the matrix $\Phi $ represents a vectorized image that we can treat as a map indicating the locations and weights of the image pixels that contribute to a detector. More details about the structure of the matrix $\Phi $ can be found in [18, 19].

## 3. Super-resolution system based on compressive imaging with optical Radon projections

#### 3.1 Optical system description

As explained in sec. 1, the main difference between the optical implementation here and that in the previous works [4, 6] is the replacement of the line array of sensors $\text{S}$ with a 2D array of sensors (e.g., CCD/CMOS). The proper replacement of the 1D sensor in Fig. 1(a) with the 2D sensor in Fig. 1(c) makes it possible to take advantage of SR techniques, owing to the fact that the PSF is captured with subpixel shifts (see Fig. 1(c)).

The relation between the 2D LR grid representing the data measured by the CCD and the 1D HR grid of the reconstructed projection ${g}_{{\theta}_{i}}$ is described in Fig. 3. We shall use the following definitions to describe the system and its model. The parameter $n$ denotes the number of pixels in a single column of the CCD, *b* is the number of CCD columns that we want to use (it can represent all the CCD columns or a specific set of columns), and *β* is the rotation angle between the CCD coordinates $\left(x\text{'}\text{'},y\text{'}\text{'}\right)$ and the cylindrical lens coordinates $\left(x\text{'},y\text{'}\right)$. We denote by ${z}_{j,{\theta}_{i}}$ the ${j}^{th}$ column of the CCD taken at projection angle ${\theta}_{i}$.

Our aim is to reconstruct an HR 1D projection ${g}_{{\theta}_{i}}$ (measuring the RP at angle ${\theta}_{i}$) from *b* columns of an LR projection. Let us start with Fig. 3(a), where we show the PSF at a particular RP angle, as it is captured by the camera's LR grid. On this image we add a virtual HR 1D grid that defines the reconstructed signal's resolution. In Fig. 3, for example, the HR grid density is 4 times larger than that of the LR sensor array. In the case that *β* = 0 (Fig. 3(a)), the PSF is in the direction of the cylindrical lens axis $x\text{'}$, which coincides with the direction of the CCD horizontal axis $x\text{'}\text{'}$. In this case each of the detector columns ${\left\{{z}_{j,{\theta}_{i}}\right\}}_{j=1}^{j=b}$ contains the same data as its neighbouring column, as shown in the right-hand part of Fig. 3(a). In Fig. 3(b) we illustrate the case where the cylindrical lens axis ($x\text{'}$) is rotated counterclockwise with respect to axis $x\text{'}\text{'}$ by an angle *β*. Since the PSF (in the direction of axis $x\text{'}$) is no longer along the sensor's $x\text{'}\text{'}$ axis, each column captures slightly shifted data, as seen in the columns $[{z}_{1,{\theta}_{i}}\text{\hspace{0.17em}}{z}_{2,{\theta}_{i}}\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}{z}_{b,{\theta}_{i}}]$ depicted in the right-hand part of Fig. 3(b).

The optimal alignment between the HR and LR grids is the one in which the HR information is uniformly spread along the LR columns. This is achieved if the angle *β* is chosen such that the PSF undergoes exactly one vertical LR pixel shift while crossing the CCD sensor horizontally, as shown in Fig. 3(b). For square LR pixels, the angle *β* fulfilling this requirement is:

$$\beta ={\mathrm{tan}}^{-1}\left(1/b\right).\tag{5}$$

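The one-LR-pixel-shift condition amounts to a slant of $\mathrm{tan}\beta =1/b$ for square pixels; this is our reading of Eq. (5). A quick numerical check with the $b=4$ columns used in this paper reproduces the $\beta =14^{\circ}$ quoted in the experiments:

```python
import math

# One vertical LR-pixel shift over the b horizontal LR pixels gives
# tan(beta) = 1/b for square pixels (our reading of Eq. (5)).
def slant_angle_deg(b):
    return math.degrees(math.atan(1.0 / b))

print(round(slant_angle_deg(4), 2))   # ~14 degrees for b = 4
```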
#### 3.2 Mathematical model

The overall acquisition with our SRCI is obtained by combining the geometrical transformation and downsampling process described by the operator $T$ in Eq. (4) with the CS acquisition model Eq. (1):

$$z=T\Phi f+\epsilon =T\Phi \Psi \alpha +\epsilon ,\tag{6}$$

where $z\in {\Re}^{S}$ is the concatenation of all LR images. We assume that our goal is resolution improvement by a factor $b$; therefore $\beta $ is chosen according to Eq. (5). The matrix $T$ in Eq. (6) is given by:

$$T={I}_{beams}\otimes {T}_{\theta},\tag{7}$$

where ${I}_{beams}$ denotes the identity matrix of size ${N}_{p}\times {N}_{p}$, the sign $\otimes $ denotes the Kronecker product (see Appendix A for the explicit expression) and ${T}_{\theta}$ is the operator which relates one HR projection ${g}_{{\theta}_{i}}$ to $\left[{z}_{1,{\theta}_{i}}^{T}\cdots {z}_{b,{\theta}_{i}}^{T}\right]$ (see Fig. 4). The structures of $T$ and ${T}_{\theta}$ are depicted in Figs. 5(a) and 5(b), respectively. The matrix ${T}_{\theta}$ is given by ${T}_{\theta}={\left[{\overline{T}}_{1}^{T}\cdots \text{\hspace{1em}}{\overline{T}}_{j}^{T}\cdots \text{\hspace{1em}}{\overline{T}}_{b}^{T}\right]}^{T},$ where ${\overline{T}}_{j}$ is the operator which relates the high resolution projection ${g}_{{\theta}_{i}}$ to the ${j}^{th}$ column of the CCD pixels ${z}_{j,{\theta}_{i}}$. The explicit expression of the matrix ${\overline{T}}_{j}$ is derived in Appendix A. Figure 4 depicts in more detail than Fig. 3(b) the relation between an HR projection ${g}_{{\theta}_{i}}$ and $\left[{z}_{1,{\theta}_{i}}^{T}\cdots {z}_{b,{\theta}_{i}}^{T}\right]$ for any projection angle ${\theta}_{i}$ and $\beta =14^{\circ}$ according to Eq. (5). We will find this figure particularly useful for the derivation of $T$ in Appendix A.

In Figs. 5(a)-5(c) we show the structure of the overall SR matrix $T$, the entries of the single-projection dimension reduction matrix ${T}_{\theta}$, and the entries of the matrix ${\overline{T}}_{1}$ relating ${g}_{{\theta}_{i}}$ to ${z}_{1,{\theta}_{i}}$, in the case where $b=4$, $n=6$ and ${N}_{p}=5$. We can see that all these matrices are sparse.
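The block structure of $T$ can be sketched with sparse matrices, using the same $b=4$, $n=6$, ${N}_{p}=5$ as in Fig. 5. Here ${T}_{\theta}$ is only a random sparse placeholder (its actual entries are derived in Appendix A); the sketch shows the Kronecker-product assembly and the resulting dimensions:

```python
from scipy.sparse import identity, kron, random as sparse_random

# Sketch of Eq. (7): T is block diagonal, with one copy of T_theta per
# projection angle. T_theta is a placeholder sparse matrix here.
Np, b, n = 5, 4, 6                        # values used in Fig. 5
n_tilde = b * (n + 1)                     # HR projection length (Appendix A)
T_theta = sparse_random(b * n, n_tilde, density=0.1, random_state=0)

T = kron(identity(Np), T_theta)           # I_beams (Np x Np) Kronecker T_theta
print(T.shape)                            # (Np * b * n, Np * n_tilde)
```

Keeping $T$ in sparse form is what makes the reconstruction of Mpixel-scale images tractable in practice.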

## 4. Results

#### 4.1 Simulation results

In order to evaluate our system quantitatively we first performed a simulated experiment. In our simulation we compare the SRCI system presented here with the CI system presented in [4]. We assume that the SRCI has a pixel array of size $120\times 4$. For the system in [4] we consider two cases: in the first case we assume that a line array of $120\times 1$ sensors is used, and in the second case a line array of $480\times 1$ pixels is used. In the first case it is assumed that the same pixel size as with the SRCI is used, whereas in the second case it is assumed that the pixel size can be reduced by a factor of 4 (in each dimension) to obtain a denser array having the same number of pixels as the SRCI. With both systems we assume the same number ${N}_{p}$ of exposures are taken.

The test image of size $480\times 480$ is shown in Fig. 6(a). It consists of a sequence of gradually increasing squares, where the smallest square has dimensions of $1\times 1$ pixels and the largest square is of size $24\times 24$ pixels. The gap between every two adjacent squares is 8 pixels. The forward process that describes the transformation from the HR image to the LR projections consists of the following steps. The first step is the lexicographical vectorization of the image matrix. The second step is the multiplication of $f$ by the Radon matrix ${\Phi}_{{\theta}_{i}}$ to obtain the column vector ${g}_{{\theta}_{i}}$ (Eq. (1)). In the third step we perform a duplication of ${g}_{{\theta}_{i}}$; the matrix obtained as a result is equivalent to an image that would be captured by the two-dimensional array of detectors shown in Fig. 3(a). The fourth step is the counterclockwise rotation of the obtained matrix around its center by the angle $\beta $, as in Fig. 3(b). The fifth step is downsampling each dimension by a factor $b=4$. The sixth step is the repetition of the previous steps for all projection angles ${\theta}_{i}$, $i=1,\cdots ,{N}_{p}$. All these steps implement Eq. (6). With this process we simulated LR projections ${\left\{{z}_{j,{\theta}_{i}}\right\}}_{j=1}^{j=b}$, where $b=4$, for ${N}_{p}=90$ RP angles. In our simulation, each LR projection has size $n=120$.
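The six-step forward process above can be sketched as follows. This is a hedged approximation on a small toy image: the optical Radon projection of step 2 is emulated by rotating the scene and integrating along rows rather than by an explicit Radon matrix, and the column cropping is an assumption made to keep $b$ central columns:

```python
import numpy as np
from scipy.ndimage import rotate

def simulate_lr_projections(image, angles_deg, b=4, beta_deg=14.0):
    """Approximate sketch of the six-step forward process described above."""
    projections = []
    for theta in angles_deg:
        # steps 1-2: Radon projection g_theta (rotate scene, integrate rows)
        g_theta = rotate(image, theta, reshape=False, order=1).sum(axis=1)
        # step 3: duplicate g_theta into a 2D array (ideal sensor, Fig. 3(a))
        dup = np.tile(g_theta[:, None], (1, image.shape[1]))
        # step 4: rotate by beta (slant between sensor and lens coordinates)
        slanted = rotate(dup, beta_deg, reshape=False, order=1)
        # step 5: downsample both dimensions by b, keeping b central columns
        lr = slanted[::b, ::b]
        c = lr.shape[1] // 2
        projections.append(lr[:, c - b // 2 : c + b // 2])
    # step 6: repeat for all projection angles (the loop above)
    return projections

img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0                       # toy scene: one bright square
z_sim = simulate_lr_projections(img, angles_deg=range(0, 90, 10))
print(len(z_sim), z_sim[0].shape)             # one (n x b) LR block per angle
```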

For the image reconstruction we used the Two-Step Iterative Shrinkage/Thresholding (TwIST) algorithm [20], which solves:

$${\alpha}_{est}=\underset{\alpha}{\mathrm{arg}\mathrm{min}}\left\{\frac{1}{2}{\Vert z-T\Phi \Psi \alpha \Vert}_{2}^{2}+\lambda {\Vert \alpha \Vert}_{1}\right\},\tag{8}$$

where the regularization parameter $\lambda $ sets the balance between fidelity to the measurements $z$ and the sparsity term ${\Vert \alpha \Vert}_{1}$ that is needed for restoration of SR images. First, we reconstruct the sparse representation of the signal ${\alpha}_{est}$ in the wavelet domain, and then we reconstruct $f$ by applying the inverse wavelet transform. We achieved the best results for $\lambda =0.1$.
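The structure of such a solver can be illustrated with plain ISTA, of which TwIST [20] is a two-step accelerated variant; the sketch below is a stand-in, not the TwIST implementation used in the paper, and the toy Gaussian system matrix is an assumption:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, z, lam=0.1, n_iter=300):
    """Minimize 0.5 * ||z - A alpha||_2^2 + lam * ||alpha||_1 by ISTA,
    where A plays the role of T Phi Psi in the objective above."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    alpha = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the fidelity term, then shrinkage on alpha
        alpha = soft(alpha + step * A.T @ (z - A @ alpha), lam * step)
    return alpha

# toy check: recover a 3-sparse vector from 40 noiseless measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
alpha_true = np.zeros(100)
alpha_true[[5, 30, 70]] = [3.0, -2.0, 4.0]
alpha_est = ista(A, A @ alpha_true, lam=0.05)
print(np.argsort(-np.abs(alpha_est))[:3])       # indices of largest entries
```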

Figure 6(b) shows the image reconstructed from ${N}_{p}=90$ HR projections of size $480\times 1$. This image represents the best result to be expected from SRCI, since in this case we assumed that HR pixels are directly captured. In Fig. 6(c) we show the image restored in the case that LR projections of size $120\times 1$ are captured. This image represents the result to be expected without SR. Note that squares of sizes $1\times 1$ to $3\times 3$ HR pixels are unresolved. This is expected, since the captured LR pixels are 4 times larger than the HR pixels in the original image (Fig. 6(a)). Figure 6(d) represents the image restored by implementing the SRCI method from 90 Radon projections captured with four columns, each having 120 LR pixels.

From a comparison between Figs. 6(c) and 6(d) we can see a significant improvement in resolution, clarity and visual quality of the image. Note that, owing to the SR, the squares at the top of the image are clearly resolved despite being smaller ($1\times 1$ to $3\times 3$ HR pixels) than the sensor's LR pixel (of size $4\times 4$ HR pixels). From a system point of view, the simulation results demonstrate the ability to cover a field-of-view of $480\times 480$ pixels with a sensor having only $120\times 4$ pixels (i.e., about 500 times fewer pixel sensors than pixels in the image). The reconstructed image having 230400 ( = 480 × 480) pixels was obtained from 43200 samples ( = 120 × 4 × 90), hence a compression ratio of × 5.33 is demonstrated. This implies that with the SRCI system about × 5.33 less data needs to be stored and transmitted, and a shorter acquisition time is required compared to any conventional scanning system using the same sensor array.
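The quoted figures follow directly from the simulation parameters (480 × 480 image, 120 × 4 sensor, 90 projection angles), as this quick arithmetic check shows:

```python
# Sanity check of the sampling figures quoted above.
image_pixels = 480 * 480              # pixels in the reconstructed image
sensor_pixels = 120 * 4               # pixels in the LR sensor array
samples = sensor_pixels * 90          # total measurements over all angles

print(image_pixels, samples)                 # 230400 43200
print(round(image_pixels / samples, 2))      # 5.33 compression ratio
print(image_pixels // sensor_pixels)         # 480 (~500x fewer sensors)
```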

#### 4.2 Real experiment results

In the real experiment we used the system described in [6] with a slanted CCD sensor with $\beta =14^{\circ}$. We chose a toy as the object. For image reconstruction, as in the simulated experiments, we used the TwIST algorithm. In Fig. 7(a) we show an LR image of size $120\times 120$ pixels restored from 90 projections of size $120\times 1$ pixels, by implementing CS alone (as in Fig. 6(c)). In Fig. 7(b) we show an image of size $480\times 480$ pixels restored from 90 SR projections of size $120\times 4$. From a comparison between Figs. 7(a) and 7(b) we can see a significant improvement of the edges; we can see in detail the forelock, eyes and teeth of the toy.

In Fig. 8 we show an image of size $480\times 480$ restored by implementing SRCI with only half of the projections used for the reconstruction of Fig. 7(b). Obviously the reconstruction is poorer than that in Fig. 7(b), but despite the fact that only half the number of measurements were taken, the quality of the restored image is higher than that obtained without SR (Fig. 7(a)).

## 5. Conclusion

In this work we have presented a new imaging technique that combines CS with SR techniques; CS is used to reduce the total number of measurements and SR is used to overcome the pixel size limitation. With this technique, high resolution images can be reconstructed from a set of LR Radon projections. The presented technique was demonstrated by numerical simulation and by a real experiment implementing the proposed optical system. The numerical results have demonstrated resolution improvement by a factor of 16 (i.e., a factor of four in each dimension). Hence, owing to the SR approach we were able to reconstruct details four times finer than the sensor's pixel dimension. We have designed an acquisition system that uses approximately 500 times fewer pixel sensors than the image size, and uses a scanning process that requires only approximately $18.7\%$ of the scanning time of a conventional push-broom system using the same number of sensors.

The advantages of the method are summarized in the following. (a) Geometrical SR is obtained; that is, the reconstructed image has a resolution beyond the one dictated by the sensor pixel size. (b) SR is obtained with rotational scanning rather than with the 2D raster scanning typically used in conventional SR techniques. Rotational motion is preferable in terms of mechanical design, as it is smooth, periodic and one directional (angular rather than horizontal and vertical). (c) Owing to the CI approach, fewer samples are taken. (d) Compared to conventional imaging systems performing linear scanning, the acquisition time is shorter by the same factor as the compression factor. The SR improvement does not come at the expense of scanning time compared to the previous CI technique in [6]. (e) Compared to the previous CS technique in [6], the only additional hardware requirement is the replacement of the line array of sensors with a conventional 2D sensor array. Since the number of columns of a commercial sensor array is typically much larger than the desired SR factor, the additional columns can be used for improving the SNR. This can be achieved, for example, by binning multiple rows together, thus generating virtual macropixels with a larger area. (f) The method can be readily applied with the golden angle sampling scheme proposed in [6] to achieve progressive CI, with which the reconstruction quality of the image gradually improves.

We wish to emphasize that, although we have demonstrated our technique with an experiment in the visible spectral regime, its strength is mainly for imaging outside the visible spectrum (UV, IR, THz, etc.). In addition, our technique can also be useful in the visible spectral regime where the sensor pixel size needs to be kept large, e.g., for imaging under photon-starved conditions.

## 6. Appendix A

Let us write Eq. (7) explicitly:

$$T={I}_{beams}\otimes {T}_{\theta}=\left[\begin{array}{cccc}{T}_{\theta}& 0& \cdots & 0\\ 0& {T}_{\theta}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {T}_{\theta}\end{array}\right].\tag{9}$$

As mentioned in Sec. 3.2, ${T}_{\theta}={\left[{\overline{T}}_{1}^{T}\cdots \text{\hspace{1em}}{\overline{T}}_{j}^{T}\cdots \text{\hspace{1em}}{\overline{T}}_{b}^{T}\right]}^{T},$ where ${\overline{T}}_{j}$ is a matrix of size ${\Re}^{n\times \tilde{n}}$, with $\tilde{n}=b(n+1)$, relating an LR projection ${z}_{j,{\theta}_{i}}$ to an HR projection ${g}_{{\theta}_{i}}$. The construction of the matrices ${T}_{\theta}$ and ${\overline{T}}_{j}$ is better understood with the help of Fig. 4, where we show the downsampling process. As we can see, the contribution of each HR pixel to one LR sensor is in accordance with their relative overlapping area. It can be seen from Fig. 4 that $b-1$ HR pixels contribute completely to each LR pixel (the overlapping area has the shape of a parallelogram), while 2 HR pixels contribute only partially. We first define the general dimension reduction matrix $\overline{T}\in {\Re}^{n\times bn}$ which (see Fig. 4) has entries ${\overline{t}}_{i,k}$ given by:
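A speculative numerical sketch of this construction, under our reading of Fig. 4 (and not the exact matrix derived here): each LR pixel integrates $b+1$ consecutive HR pixels, with the two boundary HR pixels contributing half weight (the triangular overlaps) and the $b-1$ interior ones contributing full weight (the parallelograms), and the column index $j$ adding a shift of $j$ HR pixels:

```python
import numpy as np

def bar_T(n, b, j=0):
    """Assumed sketch of bar{T}_j: half-weight boundary HR pixels,
    full-weight interior HR pixels, shifted by j HR pixels per column."""
    n_tilde = b * (n + 1)
    T = np.zeros((n, n_tilde))
    w = np.r_[0.5, np.ones(b - 1), 0.5] / b   # normalized overlap weights
    for i in range(n):
        start = i * b + j                      # column-j sub-pixel phase
        T[i, start : start + b + 1] = w
    return T

Tb = bar_T(n=6, b=4, j=1)
print(Tb.shape)          # (n, b * (n + 1))
print(Tb.sum(axis=1))    # each row integrates to unit total weight
```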

## Acknowledgment

The authors wish to thank the Israel Science Foundation (grant No.1039/09) and Harbour foundation for supporting this research.

## References and links

**1. **E. J. Candès, “Compressive sampling,” Proc. Int. Congress of Mathematics 3, 1433–1452, Madrid, Spain (2006). [CrossRef]

**2. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]

**3. **Y. Rivenson and A. Stern, “An efficient method for multi-dimensional compressive imaging,” in *Computational Optical Sensing and Imaging* (Optical Society of America, 2009).

**4. **A. Stern, “Compressed imaging system with linear sensors,” Opt. Lett. **32**(21), 3077–3079 (2007). [CrossRef] [PubMed]

**5. **D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in *Electronic Imaging 2006* (International Society for Optics and Photonics, 2006).

**6. **S. Evladov, O. Levi, and A. Stern, “Progressive compressive imaging from Radon projections,” Opt. Express **20**(4), 4260–4271 (2012). [CrossRef] [PubMed]

**7. **Y. Rivenson, A. Stern, and B. Javidi, “Overview of compressive sensing techniques applied in holography [Invited],” Appl. Opt. **52**(1), A423–A432 (2013). [CrossRef] [PubMed]

**8. **Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. **52**(10), D46–D54 (2013). [CrossRef] [PubMed]

**9. **Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express **18**(14), 15094–15103 (2010). [CrossRef] [PubMed]

**10. **S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process **20**(3), 21–36 (2003). [CrossRef]

**11. **D. Capel and A. Zisserman, “Computer vision applied to super resolution,” IEEE Signal Process **20**(3), 75–86 (2003). [CrossRef]

**12. **Z. Zalevsky and D. Mendlovic, *Optical Superresolution*, Springer (2004).

**13. **H. Greenspan, “Super-resolution in medical imaging,” Comput. J. **52**(1), 43–63 (2008). [CrossRef]

**14. **Y. Kashter, O. Levi, and A. Stern, “Optical compressive change and motion detection,” Appl. Opt. **51**(13), 2491–2496 (2012). [CrossRef] [PubMed]

**15. **S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. **13**(10), 1327–1344 (2004). [CrossRef] [PubMed]

**16. **P. Vandewalle, S. Süsstrunk, and M. Vetterli, “A frequency domain approach to registration of aliased images with application to super-resolution,” EURASIP J. Adv. Signal Process. **2006**, 1–15 (2006). [CrossRef]

**17. **A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, “Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections,” Appl. Opt. **40**(26), 4706–4715 (2001). [CrossRef] [PubMed]

**18. **J. H. Jørgensen, “Knowledge-based tomography algorithms” (Doctoral dissertation, Technical University of Denmark, DTU, DK-2800 Kgs. Lyngby, Denmark, 2009).

**19. **V. Farber, E. Eduard, Y. Rivenson, and A. Stern, “A study of the coherence parameter of the progressive compressive imager based on Radon transform,” in *SPIE Defense, Security, and Sensing* (International Society for Optics and Photonics, 2013).

**20. **J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. **16**(12), 2992–3004 (2007). [CrossRef] [PubMed]