Sparsity-based Color Image Super Resolution
via Exploiting Cross Channel Constraints

MCcSR


ABSTRACT

Sparsity constrained single image super-resolution (SR) has been of much recent interest. A typical approach involves sparsely representing patches in a low-resolution (LR) input image via a dictionary of example LR patches, and then using the coefficients of this representation to generate the high-resolution (HR) output via an analogous HR dictionary. However, most existing sparse representation methods for super-resolution focus on the luminance channel and do not capture interactions between color channels. In this work, we extend sparsity-based super-resolution to multiple color channels by taking color information into account. Edge similarities among the RGB color bands are exploited as cross-channel correlation constraints. These additional constraints lead to a new optimization problem that is not easily solvable; however, a tractable solution is proposed to solve it efficiently. Moreover, to fully exploit the complementary information among color channels, a dictionary learning method is proposed specifically to learn color dictionaries that encourage edge similarities. Merits of the proposed method over the state of the art are demonstrated both visually and quantitatively using image quality metrics.
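To make the pipeline above concrete, the following minimal Python sketch shows the coupled-dictionary step that sparsity-based SR methods share: an LR patch is sparse-coded against the LR dictionary, and the HR patch is synthesized from the HR dictionary using the same coefficients. The names (sr_patch, D_lr, D_hr) and the choice of OMP as the sparse coder are illustrative assumptions, not the exact MCcSR solver; MCcSR additionally couples the codes of the R, G, and B channels through the cross-channel edge constraints described above.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp  # greedy sparse coder (OMP)

    def sr_patch(y_lr, D_lr, D_hr, n_nonzero=3):
        """Reconstruct one HR patch from a vectorized LR patch."""
        # Sparse-code the LR patch against the LR dictionary (atoms as columns) ...
        alpha = orthogonal_mp(D_lr, y_lr, n_nonzero_coefs=n_nonzero)
        # ... then synthesize the HR patch from the coupled HR dictionary,
        # reusing the same sparse coefficients.
        return D_hr @ alpha

In the color setting, one such code is computed per channel, with an additional penalty encouraging the edge maps of the reconstructed R, G, and B patches to agree.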

Source code

Click here to download the source code for our proposed MCcSR Color Super Resolution algorithm.

Super Resolution Results

Competing approaches

Our experiments are performed on the widely used Set5 and Set14 images, as in [1]. We compare the proposed Multi-Channel constrained Super Resolution (MCcSR) method with several well-known single image super-resolution methods. These include ScSR [2], since our MCcSR method can be seen as its multi-channel extension. Other methods for which we report results are the single image scale-up using sparse representation by Zeyde et al. [1], the Anchored Neighborhood Regression for fast example-based super-resolution (ANR) [3] and Global Regression (GR) [4] methods by Timofte et al., and Neighbor Embedding with Locally Linear Embedding (NE+LLE) [5] and Neighbor Embedding with Non-Negative Least Squares (NE+NNLS) [6], both adapted to learned dictionaries.

In our experiments, we magnify the input images by a factor of 2, 3, or 4, which is commonplace in the literature. High-resolution patches are reconstructed from the low-resolution input using the learned high-resolution dictionaries, which are trained over 100,000 patch pairs. The size of the learned dictionary for each channel is 512 in most of our experiments. We perform visual comparisons of the obtained super-resolution images and additionally evaluate them quantitatively using image quality metrics: (1) Peak Signal to Noise Ratio (PSNR), while recognizing its limitations [7]; (2) the widely used Structural Similarity Index (SSIM) [8]; and (3) a popular color-specific quality measure, S-CIELAB [9], which evaluates color fidelity while taking spatial context into account.
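The first two metrics can be computed with scikit-image; this is a minimal sketch assuming 8-bit RGB arrays hr_ref (ground truth) and sr_out (super-resolved output) of equal size, not the exact evaluation script behind the reported numbers.

    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(hr_ref, sr_out):
        # PSNR over all pixels; data_range=255 matches uint8 images
        psnr = peak_signal_noise_ratio(hr_ref, sr_out, data_range=255)
        # Mean SSIM; channel_axis=-1 averages the index over color channels
        ssim = structural_similarity(hr_ref, sr_out, data_range=255, channel_axis=-1)
        return psnr, ssim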

Here we present the high-quality image results first and then provide the quantitative evaluations.

Image Results (Click to enlarge)

The following figures show SR results where resolution enhancement was performed by scaling factors of 2, 3, and 4. For each result, from left to right, the images correspond to the actual low-resolution image (scaled down by a factor of 2, 3, or 4), the ground truth high-resolution image, bicubic interpolation, the method of Zeyde et al. [1], GR [4], ANR [3], NE+NNLS [6], NE+LLE [5], our proposed MCcSR, and ScSR [2]. For each image, the first row shows visual results from the different methods and the second row shows the corresponding S-CIELAB error map for each method. It is apparent that the MCcSR method produces less error around edges and color textures.
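To convey what the error maps visualize, here is a rough Python sketch of an S-CIELAB-style map. True S-CIELAB [9] filters opponent-color channels with kernels derived from human contrast sensitivity; the single Gaussian below is a loose stand-in for that spatial filtering, so this sketch is for intuition only.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.color import rgb2lab

    def scielab_like_map(rgb_ref, rgb_test, sigma=1.5):
        """Per-pixel Delta E in CIELAB after a crude spatial pre-filter.
        Inputs are assumed to be RGB floats in [0, 1]."""
        def blur(im):
            # Blur each channel independently (stand-in for CSF filtering)
            return np.stack([gaussian_filter(im[..., c], sigma) for c in range(3)], axis=-1)
        diff = rgb2lab(blur(rgb_ref)) - rgb2lab(blur(rgb_test))
        return np.sqrt((diff ** 2).sum(axis=-1))  # Delta E error map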

Magnification factor x2

[Image gallery, panels left to right: Low-resolution input | Ground truth | Bicubic | Zeyde et al. [1] | GR [4] | ANR [3] | NE+NNLS [6] | NE+LLE [5] | MCcSR (proposed) | ScSR [2]]

Magnification factor x3

[Image gallery, panels left to right: Low-resolution input | Ground truth | Bicubic | Zeyde et al. [1] | GR [4] | ANR [3] | NE+NNLS [6] | NE+LLE [5] | MCcSR (proposed) | ScSR [2]]

Magnification factor x4

[Image gallery, panels left to right: Low-resolution input | Ground truth | Bicubic | Zeyde et al. [1] | GR [4] | ANR [3] | NE+NNLS [6] | NE+LLE [5] | MCcSR (proposed) | ScSR [2]]

Full Results

Please check the image data page.

Related Publications

  1. H. S. Mousavi and V. Monga, "Sparsity Based Super Resolution Using Color Channel Constraints," in Proc. IEEE International Conference on Image Processing (ICIP), Phoenix, Arizona, September 2016. [IEEE Xplore]

  2. H. S. Mousavi and V. Monga, "Sparsity-Based Color Image Super Resolution via Exploiting Cross Channel Constraints," IEEE Transactions on Image Processing, vol. 26, no. 11, pp. 5094–5106, November 2017. [IEEE Xplore]

Selected References

  1. R. Zeyde, M. Elad, and M. Protter, “On single image scale-up using sparse-representations,” in Curves and Surfaces. Springer, 2012, pp. 711–730.

  2. J. Yang, Z. Wang, Z. Lin, S. Cohen, and T. Huang, “Coupled dictionary training for image super-resolution,” IEEE Trans. on Image Processing, vol. 21, no. 8, pp. 3467–3478, 2012.

  3. R. Timofte, V. De Smet, and L. Van Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution,” in Computer Vision – ACCV 2014. Springer, 2014, pp. 111–126.

  4. R. Timofte, V. De Smet, and L. Van Gool, “Anchored neighborhood regression for fast example-based super-resolution,” in Proc. IEEE Int. Conf. on Computer Vision (ICCV), 2013, pp. 1920–1927.

  5. H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2004.

  6. M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel, “Low-complexity single-image super-resolution based on nonnegative neighbor embedding,” in Proc. British Machine Vision Conference (BMVC), 2012.

  7. Z. Wang and A. C. Bovik, “Mean squared error: Love it or leave it? A new look at signal fidelity measures,” IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.

  8. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

  9. X. Zhang and B. A. Wandell, “Color image fidelity metrics evaluated using image distortion maps,” Signal Processing, vol. 70, no. 3, pp. 201–214, 1998.

Email
ipal.psu@gmail.com

Address
104 Electrical Engineering East,
University Park, PA 16802, USA

Lab Phone:
814-863-7810
814-867-4564