Deep, convergent, unrolled half-quadratic splitting for image deconvolution

IEEE Transactions on Computational Imaging

ABSTRACT

In recent years, algorithm unrolling has emerged as a powerful technique for designing interpretable neural networks based on iterative algorithms. Imaging inverse problems have particularly benefited from unrolling-based deep network design, since many traditional model-based approaches rely on iterative optimization. Despite exciting progress, typical unrolling approaches heuristically design layer-specific convolution weights to improve performance; crucially, the convergence properties of the underlying iterative algorithm are lost once layer-specific parameters are learned from training data. We propose an unrolling technique that breaks this trade-off, retaining the convergence properties of the underlying algorithm while simultaneously enhancing performance. We focus on image deblurring and unroll the widely applied Half-Quadratic Splitting (HQS) algorithm. We develop a new parametrization scheme that enforces the layer-specific parameters to asymptotically approach certain fixed points. Through extensive experimental studies, we verify that our approach achieves performance competitive with state-of-the-art unrolled layer-specific learning and significantly improves over the traditional HQS algorithm. We further establish convergence of the proposed unrolled network as the number of layers approaches infinity, and characterize its convergence rate. Our experimental verification involves simulations that validate the analytical results, as well as comparisons with state-of-the-art non-blind deblurring techniques on benchmark datasets. The merits of the proposed convergent unrolled network are established over competing alternatives, especially in the regime of limited training.
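For context, each HQS iteration for deconvolution alternates a quadratic update (solvable in closed form via the FFT under circular boundary conditions) with a proximal step. The following is a minimal NumPy sketch of an unrolled version, not the paper's DECUN model: a fixed soft-threshold stands in for DECUN's learned, layer-specific operators, and all parameter values are illustrative assumptions.

```python
import numpy as np

def unrolled_hqs(y, k, n_layers=10, mu_beta=10.0, tau=0.01):
    """Toy unrolled-HQS deconvolution sketch (hypothetical, not DECUN itself).

    y: blurred image; k: blur kernel (circular boundary assumed).
    Each "layer" is one HQS iteration:
      u-update: argmin_u ||u - w||^2 + mu_beta * ||K u - y||^2,
                solved exactly in the Fourier domain;
      w-update: a proximal step (soft-threshold as a stand-in for the
                learned layer-specific operators).
    """
    K = np.fft.fft2(k, s=y.shape)          # kernel transfer function
    KtY = np.conj(K) * np.fft.fft2(y)      # K^T y in the Fourier domain
    w = y.copy()
    u = y.copy()
    for _ in range(n_layers):
        # Quadratic u-update: (I + mu_beta K^T K) u = w + mu_beta K^T y
        num = np.fft.fft2(w) + mu_beta * KtY
        den = 1.0 + mu_beta * np.abs(K) ** 2
        u = np.real(np.fft.ifft2(num / den))
        # Proximal w-update: soft-threshold with level tau
        w = np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)
    return u
```

In DECUN, by contrast, the per-layer filters and penalty weights are learned from data under the parametrization constraints described in the theorems below.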

Code

The MATLAB code for the numerical simulation part of the paper can be found here.

The implementation of DECUN can be found here.

DECUN

Theorems

Theorem 1 (Convergence result):

For a fixed $\mu > 0$, suppose the sequences $\{\xi^l\} \subset \mathbb{R}$ and $\{\gamma^l\} \subset \mathbb{R}$ are absolutely summable, meaning $\sum_l |\xi^l|$ and $\sum_l |\gamma^l|$ both converge. Furthermore, suppose that $\{E^l\}_l$ forms a bounded sequence, i.e., $\|E^l\| \leq M$ for some $M > 0$. Then the sequence $\{(w^l, u^l)\}$ generated by executing DECUN from any starting point $(w^0, u^0)$ converges to $(w^\ast, u^\ast)$ as $l \to \infty$.
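One simple way to satisfy the absolute-summability hypothesis is to let the layer-wise deviations decay geometrically. The sketch below is purely illustrative (it is an assumed parametrization, not necessarily the one used in the paper): with $\xi^l = c\,r^l$ and $0 < r < 1$, the partial sums of $|\xi^l|$ converge, so the layer-specific parameters settle toward fixed values as Theorem 1 requires.

```python
import numpy as np

# Illustrative (assumed) parametrization: xi^l = c * r**l with 0 < r < 1.
c, r = 0.5, 0.8
xi = c * r ** np.arange(200)

# Partial sums of |xi^l| are monotone and approach the geometric-series
# limit c / (1 - r) = 2.5, so the sequence is absolutely summable.
partial_sums = np.cumsum(np.abs(xi))
```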

Theorem 2 (Convergence rate):

Let $G^l = D^l (M^l)^{-1} (D^l)^T$, where $D^l$ and $\beta^l$ satisfy the conditions of Theorem 1. Then the sequence $\{(w^l, u^l)\}$ converges to $(w^\ast, u^\ast)$ with convergence rate satisfying
$$\|w^{l+1} - w^\ast\| \leq (\lambda_{\max})^{l+1} \|w^0 - w^\ast\| + \left\| \mathcal{F}^{-1}\!\left( \frac{B(\omega)}{1 - \lambda_{\max} e^{j\omega}} \right) \right\|,$$
where $\lambda_{\max} = \max\{\lambda^1, \lambda^2, \dots, \lambda^{l_0}, \tfrac{1}{2}[1 + \rho(G^\ast)]\}$ with $\lambda^l = \rho\!\left((G^l)^2\right)$, and $\rho(G^\ast)$ and $\rho\!\left((G^l)^2\right)$ denote the spectral radii of $G^\ast$ and $(G^l)^2$, respectively. Here $B(\omega)$ is the discrete-time Fourier transform of $b^l = 3|\xi^l|\,\|u^\ast\| + 2|\gamma^l|\,(\bar{\beta})^2$, and $u^\ast = \left((\bar{D})^T \bar{D} + \mu \bar{\beta} K^T K\right)^{-1} \left((\bar{D})^T w^\ast + \mu \bar{\beta} K^T y\right)$, where $w^\ast$ is the fixed point of $s \circ h$, i.e., $w^\ast = s(h(w^\ast))$, attained as $l \to \infty$.
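The geometric term $(\lambda_{\max})^{l+1}\|w^0 - w^\ast\|$ in this bound can be illustrated numerically. The sketch below uses a randomly generated fixed linear iteration as a stand-in for the asymptotic DECUN layers (all matrices and rates here are illustrative assumptions): since the iteration matrix is symmetric with spectral radius $0.4 < 1$, the error contracts by at least that factor per step.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A + A.T                                   # symmetric matrix
G = 0.4 * S / np.linalg.norm(S, 2)            # rho(G) = ||G||_2 = 0.4
b = rng.standard_normal(5)

# Fixed point of w <- G w + b is w* = (I - G)^{-1} b.
w_star = np.linalg.solve(np.eye(5) - G, b)

# Run the iteration from w^0 = 0 and record the error per layer.
w = np.zeros(5)
errs = []
for l in range(30):
    w = G @ w + b
    errs.append(np.linalg.norm(w - w_star))

# Geometric bound: ||w^{l+1} - w*|| <= 0.4**(l+1) * ||w^0 - w*||.
bound = [0.4 ** (l + 1) * np.linalg.norm(w_star) for l in range(30)]
ok = all(e <= bb + 1e-12 for e, bb in zip(errs, bound))
```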

Results


Figure. Visual comparison over BSD dataset and nonlinear kernels.


Figure. Visual comparison over BSD dataset and nonlinear kernels.


Figure. Performance evaluation in a limited training setup.

Related Publications

  1. Zhao Y, Li Y, Zhang H, Monga V, Eldar YC. Deep, convergent, unrolled half-quadratic splitting for image deconvolution. Submitted to IEEE Transactions on Computational Imaging. [arXiv]

  2. Zhao Y, Li Y, Zhang H, Monga V, Eldar YC. A convergent neural network for non-blind image deblurring. In 2023 IEEE International Conference on Image Processing (ICIP), Oct. 2023, pp. 1505-1509. [IEEE]

Email
ipal.psu@gmail.com

Address
104 Electrical Engineering East,
University Park, PA 16802, USA

Lab Phone:
814-863-7810
814-867-4564