Deep Learning for Single Image Super-Resolution: A Brief Review
TL;DR: This survey reviews representative deep learning-based SISR methods and groups them into two categories according to their contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning.
Abstract: Single image super-resolution (SISR) is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. To solve the SISR problem, powerful deep learning algorithms have recently been employed and have achieved state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods, and group them into two categories according to their major contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, a baseline is first established, and several critical limitations of the baseline are summarized. Then representative works on overcoming these limitations are presented based on their original contents as well as our critical understandings and analyses, and relevant comparisons are conducted from a variety of perspectives. Finally, we conclude this review with some vital current challenges and future trends in SISR leveraging deep learning algorithms.
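The ill-posedness described in the abstract can be made concrete with a toy degradation model. The plain 2x2 average-pooling kernel and scale factor below are illustrative assumptions for the sketch, not the survey's exact setup:

```python
import numpy as np

def degrade(hr, scale=2):
    """Toy SISR degradation: average-pool an HR image by `scale`
    to produce its LR version."""
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

# Two different HR images that degrade to the identical LR image:
# this many-to-one mapping is exactly what makes SISR ill-posed.
hr_a = np.array([[0.0, 2.0], [2.0, 0.0]])
hr_b = np.array([[1.0, 1.0], [1.0, 1.0]])
print(degrade(hr_a))  # [[1.]]
print(degrade(hr_b))  # [[1.]]
```

Because many HR images are consistent with the same LR input, an SISR model must learn a prior over plausible HR content rather than invert the degradation exactly.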
Figures

Figure 6: LapSRN architecture. Red arrows indicate convolutional layers; blue arrows indicate transposed convolutions (upsampling); green arrows denote element-wise addition operators.
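The pyramid step this caption describes (upsample, then add a predicted residual) can be sketched in a few lines; the nearest-neighbor repeat below is a hedged stand-in for LapSRN's learned transposed convolution:

```python
import numpy as np

def lapsrn_level(x, residual):
    """One toy Laplacian-pyramid SR level: upsample the input 2x and add a
    predicted high-frequency residual element-wise. Nearest-neighbor repeat
    stands in here for a learned transposed convolution."""
    up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return up + residual

lr = np.array([[1.0, 2.0], [3.0, 4.0]])
residual = np.zeros((4, 4))  # in LapSRN this is predicted by a conv branch
sr = lapsrn_level(lr, residual)
print(sr.shape)  # (4, 4)
```

Stacking such levels gives the progressive 2x-per-level upsampling that lets one network serve multiple scale factors.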
Figure 1: Sketch for overall framework of SISR. 
Figure 8: Sketch of the DBPN architecture. 
Figure 7: Sketch of the pixel recursive SR architecture. 
Table I: Comparisons among some representative deep models. 
Figure 10: Comparisons of 'monarch' in Set14 at scale 2 with Gaussian kernel degradation. Faced with a degradation that mismatches the one used for training, the performance of EDSR drops drastically.
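The mismatch the caption refers to can be reproduced in miniature: the same HR patch degraded by two different kernels yields different LR inputs, so a network trained on one degradation sees out-of-distribution inputs at test time. The 2x2 box filter and 3x3 binomial (Gaussian-like) kernel below are illustrative choices, not the exact kernels used in the figure:

```python
import numpy as np

def downsample_box(hr):
    """Box-filter degradation: 2x2 average pooling."""
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downsample_gaussian(hr):
    """Gaussian-like degradation: blur with a 3x3 binomial kernel,
    then keep every 2nd pixel."""
    k = np.array([1.0, 2.0, 1.0])
    k2 = np.outer(k, k) / 16.0
    pad = np.pad(hr, 1, mode='edge')
    blurred = np.zeros_like(hr)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = (pad[i:i + 3, j:j + 3] * k2).sum()
    return blurred[::2, ::2]

hr = np.arange(16.0).reshape(4, 4)
lr_box = downsample_box(hr)
lr_gauss = downsample_gaussian(hr)
# The two degradations give different LR inputs for the same HR image.
print(np.allclose(lr_box, lr_gauss))  # False
```

This distribution shift is why blind-SR methods estimate or marginalize over the degradation kernel instead of assuming a single fixed one.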
Citations
Few-Shot Learning-Based, Long-Term Stable, Sensitive Chemosensor for On-Site Colorimetric Detection of Cr(VI).
TL;DR: In this article, a few-shot learning (FSL) method was used for colorimetric determination of Cr(VI) with 1,5-diphenylcarbazide (DPC)-based test paper.
Super-Resolution of Sentinel-2 Images Using a Spectral Attention Mechanism
M. C. B. Zabalza, A. Bernardini +1 more
TL;DR: The objective of this work is to apply deep learning techniques to increase the resolution of the Sentinel-2 Red-Green-Blue-NIR (RGBN) bands from the original 10 m to 2.5 m, improving the perception and visual quality.
Dual Learning-Based Graph Neural Network for Remote Sensing Image Super-Resolution
TL;DR: In this article, a dual learning-based graph neural network (DLGNN) is proposed to refine the reconstruction results by constraining the mapping process in terms of the loss function, transferring the typical ill-posed problem to a well-posed one.
Deep learning-based super-resolution and de-noising for XMM-newton images
Sam Sweere, Ivan Valtchanov, M. Lieu, Antonia Vojtekova, Eva Verdugo, Maria Santos-Lleo, Florian Pacaud, Alexia Briassouli, D. Pérez +8 more
TL;DR: The first application of Machine Learning based super-resolution (SR) and de-noising (DN) to enhance X-ray images from the European Space Agency’s XMM-Newton telescope is presented.
Deep RegNet-150 architecture for single image super resolution of real-time unpaired image data
S. Karthick, N. Muthukumaran +1 more
TL;DR: Deep RegNet-150 is an effective deep learning architecture for single image super-resolution of real-time unpaired image data.
References
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
- 27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
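The residual learning idea summarized above can be illustrated with a one-dimensional toy block; the scalar `weight` below is an illustrative stand-in for the convolutional layers that learn the residual mapping F(x):

```python
import numpy as np

def residual_block(x, weight):
    """Toy residual block: `weight * x` stands in for the learned residual
    mapping F(x); the identity shortcut adds the input back, so the output
    is F(x) + x."""
    fx = weight * x
    return fx + x

x = np.array([1.0, -2.0, 3.0])
# With weight == 0 the block reduces exactly to the identity mapping, which
# is why very deep residual stacks remain easy to optimize: each layer only
# has to learn a deviation from identity.
print(residual_block(x, 0.0))  # [ 1. -2.  3.]
```

This identity-shortcut structure is the building block behind many of the deep SISR architectures (e.g. EDSR) that the survey reviews.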
•Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
- 01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
138.5K
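The "adaptive estimates of lower-order moments" in the summary above are exponential moving averages of the gradient and its element-wise square. A minimal NumPy sketch of the update follows; the hyperparameters are the paper's defaults, while the 1-D quadratic objective is an illustrative choice:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient (m) and its
    square (v), bias-corrected by the step count t."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient 2*theta) starting from theta = 3.
theta, m, v = 3.0, 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # close to the minimum at 0
```

Because the step size is normalized by the square root of the second-moment estimate, early updates move at roughly the learning rate regardless of gradient scale.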
•Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
- 04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
102.6K
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
99K
ImageNet classification with deep convolutional neural networks
TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes, and employed a recently developed regularization method called "dropout" that proved to be very effective.