
Generally, SR image reconstruction is an ill-posed problem due to the inadequate number of LR images and the ill-conditioned blur operators. Techniques used to stabilize the inversion of such ill-posed problems are known as regularization. The regularization approach takes advantage of prior knowledge about the unknown HR image to solve the SR problem. Deterministic and stochastic regularization approaches are presented in the following subsections.

   3.2.2.1.            Deterministic Approaches


The deterministic approach introduces a regularization term which transforms the ill-posed problem into a well-posed one. It does so by exploiting prior information about the desired solution through a regularization term (R) and a regularization constant (λ) (Plenge et al., 2012). The constrained least squares (CLS) regularization approach incorporates smoothness constraints as a priori information. In this case, R is a high-pass filter which reduces the amount of high-frequency detail in the reconstructed image. The regularization parameter λ controls the amount of high-frequency information. Larger values of λ smooth the reconstructed image; they are a suitable choice when only a small number of LR images is available and/or there is a great deal of noise. Smaller values of λ may produce a noisy solution, which is acceptable when a large number of LR images is available and the amount of noise is small (Park et al., 2003). The Tikhonov least-squares approach (Plenge et al., 2012) incorporates the l2-norm of the second-order derivative (p=2) of the HR image as the regularization term. The primary benefit of the l2-norm is that it is easy to solve. On the other hand, the l2-norm does not guarantee a unique solution, and it is optimal only when the model error follows a white-Gaussian distribution (Plenge et al., 2012; Song, Zhang, Wang, Zhang, & Li, 2010). For this reason, Farsiu et al. (Farsiu, Robinson, Elad, & Milanfar, 2004) employ the alternative l1-norm (p=1). They verify that the l1-norm works more effectively than the l2-norm for fast and robust SR when the images contain non-Gaussian errors. Several researchers employ approaches with combined error models (Shen, Peng, Yue, Yuan, & Zhang, 2016; Yue, Shen, Yuan, & Zhang, 2014; Zeng & Yang, 2013). Therefore, the lp-norm function (1 ≤ p ≤ 2) may also be utilized as the constraint function because of its convexity and its suitability for imaging model errors. When 1 ≤ p ≤ 2, it leads to a weighted mean of the measurements. If the value of p is close to one, the solution is computed with greater weight on the measurements close to the median value; when the value of p is close to two, the solution approaches the average value (Farsiu et al., 2004). In practice, images are often corrupted by both Gaussian and non-Gaussian errors, for which the lp-norm function is recognized as a highly effective solution (Shen et al., 2016). Kim and Bose (S. Kim et al., 1990) suggest a weighted recursive least squares algorithm for generating the SR image, where the weights depend on prior information about the image and higher weights are given to the more reliable LR observations. With these weights, the problem essentially reduces to a general least squares estimation. Finally, interpolation and restoration are applied to obtain the HR image. Mallat and Yu (Mallat & Yu, 2010) recommend a regularization SR approach which uses adaptive estimators obtained by combining a family of linear inverse estimators.
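The effect of the regularization constant λ described above can be illustrated with a minimal sketch of Tikhonov/CLS estimation on a toy 1-D deblurring problem. The function and variable names here are ours, not from the cited works; the high-pass regularizer R is taken to be a second-order difference operator, and the problem is solved via the normal equations:

```python
import numpy as np

def tikhonov_sr(A, y, lam):
    """Solve min ||A x - y||^2 + lam * ||L x||^2, where L is a
    second-difference (high-pass) operator, via the normal equations."""
    n = A.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)  # regularization term R
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)

# Toy example: recover a smooth signal from a blurred, noisy observation
rng = np.random.default_rng(0)
n = 64
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
A = np.eye(n)
for k in (-1, 1):   # simple 3-tap moving-average blur
    A += np.eye(n, k=k)
A /= 3.0
y = A @ x_true + 0.05 * rng.standard_normal(n)

x_small = tikhonov_sr(A, y, lam=1e-3)  # small lambda: noisier estimate
x_large = tikhonov_sr(A, y, lam=10.0)  # large lambda: smoother estimate
```

Raising λ from 1e-3 to 10.0 visibly suppresses the high-frequency content of the estimate, matching the trade-off discussed above.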

   3.2.2.2.            Stochastic Approaches

Based on the observation model explained above, the goal is to reconstruct the HR image from a collection of warped, blurred, noisy, and under-sampled images. Since the model in (2) is ill-conditioned, SR is an ill-posed inverse problem. As a result, stochastic approaches, especially those based on the Bayesian theorem, are popular. The reason is that they present a flexible and convenient way to incorporate a priori information. Moreover, they establish a powerful relationship between the LR images and the unknown HR image (Belekos, Galatsanos, & Katsaggelos, 2010; Pickup, Capel, Roberts, & Zisserman, 2009; Polatkan, Zhou, Carin, Blei, & Daubechies, 2015; Tian & Ma, 2010; Villena, Vega, Babacan, Molina, & Katsaggelos, 2013; Villena, Vega, Molina, & Katsaggelos, 2009; Woods, Galatsanos, & Katsaggelos, 2006).

Maximum likelihood (ML) estimation is suggested by Tom and Katsaggelos (Tom & Katsaggelos, 1995). The goal of this approach is to obtain the ML estimate of the HR image. On one hand, it only takes into account the relationship between the LR images and the original HR image. On the other hand, the Maximum A-Posteriori (MAP) method incorporates a prior image model to express the expectation about the unknown HR image. ML and MAP are popular approaches because of their flexibility and adaptability in preserving edges and combining parameters (Belekos et al., 2010; Pickup et al., 2009; Polatkan et al., 2015; Tian & Ma, 2010; Villena et al., 2013; Villena et al., 2009; Woods et al., 2006). Tipping and Bishop (Tipping & Bishop, 2002) employ an expectation-maximization (EM) algorithm to estimate the hyper-parameter value by maximizing its likelihood function. However, the EM algorithm incurs a huge computational load. Moreover, it does not always converge to the global optimum (Wu, 1983). Pickup et al. (Pickup, Capel, Roberts, & Zisserman, 2006) develop a new technique that reduces the computational cost. They modify the prior image model to account for illumination changes among the collected LR images. He and Kondi (He & Kondi, 2004) suggest a strategy to simultaneously estimate both the hyper-parameter value and the HR image by optimizing the cost function. However, this approach assumes that the shifts between LR images are limited to integer values on the new HR grid. In (Woods et al., 2006), Woods et al. suggest using the EM algorithm to calculate the ML approximation of the hyper-parameter.
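The ML/MAP distinction above can be sketched on a toy 1-D problem: ML fits only the data term relating the LR observations to the HR signal, while MAP adds a prior (here a simple first-difference smoothness prior standing in for the prior image model; all names and the gradient-descent solver are our illustrative assumptions, not any cited author's method):

```python
import numpy as np

def map_estimate(obs, ops, lam, step=0.1, iters=300):
    """Gradient descent on the MAP cost
    sum_k ||y_k - A_k x||^2 + lam * ||D x||^2.
    lam = 0 recovers the plain ML estimate (no prior)."""
    n = ops[0].shape[1]
    D = np.diff(np.eye(n), axis=0)  # first-difference smoothness prior
    x = np.zeros(n)
    for _ in range(iters):
        grad = lam * (D.T @ (D @ x))
        for y, A in zip(obs, ops):
            grad += A.T @ (A @ x - y)
        x -= step * grad
    return x

# Toy example: two decimated, noisy observations of a 1-D "HR" signal
rng = np.random.default_rng(1)
n = 16
x_true = np.linspace(0.0, 1.0, n)
A0 = np.eye(n)[0::2]   # samples the even HR positions
A1 = np.eye(n)[1::2]   # samples the odd HR positions
obs = [A0 @ x_true + 0.05 * rng.standard_normal(n // 2),
       A1 @ x_true + 0.05 * rng.standard_normal(n // 2)]

x_ml = map_estimate(obs, [A0, A1], lam=0.0)   # ML: data term only
x_map = map_estimate(obs, [A0, A1], lam=0.5)  # MAP: data term + prior
```

The MAP estimate is smoother than the ML estimate because the prior term penalizes large differences between neighbouring HR samples.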

Total variation (TV) regularization was initially suggested by Osher et al. (Goldstein & Osher, 2009; Osher, Burger, Goldfarb, Xu, & Yin, 2005; Pan & Reeves, 2006; Yuan, Zhang, & Shen, 2012) to preserve edge information and prevent ringing artifacts. On the other hand, the results of the TV prior model sometimes exhibit a "staircase" effect with intense noise, particularly in flat or smooth areas (Yuan, Zhang, & Shen, 2013). Therefore, several researchers suggest spatially adaptive approaches to overcome the disadvantages of the TV prior model (Dong, Hintermüller, & Rincon-Camacho, 2011; Yuan et al., 2012). A few approaches categorize the image into flat and detailed areas using spatial information, applying a larger penalty parameter to the smooth regions and a smaller one to the edges. On the other hand, spatially adaptive indicators such as the difference curvature, gradients, and the structure tensor are usually very sensitive to noise. Bilateral total variation (BTV) (Farsiu et al., 2004) is utilized to approximate TV, maintaining the flatness of continuous areas while preserving edges in discontinuous areas. A locally adaptive version of BTV (LABTV) is presented to provide a balance between noise reduction and the preservation of image detail (Xuelong Li, Hu, Gao, Tao, & Ning, 2010).
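The BTV penalty of Farsiu et al. sums l1 differences between the image and its shifted copies, weighted by a decay factor. A minimal sketch follows; it uses circular shifts (`np.roll`) for brevity where a full implementation would zero-pad, and the parameter names are ours:

```python
import numpy as np

def btv(img, P=2, alpha=0.7):
    """Bilateral total variation penalty: the sum over shifts (l, m) of
    alpha^(|l|+|m|) * ||X - shift(X, l, m)||_1, after Farsiu et al. (2004).
    Circular shifts stand in for the zero-padded shifts of the paper."""
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(0, P + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(img, l, axis=0), m, axis=1)
            total += alpha ** (abs(l) + abs(m)) * np.abs(img - shifted).sum()
    return total

flat = np.ones((8, 8))                 # perfectly flat region
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0                      # one vertical step edge
print(btv(flat))                       # 0.0: flat areas are not penalized
print(btv(edge) > 0)                   # True: discontinuities are penalized
```

Because every shifted copy of a constant image equals the image itself, flat regions contribute nothing to the penalty, while edges contribute in proportion to their contrast, which is exactly the behaviour the text attributes to BTV.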

1.      Similarity Measure

To assess the fidelity of the image reconstruction procedure, every reconstructed HR image has to be compared with the original image; this comparison is called similarity measurement. It also assists in monitoring and evaluating the performance of the image reconstruction procedure. Numerous similarity measures exist in the literature. A few of the most well-known are Normalized Cross-Correlation Ratio (NCCR), Direct Difference Error (DDE), Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and Structural Similarity (SSIM) (Begin & Ferrie, 2006; Tang, Deng, Xiao, & Yu, 2011; Z. Wang, Bovik, Sheikh, & Simoncelli, 2004).

The PSNR is computed from the MSE, which is the average squared error between the original image and the SR image. Given an m x n SR image and its original, MSE and PSNR are defined as:
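These two measures can be sketched directly in code. This is a minimal illustration assuming 8-bit images (peak value 255); the function names are ours:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error: average of the squared pixel differences."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * np.log10(peak ** 2 / err)

# Usage: a uniform offset of 4 gray levels gives MSE = 16
img = np.tile(np.arange(8, dtype=np.float64), (8, 1))
noisy = img + 4.0
print(mse(img, noisy))   # 16.0
print(psnr(img, noisy))  # ~36.09 dB
```

Higher PSNR indicates a reconstruction closer to the original; identical images yield zero MSE and hence an infinite PSNR.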