AIPNet: Image-to-Image Single Image Dehazing With Atmospheric Illumination Prior


Abstract: Atmospheric scattering and absorption give rise to the natural phenomenon of haze, which severely degrades the visibility of scenery; images captured by a camera in haze are therefore prone to over-brightness and ambiguity. To resolve the ill-posed and intractable problem of single image dehazing, we propose a straightforward but effective prior, the atmospheric illumination prior. Extensive statistical experiments across different colorspaces, together with theoretical analyses, indicate that atmospheric illumination in hazy weather mainly influences the luminance channel of the YCrCb colorspace and has far less impact on the chrominance channels. According to this prior, we try to maintain the intrinsic color of the hazy scene while enhancing its visual contrast. To this end, we apply multiscale convolutional networks that automatically identify hazy regions and restore deficient texture information. Compared with previous methods, the deep CNNs not only yield an end-to-end trainable model but also form a simple image-to-image system architecture. Extensive comparisons and analyses against existing approaches demonstrate that the proposed approach achieves state-of-the-art performance in several dehazing evaluations.
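The prior can be sanity-checked with a few lines of standard image processing. The following is a minimal sketch (not the authors' code): it converts a hazy/clean image pair to YCrCb and compares the per-channel deviation, which under the prior should be concentrated in the Y (luminance) channel. The file names and the use of OpenCV are assumptions for illustration.

```python
import cv2
import numpy as np

hazy = cv2.imread("hazy.png")    # hypothetical hazy image
clean = cv2.imread("clean.png")  # corresponding haze-free image

# Convert both images from BGR to YCrCb.
hazy_ycrcb = cv2.cvtColor(hazy, cv2.COLOR_BGR2YCrCb).astype(np.float32)
clean_ycrcb = cv2.cvtColor(clean, cv2.COLOR_BGR2YCrCb).astype(np.float32)

# Mean absolute difference per channel: under the atmospheric illumination
# prior, the Y (luminance) gap should dominate while Cr/Cb stay small.
diff = np.abs(hazy_ycrcb - clean_ycrcb).mean(axis=(0, 1))
for name, d in zip(("Y", "Cr", "Cb"), diff):
    print(f"{name}: mean |hazy - clean| = {d:.2f}")
```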


Existing system: All kinds of inclement weather conditions occur in daily life, such as snow, rain, sandstorms, hail, haze, and mist. Haze and mist are the most common among them, appearing even on sunny days. The phenomenon is caused by suspended aerosols, water drops, and other particles, which absorb and scatter part of the reflected airlight. Due to these particles, outdoor images captured by a camera become obscured to some degree and suffer from poor visual contrast. Consequently, many theoretical approaches and feasible priors/hypotheses for haze removal have been presented alongside the growing applications of computer vision, such as autonomous driving, video surveillance, smartphone cameras, remote sensing, and visual navigation. According to their characteristics, dehazing methods are mainly divided into two categories: multi-image restoration and single-image restoration.

Proposed system: However, the flat-world hypothesis is difficult to apply to haze removal in real road scenes. Meanwhile, based on two basic constraints in , Tan develops a cost function with Markov random fields (MRF) that maximizes contrast to cope with this issue; however, the output image tends to be oversaturated. Similar to the dark-object subtraction method, the dark channel prior (DCP) proposed by He et al. effectively estimates the transmission of atmospheric light and removes haze in local regions outside the sky. The proposed method needs only a small number of simple training samples relative to other deep CNN methods, and acquires an image-to-image model that achieves the best performance on haze removal. An image-to-image CNN architecture without additional side information is realized, and the mapping between hazy and haze-free images is established. Quantitative and qualitative evaluations demonstrate the efficiency and effectiveness of the proposed method relative to state-of-the-art dehazing methods.

Advantages: As is well known, the short connections within the feature unit help the dehazing network facilitate gradient flow, alleviate vanishing/exploding gradients, and increase nonlinear complexity. Moreover, the multiscale restoration module maintains scale invariance and reconstructs a clear scene with fewer parameters at different spatial scales. Even with these advantages for convergence, it is difficult to train the dehazing network to map a hazy image to a haze-free one directly, whereas the fusing network improves the training process. Therefore, the ensemble comprising the dehazing network and the fusing network takes full advantage of their respective strengths, so that it accurately restores the high-frequency details of the hazy luminance map. Furthermore, its training takes less time to converge and is more robust to over-fitting than the single dehazing network.
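As one way to picture the multiscale restoration idea, the sketch below (a PyTorch assumption, not the paper's published configuration) reuses a single shared convolutional block at several spatial scales, which keeps the parameter count low, and adds a short residual connection to ease gradient flow.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleRestoration(nn.Module):
    def __init__(self, channels=16, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One shared block reused at every scale (scale invariance, fewer parameters).
        self.shared = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, y):                 # y: hazy luminance map, shape (N, 1, H, W)
        h, w = y.shape[-2:]
        outputs = []
        for s in self.scales:
            ys = F.avg_pool2d(y, s) if s > 1 else y   # downsample to the current scale
            rs = self.shared(ys)                      # shared restoration block
            outputs.append(F.interpolate(rs, size=(h, w),
                                         mode="bilinear", align_corners=False))
        # Average the per-scale estimates and add a short (residual) connection.
        return y + torch.stack(outputs, dim=0).mean(dim=0)

# Usage: restore a random luminance map at three scales.
restored = MultiscaleRestoration()(torch.rand(1, 1, 64, 64))
```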

Disadvantages: For the dehazing task, the key idea is that AIPNet automatically learns relevant haze features to identify hazy regions, and enhances visual contrast and restores deficient texture information while preserving the intrinsic color of the scene. Although AIPNet is very successful at single image dehazing, there are still extensive optimizations to be investigated; for instance, the ΔH imbalance problem in the YCrCb colorspace will be studied in future work.

Modules:

Multiscale CNN: Fully end-to-end multiscale CNNs are introduced to remove haze. In order to accelerate training, learn more haze features, and reduce model capacity, we propose that the dehazing network identify hazy regions and restore deficient texture information in the luminance channel, while the fusing network fuses the features of the three channels of the YCrCb colorspace. The multiscale CNN architecture, including the dehazing network and the fusing network, is concretely described. In Section V, extensive assessments and analyses of haze removal are presented; finally, we summarize and discuss our results in the concluding section. To this end, a novel atmospheric illumination prior combined with multiscale CNNs is proposed to remove haze. The prior not only efficiently and automatically identifies hazy regions, but also reinforces the visual contrast of a hazy image while keeping its original, intrinsic color to a large extent. Due to the data-driven nature of deep learning, selecting a suitable training data source for hazy images is crucial.
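A hedged sketch of the two-stage pipeline described above follows: a dehazing network operating only on the luminance (Y) channel, and a fusing network that merges the restored Y with Cr and Cb. Layer widths and depths are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class DehazingNet(nn.Module):             # restores the hazy Y channel
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, y):
        return y + self.body(y)           # residual prediction on the luminance map

class FusingNet(nn.Module):               # fuses restored Y with Cr/Cb
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, ycrcb):
        return self.body(ycrcb)

# Usage: split a YCrCb tensor, dehaze Y, then fuse all three channels.
x = torch.rand(1, 3, 128, 128)            # channels ordered (Y, Cr, Cb)
y, crcb = x[:, :1], x[:, 1:]
restored_y = DehazingNet()(y)
output = FusingNet()(torch.cat([restored_y, crcb], dim=1))
```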


Image restoration: As noted above, dehazing methods are mainly divided into two categories: multi-image restoration and single-image restoration. Due to the strict limitations of hardware devices and technical levels in the past, researchers leveraged two or more images.

Multiscale depth fusion: Recently, mathematical methods for dehazing have become increasingly popular for restoring visibility in bad weather, including probability-based and learning-based methods. For one thing, in , a factorial Markov random field is introduced to model the image, which predicts the albedo and depth of the scene. To model the fog scene, Y. K. Wang and C. T. Fan employ the multiscale depth fusion (MDF) method to recover high-quality images with multiple Markov random fields (MRFs). Galdran et al. extend a perception-inspired variational framework to improve scene contrast. For another, Tang et al. train on numerous relevant haze-feature samples to implement this task within a random forest (RF) framework.


Combining seven types of haze features, Luan et al. use support vector regression (SVR) to achieve a strong dehazing effect. Although making significant progress on haze removal, the above state-of-the-art approaches are subject to constraints, priors, and hypotheses that become invalid or unavailable under some circumstances, and thus lack self-adaptivity and generalization ability.

Feature unit: The two feature units that follow the first convolution layer leverage a modular design. The feature unit (FU) not only extends the network depth easily, but also increases nonlinear complexity to enhance the accuracy of feature extraction. Furthermore, the feature unit fully exploits a residual shortcut connection with identity mapping to address the degradation problem. Within the feature unit, the shortcut connection performed on the first two output feature maps not only reduces excessive parameters and computational complexity, but also effectively accelerates deep network training. In addition, the latter two layers of the feature unit employ the Network-in-Network (NIN) structure, including an mlpconv layer. Since the mlpconv layer with cascaded cross-channel parametric pooling (cccp) learns interactions across channels, it enhances the nonlinear complexity of the network. Taking full advantage of these strengths, the entire feature unit can express more complex and abstract functions and has a higher ability to extract features.
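A minimal PyTorch sketch of such a feature unit is given below, assuming an arbitrary channel width: two 3x3 convolutions bridged by an identity shortcut, followed by two 1x1 "mlpconv" layers in the Network-in-Network style. It illustrates the structure described above rather than reproducing the paper's exact layer configuration.

```python
import torch
import torch.nn as nn

class FeatureUnit(nn.Module):
    def __init__(self, channels=32):     # channel count is an assumption
        super().__init__()
        # First two layers: standard convolutions spanned by a residual shortcut.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        # Latter two layers: 1x1 convolutions acting as cascaded cross-channel
        # parametric pooling (the mlpconv idea from Network-in-Network).
        self.cccp1 = nn.Conv2d(channels, channels, 1)
        self.cccp2 = nn.Conv2d(channels, channels, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        out = self.relu(out + x)          # identity shortcut over the first two layers
        out = self.relu(self.cccp1(out))
        return self.relu(self.cccp2(out))

# Usage: pass a 32-channel feature map through one feature unit.
features = FeatureUnit()(torch.rand(1, 32, 64, 64))
```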

