A Genetic Algorithm Based on Fog Intensity Detection Method for Driver Safety

V. Sujatha
E.C.E. Department, V.R. Siddhartha Engineering College, TIFAC CORE in Telematics, Kanur, Vijayawada
Sujatha.vundavali@gmail.com

Y. Prathima
E.C.E. Department, V.R. Siddhartha Engineering College, TIFAC CORE in Telematics, Kanur, Vijayawada
prathimasiva@gmail.com

Dr. K. Rama Krishna
E.C.E. Department, V.R. Siddhartha Engineering College, TIFAC CORE in Telematics, Kanur, Vijayawada
srk_kalva@yahoo.com

Abstract-Obstacle detection under adverse weather conditions, especially fog, is a challenging task because contrast is drastically reduced. In daylight fog in particular, the contrast of images grabbed by in-vehicle cameras in the visible light range is strongly degraded, which makes current camera-based driver assistance very sensitive to weather conditions. The visibility distance is calculated from the camera projection equations and from the blurring due to fog. The effects of daylight fog vary across the scene and are exponential with respect to the depth of the scene points.

Keywords-fog removal, atmospheric visibility distance, contrast restoration, genetic algorithm

I. INTRODUCTION

Recently, many systems have been developed that use computers and various sensors to assist driving. Notable examples include self-steering by white-line detection, rear-end collision prevention based on measuring the distance to the vehicle ahead, danger notification systems that recognize pedestrians, and systems that automatically operate the windshield wipers upon recognizing rain drops. A driving assistance system cannot ignore changes in weather conditions: in adverse weather such as rain, snow, or fog, driving is more difficult than in fair conditions, and the accident rate rises significantly. There is therefore a close relationship between driver assistance and weather recognition. In this paper, we focus on fog detection. Fog degrades human perception of traffic conditions and creates potentially dangerous situations. Automatic lighting of fog lamps, speed control, and alerting the driver are examples of the assistance that can be provided once fog is recognized. Under foggy conditions, the distance to a preceding vehicle's tail lamps is perceived to be about 60% further away than under fair conditions. Furthermore, fog changes significantly both temporally and spatially, so there is a need for real-time detection using in-vehicle sensors. Installing large numbers of sensors along roads might be one solution, but it would not accurately reflect the driver's visual condition and would be very expensive to establish. Considering these problems, we propose a method that classifies fog density into three levels using in-vehicle camera images and millimetre-wave (mm-W) radar data. The image from the in-vehicle camera reflects the driver's visual conditions, which is the prime advantage of using an in-vehicle camera. We also evaluate the degradation in visibility of images captured in foggy conditions, focusing on the change in visibility of a preceding vehicle. The distance to the targets must also be taken into account when determining fog density, because under the same fog condition nearby objects are easy to see while distant objects are not. We therefore use mm-W radar together with the in-vehicle camera, since the radar can measure distance without being influenced by adverse weather conditions.

Under adverse meteorological conditions, the contrast of images grabbed by a classical in-vehicle camera in the visible light range is drastically degraded, which makes current in-vehicle applications relying on such sensors very sensitive to weather conditions. An in-vehicle vision system should take fog effects into account to be more reliable. A first solution is to adapt the operating thresholds of the system, or to deactivate it momentarily when those thresholds are exceeded. A second solution is to remove the weather effects from the image beforehand. Unfortunately, the effects vary across the scene: they are exponential with respect to the depth of the scene points. Consequently, space-invariant filtering techniques cannot be used directly to remove weather effects from images adequately. A judicious approach is to detect and characterize the weather conditions, estimate the resulting decay in the image, and then remove it.
II. FOG EFFECTS ON VISION

A. Visual Properties of Fog

The attenuation of luminance through the atmosphere was studied by Koschmieder, who derived an equation relating the apparent luminance L of an object located at distance d to the luminance L0 measured close to the object:

L = L0 e^(-kd) + L∞ (1 - e^(-kd))    (1)

This expression indicates that the luminance of an object seen through fog is attenuated by e^(-kd) (Beer-Lambert law); it also reveals a luminance reinforcement of the form L∞(1 - e^(-kd)), resulting from daylight scattered by the slab of fog between the object and the observer, which is also named airlight. L∞ is the atmospheric luminance; in the presence of fog, it is also the background luminance against which the target is detected. The previous equation may be written as

L - L∞ = (L0 - L∞) e^(-kd)    (2)

On the basis of this equation, Duntley developed a contrast attenuation law, stating that a nearby object exhibiting contrast C0 with the background will be perceived at distance d with the contrast

C = (L - L∞)/L∞ = [(L0 - L∞)/L∞] e^(-kd) = C0 e^(-kd)    (3)

This expression serves as the basis for the definition of a standard dimension called the "meteorological visibility distance" Vmet, i.e., the greatest distance at which a black object (C0 = -1) of suitable dimension can be seen against the sky on the horizon, with the threshold contrast set at 5%. It is thus a standard dimension that characterizes the opacity of a fog layer. This definition yields the following expression:

Vmet = -(1/k) ln(0.05) ≈ 3/k    (4)

Fig. 1. Fog or haze luminance is due to the scattering of daylight. Light coming from the sun and scattered by atmospheric particles toward the camera is the airlight A, which increases with distance. The light emanating from the object, R, is attenuated by scattering along the line of sight, so the direct transmission T of R decreases with distance.

B. Camera Response

Let us denote by f the camera response function, which models the mapping from scene luminance to image intensity by the imaging system, including optics as well as electronics. The intensity I of a pixel is the result of f applied to the sum of the airlight A and the direct transmission T:

I = f(T + A)

In this paper, we assume that the conversion process between the incident energy on the charge-coupled-device (CCD) sensor and the intensity in the image is linear. This is generally the case for short exposure times because it prevents the CCD array from being saturated. Furthermore, short exposure times (1-4 ms) are used on in-vehicle cameras to reduce motion blur. The assumption can thus be considered valid, and the previous equation becomes

I = R e^(-βd) + A∞ (1 - e^(-βd))

where R is the intrinsic intensity of the pixel (its intensity in the absence of fog), A∞ is the intensity of the sky, and β is the atmospheric extinction coefficient (denoted k above).
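As a concrete illustration of the model above, the following Python/NumPy sketch applies the attenuation-plus-airlight equation pixel-wise to synthesize fog over an image with a known depth map, and evaluates the meteorological visibility distance Vmet. The array shapes, the β value, and A∞ = 255 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def apply_fog(R, depth, beta, A_inf=255.0):
    """Simulate daylight fog: I = R * e^(-beta*d) + A_inf * (1 - e^(-beta*d))."""
    t = np.exp(-beta * depth)          # direct transmission factor e^(-beta*d)
    I = R * t + A_inf * (1.0 - t)      # attenuated signal plus airlight
    return np.clip(I, 0, 255)

def meteorological_visibility(beta):
    """Vmet = -(1/beta) * ln(0.05), approximately 3/beta (5% contrast threshold)."""
    return -np.log(0.05) / beta

# Toy 4x4 scene whose depth grows with the column index (assumed values, in metres).
R = np.full((4, 4), 100.0)                                    # fog-free intensities
depth = np.tile(np.array([10.0, 50.0, 100.0, 200.0]), (4, 1))
beta = 0.03                                                   # assumed extinction coefficient (1/m)
I = apply_fog(R, depth, beta)
print(I[0])                                 # intensities drift toward A_inf as depth increases
print(meteorological_visibility(beta))      # roughly 100 m for beta = 0.03
```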
C. Enhancement

The aim of image enhancement is to improve the interpretability or perception of the information in an image for human viewers, or to provide 'better' input for other automated image-processing techniques. Image enhancement techniques can be divided into two broad categories: (1) spatial-domain methods, which operate directly on pixels, and (2) frequency-domain methods, which operate on the Fourier transform of an image. Unfortunately, there is no general theory for determining what 'good' image enhancement is when it comes to human perception: if it looks good, it is good. However, when image enhancement techniques are used as pre-processing tools for other image-processing techniques, quantitative measures can determine which techniques are most appropriate.
Contrast enhancement techniques can be classed as global or local. Histogram equalization is the most popular global enhancement algorithm because of its simplicity and effectiveness. However, because it uses global histogram information over the whole image as its transformation function to stretch contrast, it cannot reflect local changes in scene depth, and the enhancement is not satisfactory when depth changes across the scene. One image-clearing method for fog uses a moving-mask-based, sub-block overlapped histogram equalization to de-weather the degraded image. The idea is that the pixels within the sub-block mask are assumed to share the same scene depth, so by segmenting the sky region to restrain over-enhancement, the contrast in the non-sky region can be restored and the fog effect lightened. However, because the method moves the mask in steps of one pixel to perform sky-region pixel judgement and contrast enhancement by sub-block overlapped histogram equalization, its computational complexity is very high.

Fig. 2. Gray-scale images
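The global-versus-local trade-off described above can be illustrated with OpenCV, contrasting plain histogram equalization with CLAHE, a tile-based locally adaptive variant. This is a readily available stand-in, not the sub-block overlapped method cited in the text, and the file names are placeholders.

```python
import cv2

# Placeholder path; replace with an actual foggy image.
gray = cv2.imread("foggy.png", cv2.IMREAD_GRAYSCALE)

# Global enhancement: one transformation for the whole image,
# so it cannot follow local changes in scene depth.
global_eq = cv2.equalizeHist(gray)

# Local enhancement: contrast-limited adaptive histogram equalization
# computes a separate mapping per tile, at higher computational cost.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_eq = clahe.apply(gray)

cv2.imwrite("global_eq.png", global_eq)
cv2.imwrite("local_eq.png", local_eq)
```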
III. CONTRAST-RESTORATION METHOD

Restoration Principle

Here, we describe a simple method to restore scene contrast from an image of a foggy scene. Let us consider a pixel with known depth d. Its intensity I is given by the camera model of Section II-B, where (A∞, β) characterizes the weather condition and is estimated beforehand. Consequently, R can be directly estimated for all scene points from

R = I e^(βd) + A∞ (1 - e^(βd))

The previous equation can be written as follows:

R = A∞ - (A∞ - I) e^(βd)

Thus, the contrast Cr in the restored image is

Cr = (R - A∞)/A∞ = [(I - A∞)/A∞] e^(βd) = C e^(βd)

However, R may become negative for certain values of (I, d). We can solve the equation R(d*) = 0 and obtain

d* = (1/β) ln(A∞/(A∞ - I))

In case of negative values during the restoration process, we propose to set them to 0. The restoration equation finally becomes

R = max[0, A∞ - (A∞ - I) e^(βd)]

Fig. 3. Restoration principle outputs
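A direct pixel-wise implementation of the restoration rule above can be sketched as follows. The depth map, A∞, and β are assumed to be available (in the paper they come from the fog characterization and the distance measurements), and the helper names are ours.

```python
import numpy as np

def restore_contrast(I, depth, beta, A_inf):
    """Restore the fog-free intensity R = max(0, A_inf - (A_inf - I) * exp(beta * d))."""
    R = A_inf - (A_inf - I) * np.exp(beta * depth)
    return np.clip(R, 0.0, None)          # negative values are set to 0, as proposed above

def critical_depth(I, beta, A_inf):
    """Depth d* beyond which the restored value would become negative,
    d* = (1/beta) * ln(A_inf / (A_inf - I)), valid for I < A_inf."""
    return np.log(A_inf / (A_inf - I)) / beta
```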
Fig. 4. Histogram outputs for the images
IV. BLOCK DIAGRAM

[Block diagram: Foggy image → Gray-scale foggy image → Edge detection (binary image) → Iteration to reduce intensity levels → Genetic algorithm → Output]

1. Foggy Image

First, a foggy image is taken. The foggy image may be in any colour format, e.g., JPEG, bitmap, etc. This colour image is converted into a gray-scale image. A colour image is a three-dimensional array, whereas a gray-scale image is a two-dimensional array; extraction in two dimensions is straightforward, while extraction from a three-dimensional image is considerably more complicated.

2. Edge Detection

Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying the points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. It can be shown that, under rather general assumptions for an image-formation model, discontinuities in image brightness are likely to correspond to discontinuities in depth, discontinuities in surface orientation, changes in material properties, and variations in scene illumination.

In the ideal case, the result of applying an edge detector to an image is a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, and curves that correspond to discontinuities in surface orientation. Applying an edge detection algorithm to an image may therefore significantly reduce the amount of data to be processed and filter out information that may be regarded as less relevant, while preserving the important structural properties of the image. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edges extracted from non-trivial images are often hampered by fragmentation (the edge curves are not connected), missing edge segments, and false edges that do not correspond to interesting phenomena in the image, complicating the subsequent task of interpreting the image data.

Canny Edge Detection: To convert the gray-scale image into an edge map, the Canny algorithm is used; a sketch of this step follows Fig. 6.

Fig. 5. Input images
Fig. 6. Edge detection outputs
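A minimal sketch of the gray-scale conversion and Canny steps using OpenCV; the input path, blur kernel, and hysteresis thresholds are assumed, illustrative values rather than ones reported in the paper.

```python
import cv2

img = cv2.imread("foggy.jpg")                         # colour foggy image (placeholder path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # 3-D colour array -> 2-D gray-scale array
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)         # smoothing before gradient computation
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # binary edge map
cv2.imwrite("edges.png", edges)
```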
3. Gray Scale Images

A grayscale image (also called gray-scale, gray scale, or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as an individual matrix, with each element of the matrix corresponding to one image pixel; by convention, the variable name I refers to a grayscale image. The matrix can be of class uint8, uint16, int16, single, or double. While grayscale images are rarely saved with a colour map, MATLAB uses a colour map to display them. For a matrix of class single or double, using the default grayscale colour map, the intensity 0 represents black and the intensity 1 represents white. For a matrix of type uint8, uint16, or int16, the intensity intmin(class(I)) represents black and the intensity intmax(class(I)) represents white.

Fig. 7. Input images
Fig. 8. Gray-scale images

4. Fog Removal Algorithm (Genetic Algorithm)

The area of genetic algorithms is very wide, and it is not possible to cover everything here, but the reader should get an idea of what genetic algorithms are and what they can be useful for; no sophisticated mathematical theory is required. Genetic algorithms are a part of evolutionary computing, a rapidly growing area of artificial intelligence. As the name suggests, genetic algorithms are inspired by Darwin's theory of evolution: simply put, a solution to a problem solved by a genetic algorithm is evolved.

A. Initialization

Initially, many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but it typically contains several hundred or several thousand candidate solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions; other methods rate only a random sample of the population, as rating every solution may be very time-consuming.

B. Reproduction

The next step is to generate a second-generation population of solutions from those selected, through the genetic operators crossover (also called recombination) and/or mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research suggests that using more than two "parents" can produce higher-quality chromosomes. These processes ultimately result in a next-generation population of chromosomes that is different from the initial generation. Generally, the average fitness of the population will have increased by this procedure, since only the best organisms of the first generation are selected for breeding, along with a small proportion of less fit solutions for the reasons mentioned above. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration.
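The selection and reproduction operators described above can be sketched for bit-string chromosomes as follows. Tournament selection, single-point crossover, and bit-flip mutation are generic textbook choices, not necessarily the operators used in this work.

```python
import random

def tournament_select(population, fitness, k=3):
    """Pick the fittest of k randomly chosen individuals."""
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitness[i])]

def crossover(parent_a, parent_b):
    """Single-point crossover of two equal-length bit strings."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.01):
    """Flip each bit independently with the given probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]
```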
C. Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

- A solution is found that satisfies minimum criteria
- A fixed number of generations is reached
- The allocated budget (computation time/money) is reached
- The highest-ranking solution's fitness has reached a plateau, such that successive iterations no longer produce better results
- Manual inspection
- Combinations of the above

Simple generational genetic algorithm pseudo code (a runnable sketch follows the list):

1. Choose the initial population of individuals.
2. Evaluate the fitness of each individual in that population.
3. Repeat on this generation until termination (time limit, sufficient fitness achieved, etc.):
   a. Select the best-fit individuals for reproduction.
   b. Breed new individuals through crossover and mutation operations to give birth to offspring.
   c. Evaluate the individual fitness of the new individuals.
   d. Replace the least-fit individuals of the population with the new individuals.
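The pseudo code corresponds to the following self-contained skeleton. The fitness function passed in here simply counts ones, a toy stand-in, since the paper does not state the fitness used for fog removal; all parameter values are assumptions.

```python
import random

def run_ga(fitness, n_bits=32, pop_size=50, generations=100, cx_rate=0.9, mut_rate=0.01):
    # 1. initial population of random bit strings
    population = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2. evaluate fitness of each individual
        scores = [fitness(ind) for ind in population]
        next_gen = []
        while len(next_gen) < pop_size:
            # a. fitness-proportional choice of two parents (roulette wheel)
            pa, pb = random.choices(population, weights=[s + 1e-6 for s in scores], k=2)
            child = pa[:]
            if random.random() < cx_rate:                       # b. single-point crossover
                point = random.randint(1, n_bits - 1)
                child = pa[:point] + pb[point:]
            child = [b ^ 1 if random.random() < mut_rate else b for b in child]  # mutation
            next_gen.append(child)
        population = next_gen                                   # d. replace the old population
    return max(population, key=fitness)

best = run_ga(fitness=sum)   # maximise the number of ones as a toy objective
print(sum(best))
```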
The building block hypothesis

Genetic algorithms are simple to implement, but their behaviour is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:

1. A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e., low-order, low-defining-length schemata with above-average fitness.
2. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Problems with genetic algorithms

As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, since mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima in which a traditional hill-climbing algorithm might get stuck. Observe that commonly used crossover operators cannot change a uniform population; mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

Problems that appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering and are often used as an approach to global optimization problems. Examples of problems solved by genetic algorithms include mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, and walking methods for computer figures. Many of their solutions have been highly effective, unlike anything a human engineer would have produced, and inscrutable as to how they arrived at that solution.

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating-point representations. The floating-point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not truly reflect the building-block theory proposed by Holland in the 1970s; this theory is not without support, though, based on theoretical and experimental results. The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers that are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data-element boundaries. For most data types, specific variation operators can be designed, and different chromosomal data types seem to work better or worse for different problem domains. When bit-string representations of integers are used, Gray coding is often employed; in this way, small changes in the integer can be readily effected through mutations or crossovers.
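The remark about Gray coding can be illustrated with the standard binary-reflected Gray code, under which integers that differ by one differ in exactly one bit, so a one-bit mutation corresponds to a small change of the encoded parameter.

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Inverse of to_gray (prefix XOR of the shifted codeword)."""
    n = g
    shift = 1
    while (g >> shift) > 0:
        n ^= g >> shift
        shift += 1
    return n

# Neighbouring integers map to codewords that differ in exactly one bit.
print(format(to_gray(7), "04b"), format(to_gray(8), "04b"))   # 0100 1100
assert all(from_gray(to_gray(i)) == i for i in range(256))
```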
Fig. 9. Input images
Fig. 10. Fog removal algorithm (genetic algorithm) outputs

Fig. 11. Histograms of the input and output images
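The paper does not spell out the chromosome encoding or the fitness function used by the genetic algorithm for fog removal, so the end-to-end sketch below is only one plausible reading of the block diagram: the GA searches over a simple two-parameter intensity mapping (gain and gamma) and scores each candidate by the standard deviation of the mapped image, a crude contrast measure. All names, ranges, and parameter values are assumptions, not the authors' algorithm.

```python
import random
import numpy as np
import cv2

def enhance(gray, gain, gamma):
    """Apply a candidate intensity mapping (gain and gamma correction)."""
    x = gray.astype(np.float64) / 255.0
    y = np.clip(gain * np.power(x, gamma), 0.0, 1.0)
    return (y * 255.0).astype(np.uint8)

def contrast_fitness(img):
    """Crude contrast score: standard deviation of the intensities."""
    return float(np.std(img))

def ga_defog(gray, pop_size=20, generations=30):
    # chromosomes are (gain, gamma) pairs drawn from assumed ranges
    pop = [(random.uniform(0.8, 2.0), random.uniform(0.3, 1.5)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: contrast_fitness(enhance(gray, *c)), reverse=True)
        parents = scored[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (random.choice([a[0], b[0]]),           # uniform crossover
                     random.choice([a[1], b[1]]))
            child = (child[0] + random.gauss(0, 0.05),      # Gaussian mutation
                     child[1] + random.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: contrast_fitness(enhance(gray, *c)))

img = cv2.imread("foggy.jpg")                               # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                            # edge map from the block diagram
gain, gamma = ga_defog(gray)
cv2.imwrite("defogged.png", enhance(gray, gain, gamma))
```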
V. CONCLUSION

In this paper, we proposed a method that classifies fog density according to a visibility feature of a preceding vehicle and the distance to that vehicle. We obtained promising results in an experiment using data collected from an in-vehicle camera while driving. The results confirmed that the proposed method can make judgments that comply with human perception. In future work, we will consider an improved visibility feature that does not vary with the type or colour of the preceding vehicle, as well as the situation in which there is no preceding vehicle at all.

VI. REFERENCES

1. N. Hautière, J.-P. Tarel, and D. Aubert, "Mitigation of visibility loss for advanced camera-based driver assistance," IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, Jun. 2010.
2. R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2008.
3. N. Hautière, J.-P. Tarel, and D. Aubert, "Towards fog-free in-vehicle vision systems through contrast restoration," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2007, pp. 1-8.
4. N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, "Automatic fog detection and estimation of visibility distance through use of an onboard camera," Machine Vision and Applications, vol. 17, no. 1, pp. 8-20, Apr. 2006.
5. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713-724, Jun. 2003.