
Abstract

Fire incidence is one of the major disasters of human society. This paper proposes a still-image-based fire detection system.

It has many advantages, such as lower cost, faster response, and large coverage. Existing methods cannot detect fire regions adequately; the proposed method addresses this issue.

A binary contour image of flame, capable of classifying an image as fire or no fire, is proposed in this study for fire detection. The color of a fire area can range from red-yellow to almost white, so it is challenging to decide whether a detected area is actually fire.

Our proposed method consists of five parts. First, a digital image is taken from the dataset, sampled, and mapped as a grid of dots or picture elements. We then convert the image into separate RGB color-range matrices.
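As a minimal sketch of this sampling-and-separation step (illustrative only, not the authors' implementation), an image can be held as an H x W grid of pixels and split into one matrix per RGB channel:

```python
import numpy as np

# A tiny 2 x 2 sample image: each picture element (pixel) holds (R, G, B).
img = np.array([[[255, 160, 30], [10, 20, 200]],
                [[240, 150, 25], [0, 0, 0]]], dtype=np.uint8)

# Separate RGB color-range matrices, one per channel.
R, G, B = img[..., 0], img[..., 1], img[..., 2]
print(R.tolist())  # [[255, 10], [240, 0]]
```

Each channel matrix can then be thresholded independently in the later steps.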


We define some rules to select the yellow color range of the image and then convert the image to binary range. Finally, the binary contour image of the flame provides the information that detects the fire. We have analyzed different types of fire images in different varieties and found an accuracy of 85-90%.

Keywords: dataset, digital image, binary range and matrix, binary contour image, fire detection

Copyright © 2016 Institute of Advanced Engineering and Science. All rights reserved.

1. Introduction

Fire is one of the biggest disasters for human beings. Fire detection is very challenging, and fire can cause environmental disasters or serious damage to human life.

In particular, accidents involving fire and explosion have attracted interest in the development of automatic fire detection systems. Existing solutions are based on ultraviolet and infrared sensors, and usually explore the chemical properties of fire and smoke in particle samplings [1]. However, the main constraint of these solutions is that the sensors must be set near the fire source, which increases the complexity and cost of installation and maintenance, especially in large open areas.

Several methods for fire detection in videos have been proposed in recent years. These methods use two steps to detect fire. First, they explore the visual features extracted from the video frames (images); second, they take advantage of the motion and other temporal features of the videos [2]. In the first step, the general approach is to create a mathematical/rule-based model defining a sub-space of the color space that represents all the fire-colored pixels in the image.

2. Literature Review

There are several empirical models using different color spaces, such as RGB [1], YCbCr [3], CIE Lab [4], and HSV [5]. In these cases, the limitation is the lack of correspondence of these models to fire properties beyond color.

The problem is that high illumination values or reddish-yellowish objects lead to a higher false-positive rate. These false positives are usually eliminated in the second step through temporal analysis. In contrast to such methods, our proposal is to detect fire in still images, without any further (temporal) information, using only visual features extracted from the images.

To overcome the aforementioned problems, we propose a new method to detect fire in still images that is based on the combination of two approaches: pixel-color classification and texture classification. The use of color is a traditional approach to the problem, while the use of texture is promising because fire traces present particular textures that permit distinguishing between actual fire and fire-like regions. We show that, even with just the information present in the images, it is possible to achieve a high accuracy level in such detection.

Fire detectors are one of those amazing inventions that, because of mass production, cost practically nothing. Recently, several methods have been proposed with the aim of analyzing the videos acquired by traditional video surveillance cameras and detecting fire or smoke, and the current scientific effort [6, 7] has focused on improving the robustness and performance of the proposed approaches so as to make commercial exploitation possible. Although a strict classification of the methods is not simple, two main classes can be distinguished depending on the analyzed features: color based and motion based. The methods using the first kind of features are based on the consideration that a flame, under the assumption that it is generated by common combustibles such as wood, plastic, paper, or others, can be reliably characterized by its color, so that the evaluation of the color components in RGB (Red, Green, Blue), YUV (Luminance, Chrominance), or any other color space is adequately robust to identify the presence of flames.

This simple idea inspires several recent methods: for instance, in [8] and [9], fire pixels are recognized by an advanced background subtraction technique and a statistical RGB color model. A set of images was used and a region of the color space was experimentally identified, so that if a pixel belongs to this particular region, it can be classified as fire.

Generally, current residential fire detection research focuses on upholstered furniture/mattress fires.

The fire losses from residential furniture fires may decrease due to the development of new regulations; therefore, it is imperative to evaluate new detection approaches against the next most significant fire losses in residential fires. The existing methods cannot detect fire regions properly, and many other features have to be taken into consideration. In our research, we propose a method that can overcome these issues. A novel feature extraction method, capable of classifying an object as fire or no fire in a video frame for fire detection, is proposed in this study.

The color of a fire area can range from red-yellow to almost white, so it is challenging to decide whether the detected fire is actually fire. The irregularity of the boundary of the fire-colored region is taken into consideration, and the image is converted to a gray-scale image. Eventually, our approach can identify more relevant concepts for detecting fire, especially through the techniques of converting images to binary images. In this paper we have worked with the sample image shown in Figure 1.

Figure 1. Sample Input Image to Detect Fire

Our proposed method detects not only the fire but also the intensity of the fire, such as low fire, medium fire, and no fire. When the flame becomes more violent, it changes shape more rapidly. Therefore, the variation of the flame is needed to measure the intensity of the flames. Here, the contour information of the binary image is needed.

Since the contour information of the flame is needed, the corresponding binary image b(x, y) of a contour image c(x, y) is defined as follows:

b(x, y) = 1 if c(x, y) belongs to the flame contour, and b(x, y) = 0 otherwise.

In the contour image, the remaining fire color is set to white, and the result is called the binary contour image.

Now, the difference d(x, y) of two binary contour images b_t(x, y) and b_{t-1}(x, y) is as follows:

d(x, y) = | b_t(x, y) - b_{t-1}(x, y) |

The image d(x, y) is called the difference contour image. After obtaining a difference contour image, the intensity of the fire is measured using the amount of white pixels. The white-pixel ratio r can be defined as:

r = w / n

where w is the number of white pixels and n is the total number of pixels in the contour image d(x, y). The higher the ratio, the higher the intensity of the fire. Here, we consider three types of fire: small fire, medium fire, and big fire. If the ratio equals 0, no fire is detected in the image.
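The difference contour image and the white-pixel ratio can be sketched as follows (a minimal illustration with numpy arrays standing in for the binary contour images; not the authors' code):

```python
import numpy as np

def white_pixel_ratio(b_prev, b_curr):
    """Compute the difference contour image d = |b_t - b_{t-1}| of two
    binary contour images, then the white-pixel ratio r = w / n."""
    d = np.abs(b_curr.astype(int) - b_prev.astype(int))
    w = int(d.sum())   # w: number of white (set) pixels in d
    n = d.size         # n: total number of pixels in d
    return w / n

b_prev = np.array([[0, 1], [0, 0]])
b_curr = np.array([[1, 1], [0, 1]])
print(white_pixel_ratio(b_prev, b_curr))  # 0.5 (2 of the 4 pixels differ)
```

Identical frames give a ratio of 0, which corresponds to the "no fire" case below.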

The second condition confirms that a small fire is recognized. Similarly, the third condition indicates that a medium fire is detected, and finally the fourth condition indicates that a big fire is detected. In our research, we exploit the hierarchical structure and its relations, together with binary images, in order to identify and predict more specific concepts for fire detection. This test image shows the area of contour fire pixels (Figure 2).
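The four conditions can be sketched as a simple mapping from the white-pixel ratio to a fire class. The threshold values t_small and t_medium here are illustrative assumptions, since the paper does not state its exact cutoffs:

```python
def fire_intensity(ratio, t_small=0.01, t_medium=0.1):
    """Map the white-pixel ratio to a fire class using the four
    conditions described in the text (threshold values assumed)."""
    if ratio == 0:
        return "no fire"       # first condition
    if ratio < t_small:
        return "small fire"    # second condition
    if ratio < t_medium:
        return "medium fire"   # third condition
    return "big fire"          # fourth condition

print(fire_intensity(0.0))     # no fire
print(fire_intensity(0.005))   # small fire
print(fire_intensity(0.56))    # big fire
```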

Figure 2. Detected Fire Area

In our experimental results, we see that the accuracy rate is nearly 90%.

3. Proposed Method

This section covers the details of the proposed fire detection method. It is assumed that the image-capturing device produces its output in RGB format.

During an occurrence of fire, smoke and flame can be seen, and with increasing fire intensity, smoke and flame become more visible. So, in order to detect the occurrence of fire, both flame and smoke need to be analyzed.

Many researchers have used distinctive properties of fire such as color, motion, edge, and shape. Lai et al. [10] suggested that features of a fire event can be utilized for fire detection in early stages. Han et al. [11] used color and motion features, while Kandil et al. [12] and Liu et al. [13] utilized shape and color features to detect an occurrence of fire.

The main aspect of this research is to develop a system with good accuracy. The research is ongoing, and some proposals are under consideration as complements to the currently planned approach. The basic fire detection algorithm is:
a) First, obtain a digital image from the image dataset.
b) Convert the image to an RGB (Red, Green, Blue) color-range matrix.
c) Select the yellow color range of the photo from the RGB color-range matrix and convert the image to binary range.
d) After converting to a binary image, count how many set pixels are inside the image.
e) From the set-pixel count, decide whether fire is present or not.
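The steps above can be sketched end to end as follows. The yellow-range thresholds and the minimum set-pixel count are illustrative assumptions, since the paper does not list its exact values:

```python
import numpy as np

def detect_fire(rgb, r_min=200, g_min=120, b_max=140, min_set_pixels=1):
    """Steps (b)-(e): split the RGB channels, keep pixels in an assumed
    yellow/fire range, count the set pixels, and decide fire / no fire."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]        # step (b)
    binary = (r >= r_min) & (g >= g_min) & (b <= b_max)    # step (c)
    set_count = int(binary.sum())                          # step (d)
    return set_count >= min_set_pixels                     # step (e)

fire_like = np.full((4, 4, 3), (250, 180, 30), dtype=np.uint8)  # yellow-ish
sky_like = np.full((4, 4, 3), (40, 90, 200), dtype=np.uint8)    # blue-ish
print(detect_fire(fire_like), detect_fire(sky_like))  # True False
```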

The proposed fire detection method can be divided into five major parts: (1) collect the image from a video frame, (2) convert the image to RGB color, (3) select the yellow color range of the image, (4) convert the image to binary range, and (5) detect the intensity of fire via the binary contour image, as depicted in Figure 3.

Figure 3. Flow Chart of Proposed Algorithm for Fire Detection in Image Sequences

Table 1. Summary for Fire and No Fire Images

Image Name | Height | Width | Test Bit | Ratio   | Comments
fire4444   | 669    | 1000  | 75007    | 0.1121  | High Range
fire14     | 1200   | 1600  | 282      | 0.00014 | Actually no fire
fire3      | 2896   | 1944  | 342869   | 0.0609  | High Range
fire       | 335    | 423   | 4209     | 0.0297  | Medium Range
fire10     | 533    | 517   | 154321   | 0.56    | High Range
fire444    | 211    | 239   | 0        | 0       | No fire
fire4      | 282    | 425   | 32091    | 0.2678  | High Range
fire6      | 2592   | 3872  | 22006    | 0.0022  | Low Range
fire805    | 300    | 400   | 11860    | 0.0988  | Medium Range
fire7      | 823    | 1291  | 0        | 0       | No fire

4. Experimental Results and Discussion

We performed experiments using a dataset of fire images. It consists of different images with various resolutions and is divided into two categories: images containing fire and images without fire. The fire images consist of emergency situations with different fire incidents, such as buildings on fire, industrial fires, car accidents, and riots. These images were manually cropped by human experts.

The remaining images consist of emergency situations with no visible fire and also images with fire-like regions, such as sunsets and red or yellow objects.

We took two RGB image frames and applied the algorithm to them; the results are shown in Figure 4(a), Figure 4(b), and Figure 4(c). For sample RGB image frames containing fire, the figure contains sub-images of the different steps of the algorithm: the first image frame, the second image frame containing flame, the red component of the fire pixels according to the condition mentioned above, the motion detected between the two frames, and a last sub-image showing the fire pixels detected in the image.

Figure 4. Analytical View for Different Images: (a) and (b) each show an intermediate result of processing and the contour fire pixels; (c) shows the results of the different ranges (Low Range, Medium Range, High Range)

5. Results

In our research there are 2 classes of images.

In the first step, we applied the folding method to the two classes of images, Fire and No Fire, and we see that our proposed fire detection method gives a good result. Each class contains 10 images of different varieties. The results are shown in Table 2.

Table 2. Number of Class Accuracy

Class Name | Number of Images | Fire Detected | Not Detected
Fire       | 10               | 8             | 2
No Fire    | 10               | 3             | 7

Table 3. Results per Class Success

Class Name | No. of Images | Fire Detected | Fire Not Detected | Accuracy
Fire       | 10            | 8             | 2                 | 80.00%
No Fire    | 10            | 1             | 9                 | 90.00%

Figure 5. Success and Failure per Class

Figure 6. Accuracy Graph of Different Classes

Cross-validation, sometimes called rotation estimation, is a technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice.

One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.

In k-fold cross-validation, the original sample is randomly partitioned into k equal-size subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k-1 subsamples are used as training data. The cross-validation process is then repeated k times (the folds), with each of the k subsamples used exactly once as the validation data. The k results from the folds can then be averaged (or otherwise combined) to produce a single estimate. The advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once.
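The k-fold procedure just described can be sketched as follows (an illustrative partitioning, not the authors' code):

```python
def k_fold_splits(items, k):
    """Partition items into k roughly equal folds; yield (training,
    validation) pairs so each fold serves exactly once as validation."""
    items = list(items)
    folds = [items[i::k] for i in range(k)]  # k subsamples
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, folds[i]

splits = list(k_fold_splits(range(10), k=5))
print(len(splits))    # 5 rounds, one per fold
print(splits[0][1])   # validation subsample of the first round: [0, 5]
```

Every observation appears in exactly one validation fold, matching the property noted above.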

10-fold cross-validation is commonlyused, but in general k remains an unfixed parameter.5.1. K-Fold Cross-ValidationIn stratified k-fold cross-validation 14, 15, the folds are selected so that the meanresponse value is approximately equal in all the folds.

In the case of a dichotomous classification, this means that each fold contains roughly the same proportions of the two types of class labels.

2-fold cross-validation is the simplest variation of k-fold cross-validation. For each fold, we randomly assign data points to two sets d0 and d1 so that both sets are of equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d0 and test on d1, followed by training on d1 and testing on d0. This has the advantage that our training and test sets are both large, and each data point is used for both training and validation on each fold.

The 2-fold method was then applied to the classes of images, meaning that 50 percent of the images of a class are in the training set and 50 percent of the images of that class are in the test set.
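The 2-fold variant can be sketched as follows: shuffle, cut the data into two equal halves, then swap the roles of d0 and d1 (the seed here is an arbitrary illustrative choice):

```python
import random

def two_fold_rounds(items, seed=0):
    """Shuffle, cut into equal halves d0 and d1, and return the two
    (train, test) rounds: (d0, d1) and (d1, d0)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    half = len(items) // 2
    d0, d1 = items[:half], items[half:]
    return [(d0, d1), (d1, d0)]

rounds = two_fold_rounds(range(10))
print(len(rounds[0][0]), len(rounds[0][1]))  # 5 5
```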

The results are given in Table 3. The per-class success and failure rates are charted in Figure 5, and the accuracy rates of the two classes, together with the expected class, are shown in Figure 6.

6. Conclusion

In this paper, an image-processing-based fire detection system using a color model was proposed. We collected a number of sequential frames from original videos, consisting of fire and non-fire images. The proposed method consists of five main stages: collect the image from a video frame, convert the image to RGB color, select the yellow color range of the image, convert the image to binary range, and detect the intensity of fire via the binary contour image. The proposed method was applied to video sequences, and detected fire was classified into three groups (small fire, medium fire, and large fire) based on the threshold values.

7. Future Directions

To accomplish more valuable and more accurate video fire detection, this paper points out future directions. We will improve the results of the step that measures fire intensity via the binary contour image. A feature extraction method will be included before step five, and we will use machine learning algorithms such as SVM and KNN as classifiers to detect fire more accurately, replacing step five.

Acknowledgements

I am grateful to Dr. Mohammad Khairul Islam, Professor, Department of Computer Science & Engineering, University of Chittagong, Bangladesh, for his idea and for the inspiration he has given me.