Fire detection using a CNN model
Fire is one of the most destructive and unpredictable natural phenomena that can cause severe damage to life and property. Early and accurate detection of fire is crucial for minimizing losses and ensuring the safety of people and assets. However, traditional fire detection methods, such as smoke detectors and thermal sensors, have some limitations, such as high false alarm rates, low sensitivity, and slow response time. Therefore, there is a need for more advanced and reliable fire detection techniques that can leverage the power of computer vision and artificial intelligence.

One of the promising approaches for fire detection is using convolutional neural networks (CNNs), which are a type of deep learning model that can learn to extract features from images and classify them into different categories. CNNs have shown remarkable performance in various image recognition tasks, such as face recognition, object detection, and scene segmentation. In this blog post, we will introduce the basic concept of CNNs and how they can be applied to fire detection using image data.

CNNs are composed of multiple layers that perform different operations on the input image: convolution, pooling, activation, and fully connected layers. The convolution layer is the core component of a CNN; it applies a set of filters to the input image to produce feature maps that capture local patterns and edges. The pooling layer reduces the size of the feature maps through a downsampling operation, such as max pooling or average pooling, which lowers the computational cost and helps prevent overfitting. The activation layer applies a nonlinear function, such as sigmoid or ReLU, to the feature maps, introducing nonlinearity and increasing the expressive power of the model. The fully connected layers connect every neuron in one layer to every neuron in the next, and the final fully connected layer produces the classification result.
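The convolution, ReLU, and max-pooling steps can be sketched in plain NumPy on a tiny made-up image (an illustration only, with invented values; the actual model later in this post uses optimized Keras layers):

```python
import numpy as np

# A 5x5 toy image with a vertical edge between columns 1 and 2.
img = np.array([[0., 0., 1., 1., 1.],
                [0., 0., 1., 1., 1.],
                [0., 0., 1., 1., 1.],
                [0., 0., 1., 1., 1.],
                [0., 0., 1., 1., 1.]])
# A 2x2 filter that responds to that vertical edge.
kernel = np.array([[-1., 1.],
                   [-1., 1.]])

# Convolution (cross-correlation, "valid" padding): slide the filter over the image.
h = img.shape[0] - kernel.shape[0] + 1
w = img.shape[1] - kernel.shape[1] + 1
feature_map = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        feature_map[i, j] = np.sum(img[i:i + 2, j:j + 2] * kernel)

# ReLU activation: keep only the positive responses.
activated = np.maximum(feature_map, 0)

# 2x2 max pooling with stride 2: halves each spatial dimension.
pooled = activated.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(feature_map.shape, pooled.shape)
```

The feature map lights up (value 2) exactly where the filter overlaps the edge, and pooling keeps that strongest response while shrinking the map.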

To train a CNN model for fire detection, we need a large dataset of fire images and non-fire images, which can be obtained from various sources, such as online databases, video surveillance cameras, or drones. The dataset should be divided into training, validation, and test sets, which are used for learning the model parameters, tuning the hyperparameters, and evaluating the model performance, respectively. The training process involves feeding the images to the CNN model and comparing the output with the ground truth labels using a loss function, such as cross-entropy or mean squared error. The loss function measures how well the model predicts the correct class for each image. The goal is to minimize the loss function by updating the model parameters using an optimization algorithm, such as gradient descent or Adam.
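To make the loss concrete, here is binary cross-entropy (the loss used later in this post) computed by hand for a small batch of hypothetical labels and predictions; Keras does this internally when you pass `loss='binary_crossentropy'`:

```python
import numpy as np

y_true = np.array([1., 0., 1., 0.])      # ground-truth labels (hypothetical)
y_pred = np.array([0.9, 0.2, 0.6, 0.1])  # model's sigmoid outputs (hypothetical)
eps = 1e-7                               # guard against log(0)

# Binary cross-entropy: penalize confident wrong predictions heavily.
loss = -np.mean(y_true * np.log(y_pred + eps)
                + (1 - y_true) * np.log(1 - y_pred + eps))
print(round(loss, 4))
```

The closer each prediction is to its true label, the smaller the loss; gradient descent (or Adam) nudges the weights to drive this average down.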

The test process involves applying the trained CNN model to new images it has not seen before and measuring its accuracy, precision, recall, and F1 score. These metrics indicate how well the model can detect fire images and avoid false alarms. High accuracy means that the model correctly classifies most images into the fire or non-fire category. High precision means that most of the images predicted as fire are actually fire images. High recall means that most of the fire images are correctly detected by the model. The F1 score is the harmonic mean of precision and recall and balances the two metrics.
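All four metrics follow directly from the confusion-matrix counts. The sketch below uses hypothetical counts, not results from this model:

```python
# Hypothetical test-set counts: true/false positives and negatives.
tp, fp, fn, tn = 90, 5, 10, 95

accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction classified correctly
precision = tp / (tp + fp)                   # of predicted fires, how many were real
recall = tp / (tp + fn)                      # of real fires, how many were caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, recall, f1)
```

Note the trade-off: lowering the decision threshold catches more fires (higher recall) but raises false alarms (lower precision); F1 summarizes the balance.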

In conclusion, CNNs are a powerful tool for fire detection that can overcome some of the limitations of traditional methods. They can learn to extract features from images automatically and classify them into fire or non-fire categories with high accuracy and speed. However, there are also some challenges and limitations that need to be addressed, such as data quality and quantity, model complexity and interpretability, and generalization ability across different scenarios and environments. Therefore, further research and development are needed to improve the performance and robustness of CNN-based fire detection systems.

Code in Python

Dataset: wildfire detection image data (Kaggle)

!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/

!kaggle datasets download -d brsdincer/wildfire-detection-image-data

 

import zipfile

with zipfile.ZipFile('/content/wildfire-detection-image-data.zip', 'r') as zip_ref:
    zip_ref.extractall('/content')

 

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt

 

train = ImageDataGenerator(rescale=1/255)
test = ImageDataGenerator(rescale=1/255)

train_dataset = train.flow_from_directory('/content/forest_fire/Training and Validation',
                                          target_size=(150, 150),
                                          batch_size=32,
                                          class_mode='binary')
test_dataset = test.flow_from_directory('/content/forest_fire/Testing',
                                        target_size=(150, 150),
                                        batch_size=32,
                                        class_mode='binary')

 

test_dataset.class_indices

 


 

model = keras.Sequential()
model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(keras.layers.MaxPool2D(2, 2))

model.add(keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(keras.layers.MaxPool2D(2, 2))

model.add(keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(keras.layers.MaxPool2D(2, 2))

model.add(keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(keras.layers.MaxPool2D(2, 2))

model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(512, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()

 

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

r = model.fit(train_dataset, epochs=3, validation_data=test_dataset)

predictions = model.predict(test_dataset)
predictions = np.round(predictions)

 

plt.plot(r.history['accuracy'], label='accuracy')
plt.plot(r.history['val_accuracy'], label='val_accuracy')
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.title('Training history')
plt.ylabel('Accuracy / Loss')
plt.xlabel('Epochs')
plt.legend()

 

plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()

 

plt.plot(r.history['accuracy'], label='accuracy')
plt.plot(r.history['val_accuracy'], label='val_accuracy')
plt.legend()

 

def predictImage(filename):
    img1 = image.load_img(filename, target_size=(150, 150))
    plt.imshow(img1)
    Y = image.img_to_array(img1)
    X = np.expand_dims(Y, axis=0) / 255  # apply the same rescaling used during training
    val = model.predict(X)
    print(val)
    if val >= 0.5:                       # sigmoid output; class 1 corresponds to "no fire"
        plt.xlabel("No fire", fontsize=30)
    else:
        plt.xlabel("fire", fontsize=30)

predictImage('/content/smokeee7.png')


