Enhanced Wildfire Detection Using a Mixture of Experts Approach

Authors

  • Brent Kong, Scotts Valley High School
  • Dr. Zhang, University of California, Santa Cruz

DOI:

https://doi.org/10.47611/jsrhs.v14i1.8612

Keywords:

Wildfires, Mixture of Experts, ML

Abstract

With increasing global wildfire severity, effective fire detection methods are essential to mitigate widespread environmental and health impacts. A recent approach to this problem is the application of ensemble machine learning methods, which combine several models to create a more effective one. However, this raises several questions, notably whether an ensemble method is more effective than any individual model and whether increasing the number of constituent models leads to overfitting. This paper conducts an ablation study on the Mixture of Experts (MoE) approach for forest fire detection from satellite imagery on a Canadian dataset. The full model (MoE6) comprises six state-of-the-art architectures: InceptionNet, ResNet, Vision Transformer (ViT), AlexNet, VGG-Net, and a baseline CNN. Experts are systematically removed from MoE6 to form MoE4 and MoE2, which comprise only the top four and top two performing constituent models, respectively. We hypothesize that the MoE ensemble approach will outperform any constituent model (two heads are better than one). Furthermore, among the MoE architectures, we hypothesize that MoE2 will perform best, as it integrates characteristics of the top-performing architectures while mitigating overfitting. However, the results show that the original MoE6 was the top performer, achieving a peak accuracy of 93.13% and an ROC-AUC of 0.9303. This work provides a promising path toward improving wildfire detection accuracy and response times, potentially reducing the devastation caused by wildfires globally.
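For readers unfamiliar with how a gating network combines expert predictions, the sketch below is a minimal, illustrative Keras example of an MoE ensemble for binary fire / no-fire classification. It is not the authors' implementation: the tiny stand-in CNN experts, the input size, the gating design, and the names make_expert and build_moe are assumptions introduced for this example only.

```python
# Minimal illustrative sketch of a Mixture of Experts (MoE) ensemble for
# binary wildfire classification in Keras. The small CNN experts, input
# size, and gating design are assumptions for illustration and do not
# reproduce the paper's MoE6/MoE4/MoE2 models.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (224, 224, 3)   # assumed satellite-tile input size

def make_expert(name):
    """Stand-in expert: a tiny CNN ending in one sigmoid unit. In the paper,
    each expert is a full architecture (InceptionNet, ResNet, ViT, ...)."""
    inp = layers.Input(IMG_SHAPE)
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(inp, out, name=name)

def build_moe(num_experts):
    """Combine num_experts expert outputs with a learned softmax gate."""
    inp = layers.Input(IMG_SHAPE)
    preds = [make_expert(f"expert_{i}")(inp) for i in range(num_experts)]
    preds = layers.Concatenate()(preds)                 # (batch, num_experts)

    # Gating network: per-image softmax weights over the experts.
    g = layers.GlobalAveragePooling2D()(inp)
    g = layers.Dense(32, activation="relu")(g)
    gate = layers.Dense(num_experts, activation="softmax")(g)

    # Weighted sum of expert probabilities -> final fire / no-fire score.
    out = layers.Dot(axes=1)([preds, gate])
    moe = models.Model(inp, out, name=f"MoE{num_experts}")
    moe.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy", tf.keras.metrics.AUC(name="roc_auc")])
    return moe

# build_moe(6), build_moe(4), build_moe(2) mirror the MoE6/MoE4/MoE2 ablation.
```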


Author Biography

Dr. Zhang, University of California, Santa Cruz

Associate Professor, Department of Electrical and Computer Engineering

References or Bibliography

A. Aaba. Wildfire prediction dataset (satellite images), Feb 2023. URL https://www.kaggle.com/datasets/abdelghaniaaba/wildfire-prediction-dataset/data.

J. Brownlee. ROC curves and precision-recall curves for imbalanced classification, Sep 2020. URL https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-imbalanced-classification/.

A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 2021. URL https://arxiv.org/abs/2010.11929.

E. Elgazar. Vision Transformer (ViT) + Keras pretrained models, Feb 2023. URL https://www.kaggle.com/code/ebrahimelgazar/vision-transformer-vit-keras-pretrained-models.

S. Gaur, J. S. Kumar, and S. Shukla. A comparative assessment of CNN-Sigmoid and CNN-SVM model for forest fire detection. In 2024 IEEE 9th Intl. Conf. for Convergence in Technology (I2CT), pages 1–6, 2024.

GeeksforGeeks. Normalize an image in OpenCV Python, May 2024. URL https://www.geeksforgeeks.org/normalize-an-image-in-opencv-python/.

Government and Municipalities of Québec. URL https://open.canada.ca/data/en/dataset/9d8f219c-4df0-4481-926f-8a2a532ca003.

Government of Canada, Jul 2023. URL https://www.canada.ca/en/public-health/services/emergency-preparedness-response/rapid-risk-assessments-public-health-professionals/risk-profile-wildfires-2023.html.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

IBM, Oct 2021. URL https://www.ibm.com/topics/convolutional-neural-networks.

IBM, Apr 2024. URL https://www.ibm.com/topics/mixture-of-experts.

R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, March 1991. ISSN 0899-7667.

A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012. URL https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.

V. Patil. Wildfire prediction CNN, May 2024. URL https://www.kaggle.com/code/vaishnavipatil4848/wildfire-prediction-cnn/notebook.

Prasann. Mixture of experts (MoE) explained, Jan 2024. URL https://www.kaggle.com/code/newtonbaba12345/mixture-of-experts-moe-explained.

H. C. Reis and V. Turk. Detection of forest fire using deep convolutional neural networks with transfer learning approach. Applied Soft Computing, 143:110362, 2023. ISSN 1568-4946. URL https://www.sciencedirect.com/science/article/pii/S1568494623003800.

K. Salama. Keras documentation: Image classification with Vision Transformer, 2021. URL https://keras.io/examples/vision/image_classification_with_vision_transformer/.

V. E. Sathishkumar, J. Cho, M. Subramanian, and O. S. Naren. Forest fire and smoke detection using deep learning-based learning without forgetting. Fire Ecology, 19(1), Feb 2023.

J. Shaikh. Deep learning in the trenches: Understanding Inception network from scratch, Nov 2023. URL https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/.

Y. Shinde. How to code your ResNet from scratch in TensorFlow?, Sep 2021. URL https://www.analyticsvidhya.com/blog/2021/08/how-to-code-your-resnet-from-scratch-in-tensorflow/.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Intl. Conf. on Learning Representations, 2015.

D. Spector. Will Quebec's forest fire season be as bad as it was last year?, May 2024. URL https://globalnews.ca/news/10497658/quebec-forest-wild-fire-outlook-summer-2024/.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions, 2014. URL https://arxiv.org/abs/1409.4842.

M. Tripathi. Image processing using CNN: A beginner's guide, May 2024. URL https://www.analyticsvidhya.com/blog/2021/06/image-processing-using-cnn-a-beginners-guide/.

P. Varshney. AlexNet architecture: A complete guide, Jul 2020a. URL https://www.kaggle.com/code/blurredmachine/alexnet-architecture-a-complete-guide.

P. Varshney. VGGNet-16 architecture: A complete guide, Jul 2020b. URL https://www.kaggle.com/code/blurredmachine/vggnet-16-architecture-a-complete-guide.

X. You, Z. Zheng, K. Yang, L. Yu, J. Liu, J. Chen, X. Lu, and S. Guo. A PSO-CNN-based deep learning model for predicting forest fire risk on a national scale. Forests, 15(1), 2024. ISSN 1999-4907. URL https://www.mdpi.com/1999-4907/15/1/86.

W. Zhang, J. Tanida, K. Itoh, and Y. Ichioka. Shift invariant pattern recognition neural network and its optical architecture. Proceedings of Annual Conference of the Japan Society of Applied Physics, 1988.

Published

02-28-2025

How to Cite

Kong, B., & Zhang, Y. (2025). Enhanced Wildfire Detection Using a Mixture of Experts Approach. Journal of Student Research, 14(1). https://doi.org/10.47611/jsrhs.v14i1.8612

Issue

Section

HS Research Articles