Ogurtsov S., Efremov V., Leus A. Application of artificial intelligence technologies in processing images from camera traps: principles, software, approaches // Principy èkologii. 2024. № 1. P. 4‒37. DOI: 10.15393/j1.art.2024.14662


Issue № 1

Analytical review


Application of artificial intelligence technologies in processing images from camera traps: principles, software, approaches

Ogurtsov Sergey, PhD, Central Forest Nature Reserve; A.N. Severtsov Institute of Ecology and Evolution of the Russian Academy of Sciences, Russia, 172521, Tver Region, Nelidovsky city district, Zapovedniy village, etundra@mail.ru

Efremov Vladislav, Moscow Institute of Physics and Technology (National Research University), Russia, 127576, Moscow, efremov.va@phystech.edu

Leus Andrey, PhD, Moscow Institute of Physics and Technology (National Research University), Russia, 141000, Moscow Region, Mytishchi, leus.av@mipt.ru
Keywords:
image analysis
detection
classification
computer vision
machine learning
neural networks
pattern recognition
camera traps
Summary: Artificial intelligence (AI) is increasingly penetrating environmental science, most rapidly in image-based research, for example with drones or camera traps. This review discusses the current state of camera trap research that applies AI technologies, namely computer vision and deep learning. It briefly covers the basic concepts of machine learning (ML) and deep neural networks that a modern biologist or ecologist needs in order to understand image processing and analysis. The possibilities of using AI for pattern recognition and object detection in images are discussed. The review surveys modern software that uses computer vision and machine learning to recognize photos and videos from camera traps, along with open datasets for training ML models. In total, eight ML tools are considered: MegaDetector, EcoAssist, MLWIC2, Conservation AI, FasterRCNN+InceptionResNetV2, DeepFaune, ClassifyMe, and the first domestic development from the Moscow Institute of Physics and Technology. The software was tested on camera trap data from the Central Forest Nature Reserve, and its advantages and disadvantages were identified. In conclusion, the potential of AI in modern camera trap research is discussed. This review will be useful both to biologists and ecologists, as a general introduction to neural networks and their application to image recognition, and to ML specialists and programmers, for understanding the applicability of ML models in environmental science and the possibilities of training them on large datasets.
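A common first step with the detection tools reviewed here (e.g. MegaDetector) is separating empty frames from images containing animals, using the detector's confidence scores. The following minimal Python sketch illustrates that post-processing step; the JSON layout and field names (`images`, `detections`, `category`, `conf`) are an assumption modeled on MegaDetector's commonly documented batch-output format, and the file names are hypothetical.

```python
# Sketch: split camera-trap images into "animal" and "empty" sets from a
# detector's batch output. Field names are assumptions modeled on the
# MegaDetector batch JSON format; adapt to your tool's actual output.

def split_empty(batch_output, threshold=0.2, animal_category="1"):
    """Return (animal_files, empty_files) given detector batch output."""
    animal_files, empty_files = [], []
    for image in batch_output["images"]:
        # An image counts as "animal" if any detection of the animal
        # category exceeds the confidence threshold.
        has_animal = any(
            det["category"] == animal_category and det["conf"] >= threshold
            for det in image.get("detections", [])
        )
        (animal_files if has_animal else empty_files).append(image["file"])
    return animal_files, empty_files

# Hypothetical output for two images: one with a confident animal
# detection, one with no detections at all.
example = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {"file": "cam01/0001.jpg",
         "detections": [{"category": "1", "conf": 0.91,
                         "bbox": [0.1, 0.2, 0.3, 0.4]}]},
        {"file": "cam01/0002.jpg", "detections": []},
    ],
}

animals, empty = split_empty(example)
print(animals)  # ['cam01/0001.jpg']
print(empty)    # ['cam01/0002.jpg']
```

The confidence threshold is the key tuning parameter: raising it discards more false triggers (vegetation movement, lighting changes) at the cost of missing small or partially visible animals, a trade-off discussed throughout the reviewed literature.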

© Petrozavodsk State University

Received on: 28 January 2024
Published on: 02 May 2024

References

Ahumada J. A., Fegraus E., Birch T., Flores N., Kays R., O’Brien T. G., Palmer J., Schuttler S., Zhao J. Y., Jetz W., Kinnaird M., Kulkarni S., Lyet A., Thau D., Duong M., Oliver R., Dancer A. Wildlife Insights: A Platform to Maximize the Potential of Camera Trap and Other Passive Sensor Wildlife Data for the Planet, Environmental Conservation. 2020. Vol. 47. P. 1–6. DOI: 10.1017/S0376892919000298

Allaire J. J., Kalinowski T., Falbel D., Eddelbuettel D., Tang Y., Golding N. Package “tensorflow”: R Interface to TensorFlow. R package version 2.14.0. 2023. URL: https://cran.r-project.org/package=tensorflow (accessed: 25.10.2023).

Beery S., Agarwal A., Cole E., Birodkar V. The iWildCam 2021 Competition Dataset, arXiv. 2021. DOI: 10.48550/arXiv.2105.03494

Beery S., Morris D., Yang S. Efficient pipeline for camera trap image review, arXiv. 2019. Article: 1907.06772. DOI: 10.48550/arXiv.1907.06772

Beery S., Van Horn G., Perona P. Recognition in Terra Incognita, Proceedings of the European Conference on Computer Vision (ECCV), V. Ferrari, M. Hebert, C. Sminchisescu, Y. Weiss (Eds.). Munich, Germany: Springer, 2018. P. 456–473.

Belyavskiy D. S. Artificial Neural Networks Application in Vertebrate Populations Studies, Ohrana okruzhayuschey sredy i zapovednoe delo. 2022. No. 3. P. 81–88.

Berger-Wolf T. Y., Rubenstein D. I., Stewart C. V., Holmberg J. A., Parham J., Menon S., Crall J., Van Oast J., Kiciman E., Joppa L. Wildbook: Crowdsourcing, computer vision, and data science for conservation, arXiv. 2017. Article: 1710.08880. DOI: 10.48550/arXiv.1710.08880

Binta Islam S., Valles D., Hibbitts T. J., Ryberg W. A., Walkup D. K., Forstner M. R. J. Animal Species Recognition with Deep Convolutional Neural Networks from Ecological Camera Trap Images, Animals. 2023. Vol. 13. Article: 1526. DOI: 10.3390/ani13091526

Bizikov V. A., Sabirov M. A., Sidorov L. K., Lukina Yu. N. Abundance and distribution of the Ladoga ringed seals in the anomalously warm winter of 2020: results of the aerial survey using drones, Trudy VNIRO. 2022. Vol. 190. P. 79–94. DOI: 10.36038/2307-3497-2022-190-79-94

Bogucki R., Cygan M., Khan C. B., Klimek M., Milczek J. K., Mucha M. Applying deep learning to right whale photo identification, Conservation Biology. 2018. Vol. 33, No 3. P. 676–684. DOI: 10.1111/cobi.13226

Bridle J. S. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition, Neurocomputing. New York: Springer, 1990. P. 227–236.

Carl C., Schönfeld F., Profft I., Klamm A., Landgraf D. Automated detection of European wild mammal species in camera trap images with an existing and pretrained computer vision model, European Journal of Wildlife Research. 2020. Vol. 66, No 4. P. 1–7. DOI: 10.1007/s10344-020-01404-y

Casaer J., Milotic T., Liefting Y., Desmet P., Jansen P. Agouti: A platform for processing and archiving camera trap images, Biodiversity Information Science and Standards. 2019. Vol. 3. Article: e46690. DOI: 10.3897/biss.3.46690

Ceballos G., Ehrlich P. R., Raven P. H. Vertebrates on the brink as indicators of biological annihilation and the sixth mass extinction, Proceedings of the National Academy of Science. 2020. Vol. 117, No 24. P. 13596–13602. DOI: 10.1073/pnas.1922686117

Chalmers C., Fergus P., Wich S., Montanez A. C. Conservation AI: Live stream analysis for the detection of endangered species using convolutional neural networks and drone technology, arXiv. 2019. Article: 1910.07360. DOI: 10.48550/arXiv.1910.07360

Chang W., Cheng J., Allaire J., Xie Y., McPherson J. Shiny: Web application framework for R. R package version 1.4.0. 2019. URL: https://CRAN.R-project.org/package=shiny (accessed: 25.10.2023).

Chen G., Han T. X., He Z., Kays R., Forrester T. Deep convolutional neural network based species recognition for wild animal monitoring, Proceedings of the IEEE International Conference on Image Processing. Paris: IEEE, 2014. P. 858–862. DOI: 10.1109/ICIP.2014.7025172

Chen P., Swarup P., Matkowski W. M., Kong A. W. K., Han S., Zhang Z., Rong H. A study on giant panda recognition based on images of a large proportion of captive pandas, Ecology and Evolution. 2020. Vol. 10, No 7. P. 3561–3573. DOI: 10.1002/ece3.6152

Corcoran E., Winsen M., Sudholz A., Hamilton G. Automated detection of wildlife using drones: Synthesis, opportunities and constraints, Methods in Ecology and Evolution. 2021. Vol. 12, No 6. DOI: 10.1111/2041-210X.13581

Efremov V. A., Leus A. V., Gavrilov D. A., Mangazeev D. I., Holodnyak I. V., Radysh A. S., Zuev V. A., Vodichev N. A. Method of processing photo and video data from camera traps using a two-stage neural network approach, Iskusstvennyy intellekt i prinyatie resheniy. 2023b. No. 3. P. 98–108. DOI: 10.14357/20718594230310

Efremov V. A., Zuev V. A., Leus A. V., Mangazeev D. I., Radysh A. S., Holodnyak I. V. Formation of animal registrations based on post-processing of data from camera traps, Ekosistemy. 2023a. Iss. 34. P. 51–58.

Evans B. C. CamTrap-detector: Detect animals, humans and vehicles in camera trap imagery. 2023. URL: https://github.com/bencevans/camtrap-detector (accessed: 25.10.2023).

Falzon G., Lawson C., Cheung K. W., Vernes K., Ballard G. A., Fleming P. J. S., Glen A. S., Milne H., Mather-Zardain A. T., Meek P. D. ClassifyMe: A field-scouting software for the identification of wildlife in camera trap images, Animals. 2020. Vol. 10, No 1. Article: 58. DOI: 10.3390/ani10010058

Farley S. S., Dawson A., Goring S. J., Williams J. W. Situating ecology as a big-data science: current advances, challenges, and solutions, BioScience. 2018. Vol. 68. P. 563–576. DOI: 10.1093/biosci/biy068

Feng H., Mu G., Zhong S., Zhang P., Yuan T. Benchmark Analysis of YOLO Performance on Edge Intelligence Devices, Cryptography. 2022. Vol. 6, No 2. Article: 16. DOI: 10.3390/cryptography6020016

Fennell M., Beirne C., Burton A. C. Use of object detection in camera trap image identification: Assessing a method to rapidly and accurately classify human and animal detections for research and application in recreation ecology, Global Ecology and Conservation. 2022. Vol. 35. Article: e02104. DOI: 10.1016/j.gecco.2022.e02104

Girshick R. Fast R-CNN, Proceedings of the IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. P. 1440–1448. DOI: 10.1109/ICCV.2015.169

Girshick R., Donahue J., Darrell T., Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE, 2014. P. 580–587. DOI: 10.1109/CVPR.2014.81

Glover-Kapfer P., Soto-Navarro C. A., Wearn O. R. Camera-trapping version 3.0: Current constraints and future priorities for development, Remote Sensing in Ecology and Conservation. 2019. Vol. 5. P. 209–223. DOI: 10.1002/rse2.106

Gomez Villa A., Diez G., Salazar A., Diaz A. Animal identification in low quality camera-trap images using very deep convolutional neural networks and confidence thresholds, International Symposium on Visual Computing. Cham, Switzerland: Springer, 2016. P. 747–756. DOI: 10.1007/978-3-319-50835-1_67

Gomez Villa A., Salazar A., Vargas F. Towards automatic wild animal monitoring: Identification of animal species in camera-trap images using very deep convolutional neural networks, Ecological Informatics. 2017. Vol. 41. P. 24–32. DOI: 10.1016/j.ecoinf.2017.07.004.

Goodfellow I., Bengio Y., Courville A. Deep Learning. Cambridge, USA: MIT Press, 2016. 800 p.

Google LLC – TensorFlow Hub. faster_rcnn/inception_resnet_v2. 2019. URL: https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1 (accessed: 27.10.2023).

Google LLC. Open Images Dataset V4, under CC BY 4.0 license. 2019. URL: https://storage.googleapis.com/openimages/web/factsfigures_v4.html (accessed: 27.10.2023).

Green S. E., Rees J. P., Stephens P. A., Hill R. A., Giordano A. J. Innovations in camera trapping technology and approaches: The integration of citizen science and artificial intelligence, Animals. 2020. Vol. 10, No 1. Article: 132. DOI: 10.3390/ani10010132

Greenberg S. Automated image recognition for wildlife camera traps: making it work for you. Technical report. 2020. URL: http://hdl.handle.net/1880/112416 (accessed: 22.03.2023).

Greenberg S., Godin T., Whittington J. Design patterns for wildlife-related camera trap image analysis, Ecology and Evolution. 2019. Vol. 9, No 24. P. 13706–13730. DOI: 10.1002/ece3.5767

Guo C., Pleiss G., Sun Y., Weinberger K. On calibration of modern neural networks, arXiv. 2017. Article: 1706.04599. DOI: 10.48550/arXiv.1706.04599

Gyurov P. MegaDetector-GUI: A desktop application that makes using MegaDetector’s model easier. 2022. URL: https://github.com/petargyurov/megadetector-gui (accessed: 21.10.2023).

Hagan M. T., Demuth H. B., Beale M. H., De Jesús O. Neural network design. Boston: Pws Publications Co, 1996. 1011 p.

He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition, Proceedings of the IEEE conference on computer vision and pattern recognition. Las Vegas, USA: IEEE, 2016. P. 770–778. DOI: 10.1109/CVPR.2016.90

Hu W., Huang Y., Wei L., Zhang F., Li H. Deep convolutional neural networks for hyperspectral image classification, Journal of Sensors. 2015. Vol. 2. P. 1–12. DOI: 10.1155/2015/258619

Huang J., Rathod V., Sun C., Zhu M., Korattikara A., Fathi A., Murphy K. Speed/accuracy trade-offs for modern convolutional object detectors, Proceedings of the IEEE conference on computer vision and pattern recognition. Honolulu, USA: IEEE, 2017. P. 3296–3297. DOI: 10.1109/CVPR.2017.351

Hui J. Object detection: speed and accuracy comparison (faster R-CNN, R-FCN, SSD, FPN, RetinaNet and YOLOv3). 2018. URL: https://medium.com/@jonathan_hui/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359 (accessed: 22.10.2023).

Kellenberger B., Tuia D., Morris D. AIDE: Accelerating image‐based ecological surveys with interactive machine learning, Methods in Ecology and Evolution. 2020. Vol. 11, No 12. P. 1716–1727. DOI: 10.1111/2041-210X.13489

Kellenberger B., Veen T., Folmer E., Tuia D. 21 000 birds in 4.5 h: efficient large-scale seabird detection with machine learning, Remote Sensing in Ecology and Conservation. 2021. Vol. 7, No 3. P. 445–460. DOI: 10.1002/rse2.200

Krizhevsky A., Sutskever I., Hinton G. E. ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems. La Jolla, USA: Neural Information Processing Systems Foundation, 2012. P. 1097–1105. DOI: 10.1145/3065386

Kwok R. AI empowers conservation biology, Nature. 2019. Vol. 567. P. 133–135. DOI: 10.1038/d41586-019-00746-1

LeCun Y., Bengio Y., Hinton G. Deep learning, Nature. 2015. Vol. 521. P. 436–444. DOI: 10.1038/nature14539

LeCun Y., Boser B., Denker J. S., Henderson D., Howard R. E., Hubbard W., Jackel L. D. Backpropagation applied to handwritten zip code recognition, Neural Computation. 1989. Vol. 1, No 4. P. 541–551. DOI: 10.1162/neco.1989.1.4.541

Leus A. V. Efremov V. A. Application of computer vision methods for the analysis of images collected from camera traps within the framework of a software and hardware complex for monitoring the state of the environment in specially protected natural areas, Trudy Mordovskogo gosudarstvennogo prirodnogo zapovednika im. P. G. Smidovicha. 2021. Vyp. 28. P. 121–129.

Leus A. V., Gavrilov D. A., Mangazeev D. I., Efremov V. A., Radysh A. S., Zuev V. A., Holodnyak I. V. System for analyzing data read using camera traps for operational remote monitoring of natural areas (Russian Federation patent No. 2799114). Federal'nyy institut promyshlennoy sobstvennosti, 2023.

Lin T. Y., Goyal P., Girshick R., He K., Dollár P. Focal loss for dense object detection, Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. P. 2999–3007. DOI: 10.1109/ICCV.2017.324

Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C. Y., Berg A. C. SSD: single shot multibox detector, European Conference on Computer Vision, B. Leibe, J. Matas, N. Sebe, M. Welling (Eds.). Amsterdam: Springer, 2016. P. 21–37. DOI: 10.48550/arXiv.1512.02325

Meek P. D., Ballard G. A., Falzon G., Williamson J., Milne H., Farrell R., Stover J., Mather-Zardain A. T., Bishop J., Cheung E. K. W., Lawson C. K., Munezero A. M., Schneider D., Johnston B. E., Kiani E., Shahinfar S., Sadgrove E. J., Fleming P. J. S. Camera trapping technology and advances: into the new millennium, Australian Zoologist. 2020. Vol. 40, No 3. P. 392–403. DOI: 10.7882/AZ.2019.035

Miao Z., Gaynor K. M., Wang J., Liu Z., Muellerklein O., Norouzzadeh M. S., McInturff A., Bowie R. C. K., Nathan R., Yu S. X., Getz W. M. Insights and approaches using deep learning to classify wildlife, Scientific Reports. 2019. Vol. 9, No 1. P. 1–9. DOI: 10.1038/s41598-019-44565-w

Mihaylov V. V., Kolpaschikov L. A., Sobolevskiy V. A., Solov'ev N. V., Yakushev G. K. Methodological approaches and algorithms for recognizing and counting animals in aerial photographs, Informacionno-upravlyayuschie sistemy. 2021. No. 5. P. 20–32. DOI: 10.31799/1684-8853-2021-5-20-32

Mohri M., Rostamizadeh A., Talwalkar A. Foundations of machine learning. Cambridge, USA: MIT Press, 2012. 505 p.

Norouzzadeh M. S., Morris D., Beery S., Joshi N., Jojic N., Clune J. A deep active learning system for species identification and counting in camera trap images, Methods in Ecology and Evolution. 2021. Vol. 12, No 1. P. 150–161. DOI: 10.1111/2041-210X.13504

Norouzzadeh M. S., Nguyen A., Kosmala M., Swanson A., Palmer M. S., Packer C., Clune J. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning, Proceedings of the National Academy of Science. 2018. Vol. 115, No 25. P. E5716–E5725. DOI: 10.1073/pnas.1719367115

Ogurcov S. S., Efremov V. A., Leus A. V. Review of the software for processing and analyzing camera trap data: neural networks and web services, Russian Journal of Ecosystem Ecology. 2024. [In press]

Qin H., Li X., Liang J., Peng Y., Zhang C. DeepFish: Accurate underwater live fish recognition with a deep architecture, Neurocomputing. 2016. Vol. 187. P. 49–58. DOI: 10.1016/j.neucom.2015.10.122

Redmon J., Divvala S., Girshick R., Farhadi A. You only look once: unified, real-time object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. P. 779–788. DOI: 10.1109/CVPR.2016.91

Ren S., He K., Girshick R., Sun J. Faster R‐CNN: Towards real‐time object detection with region proposal networks, arXiv. 2015. Article: 150601497. DOI: 10.48550/arXiv.1506.01497

Reyserhove L., Norton B., Desmet P. Best practices for managing and publishing camera trap data. Community review draft. GBIF Secretariat: Copenhagen, 2023. 58 p. DOI: 10.35035/doc-0qzp-2x37

Rigoudy N., Dussert G., Benyoub A., Besnard A., Birck C., Boyer J., Bollet Y. The DeepFaune initiative: a collaborative effort towards the automatic identification of French fauna in camera-trap images, bioRxiv. 2022. DOI: 10.1101/2022.03.15.484324

Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S., Huang Z., Karpathy A., Khosla A., Bernstein M., Berg A. C., Fei-Fei L. Imagenet large scale visual recognition challenge, International Journal of Computer Vision. 2015. Vol. 115. P. 211–252. DOI: 10.1007/s11263-015-0816-y

Schneider S., Taylor G. W., Kremer S. C. Similarity learning networks for animal individual re-identification – beyond the capabilities of a human observer, Proceedings of the IEEE Winter Conference on Applications of Computer Vision Workshops. Snowmass, USA: IEEE, 2020. P. 44–52. DOI: 10.1109/WACVW50321.2020.9096925

Schneider S., Taylor G. W., Kremer S. Deep learning object detection methods for ecological camera trap data, 15th Conference on Computer and Robot Vision. Toronto, Canada: IEEE, 2018. P. 321–328. DOI: 10.1109/crv.2018.00052

Schneider S., Taylor G. W., Linquist S., Kremer S. C. Past, present and future approaches using computer vision for animal re-identification from camera trap data, Methods in Ecology and Evolution. 2019. Vol. 10 (4). P. 461–470. DOI: 10.1111/2041-210X.13133

Sener O., Savarese S. Active learning for convolutional neural networks: A core-set approach, arXiv. 2018. Article: 1708.00489. DOI: 10.48550/arXiv.1708.00489

Shepley A., Falzon G., Meek P., Kwan P. Automated location invariant animal detection in camera trap images using publicly available data sources, Ecology and Evolution. 2021. Vol. 11, No 9. P. 4494–4506. DOI: 10.1002/ece3.7344

Shi C., Xu J., Roberts N.J., Liu D., Jiang G. Individual automatic detection and identification of big cats with the combination of different body parts, Integrative Zoology. 2023. Vol. 18 (1). P. 157–168. DOI: 10.1111/1749-4877.12641

Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition, arXiv. 2014. Article: 1409.1556. DOI: 10.48550/arXiv.1409.1556

Swanson A., Kosmala M., Lintott C., Simpson R., Smith A., Packer C. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna, Scientific Data. 2015. Vol. 2. P. 1–14. DOI: 10.1038/sdata.2015.26

Swinnen K. R. R., Reijniers J., Breno M., Leirs H. A novel method to reduce time investment when processing videos from camera trap studies, PLoS One. 2014. Vol. 9, No 6. Article: e98881. DOI: 10.1371/journal.pone.0098881

Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna Z. Rethinking the inception architecture for computer vision. 2016. URL: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf (accessed: 22.10.2023).

Tabak M. A., Norouzzadeh M. S., Wolfson D. W., Newton E. J., Boughto R. K., Ivan J. S., Odell E. A., Newkirk E. S., Conrey R. Y., Stenglein J., Iannarilli F., Erb J., Brook R. K., Davis A. J., Lewis J., Walsh D. P., Beasley J. C., VerCauteren K. C., Clune J., Miller R. S. Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2, Ecology and Evolution. 2020. Vol. 10, No 19. P. 10374–10383. DOI: 10.1002/ece3.6692

Tabak M. A., Norouzzadeh M. S., Wolfson D. W., Sweeney S. J., Vercauteren K. C., Snow N. P., Halseth J. M., Di Salvo P. A., Lewis J. S., White M. D., Teton B. Machine learning to classify animal species in camera trap images: applications in ecology, Methods in Ecology and Evolution. 2019. Vol. 10, No 4. P. 585–590. DOI: 10.1111/2041-210x.13120

Tack J. L. P., West B. S., McGowan C. P., Ditchkoff S. S., Reeves S. J., Keever A. C., Grand J. B. AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images, Ecological Informatics. 2016. Vol. 36. P. 145–151. DOI: 10.1016/j.ecoinf.2016.11.003

Tuia D., Kellenberger B., Beery S., Costelloe B. R., Zuffi S., Risse B., Mathis A., Mathis M. W., Langevelde F. van, Burghardt T., Kays R., Klinck H., Wikelski M., Couzin I. D., Horn G. van, Crofoot M. C., Stewart C. V., Berger-Wolf T. Perspectives in machine learning for wildlife conservation, Nature Communications. 2022. Vol. 13, No 1. P. 1–15. DOI: 10.1038/s41467-022-27980-y

Vélez J., Fieberg J. Guide for using artificial intelligence systems for camera trap data processing. 2022. URL: https://ai-camtraps.netlify.app (accessed: 26.10.2023).

Vélez J., McShea W., Shamon H., Castiblanco-Camacho P. J., Tabak M. A., Chalmers C., Fergus P., Fieberg J. An evaluation of platforms for processing camera-trap data using artificial intelligence, Methods in Ecology and Evolution. 2023. Vol. 14. P. 459–477. DOI: 10.1111/2041-210X.14044

Wei W., Luo G., Ran J., Li J. Zilong: A tool to identify empty images in camera-trap data, Ecological Informatics. 2020. Vol. 55. P. 1–7. DOI: 10.1016/j.ecoinf.2019.101021

Whytock R. C., Świeżewski J., Zwerts J. A., Bara-Słupski T., Koumba Pambo A. F., Rogala M., Bahaa-el-din L., Boekee K., Brittain S., Cardoso A. W., Henschel P., Lehmann D., Momboua B., Kiebou Opepa C., Orbell C., Pitman R. T., Robinson H. S., Abernethy K. A. Robust ecological analysis of camera trap data labelled by a machine learning model, Methods in Ecology and Evolution. 2021. Vol. 12, No 6. P. 1080–1092. DOI: 10.1111/2041-210X.13576

Willi M., Pitman R. T., Cardoso A. W., Locke C., Swanson A., Boyer A., Veldthuis M., Fortson L. Identifying animal species in camera trap images using deep learning and citizen science, Methods in Ecology and Evolution. 2019. Vol. 10, No 1. P. 80–91. DOI: 10.1111/2041-210X.13099

Xie Y., Jiang J., Bao H., Zhai P., Zhao Y., Zhou X., Jiang G. Recognition of big mammal species in airborne thermal imaging based on YOLO V5 algorithm, Integrative Zoology. 2023. Vol. 18 (2). P. 333–352. DOI: 10.1111/1749-4877.12667

Xue Y., Wang T., Skidmore A. K. Automatic counting of large mammals from very high resolution panchromatic satellite imagery, Remote Sensing. 2017. Vol. 9, No 9. P. 1–16. DOI: 10.3390/rs9090878

Yosinski J., Clune J., Bengio Y., Lipson H. How transferable are features in deep neural networks?, arXiv. 2014. Article: 1411.1792. DOI: 10.48550/arXiv.1411.1792

Yousif H., Yuan J., Kays R., He Z. Animal Scanner: software for classifying humans, animals, and empty frames in camera trap images, Ecology and Evolution. 2019. Vol. 9. P. 1578–1589. DOI: 10.1002/ece3.4747

Yu X., Wang J., Kays R., Jansen P. A., Wang T., Huang T. Automated identification of animal species in camera trap images, EURASIP Journal on Image and Video Processing. 2013. Vol. 52. P. 1–10. DOI: 10.1186/1687-5281-2013-52

Zhang H., Wu C., Zhang Z., Zhu Y., Zhang Z., Lin H., Sun Y., He T., Mueller J. W., Manmatha R., Li M., Smola A. ResNeSt: Split-Attention Networks, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New Orleans, USA: IEEE, 2022. P. 2736–2746. DOI: 10.48550/arXiv.2004.08955

van Gils J. Recognition of wildlife behaviour in camera-trap photographs using machine learning, MSc Thesis Wildlife Ecology and Conservation. Netherlands: Wageningen University, 2022. 44 p.

van Lunteren P. EcoAssist: A no-code platform to train and deploy custom YOLOv5 object detection models, Journal of Open Source Software. 2023. Vol. 8, No 88. P. 1–3. DOI: 10.21105/joss.05581
