Sound source classification for soundscape analysis using fast third-octave bands data from an urban acoustic sensor network

Journal article

Authors: Modan Tailleur, Pierre Aumond, Mathieu Lagrange, Vincent Tourre

Published in: Journal of the Acoustical Society of America

Publication date: 2024

Keywords: Soundscape, Acoustic Sensors, Deep Learning, Audio Classification, Sensor Network, Convolutional Neural Network
Link to the HAL repository

Abstract

The exploration of the soundscape relies strongly on the characterization of the sound sources in the sound environment. Novel sound source classifiers, called pre-trained audio neural networks (PANNs), are capable of predicting the presence of more than 500 diverse sound sources. Nevertheless, PANN models use fine Mel spectro-temporal representations as input, whereas sensors of an urban noise monitoring network often record fast third-octave data, which have a significantly lower spectro-temporal resolution. In a previous study, we developed a transcoder to transform fast third-octave data into the fine Mel spectro-temporal representation used as input of PANNs. In this paper, we demonstrate that employing PANNs with fast third-octave data, processed through this transcoder, does not strongly degrade the classifier's performance in predicting the perceived time of presence of sound sources. Through a qualitative analysis of a large-scale fast third-octave dataset, we also illustrate the potential of this tool in opening new perspectives and applications for monitoring the soundscapes of cities.
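The paper's transcoder is a learned model that maps low-resolution fast third-octave frames to the fine Mel representation expected by PANNs; its architecture is not detailed here. Purely for illustration, the sketch below shows a naive, non-learned baseline for the same mapping: interpolating the dB levels of the 29 nominal third-octave bands onto a Mel-spaced frequency grid. All function names, band counts, and parameter choices are assumptions for the example, not the authors' method.

```python
import numpy as np

def third_octave_centers(n_bands=29, f_start=20.0):
    # Nominal third-octave center frequencies: f_k = f_start * 2**(k/3)
    # (29 bands from 20 Hz to roughly 12.5 kHz; assumed band layout)
    return f_start * 2.0 ** (np.arange(n_bands) / 3.0)

def hz_to_mel(f):
    # Standard HTK-style Hz-to-Mel conversion
    return 2595.0 * np.log10(1.0 + f / 700.0)

def naive_transcode(levels_db, n_mel=64):
    """Naive baseline: interpolate one fast third-octave frame (dB per band)
    onto an n_mel-point Mel-spaced grid. A learned transcoder, as in the
    paper, would replace this interpolation."""
    centers = third_octave_centers(len(levels_db))
    mel_grid = np.linspace(hz_to_mel(centers[0]), hz_to_mel(centers[-1]), n_mel)
    return np.interp(mel_grid, hz_to_mel(centers), levels_db)

# One synthetic fast third-octave frame (29 band levels in dB)
frame = np.random.default_rng(0).uniform(30.0, 70.0, size=29)
mel_frame = naive_transcode(frame)
print(mel_frame.shape)  # (64,)
```

Because `np.interp` only interpolates between observed band levels, this baseline cannot recover spectro-temporal detail that the third-octave analysis discarded, which is precisely the gap the learned transcoder addresses.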