27-28 March 2024: GreenDays at IRIT

In partnership with GDS Ecoinfo and five other GDRs, we are organising the GreenDays 2024 workshop in Toulouse on 27 and 28 March 2024, on the theme "Exploring the many facets of digital sobriety". GreenDays 2024 @ Toulouse: http://perso.ens-lyon.fr/laurent.lefevre/greendaystoulouse2024/ In 2024, the challenge of a more sober digital world covers many different facets, and numerous scientists…

8 February 2024: "Les sens artificiels" (Artificial Senses) at Stereolux

Thursday 8 February 2024 at 6:30 pm at Stereolux, as part of the Nuit blanche des chercheur-e-s of Nantes Université. Artificial intelligence (AI) is revolutionising our understanding of the living world by drawing on the human senses. It enables in-depth analysis of speech, audio signals and bioacoustics. In medicine, the senses can be reproduced to improve…

Kymatio notebooks @ ISMIR 2023

On November 5th, 2023, we hosted a tutorial on Kymatio, entitled "Deep Learning meets Wavelet Theory for Music Signal Processing", as part of the International Society for Music Information Retrieval (ISMIR) conference in Milan, Italy. The Jupyter notebooks below were authored by Chris Mitcheltree and Cyrus Vahidi from Queen Mary University of London. I. Wavelets…
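
For readers who could not attend, the snippet below gives a minimal taste of what the notebooks cover. It is a sketch assuming the numpy frontend of the kymatio package, with illustrative parameter values rather than the ones used in the tutorial.

```python
import numpy as np
from kymatio.numpy import Scattering1D  # torch and tensorflow frontends also exist

# A toy chirp standing in for a short music excerpt.
N = 2 ** 14
t = np.linspace(0, 1, N, endpoint=False)
x = np.cos(2 * np.pi * 440 * t * 2 ** (3 * t)).astype(np.float32)

# J sets the largest wavelet scale (2**J samples); Q sets the number of
# first-order wavelets per octave.
scattering = Scattering1D(J=8, shape=N, Q=12)
Sx = scattering(x)
print(Sx.shape)  # roughly (number of scattering paths, N / 2**J)
```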

16 November 2023: GdR ISIS workshop "Traitement du signal pour la musique" (Signal Processing for Music)

As part of the GdR ISIS action « Traitement du signal pour l'audio et l'écoute artificielle » (Signal Processing for Audio and Artificial Listening), we are organising a third workshop dedicated to music signal processing on Thursday 16 November 2023 at IRCAM, featuring the following speakers: We invite any participant wishing to present audio-related work to contact…

Efficient Evaluation Algorithms for Sound Event Detection @ DCASE

Our article presents an algorithm for the pairwise intersection of intervals that performs binary search within sorted onset and offset times. Computational benchmarks on the BirdVox-full-night dataset confirm that our algorithm is significantly faster than exhaustive search. Moreover, we explain how to use the resulting list of intersecting prediction-reference pairs for sound event detection (SED) evaluation.
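
The core idea can be sketched in a few lines of Python. The snippet below is an illustrative reconstruction rather than the code benchmarked in the article: the function name is ours, and it assumes that reference events within one class do not overlap, so that sorting them by onset also sorts their offsets.

```python
from bisect import bisect_left, bisect_right

def intersecting_pairs(references, predictions):
    """Return (reference index, prediction index) pairs whose intervals overlap.

    Each input is a list of (onset, offset) tuples. With non-overlapping
    reference intervals, each prediction is matched in O(log n) time plus the
    size of its output, instead of the O(n) cost of exhaustive search.
    """
    refs = sorted(enumerate(references), key=lambda item: item[1][0])
    onsets = [interval[0] for _, interval in refs]
    offsets = [interval[1] for _, interval in refs]

    pairs = []
    for j, (p_onset, p_offset) in enumerate(predictions):
        hi = bisect_left(onsets, p_offset)        # references starting before the prediction ends...
        lo = bisect_right(offsets, p_onset, 0, hi)  # ...and ending after it starts
        pairs.extend((refs[i][0], j) for i in range(lo, hi))
    return pairs

# Toy example: the first prediction overlaps the first two reference events.
print(intersecting_pairs([(0.0, 1.0), (2.0, 3.0), (4.0, 5.0)], [(0.5, 2.5), (3.5, 3.8)]))
# [(0, 0), (1, 0)]
```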

“Can Technology Save Biodiversity?” Call for papers

While technological development has been a key driver of climate change and biodiversity loss, humans in high-income countries continue their unbridled race for innovation and technological development. Throughout history, technological developments have allowed humans to exploit ever more natural resources. These technological developments have, in turn, consumed ever more natural resources and generated…

Automated acoustic monitoring captures timing and intensity of bird migration @ J. Applied Ecology

Monitoring small, mobile organisms is crucial for science and conservation, but is technically challenging. Migratory birds are prime examples, often undertaking nocturnal movements of thousands of kilometres over inaccessible and inhospitable geography. Acoustic technology could facilitate widespread monitoring of nocturnal bird migration with minimal human effort. Acoustics complements existing monitoring methods by providing information about individual behaviour and species identities, something generally not possible with tools such as radar. However, the need for expert humans to review audio and identify vocalizations is a challenge to application and development of acoustic technologies.

Here, we describe an automated acoustic monitoring pipeline that combines acoustic sensors with machine listening software (BirdVoxDetect). We monitor 4 months of autumn migration in the northeastern United States with five acoustic sensors, extracting nightly estimates of nocturnal calling activity of 14 migratory species with distinctive flight calls. We examine the ability of acoustics to inform two important facets of bird migration: (1) the quantity of migrating birds aloft and (2) the migration timing of individual species. We validate these data with contemporaneous observations from Doppler radars and a large community of citizen scientists, from which we derive independent measures of migration passage and timing.

Together, acoustic and weather data produced accurate estimates of the number of actively migrating birds detected with radar. A model combining acoustic data, weather and seasonal timing explained 75% of variation in radar-derived migration intensity. This model outperformed models that lacked acoustic data. Including acoustics in the model decreased prediction error by 33%. A model with only acoustic information outperformed a model comprising weather and date (57% vs. 48% variation explained, respectively). Acoustics also successfully measured migration phenology: species-specific timing estimated by acoustic sensors explained 71% of variation in timing derived from citizen science observations.

Our results demonstrate that cost-effective acoustic sensors can monitor bird migration at species resolution at the landscape scale and should be an integral part of management toolkits. Acoustic monitoring presents distinct advantages over radar and human observation, especially in inaccessible and inhospitable locations, and requires significantly less expense. Managers should consider using acoustic tools for monitoring avian movements and identifying and understanding dangerous situations for birds. These recommendations apply to a variety of conservation and policy applications, including mitigating the impacts of light pollution, siting energy infrastructure (e.g. wind turbines) and reducing collisions with structures and aircraft.
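
As a rough illustration of the detection step only (the study also integrates weather, radar and citizen-science data), the sketch below assumes the open-source birdvoxdetect package; the directory layout and night naming are hypothetical.

```python
from pathlib import Path

import birdvoxdetect as bvd

# Count detected nocturnal flight calls per nightly recording of one sensor.
nightly_counts = {}
for wav_path in sorted(Path("recordings/sensor_01").glob("*.wav")):
    # process_file returns a table with one row per detected flight call,
    # including its timestamp, detection confidence and predicted species.
    detections = bvd.process_file(str(wav_path))
    nightly_counts[wav_path.stem] = len(detections)

print(nightly_counts)
```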