Japanese–French Frontiers of Science Symposium 「日仏先端科学シンポジウム」


The 11th Japanese–French Frontiers of Science Symposium (JFFoS) 「日仏先端科学シンポジウム」 was held at the University of Strasbourg from May 24th to 28th, 2024, as a joint event between CNRS and the Japan Society for the Promotion of Science (JSPS).

Program: https://www.jsps.go.jp/english/e-fos/e-jffos/2024_11.html

Vincent presented an overview of the research activities of Audio @ LS2N, under the title “Towards Ecological Listening Machines”. We reproduce the abstract below.


“OK Google”, “Hey Siri!”, “Alexa?” … If you’ve said these words, you’ve probably been talking to a listening machine made by a Big Tech company: namely, Alphabet, Apple, or Amazon. Nowadays, these listening machines are not only sold as standalone devices but are also software components of consumer electronics: smartphones, cars, wearables, etc. Yet, current-day listening machines suffer from multiple shortcomings in terms of privacy, reliability, security, and sustainability, to name just a few.

In this poster presentation, I review some of these shortcomings from the particular standpoint of listening abilities in humans and non-human animals. I outline a research program towards ecological listening machines (ELM for short), wherein the term “ecological” is understood in three different ways: as a scientific problem, as a sociopolitical constraint, and as an aesthetic practice.

First, the global decline of biodiversity calls for wildlife population surveys over large spatiotemporal scales. For this purpose, an effective method is bioacoustics, defined as the analysis of vocal behavior in non-human animals. However, unlike for English speech or pop music, training machine listening algorithms on animal sounds demands expert annotation, which is costly and time-consuming. With my team, I have begun to address this issue via alternative paradigms in machine learning: namely, self-supervised learning, few-shot learning, and active learning.
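To give a concrete flavour of the few-shot paradigm, the sketch below classifies animal calls from only a handful of labeled examples by comparing query embeddings to per-class average prototypes, in the spirit of prototypical networks. The embeddings, species names, and dimensions are hypothetical placeholders, not our actual pipeline.

```python
# Minimal few-shot sketch: nearest-prototype classification of call embeddings.
# All data here is synthetic and purely illustrative.
import numpy as np

def prototypes(support_emb: np.ndarray, support_labels: np.ndarray) -> dict:
    """Average the support embeddings of each class into one prototype."""
    return {c: support_emb[support_labels == c].mean(axis=0)
            for c in np.unique(support_labels)}

def classify(query_emb: np.ndarray, protos: dict) -> list:
    """Assign each query embedding to the class of its nearest prototype."""
    classes = list(protos)
    dists = np.stack([np.linalg.norm(query_emb - protos[c], axis=1)
                      for c in classes])
    return [classes[i] for i in dists.argmin(axis=0)]

# Toy usage: two species, three labeled examples each, 128-dimensional
# embeddings drawn at random for illustration.
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 128))
labels = np.array(["chiffchaff"] * 3 + ["robin"] * 3)
queries = rng.normal(size=(4, 128))
print(classify(queries, prototypes(support, labels)))
```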

Secondly, the operation of listening machines depends on a steady supply of electricity and hardware components. This is particularly problematic in an age of global energy transition and growing geopolitical tensions over minerals. Besides, bioacoustic sensors are often deployed off the electrical grid, hence the need to harvest renewable energy in situ. In this context, my team has built a prototype of a batteryless acoustic recognition device (BARD), which is wireless and made from durable hardware parts.
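As an illustration of what “batteryless” implies for the control logic, the toy simulation below wakes up to record and classify only when enough harvested energy has accumulated in a capacitor, and sleeps otherwise. It is a Python sketch with made-up numbers, not the BARD firmware, which runs on embedded hardware.

```python
# Illustrative simulation of intermittently powered operation.
import random

CAPACITY_J = 1.0      # usable energy stored in the capacitor, in joules
CYCLE_COST_J = 0.4    # energy needed for one short recording + inference
stored = 0.0

for step in range(20):
    harvested = random.uniform(0.0, 0.2)          # e.g. solar input this step
    stored = min(CAPACITY_J, stored + harvested)
    if stored >= CYCLE_COST_J:
        stored -= CYCLE_COST_J
        print(f"step {step:2d}: enough energy -> record and classify")
    else:
        print(f"step {step:2d}: sleeping, {stored:.2f} J stored")
```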

Thirdly, despite their abilities on pre-recorded media, listening machines often fail to maintain naturalistic interactions with humans. For example, my team is trying to build machines for cyber-human musicianship which can not only measure the tempo of a song but also “keep time” during live performance.
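As a rough sketch of the “measure the tempo” half of this problem (not our cyber-human musicianship system), the snippet below estimates the tempo of an already recorded onset envelope by autocorrelation; “keeping time” live would require doing this causally and predictively, beat by beat.

```python
# Offline tempo estimation of a synthetic click track via autocorrelation.
import numpy as np

sr = 100                       # envelope sampling rate, in frames per second
bpm_true = 120
period = int(round(sr * 60 / bpm_true))

# Synthetic onset envelope: one click every `period` frames for 10 seconds.
envelope = np.zeros(10 * sr)
envelope[::period] = 1.0

# The autocorrelation peaks at multiples of the beat period.
ac = np.correlate(envelope, envelope, mode="full")[len(envelope) - 1:]
lag_min = int(sr * 60 / 240)   # fastest tempo considered: 240 BPM
lag_max = int(sr * 60 / 40)    # slowest tempo considered: 40 BPM
best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
print(f"estimated tempo: {60 * sr / best_lag:.1f} BPM")   # ~120.0
```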

In conclusion, there is an urgent need to put listening machines at the service, not the detriment, of ecological action. To this end, the invention of ecological listening machines (ELM) requires a strong level of interdisciplinarity. Indeed, while my research ascribes a pivotal role to machine learning and signal processing, it is informed by other disciplines: time–frequency analysis, computer arithmetic, real-time systems, intermittent computing, ecological statistics, conservation geography, renewable energy engineering, robotics, integrative neuroscience, animal behavior, music cognition, musicology, science and technology studies, and interaction design.