BioacAI doctoral network workshop in the Czech Republic

From May 6th to May 10th, 2024, PhD student Yasmine Benhamadi and CNRS scientist Vincent Lostanlen attended the first internal workshop of the BioacAI doctoral network. The Czech University of Life Sciences in Prague hosted the event at its University Forest Establishment, an ancient castle in the town of Kostelec nad Černými lesy. Yasmine… Continue reading BioacAI doctoral network workshop in the Czech Republic

The “Musiscale” action at the GDR MaDICS symposium

On May 30th, 2024, the sixth symposium of the GDR MaDICS (masses of data, information, and knowledge in science) was held in Blois. As part of the action “Musiscale: multiscale modeling of massive musical data”, Vincent presented the team’s work on the wavelet scattering transform as well as on the… Continue reading The “Musiscale” action at the GDR MaDICS symposium

Japanese–French Frontiers of Science Symposium 「日仏先端科学シンポジウム」

The 11th Japanese–French Frontiers of Science Symposium (JFFoS) 「日仏先端科学シンポジウム」 was held at the University of Strasbourg from May 24th to 28th, as a joint event between the CNRS and the Japan Society for the Promotion of Science (JSPS). Program: Vincent presented an overview of the research activities of Audio @ LS2N, under the title… Continue reading Japanese–French Frontiers of Science Symposium 「日仏先端科学シンポジウム」

Towards multisensory control of physical modeling synthesis @ Inter-Noise

Physical models of musical instruments offer an interesting tradeoff between computational efficiency and perceptual fidelity. Yet, they depend on a multidimensional space of user-defined parameters whose exploration by trial and error is impractical. Our article addresses this issue by combining two ideas: query by example and gestural control. On one hand, we train a deep… Continue reading Towards multisensory control of physical modeling synthesis @ Inter-Noise

Structure Versus Randomness in Computer Music and the Scientific Legacy of Jean-Claude Risset @ JIM

According to Jean-Claude Risset (1938–2016), “art and science bring about complementary kinds of knowledge”. In 1969, he presented his piece Mutations as “[attempting] to explore […] some of the possibilities offered by the computer to compose at the very level of sound—to compose sound itself, so to speak.” In this article, I propose to take the same motto as a starting point, yet while adopting a mathematical and technological outlook, more so than a musicological one.

Instabilities in Convnets for Raw Audio @ IEEE SPL

What makes waveform-based deep learning so hard? Despite numerous attempts at training convolutional neural networks (convnets) for filterbank design, they often fail to outperform hand-crafted baselines. These baselines are linear time-invariant systems: as such, they can be approximated by convnets with wide receptive fields. Yet, in practice, gradient-based optimization leads to suboptimal approximations. In our… Continue reading Instabilities in Convnets for Raw Audio @ IEEE SPL
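As a minimal illustration of the claim above (a sketch with a toy two-band FIR filterbank, not the paper's experimental setup): a filterbank is a linear time-invariant system, so delaying its input simply delays its output. This is precisely the property that a convolution with a sufficiently wide kernel can realize, which is why such baselines are in principle within reach of a convnet.

```python
def convolve(x, h):
    """Full linear convolution of two sequences: the LTI operation that
    both a hand-crafted filterbank and a 1-D conv layer perform."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

# Toy two-band filterbank: a lowpass and a highpass FIR kernel.
filterbank = [[0.5, 0.5], [0.5, -0.5]]

x = [1.0, 2.0, 3.0, 4.0]
outputs = [convolve(x, h) for h in filterbank]

# Time invariance: delaying the input by d samples delays each band's
# output by d samples, leaving its values otherwise unchanged.
d = 3
x_delayed = [0.0] * d + x
for h, y in zip(filterbank, outputs):
    assert convolve(x_delayed, h)[d:] == y
```

The sketch deliberately uses fixed kernels; the difficulty discussed in the article is not expressivity but the behavior of gradient-based optimization when those kernels are learned from raw waveforms.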

Kymatio notebooks @ ISMIR 2023

On November 5th, 2023, we hosted a tutorial on Kymatio, entitled “Deep Learning meets Wavelet Theory for Music Signal Processing”, as part of the International Society for Music Information Retrieval (ISMIR) conference in Milan, Italy. The Jupyter notebooks below were authored by Chris Mitcheltree and Cyrus Vahidi from Queen Mary University of London. I. Wavelets… Continue reading Kymatio notebooks @ ISMIR 2023

Efficient Evaluation Algorithms for Sound Event Detection @ DCASE

Our article presents an algorithm for pairwise intersection of intervals by performing binary search within sorted onset and offset times. Computational benchmarks on the BirdVox-full-night dataset confirm that our algorithm is significantly faster than exhaustive search. Moreover, we explain how to use this list of intersecting prediction–reference pairs for the purpose of SED evaluation.
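The binary-search idea can be sketched as follows (a simplified illustration under the assumption that reference intervals are sorted by onset and non-overlapping; the function name and data layout are hypothetical, not the article's actual implementation):

```python
from bisect import bisect_left, bisect_right

def intersecting_pairs(predictions, references):
    """Find all (i, j) such that predictions[i] overlaps references[j].

    Both arguments are lists of (onset, offset) intervals. Assumes the
    references are sorted by onset and mutually non-overlapping, so
    their offsets are sorted as well.
    """
    onsets = [on for on, _ in references]
    offsets = [off for _, off in references]
    pairs = []
    for i, (p_on, p_off) in enumerate(predictions):
        # First reference whose offset exceeds the prediction onset...
        lo = bisect_right(offsets, p_on)
        # ...up to the last reference whose onset precedes the offset.
        hi = bisect_left(onsets, p_off)
        for j in range(lo, hi):
            pairs.append((i, j))
    return pairs
```

Each prediction costs two O(log n) searches plus output size, instead of the O(n) scan per prediction that exhaustive search requires.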