[Figure: diagram of perceptual–neural–physical sound matching]

Perceptual–Neural–Physical Sound Matching

Conference paper

Authors: Han Han, Vincent Lostanlen, Mathieu Lagrange.

Conference: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Publication date: 2023

Keywords: Sound matching, Auditory similarity, Scattering transform, Deep convolutional networks, Physical modeling synthesis
Link to the HAL repository

Abstract


Sound matching algorithms seek to approximate a target waveform by parametric audio synthesis. Deep neural networks have achieved promising results in matching sustained harmonic tones. However, the task is more challenging when targets are nonstationary and inharmonic, e.g., percussion. We attribute this problem to the inadequacy of the loss function. On the one hand, mean squared error in the parametric domain, known as "P-loss", is simple and fast but fails to accommodate the differing perceptual significance of each parameter. On the other hand, mean squared error in the spectrotemporal domain, known as "spectral loss", is perceptually motivated and serves in differentiable digital signal processing (DDSP). Yet spectral loss is a poor predictor of pitch intervals, and its gradient may be computationally expensive, hence slow convergence. To address this conundrum, we present Perceptual-Neural-Physical loss (PNP). PNP is the optimal quadratic approximation of spectral loss while being as fast as P-loss during training. We instantiate PNP with physical modeling synthesis as the decoder and the joint time-frequency scattering transform (JTFS) as the spectral representation. We demonstrate its potential on matching synthetic drum sounds, in comparison with other loss functions.
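
To make the contrast between the three losses concrete, here is a minimal JAX sketch, not the authors' implementation: the toy synthesizer g (a damped sinusoid) is a hypothetical stand-in for physical modeling synthesis, and the magnitude spectrum phi is a hypothetical stand-in for JTFS. The key idea carried over from the abstract is that PNP replaces the spectral distance by its quadratic approximation around the target, with a metric M built from the Jacobian of phi∘g, which depends only on the target and can therefore be precomputed.

import jax
import jax.numpy as jnp

def g(theta):
    # Toy synthesizer (hypothetical stand-in for physical modeling synthesis):
    # a damped sinusoid with theta = (frequency in Hz, decay rate).
    t = jnp.linspace(0.0, 1.0, 4096)
    freq, decay = theta
    return jnp.exp(-decay * t) * jnp.sin(2.0 * jnp.pi * freq * t)

def phi(x):
    # Hypothetical stand-in for the JTFS representation: magnitude spectrum.
    return jnp.abs(jnp.fft.rfft(x))

def p_loss(theta_hat, theta):
    # "P-loss": squared error in the parametric domain. Fast, but blind to
    # the differing perceptual significance of each parameter.
    return jnp.sum((theta_hat - theta) ** 2)

def spectral_loss(theta_hat, theta):
    # Spectral loss: squared error in the spectrotemporal domain. Requires
    # differentiating through phi and g at every training step.
    return jnp.sum((phi(g(theta_hat)) - phi(g(theta))) ** 2)

def pnp_loss(theta_hat, theta):
    # PNP: quadratic form diff^T M diff with M = J^T J, where J is the
    # Jacobian of phi∘g at the target theta. Since M depends only on the
    # target, it can be precomputed once per training example, making each
    # training step as cheap as P-loss.
    J = jax.jacfwd(lambda th: phi(g(th)))(theta)  # shape: (n_features, n_params)
    M = J.T @ J
    diff = theta_hat - theta
    return diff @ M @ diff

theta_target = jnp.array([440.0, 5.0])  # target parameters
theta_est = jnp.array([450.0, 4.5])     # current estimate
print(p_loss(theta_est, theta_target))
print(spectral_loss(theta_est, theta_target))
print(pnp_loss(theta_est, theta_target))  # approximates spectral loss near the target

In this sketch, pnp_loss agrees with spectral_loss to first order around the target, which is the sense in which PNP is the optimal quadratic approximation of spectral loss; the parameter names and feature map are illustrative simplifications only.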