Semi-supervised audio source separation based on the iterative estimation and extraction of note events
| Field | Value |
|---|---|
| Authors | |
| Format | Conference paper |
| Publication Date | 2019 |
| Description | In this paper, we present an iterative semi-automatic audio source separation process for single-channel polyphonic recordings, where the underlying sources are isolated by clustering a set of note events, each considered to be a single note or a group of consecutive notes coming from the same source. In every iteration, an automatic process detects the pitch trajectory of the predominant note event in the mixture and separates its spectral content from the mixed spectrogram. The predominant note event is then transformed back to the time domain and subtracted from the input mixture. The process repeats using the residual as the new input mixture until a predefined number of iterations is reached. When the iterative stage is complete, note events are clustered by the end-user to form individual sources. Evaluation is conducted on mixtures of real instruments and compared with a similar approach, revealing an improvement in separation quality. |
| Country | Costa Rica |
| Institution | Universidad de Costa Rica |
| Repository | Kérwá |
| Language | English |
| OAI Identifier | oai:kerwa.ucr.ac.cr:10669/99932 |
| Online Access | https://www.scitepress.org/Link.aspx?doi=10.5220/0007828002730279 https://hdl.handle.net/10669/99932 |
| Keywords | Audio Source Separation, Note Event Detection, Fundamental Frequency Estimation, Note Event Tracking, Separation of Overlapping Harmonics, Time-domain Subtraction, Semi-supervised Estimation |
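
The abstract above describes an iterative estimate-and-subtract loop: detect the predominant note event, isolate its spectral content, resynthesize it, subtract it from the mixture, and repeat on the residual. Below is a minimal Python sketch of that loop, assuming numpy and librosa; the function names, the peak-picking pitch trajectory, the harmonic-mask tolerance, and the fixed iteration count are illustrative stand-ins, not the authors' implementation, which uses dedicated note-event tracking and separation of overlapping harmonics.

```python
# Minimal sketch (not the paper's implementation) of the iterative
# estimate-and-subtract separation loop described in the abstract.
import numpy as np
import librosa


def extract_predominant_event(mix, sr, n_fft=2048, hop=512, n_harmonics=10):
    """Estimate one predominant note event and remove it from the mixture."""
    S = librosa.stft(mix, n_fft=n_fft, hop_length=hop)
    mag = np.abs(S)
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)

    # Stand-in pitch trajectory: frequency of the strongest bin per frame.
    # The paper instead tracks the pitch trajectory of a note event.
    f0 = freqs[np.argmax(mag, axis=0)]

    # Binary mask covering the harmonics of the trajectory (+/- 3% tolerance,
    # an illustrative value).
    mask = np.zeros_like(mag)
    for t, f in enumerate(f0):
        if f <= 0:
            continue
        for h in range(1, n_harmonics + 1):
            lo = np.searchsorted(freqs, h * f * 0.97)
            hi = np.searchsorted(freqs, h * f * 1.03)
            mask[lo:hi + 1, t] = 1.0

    # Transform the event back to the time domain and subtract it,
    # leaving the residual as the next input mixture.
    event = librosa.istft(S * mask, hop_length=hop, length=len(mix))
    residual = mix - event
    return event, residual


def iterative_separation(mix, sr, n_iterations=8):
    """Repeat extraction on the residual for a predefined number of iterations.

    The returned note events would then be clustered by the end-user
    into individual sources, as described in the abstract.
    """
    events = []
    residual = mix
    for _ in range(n_iterations):
        event, residual = extract_predominant_event(residual, sr)
        events.append(event)
    return events, residual
```

A usage example under the same assumptions: load a mono mixture with `y, sr = librosa.load("mixture.wav", sr=None, mono=True)`, call `events, residual = iterative_separation(y, sr)`, and then group the returned `events` manually into per-instrument sources, which mirrors the semi-supervised clustering step of the paper.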