Improving post-filtering of artificial speech using pre-trained LSTM neural networks

Saved in:
Bibliographic Details
Author: Coto Jiménez, Marvin
Format: original article
Publication Date: 2019
Description: Several researchers have explored deep learning-based post-filters to increase the quality of statistical parametric speech synthesis. These post-filters map the synthetic speech to the natural speech, treating the different parameters separately and attempting to reduce the gap between them. Long Short-Term Memory (LSTM) neural networks have been applied successfully for this purpose, but many aspects of the results and of the process itself can still be improved. In this paper, we introduce a new pre-training approach for the LSTM, with the objective of enhancing the quality of the synthesized speech, particularly in the spectrum, in a more efficient manner. Our approach begins with the auto-associative training of a single LSTM network, which is then used as the initialization for the post-filters. We show the advantages of this initialization for enhancing the Mel-Frequency Cepstral parameters of synthetic speech. Results show that, in most cases, this initialization enhances the statistical parametric speech spectrum better than the common random initialization of the networks.
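The following is a minimal sketch, not the authors' code, of the two-stage idea the abstract describes: an LSTM is first trained auto-associatively on natural MFCC frames (mapping them to themselves), and its learned weights then initialize a post-filter that learns the synthetic-to-natural mapping. The network size, number of coefficients, optimizer, and toy data shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMPostFilter(nn.Module):
    """LSTM that maps a sequence of MFCC frames to corrected MFCC frames."""
    def __init__(self, n_mfcc=25, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_mfcc)

    def forward(self, x):  # x: (batch, frames, n_mfcc)
        h, _ = self.lstm(x)
        return self.proj(h)

def train(model, inputs, targets, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()

# Toy tensors standing in for MFCC sequences: (batch, frames, coefficients).
natural = torch.randn(8, 100, 25)
synthetic = natural + 0.1 * torch.randn(8, 100, 25)

# Stage 1: auto-associative pre-training on natural speech (identity mapping).
pretrained = LSTMPostFilter()
train(pretrained, natural, natural)

# Stage 2: the post-filter starts from the pre-trained weights, rather than a
# random initialization, and then learns the synthetic-to-natural mapping.
postfilter = LSTMPostFilter()
postfilter.load_state_dict(pretrained.state_dict())
train(postfilter, synthetic, natural)
```

In this sketch the pre-trained weights replace the random initialization that the abstract uses as the baseline; everything else about the post-filter training is left unchanged, which isolates the effect of the initialization.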
Country: Kérwá
Institution: Universidad de Costa Rica
Repository: Kérwá
Language: English
OAI Identifier:oai:kerwa.ucr.ac.cr:10669/86280
Online Access: https://www.mdpi.com/2313-7673/4/2/39
https://hdl.handle.net/10669/86280
Keywords: Deep learning
Long short-term memory (LSTM)
Machine learning
Post-filtering
Signal processing
Speech synthesis