Audio samples for "INTERACTIVE TEXT-TO-SPEECH SYSTEM VIA JOINT STYLE ANALYSIS"

Paper accepted at INTERSPEECH 2020: pdf
Website license info sheet: pdf

Abstract

While modern TTS technologies have made significant advancements in audio quality, they still lack the behavioral naturalness of conversing with people. We propose a style-embedded TTS system that generates styled responses based on the speech query style. To achieve this, the system includes a style extraction model that extracts a style embedding from the speech query, which is then used by the TTS to produce a matching response. We faced two main challenges: 1) only a small portion of the TTS training dataset has style labels, which are needed to train a multi-style TTS that respects different style embeddings during inference; 2) the TTS system and the style extraction model have disjoint training datasets, so we need consistent style labels across the two datasets for the TTS to learn to respect the labels produced by the style extraction model during inference. To solve these challenges, we adopted a semi-supervised approach that uses the style extraction model to create style labels for the TTS dataset and applied transfer learning to learn the style embedding jointly. Our experimental results show user preference for the styled TTS responses and demonstrate the style-embedded TTS system's capability of mimicking the speech query style.
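
To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' code) of the inference flow: a style extraction model maps the query audio to a style embedding, and the style-embedded TTS conditions on that embedding while synthesizing the response. All module names, dimensions, and architectures here (StyleExtractor, StyleEmbeddedTTS, style_dim, the GRU layers) are hypothetical placeholders.

```python
# Hypothetical sketch of the query-style -> styled-response flow; not the paper's code.

import torch
import torch.nn as nn


class StyleExtractor(nn.Module):
    """Maps a query mel-spectrogram to a fixed-size style embedding."""

    def __init__(self, n_mels: int = 80, style_dim: int = 128):
        super().__init__()
        self.encoder = nn.GRU(n_mels, style_dim, batch_first=True)

    def forward(self, query_mels: torch.Tensor) -> torch.Tensor:
        # query_mels: (batch, frames, n_mels); use the final hidden state as the style embedding.
        _, hidden = self.encoder(query_mels)
        return hidden[-1]  # (batch, style_dim)


class StyleEmbeddedTTS(nn.Module):
    """Toy stand-in for a multi-style TTS conditioned on a style embedding."""

    def __init__(self, vocab_size: int = 256, text_dim: int = 256,
                 style_dim: int = 128, n_mels: int = 80):
        super().__init__()
        self.text_embedding = nn.Embedding(vocab_size, text_dim)
        self.decoder = nn.GRU(text_dim + style_dim, 256, batch_first=True)
        self.mel_out = nn.Linear(256, n_mels)

    def forward(self, text_ids: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # Broadcast the style embedding across the text sequence and decode to mel frames.
        text = self.text_embedding(text_ids)                          # (batch, T, text_dim)
        style_seq = style.unsqueeze(1).expand(-1, text.size(1), -1)   # (batch, T, style_dim)
        hidden, _ = self.decoder(torch.cat([text, style_seq], dim=-1))
        return self.mel_out(hidden)                                   # (batch, T, n_mels)


if __name__ == "__main__":
    extractor, tts = StyleExtractor(), StyleEmbeddedTTS()
    query_mels = torch.randn(1, 200, 80)            # placeholder query audio features
    response_ids = torch.randint(0, 256, (1, 50))   # placeholder response text tokens
    style = extractor(query_mels)                   # style embedding from the query
    mels = tts(response_ids, style)                 # styled response spectrogram
    print(mels.shape)                               # torch.Size([1, 50, 80])
```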

Contents

Section A

We present style rendering samples (A and B) for each of the six styles.

Styles

Style      Sample A      Sample B
Soft
Fast
Neutral
Happy
Sad
Angry

Section B

We present the impact of the style weight on the rendered samples for the six styles.

Lower weights decrease the style effect (see the sketch after the table).

Style      Weight 1.0      Weight 0.8      Weight 0.6
Soft
Fast
Neutral
Happy
Sad
Angry
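
One plausible reading of the weight parameter above, consistent with "lower weights decrease the style effect", is a scalar applied to the style embedding before it conditions the TTS; the paper may implement the weighting differently. The helper below reuses the hypothetical modules from the sketch after the abstract.

```python
# Sketch of one possible weighting scheme (an assumption, not necessarily the
# authors' implementation): scale the style embedding by a scalar weight before
# conditioning the TTS. Weight 1.0 keeps the full style effect; 0.8 and 0.6
# progressively attenuate it toward a neutral rendering.

import torch


def apply_style_weight(style_embedding: torch.Tensor, weight: float) -> torch.Tensor:
    """Attenuate the style embedding; weight=0.0 would remove the style entirely."""
    return weight * style_embedding


# Usage with the hypothetical modules sketched after the abstract:
#   style = extractor(query_mels)
#   mels = tts(response_ids, apply_style_weight(style, 0.6))
```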

Input queries and TTS responses

Section C

We present the recorded input queries and the corresponding styled TTS responses.

Collected input queries from multiple speakers

Sample      Input query      TTS response
1A
1B
1C
2A
2B
2C
3A
3B
3C
4A
4B
4C
5A
5B
5C