Accelerating Codec-based Speech Synthesis with Multi-Token Prediction and Speculative Decoding

1Multimodal AI Lab, KAIST, Korea
242dot Inc, Korea
Inference Process

Figure 1: Viterbi-based Speculative Decoding is illustrated as follows: (1) Multiple prediction heads generate several distributions per timestep simultaneously. (2) To optimize memory and computational efficiency, the dimensions of the transition matrix and state probabilities are reduced by selecting only the necessary rows and columns. (3) The best sequence is determined using Speculative Decoding, as described in Algorithm 1. The transition matrix computation for LibriTTS is completed in just 3 minutes. Additionally, top-k sampling is employed to preserve diversity.
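The reduced Viterbi search in steps (2) and (3) can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's implementation: each head contributes only its top-k candidate tokens, the transition matrix is sliced down to those rows and columns, and a standard Viterbi pass picks the highest-scoring token sequence across heads.

```python
import numpy as np

def viterbi_select(head_probs, transition, topk=3):
    """Pick the best token sequence across prediction heads.

    head_probs: (H, V) next-token distribution from each of H heads.
    transition: (V, V) token-to-token transition matrix.
    Only each head's top-k tokens are kept, so the Viterbi pass runs
    over (k, k) blocks instead of the full (V, V) matrix.
    (Illustrative sketch; names and shapes are assumptions.)
    """
    H, V = head_probs.shape
    # Top-k candidate tokens per head: the rows/columns we keep.
    cands = np.argsort(head_probs, axis=1)[:, -topk:]            # (H, k)
    # Viterbi scores for the first head's candidates (log domain).
    score = np.log(head_probs[0, cands[0]] + 1e-12)              # (k,)
    back = []
    for h in range(1, H):
        # Reduced transition block between consecutive heads' candidates.
        trans = np.log(transition[np.ix_(cands[h - 1], cands[h])] + 1e-12)
        emit = np.log(head_probs[h, cands[h]] + 1e-12)
        total = score[:, None] + trans + emit[None, :]           # (k, k)
        back.append(np.argmax(total, axis=0))   # best predecessor per candidate
        score = np.max(total, axis=0)
    # Backtrack the highest-scoring path.
    idx = int(np.argmax(score))
    path = [idx]
    for bp in reversed(back):
        idx = int(bp[idx])
        path.append(idx)
    path.reverse()
    return [int(cands[h][i]) for h, i in enumerate(path)]
```

With a uniform transition matrix this degenerates to per-head argmax; the transition term is what lets the search reject token combinations the heads score highly but that rarely co-occur.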

Abstract

The goal of this paper is to accelerate codec-based speech synthesis systems without compromising speech quality. We propose an enhanced inference method that allows for flexible trade-offs between speed and quality during inference without requiring additional training.

Our core idea is to predict multiple tokens per inference step of the AR module using multiple prediction heads, resulting in a linear reduction in synthesis time as the number of heads increases. Furthermore, we introduce a novel speculative decoding technique that utilizes a Viterbi-based algorithm to select the optimal sequence of generated tokens at each decoding step.
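The multi-token prediction step can be sketched as a set of output projections sharing one AR hidden state. This is a hedged NumPy illustration with assumed shapes and names (the actual head architecture is defined in the paper): one forward pass yields H next-token distributions, so the number of decoding steps drops roughly by a factor of H.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_token_step(hidden, head_weights):
    """One AR forward pass emits H distributions instead of one.

    hidden:       (d,) last hidden state of the AR module.
    head_weights: (H, V, d) one output projection per prediction head
                  (illustrative parameterization, not the paper's).
    Returns (H, V): one next-token distribution per head, so H tokens
    can be drafted per step instead of one.
    """
    logits = head_weights @ hidden            # (H, V)
    return softmax(logits, axis=-1)
```

The drafted tokens are then verified jointly (e.g. by the Viterbi-based speculative decoding above), rather than each head sampling independently.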

In our experiments, we demonstrate that the time required to predict each token is reduced by a factor of 4 to 5 compared to baseline models, with minimal quality tradeoff or even improvement in terms of speech intelligibility.

We have uploaded a refined version of our paper to arXiv. We apologize for the grammatical errors and several mistakes in the first version.

Qualitative comparison between USLM and USLM with 8 heads and Speculative Decoding

Prompt: USLM USLM + Ours (~5.34x faster)

Qualitative comparison between VALLE and VALLE with 8 heads and Speculative Decoding

Prompt: VALLE VALLE + Ours (~4.56x faster)

Ablation study on top-k (TABLE 2.1) with 8 heads

topk=3 topk=5 topk=7 topk=9 topk=15 topk=25

Ablation study on top-k (TABLE 2.2) with 4 heads

topk=3 topk=5 topk=7 topk=9 topk=15 topk=25

BibTeX

@article{nguyen23accelerating,
  author    = {Tan Dat Nguyen and Ji-Hoon Kim and Jeongsoo Choi and Shukjae Choi and Jinseok Park and Younglo Lee and Joon Son Chung},
  title     = {Accelerating Codec-based Speech Synthesis with Multi-Token Prediction and Speculative Decoding},
  journal   = {arXiv preprint},
  year      = {2024},
}