MagiCodec - Demo & Benchmark

Simple Masked Gaussian-Injected Audio Codec for High-Fidelity Reconstruction & Generation

📄 Paper 🤗 Model 🐙 GitHub

Abstract

Neural audio codecs have made significant strides in efficiently mapping raw audio waveforms into discrete token representations, which are foundational for contemporary audio generative models. However, most existing codecs are optimized primarily for reconstruction quality, often at the expense of the downstream modelability of the encoded tokens. Motivated by the need to overcome this bottleneck, we introduce MagiCodec, a novel single-layer, streaming Transformer-based audio codec. MagiCodec is designed with a multistage training pipeline that incorporates Gaussian noise injection and latent regularization, explicitly targeting the enhancement of semantic expressiveness in the generated codes while preserving high reconstruction fidelity. We analytically derive the effect of noise injection in the frequency domain, demonstrating its efficacy in attenuating high-frequency components and fostering robust tokenization. Extensive experimental evaluations show that MagiCodec surpasses state-of-the-art codecs in both reconstruction quality and downstream tasks. Notably, the tokens produced by MagiCodec exhibit Zipf-like distributions, as observed in natural languages, thereby improving compatibility with language-model-based generative architectures.
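The Zipf-like claim above can be illustrated with a toy rank-frequency check: sort token counts from most to least frequent and fit the slope of log(frequency) against log(rank); a slope near -1 is the Zipf-like signature. This sketch is entirely synthetic — the 1024-entry codebook size, the sampling weights, and the sequence length are illustrative assumptions, not MagiCodec outputs.

```python
import math
import random
from collections import Counter

def rank_frequency(tokens):
    """Token frequencies sorted from most to least common."""
    return [count for _, count in Counter(tokens).most_common()]

def loglog_slope(freqs):
    """Least-squares slope of log(frequency) vs. log(rank)."""
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic stand-in for a codec token stream: sample ids from a
# Zipfian weighting over a hypothetical 1024-entry codebook.
random.seed(0)
vocab = list(range(1024))
weights = [1.0 / (rank + 1) for rank in range(len(vocab))]
tokens = random.choices(vocab, weights=weights, k=200_000)

slope = loglog_slope(rank_frequency(tokens))
print(f"log-log rank-frequency slope: {slope:.2f}")  # near -1 for Zipf-like data
```

Applied to real MagiCodec tokens, the same fit would quantify how closely the learned codebook usage matches the Zipf-like distributions reported in the abstract.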

Listening Examples

Benchmark

Comparative scores on LibriSpeech test-clean set. ↓ = lower is better, ↑ = higher is better.

* indicates results from the TS3-Codec paper. BigCodec-S refers to the streaming version of BigCodec.
