Abstract: This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.