Automatic speech recognition (ASR) systems have become increasingly prevalent in spoken language applications in recent years. However, many African languages lack the linguistic resources needed to build robust systems. This paper focuses on developing an end-to-end speech recognition system tailored to Nigerian Pidgin English.
We investigated and evaluated several pretrained state-of-the-art architectures on a new dataset. Our empirical results show that the Wav2Vec2 XLSR-53 variant performs notably well on our dataset, achieving a word error rate (WER) of 29.6% on the test set and surpassing other architectures such as NeMo QuartzNet and Wav2Vec2 Base-100h in quantitative assessments. We further show that pretrained state-of-the-art architectures do not work well out of the box: zero-shot evaluation with XLSR-English, chosen as the baseline for its similarity to Nigerian Pidgin, yielded a much higher WER of 73.7%.
By adapting this architecture to the nuances represented in our dataset, we achieve a relative error reduction of 59.84%. This study underscores the potential for improving ASR systems for under-resourced languages like Nigerian Pidgin English, contributing to greater inclusion in speech technology applications. We publicly release our unique parallel (speech-to-text) Nigerian Pidgin dataset, as well as the model weights, on Hugging Face. Our code will be made available to foster future research from the community.
The proposed dataset consists of the following:
Using BERTopic, we identified 15 themes in the Nigerian Pidgin text dataset, with “Everyday Conversation” and “Politics” emerging as the most prominent across the collected texts.
We use a balanced speaker pool and filter out invalid recordings.
Several state-of-the-art ASR model architectures were evaluated. These include:
Word Error Rate (WER) was used to evaluate performance, with Wav2Vec2-XLSR-53 achieving the lowest WER and effectively capturing Nigerian Pidgin terms, though it struggled with accurate number recognition. The WER formula is given as follows:
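WER = (S + D + I) / N

where S is the number of substituted words, D the number of deleted words, I the number of inserted words, and N the total number of words in the reference transcript. Equivalently, WER is the word-level Levenshtein distance between reference and hypothesis, normalized by the reference length. The sketch below is a minimal pure-Python implementation of this standard definition (not the authors' evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (S + D + I) / N via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost,  # match or substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c")` is 0.5: one substitution plus one deletion over four reference words.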
Furthermore, the qualitative results for all model architectures are summarized in Table 1. Table 2 displays the quantitative results for the best-performing model, Wav2Vec2-XLSR-53. Finally, Table 3 shows a failure case for Wav2Vec2-XLSR-53.
Our final model delivered the best test results owing to its robust pretrained representations.
@misc{rufai2025endtoendtrainingautomaticspeech,
author = {Amina Mardiyyah Rufai and Afolabi Abeeb and Esther Oduntan and Tayo Arulogun and Oluwabukola Adegboro and Daniel Ajisafe},
title = {Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin},
year = {2025},
eprint = {2010.11123},
archivePrefix = {arXiv},
primaryClass = {eess.AS},
url = {https://arxiv.org/abs/2010.11123}
}