Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin

1African Institute For Mathematical Sciences, 2Ladoke Akintola University of Technology
DLI 2025

Pidgin English is widely spoken in Africa, with an estimated 75 million speakers in Nigeria and 5 million in Ghana, yet the continent lacks sufficient resources to support automatic speech recognition for this language. We curate a novel Nigerian Pidgin speech-to-text dataset, show that pretrained state-of-the-art models do not work well out-of-the-box, and reduce the error rate by 59.84% 🚀

Key Contributions

  • A publicly accessible ASR system for Nigerian Pidgin
  • Free speech corpus for Nigerian Pidgin
  • First parallel (speech-to-text) Nigerian Pidgin data

    Abstract

    The prevalence of automatic speech recognition (ASR) systems in spoken language applications has increased significantly in recent years. However, many African languages lack sufficient linguistic resources to support the robustness of these systems. This paper focuses on the development of an end-to-end speech recognition system customized for Nigerian Pidgin English.

    We investigated and evaluated different pretrained state-of-the-art architectures on a new dataset. Our empirical results demonstrate a notable performance of the Wav2Vec2 XLSR-53 variant on our dataset, achieving a word error rate (WER) of 29.6% on the test set and surpassing other architectures such as NeMo QuartzNet and Wav2Vec2 Base-100h in quantitative assessments. Additionally, we demonstrate that pretrained state-of-the-art architectures do not work well out-of-the-box: zero-shot evaluation using XLSR-English as the baseline, chosen for its similarity to Nigerian Pidgin, yielded a much higher WER of 73.7%.

    By adapting this architecture to the nuances represented in our dataset, we reduce the error rate by 59.84%. This study underscores the potential for improving ASR systems for under-resourced languages like Nigerian Pidgin English, contributing to greater inclusion in speech technology applications. We publicly release our unique parallel (speech-to-text) dataset for Nigerian Pidgin, as well as the model weights on Hugging Face. Our code will be made available to foster future research from the community.
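    To illustrate how the released checkpoint can be used, the sketch below runs greedy CTC inference with the Hugging Face transformers API. The model identifier "your-org/wav2vec2-xlsr-53-nigerian-pidgin" and the audio filename are placeholders, not the actual released artifacts.

    # Minimal inference sketch (assumes Python with torch, librosa, and transformers installed).
    import torch
    import librosa
    from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

    MODEL_ID = "your-org/wav2vec2-xlsr-53-nigerian-pidgin"  # placeholder, not the released checkpoint ID

    processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
    model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
    model.eval()

    # Wav2Vec2 models expect 16 kHz mono input.
    speech, _ = librosa.load("sample_utterance.wav", sr=16_000)

    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits

    # Greedy CTC decoding: pick the most likely token per frame, then collapse repeats and blanks.
    pred_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(pred_ids)[0])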

    Dataset

    The proposed dataset consists of the following (a loading sketch follows the list):

    • 4,288 recorded utterances collected from 10 native speakers
    • The LIG-Aikuma app was used to record the speech for the text data
    • Each utterance averages between 8 and 17 words
    • The average sentence length in the corpus is 86 characters
    • The corresponding mean audio duration is approximately 17 seconds
    • The dataset is partitioned into training, validation, and test sets
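    As a rough sketch of how the splits can be loaded and the statistics above reproduced, the snippet below assumes the released corpus is organised as a Hugging Face audiofolder with a metadata file; the "pidgin_asr_dataset" directory name is a placeholder, not the actual release path.

    # Dataset loading sketch (assumes the datasets library and an audiofolder layout with metadata.csv).
    from datasets import load_dataset, Audio

    ds = load_dataset("audiofolder", data_dir="pidgin_asr_dataset")  # placeholder path
    ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

    def add_duration(example):
        audio = example["audio"]
        return {"duration": len(audio["array"]) / audio["sampling_rate"]}

    train = ds["train"].map(add_duration)
    print(f"training utterances: {len(train)}")
    print(f"mean duration: {sum(train['duration']) / len(train):.1f} s")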

    Using BERTopic, we identified 15 themes in the Nigerian Pidgin text dataset, with “Everyday Conversation” and “Politics” emerging as the most prominent across the collected texts.

    Topic distribution for dataset
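    For reference, a minimal BERTopic sketch along the lines of this analysis is shown below; the transcripts.txt path is a placeholder (one transcript per line), and nr_topics=15 mirrors the number of themes reported rather than the exact configuration used.

    # Topic-modelling sketch with BERTopic (assumes the bertopic package is installed).
    from bertopic import BERTopic

    # Placeholder path: one Nigerian Pidgin transcript per line.
    with open("transcripts.txt", encoding="utf-8") as f:
        transcripts = [line.strip() for line in f if line.strip()]

    topic_model = BERTopic(nr_topics=15)  # reduce to 15 themes, matching the number reported
    topics, probs = topic_model.fit_transform(transcripts)

    # Topic -1 collects outliers; the remaining rows are the discovered themes.
    print(topic_model.get_topic_info().head(16))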

    We use a balanced speaker pool and filter out invalid recordings

    Data collection summary

    Model Architecture

    Several state-of-the-art ASR model architectures were evaluated. These include:

    • XLSR-English (zero-shot baseline)
    • NeMo QuartzNet
    • Wav2Vec2-Base-100h
    • Wav2Vec2-XLSR-53 (our final model; see the fine-tuning sketch after this list)
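    As a pointer for reproducing the fine-tuning setup, the sketch below initialises Wav2Vec2-XLSR-53 for CTC training with the Hugging Face transformers API. The vocab.json character vocabulary is a placeholder built beforehand from the training transcripts, and the training loop itself (e.g. with transformers.Trainer) is omitted; this is an illustration, not the exact configuration used in the paper.

    # Fine-tuning setup sketch for Wav2Vec2-XLSR-53 (assumes transformers is installed).
    from transformers import (
        Wav2Vec2CTCTokenizer,
        Wav2Vec2FeatureExtractor,
        Wav2Vec2Processor,
        Wav2Vec2ForCTC,
    )

    # Character-level vocabulary built from the training transcripts (placeholder file).
    tokenizer = Wav2Vec2CTCTokenizer(
        "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
    )
    feature_extractor = Wav2Vec2FeatureExtractor(
        feature_size=1, sampling_rate=16_000, padding_value=0.0,
        do_normalize=True, return_attention_mask=True,
    )
    processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

    # Load the multilingual XLSR-53 checkpoint and attach a fresh CTC head sized to our vocabulary.
    model = Wav2Vec2ForCTC.from_pretrained(
        "facebook/wav2vec2-large-xlsr-53",
        ctc_loss_reduction="mean",
        pad_token_id=processor.tokenizer.pad_token_id,
        vocab_size=len(processor.tokenizer),
    )
    # The convolutional feature encoder is typically kept frozen during fine-tuning.
    model.freeze_feature_encoder()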

    Results

    Word Error Rate (WER) was used to evaluate performance, with Wav2Vec2-XLSR-53 achieving the lowest WER and effectively capturing Nigerian Pidgin terms, though it struggled with accurate number recognition. The WER formula is given as follows:

    $$\mathrm{WER} = \frac{S + D + I}{N}$$

    where $S$, $D$, and $I$ are the numbers of substituted, deleted, and inserted words in the hypothesis, and $N$ is the number of words in the reference transcript.
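    For a concrete check of this definition, the snippet below computes WER with the jiwer library; the reference/hypothesis strings are illustrative examples, not drawn from our test set.

    # WER computation sketch using the jiwer package.
    import jiwer

    reference  = "how you dey today"
    hypothesis = "how you they today"  # one substituted word

    # 1 substitution, 0 deletions, 0 insertions over 4 reference words -> 0.25
    print(f"WER: {jiwer.wer(reference, hypothesis):.2f}")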

    Furthermore, the qualitative results for all model architectures are summarized in Table 1. Table 2 displays the quantitative results for the best-performing model, Wav2Vec2-XLSR-53. Finally, Table 3 shows a failure case for Wav2Vec2-XLSR-53.

    Table 1: Qualitative results for all models
    Table 1: Qualitative results for all models (contd.)

    Our final model delivered the best test results courtesy of its robust representations

    Model Comparison for all models
    Table 2: Quantitative results for Wav2Vec2-XLSR-53
    Table 3: Failure case for Wav2Vec2-XLSR-53

    BibTeX

    
    @misc{rufai2025endtoendtrainingautomaticspeech,
      author        = {Amina Mardiyyah Rufai and Afolabi Abeeb and Esther Oduntan and Tayo Arulogun and Oluwabukola Adegboro and Daniel Ajisafe},
      title         = {Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin},
      year          = {2025},
      eprint        = {2010.11123},
      archivePrefix = {arXiv},
      primaryClass  = {eess.AS},
      url           = {https://arxiv.org/abs/2010.11123}
    }