The files in this dataset correspond to results generated for the Interspeech 2016 paper "Combining Feature and Model-Based Adaptation of RNNLMs for Multi-Genre Broadcast Speech Recognition" (DOI: 10.21437/Interspeech.2016-480). The paper deals with language model adaptation for the MGB Challenge 2015 transcription task.
The files in the zip file are of three types:
- .ctm, the output of the automatic speech recognition system; the columns contain segment information together with the recognised transcripts (a small parsing sketch follows this list).
- .ctm.filt.sys, the scoring output for the automatic speech recognition system, including the overall word error rate and the numbers of insertions, deletions and substitutions of the overall system.
- .ctm.filt.lur, which provides a more detailed breakdown of the word error rate across the individual genres.
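For reference, the overall word error rate reported in the .ctm.filt.sys files follows the standard definition WER = (substitutions + deletions + insertions) / number of reference words. As a rough illustration of working with the .ctm files, the short Python sketch below reads one into a list of word entries; it assumes the conventional CTM column layout (recording ID, channel, start time, duration, word, optional confidence), so the field names used here are illustrative rather than part of the dataset.

    # Minimal sketch, assuming the conventional CTM column layout:
    # <recording-id> <channel> <start-time> <duration> <word> [<confidence>]
    def read_ctm(path):
        entries = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith(";;"):  # skip blanks and comment lines
                    continue
                fields = line.split()
                entry = {
                    "recording": fields[0],
                    "channel": fields[1],
                    "start": float(fields[2]),
                    "duration": float(fields[3]),
                    "word": fields[4],
                }
                if len(fields) > 5:
                    entry["confidence"] = float(fields[5])
                entries.append(entry)
        return entries

    # Example usage (hypothetical file name):
    # hyp = read_ctm("rnnlm.baseline.ctm")
    # print(len(hyp), "hypothesised words")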
The three file types are repeated for all the results described in Table 3 of the paper.
The following describes the naming convention of the files (an informal decoding sketch follows this list):
- rnnlm refers to a Recurrent Neural Network Language Model.
- The amrnnlm prefix refers to an acoustic model text RNNLM.
- The amlmrnnlm prefix refers to an acoustic model + language model text RNNLM.
- The .lattice.rescore suffix refers to results generated with lattice rescoring.
- The .nbest.rescore suffix refers to results generated with n-best rescoring.
- .baseline refers to baseline RNNLM results.
- .noadaptation refers to RNNLM results with no adaptation.
- .genre.finetune refers to genre fine-tuning of the RNNLMs.
- .genre.adaptationlayer refers to genre adaptation layer fine-tuning of the RNNLMs.
- .ldafeat.hiddenlayer refers to Latent Dirichlet Allocation (LDA) features at the hidden layer.
- .genrefeat.hiddenlayer refers to genre 1-hot auxiliary codes at the hidden layer.
- .genrefeat.adaptationlayer refers to genre 1-hot auxiliary codes at the adaptation layer.
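As an informal aid, the sketch below decodes a result file name into plain-English components according to the convention above; the helper function and the exact file name in the usage comment are illustrative and not part of the dataset.

    # Hypothetical helper: the mappings simply restate the naming convention above.
    PREFIXES = {
        "amlmrnnlm": "acoustic model + language model text RNNLM",
        "amrnnlm": "acoustic model text RNNLM",
        "rnnlm": "Recurrent Neural Network Language Model",
    }
    TAGS = {
        "lattice.rescore": "lattice rescoring",
        "nbest.rescore": "n-best rescoring",
        "baseline": "baseline RNNLM results",
        "noadaptation": "no adaptation",
        "genre.finetune": "genre fine-tuning",
        "genre.adaptationlayer": "genre adaptation layer fine-tuning",
        "ldafeat.hiddenlayer": "LDA features at the hidden layer",
        "genrefeat.hiddenlayer": "genre 1-hot auxiliary codes at the hidden layer",
        "genrefeat.adaptationlayer": "genre 1-hot auxiliary codes at the adaptation layer",
    }

    def describe(filename):
        """Return plain-English descriptions of the parts of a result file name."""
        parts = []
        for prefix, meaning in PREFIXES.items():  # longer prefixes are checked first
            if filename.startswith(prefix):
                parts.append(meaning)
                break
        for tag, meaning in TAGS.items():
            if "." + tag in filename:
                parts.append(meaning)
        return parts

    # Example (hypothetical file name):
    # describe("amrnnlm.genre.finetune.ctm.filt.sys")
    # -> ["acoustic model text RNNLM", "genre fine-tuning"]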
All three file types are standard outputs that are recognised by the automatic speech recognition community and can be opened using any text editor.
Funding
EPSRC Programme Grant EP/I031022/1 (Natural Speech Technology)
Ethics
There is no personal data, nor any data that requires ethical approval.
Policy
The data complies with the institution's and funders' policies on access and sharing.