Offline Speech Translation
Recent advances in deep learning make it possible to address traditional NLP tasks in new and fundamentally different ways. One of these tasks is spoken language translation (SLT). For years, SLT has been addressed by cascading an automatic speech recognition (ASR) system and a machine translation (MT) system. Recent trends rely on a single neural network that directly translates the input audio signal in one language into text in a different language, without intermediate symbolic representations such as transcriptions.
The goal of the Offline Speech Translation Task is to examine automatic methods for translating audio speech in one language into text in the target language, using either cascaded solutions or end-to-end approaches. The results of last year's IWSLT 2019 evaluation showed that the performance of end-to-end models is approaching that of cascade solutions, with a difference of around 1.5 BLEU points. Hence, the question we want to answer this year is: is the cascaded solution still the dominant technology in spoken language translation?
In continuity with last year, the task addresses the translation of TED talks from English into German. Two test sets containing the same talks will be released, one with and one without audio segmentation.
Systems will be evaluated with respect to their capability to produce translations similar to the target-language references. Such similarity will be measured in terms of multiple automatic metrics: BLEU, TER, BEER, and characTER. The submitted runs will be ranked based on the BLEU computed on the test set after automatic resegmentation of the hypotheses against the reference translations with mwerSegmenter. The detailed evaluation script can be found in the SLT.KIT.
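As a rough illustration of this pipeline, the sketch below resegments a hypothesis file with mwerSegmenter and then scores it with the sacrebleu Python package (BLEU and TER only; BEER and characTER are separate tools and omitted here). This is a minimal sketch, not the official script: the file names are placeholders, and the mwerSegmenter flags and its __segments output file should be checked against your version of the tool.

# Minimal scoring sketch. Assumptions: mwerSegmenter is on the PATH,
# sacrebleu is installed, and the file names are placeholders. The
# official evaluation script in the SLT.KIT remains authoritative.
import subprocess
import sacrebleu

# Resegment the hypothesis to match the reference segmentation.
# Flag names and the "__segments" output file follow common usage of
# mwerSegmenter; verify them against your version of the tool.
subprocess.run(
    ["mwerSegmenter", "-mref", "ref.de.txt", "-hypfile", "hyp.de.txt"],
    check=True,
)

with open("__segments", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("ref.de.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

print(sacrebleu.corpus_bleu(hyps, [refs]))  # ranking metric
print(sacrebleu.corpus_ter(hyps, [refs]))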
Both cascade and end-to-end models will be evaluated. We kindly ask each participant to specify at submission time whether a cascade or an end-to-end model has been used.
In this task, we use the following definition of an end-to-end model (see the schematic sketch after the list):
- No intermediate discrete representations (neither source-language transcripts as in a cascade, nor target-language hypotheses as in ROVER-style system combination)
- All parameters/parts used during decoding need to be trained on the end-to-end task (they may also be trained on other tasks → multi-task training is allowed; LM rescoring is not)
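To make the distinction concrete, here is a schematic sketch; all functions are hypothetical stubs, not real systems:

# Schematic contrast only; asr, mt, and st_model are hypothetical stubs.
def asr(audio):            # stub: speech -> source-language transcript
    return "transcript"

def mt(text):              # stub: source-language text -> target-language text
    return "Übersetzung"

def st_model(audio):       # stub: one model mapping speech to target text
    return "Übersetzung"

def cascade(audio):
    transcript = asr(audio)   # intermediate discrete representation
    return mt(transcript)     # a cascade system, not end-to-end

def end_to_end(audio):
    # No intermediate discrete representation; every parameter used at
    # decoding time is trained on the end-to-end task (multi-task
    # training is allowed, separate LM rescoring is not).
    return st_model(audio)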
This year, two versions of the same TED talks are released:
- The audio files are segmented into sentence-like segments using automatic tools.
- The audio files are NOT segmented.
To measure progress in the ST field, each participant is also required to translate the 2019 test set, which is still blind. As with this year's test set, the 2019 test set will be made available with and without automatic segmentation.
Past Editions Development Data
The development data is not segmented using the reference transcript. The archives contain a segmentation into sentence-like segments produced by automatic tools, but participants may also use a different segmentation. The data is provided as an archive with the following files ($set, e.g., IWSLT.TED.dev2010):
- $set.en-de.en.xml: Reference transcript (will not be provided for evaluation data)
- $set.en-de.de.xml: Reference translation (will not be provided for evaluation data)
- CTM_LIST: Ordered file list containing the ASR output CTM files (will not be provided for evaluation data; generated by ASR systems that use more data, see the note below)
- FILE_ORDER: Ordered file list containing the wav files
- $set.yaml: This file contains the time steps for sentence-like segments. It is generated by the LIUM Speaker Diarization tool.
- $set.h5: This file contains the 40-dimensional filterbank features for each sentence-like segment of the test data, created by XNMT (see the loading sketch below).
- The last two files are created by the following command:
python -m xnmt.xnmt_run_experiments /opt/SLT.KIT/scripts/xnmt/config.las-pyramidal-preproc.yaml
(Please note that the systems that generated the provided ASR output use more training data than is allowed for this year's evaluation.)
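For orientation, here is a minimal loading sketch for the $set.yaml segment definitions and the $set.h5 features. It assumes PyYAML and h5py are installed; the YAML field names and the HDF5 dataset layout shown here are assumptions to be verified against the released archives.

# Minimal loading sketch; field names and HDF5 layout are assumptions.
import yaml
import h5py

# Each YAML entry describes one sentence-like segment and its time span.
with open("IWSLT.TED.dev2010.yaml", encoding="utf-8") as f:
    segments = yaml.safe_load(f)
print(segments[0])  # e.g. fields such as wav, offset, duration (assumed)

# 40-dimensional filterbank features, one matrix per segment.
with h5py.File("IWSLT.TED.dev2010.h5", "r") as h5:
    for key in list(h5.keys())[:3]:
        feats = h5[key][()]  # assumed shape: (num_frames, 40)
        print(key, feats.shape)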
Allowed Training Data
These datasets can be used to train your model:
- Speech-Translation TED corpus (for this corpus, we provide 40-dimensional filterbank features extracted from the audio by XNMT)
- How2 Corpus (only English - Portuguese)
- LibriVoxDeEn (only German - English)
- The official test set is not part of this corpus, but if you want to use the development data, you need to make sure that it is not part of your training data
- Augmented LibriSpeech (only English - French)
- Mozilla Common Voice (for English, use version en_1488h_2019-12-10)
Submission Guidelines
- Multiple run submissions are allowed, but participants must explicitly indicate one PRIMARY run for each track. All other submitted runs are treated as CONTRASTIVE runs. If none of the runs is marked as PRIMARY, the latest submission (according to the file time stamp) for the respective track will be used as the PRIMARY run.
- Runs have to be submitted as a gzipped TAR archive (see format below) and sent as an email attachment to email@example.com and firstname.lastname@example.org.
- The name of the TAR archive should indicate the type of system (cascade/end-to-end) used to generate the submission
- Each run has to be stored in a plain text file with one sentence per line
- Scoring will be case-sensitive and will include punctuation. Submissions have to be in UTF-8.
TAR archive file structure:
< UserID >/< Set >.< Task >.< UserID >.primary.xml
< UserID >/< Set >.< Task >.< UserID >.contrastive1.xml
< UserID >/< Set >.< Task >.< UserID >.contrastive2.xml
...

where:
< UserID > = user ID of the participant used to download the data files
< Set > = IWSLT18.SLT.tst2018
< Task > = <fromLID>-<toLID>
<fromLID>, <toLID> = language identifiers (LIDs) as given by ISO 639-1 codes; see for example the WIT3 webpage
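For example, an archive with this structure could be assembled as in the minimal sketch below; the user ID, system type, and run file names are placeholders.

# Minimal packaging sketch; user ID, system type, and file names are placeholders.
import tarfile

user_id = "myteam"  # placeholder: the user ID used to download the data
set_name = "IWSLT18.SLT.tst2018"
task = "en-de"

runs = {
    "primary": "primary_run.txt",        # one sentence per line, UTF-8
    "contrastive1": "contrastive_run.txt",
}

# The archive name indicates the system type (cascade or end-to-end).
with tarfile.open(f"{user_id}.end-to-end.tgz", "w:gz") as tar:
    for run_type, path in runs.items():
        arcname = f"{user_id}/{set_name}.{task}.{user_id}.{run_type}.xml"
        tar.add(path, arcname=arcname)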
Chairs: Marco Turchi (FBK, Italy)
Sebastian Stüker (KIT, Germany)
Jan Niehues (Maastricht University, Netherlands)
Matteo Negri (FBK, Italy)