📢 Announcement 📢: Baselines coming soon 🚀

Baseline implementations and links will be published here soon.

Description

Simultaneous translation (also known as real-time or streaming translation) is the task of generating translations incrementally, given only partial input. Simultaneous systems are typically evaluated with respect to quality and latency.
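To make the incremental setting concrete, here is a toy sketch of the read/write loop that a simultaneous system implements (illustrative only, not tied to the evaluation toolkit; all function names are placeholders): at each step, a policy decides from the source read so far whether to consume more input or to commit the next target token.

```python
def simultaneous_decode(source_stream, policy, next_token, eos="</s>"):
    """Toy read/write loop for simultaneous translation (illustrative only).

    source_stream: iterator over source chunks (e.g., audio frames or words)
    policy:        callable returning "READ" or "WRITE" given the partial source/target
    next_token:    callable producing the next committed target token
    """
    source_prefix, target = [], []
    source_done = False
    while not target or target[-1] != eos:
        # The policy never sees the full input, only what has been read so far.
        if not source_done and policy(source_prefix, target) == "READ":
            chunk = next(source_stream, None)
            if chunk is None:
                source_done = True  # source exhausted; keep writing until EOS
            else:
                source_prefix.append(chunk)
        else:
            # Emitted tokens are committed and cannot be revised later.
            target.append(next_token(source_prefix, target))
    return target
```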

There will be one main track and one sub-track:

  • Speech-to-Text: simultaneously translating speech in the source language into text in the target language.
  • Speech-to-Text with Extra Context: same as above, but the systems can also leverage extra context (e.g., content of the presented ACL paper).

Both tracks cover the following language directions (more details will be made available soon):

  • English -> German
  • English -> Chinese
  • English -> Italian
  • Czech -> English

This year, we have three focus areas:

  • long-form speech: our evaluation will be conducted on unsegmented speech
  • large language models: participants are allowed to use LLMs (details will be announced later)
  • extra context: a sub-track allowing the use of extra context (e.g., the content of the presented ACL paper)

The test set domains are subsets of those used in the offline track:

  • English -> German: ACL talks and accent challenge data
  • English -> Chinese: ACL talks
  • English -> Italian: ACL talks
  • Czech -> English: dedicated dev set (will be provided soon)

Training Data and Data Conditions

We follow similar data conditions to those of the offline track (see here). Additionally, for the constrained submission, we require the system to be runnable on a single H100 GPU with 80 GB of memory.

The data condition for this task is “constrained with large language models (LLMs)”. Any open-weight model with a permissive license is acceptable for use. In addition, pretrained speech encoders and ASR models may be employed. We also encourage participants to submit systems leveraging closed-source models/LLMs for evaluation, but such systems will be evaluated separately and will not be eligible for the main ranking.

English-to-X

Our English-to-X training data condition follows that of the offline task. The list is available here. The ACL 60/60 dataset can be used as the development set. The development data can be found here, while the YAML files containing the audio information (useful for metric computation) can be found here.
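As a small, hedged sketch of how such a YAML file might be inspected before metric computation (the file name below is a placeholder, and the exact schema is defined by the released files):

```python
import yaml  # pip install pyyaml

# "dev.yaml" is a placeholder; use the YAML file released with the development data.
with open("dev.yaml") as f:
    segments = yaml.safe_load(f)

# Print the first few entries to see which audio fields (e.g., file name, offset,
# duration) are provided before wiring them into metric computation.
for seg in segments[:3]:
    print(seg)
```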

Czech-to-English

  • ParCzech 3.0 (ASR):
    • Allowed data: parczech-3.0-asr-train-20*.tar.gz
    • https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3631?show=full
  • VoxPopuli (ST)
    • Unlabeled data: cs_v2
    • Translated data (cs → en)
    • Speech-to-speech data (cs → en)
    • https://github.com/facebookresearch/voxpopuli
  • Common Voice Corpus 20.0 (ASR)
    • Czech ASR data
    • CV version: 20.0
    • https://commonvoice.mozilla.org/en/datasets
  • CzEng 2.0 (MT)
    • https://ufal.mff.cuni.cz/czeng
  • OpenSubtitles v2018 (MT)
    • https://opus.nlpl.eu/OpenSubtitles/cs&en/v2018/OpenSubtitles
  • Europarl (MT)
    • https://www.statmt.org/europarl/
  • MOSEL (transcripts only)
    • automatic transcripts for unlabeled VoxPopuli audio
    • https://huggingface.co/datasets/FBK-MT/mosel

Baselines

Last year's baselines for each language pair can be found here (GitHub).

We will provide updated baselines for this year soon.

Submission

The evaluation implementation will use the latest SimulStream toolkit (see paper here).

Participants have two options for the submission:

  • (Preferred) Docker Image Submission: the organizers run the system to compare the computation-aware latency.
  • System Log Submission: computation-aware latency cannot be compared directly, but it will be reported along with the hardware used.

Systems submitted via Docker image are expected to run on a single NVIDIA H100 GPU with 80 GB of HBM. Additionally, participants must include a README with instructions on how to run the system for each track and language direction. To enable communication between evaluators and participants, a point of contact and email address should be provided in the README in case of issues during evaluation. Docker images should support the linux/arm64 architecture, specified during build via the --platform flag.

Regardless of the submission type (Docker or log), participants must also submit results on the development set (i.e., ACL 60/60 or the dedicated Czech-to-English dev set) to determine the latency regime of their submission.

Participants will be allowed to update their submissions during the evaluation period. If you have specific questions regarding your submission to the simultaneous shared task, please reach out via e-mail at agostinv@oregonstate.edu.

Evaluation

Metrics

The system’s performance will be evaluated in two ways:

  • Quality:
    • COMET (see the scoring sketch after this list)
    • Additional results using other metrics (chrF, BLEURT, …)
  • Latency:
    • For the main ranking, we will use LongYAAL (see implementation here).
    • For consistency with the previous year, we will also include StreamLAAL.
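As an example of the COMET scoring listed above, here is a minimal sketch assuming the unbabel-comet package; the checkpoint name is illustrative, since the exact model used by the organizers is not specified here.

```python
from comet import download_model, load_from_checkpoint  # pip install unbabel-comet

# Checkpoint choice is illustrative, not the official evaluation configuration.
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

# Each item needs the source, the system hypothesis, and the reference.
data = [
    {"src": "Welcome to our talk on simultaneous translation.",
     "mt": "Willkommen zu unserem Vortrag über Simultanübersetzung.",
     "ref": "Willkommen zu unserem Vortrag über simultane Übersetzung."},
]
scores = model.predict(data, batch_size=8, gpus=1)
print(scores.system_score)  # corpus-level COMET score
```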

For latency measurement, we will contrast computation-aware and non-computation-aware latency metrics.
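As a toy illustration of that contrast (made-up numbers; this is not the LongYAAL or StreamLAAL definition): the non-computation-aware view only considers how much source audio had been consumed when each token was emitted, assuming instantaneous computation, while the computation-aware view uses elapsed wall-clock time, which also includes the system's processing time.

```python
# Toy illustration only; not an implementation of LongYAAL or StreamLAAL.
# Each emitted token records (audio_read_ms, wallclock_ms): source audio consumed
# at emission time, and elapsed wall-clock time including computation.
emissions = [(600, 900), (1200, 1700), (1800, 2600)]

non_ca = [audio for audio, _ in emissions]  # ignores processing time
ca = [wall for _, wall in emissions]        # includes processing time

print(sum(non_ca) / len(non_ca))  # 1200.0 ms
print(sum(ca) / len(ca))          # ~1733.3 ms; computation adds delay
```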

Ranking

Systems will be ranked by translation quality within the latency constraints, measured by non-computation-aware LongYAAL. The latency regime (low/high) of each system is determined from its submitted development set logs.

This year, we have two latency regimes, low and high. The detailed latency constraints (non-computation-aware LongYAAL) for each language pair will be announced soon.

Human Evaluation

Human evaluation will be conducted for primary submissions.

Organizers

  • Peter Polák (chair, Charles University)
  • Siqi Ouyang (co-chair for the Context Subtrack, Carnegie Mellon University)
  • Victor Agostinelli (Oregon State University)
  • Ondřej Bojar (Charles University)
  • Lizhong Chen (Oregon State University)
  • David Javorský (Charles University)
  • Nam Hoang Luu (Charles University)
  • Sara Papi (FBK)
  • Katsuhito Sudoh (Nara Women’s University)

Contact

Discussion: iwslt-evaluation-campaign@googlegroups.com

  • Peter Polák: surname@ufal.mff.cuni.cz
  • Siqi Ouyang: siqiouya@andrew.cmu.edu
  • Victor Agostinelli: agostinv@oregonstate.edu
  • Lizhong Chen: chenliz@oregonstate.edu