<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-lechler-mlcodec-test-battery-02" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="MlCodecTestBattery">Test Battery for Opus ML Codec Extensions</title>
    <seriesInfo name="Internet-Draft" value="draft-lechler-mlcodec-test-battery-02"/>
    <author fullname="Laura Lechler">
      <organization>Cisco Systems</organization>
      <address>
        <postal>
          <country>United Kingdom</country>
        </postal>
        <email>llechler@cisco.com</email>
      </address>
    </author>
    <author fullname="Kamil Wojcicki">
      <organization>Cisco Systems</organization>
      <address>
        <postal>
          <country>Australia</country>
        </postal>
        <email>kamilwoj@cisco.com</email>
      </address>
    </author>
    <date year="2025" month="November" day="06"/>
    <area>Applications and Real-Time</area>
    <workgroup>Machine Learning for Audio Coding</workgroup>
    <keyword>mushra</keyword>
    <keyword>drt</keyword>
    <keyword>evaluation</keyword>
    <abstract>

<t>This document proposes a methodology and data for the evaluation of machine learning (ML) codec extensions,
such as deep audio redundancy (DRED), within the Opus codec (RFC 6716).</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-lechler-mlcodec-test-battery/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Machine Learning for Audio Coding Working Group mailing list (<eref target="mailto:mlcodec@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/mlcodec/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/mlcodec/"/>.
      </t>
    </note>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>The IETF machine learning for audio coding (mlcodec) working group aims to 
leverage current and future opportunities presented by ML codecs 
to enhance the Opus codec <xref target="RFC6716"/> and its extensions, 
including to improve speech coding quality and robustness to packet loss. 
Effective evaluation of codec extensions (such as DRED),
in both standalone and redundancy settings,
is a crucial factor in achieving those objectives.
It supports reproducibility for existing extensions 
(for instance, by enabling validation of whether a retraining pipeline matches baseline model performance)
and enables benchmarking of future improvements against previously established baselines.</t>
      <t>However, as outlined in subsequent sections, 
effective evaluation of generative ML models presents 
numerous challenges and necessitates specialized subjective 
and objective evaluation methods. 
This document proposes a crowdsourced subjective test battery,
along with associated test datasets, to address the unique requirements 
for accurate and reproducible evaluations of ML codecs.
The proposed test battery covers both speech quality and intelligibility, 
including tests in clean, noisy, and reverberant conditions, 
and incorporates real-world audio data. 
The methodology leverages crowdsourced listeners <xref target="CROWDSOURCED-DRT"/> 
to enable rapid and scalable assessments, 
while controlling the variability associated with non-lab-based measurements.</t>
      <t>In the era of generative ML models, 
reference-based objective metrics face additional limitations, 
while non-intrusive methods struggle with generalization, e.g., <xref target="URGENT2025"/> and <xref target="CROWDSOURCED-MUSHRA"/>. 
Consequently, the use of human listeners, 
the gold standard in both quality and intelligibility assessment, 
is of notable importance.
The generative nature of ML codecs also implies that speech intelligibility 
could be significantly improved and/or degraded by such algorithms. 
For example, human perception for some phoneme categories could be enhanced, 
while confusions might be introduced for others, 
including hallucinations of incorrect phonemes even at high overall perceived quality.
Such confusions may not be easily detected in quality tests, 
highlighting a pressing need for highly diagnostic phoneme-category, 
or even phoneme-level, intelligibility assessment methods.</t>
      <t>The subsequent sections present the methodology, key considerations, 
and further motivation underlying the proposed test battery, 
addressing the challenges and requirements discussed above.</t>
      <section anchor="listening-test-methods">
        <name>Listening Test Methods</name>
        <section anchor="mushra-1s">
          <name>MUSHRA-1S</name>
          <t>MUSHRA-1S <xref target="MUSHRA-1S"/>, a variant of the well-established MUSHRA (multiple stimuli with hidden reference and anchor) methodology for assessing quality <xref target="ITU-R.BS1534-3"/> in clean non-reverberant conditions, is proposed for testing and benchmarking of ML codecs. First, MUSHRA is adapted to a crowdsourced, non-expert listener base, as described in <xref target="CROWDSOURCED-MUSHRA"/>. Particularly for generative models, which may produce hallucinations, a reference-based listening test is preferable <xref target="URGENT2025"/>. Second, one system under test is assessed at a time, in the context of a fixed reference and anchor. Testing one system at a time offers several advantages: unlimited extendability of the test to new conditions within the quality range spanned by anchor and reference, avoidance of context effects from other conditions within the same test, avoidance of the difficulties associated with merging results across multiple tests, and a simpler task for participants, which reduces listener fatigue, particularly among non-expert listeners.</t>
          <t>As such, MUSHRA-1S is similar to absolute category rating (ACR) tests, which can be used to calculate a mean opinion score (MOS), in that it is simple and easily extendable. At the same time, it is more stable than ACR, owing to the fixed range of expected audio quality, bounded by the anchor and reference. Reference-less MOS scores have been demonstrated to suffer from range-equalizing biases <xref target="COOPER2023"/>, with the other samples presented within the same test defining the range of expectation of what constitutes "good" or "bad" speech quality. The drawback of MUSHRA-1S, compared to a traditional MUSHRA test, is a slightly decreased sensitivity to very small differences between similar methods, which may only be detectable in direct comparisons.</t>
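          <t>Post-screening of crowdsourced listeners provides a practical quality gate for MUSHRA-style tests. The sketch below illustrates a hidden-reference screening rule in the spirit of <xref target="ITU-R.BS1534-3"/>; the 90-point threshold and 15% failure allowance are commonly cited values and are shown here as assumptions, not requirements of this document.</t>
          <sourcecode type="python"><![CDATA[
# Illustrative MUSHRA post-screening: exclude a listener who rates
# the hidden reference below 90 on more than 15% of trials
# (in the spirit of ITU-R BS.1534-3; thresholds are assumptions).
def keep_listener(hidden_reference_ratings,
                  threshold=90, max_fail_fraction=0.15):
    fails = sum(1 for r in hidden_reference_ratings if r < threshold)
    return fails / len(hidden_reference_ratings) <= max_fail_fraction

# Example: one failure in eight trials (12.5%) -> listener retained.
print(keep_listener([100, 95, 88, 100, 92, 97, 99, 96]))  # True
]]></sourcecode>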
        </section>
        <section anchor="dcr">
          <name>DCR</name>
          <t>The degradation category rating (DCR) approach is used to produce a degradation mean opinion score (DMOS) <xref target="ITU-T.P800"/>. Although it is typically used with a high-quality reference, the test is also capable of assessing the degradation caused by codecs when they are tested on mild-to-moderately impaired real-world data <xref target="MULLER2024"/>. The approach is more sensitive than ACR <xref target="ITU-T.P800"/>. An implementation of the test procedure for crowdsourced tests is available in <xref target="ITU-T.P808"/>.</t>
        </section>
        <section anchor="drt">
          <name>DRT</name>
          <t>The diagnostic rhyme test (DRT) <xref target="ITU-T.P807"/> measures speech intelligibility by presenting minimal pairs in which the contrasted phonemes differ in a specific, controlled phonetic category. The linguistic and acoustic grounding of the DRT, with test items belonging to classes of distinctive
linguistic features that are acoustically interpretable, makes it a useful tool for both codec analysis and benchmarking. The test is free from context and memory effects and has high sensitivity. It is therefore well-suited to a crowdsourced listener audience. Following the principles for crowdsourced listening tests employed in <xref target="ITU-T.P808"/>, the test was adapted for crowdsourced listening in <xref target="CROWDSOURCED-DRT"/>, and test vectors in five languages were published <xref target="DRT-REPO"/>. These test data were recently adopted in <xref target="LESCHANOWSKY2025"/>.</t>
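          <t>For scoring, the DRT conventionally reports percent correct adjusted for guessing, since each trial is a two-alternative forced choice. A minimal sketch of this chance-corrected score follows; it is illustrative, not a normative excerpt from <xref target="ITU-T.P807"/>.</t>
          <sourcecode type="python"><![CDATA[
# Chance-corrected DRT score for a two-alternative forced-choice
# task: 100 * (right - wrong) / total. Illustrative sketch, not a
# normative excerpt from ITU-T P.807.
def drt_score(num_right, num_wrong):
    total = num_right + num_wrong
    return 100.0 * (num_right - num_wrong) / total

# Example: 188 right and 12 wrong out of 200 trials -> 88.0
print(drt_score(188, 12))
]]></sourcecode>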
        </section>
        <section anchor="crowdsourcing-adaptations">
          <name>Crowdsourcing Adaptations</name>
          <t>Crowdsourced listening tests benefit from rigorous screening and quality control. In addition to providing specific implementations of standardized test approaches for crowdsourcing, <xref target="ITU-T.P808"/> offers useful guiding principles for adapting laboratory-based tests to counteract the challenges posed by the comparatively uncontrolled crowdsourcing environment. For instance, qualification and training steps are added before the actual test stimuli are presented, and catch trials are included in the pool of test questions (as sketched below).
It is further recommended to assess the quality of participants' responses across different platforms, such as Amazon Mechanical Turk, Prolific, and others <xref target="CROWDSOURCED-MUSHRA"/>. Each platform has a unique set of filters that can be used to recruit a specific participant pool. The platform and any filters used should always be reported along with test results, as absolute results may depend on those settings and may differ considerably between platforms.</t>
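          <t>The sketch below illustrates catch-trial-based screening of participants; the data layout and the 80% accuracy threshold are assumptions for illustration, not values mandated by <xref target="ITU-T.P808"/>.</t>
          <sourcecode type="python"><![CDATA[
# Illustrative catch-trial screening: a participant is retained
# only if a sufficient fraction of catch trials is answered
# correctly. The data layout and threshold are assumptions.
def passes_screening(responses, min_catch_accuracy=0.8):
    """responses: iterable of (is_catch_trial, answered_correctly)."""
    catch = [correct for is_catch, correct in responses if is_catch]
    if not catch:
        return False  # no catch trials seen: cannot validate
    return sum(catch) / len(catch) >= min_catch_accuracy

# Example: 4 of 5 catch trials answered correctly -> retained.
example = [(True, True), (False, True), (True, True),
           (True, False), (False, False), (True, True), (True, True)]
print(passes_screening(example))  # True (4/5 = 0.8)
]]></sourcecode>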
        </section>
      </section>
    </section>
    <section anchor="proposed-crowdsourced-listening-test-battery">
      <name>Proposed Crowdsourced Listening Test Battery</name>
      <t>In the literature, evaluations of speech codec quality often focus solely on clean conditions. 
However, given the wide range of potential applications for modern speech codecs, 
and the unique ways in which ML codecs may be affected by various types of real-world distortions,
it is important to assess their limitations under representative real-world scenarios, 
including challenging listening conditions.</t>
      <t>In addition to clean speech data, the proposed test battery considers performance evaluation on overlapping speech, reverberant and noisy speech, speaker consistency, and phoneme-level intelligibility. The current version comprises predominantly English test vectors, but an extension to multiple languages is desirable.
Some of the modules of the test battery outlined below for assessing standalone ML codec performance can also be used, where applicable, for assessing the performance of redundancy schemes under packet loss conditions (e.g., Opus+DRED).</t>
      <t>The proposed test vectors are publicly available at a sampling rate of 24 kHz at <eref target="https://github.com/cisco/multilingual-speech-testing/tree/main/LRAC-2025-test-data/blind-test-set/track_1">https://github.com/cisco/multilingual-speech-testing/tree/main/LRAC-2025-test-data/blind-test-set/track_1</eref>.</t>
      <section anchor="speech-quality-evaluation">
        <name>Speech Quality Evaluation</name>
        <section anchor="clean-speech-test-vectors">
          <name>Clean Speech Test Vectors</name>
          <t>By employing the MUSHRA-1S approach with high-quality clean speech data, the system under test is evaluated with respect to overall quality. The reference also allows the listener to assess the correctness of the linguistic content as well as the preservation of speaker characteristics. In this test, the quality of each codec or extension is assessed in standalone mode. The diverse test set comprises 100 gender-balanced clean speech files covering 100 unique speakers, and includes samples of both adult and children's speech. Furthermore, the set of test vectors covers a diverse range of English accents.</t>
        </section>
        <section anchor="real-world-degradation-test-vectors">
          <name>Real-World Degradation Test Vectors</name>
          <t>As speech codecs may be used in a wide variety of applications, it cannot be assumed that the audio to be compressed constitutes clean speech in the sense of dry, noise-free, high-quality audio. It is therefore important to assess the codec's resilience to real-world degradation.
For tests where the test vectors themselves have impaired quality, DCR offers an effective way to measure the severity of any additional degradation introduced by the codec.
The test data consist of 90 crowdsourced speech files captured in mildly impaired real-world scenarios of noise and reverberation. Of these, 45 files predominantly feature reverberant speech and 45 speech in noise. The reverberation and noise levels are mild to moderate.</t>
        </section>
        <section anchor="simultaneous-talker-test-vectors">
          <name>Simultaneous Talker Test Vectors</name>
          <t>Most applications rely on the codec's ability to preserve simultaneous speech from multiple talkers. In practice, however, this can be a challenging task. A listening test using the DCR methodology offers insight into whether the presence of overlapping speech leads to degradation, which may occur in the form of artifacts or speech suppression. The proposed test set consists of 20 files of conversations between two or three talkers.</t>
        </section>
        <section anchor="packet-loss-scenarios">
          <name>Packet Loss Scenarios</name>
          <t>Real-world packet loss traces and/or simulated loss patterns (including patterns generated with the packet loss simulator provided by the working group in Opus) can be utilized to evaluate the overall quality of redundancy codecs, such as Opus and DRED working together.</t>
          <t>Details TBD.</t>
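          <t>As one illustration of simulated loss patterns, the sketch below implements a simple two-state (Gilbert-Elliott style) model, a common way to produce bursty losses; the transition probabilities are illustrative and not values proposed by this document.</t>
          <sourcecode type="python"><![CDATA[
# Minimal two-state (Gilbert-Elliott style) packet loss simulator;
# packets are lost whenever the chain is in the "bad" state.
# Transition probabilities are illustrative assumptions.
import random


def loss_trace(n_packets, p_good_to_bad=0.05, p_bad_to_good=0.4,
               seed=0):
    """Return a list of booleans: True means the packet is lost."""
    rng = random.Random(seed)
    bad = False
    trace = []
    for _ in range(n_packets):
        if bad and rng.random() < p_bad_to_good:
            bad = False
        elif not bad and rng.random() < p_good_to_bad:
            bad = True
        trace.append(bad)
    return trace


# Expected loss rate: p_g2b / (p_g2b + p_b2g), here about 11%.
trace = loss_trace(10000)
print(sum(trace) / len(trace))
]]></sourcecode>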
        </section>
      </section>
      <section anchor="speech-intelligibility-evaluation">
        <name>Speech Intelligibility Evaluation</name>
        <section anchor="clean-speech-test-vectors-1">
          <name>Clean Speech Test Vectors</name>
          <t>The DRT for evaluating speech intelligibility, adapted for crowdsourced participants <xref target="CROWDSOURCED-DRT"/>, is proposed to be performed on a subset of the stimuli provided in <xref target="DRT-REPO"/>. The subset consists of two test vectors, one male and one female talker sample, for each word pair in the standard DRT word list for English <xref target="ITU-T.P807"/>. Test vectors for four other languages are also available in the same collection.
Because the test relies on listeners perceiving the subtle and highly localized acoustic cues that distinguish the two target phonemes, and because packet loss can remove such cues entirely regardless of the codec, this test is primarily applicable to the evaluation of standalone codecs, with limited expected utility when combined with packet losses and redundancy schemes.</t>
        </section>
        <section anchor="noisy-test-vectors">
          <name>Noisy Test Vectors</name>
          <t>To evaluate a codec's resilience to noise in terms of speech intelligibility, the proposed evaluation battery for ML codecs contains noisy counterparts to the clean speech test vectors described in the previous section. Speech-shaped noise (SSN) is used as a stationary additive masker in which intelligibility can be evaluated. While the presence of noise may lead to particularly severe codec distortion in some models, even well-preserved noise can help to distinguish the intelligibility of high-quality models that exhibit a ceiling effect in clean conditions. The use of stationary noise is essential for the DRT to ensure uniform effects on the short-term localized perceptual cues. For the same reason, the noisy version of the test is also geared towards the evaluation of standalone codecs.
The SSN was generated from the long-term average of the short-term spectra of a publicly available clean speech data set <xref target="DEMIRSAHIN2020"/>.
The average spectrum was used to derive a filter, which was convolved with white noise to produce the SSN.</t>
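          <t>A minimal sketch of this SSN generation procedure is shown below, assuming 24 kHz mono clean-speech signals already loaded as NumPy arrays; the FFT size and normalization are illustrative choices.</t>
          <sourcecode type="python"><![CDATA[
# Sketch of speech-shaped noise (SSN) generation: average the
# short-term magnitude spectra of clean speech, derive an FIR
# filter from the average spectrum, and apply it to white noise.
# Assumes 24 kHz mono NumPy arrays; parameters are illustrative.
import numpy as np
from scipy import signal


def average_spectrum(speech_signals, n_fft=1024):
    """Long-term average of short-term magnitude spectra."""
    acc = np.zeros(n_fft // 2 + 1)
    for x in speech_signals:
        _, _, stft = signal.stft(x, nperseg=n_fft)
        acc += np.abs(stft).mean(axis=1)
    return acc / len(speech_signals)


def speech_shaped_noise(avg_spectrum, n_samples, seed=0):
    """Filter white noise so it matches the average spectrum."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    taps = np.fft.fftshift(np.fft.irfft(avg_spectrum))  # linear-phase FIR
    ssn = signal.fftconvolve(white, taps, mode="same")
    return ssn / np.max(np.abs(ssn))  # peak-normalize
]]></sourcecode>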
        </section>
      </section>
      <section anchor="example-results">
        <name>Example Results</name>
        <t>The results shown in Table 1 below were obtained using the test methodology described above. Subjective tests were run on the Prolific crowdsourcing platform. Participants were required to be native speakers of English, with an approval rate of at least 98% and at least 110 previous submissions. Only participants without any self-reported hearing impairments and without a cochlear implant were invited to participate. Additionally, diagnostic rhyme test studies were only open to participants who self-reported not having dyslexia.</t>
        <table>
          <name>Example results: means with 95% confidence intervals</name>
          <thead>
            <tr>
              <th align="left">Codec</th>
              <th align="center">Quality in Clean Speech (MUSHRA) [95% CI]</th>
              <th align="center">Intelligibility in Clean Speech (DRT) [95% CI]</th>
              <th align="center">Quality in Real-World Noise and Reverberation (DCR) [95% CI]</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Input</td>
              <td align="center">98.3 [+/- 0.2]</td>
              <td align="center">94.9 [+/- 1.3]</td>
              <td align="center">4.7 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 9000 bps NOLACE</td>
              <td align="center">85.4 [+/- 1.7]</td>
              <td align="center">90.0 [+/- 2.0]</td>
              <td align="center">4.3 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 9000 bps LACE</td>
              <td align="center">70.2 [+/- 2.0]</td>
              <td align="center">90.6 [+/- 1.8]</td>
              <td align="center">3.9 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 9000 bps</td>
              <td align="center">56.2 [+/- 2.3]</td>
              <td align="center">89.0 [+/- 2.0]</td>
              <td align="center">3.3 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">Opus v1.5.2 6000 bps</td>
              <td align="center">24.0 [+/- 0.7]</td>
              <td align="center">86.3 [+/- 2.4]</td>
              <td align="center">3.0 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA v1.5.2 q0 1772 bps</td>
              <td align="center">60.6 [+/- 1.5]</td>
              <td align="center">90.5 [+/- 2.2]</td>
              <td align="center">3.1 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA v1.5.2 q6 957 bps</td>
              <td align="center">62.3 [+/- 1.7]</td>
              <td align="center">88.1 [+/- 2.5]</td>
              <td align="center">2.7 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA v1.5.2 q10 423 bps</td>
              <td align="center">41.1 [+/- 1.6]</td>
              <td align="center">80.9 [+/- 3.3]</td>
              <td align="center">1.8 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA Candidate_A greg189 q1 1735 bps</td>
              <td align="center">61.4 [+/- 1.8]</td>
              <td align="center">90.4 [+/- 2.0]</td>
              <td align="center">3.2 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA Candidate_A greg189 q6 848 bps</td>
              <td align="center">53.0 [+/- 1.3]</td>
              <td align="center">87.7 [+/- 2.4]</td>
              <td align="center">2.5 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA Candidate_A greg189 q9 425 bps</td>
              <td align="center">37.5 [+/- 1.8]</td>
              <td align="center">82.9 [+/- 2.9]</td>
              <td align="center">1.9 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA Candidate_B jm26d q1 1786 bps</td>
              <td align="center">61.4 [+/- 1.6]</td>
              <td align="center">90.9 [+/- 2.1]</td>
              <td align="center">3.1 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA Candidate_B jm26d q6 868 bps</td>
              <td align="center">50.4 [+/- 1.4]</td>
              <td align="center">88.9 [+/- 2.4]</td>
              <td align="center">2.5 [+/- 0.1]</td>
            </tr>
            <tr>
              <td align="left">DRED SA Candidate_B jm26d q9 456 bps</td>
              <td align="center">36.8 [+/- 1.7]</td>
              <td align="center">84.8 [+/- 2.7]</td>
              <td align="center">1.9 [+/- 0.1]</td>
            </tr>
          </tbody>
        </table>
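        <t>For reference, the means and 95% confidence intervals reported above can be computed per condition as sketched below; the example ratings are illustrative placeholders, not data from these studies.</t>
        <sourcecode type="python"><![CDATA[
# Per-condition mean and 95% confidence interval (t-distribution),
# in the format used in the table above. The example ratings are
# illustrative placeholders, not data from these studies.
import numpy as np
from scipy import stats


def mean_with_ci(ratings, confidence=0.95):
    """Return (mean, half-width of the confidence interval)."""
    ratings = np.asarray(ratings, dtype=float)
    half = stats.sem(ratings) * stats.t.ppf(
        (1 + confidence) / 2, len(ratings) - 1)
    return ratings.mean(), half


scores = [85, 90, 78, 88, 92, 81, 87, 84]  # one hypothetical condition
m, ci = mean_with_ci(scores)
print(f"{m:.1f} [+/- {ci:.1f}]")
]]></sourcecode>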
      </section>
    </section>
    <section anchor="objective-evaluation">
      <name>Objective Evaluation</name>
      <t>Objective metrics are often used during the development of speech codecs,
with expert evaluations conducted towards the end of the development lifecycle.
While effective for traditional DSP-based codecs,
well-established reference-based metrics,
such as PESQ <xref target="ITU-T.P862"/>, often fail to accurately evaluate generative methods.
For instance, PESQ has been empirically shown to have an underestimation bias
for generative models, whose output quality may be high
but may also differ considerably from the reference <xref target="CROWDSOURCED-MUSHRA"/>.</t>
      <t>At present, research into alternative metrics is flourishing,
with various innovative methods being proposed,
such as non-intrusive DNN-based metrics (e.g., <xref target="UTMOS"/>),
metrics with non-matched references (e.g., <xref target="SCOREQ"/>),
or composite-score metrics (e.g., <xref target="UNI-VERSA"/>).
While recent correlation investigations, e.g., <xref target="URGENT2025"/>, are promising,
it is too early to include such metrics in this proposal,
as it remains to be seen which metrics can demonstrate both good accuracy and generalization
across a variety of generative models and test vectors.
Further insights in this area are of potential value for rapid,
accessible, and inexpensive evaluation of ML codecs.
Hence, we propose to investigate which objective metrics are effective
predictors of listener responses for the test battery components,
and under which conditions, as sketched below.</t>
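      <t>A minimal sketch of such an investigation, correlating an objective metric's per-condition outputs with subjective scores, follows; the metric values are illustrative placeholders.</t>
      <sourcecode type="python"><![CDATA[
# Sketch of metric validation: correlate an objective metric's
# per-condition outputs with subjective scores. The subjective
# values echo the MUSHRA means in the table above; the metric
# values are illustrative placeholders, not measured data.
import numpy as np
from scipy import stats

subjective = np.array([85.4, 70.2, 56.2, 24.0])  # MUSHRA means
objective = np.array([4.1, 3.6, 3.1, 2.2])       # hypothetical predictor

r, _ = stats.pearsonr(subjective, objective)
rho, _ = stats.spearmanr(subjective, objective)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
]]></sourcecode>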
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC6716">
          <front>
            <title>Definition of the Opus Audio Codec</title>
            <author fullname="JM. Valin" initials="JM." surname="Valin"/>
            <author fullname="K. Vos" initials="K." surname="Vos"/>
            <author fullname="T. Terriberry" initials="T." surname="Terriberry"/>
            <date month="September" year="2012"/>
            <abstract>
              <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances. It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s. Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6716"/>
          <seriesInfo name="DOI" value="10.17487/RFC6716"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="ITU-T.P800">
          <front>
            <title>Methods for subjective determination of transmission quality</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="1996" month="August"/>
          </front>
          <seriesInfo name="ITU-T" value="Recommendation P.800"/>
        </reference>
        <reference anchor="ITU-R.BS1534-3">
          <front>
            <title>Method for the subjective assessment of intermediate quality level of audio systems</title>
            <author>
              <organization>ITU-R</organization>
            </author>
            <date year="2015" month="October"/>
          </front>
          <seriesInfo name="ITU-R" value="Recommendation BS.1534-3"/>
        </reference>
        <reference anchor="ITU-T.P807">
          <front>
            <title>Subjective test methodology for assessing speech intelligibility</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="2016" month="February"/>
          </front>
          <seriesInfo name="ITU-T" value="Recommendation P.807"/>
        </reference>
        <reference anchor="ITU-T.P808">
          <front>
            <title>Subjective evaluation of speech quality with a crowdsourcing approach</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="2021" month="June"/>
          </front>
          <seriesInfo name="ITU-T" value="Recommendation P.808"/>
        </reference>
        <reference anchor="ITU-T.P862" target="https://www.itu.int/rec/T-REC-P.862">
          <front>
            <title>Perceptual evaluation of speech quality (PESQ): An objective method for end-to-end speech quality assessment of narrow-band telephone networks and speech codecs</title>
            <author>
              <organization>ITU-T</organization>
            </author>
            <date year="2001" month="February"/>
          </front>
        </reference>
        <reference anchor="CROWDSOURCED-DRT" target="https://ieeexplore.ieee.org/document/10447869">
          <front>
            <title>Crowdsourced Multilingual Speech Intelligibility Testing</title>
            <author initials="L." surname="Lechler" fullname="L. Lechler">
              <organization/>
            </author>
            <author initials="K." surname="Wojcicki" fullname="K. Wojcicki">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="ICASSP" value="2024"/>
          <seriesInfo name="DOI" value="10.1109/ICASSP48485.2024.10447869"/>
        </reference>
        <reference anchor="LESCHANOWSKY2025" target="https://arxiv.org/abs/2506.01731v1">
          <front>
            <title>Benchmarking Neural Speech Codec Intelligibility with SITool</title>
            <author initials="A." surname="Leschanowsky" fullname="A. Leschanowsky">
              <organization/>
            </author>
            <author initials="K.K." surname="Lakshminarayana" fullname="K.K. Lakshminarayana">
              <organization/>
            </author>
            <author initials="A." surname="Rajasekhar" fullname="A. Rajasekhar">
              <organization/>
            </author>
            <author initials="L." surname="Behringer" fullname="L. Behringer">
              <organization/>
            </author>
            <author initials="I." surname="Kilinc" fullname="I. Kilinc">
              <organization/>
            </author>
            <author initials="G." surname="Fuchs" fullname="G. Fuchs">
              <organization/>
            </author>
            <author initials="E.A.P." surname="Habets" fullname="E.A.P. Habets">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="INTERSPEECH" value="2025"/>
          <seriesInfo name="DOI" value="10.48550/arXiv.2506.01731"/>
        </reference>
        <reference anchor="CROWDSOURCED-MUSHRA" target="https://arxiv.org/abs/2506.00950">
          <front>
            <title>Crowdsourcing MUSHRA Tests in the Age of Generative Speech Technologies: A Comparative Analysis of Subjective and Objective Testing Methods</title>
            <author initials="L." surname="Lechler" fullname="L. Lechler">
              <organization/>
            </author>
            <author initials="C." surname="Moradi" fullname="C. Moradi">
              <organization/>
            </author>
            <author initials="I." surname="Balic" fullname="I. Balic">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="INTERSPEECH" value="2025"/>
        </reference>
        <reference anchor="COOPER2023" target="https://www.isca-archive.org/interspeech_2023/cooper23_interspeech.pdf">
          <front>
            <title>Investigating Range-Equalizing Bias in Mean Opinion Score Ratings of Synthesized Speech</title>
            <author initials="E." surname="Cooper" fullname="E. Cooper">
              <organization/>
            </author>
            <author initials="J." surname="Yamagishi" fullname="J. Yamagishi">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="INTERSPEECH" value="2023"/>
          <seriesInfo name="pages" value="1104--1108"/>
        </reference>
        <reference anchor="DRT-REPO" target="https://github.com/cisco/multilingual-speech-testing/tree/main/speech-intelligibility-DRT">
          <front>
            <title>Multilingual Speech Testing - Speech Intelligibility DRT</title>
            <author>
              <organization>Cisco Systems</organization>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="MULLER2024" target="https://www.isca-archive.org/interspeech_2024/muller24c_interspeech.pdf">
          <front>
            <title>Speech quality evaluation of neural audio codecs</title>
            <author initials="T." surname="Muller" fullname="T. Muller">
              <organization/>
            </author>
            <author initials="S." surname="Ragot" fullname="S. Ragot">
              <organization/>
            </author>
            <author initials="L." surname="Gros" fullname="L. Gros">
              <organization/>
            </author>
            <author initials="P." surname="Philippe" fullname="P. Philippe">
              <organization/>
            </author>
            <author initials="P." surname="Scalart" fullname="P. Scalart">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="INTERSPEECH" value="2024"/>
          <seriesInfo name="pages" value="1760--1764"/>
        </reference>
        <reference anchor="URGENT2025">
          <front>
            <title>Interspeech 2025 URGENT Speech Enhancement Challenge</title>
            <author initials="K." surname="Saijo" fullname="K. Saijo">
              <organization/>
            </author>
            <author initials="W." surname="Zhang" fullname="W. Zhang">
              <organization/>
            </author>
            <author initials="S." surname="Cornell" fullname="S. Cornell">
              <organization/>
            </author>
            <author initials="R." surname="Scheibler" fullname="R. Scheibler">
              <organization/>
            </author>
            <author initials="C." surname="Li" fullname="C. Li">
              <organization/>
            </author>
            <author initials="Z." surname="Ni" fullname="Z. Ni">
              <organization/>
            </author>
            <author initials="A." surname="Kumar" fullname="A. Kumar">
              <organization/>
            </author>
            <author initials="M." surname="Sach" fullname="M. Sach">
              <organization/>
            </author>
            <author initials="Y." surname="Fu" fullname="Y. Fu">
              <organization/>
            </author>
            <author initials="W." surname="Wang" fullname="W. Wang">
              <organization/>
            </author>
            <author initials="T." surname="Fingscheidt" fullname="T. Fingscheidt">
              <organization/>
            </author>
            <author initials="S." surname="Watanabe" fullname="S. Watanabe">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="INTERSPEECH" value="2025"/>
          <seriesInfo name="target" value="https://arxiv.org/abs/2505.23212"/>
        </reference>
        <reference anchor="UNI-VERSA">
          <front>
            <title>Uni-VERSA: Versatile Speech Assessment with a Unified Network</title>
            <author initials="J." surname="Shi" fullname="J. Shi">
              <organization/>
            </author>
            <author initials="H.J." surname="Shim" fullname="H.J. Shim">
              <organization/>
            </author>
            <author initials="S." surname="Watanabe" fullname="S. Watanabe">
              <organization/>
            </author>
            <date year="2025"/>
          </front>
          <seriesInfo name="DOI" value="10.48550/arXiv.2505.20741"/>
          <seriesInfo name="target" value="https://arxiv.org/abs/2505.20741"/>
        </reference>
        <reference anchor="DEMIRSAHIN2020" target="https://www.aclweb.org/anthology/2020.lrec-1.804">
          <front>
            <title>Crowdsourced high-quality UK and Ireland English Dialect speech data set.</title>
            <author initials="I." surname="Demirsahin" fullname="I. Demirsahin">
              <organization/>
            </author>
            <author initials="O." surname="Kjartansson" fullname="O. Kjartansson">
              <organization/>
            </author>
            <author initials="A." surname="Gutkin" fullname="A. Gutkin">
              <organization/>
            </author>
            <author initials="C." surname="Rivera" fullname="C. Rivera">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="LREC" value="2020"/>
          <seriesInfo name="pages" value="6532--6541"/>
          <seriesInfo name="ISBN" value="979-10-95546-34-4"/>
        </reference>
        <reference anchor="SCOREQ" target="https://proceedings.neurips.cc/paper_files/paper/2024/file/bece7e02455a628b770e49fcfa791147-Paper-Conference.pdf">
          <front>
            <title>SCOREQ: Speech Quality Assessment with Contrastive Regression</title>
            <author initials="A." surname="Ragano" fullname="A. Ragano">
              <organization/>
            </author>
            <author initials="J." surname="Skoglund" fullname="J. Skoglund">
              <organization/>
            </author>
            <author initials="A." surname="Hines" fullname="A. Hines">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="NeurIPS" value="2024"/>
          <seriesInfo name="pages" value="105702--105729"/>
        </reference>
        <reference anchor="UTMOS" target="https://www.isca-archive.org/interspeech_2022/saeki22c_interspeech.pdf">
          <front>
            <title>UTMOS: UTokyo-SaruLab System for VoiceMOS Challenge 2022</title>
            <author initials="T." surname="Saeki" fullname="T. Saeki">
              <organization/>
            </author>
            <author initials="D." surname="Xin" fullname="D. Xin">
              <organization/>
            </author>
            <author initials="W." surname="Nakata" fullname="W. Nakata">
              <organization/>
            </author>
            <author initials="T." surname="Koriyama" fullname="T. Koriyama">
              <organization/>
            </author>
            <author initials="S." surname="Takamichi" fullname="S. Takamichi">
              <organization/>
            </author>
            <author initials="H." surname="Saruwatari" fullname="H. Saruwatari">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="INTERSPEECH" value="2022"/>
          <seriesInfo name="pages" value="4521--4525"/>
        </reference>
        <reference anchor="MUSHRA-1S" target="https://arxiv.org/abs/2509.19219">
          <front>
            <title>MUSHRA-1S: A scalable and sensitive test approach for evaluating top-tier speech processing systems</title>
            <author initials="L." surname="Lechler" fullname="L. Lechler">
              <organization/>
            </author>
            <author initials="I." surname="Balic" fullname="I. Balic">
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
          <seriesInfo name="Preprint" value="2025"/>
        </reference>
      </references>
    </references>
  </back>

</rfc>
