<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc2629 version 1.6.2 (Ruby 3.0.3) -->
<?rfc tocindent="yes"?>
<?rfc strict="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-gruessing-moq-requirements-01" category="info" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.12.3 -->
  <front>
    <title abbrev="MoQ Use Cases and Considerations">Media Over QUIC - Use Cases and Considerations for Media Transport Protocol Design</title>
    <seriesInfo name="Internet-Draft" value="draft-gruessing-moq-requirements-01"/>
    <author initials="J." surname="Gruessing" fullname="James Gruessing">
      <organization>Nederlandse Publieke Omroep</organization>
      <address>
        <postal>
          <country>Netherlands</country>
        </postal>
        <email>james.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="S." surname="Dawkins" fullname="Spencer Dawkins">
      <organization>Tencent America LLC</organization>
      <address>
        <postal>
          <country>United States of America</country>
        </postal>
        <email>spencerdawkins.ietf@gmail.com</email>
      </address>
    </author>
    <date year="2022" month="March" day="07"/>
    <area>applications</area>
    <workgroup>MOQ Mailing List</workgroup>
    <keyword>Internet-Draft QUIC</keyword>
    <abstract>
      <t>This document describes use cases that have been discussed in the IETF community under the banner of "Media Over QUIC", provides analysis about those use cases, recommends a subset of use cases that cover live media ingest, syndication, and streaming for further exploration, and describes considerations that should guide the design of protocols to satisfy these use cases.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <t><em>RFC Editor: please remove this section before publication</em></t>
      <t>Source code and issues for this draft can be found at
<eref target="https://github.com/fiestajetsam/draft-gruessing-moq-requirements">https://github.com/fiestajetsam/draft-gruessing-moq-requirements</eref>.</t>
      <t>Discussion of this draft should take place on the IETF Media Over QUIC (MoQ)
mailing list, at <eref target="https://www.ietf.org/mailman/listinfo/moq">https://www.ietf.org/mailman/listinfo/moq</eref>.</t>
    </note>
  </front>
  <middle>
    <section anchor="intro">
      <name>Introduction</name>
      <t>This document describes use cases that have been discussed in the IETF community under the banner of "Media Over QUIC", provides analysis about those use cases, recommends a subset of use cases that cover live media ingest, syndication, and streaming for further exploration, and describes considerations that should guide the design of protocols to satisfy these use cases.</t>
      <section anchor="for-the-impatient-reader">
        <name>For The Impatient Reader</name>
        <ul spacing="compact">
          <li>Our proposal is to focus on live media use cases, as described in <xref target="propscope"/>, rather than on interactive media use cases or on-demand use cases.</li>
          <li>The reasoning behind this proposal can be found in <xref target="analy-interact"/>.</li>
          <li>The considerations for protocol work to satisfy the proposed use cases can be found in <xref target="considerations"/>.</li>
        </ul>
        <t>Most of the rest of this document provides background for these sections.</t>
      </section>
      <section anchor="why-quic">
        <name>Why QUIC For Media?</name>
        <t>It is not the purpose of this document to argue against proposals for work on media applications that do not involve QUIC. Such proposals are simply out of scope for this document.</t>
        <t>When work on the QUIC protocol (<xref target="RFC9000"/>) was chartered (<xref target="QUIC-goals"/>), the key goals for QUIC were:</t>
        <ul spacing="compact">
          <li>Minimizing connection establishment and overall transport latency for applications, starting with HTTP,</li>
          <li>Providing multiplexing without head-of-line blocking,</li>
          <li>Requiring only changes to path endpoints to enable deployment,</li>
          <li>Enabling multipath and forward error correction extensions, and</li>
          <li>Providing always-secure transport, using TLS 1.3 by default.</li>
        </ul>
        <t>These goals were chosen with HTTP (<xref target="I-D.draft-ietf-quic-http"/>) in mind.</t>
        <t>While work on "QUIC version 1" (version codepoint 0x00000001) was underway, protocol designers considered potential advantages of the QUIC protocol for other applications. In addition to the key goals for HTTP applications, these advantages were immediately apparent for at least some media applications:</t>
        <ul spacing="compact">
          <li>QUIC endpoints can create bidirectional or unidirectional ordered byte streams.</li>
          <li>QUIC will automatically handle congestion control, packet loss, and reordering for stream data.</li>
          <li>QUIC streams allow multiple media streams to share congestion and flow control without otherwise blocking each other.</li>
          <li>QUIC streams also allow partial reliability, since either the sender or receiver can terminate the stream early without affecting the overall connection.</li>
          <li>With the DATAGRAM extension (<xref target="I-D.draft-ietf-quic-datagram"/>), further partially reliable models are possible, and applications can send congestion controlled datagrams below the MTU size.</li>
          <li>QUIC connections are established using an ALPN.</li>
          <li>QUIC endpoints can choose and change their connection ID.</li>
          <li>QUIC endpoints can migrate IP address without breaking the connection.</li>
          <li>Because QUIC is encapsulated in UDP, QUIC implementations can run in user space, rather than in kernel space, as TCP typically does. This allows more room for extensible APIs between application and transport, allowing more rapid implementation and deployment of new congestion control, retransmission, and prioritization mechanisms.</li>
          <li>QUIC is supported in browsers via HTTP/3 or WebTransport.</li>
          <li>With WebTransport, it is possible to write libraries or applications in JavaScript.</li>
        </ul>
        <t>The specific advantages of interest may vary from use case to use case, but these advantages justify further investigation of "Media Over QUIC".</t>
      </section>
    </section>
    <section anchor="term">
      <name>Terminology</name>
      <section anchor="moq-meaning">
        <name>The Many Meanings of "Media Over QUIC"</name>
        <t>Protocol developers have been considering the implications of the QUIC protocol (<xref target="RFC9000"/>) for media transport for several years, resulting in a large number of possible meanings of the term "Media Over QUIC", or "MOQ". As of this writing, "Media Over QUIC" has had at least these meanings:</t>
        <ul spacing="compact">
          <li>any kind of media carried directly over the QUIC protocol, as a QUIC payload</li>
          <li>any kind of media carried indirectly over the QUIC protocol, as an RTP payload (<xref target="RFC3550"/>)</li>
          <li>any kind of media carried indirectly over the QUIC protocol, as an HTTP/3 payload</li>
          <li>any kind of media carried indirectly over the QUIC protocol, as a WebTransport payload</li>
          <li>the encapsulation of any Media Transport Protocol (<xref target="mtp"/>) in a QUIC payload</li>
          <li>an IETF mailing list (<xref target="MOQ-ml"/>), which was requested "... for discussion of video ingest and distribution protocols that use QUIC as the underlying transport", although other "Media Over QUIC" proposals have also been discussed there.</li>
        </ul>
        <t>There may be IETF participants using other meanings as well.</t>
        <t>As of this writing, the second bullet ("any kind of media carried indirectly over the QUIC protocol, as an RTP payload"), seems to be in scope for the IETF AVTCORE working group, and was discussed at some length at the February 2022 AVTCORE working group meeting <xref target="AVTCORE-2022-02"/>, although no drafts in this space have yet been adopted by the AVTCORE working group.</t>
      </section>
      <section anchor="mtp">
        <name>Media Transport Protocol</name>
        <t>This document describes considerations for work on extensions to existing "Media Transport Protocols" or creation of new "Media Transport Protocols".</t>
        <t>Within this document, we use the term "Media Transport Protocol" to describe the protocol of interest. This is easier to understand if the reader assumes that we are talking about a protocol stack that looks something like this:</t>
        <artwork><![CDATA[
               Media
    ---------------------------
           Media Format
    ---------------------------
    Media Transport Protocol(s)
    ---------------------------
               QUIC
]]></artwork>
        <t>where "Media Format" would be something like RTP payload formats or ISOBMFF <xref target="ISOBMFF"/>, and "Media Transport Protocol" would be something like RTP or HTTP. Not all possible proposals for "Media Over QUIC" follow this model, but for the ones that do, it seems useful to have names for "the protocol layers between Media and QUIC".</t>
        <t>It is worth noting explicitly that the "Media Transport Protocol" layer might include more than one protocol. For example, a new Media Transport Protocol might be defined to run over HTTP, or even over WebTransport and HTTP.</t>
      </section>
      <section anchor="latent-cat">
        <name>Latency Requirement Categories</name>
        <t>Within this document, we extend the latency requirement categories for streaming media described in <xref target="I-D.draft-ietf-mops-streaming-opcons"/>:</t>
        <ul spacing="compact">
          <li>ultra low-latency (less than 1 second)</li>
          <li>low-latency live (less than 10 seconds)</li>
          <li>non-low-latency live (10 seconds to a few minutes)</li>
          <li>on-demand (hours or more)</li>
        </ul>
        <t>These latency bands were appropriate for streaming media, which was the target for <xref target="I-D.draft-ietf-mops-streaming-opcons"/>, but some interactive media may have latency requirements significantly tighter than "ultra low-latency". Within this document, we are also using</t>
        <ul spacing="compact">
          <li>Ull-50 (less than 50 ms)</li>
          <li>Ull-200 (less than 200 ms)</li>
        </ul>
        <t>Perhaps obviously, these last two latency bands are shortened forms of "ultra-low latency - 50 ms" and "ultra-low latency - 200 ms". Perhaps less obviously, bikeshedding on better names and more useful values is welcomed.</t>
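        <t>As a rough illustration, the bands above can be expressed as a simple classification function. This is a non-normative sketch; in particular, the 300-second upper bound for "non-low-latency live" is our own reading of "a few minutes", not a value taken from <xref target="I-D.draft-ietf-mops-streaming-opcons"/>.</t>
        <sourcecode type="python"><![CDATA[
def latency_band(latency_seconds: float) -> str:
    """Map an end-to-end latency, in seconds, to the latency bands
    defined in this document (a non-normative sketch)."""
    if latency_seconds < 0.050:
        return "Ull-50"
    if latency_seconds < 0.200:
        return "Ull-200"
    if latency_seconds < 1:
        return "ultra low-latency"
    if latency_seconds < 10:
        return "low-latency live"
    if latency_seconds < 300:  # assumed reading of "a few minutes"
        return "non-low-latency live"
    return "on-demand"
]]></sourcecode>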
      </section>
    </section>
    <section anchor="priorart">
      <name>Prior and Existing Specifications</name>
      <t>Several draft specifications have been proposed that either encapsulate existing Media Transport Protocols in QUIC, or define
their own new Media Transport Protocol on top of QUIC. With the exception of RUSH (<xref target="kpugin"/>), it is unknown whether the specifications listed in this section
have seen any deployment, or interoperation between multiple implementations.</t>
      <section anchor="hurst">
        <name>QRT: QUIC RTP Tunnelling</name>
        <t><xref target="I-D.draft-hurst-quic-rtp-tunnelling"/></t>
        <t>QRT encapsulates RTP and RTCP and defines the means of using QUIC datagrams
with them, defining a new payload within a datagram frame that distinguishes
packets belonging to an RTP flow from RTCP packets.</t>
      </section>
      <section anchor="englebart">
        <name>RTP over QUIC</name>
        <t><xref target="I-D.draft-engelbart-rtp-over-quic"/></t>
        <t>This specification also encapsulates RTP and RTCP, but unlike QRT, which simply
relies on the default QUIC congestion control mechanisms, it defines a set of
requirements for the QUIC implementation's congestion controller in order to
permit the use of separate congestion control algorithms.</t>
      </section>
      <section anchor="kpugin">
        <name>RUSH - Reliable (unreliable) streaming protocol</name>
        <t><xref target="I-D.draft-kpugin-rush"/></t>
        <t>Whilst RUSH predates the DATAGRAM specification, it uses its own frame types on
top of QUIC to take advantage of QUIC implementations reassembling messages
larger than the MTU. In addition, individual media frames are given their own stream
identifiers, removing head-of-line blocking when frames are processed out of order.</t>
        <t>It defines its own registry for signalling codec information, with room for
future expansion, but it is presently limited to a subset of popular video and audio
codecs and doesn't include other media types (such as subtitles, transcriptions, or
other signalling information) outside the bitstream.</t>
      </section>
      <section anchor="sharabayko">
        <name>Tunnelling SRT over QUIC</name>
        <t><xref target="I-D.draft-sharabayko-srt-over-quic"/></t>
        <t>Secure Reliable Transport (SRT) (<xref target="I-D.draft-sharabayko-srt"/>) is itself a general-purpose transport protocol,
primarily for ingest transport use cases, and this specification covers the
encapsulation and delivery of SRT on top of QUIC using datagram frame types.
The specification sets some requirements regarding how the two protocols interact, and
leaves considerations for congestion control and pacing open, to prevent conflict
between the two protocols. Apart from that, SRT provides native support for stream multiplexing,
thus contributing this missing functionality to QUIC datagrams.</t>
      </section>
      <section anchor="warp">
        <name>Warp - Segmented Live Video Transport</name>
        <t><xref target="I-D.draft-lcurley-warp"/></t>
        <t>Warp's specification attempts to map Group of Pictures (GoP) video encoding on top of
QUIC streams. It depends on ISOBMFF containers to encapsulate both media as well
as messaging, and defines prioritisation with separate considerations for audio
and video. It doesn't yet define bi-directionality of media flows, and can be
run over protocols like WebTransport <xref target="I-D.draft-ietf-webtrans-overview"/>.</t>
      </section>
      <section anchor="comparison-of-existing-specifications">
        <name>Comparison of Existing Specifications</name>
        <t>** Additional details for this comparison could usefully be added here. **</t>
        <ul spacing="compact">
          <li>Some drafts attempt to use existing payloads of RTP, RTCP, and SDP, while others do not.</li>
          <li>Some use QUIC Datagram frames, while others use QUIC streams.</li>
          <li>All drafts take differing approaches to flow/stream identification and management. Some address congestion control and others just defer this to QUIC to handle.</li>
          <li>Some drafts specify ALPN identification, while others do not.</li>
        </ul>
      </section>
    </section>
    <section anchor="overallusecases">
      <name>Use Cases Informing This Proposal</name>
      <t>Our goal in this section is to understand the range of use cases that have been proposed for "Media Over QUIC".</t>
      <t>Although some of the use cases described in this section came out of "RTP over QUIC" proposals, they are worth considering in the broader "Media Over QUIC" context, and may be especially relevant to MOQ, depending on whether "RTP over QUIC" requires major changes to RTP and RTCP, in order to meet the requirements arising out of the corresponding use cases.</t>
      <t>An early draft in the "media over QUIC" space,
<xref target="I-D.draft-rtpfolks-quic-rtp-over-quic"/>, defined several key use cases. Some of the
following use cases have been inspired by that document, and others have come from discussions with the
wider MOQ community (among other places, a side meeting at IETF 112).</t>
      <t>For each use case in this section, we also define</t>
      <ul spacing="compact">
        <li>the number of senders or receivers in a given session transmitting distinct streams,</li>
        <li>whether a session has bi-directional flows of media between senders and receivers, and</li>
        <li>the expected lowest latency requirements using the definitions specified in <xref target="term"/>.</li>
      </ul>
      <t>It is likely that we should add other characteristics, as we come to understand them.</t>
      <section anchor="interact">
        <name>Interactive Media</name>
        <t>The use cases described in this section have one particular attribute in common - the target latency for these cases are on the order of one or two RTTs. In order to meet those targets, it is not possible to rely on protocol mechanisms that require multiple RTTs to function effectively. For example,</t>
        <ul spacing="compact">
          <li>When the target latency is on the order of one RTT, it makes sense to use FEC <xref target="RFC6363"/> and codec-level packet loss concealment <xref target="RFC6716"/>, rather than selectively retransmitting only lost packets. These mechanisms use more bytes, but do not require multiple RTTs in order to recover from packet loss.</li>
          <li>When the target latency is on the order of one RTT, it is impossible to use congestion control schemes like BBR <xref target="I-D.draft-cardwell-iccrg-bbr-congestion-control"/>, since BBR has probing mechanisms that rely on temporarily inducing delay and amortizing the consequences of that over multiple RTTs.</li>
        </ul>
        <t>This may help to explain why these use cases often rely on protocols such as RTP <xref target="RFC3550"/>, which provide low-level control of packetization and transmission.</t>
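        <t>To make the FEC trade-off concrete, the following non-normative sketch shows the simplest possible scheme: one XOR parity packet over a block of equal-length source packets, which lets a receiver repair a single loss as soon as the rest of the block arrives, without waiting an RTT for a retransmission. Real FEC schemes such as those in <xref target="RFC6363"/> use more capable codes, but the latency property is the same.</t>
        <sourcecode type="python"><![CDATA[
from functools import reduce

def xor_parity(packets):
    """Compute one XOR parity packet over equal-length source packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def repair(received, parity):
    """Repair at most one lost packet (marked None), using the parity.

    Recovery happens as soon as the rest of the block and the parity
    arrive -- no retransmission round trip is needed.
    """
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("a single parity packet can repair only one loss")
    if missing:
        present = [p for p in received if p is not None]
        received[missing[0]] = xor_parity(present + [parity])
    return received
]]></sourcecode>
        <t>The cost is also visible in the sketch: the parity packet is pure overhead whenever nothing is lost, which is the "more bytes" trade-off described above.</t>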
        <section anchor="gaming">
          <name>Gaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received, and user inputs are sent by the client. This may also
include the client receiving other types of signalling, such as triggers for
haptic feedback. This may also carry media from the client such as microphone
audio for in-game chat with other players.</t>
        </section>
        <section anchor="remdesk">
          <name>Remote Desktop</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received, and user inputs are sent by the client. Latency
requirements for this use case are marginally different from the gaming use
case. This may also include signalling and/or transmission of files or devices
connected to the user's computer.</t>
        </section>
        <section anchor="vidconf">
          <name>Video Conferencing/Telephony</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">Many to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50 to Ull-200</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is both sent and received. This may include audio from
microphone(s) or other inputs, and may include "screen sharing" or inclusion of
other content such as a slide deck, document, or video presentation. This may be done
as client/server, or peer-to-peer with a many-to-many relationship between
senders and receivers. The target latency may be as large as Ull-200 for
some media types, such as audio, but other media types in this use case have much
more stringent latency targets.</t>
        </section>
      </section>
      <section anchor="lm-media">
        <name>Live Media</name>
        <t>The use cases in this section, unlike the use cases described in <xref target="interact"/>, still have "humans in the loop", but these humans expect media to be "responsive", where the responsiveness is more on the order of 5 to 10 RTTs. This allows the use of protocol mechanisms that require more than one or two RTTs - as noted in <xref target="interact"/>, end-to-end recovery from packet loss and congestion avoidance are two such protocol mechanisms that can be used with Live Media.</t>
        <t>To illustrate the difference, the responsiveness expected with videoconferencing is much greater than watching a video, even if the video is being produced "live" and sent to a platform for syndication and distribution.</t>
        <section anchor="lmingest">
          <name>Live Media Ingest</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a source for onward handling into a distribution
platform. The media may comprise multiple audio and/or video sources.
Bitrates may either be static or set dynamically through signalling of connection
information (bandwidth, latency) based on data sent by the receiver.</t>
        </section>
        <section anchor="lmsynd">
          <name>Live Media Syndication</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is sent onwards to another platform for further distribution. The
media may be compressed down to a bitrate lower than the source, but higher than the
final distribution output. Streams may be redundant, with failover mechanisms in
place.</t>
        </section>
        <section anchor="lmstream">
          <name>Live Media Streaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a live broadcast or stream. This may comprise
multiple audio or video outputs with different codecs or bitrates. This may also
include other types of media essence, such as subtitles, or timed signalling
information (e.g. markers to indicate a change of behaviour in the client, such as
advertisement breaks). This use case may also include "live rewind", where a window
of media behind the live edge is made available for clients to play back, either
because the local player falls behind the live edge or because the viewer wishes to
play back from a point in the past.</t>
        </section>
      </section>
      <section anchor="od-media">
        <name>On-Demand Media</name>
        <t>Finally, the "On-Demand" use cases described in this section do not have a tight linkage between ingest and streaming, allowing significant transcoding, processing, insertion of video clips in a news article, etc. The latency constraints for the use cases in this section may be dominated by the time required for whatever actions are required before media are available for streaming.</t>
        <section anchor="od-ingest">
          <name>On-Demand Ingest</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">On Demand</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is ingested and processed for a system to later serve it to clients
as on-demand media. This media may be provided from a pre-recorded source, or captured from live output, but in either case it is not immediately passed to viewers; instead, it is stored for "on-demand" retrieval, and may be transcoded upon ingest.</t>
        </section>
        <section anchor="od-stream">
          <name>On-Demand Media Streaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">On Demand</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a non-live, typically pre-recorded source. This may
feature the additional outputs, bitrates, codecs, and media types described in the
live media streaming use case.</t>
        </section>
      </section>
    </section>
    <section anchor="propscope">
      <name>Proposed Scope for "Media Over QUIC"</name>
      <t>Our proposal is that "Media Over QUIC" discussions focus first on the use cases described in <xref target="lm-media"/>, which are Live Media Ingest (<xref target="lmingest"/>),
Syndication (<xref target="lmsynd"/>), and Streaming (<xref target="lmstream"/>). Our reasoning for this suggestion follows.</t>
      <t>Each of the use cases in <xref target="overallusecases"/> fits into one of three classifications of solutions.</t>
      <section anchor="analy-interact">
        <name>Analysis for Interactive Use Cases</name>
        <t>The first group, Interactive Media, as described in <xref target="interact"/>, and covering gaming (<xref target="gaming"/>), screen sharing (<xref target="remdesk"/>), and general video conferencing (<xref target="vidconf"/>), are
largely covered by RTP, often in conjunction with WebRTC <xref target="WebRTC"/>, and related protocols today.</t>
        <t>Whilst these use cases might benefit from a QUIC-based protocol, given the size
of existing deployments it may be more appropriate to extend the existing RTP
protocols and specifications.</t>
      </section>
      <section anchor="analy-lm">
        <name>Analysis for Live Media Use Cases</name>
        <t>The second group of classifications, in <xref target="lm-media"/>, covering Live Media Ingest (<xref target="lmingest"/>),
Live Media Syndication (<xref target="lmsynd"/>), and Live Media Streaming (<xref target="lmstream"/>) are likely the use cases that will benefit most from
this work.</t>
        <t>Existing ingest and streaming protocols such as HLS <xref target="RFC8216"/> and DASH <xref target="DASH"/>
are approaching the limits of how far they can reduce latency in live streaming,
and for scenarios where low-bitrate audio streams are used, these protocols add a significant
amount of overhead compared to the media bitstream itself.</t>
        <t>For this reason, we suggest that work on "Media Over QUIC" protocols target these use cases at this time.</t>
      </section>
      <section anchor="analy-od">
        <name>Analysis for On-Demand Use Cases</name>
        <t>The third group, <xref target="od-media"/>, covering On-Demand Media Ingest (<xref target="od-ingest"/>) and On-Demand Media streaming (<xref target="od-stream"/>) is unlikely to benefit from work in
this space. Without the same "Live Media" latency requirements that would motivate deployment of new protocols, existing protocols such as HLS and
DASH are probably "good enough" to meet the needs of these use cases.</t>
        <t>This does not mean that existing protocols in this space are perfect. Segmented protocols such as HLS and DASH were developed to overcome the deficiencies of TCP, as used in HTTP/1.1 <xref target="RFC7230"/> and HTTP/2 <xref target="RFC7540"/>, and do not make full use of the possible congestion window along the path from sender to receiver. Other protocols in this space have their own deficiencies. For example, RTSP <xref target="RFC7826"/> does not have easy ways to add support for new media codecs.</t>
        <t>Our expectation is that these use cases will not drive work in the "Media Over QUIC" space, but as new protocols come into being, they may very well be taken up for these use cases as well.</t>
      </section>
    </section>
    <section anchor="considerations">
      <name>Considerations for Protocol Work</name>
      <t>Even a cursory examination of the existing proposals listed in <xref target="priorart"/>
shows that there are fundamental differences in the approaches being used. This section is intended to "up-level" the conversation beyond specific protocols, so that we are more likely to agree on what is important for protocol design work.</t>
      <t>Please note that the considerations in this section are focused especially on the use cases described in <xref target="lm-media"/>, although other use cases are mentioned for comparison and contrast.</t>
      <section anchor="here-be-dragons">
        <name>Here Be Dragons</name>
        <t>The discussion in <xref target="considerations"/> is less mature than in most other sections of this document. The good news is that this section is fertile ground for people who would like to contribute to future revisions of this document. Comments are welcome on the entire document, but they are especially welcome on this section. The authors suggest that high-level comments are most appropriate at this time.</t>
      </section>
      <section anchor="codec-agility">
        <name>Codec Agility</name>
        <t>When initiating a media session, the sender and receiver will need to agree on the codecs, bitrates, resolution, and other media details, based on
their capabilities and preferences. This agreement needs to take place before media transmission commences,
but renegotiation might also take place during media transmission, perhaps as a result of changes to device output or network
conditions (such as a reduction in available network bandwidth).</t>
        <t>It may be
preferable to reuse existing mechanisms for this purpose, e.g. SDP <xref target="RFC4566"/>.</t>
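        <t>As a concrete sketch, assuming SDP is reused for this purpose, a session description like the following conveys the codec and bitrate details that endpoints would need to agree on (addresses, ports, and payload type numbers here are purely illustrative):</t>
        <sourcecode type="sdp"><![CDATA[
v=0
o=- 20518 0 IN IP4 203.0.113.1
s=MoQ example session
c=IN IP4 203.0.113.1
t=0 0
m=audio 49170 RTP/AVP 111
a=rtpmap:111 opus/48000/2
a=fmtp:111 maxaveragebitrate=64000
m=video 49172 RTP/AVP 96
a=rtpmap:96 H264/90000
]]></sourcecode>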
      </section>
      <section anchor="support-an-appropriate-range-of-latencies">
        <name>Support an Appropriate Range of Latencies</name>
        <t>Support for a nominal latency appropriate for the in-scope use cases should be
achievable, with consideration for the minimum buffer that a receiver playing content may
need in order to handle congestion, packet loss, and other degradations in network
quality.</t>
      </section>
      <section anchor="migration-of-sessions">
        <name>Migration of Sessions</name>
        <t>Migration of a session between hosts, whether sender or receiver,
should be supported. This may happen because the sender is undergoing
maintenance or resource rebalancing, because either endpoint is experiencing a
change in network connectivity (such as a device moving from Wi-Fi to cellular
connectivity), or for other reasons.</t>
        <t>This may depend on QUIC capabilities such as <xref target="I-D.draft-ietf-quic-multipath"/>
but support for full QUIC operation over multiple paths between senders and receivers is by no means essential.</t>
      </section>
      <section anchor="acc">
        <name>Appropriate Congestion Control</name>
        <t>An appropriate congestion control mechanism will depend upon the use cases under consideration.</t>
        <t>It's worth remembering that we have more experience with QUIC carrying HTTP traffic than with any other type of application at this time, and consequently, we have more experience with congestion control mechanisms such as NewReno <xref target="RFC9002"/>, Cubic <xref target="RFC8312"/>, and BBR <xref target="I-D.draft-cardwell-iccrg-bbr-congestion-control"/> being used with QUIC than with any other congestion control mechanisms. These congestion control mechanisms may also be appropriate for the on-demand use cases described in <xref target="od-media"/>.</t>
        <t>Conversely, for the interactive use cases described in <xref target="interact"/>, these congestion control mechanisms are very likely inappropriate, especially when QUIC is being used with a Media Transport Protocol such as RTP, which provides its own congestion control mechanism, and which does not seem to interact well with a second, QUIC-level congestion control mechanism. Congestion control mechanisms such as SCReAM <xref target="RFC8298"/> or NADA <xref target="RFC8698"/> may be more appropriate for media. "Congestion Control Requirements for Interactive Real-Time Media" <xref target="RFC8836"/> is a useful reference.</t>
        <t>Awkwardly, the live media use cases described in <xref target="lm-media"/> live somewhere in the middle, and work will be needed to understand the characteristics of an appropriate congestion control mechanism for these use cases.</t>
      </section>
      <section anchor="support-lossless-and-lossy-media-transport">
        <name>Support Lossless and Lossy Media Transport</name>
        <t>TODO: confirm scope of this draft to describe lossless media transport, lossy media transport, or both lossless and lossy transport.</t>
      </section>
      <section anchor="flow-directionality">
        <name>Flow Directionality</name>
        <t>Media should be able to flow in either direction, from client to server or
vice versa, either individually or concurrently, but directionality should only be negotiated at
the start of the session.</t>
      </section>
      <section anchor="webtransport">
        <name>WebTransport</name>
        <t>TODO: Unsure of the importance of this consideration for live media use cases. If this is critical, we have to consider two
things:</t>
        <ul spacing="compact">
          <li>WebTransport supports HTTP/2; are we going to explicitly exclude it?</li>
          <li>Also, WebTransport <xref target="I-D.draft-ietf-webtrans-overview"/> has normative language
around congestion control, which may be at odds with the considerations described in <xref target="acc"/>.</li>
        </ul>
      </section>
      <section anchor="authentication">
        <name>Authentication</name>
        <t>In order to allow hosts to authenticate one another, capabilities beyond what QUIC provides may be necessary. Any such mechanism should be kept simple but robust, in order to
prevent attacks such as credential brute-forcing.</t>
        <t>TODO: More details are required here</t>
      </section>
      <section anchor="considerations-implying-quic-extensions">
        <name>Considerations Implying QUIC Extensions</name>
        <t>Most of the discussion of protocol work in this document has avoided mentioning capabilities that may be useful for some use cases, but seem to imply the need for extensions to the QUIC protocol, beyond what is already being considered in the IETF QUIC working group. These are included in this section, for completeness' sake.</t>
        <section anchor="nat-traversal">
          <name>NAT Traversal</name>
          <t>From Section 8.2 of <xref target="RFC9000"/>:</t>
          <ul empty="true">
            <li>
              <t>Path validation is not designed as a NAT traversal mechanism. Though the mechanism described here might be effective for the creation of NAT bindings that support NAT traversal, the expectation is that one endpoint is able to receive packets without first having sent a packet on that path. Effective NAT traversal needs additional synchronization mechanisms that are not provided here.</t>
            </li>
          </ul>
          <t>Although there are use cases that would benefit from a mechanism for NAT traversal, a QUIC protocol extension would be needed to support those use cases.</t>
        </section>
        <section anchor="multicast">
          <name>Multicast</name>
          <t>Although multicast and other network broadcasting capabilities are often used to deliver media in our use cases, QUIC does not yet support multicast, and a QUIC protocol extension would be needed to do so. In addition, the inclusion of multicast would introduce more complexity into both the specification and
client implementations.
On the other hand, UDP multicast may be considered for last-mile delivery outside of the QUIC transport, so
it would be beneficial for a protocol to provide for that possibility (e.g. RTP/QUIC -&gt; RTP/UDP).</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document makes no requests of IANA.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>As this document is intended to guide discussion and consensus, it introduces
no security considerations of its own.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>Informative References</name>
      <reference anchor="RFC3550">
        <front>
          <title>RTP: A Transport Protocol for Real-Time Applications</title>
          <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne">
            <organization/>
          </author>
          <author fullname="S. Casner" initials="S." surname="Casner">
            <organization/>
          </author>
          <author fullname="R. Frederick" initials="R." surname="Frederick">
            <organization/>
          </author>
          <author fullname="V. Jacobson" initials="V." surname="Jacobson">
            <organization/>
          </author>
          <date month="July" year="2003"/>
          <abstract>
            <t>This memorandum describes RTP, the real-time transport protocol.  RTP provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services.  RTP does not address resource reservation and does not guarantee quality-of- service for real-time services.  The data transport is augmented by a control protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality.  RTP and RTCP are designed to be independent of the underlying transport and network layers.  The protocol supports the use of RTP-level translators and mixers. Most of the text in this memorandum is identical to RFC 1889 which it obsoletes.  There are no changes in the packet formats on the wire, only changes to the rules and algorithms governing how the protocol is used. The biggest change is an enhancement to the scalable timer algorithm for calculating when to send RTCP packets in order to minimize transmission in excess of the intended rate when many participants join a session simultaneously.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="STD" value="64"/>
        <seriesInfo name="RFC" value="3550"/>
        <seriesInfo name="DOI" value="10.17487/RFC3550"/>
      </reference>
      <reference anchor="RFC4566">
        <front>
          <title>SDP: Session Description Protocol</title>
          <author fullname="M. Handley" initials="M." surname="Handley">
            <organization/>
          </author>
          <author fullname="V. Jacobson" initials="V." surname="Jacobson">
            <organization/>
          </author>
          <author fullname="C. Perkins" initials="C." surname="Perkins">
            <organization/>
          </author>
          <date month="July" year="2006"/>
          <abstract>
            <t>This memo defines the Session Description Protocol (SDP).  SDP is intended for describing multimedia sessions for the purposes of session announcement, session invitation, and other forms of multimedia session initiation.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="4566"/>
        <seriesInfo name="DOI" value="10.17487/RFC4566"/>
      </reference>
      <reference anchor="RFC6363">
        <front>
          <title>Forward Error Correction (FEC) Framework</title>
          <author fullname="M. Watson" initials="M." surname="Watson">
            <organization/>
          </author>
          <author fullname="A. Begen" initials="A." surname="Begen">
            <organization/>
          </author>
          <author fullname="V. Roca" initials="V." surname="Roca">
            <organization/>
          </author>
          <date month="October" year="2011"/>
          <abstract>
            <t>This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss.  The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media.  This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows.  Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) that is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined that are not specific to a particular FEC scheme, and FEC schemes can be defined that are not specific to a particular Content Delivery Protocol.   [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="6363"/>
        <seriesInfo name="DOI" value="10.17487/RFC6363"/>
      </reference>
      <reference anchor="RFC6716">
        <front>
          <title>Definition of the Opus Audio Codec</title>
          <author fullname="JM. Valin" initials="JM." surname="Valin">
            <organization/>
          </author>
          <author fullname="K. Vos" initials="K." surname="Vos">
            <organization/>
          </author>
          <author fullname="T. Terriberry" initials="T." surname="Terriberry">
            <organization/>
          </author>
          <date month="September" year="2012"/>
          <abstract>
            <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances.  It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s.  Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="6716"/>
        <seriesInfo name="DOI" value="10.17487/RFC6716"/>
      </reference>
      <reference anchor="RFC7230">
        <front>
          <title>Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing</title>
          <author fullname="R. Fielding" initials="R." role="editor" surname="Fielding">
            <organization/>
          </author>
          <author fullname="J. Reschke" initials="J." role="editor" surname="Reschke">
            <organization/>
          </author>
          <date month="June" year="2014"/>
          <abstract>
            <t>The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext information systems.  This document provides an overview of HTTP architecture and its associated terminology, defines the "http" and "https" Uniform Resource Identifier (URI) schemes, defines the HTTP/1.1 message syntax and parsing requirements, and describes related security concerns for implementations.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7230"/>
        <seriesInfo name="DOI" value="10.17487/RFC7230"/>
      </reference>
      <reference anchor="RFC7540">
        <front>
          <title>Hypertext Transfer Protocol Version 2 (HTTP/2)</title>
          <author fullname="M. Belshe" initials="M." surname="Belshe">
            <organization/>
          </author>
          <author fullname="R. Peon" initials="R." surname="Peon">
            <organization/>
          </author>
          <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson">
            <organization/>
          </author>
          <date month="May" year="2015"/>
          <abstract>
            <t>This specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol (HTTP), referred to as HTTP version 2 (HTTP/2).  HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection.  It also introduces unsolicited push of representations from servers to clients.</t>
            <t>This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax.  HTTP's existing semantics remain unchanged.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7540"/>
        <seriesInfo name="DOI" value="10.17487/RFC7540"/>
      </reference>
      <reference anchor="RFC7826">
        <front>
          <title>Real-Time Streaming Protocol Version 2.0</title>
          <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne">
            <organization/>
          </author>
          <author fullname="A. Rao" initials="A." surname="Rao">
            <organization/>
          </author>
          <author fullname="R. Lanphier" initials="R." surname="Lanphier">
            <organization/>
          </author>
          <author fullname="M. Westerlund" initials="M." surname="Westerlund">
            <organization/>
          </author>
          <author fullname="M. Stiemerling" initials="M." role="editor" surname="Stiemerling">
            <organization/>
          </author>
          <date month="December" year="2016"/>
          <abstract>
            <t>This memorandum defines the Real-Time Streaming Protocol (RTSP) version 2.0, which obsoletes RTSP version 1.0 defined in RFC 2326.</t>
            <t>RTSP is an application-layer protocol for the setup and control of the delivery of data with real-time properties.  RTSP provides an extensible framework to enable controlled, on-demand delivery of real-time data, such as audio and video.  Sources of data can include both live data feeds and stored clips.  This protocol is intended to control multiple data delivery sessions; provide a means for choosing delivery channels such as UDP, multicast UDP, and TCP; and provide a means for choosing delivery mechanisms based upon RTP (RFC 3550).</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7826"/>
        <seriesInfo name="DOI" value="10.17487/RFC7826"/>
      </reference>
      <reference anchor="RFC8216">
        <front>
          <title>HTTP Live Streaming</title>
          <author fullname="R. Pantos" initials="R." role="editor" surname="Pantos">
            <organization/>
          </author>
          <author fullname="W. May" initials="W." surname="May">
            <organization/>
          </author>
          <date month="August" year="2017"/>
          <abstract>
            <t>This document describes a protocol for transferring unbounded streams of multimedia data.  It specifies the data format of the files and the actions to be taken by the server (sender) and the clients (receivers) of the streams.  It describes version 7 of this protocol.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8216"/>
        <seriesInfo name="DOI" value="10.17487/RFC8216"/>
      </reference>
      <reference anchor="RFC8298">
        <front>
          <title>Self-Clocked Rate Adaptation for Multimedia</title>
          <author fullname="I. Johansson" initials="I." surname="Johansson">
            <organization/>
          </author>
          <author fullname="Z. Sarker" initials="Z." surname="Sarker">
            <organization/>
          </author>
          <date month="December" year="2017"/>
          <abstract>
            <t>This memo describes a rate adaptation algorithm for conversational media services such as interactive video.  The solution conforms to the packet conservation principle and uses a hybrid loss-and-delay- based congestion control algorithm.  The algorithm is evaluated over both simulated Internet bottleneck scenarios as well as in a Long Term Evolution (LTE) system simulator and is shown to achieve both low latency and high video throughput in these scenarios.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8298"/>
        <seriesInfo name="DOI" value="10.17487/RFC8298"/>
      </reference>
      <reference anchor="RFC8312">
        <front>
          <title>CUBIC for Fast Long-Distance Networks</title>
          <author fullname="I. Rhee" initials="I." surname="Rhee">
            <organization/>
          </author>
          <author fullname="L. Xu" initials="L." surname="Xu">
            <organization/>
          </author>
          <author fullname="S. Ha" initials="S." surname="Ha">
            <organization/>
          </author>
          <author fullname="A. Zimmermann" initials="A." surname="Zimmermann">
            <organization/>
          </author>
          <author fullname="L. Eggert" initials="L." surname="Eggert">
            <organization/>
          </author>
          <author fullname="R. Scheffenegger" initials="R." surname="Scheffenegger">
            <organization/>
          </author>
          <date month="February" year="2018"/>
          <abstract>
            <t>CUBIC is an extension to the current TCP standards.  It differs from the current TCP standards only in the congestion control algorithm on the sender side.  In particular, it uses a cubic function instead of a linear window increase function of the current TCP standards to improve scalability and stability under fast and long-distance networks.  CUBIC and its predecessor algorithm have been adopted as defaults by Linux and have been used for many years.  This document provides a specification of CUBIC to enable third-party implementations and to solicit community feedback through experimentation on the performance of CUBIC.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8312"/>
        <seriesInfo name="DOI" value="10.17487/RFC8312"/>
      </reference>
      <reference anchor="RFC8836">
        <front>
          <title>Congestion Control Requirements for Interactive Real-Time Media</title>
          <author fullname="R. Jesup" initials="R." surname="Jesup">
            <organization/>
          </author>
          <author fullname="Z. Sarker" initials="Z." role="editor" surname="Sarker">
            <organization/>
          </author>
          <date month="January" year="2021"/>
          <abstract>
            <t>Congestion control is needed for all data transported across the Internet, in order to promote fair usage and prevent congestion collapse. The requirements for interactive, point-to-point real-time multimedia, which needs low-delay, semi-reliable data delivery, are different from the requirements for bulk transfer like FTP or bursty transfers like web pages. Due to an increasing amount of RTP-based real-time media traffic on the Internet (e.g., with the introduction of the Web Real-Time Communication (WebRTC)), it is especially important to ensure that this kind of traffic is congestion controlled.</t>
            <t>This document describes a set of requirements that can be used to evaluate other congestion control mechanisms in order to figure out their fitness for this purpose, and in particular to provide a set of possible requirements for a real-time media congestion avoidance technique.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8836"/>
        <seriesInfo name="DOI" value="10.17487/RFC8836"/>
      </reference>
      <reference anchor="RFC9000">
        <front>
          <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
          <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar">
            <organization/>
          </author>
          <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson">
            <organization/>
          </author>
          <date month="May" year="2021"/>
          <abstract>
            <t>This document defines the core of the QUIC transport protocol.  QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances.  Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9000"/>
        <seriesInfo name="DOI" value="10.17487/RFC9000"/>
      </reference>
      <reference anchor="RFC9002">
        <front>
          <title>QUIC Loss Detection and Congestion Control</title>
          <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar">
            <organization/>
          </author>
          <author fullname="I. Swett" initials="I." role="editor" surname="Swett">
            <organization/>
          </author>
          <date month="May" year="2021"/>
          <abstract>
            <t>This document describes loss detection and congestion control mechanisms for QUIC.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9002"/>
        <seriesInfo name="DOI" value="10.17487/RFC9002"/>
      </reference>
      <reference anchor="RFC8698">
        <front>
          <title>Network-Assisted Dynamic Adaptation (NADA): A Unified Congestion Control Scheme for Real-Time Media</title>
          <author fullname="X. Zhu" initials="X." surname="Zhu">
            <organization/>
          </author>
          <author fullname="R. Pan" initials="R." surname="Pan">
            <organization/>
          </author>
          <author fullname="M. Ramalho" initials="M." surname="Ramalho">
            <organization/>
          </author>
          <author fullname="S. Mena" initials="S." surname="Mena">
            <organization/>
          </author>
          <date month="February" year="2020"/>
          <abstract>
            <t>This document describes Network-Assisted Dynamic Adaptation (NADA), a novel congestion control scheme for interactive real-time media applications such as video conferencing. In the proposed scheme, the sender regulates its sending rate, based on either implicit or explicit congestion signaling, in a unified approach. The scheme can benefit from Explicit Congestion Notification (ECN) markings from network nodes. It also maintains consistent sender behavior in the absence of such markings by reacting to queuing delays and packet losses instead.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8698"/>
        <seriesInfo name="DOI" value="10.17487/RFC8698"/>
      </reference>
      <reference anchor="I-D.draft-cardwell-iccrg-bbr-congestion-control">
        <front>
          <title>BBR Congestion Control</title>
          <author fullname="Neal Cardwell">
            <organization>Google</organization>
          </author>
          <author fullname="Yuchung Cheng">
            <organization>Google</organization>
          </author>
          <author fullname="Soheil Hassas Yeganeh">
            <organization>Google</organization>
          </author>
          <author fullname="Ian Swett">
            <organization>Google</organization>
          </author>
          <author fullname="Van Jacobson">
            <organization>Google</organization>
          </author>
          <date day="7" month="November" year="2021"/>
          <abstract>
            <t>   This document specifies the BBR congestion control algorithm.  BBR
   ("Bottleneck Bandwidth and Round-trip propagation time") uses recent
   measurements of a transport connection's delivery rate, round-trip
   time, and packet loss rate to build an explicit model of the network
   path.  BBR then uses this model to control both how fast it sends
   data and the maximum volume of data it allows in flight in the
   network at any time.  Relative to loss-based congestion control
   algorithms such as Reno [RFC5681] or CUBIC [RFC8312], BBR offers
   substantially higher throughput for bottlenecks with shallow buffers
   or random losses, and substantially lower queueing delays for
   bottlenecks with deep buffers (avoiding "bufferbloat").  BBR can be
   implemented in any transport protocol that supports packet-delivery
   acknowledgment.  Thus far, open source implementations are available
   for TCP [RFC793] and QUIC [RFC9000].  This document specifies version
   2 of the BBR algorithm, also sometimes referred to as BBRv2 or bbr2.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-cardwell-iccrg-bbr-congestion-control-01"/>
      </reference>
      <reference anchor="I-D.draft-engelbart-rtp-over-quic">
        <front>
          <title>RTP over QUIC</title>
          <author fullname="Jörg Ott">
            <organization>Technical University Munich</organization>
          </author>
          <author fullname="Mathis Engelbart">
            <organization>Technical University Munich</organization>
          </author>
          <date day="25" month="October" year="2021"/>
          <abstract>
            <t>   This document specifies a minimal mapping for encapsulating RTP and
   RTCP packets within QUIC.  It also discusses how to leverage state
   from the QUIC implementation in the endpoints to reduce the exchange
   of RTCP packets.
            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-engelbart-rtp-over-quic-01"/>
      </reference>
      <reference anchor="I-D.draft-hurst-quic-rtp-tunnelling">
        <front>
          <title>QRT: QUIC RTP Tunnelling</title>
          <author fullname="Sam Hurst">
          </author>
          <date day="28" month="January" year="2021"/>
          <abstract>
            <t>   QUIC is a UDP-based transport protocol for stream-orientated,
   congestion-controlled, secure, multiplexed data transfer.  RTP
   carries real-time data between endpoints, and the accompanying
   control protocol RTCP allows monitoring and control of the transfer
   of such data.  With RTP and RTCP being agnostic to the underlying
   transport protocol, it is possible to multiplex both the RTP and
   associated RTCP flows into a single QUIC connection to take advantage
   of QUIC features such as low-latency setup and strong TLS-based
   security.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-hurst-quic-rtp-tunnelling-01"/>
      </reference>
      <reference anchor="I-D.draft-kpugin-rush">
        <front>
          <title>RUSH - Reliable (unreliable) streaming protocol</title>
          <author fullname="Kirill Pugin">
            <organization>Facebook</organization>
          </author>
          <author fullname="Alan Frindell">
            <organization>Facebook</organization>
          </author>
          <author fullname="Jordi Cenzano">
            <organization>Facebook</organization>
          </author>
          <author fullname="Jake Weissman">
            <organization>Facebook</organization>
          </author>
          <date day="12" month="July" year="2021"/>
          <abstract>
            <t>   RUSH is an application-level protocol for ingesting live video.  This
   document describes core of the protocol and how it maps onto QUIC

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-kpugin-rush-00"/>
      </reference>
      <reference anchor="I-D.draft-lcurley-warp">
        <front>
          <title>Warp - Segmented Live Video Transport</title>
          <author fullname="Luke Curley">
            <organization>Twitch</organization>
          </author>
          <date day="9" month="February" year="2022"/>
          <abstract>
            <t>   This document defines the core behavior for Warp, a segmented live
   video transport protocol.  Warp maps live media to QUIC streams based
   on the underlying media encoding.  Media is prioritized to minimize
   latency during congestion.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-lcurley-warp-00"/>
      </reference>
      <reference anchor="I-D.draft-rtpfolks-quic-rtp-over-quic">
        <front>
          <title>RTP over QUIC</title>
          <author fullname="Jörg Ott">
            <organization>Technische Universitaet Muenchen</organization>
          </author>
          <author fullname="Roni Even">
            <organization>Huawei</organization>
          </author>
          <author fullname="Colin Perkins">
            <organization>University of Glasgow</organization>
          </author>
          <author fullname="Varun Singh">
            <organization>CALLSTATS I/O Oy</organization>
          </author>
          <date day="1" month="September" year="2017"/>
          <abstract>
            <t>   QUIC is a UDP-based protocol for congestion controlled reliable data
   transfer, while RTP serves carrying (conversational) real-time media
   over UDP.  This draft discusses design aspects and issues of carrying
   RTP over QUIC.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-rtpfolks-quic-rtp-over-quic-01"/>
      </reference>
      <reference anchor="I-D.draft-sharabayko-srt">
        <front>
          <title>The SRT Protocol</title>
          <author fullname="Maria Sharabayko">
            <organization>Haivision Network Video</organization>
          </author>
          <author fullname="Maxim Sharabayko">
            <organization>Haivision Network Video</organization>
          </author>
          <author fullname="Jean Dube">
            <organization>Haivision Systems</organization>
          </author>
          <author fullname="Joonwoong Kim">
            <organization>SK Telecom Co., Ltd.</organization>
          </author>
          <author fullname="Jeongseok Kim">
            <organization>SK Telecom Co., Ltd.</organization>
          </author>
          <date day="7" month="September" year="2021"/>
          <abstract>
            <t>   This document specifies Secure Reliable Transport (SRT) protocol.
   SRT is a user-level protocol over User Datagram Protocol and provides
   reliability and security optimized for low latency live video
   streaming, as well as generic bulk data transfer.  For this, SRT
   introduces control packet extension, improved flow control, enhanced
   congestion control and a mechanism for data encryption.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-sharabayko-srt-01"/>
      </reference>
      <reference anchor="I-D.draft-sharabayko-srt-over-quic">
        <front>
          <title>Tunnelling SRT over QUIC</title>
          <author fullname="Maxim Sharabayko">
            <organization>Haivision Network Video GmbH</organization>
          </author>
          <author fullname="Maria Sharabayko">
            <organization>Haivision Network Video GmbH</organization>
          </author>
          <date day="28" month="July" year="2021"/>
          <abstract>
            <t>   This document presents an approach to tunnelling SRT live streams
   over QUIC datagrams.

   QUIC [RFC9000] is a UDP-based transport protocol providing TLS
   encryption, stream multiplexing, and connection migration.  It was
   designed to become a faster alternative to the TCP protocol
   [RFC7323].

   An Unreliable Datagram Extension to QUIC [QUIC-DATAGRAM] adds support
   for sending and receiving unreliable datagrams over a QUIC
   connection, but transfers the responsibility for multiplexing
   different kinds of datagrams, or flows of datagrams, to an
   application protocol.

   SRT [SRTRFC] is a UDP-based transport protocol.  Essentially, it can
   operate over any unreliable datagram transport.  SRT provides loss
   recovery and stream multiplexing mechanisms.  In its live streaming
   configuration SRT provides an end-to-end latency-aware mechanism for
   packet loss recovery.  If SRT fails to recover a packet loss within a
   specified latency, then the packet is dropped to avoid blocking
   playback of further packets.

   The Datagram Extension to QUIC could be used as an underlying
   transport instead of UDP.  This way QUIC would provide TLS-level
   security, connection migration, and potentially multi-path support.
   It would be easier for existing network facilities to process, route,
   and load balance the unified QUIC traffic.  SRT on its side would
   provide end-to-end latency tracking and latency-aware loss recovery.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-sharabayko-srt-over-quic-00"/>
      </reference>
      <reference anchor="I-D.draft-ietf-mops-streaming-opcons">
        <front>
          <title>Operational Considerations for Streaming Media</title>
          <author fullname="Jake Holland">
            <organization>Akamai Technologies, Inc.</organization>
          </author>
          <author fullname="Ali Begen">
            <organization>Networked Media</organization>
          </author>
          <author fullname="Spencer Dawkins">
            <organization>Tencent America LLC</organization>
          </author>
          <date day="1" month="March" year="2022"/>
          <abstract>
            <t>   This document provides an overview of operational networking issues
   that pertain to quality of experience when streaming video and other
   high-bitrate media over the Internet.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-mops-streaming-opcons-09"/>
      </reference>
      <reference anchor="I-D.draft-ietf-quic-datagram">
        <front>
          <title>An Unreliable Datagram Extension to QUIC</title>
          <author fullname="Tommy Pauly">
            <organization>Apple Inc.</organization>
          </author>
          <author fullname="Eric Kinnear">
            <organization>Apple Inc.</organization>
          </author>
          <author fullname="David Schinazi">
            <organization>Google LLC</organization>
          </author>
          <date day="4" month="February" year="2022"/>
          <abstract>
            <t>   This document defines an extension to the QUIC transport protocol to
   add support for sending and receiving unreliable datagrams over a
   QUIC connection.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-quic-datagram-10"/>
      </reference>
      <reference anchor="I-D.draft-ietf-quic-http">
        <front>
          <title>Hypertext Transfer Protocol Version 3 (HTTP/3)</title>
          <author fullname="Mike Bishop">
            <organization>Akamai</organization>
          </author>
          <date day="2" month="February" year="2021"/>
          <abstract>
            <t>The QUIC transport protocol has several features that are desirable
   in a transport for HTTP, such as stream multiplexing, per-stream flow
   control, and low-latency connection establishment.  This document
   describes a mapping of HTTP semantics over QUIC.  This document also
   identifies HTTP/2 features that are subsumed by QUIC, and describes
   how HTTP/2 extensions can be ported to HTTP/3.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-quic-http-34"/>
      </reference>
      <reference anchor="I-D.draft-ietf-quic-multipath">
        <front>
          <title>Multipath Extension for QUIC</title>
          <author fullname="Yanmei Liu">
            <organization>Alibaba Inc.</organization>
          </author>
          <author fullname="Yunfei Ma">
            <organization>Alibaba Inc.</organization>
          </author>
          <author fullname="Quentin De Coninck">
            <organization>UCLouvain</organization>
          </author>
          <author fullname="Olivier Bonaventure">
            <organization>UCLouvain</organization>
          </author>
          <author fullname="Christian Huitema">
            <organization>Private Octopus Inc.</organization>
          </author>
          <author fullname="Mirja Kuehlewind">
            <organization>Ericsson</organization>
          </author>
          <date day="2" month="February" year="2022"/>
          <abstract>
            <t>   This document specifies a multipath extension for the QUIC protocol
   to enable the simultaneous usage of multiple paths for a single
   connection.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-quic-multipath-00"/>
      </reference>
      <reference anchor="I-D.draft-ietf-webtrans-overview">
        <front>
          <title>The WebTransport Protocol Framework</title>
          <author fullname="Victor Vasiliev">
            <organization>Google</organization>
          </author>
          <date day="28" month="July" year="2021"/>
          <abstract>
            <t>   The WebTransport Protocol Framework enables clients constrained by
   the Web security model to communicate with a remote server using a
   secure multiplexed transport.  It consists of a set of individual
   protocols that are safe to expose to untrusted applications, combined
   with a model that allows them to be used interchangeably.

   This document defines the overall requirements on the protocols used
   in WebTransport, as well as the common features of the protocols,
   support for some of which may be optional.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-webtrans-overview-02"/>
      </reference>
      <reference anchor="AVTCORE-2022-02" target="https://datatracker.ietf.org/meeting/interim-2022-avtcore-01/session/avtcore">
        <front>
          <title>AVTCORE 2022-02 interim meeting materials</title>
          <author>
            <organization/>
          </author>
          <date year="2022" month="February"/>
        </front>
      </reference>
      <reference anchor="MOQ-ml" target="https://www.ietf.org/mailman/listinfo/moq">
        <front>
          <title>Moq -- Media over QUIC</title>
          <author>
            <organization/>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="DASH" target="https://www.iso.org/standard/79329.html">
        <front>
          <title>ISO/IEC 23009-1:2019: Dynamic adaptive streaming over HTTP (DASH) -- Part 1: Media presentation description and segment formats (2nd edition)</title>
          <author>
            <organization/>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="ISOBMFF" target="https://www.iso.org/standard/83102.html">
        <front>
          <title>ISO/IEC 14496-12:2022 Information technology — Coding of audio-visual objects — Part 12: ISO base media file format</title>
          <author>
            <organization/>
          </author>
          <date year="2022" month="January"/>
        </front>
      </reference>
      <reference anchor="WebRTC" target="https://www.w3.org/groups/wg/webrtc">
        <front>
          <title>Web Real-Time Communications Working Group</title>
          <author>
            <organization/>
          </author>
          <date>n.d.</date>
        </front>
      </reference>
      <reference anchor="QUIC-goals" target="https://datatracker.ietf.org/doc/charter-ietf-quic/01/">
        <front>
          <title>Initial Charter for QUIC Working Group</title>
          <author>
            <organization/>
          </author>
          <date year="2016" month="October"/>
        </front>
      </reference>
    </references>
    <section anchor="acknowledgements">
      <name>Acknowledgements</name>
      <t>The authors would like to thank the many authors of the specifications referenced in <xref target="priorart"/> for their work:</t>
      <ul spacing="compact">
        <li>Alan Frindell</li>
        <li>Colin Perkins</li>
        <li>Jake Weissman</li>
        <li>Joerg Ott</li>
        <li>Jordi Cenzano</li>
        <li>Kirill Pugin</li>
        <li>Maria Sharabayko</li>
        <li>Mathis Engelbart</li>
        <li>Maxim Sharabayko</li>
        <li>Roni Even</li>
        <li>Sam Hurst</li>
        <li>Varun Singh</li>
      </ul>
      <t>The authors would like to thank Alan Frindell, Luke Curley, and Maxim Sharabayko for text contributions to this draft.</t>
      <t>James Gruessing would also like to thank Francesco Illy and Nicholas Book for their part in providing the needed motivation.</t>
    </section>
  </back>

</rfc>
