<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.6.17 (Ruby 3.1.2) -->
<?rfc tocindent="yes"?>
<?rfc strict="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-gruessing-moq-requirements-03" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.15.2 -->
  <front>
    <title abbrev="MoQ Use Cases and Requirements">Media Over QUIC - Use Cases and Requirements for Media Transport Protocol Design</title>
    <seriesInfo name="Internet-Draft" value="draft-gruessing-moq-requirements-03"/>
    <author initials="J." surname="Gruessing" fullname="James Gruessing">
      <organization>Nederlandse Publieke Omroep</organization>
      <address>
        <postal>
          <country>Netherlands</country>
        </postal>
        <email>james.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="S." surname="Dawkins" fullname="Spencer Dawkins">
      <organization>Tencent America LLC</organization>
      <address>
        <postal>
          <country>United States of America</country>
        </postal>
        <email>spencerdawkins.ietf@gmail.com</email>
      </address>
    </author>
    <date year="2022" month="November" day="07"/>
    <area>applications</area>
    <workgroup>MOQ Mailing List</workgroup>
    <keyword>Internet-Draft QUIC</keyword>
    <abstract>
      <t>This document describes use cases and requirements that guide the specification of a simple, low-latency media delivery solution for ingest and distribution, using either the QUIC protocol or WebTransport.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <t><em>RFC Editor: please remove this section before publication</em></t>
      <t>Source code and issues for this draft can be found at
<eref target="https://github.com/fiestajetsam/draft-gruessing-moq-requirements">https://github.com/fiestajetsam/draft-gruessing-moq-requirements</eref>.</t>
      <t>Discussion of this draft should take place on the IETF Media Over QUIC (MoQ)
mailing list, at <eref target="https://www.ietf.org/mailman/listinfo/moq">https://www.ietf.org/mailman/listinfo/moq</eref>.</t>
    </note>
  </front>
  <middle>
    <section anchor="intro">
      <name>Introduction</name>
      <t>This document describes use cases and requirements that guide the specification of a simple, low-latency media delivery solution for ingest and distribution <xref target="MOQ-charter"/>, using either the QUIC protocol <xref target="RFC9000"/> or WebTransport <xref target="WebTrans-charter"/>.</t>
      <section anchor="note-for-moq-working-group-participants">
        <name>Note for MOQ Working Group participants</name>
        <t>This version of the document is intended to provide the MOQ working group with a starting point for work on the "Use Cases and Requirements document" milestone. The update implements the work plan described in <xref target="MOQ-ucr"/>. The authors intend to request MOQ working group adoption after IETF 115, so the working group can begin to focus on these topics in earnest.</t>
      </section>
    </section>
    <section anchor="term">
      <name>Terminology</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
    </section>
    <section anchor="overallusecases">
      <name>Use Cases Informing This Proposal</name>
      <t>Our goal in this section is to understand the range of use cases that are in scope for "Media Over QUIC" <xref target="MOQ-charter"/>.</t>
      <t>For each use case in this section, we also describe</t>
      <ul spacing="compact">
        <li>the number of senders or receivers in a given session transmitting distinct streams,</li>
        <li>whether a session has bi-directional flows of media from senders and receivers, and</li>
        <li>the worst-case expected RTT requirements.</li>
      </ul>
      <t>We will likely add other characteristics as we come to understand them.</t>
      <section anchor="interact">
        <name>Interactive Media</name>
        <t>The use cases described in this section have one particular attribute in common: the target latency for these cases is on the order of one or two RTTs. To meet those targets, it is not possible to rely on protocol mechanisms that require multiple RTTs to function effectively. For example,</t>
        <ul spacing="compact">
          <li>When the target latency is on the order of one RTT, it makes sense to use FEC <xref target="RFC6363"/> and codec-level packet loss concealment <xref target="RFC6716"/>, rather than selectively retransmitting only lost packets. These mechanisms use more bytes, but do not require multiple RTTs in order to recover from packet loss.</li>
          <li>When the target latency is on the order of one RTT, it is impossible to use congestion control schemes like BBR <xref target="I-D.draft-cardwell-iccrg-bbr-congestion-control"/>, since BBR's probing mechanisms rely on temporarily inducing delay and then amortizing the consequences of that induced delay over multiple RTTs.</li>
        </ul>
        <t>This may help to explain why interactive use cases have typically relied on protocols such as RTP <xref target="RFC3550"/>, which provide low-level control of packetization and transmission, and make no provision for retransmission.</t>
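        <t>As a back-of-the-envelope illustration of this trade-off (a non-normative sketch; the 50 ms RTT and 10 ms FEC block duration are assumed numbers, not requirements), recovering a loss by retransmission costs roughly an extra round trip and a half, while FEC repairs from redundancy already in flight:</t>
        <sourcecode type="python"><![CDATA[
# Illustrative latency arithmetic only; all numbers are assumptions.

def retransmit_recovery_ms(rtt_ms: float) -> float:
    """Receiver detects the loss, sends a NACK, and waits for the
    repair packet: roughly 1.5 RTT of added delay."""
    return 1.5 * rtt_ms

def fec_recovery_ms(fec_block_ms: float = 10.0) -> float:
    """FEC repairs from redundancy already in flight; the added delay
    is bounded by the (assumed) FEC block duration, not the RTT."""
    return fec_block_ms

rtt_ms = 50.0
budget_ms = 1.0 * rtt_ms   # a one-RTT target latency

assert retransmit_recovery_ms(rtt_ms) > budget_ms   # 75 ms: budget blown
assert fec_recovery_ms() < budget_ms                # 10 ms: within budget
]]></sourcecode>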
        <section anchor="gaming">
          <name>Gaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received, and user inputs are sent, by the client. This may also
include the client receiving other types of signaling, such as triggers for
haptic feedback, and may also carry media from the client, such as microphone
audio for in-game chat with other players.</t>
        </section>
        <section anchor="remdesk">
          <name>Remote Desktop</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received, and user inputs are sent, by the client. Latency
requirements for this use case differ only marginally from the gaming use
case. This may also include signalling and/or transmission of files or devices
connected to the user's computer.</t>
        </section>
        <section anchor="vidconf">
          <name>Video Conferencing/Telephony</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">Many to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-50 to Ull-200</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is both sent and received. This may include audio from
microphone(s) or other inputs, or "screen sharing" and other content such as a
slide, document, or video presentation. This may be done client/server, or
peer-to-peer with a many-to-many relationship of senders and receivers. The
latency target may be as large as Ull-200 for some media types such as audio,
but other media types in this use case have much more stringent latency
targets.</t>
        </section>
      </section>
      <section anchor="hybrid-interactive-and-live-media">
        <name>Hybrid Interactive and Live Media</name>
        <t>For the video conferencing/telephony use case, there can be additional scenarios
where the audience greatly outnumbers the concurrent active participants, but
any member of the audience could participate. Because this combines a much
larger total number of participants - as many as Live Media Streaming
(<xref target="lmstream"/>) - with the bi-directionality of conferencing, it
should be considered a "hybrid".</t>
      </section>
      <section anchor="lm-media">
        <name>Live Media</name>
        <t>The use cases in this section, unlike the use cases described in <xref target="interact"/>, still have "humans in the loop", but these humans expect media to be "responsive", where the responsiveness is more on the order of 5 to 10 RTTs. This allows the use of protocol mechanisms that require more than one or two RTTs - as noted in <xref target="interact"/>, end-to-end recovery from packet loss and congestion avoidance are two such protocol mechanisms that can be used with Live Media.</t>
        <t>To illustrate the difference, the responsiveness expected with videoconferencing is much greater than watching a video, even if the video is being produced "live" and sent to a platform for syndication and distribution.</t>
        <section anchor="lmingest">
          <name>Live Media Ingest</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a source for onward handling into a distribution
platform. The media may comprise multiple audio and/or video sources.
Bitrates may either be static or set dynamically through signaling of connection
information (bandwidth, latency) based on data sent by the receiver.</t>
        </section>
        <section anchor="lmsynd">
          <name>Live Media Syndication</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to One</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is sent onwards to another platform for further distribution. The
media may be compressed to a bitrate lower than the source, but higher than the
final distribution output. Streams may be redundant, with failover mechanisms in
place.</t>
        </section>
        <section anchor="lmstream">
          <name>Live Media Streaming</name>
          <table>
            <thead>
              <tr>
                <th align="left">Attribute</th>
                <th align="left">Value</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>Senders/Receivers</strong></td>
                <td align="left">One to Many</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Bi-directional</strong></td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>Latency</strong></td>
                <td align="left">Ull-200 to Ultra-Low</td>
              </tr>
            </tbody>
          </table>
          <t>Where media is received from a live broadcast or stream. This may comprise
multiple audio or video outputs with different codecs or bitrates. It may also
include other types of media essence, such as subtitles or timing signalling
information (e.g. markers to indicate a change of behaviour in the client, such
as advertisement breaks). This use case may also support "live rewind", where a
window of media behind the live edge is made available for clients to play
back, either because the local player falls behind the live edge or because the
viewer wishes to play back from a point in the past.</t>
        </section>
      </section>
    </section>
    <section anchor="req-sec">
      <name>Requirements for Protocol Work</name>
      <t>Our goal in this section is to understand the requirements that result from the use cases described in <xref target="overallusecases"/>.</t>
      <t>Note: the initial high-level organization for this section is taken from Suhas Nandakumar's presentation, "Progressing MOQ" <xref target="Prog-MOQ"/>, at the October 2022 MOQ virtual interim meeting, which was in turn taken from the MOQ working group charter <xref target="MOQ-charter"/>. We think this is a reasonable starting point. We won't be surprised to see the high-level structure change a bit as things develop, but we didn't want to have this section COMPLETELY blank when we request working group adoption.</t>
      <t>TODO: Describe overall, high level requirements that we previously stated in earlier versions of this document.</t>
      <section anchor="pub-proto">
        <name>Common Publication Protocol for Media Ingest and Distribution</name>
        <t>Many of the use cases have bi-directional flows of media, with clients both sending and receiving media concurrently. The protocol should therefore take a unified approach to connection negotiation and to signalling the sending and receiving of media, both at the start of a session and throughout its lifetime, including describing when a flow of media is unsupported (e.g. a live media server signalling that it does not support receiving media from a given client).</t>
      </section>
      <section anchor="media-request">
        <name>Client Media Request Protocol</name>
        <t>When initiating a session, client and server must negotiate a variety of details before media can move in either direction:</t>
        <ul spacing="compact">
          <li>Is the client authenticated and subsequently authorised to initiate a connection?</li>
          <li>What media is available, and, for each item, what are its parameters, such as codec, bitrate, and resolution?</li>
          <li>Is sending of media from a client permitted? If so, what media is accepted?</li>
        </ul>
        <t>Re-negotiation within an existing session should be supported, to allow changes in what is being sent or received.</t>
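        <t>The negotiation steps above might be sketched as follows (a non-normative illustration; every message and field name here is hypothetical, not part of any MOQ protocol):</t>
        <sourcecode type="python"><![CDATA[
# Hypothetical session negotiation; all field names are invented.

client_request = {
    "auth_token": "example-token",   # is the client authenticated/authorised?
    "wants_to_send": True,           # client asks to publish media
}

server_answer = {
    "authorised": True,
    "catalog": [                     # what media is available, and with
        {"name": "camera-main",      # what parameters?
         "codec": "av1", "bitrate_kbps": 4000, "resolution": "1920x1080"},
        {"name": "audio-main",
         "codec": "opus", "bitrate_kbps": 128},
    ],
    "accepts_client_media": False,   # sending from this client not permitted
}

def may_send_media(answer: dict) -> bool:
    """A client may only publish if it is authorised and the server
    accepts media from it."""
    return answer["authorised"] and answer["accepts_client_media"]

assert may_send_media(server_answer) is False
]]></sourcecode>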
      </section>
      <section anchor="naming">
        <name>Naming and Addressing Media Resources</name>
        <t>As multiple streams of media may be available for concurrent sending, such as multiple camera views or audio tracks, the protocol should include a means of identifying both the technical properties of each resource (codec, bitrate, etc.) and a useful identifier for playback. A base level of optional metadata, e.g. the known language of an audio track or the name of a participant's camera, should be supported, but further extended metadata describing the contents of the media or its ontology should not be.</t>
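        <t>One way to picture this split between technical properties and base-level optional metadata (a sketch only; the field names are invented, not proposed protocol elements):</t>
        <sourcecode type="python"><![CDATA[
# Hypothetical media-resource descriptor; field names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaResource:
    # Technical properties needed to select and decode the resource.
    track_id: str
    codec: str
    bitrate_kbps: int
    # Base-level optional metadata for playback/presentation.
    language: Optional[str] = None   # e.g. known language of an audio track
    label: Optional[str] = None      # e.g. name of a participant's camera
    # Extended content metadata or ontology is deliberately not modelled.

resources = [
    MediaResource("a1", "opus", 128, language="nl"),
    MediaResource("v1", "av1", 4000, label="Stage camera"),
]
]]></sourcecode>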
      </section>
      <section anchor="Packaging">
        <name>Packaging Media</name>
        <t>Packaging describes how raw media is encapsulated for carriage. At a high level, there are two approaches:</t>
        <ul spacing="compact">
          <li>Within the protocol itself, where the protocol defines, for each media encoding, the carriage of the media and of the ancillary data required to decode it.</li>
          <li>A common encapsulation format, such as ISOBMFF, which defines a generic method for all media and handles ancillary decode information.</li>
        </ul>
        <t>The working group must agree on which approach to take to the packaging of media, taking into consideration the technical trade-offs each approach provides.</t>
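        <t>The two approaches might be contrasted with a toy framing sketch (non-normative; neither framing reflects a real codec header or container format, and the ISOBMFF-style box below is deliberately simplified):</t>
        <sourcecode type="python"><![CDATA[
# Toy illustration of the two packaging approaches; not a real format.

def frame_in_protocol(codec: str, payload: bytes) -> bytes:
    # Approach 1: the protocol defines per-codec carriage, including
    # the ancillary data needed to decode (here, a fake 7-byte header).
    header = {"opus": b"OPUSHDR", "av1": b"AV1-HDR"}[codec]
    return header + payload

def frame_in_container(payload: bytes) -> bytes:
    # Approach 2: one generic encapsulation (in the spirit of an
    # ISOBMFF box) carrying any media uniformly: size + type + data.
    size = (8 + len(payload)).to_bytes(4, "big")
    return size + b"mdat" + payload
]]></sourcecode>
        <t>The first couples the wire format to every codec the protocol supports; the second keeps the protocol codec-agnostic at the cost of container overhead.</t>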
      </section>
      <section anchor="MOQ-security">
        <name>End-to-end Security</name>
        <t>End-to-end security describes the use of encryption of the media stream(s) to provide confidentiality in the presence of unauthorized intermediaries or observers, and to prevent or restrict the ability to decrypt the media without authorization. Generally, there are three aspects of end-to-end media security:</t>
        <ul spacing="compact">
          <li>Media Rights Management, which refers to the authorization of receivers to decode a media stream.</li>
          <li>Sender-to-Receiver Media Security, which refers to the ability of media senders and receivers to transfer media while protected from authorized intermediaries and observers, and</li>
          <li>Node-to-node Media Security, which refers to security when authorized intermediaries are needed to transform media into a form acceptable to authorized receivers. For example, this might refer to a video transcoder between the media sender and receiver.</li>
        </ul>
        <t>Note: "Node-to-node" refers to a path segment connecting two MOQ nodes, which makes up part of the end-to-end path between the MOQ sender and the ultimate MOQ receiver.</t>
        <t>The working group must agree on a number of details here. Perhaps the first question is whether the MOQ protocol makes any provision for "node-to-node" media security, or simply treats authorized transcoders as MOQ receivers. If that is the decision, all MOQ media security is "sender-to-receiver", but some "ends" may be neither senders nor ultimate receivers, from a certain point of view.</t>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document makes no requests of IANA.</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>As this document is intended to guide discussion and consensus, it introduces
no security considerations of its own.</t>
    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references>
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner">
              <organization/>
            </author>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification.  These words are often capitalized. This document defines these words as they should be interpreted in IETF documents.  This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba">
              <organization/>
            </author>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol  specifications.  This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the  defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references>
        <name>Informative References</name>
        <reference anchor="RFC3550">
          <front>
            <title>RTP: A Transport Protocol for Real-Time Applications</title>
            <author fullname="H. Schulzrinne" initials="H." surname="Schulzrinne">
              <organization/>
            </author>
            <author fullname="S. Casner" initials="S." surname="Casner">
              <organization/>
            </author>
            <author fullname="R. Frederick" initials="R." surname="Frederick">
              <organization/>
            </author>
            <author fullname="V. Jacobson" initials="V." surname="Jacobson">
              <organization/>
            </author>
            <date month="July" year="2003"/>
            <abstract>
              <t>This memorandum describes RTP, the real-time transport protocol.  RTP provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services.  RTP does not address resource reservation and does not guarantee quality-of- service for real-time services.  The data transport is augmented by a control protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality.  RTP and RTCP are designed to be independent of the underlying transport and network layers.  The protocol supports the use of RTP-level translators and mixers. Most of the text in this memorandum is identical to RFC 1889 which it obsoletes.  There are no changes in the packet formats on the wire, only changes to the rules and algorithms governing how the protocol is used. The biggest change is an enhancement to the scalable timer algorithm for calculating when to send RTCP packets in order to minimize transmission in excess of the intended rate when many participants join a session simultaneously.  [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="64"/>
          <seriesInfo name="RFC" value="3550"/>
          <seriesInfo name="DOI" value="10.17487/RFC3550"/>
        </reference>
        <reference anchor="RFC6363">
          <front>
            <title>Forward Error Correction (FEC) Framework</title>
            <author fullname="M. Watson" initials="M." surname="Watson">
              <organization/>
            </author>
            <author fullname="A. Begen" initials="A." surname="Begen">
              <organization/>
            </author>
            <author fullname="V. Roca" initials="V." surname="Roca">
              <organization/>
            </author>
            <date month="October" year="2011"/>
            <abstract>
              <t>This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss.  The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media.  This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows.  Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) that is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined that are not specific to a particular FEC scheme, and FEC schemes can be defined that are not specific to a particular Content Delivery Protocol.   [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6363"/>
          <seriesInfo name="DOI" value="10.17487/RFC6363"/>
        </reference>
        <reference anchor="RFC6716">
          <front>
            <title>Definition of the Opus Audio Codec</title>
            <author fullname="JM. Valin" initials="JM." surname="Valin">
              <organization/>
            </author>
            <author fullname="K. Vos" initials="K." surname="Vos">
              <organization/>
            </author>
            <author fullname="T. Terriberry" initials="T." surname="Terriberry">
              <organization/>
            </author>
            <date month="September" year="2012"/>
            <abstract>
              <t>This document defines the Opus interactive speech and audio codec. Opus is designed to handle a wide range of interactive audio applications, including Voice over IP, videoconferencing, in-game chat, and even live, distributed music performances.  It scales from low bitrate narrowband speech at 6 kbit/s to very high quality stereo music at 510 kbit/s.  Opus uses both Linear Prediction (LP) and the Modified Discrete Cosine Transform (MDCT) to achieve good compression of both speech and music.  [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6716"/>
          <seriesInfo name="DOI" value="10.17487/RFC6716"/>
        </reference>
        <reference anchor="RFC9000">
          <front>
            <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar">
              <organization/>
            </author>
            <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson">
              <organization/>
            </author>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document defines the core of the QUIC transport protocol.  QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances.  Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9000"/>
          <seriesInfo name="DOI" value="10.17487/RFC9000"/>
        </reference>
        <reference anchor="I-D.draft-cardwell-iccrg-bbr-congestion-control">
          <front>
            <title>BBR Congestion Control</title>
            <author fullname="Neal Cardwell" initials="N." surname="Cardwell">
              <organization>Google</organization>
            </author>
            <author fullname="Yuchung Cheng" initials="Y." surname="Cheng">
              <organization>Google</organization>
            </author>
            <author fullname="Soheil Hassas Yeganeh" initials="S. H." surname="Yeganeh">
              <organization>Google</organization>
            </author>
            <author fullname="Ian Swett" initials="I." surname="Swett">
              <organization>Google</organization>
            </author>
            <author fullname="Van Jacobson" initials="V." surname="Jacobson">
              <organization>Google</organization>
            </author>
            <date day="7" month="March" year="2022"/>
            <abstract>
              <t>This document specifies the BBR congestion control algorithm. BBR ("Bottleneck Bandwidth and Round-trip propagation time") uses recent measurements of a transport connection's delivery rate, round-trip time, and packet loss rate to build an explicit model of the network path. BBR then uses this model to control both how fast it sends data and the maximum volume of data it allows in flight in the network at any time. Relative to loss-based congestion control algorithms such as Reno [RFC5681] or CUBIC [RFC8312], BBR offers substantially higher throughput for bottlenecks with shallow buffers or random losses, and substantially lower queueing delays for bottlenecks with deep buffers (avoiding "bufferbloat"). BBR can be implemented in any transport protocol that supports packet-delivery acknowledgment. Thus far, open source implementations are available for TCP [RFC793] and QUIC [RFC9000]. This document specifies version 2 of the BBR algorithm, also sometimes referred to as BBRv2 or bbr2.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-cardwell-iccrg-bbr-congestion-control-02"/>
        </reference>
        <reference anchor="I-D.draft-kpugin-rush">
          <front>
            <title>RUSH - Reliable (unreliable) streaming protocol</title>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Facebook</organization>
            </author>
            <author fullname="Alan Frindell" initials="A." surname="Frindell">
              <organization>Facebook</organization>
            </author>
            <author fullname="Jordi Cenzano" initials="J." surname="Cenzano">
              <organization>Facebook</organization>
            </author>
            <author fullname="Jake Weissman" initials="J." surname="Weissman">
              <organization>Facebook</organization>
            </author>
            <date day="7" month="March" year="2022"/>
            <abstract>
              <t>RUSH is an application-level protocol for ingesting live video. This document describes the protocol and how it maps onto QUIC.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-kpugin-rush-01"/>
        </reference>
        <reference anchor="I-D.draft-lcurley-warp">
          <front>
            <title>Warp - Segmented Live Media Transport</title>
            <author fullname="Luke Curley" initials="L." surname="Curley">
              <organization>Twitch</organization>
            </author>
            <author fullname="Kirill Pugin" initials="K." surname="Pugin">
              <organization>Meta</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <date day="24" month="October" year="2022"/>
            <abstract>
              <t>   This document defines the core behavior for Warp, a segmented live
   media transport protocol over QUIC.  Media is split into segments
   based on the underlying media encoding and transmitted independently
   over QUIC streams.  QUIC streams are prioritized based on the
   delivery order, allowing less important segments to be starved or
   dropped during congestion.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-lcurley-warp-02"/>
        </reference>
        <reference anchor="I-D.draft-jennings-moq-quicr-arch">
          <front>
            <title>QuicR - Media Delivery Protocol over QUIC</title>
            <author fullname="Cullen Jennings" initials="C." surname="Jennings">
              <organization>Cisco</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <date day="11" month="July" year="2022"/>
            <abstract>
              <t>   This specification outlines the design for a media delivery protocol
   over QUIC.  It aims at supporting multiple application classes with
   varying latency requirements including ultra low latency applications
   such as interactive communication and gaming.  It is based on a
   publish/subscribe metaphor where entities publish and subscribe to
   data that is sent through, and received from, relays in the cloud.
   The information subscribed to is named such that this forms an
   overlay information centric network.  The relays allow for efficient
   large scale deployments.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-arch-01"/>
        </reference>
        <reference anchor="I-D.draft-jennings-moq-quicr-proto">
          <front>
            <title>QuicR - Media Delivery Protocol over QUIC</title>
            <author fullname="Cullen Jennings" initials="C." surname="Jennings">
              <organization>Cisco</organization>
            </author>
            <author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
              <organization>Cisco</organization>
            </author>
            <author fullname="Christian Huitema" initials="C." surname="Huitema">
              <organization>Private Octopus Inc.</organization>
            </author>
            <date day="11" month="July" year="2022"/>
            <abstract>
              <t>   Recently new use cases have emerged requiring higher scalability of
   media delivery for interactive realtime applications and much lower
   latency for streaming applications and a combination thereof.

   draft-jennings-moq-arch specifies architectural aspects of QuicR, a
   media delivery protocol based on publish/subscribe metaphor and Relay
   based delivery tree, that enables a wide range of realtime
   applications with different resiliency and latency needs.

   This specification defines the protocol aspects of the QuicR media
   delivery architecture.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-jennings-moq-quicr-proto-01"/>
        </reference>
        <reference anchor="MOQ-charter" target="https://datatracker.ietf.org/wg/moq/about/">
          <front>
            <title>Media Over QUIC (moq)</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="September"/>
          </front>
        </reference>
        <reference anchor="Prog-MOQ" target="https://datatracker.ietf.org/meeting/interim-2022-moq-01/materials/slides-interim-2022-moq-01-sessa-progressing-moq-00.pdf">
          <front>
            <title>Progressing MOQ</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="October"/>
          </front>
        </reference>
        <reference anchor="MOQ-ucr" target="https://datatracker.ietf.org/meeting/interim-2022-moq-01/materials/slides-interim-2022-moq-01-sessa-moq-use-cases-and-requirements-individual-draft-working-group-draft-00">
          <front>
            <title>MOQ Use Cases and Requirements</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="October"/>
          </front>
        </reference>
        <reference anchor="WebTrans-charter" target="https://datatracker.ietf.org/wg/webtrans/about/">
          <front>
            <title>WebTransport (webtrans)</title>
            <author>
              <organization/>
            </author>
            <date year="2021" month="March"/>
          </front>
        </reference>
      </references>
    </references>
    <section anchor="acknowledgements">
      <name>Acknowledgements</name>
      <t>The authors would like to thank the authors of several individual drafts that fed into the "Media Over QUIC" charter process:</t>
      <ul spacing="compact">
        <li>Kirill Pugin, Alan Frindell, Jordi Cenzano, and Jake Weissman (<xref target="I-D.draft-kpugin-rush"/>),</li>
        <li>Luke Curley (<xref target="I-D.draft-lcurley-warp"/>), and</li>
        <li>Cullen Jennings and Suhas Nandakumar (<xref target="I-D.draft-jennings-moq-quicr-arch"/>), together with Christian Huitema (<xref target="I-D.draft-jennings-moq-quicr-proto"/>).</li>
      </ul>
      <t>We would also like to thank Suhas Nandakumar for his presentation, "Progressing MOQ" <xref target="Prog-MOQ"/>, at the October 2022 MOQ virtual interim meeting. We used his outline as a starting point for the Requirements section (<xref target="req-sec"/>).</t>
      <t>James Gruessing would also like to thank Francesco Illy and Nicholas Book for
their part in providing the needed motivation.</t>
    </section>
  </back>
</rfc>
