<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.6.14 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-cheshire-sbm-03" category="info" submissionType="independent" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="Source Buffer Management">Source Buffer Management</title>

    <author fullname="Stuart Cheshire">
      <organization>Apple Inc.</organization>
      <address>
        <email>cheshire@apple.com</email>
      </address>
    </author>

    <date year="2025" month="October" day="20"/>

    
    
    <keyword>Bufferbloat</keyword> <keyword>Latency</keyword> <keyword>Responsiveness</keyword>

    <abstract>


<t>In the past decade there has been growing awareness about the
harmful effects of bufferbloat in the network, and there has
been good work on developments like L4S to address that problem.
However, bufferbloat on the sender itself remains a significant
additional problem, which has not received similar attention.
This document offers techniques and guidance for host networking
software to avoid network traffic suffering unnecessary delays
caused by excessive buffering at the sender. These improvements
are broadly applicable across all datagram and transport
protocols (UDP, TCP, QUIC, etc.) on all operating systems.</t>



    </abstract>

    <note title="About This Document" removeInRFC="true">
      <t>
        The latest revision of this draft can be found at <eref target="https://StuartCheshire.github.io/draft-cheshire-sbm/draft-cheshire-sbm.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-cheshire-sbm/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        sbm Working Group mailing list (<eref target="mailto:sbm@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/sbm/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/StuartCheshire/draft-cheshire-sbm"/>.</t>
    </note>


  </front>

  <middle>


<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>

<t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
<section anchor="introduction"><name>Introduction</name>

<t>In 2010 Jim Gettys identified the problem
of how excessive buffering in networks adversely affects
delay-sensitive applications <xref target="Bloat1"/><xref target="Bloat2"/><xref target="Bloat3"/>.
This important work identifying a non-obvious problem
has led to valuable developments to improve this situation,
like fq_codel <xref target="RFC8290"/>, PIE <xref target="RFC8033"/>, Cake <xref target="Cake"/>
and L4S <xref target="RFC9330"/>.</t>

<t>However, excessive buffering at the source
-- in the sending devices themselves --
can equally contribute to degraded performance
for delay-sensitive applications,
and this problem has not yet received
a similar level of attention.</t>

<t>This document describes the source buffering problem,
steps that have been taken so far to address the problem,
shortcomings with those existing solutions,
and new mechanisms that work better.</t>

<t>To explain the problem and the solution,
this document begins with some historical background
about why computers have buffers in the first place,
and why buffers are useful.
This document explains the need for backpressure on
senders that are able to exceed the network capacity,
and separates backpressure mechanisms into
direct backpressure and indirect backpressure.</t>

<t>The document describes
the TCP_REPLENISH_TIME socket option
for TCP connections using BSD Sockets,
and its equivalent for other networking protocols and APIs.</t>

<t>The goal is to define a cross-platform and cross-protocol
mechanism that informs application software when it is a good
time to generate new data, and when the application software
might want to refrain from generating new data,
enabling the application software to
write chunks of data large enough to be efficient,
without writing too many of them too quickly.
This avoids the unfortunate situation where a delay-sensitive
application inadvertently writes many blocks of data
long before they will actually depart the source machine,
such that by the time the enqueued data is actually sent,
the application may have newer data that it would rather send instead.
By deferring generating data until the networking code is
actually ready to send it, the application retains more precise
control over what data will be sent when the opportunity arises.</t>

<t>The document concludes by describing some alternative
solutions that are often proposed, and explains
why we feel they are less effective than simply
implementing effective source buffer management.</t>

</section>
<section anchor="source-buffer-backpressure"><name>Source Buffer Backpressure</name>

<t>Starting with the most basic principles,
computers have always had to deal with the situation
where software is able to generate output data
faster than the physical medium can accept it.
The software may be sending data to a paper tape punch,
to an RS232 serial port (UART),
or to a printer connected via a parallel port.
The software may be writing data to a floppy disk
or a spinning hard disk.
It was self-evident to early computer designers that it would
be unacceptable for data to be lost in these cases.</t>

<section anchor="direct-backpressure"><name>Direct Backpressure</name>

<t>The early solutions were simple.
When an application wrote data to a file on a floppy disk,
the file system “write” API would not return control to the caller
until the data had actually been written to the floppy disk.
This had the natural effect of slowing down
the application so that it could not exceed
the capacity of the medium to accept the data.</t>

<t>Soon it became clear that these simple synchronous APIs
unreasonably limited the performance of the system.
If, instead, the file system “write” API
were to return to the caller immediately
-- even though the actual write to the
spinning hard disk had not yet completed --
then the application could get on with other
useful work while the actual write to the
spinning hard disk proceeded in parallel.</t>

<t>Some systems allowed a single asynchronous write
to the spinning hard disk to proceed while
the application software performed other processing.
Other systems allowed multiple asynchronous writes to be enqueued,
but even these systems generally imposed some upper bound on
the amount of outstanding incomplete writes they would support.
At some point, if the application software persisted in
trying to write data faster than the medium could accept it,
then the application would be throttled in some way,
either by making the API call a blocking call
(simply not returning control to the application,
removing its ability to do anything else)
or by returning a Unix EWOULDBLOCK error or similar
(to inform the application that its API call had
been unsuccessful, and that it would need to take
action to write its data again at a later time).</t>
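<t>The following minimal sketch (illustrative only, not taken from
any particular product) demonstrates this form of direct backpressure
in a modern POSIX environment: it writes to a non-blocking descriptor
until the kernel buffer fills and the write call fails with
EWOULDBLOCK. A local socketpair stands in for a slow device
or network connection:</t>

<sourcecode type="c"><![CDATA[
/* Direct backpressure via a non-blocking descriptor: keep
 * writing until the kernel buffer is full and write() fails
 * with EWOULDBLOCK/EAGAIN. A local socketpair stands in for
 * a real device or network connection. */
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);

    /* Make the writing end non-blocking, so a full buffer
     * yields an error return instead of a blocked caller. */
    int flags = fcntl(fds[0], F_GETFL, 0);
    assert(fcntl(fds[0], F_SETFL, flags | O_NONBLOCK) == 0);

    char chunk[4096];
    memset(chunk, 'x', sizeof(chunk));
    long total = 0;
    for (;;) {
        ssize_t n = write(fds[0], chunk, sizeof(chunk));
        if (n < 0) {
            /* The kernel buffer is full: the application is
             * being throttled by direct backpressure. */
            assert(errno == EWOULDBLOCK || errno == EAGAIN);
            break;
        }
        total += n;
    }
    printf("buffered %ld bytes, then EWOULDBLOCK\n", total);
    close(fds[0]);
    close(fds[1]);
    return 0;
}
]]></sourcecode>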

<t>It is informative to draw a comparison with graphics cards.
Most graphics cards support double-buffering.
This allows one frame to be displayed while
the CPU and GPU are working on generating the next frame.
This concurrency allows for greater efficiency,
by enabling two actions to be happening at the same time.
But quintuple-buffering is not better than double-buffering.
Having a pipeline five frames deep, or ten frames,
or fifty frames, is not better than two frames.
For a fast-paced video game, having a display pipeline fifty
frames deep, where every frame is generated, then waits in
the pipeline, and then is displayed fifty frames later,
would not improve performance or efficiency,
but would cause an unacceptable delay between
a player performing an action and
seeing the results of that action on the screen.
It is beneficial for the video game to work on preparing
the next frame while the previous frame is being displayed,
but it is not beneficial for the video game to get multiple
frames ahead of the frame currently being displayed.</t>

<t>Another reason that it is good not to permit an
excessive amount of unsent data to be queued up
is that once data is committed to a buffer,
there are generally limited options for changing it.
Some systems may provide a mechanism to flush the entire
buffer and discard all the data, but mechanisms to
selectively remove or re-order enqueued data
are complicated and rare.
While it would be possible to add such mechanisms,
on balance it is simpler to avoid committing
too much unsent data to the buffer in the first place.
If the backlog of unsent data is kept reasonably low,
the source has more flexibility to decide what to
put into the buffer next, when that opportunity arises.</t>

<t>In summary, in order to give applications maximum
flexibility, pending data should be kept as close
to the application as possible, for as long as possible.
Application buffers should be as large as needed
for the application to do its work,
and lower-layer buffers should be no larger than
is necessary to provide efficient use of available
network capacity and other resources like CPU time.</t>

</section>
<section anchor="indirect-backpressure"><name>Indirect Backpressure</name>

<t>All of the situations described above using “direct backpressure”
are one-hop communication where the CPU generating the data
is connected more-or-less directly to the device consuming the data.
In these cases it is relatively simple for the receiving device
to exert direct backpressure to influence
the rate at which the CPU sends data.</t>

<t>When we introduce multi-hop networking,
the situation becomes more complicated.
When a flow of packets travels 30 hops through
a network, the bottleneck hop may be quite distant
from the original source of the data stream.</t>

<t>For example, consider the case of
a smartphone communicating via a Wi-Fi Access Point at 600 Mb/s,
which is connected to a home NAT gateway via gigabit Ethernet,
which is connected to a cable modem via gigabit Ethernet,
which has an upstream output rate of 35 Mb/s over the coaxial cable.</t>

<figure><artwork><![CDATA[
  -----   600            1              1             35
 |  P  |  Mb/s          Gb/s           Gb/s          Mb/s
 |  h  |        ------        -------        ------
 |  o  |------>|  AP  |----->|  NAT  |----->|  CM  |------> Internet
 |  n  |        ------        -------        ------
 |  e  |                                        ^
  -----                                       Queue
    ^                                         forms
Source of data                                here
]]></artwork></figure>

<t>When the cable modem experiences
an excessive flow of incoming packets arriving
on its gigabit Ethernet interface,
the cable modem has no direct way to cause
the networking code on the smartphone to curtail
the influx of data by pausing the sending application
via blocking its write calls or delivering EWOULDBLOCK errors.
The source of the excessive flood of data
causing the problem (the smartphone)
is three network hops away from the device
experiencing the problem (the cable modem).
When an incoming packet arrives,
the cable modem’s choices are limited to
enqueueing the packet,
discarding the packet,
or enqueueing the packet and
marking it with an ECN CE (Congestion Experienced) mark <xref target="RFC3168"/> <xref target="RFC9330"/>.
The cable modem drops or marks an incoming packet
in the expectation that this will, eventually,
indirectly, cause the networking code and operating system
on the sending device to take the necessary steps
to curtail the sending application.</t>

<t>The cable modem’s choices are so limited
because of security and packet size constraints.</t>

<t>Security and trust concerns revolve around preventing a
malicious entity from performing a denial-of-service attack
against a victim device, by sending fraudulent messages that
would cause the victim to reduce its transmission rate.
It is particularly important to guard against an off-path attacker
being able to do this. This concern is addressed if queue size
feedback generated in the network follows the same path already
taken by the data packets and their subsequent acknowledgement
packets. The logic is that any on-path device that is able to
modify data packets (changing the ECN bits in the IP header)
could equally well corrupt packets or discard them entirely.
Thus, trusting ECN information from these devices does not
increase security concerns, since these devices could already
perform more damaging actions anyway. The sender already
trusts the receiver to generate accurate acknowledgement
packets, so also trusting it to report ECN information back
to the sender does not increase the security risk.</t>

<t>A consequence of this security requirement is that it takes a
full round trip time for the source to learn about queue state
in the network. In many common cases this is not a significant
deficiency. For example, if a user is receiving data from a
well-connected server on the Internet, and the network
bottleneck is the last hop on the path (e.g., the Wi-Fi hop to
the user’s smartphone in their home) then the location where
the queue is building up (the Wi-Fi Access Point) is very close
to the receiver, and having the receiver echo the queue state
information back to the sender does not add significant delay.</t>

<t>Packet size constraints, particularly scarce bits available
in the IP header, mean that for pragmatic reasons the ECN
queue size feedback is limited to two states: “The source
may try sending a little faster if desired,” and, “The
source should reduce its sending rate.” Use of these
increase/decrease indications in successive packets allows
the sender to converge on the ideal transmission rate, and
then to oscillate slightly around the ideal transmission
rate as it continues to track changing network conditions.</t>

<t>Discarding or marking an incoming packet
at some point within the network is
what we refer to as indirect backpressure,
with the assumption that these actions will eventually
result in the sending application being throttled
via having a write call blocked,
receiving an EWOULDBLOCK error,
or some other form of backpressure that
causes the source application
to temporarily pause sending new data.</t>

</section>
</section>
<section anchor="casestudy"><name>Case Study -- TCP_NOTSENT_LOWAT</name>

<t>In April 2011 the author was investigating
sluggishness with Mac OS Screen Sharing,
which uses the VNC Remote Framebuffer (RFB) protocol <xref target="RFC6143"/>.
Initially it seemed like a classic case of network bufferbloat.
However, deeper investigation revealed that in this case
the network was not responsible for the excessive delay --
the excessive delay was being caused by
excessive buffering on the sending device itself.</t>

<t>In this case the network connection was a relatively slow
DSL line (running at about 500 kb/s) and
the socket send buffer (SO_SNDBUF) was set to 128 kilobytes.
With a 50 ms round-trip time,
about 3 kilobytes (roughly two packets)
was sufficient to fill the bandwidth-delay product of the path.
The remaining 125 kilobytes available in the 128 kB socket send buffer
were simply holding bytes that had not even been sent yet.
At 500 kb/s throughput (62.5 kB/s),
this meant that every byte written by the VNC RFB server
spent two seconds sitting in the socket send buffer
before it even left the source machine.
Clearly, delaying every sent byte by two seconds
resulted in a very sluggish screen sharing experience,
and it did not yield any useful benefit like
higher throughput or lower CPU utilization.</t>

<t>This led to the creation in May 2011
of a new socket option on Mac OS and iOS
called “TCP_NOTSENT_LOWAT”.
This new socket option provided the ability for
sending software (like the VNC RFB server)
to specify a low-water-mark threshold for the
minimum amount of <strong>unsent</strong> data it would like
to have waiting in the socket send buffer.
Instead of inviting the application to
fill the socket send buffer to its maximum capacity,
the socket send buffer would hold just the data
that had been sent but not yet acknowledged
(enough to fully occupy the bandwidth-delay product
of the network path and fully utilize the available capacity)
plus some <strong>small</strong> amount of additional unsent data waiting to go out.
Some <strong>small</strong> amount of unsent data waiting to go out is
beneficial, so that the network stack has data
ready to send when the opportunity arises
(e.g., a TCP ACK arrives signalling
that previous data has now been delivered).
Too much unsent data waiting to go out
-- in excess of what the network stack
might soon be able to send --
is harmful for delay-sensitive applications
because it increases delay without
meaningfully increasing throughput or utilization.</t>

<t>Empirically it was found that setting an
unsent data low-water-mark threshold of 16 kilobytes
worked well for VNC RFB screen sharing.
When the amount of unsent data fell below this
low-water-mark threshold, kevent() would
wake up the VNC RFB screen sharing application
to begin work on preparing the next frame to send.
Once the VNC RFB screen sharing application
had prepared the next frame and written it
to the socket send buffer,
it would again call kevent() to block and wait
to be notified when it became time to begin work
on the following frame.
This allows the VNC RFB screen sharing server
to stay just one frame ahead of
the frame currently being sent over the network,
and not inadvertently get multiple frames ahead.
This provided enough unsent data waiting to go out
to fully utilize the capacity of the path,
without buffering so much unsent data
that it adversely affected usability.</t>
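<t>A minimal sketch of setting this option through the BSD Sockets
API is shown below. This is an illustrative reconstruction of typical
usage, not code from the actual Screen Sharing implementation; the
fallback constant in the sketch is the Linux value, and the Apple
value differs:</t>

<sourcecode type="c"><![CDATA[
/* Minimal sketch: setting TCP_NOTSENT_LOWAT on a TCP socket
 * using BSD Sockets. On Apple platforms this sets the unsent
 * low-water mark used for wakeups; on Linux the same name has
 * different (high-water mark) semantics, as discussed in the
 * next section. */
#include <assert.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25   /* Linux value; Apple uses 0x201 */
#endif

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    assert(fd >= 0);

    /* 16 kB worked well empirically for VNC RFB screen
     * sharing on Apple systems (see text). */
    int lowat = 16 * 1024;
    assert(setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &lowat, sizeof(lowat)) == 0);

    int value = 0;
    socklen_t len = sizeof(value);
    assert(getsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &value, &len) == 0);
    printf("TCP_NOTSENT_LOWAT = %d\n", value);
    close(fd);
    return 0;
}
]]></sourcecode>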

<t>A live on-stage demo showing the benefits of using TCP_NOTSENT_LOWAT
with VNC RFB screen sharing was shown at the
Apple Worldwide Developer Conference in June 2015 <xref target="Demo"/>.</t>

</section>
<section anchor="shortcomings-of-tcpnotsentlowat"><name>Shortcomings of TCP_NOTSENT_LOWAT</name>

<t>While TCP_NOTSENT_LOWAT achieved its initial intended goal,
later operational experience has revealed some shortcomings.</t>

<section anchor="platform-differences"><name>Platform Differences</name>

<t>The Linux network maintainers implemented a TCP
socket option with the same name, but different behavior.
While the Apple version of TCP_NOTSENT_LOWAT was
focussed on reducing delay,
the Linux version was focussed on reducing kernel memory usage.
The Apple version of TCP_NOTSENT_LOWAT controls
a low-water mark, below which the application is signalled
that it is time to begin working on generating fresh data.
The Linux version determines a high-water mark for unsent data,
above which the application is <strong>prevented</strong> from writing any more,
even if it has data prepared and ready to enqueue.
Setting TCP_NOTSENT_LOWAT to 16 kilobytes works well on Apple systems,
but can increase CPU load and severely limit throughput on Linux systems.
This has led to confusion among developers and makes it difficult
to write portable code that works on both platforms.</t>

</section>
<section anchor="time-versus-bytes"><name>Time versus Bytes</name>

<t>The original thinking on TCP_NOTSENT_LOWAT focussed on
the number of unsent bytes remaining, but it soon became
clear that the relevant quantity was time, not bytes.
The quantity of interest to the sending application
was how much advance notice it would get of impending
data exhaustion, so that it would have enough time
to generate its next logical block of data.
On low-rate paths (e.g., 250 kb/s or less)
16 kilobytes of unsent data could still result
in a fairly significant unnecessary queueing delay.
On high-rate paths (e.g., Gb/s and above)
16 kilobytes of unsent data could be consumed
very quickly, leaving the sending application
insufficient time to generate its next logical block of data
before the unsent backlog ran out
and available network capacity was left unused.
It became clear that it would be more useful for the
sending application to specify how much advance notice
of data exhaustion it required (in milliseconds, or microseconds),
depending on how much time the application anticipated
needing to generate its next logical block of data.</t>

<t>The application could perform this calculation itself,
calculating the estimated current data rate and multiplying
that by its desired advance notice time, to compute the number
of outstanding unsent bytes corresponding to that desired time.
For example, if the current average data rate is 1 megabyte per second,
and the application would like 0.1 seconds of warning
before the backlog of unsent data runs out,
then 1,000,000 x 0.1 gives us a
TCP_NOTSENT_LOWAT value of 100,000 bytes.</t>
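<t>This calculation can be expressed as a trivial helper function
(illustrative only; the function name is invented for this example
and is not a real API):</t>

<sourcecode type="c"><![CDATA[
/* Worked example of the calculation above: convert a desired
 * advance-notice time into a byte threshold, given the current
 * estimated data rate. Illustrative helper, not a real API. */
#include <assert.h>
#include <stdio.h>

/* rate in bytes per second, notice time in seconds */
static long lowat_bytes(double rate, double notice)
{
    return (long)(rate * notice);
}

int main(void)
{
    /* 1 MB/s with 0.1 s of warning -> 100,000 bytes */
    long v = lowat_bytes(1000000.0, 0.1);
    assert(v == 100000);
    printf("TCP_NOTSENT_LOWAT = %ld bytes\n", v);
    return 0;
}
]]></sourcecode>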

<t>However, the application would have to keep adjusting its
TCP_NOTSENT_LOWAT value as the observed data rate changed.
Since the transport protocol already knows the number of
unacknowledged bytes in flight, and the current round-trip delay,
the transport protocol is in a better position
to perform this calculation.</t>

<t>In addition, the network stack knows if features like hardware
offload, aggregation, and stretch acks are being used,
which could impact the burstiness of consumption of unsent bytes.</t>

<t>Wi-Fi interfaces perform better when they send
batches of packets aggregated together instead of
sending individual packets one at a time.
The amount of aggregation that is desirable depends
on the current wireless conditions,
so the Wi-Fi interface and its driver
are in the best position to determine that.</t>

<t>If stretch acks are being used, then each ack packet
could acknowledge 8 data segments, or about 12 kilobytes.
If one such ack packet is lost, the following ack packet
will cumulatively acknowledge 24 kilobytes,
instantly consuming the entire 16 kilobyte unsent backlog
and giving the application no advance notice that
the transport protocol has suddenly run out of data to send,
so some network capacity is wasted.</t>

<t>Occasional failures to fully utilize the entire
available network capacity are not a disaster, but we
still would like to avoid this being a common occurrence.
Therefore it is better to have the transport protocol,
in cooperation with the other layers of the network stack,
use all the information available to estimate
when it expects to run out of data available to send,
given the current network conditions
and current amount of unsent data.
When the estimated time remaining until exhaustion falls
below the application’s specified threshold, the application
is notified to begin working on generating more data.</t>

</section>
<section anchor="other-transport-protocols"><name>Other Transport Protocols</name>

<t>TCP_NOTSENT_LOWAT was initially defined only for TCP,
and only for the BSD Sockets programming interface.
It would be useful to define equivalent delay management
capabilities for other transport protocols,
like QUIC <xref target="RFC9000"/><xref target="RFC9369"/>,
and for other network programming APIs.</t>

</section>
</section>
<section anchor="tcpreplenishtime"><name>TCP_REPLENISH_TIME</name>

<t>Because of these lessons learned, this document proposes
a new BSD Socket option for TCP, TCP_REPLENISH_TIME.</t>

<t>The new TCP_REPLENISH_TIME socket option specifies the
threshold for notifying an application of impending data
exhaustion in terms of microseconds, not bytes.
It is the job of the transport protocol to compute its
best estimate of when the expected time-to-exhaustion
falls below this threshold.</t>

<t>The new TCP_REPLENISH_TIME socket option
should have the same semantics across all
operating systems and network stack implementations.</t>

<t>Other transport protocols, like QUIC,
and other network APIs not based on BSD Sockets,
should provide equivalent time-based backlog-management
mechanisms, as appropriate to their API design.</t>
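<t>Intended usage might look like the following sketch. Note that
TCP_REPLENISH_TIME is only proposed: the option constant below is a
placeholder invented for illustration, no option number has been
assigned, and on current systems the setsockopt() call will fail:</t>

<sourcecode type="c"><![CDATA[
/* Sketch of intended usage of the proposed TCP_REPLENISH_TIME
 * option. The option does not yet exist in shipping systems;
 * the constant below is a placeholder chosen for illustration,
 * so on current systems this setsockopt() call is expected to
 * fail with ENOPROTOOPT. */
#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef TCP_REPLENISH_TIME
#define TCP_REPLENISH_TIME 0x2001  /* placeholder value */
#endif

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Ask for 20 ms (20,000 us) of advance notice before the
     * backlog of unsent data is expected to be exhausted. */
    int replenish_us = 20000;
    if (setsockopt(fd, IPPROTO_TCP, TCP_REPLENISH_TIME,
                   &replenish_us, sizeof(replenish_us)) != 0) {
        /* Expected on systems without the proposed option. */
        printf("TCP_REPLENISH_TIME not supported here\n");
    } else {
        printf("TCP_REPLENISH_TIME set to %d us\n", replenish_us);
    }
    close(fd);
    return 0;
}
]]></sourcecode>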

<t>The time-based estimate does not need to be perfectly accurate,
either on the part of the transport protocol estimating how much
time remains before the backlog of unsent data is exhausted,
or on the part of the application estimating how much
time it will need to generate its next logical block of data.
If the network data rate increases significantly, or a group of
delayed acknowledgments all arrive together, then the transport
protocol could end up discovering that it has overestimated how
much time remains before the data is exhausted.
If the operating system scheduler is slow to schedule the
application process, or the CPU is busy with other tasks,
then the application may discover that it has
underestimated how much time it will take
to generate its next logical block of data.
These situations are not considered to be serious problems,
especially if they only occur infrequently.
For a delay-sensitive application, having some reasonable
mechanism to avoid an excessive backlog of unsent data is
dramatically better than having no such mechanism at all.
Occasional overestimates or underestimates do not
negate the benefit of this capability.</t>

<t>The IETF Transport Services API specification <xref target="RFC9622"/>
states that “Sent events allow an application to obtain
an understanding of the amount of buffering it creates.”
TCP_REPLENISH_TIME goes beyond
giving an application <strong>visibility</strong>
into the amount of buffering it creates,
by giving an application the ability to <strong>specify</strong>
the amount of buffering it would <strong>like</strong> to create.</t>

<section anchor="solicitation-for-name-suggestions"><name>Solicitation for Name Suggestions</name>

<t>Author’s note: The BSD socket option name “TCP_REPLENISH_TIME”
is currently proposed as a working name
for this new option for BSD Sockets.
While the name does not affect the behavior of the code,
the choice of name is important, because people often
form their first impressions of a concept based on its name,
and if they form incorrect first impressions then their
thinking about the concept may be adversely affected.</t>

<t>For example, one suggested name was “TCP_EXHAUSTION_TIME”.
We view “TCP_REPLENISH_TIME” and “TCP_EXHAUSTION_TIME”
as representing two interpretations of the same quantity.
From the application’s point of view, it is expressing
how much time it will require to replenish the buffer.
From the networking code’s point of view, it is estimating
how much time remains before it will need the buffer replenished.
In an ideal world,
REPLENISH_TIME == EXHAUSTION_TIME, so that the data is
replenished at exactly the moment the networking code needs it.
In a sense, they are two ways of saying the same thing.
Since this API call is made by the application, we feel it
should be expressed in terms of the application’s requirement.</t>

</section>
</section>
<section anchor="application-guidance"><name>Application Guidance</name>

<section anchor="program-structure"><name>Program Structure</name>

<t>For an application that wishes to achieve high throughput
without compromising the timeliness of its data,
this document recommends that the application use the
“TCP_REPLENISH_TIME” socket option (or equivalent)
to specify how much time it expects it will need
to generate its next batch of data.</t>

<t>After setting TCP_REPLENISH_TIME for a connection,
the application then uses a notification API like
kevent() on Mac OS (or equivalents on other platforms)
to block and wait until the networking code determines
that it is time to generate new data for that connection.
Immediately after a new connection is created,
kevent() (or equivalent) will report that the connection
is ready for more data. Once the application has
written enough data to build up a sufficient backlog of
unsent data waiting on the source device, kevent() will stop
inviting the application to write more data.
Once the backlog of unsent data drains to the point
where the networking code expects it to be exhausted
in less than the time specified by TCP_REPLENISH_TIME
for that connection, kevent() again reports the socket
as writable to invite the application to generate its
next batch of data.</t>
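<t>The following sketch illustrates this program structure. Since
TCP_REPLENISH_TIME does not yet exist, and kevent() is specific to
Apple and BSD platforms, the sketch uses the portable poll() call
with POLLOUT as a stand-in for the wakeup mechanism, and a local
socketpair in place of a real TCP connection:</t>

<sourcecode type="c"><![CDATA[
/* Sketch of the event-loop structure described above, using
 * poll() with POLLOUT as a portable stand-in for kevent().
 * A socketpair stands in for a real TCP connection, and a
 * fixed-size "frame" stands in for the application's next
 * batch of data. Illustrative only. */
#include <assert.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);
    int flags = fcntl(fds[0], F_GETFL, 0);
    assert(fcntl(fds[0], F_SETFL, flags | O_NONBLOCK) == 0);

    char frame[4096];
    memset(frame, 'f', sizeof(frame));
    char sink[8192];
    int frames_sent = 0;

    while (frames_sent < 8) {
        /* Block until the networking code invites us to
         * generate more data (kevent()/EVFILT_WRITE would
         * play this role with TCP_REPLENISH_TIME). */
        struct pollfd pfd = { .fd = fds[0], .events = POLLOUT };
        assert(poll(&pfd, 1, 1000) == 1);

        if (pfd.revents & POLLOUT) {
            /* "Generate" the next frame and write it. */
            if (write(fds[0], frame, sizeof(frame)) > 0)
                frames_sent++;
        }
        /* The peer draining data stands in for the network
         * delivering it and freeing buffer space. */
        ssize_t r = read(fds[1], sink, sizeof(sink));
        (void)r;
    }
    printf("sent %d frames, one at a time\n", frames_sent);
    close(fds[0]);
    close(fds[1]);
    return 0;
}
]]></sourcecode>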

<t>It is important to note that the kevent() signal indicating
that it is time to generate new data is a hint to the application.
The presence of the kevent() signal tells the application
that this is a good time to generate new data;
the absence of the kevent() signal is <strong>not</strong>
a <strong>prohibition</strong> on the application writing more data.
Even if kevent() is not signalling impending exhaustion
of the data buffer, an application is still free to write
as much data as is appropriate for that application
(potentially limited by some other parameter,
such as the SO_SNDBUF size in BSD Sockets).</t>

<t>Note that there is precedent for this
kind of behavior in current programming APIs.
For example, if a TCP connection on Linux
has a socket send buffer of 1000 kilobytes,
and a TCP ACK packet arrives
acknowledging 3 kilobytes of data,
leaving only 997 kilobytes of data
remaining in the socket send buffer,
then epoll() will not immediately wake up
the process to replenish the data and fill
the buffer back up to the full 1000 kilobytes.
Instead Linux will wait until the socket send
buffer occupancy has fallen to 50% before
waking up the process to replenish the data.
This allows the process to do a
relatively small number of efficient 500-kilobyte writes
instead of a huge number of little 3-kilobyte writes.
[Author’s note: I would appreciate a confirmation
that this is correct, with a reference, or alternatively
inform me if this is wrong and I will remove it.]</t>

<t>In this way the application is able to keep a
reasonable amount of data waiting in the outgoing buffer,
without building too much backlog resulting in excessive delay.</t>

</section>
<section anchor="selection-of-tcpreplenishtime-value"><name>Selection of TCP_REPLENISH_TIME value</name>

<t>The selection of the appropriate TCP_REPLENISH_TIME value
depends on the application’s needs.</t>

<t>For example, a screen sharing server (or a video streaming source)
sending data at 60 frames per second may need about 17 milliseconds
to grab the frame from the screen (or camera),
compress it, and have the data ready for transmission.
Such an application might specify a TCP_REPLENISH_TIME
of 20 milliseconds, to give reasonable confidence that
it will have the next frame prepared and ready before
the transport protocol finishes sending the previous frame.
If the video capture process is more pipelined,
so that it takes the application 17 milliseconds
to capture the frame from the camera,
and then a further 17 milliseconds to compress that frame,
then it might specify a TCP_REPLENISH_TIME of 35 milliseconds.</t>
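<t>A small worked example of this arithmetic (the specific margins
are illustrative, matching the values in the text above):</t>

<sourcecode type="c"><![CDATA[
/* Worked example of the arithmetic above: the frame interval
 * at 60 frames per second, and candidate TCP_REPLENISH_TIME
 * values with a small safety margin. Illustrative only. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    int fps = 60;
    int frame_interval_us = 1000000 / fps;  /* ~16,667 us */
    assert(frame_interval_us == 16666);

    /* About one frame of preparation time, plus margin. */
    int replenish_us = 20000;               /* ~20 ms */
    /* A pipelined capture-then-compress path needs roughly
     * two frame times of notice. */
    int pipelined_us = 35000;               /* ~35 ms */

    printf("frame interval %d us, replenish %d or %d us\n",
           frame_interval_us, replenish_us, pipelined_us);
    return 0;
}
]]></sourcecode>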

<t>For an application that cares most about
achieving the highest possible video quality,
and a little extra delay is not a serious problem,
it may be appropriate to specify a slightly higher
TCP_REPLENISH_TIME to ensure a slightly higher safety
margin and reduce the risk of the transport protocol
occasionally becoming starved of new data.</t>

<t>For an application that cares most about
getting the lowest possible delay rather than
achieving the highest utilization of available network capacity,
it may be appropriate to specify a slightly lower
TCP_REPLENISH_TIME to keep buffering delay to
a minimum, at the risk of occasionally leaving
some amount of network capacity unused.</t>

<t>Continuing the example of the video streaming application,
if a given frame has a lot of movement relative to the
previous frame, then the video compression algorithm
can be set either to encode the frame at lower quality
(yielding the same compressed data size)
or at the same quality
(yielding a larger compressed data size).
In the latter case, if the compressed data size is
three times larger than a typical compressed frame,
the application can still write that larger block of data.
The write is not prevented or blocked just because
it exceeds the desired TCP_REPLENISH_TIME budget.
After writing this larger block of data,
kevent() (or equivalent) will not signal
that it is ready for more data
until after the large block has drained,
which may take more than one typical frame time.
In this way the kevent() loop has the effect of
automatically reducing the frame rate to stay within
the available network capacity, instead of continuing
to generate frames faster than the network can carry
them and building up an increasing backlog
(with a corresponding increasing delay).
The application may accept this reduced frame rate,
or it may choose to adjust its video compression algorithm
to a lower quality so as to increase the frame rate.
In either case, the source device buffering delay
is kept under control.</t>

<t>In all cases, it is expected that application writers will
experiment with different values of TCP_REPLENISH_TIME to
determine empirically what works best for their application.</t>

</section>
</section>
<section anchor="applicability"><name>Applicability</name>

<t>This time-based backlog management is applicable anywhere
that a queue of unsent data may build up on the sending device.</t>

<section anchor="physical-bottlenecks"><name>Physical Bottlenecks</name>

<t>A backlog may build up on the sending device if the source
of the packets is simply generating them faster than
the outgoing first-hop interface is able to send them.
This will cause a queue to build up in the network
hardware or its associated driver.
In this case,
to avoid packets suffering excessive queueing delay,
the hardware or its driver
needs to communicate backpressure to IP, which
needs to communicate backpressure to
the transport protocol (TCP or QUIC), which
needs to communicate backpressure to
the application that is the source of the data.
We refer to this case as a physical bottleneck.</t>

<t>For an example of a physical bottleneck,
consider the case where a user has symmetric
1Gb/s Internet service,
and they are sending data from a device
communicating via Wi-Fi at a lower rate, say 300 Mb/s.
In this case (assuming the device is communicating
with a well-connected server on the Internet)
the limiting factor of the entire path is
the first hop -- the sending device’s Wi-Fi interface.
If the device’s Wi-Fi hardware, driver, and networking software
do not produce appropriate backpressure, then outgoing
network traffic will experience increasing delays.
The Linux Byte Queue Limits mechanism <xref target="Hruby"/><xref target="THJ"/><xref target="Herbert"/>
is one example of a technique to tune hardware buffers
to an appropriate size so that they are large enough
to avoid transmitter starvation without being
so large that they unnecessarily increase delay.</t>

<t>Poor backpressure from first-hop physical bottlenecks
can produce the ironic outcome that upgrading
home Internet service from 100Mb/s to 1Gb/s can sometimes
result in a customer getting a worse user experience,
because the service upgrade
causes the bottleneck hop to change location,
from the Internet gateway
(which may have good queue management using L4S <xref target="RFC9330"/>)
to the source device’s Wi-Fi interface,
which may have very poor source buffer management.</t>

<t>Note that when the physical bottleneck is the first-hop interface,
part of (or directly attached to) the source generating
the data stream, the direct backpressure described here
is the appropriate way to signal that the source should slow down.</t>

<t>When the physical bottleneck is elsewhere along the path,
it may be a local interface from the point of view of the
device it is part of, but that device is not the original
source of the data -- that device is merely passing through
IP packets it received from another local interface
(that is to say, this device is acting as a router or switch).
In this situation the direct backpressure at the physical interface
cannot be immediately communicated directly to the source of
the data, because that software is running on a different device.
In this case the direct physical backpressure is communicated
instead to the device’s queue management algorithm,
which determines when it becomes appropriate to express
this local physical backpressure in the form of indirect backpressure
(i.e., ECN congestion marks, or dropping the packet entirely)
that will indirectly cause the source to lower its sending rate.</t>

<t>In this way a physical bottleneck on the router
generates direct backpressure to the router’s queue management algorithm,
which generates indirect backpressure as appropriate,
which is manifested at the source as an algorithmic bottleneck
(the rate optimization or “congestion control” algorithm)
which moderates the rate that the source will inject packets into the network
so as to match the rate the physical bottleneck can actually carry them.</t>

</section>
<section anchor="algorithmic-bottlenecks"><name>Algorithmic Bottlenecks</name>

<t>In addition to physical bottlenecks,
there are other reasons why software on the sending
device may choose to refrain from sending data as fast
as the outgoing first-hop interface can carry it.
We refer to these as algorithmic bottlenecks.</t>

<t>In the case study in <xref target="casestudy"/>, the bottleneck was the
transport protocol’s rate management (congestion control) algorithm,
not a physical constraint of the outgoing first-hop interface
(which was gigabit Ethernet).</t>

<t><list style="symbols">
  <t>If the TCP receive window is full, then the sending TCP
implementation will voluntarily refrain from sending new data,
even though the device’s outgoing first-hop interface is easily
capable of sending those packets.
This is vital to avoid overrunning the receiver with data
faster than it can process it.</t>
  <t>The transport protocol’s rate management (congestion control) algorithm
may determine that it should delay before sending more data, so as
not to overflow a queue at some other bottleneck within the network.
This is vital to avoid overrunning the capacity of the bottleneck
network hop with data faster than it can forward it,
resulting in massive packet loss,
which would equate to a large wastage of resources at the sender,
in the form of battery power and network capacity wasted by
generating packets that will not make it to the receiver.</t>
  <t>When packet pacing is being used, the sending network
implementation may choose voluntarily to moderate the rate at
which it emits packets, so as to smooth the flow of packets into
the network, even though the device’s outgoing first-hop interface
might be easily capable of sending at a much higher rate.
When packet pacing is being used, a temporary backlog
can build up at this layer if the source is generating
data faster than the pacing rate.</t>
</list></t>
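
<t>The packet pacing described in the last bullet point can be
sketched as follows; the paced_send helper is illustrative, not an
API from any particular networking stack. It releases chunks no
faster than the pacing rate, even though the underlying call could
accept them far faster.</t>

```python
import time

def paced_send(send_fn, chunks, pace_bytes_per_s):
    """Emit chunks no faster than the pacing rate,
    smoothing the flow of data into the network."""
    next_time = time.monotonic()
    for chunk in chunks:
        delay = next_time - time.monotonic()
        if delay > 0:
            time.sleep(delay)   # temporary backlog waits here, above the wire
        send_fn(chunk)
        next_time += len(chunk) / pace_bytes_per_s

sent = []
start = time.monotonic()
paced_send(sent.append, [b"x" * 1000] * 5, 10_000)  # 5 kB at 10 kB/s
elapsed = time.monotonic() - start
assert len(sent) == 5
assert elapsed >= 0.35          # last chunk released ~0.4 s after the first
```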

<t>Whether the source application is constrained
by a physical bottleneck on the sending device, or
by an algorithmic bottleneck on the sending device,
it is still beneficial to avoid
overcommitting data to the outgoing buffer.</t>

<t>As described in the introduction,
the goal is for the application software to be able to
write chunks of data large enough to be efficient,
without writing too many of them too quickly,
and causing unwanted self-inflicted delay.</t>

</section>
<section anchor="superiority-of-direct-backpressure"><name>Superiority of Direct Backpressure</name>

<t>Since multi-hop network protocols already implement
indirect backpressure signalling
in the form of discarding or marking packets,
it can be tempting to use the same mechanism
to generate backpressure for first-hop physical bottlenecks.
Superficially there might seem to be some attractive
elegance in having the first hop use the same drop/mark
mechanism as the remaining hops on the path.
However, this is not an ideal solution because indirect
backpressure from the network is very crude compared to
the much richer direct backpressure
that is available within the sending device itself.
Relying on indirect backpressure by
discarding or marking a packet in the sending device
is a crude rate-control signal, because it takes a full network
round-trip time before the effect of that drop or mark is
observed at the receiver and echoed back to the sender, and
it may take multiple such round trips before it finally
results in an appropriate reduction in sending rate.</t>

<t>In contrast to queue buildup in the network,
queue buildup at the sending device has different properties
regarding (i) security, (ii) packet size constraints, and (iii) immediacy.
This means that when it is the source device itself
that is building up a backlog of unsent data,
designers of networking software have more freedom about how to manage this.</t>

<t>(i) When the source of the data and the location of the backlog are
the same physical device, network security and trust concerns do not apply.</t>

<t>(ii) When the mechanism we use to communicate about queue state
is a software API instead of packets sent through a network,
we do not have the constraint of having to work within
limited IP packet header space.</t>

<t>(iii) When flow control is implemented via a local software API,
the delivery of STOP/GO information to the source is immediate.</t>

<t>Furthermore, the situation where the bottleneck is
the first hop of the path is a fairly common case,
and it is the case where indirect backpressure is at
its worst (it takes an entire network round trip to
learn what is already known on the sending device),
so it is worthwhile optimizing for this common case.</t>

<t>Direct backpressure can be achieved
simply by making an API call block,
by returning a Unix EWOULDBLOCK error,
or by using equivalent mechanisms in other APIs,
and has the effect of immediately halting the flow of new data.
Similarly, when the system becomes able to accept more data,
unblocking an API call, indicating that a socket
has become writable using select() or kevent(),
or equivalent mechanisms in other APIs,
has the effect of immediately allowing the production of more data.</t>

<t>Indirect backpressure is vastly inferior to direct backpressure.
For rate adjustment signals generated within the network,
indirect backpressure has to be used, because
in that situation better alternatives are not available.
Where direct backpressure mechanisms are possible, they
should be preferred over indirect backpressure mechanisms.</t>

</section>
<section anchor="application-programming-interface"><name>Application Programming Interface</name>

<t>It is important to understand that these
backpressure mechanisms at the API layer are not new.
By necessity, backpressure has existed for as long as we have had
networking APIs (or serial port APIs, or file system APIs, etc.).
All applications have always had to be prevented from generating
a sustained stream of data faster than the medium can consume it.
The problem is not that backpressure mechanisms
did not exist, but that, historically, for networking APIs
these backpressure mechanisms were exercised far too late,
after an excessive backlog had already built up.</t>

<t>The proposal in this Source Buffer Management
document is not to define entirely new API mechanisms
that did not previously exist, or to fundamentally
change how networking applications are written;
the proposal is to make existing
networking API mechanisms work more effectively.
For a networking application that already uses
kevent() or similar mechanisms
to tell it when it is time to write to a socket,
it may be that the only change the application
needs is a single call using TCP_REPLENISH_TIME to indicate
its expected time budget to generate a new block
of data, and everything else in the application
remains completely unchanged.
Indeed, if a networking implementation were to adopt a reasonable
default value of TCP_REPLENISH_TIME (e.g., 20 ms) that is
broadly suitable for a wide range of applications,
then many existing applications based on kevent() loops
or similar mechanisms would immediately experience
significantly lower delays, without changing a single
line of code (or even needing to be recompiled).</t>

</section>
<section anchor="relationship-between-throughput-and-delay"><name>Relationship Between Throughput and Delay</name>

<t>It is important to understand that Source Buffer Management
using TCP_REPLENISH_TIME does not alter the overall
long-term average throughput of a data transfer.
Calculating the optimum rate to send data
(so as not to exceed the receiver’s capacity,
or the available network capacity)
remains the responsibility of the transport protocol.
Using TCP_REPLENISH_TIME does not alter the data rate;
it controls the delay between the time when data is generated
and the time when that data departs the sending device.
Using the example from <xref target="casestudy"/>, in both cases
the long-term average throughput was 500 kb/s.
What changed was that originally the application was
generating 500 kb/s with two whole seconds of outgoing delay;
after using TCP_NOTSENT_LOWAT the application was
generating 500 kb/s with just 250 milliseconds of outgoing delay.</t>
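
<t>The relationship between the unchanged throughput and the reduced
delay can be confirmed with simple arithmetic: the standing backlog
on the sending device equals the throughput multiplied by the
buffering delay.</t>

```python
# Standing backlog (bytes) = throughput (bytes/s) x buffering delay (s).
RATE_BPS = 500_000              # long-term average throughput, bits/s

def backlog_bytes(delay_s):
    return RATE_BPS / 8 * delay_s

assert backlog_bytes(2.0) == 125_000   # before: 2 s of queued data
assert backlog_bytes(0.25) == 15_625   # after: 250 ms of queued data
```

<t>The throughput is identical in both cases; what shrinks by a
factor of eight is the amount of stale data sitting in the buffer.</t>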

</section>
<section anchor="bulk-transfer-protocols"><name>Bulk Transfer Protocols</name>

<t>It is frequently asserted that latency matters primarily for
interactive applications like video conferencing and online games,
and that latency is relatively unimportant for most other applications.</t>

<t>We do not agree with this characterization.</t>

<t>Even for large bulk data transfers
-- e.g., downloading a software update or uploading a video --
we believe latency affects performance.</t>

<t>For example, TCP Fast Retransmit <xref target="RFC5681"/> can immediately
recover a single lost packet in a single round-trip time.
TCP generally performs at its absolute best when the
loss rate is no more than one loss per round-trip time.
More than one loss per round-trip time requires more
extensive use of TCP SACK blocks, which consume extra
space in the packet header and make the work of the
rate management (congestion control) algorithm harder.
This can result in the transport protocol temporarily
sending too fast, resulting in additional packet loss,
or too slowly, resulting in underutilized network capacity.
For a given fixed loss rate (in packets lost per second)
a higher total network round-trip time
(including the time spent in buffers in the sending network
interface, below the transport protocol layer)
equates to more lost packets per network round-trip time,
causing error recovery to occur less quickly.
A transport protocol cannot make rate adaptation changes
to adjust to varying network conditions in less than one
network round-trip time, so the higher the total network
round-trip time is, the less agile the transport protocol
is at adjusting to varying network conditions.</t>

<t>In short, a client running over a transport protocol like TCP
may itself not be a real-time delay-sensitive application,
but a transport protocol itself is most definitely a
delay-sensitive application, responding in real time
to changing network conditions.
The application doing the large bulk data transfer
may have no need to use TCP_REPLENISH_TIME
to manage its own application-layer backlog,
but the transport protocol it is using (e.g., TCP or QUIC)
obtains significant benefit from receiving timely
direct backpressure from the driver and hardware below
to keep the network round-trip time low.</t>

</section>
</section>
<section anchor="experimental-validation"><name>Experimental Validation</name>

<t>The mechanisms described in this document are not
advocated for purely ideological or philosophical reasons.
Any work to improve source buffer management in end systems
should be validated by confirming that real-world applications
exhibit verifiably improved responsiveness, and by taking
measurements using benchmark tools that measure application-layer
round-trip times under realistic working conditions <xref target="RPM"/>.
Using the ‘ping’ command to send one ICMP Echo packet <xref target="RFC792"/>
per second on an otherwise idle network is not a good
predictor of real-world application performance.
Testing the scenario where the outgoing buffer is
almost always completely empty due to lack of traffic
does not reveal anything about how it will perform
when a nontrivial amount of data is being sent and
the buffer is no longer empty.
The quality of the source buffer management policy
and the effectiveness of its backpressure mechanisms
only become apparent when the source of the data is
willing and able to exceed the available network capacity,
and the backpressure mechanisms become operational
to regulate the rate that data is being sent.</t>

</section>
<section anchor="alternative-proposals"><name>Alternative Proposals</name>

<section anchor="just-use-raw-udp"><name>Just use “Raw UDP”</name>

<t>Because much of the discussion about network latency involves
talking about the behavior of transport protocols like TCP and QUIC,
sometimes people conclude that TCP and QUIC are the problem,
and they may imagine that directly using raw UDP packets
will magically solve the source buffering problem.
It does no such thing.
If an application sends raw UDP packets faster than the outgoing
network interface can carry them, then a queue of packets
will still build up, causing increasing delay for those packets,
and eventual packet loss when the queue reaches its capacity.</t>

<t>Any protocol that runs over UDP (like QUIC) must end up
re-creating the same rate optimization behaviors that
are already built into TCP, or it will fail to operate
gracefully over a range of different network conditions.</t>

<t>Networking APIs for UDP cannot include capabilities like
reliability, in-order delivery, and rate optimization,
because the UDP header has no sequence number or similar
fields that would make these capabilities possible.
However, networking APIs for UDP <bcp14>SHOULD</bcp14> provide appropriate
first-hop (direct) backpressure to the client software,
so that software using UDP can avoid unnecessary self-inflicted
delays when inadvertently attempting to send faster than
the outgoing first-hop interface can carry it.
Additionally, networking APIs for UDP <bcp14>SHOULD</bcp14> provide the
ability to read and write the ECN field of the IP header,
so that software using UDP can avoid unnecessary self-inflicted
delays when inadvertently attempting to send faster than
subsequent hops on the path can carry it <xref target="UDPECN"/>.
These backpressure mechanisms
(both first-hop direct backpressure and ECN-based indirect backpressure)
allow advanced protocols
like QUIC to provide capabilities like reliability,
in-order delivery, and rate optimization, while avoiding
unwanted delay caused by on-device first-hop buffering.</t>
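
<t>For illustration, reading and writing the ECN field through the
Berkeley sockets API can look like the following Python sketch; the
Linux value 13 for IP_RECVTOS is assumed as a fallback where the
socket module does not export the constant.</t>

```python
import socket

ECT0 = 0b10   # ECN Capable Transport (0) codepoint, RFC 3168
IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)   # 13 on Linux

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Write the ECN field: mark outgoing datagrams as ECN-capable.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
assert s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) & 0b11 == ECT0
# Read the ECN field: request the received TOS byte as ancillary
# data, to be retrieved alongside each datagram with recvmsg().
s.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)
s.close()
```

<t>The corresponding IPv6 options are IPV6_TCLASS and
IPV6_RECVTCLASS, which operate on the IPv6 Traffic Class byte.</t>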

</section>
<section anchor="expiration"><name>Packet Expiration</name>

<t>One approach that is sometimes used is to send packets
tagged with an expiration time, and if they have spent
too long waiting in the outgoing queue then they are
automatically discarded without even being sent.
This is counterproductive because the sending application
does all the work to generate data, and then has to do more
work to recover from the self-inflicted data loss caused by
the expiration time.</t>

<t>If the outgoing queue is kept short, then the
amount of unwanted delay is kept correspondingly short.
In addition, if there is only a small amount of data in the
outgoing queue, then the cost of sending a small amount of
data that may arguably have become stale is also small --
usually smaller than the cost of having to recover missing
state caused by intentional discard of that delayed data.</t>

<t>For example, in video conferencing applications it is
frequently thought that if a frame is delayed past the
point where it becomes too late to display it, then it becomes
a waste of network capacity to send that frame at all.
However, the fallacy in that argument is that modern
video compression algorithms make extensive use of
similarity between consecutive frames.
A given video frame is not just encoded as a single frame
in isolation, but as a collection of visual
differences relative to the previous frame.
The previous frame may have arrived too late for the
time it was supposed to be displayed, but the data
contained within it is still needed to decode and
display the current frame.
If the previous frame was intentionally discarded by the
sender, then the subsequent frames are also impacted by
that loss, and the cost of repairing the damage is
frequently much higher than the cost would have been
to simply send the delayed frame.
Just because a frame arrives too late to be displayed does
not mean that the data within that frame is not important.
The data contained within a frame is used not only to display
that frame, but also in the construction of subsequent frames.</t>

</section>
<section anchor="traffic-priorities-head-of-line-blocking"><name>Traffic Priorities / Head of Line Blocking</name>

<t>People are often very concerned about the problem of
head-of-line blocking, and propose to solve it using
techniques such as packet priorities,
the ability to cancel unsent pending messages <xref target="MMADAPT"/>,
and out-of-order delivery on the receiving side.
There is an unconscious unstated assumption baked into
this line of reasoning, which is that having an excessively
long outgoing queue is inevitable and unavoidable, and therefore
we have to devote a lot of our energy to how to organize
and prioritize and manage that obligatory excessive queue.
In contrast, if we take steps to keep queues short,
the problems of head-of-line blocking largely go away.
When the line is consistently short, being at the back of
the line is no longer the serious problem that it used to be.</t>

<t>On the receiving device, if a single packet is lost,
then subsequent data cannot be delivered
to the receiving application in-order
until the missing packet is retransmitted
and arrives at the receiver to fill in the gap.
Using techniques like TCP Fast Retransmit <xref target="RFC5681"/>,
this recovery can occur in a single network round-trip time,
making the effective application-layer round-trip time
for that data twice the underlying network round-trip time.
When using techniques like L4S <xref target="RFC9330"/>
to minimize network losses and queueing delays,
even twice the network round-trip time may be substantially
better than today’s typical network round-trip times.
For many applications the difference between
one network round-trip time and
two network round-trip times may have
negligible effect on the user experience of that application,
especially if such degradations are rare.</t>

<t>There is a small class of applications,
like audio and video conferencing over long distances,
where people may feel that
a single network round-trip time provides adequate user experience
but two network round-trip times would be unacceptable.
This is the scenario where out-of-order delivery
on the receiving side appears attractive.
However, writing application code to take advantage of
out-of-order delivery has proven to be surprisingly difficult.
Many modern data types are not amenable to easy interpretation
when parts of the data are missing.
In compressed data, such as ZIP files, JPEG images,
and modern video formats,
correct interpretation of data depends on having the
data that preceded it, making it very difficult to write
software that will correctly handle gaps in the data.
For example, in a compressed video stream where a frame is encoded
as differences relative to the previous frame, there is no easy
way to decode the current frame when the previous frame is missing.
This scenario has many similarities to
Packet Expiration (<xref target="expiration"/>)
except that when using Packet Expiration the data loss is
intentional and self-inflicted, whereas out-of-order delivery
encompasses both the case of intentional packet loss by
the sender and inadvertent packet loss in the network.</t>

<t>In a network using L4S <xref target="RFC9330"/> the motivation
for writing extremely complicated software to handle
out-of-order delivery (i.e., data with gaps) is weak,
especially when L4S makes actual packet loss exceedingly rare,
and Fast Retransmit recovers from these rare losses
in a single extra round-trip time,
which is low when L4S is being used.</t>

<t>Note that the justification for scenarios
where one network round-trip time is acceptable
but two network round-trip times would be unacceptable
only applies when the network round-trip time is large
relative to the user-experience requirements of the application.
For example, for distributing real-time audio within a home network,
where round-trip delays over the local Ethernet or Wi-Fi network
are just a few milliseconds, simply relying on Fast Retransmit to recover
occasional lost packets within a few milliseconds <xref target="TCPFR"/>
makes the application programming easier and is preferable
to accepting received data out of order and then
playing degraded audio due to gaps in the data stream.
To give some calibration, the speed of sound in air is
roughly one foot per millisecond, so a 5 ms playback delay
buffer to allow for loss recovery equates to the same delay
as standing five feet further away from the speakers.</t>

</section>
</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>No security concerns are anticipated resulting from reducing
the amount of stale data sitting in buffers at the sender.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>


    <references title='Normative References'>



<reference anchor='RFC2119' target='https://www.rfc-editor.org/info/rfc2119'>
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname='S. Bradner' initials='S.' surname='Bradner'/>
    <date month='March' year='1997'/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name='BCP' value='14'/>
  <seriesInfo name='RFC' value='2119'/>
  <seriesInfo name='DOI' value='10.17487/RFC2119'/>
</reference>

<reference anchor='RFC8174' target='https://www.rfc-editor.org/info/rfc8174'>
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname='B. Leiba' initials='B.' surname='Leiba'/>
    <date month='May' year='2017'/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name='BCP' value='14'/>
  <seriesInfo name='RFC' value='8174'/>
  <seriesInfo name='DOI' value='10.17487/RFC8174'/>
</reference>




    </references>

    <references title='Informative References'>

<reference anchor="Bloat1" target="https://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/">
  <front>
    <title>Whose house is of glasse, must not throw stones at another</title>
    <author initials="J." surname="Gettys">
      <organization></organization>
    </author>
    <date year="2010" month="December"/>
  </front>
</reference>
<reference anchor="Bloat2" target="https://queue.acm.org/detail.cfm?id=2071893">
  <front>
    <title>Bufferbloat: Dark Buffers in the Internet</title>
    <author initials="J." surname="Gettys">
      <organization></organization>
    </author>
    <author initials="K." surname="Nichols">
      <organization></organization>
    </author>
    <date year="2011" month="November"/>
  </front>
  <seriesInfo name="ACM Queue, Volume 9, issue 11" value=""/>
</reference>
<reference anchor="Bloat3" target="https://dl.acm.org/doi/10.1145/2063176.2063196">
  <front>
    <title>Bufferbloat: Dark Buffers in the Internet</title>
    <author initials="J." surname="Gettys">
      <organization></organization>
    </author>
    <author initials="K." surname="Nichols">
      <organization></organization>
    </author>
    <date year="2012" month="January"/>
  </front>
  <seriesInfo name="Communications of the ACM, Volume 55, Number 1" value=""/>
</reference>
<reference anchor="Cake" target="https://ieeexplore.ieee.org/document/8475045">
  <front>
    <title>Piece of CAKE: A Comprehensive Queue Management Solution for Home Gateways</title>
    <author initials="T." surname="Høiland-Jørgensen">
      <organization></organization>
    </author>
    <author initials="D." surname="Taht">
      <organization></organization>
    </author>
    <author initials="J." surname="Morton">
      <organization></organization>
    </author>
    <date year="2018" month="June"/>
  </front>
  <seriesInfo name="2018 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)" value=""/>
</reference>
<reference anchor="Demo" target="https://developer.apple.com/videos/play/wwdc2015/719/?time=2199">
  <front>
    <title>Your App and Next Generation Networks</title>
    <author initials="S." surname="Cheshire">
      <organization></organization>
    </author>
    <date year="2015" month="June"/>
  </front>
  <seriesInfo name="Apple Worldwide Developer Conference" value=""/>
</reference>
<reference anchor="Herbert" target="https://medium.com/@tom_84912/byte-queue-limits-the-unauthorized-biography-61adc5730b83">
  <front>
    <title>Byte Queue Limits: the unauthorized biography</title>
    <author initials="T." surname="Herbert">
      <organization></organization>
    </author>
    <date year="2025" month="January"/>
  </front>
</reference>
<reference anchor="Hruby" target="https://blog.linuxplumbersconf.org/2012/wp-content/uploads/2012/08/bql_slide.pdf">
  <front>
    <title>Byte Queue Limits</title>
    <author initials="T." surname="Hrubý">
      <organization></organization>
    </author>
    <date year="2012" month="August"/>
  </front>
</reference>
<reference anchor="MMADAPT" target="https://queue.acm.org/detail.cfm?id=2381998">
  <front>
    <title>Sender-side buffers and the case for multimedia adaptation</title>
    <author initials="A." surname="Erbad">
      <organization></organization>
    </author>
    <author initials="C. B." surname="Krasic">
      <organization></organization>
    </author>
    <date year="2012" month="October"/>
  </front>
  <seriesInfo name="ACM Queue, Volume 10, issue 10" value=""/>
</reference>


<reference anchor='RFC792' target='https://www.rfc-editor.org/info/rfc792'>
  <front>
    <title>Internet Control Message Protocol</title>
    <author fullname='J. Postel' initials='J.' surname='Postel'/>
    <date month='September' year='1981'/>
  </front>
  <seriesInfo name='STD' value='5'/>
  <seriesInfo name='RFC' value='792'/>
  <seriesInfo name='DOI' value='10.17487/RFC0792'/>
</reference>

<reference anchor='RFC3168' target='https://www.rfc-editor.org/info/rfc3168'>
  <front>
    <title>The Addition of Explicit Congestion Notification (ECN) to IP</title>
    <author fullname='K. Ramakrishnan' initials='K.' surname='Ramakrishnan'/>
    <author fullname='S. Floyd' initials='S.' surname='Floyd'/>
    <author fullname='D. Black' initials='D.' surname='Black'/>
    <date month='September' year='2001'/>
    <abstract>
      <t>This memo specifies the incorporation of ECN (Explicit Congestion Notification) to TCP and IP, including ECN's use of two bits in the IP header. [STANDARDS-TRACK]</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='3168'/>
  <seriesInfo name='DOI' value='10.17487/RFC3168'/>
</reference>

<reference anchor='RFC5681' target='https://www.rfc-editor.org/info/rfc5681'>
  <front>
    <title>TCP Congestion Control</title>
    <author fullname='M. Allman' initials='M.' surname='Allman'/>
    <author fullname='V. Paxson' initials='V.' surname='Paxson'/>
    <author fullname='E. Blanton' initials='E.' surname='Blanton'/>
    <date month='September' year='2009'/>
    <abstract>
      <t>This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods. This document obsoletes RFC 2581. [STANDARDS-TRACK]</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='5681'/>
  <seriesInfo name='DOI' value='10.17487/RFC5681'/>
</reference>

<reference anchor='RFC6143' target='https://www.rfc-editor.org/info/rfc6143'>
  <front>
    <title>The Remote Framebuffer Protocol</title>
    <author fullname='T. Richardson' initials='T.' surname='Richardson'/>
    <author fullname='J. Levine' initials='J.' surname='Levine'/>
    <date month='March' year='2011'/>
    <abstract>
      <t>RFB ("remote framebuffer") is a simple protocol for remote access to graphical user interfaces that allows a client to view and control a window system on another computer. Because it works at the framebuffer level, RFB is applicable to all windowing systems and applications. This document describes the protocol used to communicate between an RFB client and RFB server. RFB is the protocol used in VNC. This document is not an Internet Standards Track specification; it is published for informational purposes.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='6143'/>
  <seriesInfo name='DOI' value='10.17487/RFC6143'/>
</reference>

<reference anchor='RFC8033' target='https://www.rfc-editor.org/info/rfc8033'>
  <front>
    <title>Proportional Integral Controller Enhanced (PIE): A Lightweight Control Scheme to Address the Bufferbloat Problem</title>
    <author fullname='R. Pan' initials='R.' surname='Pan'/>
    <author fullname='P. Natarajan' initials='P.' surname='Natarajan'/>
    <author fullname='F. Baker' initials='F.' surname='Baker'/>
    <author fullname='G. White' initials='G.' surname='White'/>
    <date month='February' year='2017'/>
    <abstract>
      <t>Bufferbloat is a phenomenon in which excess buffers in the network cause high latency and latency variation. As more and more interactive applications (e.g., voice over IP, real-time video streaming, and financial transactions) run in the Internet, high latency and latency variation degrade application performance. There is a pressing need to design intelligent queue management schemes that can control latency and latency variation, and hence provide desirable quality of service to users.</t>
      <t>This document presents a lightweight active queue management design called "PIE" (Proportional Integral controller Enhanced) that can effectively control the average queuing latency to a target value. Simulation results, theoretical analysis, and Linux testbed results have shown that PIE can ensure low latency and achieve high link utilization under various congestion situations. The design does not require per-packet timestamps, so it incurs very little overhead and is simple enough to implement in both hardware and software.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='8033'/>
  <seriesInfo name='DOI' value='10.17487/RFC8033'/>
</reference>

<reference anchor='RFC8290' target='https://www.rfc-editor.org/info/rfc8290'>
  <front>
    <title>The Flow Queue CoDel Packet Scheduler and Active Queue Management Algorithm</title>
    <author fullname='T. Hoeiland-Joergensen' initials='T.' surname='Hoeiland-Joergensen'/>
    <author fullname='P. McKenney' initials='P.' surname='McKenney'/>
    <author fullname='D. Taht' initials='D.' surname='Taht'/>
    <author fullname='J. Gettys' initials='J.' surname='Gettys'/>
    <author fullname='E. Dumazet' initials='E.' surname='Dumazet'/>
    <date month='January' year='2018'/>
    <abstract>
      <t>This memo presents the FQ-CoDel hybrid packet scheduler and Active Queue Management (AQM) algorithm, a powerful tool for fighting bufferbloat and reducing latency.</t>
      <t>FQ-CoDel mixes packets from multiple flows and reduces the impact of head-of-line blocking from bursty traffic. It provides isolation for low-rate traffic such as DNS, web, and videoconferencing traffic. It improves utilisation across the networking fabric, especially for bidirectional traffic, by keeping queue lengths short, and it can be implemented in a memory- and CPU-efficient fashion across a wide range of hardware.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='8290'/>
  <seriesInfo name='DOI' value='10.17487/RFC8290'/>
</reference>

<reference anchor='RFC9000' target='https://www.rfc-editor.org/info/rfc9000'>
  <front>
    <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
    <author fullname='J. Iyengar' initials='J.' role='editor' surname='Iyengar'/>
    <author fullname='M. Thomson' initials='M.' role='editor' surname='Thomson'/>
    <date month='May' year='2021'/>
    <abstract>
      <t>This document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='9000'/>
  <seriesInfo name='DOI' value='10.17487/RFC9000'/>
</reference>

<reference anchor='RFC9330' target='https://www.rfc-editor.org/info/rfc9330'>
  <front>
    <title>Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture</title>
    <author fullname='B. Briscoe' initials='B.' role='editor' surname='Briscoe'/>
    <author fullname='K. De Schepper' initials='K.' surname='De Schepper'/>
    <author fullname='M. Bagnulo' initials='M.' surname='Bagnulo'/>
    <author fullname='G. White' initials='G.' surname='White'/>
    <date month='January' year='2023'/>
    <abstract>
      <t>This document describes the L4S architecture, which enables Internet applications to achieve low queuing latency, low congestion loss, and scalable throughput control. L4S is based on the insight that the root cause of queuing delay is in the capacity-seeking congestion controllers of senders, not in the queue itself. With the L4S architecture, all Internet applications could (but do not have to) transition away from congestion control algorithms that cause substantial queuing delay and instead adopt a new class of congestion controls that can seek capacity with very little queuing. These are aided by a modified form of Explicit Congestion Notification (ECN) from the network. With this new architecture, applications can have both low latency and high throughput.</t>
      <t>The architecture primarily concerns incremental deployment. It defines mechanisms that allow the new class of L4S congestion controls to coexist with 'Classic' congestion controls in a shared network. The aim is for L4S latency and throughput to be usually much better (and rarely worse) while typically not impacting Classic performance.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='9330'/>
  <seriesInfo name='DOI' value='10.17487/RFC9330'/>
</reference>

<reference anchor='RFC9369' target='https://www.rfc-editor.org/info/rfc9369'>
  <front>
    <title>QUIC Version 2</title>
    <author fullname='M. Duke' initials='M.' surname='Duke'/>
    <date month='May' year='2023'/>
    <abstract>
      <t>This document specifies QUIC version 2, which is identical to QUIC version 1 except for some trivial details. Its purpose is to combat various ossification vectors and exercise the version negotiation framework. It also serves as a template for the minimum changes in any future version of QUIC.</t>
      <t>Note that "version 2" is an informal name for this proposal that indicates it is the second version of QUIC to be published as a Standards Track document. The protocol specified here uses a version number other than 2 in the wire image, in order to minimize ossification risks.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='9369'/>
  <seriesInfo name='DOI' value='10.17487/RFC9369'/>
</reference>

<reference anchor='RFC9622' target='https://www.rfc-editor.org/info/rfc9622'>
  <front>
    <title>An Abstract Application Programming Interface (API) for Transport Services</title>
    <author fullname='B. Trammell' initials='B.' role='editor' surname='Trammell'/>
    <author fullname='M. Welzl' initials='M.' role='editor' surname='Welzl'/>
    <author fullname='R. Enghardt' initials='R.' surname='Enghardt'/>
    <author fullname='G. Fairhurst' initials='G.' surname='Fairhurst'/>
    <author fullname='M. Kühlewind' initials='M.' surname='Kühlewind'/>
    <author fullname='C. S. Perkins' initials='C. S.' surname='Perkins'/>
    <author fullname='P.S. Tiesel' initials='P.S.' surname='Tiesel'/>
    <author fullname='T. Pauly' initials='T.' surname='Pauly'/>
    <date month='January' year='2025'/>
    <abstract>
      <t>This document describes an abstract Application Programming Interface (API) to the transport layer that enables the selection of transport protocols and network paths dynamically at runtime. This API enables faster deployment of new protocols and protocol features without requiring changes to the applications. The specified API follows the Transport Services Architecture by providing asynchronous, atomic transmission of Messages. It is intended to replace the BSD Socket API as the common interface to the transport layer, in an environment where endpoints could select from multiple network paths and potential transport protocols.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='9622'/>
  <seriesInfo name='DOI' value='10.17487/RFC9622'/>
</reference>


<reference anchor='RPM' target='https://datatracker.ietf.org/doc/html/draft-ietf-ippm-responsiveness-07'>
   <front>
      <title>Responsiveness under Working Conditions</title>
      <author fullname='Christoph Paasch' initials='C.' surname='Paasch'/>
      <author fullname='Randall Meyer' initials='R.' surname='Meyer'>
         <organization>Apple Inc.</organization>
      </author>
      <author fullname='Stuart Cheshire' initials='S.' surname='Cheshire'>
         <organization>Apple Inc.</organization>
      </author>
      <author fullname='Will Hawkins' initials='W.' surname='Hawkins'>
         <organization>University of Cincinnati</organization>
      </author>
      <date day='7' month='July' year='2025'/>
      <abstract>
	 <t>For many years, a lack of responsiveness, variously called lag,
   latency, or bufferbloat, has been recognized as an unfortunate, but
   common, symptom in today's networks.  Even after a decade of work on
   standardizing technical solutions, it remains a common problem for
   the end users.</t>
	 <t>Everyone "knows" that it is "normal" for a video conference to have
   problems when somebody else at home is watching a 4K movie or
   uploading photos from their phone.  However, there is no technical
   reason for this to be the case.  In fact, various queue management
   solutions have solved the problem.</t>
	 <t>Our network connections continue to suffer from an unacceptable
   amount of delay, not for a lack of technical solutions, but rather a
   lack of awareness of the problem and deployment of its solutions.  We
   believe that creating a tool that measures the problem and matches
   people's everyday experience will create the necessary awareness, and
   result in a demand for solutions.</t>
	 <t>This document specifies the "Responsiveness Test" for measuring
   responsiveness.  It uses common protocols and mechanisms to measure
   user experience specifically when the network is under working
   conditions.  The measurement is expressed as "Round-trips Per Minute"
   (RPM) and should be included with goodput (up and down) and idle
   latency as critical indicators of network quality.</t>
      </abstract>
   </front>
   <seriesInfo name='Internet-Draft' value='draft-ietf-ippm-responsiveness-07'/>
   
</reference>


<reference anchor="TCPFR" target="http://stuartcheshire.org/papers/Ruckus-WiFi-Evaluation.pdf">
  <front>
    <title>Ruckus WiFi Evaluation</title>
    <author initials="S." surname="Cheshire">
      <organization></organization>
    </author>
    <date year="2006" month="April"/>
  </front>
</reference>
<reference anchor="THJ" target="https://www.ietf.org/proceedings/86/slides/slides-86-iccrg-0.pdf">
  <front>
    <title>The State of the Art in Bufferbloat Testing and Reduction on Linux</title>
    <author initials="T." surname="Høiland-Jørgensen">
      <organization></organization>
    </author>
    <date year="2013" month="March"/>
  </front>
</reference>



<reference anchor='UDPECN' target='https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-udp-ecn-04'>
   <front>
      <title>Configuring UDP Sockets for ECN for Common Platforms</title>
      <author fullname='Martin Duke' initials='M.' surname='Duke'>
         <organization>Google</organization>
      </author>
      <date day='20' month='October' year='2025'/>
      <abstract>
	 <t>Explicit Congestion Notification (ECN) applies to all transport
   protocols in principle.  However, it had limited deployment for UDP
   until QUIC became widely adopted.  As a result, documentation of UDP
   socket APIs for ECN on various platforms is sparse.  This document
   records the results of experimenting with these APIs in order to get
   ECN working on UDP for Chromium on Apple, Linux, and Windows
   platforms.</t>
      </abstract>
   </front>
   <seriesInfo name='Internet-Draft' value='draft-ietf-tsvwg-udp-ecn-04'/>
   
</reference>




    </references>


<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>This document has benefited from input and suggestions from:
Chris Box,
Morten Brørup,
Neal Cardwell,
Yuchung Cheng,
Eric Dumazet,
Jonathan Lennox,
Sebastian Moeller,
Yoshifumi Nishida,
Christoph Paasch,
Kevin Smith,
Ian Swett,
Michael Welzl,
all who joined the side meeting at IETF 121 in Dublin (November 2024),
and others [please don’t be shy about reminding me if I somehow missed your name].</t>

</section>


  </back>

<!-- ##markdown-source:
H4sIACbH9mgAA8V925IbR5refT5FWQpHdDMAsJsUKYrj2d3mQWJrRLKX3Rp5
vbM7UQAKQA0LKEwdGsTIdMxj2BHrCO9b7I1vPOEXmSfx//2HzKxCoUntjRUx
w240Kivzz/98HI/HrsmbInuafHFdttUsS561i0VWJa/TTbrM1tmm+cKl02mV
3d75lVnaZMuy2j9N8s2idG5ezjbpmpadV+miGc9WWb3Kq2xcT9fjs4eubqfr
vK7zctPstxkemmfbjP5v07hNu55m1VM3pyWfJg/OHjwan5+NH5w52sFD92WS
Vln6NPnp8oZ+3pXV+2VVttunyc31b3/6Lrmp0k29LasmSTfz5DqrbvNZVic/
0ffyzTL5Dt9177M9PTh/6pKxnmValGmDX3+gl25me/z4LqOFNnV+m22yunb0
T5s9pXcm+sKfvsMvsv/u+kmyTvPiaUKH/bs8axaTslriu2k1Wz1NVk2zrZ/e
v599SNfbIpvMyvV9WovWzZtVOwWYmzatmucKs/uHIPyCvl3QTuuGvm3rdZ+a
yGqTvBx4fuCjyapZF184l7bNqqwAGnpHkizaopCLlOUTW5//SudKN/mf0oYu
8mlysaXjJJeb2YT/mAkQ7CV/l271tM5tympND90SPB3wJfyWJM9wFedPeQm/
Gfw3JiypnybfT5LvsqbZ1/ypIe9Pq7LOklXZ0v/ndVIukmWR1nU2StZt3SSb
skmaVVXukrop6TqTFAhSNqus+oLXEWR7kc0yIB9h3fmZrJ9Wy6wJl7bkV0+A
PtuK0IJvD9++f/7g/tnj+zvsY8z7GOf1uFyMZR9jbGNMbxzzNsayjXHajHUb
9+3oDz736PbpbybJm3y2KosuRCLEpoOl1XtF9ZqeIljgopqs2mRNDIA35a0H
wDl/XmdVntW4JLrg56+Tv2+zlqD627Jo11nyzYigXbdZot/ug+uP+PYkna1B
AvfnWUMoMZkt1n+bz3/94Ozr8yffPLRzP/z/ee7v0w1h9x7HfnBw7Oflet1u
8hmjOeMWliFgeDA8ejRK3jDbSobhMC8CEMr8/vnZ5Pz8q0eEOI8fnn/9eML/
fvOYHn2evs+OA+Jmkrz6y7/lBfG28fd/+Td6x6bONp2vvJgkN+mq6cPvNbHE
ctMB1FVO2I7TPL/4zUu6XJyTcHqVMc+Ti454fHJNZwUEEqLX5FVJx/6OYLdL
93UHku0mAxifHIARHyaXL1++1BtgcKZFcr1fb8s6b9cJrf1DOaOPwLtfZ01V
bssib9JNckE8P3mTNeD3dXLyw8Wb1xdvTgdBnWdZ9mFblMQC8aPCfNbiDPef
fPX1o7OvHtGDL7J1eRzQ15MuozOY/QMJQPA53uGb7ENDiLnJKj6K398QOB4d
EhNzSxIcxXyXzzPa0G1WlFtCoeflhvCVxFA2jEv2xYnnqPdvaYWyvr8t0v39
3W4+wxvvf33+zf2/bfJ19usH5998Q2u9IrrIquYO/CrX9qUuTe0bQ4gf8nXe
0FdBAu1GFsn/lM2TaV4uq3S72g/T1YNHg4dZZ3O6eT7D3zXl+vdPvvqGGOmU
3jdm3jEu+H3ENbNx/Lqxf9348Xk6nz36+uHZ9AmYyauqne7vPOJf/vX//it/
7S//++5Txie5aJcQJJ5B9A9CDGc5KfJNS7jHnKCe0TUy9uGZ+7vtmD5ogIQt
YWc6r+Xzsyf3p38sfl8XdIOT7XxBi79+ffHi4urm+Bku8jXRxMtqms47nz9f
pVVBwu1ZO3uf/KZK63zWOeA1NKxqXAPdpsoUgci4y1lKkhOUvW4L4Mw8T5N0
nm4bxu0YEm9nTaki4pBXHoqI8zMvI4ZF6p0y4uETQl3wknffPv/6G5aN9NPD
88dP9MdHj5+c64+Pz796qD8+OXvof3zwzZn++M3Zmf/x4cPw4+Nv7MfHD+QV
V6+fJpfjFxNob+N8u12Pq642mCQ3z6++fffLOcg7upuWFNL82zx5eZsW7QF8
L7ZVXhB0zx4fgIugVbMaZkoVg2ybEjOo78vKY6w8DisrTt28+v4umnifHRUs
tvEbQpJrwobMC0DSBkmkRvI2uSGNFGowkOpdNm9nzBfB1kEY8SFfQxcGCj0c
xIndbjcxzfn+tipnGSHkZlnff/L4PpNKrf+Mnzwe57NZtRyf6Ul/fHH18vmb
6Pqa+na3HLfz7TibbZwbj8dJOq2bKp01zl2KTrBNibbn2SwlyoA2RqpkWifT
LNtA29/xmXZkeODu6emyhTqZOaK3NWnISUYgmDWsGkwjcKjCsRG5MDJak9Wd
rF6Wc7ZiACXl7JBUdVLkdCk/fHWdNCUR4hzKJj1MqxI4pkW2nrhX5Y4eqEad
d5byzppJPSEmlhWLpIIuTqpLmtT5cpMvSJchQ4tWzVUG65qjZLcilYoPD5W5
Ig2BMH5Oj60JOSpSm8HBgFbuZkV6tglWOjlzkyabrTY5UbQwlmWbz1OSY8xY
SDNuDBYEUFeXiwYw5QPelvnc/pjQ3Sxoj0nN5wLw282GtlLXkCXzjKRcTRYn
6dgkdvZJ9gF/gs4y9Q+kTQQGUomIXsgwIP0GCi7g6/DmaUWMuNgnEKQEEwJB
ks6qEndcFMDUlGTMWu7NDEtHazTljNTO5IRwbQRGMEr+/sfL56Mka2aTU1wB
HoeITpkc6n3dZOt6Isi3zufzInNkEJKgvxVwCrheZIt8w1dSOwd6I0MVyDGv
ky9e/3h988VI/k3evOWf372k1757+QI/X7+6+OEH/4PTb1y/evvjDy/CT+HJ
529fv3755oU8TJ8mnY/cF68v/uELwdgv3l7dXL59c/HDF4LQ8bXr9U0JttDp
SH1s6E4IuYk0Z1U+pV/AIp5f/Z//df5V8vPP/4GY7IPz828+ftRfnpx//RX9
siOtU95Wbug+5Fe6wL2jq8kI8XKB6Szdkj5Y1PTdOqlX5W6TgJ4Isvf+EZD5
p6fJf5rOtudf/Y1+gAN3PjSYdT5kmB1+cvCwAHHgo4HXeGh2Pu9Burvfi3/o
/G5wjz4E1pD2XJXKW5mBwfxMvs/XaiQlOdwoROSZSHalbUfMiQA2SC0E3Y0p
1+mceApxDaIK4WqOCW5cwy6AlW7UInj7889isH/8qD898D89/PhR2QQRHlEO
8RxhdbrBPRMqMZrNuJze5mQz+72CARXYf5mwJANldrgj/UGpWTCStibybuSY
cS7++PtZSRun/akO8PHjKLm6fKkfkH6AD2Bu0Sf45+NHB/wDx+WvQEPAAQKb
vYvPsGOMyNt4PhgP/k6bZgcUfbYmoN7Sj+Mx8a5Nkv2xJYzeJ9AKiVLahilp
nhHHmdPJiXmwXwRWALjnXZcwciJZcg9Az8H3WeDiLvV8vAAsIa4iht7j6EbB
dXTA6OAmMRyxtq1KplUK2ECsNQTQDT2ULOhlHQGWRU+SItKQ5g/BnuzyZkV/
hhcn+5CLGlGrxakH3GS7ZE0CJt3k9Vpfyfg0JbwnLk8nKBPYfqleggHDdFxb
b+S6XGyaLSEdeQs1DFv6a0N2BkzRaTpjD+OGwMdyf7fCna23dGEk8OTIXQfD
Iq9I0tE2ZplsHI94fZs4Jgku0hr6IlR3XqvOQDiAe8f72dPU0oNE8CLR9PRY
jGmjKRk7leJNjhK3TGd5s5dd1Nk2reA07K4ZQZR4eOnmpFbOmu538Hi+GfjL
ROTUIdI47IMk4+9+/+7l1Q8v31xev/rd728uX78kAM/eE1KWW+ZfOCJ9DVRA
El5YSlvj8p9dv0iu+bt6/aTLgGhyYgh4FZ5kz1mkUyRBNuOJi6vLWne4LOku
81pIjKQsnSlhST8moDegNH5CP9JVnIeNgFtclXVMe4nXYiCzaI94ScpqnYMZ
hRcuxUWQMQJDqRBJxw8ATEPLuXW+XBGygWfSElW2qIDVi4osdF0P5/UrumxD
iICPjq1Iy7hdldM2Zqt2856VVTyaFNC9k2xTtsuVCvMM6ldOUB45EAWjPT3K
y5dlQlxpr1bAmj+gW5m9L/aK0azM1eohIIg1ZLjTaz2LxsmBVH2e5uJt5xsW
ROBOUAiw8VpeTHruLGzfFSXtaprRe1h1p+/mpCiQci/sdQ6sj5k0LTJbEQIQ
/2lnK7lYUiLxBbmwFYDBRulcAIQj2XI1A6UP43W6F05A90EIyU8JxoBDtcU8
ofsCqoJ6YXM1WTqfuGfYHnEFZqjRrfLzLbHlIiZn/AUyjfbj/H4qWmiPW5OV
m9HB/VcwqImq1gAQ0e0srzPHQqckCUAgpuugnfI7GXRTll5NwM9yu+VLJE5C
HIcer/tkT8vNinYO1rI3HiAMnACaFurtoxv2HD3wL0JPes8Wvj7S54U0jBU6
cM4dcdQsK+Ry8UQBSSImV87yn6QpCbZtsXf4f9bv8fbwlY70AhapT3MCfaob
1HoWcTfnyOCteC0VT4Q9sGKmcK3QnvPNLKcXEoPqSYS0gF+Ufp4LxyHm41fw
dOCEDjyBAs+UmXuWQaRH6wqmL8hIpR3yeVm8rfY1yyhxoiXQKtLZLNsC7yZ8
Q35tYOg0UksYQ0kuJ+w8IHm9pfXazWxFyE0fb5J31w8ePmDXDsxDBNROfrx4
d3M6cmWlT1as8xvrJmq5hc8ogZQpikyeGt6GMZOwjQVpdlvCnbx+jxeQprLN
Nxt8h2zsOX8+cZdgiKTrkUk7zm5Zi2TJl1ZFkMnAPzJyvYw0EiR7Gw5Lhg+D
mbUq3QD9rcDFigSvxR0GNP/yy+SFiL0uYuBU8t6A0ju+TUbBifsJ1IMLiUhx
R3Iliw+dFxlbi/Hxhbvwn8RuTP76539h/vfXP/9PiDTlKGKfN221SYyaaVFx
5RH4KxcYCL8RuOjZBitpWBS0p49Fe1BGzugLBpTSa1Jzc4Dz1oU4ReZkgR1w
w7r0kJ/5rYp64mR/opaYI0nRFyAR7LU9E/yvy5Kl6jSbpcRLZgWMQV5d7knA
TZAi1K3KDWwISH06PHHGuoRQ3CfsQzZrKOjV9n4BM6HXYmS8eZR84hIcXzZL
Zr6CDujJNGEHakNWFGwCUraBVyJgASy+BhFq+qQ7xHcGv+nwwO6CjWtE7Ic0
BwH1MmMXEDMb1o6cKJuiKO9WONEv2IK63sSMN8Lma1kbZNhXQibSnL1LmyV8
KPF18CucwmfgFfQXfYtsbwCdlHno1dEXRe/jx2q8c+Leinjt7Yh92dvBHdWm
6qioHzkyweymGLN0LWHFIBpYsXA5sVRrt+CbUxgGUMt502v6jcmDuHZN5u5c
bGu7O/9i1lL4uup2K1zyopFltyUxVULDxXE9Ds5esk/4TlxT7UUr06tkUu8L
CpMP/EovIUbDeCQbm2YcL2+aQu6eN0cyjRTNnEFNgn6dvjeFE2wJuE84wOoZ
6yr0uzsRwRxxK1FjOgwrev3IVdm6vGXANZCHeQFOASEKsbQnww2SvaizU8iJ
6T5aNU1+3OQfkpc/wSPz7Ie3z3+TkG4FM6Eyw9edwHHAavzBwZVn1eEwRIDi
pW03pCsC14iSzI0ba3dsreEwZPZCN+Pl7EqwJF9LuoQKD7WHkzcq1jdPiZou
2WqIEiHwcDkl4XvLdgrhD/Quo2uOe+WzmjZZzUlIvYbk6n5oiEVQa0nWjb3d
bio6CIR0aLKDyLBYmxOPCBIhxA4pPr/6kU/8Hf6FnaOqKO0mUllFT/3QyHL6
FqiFbYVI5t7eCKG7JNaM45uVMSOsghvXWzA7SAJVE3lfK3gBN7G/hfec403P
iGzJ/tg07TY+KCAKpBPfgNDCITBepbeCOdt8mxUwCxcAPx+Cbi3LtiNgD6Sk
fMbqzyJfEE7qB0MvwgnkzxP3LaszIMkxiT3WkuYZaXj01xGURXm/Qj7eB73D
dTYi6iJ8UfpyvNo0RRFZhCEp8C0XjmSr+dDDBo+EW44PIjhJBp8X2eZg64jM
3rW1RgLsjofC09Gx2MIDaHZERi5N+L2VrSihIr1rbNHVWWbYRKoWcW9NtwDR
+GASI8CsohUnSjpTAgI2RRINCIYvBCgzJWqEhRQ4kNJm6boIG0lG+or4Ij2I
p7wnDzU5tVj6cvGfeDlkskkiu9F0RVqGKSDyJiGVhrWzzguJQ1xIklAiSo3n
Prh/BJA4vakEWEnRIUC64KoMQqndsFkXqbxq5LZbl6uyXOKOzeglvrOGhjgX
bVXIhqUGrHf6XxCNpmGJV0fIHK6TpTDySVdhgBUA1EIcOk0iJwtRTdHWK7XB
G8RO1WYD/hJAwNw4EGAqIoJfTccrWBISFWL3sX28BgqXAN24rBAT6xj3HAZi
AQ0xACVmA2O9Yg0eKOGV2CmEM8FULbR0Duk9W0XvJt6wIdOwYEKR2xH1tFLz
NES6FLSMiHCoYKHe/eCEevhDvyKUVfkGWSVFuezfL736PaR8rASXO9wdXfIy
v+16ddk1sCiyD7mK23k2w92wY4AACgsUvsF4T6CdkfkIgDlDToJLUhza9Tqt
9tCsE4E/KOIgjLBOP+Trdu2iXYySbWyt1iu7Bj4Z2YEzMtm8ZhlLcmAfAgjw
DNG/dm2kZUVfModsWBePsC8MvnPWep0RdEdPYF0EXJajuuybhLpZjYW5HS68
KWVhkQ4gthDNFPWXScF73eAgZuf8bUpKC+3c9T26EipTliC3qBFjiGsRjLBd
L81p27VeL4rCGz/mjaiTELNLpyAa8cOS4TPg+CUziEmHNIjxqtwyQvscOZVU
pj301AQmO9EO1G8A/CPiHLNjR14mxMJf5wgKvk2oFK8x0eC9WetKchWJHCV+
NQ7tEiUOEqIyjr3mGbSkAZ+3KIpFy3lY/Dj8MQg5cIjcDgePSm3WKhv9Ow6G
coguE77PEApePLHwgzuUbNsSMoHJMOJF5kSAZb7DfdHdwx2OWDSdr04eniW0
cq2mJYlXn2nAhMraO4H4Pb5lrhdSlRpW9BCOc+xPZh9flROvJhGmLEGxQyiv
ITZC9jGrMpq1POILyZmeLXuoXCDCRNTebFfQLCOcIJiLc+infPxtnlywMp1c
wdgBSB+fnSWvp/eJgwpwO9jBwmcF+fHm4oaEKucc8nLLfEkGQpO8BCHQ2Y8/
LrH9dTknS/6uJxE2gw6zlTOb+63StJeHj7BL8ZrysUtiW2khyxOA/hv95xIy
0RHnT/hY0X/nSXL814ePXPJfk+Qqwf/zW/x/33V+6/2Kr/KTK35S/hvbDqLf
er/yMyU9I7/+Df12cWW/4jcAO/r1+evwXZ9Cy4ts/h0vzqJnPvXfP0cQ/Zz/
OP2Ms4r++XNfkXBcx1173Ge8/8R/4HFy40KnQgcBz7IPWyTGEfuoHeK9XiUz
cmbPAEetlK7TqmL+5NjnVR9gqSRZLDiu2H+bBHyNk4FCCPFZKXdDMQRTowO1
4uttheQ7foB53wcPC7LPtqkIhDi8HclFB7ryxj+LR4k2kbpWJxLBptOzbXZg
otfmJ46ZTwdi5dxHfGbRRizEe9I9zalotGQj+GAoc8oUgPFMT8WAv6jBRSMg
nwa3bu/u5OpgHvYe+euf/wcxo1XJSQAcvDBfZOlUEfVv5ZVGTtXc/sel11y7
f2G7iQ6uYBcXAdJDn79Jnr9MTp6TFoS0PLrxlx4j56cJHpFMB2RTckJOlPVw
00OveQXwIT005SSRAwg4VVIBzFkTeVQ41I7A0oi9a+KAHjkLJ9PPajsOoSmr
Ob08KlcOpViY+0WXMfWKsxNcwO1j2KsRLdGX6z4tH9xiXdpFSh5ZJkeAZzyj
N5mGpjdU538SBaZBELeBZnwdf62pkFoMbwmROTSY27KAgsxpB2yQakQrpYum
HbN5io8aRebYniZ4bEgmofSklvInJHnQRhz7n2q4n+hT0hEVciMQt4GETNF2
3nKAfQ0QLjOxDF1s5YuJy0uwB5w1nVwUk02thV0sNM1ARww2n7UFR0xCNhBM
gZZNOtvaBpmE420KHOZdZ5UTc9hCY/OSUQo5fepiIqBx7ExyTOCvXIhpy4B3
C1LlodkFV0kvMZO4vzinvGdJ3l9wcNVJLouGiJkZen4tTpWc7Lt2Wmf0Tqgz
s/ebcldkcwkxOv0y5yCSnbDMZ4lZ2xxG38hpDYvZsveBQEfoly/23deeeMsa
OwKdT8Xnw79fXiXwLWTVqRPD1RKNdhkS6Mqqasl8srXAl9WsbhDNF6tbQvlt
PRLUZJZNr/EeynLjmWid+RyneZmxR4RoewZCygIxGG6PECLgY8YPqmNa4a3I
LOrwPF2nfFRzCBLMiIcLNDXJ1V8U9lpHqr4amxZLTWe0G/lh8IpGIOu0QPjK
Tp1r9gX7UvsgAFL5wIZsxUCQeBDIHxUMFcfW3AUzA8YXk3bwFfhvIculkqKb
PEQxgYd0foeSvERYQ1PlW8laMBtHJSjtCpGyjeYqKzUge9t1cX9CupykVUBb
RxiJTSnekPq3uknD88y8f5OkYw8Q0aUwXCsxwrypxQEJIEvqgIHjoJmzf7sy
VcR0Su+rtC26yJDJ5XoLpGvDqCktfZso6CSbLCdi+oiVgS8QCeED7IuZeKTw
CCDyim2L08QHQ0iFiaxYflzgB1dgmxfMJ9utaAeH9swpvsc+2o5/wjBSjqd+
3w6qZiRgkvA2u60uviVH8I3dUeGaxPFKqHY1LIFGXY4MBoD8CA65eKdDn6GM
SCSkKtWBb9sqXWJrs47YJCpxgfsmnvvmdaT7sIecT1g/hYchKH8OdmpTBYmU
0mO4f4tqEZ4hwk9SZ4RYLEFzpCs4RX51vERyydZikYSnfqxNy6wzz67uzzMl
Wign5plC8EuCP9BEPetnieGiq4COgXxuOJAULXNO/DgQiowCGnwrk7KekXLE
CVIFcr6QeKvUPbiEExZWi2cSekErwUxUNLwPXlfvMio3kuYPteNF0C1Vl1Mf
fF+bS+N4JOuUPZmJHDX2D+6AwgsBAHY1lCYoaWTiSaMP1ttYPYQkMO7OCUhB
TXQSBOgn1cbeuKmqwxqrZCvER1WCBSKWCZz3Ubxwc2iIsJ7NJxfvGksilHV0
XENQiFgV6rhRY2MI95FBz0mrvBDbKezfsvY4/eg5EO66aed7snIlZfLN25vr
l29ufvf7H97+RFb4z18yU8ZXPrJP1SqEzs8FpFzSw6kxOSEgSa4l68uuLtrl
Mq9XXLXCN/A6nSVvr5Nrjp4k1yuOhZjzw5/nt2+eJ++yNZJVvkVoQl2+J+++
fXbqUyzFXkDNFeyFS9QtSJScMCfLEKVnf2RKbJDunJiE+og8BkVVK1EtC0Jd
7PL2B+Ectlsig8zCrlqFgAVj45YhIIFmqdWaRs6/YExKTErSKA4+3qUW7/H1
JW4o73vYBpFym4mWFekWO2QT8lz5VWnHXUksxb24/iHhAOBJ1W4s5ClS/NHZ
WfJ+er8+Nf5hubSc+2eXdP3299dvXjz78dtTTZZiFeb8wZPkfV6UqKwkRvAT
G4m0YrKuRZcYe11ipInOD8MDtBl4GOGUJb6tPPDU8fqt91ojfJNrbGZKW9zl
82Y1FrhupWTBTHsI7IlaXChNwjnPHzyK3ujFkBE/n+DZwJFdyLvakywX+SyL
aFa6ZiAht4PD+Rwn2WeSdmFQZR5Ch4S/7+Txgwlt5hnBWnPFIfkaWU8CsHiB
z6BS44Dp5ttnqti4estQgaTLwIW5SqHReovh63OayJrrdotsMZS7OnHPC84+
Gwnack4Eb4uPxnub7uNXKycV2ycV/cTYgwZTSW4yO4gcV5Z4TTaCpiLlGTR1
Uhc1rUiCnw3TuluR/GLPqAckER+HRthV3jZ5oc0irNpAazzY3EZWgGT+Epva
SwcCBEGYXXayxkF9ysl4f2+vHSdezVkROOSgJPE1I+FwKQ2+iLS1dBO6A2ek
7fNuTpifHd7zKZg93fUMhlqK8453CKWP2cMCJ1QNrDRO5EjMItQVRWbv3ZPY
3b17Gr2zoDpDlVbnnFJE9u9EHnBhzl4T5+Jt7kMu3fCV81Q6wD8aCW1pPC4q
ITjyfdkoH/AP8GL4CI+nvUByCNVaQltkg83dScg+h3VDVjGZatv9XZzEKScx
xiomO+1LFhBc01w3z0rsNKduW7S1yPl798gmKAqCfbiQqDIyjqraDcCoLBEb
0Lj20BJ3PocU7pA2MPIZk/GBang/2K3L4Owmet+Rm+3UDkq5ruKCFBv1TbJ9
QNuU7AcuJNVMB80PhfDcyXWprzabnxLhDAWoD46ktU8iKwGA3eCBtK6hLll1
804dPhSJZE46lZraTxU9OXO65cHark2KS9GCA9emXQpK6JdMXQwcqsuXXq63
OZf+iC4DGbdQdRxKcdZohbOLwXGU6AkQ54+DWHOABNKr4IvBCT0r6bDgSYgo
DGPUIuNEfQQSIJ7csfePkvesTZ+cahL0Di5SMl47bKzL/ntqLBdHHSbQ6NX6
BBq9xIl7q/6dz1ke7EFW9JVLfkEukVH5mjfez3LAgUbOs0tJsGN13x8bJ4Dq
L+ulshLH5LVE0up2NMPYinbCsc3fLF5CdZL6HLc0uA6PHFi1gYYN3r1wyZB4
Z4lAkvU9mAjE1+5jjhbjlZI4djXFlTJxslESJxvpfr28U5Z7N1F7hhzz037+
NlhvKBQKOnJ9yDecObP6VaZIQ6pV/LKHDPwHzlGC2RLK+brkel/DPNU7mNNI
NGhA7ovVeeRadr6CWPiU+5yGLGBxvqULWUDoIsNFol+SMRVVM9KuhvajOUVD
Rh4Uuwyl9uLKZWuK430bXBbK10ZOskY1HsLSKahqzL+9ncSSLa6vlHSQK6t1
e5Ev9ERaac4NGjyrhkqOqiEuarSSGs7wpq27rgIV6lqAuxtOa4Sgn+srQFkw
yMvKUqrwZQE2cID1uUFw4Yrcopy17NRnM5Akv5hbRaoqiWzcFhJuPfDEe/gW
USmzLisor4RUYn981kY0X7l2Eadn78lImXDICenUsHmZy5UPPmfvkMccJtQu
wMHVS3BzcM551iDXjzuZJVC7oz2xXIloju252+z4Hu/d0xBTNicVhn21VqAD
TR9++JFjcyRf4ASmkwTWzUlzpqBolJJ0I5WVQxCFSRrJxURqz1kslhu9FE0X
lHzLmTipxD8HawI9dBKpa4WmYkmIHdmujUdCAwYta/GGB3r0tAxRkrNixwvN
S3BnzS73XJAZ3lLmiOJV4hgWK5XlXOM2cghoNiXRhBWWKu3d4NJxf6RwPWNd
gO/VJ97Aw2Z4MASxCK3F6SH9xoJiIID09rRQYe51LUg3162fgfMhu4Vh+8c2
lXAiCIidAJLaKt6CG3ZK6zfYtiBcy+omdkb3RTsWQrsBlgDE7TkpEkJXciN3
oVplARYjKzhGq+zDKm05Yh00Y/8Im0JmLtBGXRzhAe9kFYLjbKl6/SxxALoJ
0y9/F1KrtqDBg0fqBOBkPtJgT10HPXvqlwSsaIuIxLBh7disXqQ5u9MjR3zc
ucRH79U7T9th2j3cz3e2Fybdz9nM1FLk4APN+GVckztCIMgHG4ZuKt/EPpx+
zfLdIHWh8NajoWakVojlkgrBp/AW2EEi446JcQFAwd/GAePDYi9//VPNVlX3
g9nUQ55hM8mPYKGz1JaAbniNBt/myQnd6JouOFcPChcCrHNUiMsHpyMnTUuV
aP17fBlxvBtQzyzfIvzsNtLN6BfAWdMTDuu9LFiq3sYCER09ClyRI+c/UwyA
V3XNQXBVMwUIElTY+JKpvbcSp3upYJGYS5+UhVcwJ+XyyySwJtcrhOqwKcSh
2UtrgOB32UskgbUfXmTdUzdNXKBixdBvns5/TvJ9mbL3a8vF1rgna44xVOXE
Hp2zybl30O1SDg3EaB1lWKemIMtbWzSDbK2O6nx0dnaG/yUfeEnJsm4RsR1i
52hrwu7wc31KeW3whA9vmfkfwet9lm3pMv7gA9X1Ha9JxUDRkqJ5BDUOF4Hs
ri0uHzodBVe/RtgTuGzqpCN9HOo9gitHrxetCjiaFaK5dnORtzlS4wZeyiVR
qDuQwhr0iDSb9BjWi+fdvDejAX+KnICQaZGhqNWyplGOyG0XysUCigVte7ms
Mgk/yBmQEdqAi6D5gCT9MFajaF0DKEKSJM3SmcjXaVvhetQjIvxZPZhdsY3c
YY4k+wy/2h9Sz28uH4mMumnKTeDitGDbMqs2JFpXHEcxj6DnkYjQkQmI8k+f
BbLJpC5NCO+m43eIIOHTU5hQtcIHLLA2I9lueYcsEpw7hB9Hri6j6Lw/aWLN
PeZwU1WcUKU+zimUDLt4qaRXvZd3gute3HkxEtXPUvmzBTetENKjbfJEM52z
JbcXYl4vUZDzB3HchF4HWHHhR1iQY9tlrc0Xgp8geiUHNmftuvXRnvj1D74K
L0FqHCdnS2+gKOldcnNivbknc4XVLXMv7mP2sSkPeDfCmMeJr27n8wzduACG
uBrBl6gAo7S3DOzNA9luie07RO5RxvR2NktrsVxJVSqY/gZdDFr8c4fegIuW
7JR5XnNmgCi8O1IGWC+LOLwvu2F+oelklvACVzPXKAriVz4Aw1+Vmj71wA/D
ChdGi3mzPFjEEkLmqhDfNrjDjkaOi+bUGx/neoSTw6JSme3MVyVJlgw7kkJ2
QVJkGj8oFwRJ1KXOw9wAvkUvXId8jpFPMugQrOqEEJ60HIgUqgWSf505KjsI
Kek4rKNJpzLvsux90Ukekn7tbqtZc8YktP5lIlXhoSf8lbUHcoPCUiLoFsiW
PkHakU77FAm2+0+w06hPEVACDQPXwmaVv0nfCtNeVXENfYiilkbivA7tSRyw
nX1ieVZHDY8OcbDWtmdoQ6iJvKRToBObNjj9+FG2ftA1qbNnbZj05WDnJuee
hTRXSdsAh0faBueaCbuNO1ppU5faSQwvQMp8RwbVwfepzosnP91IymMSqyeu
G3Vj7NlbtWnEE2PbU+yZ2BggZM/Q7om+Fev9HdP4srGctD+UU6PxAX4aachQ
1ViwGRlJtMRoi0lbSWvclOOwI8fUFLn9A9H8Ilg5TZDyPI09dzVRMWyUOmqB
6Q56WCbSiC1WqbyDMLUso7dHsTTxWKqU1EFFoJ8AN1X/XacJmG7bl8sFwmFY
yUMqCscRFUUVmtCECQEILyu051AHRl5xzb80jlFQRkv6e/LZdlbvP5XiaKlX
s/RS3yLBJydWzR2YoasDxmZBuoit1smwLdKr9lQsgS5aDr45Rvujr+TyAZJG
fL7PNk0vu5ItMsl8WC7yiMAlwSXxPEYDiimzvSxWyKTHI/eT4Nil12dVoeuA
0jdkVf0boSFaGGnNpdabmA8Bnj98FgQYnd8Fo30A5AcA9uft00ZSk0KO3HnO
gK2ZSEv/IbOl+BK0e4n0F1iJT5MzTOt91L4ladL6fX2kVQdSJe2Y8RnJJpv3
Dhl5JuySuVnFL3FB3GjHHV8wajqYlQJ6okDbqKidJ+0/Y/4sQdWFmDEsRVn7
guZTSfo8Ms+lY8IdsV/fN4HVTl/jnLlOKbmofJ3Kq6P04+YkApHKqt2RQisH
fRNpz91qb7aXimIS67QxanFqfeceIBg5PX7DRlocxvI54F7g75UNXb68+TZS
YPxAG/ArFXmKDCLqHz948PGjk8xaQYm//vlfrrm95K0RFWFDTw5yuxEEfBw3
cECDSfPaGPvwKmHUYaORpB2SgygHHhQ8S7DMabYnwenUMOm9+96927zWku97
95yvM7/7jdwxZHjBOJGH1rp3Tx2BtPgd64qCdu8eBNS9eyyv+U2iSF6XKLtp
Qnn5G4jM63appVWkUF5wFiZrtXTJ2VOuUID86uopCJD5NKUetABFIIGP/1pX
PBZbXuXFElqZrilNkToVScw43savDdni0shLMFCCc3bRiGZoFRvXO3HGpjbC
8IU7I1/xtM1KhGq4iZ+ztjp5pb0K0EEk46xl1qJSqQPZRgKeeU66tkwzZQ28
EFKSK04lPlzM2GFeOR838W3Z/Vu05Pkw1tyvZRabni+TtsXHhS1g1/TyP7+6
+PEazZj9PRFsUQNFsD9+l6wr3bGG44gtDqXlXcjX8320u9NmWEWzGAyxSCth
7BtUkq9ND2FvIzVjSaespFGWGxYF6vTWWhdSqXJtwmGJZf59vQK9O97pdYze
O3sytqNyhFeGfbDElZpL6aOIAP3I9RnNr3+d9EHczasyTh8tDB5OOCDdBlaI
LLDZMnBO3l7NnUywFxjWmC3VWEtK3Bx3fEQFoORj+lvjllXBwZpHDaaQWIqR
A5o/2hFx1vYyb1zoJKE3qSVsZp0MIUJURMQWXdz64jttyy85AWL/JddN1c4a
bg/BArjPVBHfBNRqadTH2Qocvooirj4JBLYOoUzu63Rx9YX3hFpLrH4j5goO
ozV3VPD3Fu9Ciw/dHTTX5bYnoHJvJ3QyNQ8owZwqMUYO60fse41CMxeLhsMN
Ubi7j57ckiTK/j7sIcssjVPwU3V36B+ALZwI6nOaQvpr93wce9beeBZ85jN3
M6DuaC0b8gqG8hUOGhmrEyRtopMRhYTuh8RxG+uVYPm9Pqc3hoY/W+/G5Cqi
fopWjMfOS+v4gdgEz67x7p/Ep6HFMIZubDllGkP27ZBQ2QW7IY1T2oPC6IYS
pawKQJKzrZw2pNxh73VTbp0VFEW2CIc3juboapJB5M/yBzqixM4raR4u2hPz
ZBf6sPQvOkJ27YNo9g28mYUOG9l42o08dcSsBt1DA7gQwUJy8+TyrGpGKo2k
UYC5LBkmhzfXo0M3SIfaRy8uL4YqFniJ342k5SThWj4P3XNJt9n4zIdOBTkU
PpHmoYdB/42EwkV94OEMhfK+e/jxXfxKWMf0zvdwWg8dnpTelBN8yhXp2HgZ
KbfloSlpCT8Rwr3UjB+/tNaDhjTiyHkWeani1jGaotmXJrCP2Um/QIcGw3ag
AnNkcWQLMCJnjUewGHQn25InGHTakKGgPdRwoW/pOuPmdhK4kQvwhTJSqJh3
vE1oy/gmxh3pz4z+2TKmNDEF3BFRcZ6v16QRDlBP+qFz9bBmttv+3ict8QyM
dCjlXoLGZ3HIiJMsfMJ3tyeFC34VbONhN5NExLDliLBN/s03Xx9+xwUv/9EC
BHVTEI0XhbE/aSQY2LcmIEt7QnGBHKqdggBwV9MSLtIKuZIU6cvashil0F1Y
hCIISf3iTfSkXrRz6y/HBQcpGlUC7HC0SnXmo7P/qJoqUqe18veTez/MDY6+
jkamLi77Qu1AlM4V2oA9Ojsb+1CfNI91IaALVtQu40QwLZR92H9o4n73jwfm
qfWRBoXBNdNIq9ENGVvrAa6ktthI+4tIzSfXCLE3L/R4Rxd2abAKtWrhF9hV
3I6NLvXSjA5uzUc69e/+KVTMceeawxxFEw+S9uCCzyey5zuiWZGUlNFlyQVh
iqEhTVkruX3/PZ+4xAldukavPlC9AdJgMMoX7at7nHYhHpw6/rKezLO0Ox7W
wPoAs5ZLhEHSt2PT4dxzVqpS7Uopna3Eewa15dQnBgjVoRuXJY6HLBq2pc1M
PP+6kxnltL+nT6UnljcVtJeEdt9wR3eH7SDDq0pPpXM+2zQ1jy+Q6vg6sIGg
3cVV0GRPMSvvyhUtLvG1WIN6Cl3Dg7NealetPQkjvGJSmGcbC5SbWeCjJlGt
wkAirDKNI05/zPliW8pgLzwi7jzq/cxybbN0C+PMM5LcJjpoi9e5plfEvSL6
dNS/tyYsO3BZckM+fYrzG9uKJWpvIYtuhRl1vJSKA9rO59yL9FfrrDs5borO
0oo75tVaHOvEIjVQciWi5I5IGbAAEZ1Q/Dwc31iA7rFSf3PU86LrwuYqE/Mn
dQNI4Vi+gF8KIYc9opwhLVN1+g8kdbrImj26OSHELcjETQxwJjQPOR5HcqX3
QxeaecE03qSc8cV1177q/LOhulSbFu9EIWcMUgGYDjbhxpbDdxCVV3XTSA7H
FP0SGHNd6VEQs5wIDl7Za1OSMqzllyNrJW1g7cBP9SEnA0y8hDnIQrGkVfdc
OjH4VB3hyHZdfc7baXjOSqDkaAgFiuJXlPzKtQ4s9IXi1q2/yy2iwJjyi9K7
S0k6L0tSBFZrHjzG8Zkm0SAlo6OmsPuapEardpVe3AnX/Xa8Wp5tazYhdGhu
yR536D58PrVOqIPPW0tP9KLm0SJpHaV+DjwBh560eYO5VMdtVpHNtt9yHCt6
MjCmbjItRshI2pBMRAA16GKHUTBr7C68whdPQBHSNhNS5aVOcpfb+AsVa5rl
Ooi703a+5Mp0dpz4qUtcLD2wn084ToK9Fhu4Aw4TnRYS3DXSCVfexQUfcC+E
VEfu0gJVfi2BUiR7IytPIa41gZxN2Nfs/I6Lstx6We9ni7i0bcoQi/MVPAE9
K2MKjZZ6arfzOzhLlAdpTVPY6I/Ma1V5+rMTwlLAkqra41UyLCxuBhSqU1jZ
FFXSnaiy3M12jr7IfOl0cpDcDfD6YSh8YxAD8wgAHOdXfjlblZibx12pGfHg
qbyLDXBP1A6Jc+srGasYd64K7+N7VJ4hZHng+erzW2ddqDmiaEVUmqPLwzxr
7p9vMQrNe+lZ+EJslfSG0V6N60yb0kRVZqw110c1cky288mjWVTjuwuVO5yV
o3ldedVrD+hd6BJa1AYGh5knUf6WOjD8aNnN3lpKcbKtdEnqufFY/pkvcrDL
iJby2fCnZ747FoKQ0S4+tY7xVe275Is5JZXNmpfve+2b1zGFuI6JxWE67nUc
cnsj040dBVhBrWNJiZWxAQqL2A/b7TfkLEM7YbSv0UeoZKt1rnnDk07TFRlj
xRkAdqIwTTgYdd2KHJEL/TdpWrJEgETT1bbG2UHH6MsrHaH8WV8/Zh2cwJFD
b0ee1OkvXvBwpkmnS1HknuMgpm/eFBrWsP7hp4uF9mtBc4z0m8Fvwqzrt4e2
qX/cIw5sv96v11lDdOjOud7JOsAl2rfSmx4SY+uYqdJRzjq4HnaalhxzmbXC
fE4ab9VEFQ+12XQXYZIT7kxlksZIpO52sXbK0D+rk90pXwf7JJk80lkT4uya
zM1tK/JaZ35V2tZuPB6gVzb7e7nz3kg8+Iph8UjxdxRn7uVRXxPnswK22rM8
1r07XbxExzR6903pbVy3NO8KFcl9OVfHlawogZRWzfTBmluO+NSan39+VbXT
PdJXb159j39eZdU0q5qPHyFToGh0ENBPHWc0Rnm2p2Ftxq9D7eKTsfoYBYh1
tmA0DDOwEPU8sE7KFlVI9GZfUia2gj4dVgzFf3loQBFcSVdlf8Aro3XgowOU
VbMOb1eFq8+rktAT14Jse3l7u8UYYQm+r7MDypL3nJ+dcaNx1OIyAbIWTN9n
XTrqvUYKDGkV9IcqMYOQs1FqaazYaRhkiSGCv/I22U0Wt0zrNacHW+NKJN99
cRR60/vda+93Uqu8DsreGI6XiARZY5G1zFDAPnvjnE9DH4lIcRmirFjR5Zdw
UeW25PZwR6dJhpCBT+gduEPjyIHgo9da0uYJt2bVWQjcC3fF+XWn8f6DbHbe
YSZmZmQQDk03CLMeWCPJvbPI04e2ELe4lYXQuj0WOcsRQwAnURP0IyfG0C6V
ATyZQ7QNtI6IzH6+/yJSIDwWdLJMlIs633fNmgzTH6QOROsJjYvzhJxVKLd2
A3MOmOt2nlpLYfk2reO2Me7yKqhJYby2yiSd1NM7hzvxoriEFLJUef8qdEDc
LEX00lu40QOhGrGY2eo0iKowNeLYzeo9+UsIWyDylnlFnYBMpE7MD4ZveCh5
9AqpX9IPJxpeag3rYGpEirlprQc98XT3AV3iY3QkL0elxYBr4qkgTLie8E3r
9naO0XDULSHq9sKlST0/k+bXSFqKXOGR7VlPGGkPOdj10p3kk2wykk7soQs7
d1DnqAmaqm+991eCdtaA+dRpyk1RJKFXetSAO+r0ywrOQZ/TbkxlUE0zjUUQ
zpktXB8biBK++5mQDysOQqiXjh/N76BF84Wk43UZjwzo8C8iwReO47glrwzr
2JIQ817HCml4s3AFaopyjp6tdGoMv5zrlv1ifeanl/IHxl5jBJa3agaLN6jX
nKUQLTbMIWV6r85nZU+DGkuw9i6i83YMvqjclUcJDSgM8cSueIpYzXPpPQF3
bUTjrF33QmcMeTdsJJ4TZ5XGd9mE3pXC+XRdEyRT82PwfmvDabUouCUqSPHn
n0OH1I8H0292qRYFHVhakimHW4mQ+OQQUU5jzJb4gId06KpswuSuw5vugj31
x3sg5+Beogo9TECVLIRvG5KxIAsEvCPJbleArjzdMhzB0duyaOkD1j4H7y6a
H9+bURsx2E8Z+NDxi70UqxU6BMGiWkAc64AvZj+6Y+dNWoQUfaTNm+xgMrGO
2OLh6Y++zqUZjA+ENQy2m0FT+t9xwdyAultnzF1UROGxgYacu2rH9F5U6SBf
O53Hh4PxsBfzb1hXZSHDGEUPGix/NrT63bgidhiNPgmwTAZgSafZYQQAhsJ2
wt/rNG57jSJnP6Np54cLiOhUzz4X+6IzA20njCYz/smtskeuJz2n7O2Hdr3T
UX9DbUIkscdF7ig/D8tLSgAeTXs0ry3GJsYS1lH1MFh5swwDHn3BeEQdwsp7
pBXxxJjAwOpVdgRuj8kZItNItLOZ2xk2IOrguiy1Yrg/6QtSJW5xPDqYJf2Z
dKp9F5Hox9SaDFAre0s4FULDkaJGfBpmqe91vfeub440+ZzKxiIYe+ngHgnT
vI6tmMH5xfpW1WpoPxpyHOq8rcO/hCWT3jj9hOrT9bFALeNHjqkYRx7TTFQJ
IUUDQY1qHag2DH3sDHnspaggnziexae0YiPlQvIwesKxTBiYUeiluiR32jQR
7Ye+ajfvfWJXx+VhyaCWghTyZXwkCukyPLhkIe5gfGC9hqSKXMcztZtduhHv
WLEYY55Uzs6yOJGmheOgrJR9vRiaVSh58wdz9EJRqe9O4snUDSubUf/THgOa
D7bFN0p1yiUxEztD/w7pWOMVcYRIvPOqE1Pq+nbK6hOuHSS1oJ4010o50ds0
eSJjWHP0loPSDfr9IyDssiJbptqcMBosEbwLnZ3C6LiPI0bVcqlNTbEUPx6W
FQ3XmMRtaaLRIFaUURMf1Hb82pRVb8AdurfiqJoflVG180wHXctwLHyLeVFF
3DOrhiwS54fl+MBfJEePtGZ/l3FvI649GsQSEjHD6JD6bh9DL3CcuSvnwN2P
bca5YF0wm8MkF0lfNCHT68Ie16D62Kj6JypMPtFmf3ntfFsf39VN9SdQI6aK
aHCqOztEplCo60VCudY2lLNkw4SZuFZnkW+iwQzSoafrV+VQpRXQHxqkDJZU
eseJUsRS4iDgM3Ldv0YaRHSrHJn2ngZsIqvQLMGhY43c4El+6kfrjOg3+vXI
WC5xkdM36CvqIZntVQ9DJ2FTNNSF0A2rdNDMY2YnRHwkcx9txIAk2iRkwEUv
DkiZ0ltl2RxuJi52W0mNsei2TJkEZBzY++IG3FzWkMnPuTHFUXeXatqaTMIy
NmXy0df+3zHATOpcWSDteUPxjgLX2WXCmbohrYGJRZIHrbBAOUwUzffRPanc
YiEWBqG6XWa78Xl7XYPNGKbOCNd0Aksk954+HYGT1FuOujjBEj4U62xG7Xm3
YaoMPRVHUnwCkeDaZZtl3/XN26v7373tNIHpOuF4afXbIRInuXjcnVO+5l2D
oeyj44HtBZmi7r1SdqDdC6NBUL7/f24j8Xwcb5h5Yh2kSnJDT3rPSeB2Gwt5
GQbFI6xKJ+OqdsbRo4Zjm2Gd65QTHmVvtF6zkhHu6vhhNdhKZKMT8Qycw32r
eLcmvE5D3+vU5uP4kj3OiOHkiylsaj9LJvlxk384Mk9G9KGoX0U0rzy3ii2U
Bwi8D1JiOv7aVVr4lDwzF0Jq3zUdvpDJED4Coe0JvMvTJphLfkkwXl278fM7
oyOPkn71klUlcImCLBtqeOSwkvWMWrXKJ/zIDMvPgcLdEEitvxajr9eKJVsu
9P+5PIahtyR+OBy3YOWTc/IPvymlGmLEcVYN+w5EmtfRLMFD0310RP9cibkn
TYDmITtMw/TxQGiONEZJ9aHXgtd1JoOIHI5Xq24tOCVEO7Sr6BbwjjDjfpXt
o8rTLTvpoJxxt4nhE4a11G0ZmSRXUSnMpTdLh2q2QgMC73klMB3dtCgGXCXJ
5qVBiohi4p7tdRgoS/+D68g+5OxX6E2M36nIXaVzF8lj7k+DuBySgxEagKuJ
MTZh3d73LNYPs2Y2OZ3wtPXOsHteOy24YHiV+k4yPo+Q1eTIKEZJYs39uOeJ
DaZeHDpzRLrOcx7bsbF2sOwgu1mFqbY+GpYevT1n814YPiGgNkqInTalpk6N
pLlSFz5OfLjHbotn9WDo+izndEw0eC0ROIf3X7IPBzt2AEwmFKBUIcStLTKk
SQJHuoTd6xjnZxKffR0aAfk6YwNBaMKlURfmpEClCBaicytALOuWvqqwEQay
IKRN2UUE7Vhj2VDPIvB0cABYqqWov7JiKD2GRg3eZ/KKKNVCgdwBKPdrB9sT
ZskVOMQb+p1p0yM7kdIc3ge60eGvoc64Qg4Y5EkHHiVXMXKddKQKa7mi5s+W
XkjE4V0fSOE6M4VSz3ehiU6i9NF2Ch3gFnX5H8j3VhGVse7R6aGlGbWdUkqp
P2ZhZ22ARfnnIUpcsM/harNJ4u1Z/wIYq0XGAqnd+DaqJHQyuMTyRRfife98
Vmm+JukrXE3le9gQRqbIu/ANYgePbH2rMcDrVJ3UtZtWZTpn3q/CWMrOeZxB
xcBGvkyEhlqiwQ4dQ7YunvpOHZ203doN4oY6hWNhHXJDXKcJlEYtJTFo5FNp
/PhCu3zHw9A4bZcOwUnO8IBGLZSnmfQN2BL/xbwaCJ53mTSErVekXT7T8qSb
0Bked/2Cc1TdZf1JAXSUodyBk6HXSmEp1RCcaKoGKTNGfMF3MY571gNxxD2I
aAY7BJ/3ejizjktc3idCI7OS4yQn4lRW5iZJ594fIFOsfbGFuQ2PZk2felwX
r4IO1ZPOOkcLUSbux18EFd8u7FdOJ1pi0oL6tiXYItfHrwNBM9exemyviPk2
z+E7wrm5OB71WVZ33sum/dE3qbCkMpa/vZhirl39+UNJ7LvrGhHfs8lycKGn
jbVa1nBk2vhclOKw4hGTL6JAh59RJ61F0WxkVRaZb1stnbbFi8xA+5XK0qOj
UX7ZGzmpHD3yO1VfB28V2nvWFu+lZRWoJWq5Kape6PSFHF7MrVEqgw6AAtw1
R4NQeZWvJbKCcWwcyRCHZ5dBcUdBS3bXOS25VpqWmzGzjyVS+0URtrdwUr2v
wW03gQFIXQSG9LBFEr8M0QfvUEiXqDvRZq+wMlcpNkjcLsyS4qIeDMGTagpA
pkPbNcZmCStHFhVaTyvzM2dBu51zPgPd5Tb8Wc47HsO/Mc0K7sViJ5MmR75/
dMqdbbt1oggufwsn3LvMUhslS+7R4yfnHz/KsI3AxB34KzsUTSKj23HkDvWf
95yYE9RnKYkCzXVLrLRzEveUHcfa6NkMVocwo2/qvil7NSb8V1SmHrzs9Wd9
z8pYpYDSZR8a9Ji7FWeUyNvkGmX0rB/Umn/t1WmuFnTsBzL1oOMiGkWTQ/BH
mZwl6Wq/LBLNWazg/zLpPt0k3XG4Q21Gw8xZX9YL/Rpmwqhb2RzNuOtEd0vR
yJHZBxW/8wwLRu3TfBiktc59WsuWf8AMWH+VmKdgzjpBH19cfOpSCzk2JYLd
HRdRuDlHa8yK1heiWWOSDYNEU3377nkfxPUplknoRzwAQjYiT50EtkUVB1pF
GC9YdWSPI2fxL/b/JEo6HB6WZofcXEUjZmQcDu1BM/XYBlAPRLpV1VHEiGQ0
S7UP/XSbVvvosFFj56TTzoXowh3beKLt2f0Uz6x7GwcRirwWxyMvny6t1dxA
bSq7BaNpBXfuWOIEPLxqxOODuReCTy4UPjR0cRAEyISBsSF++ETzHVnBLsa8
67v6SvKkocHFdb1cq2PZbMzFG+Xu7FTZqfvibfixNXeO6r7pyed5aVh/TJY4
n6pMLNM61IKnDVbBh7ABT1HbdaqBx+JJ8W3lp9pYbxAwAIpgvBolcfWKk76S
nQ6wvusl61minfLh0CMM0bdDx5IPHEoxg3YJsBx/kLOzot84vNjHWPoeV3K9
9FVkdB2/TYt8LrYdQz0yZ3pR+LhVmSoBbDRx66Ntyw4ECGbrn4pPVzlxjpL+
wQeaeEdUT/aW1E6U3NcQjTCOpZVzGwr02JcOzJFP7lZ2Lg1vtHWHd9IyxnOz
vO7YzOwDdwJC3JXugxT/ve1g7nV84t/cmparHDk6CCfEmnbfSks5u3G6ydlK
Rk6WorWn8OumktzZR6g+B6m1MhBbhe058y0uI/5FesnVa8zYC3r6X//835E2
S/YMu/ZZ8VcrCIL/8vnrq+TlbGWTqkW1+fobdEaNOlrwUB5R8nY5bP150QlM
S54fagtQaz3PrXpnGLBddesmq721Vs+yDUnkMgrL9FI+YLqnhZTdi1cwci4g
42CfzKW4pUil9FdLbkLtjkz+Q4mheC9CfND6Vuj+ZIYAWsyRukF0h4e6HVR8
cg+H1GzUuN8puAtMH9R9YGt+PFhsFh7F5S0aqe69reZ9VnFnwGNOSXYaabiB
IJ/KoBE5zkCck4CKg5st4IcpiFF8t+kbhgYdc2TqPqJhjI4TZJcw1KMEsGCD
duAqBaXBwQ87id1/NdtQ30Oug3f/9c//8i7dJT++uELDUN+BnzMj7LB5jclw
XOHLt26H8abO5rYs0AiKuF2vU2qn++thv3YvUxmC0rbdFwhZ71eEfYvWZuDF
X5bWmMH1HBX0sYxek9JgKZY+w10YSyWHNo1L5qfg+1K0W+NAh5jGWTvyLm7N
r+QhCQ3ah/Ny0W99UXOfnd4bDzzrB2VvQwnNyIjS/NyovrdzCM0R09S4kU+X
6lfLafgySqAV8LEbru1q7SHWJ++sMPGGG+rUkXbOUieYCiwkeIYVJCqOfuK7
858ShmFCAbdSJ/NvLN0bN1EDiMM0e0MmEQM8x6frsucceZ73ILXrDA8MgmHl
mCkpc0syojOdAi66nvdchkyPQa3xTS9SAwDiWKpN54qmnaEa3FiTBHeuZd1w
9IxLmF0+Oi9i8OC43VI3vEdTBGSQNmEV3Byz0J/LO0zdAt0wLJuEhTlr+hI5
6WzPonFR/lU/HmWnvH6F0LOfjhCl5LiQdXYiZHY6WNmhmra5HkJDoeCMYPRU
mGo+cjzgsJvpJ4qx1dx0phTDxRMy6Vhy/6Ky8m4JwYU3ZWGxfiaAYI9HbcKB
p37mtNwp6nb4qozTXtod/38ETd1OBbOagyS9DlRI6aHN0BGgOt3cFZNzJ+zY
DGAeLNShbdBi2upgMPpLVrz0lpcpU/MgRqLZNKhR0Qs4oMMkpkP32XSYSOIH
gxz82eedChtlGmU1ltRQzdIKZ/VyQ7sqCE8l+yDXYU4/f5n5Xz4693ajpJWy
RJFslSAS+VVa5IebM87fpMul5gpIfNOvL6Z33PycTTj2ajgOjSImfayrnbZN
8KPhkLvV7Z+iuYz6csh9jp/EmsiNb/HXSvNxyahA9LVTzHs4OpTlq42vMnvG
R9lCWI03qOkPc3GoOPu6+RZDk7heujDnKEPI+Zt04qzvwFBmwQ0Ax5qQqEPB
YOXiIVcdhLEHOk1boHNggUl3wKDcmeRdsHqaajvHvk4t7+xuLSrkmbG/OUrF
7y8jifFiXqGar1q2bLoxsqguWpN+JylYRV3qAuOxa2upKeMPYo3GXhpS4Owu
uMkeStobnhHpKYjnkavTUBErZKXqFJeoxVjoc7oZ9NDHbnz2I7goOCC5fJqn
wOFUaUbDVbPyqi3nkaIXFtcGa2JaKO60zALJ8Km3fLuGA+F7LpXyksEOX6Fx
ibW185M/OpM60TI0ZWVbc6TohizXQK4NtSEbd0dbntqC/l2HtFO1QSbqSSwM
zuhs1jKRSuMieBLF8Spv8MCC6vMHUeYQQtVZEuqv528hAyknjVrZKfvAOJm5
LKLWlbc58MjNw+T4fkuygwaGNwefhXJ6aU47D3dko3z9YAJ0CGm3Mv9Cwrx6
h4ivm0eKo55wnUt2jCZjxdUY8ITJCvOMY8iwaA0bmAy0V2+362Jv3zIazmN/
h7FK935nadWeqCNRrc2lRCWuS50RaswsVd97mJWqlIm2knllavc8XbO/rkMm
cb1Ol7KjebFTwhnufS9pjdaJxxOSHv37qGmZpzftItwhpvgu2MjiijfkSIcU
D+nGaslxnngUI324TbBEB1nH1xgTPPMfPMY8NlCzi5pNCt4ycA0IyPMNuYEH
16Fz2bV7yZUUokAZuZ+80uziH2CfPtOkSOeuxOLlcllMPNHiBcl7zuaRYW3J
VkS+UBnH5YIjkWNLsJS71vkuzGTYos21b4XzLU3qxHpWWwmW36e2sVMddgal
q7DEcsv+WUP9XGZwo71+ffHi4urGJgHSTrGrro7lC8K9UxZNfHQwJosWBIEA
2RkTR7thEQGm4kfbTtP3rCJyEQeKvjSBQ5yffHZf5s33p/Inzvsq9pwlMSDL
abFbTW7BKdoN63343ZNPpf2aMz8smdS+sslCU8WyJeFEWspyLzap5tGX1ZJU
4j9lTu5G4PwnnYttOfYI3E+LfJk2ZbXvt5KaxCUOrB7QLri4guTLtvadKfnL
teokLkIYdoINYoz4/dGPq8Qg6n00j5MhrHVvyGdkxqD6jk47bbwzy5o52EPB
nSfKV6ftaWLlr63nwpjv10cSKw5gMa2ipTcaVxKNIhoUkvctKRQDZeRGd/VO
tppaBS5081ZlJXphlYWGPZIXYkysXyGDzD0p5ufPl+nW+5kDAXon2B0hc51k
4qN9MMVstlmAydGYoSaad1yiAwGZfjzUd8QX7XCXa0sgdqsXnfjaQbD8J5k3
MnTWXtMcjhWhaypowTsXS27ZDOB2m6jVVkvud3MsGqNJgcAJSAKueXPx7LWm
nKd7zlqyvpJHltK++pzG1tEqxT9qGoupTw5RgmO7Yof3rjz6Kq/CYI4b8QFO
lrZEdUGjXlMkryJ3Qo3diXjM5OcZ+iRFCaJVijz0iPuqVj8rUnGXd3P5+PLS
dp6XfC8DKjdr98xZSYI2kBhcz43l1ZuL0/HcIfHhfQJzzZinrc21FLx3eIkd
3gXPMBp3IyUJkt1uVulACGVQdLlB0QUIZWlVR/WSkeJuRa2dVrDcC7cUrs2+
DC1nd8MSE2Yth882Vp7Zkgktk4+AevmsLUjLeQ3cFBtAqXW/jdP6yVbwEYq0
3vfmgEnURnLYOrVcled/Kno6PXJHXnn4L5dXnKBOSub3Vy+/Y9+7pUTprtRu
4OqjGg38ZPhadyPeoo3a04ea08hK5UkZ0I5hcCl7k6jjPoDFJwy7ULDsa/l1
A2zkbhCbI+7sszvEyuwbmWkMgLjpsm8+6FVKtYZcVED4aZNmFKz9jdyT0zZZ
alsc2BNRH7CuQYFMArs4RnWP40Ao5mTe7uOpxqU7dE6d/Pxz5J36eOqgjmzV
Zt4FBn/4pMcg9qzktYtNe57o3vHCjASAaX2E9gDNNdpkYdqjNTPgajFujhSW
joMW6sgRu0lcYMET2vlmvzWGDGEzjjLY6U1Ug5LuUggIotLIHSlc2VqbXwnl
o7IiqpkXjDtC8drWyRs4jJg8HmeXpe87nJ3vADuTfDDp79M5moQjxcNUsdcd
gOhrGqpX1N5LVotwUDHsYiVD2tkfqBhe54aP1u+r08mhP/OG/QZhEBlAaFha
q9S4S5JyVzNj6P9OOSBBX+bPWRTmuuOVrCa7Ph1DKo0jkRyNxxsaoddjLQvu
xUesJKdTcHjSpxOJuFU7N0243WKoPmUYRZtUt78O7M20MtQaACFGJG0ILeUK
V8zOG+Jc2a4/tEKs+SpUtPfRJrj0ovEA3VQ2v/P++kRMpPN++470v/XgNIl4
thECl0bDtdaK8fX5OkOBmnbKY9IpJdtdiMu8xA5mveiS3DNyrgDW/Ie+CFD2
TixUB3hwbwSCaT6t1J3FLGabyQyEmmtOcd6csy44XxuRxg0cUKUkJUZQkC4t
yaOEDDPsjA0oaW6tuQ04IIc9OM2Xsx3NBohSCH3UVJ6Fa8sG7C7Yh5fR7dt0
DVh3kTuc9Jf3RPqcMXBtVdfPtb2uJvUQ3YaKbF+Hze4mzHTPt8zhQi6nZl5J
a/XeWFxxI2uL/cbCDpZd2enkw3u6vHhzcbCfm06ylIZE+ZvpzOK14/GYLVJO
hejO/nY/P5WwaTb/9ReLtKizLz4OLaqZZFYzl2+suKMOo3n5T0/d8xUpZcmz
8sMIecJw3jyr/vJvVbsduTdInXmOdLKsKEbuH1p0SFkmzwkflyP3sspnyYt2
nf4JxUzfEwmxcfJDRqYrLXadTYnqcvrkdZnBwU4LlPUqX7TrPHmT00/zdCRv
b8rtimRxWs9WI/cbTMpIrolM6ZdLevx6R6YPbY7YdEr6909Z8SfaDNT93apM
/lCyZ4xhD712TRijlj3PiD5/cI5retFOybBPTt5gbATCzg/OHnx1qh4fYFed
/O4fiadxG1oeJcTWd73aq/OKeGJuriMYJpdMUjwsM2e1ag/nCUbk/u6fJu7/
AfnnDvNA4gAA

-->

</rfc>

