RE: [Nolug] LAN speed confusion

From: John Souvestre <johns_at_sstar.com>
Date: Sat, 14 Jul 2012 21:41:20 -0500
Message-ID: <002b01cd6233$5401ca30$fc055e90$@sstar.com>

Hi Jimmy.

Certainly it goes without saying that the window size has to be large enough
to accommodate the speed-latency product, thus avoiding the ACK delays. I
believe that most current operating systems do a decent job of this.
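
For anyone who wants to put numbers to that, here is a quick back-of-the-envelope
sketch in Python (the link speed and RTT figures below are just made-up examples):

# Minimal sketch: the window needed to keep a link full is the
# bandwidth-delay product (link speed x round-trip time).

def required_window_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product, in bytes."""
    return (link_bps / 8.0) * rtt_seconds

# e.g. a 100 Mbit/s path with a 20 ms round-trip time
print(required_window_bytes(100e6, 0.020))   # 250000.0 bytes, about 244 KiB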

I have found two ways to compensate for some of the effects you describe.
One is to be proactive, rather than reactive, about congestion. Cisco's
(W)RED, for example. I've seen this improve matters on a busy circuit on
more than a few occasions.

The other, with regard to FTP, would be to "cheat" and use multiple sessions
for a given file, thus ensuring that you are keeping the physical circuit
saturated.
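
Something along these lines would do it (a rough Python/ftplib sketch; the host,
login, and file name are placeholders, and a real tool would want proper error
handling):

from ftplib import FTP
from concurrent.futures import ThreadPoolExecutor

# Rough sketch of the "multiple sessions" cheat: fetch one file over several
# parallel FTP connections, each starting at a different offset via the REST
# command (ftplib's `rest` argument). Host, login, and path are placeholders.
HOST, USER, PASSWD = "ftp.example.com", "anonymous", "guest"
REMOTE = "pub/bigfile.iso"
SESSIONS = 4

class ChunkDone(Exception):
    """Raised to stop a session once its byte range is complete."""

def fetch_range(offset, length, part_no):
    ftp = FTP(HOST)
    ftp.login(USER, PASSWD)
    remaining = [length]                  # mutable so the callback can update it
    with open("part%d" % part_no, "wb") as out:
        def write(block):
            take = min(len(block), remaining[0])
            out.write(block[:take])
            remaining[0] -= take
            if remaining[0] <= 0:
                raise ChunkDone()         # this slice is finished
        try:
            ftp.retrbinary("RETR " + REMOTE, write, rest=offset)
        except ChunkDone:
            pass
        finally:
            ftp.close()                   # just drop the connection

# Ask the server for the file size and split it into one slice per session.
ctrl = FTP(HOST)
ctrl.login(USER, PASSWD)
size = ctrl.size(REMOTE)
ctrl.quit()

chunk = size // SESSIONS
with ThreadPoolExecutor(SESSIONS) as pool:
    for i in range(SESSIONS):
        length = chunk if i < SESSIONS - 1 else size - chunk * i
        pool.submit(fetch_range, i * chunk, length, i)
# Afterwards, concatenate part0..part3 in order to rebuild the file.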

Is IPv6 any smarter with regard to packet sizes, both initially and when
congested?

Regards,

John

    John Souvestre - New Orleans LA - (504) 454-0899

-----Original Message-----
From: owner-nolug@stoney.kellynet.org
[mailto:owner-nolug@stoney.kellynet.org] On Behalf Of Jimmy Hess
Sent: Saturday, July 14, 2012 7:22 pm
To: nolug@nolug.org
Subject: Re: [Nolug] LAN speed confusion

On 7/14/12, John Souvestre <johns@sstar.com> wrote:
> I thought that due to the streaming nature of the FTP protocol you
> could get 90-95%. Not having to wait for the ACK before sending
> another packet makes a lot of difference, doesn't it?

A simple FTP or HTTP file transfer is a reasonable real-world test.
FTP uses TCP/IP connections. The FTP protocol doesn't have that
little bit of chunked transfer encoding overhead tacked on like HTTP does,
or massive payload size multiplication like SMTP+MIME has.
But TCP still requires frequent acknowledgements.

TCP utilizes a congestion window, maintained by the sender at each end of the
connection, plus a receive window at each end, which is advertised to the peer.
Once a full window's worth of bytes has been transmitted without being
acknowledged, an acknowledgement is required from the receiver before more TCP
segments (or "packets") may be sent.
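
To put that in concrete terms (a toy model, not any particular stack's
implementation):

# Toy model of the send limit: the usable window is the smaller of the
# sender's congestion window (cwnd) and the peer's advertised receive
# window (rwnd); the sender may only transmit while the unacknowledged
# bytes in flight stay below that limit.

def can_send(bytes_in_flight, cwnd, rwnd):
    return bytes_in_flight < min(cwnd, rwnd)

# e.g. 48 KB unacknowledged, cwnd of 64 KB, receiver advertising 32 KB
print(can_send(48000, 64000, 32000))   # False: must wait for an ACK first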

The wait for acknowledgement slows down the transfer; TCP trades throughput for
more aggressive congestion management. UDP-based protocols, such as the
experimental UFTP (which lets you specify the transfer rate to use) or
BitTorrent (which uses less aggressive congestion control), can give better
throughput and higher link utilization, possibly at a cost of latency. The
downside is that when there actually is congestion, TCP in many cases provides
better "fairness" and less degradation to competing users.

FTP, like all TCP connections, utilizes slow start: when a connection is first
established, the congestion window is set to 1 MSS, in other words, one packet
of the maximum segment size. Each host advertises its MSS value during TCP
connection establishment, based on its MTU / MRU, and the MSS is used to
calculate the congestion window.

The absolute minimum value for the MSS is 1220 octets (bytes) for IPv6; for
IPv4 it is 536 bytes.

The MTU value used to calculate the MSS is the maximum size a host can transmit
in a single data link frame (an Ethernet 'packet', for example); the MRU is the
maximum receive unit size. MTU and MRU must be the same for all hosts on the
same Ethernet (or other) datalink, but end-to-end, across networks on different
router interfaces, the MTU can vary. It can even be asymmetric, if the internet
routing between sender and receiver is asymmetric (the data you upload or send
takes a different path through the network than the data you download or
receive).

The hosts participating in FTP, or any other TCP-based protocol, will attempt
to discover the best possible value less than or equal to their configured
value by using automatic path MTU discovery: sending a packet that is too large
results in the sender receiving an ICMP Packet Too Big error (Fragmentation
Needed, in IPv4) from some router on the path to the receiver, and the MSS is
adjusted downwards to enable the file transfer to work.

If a sender uses a value for MSS that is too big, packets will be lost, and
the transfer won't work, because the receiver doesn't get the data. If a
sending host uses a value for MSS that is too small, performance will
suffer, because the packet header bytes will consume a more significant
percentage of the link capacity.

Therefore, the MSS that a host advertises can be a configurable value, but
MSS + header size must not exceed the end-to-end MTU: for IPv4 the MSS would be
the MTU minus 40, and for IPv6 the MTU minus 60.
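
In other words, the arithmetic is just (a tiny sketch, assuming no TCP or IP
options):

# MSS = end-to-end MTU minus the IP and TCP header bytes (no options).
IPV4_OVERHEAD = 20 + 20   # IPv4 header + TCP header
IPV6_OVERHEAD = 40 + 20   # IPv6 header + TCP header

def mss_for(mtu, ipv6=False):
    return mtu - (IPV6_OVERHEAD if ipv6 else IPV4_OVERHEAD)

print(mss_for(1500))              # 1460 for IPv4 on standard Ethernet
print(mss_for(1500, ipv6=True))   # 1440 for IPv6 on standard Ethernet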

But typically, 1 MSS = 1440 bytes.
The larger the value, the more data bytes are transmitted per header, reducing
the protocol header overhead relative to the payload.

Based on slow start, then, every 1440 bytes must be acknowledged at first, in
that example. Not much data can be sent before an acknowledgement is required.

Now, as the FTP transfer progresses, as long as TCP does not encounter
congestion, lost segments, misordered segments, or retransmitted segments
(segments that were not acknowledged by the remote end in time), the congestion
window will be gradually increased in multiples of the MSS. For example, once
the window is increased to 2 MSS, 2880 bytes of data can be sent before an
acknowledgement is required.
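
To give a feel for how quickly the window opens up when nothing goes wrong,
here is a simplified model (real stacks differ in the details; the MSS and
maximum window below are example values only):

# Simplified slow-start model: the congestion window roughly doubles each
# round trip until it reaches the maximum window, assuming no loss.
MSS = 1440
MAX_WINDOW = 64 * 1024

cwnd = MSS
rtt = 0
while cwnd < MAX_WINDOW:
    print("RTT %d: can send %d bytes before an ACK is needed" % (rtt, cwnd))
    cwnd = min(cwnd * 2, MAX_WINDOW)
    rtt += 1
print("RTT %d: window capped at %d bytes" % (rtt, MAX_WINDOW))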

This recalculation process continues until the maximum window size is reached.
There is the original standard for calculating it, plus various newer methods
of adjusting the congestion window that have been proposed, so different TCP
stacks may use different ones, and there are extra standards such as "Fast
Retransmit / Fast Recovery", which are incremental improvements. However:

The throughput of a TCP connection gradually improves the longer the connection
is established, if there is no congestion at the chosen window size; but as
soon as congestion is encountered, by design, the congestion window size is
reduced, normally by 50%, causing a drop in transfer rate.

The reduction in window size will almost always massively overshoot
the reduction required. The throughput will gradually recover,
until there is again congestion.

So what you have under normal conditions is an oscillation in throughput.
And it will become more unpredictable if there is a significant amount of
other traffic on the link.
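
A toy simulation of that sawtooth (purely illustrative numbers; real congestion
control is considerably more involved):

# Toy "sawtooth": the window grows by one MSS per round trip (additive
# increase) until a congestion event halves it (multiplicative decrease).
MSS = 1440
CONGESTION_AT = 40000     # pretend the path congests near this window size

window = 20 * MSS
for rtt in range(30):
    if window >= CONGESTION_AT:
        window //= 2                  # back off by 50% on congestion
        event = "congestion! window halved"
    else:
        window += MSS                 # steady growth while there is no loss
        event = "no loss, window grows"
    print("RTT %2d: window %6d bytes  (%s)" % (rtt, window, event))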

Your gross average will vary depending upon the maximum window size and the TCP
stack features of the sender and receiver.

If the network is working properly, though, at least 50% of the available
end-to-end link capacity should be utilized, and usually somewhere between 80
and 90%, under most normal conditions.

It's possible to have other users of a link with highly bursty loads, that
result in less than 80% utilization.

The 80% number is just a rule of thumb; the utilization is expected to be close
to that value for real world applications... it's not going to be the exact
number, especially not in extreme cases, where it can vary from 50% to 95%.

> Regards,
> John

--
-JH
___________________
Nolug mailing list
nolug@nolug.org
Received on 07/14/12
