[TriLUG] Tuning WAN links

Shawn Hood shawnlhood at gmail.com
Wed Oct 31 10:32:11 EDT 2007


It's not so much load balancing that we want to do.  We really want to
figure out why we can't get above ~41Mbps, and why we have to use a
massive window size and other configuration changes to utilize this pipe.

I do understand that it is a high-latency connection, and that things need
to be done to adequately utilize this pipe.  Nonetheless, it is a gigabit
connection and we can hardly push 41Mbps.  It has been tested by AboveNet
twice.  Of course, I know there's not much that can be said without seeing
specifics, but I'm hoping to get pointed toward some good documentation,
books, tips, etc.
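
For context, here's the back-of-the-envelope bandwidth-delay product
math as I understand it (assuming something like a 40ms round trip
between Bethesda and Dallas -- I haven't measured the actual RTT, so
treat the numbers as illustrative):

    BDP at 1Gbit/s x 0.040s          = ~5MB that must be in flight
    default 64KB window / 0.040s RTT = ~13Mbit/s ceiling
    ~41Mbps observed                 => effective window of only ~200KB

So unless the window (and the kernel buffers backing it) gets within
shouting distance of that ~5MB figure, a single stream can't fill the
link no matter how fat the pipe is.  A rough sketch of the sysctl and
iperf settings we've been playing with is below the quoted thread.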

Shawn

On 10/31/07, OlsonE at aosa.army.mil <OlsonE at aosa.army.mil> wrote:
>
> For load balancing -- FatPipe is also excellent!
>
> -----Original Message-----
> From: trilug-bounces at trilug.org [mailto:trilug-bounces at trilug.org] On
> Behalf Of Greg Brown
> Sent: Wednesday, October 31, 2007 8:27 AM
> To: Triangle Linux Users Group General Discussion
> Subject: Re: [TriLUG] Tuning WAN links
>
> If your company is dropping that kind of coin you might want to look
> into clustering servers behind something like a Big IP F5 load balancer
> on the far end.  For tuning the WAN link itself I have had great success
> using Packeteer devices.
>
> http://www.f5.com/
> http://www.packeteer.com/
>
> On 10/30/07, Shawn Hood <shawnlhood at gmail.com> wrote:
> >
> > Hey guys...
> >
> > I've recently had a dedicated gigabit fiber WAN link that runs between
> > Rackspace in Dallas and an office in Bethesda, MD dropped in my lap.
> > It's not often (read: ever) that I'm given a high-bandwidth,
> > high-latency link to tune.
> >
> > Here are the basics:
> >
> > Office in Bethesda
> > Catalyst 3560
> >     |
> > AboveNet POP - Vienna, VA
> > Catalyst 6509
> >     |
> > AboveNet IP/MPLS Backbone
> >     |
> > AboveNet POP - Dallas, TX
> > Catalyst 6509
> >     |
> > Rackspace - Dallas, TX
> > Catalyst 3560
> >
> >
> > I've run iperf between two RHEL4 boxes connected to the 3560s.  The
> > most throughput I've been able to get is ~45Mbit by increasing the
> > buffer sizes in /etc/sysctl and using massive window sizes on iperf.
> > I was hoping you guys could point me in the right direction.  I need
> > to do some reading about how to get the most out of this link, and
> > any reference would be greatly appreciated.  Will this be a matter of
> > creating a Linux router on each end to shape the traffic destined for
> > this link?  Is this something better suited for proprietary technology
> > that claims to 'auto-tune' these kinds of links?
> > I'm fairly fluent when it comes to talking about this stuff 'in
> > theory,' but have yet to get any hands-on experience.
> >
> > Questions, comments, suggestions?
> >
> > Shawn
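
To put something concrete behind "increasing the buffer sizes in
/etc/sysctl," this is roughly the shape of what we've been trying,
sized for a ~5MB bandwidth-delay product.  The exact values below are
illustrative guesses, not a copy of what's on the boxes:

    # /etc/sysctl.conf -- allow TCP windows up to 8MB (example values)
    net.core.rmem_max = 8388608
    net.core.wmem_max = 8388608
    net.ipv4.tcp_rmem = 4096 87380 8388608
    net.ipv4.tcp_wmem = 4096 65536 8388608
    # window scaling should already be on in RHEL4's 2.6 kernel
    net.ipv4.tcp_window_scaling = 1

and then an iperf run that actually asks for a window that size (the
host name is a placeholder):

    iperf -s -w 8M                      # on the Dallas box
    iperf -c dallas-box -w 8M -t 60     # from the Bethesda box

Even with the buffers raised, a single TCP stream can stall well short
of line rate if there's any loss along the path, so it's probably worth
comparing one stream against several in parallel (iperf -P 4 or so).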



-- 
Shawn Hood
(910) 670-1819 Mobile


