
Split-Routing on Debian/Ubuntu

My post on split-routing on OpenWRT has been incredibly popular, and led to many people implementing split-routing, whether or not they had OpenWRT. While it's a fun exercise for the reader, it has also meant helping lots of newbies port that setup to a Debian / Ubuntu environment. To save myself some time, here's how I do it on Debian:

Background, especially for non-South African readers: Bandwidth in South Africa is ridiculously expensive, especially International bandwidth. The point of this exercise is that we can buy "local-only" DSL accounts which only connect to South African networks. E.g. I have an account that gives me 30GB of local traffic / month, for the same cost as a 2.5GB International account. Normally you'd change the username and password on your router to switch accounts when you wanted to do something like a Debian apt upgrade, but that's irritating. There's no reason why you can't have a Linux-based router concurrently connected to both accounts via the same ADSL line.

Firstly, we have a DSL modem. It doesn't matter what it is, it just has to support bridged mode. If it won't work without a DSL account, you can use the Telkom guest account. My recommendation for a modem is to buy a Telkom-branded Billion modem (because Telkom sells everything with really big, chunky, well-surge-protected power supplies).

For the sake of this example, we have the modem (IP 10.0.0.2/24) plugged into eth0 on our server, which is running Debian or Ubuntu - it doesn't really matter which, it's personal preference. The modem has DHCP turned off, and we have our PCs on the same ethernet segment as the modem. Obviously this is all trivial to change.

You need these packages installed:

# aptitude install iproute pppoe wget awk findutils
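
On newer Debian/Ubuntu releases some of those names differ: iproute has become iproute2, awk is a virtual package provided by gawk or mawk, and ppp (for pppd, pon and poff) is worth listing explicitly even though it's usually already installed. A roughly equivalent install line would be:

# aptitude install iproute2 ppp pppoe wget gawk findutils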

You need ppp interfaces for your providers. I created /etc/ppp/peers/intl-dsl:

user intl-account@uber-isp.net
unit 1
pty "/usr/sbin/pppoe -I eth0 -T 80 -m 1452"
noipdefault
defaultroute
hide-password
lcp-echo-interval 20
lcp-echo-failure 3
noauth
persist
maxfail 0
mtu 1492
noaccomp
default-asyncmap

/etc/ppp/peers/local-dsl:

user local-account@uber-isp.net
unit 2
pty "/usr/sbin/pppoe -I eth0 -T 80 -m 1452"
noipdefault
hide-password
lcp-echo-interval 20
lcp-echo-failure 3
connect /bin/true
noauth
persist
maxfail 0
mtu 1492
noaccomp
default-asyncmap

The unit option makes each connection always bind to the same interface: unit 1 gives "ppp1" and unit 2 gives "ppp2". Everything else is pretty standard. Note that only the international connection sets a default route.
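
Before wiring these into /etc/network/interfaces later on, you can bring a peer up by hand to check that authentication works and that it binds to the expected unit (pon, poff and plog all ship with Debian's ppp package):

# pon intl-dsl
# plog
# ip addr show ppp1
# poff intl-dsl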

To /etc/ppp/pap-secrets I added my username and password combinations:

# User                     Host Password
intl-account@uber-isp.net  *    s3cr3t
local-account@uber-isp.net *    passw0rd

You need custom iproute2 routing tables for each interface, for the source routing. This will ensure that incoming connections get responded to out of the correct interface. As your provider only lets you send packets from your assigned IP address, you can't send packets with the international address out of the local interface. We get around that with multiple routing tables. Add these lines to /etc/iproute2/rt_tables:

1       local-dsl
2       intl-dsl

Now for some magic. I create /etc/ppp/ip-up.d/20routing (and make it executable) to set up routes when a connection comes up:

#!/bin/sh -e

case "$PPP_IFACE" in
 "ppp1")
   IFACE="intl-dsl"
   ;;
 "ppp2")
   IFACE="local-dsl"
   ;;
 *)
   exit 0
esac

# Custom routes
if [ -f "/etc/network/routes-$IFACE" ]; then
  cat "/etc/network/routes-$IFACE" | while read route; do
    ip route add "$route" dev "$PPP_IFACE"
  done
fi

# Clean out old rules
ip rule list | grep "lookup $IFACE" | cut -d: -f2 | xargs -L 1 -I xx sh -c "ip rule del xx"

# Source Routing
ip route add "$PPP_REMOTE" dev "$PPP_IFACE" src "$PPP_LOCAL" table "$IFACE"
ip route add default via "$PPP_REMOTE" table "$IFACE"
ip rule add from "$PPP_LOCAL" table "$IFACE"

# Make sure this interface is present in all the custom routing tables:
route=`ip route show dev "$PPP_IFACE" | awk '/scope link  src/ {print $1}'`
awk '/^[0-9]/ {if ($1 > 0 && $1 < 250) print $2}' /etc/iproute2/rt_tables | while read table; do
  ip route add "$route" dev "$PPP_IFACE" table "$table"
done

That script loads routes from /etc/network/routes-intl-dsl and /etc/network/routes-local-dsl. It also sets up source routing so that incoming connections work as expected.
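
Once both links are up, you can sanity-check the result: each custom table should contain a default route via that provider's gateway, and there should be one rule per PPP address:

# ip rule show
# ip route show table intl-dsl
# ip route show table local-dsl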

Now, we need those route files to exist and contain something useful. Create the script /etc/cron.daily/za-routes (and make it executable):

#!/bin/sh -e
ROUTEFILE=/etc/network/routes-local-dsl

wget -q http://mene.za.net/za-routes/latest.txt -O /tmp/zaroutes
size=`stat -c '%s' /tmp/zaroutes`

if [ $size -gt 0 ]; then
  mv /tmp/zaroutes "$ROUTEFILE"
fi

It downloads the routes file from cocooncrash's site (he gets them from local-route-server.is.co.za, aggregates them, and publishes every 6 hours). Run it now to seed that file.
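
Bear in mind that the routes in that file are only applied by the ip-up script, so a freshly seeded (or freshly updated) file takes effect the next time the local connection comes up. As a rough example, you can force that by bouncing the link (or use ifdown/ifup once the interfaces stanzas below are in place):

# /etc/cron.daily/za-routes
# poff local-dsl
# pon local-dsl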

Now some International-only routes. My local account is with IS, so SAIX DNS queries should go out over the SAIX (international) connection even though the servers are local to ZA.

My /etc/network/routes-intl-dsl contains SAIX DNS servers and proxies:

196.25.255.3
196.25.1.9
196.25.1.11
196.43.1.14
196.43.1.11
196.43.34.190
196.43.38.190
196.43.42.190
196.43.45.190
196.43.46.190
196.43.50.190
196.43.53.190
196.43.9.21
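
A quick way to check which link a given destination will use is ip route get: an address from the list above should leave via ppp1 (intl-dsl), while anything covered by the local routes file should leave via ppp2. The second address below is just a placeholder - substitute something covered by a prefix in /etc/network/routes-local-dsl:

# ip route get 196.25.1.11                                 # expect: dev ppp1
# ip route get <an address covered by routes-local-dsl>    # expect: dev ppp2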

Now we can tell /etc/network/interfaces about our connections so that they can get brought up automatically on bootup:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
        address 10.0.0.1
        netmask 255.255.255.0

auto local-dsl
iface local-dsl inet ppp
        provider local-dsl

auto intl-dsl
iface intl-dsl inet ppp
        provider intl-dsl
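
To bring both connections up immediately without a reboot (assuming they aren't already up from earlier testing):

# ifup local-dsl
# ifup intl-dsl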

For DNS, I use dnsmasq, hardcoded to point to IS & SAIX upstreams. My machine's /etc/resolv.conf just points to this dnsmasq.

So something like /etc/resolv.conf:

nameserver 127.0.0.1

/etc/dnsmasq.conf:

no-resolv
# IS:
server=168.210.2.2
server=196.14.239.2
# SAIX:
server=196.43.34.190
server=196.43.46.190
server=196.25.1.11
domain=foobar.lan
dhcp-range=10.0.0.128,10.0.0.254,12h
dhcp-authoritative
no-negcache
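
After editing the configuration, restart dnsmasq and make sure it answers; dig (from the dnsutils package) is handy for a quick test:

# /etc/init.d/dnsmasq restart
# dig @127.0.0.1 debian.org +short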

If you haven't already, you'll need to turn on ip_forward. Add the following to /etc/sysctl.conf and then run sudo sysctl -p:

net.ipv4.ip_forward=1

Finally, you'll need masquerading set up in your firewall. Here is a trivial example firewall: put it in /etc/network/if-up.d/firewall and make it executable. You should probably change it to suit your needs or use something else, but this should work:

#!/bin/sh
if [ "$IFACE" != "eth0" ]; then
  exit 0
fi

iptables -F INPUT
iptables -F FORWARD
iptables -t nat -F POSTROUTING
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth0 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP
iptables -A FORWARD -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o ppp+ -j ACCEPT
iptables -A FORWARD -j DROP
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o ppp+ -j MASQUERADE
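
Since the script only fires when eth0 comes up, you can apply it immediately by setting IFACE yourself, then check that the masquerade rule is in place:

# IFACE=eth0 /etc/network/if-up.d/firewall
# iptables -t nat -L POSTROUTING -n -v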

I'm a Google Reader convert

My blog hasn’t had much to say recently, but now that I’m feeling pressured by University assignments, I think it’s time to get back into one-post-per-day mode :-)

I remember once trying Google Reader, just after it launched, and very quickly deciding that I couldn’t stand it, and I’d stick to Liferea.

Recently, however, Liferea has been giving me trouble. It’s been incredibly unstable, and I’d often forget to run a transparent proxy on my laptop when in restrictive environments, so it’d miss lots of posts and generally be unhappy. The instability I fixed by exporting an OPML list, wiping the configuration, and re-loading, but that was a ball-ache to do. While I was bitching about this, Vhata pushed me to try Google Reader again.

I was pleasantly surprised. It works well, and I didn’t find it oppressive. That doesn’t mean it’s perfect; I’d like to see the following things improved:

  • Duplicate post detection (i.e. planetified & original posts; Liferea does this)
  • Performance
  • Favicons (or something similar, to make it more clear where a post comes from)
  • On that note, maybe configurable colour borders for important feeds?
  • Automatic refreshing (i.e. “r”)
  • More viewable area
  • A key press for opening a post in a new background tab (“v” changes your focus to the new tab, which is against the principles of tabbed browsing)

Some cool things it does that Liferea doesn’t:

  • Clicking on a folder shows you all the posts from the feeds in that folder
  • A “river of posts” view, which lets me get through my reading a lot faster
  • preloading images for posts that I haven’t got to yet (this contributes a fair whack to the reading speed, given the slow interwebs in ZA)
  • Shared items
  • Access from multiple machines (OK, X-forwarding worked, but this is neater)
  • Doesn’t crash (sorry Liferea…)

I’m converted. Google Reader really is good.

/me gets on with reading feeds…

That was *camp

I’m now sitting in Arniston, on a horribly slow GPRS connection, after *camp, which was this weekend, at AIMS. It was a BarCamp-like “unconference”, organised by the geekdinner crowd. I put off having the weekend at Arniston for *camp, and for me, I think that was worth it.

The event was really good. I haven’t been very involved in the organising, and didn’t come prepared with a talk (just equipment). At the start, it felt like there were never going to be enough talks to keep us going, but as soon as it started, it began rolling, and continued for 2 days. The talks were varied, from technical, to psychological, to practical. I was really impressed. The quality of the talks was quite high - I was rarely bored (although I did have IRC distractions).

As usual, I had Jonathan Carter’s camera, and videoed everything. I’m going to go home to around 8 hours of video that needs editing, synchronizing, encoding, and uploading to archive.org. It’ll take a while, guys, be patient.

Today, I got involved with setting up the lab for practical demos. We had 9 PCs on loan, and needed Ubuntu on them. Of course, the natural approach is netinstall - I’m familiar with netinstalling Ubuntu, and it is a great way to set up a pile of computers. However, we ran into problem after problem.

  1. We were using dnsmasq (on my laptop) for DHCP and TFTP, but it wasn’t the router. So I set the router DHCP option. This seemed to break dnsmasq - PCs stopped accepting leases and DHCPDECLINED them. I’ve never seen that before. So I had to route through my laptop - no biggie.
  2. AIMS is behind a 400kbps connection, and while they have an apt-cacher, it seemed badly seeded, and it looked like it was going to take us hours to install, so I went to my car and collected a set of Ubuntu archive DVDs that I happened to have on hand, and loaded them via a cluster of laptops and rsync ;-)
  3. Of course those DVDs didn’t have udebs on them (the debian-installer bits and pieces), so I had to quickly write a script to download all the udebs, and their necessary support structure.
  4. Now the machines netboot installed really fast, but at the very end of the install, it failed, due to some package signature problem.
  5. I ran debmirror, to ensure that my mirror was up to date, and it was. I ran the md5 sum checks, and they passed. I have no idea what the problem was.
  6. Eventually, the lab was installed with 3 install CDs, and then clubbed into shape with clusterssh. 5hrs or so after starting - what a waste of time, we should have started with CDs…

So, lessons for next time: test your netboot setup in advance, and don’t assume that a mirror will be in working shape. We should have set up the lab on day one, for use on day 2.

The upshot of this is that I didn’t see any talks today (except for a practical in the lab on Scribus, once it was up). I’ll have to watch the videos later.

Now, I’m going to enjoy a few days in Arniston, and then come home to graduate.

GeekDinner meets Linkedin

I’ve finally jumped on the Linkedin bandwagon. Amongst other things, I’ve added a GeekDinner Group.

Uncapped Local access

We’ve read that Telkom is implementing uncapped local access, as mandated by ICASA. The regulation states “local bandwidth shall not be subject to the cap”, but nobody seriously thinks Telkom will follow this to the letter. There is a huge market in inter-office VPNs over ADSL, and Telkom don’t want to lose out on that revenue stream.

Currently the savvy users out there use hacks like mine to least-cost-route local traffic over cheaper IS “Local-Only” accounts (like these). Hell, even ISPs route their clients’ local traffic over the IS Local-Only accounts.

From what I’ve heard from the friendly frogs, Telkom are really just going to keep it simple, and implement the equivalent of IS DSL accounts, where after you get capped, you get another, local-only cap. This can be implemented with RADIUS alone, and will (to some extent) prevent the service from being abused by everybody.

So yes, we all still need our separate IS Local-Only accounts, and do our own LCR.

Anybody who thinks Telkom is doing any good for South Africa, go and sit in a corner now!
