Fun with Squid and CDNs

One neat upgrade in Debian's recent 5.0.0 release [1] was Squid 2.7. In this bandwidth-starved corner of the world, a caching proxy is a nice addition to a network, as it should shave at least 10% off your monthly bandwidth usage. However, the recent rise of CDNs has made many objects that should be highly cacheable, un-cacheable.

For example, a YouTube video has a static ID. The same piece of video will always have the same ID; it'll never be replaced by anything else (except a "sorry, this is no longer available" notice). But it's served from one of many delivery servers: watch it once and it may come from one cache server, watch it again and it may come from a completely different one. And that's not all: the signature parameter is unique (to protect against hot-linking), as are other non-static parameters. Basically, any proxy will probably refuse to cache it (because of all the parameters), and if it did, it'd be a waste of space, because the signature would ensure that no one ever requested that cached item again.

I came across a page on the Squid wiki that addresses a solution to this. Squid 2.7 introduces the concept of a storeurl_rewrite_program, which gets a chance to rewrite any URL before an item is stored in (or looked up from) the cache. Thus we can rewrite our example URL to a normalised form before it becomes a cache key.

We normalise the URL, keeping only the two parameters that matter: the video id, and the itag, which specifies the video quality level.

The Squid wiki page I mentioned includes a sample Perl script to perform this rewrite. It doesn't handle the itag, and my Perl isn't good enough to fix that without making a dog's breakfast of it, so I rewrote it in Python. You can find it at the end of this post. Each line the rewrite program reads contains a concurrency ID, the URL to be rewritten, and some request parameters. We output the concurrency ID and the URL to rewrite to.

The concurrency ID is a way to use a single script to process rewrites from different squid threads in parallel. The documentation on this is almost non-existent, but if you specify a non-zero storeurl_rewrite_concurrency, each request and response will be prepended with a numeric ID. The Perl script concatenated this directly before the rewritten URL, but I separate them with a space. Both seem to work. (Bad documentation sucks.)
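To make the protocol concrete, here's a toy rewrite of a single line, in the same spirit as the script at the end of this post. The URL, the channel number, and the SQUIDINTERNAL cache-key hostname are all invented for illustration:

```python
def rewrite_line(line):
    """Handle one request from squid, in the concurrency-enabled format.

    squid sends:  "<channel-id> <URL> <other request details>"
    we reply:     "<channel-id> <rewritten-URL>"

    The SQUIDINTERNAL hostname is only a cache key; squid never
    actually contacts it.
    """
    channel, url, _rest = line.split(" ", 2)
    if "/videoplayback?" not in url:
        # Not a URL we normalise; echo it back unchanged.
        return "%s %s" % (channel, url)
    # Keep only the id and itag parameters from the query string.
    query = url.split("?", 1)[1]
    params = dict(p.split("=", 1) for p in query.split("&") if "=" in p)
    rewritten = ("http://video-srv.youtube.com.SQUIDINTERNAL/videoplayback"
                 "?id=%s&itag=%s" % (params["id"], params["itag"]))
    return "%s %s" % (channel, rewritten)
```

Whatever the delivery server or signature, the reply is always the same cache key for the same video and quality level.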

All that's left is to tell Squid to use this, and to override the caching rules on these URLs.

storeurl_rewrite_program /usr/local/bin/
storeurl_rewrite_children 1
storeurl_rewrite_concurrency 10

#  The keywords for YouTube video files are "get_video?", "videodownload?" and "videoplayback?id"
#  The "\.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)\?" is only for pictures and other videos
acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|videoplayback\?id) \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)\? \/ads\?
acl store_rewrite_list_web url_regex ^http:\/\/([A-Za-z-]+[0-9]+)*\.[A-Za-z]*\.[A-Za-z]*
acl store_rewrite_list_path urlpath_regex \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)$
acl store_rewrite_list_web_CDN url_regex ^http:\/\/[a-z]+[0-9]\.google\.com doubleclick\.net

# Rewrite youtube URLs
storeurl_access allow store_rewrite_list
# this is not related to youtube video its only for CDN pictures
storeurl_access allow store_rewrite_list_web_CDN
storeurl_access allow store_rewrite_list_web store_rewrite_list_path
storeurl_access deny all

# Default refresh_patterns
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0

# Updates (unrelated to this post, but useful settings to have):
refresh_pattern \.(cab|exe)(\?|$) 518400 100% 518400 reload-into-ims
refresh_pattern (Release|Package(.gz)*)$        0       20%     2880
refresh_pattern \.deb$         518400   100%    518400 override-expire

# Youtube:
refresh_pattern -i (get_video\?|videodownload\?|videoplayback\?) 161280 50000% 525948 override-expire ignore-reload
# Other long-lived items
refresh_pattern -i \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)(\?|$) 161280 3000% 525948 override-expire reload-into-ims

refresh_pattern .               0       20%     4320

# All of the above can cause a redirect loop when the server
# doesn't send a "Cache-control: no-cache" header with a 302 redirect.
# This is a work-around.
minimum_object_size 512 bytes

Done. And it seems to be working relatively well. If only I'd set this up last year when I had pesky house-mates watching youtube all day ;-)

It should of course be noted that doing this instructs your Squid Proxy to break rules. Both override-expire and ignore-reload violate guarantees that the HTTP standards provide the browser and web-server about their communication with each other. They are relatively benign changes, but illegal nonetheless.

And it goes without saying that rewriting the URLs of stored objects could cause some major breakage, by treating different objects (with different URLs) as if they were the same. The provided regexes seem sane enough to avoid this, but YMMV.
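To see how that breakage could happen, here is a simplified, standalone version of one of the CDN patterns used in the script below (hostnames invented): the per-server hostname prefix is thrown away, so two servers sharing a path share a cache entry. That's exactly right when they're mirrors of one CDN, and exactly wrong otherwise:

```python
import re

# One of the CDN-normalisation patterns from the script below:
# drop the leading server name, keep domain and path as the cache key.
cdn_re = re.compile(
    r"^http:\/\/(?:(?:[A-Za-z]+[0-9-.]+)*?)\.(.*?)\.(.*?)\/(.*?)\.(.{3,5})$")

def store_key(url):
    """Return the normalised cache key for a URL (invented example hosts)."""
    m = cdn_re.match(url)
    if m:
        return "http://cdn.%s.%s.SQUIDINTERNAL/%s.%s" % m.groups()
    return url
```

Here `store_key("http://srv1.example.com/banner.jpg")` and `store_key("http://srv2.example.com/banner.jpg")` collapse to the same key, even if srv1 and srv2 serve different banners.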

#!/usr/bin/env python
# vim:et:ts=4:sw=4:

import re
import sys
import urlparse

youtube_getvid_res = [
    # Reconstructed to match the get_video? / videodownload? keywords
    # from the acl above:
    re.compile(r"^http:\/\/(.*?)\/get_video\?"),
    re.compile(r"^http:\/\/(.*?)\/videodownload\?"),
]

youtube_playback_re = re.compile(r"^http:\/\/(.*?)\/videoplayback\?id=(.*?)&(.*?)$")

others = [
    (re.compile(r"^http:\/\/(.*?)\/(ads)\?(?:.*?)$"), "http://%s/%s"),
    # All the yimg.com mirrors map onto one cache-key host:
    (re.compile(r"^http:\/\/(?:.*?)\.yimg\.com\/(?:.*?)\.yimg\.com\/(.*?)\?(?:.*?)$"), "http://cdn.yimg.com.SQUIDINTERNAL/%s"),
    (re.compile(r"^http:\/\/(?:(?:[A-Za-z]+[0-9-.]+)*?)\.(.*?)\.(.*?)\/(.*?)\.(.*?)\?(?:.*?)$"), "http://cdn.%s.%s.SQUIDINTERNAL/%s.%s"),
    (re.compile(r"^http:\/\/(?:(?:[A-Za-z]+[0-9-.]+)*?)\.(.*?)\.(.*?)\/(.*?)\.(.{3,5})$"), "http://cdn.%s.%s.SQUIDINTERNAL/%s.%s"),
    (re.compile(r"^http:\/\/(?:(?:[A-Za-z]+[0-9-.]+)*?)\.(.*?)\.(.*?)\/(.*?)$"), "http://cdn.%s.%s.SQUIDINTERNAL/%s"),
    (re.compile(r"^http:\/\/(.*?)\/(.*?)\.(jp(?:e?g|e|2)|gif|png|tiff?|bmp|ico|flv)\?(?:.*?)$"), "http://%s/%s.%s"),
    (re.compile(r"^http:\/\/(.*?)\/(.*?)\;(?:.*?)$"), "http://%s/%s"),
]

def parse_params(url):
    "Convert a URL's set of GET parameters into a dictionary"
    params = {}
    for param in urlparse.urlsplit(url)[3].split("&"):
        if "=" in param:
            n, p = param.split("=", 1)
            params[n] = p
    return params

while True:
    line = sys.stdin.readline()
    if line == "":
        # EOF
        break
    try:
        channel, url, other = line.split(" ", 2)
        matched = False

        for regex in youtube_getvid_res:
            if regex.match(url):
                params = parse_params(url)
                # The SQUIDINTERNAL hostname is only a cache key; squid
                # never contacts it.
                if "fmt" in params:
                    print channel, "http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=%s&fmt=%s" % (params["video_id"], params["fmt"])
                else:
                    print channel, "http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=%s" % params["video_id"]
                matched = True
                break

        if not matched and youtube_playback_re.match(url):
            params = parse_params(url)
            if "itag" in params:
                print channel, "http://video-srv.youtube.com.SQUIDINTERNAL/videoplayback?id=%s&itag=%s" % (params["id"], params["itag"])
            else:
                print channel, "http://video-srv.youtube.com.SQUIDINTERNAL/videoplayback?id=%s" % params["id"]
            matched = True

        if not matched:
            for regex, pattern in others:
                m = regex.match(url)
                if m:
                    print channel, pattern % m.groups()
                    matched = True
                    break

        if not matched:
            print channel, url

        # squid reads our replies over a pipe, so don't buffer them:
        sys.stdout.flush()

    except Exception:
        # For debugging only. In production we want this to never die.
        print line
        sys.stdout.flush()


  1. Yes, Vhata, Debian released in 2009, I won the bet, you owe me a dinner now. 

The joy that is SysRq

I’m constantly surprised when I come across long-time Linux users who don’t know about SysRq. The Linux Magic System Request Key Hacks are a magic set of commands that you can get the Linux kernel to follow no matter what’s going on (unless it has panicked or totally deadlocked).

Why is this useful? Well, there are many situations where you can’t shut a system down properly, but you need to reboot. Examples:

  • You’ve had a kernel OOPS, which is not quite a panic but there could be memory corruption in the kernel, things are getting pretty weird, and quite honestly you don’t want to be running in that condition for any longer than necessary.
  • You have reason to believe it won’t be able to shut down properly.
  • Your system is almost-locked-up (i.e. the above point)
  • Your UPS has about 10 seconds worth of power left
  • Something is on fire (lp0 possibly?)
  • …Insert other esoteric failure modes here…

In any of those situations, grab a console keyboard, and type Alt+SysRq+s (sync), Alt+SysRq+u (remount read-only), wait for it to have synced, and finally Alt+SysRq+b (reboot NOW!). If you don't have a handy keyboard attached to said machine, or are on another continent, you can echo the command characters into /proc/sysrq-trigger:

# echo u > /proc/sysrq-trigger

In my books, the useful SysRq commands are:

  • f: Call the oom_killer
  • h: Display SysRq help
  • t: Print a kernel stacktrace
  • o: Power off
  • r: Set your keyboard to RAW mode (required after some X breakages)
  • s: Sync all filesystems
  • u: Remount all filesystems read-only
  • 0-9: Change console logging level

In fact, read the rest of the SysRq documentation, print it out, and tape it above your bed. Next time you reach for the reset switch on a Linux box, stop yourself, type the S,U,B sequence, and watch your system come up as if nothing untoward had happened.

Update: I previously recommended U,S,B but after a bit of digging, I think S,U,B may be correct.
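If you're doing this over ssh rather than a console keyboard, the whole S,U,B sequence is just three echoes. A sketch, assuming you're root and SysRq is enabled (kernel.sysrq = 1); the sleeps give sync a chance to finish:

```shell
echo s > /proc/sysrq-trigger   # s: sync all filesystems
sleep 5
echo u > /proc/sysrq-trigger   # u: remount all filesystems read-only
sleep 5
echo b > /proc/sysrq-trigger   # b: reboot immediately, no clean shutdown
```

Needless to say, that last line is the point of no return.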

Ctrl-Alt-shortcuts considered harmful

I’ve written about this before, but Ctrl-Alt-workspace switching key-presses nail me routinely.

Let’s go through some history:

We have Ctrl-Alt-Delete, the “three-fingered-salute”, meaning reboot, right? That combination was designed to NEVER be pressed by accident. And it never used to be.

The X guys needed a kill-X key-press, as things can sometimes get broken in X. So they chose Ctrl-Alt-Backspace, which is also a pretty sensible combination. It’s very similar to Ctrl-Alt-Delete, so we remember it, and backspace has milder connotations than delete, so we understand it to mean that it’ll only kill a part of the system.

X also has some other Ctrl-Alt shortcuts. Some of these are also suitably obscure, e.g. Ctrl-Alt-NumPad+ and Ctrl-Alt-NumPad-. Others, like Ctrl-Alt-F1, mean change to virtual console 1. That one you might press by accident if you are an old WordPerfect user, but it should be safe enough otherwise. They were designed to look like big brothers of (and even work as) the standard VT-changing key-presses.

For changing workspaces, Alt-F1 style key-presses were used, mimicking VT-changing key-presses. This is great for *nix users, but people coming from Windows expect Alt-F4 to close a program, not take them to workspace 4.

So GNOME came along, and decided that instead, they’d use Ctrl-Alt-Arrow key-presses to change workspace. That’s fine, but it’s a pretty common action, so I’m often holding down Ctrl-Alt without even thinking about it. If I start editing something and press delete/backspace, before I’ve released Ctrl-Alt, boom! And I run screaming and write a blog post.

Now, I know that Ctrl-Alt-{Delete,Backspace} can be disabled (even if the latter is a little tricky to do), but I'd really like to change them. I like being able to kill X without resorting to another machine and ssh; I just don't want it to happen by accident. And no, the solution isn't for me to change my workspace-changing keys, because this problem must affect every GNOME user, not just me.

Dangerous key-presses should be really unlikely key-presses. Alt-SysRq- key-presses are good in this regard, they’ll always be unlikely. (Oh, and they are insanely useful.)

Split-Routing on Debian/Ubuntu

My post on split-routing on OpenWRT has been incredibly popular, and led to many people implementing split-routing, whether or not they had OpenWRT. While porting that setup is a fun exercise for the reader, it led to me having to help lots of newbies through adapting it to a Debian / Ubuntu environment. To save myself some time, here's how I do it on Debian:

Background, especially for non-South African readers: Bandwidth in South Africa is ridiculously expensive, especially International bandwidth. The point of this exercise is that we can buy "local-only" DSL accounts which only connect to South African networks. E.g. I have an account that gives me 30GB of local traffic / month, for the same cost as a 2.5GB International traffic account. Normally you'd change the username and password on your router to switch accounts when you wanted to do something like a Debian apt upgrade, but that's irritating. There's no reason why you can't have a Linux-based router concurrently connected to both accounts via the same ADSL line.

Firstly, we have a DSL modem. Doesn't matter what it is, it just has to support bridged mode. If it won't work without a DSL account, you can use the Telkom guest account. My recommendation for a modem is to buy a Telkom-branded Billion modem (because Telkom sells everything with really big chunky, well-surge-protected power supplies).

For the sake of this example, we have the modem (with a static IP) plugged into eth0 on our server, which is running Debian or Ubuntu (doesn't really matter much; personal preference). The modem has DHCP turned off, and we have our PCs on the same ethernet segment as the modem. Obviously this is all trivial to change.

You need these packages installed:

# aptitude install iproute pppoe wget awk findutils

You need ppp interfaces for your providers. I created /etc/ppp/peers/intl-dsl:

unit 1
pty "/usr/sbin/pppoe -I eth0 -T 80 -m 1452"
lcp-echo-interval 20
lcp-echo-failure 3
maxfail 0
mtu 1492
# Only this account provides our default route:
defaultroute

And /etc/ppp/peers/local-dsl:

unit 2
pty "/usr/sbin/pppoe -I eth0 -T 80 -m 1452"
lcp-echo-interval 20
lcp-echo-failure 3
connect /bin/true
maxfail 0
mtu 1492

unit 1 makes the connection always bind to "ppp1". Everything else is pretty standard. Note that only the international connection sets a default route.

To /etc/ppp/pap-secrets I added my username and password combinations:

# User            Host    Password
intl-account      *       s3cr3t
local-account     *       passw0rd

You need custom iproute2 routing tables for each interface, for the source routing. This will ensure that incoming connections get responded to out of the correct interface. As your provider only lets you send packets from your assigned IP address, you can't send packets with the international address out of the local interface. We get around that with multiple routing tables. Add these lines to /etc/iproute2/rt_tables:
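For concreteness, the routing state we're aiming at looks something like this (all addresses invented; the ip-up script in the next section sets this up automatically):

```shell
# Invented addresses: 196.25.0.1 / 196.30.0.1 as the ISP-side gateways,
# 196.25.0.99 / 196.30.0.99 as the addresses each account assigned us.

# Each custom table gets its own default route:
ip route add default via 196.25.0.1 dev ppp1 table intl-dsl
ip route add default via 196.30.0.1 dev ppp2 table local-dsl

# Packets sourced from each assigned address consult the matching table,
# so replies always leave via the interface the request arrived on:
ip rule add from 196.25.0.99 table intl-dsl
ip rule add from 196.30.0.99 table local-dsl
```

Without the `ip rule` lines, replies to incoming connections on the local account would try to leave via the international default route, and be dropped by the provider.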

1       local-dsl
2       intl-dsl

Now for some magic. I create /etc/ppp/ip-up.d/20routing to set up routes when a connection comes up:

#!/bin/sh -e

case "$PPP_IFACE" in
  ppp1)
    IFACE=intl-dsl
    ;;
  ppp2)
    IFACE=local-dsl
    ;;
  *)
    exit 0
    ;;
esac

# Custom routes
if [ -f "/etc/network/routes-$IFACE" ]; then
  cat "/etc/network/routes-$IFACE" | while read route; do
    ip route add "$route" dev "$PPP_IFACE"
  done
fi

# Clean out old rules
ip rule list | grep "lookup $IFACE" | cut -d: -f2 | xargs -L 1 -I xx sh -c "ip rule del xx"

# Source Routing
ip route add "$PPP_REMOTE" dev "$PPP_IFACE" src "$PPP_LOCAL" table "$IFACE"
ip route add default via "$PPP_REMOTE" table "$IFACE"
ip rule add from "$PPP_LOCAL" table "$IFACE"

# Make sure this interface is present in all the custom routing tables:
route=`ip route show dev "$PPP_IFACE" | awk '/scope link  src/ {print $1}'`
awk '/^[0-9]/ {if ($1 > 0 && $1 < 250) print $2}' /etc/iproute2/rt_tables | while read table; do
  ip route add "$route" dev "$PPP_IFACE" table "$table"
done

That script loads routes from /etc/network/routes-intl-dsl and /etc/network/routes-local-dsl. It also sets up source routing so that incoming connections work as expected.

Now, we need those route files to exist and contain something useful. Create the script /etc/cron.daily/za-routes (and make it executable):

#!/bin/sh -e

ROUTEFILE=/etc/network/routes-local-dsl

wget -q -O /tmp/zaroutes
size=`stat -c '%s' /tmp/zaroutes`

if [ $size -gt 0 ]; then
  mv /tmp/zaroutes "$ROUTEFILE"
fi

It downloads the routes file from cocooncrash's site (he aggregates the ZA routes and publishes them every 6 hours). Run it now to seed that file.

Now some International-only routes. I use IS local DSL, so SAIX DNS queries should go through the SAIX connection even though the servers are local to ZA.

My /etc/network/routes-intl-dsl lists the SAIX DNS servers and proxies, one destination per line.

Now we can tell /etc/network/interfaces about our connections so that they can get brought up automatically on bootup:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static

auto local-dsl
iface local-dsl inet ppp
        provider local-dsl

auto intl-dsl
iface intl-dsl inet ppp
        provider intl-dsl

For DNS, I use dnsmasq, hardcoded to point to IS & SAIX upstreams. My machine's /etc/resolv.conf just points to this dnsmasq.

So /etc/resolv.conf just contains:

nameserver 127.0.0.1
If you haven't already, you'll need to turn on ip_forward. Add the following to /etc/sysctl.conf and then run sudo sysctl -p:

net.ipv4.ip_forward=1
Finally, you'll need masquerading set up in your firewall. Here is a trivial example firewall, put it in /etc/network/if-up.d/firewall and make it executable. You should probably change it to suit your needs or use something else, but this should work:

#!/bin/sh -e

if [ "$IFACE" != "eth0" ]; then
  exit 0
fi

iptables -F INPUT
iptables -F FORWARD
iptables -t nat -F POSTROUTING
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth0 -s -j ACCEPT
iptables -A INPUT -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP
iptables -A FORWARD -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o ppp+ -j ACCEPT
iptables -A FORWARD -j DROP
iptables -t nat -A POSTROUTING -s -o ppp+ -j MASQUERADE

Just what is Universally Unique?

I had an interesting discussion with “bonnyrsa” in #ubuntu-za today. He’d re-arranged his partitions with gparted, and copied and pasted his / partition, so that he could move it to the end of the disk.

However, this meant that he now had two partitions with the same UUID. While you can imagine that this is the correct result of a copy & paste operation, it means that your universally unique ID is totally non-unique. Not in your PC, and not even on its home drive.

Ubuntu mounts by UUID, so now how do we know which partition is being mounted?

  • “mount” said /dev/sda2
  • /proc/mounts said /dev/disk/by-uuid/c087bad7-5021-4f65-bb97-e0d3ea9d01a6 which was a symlink to /dev/sda2.

However, neither was correct.

Mounting /dev/sda4 (ro) produced “/dev/sda4 already mounted or /mnt busy”.

Aha, so we must be running from /dev/sda4.

/dev/sda2 mounted fine, but then wouldn’t unmount: “it seems /dev/sda2 is mounted multiple times”.


I got him to reboot, change /dev/sda2's UUID, and reboot again (sucks). Then everything was better.
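For the record, on an ext2/3 filesystem you can assign a fresh random UUID with tune2fs. A sketch, assuming /dev/sda2 is the copied partition and is unmounted (e.g. from a live CD):

```shell
# Give the copy a new random UUID, so it no longer clashes:
tune2fs -U random /dev/sda2

# Verify the change:
blkid /dev/sda2
```

Remember to update /etc/fstab (and your bootloader config) on whichever system mounts that partition by UUID.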

This shouldn't have happened. Non-unique UUIDs are a really crap situation to be in. They bring out bugs in all sorts of unexpected places. I think parted should (by default) change the UUID of a copied partition (although if you are copying an entire disk, it shouldn't).

I’ve filed a bug on Launchpad, let’s see if anyone bites.

PS: All UUIDs in this post have been changed to protect the identity of innocent Ubuntu systems (who aren’t expecting a sudden attack of non-uniqueness).
