This morning I got an unsolicited SMS spam: “Home owners - do u need money? R100,000 @ R752 pm! Reply YES and we'll phone you”. I know that everybody gets things like this and they just shrug them off, but I have a rabid hatred of spammers.
With e-mail spam, there’s normally nothing you can do. The spammers are on the other side of the world, and they’ve used a botnet. But when I get something from South Africans, I act. We have the ECT act protecting us against spam. It’s not the most effective anti-spam legislation, but it’s better than nothing. I’ll send the IOZ Spam Message to the spammers, their ISP, the domain registrants etc etc. Usually I get a response. Usually they remove me from their lists. (If they don’t, their VP of marketing is going to have me harassing him over the phone in short order.) But of course they rarely mend their ways. Sometimes we end up in long e-mail arguments backwards and forwards, them saying “but I’m justified in spamming, because of foo”, me saying “no bloody way, because of bar” etc. It’s ineffectual and depressing, but at least I’m doing something to deter spammers and keep South Africa relatively clean.
But enough about e-mail. It’s time for some tips on dealing with SMS spam. The SMS spamming industry (euphemisms: direct marketing, wireless application service providers) is attempting to regulate itself rather than be regulated by government. They’ve formed WASPA and signed the SMS code of practice. WASPA lets you file complaints against its members and fines them (although the fines are rather paltry).
I heard about them via Jeremy Thurgood’s recent spam-scapades. His spammers were charging R1 to opt-out. While the WASPA code of conduct allows a <=R1 fee, I agree with him that this is intolerable extortion.
In my case, my spammers had broken a few WASPA code of conduct rules:
I looked up the originating number on the SMS Code website. It belongs to Celerity Systems, who are currently under a suspended sentence at WASPA, so my WASPA complaint should force them to fork out a fine. Let’s hope for the best.
I’m against capital punishment, but I wouldn’t mind seeing a few spammers being hanged, drawn and quartered :-)
My post about repositories wasn't just a little attempt to stave off work, it was part of a larger scheme.
I share the ADSL line in my digs with 3 other people. We do split-routing to save money, but we still have to divide the phone bill at the end of the month. Rather than buy a fixed cap, and have a fight over whose fault it was when we got capped, we are running a pay-per-use system (with local use free, subsidised by me). It means you don't have to restrain yourself for the sake of a common cap, but it also means I need to calculate who owes what.
For the first month, I used my old standby, bandwidthd. It uses pcap to count traffic, and gives you totals and graphs. For simplicity of logging, I gave each person a /28 for their machines and configured static DHCP leases. Then bandwidthd totalled up the internet use for each /28.
This was sub-optimal. bandwidthd can either watch the local network, in which case it can't see which packets went out over which link, or it can watch the international link, in which case it can't tell which user is responsible.
I could have installed some netflow utilities at this point, but I wanted to roll my own with the correct Linux approach (ulog) rather than any pcapping. ulogd is the easy ulog solution.
Ulogd can pick up packets that you "-j ULOG" from iptables. It receives them over a netlink interface. You can tell iptables how many bytes of each packet to send, and how many packets to queue up before sending them. E.g. a rule along these lines (the exact values are illustrative):
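```sh
iptables -A INPUT -j ULOG --ulog-nlgroup 1 --ulog-prefix "input" \
         --ulog-cprange 48 --ulog-qthreshold 50
```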
will log the first 48 bytes of any incoming packet to netlink group 1, tag the packets as "input", and send them in batches of 50. 48 bytes is usually enough to catch any data you could want from the headers. If you only need the size, 4 bytes will do; for source and destination addresses as well, 20.
Now, we tell ulogd to listen for this stuff and log it. Ulogd has a pluggable architecture. IPv4 decoding is a plugin, and there are various logging plugins: "-j LOG" emulation, text files, pcap files, MySQL, PostgreSQL, and SQLite. For my purposes, I used MySQL, as the router in question already had MySQL on it (for Cacti); otherwise, I would have opted for SQLite. Be warned that the etch version of ulogd doesn't automatically reconnect to the MySQL server should the connection break for any reason. I backported the lenny version to etch to get around that. (You also need to provide the reconnect and connect_timeout options.)
Besides the reconnection issue, the SQL implementations are quite nice. They have a set schema, and you just need to create a table with the columns in it that you are interested in. No other configuration (beyond connection details) is necessary.
My MySQL table was along these lines, trimmed down to the columns I actually use:
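```sql
CREATE TABLE ulog (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  oob_time_sec INT UNSIGNED,   -- packet timestamp (unix time)
  oob_prefix VARCHAR(32),      -- the --ulog-prefix tag
  ip_saddr INT UNSIGNED,       -- source address
  ip_daddr INT UNSIGNED,       -- destination address
  ip_totlen SMALLINT UNSIGNED  -- total IP packet length
);
```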
My ulogd.conf, give or take (plugin paths as in the Debian package; the credentials are placeholders):
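```ini
[global]
nlgroup=1
logfile="/var/log/ulog/ulogd.log"

plugin="/usr/lib/ulogd/ulogd_BASE.so"
plugin="/usr/lib/ulogd/ulogd_MYSQL.so"

[MYSQL]
host="localhost"
db="ulog"
user="ulog"
pass="secret"
table="ulog"
reconnect=5
connect_timeout=10
```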
The relevant parts of my firewall rules, sketched for one /28 (the interface and addresses are illustrative):
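```sh
# sr's forwarded international traffic, both directions
iptables -A FORWARD -s 10.1.1.16/28 -o ppp0 -j ULOG --ulog-nlgroup 1 \
         --ulog-prefix "sr-f" --ulog-cprange 20 --ulog-qthreshold 50
iptables -A FORWARD -d 10.1.1.16/28 -i ppp0 -j ULOG --ulog-nlgroup 1 \
         --ulog-prefix "sr-f" --ulog-cprange 20 --ulog-qthreshold 50
# sr's proxy traffic, counted as it passes between client and squid
iptables -A INPUT  -s 10.1.1.16/28 -p tcp --dport 3128 -j ULOG --ulog-nlgroup 1 \
         --ulog-prefix "sr-p" --ulog-cprange 20 --ulog-qthreshold 50
iptables -A OUTPUT -d 10.1.1.16/28 -p tcp --sport 3128 -j ULOG --ulog-nlgroup 1 \
         --ulog-prefix "sr-p" --ulog-cprange 20 --ulog-qthreshold 50
# ... and the same again for the fb and gu ranges
```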
So, traffic for my /28 (sr) will be counted as sr-f or sr-p, so I can tally up proxy & forwarded traffic separately. (Yes, I can count traffic with squid too, but doing it all in one place is simpler.) fb is random housemate Foo Bar, and gu is guest (unreserved IP addresses).
You can query this month's usage with, for example:
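```sql
SELECT oob_prefix, SUM(ip_totlen) AS bytes
FROM ulog
WHERE oob_time_sec >= UNIX_TIMESTAMP(DATE_FORMAT(CURDATE(), '%Y-%m-01'))
GROUP BY oob_prefix;
```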
Your table will fill up fast. We are averaging around 200 000 rows per day. So obviously some aggregation is in order. For instance, a daily summary table (a schema of my own, not ulogd's):
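```sql
CREATE TABLE ulog_daily (
  day    DATE NOT NULL,
  prefix VARCHAR(32) NOT NULL,
  bytes  BIGINT UNSIGNED NOT NULL,
  PRIMARY KEY (day, prefix)
);
```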
And every night, run something like:
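```sql
-- roll everything before today up into the summary, then prune the raw table
INSERT INTO ulog_daily (day, prefix, bytes)
  SELECT DATE(FROM_UNIXTIME(oob_time_sec)), oob_prefix, SUM(ip_totlen)
  FROM ulog
  WHERE oob_time_sec < UNIX_TIMESTAMP(CURDATE())
  GROUP BY 1, 2
ON DUPLICATE KEY UPDATE bytes = bytes + VALUES(bytes);
DELETE FROM ulog WHERE oob_time_sec < UNIX_TIMESTAMP(CURDATE());
```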
Finally, I have a simple little PHP script that provides reporting and calculates dues. Done.
Up to now, whenever I've needed a backport or a Debian recompile, I've done it locally. But finally last night, instead of studying for this morning's exam, I decided to do it properly.
The tool for producing a Debian archive tree is reprepro. There are a few howtos out there for it, but none of them quite covered everything I needed, so this is mine. But we'll get to that later; first we need some packages to put up.
For building packages, I decided to do it properly and use pbuilder. Just install it:
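```sh
apt-get install pbuilder
```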
Make the following changes to /etc/pbuilderrc:
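Something along these lines (the mirror and identity here are examples, not mine):

```sh
MIRRORSITE=http://ftp.za.debian.org/debian
DEBEMAIL="Your Name <you@example.com>"
```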
The first points at your local mirror; the second credits you in the packages you build.
Then, as root:
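```sh
# build the base etch chroot (this takes a while)
pbuilder create --distribution etch
```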
Now we can build a package; let's build the hello package:
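```sh
# the version number is illustrative; dget fetches the .dsc plus the files it references
dget http://ftp.debian.org/debian/pool/main/h/hello/hello_2.2-2.dsc
dpkg-source -x hello_2.2-2.dsc
cd hello-2.2
debchange -n
```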
dget and debchange are neat little utilities from devscripts. You can configure them to know your name, e-mail address, etc. If you work with Debian packages a lot, you'll get to know them well. Future versions of debchange support --bpo for backports, but we use -n, which marks a non-maintainer upload. You should edit the version number in the top line to be a backport version, e.g.:
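```
hello (2.2-2~bpo40+1) etch-backports; urgency=low
```

(~bpo40+1 being the usual version convention for etch backports.)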
Now, let's build it. We are only doing a backport, but if you were making any changes, you'd do them before the next stage, and list them in the changelog you just edited:
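```sh
# rebuild the source package with the new changelog, then build it in the chroot
dpkg-buildpackage -rfakeroot -S -us -uc
cd ..
sudo pbuilder build hello_2.2-2~bpo40+1.dsc
```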
Assuming no errors, the built package will be sitting in /var/cache/pbuilder/result/.
Now, for the repository:
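Create a directory for it (mine lives in ~/public_html/debian, but that's just an example), containing a conf/distributions file along these lines, with your own names and key ID:

```
Origin: Backports
Label: Backports
Codename: etch-backports
Version: 4.0
Architectures: i386 amd64 all source
Components: main
Description: Local backports for etch
SignWith: 01234567
NotAutomatic: yes
```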
This file defines your repository. The codename will be the distribution you list in your sources.list. The version should match it. The architectures are the architectures you are going to carry: "all" refers to non-architecture-specific packages, and source to source packages. I added amd64 to mine. SignWith is the ID of the GPG key you are going to use with this repo. I created a new DSA key for the job. NotAutomatic is a good setting for a backports repo: it means that packages won't be installed from here unless explicitly requested (via package=version or -t etch-backports).
Let's start by importing our source package:
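```sh
# -b points reprepro at the repository; the section/priority values are hello's
reprepro -b ~/public_html/debian -S devel -P optional \
         includedsc etch-backports hello_2.2-2~bpo40+1.dsc
```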
(There is currently a known bug in reprepro's command-line handling: -S and -P are swapped.)
Now, let's import our binary package:
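```sh
reprepro -b ~/public_html/debian includedeb etch-backports \
         /var/cache/pbuilder/result/hello_2.2-2~bpo40+1_i386.deb
```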
Reprepro can be automated with its processincoming command, but that's beyond the scope of this howto.
Test your new repository by adding it to your /etc/apt/sources.list:
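```
# substitute your own host and path
deb     http://www.example.com/~you/debian etch-backports main
deb-src http://www.example.com/~you/debian etch-backports main
```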
Enjoy. My backports repository can be found here.
My blog hasn’t had much to say recently, but now that I’m feeling pressured by University assignments, I think it’s time to get back into one-post-per-day mode :-)
I remember once trying Google Reader, just after it launched, and very quickly deciding that I couldn’t stand it and would stick to Liferea.
Recently, however, Liferea has been giving me trouble. It’s been incredibly unstable, and I’d often forget to run a transparent proxy on my laptop when in restrictive environments, so it’d miss lots of posts and generally be unhappy. The instability I fixed by exporting an OPML list, wiping the configuration, and re-loading, but that was a ball-ache to do. While I was bitching about this, Vhata pushed me to try Google Reader again.
I was pleasantly surprised. It works well, and I didn’t find it oppressive. That doesn’t mean it’s perfect; I’d like to see the following things improved:
Some cool things it does that Liferea doesn’t:
I’m converted. Google Reader really is good.
/me gets on with reading feeds…
I’ve (finally) finished encoding the ~6 hours of *camp video. The videos can be found on archive.org. As usual, in 3 qualities.
I’ve probably screwed up at least one of them, so if anyone spots a problem, please let me know soon, before I delete the source material.