My post about repositories wasn't just a little attempt to stave off work, it was part of a larger scheme.
I share the ADSL line in my digs with three other people. We do split-routing to save money, but we still have to divide the phone bill at the end of the month. Rather than buy a fixed cap, and have a fight over whose fault it was when we get capped, we are running a pay-per-use system (with local use free, subsidised by me). It means you don't have to restrain yourself for the common cap, but it also means I need to calculate who owes what.
For the first month, I used my old standby, bandwidthd. It uses pcap to count traffic, and gives you totals and graphs. For simplicity of logging, I gave each person a /28 for their machines and configured static DHCP leases. Then bandwidthd totalled up the internet use for each /28.
This was sub-optimal. bandwidthd can either watch the local network, in which case it can't tell which packets went out over which link, or it can watch the international link, in which case it can't tell which user is responsible.
I could have installed some netflow utilities at this point, but I wanted to roll my own with the correct Linux approach (ulog) rather than any pcapping. ulogd is the easy ulog solution.
Ulogd can pick up packets that you "-j ULOG" from iptables. It receives them over a netlink interface. You can tell iptables how many bytes of each packet to send, and how many to queue up before sending them. E.g.
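A sketch of such a rule (the chain, group, byte count, and threshold mirror the description below; the exact original isn't shown):

```shell
iptables -A INPUT -j ULOG --ulog-nlgroup 1 --ulog-prefix "input" \
    --ulog-cprange 48 --ulog-qthreshold 50
```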
will log the first 48 bytes of any incoming packet to netlink group 1. It will tag the packets as "input", and send them in batches of 50. 48 bytes is usually enough to catch anything you could want from the headers. If you only need the packet size, 4 bytes will do; for source and destination addresses as well, 20.
Now, we tell ulogd to listen for this stuff and log it. Ulogd has a pluggable architecture. IPv4 decoding is a plugin, and there are various logging plugins: "-j LOG" emulation, text files, pcap files, MySQL, PostgreSQL, and SQLite. For my purposes, I used MySQL, as the router in question already had MySQL on it (for Cacti). Otherwise, I would have opted for SQLite. Be warned that the etch version of ulogd doesn't automatically reconnect to the MySQL server should the connection break for any reason. I backported the lenny version to etch to get around that. (You also need to provide the "reconnect" and "connect_timeout" options.)
Besides the reconnection issue, the SQL implementations are quite nice. They have a set schema, and you just need to create a table with the columns in it that you are interested in. No other configuration (beyond connection details) is necessary.
My MySQL table:
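The original schema isn't shown; since ulogd only logs the columns you create, a minimal table for per-user byte counting (column names follow ulogd's field names; this exact selection is a guess) might look like:

```sql
CREATE TABLE ulog (
  id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  oob_time_sec INT UNSIGNED,        -- packet timestamp (UNIX time)
  oob_prefix   VARCHAR(32),         -- the --ulog-prefix tag, e.g. "sr-f"
  ip_saddr     INT UNSIGNED,        -- source address as a 32-bit integer
  ip_daddr     INT UNSIGNED,        -- destination address
  ip_totlen    SMALLINT UNSIGNED    -- IP packet length in bytes
) ENGINE=MyISAM;
```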
My ulogd.conf:
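The original file isn't shown; an abridged ulogd 1.x configuration along these lines would match the setup described (paths and credentials are examples):

```ini
[global]
nlgroup=1
logfile="/var/log/ulogd.log"
plugin="/usr/lib/ulogd/ulogd_BASE.so"
plugin="/usr/lib/ulogd/ulogd_MYSQL.so"

[MYSQL]
table="ulog"
db="ulog"
host="localhost"
user="ulog"
pass="secret"
# needed for automatic reconnection (see above)
reconnect=5
connect_timeout=10
```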
The relevant parts of my firewall rules:
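The original rules aren't shown; a hypothetical reconstruction (subnets and the interface name are examples) would give each housemate's /28 a pair of ULOG rules on the international link, prefixed with their tag:

```shell
# sr: my /28, forwarded traffic, both directions over the ADSL link
iptables -A FORWARD -o ppp0 -s -j ULOG \
    --ulog-nlgroup 1 --ulog-prefix "sr-f" --ulog-cprange 48 --ulog-qthreshold 50
iptables -A FORWARD -i ppp0 -d -j ULOG \
    --ulog-nlgroup 1 --ulog-prefix "sr-f" --ulog-cprange 48 --ulog-qthreshold 50

# sr: traffic going via the transparent proxy on the router, tagged "sr-p"
# (one possible shape; the original handling of proxy traffic isn't shown)
iptables -A INPUT -s -p tcp --dport 3128 -j ULOG \
    --ulog-nlgroup 1 --ulog-prefix "sr-p" --ulog-cprange 48 --ulog-qthreshold 50

# ...and similar rule pairs for fb (, gu, etc.
```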
So, traffic for my /28 (sr) will be counted as "sr-f" or "sr-p", so I can tally up proxy & forwarded traffic separately. (Yes, I can count traffic with squid too, but doing it all in one place is simpler.) "fb" is random housemate Foo Bar, and "gu" is guest (unreserved IP addresses).
You can query the usage this month with for example:
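The original query isn't shown; assuming a ulogd table named ulog with oob_prefix, ip_totlen, and oob_time_sec columns, a per-user total for the current month could look like:

```sql
SELECT oob_prefix AS tag, SUM(ip_totlen) AS bytes
  FROM ulog
 WHERE oob_time_sec >= UNIX_TIMESTAMP(DATE_FORMAT(CURDATE(), '%Y-%m-01'))
 GROUP BY oob_prefix;
```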
Your table will fill up fast. We are averaging around 200 000 rows per day. So obviously some aggregation is in order:
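The aggregate table isn't shown either; one plausible shape is a daily per-tag summary (names here are made up):

```sql
CREATE TABLE ulog_daily (
  day   DATE            NOT NULL,
  tag   VARCHAR(32)     NOT NULL,   -- the ULOG prefix, e.g. "sr-f"
  bytes BIGINT UNSIGNED NOT NULL,
  PRIMARY KEY (day, tag)
);
```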
And every night, run something like:
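A nightly job of roughly this shape (a sketch, assuming the table names used here) would roll completed days into the summary and prune the raw table:

```sql
-- Summarise everything before today into per-day, per-tag byte totals
INSERT INTO ulog_daily (day, tag, bytes)
SELECT DATE(FROM_UNIXTIME(oob_time_sec)), oob_prefix, SUM(ip_totlen)
  FROM ulog
 WHERE oob_time_sec < UNIX_TIMESTAMP(CURDATE())
 GROUP BY DATE(FROM_UNIXTIME(oob_time_sec)), oob_prefix
ON DUPLICATE KEY UPDATE bytes = bytes + VALUES(bytes);

-- Then drop the raw rows that were just aggregated
DELETE FROM ulog WHERE oob_time_sec < UNIX_TIMESTAMP(CURDATE());
```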
Finally, I have a simple little PHP script that provides reporting and calculates dues. Done.
Comments
Nice, but any ideas on username logging
This looks great. I am in a similar situation, but have a shared host sitting at Rackspace which people often tunnel (ssh -D) through. I would like to do per-username logging. Any ideas on where I should look (I’ve done some searching) or how?
IPTables mod_owner
Match based on uid. Naturally you’ll only be able to log traffic from the machine itself, not traffic to it (owned by sshd). But if users are only tunnelling, then just double any figure this produces.
It’s a little tricky, because the “owner” module only works in OUTPUT. So we have to use some connmark foo for per-user logging, e.g.
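A sketch of that connmark approach (the username and mark value are made up): mark each user's outbound connections in OUTPUT, where the owner match works, then count both directions by connection mark.

```shell
# Mark alice's outbound connections (owner match only works in OUTPUT)
iptables -t mangle -A OUTPUT -m owner --uid-owner alice \
    -j CONNMARK --set-mark 1
# Count every packet, in both directions, on connections marked 1
iptables -A OUTPUT -m connmark --mark 1 -j ULOG \
    --ulog-nlgroup 1 --ulog-prefix "alice" --ulog-cprange 4
iptables -A INPUT  -m connmark --mark 1 -j ULOG \
    --ulog-nlgroup 1 --ulog-prefix "alice" --ulog-cprange 4
```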
This requires a unique setting per user. That could be quite messy in an environment where there are thousands of users…
Yup
And I can’t think of any workaround for that.
I shared a dedicated server
I shared a dedicated server with four other people and started on this exact same thing a few years ago. I got sidetracked and never finished it up. Today there was some talk of one or more of the people pulling out, so I’m now looking at some other hosting services. Of course, the key thing is knowing how much bandwidth I’m using. So, I started back on my long-abandoned ulogd project, and fortunately I came across your post, which has been a great help in jogging my memory! One question though: I remember somebody saying that ip_totlen’s units were 32-bit words (not bytes or kb or bits). Do you know if that’s correct?
Hmm
I haven’t tested, but I’ve noticed an almost perfect correlation between ip_totlen and my ISP’s bandwidth records. So I’m pretty sure it measures octets (bytes).
BTW, if you want a simple interface-wide bandwidth accounting solution, vnstat is awesome.
why don’t you use
why don’t you use chillispot? I use it for accounting purposes and it now serves 200+ users daily. :D good money does come easy…
Do it myself
I always prefer to do such things myself. I learn more that way.
Captive portals are also out of the question. I don’t want to have to deal with those unless I’m at an airport.
would you share your php script?
I used nulog all the time, but now it is no longer a simple PHP script; it became a huge Python app. Would you share your PHP script?
thanks
squid accounting
In your iptables configuration you skip squid, but how do you add the accounting of squid? Surely there must be additional scripts that also run and should be imported into your DB for total accounting?
RE: squid accounting
This was a while ago, but it looks like I counted transparent squid access via ULOG, skipping explicit proxy requests.
Obviously one can also get data from the squid logs.