A recent customer request led me to do some forward thinking on my web hosting infrastructure: improving security and resilience in ways that don't need me typing commands in every day. Some basic thoughts here may make it a little easier for you to add different layers of security. More importantly though, can I be ruthless AND ethically responsible?

[GIF: Mr Robot]
Mr Robot has another solution to IPv4 exhaustion according to this gif…

pfBlockerNG and Suricata are both staples of my virtual firewall stack, with much of my focus on denying traffic before it even reaches the stage of being analysed by Suricata. The slightly off-kilter thinking is perhaps a bit more old school: fewer eggs in one basket, less CPU load than leaning on regex to deny absolutely everything that could have been denied earlier, better affinity scheduling (in theory, anyway), and at least I've always got some form of nasty-filtering in place if I'm restarting a process or something (very rare).
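As a rough sketch of that layering in raw pf.conf terms (pfBlockerNG generates something broadly similar on pfSense under the hood; the table, interface, and address names here are all illustrative):

```
# Illustrative macros - substitute your own interface and server
wan_if  = "vmx0"
web_srv = "203.0.113.10"

# Reputation table, populated by pfBlockerNG-style feeds
table <pfb_deny> persist

# Cheap table lookup first: "quick" stops rule evaluation on a match,
# so known-bad sources are dropped before anything heavier sees them
block drop in quick on $wan_if from <pfb_deny> to any label "pfb_deny_in"

# Only traffic that survives the lookup carries on to the stateful
# rules (and to whatever inspection Suricata is doing on the box)
pass in on $wan_if proto tcp from any to $web_srv port { 80 443 } keep state
```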

pfBlockerNG proved even more useful today. I decided to turn it up to 11 on a destination interface that I don't normally provide firewall services for. In a bid to shape up my service offering to be even more secure by default than the competition, I discovered how much easier it made my life when it seemed to automatically filter out a typical spat-style DDoS: the kind where someone is just attacking a particular service in an odd way that doesn't require a lot of bandwidth, common among those who run game servers. 45Mbps was enough to make someone's colocated server, sitting on an otherwise clear 1Gbps transit line, start sputtering a little bit!

[Image: BGP blackholing diagram]
How BGP blackholing works – you keep the nasties away from your network in the first place and don't let them ingress past the edge. It still requires a netadmin to do some legwork, though, unless you're as state of the art as Cloudflare.

Sure, as a provider I can do things like BGP blackholing with my upstream so the traffic never even enters my network, but for low-bandwidth, low-effort attacks, is it even worth my time if it doesn't make a dent in my IP transit bill? Plus, can I find a creative way to let people know they've likely got something compromised on their home network? I'm definitely an advocate for us all carrying the responsibility to make the internet more secure, collectively.
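For context, triggering a remote blackhole with an upstream typically means announcing the victim /32 tagged with a blackhole community – RFC 7999 reserves 65535:666, though most transits define their own values, so check their docs. A hypothetical FRR-style config (all ASNs and addresses are documentation examples):

```
! Locally null-route the victim address so the prefix is valid to announce
ip route 203.0.113.45/32 blackhole
!
router bgp 64512
 neighbor 192.0.2.1 remote-as 65000
 !
 address-family ipv4 unicast
  ! Announce the /32 with the blackhole community attached; the upstream
  ! then drops matching traffic at THEIR edge, before it ever reaches you
  network 203.0.113.45/32 route-map RTBH
 exit-address-family
!
route-map RTBH permit 10
 set community 65535:666 additive
```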

pfBlockerNG turned up to 11 is great at throwing reported spammy IP addresses (DNSBL-based lists, geo reputation lists, IP addresses you might have written in the back of your Mum's cookbook, etc.) into one big fat deny rule, which as an idea I love. In practice I've never run into an issue with it, but I've always stayed on the side of caution, previously generating advisories only and acting on them in an audit. What would happen if an innocent family tried to access a website and, to them, it just looked like the website was down – because Dad's PC tripped their IP address onto a DNSBL after he tried to get a free "premium video membership" and inevitably didn't get what he bargained for?
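Feeding those lists into the deny table from the earlier sketch is a one-line change plus a refresh command (the path and table name are still illustrative – pfBlockerNG manages its own):

```
# Aggregate of DNSBL / geo / reputation feeds, one address or CIDR per line
table <pfb_deny> persist file "/var/db/pfblockerng/deny.txt"

# Refresh after a feed update without reloading the whole ruleset:
#   pfctl -t pfb_deny -T replace -f /var/db/pfblockerng/deny.txt
```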

So instead I've been getting creative, experimenting with turning this big fat deny rule into a system that's more ethical and responsible. Why not filter traffic heading for web ports and redirect it to an off-the-grid web server hosting a very primitive static site – one that tells people, in human terms, that they have an issue and how they can sort it? I'm still building said primitive static website, the biggest challenge being explaining these security issues in plain English. But on another note, I can at least have a brutal GTFO rule for the networks you might consider DMZ, whilst passing the buck of responsibility to someone's ISP in one of these botnet edge cases. I intend to move this to production, even for shared hosting. And if someone really has a problem with it, I can always just make a very specific exception rule.
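A minimal sketch of that redirect in pf.conf, assuming the <pfb_deny> table from earlier and a hypothetical notice server at $notify_srv:

```
notify_srv = "192.0.2.80"   # hypothetical off-the-grid notice box

# Translation rules run before filter rules in pf: web traffic from
# flagged sources gets bounced to the static "you have a problem" page
rdr on $wan_if proto tcp from <pfb_deny> to any port { 80 443 } -> $notify_srv

# Let the redirected sessions through to the notice server...
pass in quick on $wan_if proto tcp from <pfb_deny> to $notify_srv port { 80 443 } keep state

# ...and everything else from flagged sources still gets the GTFO treatment
block drop in quick on $wan_if from <pfb_deny> to any
```

One caveat worth designing around: redirected HTTPS will throw certificate warnings, since the notice box can't present a valid certificate for whatever site was originally requested – so realistically it's the plain-HTTP sessions that display the notice cleanly.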

I'm sure we all do basic things like blocking nasty ports that shouldn't be anywhere near the internet as standard (like SMB!) – but surely there's a way we can spend less time bashing things into route reflectors and more time fixing the internet?
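For completeness, that kind of baseline hygiene is a one-liner in pf.conf (the ports shown are the usual MS-RPC / NetBIOS / SMB suspects):

```
# No business facing the internet: MS-RPC, NetBIOS, SMB
block drop in log quick on $wan_if proto { tcp udp } \
    from any to any port { 135, 137:139, 445 } label "no_smb"
```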

Let me know your thoughts – how brutally secure do you make your DMZ networks?

To clarify my configuration: I colocate racks myself, sometimes selling shared U space within them, along with separate racks cross-connected back.
