Ask HN: Did anyone notice an increase in hacking attempts during the holidays?
1 point by tluyben2 on Jan 5, 2023
I just saw the news that Slack's private GitHub repositories were accessed over the holidays.

For some of our clients and for us, I noticed that in the late afternoon GMT of the 24th, traffic shot up across the board with strange requests. I didn't really look at first, but soon I got an alert that one site was responding slowly, so I checked (happy Christmas Eve for me!) and found around 1000 IPs, coming from Azure, AWS, Alibaba, and Tor, trying to access and download information from the site.

It was not just a random attack; the bots had information about the site and were trying to abuse it (they couldn't). At least, it very much looked that way. The other sites I checked myself were far more random. But there were many requests, so I decided to just switch the sites to Cloudflare's 'Bot Fight Mode', and that ended it completely within a few minutes.
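If you want to script that switch instead of clicking through the dashboard, something like the following should work. This is only a sketch, assuming Cloudflare's v4 bot-management endpoint accepts a fight_mode flag; the zone ID and API token are placeholders:

    # Hypothetical sketch: toggling Cloudflare Bot Fight Mode via the v4 API.
    # Assumes a token with bot-management edit permission on the zone;
    # ZONE_ID and API_TOKEN are placeholders, not real values.
    import requests

    ZONE_ID = "your-zone-id"      # placeholder
    API_TOKEN = "your-api-token"  # placeholder

    resp = requests.put(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/bot_management",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"fight_mode": True},  # enable Bot Fight Mode
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())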

The sites of our customers are not hosted on the same servers as ours, and there isn't even a way to link them to us in any way. So I asked a friend, and he had the same issues at the same times.

As a honeypot, I left a static site under attack, and the traffic dropped almost back to normal sometime during the morning of the 26th, so it seems this was deliberately timed to grab info while people were celebrating?

If not for Cloudflare, what could I have done myself? What do other people use?

For this particular site (which got the most traffic and suffered a bit because of it), it was not hosted at a cloud provider, but if this had been on AWS, I could have run up a massive RDS or Lambda bill even before I managed to stop it. Let alone people who don't stop it at all: a friend didn't stop an attack on AWS a few months ago, and their bill was almost 100x higher than normal while their product was down.
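At minimum, a billing alarm shrinks the damage window. A minimal sketch with boto3, assuming billing alerts are enabled on the account (the billing metric only exists in us-east-1); the SNS topic ARN and the $100 threshold are placeholders:

    # Sketch: CloudWatch billing alarm so a runaway RDS/Lambda bill pages you
    # before it gets 100x out of hand. Billing metrics live in us-east-1 only.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="estimated-charges-over-100-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,              # the billing metric updates every few hours
        EvaluationPeriods=1,
        Threshold=100.0,           # alert above $100 estimated charges
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
    )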

I ran larger hosting setups before (millions of dynamic consumer (not B2B) sites), and I simply used rate limiting, or rather, rate killing: automatically find anomalies across server logs (basically, IP addresses I had never seen before suddenly requesting far more resources from a particular site than it normally gets over its lifetime) and block those IPs on the hardware firewalls; roughly the idea sketched below. For consumer sites that's mostly fine, and the user would see the blocked IP and reason in their panel and could unblock it (which didn't happen a lot), but for B2B sites I am a bit more careful with that. There are people like me who click around and submit fast and get rate limited while not being bots (like on HN, where I often do things too fast).
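In its crudest form, that rate-killing loop looks something like this. The log path, threshold, and the iptables call are all illustrative (the real setup blocked at hardware firewalls and compared against each site's historical baseline, not a fixed number):

    # Rough sketch of "rate killing": scan an access log, flag IPs requesting
    # far more than a site normally sees, and drop them at the firewall.
    # LOG_PATH and THRESHOLD are placeholders; needs root for iptables.
    import subprocess
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # placeholder
    THRESHOLD = 1000                        # "far too many" requests; tune per site

    counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            if not line.strip():
                continue
            ip = line.split()[0]  # first field of a combined-format log line
            counts[ip] += 1

    for ip, n in counts.items():
        if n > THRESHOLD:
            # Drop all further traffic from this IP.
            subprocess.run(
                ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
                check=True,
            )
            print(f"blocked {ip} after {n} requests")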


