Last month we posted a tutorial from Hub City Labs on making your own PCBs at home. At the time, Hub City was hosting their hackerspace web site on a tiny VPS graciously provided by a member. As you might expect, the throngs of Hackaday readers turned Hub City Labs’ server into a pile of molten slag and made their admin’s hair a little more gray. Their web site is up again, and Hub City provided a tutorial on protecting your server from the ravages of being Slashdotted, Farked, Reddited, and even Hackaday’d.
The solution for the first few hours was to transfer Hub City Labs’ site to an Amazon EC2 instance. Since then, they’ve moved over to a Debian EC2 instance that is able to handle half a million pageviews an hour for a WordPress site.
This amazing capability required a good bit of optimization. A stock WordPress installation runs its scheduled cron tasks on page loads; not good if you’re going to see thousands of hits every minute. The guys added define('DISABLE_WP_CRON', true) to their wp-config.php file and moved all the background tasks – checking to see if a page should be updated, for instance – to a fixed schedule every minute or so. Along with an increase in the WordPress cache, these optimizations took the server from 1,500 to 60,000 pageviews an hour.
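Once WP-Cron is disabled, those background tasks still need a trigger. A common replacement – sketched here with a placeholder domain, since Hub City’s exact setup isn’t published – is a system crontab entry that hits wp-cron.php on a fixed schedule:

```shell
#!/bin/sh
# Sketch: after setting DISABLE_WP_CRON in wp-config.php, fire WordPress's
# background tasks from the system crontab instead of on page loads.
# "example.com" and the one-minute schedule are placeholders.

CRON_LINE='* * * * * curl -s https://example.com/wp-cron.php?doing_wp_cron >/dev/null'

# Appending it to the current user's crontab would look like this
# (commented out so the sketch runs without touching a real crontab):
# ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -

echo "$CRON_LINE"
```

With this in place, wp-cron.php runs once a minute no matter how many (or how few) visitors show up, instead of once per page view.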
To get up to half a million pageviews an hour, the EC2 instance was loaded up with Varnish, a front-end cache that really speeds things up.
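For a sense of what that front-end cache involves, here’s a minimal Varnish VCL naming one backend web server and a default TTL, written out by a small script. The host, port, and timings are illustrative guesses, not Hub City’s actual configuration:

```shell
#!/bin/sh
# Sketch of a minimal Varnish (VCL 4.x) configuration: one backend
# (the web server hiding behind Varnish) and a short default TTL so
# anonymous page views are answered straight from the cache.
# All addresses and timings below are made-up examples.

cat > default.vcl <<'EOF'
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # web server listening behind Varnish
    .port = "8080";
}

sub vcl_backend_response {
    # Cache pages for a minute; grace keeps serving the stale copy
    # while a fresh one is being fetched from the backend.
    set beresp.ttl = 60s;
    set beresp.grace = 30s;
}
EOF

# Varnish itself would then be started with something along the lines of:
# varnishd -a :80 -f default.vcl -s malloc,256m
echo "wrote default.vcl"
```

Because most visitors during a traffic spike are anonymous and request the same handful of pages, even a one-minute TTL means the vast majority of hits never reach WordPress at all.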
The result – 375 million pageviews for $15 a month – is more than Hub City Labs will probably ever need. The nature of hackerspace web sites, though – light load until Hackaday, Slashdot, or Reddit figure out you did something cool – means hosting on an expandable EC2 instance is probably the way to go.
24 thoughts on “Hackaday-proofing Your Hackerspace’s Server”
This is understandable. I ran a popular streaming anime site that was constantly having these same problems. It’s awesome that HAD has the sheer visitor power to bring a server down, especially a tiny shared server that’s hosting three or more WP sites.
Well I’m not so sure it’s amazing that HAD traffic pulled down the site. Even budget shared hosting plans can handle tens of thousands of visits. It seems to me that the site was incorrectly configured. This caught my eye in the article:
“This amazing capability required a good bit of optimizations. A WordPress installation is set to run cron tasks on every page load; not good if you’re going to see thousands of hits every minute. ”
I don’t believe for a second that out of the box WordPress will run cron jobs every time someone views a page. That sounds like someone did a little hacking or purpose built a module to run cron jobs at every page view because they didn’t know how to achieve whatever their goal was any other way. I’m not a member of the WordPress fan club but if anything it is an efficient blogging platform.
It won’t, but a caching plugin will be essential if you expect tens of thousands of hits.
Hubcity, if you’re reading this:
You really, really need to put your LOCATION on the front page.
“HubCityLabs is a not-for-profit hackerspace and a community of like minded hackers and geeks that provides an accessible, collaborative environment for the sharing of tools, knowledge and ideas located in Moncton, New Brunswick, Canada.”
HCL member here, you’re right, we used to have that on our front page but it got lost in the updates. We’ll put our location back in there.
Can’t wait till everyone in North America has FTTH!!!
… kinda like other parts of the world already have
alas hope is useless, for i would be one of the last to actually get FTTH :(
Having fiber to the premises is meaningless if one can’t afford the service plans. Despite my ISP getting a boat load of grants & loan guarantees from the “Obama stimulus,” there will be no fiber to me and my rural neighbors. A legacy of protected service areas AFAIK.
Shared hosting on a big server is another good alternative. That way the load spikes are averaged across the thousand other sites that are calm at the moment.
This of course assumes that the shared hosting does not impose any strict per-site limits.
I wonder if a p2p browser plugin that can avoid the slashdot effect through a distributed browser cache amongst visitors to the site would work.
I recently got published on Hackaday and I was actually a bit surprised about the amount of traffic I got. You’re guaranteed to get a couple thousand hits over the course of 2 days, but beyond that it depends on the subject matter. Apparently my project wasn’t immensely popular, so I ended up totaling about 4,000 hits over 3 days from Hackaday or related sources (i.e., Hackaday “mirror” sites and feed readers). I wonder how much traffic Hackaday brought to this site. I have a hard time imagining half a million hits in an hour’s time, heh.
Depending on the content, I’ve seen over 300k referred in one day.
If the story has enough information, pictures, or a video, then I’d wager less of the casual readers click through to the main site. If the HaD post is more of a teaser, then the targeted site will get a lot higher traffic.
Hits or visits?
Someone linking to an image on your site is a hit; someone actually viewing a page is a visit. Unfortunately the term “hit” became popular as a benchmarking term, but it means nothing.
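The distinction is easy to see in a raw access log: every request line counts as a hit, but only requests for actual pages count as pageviews. A quick sketch against a fabricated four-line log:

```shell
#!/bin/sh
# Illustrating the hit vs. pageview distinction on a tiny fake access
# log: every line is a "hit", but only requests for pages (here, anything
# that isn't an image, stylesheet, or script) count as pageviews.

cat > access.log <<'EOF'
1.2.3.4 - - [01/Jan/2013:00:00:01] "GET /index.html HTTP/1.1" 200 5120
1.2.3.4 - - [01/Jan/2013:00:00:01] "GET /logo.png HTTP/1.1" 200 900
1.2.3.4 - - [01/Jan/2013:00:00:01] "GET /style.css HTTP/1.1" 200 300
5.6.7.8 - - [01/Jan/2013:00:00:05] "GET /about.html HTTP/1.1" 200 4096
EOF

# Field 6 of each log line is the requested path.
hits=$(awk 'END { print NR }' access.log)
pageviews=$(awk '$6 !~ /\.(png|jpg|gif|css|js)$/ { n++ } END { print n }' access.log)

echo "hits=$hits pageviews=$pageviews"   # → hits=4 pageviews=2
```

One visitor loading one page with an image and a stylesheet produces three “hits” but only one pageview, which is why hit counts inflate so easily.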
lol couldn’t resist. Interesting read :)
Would the same work with the Ubuntu EC2?
The worst example of flooding is surely when whatreallyhappened.com links to low volume pages.
Quite a few of their links go to “Bandwidth Exceeded” pages.
The “slashdot effect” has been around for 15 years. Way before HAD existed, or Digg, or whoever else tries to claim the “INSERTNAMEHERE effect”
Just remember, Slashdot was there first, it needs that legacy, because it’s a toilet now.
> Just remember, Slashdot was there first, it needs that legacy, because it’s a toilet now.
Just this one time I wish you weren’t right, fartface.
Only 15 years?
WRH has been around for 19 so far, if you include Rancho Runamukka.
Agreed – Long past time that Slashdot moved into the 21st. century.
When you run your EC2 server constantly, it makes sense to use a reserved instance.
This means you make an upfront payment, but you receive a considerable discount.
See here (scroll down till “Reserved Instances”) http://aws.amazon.com/en/ec2/pricing/
You can get as low as < $6 per month in the EU, or even lower (< $4) in the US.
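The arithmetic behind a reserved instance works like this – the numbers below are hypothetical, not Amazon’s actual rates, so check the pricing page:

```shell
#!/bin/sh
# Hypothetical numbers only -- see the Amazon pricing page for real rates.
# The idea of a reserved instance: a one-time upfront fee buys a much
# lower hourly rate, so the effective monthly cost drops.

UPFRONT=69          # one-time fee for a 3-year reservation (example)
HOURLY_CENTS=0.5    # reserved hourly rate, in cents (example)
MONTHS=36           # term length of the reservation
HOURS_PER_MONTH=730

# effective monthly cost = upfront fee spread over the term
#                          + hourly rate * hours of usage
monthly=$(awk -v u="$UPFRONT" -v h="$HOURLY_CENTS" \
              -v m="$MONTHS" -v hpm="$HOURS_PER_MONTH" \
          'BEGIN { printf "%.2f", u / m + (h / 100) * hpm }')

echo "effective monthly cost: \$$monthly"   # → effective monthly cost: $5.57
```

The trade-off is commitment: the upfront fee only pays off if the instance really does run around the clock for most of the term.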
Article in one sentence: they learned not to host their own website on a slow connection.
Or they could use a static site generator. These days they’re super-awesome: you can use templating languages, includes, etc., and you wind up with plain static HTML that has zero per-request overhead. Sure, you lose things like comments (unless you include something that runs elsewhere, like Disqus), but it’s a good way to serve up a lot of views, fast.
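As a toy illustration of the idea – with made-up file names and a deliberately crude template, nothing like a real generator such as Jekyll or Pelican – a static “build” can be as little as a loop that renders content files into HTML once, ahead of time:

```shell
#!/bin/sh
# Toy static-site build: render per-page content files through a shared
# template into plain HTML files. The file names and template here are
# invented for illustration. The output is static HTML that any web
# server can push out with essentially no per-request work.

mkdir -p content public
printf '%s\n' 'title: Hello' 'Our first post.' > content/hello.txt

for src in content/*.txt; do
    page=$(basename "$src" .txt)           # output file name
    title=$(sed -n 's/^title: //p' "$src") # first line holds the title
    body=$(sed '1d' "$src")                # the rest is the body
    cat > "public/$page.html" <<EOF
<html><head><title>$title</title></head>
<body><h1>$title</h1><p>$body</p></body></html>
EOF
done

ls public/
```

The build runs once per edit rather than once per visitor, so a traffic spike only costs the server static file reads.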
Did you see the guys who moved a datacenter, and left a Raspberry Pi behind in the old one to serve “sorry, we’ve moved” webpages?
Maybe not half a million per hour, but way more impressive, IMHO.