High Availability

Websites are fun to make. When I first learned how to build a website, it was an awesome feeling knowing the whole world could see my page. I still enjoy making websites, and I’m fortunate to have a job doing just that.

One of the most frustrating parts of any web developer’s job is when their site goes down and they don’t have control over bringing it back up. Bluehost (Endurance International Group) could be a lot better; they overload their servers and see a lot of downtime. For the last few years I have been using DigitalOcean Virtual Private Servers (VPS), and I love them.

This brings me to the topic of “high availability” and what we are doing at work. I always like to push a little further than normal. I always want to make things faster; even if it’s only a few milliseconds, I want it faster! We host some sites that get a lot of traffic. They are fast, and we scaled them vertically. During slow times at work, after work, and on weekends I have been building a “high availability” system with virtual machines (VMs). We’ve got it to the point where all the parts are working and talking to each other as they should. It was fast when it was just one person loading the page, even after hitting refresh 20 times. All the servers are running CentOS 7, and here is the software we’ve used:

  • 1 x HAProxy
  • 3 x nginx (with multiple versions of PHP available)
  • 2 x GlusterFS (this allows the servers to share config files and assets)
  • 1 x Galera Load Balancer
  • 3 x MariaDB using Galera Cluster
  • 1 x Redis (for session persistence)
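
To give a concrete picture of how the front of the stack fits together, here is a minimal sketch of the kind of Bash setup snippet I have in mind for the HAProxy box. The backend names, IP addresses, and timeout values are placeholders for illustration; your internal addressing and limits will differ.

```bash
#!/bin/bash
# Sketch: write a minimal HAProxy config that round-robins HTTP traffic
# to the three nginx servers. The 10.0.0.x addresses are placeholders.
cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
    maxconn 20000
    daemon

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend nginx_pool

backend nginx_pool
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
    server web3 10.0.0.13:80 check
EOF

systemctl enable haproxy
systemctl restart haproxy
```

Because PHP sessions live in Redis, plain round-robin is enough here; there is no need for cookie-based stickiness at the load balancer.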

Okay, so I’d followed the tutorials online; I should have had an impressive setup that could handle c10k (10,000 concurrent connections). I tested it with ApacheBench, and the servers stopped responding. I was disappointed. After a couple of days of fiddling, tuning the Linux kernel on the servers, and making some other adjustments, I got it up to 5,500 requests per second. I still have some tuning to do, but I can see that it is possible. I would probably already be at c10k if these servers were on SSDs.
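
The full list of tweaks will be in the tutorial, but the kernel tuning was along these lines. Treat the keys and values below as illustrative starting points for a box accepting a flood of short-lived HTTP connections, not as the exact numbers I settled on.

```bash
#!/bin/bash
# Illustrative kernel tuning for heavy connection churn. Values are
# starting points; measure and adjust for your own hardware.
cat > /etc/sysctl.d/90-tuning.conf <<'EOF'
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
fs.file-max = 2097152
EOF
sysctl --system

# Raise the open-file limit so HAProxy/nginx can actually hold the sockets.
cat >> /etc/security/limits.conf <<'EOF'
*  soft  nofile  65535
*  hard  nofile  65535
EOF

# Then hammer it from a separate machine with ApacheBench, e.g.:
# ab -n 100000 -c 1000 -k http://<load-balancer-ip>/
```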

If you read a tutorial on how to do this and it doesn’t work, don’t worry: I’ll be writing a comprehensive tutorial that includes the things I wish someone had told me. I even plan on including Bash scripts that will set up each server for you. Here are a few things that I’ll go ahead and mention:

  • The load balancer should have two NICs
  • The internal NIC should have multiple IP addresses (a quick sketch of doing this with nmcli follows this list)
  • When you’re testing this infrastructure, do it from another machine, not from a VM on the same host
  • Start small (set up HAProxy and a few nginx servers first)
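
For the second bullet, here is roughly what adding extra addresses to the internal NIC looks like on CentOS 7 with nmcli. The connection name “eth1” and the 10.0.0.x addresses are placeholders for whatever your internal network uses.

```bash
#!/bin/bash
# Sketch: append extra IPv4 addresses to the load balancer's internal NIC.
# "eth1" and the addresses below are placeholders for your own setup.
nmcli connection modify eth1 +ipv4.addresses "10.0.0.21/24"
nmcli connection modify eth1 +ipv4.addresses "10.0.0.22/24"
nmcli connection up eth1

# Confirm the addresses took:
ip -4 addr show dev eth1
```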