How To Solve Heartbleed In 10 Minutes

By Greg Keller Posted April 24, 2014


Heartbleed was a horrible disaster for Internet security: it caused a lot of companies to spend hours or days scrambling to fix it. And, while that was going on, you can bet those teams responsible for mitigating or patching it didn’t get much else done, and probably didn’t get much sleep, either.

But why? Why was it so difficult? Just crank out a script and run it, right?

When you look at Linux, it has the infrastructure available to make automated patching fairly straightforward:

  1. The ability to automatically check a remote repo for available updates (yum or apt-get)
  2. An easy-to-use command-line tool to apply the patch (again, yum or apt-get)
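For example, on an RPM-based system, checking for and applying an openssl patch is two commands (apt-get update followed by apt-get install openssl is the rough Debian/Ubuntu equivalent):

# See whether a newer openssl package is available in the configured repos
yum check-update openssl

# Apply it without prompting
yum update -y openssl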

If you run a graphical Linux desktop, most desktop environments include a widget that checks for security updates and flags them for you, so you can patch quickly and easily. So whence cometh the problem?

A Problem of Horizontal Scale

Since we’ve moved to a cloud model, scaling servers has shifted from scaling up (buying bigger, more powerful hardware and putting it in a central location) to scaling out (adding more and more cheap servers to solve complex problems). Some applications do both, but many problems lend themselves well to scaling out, and so many have chosen this path.

So, instead of people working on a relatively small number of big servers, scaling an application has meant creating a large number of relatively inexpensive servers. And, while it’s completely trivial to check for and apply a patch on one box, it can be daunting to apply that patch across hundreds or thousands of servers.

Just a script, or is there more to Heartbleed?

I still hear daily from people who use ssh in a for-loop to run commands across multiple servers (a sketch of that approach follows the list below). Unfortunately, there are a few preconditions that must be satisfied to make this work:

  1. You have to know about every server in your organization. Many teams track this with a spreadsheet, a monitoring system, or their cloud console, but for most there’s no single authority that covers everything, so you’re stuck piecing things together and crossing your fingers that the list is up to date.
  2. You have to have a service account available on every single server. If that account isn’t consistent across your servers, you have to know which accounts go with which machines; and if you’ve got different passwords or SSH keys everywhere, you’ve got that challenge, too.
  3. You have to be able to reach all of your servers from one location. That’s incredibly difficult to ensure: most networks are divided into logically and geographically separate enclaves, and in many cases no single host can directly access every other.
  4. You have to know which patches succeeded, which failed, and which servers were simply unavailable, which means more script code to query each server after the patch is applied.
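Here’s a minimal sketch of that for-loop approach with all four preconditions baked in. The servers.txt file, the ops service account, and patch-results.log are placeholders, and it assumes key-based SSH and passwordless sudo are already in place on every host:

#!/bin/bash
# Naive serial patching. Assumes servers.txt lists every host (precondition 1),
# the "ops" account and its SSH key work on all of them (precondition 2),
# and this machine can reach each host directly (precondition 3).
while read -r host; do
  if ssh -n -o ConnectTimeout=10 "ops@${host}" 'sudo yum update -y openssl'; then
    echo "${host}: patched" >> patch-results.log    # precondition 4: record outcomes
  else
    echo "${host}: FAILED or unreachable" >> patch-results.log
  fi
done < servers.txt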

And, even if all those conditions are met, if you’ve got hundreds of servers, it can take hours or more to apply those patches when a single machine is issuing those calls one at a time. Sure, you can take steps to parallelize the process, but that means more scripting and more testing.
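One common way to parallelize, for example, is GNU xargs with -P, reusing the same placeholder servers.txt and ops account as above; the quoting alone hints at why that means more scripting and more testing:

# Run up to 20 SSH sessions concurrently instead of one at a time.
# Tag each output line with its hostname, since parallel output interleaves.
xargs -a servers.txt -P 20 -I {} sh -c \
  'ssh -n -o ConnectTimeout=10 ops@{} "sudo yum update -y openssl" \
     && echo "{}: patched" || echo "{}: FAILED or unreachable"' >> patch-results.log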

And, guess what? When it happens again, you have to go satisfy all those preconditions again. It’s a wicked set of problems that has plagued system administrators for a long time, and the cloud has only made the problem worse.

When you look at the problem, you can see why it’s such a challenge to patch one package across all those servers. Most companies simply don’t have the infrastructure necessary to:

  • Track all their servers, all the time.
  • Ensure that there’s a valid and usable service account on all those servers.
  • Ensure that they’ve got a safe and secure means of accessing all their servers.
  • Gather auditable results reliably.
  • Parallelize the task.

There’s A Better Way

But guess what: you no longer have to do any of that. JumpCloud’s Directory-as-a-Service® command execution functionality solves all of those problems, so when a vulnerability like Heartbleed rears its ugly head, you just do the following:

  1. Update your internal repo (if you use one), or wait for the patch to become available on your normal repo
  2. Create a command in JumpCloud’s command tab to apply the patch, for example:

yum update -y openssl

  3. Click “select all servers”
  4. Set the command to run as root
  5. Click “Save and Run”
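One caveat worth a follow-up command: the patched library only takes effect once the services linked against it restart. For example, you could run a quick verification the same way (the grep patterns here are illustrative):

# Confirm the installed build is patched (1.0.1g+, or a distro backport of the fix)
openssl version
rpm -q --changelog openssl | grep CVE-2014-0160

# Find long-running services still mapped to the old, now-deleted library;
# they need a restart (or a reboot) before the fix actually takes effect
lsof -n | grep 'DEL.*libssl'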

JumpCloud:

  • Tracks all of your servers, 24×7.
  • Ensures that you have any necessary user accounts on each server.
  • Accesses your servers through a fully encrypted pipe (and BTW, neither our agents nor our backend infrastructure was vulnerable to Heartbleed), controlled through a console protected by two-factor authentication.
  • Makes sure you’ve got full connectivity: JumpCloud’s agent calls outbound to the Internet, so no inbound connectivity is necessary.
  • Automatically parallelizes the task across all your servers.

With JumpCloud’s cloud-based directory service, you can simplify your command execution efforts, and you get an audit trail at the end that you can share with your boss. Then you can leave at a reasonable hour, and move on to the next problem.

How agile could your IT department be if they had this kind of infrastructure at their fingertips at all times? Give JumpCloud’s Directory-as-a-Service a try – your first 10 users are free forever.

Greg Keller

Greg is JumpCloud's Chief Product Officer, overseeing the product management team, product vision and go-to-market execution for the company's Directory-as-a-Service offering. The SaaS-based platform re-imagines Active Directory and LDAP for the cloud era, securely connecting and managing employees, their devices and IT applications.
