An Eleven Character Linux Denial of Service Attack & How to Defend Against it

Sometimes it is the oddest, most harmless-looking things that cause problems. I can’t think of anything more innocuous looking than the following Linux shell command:
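
:(){ :|:& };:

(Eleven characters, if you don’t count the two spaces.)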

But DO NOT run this on a Linux system, or chances are that you will perform a Denial of Service attack on your own machine! You may have to hard reset your system to get it back and you COULD LOSE DATA!

This is not new. I have seen it floating around for a while, and it looked interesting. It was referenced in a 2007 post that said it didn’t work anymore because most modern operating systems are configured to protect against it. So of course I just HAD to try it.

I booted up my Ubuntu 12.04 system, opened a command shell, entered the command and…

It locked dead!

Okay just what is this command???

FORK BOMB PROCESS ATTACK

Meet the “Fork Bomb”. All it does is instruct Linux to keep spawning new processes: each copy of the command launches more copies of itself, over and over, essentially without limit. Your RAM and CPU usage climbs until the system no longer responds to input.
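
If you rewrite the one-liner with a readable function name instead of “:”, the trick is easier to see. This is just a sketch of the same idea, so don’t run this version either:

bomb() {            # define a function named "bomb"
    bomb | bomb &   # the body calls itself twice, piped together, in the background
}
bomb                # calling it once sets the chain reaction off

Every call creates two new copies, each of those creates two more, and so on, so the process count grows exponentially until the process table and memory are exhausted.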

Let’s see what it does to an Ubuntu 12.04 system.

Here is an Ubuntu 12.04 System Monitor screenshot of a system before I ran the Fork Bomb:

The CPU and Memory usage are steady.

Now once the Fork Bomb is started:

Notice the significant increase in CPU and RAM usage. It even more than doubled the CPU usage on the virtual host, taking it from 8% to 17% while the attack was running.

I lost all control of the Ubuntu system. Even the keyboard lights were unresponsive. Supposedly some operating systems will recover if left alone long enough. But I waited a while and I never got control back.

(Okay, for all those out there claiming that it was just a Virtual Machine, I tried it on a standalone Ubuntu 12.04 system with the same results. Okay, there was a quarter-second pause before I lost control of the machine!)

DEFENDING AGAINST THE ATTACK

This is very easy to defend against. All you need to do is set a limit on the number of processes a user can run. Limits can be set per user, per group, or globally, and you can set them in one of two ways.

You can use the ulimit command for an immediate change that only lasts for the current shell session and the processes started from it, or make the change permanent by editing the /etc/security/limits.conf file.

To use the ulimit command, simply type “ulimit -u” followed by the number of processes you want to allow. ulimit is a shell builtin, so you run it directly in your shell; there is no need for sudo just to lower your own limit. To set the limit to 512, just type:

ulimit -u 512
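
ulimit also makes it easy to check the current limit and confirm the change took effect. A quick session might look like this (512 is just the example value from above; pick whatever suits your system):

ulimit -u        # show the current per-user process limit
ulimit -u 512    # lower the limit for this shell and its child processes
ulimit -u        # confirm the new value

Note that without the -S flag this sets both the soft and the hard limit, so a regular user can’t raise it back up in the same shell; use “ulimit -S -u 512” if you only want to lower the soft limit.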

Does this work? Absolutely – after running ulimit, the fork bomb is effectively throttled:

As you can see from the screenshot above, there is very little increase in RAM usage and the CPU usage is much more tolerable. And more importantly, I had full control of the system.
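
In the terminal itself, all you see once the limit kicks in is bash failing to fork, with a stream of errors along these lines (the exact wording varies a little between bash versions):

bash: fork: retry: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable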

You can also change the /etc/security/limits.conf file to make the change permanent. Full instructions can be found on AskUbuntu.com, but basically just add the following line to the config file:

*    hard    nproc    512

The “*” means the change applies to everyone, “hard” means it is a hard limit, and “nproc 512” caps the number of processes at 512.
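
The same file also covers the per-user and per-group cases mentioned earlier. For example (the user and group names below are only placeholders):

# everyone (the wildcard does not cover the root account)
*           hard    nproc    512
# members of the "students" group
@students   hard    nproc    256
# a single user
bob         hard    nproc    1024

Changes to limits.conf take effect at the next login session.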

You need to adjust the number of processes to whatever works best for your system; 512 seemed to work great on mine. Don’t set the number too low, or you may run into some “denial of service” type issues of your own, lol.
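
One way to sanity-check whatever value you pick is to count how many processes your account already runs under a normal workload, and then leave yourself plenty of headroom above that number:

ps -u "$USER" -o pid= | wc -l    # count the current user's running processes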

Oh, and for all the Mac fanboys out there, this command didn’t seem to have much effect when run on a newer Mac. Okay, my friend ran it and it ate up 24 GB of RAM, but seeing as he had 64 GB of RAM on the system, it just laughed the attack off.

Even running it on a Mac with 24 GB of RAM had no discernible effect, other than a screen full of “bash: fork: Resource temporarily unavailable” error messages like the ones above. It looks like Macs have process limits enabled by default. (Thanks Command_Prompt and Bill!)

This should be obvious, but for the record, you should never run this command on systems that you do not own… Or put it in someone’s startup script.

But knowing how to limit a user’s ability to run processes is very important, and throttling them on Linux systems where it is not done by default could head off some problems before they surface.

8 thoughts on “An Eleven Character Linux Denial of Service Attack & How to Defend Against it”

    1. Thanks Xavi,

      It looks like Ubuntu doesn’t consider it a bug; it was reported back in 2009 and the response was that it was “expected behavior”. I just added a note to the bug report saying that setting some default limit with ulimit might be a good idea.

  1. This is not a Linux issue; it is a shell fork bomb, and it works on most Unix-based OSs. Also, you need access to a shell to trigger it, so it is hard to pull off without a login or a big vulnerability in a network service.

    Most OSs and distributions use the same mitigation (not solution) you mention: setting a ulimit on the number of processes a user can have. Many an application server that uses multiprocessing rather than multithreading hits this limit during legitimate operation, which forces admins to raise the limit and complain, so it is hard for a distribution (like Ubuntu) to find settings that work for everyone.

    1. Thanks for the info Typhoon, I appreciate it. I guess I can’t understand why this isn’t monitored better. Macs seem to have a pre-configured limit set; I wonder why other flavors of Linux don’t, especially the desktop versions.

      My Mac guru buddy said that if you go over the set limit on a Mac, it just prompts you to raise it and then takes care of it. That seems like a decent “fix”, maybe?
