Tag: Sysadmin

GitHub is now offering unlimited private repos for free. There is now literally no good reason to use BitBucket or GitLab (especially paying money to self-host a GitLab instance). BRB moving all of my unfinished, half-baked, terrible projects to GitHub.

Docker putting downloads behind a login wall

Regarding issue https://github.com/docker/docker.github.io/issues/6910: Docker has put the links to download Docker CE behind a login wall. The comment thread is long, and people are, in my opinion, rightfully put out by the move.

Yeah, the issue is that they are not being transparent or real about the reasons why they put the download behind a login wall. All they needed to do was tell the truth, and then this wouldn’t be an issue. As it is now, they used marketing doublespeak, and it came off as disingenuous and, imo, rather pathetic.

I know that this can feel like a nuisance, but we’ve made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.

This is simply not true.

They are not doing this to improve the experience of users, as has been explained several times in the comments. No, they are doing this because they are scrambling to collect data to figure out how to monetize a closed-source product that they gave out for free. Now the expectation in the community is that the product is free, which puts them in a weird position. However, honesty and transparency will go a lot further than lies and deception in these sorts of things. Just gotta be honest with your user base. There is no harm in that.

A simple message like: “hey, so we messed up, and in order to continue to provide Docker and support etc., we need to 1) understand the user base that is downloading the Docker tools, and 2) identify ways to generate revenue to continue to develop and support the tools while maintaining a free core product. In order to do this we want to collect some data using these methods…” and so on. I’d have so much respect for a company that could say those things.

Maybe I am reading a lot into hiding the link, but at the end of the day, if it feels sneaky and shady, it probably is. I do not trust that they actually had the user’s experience in mind when making this change. Therefore, I do not trust Docker, and I would seriously consider the alternatives for my future endeavors.

TLS Peer Verification w/PHP 5.6 and WordPress SMTP Email plugin

We ran into an issue after upgrading from PHP 5.5 to 5.6: we were no longer able to send email via the awesome WordPress SMTP Email plugin. Turns out that PHP 5.6 introduced some enhancements to its OpenSSL implementation: stream wrappers now verify peer certificates and hostnames by default. This was causing the form submissions on our site to fail. Clearly there are some issues with our Postfix config and certs. While we sort those out, we were able to “solve” our immediate problem by disabling peer verification in the plugin without editing core files. Thankfully the plugin developer included a filter that allows us to modify the options array that is passed to PHPMailer.

add_filter('wp_mail_smtp_custom_options', 'my_wp_mail_smtp_custom_options');

function my_wp_mail_smtp_custom_options($phpmailer) {
    $phpmailer->SMTPOptions = array(
        'ssl' => array(
            'verify_peer' => false,
            'verify_peer_name' => false
        )
    );
    return $phpmailer;
}


Thanks random WordPress forum user for the solution!
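As an aside, you can reproduce the class of failure we were hitting without touching WordPress at all. A sketch with openssl, assuming the mail server's cert is self-signed or otherwise untrusted (the cert generated here is a throwaway with a placeholder CN):

```shell
# Generate a throwaway self-signed cert and run it through verification,
# roughly the check PHP 5.6's stream wrappers now perform by default.
# mail.example.test is a placeholder subject name.
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=mail.example.test" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
# Verification fails because the cert is not signed by a trusted CA.
openssl verify "$dir/cert.pem" 2>&1 | grep -i 'self.signed'
```

Fixing the Postfix cert (or adding it to the trust store) is the real solution; the filter above is just the stopgap.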

Whitelist IPs in Nginx

I want to whitelist my clients’ IP addresses (and my office IPs) so they can view a site, while the rest of the world is redirected to another site, using Nginx. My Nginx server is behind a load balancer.

Using the geo module I am able to do this rather easily. By default, geo will use $remote_addr for the IP address. However, because our server is behind a load balancer this will not work, as it would always be the IP of the load balancer. You can pass in a parameter to geo to specify where it should get the IP value. In this case, we want to get the IP from $http_x_forwarded_for.

geo $http_x_forwarded_for $redirect_ips {
  default         1;
  203.0.113.0/24  0;  # office range (placeholder addresses)
  198.51.100.24   0;  # client IP (placeholder)
}

What this is doing is assigning the variable $redirect_ips the value listed after each matched address. If the client’s IP matches one of the whitelisted entries, $redirect_ips gets a value of 0, or false. If the IP is not matched, it gets the default value of 1, or true.
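The geo block is essentially a prefix lookup over the forwarded address. Its decision table can be sketched in shell, purely to illustrate the logic (the addresses are placeholder documentation ranges, not our real ones):

```shell
# Mimic the geo lookup: whitelisted addresses map to 0 (serve the site),
# everything else falls through to the default of 1 (redirect).
# 203.0.113.0/24 is a placeholder standing in for the office/client range.
redirect_for() {
  case "$1" in
    203.0.113.*) echo 0 ;;  # whitelisted: no redirect
    *)           echo 1 ;;  # default: redirect
  esac
}
redirect_for 203.0.113.7   # prints 0
redirect_for 198.51.100.9  # prints 1
```

Nginx does proper CIDR matching, of course; the shell glob is only an approximation for illustration.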

Ok, with that, my server directive now looks like:

# Site that is not quite ready for the public to see, but we want to test on prod
server {
    listen 80;
    server_name es.example.com;

    if ( $redirect_ips ) {
        return 302 https://us.example.com$request_uri;
    }

    # the rest of my server directive goes below this line...
    # removed for clarity in this example.
}



Setting up Git HTTP Backend for local collaboration

You want to share a topic branch with a colleague but do not want to push that branch upstream to GitHub/BitBucket/GitLab, etc. How do you do this? You could create a patch and email it. Or you could use Apache and allow your colleague to pull from your repo directly. This does take a bit more time to set up, but it is the most convenient for everyone involved. The basic idea is that you “host” the git repos on your local machine, and push your commits to them as you are developing. You then make your git repos available via Apache on your internal network to allow team members to pull from your local repo.

First create a place to store your repos. Let’s also create a test repo to work with to make sure everything is working.

mkdir -p ~/Sites/git
cd ~/Sites/git
mkdir testproject.git
cd testproject.git
git init --bare

Next let’s set up Apache (I am using OS X El Capitan with Apache 2.4).

Edit /private/etc/apache2/httpd.conf.

Ensure the following modules are being loaded.

LoadModule cgi_module libexec/apache2/mod_cgi.so
LoadModule env_module libexec/apache2/mod_env.so

Uncomment the following line:

Include /private/etc/apache2/extra/httpd-vhosts.conf

Edit /private/etc/apache2/extra/httpd-vhosts.conf.

I removed the existing virtualhosts since I actually do all of my development with Vagrant and Linux. So I really have no need to have anything more than a single virtualhost on my Mac.

<VirtualHost *:80>

    DocumentRoot /Users/user/Sites

    <Directory "/Users/user/Sites/">
        Options +Indexes +MultiViews +FollowSymLinks +ExecCGI
        AllowOverride All
        Require all granted
    </Directory>

    <Directory "/Library/Developer/CommandLineTools/usr/libexec/git-core/">
        Options +ExecCGI
        Require all granted
    </Directory>

    <LocationMatch "^/git/">
        Require all granted
    </LocationMatch>

    SetEnv GIT_PROJECT_ROOT /Users/user/Sites/git
    SetEnv REMOTE_USER user
    ScriptAlias /git/ /Library/Developer/CommandLineTools/usr/libexec/git-core/git-http-backend/

</VirtualHost>

Edit the paths and the user name to suit your local setup.

Restart Apache.

sudo apachectl restart

Your git repos will now be available at http://localhost/git/<REPONAME>.git.

You should now be able to clone your empty repo.

Let’s test it out.

cd ~/Sites
git clone http://localhost/git/testproject.git testproject
cd testproject

You should be able to make changes and push to your remote.

echo '# README' > README.md
git add README.md
git commit -am 'Add Readme'
git push origin master

There it is!

Now you can have a colleague pull changes directly from you. Simply provide them with your machine’s address on the internal network, and they should be able to clone your repo, add you as a remote, pull changes, etc.

If you want other people to be able to push to your repo you will have to explicitly allow this. In your testproject.git repo, set the http.receivepack value to true:

git config http.receivepack true
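If you want to sanity-check the setting without going through Apache, you can set and read it back on a throwaway bare repo (paths here are illustrative):

```shell
# Create a throwaway bare repo, enable anonymous push over HTTP,
# and read the value back to confirm it stuck.
repo="$(mktemp -d)/demo.git"
git init --bare "$repo" >/dev/null
git -C "$repo" config http.receivepack true
git -C "$repo" config http.receivepack   # prints "true"
```

Be aware this allows anonymous pushes from anyone who can reach your Apache, so keep it to trusted internal networks.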

Ohai Plugin for OpenVZ VMs to get public IP Address

Media Temple uses the OpenVZ virtualization system, and I have quite a few Media Temple servers under Chef management. The one thing that has made management difficult is that during a Chef run, ohai does not pick up the server’s public address as the default IP, which means I cannot run knife to execute commands from my workstation.

For example, when I run knife node show mydv.server.com I get the following:

$ knife node show mydv.server.com

Node Name:   mydv.server.com
Environment: production
FQDN:        mydv.server.com

Kinda sucks. If I try to execute a command on all of my MT servers, say something like knife ssh 'tags:mt' 'sudo chef-client', I will get a ton of failed connection errors because knife is trying to connect to an unroutable address.

The solution is to get ohai to return the correct IP. OpenVZ uses virtual interfaces, and the actual public IP is assigned to one of them (venet0:0), while the main interface is given a non-routable address. This ohai plugin will retrieve the correct IP.

provides "ipaddress"
require_plugin "#{os}::network"
require_plugin "#{os}::virtualization"

if virtualization["system"] == "openvz"
  network["interfaces"]["venet0:0"]["addresses"].each do |ip, params|
    if params["family"] == "inet"
      ipaddress ip
    end
  end
end

Put this in your ohai plugins directory for Chef (/etc/chef/plugins/openvz.rb), and when Chef runs it will get the correct IP address.
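What the plugin does is simple: on an OpenVZ guest, take the first inet (IPv4) address on venet0:0. The same selection can be sketched in shell against sample interface data (the sample line and address are placeholders, standing in for real `ip addr show venet0:0` output):

```shell
# Pull the first IPv4 address out of interface output, mirroring the
# plugin's selection of the first "inet" family address on venet0:0.
# 203.0.113.42 is a placeholder address.
sample='inet 203.0.113.42 peer 203.0.113.1/32 scope global venet0:0'
printf '%s\n' "$sample" | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
# prints 203.0.113.42
```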

Now when I run knife node show mydv.server.com:

$ knife node show mydv.server.com

Node Name:   mydv.server.com
Environment: production
FQDN:        mydv.server.com


VirtualBox Bug related to Sendfile

I have been doing more web development with Vagrant and VirtualBox. It’s a nice way to keep my dev environment nearly the same as my production environments. Recently I was doing some front-end coding and was running into the most bizarre errors with my JavaScript.

It pays to read the documentation. Had I read it more thoroughly, I would have known about this before debugging it and wasting time. Oops!

Turns out there is a bug with VirtualBox’s shared folder support and sendfile. This bug was preventing the VM from serving new versions of any file in the shared directory. Obviously this is not good for web development.

The solution is easy enough. You just have to disable sendfile in your web server.

In Apache:

EnableSendFile Off

In Nginx:

sendfile off;


The Vagrant documentation does include some information about it: https://docs.vagrantup.com/v2/synced-folders/virtualbox.html

Setup Development Environment with Vagrant & Chef


I use Chef to manage and provision new staging and production servers in our company. It takes a lot of the headache out of managing multiple servers and allows me to fire up new web & data servers for our projects with ease. I have several cookbooks that I use to configure servers and to setup/configure websites. In a nutshell, it’s rad, and website deployments have never been easier.

For my local development environment I currently run Ubuntu, with Apache, Nginx, PHP 5.3, Ruby 1.9.3, Ruby 2.0, MySQL 5.5, etc. Some of our projects use Node, Redis, or MongoDB. Ideally I would offload all these different servers into virtual machines suited and designed for the task, identical in configuration to the staging and production servers.

Enter Vagrant. Vagrant is a tool to configure development environments.

How I expect this to work:

* I want to use my native development tools (NetBeans, Sublime Text, Git, etc) on my workstation.
* I want to use the VM to serve the project files.
* I do not want to have to deploy my local code to the VM for testing and review.
* I will mount my project directory as a shared path in the VM.
* I will build the VM using my Chef cookbooks.

Ok, not so bad. Vagrant makes this really easy.

Install Vagrant.

wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.3_x86_64.deb 
sudo dpkg -i vagrant_1.6.3_x86_64.deb

Once installed you want to add a box to build your VM from. There are many to choose from. I prefer CentOS for my servers, and will add one from http://www.vagrantbox.es/. All of our production machines use CentOS or RHEL, so my development VM should use CentOS.

vagrant box add https://github.com/2creatives/vagrant-centos/releases/download/v6.4.2/centos64-x86_64-20140116.box --name centos-6.4

Now you need to create your Vagrant project. I have considered creating the Vagrant config file in my project and putting it under version control. Currently I just have a directory for Vagrant projects. Either way.

cd ~/projects/mywebproject.com
vagrant init centos-6.4

This will create a Vagrant config file. The Vagrantfile describes how to build the VM: it defines the network settings, shared directories, and how to provision the machine using Chef. When creating the Vagrantfile you can pass the name of the box to use. I used the box we added earlier, “centos-6.4”. If you leave this parameter off you can always edit the Vagrantfile to change it.

Configuration is rather minimal. Not a whole lot we need to do to get something running. Open Vagrantfile in your text editor.

I want my VM to use the local network. You could opt to use the private network, which I believe is the default. You can also set up port forwarding here; for example, forwarding requests to http://localhost:8080 to port 80 on the VM. I just set up a public network for my VMs because oftentimes I have people in the office review work on my server from their own machines.

config.vm.network :public_network

Let’s set up the shared folders. The first path is the local directory you want to share, relative to the Vagrantfile. The second is the mount point in the VM. Since the Vagrantfile is at the root of my project, I will set the shared directory to the current directory. Set the mount point to someplace you will want to serve your project from. I tend to do things the Enterprise Linux way and put my web projects under /var/www.

config.vm.synced_folder "./", "/var/www/mywebproject.com"

Now, to tell Vagrant how to provision the VM using Chef:

config.omnibus.chef_version = :latest
config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "/home/user/Development/my-chef/cookbooks"
    chef.roles_path = "/home/user/Development/my-chef/roles"
    chef.data_bags_path = "/home/user/Development/my-chef/data_bags"
    chef.add_recipe "my-cookbook::role-apache-server"
    chef.add_recipe "my-cookbook::role-mysql-server"
    chef.add_recipe "my-cookbook::role-php-server"
    chef.add_recipe "my-cookbook::site-my-kickass-site.com"

    chef.json = {
      "mysql" => {
        "server_root_password" => "MyMysqlRootPassword"
      }
    }
end


This is pretty straightforward. We tell Vagrant where our Chef data is stored, which recipes to run, and pass along any attributes we want Chef to use.

I would like to explain very briefly how I organize my Chef cookbooks. I use a fat-recipe/skinny-role approach. I have a cloud server cookbook to manage AWS and Rackspace instances, plus role recipes and site recipes. A role recipe defines how the node should act: is it an Nginx server? Or a MySQL server? Will it run PHP-FPM? And so on. Then I have a site recipe which defines how the website will be configured: it creates an Apache vhost file, sets up a PHP-FPM pool, creates an Nginx proxy to a NodeJS app. I also have data bags that correspond to different environments, so production uses a different hostname than staging, has a different MySQL configuration, and so on. Now when Chef runs, it detects the environment, loads the corresponding data bag, and configures the site and node.

There is one more step before we can start up our VM. We need to install the omnibus vagrant plugin.

vagrant plugin install vagrant-omnibus

The Omnibus Vagrant plugin automatically hooks into the Vagrant provisioning middleware and will bootstrap Chef onto the VM for us. It is required if you are going to provision the VM with Chef.

Ok, when that is installed you can fire up the VM:

vagrant up

And there we go. You can continue working on your project locally, but serve it using a VM configured identically to your production servers. Have fun with your kick ass new dev environment!

The resulting Vagrantfile should look something like:


    VAGRANTFILE_API_VERSION = "2"

    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

        config.vm.box = "centos-6.4"

        config.vm.network :public_network

        config.vm.synced_folder "./", "/var/www/mywebproject.com"

        config.omnibus.chef_version = :latest
        config.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = "/home/user/Development/my-chef/cookbooks"
            chef.roles_path = "/home/user/Development/my-chef/roles"
            chef.data_bags_path = "/home/user/Development/my-chef/data_bags"
            chef.add_recipe "my-cookbook::role-apache-server"
            chef.add_recipe "my-cookbook::role-mysql-server"
            chef.add_recipe "my-cookbook::role-php-server"
            chef.add_recipe "my-cookbook::site-my-kickass-site.com"

            chef.json = {
              "mysql" => {
                "server_root_password" => "MyMysqlRootPassword"
              }
            }
        end
    end