Enable status for PHP-FPM

Accessing the PHP-FPM status screen is easy enough. First, set pm.status_path in your PHP-FPM pool configuration:

pm.status_path = /status
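
The Nginx block below also matches a /ping endpoint; if you want that health check too, enable it in the same pool:

ping.path = /ping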

Then add the following block to your Nginx vhost conf:

    location ~ ^/(status|ping)$ {
        access_log off;
        allow 127.0.0.1;
        allow 192.168.1.0/24; ##### YOU WILL WANT TO CHANGE THIS TO YOUR IP ADDR #####
        deny all;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm-www.sock;
    }

Restart php-fpm and nginx and then browse to http://<SERVERIP>/status. You will be presented with some useful information on the current status of your PHP-FPM pool.
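
Exactly how you restart them depends on your distro and init system (and the php-fpm service name varies), but it is usually something like:

sudo systemctl restart php-fpm && sudo systemctl restart nginx
# or, on SysV-style systems:
sudo service php-fpm restart && sudo service nginx restart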

By default, /status shows a short summary of the pool. To see output for each process, append ?full to the URL: http://<SERVERIP>/status?full. You can also pass ?json to get JSON output, which is handy if you want to feed the data into a log or stats processing tool (Graylog or Logstash come to mind).
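
A quick way to check each format, from a host allowed by the Nginx rules above:

curl 'http://<SERVERIP>/status'
curl 'http://<SERVERIP>/status?full'
curl 'http://<SERVERIP>/status?json'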

Here is a breakdown of the stats presented to you:

pool – the name of the pool.
process manager – static, dynamic or ondemand.
start time – the date and time FPM started.
start since – the number of seconds since FPM started.
accepted conn – the number of requests accepted by the pool.
listen queue – the number of requests in the queue of pending connections (see backlog in listen(2)).
max listen queue – the maximum number of requests in the queue of pending connections since FPM started.
listen queue len – the size of the socket queue of pending connections.
idle processes – the number of idle processes.
active processes – the number of active processes.
total processes – the number of idle + active processes.
max active processes – the maximum number of active processes since FPM started.
max children reached – the number of times the process limit has been reached.

Use this information to tune your pool configuration.
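
For example (a sketch only; these numbers are placeholders, not recommendations), if max children reached keeps climbing or the listen queue is regularly non-zero, you probably need to allow more workers in the pool config:

pm = dynamic
pm.max_children = 20        ; raise this if "max children reached" keeps incrementing
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10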

My Pantheon + Jenkins Process


Here is a rough outline of my Pantheon + Jenkins process. I like my code in BitBucket. I also like Pantheon (check them out). The Pantheon workflow is all about being the source of truth for your code. This is fine, and actually I dig it because it promotes good practices. However, my company and I already have many projects in BitBucket, and we are using Jenkins more and more for Continuous Integration. We want to keep using BitBucket as our source of truth, with our existing workflows, but we also want to use Pantheon.

Already this is problematic, and managing it developer by developer is going to be prone to error. You have to push branches to two remotes and probably deal with some ugly merging, and potentially other issues.

What I want to do is push to BitBucket and use a commit hook to trigger Jenkins to deploy our code to Pantheon automatically. I use Pantheon Multidev (at work) and this process assumes that the Multidev environment already exists. It will not create it for you (yet).

The BitBucket commit hook

Here is how this is going to work: we set up a POST hook in BitBucket and point the POST URL at a PHP script (our receiver script) on our Jenkins server (or just set up a small EC2 instance to host it if you don’t want to install PHP on your Jenkins server). The POST payload includes the list of modified branches. I have called my receiver script jenk.php because, heh. I have set up some environment variables with my Jenkins username and access token so that I can make API requests to the Jenkins server.

The POST hook URL looks something like: http://jenkins.server/jenk.php?project=<project-name>&token=BUILD-PROJECT

Replace <project-name> with the name of your project, and replace BUILD-PROJECT with your build token.

jenk.php

This is the receiver script. It gets data from your commit, and submits a build request to Jenkins. Easy peasy.

<?php
if (!getenv('JENKINS_USERNAME') || !getenv('JENKINS_ACCESS_TOKEN')) {
    die('No jenkins access credentials available');
}

if (!isset($_GET['token'])) {
    die('No build token');
}

if (!isset($_GET['project'])) {
    die('No project');
}

if (!isset($_POST['payload'])) {
    die('No payload, go home');
}

$token    = filter_input(INPUT_GET, 'token');
$project  = filter_input(INPUT_GET, 'project');
$payload  = json_decode($_POST['payload'], true);

$jenkins = array(
    'endpoint' => 'jenkins.server/job',
    'username' => getenv('JENKINS_USERNAME'),
    'access_token' => getenv('JENKINS_ACCESS_TOKEN')
);

if (!empty($payload['commits'])) {
    foreach ($payload['commits'] as $commit) {
        if (!empty($commit['branch'])) {

            // Trigger a parameterized build for this branch. http_build_query()
            // urlencodes the values so branch names like "feature/foo" survive.
            $query = http_build_query(array(
                'token'           => $token,
                'BRANCH_TO_BUILD' => $commit['branch'],
                'delay'           => '0sec',
            ));

            $url = 'http://' . $jenkins['username'] . ':' . $jenkins['access_token'] . '@' . $jenkins['endpoint'] . '/' . $project . '/buildWithParameters?' . $query;

            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_POST, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            $result = curl_exec($ch);
            curl_close($ch);
        }
    }
}
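
You can smoke-test the receiver without BitBucket by faking the payload (the project name and branch here are placeholders):

curl --data-urlencode 'payload={"commits":[{"branch":"develop"}]}' \
  'http://jenkins.server/jenk.php?project=<project-name>&token=BUILD-PROJECT'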

The Jenkins + Pantheon Deploy Configuration

I have configured a parameterized build with Jenkins. The single defined parameter is “BRANCH_TO_BUILD”.

Under Source Code Management I have added the two git remotes: one for BitBucket called “origin” and one for Pantheon called, of course, “pantheon”. In the “branches to build” section I added remotes/origin/$BRANCH_TO_BUILD.

Under Build Triggers I used “BUILD-PROJECT”.

Under Build I added an “execute shell” task to check out our branch from origin so that we can push to the pantheon remote: git checkout remotes/origin/$BRANCH_TO_BUILD

Finally, I added a “Git Publisher” post build task configured to push the “$BRANCH_TO_BUILD” to the “pantheon” remote.
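
Taken together, the job ends up doing something roughly equivalent to this (a sketch; the actual push is handled by Git Publisher):

git checkout remotes/origin/$BRANCH_TO_BUILD
git push pantheon HEAD:refs/heads/$BRANCH_TO_BUILD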

To summarize: make code changes, commit, and push to the BitBucket remote; the hook fires and sends a POST to our receiver script, which sends a POST to Jenkins with the $BRANCH_TO_BUILD parameter; and if the build passes, the branch is pushed to Pantheon. If everything worked, you will see Pantheon converging your app in the dashboard! If it fails, well, check the console output from the build.

And that’s it. We can continue using our regular, non-Pantheon workflow alongside Pantheon. The process and workflow stay consistent!

A WordPress ajax handler for custom themes

Something I have been noodling on is a better way to handle ajax requests in my custom themes. It seems to me that a relatively complex theme ends up with a lot of add_action calls for custom ajax handlers, and this could be simplified. Every time a new endpoint is required we have to add two new add_action calls to our theme. Maybe a better approach is to write a single ajax endpoint that routes requests to the proper classes/methods?

The goal would be that all ajax requests are run through our custom ajax handler and routed to the appropriate controller & method for execution. This method allows us to encapsulate functionality into separate classes/modules instead of cluttering functions.php with ajax functions. Potentially this makes the code more reusable.

With some sane configuration I think this could be a good way to build a flexible ajax interface into your theme. To protect against malicious calls, classes are namespaced (\Ecs\Modules\) and the methods are prefixed (ajax*). This should stop most attempts to execute arbitrary theme code. There are issues, though. A fatal error when a class does not exist could expose information about the environment, and we have to make sure that the $_REQUEST params are well sanitized. This would need to be scrutinized and tested for security issues. We don’t want someone to be able to craft a request that executes code we don’t explicitly want executed.

Here is an example structure in a hypothetical Theme class.

class Theme
{

    public function __construct()
    {
        ... snip ...

        add_action('wp_ajax_ecs_ajax', array($this, 'executeAjax'));
        add_action('wp_ajax_nopriv_ecs_ajax', array($this, 'executeAjax'));
    }

    ... snip ...

    /**
     * Simple interface for executing ajax requests
     *
     * Usage: /wp-admin/admin-ajax.php?action=ecs_ajax&c=CLASS&m=METHOD&_wpnonce=NONCE
     *
     * Params for ajax request:
     * c         = class to instantiate
     * m         = method to run
     * _wpnonce  = WordPress Nonce
     * display   = json,html
     *
     * Naming Conventions
     * Method names will be prefixed with "ajax_" and then run through the Inflector to camelize it
     *     - eg: "doThing" would become "ajaxDoThing", so you need a method in your class called "ajaxDoThing"
     *
     * Classes can be whatever you want. They are expected to be namespaced under \Ecs\Modules
     *
     * Output can be rendered as JSON, or HTML
     *
     * Generate a nonce: wp_create_nonce('execute_ajax_nonce');
     */
    public function executeAjax()
    {
        try {
            // We expect a valid wp nonce
            if (!isset($_REQUEST['_wpnonce']) || !wp_verify_nonce($_REQUEST['_wpnonce'], 'execute_ajax_nonce')) {
                throw new \Exception('Invalid ajax request');
            }

            // Make sure we have a class and a method to execute
            if (!isset($_REQUEST['c']) || !isset($_REQUEST['m'])) {
                throw new \Exception('Invalid params in ajax request');
            }

            // Make sure that the requested class exists and instantiate it
            $c = filter_var($_REQUEST['c'], FILTER_SANITIZE_STRING);
            $class = "\\Ecs\\Modules\\$c";

            if (!class_exists($class)) {
                throw new \Exception('Class does not exist');
            }

            $Obj = new $class();

            // Add our prefix and camelize the requested method
            // eg: "method" becomes "ajaxMethod"
            // eg: "do_thing" becomes "ajaxDoThing", or "doThing" becomes "ajaxDoThing"
            $Inflector = new \Ecs\Core\Inflector();
            $m = $Inflector->camelize('ajax_' . filter_var($_REQUEST['m'], FILTER_SANITIZE_STRING));

            // Make sure that the requested method exists in our object
            if (!method_exists($Obj, $m)) {
                throw new \Exception('Ajax method does not exist');
            }

            // Execute
            $result = $Obj->$m();

            // Render the response
            \Ecs\Helpers\json_response($result);

        } catch (\Exception $e) {
            \Ecs\Helpers\json_response(array('error' => $e->getMessage()));
        }

        // Make sure this thing dies so it never echoes back that damn zero.
        die();
    }
}
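
For a concrete (entirely hypothetical) example of a module this handler could route to, here is a sketch of an \Ecs\Modules\Search class with a single ajaxDoSearch() method; the class name, method name, and "term" parameter are made up for illustration:

<?php

namespace Ecs\Modules;

class Search
{
    /**
     * Handles: /wp-admin/admin-ajax.php?action=ecs_ajax&c=Search&m=do_search&_wpnonce=NONCE
     * (executeAjax() camelizes "ajax_do_search" into "ajaxDoSearch")
     */
    public function ajaxDoSearch()
    {
        // Sanitize our own request params; the handler only validates the nonce, class, and method.
        $term = isset($_REQUEST['term']) ? sanitize_text_field($_REQUEST['term']) : '';

        // Whatever we return is passed to the json_response() helper by executeAjax().
        return array(
            'term'    => $term,
            'results' => array(), // real query results would go here
        );
    }
}

A request to /wp-admin/admin-ajax.php?action=ecs_ajax&c=Search&m=do_search&term=linux&_wpnonce=NONCE would then come back as JSON from ajaxDoSearch().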

 

Reno Linux Users Group (RLUG) responsive website redesign

I recently launched a new website for the Reno Linux Users Group (RLUG). It’s a custom theme on top of WordPress.

It’s a custom responsive website that integrates with the Meetup.com API to sync the monthly RLUG meetings to the site. The custom theme is based around my sort-of-framework for WordPress theme development. The front-end is built using Grunt, Sass, and jQuery. I am using Composer and a custom framework for the backend.


My Vagrantfile

This is the Vagrantfile I am using for my development box at home and at work. It determines how much RAM is available on the host and how much I want to allocate, figures out how many CPUs are available, and configures the VM for me. I use NFS for shared folders. Finally, I am starting to use the “hostsupdater” plugin to keep my host machine’s hosts file current.

I would love to make that more dynamic, based on the Apache vhosts I have configured in the VM. Something to work towards I suppose.

The base box was provisioned using Packer and Chef.

# -*- mode: ruby -*-
# vi: set ft=ruby :

#
# Using the following plugins:
# `vagrant plugin install vagrant-hostsupdater`
#

PARAMS = {
    'projects_path' => "#{ENV['HOME']}/Development/projects",
    'hostname' => "#{ENV['USER']}-web",
    'ip' => '192.168.33.10',
    'max_ram' => 3 # divide system RAM by this: 4 = 1/4, 3 = 1/3, 2 = 1/2
}

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.box = "ns-centos65"
  config.vm.host_name = PARAMS['hostname']

  # Setup network
  config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", disabled: true
  config.vm.network :forwarded_port, guest: 22, host: 2210, auto_correct: true # ssh
  config.vm.network :forwarded_port, guest: 80, host: 8010 # http
  config.vm.network :forwarded_port, guest: 3000, host: 3000 # rails/webrick
  config.vm.network :forwarded_port, guest: 8080, host: 8080 # xhprof gui

  config.vm.network "private_network", ip: PARAMS['ip']

  # Setup synced folders
  config.vm.synced_folder PARAMS['projects_path'], "/var/www/projects", nfs: true
  
  # Setup /etc/hosts on host
  config.hostsupdater.aliases = %w(rlug.local roylindauer.local trap.local estimator.local)

  # Setup VM params based on host resources
  host = RbConfig::CONFIG['host_os']
  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  config.vm.provider "virtualbox" do |vb|
    # Give the VM a fraction (1/max_ram) of system memory & access to all cpu cores on the host
    if host =~ /darwin/
      cpus = `sysctl -n hw.ncpu`.to_i
      # sysctl returns bytes and we need to convert to MB
      mem = `sysctl -n hw.memsize`.to_i / 1024 / 1024 / PARAMS['max_ram']
    elsif host =~ /linux/
      cpus = `nproc`.to_i
      # meminfo shows KB and we need to convert to MB
      mem = `grep 'MemTotal' /proc/meminfo | sed -e 's/MemTotal://' -e 's/ kB//'`.to_i / 1024 / PARAMS['max_ram']
    else # sorry Windows folks, I can't help you
      cpus = 2
      mem = 1024
    end

    vb.gui = true
    vb.name = "Web_Development_LAMP"
    vb.customize ["modifyvm", :id, "--memory", mem]
    vb.customize ["modifyvm", :id, "--cpus", cpus]
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end

end
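
Assuming the ns-centos65 base box has already been added, bringing the box up is just:

vagrant plugin install vagrant-hostsupdater
vagrant up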

 

Ohai Plugin for OpenVZ VMs to get public IP Address

Media Temple uses the OpenVZ virtualization system, and I have quite a few Media Temple servers under Chef management. The one thing that has made management difficult is that, by default, Ohai reports 127.0.0.1 as the default IP address during a Chef run, which means I cannot use knife to execute commands from my workstation.

For example, when I run knife node show mydv.server.com, I get the following:

$ knife node show mydv.server.com

Node Name:   mydv.server.com
Environment: production
FQDN:        mydv.server.com
IP:          127.0.0.1

Kinda sucks. If I try to execute something on all of my MT servers, say knife ssh 'tags:mt' 'sudo chef-client', I will get a ton of failed connection errors because it is trying to connect to 127.0.0.1.

The solution is to get Ohai to return the correct IP. OpenVZ has virtual interfaces and the actual public IP is assigned to them, while the main interface, eth0, is given the IP 127.0.0.1. This Ohai plugin will retrieve the correct IP.

provides "ipaddress"
require_plugin "#{os}::network"
require_plugin "#{os}::virtualization"

if virtualization["system"] == "openvz"
  network["interfaces"]["venet0:0"]["addresses"].each do |ip, params|
    if params["family"] == "inet"
      ipaddress ip
    end
  end
end

Put this in your Ohai plugins directory for Chef (/etc/chef/plugins/openvz.rb), and when Chef runs it will pick up the correct IP address.
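
If your Ohai install does not already look in that directory, you can add it to the plugin path in client.rb (a sketch for the old Ohai 6-style configuration this plugin uses):

# /etc/chef/client.rb
Ohai::Config[:plugin_path] << '/etc/chef/plugins'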

Now when I run knife node show mydv.server.com, I get the correct IP:

$ knife node show mydv.server.com

Node Name:   mydv.server.com
Environment: production
FQDN:        mydv.server.com
IP:          10.20.30.40