Tag: Development

GitHub is now offering unlimited private repos for free. There is now literally no good reason to use Bitbucket or GitLab (especially paying money to self-host a GitLab instance). BRB moving all of my unfinished, half-baked, terrible projects to GitHub.

The Reality of Gutenberg & WordPress

Gutenberg is happening. It is coming and it is coming soon. I am not thrilled.

WordPress the CMS

Automattic have been trying to tell us for years that WordPress is more than a blogging platform, that it is a true and full Content Management System. "Look!", they say. "You have custom post types, and the metabox API allows you to create complex content types. It's a CMS!". Alas, it is all built on top of a blog, with very blog-specific design patterns. The underbelly of WP is ugly and hacky, even if it works "just fine" most of the time. Gutenberg is as direct a statement of intent as I could expect: WordPress is a blogging/marketing platform, not a CMS.

WordPress the Casual Site Builder

Gutenberg is a response to the threat of Squarespace and Wix and Medium. This update is for WordPress.com: to combat the threat of those other systems, to ensure dominance in the web publishing space, to increase market share by appealing to more casual users, small business owners, etc. Automattic can probably then generate leads for WordPress VIP. But I think this will come at the expense of developers like me, working at a digital agency, who drank the Kool-Aid about WordPress being a CMS, a tool that can be used for Higher-Ed, Government, and Healthcare, not just for a blog or a simple marketing site. I am ready to move on to real CMSes for those projects. As for the marketing and bloggy sites, Squarespace has a much more robust block and content building experience. Why even bother with WordPress at this point? Gutenberg is not anywhere close to those systems, yet. It will probably get there, but will it matter, and will it be worth it?

Gutenberg the Editor

The new editorial process is nice, but super limiting. I hope that more blocks are in development, otherwise this feels dead on arrival. I am having some fun with it on my site, checking things out, playing with the shiny new toy, but after a few posts it's already starting to feel restrictive. Theme builders give me so many more options for how to present content. Sure, they are nasty and terrible, but they offer so much more out of the box. I kinda hope that WordPress does not depend on 3rd-party plugins to extend block functionality. They have come this far; surely they can offer some more variety?

For example, how about letting me insert a block above the title? Typically you start the page with a large hero, and the primary h1 follows. You can do this now by excluding the post title, but Gutenberg does not let you change the post slug (why?). And if you don't have an SEO plugin installed, your <title> will be empty.

How about letting me define a wrapper element around a few blocks to give me more HTML to hang a frontend design off of? Like making a group of blocks.

How about a related posts block that is not just a list of post links, but something that allows an editor to create a curated list, or a dynamically generated list, that includes thumbnails, excerpts, tags, whatever?

If it ain't broke…

Creating custom blocks has a much higher barrier to entry than, say, creating custom field sets with ACF Pro, or using Metabox.io to create modular content blocks. We typically have unique design constraints and features in our content patterns that do not allow for perfect modular re-use across websites. In other words, we make boutique websites for our clients. ACF and Metabox.io make this extremely easy. Gutenberg blocks are going to take a lot more time to build and test (now we have to test integration on the backend?!).

Ok, I am starting to rant. I'll end this by saying I don't think Gutenberg needs to be THE editor for WordPress, just AN editor for WordPress. Leave it as an optional plugin.

Profiling and Debugging a PHP app with Xdebug and Docker

I have started using an IDE again (PHPStorm) so that I could debug some applications and do some basic app profiling. I want to use Xdebug to profile my PHP apps. I am using Docker Compose on Windows 10. I have made this very complicated for myself but here we go.

The directory structure of my app looks like:

/web (contains my php app)

First thing is to get Xdebug setup in the PHP container.
I am using a custom Dockerfile for my PHP container where I install a ton of additional modules and packages, install wp-cli, and copy a custom php.ini to the container.

Here is the entire Dockerfile for the PHP container:

FROM php:7.0-fpm

# Install some required tools
RUN apt-get update && apt-get install -y sudo less

# Install PHP Extensions
RUN apt-get update && apt-get install -y \
bzip2 \
libbz2-dev \
libc-client2007e-dev \
libjpeg-dev \
libkrb5-dev \
libldap2-dev \
libmagickwand-dev \
libmcrypt-dev \
libpng12-dev \
libpq-dev \
libxml2-dev \
mysql-client \
imagemagick \
xfonts-base \
xfonts-75dpi \
&& pecl install imagick \
&& pecl install oauth-2.0.2 \
&& pecl install redis-3.0.0 \
&& pecl install xdebug \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-configure imap --with-imap-ssl --with-kerberos \
&& docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
&& docker-php-ext-enable imagick \
&& docker-php-ext-enable oauth \
&& docker-php-ext-enable redis \
&& docker-php-ext-enable xdebug \
&& docker-php-ext-install \
bcmath \
bz2 \
calendar \
gd \
imap \
ldap \
mcrypt \
mbstring \
mysqli \
opcache \
pdo \
pdo_mysql \
soap \
zip \
&& apt-get -y clean \
&& apt-get -y autoclean \
&& apt-get -y autoremove \
&& rm -rf /var/lib/apt/lists/* /var/lib/cache/* /var/lib/log/* /tmp/*

# Custom PHP Conf
COPY ./php.ini /usr/local/etc/php/conf.d/custom.ini

RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
&& mv wp-cli.phar /usr/local/bin \
&& chmod +x /usr/local/bin/wp-cli.phar \
&& ln -s /usr/local/bin/wp-cli.phar /usr/local/bin/wp

# Xdebug
RUN mkdir /tmp/xdebug
RUN chmod 777 /tmp/xdebug

# Run this container as "webuser"
RUN groupadd -r webuser && useradd -r -g webuser webuser
RUN usermod -aG www-data webuser
USER webuser

custom php.ini under ./build/docker/php/:

file_uploads = On
memory_limit = 512M
upload_max_filesize = 256M
post_max_size = 256M
max_execution_time = 600
display_errors = On
error_reporting = E_ALL

xdebug.profiler_output_dir = "/tmp/xdebug/"
xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.profiler_append = 1
xdebug.profiler_enable_trigger = 1
xdebug.trace_output_dir = "/tmp/xdebug/"
xdebug.remote_enable = on
xdebug.remote_autostart = true
xdebug.remote_handler = dbgp
xdebug.remote_mode = req
xdebug.remote_port = 9999
xdebug.remote_log = /tmp/xdebug/xdebug_remote.log
xdebug.idekey = MYIDE
xdebug.remote_connect_back = 1

Some important things here. I am creating a directory to store Xdebug output (/tmp/xdebug), which will be used by another container to parse and display the output. In the custom php.ini we tell Xdebug to store its output in this directory. We also configure Xdebug to enable remote debugging so that we can debug from our IDE. If you do not want to debug EVERY request, disable remote_autostart; you will then need to pass a specific GET/POST parameter to start the debugger (XDEBUG_SESSION_START) or, with profiler_enable_trigger on, the profiler (XDEBUG_PROFILE). Make note of the remote_port and idekey values. We need these when we configure our IDE.
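A quick way to confirm the container actually picked up these settings is a throwaway helper like the one below (my own little script, not part of any package): drop it in the web root and load it in a browser. `ini_get()` returns false for settings that do not exist, so it also tells you if the Xdebug extension failed to load at all.

```php
<?php
// Throwaway config check: prints the Xdebug settings this post cares about.
// ini_get() returns false when a setting does not exist, which is what you
// will see if the Xdebug extension did not load.
$keys = [ 'xdebug.remote_port', 'xdebug.idekey', 'xdebug.profiler_output_dir' ];
foreach ( $keys as $key ) {
    $value = ini_get( $key );
    echo $key . ' = ' . ( false === $value ? '(not set)' : $value ) . PHP_EOL;
}
```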

In your IDE you would configure Xdebug to listen on port 9999 for connections and to use the IDE Session Key MYIDE to ensure you are only debugging requests that use that session key (really only necessary for complicated setups with multiple apps on the same server).

There are two environment variables that I set on the PHP container that are required to make this all work.


build: ./build/docker/php/
expose:
    - 9000
links:
    - mysql
volumes:
    - .:/var/www/html
    - /tmp/xdebug
environment:
    XDEBUG_CONFIG: "remote_host="
    PHP_IDE_CONFIG: "serverName=XDEBUG"

XDEBUG_CONFIG is required to tell Xdebug where the client is running. To be honest, I am not sure if this is actually required in general or only because of PHPStorm. I am using Docker Toolbox, so I use the IP of the VirtualBox VM where the Docker env is running. It would be great to not need this param, as it would be more portable.

The variable PHP_IDE_CONFIG though is required for PHPStorm, and it tells my IDE which server configuration to use.

Neither of these may be required if you are using native docker and a different IDE.  /shrug

The first part of this is done. We can now debug an app from our IDE. The second thing I wanted to do was run a profiler and inspect the results. Xdebug will output cachegrind files. We just need a way to inspect them. There are some desktop apps you can use, like KCacheGrind, QCacheGrind, WinCacheGrind, etc. Your IDE may even be able to parse them (PHPStorm is currently not able to, for some reason). Or you can use a web-based system. I opted for a web-based system using WebGrind. There is, conveniently, a docker container for this.

I configured the php container to expose /tmp/xdebug as a shared volume, which is where Xdebug is configured to output cachegrind files. Then I configured the webgrind container to mount that volume. I also pass an environment variable to tell WebGrind where to find the cachegrind files:


image: devgeniem/webgrind
ports:
    - 8081:80
volumes_from:
    - php
environment:
    XDEBUG_OUTPUT_DIR: "/tmp/xdebug"

With that we can browse to the webgrind container (port 8081) and start digging into the app profiles.

Complete docker-compose.yml

mysql:
    image: mysql:latest
    volumes_from:
        - mysql_data
    environment:
        MYSQL_ROOT_PASSWORD: secret
        MYSQL_DATABASE: project
        MYSQL_USER: project
        MYSQL_PASSWORD: project
    expose:
        - 3306

mysql_data:
    image: tianon/true
    volumes:
        - /var/lib/mysql

nginx:
    build: ./build/docker/nginx/
    ports:
        - 80:80
    links:
        - php
    volumes:
        - .:/var/www/html

php:
    build: ./build/docker/php/
    expose:
        - 9000
    links:
        - mysql
    volumes:
        - .:/var/www/html
        - /tmp/xdebug
    environment:
        XDEBUG_CONFIG: "remote_host="
        PHP_IDE_CONFIG: "serverName=XDEBUG"

phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
        - 8080:80
    links:
        - mysql
    environment:
        PMA_HOST: mysql

webgrind:
    image: devgeniem/webgrind
    ports:
        - 8081:80
    volumes_from:
        - php
    environment:
        XDEBUG_OUTPUT_DIR: "/tmp/xdebug"

WP Transients must be used responsibly

We ran into an interesting issue with WooCommerce at work. First, here is the subject of the support request we got from our hosting provider:

The site is generating ~150MB/sec of transaction logs, filling 500GB of diskspace

Holy. Shit. A WordPress site should not be generating that much data. 150MB per second? Wow.

How? Why?

The simple explanation is that there is a bottleneck in WooCommerce: the filtered layered nav query counts are all cached in a single transient record.

// We have a query - let's see if cached results of this query already exist.
$query_hash    = md5( $query );
$cached_counts = (array) get_transient( 'wc_layered_nav_counts' );

if ( ! isset( $cached_counts[ $query_hash ] ) ) {
    $results                      = $wpdb->get_results( $query, ARRAY_A );
    $counts                       = array_map( 'absint', wp_list_pluck( $results, 'term_count', 'term_count_id' ) );
    $cached_counts[ $query_hash ] = $counts;
    set_transient( 'wc_layered_nav_counts', $cached_counts, DAY_IN_SECONDS );
}

What is happening here is that a SQL query based on the currently selected filters is hashed and shoved into an array that is saved to a single transient record. This means that every single interaction with the filters requires a read, and possibly a write, of that one transient record. A site with any sort of traffic and, let's say, 9 filter widgets (with around 50 total options) will potentially generate a huge number of unique queries. No wonder we were pushing 150MB/s.
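To put a rough number on "huge" (back-of-the-envelope, using the approximate option count from our site): if each of ~50 filter options can be independently on or off, the number of distinct filter combinations, and therefore distinct $query_hash entries that can pile up inside that one transient, is bounded by 2^50.

```php
<?php
// Back-of-the-envelope math with the rough numbers from our site: ~50 filter
// options, each independently toggled. Every distinct combination produces a
// distinct $query_hash, and every hash is another entry packed into the
// single 'wc_layered_nav_counts' transient.
$options      = 50;
$combinations = 2 ** $options;

printf( "%d options => %.1e possible filter combinations\n", $options, $combinations );
```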

Our quick, temporary, patch was to simply remove the transient.

// We have a query - let's see if cached results of this query already exist.
$query_hash    = md5( $query );
$cached_counts = array();
$results                      = $wpdb->get_results( $query, ARRAY_A );
$counts                       = array_map( 'absint', wp_list_pluck( $results, 'term_count', 'term_count_id' ) );
$cached_counts[ $query_hash ] = $counts;

You can see the massive improvement in performance after removing the transients. We applied the patch around 9:47 am.

Object caching would probably help. I was surprised at how much of an improvement we saw by simply removing the transient.

I think a good solution here would be to use a unique transient for each hashed query, not a single transient for EVERY hashed query. It would work fine on small WP installs and it would scale.
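Here is a sketch of that idea. The get_transient()/set_transient() calls mirror the real WordPress transient API, but the array-backed stubs, the key prefix, and the placeholder result are mine, only there to make the sketch self-contained and runnable outside WordPress:

```php
<?php
// Array-backed stand-ins for WordPress's transient API, only here so the
// sketch runs outside WordPress. A real patch would use core's
// get_transient()/set_transient() directly.
$GLOBALS['transient_store'] = array();

function get_transient( $key ) {
    return isset( $GLOBALS['transient_store'][ $key ] ) ? $GLOBALS['transient_store'][ $key ] : false;
}

function set_transient( $key, $value, $expiration ) {
    $GLOBALS['transient_store'][ $key ] = $value;
    return true;
}

// The actual idea: one small transient per hashed query, rather than every
// hash packed into (and rewritten to) the single 'wc_layered_nav_counts' row.
function get_layered_nav_counts( $query ) {
    $key    = 'wc_layered_nav_counts_' . md5( $query );
    $counts = get_transient( $key );
    if ( false === $counts ) {
        $counts = array( 'term_count' => 0 ); // stand-in for the $wpdb->get_results() work
        set_transient( $key, $counts, 86400 ); // DAY_IN_SECONDS
    }
    return $counts;
}

get_layered_nav_counts( 'query for filter combination A' );
get_layered_nav_counts( 'query for filter combination B' );
// Two independent cache rows now exist; nothing contends on a single record.
```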

I will try it out and see what we get and if the results are good I will submit a PR to the woocommerce devs.


update 1:

I said we should use transients responsibly. In this case, I would be creating potentially 15k additional (tiny) transient records. Is that more responsible than one massive 1MB transient?

The WooCommerce devs have asked that I run some performance tests. Going to do so and report back!

update 2:

Not having any transients at all is better at scale in our case, since the SQL query that is executed is not that heavy, we have decent page caching via Varnish, and our MySQL server is well tuned. Every single request to a page with the layered nav makes N reads of the transient data, and if data has to be written, that is N updates per request. This single record becomes a bottleneck because the row is locked while it is being written to. Redis or Memcached would be a better solution; WP transients are just bad on their own at this scale.

Development on Windows

It has been a few months since I got a new laptop with Windows 10. For the most part it has been pretty damn nice. The UI is pretty good, the native apps are not terrible, and Cortana is actually useful. It's fine, until I want to do web development. There are all of these little things that get in the way, all of these little compromises. Beware, rant coming up…

For example, SSH. PuTTY is the go-to for SSH on Windows, but it's not a joy to use, and now I have twice as many keys as I had before (because I need PPK; I cannot use my existing RSA keys). Microsoft has included an OpenSSH client and server beta with Windows 10, but it is not totally ready. It only works with ED25519 keys (again, I cannot use my existing RSA keys) because it does not support LibreSSL (yet). Fine, it's just nice to be able to SSH directly from PowerShell. However, docker-machine no longer works from PowerShell, since it either cannot find the SSH binary, or it is using the new OpenSSH binary but cannot use the docker RSA key to connect because it only supports ED25519, and for some reason the command line arg `--native-ssh` only works if you are trying to SSH into the docker VM, and arrghghhghg. Ok, so now I have to use the Docker quick start terminal and PowerShell. Really the issue is OpenSSH. So, ok, it's beta. I will just uninstall it and go back to PuTTY.

I use Sublime Text. Sublime Text does not have a command line utility to open the app on Windows, which is annoying, but whatever. I will use Visual Studio Code because I really like being able to launch my editor from the CLI. But Visual Studio Code has a scrolling bug that won't get fixed until Electron is updated (so, never I guess?) and is temporarily worked around by setting the app to fullscreen and back. So now that's part of my workflow: quickly maximizing/unmaximizing a window. Ok, so back to Sublime Text and navigating Explorer to open projects.

I am not able to use native Docker without upgrading to Windows 10 Pro. Native Docker only supports Hyper-V, so I cannot use VirtualBox. I have projects that are dependent on Vagrant+VirtualBox. I am pretty sure that Docker Toolbox is on the way out. Not sure what to do here, other than dockerize my vbox projects.

PowerShell is bad for me. It is powerful, but it is very verbose and requires significant knowledge of Windows internals to use effectively (this is not a negative, just a reality). The output of PowerShell is terrible. Sometimes I cannot even see what is on screen because the contrast of the colors is all fucked. Like, a blue PowerShell background with some content presented in, blue! WTF. Or the background of the text is black and the text color itself is straight up #0000FF blue. It's literally unreadable.

I have a lot to complain about. I do want to say that I think Windows has come a long way and the future looks bright, but it is just not totally there for me, yet, to do web development comfortably. A lot of my workflow has to change. I am not going to give up on it yet though. I think the takeaway here is that change can be difficult.

PS. Windows feels like a Linux. Inconsistent UI, requires arcane knowledge to use the CLI, but the CLI is required to do anything substantial, and in general things simply do not work.

Switched to Windows.

For many years I have used Apple computers at home and professionally. I have been a champion for their products, for their stability, ease of use, and Unix core. However, in the last few years I have become increasingly frustrated with them, from the operating system to the hardware. I have been looking for a way to untangle myself from Apple's "ecosystem" for a while now, and finally had enough when I purchased a brand new MacBook Pro. It was a terrible machine and was the last straw for me. Time to rip off the band-aid.

So here I am, running Windows 10, and finding that I actually enjoy it. Perhaps it's because it's new and shiny, or maybe Windows has just come a long way since I last used it seriously. It has not been that painful of a transition either. So much work has been done on the tools I use for web development that they are actually well supported on Windows. Microsoft is not "sexy", or "cool", or anything like that. They aren't trying to be; they are making things for people who just want to get to work. I can buy a PC laptop that has a built-in SD reader, USB-3 and USB-C, upgradable RAM, hard drive expansion, and powerful video card options. That is no longer an option with Apple (it never really was, I guess). Apple does not make computers for "power users". They don't make computers for me any more. It is unfortunate, but whatever; it is just a tool, and for now I am choosing Windows as it feels like the superior tool.

I pushed Apple super hard at my job, for years, and now that our creative/development team is 100% powered by Apple computers, I'm out and moving back to Windows. Pretty funny, and I am sure I'll get some shit for it. All good though.

I don’t want to forget Linux though. I have tried, for years, to use some distribution of a Linux desktop as my daily driver, and it is simply not there. Not for me. Development on Linux is great, but doing anything creative is pretty terrible IMO.

Order Posts via “Weighted Pseudo Randomness”

A request we are getting more often is to show a list of posts, to elevate some of those posts above others, and to show the posts in a random order. Imagine a post type called “Sponsors”. Sponsors are tiered, like “Platinum”, “Gold”, “Silver”, etc. We want the Platinum sponsors to appear before Gold, Gold before Silver, and so on. We don’t want to favor one particular Platinum sponsor though, we want them to be randomized but ordered by the tier.

The number one rule we had for implementing this feature is that we could not break existing WordPress functionality. Meaning, pagination has to work, and any sort of post filtering that existed on the site must also continue to work. An additional requirement is that we could not have duplicate posts show up as we paginated results. Hence, pseudo random.

This is achieved by ordering the result set by a weight value (aka the tier) and then randomizing those results using MySQL's RAND() function. RAND() takes an optional seed parameter. If we pass the same seed value to the query on each request, we maintain the same random order as we paginate through posts, which ensures we do not get duplicates. I generate the seed with date('ymd'), so the value changes daily and re-randomizes the posts each day.

Weight is derived from the tier the posts are assigned to. In my case we use ACF, with a field that lets a user select a single value from a Tier taxonomy. Knowing this, I used the postmeta value of the selected term id to get the slug of the term itself, then used a CASE statement in the query to assign a weight value based on that slug. Tier1 is assigned 1, tier2 is assigned 2, and if there is no term the weight is 9999 (so that untiered posts always show up after tiered ones). The CASE statement looks like:

SELECT wp_posts.*, CASE weightterms.slug WHEN 'tier1' THEN 1 WHEN 'tier2' THEN 2 ELSE 9999 END AS weight FROM wp_posts ...

In order for this to work we need to JOIN the wp_terms table based on the metavalue of the selected tier taxonomy.

LEFT JOIN wp_postmeta as weightmeta ON (weightmeta.post_id = wp_posts.ID AND weightmeta.meta_key = "sk_tier")
LEFT JOIN wp_terms as weightterms ON (weightmeta.meta_value = weightterms.term_id)

The query basically looks like this when it is compiled (a simplified version of the resulting MySQL query, with 180226 standing in for the date('ymd') seed):

SELECT wp_posts.*,
    CASE weightterms.slug WHEN 'tier1' THEN 1 WHEN 'tier2' THEN 2 ELSE 9999 END AS weight
FROM wp_posts
LEFT JOIN wp_postmeta as weightmeta ON (weightmeta.post_id = wp_posts.ID AND weightmeta.meta_key = "sk_tier")
LEFT JOIN wp_terms as weightterms ON (weightmeta.meta_value = weightterms.term_id)
WHERE wp_posts.post_type = 'sponsors'
ORDER BY
    weight ASC,
    RAND(180226) ASC

The goal is to make WordPress write this query for us in the loop. We can do this using filters that modify the WP_Query. The first thing we need to do is to be able to identify the WP_Query so that we do not alter _other_ queries on the site. We only want to change the query that loads posts from our custom Sponsors post type. To do this we add a custom query_var to WordPress and then check for that query_var in our wp_query. Add the query var:

add_filter( 'query_vars', 'theme_query_vars_filter' );
function theme_query_vars_filter( $qvars ) {
    $qvars[] = 'is_pseudorandom_query';
    return $qvars;
}
We now inject this param into our main query using “pre_get_posts”. We do not want our listing of sponsor posts in the admin area of WordPress to be ordered randomly, so we need to check that we are not is_admin().

add_action( 'pre_get_posts', 'theme_pre_get_posts' );
function theme_pre_get_posts( $wp_query ) {
    if ( ! is_admin() && isset( $wp_query->query['post_type'] ) && $wp_query->query['post_type'] == 'sponsors' ) {
        $wp_query->set( 'is_pseudorandom_query', true );
    }
}

This function checks the wp_query object passed to it. If the post type is our custom post type, we set the "is_pseudorandom_query" query var to true. With this set we can now set up our wp_query filters. If you are using a custom WP_Query object you can pass "is_pseudorandom_query" as one of the $args:

$my_custom_query = new \WP_Query( [ "is_pseudorandom_query" => true ] );

Now to the WP_Query filters. The three filters we need are posts_fields to add our CASE statement, posts_join to add our custom LEFT JOINS, and posts_orderby to order by our new weight value and then by RAND().

add_filter( 'posts_fields', 'theme_sponsor_posts_fields', 10, 2 );
add_filter( 'posts_join', 'theme_sponsor_posts_join', 10, 2 );
add_filter( 'posts_orderby', 'theme_sponsor_posts_orderby', 10, 2 );

The functions:

function theme_sponsor_posts_fields( $select, $wp_query ) {
    if ( $wp_query->get( 'is_pseudorandom_query' ) == true ) {
        $select .= ", CASE weightterms.slug WHEN 'tier1' THEN 1 WHEN 'tier2' THEN 2 ELSE 9999 END AS weight";
    }
    return $select;
}

function theme_sponsor_posts_join( $join, $wp_query ) {
    if ( $wp_query->get( 'is_pseudorandom_query' ) == true ) {
        $join .= ' LEFT JOIN wp_postmeta as weightmeta ON (weightmeta.post_id = wp_posts.ID AND weightmeta.meta_key = "sk_tier")';
        $join .= ' LEFT JOIN wp_terms as weightterms ON (weightmeta.meta_value = weightterms.term_id)';
    }
    return $join;
}

function theme_sponsor_posts_orderby( $orderby, $wp_query ) {
    if ( $wp_query->get( 'is_pseudorandom_query' ) == true ) {
        $orderby = 'weight ASC, RAND(' . date( 'ymd' ) . ') ASC';
    }
    return $orderby;
}

In each function we inspect the passed $wp_query object to see if “is_pseudorandom_query” is set. If it is, then we modify the query.

And there it is. We can now order posts by tier, and then randomize each tier.
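One last note: the seeded-randomness idea is easy to sanity-check in plain PHP, using mt_srand() as an analogue for MySQL's RAND($seed). The post names below are made up; the point is just that the same seed yields the same "random" order, which is exactly why pagination stays duplicate-free.

```php
<?php
// PHP 7.1+ analogue of MySQL's RAND($seed): shuffle() draws from the seeded
// Mt19937 generator, so the same seed produces the same "random" order.
// That stability across requests is what keeps paginated results from
// repeating posts.
$posts = array( 'alpha', 'bravo', 'charlie', 'delta', 'echo' );

function seeded_shuffle( array $items, $seed ) {
    mt_srand( $seed );
    shuffle( $items );
    return $items;
}

$seed  = (int) date( 'ymd' ); // the same seed for every request today
$run_a = seeded_shuffle( $posts, $seed );
$run_b = seeded_shuffle( $posts, $seed );

var_dump( $run_a === $run_b ); // identical order on both "requests"
```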

The Perfect Website

    <!-- every single tracking and analytics service known to man -->
    <!-- Schema because HTML is not descriptive/semantic enough -->
    <h1>A single h1 because fuck html5</h1>
    <a href="/seo/silo/juice.html">literally nothing but links to pages of links.</a>
    <h2>We don't really care about document structure</h2>
    <a href="/seo/silo/notext.html">no text. just links</a>