SC DUG August 2019

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping solutions:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules

  1. It has to actually work for D8 (so modern PHP version, working database, etc.).
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).

Reporting

Be prepared to talk for about 5 minutes on how your solution would work.  Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Drupal Tome + Docksal + Netlify

Drupal Tome is a static site generator distribution of Drupal 8. It provides mechanisms for taking an entire Drupal site and exporting all the content to HTML for direct service. As part of a recent competition at SC DUG to come up with the cheapest possible Drupal 8 hosting, I decided to do a proof-of-concept level implementation of Drupal 8 with Docksal for local content editing and Netlify for hosting (total cost was just the domain registration).

The Tome project has directions for setup with Docker and for setup with Netlify, but they don't quite line up with each other (I followed the Docker instructions, then the Netlify setup, but had to chart my own course to get the site from the first project linked to the repo in the second). And since I'm getting used to using Docksal, when I had to fall back and do a bit of it myself I realized it was almost painfully easy to set up.

The first step was to go to the Tome documentation for Netlify and set up an account and a site from the template. There is a button in those directions to trigger the Netlify setup; I've added one here as well (but if this one fails, check to see if they updated theirs):

Deploy to Netlify

Log in with GitHub or a similar service, and let it create a repo for your project.

Follow Netlify's directions for setting up DNS so you can have the domain you want, and HTTPS (through Let's Encrypt). It took a couple hours for that detail to run right, but it eventually worked. For this project I chose a subdomain of my main blog domain: tome-netlify.spinningcode.org

Next go to GitHub (or whatever service you used) and clone the repository to your local machine. There is a generated README on that project, but the directions aren't 100% correct if you aren't cloning onto a machine with a working PHP environment. This is when I switched over to Docksal and ran the following series of commands:

fin init                 # set up and start the project's Docksal containers
fin composer install     # install Drupal and its dependencies inside the container
fin drush tome:install   # install the site from the content and config in the repo
fin drush uli            # generate a one-time login link

Then log into your local site using the domain from Docksal and the link from drush, and add some content.

Next we export the content from Drupal to send over to Netlify for deployment.

fin drush tome:static    # export the site as static HTML
git add .
git commit -m "Adding sample content"
git push

…now we wait while Netlify notices and builds the site…

If you look at the site a few minutes later the new content should be posted.

This is all well and good if I want to use the version of the site generated for the Netlify example, but I wanted to make sure I could do something more interesting. These days Drupal ships with an install profile called Umami that provides a more robust sample site than the more traditional Standard install.

So now let's try to get Umami onto this site. Go back to the terminal and have Tome reset everything (it'll warn you that you are about to nuke everything):

fin drush tome:init

…select Umami when it asks for a profile…and wait, because this takes a while…

Now just re-export the content and push it to your repo.

fin drush tome:static
git add .
git commit -m "Converting to Unami"
git push

And wait again, because this also takes a few minutes…

The Umami home page on my subdomain hosted at Netlify.

That really was all that was involved for a simple site. You can see my repository on GitHub if you want to see everything that was generated along the way.

The whole process is pretty straightforward, but there are a few things it helps to understand.

First, Netlify is actually regenerating the markup on their servers with this approach. The Drupal nodes, and other entities, are saved as JSON and then imported during the build. This makes the process reliable, but slow. Umami takes several minutes to deploy since Netlify is installing and configuring Drupal, loading the content, and generating the output. The build command provided in that template is clear enough to follow if you are familiar with composer projects:

command = "composer install && ./vendor/bin/drush tome:install -y && ./vendor/bin/drush tome:static -l $DEPLOY_PRIME_URL" 

One upside of this is that you can use a totally unrelated domain for your local testing and have it adjust correctly to the production domain. When you are using Netlify's branching workflow for managing dev, test, and production it also protects your work that way.

My directions above load a standard Docksal container because that's quick and easy, which includes MySQL, but Tome falls back to using a SQLite database since you can be more confident it is there. Again, this is reliable but slow. If I were going to do this on a more complete project I'd want a smaller Docksal setup, or to switch to using MySQL locally.

A workflow based on this approach might also struggle with concurrent edits or the complex configuration of large sites. It would probably make more sense to have the content created on a hidden, but traditional, server and then run through a different workflow. But for someone working on a series of small sites that are rarely updated, a totally temporary instance of the site that can be rapidly deployed to a device, have its content updated, be pushed out to production, and then be deleted locally until needed again could be a great fit.

The final detail to note is that there is no support for forms built into this solution. Netlify has support for forms, and Tome has a module that claims to connect to that service, but I wasn't able to quickly determine how to get it connected. I am confident there are solutions to this problem, but it is something that would take a little additional work.

Bypass Pantheon Timeouts for Drupal 8

Pantheon is an excellent hosting service for both Drupal and WordPress sites. But to make their platform work and scale well they have built a number of limits into the platform. These include process time limits and memory limits that are large enough for the vast majority of projects, but that from time to time run you into trouble on large jobs.

For data loading and updates their official answer is typically to copy the database to another server, run your job there, and copy the database back onto their server. That's fine if you can afford to freeze updates to your production site, set up a process to mirror changes into your temporary copy, or take on some other project overhead that can be limiting and challenging. But sometimes that's not an option, or the data load takes too long for that to be practical on a regular basis.

I recently needed to do a very large import of records into a Drupal database, and so started to play around with solutions that would allow me to ignore those time limits. We were looking at needing to do about 50 million data writes, and the running time was initially over a week to complete the job.

Since Drupal's batch system was created to solve this exact problem it seemed like a good place to start. For this solution you need a file you can load and parse in segments, like a CSV file, which you can read one line at a time. It does not have to represent the final state; you can use this to actually load data if the process is quick, or you can serialize each record into a table or a queue job to process later.

One quick note about the code samples: I wrote these based on the service-based approach outlined in my post about batch services and the batch service module I discussed there. It could be adapted to a more traditional batch job, but I like the clarity the wrapper provides for breaking this back down for discussion.

The general concept here is that we upload the file and then progressively process it from within a batch job. The code samples below provide two classes to achieve this. First is a form that provides a managed file field, which creates a file entity that can be reliably passed to the batch processor. From there the batch service takes over and uses a bit of basic PHP file handling to load the file into a database table. If you need to do more than load the data into the database directly (say, create complex entities or other tasks) you can set up a second phase to run through the values and do that heavier lifting.

To get us started the form includes this managed file:

   $form['file'] = [
     '#type' => 'managed_file',
     '#name' => 'data_file',
     '#title' => $this->t('Data file'),
     '#description' => $this->t('CSV format for this example.'),
     '#upload_location' => 'private://example_pantheon_loader_data/',
     '#upload_validators' => [
       'file_validate_extensions' => ['csv'],
     ],
   ];

The managed file form element automagically gives you a file entity, and the value in the form state is the ID of that entity. This file will be temporary and have no references once the process is complete, so depending on your site setup the file will eventually be purged. Which all means we can pass all the values straight through to our batch processor:

$batch = $this->dataLoaderBatchService->generateBatchJob($form_state->getValues());
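
For context, that call sits in the form's submit handler. Here is a minimal sketch of how that might look, assuming the service property name from above and the traditional batch_set() hand-off (and the usual use Drupal\Core\Form\FormStateInterface; import in the form class):

public function submitForm(array &$form, FormStateInterface $form_state) {
  // The managed_file element left the file entity ID in the form values;
  // the service turns those values into a ready-to-run batch definition.
  $batch = $this->dataLoaderBatchService->generateBatchJob($form_state->getValues());
  batch_set($batch);
}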

When the data file is small enough, a few thousand rows at most, you can load them all right away without the need of a batch job. But that runs into both time and memory concerns, and the whole point of this is to avoid those. With this approach we can ignore those limits and we're only limited by Pantheon's upload file size. If the file size is too large you can upload the file via SFTP and read directly from there, so while the form is an easy way to load the file you have other options.

As we set up the file for processing in the batch job, we really need the file path, not the ID. The main reason to use the managed file is that we can reliably get the file path on a Pantheon server without really needing to know anything about where they have things stashed. Since we're about to use generic PHP functions for file processing we need to know that path reliably:

// The form values give us the file entity ID; load the entity to get the path.
$fid = array_pop($data['file']);
$fileEntity = File::load($fid);
$ops = [];

if (empty($fileEntity)) {
  $this->logger->error('Unable to load file data for processing.');
  return [];
}
$filePath = $this->fileSystem->realpath($fileEntity->getFileUri());
$ops = ['processData' => [$filePath]];

Now we have a file, and since it's a CSV we can load a few rows at a time, process them, and then start again.

Our batch processing function needs to track two things in addition to the file: the header values and the current file position. So in the first pass we initialize the position to zero and then load the first row as the header. For every pass after that we need to find the point where we left off. For this we use generic PHP file functions for loading and seeking the current location (there is a sketch of that first pass after the code below):

// Old-school file handling.
$path = array_pop($data);
$file = fopen($path, "r");
...
fseek($file, $filePos);

// Each pass we process 100 lines; if you have to do something complex
// you might want to reduce the run.
for ($i = 0; $i < 100; $i++) {
  $row = fgetcsv($file);
  if (!empty($row)) {
    $data = array_combine($header, $row);
    $rowData = [
      'col_one' => $data['field_name'],
      'data' => serialize($data),
      'timestamp' => time(),
    ];
    $row_id = $this->database->insert('example_pantheon_loader_tracker')
      ->fields($rowData)
      ->execute();

    // If you're setting up for a queue you include something like this.
    // $queue = $this->queueFactory->get('example_pantheon_loader_remap');
    // $queue->createItem($row_id);
  }
  else {
    break;
  }
}
$filePos = (float) ftell($file);
$context['finished'] = $filePos / filesize($path);
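
And here is a rough sketch of the first-pass bookkeeping mentioned above. The sandbox keys are hypothetical; the real code stores the header and position however your batch service passes state between runs:

// First pass only: read the header row and record where the data begins
// so later passes can fseek() back to where the previous pass stopped.
if (empty($context['sandbox']['file_pos'])) {
  $file = fopen($path, 'r');
  $context['sandbox']['header'] = fgetcsv($file);
  $context['sandbox']['file_pos'] = ftell($file);
  fclose($file);
}
$header = $context['sandbox']['header'];
$filePos = $context['sandbox']['file_pos'];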

The example code just dumps this all into a database table. This can be useful as a raw data loader if you need to add a large data set to an existing site that’s used for reference data or something similar.  It can also be used as the base to create more complex objects. The example code includes comments about generating a queue worker that could then run over time on cron or as another batch job; the Queue UI module provides a simple interface to run those on a batch job.
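
If you went the queue route, the worker itself could be a standard QueueWorker plugin. A hypothetical sketch (module and class names invented, matching the queue ID from the commented code above):

<?php

namespace Drupal\example_pantheon_loader\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Processes rows staged in the tracker table.
 *
 * @QueueWorker(
 *   id = "example_pantheon_loader_remap",
 *   title = @Translation("Example Pantheon Loader remap"),
 *   cron = {"time" = 60}
 * )
 */
class ExampleLoaderRemap extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // $data is the row ID saved during the batch run; load the serialized
    // record from example_pantheon_loader_tracker and do the heavy lifting
    // (entity creation, remapping, etc.) here.
  }

}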

I've run this process for several hours at a stretch. Pantheon does have issues with system errors if a batch job is left to run for extreme lengths of time (I ran into problems on some runs after 6-8 hours of run time), so a prep into the database followed by running on a queue, or something else that is easier to restart, has been more reliable.

Docksal Pantheon Setup from Scratch

I recently had reason to switch over to using Docksal for a project, and on the whole I really like it as a good, easy solution for getting a project-specific Drupal dev environment up and running quickly. But like many dev tools, the docs I found didn't quite cover what I wanted because they made a bunch of assumptions.

Most assumed either I was starting a generic project or that I was starting a Pantheon specific project – and that I already had Docksal experience. In my case I was looking for a quick emergency replacement environment for a long-running Pantheon project.

Fairly recently Docksal added support for a pull init command that helps set up projects hosted on Acquia, Pantheon, and Platform.sh, but pull init isn't really well documented and requires a few preconditions.

Since I had to run a dozen Google searches, and ask several friends for help, to make it work I figured I’d write it up.

Install Docksal

First follow the basic Docksal installation instructions for your host operating system. Once that completes, if you are using Linux as the host OS, log out and log back in (the installer just added your user to a group and you need that access to start up Docker).

Add Pantheon Machine Token

Next you need to have a Pantheon machine token so that Terminus can run within the new container you're about to create. If you don't have one already, follow Pantheon's instructions to create one and save it someplace safe (like your password manager).

Once you have a machine token you need to tell Docksal about it. There are instructions for that (but they aren't in the instructions for setting up Docksal with pull init); basically you add the key to your docksal.env file:

SECRET_TERMINUS_TOKEN="HASH_VALUE_PROVIDED_BY_PANTHEON_HERE"

Also if you are using Linux you should note that those instructions linked above say the file goes in $HOME/docksal/docksal.env, but you really want $HOME/.docksal/docksal.env (note the dot in front of docksal, which hides the directory).

Setup SSH Key

With the machine token in place you are almost ready to run the setup command; just one more precondition. If you haven't been using Docker or Docksal they don't know about your SSH key yet, and pull init assumes it's around. So you need to tell Docksal to load it by running:

fin ssh-key add

If the whole setup is new, you may also need to create your key and add it to Pantheon. Once you have done that, if you are using a default SSH key name and location it should be picked up automatically (I have not tried this yet on Windows so mileage there may vary – if you know the answer please leave me a comment). It is also a good idea to make sure the key itself is working right by getting the git clone command from your Pantheon dashboard and trying a manual clone on the command line (delete the clone once it's done; this is just to prove you can get through).

Run Pull Init

Now finally you are ready to run fin pull init: 

fin pull init --hostingplatform=pantheon --hostingsite=[site-machine-name] --hosting-env=[environment-name]

Docksal will now set up the site, maybe ask you a couple questions, and clone the repo. It will leave out a couple things you may need: database setup and .htaccess.

Add .htaccess as needed

Pantheon uses nginx. Docksal's formula uses Apache. If you don't keep a .htaccess file in your project (and while there is no reason not to, some Pantheon setups don't keep any extra stuff around) you need to put it back. If you don't have a copy handy, copy and paste the content from the Drupal project repo: https://git.drupalcode.org/project/drupal/blob/8.8.x/.htaccess

Finally, you need to tell Drupal where to find the Docksal copy of the database. For that you need a settings.local.php file. Your project likely has a default version of this, which may contain things you may or may not want, so adjust as needed. Docksal creates a default database (named "default") and provides a user named…"user", which has a password of "user". The host's name is "db". So at the very least your settings.local.php file needs to include database settings:

<?php

$databases['default']['default'] = [
  'database' => 'default',
  'username' => 'user',
  'password' => 'user',
  'host' => 'db',
  'port' => '',
  'driver' => 'mysql',
  'prefix' => '',
];

With the database now fully linked up to Drupal, you can now ask Docksal to pull down a copy of the database and a copy of the site files:

fin pull db

fin pull files

In the future you can also pull down code changes:

fin pull code

Bonus points: do this on a server.

On occasion it's useful to have all of this set up on a remote server, not just a local machine. There are a few more steps to go to do that safely.

First you may want to enable Basic HTTP Auth just to keep away from the prying eyes of Googlebot and friends. There are directions for that step (you'll want the Apache instructions). Next you need to make sure that Docksal is actually listening to the host's requests and that they are forwarded into the containers. Lots of blog posts say to run DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin reset proxy, but it turns out that fin reset proxy has been removed; instead you want:

DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin system reset

Next you need to add the vhost to the docksal.env file we were working with earlier:

VIRTUAL_HOST="test.example.org"

Run fin up to get Docksal to pick up the changes (this section is based on these old instructions).

Now you need to add either a DNS entry someplace, or update your machine’s /etc/hosts file to look in the right place (the public IP address of the host machine).
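
For example, a hosts entry would just map the host machine's public IP (the address below is a documentation placeholder) to the name you set in VIRTUAL_HOST:

192.0.2.10 test.example.org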

Anything I missed?

If you think I missed anything, feel free to let me know. Windows users in particular, please let me know about changes related to doing things there. I'll try to work those in if I don't get to figuring that out on my own in the near future.

SC DUG May 2019

For this month’s SC DUG, Mauricio Orozco from the South Carolina Commission for Minority Affairs shared his notes and lessons learned during his first DrupalCon North America.

We frequently use these presentations to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. If you want to see a polished version, check out our group members' talks at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

SC DUG February 2019

Will Jackson – Local Development in Docksal

For the SC DUG meeting this month Will Jackson from Kanopi Studios gave a talk about using Docksal for local Drupal development. Will has the joy of working with some of the Docksal developers and has become an advocate for the simplicity and power Docksal provides.

We frequently use these presentations to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. If you want to see a polished version, check out our group members' talks at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

SC DUG November 2018

Kaylan Wagner – Real world lessons from online games.

This fall the South Carolina Drupal User's Group started using Zoom as part of all our meetings. Sometimes the technology has worked better than others, but when it works in our favor we are recording the presentations and sharing them when we can.

In November Kaylan Wagner gave a draft talk on using experiences in the world of online gaming to be a better remote team member.

We frequently use these presentations to practice new presentations and test out new ideas. If you want to see a polished version, seek out group members at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

SC DUG September 2018

Chris Zietlow – Using Machine Learning to Improve UX

This fall the South Carolina Drupal User's Group started using Zoom as part of all our meetings. Sometimes the technology has worked better than others, but when it works in our favor we are recording the presentations and sharing them when we can.

Chris Zietlow presented back in September about using Machine Learning to Improve UX.

We frequently use these presentations to practice new presentations and test out new ideas. If you want to see a polished version, seek out group members at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

Drupal 8 Batch Services

For this month’s South Carolina Drupal User Group I gave a talk about creating Batch Services in Drupal 8. As a quick side note we are trying to include video conference access to all our meetings so please feel free to join us even if you cannot come in person.

Since Drupal 8 was first released I have been frustrated by the fact that Drupal 8 batch jobs were basically untouched from previous versions. There is nothing strictly wrong with that approach, but it has never felt right to me, particularly when doing things in a batch job that I might also want to do in another context – that really should be a service, and I should write those core tasks first. After several frustrating experiences trying to find a solution I like, I finally created a module that provides an abstract class that can be used to create a service that handles this problem a bit more elegantly. The project also includes an example module to provide a sample service.

Some of the text in the slides got cut off by the Zoom video window, so I uploaded them to SlideShare as well:


Quick Batch Overview

If you are new to Drupal batches there are lots of articles around that go into details of traditional implementations, so this will be a super quick overview.

To define a batch you generate an array in a particular format – typically as part of a form submit process – and pass that array to batch_set(). The array defines some basic messages, a list of operations, a function to call when the batch is finished, and optionally a few other details. The minimal array would be something like:

<?php

// Set up the final batch array.
$batch = [
  'title' => 'Page title',
  'init_message' => 'Opening message',
  'operations' => [],
  'finished' => '\some\class\namespace\and\name::finishedBatch',
];

The interesting part should be in that operations array, which is a list of tasks to be run, but getting all your functions set up and the batch array generated can often be its own project.

Each operation is a function that implements callback_batch_operation(), and the data to feed that function. The callbacks are just functions that have a final parameter that is an array reference, typically called $context. The function can either perform all the needed work on the provided parameters, or perform part of that work and update the $context['finished'] value to a number between 0 and 1. Once finished reaches 1 (or isn't set at the end of the function) batch declares that task complete and moves on to the next one in the queue. Once all tasks are complete it calls the function provided as the finished value of the array that defined the batch.
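
Put together, a traditional operation callback might look something like this sketch (the function name and data shape are illustrative; only $context['finished'], $context['sandbox'], and $context['results'] are part of the batch API contract):

// Implements callback_batch_operation().
function example_batch_operation(array $items, &$context) {
  // Use the sandbox to remember progress between passes.
  if (!isset($context['sandbox']['progress'])) {
    $context['sandbox']['progress'] = 0;
    $context['sandbox']['max'] = count($items);
  }

  // Process a small slice on each pass to stay under time limits.
  $slice = array_slice($items, $context['sandbox']['progress'], 10);
  foreach ($slice as $item) {
    // ... do the real work for $item here ...
    $context['results']['processed'][] = $item;
    $context['sandbox']['progress']++;
  }

  // Report progress; batch reruns this callback until finished reaches 1.
  $context['finished'] = $context['sandbox']['progress'] / $context['sandbox']['max'];
}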

The finish function implements callback_batch_finish(), which means it accepts three parameters: $success, $results, and $operations. $success is true when all tasks completed without error; $results is an array of data you can feed into the $context array during processing; $operations is your operations list again.
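
A matching finish callback sketch (I use the messenger service here; the older drupal_set_message() from this era of Drupal 8 works the same way):

// Implements callback_batch_finish().
function example_batch_finished($success, array $results, array $operations) {
  if ($success) {
    \Drupal::messenger()->addStatus(t('Processed @count items.', [
      '@count' => count($results['processed'] ?? []),
    ]));
  }
  else {
    \Drupal::messenger()->addError(t('The batch stopped before completing.'));
  }
}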

Those functions are all expected to be static methods on classes or, more commonly, functions defined in a procedural code block imported from a separate file (which can be named in the batch array).
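
Pointing the batch at such a file is just one more key in the definition array (module and file names here are hypothetical):

// Tell the batch engine which file holds the callback functions.
$batch['file'] = drupal_get_path('module', 'my_module') . '/my_module.batch.inc';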

My replacement batch service

It's those blocks of procedural code and classes of nothing but static methods that bug me so much. Admittedly the batch system is convenient and works well enough to handle major tasks for lots of modules. But in Drupal 8 we have a whole suite of services and plugins that are designed to be run in specific contexts that batch does not provide by default. While we can access the Drupal service container and get the objects we need, the batch code always feels clunky and out of place within a well-structured module or project. What's more, I have often created batches that benefit from having the key tasks be functions of a service, not just specific to the batch process.

So after several attempts to force batches and services to play nice together, I finally created this module to force a marriage. There are places which required a bit of compromise, but I think I have most of that contained in the abstract class so I don't have to worry about it on a regular basis. That makes my final code, with its complex logic and processing, far cleaner and easier to maintain.

The Batch Service Interface module provides an interface and an abstract class that implements parts of it: abstract class AbstractBatchService implements BatchServiceInterface. The developer extending that class only needs to define a service that handles generating a list of operations that call local methods of the service, plus the finish batch function (also a local method). Nearly everything else is handled by the parent class.

The implementation I provided in the example submodule ends up being four simple methods. Even in more complex jobs all the real work could be contained in methods that are isolated from the oddities of batch processing.

<?php

namespace Drupal\batch_example;
use Drupal\node\Entity\Node;
use Drupal\batch_service_interface\AbstractBatchService;

/**
 * Class ExampleBatchService logs the name of nodes with id provided on form.
 */
class ExampleBatchService extends AbstractBatchService {

  /**
   * Must be set in child classes to be the service name so the service can
   * bootstrap itself.
   *
   * @var string
   */
  protected static $serviceName = 'batch_example.example_batch';

  /**
   * Generates the batch job definition from the submitted form data.
   */
  public function generateBatchJob($data) {
    $ops = [];
    for ($i = 0; $i < $data['message_count']; $i++ ) {
      $ops[] = [
        'logMessage' => ['MessageIndex' => $i + 1],
      ];
    }

    return $this->prepBatchArray($this->t('Logging Messages'), $this->t('Starting Batch Processing'), $ops);
  }

  public function logMessage($data, &$context) {

    $this->logger->info($this->getRandomMessage());

    if (!isset($context['results']['message_count'])) {
      $context['results']['message_count'] = 0;
    }
    $context['results']['message_count']++;

  }

  public function doFinishBatch($success, $results, $operations) {
    drupal_set_message($this->t('Logged %count quotes', ['%count' => $results['message_count']]));
  }

  public function getRandomMessage() {
    $messages = [
      // list of messages to select from
    ];

    return $messages[array_rand($messages)];

  }

}

There is the oddity that you have to tell the service its own name so it can bootstrap itself. If there is a way around that I'd love to know it. But really you have one line of code that's a bit strange; everything else is now a fairly clear call and response.

One of the nice upsides to this solution is that you could write tests for the service that look and feel just like any other service's tests. The methods can all be called directly, and you are not trying to run tests against a procedural code block or a class that is nothing but static methods.

I would love to hear ideas about ways I could make this solution stronger. So please drop me a comment or send me a patch.

Related core efforts

There is an effort to try to do similar things in core, but it looks like it has some distance left to travel. Obviously once that work is complete it is likely to be better than what I have created, but in the meantime my service allows for a new level of abstraction without waiting for core's updates to be complete.

Waterfall-like Agile-ish Projects

In software just about all project management methodologies get labeled one of two things: Agile or Waterfall. There are formal definitions of both labels, but in practice few companies stick to those definitions, particularly in the world of consulting. For people who really care about such things there are actually many more methodologies out there, but largely for marketing reasons we call any process that's linear in nature Waterfall, and any that is iterative Agile.

Classic cartoon of a tree swing built poorly because every team saw it differently.
Failure within project teams leading to disasters is so common and basic that not only is there a cartoon about it, but there is a web site dedicated to generating your own versions of that cartoon (http://projectcartoon.com/).

Among consultants I have rarely seen a company that is truly 100% agile or 100% waterfall. In fact I've rarely seen a shop that's close enough to the formal structures of those methodologies to accurately claim to be one or the other. Nearly all consultancies are some kind of blend of a linear process with stages (sometimes called "a waterfall phase" or "a planning phase") followed by an iterative process with lots of non-developer input into partially completed features (often called an "agile phase" or "build phase"). Depending on the agency they might cut up the planning into the start of each sprint, or they might move it all to the beginning as a separate project phase. Done well it can allow you to merge the highly complex needs of an organization with the predefined structures of an existing platform. Done poorly it can look like you tried to force a square peg into a round hole. You can see evidence of this around the internet in the articles trying to help you pick a methodology and in the variations on Agile that have been attempted to try to adapt the process to the reality many consultants face.

In 2001 the Agile Manifesto changed how we talk about project management. It challenged standing doctrine about how software development should be done and moved away from trying to mirror manufacturing processes. As the methodology around agile evolved, and proved itself impressively effective for certain projects, it drew adherents and advocates who preach Agile and Scrum structures as rigid rules to be followed. Meanwhile older project methodologies were largely relabeled “Waterfall” and dragged through the mud as out of date and likely to lead to project failure.

But after all this time Agile hasn't actually won as the only truly useful process, because it doesn't actually work for all projects and all project teams. Particularly among consulting agencies that work on complex platforms like Drupal and Salesforce, you find that regardless of the label the company uses they probably have a mix of linear planning with iterative development – or they fail a lot.

Agile works best when you start from scratch and you have a talented team trying to solve a unique problem. Anytime you are building on a mature software platform you are at least a few hundred thousand hours into development before you have your first meeting. These platforms have large feature sets that deliver lots of the functionality needed for most projects just through careful planning and basic configuration – that’s the whole point of using them. So on any enterprise scale data system you have to do a great deal of planning before you start creating the finished product.

If you don’t plan ahead enough to have a generalized, but complete, picture of what you’re building you will discover very large gaps after far too many pieces have been built to elegantly close them, or your solution will have been built far more generically than needed – introducing significant complexity for very little gain. I’ve seen people re-implement features of Drupal within other features of Drupal just to deal with changing requirements or because a major feature was skipped in planning. So those early planning stages are important, but they also need to leave space for new insights into how best to meet the client’s need and discovery of true errors after the planning stage is complete.

Once you have a good plan the team can start to build. But you cannot simply hand a developer the design and say “do this” because your “this” is only as perfect as you are and your plan does not cover all the details. The developer will see things missed during planning, or have questions that everyone else knows but you didn’t think to write down (and if you wrote down every answer to every possible question, you wrote a document no one bothered to actually read). The team needs to implement part of the solution, check with the client to make sure it’s right, adjust to mistakes, and repeat – a very agile-like process that makes waterfall purists uncomfortable because it means the plan they are working from will change.

In all this you also have a client to keep happy and help make successful – that's why they hired someone in the first place. Giving them a plan that shows you know what they want reassures them early in the project that you share their vision for the final solution. Being able to see that plan come together while having chances to refine the details allows you to deliver the best product you are able.

Agile was supposed to fix all our problems, but didn't. The methodologies used before were supposed to prevent all the problems that Agile was trying to fix, but didn't. But using waterfall-like planning at the start of your project with agile-ish implementation, you can combine the best of both approaches, giving you the best chances for success. We all do it; it is about time we all admit it is what we do.

Cartoon of a developer reviewing all the things he's done: check technical specs, unit tests, configuration, permissions, API updates and then says "Just one small detail I need to code it."
Cartoon from CommitStrip