Picking tools you’ll love: don’t make yourself hate it on day one.

Every few years organizations replace a major system or two: the web site, CMS, CRM, financial databases, grant software, HR system, and so on. Too often they try to make the new tool behave just like the old tool. As a result they hate the new tool until they realize they misconfigured it, and then spend 5-10 years dealing with problems that could have been avoided. If you’re going to spend a lot of money overhauling a mission-critical tool, you should love it from day one.

No one can promise you success, but I can promise that if you take a brand new tool and try to force it to be just like the tool you are replacing, you are going to be disappointed (at best). Salesforce is not CiviCRM, Drupal is not WordPress, Salsa is not Blackbaud. Remember that you are replacing the tool for a reason: if everything about your current tool were perfect you wouldn’t be replacing it in the first place. So here are my steps for improving your chances of success:

  1. List the main functions the tool needs to accomplish: This is the most obvious thing to do, but make sure your list only covers the things you need to do, not the ways you currently do it. Try to keep yourself at a relatively high level to avoid describing what you have now as the required system.
  2. List the pros and cons of what you have: Every tool I’ve ever used had pluses and minuses. And most major internal systems have stakeholders who love and hate it – sometimes that’s the same person – make sure you capture both the good and bad to help you with your selection later.
  3. Develop a list of tools that are well known in the field: Not just the tools you know at the start of the project. Make sure you hunt for a few that are new to you. You might think you’ve heard of them all because you walked around the vendor hall at NTC last year, but I promise there are more companies that picked a different conference to push their wares, and there are open source tools you might have missed too.
  4. Make sure every tool has a salesperson: Open source tools can be overlooked because no one sells them to you, and that may mean you miss the perfect tool for your organization. So for open source tools, even the playing field by having a salesperson, or champion, for the tool. This can be an internal person who likes learning new things, or an outside expert (usually paid, but sometimes volunteer).
  5. Let the sales teams sell, but don’t trust them: Let the salespeople run through their presentations, because you will learn something along the way. But at some point you also need to ask them questions that force them off their script. Force a demo of a non-contrived example, or of a feature they don’t show you the first time. Make them improvise and see what happens.
  6. Talk to other users, and make sure you find one who is not happy: Sure, your organization is unique, but lots of other organizations have similar needs for the basic tools – unless you have a software-based mission you probably do not want an email system that’s totally different from everyone else’s. A good salesperson will have no trouble giving you a list of references from organizations that love the tool, but if you want the complete picture find someone who hates it. They might hate it for totally unfair reasons, but they will shed light on the rough edges you may encounter. Also make sure you ask the people who love it what problems they run into; remember, nothing is perfect, so everyone should have a complaint of some kind.
  7. Develop a change strategy: In addition to a data migration plan you need a plan that covers introducing the new tool to your colleagues, training the users, communicating to leadership the risks and rewards of the new setup, and setting expectations about any disruptions the changeover may cause. I’ve seen an organization spend nearly half a million dollars on customization of a complex toolset only to have the launch fail because they didn’t make sure the staff understood that the new tool would change their day-to-day tasks.
  8. Develop a migration plan: Plan out the migration of all data, features, and functions as soon as you have your new tool selected. This is not the same thing as your change strategy; this is the nuts and bolts of how things will work. Do not attempt to do this without an expert. You made yourself an expert on the field, but not on every in and out of the new system, so hire someone who is. That could be a setup team from the company that makes it, a third-party consultant, or a new internal staff person who has experience with different instances of the tool.
  9. Get staff trained on using the new tool: Don’t scrimp on staff training. Make sure they have a chance to learn how to do the things they will actually be doing on a day-to-day basis. If you can afford to have customized training arranged, I highly recommend it; if you cannot have an outside person do it, consider building a custom training for your low-level internal users yourself.
  10. Develop a plan for ongoing improvement: You will not be 100% happy 100% of the time, and over time those problems will get worse as your needs shift. So make sure you are planning to consistently improve your setup. That can take many forms, and what makes the most sense will vary from tool to tool and org to org, but it probably will mean a budget, so ask for money from the start and build it into your ongoing budget for the project. Plan for constant improvement or you will find a growing list of pain points that push you to redo all this work sooner than expected.

You’ll notice I never actually told you to make your choice. Once you’ve completed steps 1-6 you probably will see an obvious choice; if not, guess. You have a list, you listened to 20 boring sales presentations, you’ve read blog posts, white papers, and ad materials. You are now an expert on the market and the tools. If you can’t make a good pick for your organization, no one else can either, so push aside your imposter syndrome and go with your gut. Sure, you could be wrong, but do the best you can and move forward. It’s usually better to make a choice than to waffle indefinitely.

Sins Against Drupal 2

This is part of my ongoing series about ways Drupal can be badly misused. These examples are from times someone tried to solve an otherwise interesting problem in just about the worst possible way.

I present these at SC Drupal Users Group meetings from time to time as an entertaining way to discuss interesting problems and ways we can all improve.

This one was presented about a year ago now (August 2015). Since I wasn’t working with Drupal 8 when I did this presentation the solution here is Drupal 7 (if someone asks I could rewrite for Drupal 8).


The Problem

The developer needed to support existing Flash training games used internally by the client. Drupal was used to provide the user accounts, game state data, and exports for reporting. The games were therefore able to authenticate with Drupal and save data to custom tables in the main Drupal database. The client was looking for some extensions to support new variations of the games and while reviewing the existing setup I noticed major flaws.

 

The Sinful Solution

Create a series of bootstrap scripts to handle all the interactions, turning Drupal into a glorified database layer (also while you’re at it, bypass all SQL injection attack protections to make sure Drupal provides as little value as possible).

The Code

There was a day when bootstrap scripts were a really cool way to do basic tasks with Drupal. If you’ve never seen or written one: basically you load bootstrap.inc, call drupal_bootstrap(), and then write code that takes advantage of basic Drupal functions – in a world without drush this was really useful for a variety of basic tasks. That approach was outmoded (a long time ago) by drush, migrate, feeds, and a dozen other tools. But in this case I found the developer had created a series of scripts, two for each game, that were really similar, and really, really dangerous. The first (an anonymized version is shown below) handled user authentication and initial game state data, and the second allowed the game to save state data back to the database.

As always the script here was modified to protect the guilty, and I should note that this is no longer the production code (but it was):

<?php
require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL); // "boot" Drupal
define("KEY", "ed8f5b8efd2a90c37e0b8aac33897cc5"); // set key

// check data
if (!(isset($_POST['hash'])) || !(isset($_POST['username'])) || !(isset($_POST['password']))) {
  header('HTTP/1.1 404');
  echo "status=100";
  exit; // missing a value, force quit
}

// capture data
$hash = $_POST['hash'];
$username = $_POST['username'];
$password = $_POST['password'];

// check hash validity
$generate_hash = md5(KEY.$username.$password);
if ($generate_hash != $hash) {
  header('HTTP/1.1 404');
  echo "status=101";
  exit; // hash is wrong, force quit
}

// look for username + password combo
$flashuid = 0;
$query = db_query("SELECT * FROM {users} WHERE name = '$username' AND pass = '$password'");
if ($obj = db_fetch_object($query)) {
  $flashuid = $obj->uid;
}

if ($flashuid == 0) {
  header('HTTP/1.1 404');
  echo "status=102";
  exit; // no match found
}

// get user game information
$gamequery = db_query("SELECT * FROM {table_with_data_for_flash_objects} WHERE uid = '$flashuid' ORDER BY lastupdate DESC LIMIT 1");

if ($game = db_fetch_object($gamequery)) {
  $time = $game->time;
  $round = $game->round;
  $winnings = $game->winnings;
  $upgrades = $game->upgrades;
} else {
  // no entry, create one in db
  $time = $round = $game_winnings = $long_term_savings = $bonus_list = "0";
  $upgrades = "";
  $insert = db_query("INSERT INTO {table_with_data_for_flash_objects} (uid, lastupdate) VALUES ('$flashuid', NOW())");
}

$points = userpoints_get_current_points($flashuid);

// echo success and values
header('HTTP/1.1 201');
echo "user_id=$flashuid&points=$points&time=$time&round=$round&winnings=$winnings&upgrades=$upgrades";

?>

Why is this so bad?

It’s almost hard to know where to begin on this one, so we’ll start at the beginning.

  • Bootstrap scripts are no longer needed, and should never have been used for anything other than a data import or some other ONE TIME task.
  • That key defined near the top of the script is used to track sessions (see the hash check just after the data capture). If you find yourself having to recreate a session handler with a fixed value, you should assume you’re doing something wrong. This is a solved problem; if you are re-solving it you had better be sure you know more than everyone else first.
  • Error handling is done inline with a series of random error status codes that are printed with a 404 response (and the flash apps ignored all errors). If you are going to provide an error response you should log it so the system can be debugged, and you should use existing standards whenever possible. In this case a 403 Forbidden is a far better response when someone fails to authenticate.
  • The lines that copy raw $_POST values into variables, and then the user lookup query: a classic Bobby Tables SQL injection vulnerability. Say goodbye to security from here on in. They go on to repeat this mistake several more times (see the sketch after this list for what that query could have looked like).
  • Finally, just to add insult to injury, the developer spends a huge amount of time copying variables around to change their name: $password = $_POST['password']; $round = $game->round; There is nothing wrong with just using fields on a standard object, and while there is something wrong with just using a value from $_POST, copying it to a new variable does not make it trustworthy.
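To make the injection point concrete, here is a minimal sketch of what just that lookup and its error handling could look like using standard Drupal 7 APIs – a placeholder query, watchdog() logging, and a 403 – reusing the $username variable from the script above. (The real fix for checking credentials is user_authenticate(), which appears in the better solution below.)

// Placeholders let the database layer handle escaping; no string concatenation.
$uid = db_query(
  "SELECT uid FROM {users} WHERE name = :name",
  array(':name' => $username)
)->fetchField();

if (empty($uid)) {
  // Log the failure so there is something to debug with later.
  watchdog('games', 'Failed game login attempt for @name.', array('@name' => $username), WATCHDOG_WARNING);
  // 403 tells the client it was refused; a 404 just hides the problem.
  drupal_add_http_header('Status', '403 Forbidden');
  drupal_exit();
}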

Better Solutions

There are several, including:

  • Use a custom menu to define paths, and have the application just go there instead.
  • Use Services module: https://www.drupal.org/project/services
  • Hire a call center to ask all your users for their data…

If I were starting something like this from scratch in D7 I would start with services and in D8 I’d start with the built-in support for RESTful web services. Given the actual details of the situation (a pre-existing flash application that you have limited ability to change) I would go with the custom router so you can work around some of the bad design of the application.

In our module’s .module file we start by defining two new menu callbacks:

function game_module_menu() { // this is our module's implementation of hook_menu()

  $items['games/auth'] = array(
    'title' => 'Games Authorization',
    'page callback' => 'game_module_auth_user',
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  $items['games/game_name/data'] = array( // yes, you could make that a variable instead of hard code
    'title' => 'Game Data',
    'page callback' => 'game_module_game_name_capture_data', // and if you did you could use one function and pass that variable
    'access arguments' => array('player'),
    'type' => MENU_CALLBACK,
  );

  return $items;
}

The first allows for remote authentication, and the second is an endpoint to capture data. In this case the name of the game is hard coded, but as noted in the comments in the code you could make that a variable.

In the original example the data was stored in a custom table for each game, but never accessed in Drupal itself. The table was not set up through the module’s install process, nor did the data need to be normalized since it’s all just pass-through. In my solution I switch to defining the table with hook_schema() in the module’s .install file, storing all the data as a blob. There are tradeoffs here, but this is a clean, simple solution:

...
'fields' => array(
'recordID' => array(
'description' => 'The primary identifier for a record.',
'type' => 'serial',

...
'uid' => array(
'description' => 'The user ID.',
'type' => 'int',

...
'game' => array(
'description' => 'The game name',
'type' => 'text',

...
'data' => array(
'description' =>'Serialized data from Game application',
'type' => 'blob',

...
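Pulled together, the full definition in the module’s .install file might look roughly like the sketch below. The table and field names match the fragments above; the remaining column options (sizes, defaults, the primary key) are my assumptions rather than the original code.

function game_module_schema() {
  $schema['game_data'] = array(
    'description' => 'Pass-through storage for game state data.',
    'fields' => array(
      'recordID' => array(
        'description' => 'The primary identifier for a record.',
        'type' => 'serial',
        'unsigned' => TRUE,
        'not null' => TRUE,
      ),
      'uid' => array(
        'description' => 'The user ID.',
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => TRUE,
        'default' => 0,
      ),
      'game' => array(
        'description' => 'The game name',
        'type' => 'text',
        'not null' => TRUE,
      ),
      'data' => array(
        'description' => 'Serialized data from Game application',
        'type' => 'blob',
        'size' => 'big',
        'not null' => FALSE,
      ),
    ),
    'primary key' => array('recordID'),
  );
  return $schema;
}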

You could also take this one step further and make each game an entity and customize the fields, but that’s a great deal more work, which the client would not have supported.

The final step is to define the callbacks used by the menu items in hook_menu():

function game_module_auth_user($user_name = '', $pass = '') { // Here I am using GET, but I don't have to

  global $user;
  if ($user->uid != 0) { // They are logged in already, so reject them
    drupal_access_denied();
    return; // drupal_access_denied() does not halt execution on its own
  }

  $account = user_authenticate($user_name, $pass);

  // Generate a response based on result....
}

function game_module_game_name_capture_data() {
  global $user;
  if ($user->uid == 0) { // They aren't logged in, so they can't save data
    drupal_access_denied();
    return; // again, stop here rather than continuing to the insert
  }

  $record = drupal_get_query_parameters($_POST); // we can work with POST just as well as GET if we ask Drupal to look in the right place

  db_insert('game_data')
    ->fields(array(
      'uid' => $user->uid,
      'game' => 'game_name',
      'data' => serialize($record),
    ))
    ->execute();
  // Provide useful response.
}

For game_module_auth_user() I use a GET request (mostly because I wanted to show I could use either). We get the username and password, have Drupal authenticate them, and move on; I let Drupal handle the complexity.

The capture data callback does pull directly from the $_POST array, but since I don’t care about the content and I’m using a parameterized query I can safely just pass the information through. drupal_get_query_parameters() is a useful function that often gets ignored in favor of more complex solutions.
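If you have not run into drupal_get_query_parameters() before, here is a two-line illustration of the behavior I am relying on (the variable names are just for the example):

// With no arguments it returns the processed $_GET values, minus Drupal's internal 'q' parameter.
$get_values = drupal_get_query_parameters();
// Hand it $_POST and it applies the same filtering to the POST body instead.
$post_values = drupal_get_query_parameters($_POST);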

So What Happened?

The client had a limited budget and this was a Drupal 6 site, so we did the fastest work we could. I rewrote the existing code to avoid the SQL injection attacks, moved them to SSL, and did a little other tightening, but the bootstrap scripts remained in place. We then went our separate ways, since we did not want to be responsible for supporting such a scary setup and they didn’t want to fund an upgrade. My understanding is they heard similar feedback from other vendors and eventually began the process of upgrading. You can’t win them all, even when you’re right.

Share your sins

I’m always looking for new material to include in this series. If you would like to submit a problem with a terrible solution, please remove any personally identifying information about the developer or where the code is running (the goal is not to embarrass individuals), post them as a gist (or a similar public code sharing tool), and leave me a comment here about the problem with a link to the code. I’ll do my best to come up with a reasonable solution and share it with SC DUG and then here. I’m presenting next month, so if you have something you want me to look at you should share it soon.

If there are security issues in the code you want to share, please report those to the site owner before you tell anyone else so they can fix it. And please make sure no one could get from the code back to the site in case they ignore your advice.

Sins Against Drupal 1

This is the first in an ongoing series about ways Drupal can be badly misused. These are generally times someone tried to solve an otherwise interesting problem in just about the worst possible way. All of these will start with a description of the problem, how not to solve it, and then ideas about how to solve it well.

I present these at SC Drupal Users Group meetings from time to time as an entertaining way to discuss ways we can all improve our skills.

This first one was presented a while ago now (February 2015).


The Problem

The developer needed to support an existing JavaScript app with access to content in the form of Drupal nodes encoded in JSON. This is a key part of any headless Drupal project (this entire site was not headless, just one application), and in Drupal 7 and earlier there was no way to do this in core.

The Sinful Solution

Create a custom response handler within template.php that executes every time template.php is loaded.

The Code

During a routine code review of this site I found the following code in template.php:

...
function theme_name_preprocess_region(&$vars) {
  if ($vars['region'] == 'header') {
    $vars['classes_array'][] = 'clearfix';
  }
}

if (isset($_POST['mode'])) {
  if ($_POST['mode'] == 'get_fields_node') {
    $node = node_load($_POST['id']);

    $container = array();

    $sliderCounter = count($node->field_event_slider['und']);
    for ($i = 0; $i < $sliderCounter; $i++) {
      $field_collection_id = $node->field_event_slider['und'][$i]['value'];
      $field_collection = entity_load('field_collection_item', array($node->field_event_slider['und'][$i]['value']));

      $currentCollectionItem = $field_collection[$field_collection_id];

      if (isset($currentCollectionItem->field_slider_image['und'][0]['uri'])) {
        $container['slider'][$i]['src'] = file_create_url($currentCollectionItem->field_slider_image['und'][0]['uri']);
      }
      if (isset($currentCollectionItem->field_slider_caption['und'][0]['value'])) {
        $container['slider'][$i]['caption'] = $currentCollectionItem->field_slider_caption['und'][0]['value'];
      }
    }

    $focusTid = $node->field_event_type['und'][0]['tid'];
    $eventTerm = taxonomy_term_load($focusTid);
    $dateTid = $node->field_event_date['und'][0]['tid'];
    $dateTid = taxonomy_term_load($dateTid);

    $container['nid'] = $node->nid;
    $container['focusTid'] = $focusTid;
    $container['title'] = $node->title;
    $container['focus'] = $eventTerm->name;
    $container['date'] = $dateTid->name;
    $container['body'] = $node->body['und'][0]['value'];

    print json_encode($container);
    die();
  }

  if ($_POST['mode'] == 'get_fields_focus') {
    $focusTerm = taxonomy_term_load(substr($_POST['id'], 4));

    $container['nid'] = $_POST['id'];
    $container['title'] = $focusTerm->name;
    $container['body'] = $focusTerm->description;

    print json_encode($container);
    die();
  }
}
...

Notice that between the two functions is a random block of code wrapped in if (isset($_POST['mode'])) {...}. So on every request that loads the theme, template.php is parsed, and in addition to the normal processing it triggers a check to see whether the page request was a POST that included a mode. If it was, we then proceed to load up a node and a couple of taxonomy terms, and encode them as JSON for the response. The site sends the response and then unceremoniously dies().

Why is this so bad?

First, there is no parameter checking on the ID parameter: node_load($_POST['id']). Anyone on the internet can load any node if they work out the ID. Since nodes are sequentially numbered, you could just start at 1 and work your way up until the site starts sending the same useless response over and over. It also doesn’t send a 404 if the NID provided is invalid.

Second, no reasonable developer would expect to find a custom callback handler hidden in template.php. It should be totally safe to load template.php without generating output under any condition (and that should be true of all non-tpl.php files).

Third, this code could run during any page request, not just the ones the application designer thought about. The request could have had Drupal do something relatively expensive before reaching this stage, and all that work was just wasted server resources – which creates an additional avenue for an attacker.

Fourth, Drupal has an exit function, drupal_exit(), that actually does useful cleanup and allows other modules to do the same. All of that gets bypassed when you just die() midstream.

Finally, Drupal has tools to do all this. There was no reason to do this so badly.

Better Solutions

In Drupal 8 this is part of core. Enable the RESTful Web Services module, and optionally the RestUI module, and in a few minutes you can have this more or less out of the box.

In Drupal 7 there are two modules that will do 90% of the work for us. If we really just want the raw node as JSON we could use Content as JSON. But often we want more control over field selection, and the option to pull in related content (which the developer in this case did, and used as his argument for the approach taken). Views Datasource gives us the power of Views to select the data and provides a JSON display option (among a few others).

Views Datasource based approach:

  1. Install Views, ctools, and Views Datasource
  2. Create your view and set the display format to JSON data document. [Screenshot: Drupal Sins 1 Views Definition]
  3. Pick your fields, set the path, and define the contextual filter. [Screenshot: Drupal Sins 1 Views field]

From there save your view and you’re done. That’s all there is to it. No custom code to maintain, you get to rely on popular community tools to handle access checking and other security concerns, and you get multiple layers of caching.

So what happened?

This sin never saw the light of day.

At the time I encountered this “solution” I worked for a company that was asked to review this site while it was still under development. Our code review gave the client a chance to go back to the developer and get a fix. The developer chose a more complicated solution than the Views one presented here (they defined a custom menu router with hook_menu(), moved much of this into the callback, and added a few security checks), which was good enough for the project. But I still would have done it in Views: it is much faster to develop, Views plays nicely with Drupal caching to help improve performance, and the straightforward approach is easy for a future developer to maintain. A rough sketch of that kind of menu router is below.
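For the curious, here is a bare-bones sketch of what that kind of Drupal 7 menu router could look like. The path, function names, and the fields returned are hypothetical placeholders; the developer’s actual fix returned more fields and carried extra checks.

function mymodule_menu() { // implements hook_menu(); 'mymodule' is a placeholder name
  $items['events/json/%node'] = array(
    'title' => 'Event JSON',
    'page callback' => 'mymodule_event_json',
    'page arguments' => array(2),       // pass the loaded node to the callback
    'access callback' => 'node_access', // let Drupal decide who may view it
    'access arguments' => array('view', 2),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

function mymodule_event_json($node) {
  // The %node wildcard means Drupal has already loaded and validated the node
  // (a bad ID gets a 404 for free), and node_access() was checked on the way in.
  $container = array(
    'nid' => $node->nid,
    'title' => $node->title,
  );
  drupal_json_output($container);
  drupal_exit(); // clean shutdown instead of die()
}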

Share your sins

I’m always looking for new material to include in this series. If you would like to submit a problem with a terrible solution, please remove any personally identifying information about the developer or where the code is running (the goal is not to embarrass individuals), post them as a gist (or a similar public code sharing tool), and leave me a comment here about the problem with a link to the code. I’ll do my best to come up with a reasonable solution and share it with SC DUG and then here.

If there are security issues in the code you want to share, please report those to the site owner before you tell anyone else so they can fix it. And please make sure no one could get from the code back to the site in case they ignore your advice.

This Week’s Drupal Fire Drill

This week many in the Drupal community lost a lot of sleep Tuesday night because the security team treated us to a warning about major security updates due out on Wednesday. Fortunately for many it wasn’t a crisis in the end, but it gave us all a chance to practice for the worst. Basically, it was like a fire drill in an elementary school: we got to prepare like there was a disaster, but since there wasn’t one we don’t really know how it would have gone if there had actually been a fire. We haven’t had a stop-drop-and-roll type of emergency in a while, so it was a good refresher on how to handle a crisis.

For those who don’t know what I’m talking about here’s a quick review. At Cyberwoven, like many Drupal shops, we follow the Drupal Security twitter feed on one of our Slack channels so we saw this mid-afternoon Tuesday:

Slack posting of tweet from the security team.

I read the PSA with images of Drupalgeddon dancing in my head:

There will be multiple releases of Drupal contributed modules on Wednesday July 13th 2016 16:00 UTC that will fix highly critical remote code execution vulnerabilities (risk scores up to 22/25). These contributed modules are used on between 1,000 and 10,000 sites. The Drupal Security Team urges you to reserve time for module updates at that time because exploits are expected to be developed within hours/days. Release announcements will appear at the standard announcement locations.

Drupal core is not affected. Not all sites will be affected. You should review the published advisories on July 13th 2016 to see if any modules you use are affected.

Oh, and that bold line up there – the one saying the modules are used on between 1,000 and 10,000 sites – wasn’t part of the original announcement. On Tuesday we didn’t have a sense of scale: were we talking about modules that everyone uses on almost every site? (ctools came up more than once.) It’s the one thing I wish the security team had done differently: given us that sense of scale.

I read all security postings and make sure we take prompt steps to address them for clients as needed, but the potential here was that we’d have to update all 70+ sites in a few hours or less, which is very different from your run-of-the-mill security update that often isn’t relevant to the use cases and threat profiles of the majority of sites.

Here’s what we did next:

Tuesday

  1. Took a minute to panic, complain, and joke about pending illnesses. This is actually a useful step because it allowed me to burn off some nervous energy and then to focus on the real work.
  2. Pulled out the list of all active clients with Drupal sites, and double-checked it for accuracy.
  3. Made sure a developer had a working repo for all sites (70+). Since we had a couple people out of the office, and some projects had been reassigned recently to different developers, this was an important step to make sure no sites fell through the cracks during a rush to update them all.
  4. Made sure we knew which were Drupal 6 sites and which were Drupal 7, in case it turned out that 6 was also affected. Since I knew the announcement would likely skip D6, we needed to accept that we might have to take those sites offline for a time.
  5. Made sure leadership knew that all developers might be busy starting at 16:00 UTC on Wednesday. We didn’t actually cancel anything right away, but I didn’t want anyone surprised if we were all too busy posting and testing updates to worry about things like meetings.
  6. Made sure complex projects were thought about ahead of time: sites with unusual setups or ongoing dev work that make sudden updates complicated. For example, we have one client with 16 sites that all share an unusual setup, so we agreed who would handle those and made sure she was prepared.

Wednesday

  1. First thing in the morning I saw the update to the notice that gave us a sense of scale and relaxed a little, but still made sure we were fully prepped.
  2. Noon: the announcement of what modules were affected was released and a couple other developers and I immediately reviewed the releases. We relaxed once we determined none of our clients were using any of the modules listed.
  3. I reviewed the code from each of the three modules to see what the changes were, looking for ways to improve my own code and avoid similar errors.
  4. Looked for ways to improve our response for the next time it’s not a drill.

Things would have been more exciting if we’d had to update our sites. Since we were prepared it was a matter of minutes for us to check that all our sites were secure. Each developer checked all the sites in their sandbox, and since I knew all sites were in someone’s sandbox that gave us 100% coverage without having to do lots of double checks.

I think it is too easy to look past doing a code review of modules we weren’t using, but I find this kind of follow-up really useful. Looking back on Drupalgeddon, it’s amazing how much pain was caused by such a small error (16 characters were all that were needed to fix it). And by seeking to understand what went wrong you can look for places where you are making similarly invalid assumptions.

If you read my post on making new mistakes, you also know I believe that looking for improvement is the most important detail (particularly when it turned out to be a drill, not a fire). Here’s my initial list of things to improve:

  • Have a system to automatically check every site for specific modules (this was already under development but will take a little while longer to complete).
  • Make sure at least two developers have a working sandbox for all projects at all times in case something comes out during a vacation.
  • Improve internal messaging about what to expect – a template message and process. I tossed together some disjointed thoughts for account managers, but disjointed developer thinking does not make people feel like you’re on top of things.
  • Have better tracking of outliers: I completely missed that I had a demo site on Pantheon that did have the Coder module running. Since it was on Pantheon they alerted me to the problem and had taken steps until I could do the update myself. But it would have been bad if that site had been someplace else and/or in production.
  • Make sure everyone knows when the actual release is coming out, and what the outcome was. Several developers were hoping to get lunch before the updates, but hadn’t done so when the announcement came out (which could have been a problem if we’d ended up busy). And I spent the rest of the day answering one-off questions from the account team, who wanted to know if the announcement had been bad news.

Ideally we’d come up with a method to automate security updates (maybe all updates), but that’s not totally straightforward. We have to worry about required patches, non-standard setups, automated testing, and other details. There has been discussion on the Pantheon power users’ mailing list, but every shop has a slightly different workflow (like the fact that we don’t use Pantheon much at all), so we’ll need to come up with a system that accommodates ours.

When Should I Update my Drupal Site to Drupal 8?

Last year Drupal 8 finally arrived, and it brought the question that comes with every new release of Drupal: when should I update? New releases of Drupal mean two things: new features and cool new tools, and the retirement of an old version. We got the power and flexibility of Symfony, and Drupal 6 sites are no longer getting community support. Unlike WordPress, which has well-defined upgrade paths, each version of Drupal is a new adventure in upgrade pain. The more I watch people suffer with this pain, and the more I watch them try to find a way to do upgrades that preserve their site’s fundamental structure, the more I come to the conclusion that this pain is telling us something: we’re doing it wrong. Not because Drupal’s strategy is wrong, but because keeping all your content in the same structures is usually wrong. Drupal 8 should not make it easy for you to continue to use an old strategy; it should encourage us to update old assumptions.

Here is how I encourage everyone to view their choices:

If you have a Drupal 6 (or older) site you should update right now. Drupal 6 is no longer getting security updates, so you are on borrowed time. But more importantly, Drupal 8 is a better tool for the current state of the web than your Drupal 6 site. Most sites running on D6 reflect an online communications strategy that’s at least 4 or 5 years old. Those sites probably aren’t responsive, aren’t prepared to support apps, don’t have the right focus on social media and user engagement, and make assumptions about user behaviors that have since evolved. Skip to Drupal 8: do not migrate these sites to Drupal 7. If there is a tool missing from Drupal 8 that your current site uses, make sure you actually need it before complaining (or paying to have someone port it for you). Maybe that tool hasn’t been ported because it doesn’t make sense anymore. Some things are still missing, but lots of things are being rewritten differently because we have a better platform. The community is smarter than it was 5 or 10 years ago, and the platform is better, so take the time to figure out why something hasn’t been ported: is it just that no one has bothered, or has something better been built instead?

If you have a Drupal 7 site you should update when your web site no longer supports your work.  This is actually the same advice I just gave, but without a few assumptions like “you need a secure site.” Many Drupal 7 sites have a lot of life left in them. A site you built today will be designed to meet the needs you have now, and the ones you foresee in the near future. Three years from now (when Drupal 7 is scheduled to lose support from the community) you will be operating on assumptions that have probably been wrong for at least two years. Every six months you should ask yourself: does my site reflect my online strategy, and is my strategy still working? If the answer is yes to both of those questions you are fine, if the answer is no to either – particularly the second – you should engage someone to help you update your strategy and rebuild your site.

I’ve been part of projects that failed in part because we tried to port a stale strategy and stale content to a fresh site. We broke the new site before it even launched. Don’t try to make Drupal 8 behave like your old site: embrace the change.

Nonprofits Drive Innovation in Online Communications

I spent ten years working at a nonprofit organization wishing I had the kinds of resources that large corporations can put toward their marketing efforts. At a nonprofit, the organization’s web site and related marketing are usually seen as overhead, and overhead is bad, so budgets are limited. Nonprofit budgets are tight in general, which doesn’t leave a lot of extra room for fancy services, tools, and consultants.

Then I started to work with large corporations. Turns out, all that money doesn’t necessarily bring you people who know how to spend it well.  Yes the margins are bigger, and there is less complaining about the basic costs of doing business, but when it comes right down to it they aren’t any more strategic than a small scrappy team of people in the communications department of any organization large enough to have a communications team.

This shouldn’t have been a surprise.  A great deal has been written about start-up culture and ways to help companies recreate the energy, passion, and creativity of their lean early days.  And there has been a great deal written about impostor syndrome which nonprofit communications staff tend to have in spades.

Of course I’m speaking here in sweeping generalities about two massive groups, but here is what I’ve seen working with both nonprofits and for-profits:

  1. As a group nonprofit staff are there because they care about the cause(s) of the organization, and they are driven to help the organization succeed despite their lack of resources.
  2. The lack of resources — both in terms of time and money — forces NPOs to find creative solutions to their problems. They moved aggressively into social media because it was a free way to spread their message: companies then used the lessons learned by nonprofits to craft their early engagements with social media.
  3. Due to corporate donations, nonprofits actually have access to the best software tools money can buy. Salesforce, NetSuite, Google, Microsoft, Adobe, and others give nonprofits amazing discounts that allow them access to tools companies twice their size can barely afford. I used to (legally) get $20,000 server packages from Microsoft for $200. Google gives $10,000/month AdWords grants. Salesforce and NetSuite provide amazing tools at amazing prices.
  4. Nonprofits are right to believe that if they had access to better tools and more money they could do even better. Tools written for nonprofits tend to be second rate (look at the vast majority of fundraising toolkits), and nonprofits are held back in the places where they need specialized software. I have friends who write this stuff; they work hard, but with literally billions less in resources they have a big hill to climb.
  5. Organizations like N-TEN have been helping nonprofits learn from each other and from the best of the for-profit world for nearly 15 years. That community has benefited from thought leaders like Beth Kanter, John Kenyon, Ryan Ozimek, and others who help NPOs focus on their goals instead of their tools.
  6. For-profit marketing staff do not believe they have anything to learn from nonprofits, and are often making mistakes that were the subject of basic talks at conferences like NTC five years ago.

Nonprofits often struggle to figure out the right way to leverage new tools because they try to leverage them first. When traditional companies start trying to market in new spaces they sometimes make it look easy because they have a path to follow.  A path broken by nonprofits.

Always Make New Mistakes

The first major online application I wrote was a petition for the American Friends Service Committee (AFSC) in an attempt to build support against the war in Iraq. The Iraq Peace Pledge succeeded in that it gave people a place to voice their frustration and helped encourage the anti-war movement. It failed in the sense that the guy writing the software (me) had no idea what he was doing, MoveOn completely stole our thunder (gathering 100 times more names than we did), and it didn’t exactly prevent the war in Iraq.

What it did do was teach a small group of us that the online work was important, harder than we thought, and required skills we didn’t yet have.  I could list dozens of mistakes that we made in the course of the project, most of which were totally avoidable if any one of us had known then what we know now, but it was those mistakes that caused me to learn to constantly push to became better at what I do.

The biggest mistake we made was that we allowed ourselves to repeat mistakes. We were sloppy, and allowed the same errors to get posted over and over. Mark, my colleague and friend, looked at me at one point and said: “Let’s always make new mistakes from here on.” And we pretty much did – for the next 10 years (although he still makes fun of me for misspelling “signatures” over and over on the peace pledge site).

“Always make new mistakes” became a mantra for us and the AFSC’s Web Team. We knew that we didn’t have the resources to bring in someone else to teach us everything we needed to know, we were leading projects in a medium very few people had mastered, and we were human. So it was going to be impossible to avoid mistakes, but we could make sure that we learned from our mistakes, find ways to avoid making them again, and then push into fresh territory filled with new mistakes we didn’t know existed.

I still use the slogan for my own work, and encourage it with teams I am on. People laugh the first time they hear it. But they discover I’m very serious when they see me adjust my work process and products to deal with mistakes I failed to avoid during a previous project. And I invite them to set the same standards for themselves.

We all need to work hard to identify our mistakes and come up with ways to avoid them. Some mistakes are easy to see and fix: if you have poor spelling, make sure you have someone editing everything you write (even web page headings). Some mistakes are harder to see: if you use the wrong metrics you may appear to be succeeding while actually failing. And some mistakes are hard to admit: creating web applications with very little experience meant I made just about every security blunder possible, and since I knew more about web development than anyone else around me, I resisted attempts by others to point out places where the tools were at risk.

Think hard about your work, look for mistakes, own them as scars you earned doing something new, and figure out how to make sure you make a different mistake next time. You will be a better person and professional and better prepared to change the world.