Try doing it backwards

As part of my effort not to repeat mistakes, I have tried to build a habit in my professional – and personal – life of looking for ways to be better at what I do. I recently rediscovered how much you can learn when you try doing something you know well backwards: I drove on the left side of the road.

This is the Holden Barina we rented while in New Zealand, a brand of car I’d never heard of before this trip. It was a good car for the mountain driving, even though the wiper and light controls were reversed from cars at home.

By driving on the left I discovered how many of my basic driving habits are built around driving on the right. The clearest example: the whole time I was in New Zealand I never knew if anyone was behind me, and I couldn’t figure out why. The mirrors on the car worked just fine, but it turned out I wasn’t looking at them. Driving home from the airport after we returned to the US, I realized that every few seconds my eyes jump to the upper right of my vision to check the mirror. In New Zealand I spent the whole time glancing at the post between the windshield and the driver’s side window (which had seemed massive to me while I was there) instead of the mirror. It made me conscious of my driving habits in a way I haven’t been in years, and as a consequence I think it’s made me a better driver. I’m thinking about little details again; I’ve been more aware of where I am on the road and what I’m doing to keep track of the other cars around me.

My wife drove this section while I got to take some pictures of the amazing scenery. She got to learn to drive on the left on winding mountain roads – we don’t recommend that approach.

A few years ago I was watching videos from the MIT Algorithms course to refresh some of my basics, and to see what had been added in the decade since I’d taken that class at Hamilton. During the review of QuickSort the professor mentioned that it wasn’t originally a divide-and-conquer process but a loop-based approach meant to work on a fixed-length array (so you could use a fixed block of memory), and as I recall he suggested that the students work out the loop-based version. So, riding the train home from work, I pieced it together and found that it’s an elegant process. It’s not something I ever expect to have cause to implement, but it did sharpen my thinking about when to use recursion, loops, and other tools for processing everything in a list. There was a session by John Kary at DrupalCon this year on rethinking loops that pushed me again to revise how I make those decisions. Again, his talk took the reverse view of much of my previous thinking and was therefore very much worth my time.
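
For the curious, here’s roughly the loop-based version I pieced together on the train – a quick sketch in PHP, with the function name and details mine rather than anything from the lecture. Instead of recursing, it keeps the bounds of the sub-arrays still waiting to be sorted on an explicit stack and does all the work in one loop over a fixed array:

```php
<?php
// Sketch of an iterative, in-place quicksort: pending sub-array bounds go on
// an explicit stack instead of the call stack.
function quicksort_iterative(array &$a) {
  if (count($a) < 2) {
    return;
  }
  $stack = array(array(0, count($a) - 1)); // ranges still waiting to be partitioned
  while (!empty($stack)) {
    list($lo, $hi) = array_pop($stack);
    if ($lo >= $hi) {
      continue;
    }
    // Partition around the last element (Lomuto style).
    $pivot = $a[$hi];
    $i = $lo;
    for ($j = $lo; $j < $hi; $j++) {
      if ($a[$j] < $pivot) {
        $tmp = $a[$i];
        $a[$i] = $a[$j];
        $a[$j] = $tmp;
        $i++;
      }
    }
    $a[$hi] = $a[$i];
    $a[$i] = $pivot;
    // Instead of two recursive calls, push the two halves for a later pass.
    $stack[] = array($lo, $i - 1);
    $stack[] = array($i + 1, $hi);
  }
}

$numbers = array(9, 2, 7, 4, 1, 8);
quicksort_iterative($numbers);
print_r($numbers); // 1, 2, 4, 7, 8, 9
```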

If you’re feeling like you are in a good groove on something, try doing it backwards and see what you discover.

Picking tools you’ll love: don’t make yourself hate them on day one.

Every few years organizations replace a major system or two: the web site, CMS, CRM, financial databases, grant software, HR system, etc. Too often they try to make the new tool behave just like the old one, hate the new tool until they realize they misconfigured it, and then spend 5-10 years dealing with problems that could have been avoided. If you’re going to spend a lot of money overhauling a mission-critical tool, you should love it from day one.

No one can promise you success, but I promise that if you take a brand new tool and try to force it to be just like the tool you are replacing, you are going to be disappointed (at best). Salesforce is not CiviCRM, Drupal is not WordPress, Salsa is not Blackbaud. Remember, you are replacing the tool for a reason: if everything about your current tool were perfect you wouldn’t be replacing it in the first place. So here are my steps for improving your chances of success:

  1. List the main functions the tool needs to accomplish: This is the most obvious thing to do, but make sure your list only covers the things you need to do, not the ways you currently do it. Try to keep yourself at a relatively high level to avoid describing what you have now as the required system.
  2. List the pros and cons of what you have: Every tool I’ve ever used had pluses and minuses. And most major internal systems have stakeholders who love and hate it – sometimes that’s the same person – make sure you capture both the good and bad to help you with your selection later.
  3. Develop a list of tools that are well known in the field: Not just tools you know at the start of the project. Make sure you hunt for a few that are new to you. You might think you’ve heard of them all because you walked around the vendor hall at NTC last year, but I promise you there are more companies that picked a different conference to push their wares, and there are open source tools you might have missed too.
  4. Make sure every tool has a salesperson: Open source tools can be overlooked because no one sells them to you, and that may mean you miss the perfect tool for your organization. So for open source tools, even the playing field by having a salesperson, or champion, for the tool. This can be an internal person who likes learning new things, or an outside expert (usually paid, but sometimes volunteer).
  5. Let the sales teams sell, but don’t trust them: Let salespeople run through their presentations, because you will learn something along the way. But at some point you also need to ask them questions that force them off their script. Force a demo of a non-contrived example, or of a feature they don’t show you the first time. Make them improvise and see what happens.
  6. Talk to other users, and make sure you find one who is not happy: Sure, your organization is unique, but lots of other organizations have similar needs for the basic tools – unless you have a software-based mission you probably do not want an email system that’s totally different from everyone else’s. A good salesperson will have no trouble giving you a list of references from organizations that love the tool, but if you want the complete picture find someone who hates it. They might hate it for totally unfair reasons, but they will shed light on the rough edges you may encounter. Also ask the people who love it what problems they run into – remember, nothing is perfect, so everyone should have a complaint of some kind.
  7. Develop a change strategy: In addition to a data migration plan, you need a plan that covers introducing the new tool to your colleagues, training the users, communicating to leadership the risks and rewards of the new setup, and setting expectations about any disruptions the changeover may cause. I’ve seen an organization spend nearly a half million dollars on customization of a complex toolset only to have the launch fail because they didn’t make sure the staff understood that the new tool would change their day-to-day tasks.
  8. Develop a migration plan: Plan out the migration of all data, features, and functions as soon as you have your new tool selected. This is not the same thing as your change strategy; this is the nuts and bolts of how things will work. Do not attempt to do this without an expert. You made yourself an expert in the field, but not in every in and out of the new system: hire someone who is. That could be a setup team from the company that makes the tool, a third-party consultant, or a new internal staff person who has experience with other instances of the tool.
  9. Get staff trained on using the new tool: Don’t scrimp on staff training. Make sure they have a chance to learn how to do the things they will actually be doing on a day-to-day basis. If you can afford to have customized training arranged, I highly recommend it; if you cannot have an outside person do it, consider custom-building a training for your internal users yourself.
  10. Develop a plan for ongoing improvement: You will not be 100% happy 100% of the time, and over time those problems will get worse as your needs shift. So make sure you plan to consistently improve your setup. That can take many forms, and what makes the most sense will vary from tool to tool and org to org, but it will probably mean a budget – so ask for money from the start and build it into your ongoing budget for the project. Plan for constant improvement or you will find a growing list of pain points that push you to redo all this work sooner than expected.

You’ll notice I never actually told you to make your choice. Once you’ve completed steps 1-6 you probably will see an obvious choice; if not: guess. You have a list, you listened to 20 boring sales presentations, you’ve read blog posts, white papers, and ad materials. You are now an expert on the market and the tools. If you can’t make a good pick for your organization, no one else can either, so push aside your imposter syndrome and go with your gut. Sure, you could be wrong, but do the best you can and move forward. It’s usually better to make a choice than to waffle indefinitely.

This Week’s Drupal Fire Drill

This week many in the Drupal community lost a lot of sleep Tuesday night because the security team treated us to a warning about major security updates due out on Wednesday. Fortunately for many it wasn’t a crisis in the end, but it gave us all a chance to practice for the worst. Basically, it was like a fire drill in an elementary school: we got to prepare as though there were a disaster, but since there wasn’t one we don’t really know how it would have gone if there had actually been a fire. We haven’t had a stop-drop-and-roll type of emergency in a while, so it was a good refresher on how to handle a crisis.

For those who don’t know what I’m talking about, here’s a quick review. At Cyberwoven, like many Drupal shops, we follow the Drupal Security Twitter feed in one of our Slack channels, so we saw this mid-afternoon Tuesday:

Slack posting of tweet from the security team.

I read the PSA with images of Drupalgeddon dancing in my head:

There will be multiple releases of Drupal contributed modules on Wednesday July 13th 2016 16:00 UTC that will fix highly critical remote code execution vulnerabilities (risk scores up to 22/25). These contributed modules are used on between 1,000 and 10,000 sites. The Drupal Security Team urges you to reserve time for module updates at that time because exploits are expected to be developed within hours/days. Release announcements will appear at the standard announcement locations.

Drupal core is not affected. Not all sites will be affected. You should review the published advisories on July 13th 2016 to see if any modules you use are affected.

Oh, and the line about the modules being used on between 1,000 and 10,000 sites wasn’t part of the original announcement. On Tuesday we didn’t have a sense of scale: were we talking about modules that everyone uses on almost every site (ctools came up more than once)? It’s the one thing I wish the security team had done differently: given us that sense of scale.

I read all security postings and make sure we take prompt steps to address them for clients as needed, but the potential here was that we’d have to update all 70+ sites in a few hours or less, which is very different from your run-of-the-mill security update that often isn’t relevant to the use cases and threat profiles of the majority of sites.

Here’s what we did next:

Tuesday

  1. Took a minute to panic, complain, and joke about pending illnesses. This is actually a useful step because it allowed me to burn off some nervous energy and then to focus on the real work.
  2. Pulled out the list of all active clients with Drupal sites, and double-checked it for accuracy.
  3. Made sure a developer had a working repo for all sites (70+). Since we had a couple people out of the office, and some projects had been reassigned recently to different developers, this was an important step to make sure no sites fell through the cracks during a rush to update them all.
  4. Made sure we knew which were Drupal 6 sites and which were Drupal 7, in case we were able to determine that Drupal 6 was also affected. Since I knew the announcement would likely skip D6, we needed to accept that we might have to take those sites offline for a time.
  5. Made sure leadership knew that all developers might be busy starting at 16:00 UTC on Wednesday. We didn’t actually cancel anything right away, but I didn’t want anyone surprised if we were all too busy posting and testing updates to worry about things like meetings.
  6. Made sure complex projects were thought about ahead of time: sites with unusual setups or ongoing dev work that would make sudden updates complicated. For example, we have one client with 16 sites that all have an unusual setup, so we agreed on who would handle those and made sure she was prepared.

Wednesday

  1. First thing in the morning I saw the update to the notice that gave us a sense of scale and relaxed a little, but still made sure we were fully prepped.
  2. Noon: the announcement of which modules were affected was released, and a couple of other developers and I immediately reviewed the releases. We relaxed once we determined none of our clients were using any of the modules listed.
  3. I reviewed the code from each of the three modules to see what the change was, looking for ways to improve my own code and avoid similar errors.
  4. Looked for ways to improve our response for the next time it’s not a drill.

Things would have been more exciting if we’d had to update our sites. Since we were prepared it was a matter of minutes for us to check that all our sites were secure. Each developer checked all the sites in their sandbox, and since I knew all sites were in someone’s sandbox that gave us 100% coverage without having to do lots of double checks.

I think it is too easy to skip the code review of modules we weren’t even using, but I find this kind of follow-up really useful. Looking back on Drupalgeddon, it’s amazing how much pain was caused by such a small error (16 characters were all that was needed to fix it). By seeking to understand what went wrong, you can look for places where you make similarly invalid assumptions.
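
To be clear about the kind of invalid assumption I mean, here’s a simplified sketch of that class of bug in PHP – my own paraphrase for illustration, not the actual Drupal code or patch, and the function name is made up:

```php
<?php
// Simplified illustration of the class of bug behind Drupalgeddon, not the
// real Drupal code: expanding an array value into one named placeholder per
// element so it can be dropped into an IN (...) clause.
function expand_placeholders($name, array $data, array &$args) {
  $placeholders = array();
  // The invalid assumption is that the incoming array keys are harmless.
  // If those keys ever come from user input, they end up inside the SQL
  // string itself. The tiny fix is to loop over array_values($data) instead,
  // so the suffix is always a plain integer no matter what keys were passed.
  foreach ($data as $key => $value) {
    $placeholder = $name . '_' . $key;
    $placeholders[] = ':' . $placeholder;
    $args[$placeholder] = $value;
  }
  return implode(', ', $placeholders);
}
```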

If you read my post on making new mistakes, you also know I believe that looking for improvement is the most important detail (particularly when it turned out to be a drill, not a fire). Here’s my initial list of things to improve:

  • Have a system to automatically check every site for specific modules (this was already under development but will take a little while longer to complete) – a rough sketch of the idea is below, after this list.
  • Make sure at least two developers have a working sandbox for all projects at all times in case something comes out during a vacation.
  • Improve internal messaging about what to expect – a template message and process. I tossed together some disjointed thoughts for account managers, but disjointed developer thinking does not make people feel like you’re on top of things.
  • Have better tracking of outliers: I completely missed that I had a demo site on Pantheon that did have the Coder module running. Since it was on Pantheon, they alerted me to the problem and had taken steps until I could do the update myself. But it would have been bad if that site had been someplace else and/or in production.
  • Make sure everyone knows when the actual release is coming out, and what the outcome was. Several developers were hoping to get lunch before the updates but hadn’t done so when the announcement came out (which could have been a problem if we’d ended up busy). And I spent the rest of the day answering one-off questions from the account team, who wanted to know if the announcement had been bad news.
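
For the first item on that list, the sketch I have in mind looks something like this – a minimal example only, with made-up paths, and Coder standing in for whatever modules an advisory names. It simply walks local checkouts looking for each module’s .info file:

```php
<?php
// Minimal sketch: scan local checkouts of each site for specific modules.
// The paths below are placeholders; every Drupal 7 module ships a MODULE.info
// file named after its directory, which is what we look for.
$site_checkouts = array(
  '/var/repos/client-a',
  '/var/repos/client-b',
);
$modules_to_find = array('coder'); // plus whatever else an advisory lists

foreach ($site_checkouts as $site) {
  $found = array();
  $files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($site, FilesystemIterator::SKIP_DOTS)
  );
  foreach ($files as $file) {
    foreach ($modules_to_find as $module) {
      if ($file->getFilename() === $module . '.info') {
        $found[] = $module;
      }
    }
  }
  $found = array_unique($found);
  echo $site . ': ' . ($found ? implode(', ', $found) : 'clean') . "\n";
}
```

This only tells you the module’s code is present, not whether it is enabled, but for a “do we need to worry” first pass that is usually enough.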

Ideally we’d come up with a method to automate security updates (maybe all updates), but that’s not totally straightforward. We have to worry about required patches, non-standard setups, automated testing, and other details. There has been discussion on the Pantheon power users’ mailing list, but every shop has a slightly different workflow (like the fact that we don’t use Pantheon much at all), so we’ll need to come up with a system that fits ours.
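
The skeleton might look something like this – very much a sketch, assuming each site has a local checkout and some test command, and that drush’s pm-update with its --security-only option is available. The paths, branch name, and test commands are placeholders, and the messy parts (required patches, odd setups) are exactly what the sketch punts to a human:

```php
<?php
// Sketch of the skeleton only: for each site, apply security updates on a
// throwaway branch and run that site's tests, leaving a human to review and
// merge. Paths, branch name, and test commands below are placeholders.
$sites = array(
  '/var/repos/client-a' => 'phpunit',
  '/var/repos/client-b' => 'behat',
);

foreach ($sites as $path => $test_command) {
  chdir($path);
  // Work on a branch so required patches and non-standard setups can be
  // sorted out before anything lands in the main branch.
  exec('git checkout -b security-updates 2>&1');
  exec('drush pm-update --security-only -y 2>&1', $update_output, $update_status);
  if ($update_status !== 0) {
    echo "$path: update failed, needs a human\n";
    continue;
  }
  exec($test_command . ' 2>&1', $test_output, $test_status);
  echo "$path: " . ($test_status === 0 ? 'updates ready for review' : 'tests failed') . "\n";
}
```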

Always Make New Mistakes

The first major online application I wrote was a petition for the American Friends Service Committee (AFSC) in an attempt to build support against the war in Iraq. The Iraq Peace Pledge succeeded in that it gave people a place to voice their frustration and helped encourage the anti-war movement. It failed in the sense that the guy writing the software (me) had no idea what he was doing, MoveOn completely stole our thunder (gathering 100 times more names than we did), and it didn’t exactly prevent the war in Iraq.

What it did do was teach a small group of us that the online work was important, harder than we thought, and required skills we didn’t yet have. I could list dozens of mistakes we made in the course of the project, most of which were totally avoidable if any one of us had known then what we know now, but it was those mistakes that taught me to constantly push to become better at what I do.

The biggest mistake we made was that we allowed ourselves to repeat mistakes. We were sloppy, and allowed the same errors to get posted over and over. Mark, my colleague and friend, looked at me at one point and said: “Let’s always make new mistakes from here on.” And we pretty much did – for the next 10 years (although he still makes fun of me for misspelling “signatures” over and over on the peace pledge site).

“Always make new mistakes” became a mantra for us and the AFSC’s Web Team. We knew that we didn’t have the resources to bring in someone else to teach us everything we needed to know, we were leading projects in a medium very few people had mastered, and we were human. So it was going to be impossible to avoid mistakes, but we could make sure we learned from them, found ways to avoid making them again, and then pushed into fresh territory filled with new mistakes we didn’t know existed.

I still use the slogan for my own work, and encourage it with teams I am on. People laugh the first time they hear it. But they discover I’m very serious when they see me adjust my work process and products to deal with mistakes I failed to avoid during a previous project. And I invite them to set the same standards for themselves.

We all need to work hard to identify our mistakes and come up with ways to avoid them. Some mistakes are easy to see and fix: if you have poor spelling, make sure you have someone editing everything you write (even web page headings). Some mistakes are harder to see: if you use the wrong metrics you may appear to be succeeding while actually failing. And some mistakes are hard to admit: creating web applications with very little experience meant I made just about every security blunder possible, and since I knew more about web development than anyone else around me, I resisted attempts by others to point out places where the tools were at risk.

Think hard about your work, look for mistakes, own them as scars you earned doing something new, and figure out how to make sure you make a different mistake next time. You will be a better person and professional and better prepared to change the world.