Picking tools you’ll love: don’t make yourself hate them on day one.

Every few years organizations replace a major system or two: the web site, CMS, CRM, financial databases, grant software, HR system, etc. Too often they try to make the new tool behave just like the old one, hate it until they realize they misconfigured it, and then spend 5-10 years dealing with problems that could have been avoided. If you’re going to spend a lot of money overhauling a mission-critical tool, you should love it from day one.

No one can promise you success, but I promise that if you take a brand new tool and try to force it to be just like the tool you are replacing, you are going to be disappointed (at best). Salesforce is not CiviCRM, Drupal is not WordPress, Salsa is not Blackbaud. Remember, you are replacing the tool for a reason; if everything about your current tool were perfect you wouldn’t be replacing it in the first place. So here are my steps for improving your chances of success:

  1. List the main functions the tool needs to accomplish: This is the most obvious step, but make sure your list covers only the things you need to do, not the ways you currently do them. Stay at a relatively high level to avoid describing what you have now as the required system.
  2. List the pros and cons of what you have: Every tool I’ve ever used had pluses and minuses, and most major internal systems have stakeholders who love them and stakeholders who hate them (sometimes the same person). Make sure you capture both the good and the bad to help with your selection later.
  3. Develop a list of tools that are well known in the field: Not just tools you know at the start of the project. Make sure you hunt for a few that are new to you. You might think you’ve heard of them all because you walked around the vendor hall at NTC last year, but I promise you there are more companies that picked a different conference to push their wares, and there are open source tools you might have missed too.
  4. Make sure every tool has a salesperson: Open source tools can be overlooked because no one sells them to you, and that may mean you miss the perfect tool for your organization. So for open source tools, even the playing field by having a salesperson, or champion, for the tool. This can be an internal person who likes learning new things, or an outside expert (usually paid but sometimes volunteer).
  5. Let the sales teams sell, but don’t trust them: Let the salespeople run through their presentations, because you will learn something along the way. But at some point you also need to ask questions that force them off their script. Force a demo of a non-contrived example, or of a feature they didn’t show you the first time. Make them improvise and see what happens.
  6. Talk to other users, and make sure you find one who is not happy: Sure, your organization is unique, but lots of other organizations have similar needs for the basic tools; unless you have a software-based mission you probably do not want an email system that’s totally different from everyone else’s. A good salesperson will have no trouble giving you a list of reference organizations who love the tool, but if you want the complete picture, find someone who hates it. They might hate it for totally unfair reasons, but they will shed light on the rough edges you may encounter. Also ask the people who love it what problems they run into; remember, nothing is perfect, so everyone should have a complaint of some kind.
  7. Develop a change strategy: In addition to a data migration plan you need a plan that covers introducing the new tool to your colleagues, training the users, communicating to leadership the risks and rewards of the new setup, and setting expectations about any disruptions the changeover may cause. I’ve seen an organization spend nearly a half million dollars on customization of a complex toolset only to have the launch fail because they didn’t make sure the staff understood that the new tool would change their day-to-day tasks.
  8. Develop a migration plan: Plan out the migration of all data, features, and functions as soon as you have your new tool selected. This is not the same thing as your change strategy; this is the nuts and bolts of how things will work. Do not attempt it without an expert. You made yourself an expert in the field, but not in every in and out of the new system: hire someone who is. That could be a setup team from the company that makes the tool, a third-party consultant, or a new internal staff person who has experience with other instances of the tool.
  9. Get staff trained on using the new tool: Don’t skimp on staff training. Make sure staff have a chance to learn how to do the things they will actually be doing on a day-to-day basis. If you can afford customized training I highly recommend it; if you cannot have an outside person run it, consider building a training for your everyday internal users yourself.
  10. Develop a plan for ongoing improvement: You will not be 100% happy 100% of the time, and over time those problems will get worse as your needs shift. So make sure you are planning to consistently improve your setup. That can take many forms, and what makes the most sense will vary from tool to tool and org to org, but it probably means a budget: ask for money from the start and build it into the ongoing budget for the project. Plan for constant improvement or you will find a growing list of pain points pushing you to redo all this work sooner than expected.

You’ll notice I never actually told you to make your choice. Once you’ve completed steps 1-6 you probably will see an obvious choice; if not: guess. You have a list, you listened to 20 boring sales presentations, you’ve read blog posts, white papers, and ad materials. You are now an expert on the market and the tools. If you can’t make a good pick for your organization, no one else can either, so push aside your imposter syndrome and go with your gut. Sure, you could be wrong, but do the best you can and move forward. It’s usually better to make a choice than to waffle indefinitely.

This Week’s Drupal Fire Drill

This week many in the Drupal community lost a lot of sleep Tuesday night because the security team treated us to a warning about major security updates due out on Wednesday. Fortunately for many it wasn’t a crisis in the end, but it gave us all a chance to practice for the worst. Basically, it was like a fire drill in an elementary school: we got to prepare like there was a disaster, but since there wasn’t one we don’t really know how it would have gone if there had actually been a fire. We haven’t had a stop-drop-and-roll type of emergency in a while, so it was a good refresher on how to handle a crisis.

For those who don’t know what I’m talking about, here’s a quick review. At Cyberwoven, like many Drupal shops, we pipe the Drupal Security Twitter feed into one of our Slack channels, so we saw this mid-afternoon Tuesday:

[Screenshot: Slack posting of the tweet from the security team.]

I read the PSA with images of Drupalgeddon dancing in my head:

There will be multiple releases of Drupal contributed modules on Wednesday July 13th 2016 16:00 UTC that will fix highly critical remote code execution vulnerabilities (risk scores up to 22/25). These contributed modules are used on between 1,000 and 10,000 sites. The Drupal Security Team urges you to reserve time for module updates at that time because exploits are expected to be developed within hours/days. Release announcements will appear at the standard announcement locations.

Drupal core is not affected. Not all sites will be affected. You should review the published advisories on July 13th 2016 to see if any modules you use are affected.

Oh, and that line above about how many sites use the affected modules wasn’t part of the original announcement. On Tuesday we didn’t have a sense of scale: were we talking about modules that everyone uses on almost every site? (ctools came up more than once.) It’s the one thing I wish the security team had done differently: given us that sense of scale from the start.

I read all security postings, and make sure we take prompt steps to address them for clients as needed, but the potential here was that we’d have to update all 70+ sites in a few hours or less, which is very different from run-of-the-mill security updates that often don’t apply to the use cases and threat profiles of the majority of sites.

Here’s what we did next:

Tuesday

  1. Took a minute to panic, complain, and joke about pending illnesses. This is actually a useful step because it allowed me to burn off some nervous energy and then to focus on the real work.
  2. Pulled out the list of all active clients with Drupal sites, and double-checked it for accuracy.
  3. Made sure a developer had a working repo for all sites (70+); see the sketch after this list. Since we had a couple of people out of the office, and some projects had recently been reassigned to different developers, this was an important step to make sure no sites fell through the cracks during a rush to update them all.
  4. Made sure we knew which sites were on Drupal 6 and which were on Drupal 7, in case the vulnerabilities turned out to affect Drupal 6 as well. Since I knew the announcement would likely skip D6 (it had reached end of life and no longer received official security support), we needed to accept that we might have to take those sites offline for a time.
  5. Made sure leadership knew that all developers might be busy starting at 16:00 UTC on Wednesday. We didn’t actually cancel anything right away, but I didn’t want anyone surprised if we were all too busy posting and testing updates to worry about things like meetings.
  6. Made sure complex projects were thought through ahead of time: sites with unusual setups or ongoing dev work that would make sudden updates complex. For example, we have one client with 16 sites that all share an unusual setup, so we agreed who would handle those and made sure she was prepared.
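
For step 3, here is a minimal sketch of the kind of coverage check I mean, assuming a simple sites.csv inventory (site name, assigned developer, local clone path); the file layout and column names are hypothetical, so adapt them to however you track projects.

```python
#!/usr/bin/env python3
"""Rough sketch: verify every client site has an assigned developer with a
working local clone. The sites.csv inventory is hypothetical."""

import csv
import subprocess
from pathlib import Path


def clone_is_healthy(path: str) -> bool:
    """Treat a clone as 'working' if .git exists and a dry-run fetch succeeds."""
    repo = Path(path)
    if not (repo / ".git").is_dir():
        return False
    result = subprocess.run(
        ["git", "-C", str(repo), "fetch", "--dry-run"],
        capture_output=True,
    )
    return result.returncode == 0


def main() -> None:
    uncovered = []
    with open("sites.csv", newline="") as handle:
        for row in csv.DictReader(handle):  # columns: site, developer, local_path
            if not row["developer"] or not clone_is_healthy(row["local_path"]):
                uncovered.append(row["site"])
    if uncovered:
        print("Sites with no ready developer:", ", ".join(uncovered))
    else:
        print("All sites covered.")


if __name__ == "__main__":
    main()
```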

Wednesday

  1. First thing in the morning I saw the update to the notice that gave us a sense of scale and relaxed a little, but still made sure we were fully prepped.
  2. Noon: the announcement of which modules were affected was released, and a couple of other developers and I immediately reviewed the releases. We relaxed once we determined none of our clients were using any of the modules listed.
  3. I reviewed the code changes in each of the three affected modules to see what each fix was, looking for ways to improve my own code and avoid similar errors (see the sketch after this list).
  4. Looked for ways to improve our response for the next time it’s not a drill.
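
If you want to do the same kind of review, the sketch below shows one way to pull the patch for an affected module. The module name and release tags are placeholders, and the repository URL assumes drupal.org’s current Git hosting; substitute the names and versions from the actual advisory.

```python
#!/usr/bin/env python3
"""Sketch of a post-advisory diff review: clone an affected module and read
exactly what changed between the vulnerable and fixed releases."""

import subprocess

MODULE = "example_module"    # hypothetical; use the advisory's module name
VULNERABLE_TAG = "7.x-1.3"   # placeholder release tags
FIXED_TAG = "7.x-1.4"


def main() -> None:
    repo_url = f"https://git.drupalcode.org/project/{MODULE}.git"
    subprocess.run(["git", "clone", "--quiet", repo_url], check=True)
    diff = subprocess.run(
        ["git", "-C", MODULE, "diff", VULNERABLE_TAG, FIXED_TAG],
        capture_output=True, text=True, check=True,
    )
    # Read the patch and ask: could my own code fail the same way?
    print(diff.stdout)


if __name__ == "__main__":
    main()
```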

Things would have been more exciting if we’d had to update our sites. Since we were prepared, it was a matter of minutes for us to check that all our sites were secure. Each developer checked all the sites in their sandbox, and since I knew every site was in someone’s sandbox, that gave us 100% coverage without lots of double-checking.

I think it is too easy to skip the code review of modules we weren’t using, but I find this kind of follow-up really useful. Looking back on Drupalgeddon, it’s amazing how much pain was caused by such a small error (16 characters were all that was needed to fix it). By seeking to understand what went wrong, you can look for places where you make similarly invalid assumptions.

If you read my post on making new mistakes, you also know I believe that looking for improvement is the most important detail (particularly when it turned out to be a drill, not a fire). Here’s my initial list of things to improve:

  • Have a system to automatically check every site for specific modules (this was already under development but will take a little while longer to complete); there’s a rough sketch of the idea after this list.
  • Make sure at least two developers have a working sandbox for all projects at all times in case something comes out during a vacation.
  • Improve internal messaging about what to expect: a template message and process. I tossed together some disjointed thoughts for account managers, but disjointed developer thinking does not make people feel like you’re on top of things.
  • Have better tracking of outliers: I completely missed that I had a demo site on Pantheon that did have the Coder module running. Since it was on Pantheon, they alerted me to the problem and took protective steps until I could do the update myself. But it would have been bad if that site had been hosted someplace else and/or in production.
  • Make sure everyone knows when the actual release is coming out, and what the outcome was. Several developers were hoping to get lunch before the updates but hadn’t done so when the announcement came out (which could have been a problem if we’d ended up busy). And I spent the rest of the day answering one-off questions from the account team, who wanted to know if the announcement had been bad news.
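
For that first item, here is a minimal sketch of the sweep we want to automate, assuming every site is a Drupal 7 checkout under one parent directory and that a module is present when a matching .info file exists in the tree. The root path is hypothetical, and the affected-module set should come straight from the advisory.

```python
#!/usr/bin/env python3
"""Sketch of an 'is anyone running module X?' sweep across local checkouts."""

from pathlib import Path

SITES_ROOT = Path("/var/projects")  # hypothetical: one Drupal checkout per site
# Fill in from the advisory; these are the machine names from July 2016.
AFFECTED = {"coder", "restws", "webform_multifile"}


def modules_in(site: Path) -> set:
    """Drupal 7 modules declare themselves with a NAME.info file."""
    return {info.stem for info in site.rglob("*.info")}


def main() -> None:
    for site in sorted(p for p in SITES_ROOT.iterdir() if p.is_dir()):
        hits = AFFECTED & modules_in(site)
        if hits:
            print(f"{site.name}: UPDATE NEEDED -> {', '.join(sorted(hits))}")


if __name__ == "__main__":
    main()
```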

Ideally we’d come up with a method to automate security updates (maybe all updates), but that’s not totally straightforward. We have to worry about required patches, non-standard setups, automated testing, and other details. There has been discussion on the Pantheon power users’ mailing list, but every shop has a slightly different workflow (like the fact that we don’t use Pantheon much at all), so we’ll need to come up with a system that fits our own.
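
As a starting point, a deliberately naive sketch of that automation might look like the following, assuming Drush 8’s pm-update command and Drush site aliases (the alias names here are made up). Everything the paragraph above warns about (required patches, odd setups, testing before deploy) is left as the hard part.

```python
#!/usr/bin/env python3
"""Naive sketch: loop over Drush site aliases and apply security-only updates.
A real pipeline would reapply required patches, handle non-standard setups,
and run automated tests before anything is deployed."""

import subprocess

SITE_ALIASES = ["@client-a.dev", "@client-b.dev"]  # hypothetical aliases


def security_update(alias: str) -> None:
    # --security-only limits the run to modules with security releases.
    subprocess.run(
        ["drush", alias, "pm-update", "--security-only", "-y"],
        check=True,
    )


def main() -> None:
    for alias in SITE_ALIASES:
        print(f"Updating {alias} ...")
        security_update(alias)
        # TODO: reapply patches, run the test suite, deploy only if green.


if __name__ == "__main__":
    main()
```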

Always Make New Mistakes

The first major online application I wrote was a petition for the American Friends Service Committee (AFSC), an attempt to build support against the war in Iraq. The Iraq Peace Pledge succeeded in that it gave people a place to voice their frustration and helped encourage the anti-war movement. It failed in the sense that the guy writing the software (me) had no idea what he was doing, MoveOn completely stole our thunder (gathering 100 times more names than we did), and it didn’t exactly prevent the war in Iraq.

What it did do was teach a small group of us that the online work was important, harder than we thought, and required skills we didn’t yet have. I could list dozens of mistakes we made in the course of the project, most of which would have been totally avoidable if any one of us had known then what we know now, but it was those mistakes that pushed me to constantly work at becoming better at what I do.

The biggest mistake we made was that we allowed ourselves to repeat mistakes. We were sloppy, and allowed the same errors to get posted over and over. My colleague and friend Mark looked at me at one point and said: “Let’s always make new mistakes from here on.”  And we pretty much did — for the next 10 years (although he still makes fun of me for misspelling “signatures” over and over on the peace pledge site).

“Always make new mistakes” became a mantra for us and the AFSC’s Web Team. We knew we didn’t have the resources to bring in someone else to teach us everything we needed to know, we were leading projects in a medium very few people had mastered, and we were human. So it was going to be impossible to avoid mistakes, but we could make sure we learned from them, find ways to avoid making them again, and then push into fresh territory filled with new mistakes we didn’t know existed.

I still use the slogan in my own work, and encourage it on teams I work with. People laugh the first time they hear it, but they discover I’m very serious when they see me adjust my work process and products to deal with mistakes I failed to avoid during the previous project. And I invite them to set the same standard for themselves.

We all need to work hard to identify our mistakes and come up with ways to avoid them. Some mistakes are easy to see and fix: if you have poor spelling, make sure you have someone double-checking your writing. Some mistakes are harder to see: if you use the wrong metrics you may appear to be succeeding while actually failing. And some mistakes are hard to admit: creating a web application with very little experience meant I made just about every security blunder possible, and since I knew more about web development than anyone else around me, I resisted others’ attempts to point out places the tools were at risk.

Think hard about your work, look for mistakes, own them as scars you earned doing something new, and figure out how to make sure you make a different mistake next time. You will be a better person and professional and better prepared to change the world.