Throw away information

When I was first starting out in IT I had a senior colleague who commented to me about all the throwaway knowledge she’d collected and tossed over the years. She talked about learning early versions of VB, an obscure OS for an old HP minicomputer, the aging phone system we had, and all kinds of other things she’d learned in her career that had become, or were about to become, useless knowledge.

Liz taught me a lot of things, but that basic observation about spending lots of time acquiring information that promptly becomes useless stands out in my mind as a useful caution. It has also become something I try to plan around and compensate for as I advance. For example, I almost never bother to memorize an API: I can generally keep the documentation handy, and the documentation gets updated when new features are added, where my memory would lag behind.

It’s also part of why I love puzzles: they keep my brain in practice at gaining and tossing aside detailed information.  There is just about nothing as useless as the things you teach yourself while doing a jigsaw puzzle. The small details that allow you to associate pieces with parts of the puzzle. The ways you sort pieces for a particular puzzle. All kinds of things that really only apply to the puzzle you’re doing and not any other.

Puzzle underway.
The start of the cat puzzle at the top of this post. Stacks of red, white, brown, and green pieces are gathered on the lower right while I finished the border. The wood grain and basket weave would be sorted during later passes.

There are plenty of studies that show we need to keep our brains active doing different things to help keep them healthy – some even look at the value of puzzles. Programming, and other tech work, certainly does that by default, but as any of us works with a specific platform we get used to the patterns it uses and therefore slow our rate of learning. Puzzles are also good for finding practical applications of basic computer science concepts.

Sorting in particular is something puzzles encourage different approaches to, not all of which are routinely needed in development work, but which are good to remember are out there. The concepts at the core of radix sort – or bucket sort – are extremely useful, and can even serve as a form of compression for a puzzle. My puzzle desk is a bit too small to let me spread out all the pieces of most incomplete puzzles, but digging through all the pieces over and over is just a waste of time. So I often find a basic pattern I can do quick sorts by, and stack up pieces by color, or those with book covers, or some other easy-to-quantify detail. Then as I move through assembling the puzzle I can grab a stack of those pieces and do another round of sorting and assembly. Like a radix sort, the first pass doesn’t have to be perfect to be useful for making progress on the overall solution.
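To make the analogy concrete, here is a minimal sketch of that first bucketing pass in PHP (the pieces and the “color” attribute are made up for illustration):

<?php
// One coarse pass: stack each piece on a pile keyed by its most obvious
// feature. Later passes only dig through one small pile at a time, which is
// the same idea as the first pass of a radix/bucket sort.
$pieces = [
  ['id' => 1, 'color' => 'red'],
  ['id' => 2, 'color' => 'green'],
  ['id' => 3, 'color' => 'red'],
  ['id' => 4, 'color' => 'white'],
];

$buckets = [];
foreach ($pieces as $piece) {
  $buckets[$piece['color']][] = $piece;
}

// Work one bucket at a time instead of re-scanning the whole table.
print_r(array_keys($buckets));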

Puzzle pieces spread on a desk showing sample sorting collections.
The next puzzle getting started, with edges sorted on the left and rough collections forming on the right. That big collection on the right is text and poster edges (the puzzle is a collection of classic Star Wars posters).

Liz was right. I too learned and dropped VB; that stupidly designed phone system was mine until we replaced it with one slightly less stupidly designed; Drupal 4, 5, and 6 all had particularities I recall only to be nostalgic about the past; and the things I’m teaching myself these days about Salesforce will one day be equally useless. That’s okay; it’s what I signed up for, and if you are finding your own way in this field it’s a reality you should plan on being a part of for the rest of your career.

Developers need to write more than code

Developers need to be able to write in their primary spoken languages as well as they can write in their primary programming languages. It takes effort and practice, just like programming does. And you get rusty if you stop for a while, just like in programming. I started blogging in part to make sure I was writing on a regular basis. When I worked at AFSC I was writing and editing every day for my work, but the next two jobs I had didn’t expect much writing from me. I’ve written before about the value of my liberal arts education, and the last few weeks have been a testament to the effort Hamilton put into making sure I learned the fundamentals of writing.

In my new job we work hard to provide a great deal of information to our clients and to make sure they can understand that information. While each team member brings a different set of strengths and weaknesses to the projects, we are all expected to communicate clearly with each other and our clients. All of us are expected to explain our parts of the project so that our clients can understand our advice and can make informed choices about how the projects should proceed.

In the last month I’ve been responsible for presenting basic overviews of platforms and highly technical reviews of parts of projects. I have contributed to large architectural design documents and focused detailed designs for small subsystems. All these materials were done collaboratively by teams of highly skilled technical people, but our clients’ technical skills span a much wider range. Often they are experts in other areas who need to use the tools we’re building. Others are experts in the technology we work with, and we are serving as added capacity to their in-house teams. We are expected to explain ourselves equally well to all these people, which means we need to be both clear and thorough.

One of the things I try to bring to the project teams is a willingness and ability to edit and be edited.  Because I spent time responsible for all the digital communications of a large organization, I have experience quickly and aggressively editing documents of all sizes. I also know how important it is to have an editor who catches your mistakes, and while I carefully review someone else’s work I am always supportive of anyone who is editing a section I wrote. Not everyone is equally comfortable with these roles, but they are things I believe all developers should try to master.

We have socially come to expect that technical people do not explain their work clearly. We are too often allowed to use a great deal of jargon, skip lightly over hard to understand details, and belittle those who cannot keep up. Complex systems are indeed complex so it does take effort to clearly explain and fully understand them. But the vast majority of developers are perfectly capable of explaining what we do when we take the time. And the vast majority of people are capable of understanding clear descriptions of our work.

Developers who are unwilling to take the time to learn to explain their work well do a disservice to their colleagues, clients, and themselves. Their work will suffer from a lack of good feedback, and so will not be as good as it could have been with more support. Those weaknesses will become bad user experiences, bugs, and other flaws in their final products.

A well done code review (one that’s meant to be supportive and doesn’t include yelling) and a good edit aren’t that different. In both one person is looking over the work of another to try to understand it and check for errors. Done well both give a chance for the reviewer to learn from the material and to help the person whose work is being reviewed do their best work.

When we explain our work well we open ourselves up for feedback. That feedback gives us chances to validate our designs, improve the experiences of our users, and generally create more awesome products.

What else should I have asked but haven’t yet?

Every time I go through a job search either as a candidate or reviewing applications I try to learn a few things to make sure I am better prepared in the future and to help friends looking for work and talent. I recently completed a job search, so I want to share what I learned this time around.

I found a great question for a candidate to ask:  “What else should I have asked you?”

This is actually a question my wife and I started asking several years ago when we were buying big-ticket items we didn’t know much about – like cars, houses, and HVAC systems. One of us, I don’t remember who, asked one of the salesmen if there was anything he thought we should be asking him and his competitors. He gave us a couple of small tips – likely things he thought he could answer better than his competition – and it gave us an insight into details he thought were important. Once we asked that question of all the sales people we had a list of questions to ask that covered more perspectives than we would have been able to figure out on our own.

It’s now a staple question we ask when starting anything new. Instead of trying to pretend to be experts we ask people for guidance. Often the answer is “No, I think you covered it,” but sometimes we learn things or are told about discounts, features, or services we would have otherwise missed. Outside of purchasing we’ve found the question can help spur conversations and get people to tell us things we need to know – it’s a question we use a lot as Guardians ad Litem.

Most good interviews include a time for the candidate to ask questions. This should not be a pro forma detail crammed in at the end. If the interviewer is taking your needs seriously they will give you several minutes of questions that let you round out who you are as a candidate; this is particularly true when talking with the hiring manager (if they don’t take this seriously you should think about whether or not you want to work for that person). This portion of the interview is a critical chance to gather information about the organization, your potential role, their existing team, and their vision of the future. It is also a chance to ask questions that highlight your experience and knowledge. Most advice you will find online will tell you to make sure you have a few questions you want to ask to try to draw out the information you need while showing off that you’re smart and talented. Doing this well can be hard. I discovered that having a simple, and reliably unusual, question I can ask at the end gives a good last impression, and this one has gained me unexpected insight more than once.

The exact wording isn’t important here. I’ve asked several versions:

Are there other things I should have asked but haven’t?

Are there questions you aren’t hearing from candidates that you expected?

What else do you think someone should be asking about before joining your team?

Are there questions you would ask if you were in my shoes?

The idea is to ask an open-ended question that shows you know there is always more information to be gained and gets them to think about things they haven’t shared with you, or with other candidates. The question alone often stands out, and if you get them to discuss something with you they didn’t discuss with others that helps you stand out in their minds even more. It also gives them a chance to talk about things they know and you don’t, which can help give a positive impression of you (this is the same idea as dating advice that encourages getting your date to talk about themselves in part because it will make them think you’re smarter).

We all tend to want to know the same things when considering a job. This portion of the interview allows you to fill in gaps in their job ads and the conversation you’ve had so far. But since all job seekers want similar information they are asking similar questions. As for showing off, the hiring manager has likely already pre-selected a group that has the shared background they are looking for, so you aren’t going to easily stand out from that crowd of people with similar professional backgrounds. But by asking an unexpected question that puts the creative work on the interviewer, you might be able to trigger a conversation that gives you that extra attention.

For me the question worked best in group interviews, because finding good questions is hard for me in that setting, and it sometimes triggered discussion and debate within the team about things they wanted to hear candidates asking. It gave me a chance to hear a set of perspectives I wouldn’t have heard otherwise, and to see the team disagree about their vision for what’s needed. The most successful instance was when they fell into a mode of answering each other’s questions. For 15 minutes I moderated a discussion of what the team needed from their newest members and watched the internal team dynamics and politics play out in front of me. Usually the responses were more mundane, but still helpful. Never did I feel like it was a foolish thing to have asked, since the worst answer I got was a long pause and “Well that’s interesting, but I think we’ve covered everything I think you need to know,” followed by a quick checklist of details the person thought it was important for candidates to know (the details of that list helped confirm why I didn’t want to work for that manager).

The question is also practical in a pinch. One of my interviews was rushed: I had just two hours to prepare after a first-round interview, so I didn’t have time to think of new things to ask. To add to the challenge the interviewer answered most of what I’d come up with before we got to my turn. I think I managed one or two detail questions that mostly clarified something she told me before I switched gears and asked something to the effect of “What else should I ask about before taking the job?” The question allowed me to stand out to her as someone asking questions that suggested I wanted to make my career move carefully (which was true, and a good thing from their perspective too), and got the conversation into a productive place about the team’s role within a large organization. I start the job from that interview this week.

Using Composer for Drupal Modules and Private Bitbucket Repos

The next installment in my ongoing set of posts to create a public record of things I couldn’t learn in one Google search is a process for using Composer to track a Drupal 8 module in a private repository.

It’s pretty common for Drupal agencies to have a small collection of modules they have built in-house and use on nearly all client sites, or to build a module for one client that has many sites. We are all becoming adept at managing our projects with Composer, but the vast majority of resources are focused on managing publicly available code via packagist. There are times these kinds of internally shared modules cannot be made fully public (for example they may contain IP belonging to the client). We have one such client that needs a module deployed to dozens of sites, and so I sat down a few weeks ago to figure out a solution.

We use Bitbucket for our private repositories. I am sure there is a similar solution using GitHub, but I haven’t worked out its details.

  1. Create private repo for module on Bitbucket.
  2. Clone that repo locally, and structure it to match Drupal.org’s conventions (this probably isn’t required, but should allow your module to blend into the rest of the project more smoothly).
  3. Create an OAuth token for your account in Bitbucket. Make sure to include a dummy callback URL; you can literally use http://www.example.com. If you see references to auth.json, don’t worry about that part yet.
  4. Add a composer.json file to the module’s repo (it only requires module name, type, and the branch alias, but it’s good to include the rest):
    {
      "name": "client/client_private_module",
      "type": "drupal-module",
      "description": "A very important module to our very important client.",
      "keywords": ["Drupal"],
      "homepage": "https://www.bitbucket.org/great_agency/client_private_module",
      "license": "proprietary",
      "minimum-stability": "dev",
      "extra": {
        "branch-alias": {
          "8.x-1.x": "1.x-dev"
        }
      },
      "require": {
        "drupal/diff": "~1.0",
      }
    },
    
  5. Add a reference to the repositories section of the project’s composer.json:
    {
      "type": "package",
      "package": {
        "name": "client/client_private_module",
        "version": "dev",
        "type": "drupal-module",
        "dist": {
          "url": "https://www.bitbucket.org/great_agency/client_private_module/get/8.x-1.x.zip",
          "type": "zip"
        }
      }
    }
  6. Now just run composer require client/client_private_module, and provide the OAuth credentials from step 3 (note: the first time you do this Composer will create the needed ~/.composer/auth.json).
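For reference, the resulting ~/.composer/auth.json looks roughly like this – a sketch, with placeholder values standing in for the key and secret of the OAuth consumer created in step 3:

{
  "bitbucket-oauth": {
    "bitbucket.org": {
      "consumer-key": "your-oauth-key",
      "consumer-secret": "your-oauth-secret"
    }
  }
}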

A Process to create a Drupal 8 module’s Config

One of the best practices for Drupal 8 that is still emerging is how to create modules with complex deployable configuration. In the past we often abused the features module to do this, and while that continues to be an option, with Drupal 8’s vastly improved configuration management options and the ability to install configuration easily I have been looking for something better. I particularly want to build modules that don’t have unnecessary dependencies but I can still reliably include all the needed configuration in my project. And after a few tries I think I’ve struck on an effective process.

Let’s start with a quick refresher on installing configuration for a Drupal 8 module. During module installation Drupal will load any YAML files that match configuration patterns it already knows about that are included in your module’s config/install directory. In theory this is great, but if you want to include configuration that comes with other modules you have to figure out what files are needed; if you want to include configuration from core modules you probably will need to find a fairly large collection of files to get all the required elements. Finding all those files, and copying them quickly and easily, is the challenge I set out to solve.

My process starts with a local development sandbox site that is just there to support this development work, and I create a local git repository for the site’s configuration (I don’t need to connect it to a remote, like Bitbucket or GitHub, or handle all of the site’s code, since it’s just there to help me find changes to config files). Once installation and any base configuration are complete I export the site’s config to the directory covered by the repo (here I used d8_builder/config/sync; the site itself was at d8_builder/pub), and make sure all changes in the repository are committed:
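Roughly, that setup looks like the following – a sketch, assuming Drupal Console for the export and that the site’s sync directory points at d8_builder/config/sync as described above:

cd d8_builder/pub
drupal config:export --remove-uuid --remove-config-hash

cd ../config/sync
git init
git add -A
git commit -m "Baseline configuration after install"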

Now I create my module and a second repository just for it. The module’s repository is linked to a remote since this is the actual product I’m creating.

With that plumbing in place I can make whatever configuration changes I need included in the module. Lately I’ve been creating a custom moderation workflow with several user roles and edge cases that will need to be deployed on a dozen or so sites, so you’ll see that reflected below, but this process should work for just about any project with lots of interrelated configuration.

Once I have completed a set of changes, I export the site’s configuration again, making sure to avoid UUIDs and hashes that will cause trouble on import:
drupal config:export --remove-uuid --remove-config-hash

Now git can easily show which configuration files were changed, added, or removed:
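In the builder repository that is just a quick status check – a sketch of the command I run:

git status --short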

Next I use git, xargs, and cp to copy those files into the module (hat tip on this detail to Andy Gregorowicz):
git ls-files -om --exclude-standard --exclude=core.extension.yml | xargs -I{} cp "{}" pub/modules/custom/fancy_workflow/config/install/

Notice that I skip the core.extension.yml file. If your module has dependencies you’ll still need to update your module’s info.yml file to list them.
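For example, a hypothetical fancy_workflow.info.yml – matching the module path used above, with dependencies you might list for a custom moderation workflow – could look like this:

name: Fancy Workflow
type: module
description: 'Deployable moderation workflow configuration.'
core: 8.x
dependencies:
  - content_moderation
  - workflows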

Now a quick commit and push of the changes to the module’s repo, and I’m ready to pull the module into other projects. I also commit the builder repo to ensure it’s easy to track any future changes.

This isn’t a replacement for tools like Configuration Installer, which are designed to handle an entire site; this is intended just for module development.

If you think you have a better solution, or that I’m missing something important, please let me know.

Preparing for your next crisis

Can your plan handle the bizarre?

Last winter Dries, the Drupal Association, and the whole Drupal community stumbled when concerns about a leading contributor’s potentially exploitive relationship got caught up in discussions of Gorean subculture and related sexual behaviors (warning: researching this topic will quickly lead you to NSFW information). The intriguing details drew in an ever-expanding audience but were actually irrelevant to the main concerns and the secondary ones that followed. The DA lost control of the message and the story, and the entire community suffered as a consequence. Last spring and summer I was asked to step in to help them regain control of the message and start to resolve the crisis. It wasn’t the first time I was part of a crisis response with unusual details, and likely won’t be the last.

The first time I saw an organization respond to a threat to their reputation was my senior year at Hamilton College, when I was an intern in the Communications and Development Department. On the morning of February 5th, 2001 the administrative assistant who spent every Monday morning scanning major publications for references to the college and its professors (this was before you could use Google Alerts and other tools for a similar purpose) suddenly jumped up from her desk and ran into her boss’s office. A few moments later they both sprinted down the hall to the Vice President’s office. She had found an article in the New York Times Magazine titled The Cloning Mission; A Desire to Duplicate featuring then-chemistry professor Brigitte Boisselier who – unbeknownst to the college – was moonlighting as the research director for Clonaid, trying to develop human cloning technology as part of her leadership of Raëlianism.

Yup, the first time I had a front row seat to an organization’s crisis was a college learning from the New York Times that they had a professor doing secret human cloning research for a group that believes aliens created humanity.

Within an hour they had a preliminary message prepared for any alumni who called – and they were calling – and got it to all alumni class presidents to share with any concerned classmates. By noon they had held meetings with all the needed decision makers, including the college president, the chemistry department chair, the deans, and the VP of Communications, to form a plan of action. By early afternoon that turned into a more formal statement that was recirculated to the class presidents and anyone else expressing concern to the school. Over the course of the semester, and the two years that followed (she continued to make world news after she left the college), they responded to repeated media inquiries – I was one of several interns in a mock audience for b-roll footage of a CBC piece on the topic – and even got her to engage in a public debate with an ethicist from the Philosophy department (which went poorly for her). In the end the concerned alumni were pleased with the college’s handling of the matter and the school’s reputation remained intact.

Eventually every organization will face a crisis that requires a public response. Depending on the nature of your organization you may already have a plan that handles the obvious situations: like schools preparing for threats to their students or political campaigns preparing for sexual harassment claims (at least they all should be prepared regardless of what their candidate tells them). But in my experience most of the time the crisis that actually emerges isn’t what you expected and includes strange details that easily distract everyone from the main issue.

Hamilton did not have a plan titled: “What happens if one of our professors is caught leading human cloning research for an alien cult.”  What they had was a general plan for “What happens when it appears someone is going to make us look bad” that was quickly escalated to “what happens when it really is bad.” They knew who they needed to get into a meeting, and that allowed timely decision making. The communications team then had a basis to work quickly so they could get back in control of the story. The plan they had allowed them to brush aside the unimportant details – the involvement of Raëlianism was fun to talk about but didn’t change anything about the response – so they could focus on important details.

Often when faced with these kinds of strange details surrounding a crisis, the people who should be leading the response get distracted and start talking about those details. When it gets really weird they really want to say “this is not my fault and not my problem” and ignore it. It doesn’t matter why; it is still your problem. Everyone will want to focus on the salacious details, but you have to focus on the important details and lead your audience to supporting your response.

These are my tips for planning for a crisis and building a response that allows you to stay focused:

  1. Know when to initiate your response. Develop a list of people and metrics to use to initiate a crisis response. It should include both some clear markers – e.g. a threat of violence against staff or constituents – and some fuzzier signs – e.g. anytime the ED/CEO or board chair says there is a crisis.
  2. Know who needs to be involved and how to find them. Part of what allowed Hamilton to move quickly was they knew exactly who needed to be involved in the process. Likewise you should determine which staff, board members, volunteers, consultants, etc, need to be in the loop and how you reach them quickly. And know who gets cut out when – if your board chair is on a cruise and can’t be reached until next Tuesday who do you call instead? If your director of communications is the problem you’re probably better off not having them planning the response.
  3. Prepare different types of response. You may want to monitor the situation while saying nothing; you may want to target different messages to specific audiences (your board may get a different message than your donors); you may want to try to use press contacts or avoid them. You should think about how and when to use all your communications channels as part of your response.
  4. Have metrics for testing your plan before following it. In a time of stress it can be easy to overlook important details in your response. While planning write up a checklist to run through to make sure you remembered everything that’s important. It should at least include a reminder to think about everyone’s physical safety and your legal risk (in that order). It should also probably include contacts with important constituent groups (like Hamilton’s alumni class presidents) and any internal audiences so staff and volunteers aren’t surprised by public statements.
  5. Be prepared to share aggressively. Often organizations appear to be hiding information when they share it slowly, or when they say “this is all we can release” and then are pressured into releasing more. With any given statement release everything you can – sometimes with supporting materials if you need to – and then stop.
  6. Be prepared to shut up. This correlates with the previous tip. Sometimes part of a good response will be to be quiet. Usually this will be right before and after a major statement. It gives people a chance to process your response and avoids the sense that you are leaking information in dribs and drabs.
  7. Don’t over-plan. Your next crisis will not look like what you planned for, so be prepared to change course from your very first move. If you lock in too many details you will likely make your situation worse by sounding tone-deaf.

When you have created your draft plan you should practice using it. Some of that practice should be very practical: what do you do when a man in leadership is accused of sexual harassment or abuse? Some of it should be like the CDC and FEMA zombie drills; even if you don’t use zombies use a crisis that you think couldn’t possibly happen – if you hear someone in the office say “good thing that can’t happen here” use it as your scenario. The first makes sure you are prepared for the kinds of things that are most likely to actually happen. The second makes sure you are prepared to think outside the box and handle a messy situation to which you thought you were immune. Your organization might not do any medical research at all, but what would happen if you discovered a board member was marrying the next director of Clonaid (including aliens is almost as fun as including zombies)? What would you tell your major donors?

A good crisis response comes from being prepared for the unexpected. If your plan is flexible enough to handle the utterly expected and adapt to a staff member cloning alien zombies, you can probably handle whatever actually comes your way.

Drupal 8: Remote Database Services

I recently completed a Drupal 8 project that required pulling data from a remote database. The actual data is not terribly complicated, so Drupal’s role in this case is mostly to provide an abstraction layer that converts the database into a (cacheable) JSON response. Pulling all the pieces together took a little more research and guessing than I expected, so I figured I might save a few people time by writing it up. This is more of an intermediate than a beginner project, and so I’m going to skip over lots of detail that is important to making it all really work. To really understand what’s happening here you’ll want a basic understanding of Drupal 8’s controllers and database services.

What we’re doing here is creating a database service and a controller to provide a JSON endpoint. We’ll define the database connection, the Drupal service, and then the controller.

Drupal allows us to define database connections in the main settings file. This allows easy access to Drupal’s database services and query classes.

Connection Definition

The first step is to define the database connection in settings.php:

<?php
$databases['remote']['default'] = [
  'database' => 'extra_data',
  'username' => 'accessingUser',
  'password' => 'UseGoodPasswordsIn2017&Beyond',
  'prefix' => '',
  'host' => '10.10.1.1',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
  // I'll explain this detail later.
  'pdo' => [
    \PDO::ATTR_TIMEOUT => 5,
  ],
];

Services

Next we need to create a new database service using the new connection information. Drupal core’s database service can be leveraged to connect to additional databases by just defining your new connection as a service in your module’s services.yml file.

services:
  mydataservice.database:
    class: Drupal\Core\Database\Connection
    factory: 'Drupal\Core\Database\Database::getConnection'
    arguments: ['default', 'remote']

Notice the arguments from the database service are the array keys (in reverse order) from the settings.php definition of the connection.

That’s all it takes to create a new service that wraps around your database.
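A quick way to confirm the new service connects – a sketch, with a hypothetical table name; the controller below shows the dependency injection approach actually used in the project:

<?php
// Grab the wrapped connection from the container and run a trivial query.
$connection = \Drupal::service('mydataservice.database');
$count = $connection->select('mytable', 'mt')
  ->countQuery()
  ->execute()
  ->fetchField();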

The Controller

That service is all well and good as far as it goes, but if we want to actually send the data to the browser we need a controller to leverage our new service and send the response.

Using dependency injection I attached the service to the controller and it looked like it worked great.

<?php
/**
 * Constructs a new DataSearchController object.
 */
public function __construct(ConfigFactory $config_factory, Connection $dataservice) {
  $this->config = $config_factory->get('mydataservice.datasettings');
  $this->database = $dataservice;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container) {
  return new static(
    $container->get('config.factory'),
    $container->get('mydataservice.database')
  );
}

public function getData(Request $request) {
  $query = $this->database->select('mytable', 'mt');
  // From here you gather your data and send your response...
}

In fact it did work flawlessly all the way through initial testing. Using the database service to get the data, and the cacheable JSON response technique I’d worked out previously, everything came together quickly. You could make a request of the controller with your search terms and the browser would get back a list of objects for display.
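For completeness, a rough sketch of that cacheable JSON response pattern, assuming $results holds the rows gathered from the query (the cache context is illustrative):

<?php
use Drupal\Core\Cache\CacheableJsonResponse;
use Drupal\Core\Cache\CacheableMetadata;

// Build a JSON response that Drupal's cache layers can store and vary correctly.
$response = new CacheableJsonResponse($results);
$metadata = new CacheableMetadata();
$metadata->addCacheContexts(['url.query_args']);
$response->addCacheableDependency($metadata);
return $response;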

Then just before launch the client physically relocated the database server and didn’t tell us it would be offline. Turns out we hadn’t tested what happens to an injected database service when there is no response from the remote database server. The request would wait for the database connection to time out and then throw an exception that didn’t get handled in my code at all. There was no place to add a nice error message and it was incredibly slow since the timeout was 30 seconds.

So at the last minute I had two more problems to solve: trap the error and shorten the timeout on PDO connections.

Drupal 8 database services attempt their connection when the service itself is created. If you use dependency injection, that means exceptions need to be caught in create(). But create() cannot send a response to the browser; that has to happen later, when the function that corresponds to the active route is called by the kernel.

My solution was to make the database service an optional parameter on the controller, and adjust the arguments passed to new static in create() based on whether an exception is thrown:

<?php
/**
 * Constructs a new DataSearchController object.
 */
public function __construct(ConfigFactory $config_factory, Connection $dataservice = null) {
  $this->config = $config_factory->get('mydataservice.datasettings');
  $this->database = $dataservice;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container) {
  try {
    return new static(
      $container->get('config.factory'),
      $container->get('mydataservice.database')
    );
  }
  catch (\Exception $e) {
    return new static(
      $container->get('config.factory')
    );
  }
}

public function getData(Request $request) {
  if (!$this->database) {
    throw new HttpException(404, $this->config->get('database_offline_message'));
  }

  $query = $this->database->select('mytable', 'mt');
  // From here you gather your data and send your response...
}

The other way to handle the exception would be to load the connection from Drupal’s service container when you need it in place of using dependency injection for that service.
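A sketch of that alternative, using the names from the examples above: the connection lookup moves into the route method, where the failure can be caught and turned into a response directly.

<?php
public function getData(Request $request) {
  try {
    // Look the connection up lazily instead of injecting it in create().
    $connection = \Drupal::service('mydataservice.database');
  }
  catch (\Exception $e) {
    throw new HttpException(404, $this->config->get('database_offline_message'));
  }

  $query = $connection->select('mytable', 'mt');
  // From here you gather your data and send your response...
}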

Also, notice we’re throwing a 404 error. Ideally it would return a 5xx type error, but those trigger other behaviors that prevented me from providing nice errors for the JavaScript application to process easily. Our controller also had a page display (to send the JavaScript libraries and base markup for the actual interface on application startup), which meant that we needed to create a reasonably well themed response in that function as well:

<?php
// If the database is offline, then send error message.
if (!$this->database) {
  \Drupal::service('page_cache_kill_switch')->trigger();
  $message = $this->config->get('database_offline_message');

  $error = [
    '#theme' => 'dataservice_error_page',
    '#attributes' => [
      'class' => ['dataservice', 'database-offline'],
      'id' => 'dataservice-error',
    ],
    '#message' => [
      '#type' => 'processed_text',
      '#text' => $message['value'],
      '#format' => $message['format'],
      '#filter_types_to_skip' => [],
    ],
    '#title' => $this->t('Database Offline'),
  ];

  return $error;
}

Settings revisited

So that fixed the errors, but still meant we had a really long wait for the database connection to time out before the connection error is even thrown in the first place. And now we come back to that PDO section of the database connection definition.

<?php
$databases['remote']['default'] = [
  'database' => 'extra_data',
  'username' => 'accessingUser',
  'password' => 'UseGoodPasswordsIn2017&Beyond',
  'prefix' => '',
  'host' => '255.255.255.255',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
  // Now is when I explain this
  'pdo' => [
    \PDO::ATTR_TIMEOUT => 5,
  ],
];

PDO expects you to override settings at connection time, not in a settings file. So I couldn’t just update my php.ini and call it a day, but I also couldn’t find any documentation on how to change any PDO settings. Leveraging the power of open source, I started to follow the code paths, actually reading through how Drupal establishes MySQL connections, and found it.

If you add a subarray in your connection definition keyed to ‘pdo’ Drupal will load and apply those settings instead of the defaults. There are a couple settings that Drupal insists on being right about for performance, stability, and security reasons, but timeout and many others are fair game.

We can do better

This week, for the second time in a year, the unacceptable behavior of a high-profile man in the Drupal community has been the topic of public discussion and debate. This time the organizations involved acted more clearly and rapidly, if imperfectly. The issue came to the forefront during the #metoo campaign, and again showed that the Drupal community reflects the world around us. I listened to friends and colleagues respond in various ways to the events and to the recognition of several men that they had been allowing this behavior to go on right in front of them for years without intervention. It is clear that in Drupal, like in all parts of our society, we can – and must – do better.

As I reflected on these discussions this weekend I read back through some posts from Danah Boyd I’d read a while ago and stashed with my list of ideas for blog posts. In particular I read her comments from last March on how failures to understand people’s hate fuel it, and then a piece I had initially missed from July on change in the tech community. Her experiences and perspective are worthy of a few minutes’ read in their own right, but this week her views seem particularly timely.

One of the things that struck me about Boyd’s piece from July was her clear simple ask:

…what I want from men in tech boils down to four Rs: Recognition. Repentance. Respect. Reparation.

To me the first two are painfully obvious, and most men who care about these issues have been working through those for a while now (too many men still need to learn to care at all). I count myself among the group of men who care, and this post is mostly directed at that group of peers.

More and more you will hear men acknowledge they believe women are telling the truth, recognizing there are more stories we don’t hear than we hear, and apologizing for their own actions or inactions in the past. But of course just believing people and saying sorry doesn’t get us very far. The next two on Boyd’s list are the places where real forward looking change comes from.

Respect should be easy, but too often it is the first place we get into trouble. It is the part of her call to action that will always be true no matter what future progress we make on these issues. Respect is an ongoing act requiring constant care, attention, and effort. Meaning to be respectful is not the same as actually being respectful. It requires actively listening to the ideas of women and people of color and considering them as fully as you do anyone else’s. It means tracking in yourself when you fail to listen and making the personal change required to do better going forward. It includes monitoring our own behavior in meetings, hallway interactions, and one-on-one discussions to make sure you understand how you are being perceived differently by different people – your friendly or silly gesture to one colleague could be insulting or threatening to another. Respect is not something special that women and people of color are suddenly asking for, it’s something that we all already knew we should be extending to all our colleagues but too often fail to show. And when we fail to show true respect for coworkers – regardless of why we failed or which demographic categories they fall into – it’s our responsibility to recognize it and repent.

Finally, Boyd also calls for Reparations. Reparations is a word that lots of us fear for no particularly defensible reason, since it’s just about attempting to undo some of the harm we’ve benefited from. And in this case her ask is so direct, plain, and frankly easy that I’m giving her the last words:

Every guy out there who wants to see tech thrive owes it to the field to actively seek out and mentor, support, fund, open doors for, and otherwise empower women and people of color. No excuses, no self-justifications, no sexualized bullshit. Just behavior change. Plain and simple. If our sector is about placing bets, let’s bet on a better world. And let’s solve for social equity.

Make sure you have the pictures your site requires

One of the challenges that organizations of all shapes and sizes frequently face is getting good pictures to go with their web site design. Good images can draw in your audience but a missing or poorly displayed image risks damaging your credibility.

Frequently organizations fall in love with a site design that includes excellent pictures on every page. Each story in the design has a wonderful supporting image to highlight a person, place, or topic. Landing pages may have main images with carefully placed text and overlays to help draw the eye and keep people’s attention. Those designs and images may be great, but only if you provide new images as fast as you create new pages (often faster).

For much of the time I worked for AFSC we were lucky to have an in-house photographer. Terry Foss traveled around the world to visit, get to know, and photograph AFSC’s programs, and he provided us with excellent pictures for nearly every area of our work. But even then we still had challenges getting the exact picture we wanted for the story we needed to tell the moment we wanted to tell it. In addition to his wonderful images we would dig in the archives, beg local program staff to send us more images, and sometimes we ended up taking a picture of an intern’s hand or resorting to other acts of desperation.

Web designers have to make many assumptions when they design a site, and one of the most important practical issues is the quality and quantity of art the client can provide. The more prominent and plentiful the image use, the more specific the questions have to get; sometimes down to the level of aspect ratios and staff image-editing skill. Good designers use this information to help make sure the images in their designs are the kinds of images you can routinely provide. You should know what those assumptions are and make sure you can follow through over time.

When you first launch a site this isn’t usually a problem. A lot of time will go into the stories and images that are included during the build and migration so the initially launched site should be at least as wonderful as the designs.

But as time passes the constant need for specific kinds of images may become a burden. You will soon realize there are places where images are required for technical or stylistic reasons. Some images will have text overlays that mean the subject needs to be on the right or the left. Some spaces will be very large or very small, which impacts what subjects work well. Some will be surrounded by a background color or pattern that may clash with an otherwise ideal photograph. The more you understand this while working with your designer the more you’ll love your new site.

Hand holding a pen
Here’s that image from the Wayback Machine. The image is owned by AFSC and was published under a creative commons license.

We had to photograph an intern’s hand because at the time our home page would break if every story didn’t have an image – it was a design and technical requirement. A few months after we launched we needed to post a statement that required home page placement, but the statement had no image to go with it. I had no budget to buy a stock image and the program had no appropriate picture we could use, so one of our team members grabbed his camera and an intern and quickly shot a few pictures of the young man’s hand holding a pen. We ended up using that hand image for several statements after that as well.

So here are a few quick guidelines for making sure your web site isn’t held back by a need for art you can’t consistently provide:

  1. Make sure you have a plan for getting new pictures. There are many ways to do this, so having a plan is more important than its details. It can be a staff photographer, freelance photographers, volunteer photographers, an organization-owned camera that staff use, or a budget for stock photographs. The best plans are usually some combination of all these pieces.
  2. Talk with the site designer and make sure they know what you are able to provide before they start the design. Will you be able to ensure pictures for every story, blog post, and event? Do you have editorial standards about the use of images of program participants, volunteers, and staff? Does your stock image budget allow you to fill any gaps, or can you create a picture of an intern writing a statement from thin air? Can you edit those pictures to work in a variety of spaces and sizes, or do you need to assume the pictures get uploaded more or less the way they come off your camera?
  3. When you start to see designs ask your web development partners lots of questions and document their answers. Ask about every image you see in the designs. For every single image you should know what size it needs to be, how it gets loaded and changed, if it is required or optional (and how things look if it’s not present), and if it requires some kind of preprocessing for an overlay or other visual effect. Make sure you understand the purpose and details of every image they show you.
  4. Try lots of variations when the site is still under development. Upload great images, terrible images, images that seem too big and too small, a sunset over a field, an ocean scene, a portrait of a baby, a team group shot, fluffy animals, flowers, and anything else you have around your computer. For each spot you can put a picture try each image and see what works and what doesn’t. No design can handle any image of any size and description equally well so make sure you understand, and can live with, the limits imposed by the design of your site.

Finally, make sure you actually follow the plan or fix it. Having a camera is only useful if someone is comfortable using it. Volunteers may not provide what you need in time to be useful or may forget to sign the releases your board requires. Budgets get cut or adjusted over time and may no longer really meet your needs. When the rubber meets the road your plan may not always work; don’t panic, just fix it. The reason you took all those notes about what you need is so that when you adjust your plan you will already know what it needs to provide to make your site successful.

Code Without Community is Dead

With the turmoil in the Drupal community this spring and summer I have seen a wave of calls for open source projects to judge their community members, and other contributors, by the code contributions alone. It’s an idea that sounds great and has been popular among developers for at least forty years. Eric Raymond, while writing about Drupal, described it this spring as a right developers deserve. In some circles it gets treated almost as religious doctrine: “Thou shalt judge by code alone.”

The problem is that it’s neither true nor just.

There may have been a time when it was an ethic that held communities together, but now it’s a line we pull out when we want to justify someone’s otherwise unacceptable behavior. Too often it’s a thinly veiled attempt to protect someone from the consequences of their own choices and actions. And too often it’s used at the expense of other community members and other would-be contributors.

For obvious reasons open source communities are dominated by developers. And as developers we have faith in our code. We like to believe our code doesn’t care who we are or what we believe. It will do exactly as asked every time. It forgives us all our sins as long as we faithfully fix any bugs and update to new versions of our platforms.

We like to claim that by focusing only on the code, and not the person behind it, we are creating a meritocracy and therefore better results. I don’t need to convince you I’m smart if I can show you my wonderful and plentiful code. We will be faithful to the best code and nothing more.

But none of this is proven to be true.

What does it profit, my brethren, if someone says he has faith but does not have works? Can faith save him? If a brother or sister is naked and destitute of daily food,  and one of you says to them, “Depart in peace, be warmed and filled,” but you do not give them the things which are needed for the body, what does it profit? Thus also faith by itself, if it does not have works, is dead.  James 2:14-17 NKJV

Any project worth doing could be done many different ways. One solution might solve the problem more quickly, another might be easier to maintain in the future, and yet another might be easier to extend to solve even more complex problems later. Which of the solutions is right will depend on the problem, the users, the developers, the timeline, and everyone’s guesses about future needs. There isn’t some objective measure of best. We figure out which solution we’re going to use by working together to sort through the competing ideas. Sometimes the best solution wins, sometimes the loudest voice wins, sometimes the most famous person wins.

When you say that code is the only valid way to judge a developer it tells people how you expect developers to work together. If I see that out of a project lead it tells me they expect me to put up with jerks and bullies as long as those people write good code. It says that women contributing code should expect to be belittled and degraded by colleagues, friends, and strangers as long as those people can solve a problem as well as or better than she can (and often even if they can’t do that).

It says these things because for decades that’s been the behavior of large numbers of developers who are still welcomed on projects and praised for their amazing code.

It’s a simplistic way to view people that encourages the bad behavior developers, particularly many famous developers, are known for. Instead of creating a true meritocracy it drives away smart and talented people who don’t want to deal with being called names, threatened, and abused in their work place or while volunteering their time and talents.

~~~~~~~~~~~~~~~~~~~~~~~~~~~

People who damage a community because they mistreat others in it hold everyone back. If you join a project and write a module that brilliantly solves a new problem, but drive three people off the project because you are unbearable to work with, your work now has to offset all of their potential contributions. You also have to offset the loss of productivity when people get distracted dealing with you. And you have to offset the reputation loss caused by your bad behavior. That better be an amazing block of code you wrote to do all those things, and you better be ready to do it again, and again, and again just to keep from slowing the project down.

Terrible developers can move a project forward. If someone comes into a project and writes a terrible module that stimulates someone else to solve that same problem better, they are making the project better even if none of their code is ever accepted. Or if someone takes time to make new participants feel welcome and excited to contribute, they may cause more code to be written than anyone else, even if they never write a line of code themselves.

And just because I can solve a problem faster and better than someone else doesn’t prove anything. If I start a project, or contribute for several years, I’m going to have more code credited to me than a new person. I’m going to know the project’s strengths and weaknesses better, and I’ll be able to solve problems more easily. Which means I should produce better code than another equally talented developer, or even someone smarter than me. I just have a head start on them – anyone can beat Usain Bolt in the 100-meter dash if you give them a big enough head start. If we judge just by code contributions, I can use my head start to make sure no one catches me. That’s not merit; that’s just gaming the system.

Developers are people. People are flawed, complicated, biased, and wonderful creatures. Our value to a project is the sum of all those parts we choose to pour into a community. It matters how we handle discussions in issue queues; debates with community members on Twitter, Slack, or IRC; the content of our blogs; how we act at conferences; and anything else that is going to affect how others experience our contributions.

If you don’t care for how your community functions, and if you don’t make sure people are treated well even when their code is terrible, your project will suffer and eventually may die. Sure, there are examples (big examples like the Linux kernel and OpenBSD) of projects that are doing just fine with communities that allow developers to treat each other terribly. The men leading those projects (if anyone has an example of a female-led project where vile behavior is tolerated let me know and I’ll edit this sentence) are happy with the communities they have built. But can you imagine what those projects could have achieved if they hadn’t driven highly talented developers away? Can you show any compelling evidence that those projects succeed because of the bad behavior and not despite it?

Communities will be flawed, complicated, biased, and wonderful, just like the people that make them up. But if you only focus on part of someone’s contributions to the community you may miss their value and the damage one person can cause.

Faith in your code, if it isn’t matched with works in your community, is dead.