A Process to create a Drupal 8 module’s Config

One of the best practices for Drupal 8 that is still emerging is how to create modules with complex deployable configuration. In the past we often abused the Features module to do this, and while that continues to be an option, Drupal 8’s vastly improved configuration management and the ability to install configuration easily have me looking for something better. In particular I want to build modules that don’t carry unnecessary dependencies but still reliably include all the configuration my project needs. After a few tries I think I’ve struck on an effective process.

Let’s start with a quick refresher on installing configuration for a Drupal 8 module. During module installation Drupal will load any YAML files in your module’s config/install directory that match configuration patterns it already knows about. In theory this is great, but if you want to include configuration that comes with other modules you have to figure out which files are needed, and if you want to include configuration from core modules you will probably need to track down a fairly large collection of files to get all the required elements. Finding all those files and copying them quickly and easily is the challenge I set out to solve.
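For a sense of scale, a module that ships a small moderation workflow might end up with a config/install directory along these lines (the file names here are purely illustrative, not a complete or guaranteed list):

fancy_workflow/
  fancy_workflow.info.yml
  config/
    install/
      workflows.workflow.editorial.yml
      user.role.editor.yml
      user.role.reviewer.yml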

My process starts with a local sandbox site that exists just to support this work, and I create a local git repository for the site’s configuration (I don’t need to connect it to a remote, like Bitbucket or GitHub, or track all of the site’s code, since it’s just there to help me find changes to config files). Once installation and any base configuration are complete, I export the site’s config to the directory covered by the repo (here I used d8_builder/config/sync; the site itself was at d8_builder/pub) and make sure all changes in the repository are committed.

Now I create my module and a second repository just for it. The module’s repository is linked to a remote since this is the actual product I’m creating.

With that plumbing in place I can make whatever configuration changes I need included in the module. Lately I’ve been creating a custom moderation workflow with several user roles and edge cases that will need to be deployed on a dozen or so sites, so you’ll see that reflected below, but this process should work for just about any project with lots of interrelated configuration.

Once I have completed a set of changes, I export the site’s configuration again, making sure to avoid UUIDs and hashes that will cause trouble on import:
drupal config:export --remove-uuid --remove-config-hash

Now git can easily show which configuration files were changed, added, or removed.

Next I use git, xargs, and cp to copy those files into the module (hat tip on this detail to Andy Gregorowicz):
git ls-files -om --exclude-standard --exclude=core.extension.yml | xargs -I{} cp "{}" pub/modules/custom/fancy_workflow/config/install/

Notice that I skip the core.extension.yml file. If your module has dependencies you’ll still need to update your module’s info.yml file to list them, as in the sketch below.
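For example, if the workflow module relied on core’s Workflows and Content Moderation modules, the info.yml might look something like this (a minimal sketch; the name and dependency list are assumptions for illustration):

name: Fancy Workflow
type: module
description: 'Deployable moderation workflow configuration.'
core: 8.x
dependencies:
  - drupal:workflows
  - drupal:content_moderation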

Now a quick commit and push of the changes to the module’s repo, and I’m ready to pull the module into other projects. I also commit the builder repo to ensure it’s easy to track any future changes.

This isn’t a replacement for tools like Configuration Installer, which are designed to handle an entire site; this is intended just for module development.

If you think you have a better solution, or that I’m missing something important, please let me know.

Preparing for your next crisis

Can your plan handle the bizarre?

Last winter Dries, the Drupal Association, and the whole Drupal community stumbled when concerns about a leading contributor’s potentially exploitive relationship got caught up in discussions of Gorean subculture and related sexual behaviors (warning: researching this topic will quickly lead you to NSFW information). The intriguing details drew in an ever-expanding audience but were actually irrelevant to the main concerns and the secondary ones that followed. The DA lost control of the message and the story, and the entire community suffered as a consequence. Last spring and summer I was asked to step in to help them regain control of the message and start to resolve the crisis. It wasn’t the first time I was part of a crisis response with unusual details, and it likely won’t be the last.

The first time I saw an organization respond to a threat to its reputation was my senior year at Hamilton College, when I was an intern in the Communications and Development Department. The morning of February 5th, 2001 the administrative assistant who spent every Monday morning scanning major publications for references to the college and its professors (this was before you could use Google Alerts and similar tools for that purpose) suddenly jumped up from her desk and ran into her boss’s office. A few moments later they both sprinted down the hall to the Vice President’s office. She had found an article in the New York Times Magazine titled The Cloning Mission; A Desire to Duplicate featuring then-chemistry professor Brigitte Boisselier, who – unbeknownst to the college – was moonlighting as the research director for Clonaid, trying to develop human cloning technology as part of her leadership of Raëlianism.

Yup, the first time I had a front row seat to an organization’s crisis was a college learning from the New York Times that they had a professor doing secret human cloning research for a group that believes aliens created humanity.

Within an hour they had a preliminary message prepared for any alumni who called – and they were calling – and got it to all alumni class presidents to share with any concerned classmates. By noon they had held meetings with all the needed decision makers, including the college president, the chemistry department chair, the deans, and the VP of Communications, to form a plan of action. By early afternoon that turned into a more formal statement that was recirculated to the class presidents and anyone else expressing concern to the school. Over the course of the semester, and the two years that followed (she continued to make world news after she left the college), they responded to repeated media inquiries – I was one of several interns in a mock audience for b-roll footage of a CBC piece on the topic – and even got her to engage in a public debate with an ethicist from the Philosophy department (which went poorly for her). In the end the concerned alumni were pleased with the college’s handling of the matter and the school’s reputation remained intact.

Eventually every organization will face a crisis that requires a public response. Depending on the nature of your organization you may already have a plan that handles the obvious situations: schools preparing for threats to their students, or political campaigns preparing for sexual harassment claims (at least they all should be prepared, regardless of what their candidate tells them). But in my experience the crisis that actually emerges usually isn’t what you expected, and it includes strange details that easily distract everyone from the main issue.

Hamilton did not have a plan titled: “What happens if one of our professors is caught leading human cloning research for an alien cult.”  What they had was a general plan for “What happens when it appears someone is going to make us look bad” that was quickly escalated to “what happens when it really is bad.” They knew who they needed to get into a meeting, and that allowed timely decision making. The communications team then had a basis to work quickly so they could get back in control of the story. The plan they had allowed them to brush aside the unimportant details – the involvement of Raëlianism was fun to talk about but didn’t change anything about the response – so they could focus on important details.

Often when faced with these kinds of strange details surrounding a crisis, the people who should be leading the response get distracted and start talking about those details. When it gets really weird they want to say “this is not my fault and not my problem” and ignore it. It doesn’t matter why it’s your problem, just that it is. Everyone will want to focus on the salacious details, but you have to focus on the important ones and lead your audience toward supporting your response.

These are my tips for how to plan for a crisis and building a response that allows you to stay focused:

  1. Know when to initiate your response. Develop a list of people and metrics to use to initiate a crisis response. It should include both some clear markers – e.g. a threat of violence against staff or constituents – and some fuzzier signs – e.g. anytime the ED/CEO or board chair says there is a crisis.
  2. Know who needs to be involved and how to find them. Part of what allowed Hamilton to move quickly was that they knew exactly who needed to be involved in the process. Likewise you should determine which staff, board members, volunteers, consultants, etc., need to be in the loop and how you reach them quickly. And know who gets cut out when – if your board chair is on a cruise and can’t be reached until next Tuesday, who do you call instead? If your director of communications is the problem, you’re probably better off not having them plan the response.
  3. Prepare different types of response. You may want to monitor the situation while saying nothing; you may want to target different messages to specific audiences (your board may get a different message than your donors); you may want to try to use press contacts or avoid them. You should think about how and when to use all your communications channels as part of your response.
  4. Have metrics for testing your plan before following it. In a time of stress it can be easy to overlook important details in your response. While planning write up a checklist to run through to make sure you remembered everything that’s important. It should at least include a reminder to think about everyone’s physical safety and your legal risk (in that order). It should also probably include contacts with important constituent groups (like Hamilton’s alumni class presidents) and any internal audiences so staff and volunteers aren’t surprised by public statements.
  5. Be prepared to share aggressively. Often organizations appear to be hiding information when they share it slowly, or when they say “this is all we can release” and then are pressured into releasing more. With any given statement release everything you can – sometimes with supporting materials if you need to – and then stop.
  6. Be prepared to shut up. This correlates with the previous tip. Sometimes part of a good response will be to be quiet. Usually this will be right before and after a major statement. It gives people a chance to process your response and avoids the sense that you are leaking information in dribs and drabs.
  7. Don’t over plan. Your next crisis will not look like what you planned for, so be prepared to change course from your very first move. If you lock in too many details you will likely make your situation worse by sounding tone-deaf.

When you have created your draft plan you should practice using it. Some of that practice should be very practical: what do you do when a man in leadership is accused of sexual harassment or abuse? Some of it should be like the CDC and FEMA zombie drills; even if you don’t use zombies, use a crisis that you think couldn’t possibly happen – if you hear someone in the office say “good thing that can’t happen here,” use it as your scenario. The first makes sure you are prepared for the kinds of things that are most likely to actually happen. The second makes sure you are prepared to think outside the box and handle a messy situation to which you thought you were immune. Your organization might not do any medical research at all, but what would happen if you discovered a board member was marrying the next director of Clonaid (including aliens is almost as fun as including zombies)? What would you tell your major donors?

A good crisis response comes from being prepared for the unexpected. If your plan is flexible enough to handle the utterly expected and adapt to a staff member cloning alien zombies, you can probably handle whatever actually comes your way.

Drupal 8: Remote Database Services

I recently completed a Drupal 8 project that required pulling data from a remote database. The actual data is not terribly complicated, so Drupal’s role in this case is mostly to provide an abstraction layer that converts the database into a (cacheable) JSON response. Pulling all the pieces together took a little more research and guessing than I expected, so I figured I might save a few people time by writing it up. This is more of an intermediate than a beginner project, so I’m going to skip over lots of details that are important to making it all really work. To really understand what’s happening here you’ll want a basic understanding of Drupal 8’s controllers and database services.

What we’re doing here is creating a database service and a controller to provide a JSON endpoint. We’ll define the database connection, the Drupal service, and then the controller.

Drupal allows us to define database connections in the main settings file. This allows easy access to Drupal’s database services and query classes.

Connection Definition

The first step is to define the database connection in settings.php:

<?php
$databases['remote']['default'] = [
  'database' => 'extra_data',
  'username' => 'accessingUser',
  'password' => 'UseGoodPasswordsIn2017&Beyond',
  'prefix' => '',
  'host' => '10.10.1.1',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
  // I'll explain this detail later.
  'pdo' => [
    \PDO::ATTR_TIMEOUT => 5,
  ],
];

Services

Next we need to create a new database service using the new connection information. Drupal core’s database service can be leveraged to connect to additional databases by just defining your new connection as a service in your module’s services.yml file.

services:
  mydataservice.database:
    class: Drupal\Core\Database\Connection
    factory: 'Drupal\Core\Database\Database::getConnection'
    arguments: ['default', 'remote']

Notice that the arguments for the database service are the array keys (in reverse order) from the settings.php definition of the connection.

That’s all it takes to create a new service that wraps around your database.

The Controller

That service is all well and good as far as it goes, but if we want to actually send the data to the browser we need a controller to leverage our new service and send the response.

Using dependency injection I attached the service to the controller and it looked like it worked great.

<?php
/**
 * Constructs a new DataSearchController object.
 */
public function __construct(ConfigFactory $config_factory, Connection $dataservice) {
  $this->config = $config_factory->get('mydataservice.datasettings');
  $this->database = $dataservice;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container) {
  return new static(
    $container->get('config.factory'),
    $container->get('mydataservice.database')
  );
}

public function getData(Request $request) {
  $query = $this->database->select('mytable', 'mt');
  // From here you gather your data and send your response...
}

In fact it did work flawlessly all the way through initial testing. Using the database service to get the data and the cacheable JSON response technique I’d worked out previously, everything came together quickly. You could make a request of the controller with your search terms and the browser would get back a list of objects for display.

Then just before launch the client physically relocated the database server and didn’t tell us it would be offline. Turns out we hadn’t tested what happens to an injected database service when there is no response from the remote database server. The request would wait for the database connection to time out and then throw an exception that didn’t get handled in my code at all. There was no place to add a nice error message and it was incredibly slow since the timeout was 30 seconds.

So at the last minute I had two more problems to solve: trap the error and shorten the timeout on PDO connections.

Drupal 8 database services attempt their connection when the service itself is created. If you use dependency injection, that means exceptions need to be caught in create(). But create() cannot send a response to the browser; that has to happen later, when the function that corresponds to the active route is called by the kernel.

My solution was to make the database service an optional parameter on the controller, and adjust the object returned by create() based on whether an exception is thrown:

<?php
/**
 * Constructs a new DataSearchController object.
 */
public function __construct(ConfigFactory $config_factory, Connection $dataservice = NULL) {
  $this->config = $config_factory->get('mydataservice.datasettings');
  $this->database = $dataservice;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container) {
  try {
    return new static(
      $container->get('config.factory'),
      $container->get('mydataservice.database')
    );
  }
  catch (\Exception $e) {
    return new static(
      $container->get('config.factory')
    );
  }
}

public function getData(Request $request) {
  if (!$this->database) {
    throw new HttpException(404, $this->config->get('database_offline_message'));
  }

  $query = $this->database->select('mytable', 'mt');
  // From here you gather your data and send your response...
}

The other way to handle the exception would be to load the connection from Drupal’s service container when you need it, instead of using dependency injection for that service.
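A minimal sketch of that alternative, using the same hypothetical service name as above and catching the failure at request time instead of in create(), might look something like this:

<?php
public function getData(Request $request) {
  try {
    // Ask the container for the connection only when this route is hit.
    $database = \Drupal::service('mydataservice.database');
  }
  catch (\Exception $e) {
    // The remote database is unreachable; fall back to the friendly message.
    throw new HttpException(404, $this->config->get('database_offline_message'));
  }

  $query = $database->select('mytable', 'mt');
  // From here you gather your data and send your response...
}

The trade-off is that you give up the testability benefits of dependency injection in exchange for deferring the connection attempt until you actually need it.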

Also, notice we’re throwing a 404 error. Ideally it would return a 5xx type error, but those trigger other behaviors that prevented me from providing nice errors for the JavaScript application to process easily. Our controller also had a page display (to send the JavaScript libraries and base markup for the actual interface on application startup), which meant that we needed to create a reasonably well themed response in that function as well:

<?php
// If the database is offline, then send error message.
if (!$this->database) {
  \Drupal::service('page_cache_kill_switch')->trigger();
  $message = $this->config->get('database_offline_message');

  $error = [
    '#theme' => 'dataservice_error_page',
    '#attributes' => [
      'class' => ['dataservice', 'database-offline'],
      'id' => 'dataservice-error',
    ],
    '#message' => [
      '#type' => 'processed_text',
      '#text' => $message['value'],
      '#format' => $message['format'],
      '#filter_types_to_skip' => [],
    ],
    '#title' => $this->t('Database Offline'),
  ];

  return $error;
}

Settings revisited

So that fixed the errors, but it still meant a really long wait for the database connection to time out before the connection error was even thrown in the first place. And now we come back to that PDO section of the database connection definition.

<?php
$databases['remote']['default'] = [
  'database' => 'extra_data',
  'username' => 'accessingUser',
  'password' => 'UseGoodPasswordsIn2017&Beyond',
  'prefix' => '',
  'host' => '255.255.255.255',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
  // Now is when I explain this.
  'pdo' => [
    \PDO::ATTR_TIMEOUT => 5,
  ],
];

PDO expects you to override settings at connection time, not in a settings file. So I couldn’t just update my php.ini and call it a day, but I also couldn’t find any documentation on how to change any PDO settings. Leveraging the power of open source I started to follow the code paths, actually reading through how Drupal establishes MySQL connections, and found it.

If you add a subarray to your connection definition keyed to ‘pdo’, Drupal will load and apply those settings instead of the defaults. There are a couple of settings that Drupal insists on being right about for performance, stability, and security reasons, but the timeout and many others are fair game.

We can do better

This week, for the second time in a year, the unacceptable behavior of a high-profile man in the Drupal community has been the topic of public discussion and debate. This time the organizations involved acted more clearly and rapidly, if imperfectly. The issue came to the forefront during the #metoo campaign, and again showed that the Drupal community reflects the world around us. I listened to friends and colleagues respond in various ways to the events and to the recognition by several men that they had been allowing this behavior to go on right in front of them for years without intervention. It is clear that in Drupal, as in all parts of our society, we can – and must – do better.

As I reflected on these discussions this weekend I read back through some posts from Danah Boyd I’d read a while ago and stashed with my list of ideas for blog posts. In particular I read her comments from last March on how failures to understand people’s hate fuel it, and then a piece I had initially missed from July on change in the tech community. Her experiences and perspective are worth a few minutes of reading in their own right, but this week her views seem particularly timely.

One of the things that struck me about Boyd’s piece from July was her clear simple ask:

…what I want from men in tech boils down to four Rs: Recognition. Repentance. Respect. Reparation.

To me the first two are painfully obvious, and most men who care about these issues have been working through them for a while now (too many men still need to learn to care at all). I count myself among the group of men who care, and this post is mostly directed at that group of peers.

More and more you will hear men acknowledge they believe women are telling the truth, recognize there are more stories we don’t hear than ones we do, and apologize for their own actions or inactions in the past. But of course just believing people and saying sorry doesn’t get us very far. The next two items on Boyd’s list are where real forward-looking change comes from.

Respect should be easy, but too often it is the first place we get into trouble. It is the part of her call to action that will always be true no matter what future progress we make on these issues. Respect is an ongoing act requiring constant care, attention, and effort. Meaning to be respectful is not the same as actually being respectful. It requires actively listening to the ideas of women and people of color and considering them as fully as you do anyone else’s. It means noticing when you fail to listen and making the personal change required to do better going forward. It includes monitoring our own behavior in meetings, hallway interactions, and one-on-one discussions to make sure we understand how we are being perceived by different people – your friendly or silly gesture to one colleague could be insulting or threatening to another. Respect is not something special that women and people of color are suddenly asking for; it’s something we all already knew we should be extending to all our colleagues but too often fail to show. And when we fail to show true respect for coworkers – regardless of why we failed or which demographic categories they fall into – it’s our responsibility to recognize it and repent.

Finally, Boyd also calls for Reparation. Reparation is a word many of us fear for no particularly defensible reason, since it’s just about attempting to undo some of the harm we’ve benefited from. And in this case her ask is so direct, plain, and frankly easy that I’m giving her the last words:

Every guy out there who wants to see tech thrive owes it to the field to actively seek out and mentor, support, fund, open doors for, and otherwise empower women and people of color. No excuses, no self-justifications, no sexualized bullshit. Just behavior change. Plain and simple. If our sector is about placing bets, let’s bet on a better world. And let’s solve for social equity.

Make sure you have the pictures your site requires

One of the challenges that organizations of all shapes and sizes frequently face is getting good pictures to go with their web site design. Good images can draw in your audience but a missing or poorly displayed image risks damaging your credibility.

Frequently organizations fall in love with a site design that includes excellent pictures on every page. Each story in the design has a wonderful supporting image to highlight a person, place, or topic. Landing pages may have main images with carefully placed text and overlays to help draw the eye and keep people’s attention. Those designs and images may be great, but only if you provide new images as fast as you create new pages (often faster).

For much of the time I worked for AFSC we were lucky to have an in-house photographer. Terry Foss traveled around the world to visit, get to know, and photograph AFSC’s programs, and he provided us with excellent pictures for nearly every area of our work. But even then we still had challenges getting the exact picture we wanted for the story we needed to tell the moment we wanted to tell it. In addition to his wonderful images we would dig in the archives, beg local program staff to send us more images, and sometimes end up taking a picture of an intern’s hand or other acts of desperation.

Web designers have to make many assumptions when they design a site, and one of the most important practical issues is the quality and quantity of art the client can provide. The more prominent and plentiful the image use, the more specific the questions have to get; sometimes down to the level of aspect ratios and staff image-editing skill. Good designers use this information to help make sure the images in their designs are the kinds of images you can routinely provide. You should know what those assumptions are and make sure you can follow through over time.

When you first launch a site this isn’t usually a problem. A lot of time will go into the stories and images that are included during the build and migration so the initially launched site should be at least as wonderful as the designs.

But as time passes the constant need for specific kinds of images may become a burden. You will soon realize there are places where images are required for technical or stylistic reasons. Some images will have text overlays that mean the subject needs to be on the right or the left. Some spaces will be very large or very small, which impacts what subjects work well. Some will be surrounded by a background color or pattern that may clash with an otherwise ideal photograph. The more you understand this while working with your designer, the more you’ll love your new site.

Hand holding a pen
Here’s that image from the Wayback Machine. The image is owned by AFSC and was published under a creative commons license.

We had to photograph an intern’s hand because at the time our home page would break if every story didn’t have an image – it was a design and technical requirement. A few months after we launched we needed to post a statement that required home page placement, but the statement had no image to go with it. I had no budget to buy a stock image and the program had no appropriate picture we could use, so one of our team members grabbed his camera and an intern and quickly shot a few pictures of the young man’s hand holding a pen. We ended up using that hand image for several statements after that as well.

So here are a few quick guidelines for making sure your web site isn’t held back by a need for art you can’t consistently provide:

  1. Make sure you have a plan for getting new pictures. There are many ways to do this, so having a plan is more important than its details. It can be a staff photographer, freelance photographers, volunteer photographers, an organization-owned camera that staff use, or a budget for stock photographs. The best plans are usually some combination of all these pieces.
  2. Talk with the site designer and make sure they know what you are able to provide before they start the design. Will you be able to ensure pictures for every story, blog post, and event? Do you have editorial standards about the use of images of program participants, volunteers, and staff? Does your stock image budget allow you to fill any gaps, or can you create a picture of an intern writing a statement from thin air? Can you edit those pictures to work in a variety of spaces and sizes, or do you need to assume the pictures get uploaded more or less the way they come off your camera?
  3. When you start to see designs ask your web development partners lots of questions and document their answers. Ask about every image you see in the designs. For every single image you should know what size it needs to be, how it gets loaded and changed, whether it is required or optional (and how things look if it’s not present), and whether it requires some kind of preprocessing for an overlay or other visual effect. Make sure you understand the purpose and details of every image they show you.
  4. Try lots of variations when the site is still under development. Upload great images, terrible images, images that seem too big and too small, a sunset over a field, an ocean scene, a portrait of a baby, a team group shot, fluffy animals, flowers, and anything else you have around your computer. For each spot you can put a picture try each image and see what works and what doesn’t. No design can handle any image of any size and description equally well so make sure you understand, and can live with, the limits imposed by the design of your site.

Finally, make sure you actually follow the plan or fix it. Having a camera is only useful if someone is comfortable using it. Volunteers may not provide what you need in time to be useful, or may forget to sign the releases your board requires. Budgets get cut or adjusted over time and may no longer really meet your needs. When the rubber meets the road your plan may not always work; don’t panic, just fix it. The reason you took all those notes about what you need is so that when you adjust your plan you will already know what it needs to provide to make your site successful.

Code Without Community is Dead

With the turmoil in the Drupal community this spring and summer I have seen a wave of calls for open source projects to judge their community members, and other contributors, by their code contributions alone. It’s an idea that sounds great and has been popular among developers for at least forty years. Eric Raymond, while writing about Drupal, described it this spring as a right developers deserve. In some circles it gets treated almost as religious doctrine: “Thou shalt judge by code alone.”

The problem is that it’s neither true nor just.

There may have been a time when it was an ethic that held communities together, but now it’s a line we pull out when we want to justify someone’s otherwise unacceptable behavior. Too often it’s a thinly veiled attempt to protect someone from the consequences of their own choices and actions. And too often it’s used at the expense of other community members and other would-be contributors.

For obvious reasons open source communities are dominated by developers. And as developers we have faith in our code. We like to believe our code doesn’t care who we are or what we believe. It will do exactly as asked every time. It forgives us all our sins as long as we faithfully fix any bugs and update to new versions of our platforms.

We like to claim that by focusing only on the code, and not the person behind it, we are creating a meritocracy and therefore producing better results. I don’t need to convince you I’m smart if I can show you my wonderful and plentiful code. We will be faithful to the best code and nothing more.

But none of this is proven to be true.

What does it profit, my brethren, if someone says he has faith but does not have works? Can faith save him? If a brother or sister is naked and destitute of daily food,  and one of you says to them, “Depart in peace, be warmed and filled,” but you do not give them the things which are needed for the body, what does it profit? Thus also faith by itself, if it does not have works, is dead.  James 2:14-17 NKJV

Any project worth doing could be done many different ways. One solution might solve the problem more quickly, another might be easier to maintain in the future, and yet another might be easier to extend to solve even more complex problems later. Which of the solutions is right will depend on the problem, the users, the developers, the timeline, and everyone’s guesses about future needs. There isn’t some objective measure of best. We figure out which solution we’re going to use by working together to sort through the competing ideas. Sometimes the best solution wins, sometimes the loudest voice wins, sometimes the most famous person wins.

When you say that code is the only valid way to judge a developer, it tells people how you expect developers to work together. If I see that from a project lead it tells me they expect me to put up with jerks and bullies as long as those people write good code. It says that women contributing code should expect to be belittled and degraded by colleagues, friends, and strangers as long as those people can solve a problem as well or better than she can (and often even if they can’t).

It says these things because for decades that’s been the behavior of large numbers of developers who are still welcomed on projects and praised for their amazing code.

It’s a simplistic way to view people that encourages the bad behavior developers, particularly many famous developers, are known for. Instead of creating a true meritocracy it drives away smart and talented people who don’t want to deal with being called names, threatened, and abused in their work place or while volunteering their time and talents.

~~~~~~~~~~~~~~~~~~~~~~~~~~~

People who damage a community because they mistreat others in it hold everyone back. If you join a project and write a module that brilliantly solves a new problem, but drive three people off the project because you are unbearable to work with, your work now has to offset all of their potential contributions. You also have to offset the loss of productivity when people get distracted dealing with you. And you have to offset the reputation loss caused by your bad behavior. That had better be an amazing block of code you wrote to do all those things, and you had better be ready to do it again, and again, and again just to keep from slowing the project down.

Terrible developers can move a project forward. If someone comes into a project and writes a terrible module that stimulates someone else to solve that same problem better, they are making the project better even if none of their code is ever accepted. Or if someone takes time to make new participants feel welcome and excited to contribute, they may cause more code to be written than anyone else, even if they never write a line of code themselves.

And just because I can solve a problem faster and better than someone else doesn’t prove anything. If I start a project, or contribute for several years, I’m going to have more code credited to me than a new person. I’m going to know the project’s strengths and weaknesses better, and I’ll be able to solve problems more easily. Which means I should produce better code than another equally talented developer, or even someone smarter than me. I just have a head start on them – anyone can beat Usain Bolt in the 100 meter dash if you give them a big enough head start. If we judge just by code contributions, I can use my head start to make sure no one catches me. That’s not merit, that’s just gaming a system.

Developers are people. People are flawed, complicated, biased, and wonderful creatures. Our value to a project is the sum of all those parts we choose to pour into a community. It matters how we handle discussions in issue queues; debates with community members on Twitter, Slack, or IRC; the content of our blogs; how we act at conferences; and anything else that is going to affect how others experience our contributions.

If you don’t take care of how your community functions, and if you don’t make sure people are treated well even when their code is terrible, your project will suffer and eventually may die. Sure, there are examples (big examples like the Linux kernel and OpenBSD) of projects that are doing just fine with communities that allow developers to treat each other terribly. The men leading those projects (if anyone has an example of a female-led project where vile behavior is tolerated let me know and I’ll edit this sentence) are happy with the communities they have built. But can you imagine what those projects could have achieved if they hadn’t driven highly talented developers away? Can you show any compelling evidence that those projects succeed because of the bad behavior and not despite it?

Communities will be flawed, complicated, biased, and wonderful, just like the people that make them up. But if you only focus on part of someone’s contributions to the community you may miss their value and the damage one person can cause.

Faith in your code, if it isn’t matched with works in your community, is dead.

Controlling Block Visibility with a Custom Field in Drupal 8 (updated for 9)

A while back I wrote up a pattern for creating static blocks on Drupal 8 sites. This week I was working on a site where one of those blocks needs to be enabled or disabled on specific nodes at the discretion of the content author. To make this happen, I’m adding a new feature to my pattern.

In older versions of Drupal there were a number of ways to go, like the PHP Filter, or custom handling in the block’s view hook, but I figured there were probably more appropriate tools for this in Drupal 8.  And I found what I needed in the Condition Plugin (more evidence that plugins are addictive). According to the change record they were designed to centralize a lot of the common logic used for controlling blocks, and I found it works quite nicely in this case as well (although a more generalized version might be useful).

I have the complete condition plugin at the end so you don’t have to get all the details exactly right as we go.

I started by adding a boolean field to the content type named field_enable_sidebar. Then, using Drupal Console, I generated the stub condition plugin: drupal generate:plugin:condition. In doing this the first time I also looked at the condition defined in the core Node module to handle block visibility by content type.

The console will ask you a couple of questions; obviously you can attach it to any module you’d like and call it whatever you’d like. For this example I have it in a fake module called my_blocks and the condition is named SidebarCondition. But the next couple of questions are less obvious and more important.

Context Type should be entity

The context type should be set to entity since we are looking to work based on the node being displayed.

Context Entity Type should be Content

Next it’s trying to filter between entity types, and since we’re doing this based on the node content entity type, select “Content” to get a list of content entities on your site.

Context Definition ID should be Content

Finally select “Content” again since that’s the label for node entities in Drupal 8. If you have your field on another content entity type (like a taxonomy term or a file), pick that entity here instead and the rest of this should still work with minor editing.

Once you run through the wizard you’ll have a new file in your module: my_blocks/src/Plugin/Condition/SidebarCondition.php

The condition plugin contains two main elements: a form that’s attached to all block settings forms, and an evaluate function that is called by blocks to determine if this condition applies in their current context.

buildConfigurationForm() defines the form array elements you need. In this case that means a simple checkbox to indicate that this condition applies to this block. We also need to define submitConfigurationForm() to save the values on block save.

public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
  $form['sidebarActive'] = [
    '#type'          => 'checkbox',
    '#title'         => $this->t('When Sidebar Field Active'),
    '#default_value' => $this->configuration['sidebarActive'],
    '#description'   => $this->t('Enable this block when the sidebar field on the node is active.'),
  ];
  return parent::buildConfigurationForm($form, $form_state);
}

public function submitConfigurationForm(array &$form, FormStateInterface $form_state) {
  $this->configuration['sidebarActive'] = $form_state->getValue('sidebarActive');
  parent::submitConfigurationForm($form, $form_state);
}

In the complete example you’ll see there is also a summary() method, which provides the human-friendly description of the values that have been set for this condition.
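That method isn’t shown here, but a bare-bones version could be as simple as:

/**
 * {@inheritdoc}
 */
public function summary() {
  return $this->t('Show this block when the sidebar field is enabled on the node.');
}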

Now let’s jump back to the top of the plugin and review the annotation. Conditions are annotated plugins and those questions I guided you through above were used to generate the annotation at the start of the file:

/**
* Provides the 'Sidebar condition' condition.
*
* @Condition(
*   id = "sidebar_condition",
*   label = @Translation("Sidebar block condition"),
*   context_definitions = {
*     "node" = @ContextDefinition(
*        "entity:node",
*        required = TRUE,
*        label = @Translation("node")
*     )
*   }
* )
*/

This is defining the context you’ll want passed to the condition for evaluation. In this case we are requiring that a node entity labeled “node” is provided when we need it.

The real work of the plugin is handled by evaluate():

/**
 * Evaluates the condition and returns TRUE or FALSE accordingly.
 *
 * @return bool
 *   TRUE if the condition has been met, FALSE otherwise.
 */
public function evaluate() {
  if (empty($this->configuration['sidebarActive']) && !$this->isNegated()) {
    return TRUE;
  }
  $node = $this->getContextValue('node');
  if ($node->hasField('field_enable_sidebar') && $node->field_enable_sidebar->value) {
    return TRUE;
  }
  return FALSE;
}

The first conditional ensures that this plugin doesn’t disable all the blocks that aren’t using it. Next we ask for the node context value we defined in the annotation, which provides us with the node currently being displayed. Since not all node types are guaranteed to have our sidebar field, we first check that it exists, then check its value and return the correct status for block display.

Now every time a user checks a box on the node, any blocks with this condition enabled will be displayed along with the node. And the best part is that the user doesn’t even need block display permissions; we’ve allowed them to bypass that part of the system entirely.

Time estimation: making up numbers as we go along.

Any experienced developer, and anyone who has worked with developers, knows that we’re terrible at estimating project times. There are mountains of blog posts telling developers how to do estimates (spoiler alert: they are wrong), and at least as many telling project managers not to rely on the bad estimates from developers. Most of the honest advice doesn’t actually help you develop a number; it helps you develop strategies to make a slightly better guess.

Any time I start to work with a new project manager on time estimates I try to make sure they understand any estimate is – at best – an educated guess, not a promise. I’ve learned to give ranges to imply inaccuracy and to round up to offset my bias as a developer to underestimate (I recently noticed I’m frequently doing that too well and badly overestimating, but that’s another story). However, that still led to greater faith in the estimates than they deserve.

A few months ago I was asked about this topic by a program manager I really enjoy working with, who was trying to work with me to find a better solution for our projects together. One of the articles I sent her was an old one from Joel Spolsky written in 2007. Re-reading the article I was again drawn to his discussion of using Monte Carlo simulations to help come up with estimates of project duration. While he argues it helps increase accuracy, I mostly think it helps emphasize their lack of accuracy.

And since I’m a developer, I wrote a simple tool to create project estimates that simulates how long a list of tasks might take (code on GitHub, and pull requests are welcome). It’s nothing fancy, just a simple JavaScript tool that allows you to enter some tasks and estimates (or upload a CSV file) and run the simulation as many times as you’d like.

Currently its purpose is more to encourage people to understand risk levels and ranges than to provide a figure to hang your hat on. Since estimates are bad, the tool is inherently garbage in, garbage out. But I’m finding it helpful in explaining to PMs the fuzziness of the estimates. By showing a range of outcomes – including some that are very high (it assumes that your high-end estimate could be as low as ⅓ the total time needed on a task) – and providing a simple visualization of the data, it helps make it clear that estimates can be wrong, and that accumulated error can blow a budget.
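The real tool is JavaScript and lives in the GitHub repo, but the core idea fits in a few lines. Here’s a rough sketch in PHP with made-up numbers, assuming a simple uniform spread between the low estimate and three times the high estimate (the actual tool’s distribution may differ):

<?php
// Each task gets a low and high estimate in hours; the values are invented.
$tasks = [
  'Content types' => ['low' => 2, 'high' => 5],
  'Migration'     => ['low' => 8, 'high' => 20],
  'Theming'       => ['low' => 5, 'high' => 12],
];

$runs = 10000;
$totals = [];
for ($i = 0; $i < $runs; $i++) {
  $total = 0;
  foreach ($tasks as $estimate) {
    // Assume the task could take anywhere from the low estimate up to
    // three times the high estimate.
    $total += mt_rand($estimate['low'] * 100, $estimate['high'] * 300) / 100;
  }
  $totals[] = $total;
}

sort($totals);
// Report a range of outcomes rather than a single number.
foreach ([50, 75, 90] as $percentile) {
  $value = $totals[(int) ($runs * $percentile / 100) - 1];
  printf("%d%% of simulated runs finished in %.1f hours or less.\n", $percentile, $value);
}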

Histogram of time estimates.
This is the output from a recent set of estimates I was asked for; hopefully it’ll be good news.

Please take some time to play around with the tool and let me know what you think. It’s extremely rough at the moment, but if people find it useful I could polish some edges and add features.

Writing Good Directions

Last fall I wrote about the importance of writing good documentation, and part of writing good documentation includes writing good directions. I have a pet peeve when it comes to poorly written instructions of any kind, but unfortunately I’m still learning to do it well myself.

Writing directions can be thankless: you know you provided good directions when people use them and never complain about them. If you write bad directions everyone who gets stuck complains about your work – and usually not nicely because you left them frustrated.

A greyhound wearing a vest labeled donations
No one ever asks where to put money when Leo wears his donations vest.
Good directions, like good recipes, cover all the steps you need to follow, are easy to read at a glance, and provide extra details to help you stay on track. If I’d written up my Drupal Cake recipe as a large block of text without formatting no one would even recognize it as a recipe let alone be able to follow the steps.

Sign with an arrow pointing left labeled Cake and one pointing right labeled No Cake
You don’t have to ask to know where to get the cake.

Over the last few months at work we’ve been updating our development workflow. It started with a large project to migrate our code repositories to Bitbucket and move all support clients onto new testing infrastructure. With a large number of support clients, we had lots of updates to do, so we shared the work among all the developers. I did the first few conversions and then wrote up a set of directions for other developers to use. The first few times other people walked through them I got corrections, complaints, and updates, and then after a few edits there was silence. Every couple of days I noticed another batch of clients got migrated without anyone asking me questions. The directions got to be good because no one struggled with them after the first few corrections. I didn’t get, or expect, compliments on them, but they achieved their purpose.

Switch for a microwave labeled No fish, no curry.
This is from a hotel, but every office should probably have one on their microwave. I doubt the person who created those labels hears about them much unless someone broke the rule and microwaved fish in their room.

It’s easy to complain about directions, but it’s hard to do them right. There is another set of directions at work that I know are bad, because everyone complained about them and then gave up on the process they explain. I need to try again, but frankly it’s hard to get up the motivation to replace the current silence with either new silence (if I succeed) or complaints (if I fail).

Usually when I’m writing up directions the outcome doesn’t matter much. If your Drupal Cake isn’t the shade of blue you were hoping for, or my colleagues have to ask a couple extra questions while migrating site configuration, the world will not end. But there are people who have to write important directions that can save or cost lives.

Even if your directions aren’t signs that hopefully save lives, it is worth trying to do them right. I’ve already admitted I’m still working on getting this right but here are a few things that help me.

  1. Write down the steps as you do the task. Include pictures or screenshots when they are helpful.
  2. Do the task again following your directions to the letter.
  3. As you edit them (because you will find mistakes) add tips about what happened when you made mistakes during your previous attempt to help people know they are off course and how to recover.
  4. Repeat 2 and 3 until you are sure you have removed the largest errors.
  5. Watch someone else follow your directions and see where they get confused – if the task is complex they will get confused and that’s okay for now. Ideally this person should have a different experience level than you do.
  6. Edit again based on the problems that person ran into.
  7. Repeat 5 and 6 until you run out of colleagues willing to help you or you stop finding major errors.
  8. Release generally, and wait for the complaints.

This of course is an ideal. It’s what I did for the migration instructions, but not what I do most of the time. Rarely do we have the time to really work through a process like that and edit more than once or twice. You can shortcut this process some by limiting the number of edits, but if you don’t edit at all you should expect people to complain to the point of giving up.

One last thing. I’ve often been told the first part of writing good instructions is mastering the process. I disagree with that advice pretty strongly. Most of the time I find that beginners write better instructions since they are paying attention to more details. Once I master a topic I skip steps because they are obvious to me, but not to people who need the instructions. That’s why for step five you want someone of a different experience level: someone more junior to help make sure you didn’t forget to include the obvious, and someone more senior to point out where you made mistakes.

Welcome Piscataway Students. I noticed the start of the annual bump in traffic from the Piscataway LMS and figured I’d say hello. A few years ago one of the teachers started to assign this article, so I see a bump in traffic here when they post the assignment. Tell them I said thank you, and that I’d be interested in hearing from you about how this article gets used in your classes. I hope this year goes well for all of you.

Fixing the Expert Beginner

I’ve been reading Erik Dietrich’s blog a bunch recently, particularly two pieces he wrote last fall on how developers learn. They are excellent. I recommend them to anyone who thinks they are an expert, particularly if you are just starting out.

Failed Sticky bun
I am a good baker but this attempt at a giant sticky bun failed because I am not an expert.

In short, he describes a category of developer he calls the Expert Beginner. These are people who rose to prominence in their company or community due more to a lack of local competition than to raw skill. Developers who think they are great because they are good but have no real benchmarks to compare themselves to and no one calling them out for doing things poorly. These developers not only fail to do good work, they hold back teams because they discourage people from trying new paths they don’t understand.

I have one big problem with his argument: he treats the Expert Beginner as a finished state. He doesn’t provide a path out of that condition for anyone who has realized they are an expert beginner, or who is working with someone who needs help getting back on track to becoming an actual expert.

That bothers me because I realized that at times I have been, or certainly been close to being, such an expert beginner. When I was first at AFSC, I was the only one doing web development and I lacked feedback from my colleagues about the technical quality of my work. If I said something could be good, it was good, because no one could measure it. If I said the code was secure, it was secure, because no one knew how to attack it. Fortunately for me, even before we’d developed our slogan about making new mistakes, I had come to realize that I had no idea what I was doing. Attackers certainly could find the security weaknesses even if I couldn’t.

I was talking to a friend about this recently, and I realized part of how we avoided mediocrity at the time was that we were externally focused for our benchmarks.  The Iraq Peace Pledge gathered tens of thousands of names and email addresses: a huge number for us. But as we were sweating to build that list, MoveOn ran out and got a million. We weren’t playing on the right order of magnitude to keep pace, and it helped humble us.

I think expert beginnerism is a curable condition. It requires three things: mentoring, training, and pressure to get better.

Being an expert beginner is a mindset, and mindsets can be changed. Dietrich is right that it is a toxic mindset that can cause problems for the whole company, but change is possible. If it’s you, a colleague, or a supervisee you have noticed being afflicted with expert beginnerism, the good news is it is fixable.

Step 1: Make sure the expert beginner has a mentor.

Everyone needs a mentor, expert beginners need one more than everyone else. A good mentor challenges us to step out of the comfort zone of what we know and see how much there is we don’t understand. Good mentors have our backs when we make mistakes and help us learn to advocate for ourselves.

When I was first in the working world I had an excellent boss who provided me mentoring and guidance because he considered it a fundamental part of his job. AFSC’s former IT Director, Bob Goodman, was an excellent mentor who taught me a great deal (even if I didn’t always admit it at the time). Currently no one person in my life can serve as the central point of reference that he did, but I still need mentoring. So I maintain relationships with several people who have experiences they are willing to share with me. Some of these people own companies, some are developers at other shops, some are at Cyberwoven, and probably none think I look at them as mentors.

I also try to make myself available to my junior colleagues as a mentor whenever I can. I offer advice about programming, careers, and on any other topic they raise. Mentoring is a skill I’m still learning – likely always will be. At times I find it easy, at times it’s hard. But I consider it critical that my workplace has mentors for our junior developers so they continue progress toward excellence.

Step 2: Make sure everyone gets training.

Programming is too big a field for any of us to master all of it.  There are things for all of us to learn that someone else already knows. I try to set a standard of constant learning and training. If you are working with, or are, an expert beginner push hard to make sure everyone is getting ongoing training even if they don’t want it. It is critical to the success of any company that everyone be learning all the time. When we stop learning we start moving backward.

Also make sure there are structures for sharing that knowledge. That can take many different shapes: lunch and learns, internal trainings, informal interactions, and many others. Everyone should learn, and everyone should teach.

Step 3: Apply Pressure.

You get to be an expert beginner because you, and the people around you, allow it. To break that mindset you have to push them forward again. Dietrich is right that the toxicity of an expert beginner is the fact that they discourage other people from learning. The other side of that coin is that you can push people into learning by being the model student.

Sometimes this makes other people uncomfortable. It can look arrogant and pushy, but done well (something I’m still mastering) it shows people the advantages of breaking habits and moving forward. This goes best for me when I find places where other colleagues, particularly expert beginners, can teach me. Expert beginners know things, so show them you are willing to learn from what they have to offer as well. They may go out and learn something new just to be able to show off to you again.

Finally, remember this takes time. You have to be patient with people and give them a chance to change mental gears. Expert beginners are used to moving slowly, even though it feels fast to them. By forcing them into a higher gear you are making them uncomfortable, and it will take them time to adjust. Do not let them hold you back while they get up to speed, but don’t give up on them either.