Salesforce Queries and Proxies in Drupal 8

The Drupal 8 version of the Salesforce Suite provides a powerful combination of ready-to-use features and mechanisms for adding any custom pieces you may need. What it does not yet have is much good public documentation to explain all those features.

A recent support issue in the Salesforce issue queue asked for example code for writing queries. While I’ll address some of that here, there is ongoing work to replace the query interface with one more like Drupal core’s. Hopefully once that’s complete I’ll get a chance to revise this article, but be warned that some of these details may be a little out of date depending on when you read this post.

To run a simple query for all closed Opportunities related to an Account that closed after a specific date you can do something like the following:

$query = new SelectQuery('Opportunity');
$query->fields = [
  'Id',
  'Name',
  'Description',
  'CloseDate',
  'Amount',
  'StageName',
];
$query->addCondition('AccountId', $desiredAccountId, '=');
$query->conditions[] = [
  "(StageName", '=', "'Closed Won'",
  'OR', 'StageName', '=', "'Closed Lost')",
];
$query->conditions[] = ['CloseDate', '>=', $someSelectedDate];
$sfResponse = \Drupal::service('salesforce.client')->query($query);

The class would need to include a use statement for Drupal\salesforce\SelectQuery, and ideally you would embed this in a service so you could properly inject the Salesforce client service, but hopefully you get the idea.
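If you want the more correct version, here is a minimal sketch of such a wrapper service, assuming the client service implements \Drupal\salesforce\Rest\RestClientInterface (check the interface name against your installed version of the suite); the class and method names are just illustrative:

<?php

namespace Drupal\example;

use Drupal\salesforce\Rest\RestClientInterface;
use Drupal\salesforce\SelectQuery;

/**
 * Wraps the Salesforce client so query logic stays out of controllers.
 */
class OpportunityLookup {

  /**
   * The Salesforce client (service id: salesforce.client).
   *
   * @var \Drupal\salesforce\Rest\RestClientInterface
   */
  protected $sfClient;

  public function __construct(RestClientInterface $sf_client) {
    $this->sfClient = $sf_client;
  }

  /**
   * Runs the closed Opportunity query for one account.
   */
  public function closedOpportunities($accountId, $closedSince) {
    $query = new SelectQuery('Opportunity');
    $query->fields = ['Id', 'Name', 'CloseDate', 'Amount', 'StageName'];
    $query->addCondition('AccountId', $accountId, '=');
    $query->conditions[] = ['CloseDate', '>=', $closedSince];
    return $this->sfClient->query($query);
  }

}

Register it in example.services.yml with arguments: ['@salesforce.client'] and the dependency is explicit and easy to swap out in tests.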

The main oddity in the code above is the handling of query conditions (which is part of what led to the forthcoming interface changes). You can use the addCondition() method and provide a field name, value, and comparison operator, as the AccountId condition does. Or you can add an array of terms directly to the conditions array, which will be imploded together. Each element of the conditions array is ANDed with the others, so OR conditions need to be packed into a single element the way the StageName condition handles it.

Running a query in the abstract is pretty straightforward; the main question is what you are going to do with the data that comes back. The suite’s mapping features provide most of what you need for simply pulling down data to store in entities, and you should use the entity mapping features until you have a really good reason not to, so the need for direct querying is somewhat limited.

But there are use cases where it makes sense to run queries directly. Largely these are around pulling down data that needs to be updated in near-real time (so perhaps that list of opportunities would be ones related to my user that were closed in the last week, instead of ones attached to some random account).

I’ve written about using Drupal 8 to proxy remote APIs before. If you look at the sample code you’ll see the comment that says: // Do some useful stuff to build an array of data.  Now is your chance to do something useful:

<?php

namespace Drupal\example\Controller;

use Symfony\Component\HttpFoundation\Request;
use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Cache\CacheableJsonResponse;
use Drupal\Core\Cache\CacheableMetadata;
use Drupal\salesforce\SelectQuery;

class ExampleController extends ControllerBase {

  public function getJson(Request $request) {
    // Securely load the AccountId you want, and set the date range.

    $data = [];
    $query = new SelectQuery('Opportunity');
    $query->fields = [
      'Id',
      'Name',
      'Description',
      'CloseDate',
      'Amount',
      'StageName',
    ];
    $query->addCondition('AccountId', $desiredAccountId, '=');
    $query->conditions[] = [
      "(StageName", '=', "'Closed Won'",
      'OR', 'StageName', '=', "'Closed Lost')",
    ];
    $query->conditions[] = ['CloseDate', '>=', $someSelectedDate];
    $sfResponse = \Drupal::service('salesforce.client')->query($query);

    if (!empty($sfResponse)) {
      $data['opp_count'] = $sfResponse->size();
      $data['opps'] = [];

      if ($data['opp_count']) {
        foreach ($sfResponse->records() as $opp) {
          $data['opps'][] = $opp->fields();
        }
      }
    }
    else {
      $data['opp_count'] = 0;
    }

    // Add cache settings for max-age and contexts.
    // You can use any of Drupal's contexts, tags, and time.
    $data['#cache'] = [
      'max-age' => 600,
      'contexts' => [
        'url',
        'user',
      ],
    ];
    $response = new CacheableJsonResponse($data);
    $response->addCacheableDependency(CacheableMetadata::createFromRenderArray($data));
    return $response;
  }

}

Cautions and Considerations

I left out a couple of details above on purpose. Most notably, I am not showing how to get the SFID needed for filtering, because you need to apply some security checking in your route/controller/service, and those checks are probably specific to your project. If you are not careful you could let anonymous users explore your whole database. That is an easy mistake to make if you do something like use a Salesforce ID as a URL parameter. You will want to make sure you know who is running queries and that they are allowed to see the data you are about to present. This is on you as the developer, not on Drupal or Salesforce, and I’m not risking giving you a bad example to follow.

Another detail to note is that I used a cacheable response for a reason. Without caching, every request would go through to Salesforce. That is both slower than returning cached results (their REST API is not super fast, and you are proxying through Drupal along the way) and leaves you open to a simple denial of service where someone makes a bunch of calls and uses up all your API requests for the day. Think carefully before limiting or removing those cache settings (and make sure your cache actually works in production). Setting contexts of both URL and user helps ensure the right people see the right data at the right time.

Drupal Salesforce Suite Custom Field Mapping Types

The Drupal 8 Salesforce Suite allows you to map Drupal entities to Salesforce objects using a 1-to-1 mapping. To do this it provides a series of field mapping types that let you select how you want to relate the data between the two systems. Each mapping type provides the handling needed to make sure the data is treated correctly on each side of the connection.

As of this writing the suite provides six usable field mapping types:

  • Properties — The most common type to handle mapping data fields.
  • Record Type — A special handler to support Salesforce record type settings when needed.
  • Related IDs — Handles translating SFIDs to Drupal Entity IDs when two objects are related in both systems.
  • Related Properties — For handling properties across a relationship (when possible).
  • Constant — A constant value on the Drupal side that can be pushed to Salesforce.
  • Token — A value set via Drupal Token.

There is a seventh, called Broken, to handle mappings that have changed and need a fallback until they are fixed. The salesforce_examples module also includes a very simple example called Hardcoded that shows how to create a mapping with a fixed value (similar to, but less powerful than, the Constant field).

These six handle the vast majority of use cases, but not all. Fortunately the suite was designed using Drupal 8 annotated plugins, so you can add your own as needed. There is an example in the suite’s example module, and you can review the code of the ones that are included, but I think some people would find an overview helpful.

As an example I’m using the plugin I created to add support for related entities to the webform submodule of the suite (I’m referencing the patch in comment #10 because that’s current as of this writing, but you should actually use whatever version is most recent or has been accepted).

Like all good annotated plugins, to tell Drupal about it all we have to do is create the file in the right place. In this case that is [my_module_root]/src/Plugin/SalesforceMappingField/[ClassName].php, or more specifically: salesforce_webform/src/Plugin/SalesforceMappingField/WebformEntityElements.php

At the top of the file we need to define the namespace and add some use statements.

<?php
 
namespace Drupal\salesforce_webform\Plugin\SalesforceMappingField;
 
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Form\FormStateInterface;
use Drupal\salesforce_mapping\Entity\SalesforceMappingInterface;
use Drupal\salesforce_mapping\SalesforceMappingFieldPluginBase;
use Drupal\salesforce_mapping\MappingConstants;

Next we need to provide the required annotation for the plugin manager to use. In this case it just provides the plugin’s ID, which needs to be unique across all plugins of this type, and a translated label.

/**
 * Adapter for Webform elements.
 *
 * @Plugin(
 *   id = "WebformEntityElements",
 *   label = @Translation("Webform entity elements")
 * )
 */

Now we define the class itself which must extend SalesforceMappingFieldPluginBase.

class WebformEntityElements extends SalesforceMappingFieldPluginBase {

With those things in place we can start the real work.  The mapping field plugins are made up of a few parts: 

  • The configuration form elements which display on the mapping settings edit form.
  • A value function to provide the actual outbound value from the field.
  • Nice details to limit when the mapping should be used, and support dependency management.

The buildConfigurationForm function returns an array of form elements. The base class provides some basic pieces of that array that you should plan to use and modify. So first we call the function on that parent class, and then make our changes:

  /**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    $pluginForm = parent::buildConfigurationForm($form, $form_state);
 
    $options = $this->getConfigurationOptions($form['#entity']);
 
    if (empty($options)) {
      $pluginForm['drupal_field_value'] += [
        '#markup' => t('No available webform entity reference elements.'),
      ];
    }
    else {
      $pluginForm['drupal_field_value'] += [
        '#type' => 'select',
        '#options' => $options,
        '#empty_option' => $this->t('- Select -'),
        '#default_value' => $this->config('drupal_field_value'),
        '#description' => $this->t('Select a webform entity reference element.'),
      ];
    }
    // Just allowed to push.
    $pluginForm['direction']['#options'] = [
      MappingConstants::SALESFORCE_MAPPING_DIRECTION_DRUPAL_SF => $pluginForm['direction']['#options'][MappingConstants::SALESFORCE_MAPPING_DIRECTION_DRUPAL_SF],
    ];
    $pluginForm['direction']['#default_value'] =
      MappingConstants::SALESFORCE_MAPPING_DIRECTION_DRUPAL_SF;
    return $pluginForm;
  }

In this case we are using a helper function to get a list of webform entity reference elements for this mapping (the details are in the patch and unimportant to this discussion). We then make those elements the list of Drupal fields for the settings form. The array we got from the parent class already provides a list of Salesforce fields in $pluginForm['salesforce_field'], so we don’t have to worry about that part. Since the salesforce_webform module is push-only on its mappings, this plugin was designed to be push-only as well, and so it limits the direction options to push only. The default set of options is:

'#options' => [
    MappingConstants::SALESFORCE_MAPPING_DIRECTION_DRUPAL_SF => t('Drupal to SF'),
    MappingConstants::SALESFORCE_MAPPING_DIRECTION_SF_DRUPAL => t('SF to Drupal'),
    MappingConstants::SALESFORCE_MAPPING_DIRECTION_SYNC => t('Sync'),
 ],

And you can limit those any way that makes sense for your plugin.

With the form array completed, we now move on to the value function. This is generally the most interesting part of the plugin since it does the work of actually setting the value returned by the mapping.

  /**
   * {@inheritdoc}
   */
  public function value(EntityInterface $entity, SalesforceMappingInterface $mapping) {
    $element_parts = explode('__', $this->config('drupal_field_value'));
    $main_element_name = reset($element_parts);
    $webform = $this->entityTypeManager->getStorage('webform')->load($mapping->get('drupal_bundle'));
    $webform_element = $webform->getElement($main_element_name);
    if (!$webform_element) {
      // This reference field does not exist.
      return;
    }
 
    try {
 
      $value = $entity->getElementData($main_element_name);
 
      $referenced_mappings = $this->mappedObjectStorage->loadByDrupal($webform_element['#target_type'], $value);
      if (!empty($referenced_mappings)) {
        $mapping = reset($referenced_mappings);
        return $mapping->sfid();
      }
    }
    catch (\Exception $e) {
      return NULL;
    }
  }

In this case we are finding the entity referred to in the webform submission, loading any mapping objects that may exist for that entity, and returning the Salesforce ID of the mapped object if it exists.  Yours will likely need to do something very different.

There are actually two related functions declared by the plugin interface, implemented in the base class, and available for overriding as needed to set pull and push values independently:

  /**
   * An extension of ::value, ::pushValue does some basic type-checking and
   * validation against Salesforce field types to protect against basic data
   * errors.
   *
   * @param \Drupal\Core\Entity\EntityInterface $entity
   * @param \Drupal\salesforce_mapping\Entity\SalesforceMappingInterface $mapping
   *
   * @return mixed
   */
  public function pushValue(EntityInterface $entity, SalesforceMappingInterface $mapping);
 
  /**
   * An extension of ::value, ::pullValue does some basic type-checking and
   * validation against Drupal field types to protect against basic data
   * errors.
   *
   * @param \Drupal\salesforce\SObject $sf_object
   * @param \Drupal\Core\Entity\EntityInterface $entity
   * @param \Drupal\salesforce_mapping\Entity\SalesforceMappingInterface $mapping
   *
   * @return mixed
   */
  public function pullValue(SObject $sf_object, EntityInterface $entity, SalesforceMappingInterface $mapping);
 

But be careful overriding them directly. The base class provides some useful handling of various data types that need massaging between Drupal and Salesforce, and you may lose that if you aren’t careful. I encourage you to look at the details of both pushValue and pullValue before overriding them.

Okay, with the configuration and values handled, we just need to deal with programmatically telling Drupal when it can pull and push these fields. Most of the time you don’t need to do this, but you can simplify some of the processing by overriding pull() and push() to make sure they return the right response, hard coded instead of derived from other sources. In this case pulling the field would be bad, so we block that:

  /**
   * {@inheritdoc}
   */
  public function pull() {
    return FALSE;
  }

Also, we only want this mapping to appear as an option if the site has the webform module enabled. Without it there is no point in offering it at all. The plugin interface provides a function called isAllowed() for this purpose:

  /**
   * {@inheritdoc}
   */
  public static function isAllowed(SalesforceMappingInterface $mapping) {
    return \Drupal::service('module_handler')->moduleExists('webform');
  }

You can also use that function to limit a field even more tightly based on the mapping itself.
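For example, here is a rough sketch of a tighter check, assuming you only want the field offered on mappings whose Drupal side is a webform submission, and that the mapping exposes that setting through the same get() pattern used in value() above (that property name is an assumption, not from the patch):

  /**
   * {@inheritdoc}
   */
  public static function isAllowed(SalesforceMappingInterface $mapping) {
    // Require the webform module and limit to webform submission mappings.
    return \Drupal::service('module_handler')->moduleExists('webform')
      && $mapping->get('drupal_entity_type') === 'webform_submission';
  }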

To further ensure the configuration of this mapping entity defines its dependencies correctly, we can declare additional dependencies in getDependencies(). Again, here we are tied to the Webform module, and we should enforce that in config exports:

  /**
   * {@inheritdoc}
   */
  public function getDependencies(SalesforceMappingInterface $mapping) {
    return ['module' => ['webform']];
  }

And that is about it. Once the class exists and is properly set up, all you need to do is rebuild the caches and you should see your new mapping field as an option on your Salesforce mapping objects (at least when isAllowed() returns TRUE).

SC DUG August 2019

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules

  1. It has to actually work for D8 (so modern PHP version, working database, etc.).
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).

Reporting

Be prepared to talk for about 5 minutes on how your solution would work.  Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Drupal Tome + Docksal + Netlify

Drupal Tome is a static site generator distribution of Drupal 8. It provides mechanisms for taking an entire Drupal site and exporting all the content to HTML so it can be served directly. As part of a recent competition at SCDUG to come up with the cheapest possible Drupal 8 hosting, I decided to do a proof-of-concept level implementation of Drupal 8 with Docksal for local content editing and Netlify for hosting (total cost was just the domain registration).

The Tome project has directions for setup with Docker and for setup with Netlify, but they don’t quite line up with each other. I followed the Docker instructions, then the Netlify set, but had to chart my own course to get the site from the first project linked to the repo in the second. And since I’m getting used to using Docksal, when I had to fall back and do a bit of it myself I realized it was almost painfully easy to set up.

The first step was to go to the Tome documentation for Netlify and set up an account and a site from the template. There is a button in those directions to trigger the Netlify setup; I’ve added one here as well (but if this one fails, check to see if they updated theirs):

Deploy to Netlify

Log in with Github or a similar service, and let it create a repo for your project.

Follow Netlify’s directions for setting up DNS so you can have the domain you want, and HTTPS (through Let’s Encrypt). It took a couple of hours for that detail to run right, but it eventually worked. For this project I chose a subdomain of my main blog domain: tome-netlify.spinningcode.org

Next go to Github (or whatever service you used) and clone the repository to your local machine. There is a generated README in that project, but the directions aren’t 100% correct if you aren’t cloning onto a machine with a working PHP environment. This is when I switched over to Docksal and ran the following series of commands:

fin init
fin composer install
fin drush tome:install
fin drush uli

Then log into your local site using the domain from Docksal and the one-time login link from drush, and add some content.

Next we export the content from Drupal to send over to Netlify for deployment.

fin drush tome:static
git add .
git commit -m "Adding sample content"
git push

…now we wait while Netlify notices and builds the site…

If you look at the site a few minutes later the new content should be posted.

This is all well and good if I want to use the version of the site generated for the Netlify example, but I wanted to make sure I could do something more interesting. These days Drupal ships with an install profile called Umami that provides a more robust sample site than the more traditional Standard install.

So now let’s try to get Umami onto this site. Go back to the terminal and have Tome reset everything (it’ll warn you that you are about to nuke everything):

fin drush tome:init

…select Umami when it asks for a profile…and wait because this takes a while…

Now just re-export the content and push it to your repo.

fin drush tome:static
git add .
git commit -m "Converting to Umami"
git push

And wait again, cause this also takes a few minutes…

The Umami home page on my subdomain hosted at Netlify.

That really was all that was involved for a simple site. You can see my repository on Github if you want to see everything that was generated along the way.

The whole process is pretty straightforward, but there are a few things it helps to understand.

First, Netlify is actually regenerating the markup on their servers with this approach. The Drupal nodes, and other entities, are saved as JSON and then imported during the build. This makes the process reliable, but slow: Umami takes several minutes to deploy since Netlify is installing and configuring Drupal, loading the content, and generating the output. The build command provided in that template is clear enough to follow if you are familiar with Composer projects:

command = "composer install && ./vendor/bin/drush tome:install -y && ./vendor/bin/drush tome:static -l $DEPLOY_PRIME_URL" 

One upside of this is that you can use a totally unrelated domain for your local testing and have it adjust correctly to the production domain. When you are using Netlify’s branching workflow for managing dev, test, and production, it also protects your work that way.

My directions above load a standard Docksal container, which includes MySQL, because that’s quick and easy, but Tome falls back to using a SQLite database since it can be more confident that is available. Again, this is reliable but slow. If I were going to do this on a more complete project I’d want a smaller Docksal setup, or I’d switch to using MySQL locally.

A workflow based on this approach might also struggle with concurrent edits or the complex configuration of large sites. It would probably make more sense to have the content created on a hidden, but traditional, server and then run through a different workflow. But for someone working on a series of small sites that are rarely updated, this gives you a totally temporary instance of the site that can be rapidly deployed to a device, have its content updated, be pushed out to production, and then be deleted locally until it is needed again.

The final detail to note is that there is no support for forms built into this solution. Netlify has support for that, and Tome has a module that claims to connect to that service, but I wasn’t able to quickly determine how to get it connected. I am confident there are solutions to this problem, but it is something that would take a little additional work.

Bypass Pantheon Timeouts for Drupal 8

Pantheon is an excellent hosting service for both Drupal and WordPress sites. But to make their platform work and scale well they have built in a number of limits, including process time limits and memory limits. While those are large enough for the majority of projects, large projects can have trouble.

For data loading their official answer is typically to copy the database to another server, run your job there, and copy the database back onto their server. That’s fine if you can afford to freeze updates to your production site. And have the time to set up a process to mirror changes into your temporary copy. And can afford some additional project overhead. But sometimes those things are not an option. Or the data load takes too long, or happens too often, for that to be practical on a regular basis.

I recently needed to do a very large import of records into Drupal on a Pantheon hosted site. To minimize user impact I started to play around with solutions that would allow me to work around those time limits. We were looking at doing about 50 million data writes for the project. When I first estimated the process the running time was over a week.

The Outline

Since Drupal’s batch system was created to solve this exact problem, it seemed like a good place to start. For this solution you need a file you can load and parse in segments, like a CSV file, which you can read one line at a time. It does not have to represent the final state of your data: you can load the data raw, or you can load each record into a table or a queue to process later.

One quick note about the code samples: I wrote these based on the service-based approach outlined in my post about batch services and the batch service module I discussed there. They could be adapted to a traditional batch job, but I like the clarity the wrapper provides for this discussion.

The general concept here is that we upload the file and then progressively process it from within a batch job. My code samples below provide two classes to achieve this. The first is a form that provides a managed file field, which creates a file entity that can be reliably passed to the batch processor. From there the batch service uses a bit of basic PHP file handling to copy data into the database. If you need to do more than load the data into the database directly (say create complex entities or other tasks), you can set up a second phase to run through the values and do that heavier lifting.

Load your file

To get us started the form includes this managed file:

   $form['file'] = [
     '#type' => 'managed_file',
     '#name' => 'data_file',
     '#title' => $this->t('Data file'),
     '#description' => $this->t('CSV format for this example.'),
     '#upload_location' => 'private://example_pantheon_loader_data/',
     '#upload_validators' => [
       'file_validate_extensions' => ['csv'],
     ],
   ];

The managed file form element automagically gives you a file entity, and the value in the form state is the ID of that entity. This file will be temporary and have no references once the process is complete, so depending on your site setup the file will eventually be purged. All of which means we can pass the form values straight through to our batch processor:

$batch = $this->dataLoaderBatchService->generateBatchJob($form_state->getValues());

If the data file is small, a few thousand rows at most, you could load it right away. But that runs into both time and memory concerns, and the whole point of this is to avoid those. With my approach we can ignore those limits and we’re only constrained by Pantheon’s upload file size. If the file is too large for that, you can upload it via SFTP and read it from the file system instead, so you have options.

As we set up the file for processing in the batch job, we really need the file path, not the ID. The main reason to use a managed file is that Drupal can reliably resolve the file path on a Pantheon server without us needing to know where they have things stashed. Since we’re about to use generic PHP functions for file processing we need to know that path reliably:

$fid = array_pop($data['file']);
$fileEntity = File::load($fid);
$ops = [];

if (empty($fileEntity)) {
  $this->logger->error('Unable to load file data for processing.');
  return [];
}
$filePath = $this->fileSystem->realpath($fileEntity->getFileUri());
$ops = ['processData' => [$filePath]];

Create your batch

Now we have a file and know where it is. Since it’s a CSV we can load a few rows at a time, process them, and then loop back.

Our batch processing function needs to track two things in addition to the file: the header values and the current file position. So on the first pass we initialize the position to zero and load the first row as the header. On every pass after that we need to find the point where we left off. For this we use generic PHP file functions for opening the file and seeking to the current location:

// Old-school file handling.
$path = array_pop($data);
$file = fopen($path, "r");
...
fseek($file, $filePos);

// Each pass we process 100 lines; if you have to do something complex
// you might want to reduce the run.
for ($i = 0; $i < 100; $i++) {
  $row = fgetcsv($file);
  if (!empty($row)) {
    $data = array_combine($header, $row);
    $rowData = [
      'col_one' => $data['field_name'],
      'data' => serialize($data),
      'timestamp' => time(),
    ];
    $row_id = $this->database->insert('example_pantheon_loader_tracker')
      ->fields($rowData)
      ->execute();

    // If you're setting up for a queue you include something like this.
    // $queue = $this->queueFactory->get('example_pantheon_loader_remap');
    // $queue->createItem($row_id);
  }
  else {
    break;
  }
}
$filePos = (float) ftell($file);
$context['finished'] = $filePos / filesize($path);
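The code above assumes $header and $filePos are already available when the pass starts. Here is a minimal sketch of that first-pass initialization, assuming the batch $context sandbox is used to carry those values between passes (how your batch service actually stores state may differ):

// First pass only: read the header row and record where the data starts.
// Later passes pull these values back out of the sandbox.
if (!isset($context['sandbox']['file_pos'])) {
  $handle = fopen($path, "r");
  $context['sandbox']['header'] = fgetcsv($handle);
  $context['sandbox']['file_pos'] = ftell($handle);
  fclose($handle);
}
$header = $context['sandbox']['header'];
$filePos = $context['sandbox']['file_pos'];
// After the processing loop runs, write the updated $filePos back into
// $context['sandbox']['file_pos'] so the next pass picks up where this one stopped.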

The example code just dumps this all into a database table. That can be useful on its own as a raw data loader if you need to add a large data set to an existing site for reference data or something similar. It can also be used as the base for creating more complex objects. The example code includes comments about generating a queue worker to run on cron or as another batch job; the Queue UI module provides a simple interface for running those as a batch job.
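If you go the queue route, the worker itself is just a small plugin. Here is a minimal sketch, assuming the queue name from the commented-out code above; the module name, class name, and the tracking table’s key column are hypothetical:

<?php

namespace Drupal\example\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Processes rows staged in the tracking table by the loader batch job.
 *
 * @QueueWorker(
 *   id = "example_pantheon_loader_remap",
 *   title = @Translation("Example Pantheon loader remap"),
 *   cron = {"time" = 60}
 * )
 */
class ExamplePantheonLoaderRemap extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // $data is the row ID the batch job saved. The 'id' column name is an
    // assumption; use whatever key your tracking table actually defines.
    $record = \Drupal::database()
      ->select('example_pantheon_loader_tracker', 't')
      ->fields('t')
      ->condition('id', $data)
      ->execute()
      ->fetchAssoc();

    if ($record) {
      // Unserialize $record['data'] and do the heavier lifting here
      // (entity creation, remapping, and so on).
    }
  }

}

Because the cron time is set in the annotation, these items get processed a chunk at a time on normal cron runs, which keeps each run comfortably inside Pantheon’s limits.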

Final Considerations

I’ve run this process for several hours at a stretch. Pantheon does have issues with system errors if a batch job is left to run for extreme lengths of time; I ran into problems on some runs after 6-8 hours. So prepping the data into the database and then processing it from a queue that can restart has been more reliable.

Docksal Pantheon Setup from Scratch

I recently had reason to switch over to using Docksal for a project, and on the whole I really like it as a good easy solution for getting a project specific Drupal dev environment up and running quickly. But like many dev tools the docs I found didn’t quite cover what I wanted because they made a bunch of assumptions.

Most assumed either I was starting a generic project or that I was starting a Pantheon specific project – and that I already had Docksal experience. In my case I was looking for a quick emergency replacement environment for a long-running Pantheon project.

Fairly recently Docksal added support for a project init command that helps set up projects for Acquia, Pantheon, and Platform.sh, but pull init isn’t really well documented and requires a few preconditions.

Since I had to run a dozen Google searches, and ask several friends for help, to make it work I figured I’d write it up.

Install Docksal

First, follow the basic Docksal installation instructions for your host operating system. Once that completes, if you are using Linux as the host OS, log out and log back in (the installer just added your user to a group, and you need that access to start up Docker).

Add Pantheon Machine Token

Next you need a Pantheon machine token so that Terminus can run within the new container you’re about to create. If you don’t have one already, follow Pantheon’s instructions to create one and save it someplace safe (like your password manager).

Once you have a machine token you need to tell Docksal about it. There are instructions for that (though they aren’t in the instructions for setting up Docksal with pull init): basically you add the key to your docksal.env file:

SECRET_TERMINUS_TOKEN="HASH_VALUE_PROVIDED_BY_PANTHEON_HERE"

Also, if you are using Linux you should note that the instructions linked above say the file goes in $HOME/docksal/docksal.env, but you really want $HOME/.docksal/docksal.env (note the dot in front of docksal, which hides the directory).

Setup SSH Key

With the machine token in place you are almost ready to run the setup command; there is just one more precondition. If you haven’t been using Docker or Docksal they don’t know about your SSH key yet, and pull init assumes it’s available. So you need to tell Docksal to load it by running:

fin ssh-key add

If the whole setup is new, you may also need to create your key and add it to Pantheon. Once you have done that, if you are using a default SSH key name and location it should be picked up automatically (I have not tried this yet on Windows, so mileage there may vary – if you know the answer please leave me a comment). It is also a good idea to make sure the key itself is working by getting the git clone command from your Pantheon dashboard and trying a manual clone on the command line (delete the clone once it’s done; this is just to prove you can get through).

Run Pull Init

Now finally you are ready to run fin pull init: 

fin pull init --hostingplatform=pantheon --hostingsite=[site-machine-name] --hosting-env=[environment-name]

Docksal will now set up the site, maybe ask you a couple of questions, and clone the repo. It will leave out a couple of things you may need: database settings and .htaccess.

Add .htaccess as needed

Pantheon uses nginx. Docksal’s formula uses Apache. If you don’t keep a .htaccess file in your project (and while there is no reason not to, some Pantheon setups don’t keep extra stuff like that around), you need to put it back. If you don’t have a copy handy, copy and paste the content from the Drupal project repo: https://git.drupalcode.org/project/drupal/blob/8.8.x/.htaccess

Finally, you need to tell Drupal where to find the Docksal copy of the database. For that you need a settings.local.php file. Your project likely has a default version of this, which may contain things you may or may not want, so adjust as needed. Docksal creates a default database (named default) and provides a user named "user" with a password of "user". The host’s name is 'db'. So your settings.local.php file needs to include database settings at the very least:

<?php

$databases = [
  'default' => [
    'default' => [
      'database' => 'default',
      'username' => 'user',
      'password' => 'user',
      'host' => 'db',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ],
  ],
];

With the database now linked up to Drupal, you can ask Docksal to pull down a copy of the database and a copy of the site files:

fin pull db

fin pull files

In the future you can also pull down code changes:

fin pull code

Bonus points: do this on a server.

On occasion it’s useful to have all this set up on a remote server, not just a local machine. There are a few more steps to do that safely.

First, you may want to enable Basic HTTP Auth just to keep away the prying eyes of Googlebot and friends. There are directions for that step (you’ll want the Apache instructions). Next you need to make sure that Docksal is actually listening for the host’s requests and that they are forwarded into the containers. Lots of blog posts say to run DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin reset proxy, but it turns out that fin reset proxy has been removed; instead you want:

DOCKSAL_VHOST_PROXY_IP=0.0.0.0 fin system reset

Next you need to add the vhost to the docksal.env file we were working with earlier:

 VIRTUAL_HOST="test.example.org"

Run fin up to get Docksal to pick up the changes (this section is based on these old instructions).

Now you need to add either a DNS entry someplace, or update your machine’s /etc/hosts file to look in the right place (the public IP address of the host machine).

Anything I missed?

If you think I missed anything, feel free to let me know. Windows users in particular, feel free to let me know about changes related to doing things there. I’ll try to work those in if I don’t get to figuring them out on my own in the near future.

SC DUG May 2019

For this month’s SC DUG, Mauricio Orozco from the South Carolina Commission for Minority Affairs shared his notes and lessons learned during his first DrupalCon North America.

We frequently use these presentations to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. If you want to see a polished version, check out our group members’ talks at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

SC DUG February 2019

Will Jackson – Local Development in Docksal

For the SC DUG meeting this month Will Jackson from Kanopi Studios gave a talk about using Docksal for local Drupal development. Will has the joy of working with some of the Docksal developers and has become an advocate for the simplicity and power Docksal provides.

We frequently use these presentations to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. If you want to see a polished version, check out our group members’ talks at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

SC DUG November 2018

Kaylan Wagner – Real world lessons from online games.

This fall the South Carolina Drupal User’s Group started using Zoom as part of all our meetings. Sometimes the technology has worked better than others, but when it works in our favor we are recording the presentations and sharing them when we can.

In November Kaylan Wagner gave a draft talk on using experiences in the world of online gaming to be a better remote team member.

We frequently use these presentations to practice new presentations and test out new ideas. If you want to see a polished version, hunt down group members at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.

SC DUG September 2018

Chris Zietlow – Using Machine Learning to Improve UX

This fall the South Carolina Drupal User’s Group started using Zoom as part of all our meetings. Sometimes the technology has worked better than others, but when it works in our favor we are recording the presentations and sharing them when we can.

Chris Zietlow presented back in September about using Machine Learning to Improve UX.

We frequently use these presentations to practice new presentations and test out new ideas. If you want to see a polished version, hunt down group members at camps and cons. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback.

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and connection information.