The Queries Part 1 of 3

This is the first in a series of posts to break down the questions from my Queries on Queries talk. The full talk is available here.

Are your tools good enough?

Our migrations live and die by our tools.

Are your tools built for the scale of your project? Do they empower you to do your best work or impede rapid progress? Would a new tool serve you better now or in the future?

Having the right tools is critical to any job. In data migration we primarily talk about ETLs (Extract, Transform, and Load): tools like Jitterbit, Informatica, Mulesoft, Talend, etc. We also use additional tools to support the process: a task tracker like Jira, a data modeler like Lucidchart, a staging database tool like Salesforce2Sql, and more.

It’s easy to say that it’s a poor carpenter who blames his tools, but anyone who has spent time with actual carpenters knows they care a great deal about what tools they use. They might be able to make do with poor tools, but they will do their best work with the right tools for the job.

Each tool you use needs to meet your team’s needs. It should play to your strengths, support the kinds of projects you do, and have an eye to the future. A tool that works great for a team of declarative Salesforce consultants might drive developers crazy. A tool that works great for tens of thousands of records might struggle with millions; a tool scaled for tens of millions of records may be overly complex for a project of 30,000.

Make sure you’re using the tools that let you do your best work, now and in the future.

Do you make the data atomic for processing?

Smaller pieces of data are easier to track, manipulate, and test.

Do you divide the source data into its constituent parts? Can you process individual pieces of data easily and cleanly? Can you stop your process after each stage to validate the results?

It can be tempting to process data as it comes: handling whole rows of data in the form they were provided and treating each field as a single data point. In practice, exports may have extra rows or columns to deal with related records. Organizations may have encoded multiple data points into one field, like ticket names that combine a show name, date, and time into a single name field. Fields can also contain semi-structured data, like Joomla’s use of arbitrary JSON blobs.

To process this data it is often easier and clearer to extract it from these structures prior to direct processing. It’s not always needed, and rarely strictly required, but doing this cleanup of structure – like creating interstitial database tables or predictable data objects – can greatly ease the rest of your job.
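
To make that concrete, here is a minimal sketch of pulling one of those combined ticket-name fields apart before any other processing happens. The delimiter and field layout here are hypothetical, not from any particular source system:

from datetime import datetime

def split_ticket_name(raw_name):
    # Assumes a hypothetical "Show Name | 2024-12-20 | 19:30" layout.
    show, date_text, time_text = [part.strip() for part in raw_name.split("|")]
    return {
        "show": show,
        "starts_at": datetime.strptime(f"{date_text} {time_text}", "%Y-%m-%d %H:%M"),
    }

print(split_ticket_name("The Nutcracker | 2024-12-20 | 19:30"))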

Like many problems in software engineering, it’s easier to do good work when you are operating on atomic pieces. Think about the right ways to pull your data into constituent parts when they aren’t there already.

Can you process samples of your data set?

When you have lots of data you need to test small parts to be sure your process works.

Do you know how to create and run small segments of your total input? Are your segments made up of complete and valid samples? Does your sample include all the errors and edge cases your data set will throw at your process?

If you are working with small data sets your sample can be all the data. But when you have a large data set you need to test your process with samples. When you have a multi-step migration you likely need to test the second phase while the first phase is still under construction – again a sample data set is critical.

Having valid test data that covers all your edge cases is critical to making sure you have a working solution.
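
For example, here is a minimal sketch of building a sample that still covers every category in the data, assuming a hypothetical contacts.csv export with a membership_status column (neither the file nor the column names come from a real project):

import csv
import random
from collections import defaultdict

by_status = defaultdict(list)
with open("contacts.csv", newline="") as source:
    for row in csv.DictReader(source):
        by_status[row["membership_status"]].append(row)

# Take a few rows from every status so the rare cases make it into the sample.
sample = []
for status, rows in by_status.items():
    sample.extend(random.sample(rows, min(len(rows), 25)))

with open("contacts_sample.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=sample[0].keys())
    writer.writeheader()
    writer.writerows(sample)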

A few years ago I worked on a project that involved a two-stage migration for a membership organization with some 600,000 active contacts. Every one of them needed to be migrated into Salesforce and then into Drupal. To test the Drupal migration we needed samples of all the membership statuses we would see, which involved hand-creating several hundred records. At the next Salesforce Commons Sprint I raised the idea of needing a better tool for this kind of work; that question eventually helped lead to Paul Prescod’s creation of Snowfakery. Snowfakery will build you testing data sets of any size and complexity to make sure your processes will succeed.

Queries on Queries: Improve your data migration

Last week I gave my Queries on Queries talk, intended to help you improve your data migration process, as a webinar for Attain Partners.  It’s a revised and improved version from the last time I gave it.

These questions aren’t like the old Joel Test (which is still useful), where the right answer is “yes”. These questions are designed to point you in a direction but allow you to change your answer over time. I generally answer these questions with a paragraph, not a word. Use these questions as a challenge to make you and your team better.

Over the next few weeks I’m planning to publish a series that will include each query and why I think it’s useful in helping you think about how to improve your process.

Salesforce Data Migration Lessons

Last week I was part of a webinar for Attain Partners talking about Salesforce data migrations. One of the questions the moderator, Eric Magnuson, asked was the three lessons I’d learned doing data migration work.

The answers I chose weren’t so much from my first data migration projects as from more recent ones. Those early migrations were tiny by my current standards. Back then, I was mainly the consumer of migrated data. When I did migrations they were small and manual. The intellectual process was similar, but the scale meant I had lots left to learn about large data projects. The lessons I chose focus on what I wish I had known when doing migrations for clients with vastly larger data sets – when I became the producer of migrated data.

My three lessons:

  1. Manage expectations from the start.
  2. Don’t be a hero.
  3. Understand your tools, and use them well.

Manage Expectations from the Start

This still surprises me, but it’s true: one of the biggest reasons data migrations get into trouble is bad expectations.

People tend to think data migrations are easy. You create a map from the old system to the new, run some processes to convert the data, and load it into Salesforce. But in practice, that map might involve thousands of details. Those conversions are hard to get right for every variation of the old data. And loading large amounts of data is never as easy as it looks.

When people think something is easy, and then the outcome is less than perfect, they get mad.

The problem is, data is always messy. We do migrations when we replace systems. We replace systems when the old one has problems. Problems in a system lead to data errors.

That reality is made worse by the realities of big system switches. The freshly migrated data lands in a system the primary users are still learning. Those factors lead to confusion, mistakes, errors, and misunderstandings.

From the very first conversation I tell our clients to expect migration errors. Our first mock run of the migration is messy – sometimes very messy. The whole point is to find errors. The second mock run will be much better, but still imperfect; we’ll find more errors. No matter how many times we do test runs, there is only so much we can do with imperfect inputs.

We can make your data better than it is, but we can’t make it perfect. If you expect perfect data, you will be disappointed. I want you to be thrilled by your new system, and that means you need to understand your migrated data will have flaws.

Don’t Be A Data Hero

When I first did data migrations I was often a one-person show. I was responsible for figuring out all the details, implementing the process, reporting to the client, and fixing all the flaws. It’s a common story for people doing migrations.

We find ourselves up late at night, working through piles of detail, trying to make sure the client is satisfied. It encourages a hero mentality: I’ll make the project successful through sheer will.

And most of us can do it. Being a data hero isn’t that hard if you put the hours in. It is, however, miserable.

People doing migrations need, and deserve, support. Now that I’m leading a team of people doing migrations I have added the support I should have had. I created a space for us to come together and talk about our projects.

We ask each other for help and suggestions. We offer ideas for how to improve our processes. We talk about ways to address client concerns. And yes, we complain to one another. But mostly what we do is make sure that no one is alone. Everyone, myself included, has support and back up.

A team is stronger than a person. We don’t need to be heroes. We do better work, and are happier people, when we support one another. Good work, by happy people, makes for successful projects.

Understand Your Migration Tools

Data processing tools are code generators – understanding that allows you to use them well.

Both parts of that make sense once you say it:

  • Tools that allow you to design an arbitrary process that takes inputs and generates outputs are obviously writing code at some level.
  • If you understand any tool better, you will use it better. That’s true of a hammer, a screwdriver, or a piece of software.

I learned how to migrate data from people who weren’t formally trained developers. They were using the tools the best way they knew how, but didn’t have the background to apply software development best practices.

When I combined the tool usage they taught me with the software engineering practices I already knew, I created vastly superior solutions. Our team now creates processes that are easier to set up, run faster, and allow us to fix all errors (even if we missed them until after launch).

Applying These Lessons to our Salesforce Data Migrations

I apply these lessons to the Salesforce data migrations I lead in my work at Attain Partners. I combine that with my queries on queries review process, and am constantly building better solutions for our clients, our team, and myself.

Becoming a Salesforce MVP

This week I learned that I am now a Salesforce MVP!

I know that I was nominated by at least Chris Pifer, Samantha Shain, Paul Ginsberg, and Cassie Supilowski. I appreciate their support, and also everything they have taught me over time. They are an august group. Two are Rock Star Admins and two are existing MVPs (three now, as Sam is also in the 2023 MVP class). It is my privilege to know, work with, and learn from all of them.

I got here with community support

The Salesforce MVPs are a group of people who are recognized within the Salesforce community as leaders and contributors. It’s a group of people who, in addition to our day-to-day work, also support other people’s work and learning.

For me that looks like posts about Salesforce here, participating in Salesforce.org’s Commons program, and generally supporting people I meet along the way. While I contribute as I’m able, my own contributions are fueled by support from my nominators and others.

Sam, Cassie, and I are all leaders of the Commons’ Data Generation Toolkit project. Chris has been a friend and colleague nearly my entire adult life – we often remark that we’ve had about every possible professional relationship. I met Paul planning Nonprofit Dreamin’ a couple years ago and we continue to talk regularly about life, Salesforce, and nonprofits (in no particular order). In other posts I have talked about the influence mentors had on my work.

Looking forward to more

Being a Salesforce MVP gives me access to tools, events, information, and opportunities that are new to me. It also comes with the expectation of continuing contributions to the community.

The good news for me is that I generally like contributing and supporting others. I am looking forward to continuing to advance my personal projects, contribute to the Data Gen Toolkit, present at events, and generally help others succeed.

If your group needs a speaker reach out and let me know. I love to talk about Salesforce, data, the web, career paths, and just about anything you find me writing about here.

What I Brought from Drupal to Salesforce

If you look over this blog, or know me, you will see that I now have reasonably significant experience as both a Salesforce and Drupal developer. The last couple weeks I have been thinking about what from my Drupal experience supports my work as a Salesforce developer.

I think there are three habits encouraged in Drupal developers that helped me learn to be a good Salesforce developer quickly.

  1. Embrace, don’t fight, Platform Constraints
  2. Extend the Platform’s Strengths
  3. Leverage Events

Embrace Salesforce Platform Constraints

Both Drupal and Salesforce run in constrained environments. Web applications, regardless of their purpose, have to protect themselves against bad actors and bad neighbors. There are execution timeouts and memory limits on both platforms. Salesforce adds a variety of additional limits and governors, but they are all logical extensions of those limits on memory and time. Lots of other platforms allow developers to ignore resource use until they reach a crisis point. Don’t believe me? Just check the memory used by Chrome, Electron apps, or any other Chromium-based application.

Working in resource constrained environments forces you to think through how to use the resources you do have efficiently. While these platforms aren’t like working on hardware with highly limited resources, they still can test your resourcefulness.

Drupal and Salesforce both provide ways to run large jobs across many processing contexts. New developers on both platforms often treat batch and queueable operations as a last resort, but learning to use those solutions is critical to success on most interesting projects. When you try to avoid them you create solutions that appear to work but fail at scale.
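
The shape of that idea, sketched in plain Python rather than Apex queueables or Drupal’s Batch API – the function names here are illustrative, not a real platform API:

def chunked(records, batch_size=200):
    # Yield fixed-size batches so no single pass has to hold the whole data set.
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

def process_all(records):
    for batch in chunked(records):
        # Each batch stays inside time and memory limits, and could just as
        # easily be handed to a queue for a separate asynchronous execution.
        process_batch(batch)

def process_batch(batch):
    for record in batch:
        pass  # transform, validate, or load a single record here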

Coming to Salesforce from Drupal, I already knew and understood the importance of asynchronous batch processing. So when solutions needed batch processing it was second nature to learn that part of the platform.

Extend the Platform’s Strengths

For all of Salesforce’s marketing push to avoid code, Salesforce developers are often taught that once you write code you just do everything in code. The interfaces you can use to extend the platform’s existing solutions are treated as advanced topics. But when you work with Drupal, you are encouraged from the start to create modules that build on and extend the platform’s existing strengths. Drupal developers are encouraged to leverage the features and utilities all around them.

This has always been true, but even more so after Drupal’s move to leverage Symfony plugins and services. As a developer used to extending the platform, I came to Salesforce looking for ways to extend its declarative tools.

Often Salesforce developers create powerful solutions built purely in code, triggered by record changes or simple buttons. They look past Apex Actions that extend Flow’s declarative automations, platform events, and other tools that extend the system. But when you embrace a platform’s basic structures you often create more flexible solutions than you could with pure code.

The mindset of extending a platform, which I brought with me from my Drupal work, helps me create tools and solutions that are designed to adapt over time.

Leverage Events

Event-driven architectures are not new, but their popularity continues to grow. Where platforms used to follow informal patterns that amounted to event systems, we now see formal event structures being built to replace old habits.

Drupal and Salesforce both have had event frameworks for a long time: Drupal had hooks (events by naming convention), Salesforce had triggers.

Both have seen major upgrades to their event patterns in recent versions. Platform events in Salesforce are still making their full power clear to a lot of developers. Symfony brought proper events to Drupal in version 8 and continues to help push the platform forward.

I have learned to leverage the event systems on both platforms. Understanding them as tightly constrained state machines, and learning to push them to their limits, helps me get the most from both platforms.

My experience with Drupal hooks and events has made it obvious to me when to leverage Salesforce’s Platform events. As Salesforce increases the number of places you can use them in their declarative tools, it increases the value of this approach.
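
The underlying pattern is simple enough to sketch in a few lines of Python. This is not the Salesforce or Drupal API – just the shape both platforms formalize: publishers emit named events, and subscribers react without the publisher knowing who is listening:

from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_name, handler):
    subscribers[event_name].append(handler)

def publish(event_name, payload):
    # The publisher has no idea how many handlers exist, or what they do.
    for handler in subscribers[event_name]:
        handler(payload)

# A new subscriber can be added later without touching the publishing code.
subscribe("contact.updated", lambda payload: print(f"Sync {payload['id']} to the mailing list"))
publish("contact.updated", {"id": "003000000000001"})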

So What?

As a developer, what you learn in one part of your career can make you stronger in the next. As a field we’re not actually that creative. Even if the details are different, the concepts will often carry forward because they are built on the same fundamentals. Whatever platform you are using today, learn how to make it sing – it’ll help you learn the next faster and better.

Getting Started with Salesforce2Sql

Salesforce2Sql is a tool dedicated to doing one thing very well: mirroring Salesforce schema.

A little over a year ago I started work on Salesforce2Sql to help support people doing data work with Salesforce. Salesforce2Sql is a simple Electron app that allows you to clone the schema of a Salesforce org into an SQL database (it currently supports MySQL/MariaDB and Postgres). These mirrors are useful when you want to stage data during migrations into or out of Salesforce.

When is Salesforce2Sql useful

Salesforce2Sql is a tool dedicated to doing one thing very well: mirroring Salesforce schema. It does not attempt to extract data or convert data in any way.

Salesforce provides excellent APIs for data import and export, but they work best with some prep work first. Salesforce2Sql gives you a staging database that mimics your Salesforce schema. In these schema mirrors you can prepare all your data for high speed processing off-platform. Anyone who is looking to move data into or out of Salesforce at high speeds and large volumes benefits from this setup.

I have seen people write large, complex ETL jobs meant to go straight from one source system into Salesforce. These jobs can be hard to test and slow to run. They generally don’t give you an easy way to review the data before you push it to Salesforce. By landing the data in a database clone you can break the job into stages, review transformations before they are loaded, and run thousands of test iterations instead of dozens.
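
Here is a minimal sketch of that staged approach, using SQLite and a hypothetical contact_stage table purely for illustration (Salesforce2Sql itself targets MySQL/MariaDB and Postgres):

import sqlite3

conn = sqlite3.connect("staging.db")

# Stage 1: land the raw source data in a table that mirrors the Salesforce object,
# so every later step can be re-run without touching the source system again.
conn.execute(
    "CREATE TABLE IF NOT EXISTS contact_stage "
    "(legacy_id TEXT, FirstName TEXT, LastName TEXT, Email TEXT)"
)

# Stage 2: transform in place, then review with plain SQL before anything is loaded.
bad_rows = conn.execute(
    "SELECT COUNT(*) FROM contact_stage WHERE Email NOT LIKE '%@%'"
).fetchone()[0]
print(f"{bad_rows} rows still need attention before load")

# Stage 3: only when the review queries come back clean, hand the rows to your
# loader of choice (the Bulk API, an ETL tool, etc.).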

Setup

Salesforce2Sql is built and released for MacOS, Windows, and Linux (most testing is on Mac and Windows). You can download the installers from the current release and follow the standard patterns for your OS. From there start the application to get to work.

You will also need API access to a Salesforce org and a database to create your schema within.

Basic Salesforce2Sql Use

As of this writing Salesforce2Sql uses the old security token connection method. I would like to add OAuth2 support as well  but haven’t gotten that done; contributions are welcome. So with the application running, and your security token in hand, click the big “Create New Connection” button on the left side of the main interface.

Salesforce2Sql Main Screen

Step 1: Fetch all objects

Once connected click “Fetch Objects”, and the tool will download a list of every object in your org. There will be several hundred, so the next step is to select which ones you want to mirror. You will notice Salesforce2Sql selected defaults for you (well, I did). It will select all custom objects, and based on your org’s structure it will guess which standard objects to select. There is a search box at the top right to help you find any others you’d like to add by simply checking the box.

Step 2: Fetch all fields

Click the next button to move to the Proposed Schema tab, and then the “Fetch Details” button. Now Salesforce2Sql will query every field on every object you just selected (this may take a moment but honestly I find it much faster than I expected when I first started this project). Once that is complete you can either save that schema to JSON for later re-use, or click Next to move to the “Generated Database” tab.
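
Under the hood this step is built on Salesforce’s describe calls. Salesforce2Sql itself is an Electron app, but conceptually the calls look something like this sketch using the simple_salesforce Python library (the credentials and object names are placeholders):

from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

# Global describe: one call lists every object in the org.
object_names = [obj["name"] for obj in sf.describe()["sobjects"]]

# Object describe: one call per selected object returns every field and its type,
# which is what the proposed schema (and eventually each SQL table) is built from.
for name in ["Account", "Contact"]:
    fields = getattr(sf, name).describe()["fields"]
    print(name, [(f["name"], f["type"], f.get("length")) for f in fields[:5]])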

Step 3: Generate tables

This is the last step. Click “Create Tables” and Salesforce2Sql will ask you for your database credentials. Once you click okay on this final screen the tool will attempt to create all those tables for you. Again this will take a couple minutes if you have a large schema.  Once the process is complete you can also save the SQL statements for editing and/or later re-use.

That’s it, you now have a database with a schema that matches your org’s structure.

Preferences

There are a few preferences you might want to experiment with (although I tried to pick smart defaults) when building mirrors. For me the right choices depend on my use case.

Preference screen from the application with sections for picklist settings, index settings, other defaults, and theme.

Picklists

Salesforce picklists are a bit of a special beast. The obvious choice is to make a picklist into a SQL enum to support validation of data. But not all picklists are restricted in Salesforce, and they aren’t always required. By default the tool will use enum for restricted picklists, varchar for unrestricted picklists, and add blank values to all picklists (since it can’t easily determine whether a given picklist is required across all page layouts).

If the picklist values in your org are pretty much set, the default settings make a lot of sense. If the picklist values in your org are likely to change you might want to make them all into regular varchar columns.

Auto Indexing

The next important section contains the index settings. While it is possible to over-index a database, I think the three sets of default indexes are pretty good guesses: Id columns, external Ids, and picklists. The one you are most likely not to care about is the enums, and therefore the one you might consider disabling if you aren’t processing on those fields at all. As of version 0.7.0, Id columns are case-sensitive even in MySQL by default, since Salesforce Ids are case-sensitive.

Additional Settings

The other defaults section lets you pick a few other system behaviors. The most common to fuss with are the first and last in the box.

By default it expects Lookup fields to be 18-character Salesforce Ids – because that’s what they will be in Salesforce. But during a data migration some people like to put legacy Ids into these fields (I recommend a proper legacy Id field marked as an external Id), so I give the option to use 255 characters instead of 18.

Salesforce also has two categories of fields that are commonly ignored in a migration and that you may wish to keep out of your database just for ease of use: the audit fields (createdBy and the like) and read-only fields (like formulas). The final two checkboxes in that other defaults section let you keep those fields out of your clone schema.

The middle two fields control the behavior of field defaults – generally I like these two settings as is in just about every use case, but you may feel differently.

Salesforce2Sql uses Bootswatch themes for design elements. The preference pane also lets you pick a different look-and-feel from their theme list.

Final Notes

There are a couple other details worth knowing.

First, if the process runs into a problem where the SQL engine complains about row size limits, Salesforce2Sql will automatically switch all varchar fields to TEXT fields in an attempt to reduce the row size. That will override all preference settings for those fields.

Second, Knex.js – which Salesforce2Sql uses to handle the actual SQL writing – adds indexes as a table alter even when they could be part of the create statement. This makes the process a bit slower, and it means that errors during index creation may show up in the interface while still leaving you with a pretty good schema clone.

Finally, yes there is lots of room for improvement. I work on this project when I can, or when I need a bug fixed for my own work. I am excited to get suggestions, ideas, feedback, documentation edits, and code submissions.

Salesforce Developer Podcast Episode 119

This week’s Salesforce Developer Podcast featured an interview I did with the host, Josh Birk, at the end of last year. As much as I still don’t like the sound of my voice on recordings, it was a fun interview and I am really excited to see it come out.

We talk about Snowfakery, Salesforce Open Source Commons, the evolution of PHP, Drupal, my career in general, and even a bit about spinning. I’d love to hear what you think.

Supportive Open Source Projects

Data Generation Toolkit Logo features Loinheart Astro with bits falling on him above the project label.

For a little over two years I’ve had the privilege of helping lead the Data Generation Toolkit Project for Salesforce. In general, Open Source projects are not known for their inclusive and supportive communities. I believe it is fair to say our project demonstrates that building a supportive community can yield great results.

Started by a question I wrote on a piece of paper in 2019 and posted to a wall, the project grows more every time I look around. That first meeting inspired the creation of Snowfakery, and the recent training for Snowfakery attracted more than 300 registrants. Contributors created documentation and presentations to help Salesforce admins learn to seed sandboxes. We are building a recipe library to help people starting out on Snowfakery. We launched two faker data providers. This year we are reviewing and documenting other tools to generate and move Salesforce data.

More importantly we’ve built a project that is useful to the community, and supports new contributors to open source projects.

We did not get here accidentally. The project leadership wants to support the community members as much as create new tools. From the beginning we chose to encourage people unfamiliar with open source projects, contributions, and technologies.

Leading a Supportive Open Source Project

Being a supportive project starts with the leaders.

The project currently has seven identified leaders: Alisa Edwards, Allison Letts, Jung Mun, Paul Prescod, Samantha Shain, Cassie Supilowski, and myself. For those who like diversity statistics: that can be seen as 5 women, 2 men, 3 countries of residence, 3 countries of origin (not the same 3), 3 developers, 4 Salesforce admins, and at least 3 racial identities. No sub-group perfectly reflects its community, but that’s not a bad start in my opinion.

We have agreed that some of us lead specific sub-projects, while others tend to the health of the community. Everyone has a role.

When we started out with just 4 leaders, either Paul (as creator and maintainer of Snowfakery) or I (as the person who started the project) could have dominated the project by claiming founder status. Certainly men leading other open source projects have used that status to control the project direction. But Salesforce Open Source Commons projects are designed to discourage that behavior, and Paul and I embraced that design. Cori O’Brien created a wonderful space for our project to grow within.

Any open source project should focus on supporting its users. Our goal as a project is to support the Salesforce community – particularly nonprofit and education users. To do that we need the insights that only come from being open to outside ideas.

Our project’s leadership also established a pattern of self-review and reflection. Each year we gather to discuss whether the leadership group is the right size, and whether anyone needs to step down. That creates a space for us to routinely reflect on what we’re doing and whether we’re doing it well. The invitation to leave frees people from responsibilities to the project without frustration or burnout.

Recipes for Open Source Projects

Beyond our leadership team structure, we are also intentional about how we encourage all contributors.

The project as a whole has space for all kinds of contributions. If someone wants to write documentation we will support them. When someone wants to learn to write recipes, we will work in pairs to get through the first one. People who want to write Python code for our faker providers get the chance to do that too.

We try to be kind to all contributors. We thank everyone when they open pull-requests or issues. Even I get thanked when I open a PR. Even if someone needs to make significant changes to a contribution before we commit it, we make sure to praise their effort.

These aren’t shit sandwiches. We genuinely appreciate all contributions and divorce that from any corrections that we request. As a long time open source contributor I am surprised at how much I appreciate the messages.

Maintaining Supportive Discussion Channels

Most open source maintainers know we have to maintain good ways for contributors to ask questions. Over time projects have used a wide variety of tools: IRC, News Groups, Email, issue trackers, and now Slack. In most projects those are the spaces that tend to become ugly. You see RTFM-style answers, personal criticisms, identity-based attacks, and other ugliness. The Slack channels for our project are a key piece of how we engage with each other, and support new contributors. We work hard to make sure people get timely answers, clear directions, and steady encouragement.

Our work falls within a community where our reputations matter, and so there is a level of decorum absent in some open source projects. But no group is perfect, and we will address issues when they arise. The Open Source Commons’ DEI Framework project will hopefully help us continue to deepen our understanding of the community. The kinds of attacks open source community spaces frequently allow in the name of good code are simply unacceptable.

Come Join Us

The next community sprint for our project is coming up May 4th & 5th.

Disable a Salesforce Trigger in Production with VS Code

I recently had to disable a Salesforce Trigger in a client’s production environment. Having discovered the need at the last minute for a project, I had to react and make the change quickly. Since the official documentation is a bit lacking, I decided it was time for a blog post with better directions.

The Salesforce CLI directions in the article above make a few annoying assumptions:

  1. That you’re happy to use MDAPI not SFDX.
  2. That all your tests pass in production (which should be true, but let’s be real, it’s not always).
  3. You enjoy working out CLI commands in a rush.

What you need

Disabling a Trigger Using SFDX in VS Code.

Connect VS Code to your target org. The easiest solution here is to just go to the command palette and authorize an org.

VS Code command palette with SFDX commands shown.

Pull down the trigger from your org. I find it easiest to go to the metadata browser, find the trigger under Apex Triggers and click the download icon on the right.

Screenshot of VS Code org browser showing an Apex Trigger to download.

Update the trigger’s XML file to change the status. All project code files in SFDX are accompanied by an XML metadata file. Open the trigger’s file and change the status to “Inactive”.

Screenshot of the VS Code file browser with the trigger's metadata file highlighted.
Navigate to the file in the file browser on the left in VS Code
Screenshot of code snippet emphasizing the Status XML tag and Inactive value.
In the editor change the status from Active to Inactive, and save the file.

Generate a manifest file for your trigger. Right click on the trigger code file in VS Code’s file explorer and select Generate Manifest, then give the file a useful name (in this case I used DisableTrigger).

A segment of the contextual menu to show Generate Manifest command's location.
Depending on your setup the context menu might be quite long, SFDX additions are generally near the end of the list.

Deploy the change. If all your org’s tests currently pass you can just right click on the new manifest file and instruct VS Code to deploy the source in the manifest to the org.

Another contextual menu snippet, now with Deploy Source In Manifest circled.

What if tests fail on trigger deploy?

While all our tests should always pass all the time, Salesforce admins frequently find that’s not actually true in practice. Heck, this trigger could be part of that problem in your org. But to deploy a code change we have to run some tests. Since we are disabling this trigger, the trigger code doesn’t need to be covered by the tests we run; we just need a working test with enough coverage to get the job done. So go find a test class and then we can deploy.

VS Code doesn’t currently have a setting to set the testing mode on a deployment, so you’ll need to do this last step in VS Code’s terminal (available by hitting control-` if you don’t have it open already). In the terminal run the following command (replacing [good_tests] with the name of your test class).

sfdx force:source:deploy -x manifest/DisableTrigger.xml -l RunSpecifiedTests -r [good_tests]

It should deploy pretty quickly, but you can check the status by going to Setup in your org and checking the Deployment Status.

Announcing Two New Snowfakery Faker Providers

For the last two years I’ve been fortunate to serve as a leader of the Salesforce Open Source Commons Data Generation Toolkit project. That project has produced and inspired a variety of efforts, including Snowfakery and a collection of starter recipes.

This week, along with my colleague Allison Letts, Salesforce’s Paul Prescod (the creator of Snowfakery), and our fellow project contributor Jung Mun, I helped create two new faker providers for Snowfakery:

  • faker-nonprofit, which generates nonprofit organization names.
  • faker-edu, which generates college names, departments, and faculty titles.

What These Faker Providers do

The use of Snowfakery is growing. The more we use it, the more we want the data tailored to specific projects. And the more we find places where the Faker project’s providers do not have quite what we want. In particular the project does not (well, did not) have providers for nonprofit- and education-specific data.

The Nonprofit provider currently just provides organization names:

$ snowfakery snowfakery_nonprofit_example.recipe.yml --target-count 10 nonprofit
nonprofit(id=1, nonprofit_name=Eastern Animal Asscociation)
nonprofit(id=2, nonprofit_name=1st Animal Foundation)
nonprofit(id=3, nonprofit_name=Upper Peace Alliance)
nonprofit(id=4, nonprofit_name=Southern Peace Home)
nonprofit(id=5, nonprofit_name=Unity Home)
nonprofit(id=6, nonprofit_name=Western Peace Home)
nonprofit(id=7, nonprofit_name=Upper History Foundation)
nonprofit(id=8, nonprofit_name=Upper Friends Committee)
nonprofit(id=9, nonprofit_name=Eastern Pets Center)
nonprofit(id=10, nonprofit_name=Northern Animal Foundation)

For the Education provider we have a bit more. You can generate college names, departments, and faculty titles.

$ snowfakery snowfakery_edu_example.recipe.yml
Account(id=1, Name=South Carolina University)
Contact(id=1, FirstName=Roberto, LastName=Stanton, Title=Associate Professor of Microbiology & Immunology)
Account(id=2, Name=French)

The providers can of course run as standard Faker community providers. Once you are set up with Faker, just add the new providers with pip:

pip install faker-edu faker-nonprofit

Then you can use the libraries in your code:

from faker import Faker
import faker_edu

fake = Faker()
fake.add_provider(faker_edu.Provider)

for _ in range(10):
    print(fake.institution_name())

And the nonprofit provider works the same way:

from faker import Faker
import faker_nonprofit

fake = Faker()
fake.add_provider(faker_nonprofit.Provider)

for _ in range(10):
    print(fake.nonprofit_name())

How we got here

A few months ago I posted on how to extend Faker to create nonprofit organization names. Allison took that as a starting point to create a similar project to generate names of colleges, departments, and academic titles (and a greatly improved phone number generator, but that’s not included since it’s more general). Both of these projects were good proofs of concept but were rough around the edges. So this week, with Paul’s guidance and Jung’s input, we contributed the more polished versions to the community.

During a virtual working session on Wednesday we restructured the projects, cleaned up code, added sample recipes for Snowfakery, and published them to PyPI. By publishing these as Faker providers on PyPI, and not as Snowfakery plugins, we made them available to a wider audience. And by having them owned by a larger open source community, we expect them to enjoy long-term support.

Both are still just getting started. For example the nonprofit one still just generates organization names, but would benefit from job titles, program names, and more. The EDU provider right now just handles colleges, but is expected to generate other education related data in the future. We also have a plan for a third provider to help improve the diversity of the names generated by Faker to make our fake data more representative of real communities.

The Data Generation Toolkit project has been a great example of what happens when you bring people from a wide variety of backgrounds together to solve technical problems. Like the larger Data Generation Toolkit team Paul, Allison, Jung, and I all have different backgrounds, skills, and experiences. By coming together we are able to help each other find better solutions than any one of us would have found on our own.

It’s been an exciting week, and I’m looking forward to more to come.