Salesforce Developer Podcast Episode 119

This week’s Salesforce Developer Podcast featured an interview I did with the host, Josh Birk, at the end of last year. As much as I still don’t like the sound of my voice on recordings, it was a fun interview and I am really excited to see it come out.

We talk about Snowfakery, Salesforce Open Source Commons, the evolution of PHP, Drupal, my career in general, and even a bit about spinning. I’d love to hear what you think.

Disable a Salesforce Trigger in Production with VS Code

I recently had to disable a Salesforce Trigger in a client’s production environment. Having discovered the need at the last minute for a project, I had to react and make the change quickly. Since the official documentation is a bit lacking, I decided it was time for a blog post with better directions.

The Salesforce CLI directions in the article above make a few annoying assumptions:

  1. That you’re happy to use MDAPI, not SFDX.
  2. That all your tests pass in production (which should be true, but let’s be real, that’s not always the case).
  3. That you enjoy working out CLI commands in a rush.

What you need

  • VS Code with the Salesforce extensions (and the Salesforce CLI they rely on)
  • Credentials for the production org that allow you to deploy code
  • The name of the trigger you need to disable
  • The name of at least one test class that passes in that org (more on that below)

Disabling a Trigger Using SFDX in VS Code.

Connect VS Code to your target org. The easiest solution is to open the command palette and authorize an org.

VS Code command palette with SFDX commands shown.

Pull down the trigger from your org. I find it easiest to go to the metadata browser, find the trigger under Apex Triggers and click the download icon on the right.

Screenshot of VS Code org browser showing an Apex Trigger to download.

Update the trigger’s XML file to change the status. All code files in an SFDX project are accompanied by an XML metadata file. Open the trigger’s file and change the status to “Inactive”.

Screenshot of the VS Code file browser with the trigger's metadata file highlighted.
Navigate to the file in the file browser on the left in VS Code
Screenshot of code snippet emphasizing the Status XML tag and Inactive value.
In the editor change the status from Active to Inactive, and save the file.
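
The metadata file itself is tiny. After the change it should look roughly like this (your trigger’s API version will differ):

<?xml version="1.0" encoding="UTF-8"?>
<ApexTrigger xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>50.0</apiVersion>
    <status>Inactive</status>
</ApexTrigger>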

Generate a Manifest file for your trigger. Right click on the trigger code file in VS Code’s file explorer, select Generate Manifest, and give the file a useful name (in this case I used DisableTrigger).

A segment of the contextual menu to show Generate Manifest command's location.
Depending on your setup the context menu might be quite long; the SFDX additions are generally near the end of the list.
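
If you’re curious, the generated manifest is a small package.xml file along these lines, with your trigger’s name in place of the placeholder MyTriggerName and your project’s API version:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>MyTriggerName</members>
        <name>ApexTrigger</name>
    </types>
    <version>50.0</version>
</Package>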

Deploy the change. If all your org’s tests currently pass you can just right click on the new manifest file and instruct VS Code to deploy the source in the manifest to the org.

Another contextual menu snippet, now with Deploy Source In Manifest circled.

What if tests fail on trigger deploy?

While all our tests should always pass all the time, Salesforce admins frequently find that’s not actually true in practice. Heck, this trigger could be part of that problem in your org. But to deploy a code change we have to run some tests. Since we are disabling this trigger, its code doesn’t need to be covered by the tests; we just need to run a working test class with enough coverage to get the job done. So go find a passing test class and then we can deploy.

VS Code doesn’t currently have a setting to set the testing mode on a deployment, so you’ll need to do this last step in VS Code’s terminal (available by hitting control-` if you don’t have it open already). In the terminal run the following command (replacing [good_tests] with the name of your test class).

sfdx force:source:deploy -x manifest/DisableTrigger.xml -l RunSpecifiedTests -r [good_tests]

It should deploy pretty quickly, but you can check the status by going to Setup in your org and checking Deployment Status.

Project Estimates Tool 2.0

A few years ago I wrote a piece about project time estimation and created an estimating tool. My goal was to get project managers to accept that estimates are inherently a guess, not a promise. The tool I created took a series of project tasks, each with an estimated time range and a level of estimator confidence. It then ran a Monte Carlo simulation over those tasks and generated a histogram of possible outcomes.

Five years later I still use it for project estimates. But I have grown tired of its interface weaknesses and needed to add cost estimation to keep it useful. So I recently heavily revised the tool and posted an updated version (the old version is still available here).

The interface is still very utilitarian (pull requests welcome), but this version makes it easier to adjust the tasks. More importantly, it now also estimates costs, not just time.

Histogram of time estimations
XY Scatter plot of cost projections, in a nice bell curve shape

The new version is faster than the previous one, and it adjusts the graph type based on the range of possible outcomes.

Each task now includes inputs for min and max time, confidence, and hourly cost of that task. So if different people at different bill rates are part of the project it can still give you useful numbers.

The histograms broke down when faced with too many bars. So I settled on an XY scatter approach to help visualize the broader range that the cost estimator made normal.

Please give it a try, I’m always open to feedback, suggestions, and pull requests.

Cost Estimates

For this version I added the ability to include a unit cost for each task. The first tool worked just fine when you were estimating the tasks for one person who had one billing rate (or where hourly costs aren’t important). In practice teams need to be able to estimate across all work streams, and different roles will have different billing rates.

This version includes a rate for each task and a graph of projected project costs.

Why I Created A Project Estimator

I wrote the original when I was struggling with project managers who would take any estimate you gave them as a range, pick a number, and promise the client (and themselves) we would hit it. To them an estimate was a promise – one that had to be kept. That led me to badly overestimate projects so that the lowest end of my range would be a safe number – but that’s just a different form of bad estimation.

Histogram of time estimates.
This graph from the original version helped convince PMs that estimates weren’t promises.

I had a good amount of experience providing estimates, and had read a lot on the topic. I knew there were teams that did better and I wanted to help our team improve.

The original tool was loosely inspired by one Joel Spolsky described ten years earlier. His process includes several important ideas that hold up regardless of your project methodology, but it was his idea of using Monte Carlo simulations that had stuck with me since that article was new. After failing to find a tool that included it, I wrote my own.

Are the Project Estimates Any Good?

Fundamentally the simulations are only as good as the estimates provided. For every project where I have been able to compare my simulated estimates to the final hours, the actual work fell within one standard deviation of the median.

The confidence measure helps more than I expected. Originally, I added the measure of confidence because I needed something to determine how often the simulator should assume people are just plain wrong – and by how much. While I could have hard-coded a solution, I did not know how to pick good values. I knew that my confidence varies by the task. I also knew the less confident I am, the more likely I am to be wildly off. So I decided to make confidence an estimator-provided variable, and to use it to pick the size of overruns.

For every 10% you reduce the confidence, the simulation will allow the upper bound of the estimate to increase by the size of the entered upper bound. On a task you estimate at 7-10 hours, a 90% confident estimate will allow overruns up to 20 hours (just in the 10% of runs that aren’t in the 7-10 range), and 30 hours for an 80% estimate.
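
If you want a feel for the mechanics, here is a rough Python sketch of the idea. It is not the tool’s actual code, just the shape of how confidence can drive the size of allowed overruns:

import random

def simulate_task(min_hours, max_hours, confidence):
    # Most runs land inside the estimator's stated range.
    if random.random() < confidence:
        return random.uniform(min_hours, max_hours)
    # Otherwise allow an overrun: the ceiling grows by the entered upper
    # bound for every 10% of missing confidence (90% -> up to 2x the
    # upper bound, 80% -> 3x, and so on).
    ceiling = max_hours * (1 + (1 - confidence) / 0.1)
    return random.uniform(max_hours, ceiling)

def simulate_project(tasks, runs=10000):
    # Each run totals one simulated outcome for every task in the project.
    return [sum(simulate_task(*task) for task in tasks) for _ in range(runs)]

# A task estimated at 7-10 hours with 90% confidence can run as high as 20 hours.
outcomes = simulate_project([(7, 10, 0.9)])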

That extra box also immediately helped me feel comfortable with my estimates. Knowing that the simulator would offset optimism bias for me, I could stop trying to do that myself. My estimates can use tighter ranges, trusting the software to offset the expected bias.

A Value of the Graphs in Project Estimates

The graph has turned out to be the most important feature. Initially I included it because I wanted to play with D3 and have something more impressive than numbers to show. What I discovered was a reminder of the importance of data visualizations – even simple ones.

As I said before I created this tool when working with project managers who simplified all estimate ranges to a single number and held everyone to that number. The first time I presented numbers from the simulator those project managers picked the median and complained I made it too hard. The median was better than what we had before, but not enough to treat as a promise.

When I started presenting the graph those same people immediately started to change how they talked about the project. By visualizing the impact of uncertainty over several tasks they could see that the project might run far over my estimate – or far under. The more uncertainty, the longer the tail on the graph.

Suddenly they were comfortable talking about risks from overruns, finding ways to help clients understand the possible risks, and being understanding when a task proved harder than expected.

The graphs tell the story and empower the team to have an honest and productive conversation about project estimates.

Estimate your own project timeline.

Announcing Two New Snowfakery Faker Providers

For the last two years I’ve been fortunate to serve as a leader of the Salesforce Open Source Commons Data Generation Toolkit project. That project has produced and inspired a variety of efforts, including Snowfakery and a collection of starter recipes.

This week, along with my colleague Allison Letts, Salesforce’s Paul Prescod (the creator of Snowfakery), and our fellow project contributor Jung Mun, I helped create two new Faker providers for Snowfakery:

  • faker-edu
  • faker-nonprofit

What These Faker Providers do

The use of Snowfakery is growing. The more we use it, the more we want the data tailored to specific projects, and the more we find places where the Faker project’s providers do not have quite what we want. In particular, the project does not (well, did not) have providers for nonprofit- and education-specific data.

The Nonprofit provider currently just provides organization names:

$ snowfakery snowfakery_nonprofit_example.recipe.yml --target-count 10 nonprofit
nonprofit(id=1, nonprofit_name=Eastern Animal Asscociation)
nonprofit(id=2, nonprofit_name=1st Animal Foundation)
nonprofit(id=3, nonprofit_name=Upper Peace Alliance)
nonprofit(id=4, nonprofit_name=Southern Peace Home)
nonprofit(id=5, nonprofit_name=Unity Home)
nonprofit(id=6, nonprofit_name=Western Peace Home)
nonprofit(id=7, nonprofit_name=Upper History Foundation)
nonprofit(id=8, nonprofit_name=Upper Friends Committee)
nonprofit(id=9, nonprofit_name=Eastern Pets Center)
nonprofit(id=10, nonprofit_name=Northern Animal Foundation)
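
A recipe along these lines will produce output like that. This is a rough sketch based on Snowfakery’s support for adding Faker providers through the plugin directive; the example recipe that ships with the provider may differ slightly:

# snowfakery_nonprofit_example.recipe.yml (sketch)
- plugin: faker_nonprofit.Provider
- object: nonprofit
  fields:
    nonprofit_name:
      fake: nonprofit_name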

For the Education provider we have a bit more. You can generate college names, departments, and faculty titles.

$ snowfakery snowfakery_edu_example.recipe.yml
Account(id=1, Name=South Carolina University)
Contact(id=1, FirstName=Roberto, LastName=Stanton, Title=Associate Professor of Microbiology & Immunology)
Account(id=2, Name=French)

The providers can, of course, run as standard Faker community providers. Once you are set up with Faker, just add the new providers with pip:

pip install faker-edu faker-nonprofit

Then you can use the libraries in your code:

# Education provider
from faker import Faker
import faker_edu

fake = Faker()
fake.add_provider(faker_edu.Provider)

for _ in range(10):
    print(fake.institution_name())

# Nonprofit provider
from faker import Faker
import faker_nonprofit

fake = Faker()
fake.add_provider(faker_nonprofit.Provider)

for _ in range(10):
    print(fake.nonprofit_name())

How we got here

A few months ago I posted on how to extend Faker to create nonprofit organization names. Allison took that as a starting point to create a similar project to generate names of colleges, departments, and academic titles (and a greatly improved phone number generator, but that’s not included since it’s more general). Both of these projects were good proofs of concept but were rough around the edges. So this week, with Paul’s guidance and Jung’s input, we contributed more polished versions to the community.

During a virtual working session on Wednesday we restructured the projects, cleaned up code, added sample recipes for Snowfakery, and published them to PyPI. By publishing these as Faker providers on PyPI, and not as Snowfakery plugins, they are available to a wider audience. By having them owned by a larger open source community, we expect them to enjoy long-term support.

Both are still just getting started. For example the nonprofit one still just generates organization names, but would benefit from job titles, program names, and more. The EDU provider right now just handles colleges, but is expected to generate other education related data in the future. We also have a plan for a third provider to help improve the diversity of the names generated by Faker to make our fake data more representative of real communities.

The Data Generation Toolkit project has been a great example of what happens when you bring people from a wide variety of backgrounds together to solve technical problems. Like the larger Data Generation Toolkit team Paul, Allison, Jung, and I all have different backgrounds, skills, and experiences. By coming together we are able to help each other find better solutions than any one of us would have found on our own.

It’s been an exciting week, and I’m looking forward to more to come.

Build Cycles of Respect

In consulting we should expect developers to do all the same basic things as everyone else on the team. That includes supporting a culture of mutual respect with other team members.

We should explain our work clearly, be effective in meetings, and take feedback well. In consulting especially, we should engage with clients professionally, effectively, and happily. We should not be isolated off on our own teams working from specs we didn’t write, for clients we haven’t met. Basically, we need to be good team members, and we can be expected to be that way. Developers, like most everyone else, do their best when surrounded by people who treat them with respect.

One of the things I really like about my current job is that we have a respectful culture. It’s not that we don’t have disagreements, at times energetic ones, but we work through those challenges as a team of experts with differing perspectives. We are all expected to be experts who can collaborate with other experts, and to explain our work to clients.

Cycles of Disrespect

But that hasn’t been the consistent norm since I shifted to consulting. I have been on teams, and seen teams, that routinely insult the basic professional behavior of developers and other technologists. In those settings leaders made it clear that the standard of behavior was lower for developers than for other team members. Most people I know end up adjusting their behavior to meet the standards set for them. Set a low standard of behavior and you get bad behavior.

I want to be clear that I have no intention of justifying the harassment, bullying, and assault that is too common among developers (and more generally by men in the workplace) – that is not the challenge I’m addressing here (although I’ve addressed those issues before). There are behavioral problems even in places that don’t permit or cover up that kind of behavior. Developers and other IT specialists are often still allowed to act as grumpy troublemakers or aloof, superior jerks. We are told our brains work differently, somehow allowing us and our colleagues to ignore healthy social conventions. And by allowing it, stating that we expect it, and taking no actions to correct it, we encourage that behavior.

Signs of Trouble

On more than one occasion, at more than one employer, I’ve been told things like:

  • “You’re talking techie now, I’m not listening anymore.”
  • “Developers hate meetings and never contribute productively.”
  • “You just have a thin skin and act defensive of your work when questioned.”

Often the person will laugh as they say these things, as if we have some kind of shared inside joke about a basic lack of professionalism. There are lots of reasons why people dismiss developers with those kinds of statements, but that doesn’t stop them from being destructive.

When that’s your work environment, it’s tempting for developers to degrade to match expectations. Who takes feedback well when told upfront they have a thin skin? Who likes being in meetings with project managers who explicitly state their contributions aren’t welcome? What expert thinks it’s funny when you stop them and say you are not listening? 

This builds a cycle of disrespect between developers and their colleagues, dividing teams and impeding progress. When people are treated with disrespect while still performing critical tasks, I expect to see them act with disrespect in exchange. That is not okay, but it is very human.

Breaking the Cycle

Respect is a two-way street, which makes it important to break the cycle on both sides. Everyone on a project needs to respect the roles we each have and the contributions we each bring. Those may be, and usually should be, vastly different across the team – otherwise there is no point to having a team at all.

As team members who are often in a position of some power – in part because we are hard to replace in the current job market – developers should take it upon ourselves to be the first to build respectful practices with colleagues.

We can, nicely, point out when we hear statements that feel isolating and rude. We need to try to understand why someone acts intimidated by the technical detail and find ways to help them along. Developers should make a point of soliciting feedback from team members and processing ideas with them. Everyone ought to build personal relationships across our teams to help improve everyone’s ability to work through hard problems (there is a bunch of research on this question).

Creating a Cycle of Respect

To build an ongoing cycle of respect takes more than just breaking the old patterns. Developers are often in a position to lead by example, so that is a great first step. Asking friends and colleagues to make an effort with you will help build a reinforcing cycle in a small group, which makes a great second step.

Cultural change across an organization is hard, but it is usually possible on a project team. So it may help to start with just one team and make a real effort to improve communication and collaboration within that team. From there, build out and try to draw others into your new patterns.

When teams work together well, understanding the project as a whole, they do better work. That puts everyone in a position to add ideas, raise concerns, validate suggestions, and adjust to changes. When teams allow any member to work in isolation they lose that shared vision and risk project failure.

Being Nice vs Being Kind

A few years ago during a job interview at a nonprofit organization, the Executive Director asked me about the culture of the job I was leaving. He had noticed that I was stressing the importance of good feedback during my interviews and wanted to know why it was so important to me. Indeed, part of why I was job hunting was that my employer at the time rejected the notion of mistakes – everyone’s work was always good. I told him that, and that I do my best work when I get honest feedback. He responded by drawing a distinction between being nice and being kind. He was sure everyone I worked with was quite nice, but they didn’t sound very kind to him.

Being nice when giving feedback just means saying positive things.
Being kind when giving feedback requires helping another person understand how to improve.

It was one of those moments in life where someone offers you words to explain a struggle you’re having and suddenly things make more sense. I knew I was unhappy in the job, but his insight brought into focus the reasons I wasn’t fitting in. Being told my work was great did not mesh with my understanding that it’s important to admit we all make mistakes and find ways to avoid repeating them. My desire to examine my own work caused conflict, to say nothing of my efforts to introduce peer feedback.

The CEO at that job once told me that if people heard their work wasn’t perfect, they wouldn’t want to come in the next day. He didn’t seem to realize that getting dishonest feedback causes the same thing, and that seeing no chance for improvement was even worse. There was no room to revisit problems that led to near failures and ongoing technical debt for clients. Instead they convinced clients the technical debt was a feature, and used it to sell support contracts.

All the feedback my colleagues and I got was nice to hear (“This is great.”, “The client will be thrilled”). But unrelenting positive feedback didn’t help us improve as individuals or as a team. Our leadership gave team members a false sense of success and held the company back from improving.

The thing about being kind is that sometimes you have to tell people things they don’t want to hear. That can be hard, and to do it well requires you to care about the other person. If you want people to like everything you say, it will be hard to be kind.

Giving nice feedback involves offering a string of hollow platitudes that leaves people with a false sense of achievement. That makes people vulnerable to failure when they move forward without understanding their weak foundation. Giving kind feedback requires providing an honest assessment of a person’s work and helping them recognize errors. Kindness requires listening to people when they push back, and finding ways to help them excel as they move forward.

When all was said and done the opportunity wasn’t the right fit for me. Turning down their job offer was probably the hardest career decision I ever made. But they and I left good enough impressions with one another that I recently started serving as a volunteer advisor to the program I would have been running. That service has allowed me to reaffirm my sense that it was the right decision, for me and for the organization, not to take the role. And it has given me a chance to offer them some kind feedback to – I hope – improve their operations. I try to be as kind to them as an organization as they were to me as an applicant.

Why and How to Write Good How-To Articles

Part of contributing to any open source project, or even really being a contributing member of any community, is sharing what you know. That can come in many forms. While many projects over-emphasize code, and most of us understand the value of conference talks, good how-to articles are some of the most critical contributions for any software platform. There isn’t much point to a tool if people cannot figure out how to use it.

Why do I write how-to articles

I’ve contributed code to Drupal, some of it even good and useful to others. But usually when I hear that someone noticed something I created, it’s a blog post about how to solve a problem.

When I struggle to find the answer to a question, I treat it as a candidate for a how-to post. I am not so creative that I am often solving a problem no one else has solved, or will want to solve, for another project. And I am good enough at what I do to know that if I struggled to find an answer, it was probably harder to find than it could have been.

That helps me find topics for articles that are helpful to the community and benefit me.

How-to articles help others in the community use tools better

The goal of a good tutorial is to help accelerate another person’s learning process. The solution does not have to be perfect, and I know most people will have to adapt the answer to their project. I write them when I have struggled to find a complete answer in one place, hoping to provide one place that gives the reader enough to succeed.

Usually I combine practical experience earned after digging through several references at various levels of technical detail – including things like other people’s blog posts, API documentation, and even slogging through other people’s code. I then write one, hopefully coherent, reference to save others that digging and extra reading.

The less time people spend researching how to do something, the more time they have to do interesting work. Better yet, it can mean more time using the tools for their actual purpose.

How-to articles serve as documentation for me, colleagues, and even clients

The best articles serve as high level documentation I can refer back to later to help me repeat a solution instead of recreating it from scratch. When I first wrote how-to articles I was solidifying my own learning, and leaving a trail for later.

They also came to serve as documentation for colleagues. When I don’t have time to sit with them to talk through a solution, or know the person prefers reading, I can provide the link to get them off and running. Colleagues have given me feedback about clarity, typos, and errors to help me improve the writing.

I have even sent posts to clients to help explain how some part of their solution was, or will be, implemented. That additional documentation of their project can help them extend and maintain their own projects.

How-To articles give me practice explaining things

One of the reasons I started blogging in the first place was to keep my writing skills sharp. How-to articles in particular tend to be good at helping me refine my process in specific areas. The mere act of writing them gives me practice at explaining technology, and that practice pays off in trainings and future articles. If you compare my work on Drupal, Salesforce, and Electron you can see the clarity improve with experience.

How-To articles give me work samples to share

When I’ve been in job applicant mode those articles give me material to share with prospective employers. In addition to Github and Drupal.org, the how-to articles can help a hiring manager understand how I work. They show how I explain things to others, how I engage with the community, and serve as samples of my writing.

How-To articles help me control my public reputation

I maintain a blog, in part, to help make sure that I have control over my public reputation. To do that I need inbound links that help with page rank and other similar basic SEO games.

From traffic statistics I know the most popular pages on this site are technical how-to articles. From personal anecdotes I know a few of my articles have become canonical descriptions of how to solve the problems.

When I first started my current job we had a client ask if I could implement a specific feature that he’d read about in a post on Planet Drupal. It turned out to be mine. Not only was I happy to agree to his request, it helped him trust our advice. My new colleagues better understood what this Drupal guy brought to the Salesforce team. Besides, let’s be honest, it’s fun when people cite your own work back at you.

Writing your own

You don’t have to maintain a whole blog to write useful how-to articles. Drupal, like most large open source projects, maintains public wiki-style documentation. GitHub Pages allows anyone to freely publish simple articles, and there are many examples of single-page articles out there. And of course there is no shortage of dedicated how-to sites that will also accept content.

The actual writing process isn’t that hard, but often people leave out steps, so I’ll share my process. This is similar to my general advice for writing instructions.

Pick your audience

It’ll be used more widely than whoever you think of, but have an audience in mind. Use that to help target a skill set. I often like to think of myself before I started whatever project inspired the article. The higher your skill set the more you should adjust down, but it’s hard to adjust too far, so be careful if aiming for people with far less experience than you have – make sure you have a reviewer with less experience check your work. Me − 1 is fine; Me − 5 is really hard to do well.

Start from the beginning and go carefully step by step

Start with no code, no setup, nothing. Then walk forward through the project one step at a time, writing out each step. If you gloss over a detail because you assume your audience knows about it, add reference links. You can have a copy of a reference project open, but do not use it directly; it’s there to prevent you from having to re-research everything.

List your assumptions as you go

Anything that you need to have in place but don’t want to describe (like installing Drupal into a local environment, creating a basic module, installing Node, etc.) state as an explicit assumption so your reader starts in the same place as you do. Provide links for any assumptions which are likely hard for your expected audience to complete. This is your first checkpoint – if there are no good references to share, start from where that article you cannot find should start (or consider writing that article too).

Provide detailed examples

Insert code samples, screenshots, or short videos as you progress. Depending on what you are doing in your article the exact details of what works best will vary. Copy and paste as little reference code as possible. This helps you avoid accidentally copying details that may reveal specifics of a particular project.

If you look at mine you’ll see a lot of places where I include comments in sample code that say things like “Do useful stuff”. That is usually a hint that whoever inspired the article had interesting, and perhaps proprietary, ideas in that section of code (or at least I worried they would think it was interesting). I also try to add quick little asides in the code samples to help people pay attention.

Test as you go

Make sure your directions work without that reference project you’re not sharing. This both ensures your directions work properly and further insulates you against accidentally sharing information you ought not share.

End with a full example

If you end up with a bunch of code that you’ve introduced piecemeal, provide a complete project repo or gist at the end. You’ll see some of my articles end in all the code being displayed from a gist, and others link to a full repository. Far too many people simply copy and paste code from samples and then either use it blindly or get stuck. Moving it to the end helps get people to at least scan the actual directions along the way.

Give credit where credit is due

If you found partial answers in several places during your initial work, thank those people with links to their articles. Everyone who publishes online likes a little link-love and if the article was helpful to you it may be helpful to others. Give them a slight boost.

Salesforce Lightning Web Components with URL Parameters

A couple weeks ago I needed to create a Salesforce Lightning Web Component (LWC) that pulls values from URL parameters. While the process is very simple, it turns out the vast majority of examples on the web are out of date due to a security update Salesforce made sometime last year – so I spent a frustrating afternoon throwing ideas at the wall until a colleague stumbled into a comment, noting the needed fix, on a blog post from a highly trusted expert whose example no longer worked. So, in the hopes of shortening the search for anyone else trying to get this to work, I’m offering an example that works – at least as of this writing.

To be fair, the official docs are correct, but it is easy to look past an important detail: if you do not put a namespace on your parameter name, the parameter will be deleted.

That change was the security update: before it you could use any value as your parameter name; now you have to have __ (two underscores) in the name. Officially the docs say that on the left side of those underscores you should have the namespace of your package, or a “c” for unpackaged code. As far as I can tell, at least in sandboxes and Trailhead orgs, you can have anything you want as long as there are characters before and after the __ (which kinda makes sense, since package developers need to be able to write and test their JavaScript before they build their package).

So your final URLs will look something like:
https://orgname.my.salesforce.com/lightning/r/Contact/0034x000009Xy5gAAC/view?c__myUrlParameter=12345

Basic LWC

Now, with that main tip out of the way, on to a full example.

My assumption going into this is that you know how to create a very basic Hello World quality LWC. If not, start with the Trailhead Hello World example project.

1) Create a new component to work with. Mine will be very simple to help keep the details clean, but you can fold this into more interesting code bases.

2) Update the component’s meta.xml file to set isExposed to true, and at least a target of lightning__RecordPage (although any target will do if you know how to use it), and configure the target to connect to Contact (although again any settings you know how to use are fine here).

<?xml version="1.0" encoding="UTF-8" ?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
   <apiVersion>50.0</apiVersion>
   <isExposed>true</isExposed>
   <description>Example Lightning Web Component to read URL parameters.</description>
   <targets>
       <target>lightning__RecordPage</target>
   </targets>
   <targetConfigs>
       <targetConfig targets="lightning__RecordPage">
           <objects>
               <!-- This is setup to run on contact but you could use any sObject-->
               <object>Contact</object>
           </objects>
       </targetConfig>
   </targetConfigs>
</LightningComponentBundle>

3) In your JS file, in addition to the main LightningElement, you need to add imports for wire and track, plus CurrentPageReference from the navigation library:

import { LightningElement, wire, track } from "lwc";
import { CurrentPageReference } from "lightning/navigation";

4) Add a tracked value you want to display inside the main class: 

export default class Parameter_reader extends LightningElement { 
  @track displayValue;

5) Next use the wire decorator to connect CurrentPageReference to your own getStateParameters function, so you can get and use the URL parameters:

@wire(CurrentPageReference)
getStateParameters(currentPageReference) {
  if (currentPageReference) {
    const urlValue = currentPageReference.state.c__myUrlParameter;
    if (urlValue) {
      this.displayValue = `URL Value was: ${urlValue}`;
    } else {
      this.displayValue = `URL Value was not set`;
    }
  }
}

From the code sample above you can see that we’re getting the values from currentPageReference’s state child object, and then attaching them to the tracked value we created in step four.

6) Update the HTML file to display your value, ideally leveraging SLDS along the way:

<template>
 <div>
   <lightning-card title="Url Sample" icon-name="custom:custom14">
     <div class="slds-m-around_medium">
       <p>{displayValue}</p>
     </div>
   </lightning-card>
 </div>
</template>

7) Deploy all this code to your org.

8) Go to a contact record and edit the page. Add your new component to the sidebar. Save and activate the page.

9) Return to the record page, the component should appear and say “URL Value was not set”.

10) In the address bar add ?c__myUrlParameter=Hello to the end of the URL and reload the page; the component should now read “URL Value was: Hello”.

A screenshot of the sample component displaying the provided text of "hello".

What about sending the value to Apex?

Now, let’s go one step further and send this parameter over to Apex and post back a response.

1) Create an Apex class with a public static method that uses the AuraEnabled annotation.

public with sharing class valueReflection {
    @AuraEnabled(cacheable=true)
    public static String reflectValue(String value) {
        // Really you should do something useful here.
        return value;
    }
}

In this case we’re starting with a method that just passes back the same string it was handed, but obviously you can do whatever you want here.

BE CAREFUL ABOUT SECURITY!

If you take an ID as your parameter, make sure you think about what happens when someone sends an ID for an object they should not see, an ID for an object of a type other than you expected, and other similar cases. The platform can help you, but security is your job here; take it seriously!

2) Create good tests for your class, and deploy the code.
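
If you need a starting point, a minimal test for the example class above could look something like this (the test class name is just a placeholder):

@isTest
private class valueReflectionTest {
    @isTest
    static void reflectValueReturnsInput() {
        String result = valueReflection.reflectValue('Hello');
        System.assertEquals('Hello', result, 'The value should pass through unchanged.');
    }
}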

3) Import the new function into your JS file:

import reflectValue from "@salesforce/apex/valueReflection.reflectValue";

Update the getStateParameters handler we wrote before to call this function as a JavaScript promise:

  getStateParameters(currentPageReference) {
    if (currentPageReference) {
      const urlValue = currentPageReference.state.c__myUrlParameter;
      if (urlValue) {
        reflectValue({ value: urlValue })
          .then((result) => {
            this.displayValue = `URL Value was: ${result}`;
          })
          .catch((error) => {
            this.displayValue = `Error during processing: ${error}`;
          });
      } else {
        this.displayValue = `URL Value was not set`;
      }
    }
  }

4) That’s it! Deploy your code and reload the page, and your values should pass through to Apex, come back, and get displayed.

The complete SFDX project for this example is up on Github.

SC Dug May 2020: Virtual Backgrounds

This month’s SC DUG meeting featured Will Jackson from Kanopi Studios talking about his virtual background and office.

Before everyone was learning to use Zoom virtual backgrounds, Will had built out a full 3D room for his background, including family pictures and other fun details. He talked about what he built, and he may inspire you to try something more personalized than swaying palm trees and night skies.

If you would like to join us, please check out our upcoming events on MeetUp for meeting times, locations, and remote connection information.

We frequently use these meetings to practice new presentations, try out heavily revised versions, and test out new ideas with a friendly audience. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and we are open to constructive feedback. If you want to see a polished version, check out our group members’ talks at camps and cons.

If you are interested in giving a practice talk, leave me a comment here, contact me through Drupal.org, or find me on Drupal Slack. We’re excited to hear new voices and ideas. We want to support the community, and that means you.

SC DUG April 2020: Remote Work Round Table

This month’s SC DUG was a round table discussion on working remotely during the Covid-19 lock down. We had actually planned this topic before the crisis emerged in full, but found ourselves having to pivot our talking points a fair bit.

The discussion centered on things that people are dealing with, even those of us who work remotely on a regular basis. A few resources were shared by people on the call, including Pantheon’s Donut Slack Bot.

If you would like to join us, please check out our upcoming events on MeetUp for meeting times, locations, and remote connection information.