Friday, April 11, 2025

Implementing a Dead-Letter Queue for Salesforce Platform Events

Salesforce Platform Events provide a powerful, scalable way to build event-driven architectures. By publishing events, different parts of your application (and external systems) can react asynchronously, decoupling processes and improving responsiveness. However, in any distributed system, failures happen. What happens when a subscriber fails to process an event? Without a proper strategy, these failures can lead to data inconsistencies, lost transactions, and frustrated users.

This post dives into the concept of a Dead-Letter Queue (DLQ) and demonstrates how to implement this crucial pattern within Salesforce to build more resilient, reliable event-driven applications.

Asynchronous Processing & The Challenge of Failure

Platform Events enable a publisher-subscriber model. A system publishes an event (like OrderPlaced__e), and one or more subscribers (Apex triggers, Flows, external systems via CometD) receive and process it. This is fantastic for scalability – the publisher doesn't need to know about the subscribers or wait for them.

But what if a subscriber encounters an error?

  • Maybe an Apex trigger processing the OrderPlaced__e event hits a governor limit?
  • Perhaps a Flow attempting to update inventory fails due to record locking?
  • What if an external API call within the subscriber logic times out?

Salesforce provides some built-in retry mechanisms for certain types of subscribers, but these are finite. After exhausting retries, the event processing attempt might simply stop, and the event could be effectively lost from the perspective of that failed subscriber.

Real-Life Scenario: The Retail Order Fiasco

Imagine a retail company, "MegaMart," uses Platform Events for order processing:

  1. Publish: When a customer places an order online, an OrderPlaced__e event is published with order details.
  2. Subscribe & Process:
    • An Apex trigger attempts to update the Inventory__c records.
    • A Flow tries to call the external Shipping Provider's API.
    • Another Apex trigger initiates the billing process.

Now, consider these potential failures:

  • Inventory Failure: Two orders for the last item arrive simultaneously. The Inventory trigger fails on the second event due to record locking contention while trying to decrement stock. Salesforce retries a few times, but the lock persists, and the trigger eventually gives up. Result: Inventory count is now incorrect.
  • Shipping Failure: The Shipping Provider's API is temporarily down when the Flow attempts to create a shipment label. The Flow retries, but the API remains unavailable. Result: The order isn't shipped, but other parts of the system might think it was.
  • Billing Failure: The Billing trigger finds inconsistent data on the related Account (perhaps missing a required field) and throws an exception before generating the invoice. Result: The customer gets the product (if inventory/shipping succeeded) but never gets billed!

Without intervention, these failures lead to silent data inconsistencies, operational headaches, and poor customer experiences.

What is a Dead-Letter Queue (DLQ)?

A Dead-Letter Queue (DLQ), sometimes called an "undelivered-message queue," is a messaging pattern used to handle messages (or events) that cannot be successfully processed by a receiver. Instead of discarding the failed message after retry attempts, the system moves it to a separate, designated queue – the DLQ.

Why use a DLQ?

  1. Prevent Data Loss: It captures failed events, ensuring they aren't silently lost.
  2. Visibility: It provides a central place for administrators or support teams to see which events failed and why.
  3. Troubleshooting: The captured event data and error information are invaluable for diagnosing the root cause of processing failures.
  4. Manual Intervention / Retry: Allows for fixing the underlying issue (e.g., deploying a code fix, correcting bad data, waiting for an external system to recover) and then potentially reprocessing the event from the DLQ.
  5. Decoupling: Separates the failure handling logic from the main event processing flow, keeping the primary subscriber logic cleaner.

Implementing a DLQ Pattern for Platform Events in Salesforce

Salesforce does not offer a built-in, configurable DLQ feature for standard Platform Events consumed directly by Apex triggers or Flows in the same way some dedicated message brokers do. Therefore, we need to implement the DLQ pattern within our subscriber logic.

Here’s a robust approach using a Custom Object and Apex:

Step 1: Create the DLQ Custom Object

First, create a dedicated Custom Object to store the details of failed events.

Object: Failed Platform Event (API Name: FailedPlatformEvent__c)
Suggested Fields:

  • OriginalEventPayload__c (Long Text Area, 131072): Stores the JSON payload of the original Platform Event. Crucial for reprocessing.
  • SubscriberContext__c (Text, 255): Identifies which subscriber (e.g., Apex Trigger Name, Flow API Name) failed.
  • ErrorMessage__c (Long Text Area, 131072): The error message captured from the exception.
  • ErrorStackTrace__c (Long Text Area, 131072): The Apex stack trace (if available) for debugging.
  • RelatedRecordId__c (Text, 18): (Optional) If the event relates to a specific record (e.g., Order ID), store it for context.
  • Status__c (Picklist, Required, Default='New'): Values: New, Investigating, RetryScheduled, FailedPermanent, Resolved. Helps manage the lifecycle.
  • RetryCount__c (Number, Default=0): Tracks how many times reprocessing has been attempted.
  • OriginalEventUuid__c (Text, 255, External ID, Unique): Stores the ReplayId, EventUuid, or another unique identifier from the event payload, if available. This helps prevent duplicate DLQ entries for the same failed delivery attempt if the trigger somehow fires multiple times before the commit failure (less common but possible).
  • ProcessingAttemptTimestamp__c (DateTime): Timestamp of when the subscriber attempted processing and failed.

Tip: Ensure appropriate field-level security and sharing settings for this object. Only relevant admin/integration users should typically manage these records.

Step 2: Implement Error Handling in Subscribers (Apex Trigger Example)

Modify your Platform Event subscriber triggers (or Flows) to include robust error handling and log failures to your DLQ object.

Trigger:

trigger OrderPlacedTrigger on OrderPlaced__e (after insert) {
    OrderPlacedTriggerHandler handler = new OrderPlacedTriggerHandler(Trigger.new);
    // Run handler logic within a try-catch specifically for DLQ logging
    try {
        // Consider specific handler methods for different logic units (Inventory, Billing)
        handler.processInventoryUpdates();
        handler.processBillingInitiation();
        // Add more processing methods as needed...
    } catch (Exception e) {
        // Log to the DLQ on ANY exception during processing
        System.debug(LoggingLevel.ERROR, 'OrderPlacedTrigger Failure: ' + e.getMessage() + '\n' + e.getStackTraceString());
        handler.logFailuresToDLQ(e); // Pass the exception to the handler
    }
}

Trigger Handler:

// File: classes/OrderPlacedTriggerHandler.cls
public with sharing class OrderPlacedTriggerHandler {

    private final List<OrderPlaced__e> triggerNew;
    private final String SUBSCRIBER_CONTEXT = 'OrderPlacedTriggerHandler'; // Identify this subscriber

    public OrderPlacedTriggerHandler(List<OrderPlaced__e> newEvents) {
        this.triggerNew = newEvents;
    }

    public void processInventoryUpdates() {
        // ... implementation for inventory ...
        // Wrap critical DML or callouts in internal try-catch or ensure method throws
        try {
            // inventory logic potentially throwing exceptions
        } catch(Exception ex) {
            System.debug(LoggingLevel.ERROR, 'Error during Inventory Processing: ' + ex.getMessage());
            throw ex; // Re-throw to be caught by the main trigger catch block for DLQ logging
        }
    }

    public void processBillingInitiation() {
        // ... implementation for billing ...
        try {
            // billing logic potentially throwing exceptions
        } catch (Exception ex) {
            System.debug(LoggingLevel.ERROR, 'Error during Billing Initiation: ' + ex.getMessage());
            throw ex; // Re-throw to be caught by the main trigger catch block for DLQ logging
        }
    }

    /**
     * @description Logs failed events from the current transaction context to the DLQ object.
     * @param processingException The exception caught during processing.
     */
    public void logFailuresToDLQ(Exception processingException) {
        List<FailedPlatformEvent__c> dlqRecords = new List<FailedPlatformEvent__c>();
        DateTime failureTimestamp = Datetime.now();

        for (OrderPlaced__e event : this.triggerNew) {
            // Defensive check: Ensure event and exception are not null
            if (event == null || processingException == null) {
                System.debug(LoggingLevel.ERROR, SUBSCRIBER_CONTEXT + ': Cannot log null event or exception to DLQ.');
                continue;
            }

            String payloadJson = '';
            try {
                payloadJson = JSON.serialize(event);
            } catch (Exception serEx) {
                payloadJson = 'Failed to serialize event payload: ' + serEx.getMessage();
            }

            dlqRecords.add(new FailedPlatformEvent__c(
                OriginalEventPayload__c = payloadJson,
                SubscriberContext__c = SUBSCRIBER_CONTEXT,
                ErrorMessage__c = processingException.getMessage().left(131072), // Truncate if necessary
                ErrorStackTrace__c = processingException.getStackTraceString().left(131072), // Truncate
                // ReplayId is unique per event on this channel; combined with the subscriber
                // context it makes a workable external ID. Consider the EventUuid field
                // (available in newer API versions) if you need a globally unique identifier.
                OriginalEventUuid__c = SUBSCRIBER_CONTEXT + '-' + event.ReplayId,
                RelatedRecordId__c = event.OrderId__c, // Assuming OrderId__c is a field on the event
                ProcessingAttemptTimestamp__c = failureTimestamp,
                Status__c = 'New' // Default status
            ));
        }

        if (!dlqRecords.isEmpty()) {
            try {
                // allOrNone = false: insert as many DLQ records as possible even if some fail
                Database.insert(dlqRecords, false);
                System.debug(LoggingLevel.INFO, SUBSCRIBER_CONTEXT + ': Inserted ' + dlqRecords.size() + ' records into FailedPlatformEvent__c DLQ.');
            } catch (Exception dmlEx) {
                System.debug(LoggingLevel.FATAL, SUBSCRIBER_CONTEXT + ': CRITICAL FAILURE - Could not insert into DLQ. Data potentially lost! Error: ' + dmlEx.getMessage());
                // Consider alternative logging: Custom Notification, log to another object, etc.
            }
        }
    }
}

Flow Equivalent: In a Record-Triggered Flow subscribing to the Platform Event, use a Fault Path. On the Fault Path, add a 'Create Records' element to create the FailedPlatformEvent__c record, mapping relevant fault message details and $Record (event payload) fields.

Step 3: Monitoring the DLQ

Create Reports and Dashboards based on the FailedPlatformEvent__c object:

  • Report: "New Failed Platform Events" (Filter: Status = New)
  • Report: "Failed Events by Subscriber"
  • Dashboard Component: Chart showing count of New failed events over time.

Consider setting up Custom Notifications or scheduled reports to alert administrators when new records appear in the DLQ.

Step 4: Reprocessing from the DLQ

This is the most complex part and requires careful consideration.

Option A: Manual Reprocessing

  1. Add a Custom Button (e.g., "Retry Event Processing") to the FailedPlatformEvent__c page layout.
  2. This button invokes an Autolaunched Flow or an Apex method.
  3. The Flow/Apex:
    • Reads the OriginalEventPayload__c.
    • Deserializes the payload back into the Platform Event structure (e.g., OrderPlaced__e).
    • Crucially: Calls the exact same business logic that the original trigger/Flow executed, but now passing the deserialized event data. Use a shared, invocable Apex class for the core business logic called by both the trigger and the retry mechanism (see the sketch after this list).
    • Wrap the reprocessing logic in its own try...catch.
    • If successful: Update the FailedPlatformEvent__c record's Status__c to Resolved.
    • If it fails again: Update the Status__c to Investigating or increment RetryCount__c and leave as New or RetryScheduled. Update ErrorMessage__c with the new failure details.
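
For illustration, here is a minimal sketch of such a retry entry point. It assumes the shared business logic lives in a class like OrderProcessingService; those names are illustrative, not part of the earlier code:

public class FailedEventRetrier {

    /**
     * @description Re-runs the shared business logic for a single DLQ record.
     */
    public static void retry(FailedPlatformEvent__c dlqRecord) {
        try {
            // Rebuild the event from the stored payload
            OrderPlaced__e event = (OrderPlaced__e) JSON.deserialize(
                dlqRecord.OriginalEventPayload__c, OrderPlaced__e.class);

            // Call the same shared logic the trigger uses (illustrative class/method)
            OrderProcessingService.process(new List<OrderPlaced__e>{ event });

            dlqRecord.Status__c = 'Resolved';
        } catch (Exception e) {
            // Failed again: record the new error and bump the retry counter
            dlqRecord.Status__c = 'Investigating';
            dlqRecord.ErrorMessage__c = e.getMessage().left(131072);
            dlqRecord.RetryCount__c = (dlqRecord.RetryCount__c == null ? 0 : dlqRecord.RetryCount__c) + 1;
        }
        update dlqRecord;
    }
}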

Option B: Automated Reprocessing (Use with Extreme Caution!)

  1. Create a Scheduled Apex class (a minimal sketch follows this list).
  2. The scheduled job queries FailedPlatformEvent__c records with Status__c = 'New' or 'RetryScheduled' and RetryCount__c < MAX_RETRIES.
  3. For each record, deserialize the payload and attempt reprocessing using the shared business logic class (as in Option A).
  4. Implement Exponential Backoff: Don't retry immediately. Base the delay before the next retry attempt on the RetryCount__c (e.g., wait 2 ^ RetryCount__c minutes). This requires tracking the next scheduled retry time.
  5. Idempotency: Ensure your business logic is idempotent (safe to run multiple times with the same input without causing duplicate data or incorrect side effects). This is critical for any retry mechanism.
  6. Error Handling: If reprocessing fails within the scheduled job, increment RetryCount__c. If RetryCount__c exceeds the maximum, set Status__c to FailedPermanent or Investigating.
  7. Governor Limits: Be mindful of limits within the scheduled job, especially if reprocessing many events. Process records in batches.
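
A skeletal Schedulable covering points 2, 3, and 6, reusing the FailedEventRetrier sketch from Option A (MAX_RETRIES and the backoff field are assumptions):

public class DLQReprocessingJob implements Schedulable {
    private static final Integer MAX_RETRIES = 5; // Assumption: tune per subscriber

    public void execute(SchedulableContext ctx) {
        // Keep the batch small to stay well within governor limits
        List<FailedPlatformEvent__c> candidates = [
            SELECT Id, OriginalEventPayload__c, Status__c, RetryCount__c, ErrorMessage__c
            FROM FailedPlatformEvent__c
            WHERE Status__c IN ('New', 'RetryScheduled')
            AND RetryCount__c < :MAX_RETRIES
            LIMIT 50
        ];

        for (FailedPlatformEvent__c dlqRecord : candidates) {
            // Exponential backoff omitted for brevity: track a NextRetryTimestamp__c
            // field and skip records whose retry time hasn't arrived yet
            FailedEventRetrier.retry(dlqRecord);
        }
    }
}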

Warning: Automated retries can mask underlying problems or repeatedly hit governor limits if not designed carefully with backoff and a maximum retry limit. Often, manual review and retry is safer for enterprise systems unless the failure cause is known to be transient.

Best Practices for DLQs and Event-Driven Architectures

  1. Implement DLQ Early: Don't wait for failures to happen in production. Design your error handling and DLQ pattern from the start.
  2. Make DLQ Informative: Log sufficient context (payload, error, stack trace, subscriber info) to make troubleshooting effective.
  3. Idempotent Subscribers: Design subscriber logic to be safe to retry. Check if work has already been done before performing actions.
  4. Monitor Actively: Regularly monitor the DLQ. A growing queue is a sign of underlying problems.
  5. Limit Automated Retries: Use exponential backoff and maximum retry counts for automated reprocessing. Know when to stop and require manual intervention.
  6. Define Resolution Processes: Have a clear process for how administrators investigate and resolve events in the DLQ.
  7. Secure the DLQ: Control access to the FailedPlatformEvent__c object and the reprocessing mechanisms.

Conclusion

Platform Events are essential for modern Salesforce development, enabling scalable, decoupled systems. However, embracing asynchronous patterns means confronting the inevitability of processing failures. By implementing a Dead-Letter Queue pattern within your Salesforce subscribers, you move from hoping failures won't happen to having a robust strategy for when they do. Capturing failed events provides visibility, aids troubleshooting, and allows for controlled recovery, leading to more resilient and reliable enterprise applications. While Salesforce doesn't provide a one-click DLQ for Platform Events consumed by Apex/Flow, building this pattern using custom objects and careful error handling is a worthwhile investment in the stability of your event-driven architecture.


Monday, March 4, 2024

Salesforce Apex: Factory and Strategy Patterns

Factory Pattern

The Factory Pattern is a creational design pattern that provides an interface for creating objects in a superclass, but allows subclasses to alter the type of objects that will be created. In Salesforce Apex, you can use the Factory Pattern to encapsulate the object creation process and to promote loose coupling, thereby making your code more modular, flexible, and maintainable.

To utilize the Factory Pattern in Salesforce Apex, you can define an interface or an abstract class with a method declaration that subclasses or implementing classes will use to create instances of objects. Here's an example to illustrate the Factory Pattern in Salesforce Apex:

Suppose you have different types of notifications that you want to send from Salesforce, such as EmailNotification, SMSNotification, and PushNotification. You can create a factory to generate these notification instances based on the type required.

Step 1: Define an interface with a method to send notifications.

public interface INotification {
    void send(String message);
}

Step 2: Implement the interface with different notification types.

public class EmailNotification implements INotification {
    public void send(String message) {
        // Logic to send email notification
        System.debug('Email notification sent: ' + message);
    }
}

public class SMSNotification implements INotification {
    public void send(String message) {
        // Logic to send SMS notification
        System.debug('SMS notification sent: ' + message);
    }
}

public class PushNotification implements INotification {
    public void send(String message) {
        // Logic to send push notification
        System.debug('Push notification sent: ' + message);
    }
}

Step 3: Create a Factory class to generate instances of the notifications.

public class NotificationFactory {
    public enum NotificationType {
        EMAIL, SMS, PUSH
    }

    public static INotification getNotificationInstance(NotificationType type) {
        switch on type {
            when EMAIL {
                return new EmailNotification();
            }
            when SMS {
                return new SMSNotification();
            }
            when PUSH {
                return new PushNotification();
            }
            when else {
                throw new IllegalArgumentException('Invalid notification type');
            }
        }
    }
}

Step 4: Use the Factory to get instances and send notifications.

public class NotificationService {

    public void sendNotification(NotificationFactory.NotificationType type, String message) {
        INotification notification = NotificationFactory.getNotificationInstance(type);
        notification.send(message);
    }
}

To test this pattern, you can write a test method that uses the NotificationService to send different types of notifications:

@IsTest
private class NotificationServiceTest {
    @IsTest static void testSendNotifications() {
        NotificationService service = new NotificationService();
        
        // Test sending email notification
        service.sendNotification(NotificationFactory.NotificationType.EMAIL, 'Test email message');
        
        // Test sending SMS notification
        service.sendNotification(NotificationFactory.NotificationType.SMS, 'Test SMS message');
        
        // Test sending push notification
        service.sendNotification(NotificationFactory.NotificationType.PUSH, 'Test push message');
    }
}

With this setup, adding a new notification type requires you to create a new class that implements INotification and update the NotificationFactory to handle the new type. This design adheres to the open/closed principle, one of the SOLID principles, making it easy to extend the functionality without modifying existing code.

Strategy Pattern

The Strategy Pattern is a behavioral design pattern that enables selecting an algorithm's behavior at runtime. Instead of implementing a single algorithm directly, the code receives run-time instructions as to which algorithm in a family of algorithms to use.

In the context of the Salesforce Apex notification example above, you can use the Strategy Pattern to define a set of interchangeable algorithms for sending notifications. The client code can then choose the appropriate algorithm based on the context.

Here’s an example to illustrate the Strategy Pattern in Salesforce Apex:

Step 1: Define an interface with a method to send notifications, just like in the Factory Pattern example.

public interface INotificationStrategy {
    void send(String message);
}

Step 2: Implement the interface with different strategies for sending notifications.

public class EmailNotificationStrategy implements INotificationStrategy {
    public void send(String message) {
        // Logic to send email notification
        System.debug('Email notification sent: ' + message);
    }
}

public class SMSNotificationStrategy implements INotificationStrategy {
    public void send(String message) {
        // Logic to send SMS notification
        System.debug('SMS notification sent: ' + message);
    }
}

public class PushNotificationStrategy implements INotificationStrategy {
    public void send(String message) {
        // Logic to send push notification
        System.debug('Push notification sent: ' + message);
    }
}

Step 3: Create a context class that uses a notification strategy.

public class NotificationContext {
    private INotificationStrategy strategy;

    // Constructor to set the strategy
    public NotificationContext(INotificationStrategy strategy) {
        this.strategy = strategy;
    }

    // Method to send notification using the strategy
    public void sendNotification(String message) {
        strategy.send(message);
    }

    // Method to change the strategy at runtime
    public void setStrategy(INotificationStrategy strategy) {
        this.strategy = strategy;
    }
}

Step 4: Use the context class to send notifications.

public class NotificationSender {

    public void sendNotification(String type, String message) {
        INotificationStrategy strategy;

        if (type == 'EMAIL') {
            strategy = new EmailNotificationStrategy();
        } else if (type == 'SMS') {
            strategy = new SMSNotificationStrategy();
        } else if (type == 'PUSH') {
            strategy = new PushNotificationStrategy();
        } else {
            throw new IllegalArgumentException('Invalid notification type');
        }

        NotificationContext context = new NotificationContext(strategy);
        context.sendNotification(message);
    }
}

In this example, NotificationSender is responsible for selecting the appropriate strategy based on the notification type and then using a NotificationContext to send the message.

To test this pattern, you can write a test method that sends different types of notifications:

@IsTest
private class NotificationSenderTest {
    @IsTest static void testSendNotifications() {
        NotificationSender sender = new NotificationSender();
        
        // Test sending email notification
        sender.sendNotification('EMAIL', 'Test email message');
        
        // Test sending SMS notification
        sender.sendNotification('SMS', 'Test SMS message');
        
        // Test sending push notification
        sender.sendNotification('PUSH', 'Test push message');
    }
}

Difference between Factory and Strategy Patterns:

  • Factory Pattern is a creational pattern used to create objects. It hides the instantiation logic of the classes and refers to the newly created object through a common interface. The client doesn't know which concrete class is being instantiated.
  • Strategy Pattern is a behavioral pattern used to select an algorithm's behavior at runtime. It defines a family of algorithms, encapsulates each one, and makes them interchangeable. Strategy lets the algorithm vary independently from clients that use it.

In the given examples, the Factory Pattern would be used if you wanted a single point (the factory class) to handle the instantiation of notification objects, while the Strategy Pattern is used when the algorithm for sending the notification can be chosen at runtime by the client code. With Strategy, you define a context in which different strategies can be applied, and you can switch between them as needed.

(This blog post is generated by ChatGPT)


Dependency Injection in Salesforce Apex

In Salesforce Apex, Dependency Injection (DI) is a design pattern that allows a class to receive dependencies from an external source rather than creating them itself. This makes the class more flexible, testable, and modular.

Problem Statement

In a Salesforce implementation for a Quote-to-Cash process, you may have a scenario where you need to process payments using different payment gateways (e.g., PayPal, Stripe, or a custom gateway). Implementing the code to handle different payment gateways directly within your classes can lead to tightly coupled code, which is hard to maintain and not flexible for future extensions.

How Dependency Injection Can Solve the Issue:

Dependency Injection (DI) can be used to create more maintainable and testable code by decoupling the classes that implement business logic from the classes that implement specific functionalities, like payment processing. DI allows you to inject the specific payment gateway implementation at runtime, making the code more modular and easier to extend with new payment gateways without modifying existing code.

Here's an example of how you can implement DI in Apex to solve this problem:

Step 1: Define an Interface

First, define an interface that declares the methods all payment processors should implement.

public interface IPaymentProcessor {
    Boolean processPayment(Decimal amount, String currencyCode, Map<String, Object> paymentDetails);
}

Step 2: Implement the Interface for Each Payment Gateway

Create classes that implement this interface for different payment gateways.

public class PayPalPaymentProcessor implements IPaymentProcessor {
    public Boolean processPayment(Decimal amount, String currencyCode, Map<String, Object> paymentDetails) {
        // PayPal-specific implementation
        // ...
        return true;
    }
}

public class StripePaymentProcessor implements IPaymentProcessor {
    public Boolean processPayment(Decimal amount, String currencyCode, Map<String, Object> paymentDetails) {
        // Stripe-specific implementation
        // ...
        return true;
    }
}

Step 3: Inject the Payment Processor

Create a PaymentService class that will use the payment processor. The processor is injected through the constructor.

public class PaymentService {
    private IPaymentProcessor paymentProcessor;

    // Constructor for dependency injection
    public PaymentService(IPaymentProcessor processor) {
        this.paymentProcessor = processor;
    }

    public Boolean handlePayment(Decimal amount, String currencyCode, Map<String, Object> paymentDetails) {
        return paymentProcessor.processPayment(amount, currencyCode, paymentDetails);
    }
}

Step 4: Usage

Now, you can instantiate the PaymentService with the desired payment processor dynamically.

// Example of injecting PayPalPaymentProcessor
IPaymentProcessor payPalProcessor = new PayPalPaymentProcessor();
PaymentService paymentService = new PaymentService(payPalProcessor);
Boolean result = paymentService.handlePayment(100.00, 'USD', new Map<String, Object>{'orderId' => '12345'});

// Example of injecting StripePaymentProcessor
IPaymentProcessor stripeProcessor = new StripePaymentProcessor();
paymentService = new PaymentService(stripeProcessor);
result = paymentService.handlePayment(200.00, 'USD', new Map<String, Object>{'invoiceId' => '67890'});

Benefits of Using Dependency Injection

  1. Testability: It's easier to write unit tests by mocking the IPaymentProcessor interface.
  2. Extensibility: If a new payment gateway needs to be added, you only need to create a new class that implements the IPaymentProcessor interface without changing the existing code.
  3. Maintainability: Changing the payment logic for a specific gateway does not impact other parts of the system.
  4. Loose Coupling: The PaymentService class doesn't depend on concrete payment processor implementations, making the system more flexible and robust.

Integrate Custom Metadata Types with Dependency Injection in your Apex code

Using Custom Metadata Types in Salesforce can make the code even more dynamic by allowing administrators to configure which payment processor to use without changing the code. This approach can provide greater flexibility and control from the Salesforce setup interface.

Step 1: Create a Custom Metadata Type

Create a Custom Metadata Type called PaymentGatewaySetting with the following fields:

  1. GatewayName (Text): The name of the payment gateway (e.g., "PayPal", "Stripe").
  2. ClassName (Text): The Apex class name that implements the IPaymentProcessor interface for the corresponding gateway.

Step 2: Insert Records for Each Payment Gateway

Create records for each payment gateway within the Custom Metadata Type. For example:

  • GatewayName: "PayPal", ClassName: "PayPalPaymentProcessor"
  • GatewayName: "Stripe", ClassName: "StripePaymentProcessor"

Step 3: Fetch the Configuration and Instantiate the Processor

Modify your service class to fetch the payment processor class name from the Custom Metadata and use the Type.forName method to dynamically instantiate the processor.

public class PaymentService {
    private IPaymentProcessor paymentProcessor;

    // Constructor for dependency injection is removed

    // Method to set the payment processor dynamically based on Custom Metadata
    public void setPaymentProcessor(String gatewayName) {
        // Query into a list: assigning a zero-row result to a single sObject
        // variable throws a QueryException, so a null check alone is not safe
        List<PaymentGatewaySetting__mdt> settings = [
            SELECT ClassName__c
            FROM PaymentGatewaySetting__mdt
            WHERE GatewayName__c = :gatewayName
            LIMIT 1
        ];

        if (!settings.isEmpty()) {
            Type processorType = Type.forName(settings[0].ClassName__c);
            if (processorType != null) {
                this.paymentProcessor = (IPaymentProcessor) processorType.newInstance();
            }
        }
    }

    public Boolean handlePayment(Decimal amount, String currencyCode, Map<String, Object> paymentDetails) {
        if (paymentProcessor == null) {
            // Handle the error - payment processor not set
            return false;
        }
        return paymentProcessor.processPayment(amount, currencyCode, paymentDetails);
    }
}

Step 4: Usage

Now, you can set the payment processor based on the configured gateway name:

PaymentService paymentService = new PaymentService();
paymentService.setPaymentProcessor('PayPal');
Boolean result = paymentService.handlePayment(100.00, 'USD', new Map<String, Object>{'orderId' => '12345'});

In the above example, the setPaymentProcessor method dynamically selects the appropriate payment processor based on the Custom Metadata settings. This allows administrators to switch payment gateways or add new ones without deploying new Apex code.

Benefits of Combining DI with Custom Metadata:

  1. Flexibility: Payment gateways can be changed or added through Salesforce setup without modifying Apex code.
  2. Manageability: All gateway configurations are managed in one place, making it easy to view and edit settings.
  3. Scalability: As new gateways are needed, you only need to add new Custom Metadata records and implement the corresponding classes.

Combining Dependency Injection with Custom Metadata Types in this way facilitates a highly configurable and scalable solution for managing payment processors in Salesforce.

Testing PaymentService class

You can test the PaymentService class by mocking the IPaymentProcessor interface using the Stub API. The Stub API allows you to substitute method implementations with mock behavior, which is ideal for unit testing because it helps isolate the class under test from its dependencies. Here's how you can create a mock class for the IPaymentProcessor interface and use it to test the PaymentService:

Step 1: Create a Mock Class

Create a mock class that implements the StubProvider interface provided by Salesforce. This class will define the behavior of the mocked methods.

@isTest
private class MockPaymentProcessor implements System.StubProvider {
    private Boolean processPaymentReturnValue;

    public MockPaymentProcessor(Boolean returnValue) {
        this.processPaymentReturnValue = returnValue;
    }

    public Object handleMethodCall(Object stubbedObject, String stubbedMethodName, Type returnType, List<Type> parameterTypes, List<String> parameterNames, List<Object> args) {
        if (stubbedMethodName == 'processPayment' && returnType == Boolean.class) {
            return processPaymentReturnValue;
        }
        return null;
    }
}

Step 2: Write a Test Class

Now, write a test class for PaymentService. Use the Test.createStub method to create an instance of the IPaymentProcessor interface with the mock behavior.

@isTest
private class PaymentServiceTest {

    @isTest
    static void testHandlePayment() {
        // Create an instance of the mock payment processor with the desired return value (true for successful payment)
        IPaymentProcessor mockProcessor = (IPaymentProcessor)Test.createStub(IPaymentProcessor.class, new MockPaymentProcessor(true));

        // Inject the mock payment processor into the payment service
        PaymentService paymentService = new PaymentService(mockProcessor);

        // Call the method to test with some test data
        Boolean result = paymentService.handlePayment(100.00, 'USD', new Map<String, Object>{'orderId' => '12345'});

        // Assert that the payment was successful
        System.assertEquals(true, result, 'The payment should have been processed successfully.');
    }
}

In this test, we're asserting that handlePayment returns true, which is the behavior we've defined in our mock class for a successful payment processing scenario. You can also test for different scenarios by changing the return value in the MockPaymentProcessor constructor or adding more logic to the handleMethodCall method.

By mocking the IPaymentProcessor interface, we can focus on testing the behavior of the PaymentService class without needing to rely on actual implementations of the payment processor, which might have external dependencies and side effects. This allows for faster and more reliable unit tests.

Best Practices and Common Challenges implementing Dependency Injection

Best Practices

  • Use Interfaces: We defined IPaymentProcessor as an interface, which allows us to implement different payment processors without changing the dependent PaymentService class code.
  • Constructor Injection: Originally, we used constructor injection to pass the specific payment processor to PaymentService. This is a clear and direct way to handle dependencies.
  • Single Responsibility Principle: Each payment processor class, such as PayPalPaymentProcessor and StripePaymentProcessor, has a single responsibility: to process payments for its respective gateway.
  • Testability: With DI, we can easily test PaymentService by mocking the IPaymentProcessor interface, ensuring that unit tests do not rely on external systems.
  • Custom Metadata Types: By using Custom Metadata Types, we allowed for dynamic configuration of payment processors, which is a best practice for managing external configurations.
  • Documentation: Documenting how PaymentService and payment processors work together, including how to configure Custom Metadata, is crucial for maintainability.
  • Managing Dependencies: We only inject the necessary dependencies into PaymentService, avoiding unnecessary complexity.

Common Challenges

  • Limited Reflection: Apex's reflection capabilities are limited, but we used Type.forName to instantiate classes by name, which is a workaround for dynamic instantiation based on Custom Metadata.
  • Complex Configuration: As the number of payment gateways grows, managing Custom Metadata records can become complex. It's important to have a clear strategy for managing these configurations.
  • Learning Curve: Developers new to DI might need time to understand the pattern. In the PaymentService example, clear documentation and code comments can help mitigate this.
  • Over-Engineering: Adding DI where it's not necessary can overcomplicate the solution. In our case, we only introduced DI for actual needs, like varying payment gateways.
  • Testing: With DI, we must write tests for each payment processor and their interaction with PaymentService. This means more tests but also better coverage.
  • Debugging: Debugging can be more complex because the implementation details are abstracted. To mitigate this, ensure logging and error handling are in place, as they can provide insights when something goes wrong.
  • Performance Considerations: Creating new instances of payment processors could have performance impacts. In the PaymentService example, we should consider reusing processor instances if appropriate.

Monday, January 22, 2024

Salesforce Apex: Creating an Apex Test Class for a Chaining Queueable Job

Queueable jobs in Salesforce are a powerful way to handle asynchronous processing, allowing you to chain jobs for scalability and efficiency. This blog post explores how to implement a chaining Queueable job in Apex, test it effectively, and use the AsyncOptions class to control job behavior.

The AccountProcessingQueueable class processes a set of Account records in batches and chains additional jobs if more records remain to be processed. Below is the implementation:

public class AccountProcessingQueueable implements Queueable {
    private Set<Id> accountIds;
    private Integer batchSize;

    public AccountProcessingQueueable(Set<Id> accountIds, Integer batchSize) {
        this.accountIds = accountIds;
        this.batchSize = batchSize;
    }

    public void execute(QueueableContext context) {
        // Query accounts to process, limited by batchSize
        List<Account> accountsToProcess = [
            SELECT Id, Name, AnnualRevenue
            FROM Account
            WHERE Id IN :accountIds
            LIMIT :batchSize
        ];
        System.debug('Processing ' + accountsToProcess.size() + ' accounts');

        // Perform complex calculations and updates
        for (Account account : accountsToProcess) {
            account.Description = 'Updated by AccountProcessingQueueable';
        }

        // Update accounts if there are any to process
        if (!accountsToProcess.isEmpty()) {
            update accountsToProcess;
        }

        // Remove processed account IDs from the set
        for (Account account : accountsToProcess) {
            accountIds.remove(account.Id);
        }

        // If there are more accounts to process, enqueue the next job
        if (!accountIds.isEmpty()) {
            System.enqueueJob(new AccountProcessingQueueable(accountIds, batchSize));
        }
    }
}

Creating a Test Class

Testing Queueable jobs requires creating test data, enqueuing the job, and verifying the results. The Test.startTest() and Test.stopTest() methods are used to ensure asynchronous jobs execute synchronously within the test context, allowing you to validate their behavior.

Below is the test class for the AccountProcessingQueueable class:

@IsTest
public with sharing class AccountProcessingQueueableTest {
    @IsTest
    public static void testQueueable() {
        // Create test data: 7 accounts
        List<Account> accounts = new List<Account>();
        for (Integer i = 1; i <= 7; i++) {
            accounts.add(new Account(Name = 'Test ' + i));
        }
        insert accounts;

        // Prepare account IDs (clone() because keySet() returns a read-only set and the job mutates it)
        Set<Id> accountIds = new Map<Id, SObject>(accounts).keySet().clone();

        // Set up AsyncOptions to limit chaining depth
        AsyncOptions asyncOptions = new AsyncOptions();
        asyncOptions.maximumQueueableStackDepth = 4;

        // Start test context and enqueue the job
        Test.startTest();
        System.enqueueJob(new AccountProcessingQueueable(accountIds, 2), asyncOptions);
        Test.stopTest();

        // Verify results
        List<Account> updatedAccounts = [SELECT Id, Description FROM Account WHERE Id IN :accountIds];
        for (Account account : updatedAccounts) {
            System.assertEquals('Updated by AccountProcessingQueueable', account.Description, 
                'Account description should be updated by the Queueable job');
        }
    }
}

The Rationale Behind the Test Data

In the AccountProcessingQueueableTest class, we create 7 account records to demonstrate the chaining mechanism of the Queueable job. With a batch size of 2, the job processes accounts in batches, requiring multiple chained executions to handle all 7 accounts. Specifically:

  • The first job processes accounts 1–2 (2 accounts).
  • The second job processes accounts 3–4 (2 accounts).
  • The third job processes accounts 5–6 (2 accounts).
  • The fourth job processes account 7 (1 account).

This setup ensures that the chaining logic is thoroughly tested, including the handling of partial batches in the final execution.


Understanding maximumQueueableStackDepth

The AsyncOptions class allows you to control the behavior of Queueable jobs, including through the maximumQueueableStackDepth property. This property limits the number of chained Queueable jobs that can be enqueued in a single execution context.

Key Points About maximumQueueableStackDepth:

  • The default value is 50, meaning up to 50 chained jobs can be enqueued in a single transaction.
  • Setting maximumQueueableStackDepth to a lower value (e.g., 4) restricts the number of chained jobs to 3 additional jobs beyond the initial job (total of 4 jobs in the chain).
  • If the limit is exceeded, Salesforce throws a System.AsyncException with a message indicating that the maximum stack depth has been reached.

In the test class, we set maximumQueueableStackDepth to 4 to demonstrate how to control chaining depth:

AsyncOptions asyncOptions = new AsyncOptions();
asyncOptions.maximumQueueableStackDepth = 4;
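
Relatedly, if you want the job itself to stop chaining gracefully rather than throwing, the System.AsyncInfo methods can be checked before enqueuing. A sketch of how the execute() method above could guard its chaining step:

// Inside execute(), before chaining the next job:
if (!accountIds.isEmpty()) {
    if (!System.AsyncInfo.hasMaxStackDepth()
            || System.AsyncInfo.getCurrentQueueableStackDepth() < System.AsyncInfo.getMaximumQueueableStackDepth()) {
        System.enqueueJob(new AccountProcessingQueueable(accountIds, batchSize));
    } else {
        System.debug(LoggingLevel.WARN, accountIds.size() + ' accounts left unprocessed at max stack depth.');
    }
}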

Running the Queueable Job

With 7 accounts and a batch size of 2, the Queueable job executes 4 times to process all records:

  • Job 1: Processes 2 accounts (remaining: 5).
  • Job 2: Processes 2 accounts (remaining: 3).
  • Job 3: Processes 2 accounts (remaining: 1).
  • Job 4: Processes 1 account (remaining: 0).

The Test.stopTest() method ensures that all chained jobs complete before the test context ends, allowing you to verify the results immediately.


Sunday, January 21, 2024

Apex: Get List of SObject records by Ids

The getSobjectListById() method is a utility function that simplifies the common task of grouping SObject records by a specific field. By cutting boilerplate and improving readability, it helps you write more maintainable Apex code.

public static Map<Id, List<SObject>> getSobjectListById(String key, List<SObject> incomingList) {
    Map<Id, List<SObject>> returnValues = new Map<Id, List<SObject>>();
    for (SObject current : incomingList) {
        if (current.get(key) != null) {
            Id currentId = (Id) current.get(key);
            if (!returnValues.containsKey(currentId)) {
                returnValues.put(currentId, new List<SObject>());
            }
            returnValues.get(currentId).add(current);
        }
    }
    return returnValues;
}

This utility function takes a field name (key) and a list of SObject records as parameters. It returns a map where the keys are the unique IDs from the specified field, and the values are lists of SObject records that have the same field value.

Let's consider a real-life scenario where getSobjectListById() can be used. Suppose you are working on a Salesforce project where you need to send a customized email to each Account's Contacts. The email content is based on the specific Account's details.

First, you would query all the Contacts and their related Account details. Then, you would need to group these Contacts based on their AccountId. This is where getSobjectListById() comes into play. You can use this method to create a map where the key is the AccountId and the value is a list of Contacts related to that Account.

Here's how you can do it:

List<Contact> contactList = [SELECT Id, Name, AccountId, Account.Name FROM Contact];
Map<Id, List<SObject>> accountContactsMap = Utils.getSobjectListById('AccountId', contactList);

Now, accountContactsMap contains a list of Contacts for each AccountId. You can iterate over this map to send a customized email to each Account's Contacts.
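
For instance (sendAccountEmail is a placeholder for your own email-building logic):

for (Id accountId : accountContactsMap.keySet()) {
    // All Contacts that share this AccountId
    List<Contact> accountContacts = (List<Contact>) accountContactsMap.get(accountId);
    sendAccountEmail(accountId, accountContacts);
}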


Friday, January 20, 2023

LWC: Working with custom record forms using lightning-record-edit-form

I was recently working on creating a utility LWC component for displaying Salesforce record data. This component is unique in that it can be used with both standard and custom objects, and there is no need to create a record form for each object; simply drag and drop this component on any record page in Lightning App Builder, provide the API name of the object, supply a few more parameters, and voila! The record form will be generated based on the page layout. By overriding the New and Edit buttons, this component can also be used to create or change a record.

This component relies on the fact that every record is associated with a page layout, and it requires this information when it is instantiated. If an object has record types, a mapping must be supplied. This component does not currently accept compound fields, but as far as I can tell, it is possible.

I learnt a lot while working on this component, which I'd like to share in this blog.

lightning-record-edit-form can be used for both creating and editing a record. To customize the behaviour of your form when it loads, use the onload attribute to specify an event handler. This is how you gain access to the record:

async handleRecordEditFormLoad(event) {
    const record = this.recordId ? event.detail.records[this.recordId] : event.detail.record;

    ...
    ...
}

Once you've retrieved the page layout data (see below), you can use this piece of code to display the form.

get sections() {
    return this.layoutSections?.map((layoutSec) => {
        const layoutSection = { ...layoutSec };
        const { layoutColumns } = layoutSection;
        layoutSection.layoutColumns = layoutColumns?.map((layoutColumn, id) => {
            const { layoutItems } = layoutColumn;
            layoutColumn = { ...layoutColumn, id };
            layoutColumn.layoutItems = layoutItems
                ?.map((layoutItem, id) => {
                    layoutItem = { ...layoutItem, id };
                    return layoutItem;
                });
            return layoutColumn;
        });
        return layoutSection;
    });
}

Use the getRecord wire adapter to get the record's data:

import { getRecord } from 'lightning/uiRecordApi';

@wire(getRecord, { recordId: '$recordId', layoutTypes: ['Full'], modes: ['View'] })
wiredRecord({ error, data }) {
    if (data) {
        this.recordData = data;
        ...
        ...
    }
}

During record creation, this wire adapter won't fetch recordData for us. As a result, use the onload attribute of lightning-record-edit-form (see above). The recordData will be auto-populated with default values, like OwnerId, so you won’t have to populate it yourself.

When the user is filling up information, use onchange event handler of lightning-input-field to update the record data in memory.

handleInputChange(event) {
    event.preventDefault();
    this.recordData.fields[event.target.dataset.api].value = event.detail.value;
}

If you don't include a lightning-button with type="submit" inside lightning-record-edit-form, this is how you can save the record:

handleModalSave() {
    const data = this.recordData;
    this.template.querySelector('lightning-record-edit-form').submit(data);
}

There is a wire adapter, getRecordUi, that gets layout information, metadata, and data to build UI for one or more records. However, for unknown reasons at this time, it is marked as deprecated. As a result, I had to use the Metadata API to read the page layout information before the form could be loaded.

Make sure that you check out the UI API Playground provided by Philippe Ozil to study the UI APIs and understand the shape of the JSON data they return. Or else use the Chrome developer tools.

Refer to the SLDS library as much as possible to maintain the aesthetics of the Salesforce platform, so that you can serve the majority of use cases.


Tuesday, August 30, 2022

Sending Email in Salesforce using Apex

This is a common snippet to send an email using Salesforce Apex:

Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
email.setToAddresses(new List<String>{'demo@sf.com'});

email.setPlainTextBody('Sample body');
email.setSubject('Sample subject');

List<Messaging.SendEmailResult> results = Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ email });

for (Messaging.SendEmailResult sr : results) {
    if (!sr.isSuccess()) {
        // Collect the error messages rather than joining the raw error objects
        List<String> errorMessages = new List<String>();
        for (Messaging.SendEmailError err : sr.getErrors()) {
            errorMessages.add(err.getMessage());
        }
        throw new AuraHandledException(String.join(errorMessages, ', '));
    }
}

Sending files with Email

Now assume that we have an order record in Salesforce, and multiple files have been uploaded to its Files related list. We would like to email a recipient, attaching all these files. There is a method setEntityAttachments() on the SingleEmailMessage class which accepts a list of ContentVersion Ids.

Make sure that you pass the ContentVersion Ids as a list of String 😐

Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
email.setEntityAttachments(entityAttachmentIds);

Save Email as an activity

What if you would like to mail to a recipient but at the same time save the email record as an activity? For this, use setSaveAsActivity() and setWhatId() methods as below:

Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
...
...
...
email.setEntityAttachments(new List<String>(contentVersionIds));
email.setSaveAsActivity(true);
email.setWhatId(orderId);

Replace Merge fields in Email Template

Now assume that you have already defined an email template, and there are merge fields (placeholders) both in the body and subject of the email template. When you send the email, these merge fields should get replaced with the Salesforce data from the record. Use renderStoredEmailTemplate() of Messaging class as below:

EmailTemplate emailTemplate = [SELECT Id, Body, Subject FROM EmailTemplate WHERE DeveloperName = :emailTemplateName LIMIT 1];

Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();

// Render once and reuse the result: each call to renderStoredEmailTemplate() costs a SOQL query
Messaging.SingleEmailMessage renderedTemplate = Messaging.renderStoredEmailTemplate(emailTemplate.Id, UserInfo.getUserId(), recordId);
email.setPlainTextBody(renderedTemplate.getPlainTextBody());
email.setSubject(renderedTemplate.getSubject());

Remember that executing the renderStoredEmailTemplate() counts toward the SOQL governor limit as one query. This is described more in detail here.

There is a lot of information provided in Salesforce documentation, so do refer the Messaging and SingleEmailMessage classes.

And finally, a full fledged example of sending email using Salesforce Apex: https://gist.github.com/iamsonal/3ccd44b319724f4d03cdb4df0bde54d0


Custom Salesforce Lightning component to Share Records Manually

There are cases where users manually share records with other users. However, users with whom a record is shared neither get a notification nor an email alert. How are users supposed to know a record has been shared with them?

The Share tables also do not support triggers. The only way to handle this use case is to schedule an Apex job and periodically query the table for new shares since the last check.
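
A minimal sketch of such a polling job for Account shares; persisting the last-check time (e.g. in a custom setting) is left out, so a one-hour window is assumed:

public class ManualShareMonitor implements Schedulable {
    public void execute(SchedulableContext ctx) {
        // Assumption: the job runs hourly; in practice, read the last-check
        // time from persistent storage instead of hardcoding the window
        Datetime lastCheck = Datetime.now().addHours(-1);

        List<AccountShare> newShares = [
            SELECT AccountId, UserOrGroupId
            FROM AccountShare
            WHERE RowCause = 'Manual' AND LastModifiedDate > :lastCheck
        ];

        // Notify each UserOrGroupId here via email and/or a custom notification
    }
}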

This is a custom component developed both in Lightning Aura and LWC (the main modal is in Aura, while the supporting components are in LWC). As soon as a record is shared manually with a user, he/she will be notified by email and a Salesforce notification. Currently, this works for practically all the Salesforce standard objects, including Accounts. Additionally, if you want to remove the share entries for a certain record, click the remove icon on the right-hand side of the table. To use this component, make sure to create a Quick Action.

Since I haven't worked with Aura in a while, I blended LWC and Aura to render this component, even though it could have been rendered with LWC alone. This component can be extended to work with custom objects as well.

To select multiple users, I’ve used a lookup component created by Philippe Ozil. Thanks to him.

Finally, the repo link for this component: https://github.com/iamsonal/share-records


Wednesday, July 6, 2022

Reviewing and evaluating different integration patterns available within Salesforce

Approach

Data integration is used to synchronize data between two or more systems. It can be described as combining data from different sources into one cohesive view. The outcome of data integration should be trusted data that is meaningful and valuable to the business process.

Process integration combines business processes from two or more systems to complete a given process task. Process integration requires more robust systems and extended transaction timing to complete the integration.

Virtual integration is used to search, report, and update data in one or more external systems. This type of integration requires real-time access and retrieval of data from the source system.

Timing

Synchronous communication is when one system sends a request to another system and must wait for the receiving system to respond. Synchronous timing is generally expected in real time.

Asynchronous communication occurs when one system sends a request to another system and does not wait for the receiving system to respond. Asynchronous timing does not require real-time communications.

Source, Target, and Direction

Each integration must have a source (sending) system and a target (receiving) system. Direction is more than a pointer: an integration can be unidirectional (one-way), bidirectional (two-way), or omni-directional (broadcast, one-to-many).

Calling Mechanism

Salesforce has several ways to initiate integrations, including triggers, controllers, workflows, processes, flows, platform events, and batch processes.

  • Apex callouts
  • Bulk API
  • Canvas
  • Chatter REST API
  • Email
  • External objects
  • Metadata API
  • Middleware
  • Outbound messages
  • Platform event
  • Push notifications
  • RESTful API
  • SOAP-based API
  • Streaming API
  • Tooling API

Error Handling and Recovery

Integration patterns react to errors and perform rollbacks in different ways. The approach used to manage error handling and recovery is critical in selecting and managing a given integration pattern.

Idempotent Design Considerations

An operation is idempotent when it produces the same result whether you execute it once or multiple times. The most common method of creating an idempotent receiver is to search for and track duplicates based on unique message identifiers sent by the consumer.

Security Consideration

Salesforce recommends two-way SSL and appropriate firewall mechanisms to maintain the confidentiality, integrity, and availability of integration requests.

State Management

The use of primary and unique foreign keys allows different systems to maintain the state of data synchronization. If Salesforce is the master, the remote system must store the Salesforce ID, and if the remote system is the master, Salesforce must store the unique remote ID.

Integration Patterns Supported by Salesforce

Request and Reply: As a requesting system, Salesforce invokes a remote system call for data and waits for the integration process to complete.

Fire and Forget: As a requesting system, Salesforce invokes a process in a remote system, receives an acknowledgment from the remote system, and does not wait for the integration process to complete.

Batch Data Synchronization: Either Salesforce or a remote system invokes a batch data call or published event to synchronize data in either direction using a third-party ETL solution or Salesforce Change Data Capture.

Remote Call-In: As a target system, Salesforce receives a call from a remote system to create, retrieve, update, or delete data.

UI Update Based on Data Changes: As a requesting system, Salesforce subscribes to a PushTopic over the CometD protocol and updates the user interface (UI) to reflect the received change.

Data Virtualization: As a requesting system, Salesforce establishes a virtual connection using Salesforce Connect to create an external object to access real-time data.


Monday, December 13, 2021

Javascript: Working with Arrays using Asynchronous operations

Let’s say we have a list of names. For each name, we want to make an API call and get some information, and then keep a new list with this collection of information. A typical approach may look like this.
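
For instance, a sequential version that awaits each call in turn might look like this sketch, where fetchInfo is a hypothetical function that makes the API call and returns a promise:

async function collectInfo(names) {
  const infos = [];
  for (const name of names) {
    // Each API call is awaited before the next one starts.
    infos.push(await fetchInfo(name));
  }
  return infos;
}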

Refactoring to map or forEach is not that straightforward here. The callbacks provided to map or forEach (or any of the array methods) are not awaited, so we have no way of knowing when the full collection of information is done and ready to use. However, there is still a way to write this nicely: using the Promise.all method.
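
Something like this sketch, again with the hypothetical fetchInfo:

async function collectInfo(names) {
  // map kicks off every API call immediately and returns an array of promises.
  const promises = names.map((name) => fetchInfo(name));
  // Promise.all resolves once all of the promises have resolved.
  return Promise.all(promises);
}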

Awesome! Now, map returns a list of promises, and the Promise.all method takes that list and resolves once every promise in it has resolved. Not only did the code become cleaner, but we also benefit from the fact that the API calls run concurrently rather than sequentially, which speeds up the process and improves performance.


Javascript: sort() and Array destructuring

Take a look at the code below.
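
A minimal reconstruction of the idea, using a grades array (the exact values are assumptions):

const grades = [74, 37, 89, 52];
const sortedGrades = grades.sort((a, b) => a - b);

console.log(sortedGrades); // [37, 52, 74, 89]
console.log(grades);       // [37, 52, 74, 89] (the original array was sorted too!)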

At first glance, this looks good. We're not using let, and we're not mutating the original array with something like push. But take a look at the console.log statement. It turns out that the sort method does not create a copy of the original array. Instead, it mutates the original array in place and returns a reference to that same array.

And there are a handful of old Array methods that do this. Be careful with push, shift, unshift, pop, reverse, splice, sort, and fill. Fortunately, most often we can simply avoid calling these methods altogether to stay out of trouble.

However, there are cases, like sorting, where we have to use a method that mutates the original array, for lack of better options. The spread syntax to the rescue! Whenever these occasions arise, make sure to copy the array manually before performing the operation on it. It's as simple as this.
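
The same sketch with the copy-first approach:

const grades = [74, 37, 89, 52];
// Spread into a new array first, then sort the copy.
const sortedGrades = [...grades].sort((a, b) => a - b);

console.log(sortedGrades); // [37, 52, 74, 89]
console.log(grades);       // [74, 37, 89, 52] (the original array is untouched)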

That [...grades] makes all the difference.


Javascript: Pass arguments as an object

Say we have a function, createUser, which requires four arguments in order to create a new user (by calling some API).
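
A sketch of such a signature (the exact parameter names are assumptions):

function createUser(name, email, isAdmin, isSubscribed) {
  // ... call some API to create the user
}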

When looking at the function signature itself, things seem to make pretty good sense. But how about when we call it?
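
The call site might look like this:

createUser('Kim Smith', 'kim@example.com', false, true);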

It’s pretty unclear what the arguments mean, right? Especially the last two booleans. I would have to go to the implementation to look it up. Instead, we can wrap the arguments in an object.

Thanks to ES6 object destructuring, we can do this easily by simply adding curly brackets around the arguments. Now, whenever we call createUser, we pass an object as a single argument with the required values as properties instead.
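
Continuing the sketch, the destructured version and its call site:

function createUser({ name, email, isAdmin, isSubscribed }) {
  // ... call some API to create the user
}

createUser({
  name: 'Kim Smith',
  email: 'kim@example.com',
  isAdmin: false,
  isSubscribed: true,
});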

See how nicely that reads now. We're no longer in doubt about what those booleans mean. There's another version of this that I've seen very often: passing optional arguments as an options object. The idea is to pass one or two essential arguments and then pass the remaining arguments as an options object.

Now we need to check if the options object is set before accessing its values and provide proper fallbacks. On the other hand, calling the createUser function now looks very clean.
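
A sketch of that options-object version (the default values are assumptions):

function createUser(name, email, options = {}) {
  // Destructure with fallback values in case an option is not provided.
  const { isAdmin = false, isSubscribed = true } = options;
  // ... call some API to create the user
}

createUser('Kim Smith', 'kim@example.com', { isAdmin: true });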

The first two arguments are pretty obvious, and we can now optionally provide options when needed.


Javascript: Guard clauses and avoiding using 'else'

Guard clauses

“In computer programming, a guard is a boolean expression that must evaluate to true if the program execution is to continue in the branch in question. Regardless of which programming language is used, guard code or a guard clause is a check of integrity preconditions used to avoid errors during execution.”

— Wikipedia

Let's take a look at an example. We have a function, getValidCandidate, which, given a candidate and a list of members, returns the matching member if the candidate is valid, or undefined otherwise.
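
A sketch of what such a nested implementation might look like (the exact validity rules here, an age check and a membership lookup, are assumptions):

function getValidCandidate(candidate, members) {
  if (candidate) {
    if (candidate.age >= 18) {
      const member = members.find((member) => member.id === candidate.id);
      if (member) {
        return member;
      }
    }
  }
  return undefined;
}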

Look how nested the code is: ifs wrapping other ifs, three levels deep. Let's rewrite this and use guard clauses instead.
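
The same sketch rewritten with guard clauses:

function getValidCandidate(candidate, members) {
  // Return early as soon as a precondition fails.
  if (!candidate) return undefined;
  if (candidate.age < 18) return undefined;

  // find returns the member, or undefined if there is no match.
  return members.find((member) => member.id === candidate.id);
}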

Guard clauses prevent the function from continuing and instead return early if a guarding condition is not met. Naturally, we also know that the end result is the last return of the function.

Skip the ‘else’ part

Whenever you’re about to write else, stop and reconsider what you’re doing and search for an alternative way to express the logic.

Let’s cover a few ways that we can avoid using else. One of them is guard clauses, which we just covered above. Another approach is using default values. Let’s take an example.

Let’s say we have a function, negateOdd, which takes a number and negates it if the number is odd.
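
A sketch of the first version, using an else:

function negateOdd(number) {
  let result;
  if (number % 2 !== 0) {
    result = -number;
  } else {
    result = number;
  }
  return result;
}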

The function does what it's supposed to. But it's unnecessarily using an else. Let's refactor it and use a default value instead.
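
The refactored sketch with a default value:

function negateOdd(number) {
  let result = number; // default value
  if (number % 2 !== 0) {
    result = -number; // only changed when the number is odd
  }
  return result;
}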

We now assign result a default value, and the variable is only changed if the condition in the if-statement is met. But let's do even better. We should question the use of let and imperative code (altering the state of your program step by step). Let's see if we can make this function even more readable and concise.

There is a version of if-else that is generally accepted. It’s the ternary operator.
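
The same function as a single expression:

const negateOdd = (number) => (number % 2 !== 0 ? -number : number);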


Sunday, December 12, 2021

NodeJS & AWS Lambda

NodeJS is a backend runtime environment that runs on the V8 engine and enables us to write and execute server-side JavaScript.

Use promises instead of callbacks

NodeJS was originally built using a callback pattern for asynchronous calls. All of NodeJS's builtins are structured this way: you provide the main arguments along with a callback function that is invoked when the asynchronous operation is done.
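
For example, reading a file with the callback-based fs API:

const fs = require('fs');

// The callback receives (error, result) once the operation completes.
fs.readFile('notes.txt', 'utf8', (err, contents) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(contents);
});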

Fortunately, it's quite easy to convert these methods to use promises instead. Let's look at two different ways.

Using promisify

You can use the promisify utility function from the util module to wrap a callback-based function in a promise.
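
For example, wrapping the callback-based fs.readFile:

const { promisify } = require('util');
const fs = require('fs');

// promisify converts an (err, result) callback API into a promise-returning one.
const readFile = promisify(fs.readFile);

readFile('notes.txt', 'utf8')
  .then((contents) => console.log(contents))
  .catch((err) => console.error(err));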

It works for all functions that follow the NodeJS callback convention, which means that it works for a range of old third-party libraries for NodeJS as well.

Using module/promises

Instead of handling the promise-wrapping yourself, many of NodeJS's builtin modules ship promisified versions of their functions, straight from the module itself. This is by far the easiest way to use promises in NodeJS. Stick to this pattern wherever you can.
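
For example, the fs module exposes a promise-based API via the fs/promises entry point (available in modern NodeJS versions):

const fs = require('fs/promises');

async function main() {
  // readFile returns a promise here; no callback or manual wrapping needed.
  const contents = await fs.readFile('notes.txt', 'utf8');
  console.log(contents);
}

main();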

Async handlers in AWS Lambda

The same goes for AWS Lambda. You don’t have to use the callback argument anymore. Instead, declare the handler as async, and return the result instead.

If you need to fulfill promises during the function call, you simply apply the await keyword like you normally would.
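
A minimal sketch of an async handler (loadRecord is a hypothetical helper that returns a promise):

exports.handler = async (event) => {
  // Await promises as usual inside the async handler.
  const record = await loadRecord(event.id);

  // Return the result directly; no callback argument needed.
  return {
    statusCode: 200,
    body: JSON.stringify(record),
  };
};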


Tuesday, February 23, 2021

Salesforce Lightning: Assign a record to yourself

Let's assume that you want to assign an account record (or multiple account records) to yourself from the record detail view and the list view, respectively. You can implement this using only Flows if you prefer a declarative approach.

I have created an autolaunched Flow that accepts both single and multiple record ids. Note that the variables must be named exactly id and ids, respectively.

To call this flow, create a custom button for the detail page and another for the list view, and add them to the corresponding layouts. For the detail page, set the id flow variable to the account ID and, once the flow completes, redirect the user back to the same account detail page by providing the retURL parameter in the flow URL.
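
For example, the detail-page button's URL might look like this (Assign_To_Me is a hypothetical flow API name):

/flow/Assign_To_Me?id={!Account.Id}&retURL=/{!Account.Id}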

For the list view page, make sure that Display Checkboxes is selected; the ids variable is then populated automatically when one or more records are selected in the list view. The only caveat is that once the flow completes, I couldn't find a declarative way to redirect the user back to the list view the flow was launched from (you can use a Visualforce page to do that redirect). So, for the sake of simplicity, I am redirecting the user to the Recently Viewed Accounts page.

Next, we have to add these buttons: the detail-page button goes on the account page layout, and the list-view button goes on the account list view layout.

You can get the flow and the associated custom button files from this repository.


Thursday, January 28, 2021

Salesforce Classic: Assign a record to yourself

Let's consider a use case where you want to assign a record (in this case, a WorkOrder) to yourself. To implement this scenario, go to the object's Buttons, Links, and Actions section in Setup and click New Button or Link. Enter the details as below:

and provide the below script:

{!REQUIRESCRIPT("/soap/ajax/49.0/connection.js")}
{!REQUIRESCRIPT("/soap/ajax/49.0/apex.js")}

// Authenticate the AJAX Toolkit connection with the current user's session.
var __sfdcSessionId = '{!GETSESSIONID()}';
sforce.connection.sessionId = __sfdcSessionId;

// Reassign the current work order to the logged-in user.
var workOrder = new sforce.SObject("WorkOrder");
workOrder.Id = "{!WorkOrder.Id}";
workOrder.OwnerId = sforce.connection.getUserInfo().userId;
var result = sforce.connection.update([workOrder]);

if (result[0].getBoolean("success")) {
    console.log(result[0].id + " updated");
} else {
    console.log("failed to update " + result[0]);
}
// Reload so the updated owner is visible on the page.
window.location.reload();

Now add this button on the page layout.

