Improving CPU Time
In general, there are two ways to reduce CPU time usage:
- Work on a smaller amount of data (that is, do less work).
- Operate on the data in a more efficient manner.
Faster for loops
In the following code snippets, we loop through the account records first using a traditional for loop with an integer index, and then using the iterator-style format (the same format used by a SOQL for loop):
// Traditional for loop using an integer index
List<Account> accs = AccountSelector.getAccounts();
Integer max = accs.size();
for (Integer i = 0; i < max; i++) {
System.debug(accs[i].Name);
}
// Iterator-style (SOQL for loop format)
for (Account acc : AccountSelector.getAccounts()) {
System.debug(acc.Name);
}
In testing, the traditional index-based loop runs faster than the iterator-style loop, so the index-based format gives the best CPU time performance, although it uses considerably more heap. This is an important example that highlights that when optimizing for one governor limit, we will often impact another.
Using maps to remove and reduce looping
The simplest use case is when retrieving a list of records from a query and then putting them into a map so that you can retrieve a record using its ID.
Map<Id, Account> accsById = new Map<Id, Account>([SELECT Id, Name FROM Account]);
Account myAccount = accsById.get(myAccountId);
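Maps can also remove nested loops when matching child records to their parents. The following is a minimal sketch, assuming the standard Account and Contact objects, that groups contacts by AccountId once so that each account's contacts can be looked up directly rather than by re-scanning the full contact list:
Map<Id, List<Contact>> contactsByAccountId = new Map<Id, List<Contact>>();
for (Contact con : [SELECT Id, LastName, AccountId FROM Contact WHERE AccountId != null]) {
    // Build the grouping once, rather than looping over all contacts per account
    if (!contactsByAccountId.containsKey(con.AccountId)) {
        contactsByAccountId.put(con.AccountId, new List<Contact>());
    }
    contactsByAccountId.get(con.AccountId).add(con);
}
for (Account acc : [SELECT Id, Name FROM Account]) {
    List<Contact> related = contactsByAccountId.get(acc.Id);
    Integer relatedCount = related == null ? 0 : related.size();
    System.debug(acc.Name + ' has ' + relatedCount + ' contacts');
}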
Reducing the use of expensive operations
If we wanted to retrieve the Name field of our account record and assign its value to a variable, we could do it in one of two ways: either reference the Name field statically or call the get method on the record, passing in the field name.
String accName = acc.Name;
String accountName = (String)acc.get('Name');
The second option is slower and more CPU intensive than the first one.
Another commonly used dynamic call is to the Schema class to retrieve information about metadata within the org. The simplest way to handle these situations is to cache values locally in variables for reuse: if you are making repeated calls to any describe information, cache the results in a variable outside the loop, which removes the need for repeated calls and reduces your overall CPU time.
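As a minimal sketch of this approach, the describe call below is made once and the resulting field map is cached in a local variable before the loop, rather than being requested on every iteration (the field name checked is only illustrative):
// Describe information retrieved once and cached locally for reuse
Map<String, Schema.SObjectField> accountFields = Schema.SObjectType.Account.fields.getMap();
for (Account acc : AccountSelector.getAccounts()) {
    // Reuse the cached map inside the loop instead of repeating the describe call
    if (accountFields.containsKey('industry')) {
        System.debug(acc.Name);
    }
}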
Reducing Heap Size Usage
The heap size is the amount of memory being used to store the various objects, variables, and state of the Apex transaction in memory as it is being processed. For synchronous operations, this is capped at 6 MB and is doubled to 12 MB for asynchronous processes.
Using scoping
We can either declare variables at a class level or within a code block (a method, loop, and so on) within Apex. Declaring a class-level variable means that the variable will be available in memory for the lifetime of the instance of that class, while variables declared in a code block will only be available for the scope of that block.
Structuring your code well into discrete functions with limited scope will help to ensure that your code avoids the heap size limit by allowing the Apex memory manager to handle memory effectively.
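The following minimal sketch (using a hypothetical AccountPrinter class) illustrates the difference: the list declared inside the method only lives for the duration of that method call, whereas a class-level list would remain on the heap for as long as the instance exists:
public class AccountPrinter {
    // A class-level list would stay in memory for the lifetime of this instance
    // private List<Account> cachedAccounts;

    public void printNames() {
        // Block-scoped: eligible to be reclaimed as soon as this method returns
        List<Account> accs = AccountSelector.getAccounts();
        for (Account acc : accs) {
            System.debug(acc.Name);
        }
    }
}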
Removing unwanted items
Let’s refer back to the code snippet for the traditional index-based for loop we used earlier. Simply setting the accs variable to null once we have finished with it reduces the heap size we are consuming.
List<Account> accs = AccountSelector.getAccounts();
Integer max = accs.size();
for (Integer i = 0; i < max; i++) {
System.debug(accs[i].Name);
}
accs = null; // release the reference so the memory can be reclaimed
It can therefore be useful to manually remove unwanted items from memory if you are working with large sets of data or the Blob data type and wish to free space to ensure that you do not hit the heap size limit. Combining this with scoping will help you ensure that the memory of your applications is well managed and, in most instances, you will have no concerns with the heap size limit.
Improving Query Selectivity
In order to achieve the best performance possible, we want to make our query as selective as possible to reduce the number of records returned.
The first thing that indicates selectivity is whether the field is indexed. The following types of field are indexed:
- Standard primary keys (Id, Name, OwnerId)
- Foreign key fields (CreatedById, LastModifiedById, lookup relationships, master-detail relationships)
- Audit fields (CreatedDate, SystemModstamp)
- Custom fields marked as unique or External Id
If a field is indexed, it will be considered for optimization. The optimizer then determines how many records the filter would return using that index and checks whether that number falls below the following thresholds:
- For standard indexes, the threshold is 30 percent of the first million targeted records and 15 percent of the remaining records, capped at a maximum of 1 million records.
- For custom indexes, the threshold is 10 percent of the first million targeted records and 5 percent of the remaining records, capped at a maximum of 333,333 records.
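As a hypothetical illustration, consider an object with 2 million records. A filter on a field with a standard index would be considered selective if it returned fewer than (30% × 1,000,000) + (15% × 1,000,000) = 450,000 records, which is below the 1 million cap. With a custom index, the same filter would need to return fewer than (10% × 1,000,000) + (5% × 1,000,000) = 150,000 records, which is below the 333,333 cap.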
If the indexed field meets these thresholds, it is considered selective and will be considered for optimization. Salesforce provides developers with the Query Plan tool in the Developer Console that will provide detailed information on whether a query is selective or not. To enable this in the Developer Console, open the Preferences menu and select the Enable Query Plan option. Once you have selected this option and saved, using the Query Editor pane, you can enter a query and use the Query Plan button to view metrics on the selectivity of a query. The statistics returned here inform us of which filter would be used (if any) and why. The columns presented have the following information:
- Cardinality: The number of records returned by this operation.
- Fields: The indexed fields used by the optimizer in this operation. This will be null if the field is not indexed.
- Leading Operation Type: The primary operation used by the optimizer to optimize the query: Index for an indexed field, Sharing for sharing rule-based control, TableScan if a full scan of the object occurs, or Other if an internal Salesforce optimization is used.
- Cost: A cost score for running the query. Any value over 1 is considered non-selective; we should always aim to have a query with a cost of 1 or less.
- sObject Cardinality: The approximate number of records on the object.
- sObject Type: The object we are querying.
The other key practice when defining WHERE filters is to use positive/inclusion operators (IN, =) rather than negative/exclusion operators (NOT, !=), as exclusion operations are not optimizable, except when using != null and != boolean. Wherever possible, therefore, use positive/inclusion operators to improve the chances of the query being optimized.
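As a hypothetical illustration, assuming the Industry field carries a custom index, the first query below uses an inclusion filter the optimizer can use the index for, while the second uses an exclusion filter that cannot be optimized:
// Inclusion filter: can use the index and be treated as selective
List<Account> targeted = [SELECT Id FROM Account WHERE Industry IN ('Banking', 'Insurance')];
// Exclusion filter: not optimizable, likely to result in a full table scan
List<Account> everythingElse = [SELECT Id FROM Account WHERE Industry != 'Banking'];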
Number of Queries
There are also a couple of simple ways in which you can reduce the number of queries you are running.
Retrieving child records with a sub query
If you are retrieving records and also require their child records, consider using a sub query where appropriate to retrieve all the necessary data at once. This is not always good practice; for example, when defining a Batch Apex scope, it is more performant to select the parent records in the scope query and retrieve the child records within each batch execution. However, if you are working with a selective query on the parent records and retrieving a small set of data for each returned record, a sub query can help you avoid additional loops, mapping, and queries.
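A minimal sketch of this pattern using the standard Account and Contact objects, where a single query returns each parent along with its related children:
// One query retrieves the accounts and their related contacts together
List<Account> accountsWithContacts = [
    SELECT Id, Name,
        (SELECT Id, LastName FROM Contacts)
    FROM Account
    WHERE CreatedDate = THIS_MONTH
];
for (Account acc : accountsWithContacts) {
    for (Contact con : acc.Contacts) {
        System.debug(acc.Name + ' - ' + con.LastName);
    }
}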
Cache results
If the data is not expected to change during the transaction, a developer can use the singleton pattern to cache the results that were retrieved. This is an extremely effective tool when retrieving setup-related objects (Profile, Holiday, Role), custom metadata, or custom settings. None of these items should change during the course of a transaction, and they are slow-moving data, making them comfortably cacheable for the duration of the transaction.
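A minimal sketch of this pattern, using a hypothetical ProfileCache class that lazily loads the profiles on first use and returns the cached map on every subsequent call within the transaction:
public class ProfileCache {
    // Static, so the map lives for the duration of the transaction
    private static Map<Id, Profile> profilesById;

    public static Map<Id, Profile> getProfiles() {
        if (profilesById == null) {
            // The query runs only on the first call in the transaction
            profilesById = new Map<Id, Profile>([SELECT Id, Name FROM Profile]);
        }
        return profilesById;
    }
}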
Platform Cache
Platform Cache is a powerful feature that can enhance the performance of applications working at scale with data that is retrieved frequently but changes rarely. A free allocation of 10 MB of cache is provided for Enterprise Edition orgs, and 30 MB for Unlimited and Performance Editions, with a greater allowance available for purchase. If you are working in an environment with a set of data that is retrieved regularly but does not change, and you have a Platform Cache allocation available, consider it as a possible enhancement to help improve your system's performance.
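A minimal sketch of using the org cache, assuming a Platform Cache partition named local.AccountData has already been created and allocated space:
public class AccountNameCache {
    private static final String CACHE_KEY = 'accountNames';

    public static Map<Id, String> getAccountNames() {
        Cache.OrgPartition partition = Cache.Org.getPartition('local.AccountData');
        Map<Id, String> names = (Map<Id, String>) partition.get(CACHE_KEY);
        if (names == null) {
            // Cache miss: build the map once and store it for later transactions
            names = new Map<Id, String>();
            for (Account acc : [SELECT Id, Name FROM Account]) {
                names.put(acc.Id, acc.Name);
            }
            partition.put(CACHE_KEY, names);
        }
        return names;
    }
}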