Salesforce — Partitioned Trigger Handler Pattern

Josip Jurić
5 min read · Oct 31, 2020


If you have ever worked on a complex Salesforce implementation (one containing custom code), you know how tedious it can be to implement Trigger functionality. There are some common approaches to this problem, which all focus on implementing a Trigger-Handler pattern in one form or another. Trigger handlers are considered a best practice. Still, even a Trigger Handler has a major downside in larger implementations: the Trigger Handler class grows very quickly.

In this post I will propose another approach to handling triggers that addresses this issue while still keeping the Trigger-Handler pattern as a best practice.

Existing Trigger Handler patterns

In my experience, almost all existing Trigger Handler patterns follow a structure similar to this:

public class AccountTriggerHandler implements ITriggerHandler {

    public void handleBeforeInsert() {
        // ...
    }

    public void handleBeforeUpdate() {
        // ...
    }

    // other trigger-handler methods ...
}

Any code that should run in, for example, the before-insert Trigger is added to the handleBeforeInsert method. This pattern has many advantages, including better-structured code, maintainability, testability, etc.
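For completeness, the corresponding trigger body is typically just a thin dispatcher that delegates to the handler. A minimal sketch, assuming the AccountTriggerHandler shown above (the exact dispatch mechanism varies between frameworks):

trigger AccountTrigger on Account (before insert, before update) {
    // Delegate to the handler based on the current trigger operation
    AccountTriggerHandler handler = new AccountTriggerHandler();
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            handler.handleBeforeInsert();
        }
        when BEFORE_UPDATE {
            handler.handleBeforeUpdate();
        }
    }
}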

Issues with Trigger Handlers

As the Salesforce project/instance grows and more and more trigger functionality is implemented, the initial handle*-methods become too large and too complex to keep as they are. At that point it is worth refactoring a bit (although it should have been done from the beginning) and moving the code into separate methods, or even better, separate classes.

Separate methods only work up to a certain point: once the TriggerHandler class crosses into the KLOCs, the pattern loses many of its advantages.

Moving the code parts into dedicated classes is a good approach to handle the rising complexity, but will result in the following issues:

  • It becomes difficult to trace all trigger functionality, as it is spread across various classes which are not necessarily grouped in any way
  • The TriggerHandler still needs to keep a lot of code to maintain the structure, do some basic pre-filtering/grouping, call the other classes, etc.

It seems that, in order to keep the main advantages of a Trigger Handler (maintainability, testability, readability, etc.), the pattern itself must be adapted.

Partitioned Trigger Handler

The solution I am proposing is based on the following principles:

  • Separate Trigger-code by functionality blocks into dedicated classes — “Trigger partitions”
  • Minimise the size of the Trigger Handler to allow easier management of partitions (adding, removing, reordering)

With these principles in mind, I have developed the following solution.

Class diagram for the Partitioned Trigger Handler

Virtual class PartitionedTriggerHandler

The PartitionedTriggerHandler class is a virtual base class to be extended by each object-specific implementation of the Partitioned Trigger Handler. It declares the abstract method getPartitions(), which must be implemented by the concrete handler and should return a list of partitions. This method manages the partitions for a Trigger Handler and also defines the order in which they are executed.
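The post does not reproduce the base class source, so the following is only a minimal sketch of how PartitionedTriggerHandler could look (the real scaffold is on GitHub, see below). Two assumptions are worth labelling: in Apex a class that declares an abstract method must itself be marked abstract, and the run() method with its three-phase dispatch over the partitions is my interpretation of the orchestration.

// Minimal sketch of the base class (assumption; see the GitHub scaffold for the actual implementation)
public abstract class PartitionedTriggerHandler {

    // Implemented by each object-specific handler; also defines the execution order
    public abstract List<ITriggerHandlerPartition> getPartitions();

    // Called from the trigger; dispatches the current operation to all registered partitions
    public virtual void run() {
        List<SObject> records = Trigger.isDelete ? Trigger.old : Trigger.new;

        for (ITriggerHandlerPartition partition : getPartitions()) {
            // Skip partitions that are not registered for the current operation
            if (!partition.getOperations().contains(Trigger.operationType)) {
                continue;
            }

            // Phase 1: per-record data gathering
            for (SObject record : records) {
                partition.initialIteration(
                    Trigger.isDelete ? null : record,
                    Trigger.oldMap == null ? null : Trigger.oldMap.get(record.Id));
            }

            // Phase 2: bulk work (SOQL queries, calculations, DML)
            partition.main();

            // Phase 3: per-record changes based on the results of main()
            for (SObject record : records) {
                partition.finalIteration(
                    Trigger.isDelete ? null : record,
                    Trigger.oldMap == null ? null : Trigger.oldMap.get(record.Id));
            }
        }
    }
}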

Class <Object>TriggerHandler

For each object, a concrete class extending PartitionedTriggerHandler should be implemented. The concrete class should implement only the getPartitions() method, which keeps it as thin as possible.
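As an illustration (the class and trigger names here are hypothetical, and run() comes from the sketch above), an Opportunity handler registering the partition from Example 1 below, together with its trigger, might look like this:

public class OpportunityTriggerHandler extends PartitionedTriggerHandler {
    public override List<ITriggerHandlerPartition> getPartitions() {
        // Partitions are executed in the order they are listed here
        return new List<ITriggerHandlerPartition> {
            new Opportunity_TP_AccountType()
        };
    }
}

trigger OpportunityTrigger on Opportunity (before insert, before update,
        after insert, after update) {
    new OpportunityTriggerHandler().run();
}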

Interface ITriggerHandlerPartition

The interface ITriggerHandlerPartition is to be implemented by each partition of the Trigger Handler. This is the part of the pattern that is probably most open for discussion, but I am suggesting the following methods (a sketch of the interface follows the list):

  • getOperations() — returns a list of Trigger operations on which this partition should be executed (e.g. before-insert, after-update, etc.)
  • initialIteration(SObject newRecord, SObject oldRecord) — this method will be called for each record of the trigger, at the beginning of the trigger handler; it can be used e.g. for initial data gathering from the records
  • main() — contains the main functionality of the partition and is called only once, after all initialIteration calls; e.g. it can be used for the needed SOQL queries
  • finalIteration(SObject newRecord, SObject oldRecord) — this method will be called for each record of the trigger, at the end of the trigger handler (after the main-method); it can be used for e.g. making changes to the records based on calculations done in the main-method
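Put together, a sketch of the interface declaration (signatures matching the method list above and the examples below):

public interface ITriggerHandlerPartition {
    // Trigger operations (e.g. BEFORE_INSERT, AFTER_UPDATE) this partition runs on
    System.TriggerOperation[] getOperations();

    // Called once per record at the beginning of the trigger handler
    void initialIteration(SObject newRecord, SObject oldRecord);

    // Called once after all initialIteration calls; bulk work such as SOQL goes here
    void main();

    // Called once per record after main(); per-record changes go here
    void finalIteration(SObject newRecord, SObject oldRecord);
}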

Class <Object>_TP_<Name>

Last but not least, there are the partition implementations themselves, each implementing the interface ITriggerHandlerPartition. To keep the codebase well organised, a naming convention is certainly useful, e.g. <Object>_TP_<Name> (example: Account_TP_OpportunityUpdates), where “TP” stands for “Trigger Partition”.

Example implementation

Example 1 — Opportunity trigger partition, setting the value of the Account.Type to a custom field on the Opportunity (this can be useful e.g. for sharing rules).

public class Opportunity_TP_AccountType implements ITriggerHandlerPartition {

    Set<Id> accountIds = new Set<Id>();
    Map<Id, Account> accountMap;

    public System.TriggerOperation[] getOperations() {
        return new System.TriggerOperation[] {
            System.TriggerOperation.BEFORE_INSERT
        };
    }

    public void initialIteration(SObject newRecord, SObject oldRecord) {
        // Collect the parent Account Ids of the inserted Opportunities
        Opportunity opp = (Opportunity) newRecord;
        if (opp.AccountId != null) {
            this.accountIds.add(opp.AccountId);
        }
    }

    public void main() {
        // One bulkified query for all Accounts collected above
        this.accountMap = new Map<Id, Account>([
            SELECT Type FROM Account
            WHERE Id IN :this.accountIds
        ]);
    }

    public void finalIteration(SObject newRecord, SObject oldRecord) {
        // Copy the Account type onto the Opportunity (if it has an Account)
        Opportunity opp = (Opportunity) newRecord;
        if (opp.AccountId != null) {
            opp.Account_Type__c = this.accountMap.get(opp.AccountId).Type;
        }
    }
}

Example 2 — Account trigger partition, where a change of the Account.Type is propagated to all Opportunities of the Account.

public class Account_TP_UpdateOpportunityAccountType implements ITriggerHandlerPartition {

    Set<Id> accountIdsWithChangedType = new Set<Id>();

    public System.TriggerOperation[] getOperations() {
        return new System.TriggerOperation[] {
            System.TriggerOperation.AFTER_UPDATE
        };
    }

    public void initialIteration(SObject newRecord, SObject oldRecord) {
        // Remember every Account whose Type has changed in this update
        Account acc = (Account) newRecord;
        Account oldAcc = (Account) oldRecord;

        if (acc.Type != oldAcc.Type) {
            accountIdsWithChangedType.add(acc.Id);
        }
    }

    public void main() {
        if (accountIdsWithChangedType.size() > 0) {
            // Propagate the new Account type to all related Opportunities
            Opportunity[] relatedOpps = [
                SELECT AccountId FROM Opportunity
                WHERE AccountId IN :accountIdsWithChangedType
            ];
            for (Opportunity opp : relatedOpps) {
                Account acc = (Account) Trigger.newMap.get(opp.AccountId);
                opp.Account_Type__c = acc.Type;
            }

            update relatedOpps;
        }
    }

    public void finalIteration(SObject newRecord, SObject oldRecord) {
        // Nothing to do per record after main() in this partition
    }
}

Potential drawbacks

The described pattern comes with some potential drawbacks that would need to be evaluated in realistic scenarios, the main one being performance. Since the pattern requires an additional class “for each functionality” (the granularity is open to interpretation), a single trigger execution can end up calling a lot of classes, which could impact both the stack depth and the CPU time.

I have so far done no benchmarking to prove or disprove any performance drawbacks.

Conclusion

The suggested Trigger Partition pattern addresses some of the issues arising with larger Salesforce implementations, where usual Trigger Handler patterns tend to create large and complex classes that are difficult to manage. The solution is still in its early phases, and I would love to get feedback from the community. Let me know what you think!

The samples described above, and the underlying basic implementation of the pattern scaffold, can be found on GitHub.

If you need help implementing Salesforce, maybe we can help — contact us.
