This article looks at how the security principles of "segregation of duties" (SoD) and the "principle of least privilege" (PoLP) can be maintained to reduce security risks when automating complex attended processes that involve more than one employee.
Candidate processes for automation often deal with sensitive data, for example, processing transactions involving bank details. If automation is implemented without proper security measures, it risks exposing the company to fraud and data leakage.
It is easy to imagine how security problems can be introduced when "citizen developers" are business users with little understanding of IT security best practice. But more experienced professionals can also find these risks hard to mitigate. Gartner recently noted:
"Even the most careful RPA design will generate privileged accounts and possible breaks in segregation of duties (SoD), which could lead to significant risks."
Governance and data security implications are becoming more complex to manage, as many companies look to increase the scope of what they want to achieve using automation. Automation is rapidly evolving beyond the routine task handling typically associated with robotic process automation (RPA). Organisations are starting to tackle end-to-end business processes, involving humans at points where they add value.
So how can organisations safely use automation technologies to accelerate complex processes that involve several people and handle sensitive data?
To answer this it is important to understand two key IT security principles and how they are impacted by the introduction of automation:
Segregation of duties (SoD) - Sometimes called separation of duties, this principle reduces the risk of fraud by splitting responsibility for parts of a process among different individuals. For example, if the same individual is responsible for both purchasing and checking in inventory, the potential for fraud increases.
Principle of least privilege (PoLP) - Users should have only the access permissions required to do their jobs. This reduces the chance of data being misused and helps ensure that confidential data stays private.
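The two principles above can be sketched as simple checks. This is a minimal, hypothetical illustration: the role names, actions, and conflicting-duty pairs are invented for the example and do not come from any real system.

```python
# Hypothetical sketch of PoLP and SoD checks; all names are illustrative.

# PoLP: each role is granted only the actions it needs.
ROLE_PERMISSIONS = {
    "purchasing_clerk": {"create_purchase_order"},
    "warehouse_clerk": {"check_in_inventory"},
}

# SoD: pairs of duties that must never be performed by the same person.
CONFLICTING_DUTIES = {("create_purchase_order", "check_in_inventory")}

def can_perform(role, action):
    """PoLP: a user may only perform actions their role explicitly grants."""
    return action in ROLE_PERMISSIONS.get(role, set())

def violates_sod(actions_by_user):
    """SoD: flag any user who has performed two conflicting duties."""
    for actions in actions_by_user.values():
        for a, b in CONFLICTING_DUTIES:
            if a in actions and b in actions:
                return True
    return False

# A purchasing clerk cannot check in inventory (PoLP)...
assert not can_perform("purchasing_clerk", "check_in_inventory")
# ...and one user performing both steps is an SoD violation.
assert violates_sod({"alice": {"create_purchase_order", "check_in_inventory"}})
assert not violates_sod({"alice": {"create_purchase_order"},
                         "bob": {"check_in_inventory"}})
```

In practice these checks would sit in an identity and access management layer rather than application code, but the logic is the same: permissions are granted per role, and conflicting duties are split across people.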
When processing is done manually, these principles are relatively easy to implement.
However, with the introduction of RPA, indirect system and data access for the people supervising the bots can substantially increase security risks.
It is generally accepted best practice to assign each RPA bot a unique identity and to ensure that an audit trail is generated. This creates accountability for actions and allows each bot to be granted the minimum permissions required for the task it performs.
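The practice above can be sketched as follows. This is a simplified, hypothetical example, not a real RPA platform's API: the bot identifiers, action names, and in-memory log are all invented for illustration.

```python
# Hypothetical sketch: each bot runs under its own identity, and every
# action is recorded in an append-only audit log before it is performed.
import datetime

audit_log = []  # in a real deployment this would be tamper-evident storage

def run_as_bot(bot_id, action, target):
    """Record who did what, to what, and when, then perform the step."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": bot_id,   # unique identity per bot, never shared
        "action": action,
        "target": target,
    }
    audit_log.append(entry)
    # ... the actual automation step would run here ...
    return entry

run_as_bot("bot-invoice-01", "read", "invoice/4711")
run_as_bot("bot-invoice-01", "update", "ledger/2024-Q1")
```

Because every entry names a specific bot identity, investigators can attribute each action to exactly one bot, and each bot's account can be scoped to the minimum permissions its task requires.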
Proper governance becomes even harder for attended automations that involve more than one human within a single process. Even with good audit logs and proper controls in place for bots, it is hard to ensure that data is not made available to all humans involved in the process.
This video shows how the SmartFlow tool handles this problem, using an example in which different users complete tasks during an employee onboarding scenario:
In the example, SmartFlow drives the process from start to finish, automating some steps and involving each employee for only the tasks that they need to complete.
Each employee receives an email containing a custom workflow for their part in the process. The workflow is configured so that, when run, it only accesses data held in systems the employee has been granted permissions for. The information they need to complete their tasks is easily available in a single view, and data needed by other individuals in the wider onboarding process is not compromised by being made available to everyone.
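The idea of a permission-scoped task view can be sketched in a few lines. This is a generic illustration of the concept, not SmartFlow's implementation: the roles, field names, and record below are all invented.

```python
# Hypothetical sketch: each participant's task view contains only the
# fields their permissions allow. All names and values are illustrative.

ONBOARDING_RECORD = {
    "name": "Jane Doe",
    "start_date": "2024-09-01",
    "bank_details": "GB00 XXXX",
    "salary": 52000,
}

VIEW_PERMISSIONS = {
    "it_admin": {"name", "start_date"},             # sets up accounts
    "payroll": {"name", "bank_details", "salary"},  # sets up payment
}

def task_view(role, record):
    """Return only the fields this role is permitted to see."""
    allowed = VIEW_PERMISSIONS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Under this model, the IT administrator's view never contains bank details or salary, even though both live in the same underlying record, so sensitive data is not exposed to everyone involved in the process.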
To ensure that security requirements are met, SmartFlow also provides a full audit trail so that activity can easily be monitored and investigated if needed. Proper controls are also in place to help ensure that Workflows and SmartFlow automations are developed safely and securely, with appropriate approvals where needed.