Adding a human layer to data security in government organisations
Written by Tony Pepper, CEO, Egress
Digital transformation in the UK public sector has accelerated. Essential measures implemented to control COVID-19 have disrupted analogue processes, such as sending a letter or fax; there’s no point sending a letter if there is no one in the office to open it. Digital processes have become the default as government employees work from home and citizens need to access public services and support remotely in order to stay safe. As this transition continues it’s inevitable that the expanding digital footprint of public services will increase the amount of digital personal data residing in public sector systems and handled by government employees. What also seems inevitable, but shouldn’t be, is the high risk of human-activated data leakage due to digitalisation.
Figures published by the ICO showed that central and local government organisations accounted for 12% of all reported personal data breaches in the second half of 2019; 92% of these were classed as “non-cyber incidents” attributed to human error or theft. Of these incidents, 23% were directly caused by mis-sent emails, failure to redact sensitive content or failure to use BCC.
Mitigating the risk of employee mistakes – and identifying those who deliberately leak data – must be an essential element of any organisation’s data security strategy. However, as recent Egress research shows, there is a broad range of reasons and scenarios in which employees leak data; identifying and understanding these will help frame how data security teams in the public sector need to respond.
The one in ten… ill-equipped, self-interested or under pressure
We surveyed 1,000 government sector employees to find out what kinds of situations lead to intentional and accidental insider data breaches. One in ten said they or a colleague had intentionally broken company policy in the past year. When asked why, one-third said that they took a risk because they hadn’t been provided with tools to share data safely. While it is hard to criticise those who are just trying to get the job done, this keen but risk-taking profile is an unfortunately common cause of breaches.
Less virtuous are the 27% who said they took data with them when they moved jobs. Our research has shown that workers have a very proprietorial attitude to the data and information they work on, frequently assuming that creation confers ownership. Just over one-third (34%) of government employees said they don’t think the organisation has exclusive ownership of data, which explains their predisposition to walking out of the door with it when a new career opportunity beckons.
For public sector employees who admitted to causing an accidental data breach, phishing was one of the biggest issues, with 28% clicking on a link in a phishing email and 8% responding to a spear-phishing email. However, more than one-third accidentally sent information to the wrong person. This seems to be a common problem in government organisations; overall, 41% said they had received an Outlook recall message or an email telling them to disregard a communication sent in error.
When asked why these mistakes had occurred, human factors are high on the list. One-fifth attributed their error to working in a pressured environment, 15% were tired and 19% were rushing. Some 14% said an incident happened because they were using a mobile device, a risk which will inevitably increase as home and mobile working continues in the current climate.
Tackling insider breach risk – too high a risk appetite?
The 113 government IT leaders we surveyed seem somewhat resigned to insider breaches: 78% thought employees had put data at risk maliciously and 86% thought they had done so accidentally in the past year. Looking forward, a quarter of them felt it was likely they would suffer a breach in the coming year. Despite this, fewer than half of IT leaders said they were using technologies such as email encryption, anti-virus and secure collaboration software to help prevent insider breach risk. There seems to be a sense that human-activated breaches are inevitable within the public sector, meaning risk appetite is effectively set to accept a 25% chance of a breach, a figure that the evidence to date suggests will prove far higher.
Addressing human layer security – bridging the current risk gap
So, the question is, how can we close that risk gap and better protect citizens’ data? It’s simply not possible for government IT leaders to “fix” most of the root causes of breaches. People will always get tired, stressed and rushed, given the pace at which organisations must operate, and even in “perfect” conditions, they’ll still make mistakes. Sadly, the gift of more time and resources is beyond the capability of security teams to deliver. Similarly, IT leaders cannot be the moral guardians against worker dishonesty. These are all human, not technological, failings and that is why we believe that a specifically human layer security programme is the only effective answer to mitigating insider breach risk.
Human layer security identifies the risk points in employees' working processes and ensures that there is a safety net to support them when they are vulnerable to tiredness, rushing and stress, preventing them from making mistakes. It also acts as a brake on employees who might be more reckless or dishonest with sensitive government data, protecting it against malicious leaks.
By using AI and contextual machine learning to identify what typical user email behaviour looks like, human layer security learns the normal sharing patterns, contacts and data types that flow between users and organisations. Once this benchmark is established, users are alerted when they deviate from their typical behaviour: perhaps they have been rushing and included an external recipient address in a usually internal email group, due to an incorrect suggestion by autocomplete. It also identifies when users are emailing a new contact or domain for the first time, which is particularly valuable in the case of phishing and spear-phishing emails, where the deception depends on users failing to spot tiny address changes. Alerts allow users to correct the error before a breach occurs, helping to build a culture of tighter security, but without adding cumbersome extra processes to an already busy employee's workflow.
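The recipient checks described above can be illustrated with a minimal sketch. Production systems like Egress's use proprietary machine learning models over rich behavioural history; the simple profile structure, function name and thresholds below are illustrative assumptions only, showing the two checks in their most basic form.

```python
# Minimal sketch of the two recipient-anomaly checks described above.
# The profile format and internal domain are illustrative assumptions,
# not a real product's (proprietary) model.

def check_recipients(sender_profile, recipients, internal_domain="dept.gov.uk"):
    """Return warnings for recipients that deviate from the sender's history."""
    warnings = []
    known = sender_profile["known_domains"]  # domains this user has emailed before
    external = [r for r in recipients if not r.endswith("@" + internal_domain)]

    # An external address in an otherwise-internal recipient list is the
    # classic autocomplete slip: everyone else is internal, one is not.
    if external and len(external) < len(recipients):
        for addr in external:
            warnings.append(f"External recipient {addr} in a mostly internal email")

    # First contact with a new domain is worth a prompt: spear-phishing
    # relies on users not spotting a lookalike domain they have never mailed.
    for addr in recipients:
        domain = addr.rsplit("@", 1)[-1]
        if domain != internal_domain and domain not in known:
            warnings.append(f"First email to domain {domain} - check the address")
    return warnings
```

For example, a recipient list of two internal colleagues plus one address at a never-before-seen lookalike domain would trigger both warnings, prompting the user to double-check before sending.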
When it comes to protecting the data itself, the key is knowing what data is about to leave the organisation. An intelligent solution that scans email and attachment content and identifies data such as personally identifiable information (PII) or bank account details can alert users that they are about to send information to an unauthorised recipient or without the correct degree of encryption. If the user persists, the risky email can be blocked from being sent and administrators alerted to a potentially intentional attempt to breach data, so they can respond accordingly.
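A content scan of this kind can be sketched with simple pattern matching. Real data loss prevention engines use far richer detection (checksum validation, contextual scoring, machine learning); the regular expressions below are simplified, assumed patterns for common UK identifiers, purely to show the shape of the check.

```python
import re

# Illustrative outbound content scan. These patterns are simplified
# assumptions for common UK identifiers; production DLP tooling is
# considerably more sophisticated.
PATTERNS = {
    "National Insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "UK sort code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "8-digit account number": re.compile(r"\b\d{8}\b"),
}

def scan_outbound(body, attachments_text=()):
    """Return labels of any sensitive patterns found in an outbound email."""
    found = set()
    for text in (body, *attachments_text):
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                found.add(label)
    return sorted(found)
```

If the scan returns any labels for a message addressed to an unauthorised recipient, the sending client can prompt the user, require encryption, or block the send and alert administrators, as the paragraph above describes.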
The key to human layer security is that it works with the users to support them when human factors intervene to introduce risk. It is the missing piece of the email data protection puzzle that means insider breach risk must no longer be accepted as an inevitable price of day-to-day operations.