“I once copied a national journalist into client communications instead of someone with the same name for the best part of three months.”
“I sent an email with financial details to the wrong company.”
“I sent confidential information to the wrong client. I just had to ask them to ignore the previous email.”
Approximately 269 billion emails are sent every day. At that rate – and given the technology was developed decades ago – you would expect we’d have got to grips with it by now.
Yet when we asked users to share times they’ve sent an email to the wrong person, we were inundated with real-life examples like those above. Sometimes the sender was just left red-faced and slightly inconvenienced, having sent a low-importance email to the wrong recipient, but an alarming number of responses involved personal and corporate data being exposed to unauthorised access. What’s worrying is how easy this mistake is to make, particularly with the autofill function in most email clients.
To date, when an email has been sent in error and encryption hasn’t been applied, a user’s best defence has been an Outlook recall or a similar follow-up message. Given the types of data staff share both internally and externally, this really isn’t good enough.
What’s more, email isn’t the only way staff can share information with unauthorised recipients, whether accidentally or maliciously. In an attempt to combat the rise of shadow IT, and specifically in response to internet file-sharing applications, many businesses have locked down access to such sites. Yet for many organisations this is a Catch-22: communication channels need to be available for work processes, but making them available exposes the organisation to risk. And if experience has taught us anything, it’s that users will always find a way to share information, even if it isn’t approved by the business.
Supporting users to make the right choices
The first step to solving this problem is to put users at the centre of the cybersecurity approach. Technologies and the ways we interact with them are constantly changing, but in this equation people remain constant – and so too does their ability to make mistakes. This needs to be about more than just training, which, although important, has limited impact if users won’t engage with the security technologies at their disposal.
Using machine learning and AI, however, it is possible to develop technologies with user engagement at their core. When using email, this involves learning from historic interactions – such as the recipients or groups of recipients users normally share data with. If in future the user then adds an incorrect person to an email thread, an alert can warn them they are about to make a mistake. Additionally, when users choose to override these alerts, administrators can be made aware of potential data breaches.
What’s more, the data derived from these and other cybersecurity decisions can be used to track, analyse and report on how users share information in order to drive understanding and adoption of data security. For example, when combined with other security measures, such as classification and encryption, users could be rewarded when they make good security decisions. This can also apply to under-use, aligning this with performance indicators to embed security and encryption into day-to-day processes.
It’s only by making secure electronic communication channels available that suit the ways users actually work that businesses can begin to tackle the threat their own staff pose to sensitive information. But this approach needs to be more than just training and tools; the technology itself needs to engage users and ensure they can get their jobs done while making good security decisions that protect their organisation and its sensitive data.