Generative AI: Productivity Dream or Security Nightmare?

Written by Frederick Coulton, Head of Product at CultureAI

The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fuelled by increased computational power and data availability, this AI boom brings with it opportunities and challenges.

AI tools fuel innovation and growth by enabling businesses to analyse data, improve customer experiences, automate processes, and innovate products – at speed. Yet, as AI becomes more commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up.

For many though, the benefits outweigh any risks. So, how can companies empower employees to harness the power of AI without risking data security?

Ready or not, AI is here to stay 

Generative AI (GenAI) tools have surged in popularity in recent years because this groundbreaking technology can produce content that appears strikingly human-made.

New GenAI tools are transforming how we work and create, revolutionising natural language generation, content creation, personalised recommendations, and innovative problem-solving. These models are reshaping our interaction with technology, unlocking new avenues for efficiency, creativity, and user engagement.

This wave of innovation is reshaping industries and cementing GenAI's status as a valuable asset for businesses and individuals alike. Given the rapid pace of technological advancement, many more compelling use cases and applications for GenAI are likely on the horizon.

Yet, not without risks  

As GenAI advances, it’s crucial to balance excitement with an awareness of the associated risks, particularly in the areas of data privacy and technology misuse.

Many organisations are finding that the number of employees accessing AI apps is growing exponentially. According to a study by Netskope Threat Labs, during May and June 2023, the number of enterprise users accessing at least one AI app daily increased by 2.4% each week.

Additionally, a recent Deloitte study revealed that 61% of employees are currently using or planning to use GenAI. Of those using it, 26% have not informed their managers, and 24% use it despite company bans.

The growing adoption of GenAI raises the risk of unintended data exposure. Security teams often have limited visibility into the data shared on these platforms, making it harder for businesses to strike a balance between innovation and minimising security risks.

Data privacy and leakage concerns 

One of the most pressing issues associated with GenAI is the risk of unauthorised data access and leakage, which arises from two main factors. First, AI models need large volumes of data to learn and generate content, and that data can include sensitive personal information protected by privacy laws, as well as copyrighted material used without permission.

Second, the various stages of AI training and deployment open multiple vectors for potential leaks or breaches, and cyber-attacks explicitly targeting these AI systems are growing more sophisticated.

For instance, a chatbot like ChatGPT requires users to provide relevant prompts to generate responses. During this interaction, employees might accidentally or intentionally share sensitive data. Once submitted, this data could be used to train AI models. And because the information is transmitted to and stored on external servers, it cannot be recalled or deleted once submitted.

Employees may upload sensitive data such as personally identifiable information (PII), intellectual property (IP), or financial data. This can lead to external exposure and leakage that damages the company's reputation. In one widely reported example from last year, Samsung workers unwittingly leaked confidential data whilst using ChatGPT to help them fix problems with their source code.
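To make the leakage path concrete, the sketch below shows one simple way a pre-submission check might flag common PII patterns in a prompt before it leaves the organisation. It is a minimal illustration, not a production data loss prevention tool: the pattern set, function name, and blocking behaviour are all assumptions for the example.

```python
import re

# Illustrative patterns only; real DLP tooling uses far broader detection
# (named-entity recognition, document fingerprinting, classifiers).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Please debug this: the customer email is jane.doe@example.com"
hits = flag_sensitive(prompt)
if hits:
    print(f"Held for review: prompt appears to contain {', '.join(hits)}")
else:
    print("Prompt passed basic checks")
```

Even a basic check like this illustrates the principle: the time to intervene is before the prompt reaches an external server, because after that point the data is out of the organisation's control.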

Misuse of technology 

The very attributes that make GenAI a powerhouse, such as the generation of credible and sophisticated content, also make it vulnerable to misuse. This technology can produce misleading and hard-to-detect media, such as deepfakes, that can be used maliciously. Its capabilities can be weaponised to deceive, defame, or defraud individuals and organisations, powering impersonation and fraud attempts such as phishing emails, as well as fake news.

Ethical considerations must form the core of GenAI deployment strategies. There is an imperative for organisations to develop guidelines and policies that govern the responsible use of AI.

Inaccurate or dangerous responses and hallucinations 

While most people are aware that GenAI can produce inaccurate images, such as giving people the wrong number of fingers, recent examples are emerging of GenAI responses that are inaccurate or downright dangerous.

For example, in May 2024, Google's AI Overviews feature briefly suggested, in response to a query about cheese not sticking to pizza, mixing non-toxic glue into the sauce. Further, a study from Purdue University in December suggested that 52% of GenAI answers to coding questions were incorrect.

Going forward: Gain real-time visibility to promote secure AI use  

With visibility into how employees use AI tools, organisations can provide the real-time coaching necessary for safe and effective use. Monitoring for the oversharing of sensitive data is crucial. Knowing when and by whom a risk occurs allows for effective mitigation and management.
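As a rough sketch of what "knowing when and by whom a risk occurs" could look like in practice, the snippet below records a risky-use event and turns it into an immediate coaching nudge. All names here are hypothetical, not a real product API; a real deployment would route the nudge through chat or email and feed the event into security reporting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageEvent:
    """A single risky-use event captured for real-time coaching."""
    user: str
    app: str                # e.g. "ChatGPT"
    detected: list[str]     # pattern names from an upstream scan
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def coach(event: AIUsageEvent) -> str:
    """Build an immediate, educational nudge rather than a reprimand."""
    kinds = ", ".join(event.detected)
    return (f"Hi {event.user}: your {event.app} prompt appeared to include "
            f"{kinds}. Please remove sensitive data before resubmitting.")

event = AIUsageEvent(user="alex", app="ChatGPT", detected=["email"])
print(coach(event))  # would be sent to the employee in real time
```

The design choice matters as much as the mechanics: the event captures who, what, and when, but the response is framed as coaching in the moment rather than punishment after the fact.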

To protect data privacy and curtail misuse, a determined effort that includes stringent security protocols, ethical guidelines, and continuous education is essential. Only with a comprehensive approach can GenAI continue to be an asset rather than a liability.

Organisations should empower employees to responsibly utilise applications like ChatGPT. These tools serve specific business needs, so instead of banning them or reprimanding users, promote secure use and educate employees about potential risks. With advanced technology and strong privacy policies, organisations can maximise AI’s potential while maintaining user trust.