ACCA UK calls for AI cybersecurity approach to emphasise global applicability

  • Leading accountancy body ACCA says the UK government’s proposed AI cyber code is a useful starting point for a global regulatory approach
  • Industry experts are best placed to manage the emerging and evolving range of cyber risks

Responding to a UK government consultation led by the Department for Science, Innovation & Technology on a proposed AI cybersecurity code of practice, ACCA says the government is best placed to set the overarching regulatory structure and principles, while those on the front line of AI development should be given the space to combat emerging cyber risks.

However, the pro-innovation approach of the proposed code – as set out in the government’s white paper – needs safeguards, and its requirements may need to be revisited over time. The cyber challenge in AI is dynamic, and a ‘point in time’ view can quickly become outdated.

ACCA also highlighted the risks and impacts to end users in small and medium-sized enterprises (SMEs), a segment in which a significant number of its members operate. The greater challenges this group of stakeholders faces on cyber readiness – in both skills and budgets – are well documented. ACCA wants end-user SMEs to be protected from cyber risk, yet empowered to adopt AI given its potential to augment business productivity.

Glenn Collins, head of technical and strategic engagement at ACCA UK, said: “ACCA is pleased to see the consultation taking a principles-based approach, as our current view of AI offers too many unseen scenarios. ACCA, its members and partners will be profoundly impacted by the planned use of AI, including equipping finance professionals with an optimal experience and skill set for the modern workplace.”

ACCA warned that adherence to any code carries costs – both direct and indirect, including impacts through the supply chain. Effort and expense will also be needed to raise awareness of the code, and to monitor and enforce it.

Narayanan Vaidyanathan, head of policy development at ACCA, noted: “We anticipate utility from such a code for those providing assurance or third-party verification of AI systems. This is an important category of stakeholders who will have a key role to play in creating a trusted AI ecosystem to supplement the regulatory and legal direction from policymakers.

“We do not anticipate this group being subject to the requirements of the code itself, but assurance requires checks against a well-defined and, ideally, publicly available standard – which this code could provide. Cyber risks are part of what the assurance of an AI system may need to check for. Therefore, those providing assurance would find such a cyber code and associated standards helpful.”

In its response, ACCA also called on the government to tackle the skills gap that must be closed to combat cybersecurity risks. The Apprenticeship Levy could be expanded into a ‘Growth and Skills Levy’ that is more flexible and can fund shorter-term accredited training programmes to upskill and reskill workers on the cybersecurity of AI.

Companies should also be allowed to transfer a greater proportion of their unspent levy funds to their supply chains – ACCA suggests raising the cap from 25% to 40%. This could unlock millions of pounds to develop AI skills.

Ultimately, tackling the cybersecurity issues linked to AI requires staff to be trained on current and emerging risks. Without sufficient training, standards and frameworks will fail to achieve any impact.

Read ACCA’s response here.

Visit ACCA’s website for more information.