Category Archives: Tech Thought Leadership

Putting on the AI guard rails: Experts reveal how to minimise risk

In the ever-evolving marketing landscape, one technology has emerged as a potential linchpin: generative AI. Joyce Gordon, Head of Generative AI at Amperity, recently joined forces with industry leaders Rio Longacre, Managing Director at Slalom, and Jon Williams, Global Head of Agency Business Development at AWS. Together they revealed the key risks and the importance of setting boundaries when implementing a successful AI strategy.

Joyce Gordon, Amperity

When it comes to AI, it’s fair to say that we’re in a paradigm shift similar in magnitude to the evolution from desktop to mobile. As a result, over the next couple of years, we’re poised to see new types of products. We’re going to see new business models emerge as costs and cost structures change. And we’re going to see new companies enter the market. But along with these developments come many regulatory questions across privacy and legal compliance.

 

Generative AI: Risky business?

There’s obviously a lot of excitement and promise surrounding Generative AI, but it’s not without its challenges and risks. Longacre echoes this sentiment, saying, “Nothing is without risk. And Gen AI is no exception.”

He advises all brands to consider the following risks, rules and considerations associated with Gen AI and its usage:

  1. Generative AI needs a lot of content to be trained on, and if any of that content is copyrighted, that copyright still holds. This means you have to be careful that anything you create is significantly different.
  2. In many jurisdictions, content created solely by Gen AI cannot be copyrighted.
  3. Under the EU AI Act, AI-generated content needs to be watermarked so it can be identified as created by Gen AI.
  4. Without keeping a human in the loop, you could open your brand up to reputational risk.
  5. Have the right partners, processes and data foundation to position yourself strongly in this era. If you hold your own customer data and creative assets in one place, you can use them to train your Gen AI, so you’re not reliant on someone else’s copyrighted content.

 

“What’s going to be important are the tools you use and the partners you have. Make sure you’re using the right tools – don’t use the free ones. Spend a little more money, do your due diligence and pick ones that have digital watermarking capabilities,” Longacre advises.

“And remember, Gen AI is definitely not without legal risk. However, this is not an insurmountable problem. Partners like AWS have some great tools to help you.”

 

Williams chimes in, pointing out, “One of the most important considerations from the start is making sure that your company-owned content is not being used to improve the base models or being shared with third-party model providers, because otherwise you become a part of their model. Whatever information you provided access to is then integrated into their capabilities.

“The way we think about that at Amazon is that with Amazon Bedrock, your content is never used to improve the base models and is never shared with third-party models. It’s encrypted at rest, and you can actually encrypt the data with your own keys.”

 

AI and reputational safety

When it comes to safety, he cautions that brands should be implementing guard rails. “In terms of your reputational safety, make sure that you’re putting guard rails around the use of Generative AI and giving your marketing team the opportunity to define a set of policies to help safeguard Generative AI applications. With Bedrock guard rails, you can configure those policies. You can set a list of denied topics that are undesirable in the context of your application.

“For example, an online banking assistant can be designed to refrain from providing investment advice to the people who log into it. Content filters can make sure you’re filtering out harmful content across categories like hate and insults and, coming soon, even down to the specificity of individual words.”

 

The other thing to be really careful about, Williams cautions, is PII (personally identifiable information) redaction. “You can select a set of PII types to be redacted in the generated responses coming from your foundation models. In a customer environment, that’s incredibly important.

“The last thing you want to do is have your customers talking to something that’s providing them with information it shouldn’t have shared. Then there’s indemnification: we actually offer uncapped intellectual property indemnity for copyright claims arising from generative output from the Amazon Titan image generator and all of the images generated by it,” he says.

“The Titan image generator also embeds an invisible watermark that can’t be cropped or compressed out. You can look at the use of the images or the models you’ve created in the future and make sure you can track those things accordingly. Those are some of the things we’re putting into place to help with the security of a company’s data, but also the reputational-risk guard rails that you need a strategy and the tools for.”
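PII redaction can be pictured as a post-processing pass over the model’s output before it reaches the user. This minimal Python sketch captures the spirit of the feature described above; the patterns, placeholder labels, and `redact_pii` helper are illustrative assumptions, not the actual service behaviour.

```python
import re

# Minimal PII-redaction pass over generated text.
# Patterns and labels are illustrative, not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognised PII with bracketed placeholders before the
    response is shown to the user."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or 555-123-4567."))
# -> Contact [EMAIL] or [PHONE].
```

Real services detect far more entity types (names, addresses, account numbers) with trained models rather than regexes, but the shape, scan the response, substitute placeholders, is the same.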

 

AI and the human touch

Longacre points out that every use case he shared has a human in the loop. Since we’re in the early days of AI, that’s not surprising as most brands are starting with ‘human in the loop’ use cases. This is where AI generates outputs that a person then approves and potentially refines. ‘Human in the loop’ use cases enable productivity gains while minimising risks arising from hallucinations or unexpected outputs.

“Maybe the copy is being written by Gen AI, but a human reviews it,” Longacre says. “The image might be generated, but it’s not being pushed out into the wild.

“We’re starting to see a little bit of that, but generally, there’s human oversight. Even with chatbots. I mean, chatbots have been around forever; most of them were machine-learning based. You need to know: OK, when does the escalation happen? Where do you pass from the chatbot to a live person for certain use cases? Identifying that is still super critical.”

 

Gen AI cost and customer risks

Beyond the legal and reputational risks that Gen AI poses, there are other risks to consider: customer retention and satisfaction, and cost. For example, a couple of months ago, I was trying to book a flight and hotel for a trip. I went through a whole conversation with a chatbot on the booking website, and at the end it wasn’t able to complete the booking.

It had asked me a lot of questions: my preferences, who I was traveling with, and so on. These were things it should have already known, as I’ve made many bookings with the site before. So I left feeling frustrated because I wasn’t able to make the booking at all through this experience. And it didn’t enhance discovery either, because it didn’t pull in any first-party data.

And back to the cost risk. This is often overlooked. But if you think about something like conversational AI, each time it has to ask the user a question, that’s another request that needs to be made to the LLM API. If this happens once or twice, then no big deal. It costs a fraction of a cent. But at the scale of hundreds of millions of users, this becomes a huge business expense. To avoid this, brands must think about other ways to integrate more first-party data to both create a better customer experience and reduce costs.
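A rough back-of-the-envelope calculation shows how quickly those per-question costs compound. The prices and token counts below are illustrative assumptions, not quotes from any provider.

```python
# Back-of-the-envelope estimate of how per-request LLM API costs compound
# at scale. Prices and token counts are assumed for illustration only.

PRICE_PER_1K_TOKENS = 0.002  # assumed blended $ cost per 1,000 tokens
TOKENS_PER_TURN = 500        # assumed prompt + response tokens per turn

def conversation_cost(turns: int) -> float:
    """Cost in dollars of one conversation requiring `turns` model calls."""
    return turns * TOKENS_PER_TURN / 1000 * PRICE_PER_1K_TOKENS

# One user answering 6 clarifying questions: fractions of a cent.
per_user = conversation_cost(6)
# The same flow across 100 million users becomes a real line item.
fleet = per_user * 100_000_000

print(f"per user: ${per_user:.4f}, at scale: ${fleet:,.0f}")
# per user: $0.0060, at scale: $600,000
```

Under these assumptions, pre-filling preferences from first-party data so the assistant needs two turns instead of six would cut that fleet-wide cost by two thirds, which is exactly the argument for integrating first-party data.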

 

Is your company making this common AI mistake?

According to Williams, one major oversight companies often commit when implementing AI is neglecting the “what”: identifying the relevant use cases. Technology is a brilliant enabler, but it’s just one of the tools you can apply to real-world problems. So, as an organisation, have the executive team work with their teams to identify the time-consuming, difficult or impossible problems that Generative AI could help solve. Then think small: look at the day-to-day irritations of your employees or your customers. What are their ‘paper cuts’ on an everyday basis, and how can you develop use cases to address those challenges?

 

Get very specific about exactly what it is you are trying to do and how you will track it. Also, make sure that you have organisational alignment. Much of whether Generative AI is used effectively is predicated on your technology stack and the data you have in your organisation. Therefore, making sure that your marketing organisation is talking to your IT organisation is also a critical step to take as a company.

 

Watch the full webinar here.

 

About the Author

Joyce Gordon is the Head of Generative AI at Amperity, leading product development and strategy. Previously, Joyce led product development for many of Amperity’s ML and ML Ops investments, including launching Amperity’s predictive models and infrastructure used by many of the world’s top brands.  Joyce joined the company in 2019 following Amperity’s acquisition of Custora where she was a founding member of the product team. She earned a B.A. in Biological Mathematics from the University of Pennsylvania and is an inventor on several pending ML patents.

 

About Amperity

Amperity delivers the data confidence brands need to unlock growth by truly knowing their customers. With Amperity, brands can build a unified customer profile foundation powered by first-party data to fuel customer acquisition and retention, personalize experiences that build loyalty, and manage privacy compliance. Using patented AI and machine learning methods, Amperity stitches together all customer interactions to build a unified view that seamlessly connects to marketing and technology tools. More than 400 brands worldwide rely on Amperity to turn data into business value, including Alaska Airlines, Brooks Running, Endeavour Drinks, Planet Fitness, Seattle Sounders FC, Under Armour and Wyndham Hotels & Resorts. For more information, please visit www.amperity.com or follow us on LinkedIn, Twitter, Facebook and Instagram.

 

 

 

Charting the Course of Cloud Computing and AI in 2024

Written by Dan Krantz, CIO at Keysight Technologies

AI will continue to disrupt and dominate in 2024, with generative AI remaining the poster child. Organizations will reevaluate their multi-cloud strategies and adjust workloads between providers to support the demand for compute power that AI is driving. These two mega technology trends will create many challenges and opportunities that CIOs must consider as they plan for 2024 and beyond. Below are a handful of predictions related to these areas that I expect to come to fruition.

2024: The Rise of Cloud High-Performance Computing

As traditional cloud capabilities mature, I predict the emergence of Cloud High-Performance Computing (HPC) in the next 12 to 18 months. Current HPC workloads typically run on on-premises supercomputing infrastructure, but cloud providers will bring supercomputing capabilities to the HPC market wrapped in the cloud-native characteristics of elasticity, programmable automation, and metered usage, democratizing the most compute-intensive scientific and engineering workloads.

Multi-Cloud Era: Rise of Agnostic Tools

The majority of organizations are predominantly multi-cloud rather than single-cloud. Cloud vendors have started to realize this and are now building better multi-cloud interoperability capabilities. One example is the recent Azure/OCI agreement between Oracle and Microsoft. This leads to organizations needing cloud-agnostic tools for observability, visibility, and quality assurance automation.

How AI is Disrupting the Cloud Market

With AI workloads requiring unprecedented GPU capacity, memory-intensive infrastructure, and next-generation alternative power and cooling, look for potential disruption of the top three cloud computing leaders: AWS, Azure, and Google Cloud Platform (GCP). For example, the second-generation Oracle Cloud Infrastructure (OCI) offers significant price and performance capabilities for GenAI. Additionally, Nvidia could rapidly disrupt the landscape with its own AI-specific cloud offerings.

Testing in the Brave New World of Generative AI

2023 has seen a wave of innovations on the back of generative AI solutions like ChatGPT. In 2024, this trend will continue to accelerate, with new offerings embedded into existing enterprise systems, transforming every digital experience from the employee to the customer to the supply chain. To prepare for this change, CIOs should take two specific actions. First, accelerate the fundamentals of master data management (MDM) and refine the unique language models with their own specific data sets. This requires pristinely clean data harvested for an organization’s training needs. Secondly, rethink the quality assurance processes. Having manual testing or static, predictable regression test libraries is no longer sufficient. Instead, CIOs must transform testing into continuous user experience assurance teams that leverage AI for exploratory, model-based testing injected with chaos engineering to surface unintended anomalies. These UX assurance teams will need new toolchains and independence from each project and digital product to continuously validate the desired user experience outcomes.

AI will ultimately determine business success, and in 2024, CIOs will need to develop an AI strategy that supports the business objectives. In addition, they need to foster a culture of cloud adoption and innovation. To realize this, enterprises need trusted and innovative technology partners to help them navigate and thrive in this new, intelligent era.

Don’t tell GenAI all Your Secrets: Leverage GenAI without Compromising Security

Written by Anurag Lal, CEO, NetSfere

Generative Artificial Intelligence (AI) became the latest phenomenon in November 2022 when the artificial intelligence lab OpenAI released a generative AI-powered chatbot called ChatGPT. According to a Reuters report, ChatGPT reached an estimated 100 million monthly active users just two months after launch.

Today, many enterprises are deploying the use of generative AI solutions like ChatGPT to automate responses to common questions, code a variety of apps, automate tasks such as writing emails and creating content, and more. A recent survey by VentureBeat revealed that more than half (54.6%) of organisations are experimenting with generative AI and 18.2% are already implementing it into their operations. This enterprise adoption of generative AI technology is fuelling an increase in the generative AI software market which S&P Global projects to reach $3.7 billion in 2023 and grow to $36 billion by 2028.

The use of generative AI technology holds a lot of promise for enterprises, but there are risks associated with integrating this technology into the enterprise stack. Organisations are concerned about shadow IT as different departments experiment and use it without the appropriate governance and control. There are also concerns over the potential of generative AI to displace or atrophy human intelligence, enable plagiarism, and fuel misinformation.  Indeed, in November 2023, Rishi Sunak held the UK’s first AI safety summit at Bletchley Park which discussed some of these challenges and the need for government testing of AI models.

While the use of any new technology carries some degree of risk, to get the most out of generative AI with the least amount of risk, organisations must take a secure approach to its implementation.

As the rapid adoption of AI in the enterprise continues, questions surrounding the accuracy of the technology and concerns about cybersecurity, data privacy, and intellectual property risk are why organisations like Apple, Samsung, Verizon, and some Wall Street banks are limiting or banning employee use of generative AI technology like ChatGPT.

 

Cybercriminals are using AI

A Salesforce survey of more than 500 senior IT leaders reveals that 67% are prioritising generative AI for their organisations over the next 18 months, but 71% of those leaders believe the technology is likely to introduce new security risks to their data.

Cybercriminals are honing their skills in using this technology to bolster their cyberattacks. In a call with journalists reported by PC Magazine, the FBI discussed how generative AI programs are fuelling cybercrime, with cybercriminals tapping into open-source generative AI programs to deploy malware and ransomware code, execute sophisticated phishing attacks, and create AI hallucinations.

The ability of generative AI technology like ChatGPT to seamlessly generate phishing scams without spelling, grammatical, and verb tense mistakes is making it easier to dupe people into believing the legitimacy of the communication. A 2023 report by Perception Point found that advanced phishing attacks grew by 356% in 2022. The report noted that “malicious actors continue to gain widespread access to new tools and advances in AI and Machine Learning (ML) which simplify and automate the process of generating attacks.”

 

Exposing PII data

The growing popularity of generative AI is also raising data privacy concerns. Enterprises must be careful about what information they feed into generative AI tools to avoid exposing sensitive or personally identifiable information. Because generative AI tools can share user information with third parties as well as use this information to train data models, this technology has the potential to violate privacy laws.

According to news reports, the US Federal Trade Commission (FTC) launched an investigation into ChatGPT’s creator OpenAI, focusing on its handling of personal data, its potential to give users inaccurate information, and its “risks of harm to consumers, including reputational harm.”

 

Intellectual property infringement

Information entered into a generative AI tool may become part of its training set, which can put users of the tool at risk of intellectual property (IP) infringement. Gartner highlights that tools like ChatGPT, which are trained on a large amount of internet data, likely include copyrighted material. The analyst firm warned that its outputs have the potential to violate copyright or IP protections.

There is no question that generative AI holds a lot of promise for enterprises, but to reap these benefits safely and securely, organisations must take steps to minimise the risks.

 

Mitigating the risk of generative AI in the enterprise

IT leaders must examine this technology to understand how accurate and useful generative AI is to their enterprise. A lack of transparency about what is happening on the back end of this new technology can make it difficult to determine if it is really useful for the organisation and establish its best use cases.

When using any external tool, it is important to review each solution provider’s terms of service, and data protection and security policies. It is also important to conduct due diligence to determine whether the tool uses encryption, whether data is anonymised, and whether the tool complies with regulations such as the EU’s GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act) and numerous other privacy regulations.

Organisations should develop and implement policies governing the use of AI in the workplace. The policy should not only spell out which tools employees are permitted to use but also what information employees are allowed to feed into them.

Enterprises should also equip their IT teams with tools that can identify what is generated by an AI like ChatGPT versus what is human-generated, especially as this relates to incoming “cold” emails. To further mitigate organisational risk, enterprises should make it a priority to routinely train and re-train employees on the latest cybersecurity threats associated with generative AI, with specific emphasis on AI-generated phishing scams. This training should also include cyber risk prevention measures as well as guidance on appropriate uses of AI in the workplace.

 

Using AI for competitive advantage

AI offers exciting opportunities for organisations but, as with any new technology, there are uncertainties and risks. By understanding these risks and taking steps to mitigate them, enterprises can more safely and securely deploy this technology to gain a competitive edge.

Double blow – Ransomware group denounces victims to American authorities

Written by Mark Molyneux, CTO for EMEA at Cohesity

The ransomware group AlphV says it has filed a complaint with the American Securities and Exchange Commission (SEC) because its victim, MeridianLink, did not report the group’s successful attack, which resulted in data loss. The pressure is growing on companies to get their response measures in order for when attacks succeed.

On November 15th, the hacker group AlphV added MeridianLink to its list of victims. The attack probably took place on November 7th. The group confirmed to the news portal DataBreaches that it had reported the company to the SEC.

Accordingly, AlphV wrote to the SEC: “We would like to draw your attention to a concerning issue regarding MeridianLink’s compliance with the recently adopted rules for disclosing cybersecurity incidents.” MeridianLink says it is investigating the cyber incident and possible consequences.

With this step, the ransomware group AlphV has broken new ground, highlighting the far-reaching consequences that companies can now expect if they are hacked.

This is effectively a quadruple-extortion ransomware attack: encrypt the data; exfiltrate and publish it; harass the data subjects; and finally, report the victim to the regulator.

It is understandable that companies want to initially downplay a successful break-in so as not to unsettle customers and the public, and to allow more time to investigate the incident in peace. However, with cybercriminals’ new manoeuvre, companies have less and less time to get their position in order, and they will need to be more open than they may want to be, as the threat actors will not tone down their reporting. It is essential to modernise emergency processes and procedures in order to be able to react quickly.

Companies already have a very short time to investigate the cyber incident, assess the data that has been compromised, and provide an accurate report to the regulator. With threat actors now showing the will to report the breach themselves, together with evidence of the actual data encrypted or exfiltrated, companies will find themselves under increasing pressure to index, classify and secure data such that they can themselves provide accurate reporting, but more importantly, so they know what has been lost and how to quickly replace that from their vault system.

Synergies for enhanced cyber resiliency

Organisations should consolidate their disparate application data silos onto a single, centralised data management platform based on a scalable hyperconverged file system. Stored data is then automatically analysed by deduplication and compression functions to achieve the highest reduction rates across the organisation.
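To see why deduplication drives those reduction rates, consider a toy sketch of content-addressed chunk storage: identical chunks are stored once and referenced by their hash. The chunk size and hashing scheme here are illustrative, not how any particular platform works.

```python
import hashlib

# Toy block-level deduplication: split data into fixed-size chunks, store
# each unique chunk once keyed by its SHA-256 digest, and keep an ordered
# list of digests so the original data can be reconstructed.
CHUNK_SIZE = 8  # illustrative; real systems use much larger chunks

def dedupe(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Return (unique chunk store, ordered list of chunk references)."""
    store: dict[str, bytes] = {}
    refs: list[str] = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store each unique chunk only once
        refs.append(digest)
    return store, refs

store, refs = dedupe(b"AAAAAAAA" * 3 + b"BBBBBBBB")
print(len(refs), "chunks referenced,", len(store), "stored")
# 4 chunks referenced, 2 stored
```

Three identical chunks collapse to one stored copy here; on real enterprise data (repeated OS images, backups, attachments) the same principle yields the large reduction rates vendors advertise.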

To protect stored data, such platforms take the Zero Trust model even further by implementing strict access rules and multi-factor authentication and by encrypting the data automatically, both in transit and at rest, to further guard against cyber threats like ransomware. They also generate immutable backup snapshots that cannot be changed by any external application or unauthorised user.

These backup snapshots are analysed by AI-driven algorithms to identify indications of possible anomalies. These can be passed on to security automation tools from vendors such as Cisco or Palo Alto Networks, in order to examine the potential incident in more detail.

Finally, modern data management platforms also provide more insight from data analysis thanks to integrated classification. Organisations can better understand their compliance risks by gaining visibility into their dark data, which according to Gartner accounts for between 55% and 80% of the data a company stores. They can then decide with confidence whether to keep certain records or delete them without risk.

All of these synergy effects found in a modern data platform enhance cyber resilience, reduce the operating and storage costs and help organisations to manage the growing volumes of their data in the long term.

The incident proves once again that, rather than the illusion of total cyber security, the focus must shift to operational cyber resiliency, where organisations can effectively respond to and withstand attacks. While preventative measures are important, they’re table stakes, not the winning hand, when an organisation is fighting cyber-compromises. There is a very strong case for taking a modern approach to backup and recovery of data with an ‘identify / protect / detect / respond / recover’ setup.

 

SIAM – bringing a layer of clarity to complex environments and vendor ecosystems

Written by Brian Horsfield, SIAM Operations Director, Xalient

A new approach is required

Today’s modern IT environment is complex and challenging for organisations to navigate. Not only has the threat landscape evolved, but we are seeing more compliance and regulation as well as ongoing economic pressures. This is all putting additional strain on already resource-stretched internal IT teams.

Wind back just a few years and the average IT worker was a skilled generalist who could confidently turn their hand to virtually anything that was technology-based. Fast-forward to the present day and, in our hybrid world with all kinds of tech at our disposal, be that on-premises, virtual, or in the cloud, IT teams now rely on more focussed specialist help to enable the day-to-day running of the business and to drive new innovations and digital transformation initiatives.

Even the largest of organisations with reasonable IT budgets and resources struggle to not only keep up to date with the latest IT trends, but also to justify keeping infrequently used specialist skills within their team. Consequently, many outsource different aspects of their IT landscape across many third parties, and this can lead to a fragmented and disjointed approach to vendor management and coordination.

 

Coordinating multiple suppliers and contracts

As a result, SIAM (Service Integration and Management) has gained a lot of traction in the last 5 to 10 years. For those less familiar, SIAM is the coordination of multiple suppliers and contracts around systems integration and management; it has emerged to help bring order to a chaotic vendor management challenge.

Widely recognised as an approach to multi-vendor management rather than a strict methodology, it is in effect an outsourcing service model designed to introduce the concept of a single, customer-facing logical entity known as the service integrator. The term is often used interchangeably with multi-sourcing services integration (MSI), and it provides guidance on good practice in managing multiple suppliers of IT services.

In many respects, it is an evolution from the typical activities that a managed service provider delivers, but it also incorporates a broader set of integrations across complex vendor management landscapes. In some cases, organisations may seek to appoint a SIAM provider for aspects of IT which they previously managed themselves but no longer have the skills, bandwidth, or resources to deal with internally.

 

SIAM in complex environments

SIAM is certainly applicable to organisations with a complex IT ecosystem, but it is not suitable for every organisation. I would also suggest it to those going through large-scale IT-enabled transformation programmes where managing the technology suppliers is becoming a problem. That doesn’t mean smaller enterprises couldn’t benefit from a lighter-touch offering, but complexity is the key requirement for SIAM services.

In the case of an MSP, who might be managing both internal and external providers, governance and co-ordination are critical for an end-to-end vendor-neutral SIAM approach. There must be a level playing field and all vendors must be managed in the same way. However, to deliver SIAM, the MSP needs to be agile and flexible enough to get to the nub of a range of requirements.

 

To some extent, a fresh perspective is required around the term, along with more education about the growing community of SIAM expertise and best practice, which is not as narrow as conventional IT service management practice. Today, SIAM is more about people and communications, and managed service providers looking to deliver SIAM will need to provide extra value and demonstrate the ability to react to and manage problems as they occur. The difference between a traditional MSP and one that provides SIAM services is the ability to orchestrate and coordinate multiple providers to deliver integrated, seamless services to the client. The goal is making sure the different service providers work together to meet the customer’s needs.

 

When and Why would a company adopt a SIAM approach?

There are several triggers that might prompt an organisation to go down the SIAM route. For example, the organisation has an unclear delineation of duties among its vendors, and there is a lack of cooperation and coordination between them. The organisation may be disappointed with the lack of innovation from its vendors, feel there is low accountability and/or transparency around vendor processes, and find that quality assurance is expensive and labour-intensive.

If there are many vendors to manage, with diverse IT services and the complexity of the services required is growing, the need for standards to measure vendor performance might be another reason for adopting SIAM services.  Likewise, if the organisation is divesting or going through an M&A or is in an industry that is heavily regulated, SIAM can bring clarity and efficiency to a complex legacy set-up.

Certainly, here at Xalient, our SIAM team works very closely with our professional services and managed services team.  In other words, one of our sub-vendors is Xalient and this means we must be as tough on ourselves as we are on the other vendors and additionally very transparent in the way our services are reported. That vendor-neutrality is critical for credibility and authenticity, especially if you are managing the team providing those services.

 

Delivering and demonstrating value

Demonstrating value from SIAM can be quite challenging, so it is important that the SIAM provider can show improvements in incident response and resolution across multiple vendors. As SIAM matures, it is key to see where efficiency gains will come from, such as automating support processes. Many of the benefits will come from the technology used to manage and monitor multiple vendor services. Typically, when supporting thousands of assets, there needs to be a high degree of automation and a single-pane-of-glass view for senior leadership, combined with a highly flexible back-end ecosystem.

In effect, the SIAM model addresses the assurance layer that organisations are looking for.  SIAM won’t be right for every organisation, but it certainly brings a layer of clarity to complex environments and diverse vendor ecosystems.  If you are currently considering SIAM but are not sure if the model is right for your organisation, below is a recap of the typical reasons an organisation may adopt a SIAM approach:

 

  • You have an unclear delineation of duties among vendors.
  • There is a lack of cooperation and coordination between vendors.
  • Low accountability and/or transparency of vendor processes.
  • You are disappointed by the lack of innovation among your vendors.
  • There is a low level of vendor reporting, and quality assurance is expensive and labour intensive.
  • You have many vendors, including duplicates, delivering diverse IT services.
  • The complexity of the services you need to provide is growing.
  • You have 24/7 operational needs that are difficult to fulfil.
  • Your focus is on project/transformation activities, not run activities.
  • You have no set standards for measuring vendor performance.
  • You are divesting/carving out or acquiring.
  • You are in a regulated industry.

 

API Security Threats Rising As Confidence Also Increases

Written by Shay Levi, CTO and co-founder, Noname Security

 

API security breaches are increasing, even as many organisations express confidence in their security strategies. Is there a disconnect between perception and reality?

APIs are the connective tissue linking applications and services in the modern enterprises that fuel today’s digital economy. But unfortunately, APIs are a lucrative target for attackers and our latest API Security report reveals these connections may be more vulnerable than companies realise.

We surveyed over 600 cybersecurity professionals and uncovered a troubling disconnect: 78% said they suffered an API security breach in the last 12 months, yet 94% expressed confidence in their security tools. This gap between perception and reality persists from our previous annual survey, indicating an ongoing lack of awareness of API security threats. As businesses rely on APIs more than ever, it’s essential to properly gauge risks and implement robust protections.

 

Key trends in API security

Our report highlights that API security-related breaches are rising, up from 76% in our 2022 report to 78% in 2023. This indicates a rising trend in API security attacks, despite high confidence levels. In 2023, the top attack vectors have shifted from dormant APIs and authorisation issues to Web Application Firewalls, network firewalls, and API gateways. So, while attack methods are continuously evolving, it would appear that security strategies are not keeping pace.

Perhaps more concerning, our findings show that 72% of organisations claim to have full API inventories, but only 40% know which APIs handle sensitive data. In my own analysis of hundreds of companies’ API landscapes, I’ve frequently seen organisations flying blind, lacking visibility into their inventories. Incomplete API inventories can lead to massive gaps in an organisation’s API security strategy. Security teams also need an API inventory to gain a realistic view of their attack surface and risk posture, helping them prioritise the wide range of API security activities that must be accounted for.

On a more positive note, the report revealed that 55% of organisations now perform real-time or daily API security testing, which is an increase from 39% in 2022, but this still falls short of matching the frequency of API security attacks. Over half of the organisations surveyed cited lost customer goodwill and churn of customer accounts after API security incidents. The financial and reputational damages resulting from these incidents can be catastrophic.

 

A lack of cohesion, leading to potential blind spots

One of the most significant disconnects highlighted in our report is across roles within organisations. While 84% of CTOs reported API security breaches, only 48% of application security specialists directly managing APIs acknowledged such breaches. Web application firewalls were identified as the top attack vector for AppSec teams, while others pointed to a mix of vectors including network firewalls and API gateways.

Interestingly, only 84% of AppSec professionals expressed confidence in their security tools compared to 95% of those in other roles. These mixed signals indicate a lack of cohesion and potential blind spots across security teams.

The ongoing rise in API security breaches over the past few years makes it clear this is not just a passing fad, but a serious issue that demands urgent attention. Our repeated survey findings demonstrate a consistent pattern of escalating API security attacks, rather than this being an anomaly or temporary spike.

 

The API threat landscape is intensifying

This data shows that the API threat landscape is only intensifying with time, as more hackers recognise the value of targeting these vulnerabilities. APIs now provide an extremely attractive vector for data theft, service disruption, and other cybercrimes.

Ignoring or downplaying these risks is no longer viable given the empirical evidence. Organisations must accept that API security threats are a pressing reality that can severely impact operations and reputation. Proactive mitigation of API security vulnerabilities needs to become an immediate priority across industries.

Companies can’t afford to be complacent or slow to respond as API attacks proliferate. The time to implement robust API security measures is now, before incidents spiral out of control. Prioritising this area and dedicating appropriate resources is imperative. APIs represent a clear and growing danger facing all enterprises in today’s digital ecosystem.

 

A complex picture of API security

Our report paints a complex picture of API security. Breaches are demonstrably increasing, underlining APIs’ importance as attack vectors. But confidence and readiness don’t align with mounting threats. Patchwork visibility and testing approaches leave major gaps. And differing perceptions across functions suggest a lack of holistic understanding and strategy.

API security can’t be an afterthought given the role of APIs in connecting vital systems and data. Companies must approach protection proactively, not reactively. That requires complete visibility and scanning of the entire API inventory along with robust monitoring and testing. Rapid development and deployment of APIs also demand that developers fix issues earlier in the process, before going live.
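One way to fix issues earlier in the process, sketched under assumptions rather than prescribed: a pre-deployment check that fails the build when an endpoint’s specification declares no authentication requirement. The spec fragment and the rule below are purely illustrative.

```python
# Hypothetical OpenAPI-style fragment: path -> method -> operation.
SPEC = {
    "/v1/users": {"get": {"security": ["apiKey"]}},
    "/v1/debug": {"get": {}},  # no auth declared: should fail the check
}

def unauthenticated_endpoints(spec):
    """List (path, method) pairs with no security requirement declared."""
    return [
        (path, method)
        for path, methods in spec.items()
        for method, op in methods.items()
        if not op.get("security")
    ]

issues = unauthenticated_endpoints(SPEC)
print(issues)
```

Wired into CI, a check like this stops the most basic gaps before an API ever goes live, rather than relying on runtime detection.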

Organisations should implement centralised API security centres to unify insights across teams. API security tooling should be able to offer a range of capabilities throughout the lifecycle and provide the necessary context to stop attacks and data exposures for an organisation’s unique API business logic.

 

As attack surfaces expand, enterprises can’t be complacent

As attack surfaces expand, enterprises can’t be complacent. They must accurately assess their API risk, make security a priority backed by budget, and bridge the gap between perception and reality. The coming year may be a watershed for API security as threats rise. Companies that align confidence with robust precautions will maintain their advantage. Those still underestimating risks may suffer the consequences.

In my experience, having a centralised API security team is crucial to connect visibility and insights across the organisation; API security is now a competitive advantage. Customers recognise and reward companies that invest in robust API protections. Enterprises absolutely cannot afford to underestimate API threats any longer – the time to shore up defences is now.

The CISO View: Navigating the Promise and Pitfalls of Cybersecurity Automation

New 2023 State of Cybersecurity Automation research reveals that while adoption is rising, lingering hurdles undermine its effectiveness

 

Written By Leon Ward, Vice President of Product Management, ThreatQuotient

Cybersecurity automation has steadily gained traction as organisations seek to improve efficiency, address talent gaps, and keep up with escalating threats. However, our latest State of Cybersecurity Automation research shows that while more businesses are utilising automation, they continue to grapple with obstacles that prevent them from fully capitalising on its benefits.

In our recent study surveying over 700 cybersecurity professionals, we uncovered several persistent pain points in implementing automation. The research found that a lack of trust in automated outcomes, insufficient expertise among users, and poor communication between teams have hampered automation success. As a result, organisations are struggling to build confidence in automation and maximise its effectiveness.

Lack of Trust Undermines Confidence in Automation

The research revealed ubiquitous struggles with implementing cybersecurity automation, with 100% of respondents reporting problems. The top issues undermining confidence in outcomes were lack of trust (31%), slow user adoption (30%), and bad decisions (29%).

However, when we drill down, CISOs differ from other leaders regarding specific challenges. 40% cite “bad decisions” as a top concern, versus 29% overall. With ultimate cyber risk accountability, CISOs feel the impact of poor automation outcomes.

Automated actions, such as incorrectly blocking legitimate emails or domains that merely appear suspicious, negatively impact the business. These errors erode user trust that automation improves security, and organisations become hesitant to rely on it.

For example, an automated system may erroneously block access to a legitimate business domain that some vendors use for email communication. Employees suddenly find themselves unable to communicate with key partners, and business operations grind to a halt. This not only negatively impacts revenue but destroys end user trust in the value and accuracy of automated security systems. Organisations then become extremely hesitant to rely on automation out of fear of these business-disrupting outcomes.

Without confidence in reliable automated outcomes, businesses will not entrust critical security processes to them. This 31% reporting lack of trust is a major obstacle preventing full realisation of automation benefits. Overcoming this requires solutions that provide transparency into automated decisions.
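As an illustration of what such transparency could look like, a simple policy layer might auto-act only on high-confidence detections, route anything on a business-critical allowlist to an analyst, and record a reason for every decision. The domain names, threshold, and return shape below are all illustrative assumptions, not a real product’s API.

```python
# Hypothetical allowlist of business-critical domains that should never
# be blocked automatically, plus an illustrative confidence threshold.
BUSINESS_ALLOWLIST = {"partner-mail.example.com"}
AUTO_BLOCK_THRESHOLD = 0.95

def decide_action(domain: str, confidence: float):
    """Return (action, reason) so every automated decision is auditable."""
    if domain in BUSINESS_ALLOWLIST:
        return ("queue_for_review", "domain on business allowlist")
    if confidence >= AUTO_BLOCK_THRESHOLD:
        return ("auto_block", f"confidence {confidence:.2f} meets threshold")
    return ("queue_for_review", f"confidence {confidence:.2f} below threshold")

print(decide_action("partner-mail.example.com", 0.99))
print(decide_action("evil.example.net", 0.97))
```

Returning the reason alongside the action is the transparency piece: analysts can see why automation acted, which is what rebuilds trust after a false positive.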

Skill Shortages Compound Adoption Difficulties

Insufficient expertise among security team members makes it challenging to implement automation effectively. Limited skills lead to misconfigurations, integration issues, and other problems. These glitches reinforce the lack of trust in outcomes reported by 31% of respondents. When automation fails unpredictably due to suboptimal implementation, organisations cannot reap its advantages.

With the cybersecurity skills gap still growing, and 25% of CISOs reporting the skills shortage as their biggest challenge, businesses often lack the personnel to deploy and manage automation tools adeptly. Since 23% of respondents sought training availability when selecting solutions, which is key for adoption success, it is clear that skills development should be a focus area for organisations looking to capitalise on automation’s potential.

CISOs point to organisational issues exacerbating these challenges: 25% cited high team turnover as their number one concern, disrupting the continuity of expertise and the skills needed to implement automation smoothly.

Achieving Lasting Buy-In Requires Clear Communication

The research revealed disconnects between roles on automation perspectives. 42% of CISOs cited efficiency as the top driver to adopt automation, while for SOC leads and MSSPs, regulatory compliance was prime.

These mixed viewpoints signify a lack of alignment on automation goals and direction. CISOs must bridge gaps through improved communication of automation plans and benefits. Setting clear objectives, educating all team members, and demonstrating tangible gains are critical for lasting buy-in.

When one specialised team implements automation in a vacuum, broader adoption lags. But inclusive messaging of how automation helps every role work smarter fosters shared buy-in.

Continuous engagement with stakeholders is also vital. Leaders must showcase automation enhancing efficiency, compliance, productivity, or other goals important to each executive.

With disjointed perspectives on its value and role, automation struggles to gain a foothold. Consistent, compelling communication of its advantages enables robust, organisation-wide backing of initiatives.

Smarter Tools and Processes Are Key to Overcoming Obstacles

The 2023 research makes clear that implementing cybersecurity automation still faces hurdles, with 100% of respondents reporting issues. However, smarter tools and workflows can help organisations overcome these challenges to realise automation’s potential.

One key need is for automation tools that provide transparency and guardrails, fostering user trust. Intuitive interfaces also enable easier adoption by users at all skill levels, mitigating the skills shortage cited by 23% as a top challenge.

Standardising processes around automation provides consistency needed to maximise benefits. Workflows like automated triage avoid the ad hoc approaches causing fragmented gains.
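As a sketch of what a standardised triage step might look like, every alert could pass through the same enrichment and prioritisation rules rather than ad hoc handling. The asset tags, severity weights, and priority cut-offs below are invented for illustration.

```python
# Hypothetical asset sensitivity tags; a real deployment would source
# these from a CMDB or asset inventory.
ASSET_SENSITIVITY = {"payroll-db": 3, "dev-sandbox": 1}

def triage(alert):
    """Attach a priority derived from severity and asset sensitivity."""
    weight = ASSET_SENSITIVITY.get(alert["asset"], 2)  # default: medium
    score = alert["severity"] * weight
    alert["priority"] = "P1" if score >= 9 else "P2" if score >= 4 else "P3"
    return alert

print(triage({"asset": "payroll-db", "severity": 3}))
```

Because every alert is scored by the same rules, analysts across teams see consistent priorities, which is precisely the consistency the ad hoc approaches lack.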

Integrations between tools create seamless data flows and unified workflows rather than disjointed toolsets. 24% want integration with multiple data sources when selecting automation solutions.

Implementing automation without addressing trust, usability, training, integration, and standardised processes invites disappointment. The research makes clear these smarter tools and workflows offer a path to overcoming obstacles and automation success. 

Automation challenges can be overcome

This exploration of the current cybersecurity automation landscape reveals persistent challenges that hamper organisations from realising its full advantages. Core problem areas include deficient trust in outcomes, skill shortages among staff, and internal disconnects about automation’s role and value.

By taking concerted action to increase confidence via transparency, boost team expertise through training, and align understanding of automation’s benefits via consistent leadership messaging, CISOs can overcome these hurdles.

With thoughtful adoption strategies, secure design principles, and inclusive change management, organisations can tap into automation’s immense power to enhance security in the face of growing threats. Through a combination of smarter tools, educated users, and clear communication, cybersecurity teams can achieve new heights of efficiency and effectiveness through automation.

However, achieving automation’s full potential is not a one-and-done effort. It requires an ongoing commitment to iteration and optimisation as technologies, threats, and business needs evolve. Regular evaluation of processes and tuning of systems helps sustain peak performance over time.

Leaders must also continually assess the human side of the equation. Check-ins with staff at all levels provide valuable insights to shape training programmes, change management tactics, and internal messaging in a way that maintains strong buy-in across the organisation. With personnel empowered and aligned around shared automation goals, organisations can nimbly adapt their approaches to maximise value.

Connected Cars — Safety gained or safety lost?

Written by Paul Mountford, CEO, Protegrity

Connected technology is everywhere and influences every part of our lives. According to the UK Department for Culture, Media and Sport, there are on average nine connected devices in every UK household, and the number of connected devices globally is estimated to grow to twenty-four billion by 2050.

While connected devices provide a range of benefits, there are now growing concerns around the data they are collecting, and the subsequent loss of consumer privacy. One very real example is the recent announcement from the California Privacy Protection Agency (CPPA), which advised that its enforcement division will review the data privacy practices of connected vehicle manufacturers, stating that they are “connected computers on wheels” and should be treated as such.

 

Data collected from cars is leading to a lack of trust

Today, cars are effectively mobile IoT devices that collect, use, and send staggering amounts of valuable data. Unfortunately, they are failing their users when it comes to data protection and data privacy, as consumers have little to no control over the sensitive data their vehicles collect, share, or sell. In fact, US car companies have reported sharing data with third-party organisations without driver consent and, as a result, only 12% of Americans trust the automotive industry. A recent report by the Mozilla Foundation reviewed different car brands for data privacy and security, and all 25 car brands researched earned the Mozilla ‘Privacy Not Included’ warning label – making cars the worst official category of products for privacy that Mozilla has ever reviewed.

Now, by probing the privacy practices of car manufacturers and vehicle technology companies, the CPPA is setting a new precedent for data security for the entire automotive industry, from manufacturers to OEM parts suppliers, technology partners to dealerships, and rental businesses to governments and consumers.

 

Valuable data helps to create improvements

The value of this data is important to the entire value chain including R&D, supply chain, safety, sales and marketing, city planning, and more.  The data collected offers an array of potential benefits for nearly all business partners in the automotive industry. For example, vehicle connectivity can help OEMs reduce warranty expenses and shorten reaction time to product flaws. Access to better data can also provide better insight into the reliability of suppliers, helping OEMs to select the best partners for their needs. Suppliers, in turn, can potentially optimise lead time and gain value from better-planned supply chains. For fleet owners, connectivity enables fleet health to be tracked to better predict potential imminent failures. It can also decrease breakdown time and improve control over vehicle maintenance.

Locking down data in an industry that is built to be mobile doesn’t necessarily work, but clearly the current access controls are not sufficient to protect sensitive data. If car manufacturers were to give drivers control over their personally identifiable information (PII) and make data privacy policies, including third party apps, a selling feature of the car – not an afterthought – this could go a long way to rebuilding trust.

 

Sharing data is no different than being tailed by a private detective

Ultimately data protection must evolve to protect automotive data wherever it is, including within the vehicle itself. Having information on where a car goes, where it has been, and what other cars were in the vicinity, may expose all kinds of PII and be a safety issue, as well as a human rights issue.

The details of where an individual drives their car could be some of the most sensitive personal information they might hold and could be used in all manner of ways. In effect, without more robust data privacy controls, it’s no different than being tailed by a private detective.

Add to this equation not only the apps within the vehicle, but also EV charging data. With 2.46 million EVs expected on the road by 2028, their data will be shared across the grid, from charging point operators to utility companies and governments. Again, this is where data security controls across the EV charging infrastructure must be tightened to ensure data is safeguarded from unauthorised parties and bad actors.

The EU has committed to regulate data shared between connected cars and is currently in the process of devising the remit of its Data Act, which will “ensure fairness in the digital environment, stimulate a competitive data market, open opportunities for data-driven innovation and make data more accessible for all”.   However, the European Automobile Manufacturers’ Association (ACEA) has warned: “Europe’s auto industry is committed to giving access to the data generated by the vehicles it produces. However, uncontrolled access to in-vehicle data poses major safety, (cyber) security, data protection and privacy threats.” Indeed, if vehicle data was more freely shared, these issues would certainly be at the front of our minds.

There will be 470 million connected cars on the road worldwide by 2025. To add to the challenges already highlighted, data moving between the car and the cloud is not always encrypted or regulated. This is where encrypting, anonymising and/or tokenising driver data helps to ensure it is safeguarded wherever it goes.
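As a sketch of what tokenising and coarsening driver data could look like in practice, the example below pseudonymises a vehicle identifier with a keyed hash and rounds GPS coordinates so exact routes cannot be replayed. The key handling is an illustrative assumption: production systems would use a managed KMS and a vetted tokenisation product.

```python
import hashlib
import hmac

# Assumption: in production this key lives in a KMS, never in source code.
SECRET_KEY = b"replace-with-managed-key"

def tokenise_vin(vin: str) -> str:
    """Deterministic pseudonym: the same VIN always maps to the same
    token, but it cannot be reversed without the key (HMAC-SHA256,
    truncated here purely for readability)."""
    return hmac.new(SECRET_KEY, vin.encode(), hashlib.sha256).hexdigest()[:16]

def coarsen_location(lat: float, lon: float, places: int = 2):
    """Round coordinates to roughly 1 km so precise movements are hidden
    while aggregate analytics (traffic, charging demand) still work."""
    return (round(lat, places), round(lon, places))

print(tokenise_vin("WVWZZZ1JZ3W386752"))
print(coarsen_location(53.48095, -2.23743))  # → (53.48, -2.24)
```

Determinism matters here: analytics teams can still join records per vehicle across datasets, without any party downstream ever holding the real VIN.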

 

Balancing the ability to tap into data with data privacy

At the end of the day, those in the automotive industry looking to embrace the potential of data analytics and AI systems across multiple cloud environments – which requires an exponential increase in the amount of sensitive data captured, stored, and analysed – shouldn’t be penalised. This is where technologies that enable enterprises to tap into the power of data to innovate and stimulate revenue growth, while also meeting consumer privacy and cross-border data flow compliance obligations, will become pivotal.

Oldham Council highlights the cost and scale of cyber attacks

Written By James Blake, Field CISO EMEA at Cohesity

Oldham Council has reminded us of the constant fight against cyber attackers and the financial cost of waging it. The Council recently announced that it is spending £682,000 on computer upgrades after bosses said they were fighting off 10,000 cyber attacks a day. As Deloitte points out in a recent global cybersecurity survey, most CISOs have an 80/20 budget split between likelihood and impact mitigations; the report says that only 11% of the budget goes into incident response/disaster recovery and infrastructure security. Rather than the illusion of total cyber security, the focus must shift to operational cyber resiliency, where organisations can effectively respond to and withstand attacks. While preventative measures are important, they are table stakes, not the winning hand, when an organisation is fighting cyber-compromises.

An abundance of technology and a lack of process

It is worth pondering for a moment how organisations approach recovery after a ransomware attack. It’s disheartening how often the public hears about scenarios in which an organisation’s response to a ransomware attack is to fall back on business continuity and disaster recovery processes and technologies built for scenarios such as severe weather, loss of power or misconfiguration. These traditional business continuity and disaster recovery approaches are, simply put, not suitable for cyber scenarios, where technology recovery efforts are actively targeted. Instead, organisations need to first investigate how the attack manifested itself and which vulnerabilities were exploited, so that these can be remediated while defences are bolstered. Then all malicious artefacts of the attack need to be removed from the recovered environment. Only then can recovered systems be brought back into production.

Traditional timelines to Recovery Time Objectives look very different in cyber recovery. If you recover without first understanding how you were attacked and how defences were circumvented, and without closing down that attack surface and removing all traces of the attacker, the chances are you’ll continue to be impacted. I’ve witnessed first-hand efforts to move to recovery too early, and the resulting elongated response cycle and continuing impact on operations. Back in the halcyon days when CISOs only had to deal with three secondary impacts from incidents (reputational damage, litigation and regulatory fines), this kind of response strategy could be tolerated. But with ransomware and wiper attacks, incidents now have a primary impact: the inability of an organisation to deliver its products and services.

Many organisations have an abundance of protective and detective security technology but a lack of process, resulting in a low level of operationalisation and integration. This situation used to be tolerable when impacts were secondary losses. But now, when an organisation faces primary losses that grow exponentially over time, there is a need to achieve resilience by empowering existing security solutions with better context around data and files, while bringing together the traditional silos of IT and security teams and technologies.

A data-centric focus on cyber resilience

To achieve this, the organisation should adopt a data-centric focus on cyber resilience, ensuring that data from an organisation’s diverse compute and storage environments is brought together providing the governance, detective, response and recovery capabilities needed to achieve a high level of resiliency.

This is logically sensible. After all, it is data that drives the business, data that adversaries want to steal, encrypt or wipe, and data that has compliance obligations. Set alongside this, the technology infrastructure is becoming a commodity with orchestration, cloud and virtualisation now readily accessible to help organisations manage and protect that data. Any approach to bring this data together and provide those governance, detective, response and recovery capabilities should do so in a manner that supports the wider security and IT ecosystem though integration and orchestration.

Being resilient means being able to withstand any and all possible threats: fire, flood, hurricane, misconfiguration, ransomware, wiper attack and many, many other potential eventualities. The ability to resume normal service with minimal impact and cost is critical.

Added benefits – practical and financial

Once an organisation decides it wants to take a data centric approach to cyber resilience, there are plenty of other benefits to be reaped beyond those just related to recovery from cyber-attack or downtime caused by other reasons.

Silos are removed, creating a level playing field for those who need to access and use data, and supporting remote collaboration and storage optimisation. Data can be made ready for more robust and fruitful search and use by AI and other tools. Further benefits follow:

Compliance is made easier because discovery can be streamlined.

Incident response, forensics and protection are made easier: diverse workloads can be addressed with the same teams and tooling, whether cloud, virtual, on-premise or hybrid; triage and investigation can be prioritised by the sensitive or regulated data discovered on systems by scanning inside the snapshots; incident timelines can be rebuilt using snapshots over time from compromised systems; and historical filesystems can be hunted for indicators of compromise.
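To illustrate the snapshot-hunting idea, the toy sketch below checks file content hashes across daily snapshots against a set of known-bad hashes, which also dates when a compromise first appeared. The snapshot structure and data are invented; a real platform would expose snapshot contents via its own APIs.

```python
import hashlib

# Illustrative snapshots: snapshot id -> mapping of file path -> bytes.
snapshots = {
    "2024-01-01": {"/bin/service": b"clean build"},
    "2024-01-02": {"/bin/service": b"clean build", "/tmp/drop.bin": b"payload"},
}

# Indicator of compromise: SHA-256 of known-malicious content.
KNOWN_BAD = {hashlib.sha256(b"payload").hexdigest()}

def hunt(snapshots, bad_hashes):
    """Return (snapshot, path) pairs whose content hash matches an IOC,
    in chronological order, so the first hit dates the compromise."""
    hits = []
    for snap_id, files in sorted(snapshots.items()):
        for path, content in files.items():
            if hashlib.sha256(content).hexdigest() in bad_hashes:
                hits.append((snap_id, path))
    return hits

print(hunt(snapshots, KNOWN_BAD))
```

Because snapshots are immutable, this kind of retrospective sweep works even after an attacker has cleaned up the live filesystem.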

Once these data-centric platforms are integrated into security operations, the improved effectiveness and efficiency of response and recovery delivers improved cyber resiliency.

Protection is made less complex too, as it is possible to clone production servers for restore, for breach and attack simulation work, for penetration testing, and for deception and vulnerability scanning. The ability to clone data allows for robust application security testing and development, using data sets that are as close to live as it gets without actually being live.

What all this boils down to is an approach which delivers resilience to traditional disaster recovery scenarios as well as cyber incidents and streamlined data management. It will by its very nature bring Cybersecurity and IT teams closer together, and may derive further, data-related benefits to the organisation. While it won’t get rid of all threats of cyber-attack, a resiliency-based approach should help organisations get back on their feet much faster if the worst happens.

Futureproofing – What the Biggest Cyber Threats are likely to be to Your Business in 2024

CYBER CRIMINALS will become more sophisticated next year – creating a wave of new threats for businesses, a leading expert has warned.

Roy Shelton, the CEO of the Connectus Group, said “businesses of all sizes” need to take steps to boost their defences.

Speaking to raise awareness in Cyber Security Month, Mr Shelton said:  “As attacks become more sophisticated, organisations need to evolve their approach to security to stay ahead of the game.”

 

According to Check Point’s cybersecurity predictions for 2024, the threats broadly fall into seven categories: artificial intelligence and machine learning; GPU farming; supply chain and critical infrastructure attacks; cyber insurance; nation-state attacks; weaponised deepfake technology; and phishing attacks.

The biggest threats which are set to emerge are predicted to include:

  • A rise of AI-directed cyberattacks: Artificial intelligence and machine learning have dominated the conversation in cybersecurity. Next year will see more threat actors adopt AI to accelerate and expand every aspect of their toolkit, whether that is more cost-efficient, rapid development of new malware and ransomware variants, or the use of deepfake technologies to take phishing and impersonation attacks to the next level.

 

  • Impact of regulation: There have been significant steps in Europe and the US in regulating the use of AI. As these plans develop, we will see changes in the way these technologies are used, both for offensive and defensive activities.

 

  • Hackers will target the cloud to access AI resources: As the popularity of generative AI continues to soar, the cost of running these massive models is rapidly increasing, potentially reaching tens of millions of dollars. Hackers will see cloud-based AI resources as a lucrative opportunity and will focus their efforts on establishing GPU farms in the cloud to fund their AI activities.

 

  • Supply chain and critical infrastructure attacks: The increase in cyberattacks on critical infrastructure, particularly those with nation-state involvement, will lead to a shift towards “zero trust” models that require verification from anyone attempting to connect to a system, regardless of whether they are inside or outside the network. With governments introducing stricter cybersecurity regulations to protect personal information, it will be essential for organisations to stay ahead of these new legal frameworks.

 

  • The staying power of cyber warfare: The Russo-Ukraine conflict was a significant milestone in the use of cyber warfare by nation-state groups. Geo-political instability will continue into next year, and hacktivist activities will make up a larger proportion of cyberattacks.

 

  • Deepfake technology advances: Deepfakes are often weaponised to create content that will sway opinions, alter stock prices or worse. These tools are readily available online, and threat actors will continue to use deepfake social engineering attacks to gain permissions and access sensitive data.
  • Phishing attacks will continue to plague businesses: Software will always be exploitable. However, it has become far easier for threat actors to “log in” instead of “break in”. Over the years, the industry has built up layers of defence to detect and prevent intrusion attempts against software exploits. With the relative success and ease of phishing campaigns, next year will bring more attacks that originate from credential theft rather than vulnerability exploitation.

 

  • Advanced phishing tactics: AI-enhanced phishing tactics might become more personalised and effective, making it even harder for individuals to identify malicious intent, leading to increased phishing-related breaches.