In the ever-evolving marketing landscape, one technology is emerging as a potential linchpin: Gen AI. Joyce Gordon, Head of Generative AI at Amperity, recently joined industry leaders Rio Longacre, Managing Director at Slalom, and Jon Williams, Global Head of Agency Business Development at AWS, to dig into the key risks and the importance of setting boundaries when implementing a successful AI strategy.

Joyce Gordon, Amperity

When it comes to AI, it’s fair to say that we’re in a paradigm shift similar in magnitude to the evolution from desktop to mobile. As a result, over the next couple of years we’re poised to see new types of products, new business models emerging as costs and cost structures change, and new companies entering the market. But along with these developments come regulatory questions around privacy and legal compliance.


Generative AI: Risky business?

There’s obviously a lot of excitement and promise surrounding Generative AI, but it’s not without its challenges and risks. Longacre echoes this sentiment, saying, “Nothing is without risk. And Gen AI is no exception.”

He advises all brands to consider the following risks, rules and considerations associated with Gen AI and its usage:

  1. Generative AI needs a lot of content to be trained on. If any of that content is copyrighted, the copyright still holds, which means you have to be careful that anything you create is significantly different.
  2. Content created by Gen AI cannot be copyrighted.
  3. Under the new EU AI Act, AI-generated content needs to be watermarked so it can be identified as created by Gen AI.
  4. Without keeping a human in the loop, you could open your brand up to reputational risk.
  5. Have the right partners, processes and data foundation to position yourself strongly in this era. If you hold your own customer data and creative assets in one place, you can use them to train your Gen AI, so you’re not reliant on someone else’s copyrighted content.


“What’s going to be important are the tools you use and the partners you have. Make sure you’re using the right tools – don’t use the free ones. Spend a little more money, do your due diligence and pick ones that have digital watermarking capabilities,” Longacre advises.

“And remember, Gen AI is definitely not without legal risk. However, this is not an insurmountable problem. Partners like AWS have some great tools to help you.”


Williams chimes in, pointing out, “One of the most important things to consider from the start is making sure that your company-owned content is not being used to improve the base models or being shared with third-party model providers. Otherwise, you become part of their model, and whatever information you provided access to is integrated into their capabilities.

“The way we think about that at Amazon is that with Amazon Bedrock, your content is never used to improve the base models and is never shared with third-party models. It’s encrypted at rest, and you can actually encrypt the data with your own keys.”
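To make the “your own keys” point concrete, here is a minimal sketch of storing brand content encrypted at rest under a customer-managed KMS key before it enters any Gen AI workflow. The bucket name, key ARN and object path are hypothetical, for illustration only.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and customer-managed KMS key (illustration only).
BUCKET = "my-brand-creative-assets"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/your-key-id"

# Store creative content encrypted at rest with a key you control,
# so the data you later feed to Gen AI workflows stays under your keys.
s3.put_object(
    Bucket=BUCKET,
    Key="campaigns/spring/copy-draft.txt",
    Body=b"Draft campaign copy ...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KMS_KEY_ARN,
)
```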


AI and reputational safety

When it comes to safety, Williams cautions that brands should be implementing guardrails. “In terms of your reputational safety, make sure that you’re putting guardrails around the use of Generative AI, and make sure your marketing team has the opportunity to define a set of policies to help safeguard Generative AI applications. With Bedrock Guardrails, you can configure those policies. You can set a list of denied topics that are undesirable in the context of your application.

“For example, an online banking assistant can be designed to refrain from providing investment advice to the people who log in. Content filters can make sure you’re filtering harmful content across categories like hate and insults and, coming soon, even down to the specificity of individual words.”
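As a minimal sketch of what configuring those policies can look like, the boto3 example below creates a Bedrock guardrail with a denied topic and content filters, mirroring the banking-assistant example. The guardrail name, topic definition and blocked-response messages are illustrative assumptions, not AWS defaults.

```python
import boto3

bedrock = boto3.client("bedrock")

# Create a guardrail that denies investment advice and filters
# hateful or insulting content, per the banking-assistant example.
response = bedrock.create_guardrail(
    name="banking-assistant-guardrail",  # hypothetical name
    topicPolicyConfig={
        "topicsConfig": [{
            "name": "InvestmentAdvice",
            "definition": "Recommendations or guidance about specific "
                          "investments, securities or asset allocation.",
            "type": "DENY",
        }]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that topic.",
    blockedOutputsMessaging="Sorry, I can't share that information.",
)
guardrail_id = response["guardrailId"]
```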


The other thing to be really careful about, Williams cautions, is PII (personally identifiable information) redaction. “You can select a set of PII types to be redacted in the responses generated by your foundation models. In a customer environment, that’s incredibly important.

“The last thing you want is your customers talking to something that provides them with information it shouldn’t have shared. Then there’s indemnification: we actually offer uncapped intellectual property indemnity for copyright claims arising from the generative output of the Amazon Titan Image Generator and all of the images it generates,” he says.

“The Titan Image Generator also embeds an invisible watermark that can’t be cropped or edited out, so you can track how the images and models you’ve created are used in the future. Those are some of the things we’re putting in place to help with the security of a company’s data, as well as the reputational-risk guardrails that you need a strategy and tools to implement.”
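Picking up the PII point above, a guardrail can also anonymise sensitive information in generated responses, and is then referenced when invoking a model. A hedged sketch, assuming the hypothetical guardrail created earlier:

```python
import json

import boto3

runtime = boto3.client("bedrock-runtime")

# PII redaction lives in the guardrail's policies: when creating or
# updating it, pass e.g.
#   sensitiveInformationPolicyConfig={
#       "piiEntitiesConfig": [
#           {"type": "EMAIL", "action": "ANONYMIZE"},
#           {"type": "PHONE", "action": "ANONYMIZE"},
#       ]
#   }
# so emails and phone numbers are masked in generated responses.

# Apply the guardrail at inference time.
result = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    guardrailIdentifier="your-guardrail-id",  # returned by create_guardrail
    guardrailVersion="DRAFT",
    body=json.dumps({"inputText": "What is my account manager's email?"}),
)
print(json.loads(result["body"].read()))
```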


AI and the human touch

Longacre points out that every use case he shared has a human in the loop. Since we’re in the early days of AI, that’s not surprising as most brands are starting with ‘human in the loop’ use cases. This is where AI generates outputs that a person then approves and potentially refines. ‘Human in the loop’ use cases enable productivity gains while minimising risks arising from hallucinations or unexpected outputs.

“Maybe the copy is being written by Gen AI, but a human reviews it,” Longacre says. “The image might be generated, but it’s not being pushed out into the wild.

“We’re starting to see a little bit of that, but generally there’s human oversight. Even with chatbots. I mean, chatbots have been around forever, and most of them were machine-learning based. You need to know: when do you escalate? Where do you pass from the chatbot to a live person for certain use cases? Identifying that is still super critical.”
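A minimal sketch of what ‘human in the loop’ plus escalation can look like in code; the generator function, review queue and escalation topics below are stand-ins, not any particular vendor’s API:

```python
from dataclasses import dataclass

# Illustrative triggers for handing off to a live person.
ESCALATION_TOPICS = {"refund", "complaint", "legal"}

@dataclass
class Draft:
    text: str
    approved: bool = False  # flipped only by a human reviewer

def generate_reply(message: str) -> str:
    # Stand-in for a real LLM call (e.g. a Bedrock invoke_model request).
    return f"Auto-drafted reply to: {message}"

def send_for_human_review(draft: Draft) -> None:
    # Stand-in for a review queue (ticketing system, CMS workflow, etc.).
    print(f"Queued for review: {draft.text}")

def handle_message(message: str) -> str:
    # Escalate to a live agent before the model ever answers.
    if any(topic in message.lower() for topic in ESCALATION_TOPICS):
        return "ROUTE_TO_LIVE_AGENT"
    # Otherwise draft with Gen AI, but nothing is pushed out into
    # the wild until a human signs off.
    send_for_human_review(Draft(text=generate_reply(message)))
    return "PENDING_REVIEW"

print(handle_message("I want a refund"))       # ROUTE_TO_LIVE_AGENT
print(handle_message("What are your hours?"))  # PENDING_REVIEW
```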


Gen AI cost and customer risks

Beyond the legal and reputational risks that Gen AI poses, there are two more risks to consider: customer retention and satisfaction, and cost. For example, a couple of months ago I was trying to book a flight and hotel for a trip. I went through a whole conversation with a chatbot on the booking website and then, at the end, it wasn’t able to complete the booking.

It had asked me a lot of questions about my preferences, who I was traveling with and all of these other things. These were things it should have already known, as I’ve made many bookings with the site before. So I left feeling frustrated because I wasn’t able to make the booking at all, and the experience wasn’t personalised because the chatbot didn’t pull in any first-party data.

And back to the cost risk, which is often overlooked. If you think about something like conversational AI, each time it has to ask the user a question, that’s another request to the LLM API. If this happens once or twice, it’s no big deal; it costs a fraction of a cent. But at the scale of hundreds of millions of users, it becomes a huge business expense. To avoid this, brands must think about ways to integrate more first-party data, both to create a better customer experience and to reduce costs.
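A back-of-the-envelope sketch of that cost risk, using purely illustrative token counts and API prices:

```python
# Illustrative numbers only; real token counts and LLM API prices vary.
PRICE_PER_1K_INPUT = 0.003   # USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # USD per 1,000 output tokens
TOKENS_IN_PER_TURN = 500
TOKENS_OUT_PER_TURN = 200

def conversation_cost(turns: int) -> float:
    per_turn = (TOKENS_IN_PER_TURN / 1000) * PRICE_PER_1K_INPUT \
             + (TOKENS_OUT_PER_TURN / 1000) * PRICE_PER_1K_OUTPUT
    return turns * per_turn

users = 100_000_000  # hundreds of millions of users

# Re-asking known preferences (8 turns) vs. prefilling the prompt
# with first-party profile data (3 turns):
print(f"8 turns: ${conversation_cost(8) * users:,.0f}")  # $3,600,000
print(f"3 turns: ${conversation_cost(3) * users:,.0f}")  # $1,350,000
```

Fewer round trips is both a better experience and a direct cost saving, which is why first-party data matters here.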


Is your company making this common AI mistake?

According to Williams, one major oversight companies commit when implementing AI is neglecting to consider the “what”: specifically, identifying relevant use cases. Technology is a brilliant enabler, but it’s just one of the tools you can apply to real-world problems. So as an organisation, have the executive team work with their teams to identify the time-consuming, difficult or impossible problems that Generative AI could help solve. Then think small about the day-to-day irritations of your employees or your customers. What are their ‘paper cuts’ on an everyday basis, and how can you develop use cases to address those challenges?


Get very specific about exactly what you are trying to do and how you will track it. Also, make sure you have alignment across teams: a lot of how effectively Generative AI will be used is predicated on your technology stack and the data you have in your organisation. Making sure that your marketing organisation is talking to your IT organisation is therefore a critical step to take as a company.


Watch the full webinar here.


About the Author

Joyce Gordon is the Head of Generative AI at Amperity, leading product development and strategy. Previously, Joyce led product development for many of Amperity’s ML and ML Ops investments, including launching Amperity’s predictive models and infrastructure used by many of the world’s top brands. Joyce joined the company in 2019 following Amperity’s acquisition of Custora, where she was a founding member of the product team. She earned a B.A. in Biological Mathematics from the University of Pennsylvania and is an inventor on several pending ML patents.


About Amperity

Amperity delivers the data confidence brands need to unlock growth by truly knowing their customers. With Amperity, brands can build a unified customer profile foundation powered by first-party data to fuel customer acquisition and retention, personalize experiences that build loyalty, and manage privacy compliance. Using patented AI and machine learning methods, Amperity stitches together all customer interactions to build a unified view that seamlessly connects to marketing and technology tools. More than 400 brands worldwide rely on Amperity to turn data into business value, including Alaska Airlines, Brooks Running, Endeavour Drinks, Planet Fitness, Seattle Sounders FC, Under Armour and Wyndham Hotels & Resorts. For more information, please visit www.amperity.com or follow us on LinkedIn, Twitter, Facebook and Instagram.