Professor Mary Aiken is a globally renowned cyberpsychologist and influential keynote speaker, known for her pioneering work on the impact of technology on human behaviour.
As Chair of the Department of Cyberpsychology at Capitol Technology University and a member of INTERPOL’s Global Cybercrime Expert Group, she brings a unique, research-driven perspective to the most pressing digital challenges of our time.
In this exclusive interview, Professor Aiken shares her insights on the psychological effects of online environments, the ethical design of technology, and how society can create safer, healthier digital spaces for the future.
Q: As the field of cyberpsychology continues to evolve, how do its core principles shape the insights and themes you share during your keynote talks?
Dr Mary Aiken: “In cyberpsychology, we have certain effects — for example, the online disinhibition effect — which describes how people do things online that they wouldn’t do in the real world. It’s a really important behavioural driver.
“We also have the power of anonymity online, which can be a very good thing, but it’s also a kind of superhuman power of invisibility. That comes with great responsibility and is not always used well by humans.
“There are positive attributes too, such as online altruism — for example, crowdsourced fundraising. The basic principle is that human behaviour mutates or changes online, and it’s very important to understand the impact of these changes.
“In my work as a cyberpsychologist, particularly on the professional speaking circuit, I get to speak to a broad range of sectors: technology, cybersecurity, infosec, financial services, education, e-commerce, and healthcare.
“All of these sectors benefit from gaining insights into the impact of technology on human behaviour — both from a user point of view and an operator point of view.
“I’ve been involved in many different research areas. For example, cyberchondria — a form of hypochondria manifesting online. We’ve all done it: you’ve got a headache — it could be anything from eye strain to a hangover — but all of a sudden you start Googling symptoms, escalate to reading about serious or morbid conditions such as brain tumours, and experience anxiety as a result.
“Another recent area of research is cyber fraud. We’ve seen legislation in the UK, such as the Online Safety Act, which specifically aims to address cyber fraud and criminal behaviour online.
“In this space, I’ve worked on many information campaigns focusing on one of my core areas of expertise: cyber behavioural profiling. We see lots of campaigns saying, ‘Don’t click on the link’, but I go further than that.
“I carry out semantic analysis — breaking down, for example, phishing texts, which are designed to prompt immediate action. I study the psychological drivers and emotional triggers that cyber fraud perpetrators attempt to manipulate in order to get people to reveal personal information.
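To make that concrete, here is a minimal, purely illustrative sketch of the kind of first-pass breakdown described above: a toy Python script that flags urgency and emotional-trigger language in a message. The trigger categories and cue phrases are assumptions chosen for demonstration, not a model Professor Aiken has published.

```python
import re

# Hypothetical lexicon of psychological triggers often discussed in
# phishing research: urgency, authority, fear, and reward. These cue
# phrases are illustrative assumptions, not an established taxonomy.
TRIGGER_CUES = {
    "urgency":   [r"\bimmediately\b", r"\bwithin 24 hours\b", r"\bact now\b", r"\bexpires?\b"],
    "authority": [r"\byour bank\b", r"\bHMRC\b", r"\bsecurity team\b"],
    "fear":      [r"\bsuspended\b", r"\bunauthorised\b", r"\blocked\b"],
    "reward":    [r"\brefund\b", r"\bprize\b", r"\bfree\b"],
}

def profile_message(text: str) -> dict[str, list[str]]:
    """Return the trigger categories found in a message, with the matching cue phrases."""
    found: dict[str, list[str]] = {}
    for category, patterns in TRIGGER_CUES.items():
        hits = [m.group(0) for p in patterns for m in re.finditer(p, text, re.IGNORECASE)]
        if hits:
            found[category] = hits
    return found

message = ("Your account has been suspended due to unauthorised activity. "
           "Act now: verify your details within 24 hours to claim your refund.")
print(profile_message(message))
# {'urgency': ['within 24 hours', 'Act now'], 'fear': ['suspended', 'unauthorised'], 'reward': ['refund']}
```

A real analysis would go far beyond keyword matching, modelling tone, framing, and call-to-action structure, but even this toy pass shows how a message can be scored against the emotional levers it pulls.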
“In terms of my speaking topics, I cover a broad range of areas, from human factors in cybersecurity and cyber behavioural profiling to the psychology of AI.”
Q: Emerging technologies like AI have triggered widespread debate and speculation. From your behavioural science perspective, how should stakeholders approach these innovations responsibly and realistically?
Dr Mary Aiken: “When it comes to technologies such as AI, we’ve seen many false dawns and moral panics. With the introduction of ChatGPT, for instance, everyone became very excited about chatbots — but in fact, chatbots have been around for a long time.
“Eliza was the first chatbot, developed in the 1960s. She was modelled on what’s called Rogerian psychology, which meant she was very good at eliciting information. She’d ask questions like, “How are you?”, and then follow up with “Tell me about your day.”
“The inventor of Eliza, Joseph Weizenbaum, was actually horrified when people working in the research lab started completely opening up to the chatbot, confessing all sorts of things. The program was shut down very quickly.
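For readers curious what that Rogerian pattern looks like mechanically, here is a minimal Python sketch in the spirit of Eliza. It is a simplified illustration, not Weizenbaum’s original program: it simply reflects the speaker’s pronouns back and follows up with an open-ended prompt.

```python
import random

# Swap first- and second-person words so a statement reflects back naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

# Open-ended, Rogerian-style prompts that invite further disclosure.
OPEN_PROMPTS = ["How are you?", "Tell me more about that.", "How does that make you feel?"]

def reflect(statement: str) -> str:
    """Mirror a statement by swapping person (e.g. 'my job' -> 'your job')."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # Echo the reflected statement, then prompt for more disclosure,
    # as Eliza's Rogerian script did.
    return f"Why do you say {reflect(statement)}? {random.choice(OPEN_PROMPTS)}"

print(respond("I am worried about my job"))
# e.g. Why do you say you are worried about your job? Tell me more about that.
```

Even this trivial mirroring can feel surprisingly attentive, which is precisely the effect that unsettled Eliza’s creator.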
“I had the pleasure of working on another chatbot — Jabberwacky — back in the 1990s. A colleague of mine built this stunning technology. The point is, we’ve seen continuous evolution in this space over time.
“Regarding the moral panic about AI replicating human intelligence and rendering humans redundant — I’m a behavioural scientist. We barely understand how the human brain works. The idea that we can build something to replicate or replace something we don’t fully understand is a flawed premise.
“Instead, I believe we need to ask: what is AI, and — importantly — what can it actually deliver for us?
“I prefer to conceptualise AI not as Artificial Intelligence, but as IA — Intelligence Augmentation. IA is based on the work of Licklider, a scientist who in 1960 wrote a brilliant paper titled Man-Computer Symbiosis. He described the symbiotic, interdependent relationship between human and machine.
“IA places the human at the centre of the process, and that, I believe, is the perspective we must adopt in order to get the best from machine learning and AI technologies.
“Going forward, of course there will be incredible developments in this space. I’m particularly fascinated by the convergence of quantum computing, machine learning, and AI. That could be the point at which we begin to approach something close to mimicking human intelligence.”
Q: You’ve spoken extensively about the ‘human factor’ in cybersecurity. Could you elaborate on the psychological dimensions that organisations often overlook when addressing digital threats?
Dr Mary Aiken: “Let’s start with cyberspace. People like me — cyberpsychologists — have been talking about cyberspace for around two decades now. In fact, in 2016, NATO officially recognised cyberspace as an operational domain, acknowledging that the battles of the future would take place not only on land, at sea, and in the air, but also across computer networks.
“The US military conceptualises cyberspace in three layers. First, the physical network — the hardware, cables, and infrastructure. Second, the logical network — which facilitates communication across those systems. Third, the cyber persona layer — that’s us, the humans.
“We’ve had 50 or 60 years of cybersecurity, and we’ve become very good at securing the physical and logical layers. However, the vast majority of cyber attacks are now facilitated through social engineering — and that’s not about technology, it’s about psychology.
“What we’ve seen is the emergence of a new sector, which falls under the broader cybersecurity umbrella — the online safety technology sector, or SafetyTech. I’m one of the founding members of this sector in the UK. Our mission is to develop technological solutions to technology-facilitated problems, such as harmful or criminal behaviour.
“So, we must factor the human into the equation — not only as users, but also as employees and as potential cyber attackers.
“When we think about the types of cyber attackers — from state-sponsored and state-condoned actors to hacktivists, activists, organised cybercrime groups, and sophisticated threat actors — we need resilient solutions.
“We want our data systems and networks to be robust and secure — but just as importantly, we need the humans who operate those systems to be psychologically robust, resilient, safe, and secure. That’s how we achieve 360-degree resilience.”
Q: With your experience speaking at global institutions like the UN, NATO, and INTERPOL, what lasting messages or capabilities do you hope audiences take away from your presentations?
Dr Mary Aiken: “As one of the world’s leading experts in cyberpsychology, I’ve had the privilege of speaking globally — from the White House to NATO, from the United Nations to INTERPOL.
“I’ve been invited to conferences across a wide range of sectors — cybersecurity, infosec, health tech, fintech, regtech, edtech, policy, and policing.
“The breadth and depth of this work demonstrate the relevance and importance of cyberpsychology on a global scale.
“My role is to equip attendees with the tools and skillsets they need to address problems at the intersection of humans and technology. I help people develop tech-based solutions to tech-facilitated problems — including harmful and criminal behaviours in cyberspace.
“My aim is to help people become more knowledgeable — and therefore more confident — in how they use and manage technology, so they can get the most out of it.
“Ultimately, my job is to help us all work together — in this shared environment of cyberspace — to create a safer, more secure digital world for everyone.”
This interview with Dr Mary Aiken was conducted by Mark Matthews.