From human-like robots that think and feel to towering digital metropolises, science fiction and its fantastical visions have helped wrap artificial intelligence in a sense of the otherworldly.
“It’s got a magic to it,” says Michael Rovatsos, Professor of Artificial Intelligence in the School of Informatics and Deputy Vice-Principal of AI at the University.
Born in Greece, Rovatsos spent much of his early life in Germany, and became interested in the science of making intelligent machines as an undergraduate student. It was the mid-90s, a time when AI was more of a dream than a reality, but he was drawn to the scale of that vision.
“I liked that a lot of it is about humans and understanding intelligence, piecing together what it is and how to replicate it in machines,” he says. “It’s also very interdisciplinary, drawing on all sorts of elements including maths, psychology and cognitive science, which I found exciting.”
After completing his PhD at the Technical University of Munich in 2004, Rovatsos moved to Scotland. He has been at the University of Edinburgh ever since, and is now Director of the Bayes Centre, the University’s innovation hub for all things data science and AI.
His own research focuses on the social elements of AI, understanding how computer systems with multiple AI-powered parts interact. At its core, it’s about trying to give intelligent machines the ability to reason, argue and resolve problems when faced with conflicting information or objectives.
While many everyday life activities come naturally to us, Rovatsos explains, they are very difficult for machines. “Getting dressed, loading the dishwasher or writing a short poem don’t seem like the pinnacle of intelligence, but they actually require a huge amount of background information. Humans do it naturally, drawing on all sorts of knowledge that doesn’t relate directly to a problem, but it’s very hard to get AI systems to do that, which is why at the moment they’re really only good at very narrow, single tasks.”
Most AI technology readily available today focuses on relatively simple things like image or speech recognition – think Siri, Alexa and automatic tagging of photos on social media – or on solving huge mathematical problems that are far beyond human capability. As impressive and useful as these things are, most of them don’t involve the use of common-sense knowledge or context. “Ultimately, the aim is to delegate more complex tasks to AI systems, but we’re really only at the start of that,” Rovatsos says.
A perennial talking point when it comes to AI is whether developments will one day lead to the creation of systems with intelligence that is on a par with the human brain. Machines with anything like that sort of intelligence are decades away at the earliest, says Rovatsos, who wonders whether it’s even something we should strive for.
“One way of looking at AI is as clever tools we can use to do incredibly useful things that make our lives easier, safer and more comfortable. They use genuinely ‘artificial’ intelligence; it’s just not at all like human intelligence. The ambition to have human-like AI is a totally different thing. While trying to learn how human intelligence works is certainly exciting, a lot of the pursuit is about the human condition and trying to better understand ourselves – it’s not that much about technology.”
Popular culture is teeming with stories about super-intelligent machines gone rogue, which is undoubtedly part of the reason why there is more than a hint of societal wariness when it comes to AI. But what truth is there to the fear that we could give machines too much power and that things could go, well, a bit Terminator?
“I feel that AI has been somewhat stigmatised,” says Rovatsos. “The key thing to remember is that some of the problems we’ve seen with AI haven’t happened because the systems themselves are smart, but because the work has involved techniques that might cause new challenges, for example when using historical data that may already be biased. There are so many systemic problems in society and sometimes systems based on data reflect those.”
There are plenty of examples of AI producing deeply flawed results, Rovatsos acknowledges, such as algorithms on social media platforms that labelled pictures of Black people as primates. Such issues have arisen, he says, because the tech sector doesn’t have a good history of dealing with risk. Its approach has often been to make things first and fix them later. “I’ve worked a lot on the ethical aspects of AI, and at the end of the day if you’re going to put new technology out there, you have to make sure it’s not harmful,” says Rovatsos.
Another application of AI that has caused alarm – particularly about the spread of misinformation – is deepfake technology, which can be used to create convincing but entirely fake photos or videos. This has a lot to do with the pervasiveness and speed of AI developments, but according to Rovatsos it’s nothing new. “When it comes to the manipulation of public debate and democracy, you could argue that’s the bread and butter of how propaganda, lobbyists and certain branches of the media have always worked,” he says.
“There are definitely challenges for AI developers and researchers to improve transparency, but a lot of other things are really about governance and regulation. Where there’s a public interest in authenticity, you should not allow things like deepfakes to be created unless it’s made clear they’re not real. They should be dealt with like counterfeit currency in the financial system.”
It’s easy to forget that many technologies that millions of people now trust with their lives every day were once in the early stages of development – and extraordinarily dangerous. “If you think about things like railways and aviation, it was all a bit of a Wild West initially,” says Rovatsos.
“How long did it take to get the first railway up and running safely, and what lessons did they learn in the process? One of the challenges they didn’t have – and the reason regulators, governments, industry and researchers are struggling today – is that AI is digital, which means that if something goes wrong it could affect millions of people virtually instantaneously.”
He argues that, from the standpoint of innovation, AI is not that different to other technologies. New medical devices, engines and other tools all undergo rigorous checks to ensure they meet industry standards, and Rovatsos says coming up with effective ways to safety-check digital AI tools will go a long way towards ensuring they are fit for purpose, and will improve public confidence in them.
The next major challenge in AI is to combine all of the impressive capabilities developed in recent years to create safe, responsible systems that can work autonomously in complex environments. The University is a leader in natural language processing and robotics, but also has great strengths in machine vision, reasoning, and human-AI collaboration.
“Truly autonomous AI will require bringing all these areas together to understand how more complex systems can be assembled from simpler ones with narrow capabilities. We have recently embarked on a major programme to achieve this through our new Edinburgh Lab for Integrated AI, and think this approach will enable us to make a step change in terms of the sorts of AI systems we can build.”
For these systems to make a positive impact on the world, Rovatsos says, they need to be deployed carefully in real-life applications, and we need to make sure they are safe and beneficial. Achieving that will require a wealth of expertise across numerous disciplines, which is why he believes the University can play an integral part.
“A great advantage we have at Edinburgh is our interdisciplinary strength,” he says. “As well as world-leading expertise in AI, making the next big leap will also involve ethicists, social scientists and lawyers to name just a few, and the University has a tremendously strong network across all of these disciplines.”
Edinburgh’s storied history of excellence in computer science and artificial intelligence stretches back decades. In 1963, Donald Michie, who famously worked as a codebreaker at Bletchley Park during the Second World War, set up the University’s Department of Machine Intelligence and Perception. With its founding, Edinburgh became only the third academic institution in the world to conduct AI research.
Michie developed one of the first programs capable of learning to play a game, and invented memoisation (his “memo functions”), a technique for speeding up computer programs by caching the results of expensive function calls so they never need to be computed twice.
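The idea behind memoisation can be shown in a few lines. This is a minimal modern sketch in Python, purely for illustration (Michie’s original 1968 work long predates the language): a wrapper stores each result in a cache, turning a naive exponential-time Fibonacci function into a fast one.

```python
# Minimal memoisation sketch: cache a function's results so that
# repeated calls with the same argument are answered from the cache.

def memoise(fn):
    cache = {}

    def wrapper(n):
        if n not in cache:
            cache[n] = fn(n)  # compute once, remember forever
        return cache[n]

    return wrapper

@memoise
def fib(n):
    # Naive recursive Fibonacci: exponential without the cache,
    # linear in the number of distinct calls with it.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed almost instantly
```

Python’s standard library now ships this very idea as `functools.lru_cache`, a small measure of how far Michie’s trick has travelled.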
In the same era, Sidney Michaelson set up a Computer Unit in Edinburgh, and was appointed to the first Chair of Computer Science in 1967. Michaelson was one of the first computer scientists to develop microprogramming, and later, with Harry Whitfield, built early versions of a multi-user operating system – a concept that is commonplace today but was unprecedented in the 1960s.
So, what might the future of AI look like? With so much untapped data in so many sectors, Rovatsos says the opportunities are huge. “Everybody is producing data all the time. In the last two years alone, we have collected more data than throughout the entire history of humanity, and that’s only going to increase.”
More sophisticated AI technologies will make our lives easier and safer. Developments such as smart care homes and operating theatres, and robots that keep people out of harm’s way in dangerous environments, are inevitable, he says.
Thinking bigger, Rovatsos believes that AI could enable us to do more to tackle global problems. “AI could potentially help avert things like financial crises, climate change impacts, poverty and public health emergencies,” he says. “The sheer scale of these problems makes them incredibly difficult to solve, but using AI to gather and analyse data could help us make better-informed decisions. AI could give us the power to solve these problems – but ultimately the humans will still be in charge.”
Picture credits: Callum Bennetts – Maverick Photo Agency