Recent TV shows such as Humans and Westworld paint a troubling picture when it comes to highly advanced robots. They show that, when anthropomorphic robots have no rights, humans are likely to abuse or demean them. For now this is just fiction, but should the time come, can we expect society to treat robots with respect?
By 2030…
800 million people are expected to lose their jobs to automation
14% of the global workforce will be required to switch occupational categories
(Source: McKinsey Global Institute)
Self-service checkouts and smart turnstiles are replacing human cashiers. Fulfilment robots are replacing human warehouse workers. Burger-flipping robots are even replacing fast-food cooks. If you haven’t heard already, robots are taking our jobs. And naturally, this is causing a lot of unease.
Our relationship with robots today
According to a recent Eurobarometer survey, 74% of EU citizens are worried that, because of robots and AI, more jobs will disappear than will be created. In the US, meanwhile, more people are worried than enthusiastic about robots doing human jobs: according to Pew Research Center, only 33% are enthusiastic about the prospect.
And while many experts predict the opposite – Gartner says AI will create more jobs than it eliminates – this doesn’t stop people from worrying about the future.
Bill Gates, Stephen Hawking and Elon Musk, among others, have warned against the rise of AI. They share deep concerns about the not-too-distant tomorrow when robots could rise up and threaten our very existence.
Despite this, the market is expected to experience dramatic growth. According to Accenture, which looked into the impact of AI in 12 developed economies, AI is expected to increase labour productivity by up to 40% and double annual economic growth rates by 2035.
This is because the potential benefits AI offers are huge. AI can help with decision-making, create operational efficiencies, speed up supply chains and ultimately change the nature of work.
According to McKinsey’s recent report A Future That Works, automation could:
- Raise global productivity growth by as much as 0.8-1.4% a year
- Help overcome demographic aging trends in developed and developing countries
- Improve business performance through increased profit, productivity, safety and quality
However, to reap the benefits of AI, addressing the existing challenges around the technology is a must. And a lot of this concerns the role of robots in our society.
Can robots ever be our equals?
A frustrated worker kicks a jammed photocopier. An angry shopper swears at an unresponsive self-service till. A toddler throws an iPad on the floor. In these scenarios, we feel no compulsion to pity the objects involved. After all, they’re only lifeless machines.
But what if these machines had human-looking characteristics? Would we still attack or abuse them, or would we treat them differently? As robots evolve from being inanimate objects to being intelligent machines, tricky questions surrounding their treatment begin to arise.
Our mixed feelings about robots
Research has found that, when robots demonstrate human-like qualities, we’re more inclined to feel empathy towards them.
However, if robots appear too realistic, these warm feelings disappear. Instead, we feel uneasy and perturbed. This phenomenon is known as the ‘uncanny valley’.
There’s more to the concept of the uncanny valley than meets the eye. Researchers have also looked at how human perceptions influence the acceptability of robots.
The study indicates that social behaviour that seems natural is fine coming from humans or a computer reading a script. However, when it comes from machines that appear to feel genuine emotions, it makes people uneasy.
Will robots develop emotion?
Like Bicentennial Man or Blade Runner, storylines of AI robots having human emotions run rife. These stories are powerful because emotion is often seen as the main differentiator between humans and machines.
But even if scientists had the power to replicate the chemicals in the brain that spur on happiness, anger, sadness or fear – should they?
Today’s robots operate rationally. They use algorithms and logic to make decisions. The benefit of this is that they are predictable and manageable. Rational robots are unlikely to act in ways they aren’t programmed to.
However, developing emotions in robots would also bring benefits. For example, care robots with emotions could improve people’s ability to connect with the machines. Emotions could make robots better at supporting people’s emotional, social and mental wellbeing. Emotion could also influence robots’ behaviour, potentially aiding ethical decisions.
Yet this raises the question: what makes people emotional? Our emotions are exhibited through physical responses. Our heartbeat quickens in situations of fear. Our serotonin levels rise when we’re happy. Our emotions are culturally ingrained and driven by chemicals as much as by character.
With robots being made of metal and plastic, can they ever truly experience human emotions?
Exploring consciousness
Perhaps it’s not a case of whether robots can have true feelings. Perhaps it’s about them knowing they exist. After all, most claims for robot rights revolve around the concept of consciousness. But defining consciousness itself has proved difficult.
In a recent review published in Science, scientists proposed three main types of consciousness:
The first level is C0. This refers to the unconscious operations that occur in the human brain, such as face or speech recognition. People are often unaware of these operations even taking place.
The next level is C1, where consciousness means making decisions, recalling past experiences and considering multiple possibilities. This ability to hold a thought, or a train of thought, is what guides conscious behaviour.
The final level of consciousness, C2, is based on the ability to be aware of one’s own thoughts. It is what leads to curiosity and motivation, as people recognise what they know and don’t know. The paper notes that while some robots have achieved aspects of C2, in that they can monitor their progress and learn how to solve problems, most AI machinery still operates at the C0 level.
While this is just one definition of consciousness, researchers hope these categories will help act as a roadmap for designing future AIs.
Robot rights: should they exist?
We’re clearly not there yet, but it’s a concept worth considering: if tomorrow’s robots can become self-aware and they can experience and process emotion – should the idea of human rights be applied to machines?
Some countries are already granting robots a certain level of rights. In 2017 Saudi Arabia became the first country to grant citizenship to a robot named Sophia, created by Hanson Robotics. A few weeks later in Japan, Mirai, a seven-year-old humanoid chatbot, became an official resident of Shibuya.
While these are considered PR stunts for now, they do highlight a possible future ahead. Critics noted Sophia would have more rights than most of the women in Saudi Arabia. Others commented on Japan’s history of mistreatment of its minority citizens and how unfair it was to give robots greater importance. Then there’s the case that the ongoing battle for universal human rights ought to be prioritised over those of robots.
Just like many things to do with AI, critics are divided about the right path forward.
Joanna Bryson is an associate professor in the Department of Computer Science at the University of Bath. She believes robots should be considered ‘slaves’. She argues that giving them rights would put robots and humans on an equal footing and, in turn, hinder humankind’s ability to fulfil its ambitions.
Robots should be built, marketed and considered legally as slaves, not companion peers.
Joanna Bryson
However, according to Kate Darling, robot ethics expert and research specialist at Massachusetts Institute of Technology, robots should be protected to discourage mistreatment – in a similar way to how animals are protected. For her it’s as much about protecting our own morality as it is the machine.
If we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may reinforce behaviour in ourselves that we generally regard as morally correct, or at least behaviour that makes our cohabitation more agreeable.
Kate Darling
Philosophy professor Eric Schwitzgebel goes even further. He argues that we will have a greater moral obligation to robots than to other humans, because they will be akin to our children.
We will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state. If a robot needlessly suffers or fails to reach its developmental potential, it will be in substantial part because of our failure – a failure in our creation, design or nurturance of it.
Our moral relation to robots will more closely resemble the relation that parents have to their children, or that gods have to the beings they create, than the relationship between human strangers.
Eric Schwitzgebel
Defining our relationship with robots will help us determine how to deal with robot rights. While we may own and control robots for now, there will come a future where people work alongside robots, and that future might even involve socialising with them and falling in love.
Figuring out whether we are robot owners, parents or equals will help us figure out how to best treat them, and if we should protect them.