Why AI regulation is just a matter of time

As artificial intelligence becomes a greater part of our everyday lives, the question of regulation is no longer one of if, but of how.

Today, AI-powered tools are transforming education through automated essay grading. They’re improving the outpatient experience with app-based care. And they’re alleviating our congested roads through smarter traffic control. It’s a revolution affecting all sectors and industries.

For the most part, the benefits are huge. AI has shown it has the power to increase our productivity, improve safety, reduce error rates and free up our time. However, we’ve also seen its darker side. Algorithms designed to serve us more personalised experiences have created echo chambers of bias, especially on social media. Amazon’s pricing model has been accused of promoting the company’s own products over others. And researchers have found that AI programs can exhibit racial and gender biases, inherited from the prejudiced data they are trained on.

As we continue to advance along our path towards AI-led social, political and economic transformation, complex questions surrounding market regulation, data usage and international cooperation emerge.

Attempts at regulating robots have been around for decades. The best-known example is the Three Laws of Robotics, devised by science fiction writer Isaac Asimov in 1942.

But just as robotics has made rapid advances in recent years, so have regulators. Governments and rule-making bodies around the world have been paying close attention to the need for better regulatory frameworks for AI.

AI regulation around the world

United States of America

In 2016 the White House published a report titled Preparing for the future of artificial intelligence. It warned that regulation could create a bottleneck for AI development, rather than advance it – especially concerning autonomous vehicles and drones.

Whether the Trump administration will act on these findings is uncertain; the topic has since been removed from the White House website.

More recently, a bipartisan group of senators and representatives introduced the FUTURE of AI Act, the first federal bill focused entirely on AI. The act would establish an advisory committee to make recommendations on AI and to examine the opportunities it presents and the impacts it may have on the US population.

European Union

The European Commission’s Legal Affairs Committee has called for EU-wide rules to govern AI. More specifically, the committee has urged the creation of a European agency responsible for AI regulation.

It also calls for the creation of a distinct legal status for robots and a mandatory insurance scheme to cover any harm caused by AI. It recommends the compulsory registration of all ‘smart autonomous robots’, and highlights autonomous cars as an urgent priority. The committee sees establishing global rules as important, stating that a fragmented regulatory approach would “hinder implementation and jeopardise European competitiveness”.

United Kingdom

In October 2017 the UK government published an independent review titled Growing the artificial intelligence industry in the UK. The report recommends that AI should be overseen, but not regulated, suggesting instead that AI applications could be influenced by guidelines proposed by the Royal Society.

The report also recommends the creation of data trusts: bodies that could advise on how the data used in AI systems should be handled. It states that such data trusts would not replace the Information Commissioner’s Office (ICO). It also advises building a framework to support the analysis of AI decision-making.

Japan

In 2015 the Japanese government announced a New Robot Strategy, which called for closer collaboration between industry, government and academia in advancing AI. At the 2016 G7 meetings, Japan pushed for a basic set of international rules on AI.

And while the Japanese government hasn’t yet set any AI regulations itself, it has put forth principles its AI scientists and engineers should adhere to. The Japanese Society for Artificial Intelligence (JSAI), comprising representatives from the academic world, government and the private sector, has recently published its code of ethics for AI research and development.

Other Countries

Elsewhere around the world, the regulatory landscape is patchy. And that makes sense: different countries are developing and deploying AI at different speeds. But as people come to depend more heavily on intelligent machines, the decisions algorithms make, and how they come to make them, will eventually face heavier scrutiny.

What the experts say

Some critics argue that regulation is a roadblock to innovation: rather than encouraging new ideas, it encourages workarounds. Others argue against regulation on geopolitical grounds, warning that rules adopted unevenly across countries would create an uneven playing field for AI development, a worrying thought for those in the global arms race for AI dominance.

Many, however, see regulation as inevitable, though experts are divided on the when and how of it all:

The time is now

Elon Musk, speaking to the US National Governors Association, recently said:

“Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry […]. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation.”

Basic principles must be laid out

Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, suggests three rules for the future of AI ethics. Writing for The New York Times, he states that:

  • First, an AI machine must be subject to the full set of laws that apply to its human operator.
  • Second, AI must clearly disclose to its user that it is not human.
  • Third, AI systems should not disclose confidential information without explicit approval from the source of that information.

He hopes these rules will prevent AI machines from engaging in criminal activity, assure people they’ll always be able to tell humans from machines, and ensure confidential information remains uncompromised.

Testing is needed

Latanya Sweeney, professor of government and technology in residence at Harvard University, proposes testing AI in much the same way physical consumer products are tested.

Speaking with PBS, she says: “The algorithms have to be able to be transparent, tested, and have some kind of warranty or be available for inspection.” She adds: “I want the people we elect controlling those norms, not the technology itself.”

Involve the public

Talking to the FT, futurist and philosopher Nick Bostrom says governments shouldn’t get too involved with AI just yet. “It would be premature today, I think, to try to introduce some regulation,” he states. However, he concedes there are areas, such as autonomous cars, surveillance and data privacy, where government regulation could help steer the industry in the right direction.

He advocates for more public discussion about the practical ethical implications of new technology.

It’s still early days

Eric Schmidt, chairman of Alphabet, says it’s still too early for regulation. According to The Verge, he argues that a system in which companies have to share their algorithms with the government for vetting would fail. He says the US needs to “get [its] act together as a country” first, and that regulation is a side issue that shouldn’t stand in the way of a clear AI strategy involving both private and public bodies.

What tomorrow holds

We are on the cusp of an AI revolution, and that puts society in an advantageous position: there is still a chance to stop AI systems from acting in ways they were never designed to. Parallels can be drawn with the internet. In its early days, developers failed to consider security, and hacking, malware and viruses ran rampant. With greater dialogue and proactive action, we have a chance to avoid repeating those mistakes with AI. Regulators, policymakers and the public must keep this in mind.

But with many countries yet to establish their own domestic regulations, we are unlikely to see international AI frameworks set any time soon. For now, what is clear is that governments around the world need to be proactive in raising awareness of both the benefits and the risks of AI. There must be greater collaboration between private technology companies and public bodies, and more involvement from a cross-section of society, if we’re to advance towards a harmonious AI future.
