“sorry, car says no”

Autonomous cars relocate human incompetence from the driver to security engineers

Freelance journalist and copywriter, Textual Healing

July 2, 2019

The cybersecurity risks posed by the emergence of fully autonomous vehicles were as fascinating as any topic at IFSEC 2019.

Hot foot from the session on Secure by Design, Mike Gillespie, vice president at the Centre for Strategic Cyberspace + International Studies (CSCIS), was able to pick up the threads of that discussion for his introductory remarks on autonomous vehicles.

Speaking in the Future of Security Theatre powered by Tavcom, he invited the audience to cast their minds back a few decades to put recent technological advances in perspective. “I find myself living in the science fiction of my childhood”, said Gillespie, also founder of holistic security consultancy Advent IM.

Self-driving or autonomous cars were certainly a significant part of science fiction, in books and on film (with rather calamitous examples of the latter being The Fifth Element and Total Recall). As Gillespie points out, the future is driving towards us quickly. We’re already able to live stream films for our children in the back seat, for example.

More ominous is the ability of cars already on the roads to make decisions for you. Gillespie described how his own Volvo brakes if it doesn’t think you are braking hard enough and uses ‘lane correction’ if it thinks you have drifted into the wrong one.

We’re effectively at the midway point on the journey to full automotive automation. This is classically broken down into six levels, said Gillespie: No Automation (0); Driver Assistance (1); Partial Automation (2); Conditional Automation (3); High Automation (4); and Full Automation (5).

It’s expected that we will reach level 3 in 2020 and level 4 in 2025, with the market for connected and autonomous vehicles projected to reach £28bn by 2035.

A teenager hacked into the Polish city of Lodz’s tram network and derailed four vehicles

Gillespie admitted to an ‘anxiety’ around the journey, little wonder perhaps given his profession, but largely stemming from the fact we have to “unlearn the way we drive today, which is so unconscious and second nature.”

But in terms of reaching stage 5 – full autonomy (with no human interaction) – Gillespie said that AI is currently “at best machine learning and at worst a badly-trained puppy. It’s not until we have self-directed learning, self-fulfilling AI, that we will get a fully autonomous vehicle.”

The prospect of fully autonomous cars is increasingly attractive to the general public, according to research carried out by Deloitte. Nevertheless, Gillespie asked if this opportunity puts us “on the highway to hell?”

He cited examples of driver management systems being hacked in tests and researchers wresting control of a car and running it off the road. Gillespie also referenced a teenager in Poland who in 2008 hacked into the city of Lodz’s tram network and derailed four vehicles.

“Why are we allowing organisations to manufacture things in a shoddy way?” he asked. “And expecting us, as end users, to make things secure?”

To underline this point, Gillespie referenced the relay hack around keyless entry: the manufacturer’s response was to recommend that customers buy a pouch for their keys rather than undertaking to fix the problem. “What happens when we are talking about a fully connected and fully autonomous vehicle and their response is ‘well, don’t let it drive off the road’?”

After urging a secure-minded approach from manufacturers, Gillespie took us over another bump in the road: algorithms. While autonomy will reduce human error (responsible for 95% of all road traffic accidents according to House of Lords research), “who writes the algorithms to say what is right and wrong? Do I [the car] protect the driver or the person outside? Whose life is more important, yours or mine, yours or a child’s, a child’s or a pensioner’s? These are real decisions that we will have to start to make pretty soon.”

It’s not hard to imagine us having arguments with our car

The future could also see entry to a car via a fingerprint or via facial recognition. The car could even check your licence or car insurance.

If that sounds sensible, Gillespie asked what happens if it was a case of “sorry, car says no.” It’s not hard to imagine us having arguments with our car, he added.

More sinister was the prospect of fully autonomous cars encroaching beyond their remit. Gillespie referenced Isaac Asimov’s three laws of robotics – that a robot:

  • may not injure a human being or, through inaction, allow a human being to come to harm.
  • must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov’s stories often depicted robots dealing with conflict between these rules. These conflicts also led to a kind of consciousness, a quality that Gillespie pointed out is present in humans without being discoverable – so “how can we deny it in non-biological beings?”

And if robots can be conscious then one might expect them to feel increasingly superior as they evolve.

“Because we can doesn’t necessarily mean we should” were the words Gillespie left hanging in the air as he reminded us that we will have to make some big decisions about what stage we want autonomy to reach.

“The future potential is huge but so is the threat. Car manufacturers have never shown themselves inclined to go the extra mile in terms of security. We have to make sure that as the drive to technical innovation continues, so does the pace at which we secure these devices.”
