
The ethical and geopolitical implications of AI and machine learning


Freelance journalist and copywriter, Textual Healing

July 8, 2019


A panel of AI and security experts wrestled with the profound implications of AI and machine learning in the Keynote Arena at IFSEC 2019 recently.

The session was introduced by Dr David Sloggett, research fellow at the University of Oxford and recognised as one of the leading international authorities on the subjects of counter-terrorism, counter-insurgency and counter-piracy.

He started by recalling the first iteration of artificial intelligence in the 1970s and 1980s. “There were thought to be boundless opportunities but then problems started to emerge, for example in its ability to handle uncertainty.” Sloggett gave the example of diagnosing liver disease: doctors took differing approaches, and attempts to aggregate those approaches into a single algorithm ran into difficulty.

“After a period of quiescence, with technological advances and the development of neural networks, and multi-layered neural networks particularly, AI products have started to develop again.”

Interest in AI has taken off in all directions. Sloggett referenced the £150m private donation recently given to his institution – Oxford University – to investigate the moral dilemmas around AI, a technology that has, for example, reached the point where it can help convict people on the basis of facial recognition or the DNA of their relatives.

“There are huge moral dilemmas around this, and issues of data protection. However, as a scientist, that doesn’t mean I think we should stop exploring them – or the public’s reaction to them.


“I don’t think that the public yet fully understand the implications of the surveillance society we’re beginning to create, and I think some of them would be a bit nervous. I’ve often subscribed myself to the view that if you’ve got nothing to hide you’ve got nothing to worry about but, again, do the public understand the implications of all of that?

“From an Oxford perspective, we’re interested in the moral dilemma but we’re also interested in how AI can be exploited so we can do a better job in certain areas – e.g. helping doctors and radiologists more accurately identify the onset of lung cancer. Anything like that, for me, is a noble cause.”

SHERPA consortium

After this preamble on AI capabilities, Sloggett introduced David Wright, director of Trilateral Research, a multidisciplinary consulting and technology development company. Wright’s company is developing an AI/information warfare scenario as part of the EC-funded SHERPA initiative. The initiative as a whole examines the ethical, social and economic impact of AI and comprises five scenarios, of which ‘information warfare’ is one.

The SHERPA consortium consists of 11 partners from six countries – a mix of academia, industry, civil society, SMEs and even an artist. “We also had a stakeholder board of 30 independent people,” added Wright, “mostly academics, regulators and journalists, with a contact list of about 1,100 stakeholders on our SHERPA list.”

Projected forward to 2025, and involving various feedback phases and iterations (a final phase uses the Delphi study method) before being submitted to the EC in its final form, the warfare scenario maps the technologies and applications that will be available in 2025 and are likely to be propelling information warfare. The research then looks at the drivers as well as the barriers and inhibitors (“maybe public opinion, for example”), with the ethical, legal, privacy, social and economic impacts also examined before recommendations are made.

Issues covered included:

‘Is cyber war the same as kinetic war?’ – essentially, what is the difference between an air attack on a power plant and a virus attack?

‘Is a cyber attack on a Nato member an attack on all members?’

Wright gave the example of Estonia, which asked in vain for help after a Russian cyber attack.

‘Who should retaliate? Governments? Big companies as well?’

Wright added: “Information warfare is not just conducted by states. A lot of people are involved – crime organisations, terrorists, rogue employees; in fact, you can see that everyone is compromised. There are no white hats and black hats.”

‘Do we need the equivalent of a Geneva convention for cyberwarfare?’

“Should certain infrastructures like electricity and water be off limits?” added Wright.

“There are hundreds, thousands of attacks every day around the world. Governments and companies are spending huge amounts of money to protect themselves. Is this the kind of future that we want?”

The scenario has generated various recommendations, including:

  • Member states could take action along the lines of Sweden and Estonia, which have attempted to inoculate themselves against viral misinformation campaigns; citizen awareness should be raised; and notices of attacks should be made public.
  • Some retaliatory action is needed – for example, “if the US or a Western country were attacked it could demonstrate equivalent or greater power by switching off the lights in Moscow for one minute and, if attacks don’t stop, the threat could be extended.”
  • The EC should provide funding for studies on information warfare – in particular on how AI is used to spread misinformation, hate and lies, especially in undermining elections.
  • The rules of information warfare are not well defined. It would be useful to build on the Budapest Convention on Cybercrime (which, as yet, few nations have ratified).

Sloggett asked Wright how much the scenarios had dealt with Russia’s tactics of obfuscation, denial, false rumours and creating alternative sources of information.

“I think one of the questions that we obsessed about the most was what sort of retaliation should take place from the US in particular and also the UK”, Wright said. “Should they just sit back and take it?

“Should you retaliate in a similar way? For example, if you have evidence that the Russians are hitting our power plants, should we ask GCHQ to start planting malware in Russian reactors so that, if we are attacked, we can take them down? A stronger form of retaliation, it seems to me, is necessary. How long are you going to sit back and let somebody punch you in the face?”

Sloggett flagged that he preferred a soft-power approach that involved discrediting the source and making it irrelevant, thereby stopping fake news gaining traction. “Personally, I think we need to be smarter, faster, quicker at being able to refute what is blatant information warfare – for example, what the Russians have continued to pour out about MH17.”

Praising the speed of the UK government’s work in marshalling an international response to the Skripal affair, Sloggett stressed the importance of “getting inside the enemy’s decision cycle” – the OODA loop of observe, orient, decide, act. The key issue, he felt, was whether AI algorithms could help us get ahead of our enemies in that decision cycle and stop rumours gaining traction. “Once they gain traction, they are very difficult to refute.”

The last word went to David Wright. He picked up on Sloggett’s mention of ‘trusted sources’ and warned that deepfake technology will “hit us hard” and make identifying such sources “a lot more difficult, especially in an election cycle”.

 
