
SELECT COMMITTEE ON ARTIFICIAL INTELLIGENCE
COLLATED ORAL EVIDENCE VOLUME

Contents

Alan Turing Institute and Professor Chris Hankin – Oral evidence (QQ 143–152)
Dr David Barber, Digital Catapult and NVIDIA – Oral evidence (QQ 38–45)
Jeremy Barnett, Professor Chris Reed and Professor Karen Yeung – Oral evidence (QQ 29–37)
Miles Berry, Graham Brown-Martin and Professor Rosemary Luckin – Oral evidence (QQ 181–189)
Professor Nick Bostrom, Professor Dame Wendy Hall and Professor Michael Wooldridge – Oral evidence (QQ 1–8)
Graham Brown-Martin, Professor Rosemary Luckin and Miles Berry – Oral evidence (QQ 181–189)
Dr Mercedes Bunz, UK Information Commissioner Elizabeth Denham and Dr Sandra Wachter – Oral evidence (QQ 55–64)
Eileen Burbidge MBE, MMC Ventures and Project Juno – Oral evidence (QQ 46–54)
Rory Cellan-Jones, Sarah O'Connor and Andrew Orlowski – Oral evidence (QQ 9–17)
CIFAR – Oral evidence (QQ 172–180)
Citizens Advice and Competition and Markets Authority – Oral evidence (QQ 85–94)
Competition and Markets Authority and Citizens Advice – Oral evidence (QQ 85–94)
Digital Catapult, NVIDIA and Dr David Barber – Oral evidence (QQ 38–45)
Dyson, Fujitsu and Ocado – Oral evidence (QQ 105–115)
Professor David Edgerton, Professor Peter McOwan and Professor Sir David Spiegelhalter – Oral evidence (QQ 213–223)
Fujitsu, Ocado and Dyson – Oral evidence (QQ 105–115)
Future Advocacy, Professor Dame Henrietta Moore and Professor Richard Susskind OBE – Oral evidence (QQ 95–104)
German Research Centre for Artificial Intelligence (DFKI) – Oral evidence (QQ 163–171)
Professor Dame Wendy Hall, Professor Michael Wooldridge and Professor Nick Bostrom – Oral evidence (QQ 1–8)
Professor Chris Hankin and the Alan Turing Institute – Oral evidence (QQ 143–152)
Dr Hugh Harvey, National Data Guardian for Health and Care Dame Fiona Caldicott and NHS Digital – Oral evidence (QQ 128–142)
HM Government – The Rt Hon Matt Hancock MP, Minister of State for Digital, Department for Digital, Culture, Media and Sport (DCMS) and the Rt Hon the Lord Henley, Parliamentary Under Secretary of State, Department for Business, Energy and Industrial Strategy (BEIS) – Oral evidence (QQ 190–200)
HM Government – The Rt Hon the Lord Henley, Parliamentary Under Secretary of State, Department for Business, Energy and Industrial Strategy (BEIS) and The Rt Hon Matt Hancock MP, Minister of State for Digital, Department for Digital, Culture, Media and Sport (DCMS) – Oral evidence (QQ 190–200)
Dr Julian Huppert, PHG Foundation and Understanding Patient Data, Wellcome Trust – Oral evidence (QQ 116–127)
IBM, Sage and SAP – Oral evidence (QQ 76–84)
IEEE-Standards Association and Professor Alan Winfield – Oral evidence (QQ 18–28)
Professor Rosemary Luckin, Miles Berry and Graham Brown-Martin – Oral evidence (QQ 181–189)
Major Kitty McKendrick, Professor Noel Sharkey, Mike Stone and Thales Group – Oral evidence (QQ 153–162)
Professor Peter McOwan, Professor Sir David Spiegelhalter and Professor David Edgerton – Oral evidence (QQ 213–223)
MMC Ventures, Project Juno and Eileen Burbidge MBE – Oral evidence (QQ 46–54)
Professor Dame Henrietta Moore, Professor Richard Susskind OBE and Future Advocacy – Oral evidence (QQ 95–104)
National Data Guardian for Health and Care Dame Fiona Caldicott, NHS Digital and Dr Hugh Harvey – Oral evidence (QQ 128–142)
NHS Digital, Dr Hugh Harvey and National Data Guardian for Health and Care Dame Fiona Caldicott – Oral evidence (QQ 128–142)
NVIDIA, Dr David Barber and Digital Catapult – Oral evidence (QQ 38–45)
Ocado, Dyson and Fujitsu – Oral evidence (QQ 105–115)
Sarah O'Connor, Andrew Orlowski and Rory Cellan-Jones – Oral evidence (QQ 9–17)
Open Data Institute, Open Rights Group and Privacy International – Oral evidence (QQ 65–75)
Open Rights Group, Privacy International and Open Data Institute – Oral evidence (QQ 65–75)
Andrew Orlowski, Rory Cellan-Jones and Sarah O'Connor – Oral evidence (QQ 9–17)
Dr Jérôme Pesenti – Oral evidence (QQ 201–212)
PHG Foundation, Understanding Patient Data, Wellcome Trust and Dr Julian Huppert – Oral evidence (QQ 116–127)
Privacy International, Open Data Institute and Open Rights Group – Oral evidence (QQ 65–75)
Project Juno, Eileen Burbidge MBE and MMC Ventures – Oral evidence (QQ 46–54)
Professor Chris Reed, Professor Karen Yeung and Jeremy Barnett – Oral evidence (QQ 29–37)
Sage, SAP and IBM – Oral evidence (QQ 76–84)
SAP, IBM and Sage – Oral evidence (QQ 76–84)
Professor Noel Sharkey, Mike Stone, Thales Group and Major Kitty McKendrick – Oral evidence (QQ 153–162)
Professor Sir David Spiegelhalter, Professor David Edgerton and Professor Peter McOwan – Oral evidence (QQ 213–223)
Mike Stone, Thales Group, Major Kitty McKendrick and Professor Noel Sharkey – Oral evidence (QQ 153–162)
Professor Richard Susskind OBE, Future Advocacy and Professor Dame Henrietta Moore – Oral evidence (QQ 95–104)
Thales Group, Major Kitty McKendrick, Professor Noel Sharkey and Mike Stone – Oral evidence (QQ 153–162)
UK Information Commissioner Elizabeth Denham, Dr Sandra Wachter and Dr Mercedes Bunz – Oral evidence (QQ 55–64)
Understanding Patient Data, Wellcome Trust, Dr Julian Huppert and PHG Foundation – Oral evidence (QQ 116–127)
Dr Sandra Wachter, Dr Mercedes Bunz and UK Information Commissioner Elizabeth Denham – Oral evidence (QQ 55–64)
Professor Alan Winfield and IEEE-Standards Association – Oral evidence (QQ 18–28)
Professor Michael Wooldridge, Professor Nick Bostrom and Professor Dame Wendy Hall – Oral evidence (QQ 1–8)
Professor Karen Yeung, Jeremy Barnett and Professor Chris Reed – Oral evidence (QQ 29–37)

Alan Turing Institute and Professor Chris Hankin – Oral evidence (QQ 143–152)

Evidence Session No. 15    Heard in Public    Questions 143–152

Tuesday 28 November 2017

Watch the meeting

Members present: Lord Clement-Jones (The Chairman); Baroness Bakewell; Lord Giddens; Baroness Grender; Lord Hollick; Lord Levene of Portsoken; The Lord Bishop of Oxford; Viscount Ridley; Baroness Rock; Lord St John of Bletso; Lord Swinfen.

Examination of witnesses

Dr Mark Briers and Professor Chris Hankin.

Q143 The Chairman: Good afternoon and a very warm welcome to our witnesses: Dr Mark Briers, who is the strategic programme director for defence and security of the Alan Turing Institute, and Professor Chris Hankin, who is the co-director of the Institute for Security Science and Technology, Imperial College London. This is the 15th formal evidence session for the inquiry and it is intended to help the Committee discuss the potential misuse of AI and the implications for cybersecurity. I am afraid I have a little rubric that I need to read through at the beginning of every evidence session. The session is open to the public and a webcast of the session goes out live, as is, and is subsequently accessible via the parliamentary website.
A verbatim transcript will be taken of your evidence and will be put on the parliamentary website. A few days after this evidence session, you will be sent a copy of the transcript to check for accuracy, and we would be grateful if you could advise us of any corrections as quickly as possible. If, after this session, you wish to clarify or amplify any points made during your evidence, or have any additional points to make, you are very welcome to submit supplementary written evidence to us. Would you like to introduce yourselves for the record?

Professor Chris Hankin: I am professor of computer science at Imperial College London and co-director of the Institute for Security Science and Technology.

Dr Mark Briers: Good afternoon. I am the programme director for defence and security at the Alan Turing Institute. My research interests lie at the intersection of artificial intelligence and cybersecurity.

Q144 The Chairman: Thank you. I will start with something pretty broad and general, especially in terms of the timing. What does artificial intelligence mean for cybersecurity today, and how is this likely to change over the next 10 years? We are looking in particular at whether, and how, artificial intelligence will impact on conventional cybersecurity today. Does it facilitate new kinds of cyberattacks, how much does that alter the risk profile, and where can AI help?

Professor Chris Hankin: When I think about artificial intelligence in the context of cybersecurity today, I think mainly about machine learning rather than broad artificial intelligence, where my own team at Imperial and many others across the globe have had a great deal of success in using machine learning to analyse network traffic and spot anomalous things happening in that traffic. In fact, there is a UK company, which you may have spoken to or heard of during your hearings, Darktrace, which has made a very successful global business out of machine learning to do that. From a defensive point of view, that would be the main application of AI at the moment in cybersecurity. I heard yesterday, in fact, of people who have been developing chatbots to engage in conversations with phishing attackers to frustrate them, to a certain extent, in their attacks.

The Chairman: So Darktrace is literally finding the source of a cyberattack, is it?

Professor Chris Hankin: It is analysing network traffic. They basically install a monitor in companies or an individual system that learns what "normal" looks like, so to speak, over a very short period of time and can spot if something is abnormal, which could be indicative of a cyberattack. So that is Darktrace technology. It is a very exciting technology, and they have made a great commercial success out of it, but there are still some open research challenges to reducing the false positive signals, for example, which might come out of that sort of system.

The Chairman: Is that still partly under your wing or a fully independent start-up?

Professor Chris Hankin: There is still quite a lot of academic research activity across the world looking at different approaches to machine learning that might be able to give more accurate signals about what is going on. I also wanted to mention a competition that was held in the States that came to fruition in August 2016 and was about automatic defensive systems.
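The "learn what normal looks like, then flag deviations" idea that Professor Hankin describes above can be sketched in a few lines of Python. The fragment below is a generic, minimal illustration using scikit-learn's IsolationForest; the traffic features, their distributions and the contamination rate are invented for the example and do not describe Darktrace's or any other vendor's actual system.

    # Illustrative sketch only: model "normal" connections from a baseline period,
    # then flag new connections that deviate from it. Features and parameters here
    # are assumptions, not a real product's design.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Baseline traffic: per-connection features [bytes transferred, duration (s), distinct ports]
    normal_traffic = np.column_stack([
        rng.normal(2_000, 300, 5_000),
        rng.normal(1.5, 0.4, 5_000),
        rng.poisson(2, 5_000),
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)  # learns what "normal" looks like

    # Two new connections: one ordinary, one moving far more data across many ports
    new_traffic = np.array([
        [2_100, 1.4, 2],
        [90_000, 0.2, 60],
    ])
    print(detector.predict(new_traffic))  # 1 = consistent with baseline, -1 = flagged

Even in this toy setting the contamination parameter has to be chosen, which is one face of the false-positive trade-off Professor Hankin mentions.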
It was about programmes that could understand when they were under attack and take measures to repair themselves and mitigate against the attack. Over, say, a 10 to 15-year horizon, we could be looking at that sort of technology being lifted to the level of systems that can understand that they are being threatened in some way and that can take action to repair the damage that that threat might be causing them.

The Chairman: So there are a lot of very active developments in the AI field.

Professor Chris Hankin: I believe so. Cybersecurity is my area of expertise. I am sure Mark will have a more informed view of the artificial intelligence world.

The Chairman: But these are AI applications, which have implications for cybersecurity.

Professor Chris Hankin: Yes.

The Chairman: Is that why you mentioned chatbots and so on?

Professor Chris Hankin: Yes, that is correct.

Dr Mark Briers: I would echo Professor Hankin's views that artificial intelligence, specifically machine learning, is being applied in a defensive context as we sit here today. If I were to look 10 years hence, I would argue that there will be more sophisticated artificial intelligence in the offensive space. In my experience in industry as a cybersecurity specialist over the past few years, we have seen no evidence of artificial intelligence malware or software encroaching on corporate networks. I believe that will change over the next five to 10 years and that the offensive weapons used in cyberattacks will become more artificially intelligent.

The Chairman: That in itself is a very interesting comment. With a number of viruses such as WannaCry and that kind of thing, do you mean that there was no AI component to them and that they were fully human interventions?

Dr Mark Briers: More or less. It depends on how you characterise and define artificial intelligence. By my definition, I certainly would not say that they were artificial intelligence in any meaningful manner.

The Chairman: You are aware of the developments that Professor Hankin talked about and you think they will make quite an impact themselves.

Dr Mark Briers: Indeed. From a research perspective it is very exciting, but obviously from a risk mitigation perspective it is quite daunting, so yes, I agree with Professor Hankin.

Lord St John of Bletso: If I could ask a question slightly beyond my pay grade, we read about D-Wave computers and their ability to solve large data analytics and about IBM's Watson supercomputer and the future of quantum computing. Is this the next big potential opportunity to crack the cybersecurity threats that we are facing?

Professor Chris Hankin: D-Wave is a kind of quantum computing, and it can still only do the same sorts of things that the computers we know and love today can do, but there are certain tasks that it can do much faster. One of the particular things that quantum computers can do very fast is the factorisation of large numbers. Modern cryptography is built upon factorising very large numbers into their prime factors. This is a very complex thing to do with a modern computer, but quantum computers can do it much faster, so the modern popular approaches to cryptography are potentially undermined by the emergence of quantum computers. I suspect that we are several years away from that really becoming a threat.
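To make the factorisation point concrete, the short Python sketch below builds an RSA-style modulus from two small primes and then recovers them by trial division. The prime sizes are deliberately tiny so the example runs instantly; the same brute-force search becomes hopeless at the key sizes used in practice, and that hard step is what a sufficiently large quantum computer running Shor's algorithm would speed up.

    # Toy sketch: an RSA-style modulus is cheap to build but costly to take apart.
    # 16-bit primes are used only for illustration; real keys use primes of 1024
    # bits or more, far beyond trial division or any known classical attack.
    from sympy import randprime

    p = randprime(2**15, 2**16)
    q = randprime(2**15, 2**16)
    n = p * q  # the public modulus: easy to compute from p and q

    def trial_division(n):
        """Recover a prime factor by testing every odd candidate up to sqrt(n)."""
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return None

    print(n, trial_division(n))  # feasible for a 32-bit n, infeasible for a 2048-bit one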
I am comforted slightly by the knowledge that there are many very able research groups and people in government looking at what these days tend to be called "quantum-safe approaches" to cryptography. You do not have to build your cryptographic algorithms on the basis of using factorisation; there are other bases that you can use for crypto, some of which will be less amenable to being solved by quantum computers. It is an issue that we have to be aware of, but responsible people, both in the research field and in government, are paying great attention to this potential long-term threat.

Dr Mark Briers: I have nothing to add to that excellent answer.

The Chairman: Lord St John is well ahead of the pack here, as always.

Q145 Lord Giddens: Will only state-sponsored hackers have the means to deploy AI in cyberattacks, or is there a risk that AI-enabled cyberattacks will be "democratised" in the near future? I do not know whether this is beyond your brief, but could you comment on the growing literature on threats to democracy from the use of chatbots and algorithms that have definitely intruded very deeply into the political process?

Dr Mark Briers: In the short to near term, on the basis of my previous answer I would expect artificially intelligent cyberoffensive weapons to emerge from the state-sponsored sector. If you look back in history 10 years ago, the types of threat that are prevalent today and that are democratised in some sense would arguably have come from the state-sponsored sector. Using history as an indicator of what is likely to happen in the future, I suspect we will see those types of artificially intelligent cyberweapons being available to a wider audience in 10 years' time, so yes, I see AI cyberweapons being democratised. With respect to chatbots and interference in the political process, it is clearly a serious problem and we need to do more research to understand the threat landscape, how malicious actors are manipulating the information space and how we can counter that manipulation such that we can present the messages that we need to present as a democratic society.

Professor Chris Hankin: I broadly agree with what Dr Briers has said. At the moment, with cybersecurity it is becoming much more difficult to differentiate between state actors and organised crime. The sorts of techniques that those two groups are using to mount their attacks are becoming more and more similar. The weapons that they use are also becoming available through things like the dark web, so it is becoming much more difficult, now and going forward, to differentiate between the different styles of attackers that we are having to defend against. That is my answer to the first part of the question. On the second part of the question, about chatbots and the threat to democracy, that is a serious issue. It may be beyond the brief that I have been thinking about in preparing for this meeting.

Lord Giddens: We might need some partly technical solutions to it, because they cannot be just political solutions.

Professor Chris Hankin: No. Someone somewhere needs to think very seriously, as I think Mark was hinting at, about the counter-narratives so that the messages that we want to get across appear above the noise that the chatbots and so on might be creating.

The Chairman: Can I come back to you on your response? You agreed with Lord Giddens and said that, yes, there was the threat of a black hat issue?
Is that the result of leakage of knowhow? Is it that the skills required are getting more common? Is it the fact that the investment required is not very great? Is it a combination of all those things? What creates the fertile ground for the hackers using AI?

Professor Chris Hankin: Certainly, for those who have been educated in coding practices and rudimentary computer science, many of the modern programming languages that people use to code up artificial intelligence applications have libraries that have, for example, machine learning functions within them, so one does not necessarily need to have a very deep understanding of machine learning to be able to use it in some way in a system that you are constructing.

Viscount Ridley: Dr Briers, you said, going back 10 years, that you could see how state-sponsored software ended up in private hands and being used for malice. I remember 10 or 15 years ago some pretty blood-curdling presentations about how viruses were going to make computing impossible in the near future, and that the bad guys were going to win. That did not happen, did it, and we stayed one step ahead of that. We all have virus problems still, as WannaCry exemplifies, but it has not been as catastrophic as some said. Can we learn any lessons from that, or am I being Panglossian?

Dr Mark Briers: No, there are lessons to learn from that. I suspect that communications via appropriate government bodies should be congratulated in some senses for moving the cybersecurity posture of some of our major industries and critical national infrastructure to avoid some of the problems that we quite rightly could have faced during that time. It is great to see the Government continue to invest in cybersecurity with the National Cyber Security Centre and other organisations like that to put cybersecurity at the forefront of the UK and make the UK a secure place.

Viscount Ridley: Was it the Government or was it the Apple Corporation that helped me avoid that fate?

Dr Mark Briers: Based on my expertise and opinion, I suspect it is a bit of both. It will be political pressure to encourage organisations, such as Apple, to patch their systems and so on.

Q146 Lord Swinfen: Do AI researchers need to be more aware of how their research might be misused and consider how this might be mitigated before publishing? Are there situations where researchers should not publish where there is a high risk of misuse? Should the Government consider mechanisms, either voluntary or mandatory, to restrict access in exceptional cases in a similar way to the defence advisory notice system for the media? Do you think there should be a code of ethics?

Dr Mark Briers: In short, yes, I do. I believe there is an ethical responsibility on all AI researchers to ensure that their research output does not lend itself to obvious misuse and to provide mitigation, where appropriate. If you look at the analogous situation of the 3D printer, the manufacturers or the designers of it perhaps did not envisage somebody producing a 3D-printed weapon downstream. If we use that as an analogy, should the manufacturers and designers of 3D printers have considered this, and should we have prevented 3D printing from ever making it into the marketplace, even though there are fantastic medical advances that could surface through 3D printing?
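Professor Hankin's earlier point about library support is easy to demonstrate: with a mainstream machine-learning library, a working classifier takes only a handful of lines and no knowledge of the underlying mathematics. The sketch below uses scikit-learn and its bundled digits dataset purely as a stand-in for whatever data an application might have.

    # A few lines and a library are enough to train a usable classifier,
    # with no need to understand the algorithm underneath.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier().fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")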
We need to provide those principles and ethical guidelines, but they need to be principles and guidelines as opposed to definitive rules that one has to follow, so a judgment needs to be made against each of these different algorithms and the capabilities that they offer up.

Professor Chris Hankin: I broadly agree. A number of the large vendors now offer bug bounties to people to disclose vulnerabilities that they discover in systems so that patches can be issued before the vulnerabilities get into the public domain. That is one aspect where there is a potential financial incentive for people to be responsible about disclosure. It is important, when we educate people in cybersecurity, that we impart some ethical values to them. Indeed, a lot of the cybersecurity work currently going on here in the UK is funded through various schemes that the National Cyber Security Centre has set up. The research institute, which I lead, has no issue with passing papers through contact points within the centre so that they can be checked before they are published. By and large, researchers are fairly responsible about the way they disclose things, but there is the danger that if new vulnerabilities are discovered and published without any of those checks, it gives the attackers and the hackers something to exploit.

The Chairman: Are we anywhere near a common code of ethics that could be accepted in research in this field?

Professor Chris Hankin: A number of the professional institutions have ethical codes, and if people are registered with those institutions they ought to be living by those ethical codes, but there is no uniform approach to that. If the Alan Turing Institute is to become the AI centre for the UK, maybe it should address promulgating such a code.

The Chairman: Do you think that getting that code up and running and having general acceptance could be part of your agenda?

Dr Mark Briers: I certainly think it could be part of our agenda, and I am sure that Sir Alan Wilson and other colleagues at the Turing would consider that and hope to produce something of that kind to support researchers.

Lord Swinfen: Should workers at certain levels be security vetted?

Dr Mark Briers: In certain circumstances for certain applications, almost certainly, but I suspect it is not practical to security-clear all the research community or a large portion of it who would develop algorithms of this kind. One has to ensure that there are sufficient guidelines, ethical principles and support mechanisms, communication mechanisms, et cetera, to ensure and encourage the appropriate and ethical publication of research as opposed to validating and checking everybody. Sadly, I do not believe that is a practical solution.

Q147 The Lord Bishop of Oxford: This question is on adversarial AI, which I gather is researchers attempting to fool AI systems into making incorrect classifications or decisions. How much of an issue are recent developments in that field of adversarial AI for the wider deployment of AI systems? Do you see compulsory stress-testing as part of the future?

Professor Chris Hankin: We have been doing some work on using adversarial AI to see how possible it is to train an attacker to evade the state-of-the-art classifiers that we have been developing on the other side of our research activity. It is certainly true that one can use adversarial nets and get very high success rates in learning.
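The evasion idea being discussed here can be sketched briefly: an attacker who has learned something about a detector perturbs a malicious sample, a small step at a time, until the detector stops flagging it. The Python below uses a simple linear model so the perturbation direction is transparent; it is a generic illustration of evasion, not the adversarial-network approach used in the Imperial work, and all of the data is synthetic.

    # Generic evasion sketch: nudge a malicious sample in the direction that most
    # reduces the detector's "malicious" score until it is no longer flagged.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    benign = rng.normal(0.0, 1.0, (500, 4))
    malicious = rng.normal(3.0, 1.0, (500, 4))
    X = np.vstack([benign, malicious])
    y = np.array([0] * 500 + [1] * 500)  # 1 = flagged as malicious

    detector = LogisticRegression(max_iter=1000).fit(X, y)

    sample = malicious[0].copy()
    step = 0.1 * detector.coef_[0] / np.linalg.norm(detector.coef_[0])
    while detector.predict([sample])[0] == 1:
        sample -= step  # small move against the model's weight vector

    print("original flagged: ", bool(detector.predict([malicious[0]])[0]))
    print("perturbed flagged:", bool(detector.predict([sample])[0]))
    print("total perturbation:", np.round(sample - malicious[0], 2))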
If you can get into the right part of the system, you can learn a lot about what the classifier might be doing and introduce sufficient noise into your attacks, such that it evades detection by the classifier. We have some quite interesting results in that. The message I take from that is that, at the moment certainly, AI is not the only answer we should be thinking about for defending our systems. I will give you a short story about the Stuxnet malware that was used to delay the Iranians in their uranium enrichment process. The attack was essentially a physical attack, mounted through cyber, on the centrifuges that were being used to enrich the uranium, and in one version at least it caused the rotor blades in the centrifuges to spin at very high speeds. You might have been able to detect that attack by looking at some network traffic and seeing what was happening with the control systems, but certainly if you had been standing anywhere near the centrifuges you would also have had a physical signal that something was going wrong. At the moment, in the state we are in we have to use all sources of information that we can to decide what is going on in a system, because AI is not the only answer. In that setting, maybe the adversarial net approach might have enabled the attackers to get round the AI detector, if they had had such a thing, but the noise from the centrifuges probably would have given them away.

Dr Mark Briers: I agree with Professor Hankin. In the cybersecurity industry, there is a large group of individuals known as "penetration testers", whose job essentially is to try to ethically hack into an organisation's network and look for vulnerabilities with the intention of trying to secure those vulnerabilities. I see there being a large marketplace in the short to near term in AI ethical hackers, if that is the correct phrase, and I hope to see the UK leading in this field so that we develop that marketplace and lead it internationally as UK plc.

The Lord Bishop of Oxford: Would the people doing this typically have doctorates, or would they be, as in science fiction films, very bright teenagers who are going rogue?

Dr Mark Briers: I suspect as we sit here today that it would be people with doctorates and beyond. However, with all these kinds of technologies there is a democratisation effect—the open sourcing of the
