I, Robot — by a Jewish lawyer

For the Jewish Chronicle, December 14, 2018

It is the year 2028 and you have a court case coming up, trying to prove negligence on the part of … a robot.

This is not the stuff of science fiction, but an increasingly likely scenario in a world in which artificial intelligence, or AI, will affect every aspect of our lives.

Jacob Turner is a young Jewish barrister whose new book, Robot Rules, is a fascinating handbook aimed at regulating AI. “It can be read by anyone from any background”, he says, “not just lawyers.” In fact, his book takes in the fields of law, ethics, philosophy, politics, computer science and technology, and discusses how each of those disciplines can have a say in dealing with AI.

There is something charming about a book on the cutting edge of social technology being discussed in Turner’s chambers in central London, in a building that has been home to English lawyers for more than 200 years.

Turner himself, compact and fluent on the subject, is mindful that some previous books about AI have been overtaken by events. But he believes his own will become more relevant as time goes on and we humans change our relationship with AI. “Today AI technology is heavily dependent on human input”, says Turner. That is unlikely to remain the case, he adds: we are moving to a point where AI is increasingly independent of humans.

How did Turner come to this subject? “My background is in international law and the laws of warfare, with an emphasis on the law governing autonomous weapons”. Robot weapon systems are currently being tested in a four-week war game on Salisbury Plain.

Turner worked as judicial assistant to Lord Mance, the now retired deputy president of the UK Supreme Court. In 2015 he was asked to help write a speech about the future of law and, in carrying out the research, he realised that there was very little published about AI, “something which will affect everyone across the whole of society, and where different moral problems will arise”.

Turner began to think seriously about the different aspects and applications of AI, and he sets out his conclusions in Robot Rules. With a grin, he admits that he was thinking of calling the book “The Ten Commandments for Robots”, but the Bishop of Oxford swiped the title.

In fact, as any sci-fi geek knows, robots have had not ten commandments but Laws of Robotics since 1942, conceived by that grandmaster of science fiction, Isaac Asimov: three at first, with a fourth added decades later. Asimov’s rules state as follows: “A robot may not injure a human being, or, through inaction, allow a human being to come to harm; a robot must obey orders given it by human beings except where such orders would conflict with the First Law; a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law; a robot may not harm humanity, or, by inaction, allow humanity to come to harm”.

Asimov himself acknowledged that his “laws” were flawed and had mainly been laid down so that he could write stories in which robots ran rogue. “Many people miss that”, says Turner, adding that Asimov’s prescriptive rules are too often treated as the last word in dealing with AI.

Nevertheless, Turner says that the use of robots, or AI, throws up a whole slew of moral and ethical questions that go beyond Asimov’s rules.

For example, Turner says, what happens if you have a robot in an Accident and Emergency ward and its task is to decide who should be treated first? “AI has many advantages over human beings in that scenario. A robot won’t get tired, or necessarily have the biases of race or gender that a human being might have. But then, perhaps a robot might not have the necessary qualities of empathy, regret or pity. We know that saving the most people at the least cost is done better by AI than by humans. But then we come to moral objections in allowing AI to take life or death decisions”.

Much, of course, depends on the way in which human beings program the relevant robots. “We can feed bias into AI,” says Turner, but imbuing a robot with human frailty, whether accidentally or intentionally, rather negates the point of using a robot for some of these tasks.

There is a famous philosophical issue called “the Trolley Problem”, addressed by Turner in his book. The philosopher Philippa Foot posed the dilemma of a train carriage (a trolley) “heading down railway tracks, towards five workmen who are in the train’s path and would not have a chance to move before being hit. If the participant does nothing, the train will hit the men.

“Next to the tracks is a switch, which will move the trolley to a different spur of tracks. Unfortunately, on the second spur is another workman, who will also be hit and killed if the train carriage is directed down that set of tracks. The participant has a choice: act, and divert the trolley so that it hits the one person, or do nothing and allow the trolley to kill five”.

The “trolley problem” — what we might call “Sophie’s Choice” — is by its nature unsolvable. But it has, says Turner, a direct analogy to AI in the programming of driverless cars, or what he calls “autonomous vehicles”. What happens, for example, if a child steps into the path of an AI car? Should the car continue on the road and hit and probably kill the child — or should it swerve, hit the barrier, and perhaps kill the human passenger in the car?

Legislation relating to autonomous vehicles has been among the few non-Brexit laws passed in the UK this year, which shows, says Turner, the importance the government attaches to such regulation. At the moment the law says that the insurer of the car will always have to pay out in the event of an accident. But the question then arises as to who the insurer may sue: the manufacturer of the car, the human passenger, or the designer of the AI?

Another problem is that AI continues to learn and does not remain static. The moral trade-offs of the trolley problem may evaporate as the robots adjust to different circumstances.

Turner sees his book “as a roadmap for how we can work with AI in the future”. Ideally, he says, the regulations in place should apply internationally, because currently many countries have no laws dealing with AI at all.

Fittingly for a thoughtful Jewish barrister, Jacob Turner gives the last word to the Almighty, and reminds me that the Lord was displeased at humans asserting independence by building the Tower of Babel. “The whole earth was of one language and of one speech. At Babel, the people decided to build a tower so high that it would reach the heavens. God saw this tower, and realised the extraordinary power that mankind was able to exercise through acting together”.

The Lord’s solution? “To confound their language, that they may not understand one another’s speech”. The people still had the tools to build the tower, but lacked a common purpose. In meeting the great challenge of robots, warns Turner, “If each country adopts its own rules for AI — or worse, none at all — we stand to bring upon ourselves the curse of Babel once again”.

Robot Rules is published by Palgrave Macmillan, £20
