This news (from Mashable) came at an interesting point this week, as a couple of days ago I came across the 2004 movie “I, Robot”, which is loosely based on Asimov’s short stories of the same name and the renowned Three Laws of Robotics. For those who are not familiar, the three laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The above prologue introduces “We Robot” (I can see the affinity for first-person pronouns..), a conference that will take place at the University of Miami School of Law on April 21-22 and will examine how the increasing sophistication and deployment of robots is challenging existing law and policy. A. Michael Froomkin, the brain behind this conference, says that robots will keep coming and become more technologically advanced, which will result in the need to introduce new legal measures. Robots, in fact, interact with the physical world: “If you have millions of these things deployed everywhere, it’s just absolutely inevitable that someone’s going to claim that they were injured by one at some point. And that’s going to be a lawsuit. So let’s get lawyers involved and people in a room, and let’s start worrying about these things”, Professor Froomkin says.
One of the main issues is finding common ground in terms of legal responsibility. If a robot hurts a human being, whose fault is it? Who is legally responsible? The manufacturer, the software developer(s), or the owner of that artefact? The main purpose of “We Robot” is to build a community of experts where such issues can be discussed and dissected in order to envision future scenarios and legislation.
Here is the Mashable article: Who’s Responsible When Robots Kill? ‘We Robot’ Conference Hunts Answers