Who Is To Blame When A Robot Kills Someone?


On May 31st, 2016, the European Parliament received a draft report from its Committee on Legal Affairs, and it was all about robots. The report is actually pretty interesting, and the kind of life we see in science fiction movies is probably closer than you think.

One of the things it focuses on is the concept of liability: who is at fault if a robot injures someone or damages property? The report says that, depending on the level of automation and autonomy, the robot may be more responsible than its creators. Here's the thought process:

If a robot can only do the tasks it's been programmed to do, you would say the creators are at fault for any damage, because the robot is essentially just a tool. But let's say the robot can use machine learning and artificial intelligence to adapt to its environment. Some environments are more chaotic, with many more variables at play, and the behavior the robot adopts to complete a task might be beyond anything its creators could have predicted. In those cases, the robot is at fault.

Great! Now we know who to blame. Now what?

The report says if a robot is to blame, the robot must pay for its crimes. But how do we punish a robot?

The report has some answers for this too. First, there should be some sort of classification system for robots: if they reach a certain threshold of sophistication, they must be registered with the European Union. Second, it suggests a mandatory insurance scheme in which manufacturers pay insurance for the robots they make. It's a bit like car insurance, except the manufacturers are the ones paying, not the owners. Or you could always pay the robots.

Seriously, the report suggests paying robots wages. This isn't so the robot can save up for a swinging robo-pad; it's meant to build a compensation fund in case the robot goes on a robo-rampage further down the road. The draft report also floats the idea of granting robots a status of personhood, or perhaps a new legal category that hasn't been invented yet.

So why would you ever want to call a robot a person? Is it because they're getting so sophisticated that artificial intelligence is making them self-aware? Not quite. It's really more for the benefit of us humans than for the robots.

What happens if robots and automation start to replace more jobs than they create? Right now that doesn't seem to be the case, but it might change in the future. Certain systems we have in place, like social security, depend on employment taxes, and they could become underfunded or defunded if there aren't enough people in the workforce. The report suggests that robot owners, meaning business owners who use robots and automation, pay into social security as if human employees held those positions.

There's a lot more to this report. It also suggests that robots should be easily identifiable as robots. This isn't so much about preventing androids that could fool you into thinking they're people; it's about disguise in general. You don't want a robot that looks like a lamppost or a tree. You want a robot that looks like a robot, so you know how to interact with it.

Who do you think will be the first AI to get a status of personhood? Siri, Cortana, or Google Assistant?

What Do You Think?