
Intelligent robots could have ethical black box

Intelligent robots may soon have an ethical "black box", which would record all of their decisions.

The idea was discussed by experts at the 18th annual Towards Autonomous Robotic Systems conference, which took place at the University of Surrey in south-east England.

Dr Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, and Marina Jirotka, professor of computing at Oxford University, suggested the idea of an ethical black box in a paper presented at the conference.

Dr Winfield said: "Accidents, we hope, will be rare, but they are inevitable. Anywhere robots and humans mix is going to be a potential situation for accidents.

"Serious accidents will need investigating. But but what do you do if an accident investigator turns up and discovers there is no internal datalog, no record of what the robot was doing at the time of the accident? It'll be more or less impossible to tell what happened."

It comes after it was revealed that scientists are working on a formula that would allow a driverless car to have its own morals.

Researchers at the Institute of Cognitive Science at the University of Osnabrück have been working on a formula that can help a vehicle make a life-or-death decision on the road.
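The researchers' actual model is not described in this article, but a toy sketch can show the general shape of such a formula: score each possible manoeuvre by the harm it is expected to cause and choose the one that minimises it. The option names and harm values below are invented purely for illustration and are not taken from the Osnabrück study.

```python
# A toy harm-minimisation rule, not the researchers' actual formula.
# Expected-harm scores here are made up for the example.

from typing import NamedTuple


class Manoeuvre(NamedTuple):
    name: str
    expected_harm: float  # e.g. probability-weighted injury severity


def least_harmful(options: list[Manoeuvre]) -> Manoeuvre:
    """Return the manoeuvre with the lowest expected harm."""
    return min(options, key=lambda m: m.expected_harm)


if __name__ == "__main__":
    options = [
        Manoeuvre("brake_hard", expected_harm=0.2),
        Manoeuvre("swerve_left", expected_harm=0.7),
        Manoeuvre("stay_course", expected_harm=0.9),
    ]
    print(least_harmful(options).name)  # -> brake_hard
```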

Professor Peter König, a senior author of the paper, said: "Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma.

"Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour. Secondly, if they are, should machines act just like humans?"
