Professor Winfield’s thoughtful and entertaining lecture on this topical subject began by posing the following question: you are being driven down the road in a driverless car when a child runs into the road. The robot driver must choose between injuring the child (possibly fatally) and driving the car off the road, possibly injuring the passengers. Which decision would you wish the car to make? Many of us would probably choose to save the child, but what if the car also contained our own four-year-old child? What if it were a dog running into the road? Professor Winfield's point is that ethical decisions will be an inescapable part of giving robots control in such situations.
There are clearly ethical responsibilities placed on the engineers who design robots, and recent work in which Prof. Winfield has been involved has established relevant ethical principles, as well as standards that should apply to robot design. The question of whether one can, or should, build ethical robots is much more difficult; yet commercial interests are already pushing us towards situations where, as we have seen, automated actions based on ethical judgements are inevitable. Who will take responsibility for these judgements? A machine cannot be responsible (Prof. Winfield is clear that human-level artificial intelligence, and anything akin to free will, are not on the horizon for a very long time).
The situation becomes even more stark with the likely development of autonomous military robots able to exercise lethal force. Currently, devices such as the "Predator" aerial drone are piloted remotely by humans, who decide when bombs and missiles should be launched. Soon we will have robot tanks and aircraft that identify, assess and propose targets, with the human controller having only the option to veto an already planned lethal attack. Are we happy to see this happening in our name?
I think that Prof. Winfield raised a number of issues that need urgent consideration by all of us and by our political leaders. The fear is that we will stumble into a future in which commercial interests take precedence over both ethics and common sense, partly because the short-term benefits will appear to outweigh the longer-term risks. It may well be that societies such as the CSTS ought to play a more prominent role in bringing this debate to wider public consciousness.