Some researchers suggest that ‘electronic persons’ will surpass human intelligence by 2050, potentially making humans obsolete. This scenario invites discussion on whether a universal standard of ethics and obligations needs to be developed and enforced upon manufacturers of artificial intelligence.
With the rapid advancement of AI and robotics now upon us, debate over the ethics and rights of robots, as well as the obligations of robots to operate within a legal code or framework, has come to the fore.
Robots now work alongside humans in factories and banks, fly aircraft, drive cars, perform surgeries in hospitals, and are undergoing successful trials in many more business applications and sectors. Some computer scientists argue that it won’t be long before droids are so integrated into human life that they serve as our nurses, teachers and friends. In extreme hypotheses, it has even been argued that advanced AI may have the potential to destroy human life.
The issue of whether robots deserve rights has been open to discussion since the 1980s, but the growing involvement of robots in our daily life has brought the subject back into focus. The ethical challenges expected to arise from AI in the future have drawn concern from both academia and industry.
While companies developing AI for commercial use, such as Amazon, Microsoft, Google, IBM, Facebook and Apple, have undertaken individual as well as collective efforts to build safeguards around AI, academic institutions including the University of California, Berkeley, Harvard, and the Universities of Oxford and Cambridge have also committed to working on a set of universal ethics and safety standards for Artificial Intelligence.
A study at the Future of Humanity Institute at the University of Oxford showed that researchers developing Artificial Intelligence hold different views on the pace of its evolution. According to the study, North American researchers expect AI to surpass humans in all spheres in 74 years, while researchers from Asia expect it to take just 30 years.
A popular view among data scientists is that Artificial Intelligence will one day rival the human race. Notable minds have also weighed in on the debate: physicist Stephen Hawking, Microsoft co-founder Bill Gates and Tesla CEO Elon Musk, among others, have openly expressed concern about the ethical boundaries of Artificial Intelligence.
The debate on the rights of robots, long confined to industry circles, has now spilled into the corridors of power too. South Korea prepared its first government-backed Robot Ethics Charter in 2007, which stipulated standards for robot users as well as manufacturers. The charter covered standards to be followed when programming robots, including rules on the illegal use of robots.
In 2016, the British Standards Institution released a set of guidelines for the ethical design of robots and robotic devices, which directed that humans be allowed greater control over the more advanced AI of the future.
In February 2017, the European Parliament adopted a resolution proposing a charter on robotics that would require robotics researchers to comply with the highest standards of ethical and professional conduct when building robots. The resolution also recommended granting “electronic personhood” status, with rights and responsibilities, to the most advanced forms of Artificial Intelligence.
While there is broad concern over the evolving role of AI in human societies, some argue that rights are granted to an entity not merely on the basis of intelligence, but on its capacity to feel pain and suffering, manifest self-control, show concern for others, exercise strong willpower and comply with moral responsibilities.
It is argued that if robots lack the understanding to differentiate between pain and pleasure, justice and injustice, or right and wrong, then society is under no moral obligation to grant them any rights whatsoever.
Whether or not robots qualify for ‘electronic personhood’ rights, and when such rights should be granted, will most likely be answered as AI permeates deeper into human life. What is pertinent now is a universal code of conduct for researchers, developers and users of Artificial Intelligence, to ensure that droids are not built with the express intention of harming human life. This, of course, raises further questions, given the heavy investment in AI and robotics by some defence industries and militaries worldwide…