The three laws of robotics of Isaac Asimov, the father of science fiction
⦁ A robot may not injure a human being or, through inaction, allow a human being to come to harm.
⦁ A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
⦁ A robot must protect its own existence, as long as such protection does not conflict with the first two laws.
In this case, it can be said that science fiction has become reality: Isaac Asimov himself predicted that robots like these would be part of everyday life by 2014.
The Royal Society and the British Academy have published a report examining Asimov's three laws and suggest that they should be replaced by a single fundamental principle for governing robots.
This fundamental principle is:
“Human beings must prosper.”
Professor Dame Ottoline Leyser, who co-chairs the Royal Society's science policy advisory group, says that the prosperity of human beings is the central concern.
“The prosperity of people and communities should come first, and we believe that Asimov’s principles can be included within this.”
The report goes on to argue that a new body is needed to guarantee that machines serve people rather than control them.
The report points out that when systems learn and make decisions independently, the possibility of very serious errors arises.
The report states that the development of these machines cannot be governed by technical standards alone.
Antony Walker, deputy director of the lobby group TechUK and another author of the report, said that ethical and democratic values must also be taken into account.
“Many benefits will be obtained thanks to these technologies, but the public must have confidence that these systems are well thought out and will be regulated correctly,” he said.
It is urgent to create this new framework for governing machines, because the Asimov era is already here.
These assessments cannot be kept secret for commercial reasons. They must be accessible to the public, so that any problem can be stopped and corrected in time.
The key, according to Professor Leyser, is that regulation must be designed on a case-by-case basis.
“So it does not make sense to regulate all the algorithms equally without taking into account what they are used for.”
New technologies bring new forms of existence, and the possible scenarios of coexistence pose dilemmas for human morality.
Humanity is heading toward a future filled with dilemmas about how the universe of robotic life will fit with employment, ethics and other concerns.
These advances suggest that the interaction between machines and humans will go even further. So much so that we can assume, without fear of error, that artificial intelligence will surpass us in many aspects of life within a few decades.
Currently, we are immersed in the Third Industrial Revolution, known as the Scientific-Technical Revolution.
The speed at which everything progresses leaves no time to assimilate and accept the introduction of technology and robotics into public institutions, workplaces and homes.
The Fourth Industrial Revolution presents somewhat greater complexity. Many doubts arise in a new paradigm in which technology and humanity will merge, and ethical dilemmas emerge directly tied to the coming explosion of robotics.
On the one hand, we find the common-sense barrier in AI (artificial intelligence). It is necessary to ensure ethical minimums in machines, but also to invest in the education of people.