Education & Career Trends: October 27, 2022
Curated by the Knowledge Team of ICS Career GPS
Technological singularity is the idea that artificial intelligence and technological advancement will, in the near future, reach a point where machines are vastly smarter than humans, and change arrives so quickly that ordinary, unmodified humans can no longer keep up.
Most members of the scientific community seem to agree that there needs to be a set of rules that everyone responsible for AI and robotic technology must abide by.
Exactly what those rules should be, however, remains an open question. Much of this is simply because the technology is still in its infancy, and the pieces of it released to the public for mass consumption so far have been relatively innocuous.
However, we humans are known for our fear of unknown quantities. And what we do not truly understand, we tend to exterminate.
There is a reason that a large percentage of our science fiction revolves around robots and AIs that become self-aware and try to take over the world. We recognise that this is a possibility.
The 3 laws of robotics:
Isaac Asimov, the renowned sci-fi author and futurist, laid out his trademark Three Laws of Robotics in “Runaround”, a 1942 short story. He later added a “Zeroth” law that he realised needed to come before the others. They are:
- Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
- First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov applied these laws to nearly all the robots in his later fiction, and many roboticists and computer scientists today still consider them a valid starting point.
Ever since real robots existing alongside humans in society became a genuine possibility rather than just a product of science fiction, robot ethics has grown into a true sub-field of technology, combining our knowledge of AI and machine learning with law, sociology, and philosophy. As the technology and its possibilities have progressed, many people have added their ideas to the discourse.
As president of EPIC (the Electronic Privacy Information Center), Marc Rotenberg believes that two more laws should be included in Asimov’s list:
- Fourth Law: Robots must always reveal their identity and nature as a robot to humans when asked.
- Fifth Law: Robots must always reveal their decision-making process to humans when asked.
The 6 rules AI scientists and researchers should follow:
Satya Nadella, Microsoft’s current CEO, devised a list of six rules he believes AI scientists and researchers should follow:
- AI must exist to help humanity.
- AI’s inner workings must be transparent to humanity.
- AI must make things better without being a detriment to any separate groups of people.
- AI must be designed to keep personal and group information private.
- AI must be accessible enough that humans can prevent unintended harm.
- AI must not show bias toward any particular party.
5 problems robotics programmers should consider:
Computer scientists at Google have likewise laid out a group of five distinct “practical research problems” for robotics programmers to consider:
- Robots should not do anything to make things worse.
- Robots should not be able to “game their reward functions”, or cheat.
- If they lack the information to make a good decision, robots should ask humans for help.
- Robots should be programmed to be curious so long as they remain safe and don’t harm humans in the process.
- Robots should recognise and react appropriately to the spaces and situations they find themselves in.
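The third point above, asking humans for help when information is lacking, can be sketched in a few lines of code. This is a minimal illustrative toy, not any real system's API; the action names and the confidence threshold are assumptions made for the example:

```python
# Toy sketch of the "ask humans for help" principle:
# the agent acts only when it is confident enough in its
# best option, and otherwise defers to a human.

def decide(action_scores, threshold=0.8):
    """Pick the highest-scoring action, or defer to a human
    if no action clears the confidence threshold."""
    best_action = max(action_scores, key=action_scores.get)
    if action_scores[best_action] < threshold:
        return "ask_human"  # not enough information: defer
    return best_action

# One action clearly dominates, so the agent acts on its own.
print(decide({"move_left": 0.95, "move_right": 0.05}))  # move_left

# The scores are close, so the agent asks a human for help.
print(decide({"move_left": 0.55, "move_right": 0.45}))  # ask_human
```

Real systems would derive the confidence score from a trained model rather than a hand-written dictionary, but the deferral logic is the same idea.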
5 principles to follow:
Perhaps the most directly human-protecting set of guidelines comes from a joint effort by the U.K.'s Arts and Humanities Research Council and Engineering and Physical Sciences Research Council, which states:
- Robots should not be designed solely or primarily to harm humans.
- Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
- Robots should be designed in ways that assure their safety and security.
- Robots are artefacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
- It should always be possible to find out who is legally responsible for a robot.
There are numerous AI programs that have been put into practice over the past couple of decades for medical diagnosis, chess-playing, and vehicular control that use machine learning to constantly improve.
However, even the machines themselves cannot explain exactly how they work, and the programmers and roboticists behind them are unable to keep up with everything their creations learn.
Because of this, it may be best to instil in them the morals we ourselves aspire to follow, and hope that our creations can police themselves.
(Disclaimer: The opinions expressed in the article mentioned above are those of the author(s). They do not purport to reflect the opinions or views of ICS Career GPS or its staff.)