In 1942, when Isaac Asimov wrote the short story collection "I, Robot," he fascinated the world with the possibility of a society in which robots that think and behave like humans coexist with the people they were made to protect. These robots were designed to learn and act autonomously in order to serve society. Yet autonomy and decision-making quickly came into conflict with the very laws designed to guide them.
While designing such robots seemed unattainable at the time, the possibility of a conflict driving a robot completely out of control was received as the pinnacle of science fiction. What was then thought unattainable has since become the center of extensive research in robotics.
We have already started producing machines that can sense and interact with their surroundings, play games against humans and beat them, work side by side with humans on assembly lines, and even compete with them for the same jobs. Granted, all of this is happening while humans retain absolute control over the robots' behavior.
We are also trying to reproduce the way the brain learns and thinks. We are asking what motivates the brain to make specific decisions, and we are examining how religion, culture, and education shape the laws we call ethics. The more we learn about the brain, the better we understand the fundamental rules that guide its decision making, and the more eager we become to translate these insights into smart, autonomous systems that will serve our society.
In 2016, Daniel Workman noted that global sales from the top 15 countries that export industrial robots (Japan, Germany, Italy, France, the United States, South Korea, Austria, China, Sweden, Taiwan, Denmark, Spain, the United Kingdom, Canada, and Belgium) amounted to $4.5 billion, an increase of 7.8% in annual sales from 2015 to 2016. These robots were designed to assist humans at work by executing more physically demanding and precise jobs, or to provide help at home.
Robots that can outperform humans in a few tasks will quickly gain the ability to outperform them in every task, becoming formidable competitors in the job market. With the accelerating pace of technological innovation, it is only natural that more and more jobs will go to robots, especially jobs that demand precision and where machines can work faster than humans.
In addition, robots do not get tired, don’t need sleep or a vacation, don’t need a pension for their retirement, do not have associated healthcare costs, and do not have families to support. From an efficiency and cost perspective, replacing humans with robots is a no-brainer.
We then ask ourselves: what will happen to the people displaced from the workplace? Their ability to adapt to new jobs and tasks is limited by many factors, including age, cost, and the availability of intensive hands-on training programs.
Presently, we see an increased effort to develop robots for domestic tasks. According to Alec Ross in "Here Come the Robots," Japan has the largest elderly population of any country, due to very long life expectancy and low birth rates, and it has made substantial investments in robotics research and manufacturing.
In the next decade, given its strict immigration policies, Japan will face a serious shortage of care workers for the elderly. As the largest producer of robots, Japan is working intensely to create the next generation of non-human health care workers for the elderly. Toyota and Honda are racing to bring the most capable and affordable robot into the home, and a series of prototypes has already been displayed.
The same is happening in the other top robot-producing countries. As they increase their robot exports, they also integrate the same technologies into their own factories. Robots are being developed not only for export but also for use in the countries that export them.
With so much effort being devoted to developing robots that will help people, we must ask whether we know how to protect the people we are trying to help. If robots can replace humans in jobs such as construction, heavy manufacturing, nuclear plants, mining, exploration, and health care, who will provide for the people whose jobs are displaced? Who will support those who cannot find another job? Who will educate them? Who will address unemployment, and how? In human culture, work is directly tied to dignity and respect. In an era of robotics, who will redefine work to include these societal values?
Asimov, in his short story collection I, Robot, hypothesized that robots were designed to obey three laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In trying to design robots that can live alongside people, we will need to define these, or other, laws to allow for coexistence. Assuming that helping people is the primary law of the future machines we create, those machines will need to act autonomously without causing harm to members of the community. This implies freedom in interpreting the rules and the ability to resolve conflicts.
Do we know enough about what it means to be “free”, to “protect”, to treat with “dignity”, to be “empathetic and just”, and what it means to “survive” all while remaining helpful?
If we do not have robust answers to these questions, how do we expect to create machines whose usefulness will depend on these definitions? If the future Asimov foresaw 75 years ago is around the corner, then we need to spend more time asking what makes us human before we try to reproduce ourselves in machines.
Isaac Asimov, "I, Robot," 1942.
Daniel Workman, "Top Industrial Exporters," 2017.
Alan Tovey, "Britain lags rivals in robots – but automating the workplace may not mean huge job losses," Telegraph Media Group Limited, 2017.
Gabriela Vatu, "8 Countries that Produce the Most Robots in the World," http://www.insidermonkey.com/blog/8-countries-that-produce-the-most-robots-in-the-world-512263/, December 2016.
Alec Ross, "The Industries of the Future," Simon & Schuster Paperbacks, 2016.