Humans and Machines

If machines start to think for themselves, can they still remain subordinate to humans? An extract

John Edward Jackson

Whatever they are called (drones, robots, or unmanned systems), these “digital assistants” will have a profound impact on the way people live and the way combat is conducted in the future. Experts such as P.W. Singer believe that today’s drones are at a developmental stage analogous to that of the Model T Ford in the early days of the automobile, or of the personal computer in the early 1980s. As impressive as current capabilities may be, they pale in comparison with where they will be in ten, fifteen, or twenty years.

The exponential increase in the capabilities of intelligent machines rests on some simple laws of physics and the growth of micro-miniature manufacturing. What is commonly referred to as Moore’s law is not a law in the legal or scientific sense. Instead, it is a prediction made in 1965 by Gordon Moore, the cofounder of Intel Corporation, that the number of transistors on a microchip would double every two years. The prediction held true for four decades and then accelerated, to the point that such growth now occurs about every eighteen months. These incredibly small and complex microchips enable unimaginable levels of computing power: in 2016, for example, Intel’s Xeon Broadwell processor contained 7.2 billion transistors on a single chip. Better microchips (and other factors) lead to faster, more powerful, and cheaper computers for virtually every application. The cofounder of Sun Microsystems, Bill Joy, has predicted that by 2030 we will have computers a million times more powerful than today’s personal computers. One must wonder about the astounding computerized world that could exist in a few short decades.
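The arithmetic behind such projections is simple compounding, and it can be checked in a few lines. The sketch below is illustrative only (it is not from the book): it projects transistor counts forward from the 7.2-billion Broadwell figure under an assumed eighteen-month doubling, and it tests Bill Joy’s “million times more powerful” claim, which works out to roughly twenty doublings over thirty years.

```python
# Illustrative sketch: compound-doubling arithmetic behind Moore's-law
# projections. The 2016 Broadwell count and the 18-month doubling interval
# come from the paragraph above; everything else is an assumption.

def project_transistors(start_count: float, start_year: int,
                        target_year: int,
                        months_per_doubling: float = 18.0) -> float:
    """Project a transistor count forward by repeated doubling."""
    doublings = (target_year - start_year) * 12.0 / months_per_doubling
    return start_count * 2.0 ** doublings

if __name__ == "__main__":
    base = 7.2e9  # Intel Xeon Broadwell, 2016
    for year in (2026, 2036):
        print(f"{year}: ~{project_transistors(base, 2016, year):.2e} transistors")

    # Bill Joy's "a million times more powerful by 2030": thirty years at one
    # doubling every eighteen months is twenty doublings, and 2**20 ~ 1.05 million.
    print(f"doublings in 30 years: {30 * 12 / 18:.0f}; factor: {2 ** 20:,}")
```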


The Campaign to Stop Killer Robots

Countless stories have been written, and movies produced, about intelligent and malevolent robots that turn on their human masters. Whether the threat comes from the all-powerful Skynet in the Terminator movies or from the Cylons in television’s Battlestar Galactica, the drama is derived from humanity’s struggle to defeat mechanical super-machines. Flesh-and-blood Homo sapiens is portrayed as slower, weaker, and less intelligent than its artificial opponents. The tipping point in the narrative usually occurs when the robots reach a level of self-awareness (or sentience) that makes them question the master-servant or master-slave relationship. The previous chapters have looked in some detail at the meaning of “One Nation under Drones.” It is fair to ask who will be in charge: man or machine? While the question may seem ridiculous on the surface, many critics with impressive credentials believe action must be taken now to forestall a future that does not bode well for humanity.

A sentient or conscious machine would have the capacity to feel, to perceive, and to act on a need for self-preservation. It would have a level of intelligence equal to or greater than a human being’s and would also be able to display desire, ambition, will, ethics, personality, and other human qualities. Turning again to science fiction, the character Data, an android or artificial human, appears in the television series Star Trek: The Next Generation and several movie spin-offs. He is shown on a permanent quest to become more humanlike and even tries to understand and use humor in his interactions with other crew members. The fictional story line of Star Trek: The Next Generation takes place more than three centuries in the future, in the year 2338. But a growing movement of critics sees a dangerous future much closer to today.

In July 2015 more than a thousand scientists, engineers, and experts in the area of artificial intelligence (AI) signed a public letter warning of the threat represented by further research into military-focused intelligent machines and calling for a ban on the development of autonomous weapons. Released at the Twenty-Fourth International Joint Conference on Artificial Intelligence in Buenos Aires, the letter included noted inventors and scholars among its signatories: astrophysicist Stephen Hawking, Tesla founder Elon Musk, Apple cofounder Steve Wozniak, cognitive scientist Noam Chomsky, and others. The participation of such luminaries demonstrates that the concerns are not coming from some neo-Luddite fringe element but rather from some of the most highly regarded minds of our time. Stephen Hawking was quoted by the BBC as saying, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Elon Musk has called AI “our biggest existential threat.” A less alarmist group of researchers suggests that such threats could be mitigated by the manner in which artificial “brains” are designed and built.

Continuing with the notion, expressed throughout this book, that concepts developed by science fiction writers can be useful when one considers possible future events, it is informative to review how human control over AI-based systems was handled by award-winning author Isaac Asimov. He solved the problem by imprinting on the “positronic” brains of high-order robots what he referred to as the “Three Laws of Robotics.” Writing in 1942, he quoted from the fictitious Handbook of Robotics, 56th Edition, 2058 AD, which defined the three laws as follows:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In Asimov’s stories, whenever robots encountered situations that violated any of the laws, the circuitry in their “positronic brains” would be damaged beyond repair, and the robot would become inert. Robots following these three laws appeared in more than four dozen of Asimov’s novels and short stories. After his death in 1992, his estate authorized medical doctor Mickey Zucker Reichert to publish a trilogy of books that tells how the “positronic brain” was developed in the earliest years of Asimov’s robotic universe. Asimov’s original books and Reichert’s prequels are recommended reading for those interested in “future history.”

Robots programmed to follow something like Asimov’s three laws could be an answer to humanity’s present-day, real-world concerns, and a number of top-ranked computer scientists and engineers are working on versions of these laws today. First among them is Georgia Tech’s Ron Arkin, whose ideas on how autonomous systems might lessen the risk to noncombatants in a war zone were detailed in chapter 10 of this book. If robots could be programmed to follow the laws of war rather than the fictitious Laws of Robotics, they could truly become trusted partners on the battlefield.
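What gives Asimov’s laws their force is their strict precedence: a lower law yields whenever it conflicts with a higher one. As a purely illustrative sketch (not drawn from Arkin’s work or any fielded system), the short Python example below encodes that precedence as a lexicographic comparison among candidate actions; all of the names and fields are hypothetical.

```python
# Illustrative sketch: Asimov's Three Laws as a strict precedence ordering.
# All names and fields here are hypothetical; this is not from the book.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def law_penalty(action: Action) -> tuple[bool, bool, bool]:
    """Order violations most-serious-first. Python compares tuples
    lexicographically, so a First Law violation outweighs anything below it."""
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(candidates: list[Action]) -> Action:
    """Pick the candidate whose violations are least serious under the laws."""
    return min(candidates, key=law_penalty)

if __name__ == "__main__":
    options = [
        Action("stand aside, as ordered", harms_human=True,
               disobeys_order=False, endangers_self=False),
        Action("shield the human, ignoring the order", harms_human=False,
               disobeys_order=True, endangers_self=True),
    ]
    # Prints the second option: the First Law dominates the other two.
    print(choose(options).description)
```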


Where Are We Today?

Putting aside the existential questions embedded in some possible man-versus-machine contest, we find ourselves at some point along a developmental and operational continuum that ranges from dumb machines controlled 100 percent of the time by human operators to robotic soldiers given a set of targets and released to exterminate them in any manner they see fit. In 2018 military systems fall clearly on the “dumb machines” side of the scale, although trends are moving inexorably toward the other end. Virtually all military unmanned systems used today are controlled by a remote operator, and decisions to kill are firmly in the hands of a unit commander. The policy of the U.S. Department of Defense, as codified in Directive 3000.09 of November 21, 2012, is that “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” This directive addresses autonomous systems that can perform a limited set of actions and activities without direct human supervision. It is a substantial (if not impossible) leap from task-based autonomy to generalized artificial intelligence, but autonomous machines are a reality today and constitute the first steps down a path that could result in artificially intelligent “thinking machines.” This path is filled with the potential for both good and evil, and it is up to us to determine how we follow it into the future.

Examples of drones being used for evil purposes are not hard to find. It was widely reported in the international press when, on September 15, 2013, a quadrotor drone crash-landed mere feet from German chancellor Angela Merkel during a campaign rally in Dresden, Germany. Investigators ultimately determined that the unarmed drone had been flown by members of the Pirate Party to protest the use of drones by countries in the European Union.

A much more serious incident occurred on August 4, 2018, in Caracas, Venezuela, when two explosive-laden hexacopters (six-rotor drones) were used in an apparent assassination attempt on President Nicolás Maduro, who was interrupted while speaking at a military parade and hustled to safety. The Maduro incident was the first such armed attack on a national leader, but potentially not the last. Security forces must, in the future, be concerned with both ground-level attacks and possible danger from above.

The writers of the preceding chapters (and your humble editor) together envision a future in which robotic and unmanned systems continue to be refined as powerful tools. They will be used to improve the human condition by freeing many workers from the drudgery of repetitious manual labor; to increase agricultural production through the use of precision agricultural methods; and to take men and women out of jobs that are dull, dirty, and dangerous. Our hope is that this future world will be more peaceful, but if that is not the case, robotic and unmanned weapons will be used to fight more efficiently, more humanely, and with greater precision. They will shift conflict away from the potential need to use weapons of mass destruction toward a more focused application of force when and where it is necessary.

This is what it will mean to live in “One Nation under Drones”!

One Nation, Under Drones
Legality, Morality and Utility of Unmanned Combat Systems
Edited by Capt John E. Jackson, USN (Ret.)
Naval Institute Press, p. 218
