From 1983 to 1993 DARPA spent over $1 billion on a program called the Strategic Computing Initiative. The agency’s goal was to push the boundaries of computers, artificial intelligence, and robotics to build something that, in hindsight, looks strikingly similar to the dystopian future of the Terminator movies. They wanted to build Skynet.

Much like Ronald Reagan’s Star Wars program, the idea behind Strategic Computing proved too futuristic for its time. But with the stunning advancements we’re witnessing today in military AI and autonomous robots, it’s worth revisiting this nearly forgotten program, and asking ourselves if we’re ready for a world of hyperconnected killing machines. And perhaps a more futile question: Even if we wanted to stop it, is it too late?

“The possibilities are quite startling…”

If the new generation technology evolves as we now expect, there will be unique new opportunities for military applications of computing. For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. The possibilities are quite startling, and suggest that new generation computing could fundamentally change the nature of future conflicts.

That’s from a little-known document presented to Congress in October of 1983 outlining the mission of the new Strategic Computing Initiative (SCI). And like nearly everything DARPA has done before and since, it’s unapologetically ambitious.

The vision for SCI was wrapped up in a completely new system spearheaded by Robert Kahn, then director of the Information Processing Techniques Office (IPTO) at DARPA. As Alex Roland explains in his 2002 history Strategic Computing, Kahn wasn’t the first to imagine such a system, but “he was the first to articulate a vision of what SC might be. He launched the project and shaped its early years. SC went on to have a life of its own, run by other people, but it never lost the imprint of Kahn.”

The system was supposed to create a world where autonomous vehicles would not only gather intelligence on any enemy worldwide, but could also strike with deadly precision from land, sea, and air. It was to be a global network that connected every aspect of the U.S. military’s technological capabilities—capabilities that depended on new, impossibly fast computers.

But the network wasn’t supposed to process information in a cold, matter-of-fact way. No, this new system was supposed to see, hear, act, and react. Most importantly, it was supposed to understand, all without human prompting.

An Economic Arms Race

The origin of Strategic Computing is often associated with the technological competition brewing between the U.S. and Japan in the early 1980s. The Japanese wanted to build a new generation of supercomputers as a foundation for artificial intelligence capabilities. Pairing the economic might of the Japanese government with Japan’s burgeoning microelectronics and computer industry, they embarked on their Fifth Generation Computer Systems project to achieve it.

The goal was to create unbelievably fast computers that would allow Japan to leapfrog other countries (most importantly the United States and its emerging “Silicon Valley”) in the race for technological dominance. They gave themselves a decade to accomplish this task. But much like the United States, no matter how much faster they made their machines, they couldn’t seem to make them “smarter” with strong AI.

Japan’s ambition terrified many people in the U.S. who worried that America was losing its technological edge. This fear was stoked in no small part by a 1983 book called The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World by Edward A. Feigenbaum and Pamela McCorduck, which was seen as a must-read on Capitol Hill.

“The consumer electronics industry will integrate new-generation computing technology and create a home market for applications of machine intelligence.”

Reaching out to the private sector and the university system would also ensure that the best and brightest were contributing to DARPA’s mission for the program:

Equally important is technology transfer to industry, both to build up a base of engineers and system builders familiar with computer science and machine intelligence technology now resident in leading university laboratories, and to facilitate incorporation of the new technology into corporate product lines. To this end we will make full use of regulations of Government procurement involving protection of proprietary information and trade secrets, patent rights, and licensing and royalty arrangements.

The long and short of it? The government gave assurances to private industry that the technology developed wouldn’t be handed off to competing companies.

But economic competition with the Japanese, while very much a motivator, was almost a sideline concern for many policymakers embroiled in Cold War politics. For the more hawkish members of the Republican Party, military build-up was the prime concern, and the Soviet threat loomed far larger than any trade rivalry. SCI was designed to address that threat head-on.

The Star Wars Connection

The launch of the Strategic Computing program and DARPA’s requests for proposals in 1983 and 1984 set off a heated debate in the academic community—the same community that would ultimately benefit from the program’s DARPA funding. Some were skeptical that the ambitious plans for advanced artificial intelligence could ever be accomplished. Others worried that advancing the cause of AI for the military would usher in a terrifying era of autonomous robot armies.

It was a valid concern. If the goal of Star Wars—the popular nickname for Ronald Reagan’s Strategic Defense Initiative (SDI), and a perennial political football at the time—was an automated (or semi-automated) response to any missile threat from the Soviets, it would seem absurd not to tie it into a larger network of truly intelligent machines. The missions of the two projects—not to mention their originating institutions—overlapped too much to be a coincidence, despite everyone’s insistence that it was just that.

From a 1988 paper by Chris Hables Gray:

The Star Wars battle manager, probably the most complex and the largest software project ever, is conceptually (though not administratively) a part of [Strategic Computing Initiative]. Making the scientific breakthroughs in computing that the SDI needs is a key goal of the [Strategic Computing Initiative].

If you ask anyone who worked on SCI at the highest levels (as Roland did for his 2002 book on the project), they’ll insist that SCI had nothing to do with Ronald Reagan’s dream for Star Wars. But right from Strategic Computing’s early days, people were drawing connections between SCI and SDI. Part of the connective tissue was simply that the programs shared similar names, and were even named by the same man: Robert Cooper, DARPA director from 1981 to 1985. And perhaps people saw a thread because the interconnected computing power being developed for SCI just made sense as an application for a space-based missile defense strategy.

Whether or not you believe SCI was going to function as an arm of the Star Wars mission for space-based defense, there’s no denying that if both had worked out, they would’ve been natural collaborators.

Applying Strategic Computing on Land, Sea and Air

The 1983 chart above outlined the mission of Strategic Computing. The goal was clear: develop a broad base of machine intelligence tech to increase national security and economic strength. But to do that, Congress and the military institutions that would eventually benefit from SCI would need to see it in action.

SCI had three applications that were supposed to prove its potential, though it would acquire many more by the late 1980s. Leading the charge were the Autonomous Land Vehicle, the Pilot’s Associate, and the Aircraft Carrier Battle Management System.

These applications were built on top of the incredibly advanced computers being developed at places like BBN, the Cambridge company probably best known for its work on the early internet. They were meant to drive advancements in vision systems, language comprehension, and navigation—vital tools for an integrated military force of man and machine.

The Driverless Vehicle of 1985

The most ominous-looking product to emerge from SCI was the Autonomous Land Vehicle. The 8-wheeled unmanned ground vehicle was 10 feet tall and 13.5 feet long, with a camera and sensors mounted on the roof guiding its vision and navigation system.

Martin Marietta, which merged with the Lockheed Corporation in 1995 to become Lockheed Martin, won the bid in the summer of 1984 to create the experimental ALV. They would get $10.6 million over the three and a half years of the program (about $24 million adjusted for inflation), with an optional $6 million after that if the project met certain benchmarks.

The October 1985 issue of Popular Science included a story about the tests that were being conducted at a secret Martin Marietta facility southwest of Denver.

Writer Jim Schefter described the scene at the test facility:

The boxy blue-and-white vehicle crawls sedately along a narrow Colorado valley road, never venturing far from the center line. A single window, set cyclops-like in the vehicle’s slab face, gives no clue about the driver. The tentative trek looks out of character for the massive 10-foot tall eight-wheeled vehicle. Although three on-board diesel engines roar, the wheels creep along at three mph.

After about a half-mile, the hulking vehicle stops. But nobody climbs out. There is no one aboard — just a computer. Using laser and video for eyes, a seminal — yet still advanced — artificial-intelligence program has sent the vehicle down the road without human intervention.

DARPA paired Martin Marietta with the University of Maryland, whose earlier work in vision systems was seen as instrumental to making the autonomous vehicle portion of the program a success.

As it turns out, creating a vision system for an autonomous vehicle is incredibly difficult. The system was fooled by light and shadows, and thus couldn’t work with any degree of consistency. It might be able to detect the edge of the road at noon just fine, only to be thrown off by the shadows cast during the early evening.

Any environmental change (like mud tracked along the road by a different vehicle) also threw the vision system for a loop. This, of course, was unacceptable even in the highly controlled testing area. If it couldn’t handle such seemingly simple obstacles, how would such a vehicle deal with the countless variables it would surely encounter out on the battlefield?
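Martin Marietta’s actual vision software isn’t described in detail here, but the brittleness is easy to demonstrate with a toy example. The sketch below, a hypothetical brightness-threshold road-edge finder written in Python and not the ALV’s real algorithm, shows how a single shadow can make pavement vanish from the machine’s point of view:

```python
# Toy illustration (not the ALV's actual algorithm): a road-edge finder that
# thresholds pixel brightness along one scan line of a camera image.
# Bright pixels = pavement, dark pixels = grass and shoulder.
import numpy as np

def find_road_edges(scanline, threshold=100):
    """Return the first and last pixel indices the detector thinks are road."""
    road = np.where(scanline > threshold)[0]
    if road.size == 0:
        return None
    return int(road[0]), int(road[-1])

# Midday: pavement reads bright (~180), shoulders read dark (~40).
noon = np.array([40] * 20 + [180] * 60 + [40] * 20)
print(find_road_edges(noon))     # (20, 79) -- both edges found correctly

# Early evening: a shadow darkens part of the pavement to ~70, which falls
# below the threshold and "erases" that stretch of road.
evening = noon.copy()
evening[50:80] = 70
print(find_road_edges(evening))  # (20, 49) -- the road appears to end early
```

A detector like this has no concept of a shadow: any patch of pavement that photographs darker than its tuned threshold simply stops being road, which is exactly the kind of brittleness that plagued the ALV.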

Despite meeting significant milestones by November of 1987, the ALV component of SCI was effectively abandoned by the end of the year. Though the autonomous vehicle was still quite primitive, some people at DARPA thought it was being dumped way too soon.

In the end, it couldn’t overcome the perception that it wasn’t ready for battle. As Alex Roland notes in the book Strategic Computing, “One officer, who completely misunderstood the concept of the ALV program, complained that the vehicle was militarily useless: huge, slow, and painted white, it would be too easy a target on the battlefield.” DARPA formally cancelled work on the ALV in April of 1988.

R2-D2 in Real Life

In this scenario the pilot would still make the final decisions. But the Pilot’s Associate was going to be smart enough not only to know who, what, and how to ask questions; it was also supposed to understand why.

From the Strategic Computing planning document:

Pilots in combat are regularly overwhelmed by the quantity of incoming data and communications on which they must base life or death decisions. They can be equally overwhelmed by the dozens of switches, buttons, and knobs that cover their control handles demanding precise activation. While each of the aircraft’s hundreds of components serve legitimate purposes, the technologies which created them have far outpaced our skill at intelligently interfacing the pilot with them.

It’s here that we see DARPA’s case emerge for needing a Skynet of its own. The overwhelming nature of combat—overwhelming, DARPA implies, only because battlefield technology had already advanced so quickly—could only be tamed by new machines. The pilot might still be the one pushing the button, but these computers would do at least half the thinking for him. When mankind can’t keep up, hand it off to the machines.
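None of the documents quoted here spell out how that hand-off would work in software, but the core idea, ranking the flood of cockpit inputs so the pilot only sees what matters right now, can be sketched in a few lines of Python. Everything below, from the alert messages to the priority numbers, is invented for illustration:

```python
# Hypothetical sketch of the Pilot's Associate idea: rank incoming cockpit
# alerts by urgency so the pilot sees the few that matter most.
# The alerts, priorities, and cutoff are invented for illustration.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    priority: int                        # lower number = more urgent
    message: str = field(compare=False)  # excluded from ordering

def triage(alerts, show_top=3):
    """Return only the most urgent alerts, most urgent first."""
    return heapq.nsmallest(show_top, alerts)

incoming = [
    Alert(5, "Fuel at 62 percent"),
    Alert(1, "Missile lock detected, 7 o'clock"),
    Alert(4, "Wingman requesting position update"),
    Alert(2, "Terrain closure in 9 seconds"),
    Alert(6, "Routine avionics self-test passed"),
]

for alert in triage(incoming):
    print(alert.message)
# Missile lock detected, 7 o'clock
# Terrain closure in 9 seconds
# Wingman requesting position update
```

The hard part, of course, is that a real Pilot’s Associate would have to assign those priorities itself, from sensor data, mission context, and the pilot’s own workload; that’s where the strong AI DARPA was chasing comes in.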

The Pilot’s Associate application never got the same exposure in the American press that the ALV did, probably because it was harder to visualize than an enormous, driverless tank rolling down the road. But looking at the speech recognition tech of today, it’s easy to see where all that research into a Pilot’s Associate ended up.

The Invisible Robot Advisor

The Battle Management System was the third of the three applications originally planned to prove that SCI was a practical endeavor.

As it’s described in Strategic Computing (2002):

In the naval battle management system envisioned for SC, the expert system would “make inferences about enemy and own force order-of-battle which explicitly include uncertainty, generate strike options, carry out simulations for evaluating these options, generate the [operations plan], and produce explanations.”
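That single quoted sentence packs in a lot: infer what forces are out there, attach an explicit degree of uncertainty to each inference, weigh the options, and explain the reasoning. As a minimal sketch of what rule-based inference with uncertainty looks like (the naval scenario, rules, and confidence numbers below are invented, not drawn from the actual SC program), consider:

```python
# Toy illustration of uncertainty-weighted rule inference of the sort the
# SC planning documents describe. The evidence, rules, and numbers are
# invented; real expert systems of the era were far more elaborate.

# Evidence about the enemy order-of-battle, each with a confidence in [0, 1].
evidence = {
    "radar_contact_carrier": 0.7,
    "intercepted_comms_escorts": 0.6,
}

# Each rule: (required premises, conclusion, rule strength).
rules = [
    (["radar_contact_carrier"], "enemy_carrier_group_present", 0.9),
    (["enemy_carrier_group_present", "intercepted_comms_escorts"],
     "strike_option_air_wing", 0.8),
]

def infer(evidence, rules):
    """Forward-chain the rules, carrying uncertainty through each step."""
    beliefs = dict(evidence)
    explanations = []
    for premises, conclusion, strength in rules:
        if all(p in beliefs for p in premises):
            # Confidence = weakest premise scaled by the rule's own strength.
            confidence = min(beliefs[p] for p in premises) * strength
            beliefs[conclusion] = confidence
            explanations.append(
                f"{conclusion} (confidence {confidence:.2f}) "
                f"because {', '.join(premises)}"
            )
    return beliefs, explanations

beliefs, explanations = infer(evidence, rules)
for line in explanations:
    print(line)
# enemy_carrier_group_present (confidence 0.63) because radar_contact_carrier
# strike_option_air_wing (confidence 0.48) because enemy_carrier_group_present, intercepted_comms_escorts
```

A production system of the sort SC envisioned would chain thousands of such rules and run full simulations against each generated option; this toy stops at the explanation step.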

The Battle Management System was essentially the brain of the entire operation, and for that reason it was kept out of the spotlight more so than grunts like the ALV. Robots rolling down the road without human control is terrifying enough for some people. Invisible robots with their invisible finger on the very real nuclear button? You don’t exactly send press releases out for that one.

The Battle Management System was devised as an application specifically for the Navy (just as the ALV had been designed for the Army, and the Pilot’s Associate for the Air Force) but it was really just a showcase for the broader system. Every one of these technologies was intended to eventually be used wherever it was most needed. The voice recognition software developed for the Pilot’s Associate would need to work for every branch of the military, not just the Air Force. And the Battle Management System would have to play nice with everyone—except the enemy target, of course.

Piecing Together Skynet

All of the various components of the Strategic Computing Initiative were part of a larger hypothetical system that could have radically changed the nature of war in the 21st century.

Imagine a global wireless network overseeing various subnetworks within the U.S. military. Imagine armies of robot tanks on the ground talking to fleets of drones in the sky and unmanned submarines in the sea—all coordinating their activities faster than any human commander ever could. Now imagine it all being that much more complicated, with nukes waiting to be deployed in space.

The vision for the Strategic Computing Initiative was incredibly bold, and yet somehow quaint when we look at just how far it could have gone. The logical extensions of strong AI and a global network of killing machines are not hard to envision, if only because we’ve seen them played out in fiction countless times.

The Future of War and Peace

What finally killed the Strategic Computing Initiative in the early 90s was the acceptance—after nearly a decade of trying—that strong artificial intelligence on the level DARPA had imagined was simply unattainable. But if all of these various technologies developed in the 1980s sound eerily familiar, it’s probably because they’re all making headlines here in the early 21st century.

We see the vision systems that were imagined for the ALV emerging in robots like Boston Dynamics’ Atlas, we see the Pilot’s Associate’s Siri-like understanding of speech being utilized by the US Air Force, and we see autonomous vehicles being tested by Google, among a host of other companies. They’re all the future of war. And if companies like Google are to be believed, they’re the future of peace as well.

Google’s recent purchase of Boston Dynamics has raised quite a few eyebrows among those concerned about a future filled with autonomous robot armies. Google says that Boston Dynamics will honor old contracts with military clients, though they’ll no longer accept any new ones.

But whether or not they continue to accept military contracts (and it’s certainly possible that they could do so under the radar, within a secretive black budget), there’s no question that the line between military and civilian technology has always been blurred. If Boston Dynamics never again works for organizations like DARPA, and yet Google benefits from research paid for by the military, then arguably the system worked.

The military got what it needed by advancing the science of robotics with a private company. And now lessons from that military tech will show up in our everyday civilian lives—just like countless other technologies, including the internet itself.

In truth, this post barely scratches the surface of DARPA’s aspirations for Strategic Computing. But hopefully, by continuing to explore yesterday’s visions of the future, we can gain some historical perspective and better appreciate that these new advancements don’t emerge out of thin air. They’re not even that new. They’re the product of decades of research and billions of dollars spent by hundreds of organizations—both public and private.

Ultimately, Strategic Computing wasn’t derailed by some fear of what creating such a program would do to our world. The technology to build it—from the advanced AI to the autonomous vehicles—simply wasn’t evolving fast enough. But here we are, two decades after SCI faded away; two decades further into the development of this vision for smart machines.

Our future of super-smart, interconnected robots is nearly here. You don’t have to like it, but you can’t say you weren’t warned.


Sources: Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993 by Alex Roland with Philip Shiman (2002); Strategic Computing: New Generation Computing Technology: A Strategic Plan for its Development and Application to Critical Problems in Defense by DARPA (28 October 1983); Strategic Computing at DARPA: Overview and Assessment by Mark Stefik (1985); Arms and Artificial Intelligence: Weapons and Arms Control Applications of Advanced Computing edited by Allan M. Din (1988); The Strategic Computing Program at Four Years: Implications and Intimations by Chris Hables Gray (1988).

Images: ALV with laser sight via Lockheed Martin; Strategic Computing logo from the 1983 DARPA planning document; Cover of The Fifth Generation scanned from the book cover; PIM/p computer via JipDec; Ronald Reagan and Star Wars screenshot taken from the PBS American Masters program; Black and white ALV outside Denver, scanned from an archival press photo; ALV illustration scanned from the October 1983 issue of Popular Science; Artists’ concept illustration for the Pilot’s Associate found in an early online draft of The Quest for Artificial Intelligence by Nils Nilsson (2009); Black and white Pilot’s Associate illustration scanned from the book Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993 by Alex Roland and Philip Shiman (2002); Atlas Robot and Google driverless car taken from WikiCommons; Future autonomous military fighter via the Unmanned Systems Integrated Roadmap FY 2011-2036 published by the U.S. Department of Defense