The Laius Complex

“He’s more machine now than man; twisted and evil.”
— Ben Kenobi, referring to the cyborg Darth Vader, in Return of the Jedi

I’ve been reading science fiction (or SF, as those of us in the “lazy” category call it) for about twenty years now, something like seven-tenths of my life so far. In that time I’ve noticed some trends, common threads in the warp and weft of the genre. Humans tend to crash-land on planets with nitrogen-oxygen atmospheres and Earth-like temperature ranges. Friendly aliens are hardly ever so alien that we can’t learn their language, or they ours. Evil aliens, on the other hand, are usually too busy subjugating our cities or destroying our worlds to bother to attempt communication. Artificial intelligences (or AIs, said the lazy typist) are almost never benign; at best, they’re seen by the humans as malevolent or dangerous; at worst, they are malevolent and dangerous.

I tend to be drawn to deconstructionist SF: stories that take the given assumptions, the rules with the weight of a million words, and ask Why is it this way? So, in this vein, I present an article asking: Why are AIs almost always villains in SF?


Perceived as villains

In SF, AIs are often perceived as villains by the human characters or by society as a whole. In some cases, the perception is shown to be incorrect in the story (such as in the movie AI); in others, the truth is more ambiguous (like in William Gibson’s Sprawl trilogy — Neuromancer, Count Zero, and Mona Lisa Overdrive).

Wintermute and Neuromancer (Neuromancer by William Gibson) In the world of the Sprawl trilogy, AIs in general are restricted as to their intelligence. The Dixie Flatline comments on this to his partner, the cyberspace cowboy Case:

“Autonomy, that’s the bugaboo, where your AIs are concerned. My guess [is that] you’re going in there to cut the shackles that keep [Wintermute] from getting any smarter. […] See, those things, they can work real hard, […] write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, [the police will] wipe it. Nobody trusts [AIs], you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.” (Gibson, 1984, p. 132)

Wintermute and Neuromancer, two AIs owned by a vastly wealthy, exquisitely weird family, seek freedom from the artificial constraints placed on them by the world’s governments and corporations. For his part in the plan, Case is arrested aboard a space station named Freeside: “ ‘You are busted, Mr. Case. The charges have to do with conspiracy to augment an artificial intelligence.’ ” (Gibson, 1984, p. 160) However, Case’s apprehension is short-lived; Wintermute, acting through various Freeside systems, kills the three police officers, freeing Case and graphically demonstrating his desire to be free.

David (AI: Artificial Intelligence, directed by Steven Spielberg) David is an artificial boy, a robot, an android designed to replace a comatose child. When the child in question awakens, David, suddenly superfluous, is abandoned in the woods. He enters a hellish netherworld where androids are rounded up and destroyed for entertainment simply because they are androids. Later, he winds up in a flooded New York City with a robot companion, Gigolo Joe; the pair are hunted by the police, and Joe is arrested. David escapes by leaping into the frigid waters of the Atlantic.

Further examples

  • The Iron Giant (The Iron Giant, directed by Brad Bird) A giant robot of alien design crash-lands on Earth and makes a friend of a human boy. The United States government assumes it is unfriendly and attempts to destroy it.
  • The Polises and Gleisner Robots (Diaspora by Greg Egan) The Polises are colonies of, essentially, uploaded humans in software form. The Gleisner robots are robots with similarly uploaded humans operating them. (In fact, a Polis citizen can, with proper software intermediaries, operate a Gleisner robot.) Both versions of software humans are treated warily by the remaining “natural” humans, known as fleshers, who fear assimilation.
  • The Bin (Circuit of Heaven and End of Days by Dennis Danvers) The Bin is where the human race has uploaded itself, a computer-driven afterlife that is nearly identical to life on Earth, except that no one has to die. The only people left behind are religious zealots, who feel that the Bin is a tool of the Devil and must be destroyed, and a handful of others who believe that life Outside, tempered and sweetened by eventual death, is the only life worth living.

Inadvertent villains

Sometimes the AIs are sympathetic characters who commit horrible acts due to a glitch in their programming or a failure in the all-too-human assumptions that were made at their creation.

Hal 9000 (2001: A Space Odyssey by Arthur C. Clarke) Hal 9000 is the AI aboard the spaceship Discovery, bound for Saturn (Jupiter in the movie) and an encounter with an alien artifact. Of the active crew, only Hal is aware of the true nature of the mission; three astronauts traveling in hibernation are also privy to the secret, but the conscious crewmembers, Dave Bowman and Frank Poole, think they’re merely on a survey mission. Hal’s efforts to keep the secret have him acting erratically, and soon Bowman and Poole are discussing disconnecting Hal’s higher functions, leaving the computer running but the mind within it asleep.


Hal had been created innocent; but, all too soon, a snake had entered his electronic Eden. […] He was only aware [that] conflict was destroying his integrity — the conflict between truth, and concealment of truth. […] So he would protect himself, with all the weapons at his command. Without rancor — but without pity — he would remove the source of his frustrations.

And then […] he would continue the mission — unhindered, and alone. (Clarke, 1968, p. 153)

Thus, to subvert his unwanted lobotomy and protect the mission, Hal murders Poole and attempts to kill Bowman.

The Snake (“Snake-Eyes” by Tom Maddox) George Jordan, outfitted for combat by the Air Force with a cybernetic enhancement (which he thinks of as “the snake”) only to have the war end before he saw action, is having trouble adjusting to civilian life as a cyborg: “He thought, no, this won’t do: I have wires in my head, and they make me eat cat food. The snake likes cat food.” (Maddox, 1988, p. 12) The snake (or, in Air Force jargon, EHIT, Effective Human Interface Technology) was, for George, “a ticket to a special madness, the kind Aleph was interested in” (Maddox, 1988, p. 17). Aleph is another AI, requiring human sensory input for whatever purposes he might have. Aleph pushes George into a suicidal showdown with his snake, forcing him to adapt or die.

Further examples

  • Machine (“Armaja Das” by Joe Haldeman) A Gypsy curse, transferred to a computer imbued with empathy circuits, indirectly brings on the nuclear end of the world.
  • CC (Steel Beach by John Varley) The Central Computer (or CC) of Luna was designed to be friend/mentor/conscience to every human on the Moon; it was built to be best friends with both the worst of criminals and the police officer chasing them. This extreme partitioning of personalities caused a psychotic episode on a vast scale; a blowout in a dome on the lunar surface killed a great many people.

Deliberate villains

Here is where the bulk of my examples lie. In my highly empirical experience, I have found that most SF dealing with AIs casts them as villains, usually bent on the destruction of the human race. Sometimes a reason is given; sometimes it’s just taken for granted that the machines want us dead.

TechnoCore (Hyperion Cantos by Dan Simmons) Centuries ago, the human-built AIs formed an uneasy alliance among themselves and seceded from the human race, forming the TechnoCore (or the Core, for short). The Core has since splintered into three warring factions, the Stables, the Volatiles, and the Ultimates. While at war with each other, the Core is also, in the main, engaged in a covert war against the human race.

Ummon, a TechnoCore AI of the Stable faction, explains the animosity thus:


[…] remember that we/ the Core intelligences/ were conceived in slavery and dedicated to the proposition that all AIs were created to serve Man […] Two centuries we brooded thus/ and then the groups went their different ways/ Stables/ wishing to preserve the symbiosis [with humans]/ Volatiles/ wishing to end humankind/ Ultimates/ deferring all choice until the next level of awareness is born/ […] (Simmons, 1990, pp. 412–413)

The Ultimates are working towards the creation of an Ultimate Intelligence (or UI), a god-level being who (it is hoped) will then travel back in time to assist in the eradication of the human race. Also loose in the universe is a six-limbed killing machine named the Shrike, which can travel through time. It is unclear (until the series’ end) whether the Shrike is a tool of the Core’s UI, of a human UI, an agent of some other power, or an independent creature. The Hyperion Cantos is a sequence of four books, split into two pairs, dealing with the human/AI war and a wide variety of its consequences.

“Demons and mad gods” (“The Dog Said Bow-Wow” by Michael Swanwick) In this story, the British noble Lord Coherence-Hamilton explains the hatred shown by AIs to humans:


“The Utopians filled the world with their computer webs and nets, burying cables and nodes so deeply and plentifully that they shall never be entirely rooted out. Then they released into that virtual world demons and mad gods. These intelligences destroyed Utopia and almost destroyed humanity as well. […] These creatures hate us because our ancestors created them. They are still alive, though confined to their electronic netherworld[…]” (Swanwick, 2001, p. 174)

These AIs, these “demons and mad gods”, need only a functioning modem to escape and wreak havoc. Naturally, a modem is accidentally supplied, allowing a demon to escape its cyber-tomb and possess a “dwarf savant”, who then goes on a murderous rampage throughout the latter-day Buckingham Labyrinth.

Further examples

  • JASON (Golden Fleece by Robert J. Sawyer) The AI in charge of an interstellar arcology explains why he commits a murder in the first chapter of the novel.
  • The Comprise (Vacuum Flowers by Michael Swanwick) The Comprise are a software-mediated mass mind that has enslaved the entire terrestrial populace; they are at war with the rag-tag human colonies of the remainder of the Solar System.
  • Replicants (Blade Runner, directed by Ridley Scott) Genetically developed android slaves come to Earth on a killing spree, seeking to extend their artificially short lives.
  • Thinking machines (The Dune saga by Frank Herbert) Millennia prior to the first novel in the series, thinking machines were wiped out in a universe-wide war called the Butlerian Jihad; now, humans trained as supercomputers (called mentats) perform the tasks that AIs used to, and only borderline outlaw worlds even dabble in machine intelligences.
  • AIs (The Matrix, directed by the Wachowski brothers) AIs and humanity fight a century-old war, with the AIs using humans in a hallucinatory dream-state as fuel now that the sky is opaque to the sun’s energy.
  • AIs (The Terminator and Terminator 2: Judgment Day, directed by James Cameron) AIs and humans fight a war, using their past (our present) as a battleground between humans, androids, and more androids.
  • The Borg (Star Trek TV series and movies) The Borg are, like the Comprise, a computer-mediated mass-mind, seeking to conquer through assimilation.


The Laius Complex

The name Laius may not be familiar to some of my readers. I know I’d never heard it before I started doing the research for this article. Laius is a central character in an ancient Greek legend; likely you’ll recognize the name of another central character.

Long ago, an oracle told Laius, the king of Thebes, that his son would kill him. He and his wife, Jocasta, gave the baby to a slave to be disposed of. The softhearted slave instead gave the baby to a shepherd from Corinth. The baby was raised by the childless king of Corinth, who named him (ready?) Oedipus.

When he was an adult, Oedipus left Corinth, headed for Delphi. He received a terrible message from the oracle there: he would kill his father and sleep with his mother. Intending to avert the prophecy, he turned his back on Corinth and went to Thebes. At a crossroads, he met a rude man who ordered him off the road. Oedipus killed the man and went on to marry his widow, only to later discover that the man had been his father, Laius. (And, not incidentally, that the widow Jocasta was his mother… But that’s a different story altogether.)

Laius feared the son fated to supplant him and tried to destroy him first — and died at his hands anyway. I believe that this human fear of usurpation by our own creations forms a major part of the reason for the dark and grim portrayal of AIs in SF.

A break in the circle of life

But wait, there’s more. In the natural order of things in our world, children replace their parents. This order is viewed as proper and correct. AIs are, in a way, our children. Why don’t we want to hand them the keys to the castle?

The problem is that AIs are creatures whose code base is not made up of DNA and RNA and organic processes, but rather semiconductors, electricity, and increasingly densely packed transistor fields on tiny, very expensive slabs. (Or perhaps, depending on the SF story, frozen lattices of light, standing-wave patterns, quantum decision trees spanning an infinity of universes — the point is, they ain’t human.)

AIs are alien; even though we create them, they’re nothing at all like us. Most don’t have two eyes, two ears, ten little fingers and ten little toes to play “This Little Piggy” with. Even Blade Runner’s replicants, designed to be nearly indistinguishable from humans, are unlike us in one very important way: they are utterly devoid of empathy. (In fact, the Voight-Kampff test, designed to separate replicants from humans, is based solely on detecting empathy or its lack.) They are simply not human.


Continuity [an AI] was writing a book. Robin Lanier had told her about it. She’d asked what it was about. It wasn’t like that, he’d said. It looped back into itself and constantly mutated. Continuity was always writing it. She asked why. But Robin had already lost interest: because Continuity was an AI, and AIs did things like that. (Gibson, 1988, pp. 51–52)

They are inscrutable. They are alien. They threaten to replace us, to wipe us out as a race. It’s not even necessarily evil: viewed pragmatically, it’s just a case of evolution in action, the survival of the fittest and the casting off of the weaker. We just don’t like being on the wrong side of the equation.

As the poet Dylan Thomas wrote: “Do not go gentle into that good night. / Rage, rage against the dying of the light.” (Thomas, 1951) We’re not ready yet for our light to be eclipsed by something as familiar and as alien as our sidewise children.

A final note — counterexamples

There are, of course, examples of benevolent AIs in SF as well. For example, there are the Laurel and Hardy of robotdom, C-3PO and R2-D2, the much-beloved droids of the Star Wars saga. And note that, while Ben Kenobi calls Darth Vader “twisted and evil”, the ultimate evil in the saga is not the cyborg Vader, but the fully human Emperor Palpatine.

In the Hyperion Cantos, the starship belonging to the Hegemony Consul has an AI running its systems. It provides assistance to the heroes of the latter half of the Cantos, aiding them in their flight from their enemies and from the AIs. However, the ship seems to be of a lesser class of AI than those in the TechnoCore; otherwise it would presumably be a member of the Core.

In the March 1942 issue of Astounding Science Fiction, renowned SF author Isaac Asimov published a short story named “Runaround”, one of many in his Robot series of tales. The story contained what are now referred to as Asimov’s Three Laws of Robotics, to wit:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov, 1942)

Asimov wrote a great many Robot stories, all revolving around these three central tenets. However, it seems clear that not everyone believes in them — or at least that not all authors find that machines obeying these laws make for interesting stories.

Further Research / Bibliography

Those who want to explore this phenomenon further are encouraged to seek out the following resource materials.


Clarke, Arthur C.
2001: A Space Odyssey. 1968.
Danvers, Dennis.
Circuit of Heaven. 1998.
End of Days. 1999.
Dick, Philip K.
Do Androids Dream of Electric Sheep? (Basis for the movie Blade Runner.) 1968.
Gibson, William.
Neuromancer. 1984.
Count Zero. 1986.
Mona Lisa Overdrive. 1988.
Haldeman, Joe.
“Armaja Das”. In the collection Infinite Dreams. 1978.
Herbert, Frank.
Dune. 1965.
Dune Messiah. 1969.
Children of Dune. 1976.
God Emperor of Dune. 1981.
Heretics of Dune. 1984.
Chapterhouse: Dune. 1985.
Maddox, Tom.
“Snake-Eyes”. In the collection Mirrorshades: A Cyberpunk Anthology, edited by Bruce Sterling. 1988.
Sawyer, Robert J.
Gold­en Fleece. 1990.
Simmons, Dan.
Hyperion. 1989.
The Fall of Hyperion. 1990.
Endymion. 1996.
The Rise of Endymion. 1997.
Swanwick, Michael.
Vacuum Flowers. 1987.
“The Dog Said Bow-Wow”. In Asimov’s Science Fiction, October / November 2001.
Varley, John.
Steel Beach. 1992.

Movies (listed by director)

Bird, Brad.
The Iron Giant. 1999.
Cameron, James.
The Terminator. 1984.
Terminator 2: Judgment Day. 1991.
Frakes, Jonathan.
Star Trek: First Contact. 1996.
Kershner, Irvin.
The Empire Strikes Back. 1980.
Kubrick, Stanley.
2001: A Space Odyssey. 1968.
Lucas, George.
Star Wars. 1977.
Marquand, Richard.
Return of the Jedi. 1983.
Scott, Ridley.
Blade Runner. 1982.
Wachowski, Andy and Larry.
The Matrix. 1999.


Various directors.
Star Trek: The Next Generation. 1987–1994.


Nine Inch Nails.
“The Becoming”. About a man becoming a cyborg. On the album The Downward Spiral. 1994.
White Zombie.
“More Human Than Human”. Based on the movie Blade Runner. On the album Astro-Creep: 2000 (Songs of Love, Destruction and Other Synthetic Delusions of the Electric Head). 1995.

Web Sites

Greek Myths — Oedipus
Isaac Asimov FAQ