Welcome to the Machine, Part I

“Welcome my son,
Welcome to the machine.
Where have you been?
It’s all right — we know where you’ve been…”

— Pink Floyd, “Welcome to the Machine” (1975)

For prophetic visions of where we’re headed, forget the economists, philosophers, historians, politicos — and especially climatologists (30 years ago, they predicted an impending ice age, for Pete’s sake). It’s only every once in a while that they get something right about what’s waiting for us around the corner…

But for my money, society’s real seers are the novelists and short story writers.

Look at how today’s America mirrors Aldous Huxley’s vision in Brave New World of a hedonistic, classist, high-tech future world where consumerism is civic duty — and where relentless promiscuity and legalized drug use (the author’s euphoria-inducing “soma” equating to modern-day Prozac, Percocet, OxyContin, etc.) are standard measures of what’s normal and healthy…

See how Vonnegut’s vision of a 2081 U.S. government that codifies and enforces equality in the brilliantly comedic “Harrison Bergeron” resonates in both the modern American education system and its tax code — both of which punish or ignore excellence, while overlooking or rewarding failure and mediocrity…

Consider how H.G. Wells’ The Island of Dr. Moreau foreshadowed Nazism, eugenics, and the human genetic meddling and embryonic selection (now called prenatal “health screening” — but, perhaps soon, prenatal “enhancement”) we’re increasingly accepting as a normal part of reproduction…

And of course, everyone’s aware of how American society is creeping ever more toward a PC surveillance state, where both privacy and dissent are borderline criminal — a la the “Thought Police” and “Big Brother” of Orwell’s 1984…

But as unsettlingly accurate as these quasi-prophecies have proved, what’s next for America may be even more terrifying: A dehumanized cyber-world more akin to Asimov’s I, Robot.

I’m talking about a world where robots — and by “robot” I mean any combination of hardware and software that can detect, assess, and classify human actions or events — compete directly with humans, and where the most critical decisions in our society are increasingly made by artificial intelligence.

Don’t scoff; it’s already beginning.

I, Robot Witness

In past Whiskey & Gunpowder essays, I’ve written about the explosion of warrantless civilian surveillance in our society in the wake of Sept. 11. Cameras are everywhere nowadays — in the store, at the ATM, in the bank, on bridges over the highway, in cops’ cars, on street corners, at stoplights, in parking garages, at the airport, and on almost everybody’s cellular phone.

It’s getting so that you can’t steal a smooch (or whatever) from your lover at a stoplight anymore for fear of some bored government employee in some office with beige-painted cinderblock walls zooming in on you to get his kicks. Not that this is currently happening in “real time” whenever you’re at a stoplight. As it stands, footage from the cameras that watch us in intersections and on street corners usually only gets looked at in review — to better gather facts in case a crime has been committed. But using the stoplights as an example: What if a car runs the light in the other direction right when you’re in some manner in flagrante delicto? The shutters snap from every direction and…

Surprise! You’re on (very) candid camera.

Same with changing your clothes in a parked car outside the mall (who hasn’t done this at least once?) or hurriedly stuffing a chili dog into your face while walking down the street on your way to some meeting. All it takes is for the wrong thing to happen in the foreground while you’re in the background, and your mug (or again, your whatever) is on display in some crime lab, courtroom, and no doubt someday on the Internet.

The point being this: Awkward, vulnerable, or risqué moments happen in any life worth living — and now, they’re happening on camera…

It isn’t just the population centers, public areas, and highways that are under round-the-clock surveillance in America, either. Space-based satellite imaging covers every square inch of this country — albeit with varying degrees of resolution. However, that’s all but certain to soon change. I don’t know if you’ve heard about this or not (it hasn’t made the headlines in any mainstream information outlet that I know of), but just over a year ago, Lockheed Martin landed a $149 million contract to study the overall feasibility and to produce a prototype of its High Altitude Airship (HAA), known as a stratospheric platform system.

Ostensibly part of a missile defense system, the HAA fleet would put 11 or more of these craft in constant flight at around 70,000 feet, blanketing the entire U.S. with real-time, high-resolution surveillance. Each of these unmanned behemoth blimps would be about 20 times the size of the one Goodyear floats over football games, and would monitor a patch of American soil 750 miles in diameter — with cameras no doubt capable of detail many times greater than those on satellites.

Understandably, I could find no specs on these. However, I’m certain that given the resolution of current space-based lenses, these cameras would easily be able to discern fine detail like individual human faces, license plates, etc. Which means forget about pulling over to the side of a remote stretch of highway for a quick whiz or that midnight skinny-dip in the pool at your condo complex. They’ll be able to identify you by your birthmarks, tattoos — or, uh, dimensions. But I digress…
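Some quick back-of-the-envelope arithmetic (my own, not anything from Lockheed’s filings) shows why a fleet of 11 would indeed blanket the country. Eleven circles, each 750 miles across, add up to considerably more area than the lower 48 states cover:

```python
import math

# Back-of-the-envelope coverage check -- my own arithmetic, not official figures.
# Each High Altitude Airship reportedly watches a circular patch of ground
# 750 miles in diameter.
patch_diameter_mi = 750
patch_area_sq_mi = math.pi * (patch_diameter_mi / 2) ** 2  # roughly 442,000 sq mi

# Approximate land area of the contiguous United States.
conus_area_sq_mi = 3_120_000

fleet_size = 11
total_coverage = fleet_size * patch_area_sq_mi

print(f"One airship covers ~{patch_area_sq_mi:,.0f} sq mi")
print(f"A fleet of {fleet_size} covers ~{total_coverage:,.0f} sq mi combined")
print(f"That's ~{total_coverage / conus_area_sq_mi:.1f}x the area of the lower 48")
```

The fleet’s combined footprint works out to roughly one and a half times the area of the contiguous U.S. — enough overlap to leave no gaps even after accounting for the circles’ awkward packing.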

The point of my rehashing all that’s old and new in the arena of today’s questionably constitutional monitoring of American citizens is to get to what’s every bit as disturbing as the omnipresence of prying eyes: the fact that robot technology may soon allow Big Broth — er, I mean the government — to CONSTANTLY MONITOR these channels in real time, instead of simply reviewing images after the fact in an evidentiary capacity.

This is bad.

I, Robot Cop

You may remember a surprise semi-blockbuster movie from a couple of decades back (1987) called RoboCop. Although this movie’s “bad guy” was actually a mega-corporation that effectively privatized the police for its own ends, the “good guy” was someone who really resonated with audiences: a robotic cop who doled out justice without fear, emotion, prejudice, vice, corruption, or ulterior motives.

In other words, he was the ideal enforcer.

But of course, this was just a movie. The reality behind the likely progression of robotic justice is far less cheer-worthy. Tomorrow’s robocops will not be armed enforcers, just omni-prying watchers. And they won’t be infallible…

According to a recent Reuters article, “intelligent video” is the next big development in law enforcement surveillance. Basically, this is cutting-edge computer software that’ll be employed by various agencies of the government from the local police on up to monitor everyday actions — picked up 24/7 by both cameras and microphones — in order to identify and sound the alarm about “suspicious” behaviors.

Yes, you read that right: Soon, everything you do AND SAY in almost any public setting could be filmed, taped, and checked by artificial intelligence against a list of behaviors and speech that a bunch of pointy-headed G-men have determined are threats to public safety or national security.

Things like loitering, circling a location, or walking away from a package — or simply uttering words like “bomb” or “explosive” — would constitute alarm-worthy actions in the eyes of intelligent video, according to the Reuters piece.

Which means if you’ve made three low passes over that watch in the jewelry store window over the span of an evening’s shopping, the fuzz might just swarm down on you on pass number four…

And if you say to your friend in the food court that those delicious Cinnabons — or those women three tables over who are eating them in seemingly orgasmic ecstasy — are “the bomb,” the Men in Black might take you down…

And if you accidentally drive away without that “Sharper Image” bag you set on the ground as you fumbled for your keys in the parking lot, the copters might descend on you with their huge spotlights on your way home…
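The scenarios above can be sketched as a toy rule set. This is entirely my own invention — no vendor’s actual system, and the thresholds (five minutes of loitering, three passes) are made-up numbers — but it illustrates how blunt such pattern-matching is: the classifier has no sense of context, only triggers.

```python
# Toy sketch of rule-based "intelligent video" flagging -- my own invention,
# not any real vendor's system. Thresholds are invented for illustration.

TRIGGER_WORDS = {"bomb", "explosive"}   # flagged speech
LOITER_SECONDS = 300                    # linger this long and you're "suspicious"
MAX_PASSES = 3                          # circle a spot this often and you're flagged

def flag_speech(transcript: str) -> bool:
    """Flag any utterance containing a trigger word, context be damned."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return bool(words & TRIGGER_WORDS)

def flag_behavior(event: dict) -> bool:
    """Flag loitering, circling a location, or walking away from a package."""
    return (
        event.get("seconds_in_place", 0) >= LOITER_SECONDS
        or event.get("passes_by_location", 0) >= MAX_PASSES
        or event.get("left_package", False)
    )

# The food-court problem: slang trips the same wire as a real threat.
print(flag_speech("Those Cinnabons are the bomb"))   # flagged
print(flag_behavior({"passes_by_location": 4}))      # window shopper, flagged
```

Note that nothing in this logic distinguishes a terrorist from a window shopper or a Cinnabon enthusiast — which is precisely the problem.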

What scares the hell out of me about this isn’t simply the fact that we may soon be watched and scanned with high-performance computer technology — that’s already happening every time we go to the airport. Like most Americans, I’m willing to submit to this because heightened airport security is certainly called for in the post-Sept. 11 world. Besides, everyone who flies knows what to expect when they go to board a plane nowadays. It’s not the same thing at the local mall.

No, what bothers me about this Orwellian inevitability is that machines will be making the call about what constitutes probable cause for detention, search, or arrest. Now, I’m no lawyer, but it seems to me that this standard has been shifting lately from meaning roughly “a reasonable suspicion that a crime has been committed” to more or less meaning “anything that could indicate a crime might soon be committed.”

Do we want this?

Think about it for a minute. Errors made by human cops in establishing probable cause can be remedied or nullified in court. Cops can be cross-examined for prejudicial behavior, interpretations of their words and actions can be disputed, records can be expunged, and reputations restored if a person is found to have been wrongfully (or at least unlawfully) detained…

But how would people secure justice when these prying robotic eyes made the wrong call and sounded a false alarm? Would innocent people wrongfully detained on some machine’s say-so be able to get justice in courts?

Who would people be able to sue for wrongful arrest? How would one sue a machine for damages? Would the accountability fall to the software engineer — or to the agency that implemented the system?

Beyond this: What if a surveillance machine “learned” to profile people based on race or ethnicity, or other discriminatory factors? Machines know nothing of political correctness, you know…

Or would they have to be designed to overlook perfectly logical criteria that typically fall under the heading of “profiling”? More critically, how could this be done in a way that would be unassailable in court?

And if it were done, how could that machine’s judgment then be considered impartial? We would have TAUGHT it to be partial…

Oh, and what about this: Once we make the jump to machines deciding what constitutes probable cause, will human cops still be allowed to make that call on their own in places where the cameras aren’t looking? Or will their judgment — as nonmachines — suddenly be considered less impartial or credible in the eyes of the law?

In other words: Will cops ultimately be prohibited from making arrests unless a machine “sees” a crime (or the likelihood of one) and gives them the OK?

I, Robot Catalyst

There’s only one way that “intelligent video” could be legitimized…

And that’s if Americans resigned themselves to the necessity of it and were willing to submit to the supposed impartiality of machines in yet another invasion of our privacy and subversion of our rights.

Of course, we’ll do it.

Our leaders will tout the system as an end to wrongful arrests, when, in fact, it could be the beginning of many more of them. They’ll say the impartiality of machines will make the criminal justice system less corrupt and less prone to abuses and brutality — by making it less vulnerable to the prejudices of individual cops…

They’ll sell it to us on the grounds that it’ll make our streets, roads, shopping centers, and neighborhoods safer without draining the public coffers on more police…

They’ll say it could foil terrorist attacks by spotting suspicious behavior patterns mere humans could never detect (but without profiling, of course)…

And once more, we’ll cave and sacrifice yet another huge chunk of our freedom and privacy on the altar of safety and security. The fact that we might be trading one kind of danger for another won’t even enter into the equation. Like they always do, the machines will have become the catalyst for seismic change, and we’ll be left with the aftermath, which is always the same:

Less liberty and the illusion of more security.

For those of you who think I’m a nut case for extrapolating all of this, I want you to think about something for me:

Just six years ago, it probably would’ve been inconceivable to you — and very likely an outrage — that soon your face would be photographed, computer-enhanced, recorded, and checked against a database of criminals while you were waiting to pass through what at that time must’ve seemed like a cumbersome and inconvenient amount of security (a simple metal detector) before boarding an airplane…

And 12 years ago, it would’ve seemed unlikely to you that in the near future, cameras at stoplights and on highways would be clicking away and issuing you tickets for traffic violations without ever involving an officer of the law…

If you still think I’m ready for the men in the white suits and the jacket with the buckles, I want you to reserve judgment until you read Part II of this series. That’s when I’ll offer you more evidence that I’m not just whistling Dixie about the rise of the machines.

What’s coming will simply blow your mind.

I spy AI,

Jim Amrhein
Contributing editor, Whiskey & Gunpowder

February 14, 2007

The Daily Reckoning