Ellen Ullman
I'm an admirer of the writer Ellen Ullman, the software engineer turned novelist. Her 1997 memoir, Close to the Machine: Technophilia and Its Discontents, is a wonderfully perceptive reflection on her years as a professional programmer.

Ullman recently wrote a commentary for the New York Times on the computerized trading debacle triggered last month by the brokerage firm Knight Capital. In it she reaffirmed a crucial point she'd made in Close to the Machine, a point I find myself coming back to repeatedly in this space. To wit: If you think we're in control of our technologies, think again.
To refresh memories, Knight, one of the biggest buyers and sellers of stocks on Wall Street – and one of its most aggressive users of automated trading systems – had developed a new program to take advantage of some upcoming changes in trading rules. Eager to profit from getting in first, Knight set its baby loose the moment the opening bell sounded on the day the changes went into effect. The program went rogue, setting off an avalanche of errant trades that sent prices careening wildly all over the market. In the forty-five minutes it took to shut the system off, Knight lost nearly half a billion dollars in bad trades, along with many of its clients and its reputation.
Much of the finger-pointing that followed was aimed at Knight's failure to adequately debug its new system before it went live. If only the engineers had been given the time they needed to triple-check their code, the story went, everything would have been fine. It was this delusion that Ullman torpedoed in her essay for the Times.
Wondering who's in charge here.
Each piece of hardware also has its own embedded, inaccessible programming. The resulting system is a tangle of black boxes wired together that communicate through dimly explained “interfaces.” A programmer on one side of an interface can only hope that the programmer on the other side has gotten it right.
The complexities inherent in such a configuration are all but infinite, as are the opportunities for error. Forget, in other words, about testing your way to perfection. "There is always one more bug," Ullman said. "Society may want to put its trust in computers, but it should know the facts: a bug, fix it. Another bug, fix it. The 'fix' itself may introduce a new bug. And so on."
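Her point is easy to see in miniature. Here is a toy sketch of my own, in Python, with invented names and no connection to Knight's actual code, showing how two pieces of software, each reasonable on its own side of an interface, can combine into a bug that neither side's tests would catch:

# A hypothetical illustration, not anyone's real trading code: two modules
# wired together through an interface, each correct by its own lights.

def quoted_price(symbol):
    # The black box on the far side of the interface. What its author knows,
    # and nothing in the interface declares, is that the price is in cents.
    return 12345  # i.e., $123.45

def order_value(symbol, shares):
    # The caller assumes dollars. Every test written against that assumption
    # passes, because the tests share the assumption.
    return quoted_price(symbol) * shares

# Wired together, the two correct halves value a 100-share order at
# $1,234,500 rather than $12,345.
print(order_value("XYZ", 100))

Multiply that kind of mismatch across hundreds of interfaces, libraries, and embedded chunks of firmware, and testing your way to perfection starts to look like exactly the delusion Ullman describes.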
As I say, these were the sorts of issues Ullman explored with terrific insight in Close to the Machine. Her experience as a programming insider affirmed what so many of us on the outside sense intuitively: that computer systems (like lots of other technologies) follow their own imperatives, imperatives that make them unresponsive to the more fluid needs of human beings. “I’d like to think that computers are neutral, a tool like any other,” she wrote, “a hammer that can build a house or smash a skull. But there is something in the system itself, in the formal logic of programs and data, that recreates the world in its own image.”
I discussed this tendency in my 2004 master's thesis on the philosophy of technology, citing a passage from Ullman's book as an example. Here's part of what I wrote:
In her opening chapter, Ullman describes a meeting she has with a group of clients for whom she is designing a computer system, one that will allow AIDS patients in San Francisco to deal more smoothly with the various agencies that provide them services. As is typical of such projects, the meeting has been put off by the project’s initiating agency, so that the system’s software is half completed by the time Ullman and her team actually sit down with the people for whom it is ostensibly designed.
As the meeting begins, it quickly becomes apparent that all the clients are unhappy for one reason or another: the needs of their agencies haven't been adequately incorporated into the system. Suddenly, the comfortable abstractions on which Ullman and her programmer colleagues based their system begin to take on “fleshly existence.”
That prospect terrifies Ullman. “I wished, earnestly, I could just replace the abstractions with the actual people,” she writes.
But it was already too late for that. The system pre-existed the people. Screens were prototyped. Data elements were defined. The machine events already had more reality, had been with me longer, than the human beings at the conference table. Immediately, I saw it was a problem not of replacing one reality with another but of two realities. I was at the edge: the interface of the system, in all its existence, to the people, in all their existence.
The real people at the meeting continue to describe their needs and to insist they haven’t been accommodated. Ullman takes copious notes, pretending that she’s outlining needed revisions. In truth she's trying to figure out how to save the system. The programmers retreat to discuss which demands can be integrated into the existing matrix and which will have to be ignored. The talk is of “globals,” “parameters,” and “remote procedure calls.” The fleshly existence of the end users is forgotten once more.
“Some part of me mourns,” Ullman says,

but I know there is no other way: human needs must cross the line into code. They must pass through this semipermeable membrane where urgency, fear, and hope are filtered out, and only reason travels across. There is no other way. Real, death-inducing viruses do not travel here. Actual human confusions cannot live here. Everything we want accomplished, everything the system is to provide, must be denatured in its crossing to the machine, or else the system will die.
Ullman's essay on the Knight Capital trading fiasco shows that in the fifteen years since Close to the Machine was published, we still haven't gotten the bugs out of the human-machine interface, or out of the machine-machine interface, for that matter. Nor are we likely to anytime soon.