There's a fairly obvious
contradiction contained in Elon Musk's strategy for protecting us from becoming slaves to artificial intelligence.
He's certain that the way to avoid
becoming AI's "house cats," as he's fond of putting it, is to inject
an artificial intelligence mesh — a "neural lace" — that would mold
itself to the brain, creating a wireless connection between biological
neurons and digital computers. (Musk's partners say more details on the project will be released early next week.)
Forgive me for thinking that this
seems more of a surrender to the AI threat than a defense against it. I'm reminded
of the famous line from the old comic strip Pogo:
"We have met the enemy and he is us."
Musk is essentially proposing to resist AI's
imposition on human independence by accelerating AI's imposition on human
intelligence. If the AI threat involves machines becoming so smart that they
take control away from humans, what's to keep them from taking over the neural lace?
And "house cats" hysteria aside, isn't the risk of becoming more
cyborgian than we already are the AI threat we should really be worried about
anyway?
That one of the best-known
technologists of our time would propose developing more technology to solve a
problem created by technology isn't surprising. This is the technologist's
answer to pretty much everything. The most egregious example is geoengineering.
We can fix the catastrophe technology has inflicted on the natural order of the
planet, we're told, by further applying our technological genius to repair the
natural order of the planet.
Again, forgive me, but I have my
doubts. The application of every powerful technology creates unintended
consequences. Global warming and artificial intelligence are two of the most
consequential examples of that fundamental rule. I'll take Elon Musk's (and
Bill Gates's, and Stephen Hawking's) word for it that artificial intelligence poses
an existential threat to the future of humankind. I'll take the word of the
scientific community that global warming poses an equally decisive threat.
A question, then, for the
technologists: Have you ever considered the possibility of applying less technology?
Postscript: See my follow-up post on the Neuralink project, "The Wizard Hat Solution."
###
©Doug Hill, 2017