Monday, March 02, 2009

the fundamentalism in futurism, or how those who didn't become robots turned into grey goo

a review of two analyses of the seemingly unavoidable cybernetic revolution

The apocalyptic analysis in Bill Joy's "Why the future doesn't need us" is an interesting explanation of all things futurist and pessimistic, or as he put it, "...Murphy's law - 'Anything that can go wrong, will.' (Actually, this is Finagle's law, which in itself shows that Finagle was right.)" Murphy's purportedly exact statement, "If there are two or more ways to do something and one of those results in a catastrophe, then someone will do it that way," is not nearly as dire as Finagle's interpretation. Murphy is pointing at good design: the ability to make a standard that cannot be misused or incorrectly installed. Joy's transmission of this translation bungle opens a stress fracture in the fuselage of his logic that will eventually cause it to fail.

Joy's statements take Moore's Law as an a priori, along with the assumption that all systems can be modeled via computation. Although statistical computing can give us a good range of answers, and sophisticated computing can narrow the tolerances of that range, it still yields an idealized reading rather than an actual result.
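The idealized character of that a priori is easy to see in the numbers themselves. Here is a minimal sketch (my own illustration, not Joy's; the 2,300-transistor baseline for the 1971 Intel 4004 and the two-year doubling period are the usual textbook assumptions) of a naive Moore's Law extrapolation — a smooth exponential that no messy fabrication history ever exactly follows:

```python
def moores_law(years_elapsed, baseline=2300, doubling_period_years=2.0):
    """Idealized transistor count after a given number of years,
    assuming a perfect doubling every `doubling_period_years`."""
    return baseline * 2 ** (years_elapsed / doubling_period_years)

# From the 1971 baseline, the idealized curve alone decides the answer;
# no actual chip, yield, or economic constraint enters the model.
print(round(moores_law(0)))    # the assumed 1971 starting point
print(round(moores_law(37)))   # the curve's ideal reading for ~2008
```

The point of the toy is the gap between this ideal reading and any measured result: the function's output is determined entirely by its starting assumptions.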

I see the loss of skepticism as the fuel of Joy's pessimism. Jaron Lanier supports this idea in his article "One Half a Manifesto," written six months after the aforementioned Joy article. Lanier labels the aficionados of Moore's Law, artificial intelligence, and evolutionary psychology "cybernetic totalists." The disconnect, for me and for Lanier, is ideological adherence to an idealist misinterpretation of theory: a dogma that presents theory as reality, and as the path along which the future will certainly unfold. It is a real irony that these ideals, themselves the fruits of studies such as complexity theory, have become dogma and stultifyingly singular. That displays a (learned) ignorance of the body of practical observation of diversity, character, processes, and form in the universe. Lanier concurs:
"In nature, evolution appears to be brilliant at optimizing, but stupid at strategizing. (The mathematical image that expresses this idea is that "blind" evolution has enormous trouble getting unstuck from local minima in an energy landscape.) The classic question would be: How could evolution have made such marvelous feet, claws, fins, and paws, but have missed the wheel?"
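Lanier's parenthetical image is concrete enough to sketch. A toy version (my own construction, not Lanier's mathematics): a "blind" greedy search on a double-well energy landscape settles into whichever basin it starts near, and never crosses the barrier to the deeper one.

```python
def energy(x):
    # Double-well landscape: a shallow local minimum near x = 1.4,
    # a deeper (global) minimum near x = -1.5.
    return x**4 - 4 * x**2 + x

def greedy_descent(x, step=0.01, iters=10_000):
    """Blind local search: move only to an adjacent point with lower energy."""
    for _ in range(iters):
        x = min((x - step, x, x + step), key=energy)
    return x

# Starting on the right-hand slope, the search optimizes beautifully within
# its basin -- but it stays there, stuck at the shallower minimum.
stuck = greedy_descent(1.0)     # settles near x = 1.4
better = greedy_descent(-1.0)   # settles near x = -1.5, the deeper well
```

The search is superb at local optimization and incapable of strategy, which is exactly the asymmetry Lanier is pointing at.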

Both quote Moravec's "Robot: Mere Machine to Transcendent Mind" (which I highly recommend; no, you don't have to be a computer scientist to read it). In the treatise he optimistically imagines a day of decision, in our lifetime or the next generation's, when it will be highly possible to transplant our consciousness into a synthetic body. Moravec projects a future where technology has become completely transparent, and where the best and worst of the empathetic, fuzzy-logical, and semantic reasoning that defines humanity can instantaneously merge with high-powered computational calculation. A time when a mental note becomes discrete and retrievable. And a time when, standing up, I am not plagued with arthritic pain in my back and knees.

Outrageous? No. It is simply an extension of the practices of today's life. As I write this document, the laptop marks my mistyped words with better-than-fair recognition, and my iCal reminders pop up just before my phone jitters with the aide-mémoire for the same appointment. My dentist has installed many permanent prosthetic devices, and a colleague this summer is replacing her coxal articulation (hip joint) with a ball and socket made of titanium and polyethylene. If I can replace my knees so I can walk on the surface of Mars or Pluto (or farther), I do believe I will get in that line.

Moravec and others suggest a potentially exciting and pivotal fold in the history of humankind. To confuse any futurist prediction with the road to perdition, though, seems to ignore the fundamental aspects of good scientific practice regarding objectivity. Quoting Lanier again: "In general, I find that technologists, rather than natural scientists, have tended to be vocal about the possibility of a near-term criticality. I have no idea, however, what figures like Richard Dawkins or Daniel Dennett make of it. Somehow I can't imagine these elegant theorists speculating about whether nanorobots might take over the planet in twenty years."

Do any of us really know the consequences of this transformation we think we see in the near future? Of course not, but I take strong issue with the medieval attitude that some things are too scary to know intellectually. Once again I find myself at the crossroads of the dilemma, and my analysis is that some people lack empathy and have a horrifying character; the tools they decide(d) to use were merely tools. I believe the human spirit is not necessarily diluted by these transformations; changed yes, weakened no.
