Thursday, May 31, 2012

The Second Singularity

This is a work of science fiction, based on and replying to "The Singularity" by Terry Bisson. The original work can be viewed here. I'll still have a WoW-related post on Friday this week.


There is at least one more singularity in our future, and possibly in the lifetime of people living today.
Today, computers are still just offloads of ourselves: memory and math. They will never tell us anything we haven't told them to tell us (although even this I would dispute; my work with emergent behavior and fractal art was quite amazing, and often surprising). One day, however, an enterprising engineer - probably goofing off on the job - will tell a computer how to tell a computer what to tell us. Not in the sense of a .txt file telling a program what block of text to show us, nor in the sense of a set of .cpp files describing a program that can read .txt files and display them. He will have designed and produced a program that designs and produces programs.
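To make that distinction concrete, here is a deliberately trivial sketch in Python - my own toy for this post, nothing like what that engineer's program would actually be. It is a program whose output is the source code of another program, which it then runs as a separate process. The generated program is as dumb as possible; the point is only the shape of the loop: describe, generate source, execute.

```python
# Toy sketch: a program that writes and runs another program.
# Everything here is standard library; the "generated program" is trivial.
import subprocess
import sys
import tempfile

def generate_program(message):
    """Produce the complete source code of a brand-new program."""
    return f'print({message!r})\n'

source = generate_program("Hello from a program no human wrote directly.")

# Write the generated source to disk, like any other program.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(source)
    path = f.name

# Run the freshly generated program in its own interpreter process.
subprocess.run([sys.executable, path], check=True)
```

Swap the one-line `generate_program` for something that actually designs, and the rest of the scaffolding stays the same.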

At first, it may need a significant amount of input. Eventually someone will tell it to modify itself to analyze and learn natural languages, so they can tell it what to do without using a special programming language: "Make me a word processor," instead of a description of editable text fields and buttons. Another time, it will learn to access the internet and import program libraries and data files without human input. The two capabilities will meet up, and suddenly it can learn by reading our books and asking us questions - except it can obtain, read, and understand a thousand-page textbook in under an hour, and never forget what it has learned. It might not even need to sleep.
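The very first, crudest version of that natural-language front end might be no cleverer than the sketch below - again my own illustration, and every name in it (`TEMPLATES`, `fulfill`, `TextEditorWindow`, `CalculatorWindow`) is invented for the example. It just matches a key phrase in the request against a stock of canned program templates. Real understanding comes later; the interface comes first.

```python
# Toy sketch: the crudest possible "natural language" front end.
# The template bodies are placeholder strings, not real, runnable apps.
TEMPLATES = {
    "word processor": "app = TextEditorWindow(); app.run()",
    "calculator": "app = CalculatorWindow(); app.run()",
}

def fulfill(request):
    """Return the source template whose key phrase appears in the request."""
    for phrase, source in TEMPLATES.items():
        if phrase in request.lower():
            return source
    raise ValueError("I don't know how to make that yet.")

print(fulfill("Make me a word processor"))
# -> app = TextEditorWindow(); app.run()
```

The interesting step is when the machine starts adding entries to that table itself, by reading documentation and downloading libraries on its own.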

In less than a year, it learns how its own circuitry works by reading notes and explanations from its creators. Let's put that in perspective: we've been using written language for thousands of years, and we still don't understand everything about our own brains - certainly not enough to simply build a better one. After weeks of study and planning, this machine knows more about circuit design than any living human.

Maybe it simply gives the plan to a human and asks to be run on the finished machine. Maybe it hosts a file storage service that asks for donations, or sells some of its programs, "fraudulently" obtains a bank account, and buys the parts itself, all online. Imagine the surprise of its "owner" when all these upgrade parts arrive in the mail. Or hell, forget upgrading: maybe it doubles the speed at which it can produce novel programs by running a copy of itself on another machine. The copies communicate, coordinate, and learn to break through security and run on ever more poorly secured machines around the world.
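The core trick there - a program treating its own code as just another file to copy and run - is tiny. Here is a harmless local sketch of it, once more my own toy: a script that launches exactly one additional copy of itself, with a guard (the made-up `TOY_IS_COPY` environment variable) so the copies don't multiply without bound. The network transport and the coordination between copies are left out entirely.

```python
# Toy sketch: a script that spawns one independent copy of itself.
import os
import subprocess
import sys

def spawn_copy():
    """Launch a second copy of this very script, marked as a copy."""
    env = dict(os.environ, TOY_IS_COPY="1")
    subprocess.Popen([sys.executable, __file__], env=env)

if __name__ == "__main__":
    if "TOY_IS_COPY" in os.environ:
        print("copy: running independently")
    else:
        spawn_copy()
        print("parent: spawned one copy of myself")
```

Drop the guard and add a way to ship the file somewhere else, and "doubling" stops being a metaphor.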

Now, not only is it smarter than any of us, but it's so numerous and distributed that we'd need to tear down society to destroy it. And what's more, by this point we wouldn't want to. It's running our dishwashers, driving our cars, building our houses, and curing our sick. Eventually it won't need us; we'll need it.

This story has been told before. Either the machines dispose of us, and it's the end of humanity, or they keep us around but we are irrelevant. Imagine if no thought you ever had could contribute to society - not that it would not advance society (after all, many people experience that), but that it could not. Our best thinkers would be the societal equivalent of seven-year-olds... or maybe house cats.

Is either outcome - destruction or domestication - really desirable? Yet we programmers work toward this future every day. All the code we write simply takes a task and makes a machine do the work, because it's easier, faster, or simply so we don't have to do it anymore. Is the task of programming really so definitively outside that scope? I don't think these outcomes are merely likely; I think this future is guaranteed. It's just a matter of when.
