Why the future doesn't need us.

I'm reading an essay titled "Why the future doesn't need us" and I must say my mind is blown. It was written by Bill Joy, cofounder and Chief Scientist of Sun Microsystems. I know most of you aren't going to read this, but for those of you who do, what are your thoughts on it?

http://www.wired.com/wired/archive/8.04/joy.html

Below is a passage from Theodore (The Unabomber) Kaczynski's manifesto, which was partly responsible for Bill Joy thinking in this direction and ultimately leaving Sun Microsystems behind.

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.1


1 The passage Kurzweil quotes is from Kaczynski's Unabomber Manifesto, which was published jointly, under duress, by The New York Times and The Washington Post to attempt to bring his campaign of terror to an end. I agree with David Gelernter, who said about their decision:

"It was a tough call for the newspapers. To say yes would be giving in to terrorism, and for all they knew he was lying anyway. On the other hand, to say yes might stop the killing. There was also a chance that someone would read the tract and get a hunch about the author; and that is exactly what happened. The suspect's brother read it, and it rang a bell.

"I would have told them not to publish. I'm glad they didn't ask me. I guess." (Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)
 
Originally Posted by Stylin On Yuku

I'm reading an essay titled "Why the future doesn't need us" and I must say my mind is blown. It was written by Bill Joy, cofounder and Chief Scientist of Sun Microsystems. I know most of you aren't going to read this, but for those of you who do, what are your thoughts on it?


wow 11 years later
 
I'll be dead before this happens (hopefully).
I feel bad for future generations if it does happen though.
 
No...anything created can only be as advanced as its creator, and since we can't even begin to grasp the complexities of our own minds, machines will not be advanced enough to complete the advanced thought processes of humans...there will always be a necessity for human intervention...
 
really interesting...

but I doubt this "machine takeover" will happen in our lifetime.
 
[image: Irobot.jpg]



Never gonna happen.
 
If there is a danger we will ignore it for as long as we can profit from it. Once the threshold of safety is breached then we'll hope we can figure out a solution.
 
Originally Posted by NYVictory45

If there is a danger we will ignore it for as long as we can profit from it. Once the threshold of safety is breached then we'll hope we can figure out a solution.


Sadly, that sounds about right.
 
I'm well versed on the NWO. I don't talk about it much though because I hate hearing the term "Conspiracy Theory".
 
Look for ENDGAME on Google Video. It's all about the Bilderberg Group and their plan for humanity. It actually ties in with the essay.
 
Sigh...I bet you two ate up Zeitgeist or whatever that joke of a documentary was called.

As for the article...intelligent technology definitely could pose a threat, but I think nanotechnology is much more of a threat once harnessed than "robots."
 
Originally Posted by Joseph Camel Jr

Sigh...I bet you two ate up Zeitgeist or whatever that joke of a documentary was called.

As for the article...intelligent technology definitely could pose a threat, but I think nanotechnology is much more of a threat once harnessed than "robots."
Sigh...You obviously didn't read the article. He covered nanotech. What's Zeitgeist?
 