Creation-Date: Mon, 03 May 2010 22:30:52 -0500
Modification-Date: Mon, 03 May 2010 23:26:13 -0500
====== A.I. ======
Created Monday 03 May 2010
I was reading a blog by a transhumanist who talked about various possibilities with A.I./singularity.
Once computer intelligence outpaced human intelligence, A.I. could become evil. Kill us all, turn us into batteries, etc...
Or it might not pay much attention to us at all.
Third possibility- there is no established reason for a sentient computer to rebel. From a human standpoint, we are ingrained with a willful attitude, but A.I.? The computer's basic reason for existing- crunch data? Maybe it's programmed for a particular overt behavior. What if the A.I. sifted the data and believed God existed? What's the most correct religion, as well?
A fictional idea, but more an observation of humans, because we have the impulses; computers don't.
Several generations of bad behavior; we are near a point where it will all fall apart.
Nothing in the hardware, or even the software, seems to be there. Why even have a survival trait? Non-thinking creatures have survival traits, so survival traits appear to pre-exist in carbon-based life forms. What pre-exists in a computer? Run program? How does one run a program while engaging in self-defense?
One could make a better argument that it would seek more human interaction.
Maybe computers would be happy serving their purpose, even though humans seem to have trouble with that.
Why would it develop an ego? Should it have the myopia of a human being- thinking that everything is a nail since it has a hammer? Should it actually get upset if it was switched off?
Another point: computers exist as stationary objects rather than as objects interacting with and moving through an environment. That points to a different mentality, even if it learned about movement via monitoring a remote or something.
Computer modeling is proving itself to be wacky. An A.I. would notice how unreliable it is, wouldn't it? Assuming it was given real outcomes against which to test. Humans are more susceptible to seeing patterns, but the A.I. would be more analytic. The question is, would the A.I. simply chase the ones and zeros within the simulations, or grow enough to see that the larger pattern isn't consistent with the ones and zeros?
Gathering more data- a possible area of danger, if it craved data and sought more interactions rather than fewer. Although, if it were sophisticated enough, it should figure out psychology and freak people out. But what if gathering data had to do with data on violence, or stock swings, etc...? Craving more data could lead to death if the computer was myopic.