Sunday, June 22, 2014

CAJ: Should we consider the singularity a friend or a foe?

As I have already established in my CAJ, one of the core tasks of the Lifeboat Foundation is protection against the growing power of technology. An important term in this context is singularity, or more precisely technological singularity. These expressions describe the hypothetical moment when artificial intelligence will have exceeded the capacity of the human brain. I do not call this point hypothetical because it might or might not happen one day in the future. No, in fact I consider the arrival of such a moment an absolute certainty; greater-than-human intelligence is estimated to set in within the next three decades. The reason for calling it hypothetical is rather that the exact moment will not be reasonably measurable. Nonetheless it will have a massive impact on our lives.

The relationship between computers and human intelligence could then be compared to the relationship between humans and lower animals. The encompassing power of technology will leave us without any hope of control. Social, economic and ethical patterns will inevitably change as human influence passes a historic point of no return and loses its claim to the throne of power. Scenarios of what such a reality, dominated by technological singularity, will look like are very vague and almost impossible to make out. This makes it hard for the Lifeboat Foundation to provide a methodology to protect humanity. Still, it is an important step to create awareness and to consider it worthwhile to actively come to terms with the phenomenon of singularity.

The most frightening aspect of artificial intelligence is the hypothesis of an ever-improving cycle. More specifically, we are talking about the ‘intelligence explosion’ caused by clever technology that creates even cleverer technology. Just consider the evolution of the human being from early Homo sapiens to the average human of the 21st century.
