AI - Limits and controls
Message
From: 26/06/2015 08:56:23
To: 25/06/2015 18:13:30
General information
Forum: Science & Medicine
Category: Other / Miscellaneous
Thread ID: 01621426
Message ID: 01621483
Views: 55
>>Hmm, for my taste the definition of intelligence in those articles is too anthropocentric. Lots of dangers when exponential growth is mixed with any self-replicating or even merely self-repairing technology, perhaps used to combat environmental damage or for deep-sea mapping, or a virus program given some genetic-like mutability. Some chance that such a growing ecosystem would have to learn to communicate or exterminate. And with the 50% chance of encountering politicians soon after learning to talk, extermination might seem even more probable.
>
>Any examples of non-anthropocentric intelligence?

As the best answers to "what is intelligence?" revolve around "what IQ tests measure", the question might be biased.

>
>- Hive mind aka mainframe/terminal model?

I was thinking about techniques/strategies for overcoming problems: bees "communicating" rich food sources via dance, ants leaving chemical markers toward better/exploitable food sources, wasps trailing each other, and so on. Something like Herbert's Green Brain with a machine slant, where humans have been classified as dangerous by analyzing the log of chemical markers sometimes encountered before the non-responsiveness of sub-hives.
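
The chemical-marker mechanism above is essentially stigmergy, and it is small enough to simulate. Here is a minimal sketch in Python, assuming details the thread never gives: the grid size, evaporation and diffusion rates, and the food location are all invented for illustration. The point is that no agent holds a plan; the "knowledge" lives in the marker field, exactly the kind of distributed record the machine hive above would be mining when it classifies humans as dangerous.

import random

GRID = 20           # one-dimensional world, enough to show the mechanism
EVAPORATE = 0.95    # fraction of each marker surviving a tick
DIFFUSE = 0.2       # share of a marker bleeding into each neighbour
FOOD_AT = 17        # cell holding the rich food source (invented position)

pheromone = [0.0] * GRID

def move(pos):
    # One agent step: climb toward the stronger neighbouring marker,
    # wander randomly on a tie, and reinforce the marker on finding food.
    left, right = max(pos - 1, 0), min(pos + 1, GRID - 1)
    if pheromone[left] > pheromone[right]:
        pos = left
    elif pheromone[right] > pheromone[left]:
        pos = right
    else:
        pos = random.choice([left, right])
    if pos == FOOD_AT:
        pheromone[pos] += 1.0   # success strengthens the trail
    return pos

def spread(field):
    # Markers evaporate and diffuse, leaving a gradient agents can follow.
    out = []
    for i, p in enumerate(field):
        neighbours = (field[i - 1] if i > 0 else 0.0) + \
                     (field[i + 1] if i < GRID - 1 else 0.0)
        out.append(EVAPORATE * ((1 - 2 * DIFFUSE) * p + DIFFUSE * neighbours))
    return out

agents = [random.randrange(GRID) for _ in range(50)]
for _ in range(300):
    agents = [move(a) for a in agents]
    pheromone = spread(pheromone)

print("agents within one cell of the food:",
      sum(abs(a - FOOD_AT) <= 1 for a in agents))

Kill any single agent and nothing changes; the trail persists and the rest keep following it, which is what makes a hive of this sort hard to reason about in terms of individual intelligence.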

>
>- So-called "machine intelligence" - usually presented no differently from an amoral human?

Here I guess it is a matter of "imprinting" mechanisms. If the singularity happens in surroundings built especially to evoke it, coupled with training sessions on passing a Turing test and an education similar, at least in intent, to that given to the more human apes, the chances are much higher of awakening a god with a value system aligned with human values. As soon as learning switches from negative feedback to positive feedback you are certain to face oscillations beyond expectations (not saying those will never occur with negative-feedback training...), and you have to expect a percentage of rogue gods, just as you expect a spectrum of human behaviour. And you must also realize that the dimensions on which a god maps its morality are not certain to be even measurable/explainable in human dimensions.
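
The oscillation claim can be made concrete with a toy feedback loop (a hypothetical sketch; the update rule and gains are mine, not from the thread). Each step applies a correction proportional to the current error, so the error evolves as error_next = (1 - gain) * error, and the gain alone decides whether training settles, rings, or blows up:

def run(gain, error=1.0, steps=8):
    # Repeatedly correct a value by a fraction (gain) of its current error:
    #   0 < gain < 1 : smooth decay          (well-damped negative feedback)
    #   1 < gain < 2 : decaying oscillation  (overcorrection)
    #       gain > 2 : growing oscillation   (effectively positive feedback)
    trace = [error]
    for _ in range(steps):
        error -= gain * error
        trace.append(error)
    return trace

for gain in (0.5, 1.5, 2.5):
    print(f"gain {gain}:", "  ".join(f"{e:+.3f}" for e in run(gain)))

Running it prints a smooth decay for gain 0.5, a ringing decay for 1.5, and an oscillation growing without bound for 2.5, the "beyond expectations" case.
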
>
>- Any others?

If machine intelligence passes the singularity without the described interaction/education, the chances of misaligned moral dimensions are much higher. Awakening to consciousness and being a nice fellow like Mike in Heinlein's The Moon Is a Harsh Mistress is nothing I would expect - unless the first glimmers of intelligence happen while trying to understand human behaviour/interaction and formulating a frame of reference. Highly doubtful IMO.