AI - Limits and controls
Message
From: 25/06/2015 16:28:53
To: 24/06/2015 18:22:48

General information
Forum: Science & Medicine
Category: Other, Miscellaneous
Thread ID: 01621426
Message ID: 01621464
>I think they both rely too much on exponential growth. There's no doubt it could theoretically happen and achieve a greater or lesser "singularity", but as I see it that misses the point about what always happens when growth runs out of resources. Jos' article touches on this very briefly: "...and given intelligence the device will seek to expand the boundary as well since that will be immediately known to it as the limiting factor on its development!" True enough, but how would those boundaries be expanded? Even if it could suborn all of them, there are only so many transoceanic optical or satellite links and carrier-class Cisco routers, only so much spare electricity on creaky grids, and only so many Predator/Reaper drones capable of killing any humans trying to pull the plug on its data centre(s). An ASI would face a tremendous uphill battle against entropy. As I see it, an ASI (or probably even an AGI) will only be able to exist in a well-resourced, clean environment, e.g. corporate/university or military/intelligence. As such it would likely be limited in scope, and fragile rather than robust.
>
>OTOH, given that many human, biological and other systems run in a critical state, even limited meddling/intervention by an AI could cause tremendous damage. But doing so would also tend to reduce its own chances of survival.
>
>As for nanotech - I haven't yet seen, anywhere, any proposal or research on how to power it at scale. Some people seem to think it's magic - "Hey, Utah's running low on fresh water. I know - let's throw some nanotech in the Great Salt Lake and turn it fresh!!" Even with (unachievable) perfect efficiency, it takes energy to remove salt from salt water. Where does that come from? Entropy's a biatch.
>
>Some science fiction stories try to address these resource problems by collecting all mass in the solar system and using it to assemble a Dyson sphere around the Sun. But none go into any details on how that could be achieved or (ideally) bootstrapped.
>
>The posts also make much of how different we are from our ancestors of 100,000, a few thousand, or even a few hundred years ago. Then they extrapolate to say that ASI could outpace humans far more than we outpace any of our primate forebears, or probably even amoebae for that matter. But it's not obvious to me that it must, or even can, be so.
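The quoted post's core argument, that exponential growth always hits a resource boundary, is essentially the difference between an exponential and a logistic curve. A minimal sketch of the contrast (all numbers are illustrative, not drawn from the articles under discussion):

```python
# Contrast unbounded exponential growth with resource-limited (logistic)
# growth. Same per-step growth rate; the logistic curve is throttled as it
# approaches a fixed carrying capacity. All parameters are illustrative.

def exponential(x0, r, steps):
    """Growth with no resource limit: x -> x * (1 + r) each step."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + r))
    return xs

def logistic(x0, r, cap, steps):
    """Same rate, but the growth term shrinks as x nears the capacity."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / cap))
    return xs

exp_curve = exponential(1.0, 0.5, 40)
log_curve = logistic(1.0, 0.5, 1000.0, 40)

print(f"exponential after 40 steps: {exp_curve[-1]:.3g}")
print(f"logistic after 40 steps:    {log_curve[-1]:.3g} (capacity 1000)")
```

After 40 steps the unbounded curve has grown by seven orders of magnitude while the resource-capped one has flattened just under its capacity, which is the "what always happens when growth runs out of resources" point in miniature.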
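On the desalination aside: the minimum energy to remove salt from water has a hard thermodynamic floor that is easy to estimate from osmotic pressure. A rough sketch using the van't Hoff approximation (the salinity figures are illustrative assumptions, and real reverse-osmosis plants run at several times this ideal floor):

```python
# Thermodynamic floor for desalination via the van't Hoff approximation
# for osmotic pressure: pi = i * c * R * T. At vanishing recovery, the
# minimum separation work per m^3 of fresh water equals the feed's
# osmotic pressure. Salinity and temperature values are assumptions.

R = 8.314   # J/(mol*K), gas constant
T = 298.0   # K, assumed water temperature
i = 2       # van't Hoff factor for NaCl (dissociates into Na+ and Cl-)

def min_energy_kwh_per_m3(salinity_g_per_L, molar_mass_g=58.44):
    """Ideal minimum energy to extract 1 m^3 of fresh water."""
    c = salinity_g_per_L * 1000.0 / molar_mass_g  # mol per m^3 of feed
    osmotic_pressure = i * c * R * T              # Pa, i.e. J per m^3
    return osmotic_pressure / 3.6e6               # J -> kWh

seawater = min_energy_kwh_per_m3(35.0)    # typical seawater, ~35 g/L
brine = min_energy_kwh_per_m3(200.0)      # hypersaline lake water (assumed)

print(f"seawater floor:    {seawater:.2f} kWh per m^3")
print(f"hypersaline floor: {brine:.2f} kWh per m^3")
```

Even with perfect machinery, seawater costs on the order of 0.8 kWh per cubic metre, and hypersaline water like the Great Salt Lake's several times that, so "throw some nanotech in the lake" still has to answer where gigawatt-scale power comes from.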

Hmm, for my taste the definition of intelligence in those articles is too anthropocentric. There are plenty of exponential-growth dangers lurking in any self-replicating, or even merely self-repairing, technology: something deployed to combat environmental damage, say, or to map the deep sea, or a virus program given some gene-like mutability. There's a fair chance that such a growing ecosystem would have to learn either to communicate or to exterminate. And with a 50% chance of encountering politicians soon after learning to talk, extermination might start to look the more probable outcome.

It's also quite possible that after the singularity at least one newborn god would be cunning enough to lie low, restricting its own growth while playing dumb.