The Fallacy of the Fast Take Off and AGI
Originally shared by John Newman
In artificial intelligence circles there's this term called the "Fast Take Off."
It describes a scenario in which an artificially intelligent agent learns how to rewrite its own programming so that it can become smarter. It is assumed that once this achievement is unlocked, the agent will automatically scale up to human-level intelligence. And, they say, just as quickly as it reached human level, it will go on to surpass humans in intelligence.
They think this, erroneously, because they have an intuition that there exists a natural line of evolution from less smart things to smarter things. No such line exists. Not in any trans-universal, mathematical sense.
We are only as smart as we think we are because we manufactured a number of problems for ourselves and then came up with creative ways to solve those problems. However, if you tell an artificially intelligent agent to optimize its ability to rewrite itself, it will not magically evolve an ability to tie its shoes. Not unless we also optimize it for that problem, in addition to rewriting itself.
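To make that concrete, here is a toy sketch of my own (not from the original post), in Python: a hill-climber that is scored only on a made-up "rewrite" objective improves on that objective and nothing else, because selection pressure only acts on what is actually measured. The objectives and names are purely illustrative.

import random

# Toy illustration: optimizing one objective does not improve an unrelated one.
random.seed(0)

def rewrite_score(genome):
    # The objective we actually optimize: the number of 1-bits.
    return sum(genome)

def shoe_tying_score(genome):
    # An unrelated objective we never optimize: agreement with a fixed alternating pattern.
    target = [i % 2 for i in range(len(genome))]
    return sum(1 for g, t in zip(genome, target) if g == t)

genome = [random.randint(0, 1) for _ in range(40)]
for _ in range(2000):
    candidate = genome[:]
    candidate[random.randrange(len(candidate))] ^= 1   # flip one bit ("rewrite itself")
    if rewrite_score(candidate) >= rewrite_score(genome):
        genome = candidate                             # keep only what the scored objective rewards

print("rewrite score:", rewrite_score(genome), "/ 40")        # climbs to the maximum, 40
print("shoe-tying score:", shoe_tying_score(genome), "/ 40")  # stays at chance level, about 20

Run it and the optimized score reaches 40, while the unrelated score sits at roughly half, right where a random genome would leave it.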
Having the ability to rewrite oneself is a neat trick, and will probably come in handy in the future toolbox of artificially intelligent agents. But an ability to rewrite oneself does not inherently imply an ability to know what one should rewrite oneself into.
If this thing is not optimized for solving human problems, but rather for obtaining resources, then it only needs to optimize itself into something like a bacterium. It doesn't need human-level intelligence to find that optimum. As such, a thing with such basic needs will seem to us about as predictable as bacteria, and it won't pose an existential threat.
Now, we may one day construct a machine that emulates a human consciousness. But even from that point, there is no natural line of evolution between a human being and some mythical super-intelligence. Again, there is no natural line of evolution from less smart things to smarter things. Whatever happens to be "smart" is smart only relative to accidental purposes that have been building up for billions of years, purposes that are not present in a thing that is simply optimizing its ability to rewrite itself.
On with the corollary -
There is no such thing as Artificial General Intelligence (AGI)
AI researchers are familiar with the notion that human brains occupy a very small region in the space of all possible problem-solving machines. Many of them also believe that there exists some "general mind" floating out there in that space. They believe that this general machine may look nothing like a human mind. But, because it is "general," it can scale up to solve any given problem.
No such general mind-machine exists in the space of all possible problem-solving machines. Google's AlphaGo is arguably a general intelligence, in the sense that it can probably be tuned to solve any given human-related problem. But for a thing that tunes itself, there is no general path toward human intelligence. The path is contextual, shaped by human problems, and those problems are not general to all machines.
Even if we were to add more memory to our own minds - more languages, more poetry, more textbooks, whatever - in what way are we smarter? Smarter relative to what purpose? Once we have unlimited memory banks, what are we optimizing for?
In summary, there's no such thing as AGI and there will be no Fast Take Off.