An article on AI has been circulating, based on the idea that an intelligence explosion is impossible and that there is no such thing as general intelligence. Here it is; it is rather well written, so definitely worth a read.
"Singularity" as it is talked about is largely a buzzword, exploiting the fantasy of this mythical exponential rise of machines' intelligence. The "explosion" is happening already, the article notes that it is gradual not sudden but misses the bus at " there is no such thing as general intelligence". It is also (unfortunately) incorrect to assume that human civilization works as a single cooperative swarm. To say that those working towards AGI are merely looking for a problem-solving-master-algorithm is an incorrect definition of the problem, which isn't a good place to begin from.
Intelligence provides problem-solving ability; it is not the problem-solving ability itself. For example, intelligence also provides the ability to delay gratification, exercise caution, predict eventualities, compete, cooperate, or even do nothing, depending on what it recognizes as being in its best interest.
In purely evolutionary terms, intelligence is simply the ability to gain an advantage over the competition. Suppose we build two robots, train them to collect flowers, and give them a way to connect to a network (internet videos?) and learn more about the task at hand. We then leave the robots in a beautiful field of flowers. So far, both robots are autonomous agents, but not necessarily intelligent agents. Now, if while performing the task robot A figures out a way to pluck flowers that is less damaging to the plant and petals, or finds a route that lets it collect faster than the other robot, then robot A will be considered an intelligent agent (learning and actuating) while the other one remains a dumb automaton.
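To make the distinction concrete, here is a minimal, hypothetical sketch (the field, routes, and numbers are all made up for illustration): both robots start with the same pre-programmed route, but only robot A updates its behaviour when it stumbles on a better one. It is the feedback loop, not the task, that earns the "intelligent agent" label.

```python
import random

FIELD = list(range(20))          # hypothetical flower positions along a row

def route_cost(route):
    """Total distance walked to visit every flower in the given order."""
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

class Automaton:
    """Robot B: executes the same pre-programmed route forever."""
    def __init__(self, route):
        self.route = list(route)
    def work(self):
        return route_cost(self.route)

class LearningAgent:
    """Robot A: tries small variations and keeps whichever route is faster."""
    def __init__(self, route):
        self.route = list(route)
    def work(self):
        candidate = self.route[:]
        i, j = random.sample(range(len(candidate)), 2)   # try swapping two stops
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if route_cost(candidate) < route_cost(self.route):
            self.route = candidate                       # keep the improvement
        return route_cost(self.route)

random.seed(0)
start = random.sample(FIELD, len(FIELD))                 # same messy route for both
a, b = LearningAgent(start), Automaton(start)
for day in range(500):
    a.work(); b.work()
print("robot A (learns):   ", route_cost(a.route))
print("robot B (automaton):", route_cost(b.route))
```

The hardware and the task are identical; robot A's advantage comes entirely from noticing and keeping improvements, which is exactly the sense of "gaining advantage over competition" used above.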
The thing about gradual changes is that, on a long enough timeline...
Regarding smarter or general intelligence, the article says that "out of billions of human brains that have come and gone, none has done so". Well, there is a Chinese proverb along the lines of: those who think something cannot be done should not bother those who are trying to do it.