Monday, April 16, 2018

On Memory & Machines

Mathematicians argue that, as a matter of mathematical fact, the most rational way of making a decision is often to simply flip a coin. No wonder most of human and societal behavior borders on irrationality. But while probability is about numbers, as Glenn Shafer says, it is also about the structure of reasoning. Humans continuously redistribute the weight they assign to past events, and this redistribution shapes their immediate actions. Which brings us to the workings of memory and its role in intelligent behavior.
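
To make the weight-redistribution idea concrete, here is a minimal sketch (the class and its parameters are hypothetical, invented for illustration): a memory that keeps a weight for each past event and decays old weights as new events arrive, so recent experience carries the most pull on behavior.

```python
# Hypothetical sketch: a memory that redistributes weight over past events.
# Each new observation decays the weights of older ones, so recency
# shapes the event's current influence on action.

class WeightedMemory:
    def __init__(self, decay=0.9):
        self.decay = decay     # how quickly old events lose influence
        self.events = []       # list of (event, weight) pairs

    def observe(self, event):
        # shift weight away from the past, towards the present
        self.events = [(e, w * self.decay) for e, w in self.events]
        self.events.append((event, 1.0))

    def influence(self, event):
        # normalized weight of an event: its current pull on behavior
        total = sum(w for _, w in self.events)
        return sum(w for e, w in self.events if e == event) / total

m = WeightedMemory(decay=0.5)
for e in ["rain", "rain", "sun"]:
    m.observe(e)
# "sun" is most recent, so it now outweighs the two older "rain" events
```

Nothing about human memory is this tidy, of course; the point is only that recall here is a live distribution, not a lookup.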

Traditionally, a computer's memory works by recollection: an elaborate system of instructions and pointers retrieves stored data. So far so good. But since our goal is designing generality into computational “thinking”, let us look a little further into how a human encodes and carries information, and how that information translates into actions. I suggest readers keep an open mind along with a healthy dose of skepticism, since we will be making some assumptions.

Assumption #1: What memory stores depends upon perceptual processes.
Assumption #2: Information is discarded more often than it is stored.
Assumption #3: Good recall is reconstruction with the least bias.

Two humans may remember an event they experienced together quite differently, depending upon how relevance was associated with and encoded alongside the input information, the health of their senses, and so on. Essentially, to move towards generality, an intelligent agent must have a memory model which supports an active inference mechanism on top of its core observe-orient-act decision cycle.
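
A toy version of that decision cycle, with the memory model sitting inside the loop (everything here is a made-up sketch, not a claim about any particular architecture):

```python
# Hypothetical observe-orient-act loop: memory sits between perception
# and action, and is itself revised on every pass through the cycle.

def run_agent(percepts, memory=None):
    memory = memory if memory is not None else []
    actions = []
    for percept in percepts:
        # observe: take in the raw input
        observation = percept
        # orient: interpret the observation against stored memory
        familiar = observation in memory
        # act: behave differently towards novel vs. familiar input
        actions.append("exploit" if familiar else "explore")
        # inference step: memory is updated by what just happened
        memory.append(observation)
    return actions

run_agent(["a", "b", "a"])  # → ['explore', 'explore', 'exploit']
```

Even this trivial agent shows the dependency: change the memory model and you change which observations get treated as novel, and therefore what the agent does next.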

A fundamental idea is that an autonomous agent with general intelligence should satisfy the surprise-attention hypothesis. While it is always best to minimize the surprise a system might face, what surprises it (and its attentional faculties) is closely related to its memory model: how what the system retains is structured and recalled towards optimizing internal as well as external states.
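
In the active-inference literature, surprise is commonly formalized as negative log-probability under the agent's internal model. A sketch of how the memory model determines what counts as surprising, using raw event frequencies as a stand-in for that model (an assumption of this sketch, not the full story):

```python
import math
from collections import Counter

# Sketch: surprise as -log p(observation), where p comes from a model
# built out of memory. The more often memory has seen something, the
# less surprising it is; unseen events are the most surprising.

def surprise(observation, memory):
    counts = Counter(memory)
    # add-one smoothing so never-seen events get small nonzero probability
    total = sum(counts.values()) + len(counts) + 1
    p = (counts[observation] + 1) / total
    return -math.log(p)

memory = ["sun"] * 9 + ["rain"]
# "rain" was seen once, "snow" never: snow surprises more than rain,
# and rain more than the familiar sun
assert surprise("snow", memory) > surprise("rain", memory) > surprise("sun", memory)
```

Minimizing this quantity over time is exactly where the memory model earns its keep: a memory that retains the wrong structure leaves the agent perpetually surprised.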

Magicians and veteran criminal investigators have long known the trickeries of the human mind. Given misleading information, humans tend to misremember things, so reconstructive memory has always been a dangerous rope to walk. While generative models are getting better at producing all kinds of information, it will be interesting to see how far they evolve towards genuine reconstruction as opposed to mimicking a classification. It is therefore a rather interesting problem in machine learning and neuroscience research to reconstruct an event or memory from the milestones of sensory data and internal states. Eventually this gets into the domain of the agent's belief management and causal directionality.

Mr. Feynman The Great has one of his lectures on YouTube where he talks about the computer as a file clerk who is getting faster and better. He also discusses, almost prophetically, the nature and future of intelligence in machines, as well as the dual-use character of technology, among other things. Anyone interested in these subjects should spare 75 minutes of their life to listen to the man. Surely he is not joking.
____

Monday, April 2, 2018

No Explosions But

There is an article on AI circulating, built on the idea that an intelligence explosion is impossible and that there is no such thing as general intelligence. Here it is, and it is rather well written, so definitely worth a read.

"Singularity" as it is talked about is largely a buzzword, exploiting the fantasy of a mythical exponential rise in machine intelligence. The "explosion" is already happening; the article rightly notes that it is gradual rather than sudden, but it misses the bus at "there is no such thing as general intelligence". It is also (unfortunately) incorrect to assume that human civilization works as a single cooperative swarm. And to say that those working towards AGI are merely looking for a problem-solving master algorithm is an incorrect definition of the problem, which isn't a good place to begin from.

Intelligence provides the problem-solving ability; it is not the problem-solving ability itself. For example, intelligence also provides the ability to delay gratification, exercise caution, predict eventualities, compete, cooperate, or even do nothing, depending upon what it recognizes as its best interests.

In purely evolutionary terms, intelligence is simply the ability to gain an advantage over the competition. Suppose we build two robots, train them to collect flowers, and give them a way to connect to a network (internet videos?) to learn more about the task at hand. We then leave the robots in a beautiful field of flowers. So far, both robots are autonomous agents, but not necessarily intelligent agents. If, while performing the task, robot A figures out a way to pluck flowers that is less damaging to the plant and petals, or finds a route which allows faster collection than the other robot, then robot A can be considered an intelligent agent (learning and actuating), while the other guy remains a dumb automaton.
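
The robot story can be caricatured in a few lines (flower positions and both strategies are invented for illustration): robot B visits flowers in the order it was given, robot A reorders its route greedily by nearest neighbor, and the learned route comes out shorter.

```python
# Illustrative only: robot B follows the given order (dumb automaton);
# robot A "learns" a nearest-neighbor route. A shorter total distance
# is the evolutionary advantage gained by learning.

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def route_length(start, route):
    total, pos = 0.0, start
    for flower in route:
        total += dist(pos, flower)
        pos = flower
    return total

def nearest_neighbor_route(start, flowers):
    remaining, pos, route = list(flowers), start, []
    while remaining:
        nxt = min(remaining, key=lambda f: dist(pos, f))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

flowers = [(5, 0), (1, 0), (6, 0), (2, 0)]
start = (0, 0)
robot_b = route_length(start, flowers)                                  # as given
robot_a = route_length(start, nearest_neighbor_route(start, flowers))   # learned
assert robot_a < robot_b
```

The strategies themselves are beside the point; what matters is that only one agent changed its behavior after observing the task, which is the distinction the paragraph above is drawing.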

The thing about gradual changes is that, on a long enough timeline...

Regarding smarter or general intelligence, it is said that "out of billions of human brains that have come and gone, none has done so". Well, there is a Chinese proverb, something along the lines of: those who think something cannot be done should not bother those who are trying to do it.
___