Mathematicians argue that it is a mathematical fact that the most rational way of making a decision is often to simply flip a coin. No wonder: much of human and societal behavior borders on irrationality. But while probability is about numbers, as Glenn Shafer says, it is also about the structure of reasoning. Humans continuously re-weight their store of past events, and that shifting distribution of weights shapes their immediate actions. Which brings us to the workings of memory and its role in intelligent behavior.
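To make that re-weighting concrete, here is a toy sketch of my own (an illustration, not an established model): past events carry weights that decay as new ones arrive, and the agent's next estimate is simply the weighted average. The exponential decay factor and the payoff framing are assumptions for the example.

```python
# A toy illustration (my assumption, not an established model): an agent
# that re-weights past events as new ones arrive, so recent experience
# dominates its next action. The decay factor 0.9 is an arbitrary choice.
from dataclasses import dataclass, field

@dataclass
class WeightedMemory:
    decay: float = 0.9                          # assumed forgetting factor
    events: list = field(default_factory=list)  # (payoff, weight) pairs

    def observe(self, payoff: float) -> None:
        # Shrink every existing weight, then store the new event at weight 1.
        self.events = [(p, w * self.decay) for p, w in self.events]
        self.events.append((payoff, 1.0))

    def expected_payoff(self) -> float:
        # The weighted average over past events drives the immediate action.
        total = sum(w for _, w in self.events)
        return sum(p * w for p, w in self.events) / total if total else 0.0

mem = WeightedMemory()
for payoff in [1.0, 1.0, -1.0]:                 # one recent bad outcome...
    mem.observe(payoff)
print(mem.expected_payoff())                    # ...drags the estimate down fastest
```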
Traditionally, a computer’s memory works by recollection: an elaborate system of instructions and pointers retrieves stored data (a sketch after the assumptions below makes this concrete). So far so good. But since our goal is designing generality in computational “thinking”, let us look a little further into how a human encodes and carries information, and how that translates into actions. I suggest readers keep an open mind while carrying a healthy dose of skepticism, since we will be making some assumptions.
Assumption #1: Memory stored is dependent upon perceptual processes.
Assumption #2: Information is discarded more often than it is stored.
Assumption #3: Good information recall is reconstruction with least bias.
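As promised above, here is the machine-side contrast in miniature: conventional retrieval dereferences an address-like key and hands back exactly what was stored, with no reconstruction involved. The dictionary standing in for RAM is, of course, a gross simplification.

```python
# An illustration of "recollection through instructions and pointers":
# the machine does not reconstruct a value, it dereferences an address-like
# key and returns exactly what was written. The dict standing in for memory
# and the address value are assumptions for this toy example.
store = {}                                  # address -> data, a stand-in for RAM
store[0x1000] = "the event, exactly as it was written"

def recall(address: int) -> str:
    return store[address]                   # exact retrieval, no reconstruction

print(recall(0x1000))                       # always the same bytes back
```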
Two humans may display differing memories of an event they experienced together, depending on how much relevance each encoded into the incoming information, the health of their senses, and so on. Essentially, to move towards generality, an intelligent agent must have a memory model that supports an active inference mechanism on top of its core observe-orient-decide-act cycle.
A fundamental idea is that an autonomous agent with general intelligence should satisfy the surprise-attention hypothesis. While it is always best to minimize the surprise a system might face, what surprises it (and engages its attentional faculties) is closely tied to its memory model: how what the system retains is structured and recalled towards optimizing internal as well as external states.
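To pin the hypothesis down a little, here is a minimal sketch under assumptions of my own: the memory model is a single Gaussian belief, surprise is the negative log-probability of an observation under it, and attention is modeled crudely as the learning rate of the memory update, growing with surprise. The Gaussian belief and the attention scaling are illustrative choices, not claims about how such a system must be built.

```python
# A minimal sketch of the surprise-attention idea, under stated assumptions:
# surprise = -log p(obs) under the agent's current Gaussian belief, and
# attention (here, simply the learning rate of the update) scales with it.
import math

class Agent:
    def __init__(self, mean: float = 0.0, var: float = 1.0):
        self.mean, self.var = mean, var     # a one-number "memory model"

    def surprise(self, obs: float) -> float:
        # Shannon surprise: -log p(obs) under the current Gaussian belief.
        return 0.5 * (math.log(2 * math.pi * self.var)
                      + (obs - self.mean) ** 2 / self.var)

    def step(self, obs: float) -> float:
        s = self.surprise(obs)                      # observe
        attention = min(1.0, s / 5.0)               # orient: assumed scaling
        self.mean += attention * (obs - self.mean)  # decide/act: update memory
        return s

agent = Agent()
for obs in [0.1, 0.2, 5.0]:
    print(round(agent.step(obs), 3), round(agent.mean, 3))
# The outlier yields both the highest surprise and the largest update.
```

A real agent would of course carry a far richer memory model, but even this toy shows the coupling: what the memory already holds decides what counts as surprising, and surprise decides what gets written back.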
Magicians and veteran criminal investigators have long known the trickery of the human mind. Given misleading information, humans tend to misremember things, so reconstructive memory has always been a dangerous rope to walk. While generative models are getting better at producing all kinds of information, it will be interesting to see how they evolve, and how far towards genuine reconstruction as opposed to mimicking classification. It is therefore a rather interesting problem in machine learning and neuroscience research to reconstruct an event or memory from the milestones of sensory data and internal states. Eventually this leads into the domain of an agent’s belief management and causal directionality.
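As a toy version of that reconstruction problem, the sketch below fills in an event timeline from a few sensory “milestones” by straight-line interpolation between anchor points, a crude stand-in for “reconstruction with least bias” since it adds no preferred direction between what was actually observed. The data and the method are assumptions for illustration only.

```python
# A toy take on reconstructing an event from "milestones" of sensory data.
# Everything here is an assumption for illustration: milestones are sparse
# (time, value) observations, and the gaps are filled by straight-line
# interpolation, which adds no preferred direction between anchor points.
def reconstruct(milestones: dict, length: int) -> list:
    times = sorted(milestones)
    out = []
    for t in range(length):
        if t <= times[0]:                   # before the first milestone
            out.append(milestones[times[0]])
        elif t >= times[-1]:                # after the last milestone
            out.append(milestones[times[-1]])
        else:
            # Find the bracketing milestones and interpolate between them.
            lo = max(m for m in times if m <= t)
            hi = min(m for m in times if m >= t)
            if lo == hi:                    # t lands exactly on a milestone
                out.append(milestones[lo])
            else:
                frac = (t - lo) / (hi - lo)
                out.append((1 - frac) * milestones[lo] + frac * milestones[hi])
    return out

print(reconstruct({0: 0.0, 4: 2.0, 9: 1.0}, 10))
```

Anything cleverer than interpolation, say a generative model filling the gaps from its priors, is exactly where the misremembering risk above creeps back in.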
The great Mr. Feynman has a lecture on YouTube in which he talks about the computer as a file clerk that keeps getting faster and better. He also discusses, almost prophetically, the nature and future of intelligence in machines, as well as the dual-use character of technology, among other things. Anyone interested in these subjects should spare 75 minutes of their life to listen to the man. Surely he is not joking.