Chunking theory is a theory of how we learn and remember: essentially, in combinatorial "chunks" stored in long-term memory. Recent work on receptor webs and in other areas of cognitive science and biology follows similar concepts.
If this sounds like I’m not explaining it well… I’m not. I’m absorbing it too. Read the source material for more info.
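For a rough feel, here's a minimal, hypothetical sketch of the chunking idea in Python. It is not CHREST and not the theory's actual mechanisms; the "long-term memory" and the familiar patterns in it are made up for illustration. The point is just that patterns already stored in long-term memory let a long sequence be held as a few chunks instead of many separate items.

```python
# A toy, hypothetical sketch of chunking (not CHREST): patterns already
# stored in "long-term memory" let a long sequence be held as a few
# chunks instead of many individual items.

LONG_TERM_MEMORY = {
    ("1", "9", "8", "4"): "1984",   # a familiar year
    ("3", "1", "4"): "314",         # the start of pi
    ("0", "0", "7"): "007",         # a familiar code number
}

def chunk(items, ltm):
    """Greedily group a sequence into the largest chunks known to LTM."""
    chunks, i = [], 0
    while i < len(items):
        for size in range(len(items) - i, 0, -1):
            window = tuple(items[i:i + size])
            if window in ltm:
                chunks.append(ltm[window])
                i += size
                break
        else:
            chunks.append(items[i])  # nothing familiar: keep the raw item
            i += 1
    return chunks

print(chunk(list("1984314007"), LONG_TERM_MEMORY))
# ['1984', '314', '007'] -- ten digits collapse into three chunks
```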
What I'm mostly excited by is how well this chunking seems to mesh with other "network" computational models. And it should, since I think some of the same researchers have branched into those areas.
There’s actually a code base called CHREST modeling the theory.
Here's good information on CHREST that leads to a variety of other resources.
I don't see a flow from the work of Simon to other areas of biology, or to the other areas of AI, that might be implied by this blog. The chunk is interesting as a conceptual model, not an explanation. To the degree that it communicates a "phase" or "trace" organization of what we refer to as a recalled episode, a memory, it is valuable, and it links to others who have become disenchanted with the lack of productivity from a memory organization that is spatial, i.e., located in this or that area of the brain.
It is as if the pattern of neural activity, the chunk, is what the remembered episode is, rather than the movie snippet that current memory approaches embrace. That move away from a Newtonian slot-machine approach I see as a real contribution, and a newer bias on how to work on the phenomenon.
I didn’t create the bridge between the disciplines.
The bridge is the similarity of the conceptual models, all centered on NETWORK theory.
Analyze the various models of cognition, long-term memory, receptors, ant swarms, social networks, and so on: all of them involve network theory, the mathematics of networks.
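As a toy illustration (the edge lists below are made up, not real data): once each of those systems is written down as a graph, the same mathematics applies to all of them.

```python
# Toy illustration with made-up edge lists: very different systems,
# one shared mathematical object (a graph), and the same analysis.

from collections import defaultdict

def degree_distribution(edges):
    """Count how many connections each node has, whatever the domain."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

chunk_links    = [("chunk_A", "chunk_B"), ("chunk_B", "chunk_C")]
receptor_web   = [("receptor_1", "ligand_X"), ("receptor_2", "ligand_X")]
social_network = [("alice", "bob"), ("bob", "carol"), ("carol", "alice")]

for name, edges in [("chunks", chunk_links),
                    ("receptors", receptor_web),
                    ("social", social_network)]:
    print(name, degree_distribution(edges))
```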
Few serious research efforts retain the spatial approach you decry; it left most brain and memory studies long ago.
Chunking theory is interesting to me because people have been able to create computational models that we can use to build better robots, computer programs, and algorithms.
Whether it explains human memory or not isn’t as interesting to me as having working tools to play with.
By “explanation”, in the above post, I meant model.
We cannot create models that ARE THE EXPLANATION, short of just creating THE THING ITSELF.
All models (computer, mathematical, experimental, metaphor, thought experiments) are conceptual, never an exact explanation of the thing.
The idea of computational irreducibility implies that a sufficiently complex phenomenon cannot be reduced to any model that is less complex than the phenomenon itself. To me, this also implies that no model of a sufficiently complex phenomenon can explain it more accurately than raw observation of what the phenomenon actually does.
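The usual toy illustration of that idea is a simple cellular automaton such as Wolfram's Rule 30: as far as anyone knows, there is no shortcut formula for what the pattern looks like at step n; you just have to run it, step by step, and watch. A minimal sketch:

```python
# Rule 30, a standard illustration of computational irreducibility:
# no known shortcut predicts row n without computing every row before it.

def rule30_step(cells):
    """One update of the Rule 30 cellular automaton (wrapping edges)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 31
row[15] = 1                 # start from a single "on" cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```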