Towards an
Implementation of a
Theory of Visual Learning
in the Brain
Shamit Patel
CMSC 601
May 2, 2011
The Problem
• To develop a working theory of
learning in the human neocortex
and implement it in software
• Goal is for the learning algorithm
to match or exceed human-level
accuracy in visual pattern
recognition and other hierarchical
inference tasks
• My hypothesis is that the brain
learns through a feedback loop of
sensing and reacting. I call this
theory SensoReaction.
• The brain essentially learns
through experience
• Feedback is the crucial ingredient of intelligence because it allows the brain to refine its predictions toward the correct answer
• Medical image processing
• Quality control
• Ultimately, we would like to build machines that operate on the same neurocomputational mechanisms as the human brain
From Von Neumann Architecture
to Neural Architecture of the Brain
Related Work
• Numenta’s Hierarchical Temporal
Memory (HTM) model
• Riesenhuber and Poggio’s HMAX
• Fukushima’s Neocognitron model
The Human Neocortex
Hierarchical Temporal Memory
• Directly based on the structure
and computational properties of
the human neocortex [1]
• Four main tasks of HTM: learning,
inference, prediction, and
behavior [1]
• Strength: Efficiency due to
hierarchical structure [1]
• Weakness: Needs lots of training
HMAX
• Models the behavior of the ventral visual stream [2]
• Fundamental operations (sketched after this list): (1) a weighted linear sum for aggregating simple features into complex ones, and (2) a highly nonlinear "MAX" operation that computes its output from the most active input [2]
• Strengths: Efficiency and invariance to
position and size of input pattern [2]
• Weakness: Poor generalization to
objects of different classes [2]
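To make the two operations concrete, here is a minimal NumPy sketch of a weighted-sum (S) stage followed by a MAX-pooling (C) stage; the function names, weights, and array sizes are illustrative assumptions of mine, not HMAX's actual implementation.

import numpy as np

def s_layer(afferents, weights):
    # Weighted linear sum: aggregate simple afferent features into
    # more complex composite feature responses.
    return weights @ afferents

def c_layer(s_responses):
    # Nonlinear "MAX" operation: the output is determined by the single
    # most active input, giving tolerance to position and scale.
    return np.max(s_responses)

# Toy usage: 4 simple-feature activations, 3 composite units, one pooled response.
afferents = np.array([0.2, 0.9, 0.1, 0.4])
weights = np.random.rand(3, 4)
print(c_layer(s_layer(afferents, weights)))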
Neocognitron
• Self-organized via unsupervised learning (see the sketch after this list) [3]
• S-cells have modifiable connections, while C-cells are invariant to the position, shape, and size of the input pattern [3]
• Strength: Unsupervised learning means we don't need labeled data
• Weakness: Poor generalization to
objects of different classes
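A rough sketch of the flavor of this unsupervised self-organization: a winner-take-all rule in which only the most strongly responding S-cell has its modifiable connections reinforced. The names, learning rate, and update rule are simplifications of my own, not Fukushima's exact equations.

import numpy as np

def train_s_cells(patterns, n_cells=8, lr=0.5, epochs=5, seed=0):
    # Each row holds one S-cell's modifiable input connections.
    rng = np.random.default_rng(seed)
    w = rng.random((n_cells, patterns.shape[1])) * 0.01
    for _ in range(epochs):
        for x in patterns:
            responses = w @ x                    # how strongly each S-cell responds
            winner = int(np.argmax(responses))   # winner-take-all selection
            w[winner] += lr * x                  # reinforce only the winning cell
    return w

# Toy usage: four one-hot patterns; each ends up claimed by some S-cell.
print(train_s_cells(np.eye(4)))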
Approach
1) Implementation of HTM system
2) Integration of SensoReaction algorithm
into the HTM system
3) Training the HTM system on temporal
image data
4) Testing the HTM system on novel input
5) Statistical analysis of results
Implementation of HTM system
• I have already implemented a considerable part of the HTM system, including the overall structure of the network (sketched below) and most of the training functionality
• Remaining work consists of
implementing inference and
integrating SensoReaction into
the system
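To give a concrete picture of what the overall structure of the network could look like, here is a minimal structural sketch; the class and field names are my own assumptions about the implementation, not its actual code.

class HTMNode:
    """One node in the hierarchy; learns the patterns seen in its receptive field."""
    def __init__(self):
        self.patterns = []          # memorized spatial patterns
        self.transitions = {}       # Markov graph: (from_pattern, to_pattern) -> count
        self.temporal_groups = []   # patterns that tend to follow one another in time

class HTMLevel:
    """A level is a set of nodes, each watching part of the level below."""
    def __init__(self, n_nodes):
        self.nodes = [HTMNode() for _ in range(n_nodes)]

class HTMNetwork:
    """Hierarchy of levels; the bottom level reads the raw input."""
    def __init__(self, nodes_per_level=(16, 4, 1)):
        self.levels = [HTMLevel(n) for n in nodes_per_level]

net = HTMNetwork()   # e.g., 16 input nodes, 4 mid-level nodes, 1 top node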
Integration of SensoReaction
algorithm into HTM system
• SensoReaction is a feedback propagation
mechanism that allows predictions to be
propagated down the hierarchy for correction
• The algorithm will be integrated into the HTM system by first introducing feedback connections between every pair of successive layers in the network; predictions will then be passed down the hierarchy via these feedback connections (see the sketch below)
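A hedged sketch of that feedback pass, assuming the HTMNetwork structure sketched earlier; correct() is a hypothetical stand-in for whatever per-node correction rule is ultimately used and is not part of the HTM specification.

def sensoreaction_feedback(network, prediction, correct):
    # Pass the top level's prediction down the hierarchy over feedback
    # connections; each lower level refines it before handing it further down.
    # correct(node, prediction) is a hypothetical hook that applies the node's
    # memorized patterns and temporal groups to adjust the prediction.
    for level in reversed(network.levels[:-1]):   # the top level produced the prediction
        prediction = [correct(node, prediction) for node in level.nodes]
    return prediction   # refined prediction at the bottom of the hierarchy

# Toy usage with an identity "correction":
# sensoreaction_feedback(net, prediction=["top-level guess"], correct=lambda node, p: p)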
Training the HTM system
• Present hundreds of streams of temporal
image data to the input layer
• Allow the system to build its internal representations
• Training will consist of: (1) memorizing patterns, (2) building the Markov graphs, and (3) forming the temporal groups (see the sketch after this list)
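For a single node, the three training steps could look roughly like this; it reuses the HTMNode sketch above, and the grouping criterion is a deliberate simplification of mine, not the actual HTM algorithm [1].

def train_node(node, pattern_stream):
    # node is an HTMNode; patterns are hashable (e.g., tuples of pixel values).
    prev = None
    for pattern in pattern_stream:
        # (1) Memorize spatial patterns not seen before.
        if pattern not in node.patterns:
            node.patterns.append(pattern)
        # (2) Build the Markov graph: count how often one pattern follows another.
        if prev is not None:
            key = (prev, pattern)
            node.transitions[key] = node.transitions.get(key, 0) + 1
        prev = pattern
    # (3) Form temporal groups: here, naively, any pair of patterns that
    #     followed each other more than once is grouped together.
    node.temporal_groups = [{a, b} for (a, b), n in node.transitions.items() if n > 1]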
Evaluation/Testing the HTM system
• Present thousands of noisy input
patterns to the HTM network
• Observe the classification accuracy of the HTM system (see the sketch after this list)
• SensoReaction algorithm comes
into play here by making
predictions, passing them down
the hierarchy, correcting them,
and passing them back up
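The accuracy measurement itself is a simple loop; classify() is a placeholder for the full bottom-up pass plus the SensoReaction feedback pass described above.

def evaluate(network, noisy_patterns, true_labels, classify):
    # Fraction of noisy test patterns that the network labels correctly.
    correct = sum(
        classify(network, pattern) == label
        for pattern, label in zip(noisy_patterns, true_labels)
    )
    return correct / len(true_labels)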
Statistical Analysis of Results
• Classification accuracy of HTM
system with SensoReaction will
be compared with classification
accuracy of standard HTM system
• A two-sample t-test will be used to compare the classification accuracies of the two systems (see the sketch below)
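A sketch of the planned comparison using SciPy's two-sample t-test; the accuracy lists are placeholders for per-run results, not real data.

from scipy import stats

# Placeholder per-run classification accuracies (illustrative only).
acc_sensoreaction = [0.91, 0.89, 0.93, 0.90, 0.92]
acc_standard_htm = [0.86, 0.88, 0.85, 0.87, 0.86]

t_stat, p_value = stats.ttest_ind(acc_sensoreaction, acc_standard_htm)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")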
Feasibility of Approach
• SensoReaction is feasible because
it is essentially based on how the
neocortex processes feedback
• Feedback can only improve the classification accuracy because prior experience is taken into account
• Feedback is the critical piece of intelligence
• Brain learns through constant
sensing and reacting
• Ultimate goal is to build machines that work on the same computational principles as the human brain
References
• [1] Numenta, Inc. (2010, December 10). Hierarchical Temporal Memory including HTM cortical learning algorithms (Version 0.2).
• [2] Riesenhuber, M., & Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11), 1019-1025.
• [3] Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193-202.
