Chapter twelve
The retina of memory

Victorians saw visual memory in the brain as a scrapbook of photographs.  It now seems possible once again that they were right.
WE caught a breeze, after lunch, which took us gently up past Wargrave and Shiplake. Mellowed in the drowsy sunlight of a summer’s afternoon, Wargrave, nestling where the river bends, makes a sweet old picture as you pass it, and one that lingers long upon the retina of memory.
Three Men in a Boat (Fiction, 1889, 197 pages)
— Jerome K. Jerome

In invoking the “retina of memory” in 1889, Jerome K. Jerome was expressing a very Victorian idea. He may have read it in a newspaper, for in 1888 the mapping of a “cortical retina” was first reported in Sweden. It seemed reasonable then that the mind’s eye should have a retina. To the retina of the eye, there could correspond conjugate points on a retina of memory, somewhere deep in the brain.

We will attempt in this chapter and the next to move from Jerome’s retina of memory, which was a literary conceit, to an anatomical and technically plausible version based on multichannel neurons. If there exists in the brain at least one retina of memory, then where could it be? What does it look like? How does it work?

Jerome K. Jerome was a popular Victorian humorist. Three Men in a Boat, a comic travelogue and his most successful book, has been reprinted again and again. The 1911 lantern slide from the Oxfordshire photographic archive, above, approximates the riverside scene Jerome’s narrator (and Jerome himself) must have remarked, drifting by on his boat. On the right is the St. George & Dragon, a landmark inn and pub. It is still there.

A remembered moment on the Thames inspired the remarkable turn of phrase: a Retina of Memory.

In the 1880s, when Jerome K. Jerome went gliding down the Thames past Wargrave “in the drowsy sunlight of a summer’s afternoon,” the prevalent metaphor for human memory was the photograph. It was thought that something in the brain analogous to the grains of a photograph must be the substrate of memory — that the brain was taking pictures constantly through the aperture of the eye. The retina of the eye was also (and often still is) presented as analogous to photographic film.

The cortical retina
Every point of light reflected into Jerome K. Jerome’s eye from the St. George & Dragon registered as a pixel on his retina.

These pixels were thought to find their way through the optic nerve to re-form an inner picture, to be recorded somewhere in the unlit parts of the brain, framed perhaps on a structure corresponding to the retina – a “retina of memory.” From this inner retina, the same picture, a frozen snapshot image, could be elicited and recalled to consciousness years later.

The idea was grounded in the anatomy and physiology of the late 19th century. By 1870, a reasonably good picture of the visual pathway was already on the drawing board, though it was not yet fully detailed or accepted. They knew the pathway began at the retina, traversed the optic nerves and optic chiasma. In the 1890s it became clear that the pathway turned at the lateral geniculate nucleus, and then radiated back toward the occipital lobe, shown here in pink.

Visual cortex of human brain is shown in pink.
So they had the origin of an image, the retina, and they had followed the wires all the way back to their apparent destination: the occipital lobe of the brain – our visual cortex.

In 1888, Salomon Henschen, a Swedish neurologist and Professor of Medicine at Uppsala, published a study of lesions in the human occipital lobe. He had collected and sifted data on such lesions in 160 patients. Each lesion produced blindness in a different and distinct part of the visual field.

This simulation of hemianopia shows the vertical split in the retina. The two right half-retinas are not reporting.
Serious damage restricted to the right side of the visual cortex produced blindness in the right half-retinas of both eyes (experienced as a loss of the left half of the visual field). The phenomenon, called hemianopia, is discussed at the Lighthouse International site. Similarly, serious damage to the left side of the visual cortex produced blindness in the left half-retinas of both eyes.
This simulation of hemianopia shows the vertical split in the retina. The two left half-retinas are not reporting.
In effect, each eye contains two half-retinas. Although the retinas are vertically split (in terms of their wiring) we are completely unaware of the junctures. Henschen concluded that fibers from the two right half-retinas converge and connect to the right visual cortex. Fibers from the two left half-retinas converge and connect to the left visual cortex.
Visual pathway in human brain. The retina of each eye is vertically split. The two right half-retinas are wired to the right visual cortex. The two left half-retinas are wired to the left visual cortex.
The half-retina to half-brain mnemonic is rights-to-the-right, lefts-to-the-left. This wiring plan, which is now familiar, was first confirmed by Henschen.

Downshift in scale
He then looked at the problem at a much smaller scale, to see if he could generalize from the documented effects of tiny lesions. A nick in the left visual cortex created a small blind spot at corresponding points in the left half-retinas of both eyes. An adjacent tiny lesion in the brain should, Henschen asserted, produce adjacent blind spots in the visual fields. It followed that one should be able to map, point for point, the retinas onto the occipital lobes.
Salomon Henschen, a Swedish neurologist, was the first scientist to map the retina onto the brain.

Henschen proceeded to draw a map of the retinas’ projections onto the occipital lobes of the brain. In retrospect it seems he got the map backwards, in that the foveal and peripheral fields were flipped, but his concept mattered vastly more than any mistaken detail. (Kort öfversigt af läran om lokalisation i hjernbarken. Uppsala LäkFören Förh 1888; 27: 507, and On the visual path and centre. Brain 1893; 16: 170–180.)

Henschen was the first scientist to map the retina onto the brain. He thought it was the one and only such map, and that it perfectly replicated in the brain the distribution of receptors in the eye. He named the structure he had mapped “the cortical retina.”

There are now more than 30 such maps and representations and re-representations of the retinal field (and the visual field, which is not exactly the same thing) at various sites in the primate brain. In the twentieth century, mapping the visual system in the brain became a core theme — and from 1950 arguably the core theme — in vertebrate brain physiology. Here is a thorough 2005 review of the mapping aspects of this work, including recent brain maps and references.

But somehow, Henschen’s conception of a “cortical retina” was never quite fulfilled. An image appears on the retina, yes. But does a 1-for-1, pixel-for-pixel map of that retinal image make it back to the cortex? The conventional answer is an emphatic no.

So exactly what happened to the beautiful idea of images mapped into the brain, and of visual memory as a record of these images?

The hopeless bottleneck
The first difficulty with the notion of 1:1 mapping from the photoreceptors to the brain is the bottleneck of the optic nerve.
The retina has about 125 million photoreceptors. The axons of the retinal ganglion cells make up the optic nerve, but there are only about 1.2 to 1.5 million of them. The cell counts are imprecise, but it is clearly quite a funnel. It would seem that if one did try to reconstruct an image of the whole retina in the brain in real time, it would have to be at a roughly 100-fold reduction in resolution from that detected by the photoreceptor set.

The bottleneck does not exist for a multichannel neuron. In an optic nerve made up of multichannel neurons, there could be a half-billion distinct channels. More than enough.

But in the conventional view of the nervous system, the bottleneck is indeed an issue. With a 100-to-1 information loss or step-down between the photoreceptors and the visual cortex, how do we manage to see so clearly?

One solution is to note that although the overall ratio of receptors to optic nerve output lines is about 100:1, it is just an average. It doesn’t mean the ratio is fixed at 100:1 at all points in the retina. The distribution of retinal ganglion cells is not uniform: the ganglion cells are densely concentrated near the center of the retina, the area of acute vision, and spread out at the periphery. It can also be urged that the receptive fields of the ganglion cells are smallest at the center, and trend larger as one scans toward the periphery.

So at the center of the primate retina, in the fovea, there could indeed be a 1:1 relationship between cone receptors and output lines. It is therefore possible, in this scheme of things, to transmit a high resolution picture from the fovea, only, to the visual cortex.

Moving out along the radius line of the retina into the peripheral mix of rods and cones, the input to a given ganglion cell expands to include a collection of encircling photoreceptors, so resolution drops. Well out in the periphery, single ganglion cells, each serving great clusters of rod receptors — thousands — are thought to report huge, blobby pixels. These big pixels might be more accurately characterized as large targets for occasional photons in the dead of night.

In effect this system suggests a privileged cable link inside the optic nerve, running straight from the foveal cones to the brain. About 50 percent of the optic nerve axons are thought to be pipelining information from the fovea, so there are roughly 625,000 axons and, thus, picture elements.
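
As a sanity check on the arithmetic, here is a minimal sketch using the figures quoted above; the even split between fovea and periphery is the assumption just stated, not a measurement.

```python
# Back-of-the-envelope check on the optic nerve bottleneck. The counts are
# the approximate figures quoted in the text; the 50 percent foveal share is
# the assumption stated above, not a measured value.

photoreceptors = 125_000_000        # rods + cones in one retina (approximate)
optic_nerve_axons = 1_250_000       # within the quoted 1.2 to 1.5 million range
foveal_share = 0.5                  # assumed fraction of axons serving the fovea

funnel_ratio = photoreceptors / optic_nerve_axons
foveal_axons = int(optic_nerve_axons * foveal_share)

print(f"receptors per optic nerve axon: about {funnel_ratio:.0f} to 1")
print(f"axons (and picture elements) serving the fovea: about {foveal_axons:,}")
# prints roughly 100 to 1, and 625,000 foveal picture elements
```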

Given its typical assumptions about the neuron and the nerve impulse, this story seems plausible enough, at first, but there is a petite problem, or a seed of conflict, at the periphery. If the retina captures the Fourier plane, then the most interesting part of it — where the most finely grained detail is encoded — must be detected at the periphery. The textbook approach makes it seem impossible to record this wonderful detail in the fringes at the outermost reaches of the retina.

The idea of a privileged foveal channel encounters deeper problems when you start carving into the channel capacity to allow for parallel transmission of multiple worldviews. In trichromats, for example, color vision can require the parallel transmission, via the optic nerve, of three distinct worldviews, one each from the red, green and blue cone sets. Color is not the only type of information you might want to transmit in parallel via the optic nerve. There are 20 different identifiable types of ganglion cells in the primate retina. If receptive fields are grouped by ganglion type, each type forms a regular lattice, and so it appears there exist 20 parallel systems in the retina capable of originating 20 distinct worldviews for the brain to parse and process. The privileged channel in the optic nerve begins to look extremely cramped for space.
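
To put a rough number on how cramped the channel becomes, here is a small sketch that divides the foveal picture elements among the roughly 20 ganglion cell types. The even division is my own simplifying assumption, made only to illustrate the squeeze.

```python
# How much of the privileged foveal channel is left per parallel worldview?
# The 625,000 foveal picture elements come from the text; dividing them
# evenly among about 20 ganglion cell types is an illustrative assumption.
import math

foveal_picture_elements = 625_000
ganglion_cell_types = 20

elements_per_worldview = foveal_picture_elements // ganglion_cell_types
side = math.isqrt(elements_per_worldview)

print(f"picture elements per worldview: {elements_per_worldview:,}")
print(f"equivalent square image: about {side} x {side} pixels")
# prints 31,250 elements, roughly a 176 x 176 pixel image per worldview
```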

In short, the bottleneck of the optic nerve is not a solved problem. In my view it will never be solved because it is a problem that doesn’t exist.


There is, by the way, another and more recently conceived take on the problem of the bottleneck: it could be overcome by a compression/decompression system that strongly resembles the one used in digital television transmission and reception. The idea was probably inspired, in fact, by digital TV. Using compression and decompression, it might be possible, even within the conventional model of the nervous system, to resurrect Henschen’s idea of 1:1 image mapping of the whole retina — ever so slowly painted from the retina into the brain.
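
As an illustration of the principle (and only the principle; broadcast television uses block DCTs and motion compensation, not a single global transform), here is a minimal sketch that pushes a synthetic image through a 100:1 “bottleneck” by keeping only the largest one percent of its Fourier coefficients and then reconstructing.

```python
# Toy illustration of the compression idea: squeeze a synthetic image through
# a 100:1 bottleneck by keeping only the largest 1 percent of its Fourier
# coefficients, then reconstruct. Digital TV uses block DCTs and motion
# compensation; this is only a sketch of the principle, not of MPEG.
import numpy as np

rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
# smooth background, one hard edge, a little noise: a stand-in "retinal image"
image = np.sin(x / 9.0) + (x > 64).astype(float) + 0.1 * rng.standard_normal((128, 128))

spectrum = np.fft.fft2(image)
keep = int(spectrum.size * 0.01)                 # the 1 percent that fits the bottleneck
threshold = np.sort(np.abs(spectrum).ravel())[-keep]
compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

reconstruction = np.real(np.fft.ifft2(compressed))
error = np.sqrt(np.mean((image - reconstruction) ** 2)) / image.std()

print(f"coefficients kept: {keep} of {spectrum.size}")
print(f"relative RMS error after the 100:1 squeeze: {error:.2f}")
```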

Hubel and Wiesel
Historically, however, the notion of a cortical retina was pretty much set aside in the 1960s by the celebrated neurophysiologists Hubel (pronounced Hewbel) and Wiesel. The 19th century idea of using the brain to memorize a snapshot of a literal image on the retina was scrapped and supplanted. The new and very different idea was to let the brain extract and memorize the features of an image.

Feature detection quickly became the basis of a new model of how the visual cortex works – a model that is still widely accepted today. It contains such admirable ideas, and it is based on such extensive and beautifully described experimental work, that it went almost unchallenged for most of the rest of the 20th century. The model is problematical – to be candid I think it is dead wrong. However, it holds such an important place in neuroscience that one cannot spin alternative hypotheses about how the brain might work without first addressing it. Herewith.

A loudspeaker in the lab can be used to broadcast the crackle of nerve impulses.
“A roar of Impulses…”
In the 1880s Salomon Henschen was what we might call today a data miner. He collected in one place, his office, data that had been observed over time by many neurologists and pathologists on the effects of small lesions in the visual cortex. He drew his map of the “cortical retina” based on his precious collection of facts — facts originally reported, for the most part, from clinical observations and autopsies performed by many different people. His work was brilliant and meticulous, but far removed from direct experimentation.

In the 20th century, it became possible to actually make electrical measurements from the visual pathway of animals, using probes planted near or in individual neurons, including neurons in the visual cortex. The new instruments were based on fine probes, electronic amplification and display. Often the experimenters used a loudspeaker to broadcast the crackle of nerve impulses as they worked. This made it unnecessary to constantly watch the screen of an oscilloscope.

This thread of research began with Hartline’s studies of retinal ganglion cells in the late 1930s, and was picked up after the war by Stephen Kuffler. Kuffler discovered the target-like geometry of inhibition and stimulation in receptive fields of the retinal ganglion cells of cats. David Hubel and Torsten Wiesel were working in Kuffler’s lab at Johns Hopkins. The Kuffler lab moved to Harvard, and Hubel and Wiesel ultimately became – with Eric Kandel – the most famous neuroscientists of their century. Hartline, Hubel, Wiesel and Kandel all won Nobel prizes; Kuffler, who died in 1980, did not.

Hubel and Wiesel actually did the direct experimental work Henschen must have wished or dreamed he could have done. Torsten Wiesel was born in Uppsala, but it is not clear whether the two youthful collaborators were specifically aware of Henschen’s publications in the 1880s on the “cortical retina.”

A famous, rather incidental discovery
Their original experimental concept was to stimulate the retina of a cat with points of light (or points of darkness) while probing with an electrode in the visual cortex for a response – a burst of spikes. It was basically the same experimental approach Kuffler had taken in exploring the receptive fields of retinal ganglion cells, and they started out using Kuffler’s apparatus, a multibeam ophthalmoscope which included two extra beams for stimulating the retina.

But Kuffler had looked at spike trains on ganglion cells, which are the output neurons from the retina. Hubel and Wiesel were probing for responses far, far down the line of the visual pathway, at the back of the cat’s brain in the visual cortex.

Kuffler’s apparatus included a light beam projected through a small hole in a brass plate. The plate was the size and shape of a microscope slide, and was secured in a slot. To deliver the obverse stimulus, a black spot in a light field, the experimenter could remove the brass plate and replace it with a glass microscope slide, onto which an opaque black dot had been pasted.


It was the microscope slide that produced the most interesting response from a neuron in the visual cortex – but curiously, the black dot had nothing to do with it. The neuron’s response was quite specifically associated with the act of installing the slide in its slot. The experimenters eventually guessed that the edge of the moving slide was casting a “shadow” onto the retina. As the slide’s edge moved, the neuron responded. It emerged, moreover, that the neuron was “orientation selective.” The neuron gave a maximal response only when the edge of the slide was oriented at a certain, just-right angle. When the edge of the slide was oriented at precisely this angle and moved across the retina, the neuron would emit “a roar of impulses.”

Here is a video that shows a re-enactment of the experiment. Watch it with the sound turned on so you can hear the spike trains on the loudspeaker. Backspace to return to this text.

Another video shows a similar experiment, using a bar of light in various angular orientations. The video shows the cat’s point of view. Fast forward to the middle of the video. Note that the eye is immobilized, so the bar must be in motion to elicit a response.

In later experiments Hubel and Wiesel discarded the multibeam ophthalmoscope and simply projected onto a screen images of lines, edges, bars, and the like. These figures could be moved and rotated. It was simplicity itself. Basically they were showing pictures to a cat while monitoring the reaction of neurons in its visual cortex. The first video is probably taken from this later period, and the image of the edge of the microscope slide was intercut to help retell the story.

There is a good account of this discovery in the widely used undergraduate textbook, Neuroscience: Exploring the Brain. You can find the same anecdote in several other places: in Hubel and Wiesel’s Brain and Visual Perception it appears on page 60. In this version the “cell seemed to come to life and fired impulses like a machine gun.” There is a helpful detail about the angular response: for example, if the edge is oriented at right angles to the optimal position, the signal actually goes dead. The Journal of Physiology paper (1959) 148, 574–591 that first reports this work is reproduced there, beginning on page 67. There is another account beginning on page 69 of Hubel’s Eye, Brain and Vision. Finally, the story is told in the context of subsequent research in Hubel’s Nobel lecture of 1981.

The Edge Detector
It is the angular orientation of the slide’s leading edge that matters. The line or edge presented to the retina can be moved in translation any which way without affecting the pulse train response of the monitored cell. But as the angle of the edge is varied, the pulse stream from the neuron in the visual cortex speeds up or slows down. At one specific angle of orientation, the neuron’s firing rate will go wild.

In this animation the important response occurs at angles near 0 and 180 degrees. This is just a metaphor at this point in the narrative, but it suggests physical optical effects might underlie the experimental results.

A vigorous high frequency pulse train response from the cortical neuron was interpreted as a response to a high intensity stimulus of an orientation-selective cell. In effect the cortical neuron was thought to be shouting, “Yes — that’s 41 degrees exactly!” A mild response, or low frequency pulse train was read as a response to a low intensity stimulus — that is, a neuron reporting that the detected line, or edge, is not yet turned to the just-right angle.

Different cells were found in the visual cortex that responded to different edge angles. It appeared in fact that these cells, now sometimes styled as “edge detectors”, had each been pre-tuned to trigger on an edge presented to the retina at a specific angle.

It was suggested that the ability of a neuron to discern a specific angle was conferred by a configuration of the intricate patterns of center-surround inhibition and sensitivity originally discovered by Kuffler.

Once the angle of maximum response had been determined for a given neuron, Hubel and Wiesel could change the stimulus input to the eye by rotating the line or edge projected onto the retina. As the line was rotated away from the ideal angle, the pulse stream on the responding neuron would die down. But if they changed neurons, by advancing the needle of the probe through the tissue of the cortex, they could find another neuron nearby, maximally responsive to the new angular position of the edge stimulator.

The generalization to feature detectors
The discovery of “orientation selective” neurons in the visual cortex might seem obscure, but it led very quickly to a completely new view of how the brain works — and indeed it became the dominant view of how the brain works.

From the discovery it seemed to follow, in general, that the visual cortex contained cells designed to detect the angular orientation of lines or edges associated with objects in the animal’s field of view. These experiments with orientation selective neurons engendered the concept of a “feature detector.” From the image projected on the retina, the brain was, per this line of research and thinking, extracting information about the edges of objects, their angular orientation. Differential responses to their direction of motion were also noted.

In other words, it appeared the image on the retina was not being recorded frame by frame, like a movie of the world, in the visual cortex. Instead, the retinal image was being speedily dissected into a set of useful abstractions. Where does the object begin? Where does it stop? How fast is it going, and in which direction?

Since the abstract information provided by the edge detector nerve cell was pretty primitive, it followed that there must be an ascending series of brain components, assembling the incoming abstractions into a useful set of specifications that could indeed be memorized and, subsequently, recognized. Think about an egg. Such an object can be assembled – integrated, in effect – from many short line segments, each segment characterized by a specific angular orientation. A collection of orientation sensitive neurons, linked perhaps with a logic rather like an array of AND gates, could recognize and report to higher centers the quality I suppose we might call Eggness.
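
Here is a minimal sketch of that logic, and only of the logic, since the biology is exactly what is in dispute: invented orientation “detectors” feeding an AND-like unit that reports “Eggness” when all of its required features are present. Every name, angle and tolerance in it is hypothetical.

```python
# Minimal sketch of the feature-hierarchy logic described above: a layer of
# orientation-tuned "detectors" feeding an AND-like unit that reports only
# when all of its required features are present. The detectors, angles and
# the "egg" itself are invented for illustration; nothing here is anatomy.
from dataclasses import dataclass

@dataclass
class EdgeDetector:
    preferred_angle: float      # degrees
    tolerance: float = 10.0     # how far off the edge can be and still fire

    def fires(self, edge_angles):
        return any(abs(a - self.preferred_angle) <= self.tolerance for a in edge_angles)

class AndUnit:
    """Fires only if every one of its input detectors fires (AND-gate logic)."""
    def __init__(self, detectors):
        self.detectors = detectors

    def fires(self, edge_angles):
        return all(d.fires(edge_angles) for d in self.detectors)

# An "eggness" unit needing short segments at four orientations.
eggness = AndUnit([EdgeDetector(a) for a in (0, 45, 90, 135)])

print(eggness.fires([2, 44, 88, 133, 170]))   # True: all four orientations present
print(eggness.fires([2, 44, 88]))             # False: one required feature missing
```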

The idea of a hierarchy of abstractors – each recognizing an important feature and pushing it upstairs to a higher stratum of the brain, to form an element of some higher abstraction – became enormously influential. The experiments were simple and the results had verisimilitude. We expect our brains to manufacture abstractions. It is what the brain is supposed to do.

The ineluctable grandmother cell
Per this brain model the actual, literal image of your grandmother, pixel for pixel, was never preserved. In fact it was left far, far behind – a momentary visual episode that had flashed upon the retina, and probably never made it past the lateral geniculate nucleus.

What was conserved and memorized instead was a collection of abstracted facts and qualities: an assembly of edges, a yellowness of teeth perhaps, a grayness of hairs, a mannerism. When the actual grandmother reappeared as a live image impressed upon the retina, all these “features” would be recognized by individual feature detecting neurons. Their outputs would converge, as though via AND gates, to the grandmother cell. And the grandmother cell would signal recognition (to something – the CPU one imagines) with an energetic firing of impulses.

Here is a link to a quick history of the grandmother cell by Charles Gross at Princeton. The author explains how it was elaborated as a concept, taking Hubel and Wiesel’s work as a starting point.

The grandmother cell was probably the high watermark of this line of thought about how the brain works. David Rose outlined the main difficulties with the idea in an excellent short essay, here.

A lot of people got off the train because of the grandmother cell. One problem was that it sounds like a joke, and it is impossible to resist making more jokes when talking about it. The principle could not be decisively confirmed experimentally, though not for want of trying. Yet another problem was the rising promise, in the 1980s and 1990s, of neural nets. The idea that any one cell could do anything in isolation, at the top of a logical pyramid or anywhere else, let alone pick out your grandmother, doesn’t nicely fit the neural net model, which requires ensembles of connected cells to form a memory or decision.

The Jennifer Aniston cell
In June, 2005, there was a flurry of publicity about a “Jennifer Aniston neuron” discovered in a human patient. This neuron fired vigorously whenever the patient was shown a photo of Jennifer Aniston. Photos of other celebrities had no effect. In other patients, the researchers believed they had located a Bill Clinton neuron and a Halle Berry neuron. Not quite clear to me how you would pinpoint or luck into these celebrity neurons in a universe of 100 billion testable neurons, but.

In the context of the multichannel neuron model, we would say that a frantically firing neuron has been, for whatever reason, overdriven. It has broken into oscillation and stopped communicating. It is essentially a pinned meter. It does not signal the recognition of Jennifer Aniston.

It might have been overdriven by a pattern of light and dark in the photo of Jennifer Aniston, or in the back focal plane’s transformed representation of the photo. So this could indeed be a response to Jennifer Aniston, or to the physical optics of Jennifer Aniston. But it is not a unique signal of recognition. It is a signal the neuron is overwhelmed and has gone off duty. This could happen to any neuron.

In any event, another neuroscientist asked to comment in Nature’s news story on this work emphasized that “…nobody is saying this is the grandmother cell.” By way of pointing out what “nobody is saying,” the speaker had of course said it himself – the grandmother cell.

The grandmother cell concept has lost favor in textbooks, but maybe it can make a comeback as a far more glamorous neuron dedicated to Jennifer Aniston.

Grandmothers aside, the basic notions of feature detection, abstraction from imagery, and orientation selectivity by cortical neurons are still well accepted, and are studied in the brain today using rather sophisticated techniques. There is a helpful review by Robert Shapley and Dario Ringach in Chapter 17 of The New Cognitive Neurosciences. For a more contemporary overview of work in this field, search on these authors and read their recent papers.

The idea that the brain deconstructs the retinal image into a set of memorable abstractions followed from the early experiments of Hubel and Wiesel. To see if this ingrained idea still holds up after so many years, we should look anew at their experimental methods and assumptions.

The Sparse Code
It is often repeated in texts and in historical accounts of neuroscience that diffuse light cannot stimulate the retina in such a way as to produce significant signals in the visual cortex. This apparent lack of response greatly frustrated early researchers.

Richard Jung, in Germany in the 1950s, devised precise instrumentation to measure the responses of nerves in the visual cortex, but he used diffuse light to stimulate the eye. He found little or nothing. Before Hubel and Wiesel, many other experimenters reported the same difficulty.

By “no important response” to diffuse light, the historians mean that diffuse light produced no vigorous spike streams. The difficulty is invariably explained by pointing out that diffuse light produces simultaneous inhibition and stimulation. The net effect, when measured as the firing rate of a cortical neuron, is a negligible response.

But today, knowing what we know now, we should probably ask — just how negligible was it? In the 1950s experimenters were not much interested in the passage of just one or two spikes. Not until the early 1990s did we learn that one or two spikes can convey an important signal. It now appears a “sparse code” – just one or two spikes – is able to convey the same meaning that was attributed, for decades, to a long, loud and vigorous spike stream.

When should you look for a sparse code? When should you look for a rate code? Is it like shorthand vs. longhand? If these codes are equally able to convey meaning, then why should the nervous system use one rather than another? Or why should it use one code sometimes, and other codes at other times? In fact, given the energy cost, why should it ever use a rate code?

Interesting questions.

In Chapter 2 it is suggested that a sparse code indicates the neuron is reporting on a stimulus that falls within its normal operating range. The rapid-fire rate code appears only when the neuron is overdriven, that is, hit with a stimulus that exceeds its normal operating range. A runaway spike stream has the same significance as a pinned meter.

In this model system there are two different types of spike streams. These firing patterns are different in kind. We do not possess an instrument that can readily distinguish between them, but it is a fairly easy judgement call. If the stimulus is extreme, it will elicit a rate code. If the stimulus is typical, it will be tracked by a sparse code.

In a sparse coded spike stream, if the stimulus is not changing, or is slowly drifting, an occasional spike or two will be enough to characterize the magnitude of the stimulus. If the stimulus changes markedly, a spike stream will precisely describe the change as it happens, point for point, but the stream will last only until the stimulus stabilizes at some new value. The last spike in a stream always describes the most recent state of the stimulus.

A burst of impulses that suddenly stops could be interpreted in terms of the rate code model as an adaptation. In terms of the multichannel model, it means the stimulus has stopped changing.

In an Adrian or rate coded spike stream, the stimulus exceeds the normal (adapted) operating range of the neuron. The neuron, in effect, breaks into oscillation, and produces “machine gun” firing. Until 1995, this type of vigorous spike stream was an unquestioned goal for experimenters, because it seemed to signify a recognition or response. But in the context of the model developed in Chapter 2, it means the nerve has ceased to communicate.

This is a speculative model, but I think most people can agree that an extremely rapid-fire pulse stream would be produced by a neuron that is sensing a very strong stimulus to which it has not yet managed to adapt.
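
A toy sketch of the two-code picture developed here and in Chapter 2 may make the distinction concrete. The operating range, thresholds and stimuli below are arbitrary choices, not measurements; the point is only that a tracking neuron stays sparse while an overdriven one roars.

```python
# Toy sketch of the two kinds of spike stream described above: within its
# operating range the model neuron emits an occasional spike only while the
# stimulus is changing (sparse code); driven outside that range it "breaks
# into oscillation" and fires at a high rate (rate code, a pinned meter).
# The operating range, thresholds and stimuli are all arbitrary choices.

def spike_train(stimulus, lo=0.0, hi=1.0, change_threshold=0.05, burst_rate=10):
    spikes = []
    last_reported = stimulus[0]
    for t, s in enumerate(stimulus):
        if s < lo or s > hi:
            spikes.extend([t] * burst_rate)      # overdriven: machine-gun firing
        elif abs(s - last_reported) > change_threshold:
            spikes.append(t)                     # sparse: one spike marks the change
            last_reported = s
    return spikes

steady   = [0.5] * 20                            # unchanging, in-range stimulus
drifting = [0.3 + 0.02 * t for t in range(20)]   # slow drift within range
extreme  = [0.5] * 10 + [5.0] * 10               # stimulus leaves the operating range

for name, stim in [("steady", steady), ("drifting", drifting), ("extreme", extreme)]:
    print(name, "->", len(spike_train(stim)), "spikes")
# steady -> 0 spikes, drifting -> a handful, extreme -> a roar
```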

65 years later…
What do Hubel and Wiesel’s feature detectors mean in 2024? The results of their experiments are by now woven through the fabric of theoretical neuroscience. The received wisdom about how the visual pathway in the brain works is still grounded on these results: The brain does not store and recall images. It stores and recognizes the abstracted features of images. The nervous system is hardwired to perform the necessary abstractions.

Or so it appeared. Hubel and Wiesel’s results are not as clear cut today as they were in the 60s, 70s and 80s. The main issue is the failure, around 1995, of Adrian’s rate code. Secondary issues arise from the physical optics of their experiment.

Note that the experimenters were using their ears, relying heavily on the crackling loudspeaker output you can hear on the video. They were looking for big responses and they were elated to find them: A “roar of impulses,” or “machine gun” fire.

They will have perforce ignored what could be the normal traffic in impulses from the retina – a pulse or two in passage every now and then. We will surmise here that what they found were signals that represented a response to a stimulus that exceeded the normal operating range of the neuron — that a “roar of impulses” is the extreme response of an outrageously overdriven neuron.

What makes a neuron roar?
What sort of light pattern on the retina would be predicted to make a neuron roar?

An interference band. Simple edge diffraction projects alternating bands of bright light and deep darkness. These alternating bands, in turn, can play heavily upon the peculiar contrast sensitivities of the retina.
An interference pattern projected onto the retina could create a perfect storm of nerve impulses.
The bands are an almost ideal stimulus: the brightest of the bright alternating with the blackest of the black. If light interference bands were projected onto and moved across the inhibitory and stimulatory sensitivities which, as Kuffler discovered, are built into the retina, the cortical neuron should respond with a perfect storm of impulses.
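
The banding itself is textbook physical optics. Here is a minimal sketch of the intensity just past a straight edge, computed from the classical Fresnel integrals; the scaled coordinates are placeholders, and nothing in it models the cat preparation.

```python
# Intensity near a straight edge, from classical Fresnel diffraction:
#   I(w)/I0 = 0.5 * [(C(w) + 1/2)^2 + (S(w) + 1/2)^2],
# where C and S are the Fresnel integrals and w is the scaled distance from
# the geometric shadow boundary. Textbook physical optics only; the numbers
# below are placeholders, not a model of the cat's eye.
import numpy as np
from scipy.special import fresnel

w = np.linspace(-3, 6, 400)            # scaled distance from the shadow edge
S, C = fresnel(w)                      # scipy returns (S, C)
intensity = 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

# locate the alternating bright bands on the illuminated side
peaks = [w[i] for i in range(1, len(w) - 1)
         if intensity[i] > intensity[i - 1] and intensity[i] > intensity[i + 1]]
print("first bright bands at w of about", [round(p, 2) for p in peaks[:3]])
print("peak intensity relative to unobstructed light:", round(float(intensity.max()), 2))
# the first maximum overshoots to about 1.37 times the unobstructed intensity,
# then the fringes alternate bright and dark and die away from the edge
```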

So possibly a storm of impulses produced by a potent and exceptional stimulus — a moving interference band projected by edge diffraction — was assumed to be the typical, normal, everyday response from a neuron in the visual cortex.

Orientation selection by means of physical optics
Is there also a light pattern that could produce the appearance of orientation selectivity? It would require a light pattern that would change in a detectable way if the angle of presentation of some object were changed. Are there such patterns? Yes. Superimposed interference bands, one a rotor and the other a stator — a moiré on the retina — would certainly do this.

The principle is the basis of one type of angular position detector, in which the position of a bright spot moves in translation when the angle between the rotor and stator bands changes. Visualize the bright spot as the point of intersection of a pair of scissors. When one blade is rotated and the other held fixed, the bright spot shifts in position.

By analogy, if a projected interference band associated with diffraction from a line or bar or slot or slit were turned relative to a static interference band in the eye, then a bright spot where the bands intersect would move in translation across the retina. But instead of scissors with two blades, we have many bands, many intersections, a constellation of shifting bright spots.
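
The scissors geometry can be put in numbers. In the sketch below, a fixed “stator” line and a “rotor” line pivoting above it intersect at a bright spot whose position is h / tan(theta); the pivot height and the angles are arbitrary, chosen only to show how strongly translation of the spot amplifies a small change of angle.

```python
# Scissors sketch: a fixed "stator" line (the x-axis) and a "rotor" line that
# pivots about the point (0, h). Their intersection, the bright spot in the
# analogy, sits at x = h / tan(theta) and slides rapidly along the stator as
# the rotor angle changes. The pivot height and angles are arbitrary.
import math

h = 1.0                                   # pivot height above the stator (arbitrary units)

def spot_position(theta_deg):
    """x-coordinate where the rotor crosses the stator."""
    return h / math.tan(math.radians(theta_deg))

for theta in (10.0, 9.0, 5.0, 4.0):
    print(f"rotor at {theta:4.1f} deg -> spot at x = {spot_position(theta):6.2f}")

# A one-degree turn near 10 degrees moves the spot about 0.6 units; the same
# one-degree turn near 5 degrees moves it about 2.9 units. Translation of the
# spot amplifies small changes of angle, which is the basis of this kind of
# angular position detector.
```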

This could give the appearance that a single neuron with a receptive field localized at a particular point in retina was able to selectively detect a specific angle.

Here is a photo of sunspots. Let’s guess that the orientation-selective neuron was, in effect, looking straight at a sunspot on the retinal surface. Very bright spots could arise at the intersections of interference bands. There would be a constellation of spots forming runnels of light. The receptive field of an instrumented cortical neuron, however, will include only one or a few of these hotspots.

By turning the microscope slide to a certain angle, and then advancing the edge of the slide in translation, a runnel of sunspots could be brought into register with the receptive field of the neuron under study. If the slide is then turned just a little, the signal on the neuron will increase or decrease as a spot moves in and out of register with the receptive field. If the slide is turned dramatically, the sunspot will be shifted in translation to the receptive field of a different cortical neuron.

In this model, the Cartesian position on the retina of the cortical neuron’s receptive field, together with the specific patterns of the rotor and stator, predestines that neuron to respond most strongly to a particular angle of the microscope slide. This is one way it could work, so it is a place to start, but the scissor geometry is too simple.

Bear in mind that for a neuron in the visual cortex, a hotspot or sunspot — a source of extreme stimulus — is likely to be a moving, banded pattern of light and dark, not just a central bright spot. Note too that with interference bands that are converging or diverging, curved and straight, underlying the moiré pattern, directional effects will also be observed.

Boston University published online, as part of their “Lite” (that is, Light) series, a set of 17 different interactive and animated moiré patterns. Here is a screen recording of one of them. The pattern was produced by the collision of two identical zone plates.

Initially the two zone plates are overlaid, with one directly on top of the other. The animation shows what happens when a mouse, with its cursor marked by an arrow, is used to slide one zone plate relative to its twin. We can think of the translational movements of the mouse cursor as roughly analogous to the linear movement (along an angled line) of the microscope slide in the cat experiment.

Notice that this moiré pattern is created using nothing but curved lines — yet there emerges immediately an array of straight bands. These bands move along a centerline path that is dictated by the directional path of the mouse cursor. The frequency of the array of bands rapidly rises as one zone plate is displaced relative to the other. Sensitivity to changes in the angle of the cursor’s pathway is exquisite.

I think the animation shows in a succinct way what an amazing bag of optical tricks could confound an experimenter in a seemingly straightforward system.
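
The effect in the animation can also be reproduced numerically. The sketch below overlays two identical synthetic zone plates, displaces one, and reads off the dominant fringe frequency of the product; the plate constant and grid size are arbitrary. The straight bands come from the beat term, whose frequency grows in proportion to the displacement.

```python
# Numerical sketch of the zone-plate moire in the animation: two identical
# zone plates, one displaced, multiplied together (an "overlay"). The beat
# term cos(a*(|r|^2 - |r-d|^2)) = cos(a*(2 d.r - |d|^2)) is a straight
# grating whose spatial frequency grows in proportion to the displacement d,
# which is why straight bands appear and tighten as the plates slide.
# The plate constant and grid size are arbitrary choices for the sketch.
import numpy as np

N, a = 512, 0.002                                    # grid size, zone-plate constant

def zone_plate(shift_x):
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2].astype(float)
    return 0.5 * (1 + np.cos(a * ((x - shift_x) ** 2 + y ** 2)))

fixed = zone_plate(0.0)
for d in (20, 40, 80):                               # pixels of sideways displacement
    overlay = fixed * zone_plate(d)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(overlay - overlay.mean())))
    ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    freq = np.hypot(kx - N // 2, ky - N // 2) / N    # dominant fringe frequency
    print(f"displacement {d:3d} px -> dominant fringe frequency {freq:.3f} cycles/px")
# the dominant frequency rises roughly in proportion to the displacement,
# matching the tightening straight bands seen in the animation
```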

But in the cat experiment we do not have two zone plates. The cat does have an immobilized multifocal lens, which has elements of a zone plate’s geometry. The mobile, translational component is the microscope slide’s edge, which produces edge diffraction effects that project bands of light onto a curved surface. So we might try to model the experiment as an array of somewhat curved lines impinging upon the circular lines of a zone plate. From the BU collection of 17 interactive moiré patterns, this is about as close as we can come — using a collision of two concentric circle patterns that are not identical. Here is a screen recording.

We are still guessing at the appearance of a pattern on the retina of the cat’s eye. In fact, because the pattern is created by partially coherent light, it does not actually appear: It will be invisible to the experimenter. But what else can we say in general about these instantaneous patterns of light and darkness moving on the surface of the retina?

1. A pattern will have both stimulatory and inhibitory effects.
2. These could be mapped on a scale of stimulatory power to show us maxima and minima.
3. The location of the pattern’s maxima will be shifted in position if the angle of the path of the moving slide is changed. As we can observe from the animations, the patterns are extremely sensitive to path angle.
4. The maxima are associated with the pattern and with the angle of the path it is traveling — not with a nerve.

Of course, some neurons will fire like a machine gun as the pattern’s maxima happen to traverse their photoreceptors. But in this version of events, no class of neurons is specifically tuned (somehow) to respond maximally to a particular angle assigned to the edge of a microscope slide. Again, the output maxima are associated with the pattern — not with angle-selective neurons.

In this interpretation, the experimental apparatus (the edge of the microscope slide) is the mobile element. It supplies interference bands that are put in motion when the slide is moved in translation. The stator is a fixed pattern that exists in the open, carefully immobilized eye of the cat.

In the instant the experimental apparatus is taken away, the pattern on the retina would vanish. And the cortical neuron would suddenly lose its supposed talent for roaring in response to some specific angle like 41 degrees. Incidentally, the “roar” itself makes it seem likely the stimulus is abnormal, outside the typical operating range.

In sum, it is possible that Hubel and Wiesel created, with their experimental apparatus, the effects they attributed to specialized angle-detecting neurons in the visual cortex.

Pro and con
One could conjure with the idea that physical optics might help explain, rather than controvert, the notion of orientation selective neurons. Have at it. But note that the optical system is more stable and precise in the lab than it is in the world. The eye is completely immobilized and dilated with atropine in the experiment. In a free ranging animal, the eye and optical system would be in motion constantly. Thus, the “stator” pattern would also be mobilized. It is not easy to see how a precise angular measuring system could be made to work with both the mobile and the stator components in constant flux.

On the other hand, diffraction effects are inescapable, and double diffraction is the basis of imaging in the eye. The experimental set up would probably exaggerate edge diffraction effects, but there is not enough information in hand to state that the experiment produced effects inside the eye that are not physiological.

Still, the physical optical basis for detecting the angular orientation of a line, edge or bar is, according to the scissors model for example, nothing more than a bright spot or hotspot on the retina that moves in translation when the bar is rotated.

Would the brain be well advised to attribute — to a moving hotspot — a rotation in the presentation angle of some object in the world? Such as an oncoming truck? Lots of other events in the imaged world could shift a hotspot from point A to point B. Trains go by. Headlights sparkle on the highway. The world is filled with moving points of light, and banded diffraction patterns of light and darkness.

An optical illusion?
The moiré pattern interpretation suggests the prototypical “feature detector” might have been, in effect, an optical illusion. In other words it is reasonable to suspect it was the physical optics of the experiment — and not a cortical neuron specialized for the task of discerning, say, a 41-degree angle — that produced this famous result.

Light in the system is not coherent, but it is partially coherent, so it produces banding that is instantaneous rather than visible. But the bands are in there.


To sort it out one might as a first step try subtracting the cat and its brain from the experiment, just to see if the results might be reproduced physically. One could use coherent light, along with an instrumented mock-up or preparation of the eye and the rest of the optical system.

One might also simulate the whole experimental system with a computer. However, the physical optics of the cat’s eye are anything but simple. I am not sure they are sufficiently well understood to model realistically with a computer. The cat has a concentric multifocal lens and a slit pupil, though in the experiment the pupil is dilated. In the cat experiment light is projected past a barrier (edge diffraction), through an aperture containing the multifocal lens, onto the interior surface of an ellipsoid. The back wall of a cat’s eye is a curved mirror. The retina is a fiberoptic bundle. A lot can happen. In addition to optical effects arising from the tapetal mirror built into the cat’s eye, and mirroring effects in the eyes of primates that lack a tapetum, there may be singularities, modes, and other complexities.

There are experimentally confirmed Fourier optical effects. It was suggested at the time that the neurons in visual cortex were responding to features in the frequency domain, rather than to the literal images of lines and bars. This research was published by the late Russell De Valois and Karen De Valois. The De Valois lab measured responses in the brain to the presentation of more sophisticated stimuli than lines and edges. Notably they used gratings and checkerboard patterns of varying spatial frequency. The effort is recounted in their book, Spatial Vision, published by Oxford in 1990. They probably came closer than anyone else at that time to understanding the effects Hubel and Wiesel made famous.

However, this work was also done before 1995, and it too assumed the validity of Adrian’s code. Experimentally this means they were, like Hubel and Wiesel, selecting for and giving weight to responses to extreme stimuli, and not picking up on the sparse coded responses that were unknown at the time.

It is still surprisingly common, in 2024, to observe large neuroscientific enterprises grounded upon the very energetic firing of some particular neuron, and of collections or arrays of neurons. In Edgar Adrian’s world, a high frequency pulse stream was easy to interpret, but what does it mean today?

Nobody knows.

How would a neuron say: “I remember that.”
In my view, sustained rapid firing means that the neuron has been hit by a novel stimulus that is outside its normal operating range. The stimulus could be huge or it could be tiny or very negative, but it is extreme.

The machine gun neuron is not remembering or recognizing anything. It is encountering something new and excessive. A stimulus that had been encountered before should fall within the neuron’s normal operating range, because the range will have been expanded to accommodate it. Nerves adapt.

Then what sort of firing pattern could signify a “memory”, or recognition response, to a stimulus seen before? A faithfully accurate report on the magnitude of the same stimulus, when it is encountered a second time by the same neuron, could be conveyed by one or two spikes.

Now how would you detect with a simple probe a cell that “recognizes” some familiar stimulus? It appears to me to be impossible. There is nothing exceptional or egregious about the passage of one or two spikes. If indeed a pair of spikes is required, the two might not even be on the same neuron. And if one spike is enough, then which one? They all look exactly alike to a probe.

Conclusion: The movie of life…
Are there orientation selective neurons in the visual cortex? It is still possible of course. A hierarchy or network-based set of feature detectors? We probably lost feature detectors when we lost Adrian’s code. Grandmother cells? No takers here.

The idea that the brain deconstructs an image into a set of abstractions of its salient features, and shrugs off the image itself, seems less and less convincing.

Even if the physical optical problem were set aside, the Hubel and Wiesel model should be questioned. One cannot model a brain using feature-extracting neurons or nets without a basic understanding of how a neuron communicates. When we lost that understanding in the early 90s, the model went soft.

Today we are back where we started. We are free to reset the rules of the game. Pictures are okay again. We are no longer constrained by a 1960s brain model that trivialized images and imagery in favor of abstractions.

Henschen’s cortical retina, that marvelous relic of 19th century neurology, has come down to us intact into the 21st century. We are once again free to imagine, as did Salomon Henschen and Jerome K. Jerome, that the brain retains and operates upon retinal imagery – a scrapbook or a movie of life. The movie could be recorded in the frequency domain or the spatial domain or both, but it is a procession of images in any case.

Most of the jobs (like edge detection) that were supposed for decades to be done piecemeal by feature detectors can probably be accomplished all at once by Fourier filtering. Fourier filtering is also an excellent engine for a thinking and remembering machine, as van Heerden observed. Such a machine is fueled by images flowing in from the world and by images flowing out of memory.
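
As an illustration of that claim, here is a minimal sketch of edge detection done “all at once” by high-pass filtering in the frequency domain. It is ordinary image processing, offered only to show the principle; the synthetic scene and cutoff radius are arbitrary, and nothing here is proposed as a mechanism in the brain.

```python
# Edge detection "all at once" by Fourier filtering: suppress the low spatial
# frequencies of an image and transform back; what survives is dominated by
# edges. Standard image processing, shown only to illustrate the principle,
# not offered as a model of anything in the brain.
import numpy as np

# synthetic scene: a bright square on a dark background
image = np.zeros((128, 128))
image[40:90, 30:100] = 1.0

spectrum = np.fft.fftshift(np.fft.fft2(image))

# high-pass filter: zero out everything within a small radius of zero frequency
y, x = np.mgrid[-64:64, -64:64]
high_pass = np.hypot(x, y) > 8                     # cutoff radius is an arbitrary choice
edges = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * high_pass)))

# the filtered energy should now sit mainly on the square's outline
border = np.abs(edges[40, 30:100]).mean()          # along the top edge of the square
interior = np.abs(edges[65, 40:90]).mean()         # well inside the square
print(f"mean |response| on the border:   {border:.3f}")
print(f"mean |response| in the interior: {interior:.3f}")
```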

But to process an image by any means, you must have an image to operate upon. How, then, might an image be retained in the brain?
