Chapter fifteen
The brain of
the Denisovan girl

The genome of a 5-year-old girl who lived in this cave 80,000 years ago has now been successfully sequenced.
Photo by Dmitry Artyukhov. From a tiny finger bone discovered by archaeologists in the Denisova cave dig, it was possible to sequence the genome of a 5-year-old girl who lived here 80,000 years ago. A comparison of her genes with ours triggers some speculations about visual memory, the conflict between visual and verbal thinking, and the mechanics of synesthesia.



The cave of St. Denis
Denisova Cave, in the Altai mountains of southern Siberia, was named by the locals for a hermit who moved into the cave around 1750. For some reason he called himself Saint Denis – hence the name, Denisova.

The original Saint Denis was martyred in Paris in AD 250. After his head was chopped off, Saint Denis is said to have picked it up and walked ten kilometers, descending from the heights of Montmartre with his head held high, preaching a sermon all the way down. That severed “talking head” is such a recurring theme in stories about saints that scholars have given it a name. It is called a cephalophore.

The Denisova Cave in Russia has been excavated systematically by archaeologists since the early 1980s. In summer it is a lovely remote spot. From the mouth of the cave there is a commanding view of the Anui river and its valley, 90 feet below. In the valley scientists have constructed a base camp with housing, working and meeting facilities. The interior of the cave is criss-crossed with taut lines to impose a precise coordinate system on its spaces and volumes. Twenty layers, corresponding to about 300,000 years, have now been excavated. During the past 125,000 years the cave has been inhabited at various times by modern humans, by Neanderthals and, we now know, by yet another type of archaic human – the Denisovans.

Each summer, young people from surrounding villages are hired to trowel and sift in the cave for tools, bits of tools, animal and human bones and teeth and whatever else might be caught in a sieve retaining any and all objects larger than 3 to 5 mm. Objects captured in sieves are labeled to show the exact location of the find, washed, and bagged for subsequent scrutiny by scientists. Sometimes in the afternoons, the kids break their concentration and play volleyball outside the cave.

In 2008 one of the workers bagged a tiny bone. A Russian archaeologist thought the bone might have come from an early modern human. He sent a fragment of the bone for DNA sequencing to Svante Pääbo at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Svante Pääbo is famous. He pioneered the field of sequencing ancient DNA.

The bone from Denisova turned out to be the fingertip bone from a little finger of a 5-year-old girl who lived 80,000 years ago.

From the girl’s DNA we know she was not a modern human, nor was she a Neanderthal. She is representative of a previously unknown type of archaic human, perhaps a cousin of the Neanderthals, now styled as Denisovans. Two molars sifted from the same layer in the cave belonged to other Denisovans, bringing the total haul of Denisovan remains to just three objects. The two teeth are primitive and enormous.

The distal phalange of the cave girl's little finger contained a mother lode of endogenous DNA.
Other than her fingertip (the distal phalange of her little finger) there are no fossil remains of the girl. There is no associated skeleton. The fingertip bone is smaller than a baby aspirin. Her age was deduced from the immaturity of the bone. From her DNA we can say that in life she had brown hair, brown eyes, and brown skin with no freckles. And big back teeth.

Her fingertip is unique among archaic human fossils in that the ratio of her own endogenous DNA to encroaching microbial DNA is 70% to 30%. Other fossils found in temperate climates (e.g., most Neanderthal bones) are not remotely this rich in endogenous DNA; values typically range from 1% to 5%. The Denisovan tooth yields just 0.17%.

It is speculated that the fingertip bone was somehow quickly desiccated after death, and that this put a stop to the enzymatic degradation of her DNA and to microbial growth. A quick desiccation might thus account for the high concentration of endogenous DNA. I wondered if perhaps the finger dangled from and was dried out by a funeral pyre. In any event, thanks to good luck and a new sequencing technology, this extinct girl’s genetic blueprint has triumphed over time: 80,000 years after her death, her genome was transcribed from her fingertip’s lode of endogenous DNA.

By late 2012, three different sequencing efforts based on DNA from the Denisovan girl’s fingertip had been published. The first paper reported the sequence of her mitochondrial DNA. The second reported a rough draft sequence (1.9× coverage) of her nuclear DNA. The third paper, which is a technological tour de force, was published in Science online on August 31, 2012. It reports a novel sequencing technique that was essentially invented for the Denisovan girl.

The first author is Matthias Meyer, who did the work as a post-doc in the lab of Svante Pääbo. Meyer’s new sequencing method, which was patiently and painstakingly developed, is specifically designed to recover sequences from degraded ancient DNA. The cave girl's DNA was sequenced in Leipzig using an Illumina GA2x. Automatic sequencing machines are designed to operate on double-stranded DNA, but in degraded ancient samples, double strands of DNA have unraveled into single strands. Meyer’s technique starts from a library of single-stranded DNA. The finished product is truly remarkable. The sequence for the extinct Denisovan girl is comparable in coverage (quality) to sequences we can read from contemporary walking-around modern humans.

Comparative human genomics
For the first time, it became possible to compare in detail the genome of a modern human with that of an archaic human, the Denisovan girl.

It is a comparison that is bound to be full of tricks and traps, since we know a lot about ourselves but almost nothing about the Denisovan phenotype. For example, it is a huge temptation to overlay these two slightly different genomes in an effort to discover the genetic reason why we are so smart and why, by implication, the Denisovans were so dumb. A problem here is that we think we are pretty smart but we don’t actually know the Denisovans were not.

Another thing we don’t know is what happened to the Denisovan girl. She died in childhood. She may have had an inborn error of metabolism or may have been, by Denisovan standards, genetically abnormal in some other way. She is just one individual. One should not generalize — but it is of course impossible to resist generalizing.

As it turned out, her genome and the modern human genome are quite close. There are about 112,000 single nucleotide changes and 9,500 insertions and deletions that have become fixed in modern humans. In a genome of 3 billion base pairs, this is a small number.

Many single nucleotide changes (SNCs) can be ignored because they do not produce amino acid changes upon translation. Viewed in this way, it is possible to tabulate a list of what seem to be the most important genetic differences that distinguish us from the Denisovans:

• 260 single nucleotide changes (differences) that account for fixed amino acid substitutions in well-defined modern human genes
• 72 fixed single nucleotide changes that affected splice sites
• 35 single nucleotide changes that affected well-defined motifs inside regulatory regions

We are past the notion that one gene => one polypeptide, but we don’t know what to expect in the way of post-transcriptional manipulation of gene products. Alternative splicing can multiply or change gene products. There is silencing at the level of mRNA, and there are probably regulatory systems we don’t yet understand or know how to recognize. But the rhetorical direction we are following here is to pare down the list of 260 genes and narrow our focus, rather than explode it.

One way to arrive at a short list of interesting genes is to concentrate on human genes that are different from the Denisovan genes and are also known to be associated with specific diseases. The genetic change does not necessarily imply there is some new way to parry a disease process in modern humans. The disease association simply points to the organ or system in which the changed gene operates. From the paper:

“Of the 34 genes with clear associations with human diseases that carry fixed substitutions changing the encoded amino acids in present-day humans, four … affect the skin and six … affect the eye. Thus, particular aspects of the physiology of the skin and the eye may have changed recently in human history.”

The shortest list
Following another tack, the researchers surfaced a different, even shorter list. It consists of just 23 genes. These 23 could be regarded as the most decidedly modernist genes — in the sense that these genes are indeed unique to modern human beings. The original sequences for these 23 genes are almost perfectly conserved (intact, unmodified) in non-human primates, including chimps, gorillas, and orangutans. The genes are also conserved in the presumably somewhat apelike Denisovan girl. But in modern humans the old ape sequences have been decisively modified, in each of the 23 genes, to produce a new gene and a new gene product. From the paper:

“We note that among the 23 most conserved positions affected by amino acid changes (primate conservation score ≥ 0.95), eight affect genes that are associated with brain function or nervous system development (NOVA1, SLITRK1, KATNA1, LUZP1, ARHGAP32, ADSL, HTR2B, CNTNAP2). Four of these are involved in axonal and dendritic growth (SLITRK1, KATNA1) and synaptic transmission (ARHGAP32, HTR2B) and two have been implicated in autism (ADSL, CNTNAP2). CNTNAP2 is also associated with susceptibility to language disorders (27) and is particularly noteworthy as it is one of the few genes known to be regulated by FOXP2, a transcription factor involved in language and speech development as well as synaptic plasticity (28). It is thus tempting to speculate that crucial aspects of synaptic transmission may have changed in modern humans.”

These shell beads were perforated 75K to 100K years ago.
Here are some things that are thought to distinguish us from apes: We talk. We learn to watch each other’s eyes for clues about what someone else may be thinking or feeling. We empathize. We think fast. We make art, use symbols, and adorn ourselves with jewelry. (These shell beads, possibly from a necklace, were found by archeologists in the Blombos cave in South Africa. The shells were perforated with a tool 75,000 to 100,000 years ago.)

Could she talk?
There is a subtext here: Maybe one gene, or an ensemble of genes picked out from among these 23 utterly modern human genes, has made us smarter, more talkative, and more socially adroit and bejeweled than apes and archaic humans. In other words, maybe now we can draw lines between the characteristics that make us modern humans and the genes that make us modern humans.

The key to human behavioral modernity — our giant step up from apehood — was human language. In comparing the Denisovan girl’s genome with that of modern humans, an implicit and persisting goal is the discovery of genetic substitutions in modern humans that somehow started us talking.

A corollary assumption is that Denisovans, Neanderthals and early modern humans could not talk or could only just barely manage it.

FOXP2 and the Upper Paleolithic Revolution
These ideas and assumptions about language and the timeline of human progress have a surprisingly short history. In the late 1990s and early 2000s, it was strongly argued that culturally modern humans emerged, talking, as recently as 50,000 years ago. By that time, Homo sapiens had been around for about 150,000 years but he had not accomplished much. He made and used tools and weapons but did not progress, over the span of 150,000 years, to newer and better tools and weapons. He seemed to have been stuck, repeating himself, as though he were unable to invent.

Behavioral modernity, when it arrived, brought excellent innovations in tools and weapons, the use and recognition of symbols (signaling a facility with language), jewelry and impressive art. According to this narrative the sudden success of modern humans led to their energetic expansion, about 40,000 years ago — out of Africa, into Europe and Asia, the South Pacific and ultimately across the world and into the sky.

The hypothesis of an abrupt human breakthrough to behavioral modernity was styled in the literature as the Upper Paleolithic Revolution. Probably its most vigorous and convincing proponent was the archaeologist Richard Klein at Stanford. For a snapshot impression of Klein and his thinking in 2003, when the concept of the Upper Paleolithic Revolution may have been at its zenith, see this archived article from Stanford Magazine.

It seemed to Klein the revolution in human progress could have come about so suddenly because of an underlying biological change. One change that archeologists could read in ancient bones was the lowering of the modern human larynx. This occurred about 50,000 years ago. The lowered larynx was thought to have facilitated human speech.

In 2001 a defect in a single gene, FOXP2, was found to be responsible for a serious familial language disorder. At that time it was the first and only gene clearly associated with the human gift for language. In 2001 it was an easy jump to the idea that a changed FOXP2 gene in modern humans triggered the revolution. A mutation in the FOXP2 gene, an abrupt biological shift, might have enabled Homo sapiens to start talking. From language, human success could have followed immediately. Language enabled us to create, share and steadily expand a common store of modern human wisdom, lore and technology.

The pursuit of FOXP2
With the revolution in mind, let’s fast forward to the 2012 Science paper on the genome of the Denisovan girl, and specifically to the discussion of how modern humans are different from this archaic girl and from her apelike cousins and ancestors.

It seems meaningful that eight of the 23 changed genes in modern humans are known to affect the nervous system, including the brain. But it is of course unclear how our modern human nervous system ultimately differs, in operation, from those of the archaic humans and apes.

In the paper there is a rhetorical bridge leading to FOXP2, which is still famously associated with language and vocalization. It orchestrates neurite growth in development. However, at the level of protein, the FOXP2 transcription factor is identical in modern humans, Neanderthals, and Denisovans.

This made it appear that any hoped-for alteration in the FOXP2 protein — a changeover leading to modern human success perhaps — wasn’t there. But in a subsequent paper, Tomislav Maricic et al. of Pääbo’s group reported that the typical modern human FOXP2 gene does in fact differ from that of archaic humans and apes. There is a cryptic one-base change hidden in an intron, within a binding site for a transcription factor that modulates FOXP2 expression.

However, this modernistic mutation is not found in 10% of contemporary modern humans in Africa. Their FOXP2 intronic binding sequence is identical to that found in archaic humans and apes. They can talk, so this subtle mutation does not, after all, augur for or help explain a modern human breakthrough to language.

FOXP2 comes up again in a March 2016 Science paper which reports the genome sequencing of 35 living Melanesians. Both Denisovan DNA and Neanderthal DNA survive in Melanesians. The Denisovan component varies between 1.9% and 3.49%. But the paper describes “deserts” of archaic DNA in these individuals. The deserts are long passages in the genome where Denisovan and Neanderthal DNA are not found at all. Deserts are thus places where the differences between the archaic humans and ourselves are most pronounced. One might imagine that the archaic DNA had been depleted — culled and shucked in favor of modern human DNA.

In a stretch of DNA on chromosome 7, for example, no Denisovan or Neanderthal DNA appears. The researchers measured the enrichment of modern human genes in this stretch. “Enrichment” has a special meaning in this context but one would not go too far wrong to take it literally. The list of enriched genes includes FOXP2. It also includes two or three genes associated with autism. The implication is that genes associated with the development of modern human language — and with things that can go wrong with the development of modern human language — are to be found in a stretch of DNA on chromosome 7 that is unique to modern human individuals. So the idea that FOXP2, the “language gene” or “grammar gene” somehow brought the revolution persists. But did a revolution actually occur?

… a revolution that never was?
Today many and perhaps most archeologists believe the Upper Paleolithic Revolution never happened. What at first seemed to have been a revolution that occurred as recently as 50,000 years ago has been smeared out across the axis of past time by discoveries of artifacts signaling human modernity from sites dated deeper and deeper into antiquity. A nice cultural distinction between Neanderthals and modern humans has also been smeared out: protein analyses show that Neanderthals in France made jewelry.

And the lowering of the larynx 50,000 years ago cannot be said, after all, to have enabled or facilitated human speech. Computer modeling of the hyoid bone of Neanderthals suggests they could have spoken just as readily as any modern human. Neanderthals were perfectly able to talk and in the view of some researchers, they probably did talk. If so then their near cousins, the Denisovans, probably talked as well. In this view human language evolved over a long period, perhaps a million years.

Language is complex. It does seem realistic to imagine that it evolved over the span of a million years — not suddenly, in just 50,000 years or less.

FOXP2 is no longer the only gene known to be associated with human language. A few others have been discovered over the years, but FOXP2 is still one of a small set.

FOXP2 is a transcription factor. It is well conserved in mammals. In a mouse embryo, the mouse version of FOXP2 orchestrates the transcription of 264 other genes. Sometimes it enhances transcription but often it suppresses transcription.

The 264 targeted embryonic genes build and configure neurons. They affect the length and branching of neurites. It’s a tantalizing process, but it seems doubtful we will soon discover in this complicated type of gene network a simple fork in the road that led to our gift for language.

So where are we? Did human language evolve over the span of a million years? Or did it tumble into place all of a sudden between 100,000 and 50,000 years ago, and thus launch a revolution in modern human progress and prowess?

Which?

Well, we don’t necessarily have to guess which. We can guess both, as follows:

Two conflicting modes of communication. One wins.
Here is a possibility we will explore later in this chapter: Suppose archaic humans, early modern humans and our common ancestor relied upon a mode of communication that was based on vocalizing, visualizing and, above all, synesthesia. It was not a verbal, grammar-based language but it was highly evolved. Imagine that a carefully vocalized sound made by one human was directly transmuted, in a nearby human listener’s brain, into a remembered image or a remembered scent or both.

Without using words like “Here comes a bear” — using only a vocalized sound like a hoot or a yelp — the image and scent of an oncoming bear was made instantly obvious to every human being within earshot.

Further suppose this old, rather animalistic mode of early human communication worked beautifully but it inhibited the development of a newer, slowly emerging mode of communication: a verbal and grammatical language. A conflict between the old and new modes of communication arose because both depended upon vocalization.

Something about the new second language based on words conferred an advantage. It could tell a more complete story. Instead of the warning cry, a yelp that conveyed the image and scent of a bear, verbal language could convey a detailed idea and position it in space and time: “A mother bear and her two cubs were sighted across the river yesterday morning and they were drifting this way.” Maybe verbal language was much quieter — useful on a hunt. Maybe it was sexier. Evolutionary pressure began to strongly favor verbal language over the old, established synesthetic mode of communication.

According to this hypothesis, genetic changes turned off the older mode of vocal communication in modern humans. The changes gradually suppressed or excised the brain machinery that supported the old, synesthetic mode of communication. Once the older, rival mode of vocal communication was silenced, modern verbal language was quickly perfected. A revolution in human progress did in fact ensue for late modern humans. It is an open question when the switchover may have occurred and how long it took to accomplish it. But it was revolutionary.

In short, one mode of vocal communication, which we call language, supplanted another, older mode of vocal communication for which we have no word. The old, synesthetic mode of communication was the template against which the new mode, verbal language, was created.

The creation of verbal language may have taken a long time but the switchover from one mode to the other could have been abrupt. The genetic changes that made the changeover happen constitute a developmental OFF switch.

Per this scheme, FOXP2 might function broadly in modern humans as an OFF switch for an archaic and obsolete brain configuration. Once the old brain was blocked from development the old mode of vocalized, synesthetic communication was deeply suppressed. The new talking brain, with its ability to reason in words and communicate with words, suddenly worked better. Ultimately it worked brilliantly.

Obviously this is speculation.

Autism
At this point, what jumps out from the listing of 23 modernized human genes, and again from the report on modern human gene enrichment on the modern human chromosome 7, is the curious implication of a few genes associated with autism.

Autism is an umbrella word. Autism is such a broad diagnosis that it can include people with high IQs and people with mental retardation. People with autism can be chatty or silent, affectionate or cold, methodical or disorganized. Until 2013 there were five formally recognized forms of autism. This list was reduced to three broader type definitions in 2013. Because autism is a diagnosis it is regarded as an illness, disorder or condition. But it can also be understood as a gift.

Many people who have been diagnosed as autistic have an astonishing gift for visualization and pictorial thinking. It has been suggested and we will urge here that this gift is an atavism — a re-expression or resurgence of an ancient style of thinking. Following is some speculation about this possibility, a gathering of cards that have now landed face up on the table.

The Aboriginal mélange
The UCLA sociologist Warren TenHouten spent many years studying the culture and uncommon intelligence of Australian aborigines. Evidently the aborigines have very strong visual pattern recognition gifts. One of TenHouten’s books, Time and Society, also includes a succinct history of the aborigines, which I recommend.

Modern humans in prehistoric Europe are thought to have interbred with the Neanderthals. As much as 5% of the DNA in the genomes of contemporary Europeans is Neanderthal. My own DNA is 2.9% Neanderthal. The people we now characterize as aborigines worked their way east from Europe and carried with them this typical Neanderthal fraction of up to 5%.

Australian aborigines and Melanesians are descended from modern human contemporaries of the Denisovans. Their ancestors perhaps interbred with the Denisovans. Up to 6% of aboriginal DNA is like (highly homologous to) that of the Denisovan girl.

However, it is not clear who may have interbred with whom. Another intriguing hypothesis has both Denisovans and the aborigines acquiring their “Denisovan” DNA from homo erectus. Note as well that the convenient West=Neanderthal and East=Denisovan story is not solid, since Denisovan-like sequences were recently identified in an ancient fossil from Spain, Europe’s far west.

The archaic Denisovan DNA sequences are unquestionably found where they are found in modern human populations. Exactly how they got there remains a puzzle and there are various competing hypotheses.

In any event, we have definitely learned that by whatever pathway, direct or circuitous, up to 6% of aboriginal DNA is like that of the Denisovans.

We have also learned that four or five genes among the hundreds that have been found to be associated with autism in modern humans may have somehow figured into the story of what distinguishes modern humans from the Denisovans. Aborigines have a gift for visual thinking. Such a gift is also common among autistic people. There are no logical or factual bridges here, but there are some intriguing associations.

TenHouten studied the uncommon intelligence of aborigines.

Aborigines have a quick intelligence, but TenHouten reported that it is not the typical, verbal, “left brain” intelligence that schoolteachers praise and reward. Instead it is “right brain” or visual intelligence. Incidentally, Australian aborigines are frequently born blond. Their blond hair tends to turn brown as they grow up.

TenHouten was apparently working in an epoch when strict right brain and left brain lateralization was still an accepted idea. It was based on studies of split-brain patients conducted in the 1960s. A Nobel was awarded to Roger Wolcott Sperry for this work in 1981.

Today, however, the partitioning of the normal brain into lateralized skill sets is regarded as a myth. The skill sets are not mythical. There are indeed visual talents and verbal talents, and they are distinct from each other. But fMRI scans suggest there is no neat left/right anatomical divide between visual and verbal “brains”. As conversational shorthand, however, left brain and right brain are still useful and widely used.

Of the various capabilities that were once attributed to “right brains”, I will pick out and emphasize here visual pattern recognition. Interestingly, it was thought the right brain tended to fix upon patterns as outlines, and to ignore details within. Wherever in the brain this re-imaging process happens, it produces exactly what we would expect of an image that has been Fourier filtered to select for high spatial frequencies.
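
To make the idea concrete, here is a minimal sketch of what “Fourier filtering to select for high spatial frequencies” amounts to in practice. The toy image, the cutoff radius and the helper function highpass_fourier are inventions of this sketch, written with numpy; nothing here comes from the studies mentioned above.

    import numpy as np

    def highpass_fourier(image, cutoff=8):
        # Take the 2-D Fourier transform, zero out the low spatial frequencies
        # near the center of the spectrum, and transform back.  What survives
        # are edges and fine detail -- the "outline" emphasis described above.
        F = np.fft.fftshift(np.fft.fft2(image))
        rows, cols = image.shape
        y, x = np.ogrid[:rows, :cols]
        r = np.hypot(y - rows / 2, x - cols / 2)   # distance from the DC term
        F[r < cutoff] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    # A toy "scene": a bright disc on a dark background.
    yy, xx = np.mgrid[:128, :128]
    scene = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)

    edges = highpass_fourier(scene)
    # The filtered image is close to zero inside and outside the disc and
    # largest along its rim, i.e. it keeps the outline and drops the fill.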

Run a Google search on aboriginal art. Notice the strong edge outlining, and imagery that could be interpreted as diffraction patterns. Also note the rather typical mélange of literal, spatial images, as for example, outlines of animals — with what appear to be diffraction patterns. These same characteristics can often be picked out in Picasso’s work. Maybe these are mannerisms. But it is also possible these mixed spatial domain and frequency domain images can tell us something about how the brain reads the retina.

Thinking in pictures
The exceptional visual talents of the aborigines have become interesting in a new way. See this excerpt from Thinking in Pictures by Dr. Temple Grandin, who is autistic. Here are some remarks quoted from her book:

“I THINK IN PICTURES. Words are like a second language to me. I translate both spoken and written words into full-color movies, complete with sound, which run like a VCR tape in my head. When somebody speaks to me, his words are instantly translated into pictures. Language-based thinkers often find this phenomenon difficult to understand…

“One of the most profound mysteries of autism has been the remarkable ability of most autistic people to excel at visual spatial skills while performing so poorly at verbal skills. When I was a child and a teenager, I thought everybody thought in pictures. I had no idea that my thought processes were different.”

“I create new images all the time by taking many little parts of images I have in the video library in my imagination and piecing them together. I have video memories of every item I’ve ever worked with….”

“My own thought patterns are similar to those described by Alexander Luria in The Mind of a Mnemonist. This book describes a man who worked as a newspaper reporter and could perform amazing feats of memory. Like me, the mnemonist had a visual image for everything he had heard or read.”

In a later book, The Autistic Brain, Grandin remarks that her own subjective experience of autism is not necessarily a depiction of autistic thought processes in general. Specifically, her gift for “thinking in pictures”, and her sense of memory as a movie or a video library, is common to some but not all autistics.

It is interesting, however, that she identifies her ability to think in pictures with that of Alexander Luria’s patient, the mnemonist, Solomon Shereshevsky.


Solomon V. Shereshevsky, 1896-1958. This photo is a frame grab from a 2007 documentary film produced for Russian television, Zagadky pamyati [Memory mysteries]. For the filmmakers, Lyudmila Malkhozova and Dmitry Grachev, the Shereshevsky family made available photos and new biographical details about the celebrated mnemonist. A biographical sketch of Shereshevsky was published in 2013, in English, in a journal article in Cortex by Luciano Mecacci.

Was “the mnemonist” autistic?
Luria, a neuropsychologist, began studying Shereshevsky in the 1920s. The word autism had been coined in 1910 by the Swiss psychiatrist Eugen Bleuler. (Bleuler also coined the term schizophrenia.) In the 1920s autism had a narrowly defined, fairly precise meaning. It meant selfism, i.e. a self-absorbed or self-contained behavior observed by Bleuler in adult schizophrenic patients.

In the 1920s, Luria could not have had anything like our concept of autism in its broad, mutable and rather cloudy 21st century sense as a spectrum of behaviors. So Shereshevsky might or might not have fallen “on the spectrum” of autism.

Here is a link to a chapter from a 2013 book, Recent Advances in Autism Spectrum Disorders – Volume I. The author, Miguel Ángel Romero-Munguía, concludes that per the diagnostic criteria of our own epoch, Shereshevsky was most likely autistic. But there is obviously no way, in the 1920s, that Luria could have diagnosed Shereshevsky as autistic in the modern sense of the word. In fact, Luria diagnosed Shereshevsky as a 5-fold synesthete.

It was synesthesia that seemed to hold the secret of his incredible powers of memory. His memory was visual — photographic or filmic — but it was modified, painted with special cues one might say, thanks to his synesthetic gift and to his self-taught techniques as a mnemonist.

In 2013, Simon Baron-Cohen, Donielle Johnson et al. reported that synesthesia is much more common among autistics than among neurotypicals. Synesthesia, which has about a 4% rate of occurrence in the general population, had an incidence almost three times higher in an autistic patient population. The result suggests autism and synesthesia are somehow related or are different aspects of the same biological story.

In both conditions, autism and synesthesia, one can posit a sensory system with its gates left wide open.

Autistic children confront a sensory overload. Often they cannot isolate or quickly focus upon an important input as something distinct from its immensely detailed surround — the total world of a moment received as incoming pixels, sounds, smells, tastes and tactile sensations.

The study raises the hypothesis that savantism may be more likely in individuals who are both autistic and synesthetes. Daniel Tammet, who has both Asperger syndrome (autism) and synesthesia, is a famous contemporary memory savant; it was Tammet who inspired this hypothesis. He memorized pi to 22,514 decimal places.

Luria’s file on Shereshevsky is still fully preserved in Luria’s archive at his dacha at Svistucha, a village 50 miles north of Moscow. Perhaps a scholarly reading of this material would fill in the evidence that Shereshevsky was both autistic and a synesthete.

We already know, however, that the most important thing to Shereshevsky himself was his visual, filmic, stream of memory.

The gifts of both Shereshevsky and Tammet are strongly questioned by skeptics on the net. The idea is that these two mnemonists’ feats of memory could be explained away as the results of applying well understood mnemonic techniques — and that there is no underlying gift or special talent or unusual brain function. In this view, Luria was naive and Shereshevsky was a trickster.

I am inclined to doubt the doubters. The mnemonists’ brains really do seem to be sculpted in ways that are not typical. One cannot ignore or set aside the fact of synesthesia. In addition there are incredible displays of autistic memory power that are purely visual, and are thus difficult to discount as products of professional memorization techniques.

The human camera: Stephen Wiltshire
A work of Stephen Wiltshire, who was characterized by the press as a human camera.
The most famous autistic artist is Stephen Wiltshire. He specializes in cityscapes. Wiltshire has been characterized in the press as a human camera. He has the ability to draw astonishingly detailed images of scenes he has seen only briefly and only once. He was mute as a child and did not fully gain speech until he was nine.

A facility for thinking in pictures is not uncommonly associated with genius. John von Neumann had this gift. As a child he could scan and then recite from memory pages from the Budapest phone directory. It seemed to observers he was able to simply read aloud from his mental image of those pages.

It has been suggested that eidetic imagery is a gift many and perhaps most children have — and then lose as their verbal skills become seated and then dominant. Maybe it is an instance of ontogeny recapitulating phylogeny. It does happen. Maybe modern humans, like growing children, lost their strong pictorial gifts when they started talking.

Von Neumann’s biographers report that among people who knew him, some people thought he had a photographic memory and some thought he did not. He relished arithmetic (many mathematicians do not) and loved mental calculating. I suspect he probably did retain into adulthood a very literal, visually accessible 2D scratchpad in his head.

Picasso retained his pictorial thinking gift throughout his life. He was able to grid a canvas and then “fill in” picture elements in squares that were remote from each other. When he finished filling in all the squares, the picture was an integrated, coherent whole. Picasso may have been another example of a human camera — but it might be more apt to suggest he was a human projection machine. He could mentally project an image onto a canvas — and then trace over with a pencil this image that only he could see.

Picasso of course thought in pictures but surprisingly, so did an amateur painter whose professions were writing, politics and warfare — Winston Churchill.

Two different brains, two different development programs
A possibility begins to surface. Imagine two distinct brains, one archaic and one modern. The two brains developed in two different evolutionary settings. The archaic human visual brain is pre-verbal; it was perfected, before language dominated thought, for thinking in pictures. The “verbal” human brain is modern, perfected by and in parallel with the development of human language. This seems to have required the suppression of ancestral visual thinking. Instances of modern humans who exhibit archaic brain skills are, at least in part, atavisms.

It is possible language emerged less than 100,000 years ago, and some authors suggest just 50,000 years ago. If so we are walking around with both brains encoded in our chromosomes, archaic and modern, along with two distinctly different brain building programs. In most children the modern, talking-brain program runs to completion without incident and the archaic picture-brain program is much abbreviated or perhaps never launched. Occasionally, however, both brain building programs are triggered, and they fight for control. The outcomes of the conflict, in this hypothesis, could include autism, genius, photographic memory and synesthesia.

It almost appears that the hopelessly unfashionable and vulgarized 20th century notion of a dual brain — Right and Left, Visual and Verbal — has popped up once again. But here are some differences.

The familiar left and right brains were created, a few decades ago, with a knife slice through the corpus callosum. The visual and verbal brains we are now sketching were created in two different evolutionary epochs and were thus shaped by two different types of evolutionary pressure. These two brains, one verbal and one visual, occupy the same space inside the skull. Perhaps they may compete for this space, perhaps they are amalgamated within it. In any event, these two brains are not neatly mapped into right and left hemispheres, as were those in the prevailing model of the 1970s and 1980s.

The brain learns to listen
The eye evolved long before the ear. It is thought that the ear evolved in amphibians from the lateral line organ of fish, which uses hair cells. But there are divergent views that suggest the ear evolved independently in amphibians. One suggestion is that the most rudimentary ear was a supernumerary eye sensitive in the infrared (heat) range, since hair cells have this sensitivity.

In any event there are many familiar touchstones from the visual nervous system in the auditory nervous system, including ribbon synapses. One can juxtapose an eye and an ear and point out the analogous components. For example, the retina is broadly analogous to the organ of Corti. The optic nerve is analogous to the auditory nerve. The photoreceptors are broadly analogous to hair cells.

But what about the tapering basilar membrane, stretched inside of the cochlea? What part of the visual system, if any, does this component resemble?
The basilar membrane, adapted from Introduction to Cochlear Mechanics.

The basilar membrane is functionally analogous to the lens of the eye — which produces, at its back focal plane, a Fourier pattern.
Double diffraction by a lens
Both structures, the lens and the membrane, are capable of shifting incoming signals — images and sounds, respectively — into the frequency domain.

Let’s emphasize this: On the front end of two different sensory systems, vision and hearing, are these two very different structures, the back focal plane of the lens and the basilar membrane. Yet they both work toward the same end product — a Fourier pattern. One should read this as a strong clue that the brain is using Fourier filtering and processing.
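
As a small illustration of what “shifting a signal into the frequency domain” means, the sketch below takes a half-second, two-tone sound and computes its Fourier magnitude spectrum with numpy. The tone frequencies and sample rate are arbitrary choices for the example; the point is simply that the components of the sound fall out as separate peaks, the same frequency-sorted representation attributed here to the basilar membrane and to the back focal plane of a lens.

    import numpy as np

    fs = 8000                                  # sample rate in Hz (illustrative)
    t = np.arange(0, 0.5, 1 / fs)              # half a second of "sound"
    sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)

    spectrum = np.abs(np.fft.rfft(sound))      # the frequency-domain pattern
    freqs = np.fft.rfftfreq(len(sound), 1 / fs)

    # The two component tones show up as the two largest peaks.
    top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
    print(top_two)                             # approximately [440.0, 1320.0]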

The concept of the brain as a Fourier processor glides in and out of fashion and is a recurring theme in this book. In biology it is largely a forgotten promise, but AI researchers are actively experimenting with Fourier processing in machine vision and in speech recognition systems. The idea that the brain could be modeled as a Fourier machine has a half-century history. It was originally suggested by Pieter Jacobus van Heerden in April, 1963, in two back-to-back papers in Applied Optics.

The eye’s lens produces a Fourier pattern automatically at the speed of light. This is simply an inherent property of the lens — it is effortless. To arrive at a Fourier distribution the ear relies on a simple but highly mechanized sorting process, as shown here:

The basilar membrane of the Organ of Corti with its associated neural circuitry is a machine for frequency analysis.

This inspired animation (© 1997 Howard Hughes Medical Institute) unrolls the basilar membrane to show that it is narrow at one end (the base) and broad at the other (the apex, nearest our point of view). Turn on the sound. The animation shows how the membrane responds to Bach’s most famous Toccata. Notice that high frequency tones resonate strongly at the narrow, base end of the membrane, and low frequency sounds resonate strongly at the apex, or broad end. Thus, the component frequencies of the incoming sound waves can be physically separated and separately measured for intensity. Frequency peaks can be mapped against the length of the membrane, as shown here.

Tone frequency can be mapped onto the length of the membrane.
Movements of the basilar membrane are detected by 16,000 hair cells positioned along its length. Their responses trigger firing in nerve fibers that communicate information to the brain.

The wedge shape of the membrane establishes a cochlear place code. The membrane is a yardstick. If a particular clump of hair cells is detecting a vigorous movement of the membrane at a point along its length, it means the incoming tone is resonant at a specific frequency. A neuron mounted at that point is thus a frequency specific reporter.

The basilar membrane is a long, wedge shaped tabletop for sorting incoming sound into its constituent frequencies, using resonance length as a criterion.
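
The cochlear place code can even be written down as a formula. Greenwood’s empirical place-frequency function, sketched below with the constants commonly quoted for the human cochlea, maps a position along the basilar membrane to the frequency that resonates there. The helper function name is invented here, and the numbers should be read as an illustrative approximation, not as part of the animation or the sources cited above.

    import numpy as np

    def greenwood_frequency(x):
        # Greenwood's place-frequency function for the human cochlea.
        # x is the position along the basilar membrane as a fraction of its
        # length, measured from the apex (0.0) to the base (1.0).
        # A, a and k are the values commonly quoted for humans.
        A, a, k = 165.4, 2.1, 0.88
        return A * (10 ** (a * x) - k)

    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"{frac:.2f} of the way from apex to base: "
              f"{greenwood_frequency(frac):8.0f} Hz")
    # Low frequencies map to the broad apical end, high frequencies to the
    # narrow base -- roughly 20 Hz at the apex and 20,000 Hz at the base.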

The animation can be reviewed here along with a helpful voice-over commentary by A. James Hudspeth at Rockefeller.

On the Hudspeth Lab web site it is remarked that the animation exaggerates the process in order to make clear what is happening. The actual movements of the membrane are much faster than shown in the animation, since the membrane oscillates at the frequencies of the tones — hundreds or thousands of cycles per second. The amplitudes of the movements are also far smaller, of atomic dimensions.

How fast can this sensor operate? How fast can the brain’s auditory processing system absorb and analyze incoming sounds? The near realtime performance of the brain in transforming streams of words into meaning suggests the system is very fast indeed — probably much faster than the textbook neuron allows. A multichannel system seems called for, using either some form of parallel processing or the incremental analog neuron suggested in Chapter 2.

Speech recognition
It seems a reasonable guess that the visual memory system that originally evolved from the eye was copied and adapted, in the brain, to form an auditory memory. But this is not quite enough. In modern humans, we require a memory for words. And something more. Dogs and cats know a few words. But they do not use words to communicate. And they don’t think in words.

Language is often cited as the turning point in human progress, and it must have been a fairly recent shift or tickover point in evolutionary history. Archaeologists discuss the evolution of the larynx as a process that facilitated language. But there was undoubtedly a parallel or anticipatory change in the brain. Maybe this required the further modification (or perhaps to some degree expropriation) of visual memory machinery in order to create, in effect, a word processor.

What the Denisovan girl’s genes might be able to show us, one day, is a point along this evolutionary continuum between thinking in pictures — and thinking in words. It seems inescapable that these two processes are competitive.

The early mammalian brain was a visual thinking machine. A mixed and somewhat conflicted visual and verbal thinking machine gradually emerged from it in humans. Perhaps the visual memory for objects was modified to create a memory for sounds and then words. It might be that edge detection in the eye was a technical prototype for the detection, by human beings, of the edges between their words.

In the Fourier visual memory model we have discussed in earlier chapters, there is no great distance between the memory process and the thought process. But from a data processing point of view, words are ever so much smaller than pictures. An image of the sun is a hog for computing resources compared with the tiny one syllable word, “sun.”
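
A back-of-envelope comparison makes the disparity vivid. The image resolution below is an arbitrary assumption, chosen only for illustration:

    # Assume an arbitrary 256 x 256 grayscale image at one byte per pixel,
    # versus the three ASCII characters of the word "sun".
    image_bytes = 256 * 256
    word_bytes = len("sun".encode("ascii"))
    print(image_bytes, "bytes vs", word_bytes, "bytes")
    print("the picture is", image_bytes // word_bytes, "times bigger")   # ~21,845x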

Thinking in words is probably a more compact process than thinking in pictures. The payoff and evolutionary advantage for verbal reasoning is quicker thinking and a form of communication that can be whispered or shouted.

A problem we are suggesting here is that verbal reasoning may somehow step on or strongly inhibit the much older and probably more highly evolved ability to think in pictures.

One explanation might be that the verbal memory and reasoning machinery was improvised in humans, maybe 50,000 years ago or less, by simply taking over for word processing some core component of the visual memory machine. The visual memory still works in talkative humans, more or less, but it is no longer up to the task of “thinking in pictures.”

The essential first step
Speech recognition in computers shows us what a logical computer designer might do in order to create, starting from scratch, a receiver and transmitter for words. It is a hellishly complicated business, involving cutting the incoming words into phonemes, and then using statistical techniques and context and brute force computing power to achieve rapid word recognition.

But the very first step, so obvious that it almost goes unmentioned, is digitizing. An analog audio signal is fed into an A-to-D converter in order to give the computer a digital data stream it can work with.
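
A crude sketch of that first step, sampling and quantizing an “analog” waveform, is given below. The function analog_to_digital is invented for this sketch, and the sample rate, bit depth and test tone are made-up values; this is not the front end of any particular recognizer.

    import numpy as np

    def analog_to_digital(analog_fn, duration=0.01, fs=16000):
        # Crude model of an A-to-D converter: sample the "analog" waveform at
        # discrete instants, then quantize each sample to a signed 16-bit
        # integer.  Sample rate and bit depth are illustrative choices.
        t = np.arange(0, duration, 1 / fs)
        samples = analog_fn(t)
        return np.clip(np.round(samples * 32767), -32768, 32767).astype(np.int16)

    # A 300 Hz "analog" tone standing in for a voice signal.
    pcm = analog_to_digital(lambda t: 0.8 * np.sin(2 * np.pi * 300 * t))
    print(len(pcm), "samples,", pcm.dtype)     # the digital stream a recognizer works on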

We might imagine that if the human brain’s visual memory computer was modified by evolution to recognize and reason in words, nature also had to take an “obvious first step”.

It would not be digitizing, since the brain is an analog computer.

Audio signal must first be converted to a Fourier pattern.
To make use of the brain’s visual reasoning machinery, the incoming sound signals from the organs of Corti would have to be transformed into patterns that look like those produced by the retinas. This means quite specifically that sound signals would have to be converted and portrayed as images in the frequency domain.

In the eye, the Fourier conversion is accomplished by a lens. In the ear the basilar membrane of the Organ of Corti, with its associated neural circuitry, is a machine for distributing incoming sounds into the frequency domain.

This critical conversion step is the original, visual brain’s equivalent of a digital computer’s A-to-D converter. Maybe we should call it an A-to-F converter, where F=Fourier pattern.
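
Here is one way to picture such a hypothetical “A-to-F converter”: chop the incoming sound into short frames and keep the Fourier magnitudes of each frame, so the sound becomes a two-dimensional, image-like array (in engineering terms, a spectrogram). The function name a_to_f and the frame and hop sizes are inventions of this sketch.

    import numpy as np

    def a_to_f(samples, frame=256, hop=128):
        # Hypothetical "A-to-F converter": cut the sound into short, windowed
        # frames and keep the Fourier magnitudes of each frame.  The result is
        # a 2-D, image-like array (time along one axis, frequency along the
        # other) -- effectively a spectrogram.  Frame and hop sizes are
        # arbitrary choices for this sketch.
        window = np.hanning(frame)
        frames = [samples[i:i + frame] * window
                  for i in range(0, len(samples) - frame, hop)]
        return np.abs(np.fft.rfft(np.stack(frames), axis=1))

    fs = 8000
    t = np.arange(0, 0.5, 1 / fs)
    chirp = np.sin(2 * np.pi * (200 + 800 * t) * t)    # a rising tone
    picture_of_sound = a_to_f(chirp)
    print(picture_of_sound.shape)                      # (30, 129): a small "image"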

Vision is the senior sense. The much newer ability to hear probably mimicked the pattern established by its older sibling. Incoming sound must be depicted as a Fourier pattern in order to gain access to a neuronal memory machine originally evolved for thinking in pictures.

Visual memory mechanism could be adapted for speech recognition.

We can guess that the machinery already in place for image recognition was ported over — tinkered into place — and made to work. This suggests speech recognition in the brain happens in the frequency domain. In effect, sounds become images in the frequency domain, that is, images of Fourier patterns. These are tested against and ultimately matched to sonic Fourier images cycled out of memory. Multiple comparators we have styled as Fourier flashlights or projectors are set up and constantly running short film strips — dictionaries this time — in parallel. The system is massively parallel, multitasking and, as in the visual system, memory anticipates reality. Incoming words are recognized almost instantaneously.
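
The comparator idea can be made concrete with a toy matcher: an incoming Fourier-magnitude pattern is scored against a small bank of stored patterns, and the best-scoring entry wins. This is only a stand-in for the “Fourier flashlights” metaphor; the helper best_match, the random “dictionary” and the use of normalized correlation as the similarity measure are all assumptions of the sketch, not anything specified by the sources.

    import numpy as np

    def best_match(incoming, memory_bank):
        # Score one incoming Fourier-magnitude pattern against every stored
        # pattern by normalized correlation and return the index of the winner.
        # A crude, serial stand-in for the parallel "Fourier flashlights".
        def unit(v):
            return v / (np.linalg.norm(v) + 1e-12)
        scores = [float(np.dot(unit(incoming), unit(m))) for m in memory_bank]
        return int(np.argmax(scores)), scores

    rng = np.random.default_rng(0)
    # A toy "dictionary": Fourier magnitudes of three different stored sounds.
    dictionary = [np.abs(np.fft.rfft(rng.standard_normal(512))) for _ in range(3)]

    # An incoming pattern: a noisy rendition of dictionary entry 1.
    incoming = dictionary[1] + 0.2 * rng.standard_normal(len(dictionary[1]))

    index, scores = best_match(incoming, dictionary)
    print("best match:", index)                        # 1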

In consequence, a machine that was once finely tuned to process pictorial information surrendered a chunk of its picture handling capacity to the processing of Fourier patterns that depict words. One wonders whether there might be a dynamic allocation between visual and verbal reasoning spaces, depending on whether the brain is caught up in an animated conversation or silently painting a picture.

Note that we have given ourselves a hint, here, at the nature of synesthesia – and a further hint at the reason for the curious linkage between synesthesia and memory. This brings us back to the story of the Russian mnemonist, Solomon Shereshevsky. Here is an excerpt from the New Yorker’s review of Luria’s book, The Mind of a Mnemonist:

A distinguished Soviet psychologist’s study…[of a] young man who was discovered to have a literally limitless memory and eventually became a professional mnemonist. Experiments and interviews over the years showed that his memory was based on synesthesia (turning sounds into vivid visual imagery), that he could forget anything only by an act of will, that he solved problems in a peculiar crablike fashion that worked, and that he was handicapped intellectually because he could not make discriminations, and because every abstraction and idea immediately dissolved into an image for him.

In the model of memory we have suggested here, at least two senses, sight and hearing, are sharing a common memory and recall system. To the memory, there is little technical difference between incoming visual images and remembered sounds or words. They are all Fourier patterns. In a rudimentary or slightly out of kilter system, an incoming visual might be read as a sound, or an incoming word might immediately “dissolve into an image.” One might easily find oneself “hearing colors”.



A science fiction story about synesthesia
How did an ape learn to talk?
In this model of the brain two sensory systems, vision and hearing, produce and process similar patterns — Fourier patterns. A Fourier pattern arriving from the ear looks, to the brain, very like a Fourier pattern captured by the retina. We have suggested that if Fourier patterns arriving from these two distinct sensory systems were confounded in the brain, the effect might be synesthesia. Hearing colors, for example. Similarly the ability to “think in pictures” recounted by Temple Grandin, and the memory techniques described by Solomon Shereshevsky, depend upon an instant conversion of incoming sounds — words — into images.

We can go further with this model by making up a story. Suppose an ape, our ancestor, had a missing, faulty, leaky or intermittent partition between his brain’s processors for vision and hearing. As a result the incoming signals from the ear and the eye might co-exist, badly sorted, with no partition between them.

This goes beyond accounts of contemporary synesthesia in modern human brains. The story calls for a pre-historic brain with the gates left open, constantly receptive to Fourier patterns received from two different sources, the eye and the ear.

Suppose our ancestor, sitting by himself in a tree one night, started playing with his synesthesia by deliberately making lots of different vocal noises and noticing the mental pictures and colors they could induce inside his head.

Maybe he got quite good at this game, so that by experimentally cooing or whining in a particular way he could forcibly regenerate in his head a specific image. A remembered image of a bear, let’s say. With a different sound he might induce the recall of the image of a bird.

In our model of a modern human brain, images from memory, once recalled, surface “from out of nowhere.” The energetic Fourier processing that produces these visual memories is invisible, offstage. In an ancestral primate brain, however, maybe some Fourier processing was visible and impinged upon the animal’s conscious reality. Fourier patterns induced by sounds might be read from a Fourier plane in the brain. We have modeled this component as a retina of memory. Perhaps it was shared by Fourier patterns of both visual and auditory origin. In addition, Fourier patterns created by the eye’s lens might have been read from the retinal Fourier plane as colors and fluid patterns and textures, again impinging on the animal’s everyday conscious reality.

In the end, however, the remembered bear in the ape’s mind would be spatial — very like our modern, quite literal image of a bear.

This artistic process, using vocalized sounds to elicit remembered images, depends in the model upon the brain reading input signals from the ear as though they had arrived from the eye. It also depends on getting the sounds about right. Not perfect. A partial Fourier pattern generated from a sound signal should be sufficient to elicit from memory a whole image of a bear, or at least that of a huge animal standing on its hind legs. (A van Heerden type memory will locate the memorized images that are most similar to the image represented by the Fourier pattern presented for comparison by the input organ. Exact matches are not required.)
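
The parenthetical point, that a van Heerden style memory needs only a similar pattern rather than an exact one, can be illustrated with the same kind of toy correlation match. Below, a cue containing only half of a stored pattern still retrieves the right item; the random “memories”, the way the partial cue is built, and the correlation measure are assumptions of this sketch.

    import numpy as np

    rng = np.random.default_rng(1)

    # Three stored "memories", kept as Fourier-magnitude patterns.
    memories = np.abs(np.fft.rfft(rng.standard_normal((3, 1024)), axis=1))

    # A partial cue: only the first half of memory 2's pattern; the rest unknown.
    cue = memories[2].copy()
    cue[len(cue) // 2:] = 0.0

    def similarity(a, b):
        # Normalized correlation: an exact match is not required, only "closest".
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = [similarity(cue, m) for m in memories]
    print("retrieved memory:", int(np.argmax(scores)))   # 2 -- half a pattern is enough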

An interesting way to entertain oneself in a tree, yes. Sing a little song and watch a film strip of remembered images from the past.

But imagine the new power to communicate this discovered skill gave to this synesthetic ape. The learned whistle-whine-warble that induced in his own head the image of a bear — would also have the power to induce the image of a bear inside the head of every family member and cousin within earshot.

The tribal survival value of this new skill, which is a first prototype of a language, is substantial.

Note that the technique works more like television than talking. It uses vocalized sound waves to transmit across the space from one animal brain to another… the image of some object in the world.

Words do this but words, as we understand the term, came later.

What carries the message here is a crude replica of a Fourier pattern originally made upon the retina by a living bear. The replica is constructed in the brain by experimenting with sounds, vocalizing, until a bear suddenly reappears in the mind’s eye. The image of the bear is induced — coaxed out of memory — by a certain vocal sound. The sound and the process are repeatable. Each time the sound is vocalized, the bear reappears. Anyone else hearing this particular sound should also see (that is, remember) a bear in his or her mind’s eye.

So in this story at least, that’s how and why the first word for “bear” was spoken or sung or hummed or chuttered by an ape. And that’s how words came to be associated in the brain with visual images recalled from the memory’s catalog of visual images. Quite automatically.

From this point it ought to be a downhill run to language but maybe it isn’t.
In 1980 it was confirmed by Dorothy Cheney and Robert Seyfarth that vervet monkeys use alarm calls that do much more than raise an alarm. These calls seem to communicate the specific nature of an imminent danger — that from a leopard, snake or eagle. It would be easy to guess the monkeys are using monkey words, nouns in fact, for leopards, snakes and eagles in order to denote these predators. This idea repels students of language. A thick, famous book about human language, almost as a first step, pushes aside the notion that a vervet monkey could use words in the way humans use words.

My guess would be that these monkeys are using specific alarm call sounds to elicit, in other monkeys, remembered images of leopards, snakes and eagles. These sounds are not words like our words. They are protowords that cue visual memory.

Or did so at one time. Both the signal and response may have by now become hardwired, instinctive.

If this is how vervets communicate, then maybe a synesthetic common ancestor invented this system more than 20 million years ago. It is as plausible to guess it was invented independently in several species. In either case we should ask why we progressed from warning cries to human language and why these monkeys did not.

What would constitute progress? A next step would be to skip the re-imaging process and directly link a sound — probably a simplified sound — with a known object in the world. This shortcut produces real words that elicit stored “meanings” shared by the community. It ultimately produces a language that is, perforce, unique to a particular community of archaic humans. It is no longer a universally understandable, pictorial mode of communication.

It proved much faster to use real words rather than protowords that elicit stored pictures. This shortcut step probably required the suppression of synesthesia. Walls gradually went up between the senses. Walls also grew up between human communities that could not comprehend each others’ unique spoken languages.

If a story like this were true (and who knows?), then we are probably all descended from synesthetes. Synesthesia is at the root of the unique success, through language, of the human ape. And modern synesthesia is an atavism.

There is a sidebar hypothesis. This story suggests an evolutionary role for music. Perhaps melodies were the tooling we humans used to construct a language. Music could be used to elicit and transmit to another human brain a series of pictures, a film strip in effect, by means of an ordered sequence of vocalized sounds. In this hypothesis the sound track does not merely accompany the movie. The sound track creates the movie.

The notion that synesthesia enabled human language fell out of the Fourier pattern matching model of the brain’s memory. I thought at first it might be an original idea. However, a search on “synesthesia and ape” immediately turned up the thinking of Terence McKenna (1946-2000), who attributed the invention of language to a “stoned ape.” McKenna’s ape induced synesthesia by ingesting psilocybin. This “led to the development of spoken language: the ability to form pictures in another person’s mind through the use of vocal sounds.”

The hallucinogenic drug doesn’t seem necessary. The idea that language follows from synesthesia makes sense without it. But that’s just a quibble. Perhaps a quarter of a century ago, McKenna thought his way all the way through to the origin of words.


Photographic memory
From careful observation it is now clear that the capacity of the visual memory for object details is massive. The verbal thought process appears fast and compact. Verbal memory, by comparison, may be relatively small, and constrained by that small capacity.

If we were to accept anecdotes about photographic memory, visual memory would be not only huge but also indelible.

Indelible?

To fall back on an analogy to computers — a sufficiently huge memory could appear to be indelible because it is never necessary to over-write it, nor to shuttle its contents to and from some deeper memory store. If a memory has great capacity it can be permanent.

If somehow the verbal memory could gain access to the vast capacity of the pictorial memory, perhaps one might achieve a “limitless” memory capacity like that of Luria’s patient, the amazing mnemonist. This tack might give us a way to think about the genius of Winston Churchill. He was a painter, a pictorial thinker and visual strategist, but was nevertheless a walking Oxford English Dictionary.

Cave painting from Lascaux. Edges and high spatial frequencies are emphasized.
As late in evolutionary history as the Cro-Magnons, we know that some modern humans were still strongly pictorial in their thinking. This painting was dated to only 17,300 years ago. This Cro-Magnon’s cave painting was quite probably a projection from the visual memory of the artist onto the wall of his cave. Like Picasso, he simply traced onto the wall his mentally projected, remembered images. In effect, this is not just a photo of a wall inside a cave. It is a snapshot taken from the inside of the artist’s head, from just behind his eyes. Notice the edginess of the drawing, the emphasis on high spatial frequencies.

There are fascinating cave and rock paintings in Australia. Noted here at the excellent Bradshaw Foundation site are two in particular — one showing voyagers in a boat, the other a queue of 26 or more deer.

Deer queue. The lined-up deer are especially intriguing because in Australia there were no deer. The painting must depict a literal memory brought by the artist from Borneo, or a folk memory. The deer are lined up as though along the curved edge of a crevice, and there is even a sense of perspective achieved by grading the sizes of the beasts. There is another way to look at the picture, possibly trivial, which is to observe that an insistence on the linear ordering of objects is sometimes viewed as a characteristic of autism.

The existence of a pure photographic memory, in the popular sense, has never been rigorously proved, although Stephen Wiltshire certainly seems to have one.

It makes sense that photographic memory, if indeed it exists today, is extremely rare. It almost seems as though we blabbed it away — as though anyone who can talk has inhibited or subtracted from (or set up a conflict with) his or her innate ability to think in pictures. But children who are pre-verbal might in fact have a purely eidetic visual memory.

Jurassic Park, Dolly, artificial life, newborn mammoths, etc.
The Denisovan girl is vastly famous. The idea she could be reborn from the DNA of her fingertip was first presented in an inverse sense by a journalist who could not resist using the formulaic lead: “This is not Jurassic Park [scoff-scoff-scoff] but ….”

Jurassic Park was a 1990 novel by Michael Crichton and a 1993 movie in which dinosaurs were recreated – re-hatched – from preserved samples of dinosaur DNA.

From Nature and Science the Denisovan girl’s story has flowed into major popular science media and sites, including Wired’s excellent science section, the National Geographic, the BBC, the Huffington Post and many newspapers. Science’s spectators and science fiction enthusiasts are vocal on the internet.

The Denisovan child’s clonability is simply assumed by most internet letter writers and casual commenters.

The technology of rebirth acquired credibility from Jurassic Park, from press releases in 2010 announcing the creation of artificial life, and from readily publicized efforts (more press releases) to engender a baby woolly mammoth from the DNA of cells to be sought in frozen mammoth carcasses. Dolly, the many mammalian clones that have succeeded her, and the cloning of a mouse from cells frozen for 15 years – all make it seem, to a watching world, that the rebirth of an extinct lifeform is possible or just around the corner.

This is actually the second go-round on cloning an archaic human. The first discussion came with the (low coverage) sequencing of a Neanderthal.

In Lone Survivors: How We Came to Be the Only Humans on Earth, the distinguished paleoanthropologist Chris Stringer asks:

“…should we reverse the process of extinction and attempt to clone a Neanderthal from its newly reconstructed genome?

…it would be quite wrong to resurrect long-extinct species purely to satisfy our curiosity about them, especially if they were human. Neanderthals were the products of a unique evolutionary history in Eurasia that lasted for several hundred thousand years, but they are gone, along with the world in which they evolved, and we should let them rest in peace.”

The extinct Denisovan girl has been swept into this what-if discussion with great enthusiasm. For example on the Huffington Post, one letter writer pointed out that since this little girl is not a human, technically, her animal nature might legally exempt her from the patchwork of rules and laws that forbid human cloning.

Actually, the Denisovan girl cannot be cloned. No cells. Her DNA is shattered. All we have is her code, which was carefully pieced back together inside a computer. It is far easier to re-assemble a fragmented genome with algorithms than with biochemistry. The Denisovan girl’s genome resides on digital memory media — not in chromatin.

Some assembly required:
The human genome comprises 3 billion base pairs. The current world record for genome synthesis is about 1 million base pairs, which is 1/3000th of the length of a human genome. The full genome is, for now, out of reach.

In the TV movie of this story we would simply push a button and synthesize from the Denisovan girl’s known DNA sequence a molecular copy of her genome, using a computer to instruct an automatic DNA synthesizer.

Contemporary automatic synthesizers, however, cannot manufacture long stretches of DNA. They make oligonucleotides. “Oligos” are short DNA polymers of 20 to 50 bases. As the polymer gets longer, the errors inherent in the automated chemical synthesis of DNA become problematic. The error rate is about 1 base in 100. A practical upper limit on polymer length is in most cases about 200 bases. The longer the polymer, the more likely the errors, and thus the lower the yield of accurate copies.
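
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the roughly 1-in-100 per-base error rate quoted above (real chemistries and vendors vary): the chance that a copy comes out with no errors at all falls off exponentially with its length.

```python
# Back-of-the-envelope: probability that an oligo of a given length is
# synthesized with zero errors, assuming ~1 error per 100 bases
# (the rate quoted in the text; real chemistries vary).
per_base_error = 0.01

for length in (20, 50, 200, 1000):
    perfect = (1 - per_base_error) ** length   # every base must be correct
    print(f"{length:>5} bases: {perfect:.2%} of copies error-free")
```

At 20 to 50 bases most copies are fine; by 200 bases only about one copy in seven or eight is error-free; at 1000 bases essentially none are.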

Traditionally, long DNA polymers were constructed by assembling short strands step by step, sequentially adding the correct short fragments to an ever lengthening chain.

In the PNAS for December 23, 2008, Gibson, Hutchison, Venter et al reported a new technique that had been used to assemble a long DNA polymer from 25 short DNA fragments in a single step. They created the original fragments with synthesizers, then assembled them in yeast to produce an artificial genome. The enzymatic machinery that does this work normally repairs yeast DNA. But from a practical point of view, it is as though yeast had a built-in algorithm for ordering and assembling fragments of DNA. The first genome they produced with just one assembly step in yeast was that of Mycoplasma genitalium, which is about 583 kilobases long.
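
By way of loose analogy only (the fragments below are invented, and yeast does the real work with recombination enzymes rather than string matching), the ordering problem looks something like this: find the pieces whose ends overlap and stitch them together.

```python
# A toy, invented illustration of fragment assembly by overlap: given short
# pieces whose ends overlap, repeatedly join the pair with the longest exact
# end-overlap until one sequence remains. This is only the "algorithm" by
# analogy; the biochemistry in yeast is homologous recombination.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # find the ordered pair of fragments with the longest end-overlap
        best = max((overlap(a, b), i, j)
                   for i, a in enumerate(frags)
                   for j, b in enumerate(frags) if i != j)
        n, i, j = best
        merged = frags[i] + frags[j][n:]                     # join the pair
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

pieces = ["ATGGCGTAC", "CGTACTTGA", "TTGACCA"]   # made-up overlapping fragments
print(assemble(pieces))                          # -> ATGGCGTACTTGACCA
```

The Gibson team supplied fragments with deliberately overlapping ends; the yeast machinery, in effect, performs the matching and joining in one pot.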

In July, 2010, Venter’s group reported that they had created (though in more than one step) an even longer artificial genome. It was that of Mycoplasma mycoides, which is 1.08 million base pairs in length. This artificial genome was introduced into a recipient cell, where the synthetic DNA took over. The transfer succeeded completely and the cell was self-replicating. This was the milestone headlined as the creation of “artificial life.” All of the cell’s DNA was artificial in the sense that it had been chemically synthesized.

Would this technique help recreate the Denisovan girl’s genome of 3 billion base pairs? No.

The limitation on genome synthesis is its marginal accuracy. If you scaled up the process today you would also, necessarily, scale up the number of mistakes it writes into the genome – departures from the digitized blueprint. Some errors are trivial or silent and some could be catastrophic.
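
(A naive back-of-the-envelope check, ignoring any proofreading or error-correction steps: one error per hundred bases, applied across 3 billion bases, means on the order of 3,000,000,000 × 0.01 = 30 million departures from the blueprint in a single pass.)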

The problem ultimately comes back to the automated chemical oligonucleotide synthesizers at the front end of the process. They are limited to making oligos because of the rate at which they make mistakes.

Automated chemical DNA synthesizers have been around since the 1970s. The sister technology of DNA sequencing machines has progressed dramatically through several generations by running chemistries in parallel. It is now possible to sequence a long genome very rapidly and economically. You could have your own genome sequenced for less than $2000. But it is still only possible to synthesize a short genome like those of the mycoplasmas. And it is expensive.

An entirely new DNA synthetic process, using enzymes perhaps, might make it possible to create de novo a very long DNA polymer faithful to a human genome from a digital database. But not tomorrow morning.

Photo by Dmitry Artyukhov. The mouth of the Denisova cave.

Is there another way?
Many biochemical interventions, including mammalian cloning, rely upon cellular machinery that is already in place in a host cell. This borrowed biochemical watchworks is not completely understood nor necessarily even known to us.

For the Denisovan girl, none of this machinery exists anymore. There survives today no Denisovan cell. What have survived are long stretches of Denisovan-like DNA in the genomes of Melanesians, including Papuans and Aborigines of Australia. About 6% of aboriginal DNA is very like that of the Denisovan girl.

Since no Denisovan cell is at hand, and no Denisovan genome can be accurately synthesized with contemporary technology — one might seek to edit into Denisovan form a modern human genome from a living cell.

This would require making 112,000 single nucleotide changes in exactly the right positions, plus 9500 insertions and deletions. This would get you close, within some tolerance, to the recorded genome of the Denisovan girl. From this point you might work toward reproducing her.
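
On paper the bookkeeping is trivial. Here is a minimal sketch (the sequence, positions and bases are invented for illustration) of applying a list of single-base substitutions, insertions and deletions to a reference string; doing the equivalent roughly 121,500 times in the chromatin of a living cell, without collateral damage, is the hard part.

```python
# Toy "genome editing on paper": apply single-base substitutions, insertions
# and deletions (all invented for the example) to a reference string.

def apply_edits(reference, edits):
    """edits: list of (position, kind, base). Applied from the highest
    position down, so earlier coordinates are not shifted by indels."""
    seq = list(reference)
    for pos, kind, base in sorted(edits, key=lambda e: e[0], reverse=True):
        if kind == "sub":
            seq[pos] = base
        elif kind == "ins":
            seq.insert(pos, base)
        elif kind == "del":
            del seq[pos]
    return "".join(seq)

modern = "ACGTACGTACGT"                                  # stand-in sequence
edits = [(2, "sub", "T"), (5, "ins", "G"), (9, "del", None)]
print(apply_edits(modern, edits))                        # -> ACTTAGCGTAGT
```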

So the Denisovan girl might walk the earth once again but if it turned out she had the ability to talk, she would have nothing to tell us about her first childhood in the Denisova cave 80,000 years ago.

She would not remember her first life. That information was stored in some way in her head, not in her finger. As for her second life, to be lived among us charming, untroubled modern humans, it would be miserable, would it not? She would probably grow up near a university and she would lead a life among scientists, an examined life, a specimen’s life. Alone on the planet.

Nevertheless, she has come trekking down to us across the span of eighty thousand years. It is unimaginable that she should now be left waiting indefinitely, stored in a database. Let’s guess that a century from now, which is no time at all for this little girl, biologists will know in detail how to engineer her renaissance.

To what end?

An exploit. Biology’s moonshot. To use technology to defy time and death.

But scientifically?

The Denisovan girl has given us a new genetic baseline or datum line. It would help us to see her in person. The hope would be that we could learn from her phenotype and her genome how modern humans are different, biochemically, from archaic humans.

Evolution is not an upward path leading to ourselves. In terms of language genes and skills and language-based thinking, we excel. In terms of visual or picture-based thinking, I suspect we have long since plummeted below the Denisovan baseline. A facility for language is useful but we may have paid for it with vastly diminished visual gifts. So it isn’t only a question of how far we modern humans have come. It is also a question of what we have lost, jettisoned or suppressed.
