Summary
Bryson has written a fascinating, enthralling, accessible book on the history of the natural sciences, covering topics as diverse as cosmology, quantum physics, paleontology and chemistry. Complement it with Sapiens (anthropology), The Lessons of History (society, politics, economy) and Genome (DNA).
Key Takeaways
To be added on a reread. See notes below.
What I got out of it
- Appreciation for and marvel at life, humanity and nature.
- How much we (humans) have already learnt, yet how little we still understand.
- Progress accelerates, yes, but in the end we’re a speck of dust, a blip in time. There’s no point in (ever) getting flustered or rushed. Keep my life – my time on earth – in perspective.
- Individuals can make a difference if devoted and pursuing a single cause. Most who do find their life meaningful and rewarding (albeit often very challenging).
- The miracles of life/nature/universe. Somehow, someway, we are all one: we’re all energy or atoms, organized differently.
- A common thread throughout this book: “Gradually, then suddenly.”
To ponder:
- How much of what we hold to be true is science from the last 50-100 years, meaning it’s likely to be overturned in our lifetime? (Many scientific developments we consider true now were considered crazy or ignored when first announced. Also, most scientific “truths” mentioned in this book have lasted no longer than 50-200 years.)
- We tend to stick with what we were taught during our school years, even though a (large) part of it will likely be outdated or overturned by the time we’re 40-60 years old (with the exception of math).
Summary Notes
1 – Lost in the Cosmos
How to Build a Universe
In the long term, gravity may turn out to be a little too strong; one day it may halt the expansion of the universe and bring it collapsing in upon itself, until it crushes itself down into another singularity, possibly to start the whole process over again. On the other hand, it may be too weak, in which case the universe will keep racing away for ever until everything is so far apart that there is no chance of material interactions, so that the universe becomes a place that is very roomy, but inert and dead. The third option is that gravity is perfectly pitched—“critical density” is the cosmologists’ term for it—and that it will hold the universe together at just the right dimensions to allow things to go on indefinitely. Cosmologists, in their lighter moments, sometimes call this the “Goldilocks effect”—that everything is just right. (For the record, these three possible universes are known respectively as closed, open and flat.)
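The “critical density” Bryson mentions has a precise textbook definition. As a sketch from standard cosmology (not spelled out in the book), the dividing line between the three fates comes from the Friedmann equations:

```latex
% Critical density: the threshold separating closed, flat and open universes
\rho_c = \frac{3 H^2}{8 \pi G}
% H : Hubble parameter (the current expansion rate)
% G : Newton's gravitational constant
% \rho > \rho_c \implies closed (eventual recollapse)
% \rho = \rho_c \implies flat (the "Goldilocks" case)
% \rho < \rho_c \implies open (expansion for ever)
```

Whether the actual average density of the universe sits above, at, or below this threshold is what decides among the three scenarios above.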
For the moment it is enough to know that we are not adrift in some large, ever-expanding bubble. Rather, space curves, in a way that allows it to be boundless but finite. Space cannot even properly be said to be expanding because, as the physicist and Nobel laureate Steven Weinberg notes, “solar systems and galaxies are not expanding, and space itself is not expanding.” Rather, the galaxies are rushing apart. It is all something of a challenge to intuition. Or, as the biologist J. B. S. Haldane once famously observed: “The universe is not only queerer than we suppose; it is queerer than we can suppose.”
The analogy that is usually given for explaining the curvature of space is to try to imagine someone from a universe of flat surfaces, who had never seen a sphere, being brought to Earth. No matter how far he roamed across the planet’s surface, he would never find an edge. He might eventually return to the spot where he had started, and would of course be utterly confounded to explain how that had happened.
2 – The Size of the Earth
The Stone-Breakers
He became a leading member of a society called the Oyster Club, where he passed his evenings in the company of men such as the economist Adam Smith, the chemist Joseph Black and the philosopher David Hume, as well as such occasional visiting sparks as Benjamin Franklin and James Watt.
In the winter of 1807, thirteen like-minded souls in London got together at the Freemasons Tavern at Long Acre, in Covent Garden, to form a dining club to be called the Geological Society. The idea was to meet once a month to swap geological notions over a glass or two of Madeira and a convivial dinner. The price of the meal was set at a deliberately hefty 15 shillings to discourage those whose qualifications were merely cerebral. It soon became apparent, however, that there was a demand for something more properly institutional, with a permanent headquarters, where people could gather to share and discuss new findings. In barely a decade membership grew to 400—still all gentlemen, of course—and the Geological was threatening to eclipse the Royal as the premier scientific society in the country.
The members met twice a month from November until June, when virtually all of them went off to spend the summer doing fieldwork. These weren’t people with a pecuniary interest in minerals, you understand, or even academics for the most part, but simply gentlemen with the wealth and time to indulge a hobby at a more or less professional level. By 1830 there were 745 of them, and the world would never see the like again.
It is hard to imagine now, but geology excited the nineteenth century—positively gripped it—in a way that no science ever had before or would again.
Between Hutton’s day and Lyell’s there arose a new geological controversy, which largely superseded, but is often confused with, the old Neptunian-Plutonian dispute. The new battle became an argument between catastrophism and uniformitarianism—unattractive terms for an important and very long-running dispute. Catastrophists, as you might expect from the name, believed that the Earth was shaped by abrupt cataclysmic events—floods, principally, which is why catastrophism and Neptunism are often wrongly bundled together.
Catastrophism was particularly comforting to clerics like Buckland because it allowed them to incorporate the biblical flood of Noah into serious scientific discussions. Uniformitarians, by contrast, believed that changes on Earth were gradual and that nearly all earth processes happened slowly, over immense spans of time. Hutton was much more the father of the notion than Lyell, but it was Lyell most people read, and so he became in most people’s minds, then and now, the father of modern geological thought.
Scottish mathematician and physicist Lord Kelvin, who throughout his career produced revolutionary scientific theories and was arguably the first scientist to become wealthy by patenting his work.
The Second Law of Thermodynamics.
A discussion of these laws would be a book in itself, but I offer here this crisp summation by the chemist P. W. Atkins, just to provide a sense of them: “There are four Laws. The third of them, the Second Law, was recognized first; the first, the Zeroth Law, was formulated last; the First Law was second; the Third Law might not even be a law in the same sense as the others.” In briefest terms, the second law states that a little energy is always wasted. You can’t have a perpetual motion device because no matter how efficient, it will always lose energy and eventually run down. The first law says that you can’t create energy and the third that you can’t reduce temperatures to absolute zero; there will always be some residual warmth. As Dennis Overbye notes, the three principal laws are sometimes expressed jocularly as (1) you can’t win, (2) you can’t break even, and (3) you can’t get out of the game.
Science Red in Tooth and Claw
In 1856 he became head of the natural history section of the British Museum, in which capacity he became the driving force behind the creation of London’s Natural History Museum. The grand and beloved gothic heap in South Kensington, opened in 1880, is almost entirely a testament to his vision.
Before Owen, museums were designed primarily for the use and edification of the elite, and even they found it difficult to gain access. In the early days of the British Museum, prospective visitors had to make a written application and undergo a brief interview to determine if they were fit to be admitted at all. They then had to return a second time to pick up a ticket—that is, assuming they had passed the interview—and finally come back a third time to view the museum’s treasures. Even then they were whisked through in groups and not allowed to linger. Owen’s plan was to welcome everyone, even to the point of encouraging working men to visit in the evening, and to devote most of the museum’s space to public displays. He even proposed, very radically, to put informative labels on each display so that people could appreciate what they were viewing. In this, somewhat unexpectedly, he was opposed by T. H. Huxley, who believed that museums should be primarily research institutes. By making the Natural History Museum an institution for everyone, Owen transformed our expectations of what museums are for.
Elemental Matters
Three years after embarking on this lucrative career path, he married the fourteen-year-old daughter of one of his bosses. The marriage was a meeting of hearts and minds. Mme Lavoisier had an incisive intellect and soon was working productively alongside her husband. Despite the demands of his job and busy social life, they managed on most days to put in five hours of science—two in the early morning and three in the evening—as well as the whole of Sunday, which they called their jour de bonheur (day of happiness). Somehow Lavoisier also found the time to be commissioner of gunpowder, supervise the building of a wall around Paris to deter smugglers, help found the metric system and co-author the handbook Méthode de Nomenclature Chimique, which became the bible for agreeing the names of the elements.
Mendeleyev dutifully completed his studies and eventually landed a position at the local university. There he was a competent but not terribly outstanding chemist, known more for his wild hair and beard, which he had trimmed just once a year, than for his gifts in the laboratory.
However, in 1869, at the age of thirty-five, he began to toy with a way to arrange the elements. At the time, elements were normally grouped in two ways—either by atomic weight (using Avogadro’s Principle) or by common properties (whether they were metals or gases, for instance). Mendeleyev’s breakthrough was to see that the two could be combined in a single table.
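A toy illustration of that insight (not from the book; weights are modern values): sort a handful of light elements by atomic weight, break the sequence into rows of seven, and chemically similar elements land in the same column. That recurring pattern is what Mendeleyev’s table captured.

```python
# A few light elements as (symbol, atomic weight) pairs, deliberately
# listed out of order.
elements = [
    ("Na", 23.0), ("Li", 6.9), ("F", 19.0), ("C", 12.0),
    ("Cl", 35.5), ("Be", 9.0), ("O", 16.0), ("Mg", 24.3),
    ("B", 10.8), ("Si", 28.1), ("N", 14.0), ("P", 31.0),
    ("Al", 27.0), ("S", 32.1),
]
ordered = sorted(elements, key=lambda e: e[1])   # order by atomic weight
rows = [ordered[i:i + 7] for i in range(0, len(ordered), 7)]
for row in rows:
    print("  ".join(f"{sym:>2}" for sym, _ in row))
# Li sits directly above Na, and F above Cl: same column, kindred chemistry.
```

Weight ordering alone gives a line; the periodicity only appears once the line is folded so that like properties recur at fixed intervals.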
3 – A New Age Dawns
Einstein’s Universe
Science was moving from a world of macrophysics, where objects could be seen and held and measured, to one of microphysics, where events transpire with inconceivable swiftness on scales of magnitude far below the limits of imagining. We were about to enter the quantum age, and the first person to push on the door was the so-far unfortunate Max Planck.
In 1900, now a theoretical physicist at the University of Berlin, and at the somewhat advanced age of forty-two, Planck unveiled a new “quantum theory,” which posited that energy is not a continuous thing like flowing water but comes in individualized packets, which he called quanta. This was a novel concept, and a good one. In the short term it would help to provide a solution to the puzzle of the Michelson-Morley experiments in that it demonstrated that light needn’t be a wave after all. In the longer term it would lay the foundation for the whole of modern physics. It was, at all events, the first clue that the world was about to change.
But the landmark event—the dawn of a new age—came in 1905 when there appeared in the German physics journal Annalen der Physik a series of papers by a young Swiss bureaucrat who had no university affiliation, no access to a laboratory and the regular use of no library greater than that of the national patent office in Bern, where he was employed as a technical examiner third class. (An application to be promoted to technical examiner second class had recently been rejected.) His name was Albert Einstein, and in that one eventful year he submitted to Annalen der Physik five papers, of which three, according to C. P. Snow, “were among the greatest in the history of physics”—one examining the photoelectric effect by means of Planck’s new quantum theory, one on the behaviour of small particles in suspension (what is known as Brownian motion), and one outlining a Special Theory of Relativity.
The first won its author a Nobel Prize and explained the nature of light (and also helped to make television possible, among other things). The second provided proof that atoms do indeed exist—a fact that had, surprisingly, been in some dispute. The third merely changed the world.
His famous equation, E = mc², did not appear with the paper, but came in a brief supplement that followed a few months later. As you will recall from schooldays, E in the equation stands for energy, m for mass and c² for the speed of light squared.
In simplest terms, what the equation says is that mass and energy have an equivalence. They are two forms of the same thing: energy is liberated matter;
matter is energy waiting to happen. Since c² (the speed of light times itself) is a truly enormous number, what the equation is saying is that there is a huge amount—a really huge amount—of energy bound up in every material thing.
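A quick worked example (mine, not the book’s) makes “a really huge amount” concrete: the energy locked in a single kilogram of matter.

```python
# E = m * c^2 for one kilogram of anything at all.
c = 299_792_458          # speed of light in m/s (exact, by definition)
m = 1.0                  # mass in kilograms
E = m * c**2             # energy in joules
print(f"{E:.3e} J")      # roughly 9e16 joules
# For scale: one megaton of TNT is about 4.184e15 J, so a kilogram of
# matter holds on the order of 21 megatons' worth of energy.
print(f"{E / 4.184e15:.1f} Mt TNT equivalent")
```

Because c² is about 9 × 10¹⁶ in SI units, even a tiny mass corresponds to an enormous energy, which is exactly the point Bryson is making.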
In essence what relativity says is that space and time are not absolute, but relative both to the observer and to the thing being observed, and the faster one moves the more pronounced these effects become. We can never accelerate ourselves to the speed of light, and the harder we try (and the faster we go) the more distorted we will become, relative to an outside observer.
The most challenging and non-intuitive of all the concepts in the General Theory of Relativity is the idea that time is part of space. Our instinct is to regard time as eternal, absolute, immutable; to believe that nothing can disturb its steady tick. In fact, according to Einstein, time is variable and ever-changing.
It even has shape. It is bound up—“inextricably interconnected,” in Stephen Hawking’s expression—with the three dimensions of space in a curious dimension known as spacetime.
Spacetime is usually explained by asking you to imagine something flat but pliant—a mattress, say, or a sheet of stretched rubber—on which is resting a heavy round object, such as an iron ball. The weight of the iron ball causes the material on which it is sitting to stretch and sag slightly. This is roughly analogous to the effect that a massive object such as the Sun (the iron ball) has on spacetime (the material): it stretches and curves and warps it. Now, if you roll a smaller ball across the sheet, it tries to go in a straight line as required by Newton’s laws of motion, but as it nears the massive object and the slope of the sagging fabric, it rolls downwards, ineluctably drawn to the more massive object. This is gravity—a product of the bending of spacetime.
Among much else, Einstein’s General Theory of Relativity suggested that the universe must be either expanding or contracting. But Einstein was not a cosmologist and he accepted the prevailing wisdom that the universe was fixed and eternal. More or less reflexively, he dropped into his equations something called the cosmological constant, which arbitrarily counterbalanced the effects of gravity, serving as a kind of mathematical pause button. Books on the history of science always forgive Einstein this lapse, but it was actually a fairly appalling piece of science and he knew it. He called it “the biggest blunder of my life.”
Edwin Hubble: he became the most outstanding astronomer of the twentieth century.
The universe was expanding, swiftly and evenly in all directions. It didn’t take a huge amount of imagination to read backwards from this and realize that it must therefore have started from some central point. Far from being the stable, fixed, eternal void that everyone had always assumed, this was a universe that had a beginning. It might therefore also have an end.
The wonder, as Stephen Hawking has noted, is that no-one had hit on the idea of the expanding universe before. A static universe, as should have been obvious to Newton and every thinking astronomer since, would collapse in upon itself.
There was also the problem that if stars had been burning indefinitely in a static universe they’d have made the whole intolerably hot—certainly much too hot for the likes of us. An expanding universe resolved much of this at a stroke.
The Mighty Atom
Atoms, in short, are very abundant.
They are also fantastically durable. Because they are so long-lived, atoms really get around. Every atom you possess has almost certainly passed through several stars and been part of millions of organisms on its way to becoming you.
We are each so atomically numerous and so vigorously recycled at death that a significant number of our atoms—up to a billion for each of us, it has been suggested—probably once belonged to Shakespeare.
It takes the atoms some decades to become thoroughly redistributed; however much you may wish it, you are not yet one with Elvis Presley.
So we are all reincarnations—though short-lived ones. When we die, our atoms will disassemble and move off to find new uses elsewhere—as part of a leaf or other human being or drop of dew. Atoms themselves, however, go on practically for ever. Nobody actually knows how long an atom can survive, but according to Martin Rees it is probably about 10³⁵ years—a number so big that even I am happy to express it in mathematical notation.
It is, of course, the abundance and extreme durability of atoms that make them so useful, and the tininess that makes them so hard to detect and understand. The realization that atoms are these three things: small, numerous, practically indestructible.
Rutherford: “All science is either physics or stamp collecting,”
To Rutherford’s astonishment, some of the particles bounced back. It was as if, he said, he had fired a 15-inch shell at a sheet of paper and it rebounded into his lap. This was just not supposed to happen. After considerable reflection he realized there could be only one possible explanation: the particles that bounced back were striking something small and dense at the heart of the atom, while the other particles sailed through unimpeded. An atom, Rutherford realized, was mostly empty space, with a very dense nucleus at the centre. This was a most gratifying discovery, but it presented one immediate problem. By all the laws of conventional physics, atoms shouldn’t therefore exist.
Let us pause for a moment and consider the structure of the atom as we know it now. Every atom is made from three kinds of elementary particles: protons, which have a positive electrical charge; electrons, which have a negative electrical charge; and neutrons, which have no charge. Protons and neutrons are packed into the nucleus, while electrons spin around outside. The number of protons is what gives an atom its chemical identity. An atom with one proton is an atom of hydrogen, one with two protons is helium, with three protons lithium, and so on up the scale. Each time you add a proton you get a new element.
(Because the number of protons in an atom is always balanced by an equal number of electrons, you will sometimes see it written that it is the number of electrons that defines an element; it comes to the same thing. The way it was explained to me is that protons give an atom its identity, electrons its personality.)
Neutrons don’t influence an atom’s identity, but they do add to its mass. The number of neutrons is generally about the same as the number of protons, but they can vary up and down slightly. Add or subtract a neutron or two and you get an isotope. The terms you hear in reference to dating techniques in archaeology refer to isotopes—carbon-14, for instance, which is an atom of carbon with six protons and eight neutrons (the fourteen being the sum of the two).
Neutrons and protons occupy the atom’s nucleus. The nucleus of an atom is tiny—only one-millionth of a billionth of the full volume of the atom—but fantastically dense, since it contains virtually all the atom’s mass.
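The “one-millionth of a billionth” figure falls straight out of the radii. A rough sketch (typical order-of-magnitude values, not from the book; real radii vary by element):

```python
# Typical length scales: an atom is ~1 angstrom across in radius,
# a nucleus ~1 femtometre.
r_atom = 1e-10      # metres
r_nucleus = 1e-15   # metres
# Volume scales with the cube of the radius, so a radius 100,000 times
# smaller means a volume 10^15 times smaller.
volume_fraction = (r_nucleus / r_atom) ** 3
print(volume_fraction)   # about 1e-15: one-millionth of a billionth
```

Cubing is what turns a merely large ratio of lengths into an almost inconceivable ratio of volumes, and it is why nearly all of an atom is empty space.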
It is still a fairly astounding notion to consider that atoms are mostly empty space, and that the solidity we experience all around us is an illusion. When two objects come together in the real world—billiard balls are most often used for illustration—they don’t actually strike each other. “Rather,” as Timothy Ferris explains, “the negatively charged fields of the two balls repel each other… [W]ere it not for their electrical charges they could, like galaxies, pass right through each other unscathed.” When you sit in a chair, you are not actually sitting there, but levitating above it at a height of one angstrom (a hundred millionth of a centimetre), your electrons and its electrons implacably opposed to any closer intimacy.
The picture of an atom that nearly everybody has in mind is of an electron or two flying around a nucleus, like planets orbiting a sun. This image was created in 1904, based on little more than clever guesswork, by a Japanese physicist named Hantaro Nagaoka. It is completely wrong, but durable just the same. As Isaac Asimov liked to note, it inspired generations of science-fiction writers to create stories of worlds-within-worlds, in which atoms become tiny inhabited solar systems or our solar system turns out to be merely a mote in some much larger scheme.
In fact, as physicists were soon to realize, electrons are not like orbiting planets at all, but more like the blades of a spinning fan, managing to fill every bit of space in their orbits simultaneously (but with the crucial difference that the blades of a fan only seem to be everywhere at once; electrons are).
So the atom turned out to be quite unlike the image that most people had created. The electron doesn’t fly around the nucleus like a planet around its sun, but instead takes on the more amorphous aspect of a cloud. The “shell” of an atom isn’t some hard, shiny casing, as illustrations sometimes encourage us to suppose, but simply the outermost of these fuzzy electron clouds. The cloud itself is essentially just a zone of statistical probability marking the area beyond which the electron only very seldom strays. Thus an atom, if you could see it, would look more like a very fuzzy tennis ball than a hard-edged metallic sphere (but not much like either or, indeed, like anything you’ve ever seen; we are, after all, dealing here with a world very different from the one we see around us).
There was the problem that quantum physics introduced a level of untidiness that hadn’t previously existed. Suddenly you needed two sets of laws to explain the behaviour of the universe—quantum theory for the world of the very small and relativity for the larger universe beyond. The gravity of relativity theory was brilliant at explaining why planets orbited suns or why galaxies tended to cluster, but turned out to have no influence at all at the particle level.
To explain what kept atoms together, other forces were needed and in the 1930s two were discovered: the strong nuclear force and the weak nuclear force. The strong force binds atoms together; it’s what allows protons to bed down together in the nucleus. The weak force engages in more miscellaneous tasks, mostly to do with controlling the rates of certain sorts of radioactive decay.
Getting the Lead Out
In one such study, a doctor who had no specialized training in chemical pathology undertook a five-year programme in which volunteers were asked to breathe in or swallow lead in elevated quantities. Then their urine and faeces were tested. Unfortunately, as the doctor appears not to have known, lead is not excreted as a waste product. Rather, it accumulates in the bones and blood— that’s what makes it so dangerous—and neither bone nor blood was tested. In consequence, lead was given a clean bill of health.
Muster Mark’s Quarks
Eventually out of all this emerged what is called the Standard Model, which is essentially a sort of parts kit for the subatomic world. The Standard Model consists of six quarks, six leptons, five known bosons and a postulated sixth, the Higgs boson (named for a Scottish scientist, Peter Higgs), plus three of the four physical forces: the strong and weak nuclear forces and electromagnetism.
The arrangement essentially is that among the basic building blocks of matter are quarks; these are held together by particles called gluons; and together quarks and gluons form protons and neutrons, the stuff of the atom’s nucleus.
Leptons are the source of electrons and neutrinos. Quarks and leptons together are called fermions. Bosons (named for the Indian physicist S. N. Bose) are particles that produce and carry forces, and include photons and gluons. The Higgs boson may or may not actually exist; it was invented simply as a way of endowing particles with mass.
The Standard Model is not only ungainly but incomplete. For one thing, it has nothing at all to say about gravity.
In an attempt to draw everything together, physicists have come up with something called superstring theory. This postulates that all those little things like quarks and leptons that we had previously thought of as particles are actually “strings”—vibrating strands of energy that oscillate in eleven dimensions, consisting of the three we know already plus time and seven other dimensions that are, well, unknowable to us. The strings are very tiny—tiny enough to pass for point particles.
By introducing extra dimensions, superstring theory enables physicists to pull together quantum laws and gravitational ones into one comparatively tidy package; but it also means that anything scientists say about the theory begins to sound worryingly like the sort of thoughts that would make you edge away if conveyed to you by a stranger on a park bench.
String theory has further spawned something called M theory, which incorporates surfaces known as membranes—or simply branes to the hipper souls of the world of physics. This, I’m afraid, is the stop on the knowledge highway where most of us must get off.
The Earth Moves
His radical notions questioned the foundations of their discipline, seldom an effective way to generate warmth in an audience.
It was a beautiful theory that explained a great deal. Hess elaborated his arguments in an important paper, which was almost universally ignored.
Sometimes the world just isn’t ready for a good idea.
At all events, plate tectonics explained not only the surface dynamics of the Earth—how an ancient Hipparion got from France to Florida, for example—but also many of its internal actions. Earthquakes, the formation of island chains, the carbon cycle, the locations of mountains, the coming of ice ages, the origins of life itself—there was hardly a matter that wasn’t directly influenced by this remarkable new theory. Geologists, as McPhee has noted, found themselves in the giddying position where “the whole earth suddenly made sense.”
But only up to a point. The distribution of continents in former times is much less neatly resolved than most people outside geophysics think. Although textbooks give confident-looking representations of ancient land masses with names like Laurasia, Gondwana, Rodinia and Pangaea, these are sometimes based on conclusions that don’t altogether hold up. As George Gaylord Simpson observes in Fossils and the History of Life, species of plants and animals from the ancient world have a habit of appearing inconveniently where they shouldn’t and failing to be where they ought.
The outline of Gondwana, a once-mighty continent connecting Australia, Africa, Antarctica and South America, was based in large part on the distributions of a genus of ancient tongue fern called Glossopteris, which was found in all the right places. However, much later Glossopteris was also discovered in parts of the world that had no known connection to Gondwana.
This troubling discrepancy was—and continues to be—mostly ignored.
Similarly, a Triassic reptile called Lystrosaurus has been found from Antarctica all the way to Asia, supporting the idea of a former connection between those continents, but it has never turned up in South America or Australia, which are believed to have been part of the same continent at the same time.
There are also many surface features that tectonics can’t explain. Take Denver. It is, as everyone knows, a mile high, but that rise is comparatively recent. When dinosaurs roamed the Earth, Denver was part of an ocean bottom, many thousands of metres lower. Yet the rocks on which Denver sits are not fractured or deformed in the way they would be if Denver had been pushed up by colliding plates, and anyway Denver was too far from the plate edges to be susceptible to their actions. It would be as if you pushed against the edge of a rug hoping to raise a ruck at the opposite end. Mysteriously and over millions of years, it appears that Denver has been rising, like baking bread. So, too, has much of southern Africa; a portion of it 1,600 kilometres across has risen about one and a half kilometres in a hundred million years without any known associated tectonic activity. Australia, meanwhile, has been tilting and sinking.
Over the past hundred million years, as it has drifted north towards Asia, its leading edge has sunk by nearly 200 metres. It appears that Indonesia is very slowly drowning, and dragging Australia down with it. Nothing in the theories of tectonics can explain any of this.
4 – Dangerous Planet
The Fire Below
I asked him in what way it was a dumb place to hunt for bones. “Well, if you’re looking for bones, you really need exposed rock. That’s why most palaeontology is done in hot, dry places. It’s not that there are more bones there.
It’s just that you have some chance of spotting them. In a setting like this”—he made a sweeping gesture across the vast and unvarying prairie—“you wouldn’t know where to begin. There could be really magnificent stuff out there, but there’s no surface clues to show you where to start looking.”
Theoretically, at least, there is no upper limit for an earthquake—nor, come to that, a lower limit. The scale is a simple measure of force, but says nothing about damage. A magnitude 7 quake happening deep in the mantle—say, 650 kilometres down—might cause no surface damage at all, while a significantly smaller one happening just 6 or 7 kilometres under the surface could wreak widespread devastation. Much, too, depends on the nature of the subsoil, the quake’s duration, the frequency and severity of aftershocks, and the physical setting of the affected area. All this means that the most fearsome quakes are not necessarily the most forceful, though force obviously counts for a lot.
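One thing the passage leaves implicit is that magnitude scales are logarithmic. Under the standard Gutenberg-Richter energy relation (an addition of mine, not stated in the book), radiated energy grows as log₁₀E = 1.5M + 4.8, with E in joules:

```python
# Radiated seismic energy under the Gutenberg-Richter relation.
def quake_energy_joules(magnitude: float) -> float:
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole-magnitude step multiplies radiated energy by ~31.6,
# so a magnitude 7 releases about 1000 times the energy of a magnitude 5.
ratio = quake_energy_joules(7) / quake_energy_joules(5)
print(round(ratio))
```

This is why there is no upper limit in principle, and also why raw magnitude alone says so little about damage: depth, subsoil and duration decide how that energy is felt at the surface.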
So how much do we know about what’s inside the Earth? Very little.
Scientists are generally agreed that the world beneath us is composed of four layers—a rocky outer crust, a mantle of hot, viscous rock, a liquid outer core and a solid inner core. We know that the surface is dominated by silicates, which are relatively light and not heavy enough to account for the planet’s overall density. Therefore there must be heavier stuff inside. We know that to generate our magnetic field somewhere in the interior there must be a concentrated belt of metallic elements in a liquid state. That much is universally accepted. Almost everything beyond that—how the layers interact, what causes them to behave in the way they do, what they will do at any time in the future—is a matter of at least some uncertainty, and generally quite a lot of uncertainty.
Volcanologists may or may not be the worst scientists in the world at making predictions, but they are without question the worst in the world at realizing how bad their predictions are.
5 – Life Itself
Lonely Planet
The real terror of the deep, however, is the bends—not so much because they are unpleasant, though of course they are, as because they are so much more likely. The air we breathe is 80 per cent nitrogen. Put the human body under pressure, and that nitrogen is transformed into tiny bubbles that migrate into the blood and tissues. If the pressure is changed too rapidly—as with a too-quick ascent by a diver—the bubbles trapped within the body will begin to fizz in exactly the manner of a freshly opened bottle of champagne, clogging tiny blood vessels, depriving cells of oxygen and causing pain so excruciating that sufferers are prone to bend double in agony—hence “the bends.”
Apart from avoiding high-pressure environments altogether, only two strategies are reliably successful against the bends. The first is to suffer only a very short exposure to the changes in pressure. That is why the free divers I mentioned earlier can descend to depths of 150 metres without ill effect. They don’t stay down long enough for the nitrogen in their system to dissolve into their tissues. The other solution is to ascend by careful stages. This allows the little bubbles of nitrogen to dissipate harmlessly.
Without the Moon’s steadying influence, the Earth would wobble like a dying top, with goodness knows what consequences for climate and weather. The Moon’s steady gravitational influence keeps the Earth spinning at the right speed and angle to provide the sort of stability necessary for the long and successful development of life. This won’t go on for ever. The Moon is slipping from our grasp at a rate of about 4 centimetres a year. In another two billion years it will have receded so far that it won’t keep us steady and we will have to come up with some other solution, but in the meantime you should think of it as much more than just a pleasant feature in the night sky.
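The recession figure is easy to sanity-check. A minimal sketch, assuming the quoted 4 centimetres per year stays constant (in reality the rate varies over geological time):

```python
# Rough check: how far does the Moon recede in two billion years
# at ~4 cm per year? (Assumes a constant rate, which is a simplification.)

recession_per_year_m = 0.04          # 4 centimetres, in metres
years = 2_000_000_000                # two billion years

total_recession_km = recession_per_year_m * years / 1000
print(f"Extra distance: {total_recession_km:,.0f} km")       # 80,000 km

current_distance_km = 384_400        # mean Earth-Moon distance today
print(f"Relative increase: {total_recession_km / current_distance_km:.0%}")  # ~21%
```

At that rate the Moon gains only about a fifth of its current distance over two billion years; the point is that even so modest a retreat is enough to loosen its stabilizing grip.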
There are ninety-two naturally occurring elements on the Earth, plus a further twenty or so that have been created in labs, but some of these we can immediately put to one side—as, in fact, chemists themselves tend to do. Not a few of our earthly chemicals are surprisingly little known. Astatine, for instance, is practically unstudied. It has a name and a place on the periodic table (next door to Marie Curie’s polonium), but almost nothing else. The problem isn’t scientific indifference, but rarity. There just isn’t much astatine out there. The most elusive element of all, however, appears to be francium, which is so rare that it is thought that our entire planet may contain, at any given moment, fewer than twenty francium atoms. Altogether, only about thirty of the naturally occurring elements are widespread on Earth, and barely half a dozen are of central importance to life.
What sets the carbon atom apart is that it is shamelessly promiscuous. It is the party animal of the atomic world, latching on to many other atoms (including itself) and holding tight, forming molecular conga lines of hearty robustness—the very trick of nature necessary to build proteins and DNA. As Paul Davies has written: “If it wasn’t for carbon, life as we know it would be impossible. Probably any sort of life would be impossible.” Yet carbon is not all that plentiful even in us who so vitally depend on it. Of every 200 atoms in your body, 126 are hydrogen, 51 are oxygen, and just 19 are carbon. Other elements are critical not for creating life but for sustaining it. We need iron to manufacture haemoglobin, and without it we would die. Cobalt is necessary for the creation of vitamin B12. Potassium and a very little sodium are literally good for your nerves. Molybdenum, manganese and vanadium help to keep your enzymes purring. Zinc—bless it—oxidizes alcohol.
By and large, if an element doesn’t naturally find its way into our systems—if it isn’t soluble in water, say—we tend to be intolerant of it.
I have brought you a long way to make a small point: a big part of the reason that Earth seems so miraculously accommodating is that we evolved to suit its conditions.
The physicist Richard Feynman used to make a joke about a posteriori conclusions—reasoning from known facts back to possible causes. “You know, the most amazing thing happened to me tonight,” he would say. “I saw a car with the licence plate ARW 357. Can you imagine? Of all the millions of licence plates in the state, what was the chance that I would see that particular one tonight? Amazing!” His point, of course, is that it is easy to make any banal situation seem extraordinary if you treat it as fateful.
Into the Troposphere
Temperature is really just a measure of the activity of molecules. At sea level, air molecules are so thick that one molecule can move only the tiniest distance—about eight-millionths of a centimetre, to be precise—before banging into another. Because trillions of molecules are constantly colliding, a lot of heat gets exchanged. But at the height of the thermosphere, at 80 kilometres or more, the air is so thin that any two molecules will be miles apart and hardly ever come into contact. So although each molecule is very warm, there are few interactions between them and thus little heat transference. This is good news for satellites and spaceships, because if the exchange of heat were more efficient any manmade object orbiting at that level would burst into flame.
At the equator the convection process is generally stable and the weather predictably fair, but in temperate zones the patterns are far more seasonal, localized and random, which results in an endless battle between systems of high-pressure and low-pressure air. Low-pressure systems are created by rising air, which conveys water molecules into the sky, forming clouds and eventually rain. Warm air can hold more moisture than cool air, which is why tropical and summer storms tend to be the heaviest. Thus low areas tend to be associated with cloud and rain, and highs generally spell sunshine and fair weather. When two such systems meet, it often becomes manifest in the clouds.
Coriolis effect
The Earth revolves at a brisk 1,675 kilometres an hour at the equator, though as you move towards the poles the speed slopes off considerably, to about 900 kilometres an hour in London or Paris, for instance.
The reason for this is self-evident when you think about it. If you are on the equator the spinning Earth has to carry you quite a distance—about 40,000 kilometres—to get you back to the same spot, whereas if you stand beside the North Pole you may need to travel only a few metres to complete a revolution; yet in both cases it takes twenty-four hours to get you back to where you began. Therefore, it follows that the closer you get to the equator the faster you must be spinning.
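The figures above follow from simple geometry: the ground's tangential speed falls off with the cosine of latitude. A quick sketch, assuming a 40,075 km equatorial circumference and a 24-hour rotation (the sidereal day and Earth's oblateness shift the numbers slightly):

```python
import math

EQUATORIAL_CIRCUMFERENCE_KM = 40_075   # Earth's circumference at the equator
HOURS_PER_ROTATION = 24                # solar day; the sidereal day is ~23.93 h

def spin_speed_kmh(latitude_deg: float) -> float:
    """Tangential speed of the ground at a given latitude, in km/h."""
    circumference_here = EQUATORIAL_CIRCUMFERENCE_KM * math.cos(math.radians(latitude_deg))
    return circumference_here / HOURS_PER_ROTATION

for place, lat in [("Equator", 0.0), ("London", 51.5), ("Near pole", 89.9)]:
    print(f"{place:>10}: {spin_speed_kmh(lat):7.0f} km/h")
```

This gives about 1,670 km/h at the equator and closer to 1,040 km/h at London's latitude of 51.5°N, so the passage's "about 900" is a loose rounding.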
The Coriolis effect explains why anything moving through the air in a straight line laterally to the Earth’s spin will, given enough distance, seem to curve to the right in the northern hemisphere and to the left in the southern as the Earth revolves beneath it.
The Bounding Main
Because water is so ubiquitous we tend to overlook what an extraordinary substance it is. Almost nothing about it can be used to make reliable predictions about the properties of other liquids, and vice versa. If you knew nothing of water and based your assumptions on the behaviour of compounds most chemically akin to it—hydrogen selenide or hydrogen sulphide, notably—you would expect it to boil at minus 93 degrees Celsius and to be a gas at room temperature.
Most liquids when chilled contract by about 10 per cent. Water does too, but only down to a point. Once it is within whispering distance of freezing, it begins—perversely, beguilingly, extremely improbably—to expand. By the time it is solid, it is almost a tenth more voluminous than it was before. Because it expands, ice floats on water—“an utterly bizarre property,” according to John Gribbin. If it lacked this splendid waywardness, ice would sink, and lakes and oceans would freeze from the bottom up. Without surface ice to hold heat in, the water’s warmth would radiate away, leaving it even chillier and creating yet more ice. Soon even the oceans would freeze and almost certainly stay that way for a very long time, probably for ever—hardly the conditions to nurture life.
Thankfully for us, water seems unaware of the rules of chemistry or laws of physics.
There are 1.3 billion cubic kilometres of water on Earth and that is all we’re ever going to get. The system is closed: practically speaking, nothing can be added or subtracted. The water you drink has been around doing its job since the Earth was young. By 3.8 billion years ago, the oceans had (at least more or less) achieved their present volumes.
In 1977 came one of the most important and startling biological discoveries of the twentieth century.
In that year Alvin found teeming colonies of large organisms living on and around deep-sea vents off the Galápagos Islands—tube worms over 3 metres long, clams 30 centimetres wide, shrimps and mussels in profusion, wriggling spaghetti worms. They all owed their existence to vast colonies of bacteria that were deriving their energy and sustenance from hydrogen sulphides—compounds profoundly toxic to surface creatures—that were pouring steadily from the vents. It was a world independent of sunlight, oxygen or anything else normally associated with life. This was a living system based not on photosynthesis but on chemosynthesis, an arrangement that biologists would have dismissed as preposterous had anyone been imaginative enough to suggest it.
It had been known for centuries that rivers carry minerals to the sea and that these minerals combine with ions in the ocean water to form salts. So far no problem. But what was puzzling was that the salinity levels of the sea were stable. Millions of gallons of fresh water evaporate from the ocean daily, leaving all their salts behind, so logically the seas ought to grow more salty with the passing years, but they don’t. Something takes an amount of salt out of the water equivalent to the amount being put in. For a very long time, no-one could figure out what could be responsible for this.
Alvin’s discovery of the deep-sea vents provided the answer. Geophysicists realized that the vents were acting much like the filters in a fish tank. As water is taken down into the Earth’s crust, salts are stripped from it, and eventually clean water is blown out again through the chimney stacks. The process is not swift—it can take up to ten million years to clean an ocean—but if you are not in a hurry it is marvellously efficient.
We are astoundingly, sumptuously, radiantly ignorant of life beneath the seas. Even the most substantial ocean creatures are often remarkably little known to us—including the most mighty of them all, the great blue whale.
We are remarkably ignorant of the dynamics that rule life in the sea. While marine life is poorer than it ought to be in areas that have been overfished, in some naturally impoverished waters there is far more life than there ought to be.
The Rise of Life
Life emerged so swiftly, in fact, that some authorities think it must have had help—perhaps a good deal of help. The idea that earthly life might have arrived from space has a surprisingly long and even occasionally distinguished history.
Whatever prompted life to begin, it happened just once. That is the most extraordinary fact in biology, perhaps the most extraordinary fact we know.
Everything that has ever lived, plant or animal, dates its beginnings from the same primordial twitch. At some point in an unimaginably distant past some little bag of chemicals fidgeted to life. It absorbed some nutrients, gently pulsed, had a brief existence. This much may have happened before, perhaps many times. But this ancestral packet did something additional and extraordinary: it cleaved itself and produced an heir. A tiny bundle of genetic material passed from one living entity to another, and has never stopped moving since. It was the moment of creation for us all. Biologists sometimes call it the Big Birth.
“Wherever you go in the world, whatever animal, plant, bug or blob you look at, if it is alive, it will use the same dictionary and know the same code. All life is one,” says Matt Ridley. We are all the result of a single genetic trick handed down from generation to generation over nearly four billion years, to such an extent that you can take a fragment of human genetic instruction and patch it into a faulty yeast cell and the yeast cell will put it to work as if it were its own. In a very real sense, it is its own.
If you were to step from a time machine into that ancient Archaean world, you would very swiftly scamper back inside, for there was no more oxygen to breathe on the Earth back then than there is on Mars today. It was also full of noxious vapours from hydrochloric and sulphuric acids powerful enough to eat through clothing and blister skin. Nor would it have provided the clean and glowing vistas depicted in the poster in Victoria Bennett’s office. The chemical stew that was the atmosphere then would have allowed little sunlight to reach the Earth’s surface. What little you could see would be illumined only briefly by bright and frequent lightning flashes. In short, it was the Earth, but an Earth we wouldn’t recognize as our own.
At some point in the first billion years of life, cyanobacteria, or blue-green algae, learned to tap into a freely available resource—the hydrogen that exists in spectacular abundance in water. They absorbed water molecules, supped on the hydrogen and released the oxygen as waste, and in so doing invented photosynthesis.
Photosynthesis is “undoubtedly the most important single metabolic innovation in the history of life on the planet”—and it was invented not by plants but by bacteria.
As cyanobacteria proliferated the world began to fill with O2, to the consternation of those organisms that found it poisonous—which in those days was all of them. In an anaerobic (or non-oxygen-using) world, oxygen is extremely poisonous. Our white blood cells actually use oxygen to kill invading bacteria. That oxygen is fundamentally toxic often comes as a surprise to those of us who find it so convivial to our well-being, but that is only because we have evolved to exploit it. To other things it is a terror. It is what turns butter rancid and makes iron rust. Even we can tolerate it only up to a point. The oxygen level in our cells is only about a tenth the level found in the atmosphere.
The new oxygen-using organisms had two advantages. Oxygen was a more efficient way to produce energy, and it vanquished competitor organisms. Some retreated into the oozy, anaerobic world of bogs and lake bottoms. Others did likewise but then later (much later) migrated to the digestive tracts of beings like you and me.
Untold numbers of others failed to adapt and died.
It has been suggested that the cyanobacteria at Shark Bay are perhaps the most slowly evolving organisms on Earth, and certainly now they are among the rarest. Having prepared the way for more complex life forms, they were then grazed out of existence nearly everywhere by the very organisms whose existence they had made possible. (They exist at Shark Bay because the waters are too saline for the creatures that would normally feast on them.)
One reason life took so long to grow complex was that the world had to wait until the simpler organisms had oxygenated the atmosphere sufficiently.
But once the stage was set, and apparently quite suddenly, an entirely new type of cell arose—one containing a nucleus and other little bodies collectively called organelles (from a Greek word meaning “little tools”).
The process is thought to have started when some blundering or adventuresome bacterium either invaded or was captured by some other bacterium and it turned out that this suited them both. The captive bacterium became, it is thought, a mitochondrion. This mitochondrial invasion (or endosymbiotic event, as biologists like to term it) made complex life possible. (In plants a similar invasion produced chloroplasts, which enable plants to photosynthesize.)
Small World
We depend totally on bacteria to pluck nitrogen from the air and convert it into useful nucleotides and amino acids for us. It is a prodigious and gratifying feat. As Margulis and Sagan note, to do the same thing industrially (as when making fertilizers) manufacturers must heat the source materials to 500 degrees Celsius and squeeze them to 300 times normal pressures. Bacteria do the same thing all the time without fuss, and thank goodness, for no larger organism could survive without the nitrogen they pass on. Above all, microbes continue to provide us with the air we breathe and to keep the atmosphere stable. Microbes, including the modern versions of cyanobacteria, supply the greater part of the planet’s breathable oxygen. Algae and other tiny organisms bubbling away in the sea blow out about 150 billion kilograms of the stuff every year.
And they are amazingly prolific. The more frantic among them can yield a new generation in less than ten minutes
About once every million divisions, they produce a mutant. Usually this is bad luck for the mutant—for an organism, change is always risky—but just occasionally the new bacterium is endowed with some accidental advantage, such as the ability to elude or shrug off an attack of antibiotics. With this ability to evolve rapidly goes another, even scarier advantage. Bacteria share information. Any bacterium can take pieces of genetic coding from any other.
Essentially, as Margulis and Sagan put it, all bacteria swim in a single gene pool.
Any adaptive change that occurs in one area of the bacterial universe can spread to any other. It’s rather as if a human could go to an insect to get the necessary genetic coding to sprout wings or walk on ceilings. It means that from a genetic point of view bacteria have become a single superorganism—tiny, dispersed, but invincible.
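The figures quoted above (a new generation every ten minutes or so, a mutant roughly once per million divisions) compound alarmingly fast. A toy calculation, assuming unchecked doubling with no resource limits or cell death, so a deliberate overestimate:

```python
GENERATION_MINUTES = 10            # "a new generation in less than ten minutes"
MUTATION_RATE = 1 / 1_000_000      # "about once every million divisions"

def population_after(hours: float, start: int = 1) -> int:
    """Cells descended from `start` ancestors under unchecked doubling."""
    generations = int(hours * 60) // GENERATION_MINUTES
    return start * 2 ** generations

day = population_after(24)         # 144 doublings from a single cell
print(f"Cells after one day:    {day:.3e}")                    # ~2.23e43
print(f"Divisions to get there: {day - 1:.3e}")                # one division per new cell
print(f"Expected mutants:       {(day - 1) * MUTATION_RATE:.3e}")
```

Even at a mutation rate below one in a million, a single day of idealized growth would in principle throw off on the order of 10^37 mutants, which is why resistance genes appear so readily.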
There are few environments in which bacteria aren’t prepared to live.
At depth, microbes shrink in size and become extremely sluggish. The liveliest of them may divide no more than once a century, some no more than perhaps once in five hundred years. As The Economist has put it: “The key to long life, it seems, is not to do too much.” When things are really tough, bacteria are prepared to shut down all systems and wait for better times.
Slime moulds are, make no mistake, among the most interesting organisms in nature. When times are good, they exist as one-celled individuals, much like amoebas. But when conditions grow tough, they crawl to a central gathering place and become, almost miraculously, a slug. The slug is not a thing of beauty and it doesn’t go terribly far—usually just from the bottom of a pile of leaf litter to the top, where it is in a slightly more exposed position—but for millions of years this may well have been the niftiest trick in the universe.
And it doesn’t stop there. Having hauled itself up to a more favourable locale, the slime mould transforms itself yet again, taking on the form of a plant. By some curious orderly process the cells reconfigure, like the members of a tiny marching band, to make a stalk atop which forms a bulb known as a fruiting body. Inside the fruiting body are millions of spores which, at the appropriate moment, are released to the wind to blow away to become single-celled organisms that can start the process again.
Slowly his new scheme began to catch on among microbiologists. Botanists and zoologists were much slower to appreciate its virtues. It’s not hard to see why. In Woese’s model, the worlds of botany and zoology are relegated to a few twigs on the outermost branch of the Eukaryan limb. Everything else belongs to unicellular beings.
“These folks were brought up to classify in terms of gross morphological similarities and differences,” Woese told an interviewer in 1996. “The idea of doing so in terms of molecular sequence is a bit hard for many of them to swallow.” In short, if they couldn’t see a difference with their own eyes, they didn’t like it.
The world belongs to the very small—and it has done for a very long time.
Why, you are bound to ask at some point in your life, do microbes so often want to hurt us? What possible satisfaction could there be to a microbe in having us grow feverish or chilled, or disfigured with sores, or above all deceased? A dead host, after all, is hardly going to provide long-term hospitality.
To begin with, it is worth remembering that most micro-organisms are neutral or even beneficial to human well-being.
Making a host unwell has certain benefits for the microbe. The symptoms of an illness often help to spread the disease. Vomiting, sneezing and diarrhoea are excellent methods of getting out of one host and into position for boarding another. The most effective strategy of all is to enlist the help of a mobile third party. Infectious organisms love mosquitoes because the mosquito’s sting delivers them directly into a bloodstream where they can get straight to work before the victim’s defence mechanisms can figure out what’s hit them. This is why so many grade A diseases—malaria, yellow fever, dengue fever, encephalitis and a hundred or so other less celebrated but often rapacious maladies—begin with a mosquito bite. It is a fortunate fluke for us that HIV, the AIDS agent, isn’t among them—at least not yet. Any HIV the mosquito sucks up on its travels is dissolved by the mosquito’s own metabolism. When the day comes that the virus mutates its way around this, we may be in real trouble.
It is a mistake, however, to consider the matter too carefully from the position of logic because micro-organisms clearly are not calculating entities. They don’t care what they do to you any more than you care what distress you cause when you slaughter them by the millions with a soapy shower or a swipe of deodorant.
The only time your continuing well-being is of consequence to a pathogen is when it kills you too well. If they eliminate you before they can move on, then they may well die out themselves. History, Jared Diamond notes, is full of diseases that “once caused terrifying epidemics and then disappeared as mysteriously as they had come.”
A great deal of sickness arises not because of what the organism has done to you but because of what your body is trying to do to the organism. In its quest to rid the body of pathogens, the immune system sometimes destroys cells or damages critical tissues, so often when you are unwell what you are feeling is not the pathogens but your own immune responses. Anyway, getting sick is a sensible response to infection. Sick people retire to their beds and thus are less of a threat to the wider community.
We would have much more success with bacteria if we weren’t so profligate with our best weapon against them: antibiotics. Remarkably, by one estimate some 70 per cent of the antibiotics used in the developed world are given to farm animals, often routinely in stock feed, simply to promote growth or as a precaution against infection. Such applications give bacteria every opportunity to evolve a resistance to them. It is an opportunity that they have enthusiastically seized.
Our lifestyles invite epidemics. Air travel makes it possible to spread infectious agents across the planet with amazing ease. An Ebola virus could begin the day in, say, Benin, and finish it in New York or Hamburg or Nairobi, or all three. It means also that medical authorities increasingly need to be acquainted with pretty much every malady that exists everywhere, but of course they are not.
Life Goes On
It isn’t easy to become a fossil. The fate of nearly all living organisms— over 99.9 per cent of them—is to compost down to nothingness. When your spark is gone, every molecule you own will be nibbled off you or sluiced away to be put to use in some other system.
“The history of life,” wrote Gould, “is a story of massive removal followed by differentiation within a few surviving stocks, not the conventional tale of steadily increasing excellence, complexity, and diversity.” Evolutionary success, it appeared, was a lottery.
Gould’s Wonderful Life was published in 1989 to general critical acclaim and was a great commercial success. What wasn’t generally known was that many scientists didn’t agree with Gould’s conclusions at all, and that it was all soon to get very ugly. In the context of the Cambrian, “explosion” would soon have more to do with modern tempers than ancient physiological facts. In fact, we now know, complex organisms existed at least a hundred million years before the Cambrian.
But the real heat directed at Gould arose from the belief that many of his conclusions were simply mistaken or carelessly inflated.
Alas, it turns out the Cambrian explosion may not have been quite so explosive as all that. The Cambrian animals, it is now thought, were probably there all along, but were just too small to see. Once again it was trilobites that provided the clue—in particular, that seemingly mystifying appearance of different types of trilobite in widely scattered locations around the globe, all at more or less the same time.
“The Cambrian explosion, if that’s the word for it, probably was more an increase in size than a sudden appearance of new body types,” Fortey says. “And it could have happened quite swiftly, so in that sense I suppose it was an explosion.”
Goodbye To All That
Closer inspection showed that lichens were more interesting than magical.
They are in fact a partnership between fungi and algae. The fungi excrete acids which dissolve the surface of the rock, freeing minerals that the algae convert into food sufficient to sustain both. It is not a very exciting arrangement, but it is a conspicuously successful one. The world has more than twenty thousand species of lichens.
Like most things that thrive in harsh environments, lichens are slow-growing.
It may take a lichen more than half a century to attain the dimensions of a shirt button. Those the size of dinner plates, writes David Attenborough, are therefore “likely to be hundreds if not thousands of years old.”
“They simply exist,” Attenborough adds, “testifying to the moving fact that life even at its simplest level occurs, apparently, just for its own sake.”
It is easy to overlook this thought that life just is. As humans we are inclined to feel that life must have a point. We have plans and aspirations and desires. We want to take constant advantage of all the intoxicating existence we’ve been endowed with. But what’s life to a lichen? Yet its impulse to exist, to be, is every bit as strong as ours—arguably even stronger. If I were told that I had to spend decades being a furry growth on a rock in the woods, I believe I would lose the will to go on. Lichens don’t. Like virtually all living things, they will suffer any hardship, endure any insult, for a moment’s additional existence. Life, in short, just wants to be. But—and here’s an interesting point—for the most part it doesn’t want to be much.
The principal reason oxygen levels were able to build so robustly throughout the period of early terrestrial life was that much of the world’s landscape was dominated by giant tree ferns and vast swamps, which by their boggy nature disrupted the normal carbon recycling process. Instead of completely rotting down, falling fronds and other dead vegetative matter accumulated in rich, wet sediments, which were eventually squeezed into the vast coal beds that sustain much economic activity even now.
The heady levels of oxygen clearly encouraged outsized growth. The oldest indication of a surface animal yet found is a track left 350 million years ago by a millipede-like creature on a rock in Scotland. It was over a metre long. Before the era was out some millipedes would reach lengths more than double that.
“Evolution may abhor a vacuum,” wrote the palaeobiologist Steven M. Stanley, “but it often takes a long time to fill it.”
Darwin’s Singular Notion
One thing Darwin didn’t do on the voyage was propound the theory (or even a theory) of evolution. For a start, evolution as a concept was already decades old by the 1830s. Darwin’s own grandfather, Erasmus, had paid tribute to evolutionary principles in a poem of inspired mediocrity called “The Temple of Nature” years before Charles was even born. It wasn’t until the younger Darwin was back in England and read Thomas Malthus’s Essay on the Principle of Population (which proposed that increases in food supply could never keep up with population growth for mathematical reasons) that the idea began to percolate through his mind that life is a perpetual struggle and that natural selection was the means by which some species prospered while others failed.
Specifically, what Darwin saw was that all organisms compete for resources, and those that had some innate advantage would prosper and pass on that advantage to their offspring. By such means would species continuously improve.
6 – The Road to Us
The Mysterious Biped
Scientists have a natural tendency to interpret finds in the way that most flatters their stature. It is a rare palaeontologist indeed who announces that he has found a cache of bones but that they are nothing to get excited about. As John Reader understatedly observes in the book Missing Links, “It is remarkable how often the first interpretations of new evidence have confirmed the preconceptions of its discoverer.”
Bipedalism is a demanding and risky strategy. It means refashioning the pelvis into a full load-bearing instrument. To preserve the required strength, the birth canal in the female must be comparatively narrow. This has two very significant immediate consequences and one longer-term one. First, it means a lot of pain for any birthing mother and a greatly increased danger of fatality to mother and baby both. Moreover, to get the baby’s head through such a tight space it must be born while its brain is still small—and while the baby, therefore, is still helpless. This means long-term infant care, which in turn implies solid male-female bonding.
But stepping out onto the open savanna also clearly left the early hominids much more exposed. An upright hominid could see better, but could also be seen better. Even now, as a species we are almost preposterously vulnerable in the wild. Nearly every large animal you care to name is stronger, faster and toothier than us. Faced with attack, modern humans have only two advantages. We have a good brain, with which we can devise strategies; and we have hands, with which we can fling or brandish hurtful objects. We are the only creature that can harm at a distance. We can thus afford to be physically vulnerable.
“There is simply no compelling reason we know of to explain why human brains got large,” says Tattersall. Huge brains are demanding organs: they make up only 2 per cent of the body’s mass, but devour 20 per cent of its energy. They are also comparatively picky in what they use as fuel. If you never ate another morsel of fat, your brain would not complain because it won’t touch the stuff. It wants glucose instead, and lots of it, even if it means short-changing other organs. As Guy Brown notes: “The body is in constant danger of being depleted by a greedy brain, but cannot afford to let the brain go hungry as that would rapidly lead to death.” A big brain needs more food and more food means increased risk.
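The 2 per cent / 20 per cent figures imply a striking per-gram appetite. A quick check (the 2,000 kcal daily intake is an assumed typical value, not from the text):

```python
# From the figures above: the brain is ~2% of body mass but uses ~20%
# of the body's energy. Per unit mass, how much hungrier is it than
# the body's average tissue?
brain_mass_fraction = 0.02
brain_energy_fraction = 0.20

relative_burn = brain_energy_fraction / brain_mass_fraction
print(f"Gram for gram, the brain burns {relative_burn:.0f}x the body average.")

daily_kcal = 2000   # assumed typical intake (illustrative, not from the text)
print(f"Brain's share: ~{daily_kcal * brain_energy_fraction:.0f} kcal/day")
```

Ten times the body's average burn rate, or roughly 400 kcal a day on a typical diet, which makes concrete why a big brain "needs more food".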
The Restless Ape
Sometime about a million and a half years ago, some forgotten genius of the hominid world did an unexpected thing. He (or very possibly she) took one stone and carefully used it to shape another. The result was a simple teardrop-shaped hand-axe, but it was the world’s first piece of advanced technology.
It was so superior to existing tools that soon others were following the inventor’s lead and making hand-axes of their own. Eventually whole societies existed that seemed to do little else.
Now here’s the mystery. When early modern humans—the ones who would eventually become us—started to move out of Africa something over a hundred thousand years ago, Acheulean tools were the technology of choice. These early Homo sapiens loved their Acheulean tools, too. They carried them vast distances. Sometimes they even took unshaped rocks with them to make into tools later on. They were, in a word, devoted to the technology. But although Acheulean tools have been found throughout Africa, Europe and western and central Asia, they are almost never found in the Far East. This is deeply puzzling.
In the 1940s a Harvard palaeontologist named Hallam Movius drew something called the Movius line, dividing the side with Acheulean tools from the one without. The line runs in a southeasterly direction across Europe and the Middle East to the vicinity of modern-day Calcutta and Bangladesh. Beyond the Movius line, across the whole of southeast Asia and into China, only the older, simpler Oldowan tools have been found. We know that Homo sapiens went far beyond this point, so why would they carry an advanced and treasured stone technology to the edge of the Far East and then just abandon it?
“That troubled me for a long time,” recalls Alan Thorne of the Australian National University in Canberra. “The whole of modern anthropology was built round the idea that humans came out of Africa in two waves—a first wave of Homo erectus, which became Java Man and Peking Man and the like, and a later, more advanced wave of Homo sapiens, which displaced the first lot. Yet to accept that you must believe that Homo sapiens got so far with their more modern technology and then, for whatever reason, gave it up. It was all very puzzling, to say the least.”
As it turned out, there would be a great deal more to puzzle over.
“On the whole,” she went on more sombrely, “the genetic record supports the out of Africa hypothesis. But then you find these anomalous clusters, which most geneticists prefer not to talk about. There’s huge amounts of information that would be available to us if only we could understand it, but we don’t yet. We’ve barely begun.”
“Data from any single gene cannot really tell you anything so definitive. If you follow the mitochondrial DNA backwards, it will take you to a certain place—to an Ursula or Tara or whatever. But if you take any other bit of DNA, any gene at all, and trace it back, it will take you someplace else altogether.”
It was a little, I gathered, like following a road randomly out of London and finding that eventually it ends at John O’Groats, and concluding from this that anyone in London must therefore have come from the north of Scotland. They might have come from there, of course, but equally they could have arrived from any of hundreds of other places. In this sense, according to Harding, every gene is a different highway, and we have only barely begun to map the routes. “No single gene is ever going to tell you the whole story,” she said.
So genetic studies aren’t to be trusted?
“Oh you can trust the studies well enough, generally speaking. What you can’t trust are the sweeping conclusions that people often attach to them.”
Goodbye
If this book has a lesson, it is that we are awfully lucky to be here—and by “we” I mean every living thing. To attain any kind of life at all in this universe of ours appears to be quite an achievement. As humans we are doubly lucky, of course. We enjoy not only the privilege of existence, but also the singular ability to appreciate it and even, in a multitude of ways, to make it better. It is a trick we have only just begun to grasp.
We have arrived at this position of eminence in a stunningly short time.
Behaviourally modern humans have been around for no more than about 0.0001 per cent of Earth’s history—almost nothing, really—but even existing for that little while has required a nearly endless string of good fortune.
We really are at the beginning of it all. The trick, of course, is to make sure we never find the end. And that, almost certainly, will require a lot more than lucky breaks.