Endure

There are plenty of running books out there, and as a runner I have read many of them. But they are insiders’ accounts written for other insiders: whether or not a runner should forefoot-strike or heel-strike, or aim for a cadence of 180 strides per minute, is a question of significance only to runners whose self-involvement extends all the way to the soles of their feet.

Chapter 1
The Unforgiving Minute
 
 
If you can fill the unforgiving minute
With sixty seconds’ worth of distance run,
Yours is the Earth and everything that’s in it. . . .
—Rudyard Kipling1
 
 
On a frigid Saturday night in the university town of Sherbrooke, Quebec, in February 1996, I was pondering—yet again—one of the great enigmas of endurance: John Landy. The stocky Australian is one of the most famous bridesmaids in sport, the second man in history to run a sub-four-minute mile. In the spring of 1954, after years of concerted effort, centuries of timed races, millennia of evolution, Roger Bannister beat him to it by just forty-six days. The enduring image of Landy, immortalized in countless posters and a larger-than-life bronze statue in Vancouver, British Columbia, comes from later that summer, at the Empire Games, when the world’s only four-minute milers clashed head-to-head for the first and only time. Having led the entire race, Landy glanced over his left shoulder as he entered the final straightaway—just as Bannister edged past on his right. That split-second tableau of defeat confirmed him as, in the words of a British newspaper headline, the quintessential “nearly man.”2
 
But Landy’s enigma isn’t that he wasn’t quite good enough. It’s that he clearly was. In pursuit of the record, he had run 4:02 on six different occasions, and eventually declared, “Frankly, I think the four-minute mile is beyond my capabilities. Two seconds may not sound much, but to me it’s like trying to break through a brick wall.”3 Then, less than two months after Bannister blazed the trail, Landy ran 3:57.9 (his official mark in the record books is 3:58.0, since times were rounded to the nearest fifth of a second in that era), cleaving almost four seconds off his previous best and finishing 15 yards ahead of four-minute pace—a puzzlingly rapid, and bittersweet, transformation.
 
Like many milers before me and since, I was a Bannister disciple, with a creased and nearly memorized copy of his autobiography in permanent residence on my bedside table; but in that winter of 1996 I was seeing more and more Landy when I looked in the mirror. Since the age of fifteen, I’d been pursuing my own, lesser four-minute barrier—for 1,500 meters, a race that’s about 17 seconds shorter than a mile. I ran 4:02 in high school, and then, like Landy, hit a wall, running similar times again and again over the next four years. Now, as a twenty-year-old junior at McGill University, I was starting to face the possibility that I’d squeezed out every second my body had to offer. During the long bus ride from Montreal to Sherbrooke, where my teammates and I were headed for a meaningless early-season race on one of the slowest tracks in Canada, I remember staring out the window into the swirling snow and wondering if my long-sought moment of Landyesque transformation would ever arrive.
 
The story we’d heard, possibly apocryphal, was that the job of designing the Sherbrooke indoor track had been assigned to the university’s engineering department as a student project. Tasked with calculating the optimal angles for a 200-meter track, they’d plugged in numbers corresponding to the centripetal acceleration experienced by world-class 200-meter sprinters—forgetting the key fact that some people might want to run more than one lap at a time. The result was more like a cycling velodrome than a running track, with banks so steep that even most sprinters couldn’t run in the outside lanes without tumbling inward. 
 
For middle-distance runners like me, even the inside lane was ankle-breakingly awkward; races longer than a mile had to be held on the warm-up loop around the inside of the track.
To break four minutes, I would need to execute a perfectly calibrated run, pacing each lap just two-tenths of a second faster than my best time of 4:01.7. Sherbrooke, with its amusement-park track and an absence of good competition, was not the place for this supreme effort, I decided. Instead, I would run as easily as possible and save my energy for the following week. Then, in the race before mine, I watched my teammate Tambra Dunn sprint fearlessly to an enormous early lead in the women’s 1,500, click off lap after metronomic lap all alone, and finish with a scorching personal best time that qualified her for the national collegiate championships. Suddenly my obsessive calculating and endless strategizing seemed ridiculous and overwrought. I was here to run a race; why not just run as hard as I could?
 
Reaching the “limits of endurance” is a concept that seems yawningly obvious, until you actually try to explain it. Had you asked me in 1996 what was holding me back from sub-four, I would have mumbled something about maximal heart rate, lung capacity, slow-twitch muscle fibers, lactic acid accumulation, and various other buzzwords I’d picked up from the running magazines I devoured. On closer examination, though, none of those explanations hold up. You can hit the wall with a heart rate well below max, modest lactate levels, and muscles that still twitch on demand. To their frustration, physiologists have found that the will to endure can’t be reliably tied to any single physiological variable.
 
Part of the challenge is that endurance is a conceptual Swiss Army knife. It’s what you need to finish a marathon; it’s also what enables you to keep your sanity during a cross-country flight crammed into the economy cabin with a flock of angry toddlers. The use of the word endurance in the latter case may seem metaphorical, but the distinction between physical and psychological endurance is actually less clear-cut than it appears. Think of Ernest Shackleton’s ill-fated Antarctic expedition, and the crew’s two-year struggle for survival after their ship, the Endurance, was crushed in the ice in 1915.
 
Was it the toddlers-on-a-plane type of endurance that enabled them to persevere, or straightforward physical fortitude? Can you have one without the other?
A suitably versatile definition that I like, borrowing from researcher Samuele Marcora, is that endurance is “the struggle to continue against a mounting desire to stop.”5 That’s actually Marcora’s description of “effort” rather than endurance (a distinction we’ll explore further in Chapter 4), but it captures both the physical and mental aspects of endurance. What’s crucial is the need to override what your instincts are telling you to do (slow down, back off, give up), and the sense of elapsed time. Taking a punch without flinching requires self-control, but endurance implies something more sustained: holding your finger in the flame long enough to feel the heat; filling the unforgiving minute with sixty seconds’ worth of distance run.
 
The time that elapses can be seconds, or it can be years. During the 2015 National Basketball Association playoffs, LeBron James’s biggest foe was—with all due respect to Golden State defender Andre Iguodala—fatigue.6 He’d played 17,860 minutes in the preceding five seasons, more than 2,000 minutes ahead of anyone else in the league. In the semis, he surprisingly asked to be pulled from a game during a tense overtime period, changed his mind, drained a three-pointer followed by a running jumper with 12.8 seconds left to seal the victory, then collapsed to the floor in a widely meme-ified swoon after the buzzer. By Game 4 of the finals, he could barely move: “I gassed out,” he admitted after being held scoreless in the final quarter. It’s not that he was acutely out of breath; it was the steady drip of fatigue accumulating over days, weeks, and months that just as surely pushed James to the limits of his endurance.
 
At the opposite end of the spectrum, even the greatest sprinters in the world fight against what John Smith, the coach of former 100-meter world-record holder Maurice Greene, euphemistically calls the “Negative Acceleration Phase.”7 The race may be over in ten seconds, but most sprinters hit their top speed after 50 to 60 meters, sustain it briefly, then start to fade. Usain Bolt’s ability to stride magisterially away from his competitors at the end of a race? A testament to his endurance: he’s slowing down a little less (or a little later) than everyone else. In Bolt’s 9.58-second world-record race at the 2009 World Championships in Berlin, his last 20 meters was five hundredths of a second slower than the previous 20 meters, but he still extended his lead over the rest of the field.8
 
At the same world championships, Bolt went on to set the 200-meter world record with a time of 19.19 seconds. A crucial detail: he ran the first half of the race in 9.92 seconds—an amazing time, considering the 200 starts on a curve, but still slower than his 100-meter record. It’s barely perceptible, but he was pacing himself, deliberately spreading his energy out to maximize his performance over the whole distance. This is why the psychology and physiology of endurance are inextricably linked: any task lasting longer than a dozen or so seconds requires decisions, whether conscious or unconscious, on how hard to push and when.
 
Even in repeated all-out weightlifting efforts—brief five-second pulls that you’d think would be a pure measure of muscular force—studies have found that we can’t avoid pacing ourselves: your “maximum” force depends on how many reps you think you have left.9
 
This inescapable importance of pacing is why endurance athletes are obsessed with their splits. As John L. Parker Jr. wrote in his cult running classic, Once a Runner, “A runner is a miser, spending the pennies of his energy with great stinginess, constantly wanting to know how much he has spent and how much longer he will be expected to pay. He wants to be broke at precisely the moment he no longer needs his coin.” In my race in Sherbrooke, I knew I needed to run each 200-meter lap in just under 32 seconds in order to break four minutes, and I had spent countless training hours learning the feel of this exact pace. So it was a shock, an eye-widening physical jolt to my system, to hear the timekeeper call out, as I completed my first circuit of the track, “Twenty-seven!”
 
The science of how we pace ourselves turns out to be surprisingly complex (as we’ll see in later chapters). You judge what’s sustainable based not only on how you feel, but on how that feeling compares to how you expected to feel at that point in the race. As I started my second lap, I had to reconcile two conflicting inputs: the intellectual knowledge that I had set off at a recklessly fast pace, and the subjective sense that I felt surprisingly, exhilaratingly good. I fought off the panicked urge to slow down, and came through the second lap in 57 seconds—and still felt good. Now I knew for sure that something special was happening.
 
As the race proceeded, I stopped paying attention to the split times. They were so far ahead of the 4:00 schedule I’d memorized that they no longer conveyed any useful information. I simply ran, hoping to reach the finish before the gravitational pull of reality reasserted its grip on my legs. I crossed the line in 3 minutes, 52.7 seconds, a personal best by a full nine seconds. In that one race, I’d improved more than my cumulative improvement since my first season of running, five years earlier. Poring through my training logs—as I did that night, and have many times since—revealed no hint of the breakthrough to come. My workouts suggested, at most, incremental gains compared to previous years.
 
After the race, I debriefed with a teammate who had timed my lap splits for me. His watch told a very different story of the race. My first lap had taken 30 seconds, not 27; my second lap was 60, not 57. Perhaps the lap counter calling the splits at the finish had started his watch three seconds late; or perhaps his effort to translate on the fly from French to English for my benefit had resulted in a delay of a few seconds. Either way, he’d misled me into believing that I was running faster than I really was, while feeling unaccountably good. As a result, I’d unshackled myself from my pre-race expectations and run a race nobody could have predicted.
 
After Roger Bannister came the deluge—at least, that’s how the story is often told. Typical of the genre is The Winning Mind Set, a 2006 self-help book by Jim Brault and Kevin Seaman, which uses Bannister’s four-minute mile as a parable about the importance of self-belief. “[W]ithin one year, 37 others did the same thing,” they write. “In the year after that, over 300 runners ran a mile in less than four minutes.” Similar larger-than-life (that is, utterly fictitious) claims are a staple in motivational seminars and across the Web: once Bannister showed the way, others suddenly brushed away their mental barriers and unlocked their true potential.
 
As interest in the prospects of a sub-two-hour marathon heats up, this narrative crops up frequently as evidence that the new challenge, too, is primarily psychological.10 Skeptics, meanwhile, assert that belief has nothing to do with it—that humans, in their current form, are simply incapable of running that fast for that long. The debate, like its predecessor six decades ago, offers a compelling real-world test bed for exploring the various theories about endurance and human limits that scientists are currently investigating. But to draw any meaningful conclusions, it’s important to get the facts right. For one thing, Landy was the only other person to join the sub-four club within a year of Bannister’s run, and just four others followed the next year. It wasn’t until 1979, more than twenty years later, that Spanish star José Luis González became the three hundredth man to break the barrier.11
 
And there’s more to Landy’s sudden breakthrough, after being stuck for so many races, than simple mind over muscle. His six near-misses all came at low-key meets in Australia where competition was sparse and weather often unfavorable. He finally embarked on the long voyage to Europe, where tracks were fast and competition plentiful, in the spring of 1954—only to discover, just three days after he arrived, that Bannister had already beaten him to the goal. In Turku, he had a pacer for the first time, a local runner who led the first lap and a half at a brisk pace. And more important, he had real competition: Chris Chataway, one of the two men who had paced Bannister’s sub-four run, was nipping at Landy’s heels until partway through the final lap. It’s not hard to believe that Landy would have broken four that day even if Roger Bannister had never existed.
 
Still, I can’t entirely dismiss the mind’s role—in no small part because of what happened in the wake of my own breakthrough. In my next attempt at the distance after Sherbrooke, I ran 3:49. In the race after that, I crossed the line, as confused as I was exhilarated, in 3:44, qualifying me for that summer’s Olympic Trials. In the space of three races, I’d somehow been transformed. The TV coverage of the 1996 trials is on YouTube, and as the camera lingers on me before the start of the 1,500 final (I’m lined up next to Graham Hood, the Canadian record-holder at the time), you can see that I’m still not quite sure how I got there.12 My eyes keep darting around in panic, as if I expect to glance down and discover that I’m still in my pajamas.
 
I spent a lot of time over the next decade chasing further breakthroughs, with decidedly mixed results. Knowing (or believing) that your ultimate limits are all in your head doesn’t make them any less real in the heat of a race. And it doesn’t mean you can simply decide to change them. If anything, my head held me back as often as it pushed me forward during those years, to my frustration and befuddlement. “It should be mathematical,” is how U.S. Olympic runner Ian Dobson described the struggle to understand the ups and downs of his own performances, “but it’s not.” I, too, kept searching for the formula—the one that would allow me to calculate, once and for all, my limits.13 If I knew that I had run as fast as my body was capable of, I reasoned, I’d be able to walk away from the sport with no regrets.
 
At twenty-eight, after an ill-timed stress fracture in my sacrum three months before the 2004 Olympic Trials, I finally decided to move on. I returned to school for a journalism degree, and then started out as a general assignment reporter with a newspaper in Ottawa. But I found myself drawn back to the same lingering questions. Why wasn’t it mathematical? What held me back from breaking four for so long, and what changed when I did? I left the newspaper and started writing as a freelancer about endurance sports—not so much about who won and who lost, but about why. I dug into the scientific literature and discovered that there was a vigorous (and sometimes rancorous) ongoing debate about those very questions.
 
Physiologists spent most of the twentieth century on an epic quest to understand how our bodies fatigue. They cut the hind legs off frogs and jolted the severed muscles with electricity until they stopped twitching; lugged cumbersome lab equipment on expeditions to remote Andean peaks; and pushed thousands of volunteers to exhaustion on treadmills, in heat chambers, and on every drug you can think of. What emerged was a mechanistic—almost mathematical—view of human limits: like a car with a brick on its gas pedal, you go until the tank runs out of gas or the radiator boils over, then you stop.
 
But that’s not the whole picture. With the rise of sophisticated techniques to measure and manipulate the brain, researchers are finally getting a glimpse of what’s happening in our neurons and synapses when we’re pushed to our limits. It turns out that, whether it’s heat or cold, hunger or thirst, or muscles screaming with the supposed poison of “lactic acid,” what matters in many cases is how the brain interprets these distress signals. With new understanding of the brain’s role come new—and sometimes worrisome—opportunities. At its Santa Monica, California, headquarters, Red Bull has experimented with transcranial direct-current stimulation, applying a jolt of electricity through electrodes to the brains of elite triathletes and cyclists, seeking a competitive edge. The British military has funded studies of computer-based brain training protocols to enhance the endurance of its troops, with startling results. And even subliminal messages can help or hurt your endurance: a picture of a smiling face, flashed in 16-millisecond bursts, boosts cycling performance by 12 percent compared to frowning faces.
 
Over the past decade, I’ve traveled to labs in Europe, South Africa, Australia, and across North America, and spoken to hundreds of scientists, coaches, and athletes who share my obsession with decoding the mysteries of endurance. I started out with the hunch that the brain would play a bigger role than generally acknowledged. That turned out to be true, but not in the simple it’s-all-in-your-head manner of self-help books. Instead, brain and body are fundamentally intertwined, and to understand what defines your limits under any particular set of circumstances, you have to consider them both together. That’s what the scientists described in the following chapters have been doing, and the surprising results of their research suggest to me that, when it comes to pushing our limits, we’re just getting started.
 
Chapter 2
The Human Machine
 
 
After fifty-six days of hard skiing, Henry Worsley glanced down at the digital display of his GPS and stopped.1 “That’s it,” he announced with a grin, driving a ski pole into the wind-packed snow. “We’ve made it!” It was early evening on January 9, 2009, one hundred years to the day since British explorer Ernest Shackleton had planted a Union Jack in the name of King Edward VII at this precise location on the Antarctic plateau: 88 degrees and 23 minutes south, 162 degrees east. In 1909, it was the farthest south any human had ever traveled, just 112 miles from the South Pole.2 Worsley, a gruff veteran of the British Special Air Service who had long idolized Shackleton, cried “small tears of relief and joy” behind his goggles, for the first time since he was ten years old. (“My poor physical state accentuated my vulnerability,” he later explained.) Then he and his companions, Will Gow and Henry Adams, unfurled their tent and fired up the kettle. It was −31 degrees Fahrenheit.
For Shackleton, 88°23' south was a bitter disappointment. Six years earlier, as a member of Robert Falcon Scott’s Discovery expedition, he’d been part of a three-man team that set a farthest-south record of 82°17'. But he had been sent home in disgrace after Scott claimed that his physical weakness had held the others back. Shackleton returned for the 1908–09 expedition eager to vindicate himself by beating his former mentor to the pole, but his own four-man inland push was a struggle from the start. By the time Socks, the team’s fourth and final Manchurian pony, disappeared into a crevasse on the Beardmore glacier six weeks into the march, they were already on reduced rations and increasingly unlikely to reach their goal. Still, Shackleton decided to push onward as far as possible. Finally, on January 9, he acknowledged the inevitable: “We have shot our bolt,” he wrote in his diary. “Homeward bound at last. Whatever regrets may be, we have done our best.”
To Worsley, a century later, that moment epitomized Shackleton’s worth as a leader: “The decision to turn back,” he argued, “must be one of the greatest decisions taken in the whole annals of exploration.”3 Worsley was a descendant of the skipper of Shackleton’s ship in the Endurance expedition; Gow was Shackleton’s great-nephew by marriage; and Adams was the great-grandson of Shackleton’s second in command on the 1909 trek. The three of them had decided to honor their forebears by retracing the 820-mile route without any outside help. They would then take care of unfinished ancestral business by continuing the last 112 miles to the South Pole, where they would be picked up by a Twin Otter and flown home. Shackleton, in contrast, had to turn around and walk the 820 miles back to his base camp—a return journey that, like most in the great age of exploration, turned into a desperate race against death.
What were the limits that stalked Shackleton? It wasn’t just beard-freezingly cold; he and his men also climbed more than 10,000 feet above sea level, meaning that each icy breath provided only two-thirds as much oxygen as their bodies expected. With the early demise of their ponies, they were man-hauling sleds that had initially weighed as much as 500 pounds, putting continuous strain on their muscles. Studies of modern polar travelers suggest they were burning somewhere between 6,000 and 10,000 calories per day—and doing it on half rations.4 By the end of their journey, they would have consumed close to a million calories over the course of four relentless months, similar to the totals of the subsequent Scott expedition of 1911–12. South African scientist Tim Noakes argues these two expeditions were “the greatest human performances of sustained physical endurance of all time.”
Shackleton’s understanding of these various factors was limited. He knew that he and his men needed to eat, of course, but beyond that the inner workings of the human body remained shrouded in mystery. That was about to change, though. A few months before Shackleton’s ship, the Nimrod, sailed toward Antarctica from the Isle of Wight in August 1907, researchers at the University of Cambridge published an account of their research on lactic acid, an apparent enemy of muscular endurance that would become intimately familiar to generations of athletes.5 While the modern view of lactic acid has changed dramatically in the century since then (for starters, what’s found inside the body is actually lactate, a negatively charged ion, rather than lactic acid), the paper marked the beginning of a new era of investigation into human endurance—because if you understand how a machine works, you can calculate its ultimate limits.6
 
 
The nineteenth-century Swedish chemist Jöns Jacob Berzelius is now best remembered for devising the modern system of chemical notation—H2O and CO2 and so on—but he was also the first, in 1807, to draw the connection between muscle fatigue and a recently discovered substance found in soured milk. Berzelius noticed that the muscles of hunted stags seemed to contain high levels of this “lactic” acid, and that the amount of acid depended on how close to exhaustion the animal had been driven before its death.7 (To be fair to Berzelius, chemists were still almost a century away from figuring out what “acids” really were.8 We now know that lactate from muscle and blood, once extracted from the body, combines with protons to produce lactic acid. That’s what Berzelius and his successors measured, which is why they believed that it was lactic acid rather than lactate that played a role in fatigue. For the remainder of the book, we’ll refer to lactate except in historical contexts.)
What the presence of lactic acid in the stags’ muscles signified was unclear, given how little anyone knew about how muscles worked. At the time, Berzelius himself subscribed to the idea of a “vital force” that powered living things and existed outside the realm of ordinary chemistry.9 But vitalism was gradually being supplanted by “mechanism,” the idea that the human body is basically a machine, albeit a highly complex one, obeying the same basic laws as pendulums and steam engines. A series of nineteenth-century experiments, often crude and sometimes bordering on comical, began to offer hints about what might power this machine. In 1865, for example, a pair of German scientists collected their own urine while hiking up the Faulhorn, an 8,000-foot peak in the Bernese Alps, then measured its nitrogen content to establish that protein alone couldn’t supply all the energy needed for prolonged exertion.10 As such findings accumulated, they bolstered the once-heretical view that human limits are, in the end, a simple matter of chemistry and math.
These days, athletes can test their lactate levels with a quick pinprick during training sessions (and some companies now claim to be able to measure lactate in real time with sweat-analyzing adhesive patches).11 But even confirming the presence of lactic acid was a formidable challenge for early investigators; Berzelius, in his 1808 book, Föreläsningar i Djurkemien (“Lectures in Animal Chemistry”), devotes six dense pages to his recipe for chopping fresh meat, squeezing it in a strong linen bag, cooking the extruded liquid, evaporating it, and subjecting it to various chemical reactions until, having precipitated out the dissolved lead and alcohols, you’re left with a “thick brown syrup, and ultimately a lacquer, having all the character of lactic acid.”
 
Not surprisingly, subsequent attempts to follow this sort of procedure produced a jumble of ambiguous results that left everyone confused. That was still the situation in 1907, when Cambridge physiologists Frederick Hopkins and Walter Fletcher took on the problem. “[I]t is notorious,” they wrote in the introduction to their paper, “that . . . there is hardly any important fact concerning the lactic acid formation in muscle which, advanced by one observer, has not been contradicted by some other.” Hopkins was a meticulous experimentalist who went on to acclaim as the codiscoverer of vitamins, for which he won a Nobel Prize; Fletcher was an accomplished runner who, as a student in the 1890s, was among the first to complete the 320-meter circuit around the courtyard of Cambridge’s Trinity College while its ancient clock was striking twelve—a challenge famously immortalized in the movie Chariots of Fire (though Fletcher reportedly cut the corners).12
Hopkins and Fletcher plunged the muscles into cold alcohol immediately after each test they performed. This crucial advance kept levels of lactic acid more or less constant during the subsequent processing stages, which still involved grinding up the muscle with a mortar and pestle and then measuring its acidity. Using this newly accurate technique, the two men investigated muscle fatigue by experimenting on frog legs hung in long chains of ten to fifteen pairs connected by zinc hooks. By applying electric current at one end of the chain, they could make all the legs contract at once; after two hours of intermittent contractions, the muscles would be totally exhausted and unable to produce even a feeble twitch.
The results were clear: exhausted muscles contained three times as much lactic acid as rested ones, seemingly confirming Berzelius’s suspicion that it was a by-product—or perhaps even a cause—of fatigue. And there was an additional twist: the amount of lactic acid decreased when the fatigued frog muscles were stored in oxygen, but increased when they were deprived of oxygen. At last, a recognizably modern picture of how muscles fatigue was coming into focus—and from this point on, new findings started to pile up rapidly.
The importance of oxygen was confirmed the next year by Leonard Hill, a physiologist at the London Hospital Medical College, in the British Medical Journal.13 He administered pure oxygen to runners, swimmers, laborers, and horses, with seemingly astounding results. A marathon runner improved his best time over a trial distance of three-quarters of a mile by 38 seconds. A tram horse was able to climb a steep hill in two minutes and eight seconds instead of three and a half minutes, and it wasn’t breathing hard at the top.
One of Hill’s colleagues even accompanied a long-distance swimmer named Jabez Wolffe on his attempt to become the second person to swim across the English Channel. After more than thirteen hours of swimming, when he was about to give up, Wolffe inhaled oxygen through a long rubber tube, and was immediately rejuvenated. “The sculls had to be again taken out and used to keep the boat up with the swimmer,” Hill noted; “before, he and it had been drifting with the tide.” (Wolffe, despite being slathered head-to-toe with whiskey and turpentine and having olive oil rubbed on his head, had to be pulled from the water an agonizing quarter mile from the French shore due to cold. He ultimately made twenty-two attempts at the Channel crossing, all unsuccessful.)14
As the mysteries of muscle contraction were gradually unraveled, an obvious question loomed: what were the ultimate limits? Nineteenth-century thinkers had debated the idea that a “law of Nature” dictated each person’s greatest potential physical capacities. “[E]very living being has from its birth a limit of growth and development in all directions beyond which it cannot possibly go by any amount of forcing,” Scottish physician Thomas Clouston argued in 1883.15 “The blacksmith’s arm cannot grow beyond a certain limit. The cricketer’s quickness cannot be increased beyond this inexorable point.” But what was that point? It was a Cambridge protégé of Fletcher, Archibald Vivian Hill (he hated his name and was known to all as A. V.), who in the 1920s made the first credible measurements of maximal endurance.16
You might think the best test of maximal endurance is fairly obvious: a race. But race performance depends on highly variable factors like pacing. You may have the greatest endurance in the world, but if you’re an incurable optimist who can’t resist starting out at a sprint (or a coward who always sets off at a jog), your race times will never accurately reflect what you’re physically capable of. You can strip away some of this variability by using a time-to-exhaustion test instead: How long can you run with the treadmill set at a certain speed? Or how long can you keep generating a certain power output on a stationary bike? And that is, in fact, how many research studies on endurance are now conducted. But this approach still has flaws. Most important, it depends on how motivated you are to push to your limits. It also depends on how well you slept last night, what you ate before the test, how comfortable your shoes are, and any number of other possible distractions and incentives. It’s a test of your performance on that given day, not of your ultimate capacity to perform.
In 1923, Hill and his colleague Hartley Lupton, then based at the University of Manchester, published the first of a series of papers investigating what they initially called “the maximal oxygen intake”—a quantity now better known by its scientific shorthand, VO2max.17 (Modern scientists call it maximal oxygen uptake, since it’s a measure of how much oxygen your muscles actually use rather than how much you breathe in.) Hill had already shared a Nobel Prize the previous year, for muscle physiology studies involving careful measurement of the heat produced by muscle contractions. He was a devoted runner—a habit shared by many of the physiologists we’ll meet in subsequent chapters. For the experiments on oxygen use, in fact, he was his own best subject, reporting in the 1923 paper that he was, at thirty-five, “in fair general training owing to a daily slow run of about one mile before breakfast.” He was also an enthusiastic competitor in track and cross-country races: “indeed, to tell the truth, it may well have been my struggles and failures, on track and field, and the stiffness and exhaustion that sometimes befell, which led me to ask many questions which I have attempted to answer here.”18
The experiments Hill and his colleagues performed involved running in tight circles around an 85-meter grass loop in Hill’s garden (a standard track, in comparison, is 400 meters long) with an air bag strapped to their backs connected to a breathing apparatus to measure their oxygen consumption.19 The faster they ran, the more oxygen they consumed—up to a point. Eventually, they reported, oxygen intake “reaches a maximum beyond which no effort can drive it.”20 Crucially, they could still accelerate to faster speeds; however, their oxygen intake no longer followed. This plateau is your VO2max, a pure and objective measure of endurance capacity that is, in theory, independent of motivation, weather, phase of the moon, or any other possible excuse. Hill surmised that VO2max reflected the ultimate limits of the heart and circulatory system—a measurable constant that seemed to reveal the size of the “engine” an athlete was blessed with.
With this advance, Hill now had the means to calculate the theoretical maximum performance of any runner at any distance. At low speeds, the effort is primarily aerobic (meaning “with oxygen”), since oxygen is required for the most efficient conversion of stored food energy into a form your muscles can use. Your VO2max reflects your aerobic limits. At higher speeds, your legs demand energy at a rate that aerobic processes can’t match, so you have to draw on fast-burning anaerobic (“without oxygen”) energy sources. The problem, as Hopkins and Fletcher had shown in 1907, is that muscles contracting without oxygen generate lactic acid. Your muscles’ ability to tolerate high levels of lactic acid—what we would now call anaerobic capacity—is the other key determinant of endurance, Hill concluded, particularly in events lasting less than about ten minutes.
In his twenties, Hill reported, he had run best times of 53 seconds for the quarter mile, 2:03 for the half mile, 4:45 for the mile, and 10:30 for two miles—creditable times for the era, though, he modestly emphasized, not “first-class.” (Or rather, in keeping with scientific practice at the time, these feats were attributed to an anonymous subject known as “H.,” who happened to be the same age and speed as Hill.) The exhaustive tests in his back garden showed that his VO2max was 4.0 liters of oxygen per minute, and his lactic acid tolerance would allow him to accumulate a further “oxygen debt” of about 10 liters. Using these numbers, along with measurements of his running efficiency, he could plot a graph that predicted his best race times with surprising accuracy.
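The logic behind Hill’s graph can be sketched as a toy calculation: the oxygen available for a race is the aerobic supply (VO2max multiplied by time) plus the fixed anaerobic “oxygen debt,” while covering distance costs a roughly constant amount of oxygen. The VO2max of 4.0 liters per minute and the 10-liter oxygen debt are Hill’s figures from the text; the oxygen cost of about 17 liters per kilometer is an illustrative assumption, not a measured value, and the sketch deliberately ignores how running economy changes with speed.

```python
# Toy version of Hill's energy-budget reasoning (not his actual equations):
# oxygen needed to cover the distance = aerobic supply + anaerobic debt,
# i.e.  ECONOMY * d = VO2MAX * t + OXYGEN_DEBT,  solved for time t.

VO2MAX = 4.0        # aerobic ceiling, liters of O2 per minute (Hill's measured value)
OXYGEN_DEBT = 10.0  # anaerobic "oxygen debt" capacity, liters (Hill's estimate)
ECONOMY = 17.0      # assumed oxygen cost of running, liters per km (illustrative)

def predicted_time_minutes(distance_km: float) -> float:
    """Predicted best time, in minutes, for a race of the given distance."""
    return (ECONOMY * distance_km - OXYGEN_DEBT) / VO2MAX

for label, km in [("mile", 1.609), ("two miles", 3.219)]:
    print(f"{label}: {predicted_time_minutes(km):.2f} min")
```

With these assumed numbers the sketch lands in the right neighborhood of Hill’s middle-distance times (a mile in a bit over four minutes, two miles in around eleven), though it breaks down at sprint distances, where, as Hill found, muscle viscosity rather than oxygen supply sets the limit.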
Hill shared these results enthusiastically. “Our bodies are machines, whose energy expenditures may be closely measured,” he declared in a 1926 Scientific American article titled “The Scientific Study of Athletics.” He published an analysis of world records in running, swimming, cycling, rowing, and skating, at distances ranging from 100 yards to 100 miles.21 For the shortest sprints, the shape of the world record curve was apparently dictated by “muscle viscosity,” which Hill studied during a stint at Cornell University by strapping a dull, magnetized hacksaw blade around the chest of a sprinter who then ran past a series of coiled-wire electromagnets—a remarkable early system for precision electric timing. At longer distances, lactic acid and then VO2max bent the world-record curve just as predicted.
But there was a mystery at the longest distances. Hill’s calculations suggested that if the speed was slow enough, your heart and lungs should be able to deliver enough oxygen to your muscles to keep them fully aerobic. There should be a pace, in other words, that you could sustain pretty much indefinitely. Instead, the data showed a steady decline: the 100-mile running record was substantially slower than the 50-mile record, which in turn was slower than the 25-mile record. “Consideration merely of oxygen intake and oxygen debt will not suffice to explain the continued fall of the curve,” Hill acknowledged. He penciled in a dashed near-horizontal line showing where he thought the ultra-distance records ought to be, and concluded that the longer records were weaker primarily because “the greatest athletes have confined themselves to distances not greater than 10 miles.”
 
By the time Henry Worsley and his companions finally reached the South Pole in 2009, they had skied 920 miles towing sleds that initially weighed 300 pounds. Entering the final week, Worsley knew that his margin of error had all but evaporated. At forty-eight, he was a decade older than either Adams or Gow, and by the end of each day’s ski he was struggling to keep up with them. On New Year’s Day, with 125 miles still to go, he turned down Adams’s offer to take some weight off his sled. Instead, he buried his emergency backup rations in the snow—a calculated risk in exchange for a savings of eighteen pounds.
 
“Soon I was finding each hour a worrying struggle, and was starting to become very conscious of my weakening condition,” he recalled. He began to lag behind and arrive at camp ten to fifteen minutes after the others.
On the eve of their final push to the pole, Worsley took a solitary walk outside the tent, as he’d done every evening throughout the trip before crawling into his sleeping bag. Over the course of the journey, he had sometimes spent these quiet moments contemplating the jagged glaciers they had just traversed and distant mountains still to come; other times, the view was simply “a never-ending expanse of nothingness.” On this final night, he was greeted by a spectacular display in the polar twilight: the sun was shaped like a diamond, surrounded by an incandescent circle of white-hot light and flanked on either side by matching “sun dogs,” an effect created when the sun’s rays are refracted by a haze of prism-shaped ice crystals. It was the first clear display of sun dogs during the entire journey. Surely, Worsley told himself, this was an omen—a sign from the Antarctic that it was finally releasing its grip on him.
The next day was anticlimactic, a leisurely five-mile coda to their epic trip before entering the warm embrace of the Amundsen-Scott South Pole Station. They had done it, and Worsley was flooded with a sense of relief and accomplishment. The Antarctic, though, was not yet finished with him after all. Worsley had spent three decades in the British Army, including tours in the Balkans and Afghanistan with the elite Special Air Service (SAS), the equivalent of America’s SEALs or Delta Force. He rode a Harley, taught needlepoint to prison inmates, and had faced a stone-throwing mob in Bosnia.22 The polar voyage, though, had captivated him: it demanded every ounce of his reserves, and in doing so it expanded his conception of what he was capable of. In challenging the limits of his own endurance, he had finally found a worthy adversary. Worsley was hooked.
Three years later, in late 2011, Worsley returned to the Antarctic for a centenary reenactment of Robert Falcon Scott and Roald Amundsen’s race to the South Pole. Amundsen’s team, skiing along an eastern route with 52 dogs that hauled sleds and eventually served as food, famously reached the Pole on December 14, 1911. Scott’s team, struggling over the longer route that Shackleton had blazed, with malfunctioning mechanical sleds and Manchurian ponies that couldn’t handle the ice and cold, reached it thirty-four days later only to find Amundsen’s tent along with a polite note (“As you probably are the first to reach this area after us, I will ask you kindly to forward this letter to King Haakon VII. If you can use any of the articles left in the tent please do not hesitate to do so. The sledge left outside may be of use to you. With kind regards I wish you a safe return . . .”) awaiting them.23 While Amundsen’s return journey was uneventful, Scott’s harrowing ordeal showed just what was at stake. A combination of bad weather, bad luck, and shoddy equipment, combined with a botched “scientific” calculation of their calorie needs, left Scott and his men too weak to make it back.24 Starving and frostbitten, they lay in their tent for ten snowy days, unable to cover the final eleven miles to their food depot, before dying.
A century later, Worsley led a team of six soldiers along Amundsen’s route, becoming the first man to complete both classic routes to the pole. Still, he wasn’t done. In 2015, he returned for yet another centenary reenactment, this time of the Imperial Trans-Antarctic Expedition—Shackleton’s most famous (and most brutally demanding) voyage of all.
In 1909, Shackleton’s prudent decision to turn back short of the pole had undoubtedly saved him and his men, but it was still a perilously close call. Their ship had been instructed to wait until March 1; Shackleton and one other man reached a nearby point late on February 28 and lit a wooden weather station on fire to get the ship’s attention and signal for rescue. In the years after this brush with disaster, and with Amundsen having claimed the South Pole bragging rights in 1911, Shackleton at first resolved not to return to the southern continent at all. But, like Worsley, he couldn’t stay away.
Shackleton’s new plan was to make the first complete crossing of the Antarctic continent, from the Weddell Sea near South America to the Ross Sea near New Zealand. En route to the start, his ship, the Endurance, was seized by the ice of the Weddell Sea, forcing Shackleton and his men to spend the winter of 1915 on the frozen expanse. The ship was eventually crushed by shifting ice, forcing the men to embark on a now-legendary odyssey that climaxed with Shackleton leading an 800-mile crossing over some of the roughest seas on earth—in an open lifeboat!—to rugged South Georgia Island, where there was a tiny whaling station from which they could call for rescue. The navigator behind this remarkable feat: Frank Worsley, Henry Worsley’s forebear and the origin of his obsession. While the original expedition failed to achieve any of its goals, the three-year saga ended up providing one of the most gripping tales of endurance from the great age of exploration—Edmund Hillary, conqueror of Mount Everest, called it “the greatest survival story of all time”—and again earned Shackleton praise for bringing his men home safely. (Three men did die on the other half of the expedition, laying in supplies at the trek’s planned finishing point.)
Once more, Worsley decided to complete his hero’s unfinished business. But this would be different. His previous polar treks had covered only half the actual distance, since he had flown home from the South Pole both times. Completing the full journey wouldn’t just add more distance and weight to haul; it would also make it correspondingly harder to judge the fine line between stubborn persistence and recklessness. In 1909, Shackleton had turned back not because he couldn’t reach the pole, but because he feared he and his men wouldn’t make it back home. In 1912, Scott had chosen to push on and paid the ultimate price. This time, Worsley resolved to complete the entire 1,100-mile continental crossing—and to do it alone, unsupported, unpowered, hauling all his gear behind him. On November 13, he set off on skis from the southern tip of Berkner Island, 100 miles off the Antarctic coast, towing a 330-pound sled across the frozen sea.25
That night, in the daily audio diary he uploaded to the Web throughout the trip, he described the sounds he had become so familiar with on his previous expeditions: “The squeak of the ski poles gliding into the snow, the thud of the sledge over each bump, and the swish of the skis sliding along . . . And then, when you stop, the unbelievable silence.”
 
At first, A. V. Hill’s attempts to calculate the limits of human performance were met with bemusement. In 1924, he traveled to Philadelphia to give a lecture at the Franklin Institute on “The Mechanism of Muscle.” “At the end,” he later recalled, “I was asked, rather indignantly, by an elderly gentleman, what use I supposed all these investigations were which I had been describing.” Hill first tried to explain the practical benefits that might follow from studying athletes but soon decided that honesty was the best policy: “To tell you the truth,” he admitted, “we don’t do it because it is useful but because it’s amusing.”26 That was the headline in the newspaper the next day: “Scientist Does It Because It’s Amusing.”
In reality, the practical and commercial value of Hill’s work was obvious right from the start. His VO2max studies were funded by Britain’s Industrial Fatigue Research Board, which also employed his two coauthors.27 What better way to squeeze the maximum productivity from workers than by calculating their physical limits and figuring out how to extend them? Other labs around the world soon began pursuing similar goals. The Harvard Fatigue Laboratory, for example, was established in 1927 to focus on “industrial hygiene,” with the aim of studying the various causes and manifestations of fatigue “to determine their interrelatedness and the effect upon work.”28 The Harvard lab went on to produce some of the most famous and groundbreaking studies of record-setting athletes, but its primary mission of enhancing workplace productivity was signaled by its location—in the basement of the Harvard Business School.