Friday, January 21, 2011

Reliance on Indirect Evidence Fuels Dark Matter Doubts
Pinning down the universe's missing mass remains one of cosmology's biggest challenges

Infrared picture of M31 from WISE
IN THE DARK? Studies of spiral galaxies such as Andromeda, pictured here in infrared wavelengths, have provided clues to dark matter's gravitational effects. But more direct evidence of dark matter's existence, and clues to its true nature, have remained elusive.
Image: NASA/JPL-Caltech/UCLA
Most of the matter in the universe remains missing in action—at least, that's long been the standard cosmological paradigm.

Now, however, a small but vocal group of cosmologists is challenging the dark matter tenets of the widely accepted cosmological model, which holds that the universe is composed of roughly 70 percent dark energy, 25 percent dark matter, and only 5 percent normal (or baryonic) matter. Dark matter, whatever it is, exerts a gravitational pull but otherwise interacts with ordinary matter only weakly, if at all. It neither emits nor absorbs light, hence its name.

Evidence of dark matter's influence on the cosmos stretches back to the 1930s and has only gotten stronger in recent years. NASA's groundbreaking cosmology satellite, the Wilkinson Microwave Anisotropy Probe, has in the decade since its launch delivered a robust indirect detection of dark matter's footprint on the ancient echo of light known as the cosmic microwave background. And dark matter's effects are also inferred in gravitational interactions around clusters of galaxies as well as around individual galaxies themselves.

But the dark stuff itself has yet to be detected: not directly in particle physics laboratories as a new subatomic particle, not via neutrino telescopes probing the subatomic realm, and not as concrete evidence of hidden matter from telescopes observing across the electromagnetic spectrum. Some astrophysicists are hopeful that the Fermi Gamma-Ray Space Telescope will deliver corroborating, if still somewhat indirect, evidence for the mutual annihilation of dark matter particles in the galaxy.

"Dark matter comes about because people unquestionably find mass discrepancies in galaxies and clusters of galaxies," says Mordehai Milgrom, an astrophysicist at the Weizmann Institute of Science in Rehovot, Israel.

Stars at the very edges of spiral galaxies, for instance, rotate much faster than Newtonian gravity acting on the visible matter alone can explain; the picture makes sense only if astrophysicists either modify gravity itself or invoke additional gravitational acceleration from an unseen source of mass such as dark matter.
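To see why in the simplest terms, here is a minimal Newtonian sketch (an illustration of the standard argument, not a calculation taken from the article). A star on a circular orbit of radius $r$ about a galaxy's center moves at

$$ v_c(r) = \sqrt{\frac{G\,M(<r)}{r}}, $$

where $M(<r)$ is the mass enclosed within the orbit. If the visible stars and gas supplied essentially all of that mass, $v_c$ should fall off roughly as $1/\sqrt{r}$ beyond the luminous disk. Measured rotation curves instead stay nearly flat, which under Newtonian gravity requires $M(<r)$ to keep growing roughly in proportion to $r$ far past the visible edge.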

"The mass of visible matter falls very short of what is needed to account for the gravity shown by these systems," Milgrom says. "The mainstream assumes it is due to the presence of dark matter, while others, like me, think that the theory of gravity has to be modified."

Milgrom's doubts about dark matter have long kept him on the fringe of professional astronomical circles. But as Rutgers University astronomer Jerry Sellwood notes, "people are beginning to think that we should have found some independent evidence for dark matter, and that hasn't happened."

That is arguably a consequence of the theory itself, since dark matter is posited to interact only minimally with normal matter. But some observational campaigns have not seen dark matter's effects where they are expected. Theory predicts that spiral galaxies, including our own Milky Way, are enveloped by massive dark matter halos that supply each galaxy's missing mass. Yet the Milky Way's own dark matter halo has yet to be detected, even indirectly. Its putative existence is inferred primarily from the motions of satellite galaxies such as the Magellanic Clouds, which orbit the Milky Way too quickly to be explained by the gravity of the visible matter alone.

More recently, there have also been predictions of a disk of dark matter that would reside in the galactic plane, co-rotating with the Milky Way itself. But in an analysis of the movements of some 300 stars located at least 6,000 light-years beyond the galactic plane, Christian Moni Bidin, an astronomer at the University of Concepción in Chile, and his colleagues conclude that there is "no compelling evidence" for such a dark disk. Given uncertainties in their own analysis, however, they acknowledge that such a disk's existence cannot be completely ruled out.
Moni Bidin, the lead author on a paper detailing the finding in the November 20 issue of The Astrophysical Journal Letters, says that one can always conclude that dark matter escapes detection because it has an exotic nature or unexpected properties. "But failing to detect it in indirect kinematical measurements such as ours," he says, "means finding a way out is harder."

Another dynamical complication comes from the so-called Tully-Fisher relation, which ties a galaxy's luminosity to its rotation velocity: the more luminous the galaxy, the faster it rotates.
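In its commonly quoted form (a standard approximation, not a figure given in the article), the relation is a tight power law,

$$ L \propto v^{\alpha}, $$

with $\alpha$ roughly 3 to 4 depending on the wavelength band observed; in its "baryonic" version, the total mass in stars and gas scales approximately as $M_b \propto v^4$, with remarkably little scatter.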

The measured rotation speeds on the outskirts of a spiral galaxy, Milgrom says, depend in "a very strict manner only on the total visible mass of the galaxy." But if the theory of dark matter is correct, then the speed of stars rotating on the galaxy's outskirts should also depend on the shape of the galaxy's dark matter halo.

"Dark matter halos should be lumpy, underinflated football shapes; not spherical," says Stacy McGaugh, an astronomer at the University of Maryland, College Park. "Statistically, that means we should see many [different galactic rotation] velocities for the same luminosity. We don't."

Instead, McGaugh says, the "baryonic tail wags the dark matter dog." In other words, astronomers can predict a galaxy's rotation curve from its stellar distribution alone. If dark matter were dominant, McGaugh argues, observers should not be able to predict rotation curves so well from the luminous matter they see.

"Because each dark matter halo should be unique, you should see lots of variation in rotation curves for the same galaxy," he says. "You don't expect the kind of uniformity that we observe in hundreds of galactic rotation curves."

Even if dark matter raises questions on such large galactic scales, particle physicists are hopeful that it will be detected in the lab. If dark matter particles in the sun, for instance, undergo self-annihilation, then such annihilation events could create high-energy neutrinos that would potentially be detectable with ground-based neutrino telescopes.

Then there are detectors, such as the Xenon100 experiment at Italy's Gran Sasso National Laboratory, built to register direct hits from particulate dark matter. Xenon100 is designed to search for the most favored dark matter particle candidate—the weakly interacting massive particle (WIMP)—by watching for signs that a WIMP has recoiled off an atom in a tank of liquid xenon. A recent analysis of an 11-day observing run in 2009, however, failed to identify any such dark particles, casting doubt on two competing groups' prior claims of possible dark matter signals.

One problem in making such detections is the uncertainty over dark matter's density in the local universe, says Chris Mihos, an astrophysicist at Case Western Reserve University. "Does the dark matter particle not exist," he wonders, "or are we just unlucky in terms of the local dark matter density?"

Current direct detection scenarios include potential dark matter particles with masses between one and 1,000 times the mass of a proton and with interaction "cross-sections" roughly one trillionth the size of a neutron.
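For orientation, a small unit-conversion sketch can translate those ranges into the units physicists usually quote. Reading a neutron's "size" as its geometric cross-sectional area, and the 0.8-femtometer radius used below, are assumptions made here for illustration, not figures from the article.

```python
import math

# Convert the article's rough WIMP parameter ranges into conventional units.
proton_mass_gev = 0.938                      # proton rest mass, GeV/c^2
mass_low = 1 * proton_mass_gev
mass_high = 1000 * proton_mass_gev
print(f"Mass range: ~{mass_low:.1f} to ~{mass_high:.0f} GeV/c^2")   # roughly 1 GeV to 1 TeV

# "One trillionth the size of a neutron," read here as one trillionth of the
# neutron's geometric cross-sectional area (0.8 fm radius is an assumed round number).
neutron_radius_cm = 0.8e-13                          # 0.8 femtometer, in centimeters
neutron_area_cm2 = math.pi * neutron_radius_cm**2    # about 2e-26 cm^2
wimp_cross_section_cm2 = neutron_area_cm2 * 1e-12
print(f"Implied cross-section scale: ~{wimp_cross_section_cm2:.0e} cm^2")   # about 2e-38 cm^2
```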

After each non-detection, McGaugh says, theorists continually redefine the interaction cross-section of WIMPs to safely undetectable levels. This kind of behavior, he adds, can spark a never-ending game of leapfrog between experimental physicists and theoreticians, allowing them to continue business as usual without ever revising their cosmology.

"There is a lot of misplaced certainty in the dark matter model—a feeling that it's not 'if' we directly detect dark matter, but 'when,'" Mihos says.

Or, as McGaugh puts it, "Once you convince yourself that the universe is full of an invisible substance that only interacts with ordinary matter through gravity, then it is virtually impossible to disabuse yourself of that notion. There is always a way to wiggle out of any observation."

New Subatomic Particle Could Help Explain the Mystery of Dark Matter
A flurry of evidence suggests that "sterile neutrinos" are not only real but common, and could be the stuff of dark matter


HIDDEN CLUE: Pulsars, including one inside this "guitar nebula," provide evidence of sterile neutrinos.
Image: Courtesy of Shami Chatterjee and James M. Cordes, Cornell University

Neutrinos are the most famously shy of particles, zipping through just about everything—your body, Earth, detectors specifically designed to catch them—with nary a peep. But compared with their heretofore hypothetical cousin the sterile neutrino, ordinary neutrinos are veritable firecrackers. Sterile neutrinos don’t even interact with ordinary matter via the weak force, the ephemeral hook that connects neutrinos to the everyday world. Recently, however, new experiments have revealed tantalizing evidence that sterile neutrinos are not only real but common. Some of them could even be the stuff of the mysterious dark matter astronomers have puzzled over for decades.
Physicists aren’t quite ready to make such dramatic pronouncements, but the results “will be extremely important—if they turn out to be correct,” says Alexander Kusenko of the University of California, Los Angeles.
How did scientists go about looking for particles that are virtually undetectable? Kusenko and Michael Loewenstein of the NASA Goddard Space Flight Center reasoned that if sterile neutrinos really are dark matter, they would occasionally decay into ordinary matter, producing a lighter neutrino and an x-ray photon, and it would make sense to search for these x-rays wherever dark matter is found. Using the Chandra x-ray telescope, they observed a nearby dwarf galaxy thought to be rich in dark matter and found an intriguing bump of x-rays at just the right wavelength.
Another piece of evidence comes from supernovae. If sterile neutrinos really do exist, supernovae would shoot them out in a tight stream along magnetic field lines, and the recoil from this blast would kick the resulting pulsars (the dense, spinning remnants of such explosions) through the cosmos. It turns out astronomers observe precisely that: pulsars whizzing through the universe at speeds of thousands of kilometers a second.
Astronomers don’t have to rely on the skies for evidence of sterile neutrinos, though. Scientists at Fermi National Accelerator Laboratory recently verified a 16-year-old experiment that sought the first evidence of these particles. The Fermilab scientists fired ordinary neutrinos through Earth at a detector half a kilometer away. They found that in flight, many of these neutrinos changed their identities in just the way they should if sterile neutrinos do in fact exist.
The next step is to confirm the results. Loewenstein and Kusenko recently repeated their observations with another space-based x-ray telescope, XMM-Newton, and Fermilab scientists are also setting up another run. The shyest elementary particles may not be able to evade their seekers for long.

How Language Shapes Thought
The languages we speak affect our perceptions of the world
By Lera Boroditsky | January 20, 2011




  • People communicate using a multitude of languages that vary considerably in the information they convey.
  • Scholars have long wondered whether different languages might impart different cognitive abilities.
  • In recent years empirical evidence for this causal relation has emerged, indicating that one’s mother tongue does indeed mold the way one thinks about many aspects of the world, including space and time.
  • The latest findings also hint that language is part and parcel of many more aspects of thought than scientists had previously realized.
    I am standing next to a five-year-old girl in Pormpuraaw, a small Aboriginal community on the western edge of Cape York in northern Australia. When I ask her to point north, she points precisely and without hesitation. My compass says she is right. Later, back in a lecture hall at Stanford University, I make the same request of an audience of distinguished scholars—winners of science medals and genius prizes. Some of them have come to this very room to hear lectures for more than 40 years. I ask them to close their eyes (so they don’t cheat) and point north. Many refuse; they do not know the answer. Those who do point take a while to think about it and then aim in all possible directions. I have repeated this exercise at Harvard and Princeton and in Moscow, London and Beijing, always with the same results.
    A five-year-old in one culture can do something with ease that eminent scientists in other cultures struggle with. This is a big difference in cognitive ability. What could explain it? The surprising answer, it turns out, may be language.
    The notion that different languages may impart different cognitive skills goes back centuries. Since the 1930s it has become associated with American linguists Edward Sapir and Benjamin Lee Whorf, who studied how languages vary and proposed ways that speakers of different tongues may think differently. Although their ideas met with much excitement early on, there was one small problem: a near complete lack of evidence to support their claims. By the 1970s many scientists had become disenchanted with the Sapir-Whorf hypothesis, and it was all but abandoned as a new set of theories claiming that language and thought are universal muscled onto the scene. But now, decades later, a solid body of empirical evidence showing how languages shape thinking has finally emerged. The evidence overturns the long-standing dogma about universality and yields fascinating insights into the origins of knowledge and the construction of reality. The results have important implications for law, politics and education.
    Under the Influence
    Around the world people communicate with one another using a dazzling array of languages—7,000 or so all told—and each language requires very different things from its speakers. For example, suppose I want to tell you that I saw Uncle Vanya on 42nd Street. In Mian, a language spoken in Papua New Guinea, the verb I used would reveal whether the event happened just now, yesterday or in the distant past, whereas in Indonesian, the verb wouldn’t even give away whether it had already happened or was still coming up. In Russian, the verb would reveal my gender. In Mandarin, I would have to specify whether the titular uncle is maternal or paternal and whether he is related by blood or marriage, because there are different words for all these different types of uncles and then some (he happens to be a mother’s brother, as the Chinese translation clearly states). And in Pirahã, a language spoken in the Amazon, I couldn’t say “42nd,” because there are no words for exact numbers, just words for “few” and “many.”
    Languages differ from one another in innumerable ways, but just because people talk differently does not necessarily mean they think differently. How can we tell whether speakers of Mian, Russian, Indonesian, Mandarin or Pirahã actually end up attending to, remembering and reasoning about the world in different ways because of the languages they speak? Research in my lab and in many others has been uncovering how language shapes even the most fundamental dimensions of human experience: space, time, causality and relationships to others.
    Let us return to Pormpuraaw. Unlike English, the Kuuk Thaayorre language spoken in Pormpuraaw does not use relative spatial terms such as left and right. Rather Kuuk Thaayorre speakers talk in terms of absolute cardinal directions (north, south, east, west, and so forth). Of course, in English we also use cardinal direction terms but only for large spatial scales. We would not say, for example, “They set the salad forks southeast of the dinner forks—the philistines!” But in Kuuk Thaayorre cardinal directions are used at all scales. This means one ends up saying things like “the cup is southeast of the plate” or “the boy standing to the south of Mary is my brother.” In Pormpuraaw, one must always stay oriented, just to be able to speak properly.
    Moreover, groundbreaking work conducted by Stephen C. Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, and John B. Haviland of the University of California, San Diego, over the past two decades has demonstrated that people who speak languages that rely on absolute directions are remarkably good at keeping track of where they are, even in unfamiliar landscapes or inside unfamiliar buildings. They do this better than folks who live in the same environments but do not speak such languages and in fact better than scientists thought humans ever could. The requirements of their languages enforce and train this cognitive prowess.
    People who think differently about space are also likely to think differently about time. For example, my colleague Alice Gaby of the University of California, Berkeley, and I gave Kuuk Thaayorre speakers sets of pictures that showed temporal progressions—a man aging, a crocodile growing, a banana being eaten. We then asked them to arrange the shuffled photographs on the ground to indicate the correct temporal order.
    We tested each person twice, each time facing in a different cardinal direction. English speakers given this task will arrange the cards so that time proceeds from left to right. Hebrew speakers will tend to lay out the cards from right to left. This shows that writing direction in a language influences how we organize time. The Kuuk Thaayorre, however, did not routinely arrange the cards from left to right or right to left. They arranged them from east to west. That is, when they were seated facing south, the cards went left to right. When they faced north, the cards went from right to left. When they faced east, the cards came toward the body, and so on. We never told anyone which direction they were facing—the Kuuk Thaayorre knew that already and spontaneously used this spatial orientation to construct their representations of time.
    Representations of time vary in many other ways around the world. For example, English speakers consider the future to be “ahead” and the past “behind.” In 2010 Lynden Miles of the University of Aberdeen in Scotland and his colleagues discovered that English speakers unconsciously sway their bodies forward when thinking about the future and back when thinking about the past. But in Aymara, a language spoken in the Andes, the past is said to be in front and the future behind. And the Aymara speakers’ body language matches their way of talking: in 2006 Raphael Núñez of U.C.S.D. and Eve Sweetser of U.C. Berkeley found that Aymara gesture in front of them when talking about the past and behind them when discussing the future.
    Remembering Whodunit
    Speakers of different languages also differ in how they describe events and, as a result, how well they can remember who did what. All events, even split-second accidents, are complicated and require us to construe and interpret what happened. Take, for example, former vice president Dick Cheney’s quail-hunting accident, in which he accidentally shot Harry Whittington. One could say that “Cheney shot Whittington” (wherein Cheney is the direct cause), or “Whittington got shot by Cheney” (distancing Cheney from the outcome), or “Whittington got peppered pretty good” (leaving Cheney out altogether). Cheney himself said “Ultimately I’m the guy who pulled the trigger that fired the round that hit Harry,” interposing a long chain of events between himself and the outcome. President George Bush’s take—“he heard a bird flush, and he turned and pulled the trigger and saw his friend get wounded”—was an even more masterful exculpation, transforming Cheney from agent to mere witness in less than a sentence.
    The American public is rarely impressed with such linguistic wiggling because nonagentive language sounds evasive in English, the province of guilt-shirking children and politicians. English speakers tend to phrase things in terms of people doing things, preferring transitive constructions like “John broke the vase” even for accidents. Speakers of Japanese or Spanish, in contrast, are less likely to mention the agent when describing an accidental event. In Spanish one might say “Se rompió el florero,” which translates to “the vase broke” or “the vase broke itself.”
    My student Caitlin M. Fausey and I have found that such linguistic differences influence how people construe what happened and have consequences for eyewitness memory. In our studies, published in 2010, speakers of English, Spanish and Japanese watched videos of two guys popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later we gave them a surprise memory test. For each event they had witnessed, they had to say which guy did it, just like in a police line-up. Another group of English, Spanish and Japanese speakers described the same events. When we looked at the memory data, we found exactly the differences in eyewitness memory predicted by patterns in language. Speakers of all three languages described intentional events agentively, saying things such as “He popped the balloon,” and all three groups remembered who did these intentional actions equally well. When it came to accidents, however, interesting differences emerged. Spanish and Japanese speakers were less likely to describe the accidents agentively than were English speakers, and they correspondingly remembered who did it less well than English speakers did. This was not because they had poorer memory overall—they remembered the agents of intentional events (for which their languages would naturally mention the agent) just as well as English speakers did.
    Not only do languages influence what we remember, but the structures of languages can make it easier or harder for us to learn new things. For instance, because the number words in some languages reveal the underlying base-10 structure more transparently than do the number words in English (there are no troublesome teens like 11 or 13 in Mandarin, for instance), kids learning those languages are able to learn the base-10 insight sooner. And depending on how many syllables the number words have, it will be easier or harder to keep a phone number in mind or to do mental calculation. Language can even affect how quickly children figure out whether they are male or female. In 1983 Alexander Guiora of the University of Michigan at Ann Arbor compared three groups of kids growing up with Hebrew, English or Finnish as their native language. Hebrew marks gender prolifically (even the word “you” is different depending on gender), Finnish has no gender marking and English is somewhere in between. Accordingly, children growing up in a Hebrew-speaking environment figure out their own gender about a year earlier than Finnish-speaking children; English-speaking kids fall in the middle.
    What Shapes What?
    These are just some of the many fascinating findings of cross-linguistic differences in cognition. But how do we know whether differences in language create differences in thought, or the other way around? The answer, it turns out, is both—the way we think influences the way we speak, but the influence also goes the other way. The past decade has seen a host of ingenious demonstrations establishing that language indeed plays a causal role in shaping cognition. Studies have shown that changing how people talk changes how they think. Teaching people new color words, for instance, changes their ability to discriminate colors. And teaching people a new way of talking about time gives them a new way of thinking about it.
    Another way to get at this question is to study people who are fluent in two languages. Studies have shown that bilinguals change how they see the world depending on which language they are speaking. Two sets of findings published in 2010 demonstrate that even something as fundamental as who you like and do not like depends on the language in which you are asked. The studies, one by Oludamini Ogunnaike and his colleagues at Harvard and another by Shai Danziger and his colleagues at Ben-Gurion University of the Negev in Israel, looked at Arabic-French bilinguals in Morocco, Spanish-English bilinguals in the U.S. and Arabic-Hebrew bilinguals in Israel, in each case testing the participants’ implicit biases. For example, Arabic-Hebrew bilinguals were asked to quickly press buttons in response to words under various conditions. In one condition, if they saw a Jewish name like “Yair” or a positive trait like “good” or “strong,” they were instructed to press “M”; if they saw an Arab name like “Ahmed” or a negative trait like “mean” or “weak,” they were told to press “X.” In another condition the pairing was reversed so that Jewish names and negative traits shared a response key, and Arab names and positive traits shared a response key. The researchers measured how quickly subjects were able to respond under the two conditions. This task has been widely used to measure involuntary or automatic biases—how naturally things such as positive traits and ethnic groups seem to go together in people’s minds.
    Surprisingly, the investigators found big shifts in these involuntary automatic biases in bilinguals depending on the language in which they were tested. The Arabic-Hebrew bilinguals, for their part, showed more positive implicit attitudes toward Jews when tested in Hebrew than when tested in Arabic.
    Language also appears to be involved in many more aspects of our mental lives than scientists had previously supposed. People rely on language even when doing simple things like distinguishing patches of color, counting dots on a screen or orienting in a small room: my colleagues and I have found that limiting people’s ability to access their language faculties fluently—by giving them a competing demanding verbal task such as repeating a news report, for instance—impairs their ability to perform these tasks. This means that the categories and distinctions that exist in particular languages are meddling in our mental lives very broadly. What researchers have been calling “thinking” this whole time actually appears to be a collection of both linguistic and nonlinguistic processes. As a result, there may not be a lot of adult human thinking where language does not play a role.
    A hallmark feature of human intelligence is its adaptability, the ability to invent and rearrange conceptions of the world to suit changing goals and environments. One consequence of this flexibility is the great diversity of languages that have emerged around the globe. Each provides its own cognitive toolkit and encapsulates the knowledge and worldview developed over thousands of years within a culture. Each contains a way of perceiving, categorizing and making meaning in the world, an invaluable guidebook developed and honed by our ancestors. Research into how the languages we speak shape the way we think is helping scientists to unravel how we create knowledge and construct reality and how we got to be as smart and sophisticated as we are. And this insight, in turn, helps us understand the very essence of what makes us human.

    ABOUT THE AUTHOR(S)

    Lera Boroditsky is an assistant professor of cognitive psychology at Stanford University and editor in chief of Frontiers in Cultural Psychology. Her lab conducts research around the world, focusing on mental representation and the effects of language on cognition.

A 2.4-degree C rise by 2020? Probably not

Climate change is happening faster than scientists predicted. Meltdowns in Greenland and Antarctica are well ahead of climate science projections, and overall warming continues to accelerate—we have just endured the hottest year and hottest decade on record. About the only thing that isn't happening faster than expected is the increasing concentration of CO2 and other greenhouse gases in the atmosphere, still steadily ticking upward by roughly 2 parts per million (ppm) per year.

Now, a new study from the Argentina-based Fundación Ecológica Universal suggests that if the world continues to burn fossil fuels and emit CO2 at its present pace, atmospheric concentrations will reach 490 ppm by 2020—up from roughly 390 ppm today. Extrapolating that directly into heat, the study suggests global average temperatures would rise by 1.4 degrees Celsius in the next nine years alone—more than six times faster than the present rate of warming. The researchers then move on to their real concern: the impact of all that heat on food crops (not good), hence the study's title, "The Food Gap" (pdf).
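A quick back-of-the-envelope check of those concentration figures, as quoted here, is easy to run (this is my arithmetic, not a calculation from the study or from Climatewire):

```python
# Back-of-the-envelope check of the CO2 concentration figures quoted above.
current_ppm = 390          # approximate concentration today (the article's figure)
projected_ppm = 490        # concentration the study projects for 2020
years = 2020 - 2011        # the "next nine years" referred to above

required_rate = (projected_ppm - current_ppm) / years
observed_rate = 2.0        # roughly 2 ppm per year, as noted earlier

print(f"Required growth rate: {required_rate:.1f} ppm per year")   # about 11 ppm/yr
print(f"Observed growth rate: {observed_rate:.1f} ppm per year")
print(f"That is roughly {required_rate / observed_rate:.0f} times today's pace.")
```

If the study's 490 ppm figure is actually a carbon-dioxide-equivalent tally that folds in other greenhouse gases (an assumption on my part, not something stated in this post), the starting point would already be higher and the implied acceleration smaller. Either way, a second issue looms larger.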

The bigger problem is that increasing concentrations of CO2 don't translate instantaneously into warming. Greenhouse gases take time to trap the sun's heat and warm the globe—which is the main reason more than a century of greenhouse gas emissions has built up a store of trouble for the future. Indeed, the average temperature of the planet for the next several millennia will likely be determined this century by those of us living today and how much fossil fuel burning and deforestation, among other things, we choose to engage in.

Our content partner Climatewire reported on this study, including caveats from climate scientists not associated with it who questioned its assumptions and noted the timing problem. We posted the Climatewire story on Wednesday morning, then took it offline Wednesday evening after we learned about the significant criticism of the study, and republished it Thursday with an explanatory editor's note. Climate scientists and climate contrarians alike are denouncing the study for its aggressive assumptions about the potential pace of climate change as well as its impacts on agriculture and hunger. Climate experts such as NASA GISS's Gavin Schmidt have called the kind of warming the study suggests impossible. And EurekAlert, the science news clearinghouse of the American Association for the Advancement of Science (AAAS), went so far as to retract a press release touting the study, calling it "erroneous" for "report[ing] a rate of global warming inconsistent with other respected sources of information regarding global climate change."

That is as it should be: part of the process of science correcting itself. Scientific American continues to monitor and clarify that critical process.

Will Climate Change Cause Crop Shortfalls by 2020?
Rising temperatures may slash yields for rice, wheat and corn throughout the developing world, according to a new report

Earth may be 2.4 degrees Celsius warmer by 2020, potentially triggering global scrambles for food supplies, according to a new analysis.
Work from the Universal Ecological Fund, the U.S. branch of Argentina-based nonprofit Fundación Ecológica Universal (FEU), sketches a somber portrait for world hunger by the end of the decade.
Rising temperatures will slash yields for rice, wheat and corn throughout the developing world, exacerbating food price volatility and increasing the number of undernourished people, the report warns.
It projects that food demand will substantially dwarf available supply.
The group drew upon existing climate and food production data from the Intergovernmental Panel on Climate Change (IPCC), the World Meteorological Organization and other U.N. agencies to draw its conclusions.
Chief among its findings, the fund said, is that if the planet continues on a business-as-usual path, temperatures may rise at least 2.4 degrees Celsius (4.3 degrees Fahrenheit) above preindustrial levels by 2020. Crossing the 2-degree-Celsius threshold is commonly considered dangerous.
The level of heat-trapping gases in the atmosphere, which stood at 284 parts per million in the preindustrial era, now tops 385 ppm. By 2020, it could reach 490 ppm, the report cautions. Carbon concentrations that high are associated with a global temperature rise of 2.4 degrees Celsius, according to IPCC estimates.
Potential timing gap
Still, it's not certain how quickly the planet would heat up at that concentration, said climate scientist Brenda Ekwurzel of the Union of Concerned Scientists.
"If you look at Earth as an oven, by hitting 490 you turn the dial, but it could take a while for the oven to reach the temperature," she said.
Climate scientist Osvaldo Canziani served as the scientific adviser on the study, going over it "line by line," said Liliana Hisas, the executive director of the Universal Ecological Fund and author of the report. Canziani was unavailable for comment.
While not every part of the planet is expected to suffer adverse climate-linked impacts on agriculture, the report's numbers suggest that by 2020 wheat production will fall 14 percent short of demand, rice production 11 percent short and corn production 9 percent short. Soybeans, however, are expected to show a 5 percent surplus.
To meet the needs of a world that is expected to have an additional 890 million people by 2020, the global community would need to increase food production by about 13 percent, the report states.
Josef Schmidhuber, a senior policy analyst at the U.N. Food and Agriculture Organization, questioned some of the report's underlying assumptions about regional production figures and said the study also fails to consider other external factors that could affect its results.
"The only rationale for this to hold would be for climate change to have such a strong impact on the non-agricultural economy that people would lose purchasing power and thus would be so poor they couldn't afford the food they need to meet the requirements," Schmidhuber said. "Food security is much more than a production problem -- it reflects above all a lack of access to food and a lack of income," he said.
Schmidhuber contends that looking at food security purely in terms of impacts on food production will lead to overstated estimates of hunger.