The Tech - Online Edition
MIT's oldest and largest newspaper & the first newspaper published on the web

Illustration by Monica Gallegos

As a young high school student at a neuroscience summer camp, I was shown the results of a computer model that simulated the voltage across cardiac tissue as the electrical pulse that keeps the heart beating passed through it. After being told that the simulation took several days to run, we campers eagerly expected to be wowed by displays of incomprehensible complexity, wide-eyed and excited at the prospect of viewing such cutting-edge medical research.

Imagine our budding scientist hearts being broken after watching an utterly mundane animated-gif-like movie of a gradient zipping through a poorly rendered 3D polygonal heart.

Every student reaches a moment when they discover the limits of a field, and few limits reveal themselves more quickly than those of computer modeling. The technology industry has done its best and improved computational power to the point that PlayStation 2s initially ran afoul of national security export regulations because they qualified as supercomputers. Now that iPhones are as powerful as PS2s, does running today's models on our exponentially faster computers make for better simulations of the real world?

In a word: No.

“All models are wrong,” goes one adage, and in a sense this is perfectly true. Every simulation produces results that do not and will not conform exactly to reality. We would not expect them to do otherwise. If a computer model did manage to get its calculations exactly right in one situation, it would likely be horribly wrong in another.

A maxim from Amory Lovins, founder of the sustainable energy group the Rocky Mountain Institute, says that “Models complex enough to be interesting cannot be validated, and models complex enough to be valid cannot be written.” Computer models go through extensive validation phases, tweaked until their outputs reproduce the historical record. After the coefficients are aligned, the future — with any luck — can be written.
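Lovins's point about validation can be sketched with a toy illustration (my own hypothetical example, not any actual climate or finance model): a polynomial tuned to pass exactly through its "historical" data validates perfectly in-sample, yet extrapolates wildly once asked about the future.

```python
def lagrange_fit(xs, ys):
    """Return a function that interpolates exactly through the points (xs, ys)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# "Historical" data: a nearly linear trend (roughly y = x) with small wiggles.
history_x = [0, 1, 2, 3, 4, 5]
history_y = [0.0, 1.1, 1.9, 3.2, 3.8, 5.1]

model = lagrange_fit(history_x, history_y)

# Validation against history looks flawless: the model hits every data point.
in_sample_error = max(abs(model(x) - y) for x, y in zip(history_x, history_y))

# But "forecasting" well outside the calibration range diverges wildly from
# the underlying trend, which would suggest a value near 10.
forecast = model(10)
```

The calibration step is exactly the tweaking the column describes: the coefficients are chosen so that history comes out right, which says nothing about whether the future will.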

But we have all painfully learned the dangers of placing too much faith in models, thanks to the recession prompted by the collapse of mortgages, banks, and entire institutions. As my brother (a finance industry auditor) tells it, every Wall Street firm was using the same basic model with the same set of wrong assumptions, leading to mis-structured bonds and imploding insurance companies.

The financial meltdown took place while modeling a man-made industry with clear regulations bounding the simulation space. How then are mere mortals supposed to accurately model and forecast a situation as complex and immense as one hundred years of the earth’s climate?

The first supercomputers were used for war purposes: calculating projectile trajectories, code-breaking, and nuclear physics for atomic bombs. Today’s supercomputers, tucked away in national labs and agencies, continue to crunch away at similar calculations and have now added climate forecast modeling to their queues. However, these original physics problems are better defined and require fewer hand-waving assumptions than the back-of-the-envelope approximations needed to calculate a planet’s climate.

With orders upon orders of magnitude more particles, complex heat exchange mechanisms and vast atmospheric and ocean flow dynamics, the Earth’s climate is the largest system we have in all of the, well, planet. Coupling such massively complicated science with the economic activities of six billion individual human agents, to put it simply, does not compute.

Too many nonlinearities, “butterfly effects,” hysteresis processes, and straight-up unknowns exist to produce a forecast of any great accuracy. Why would we expect a system with large feedbacks and with flows and cycles of air, water, heat, and particles to yield the simple output of increased temperature from the single input of greenhouse gases?
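The "butterfly effect" invoked here is easy to demonstrate on the simplest chaotic system in the textbooks, the logistic map (a standard pedagogical example, not a climate model): two runs whose starting points differ only in the tenth decimal place end up completely different within a few dozen steps.

```python
def logistic_step(x, r=4.0):
    """One iteration of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two initial conditions differing by one part in ten billion.
a, b = 0.3, 0.3 + 1e-10

max_gap = 0.0
for _ in range(60):
    a, b = logistic_step(a), logistic_step(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial perturbation roughly doubles each step, so after a few
# dozen iterations the two trajectories bear no resemblance to each other.
```

If a one-line equation can defeat forecasting this thoroughly, the difficulty of projecting a system with thousands of coupled nonlinear equations is easy to appreciate.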

Yes, I’ve seen the reports from the British Lord Nicholas Stern as well as the IPCC and their de facto spokesperson, Al Gore. I’ve seen the graphs where some one hundred climate models provide a range of forecasts for global warming and the world is at or exceeding the top end. People smarter and better resourced than I am are pounding at the problem of climate prediction with all the might and fury of careers and reputations on the line.

But just because these results represent the state-of-the-art and offer cutting-edge science doesn’t mean the climate forecasting models are necessarily good enough to bet our global economy on trillion-dollar policy actions like capping CO2 emissions. The uncertainty is too large, the error bars too wide, the approximations too rough.

To hear the person-on-the-street self-righteously declare the “truth” and the “fact” of global warming is to hear a populace entranced by what physicist Freeman Dyson calls the new secular religion of climate change. Instead of preachers and a theocracy we now have (some) professors and a technocracy.

Any scientist worth their salt would be falling over themselves to attach conditionals, uncertainties, and hems and haws to any sort of definitive conclusion. There are plenty of such scientists, and they do credit to their field. More visible, however, are the dogmatists who insist that the forecasted-by-computers consequences are so dire that urgent and immediate action must be taken, caution be damned.

The political debate on climate change and the policy costs has vividly demonstrated how modeling efforts can be manipulated. Political groups pit dueling economic models — for instance, from the conservative Heritage Foundation or the Obama-cozy Center for American Progress — each containing their own intricate technical details and, more importantly, their own assumptions and structures.

As any good computer modeler knows, the devil is in these details. Results can be manipulated by tweaking any number of variables so that the liberal can say carbon dioxide regulation will create millions of jobs while the conservative can say that the cost of regulation will eat up large fractions of a household’s annual income. Unless one gets down and dirty with these computer programs none of these biases can be fully teased out, to the great detriment of our political discourse.

To be clear, the general direction of the scientific evidence is toward anthropogenic global warming. The most urgent warnings about catastrophic climate change caution against nonlinearities and feedbacks that would propel our world quickly toward disaster. But these very “tipping points” are the most uncertain approximations of our climate forecasting models and thus produce the least certain results.

A nuanced view would be to accept our current climate change along with a large margin of error for the version of global warming labeled “dangerous anthropogenic interference.” One would then support policy actions as a type of hedge or insurance against the risks of climate change. An enlightened political debate would be upfront about the unknowns that result from the computer models and account for these costs accordingly.

As to the absolute certainty of catastrophic climate change barring immediate action? I’m not convinced, and neither should you be. Probably.

Gary Shu is a graduate student in the Technology and Policy Program and the Department of Urban Studies and Planning.