Hurricane Sandy was one of the most destructive storms in US history. But beyond the catastrophic losses of life and property, the storm also dealt a blow to the American weather modelling community. The American weather model whiffed on the initial forecast, but its European counterpart was dead on.
Nearly six years later, the phrase “the Euro nailed Sandy” is still a running joke in the meteorological community. It’s shorthand both for the European model’s continued superiority in predicting major weather events such as hurricanes, and for mocking people who obsess over a model that has its own deficiencies and occasional high-profile whiffs.
Behind the scenes, though, the weather modelling arms race is heating up, with the American model getting a major upgrade early next year.
This competition is about more than model supremacy: Lives and property are at stake. Witness last year, when weather disasters cost the US a record-setting $US306 billion ($422 billion). In Puerto Rico, the disastrous Hurricane Maria claimed an estimated 3000 lives, according to a government-commissioned survey published this week.
An increase in extreme weather brought on by climate change only makes it more important for our weather models to get it right.
Step outside and the weather seems obvious and easy to understand. The sun kisses your skin, warm air makes sweat build behind your knees, maybe a slight breeze ruffles the grass. You pop your umbrella open as a thunderstorm pours and run to the basement as a tornado touches down.
But these basic experiences are the result of a chaotic series of interactions between the ocean, atmosphere, sea ice, incoming sunlight, and a host of other planetary-scale features. Models are how we try to make sense of all this complexity, and meteorologists use them as guidance when they’re forecasting hurricanes, snowstorms, or the chance of an afternoon shower.
There are a ton of weather models out there. National meteorological agencies make most of them but so do private companies (did you know Panasonic — yes, the electronics company — nailed Irma?).
For much of the world, however, two models are the backbone of forecasting: The Euro, and the American model known as the Global Forecast System (the GFS). These models are created and maintained by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Oceanic and Atmospheric Administration (NOAA), respectively.
The two agencies were founded with divergent missions, which in part informs some of the differences in their weather models today. NOAA is focused on “climate, weather, oceans and coasts”, while ECMWF was established to “produce accurate climate data and medium-range forecasts”, or forecasts generally covering the range of three to seven days.
“The European Center has a very narrow mission compared to either the [UK] Met Office or the US National Weather Service (which is part of NOAA),” Richard Rood, a weather and climate expert at the University of Michigan, told us. “That mission is medium range forecasting, that is what they do very very well. In the US, our resources — which are quite substantial — are diffused across a larger set of products.”
The “Euro nailed Sandy” meme arose because ECMWF had Sandy’s weird track pegged more than a week out. The GFS eventually worked out the same track for the storm. Similarly, the Euro bested the GFS when it came to 2015’s Hurricane Joaquin when it consistently and correctly predicted the storm would stay out to sea (the GFS had it coming ashore along the Mid-Atlantic for a while).
John Morales, the chief meteorologist at WTVJ in Miami, who has seen no shortage of hurricane model runs, told us he relies heavily on both models for short-term forecasts.
“However, for days 3-7, while still looking at the output from both models, I lean strongly towards the European global model,” he said. “It truly is difficult to beat, especially in the medium range.”
So what’s the Euro’s special sauce?
The scale of computing power each agency has at its disposal is measured in petaflops. To put that measure in perspective, it would take your measly human brain 31.7 million years to do the number of calculations that a one-petaflop computing system can do in a second. NOAA has computers capable of reaching 8.4 petaflops, while ECMWF’s top out at around 8.5 petaflops.
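That 31.7-million-year figure checks out with some quick arithmetic, assuming (as the comparison implies) a brain that grinds out one calculation per second:

```python
# How long would a one-calculation-per-second human take to match
# one second of work by a one-petaflop machine?
PETAFLOP = 1e15                      # floating-point operations per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = PETAFLOP / SECONDS_PER_YEAR
print(f"{years / 1e6:.1f} million years")  # → 31.7 million years
```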
All that power allows each group to run its model fast enough to spit out results for the entire planet a few times a day at fairly high resolution. The Euro recreates conditions up to 80km into the atmosphere and has a resolution on the ground of roughly 9km. The GFS simulates the atmosphere up to 53km above the surface with a resolution of 13km.
That resolution difference is partly why the GFS isn’t as strong as its European counterpart when it comes to forecasting specific weather events. It’s essentially looking at the atmosphere with slightly blurry vision, while the Euro’s view is closer to 20/20 vision.
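Back-of-the-envelope, that gap in grid spacing means the Euro is tracking roughly twice as many columns of atmosphere over the same area. The surface-area figure below is an approximation, and real model grids are more complicated than uniform squares, but the ratio gives a feel for it:

```python
EARTH_SURFACE_KM2 = 510e6  # approximate surface area of Earth in km^2

euro_columns = EARTH_SURFACE_KM2 / 9 ** 2    # ~9km grid spacing
gfs_columns = EARTH_SURFACE_KM2 / 13 ** 2    # ~13km grid spacing

print(round(euro_columns / gfs_columns, 1))  # → 2.1
```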
But there’s another key difference, which is how the two groups use data to start their models running. The Euro uses continuous data compiled from satellites and weather stations over a 12 hour period to set up initial conditions. These essentially give the model a sense of what the planet’s weather has been up to lately.
“This allows [the Euro] to make better use of observations and especially satellite observations which come in a nearly continuous stream,” Massimo Bonavita, a senior scientist working on data assimilation at ECMWF, told Earther.
In comparison, the GFS assimilates a few snapshots of data. Basically, by the time the GFS gets out of the starting gates, the Euro already has a running start.
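To see why the running start helps, here’s a deliberately simplified toy (our illustration, not either agency’s actual assimilation scheme): the true temperature climbs steadily, every hourly observation is off by a degree, and we compare trusting only the latest snapshot against fitting a trend through the whole 12-hour window.

```python
# Hypothetical toy: the "true" temperature rises 0.5 degrees per hour,
# and hourly observations over a 12-hour window carry an alternating
# +1/-1 degree error.
true_rate = 0.5
obs = [(t, true_rate * t + (1 if t % 2 == 0 else -1)) for t in range(13)]

# "Snapshot" initialization: trust only the latest observation.
snapshot = obs[-1][1]

# "Continuous" initialization: fit a trend through the whole window
# (ordinary least squares) and evaluate it at the current hour.
n = len(obs)
mean_t = sum(t for t, _ in obs) / n
mean_x = sum(x for _, x in obs) / n
slope = (sum((t - mean_t) * (x - mean_x) for t, x in obs)
         / sum((t - mean_t) ** 2 for t, _ in obs))
continuous = mean_x + slope * (obs[-1][0] - mean_t)

truth = true_rate * 12  # the real current temperature: 6.0 degrees
# Snapshot error is 1.0 degree; the windowed fit's error is ~0.08 degrees,
# because errors across the window largely cancel out.
print(abs(snapshot - truth), abs(continuous - truth))
```

The same intuition scales up: a continuous stream of observations lets errors in individual measurements wash out, so the model starts from a sharper picture of the atmosphere.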
But help is on the way on many fronts for the GFS. A new version of the model is coming in early 2019. It will run on a new dynamical core, something that Brian Gross, a senior adviser for the unified forecast system at the National Centers for Environmental Prediction (which is part of NOAA), likened to the engine of a car.
“Replacing the engine is much like you do if you replace [the] engine in your car,” he said. “You will get a lot more miles out of this. That will future-proof us, [allowing us to run] higher resolution models with more advanced physics.”
The new engine is already being tested alongside the current GFS, and Gross said the early results are “pretty encouraging in terms of forecast track improvement [and] intensity improvements” for hurricanes.
The next generation of the GFS is hardly the end of the line for weather modelling. Neil Jacobs, a NOAA deputy administrator appointed by Trump, told Capital Weather Gang in April that improving weather modelling is a “top priority of the administration”.
There are also efforts afoot to create something called a unified forecast system, which will basically take all the models under NOAA’s purview, from the GFS to ice, ocean and land models, and link them up into what will be a super model game of telephone.
“You don’t exactly have one model that does everything,” Rood, who is part of a task force working on that project, said. “What the unified system gives you is a more controlled environment for prediction problems.”
Those problems can range from whether it will rain in Omaha in the next few hours to whether Nebraska will face drought conditions in the next four weeks. Rood added that the most important question about how to improve American weather forecasting is “how do we do something disruptive to make the step to the next level, not how do we catch up”.
Social media has played no small role in stoking the feud between the Euro and the GFS. You can find people saying the GFS “crapped it’s [sic] pants” forecasting Hurricane Lane, that it whipped up a “scare-icane” earlier this season, or that it “nailed the forecast for Harvey”.
Will meteorologists and armchair weather forecasters ever stop fighting about these two weather models and their battle for supremacy? Probably not. But things are a little more subdued behind the scenes. Gross told us we would be “disappointed” if we were searching for beef between NOAA and ECMWF.
“A measure of friendly competition has always been present among major forecasting centres,” Bonavita said. “This is actually very positive because it stimulates development and provides diversity in the forecasting ecosystem.”
Indeed, both centres collaborate, sending scientists across the Atlantic to learn from each other.
And it isn’t as though the Euro never comes in second place. The model infamously whiffed on what ended up being a minor Mid-Atlantic snowstorm this year, calling for up to 50cm of snow in Washington, DC. And both the Euro and the GFS see skill drop-offs in the Northern Hemisphere’s summer, and each has biases toward warmth or cold and wet or dry in certain locales.
“There are a lot of statistical analysis that show the Euro being superior, but the Euro operational model can, at times, be wildly inconsistent with forecast solutions too,” Matt Lanza, a Houston-based meteorologist and managing editor of Space City Weather, told us. “We are taught that it’s called model ‘guidance’ for a reason. Guidance doesn’t mean an answer key; it’s meant to guide you through a complex process.”
Indeed, that’s why a model run flashed on social media without caveats is a huge red flag. Models aren’t forecasts; they’re one possible outcome in an inherently messy system. Ultimately, it takes people to interpret them.
“I consider the human element in the forecast a strong suit,” Morales said. “My experience allows me to recognise patterns and understand biases. Generally our human-produced forecast is significantly more reliable than the machine forecast alone.”
The same sentiment echoes among the folks behind the machines, too.
“We’re a community that’s fascinated by the science and the ability to provide useful information to our customers,” Gross said. “We all have something to contribute.”