Southern Florida Among Spots At Greater Risk Due To Sea Level Rise, Finds New Machine Learning Study

Published on November 4th, 2019 | by Michael Barnard

Sea level rise is a much-studied phenomenon. It’s caused by global warming: first, the additional heat melts land ice; then, the warmer water expands a bit. The combination means that as the world warms over the coming decades, sea level rise will accelerate. By 2050, there is very high confidence that we’ll see 20-30 cm (8-12 in) of sea level rise. The outlook for 2100 has more variance, because there’s far more room both for us to mitigate warming and for things to go wrong, but the median is about a meter (39 inches).

We thought we had a pretty good idea of what that meant, since most people assumed that elevations along coastlines were well understood. Too bad that’s not actually true.

A study published October 29, 2019 in Nature Communications, a natural sciences journal with a very credible impact factor of 11.88, has improved the state of the art significantly. The study is New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding, by Scott A. Kulp and Benjamin H. Strauss. Unfortunately, with higher accuracy came much higher estimated risk.

To understand what the study did, it’s important to know how elevation is measured and communicated. Numerous digital elevation models (DEMs) provide elevation data to researchers, policy makers, and the like around the world. In many urban areas in wealthy countries, elevation is measured very accurately by lidar overflights from planes and, increasingly, drones. In the US, most of the coast is well mapped by lidar. But that’s an expensive way to determine elevation. Most of the world’s elevation is assessed from NASA’s Shuttle Radar Topography Mission (SRTM). That data was captured in 2000 and was initially available at full resolution only to select researchers and agencies; it became available at full resolution to everyone in 2014, when the US White House announced its public release.

SRTM banner courtesy NASA

What’s the problem with the SRTM data set? Well, in a lot of places with dense foliage or buildings, the radar keyed off the top of the foliage or buildings, not the ground. Yes, in a lot of places, elevation is overstated by the SRTM data that everyone outside of rich urban areas uses. That’s a big deal in coastal regions facing sea level rise. Extreme coastal water level (ECWL) exposure analysis is keyed to SRTM data for most of the world, and for a lot of the US as well. ECWL is about areas prone to regular flooding, not just areas that will end up below local average sea level.

What Kulp and Strauss did was define and execute a methodology to adjust the SRTM data for coastal areas so that it aligns as closely as possible with actual ground elevation.

Here’s where machine learning comes in. They took the SRTM data as the input, fed it into a multilayer perceptron (MLP) artificial neural network, and used United States lidar data for specific areas to train it to adjust the SRTM elevation toward the actual elevation. Then they tested the results in multiple areas in the US and Australia to validate that the resulting model hadn’t been overfitted. We’ll get to the results, which get worse the further you read into the study, soon. But this is one of a series of articles on the use of machine learning in clean technology and climate solutions, so we’ll spend a bit of time on the neural net approach itself.

A multilayer perceptron neural network has a few characteristic parts. It has an input layer, in this case fed by the NASA SRTM elevation data. It has an output layer, which produces the corrected elevation map. And it has one or more hidden layers (the neural net proper), which take each block of available data, apply the network’s learned weights, and produce the output. During training, the output is tested against the lidar data, which is highly accurate across a large sample, so the model learns to correct SRTM data to align with lidar wherever lidar is available.

The input layer is a bit more sophisticated than just a picture. The researchers built a 23-dimensional vector of known attributes at each target location from various data sets. The variables included neighborhood elevation values, land slope, population density, vegetation density, canopy height, and local SRTM deviations from snow and ice cover provided by ICESat. They trained the model on 51 million samples of these 23 variables. The output layer is simple: literally just a prediction of the SRTM error at the location.
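To make that concrete, here is a minimal sketch in Python of the kind of regression the paper describes: a small MLP that maps a 23-dimensional feature vector to a predicted SRTM error. The random placeholder data, the hidden layer size, and the use of scikit-learn are all illustrative assumptions on my part, not the authors’ actual pipeline.

```python
# Minimal sketch of the study's approach, NOT the authors' actual code:
# an MLP regressor that predicts SRTM vertical error from a 23-dimensional
# feature vector (neighborhood elevations, slope, population, vegetation, etc.).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data standing in for the real training set
# (the study used ~51 million samples of 23 variables).
n_samples, n_features = 10_000, 23
X = rng.normal(size=(n_samples, n_features))
srtm_error = rng.normal(size=n_samples)  # lidar-derived "true" SRTM error

X_train, X_test, y_train, y_test = train_test_split(
    X, srtm_error, test_size=0.2, random_state=0
)

# One hidden layer; its size here is an illustrative guess, not the paper's choice.
model = MLPRegressor(hidden_layer_sizes=(100,), max_iter=200, random_state=0)
model.fit(X_train, y_train)

# Correcting a cell: subtract the predicted error from the SRTM elevation.
predicted_error = model.predict(X_test)
```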

It’s worth poking at resolution a bit. Before 2014, most of the released SRTM data was at a resolution of about 90 meters (295 ft). The newly available data is, like the unlocking of high-resolution GPS in 2000, higher resolution: about 30 meters (98 ft). However, a lot of the other data sets are at vastly different resolutions. Population data is at the scale of a kilometer (0.62 miles), for example, and even that varies. A lot of data management went into aligning the 23 input variables. Because of that resolution gap, the output resolution is about 90 meters (295 ft), similar to the pre-2014 SRTM data.
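As a toy illustration of one alignment step, the sketch below aggregates a 30 meter grid to the 90 meter output resolution by averaging 3×3 blocks of cells. This is a deliberate simplification for illustration; the study’s actual resampling and alignment pipeline is more involved.

```python
# Toy example of aligning raster resolutions: aggregate a 30 m elevation
# grid to 90 m by averaging non-overlapping 3x3 blocks of cells.
import numpy as np

def block_mean(grid: np.ndarray, factor: int = 3) -> np.ndarray:
    """Downsample a 2-D raster by averaging factor x factor blocks."""
    rows = grid.shape[0] - grid.shape[0] % factor  # trim ragged edges
    cols = grid.shape[1] - grid.shape[1] % factor
    trimmed = grid[:rows, :cols]
    return trimmed.reshape(
        rows // factor, factor, cols // factor, factor
    ).mean(axis=(1, 3))

# Synthetic 30 m grid standing in for an SRTM tile.
srtm_30m = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=(300, 300))
srtm_90m = block_mean(srtm_30m)
print(srtm_90m.shape)  # (100, 100)
```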

As with all neural networks, there’s little way to know what’s happening inside the network itself. All we can really do is compare the accuracy of the outputs to arrive at confidence levels.
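Here is what that kind of output comparison looks like in practice: scoring two elevation models against lidar "ground truth" over the 1-20 meter elevation band. The synthetic error distributions below are placeholders I chose to echo the magnitudes reported in the next paragraph, not the study’s real data.

```python
# Sketch of validating DEM accuracy against lidar over a coastal elevation band.
import numpy as np

def mean_vertical_error(dem: np.ndarray, lidar: np.ndarray,
                        lo: float = 1.0, hi: float = 20.0) -> float:
    """Mean (dem - lidar) error over cells whose lidar elevation is lo..hi meters."""
    band = (lidar >= lo) & (lidar <= hi)
    return float(np.mean(dem[band] - lidar[band]))

rng = np.random.default_rng(2)
lidar = rng.uniform(0.0, 25.0, size=10_000)  # treat lidar as ground truth

# Placeholder error distributions: SRTM biased high, corrected DEM nearly unbiased.
srtm = lidar + rng.normal(loc=3.7, scale=2.0, size=lidar.shape)
corrected = lidar + rng.normal(loc=0.05, scale=0.5, size=lidar.shape)

print(f"SRTM mean error:      {mean_vertical_error(srtm, lidar):+.2f} m")
print(f"Corrected mean error: {mean_vertical_error(corrected, lidar):+.2f} m")
```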

And the resulting data set, CoastalDEM, is much more accurate than SRTM, especially near the water. For land 1-20 meters (3.3-65.6 ft) above sea level, the mean SRTM error in the US is 3.7 meters (12 ft). In Australia, it’s 2.5 meters (8.2 ft). Globally, measured against the lower-resolution ICESat data, it’s about 1.9 meters (6.2 ft). When tested against the coastal cities of the US, CoastalDEM reduces the error from 4.7 meters (15.5 ft) to less than 0.06 m (2.4 in). Remember that for cities in rich countries, extreme coastal water level (ECWL) exposure analysis is already keyed to lidar data, so this doesn’t mean that New York is going to be underwater faster.

But it does mean that a lot of other cities in the world will be. And a lot of coastline that’s not as well mapped with lidar is in much more serious trouble much sooner.

Let’s look at southern Florida. Here is the legacy model of sea level rise risk by 2050.

Image courtesy climatecentral.org

That looks very reasonable. But let’s look at the adjusted map using CoastalDEM updates.

Image courtesy climatecentral.org

Oops. That’s a lot more red. And that red is in a very bad place for southern Florida. Note that it’s not where most of the people are, which is a modicum of good news. Instead, it’s over much of the Everglades, which filter the water flowing into the Biscayne Aquifer, replenishing southern Florida’s fresh water supply.

Let’s look further south, at Key West and its neighboring keys.

Image courtesy climatecentral.org

The Florida Keys are at most 1.8 meters (6 ft) above sea level. Large areas of the Keys, especially the southernmost ones, will be inundated regularly by 2050. Most of the errors in ECWL risk assessments in the US are in Florida because of the population density on its coastal plains.

But the United States isn’t where the largest impacts will be. Among other things, cities in the US were already using accurate lidar data for their ECWL assessments. The problem is for densely populated cities elsewhere in the world.

Image courtesy climatecentral.org

The darker magenta shows predictions from CoastalDEM alone. The lighter violet shows predictions that both SRTM and CoastalDEM make. The spots of yellow are areas where SRTM errs in the other direction and CoastalDEM indicates there isn’t a threat.

There are 42 million people in the Pearl River Delta. Bangladesh, which has a total population of 164 million, saw a third of the country flooded in the climate change-exacerbated 2017 monsoons, displacing 41 million people.

With the new CoastalDEM, machine learning-enabled predictions of extreme coastal water level risk exposure are much higher. Legacy models showed 250 million people at risk by 2050. With CoastalDEM, a hundred million more people are at risk. By 2100, 630 million people will be exposed to regular flooding from tides.

As other articles in this series have highlighted, machine learning is helping us rapidly plan and estimate commercial solar, identify opportunities for planting trees to mitigate climate change, and maintain the purity of the water we use. And, of course, solve a Rubik’s Cube with a one-handed robot. But as this case study shows, it’s also clarifying the level of risk we face from climate change.
 

Michael Barnard is Chief Strategist with TFIE Strategy Inc. He works with startups, existing businesses, and investors to identify opportunities for significant bottom line growth and cost takeout in our rapidly transforming world. He is editor of The Future is Electric, a Medium publication. He regularly publishes analyses of low-carbon technology and policy on sites including Newsweek, Slate, Forbes, Huffington Post, Quartz, CleanTechnica, and RenewEconomy, and his work is regularly included in textbooks. Third-party articles on his analyses and interviews have been published in dozens of news sites globally and have reached #1 on Reddit Science. Much of his work originates on Quora.com, where Mike has been a Top Writer annually since 2012. He’s available for consulting engagements, speaking engagements, and Board positions.
