Annual Forest Loss – A comparison between global forest loss datasets

The start of this year was marked by the publication of two new global datasets for environmental analysis. My impression is that both datasets will become increasingly important in ecological analysis (even though their value for conservation biology has been actively criticized; see Tropek et al. 2014). There is therefore a need to assess how accurately they detect forest loss over time and whether they are consistent with each other.

The first dataset is the already famous Global Forest Map published by Hansen et al. (2013) in Science at the end of last year. It covers the years 2000 to 2012 and, using only Landsat data in a time-series analysis, delivers a pretty decent high-resolution land-cover product. Although the resolution of the Hansen dataset is great (30 m globally, from Landsat), Hansen et al. decided to publish only the year-2000 forest cover baseline. They do provide aggregated loss, gain and loss-per-year layers, but the user has no way to reproduce a comparable forest cover map for the year 2012.

The other dataset is the published result of four years of monitoring by the Japanese ALOS satellite with its PALSAR radar sensor. JAXA released a global forest cover map at 50 m spatial resolution which, in contrast to the Hansen product, is available for every year of the mission, giving global coverage from 2007 until 2010. The data can be downloaded from their homepage after registering an account. This temporal span in theory allows temporal comparisons and predictions about future land-use trends. However, I am a bit concerned about the accuracy of the classification, as I have already found multiple errors in the area I am working in.

[Figure] Classification errors in the ALOS PALSAR dataset: suddenly there are huge waterbodies (blue) in the savanna near Kilimanjaro.

Because I am interested in using the ALOS PALSAR dataset in my analysis (how often do you get a nice spatio-temporal dataset of forest cover?), I compared the forest loss detected by both datasets in my area of interest. It should be noted that this is a comparison between different satellite sensors as well as between classification algorithms, so we are not comparing products from the same data source.

So here is the plan for our comparison:

  • We downloaded the ALOS PALSAR layers for all available years for the area around Kilimanjaro in northern Tanzania (tile N00, E035). We then extracted only the forest cover (value == 1) and calculated the difference between consecutive years to acquire the forest loss for 2008, 2009 and 2010, respectively (see the sketch after this list).
  • From the Google Earth Engine app we downloaded the Hansen “loss per year” layer and cropped it to our area of interest. We kept only the forest loss for the years 2008, 2009 and 2010, which are the years covered by the ALOS PALSAR dataset, and resampled the Hansen layer to 50 m to match the ALOS PALSAR resolution (also covered in the sketch below).
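
To make these steps reproducible, here is a minimal sketch in Python using rasterio. The file names, and the assumption that both products have already been cropped/warped onto the same grid for the study area (e.g. with gdalwarp), are mine; the value codes (1 = forest in the PALSAR forest/non-forest maps, 0 = no loss and 1-12 = loss in 2001-2012 in the Hansen "lossyear" band) follow the respective product documentation.

    import rasterio
    from rasterio.enums import Resampling

    # --- ALOS PALSAR: annual forest loss from the forest/non-forest (FNF) maps ---
    # Hypothetical file names; one FNF tile per year for N00, E035, all on the
    # same 50 m grid.
    palsar_files = {yr: f"N00E035_{yr}_FNF.tif" for yr in (2007, 2008, 2009, 2010)}

    forest = {}
    for yr, path in palsar_files.items():
        with rasterio.open(path) as src:
            forest[yr] = src.read(1) == 1  # value 1 = forest in the FNF product

    # Loss = cells that are forest in year t but no longer forest in year t+1.
    palsar_loss = {(t, t + 1): forest[t] & ~forest[t + 1]
                   for t in (2007, 2008, 2009)}

    # --- Hansen: resample the "lossyear" band to the 50 m PALSAR grid ---
    # Assumes the layer was already cropped to the study area (hypothetical file
    # name). Reading at a reduced out_shape resamples from 30 m to 50 m; nearest
    # neighbour keeps the categorical year codes intact.
    with rasterio.open("hansen_lossyear_kilimanjaro.tif") as src:
        out_shape = (round(src.height * 30 / 50), round(src.width * 30 / 50))
        lossyear = src.read(1, out_shape=out_shape, resampling=Resampling.nearest)

    # lossyear codes: 0 = no loss, 1..12 = loss in 2001..2012.
    hansen_loss = {yr: lossyear == (yr - 2000) for yr in (2008, 2009, 2010)}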

The Result:

I haven’t found a fancy way to display this simple comparison, so here is just the result table. As predicted (and as a visual inspection suggests), the ALOS PALSAR algorithm overshoots the amount of forest loss by a lot.

    Year                                2007-2008   2008-2009   2009-2010
    Hansen forest loss (cells)                262         304         529
    ALOS PALSAR forest loss (cells)         26995       24970       16297
    Cells flagged as loss in both              17          30         131
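
Continuing the sketch above, these counts are then just sums over the boolean masks, assuming both grids line up cell by cell after the resampling:

    for (t0, t1) in [(2007, 2008), (2008, 2009), (2009, 2010)]:
        h = hansen_loss[t1]         # Hansen loss in year t1
        p = palsar_loss[(t0, t1)]   # PALSAR loss between t0 and t1
        print(f"{t0}-{t1}: Hansen {h.sum()}, PALSAR {p.sum()}, both {(h & p).sum()}")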

Conclusion:

So which one is right? I personally trust Hansen's data a lot more, especially because I have found it to be pretty consistent in my area of study. For me the ALOS PALSAR data is not usable until the authors have figured out ways to improve their classification. Users should not forget that these forest cover products are ultimately just the output of a big unsupervised algorithm that doesn't discriminate between right and wrong. Without validation and careful consideration by the observer, you might end up with wrong results.
