The start of this year was marked by the publication of two new global datasets for environmental analysis. My impression is that both datasets will become increasingly important in ecological analysis (even though their value for conservation biology has been actively criticized; see Tropek et al. 2014). There is therefore a need to assess how accurately they detect forest loss over time and whether they are consistent with each other.
The first dataset is the already famous Global Forest Map published by Hansen et al. (2013) in Science at the end of last year. Their dataset spans the years 2000 to 2012, and by applying a temporal time-series analysis to Landsat data alone they produced a pretty decent high-resolution land-cover product. Although the resolution of the Hansen dataset is great (30 m globally, coming from Landsat), Hansen et al. decided to publish forest cover only for the year 2000 baseline. They do provide aggregated loss, gain and loss-per-year layers, but the user has no option to reproduce a similar product for the year 2012.
The other dataset is the combined result of four years of monitoring by the Japanese ALOS-PALSAR satellite. The team released a global forest cover map at 50 m spatial resolution which, in contrast to Hansen, can be acquired for the whole time frame of the ALOS-PALSAR mission, giving global coverage from 2007 until 2010. The data can be downloaded from their homepage after registering for an account. This temporal span, with data available for multiple years, in theory allows temporal comparisons and predictions about future land-use trends. However, I am a bit concerned about the accuracy of their classifications, as I have already found multiple errors in the area I am working in.
Because I am interested in using the ALOS PALSAR dataset in my analysis (how often do you get a nice spatio-temporal dataset of forest cover?), I compared the forest loss detected in my area of interest by both datasets. Note that this is a comparison between different satellite sensors as well as different classification algorithms, so we are not comparing products derived from the same data source.
So what is the plan for our comparison:
- We downloaded the ALOS PALSAR layers for all available years for the area around Kilimanjaro in northern Tanzania (N00, E035). We then extracted only the forest cover (value == 1) and calculated the difference between consecutive years to acquire the forest loss for 2008, 2009 and 2010 respectively.
- From the Google Earth Engine app we downloaded the “loss per year” dataset and cropped it to our area of interest. We kept only the forest loss for the years 2008, 2009 and 2010, which are also covered by the ALOS PALSAR dataset, and resampled the Hansen dataset to 50 m to match the ALOS PALSAR resolution.
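The differencing step above can be sketched in a few lines of numpy. This is only an illustration: the tiny arrays below stand in for the real extracted forest masks, and all values are made up.

```python
import numpy as np

# Toy binary forest masks (1 = forest, 0 = non-forest) for two years,
# standing in for the extracted ALOS PALSAR tiles (hypothetical data).
forest_2007 = np.array([[1, 1, 0],
                        [1, 1, 1],
                        [0, 1, 1]])
forest_2008 = np.array([[1, 0, 0],
                        [1, 1, 0],
                        [0, 1, 1]])

# A cell counts as forest loss if it was forest in the earlier year
# but no longer forest in the later one.
loss_2008 = (forest_2007 == 1) & (forest_2008 == 0)
print(int(loss_2008.sum()))  # number of loss cells between the two years
```

The same boolean logic, applied tile by tile to the real rasters, yields the per-year loss layers used in the table below.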
I haven’t found a fancy way to display this simple comparison, so here is just the result table. As expected (if you inspect the data visually), the ALOS PALSAR algorithm overestimates the amount of forest loss considerably.
| | 2008 | 2009 | 2010 |
|---|---|---|---|
| Hansen forest loss cells | 262 | 304 | 529 |
| ALOS PALSAR forest loss cells | 26995 | 24970 | 16297 |
| Cells equal in both | 17 | 30 | 131 |
So which one is right? I personally trust Hansen’s data a lot more, especially because I found it to be pretty consistent in my study area. For me the ALOS PALSAR data is not usable yet, at least until the authors have figured out ways to improve their classification. Users should not forget that these forest cover products are ultimately just the result of a big unsupervised algorithm that does not discriminate between right and wrong. Without validation and careful consideration by the observer you might end up with wrong results.
Shimada, M., Isoguchi, O., Tadono, T. & Isono, K. (2009). “PALSAR radiometric and geometric calibration.” IEEE Transactions on Geoscience and Remote Sensing, 47(12), 3915–3932.

Shimada, M. & Otaki, T. (2010). “Generating continent-scale high-quality SAR mosaic datasets: Application to PALSAR data for global monitoring.” IEEE JSTARS Special Issue on Kyoto and Carbon Initiative, 3(4), 637–656.

Shimada, M., Isoguchi, O., Motooka, T., Shiraishi, T., Mukaida, A., Okumura, H., Otaki, T. & Itoh, T. (2011). “Generation of 10 m resolution PALSAR and JERS-SAR mosaic and forest/non-forest maps for forest carbon tracking.” IGARSS 2011, 3510–3513.
Since QGIS 2.0 stable was released just a while ago, I thought it was time to enhance my plugin LecoS a bit more. I was also missing some functions; for instance, I found no appropriate function to compute zonal statistics for a set of my rasters. SAGA has a function to calculate some statistics using a categorical and a zone raster layer, but it lacks a raster output and specific statistics. So I added a new ZonalStatistics function to LecoS, and I am sure it will be of some use to landscape ecologists and other GIS users out there. See a use case below!
Furthermore, I regularly use a lot of short Python scripts to generate and query raster layers using a gdal+numpy backbone. These custom functions of mine are a lot faster than any other plugin (all hail numpy), which is why I also implemented some functions that are already available in QGIS through other plugins.
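To give a flavour of why a pure numpy backbone is fast, here is a minimal sketch of zonal statistics done with `np.bincount` instead of looping over zones. This is not the plugin’s actual code, just an illustration with made-up arrays.

```python
import numpy as np

# Toy zone raster (integer zone ids) and a value raster of the same
# shape; both are hypothetical stand-ins for real GDAL-read arrays.
zones = np.array([[0, 0, 1],
                  [1, 1, 2],
                  [2, 2, 2]])
values = np.array([[10., 20., 30.],
                   [40., 50., 60.],
                   [70., 80., 90.]])

# Per-zone sums and counts in one vectorized pass, then the mean.
sums = np.bincount(zones.ravel(), weights=values.ravel())
counts = np.bincount(zones.ravel())
means = sums / counts
print(means)  # mean of the value raster within each zone id
```

Because `bincount` visits every cell exactly once, this scales to large rasters without any Python-level loop over the zones.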
Here is the full changelog from the last LecoS version 1.8.2 to the new 1.9 (note that QGIS 1.8 is no longer supported):
# Version 1.9
### Major Update: ###
- Added new tools to the Processing toolbox for use in complex models
- Function to count raster cells -> output as table
- Function to query raster values below a point layer
- Function to intersect two landscape (raster) layers -> output clipped raster
- Function to create a new random raster based on values from a statistical distribution -> output raster
- Function to conduct a neighborhood analysis (analogous to r.neighbors or Focal Statistics in ArcGIS)
- Function to conduct a connected component labeling of connected patches
- Function to conduct ZonalStatistics based on two landscapes (ZonalStatistics with raster layers in ArcGIS)
- Improved the overall documentation for the Processing toolbox and created new simple icons
- Fixed bug: http://hub.qgis.org/issues/8810
I didn’t create any new graphical interfaces, as I believe that SEXTANTE, now known as Processing, is the future. All new functions were therefore added only to the Processing toolbox and not as separate GUIs. This also has the cool advantage that you can use all LecoS tools within more complex multi-algorithm models. The most visible difference to older LecoS versions is that I created a new icon for every function (to make them distinguishable) and wrote documentation for each.
I just pushed another update to my land-cover analysis plugin LecoS. Besides fixing various bugs on Linux and Windows, it now contains a remodeled BatchOverlay tool (the tool that allows you to compute statistics for vector grids overlaid on raster layers). It is now capable of calculating multiple metrics landscape-wide and per class. The output can be displayed directly on screen, saved to a temporary or given file, or written to the vector layer’s attribute table (enabled in ver. 1.7.1).
I tested the plugin on Linux (Debian Testing) and Windows (XP SP3). On both systems the plugin should at least be able to start and calculate some metrics. However, remember that this plugin is still marked as experimental, so bugs and strange Python error messages might occur.
There is a known bug on Windows when trying to add a generated table to the QGIS table of contents: doing so crashes QGIS 1.8 and results in an error message in QGIS dev. –> Fixed in recent update 1.7.1
Changelog for current Version 1.7.2:
– Enabled the calculation of landscape statistics for vector layers
– Replaced QMessageBoxes with QMessageBar messages if a newer QGIS version (>= 10900) is used
– Removed the landscape diversity tool and merged options to calculate landscape diversity into the other toolset
– Bug fixing
- The plugin needs the Python libraries scipy, numpy and PIL (imaging) to run. If you don’t install them (the default is “no” on Windows), then you’ll likely see error messages after startup. To install the libraries on Linux systems, just get them via your package manager (python-scipy, python-imaging) or compile them system-wide. To install them on Windows, download the OSGeo4W installer, select the advanced install, search for the packages python-numpy, python-scipy and python-imaging and check them all (besides checking the QGIS binaries). If you do this correctly, LecoS should run out of the box.
- If you stumble upon any errors, PLEASE don’t report them on this blog, as I easily lose track of those comments. Use the official bug tracker to report any bugs you hit while using LecoS, and try to give me as much information as possible about your system and QGIS setup (including package versions and maybe a small data subset for testing).
The development of this plugin has been supported by the University of Évora. Future versions will include options to analyse landscape vector layers, additional metrics and post-hoc result grouping.
I just pushed a new update to my QGIS plugin LecoS, called the “Landscape Modifier”. The name may sound strange at first, but I believe it can be really useful even in cases that don’t involve any ecological expertise. In fact, the desire to add these functions was one of the many reasons I started coding this plugin in the first place. I am fascinated by the possibilities the scientific Python library SciPy, as well as scikits, offers for image analysis. With just a few lines of code one can easily filter, denoise and improve images, as demonstrated on this very good tutorial site. This is useful in GIS applications too, as raster layers are in fact very similar to image files: in essence both consist of rows and columns of data values. So moving from images to raster layers in GIS systems isn’t hard at all. But now about the plugin update.
I added the following functions:
- Extract Landscape patch edges (Returns a raster with only the outer borders of raster patches)
- Isolate greatest/smallest patch (Returns only the smallest or biggest patches of a landscape class)
- Increase or decrease landscape patches (Returns a raster with landscape patches inc./dec. x times)
- Fill Holes inside landscape patches (Closes inner holes of landscape patches)
- Cleans landscape of small border pixels (Removes all pixels smaller than x times a taxicab structure)
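Under the hood, operations like these map directly onto scipy.ndimage morphology. Here is a minimal sketch, not the plugin’s actual code, using a tiny made-up binary raster with one inner hole and one isolated pixel:

```python
import numpy as np
from scipy import ndimage

# Toy binary classification (1 = woodland) with a hole in the middle
# of a patch and one isolated border pixel (hypothetical data).
woodland = np.array([[0, 0, 0, 0, 0],
                     [0, 1, 1, 1, 0],
                     [0, 1, 0, 1, 0],
                     [0, 1, 1, 1, 0],
                     [1, 0, 0, 0, 0]])

filled = ndimage.binary_fill_holes(woodland)  # "Fill holes" step
opened = ndimage.binary_opening(filled)       # removes small isolated pixels
grown = ndimage.binary_dilation(opened)       # "increase" patches by one cell
print(int(filled.sum()), int(opened.sum()), int(grown.sum()))
```

The real functions add raster I/O and options on top, but the core of each tool is a morphological operation of this kind.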
Here is a practical use case. I performed a supervised landscape classification using training polygons to extract tree cover from a study location in western Africa. There are a lot of errors, small pixels and holes in the woodland, and I will try to improve that a little. My previous landscape classification looks like this:
After executing “Fill holes”, a “small pixel cleanup” and an “increase” of all woodland patches, the result looks like the picture below. Of course, the question of whether one should perform those operations at all should always be settled first. Think critically about your classification and whether it can be improved. In my case I only wanted the major trees and am not interested in artifacts or smaller shrubs.
Download the new update via the QGIS plugin downloader or here. Please contact me only if you have found a bug, not with operating-system-specific questions (like how to install scipy). I probably won’t answer those questions anymore!
The next major milestone for LecoS will probably be SEXTANTE toolbox support! Furthermore, I would really like to see a plugin exposing all the various scipy.ndimage functions. I believe it is just a matter of time before such a plugin shows up. Many of these methods could be really useful in QGIS, for example for DEM smoothing as demonstrated in this or this gis.stackexchange answer.
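The DEM smoothing idea is a one-liner with scipy.ndimage. A minimal sketch, with a made-up toy elevation array containing one spike:

```python
import numpy as np
from scipy import ndimage

# Hypothetical toy DEM; the 150 is a spike/artifact to be smoothed out.
dem = np.array([[100., 102., 101.],
                [103., 150., 104.],
                [101., 105., 102.]])

# A 3x3 median filter replaces each cell with the median of its
# neighborhood, which removes isolated spikes while keeping edges.
smoothed = ndimage.median_filter(dem, size=3)
print(smoothed[1, 1])  # the spike is replaced by the local median
```

For gentler smoothing, `ndimage.gaussian_filter` or `ndimage.uniform_filter` can be swapped in; the median filter is simply the most robust choice against single-cell artifacts.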
As mentioned before, I coded a new plugin for QGIS. Yesterday I decided to release a first version, because on Monday I will start a new job that will also involve a lot of GIS analysis and coding. I will develop the plugin further whenever I have time.
The plugin can be downloaded from the QGIS plugin hub. It is of course still marked as experimental, so please check for any bugs or missing things. You have to put the LecoS folder into your QGIS plugin folder (~/.qgis/python/plugins/ on my machine).
You can find more information and a first use case about the plugin here:
I also included another little GUI that allows the user to calculate diversity indices (used for a heterogeneity analysis of the landscape). I also wanted to include more metrics (especially total edge length), but the calculation of the patch perimeter has driven me crazy over the last days and I couldn’t get it to work (perimeter is the basis of many more advanced metrics). Interested coders wanting to help can write me a comment or an email.
Since last week I have spent my evenings coding a new plugin for the QGIS community. It deals with land-cover analysis of classified rasters such as the CORINE dataset.
The plugin is named LecoS, which stands for Landscape Ecology Statistics, and is able to compute some of the commonly used FRAGSTATS metrics directly in QGIS (FRAGSTATS is only available for Windows and doesn’t work on many Linux machines without major reconfiguration). This includes, for example, the mean patch area or the number of identified patches per class (like the number of forest patches in an agricultural matrix). More metrics will be added in the future. The user can choose whether to compute a single metric or several in a row. Additionally, I want to include the possibility to define a custom metric for special calculations in order to add flexibility.
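The two metrics named above can be sketched in a few lines with scipy.ndimage. This is an illustration with made-up data, not the plugin’s internal code:

```python
import numpy as np
from scipy import ndimage

# Toy class raster (1 = forest) and a hypothetical per-cell area
# (0.09 ha for a 30 m cell).
forest = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 1],
                   [0, 0, 0, 1],
                   [1, 0, 0, 1]])
cell_area = 0.09

# Connected component labeling (4-connectivity by default) identifies
# each patch; cell counts per patch follow from bincount.
labeled, n_patches = ndimage.label(forest)
sizes = np.bincount(labeled.ravel())[1:]  # cells per patch, label 0 = background

print(n_patches)                  # number of forest patches
print(sizes.mean() * cell_area)   # mean patch area in ha
```

Passing an all-ones 3x3 structure to `ndimage.label` would switch to 8-connectivity, which changes which diagonal neighbors count as the same patch.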
I will release the plugin in the near future. Although it is already running and basically working, there are a lot of little bugs, and the majority of metrics still need to be implemented.
Things to be done
- Adding more metrics (for instance total edge length or the landscape division index)
- Designing the GUI for the custom metric calculation (will be awesome)
- Ugly Bug hunting
- Adding a batch processor for features of masked vector shapes