To all editors, reviewers and authors: time to move on regarding land sparing

Martin Jung:

Interesting read:
J. Fischer on the land-sharing/land-sparing debate.

Originally posted on Ideas for Sustainability:

By Joern Fischer

Synopsis of this blog post: We don’t need sparing or sharing but both; and how exactly this should happen in any given landscape requires a (more holistic) interdisciplinary approach to be answered. Editors, reviewers and authors should recognize this and prioritise work that goes substantially beyond trading off sparing vs. sharing.

It’s no great secret that I’m not the biggest fan of the framework on land sparing and land sharing – though I do recognize that it does have an academic value, and it is an internally consistent, elegant framework. Those who know how to use this framework carefully do good science with it. But most users over-interpret it, which I find increasingly upsetting. So this blog post is a call to editors, reviewers and authors to be more critical about fundamental assumptions that are regularly being made by many authors, but hardly ever spelt out, or…


Android apps for researchers – My personal favourites

For a long time I have been kind of reluctant to jump on the smartphone/touchscreen train, which might be due to the fact that I am rather conservative with regard to software. Just as I choose my Linux distribution (Debian, for the extra stability and its often outdated but proven packages), I prefer to choose the tools surrounding me carefully. Stability and function have always meant a lot more to me than cultural trends or taste. Nevertheless, last month I decided to purchase my first device with a touchscreen, a tablet running Android, for daily use and so that I do not have to take the notebook with me on every travel. I have never really used something like this before, so please excuse the following exaggerated praise, as this whole world of apps (as in applications) is pretty new to me. In the short time I have used my new tablet, I already managed to find some really useful apps, which I would like to share with you (feel free to recommend others):

Doing any kind of research usually requires you to read broadly and read a lot! Most of the time I stumble across new literature by reading blogs and tweets and by following researchers who work on issues similar to mine. Of course I occasionally check out different journal homepages as well and scroll down the list of abstracts from the latest issues. Working at or being registered with a scientific institution enables you to read papers from all kinds of journals, not only those directly related to your main field of research. I promptly subscribed to many journals, including some that are only very loosely related to my field of study. In BrowZine, newly published issues are highlighted with a red dot, so you can be sure never to miss a new paper from your favourite journal. In addition, you can save any downloaded paper directly to your Dropbox or Mendeley account. Cons: some open-access journals (PeerJ) and preprint servers (bioRxiv) seem to be missing, and it also seems as if not every institution has made a deal with the app's publisher.

This one is probably not new to you. Evernote has been around for a while and simply does a splendid job of organizing your thoughts. You can drop whole websites, plain text notes and pictures together to build your own little collection of post-it notes. I usually also keep the web interface open on my desktop PC, and all notes are synchronized with my mobile device.

If you are not running Windows/Evernote on your production machine, then you usually go with either Zotero or Mendeley as the literature management software of your choice. I got used to Mendeley and its nice plugin for LibreOffice, which lets you insert and manage references directly from Mendeley. This really paid off when I noticed that there is also a Mendeley app, which syncs with your Mendeley account. Why is that useful? Well, I can for instance manage all my references and tons of PDFs on my PC, sync them to my Mendeley account and then have them readily available for reading and commenting on my mobile device. Not to mention that it integrates quite well with other providers such as the above-mentioned BrowZine.

An excellent file browser which I really would like to have open all the time. You can browse all the files on your device (even the hidden ones), and social and remote services (like cloud hosters, FTP or network servers) are integrated. ES File Explorer is organized in windows, which enables you to switch quickly between, for instance, your Dropbox and your pictures. A very good discovery!

I tried almost every calendar and mail app available in the Google Play store, but in the end I stuck with the default Google Calendar and Gmail. The reasons: ease of navigation, no annoying ads or popups that want to persuade you to buy a "pro" version, and, especially, working sync with a wide range of accounts, contacts and events(!). Obviously the Google apps have kind of a home advantage on Android compared to the alternatives. Having the same kind of interface for the calendar on both the tablet and my personal computer was really what sealed the deal in the end. Gmail is also quite easy to use and manage, especially for people like me with multiple mail accounts.

This one is really handy, especially for people who often get lost. It lets you access the popular OpenStreetMap maps and navigate through them with your touchscreen. If you enable GPS, you can see your current location and calculate the optimal route to your destination. Out of internet? No problem, the app lets you download and store whole geographic regions, so that you can access OpenStreetMap mapping and routing even while you have no internet. Quite good if you are lost on the way to a conference and don't want to use your precious bandwidth.

This one is an output of the Jetz lab at Yale University. You can use the application to identify the species that you just saw on your morning stroll around the park, coast or reserve. Based on species range maps, it calculates the number of species that can potentially be encountered in the current area. The little pictures also help a lot with identification.

That's it. But feel welcome to comment and suggest other nice (free) apps. I should explicitly mention that I am not affiliated with or employed by any of the app providers.

Interesting Paper: Land-Sparing Agriculture Best Protects Avian Phylogenetic Diversity

A quick post to highlight a new publication in this week's issue of Current Biology. Edwards et al. went for another piece on the land-sharing/land-sparing debate and present a very nice case study. Land-sharing is often defined as combining "sustainable" agricultural production with higher biodiversity outcomes on the same land, often at the cost of lower yields and the loss of natural habitat. Land-sparing, on the other hand, attempts to keep remaining natural habitat free from human use, and instead intensifies production and increases yields in other areas, thus reducing their potential for wildlife-friendly farming. The authors combined field work from the Chocó-Andes region (taxonomic focus: birds) with simulation models to investigate which strategy might benefit biodiversity the most. Contrary to many previous publications, they focused on phylogenetic diversity (PD) rather than species richness. Based on landscape simulation models, they could show that PD decreases steadily with greater distance to forest, which is interesting because it demonstrates that land-sharing strategies might only be successful if sufficient amounts of natural habitat, able to act as source habitat for dispersing species, are in close proximity.



Source: Edwards et al. 2015

According to their analysis, some species seem to benefit more from land-sparing strategies than others. Specific evolutionary traits thus might be either beneficial or detrimental for surviving in intensive human land use such as agriculture. They conclude that land-sharing might be of limited benefit without the simultaneous protection of nearby blocks of natural habitat, which can only be achieved with a co-occurring land-sparing strategy.

Further reading:

Edwards, D. P., Gilroy, J. J., Thomas, G. H., Uribe, C. A. M., & Haugaasen, T. (2015). Land-Sparing Agriculture Best Protects Avian Phylogenetic Diversity. Current Biology.

Playing with Landsat 8 metadata

The Landsat mission is one of the most successful remote-sensing programs and has been running since the early 1970s. The most recent addition to the flock of Landsat satellites, mission number 8, has been supplying tons of images to researchers, NGOs and governments for over two years now. Providing nearly 400 images daily (!), it has amassed an impressive dataset of over half a million individual scenes by now (N = 515,243 as of 29/07/2015).

Landsat 8 scenes can be easily queried via a number of web interfaces, the oldest and most established being the USGS EarthExplorer, which also distributes other NASA remote-sensing products. ESA has started to mirror Landsat 8 data, and so has the great Libra website from developmentseed. Using the Landsat 8 before/after tool you can even make on-the-fly comparisons of imagery scenes. You might ask how some of those services are able to show you the number of images and the estimated cloud cover. This information is saved in the scene-list metadata file, which contains the identity, name, acquisition date and many other attributes of all Landsat 8 scenes since the start of the mission. In addition, Landsat 8 scenes come with a cloudCover estimate (sadly only L8, but as far as I know the USGS is working on a post-creation measure for the previous satellites), which you can readily explore on a global scale. Here is some example code showcasing how to peek into this huge, ever-growing archive.

# Download the scene-list metadata file
# (URL omitted in the original post; get the current link from the USGS site)
l <- ""
download.file(l, destfile = basename(l))
# Now decompress the .gz archive; decompressFile() comes with the R.utils package
library(R.utils)
t <- decompressFile(basename(l), temporary = TRUE, overwrite = TRUE,
                    remove = FALSE, ext = "gz", FUN = gzfile)

Now you can read in the resulting csv. For speed I would recommend using the “data.table” package!

# Load data.table
library(data.table)
# Use fread() to read in the csv (note the <- inside system.time())
system.time( zz <- fread(t, header = TRUE) )

The metadata file contains quite a number of cool fields to explore. For instance, the "browseURL" column contains the full link to an online .jpg thumbnail. Very useful to have a quick look at the scene.

# Thumbnail link taken from the browseURL column
# (URL omitted in the original post)
l <- ""
download.file(l, destfile = basename(l))
library(jpeg)
jpg <- readJPEG("LC81640712015201LGN00.jpg") # read the file
res <- dim(jpg)[1:2] # get the resolution in pixels
L8 Thumbnail

The "cloudCoverFull" column contains the average cloud cover for each scene, which is interesting to explore, as the long-term average of measured cloud cover per region or country likely differs due to different altitude or precipitation levels. Here is a map showing the average cloud cover per individual scene since mission start:

Average global cloud cover in Landsat 8 data
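The map above essentially boils down to a group-by on the metadata table: averaging cloudCoverFull per WRS-2 path/row combination, i.e. per scene footprint. Here is a minimal base-R sketch of that aggregation; the column names follow the USGS scene list, but the few rows below are mock values rather than the real download:

```r
# Mock rows standing in for the full scene-list metadata;
# the real table has the columns path, row and cloudCoverFull (among others)
meta <- data.frame(
  path = c(164, 164, 165, 165),
  row  = c(71, 71, 71, 71),
  cloudCoverFull = c(10.5, 20.5, 80.0, 60.0)
)

# Long-term average cloud cover per individual scene footprint (path/row)
avg <- aggregate(cloudCoverFull ~ path + row, data = meta, FUN = mean)
avg
#>   path row cloudCoverFull
#> 1  164  71           15.5
#> 2  165  71           70.0
```

Joining these per-footprint averages back to the WRS-2 footprint polygons then gives a map like the one shown above.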

Clouds are a major source of annoyance for anyone who intends to measure vegetation cover or classify land cover. I might write another post later showcasing some examples of how to filter satellite data for clouds.

Assessing habitat specialization using IUCN data

For quite some time, ecological models have tried to incorporate both continuous and discrete characteristics of species. Newbold et al. (2013) demonstrated that functional traits affect the response of tropical bird species to land-use intensity. Tropical forest specialist birds seem to decrease globally in probability of presence and abundance in more intensively used forests. This pattern extends to many taxonomic groups, and the worldwide decline of "specialist species" has been noted before by Clavel et al. (2011).

From Newbold et al. 2013

(a) Probabilities of presence of tropical bird species in differently disturbed forests and (b) ratios of abundance in lightly and intensively disturbed forests relative to undisturbed forests. Forest specialists are disproportionately affected in intensively used forests. Figure from Newbold et al. 2013.

But how does one acquire such data on habitat specialization? Either you assemble your own exhaustive trait database, or you query information from some of the openly available data sources. One option is the IUCN Red List, which not only has expert-validated data on a species' current threat status, but also on population size and habitat preference. Here IUCN follows its own habitat classification scheme. The curious ecologist and conservationist should keep in mind, however, that not all species have been assessed by IUCN yet.

There are already a lot of scripts available on the net from which you can get inspiration on how to query the IUCN Red List (Kay Cichini from the biobucket explored this already in 2012). Even better: someone compiled a whole R package called letsR, full of web-scraping functions to access the IUCN Red List. Here is some example code for Perrin's Bushshrike, a tropical bird quite common in central Africa:

# Install and load the package
install.packages("letsR")
library(letsR)

# Perrin's or Four-colored Bushshrike, latin name
name <- 'Telophorus viridis'

# Query IUCN status
lets.iucn(name)
#>Species        Family Status Criteria Population Description_Year
#>Telophorus viridis MALACONOTIDAE LC Stable 1817
#>Angola, Congo, The Democratic Republic of the Congo, Gabon, Zambia

# Or you can query habitat information
lets.iucn.ha(name)
#>Species Forest Savanna Shrubland Grassland Wetlands Rocky areas Caves and Subterranean Habitats
#>Telophorus viridis      1       1         1         0        0           0                               0
#> Desert Marine Neritic Marine Oceanic Marine Deep Ocean Floor Marine Intertidal Marine Coastal/Supratidal
#>      0              0              0                       0                 0                         0
#>  Artificial/Terrestrial Artificial/Aquatic Introduced Vegetation Other Unknown
#>                      1                  0                     0     0       0

letsR also has other methods to work with the spatial data that IUCN provides, so definitely take a look. It works by querying the IUCN Red List API for the species id. Sadly, the habitat function only returns whether a species is known to occur in a given habitat, but not whether that habitat is of major importance to it (i.e. whether a species is a "forest specialist"). Telophorus viridis, for instance, also occurs in savanna and occasionally in artificial habitats like gardens.

So I programmed my own function to assess whether forest habitat is of major importance to a given species. It takes an IUCN species id as input and returns either "Forest-specialist" if forest habitat is of major importance to the species, "Forest-associated" if the species is merely known to occur in forest, or "Other Habitats" if the species does not occur in forests at all. The function works by querying the IUCN Red List and splitting up the HTML structure at the markers that indicate a new habitat type.

Find the function on gist.github.com (strangely, WordPress doesn't embed gists as promised).

How does it work? You first enter the species' IUCN Red List id. It is in the URL after you have queried a given species name. Alternatively, you could download the whole IUCN classification table and match your species name against it ;) Find it here. Then simply execute the function with the code.

name = 'Telophorus viridis'
data <- read.csv('all.csv')
# This returns the species id
#> 22707695

# Then simply run my function
#> 'Forest-specialist'
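Since the gist embed did not come through, here is a rough sketch of the classification logic described above. This is not the actual scraping code from the gist: it assumes (hypothetically) that you already have the species' habitat table as a data frame with a habitat name and a flag for "major importance", as shown on the IUCN Red List pages.

```r
# Sketch of the classification logic only; the input format and the
# function name are placeholders, not the real gist code
classify_forest_use <- function(habitats) {
  # habitats: data.frame with columns 'habitat' (character) and
  # 'major_importance' (logical), one row per listed habitat type
  forest <- grepl("Forest", habitats$habitat)
  if (any(forest & habitats$major_importance)) {
    "Forest-specialist"   # forest listed and of major importance
  } else if (any(forest)) {
    "Forest-associated"   # known to occur in forest, but not dependent on it
  } else {
    "Other Habitats"      # does not occur in forest at all
  }
}

# Mock table loosely based on the habitats listed for Telophorus viridis
hab <- data.frame(
  habitat = c("Forest - Subtropical/Tropical Moist Lowland",
              "Savanna - Dry",
              "Artificial/Terrestrial - Rural Gardens"),
  major_importance = c(TRUE, FALSE, FALSE)
)
classify_forest_use(hab)
#> [1] "Forest-specialist"
```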

Neutral Landscape model generation with QGIS

There are many interesting things to calculate in landscape ecology, particularly its statistical metrics. However, many (if not the majority) of the published toolsets are not reproducible: their algorithms' code is not published or open source. Obviously this makes implementing the underlying algorithms much harder for independent developers (scientists) who don't have the time to reproduce the original work (not to mention the danger of making stupid mistakes; we are all human).

I recently found this new article in Methods in Ecology and Evolution by Etherington et al., who don't really present any novel techniques or methods, but instead provide a new Python library capable of generating Neutral Landscape Models (NLMs). NLMs are often used as null-model counterparts to real remote-sensing derived maps (land cover or altitude) to test the effect of landscape structure or heterogeneity on a species or community. Many NLM algorithms are based on clustering techniques, cellular automata or randomly distributed numbers in a given 2D space. There have been critical and considered voices stating that existing NLMs are often misused and that better null models are needed for specific hypotheses, such as a species' perception of landscape structures. Nevertheless, NLMs are still actively used and new papers using them keep being published.


Figure by Etherington et al. showing a number of different NLMs


The new library, called NLMpy, is open source and published under the MIT licence. Thus I could easily use it and integrate it into QGIS and its processing framework. NLMpy only depends on numpy and scipy and thus doesn't add any new dependency to your Python setup if you are already able to run LecoS in your QGIS installation. The NLM functions are visible in the new LecoS 1.9.6 version, but only if you have NLMpy installed and available in your Python path. Otherwise they won't show up! Please don't ask me here how to install additional Python libraries on your machine, but rather consult Google or some of the Q&A sites. I installed it following the instructions on this page.

Midpoint displacement algorithm in QGIS

NLM with randomly clustered landcover patches

After you have installed it and upgraded your LecoS version within QGIS, you should be able to spot a new processing group and a number of new algorithms. Here are some screenshots that show the new algorithms and two NLMs that I generated. The first one is based on a midpoint-displacement algorithm and could for instance be tested against an altitude raster layer (you need to reclassify it to real altitude values first). The second one is aimed at emulating a randomly classified land-cover map. Here I first generated a proportional NLM using a random-cluster nearest-neighbour algorithm. Second, I used the library's reclassification function ("Classify proportional Raster") to convert the proportional values (range 0-1) into exactly six discrete land-cover classes. Both null models look rather realistic, don't they ;)
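To illustrate the reclassification step, here is a small stand-alone sketch of the idea behind "Classify proportional Raster" (my own illustration in R, not the actual NLMpy/LecoS code): the continuous 0-1 values are cut into a fixed number of equal-width bins, each bin becoming one land-cover class.

```r
# Bin continuous 0-1 values into k discrete land-cover classes
# (illustrative equal-width binning; not the actual NLMpy implementation)
classify_proportional <- function(m, k) {
  classes <- cut(as.vector(m),
                 breaks = seq(0, 1, length.out = k + 1),
                 labels = FALSE, include.lowest = TRUE)
  matrix(classes, nrow = nrow(m), ncol = ncol(m))
}

set.seed(42)
nlm <- matrix(runif(25), nrow = 5)  # stand-in for a proportional NLM raster
lc  <- classify_proportional(nlm, k = 6)
range(lc)  # every cell now carries a class label within 1..6
```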

This is a quick and dirty implementation, so some errors could occur. You should use a metre-based projection as extent (such as UTM), as negative values (common in degree-based projections like WGS84 latitude-longitude) sometimes result in strange error messages. You also have to change the CRS of the generated result to that of your project manually, otherwise you likely won't see the result. Instead of the number of rows and columns as in the original implementation, the functions in LecoS are based on a selected extent and the desired output cell size.

For more complex modelling tasks I would suggest using the library directly. To give you a good start, Etherington et al. also appended some example code and data in their article's supporting information. Furthermore, a few days ago they even published an accompanying MEE blog post with some YouTube video demonstrations of how to use their library. So it shouldn't be that hard, even for Python beginners. Or you could just use the processing interface within LecoS.

In any case, if you use the library in your research, I guess the authors would really appreciate it if you cited them :)


In addition, I temporarily removed LecoS' ability to calculate the mean patch distance metric due to some unknown errors in the calculation. I'm kinda stuck here, and anyone who can spot the (maybe obvious) bug gets a virtual hug from me!


Happy new year!

The PREDICTS database: a global database of how local terrestrial biodiversity responds to human impacts

A new article in which I am also involved. I have told readers of this blog about the PREDICTS initiative before. The open-access article describing the current state of the database has just been released as an early-view article. So if you are curious about one of the biggest databases in the world for investigating the impacts of anthropogenic pressures on biodiversity, please have a look. As we speak, the data is being used to define new quantitative indices of global biodiversity decline that are valid for multiple taxa (and not only vertebrates, like the WWF Living Planet Index).


Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species’ threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project – and avert – future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups – including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems). We make site-level summary data available alongside this article. The full database will be publicly available in 2015.

I know that I haven't been particularly active on this blog in the last months. I am currently quite busy with writing my thesis and programming. I am gonna make it up later :)