New publication: LecoS now with own reference

As I can see, my QGIS plugin LecoS is still widely used and downloaded from the QGIS plugin hub. I have noticed that some people have already started referencing either my blog or the QGIS repository in their outputs, which is fine, but after thinking about it for a while I thought: why not make a little descriptive article out of it (being an upstart PhD scholar and scientist an’ all)? I am now happy to announce that this article has passed scientific peer review and has now been published in early view in the journal Ecological Informatics.

LecoS — A python plugin for automated landscape ecology analysis

The quantification of landscape structures from remote-sensing products is an important part of many analyses in landscape ecology. This paper introduces a new free and open-source tool for conducting landscape ecology analysis. LecoS is able to compute a variety of basic and advanced landscape metrics in an automated way. The calculation can furthermore be partitioned by iterating through an optionally provided polygon layer. The new tool is integrated into the QGIS processing framework and can thus be used as a stand-alone tool or within bigger, more complex models. For illustration a potential case study is presented, which quantifies pollinator responses to landscape-derived metrics at various scales.

The following link provided by Elsevier is active until 23 January 2016. If you need a copy later on and don’t have access to the journal (sorry, I didn’t have the money to pay for open-access fees), then feel free to either contact me or read an earlier preprint of the manuscript on PeerJ.

http://dx.doi.org/10.1016/j.ecoinf.2015.11.006
So if you are using LecoS in any way for your work, it would be nice if you could reference it using the citation below. That shows me that people are actively using it and gives me an incentive to keep developing it in the future.

Martin Jung, LecoS — A python plugin for automated landscape ecology analysis, Ecological Informatics, Volume 31, January 2016, Pages 18-21, ISSN 1574-9541, http://dx.doi.org/10.1016/j.ecoinf.2015.11.006.
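If you prefer a machine-readable version, here is a small sketch that builds the reference as an R bibentry (all fields are copied from the citation above):

# Build the LecoS reference as an R bibentry (fields taken from the citation above)
ref <- bibentry(
  bibtype = "Article",
  author  = "Martin Jung",
  title   = "LecoS - A python plugin for automated landscape ecology analysis",
  journal = "Ecological Informatics",
  volume  = "31",
  pages   = "18-21",
  year    = "2016",
  doi     = "10.1016/j.ecoinf.2015.11.006"
)
# Print it in citation-ready text form
print(ref, style = "text")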

The full source code of LecoS is released on GitHub.

Get credit for your work!

Today I just want to share with you two services (one brand new, the other granted already two years old) that aim to make voluntary or open work more visible and creditable. Researchers often do a lot of work that is not really valued or even accounted for in academic hiring processes or funding proposals, for instance the very process of participating as a (voluntary) reviewer for a peer-reviewed journal. Those of you who have done such a review know that it is often a tremendous amount of work to do it properly. For me it costs several hours of a day that I could otherwise spend on my own work. I am not criticising peer review here (also for lack of alternatives), but I have often wondered whether there is a way to get credit for your past peer reviews. As I have just found out, there actually is: Publons is a website that allows you to create a free profile page where you can list your past reviews. They have an email verification system in place and are in direct contact with journals to check whether you have actually done a review. You don’t have to publish the contents of your review, but you can obviously decide to do so (for instance if it was a particularly well written study, or if you just love embracing openness). Give it a try.

The other service I want to introduce is called Depsy and was launched just today. It is a website that promotes innovative and highly used scientific software by scanning GitHub, citations, download rates and dependencies. Why is that helpful and important? Consider for instance that you developed a piece of software, made it openly available and even provided a reference so that users can cite you. However, your work might just be incorporated into other packages (such as the rgdal package, which merely wraps access to the gdal and proj4 libraries), or it might not be cited out of pure convenience. For instance, I really wonder whether most users of ggplot2 in R ever called citation("ggplot2") and actually cited it in their work. Depsy dedicates a page to each open-source project (here for instance the one for ggplot2), which evaluates the overall impact of the software and how it has been (re)used since its release. A very cool and nice idea, and I hope that this new service will finally provide credit to all those programmers out there, so that maybe one day creating open-source scientific software will be properly valued.
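As an aside, retrieving that citation in R really is a one-liner:

# Print the canonical citation entry of an installed R package
citation("ggplot2")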

To all editors, reviewers and authors: time to move on regarding land sparing

Interesting read:
J. Fischer on the land-sharing/land-sparing debate.

Ideas for Sustainability

By Joern Fischer

Synopsis of this blog post: We don’t need sparing or sharing but both; and how exactly this should happen in any given landscape requires a (more holistic) interdisciplinary approach to be answered. Editors, reviewers and authors should recognize this and prioritise work that goes substantially beyond trading off sparing vs. sharing.

It’s no great secret that I’m not the biggest fan of the framework on land sparing and land sharing – though I do recognize that it does have an academic value, and it is an internally consistent, elegant framework. Those who know how to use this framework carefully do good science with it. But most users over-interpret it, which I find increasingly upsetting. So this blog post is a call to editors, reviewers and authors to be more critical about fundamental assumptions that are regularly being made by many authors, but hardly ever spelt out, or…


Android apps for researchers – My personal favourites

For a long time I have been rather reluctant to jump on the smartphone/touchscreen train, which might be due to the fact that I am rather conservative with regard to software. In the same way I choose my Linux distribution (Debian, for the extra stability and its often outdated but proven packages), I prefer to choose the tools surrounding me: stability and function have always meant a lot more to me than cultural trends or taste. Nevertheless, last month I decided to purchase my first device with a touchscreen, a tablet running Android, for my daily use and so that I do not have to take the notebook with me on every travel. I have never really used something like this before, so please excuse the following exaggerated praise, as this whole world of apps (as in applications) is pretty new to me. In the short time I have used my new tablet, I have already managed to find some really useful apps, which I would like to share with you (feel free to recommend others):

Doing any kind of research usually requires you to read broadly and read a lot! Most of the time I stumble across new literature by reading blogs and tweets and by following researchers who work on issues similar to mine. Of course I occasionally check out different journal homepages as well and scroll down the list of abstracts from the latest issues. Working at or being registered with a scientific institution enables you to read papers from all kinds of journals, not only those directly related to your main field of research. I promptly registered for many journals, including some that are only very loosely related to my field of study. In BrowZine, newly published issues are highlighted with a red dot, so you can be sure never to miss a new paper from your favourite journal. In addition you can save any downloaded papers directly to your Dropbox or Mendeley account. Cons: some open-access journals (PeerJ) and preprint hosts (bioRxiv) seem to be missing, and apparently not every institution has made a deal with the app publisher.

This one is probably not new to you. Evernote has been around for a while and simply does a splendid job of organizing your thoughts. You can drop whole websites, plain text files and pictures together to build your own little collection of sticky notes. I usually keep the web interface open on my desktop PC as well, and all notes are synchronized with my mobile device.

If you are not running Windows/Evernote on your production machine, then you usually go with either Zotero or Mendeley as the literature management software of your choice. I got used to Mendeley and their nice plugin for LibreOffice, which enables you to insert and manage references directly from Mendeley. This really paid off when I noticed that there is also a Mendeley app, which syncs with your Mendeley account. Why is that useful? Well, I can for instance manage all my references and tons of PDFs on my PC, sync them to my Mendeley account and then have them readily available for reading and commenting on my mobile device. Not to mention that it integrates quite well with other providers such as the above-mentioned BrowZine.

An excellent file browser which I really would like to have open all the time. You can browse all the files on your device (even the hidden ones), and social and remote services (like cloud hosts, FTP or network servers) are integrated. The ES File Explorer is organized in windows, which enables you to switch quickly between, for instance, your Dropbox and your pictures. A very good discovery!

I tried almost every calendar and mail app available in the Google Play store, but in the end I stuck with the default Google Calendar and Gmail. The reasons: ease of navigation, no annoying ads or popups trying to persuade you to buy a “pro” version and, especially, working sync with a wide range of accounts, contacts and events(!). Obviously the Google apps have something of a home-field advantage on Android compared to the alternatives. Having the same kind of interface for the calendar on both the tablet and my personal computer was really what sealed the deal in the end. Gmail is also quite easy to use and manage, especially for people like me with multiple mail accounts.

This one is really handy, especially for people who often get lost. It lets you access the popular OpenStreetMap maps and navigate through them with your touchscreen. If you enable GPS you can see your current location and calculate the optimal route to your destination. Out of internet? No problem, the app lets you download and store whole geographic regions, so that you can access OpenStreetMap mapping and routing even while you have no connection. Quite good if you are lost on the way to a conference and don’t want to use your precious bandwidth.

This one is an output of the Jetz lab at Yale University. You can use the application to identify the species that you just saw on your morning stroll around the park/coast/reserve. Based on species range maps it calculates the number of species that can potentially be discovered in the current area. The little pictures also help a lot with identification.

That’s it. But feel welcome to comment and suggest other nice (free) apps. I should explicitly mention that I am not affiliated with or employed by any of the app providers.

Interesting Paper: Land-Sparing Agriculture Best Protects Avian Phylogenetic Diversity

A quick post to highlight a new publication in this week’s issue of Current Biology. Edwards et al. went for another piece on the land-sharing/land-sparing debate and presented a very nice case study. Land-sharing is often defined as combining “sustainable” agricultural production with higher on-farm biodiversity, usually at the cost of lower yields and some loss of natural habitat. Land-sparing, on the other hand, attempts to keep the remaining natural habitat free from human use by intensifying production and increasing yields in other areas, thus reducing those areas’ potential for wildlife-friendly farming. The authors combined field work from the Chocó-Andes region (taxonomic focus: birds) with simulation models to investigate which strategy might benefit biodiversity the most. Contrary to many previous publications they focused on phylogenetic diversity (PD) rather than species richness. Based on landscape simulation models they could show that PD decreases steadily with greater distance to forest, which is interesting because it demonstrates that land-sharing strategies might only be successful if sufficient amounts of natural habitat, which can act as source habitat for dispersing species, are in close proximity.

 

Figure: From Edwards et al. 2015

According to their analysis some species seem to benefit more from land-sparing strategies than others. Specific evolutionary traits thus might be either beneficial or detrimental for surviving in intensive human land use such as agriculture. They conclude that land-sharing might be of limited benefit without the simultaneous protection of nearby blocks of natural habitat, which can only be achieved with a co-occurring land-sparing strategy.

Further reading: http://www.theguardian.com/science/grrlscientist/2015/sep/03/farming-practices-benefit-tropical-birds-agriculture-sustainability

Edwards, D. P., Gilroy, J. J., Thomas, G. H., Uribe, C. A. M., & Haugaasen, T. (2015). Land-Sparing Agriculture Best Protects Avian Phylogenetic Diversity. Current Biology. http://www.cell.com/current-biology/abstract/S0960-9822%2815%2900931-8

Playing with Landsat 8 metadata

The Landsat mission is one of the most successful remote-sensing programs and has been running since the early 1970s. The most recent addition to the flock of Landsat satellites, mission no. 8, has been supplying tons of images to researchers, NGOs and governments for over two years now. Providing nearly 400 images daily (!), this has amassed to an impressive dataset of over half a million individual scenes (N = 515243 as of 29/07/2015).

Landsat 8 scenes can be easily queried via a number of web interfaces, the oldest and most established being the USGS EarthExplorer, which also distributes other NASA remote-sensing products. ESA has also started to mirror Landsat 8 data, and so has the great Libra website from developmentseed. Using the Landsat 8 before/after tool from remotepixel.ca you can even make on-the-fly comparisons of imagery scenes. You might ask how some of those services are able to show you the number of images and the estimated cloud cover. This information is saved in the scene-list metadata file, which contains the identity, name, acquisition date and many other attributes of all Landsat 8 scenes since the start of the mission. In addition, Landsat 8 scenes come with a cloud-cover estimate (sadly only L8, but as far as I know the USGS is working on a post-creation measure for the previous satellites), which you can readily explore on a global scale. Here is some example code showcasing how to peek into this huge, ever-growing archive.

# decompressFile() comes from the R.utils package
library(R.utils)
# Download the Landsat 8 bulk metadata file
l = "http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_8.csv.gz"
download.file(l,destfile = basename(l))
# Now decompress the .gz archive into a temporary csv file
t = decompressFile(basename(l),temporary = T,overwrite=T,remove=F,ext="gz",FUN=gzfile)

Now you can read in the resulting csv. For speed I would recommend using the “data.table” package!

# Load data.table (installing it first if necessary)
if(!require("data.table")) install.packages("data.table"); library(data.table)
# Use fread to read in the csv (note <- so that zz is assigned rather than passed as an argument)
system.time( zz <- fread(t,header = T) )
# Remove the decompressed file again
file.remove(t)
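With the table in memory you can check the numbers quoted above yourself. A small sketch; note that "acquisitionDate" is my assumption about the column name in the bulk-metadata format, so adjust if needed:

# How many scenes are in the archive?
nrow(zz)
# Temporal coverage of the archive (column name "acquisitionDate" assumed)
range(as.Date(zz$acquisitionDate))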

The metadata file contains quite a number of cool fields to explore. For instance the “browseURL” column contains the full link to an online .jpg thumbnail, very useful for a quick look at the scene.

# readJPEG() comes from the jpeg package
require('jpeg')
l = "http://earthexplorer.usgs.gov/browse/landsat_8/2015/164/071/LC81640712015201LGN00.jpg"
# mode = "wb" keeps the binary download intact on Windows
download.file(l,basename(l),mode = "wb")
jpg = readJPEG(basename(l)) # read the file
res = dim(jpg)[2:1] # get the resolution (width, height)
plot(1,1,xlim=c(1,res[1]),ylim=c(1,res[2]),asp=1,type='n',xaxs='i',yaxs='i',xaxt='n',yaxt='n',xlab='',ylab='',bty='n')
rasterImage(jpg,1,1,res[1],res[2])
Figure: L8 thumbnail

The “cloudCoverFull” column contains the average cloud cover for each scene, which is interesting to explore, as the long-term average of measured cloud cover per region/country likely differs due to different altitudes or precipitation levels. Here is a map showing the average cloud cover per individual scene since mission start:

Figure: Average global cloud cover in Landsat 8 data
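In case you want to reproduce such a map, here is a minimal sketch using the metadata table zz from above. The "cloudCoverFull" column is mentioned above; the "path", "row" and scene-centre coordinate column names are my assumptions about the bulk-metadata format and may need adjusting:

# Average cloud cover per WRS-2 path/row cell (data.table syntax)
cc = zz[, .(meanCloud = mean(cloudCoverFull, na.rm = TRUE),
            lat = mean(sceneCenterLatitude, na.rm = TRUE),
            lon = mean(sceneCenterLongitude, na.rm = TRUE)),
        by = .(path, row)]
# Colour the cells from blue (clear) to red (cloudy) and plot them at the scene centres
pal = colorRampPalette(c("blue", "yellow", "red"))(100)
idx = cut(cc$meanCloud, breaks = 100, labels = FALSE)
plot(cc$lon, cc$lat, col = pal[idx], pch = 15, cex = 0.4,
     xlab = "Longitude", ylab = "Latitude",
     main = "Average cloud cover per Landsat 8 scene")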

Clouds are a major source of annoyance for anyone who intends to measure vegetation cover or classify land cover. I might write another post later showcasing some examples of how to filter satellite data for clouds.

Assessing habitat specialization using IUCN data

For quite some time, ecological models have tried to incorporate both continuous and discrete characteristics of species. Newbold et al. (2013) demonstrated that functional traits affect the response of tropical bird species towards land-use intensity: tropical forest specialist birds seem to decrease globally in probability of presence and abundance in more intensively used forests. This pattern extends to many taxonomic groups, and the worldwide decline of “specialist species” has been noted before by Clavel et al. (2011).

Figure from Newbold et al. 2013 (doi: http://dx.doi.org/10.1098/rspb.2012.2131): (a) probabilities of presence of tropical bird species in differently disturbed forests and (b) ratios of abundance in lightly and intensively disturbed forests relative to undisturbed forests. Forest specialists are disproportionately affected in intensively used forests.

But how does one acquire such data on habitat specialization? Either you assemble your own exhaustive trait database, or you query information from some of the openly available data sources. One of these could be the IUCN Red List, which not only has expert-validated data on a species’ current threat status, but also on population size and habitat preference. Here IUCN follows its own habitat classification scheme (http://www.iucnredlist.org/technical-documents/classification-schemes/habitats-classification-scheme-ver3). The curious ecologist and conservationist should keep in mind, however, that not all species have been assessed by IUCN yet.

There are already a lot of scripts available on the net from which you can draw inspiration on how to query the IUCN Red List (Kay Cichini from the biobucket explored this already in 2012). Even better: someone actually compiled a whole R package called letsR, full of web-scraping functions to access the IUCN Red List. Here is some example code for Perrin’s Bushshrike, a tropical bird quite common in central Africa:

# Install package
install.packages("letsR")
library(letsR)

# Perrin's or Four-colored Bushshrike latin name
name <- 'Telophorus viridis'

# Query IUCN status
lets.iucn(name)

#>Species        Family Status Criteria Population Description_Year
#>Telophorus viridis MALACONOTIDAE LC Stable 1817
#>Country
#>Angola, Congo, The Democratic Republic of the Congo, Gabon, Zambia

# Or you can query habitat information
lets.iucn.ha(name)

#>Species Forest Savanna Shrubland Grassland Wetlands Rocky areas Caves and Subterranean Habitats
#>Telophorus viridis      1       1         1         0        0           0                               0
#> Desert Marine Neritic Marine Oceanic Marine Deep Ocean Floor Marine Intertidal Marine Coastal/Supratidal
#>      0              0              0                       0                 0                         0
#>  Artificial/Terrestrial Artificial/Aquatic Introduced Vegetation Other Unknown
#>                      1                  0                     0     0       0

letsR also has other methods to work with the spatial data that IUCN provides (http://www.iucnredlist.org/technical-documents/spatial-data), so definitely take a look. It works by querying the IUCN Red List API for the species id (http://api.iucnredlist.org/go/Telophorus-viridis). Sadly, the habitat function only returns whether a species is known to occur in a given habitat, but not whether that habitat is of major importance to the species (for instance, whether a species is known to be a “forest specialist”). Telophorus viridis, for example, also occurs in savanna and occasionally in artificial habitats like gardens (http://www.iucnredlist.org/details/classify/22707695/0).

So I programmed my own function to assess whether forest habitat is of major importance to a given species. It takes an IUCN species id as input and returns either “Forest-specialist” if forest habitat is of major importance to the species, “Forest-associated” if the species is merely known to occur in forest, or “Other Habitats” if the species does not occur in forests at all. The function works by querying the IUCN Red List and breaking up the HTML structure at given intervals that indicate a new habitat type.

Find the function on gist.github.com (strangely, WordPress doesn’t embed gists as promised).
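Since the gist apparently does not embed here, below is a minimal sketch of the idea (not the original gist code): fetch the species’ habitat classification page and check whether forest is listed and whether it is flagged as being of major importance. The URL pattern and the "major importance" wording are assumptions about the (2015-era) IUCN site, and unlike the real function this sketch searches the whole page rather than parsing the HTML per habitat row:

# Classify a species' forest dependence by scraping its IUCN habitat page
isForestSpecialist <- function(id) {
  url <- paste0("http://www.iucnredlist.org/details/classify/", id, "/0")
  # Read the raw HTML of the habitat classification page into one string
  page <- paste(readLines(url, warn = FALSE), collapse = " ")
  # Is forest habitat listed at all, and is anything marked as of major importance?
  has_forest <- grepl("Forest", page)
  major <- grepl("major importance", page, ignore.case = TRUE)
  if (has_forest && major) return("Forest-specialist")
  if (has_forest) return("Forest-associated")
  "Other Habitats"
}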

How does it work? You first need the species’ IUCN Red List id, which you can find in the URL after querying a given species name. Alternatively you could download the whole IUCN classification table and match your species name against it 😉 Find it here. Then simply execute the function with the code below.

name = 'Telophorus viridis'
data <- read.csv('all.csv')
# This returns the species id
data$Red.List.Species.ID[which(data$Scientific.Name==name)]
#> 22707695

# Then simply run my function
isForestSpecialist(22707695)
#> 'Forest-specialist'
