Wednesday, February 9, 2011

Understanding map projections

Map projections are both easy and tricky. I am not a specialist on the subject at all, but I have used them a little.

When I say that map projections are easy, I mean that, even without understanding them completely, there are tools which allow a user to get the work done. Of course, I use the Orfeo Toolbox library, which in turn uses the OSSIM library. Actually, some years ago, a student and I designed the interface for integrating OSSIM map projections (and sensor models too) into OTB, so that they could be used in the same way as the existing geometric transforms already available in OTB (which came from ITK).

The only thing we had to understand in order to design this interface was that map projections are coordinate transformations. We then chose the appropriate object-oriented design pattern, added a bit of C++ templates for the generic programming ingredient, and that was it.
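
To give an idea of the design, here is a minimal sketch of the adapter idea, with hypothetical names (this is not the actual OTB or OSSIM code): a map projection is exposed through the same TransformPoint() interface as any other geometric transform, with a spherical Mercator standing in for the wrapped library projection.

  // Minimal sketch of the adapter idea (hypothetical names, not the
  // actual OTB/OSSIM interface).
  #include <array>
  #include <cmath>

  using Point2D = std::array<double, 2>;

  // Generic interface shared by all coordinate transformations.
  struct CoordinateTransform
  {
    virtual ~CoordinateTransform() = default;
    virtual Point2D TransformPoint(const Point2D& in) const = 0;
  };

  // Adapter exposing a projection as a coordinate transformation. A
  // spherical Mercator stands in for the wrapped library projection.
  class MercatorProjection : public CoordinateTransform
  {
  public:
    explicit MercatorProjection(double radius = 6371000.0) : m_Radius(radius) {}

    // (longitude, latitude) in degrees -> (x, y) in meters.
    Point2D TransformPoint(const Point2D& lonLat) const override
    {
      const double pi = 3.14159265358979323846;
      const double lon = lonLat[0] * pi / 180.0;
      const double lat = lonLat[1] * pi / 180.0;
      return {{ m_Radius * lon,
                m_Radius * std::log(std::tan(pi / 4.0 + lat / 2.0)) }};
    }

  private:
    double m_Radius;
  };

The templates were there to keep this kind of interface generic over point types and dimensions, which is what lets the projections plug in next to the transforms inherited from ITK.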

When some users started using the projections (well, WE were those users), questions arose about the projections themselves. Answering these questions would have been much easier if we had had this information (posted by Caitlin Dempsey on GIS Lounge):

/"The USGS has posted a scanned file of John P. Snyder's 1987 "Map/ Projections: A Working Manual" online in PDF and DjVu format. The beginning of the book contains substantial introductory information about map projection families and distortions. Each projection is started with a useful summary of the context and usage of that particular projection. Snyder then delves into detail about the history, features, and usage before providing the mathematical /formulas used to calculate the projection."/

Sunday, February 6, 2011

Traffic Monitoring with TerraSAR-X

I have just read this interesting article which describes a way to measure vehicle speeds using space-borne SAR sensors. The article explains very clearly how the Doppler effect can be used, either with a single image or with an interferometric pair, to estimate the speed of cars or ships.
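
For the single-image case, the key relation is simple. This is my own summary of the classical "train off the track" effect, not the article's exact derivation, and the signs depend on the acquisition geometry:

  \[
    f_D = -\frac{2 v_r}{\lambda}, \qquad
    \Delta x \approx -\frac{v_r}{V_{sat}}\, R
  \]

where v_r is the target's velocity component along the radar line of sight, \lambda the radar wavelength, V_{sat} the platform velocity and R the slant range. The extra Doppler shift f_D makes a moving vehicle appear displaced in azimuth by \Delta x with respect to the road, and measuring that offset gives an estimate of v_r. With typical spaceborne values (a platform speed of several km/s and a slant range of several hundred km), a radial speed of only 10 m/s already produces an offset of several hundred meters, which is why the effect is measurable.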

DLR's TerraSAR-X system has always impressed me. It has been providing very high quality images since its launch. Having used the images myself, I can say that their geometric quality is very good. And as far as I know, this is the only operational system in flight which is able to perform one-pass interferometry.

So far so good.

However, the article forgets to mention that these satellites acquire at most 2 images per day over a given point of the Earth's surface -- typically one in the morning and one in the evening -- so one cannot expect to use this technology for real-time traffic monitoring.

So you don't need to worry about getting a speeding ticket.

Author: Jordi Inglada

Sunday, January 30, 2011

A teenager's contribution to remote sensing

Check out this article. A teenager proposes an instrument concept for measuring the temperature at the bottom of clouds. NOAA is testing it.

Monday, January 24, 2011

Multi-temporal series simulations


As I mentioned in a previous post, last September I participated in the Recent Advances in Quantitative Remote Sensing Symposium. I presented several posters there. One of them was about the assessment of the classification accuracy of the Venus and Sentinel-2 sensors for land cover map production.

While the results of the study are interesting, I think that the most important thing this paper shows is how a time series with realistic reflectance values can be simulated.

The idea here is to find a good balance between image synthesis (low accuracy) and physically sound simulation (which needs ancillary data and is computationally complex). The choice made here is to use a real time series of Formosat-2 images (only 4 spectral bands) in order to simulate Venus and Sentinel-2 time series with the same temporal sampling but with more than 10 spectral bands.

The Formosat-2 time series is used in order to:


  1. Estimate the LAI (leaf area index) for each pixel
  2. Give a spatial distribution using a land-cover map

A database containing leaf pigment values for different types of vegetation is then used, together with the above-mentioned LAI estimates, to drive a reflectance simulator. The simulated reflectances are then convolved with the relative spectral responses of the sensors in order to generate the simulated images.
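
As a rough sketch of that last step (made-up function and data layout, not the code actually used for the study), and assuming the simulated spectrum and the sensor's relative spectral response (RSR) are sampled on the same wavelength grid:

  // Band reflectance as the RSR-weighted average of the simulated
  // spectrum (hypothetical helper, for illustration only).
  #include <cassert>
  #include <vector>

  double SimulateBandReflectance(const std::vector<double>& spectrum,
                                 const std::vector<double>& rsr)
  {
    assert(spectrum.size() == rsr.size());
    double num = 0.0;
    double den = 0.0;
    for (std::size_t i = 0; i < spectrum.size(); ++i)
    {
      num += spectrum[i] * rsr[i];
      den += rsr[i];
    }
    return den > 0.0 ? num / den : 0.0;
  }

Repeating this for each band of each sensor, at each date of the series, yields the simulated multi-spectral time series.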

The poster presented at RAQRS 2010 is here.


Monday, January 3, 2011

Reproducible research

I have recently implemented the Spectral Rule Based Landsat TM image classifier described in:

Baraldi et al. 2006, "Automatic Spectral Rule-Based Preliminary
Mapping of Calibrated Landsat TM and ETM+ Images", IEEE Trans. on Geoscience and Remote Sensing, vol 44, no 9.

This paper proposes a set of radiometric combinations, thresholds and logic rules to distinguish more than 40 spectral categories in Landsat images. My implementation is available in the development version of the Orfeo Toolbox and should be included in the next release.

One interesting aspect of the paper is that all the information needed for the implementation of the method is given: every single value for thresholds, indexes, etc. is written down in the paper. This was really useful for me, since I was able to code the whole system without getting stuck on unclear things or hidden parameters.
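
To give the flavor of such rules, here is a hypothetical example (made-up index and threshold values, NOT the actual rules of the paper):

  // Illustration of a spectral rule: radiometric combinations plus a
  // logic rule assign each pixel to a spectral category. The
  // thresholds below are invented, for illustration only.
  #include <string>

  struct TMPixel
  {
    double red;   // TM band 3
    double nir;   // TM band 4
    double swir;  // TM band 5
  };

  std::string SpectralCategory(const TMPixel& p)
  {
    const double ndvi = (p.nir - p.red) / (p.nir + p.red + 1e-10);
    if (ndvi > 0.5 && p.swir < 0.2)
      return "strong vegetation";
    if (ndvi < 0.0 && p.nir < 0.1)
      return "water or shadow";
    return "other";
  }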

This is so rarely found in the image processing literature that I thought it was worth posting about. But this is not all.

Once my implementation was done, I was very happy to get some Landsat classifications, but I was not able to decide whether the results were correct or not. Since the author of the paper seemed to want his system to be used and gave all the details for the implementation, I thought I would ask him for help with the validation. So I sent an e-mail to A. Baraldi (whom I had already met before) and asked for some validation data (input and output images generated by his own implementation).

I got something better than just images. He was kind enough to send me the source code of the very same version of the software that was used for the paper – the system continues to be enhanced and the current version seems to be far better than the published one.

So now I have everything needed for reproducible research:
  1. A clear description of the procedure with all the details needed
    for the implementation.
  2. Data in order to run the experiments.
  3. The source code so that errors can be found and corrected.
I want to publicly thank A. Baraldi for his kindness and I hope that this way of doing science will continue to grow.

If you want to know more about reproducible research, check this site.



Tuesday, November 23, 2010

Change detection of soil states

As I mentioned in a previous post, last September I participated in
the Recent Advances in Quantitative Remote Sensing Symposium. I
presented several posters there. One of them was about the work done
by Benoît Beguet for his master's thesis while he was at CESBIO earlier
this year.

The goal of the work was to assess the potential of high temporal and
spatial resolution multispectral images for the monitoring of soil
states related to agricultural practices.

This is an interesting topic for several reasons, the main ones being:


  1. a bare soil map at any given date is useful for erosion
    forecasting and nitrate pollution estimation

  2. knowing the dates of the different types of agricultural soil
    work can give clues about which crop is going to be grown

The problem was difficult, since we used 8 m resolution images (so no
useful texture signature is present) and we only had 4 spectral bands
(blue, green, red and near infrared). Without short-wave infrared
information, it is very difficult to infer anything about the early
and late vegetation phases.

However, we obtained interesting results for some states and, most of
all, for some transitions – changes – between states.


You can have a look at the poster we presented at RAQRS here.


Sunday, October 31, 2010

Massive Remote Sensing

Some weeks ago I had the chance to attend the 3rd Symposium on
Recent Advances in Quantitative Remote Sensing, RAQRS, in València,
Spain.

It was the first time I attended such a conference. I've been to
nearly all IGARSS since 1998, but I had never been to a conference
where the main topic was the physics of remote sensing of continental
surfaces. All in all, it was a very interesting and inspiring
experience and I learned a lot of things.

Many of the talks and posters dealt with applications relying on
multi-temporal data at metric to decametric resolutions. This is due
to the fact that most of the phenomena of interest (for instance, the
Essential Climate Variables, ECVs for short) are best monitored at
those resolutions.

It was in this context that I heard the expression massive remote
sensing. As I understand it, it refers to the flow of data that will
soon be produced, mainly by ESA's Sentinel missions. Indeed, with
these sensors (and others, such as NASA's LDCM) frequent complete
coverage of the Earth's surface by high resolution sensors will become
available. And, in order for these data to be useful, fast and
efficient automatic processing methods will be needed.


This last sentence may seem like nothing new with respect to what has
been said in recent years about very high spatial resolution sensors,
but I think that now there are several issues which make it really
crucial:


  1. Always on: the Sentinels (at least 1 and 2) will always be
    acquiring data, so the number of images will be huge.


  2. Data really available: I don't know if this has been officially
    confirmed by ESA but, as far as I know, the images will be free of
    charge or available at minimal cost.


  3. Physical reality: the sensors will not just be taking pictures,
    but will provide many spectral bands which cannot easily be
    analyzed visually.

So I think it's time to start taking this challenge seriously and to
address the tough points, such as:


  1. How to produce global land-cover maps without (or with very little)
    ground truth?


  2. How to develop models and methods which can be ported from one
    site to another with minimal tuning?


  3. How to exploit the synergy between image data and ancillary data,
    or between image modalities (Sentinel-1 and Sentinel-2, for
    instance)?