Sunday, January 30, 2011
Teenager contribution for remote sensing
Monday, January 24, 2011
Multi-temporal series simulations
As I mentioned in a previous post, last September I participated in the Recent Advances in Quantitative Remote Sensing Symposium, where I presented several posters. One of them was about assessing the classification accuracy of the Venus and Sentinel-2 sensors for land-cover map production.
The idea here is to find a good balance between image synthesis (low accuracy) and physically sound simulation (which needs ancillary data and is computationally complex). The choice made here is to use a real time series of Formosat-2 images (only 4 spectral bands) in order to simulate Venus and Sentinel-2 time series with the same temporal sampling but with more than 10 spectral bands.
The Formosat-2 time series is used in order to:
- Estimate the LAI (leaf area index) for each pixel
- Give a spatial distribution using a land-cover map
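To make the idea concrete, here is a minimal sketch of that kind of processing chain, assuming toy inputs: a hypothetical NDVI-to-LAI relation for the inversion step and a hypothetical lookup table (which in practice would come from a radiative transfer model) giving, for each land-cover class and LAI level, the reflectances of the simulated bands. None of the numerical values below come from the actual study.

```python
import numpy as np

def estimate_lai(red, nir):
    """Invert an empirical NDVI-LAI relation (illustrative coefficients only)."""
    ndvi_soil, ndvi_inf, k = 0.1, 0.97, 0.6
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    ndvi = np.clip(ndvi, ndvi_soil, 0.95)
    # NDVI = ndvi_inf - (ndvi_inf - ndvi_soil) * exp(-k * LAI), solved for LAI
    return -np.log((ndvi_inf - ndvi) / (ndvi_inf - ndvi_soil)) / k

def simulate_bands(lai, land_cover, lut):
    """Look up simulated reflectances per pixel from a (class, LAI bin) table."""
    lai_bin = np.clip((lai / 0.5).astype(int), 0, lut.shape[1] - 1)  # 0.5 LAI steps
    return lut[land_cover, lai_bin]  # (rows, cols, n_simulated_bands)

# Toy data: one Formosat-2 date (red and NIR bands), 3 land-cover classes,
# 16 LAI bins and 10 simulated bands in the lookup table.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.20, (100, 100))
nir = rng.uniform(0.20, 0.60, (100, 100))
land_cover = rng.integers(0, 3, (100, 100))
lut = rng.uniform(0.0, 0.6, (3, 16, 10))  # placeholder for a radiative transfer LUT

lai = estimate_lai(red, nir)
simulated = simulate_bands(lai, land_cover, lut)
print(simulated.shape)  # (100, 100, 10); repeat per date for the full time series
```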
The poster presented at RAQRS 2010 is here.
Monday, January 3, 2011
Reproducible research
Baraldi et al. 2006, "Automatic Spectral Rule-Based Preliminary Mapping of Calibrated Landsat TM and ETM+ Images", IEEE Trans. on Geoscience and Remote Sensing, vol. 44, no. 9.
This paper proposes a set of radiometric combinations, thresholds and logic rules to distinguish more than 40 spectral categories in Landsat images. My implementation is available in the development version of the Orfeo Toolbox and should be included in the next release.
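To give an idea of what such a rule set looks like, here is a small sketch in the same spirit: a few spectral indices computed from calibrated reflectances, then fixed thresholds combined with logic rules to assign coarse spectral categories. The indices are standard, but the thresholds, rules and categories below are invented placeholders; the real system in the paper (and in the OTB implementation) uses many more bands, rules and categories.

```python
import numpy as np

def spectral_rule_map(green, red, nir, swir):
    """Toy spectral rule-based mapping: indices + thresholds + logic rules."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)        # vegetation index
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)  # snow index
    brightness = (green + red + nir + swir) / 4.0

    labels = np.zeros(red.shape, dtype=np.uint8)             # 0 = unclassified
    labels[ndvi > 0.4] = 1                                    # strong vegetation
    labels[(ndvi > 0.1) & (ndvi <= 0.4) & (brightness > 0.15)] = 2  # bare soil / sparse vegetation
    labels[(ndvi <= 0.1) & (ndsi > 0.4)] = 3                  # snow or ice
    labels[(ndvi <= 0.1) & (ndsi <= 0.4) & (brightness < 0.08)] = 4  # water or shadow
    return labels

# Toy usage on random reflectances standing in for calibrated Landsat bands
rng = np.random.default_rng(0)
green, red, nir, swir = rng.uniform(0.02, 0.6, (4, 200, 200))
print(np.bincount(spectral_rule_map(green, red, nir, swir).ravel(), minlength=5))
```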
One interesting aspect of the paper is that all the information needed to implement the method is given: every single threshold, index and rule is written down in the paper. This was really useful for me, since I was able to code the whole system without getting stuck on unclear points or hidden parameters.
This is so rarely found in the image processing literature that I thought it was worth posting about. But this is not all.
Once my implementation was done, I was very happy to get some Landsat classifications, but I was not able to decide whether the results were correct or not. Since the author of the paper seemed to want his system to be used and had given all the details needed for the implementation, I thought I would ask him for help with the validation. So I sent an e-mail to A. Baraldi (whom I had already met before) and asked for some validation data (input and output images generated by his own implementation).
I got something better than just images. He was kind enough to send me the source code of the very same version of the software that was used for the paper (the system continues to be enhanced, and the current version seems to be far better than the published one).
So now I have everything needed for reproducible research:
- A clear description of the procedure, with all the details needed for the implementation.
- Data to run the experiments.
- The source code, so that errors can be found and corrected.
If you want to know more about reproducible research, check this site.
Tuesday, November 23, 2010
Change detection of soil states
As I mentioned in a previous post, last September I participated in the Recent Advances in Quantitative Remote Sensing Symposium, where I presented several posters. One of them was about the work done by Benoît Beguet for his master's thesis while he was at CESBIO earlier this year.
The goal of the work was to assess the potential of high temporal and
spatial resolution multispectral images for the monitoring of soil
states related to agricultural practices.
This is an interesting topic for several reasons, the main ones being:
- a bare soil map at any given date is useful for erosion forecasting and nitrate pollution estimation;
- the dates of the different types of agricultural soil work give clues about the type of crop that is going to be grown.
However, we obtained interesting results for some states and, above all, for some transitions (changes) between states.
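As a rough illustration of what detecting such transitions can look like, here is a minimal sketch assuming a deliberately simple two-state model (bare soil vs. vegetated, decided with an NDVI threshold) applied date by date, with changes flagged between consecutive acquisitions. The states and thresholds used in the actual study are richer than this; the values below are placeholders.

```python
import numpy as np

def soil_state(red, nir, ndvi_threshold=0.25):
    """Return True where the pixel looks like bare soil at one date."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    return ndvi < ndvi_threshold

def transitions(red_series, nir_series):
    """Flag pixels that change state between consecutive dates.
    Input series have shape (n_dates, rows, cols)."""
    states = np.array([soil_state(r, n) for r, n in zip(red_series, nir_series)])
    return states[1:] != states[:-1]   # shape (n_dates - 1, rows, cols)

# Toy usage on random reflectances for 5 dates of a 50x50 image
rng = np.random.default_rng(1)
red_series = rng.uniform(0.05, 0.30, (5, 50, 50))
nir_series = rng.uniform(0.10, 0.60, (5, 50, 50))
changed = transitions(red_series, nir_series)
print(changed.sum(axis=(1, 2)))  # number of changed pixels per pair of dates
```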
You can have a look at the poster we presented at RAQRS here.
Sunday, October 31, 2010
Massive Remote Sensing
Some weeks ago I had the chance to attend the 3rd Symposium on Recent Advances in Quantitative Remote Sensing, RAQRS, in València, Spain.
It was the first time I attended such a conference. I've been to
nearly all IGARSS since 1998, but I had never been to a conference
where the main topic was the physics of remote sensing of continental
surfaces. All in all, it was a very interesting and inspiring
experience and I learned a lot of things.
Many of the talks and posters dealt with applications relying on multi-temporal data at metric to decametric resolutions. This is due to the fact that most of the phenomena of interest (for instance the Essential Climate Variables, ECVs for short) are best monitored at those resolutions.
It was in this context that I heard the expression massive remote sensing. As I understand it, it refers to the flow of data that will soon be produced mainly by ESA's Sentinel missions. Indeed, with these sensors (and others such as NASA's LDCM), frequent and complete coverage of the Earth's surface by high-resolution sensors will be available. And for these data to be useful, fast and efficient automatic processing methods will be needed.
This last sentence may not seem like anything new with respect to what has been said in recent years about very high spatial resolution sensors, but I think there are now several issues which make it really crucial:
- Always on: the Sentinels (at least 1 and 2) will always be acquiring data, so the amount of imagery will be huge.
- Data really available: I don't know if this has been officially confirmed by ESA, but, as far as I know, the images will be free of charge or available at minimal cost.
- Physical reality: the sensors will not just be taking pictures, but will provide many spectral bands which cannot easily be analyzed visually.
So I think it's time to start taking this challenge seriously and addressing the tough points, such as:
- How to produce global land-cover maps without (or with very little) ground truth?
- How to develop models and methods which can be ported from one site to another with minimal tuning?
- How to exploit the synergy between image data and ancillary data, or between image modalities (Sentinel-1 and Sentinel-2, for instance)?