Sunday, October 31, 2010

Massive Remote Sensing

Some weeks ago I had the chance to attend the 3rd Symposium on
Recent Advances in Quantitative Remote Sensing, RAQRS, in València,
Spain.

It was the first time I attended such a conference. I've been to
nearly all the IGARSS conferences since 1998, but I had never been to
a conference where the main topic was the physics of remote sensing
of continental surfaces. All in all, it was a very interesting and
inspiring experience and I learned a lot.

Many of the talks and posters dealt with multi-temporal applications
at metric to decametric resolutions. This is because most of the
phenomena of interest (for instance, the Essential Climate Variables,
or ECVs for short) are best monitored at those resolutions.

It was in this context that I heard the expression massive remote
sensing. As I understand it, it refers to the flow of data that will
soon be available, produced mainly by ESA's Sentinel missions.
Indeed, with these sensors (and others such as NASA's LDCM) frequent
and complete coverage of the Earth's surface by high resolution
sensors will be available. And, for these data to be useful, fast and
efficient automatic processing methods will be needed.


This last sentence may seem like nothing new with respect to what has
been said in recent years about very high spatial resolution sensors,
but I think there are now several issues which make it really
crucial:


  1. Always on: the Sentinels (at least 1 and 2) will always be
    acquiring data, so the volume of images will be huge.


  2. Data really available: I don't know if this has been officially
    confirmed by ESA, but, as far as I know, the images will be free
    of charge or available at minimal cost.


  3. Physical reality: the sensors will not just be taking pictures,
    but will provide many spectral bands which cannot easily be
    analyzed visually (a toy sketch of this kind of automatic
    processing follows the list).
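
To make that last point concrete, here is a minimal sketch of the
kind of non-visual, automatic processing I have in mind: computing a
vegetation index (NDVI) from a red and a near-infrared band. The toy
arrays and the plain NumPy implementation are just my own
illustration, not anything tied to the actual Sentinel-2 products.

    import numpy as np

    def ndvi(red, nir, eps=1e-6):
        """Normalized Difference Vegetation Index from red and NIR bands.

        red, nir: NumPy arrays of the same shape (reflectances in 0..1).
        eps avoids division by zero over no-data or water pixels.
        """
        red = red.astype(np.float64)
        nir = nir.astype(np.float64)
        return (nir - red) / (nir + red + eps)

    if __name__ == "__main__":
        # Toy 3x3 scene: bare soil (low NDVI) next to vegetation (high NDVI).
        red = np.array([[0.20, 0.20, 0.05],
                        [0.20, 0.10, 0.05],
                        [0.05, 0.05, 0.05]])
        nir = np.array([[0.25, 0.25, 0.45],
                        [0.25, 0.35, 0.45],
                        [0.45, 0.45, 0.45]])
        print(ndvi(red, nir))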

So I think it's time to start taking this challenge seriously and
addressing the tough points such as:


  1. How to produce global land-cover maps without (or with very
    little) ground truth? (A toy sketch follows this list.)


  2. How to develop models and methods which can be ported from one
    site to another with minimal tuning?


  3. How to exploit the synergy between image data and ancillary
    data, or between image modalities (Sentinel-1 and Sentinel-2, for
    instance)?
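
As a deliberately naive illustration of the first point, here is a
sketch that clusters the pixels of a multispectral image with k-means
and produces a label map without any ground truth at all. The
synthetic data, the choice of k-means and the number of classes are
my own assumptions; a real global land-cover chain would need
something far more robust, and the clusters would still have to be
matched to a nomenclature afterwards.

    import numpy as np
    from sklearn.cluster import KMeans

    def unsupervised_land_cover(image, n_classes=5, seed=0):
        """Cluster a (rows, cols, bands) image into n_classes spectral classes.

        No ground truth is used: the classes are purely spectral
        clusters and still need to be interpreted afterwards.
        """
        rows, cols, bands = image.shape
        pixels = image.reshape(-1, bands)
        labels = KMeans(n_clusters=n_classes, n_init=10,
                        random_state=seed).fit_predict(pixels)
        return labels.reshape(rows, cols)

    if __name__ == "__main__":
        # Synthetic 100x100 image with 4 spectral bands, just for the demo.
        rng = np.random.default_rng(0)
        image = rng.random((100, 100, 4))
        label_map = unsupervised_land_cover(image, n_classes=5)
        print(label_map.shape, np.unique(label_map))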