Sunday, October 31, 2010

Massive Remote Sensing

Some weeks ago I had the chance to attend the 3rd Symposium on
Recent Advances in Quantitative Remote Sensing, RAQRS, in València,
Spain.

It was the first time I attended such a conference. I've been to
nearly every IGARSS since 1998, but I had never been to a conference
where the main topic was the physics of remote sensing of continental
surfaces. All in all, it was a very interesting and inspiring
experience and I learned a lot.

Many of the talks and posters dealt with applications relying on
multi-temporal data at metric to decametric spatial resolutions. This
is because most of the phenomena of interest (for instance, the
Essential Climate Variables, ECVs for short) are best monitored at
those resolutions.

It was in this context that I heard the expression massive remote
sensing. As I understand it, it refers to the flow of data that will
soon be produced, mainly by ESA's Sentinel missions. Indeed, with
these sensors (and others such as NASA's LDCM), frequent and complete
coverage of the Earth's surface by high resolution sensors will be
available. And, in order for these data to be useful, fast and
efficient automatic processing methods will be needed.
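To make that last point concrete, here is a minimal sketch of
block-wise (streaming) processing written in plain Python/NumPy. It is
only an illustration on a toy in-memory array: the band indices, the
block size and the NDVI computation are assumptions of mine, and a
real pipeline would read tiles from the actual product format instead.

import numpy as np

def ndvi(red, nir, eps=1e-6):
    # Normalized Difference Vegetation Index, computed per pixel.
    return (nir - red) / (nir + red + eps)

def process_in_blocks(image, block_rows=512):
    # Stream over an image of shape (bands, rows, cols) one strip of
    # rows at a time, so the full scene never has to fit in memory.
    bands, rows, cols = image.shape
    out = np.empty((rows, cols), dtype=np.float32)
    for r0 in range(0, rows, block_rows):
        r1 = min(r0 + block_rows, rows)
        block = image[:, r0:r1, :].astype(np.float32)
        # Assumption: band 2 is red and band 3 is near infrared.
        out[r0:r1, :] = ndvi(block[2], block[3])
    return out

if __name__ == "__main__":
    # Toy 4-band scene standing in for a high resolution acquisition.
    scene = np.random.rand(4, 2048, 2048).astype(np.float32)
    vegetation_index = process_in_blocks(scene)
    print(vegetation_index.shape, vegetation_index.dtype)

The point is not the index itself but the access pattern: any
per-pixel or per-block operation organized this way can keep up with a
continuous stream of acquisitions.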


This last sentence may seem like nothing new with respect to what has
been said in recent years about very high spatial resolution sensors,
but I think that now there are several issues which make it really
crucial:


  1. Always on: the Sentinels (at least 1 and 2) will always be
    acquiring data, so the volume of imagery will be huge.


  2. Data really available: I don't know if this has been officially
    confirmed by ESA but, as far as I know, the images will be free of
    charge or available at a minimal cost.


  3. Physical reality: the sensors will not just be taking pictures,
    but will provide many spectral bands which cannot be easily
    analyzed visually.

So I think it's time to start taking this challenge seriously and
addressing the tough points such as:


  1. How to produce global land-cover maps without (or with very little)
    ground truth? (A toy unsupervised sketch is given after this list.)


  2. How to develop models and methods which can be ported from one
    site to another with minimal tuning?


  3. How to exploit the synergy between image data and ancillary data, or
    between image modalities (Sentinel-1 and Sentinel-2, for instance)?

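As a very rough illustration of points 1 and 3, the sketch below
clusters per-pixel feature vectors obtained by stacking optical bands
with radar backscatter, producing a label map without any ground
truth. Everything here is an assumption of mine: toy random arrays
stand in for Sentinel-2 and Sentinel-1 products, the number of classes
is arbitrary, and scikit-learn's k-means is just one possible
clustering choice; the resulting classes have no semantics until they
are interpreted afterwards.

import numpy as np
from sklearn.cluster import KMeans

def stack_modalities(optical, radar):
    # Build one feature vector per pixel from two co-registered images
    # of shapes (n_optical_bands, rows, cols) and (n_sar_bands, rows, cols).
    features = np.concatenate([optical, radar], axis=0)
    bands, rows, cols = features.shape
    return features.reshape(bands, rows * cols).T, (rows, cols)

def unsupervised_map(optical, radar, n_classes=8):
    # Cluster the pixels and reshape the labels back into an image.
    X, (rows, cols) = stack_modalities(optical, radar)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=0).fit_predict(X)
    return labels.reshape(rows, cols)

if __name__ == "__main__":
    optical = np.random.rand(4, 256, 256)  # toy stand-in for Sentinel-2 bands
    radar = np.random.rand(2, 256, 256)    # toy stand-in for Sentinel-1 backscatter
    land_cover = unsupervised_map(optical, radar)
    print(np.unique(land_cover))

Of course this says nothing about model portability (point 2): the
interesting question is precisely how far such simple recipes can be
pushed before site-specific tuning becomes unavoidable.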

4 comments:

Emmanuel Christophe said...

Do you think crowd-sourcing could be a solution for problem 1?

Tisham Dhar said...

There are several smartphone apps which make crowdsourcing easier, typically as photographic ground truth. But more quantitative measurements of the ECVs may be difficult. It will also be difficult to crowdsource for the ocean and for sparsely populated areas. Maybe future phones can have more sensors built in, e.g. temperature and air pressure; with MEMS they are nearly as simple as accelerometers.

Amit Kulkarni said...

For the speed part of processing, you would need streamable pgraster databases. They do this for real-time stock quotes, running SQL against live data.

I talked with a person, and they don't really store too much data at their site. Much of the data is not stored at all, just what you are interested in, because the quantity is simply too large. There is a crying need for better image compression, but the data is so huge that it is better not to archive it. A better automatic way is to flat out reject images that are cloudy, have too much shadow, or fail some such criteria, and just keep the better quality pictures.

I personally think that as spatial resolution trends towards < 1 cm, lossy compression should be used, that is, a one-time downsampling.