Saturday, March 20, 2010

Geospatial analysis and Object-Based image analysis

I was searching the web for uses of PostGIS databases for object-based image analysis (OBIA) and Google sent me to a page on the OTB Wiki that I wrote 6 months ago (first hit for "postgis object based image analysis").

It seems that this is a subject on which not much work is ongoing. Julien Michel will be presenting some results on how to put together OBIA and geospatial analysis (+ active learning and some other cool things) at a workshop in Paris next May.

Friday, February 19, 2010

Changing places

Today is my last day at my current position. On the 1st of March I will start my new position at CESBIO.

This is not a major change. I keep working for CNES on remote sensing image processing, but I will be more focused on multi-temporal image series (preparing for the use of Venµs and Sentinel-2 data). The application context of all this work will be Land Use and Cover Change.

I am mainly interested in introducing physical prior knowledge in image analysis techniques.

Wednesday, January 20, 2010

Simulation in Remote Sensing

Remote sensing images are expensive to buy. Remote sensing sensors are very, very expensive to design and build. Therefore, it may be interesting to know, before investing any money in images or sensors, what the capabilities of an existing or future sensor are.

In the Spring of 2009, Germain Forestier was a visiting scientist at CNES and we worked on this subject.

We (well, it was actually him who did the work!) implemented a simple simulator which used several spectral databases and a set of sensors' spectral responses, and generated as output the spectra which would have been obtained for each material of the database by each of the sensors.

Then, we applied classification algorithms in order to assess the quality of the classification results for each sensor. This simulator did not integrate atmospheric effects or spatial resolution information, so the conclusions drawn cannot be taken as general truth. However, we could show interesting things, such as, for instance, that the better results obtained by Pleiades HR with respect to Quickbird are due to the different design of the near-infrared band (full disclosure: I work at CNES, where Pleiades was designed).
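To fix ideas, something like the following minimal sketch captures the principle, assuming the spectral library is given as a material → reflectance spectrum mapping and each sensor is described by its per-band relative spectral responses (RSR) sampled on a common wavelength grid; the names and data structures below are illustrative, not those of the actual simulator.

import numpy as np

def simulate_bands(wavelengths, reflectance, rsr_per_band):
    """Simulate the band values a given sensor would produce for one material.

    wavelengths : (N,) wavelengths shared by the spectrum and the RSR curves
    reflectance : (N,) reflectance spectrum of the material
    rsr_per_band: list of (N,) relative spectral response curves, one per band
    """
    bands = []
    for rsr in rsr_per_band:
        # Average of the spectrum weighted by the band's spectral response
        bands.append(np.trapz(reflectance * rsr, wavelengths)
                     / np.trapz(rsr, wavelengths))
    return np.array(bands)

def build_datasets(library, sensors, wavelengths):
    """Turn one spectral library into one classification dataset per sensor.

    library : dict {material name: (N,) reflectance spectrum}
    sensors : dict {sensor name: list of per-band RSR curves}
    """
    datasets = {}
    for sensor_name, rsr_per_band in sensors.items():
        X = np.vstack([simulate_bands(wavelengths, spectrum, rsr_per_band)
                       for spectrum in library.values()])
        y = list(library.keys())
        datasets[sensor_name] = (X, y)
    return datasets

A classifier trained and evaluated on each per-sensor dataset then gives a discrimination ability that can be compared across sensors.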

The detailed results of this work were published last Summer: G. Forestier, J. Inglada, C. Wemmert, P. Gancarski, "Mining spectral libraries to study sensors' discrimination ability", SPIE Europe Remote Sensing, Vol. 7478, 9 pages, Berlin, Germany, September 2009.

After Germain's stay at CNES, we have continued to work a little bit on the simulator in order to make it more realistic. We have already included atmospheric effects and plan to go further by introducing spatial resolution simulation.

I am convinced that this is the way to go in order to cheaply assess the characteristics of a given sensor in terms of end-user needs, and not only in terms of system quality issues such as, for instance, SNR.

Monday, December 21, 2009

Multi-temporal analysis at IGARSS 2010

Professor Lorenzo Bruzzone has been extremely kind to me by proposing that I co-chair an invited session at IGARSS 2010. The session is entitled "Change Detection and Multitemporal Image Analysis".

Land use and cover change is a major topic in remote sensing image processing and, as you may be aware, there are many upcoming space-borne systems which will be dedicated to this kind of application: Venµs, the Sentinel Programme, etc.

In this context, there is an increasing need for processing techniques which allow us to exploit the richness of this kind of data. The classical approach for the use of multi-temporal remote sensing image data has been the use of data assimilation frameworks: that is, models of evolution in which the data is used mainly as a means of checking the soundness of the models' predictions. This approach gives more weight to the model, since the data is scarce.
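As an illustration of what "checking the model with the data" means, here is a minimal scalar assimilation (Kalman-type analysis) step; it is only a sketch of the general idea, not a reference to any particular model or mission processing chain.

def assimilate(x_forecast, p_forecast, y_obs, r_obs):
    """One scalar analysis step blending a model forecast with an observation.

    x_forecast, p_forecast : model prediction of a state variable and its variance
    y_obs, r_obs           : observation (e.g. derived from an image) and its variance
    """
    k = p_forecast / (p_forecast + r_obs)        # gain: weight given to the data
    x_analysis = x_forecast + k * (y_obs - x_forecast)
    p_analysis = (1.0 - k) * p_forecast
    return x_analysis, p_analysis

# With scarce or noisy observations (large r_obs) the gain is small and the model
# dominates; with many reliable observations the data takes over.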

As pointed out above, in the coming years the amount of available data will be large, and one might want to try other approaches where the data takes precedence over the models. Of course, it would be wrong to blindly apply data analysis tools without using existing models, but the assimilation techniques may benefit from the changes induced by the availability of more data.

Saturday, September 12, 2009

Wikinomics

How mass collaboration changes everything

I read this very interesting book and I truly recommend it.

Just one thought: what about remote sensing?

Wednesday, August 26, 2009

Google LatLong: 3 Days, 3 Googlers, 2 CPUs, 8 Cores: Google goes to Camp Roberts

A very interesting post about how geospatial analysis software can be useful for humanitarian tasks:

Google LatLong: 3 Days, 3 Googlers, 2 CPUs, 8 Cores: Google goes to Camp Roberts

Surely, OTB could be useful for this kind of application. It has already been used in real cases, such as, for instance, the International Charter Space and Major Disasters.

Saturday, July 4, 2009

Changing the processing paradigm?

No, I am not going to write about compressive sensing. I am sorry.

The paradigm I want to write about is the one currently used for mapping applications from remote sensing imagery. This way of turning images into maps can be summarized as ortho-analysis-GIS: you get the image and convert it into a map projection, then you analyze it (segmentation, classification, etc.) in order to produce vector layers, and finally you import these vector layers into a GIS in order to perform geospatial analysis (possibly fusing them with existing vector maps) and produce the final map.
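Schematically, and with the actual tools replaced by placeholder callables (none of these names refer to a real API), the classical chain looks like this:

def classical_chain(raw_image, existing_layers,
                    orthorectify, segment, classify, overlay_in_gis, make_map):
    """Ortho-analysis-GIS: each step is a callable standing in for a real tool."""
    ortho = orthorectify(raw_image)            # whole image resampled to map projection
    segments = segment(ortho)                  # image analysis, in map geometry
    detections = classify(segments)            # vector layers with thematic labels
    fused = overlay_in_gis(detections, existing_layers)  # geospatial analysis in the GIS
    return make_map(fused)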

This works OK, but it is not the most efficient way of working. If you look at the final map, how much information really came from the images? How many pixels were really useful for producing this information? The answer is usually "not much".

Now look at the computation time needed by all the processing steps in the map production chain. Can you guess which is the most expensive step? With current high-resolution images, the ortho-rectification step is the most time-consuming.

One solution to this could be to ortho-rectify only the area of interest for the application. The drawback of this approach is that usually you need to process the image (detect the pertinent features, changes, etc.) before you know where the interesting information is.

In this case, the solution would be to process the image before ortho-rectification. There is one main problem with this: many modern software tools for segmentation and classification are not very good at processing huge images. And even if they were good at it, you would still need to process the whole image before ortho-rectification.

The thing is that often, the existing maps tell you where the interesting things are likely to be found, but since your maps are "ortho", you still need to ortho-rectify your image before processing.

Also, the geospatial reasoning step is done at the end of the processing, inside the GIS tool, which usually knows very little about image processing and so on.

So it seems that the paradigm cited above (which could also be named ERDAS-Definiens-ArcGIS, for example), although useful, has real drawbacks in terms of efficiency. And I am not even talking about import/export and format issues.

In order to be really efficient, we would need a tool which would allow us to send the existing shapefile or KML maps on top of the image in sensor geometry, perform some geospatial reasoning up there, segment and classify only the areas of interest (still in sensor geometry), produce vector data and finally send only the useful information down to the map projection.
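Using the same kind of placeholder callables as in the sketch above, that chain would look roughly like this (sensor_model stands for whatever geometric model, physical or RPC, allows going back and forth between image and map coordinates):

def sensor_geometry_chain(raw_image, existing_layers, sensor_model,
                          project_to_sensor, reason_about,
                          segment, classify, project_to_map):
    """Process in sensor geometry first; ortho-rectify only the useful vectors."""
    # Bring the existing shapefile/KML layers "up" into image (sensor) geometry
    layers_in_sensor_geometry = project_to_sensor(existing_layers, sensor_model)
    # Geospatial reasoning in sensor geometry to pick the areas of interest
    regions_of_interest = reason_about(layers_in_sensor_geometry)
    detections = []
    for roi in regions_of_interest:            # segment/classify only these areas
        detections.extend(classify(segment(raw_image, roi)))
    # Only the useful vector information goes "down" to the map projection
    return project_to_map(detections, sensor_model)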

Hey, I finally nearly wrote about compressive sensing, didn't I?

To finish: don't tell anybody, but it seems there is a piece of free software out there which is able to do what I have just written about. Well, the PostGIS interface is not yet ready, but it is on its way.