How mass collaboration changes everything
I read this very interesting book and I truly recommend it.
Just one thought: what about remote sensing?
Saturday, September 12, 2009
Wednesday, August 26, 2009
Google LatLong: 3 Days, 3 Googlers, 2 CPUs, 8 Cores: Google goes to Camp Roberts
A very interesting post about how geospatial analysis software can be useful for humanitarian tasks:
Google LatLong: 3 Days, 3 Googlers, 2 CPUs, 8 Cores: Google goes to Camp Roberts
OTB could certainly be useful for this kind of application. It has already been used in real situations, for instance within the International Charter Space and Major Disasters.
Saturday, July 4, 2009
Changing the processing paradigm?
No, I am not going to write about compressive sensing. I am sorry.
The paradigm I want to write about is the one currently used for mapping applications based on remote sensing imagery. This way of turning images into maps can be summarized as ortho-analysis-GIS: you take the image and convert it into a map projection, you analyze it (segmentation, classification, etc.) in order to produce vector layers, and finally you import these vector layers into a GIS, where geospatial analysis (possibly fusing them with existing vector maps) produces the final map.
This works, but it is not the most efficient way of doing things. If you look at the final map, how much information really came from the images? How many pixels were really useful for producing this information? The answer is usually "not much".
Now look at the computation time needed by each processing step in the map production chain. Can you guess which step is the most expensive? With current high resolution images, ortho-rectification is the most time consuming.
One solution could be to ortho-rectify only the area of interest for the application. The drawback of this approach is that you usually need to process the image (detect the pertinent features, changes, etc.) before you know where the interesting information is.
In this case, the solution would be to process the image before ortho-rectification. There is one main problem with this: many modern software tools for segmentation and classification are not very good at processing huge images. And even if they were, you would still be processing the whole image, most of which contains no useful information.
The thing is that the existing maps often tell you where the interesting things are likely to be found; but since these maps are "ortho", you still need to ortho-rectify your image before processing.
Also, the geospatial reasoning step comes at the very end of the processing, inside the GIS tool, which usually knows very little about image processing.
So it seems that the paradigm described above (which could also be called ERDAS-Definiens-ArcGIS, for example), although useful, has real drawbacks in terms of efficiency. And I am not even talking about import/export and format issues.
In order to be really efficient, we would need a tool which allowed us to send the existing shapefile or KML maps up on top of the image in sensor geometry, perform some geospatial reasoning up there, segment and classify only the areas of interest (still in sensor geometry), produce vector data, and finally send only the useful information down to the map projection.
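A minimal sketch of this inverted workflow, where a simple affine transform stands in for a real sensor model (an RPC or physical model) and all function names and coefficients are made up for illustration:

```python
# Toy illustration of processing in sensor geometry: project a vector ROI
# from map coordinates into sensor (image) coordinates, analyze only that
# window, and send the detections back up to the map projection.
# The affine "sensor model" below is a stand-in for a real RPC model.

def map_to_sensor(x, y):
    # Inverse localization: map coordinates -> pixel (col, row)
    return (2.0 * x + 10.0, -2.0 * y + 500.0)

def sensor_to_map(col, row):
    # Direct localization: pixel (col, row) -> map coordinates
    return ((col - 10.0) / 2.0, (500.0 - row) / 2.0)

def roi_in_sensor_geometry(map_polygon):
    # Bounding window, in pixels, of an ROI defined by an existing map layer
    pts = [map_to_sensor(x, y) for (x, y) in map_polygon]
    cols = [p[0] for p in pts]
    rows = [p[1] for p in pts]
    return (min(cols), min(rows), max(cols), max(rows))

# An existing map tells us where to look (e.g. a KML polygon, in map coords)
roi = [(0.0, 0.0), (50.0, 0.0), (50.0, 40.0), (0.0, 40.0)]
window = roi_in_sensor_geometry(roi)

# Only this window would be read and segmented; here we just fake a detection
detected_pixel = (window[0] + 5.0, window[1] + 5.0)

# Finally, only the useful vectors are sent down to the map projection,
# instead of ortho-rectifying the whole image
detection_on_map = sensor_to_map(*detected_pixel)
```

The point of the sketch is the order of operations: vectors go up to sensor geometry, the heavy raster analysis stays there, and only the resulting vectors come back down.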
Hey, I finally nearly wrote about compressive sensing, didn't I?
To finish: don't tell anybody, but it seems there is a free software tool out there which is able to do what I have just described. Well, the PostGIS interface is not ready yet, but it is on its way.
Monday, April 27, 2009
Can't use an atlas for geoinformation
Wikipedia defines an atlas as a collection of maps. Atlases were created to represent geographically distributed information on the Earth's surface: physical geography, political geography, and any socioeconomic information for which the geographic distribution makes sense.
Atlases are also used in anatomy, where they describe how an animal's organs are located within its body. The term atlas here is used by analogy to the geographical ones.
Anatomical atlases are often used in medical image processing, since they make it possible to introduce prior knowledge about the object of study in a convenient way. For instance, spatial ontologies can be built to guide segmentation or classification algorithms, because the relationships between the different objects in the body are known and described in the atlas.
In the same way as anatomy (and, as a consequence, medical image processing) borrowed the term atlas from geographic information, remote sensing scientists have tried to draw inspiration from medical imagery regarding how prior information can be used for remote sensing scene analysis and interpretation.
In this context, atlases or ontologies have been developed to represent how the objects present in a remote sensing image are distributed and what their mutual relationships are.
Not surprisingly, there has been no major breakthrough in remote sensing image analysis using this kind of technique.
In my opinion, this is due to the fact that the object of study exhibits much more variability in remote sensing than in medical imaging. For example, the spatial relationships between the different parts of the brain are much less variable than the relationships between the buildings which make up an airport.
However, nearly any human operator, even one without much training, can recognize an airport in a remote sensing image. Why?
Well, I guess that in anatomy the spatial relationships are some kind of optimal solution to a problem posed in terms of functional relationships and species evolution. This is closely linked to morphogenesis.
The organization of geographic entities linked to human activities is also the result of solving a problem posed in terms of functional relationships: the buildings of an airport must comply with a number of functional needs (access to planes, security regulations, etc.), but they are only a solution, not the optimal one. After all, these are solutions proposed by humans, not by nature after several million years of evolution!
Friday, February 13, 2009
The Wiki Power
Last week I experienced the power of cooperative work on the Internet.
As part of the development of the ORFEO Toolbox (OTB), I was doing some bibliographic research. I wanted to find as many radiometric indices as possible that could be computed from optical multispectral remote sensing data. My work was mainly focused on Pleiades-like data, but I thought it would be interesting to widen the search. After all, these indices are going to be coded in OTB, which has a rather heterogeneous user base.
Unfortunately, I have no easy access to application-oriented remote sensing journals, so I was facing a tedious task.
I started with the best-known indices, such as the NDVI, and wrote short descriptions for them in the OTB Wiki. Since I was on the wiki anyway, I thought I could ask other people to complete the documentation, so I sent an e-mail to several mailing lists whose subscribers I knew were familiar with radiometry issues.
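As an illustration, the NDVI is simply the normalized difference of the near-infrared and red reflectances; a minimal implementation (the band values below are made up for the example):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom

# Dense green vegetation reflects strongly in the NIR and absorbs red light,
# so its NDVI is close to 1; water absorbs NIR and gives negative values.
vegetation = ndvi(nir=0.5, red=0.08)   # high positive value
water = ndvi(nir=0.02, red=0.05)       # negative value
```

Most of the other indices on the list follow the same normalized-difference pattern, only with different band pairs.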
To be honest, at the beginning I thought it would be useless. I had a nice surprise the next morning when I saw that somebody had created a new account on the wiki and was formatting and enriching the list of indices I had started (adding descriptions and bibliographic references). The day after that, two other people joined the effort and added new indices and useful information.
The list is accessible here. At the time of this writing, the list seems to be stable and contains 14 vegetation indices, 8 water indices, 3 soil indices and 2 built-up indices. Most of them were immediately coded in OTB.
This is a real win-win approach: you give us the formulas, we give you the code.
Friday, January 16, 2009
Similarity measures and image registration
Last week (Jan. 8th) there was a very interesting meeting in Paris, organized jointly by GdR ISIS and the CNES Technical Competence Centers, about Vector Fields Estimation and Analysis in Image Processing.
Unfortunately, because of heavy snow in the Toulouse area, I was not able to attend the meeting, for which I had prepared a talk on similarity-measure-based image registration. Florence Tupin, the organizer of the meeting, has kindly put together a web site which contains the program of the meeting together with the slides.
Even though the site is in French, some slides are in English. Mine are here.
My talk has three main parts. The first gives a general presentation of the image series co-registration problem. The second is about similarity measures and the implementation of registration algorithms. The third presents software tools for image registration.
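As an example of the kind of similarity measure discussed in the talk, here is a minimal normalized cross-correlation between two image patches (pure Python, toy data; a real implementation would of course work on image buffers):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches
    (flat lists of pixel values). Returns a value in [-1, 1]; 1 means
    the patches are identical up to an affine radiometric change."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return 0.0 if den == 0 else num / den

patch = [10, 20, 30, 40, 50, 60]
shifted = [2 * v + 5 for v in patch]   # affine radiometric change
score = ncc(patch, shifted)            # close to 1.0 despite the change
```

A registration algorithm slides one patch over the other and keeps the offset which maximizes the measure; for multi-sensor pairs, measures such as mutual information are generally preferred because they tolerate non-affine radiometric changes.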
Among other things, there is a very short presentation of a fine registration application that we ship with OTB. More details about this application are available here.
Wednesday, December 10, 2008
Multi-sensor change detection
Last week, on Dec. 2nd, Tarek Habib defended his PhD dissertation. The research subject was the detection of abrupt changes in multi-sensor remote sensing images. This work is the result of a cooperation between CNES, Thales Alenia Space and GIPSA Lab, and was funded by the first two parties.
The PhD jury was composed of:
- Laure Blanc-Féraud (INRIA-CNRS, president)
- Lorenzo Bruzzone (Univ. Trento, reviewer)
- Cédric Richard (Univ. Troyes, reviewer)
- Marc Spigai (Thales Alenia Space)
- Grégoire Mercier (Télécom Bretagne, supervisor)
- Jocelyn Chanussot (GIPSA Lab, supervisor)
The work presented by Tarek uses supervised classification as a tool for aided image interpretation. The starting point is a framework developed in the ORFEO Toolbox, where the two images on which changes are to be detected are used to compute different features:
- change indicators, such as differences, ratios, correlations, etc.
- mono-date features, such as statistics, textures, etc.
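A minimal sketch of such change indicators on two co-registered images, given here as flat lists of pixel values (the data is made up; real features are computed over local neighborhoods):

```python
import math

def change_indicators(img1, img2):
    """Pixel-wise difference and ratio, plus a global correlation
    coefficient, for two co-registered images of the same size."""
    diff = [a - b for a, b in zip(img1, img2)]
    eps = 1e-12  # avoid division by zero in the ratio
    ratio = [a / (b + eps) for a, b in zip(img1, img2)]
    n = len(img1)
    m1 = sum(img1) / n
    m2 = sum(img2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(img1, img2)) / n
    s1 = math.sqrt(sum((a - m1) ** 2 for a in img1) / n)
    s2 = math.sqrt(sum((b - m2) ** 2 for b in img2) / n)
    corr = cov / (s1 * s2) if s1 > 0 and s2 > 0 else 0.0
    return diff, ratio, corr

before = [100.0, 102.0, 98.0, 101.0]
after = [100.0, 150.0, 97.0, 102.0]  # one pixel changed abruptly
diff, ratio, corr = change_indicators(before, after)
```

The abrupt change stands out in the difference and ratio images, which is exactly what such indicators are for; the classifier then works on this feature stack rather than on the raw pixels.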
These features are then fed to a supervised (SVM) classifier, trained on samples picked by the operator, which separates the pixels into two classes. Since the features computed in the first step can be very rich, these two classes can directly correspond to something like damage and no damage in the case of risk applications. This is so because the operator can select only the changes of interest.
As said above, this is a very simple and pragmatic approach, and it works. It is generic: there is nothing specific about the type of data or the type of event. However, it has one main drawback, which is computation time. Indeed, for the approach to be generic, many features have to be computed. Also, the SVM classification time is proportional to the complexity of the separating surface.
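The last point can be made concrete: an SVM decision function is a sum over all support vectors, so classifying each pixel costs one kernel evaluation per support vector. A minimal RBF-kernel sketch (the support vectors and weights below are made up for illustration):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel between two feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_decision(x, support_vectors, alphas, bias=0.0):
    """SVM decision value: f(x) = sum_i alpha_i * K(x, sv_i) + bias.
    Each call costs len(support_vectors) kernel evaluations, hence
    the interest of simplifying the separating surface."""
    return sum(a * rbf_kernel(x, sv)
               for a, sv in zip(alphas, support_vectors)) + bias

# Made-up model: 3 support vectors in a 2-feature space
svs = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
alphas = [1.0, -1.5, 1.0]  # signed weights (label times Lagrange multiplier)
label = 1 if svm_decision((0.1, 0.0), svs, alphas) >= 0 else -1
```

For a full image, this decision is evaluated on every pixel, so halving the number of support vectors roughly halves the classification time.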
Tarek's work consisted in proposing approaches to speed up the whole system. He proposed and assessed new methods for feature selection, kernel optimization and classification surface simplification, which speed things up at the cost of some classification accuracy.
One interesting thing in this approach is that the user can tune the parameters which have a direct effect on the time vs. accuracy trade-off.