Learning methods for calculating deformation maps in Earth sciences from optical images
The methods currently used to compute horizontal deformation maps of the Earth's surface produced during telluric events (earthquakes, volcanic eruptions, landslides), from satellite or aerial optical images, are very effective when the landscape has changed little between successive acquisitions. However, these methods reach their limits in the presence of strong diachronism and/or large changes in viewpoint or illumination conditions, which is often the case when one seeks to combine a post-event acquisition with archive images.
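The classical methods alluded to here typically estimate the horizontal displacement of each ground patch by correlating sub-windows of the two images. The following is a minimal sketch of this idea, assuming a simple normalized cross-correlation over an integer-pixel search window; the function name, window sizes, and synthetic data are illustrative, not part of any actual processing chain.

```python
import numpy as np

def ncc_offset(ref, sec, search=5):
    """Estimate the integer pixel shift between a reference patch `ref`
    and a secondary window `sec` (larger by `search` pixels on each side)
    by maximising normalised cross-correlation."""
    rh, rw = ref.shape
    ref_z = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = sec[search + dy:search + dy + rh,
                      search + dx:search + dx + rw]
            win_z = (win - win.mean()) / (win.std() + 1e-12)
            score = (ref_z * win_z).mean()  # NCC score in [-1, 1]
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best

# Synthetic check: shift an image by a known (dy, dx) and recover it.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
dy_true, dx_true = 2, -3
shifted = np.roll(img, (dy_true, dx_true), axis=(0, 1))
ref = img[20:36, 20:36]        # 16x16 reference patch
sec = shifted[15:41, 15:41]    # same location, padded by search=5
dy, dx, score = ncc_offset(ref, sec, search=5)
```

Repeating this over a grid of patches yields the deformation map; the failure mode discussed above is precisely that the NCC score degrades when the two acquisitions differ strongly in content, viewpoint, or illumination.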
In the fields of photogrammetry and computer vision, learning-based matching methods have outperformed classical methods whenever sufficiently large training datasets are available. The objective of this thesis is therefore to explore whether these learning methods can be adapted to the 2D matching of potentially highly dissimilar images, in order to improve the computation of deformation maps under difficult conditions where the performance of conventional methods is insufficient.