Université de Paris VI & Institut de Physique du Globe de Paris

Course: Inverse Problems

Albert Tarantola
Institut de Physique du Globe de Paris

Photographs of the students (all years)

Program: Basic notions of set theory. Basic notions of probability theory. Models, Observations, the Forward Simulation Problem. The Inverse Simulation Problem (the Popper-Bayes approach). Explicit use of probabilities. Monte Carlo methods (plain rejection, Metropolis). Optimization methods (least-squares, least-absolute values, ...). Functional least-squares.

Exercises: First exercise.


Monday, September 29, 10H45
Lesson I: Introduction. The Popper-Bayes approach. Jeffreys quantities. Viewgraphs. A small numerical example about Benford's law (small explanatory text, mathematica notebook, pdf of mathematica notebook).
Lesson II: Volumetric probabilities. Probability densities. Changing variables. Viewgraphs. Numerical example: changing from { Compressibility , Bulk modulus } to { Young modulus , Poisson's ratio }. Notebook.
Monday, October 6, 10H45 Lesson III: Conditional and marginal probabilities. Independent uncertainties. Uncorrelated uncertainties. Covariance. Viewgraphs. Numerical example: representing a 2D Gaussian. Notebook.
Lesson IV: Sampling a probability distribution. Rejection algorithm. Metropolis algorithm. Why the Metropolis algorithm is not a panacea. Viewgraphs. Numerical example: Sampling 1D and 2D volumetric probabilities. Notebook 1, PDF explanation. Notebook 2, PDF explanation. Numerical example: Sampling the Fisher distribution. Notebook. PDF explanation.
Numerical example: Sampling a Gaussian random function. Notebook (please help me in correcting the plot commands). PDF explanation.
Numerical example: Sampling using the Metropolis algorithm. Notebook. Note: I should check that, everywhere, when the test point is rejected, the old point is taken again, as in this simple example. The ultimate example would be a random walk that uniformly samples the sphere, then samples the Fisher distribution.
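The Metropolis rule of Lesson IV, including the detail stressed above (when the test point is rejected, the old point is counted again), can be sketched in a few lines. This is a minimal Python illustration, not one of the course's Mathematica notebooks; the bimodal target density is invented for the example:

```python
import math
import random

def target(x):
    """Unnormalized bimodal density: two Gaussian bumps at -2 and +2."""
    return math.exp(-0.5 * (x - 2.0) ** 2) + math.exp(-0.5 * (x + 2.0) ** 2)

def metropolis(f, x0, step, n, seed=0):
    """Random-walk Metropolis sampler for an unnormalized density f.
    When the test point is rejected, the old point is counted again."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    samples = []
    for _ in range(n):
        y = x + rng.uniform(-step, step)   # symmetric proposal
        fy = f(y)
        if fy >= fx or rng.random() < fy / fx:
            x, fx = y, fy                  # accept the test point
        samples.append(x)                  # on rejection, x is still the old point
    return samples

samples = metropolis(target, 0.0, 1.0, 50000)
```

Note that the acceptance test only uses the ratio f(y)/f(x), so the target density never needs to be normalized; this is what makes the algorithm usable when the normalization constant is unknown.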
Monday, October 13, 10H45 Lesson V: Sets and mappings. Viewgraphs. Numerical example: measuring the aspect ratio of a screen. Notebook, PDF explanation. Intersection of probabilities. Image of a probability. Reciprocal image of a probability. Fundamental theorem. Viewgraphs. Numerical example: transport of probabilities. Notebook, PDF explanation.
Lesson VI: General formulation of an inverse problem. Method I: Explicit plotting of the posterior volumetric probability. Viewgraphs. Numerical example: Estimation of a seismic epicenter. Notebooks: complete pdf document, EpicenterMathematica.nb, EpicenterVelocityMathematica.nb, EpicenterMathematicaDiffractors.nb.
Monday, October 20, 10H45 Lesson VII: More numerical examples (Viewgraphs). A variant of the epicenter problem above: the observations are described by a bimodal probability density (complete pdf document, mathematica code). A second geological example concerns the estimation of the center of rotation, and of the rotation velocity, using as data the linear velocities of some points (pdf document, executable notebook). Fitting a curve to mass disintegration data (mathematica notebook). Robust fitting of data: fitting a logistic curve to the variation of the human population (complete pdf document, mathematica code).
Lesson VIII: Method II: Sampling of the posterior volumetric probability (Monte Carlo methods). Rejection algorithm. Metropolis algorithm. Viewgraphs. Numerical example: the epicenter (rejection). Numerical example: the epicenter (Metropolis). Numerical example: the epicenter (again) but using a bimodal distribution for the arrival times (mathematica notebook). A pdf document on the epicenter exercise. Why we don't need an explicit expression for the prior probability. Why we don't need the simulated annealing algorithm. Why I don't like genetic algorithms. Different kinds of data. Geostatistics and inverse problems. Can we solve complex problems? (No.)
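The rejection approach to the epicenter problem can be sketched as follows. This is a Python toy, not the course notebook; the station layout, wave speed, and uncertainty are invented, and the likelihood is written so that its maximum is at most one, which lets it serve directly as an acceptance probability:

```python
import math
import random

rng = random.Random(1)
v = 5.0                                     # assumed wave speed, km/s
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_epicenter = (4.0, 6.0)
sigma = 0.5                                 # arrival-time uncertainty, s
d_obs = [math.dist(true_epicenter, s) / v for s in stations]  # synthetic data

def likelihood(x, y):
    """Gaussian likelihood of the travel-time data; its maximum is 1."""
    misfit = sum((math.dist((x, y), s) / v - d) ** 2
                 for s, d in zip(stations, d_obs))
    return math.exp(-0.5 * misfit / sigma ** 2)

accepted = []
while len(accepted) < 500:
    x, y = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)  # uniform prior
    if rng.random() < likelihood(x, y):     # accept with probability L / L_max
        accepted.append((x, y))

x_mean = sum(p[0] for p in accepted) / len(accepted)
y_mean = sum(p[1] for p in accepted) / len(accepted)
```

The accepted points are samples of the posterior: a cloud of epicenters consistent with both the prior region and the data, not a single "best" epicenter.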
Monday, October 27, 10H45 Lesson IX: More numerical examples (viewgraphs). Numerical example: determining the parameters of a fissure from geodetic data (pdf document, mathematica notebook). Numerical example: using the Metropolis algorithm to solve a waveform fitting problem (mathematica notebook [pdf document not yet available]).
Lesson X: Method III: Optimization (and approximate sampling of the posterior probability). Least-squares. Least-absolute values. Viewgraphs.
Monday, November 3, 10H45 Lesson XI: Optimization and nonlinear problems (viewgraphs). Numerical example: Epicenter, gradient method (notebook, pdf document). Numerical example: measuring the chlorophyll concentration of plant leaves (complete pdf document, notebook 1, notebook 2, notebook 3).
Lesson XII: Optimization and linear problems (viewgraphs). Numerical example: X-ray tomography (notebook, pdf document). Note: I am in the process of changing this example, which now contains a random Gaussian field with exponential covariance, and sampling of the prior and the posterior distributions (pictures, notebook 1, pdf version of notebook 1, notebook 2, pdf version of notebook 2).
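For the linear Gaussian case of Lesson XII, the posterior is again Gaussian, with mean m_post = m_prior + (G' Cd^-1 G + Cm^-1)^-1 G' Cd^-1 (d_obs - G m_prior). Here is a minimal numpy sketch on an invented three-cell tomography toy (not the course's X-ray example):

```python
import numpy as np

# Toy linear tomography: three rays, each crossing two of three cells.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
m_true = np.array([0.20, 0.25, 0.22])       # "true" cell slownesses (toy units)
d_obs = G @ m_true                          # noise-free synthetic data

C_D = 0.01 ** 2 * np.eye(3)                 # data covariance
C_M = 0.10 ** 2 * np.eye(3)                 # prior model covariance
m_prior = np.array([0.20, 0.20, 0.20])

# Posterior covariance and mean of the Gaussian least-squares solution
C_post = np.linalg.inv(G.T @ np.linalg.inv(C_D) @ G + np.linalg.inv(C_M))
m_post = m_prior + C_post @ G.T @ np.linalg.inv(C_D) @ (d_obs - G @ m_prior)
```

Because the data here are accurate (small C_D) and noise-free, the posterior mean nearly recovers the true model; C_post describes the remaining uncertainty, which is what sampling of the posterior distribution visualizes.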
Monday, November 10, 10H45 Lesson XIII: Viewgraphs. Numerical example: using Envisat satellite data (notebook, pdf document).
Lesson XIV: Viewgraphs. Numerical example: Regression lines when there are uncertainties in both axes (error crosses instead of error bars) (notebook, pdf document). Numerical example: Nonlinear least-absolute values.
Monday, November 17, 10H45 Lesson XV: The square-root variable metric algorithm (efficient sampling of the posterior probability). Viewgraphs. A little bit of theory (pdf document). Numerical example: 1D Gaussian random function with some (linear) constraints (mathematica notebook). Numerical example: the example for gradient-based method of optimization based on the epicenter problem already contained an implementation of the (nonlinear) square-root variable metric method (mathematica notebook again). Small text about the square root of the exponential covariance and two (1,2) notebooks.
Monday, November 24, 10H45 Lesson XVI: Least-squares involving functions (notion of random function, the mathematics of least-squares). Viewgraphs.
Lesson XVII: Least-squares involving functions (notions of functional analysis, the formulation of the inverse problem). Viewgraphs.
Monday, December 1, 10H45 Lesson XVIII: Viewgraphs. Numerical example: Ray-tomography without blocks (mathematica notebook, pdf document, notebook 2). Numerical example: Building a smooth function given some of its points (mathematica notebook, pdf document).
Monday, December 8, 10H45 Lesson XIX: Fitting seismic waveforms to retrieve the source and the medium properties (theory). Viewgraphs. A mathematica code for the simulation of 1D acoustic wave propagation (small theory [pdf, viewgraphs], mathematica code).
Lesson XX: Numerical example: Fitting seismic waveforms to retrieve the source and the medium properties (viewgraphs, mathematica notebook, pdf document).

Solved exercises (with Mathematica codes):

I have decided to collect the exercises that I have been solving in class during the last years, and to try putting them together in the form of a book. It is not yet clear that I will succeed, and your help may be crucial. This tentative text and the exercises are available from this link. I do not suggest that you download the entire text (it is too preliminary): at the end of each lesson, I will indicate which section you should download and read to prepare for the next lesson. Although I know Mathematica reasonably well, I am not very familiar with Matlab and Scilab, so I may also need your help in translating some of the codes.


Physical theories allow us to make predictions: given a complete description of a physical system, we can predict the outcome of some measurements. This problem of predicting the result of measurements is called the modelization problem, the simulation problem, or the forward problem. The inverse problem consists of using the actual result of some measurements to infer the values of the parameters that characterize the system.

While the forward problem has (in deterministic physics) a unique solution, the inverse problem does not. As an example, consider measurements of the gravity field around a planet: given the distribution of mass inside the planet, we can uniquely predict the values of the gravity field around the planet (forward problem), but there are different distributions of mass that give exactly the same gravity field in the space outside the planet. Therefore, the inverse problem —of inferring the mass distribution from observations of the gravity field— has multiple solutions (in fact, an infinite number).
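The gravity example can be checked numerically: by Newton's shell theorem, all the mass concentrated at the center and the same mass spread over any concentric shell produce exactly the same field outside. The following Python check (mass and radii are arbitrary illustrative values) integrates the attraction of a thin shell directly:

```python
import math

G = 6.674e-11   # gravitational constant, SI units

def g_point(M, r):
    """Field of the whole mass M concentrated at the center."""
    return G * M / r ** 2

def g_shell(M, a, r, n=20000):
    """Field at distance r > a from a thin uniform shell of radius a,
    summing ring contributions over the polar angle (midpoint rule)."""
    sigma = M / (4 * math.pi * a ** 2)      # surface mass density
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        dA = 2 * math.pi * a ** 2 * math.sin(theta) * dtheta
        s2 = r ** 2 + a ** 2 - 2 * a * r * math.cos(theta)
        total += G * sigma * dA * (r - a * math.cos(theta)) / s2 ** 1.5
    return total

M, r = 5.97e24, 7.0e6   # arbitrary illustrative values (SI)
# Two very different mass distributions, one and the same external field:
g0 = g_point(M, r)
g1 = g_shell(M, 3.0e6, r)
g2 = g_shell(M, 5.0e6, r)
```

All three values agree, so no measurement outside the planet can distinguish these mass distributions: the inverse problem is non-unique.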

Because of this, in the inverse problem, one needs to make explicit any available a priori information on the model parameters. One also needs to be careful in the representation of the data uncertainties.

The most general (and simple) theory is obtained when using a probabilistic point of view, where the a priori information on the model parameters is represented by a probability distribution over the "model space." The theory developed here explains how this a priori probability distribution is transformed into the a posteriori probability distribution, by incorporating a physical theory (relating the model parameters to some observable parameters) and the actual result of the observations (with their uncertainties).
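In the simplest one-parameter case this transformation is just a pointwise product, posterior proportional to prior times likelihood. A minimal Python sketch on a grid (the Gaussian prior and likelihood are invented for illustration):

```python
import math

grid = [i * 0.01 for i in range(-500, 501)]   # grid over the model parameter

def gauss(m, mu, s):
    return math.exp(-0.5 * ((m - mu) / s) ** 2)

prior      = [gauss(m, 0.0, 1.0) for m in grid]   # a priori information
likelihood = [gauss(m, 1.0, 0.5) for m in grid]   # physical theory + data
posterior  = [p * l for p, l in zip(prior, likelihood)]

Z = sum(posterior) * 0.01                     # normalization constant
posterior = [p / Z for p in posterior]

m_map = grid[posterior.index(max(posterior))]  # maximum a posteriori point
```

For two Gaussians the product is again Gaussian, with mean (mu0/s0^2 + mu1/s1^2)/(1/s0^2 + 1/s1^2) = 0.8 here, which the grid recovers; for non-Gaussian states of information the same pointwise product still applies, but the result must be explored numerically.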

To develop the theory, we shall need to examine the different types of parameters that appear in physics and to be able to understand what a total absence of a priori information on a given parameter may mean.

Although the notion of the inverse problem could be based on conditional probabilities and Bayes's theorem, I choose to introduce a more general notion, that of the "combination of states of information," that is, in principle, free from the special difficulties appearing in the use of conditional probability densities (like the well-known Borel paradox).

The general theory has a simple (probabilistic) formulation and applies to any kind of inverse problem, including linear as well as strongly nonlinear problems. Except for very simple examples, the probabilistic formulation of the inverse problem requires a resolution in terms of "samples" of the a posteriori probability distribution in the model space. This, in particular, means that the solution of an inverse problem is not a model but a collection of models (that are consistent with both the data and the a priori information). This is why Monte Carlo (i.e., random) techniques are examined in this course. With the increasing availability of computer power, Monte Carlo techniques are being increasingly used.

Some special problems, where nonlinearities are weak, can be solved using special, very efficient techniques that do not differ essentially from those used, for instance, by Laplace in 1799, who introduced the "least-absolute-values" and the "minimax" criteria for obtaining the best solution, or by Legendre in 1805 and Gauss in 1809, who introduced the "least-squares" criterion.
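The difference between these criteria is easiest to see in the simplest possible problem, estimating a single constant from repeated measurements: the least-squares estimate is the sample mean, while the least-absolute-values estimate is the median, which is robust against outliers. A minimal Python illustration (the data values are invented):

```python
import statistics

# One unknown constant measured six times; the last value is a gross outlier.
data = [9.9, 10.1, 10.0, 9.8, 10.2, 25.0]

m_l2 = statistics.mean(data)     # least-squares estimate = sample mean
m_l1 = statistics.median(data)   # least-absolute-values estimate = median

print(m_l2)   # 12.5  (dragged away by the outlier)
print(m_l1)   # 10.05 (robust)
```

This is why least-absolute-values criteria are preferred for "robust" fitting, as in the logistic-curve example of Lesson VII.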

The first part of the course deals exclusively with discrete inverse problems with a finite number of parameters. Some real problems are naturally discrete, while others contain functions of a continuous variable and can be discretized if the functions under consideration are smooth enough compared to the sampling length, or if the functions can conveniently be described by their development on a truncated basis. The advantage of a discretized point of view for problems involving functions is that the mathematics is easier. The disadvantage is that some simplifications arising in a general approach can be hidden when using a discrete formulation. (Discretizing the forward problem and setting a discrete inverse problem is not always equivalent to setting a general inverse problem and discretizing for the practical computations.)

The second part of the course deals with general inverse problems, which may contain such functions as data or unknowns. As this general approach contains the discrete case as a particular case, the separation into two parts serves only a didactic purpose.

Although this course contains a lot of mathematics, it does not require any special mathematical background, as all the notions are explicitly introduced. The basic objective is to explain how a method of acquisition of information can be applied to the actual world, and many of the arguments are heuristic.

The course is based on the book Inverse Problem Theory and Methods for Model Parameter Estimation (Tarantola, 2005), whose electronic form is freely available for download (see the link above).

Link to my other teachings at the Institut de Physique du Globe - Link to my web page - Link to IPG

Send e-mail: albert.tarantola@ipgp.jussieu.fr