Inversion Theory
Inversion is the mathematical process of inferring the cause from a set of observations. In resistivity work, it is used to reconstruct the formations in the ground from a set of readings taken at the surface, but it is also used in many other applications. For example, MRI medical data is processed through inversion modeling. Because doctors know exactly which area of the body they have scanned, and because most internal organs look much the same from person to person, they know what to look for and can quickly identify any deviation of interest for a diagnosis. Geologists, on the other hand, cannot know in advance what lies beneath the earth's surface, which makes their job a bit harder.
A geophysical electrical resistivity survey is conducted to map the subsurface of the earth, but the data you end up with is not a direct image of the subsurface structure. Instead, the survey provides a set of apparent resistivity values. Each apparent resistivity value can be viewed as a weighted average of the different resistivities sampled by the four electrodes taking the reading. The job of the inversion software is to calculate the "true" resistivity distribution from all these apparent resistivities.
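To make "apparent resistivity" concrete, here is a minimal sketch of how one reading is computed for a Wenner array (four electrodes at equal spacing). The geometric factor K = 2&#960;a for the Wenner array is standard; the specific spacing, voltage, and current values below are invented for illustration.

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array.

    K = 2 * pi * a is the geometric factor for equal electrode
    spacing a; the reading is K times the measured voltage over
    the injected current.
    """
    k = 2.0 * math.pi * spacing_m
    return k * delta_v / current_a

# Over a uniform half-space the reading equals the true resistivity;
# over layered ground it is only a weighted average of the layers
# the current happens to sample.
rho_a = wenner_apparent_resistivity(spacing_m=10.0, delta_v=1.59155, current_a=1.0)
```

For a homogeneous 100 ohm-m half-space these values recover 100 ohm-m; the point of inversion is that over real, layered ground no single reading does.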
In the 1970s, before inversion modeling, geoscientists could only look at this apparent resistivity data and guess at the structure. They could plot the raw data to examine it further, but each measurement has no single, well-defined sampling depth, so they had to reason about how a given structure would affect the readings. The problem is that the deeper a survey goes, the harder the measurements become to interpret. Without modeling software, it is very difficult to evaluate the result of a survey.
But with the help of inversion modeling software, you can easily calculate the best model that fits the raw measurements.
The purpose of the inversion software is to infer the underlying cause from a set of observations. In the case of geophysical electrical resistivity surveys, the observations are the measured apparent resistivities and the cause is the true resistivity distribution in the ground. The goal is to make an image of the ground in terms of resistivity. You can think of this like wearing a pair of glasses that allow you to see only resistivity.
The inversion procedure progresses like this:

1. From the measured raw data set, estimate what the ground might look like. Call this Model 1.

2. Calculate the apparent resistivity data set that would be obtained if a survey were performed on ground that looks exactly like Model 1. Call this Synthetic Data Set 1.

3. Adjust Model 1 to a new earth model, Model 2, by looking at the difference between the raw data set and Synthetic Data Set 1.

4. Calculate the apparent resistivity data set that would be obtained if a survey were performed on ground that looks exactly like Model 2. Call this Synthetic Data Set 2.

5. Repeat steps 3 and 4 until the misfit between the raw data set and the latest synthetic data set is minimal.
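The steps above can be sketched as a small iterative loop. This is a toy example, not a real resistivity code: the forward operator `W` (each reading as a weighted average of three layer resistivities) and the step size are invented for illustration, and the model update is a simple gradient step rather than the regularized least-squares update production inversion software uses. The loop structure, however, mirrors the procedure: guess, forward-model, compare, adjust, repeat.

```python
import numpy as np

# Toy forward operator: each "reading" is a weighted average of the
# layer resistivities. Real forward modeling solves the physics; these
# weights are made up for illustration.
W = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])
true_model = np.array([100.0, 20.0, 300.0])   # ohm-m per layer (unknown in practice)
raw_data = W @ true_model                      # the "measured" apparent resistivities

model = np.full(3, raw_data.mean())            # step 1: starting guess (Model 1)
for _ in range(5000):                          # steps 2-5: iterate
    synthetic = W @ model                      # forward-model the current guess
    residual = raw_data - synthetic            # compare with the raw data
    if np.linalg.norm(residual) < 1e-6:        # stop when the fit is good enough
        break
    model += 0.5 * W.T @ residual              # adjust the model toward a better fit
```

After the loop, `model` reproduces the raw data almost exactly. In this toy case the answer is unique; with real field data many different models can fit the measurements almost equally well, which is why the starting model and regularization matter.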
Inversion synthetically creates data from candidate geologic structures, tests those many thousands of candidates against the raw data, and identifies which one matches the raw data most closely. That is where the term "inversion" comes from: building a structure and then "flipping it around" to test it against the raw data.
Inversion also corrects for the strange things that can happen in raw data. For example, a piece of metal in the survey area causes a very sharp resistivity change that distorts the raw readings of everything around it. In other words, the human eye can only see so much in the raw data points. Inversion takes many very complex data points and combines them, so you end up with a clear picture of the structure in the ground. When you are done, you also get a numerical fit that gives you some confidence in how well your model explains the raw data. Once you have that, the structure can be ground-truthed by drilling or by comparing against other data sets.
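One common way that "numerical fit" is expressed is a percent RMS misfit between the measured and model-predicted apparent resistivities, where a value of a few percent usually indicates a good fit. The helper below and the sample values are illustrative, not output from any particular inversion program.

```python
import math

def rms_misfit_percent(observed, calculated):
    """Percent RMS misfit between measured apparent resistivities
    and those predicted by the current model. Each residual is
    normalized by the observed value, so the result is scale-free."""
    terms = [((obs - calc) / obs) ** 2 for obs, calc in zip(observed, calculated)]
    return 100.0 * math.sqrt(sum(terms) / len(terms))

# Hypothetical readings (ohm-m) versus the model's predictions.
fit = rms_misfit_percent([100.0, 50.0, 200.0], [103.0, 49.0, 196.0])
```

Here the misfit works out to roughly 2.4 percent, the kind of number an interpreter would read as a model that explains the data well.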