Had a great day last week at the Gulf Coast SPE 2017 Technology Forum. Together with Reza Ghasemi of SRT, we presented a paper entitled “Probabilistic Uncertainty Quantification Using Advanced Proxy Methods and GPU-Based Reservoir Simulation”.
At the end, I made the bold and maybe foolish claim that EssRisk is the “first valid robust probabilistic uncertainty quantification approach”.
I was always advised never to say ‘never’ or ‘always’ or ‘all’ or ‘none’ or ‘first’ or ‘last’ in any written or spoken statement, because it only takes a single contrary case to demolish your argument and credibility. Many times in a cross-country race I thought I was last, only to find somebody else five minutes slower than me.
However, I stand by my statement. Why?
First, what do we mean by a valid uncertainty quantification approach? The definition I find useful is:
An encapsulation of the team’s beliefs about the models, the parameters and their ranges, the quality of the measurement data, and the quality of the simulation model, within a probabilistic/Bayesian framework that can generate accurate and validated probabilistic cumulative distribution curves (S curves) for quantities of interest at times of interest, which can then be represented by a suitable set of simulation runs.
In other words, you don’t just want to know the shape of the S curve and the values of P10, P50 and P90; you want a full ensemble of simulation runs corresponding to the S curve.
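To make that concrete, here is a minimal sketch (Python/NumPy, not EssRisk code) of reading an empirical S curve and the P10/P50/P90 summaries off an ensemble of simulation results; the function names and the synthetic lognormal ensemble are purely illustrative.

```python
# Minimal sketch: empirical S curve and percentile summaries from an ensemble.
# Names and the synthetic ensemble below are illustrative only.
import numpy as np

def s_curve(qoi_values):
    """Empirical cumulative distribution ('S curve') of a quantity of interest."""
    x = np.sort(np.asarray(qoi_values, dtype=float))
    cum_prob = np.arange(1, x.size + 1) / x.size   # cumulative probability
    return x, cum_prob

def p10_p50_p90(qoi_values):
    """P10/P50/P90 taken here as the plain 10th/50th/90th percentiles of the
    CDF (exceedance-probability conventions vary between companies)."""
    return np.quantile(np.asarray(qoi_values, dtype=float), [0.10, 0.50, 0.90])

# Stand-in for 250 runs' worth of a quantity of interest at a forecast date
ensemble = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.4, size=250)
x, cum_prob = s_curve(ensemble)
print(p10_p50_p90(ensemble))
```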
Not difficult, you think. Well, it is actually incredibly difficult: EssRisk has required 15 years of research and experience, and it sits on top of much cutting-edge work by leading academic institutions.
What is the justification for claiming EssRisk is the first? It is quite simple. EssRisk uses a proxy model and implements the latest Hamiltonian Markov Chain Monte Carlo methods. These methods have been validated against difficult high-dimensional problems (64+ dimensions), and they reproduce the expected analytical answers. So we know the S curve is correct for the proxy model.
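As an illustration of the kind of check this involves (a toy sketch, not EssRisk’s implementation), the snippet below runs a bare-bones Hamiltonian Monte Carlo sampler on a 64-dimensional standard normal, a target whose quantiles are known analytically, and compares the sampled quantiles with the exact values.

```python
# Toy sketch: bare-bones Hamiltonian Monte Carlo on a 64-dimensional standard
# normal, whose quantiles are known analytically. Not EssRisk code.
import numpy as np

def hmc(logp_and_grad, x0, n_samples=2000, step=0.1, n_leapfrog=20, seed=0):
    """Basic HMC: logp_and_grad(x) must return (log density, gradient)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp, grad = logp_and_grad(x)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                  # resample momentum
        x_new, logp_new, grad_new = x.copy(), logp, grad.copy()
        # Leapfrog integration of the Hamiltonian dynamics
        p_new = p + 0.5 * step * grad_new
        for i in range(n_leapfrog):
            x_new = x_new + step * p_new
            logp_new, grad_new = logp_and_grad(x_new)
            if i < n_leapfrog - 1:
                p_new = p_new + step * grad_new
        p_new = p_new + 0.5 * step * grad_new
        # Metropolis accept/reject on the total energy
        h_old = -logp + 0.5 * p @ p
        h_new = -logp_new + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x, logp, grad = x_new, logp_new, grad_new
        samples.append(x.copy())
    return np.array(samples)

def std_normal(x):
    """Log density (up to a constant) and gradient of a standard normal."""
    return -0.5 * x @ x, -x

draws = hmc(std_normal, np.zeros(64))
# Sampled 10/50/90% quantiles of one coordinate vs the exact -1.28, 0.0, 1.28
print(np.quantile(draws[:, 0], [0.1, 0.5, 0.9]))
```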
Second, the workflow is constructed so that the S curve of the ensemble of simulation runs corresponds to the S curve of the proxy: the proxy S curve and the simulation-run S curve are the same.
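For concreteness, one simple way to build a run ensemble whose S curve tracks a proxy S curve is to pick parameter sets sitting at evenly spaced quantiles of the proxy-predicted quantity of interest and submit exactly those to the simulator. This is a hedged illustration of the idea only; the function below is hypothetical and is not the published EssRisk workflow.

```python
# Hedged illustration only (not the published EssRisk workflow): select
# parameter sets at evenly spaced quantiles of the proxy-predicted quantity
# of interest, so the simulated ensemble's S curve tracks the proxy's.
import numpy as np

def representative_runs(proxy_qoi, parameter_sets, n_runs=50):
    """Pick n_runs parameter sets spread evenly over the proxy S curve."""
    proxy_qoi = np.asarray(proxy_qoi, dtype=float)
    order = np.argsort(proxy_qoi)                     # proxy samples, sorted
    targets = (np.arange(n_runs) + 0.5) / n_runs      # 1%, 3%, ..., 99% for 50 runs
    positions = np.minimum((targets * proxy_qoi.size).astype(int),
                           proxy_qoi.size - 1)
    return [parameter_sets[i] for i in order[positions]]
```

If the proxy is accurate, simulating the selected parameter sets reproduces the proxy S curve up to the resolution set by the number of runs; if it is not, the mismatch between the two curves is itself a useful diagnostic.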
End of story.
The other piece of advice is never to criticise the competition. So instead, I will throw out some challenges:
- Have you done history match uncertainty studies with 100+ parameters? Within 250 runs?
- If you use random walk Markov Chain Monte Carlo methods, how do you counter the findings of renowned experts in MCMC and Bayesian statistics that this method is very unreliable and will grossly underestimate uncertainty?
- If you choose (either by hand or through some kind of evolutionary algorithm) a set of history match simulations and extend them out to prediction, why do you call that ‘probabilistic’?
- If you are using Ensemble Kalman Filtering (EnKF), have you found a validated, robust solution to the ‘variance collapse’ problem?
- If you use adjoint methods, how do you avoid getting stuck in local minima?
- If you are using exotic techniques such as dimension reduction or polynomial chaos expansion, are these applicable to the history match problem? How do theory and practice compare?
- For whatever approach you are promoting, have the results ever been tested against high-dimensional problems for which the analytical solution is known?
Nah, I think I will carry on promoting EssRisk as the first valid robust probabilistic uncertainty quantification approach. Nobody else comes close. Nobody else has a valid, validated and robust method. Nobody else can handle 100+ parameters. Nobody else can generate a probabilistic ensemble.
Full gory details are published in SPE 173301, “Bridging the Gap Between Deterministic and Probabilistic Uncertainty Quantification Using Advanced Proxy Based Methods”.