Speaker
Description
I introduce floZ, an improved method based on normalizing flows for estimating the Bayesian evidence (and its numerical uncertainty) from samples drawn from the unnormalized posterior distribution. I validate it on distributions whose evidence is known analytically in up to 15 parameter-space dimensions, and I demonstrate its accuracy in up to 200 dimensions with $10^5$ posterior samples. I also compare it with nested sampling, which computes the evidence as its main target.
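The estimate rests on a simple identity: at any parameter point, the evidence is the ratio of the unnormalized posterior (likelihood times prior) to the normalized posterior density, which the trained flow approximates. In notation of my own choosing (not necessarily the talk's),

$$
\mathcal{Z} \;=\; \frac{\mathcal{L}(\theta)\,\pi(\theta)}{p(\theta\mid d)}
\qquad\Longrightarrow\qquad
\widehat{\mathcal{Z}}_i \;=\; \frac{\mathcal{L}(\theta_i)\,\pi(\theta_i)}{q_\phi(\theta_i)},
$$

where $q_\phi$ is the normalizing flow fitted to the posterior samples $\{\theta_i\}$, and the per-sample estimates $\widehat{\mathcal{Z}}_i$ are combined into a single value with an associated numerical uncertainty.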
Provided that representative samples from the target posterior are available, the method is more robust to posterior distributions with sharp features, especially in higher dimensions. I introduce a convergence test to determine when the normalizing flow has learned the target distribution. Finally, I show the flow's adaptability in the context of transfer learning.
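As an illustration of how such a per-sample estimator behaves, here is a minimal sketch (function names and the simple log-space average are my own, not the talk's implementation). The exact posterior density of a toy Gaussian stands in for the trained flow, so the per-sample estimates are exact; with a real, imperfect flow they would scatter around $\log\mathcal{Z}$ and the combination and uncertainty estimate matter.

```python
import numpy as np

def log_evidence_from_samples(log_post_unnorm, log_q):
    """Per-sample estimates log Z_i = log[L(theta_i) pi(theta_i)] - log q(theta_i),
    combined here by a plain average in log space; a more robust combination and
    error estimate may be used in practice."""
    log_z_i = log_post_unnorm - log_q
    return log_z_i.mean(), log_z_i.std() / np.sqrt(len(log_z_i))

# Toy check in d = 2: a unit Gaussian posterior with a known normalization,
# and the exact posterior density standing in for the trained flow q_phi.
rng = np.random.default_rng(0)
d, n = 2, 10_000
samples = rng.standard_normal((n, d))                 # "posterior" samples
true_log_z = -3.0                                     # assumed evidence for the toy
log_p = -0.5 * np.sum(samples**2, axis=1) - 0.5 * d * np.log(2 * np.pi)
log_post_unnorm = true_log_z + log_p                  # log likelihood + log prior
log_q = log_p                                         # perfect "flow": zero scatter

log_z, err = log_evidence_from_samples(log_post_unnorm, log_q)
print(f"log Z = {log_z:.3f} +/- {err:.3f}  (truth: {true_log_z})")
```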