We describe a computationally efficient method for producing a specific Bayesian mixture of all models in a finite set of feature-based models, each of which assigns a probability to the observed data set. Special attention is given to the bound on the regret incurred by using the mixture instead of the best model in the set; we prove theoretically, and verify on synthetic data, that this bound is relatively tight. Comparing the workload of the proposed method with a direct implementation of the Bayesian mixture shows an almost exponential reduction in computing time.
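As a hedged illustration of the regret bound the abstract refers to (not the paper's specific construction or its efficient algorithm), the following sketch computes a uniform-prior Bayesian mixture of K hypothetical Bernoulli models and checks the classic guarantee that the mixture's cumulative log-loss exceeds that of the best single model by at most log K:

```python
import math

# Illustrative only: K Bernoulli models with assumed parameters `thetas`;
# the uniform-prior Bayesian mixture satisfies  regret <= log(K).

def mixture_log_loss(data, thetas):
    """Cumulative log-loss of the uniform-prior Bayesian mixture."""
    weights = [1.0 / len(thetas)] * len(thetas)
    total = 0.0
    for x in data:
        # Each model's probability of the next binary symbol.
        probs = [t if x == 1 else 1.0 - t for t in thetas]
        p_mix = sum(w * p for w, p in zip(weights, probs))
        total -= math.log(p_mix)
        # Bayesian posterior update of the mixture weights.
        weights = [w * p / p_mix for w, p in zip(weights, probs)]
    return total

def model_log_loss(data, theta):
    """Cumulative log-loss of a single Bernoulli model."""
    return -sum(math.log(theta if x == 1 else 1.0 - theta) for x in data)

data = [1, 1, 0, 1, 1, 1, 0, 1]
thetas = [0.2, 0.5, 0.8]          # assumed model parameters
regret = mixture_log_loss(data, thetas) - min(model_log_loss(data, t) for t in thetas)
assert 0.0 <= regret <= math.log(len(thetas)) + 1e-12
```

The bound follows because the mixture assigns each sequence at least 1/K times the probability assigned by the best model, so its log-loss can exceed the best model's by at most log K.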
International Journal of Pattern Recognition and Artificial Intelligence, 2016.