TY - CPAPER
T1 - Probabilistic Integral Circuits
AU - Gala, Gennaro
AU - de Campos, Cassio
AU - Peharz, Robert
AU - Vergari, Antonio
AU - Quaeghebeur, Erik
PY - 2024/05/04
Y1 - 2024/05/04
AB - Continuous latent variables (LVs) are a key ingredient of many generative models, as they allow modelling expressive mixtures with an uncountable number of components. In contrast, probabilistic circuits (PCs) are hierarchical discrete mixtures represented as computational graphs composed of input, sum and product units. Unlike continuous LV models, PCs provide tractable inference but are limited to discrete LVs with categorical (i.e. unordered) states. We bridge these model classes by introducing probabilistic integral circuits (PICs), a new language of computational graphs that extends PCs with integral units representing continuous LVs. In the first place, PICs are symbolic computational graphs and are fully tractable in simple cases where analytical integration is possible. In practice, we parameterise PICs with lightweight neural nets delivering an intractable hierarchical continuous mixture that can be approximated arbitrarily well with large PCs using numerical quadrature. On several distribution estimation benchmarks, we show that such PIC-approximating PCs systematically outperform PCs commonly learned via expectation-maximization or SGD.
UR - http://www.scopus.com/inward/record.url?scp=85194168381&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85194168381
T3 - Proceedings of Machine Learning Research (PMLR)
SP - 2143
EP - 2151
BT - Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
A2 - Dasgupta, Sanjoy
A2 - Mandt, Stephan
A2 - Li, Yingzhen
PB - PMLR
T2 - 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024
Y2 - 2 May 2024 through 4 May 2024
ER -