Kernel-Matrix Determinant Estimates from Stopped Cholesky Decomposition

Simon Bartels, Wouter Boomsma, Jes Frellsen, Damien Garreau

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant from only a sub-matrix, with a probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time, while the overhead rarely exceeds 5% when the algorithm does not stop early. More generally, we present a probabilistic stopping strategy for the approximation of a sum of known length where addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.
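
The quantity being approximated is log det(K) = 2 Σ_i log L_ii, where L is the Cholesky factor of K, so the log-determinant is a sum whose addends appear one per Cholesky step. The sketch below, assuming NumPy, illustrates this idea of stopping a column-by-column Cholesky decomposition early; the stopping rule used here (extrapolating the latest diagonal entry against a rel_tol threshold) is a naive placeholder for illustration, not the probabilistic criterion with relative-error guarantees developed in the paper.

```python
import numpy as np

def stopped_cholesky_logdet(K, rel_tol=0.05, min_fraction=0.1):
    """Estimate log det(K) from a partial Cholesky decomposition.

    Illustrative only: the stopping rule is a simple placeholder,
    not the criterion derived in the paper."""
    n = K.shape[0]
    L = np.zeros_like(K, dtype=float)
    logdet = 0.0
    for j in range(n):
        # Standard Cholesky update for column j.
        d = K[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        L[j + 1:, j] = (K[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
        logdet += 2.0 * np.log(L[j, j])
        # Placeholder stopping rule: extrapolate the remaining diagonal
        # contributions from the current one and stop once that projected
        # remainder is small relative to the sum accumulated so far.
        if j + 1 >= min_fraction * n:
            remainder = (n - j - 1) * 2.0 * np.log(L[j, j])
            if abs(remainder) <= rel_tol * abs(logdet):
                return logdet + remainder, j + 1
    return logdet, n
```

On a well-conditioned kernel matrix (for example, an RBF kernel with a small jitter term added to the diagonal), the returned step count indicates how many columns were processed before stopping, which is what determines the potential time savings.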
Original language: English
Article number: 71
Journal: Journal of Machine Learning Research
Volume: 24
Pages (from-to): 1-57
ISSN: 1532-4435
Publication status: Published - 2023
