All sparse PCA models are wrong, but some are useful: Part II: Limitations and problems of deflation

J. Camacho*, A. K. Smilde, E. Saccenti, J. A. Westerhuis, Rasmus Bro

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review


Abstract

Sparse Principal Component Analysis (sPCA) is a popular matrix factorization approach based on Principal Component Analysis (PCA). It combines variance maximization and sparsity with the ultimate goal of improving data interpretation. A main application of sPCA is the analysis of high-dimensional data, such as biological omics data. In Part I of this series, we illustrated limitations of several state-of-the-art sPCA algorithms when modeling noise-free data simulated from an exact sPCA model. In this Part II we provide a thorough analysis of the limitations of sPCA methods that use deflation to calculate subsequent, higher-order components. We show, both theoretically and numerically, that deflation can lead to problems in model interpretation, even for noise-free data. In addition, we contribute diagnostics to identify modeling problems in real-data analysis.
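For readers unfamiliar with the deflation scheme the abstract refers to, the sketch below is a minimal, illustrative numpy implementation, not any of the specific sPCA algorithms analyzed in the paper. The function names (`sparse_pca_deflation`, `soft_threshold`) and the soft-thresholding penalty `lam` are assumptions chosen for the example. It shows the key mechanism: each sparse component is fitted to the currently deflated matrix, and its rank-one contribution is subtracted before the next component is extracted, so higher-order loadings no longer refer to the original data.

```python
import numpy as np

def soft_threshold(v, lam):
    """Element-wise soft-thresholding, a common device for inducing sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pca_deflation(X, n_components=2, lam=0.1):
    """Toy deflation-based sparse PCA (illustrative only).

    Per component: take the leading right singular vector of the current
    (deflated) matrix, soft-threshold it to induce sparsity, renormalize,
    compute scores, then deflate by the fitted rank-one part.
    """
    Xd = X - X.mean(axis=0)              # center columns
    loadings, scores = [], []
    for _ in range(n_components):
        _, _, Vt = np.linalg.svd(Xd, full_matrices=False)
        w = soft_threshold(Vt[0], lam)   # sparse loading candidate
        if np.allclose(w, 0.0):
            break                        # penalty too aggressive
        w /= np.linalg.norm(w)
        t = Xd @ w                       # scores of this component
        Xd = Xd - np.outer(t, w)         # deflation: remove rank-one part
        loadings.append(w)
        scores.append(t)
    return np.array(scores).T, np.array(loadings)

# Later components are computed from the deflated matrix, not from the
# original X -- the source of the interpretation issues discussed in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
T, P = sparse_pca_deflation(X, n_components=2, lam=0.2)
print(P.round(2))
```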

Original language: English
Article number: 104212
Journal: Chemometrics and Intelligent Laboratory Systems
Volume: 208
Number of pages: 11
ISSN: 0169-7439
DOIs
Publication status: Published - 2021

Keywords

  • Artifacts
  • Data interpretation
  • Exploratory data analysis
  • Model interpretation
  • Sparse principal component analysis
  • Sparsity
