Abstract
Background: Virtual cells are embedded in widely used single-cell generative models, yet the models' implicit knowledge of perturbations remains unclear. Methods: We train variational autoencoders on three gene expression datasets spanning genetic, chemical, and temporal perturbations, and infer perturbations by differentiating decoder outputs with respect to latent variables. This yields vector fields of infinitesimal change in gene expression. We further probe a publicly released scVI decoder trained on the CZ CELLxGENE Discover Census (millions of mouse cells) and score genes by the alignment between local gradients and an empirical healthy-to-disease axis, followed by a novel large language model-based evaluation of pathways. Results: Gradient flows recover known transitions in Irf8 knockout microglia, cardiotoxin-treated muscle, and worm embryogenesis. In the pretrained Census model, gradients help identify pathways with stronger statistical support and higher type 2 diabetes relevance than an average-expression baseline. Conclusions: Trained single-cell decoders already contain rich perturbation-relevant information that can be accessed by automatic differentiation, enabling in-silico perturbation simulations and principled ranking of genes along observed disease or treatment axes without bespoke architectures or perturbation labels.
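The core operation described above is differentiating a trained decoder with respect to its latent variables and projecting the resulting gradients onto an observed axis (e.g., healthy to disease). A minimal sketch of that idea follows; it is not the authors' implementation. The decoder weights, latent point, and disease axis are all hypothetical stand-ins, and finite differences replace the automatic differentiation used in the paper so the sketch stays dependency-free:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": maps a 2-D latent z to the expression of 5 genes.
# Stand-in for a trained VAE/scVI decoder (hypothetical random weights).
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 5)), np.zeros(5)

def decode(z):
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2  # expression of 5 genes

def jacobian(z, eps=1e-5):
    """Finite-difference Jacobian d decode(z) / d z, shape (genes, latent dims).
    The paper uses automatic differentiation; finite differences keep this
    sketch free of deep-learning dependencies."""
    base = decode(z)
    J = np.zeros((base.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (decode(z + dz) - base) / eps
    return J

# Hypothetical latent position and empirical healthy-to-disease axis.
z = np.array([0.3, -0.7])
axis = np.array([1.0, 0.5])
axis /= np.linalg.norm(axis)

# Directional derivative of each gene along the axis: how strongly an
# infinitesimal move toward "disease" changes that gene's expression.
scores = jacobian(z) @ axis
ranked_genes = np.argsort(-np.abs(scores))  # genes ranked by alignment
print(ranked_genes)
```

Stacking such directional derivatives over many cells gives the vector fields of infinitesimal expression change referred to in the abstract; replacing `jacobian` with a framework's autodiff (e.g., a reverse-mode Jacobian of the scVI decoder) is the scalable version of the same computation.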
| Original language | English |
|---|---|
| Article number | 1439 |
| Journal | Genes |
| Volume | 16 |
| Issue number | 12 |
| Number of pages | 18 |
| ISSN | 2073-4425 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2025 by the authors.
Keywords
- agentic AI
- explainable AI
- gene expression
- generative models
- in silico perturbations
- machine learning
- single-cell RNA sequencing