Recent Projects

 

Causal identification of single-cell experimental perturbation effects with CINEMA-OT

We introduce CINEMA-OT, a causal inference method for single-cell perturbation analysis. The approach separates confounding sources of variation from true treatment effects and matches each treated cell to counterfactual control cells, enabling downstream analyses such as individual treatment-effect estimation and response clustering. Benchmarks on both simulated and real datasets show that CINEMA-OT outperforms existing methods. Applied to new experiments on airway organoids and immune cells, it uncovers mechanisms underlying impaired antiviral responses and immune cell behavior.
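
The sketch below illustrates the core idea of counterfactual matching by entropic optimal transport, using the POT library on a confounder embedding. The function names, embedding choice, and regularization value are illustrative assumptions, not the released CINEMA-OT implementation.

# Illustrative sketch: soft-match treated cells to control cells by entropic
# optimal transport on a confounder embedding (e.g., components that do not
# separate treatment from control), then form per-cell treatment effects.
import numpy as np
import ot  # POT: Python Optimal Transport

def counterfactual_coupling(control_embedding, treated_embedding, reg=0.05):
    n_c, n_t = control_embedding.shape[0], treated_embedding.shape[0]
    a = np.full(n_c, 1.0 / n_c)                        # uniform mass on control cells
    b = np.full(n_t, 1.0 / n_t)                        # uniform mass on treated cells
    M = ot.dist(control_embedding, treated_embedding)  # squared Euclidean cost matrix
    return ot.sinkhorn(a, b, M / M.max(), reg)         # entropic OT coupling (n_c x n_t)

def treatment_effects(coupling, control_expr, treated_expr):
    # Each treated cell's counterfactual is the OT-weighted average of control cells.
    weights = coupling / coupling.sum(axis=0, keepdims=True)
    counterfactual = weights.T @ control_expr          # (n_treated, n_genes)
    return treated_expr - counterfactual               # per-cell treatment-effect matrix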

 

BrainLM: A foundation model for brain activity recordings

We present BrainLM, a foundation model for brain activity trained on 6,700 hours of fMRI recordings. Using self-supervised training, it supports both fine-tuning for clinical predictions and zero-shot inference, such as identifying functional brain networks. A new prompting technique further enables BrainLM to simulate brain responses to perturbations. The model offers a new framework for interpreting large-scale brain activity data.
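
As an illustration of the self-supervised setup, the sketch below implements a masked-reconstruction objective over parcellated fMRI time series in PyTorch. The architecture size, masking ratio, and parcel count are assumptions for illustration, not the released BrainLM model.

# Minimal masked-prediction sketch for parcellated BOLD signals (illustrative only).
import torch
import torch.nn as nn

class MaskedFMRIModel(nn.Module):
    def __init__(self, n_parcels=424, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_parcels, d_model)     # one token per timepoint
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.decode = nn.Linear(d_model, n_parcels)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, x, mask_ratio=0.75):
        # x: (batch, time, parcels) parcellated fMRI signals
        tokens = self.embed(x)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        recon = self.decode(self.encoder(tokens))
        loss = ((recon - x) ** 2)[mask].mean()         # reconstruct only masked timepoints
        return loss, recon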

 

Cell2Sentence: Teaching Large Language Models the Language of Biology

We introduce Cell2Sentence, a method that converts single-cell RNA data into "cell sentences" to fine-tune language models like GPT-2 for biological predictions. Pretraining on natural language improves performance. The approach merges natural language processing and transcriptomics, letting models understand biology while retaining text capabilities. It has applications in gene expression manipulation, rare cell type generation, and data interpretation.
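
The heart of the transformation is simple enough to sketch: a cell's expression vector becomes a space-separated string of gene names ordered by decreasing expression. The sentence length and filtering below are illustrative choices, not the exact preprocessing used in the paper.

# Sketch of the cell-to-sentence conversion (illustrative).
import numpy as np

def cell_to_sentence(expression, gene_names, max_genes=100):
    # expression: 1-D array of counts for one cell; gene_names: matching list of symbols
    order = np.argsort(expression)[::-1]                        # most-expressed genes first
    order = [i for i in order if expression[i] > 0][:max_genes]
    return " ".join(gene_names[i] for i in order)

sentence = cell_to_sentence(np.array([0, 5, 2, 9]), ["A1BG", "CD3D", "ACTB", "MALAT1"])
print(sentence)  # "MALAT1 CD3D ACTB" -- usable as a prompt/target when fine-tuning GPT-2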

 

Continuous Spatiotemporal Transformer

Standard transformers are limited in modeling continuous processes because they operate on discrete tokens. The Continuous Spatiotemporal Transformer (CST) incorporates a regularization that penalizes higher-order derivatives of the output (a Sobolev loss), ensuring the model outputs smooth continuous functions that can accurately interpolate irregularly sampled, noisy data. Our experiments demonstrate CST's superior interpolation performance on various synthetic and real-world spatiotemporal datasets compared to transformers, neural ODEs, and other baselines. On a video dataset, CST achieves lower error than other commonly used methods. CST also provides meaningful learned representations, as shown by modeling whole-cortex calcium imaging data, where CST's self-attention captures interpretable patterns of brain region interactions.
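
A minimal version of the Sobolev-type regularizer described above can be written with finite differences in time; the derivative order and weight below are illustrative values, not those used in CST.

# Penalize finite-difference approximations of higher-order time derivatives
# of the model output so that interpolations stay smooth (illustrative sketch).
import torch

def sobolev_penalty(outputs, dt=1.0, order=2, weight=1e-2):
    # outputs: (batch, time, features) predictions on a regular time grid
    penalty = 0.0
    d = outputs
    for _ in range(order):
        d = (d[:, 1:] - d[:, :-1]) / dt                # next-order finite difference in time
        penalty = penalty + d.pow(2).mean()
    return weight * penalty

# Used as: loss = data_fitting_loss + sobolev_penalty(model_outputs)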

 

Neural Integral Equations

We introduce Neural Integral Equations (NIE) and Attentional Neural Integral Equations (ANIE), novel neural network models for learning dynamics from data in the form of integral equations. Integral equations can model complex spatiotemporal systems with non-local interactions. We cast this as an operator learning problem, in which the integral operator is learned from data by solving an optimization problem through a differentiable integral equation solver. We show that NIE and ANIE can accurately and efficiently learn the dynamics of various systems, including ODEs, PDEs, and IEs, outperforming neural ODEs and other baselines. Applications show that the methods provide interpretable embeddings and attention maps for understanding learned dynamics, such as inferring connectivity patterns underlying brain activity recordings.
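
The sketch below conveys the general formulation: a solution y(t) of y(t) = f(t) + ∫ G(y(s), s, t) ds is approximated by a fixed-point (Picard-style) iteration, with the integral taken as a mean over grid points and G parameterized by a small network. This follows the idea described above, not the exact NIE/ANIE implementation or solver.

# Illustrative neural integral equation with a Picard-style solver.
import torch
import torch.nn as nn

class NeuralIntegralEquation(nn.Module):
    def __init__(self, dim, hidden=64, n_iters=5):
        super().__init__()
        self.G = nn.Sequential(nn.Linear(dim + 2, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.n_iters = n_iters

    def forward(self, f, t_grid):
        # f: (T, dim) source term sampled on the time grid t_grid: (T,)
        y, T = f.clone(), t_grid.shape[0]
        for _ in range(self.n_iters):
            ys = y.unsqueeze(0).expand(T, T, -1)                 # y(s) replicated for each t
            ss = t_grid.view(1, T, 1).expand(T, T, 1)            # integration variable s
            tt = t_grid.view(T, 1, 1).expand(T, T, 1)            # evaluation point t
            integrand = self.G(torch.cat([ys, ss, tt], dim=-1))  # (T, T, dim)
            y = f + integrand.mean(dim=1)                        # integral over s (normalized domain)
        return y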

 

scGAT: Single-Cell Graph Attention Networks for Interpretable Identification of Cell State Determinants

Owing to the high dimensionality, sparsity, and heterogeneity of single-cell RNA sequencing data, it is challenging to identify transcripts associated with cell state. Yet a robust ability to do so for any scRNA-seq dataset can be used for hypothesis generation, biomarker discovery, and drug repurposing. We offer a computational approach using geometric deep learning and widely used attention mechanisms to identify transcripts associated with individual cell states, which may serve as biomarkers or be used to repurpose existing drugs to alter cell state. scGAT builds upon standard differential expression comparisons for identifying hits and offers a new approach to scRNA-seq analysis pipelines.
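
A hedged sketch of the general architecture: a graph attention network over a cell–cell k-nearest-neighbor graph, whose attention weights can be inspected alongside gradient-based attributions to rank transcripts associated with a cell-state label. The code uses PyTorch Geometric and is illustrative, not the released scGAT implementation.

# Graph attention over a cell graph for cell-state classification (illustrative).
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class CellStateGAT(nn.Module):
    def __init__(self, n_genes, hidden=64, n_states=2, heads=4):
        super().__init__()
        self.gat1 = GATConv(n_genes, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, n_states, heads=1)

    def forward(self, x, edge_index, return_attention=False):
        # x: (n_cells, n_genes) expression; edge_index: (2, n_edges) kNN cell graph
        h, attn = self.gat1(x, edge_index, return_attention_weights=True)
        logits = self.gat2(torch.relu(h), edge_index)
        return (logits, attn) if return_attention else logits  # attn = (edge_index, coefficients)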

 

Single-cell Longitudinal Analysis of SARS-CoV-2 Infection in Airway Organoids

In this project, we performed single-cell RNA sequencing of experimentally infected human bronchial epithelial cells (HBECs) in air–liquid interface (ALI) cultures over a time course. This revealed novel polyadenylated viral transcripts and highlighted ciliated cells as a major target at the onset of infection, which we confirmed by electron and immunofluorescence microscopy. Over the course of infection, the cell tropism of SARS-CoV-2 expands to other epithelial cell types, including basal and club cells. Infection induces cell-intrinsic expression of type I and type III interferons (IFNs) and interleukin (IL)-6, but not IL-1, resulting in expression of interferon-stimulated genes (ISGs) in both infected and bystander cells. This work provides a detailed characterization of the genes, cell types, and cell state changes associated with SARS-CoV-2 infection in the human airway.

 

Inferring Neural Dynamics From Calcium Imaging Data Using Deep Learning

Multiple recording technologies have revealed traveling waves of neural activity in the cortex. These waves can be spontaneously generated or evoked by external stimuli. They travel along brain networks at multiple scales and may serve a variety of functions, from long-term memory consolidation to processing of dynamic visual stimuli. How traveling waves are generated and propagated in neural networks has long intrigued researchers from several disciplines. We aim to tackle these questions by developing new machine learning frameworks that model the brain dynamics captured by calcium imaging as continuous spatiotemporal sequences. This project combines machine learning models, such as transformers, with Fourier feature mappings.
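
The Fourier feature mapping mentioned above can be sketched as a random sinusoidal lift of spatiotemporal coordinates before they enter the model; the feature count and scale below are illustrative assumptions.

# Random Fourier feature mapping of (x, y, t) coordinates (illustrative sketch).
import numpy as np

def fourier_features(coords, n_features=128, scale=10.0, seed=0):
    # coords: (N, 3) array of (x, y, t) points; returns (N, 2 * n_features)
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, size=(coords.shape[1], n_features))  # random projection matrix
    proj = 2.0 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)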

 

Learning Potentials of Quantum Systems using Deep Neural Networks

In this project, we investigate how to reconstruct the Hamiltonian of a quantum system in an unsupervised manner, using only limited information obtained from the system's probability distribution. To do so, we design neural networks, termed Quantum Potential Neural Networks (QPNN), based on the underlying formalism for the inverse solution of the Schrödinger equation. Our proposed method works for a wide range of quantum mechanical systems, including three-dimensional and many-electron systems.
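
The underlying relation is the inverse form of the time-independent Schrödinger equation: for a nodeless ground state with density rho and psi = sqrt(rho), and in units where hbar = m = 1, V(x) - E = (1/2) psi''(x) / psi(x). The finite-difference sketch below recovers the potential up to the constant E on a 1-D grid; it illustrates the formalism, not the QPNN network itself.

# Recover V(x) - E from density samples on a uniform 1-D grid (illustrative).
import numpy as np

def potential_from_density(rho, dx):
    psi = np.sqrt(rho)                                              # ground-state wavefunction
    d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2  # second derivative
    return 0.5 * d2psi[1:-1] / psi[1:-1]                            # V - E on interior points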

 

Homology free functional annotation of enzymes

Microbes are incredibly talented biochemists, accounting for 10% of all known bioactive natural products. Generally, the very first step in determining what a novel microbial gene or enzyme does is to compare it to a known homolog, but 70% of identified microbial genes have no annotated homolog. A homology-free method for determining enzyme function would make large strides toward decoding this microbial dark matter and unlocking microbial biosynthetic potential. Graph neural networks provide a new framework for extracting meaning from data structured as graphs. Because both small molecules and proteins can be represented as graphs, I hope to leverage developments in the graph neural network field to build models that learn the meaningful features of an enzyme sequence from the types of reactions it catalyzes, and that can subsequently be used to predict a functional annotation for arbitrary enzyme sequences.
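
A hedged sketch of the intended direction: a graph neural network over a protein graph (for example, residues as nodes and contacts as edges) trained to predict a reaction class such as the top-level EC number. The layer choices and class count are illustrative assumptions using PyTorch Geometric, not an existing implementation.

# Graph-level enzyme function classifier (illustrative).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class EnzymeFunctionGNN(nn.Module):
    def __init__(self, n_node_features, hidden=128, n_classes=7):
        super().__init__()
        self.conv1 = GCNConv(n_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.classify = nn.Linear(hidden, n_classes)        # e.g., 7 top-level EC classes

    def forward(self, x, edge_index, batch):
        # x: per-residue features; edge_index: residue contacts; batch: graph membership
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.classify(global_mean_pool(h, batch))    # one prediction per enzyme graph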
