<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://kaeliarizzo.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://kaeliarizzo.github.io/" rel="alternate" type="text/html" /><updated>2025-10-13T23:58:13+00:00</updated><id>https://kaeliarizzo.github.io/feed.xml</id><title type="html">Kaeli Rizzo</title><subtitle>Ph.D. candidate at Cold Spring Harbor Laboratory</subtitle><author><name>Kaeli Rizzo</name></author><entry><title type="html">Ensemble distillation with stochastic teachers via online moment estimation</title><link href="https://kaeliarizzo.github.io/STEDD/" rel="alternate" type="text/html" title="Ensemble distillation with stochastic teachers via online moment estimation" /><published>2025-12-15T00:00:00+00:00</published><updated>2025-12-15T00:00:00+00:00</updated><id>https://kaeliarizzo.github.io/STEDD</id><content type="html" xml:base="https://kaeliarizzo.github.io/STEDD/"><![CDATA[<h2 id="abstract">Abstract:</h2>

<p>Deep ensembles are a simple and effective method for improving both predictive performance and epistemic uncertainty estimation in deep learning. However, their high computational cost, especially at inference time, limits their practicality in real-world deployments. Ensemble distillation offers a promising solution by training a single student model to match the ensemble’s predictive distribution. Yet existing approaches typically assume full access to all M teacher predictions during training, an assumption that is often impractical due to compute constraints, memory limitations, or asynchronous model evaluations. Here we introduce STEDD (Stochastic Teacher-sampling for Ensemble Distribution Distillation), a framework for distilling both the mean and variance of an ensemble using only a small number of random teacher queries per input, even as few as one. STEDD includes three estimators tailored to different access regimes and provides theoretical guarantees for convergence and calibration. Experiments on genomics and vision benchmarks demonstrate that STEDD preserves ensemble-level performance and uncertainty calibration while significantly reducing training-time cost.</p>
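
<p>As a rough illustration of the few-query regime, the sketch below maintains a Welford-style online estimate of the per-example teacher mean and variance in PyTorch, updated from one stochastic teacher query at a time. This is a minimal sketch under assumed names and shapes (<code>OnlineMoments</code>, the per-example buffers, the distillation targets); it is not the STEDD implementation or any of its three estimators.</p>

<pre><code>import torch

class OnlineMoments:
    """Welford-style running mean/variance of teacher predictions,
    tracked per training example (illustrative, not the paper's API)."""

    def __init__(self, n_examples, n_outputs):
        self.count = torch.zeros(n_examples)
        self.mean = torch.zeros(n_examples, n_outputs)
        self.m2 = torch.zeros(n_examples, n_outputs)

    def update(self, idx, teacher_pred):
        # Fold in one stochastic teacher query per example.
        # Assumes idx contains no duplicate indices within a batch.
        self.count[idx] += 1.0
        delta = teacher_pred - self.mean[idx]
        self.mean[idx] = self.mean[idx] + delta / self.count[idx].unsqueeze(-1)
        self.m2[idx] = self.m2[idx] + delta * (teacher_pred - self.mean[idx])

    def variance(self, idx):
        # Unbiased sample variance; meaningful once an example has
        # been queried at least twice.
        denom = (self.count[idx] - 1.0).clamp(min=1.0).unsqueeze(-1)
        return self.m2[idx] / denom

# Toy usage: two single-teacher queries for the same four examples.
moments = OnlineMoments(n_examples=8, n_outputs=3)
idx = torch.arange(4)
moments.update(idx, torch.randn(4, 3))  # prediction from one sampled teacher
moments.update(idx, torch.randn(4, 3))  # prediction from another
targets = (moments.mean[idx], moments.variance(idx))  # distillation targets
</code></pre>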

<p>Paper in preparation.</p>]]></content><author><name>Kaeli Rizzo</name></author><summary type="html"><![CDATA[In Preparation, 2025]]></summary></entry><entry><title type="html">Uncertainty-aware genomic deep learning with knowledge distillation</title><link href="https://kaeliarizzo.github.io/DEGU/" rel="alternate" type="text/html" title="Uncertainty-aware genomic deep learning with knowledge distillation" /><published>2024-11-15T00:00:00+00:00</published><updated>2024-11-15T00:00:00+00:00</updated><id>https://kaeliarizzo.github.io/DEGU</id><content type="html" xml:base="https://kaeliarizzo.github.io/DEGU/"><![CDATA[<p><img src="/assets/images/select.png" alt="Image of network selection paper." /></p>

<h2 id="abstract">Abstract:</h2>

<p>Deep neural networks (DNNs) have advanced predictive modeling for regulatory genomics, but challenges remain in ensuring the reliability of their predictions and understanding the key factors behind their decision making. Here we introduce DEGU (Distilling Ensembles for Genomic Uncertainty-aware models), a method that integrates ensemble learning and knowledge distillation to improve the robustness and explainability of DNN predictions. DEGU distills the predictions of an ensemble of DNNs into a single model, capturing both the average of the ensemble’s predictions and the variability across them, with the latter representing epistemic (or model-based) uncertainty. DEGU also includes an optional auxiliary task to estimate aleatoric, or data-based, uncertainty by modeling variability across experimental replicates. By applying DEGU across various functional genomic prediction tasks, we demonstrate that DEGU-trained models inherit the performance benefits of ensembles in a single model, with improved generalization to out-of-distribution sequences and more consistent explanations of cis-regulatory mechanisms through attribution analysis. Moreover, DEGU-trained models provide calibrated uncertainty estimates, with conformal prediction offering coverage guarantees under minimal assumptions. Overall, DEGU paves the way for robust and trustworthy applications of deep learning in genomics research.</p>
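
<p>For concreteness, a hypothetical PyTorch sketch of the distillation objective described above: a student with separate prediction and variability heads is regressed onto the ensemble’s first two moments. The names and shapes here (<code>degu_loss</code>, stacked teacher predictions) are illustrative assumptions, not the released DEGU code.</p>

<pre><code>import torch
import torch.nn.functional as F

def degu_loss(student_mean, student_std, teacher_preds):
    # teacher_preds: (M, batch, outputs), the stacked ensemble outputs.
    # The student matches the ensemble mean (the prediction itself) and
    # the across-member std (epistemic uncertainty). An optional third
    # head trained on replicate-to-replicate variability would add an
    # analogous term for aleatoric uncertainty.
    target_mean = teacher_preds.mean(dim=0)
    target_std = teacher_preds.std(dim=0)
    return F.mse_loss(student_mean, target_mean) + F.mse_loss(student_std, target_std)

# Toy usage: M=5 teachers, batch of 4 examples, 2 outputs.
teacher_preds = torch.randn(5, 4, 2)
loss = degu_loss(torch.zeros(4, 2), torch.ones(4, 2), teacher_preds)
</code></pre>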

<p><a href="https://www.biorxiv.org/content/10.1101/2024.11.13.623485v1">Full Text</a></p>]]></content><author><name>Kaeli Rizzo</name></author><summary type="html"><![CDATA[bioRxiv, 2024]]></summary></entry><entry><title type="html">PrismEXP: gene annotation prediction from stratified gene-gene co-expression matrices</title><link href="https://kaeliarizzo.github.io/PrismExp/" rel="alternate" type="text/html" title="PrismEXP: gene annotation prediction from stratified gene-gene co-expression matrices" /><published>2023-02-27T00:00:00+00:00</published><updated>2023-02-27T00:00:00+00:00</updated><id>https://kaeliarizzo.github.io/PrismExp</id><content type="html" xml:base="https://kaeliarizzo.github.io/PrismExp/"><![CDATA[<p><a href="https://peerj.com/articles/14927/">Full Text</a></p>]]></content><author><name>Kaeli Rizzo</name></author><summary type="html"><![CDATA[PeerJ, 2023]]></summary></entry></feed>