Authors: Gabriel Rioux, Rustum Choksi, Tim Hoheisel, Pierre Maréchal, Christopher Scarvelis
Published in Inverse Problems, 2020
Image deblurring is a notoriously challenging ill-posed inverse problem. In recent years, a wide variety of approaches have been proposed based upon regularization at the level of the image or on techniques from machine learning. In this article, we adapt the principle of maximum entropy on the mean (MEM) to both deconvolution of general images and point spread function estimation (blind deblurring). This approach shifts the paradigm toward regularization at the level of the probability distribution on the space of images whose expectation is our estimate of the ground truth. We present a self-contained analysis of this method, reducing the problem to solving a differentiable, strongly convex finite-dimensional optimization problem for which there exists an abundance of black-box solvers. The strength of the MEM method lies in its simplicity, its ability to handle large blurs, and its potential for generalization and modifications. When images are embedded with symbology (a known pattern), we show how our method can be applied to approximate the unknown blur kernel to remarkable effect.
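To make the reduction concrete, here is a minimal sketch of the MEM dual in the simplest setting of a Gaussian reference measure, where the log-moment-generating function κ has a closed form and the dual is smooth and strongly convex. The blur operator, reference mean, and regularization weight below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of MEM deconvolution via its strongly convex dual, assuming a
# Gaussian reference measure N(mu0, s0^2 I). The blur C is a hypothetical
# Gaussian blur (approximately self-adjoint, so C^T ~ C); alpha is an
# illustrative weight accounting for noise in the data.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def C(x, shape):
    # blur operator acting on raveled images
    return gaussian_filter(x.reshape(shape), sigma=2.0).ravel()

def mem_deblur(b, shape, mu0, s0=1.0, alpha=1e-2):
    """Solve min_lam  kappa(C^T lam) - <b, lam> + (alpha/2)||lam||^2,
    where kappa(y) = <mu0, y> + (s0^2/2)||y||^2 is the log-MGF of the
    Gaussian reference. The MEM estimate is grad-kappa(C^T lam*)."""
    def F(lam):
        y = C(lam, shape)                          # C^T lam
        val = mu0 @ y + 0.5 * s0**2 * y @ y - b @ lam + 0.5 * alpha * lam @ lam
        grad = C(mu0 + s0**2 * y, shape) - b + alpha * lam
        return val, grad
    res = minimize(F, np.zeros_like(b), jac=True, method="L-BFGS-B")
    y = C(res.x, shape)
    return (mu0 + s0**2 * y).reshape(shape)        # x_hat = grad kappa(C^T lam*)
```

Any black-box smooth convex solver works here; L-BFGS is just one convenient choice.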
Download here
Authors: Gabriel Rioux, Christopher Scarvelis, Rustum Choksi, Tim Hoheisel, Pierre Maréchal
Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Barcode encoding schemes impose symbolic constraints which fix certain segments of the image. We present, implement, and assess a method for blind deblurring and denoising based entirely on Kullback-Leibler divergence. The method is designed to incorporate and exploit the full strength of barcode symbologies. Via both standard barcode reading software and smartphone apps, we demonstrate the remarkable ability of our method to blindly recover simulated images of highly blurred and noisy barcodes. As proof of concept, we present one application on a real-life out-of-focus camera image.
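As a toy illustration of exploiting symbology, the sketch below estimates a 1-D blur kernel from an image segment whose clean signal is fixed by the encoding scheme, by minimizing a generalized KL fidelity over the probability simplex. The kernel size and softmax parametrization are illustrative choices, not the paper's algorithm.

```python
# Toy sketch: estimate a 1-D blur kernel from a barcode segment whose clean
# signal `known` is fixed by the symbology, by minimizing a generalized
# KL divergence over the probability simplex (softmax parametrization).
import numpy as np
from scipy.optimize import minimize

def estimate_kernel(observed, known, ksize=15, eps=1e-8):
    """observed: blurred intensities over a region whose clean signal is
    `known`. Returns a nonnegative kernel summing to one."""
    def kl_loss(theta):
        k = np.exp(theta - theta.max()); k /= k.sum()    # point on the simplex
        pred = np.convolve(known, k, mode="same") + eps
        p = observed + eps
        return np.sum(p * np.log(p / pred) - p + pred)   # generalized KL
    res = minimize(kl_loss, np.zeros(ksize), method="L-BFGS-B")
    k = np.exp(res.x - res.x.max())
    return k / k.sum()
```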
Download here
Authors: Christopher Scarvelis, Justin Solomon
Published in ICLR, 2023
We introduce an optimal transport-based model for learning a metric tensor from cross-sectional samples of evolving probability measures on a common Riemannian manifold. We neurally parametrize the metric as a spatially-varying matrix field and efficiently optimize our model’s objective using a simple alternating scheme. Using this learned metric, we can nonlinearly interpolate between probability measures and compute geodesics on the manifold. We show that metrics learned using our method improve the quality of trajectory inference on scRNA and bird migration data while requiring little additional cross-sectional data.
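A minimal sketch of the kind of neural parametrization described above: an MLP maps each point to a lower-triangular factor, so the resulting metric tensor is symmetric positive definite by construction. The architecture and regularization constant are assumptions, not the paper's exact model.

```python
# Minimal sketch of a spatially-varying metric tensor field in PyTorch: an
# MLP outputs a lower-triangular factor L(x), and A(x) = L L^T + eps*I is
# symmetric positive definite by construction. Sizes are illustrative.
import torch
import torch.nn as nn

class MetricField(nn.Module):
    def __init__(self, dim, hidden=64, eps=1e-3):
        super().__init__()
        self.dim, self.eps = dim, eps
        self.tril_idx = torch.tril_indices(dim, dim)
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, dim * (dim + 1) // 2),
        )

    def forward(self, x):                          # x: (batch, dim)
        vals = self.net(x)
        L = x.new_zeros(x.shape[0], self.dim, self.dim)
        L[:, self.tril_idx[0], self.tril_idx[1]] = vals
        A = L @ L.transpose(1, 2)
        return A + self.eps * torch.eye(self.dim, device=x.device)

# Riemannian length of a displacement v at x: sqrt(v^T A(x) v)
metric = MetricField(dim=2)
x, v = torch.randn(8, 2), torch.randn(8, 2)
sq_len = torch.einsum("bi,bij,bj->b", v, metric(x), v)
```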
Download here
Authors: Christopher Scarvelis, Justin Solomon
Published in NeurIPS, 2024
Penalizing the nuclear norm of a function’s Jacobian encourages it to locally behave like a low-rank linear map. Such functions vary locally along only a handful of directions, making the Jacobian nuclear norm a natural regularizer for machine learning problems. However, this regularizer is intractable for high-dimensional problems, as it requires computing a large Jacobian matrix and taking its singular value decomposition. We show how to efficiently penalize the Jacobian nuclear norm using techniques tailor-made for deep learning. We prove that for functions parametrized as compositions f=g∘h, one may equivalently penalize the average of the squared Frobenius norms of J_g and J_h. We then propose a denoising-style approximation that avoids the Jacobian computations altogether. Our method is simple, efficient, and accurate, enabling Jacobian nuclear norm regularization to scale to high-dimensional deep learning problems. We complement our theory with an empirical study of our regularizer’s performance and investigate applications to denoising and representation learning.
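The denoising-style approximation admits a compact sketch: since E‖Jε‖² = ‖J‖_F² for standard Gaussian ε, finite differences of g and h estimate the two Frobenius terms without forming any Jacobian. The step size σ and the equal weighting below are illustrative assumptions.

```python
# Sketch of a denoising-style surrogate: for f = g∘h, penalize finite-
# difference estimates of the squared Frobenius norms of J_h and J_g,
# using E_eps ||J eps||^2 = ||J||_F^2 for standard Gaussian eps.
import torch

def jacobian_penalty(g, h, x, sigma=1e-2):
    z = h(x)
    eps_x = torch.randn_like(x)
    eps_z = torch.randn_like(z)
    # (f(x + sigma*eps) - f(x)) / sigma ~ J_f eps, so no Jacobian is formed
    jh = (h(x + sigma * eps_x) - z) / sigma
    jg = (g(z + sigma * eps_z) - g(z)) / sigma
    return 0.5 * (jh.pow(2).sum(-1).mean() + jg.pow(2).sum(-1).mean())
```

Two forward passes per term replace an explicit Jacobian and SVD, which is what lets the penalty scale to high-dimensional networks.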
Download here
Authors: Christopher Scarvelis, Haitz Sáez de Ocáriz Borde, Justin Solomon
Published in TMLR, 2025
Score-based generative models (SGMs) sample from a target distribution by iteratively transforming noise using the score function of the perturbed target. For any finite training set, this score function can be evaluated in closed form, but the resulting SGM memorizes its training data and does not generate novel samples. In practice, one approximates the score by training a neural network via score-matching. The error in this approximation promotes generalization, but neural SGMs are costly to train and sample, and the effective regularization this error provides is not well-understood theoretically. In this work, we instead explicitly smooth the closed-form score to obtain an SGM that generates novel samples without training. We analyze our model and propose an efficient nearest-neighbor-based estimator of its score function. Using this estimator, our method achieves competitive sampling times while running on consumer-grade CPUs.
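For intuition, the sketch below evaluates the closed-form score of a Gaussian-perturbed training set and truncates the softmax to the k nearest neighbors. The temperature parameter tau stands in for an explicit smoothing knob and may differ from the paper's smoothing operator.

```python
# Sketch of the closed-form score of a Gaussian-perturbed empirical measure,
# truncated to k nearest neighbors. In practice one would build the tree
# once per noise level rather than per query.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import softmax

def knn_score(x, data, alpha_t, sigma_t, k=16, tau=1.0):
    """Score of p_t(x) = (1/n) sum_i N(x; alpha_t y_i, sigma_t^2 I),
    with the softmax over all n points replaced by its k-NN truncation."""
    tree = cKDTree(alpha_t * data)                 # centers alpha_t * y_i
    _, idx = tree.query(x, k=k)
    centers = alpha_t * data[idx]                  # (k, d)
    logits = -np.sum((x - centers) ** 2, axis=-1) / (2 * sigma_t**2 * tau)
    w = softmax(logits)
    return (w @ centers - x) / sigma_t**2          # sum_i w_i (c_i - x)/sigma^2
```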
Download here
Authors: Christopher Scarvelis, David Benhaim, Paul Zhang
Published in ICML, 2025
Orientation estimation is a fundamental task in 3D shape analysis which consists of estimating a shape’s orientation axes: its side-, up-, and front-axes. Using this data, one can rotate a shape into canonical orientation, where its orientation axes are aligned with the coordinate axes. Developing an orientation algorithm that reliably estimates complete orientations of general shapes remains an open problem. We introduce a two-stage orientation pipeline that achieves state-of-the-art performance on up-axis estimation and further demonstrate its efficacy on full-orientation estimation, where one seeks all three orientation axes. Unlike previous work, we train and evaluate our method on all of ShapeNet rather than a subset of classes. We motivate our engineering contributions by theory describing fundamental obstacles to orientation estimation for rotationally-symmetric shapes, and show how our method avoids these obstacles.
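As an illustration of the final canonicalization step (not the paper's estimator), the sketch below snaps noisy predicted side/up/front axes to the nearest rotation via SVD and rotates the shape so that these axes align with the coordinate axes.

```python
# Sketch of canonicalization from estimated axes: project the (possibly
# noisy, non-orthogonal) predicted axes to the nearest proper rotation via
# SVD, then rotate the shape so the axes map to x/y/z.
import numpy as np

def canonicalize(points, axes):
    """points: (n, 3); axes: (3, 3) with rows ~ side, up, front axes."""
    U, _, Vt = np.linalg.svd(axes)
    R = U @ Vt                                 # nearest orthogonal matrix
    if np.linalg.det(R) < 0:                   # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return points @ R.T                        # axis i is sent to e_i
```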
Download here
Authors: Daniel Pfrommer, Zehao Dou, Christopher Scarvelis, Max Simchowitz, Ali Jadbabaie
Published in NeurIPS, 2025
We study the inductive biases of diffusion models with a conditioning variable, which have seen widespread application as both text-conditioned generative image models and observation-conditioned continuous control policies. We observe that when these models are queried conditionally, their generations consistently deviate from the idealized “denoising” process upon which diffusion models are formulated, inducing disagreement between popular sampling algorithms (e.g. DDPM, DDIM). We introduce Schedule Deviation, a rigorous measure which captures the rate of deviation from a standard denoising process, and provide a methodology to compute it. Crucially, we demonstrate that the deviation from an idealized denoising process occurs irrespective of the model capacity or amount of training data. We posit that this phenomenon occurs due to the difficulty of bridging distinct denoising flows across different parts of the conditioning space and show theoretically how such a phenomenon can arise through an inductive bias towards smoothness.
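A crude proxy for this disagreement (not the paper's Schedule Deviation) can be computed by running DDIM and ancestral DDPM from the same initial noise, conditioning, and denoiser, then measuring how far their outputs diverge. The noise-prediction model eps and the schedule alpha_bar below are user-supplied assumptions.

```python
# Proxy for sampler disagreement: DDIM vs. ancestral DDPM with a shared
# noise-prediction model eps(x, t, cond) and cumulative schedule alpha_bar.
import torch

def sampler_disagreement(eps, cond, shape, alpha_bar, seed=0):
    """alpha_bar: (T,) tensor of cumulative alpha products."""
    T = len(alpha_bar)
    g = torch.Generator().manual_seed(seed)
    x_ddim = torch.randn(shape, generator=g)
    x_ddpm = x_ddim.clone()
    for t in range(T - 1, 0, -1):
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1]
        beta_t = 1 - ab_t / ab_prev
        # DDIM (deterministic) update
        e = eps(x_ddim, t, cond)
        x0 = (x_ddim - (1 - ab_t).sqrt() * e) / ab_t.sqrt()
        x_ddim = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * e
        # DDPM (ancestral) update: posterior mean plus noise
        e = eps(x_ddpm, t, cond)
        x0 = (x_ddpm - (1 - ab_t).sqrt() * e) / ab_t.sqrt()
        mu = (ab_prev.sqrt() * beta_t / (1 - ab_t)) * x0 \
           + ((ab_t / ab_prev).sqrt() * (1 - ab_prev) / (1 - ab_t)) * x_ddpm
        var = beta_t * (1 - ab_prev) / (1 - ab_t)
        x_ddpm = mu + var.sqrt() * torch.randn(shape, generator=g)
    return (x_ddim - x_ddpm).norm() / x_ddim.norm()
```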
Download here
Authors: Christopher Scarvelis, Justin Solomon
Under review, 2025
Training a diffusion model approximates a map from a data distribution to the optimal score function for that distribution. Can we differentiate this map? If we could, then we could predict how the score, and ultimately the model’s samples, would change under small perturbations to the training set before committing to costly retraining. We give a closed-form procedure for computing this map’s directional derivatives, relying only on black-box access to a pre-trained score model and its derivatives with respect to its inputs. We extend this result to estimate the sensitivity of a diffusion model’s samples to additive perturbations of its target measure, with runtime comparable to sampling from a diffusion model and computing log-likelihoods along the sample path. Our method is robust to numerical and approximation error, and the resulting sensitivities correlate with changes in an image diffusion model’s samples after retraining and fine-tuning.
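To illustrate the object being differentiated, the sketch below treats the tractable empirical case: the closed-form score of a Gaussian-perturbed training set, viewed as a function of the data and differentiated in a perturbation direction with a JVP. The paper's procedure instead needs only black-box access to a pretrained score model; this example is purely illustrative.

```python
# Sketch: directional derivative of the data -> score map in the empirical
# case, where the score of a Gaussian-perturbed training set has a closed
# form and can be differentiated with torch.func.jvp.
import torch
from torch.func import jvp

def closed_form_score(data, x, sigma):
    # score of p(x) = (1/n) sum_i N(x; y_i, sigma^2 I)
    d2 = ((x[None] - data) ** 2).sum(-1)
    w = torch.softmax(-d2 / (2 * sigma**2), dim=0)
    return (w[:, None] * (data - x[None])).sum(0) / sigma**2

data = torch.randn(100, 2)    # training set
x = torch.randn(2)            # query point
v = torch.randn(100, 2)       # perturbation direction of the training set
score, dscore = jvp(lambda d: closed_form_score(d, x, 0.5), (data,), (v,))
```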
Download here