Guide
What this page is
This is a lightweight guide to what I’m working on, how my projects fit together, and the best way to engage with my work.
Research snapshot
My research focuses on improving variational inference (VI) methods—especially the practical reliability of approximate Bayesian inference in modern probabilistic models. I’m interested in how default choices (parameterizations, transformations, optimizers, and initialization) shape convergence behavior and posterior approximation quality.
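As a concrete illustration of where one such default choice enters, here is a toy sketch (illustrative only, not from any of my projects) of reparameterized-gradient mean-field VI on a one-dimensional Gaussian target, with the scale parameterized as sigma = exp(phi), a common default:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized target: log-density of N(3, 1).
def log_p(z):
    return -0.5 * (z - 3.0) ** 2

# Mean-field Gaussian q(z) = N(mu, sigma^2), with the "default"
# log parameterization sigma = exp(phi).
mu, phi = 0.0, 0.0
lr, n_samples = 0.05, 64

for step in range(500):
    sigma = np.exp(phi)
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps                         # reparameterization trick
    dlogp = -(z - 3.0)                           # d log_p / dz
    # Monte Carlo gradients of the ELBO = E_q[log p(z)] + H[q]:
    g_mu = dlogp.mean()                          # dz/dmu = 1
    g_phi = (dlogp * eps * sigma).mean() + 1.0   # dz/dphi = eps*sigma; dH/dphi = 1
    mu += lr * g_mu
    phi += lr * g_phi

# The iterates should approach the exact posterior: mu = 3, sigma = 1.
print(mu, np.exp(phi))
```

Swapping exp for, say, a softplus transform changes the gradient geometry of phi but not the optimum, which is exactly the kind of default-choice effect on convergence behavior described above.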
Current themes
- Robust variational inference: making VI more reliable across model classes and parameterizations.
- Default settings & diagnostics: understanding when “reasonable defaults” fail and how to detect it early.
- Posterior approximation quality: clearer metrics for comparing approximations beyond a single scalar objective.
- Probabilistic programming workflows: reproducible benchmarking across frameworks and implementations.
Start here
If you’re new to my work, the best entry points are:
- CV: Andersen_CV.pdf
- Google Scholar: profile
- Code: GitHub
How I work
I care a lot about:
- reproducibility (seed control, clean configs, tracked environment details),
- comparability (consistent evaluation across methods),
- clarity (simple baselines and transparent assumptions).
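A minimal sketch of what this bookkeeping looks like in practice (the function name and record fields are illustrative, not from an actual codebase):

```python
import json
import platform
import random
import sys

def start_run(config, seed=0):
    """Seed the RNG and return a record describing the run."""
    random.seed(seed)  # also seed numpy/torch here if they are used
    return {
        "config": config,          # the exact settings for this run
        "seed": seed,              # so the run can be replayed
        "environment": {           # tracked environment details
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
    }

record = start_run({"method": "mean-field-vi", "lr": 0.05}, seed=42)
print(json.dumps(record, indent=2))
```

Writing this record to disk alongside the results is what makes later method comparisons trustworthy: two runs are comparable only when their configs and environments are both known.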
Collaboration
I’m excited to collaborate on projects related to variational inference, probabilistic modeling, and robust ML. If you’d like to reach out, email is best.
