William Jurayj
I am a PhD student at Johns Hopkins University, advised by Benjamin Van Durme. My research is centered on methods to help imperfect AI systems earn human trust. Recently, I’ve focused on making language models more effective at reasoning about and conveying their uncertainty. I’m also interested in the interpretability offered by symbolic reasoning systems and in techniques for adaptation to low-resource, high-stakes domains.
Some questions that are currently on my mind:
- How do reasoning models’ uncertainties respond to increased inference compute? [Test-Time Scaling Confidence]
- How can programmatic solvers augment language models’ existing reasoning capabilities? [Legal Logic Programs] [Programming MDP Components]
- What learning algorithms will thrive in a data-scarce, compute-abundant regime?
Before I came to Hopkins, I worked at Abnormal Security as a machine learning engineer training models to detect compromised accounts. I completed my Bachelor’s and Master’s degrees at Brown, where I was fortunate to be advised by Carsten Eickhoff, Ellie Pavlick, and George Konidaris.