I build and evaluate agentic, tool-augmented LLM systems, aiming to make them reliable in open-ended settings (coding, analysis, and decision support) by combining symbolic structure, verification, and learning-based tool-use policies.

Vision

My goal is to make LLM agents trustworthy in the loop: they should externalize their reasoning into tools (code, tests, search, structured checks), verify intermediate results, and recover when things go wrong.
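
To make that loop concrete, here is a minimal Python sketch of the verify-then-recover pattern. Everything in it is illustrative: propose, run_check, and the retry budget are hypothetical stand-ins, not a real agent API.

from typing import Callable, Optional

def solve_with_verification(
    task: str,
    propose: Callable[[str, Optional[str]], str],   # LLM proposes a candidate artifact (e.g., code)
    run_check: Callable[[str], Optional[str]],      # external check; None = pass, otherwise error text
    max_attempts: int = 3,
) -> Optional[str]:
    """Externalize reasoning into a tool, verify the result, retry on failure."""
    feedback = None
    for _ in range(max_attempts):
        candidate = propose(task, feedback)   # reasoning is externalized as a checkable artifact
        feedback = run_check(candidate)       # e.g., run tests, a linter, or a structured validator
        if feedback is None:                  # verification passed
            return candidate
    return None                               # retry budget exhausted; fail loudly rather than guess

The point of the pattern is that failure becomes observable: each check either passes or returns feedback that the next attempt can condition on.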

Core themes

1) Verification-aware prompting and pipelines

2) RL for tool-use policies (a toy sketch follows this list)

3) Neuro-symbolic representations for dialog + code
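
As a toy illustration of theme 2, the sketch below learns a tool-selection policy from scalar reward with an epsilon-greedy bandit. The tool names and the reward signal are hypothetical placeholders; a practical policy would condition on task context rather than keeping a single context-free estimate per tool.

import random
from collections import defaultdict

# Hypothetical tool set; real systems would define these per deployment.
TOOLS = ["python_repl", "web_search", "unit_tests"]

class ToolPolicy:
    """Epsilon-greedy bandit over a fixed tool set (context-free, for illustration only)."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # running mean reward per tool
        self.count = defaultdict(int)

    def choose(self) -> str:
        if random.random() < self.epsilon:              # explore
            return random.choice(TOOLS)
        return max(TOOLS, key=lambda t: self.value[t])  # exploit the best estimate so far

    def update(self, tool: str, reward: float) -> None:
        self.count[tool] += 1
        # Incremental mean: V <- V + (r - V) / n
        self.value[tool] += (reward - self.value[tool]) / self.count[tool]

# Example usage: reward could be 1.0 when downstream verification passes, 0.0 otherwise.
policy = ToolPolicy()
tool = policy.choose()
policy.update(tool, reward=1.0)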

Selected work & experience

Graduate Research Assistant — Stony Brook University (01/2023–05/2027)

Applied Scientist Intern — Amazon AWS (09/2025–12/2025)

ML Research Intern — Nokia Bell Labs (06/2025–08/2025)

Publications

2025

2024

Contact

Email: villurignanesh@gmail.com  |  Google Scholar  |  LinkedIn
