Abstract
Large Language Models (LLMs) trained with Reinforcement Learning (RL) on informal languages (e.g., natural language) face a significant challenge: their verification processes, which provide crucial training signals, are neither reliable nor scalable. Indeed, even prevalent large proprietary models struggle to generate verifiable programs.
A promising yet largely uncharted alternative is formal language-based reasoning. Grounding LLMs in rigorous formal systems where generative models operate in formal language spaces (e.g., Dafny) enables the automatic and mathematically provable verification of their reasoning processes and outcomes. This capability is pivotal for achieving large-scale, reliable formal software verification.
A common practice is to employ human-annotated chain-of-thought and other human priors to induce the reasoning and coding capabilities of LLMs. Unfortunately, providing such priors becomes prohibitively expensive when supervising complex programming tasks.
In this work, we systematically explore ways to reduce human priors, using the formal language Dafny as the main environment for our pilot study. Our approach rests on an automatic and scalable data curation pipeline, together with careful RL designs integrated with feedback from the formal language verifier.
Methodology
Automatic Data Curation
Scalable pipeline for generating and curating formal verification tasks without manual annotation.
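The curation idea can be sketched as a generate-then-filter loop in which the Dafny verifier, rather than a human annotator, decides which (specification, program) pairs enter the training set. The sketch below is a minimal illustration, not the paper's actual pipeline; `curate` and the `verifies` callback are hypothetical stand-ins for the real components.

```python
from typing import Callable, Iterable, List, Tuple

# A task is a (specification, candidate program) pair, both as Dafny source text.
Task = Tuple[str, str]

def curate(candidates: Iterable[Task],
           verifies: Callable[[Task], bool]) -> List[Task]:
    """Keep only candidates the formal verifier accepts; no human labels needed."""
    return [task for task in candidates if verifies(task)]

# Usage with a stub verifier (a real one would invoke Dafny on the sources).
stub = lambda task: "ensures" in task[1]  # toy check: program states a postcondition
pool = [("spec A", "method M() ensures true {}"),
        ("spec B", "method N() {}")]
print(curate(pool, stub))
```

Because acceptance is decided mechanically, the same loop scales to as many generated candidates as compute allows.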
RL Integration
Carefully designed reinforcement learning framework integrated with formal language verifier feedback.
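A core ingredient of such RL designs is a reward derived from verifier feedback rather than from human preference labels. Below is a minimal sketch of one plausible shaping scheme; the specific values and tiers are illustrative assumptions, not the paper's actual reward design.

```python
def verifier_reward(parses: bool, verifies: bool) -> float:
    """Map Dafny verifier feedback onto a scalar RL reward.

    A fully verified program earns full reward; a program that merely
    parses earns a small shaping bonus; anything else earns nothing.
    """
    if verifies:
        return 1.0
    if parses:
        return 0.1
    return 0.0

# The policy update then weights sampled programs by this reward,
# so the verifier, not a human, supplies the training signal.
print(verifier_reward(parses=True, verifies=False))
```

Tiered shaping of this kind gives the policy a gradient toward well-formed programs even before any sample fully verifies.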
Dafny Environment
Formal verification language serving as the testbed for reducing human supervision in program verification.
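To make the testbed concrete, here is a tiny Dafny program whose postconditions the verifier can check automatically, together with a sketch of how one might invoke the `dafny verify` CLI. The helper is illustrative and assumes a local Dafny installation on `PATH`; it is not the paper's harness.

```python
import subprocess
import tempfile

# A small Dafny method with machine-checkable postconditions: the verifier
# proves that the result is an upper bound of both inputs and equals one of them.
DAFNY_MAX = """\
method Max(a: int, b: int) returns (m: int)
  ensures m >= a && m >= b
  ensures m == a || m == b
{
  if a >= b { m := a; } else { m := b; }
}
"""

def dafny_verifies(source: str) -> bool:
    """Write the source to a .dfy file and run `dafny verify` on it."""
    with tempfile.NamedTemporaryFile("w", suffix=".dfy", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(["dafny", "verify", path],
                            capture_output=True, text=True)
    return result.returncode == 0
```

Because verification is a single subprocess call with a boolean outcome, it slots directly into the curation filter and the RL reward above.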
Key Findings
Formal Language Grounding
Grounding LLMs in formal systems enables automatic and mathematically provable verification of reasoning processes.
Reduced Human Supervision
Our approach significantly reduces the need for human-annotated priors while maintaining verification quality.
Scalable Verification
The automatic data curation pipeline enables large-scale formal software verification with minimal human intervention.
Results & Impact
🚀 Preliminary Results
Our preliminary study demonstrates promising improvements in verification scalability with reduced manual effort. Integrating RL with formal language verification shows potential for substantially advancing automated program verification.
🔮 Future Directions
Future work will focus on expanding to other verification tools beyond Dafny, optimizing RL strategies within LLMs, and developing more sophisticated formal reasoning capabilities.
Citation
@article{reform2025,
  title={Re:Form — Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny},
  author={[Your Name] and [Co-authors]},
  journal={[Conference/Journal]},
  year={2025},
  url={https://github.com/Veri-Code/ReForm}
}