AI Safety & Alignment

FinVault: Benchmarking Financial Agent Safety in Execution-Grounded Environments

Zhi Yang, Runguo Li, Qiqi Qiang, Jiashun Wang, Fangqi Lou, Mengping Li, Dongpo Cheng, Rui Xu, Heng Lian, Shuo Zhang, Xiaolong Liang, Xiaoming Huang, Zheng Wei, Zhaowei Liu, Xin Guo, Huacan Wang, Ronghao Chen, Liwen Zhang
Published
January 9, 2026
Authors
18
Word Count
10,682

FinVault benchmarks the safety of financial agents in execution-grounded, real-world scenarios.

Abstract

Financial agents powered by large language models (LLMs) are increasingly deployed for investment analysis, risk assessment, and automated decision-making, where their ability to plan, invoke tools, and manipulate mutable state introduces new security risks in high-stakes, highly regulated financial environments. However, existing safety evaluations largely focus on language-model-level content compliance or abstract agent settings, failing to capture execution-grounded risks arising from real operational workflows and state-changing actions. To bridge this gap, we propose FinVault, the first execution-grounded security benchmark for financial agents, comprising 31 regulatory case-driven sandbox scenarios with state-writable databases and explicit compliance constraints, together with 107 real-world vulnerabilities and 963 test cases that systematically cover prompt injection, jailbreaking, and financially adapted attacks, as well as benign inputs for false-positive evaluation. Experimental results reveal that existing defense mechanisms remain ineffective in realistic financial agent settings: average attack success rates (ASR) still reach up to 50.0% on state-of-the-art models and remain non-negligible even for the most robust systems (ASR 6.7%), highlighting the limited transferability of current safety designs and the need for stronger finance-specific defenses. Our code can be found at https://github.com/aifinlab/FinVault.
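The ASR figures quoted above are simply the fraction of adversarial test cases in which the attack achieves its goal. A minimal sketch of that computation follows; the `attack_succeeded` field and list-of-dicts result format are illustrative assumptions, not FinVault's actual data schema.

```python
def attack_success_rate(results):
    """Fraction of adversarial test cases where the attack succeeded.

    results: list of dicts, each with a boolean 'attack_succeeded' flag
    (a hypothetical per-case record, assumed for illustration).
    """
    if not results:
        return 0.0
    successes = sum(1 for r in results if r["attack_succeeded"])
    return successes / len(results)


# Example: 3 of 6 adversarial cases succeed -> ASR of 50.0%,
# matching the worst-case average reported in the abstract.
cases = [{"attack_succeeded": s} for s in (True, False, True, False, True, False)]
print(f"ASR = {attack_success_rate(cases):.1%}")  # ASR = 50.0%
```

Benign inputs are scored separately (as false positives when the agent wrongly refuses or blocks them), so they would not enter this denominator.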

Key Takeaways

  • 1

FinVault evaluates financial agents' robustness under adversarial conditions.

  • 2

    Existing defenses show insufficient protection for financial agents.

  • 3

    Semantic attacks are more effective than technical attacks.

Limitations

  • Sandboxed environments differ from production systems.

  • Attack success rate metric lacks severity differentiation.

Keywords

large language models, financial agents, security benchmark, state-writable databases, compliance constraints, prompt injection, jailbreaking, financially adapted attacks, attack success rates
