SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks

Xiangyi Li, Wenbo Chen, Yimin Liu, Shenghan Zheng, Xiaokun Chen, Yifeng He, Yubo Li, Bingran You, Haotian Shen, Jiankai Sun, Shuyi Wang, Qunhong Zeng, Di Wang, Xuandong Zhao, Yuanli Wang, Roey Ben Chaim, Zonglin Di, Yipeng Gao, Junwei He, Yizhuo He, Liqiang Jing, Luyang Kong, Xin Lan, Jiachen Li, Songlin Li, Yijiang Li, Yueqian Lin, Xinyi Liu, Xuanqing Liu, Haoran Lyu, Ze Ma, Bowei Wang, Runhui Wang, Tianyu Wang, Wengao Ye, Yue Zhang, Hanwen Xing, Yiqi Xue, Steven Dillmann, Han-chung Lee
Published
February 13, 2026
Authors
40
Word Count
14,495
Code
Includes code

SkillsBench systematically measures how well AI agent skills improve performance across diverse real-world tasks.

Abstract

Agent Skills are structured packages of procedural knowledge that augment LLM agents at inference time. Despite rapid adoption, there is no standard way to measure whether they actually help. We present SkillsBench, a benchmark of 86 tasks across 11 domains paired with curated Skills and deterministic verifiers. Each task is evaluated under three conditions: no Skills, curated Skills, and self-generated Skills. We test 7 agent-model configurations over 7,308 trajectories. Curated Skills raise the average pass rate by 16.2 percentage points (pp), but effects vary widely by domain (from +4.5pp for Software Engineering to +51.9pp for Healthcare), and 16 of 84 tasks show negative deltas. Self-generated Skills provide no benefit on average, showing that models cannot reliably author the procedural knowledge they benefit from consuming. Focused Skills with 2--3 modules outperform comprehensive documentation, and smaller models with Skills can match larger models without them.
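The headline numbers above are pass-rate deltas in percentage points between the no-Skills and curated-Skills conditions, aggregated per domain. A minimal sketch of that computation, with invented field names and toy data (not SkillsBench's actual schema or results):

```python
# Hypothetical sketch: per-domain pass-rate deltas (percentage points)
# between a "baseline" (no Skills) and a "curated" Skills condition.
# Condition labels and trial data are illustrative assumptions.
from collections import defaultdict

def pass_rate_deltas(trials):
    """trials: iterable of (domain, condition, passed) tuples."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for domain, condition, ok in trials:
        total[(domain, condition)] += 1
        passed[(domain, condition)] += bool(ok)
    deltas = {}
    for domain in {d for d, _ in total}:
        rates = {}
        for cond in ("baseline", "curated"):
            n = total[(domain, cond)]
            rates[cond] = 100.0 * passed[(domain, cond)] / n if n else 0.0
        deltas[domain] = rates["curated"] - rates["baseline"]
    return deltas

trials = [
    ("Healthcare", "baseline", False), ("Healthcare", "baseline", False),
    ("Healthcare", "curated", True),  ("Healthcare", "curated", False),
]
print(pass_rate_deltas(trials))  # {'Healthcare': 50.0}
```

A negative value for a domain would correspond to the "negative deltas" the abstract reports for 16 tasks.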

Key Takeaways

  • 1

    SkillsBench is the first benchmark systematically measuring whether AI agent skills actually improve task performance across domains.

  • 2

    The benchmark evaluates tasks under three conditions: no skills, expert-curated skills, and self-generated skills for rigorous impact isolation.

  • 3

    Rigorous quality control including automated validation, AI detection screening, and deterministic verifiers ensures reliable skill effectiveness measurement.
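A deterministic verifier, as described in the abstract and the third takeaway, yields the same pass/fail verdict every time for the same agent output. One way such a check could look (the hash-comparison task format here is an illustrative assumption, not SkillsBench's actual verifier interface):

```python
# Minimal sketch of a deterministic verifier: pass iff the agent's output
# artifact matches an exact expected digest. Same input always gives the
# same verdict, so repeated trials are directly comparable.
import hashlib

def verify(output_contents: str, expected_sha256: str) -> bool:
    digest = hashlib.sha256(output_contents.encode("utf-8")).hexdigest()
    return digest == expected_sha256

expected = hashlib.sha256(b"42\n").hexdigest()
assert verify("42\n", expected)      # correct output passes
assert not verify("41\n", expected)  # any deviation fails
```

Determinism here is what lets pass-rate differences be attributed to the Skills condition rather than to verifier noise.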

Limitations

  • The benchmark covers 84 tasks across 11 domains, which may not comprehensively represent real-world AI agent use cases.

Keywords

agent skills, LLM agents, SkillsBench, procedural knowledge, curated Skills, self-generated Skills, agent-model configurations, pass rate, domain-specific effects
