
FLAC: Maximum Entropy RL via Kinetic Energy Regularized Bridge Matching

Lei Lv, Yunfei Li, Yu Luo, Fuchun Sun, Xiao Ma
Published: February 13, 2026

Abstract

Iterative generative policies, such as diffusion models and flow matching, offer superior expressivity for continuous control but complicate Maximum Entropy Reinforcement Learning because their action log-densities are not directly accessible. To address this, we propose Field Least-Energy Actor-Critic (FLAC), a likelihood-free framework that regulates policy stochasticity by penalizing the kinetic energy of the velocity field. Our key insight is to formulate policy optimization as a Generalized Schrödinger Bridge (GSB) problem relative to a high-entropy reference process (e.g., uniform). Under this view, the maximum-entropy principle emerges naturally as staying close to a high-entropy reference while optimizing return, without requiring explicit action densities. In this framework, kinetic energy serves as a physically grounded proxy for divergence from the reference: minimizing path-space energy bounds the deviation of the induced terminal action distribution. Building on this view, we derive an energy-regularized policy iteration scheme and a practical off-policy algorithm that automatically tunes the kinetic energy via a Lagrangian dual mechanism. Empirically, FLAC achieves superior or comparable performance on high-dimensional benchmarks relative to strong baselines, while avoiding explicit density estimation.

Keywords

diffusion models, flow matching, Maximum Entropy Reinforcement Learning, velocity field, Generalized Schrödinger Bridge, kinetic energy, policy optimization, path-space energy, Lagrangian dual mechanism, off-policy algorithm
