Jan 26 · 7 min read
Reinforcement learning vs “regular” training: the real difference is not the math, it is the loop
Most ML people grow up on a simple mental model: you have a dataset, you define a loss, you run gradient descent, you ship a checkpoint. That covers supervised learning and a lot of self-supervised pretraining. The model learns from a fixed distribution of examples, and the training pipeline is basically a linear flow from data to gradients. Reinforcement learning (RL) breaks that mental model because the model is not only learning from data, it is also actively creating the data it learns from: the model acts, the environment (or a reward function) responds, and those interactions become the next batch of training signal. As the policy changes, so does the distribution it is trained on.
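To make the contrast concrete, here is a minimal, self-contained sketch in numpy. Everything in it is made up for illustration (the regression data, the bandit payout probabilities, the learning rates and step counts): a supervised loop that only consumes a fixed dataset, next to a tiny REINFORCE-style bandit loop where the policy itself decides which experience gets generated.

import numpy as np

rng = np.random.default_rng(0)

# ---- "Regular" training: fixed dataset, a loss, gradient descent ----
# Toy linear regression; the data never changes during training.
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=256)

w = np.zeros(3)
for step in range(200):
    preds = X @ w
    grad = 2 * X.T @ (preds - y) / len(y)   # gradient of mean squared error
    w -= 0.05 * grad                        # the loop never touches how the data was made

# ---- RL training: the model generates the experience it learns from ----
# Toy 2-armed bandit; arm 1 pays off more often, but the policy has to
# discover that by acting, because there is no labeled dataset.
true_payout = np.array([0.3, 0.7])          # hidden reward probabilities
logits = np.zeros(2)                        # policy parameters

for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(2, p=probs)         # the policy chooses the next data point
    reward = float(rng.random() < true_payout[action])

    # REINFORCE-style update: push up the log-prob of actions that paid off.
    grad_logp = -probs
    grad_logp[action] += 1.0
    logits += 0.1 * reward * grad_logp

print("learned regression weights:", np.round(w, 2))
print("learned action probabilities:", np.round(np.exp(logits) / np.exp(logits).sum(), 2))

The structural difference sits in the first line inside each loop: the supervised loop indexes into data that already exists, while the RL loop calls the policy to produce the next experience, so the training distribution moves as the parameters move.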












