
FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale

Ajay Patel, Colin Raffel, Chris Callison-Burch
Published: January 29, 2026
Authors: 3
Word Count: 11,032
Code: Includes code

Enables LLM pre-training directly on synthetic instruction-answer data generated at pre-training scale.

Abstract

Due to limited supervised training data, large language models (LLMs) are typically pre-trained via a self-supervised "predict the next word" objective on a vast amount of unstructured text data. To make the resulting model useful to users, it is further trained on a far smaller amount of "instruction-tuning" data comprising supervised training examples of instructions and responses. To overcome the limited amount of supervised data, we propose a procedure that can transform the knowledge in internet-scale pre-training documents into billions of synthetic instruction and answer training pairs. The resulting dataset, called FineInstructions, uses ~18M instruction templates created from real user-written queries and prompts. These instruction templates are matched to and instantiated with human-written source documents from unstructured pre-training corpora. With "supervised" synthetic training data generated at this scale, an LLM can be pre-trained from scratch solely with the instruction-tuning objective, which is far more in-distribution with the expected downstream usage of LLMs (responding to user prompts). We conduct controlled token-for-token training experiments and find pre-training on FineInstructions outperforms standard pre-training and other proposed synthetic pre-training techniques on standard benchmarks measuring free-form response quality. Our resources can be found at https://huggingface.co/fineinstructions.
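The core idea in the abstract — matching an instruction template to a source document and instantiating it to form an instruction-answer pair — can be illustrated with a toy sketch. Everything below is a hypothetical simplification for intuition only: the template fields, keyword-overlap matching, and the trivial "document as response" pairing are our illustrative assumptions, not the paper's actual pipeline (which uses ~18M templates derived from real user queries).

```python
# Toy sketch of the template-matching-and-instantiation idea (illustrative
# assumptions throughout; not the paper's actual method).

def match_template(templates, document):
    """Pick the template whose keywords best overlap the document's words."""
    doc_words = set(document.lower().split())
    return max(templates, key=lambda t: len(set(t["keywords"]) & doc_words))

def instantiate(template, document, topic):
    """Fill the template's placeholder and pair it with a response drawn
    from the document (here, trivially, the document text itself)."""
    instruction = template["text"].format(topic=topic)
    return {"instruction": instruction, "response": document}

# Hypothetical templates with a {topic} slot and matching keywords.
templates = [
    {"text": "Summarize the key findings about {topic}.",
     "keywords": ["findings", "results"]},
    {"text": "Explain how {topic} works step by step.",
     "keywords": ["procedure", "method", "works"]},
]

doc = "The procedure works by matching templates to documents."
best = match_template(templates, doc)
pair = instantiate(best, doc, topic="synthetic instruction generation")
print(pair["instruction"])
# → Explain how synthetic instruction generation works step by step.
```

At scale, this matching step would use far richer signals than keyword overlap, and the response would be generated from the document rather than copied; the sketch only conveys the shape of the data transformation.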

Key Takeaways

  1. Transforms unstructured text into high-quality instruction-answer pairs.

  2. Outperforms standard pre-training on multiple benchmarks.

  3. Improves generalization across knowledge-focused and open-ended tasks.

Limitations

  • Relies on initial query quality and matching effectiveness.

  • Risk of systematic bias due to large-scale generation.

Keywords

large language models, self-supervised learning, predict the next word, instruction-tuning, synthetic training data, instruction templates, unstructured text data, pre-training, downstream usage, response quality
