Can LLMs Clean Up Your Mess? A Survey of Application-Ready Data Preparation with LLMs

Wei Zhou, Jun Zhou, Haoyu Wang, Zhenghao Li, Qikang He, Shaokun Han, Guoliang Li, Xuanhe Zhou, Yeye He, Chunwei Liu, Zirui Tang, Bin Wang, Shen Tang, Kai Zuo, Yuyu Luo, Zhenzhe Zheng, Conghui He, Jingren Zhou, Fan Wu
Published: January 22, 2026
Authors: 19
Word count: 19,190

LLMs revolutionize automated, efficient data preparation.

Abstract

Data preparation aims to denoise raw datasets, uncover cross-dataset relationships, and extract valuable insights from them, and is essential for a wide range of data-centric applications. Driven by (i) rising demands for application-ready data (e.g., for analytics, visualization, decision-making), (ii) increasingly powerful LLM techniques, and (iii) the emergence of infrastructures that facilitate flexible agent construction (e.g., using Databricks Unity Catalog), LLM-enhanced methods are rapidly becoming a transformative and potentially dominant paradigm for data preparation. Drawing on an investigation of hundreds of recent works, this paper presents a systematic review of this evolving landscape, focusing on the use of LLM techniques to prepare data for diverse downstream tasks. First, we characterize the fundamental paradigm shift from rule-based, model-specific pipelines to prompt-driven, context-aware, and agentic preparation workflows. Next, we introduce a task-centric taxonomy that organizes the field into three major tasks: data cleaning (e.g., standardization, error processing, imputation), data integration (e.g., entity matching, schema matching), and data enrichment (e.g., data annotation, profiling). For each task, we survey representative techniques and highlight their respective strengths (e.g., improved generalization, semantic understanding) and limitations (e.g., the prohibitive cost of scaling LLMs, persistent hallucinations even in advanced agents, the mismatch between advanced methods and weak evaluation). Moreover, we analyze commonly used datasets and evaluation metrics to ground the empirical side of the survey. Finally, we discuss open research challenges and outline a forward-looking roadmap that emphasizes scalable LLM-data systems, principled designs for reliable agentic workflows, and robust evaluation protocols.
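To make the prompt-driven workflow concrete, here is a minimal, hypothetical sketch of LLM-based missing-value imputation, one of the data-cleaning tasks the survey covers. The `build_imputation_prompt` helper and the `call_llm` stub are illustrative assumptions, not from the paper; a real system would route the prompt to an actual model endpoint.

```python
# Hypothetical sketch of prompt-driven imputation: serialize a record with a
# missing attribute into natural language and ask a model to fill the gap.

def build_imputation_prompt(row, target_column):
    """Turn a record (dict) with a missing value into an imputation prompt."""
    # Keep only the attributes that are actually present as context.
    context = ", ".join(f"{k} = {v}" for k, v in row.items() if v is not None)
    return (
        f"Given the record ({context}), infer the most likely value for the "
        f"missing attribute '{target_column}'. Answer with the value only."
    )

def call_llm(prompt):
    # Stand-in for a real chat-completion API call; stubbed so the
    # example runs offline.
    return "<model answer>"

row = {"city": "Seattle", "state": None, "country": "USA"}
prompt = build_imputation_prompt(row, "state")
imputed = call_llm(prompt)
```

The key design point the survey highlights is that the "logic" of the cleaner lives in the prompt and the model's world knowledge rather than in hand-written rules, which is what gives these methods their generalization across datasets.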

Key Takeaways

  1. LLMs automate and enhance data preparation tasks.

  2. Significant improvements in data cleaning and integration.

  3. Potential for minimal manual effort in data preparation.
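The data-integration gains noted above rest on framing tasks like entity matching as a question the model can answer directly. A minimal, hypothetical sketch (the prompt format and the `call_llm` stub are assumptions for illustration, not the survey's method):

```python
# Hypothetical sketch of LLM-based entity matching: ask the model whether
# two records describe the same real-world entity.

def format_record(record):
    """Serialize a record (dict) as 'key: value' pairs."""
    return ", ".join(f"{k}: {v}" for k, v in record.items())

def build_matching_prompt(record_a, record_b):
    return (
        "Do the following two records refer to the same real-world entity? "
        f"Record A: {format_record(record_a)}. "
        f"Record B: {format_record(record_b)}. "
        "Answer 'yes' or 'no'."
    )

def call_llm(prompt):
    # Placeholder for a real model call, stubbed so the example runs offline.
    return "yes"

a = {"name": "Apple Inc.", "hq": "Cupertino"}
b = {"name": "Apple", "hq": "Cupertino, CA"}
is_match = call_llm(build_matching_prompt(a, b)).strip().lower() == "yes"
```

Unlike similarity-threshold matchers, the model can exploit semantics ("Apple Inc." vs. "Apple") without feature engineering, at the token and latency costs the Limitations section flags.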

Limitations

  • LLM-based methods can be computationally expensive.

  • Outputs are sensitive to input phrasing, and token costs can be significant at scale.

Keywords

data preparation, large language models, prompt-driven workflows, agentic workflows, data cleaning, data integration, data enrichment, entity matching, schema matching, data annotation, data profiling
