A new study by Shanghai Jiao Tong University and the SII Generative AI Research Lab (GAIR) shows that training large language models (LLMs) for complex, autonomous tasks does not require massive datasets. Their framework, LIMI (Less Is More for Intelligent Agency), builds on similar work in other areas of LLM research and finds that “machine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.” In other words, it's data quality, not quantity, that matters.

In experiments, the researchers found that with a small but carefully curated dataset of just 78 examples, they could train LLMs to outperform models trained on thousands of examples by a considerable margin on key industry benchmarks. This discovery could have important implications for enterprise applications where data is scarce or expensive to collect.

The challenge of building agents that work

The researchers define agency as “the emergent capacity of AI systems to function as autonomous agents–actively discovering problems, formulating hypotheses, and executing solutions through self-directed engagement with environments and tools.” In other words, these are AI systems that “don’t just think, but work.”

The problem is that current training frameworks assume higher agentic intelligence requires more data, in line with the classic scaling laws of language modeling. The researchers argue that this approach leads to increasingly complex training pipelines and substantial resource requirements. Moreover, in many domains, data is scarce, hard to obtain, and very expensive to curate.

However, research in other areas suggests that more data is not always needed to achieve training objectives. For example, LIMA, a 2023 paper, showed a model could be effectively aligned with just 1,000 curated examples. More recently, LIMO demonstrated that complex mathematical reasoning could emerge from only 817 training samples. With LIMI, the researchers sought to apply the same “less is more” principle to the complex world of AI agents.

How LIMI works

The LIMI framework demonstrates that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Key to the framework is a pipeline for collecting high-quality demonstrations of agentic tasks.

Each demonstration consists of two parts: a query and a trajectory. A query is a natural language request from a user, such as a software development requirement or a scientific research goal. The trajectory is the series of steps the AI takes to address the query, including its internal reasoning, its calls to external tools like a code interpreter, and the observations it receives from the environment. For example, a query might be "build a simple chat application," and the trajectory would include the agent’s internal reasoning and action plan, the code it writes and executes, and the resulting output or errors. The trajectory can include multiple iterations of planning, execution, and reflection until the agent achieves the desired objective.
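The paper describes this demonstration format in prose rather than code. As a rough illustration only, a single record might be structured along the following lines; the field names and example steps are assumptions made for this sketch, not a schema from the LIMI release:

```python
# Illustrative sketch of one agentic demonstration: a user query plus the
# trajectory of reasoning steps, tool calls, and environment observations.
# Field names and the example content are hypothetical, not from the paper.
from dataclasses import dataclass, field
from typing import Literal


@dataclass
class Step:
    kind: Literal["reasoning", "tool_call", "observation"]
    content: str  # model thoughts, a command it issued, or tool output


@dataclass
class Demonstration:
    query: str  # e.g. a software requirement or research goal
    trajectory: list[Step] = field(default_factory=list)


demo = Demonstration(
    query="Build a simple chat application",
    trajectory=[
        Step("reasoning", "Plan: scaffold a minimal web app with a /chat endpoint."),
        Step("tool_call", "python -m venv .venv && .venv/bin/pip install flask"),
        Step("observation", "Successfully installed flask"),
        Step("reasoning", "Environment ready; next, write and test the chat route."),
    ],
)
```

A training set in this style would hold one such record per task, with long trajectories capturing every round of planning, execution, and reflection.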
To build their dataset, the researchers started with 60 queries from real-world scenarios faced by professional developers and researchers. They then expanded this pool by using GPT-5 to synthesize additional queries from GitHub Pull Requests. A team of four computer science PhD students vetted the quality of the synthesized candidates and selected 18 of them, bringing the curated pool to a high-quality set of 78 queries focused on software development and research workflows.

To generate the trajectories, the same PhD students collaborated with a CLI coding agent powered by GPT-5 to complete the 78 tasks. They followed an iterative process, collecting the entire interaction sequence until each task was successfully completed, capturing the full arc of realistic human-AI collaboration, including back-and-forth communication and iterative refinement. For the more complex queries, the collected trajectories could extend to more than 152,000 tokens.

“This approach guarantees that our models learn not only from successful outcomes but also from the complete problem-solving process, including how to adapt strategies and recover from failures during collaborative execution,” the researchers write.

LIMI in action

To test their framework, the team evaluated models on AgencyBench, a benchmark designed to measure agentic skills, as well as other established benchmarks for tool use and coding. They fine-tuned GLM-4.5, a powerful open-source model, on their 78-sample dataset and compared its performance against several frontier models, including the base GLM-4.5, Kimi-K2-Instruct, and DeepSeek-V3.1.

The LIMI-trained model achieved an average score of 73.5% on AgencyBench, significantly outperforming all baseline models; the best of them, GLM-4.5, scored 45.1%. This advantage extended to other benchmarks covering tool use, coding, and scientific computing, where LIMI also outperformed all baselines.

More importantly, the study showed that the model trained on just 78 examples outperformed models trained on 10,000 samples from another dataset, delivering superior performance with 128 times less data. “This discovery fundamentally reshapes how we develop autonomous AI systems, suggesting that mastering agency requires understanding its essence, not scaling training data,” the researchers write. “As industries transition from thinking AI to working AI, LIMI provides a paradigm for sustainable cultivation of truly agentic intelligence.”

The researchers have released the code for data synthesis and training, along with the model weights.

For the enterprise, this approach offers a practical path toward developing highly specialized AI agents. Instead of undertaking massive data collection projects, organizations can leverage their in-house talent and subject matter experts to create small, high-quality datasets for bespoke agentic tasks. This lowers the barrier to entry and enables businesses to build custom AI agents that can provide a competitive edge on the workflows that matter most to them.
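As a closing illustration of how small such a training run can be, here is a bare-bones supervised fine-tuning sketch over a handful of curated demonstrations stored as JSON lines. The model name, file layout, and hyperparameters are stand-ins chosen for this example, not the paper's setup (the researchers fine-tuned GLM-4.5 with their released tooling):

```python
# Minimal sketch: supervised fine-tuning on a small set of curated agentic
# demonstrations. Model, file format, and hyperparameters are illustrative
# assumptions, not details from the LIMI release.
import json

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "Qwen/Qwen2.5-0.5B"  # small stand-in for a much larger base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Each JSONL record holds a user query and its full trajectory serialized as
# text (reasoning, tool calls, observations): {"query": "...", "trajectory": "..."}
with open("demonstrations.jsonl") as f:
    records = [json.loads(line) for line in f]


def to_features(record):
    # Concatenate query and trajectory into one sequence and reuse the token
    # ids as labels for standard causal-LM fine-tuning.
    text = record["query"] + "\n" + record["trajectory"] + tokenizer.eos_token
    ids = tokenizer(text, truncation=True, max_length=4096)["input_ids"]
    return {"input_ids": ids, "labels": ids.copy()}


dataset = Dataset.from_list([to_features(r) for r in records])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="limi-style-sft",
        num_train_epochs=3,
        per_device_train_batch_size=1,  # a batch of 1 sidesteps padding here
        learning_rate=1e-5,
    ),
    train_dataset=dataset,
)
trainer.train()
```

The point is not the specific libraries but the scale: with a few dozen well-chosen demonstrations, the fine-tuning step itself is modest compared with the effort of curating the data.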