venturebeat
Ai2's new Olmo 3.1 extends reinforcement learning training for stronger reasoning benchmarks

The Allen Institute for AI (Ai2) recently released what it calls its most powerful family of models yet, Olmo 3. But the company kept iterating on the models, expanding its reinforcement learning (RL) runs, to create Olmo 3.1.

The new Olmo 3.1 models focus on efficiency, transparency, and control for enterprises. Ai2 updated two of the three versions of Olmo 3: Olmo 3.1 Think 32B, the flagship model optimized for advanced research, and Olmo 3.1 Instruct 32B, designed for instruction-following, multi-turn dialogue, and tool use. Olmo 3 has a third version, Olmo 3-Base, aimed at programming, comprehension, and math; it also works well for continued fine-tuning.

Ai2 said that to upgrade Olmo 3 Think 32B to Olmo 3.1, its researchers extended its best RL run with a longer training schedule. “After the original Olmo 3 launch, we resumed our RL training run for Olmo 3 32B Think, training for an additional 21 days on 224 GPUs with extra epochs over our Dolci-Think-RL dataset,” Ai2 said in a blog post. “This yielded Olmo 3.1 32B Think, which brings substantial gains across math, reasoning, and instruction-following benchmarks: improvements of 5+ points on AIME, 4+ points on ZebraLogic, 4+ points on IFEval, and 20+ points on IFBench, alongside stronger performance on coding and complex multi-step tasks.”

To get to Olmo 3.1 Instruct, Ai2 said its researchers applied the recipe behind the smaller 7B Instruct model to the larger one. Olmo 3.1 Instruct 32B is “optimized for chat, tool use, & multi-turn dialogue—making it a much more performant sibling of Olmo 3 Instruct 7B and ready for real-world applications,” Ai2 said in a post on X.

For now, the new checkpoints are available on the Ai2 Playground and Hugging Face, with API access coming soon.

Better performance on benchmarks

The Olmo 3.1 models performed well on benchmark tests, predictably beating the Olmo 3 models. Olmo 3.1 Think outperformed Qwen 3 32B models on the AIME 2025 benchmark and performed close to Gemma 27B. Olmo 3.1 Instruct performed strongly against its open-source peers, even beating models like Gemma 3 on the Math benchmark.

“As for Olmo 3.1 32B Instruct, it’s a larger-scale instruction-tuned model built for chat, tool use, and multi-turn dialogue. Olmo 3.1 32B Instruct is our most capable fully open chat model to date and — in our evaluations — the strongest fully open 32B-scale instruct model,” the company said.

Ai2 also upgraded its RL-Zero 7B models for math and coding. The company said on X that both models benefited from longer and more stable training runs.

Commitment to transparency and open source

Ai2 previously told VentureBeat that it designed the Olmo 3 family of models to offer enterprises and research labs more control over and understanding of the data and training that went into each model. Organizations could add to the model’s data mix and retrain it so that it also learns from what’s been added.

This has long been a commitment for Ai2, which also offers a tool called OlmoTrace that tracks how LLM outputs match the model’s training data.

“Together, Olmo 3.1 Think 32B and Olmo 3.1 Instruct 32B show that openness and performance can advance together. By extending the same model flow, we continue to improve capabilities while retaining end-to-end transparency over data, code, and training decisions,” Ai2 said.
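For readers who want to experiment with the new checkpoints on Hugging Face, the sketch below shows one common way to load a chat-style checkpoint with the transformers library. It is illustrative only: the repository id is an assumed placeholder (check Ai2's Hugging Face organization for the published names), and a 32B model generally needs multiple GPUs or quantization to run locally.

```python
# Minimal sketch for trying an Olmo 3.1 checkpoint from Hugging Face.
# NOTE: the repo id below is an assumption for illustration only; check
# Ai2's Hugging Face page (huggingface.co/allenai) for the published name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/Olmo-3.1-32B-Think"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# device_map="auto" shards the 32B weights across available GPUs (requires `accelerate`)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto")

prompt = "Briefly explain the difference between supervised fine-tuning and reinforcement learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```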

We have discovered tools similar to this one. Check out our suggestions for similar AI tools below.

venturebeat
Ai2’s Olmo 3 family challenges Qwen and Llama with efficient, open reasoning

The Allen Institute for AI (Ai2) hopes to take advantage of an increased demand for customized models and enterpris [...]

Match Score: 556.38

venturebeat
Phi-4 proves that a 'data-first' SFT methodology is the new differentiator

AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The [...]

Match Score: 198.23

venturebeat
Nvidia researchers boost LLMs reasoning skills by getting them to 'think' d

Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called [...]

Match Score: 193.80

venturebeat
Google’s new AI training method helps small models tackle complex reasoning

Researchers at Google Cloud and UCLA have propos [...]

Match Score: 192.70

venturebeat
New training method boosts AI multimodal reasoning with smaller, smarter data

Researchers at MiroMind AI and several Chinese universities have released OpenMMReasoner (https://arxiv.org/abs/2511.16334), a new trainin [...]

Match Score: 160.96

venturebeat
Baseten takes on hyperscalers with new AI training platform that lets you o

Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product [...]

Match Score: 145.32

venturebeat
Self-improving language models are becoming reality with MIT's updated SEAL

Researchers at the Massachusetts Institute of Technology (MIT) are gaining renewed attention for developing and [...]

Match Score: 136.65

venturebeat
Weibo's new open source AI model VibeThinker-1.5B outperforms DeepSeek-R1 o

Another day in late 2025, another impressive result from a Chinese company in open source artificial intelligence. Chinese social networking company [...]

Match Score: 115.95

venturebeat
MiniMax-M2 is the new king of open source LLMs (especially for agentic tool

Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool [...]

Match Score: 114.14