VentureBeat
Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal "reasoning circuits" and detect signs of computational errors as the model solves a problem.

Their findings show that CRV can detect reasoning errors in LLMs with high accuracy by building and observing a computational graph from the model's internal activations. In a key breakthrough, the researchers also demonstrated they can use this deep insight to apply targeted interventions that correct a model's faulty reasoning on the fly.

The technique could help solve one of the great challenges of AI: ensuring a model's reasoning is faithful and correct. This could be a critical step toward building more trustworthy AI applications for the enterprise, where reliability is paramount.

Investigating chain-of-thought reasoning

Chain-of-thought (CoT) reasoning has been a powerful method for boosting the performance of LLMs on complex tasks and has been one of the key ingredients in the success of reasoning models such as the OpenAI o-series and DeepSeek-R1. Yet despite its success, CoT is not fully reliable. The reasoning process itself is often flawed, and several studies have shown that the CoT tokens an LLM generates are not always a faithful representation of its internal reasoning process.

Current remedies for verifying CoT fall into two main categories. "Black-box" approaches analyze the final generated token or the confidence scores of different token options. "Gray-box" approaches go a step further, looking at the model's internal state by using simple probes on its raw neural activations. But while these methods can detect that a model's internal state is correlated with an error, they can't explain why the underlying computation failed. For real-world applications where understanding the root cause of a failure is crucial, this is a significant gap.

A white-box approach to verification

CRV is based on the idea that models perform tasks using specialized subgraphs, or "circuits," of neurons that function like latent algorithms. If the model's reasoning fails, the failure stems from a flaw in the execution of one of these algorithms. By inspecting the underlying computational process, we can therefore diagnose the cause of the flaw, much as developers examine execution traces to debug traditional software.

To make this possible, the researchers first make the target LLM interpretable. They replace the standard dense layers of the transformer blocks with trained "transcoders." A transcoder is a specialized deep learning component that forces the model to represent its intermediate computations not as a dense, unreadable vector of numbers, but as a sparse and meaningful set of features. Transcoders are similar to the sparse autoencoders (SAEs) used in mechanistic interpretability research, with the difference that they also preserve the functionality of the network they emulate. This modification effectively installs a diagnostic port into the model, allowing researchers to observe its internal workings.
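As a rough illustration of the transcoder idea, the following minimal PyTorch sketch routes a dense MLP layer's computation through a wide, sparse feature layer that is trained to imitate the original layer's output. The class name, layer sizes, and training objective are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch of a transcoder: a sparse, inspectable stand-in for a dense MLP layer.
# Sizes and the loss are illustrative assumptions, not the paper's actual setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    def __init__(self, d_model: int = 4096, n_features: int = 65536):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # dense input -> wide feature space
        self.decoder = nn.Linear(n_features, d_model)  # sparse features -> layer output

    def forward(self, x: torch.Tensor):
        # ReLU (plus a sparsity penalty during training) keeps most features at zero,
        # so the handful of active features for a given token can be inspected.
        feats = torch.relu(self.encoder(x))
        return self.decoder(feats), feats

def transcoder_loss(transcoder: Transcoder, original_mlp: nn.Module,
                    x: torch.Tensor, l1_coef: float = 1e-3) -> torch.Tensor:
    """Train the transcoder to reproduce the frozen dense layer while staying sparse."""
    out, feats = transcoder(x)
    with torch.no_grad():
        target = original_mlp(x)  # output of the original dense layer being emulated
    return F.mse_loss(out, target) + l1_coef * feats.abs().mean()
```

Because the transcoder is trained to match the original layer's behavior, the model keeps working after the swap, while its intermediate computations become readable.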
With this interpretable model in place, the CRV process unfolds in a few steps. For each reasoning step the model takes, CRV constructs an "attribution graph" that maps the causal flow of information between the interpretable features of the transcoder and the tokens it is processing. From this graph, it extracts a "structural fingerprint": a set of features describing the graph's properties. Finally, a "diagnostic classifier" is trained on these fingerprints to predict whether the reasoning step is correct or not.

At inference time, the classifier monitors the model's activations and provides feedback on whether the reasoning trace is on the right track.
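To make the last two steps of that pipeline concrete, here is a minimal sketch that reduces an attribution graph to a coarse structural fingerprint and fits a classifier on labeled reasoning steps. The specific graph statistics and the classifier choice are illustrative assumptions; the paper's fingerprints and diagnostic model are richer.

```python
# Sketch of the verification pipeline: attribution graph -> structural fingerprint
# -> diagnostic classifier. Feature choices here are simplified assumptions.
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def structural_fingerprint(graph: nx.DiGraph) -> np.ndarray:
    """Summarize one reasoning step's attribution graph as a fixed-length vector."""
    n = graph.number_of_nodes()
    e = graph.number_of_edges()
    weights = [d.get("weight", 0.0) for _, _, d in graph.edges(data=True)]
    return np.array([
        n,                                            # number of active feature nodes
        e,                                            # number of causal edges
        e / max(n, 1),                                # edge density per node
        float(np.mean(weights)) if weights else 0.0,  # mean attribution strength
        float(np.max(weights)) if weights else 0.0,   # strongest single influence
    ])

def train_diagnostic_classifier(graphs, labels):
    """Fit a classifier that predicts whether a reasoning step is correct (1) or not (0)."""
    X = np.stack([structural_fingerprint(g) for g in graphs])
    return GradientBoostingClassifier().fit(X, labels)
```

At inference time, the same fingerprint function would be applied to each new step's graph and fed to the trained classifier to flag suspect steps.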
Finding and fixing errors

The researchers tested their method on a Llama 3.1 8B Instruct model modified with the transcoders, evaluating it on a mix of synthetic (Boolean and Arithmetic) and real-world (GSM8K math problems) datasets. They compared CRV against a comprehensive suite of black-box and gray-box baselines.

The results provide strong empirical support for the central hypothesis: the structural signatures in a reasoning step's computational trace contain a verifiable signal of its correctness. CRV consistently outperformed all baseline methods across every dataset and metric, demonstrating that a deep, structural view of the model's computation is more powerful than surface-level analysis.

Interestingly, the analysis revealed that the signatures of error are highly domain-specific: failures in different reasoning tasks (formal logic versus arithmetic calculation) manifest as distinct computational patterns. A classifier trained to detect errors in one domain does not transfer well to another, highlighting that different types of reasoning rely on different internal circuits. In practice, this means a separate classifier may need to be trained for each task (though the transcoder remains unchanged).

The most significant finding, however, is that these error signatures are not just correlational but causal. Because CRV provides a transparent view of the computation, a predicted failure can be traced back to a specific component. In one case study, the model made an order-of-operations error. CRV flagged the step and identified that a "multiplication" feature was firing prematurely. The researchers intervened by manually suppressing that single feature, and the model immediately corrected its path and solved the problem correctly (a sketch of this kind of intervention appears at the end of this article).

This work represents a step toward a more rigorous science of AI interpretability and control. As the paper concludes, "these findings establish CRV as a proof-of-concept for mechanistic analysis, showing that shifting from opaque activations to interpretable computational structure enables a causal understanding of how and why LLMs fail to reason correctly." To support further research, the team plans to release its datasets and trained transcoders to the public.

Why it's important

While CRV is a research proof-of-concept, its results hint at a significant future for AI development. AI models learn internal algorithms, or "circuits," for different tasks. But because these models are opaque, we can't debug them like standard computer programs by tracing bugs to specific steps in the computation. Attribution graphs are the closest thing we have to an execution trace, showing how an output is derived from intermediate steps.

This research suggests that attribution graphs could become the foundation for a new class of AI model debuggers. Such tools would allow developers to understand the root cause of failures, whether it's insufficient training data or interference between competing tasks. This would enable precise mitigations, like targeted fine-tuning or even direct model editing, instead of costly full-scale retraining. They could also allow for more efficient intervention to correct model mistakes during inference.

The success of CRV in detecting and pinpointing reasoning errors is an encouraging sign that such debuggers could become a reality. This would pave the way for more robust LLMs and autonomous agents that can handle real-world unpredictability and, much like humans, correct course when they make reasoning mistakes.
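As an illustration of what such an inference-time intervention could look like, the sketch below suppresses a single transcoder feature with a PyTorch forward hook before a step is recomputed, mirroring the case study described above. The layer path, feature index, and the assumption that the layer returns a (output, features) pair follow the earlier transcoder sketch and are hypothetical.

```python
# Sketch of a targeted intervention: zero out one flagged sparse feature and
# recompute the layer's output. Names and indices are hypothetical.
def suppress_feature(transcoder_layer, feature_idx: int):
    """Register a hook that silences one sparse feature in a transcoder layer."""
    def hook(module, inputs, output):
        out, feats = output                  # assumes the layer returns (output, features)
        feats = feats.clone()
        feats[..., feature_idx] = 0.0        # e.g. a prematurely firing "multiplication" feature
        return module.decoder(feats), feats  # rebuild the output without that feature
    return transcoder_layer.register_forward_hook(hook)

# Hypothetical usage: suppress the flagged feature, regenerate, then remove the hook.
# handle = suppress_feature(model.layers[17].transcoder, feature_idx=4211)
# corrected = model.generate(prompt)
# handle.remove()
```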

