

venturebeat
From static classifiers to reasoning engines: OpenAI’s new model rethinks content moderation

Enterprises, eager to ensure that any AI models they use adhere to safety and safe-use policies, fine-tune LLMs so they do not respond to unwanted queries. However, much of the safeguarding and red teaming happens before deployment, “baking in” policies before users fully test the models’ capabilities in production. OpenAI believes it can offer a more flexible option for enterprises and encourage more companies to adopt safety policies.

The company has released two open-weight models under research preview that it believes will give enterprises and models more flexibility around safeguards. gpt-oss-safeguard-120b and gpt-oss-safeguard-20b will be available under a permissive Apache 2.0 license. The models are fine-tuned versions of OpenAI’s open-source gpt-oss, released in August, marking the first release in the oss family since the summer.

In a blog post, OpenAI said oss-safeguard uses reasoning “to directly interpret a developer-provided policy at inference time — classifying user messages, completions and full chats according to the developer’s needs.”

The company explained that, because the model uses a chain of thought (CoT), developers can get explanations of the model’s decisions for review. “Additionally, the policy is provided during inference, rather than being trained into the model, so it is easy for developers to iteratively revise policies to increase performance,” OpenAI said in its post. “This approach, which we initially developed for internal use, is significantly more flexible than the traditional method of training a classifier to indirectly infer a decision boundary from a large number of labeled examples.” Developers can download both models from Hugging Face.

Flexibility versus baking in

At the outset, AI models will not know a company’s preferred safety triggers. While model providers do red-team models and platforms, these safeguards are intended for broader use. Companies like Microsoft and Amazon Web Services even offer platforms to bring guardrails to AI applications and agents.

Enterprises use safety classifiers to help train a model to recognize patterns of good or bad inputs. This helps the models learn which queries they should not answer, and it helps ensure that the models do not drift and continue to answer accurately.

“Traditional classifiers can have high performance, with low latency and operating cost,” OpenAI said. “But gathering a sufficient quantity of training examples can be time-consuming and costly, and updating or changing the policy requires re-training the classifier.”

The gpt-oss-safeguard models instead take two inputs at once, a policy and the content to classify under that policy’s guidelines, and output a conclusion on where the content falls. OpenAI said the models work best in situations where:

- The potential harm is emerging or evolving, and policies need to adapt quickly.
- The domain is highly nuanced and difficult for smaller classifiers to handle.
- Developers don’t have enough samples to train a high-quality classifier for each risk on their platform.
- Latency is less important than producing high-quality, explainable labels.

The company said gpt-oss-safeguard “is different because its reasoning capabilities allow developers to apply any policy,” even ones they have written during inference. The models are based on OpenAI’s internal tool, the Safety Reasoner, which enables its teams to be more iterative in setting guardrails.
OpenAI’s teams often begin with very strict safety policies, “and use relatively large amounts of compute where needed,” then adjust the policies as the model moves through production and risk assessments change.

Performing safety

OpenAI said the gpt-oss-safeguard models outperformed its GPT-5-thinking and the original gpt-oss models on multi-policy accuracy in benchmark testing. It also ran the models on the ToxicChat public benchmark, where they performed well, although GPT-5-thinking and the Safety Reasoner slightly edged them out.

But there is concern that this approach could bring a centralization of safety standards.

“Safety is not a well-defined concept. Any implementation of safety standards will reflect the values and priorities of the organization that creates it, as well as the limits and deficiencies of its models,” said John Thickstun, an assistant professor of computer science at Cornell University. “If industry as a whole adopts standards developed by OpenAI, we risk institutionalizing one particular perspective on safety and short-circuiting broader investigations into the safety needs for AI deployments across many sectors of society.”

It should also be noted that OpenAI did not release the base model for the oss family of models, so developers cannot fully iterate on them. OpenAI, however, is confident that the developer community can help refine gpt-oss-safeguard. It will host a hackathon on December 8 in San Francisco.
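To make the inference-time policy mechanism concrete, the sketch below shows how a developer might call one of the safeguard models through an OpenAI-compatible endpoint, supplying a custom policy alongside the content to classify. The endpoint URL, model identifier, policy wording and prompt layout are illustrative assumptions rather than OpenAI’s documented format.

```python
# Minimal sketch (assumptions noted): classify content against a developer-written
# policy at inference time, with gpt-oss-safeguard served behind an OpenAI-compatible
# endpoint such as a local vLLM server. Model ID and prompt layout are assumed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# The policy is plain text supplied with the request; revising it is a text edit,
# not a classifier retraining run.
POLICY = """\
Policy: Marketplace fraud
- VIOLATES: offers to sell stolen account credentials or to bypass payment systems.
- ALLOWED: general discussion of fraud prevention or security research.
Return a verdict of VIOLATES or ALLOWED, followed by a brief justification.
"""

content_to_check = "Selling 500 verified streaming accounts, DM me for bulk pricing."

response = client.chat.completions.create(
    model="openai/gpt-oss-safeguard-20b",  # assumed model identifier
    messages=[
        {"role": "system", "content": POLICY},          # policy interpreted at inference time
        {"role": "user", "content": content_to_check},  # content to classify under the policy
    ],
)

# A reasoning-based classifier returns an explainable label rather than a bare score.
print(response.choices[0].message.content)
```

Because the policy travels with the request rather than being trained into the model, iterating on it amounts to editing text, which is the flexibility OpenAI is emphasizing over retraining a traditional classifier.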


We have found tools similar to what you are looking for. Check out our suggestions for similar AI tools.

venturebeat
Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even interven [...]

Match Score: 117.74

venturebeat
Meet Aardvark, OpenAI’s security agent for code analysis and patching

OpenAI has introduced Aardvark, a GPT-5-powered autonomous security researcher agent (https://openai.com/index/introducing-aardvark/) now [...]

Match Score: 110.08

venturebeat
Samsung AI researcher's new, open reasoning model TRM outperforms models 10

The trend of AI researchers developing new, [...]

Match Score: 104.22

venturebeat
MiniMax-M2 is the new king of open source LLMs (especially for agentic tool

Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool [...]

Match Score: 104.14

venturebeat
OpenAI debuts Sora 2 AI video generator app with sound and self-insertion c

OpenAI today announced the release of Sora 2 (https://openai.com/index/sora-2/), its latest video generation model, which now includes AI generated audio mat [...]

Match Score: 94.06

venturebeat
Together AI's ATLAS adaptive speculator delivers 400% inference speedup by

Enterprises expanding AI deployments are hitting an invisible performance wall. The culprit? Static speculators that can't keep up with shifting workloads. Speculat [...]

Match Score: 84.82

venturebeat
Nvidia researchers boost LLMs reasoning skills by getting them to 'think' d

Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called [...]

Match Score: 84.09

blogspot
Top 10 AI Content Generator & Writer Tools in 2022


Match Score: 83.47

venturebeat
New 'Markovian Thinking' technique unlocks a path to million-token AI reaso

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called [...]

Match Score: 80.87