venturebeat
Definity embeds agents inside Spark pipelines to catch failures before they reach agentic AI systems

For most data engineering teams, managing pipeline reliability often means waiting for an alert, manually tracing failures across distributed jobs and clusters, and fixing problems after they've already hit the business. Agentic AI needs the data to be there, clean and on time. A pipeline that fails silently or delivers stale data doesn't just break a dashboard; it breaks the AI system depending on it.

That gap is what Definity, a Chicago-based data pipeline operations startup, is targeting: embedding agents directly inside the Spark or dbt driver to act during a pipeline run, not after it. One enterprise customer identified 33% of its optimization opportunities in the first week of deployment and cut troubleshooting and optimization effort by 70%, according to Definity. The company also claims customers are resolving complex Spark issues up to 10x faster.

"You need three big things for agentic data operations: full stack context that is real time and production aware. Control of the pipeline. And the ability to validate in a feedback loop. Without that, you can be outside looking in and read only," Roy Daniel, CEO and co-founder of Definity, told VentureBeat in an exclusive interview.

The company on Wednesday announced that it has raised $12 million in Series A financing led by GreatPoint Ventures, with participation from Dynatrace and existing investors StageOne Ventures and Hyde Park Venture Partners.

Why existing pipeline monitoring breaks down at scale

Existing tools approach the problem from outside the execution layer: Datadog, which acquired data quality monitor Metaplane last year, Databricks system tables, and platforms like Unravel Data and Acceldata all read metrics after a job completes. Dynatrace has monitoring capabilities; it also participated in Definity's Series A.

The Definity approach is differentiated from other options in the way the solution is architected.
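As a minimal sketch of what "acting during a pipeline run, not after it" can mean in practice, the gate below skips a downstream job when its upstream input is stale. The function names, SLA threshold, and timestamps are illustrative assumptions, not Definity's actual API:

```python
# Hypothetical pre-run freshness gate: before submitting a downstream job,
# check how old its upstream input table is and skip the run if it exceeds
# the freshness SLA. All names and thresholds here are illustrative.

FRESHNESS_SLA_SECONDS = 3600  # downstream requires input updated within the last hour


def is_fresh(last_updated_ts: float, now_ts: float) -> bool:
    """Return True if the upstream table was updated within the SLA window."""
    return (now_ts - last_updated_ts) <= FRESHNESS_SLA_SECONDS


def run_pipeline(last_updated_ts: float, now_ts: float) -> str:
    if not is_fresh(last_updated_ts, now_ts):
        # Preempt before any compute is spent or stale data propagates.
        return "skipped: upstream table is stale"
    return "ran"
```

A scheduler would call `run_pipeline` with the upstream table's last-modified timestamp before submitting the job, so a stale input stops the run before it starts.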
According to Daniel, by the time a platform monitoring tool surfaces a problem, the pipeline has already run, and the failure, the wasted compute, or the bad data is already downstream.

"It's always after the fact," Daniel said. "By the time you know something happened, it already happened."

How Definity's in-execution agents work

The core architectural difference is where the agent sits: inside the pipeline rather than watching from outside it.

Inline instrumentation. The Definity system installs a JVM agent directly inside the pipeline execution layer via a single line of code, running below the platform layer and pulling execution data directly from Spark.

Execution context during the run. The agent captures query execution behavior, memory pressure, data skew, shuffle patterns and infrastructure utilization as the pipeline runs. It also infers lineage between pipelines and tables dynamically; no predefined data catalog is required.

Intervention, not just observation. The agent can modify resource allocation mid-run, stop a job before bad data propagates or preempt a pipeline based on upstream data conditions. Daniel described one production deployment where the agent detected that an upstream job had been preempted and the input table it was supposed to produce was stale, and stopped the downstream pipeline before it started, so no bad data reached any dependent system.

What is and isn't real time. Detection and prevention are real time. Root cause analysis and optimization recommendations run on demand when an engineer queries the assistant, with full execution context already assembled.

Overhead and data residency. The agent adds approximately one second of compute overhead to an hour-long run.
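As a quick sanity check on the stated overhead figure, one second of agent time against a one-hour run works out to well under a tenth of a percent of compute:

```python
# Claimed instrumentation cost: ~1 second of agent overhead on an hour-long run.
run_seconds = 60 * 60          # one-hour pipeline run
agent_overhead_seconds = 1     # approximate agent cost per the article
overhead_pct = 100 * agent_overhead_seconds / run_seconds
print(f"{overhead_pct:.3f}%")  # roughly 0.028% of the run's compute
```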
Only metadata is transmitted externally; full on-premises deployment is available for environments where no metadata can leave the perimeter.

What in-execution intelligence looks like in a production environment

One early user of the Definity platform is Nexxen, an ad tech platform running large-scale Spark pipelines on-premises for mission-critical advertising workloads. Dennis Meyer, Director of Data Engineering at Nexxen, told VentureBeat that the core problem he was facing was not pipeline failures but the accumulating cost of inefficiency in an environment with no elastic cloud capacity to absorb waste.

"The main challenge wasn't about pipelines breaking, but about managing an increasingly complex and large-scale environment," Meyer said. "Because we operate on-prem, we don't have the flexibility of instant elasticity, so inefficiencies have a direct cost impact."

Existing monitoring tools gave Nexxen partial visibility but not enough to act on systematically. "We had existing monitoring tools in place, but needed full-stack visibility to understand workload behavior holistically and to systematically prioritize optimizations," Meyer said.

Nexxen deployed Definity with no pipeline code changes. According to Meyer, the team identified 33% of its optimization opportunities within the first week, and engineering effort on troubleshooting and optimization dropped by 70%. The platform freed infrastructure capacity, allowing the team to support workload growth without additional hardware investment.

"The key shift was moving from reactive troubleshooting to proactive, continuous optimization," Meyer said.
"At scale, the biggest gap often isn't tooling — it's actionable visibility."

What this means for enterprise data teams

For data engineering teams running production Spark environments, the shift from reactive monitoring to in-execution intelligence has architectural and organizational implications worth thinking through.

Pipeline ops is becoming an AI infrastructure problem. Data pipelines that previously supported analytics now carry AI workloads with direct business dependencies. Failures that were once an inconvenience now block production AI delivery.

Troubleshooting time is a recoverable cost. According to Meyer, Nexxen cut engineering effort on troubleshooting and optimization by 70% after deploying Definity. For teams running lean, returning that time to the roadmap is the most direct near-term case for evaluating this category.
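Data skew, one of the execution signals captured during a run, can be illustrated with a simple per-partition row-count check. The threshold and counts below are illustrative assumptions, not drawn from Definity:

```python
# Minimal data-skew check from per-partition row counts. A high ratio of the
# largest partition to the mean partition size is a common signal that one
# task will dominate a Spark stage's runtime.

def skew_ratio(partition_row_counts):
    """Ratio of the largest partition to the mean partition size."""
    mean = sum(partition_row_counts) / len(partition_row_counts)
    return max(partition_row_counts) / mean


def is_skewed(partition_row_counts, threshold=4.0):
    """Flag a stage as skewed when one partition dwarfs the average."""
    return skew_ratio(partition_row_counts) >= threshold
```

For example, ten partitions of roughly equal size yield a ratio near 1.0, while nine small partitions plus one that is fifty times larger would be flagged at the default threshold.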
