Can AI Be Reliable When the Input Is a Mess? Ujjwal Jain Built the Controls That Make It Possible

Ujjwal Jain

Knowledge workers increasingly operate under conditions of information overload: more messages and documents arrive than anyone can properly review, verify, and summarise into a sound decision. "How AI Anxiety and Attitudes Influence Decision Fatigue in Daily Technology Use" in Annals of Neurosciences reports links between AI use and decision fatigue, alongside mental exhaustion, attention strain, and information overload, and it also finds a negative relationship with decision-making self-confidence. Therefore, the useful role of AI here is not to generate more output, but to reduce what humans must process by filtering inputs and producing structured, checkable evidence.

One way to solve this overload problem is to treat the inbox as a data source and build an evidence pipeline around it, as applied AI specialist and founding engineer at Originalis, Ujjwal Jain does. He develops AI infrastructure that helps venture capital teams capture real inbound investment opportunities from email, including startup pitch decks, founder introductions, and referrals from investors or industry experts, and convert them into decision-ready research, so outcomes depend less on who happened to read what, and when. The constraint he targets is simple but costly: investors live in their inboxes, yet inbox volume scales faster than human attention, which makes it easy to miss strong deals and hard to run consistent diligence. Since the system went live in August 2025, it has operated across dozens of VC inboxes, scanning hundreds of thousands of email threads, processing tens of thousands of attachments, and generating thousands of investment memos automatically, helping lift platform throughput from roughly a few dozen deals per month to well over a thousand per month.

His solution addresses the overload problem directly. It automatically pulls real opportunities out of the inbox, converts emails and attachments into structured facts, and generates a first diligence memo in minutes instead of days. The same approach works in any field where decisions depend on scattered messages and documents.

Capturing the Signal Without Flooding the User

The first challenge is deceptively hard: detecting what matters in a stream where almost everything does not.

"An investor's inbox is a mix of everything at once," Ujjwal shares. "Introductions, founder updates, partner notes, calendar threads, newsletters, forwarded decks, internal back and forth. If you flag the wrong thing, you waste people's attention and clog the workflow. If you miss the right thing, you miss a real opportunity."

Ujjwal architected a multi-channel deal ingestion pipeline that monitors inbound communications across email, CRMs, and Slack, classifies deal-relevant threads, and extracts structured entities including company name, sector, stage, and founder identities. Since deployment in August 2025, the system has ingested more than 16,000 unique deals and identified more than 13,000 deal sources.

These figures reflect real operating conditions: genuine inbound communication with inconsistent formatting and mixed intent, where trust determines whether a system becomes a daily tool or gets ignored.

Converting Unstructured Material into Evidence

Catching a relevant message is only the entry point. The second bottleneck is evidence: the time-intensive work of reading decks, checking founder backgrounds, building a market picture, mapping competitors, and synthesizing a memo that can survive scrutiny in an investment committee meeting.

Ujjwal built the second stage of the system to automate this conversion. It processes attachments, including pitch decks, one-pagers, and supporting files, and generates full investment memos with structured analysis. To date, it has processed 48,132 documents and generated more than 7,000 completed investment memos automatically.

"The hard part was trust. A memo is not a neutral output. If it contains weak claims, mixes facts with guesses, or cannot be checked, teams stop using it," Ujjwal explains.

He solved that by designing for verifiability instead of fluent text. The system is built to support consistent diligence preparation: it verifies and cross-references claims, flags gaps instead of filling them, and generates specific questions for diligence. As he describes it, the pipeline pulls external market intelligence signals, such as startup databases and industry sources, to contextualize markets and competitors, and it enriches founder profiles via public sources such as LinkedIn to validate credentials and experience.
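The "flag gaps instead of filling them" principle can be sketched as a claim-triage pass over a memo's assertions. The data shapes and helper below are hypothetical illustrations of the idea, not the actual system.

```python
from dataclasses import dataclass
from enum import Enum

class ClaimStatus(Enum):
    VERIFIED = "verified"      # cross-referenced against a source
    UNVERIFIED = "unverified"  # asserted but not yet checked
    GAP = "gap"                # no source found; surfaced as a question

@dataclass
class Claim:
    text: str
    sources: list[str]
    status: ClaimStatus

def triage_claims(claims: list[Claim]) -> list[str]:
    """Turn unsourced claims into diligence questions instead of guesses."""
    questions = []
    for claim in claims:
        if not claim.sources:
            claim.status = ClaimStatus.GAP
            questions.append(f"Verify: {claim.text}")
    return questions
```

The design choice this encodes is the one quoted above: a memo that separates verified facts from open questions is checkable, and checkable output is what keeps teams using it.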

The operational consequence is clear. Work that typically takes 2 to 3 days of manual research per deal can be reduced to minutes for a first memo draft. The system does not decide instead of people. It reduces the time and inconsistency that prevent people from making well-prepared decisions.

Making AI Reliable Enough for Daily Use

Many AI prototypes fail in production not because they cannot generate output, but because they cannot generate output that teams can depend on. In decision workflows, unreliable output creates rework and can breed false confidence.

To make the system dependable, Ujjwal Jain built it as an orchestrated workflow. It routes tasks across multiple LLM providers, including OpenAI, Anthropic, and Gemini, then forces outputs into strict structures through parsing and scoring, so results remain consistent even when inputs are messy. That design choice is driven by reality. Inboxes contain partial context, forwarded threads, inconsistent naming, and attachments in every format, so the system needs predictable structure and fallback paths, not just good generations.

As Ujjwal comments, "I do not treat the tools as interchangeable. For this job, they behave differently. Some models excel at extracting structured fields from messy email threads. Others perform better when summarising long pitch decks into clean sections. Still others are simply more consistent at following a strict output schema without drifting. So the system routes work to the model that fits the subtask, and then it enforces structure on the way out. We parse the output, score it, and retry or fall back when it is not consistent."

Put simply, this design produces a system that can run every day, even when the inputs are chaotic. The adoption signals suggest the reliability threshold was met. The platform's throughput increased 16.5x after deployment, from 69 to 1,139 deals processed per month, and 88.7% of deals now flow through automated ingestion rather than manual submission. A senior operator at a VC firm independently noted that the feature aggregates research she would otherwise do manually, accelerating internal analysis.

Extending Beyond Email into the Calendar

Email is where opportunities arrive. Calendars are where decisions are prepared and executed. Meeting schedules compress time and increase the cost of poor preparation.

Hence, Ujjwal Jain also built a calendar intelligence system that classified 57,000+ meetings, identified 5,890+ founder meetings, and delivered 2,800+ AI-generated daily digest emails with meeting preparation and company research. This extends the same principle: preparation becomes systematic rather than relying on last-minute manual work. For the company, this expanded the product from inbox intake to end-to-end workflow support: teams could spot founder conversations automatically, prioritise the right meetings, and walk into calls with consistent research in hand.
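A founder-meeting classifier of the kind described can be sketched with two simple signals: external attendees and title hints. The firm domain and keyword list below are placeholder assumptions, not the production logic.

```python
# Assumed placeholder for the VC firm's own email domain.
FIRM_DOMAIN = "examplevc.com"

# Illustrative title signals; a real system would use richer classification.
FOUNDER_HINTS = ("intro", "pitch", "founder", "demo")

def is_founder_meeting(title: str, attendees: list[str]) -> bool:
    """Flag meetings with external attendees and founder-like titles."""
    external = [a for a in attendees if not a.endswith("@" + FIRM_DOMAIN)]
    return bool(external) and any(h in title.lower() for h in FOUNDER_HINTS)
```

Once a meeting is flagged, the same research pipeline that builds memos can prepare the digest, which is how the inbox and calendar stages share one evidence backbone.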

The broader conclusion is not that AI will replace analysts or automate decision-making. The more accurate shift is structural: AI is increasingly used to standardize intake, verification, and synthesis so that experts can spend their time on judgment, not triage.

If tools add more prompts, more options, and more unfiltered output, they can exhaust users and reduce decision quality. The difference in systems like Ujjwal Jain's is the design target: protecting attention rather than competing for it.

Venture capital is simply one domain where the problem is easy to observe. The same constraint exists in compliance investigations, procurement, incident response, clinical review, and research operations, anywhere decisions depend on scattered messages and documents.

In that sense, Ujjwal Jain's work points to where applied AI is heading next: systems engineered to protect attention and deliver structured evidence, because attention is the limiting factor in modern expert work.
