From AI Quality Loops to Product Thinking — This Is How I Got Here

1.8 years evaluating how LLMs fail at scale. Now building products that account for it from the first line of architecture.

Ajay Sharma

Product Lifecycle

[Diagram: product feedback loop. Inputs (User, Market Data, Stakeholder, Business) → Ideation → Validation & Prioritization → Product Roadmap → PRD / Backlog → Sprint → Testing → Alpha → Beta → GTM → A/B Testing → Data, feeding back into Ideation]

How I Got Here

I started at Preplaced as an Associate Product Operations Manager. It wasn't a traditional PM role — but it taught me something that stuck. A 21% uplift in trial-to-paid conversion doesn't come from a deck. It comes from understanding exactly where a user journey breaks and designing the fix before anyone asks you to.

At Turing, I lead a team of 5–6 analysts running RLHF and SFT workflows for Fortune 500 AI clients. I've evaluated 5,000+ LLM interaction pairs — specifically looking for the failure modes that degrade model quality. Logic gaps. Hallucination patterns. The subtle ways a model sounds confident while being wrong. I'm the feedback channel between annotation teams and delivery leads, turning error patterns into model improvement guidance. Most PM candidates have read about this process. I've run it.

That ground-level understanding of how AI fails is what I now bring to product work. When you've spent time inside RLHF feedback loops, you design differently. You ask different questions earlier. You don't treat hallucination as a UX problem — you treat it as an architecture decision. That shift in thinking is what I'm building on as I move into product.

Get to Know Me

Currently Deepening

Upskilling

Mixpanel · Behavioral Analytics · HelloPM AI Specialization

Currently Reading

On the Shelf

Hacking Growth — Sean Ellis & Morgan Brown

Location

Available Anywhere

Remote · Hybrid · On-site

Currently Investigating

AI Adoption Paradox

70%+ of companies have given employees AI access. Fewer than 6% are generating real business outcomes. The rest are running Prompt Theater — not products. Researching why.

Identity
#SystemsThinking #AIEvaluation #ProductThinker #BuildInPublic

Where I've Operated

June 2024 – Present

Team Lead (Promoted from Research Analyst)

Turing

  • Led a team of 5–6 analysts executing RLHF and SFT workflows for Fortune 500 AI clients, maintaining delivery standards across high-volume, time-sensitive annotation projects.
  • Evaluated 5,000+ LLM interaction pairs against model alignment protocols, systematically identifying logic gaps and hallucination patterns that degrade training data quality.
  • Served as the primary feedback channel between delivery leads and the annotation team — synthesizing recurring error patterns into specific, actionable model improvement guidance.
  • Ran daily stand-ups to surface operational blockers and resolve edge cases in real time, reducing annotator query resolution time and maintaining >95% accuracy across all client deliverables.
Apr 2023 – Mar 2024

Associate Product Operations Manager

Preplaced

  • Strategized and launched tiered mentorship offerings (Basic vs. Premium), increasing service adoption rates by 10 percentage points (from 21% to 31%) and maximizing revenue per user through segmentation.
  • Spearheaded the "Structured Mentorship" feature rollout to standardize user journeys, driving a 21% uplift in trial-to-paid conversion by clarifying value propositions for mentees.
  • Implemented mentor accountability rules and attendance protocols to mitigate service gaps, reducing session no-show rates by 10% and ensuring consistent platform reliability.

Academic Background

Product Management Cohort (AI Specialization)

HelloPM

Nov 2025 – Present

B.Tech in Electrical & Computer Engineering

REVA University

2019 – 2023