
Signal — Returns Discovery Tool

Amazon Returns PMs were spending 7–8 hours a week on manual data discovery before writing a single PRFAQ. Signal, an AI-native discovery tool, replaces that work with ranked, evidence-backed problem synthesis on demand. It is built on a three-tier LLM architecture: RAG query nodes → Master LLM Node → Judge Node for hallucination validation. Hallucination mitigation is not a UX feature; it is baked into the architecture.
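The three-tier flow can be sketched as follows. Every function here is a deterministic stub standing in for the real retrieval and LLM calls; the corpus, claim text, and function names are illustrative, not Signal's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str
    text: str

def rag_query(question: str, corpus: dict) -> list:
    # Tier 1 (RAG query node): retrieve evidence for the question.
    # A keyword filter stands in for a real vector-store lookup.
    terms = question.lower().split()
    return [Evidence(doc_id, text) for doc_id, text in corpus.items()
            if any(t in text.lower() for t in terms)]

def master_synthesize(evidence: list) -> dict:
    # Tier 2 (Master LLM node): rank and synthesize a problem statement,
    # citing every piece of evidence used. Stubbed as a fixed claim.
    return {"claim": "Returns are concentrated in sizing-related issues",
            "citations": [e.source for e in evidence]}

def judge_validate(synthesis: dict, evidence: list) -> bool:
    # Tier 3 (Judge node): accept the synthesis only if every citation
    # resolves to retrieved evidence, so a hallucinated source is rejected
    # before it ever reaches the PM.
    retrieved = {e.source for e in evidence}
    return bool(synthesis["citations"]) and all(
        c in retrieved for c in synthesis["citations"])

def run_signal(question: str, corpus: dict) -> dict:
    evidence = rag_query(question, corpus)
    synthesis = master_synthesize(evidence)
    if not judge_validate(synthesis, evidence):
        raise ValueError("Judge node rejected the synthesis")
    return synthesis
```

The point of the Judge tier is that validation is a gate in the pipeline, not a warning label in the UI: a synthesis that cannot be traced back to retrieved evidence never reaches the user.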

✦ $62M–$158M annual opportunity quantified

Echo — LinkedIn Engagement Agent

LinkedIn engagement requires finding the right posts, reading them, and writing something worth saying. Echo automates the discovery and drafting layer. Enter a keyword and trigger a run: Echo searches LinkedIn via Tavily for highly relevant posts from the last 24 hours, extracts the actual post content, and generates a contextually relevant comment. Results land in an organized, dated Google Sheet (post link, post content, and generated comment in one row), with an email summary after every run. Automated posting was deliberately excluded: platform research confirmed LinkedIn detects and penalizes it. Echo handles the low-value layer; you own the part that builds actual presence.
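A minimal sketch of one Echo run, with the Tavily search, content extraction, and LLM drafting steps replaced by deterministic stubs (the post fields, comment template, and function names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def search_recent_posts(keyword: str, posts: list) -> list:
    # Stand-in for the Tavily search step: keep only keyword matches
    # from the last 24 hours.
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
    return [p for p in posts
            if keyword.lower() in p["content"].lower()
            and p["posted_at"] >= cutoff]

def draft_comment(post: dict) -> str:
    # Stand-in for the LLM drafting step; a real run would prompt a
    # model with the extracted post content.
    return f"Interesting take: {post['content'][:60]}"

def run_echo(keyword: str, posts: list, sheet: list) -> list:
    # One run appends one row per relevant post, mirroring the sheet
    # layout: post link, post content, generated comment.
    for post in search_recent_posts(keyword, posts):
        sheet.append([post["url"], post["content"], draft_comment(post)])
    return sheet
```

Note where the run ends: at the sheet and the email summary. Posting the comment is left to the user by design.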

✦ 90% reduction in daily workflow time

Smart Cart — Swiggy Instamart Retention Feature

64% of Planner-type users were abandoning Instamart for offline vendors when they forgot items, a finding validated through a 15-person primary research survey before a single feature was designed. Built Smart Cart: a Gemini API-powered contextual suggestion engine targeting decision latency under 5 seconds, supported by an A/B test framework designed for 125,000 users to detect a 10% lift in repeat order frequency.
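As a sanity check on the 125,000-user framework, a standard two-proportion power calculation shows how many users per arm a 10% relative lift needs at 95% confidence and 80% power. The 30% baseline repeat-order rate below is an assumption for illustration, not a figure from the case study:

```python
import math

def sample_size_per_arm(p_baseline: float, relative_lift: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    # Two-proportion sample-size formula: two-sided test at
    # alpha = 0.05 (z = 1.96) with 80% power (z = 0.84).
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Assumed 30% baseline; the 10% lift is the case study's target.
n = sample_size_per_arm(0.30, 0.10)
```

Under these assumptions roughly 3,800 users per arm suffice, so a 125,000-user framework leaves ample headroom for segment-level reads (for example, Planner-type users alone).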

✦ 10% projected lift in repeat orders
Flow: Discovery → Suggest → Add to Cart → Retained