LinkedIn GitHub Email



Who I Am

I'm an AI Product Manager who builds things. Not just roadmaps and PRDs — actual working products, with real AI wired in, shipped and running.

4+ years in enterprise tech — currently at American Express, previously MathCo. IIM Ranchi MBA on top of a B.Tech in Computer Science: I can walk into an engineering architecture review and then write the business case for the same decision. CommGuard exists to prove that claim.

My edge is that I sit at an uncommon intersection: I understand what AI can do technically, I understand what users actually need, and I can hold both truths in tension long enough to ship something that works in the real world — not just in a demo.

I've worked across the full AI product surface: LLM integration, evaluation frameworks, agentic workflows, compliance AI, enterprise workflow automation, data products, and statistical experimentation. The domain changes — the product thinking doesn't.

What I care about most:
  ├── When should the AI decide, and when should a human?
  ├── How do you evaluate AI outputs when "good" is subjective?
  ├── What does product-market fit look like when the model is probabilistic?
  └── How do you build AI products that scale without breaking trust?

What I Ship

CommGuard — AI Communications Governance

A full-stack enterprise platform I designed and built to demonstrate what serious AI product work looks like in a regulated industry. Every layer reflects real PM decisions — where AI takes the wheel, where humans must stay in the loop, and how you make compliance fast without making it weak.

Why it showcases AI PM thinking:

  • Designed a two-tier AI system — Claude primary, keyword-rule fallback — because production AI products can't go down when an API call fails
  • Built a 7-module AI engine covering scanning, classification, content generation, readability, consistency checking, impact analysis, and anomaly detection
  • Implemented statistical experimentation (Z-test for proportions, configurable confidence levels) because gut-feel is not a metric
  • Enforced human-in-the-loop at every high-stakes decision point — a design choice, not an afterthought
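The experimentation point above is concrete enough to sketch. Below is a minimal, self-contained version of a two-proportion Z-test of the kind described — the function name and numbers are illustrative, not CommGuard's actual API:

```python
from math import sqrt, erf

def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided Z-test for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled proportion under the null hypothesis p_a == p_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. variant A converts 120/1000, variant B converts 180/1000
z, p = two_proportion_z_test(120, 1000, 180, 1000)
significant = p < 0.05  # threshold set by the configurable confidence level
```

The "configurable confidence levels" in the bullet above map to the `0.05` threshold here: a 99% confidence requirement would simply tighten it to `0.01`.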

→ CommGuard on GitHub


Next.js · TypeScript · Claude AI · SQLite


171 Rules · 12 Regs · 7 AI Modules · 43 Endpoints


What's inside the AI engine:

| Module | What it does | Fallback |
| --- | --- | --- |
| Compliance Scanner | Scores 0–100, cites violations with exact section references + rewrite suggestions | 171-rule keyword engine |
| Auto-Classifier | Detects communication type, maps applicable regulations | Keyword decision tree |
| Content Generator | Drafts compliant communications with channel constraints (SMS ≤160 chars, etc.) | None |
| Readability Analyzer | Flesch-Kincaid + plain-language AI audit (jargon, passive voice, legalese) | FK score + jargon list |
| Consistency Checker | Cross-document contradiction detection (fee/rate/timeline conflicts) | Regex extraction |
| Impact Analyzer | Given a rule change → finds every affected communication, estimates remediation effort | Word-overlap scoring |
| Anomaly Detector | 2σ/3σ complaint and bounce spike detection | Pure statistics — no AI needed |

AI PM Toolkit

Core AI / ML Literacy

LLMs · RAG · Prompt Engineering · Agentic AI · Evals · Structured Output · Fine-tuning · Responsible AI

Product & Strategy

Figma · Notion · Linear · Jira · Confluence · Miro · Power BI · ServiceNow · Azure DevOps

Data & Experimentation

A/B Testing · Statistical Significance · Anomaly Detection · SQL · Python · Recharts

Technical (I Prototype)

TypeScript · Next.js · React · REST APIs · SQLite · Vercel · Claude Code

Domain Depth

FinTech · Enterprise SaaS · Workflow Automation · Compliance AI · Developer Tools · RBAC


How I Think About AI Products

1. Capability is not a product. The model can do a lot of things. The PM's job is to decide which things are actually worth doing, for whom, and at what cost when they go wrong.

2. Evaluation is the hardest part. For traditional software, "does it work" is binary. For AI, it's a distribution. Defining what "good enough" means — and building the infra to measure it — is where most AI products succeed or fail.

3. The fallback is part of the product. Every AI feature has a failure mode. I design fallbacks first, not last. A keyword-rule engine isn't a consolation prize — it's the thing that keeps the product running when the model is unavailable, slow, or wrong.
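The fallback-first shape described above is straightforward to sketch. In this illustrative version, `llm_scan` stands in for whatever model call the product makes, and the keyword engine is a plain dictionary of rules — none of these names come from CommGuard itself:

```python
def keyword_scan(text: str, rules: dict[str, str]) -> list[str]:
    """Deterministic fallback: flag any rule whose keyword appears."""
    lowered = text.lower()
    return [issue for keyword, issue in rules.items() if keyword in lowered]

def scan_communication(text: str, rules: dict[str, str], llm_scan=None) -> dict:
    """Try the LLM scanner first; fall back to keyword rules on any failure."""
    if llm_scan is not None:
        try:
            return {"tier": "llm", "findings": llm_scan(text)}
        except Exception:
            pass  # timeout, rate limit, malformed output: degrade, don't die
    return {"tier": "keywords", "findings": keyword_scan(text, rules)}

rules = {"guaranteed returns": "Unsubstantiated performance claim"}
result = scan_communication("Enjoy guaranteed returns today!", rules)
# With no LLM wired in, tier is "keywords" and the rule still fires
```

The key property is that the caller never sees the failure: the response shape is identical in both tiers, only the `tier` field tells downstream code how much to trust the findings.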

4. Human-in-the-loop is a design decision, not a disclaimer. Where you put a human checkpoint — and what information you give them — determines whether AI actually helps or just creates a new bottleneck. I spend more time on this than on the AI itself.

5. Trust compounds slowly and breaks instantly. Users forgive slow. They don't forgive confidently wrong. AI products need to know when to hedge, when to explain uncertainty, and when to just not answer.


What I'm Thinking About

current:
  - How agentic AI changes the PM role — when the product is a loop, not a flow
  - Evaluation frameworks for LLM outputs that actually correlate with user value
  - RAG vs fine-tuning decision trees for enterprise knowledge products
  - Where AI governance becomes a moat, not just a cost center

exploring:
  - Multi-agent orchestration and what product design looks like when agents talk to agents
  - Structured output validation for AI in high-stakes decisions
  - Voice + LLM product surface — the interaction model is entirely different
  - AI-native B2C vs AI-embedded enterprise — very different product constraints

believe:
  - The best AI PMs can read a model card AND write a press release
  - A working prototype changes the conversation faster than a 50-slide deck
  - Responsible AI and fast AI are not opposites — safety is a feature, not a gate
  - The PM who understands the training data has an unfair advantage

Open to AI PM roles across FinTech · Enterprise SaaS · Developer Tools · Agentic AI · Regulated Industries

Let's talk
