The actual problems, what I built, and what changed. Client names anonymized — everything else is real.
Launching AI-powered Marketing Agents required manually reviewing and categorizing large volumes of client files — a 20-hour process that created a bottleneck before any customer value was delivered.
The bottleneck was the file review — slow, entirely manual, identical across every deployment. I built a custom GPT to replace it: it ingests client files, identifies the right agent associations, and generates structured content summaries, including descriptions of visual assets.
Setup time dropped from ~20 hours to 8–10. That compression reduced internal delivery cost and cut customer lead time — accelerating time-to-value on a flagship AI product for every new deployment.
We had no idea where the time was actually going. Which meant every pricing conversation was a guess, scope had no floor, and accumulating costs couldn't be attributed to anything.
Started with the question nobody had answered: what should we actually measure, and would it tell us anything useful? Ran an internal audit to find out, then built a tracking framework from scratch — designed to capture effort at the granularity needed to surface real patterns, not just aggregate totals.
Identified hidden effort sinks growing month-over-month. Provided the data foundation that defended scope in client conversations and justified price increases that previously lacked any supporting evidence.
A major enterprise customer needed more than platform access — they needed a way to embed AI-driven insight into an active product innovation cycle across a complex, multi-vertical organization.
Configured the platform for their setup, then built the integration model they'd actually use: a custom training program with workflow guidance and a repeatable structure for folding AI into the weekly product development cycle — across all 7 verticals simultaneously.
60-person rollout that wasn't a one-time training — it launched an ongoing operational rhythm. Became the internal template for what enterprise AI-first customer enablement looks like at scale.
Implementation runs on repeatable phases — but the highest-friction moments (kickoff prep, status updates, audience structuring, handover documentation) were still being handled manually, every time, at every account.
Rather than deploying AI generally, I mapped it to where friction was actually costing time — four specific phases across a 7-phase methodology. Built and deployed a tool for each one, plus a pipeline structure so future additions follow the same logic rather than starting a new conversation every time.
Four of seven implementation phases now have active AI tooling in production. Recurring admin at each touchpoint is handled by the system. New capabilities are added against a defined methodology rather than as one-off fixes.