Services
Pick the engagement that fits the problem.
Two clear ways to work together. Both have published pricing, and both start with a 30-minute call before any decision.
Productized engagement
AI Automation Build
A senior engineer takes one of your highest-leverage workflows — clinical intake, prior auth drafting, claims triage, longitudinal chart summarization, market analysis, ops reporting — and ships it as a real, governed system. Not a Zapier patchwork. Not a chatbot. A system you can audit, evolve, and depend on.
What you get
- Workflow design document (system diagram, data flow, security model)
- Skill files and prompts versioned in your repo
- Evaluation suite with golden tests + drift checks
- Deployment to your cloud (or mine, scoped) with observability and audit logging
- Runbook + admin guide for non-engineering owners
- 30 days of bug-fix support and tuning post-launch
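For the technically curious: a "golden test" pins a workflow's behavior to a fixed set of expert-approved examples, so regressions surface before your clinical or ops team does. A minimal sketch of the idea (all names here are hypothetical, not the actual deliverable):

```python
# Illustrative only: the shape of a golden-test check in an evaluation suite.
# run_workflow and GOLDEN_CASES are stand-in names, not a real deliverable.

def run_workflow(note: str) -> dict:
    # Stand-in for the deployed AI workflow (e.g. intake triage).
    # A real system would call the governed model pipeline here.
    return {"priority": "urgent" if "chest pain" in note.lower() else "routine"}

# Golden cases: fixed inputs paired with expert-approved expected outputs.
GOLDEN_CASES = [
    ("Patient reports chest pain radiating to left arm.", "urgent"),
    ("Annual wellness visit, no complaints.", "routine"),
]

def run_golden_suite() -> float:
    """Return the pass rate; a drift check alerts when it drops over time."""
    passed = sum(
        run_workflow(note)["priority"] == expected
        for note, expected in GOLDEN_CASES
    )
    return passed / len(GOLDEN_CASES)
```

The same suite runs on every prompt or model change, which is what makes the system auditable rather than vibes-based.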
Process
Week 1 — Discovery
Two working sessions with the team doing the work. Map the current process, identify failure modes, agree on success metrics.
Week 2–4 — Build
Skill files, integrations, evaluation harness, security review, deployment. Async demos every 3 days.
Week 5 — Hand-off
Documentation, runbook, admin guide, and a working session with whoever will own this going forward.
Post-launch — Tuning
30 days of bug-fix support and refinement included. Optional ongoing care via the Fractional retainer.
Best fit
- Digital health companies (Series A–C) with one painful workflow consuming clinical or ops capacity
- Mid-size practice groups, MSOs, and payviders looking to extend a small ops team
- Health-adjacent SaaS that needs an AI feature shipped without staffing an AI team
- Teams that need HIPAA-aware design from day one — not a retrofit later
Recurring engagement
Fractional AI / CTO
Your team needs senior judgment more than another contractor. I sit in your leadership rhythms, own AI and engineering strategy, and ship alongside your team. The right fit if you're pre-CTO, between CTOs, or you have a CTO who needs a partner for the AI stack specifically.
What you get
- AI / engineering strategy doc, refreshed quarterly
- Standing weekly leadership presence + async availability for your team
- Architecture and code reviews on the highest-leverage decisions
- AI governance framework (eval, drift, incident response, model selection)
- Hiring scorecards and interview support for engineering / AI roles
- Compliance posture (HIPAA, SOC 2, HITRUST) advised continuously
Process
Day 0 — Onboarding
Two-week immersion. I read the code, the docs, and the room, then land with a written 90-day plan and the questions it can't answer yet.
Ongoing cadence
Weekly exec sync, monthly board-ready report, ad-hoc availability for your team via Slack and Linear.
Quarterly strategy
Refreshed strategy doc, hiring plan, governance review, and a candid retro with the founder/CEO.
Best fit
- Pre-CTO startups raising or just past seed, with an AI product surface
- Companies between CTOs that don't want to slow down on the search
- Existing CTOs who want a senior partner to own the AI stack and governance
- Boards or PE / VC firms placing AI leadership into a portfolio company
FAQ
Common questions.
- Do you sign BAAs?
- Yes. PHI access is scoped, logged, and limited to what the engagement requires. I can sign your standard BAA or send mine.
- What if I'm not in healthcare?
- Most clients are. The skill set transfers to any regulated industry — finance, legal, insurance, government. If you're outside healthcare and need the same governance posture, we can talk.
- Do you write code or just advise?
- I write code. The Build engagement ships working software. The Fractional engagement is hands-on enough to do code review, design review, and pairing with your team.
- What's the smallest version of working together?
- A single discovery call. After that, the smallest paid engagement is a 4-week Build. Fractional has a 3-month minimum to actually deliver value.
- Can you work with our existing engineering team?
- Yes — that's most of the work. Engagements are designed to leave your team better than they started: documented patterns, evaluation harnesses they own, governance they can run themselves.
- Where does the code live?
- In your repo, on your cloud, under your account. I'm not building lock-in. The deliverable is yours from day one.
- What models and tools do you build with?
- Frontier models (Anthropic Claude, OpenAI, Google) plus self-hosted models where policy demands it. MCP servers for tool access. Skill files and evaluation harnesses are model-portable, so you're not locked to a vendor.
- Is the call really free?
- Yes. Thirty minutes, no deck. If we're not a fit, you'll leave with a sharper picture of what is.