The Story
Orbiit Recovery is a HIPAA-compliant SaaS platform for substance use disorder (SUD) treatment. We deliver daily recovery support via SMS - no app downloads, no passwords, just text messages that meet patients where they are. Treatment centers get real-time engagement dashboards. Patients get micro-courses, check-ins, and crisis support.
We started on GoHighLevel. Hit its walls in two weeks - couldn't control the data model, couldn't ensure HIPAA compliance, and the workarounds kept stacking up. We were held hostage by software we didn't own. So we built the entire platform from scratch in 68 days - production-deployed on Azure with full HIPAA infrastructure, passwordless auth, multi-tenant architecture, and an AI clinical assistant.
Traditional estimate for this build: 4-5 developers, 12-18 months, $500K-$900K. Actual: 1 developer + AI, 6 months. That's 662 story points, 199 documented sessions, and 48 architecture decision records. That's the methodology you'd be stepping into - and the codebase you'd own.
Where We Are
V2 launched in January 2026 after a trial-by-fire rewrite. We've moved away from every piece of software we don't control - no more GoHighLevel, no more third-party platforms holding our data hostage. The entire stack is ours.
Traction
- 14 beta testers active, targeting 50 in the near term. This population is not known for consistency - engagement data is the product, and the signal is real.
- Backed by HopeLINC, a nonprofit providing the clinical relationships and community access.
- Relationships with multiple clinics, churches, and recovery organizations across Georgia and Tennessee.
- Restructuring the cap table to make room for early joiners. Pricing and company structure are being realigned now.
Doors Opening
Our CEO Dan Francis leads business development. This week alone:
- Meeting with the Commissioner of Behavioral Health for the State of Georgia
- Meeting with Healthsperien - presidential advisors on behavioral health policy, including Patrick Kennedy
- Met with the head of Georgia's Recovery Community Organizations
This is pre-funding, pre-revenue, early. But the platform is built, the clinical relationships are real, and the policy doors are opening at the state and federal level. That's the window this equity opportunity sits in.
Architecture
```
Twilio ──── SMS to patients
   │ webhooks
Astro SSR ◄──► Django API ◄──► PostgreSQL
(frontend)     (backend)       (Azure)
                   │
             Celery Worker ──── Redis (broker)
              (Azure ACI)
```
Multi-Tenant Data Model
```
Organization (tenant boundary)
├── Region (optional geographic grouping)
│   └── Clinic
│       ├── Patient → CourseAssignment → Submission → SOBER Score
│       └── StaffMember (clinician, admin, biller, researcher, sales)
└── Programs & Billing
```
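A hedged sketch of that hierarchy as plain Python (the production code uses Django models; the field choices here are illustrative assumptions, and UUID primary keys follow ADR-0020):

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class Organization:
    """Tenant boundary: every row below ultimately scopes to one of these."""
    name: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)  # UUID PKs per ADR-0020

@dataclass
class Region:
    """Optional geographic grouping inside an Organization."""
    organization: Organization
    name: str

@dataclass
class Clinic:
    organization: Organization        # tenant scoping stays explicit
    name: str
    region: Optional[Region] = None   # Region is optional in the hierarchy

@dataclass
class Patient:
    clinic: Clinic
    id: uuid.UUID = field(default_factory=uuid.uuid4)

def tenant_of(patient: Patient) -> Organization:
    # Every access path resolves back to the Organization boundary,
    # which is what multi-tenant filtering keys on.
    return patient.clinic.organization
```

The design point the sketch illustrates: the tenant is reachable from every child record, so a query layer can always filter by Organization before anything touches PHI.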
Django Apps
| App | Owns |
|-----|------|
| accounts | Users, organizations, clinics, patients, staff profiles, SSO, magic links, interested parties |
| courses | Micro-course content, quizzes, course groups, delivery scheduling, Fibonacci reinforcement |
| messaging | Twilio SMS, SendGrid email, magic link generation and resolution |
| analytics | SOBER scores, touchpoints, check-ins, crisis alerts, risk levels |
| billing | Stripe subscriptions, programs, invoices |
| dashboard | Clinician/admin dashboards, Signal project management |
| notes | AI-assisted treatment notes (Azure OpenAI) |
| risk_engine | Biometric monitoring, AI-driven risk insights |
| crm | Sales pipeline, lead tracking |
| orbie | AI clinical assistant with kill switch |
| translations | Multi-language support (EN, ES, ZH, VI, KO, RU, AR) |
| core | Middleware, rate limiting, health checks, Azure utilities |
Tech Stack
| Layer | Technology |
|-------|------------|
| Backend | Python 3.13, Django 5.1, DRF, Celery |
| Database | PostgreSQL (Azure), Redis (cache + broker) |
| Frontend | Astro 4.15 (SSR), Tailwind CSS, TypeScript |
| Infrastructure | Azure App Service, Container Instances, GitHub Actions CI/CD |
| SMS / Email | Twilio, SendGrid |
| AI | Azure OpenAI GPT-4o (HIPAA BAA), Claude Code (development) |
| Billing | Stripe (subscriptions + B2C payments) |
| Auth | Passwordless magic links (patients), SSO (staff) |
48 Architecture Decision Records
Every significant architectural decision is documented before implementation. Not for compliance theater - because AI needs context to make good decisions across a multi-month project, and because the next engineer who touches this code deserves to know why, not just what.
| ADR | Decision | Why It Matters |
|-----|----------|----------------|
| 0004 | Django Monolith over Microservices | Velocity over complexity at MVP stage. Conscious trade-off, not ignorance. |
| 0006 | Micro-Courses + JIT Token Generation | Just-in-time magic link tokens reduce exposure by 96.7%. HIPAA "minimum necessary" baked into architecture. |
| 0007 | Course Generation Playbook | Recovery methodology embedded in code. References CBT/DBT/MI, P.A.U.S.E. and S.U.R.F. frameworks, trauma-informed design. |
| 0009 | User Hierarchy & RBAC | Profile-based design (not role flags) prevents User model bloat. Org → Region → Clinic → Patient hierarchy. |
| 0010 | SOBER Score Architecture | Fibonacci-spaced reinforcement based on cognitive science. Honestly documents biometric approach that didn't work. |
| 0011 | Passwordless Authentication | SMS magic links for patients (zero friction during recovery), SSO for staff. 256-bit tokens exceed NIST standards. |
| 0014 | AI Collaboration Contract | Formalized rules for how AI and human work together. Not vague guidelines - operational contracts. |
| 0015 | HIPAA Security Controls | Comprehensive threat modeling with honest gap assessment. Maps every HIPAA safeguard to implementation status. |
| 0017 | Signal Dashboard | Custom project tracking: AI edits JSON in git. No Jira. Zero admin overhead. |
| 0018 | Story Points Velocity | Measures complexity shipped, not hours spent. AI broke the time-effort correlation - need new metrics. |
| 0020 | UUID Primary Keys Migration | Found 12 models with integer IDs during testing. Fixed enumeration attack vector before production, not after. |
| 0027 | Interested Party Access Sharing | Family gets warm updates, probation gets accountability data. Same patient, different views. Revocable, passwordless (email magic links), 42 CFR Part 2 compliant. |
| 0031 | Token Entropy Standards | 256-bit cryptographic random tokens for all magic links. Documented the math. |
| 0038 | Use of AI in Recovery | Azure OpenAI with kill switch (manual + auto-trigger on cross-org leakage), zero-trust RBAC, 7-language translation infrastructure, cost model under $150/mo. |
| 0042 | Service Schema over Medical/Business | Domain modeling for SUD treatment - not forcing healthcare into a generic SaaS data model. |
| 0043 | Behavioral Signals over Composite Risk | Rejected single "risk score" in favor of per-metric signals. AI synthesizes context at query time - richer than any formula. |
| 0045 | AI-Assisted Treatment Notes | Voice → Azure OpenAI → structured SOAP/DAP/BIRP/GIRP notes. 31 SP feature, all 7 phases shipped. |
| 0046 | Fail-Open Rate Limiting | Patient access > rate limiting during Redis outage. Intentional design - availability over strictness for vulnerable users. |
Every one of these is in the repo. When you onboard, you read the ADRs and you understand the system. No tribal knowledge. No "ask Steve, he built that part."
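The token standard in ADR-0011/0031 reduces to a few lines around Python's `secrets` module; a minimal sketch (the function name and constant are assumptions, not the repo's actual code):

```python
import secrets

TOKEN_BYTES = 32  # 32 bytes * 8 = 256 bits of entropy, per ADR-0031

def generate_magic_link_token() -> str:
    """URL-safe base64 token carrying 256 bits of cryptographic randomness."""
    # secrets uses the OS CSPRNG; 32 random bytes encode to exactly
    # 43 base64url characters once padding is stripped.
    return secrets.token_urlsafe(TOKEN_BYTES)
```

The "documented math" referenced in ADR-0031 is straightforward: 32 bytes is 256 bits, well above the entropy NIST SP 800-63B requires for single-factor lookup secrets.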
How We Build
AI-First, Not AI-Assisted
This is an AI-first company. This platform could not have been built without AI - not by one person, and not in this timeframe. The traditional path would have required a team of 4-5 engineers, 12-18 months, and significantly higher risk of failure. Instead: 1 engineer directing AI agents, 68 days, 662 story points shipped across 199 documented sessions. The CTO's role evolved from writing code to directing AI agents the same way you'd direct a team of engineers - decompose problems, set architectural constraints, review output, course-correct when the agent drifts. The human value is in judgment, not keystrokes. We are actively redefining what it means to lead development.
Session-Based Development
Work happens in documented sessions (typically 2-4 hours). Each session produces: objectives, outcomes, files modified, decisions made, and story points completed. 199 documented sessions mean any engineer can pick up exactly where the last session left off - including AI. Git commits every 30-60 minutes during active development (a 3-hour data loss incident early on spawned that discipline).
Documentation as Operating System
ADRs for architecture decisions. Session logs for work context. 11 runbooks covering SMS failures, database issues, deployment rollback, Celery restarts, Stripe webhooks. HIPAA policies. Incident reports with root cause analysis. Docs live in the repo, not a wiki. The documentation IS the project management system.
Story Points, Not Hours
AI broke the time-effort correlation. We measure complexity delivered, not time spent. Velocity is tracked in story points against git commits - every claim is auditable. Average velocity: 17.5 SP/session across 199 sessions. The Signal dashboard shows the full history.
Engineers at Orbiit specify, review, and enforce standards. The AI writes the code. This is what we believe modern engineering leadership is moving towards, and we're defining it in real time.
The Honest Version
This platform is not perfect. But it's better than applications I've worked on that had full funding and full staffing. It's doable, and we're willing to fight for it - not just because of the money, but because of the mission.
AI development is painful at times. It requires incredible attention and a meticulous nature. The AI forgets things. It misses things. It recreates work that already exists. I implement guardrails that sometimes get run over and we go off the cliff. Just like managing people. But the AI is getting better, and I'm getting better at working with it. And we are fast - and getting faster.
This document set is an example. A full technical architecture overview, role description, honest codebase audit, and candidate-facing materials - produced in about 24 hours. That doesn't happen in a corporation. It doesn't happen at most startups. It happens when you commit to the methodology and learn to work with the tools instead of fighting them.
One more thing: the AI that built this platform and produced these documents is the same one you'd be working alongside daily. Claude Code is core infrastructure here, not a suggestion box. You would be its first human engineering partner.
HIPAA — Built In, Not Bolted On
Technical compliance is architectural, not aspirational. Azure BAA is in place. Here's what that looks like in the codebase:
- Unified PHI access logging - every access to patient data recorded in AdminActionLog
- Fail-open rate limiting (ADR-0046) - patient access is never blocked by infrastructure failures
- Token entropy standards (ADR-0031) - 256-bit magic links exceed NIST SP 800-63B
- JIT token generation (ADR-0006) - credentials issued at delivery time, not pre-generated
- AI kill switch - organization-level and global emergency shutdown for all AI features. Not just manual - automatically triggers if cross-organization data leakage is detected
- Zero-trust RBAC - every AI query filtered at the data access layer, not just the UI
- Broadcast messaging - superadmin SMS broadcast for outage notifications
- 9 security fixes from a dedicated QA/security sweep (Session 193)
- 42 CFR Part 2 compliance - SUD-specific federal privacy regulation, stricter than standard HIPAA. Consent-based access with revocable permissions
- PHI never in logs - UUIDs only in application logging, PHI isolated in audit trail
Healthcare SaaS that's actually HIPAA-compliant at the architecture level, not "we'll deal with it later."
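The fail-open stance from ADR-0046 comes down to where the try/except sits; a hypothetical sketch with an in-memory stand-in for the Redis counter (class and method names are assumptions):

```python
import time

class FailOpenRateLimiter:
    """If the backing store (Redis in production) is unreachable,
    allow the request rather than lock a patient out."""

    def __init__(self, store, limit: int, window_s: int = 60):
        self.store, self.limit, self.window_s = store, limit, window_s

    def allow(self, key: str) -> bool:
        # Fixed-window counter: one bucket per (key, time window).
        bucket = f"rl:{key}:{int(time.time()) // self.window_s}"
        try:
            return self.store.incr(bucket) <= self.limit
        except ConnectionError:
            return True  # fail open: availability over strictness

class MemoryStore:
    """Test double for the Redis INCR used in production."""
    def __init__(self):
        self.counts = {}
    def incr(self, key: str) -> int:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

class DownStore:
    """Simulates a Redis outage."""
    def incr(self, key: str) -> int:
        raise ConnectionError("redis unreachable")
```

The design choice is the `except` branch: during an outage the limiter returns `True` instead of raising, so a patient mid-crisis is never blocked by infrastructure failure.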
SUDF / Digital Recovery Standards
The Substance Use Disorder Foundation (sudf.us) publishes the Digital Recovery Standards (DRS) - a framework for evaluating digital recovery tools across security, accessibility, evidence basis, and clinical standards. It's the accountability layer that state procurement and Medicaid MCOs will require.
Recognized
Baseline: secure, accessible, legally compliant
Certified
Clinical protocols, evidence-based design, outcome tracking
Accredited
Longitudinal outcomes, third-party validation, research-grade data
Orbiit currently holds Certified status. Accreditation requires longitudinal outcome data we haven't yet accumulated. We hold ourselves to the same standards we publish.
For the engineering team, this means building toward the first independently-certified digital recovery platform. The standards are real, the gap analysis is documented, and the work to reach Accredited is on the roadmap.
Full SUDF Framework Overview →
What's Built & What's Next
Shipped & Deployed
- SMS micro-course delivery (Fibonacci-spaced reinforcement)
- Passwordless auth (magic links + SSO)
- SOBER Score engagement tracking
- Multi-tenant clinician dashboards
- AI Treatment Notes (Voice → AI → SOAP/DAP/BIRP/GIRP)
- Orbie AI clinical assistant with kill switch
- Stripe billing (subscriptions)
- B2C 120-day program (480 courses, 9 course groups)
- 7-language translation infrastructure (Azure Translator + Redis caching, not yet exposed)
- Interested-party access (family vs. probation views)
- Crisis alert detection
- Check-in screener with magic link workflow
- Patient invitation system
- Broadcast messaging for outage notifications
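The Fibonacci-spaced reinforcement in the first bullet (ADR-0010) can be sketched in a few lines; the exact offsets and where the sequence starts are assumptions here, not the production schedule:

```python
from datetime import date, timedelta

def fibonacci_offsets(n: int) -> list[int]:
    """First n Fibonacci day-offsets: 1, 2, 3, 5, 8, 13, ..."""
    a, b, out = 1, 2, []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

def reinforcement_dates(start: date, n: int) -> list[date]:
    """Delivery dates for n spaced-reinforcement touches after `start`."""
    return [start + timedelta(days=d) for d in fibonacci_offsets(n)]
```

The cognitive-science rationale from ADR-0010 is the widening gaps: early touches cluster tightly while later ones spread out, matching spaced-repetition retention curves.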
The Opportunity
- Days 121-180 content (completing the 6-month program)
- Behavidence API integration (biometric stress/anxiety monitoring)
- DRS Accredited certification (longitudinal outcome data)
- Infrastructure hardening (IaC, monitoring, disaster recovery)
- Team scaling (growing to 4-5 engineers)
- Test coverage expansion (clear first win - see Honest Gaps)
- Performance optimization (Patient Detail page at scale)
- Predictive relapse modeling (engagement-based, 1-2 week risk windows)
- Voice interface for Orbie AI (hands-free for clinicians)
- White-label deployment for enterprise partners
- HFA payment processor integration
The Honest Gaps
We'd rather show you these than have you find them on day three.
Test coverage: 158 test methods across 4 of 12 apps. The apps that are tested (billing, notes, messaging, core) are tested well - shared fixtures, CI integration, coverage config, custom markers. But accounts, courses, analytics, and dashboard have zero tests. The test infrastructure is solid. The test coverage is the gap. Clear first-90-days win.
Error handling patterns: Broad `except Exception` blocks in ~90 files. MVP-era pattern - catch everything, keep the user moving. Needs systematic tightening: specific exception types, structured logging, proper error propagation. Not a fire, but the kind of technical debt a senior engineer would want to clean up methodically.
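A before/after sketch of that tightening (client and function names are hypothetical; note the tightened version also respects the "no PHI in logs" rule by never logging the phone number):

```python
import logging

logger = logging.getLogger(__name__)

# MVP-era pattern: swallow everything, keep the user moving.
def send_checkin_v1(sms_client, phone: str, body: str) -> None:
    try:
        sms_client.send(phone, body)
    except Exception:
        pass  # silent failure: no log, no retry, no on-call signal

# Tightened: specific exceptions, structured logging, propagation.
def send_checkin_v2(sms_client, phone: str, body: str) -> None:
    try:
        sms_client.send(phone, body)
    except ConnectionError as exc:
        # Transient failure: log (no PHI) and re-raise so the task
        # queue's retry policy can handle it.
        logger.warning("sms_send_retryable err=%s", exc)
        raise
    except ValueError as exc:
        # Bad input: retrying won't help, so log and propagate.
        logger.error("sms_send_invalid err=%s", exc)
        raise
```

The difference that matters operationally: v1 turns an SMS outage into silent patient-facing data loss, while v2 surfaces it to whatever retry and alerting machinery sits above.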
Infrastructure as Code: Azure deployments are currently GitHub Actions + manual configuration. No Terraform, no Bicep. CI/CD pipeline is mature (staging auto-deploy, production manual gate, Slack notifications, smoke tests) - but the infrastructure itself isn't codified. Maps directly to your strengths.
Monitoring: Deployment notifications and test results push to Slack. Azure Health Monitor fires email alerts on downtime. We know when things break. What we don't have is structured APM - no Datadog-level visibility into latency, slow queries, or performance degradation over time.
Bus factor: One developer + AI. The documentation (ADRs, session logs, runbooks) means onboarding is measured in days, not months. But the bus factor is 1. That's why we're hiring.
Want to See the Code?
The ADRs, the session logs, the architecture - it's all in the repo. Let's set up a technical deep dive. Bring your questions.
bert@myorbiit.com
770-605-5410