HeartSciences needed to integrate their AI-ECG platform with a major EHR vendor's system. EHR integrations are notoriously slow — sandbox access is limited, environments are shared, and every test cycle depends on external coordination. The integration required bidirectional HL7 messaging alongside FHIR OAuth flows, each with per-hospital configuration differences that only surface during live testing.

ML LABS built a full EHR API simulator that replicated the targeted workflows at the protocol level, letting the team develop and test HL7 message handling, FHIR session management, and per-organization field mappings before sandbox access was available.

```mermaid
graph LR
    A["EHR Simulator"] -->|"HL7 Orders"| B["AI-ECG Platform"]
    B -->|"HL7 Results"| A
    A -->|"FHIR OAuth"| B
    B --> C["Per-Org Config<br/>Profiles"]
    C --> D["Integration<br/>Test Suite"]

    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#1a1a2e,stroke:#ffd700,color:#fff
    style C fill:#1a1a2e,stroke:#0f3460,color:#fff
    style D fill:#1a1a2e,stroke:#16c79a,color:#fff
```

What the Simulator Replicated

The simulator replicated the specific integration behaviors the AI-ECG platform depended on — not a generic FHIR mock server, but the exact workflows that would have required vendor sandbox access.

HL7 Message Workflows

The EHR sends an order message when a physician requests an ECG interpretation, and the platform responds with result messages containing the AI analysis. The simulator replicated this bidirectional flow including per-org message behavior — some hospitals expect unsolicited results, others expect results only in response to explicit orders.

The simulator validated:

  • Observation segments carrying RR intervals and waveform measurements
  • Placer order number tracking that links results back to originating orders
  • Diagnosis segments with billing codes for revenue cycle systems
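A minimal sketch of the kind of result-message validation described above. The segment names (OBX for observations, OBR for the order linkage, DG1 for diagnoses) follow standard HL7 v2 conventions, but the sample message, field positions, and check logic are illustrative assumptions, not the actual implementation.

```python
def parse_hl7(message: str) -> list[list[str]]:
    """Split an HL7 v2 message into segments, each a list of pipe-delimited fields."""
    return [seg.split("|") for seg in message.strip().split("\r") if seg]

def validate_result(message: str, expected_placer_order: str) -> list[str]:
    """Return a list of validation errors for an ORU result message (illustrative)."""
    errors = []
    segments = parse_hl7(message)
    seg_ids = [s[0] for s in segments]

    # Observation segments (OBX) must carry the AI measurements.
    if "OBX" not in seg_ids:
        errors.append("missing OBX observation segments")

    # The placer order number (OBR-2) must link the result back to its order.
    obr = next((s for s in segments if s[0] == "OBR"), None)
    if obr is None or len(obr) < 3 or obr[2] != expected_placer_order:
        errors.append("placer order number does not match originating order")

    # Diagnosis segments (DG1) carry billing codes for revenue cycle systems.
    if "DG1" not in seg_ids:
        errors.append("missing DG1 diagnosis segment")

    return errors

# Illustrative result message (fields abbreviated for readability).
sample = (
    "MSH|^~\\&|AIECG|LAB|EHR|HOSP|202401010800||ORU^R01|MSG0001|P|2.5\r"
    "OBR|1|ORD12345|FIL67890|ECG^AI Interpretation\r"
    "OBX|1|NM|RR^RR Interval||812|ms|||F\r"
    "DG1|1||I49.9^Cardiac arrhythmia, unspecified^I10\r"
)
```

Running the checks against a message with the wrong placer order number reports the broken order linkage, which is exactly the class of defect the simulator caught before sandbox sessions.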

FHIR Auth and Security

The vendor's FHIR integration requires per-organization authentication — each hospital authenticates through its own endpoint. The simulator replicated these security boundaries, including adversarial scenarios where requests attempt to cross organizational boundaries, verifying tenant isolation under conditions that would only surface in multi-organization production.
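The tenant-isolation checks can be sketched as a toy per-organization token issuer plus a resource lookup that refuses cross-organization access. The class and method names (`OrgAuthSimulator`, `issue_token`, `fetch_patient`) and the status-code returns are hypothetical stand-ins, not the vendor's API.

```python
import secrets

class OrgAuthSimulator:
    """Toy simulator: each org gets its own auth endpoint and tenant boundary."""

    def __init__(self, org_ids):
        self._tokens = {}  # token -> org_id the token is bound to
        # One fabricated patient record per organization.
        self._patients = {org: {f"{org}-pat-1": {"org": org}} for org in org_ids}

    def issue_token(self, org_id: str) -> str:
        """Authenticate an organization; the token is bound to that org."""
        token = secrets.token_hex(8)
        self._tokens[token] = org_id
        return token

    def fetch_patient(self, token: str, patient_id: str):
        """Return (status, record); deny requests that cross the org boundary."""
        org = self._tokens.get(token)
        if org is None:
            return (401, None)          # unauthenticated
        if patient_id not in self._patients.get(org, {}):
            return (403, None)          # cross-org access denied
        return (200, self._patients[org][patient_id])

sim = OrgAuthSimulator(["hospital-a", "hospital-b"])
token_a = sim.issue_token("hospital-a")
```

The adversarial scenarios then become ordinary tests: a hospital-a token requesting a hospital-b record must be rejected, even though both records exist in the simulator.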

The simulator's value wasn't in replicating the vendor's API surface — it was in replicating the per-organization behavioral differences that only surface during live integration testing.

Per-Organization Configuration

Different hospitals need different HL7 field mappings — patient identifier formatting and ordering provider representation vary by site. The simulator maintained per-organization configuration profiles mirroring production variation, allowing the team to develop configuration-driven parsing logic without access to each hospital's actual EHR instance.
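Configuration-driven parsing of this kind might look like the sketch below. The hospital names, profile keys, and formatting rules (a site-specific MRN prefix, differing provider representations) are hypothetical examples of the variation described above, not the actual profiles.

```python
# Per-organization profiles drive parsing instead of hard-coded assumptions
# about field formatting. All values here are illustrative.
ORG_PROFILES = {
    "hospital-a": {
        "mrn_component": 0,                # PID-3: bare MRN in first component
        "mrn_strip_prefix": "",
        "provider_format": "id_and_name",  # ordering provider as ID^Last^First
    },
    "hospital-b": {
        "mrn_component": 0,
        "mrn_strip_prefix": "MRN-",        # this site prefixes identifiers
        "provider_format": "id_only",      # ordering provider as bare ID
    },
}

def parse_patient_id(pid3_field: str, org: str) -> str:
    """Extract the MRN from a PID-3 field according to the org's profile."""
    profile = ORG_PROFILES[org]
    component = pid3_field.split("^")[profile["mrn_component"]]
    prefix = profile["mrn_strip_prefix"]
    if prefix and component.startswith(prefix):
        return component[len(prefix):]
    return component
```

The same parsing code then handles both sites; adding a hospital means adding a profile, not another code path.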

How the Simulator Accelerated Delivery

The simulator removed the external dependency from the development cycle, changing the discovery pattern for integration bugs — surfacing issues during development rather than during limited sandbox windows.

The return-order logic for the unsolicited results workflow was developed entirely against the simulator, so each iteration avoided a coordinated testing session with the vendor's team. Simulator testing also surfaced corrections to patient identifier and ordering provider field handling: the original parsing logic assumed field formatting that held for the first hospital but broke for subsequent ones. The simulator's per-organization profiles exposed those assumptions before they reached a live environment.

Test Infrastructure

The simulator also served as the foundation for the integration test suite. Unit tests covered message handling across all organization configurations. End-to-end tests exercised the full clinical workflow from order receipt through AI processing to result delivery. This test infrastructure remained the regression safety net through production launch and subsequent feature additions.
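The regression value comes from running the same handling code against every organization profile, so a change that works for one hospital but breaks another fails in CI. The sketch below shows the pattern with a stand-in handler and hypothetical profiles; the real suite exercised full message handling, not a single flag.

```python
# Hypothetical profiles capturing the solicited/unsolicited split described
# earlier; the real profiles covered field mappings and message behavior.
ORG_PROFILES = {
    "hospital-a": {"expects_unsolicited": True},
    "hospital-b": {"expects_unsolicited": False},
}

def should_send_result(org: str, has_order: bool) -> bool:
    """Send unsolicited results only to orgs configured to accept them."""
    return has_order or ORG_PROFILES[org]["expects_unsolicited"]

def test_result_routing_across_orgs():
    # Solicited results (responding to an explicit order) go out everywhere.
    for org in ORG_PROFILES:
        assert should_send_result(org, has_order=True)
    # Unsolicited results only go where the profile allows them.
    assert should_send_result("hospital-a", has_order=False)
    assert not should_send_result("hospital-b", has_order=False)

test_result_routing_across_orgs()
```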

When a Simulator Won't Help

This approach works when the integration surface is well-defined and behavioral variations are discoverable. The simulator is not a substitute for real EHR validation — it compresses the iteration cycle so sandbox time is spent on validation rather than discovery.

For integrations where the API surface is undocumented or actively changing, a simulator provides less value. The key question is whether the external dependency is a scheduling bottleneck or a knowledge bottleneck — simulators solve the former, not the latter.

First Steps

If your team is blocked by an external system dependency, whether an EHR vendor or any third-party API with limited test access, the pattern is the same.

  1. Simulate the critical surface. Start with the highest-risk workflows and the most per-organization variation, not the entire API.
  2. Calibrate against real interactions. Every sandbox session should update simulator profiles with newly discovered behavioral differences.
  3. Build tests alongside it. The long-term value of a simulator is regression testing, not just a development stand-in.
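Step 2 above can be as simple as folding each sandbox observation back into the profile. This sketch, with hypothetical profile keys, shows the calibration loop: assumptions seed the profile, and real interactions override them.

```python
def calibrate_profile(profile: dict, observed: dict) -> dict:
    """Return an updated profile with sandbox observations overriding assumptions."""
    updated = dict(profile)     # leave the original profile untouched
    updated.update(observed)    # observed behavior wins over assumed behavior
    return updated

# Profile seeded with assumptions before any sandbox access (illustrative keys).
baseline = {"expects_unsolicited": True, "mrn_strip_prefix": ""}

# Behavioral difference discovered during a sandbox session.
observed = {"mrn_strip_prefix": "MRN-"}

calibrated = calibrate_profile(baseline, observed)
```

Each sandbox session thus makes the simulator more faithful, which is what lets later iterations skip the sandbox entirely.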

Practical Solution Pattern

Simulate the specific integration behaviors that block development — per-organization field mappings, bidirectional message workflows, and authentication binding — rather than waiting for sandbox access for each iteration. Calibrate against real system interactions and build a regression test suite on top.

This works because the bottleneck in EHR integration is rarely engineering complexity — it is iteration speed imposed by external dependencies. Integration development that typically requires months of sandbox coordination was compressed to days — over 10x faster time-to-first-integration. The test infrastructure catches regression issues that would otherwise surface during production validation. If a healthcare integration workflow is already defined and blocked by technical dependencies, AI Workflow Integration is the direct build path.