Module 5: Prescribing Interventions

Duration: ~90 minutes self-paced
Prerequisites: Modules 1-4

Learning objectives:

  • Map specific maturity gaps to specific SDC ecosystem capabilities
  • Decide between SDCforSMB, SDCStudio SaaS, and Sovereign deployment for a given client
  • Recognize the cases where SDC is not the right answer
  • Sequence interventions to respect the floor constraint principle
  • Price engagements correctly across the practitioner's compounding cost curve
  • Sell ongoing schema-change stewardship as a per-touch product, not as a retainer


5.1 The Intervention Catalog

The SDC ecosystem provides discrete capabilities that map to specific maturity gaps. Memorize this table.

Gap | SDC Capability | Product
No schema or inconsistent schemas | Datasource introspection + assembly review | SDC Agents SMB
Schema exists but unenforced | Generated XSD 1.1 + Schematron validators | SDCforSMB AppGen
Identifiers not stable | CUID2 minting via component ledger | SDCStudio
Provenance gaps | W3C PROV provenance records + lineage tracking | sdc-governance library (in development)
No interoperability format | JSON-LD export from generated app | SDCforSMB AppGen
Governance unmeasurable | Runtime enforcement + decision receipts | sdc-governance library (in development)
Cross-org data sharing | Federated component registry | SDCStudio (paid tier)
Data sovereignty / air gap | On-prem deployment with local CUID minting | SDC Agents Sovereign

When you read a Maturity Map report, the recommendations write themselves: a client at Level 1 Schema Integrity needs introspection + assembly, a client at Level 2 Provenance needs the common audit layer, and so on down the table.
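
As a quick aide-memoire, the catalog can be held as a plain lookup. This is an illustrative sketch only; the gap keys and the `recommend` helper are invented for this example and are not part of any SDC API.

```python
# Illustrative lookup of the intervention catalog above. Gap keys and
# the recommend() helper are invented for this example, not an SDC API.
CATALOG = {
    "no_schema":               ("Datasource introspection + assembly review", "SDC Agents SMB"),
    "schema_unenforced":       ("Generated XSD 1.1 + Schematron validators", "SDCforSMB AppGen"),
    "unstable_identifiers":    ("CUID2 minting via component ledger", "SDCStudio"),
    "provenance_gaps":         ("W3C PROV provenance records + lineage tracking", "sdc-governance library"),
    "no_interop_format":       ("JSON-LD export from generated app", "SDCforSMB AppGen"),
    "governance_unmeasurable": ("Runtime enforcement + decision receipts", "sdc-governance library"),
    "cross_org_sharing":       ("Federated component registry", "SDCStudio (paid tier)"),
    "data_sovereignty":        ("On-prem deployment with local CUID minting", "SDC Agents Sovereign"),
}

def recommend(gap: str) -> str:
    """Return the capability and product for one diagnosed gap."""
    capability, product = CATALOG[gap]
    return f"{capability} ({product})"

print(recommend("unstable_identifiers"))
# CUID2 minting via component ledger (SDCStudio)
```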


5.2 The Three Deployment Options

SDCforSMB (most cases)

  • Single-tenant, self-installed Django app over SDC Agents SMB
  • Browser UI for non-technical users
  • Wallet-based connection to SDCStudio for component minting
  • Lightweight stack: PostgreSQL + Fuseki
  • Best for: 1-50 person businesses, 1-5 datasources, single physical location

Recommend SDCforSMB when: The client has a small team, wants to run things themselves, and trusts a practitioner for setup and occasional support.

SDCStudio SaaS (full)

  • Multi-tenant hosted platform
  • Full assembly authoring UI
  • Federated component registry
  • Enterprise integrations (SSO, SCIM, audit export)

Recommend SDCStudio SaaS when: The client has 50+ employees, multiple business units, or needs to share components across organizations.

Sovereign deployment

  • On-prem or sovereign cloud
  • Air-gappable
  • Local component minting

Recommend Sovereign when: Regulatory or geopolitical requirements prohibit data leaving the premises. Typical cases: healthcare under strict regional rules, defense and classified work, and certain financial sectors.
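
The three recommendation rules above can be condensed into a small decision function. The 50-employee line and the on-premises requirement come from the text; the `Client` field names and everything else are illustrative.

```python
# Sketch of the deployment decision rules from 5.2. Thresholds taken
# from the text where stated; field names are illustrative.
from dataclasses import dataclass

@dataclass
class Client:
    employees: int
    business_units: int
    shares_components_externally: bool
    data_must_stay_on_premises: bool

def deployment(c: Client) -> str:
    if c.data_must_stay_on_premises:
        # Regulatory / geopolitical / air-gap cases win over everything else.
        return "SDC Agents Sovereign"
    if c.employees >= 50 or c.business_units > 1 or c.shares_components_externally:
        return "SDCStudio SaaS"
    # The default for small single-site teams.
    return "SDCforSMB"

assert deployment(Client(12, 1, False, False)) == "SDCforSMB"
assert deployment(Client(200, 3, True, False)) == "SDCStudio SaaS"
assert deployment(Client(30, 1, False, True)) == "SDC Agents Sovereign"
```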


5.3 When SDC Is Not the Answer

This is the most important section of the module. Recommending SDC when it is the wrong fit destroys client trust and burns the framework's credibility.

SDC is not the right answer when:

  1. The client's problem is a process problem, not a data problem. If the data is fine but nobody acts on it, you need a process practitioner, not a semantic data infrastructure deployment.

  2. The client has no operational data at all. A pre-revenue startup with three spreadsheets does not need a semantic data charter. They need to ship a product.

  3. The client is locked into a vendor with strong native semantics. Some vertical SaaS (Epic in healthcare, certain ERP systems) already provide a semantic layer. Replacing it is the wrong battle.

  4. The client's compliance requirements are met by an existing certified product. Don't recommend custom infrastructure for a problem that has a $50/month SaaS solution.

  5. The client cannot commit to ongoing maintenance. SDC infrastructure is low-maintenance but not zero-maintenance. A client who will not assign anyone to it will have a derelict deployment in 18 months. Better to deploy nothing than that.

  6. The client has already committed to a maximal-modeling standard they cannot deviate from. If a client is contractually or regulatorily required to consume FHIR R5 messages, NIEM IEPDs, or HL7 v2 segments as their authoritative interface with an external party, you usually cannot replace those standards — you have to map to them. SDC is still useful as the internal model that the client validates and works against, with a translation layer to/from the maximal standard at the boundary. But if the client expects SDC to replace the upstream standard, the answer is "we can shield you from its complexity, not eliminate it." Set that expectation early and in writing. (See Module 1 §1.6 for why maximal-modeling standards generate the complexity they do, and program/sdc_for_graph_practitioners.md for the deeper structural argument.)

Be honest in the report. An "SDC is not recommended" finding still justifies your assessment fee, builds trust, and positions you for the next engagement when conditions change.


5.4 Sequencing Interventions

The floor constraint principle dictates the sequence. Always fix the foundational dimensions before the derived ones.

Standard sequence:

  1. Schema Integrity first — without a schema, nothing else can be enforced. Run introspection, review the assembly, deploy the generated validators.
  2. Constraint Enforcement second — once the schema exists, attach the rules. Schematron + SHACL go on top of the schema from step 1.
  3. Semantic Identity third — assign CUID2 identifiers to existing entities via the catalog toolset. This is when SDCStudio wallet usage starts.
  4. Provenance fourth — turn on audit logging. The audit layer was deployed in step 1 but unused; now it has things to audit.
  5. Interoperability fifth — once the data is identified, constrained, and provenanced, exporting to JSON-LD is mechanical.
  6. Governance last — governance over a stable foundation is straightforward. Governance over chaos is impossible.

Common mistake: clients want to start with Governance because it sounds executive. Resist. A governance program over a Level 1 schema is a paperwork exercise.
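
The floor constraint principle amounts to a fixed ordering with skips for dimensions already at an acceptable level. A minimal sketch, with illustrative Maturity Map scores and an assumed target level:

```python
# The floor constraint principle as a filter over a fixed foundational
# order. Dimension names follow 5.4; the scores and the target level
# are illustrative, not prescribed values.
SEQUENCE = [
    "Schema Integrity",
    "Constraint Enforcement",
    "Semantic Identity",
    "Provenance",
    "Interoperability",
    "Governance",
]

def plan(scores: dict, target_level: int = 3) -> list:
    """Return interventions in foundational order, skipping dimensions at target."""
    return [d for d in SEQUENCE if scores.get(d, 0) < target_level]

client = {"Schema Integrity": 3, "Constraint Enforcement": 1, "Governance": 1}
print(plan(client))
# ['Constraint Enforcement', 'Semantic Identity', 'Provenance', 'Interoperability', 'Governance']
```

Governance always sorts last, which is exactly the point of the common-mistake note above.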


5.5 Pricing the Sequence

Each step is a billable engagement. Standard rough scoping for a first engagement in a new vertical:

Step | Effort | Typical fee range (USD)
Maturity Map assessment | 8-12 hours | $2,500 - $5,000
Schema Integrity (introspection + assembly) | 20-40 hours | $5,000 - $15,000
Constraint Enforcement | 15-30 hours | $4,000 - $10,000
Semantic Identity | 10-20 hours | $3,000 - $8,000
Provenance | 5-15 hours | $2,000 - $5,000
Interoperability | 10-20 hours | $3,000 - $8,000
Governance | 10-20 hours | $3,000 - $7,000
Annual reassessment | 4-8 hours | $1,500 - $3,000

The hours and fees above describe engagement #1 in a new vertical — when you have no reusable components and you are doing the domain-modeling work for the first time. This is the highest-effort, highest-learning, lowest-margin engagement you will ever do in that vertical. It is also the most important one, because everything that follows compounds off it.
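
Summing the table above gives the rough first-12-months envelope for a client who runs the full sequence, assessment through governance plus one reassessment:

```python
# Rough first-12-months fee envelope, using the ranges from the table.
FEES = {
    "Maturity Map assessment":  (2_500,  5_000),
    "Schema Integrity":         (5_000, 15_000),
    "Constraint Enforcement":   (4_000, 10_000),
    "Semantic Identity":        (3_000,  8_000),
    "Provenance":               (2_000,  5_000),
    "Interoperability":         (3_000,  8_000),
    "Governance":               (3_000,  7_000),
    "Annual reassessment":      (1_500,  3_000),
}

low  = sum(lo for lo, _ in FEES.values())
high = sum(hi for _, hi in FEES.values())
print(f"${low:,} - ${high:,}")   # $24,000 - $61,000
```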

Read the next section before pricing engagement #2 or beyond.


5.6 The Compounding Cost Curve

This is the section practitioners come back to most often after their first year in practice. It explains why the fee table above is the starting picture, not the steady state.

Your costs drop. The client's value does not.

The first time you onboard a small mental health practice, you do real component modeling work. You mint TherapySession, GroupTherapyEnrollment, InsuranceAuthorization, PHIRedaction, SlidingScalePayment, and a dozen others. Your Schema + Identity phase is 60-80 hours of practitioner labor.

The second mental health practice has substantially the same data shapes. The components you minted for client #1 are already in the catalog, free to deploy, and bring their constraints, validators, and identifiers with them. Your Schema + Identity phase for client #2 is 30-40 hours.

By client #5 in the same vertical, you are at 15-20 hours. By client #10, you are at 8-12 hours of practitioner labor for an engagement that the client still perceives as a custom-built bespoke application — because it is. The application is bespoke to their data. The components are reused; the deployment is not.

The crucial pricing rule: Do not pass your cost reduction through to the client as a price reduction. Pass your experience through as a value increase.

A practitioner who charges client #5 $30,000 for "the same engagement" delivers in 15 hours instead of 75 hours. The hourly equivalent is $2,000/hour. A practitioner who instead charges client #5 $15,000 because "it was easier" leaves 50% of the compounding on the table — and trains the vertical to expect rock-bottom prices.

Both practitioners deliver an identical bespoke application to client #5. The difference is whether they captured the value of the component library they already paid to build.
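
The curve can be made concrete with the hour figures quoted above and the hold-your-price fee from the example. The ranges are the module's own; taking their midpoints is our assumption.

```python
# The compounding cost curve: practitioner hours per Schema + Identity
# engagement fall with each client in a vertical while the fee holds.
# Hour values are midpoints of the ranges quoted in the text
# (60-80, 30-40, 15-20, 8-12); the $30,000 fee is the text's example.
HOURS = {1: 70, 2: 35, 5: 17.5, 10: 10}
FEE = 30_000

for n, hours in HOURS.items():
    print(f"client #{n:2}: {hours:>5} h -> ${FEE / hours:,.0f}/hour equivalent")
```

At the client #10 midpoint the hourly equivalent reaches $3,000/hour, which is the compounding the discounting practitioner gives away.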

The practitioner mindset shift

Coming from hourly consulting, the instinct is "I should charge for the hours I worked." That instinct is wrong here. What you charge for is:

  • The bespoke application running on the client's infrastructure
  • The years of vertical knowledge encoded in your component library
  • The fact that the client cannot get this from anyone else who is not also paying you for the time it took to build the library

The library is your capital. Capital earns returns. Hours earn wages. Choose returns.

When to discount and when not to

Legitimate reasons to discount engagement #5:

  • The client is a charity or has compelling cause-related circumstances
  • The client agrees to be a public case study
  • The client is in a sub-vertical you want to test before committing

Bad reasons to discount engagement #5:

  • "It was easier than the first one"
  • "I felt guilty charging the same for less work"
  • "The client said the price was high"
  • "The component reuse was so high I felt like I wasn't doing anything"

The middle two bad reasons (guilt and price pushback) are the ones that quietly destroy practitioner economics. Catch yourself making them.


5.7 The Schema-Change Stewardship Product

Most practitioners coming from project-based consulting underestimate this section. It is the most economically important page in the curriculum. Read it twice.

What schema-change stewardship is

A client's data does not stand still. A new field is added to their CRM. A new partner integration requires a new datasource. A regulatory update changes what they need to capture. A business line is added. A merger brings in new systems. A product launch creates new entity types.

In traditional consulting, each of these triggers friction: scope conversation, engagement letter, project plan, deliverable, invoice. The transaction cost dwarfs the work itself, so practitioners typically eat the small changes for relationship reasons and only invoice for the big ones.

In the SDC ecosystem, each of these is a touch. A touch is not a project. It is a 30-minute to 4-hour cycle:

  1. Remote introspection of the changed datasource (15-30 minutes)
  2. Delta review — what new components, if any, need to be minted (5-15 minutes)
  3. Mint the new components via SDCStudio (5 minutes; usually under $20 of wallet cost)
  4. Download the updated data model and application bundle (automatic, 1-2 minutes)
  5. Import the new app and regenerate the UI bindings by asking the LLM (10-30 minutes)
  6. Deploy the updated application to the client's infrastructure (15 minutes)

Total practitioner labor: 30 minutes to 4 hours depending on complexity. Most touches are at the lower end. The deliverable is a redeployed bespoke application, working in production the same business day.

You bill per touch, not per hour.

Per-touch pricing

Touch type | Practitioner labor | Touch fee (USD)
Add a single field to an existing entity | 30-60 min | $1,000 - $1,500
Add a new entity type with reusable components | 1-2 hours | $1,500 - $2,500
Add a new datasource (existing entity types) | 1-3 hours | $2,000 - $3,000
Add a new datasource requiring new component minting | 2-4 hours | $2,500 - $4,000
Regulatory or compliance-driven change | 1-3 hours | $2,500 - $4,500
Major refactor (new business line, M&A intake) | 4-12 hours | $5,000 - $12,000

These are starting points. Establish per-touch pricing in writing during the original engagement, not after the first change request. A per-touch price list in the original proposal is what makes the steward relationship sustainable. Without it, every change request becomes a scope negotiation.
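
A price list like the one above can live in the proposal as a simple table; at invoicing time each touch becomes a lookup, not a negotiation. The keys and the interpolation helper here are illustrative; in practice you would fix a single agreed number per touch type per client.

```python
# Illustrative per-touch price list, mirroring the table above.
# Keys and the touch_fee() helper are invented for this sketch.
PRICE_LIST = {
    "add_field":            (1_000,  1_500),
    "new_entity_reusable":  (1_500,  2_500),
    "new_datasource":       (2_000,  3_000),
    "new_datasource_mint":  (2_500,  4_000),
    "compliance_change":    (2_500,  4_500),
    "major_refactor":       (5_000, 12_000),
}

def touch_fee(touch_type: str, agreed_point: float = 0.5) -> int:
    """Fee for one touch, interpolated inside the agreed range."""
    lo, hi = PRICE_LIST[touch_type]
    return round(lo + (hi - lo) * agreed_point)

print(touch_fee("add_field"))          # 1250
print(touch_fee("major_refactor", 0))  # 5000
```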

How to introduce the touch model to a client

Most clients have never been offered this model. Their experience is: "every change is a project, every project is painful, so we postpone changes until they are unavoidable."

The touch model is the opposite: every change is a routine maintenance event, billed per occurrence, that takes hours instead of weeks. The first time a client experiences a same-day schema change in production, they will tell every peer they know.

Recommended client framing: "Your data will change. When it does, you call me, I touch the system, you have a redeployed application by end of day. Each touch is billed at our agreed price list. There is no minimum and no monthly commitment. You pay for changes when you need them, not on a calendar. And — this is the part nobody else offers — you will never face a forced software upgrade and you will never have to migrate your data. The application and the data live in your environment, on your hardware, on your timeline, indefinitely."

This is a vastly better deal for the client than a SaaS subscription that bills monthly whether or not anything changed, forces upgrades on the vendor's schedule, and risks data loss on every migration. It is also a vastly better deal for you than a managed services retainer where you are on the hook regardless of touch volume. The "no forced upgrades, no migrations" promise is the one clients quote back to their peers — it is the strongest organic referral mechanism in the program.

Putting axis 1 and axis 2 together

A practitioner with 20 clients in one vertical, averaging 6 touches per year per client at an average $2,200 per touch, is generating $264,000/year of recurring revenue from touches alone on roughly 120 hours of total labor (an average of one hour per touch, often less). That is on top of new engagement revenue from continuing to onboard clients.
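
The arithmetic in that paragraph, spelled out. The one-hour-per-touch average is the assumption that makes the total labor figure work, consistent with the 30-minute-to-4-hour touch range in 5.7:

```python
# Worked numbers for the two-axis model: 20 clients, 6 touches per
# client per year, $2,200 average fee, ~1 hour average labor per touch.
clients, touches_per_client, avg_fee, avg_hours = 20, 6, 2_200, 1.0

touches = clients * touches_per_client          # 120 touches/year
revenue = touches * avg_fee                     # $264,000/year
labor   = touches * avg_hours                   # ~120 hours/year

print(revenue, labor, round(revenue / labor))   # 264000 120.0 2200
```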

This is the actual product. The Maturity Map is the diagnostic. The bespoke application is the deliverable. The per-touch stewardship is the business. Read program/practitioner_economics.md for the full worked example.

What you're delivering in market terms: a governed context graph

When you present to a client's CTO or data leadership, frame the deliverable in the vocabulary they are hearing from their vendors and conferences. You are not delivering "a Django app with some schemas." You are delivering:

  • A context graph - a live knowledge graph (RDF/OWL/SHACL) that grows with every component you mint and every model you publish. The graph contains not just the data structure but the relationships, the vocabulary bindings, and the governance rules.
  • Structural decision traces - workflow state machines, attestation authority, and provenance chains that are part of the graph, not logged separately. Every decision is queryable: who authorized it, under what authority, what the state of the data was at the moment of the decision.
  • Deterministic governance - not dashboards, not after-the-fact alerts, not compliance documentation. Governance that is structurally bound to the payload and travels with the data wherever it goes.
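
To give a flavor of what a structural decision trace in the graph can look like, here is a hypothetical fragment in Turtle using W3C PROV terms. The `sdc:` namespace and all identifiers are invented for illustration; only the `prov:` vocabulary is the real standard.

```turtle
# Illustrative only: one decision trace as it might sit in the context
# graph. The sdc: namespace and all identifiers are hypothetical.
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix sdc:  <https://example.org/sdc/> .

sdc:decision-7f2k a prov:Activity ;
    prov:wasAssociatedWith sdc:agent-clinic-admin ;
    prov:used sdc:InsuranceAuthorization-v3 ;
    prov:generated sdc:receipt-7f2k .

sdc:receipt-7f2k a prov:Entity ;
    prov:wasAttributedTo sdc:agent-clinic-admin ;
    prov:generatedAtTime "2025-01-14T16:02:11Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
```

Because the decision, the authorizing agent, the data used, and the receipt are all triples in the same graph, "who authorized it and under what authority" is a query, not an archaeology project.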

This framing matters because "context graph" is the current market term for what SDC has been producing for years. Using the vocabulary your client's CTO is hearing from Gartner, McKinsey, and Foundation Capital positions you as the practitioner who can actually deliver what those analysts are describing - not as an afterthought or a multi-year initiative, but as an intrinsic output of the modeling work you are already doing.

Execution handoff: from modeling to runtime enforcement

SDC models define what should happen - the legitimate workflow transitions, the attestation authority, the provenance requirements. But SDC does not enforce these rules at runtime. That is a deliberate architectural boundary: SDC is the substrate, not the execution engine.

When a client's workflow reaches the point of execution - where a state change is about to become real - the governance components you modeled can be handed off to a runtime enforcement engine that evaluates whether the action is admissible under current conditions.

The handoff works because SDC components are structured, composable, and self-describing. An execution engine can consume your workflow definitions, your attestation constraints, and your authority envelopes directly from the model. If the action passes the invariant checks, it executes. If it fails, it is refused, escalated, or rolled back - and a deterministic receipt records why.
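
A minimal sketch of such an admissibility check, assuming the engine has already loaded the workflow transitions and attestation authority from the model. Every name and rule here is illustrative, not an SDC interface:

```python
# Sketch of the modeling-to-runtime handoff described above: an engine
# consuming governance rules from the model and refusing inadmissible
# actions. All names and rules are illustrative, not an SDC API.
WORKFLOW = {                      # legitimate transitions, from the model
    ("draft", "submitted"),
    ("submitted", "approved"),
    ("submitted", "rejected"),
}
AUTHORITY = {"approved": {"supervisor"}}   # who may attest each target state

def admit(current: str, target: str, actor_role: str) -> dict:
    """Evaluate one state change; return a deterministic receipt either way."""
    if (current, target) not in WORKFLOW:
        return {"admitted": False, "reason": "illegal transition"}
    # States with no AUTHORITY entry are treated as unrestricted here.
    if actor_role not in AUTHORITY.get(target, {actor_role}):
        return {"admitted": False, "reason": "actor lacks attestation authority"}
    return {"admitted": True, "reason": "invariants satisfied"}

print(admit("submitted", "approved", "clerk"))
# {'admitted': False, 'reason': 'actor lacks attestation authority'}
```

Whether the action passes or fails, the returned receipt records why, which is the "deterministic receipt" behavior the text describes.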

What this means for practitioners:

  • When you model workflow, attestation, and audit components, you are not building documentation. You are building executable governance contracts that a runtime engine can enforce.
  • The SDCStudio default library includes execution-aware components designed for this handoff. When clients adopt them, they have a clear path from modeled governance to enforced governance.
  • You do not need to build the execution engine. Your job is the modeling. The enforcement layer is a separate partner capability that plugs into the components you define.

This is the difference between governance that is described and governance that is enforced. Your models make enforcement possible. The execution engine makes it real.


Module 5 Exercise

Given the fictional client exercises/case_atlas_legal.md with the Module 4 scoring outcomes, produce:

  1. A prioritized intervention sequence (which dimension first, second, third)
  2. A recommendation between SDCforSMB / SDCStudio SaaS / Sovereign with justification
  3. A rough total fee estimate for the first 12 months
  4. One scenario in which you would not recommend SDC for this client and what you would recommend instead