Threat model
The TRP defends against application-layer compromise, user-space manipulation, runtime injection, and the fabrication or replay of events. What the TRP does not defend against — kernel-level compromise, supply-chain compromise, backend infiltration — is declared, not silently ignored.
Why a narrow threat model
A threat model that claims to defend against everything defends against nothing in practice. The substrate is deliberately narrow. It addresses the threats that the device-side execution layer can credibly observe and sign — and declares the boundary beyond which the trust we are offering does not extend.
The result is a model a CISO can review in detail, accept the boundary of, and integrate into their own threat library — rather than a marketing claim that has to be qualified privately.
outcomes
What this lets you do today that you couldn't before
- Pass a CISO threat-modelling review on the substrate's claims. Every in-scope threat maps to a measurable signal; every out-of-scope threat is named with the layer it belongs to. There are no hand-waved claims to defend in a security review.
- Compose with the rest of your defence-in-depth strategy. Hardware-attestation services, build-pipeline integrity, backend security — each owns a layer. The substrate names which layer it owns, so combining them is engineering, not negotiation.
- Distinguish signed integrity events from policy decisions. Path, ordering, key binding, hash continuity — those are integrity questions the substrate answers. User intent — was the cardholder coerced? — is a policy question for your risk engine. The boundary is explicit.
- Reason about what is observable vs what is assumed. When a regulator asks 'how would you detect X?', the answer is either 'we sign it' or 'we declare it out of scope, and here is the layer that owns it.' No inference, no silent gaps.
- Apply policy to the declared trust basis, not to inferred device state. Hardware-bound or software-resident — the runtime declares it. You weight your decisions accordingly, with no guessing.
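As a sketch of that last point, the declared basis can feed policy directly. The weights and the function below are illustrative operator policy, not part of the spec:

```python
def risk_weight(trust_basis: str) -> float:
    """Map a declared trust basis to a policy weight.

    The weights are illustrative, not from the spec; an unrecognised
    basis gets zero weight rather than a guess.
    """
    weights = {"hardware-bound": 1.0, "software-resident": 0.6}
    return weights.get(trust_basis, 0.0)
```

The point of the sketch is the `.get(..., 0.0)` fallback: an undeclared basis is weighted as nothing, never inferred upward.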
In scope — surfaced as evidence
Application-layer compromise
Overlay attacks, accessibility-service abuse, repackaged applications, modified bundles, runtime hooking. The TRP observes these directly — overlays at the moment of input, accessibility services at registration, repackaging at app launch. The signed Evidence Token declares them.
User-space manipulation
Debugger attach, runtime injection, dynamic-instrumentation frameworks (Frida, Substrate, etc.), library substitution. These surface as runtime-environment signals on the next signed event, with enough specificity for the operator to apply policy.
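The shape of those signals might look like the sketch below — the field names (`runtime_env`, `signal`, `framework_class`) are hypothetical, chosen only to show per-signal specificity rather than an aggregate score:

```python
# A hypothetical signed event carrying runtime-environment signals.
# Field names are illustrative, not the spec's wire format.
event = {
    "tctx": "f3a9c1",
    "seq": 17,
    "runtime_env": [
        {"signal": "debugger_attach"},
        {"signal": "dynamic_instrumentation", "framework_class": "hooking"},
    ],
}

def specific_signals(event: dict) -> list[str]:
    """Pull out the named signals so the operator can apply policy per signal."""
    return [s["signal"] for s in event.get("runtime_env", [])]
```

Each entry names a concrete observation, so a policy engine can treat a debugger attach differently from library substitution instead of reacting to a single opaque score.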
Execution interference
Triggered, dormant, and injection-based adversaries targeting the evidence layer itself. What is validated is the detection of execution interference, not the static presence of a threat — this is what adversarial runtime testing in our development pipeline is built around.
Fabrication and replay
A signed event that is not part of a coherent hash chain breaks verification. A replayed event with the same tctx and seq combination as a previously seen one is rejected with a specific reason. Fabricated events without valid hardware-bound signatures fail at step ·02 of the verification pipeline.
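The first two checks can be sketched as follows. The tctx and seq fields come from the text above; the prev field and sorted-key JSON hashing are assumptions standing in for the real chain format:

```python
import hashlib
import json

def event_hash(event: dict) -> str:
    """Canonical hash of an event (sketch: sorted-key JSON, SHA-256)."""
    return hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()

def verify_chain(events: list[dict]) -> tuple[bool, str]:
    """Check hash continuity and (tctx, seq) uniqueness.

    Returns (ok, reason) -- a specific reject reason, not a score.
    """
    seen: set[tuple] = set()
    prev_hash = None
    for ev in events:
        key = (ev["tctx"], ev["seq"])
        if key in seen:
            return False, f"replay: duplicate (tctx, seq) {key}"
        seen.add(key)
        if ev["prev"] != prev_hash:
            return False, f"chain break at seq {ev['seq']}"
        prev_hash = event_hash(ev)
    return True, "ok"
```

A replayed event trips the dedup set; an event whose prev does not match the previous hash breaks the chain, and in both cases the reason names the failure.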
Autonomous-flow integrity (agentic payments)
When an autonomous agent — an LLM-orchestrated wallet, an automated recurring-payment agent, a fleet-of-things spending pattern — drives a payment, EEI signs the four-step agent flow as a hash-linked sequence: propose → consent → initiate → confirm. Each step records its conditions, its trust basis, and references the previous step's hash. Specific threats covered:
- Step omission. An agent that initiates without a signed consent step breaks the chain at verification — the reject reason identifies the missing step explicitly.
- Consent-payload tampering. The consent step’s signed conditions (amount cap, recipient set, time window) are part of the hash chain; any later step that exceeds those conditions fails verification.
- Re-binding to a different agent identity. If the signing key that produces the confirm step differs from the one that produced the consent step, the trust-basis declaration changes mid-chain and the operator's policy engine sees the discontinuity inline.
- Replayed agent decisions. The same tctx and agent-event sequence as a previously seen flow is rejected at step ·06 of verification.
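Those chain checks can be sketched end to end. The step names come from the flow above; the prev, key_id, conditions and amount fields are illustrative stand-ins for the signed payloads:

```python
import hashlib
import json

STEPS = ["propose", "consent", "initiate", "confirm"]

def step_hash(step: dict) -> str:
    """Canonical hash of a signed step (sketch: sorted-key JSON)."""
    return hashlib.sha256(json.dumps(step, sort_keys=True).encode()).hexdigest()

def verify_agent_flow(flow: list[dict]) -> tuple[bool, str]:
    """Check step completeness, hash linkage, key continuity and the
    consent conditions -- each failure names its cause."""
    names = [s["step"] for s in flow]
    if names != STEPS:
        missing = [s for s in STEPS if s not in names]
        return False, f"missing step(s): {missing}"
    prev = None
    for s in flow:
        if s["prev"] != prev:
            return False, f"chain break at {s['step']}"
        prev = step_hash(s)
    if len({s["key_id"] for s in flow}) > 1:
        return False, "trust-basis discontinuity: signing key changed mid-chain"
    if flow[2]["amount"] > flow[1]["conditions"]["amount_cap"]:
        return False, "initiate exceeds signed consent amount cap"
    return True, "ok"
```

Omitting the consent step fails the completeness check; tampering with a signed amount changes that step's hash, so the next step's prev no longer matches and the chain breaks at a named step.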
What is not covered: prompt injection at the LLM input layer (an LLM can still be tricked into producing a legitimate-looking propose step on adversarial input); model substitution at the operator's inference layer; agent-cloud compromise. Those are the operator's LLM-supply-chain and inference-platform domains. EEI signs the agent's execution — what it did and in what order — not the agent's intent. The two compose: the operator's prompt-injection defences govern what the agent decides; EEI signs whether the decision was executed coherently against device runtime state.
Out of scope — declared, not silently ignored
Kernel-level compromise
A device whose kernel has been compromised is, by definition, outside the trust the TRP can declare. We do not claim to defend against this — we treat it as a platform-trust failure. Operators who require defence at this layer should pair the substrate with hardware-attestation services that operate inside the secure element.
Supply-chain compromise
A build pipeline that has been compromised before the application ships is the operator’s responsibility to defend. Code-signing, SBOM provenance, reproducible builds — those are the relevant controls. Once the application is on the device, the TRP measures what it sees; it cannot reach back through the build.
Backend infiltration
The substrate’s verification is sovereign — meaning it works without YinkoShield in the path — but it cannot defend the operator’s backend. An attacker with admin access to the backend that consumes evidence can ignore the evidence. That is the operator’s infrastructure-security domain.
Social engineering of the user
A user who has been deceived into authorising a transaction is a user the runtime has correctly observed. The Evidence Token still records exactly what executed; what the user intended is a policy question for the operator’s risk engine, not an integrity question for the substrate.
properties
What you get when the model is in your defence-in-depth library
- ·01 Narrow, declared boundary. In-scope threats and out-of-scope threats are both enumerated. Nothing is silently ignored.
- ·02 Four in-scope threat classes. Application-layer compromise · user-space manipulation · execution interference · fabrication and replay. Each surfaces as a specific signal, not as an aggregate score.
- ·03 Four out-of-scope threat classes. Kernel-level compromise · supply-chain compromise · backend infiltration · social engineering of the user. Each is named with the layer that owns it.
- ·04 Adversarial validation methodology. Tested against triggered, dormant, and injection-based attacks on the evidence layer itself — not just static threat presence.
- ·05 Signed-or-declared, never inferred. The substrate either produces evidence about a threat (signed) or names it as out of scope (declared). No silent third option.
- ·06 Auditable boundary. A regulator or CISO can review what the substrate claims and what it does not — without proprietary documentation, just the spec.
- ·07 Composable with hardware attestation. The TRP names the boundary at user space; for kernel-level guarantees, the operator pairs the substrate with platform-attestation services. The two layers don't overlap.
Why this is the honest answer
A platform that pretends to address all of these would not survive a regulator’s review. A platform that explicitly maps which threats land in scope, which land elsewhere in the operator’s defence-in-depth strategy, and where the boundaries are — that is the platform that gets approved.
How the model fits with what you already run
The threat model in YEI-001's THREAT_MODEL.md is structured so it slots into the operator's existing defence-in-depth library — not alongside it. Where a category or boundary maps to an existing framework or regime, the framework is named; where it does not, the layer is declared explicitly.
- STRIDE · The spec’s threat table is a STRIDE-oriented analysis. Spoofing (key substitution at re-fetch), tampering (token payload alteration, ledger record alteration), repudiation (signed evidence under device key supports non-repudiation of the claim, not of intent), information disclosure (PII discipline, quasi-identifier risk), denial of service (size limits, header allowlist, dedup negative-cache), elevation of privilege (algorithm-confusion: ES256 only, header allowlist).
- EMV 3-D Secure · Scheme authentication and EEI are orthogonal unless the operator’s program explicitly links the outcomes. Evidence is complementary, not a substitute for CAVV / 3DS results.
- ISO 8583 · Transport for the Evidence Token via DE 48 (or DE 124 / 125) inside a BER-TLV envelope under tag 0xF0 — an existing field, not a new one.
- POPIA · NDPR · DPA 2019 · DPA 2012 · LPDP · Loi 09-08 · GDPR · Producer conformance requires the strict privacy profile (device_id pseudonymised or omitted; tctx not linkable to a natural person without explicit consent; no logging of raw (device_id, tctx) pairs without a documented lawful processing basis). The same strict posture satisfies South Africa's POPIA §11, Nigeria's NDPR, Kenya's DPA 2019, Ghana's DPA 2012, Côte d'Ivoire's LPDP, Morocco's Law 09-08, the Mauritius DPA, and the EU GDPR for European data subjects. The strict profile is enforced by the spec; the producer-conformance checklist in YEI-001 names each regime explicitly so an operator's DPO can map controls to local requirements without inferring.
- PCI DSS, payment scheme certification, jurisdictional admissibility · Out of scope. The spec explicitly does not replace these. The operator's program owns chargeback reason-code mapping, 3DS-outcome linkage, and dispute-evidence admissibility under local law.
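As a sketch of the ISO 8583 transport, a minimal BER-TLV wrapper: the 0xF0 tag byte follows the text above, but the token bytes are a placeholder and real DE 48 packing has scheme-specific subfield conventions this ignores.

```python
def ber_tlv(tag: int, value: bytes) -> bytes:
    """Encode one BER-TLV object with a single-byte tag.

    Sketch only: real EMV-style TLV also allows multi-byte tags and
    longer length fields than handled here.
    """
    n = len(value)
    if n < 0x80:
        length = bytes([n])                       # short form
    elif n <= 0xFF:
        length = bytes([0x81, n])                 # long form, 1 length byte
    else:
        length = bytes([0x82, n >> 8, n & 0xFF])  # long form, 2 length bytes
    return bytes([tag]) + length + value

# Placeholder for the signed compact Evidence Token.
token = b'{"v":1}'
de48_value = ber_tlv(0xF0, token)  # carried in DE 48 (or DE 124 / 125)
```

Because the envelope rides an existing private-use field, no message-format change is needed at the switch; the verifier strips the tag and length and recovers the token bytes intact.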
Where to read more
The full threat model — including the formal adversarial categories, the validation methodology, and the boundary conditions — lives in YEI-001’s THREAT_MODEL.md, shared with regulators and qualifying partners under NDA.