# nix-effects — Full Documentation

## Guide

### Introduction

Nix configurations fail late. A misspelled option name, a string where an integer belongs, a firewall rule that references a port no service listens on — these surface at build time, at deploy time, or when a user files a ticket. The NixOS module system catches some of this with `mkOption` and `types.str`, but the checking is shallow: it validates individual fields, not relationships between them. "The build system must appear in the declared platforms list" is a constraint no existing Nix tool can express as a type.

nix-effects is a freer-monad effect layer for pure Nix, with a dependent type checker built on top of it. The effect layer is where the DX comes from. Validation is phrased as a `typeCheck` effect, and the same validator can run under different handlers that choose what happens on a failure. The handler is where the policy lives, not the validator. Everything runs at `nix eval` time, before anything builds or ships.

On top of the effect layer sits a Martin-Löf dependent type checker in `src/tc/` with Pi, Sigma, identity types with J, cumulative universes, HOAS elaboration, and verified extraction of plain Nix functions from proof terms. The kernel itself is pure functions over values, independent of the effect layer; the effect layer is what surfaces kernel errors to users. The bidirectional checker sends `typeCheck` effects carrying a field-path context, so type errors in deeply nested terms come back localized to the field that broke.

## What it looks like

A port number is an integer between 1 and 65535. In nix-effects, that's a refinement type:

```nix
let
  inherit (fx.types) Int refined;
  Port = refined "Port" Int (x: x >= 1 && x <= 65535);
in {
  ok = Port.check 8080;   # true
  bad = Port.check 99999; # false
}
```

Behind the scenes, `.check` runs the MLTT kernel's decision procedure — the value is elaborated into a kernel term, type-checked, and the predicate is evaluated.
For a refinement type like `Port`, this is fast: the kernel confirms the base type (`Int`), the guard confirms the range. You write normal Nix and the kernel runs behind the scenes.

But checking individual values is only the starting point. The kernel can also verify entire functions — confirm that a validator you wrote is type-correct, then extract it as an ordinary Nix function.

## Verified functions over real data

Write an implementation in HOAS (Higher-Order Abstract Syntax), the kernel type-checks it, and `v.verify` extracts a callable Nix function.

Here's a derivation spec validator. The kernel verifies a function that takes a record with `license`, `platforms`, and `system` fields, then checks three constraints: the build system appears in the platforms list, the system is one of the supported architectures, and the license is approved. All three checks use string comparison inside the kernel — `strEq` is a kernel primitive, not a Nix-level hack.

```nix
let
  H = fx.types.hoas;
  v = fx.types.verified;

  Spec = H.record [
    { name = "license"; type = H.string; }
    { name = "platforms"; type = H.listOf H.string; }
    { name = "system"; type = H.string; }
  ];

  licenses = mkStrList [ "MIT" "Apache-2.0" "BSD-3-Clause" ];
  systems = mkStrList [ "x86_64-linux" "aarch64-linux" "x86_64-darwin" ];

  # Kernel-verified: system ∈ platforms AND system ∈ supported AND license ∈ approved
  validateSpec = v.verify
    (H.forall "s" Spec (_: H.bool))
    (v.fn "s" Spec (s:
      v.if_ H.bool
        (v.strElem (v.field Spec "system" s) (v.field Spec "platforms" s)) {
          then_ = v.if_ H.bool (v.strElem (v.field Spec "system" s) systems) {
            then_ = v.strElem (v.field Spec "license" s) licenses;
            else_ = v.false_;
          };
          else_ = v.false_;
        }));
in {
  ok = validateSpec {
    system = "x86_64-linux";
    license = "MIT";
    platforms = [ "x86_64-linux" "aarch64-linux" ];
  }; # true

  bad = validateSpec {
    system = "arm64";
    license = "MIT";
    platforms = [ "x86_64-linux" ];
  }; # false — arm64 not in supported systems

  mismatch = validateSpec {
    system = "x86_64-linux";
    license = "MIT";
    platforms = [ "aarch64-linux" ];
  }; # false — system not in its own platforms list
}
```

`validateSpec` is a plain Nix function. You call it with a plain Nix attrset. But the implementation was verified by the MLTT kernel before extraction — the kernel confirmed that the function matches its type (`Spec → Bool`), that field projections are well-typed, and that the string membership checks compose correctly. If you made a type error in the implementation — say, compared a `Bool` where a `String` was expected — the kernel would reject it at `nix eval` time.

The record type (`H.record`) elaborates to nested Sigma in the kernel. `v.field` desugars to the right chain of first/second projections. `v.strEq` is a kernel primitive that reduces `strEq "foo" "foo"` to `true` during normalization. `v.strElem` folds over a list with `strEq`. None of this is Nix-level string comparison — it's computation inside the proof checker.

## Proofs as programs

The same kernel that verifies functions also checks mathematical proofs. Both are the same judgment — `Γ ⊢ t : T` — applied differently. A verified function proves that an implementation inhabits its type. An equality proof proves that two expressions reduce to the same normal form.

```nix
let
  H = fx.types.hoas;
  v = fx.types.verified;

  # Verified addition: Nat → Nat → Nat by structural recursion
  add = v.verify
    (H.forall "m" H.nat (_: H.forall "n" H.nat (_: H.nat)))
    (v.fn "m" H.nat (m:
      v.fn "n" H.nat (n:
        v.match H.nat m {
          zero = n;
          succ = _k: ih: H.succ ih;
        })));
in {
  five = add 2 3; # 5

  # Prove 3 + 5 = 8: the kernel normalizes both sides, Refl witnesses equality
  proof = (H.checkHoas
    (H.eq H.nat (add (H.natLit 3) (H.natLit 5)) (H.natLit 8))
    H.refl).tag == "refl"; # true
}
```

The `add` function is extracted exactly like `validateSpec` — write in HOAS, kernel checks, extract a Nix function.
The equality proof goes one step further: the kernel normalizes `add(3, 5)` by running the structural recursion, arrives at `8`, and confirms `Refl` witnesses `8 = 8`. This is computational proof — the kernel computes the answer and verifies that computation agrees with the claim.

## The effect system

The "effects" in nix-effects are algebraic effects implemented via a freer monad (Kiselyov & Ishii 2015). A computation is a tree of effects with continuations. A handler walks the tree, interpreting each effect:

```nix
let
  inherit (fx) pure bind run;
  inherit (fx.effects) get put;

  # Read state, double it, write it back
  doubleState = bind get (s: bind (put (s * 2)) (_: pure s));

  result = run doubleState fx.effects.state.handler 21;
  # result.value = 21, result.state = 42
in result
```

This matters for type checking because it separates *what* to check from *how* to report. When `DepRecord.validate` finds a type error, it sends a `typeCheck` effect. The handler decides the policy:

- **Strict** — abort on the first error
- **Collecting** — gather all errors, keep checking
- **Logging** — record every check, pass or fail

Same validation logic, different handler. The type-checking kernel itself runs as an effectful computation on this same infrastructure — the kernel is just another program running on the effects substrate.

## The verification spectrum

Not everything needs proofs. nix-effects supports four levels of assurance, and you pick the one that fits:

**Level 1 — Contract.** Write normal Nix. Types check values via `.check`. The kernel runs behind the scenes. Zero cost to adopt.

```nix
Port = refined "Port" Int (x: x >= 1 && x <= 65535);
Port.check 8080 # true
```

**Level 2 — Boundary.** Data is checked by the kernel at module interfaces. Every type has a `kernelType` and `.check` is derived from the kernel's `decide` procedure. This is what all built-in types do by default — `(ListOf String).check ["a" "b"]` elaborates the list into a kernel term and type-checks it.
**Level 3 — Property.** Write proof terms in HOAS that the kernel verifies. Prove that `3 + 5 = 8`, or that double negation on booleans is the identity, or that `append([1,2], [3]) = [1,2,3]`.

**Level 4 — Full.** The implementation IS the proof term. Write in HOAS, the kernel verifies, `extract` produces a Nix function correct by construction. The `validateSpec` example above is Level 4 — the kernel verified the validator before extracting it as a callable function.

Most users will stay at levels 1 and 2. The kernel is there when you need it. With record types and string comparison now in the kernel, Level 4 handles real-world validation — not just arithmetic on natural numbers.

## How this document is organized

The rest of the guide builds up from here:

- **[Getting Started](/nix-effects/guide/getting-started)** walks through installation, your first type, your first effect, and the end-to-end derivation demo.
- **[Proof Guide](/nix-effects/guide/proof-guide)** builds proofs incrementally, from computational equality through the J eliminator to verified extraction of plain Nix functions from kernel-checked HOAS terms.
- **[Theory](/nix-effects/guide/theory)** covers the papers that shaped the design (algebraic effects and freer monads, FTCQueue for O(1) bind, dependent type theory in the Martin-Löf and Mini-TT lineage, the handler pattern, refinement and graded types) and how they compose as a practical engineering layer with a dependent type checker on top.
- **[Trampoline](/nix-effects/guide/trampoline)** explains how `builtins.genericClosure` becomes a trampoline for stack-safe evaluation at scale.
- **[Systems Architecture](/nix-effects/guide/systems-architecture)** describes the kernel-first design: one notion of type, one checking mechanism, no adequacy bridge.
- **[Kernel Architecture](/nix-effects/guide/kernel-architecture)** details the MLTT kernel internals — NbE, bidirectional checking, HOAS elaboration, extraction, and the trust model.
- **[Kernel Specification](/nix-effects/guide/kernel-spec)** gives the formal typing rules.

## References

1. Martin-Löf, P. (1984). *Intuitionistic Type Theory*. Bibliopolis.
2. Kiselyov, O., & Ishii, H. (2015). *Freer Monads, More Extensible Effects*. Haskell Symposium 2015. [[pdf](https://okmij.org/ftp/Haskell/extensible/more.pdf)]
3. Plotkin, G., & Pretnar, M. (2009). *Handlers of Algebraic Effects*. ESOP 2009. [[doi](https://doi.org/10.1007/978-3-642-00590-9_7)]
4. Findler, R., & Felleisen, M. (2002). *Contracts for Higher-Order Functions*. ICFP 2002. [[doi](https://doi.org/10.1145/581478.581484)]

### Getting Started

## Installation

Add nix-effects as a flake input:

```nix
{
  inputs.nix-effects.url = "github:kleisli-io/nix-effects";

  outputs = { nix-effects, nixpkgs, ... }:
    let fx = nix-effects.lib;
    in {
      # fx.types, fx.run, fx.send, fx.bind, fx.effects, fx.stream ...
    };
}
```

Or import directly without flakes:

```nix
let fx = import ./path/to/nix-effects { lib = nixpkgs.lib; };
in ...
```

## Your first type

Define a type with `fx.types.refined`:

```nix
let
  inherit (fx.types) Int refined;
  Port = refined "Port" Int (x: x >= 1 && x <= 65535);
in {
  # Kernel decision procedure — fast boolean check
  ok = Port.check 8080;   # true
  bad = Port.check 99999; # false

  # Effectful validate — runs through the trampoline, produces blame context
  result = fx.run (Port.validate 99999) fx.effects.typecheck.collecting [];
  # result.state = [ { context = "Port"; message = "Expected Port, got int"; ... } ]
}
```

## Your first dependent type

One field's type depends on another field's value — this is a genuine dependent type, checked by the MLTT proof-checking kernel:

```nix
let
  inherit (fx.types) Bool Int String ListOf DepRecord refined;

  FIPSCipher = refined "FIPSCipher" String (x: builtins.elem x [
    "AES-256-GCM" "AES-192-GCM" "AES-128-GCM" "AES-256-CBC" "AES-128-CBC"
  ]);

  ServiceConfig = DepRecord [
    { name = "fipsMode"; type = Bool; }
    { name = "cipherSuites";
      type = self: if self.fipsMode then ListOf FIPSCipher else ListOf String; }
  ];
in {
  ok = ServiceConfig.checkFlat {
    fipsMode = true;
    cipherSuites = [ "AES-256-GCM" ];
  }; # true

  bad = ServiceConfig.checkFlat {
    fipsMode = true;
    cipherSuites = [ "3DES" ];
  }; # false
}
```

## Your first effect

Write a computation, then choose the handler:

```nix
let
  inherit (fx) pure bind run;
  inherit (fx.effects) get put;

  # Double the state
  doubleState = bind get (s: bind (put (s * 2)) (_: pure s));

  result = run doubleState fx.effects.state.handler 21;
  # result.value = 21 (old state returned)
  # result.state = 42 (state doubled)
in result
```

## Running the showcase

The repo includes a working end-to-end demo:

```bash
git clone https://github.com/kleisli-io/nix-effects
cd nix-effects

# Valid config — build succeeds
nix build .#cryptoService

# Invalid config (3DES in FIPS mode) — caught at eval time
nix build .#buggyService
# error: Type errors in ServiceConfig:
#   - List[FIPSCipher][3]: "3DES" is not a valid FIPSCipher

# Run all tests
nix flake check
```

## The kernel behind every type

Every `.check` call runs the MLTT type-checking kernel. When you write `Port.check 8080`, the kernel elaborates `8080` into a term, type-checks it, and returns a boolean. This is not a separate system — it is what `.check` does.
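The same split applies to container types. Here is a sketch contrasting the boolean `.check` with the effectful `.validate` on a list of ports, reusing the `ListOf`, `refined`, and `collecting` names from the examples above; the exact shape of the collected report entries is illustrative, not the library's verbatim output:

```nix
let
  inherit (fx.types) Int ListOf refined;
  Port = refined "Port" Int (x: x >= 1 && x <= 65535);
  Ports = ListOf Port;
in {
  # Boundary check: the whole list is elaborated into a kernel term at once
  ok = Ports.check [ 80 443 ];     # true
  bad = Ports.check [ 80 99999 ];  # false

  # Effectful validate under the collecting handler: one entry per failure,
  # each carrying the field-path context of the offending element
  report = fx.run (Ports.validate [ 80 99999 ]) fx.effects.typecheck.collecting [];
  # report.state ≈ [ { context = "List[Port][1]"; message = ...; } ]
}
```

The point is the division of labor: `.check` answers yes/no, while `.validate` sends `typeCheck` effects whose interpretation (abort, collect, log) is chosen by the handler, not the type.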
You can also write verified implementations using HOAS combinators:

```nix
let
  H = fx.types.hoas;
  v = fx.types.verified;

  # Write a function in HOAS, kernel type-checks it, extract as Nix function
  succ = v.verify
    (H.forall "x" H.nat (_: H.nat))
    (v.fn "x" H.nat (x: H.succ x));
in succ 5 # 6 — a certified Nix function
```

The kernel checks the implementation against its type at `nix eval` time. If the types don't match, you get an error before anything builds. See the [Kernel Architecture](/nix-effects/guide/kernel-architecture) chapter for the full pipeline.

## What's in the box

The `fx` attrset is the entire public API:

| Namespace | Contents |
|-----------|----------|
| `fx.pure`, `fx.bind`, `fx.send`, `fx.map`, `fx.seq` | Freer monad kernel |
| `fx.run`, `fx.handle` | Trampoline interpreter |
| `fx.adapt`, `fx.adaptHandlers` | Handler composition |
| `fx.types.*` | Type system (primitives, constructors, dependent, refinement, universe) |
| `fx.types.hoas` | HOAS surface combinators for the kernel |
| `fx.types.elaborateType`, etc. | Elaboration bridge: fx.types ↔ kernel |
| `fx.types.verified` | Convenience combinators for writing verified implementations |
| `fx.effects.*` | Built-in effects (state, error, reader, writer, acc, choice, conditions, typecheck, linear) |
| `fx.stream.*` | Effectful lazy sequences |

### Proof Guide

Nix configurations are concrete at eval time. Every field, every value, every list element is known before anything builds. The nix-effects dependent type checker exploits this: it normalizes both sides of an equation via NbE, and if they reduce to the same value, `Refl` proves them equal. No symbolic reasoning, no induction over unknowns — just computation on concrete data, checked through the freer-monad effect layer in 1,300 lines of pure Nix.

This chapter builds proofs incrementally, from `0 + 0 = 0` through the J eliminator to verified extraction of plain Nix functions from kernel-checked HOAS terms. Every example is runnable.
The code comes from three files in the repository: [`proof-basics.nix`](https://github.com/kleisli-io/nix-effects/blob/main/examples/proof-basics.nix), [`equality-proofs.nix`](https://github.com/kleisli-io/nix-effects/blob/main/examples/equality-proofs.nix), and [`verified-functions.nix`](https://github.com/kleisli-io/nix-effects/blob/main/examples/verified-functions.nix).

**Prerequisites.** You should know what a function is and what `let` bindings do in Nix. Familiarity with the Getting Started chapter helps but isn't required. You do not need to know type theory.

## Your first proof

A proof in nix-effects is a term that type-checks against an equality type. The simplest equality type is `Eq(Nat, 0+0, 0)` — the claim that adding zero to zero produces zero. The proof term is `Refl`, which says "both sides are the same." The kernel checks this by normalizing `0 + 0`, arriving at `0`, and confirming that `Refl` witnesses `0 = 0`.

```nix
let
  H = fx.types.hoas;
  inherit (H) nat eq zero refl checkHoas;

  # Addition by structural recursion on the first argument
  add = m: n:
    H.ind (H.lam "_" nat (_: nat))
      n
      (H.lam "k" nat (_: H.lam "ih" nat (ih: H.succ ih)))
      m;
in
# Prove: 0 + 0 = 0
(checkHoas (eq nat (add zero zero) zero) refl).tag == "refl" # → true
```

`checkHoas` is the kernel's entry point. It takes a type and a term, runs bidirectional type checking with normalization by evaluation, and returns a result. If the result's `tag` is `"refl"`, the proof was accepted. If it has an `error` field, the kernel rejected it.

The kernel doesn't pattern-match on `0 + 0 = 0` as a special case. It evaluates `add(zero, zero)` by running the `NatElim` eliminator — the base case fires, returns `n` (which is `zero`), and the kernel sees `Eq(Nat, zero, zero)`. `Refl` witnesses any `Eq(A, x, x)`, so the proof goes through.

Larger numbers work the same way. The kernel unrolls the recursion:

```nix
# 3 + 5 = 8
(checkHoas (eq nat (add (H.natLit 3) (H.natLit 5)) (H.natLit 8)) refl).tag == "refl"

# 10 + 7 = 17
(checkHoas (eq nat (add (H.natLit 10) (H.natLit 7)) (H.natLit 17)) refl).tag == "refl"
```

Both reduce to `true`. The kernel normalizes `add(3, 5)` step by step — three `succ` peels, then the base case returns `5`, then three `succ` wrappers are reapplied — and confirms the result is `8`.

## Dependent witnesses

A computational equality says "these two things are the same." A dependent witness says "here is a value, and here is evidence that it has a property." The Sigma type `Σ(x:A).P(x)` packages both: a value `x` of type `A`, and a proof that `P(x)` holds.

```nix
let
  H = fx.types.hoas;
  inherit (H) nat eq sigma zero pair refl checkHoas;
in {
  # "There exists x : Nat such that x = 0" — witnessed by (0, Refl)
  witness =
    let
      ty = sigma "x" nat (x: eq nat x zero);
      tm = pair zero refl;
    in (checkHoas ty tm).tag == "pair"; # → true
}
```

The type `Σ(x:Nat). Eq(Nat, x, 0)` says "a natural number equal to zero." The term `(0, Refl)` inhabits it: `0` for the value, `Refl` for the proof that `0 = 0`. The kernel checks both components — it confirms `0 : Nat` and `Refl : Eq(Nat, 0, 0)`.

Witnesses get more interesting when the property involves computation:

```nix
# "There exists x such that 3+5 = x" — witnessed by (8, Refl)
witnessAdd =
  let
    add = m: n:
      H.ind (H.lam "_" nat (_: nat))
        n
        (H.lam "k" nat (_: H.lam "ih" nat (ih: H.succ ih)))
        m;
    ty = sigma "x" nat (x: eq nat (add (H.natLit 3) (H.natLit 5)) x);
    tm = pair (H.natLit 8) refl;
  in (checkHoas ty tm).tag == "pair";
```

The kernel normalizes `add(3, 5)` to `8`, checks that `8` matches the witness value, and accepts the proof. If you claimed the witness was `7`, the kernel would reject it — `Refl` can't witness `8 = 7`.

## Eliminators

Eliminators are how you compute over inductive types in type theory.
Where Nix uses `if`/`else` and list folds, the kernel uses eliminators: structured recursion with a *motive* that declares what type the result has. The motive is what makes these dependently typed — the return type can vary based on the input.

### Booleans

`BoolElim(motive, trueCase, falseCase, scrutinee)` — case analysis on a boolean. With a constant motive (return type doesn't depend on the boolean), it's equivalent to an if/else:

```nix
let
  H = fx.types.hoas;
  inherit (H) nat bool eq zero refl boolElim checkHoas;
in {
  # if true then 42 else 0 = 42
  trueCase =
    let result = boolElim (H.lam "_" bool (_: nat)) (H.natLit 42) zero H.true_;
    in (checkHoas (eq nat result (H.natLit 42)) refl).tag == "refl";

  # if false then 42 else 0 = 0
  falseCase =
    let result = boolElim (H.lam "_" bool (_: nat)) (H.natLit 42) zero H.false_;
    in (checkHoas (eq nat result zero) refl).tag == "refl";
}
```

### Natural numbers

`NatElim(motive, base, step, n)` — structural recursion. The base case handles zero, the step case takes the predecessor `k` and the inductive hypothesis `ih` (the result for `k`) and produces the result for `S(k)`:

```nix
let
  H = fx.types.hoas;
  inherit (H) nat eq refl checkHoas;

  # double(n): double(0) = 0, double(S(k)) = S(S(double(k)))
  double = n:
    H.ind (H.lam "_" nat (_: nat))
      H.zero
      (H.lam "k" nat (_: H.lam "ih" nat (ih: H.succ (H.succ ih))))
      n;
in
# double(4) = 8
(checkHoas (eq nat (double (H.natLit 4)) (H.natLit 8)) refl).tag == "refl"
```

The kernel unrolls four steps: `double(4) = S(S(double(3))) = ... = 8`.

### Lists

`ListElim(elemType, motive, nilCase, consCase, list)` — structural recursion on lists.
The nil case provides the base value, the cons case takes the head, tail, and inductive hypothesis:

```nix
let
  H = fx.types.hoas;
  inherit (H) nat eq refl checkHoas;

  list123 = H.cons nat (H.natLit 1)
    (H.cons nat (H.natLit 2)
      (H.cons nat (H.natLit 3) (H.nil nat)));

  # sum(xs): fold with addition
  sumList = xs:
    H.listElim nat (H.lam "_" (H.listOf nat) (_: nat))
      H.zero
      (H.lam "h" nat (h:
        H.lam "t" (H.listOf nat) (_:
          H.lam "ih" nat (ih:
            H.ind (H.lam "_" nat (_: nat))
              ih
              (H.lam "k" nat (_: H.lam "ih2" nat (ih2: H.succ ih2)))
              h))))
      xs;
in
# sum([1, 2, 3]) = 6
(checkHoas (eq nat (sumList list123) (H.natLit 6)) refl).tag == "refl"
```

### Sums (coproducts)

`SumElim(L, R, motive, leftCase, rightCase, scrutinee)` — case analysis on `Left(a)` or `Right(b)`:

```nix
let
  H = fx.types.hoas;
  inherit (H) nat bool sum eq zero refl checkHoas;
in {
  # case Left(5) of { Left n → n; Right _ → 0 } = 5
  leftCase =
    let
      scrut = H.inl nat bool (H.natLit 5);
      result = H.sumElim nat bool
        (H.lam "_" (sum nat bool) (_: nat))
        (H.lam "n" nat (n: n))
        (H.lam "b" bool (_: zero))
        scrut;
    in (checkHoas (eq nat result (H.natLit 5)) refl).tag == "refl";
}
```

## The J eliminator

Everything above uses `Refl` on equalities that the kernel verifies by computation — normalize both sides, confirm they match. But what if you want to reason *about* equalities? Prove that equality is symmetric, or that applying a function to equal inputs gives equal outputs? That requires the J eliminator, the fundamental proof principle for identity types in Martin-Löf type theory [1].

J says: if you can prove something about `x = x` (the reflexive case), you can prove it about any `x = y` where the equality is witnessed.

```
J(A, a, P, pr, b, eq)
  A  : type
  a  : left side of the equality
  P  : λ(y:A). λ(_:Eq(A,a,y)). Type — the motive
  pr : P(a, refl) — the base case (when y = a)
  b  : right side
  eq : Eq(A, a, b) — proof that a = b

Returns: P(b, eq)
Computation rule: J(A, a, P, pr, a, refl) = pr
```

When the equality proof is `Refl`, J returns the base case directly. The kernel reduces `J(..., refl)` to `pr`, and the proof goes through.

### Congruence

If `x = y`, then `f(x) = f(y)` for any function `f`. This is the standard *cong* combinator, derived from J:

```nix
let
  H = fx.types.hoas;
  inherit (H) nat eq u forall refl checkHoas;

  congType = forall "A" (u 0) (a:
    forall "B" (u 0) (b:
      forall "f" (forall "_" a (_: b)) (f:
        forall "x" a (x:
          forall "y" a (y:
            forall "_" (eq a x y) (_:
              eq b (H.app f x) (H.app f y)))))));

  congTerm = H.lam "A" (u 0) (a:
    H.lam "B" (u 0) (b:
      H.lam "f" (forall "_" a (_: b)) (f:
        H.lam "x" a (x:
          H.lam "y" a (y:
            H.lam "p" (eq a x y) (p:
              H.j a x
                (H.lam "y'" a (y':
                  H.lam "_" (eq a x y') (_:
                    eq b (H.app f x) (H.app f y'))))
                refl y p))))));
in (checkHoas congType congTerm).tag == "lam"
```

The derivation: J eliminates the proof `p : Eq(A, x, y)`. The motive says "given `y'` equal to `x`, produce `Eq(B, f(x), f(y'))`." In the base case, `y' = x`, so the goal is `Eq(B, f(x), f(x))` — which `Refl` proves. J then transports this to `Eq(B, f(x), f(y))`.

This generic combinator type-checks with abstract variables `A`, `B`, `f`, `x`, `y` — the kernel verifies the reasoning is valid for all inputs. On concrete data, J receives `Refl` (since concrete equalities reduce by computation), and the kernel simplifies:

```nix
# Concrete: from add(2,1) = 3, derive succ(add(2,1)) = succ(3)
congConcrete =
  let
    add21 = add (H.natLit 2) (H.succ H.zero);
    three = H.natLit 3;
  in (checkHoas
    (eq nat (H.succ add21) (H.succ three))
    (H.j nat add21
      (H.lam "y" nat (y:
        H.lam "_" (eq nat add21 y) (_:
          eq nat (H.succ add21) (H.succ y))))
      refl three refl)).tag == "j";
```

### Symmetry

If `x = y`, then `y = x`. The motive is `λy'.λ_. Eq(A, y', x)` — when `y' = x`, the goal is `Eq(A, x, x)`, proved by `Refl`:

```nix
# sym : Π(A:U₀). Π(x:A). Π(y:A). Eq(A,x,y) → Eq(A,y,x)
symTerm = H.lam "A" (u 0) (a:
  H.lam "x" a (x:
    H.lam "y" a (y:
      H.lam "p" (eq a x y) (p:
        H.j a x
          (H.lam "y'" a (y': H.lam "_" (eq a x y') (_: eq a y' x)))
          refl y p))));
```

### Transitivity

If `x = y` and `y = z`, then `x = z`. Fix `p : Eq(A, x, y)`, then eliminate `q` with J. The motive is `λz'.λ_. Eq(A, x, z')` — when `z' = y`, the goal is `Eq(A, x, y)`, proved by `p`:

```nix
# trans : Π(A:U₀). Π(x:A). Π(y:A). Π(z:A). Eq(A,x,y) → Eq(A,y,z) → Eq(A,x,z)
transTerm = H.lam "A" (u 0) (a:
  H.lam "x" a (x:
    H.lam "y" a (y:
      H.lam "z" a (z:
        H.lam "p" (eq a x y) (p:
          H.lam "q" (eq a y z) (q:
            H.j a y
              (H.lam "z'" a (z': H.lam "_" (eq a y z') (_: eq a x z')))
              p z q))))));
```

### Transport

The most general form. If `x = y` and `P(x)` holds, then `P(y)` holds. Congruence, symmetry, and transitivity are all special cases.

```nix
# transport : Π(A:U₀). Π(P:A→U₀). Π(x:A). Π(y:A). Eq(A,x,y) → P(x) → P(y)
transportTerm = H.lam "A" (u 0) (a:
  H.lam "P" (forall "_" a (_: u 0)) (bigP:
    H.lam "x" a (x:
      H.lam "y" a (y:
        H.lam "p" (eq a x y) (p:
          H.lam "px" (H.app bigP x) (px:
            H.j a x
              (H.lam "y'" a (y': H.lam "_" (eq a x y') (_: H.app bigP y')))
              px y p))))));
```

### Chaining proofs

J applications compose.
Here we chain congruence (lift through `succ`) with symmetry (reverse the equality) — two J applications, the output of the first feeding as the equality proof to the second:

```nix
# From Eq(Nat, add(2,1), 3):
#   Step 1 (cong succ): Eq(Nat, S(add(2,1)), S(3))
#   Step 2 (sym):       Eq(Nat, S(3), S(add(2,1)))
combinedProof =
  let
    add21 = add (H.natLit 2) (H.succ H.zero);
    three = H.natLit 3;
    sadd21 = H.succ add21;
    sthree = H.succ three;

    # Step 1: cong succ
    congStep = H.j nat add21
      (H.lam "y" nat (y:
        H.lam "_" (eq nat add21 y) (_:
          eq nat sadd21 (H.succ y))))
      refl three refl;

    # Step 2: sym on the cong result
  in (checkHoas
    (eq nat sthree sadd21)
    (H.j nat sadd21
      (H.lam "y" nat (y:
        H.lam "_" (eq nat sadd21 y) (_:
          eq nat y sadd21)))
      refl sthree congStep)).tag == "j";
```

## Verified extraction

Proofs establish that properties hold. Verified extraction goes further: write an implementation in HOAS, the kernel type-checks it against a specification, and `v.verify` extracts a callable Nix function. The result is an ordinary Nix value — an integer, a boolean, a function, a list — but one whose implementation was machine-checked before use.

### The simplest case

```nix
let
  H = fx.types.hoas;
  v = fx.types.verified;

  # Kernel-verified successor: Nat → Nat
  succFn = v.verify
    (H.forall "x" H.nat (_: H.nat))
    (v.fn "x" H.nat (x: H.succ x));
in succFn 5 # → 6
```

`v.verify` does three things: elaborates the HOAS into kernel terms, type-checks the implementation against the type, and extracts the result as a Nix value. The extracted function is plain Nix — no kernel overhead at call time.

`v.fn` is a convenience wrapper around `H.lam` that threads the extraction metadata. You could write raw `H.lam` instead, but `v.fn` handles the plumbing for multi-argument functions and pattern matching.

### Pattern matching

`v.match` builds a `NatElim` with a constant motive. You provide the result type, the scrutinee, and branches for `zero` and `succ`:

```nix
# Verified addition: Nat → Nat → Nat
addFn = v.verify
  (H.forall "m" H.nat (_: H.forall "n" H.nat (_: H.nat)))
  (v.fn "m" H.nat (m:
    v.fn "n" H.nat (n:
      v.match H.nat m {
        zero = n;
        succ = _k: ih: H.succ ih;
      })));

addFn 2 3 # → 5
addFn 0 7 # → 7
```

The `succ` branch receives two arguments: `k` (the predecessor) and `ih` (the inductive hypothesis — the result for `k`). For addition, `ih` is `add(k, n)`, so wrapping it with `succ` gives `add(S(k), n)`.

### Boolean and cross-type elimination

`v.if_` builds a `BoolElim` for verified booleans:

```nix
# Verified not: Bool → Bool
notFn = v.verify
  (H.forall "b" H.bool (_: H.bool))
  (v.fn "b" H.bool (b:
    v.if_ H.bool b {
      then_ = v.false_;
      else_ = v.true_;
    }));

notFn true  # → false
notFn false # → true
```

Cross-type elimination — scrutinize one type, return another — works by specifying a different result type:

```nix
# Verified isZero: Nat → Bool
isZeroFn = v.verify
  (H.forall "n" H.nat (_: H.bool))
  (v.fn "n" H.nat (n:
    v.match H.bool n {
      zero = v.true_;
      succ = _k: _ih: v.false_;
    }));

isZeroFn 0 # → true
isZeroFn 5 # → false
```

### List operations

`v.map`, `v.filter`, and `v.fold` are verified list combinators.
Each takes HOAS terms, not Nix functions — the kernel verifies the entire pipeline:

```nix
# Composed pipeline: filter zeros, then sum
# Input: [0, 3, 0, 2, 1] → Filter: [3, 2, 1] → Sum: 6
composedResult =
  let
    input = H.cons H.nat (v.nat 0)
      (H.cons H.nat (v.nat 3)
        (H.cons H.nat (v.nat 0)
          (H.cons H.nat (v.nat 2)
            (H.cons H.nat (v.nat 1) (H.nil H.nat)))));

    nonZero = v.fn "n" H.nat (n:
      v.match H.bool n {
        zero = v.false_;
        succ = _k: _ih: v.true_;
      });

    addCombine = v.fn "a" H.nat (a:
      v.fn "acc" H.nat (acc:
        v.match H.nat a {
          zero = acc;
          succ = _k: ih: H.succ ih;
        }));
  in v.verify H.nat
    (v.fold H.nat H.nat (v.nat 0) addCombine
      (v.filter H.nat nonZero input)); # → 6
```

The kernel verifies the filter predicate (`Nat → Bool`), the fold combinator (`Nat → Nat → Nat`), and their composition before extracting the result. A type error in any component — say, returning a `Nat` where the filter expects a `Bool` — fails at `nix eval`, not at runtime.

### Records and string operations

The kernel supports record types (elaborated as nested Sigma) and string equality (`strEq` is a kernel primitive). Together they handle verified functions over structured data with string fields:

```nix
let
  H = fx.types.hoas;
  v = fx.types.verified;

  RecTy = H.record [
    { name = "name"; type = H.string; }
    { name = "target"; type = H.string; }
  ];

  # Verified: does the name match the target?
  matchFn = v.verify
    (H.forall "r" RecTy (_: H.bool))
    (v.fn "r" RecTy (r:
      v.strEq (v.field RecTy "name" r) (v.field RecTy "target" r)));
in {
  yes = matchFn { name = "hello"; target = "hello"; }; # → true
  no  = matchFn { name = "hello"; target = "world"; }; # → false
}
```

`v.field` desugars to the right chain of `fst`/`snd` projections for the field's position in the Sigma chain. `v.strEq` reduces in the kernel via the `StrEq` primitive — it compares string literals during normalization, producing `true` or `false` as kernel values.
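As one more sketch with the list combinators from this section (assuming the `v.fold` argument order shown in the pipeline example, with the element bound before the accumulator), a verified list length counts one `succ` per element and ignores the element itself:

```nix
let
  H = fx.types.hoas;
  v = fx.types.verified;

  input = H.cons H.nat (v.nat 7)
    (H.cons H.nat (v.nat 7)
      (H.cons H.nat (v.nat 7) (H.nil H.nat)));

  # combine : Nat → Nat → Nat — drop the element, increment the accumulator
  inc = v.fn "h" H.nat (_:
    v.fn "acc" H.nat (acc: H.succ acc));
in
  v.verify H.nat (v.fold H.nat H.nat (v.nat 0) inc input) # length 3
```

Because the combinator never inspects the element, the same shape works for any element type: swap `H.nat` for the element type in `v.fold`'s first argument and in `inc`'s first binder.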
## What the kernel can and cannot prove

The nix-effects kernel implements Martin-Löf type theory with universes, dependent functions, dependent pairs, identity types, natural numbers, booleans, lists, sums, unit, void, and eleven axiomatized Nix primitives. It can prove any property that reduces to a comparison of normal forms.

**It can prove:**

- Equalities between computed values: `add(3, 5) = 8`, `length([1,2,3]) = 3`, `append([1,2], [3]) = [1,2,3]`
- Properties of concrete data: "this config field is in the allowed set," "this port number is valid," "these two strings match"
- Generic combinators: `cong`, `sym`, `trans`, and `transport` type-check with abstract variables
- Verified function extraction: any function expressible with the kernel's eliminators can be verified and extracted

**It cannot prove:**

- **Symbolic induction.** `forall n, n + 0 = n` requires induction over an abstract variable. The `NatElim` evaluator only reduces on concrete values (`VZero`, `VSucc`), not symbolic ones. You can prove `3 + 0 = 3` and `100 + 0 = 100`, but not the universal statement. The evaluator would need to produce symbolic normal forms, which is a fundamentally different normalization strategy.
- **Properties of Nix builtins.** The kernel axiomatizes `String`, `Int`, `Float`, etc. as opaque types. `builtins.stringLength` is not a kernel function — the kernel has `strEq` for string comparison but no string operations beyond equality and list membership.
- **Eta-expansion.** The kernel does not identify `f` with `λx.f(x)`. Functions that are extensionally equal but intensionally different are not convertible.
- **User-defined recursive types.** The kernel has inductive types (Nat, List, Bool, Sum) built in as eliminators but does not support user-defined inductive families. You cannot define a binary tree or a red-black tree in the kernel.

For Nix, the "concrete data" restriction is less of a limitation than it sounds. Nix evaluates configurations completely before building — every module option, every service config, every package attribute is a concrete value at eval time. The kernel verifies all computable properties of that concrete data. What it gives up is proving things about *all possible* configurations generically. In practice, you prove properties of the specific configuration being built, which is the one that matters.

## Quick reference

| Pattern | Type | Proof term |
|---------|------|------------|
| Computational equality | `Eq(A, x, y)` where `x`, `y` normalize to same value | `Refl` |
| Dependent witness | `Σ(x:A). P(x)` | `(value, proof)` |
| Case analysis (bool) | `BoolElim(motive, true_case, false_case, b)` | Result of elimination |
| Structural recursion (nat) | `NatElim(motive, base, step, n)` | Result of elimination |
| List recursion | `ListElim(elem, motive, nil_case, cons_case, xs)` | Result of elimination |
| Sum dispatch | `SumElim(L, R, motive, left_case, right_case, s)` | Result of elimination |
| Congruence | `Eq(A,x,y) → Eq(B, f(x), f(y))` | `J(A, x, λy'.λ_. Eq(B,f(x),f(y')), Refl, y, p)` |
| Symmetry | `Eq(A,x,y) → Eq(A,y,x)` | `J(A, x, λy'.λ_. Eq(A,y',x), Refl, y, p)` |
| Transitivity | `Eq(A,x,y) → Eq(A,y,z) → Eq(A,x,z)` | `J(A, y, λz'.λ_. Eq(A,x,z'), p, z, q)` |
| Transport | `Eq(A,x,y) → P(x) → P(y)` | `J(A, x, λy'.λ_. P(y'), px, y, p)` |
| Ex falso | `Void → A` | `absurd(A, x)` |
| Verified function | `v.verify type impl` | Extracted Nix function |

## References

1. Martin-Löf, P. (1984). *Intuitionistic Type Theory*. Bibliopolis.
2. The Univalent Foundations Program (2013). *Homotopy Type Theory: Univalent Foundations of Mathematics*. Institute for Advanced Study. [[pdf](https://homotopytypetheory.org/book/)]
3. Norell, U. (2007). *Towards a practical programming language based on dependent type theory*. PhD thesis, Chalmers. [[pdf](https://www.cse.chalmers.se/~ulfn/papers/thesis.pdf)]

### Theory

Nine papers shaped nix-effects.
Here's how each one maps to code. ## Algebraic effects and the freer monad A computation is a tree of effects with continuations. A handler walks the tree, interpreting each effect — either resuming the continuation with a value or aborting it. That's the handler model from Plotkin & Pretnar (2009), and nix-effects implements it directly. A computation is either: - `Pure value` — finished, returning a value - `Impure effect continuation` — suspended, waiting for a handler to service `effect` and feed the result to `continuation` `send` creates an `Impure` node: ```nix send "get" null # Impure { effect = { name = "get"; param = null; }; queue = [k]; } ``` `bind` appends to the continuation queue: ```nix bind (send "get" null) (s: pure (s * 2)) # Impure { effect = get; queue = [k1, k2] } — O(1) per bind ``` Handlers provide the interpretation: ```nix handlers = { get = { param, state }: { resume = state; inherit state; }; put = { param, state }: { resume = null; state = param; }; }; ``` `resume` feeds a value to the continuation. `abort` discards it and halts. ## FTCQueue: O(1) bind Naïve free monads have O(n²) bind chains. The problem is reassociation: ``` (m >>= f) >>= g ≡ m >>= (f >=> g) ``` Each reassociation traverses the whole tree. Kiselyov & Ishii (2015) solved this by storing continuations in a catenable queue (FTCQueue) instead of a list. `snoc` is O(1); queue application (`qApp`) amortizes the reassociation across traversal. Total cost: O(n) for n bind operations, regardless of nesting depth. This matters in practice — a `DepRecord` with 100 fields sends 100 effects, each of which binds. Without the queue, validation time would be quadratic in the number of fields. The interpreter that processes these queued continuations uses defunctionalization (Reynolds 1972): the recursive handler becomes a data structure — effect name, parameter, handler result — and a worklist loop (`builtins.genericClosure`) iterates over steps instead of recursing. 
This is the pattern Van Horn & Might (2010) identified in *Abstracting Abstract Machines*: store-allocated continuations plus worklist iteration give you bounded stack depth. The [Trampoline](/nix-effects/guide/trampoline) chapter covers the implementation — how `genericClosure` becomes a trampoline, why `deepSeq` prevents thunk accumulation, and what the 1,000,000-operation benchmark actually measures. ## Value-dependent types Martin-Löf (1984) is where types that depend on values come from. In nix-effects, all types bottom out in the MLTT kernel (`src/tc/`), which handles type checking, universe level computation, and proof verification. The user-facing API provides convenience constructors on top. **Sigma (Σ)** — the dependent pair. The second component's type is a function of the first component's value: ```nix Σ(fipsMode : Bool). if fipsMode then ListOf FIPSCipher else ListOf String ``` In nix-effects: ```nix Sigma { fst = Bool; snd = b: if b then ListOf FIPSCipher else ListOf String; } ``` `Sigma.validate` decomposes the check: validate `fst` first, then — only if it passes — evaluate `snd fst-value` and validate that. The dependent expression is never evaluated on a wrong-typed input. That ordering is the whole point. **Pi (Π)** — dependent function type. The return type depends on the argument: ```nix Pi { domain = String; codomain = _: Int; } ``` The kernel's decision procedure checks `isFunction` — closures are opaque, so that's all it can verify at introduction. Full verification happens at elimination via the kernel's type-checking judgment. **Universe hierarchy.** Types themselves have types, stratified from `Type_0` through `Type_4` to guard against Russell's paradox: ```nix (typeAt 0).check Int # true — Int lives at universe 0 level String # 0 (typeAt 1).check (typeAt 0) # true — Type_0 lives at universe 1 ``` Universe levels are computed by the kernel's `checkTypeLevel`: `level(Pi(A,B)) = max(level(A), level(B))`, `level(U(i)) = i+1`. 
Self-containing universes (`U(i) : U(i)`) are rejected — `level(U(i)) = i+1 > i`, so the check fails. This prevents both accidental and adversarial paradoxes for every kernel-backed type. ## Refinement types Sometimes you need a type that's narrower than `Int` but wider than an enum. Freeman & Pfenning (1991) introduced the refinement type: given a base type T and a predicate P, the type {x:T | P(x)} admits only values of T that satisfy P. `refined` is the direct implementation — `refined "Port" Int (x: x >= 1 && x <= 65535)` is {x:Int | 1 ≤ x ≤ 65535} with a name attached. Rondon et al. (2008) later scaled the idea with SMT-based inference under the name *Liquid Types*. We skip the solver and use runtime predicate checking: ```nix Port = refined "Port" Int (x: x >= 1 && x <= 65535); NonEmpty = refined "NonEmpty" String (s: builtins.stringLength s > 0); Nat = refined "Nat" Int (x: x >= 0); ``` `Port.check` composes the kernel's decision (`Int`) with the refinement predicate. Combinators for building compound predicates: ```nix allOf [ pred1 pred2 ] # conjunction anyOf [ pred1 pred2 ] # disjunction negate pred # negation ``` ## Soundness and what Nix provides The kernel's soundness is standard MLTT metatheory. Martin-Löf (1984) set out the rules, and the Mini-TT lineage (Coquand et al. 2009, Kovács 2022, and the elaboration-zoo and pi-forall tutorials) gives the bidirectional elaboration and normalization-by-evaluation recipe the kernel follows. Nothing in the kernel is novel on the metatheory side, and none of the soundness argument routes through anything Nix-specific. What Nix contributes is a faithful runtime for definitional equality. NbE requires that reducing open terms is deterministic and side effect free, and pure Nix evaluation gives you exactly that. `builtins.trace` and `builtins.throw` are not observable through definitional equality, so they cannot perturb conversion checking. The effect layer sits above the kernel as meta-level freer monad data. 
`Impure` and `Pure` attrsets are values walked by pure handlers. They are not object-language effects in the kernel's grammar, and the kernel has no constructor for them. This is how the effect layer can surface kernel errors as `typeCheck` effects carrying context paths without widening the trusted core. ## Graded linear types Orchard, Liepelt & Eades (2019) introduced a type system where each variable carries a usage grade from a resource semiring. We implement three points on that spectrum: `Linear` (exactly one use), `Affine` (at most one), and `Graded` (exactly n uses). In practice, the handler maintains a resource map counting each `consume` call against a `maxUses` bound. At handler exit, a finalizer checks that every resource was consumed the expected number of times. The grade discipline is enforced at runtime through the effect system, not statically — so you get usage tracking without a custom type checker, but violations show up at eval time rather than before it. That's a real trade-off, and for configuration validation we're comfortable with it. ## Higher-order contracts and blame Findler & Felleisen (2002) solved a problem that shows up immediately when you try to check function types: you can't test a function contract at the point of definition. A function is a closure — opaque. The only way to check it is to wrap it and verify at application boundaries. In nix-effects, this is exactly what happens. `decide(H.forall ..., f)` can only confirm `builtins.isFunction f` — the kernel can't look inside a Nix closure. For full verification, you write the implementation in HOAS, the kernel type-checks the term, and `extract` wraps the result as a Nix function that elaborates its arguments at every call boundary. The contract is enforced at application, not definition. That's Findler & Felleisen. Their other contribution is blame tracking. When a check fails, the error needs to say *which* contract was violated and *where*. 
In nix-effects, `.validate` sends `typeCheck` effects carrying blame context — type name, field path, rejected value — and the handler decides the error policy: `strict` throws immediately, `collecting` accumulates all failures, `logging` records every check. Same kernel judgment, different reporting strategy — the handler pattern (Plotkin & Pretnar) composes with the contract pattern (Findler & Felleisen) to separate what to check from how to report. ## References 1. Plotkin, G., & Pretnar, M. (2009). *Handlers of Algebraic Effects*. ESOP 2009. [[doi](https://doi.org/10.1007/978-3-642-00590-9_7)] 2. Kiselyov, O., & Ishii, H. (2015). *Freer Monads, More Extensible Effects*. Haskell Symposium 2015. [[pdf](https://okmij.org/ftp/Haskell/extensible/more.pdf)] 3. Martin-Löf, P. (1984). *Intuitionistic Type Theory*. Bibliopolis. 4. Rondon, P., Kawaguchi, M., & Jhala, R. (2008). *Liquid Types*. PLDI 2008. [[doi](https://doi.org/10.1145/1375581.1375602)] 5. Findler, R., & Felleisen, M. (2002). *Contracts for Higher-Order Functions*. ICFP 2002. [[doi](https://doi.org/10.1145/581478.581484)] 6. Van Horn, D., & Might, M. (2010). *Abstracting Abstract Machines*. ICFP 2010. (See [Trampoline](/nix-effects/guide/trampoline)) 7. Freeman, T., & Pfenning, F. (1991). *Refinement Types for ML*. PLDI 1991. [[doi](https://doi.org/10.1145/113445.113468)] 8. Orchard, D., Liepelt, V., & Eades, H. (2019). *Quantitative Program Reasoning with Graded Modal Types*. ICFP 2019. [[doi](https://doi.org/10.1145/3341714)] ## Prior art - Borja, V. (2026). *nfx: Nix Algebraic Effects System with Handlers*. [[github](https://github.com/vic/nfx)] — Implements algebraic effects in pure Nix using a context-passing model with `immediate`/`pending` constructors. 
nix-effects adopted nfx's `adapt` handler combinator, `mk { doc, value, tests }` API pattern, and effect module vocabulary (`state`, `acc`, `conditions`, `choice`, streams), while building a new core on the freer monad encoding from Kiselyov & Ishii (2015) and adding value-dependent types and a type-checking kernel that nfx does not attempt. ### Trampoline The trampoline is how nix-effects interprets freer monad computations with O(1) stack depth in a language with no iteration primitives and no tail-call optimization. ## The problem Nix is a pure, lazy, functional language. It has no loops. Every "iteration" is recursion. A naïve free monad interpreter using mutual recursion would build a call stack proportional to the computation length: ``` run (bind (bind (bind ... (send "get" null) ...) ...) ...) → run step1 → run step2 → run step3 → ... (N frames deep) ``` For validation of a large config — say, a NixOS module with hundreds of fields — this would blow the stack. ## The solution: `builtins.genericClosure` Nix's `builtins.genericClosure` is the only built-in iterative primitive. It implements a worklist algorithm: ``` genericClosure { startSet = [ initialNode ]; operator = node -> [ ...nextNodes ]; } ``` `operator` is called on each node. New nodes returned by `operator` are added to the worklist if their `key` hasn't been seen before. The result is the set of all reachable nodes. nix-effects repurposes this as a trampoline: each step of computation is a node. The `operator` function handles one effect and produces the next step as a singleton list. The computation terminates when `operator` returns `[]` (i.e., when we reach a `Pure` node). ```nix steps = builtins.genericClosure { startSet = [{ key = 0; _comp = comp; _state = initialState; }]; operator = step: if isPure step._comp then [] # halt else [ nextStep ]; # one more step }; ``` Stack depth: **O(1)**. `genericClosure` handles its own iteration internally; the `operator` function is never deeply nested. 
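Stripped of the effect machinery, the worklist pattern is easy to see in isolation. A minimal, self-contained sketch (not the library's internal stepper) that sums 1..1000 one node at a time:

```nix
let
  steps = builtins.genericClosure {
    # Each node is one step of the computation; `key` deduplicates.
    startSet = [ { key = 0; n = 1000; acc = 0; } ];
    operator = step:
      if step.n == 0
      then [ ]          # halt: the "Pure" case
      else [ {          # exactly one successor: the next step
        key = step.key + 1;
        n = step.n - 1;
        acc = step.acc + step.n;
      } ];
  };
  # The last node in discovery order carries the final state.
  final = builtins.elemAt steps (builtins.length steps - 1);
in
final.acc # → 500500, with O(1) operator nesting
```

At larger step counts the lazy `acc` thunks become the real problem — which is what the `deepSeq`-in-the-key trick addresses.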
## The thunk problem and `deepSeq` `genericClosure` only forces the `key` field of each node (for deduplication). All other fields — including `_state` and `_comp` — are lazy thunks. Without intervention, after N steps the `_state` field would be: ``` f(f(f(... f(initialState) ...))) # N thunks deep ``` Forcing the final `_state` would then rebuild the entire call stack in thunk evaluation, defeating the purpose. The fix: make `key` depend on `builtins.deepSeq newState`: ```nix key = builtins.deepSeq newState (step.key + 1) ``` Since `genericClosure` forces `key`, it also forces `deepSeq newState`, which eagerly evaluates the state at each step. No thunk chain builds up. Test suite validates 100,000 operations; manual runs confirm 1,000,000 operations in ~3 seconds with constant memory. ## Defunctionalization The interpreter defunctionalizes (**Reynolds 1972**) the recursive handler: the continuation moves from the call stack into an explicit data structure (the FTCQueue). The worklist loop processes these continuations iteratively rather than recursively — the same pattern identified by **Van Horn & Might (2010)** in *Abstracting Abstract Machines*. **Gibbons (2022)** *Continuation-Passing Style, Defunctionalization, Accumulations, and Associativity* shows the hidden precondition: this transformation is valid when the accumulated operation is associative. For nix-effects, the handler state transformations compose associatively because function composition is associative. ## References - Reynolds, J. C. (1972). *Definitional Interpreters for Higher-Order Programming Languages*. ACM Annual Conference. - Van Horn, D., & Might, M. (2010). *Abstracting Abstract Machines*. ICFP 2010. - Gibbons, J. (2022). *Continuation-Passing Style, Defunctionalization, Accumulations, and Associativity*. The Art, Science, and Engineering of Programming, 6(2). [[doi](https://doi.org/10.22152/programming-journal.org/2022/6/7)] - Kiselyov, O., & Ishii, H. (2015). 
*Freer Monads, More Extensible Effects*. ### Systems Architecture nix-effects is a freer-monad effect layer with a dependent type checker on top, all running at `nix eval` time. The effect layer is the foundation — handlers, blame tracking, and error policy all flow through it. The type checker is a Lean-light MLTT core that uses the effect infrastructure for its own checking pipeline. The kernel is implemented in `src/tc/` (~2200 lines) and fully integrated. All types have kernel backing — `.check` is derived mechanically from the kernel's `decide` procedure. There is no separate contract system and no adequacy bridge. Universe levels are computed by the kernel's `checkTypeLevel`. One notion of type, one checking mechanism, with decidable checking as a derived operation. ## Foundation layers nix-effects has two foundation layers: **The effects kernel.** Freer monad with FTCQueue for O(1) bind. `builtins.genericClosure` trampoline for O(1) stack depth. Handler-swap pattern for configurable interpretation. This layer is solid — tested at 1,000,000 operations, constant memory, ~3 seconds. **The type-checking kernel.** Every type is defined by its kernel representation — an HOAS type tree that the MLTT kernel can check. `.check` is derived mechanically from the kernel's `decide` procedure. `.validate` sends `typeCheck` effects through the freer monad for blame tracking. You choose the error policy by choosing the handler. For first-order types, the kernel's decision procedure is the full check. But it has an inherent limitation for higher-order types: `Pi.check` can only verify `isFunction` — it can't verify that a function maps every A-value to a B-value. Decision procedures are decidable and total, which makes them practical, but they can only state properties about the values in front of them. "For ALL services in this configuration, if they listen on a port, a firewall rule exists" — that's a universally quantified statement. No decision procedure can check it. 
You need structural verification of a proof term, not evaluation of a predicate. An earlier design considered separating decision procedures from the kernel, with an adequacy bridge connecting two separate notions of type. That architecture was rejected. If a kernel exists, the type system should be grounded in it from the start.

## The kernel-first architecture

Instead of two systems with a bridge:

```
Contracts (ad hoc)           Proofs (kernel)
Record, ListOf, Pi...        Pi, Sigma, Nat, eq...
        \                         /
         \    adequacy bridge    /
          \                     /
           \                   /
            v                 v
             Effects kernel
```

One system, one source of truth:

```
                 Type system API
Record, ListOf, DepRecord, refined, Pi, Sigma, ...
                       |
                       | elaboration
                       v
           Type-checking kernel (MLTT)
                       |
                       | checker runs as effectful computation
                       v
Effects kernel (freer monad, FTCQueue, trampoline, handlers)
                       |
                       v
                    Pure Nix
```

Types are kernel types. `Record`, `ListOf`, `DepRecord`, `refined` — all of them compile down to kernel constructions via elaboration. Checking a value against a type is a kernel judgment. Proving a universal property about a type is also a kernel judgment. Same kernel, same judgment form `Γ ⊢ t : T`, two modes of interaction: automated (decidable checking) and explicit (proof terms). There is no adequacy gap to bridge. Contracts don't exist as a separate concept — they're the decidable fragment of kernel type checking, optimized with proven-correct fast paths.

## The trusted kernel

The kernel is small and auditable. It implements a core dependent type theory — something in the neighborhood of MLTT with natural numbers, identity types, and a cumulative universe hierarchy.

### Core judgments

The kernel checks four judgments:

```
ctx ⊢ term : type    (type checking)
ctx ⊢ term ⇒ type    (type inference)
type_a ≡ type_b      (definitional equality, via normalization)
⊢ ctx ok             (context well-formedness)
```

### The term language

Terms are Nix attrsets.
Each has a `tag` field for the constructor: ```nix # Core constructors (see kernel-spec.md §2 for the full grammar) { tag = "var"; idx = 0; } # de Bruijn index { tag = "pi"; name = "x"; domain = ...; codomain = ...; } # Π type { tag = "lam"; name = "x"; domain = ...; body = ...; } # λ abstraction { tag = "app"; fn = ...; arg = ...; } # application { tag = "sigma"; name = "x"; fst = ...; snd = ...; } # Σ type { tag = "pair"; fst = ...; snd = ...; } # pair { tag = "fst"; pair = ...; } # first projection { tag = "snd"; pair = ...; } # second projection { tag = "nat"; } # ℕ { tag = "zero"; } # 0 { tag = "succ"; pred = ...; } # S(n) { tag = "bool"; } { tag = "true"; } { tag = "false"; } # 𝔹 { tag = "list"; elem = ...; } # List A { tag = "nil"; elem = ...; } { tag = "cons"; elem = ...; head = ...; tail = ...; } { tag = "unit"; } { tag = "tt"; } # ⊤ { tag = "void"; } { tag = "absurd"; type = ...; term = ...; } # ⊥ { tag = "sum"; left = ...; right = ...; } # A + B { tag = "inl"; left = ...; right = ...; term = ...; } { tag = "inr"; left = ...; right = ...; term = ...; } { tag = "eq"; type = ...; lhs = ...; rhs = ...; } # Id type { tag = "refl"; } # reflexivity { tag = "j"; type = ...; lhs = ...; motive = ...; base = ...; rhs = ...; eq = ...; } { tag = "U"; level = 0; } # universe { tag = "ann"; term = ...; type = ...; } # annotation { tag = "let"; name = "x"; type = ...; val = ...; body = ...; } # let # Eliminators: nat-elim, bool-elim, list-elim, sum-elim ``` We use de Bruijn indices internally. The surface language uses names (see "Making the syntax livable" below). A small elaborator translates named terms to de Bruijn core terms. ### The core operations (Normalization by Evaluation) The kernel uses **NbE** (Normalization by Evaluation) rather than explicit substitution. Terms (Tm) are interpreted into a semantic domain (Val) by `eval`, and read back to normal forms by `quote`. This avoids the quadratic cost of explicit substitution. 
**Evaluation** (`eval : Env × Tm → Val`). Interprets a term in an environment of values, performing beta and iota reductions eagerly. Closures `(env, body)` capture the environment, avoiding substitution. Trampolined via `genericClosure` for recursive eliminators (NatElim, ListElim) to guarantee O(1) stack depth. **Quotation** (`quote : ℕ × Val → Tm`). Converts a value back to a term, translating de Bruijn levels to indices. Trampolined for deep VSucc/VCons chains. **Conversion** (`conv : ℕ × Val × Val → Bool`). Checks definitional equality of two values. Purely structural on normalized values — no type information used. No eta expansion. Trampolined for deep VSucc and VCons chains. **Bidirectional type checking** (`check`/`infer`). Inference mode synthesizes a type from a term; checking mode verifies a term against an expected type. Switching between modes happens at annotations and eliminators. The algorithm follows Dunfield & Krishnaswami (2021). Type errors are reported via effects; the handler determines policy. ### Why the trampoline is essential Normalization of proof terms is iterative. A proof by induction on a natural number n unfolds n reduction steps. A naive recursive normalizer blows the stack for large proofs. The `builtins.genericClosure` trampoline that nix-effects already uses for effect interpretation handles this identically: ```nix normalize = term: let steps = builtins.genericClosure { startSet = [{ key = 0; _term = term; }]; operator = step: let next = whnfStep step._term; in if next.done then [] else [{ key = builtins.deepSeq next.term (step.key + 1); _term = next.term; }]; }; in (lib.last steps)._term; ``` O(1) stack depth. `deepSeq` breaks thunk chains. The same technique that lets nix-effects run 1,000,000 effect operations lets the kernel normalize complex proof terms without hitting Nix's stack limit. 
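To make the step/loop split concrete, here is a toy single-step reducer — a hypothetical `add` node over the `zero`/`succ` grammar, not the kernel's actual `whnfStep` — driven by the same done-flag protocol the trampolined `normalize` expects:

```nix
let
  # One weak-head reduction step: 0 + b → b,  S(a) + b → S(a + b).
  # done = true means no top-level redex remains.
  step = t:
    if t.tag == "add" then
      if t.lhs.tag == "zero" then { done = false; term = t.rhs; }
      else if t.lhs.tag == "succ" then {
        done = false;
        term = {
          tag = "succ";
          pred = { tag = "add"; lhs = t.lhs.pred; rhs = t.rhs; };
        };
      }
      else { done = true; term = t; }
    else { done = true; term = t; };

  # Naive recursive driver — the trampolined normalize replaces this recursion.
  whnf = t: let r = step t; in if r.done then r.term else whnf r.term;

  two = { tag = "succ"; pred = { tag = "succ"; pred = { tag = "zero"; }; }; };
  one = { tag = "succ"; pred = { tag = "zero"; }; };
in
(whnf { tag = "add"; lhs = two; rhs = one; }).tag # → "succ"
```

Each call to `step` does O(1) work; the driver decides how steps are sequenced. Swapping the recursive driver for the `genericClosure` worklist changes stack behavior without touching the reduction rules.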
### Why the FTCQueue matters During type checking, the checker processes a sequence of obligations: check this argument, then check that body, then verify this equality. These are continuations — "after you finish checking A, check B with the result." The FTCQueue (catenable queue) from Kiselyov & Ishii gives O(1) continuation chaining. Without it, a deeply nested proof term with 1000 nested applications would produce O(n^2) overhead from left-nested bind chains in the checker's own computation. With it: O(n) total. ## The checker as an effectful computation The checker itself is a nix-effects computation. Its operations are effects: ```nix # Core effects of the type-checking kernel check = ctx: term: type: send "check" { inherit ctx term type; }; infer = ctx: term: send "infer" { inherit ctx term; }; unify = a: b: send "unify" { inherit a b; }; freshLevel = send "freshLevel" null; typeError = msg: send "typeError" msg; ``` The handler determines checking behavior: ```nix # Strict: abort on first error strictChecker = { typeError = { param, state }: { abort = null; state = state ++ [param]; }; ... }; # Collecting: gather all errors collectingChecker = { typeError = { param, state }: { resume = null; state = state ++ [param]; }; ... }; # Interactive: yield on error for tactic guidance interactiveChecker = { typeError = { param, state }: { resume = null; state = state // { paused = param; }; }; ... }; ``` Same handler-swap pattern that the current `ServiceConfig.validate` uses. Same trampoline running the computation. The kernel is just another effectful program running on the effects infrastructure. ## Types grounded in the kernel This is where the kernel-first approach differs from adding a proof checker alongside ad hoc contracts. Every type in the public API compiles to a kernel construction. A type IS its kernel representation, and all operations are derived from it. 
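A sketch of what "a type IS its kernel representation" could look like as data. The `mkType` shape below is hypothetical (the real `foundation.nix` constructor may differ), but it shows `.check` being derived from the kernel representation plus an optional refinement guard rather than hand-written per type:

```nix
let
  # Hypothetical constructor: a surface type bundles its kernel
  # representation with a decision procedure derived from it,
  # plus an optional refinement guard.
  mkType = { name, kernelType, decide, guard ? (_: true) }: {
    inherit name kernelType;
    check = v: decide v && guard v;
  };

  Nat = mkType {
    name = "Nat";
    kernelType = { tag = "nat"; };           # kernel inductive type
    decide = v: builtins.isInt v && v >= 0;  # derived fast path
  };

  Port = mkType {
    name = "Port";
    kernelType = { tag = "nat"; };
    decide = v: builtins.isInt v && v >= 0;
    guard = x: x >= 1 && x <= 65535;         # refinement predicate
  };
in {
  a = Nat.check 42;     # → true
  b = Port.check 99999; # → false
}
```

The point of the shape: `kernelType` is the source of truth, `decide` is justified by it, and `guard` is the only per-type escape hatch.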
### Elaboration: Nix values to kernel terms When you check a value against a type, elaboration translates the Nix value into a kernel term, and the kernel checks it: ```nix # Nat.check 42 # Elaboration: 42 → succ^42(zero) # Kernel: ⊢ succ(succ(...(zero)...)) : Nat ✓ # (ListOf Nat).check [1, 2, 3] # Elaboration: [1,2,3] → cons(succ(zero), cons(succ(succ(zero)), cons(succ(succ(succ(zero))), nil))) # Kernel: ⊢ cons(1, cons(2, cons(3, nil))) : List Nat ✓ ``` ### Decidable fast paths Elaborating `42` to `succ^42(zero)` and checking structurally is correct but expensive. For decidable properties — which is everything the decision procedure handles — we derive a fast path from the kernel type definition. The kernel defines `Nat` as an inductive type with `zero` and `succ`. From that definition, a decision procedure is mechanically derived: ```nix # Decision procedure derived from kernel Nat definition Nat.check = v: builtins.isInt v && v >= 0; ``` This is the same predicate the surface API exposes. It's justified by the kernel, not ad hoc. You prove once (by structural induction on the derivation rules) that the decision procedure agrees with the kernel type. Then you use the fast predicate at eval time and fall back to the kernel for properties the predicate can't express. This is how Lean handles `Decidable` instances. For decidable propositions, evaluation IS proof. The decision procedure is the computational content of the decidability proof, extracted as a function. 
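The elaboration step itself is a small structural translation. A sketch for `Nat`, using the term grammar from "The term language" — the recursion is linear in the value, which is exactly why the derived predicate is preferred at eval time:

```nix
let
  # Translate a Nix integer into a kernel Nat term
  # ({ tag = "zero"; } / { tag = "succ"; pred = ...; }).
  elabNat = n:
    assert n >= 0;  # negative ints have no Nat elaboration
    if n == 0
    then { tag = "zero"; }
    else { tag = "succ"; pred = elabNat (n - 1); };

  # The fast path derived from the same kernel definition.
  decideNat = v: builtins.isInt v && v >= 0;
in {
  term = elabNat 3;    # succ(succ(succ(zero))) as nested attrsets
  fast = decideNat 3;  # → true — same verdict, no term built
}
```

Both paths answer the same question; only one of them materializes a term for the kernel to walk.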
### How current types map to kernel types | Current API | Kernel construction | Fast path | |------------|-------------------|-----------| | `Nat` | Inductive type (zero, succ) | `isInt v && v >= 0` | | `String` | Primitive (axiom) | `builtins.isString v` | | `ListOf A` | Inductive type (nil, cons A) | `builtins.isList v && all A.check v` | | `Record { a = A; b = B }` | Sigma (a : A) (b : B) | Field-wise guard | | `DepRecord [...]` | Nested Sigma | Dependent field-wise guard | | `Sigma { fst, snd }` | Kernel Sigma directly | `fst.check v.fst && (snd v.fst).check v.snd` | | `Pi { domain, codomain }` | Kernel Pi directly | `isFunction` (guard only) | | `refined "P" A pred` | Subset type `{ x : A \| P(x) }` | `A.check v && pred v` | | `Either A B` | Sum type (inl, inr) | Tag-based dispatch | | `typeAt n` | `Type n` (universe) | `v ? universe && v.universe <= n` | For first-order types (Nat, String, ListOf, Record), the fast path IS the full check — these are decidable. The kernel adds nothing at runtime for individual values; it adds the ability to state and verify universal properties about families of values. For higher-order types (Pi), the fast path can only check the introduction form (`isFunction`). The kernel adds full verification at elimination: `⊢ f(a) : B(a)` for specific `a`, or `⊢ p : Pi A B` for a proof term witnessing the universal property. ### Blame tracking as an effect Elaboration-mode type checking can send blame effects just as the current `validate` does — the kernel judgment emits `typeCheck` effects with context paths (`List[Nat][3]`) for error reporting. The handler determines whether to abort, collect, or log. Same pattern, now backed by a kernel judgment rather than an ad hoc predicate. 
```nix # Effectful checking with blame: elaboration sends kernel judgments as effects checkWithBlame = type: value: context: let judgment = elaborate type value; in bind (send "typeCheck" { inherit type value context; }) (_: kernelCheck judgment); ``` ## Infinite universes via streams The current hardcoded `Type_0` through `Type_4` becomes a lazy stream: ```nix universes = stream.iterate (u: { level = u.level + 1; type = typeAt (u.level + 1); }) { level = 0; type = typeAt 0; }; ``` The stream unfolds on demand. If your types max out at level 3, level 4 is never computed. The trampoline handles the stream iteration. ### Universe level inference The kernel computes levels by structural recursion on types: ``` level(Nat) = 0 level(Pi A B) = max(level(A), level(B)) level(Sigma A B) = max(level(A), level(B)) level(Type n) = n + 1 ``` No manual annotations. The kernel infers levels and verifies stratification. The current `universe` field — a trusted declaration that nothing enforces — becomes a computed, verified property. ### Universe polymorphism as an effect Level allocation is an algebraic effect: ```nix # A universe-polymorphic definition requests a level polyList = bind freshLevel (u: pure (pi (typeAt u) (A: typeAt u))); # Different handlers instantiate differently atLevel3 = fx.run polyList (fixedLevel 3) null; # Or: all instantiations as a stream allLevels = stream.map (u: fx.run polyList (fixedLevel u) null ) (stream.iterate (n: n + 1) 0); ``` The definition doesn't commit to a level. The handler decides. ### Constraint solving via genericClosure Level constraints (`?u >= max(?v, ?w)`) accumulate during checking. 
The solver iterates to a fixed point:

```nix
solveLevels = constraints:
  let
    steps = builtins.genericClosure {
      startSet = [ { key = 0; solved = { }; changed = true; } ];
      operator = state:
        if !state.changed then [ ]
        else
          let next = propagate state.solved constraints;
          in [ {
            key = builtins.deepSeq next.solved (state.key + 1);
            inherit (next) solved changed;
          } ];
    };
  in (lib.last steps).solved;
```

Same trampoline. Same `deepSeq` trick. The universe solver reuses the exact infrastructure that runs effect handlers.

## Making the syntax livable

Writing proof terms as raw attrsets is not viable. Three techniques, from least to most ambitious:

### HOAS: Nix lambdas as binders

Higher-Order Abstract Syntax uses Nix's own functions for variable binding. The combinator applies a Nix lambda to a fresh variable attrset, getting scope and shadowing for free:

```nix
let
  inherit (proof) forall lam nat zero succ eq refl cong ind plus;
in {
  # Proposition: forall n : Nat, n + 0 = n
  prop = forall nat (n: eq nat (plus n zero) n);

  # Proof term: induction on n
  pf = ind nat
    (k: eq nat (plus k zero) k)  # motive
    refl                         # base: 0 + 0 = 0
    (k: ih: cong succ ih);       # step: cong S on IH
}
```

(As "What the kernel can and cannot prove" notes, the evaluator only reduces `ind` on concrete numerals, so this universal statement illustrates the binder syntax rather than a theorem the current kernel can discharge.)

The combinator `forall nat (n: ...)` calls the Nix function with a fresh `{ tag = "var"; ... }` and builds the `pi` AST node. Variable names, scope, and alpha-equivalence are handled by Nix's own evaluator.

### Tagless final: construction IS checking

Combinators type-check during construction. If the expression evaluates without error, the proof is valid:

```nix
let
  # lam checks the body type against the codomain during construction
  lam = domain: bodyFn:
    let
      v = mkTypedVar domain;
      body = bodyFn v;
    in {
      term = mkLam domain body.term;
      type = mkPi domain body.type;
    };

  # app checks function/argument compatibility during construction;
  # builtins.seq forces the check (a lazy `let _ = ...` binding would never run it)
  app = fn: arg:
    builtins.seq (assertTypeEq fn.type.domain arg.type) {
      term = mkApp fn.term arg.term;
      type = subst fn.type.codomain arg.term;
    };
in ...
``` Error messages point to the combinator call that failed, not to a position in a flat AST. Nix's eval trace tells you which `lam` or `app` had the wrong types. ### Builder pattern: method chaining for readability Wrap terms in attrsets with methods for infix-like notation: ```nix let E = expr: type: { inherit expr type; plus = other: E (mkApp plusFn (mkPair expr other.expr)) nat; eq = other: E (mkEq nat expr other.expr) (typeAt 0); ap = arg: E (mkApp expr arg.expr) (subst type.codomain arg.expr); }; n = E (var 0) nat; z = E zero nat; in (n.plus z).eq n # reads as: n + 0 = n ``` ## Why the kernel cannot reason about its own types Girard (1972) showed that a type theory with `Type : Type` is inconsistent, and Hurkens (1995) gave a compact MLTT rendering of the same paradox. The standard fix is a strict universe hierarchy, and the kernel enforces it by computing `level(U(i)) = i + 1` from the typing derivation rather than trusting a declared level. Concretely, the kernel at universe level N reasons about computations at levels 0 through N-1, and its own checker effects live at level N. A proof term that tried to reference the kernel's own universe would force the level solver into a cycle where `?u` depends on `?u`, which the constraint solver detects and rejects. Universe stratification is therefore a computed, verified property rather than a trusted declaration. The kernel infers levels, the solver checks them, and Girard's paradox cannot be constructed because no term can be placed at a level that contains itself. 
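The level rules stated earlier are simple enough to model directly. The sketch below is illustrative, not the kernel's actual code: the `levelOf` name and the tag encoding are assumptions, but the recursion mirrors the stated rules (`level(Pi A B) = max(level(A), level(B))`, `level(Type n) = n + 1`).

```nix
let
  max = a: b: if a > b then a else b;

  # Structural recursion over type tags, mirroring the level rules.
  levelOf = t:
    if t.tag == "nat" || t.tag == "bool" then 0
    else if t.tag == "pi" || t.tag == "sigma" then
      max (levelOf t.domain) (levelOf t.codomain)
    else if t.tag == "u" then t.level + 1
    else throw "levelOf: unknown tag ${t.tag}";
in
  # Pi(A : Type_1). Nat: the domain forces the whole Pi to level 2
  levelOf {
    tag = "pi";
    domain = { tag = "u"; level = 1; };   # Type_1 lives at level 2
    codomain = { tag = "nat"; };          # Nat lives at level 0
  }
# -> 2
```

In the kernel this role is played by `checkTypeLevel`, which computes the level from the typing derivation rather than trusting a declared `universe` field.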
## The infrastructure reuse Every piece of the kernel is built on machinery nix-effects already provides: | Component | Built on | |-----------|---------| | Normalization loop | `builtins.genericClosure` trampoline | | Thunk prevention | `builtins.deepSeq` in worklist key | | Continuation chaining | FTCQueue (O(1) bind) | | Checker effects | Freer monad (`send`, `bind`, `pure`) | | Error policy | Handler swap (strict / collecting / interactive) | | Universe tower | `stream.iterate` | | Level constraint solving | `genericClosure` as fixed-point | | Surface syntax parsing | `builtins.match` + `genericClosure` Pratt parser | | Blame tracking | `typeCheck` effect with context paths | | Elaboration | Nix attrset → kernel term translation | The kernel doesn't require new infrastructure. It reuses the trampoline, the queue, the monad, the handlers, and the streams. nix-effects was built to do effectful computation in a language that has no effects. A type checker is effectful computation. ## Current architecture ### Effects substrate The effects kernel — `pure`, `impure`, `send`, `bind`, `run`, `handle`, `adapt`, FTCQueue, trampoline, all effect modules (state, error, reader, writer, acc, choice, conditions, linear), streams — is the substrate the type-checking kernel runs on. ### Type system grounding The type system layer in `src/types/` is grounded in the kernel: - `foundation.nix` — `mkType` with `kernelType` + optional refinement `guard` - `primitives.nix` — `String`, `Int`, `Bool`, etc. wrapping `builtins.is*` - `constructors.nix` — `Record`, `ListOf`, `Maybe`, `Either`, `Variant` - `dependent.nix` — `Pi`, `Sigma`, `Certified`, `Vector`, `DepRecord` - `refinement.nix` — `refined`, predicate combinators - `universe.nix` — `typeAt`, `Type_0` through `Type_4`, `level` All types have kernel backing via `kernelType`. The architecture is: 1. 
**Kernel module** (`src/tc/`, ~2200 lines) — term/value representations, NbE evaluator, quotation, conversion, bidirectional checking. Uses environments and closures, not explicit substitution.
2. **Elaboration module** (`src/tc/elaborate.nix`, `hoas.nix`) — translates the surface API into kernel types and Nix values into kernel terms. HOAS combinators for readable proof terms.
3. **Extraction layer** (`extract` in `elaborate.nix`) — reverses elaboration: kernel values back to Nix values. Enables verified functions: write in HOAS, kernel-check, extract usable Nix code.
4. **Convenience combinators** (`src/tc/verified.nix`) — higher-level interface for writing verified implementations with automatic motive construction and annotation insertion.
5. **Decision procedures** — `.check` on every type is the kernel's `decide` procedure: elaborate the value to HOAS, kernel-check, return a boolean. This is the "one system" architecture — no separate contract system, no adequacy bridge.
6. **Surface API** — the public-facing `fx.types.*` attrset. Same names, same usage patterns. `Record`, `ListOf`, `DepRecord`, `refined`, `Pi`, `Sigma` all work. `T.check v` runs the kernel's decide procedure. `T.prove prop` checks a proof term.

### What the API looks like

```nix
# Checking a value (decide via kernel)
fx.types.Nat.check 42                           # true
(fx.types.ListOf fx.types.Nat).check [ 1 2 3 ]  # true

# Effectful validation with blame
fx.run (fx.types.Nat.validate 42) handlers []

# Proving a universal property
fx.types.prove
  (forall nat (n: eq nat (plus n zero) n))
  proofTerm   # { ok = true; } or type error

# Verified function extraction
v.verify (H.forall "x" H.nat (_: H.nat))
  (v.fn "x" H.nat (x: H.succ x))
# → Nix function, correct by construction
```

## What exists

1. **Type-checking kernel** (`src/tc/eval.nix`, `check.nix`, `quote.nix`, `conv.nix`, `term.nix`, `value.nix`).
Pi, Sigma, Nat, Bool, List, Sum, Unit, Void, identity types, cumulative universes, and 7 axiomatized primitive types (String, Int, Float, Attrs, Path, Function, Any). Bidirectional checking with NbE. Fuel-bounded evaluation with `genericClosure` trampolining for stack safety. 2. **HOAS surface combinators** (`src/tc/hoas.nix`). `forall`, `lam`, `sigma`, `pair`, `natLit`, `natElim`, `boolElim`, `listElim`, `sumElim`, `j`, `refl`, `eq`, and more. Variable binding via Nix lambdas. Automatic elaboration from HOAS to de Bruijn core terms. 3. **Elaboration and extraction** (`src/tc/elaborate.nix`). Maps all type values to kernel terms via `elaborateValue`. `decide` function provides `.check` for all types. `extract` reverses elaboration, converting kernel values back to Nix values. Sentinel-based constant family detection for non-dependent Sigma/Pi. 4. **Kernel-grounded foundation** (`src/types/foundation.nix`). `mkType` requires `kernelType` and derives `.check` from `decide`. No hand-written predicates. All types — primitives, constructors, dependent types — have kernel backing. 5. **Convenience combinators** (`src/tc/verified.nix`). Higher-level interface: `v.verify type impl` writes in HOAS, kernel-checks, and extracts a usable Nix function. Includes `if_`, `match`, `matchList`, `matchSum`, `map`, `fold`, `filter`. ## Future work The following features remain unimplemented: - **Universe polymorphism as an effect.** Level allocation via `freshLevel` effect with handler-determined instantiation. - **Constraint solving via genericClosure.** Level constraint propagation to a fixed point for universe inference. - **Tagless final construction.** Type-checking during combinator construction. - **Builder pattern notation.** Method chaining for readable proof terms. ## References - Dunfield, J., & Krishnaswami, N. (2021). *Bidirectional Typing*. ACM Computing Surveys. - Girard, J.-Y. (1972). 
*Interprétation fonctionnelle et élimination des coupures de l'arithmétique d'ordre supérieur*. Thèse d'État, Université Paris 7. - Hurkens, A. J. C. (1995). *A Simplification of Girard's Paradox*. TLCA 1995. - Kiselyov, O., & Ishii, H. (2015). *Freer Monads, More Extensible Effects*. Haskell Symposium 2015. - Plotkin, G., & Pretnar, M. (2009). *Handlers of Algebraic Effects*. ESOP 2009. - de Bruijn, N. (1972). *Lambda Calculus Notation with Nameless Dummies*. Indagationes Mathematicae. - Martin-Löf, P. (1984). *Intuitionistic Type Theory*. Bibliopolis. - Findler, R., & Felleisen, M. (2002). *Contracts for Higher-Order Functions*. ICFP 2002. ### Kernel Architecture This chapter describes the type-checking kernel: its pipeline, its primitives, and how to write verified implementations that the kernel checks and extracts back to usable Nix functions. ## The "one system" architecture nix-effects has one notion of type and one checking mechanism. Every type is defined by its kernel representation. `.check` is derived mechanically from the kernel's `decide` procedure. There is no separate contract system, no adequacy bridge, and no possibility of disagreement between the check predicate and the kernel. ``` Type system API Record, ListOf, DepRecord, refined, Pi, Sigma, ... | | elaboration v Type-checking kernel (MLTT) | | checker runs as effectful computation v Effects kernel (freer monad, FTCQueue, trampoline, handlers) | v Pure Nix ``` Types are kernel types. `Record`, `ListOf`, `DepRecord`, `refined` — all of them compile down to kernel constructions via elaboration. Checking a value against a type is a kernel judgment. Proving a universal property about a type is also a kernel judgment. Same kernel, same judgment form `Γ ⊢ t : T`, two modes of interaction: automated (decidable checking) and explicit (proof terms). ## The kernel pipeline The kernel implements normalization by evaluation (NbE) with bidirectional type checking. 
Six modules, each with a single responsibility: ``` term.nix --> eval.nix --> value.nix | quote.nix --> term.nix | conv.nix | check.nix ``` | Module | Function | Signature | |--------|----------|-----------| | `term.nix` | Term constructors | `mkVar`, `mkPi`, `mkLam`, `mkApp`, ... | | `eval.nix` | Evaluation | `Env × Tm -> Val` | | `value.nix` | Value constructors | `VLam`, `VPi`, `VPair`, `VZero`, ... | | `quote.nix` | Quotation | `ℕ × Val -> Tm` | | `conv.nix` | Conversion checking | `ℕ × Val × Val -> Bool` | | `check.nix` | Type checking | `Ctx × Tm × Val -> Tm` / `Ctx × Tm -> Tm × Val` | **Terms** (`Tm`) are the syntax — de Bruijn indexed expressions with explicit binding structure. **Values** (`Val`) are the semantics — fully normalized forms where lambdas are Nix closures (this is the NbE trick). **Evaluation** converts terms to values. **Quotation** reads values back to terms. **Conversion** checks whether two values are definitionally equal by comparing their quoted forms. The type checker is bidirectional: - `check(Γ, t, T)` — check that term `t` has type `T` (type-directed) - `infer(Γ, t)` — infer the type of term `t` (term-directed) - `checkTypeLevel(Γ, T)` — compute the universe level of a type ### Trust model The kernel has three layers with decreasing trust requirements: **Layer 0 — Trusted Computing Base.** `eval`, `quote`, `conv`. Pure functions. No side effects. No imports from the effect system. Bugs here compromise soundness. **Layer 1 — Semi-trusted.** `check`, `infer`, `checkTypeLevel`. Uses the TCB and sends effects for error reporting. Bugs may produce wrong error messages or reject valid terms, but cannot cause unsoundness. **Layer 2 — Untrusted.** The elaborator (`hoas.nix`, `elaborate.nix`). Translates surface syntax to core terms. Can have arbitrary bugs without compromising safety — the kernel verifies the output. ## Axiomatized primitives The kernel understands seven Nix primitive types as axioms. 
Each has a type former, a literal constructor, and a typing rule. None have eliminators — the kernel says "String is a type at level 0" and "a string literal inhabits String" but cannot structurally decompose these values.

| Nix type | Kernel type | Literal | Level |
|----------|-------------|---------|-------|
| `string` | `String` | `StringLit(s)` | 0 |
| `int` | `Int` | `IntLit(n)` | 0 |
| `float` | `Float` | `FloatLit(f)` | 0 |
| `set` | `Attrs` | `AttrsLit` | 0 |
| `path` | `Path` | `PathLit` | 0 |
| `lambda` | `Function` | `FnLit` | 0 |
| (any) | `Any` | `AnyLit` | 0 |

The structural types — `Nat`, `Bool`, `Unit`, `Void`, `List`, `Sum`, `Sigma`, `Pi`, `Eq` — have full introduction and elimination rules. This means the kernel can compute with natural numbers, booleans, lists, pairs, and functions, but treats strings, integers, and other Nix-native types as opaque tokens.

Axiomatized primitives are critical for real-world use. Without them, verified modules can only work over `Nat`/`Bool`/`List`/`Sigma`/`Sum`. With them, modules can handle ports (`Int`), service names (`String`), configuration records (nested `Sigma`/`Attrs`), and so on.

## HOAS: the surface API

Writing de Bruijn indexed terms by hand is error-prone. The HOAS (Higher-Order Abstract Syntax) layer lets you use Nix lambdas for variable binding. The public API is `fx.types.hoas`.

### Type combinators

```nix
H = fx.types.hoas;

H.nat                            # Nat
H.bool                           # Bool
H.string                         # String
H.int_                           # Int
H.float_                         # Float
H.unit                           # Unit (nullary product)
H.void                           # Void (empty type)
H.listOf H.nat                   # List Nat
H.sum H.nat H.bool               # Nat + Bool
H.forall "x" H.nat (_: H.bool)   # Π(x : Nat). Bool
H.sigma "x" H.nat (_: H.bool)    # Σ(x : Nat). Bool
H.u 0                            # U₀ (universe of small types)
```

### Term combinators

```nix
# Lambda: λ(x : Nat). x
H.lam "x" H.nat (x: x)

# Application
H.app f arg

# Natural number literals
H.zero          # 0
H.succ H.zero   # 1
H.natLit 42     # 42 (sugar for 42 succs)

# Boolean literals
H.true_
H.false_

# Pairs
H.pair fst snd

# Projections
H.fst_ p
H.snd_ p

# Sum injections
H.inl leftTy rightTy term
H.inr leftTy rightTy term

# String / Int / Float literals
H.stringLit "hello"
H.intLit 42
H.floatLit 3.14

# Type annotation
H.ann term type
```

### Eliminators

```nix
# Natural number induction
H.ind motive base step scrut

# Boolean elimination
H.boolElim motive onTrue onFalse scrut

# List elimination
H.listElim elemType motive onNil onCons scrut

# Sum elimination
H.sumElim leftTy rightTy motive onLeft onRight scrut

# Equality elimination (J)
H.j type lhs motive base rhs eq
```

### How HOAS compiles

Binding combinators produce HOAS markers — lightweight attrsets that stand for bound variables at a specific depth. The `elaborate` function (in `hoas.nix`) converts these to de Bruijn indexed `Tm` terms:

```
H.lam "x" H.nat (x: H.succ x)
 │
 │ HOAS: x is a marker { _hoas = true; level = 0; }
 ▼
elaborate (enters the binder; the body is traversed at depth 1)
 │
 │ marker at level 0 -> T.mkVar(1 - 0 - 1) = T.mkVar(0)
 ▼
Lam("x", Nat, Succ(Var(0))) ← de Bruijn term
```

The index follows the standard conversion `index = depth - level - 1`: the marker was created at level 0 and is used at depth 1, so its index is 0. Elaboration of nested binding forms is trampolined via `builtins.genericClosure` for stack safety on deeply nested terms.

## The elaboration bridge

The elaboration bridge (`elaborate.nix`) connects the user-facing type system to the kernel.
It has six operations: | Operation | Signature | Direction | |-----------|-----------|-----------| | `elaborateType` | `FxType -> HoasTree` | type system -> kernel | | `elaborateValue` | `HoasTree × NixVal -> HoasTree` | Nix value -> kernel term | | `extract` | `HoasTree × Val -> NixValue` | kernel value -> Nix value | | `decide` | `HoasTree × NixVal -> Bool` | decision procedure | | `decideType` | `FxType × NixVal -> Bool` | elaborate type, then decide | | `verifyAndExtract` | `HoasTree × HoasTree -> NixValue` | full pipeline | ### elaborateType Converts an `fx.types` type into a HOAS tree. Dispatches on three things, in order: 1. The `_kernel` field (types built via `mkType` with `kernelType`) 2. Structural fields (Pi: `domain`/`codomain`, Sigma: `fstType`/`sndFamily`) 3. Name convention (`"Bool"` -> `H.bool`, `"String"` -> `H.string`, etc.) ### elaborateValue Converts a Nix value into a HOAS term, guided by a HOAS type. For example, given `H.nat` and the Nix integer `3`, produces `H.succ (H.succ (H.succ H.zero))`. Given `H.string` and `"hello"`, produces `H.stringLit "hello"`. ### extract — the reverse direction `extract` converts kernel values back to Nix values. It is the reverse of `elaborateValue`. This is where the verification story becomes interesting: you write an implementation as a HOAS term, the kernel verifies it, `eval` produces a kernel value, and `extract` converts the result to a usable Nix value. ```nix # extract : HoasTree -> Val -> NixValue extract H.nat (VSucc (VSucc VZero)) # -> 2 extract H.bool VTrue # -> true extract H.string (VStringLit "hi") # -> "hi" extract (H.listOf H.nat) (VCons ...) # -> [1 2 3] extract (H.forall "x" ...) (VLam ...) # -> Nix function (!) ``` The Pi case is the most important. Extracting a verified function produces a Nix function that: 1. Elaborates its argument into a kernel value (Nix -> kernel) 2. Applies the kernel-verified closure 3. 
Extracts the result back (kernel -> Nix) Correct by construction — the kernel verified the term, `eval` produced the closure, `extract` wraps with value conversion at the boundaries. ### decide The decision procedure. Returns `true` iff both elaboration and kernel type-checking succeed: ```nix decide = hoasTy: value: let result = builtins.tryEval ( let hoasVal = elaborateValue hoasTy value; checked = H.checkHoas hoasTy hoasVal; in !(checked ? error) ); in result.success && result.value; ``` This is the function that `mkType` uses to derive `.check`. Every type's `.check` is `v: decide(kernelType, v)` — no hand-written predicates. ### verifyAndExtract — the full pipeline Type-check a HOAS term against a HOAS type, then extract the result: ```nix verifyAndExtract = hoasTy: hoasImpl: let checked = H.checkHoas hoasTy hoasImpl; in if checked ? error then throw "verifyAndExtract: type check failed" else let tm = H.elab hoasImpl; # HOAS -> de Bruijn val = E.eval [] tm; # evaluate to Val in extract hoasTy val; # Val -> Nix value ``` ## Convenience combinators Raw HOAS is verbose. The convenience combinator layer (`fx.types.verified`, accessed as `v`) provides sugar that produces valid HOAS term trees: ```nix v = fx.types.verified; H = fx.types.hoas; ``` ### Literals ```nix v.nat 5 # H.natLit 5 v.str "hello" # H.stringLit "hello" v.int_ 42 # H.intLit 42 v.float_ 3.14 # H.floatLit 3.14 v.true_ # H.true_ v.false_ # H.false_ v.null_ # H.tt (unit) ``` ### Binding forms ```nix # Lambda: λ(x : Nat). body v.fn "x" H.nat (x: body) # Let: let x : Nat = 5 in body v.let_ "x" H.nat (v.nat 5) (x: body) ``` ### Eliminators with inferred motives The convenience combinators construct the required motive automatically from the result type. The motive is always constant (non-dependent): `λ_. resultTy`. 
```nix # Boolean: if b then 1 else 0 v.if_ H.nat v.true_ { then_ = v.nat 1; else_ = v.nat 0; } # Natural number: pattern match on n v.match H.nat n { zero = v.nat 42; succ = k: ih: H.succ ih; } # List: fold over elements v.matchList H.nat H.nat list { nil = v.nat 0; cons = h: t: ih: H.succ ih; # count elements } # Sum: case split v.matchSum H.nat H.bool H.nat scrut { left = x: H.succ x; right = _: v.nat 0; } ``` ### Derived combinators ```nix # Map: apply f to each element v.map H.nat H.nat succFn myList # Fold: combine elements with accumulator v.fold H.nat H.nat (v.nat 0) addFn myList # Filter: keep elements matching predicate v.filter H.nat isZeroFn myList ``` ### Pairs and sums ```nix v.pair fstTerm sndTerm sigmaType v.fst p v.snd p v.inl leftTy rightTy term v.inr leftTy rightTy term v.app f arg ``` ### The verify pipeline `v.verify` wraps `verifyAndExtract` — type-check and extract in one call: ```nix v.verify type implementation # = verifyAndExtract type implementation ``` ## Writing verified implementations The key insight: instead of writing Nix functions and trying to elaborate them into kernel terms (which fails for closures), write implementations as HOAS terms and **extract** Nix functions out. This is the approach taken by Coq (extraction), Idris (compilation), and F\*. ### Example: verified addition ```nix let H = fx.types.hoas; v = fx.types.verified; addTy = H.forall "m" H.nat (_: H.forall "n" H.nat (_: H.nat)); addImpl = v.fn "m" H.nat (m: v.fn "n" H.nat (n: v.match H.nat m { zero = n; succ = _k: ih: H.succ ih; })); # Kernel verifies: addImpl : Π(m:Nat). Π(n:Nat). Nat # Then extracts a 2-argument Nix function add = v.verify addTy addImpl; in add 2 3 # -> 5 ``` What happens step by step: 1. `v.verify` calls `H.checkHoas addTy addImpl` — the kernel type-checks the HOAS term against the HOAS type 2. `H.elab addImpl` converts HOAS to de Bruijn indexed `Tm` 3. `E.eval [] tm` evaluates to a `VLam` value (a Nix closure via NbE) 4. 
`extract addTy val` wraps the `VLam` as a Nix function that elaborates arguments at call boundaries The resulting `add` is an ordinary Nix function. Call it with Nix integers. At each call, the argument is elaborated into a kernel value, the verified closure runs, and the result is extracted back. ### Example: verified list operations ```nix let H = fx.types.hoas; v = fx.types.verified; # Successor function: Nat -> Nat succFn = v.fn "x" H.nat (x: H.succ x); # Map successor over a list input = H.cons H.nat (v.nat 0) (H.cons H.nat (v.nat 1) (H.cons H.nat (v.nat 2) (H.nil H.nat))); result = v.verify (H.listOf H.nat) (v.map H.nat H.nat succFn input); in result # -> [1 2 3] ``` ### Example: verified filter ```nix let H = fx.types.hoas; v = fx.types.verified; # isZero : Nat -> Bool isZero = v.fn "n" H.nat (n: v.match H.bool n { zero = v.true_; succ = _k: _ih: v.false_; }); input = H.cons H.nat (v.nat 0) (H.cons H.nat (v.nat 1) (H.cons H.nat (v.nat 0) (H.cons H.nat (v.nat 2) (H.nil H.nat)))); result = v.verify (H.listOf H.nat) (v.filter H.nat isZero input); in result # -> [0 0] ``` ## The verification spectrum The architecture supports a spectrum of assurance levels. Each level offers a different trade-off between verification strength and implementation cost. ### Level 1: Contract The baseline. Types check values at introduction via `.check` (which calls `decide` under the hood). No HOAS, no proof terms — just write normal Nix code and let the type system validate data at boundaries. 
```nix let inherit (fx.types) Int String refined; Port = refined "Port" Int (x: x >= 1 && x <= 65535); # Refinement with string guard — kernel checks String, guard checks membership LogLevel = refined "LogLevel" String (x: builtins.elem x [ "debug" "info" "warn" "error" ]); in { ok = Port.check 8080; # true — decide says Int, guard says in range bad = Port.check 99999; # false — guard rejects (> 65535) wrong = Port.check "http"; # false — decide rejects (not Int) info = LogLevel.check "info"; # true trace = LogLevel.check "trace"; # false — not in the allowed set # Effectful validation with blame context result = fx.run (Port.validate 99999) fx.effects.typecheck.collecting []; # result.state = [ { context = "Port"; message = "Expected Port, got int"; ... } ] } ``` **Cost:** Zero — write normal Nix. The kernel runs behind the scenes. Refinement types compose: `ListOf Port` checks that every element is an integer in range. The kernel elaborates the list, the guard runs per element. ### Level 2: Boundary Data values are checked by the kernel at module interfaces. Types carry `kernelType` and `.check` is derived from the kernel's `decide` procedure. This is what every type does by default. ```nix let inherit (fx.types) Bool Int String ListOf DepRecord refined; FIPSCipher = refined "FIPSCipher" String (x: builtins.elem x [ "AES-256-GCM" "AES-128-GCM" "AES-256-CBC" ]); # Dependent record: when fipsMode is true, cipherSuites must be FIPS-approved. # The kernel elaborates the record to a Sigma chain, checks each field's type # against its kernelType, and the guard on FIPSCipher validates membership. 
ServiceConfig = DepRecord [ { name = "fipsMode"; type = Bool; } { name = "cipherSuites"; type = self: if self.fipsMode then ListOf FIPSCipher else ListOf String; } ]; in { ok = ServiceConfig.checkFlat { fipsMode = true; cipherSuites = [ "AES-256-GCM" ]; }; # true — kernel verifies each cipher is a valid FIPSCipher bad = ServiceConfig.checkFlat { fipsMode = true; cipherSuites = [ "3DES" ]; }; # false — "3DES" fails the FIPSCipher guard lax = ServiceConfig.checkFlat { fipsMode = false; cipherSuites = [ "ChaCha20" "RC4" ]; }; # true — non-FIPS mode accepts any string } ``` **Cost:** Low — add `kernelType` to custom types (built-in types already have it). The dependent record pattern shows how boundary checking scales: the kernel handles structural verification, guards handle domain predicates, and the dependency between fields is resolved at check time. ### Level 3: Property Universal properties verified via proof terms. Write proofs in HOAS that the kernel checks. The proof term is separate from the implementation — you write separate Nix code alongside, and the kernel verifies that the stated property holds. ```nix let H = fx.types.hoas; inherit (H) nat bool eq forall refl checkHoas; # Define addition by structural recursion on the first argument add = m: n: H.ind (H.lam "_" nat (_: nat)) n (H.lam "k" nat (_: H.lam "ih" nat (ih: H.succ ih))) m; not_ = b: H.boolElim (H.lam "_" bool (_: bool)) H.false_ H.true_ b; in { # Prove: 3 + 5 = 8 # The kernel normalizes add(3,5) via NatElim, arrives at 8, # and confirms Refl witnesses Eq(Nat, 8, 8). arithmetic = (checkHoas (eq nat (add (H.natLit 3) (H.natLit 5)) (H.natLit 8)) refl).tag == "refl"; # Prove: not(not(true)) = true # The kernel evaluates two BoolElim steps and confirms the result. doubleNeg = (checkHoas (eq bool (not_ (not_ H.true_)) H.true_) refl).tag == "refl"; # Prove: append([1,2], [3]) = [1,2,3] # ListElim unfolds the first list, cons-ing each element onto [3]. 
listAppend = let list12 = H.cons nat (H.natLit 1) (H.cons nat (H.natLit 2) (H.nil nat)); list3 = H.cons nat (H.natLit 3) (H.nil nat); list123 = H.cons nat (H.natLit 1) (H.cons nat (H.natLit 2) (H.cons nat (H.natLit 3) (H.nil nat))); append = xs: ys: H.listElim nat (H.lam "_" (H.listOf nat) (_: H.listOf nat)) ys (H.lam "h" nat (h: H.lam "t" (H.listOf nat) (_: H.lam "ih" (H.listOf nat) (ih: H.cons nat h ih)))) xs; in (checkHoas (eq (H.listOf nat) (append list12 list3) list123) refl).tag == "refl"; } ``` **Cost:** Medium — write proofs in HOAS. The proofs are separate from production code, so you can add them incrementally to an existing codebase without rewriting anything. ### Level 4: Full The implementation IS the proof term. Write the entire implementation in HOAS, the kernel verifies it, and `extract` produces a Nix function that is correct by construction. The extracted function is plain Nix — no kernel overhead at call time. ```nix let H = fx.types.hoas; v = fx.types.verified; RecTy = H.record [ { name = "name"; type = H.string; } { name = "target"; type = H.string; } ]; # Verified record validator: checks if two string fields match. # The kernel type-checks the implementation against its type # (Record -> Bool), verifies that field projections are well-typed, # and confirms strEq composes correctly. Then extracts a Nix function. matchFn = v.verify (H.forall "r" RecTy (_: H.bool)) (v.fn "r" RecTy (r: v.strEq (v.field RecTy "name" r) (v.field RecTy "target" r))); # Verified addition: structural recursion extracted as Nix function add = v.verify (H.forall "m" H.nat (_: H.forall "n" H.nat (_: H.nat))) (v.fn "m" H.nat (m: v.fn "n" H.nat (n: v.match H.nat m { zero = n; succ = _k: ih: H.succ ih; }))); in { sum = add 2 3; # -> 5, correct by construction yes = matchFn { name = "hello"; target = "hello"; }; # -> true no = matchFn { name = "hello"; target = "world"; }; # -> false } ``` **Cost:** High — write the implementation in HOAS. 
Best reserved for code where the cost is justified by the assurance. See the [Proof Guide](/nix-effects/guide/proof-guide) for a progressive tutorial from simple proofs through the J eliminator to verified extraction of plain Nix functions. ## How mkType derives .check Every type is built by `mkType` (in `foundation.nix`). The kernel type IS the type. `.check` is its decision procedure, derived mechanically: ``` _kernel : HoasType ← the type IS this check : Value -> Bool ← derived from decide(kernelType, value) kernelCheck : Value -> Bool ← same as check (legacy alias) prove : HoasTerm -> Bool ← kernel proof checking universe : Int ← computed from checkTypeLevel(kernelType) ``` For refinement types, an optional `guard` adds a runtime predicate on top of the kernel check: `check = decide(kernelType, v) && guard(v)`. The guard handles predicates the kernel cannot express — for example, `x >= 0` for natural numbers, or membership in a finite set for validated strings. ## Limitations **Nix closures are opaque.** `decide(H.forall ..., f)` can only check `builtins.isFunction f`. For full function verification, write the function in HOAS and use `v.verify` to extract. **Refinement predicates are opaque.** The kernel cannot represent `x >= 0` as a type-level assertion. Refinement types always need a hand-written guard predicate. **`builtins.tryEval` is limited.** It only catches `throw` and `assert false`. Cross-type comparison errors, boolean coercion errors, and missing attribute access crash Nix uncatchably. This affects `decide` for types whose elaboration might trigger such errors. **Dependent extraction is limited.** Extracting a dependent Pi or Sigma requires a sentinel test to detect non-dependence. If the type family is truly dependent, extraction throws and requires explicit type annotation. **Opaque types cannot be extracted.** `Attrs`, `Path`, `Function`, and `Any` are axiomatized — the kernel knows they exist but discards their payloads. 
Extracting a value of these types throws. They work for type-checking (deciding membership) but not for the full verify-and-extract pipeline. **Extraction has boundary cost.** Extracted functions elaborate their arguments at every call (Nix -> kernel value -> apply -> extract -> Nix). For hot paths, the contract layer's `.check` fast path is more efficient. ### Kernel Formal Specification This document is the contract the implementation must satisfy. Every typing rule, compute rule, and conversion rule is stated precisely. Every test is derived from this spec. Every invariant the kernel must maintain is listed. The spec uses standard type-theoretic notation. No Nix code appears here — this document is reviewable by anyone who reads dependent type theory, regardless of implementation language. --- ## 1. Trust Model The kernel has three layers with strictly decreasing trust requirements. **Layer 0 — Trusted Computing Base (TCB).** The evaluator, quotation, and conversion checker. Pure functions. No side effects. No imports from the effect system. Bugs here compromise soundness. Every line must be auditable. - `eval : Env × Tm → Val` - `quote : ℕ × Val → Tm` - `conv : ℕ × Val × Val → Bool` **Layer 1 — Semi-trusted.** The bidirectional type checker. Uses the TCB and sends effects for error reporting. Bugs here may produce wrong error messages or reject valid terms, but cannot cause unsoundness (the TCB rejects ill-typed terms independently). - `check : Ctx × Tm × Val → Tm` - `infer : Ctx × Tm → Tm × Val` - `checkTypeLevel : Ctx × Tm → Tm × ℕ` **Layer 2 — Untrusted.** The elaborator. Translates surface syntax (named variables, implicit arguments, level inference, eta-insertion) into fully explicit core terms. Can have arbitrary bugs without compromising safety — the kernel verifies the output. 
### Failure modes

| Condition | Response | Rationale |
|-----------|----------|-----------|
| Kernel invariant violation | `throw` (crash) | TCB may be buggy; cannot trust own output |
| User type error | Effect `typeError` | Normal operation; handler decides policy |
| Normalization budget exceeded | `throw` (crash) | Layer 0 has no effect access; `tryEval` catches it |
| Unknown term tag | `throw` (crash) | Exhaustiveness violation = kernel bug |

---

## 2. Syntax

### 2.1 Terms (Tm)

The core term language. All binding uses de Bruijn indices. Name annotations are cosmetic (for error messages only).

```
Tm ::=
  -- Variables and binding
  | Var(i : ℕ)                                -- de Bruijn index
  | Let(n : Name, A : Tm, t : Tm, u : Tm)     -- let n : A = t in u
  -- Functions
  | Pi(n : Name, A : Tm, B : Tm)              -- Π(n : A). B
  | Lam(n : Name, A : Tm, t : Tm)             -- λ(n : A). t
  | App(t : Tm, u : Tm)                       -- t u
  -- Pairs
  | Sigma(n : Name, A : Tm, B : Tm)           -- Σ(n : A). B
  | Pair(a : Tm, b : Tm, T : Tm)              -- (a, b) as T
  | Fst(t : Tm)                               -- π₁ t
  | Snd(t : Tm)                               -- π₂ t
  -- Natural numbers
  | Nat                                       -- ℕ
  | Zero                                      -- 0
  | Succ(t : Tm)                              -- S t
  | NatElim(P : Tm, z : Tm, s : Tm, n : Tm)   -- elim_ℕ(P, z, s, n)
  -- Booleans
  | Bool                                      -- 𝔹
  | True                                      -- true
  | False                                     -- false
  | BoolElim(P : Tm, t : Tm, f : Tm, b : Tm)  -- elim_𝔹(P, t, f, b)
  -- Lists
  | List(A : Tm)                              -- List A
  | Nil(A : Tm)                               -- nil_A
  | Cons(A : Tm, h : Tm, t : Tm)              -- cons_A h t
  | ListElim(A : Tm, P : Tm, n : Tm, c : Tm, l : Tm)  -- elim_List(A, P, n, c, l)
  -- Unit
  | Unit                                      -- ⊤
  | Tt                                        -- tt
  -- Void
  | Void                                      -- ⊥
  | Absurd(A : Tm, t : Tm)                    -- absurd_A t
  -- Sum
  | Sum(A : Tm, B : Tm)                       -- A + B
  | Inl(A : Tm, B : Tm, t : Tm)               -- inl t
  | Inr(A : Tm, B : Tm, t : Tm)               -- inr t
  | SumElim(A : Tm, B : Tm, P : Tm, l : Tm, r : Tm, s : Tm)  -- elim_+(A, B, P, l, r, s)
  -- Identity
  | Eq(A : Tm, a : Tm, b : Tm)                -- Id_A(a, b)
  | Refl                                      -- refl
  | J(A : Tm, a : Tm, P : Tm, pr : Tm, b : Tm, eq : Tm)  -- J(A, a, P, pr, b, eq)
  -- Universes
  | U(i : ℕ)                                  -- Type_i
  -- Annotations
  | Ann(t : Tm, A : Tm)                       -- (t : A)
  -- Axiomatized primitive types
  | String                                    -- string type
  | Int                                       -- integer type
  | Float                                     -- float type
  | Attrs                                     -- attribute set type
  | Path                                      -- path type
  | Function                                  -- opaque function type
  | Any                                       -- dynamic/any type
  -- String operations
  | StrEq(lhs : Tm, rhs : Tm)                 -- string equality: lhs == rhs → Bool
  -- Primitive literals
  | StringLit(s)                              -- string literal
  | IntLit(n)                                 -- integer literal
  | FloatLit(f)                               -- float literal
  | AttrsLit                                  -- attribute set literal
  | PathLit                                   -- path literal
  | FnLit                                     -- opaque function literal
  | AnyLit                                    -- any literal
```

### 2.2 Binding convention

In `Pi(n, A, B)`, `Lam(n, A, t)`, `Sigma(n, A, B)`, and `Let(n, A, t, u)`: the body (`B`, `t`, or `u`) binds one variable. Index 0 in the body refers to the bound variable. All other indices shift by 1.

In `NatElim(P, z, s, n)`: `P` binds 0 variables (it's a function term `ℕ → U`). `z` binds 0 variables. `s` binds 0 variables (it's a function term). `n` binds 0 variables. All eliminators take their arguments as closed terms (no implicit binding). The motive is a function term, not a binder.

### 2.3 De Bruijn index conventions

Indices count inward from the use site: 0 = most recent binder. Example: `λ(x : A). λ(y : B). x` is `Lam(x, A, Lam(y, B, Var(1)))`.

---

## 3. Values (Semantic Domain)

Values are the result of evaluation. They use de Bruijn **levels** (counting outward from the top of the context) instead of indices.
```
Val ::=
  -- Functions
  | VPi(n : Name, A : Val, cl : Closure)      -- Π type
  | VLam(n : Name, A : Val, cl : Closure)     -- λ abstraction
  -- Pairs
  | VSigma(n : Name, A : Val, cl : Closure)   -- Σ type
  | VPair(a : Val, b : Val)                   -- pair value
  -- Natural numbers
  | VNat | VZero | VSucc(v : Val)
  -- Booleans
  | VBool | VTrue | VFalse
  -- Lists
  | VList(A : Val) | VNil(A : Val) | VCons(A : Val, h : Val, t : Val)
  -- Unit
  | VUnit | VTt
  -- Void
  | VVoid
  -- Sum
  | VSum(A : Val, B : Val)
  | VInl(A : Val, B : Val, v : Val)
  | VInr(A : Val, B : Val, v : Val)
  -- Identity
  | VEq(A : Val, a : Val, b : Val) | VRefl
  -- Universes
  | VU(i : ℕ)
  -- Axiomatized primitive types
  | VString | VInt | VFloat | VAttrs | VPath | VFunction | VAny
  -- Primitive literal values
  | VStringLit(s) | VIntLit(n) | VFloatLit(f)
  | VAttrsLit | VPathLit | VFnLit | VAnyLit
  -- Neutrals (stuck computations)
  | VNe(level : ℕ, spine : [Elim])

Elim ::=
  | EApp(v : Val)
  | EFst
  | ESnd
  | ENatElim(P : Val, z : Val, s : Val)
  | EBoolElim(P : Val, t : Val, f : Val)
  | EListElim(A : Val, P : Val, n : Val, c : Val)
  | EAbsurd(A : Val)
  | ESumElim(A : Val, B : Val, P : Val, l : Val, r : Val)
  | EJ(A : Val, a : Val, P : Val, pr : Val, b : Val)
  | EStrEq(arg : Val)

Closure ::= (env : Env, body : Tm)
Env     ::= [Val]   -- list indexed by de Bruijn index
```

### 3.1 Level/index relationship

De Bruijn levels count from the outermost binder: 0 = first-ever bound variable. Levels are stable under context extension. Conversion between index and level:

```
index = depth - level - 1
level = depth - index - 1
```

where `depth` is the current binding depth (length of the context).

### 3.2 Fresh variables

A fresh variable at depth `d` is `VNe(d, [])` — a neutral with level `d` and empty spine. Used in conversion checking to compare under binders.

### 3.3 Closure instantiation

```
instantiate((env, body), v) = eval([v] ++ env, body)
```

---

## 4. Evaluation Rules

`eval(ρ, t)` interprets term `t` in environment `ρ`, producing a value. All rules are deterministic.
### 4.1 Variables and let

```
eval(ρ, Var(i))          = ρ[i]
eval(ρ, Let(n, A, t, u)) = eval([eval(ρ, t)] ++ ρ, u)
eval(ρ, Ann(t, A))       = eval(ρ, t)
```

### 4.2 Functions

```
eval(ρ, Pi(n, A, B))  = VPi(n, eval(ρ, A), (ρ, B))
eval(ρ, Lam(n, A, t)) = VLam(n, eval(ρ, A), (ρ, t))
eval(ρ, App(t, u))    = vApp(eval(ρ, t), eval(ρ, u))
```

where `vApp` performs beta reduction or accumulates:

```
vApp(VLam(n, A, cl), v) = instantiate(cl, v)
vApp(VNe(l, sp), v)     = VNe(l, sp ++ [EApp(v)])
vApp(_, _)              = THROW "kernel bug: vApp on non-function"
```

### 4.3 Pairs

```
eval(ρ, Sigma(n, A, B)) = VSigma(n, eval(ρ, A), (ρ, B))
eval(ρ, Pair(a, b, T))  = VPair(eval(ρ, a), eval(ρ, b))
eval(ρ, Fst(t))         = vFst(eval(ρ, t))
eval(ρ, Snd(t))         = vSnd(eval(ρ, t))
```

where:

```
vFst(VPair(a, b)) = a
vFst(VNe(l, sp))  = VNe(l, sp ++ [EFst])
vFst(_)           = THROW "kernel bug: vFst on non-pair"

vSnd(VPair(a, b)) = b
vSnd(VNe(l, sp))  = VNe(l, sp ++ [ESnd])
vSnd(_)           = THROW "kernel bug: vSnd on non-pair"
```

### 4.4 Natural numbers

```
eval(ρ, Nat)     = VNat
eval(ρ, Zero)    = VZero
eval(ρ, Succ(t)) = VSucc(eval(ρ, t))   -- MUST trampoline for deep naturals
eval(ρ, NatElim(P,z,s,n)) = vNatElim(eval(ρ,P), eval(ρ,z), eval(ρ,s), eval(ρ,n))
```

where:

```
vNatElim(P, z, s, VZero)     = z
vNatElim(P, z, s, VSucc(n))  = vApp(vApp(s, n), vNatElim(P, z, s, n))
vNatElim(P, z, s, VNe(l,sp)) = VNe(l, sp ++ [ENatElim(P, z, s)])
vNatElim(_, _, _, _)         = THROW "kernel bug: vNatElim on non-nat"
```

**Note**: `vNatElim` on `VSucc` recurses. The implementation MUST trampoline this to guarantee O(1) stack depth.
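For orientation, the Var/Lam/App fragment of these rules can be rendered as a short sketch. This is hypothetical illustration only — the real kernel is pure Nix, and the tuple encodings here (`("Lam", …)`, `("VNe", …)`) merely mirror the spec's constructors:

```python
# Minimal sketch of the Var/Lam/App evaluation rules (§4.1–4.2).
# Terms and values are tagged tuples mirroring the spec's Tm/Val grammar.

def eval_(env, t):
    tag = t[0]
    if tag == "Var":                       # eval(ρ, Var(i)) = ρ[i]
        return env[t[1]]
    if tag == "Lam":                       # λ captures its environment in a closure
        _, name, dom, body = t
        return ("VLam", name, (env, body))
    if tag == "App":                       # evaluate both sides, then vApp
        return v_app(eval_(env, t[1]), eval_(env, t[2]))
    raise ValueError(f"unhandled tag {tag}")

def v_app(f, v):
    if f[0] == "VLam":                     # beta: instantiate the closure
        cenv, body = f[2]
        return eval_([v] + cenv, body)
    if f[0] == "VNe":                      # stuck: extend the spine with EApp
        return ("VNe", f[1], f[2] + [("EApp", v)])
    raise RuntimeError("kernel bug: vApp on non-function")

# (λx. x) applied to a neutral variable at level 0 stays neutral-free:
ident = ("Lam", "x", None, ("Var", 0))
neutral = ("VNe", 0, [])
```

Note how the closure captures `env` at the λ, so `instantiate` is just evaluation in an extended environment — exactly §3.3.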
### 4.5 Booleans

```
eval(ρ, Bool)  = VBool
eval(ρ, True)  = VTrue
eval(ρ, False) = VFalse
eval(ρ, BoolElim(P,t,f,b)) = vBoolElim(eval(ρ,P), eval(ρ,t), eval(ρ,f), eval(ρ,b))
```

where:

```
vBoolElim(P, t, f, VTrue)     = t
vBoolElim(P, t, f, VFalse)    = f
vBoolElim(P, t, f, VNe(l,sp)) = VNe(l, sp ++ [EBoolElim(P, t, f)])
vBoolElim(_, _, _, _)         = THROW "kernel bug: vBoolElim on non-bool"
```

### 4.6 Lists

```
eval(ρ, List(A))       = VList(eval(ρ, A))
eval(ρ, Nil(A))        = VNil(eval(ρ, A))
eval(ρ, Cons(A, h, t)) = VCons(eval(ρ, A), eval(ρ, h), eval(ρ, t))   -- MUST trampoline for deep lists
eval(ρ, ListElim(A,P,n,c,l)) = vListElim(eval(ρ,A), eval(ρ,P), eval(ρ,n), eval(ρ,c), eval(ρ,l))
```

where:

```
vListElim(A, P, n, c, VNil(_))        = n
vListElim(A, P, n, c, VCons(_, h, t)) = vApp(vApp(vApp(c, h), t), vListElim(A, P, n, c, t))
vListElim(A, P, n, c, VNe(l, sp))     = VNe(l, sp ++ [EListElim(A, P, n, c)])
vListElim(_, _, _, _, _)              = THROW "kernel bug: vListElim on non-list"
```

**Note**: `vListElim` on `VCons` recurses. Must be trampolined.

### 4.7 Unit

```
eval(ρ, Unit) = VUnit
eval(ρ, Tt)   = VTt
```

Unit has no eliminator in the core. The kernel does NOT implement eta for Unit — `conv` does not equate arbitrary Unit-typed neutrals with `VTt`. Two distinct neutrals of type Unit are not definitionally equal even though they would be in an extensional theory. If eta for Unit is needed, the elaborator must reduce to `Tt` before submitting to the kernel.

### 4.8 Void

```
eval(ρ, Void)         = VVoid
eval(ρ, Absurd(A, t)) = vAbsurd(eval(ρ, A), eval(ρ, t))
```

where:

```
vAbsurd(A, VNe(l, sp)) = VNe(l, sp ++ [EAbsurd(A)])
vAbsurd(_, _)          = THROW "kernel bug: vAbsurd on non-neutral"
```

`Absurd` never computes — there is no closed value of type `Void`. If a non-neutral reaches `vAbsurd`, the term is ill-typed and the kernel has a bug (the checker should have caught it).
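The trampolining requirement on the recursive eliminators can be made concrete with a sketch: flatten the constructor chain first, then fold upward iteratively, so stack depth is O(1) regardless of the numeral's size. Hypothetical Python rendering; `apply` stands in for the spec's curried `vApp`, and passing the predecessor as a plain count is a simplification of the spec's value-level argument:

```python
# Sketch of a trampolined vNatElim (§4.4): no Python recursion on VSucc.

def v_nat_elim(apply, z, s, n):
    # Flatten VSucc(...VSucc(VZero)) into a depth count — one loop, O(1) stack.
    depth = 0
    while n[0] == "VSucc":
        depth += 1
        n = n[1]
    if n[0] != "VZero":
        raise RuntimeError("kernel bug: vNatElim on non-nat")
    # Rebuild from the base case upward: acc := s(k)(acc) for k = 0 .. depth-1,
    # mirroring vNatElim(P, z, s, S n) = s n (vNatElim(P, z, s, n)).
    acc = z
    for k in range(depth):
        acc = apply(apply(s, k), acc)
    return acc

# Example: eliminate the numeral 3 into Python ints, computing 4 + 3.
succ_step = lambda k: (lambda ih: ih + 1)   # s : Π(k : ℕ). P(k) → P(S k)
three = ("VSucc", ("VSucc", ("VSucc", ("VZero",))))
```

The same flatten-then-fold shape applies to `vListElim` over `VCons` chains.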
### 4.9 Sum

```
eval(ρ, Sum(A, B))    = VSum(eval(ρ, A), eval(ρ, B))
eval(ρ, Inl(A, B, t)) = VInl(eval(ρ, A), eval(ρ, B), eval(ρ, t))
eval(ρ, Inr(A, B, t)) = VInr(eval(ρ, A), eval(ρ, B), eval(ρ, t))
eval(ρ, SumElim(A,B,P,l,r,s)) =
  vSumElim(eval(ρ,A), eval(ρ,B), eval(ρ,P), eval(ρ,l), eval(ρ,r), eval(ρ,s))
```

where:

```
vSumElim(A, B, P, l, r, VInl(_, _, v)) = vApp(l, v)
vSumElim(A, B, P, l, r, VInr(_, _, v)) = vApp(r, v)
vSumElim(A, B, P, l, r, VNe(k, sp))    = VNe(k, sp ++ [ESumElim(A, B, P, l, r)])
vSumElim(_, _, _, _, _, _)             = THROW "kernel bug: vSumElim on non-sum"
```

### 4.10 Identity

```
eval(ρ, Eq(A, a, b)) = VEq(eval(ρ, A), eval(ρ, a), eval(ρ, b))
eval(ρ, Refl)        = VRefl
eval(ρ, J(A, a, P, pr, b, eq)) =
  vJ(eval(ρ,A), eval(ρ,a), eval(ρ,P), eval(ρ,pr), eval(ρ,b), eval(ρ,eq))
```

where:

```
vJ(A, a, P, pr, b, VRefl)     = pr
vJ(A, a, P, pr, b, VNe(l,sp)) = VNe(l, sp ++ [EJ(A, a, P, pr, b)])
vJ(_, _, _, _, _, _)          = THROW "kernel bug: vJ on non-refl"
```

### 4.11 Universes

```
eval(ρ, U(i)) = VU(i)
```

### 4.12 Axiomatized primitives

Type formers evaluate to their corresponding values. Literals carry their payload through. No computation, no recursion — these are axiomatized constants.

```
eval(ρ, String)   = VString
eval(ρ, Int)      = VInt
eval(ρ, Float)    = VFloat
eval(ρ, Attrs)    = VAttrs
eval(ρ, Path)     = VPath
eval(ρ, Function) = VFunction
eval(ρ, Any)      = VAny

eval(ρ, StringLit(s)) = VStringLit(s)
eval(ρ, IntLit(n))    = VIntLit(n)
eval(ρ, FloatLit(f))  = VFloatLit(f)
eval(ρ, AttrsLit)     = VAttrsLit
eval(ρ, PathLit)      = VPathLit
eval(ρ, FnLit)        = VFnLit
eval(ρ, AnyLit)       = VAnyLit
```

Most primitives have no eliminators. They exist to integrate Nix's native types into the kernel's type system as opaque, axiomatized constants. The exception is String, which has `StrEq` (§4.13).
### 4.13 String equality (StrEq)

```
eval(ρ, StrEq(lhs, rhs)) = vStrEq(eval(ρ, lhs), eval(ρ, rhs))
```

where:

```
vStrEq(VStringLit(s₁), VStringLit(s₂)) = if s₁ == s₂ then VTrue else VFalse
vStrEq(VNe(l, sp), rhs)                = VNe(l, sp ++ [EStrEq(rhs)])
vStrEq(lhs, VNe(l, sp))                = VNe(l, sp ++ [EStrEq(lhs)])
vStrEq(_, _)                           = THROW "kernel bug: vStrEq on non-string"
```

`StrEq` is a binary predicate on strings. Both arguments must be of type `String`. The result type is `Bool`. Unlike other eliminators, StrEq has no motive — it always returns `Bool`, not a dependent type.

When both arguments are concrete string literals, `vStrEq` reduces to `VTrue` or `VFalse` by Nix-level string comparison. When either argument is neutral, the neutral's spine is extended with `EStrEq` carrying the other argument. This is sound because `StrEq` is symmetric: `StrEq(a, b) ≡ StrEq(b, a)` for all `a, b : String`.

---

## 5. Quotation Rules

`quote(d, v)` converts a value back to a term, converting levels to indices. `d` is the current binding depth.
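The level-to-index arithmetic that quotation relies on (§3.1) is easy to get off by one, so a quick sanity sketch is worth having. Hypothetical Python, function names ours:

```python
# Sanity check of the §3.1 conversion used by quote: a neutral carries a
# level, and quoting it at binding depth d emits Var(d - level - 1).

def level_to_index(depth, level):
    return depth - level - 1

def index_to_level(depth, index):
    return depth - index - 1

# Under two binders (depth 2): the innermost binder was created at level 1
# and is referenced as index 0; the outer binder (level 0) as index 1.
```

The two conversions are mutually inverse at any fixed depth, which is what makes levels safe to store in values and indices safe to emit in terms.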
```
quote(d, VPi(n, A, cl))    = Pi(n, quote(d, A), quote(d+1, instantiate(cl, fresh(d))))
quote(d, VLam(n, A, cl))   = Lam(n, quote(d, A), quote(d+1, instantiate(cl, fresh(d))))
quote(d, VSigma(n, A, cl)) = Sigma(n, quote(d, A), quote(d+1, instantiate(cl, fresh(d))))
quote(d, VPair(a, b))      = Pair(quote(d, a), quote(d, b), _)

quote(d, VNat)     = Nat
quote(d, VZero)    = Zero
quote(d, VSucc(v)) = Succ(quote(d, v))   -- MUST trampoline for deep naturals

quote(d, VBool)  = Bool
quote(d, VTrue)  = True
quote(d, VFalse) = False

quote(d, VList(A))       = List(quote(d, A))
quote(d, VNil(A))        = Nil(quote(d, A))
quote(d, VCons(A, h, t)) = Cons(quote(d, A), quote(d, h), quote(d, t))   -- MUST trampoline for deep lists

quote(d, VUnit) = Unit
quote(d, VTt)   = Tt
quote(d, VVoid) = Void

quote(d, VSum(A, B))    = Sum(quote(d, A), quote(d, B))
quote(d, VInl(A, B, v)) = Inl(quote(d, A), quote(d, B), quote(d, v))
quote(d, VInr(A, B, v)) = Inr(quote(d, A), quote(d, B), quote(d, v))

quote(d, VEq(A, a, b)) = Eq(quote(d, A), quote(d, a), quote(d, b))
quote(d, VRefl)        = Refl
quote(d, VU(i))        = U(i)

quote(d, VString)   = String
quote(d, VInt)      = Int
quote(d, VFloat)    = Float
quote(d, VAttrs)    = Attrs
quote(d, VPath)     = Path
quote(d, VFunction) = Function
quote(d, VAny)      = Any

quote(d, VStringLit(s)) = StringLit(s)
quote(d, VIntLit(n))    = IntLit(n)
quote(d, VFloatLit(f))  = FloatLit(f)
quote(d, VAttrsLit)     = AttrsLit
quote(d, VPathLit)      = PathLit
quote(d, VFnLit)        = FnLit
quote(d, VAnyLit)       = AnyLit

quote(d, VNe(l, sp)) = quoteSp(d, Var(d - l - 1), sp)

quoteSp(d, head, [])                       = head
quoteSp(d, head, [EApp(v) | rest])         = quoteSp(d, App(head, quote(d, v)), rest)
quoteSp(d, head, [EFst | rest])            = quoteSp(d, Fst(head), rest)
quoteSp(d, head, [ESnd | rest])            = quoteSp(d, Snd(head), rest)
quoteSp(d, head, [ENatElim(P,z,s) | rest]) =
  quoteSp(d, NatElim(quote(d,P), quote(d,z), quote(d,s), head), rest)
quoteSp(d, head, [EBoolElim(P,t,f) | rest]) =
  quoteSp(d, BoolElim(quote(d,P), quote(d,t), quote(d,f), head), rest)
quoteSp(d, head, [EListElim(A,P,n,c) | rest]) =
  quoteSp(d, ListElim(quote(d,A), quote(d,P), quote(d,n), quote(d,c), head), rest)
quoteSp(d, head, [EAbsurd(A) | rest])      = quoteSp(d, Absurd(quote(d, A), head), rest)
quoteSp(d, head, [ESumElim(A,B,P,l,r) | rest]) =
  quoteSp(d, SumElim(quote(d,A), quote(d,B), quote(d,P), quote(d,l), quote(d,r), head), rest)
quoteSp(d, head, [EJ(A,a,P,pr,b) | rest]) =
  quoteSp(d, J(quote(d,A), quote(d,a), quote(d,P), quote(d,pr), quote(d,b), head), rest)
quoteSp(d, head, [EStrEq(arg) | rest])     = quoteSp(d, StrEq(head, quote(d, arg)), rest)

fresh(d) = VNe(d, [])
```

---

## 6. Conversion Rules

`conv(d, v₁, v₂)` checks definitional equality of two values at binding depth `d`. Returns boolean. **No type information is used** — conversion is purely structural on normalized values.

### 6.1 Structural rules

```
conv(d, VU(i), VU(j))   = (i == j)
conv(d, VNat, VNat)     = true
conv(d, VBool, VBool)   = true
conv(d, VUnit, VUnit)   = true
conv(d, VVoid, VVoid)   = true
conv(d, VZero, VZero)   = true
conv(d, VTrue, VTrue)   = true
conv(d, VFalse, VFalse) = true
conv(d, VTt, VTt)       = true
conv(d, VRefl, VRefl)   = true
conv(d, VSucc(a), VSucc(b)) = conv(d, a, b)

conv(d, VString, VString)     = true
conv(d, VInt, VInt)           = true
conv(d, VFloat, VFloat)       = true
conv(d, VAttrs, VAttrs)       = true
conv(d, VPath, VPath)         = true
conv(d, VFunction, VFunction) = true
conv(d, VAny, VAny)           = true

conv(d, VStringLit(s₁), VStringLit(s₂)) = (s₁ == s₂)
conv(d, VIntLit(n₁), VIntLit(n₂))       = (n₁ == n₂)
conv(d, VFloatLit(f₁), VFloatLit(f₂))   = (f₁ == f₂)
conv(d, VAttrsLit, VAttrsLit)           = true
conv(d, VPathLit, VPathLit)             = true
conv(d, VFnLit, VFnLit)                 = true
conv(d, VAnyLit, VAnyLit)               = true
```

### 6.2 Binding forms

To compare under binders, generate a fresh variable and instantiate:

```
conv(d, VPi(_, A₁, cl₁), VPi(_, A₂, cl₂)) =
  conv(d, A₁, A₂) ∧ conv(d+1, instantiate(cl₁, fresh(d)), instantiate(cl₂, fresh(d)))

conv(d, VLam(_, _, cl₁), VLam(_, _, cl₂)) =
  conv(d+1, instantiate(cl₁, fresh(d)), instantiate(cl₂, fresh(d)))
conv(d, VSigma(_, A₁, cl₁), VSigma(_, A₂, cl₂)) =
  conv(d, A₁, A₂) ∧ conv(d+1, instantiate(cl₁, fresh(d)), instantiate(cl₂, fresh(d)))
```

### 6.3 Compound values

```
conv(d, VPair(a₁, b₁), VPair(a₂, b₂)) = conv(d, a₁, a₂) ∧ conv(d, b₁, b₂)
conv(d, VList(A₁), VList(A₂))         = conv(d, A₁, A₂)
conv(d, VNil(A₁), VNil(A₂))           = conv(d, A₁, A₂)
conv(d, VCons(A₁, h₁, t₁), VCons(A₂, h₂, t₂)) =
  conv(d, A₁, A₂) ∧ conv(d, h₁, h₂) ∧ conv(d, t₁, t₂)
conv(d, VSum(A₁, B₁), VSum(A₂, B₂))   = conv(d, A₁, A₂) ∧ conv(d, B₁, B₂)
conv(d, VInl(A₁, B₁, v₁), VInl(A₂, B₂, v₂)) =
  conv(d, A₁, A₂) ∧ conv(d, B₁, B₂) ∧ conv(d, v₁, v₂)
conv(d, VInr(A₁, B₁, v₁), VInr(A₂, B₂, v₂)) =
  conv(d, A₁, A₂) ∧ conv(d, B₁, B₂) ∧ conv(d, v₁, v₂)
conv(d, VEq(A₁, a₁, b₁), VEq(A₂, a₂, b₂)) =
  conv(d, A₁, A₂) ∧ conv(d, a₁, a₂) ∧ conv(d, b₁, b₂)
```

### 6.4 Neutrals

```
conv(d, VNe(l₁, sp₁), VNe(l₂, sp₂)) = (l₁ == l₂) ∧ convSp(d, sp₁, sp₂)

convSp(d, [], [])           = true
convSp(d, [e₁|r₁], [e₂|r₂]) = convElim(d, e₁, e₂) ∧ convSp(d, r₁, r₂)
convSp(d, _, _)             = false   -- different lengths

convElim(d, EApp(v₁), EApp(v₂)) = conv(d, v₁, v₂)
convElim(d, EFst, EFst)         = true
convElim(d, ESnd, ESnd)         = true
convElim(d, ENatElim(P₁,z₁,s₁), ENatElim(P₂,z₂,s₂)) =
  conv(d, P₁, P₂) ∧ conv(d, z₁, z₂) ∧ conv(d, s₁, s₂)
convElim(d, EBoolElim(P₁,t₁,f₁), EBoolElim(P₂,t₂,f₂)) =
  conv(d, P₁, P₂) ∧ conv(d, t₁, t₂) ∧ conv(d, f₁, f₂)
convElim(d, EListElim(A₁,P₁,n₁,c₁), EListElim(A₂,P₂,n₂,c₂)) =
  conv(d, A₁, A₂) ∧ conv(d, P₁, P₂) ∧ conv(d, n₁, n₂) ∧ conv(d, c₁, c₂)
convElim(d, EAbsurd(A₁), EAbsurd(A₂)) = conv(d, A₁, A₂)
convElim(d, ESumElim(A₁,B₁,P₁,l₁,r₁), ESumElim(A₂,B₂,P₂,l₂,r₂)) =
  conv(d, A₁, A₂) ∧ conv(d, B₁, B₂) ∧ conv(d, P₁, P₂) ∧ conv(d, l₁, l₂) ∧ conv(d, r₁, r₂)
convElim(d, EJ(A₁,a₁,P₁,pr₁,b₁), EJ(A₂,a₂,P₂,pr₂,b₂)) =
  conv(d, A₁, A₂) ∧ conv(d, a₁, a₂) ∧ conv(d, P₁, P₂) ∧ conv(d, pr₁, pr₂) ∧ conv(d, b₁, b₂)
convElim(d, EStrEq(arg₁), EStrEq(arg₂)) = conv(d, arg₁, arg₂)
convElim(_, _, _) = false
```

### 6.5 Catch-all

```
conv(d, _, _) = false
```

Any pair of values not matching the above rules is not definitionally equal.

**No eta expansion.** If `f` and `λx. f x` must be compared, the elaborator must eta-expand `f` before submitting to the kernel.

---

## 7. Typing Rules (Bidirectional)

### 7.1 Contexts

```
Ctx ::= {
  env   : Env,     -- values for evaluation
  types : [Val],   -- types of bound variables (indexed by de Bruijn)
  depth : ℕ        -- current binding depth
}

emptyCtx = { env = [], types = [], depth = 0 }

extend(Γ, n, A) = {
  env   = [fresh(Γ.depth)] ++ Γ.env,
  types = [A] ++ Γ.types,
  depth = Γ.depth + 1
}

lookupType(Γ, i) = Γ.types[i]   -- THROW if i >= length(Γ.types)
```

### 7.2 Notation

```
Γ ⊢ t ⇐ A ↝ t'       checking mode:   check(Γ, t, A) = t'
Γ ⊢ t ⇒ A ↝ t'       synthesis mode:  infer(Γ, t) = (t', A)
Γ ⊢ T type ↝ T'      type formation:  checkType(Γ, T) = T'
Γ ⊢ T type@i ↝ T'    type + level:    checkTypeLevel(Γ, T) = (T', i)
```

The output `t'` is the elaborated core term (fully annotated).

### 7.3 Synthesis rules (infer)

**Var**

```
lookupType(Γ, i) = A
──────────────────────
Γ ⊢ Var(i) ⇒ A ↝ Var(i)
```

**Ann** (annotation)

```
Γ ⊢ A type ↝ A'
Â = eval(Γ.env, A')
Γ ⊢ t ⇐ Â ↝ t'
──────────────────────
Γ ⊢ Ann(t, A) ⇒ Â ↝ Ann(t', A')
```

**App** (application)

```
Γ ⊢ f ⇒ fTy ↝ f'
whnf(fTy) = VPi(n, A, cl)
Γ ⊢ u ⇐ A ↝ u'
B = instantiate(cl, eval(Γ.env, u'))
──────────────────────
Γ ⊢ App(f, u) ⇒ B ↝ App(f', u')
```

**CRITICAL**: `whnf(fTy)` must normalize `fTy` to weak head normal form before pattern matching. If `fTy` is a let-unfolding or a neutral that reduces further, the match will fail spuriously. In this kernel, `eval` already produces WHNF, so `whnf(v) = v` for all values. But this invariant must be maintained if the value representation changes.
**Fst** (first projection)

```
Γ ⊢ t ⇒ tTy ↝ t'
whnf(tTy) = VSigma(n, A, cl)
──────────────────────
Γ ⊢ Fst(t) ⇒ A ↝ Fst(t')
```

**Snd** (second projection)

```
Γ ⊢ t ⇒ tTy ↝ t'
whnf(tTy) = VSigma(n, A, cl)
B = instantiate(cl, vFst(eval(Γ.env, t')))
──────────────────────
Γ ⊢ Snd(t) ⇒ B ↝ Snd(t')
```

**Eliminator motive checking (checkMotive).** All eliminators require a motive `P : domTy → U(k)` for some `k`. The implementation provides a shared `checkMotive` helper that handles two forms:

- Lambda motives (`P = λx. body`): extend the context with `x : domTy` and verify `body` is a type via `checkType`.
- Non-lambda motives: infer the type and verify it has shape `VPi(_, domTy, _ → VU(k))` for some `k`.

The `k` is not fixed — motives may target any universe level, enabling **large elimination** (eliminators whose return type is a type, not a value). For example, `NatElim(λn. U(0), ...)` is valid and returns types at universe 1.

**NatElim**

```
Γ ⊢ P ⇐ VPi(_, VNat, ([], U(k))) ↝ P'
P̂ = eval(Γ.env, P')
Γ ⊢ z ⇐ vApp(P̂, VZero) ↝ z'
Γ ⊢ s ⇐ VPi(_, VNat, (Γ.env, Pi(_, App(P, Var(0)), ...))) ↝ s'
  -- s : Π(k : ℕ). P(k) → P(S(k))
Γ ⊢ n ⇐ VNat ↝ n'
──────────────────────
Γ ⊢ NatElim(P, z, s, n) ⇒ vApp(P̂, eval(Γ.env, n'))
  ↝ NatElim(P', z', s', n')
```

The full typing of `s` is:

```
s : Π(k : ℕ). P(k) → P(S(k))
```

This is checked by constructing the appropriate Pi type from `P̂`.

**BoolElim**

```
Γ ⊢ P ⇐ VPi(_, VBool, ([], U(k))) ↝ P'
P̂ = eval(Γ.env, P')
Γ ⊢ t ⇐ vApp(P̂, VTrue) ↝ t'
Γ ⊢ f ⇐ vApp(P̂, VFalse) ↝ f'
Γ ⊢ b ⇐ VBool ↝ b'
──────────────────────
Γ ⊢ BoolElim(P, t, f, b) ⇒ vApp(P̂, eval(Γ.env, b'))
  ↝ BoolElim(P', t', f', b')
```

**ListElim**

```
Γ ⊢ A type ↝ A'   Â = eval(Γ.env, A')
Γ ⊢ P ⇐ VPi(_, VList(Â), ([], U(k))) ↝ P'
P̂ = eval(Γ.env, P')
Γ ⊢ n ⇐ vApp(P̂, VNil(Â)) ↝ n'
Γ ⊢ c ⇐ <Π(h:A). Π(t:List A). P(t) → P(cons h t)> ↝ c'
Γ ⊢ l ⇐ VList(Â) ↝ l'
──────────────────────
Γ ⊢ ListElim(A, P, n, c, l) ⇒ vApp(P̂, eval(Γ.env, l'))
  ↝ ListElim(A', P', n', c', l')
```

**Absurd** (ex falso)

```
Γ ⊢ A type ↝ A'   Â = eval(Γ.env, A')
Γ ⊢ t ⇐ VVoid ↝ t'
──────────────────────
Γ ⊢ Absurd(A, t) ⇒ Â ↝ Absurd(A', t')
```

**SumElim**

```
Γ ⊢ A type ↝ A'   Â = eval(Γ.env, A')
Γ ⊢ B type ↝ B'   B̂ = eval(Γ.env, B')
Γ ⊢ P ⇐ VPi(_, VSum(Â, B̂), ([], U(k))) ↝ P'
P̂ = eval(Γ.env, P')
Γ ⊢ l ⇐ <Π(x : A). P(inl x)> ↝ l'
Γ ⊢ r ⇐ <Π(x : B). P(inr x)> ↝ r'
Γ ⊢ s ⇐ VSum(Â, B̂) ↝ s'
──────────────────────
Γ ⊢ SumElim(A,B,P,l,r,s) ⇒ vApp(P̂, eval(Γ.env, s'))
↝ SumElim(A',B',P',l',r',s')
```
**J** (identity elimination)
```
Γ ⊢ A type ↝ A' Â = eval(Γ.env, A')
Γ ⊢ a ⇐ Â ↝ a' â = eval(Γ.env, a')
Γ ⊢ P ⇐ <Π(y : A). Π(e : Id_A(a, y)). U(k)> ↝ P'
P̂ = eval(Γ.env, P')
Γ ⊢ pr ⇐ vApp(vApp(P̂, â), VRefl) ↝ pr'
Γ ⊢ b ⇐ Â ↝ b' b̂ = eval(Γ.env, b')
Γ ⊢ eq ⇐ VEq(Â, â, b̂) ↝ eq'
──────────────────────
Γ ⊢ J(A, a, P, pr, b, eq) ⇒ vApp(vApp(P̂, b̂), eval(Γ.env, eq'))
↝ J(A', a', P', pr', b', eq')
```
**J motive verification.** For non-lambda motives, the
implementation structurally verifies all three components:
1. Outer Pi domain matches `A` (conversion check)
2. Inner Pi domain matches `Eq(A, a, y)` (conversion check)
3. Innermost codomain is `VU(k)` for some `k`
For lambda motives (`P = λy. body`), the body is checked via
`checkMotive` against `Eq(A, a, y)`, which performs the same
verification on the inner structure. This catches motive errors
at the motive itself rather than deferring to the base case.
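For orientation, the most common instance of J is transport: choosing the motive `λy. λe. P(y)` turns the rule above into "an equality `a = b` carries `P(a)` to `P(b)`". A Lean 4 sketch of the same construction, for cross-reference only (the kernel's J is the MLTT eliminator above, not Lean's machinery):

```lean
-- Transport along an identity proof, derived as an instance of J:
-- motive fun y _ => P y; the base case (eq = refl) is the identity on P a.
def transport {A : Type} (P : A → Type) {a b : A}
    (h : a = b) (pa : P a) : P b :=
  h ▸ pa
```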
**Axiomatized primitive type formers** (synthesis)
Primitive type formers are synthesized directly — they infer as
inhabitants of `U(0)`:
```
──────────────────────
Γ ⊢ String ⇒ VU(0) ↝ String
──────────────────────
Γ ⊢ Int ⇒ VU(0) ↝ Int
```
(Similarly for Float, Attrs, Path, Function, Any — all at level 0.)
**Primitive literals** (synthesis)
Literals synthesize their corresponding type:
```
──────────────────────
Γ ⊢ StringLit(s) ⇒ VString ↝ StringLit(s)
──────────────────────
Γ ⊢ IntLit(n) ⇒ VInt ↝ IntLit(n)
──────────────────────
Γ ⊢ FloatLit(f) ⇒ VFloat ↝ FloatLit(f)
```
(Similarly for AttrsLit → VAttrs, PathLit → VPath,
FnLit → VFunction, AnyLit → VAny.)
**StrEq** (string equality)
```
Γ ⊢ lhs ⇐ VString ↝ lhs'
Γ ⊢ rhs ⇐ VString ↝ rhs'
──────────────────────
Γ ⊢ StrEq(lhs, rhs) ⇒ VBool ↝ StrEq(lhs', rhs')
```
Both arguments are checked against `VString`. The result type is
always `VBool`. StrEq is not a dependent eliminator — it has no
motive parameter.
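The three-way case split in `vStrEq` (§4.13) is small enough to sketch directly. Hypothetical Python over the spec's tagged values, illustration only:

```python
# Sketch of vStrEq's evaluation cases (§4.13): two literals compute; a
# neutral on either side absorbs the other argument into its spine via
# EStrEq — sound because StrEq(a, b) ≡ StrEq(b, a).

def v_str_eq(lhs, rhs):
    if lhs[0] == "VStringLit" and rhs[0] == "VStringLit":
        return ("VTrue",) if lhs[1] == rhs[1] else ("VFalse",)
    if lhs[0] == "VNe":                  # stuck on the left
        return ("VNe", lhs[1], lhs[2] + [("EStrEq", rhs)])
    if rhs[0] == "VNe":                  # stuck on the right
        return ("VNe", rhs[1], rhs[2] + [("EStrEq", lhs)])
    raise RuntimeError("kernel bug: vStrEq on non-string")
```

Either neutral case produces the same spine shape, which is why quotation can always emit `StrEq(head, arg)` without remembering which side was stuck.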
### 7.4 Checking rules (check)
**Lam** (lambda introduction)
```
whnf(A) = VPi(n, dom, cl)
Γ' = extend(Γ, n, dom)
cod = instantiate(cl, fresh(Γ.depth))
Γ' ⊢ t ⇐ cod ↝ t'
──────────────────────
Γ ⊢ Lam(n, _, t) ⇐ A ↝ Lam(n, quote(Γ.depth, dom), t')
```
**Pair** (pair introduction)
```
whnf(T) = VSigma(n, A, cl)
Γ ⊢ a ⇐ A ↝ a'
B = instantiate(cl, eval(Γ.env, a'))
Γ ⊢ b ⇐ B ↝ b'
──────────────────────
Γ ⊢ Pair(a, b, _) ⇐ T ↝ Pair(a', b', quote(Γ.depth, T))
```
**Zero**
```
whnf(A) = VNat
──────────────────────
Γ ⊢ Zero ⇐ A ↝ Zero
```
**Succ** (MUST trampoline for deep naturals)
```
whnf(A) = VNat
Γ ⊢ t ⇐ VNat ↝ t'
──────────────────────
Γ ⊢ Succ(t) ⇐ A ↝ Succ(t')
```
**True / False**
```
whnf(A) = VBool
──────────────────────
Γ ⊢ True ⇐ A ↝ True
whnf(A) = VBool
──────────────────────
Γ ⊢ False ⇐ A ↝ False
```
**Nil**
```
whnf(A) = VList(Â)
──────────────────────
Γ ⊢ Nil(_) ⇐ A ↝ Nil(quote(Γ.depth, Â))
```
**Cons** (MUST trampoline for deep lists)
```
whnf(A) = VList(Â)
Γ ⊢ h ⇐ Â ↝ h'
Γ ⊢ t ⇐ VList(Â) ↝ t'
──────────────────────
Γ ⊢ Cons(_, h, t) ⇐ A ↝ Cons(quote(Γ.depth, Â), h', t')
```
**Tt**
```
whnf(A) = VUnit
──────────────────────
Γ ⊢ Tt ⇐ A ↝ Tt
```
**Inl / Inr**
```
whnf(T) = VSum(A, B)
Γ ⊢ t ⇐ A ↝ t'
──────────────────────
Γ ⊢ Inl(_, _, t) ⇐ T ↝ Inl(quote(Γ.depth, A), quote(Γ.depth, B), t')
whnf(T) = VSum(A, B)
Γ ⊢ t ⇐ B ↝ t'
──────────────────────
Γ ⊢ Inr(_, _, t) ⇐ T ↝ Inr(quote(Γ.depth, A), quote(Γ.depth, B), t')
```
**Refl**
```
whnf(T) = VEq(A, a, b)
conv(Γ.depth, a, b) = true
──────────────────────
Γ ⊢ Refl ⇐ T ↝ Refl
```
If `conv(Γ.depth, a, b) = false`, this is a **type error**: the
two sides of the equation are not definitionally equal, and `refl`
cannot prove the equation. Report via effect.
**Primitive literals** (checked against their corresponding types)
```
whnf(A) = VString
──────────────────────
Γ ⊢ StringLit(s) ⇐ A ↝ StringLit(s)
whnf(A) = VInt
──────────────────────
Γ ⊢ IntLit(n) ⇐ A ↝ IntLit(n)
whnf(A) = VFloat
──────────────────────
Γ ⊢ FloatLit(f) ⇐ A ↝ FloatLit(f)
whnf(A) = VAttrs
──────────────────────
Γ ⊢ AttrsLit ⇐ A ↝ AttrsLit
whnf(A) = VPath
──────────────────────
Γ ⊢ PathLit ⇐ A ↝ PathLit
whnf(A) = VFunction
──────────────────────
Γ ⊢ FnLit ⇐ A ↝ FnLit
whnf(A) = VAny
──────────────────────
Γ ⊢ AnyLit ⇐ A ↝ AnyLit
```
**Let**
```
Γ ⊢ A type ↝ A'
 = eval(Γ.env, A')
Γ ⊢ t ⇐ Â ↝ t'
t̂ = eval(Γ.env, t')
Γ' = { env = [t̂] ++ Γ.env, types = [Â] ++ Γ.types, depth = Γ.depth + 1 }
Γ' ⊢ u ⇐ B ↝ u'
──────────────────────
Γ ⊢ Let(n, A, t, u) ⇐ B ↝ Let(n, A', t', u')
```
Note: `Let` in checking mode — the expected type `B` is for the
body `u`, not for the definition `t`.
**Sub** (mode switch: fall through to synthesis)
```
Γ ⊢ t ⇒ A ↝ t'
conv(Γ.depth, A, B) = true -- or cumulativity check
──────────────────────
Γ ⊢ t ⇐ B ↝ t'
```
This is the catch-all. If no other checking rule applies, try
synthesis and verify the inferred type matches the expected type.
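The shape of that final comparison, with the cumulativity carve-out from §8.3, can be sketched as follows. Hypothetical Python; `infer` is assumed to have already run, and `conv` stands in for the kernel's conversion checker:

```python
# Sketch of the Sub rule's type comparison: cumulativity VU(i) ≤ VU(j)
# applies only here, never inside conv itself (§8.3).

def check_sub(ctx_depth, inferred_ty, expected_ty, conv):
    if inferred_ty[0] == "VU" and expected_ty[0] == "VU":
        return inferred_ty[1] <= expected_ty[1]   # U(i) fits in U(j) when i ≤ j
    return conv(ctx_depth, inferred_ty, expected_ty)
```

Keeping the `i ≤ j` case out of `conv` preserves conversion's symmetry (§10.6): `conv` stays a strict equality on universes, and subtyping lives only at the single directed point where checking falls through to synthesis.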
### 7.5 Type formation (checkType / checkTypeLevel)
The implementation provides two variants: `checkType(Γ, T)` returns
only the elaborated term, while `checkTypeLevel(Γ, T)` returns both
the elaborated term and the universe level. `checkType` is a thin
wrapper: `checkType(Γ, T) = checkTypeLevel(Γ, T).term`. Universe
levels are computed structurally during the type formation check
(see §8.2), not by post-hoc inspection of evaluated values.
```
──────────────────────
Γ ⊢ Nat type ↝ Nat
──────────────────────
Γ ⊢ Bool type ↝ Bool
──────────────────────
Γ ⊢ Unit type ↝ Unit
──────────────────────
Γ ⊢ Void type ↝ Void
──────────────────────
Γ ⊢ String type ↝ String
(Similarly for Int, Float, Attrs, Path, Function, Any)
──────────────────────
Γ ⊢ U(i) type ↝ U(i)
Γ ⊢ A type ↝ A'
──────────────────────
Γ ⊢ List(A) type ↝ List(A')
Γ ⊢ A type ↝ A' Γ ⊢ B type ↝ B'
──────────────────────
Γ ⊢ Sum(A, B) type ↝ Sum(A', B')
Γ ⊢ A type ↝ A'
 = eval(Γ.env, A')
Γ' = extend(Γ, n, Â)
Γ' ⊢ B type ↝ B'
──────────────────────
Γ ⊢ Pi(n, A, B) type ↝ Pi(n, A', B')
Γ ⊢ A type ↝ A'
 = eval(Γ.env, A')
Γ' = extend(Γ, n, Â)
Γ' ⊢ B type ↝ B'
──────────────────────
Γ ⊢ Sigma(n, A, B) type ↝ Sigma(n, A', B')
Γ ⊢ A type ↝ A' Â = eval(Γ.env, A')
Γ ⊢ a ⇐ Â ↝ a'
Γ ⊢ b ⇐ Â ↝ b'
──────────────────────
Γ ⊢ Eq(A, a, b) type ↝ Eq(A', a', b')
-- Fallback: infer and check it's a universe
Γ ⊢ T ⇒ A ↝ T'
whnf(A) = VU(i)
──────────────────────
Γ ⊢ T type ↝ T'
```
---
## 8. Universe Rules
### 8.1 Universe formation
```
U(i) : U(i + 1) for all i ≥ 0
```
### 8.2 Type former levels
Universe levels are computed by `checkTypeLevel`, which returns
`{ term; level; }` from the **typing derivation**, not from
post-hoc value inspection. This avoids the problem of unknown
levels for neutral type variables. We write `level(A)` as shorthand
for `checkTypeLevel(Γ, A).level`.
```
checkTypeLevel(Γ, Nat) = { Nat, 0 }
checkTypeLevel(Γ, Bool) = { Bool, 0 }
checkTypeLevel(Γ, Unit) = { Unit, 0 }
checkTypeLevel(Γ, Void) = { Void, 0 }
checkTypeLevel(Γ, String) = { String, 0 }
checkTypeLevel(Γ, Int) = { Int, 0 }
checkTypeLevel(Γ, Float) = { Float, 0 }
checkTypeLevel(Γ, Attrs) = { Attrs, 0 }
checkTypeLevel(Γ, Path) = { Path, 0 }
checkTypeLevel(Γ, Function) = { Function, 0 }
checkTypeLevel(Γ, Any) = { Any, 0 }
checkTypeLevel(Γ, List(A)) = { List(A'), level(A) }
checkTypeLevel(Γ, Sum(A, B)) = { Sum(A',B'), max(level(A), level(B)) }
checkTypeLevel(Γ, Pi(n, A, B)) = { Pi(n,A',B'), max(level(A), level(B)) }
checkTypeLevel(Γ, Sigma(n,A,B))= { Sigma(n,A',B'), max(level(A), level(B)) }
checkTypeLevel(Γ, Eq(A, a, b)) = { Eq(A',a',b'), level(A) }
checkTypeLevel(Γ, U(i)) = { U(i), i + 1 }
-- Fallback: infer type, require VU(i), extract i
checkTypeLevel(Γ, T) = { T', i } where Γ ⊢ T ⇒ VU(i) ↝ T'
```
The fallback handles neutral type expressions (variables,
applications) by inferring their type and requiring it to be a
universe. This correctly propagates levels through type variables:
if `B : U(1)`, then `checkTypeLevel` on `B` infers `VU(1)` and
returns level 1.
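The structural computation above can be sketched over a toy term representation (names dropped from Pi/Sigma, fallback for neutrals omitted). Hypothetical Python, illustration only:

```python
# Sketch of checkTypeLevel's structural level computation (§8.2).
# Toy representation: ("Pi", A, B) carries domain and codomain only.

def type_level(t):
    tag = t[0]
    if tag in ("Nat", "Bool", "Unit", "Void",
               "String", "Int", "Float", "Attrs", "Path", "Function", "Any"):
        return 0
    if tag == "List":
        return type_level(t[1])
    if tag in ("Sum", "Pi", "Sigma"):
        return max(type_level(t[1]), type_level(t[2]))
    if tag == "Eq":
        return type_level(t[1])
    if tag == "U":
        return t[1] + 1          # U(i) : U(i+1) — rules out U(i) : U(i)
    raise ValueError(f"no structural level for {tag}; fallback infers VU(i)")
```

The `U` case is the one §8.4 leans on: the level of `U(i)` is strictly above `i`, so a self-containing universe can never be assigned.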
### 8.3 Cumulativity
A type `A` at level `i` is also a type at level `j` for all `j ≥ i`.
This is implemented by accepting `conv(d, VU(i), VU(j))` when `i ≤ j`
**in the Sub rule only** (checking mode, when comparing an inferred
universe against an expected universe). The `conv` function itself
uses strict equality `i == j`.
The cumulativity check is in `check`:
```
-- In the Sub rule:
-- If inferredTy = VU(i) and expectedTy = VU(j) and i ≤ j: accept
-- Otherwise: conv(Γ.depth, inferredTy, expectedTy) must hold
```
### 8.4 Universe consistency
The kernel MUST reject `U(i) : U(i)`. This is guaranteed by the
level computation: `level(U(i)) = i + 1`, so `U(i)` lives at level
`i + 1`, not `i`. Self-containing universes cannot be constructed.
This prevents Girard's paradox (Girard 1972), which requires a type
that contains itself. Hurkens (1995) gives the compact MLTT rendering
of the inconsistency proof. Universe stratification is the standard
fix, and it is why the kernel enforces `level(U(i)) = i + 1`.
---
## 9. Fuel Mechanism
### 9.1 Evaluation fuel
Every call to `evalF` receives a fuel parameter and decrements it
by one before evaluating the term. When fuel reaches 0:
```
evalF(fuel=0, ρ, t) = THROW "normalization budget exceeded"
```
The kernel aborts via `throw`. Layer 0 (TCB) has no access to the
effect system by design, so fuel exhaustion and kernel invariant
violations both manifest as Nix-level throws caught by `tryEval`.
Callers should treat any throw from the evaluator as "term not
verified" — the distinction between fuel exhaustion and a kernel bug
is in the error message text, not the failure mechanism.
### 9.2 Default budget
The default fuel budget is 10,000,000 reduction steps. This is
configurable by the caller via `evalF`. No minimum is enforced —
callers may pass arbitrarily low fuel, which will cause immediate
`throw` on the first eval step.
### 9.3 Fuel accounting
Fuel is **per-path**, not a global counter. Each call to `evalF`
captures `f = fuel - 1` and passes `f` to all sub-evaluations of
that term. When evaluating `App(t, u)`, both `evalF(f, ρ, t)` and
`evalF(f, ρ, u)` receive the same `f`. This means fuel bounds the
**depth** of any single evaluation path, not the total work across
all paths.
For a balanced binary tree of N applications, the total work is
O(2^depth × fuel), not O(fuel). This is inherent to pure Nix —
there is no mutable global counter. The fuel mechanism guarantees
termination (every path eventually hits 0) but does not bound total
computation time.
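The depth-not-work character of per-path fuel is easy to see on a toy evaluator. Hypothetical Python sketch (the real `evalF` is pure Nix); here a balanced tree of nested pairs evaluates with fuel equal to its depth plus one, regardless of how many leaves it has:

```python
# Sketch of per-path fuel (§9.3): every sub-evaluation receives fuel - 1,
# so fuel bounds the depth of one evaluation path, not total work.

def eval_f(fuel, t):
    if fuel == 0:
        raise RuntimeError("normalization budget exceeded")
    if t[0] == "pair":                    # both children get the SAME fuel - 1
        return (eval_f(fuel - 1, t[1]), eval_f(fuel - 1, t[2]))
    return t[1]                           # ("lit", v) leaf

# Depth-2 tree with 4 leaves: succeeds with fuel 3, fails with fuel 2.
tree = ("pair", ("pair", ("lit", 1), ("lit", 2)),
                ("pair", ("lit", 3), ("lit", 4)))
```

Doubling the leaf count at the same depth costs no extra fuel on any single path, which is exactly why total work can reach O(2^depth × fuel) while termination stays guaranteed.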
All fuel consumption flows through `evalF`:
- Direct term evaluation (each `evalF` call decrements fuel by 1)
- Beta-reduction in `vApp` consumes fuel indirectly via
`instantiateF`, which calls `evalF`
- Iota-reduction in recursive eliminators (`vNatElimF`,
`vListElimF`, `vSumElimF`) consumes fuel indirectly via `vAppF`
Non-recursive eliminators (`vBoolElim`, `vJ`, `vAbsurd`) complete
in O(1) and do not call `evalF`. Structural operations (building
values, pattern matching on tags) do not consume fuel.
### 9.4 Fuel threading in trampolined eliminators
Trampolined eliminators (`vNatElimF`, `vListElimF`) flatten
recursive chains into `builtins.foldl'` loops. Each fold step
threads fuel through the accumulator:
```
foldl'(λ{acc, fuel}. λi.
if fuel ≤ 0 then THROW "normalization budget exceeded"
else { acc = step(fuel, acc, chain[i]); fuel = fuel - 1; })
{acc = base; fuel = fuel}
[1..n]
```
This ensures that an N-element chain consumes N units of fuel from
the fold, plus whatever fuel each step application consumes
internally. Without this threading, each step would get the
original fuel budget, giving an effective budget of N × fuel.
The worst-case complexity of a threaded fold is O(fuel²): at step
*i*, the inner `vAppF` receives `fuel - i` as its own per-path
budget. Summing over all steps gives Σ(fuel - i) ≈ fuel²/2. To
achieve O(fuel), `vAppF` would need to return remaining fuel — an
invasive signature change. The quadratic residual is inherent to
per-path fuel semantics and is a strict improvement over the
pre-threading O(N × fuel) with unbounded N.
### 9.5 Fuel consumption in constructor chains
Trampolined Succ and Cons evaluation (`eval(ρ, Succ(t))` and
`eval(ρ, Cons(A, h, t))`) flatten chains of n constructors and
deduct n fuel units from the budget before evaluating the base.
A chain of n Succ constructors consumes n+1 fuel (1 for the entry
evaluation, n for the chain deduction). Cons chains additionally
thread remaining fuel through the fold: each fold step evaluates
the element type and head with the current fuel budget, then
decrements by 1 (matching the eliminator fuel threading pattern
from §9.4). This is a third fuel consumption site alongside
`evalF` decrements and eliminator fold steps.
---
## 10. Properties the Implementation Must Satisfy
### 10.1 Soundness (non-negotiable)
If the kernel accepts `Γ ⊢ t : A`, then `t` is a valid term of
type `A` in MLTT with the specified type formers and universe
hierarchy. Formally:
**If `check(Γ, t, A)` succeeds, then `Γ ⊢ t : A` is derivable
in the declarative typing rules of MLTT.**
Equivalently: the kernel never accepts an ill-typed term.
### 10.2 Determinism
For any input `(Γ, t, A)`, the kernel produces the same result
on every invocation. There is no randomness, no system-dependent
behavior, no sensitivity to evaluation order (beyond fuel
exhaustion, which always rejects).
### 10.3 Termination
For any input `(Γ, t, A)`, the kernel terminates. It either:
- Accepts (returns the elaborated term)
- Rejects with a type error (via effect)
- Rejects with fuel exhaustion
- Crashes with a kernel bug diagnostic (throw)
It never loops. The fuel mechanism guarantees this.
### 10.4 Evaluation roundtrip
For any well-typed term `t` and environment `ρ` consistent with
the context:
```
quote(d, eval(ρ, quote(d, eval(ρ, t)))) = quote(d, eval(ρ, t))
```
Evaluation followed by quotation is idempotent. The result is a
normal form.
### 10.5 Conversion reflexivity
For any value `v`:
```
conv(d, v, v) = true
```
### 10.6 Conversion symmetry
For any values `v₁, v₂`:
```
conv(d, v₁, v₂) = conv(d, v₂, v₁)
```
### 10.7 Conversion transitivity
For any values `v₁, v₂, v₃`:
```
conv(d, v₁, v₂) ∧ conv(d, v₂, v₃) ⟹ conv(d, v₁, v₃)
```
### 10.8 Type preservation under evaluation
If `Γ ⊢ t : A` and `eval(Γ.env, t) = v`, then `v` represents a
value of type `A`. This is not directly testable (values don't
carry types) but is ensured by the correctness of the evaluation
rules.
### 10.9 Strong normalization (for well-typed terms)
For any well-typed term `t`, `eval` terminates without exhausting
fuel for a sufficiently large fuel budget. The fuel mechanism is
a practical safeguard, not a theoretical necessity for well-typed
terms.
---
## 11. Derived Test Cases
Every rule in this spec generates at least one positive test (the
rule applies and succeeds) and one negative test (the rule's
premises are violated and the kernel rejects).
### 11.1 Required positive tests (kernel must ACCEPT)
```
-- Identity
⊢ Refl : Eq(Nat, Zero, Zero)
-- Function type
⊢ Lam(x, Nat, Var(0)) : Pi(x, Nat, Nat)
-- Application
f : Pi(x, Nat, Nat) ⊢ App(f, Zero) : Nat
-- Dependent function
⊢ Lam(A, U(0), Lam(x, Var(0), Var(0))) : Pi(A, U(0), Pi(x, A, A))
-- Sigma pair
⊢ Pair(Zero, True, Sigma(x, Nat, Bool)) : Sigma(x, Nat, Bool)
-- Nat induction: 0 + 0 = 0
⊢ Refl : Eq(Nat, NatElim(_, Zero, _, Zero), Zero)
-- Bool elimination
⊢ BoolElim(_, Zero, Succ(Zero), True) : Nat
-- List
⊢ Cons(Nat, Zero, Nil(Nat)) : List(Nat)
-- Ex falso (with neutral)
v : Void ⊢ Absurd(Nat, v) : Nat
-- Sum injection
⊢ Inl(Nat, Bool, Zero) : Sum(Nat, Bool)
-- Universe hierarchy
⊢ U(0) : U(1)
⊢ U(1) : U(2)
⊢ Nat : U(0)
⊢ Pi(x, Nat, Nat) : U(0)
-- Let binding
⊢ Let(x, Nat, Zero, Var(0)) : Nat
-- Cumulativity: Nat : U(0) should also be accepted at U(1)
-- StrEq: equal strings reduce to true
⊢ Refl : Eq(Bool, StrEq(StringLit("foo"), StringLit("foo")), True)
-- StrEq: unequal strings reduce to false
⊢ Refl : Eq(Bool, StrEq(StringLit("foo"), StringLit("bar")), False)
-- StrEq: type inference
⊢ StrEq(StringLit("a"), StringLit("b")) : Bool
```
### 11.2 Required negative tests (kernel must REJECT)
```
-- Type mismatch
⊢ Zero : Bool REJECT
-- Universe violation
⊢ U(0) : U(0) REJECT
-- Refl on unequal terms
⊢ Refl : Eq(Nat, Zero, Succ(Zero)) REJECT
-- Application of non-function
⊢ App(Zero, Zero) REJECT
-- Projection of non-pair
⊢ Fst(Zero) REJECT
-- Wrong eliminator scrutinee
⊢ NatElim(_, _, _, True) REJECT
-- Unbound variable
⊢ Var(0) (in empty context) REJECT
-- StrEq on non-string
⊢ StrEq(Zero, StringLit("foo")) REJECT (lhs is Nat, expected String)
-- Ill-typed pair
⊢ Pair(Zero, Zero, Sigma(x, Nat, Bool)) REJECT (snd is Nat, expected Bool)
```
### 11.3 Required stress tests
```
-- Large Nat: S^5000(0) : Nat ACCEPT (trampoline)
-- Large List: cons^5000 : List(Nat) ACCEPT (trampoline)
-- NatElim on S^5000(0) ACCEPT (trampoline)
-- ListElim on cons^5000 ACCEPT (trampoline)
-- Succ elaboration: elab-succ-5000 ACCEPT (trampoline)
-- Cons elaboration: elab-cons-5000 ACCEPT (trampoline)
-- Deeply nested Pi: Pi(x₁, ..., Pi(xₙ, Nat, Nat)...) for n=500 ACCEPT
-- Fuel exhaustion: artificially low fuel on complex term REJECT (fuel)
-- Fuel threading: NatElim fold decrements fuel per step ACCEPT
-- Fuel threading: ListElim fold decrements fuel per step ACCEPT
```
### 11.4 Required roundtrip tests
For each value form, verify:
```
quote(d, eval(ρ, t)) = normal_form(t)
```
where `normal_form(t)` is the expected normal form.
---
## 12. Notation Index
| Symbol | Meaning |
|--------|---------|
| Γ | Typing context |
| ρ | Value environment |
| d | Binding depth (for levels ↔ indices) |
| ⊢ | Typing judgment |
| ⇐ | Checking mode |
| ⇒ | Synthesis mode |
| ↝ | Elaborates to |
| ≡ | Definitional equality |
| Π | Dependent function type |
| Σ | Dependent pair type |
| ℕ | Natural numbers |
| 𝔹 | Booleans |
| ⊤ | Unit type |
| ⊥ | Void / empty type |
| U(i) | Universe at level i |
| Id_A(a,b) | Identity type |
| TCB | Trusted computing base |
| WHNF | Weak head normal form |
| NbE | Normalization by evaluation |
| THROW | Kernel invariant violation (crash) |
| REJECT | Term rejected (via effect or fuel) |
---
## 13. Known Limitations
The following are documented implementation choices or limitations,
not bugs. They are recorded here so auditors do not rediscover them.
### 13.1 Pair quotation uses dummy type annotation
`quote(d, VPair(a, b))` produces `Pair(quote(d,a), quote(d,b), Unit)`
— the type annotation is always `Unit` regardless of the actual pair
type. The `VPair` value does not carry its type (values are untyped
in NbE), so the annotation cannot be reconstructed without additional
context. Quoted pairs are structurally correct but carry a dummy
annotation.
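Schematically, using the `{ tag = ...; }` term encoding that appears elsewhere in this spec — the exact field names here are illustrative, not the kernel's:

```nix
# Hypothetical quoted form of VPair(Zero, True): the type slot is always
# Unit, whatever the pair's actual Sigma type was. Field names illustrative.
{
  tag = "pair";
  fst = { tag = "zero"; };
  snd = { tag = "true"; };
  ty  = { tag = "unit"; };
}
```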
### 13.2 Lambda domain annotations discarded in checking mode
When checking `Lam(n, A, t)` against `VPi(n, dom, cl)`, the lambda's
domain annotation `A` is discarded and replaced by `dom` from the
Pi type. This is standard bidirectional type checking (Dunfield &
Krishnaswami 2021, §4): in checking mode, the expected type provides
the domain, not the term. The elaborated output uses `quote(d, dom)`,
making the original annotation unrecoverable.
### 13.3 Term constructors do not validate argument types
Term constructors (`mkVar`, `mkSucc`, etc.) accept arbitrary Nix
values without type validation. `mkVar "hello"` produces
`{ tag = "var"; idx = "hello"; }`, which crashes at eval time.
The trust boundary is the HOAS layer (`hoas.nix`), which is the
public API — direct term construction is internal to the kernel.
### 13.4 `tryEval` only catches `throw` and `assert false`
`builtins.tryEval` in the elaborator's `isConstantFamily` sentinel
detection only catches explicit `throw` and `assert false`. Nix
coercion errors (e.g., "cannot convert a function to JSON"),
missing attribute access, and type comparison errors are uncatchable.
The elaborator uses `builtins.typeOf` in error paths to avoid
triggering coercion errors.
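The boundary is easy to see in plain, self-contained Nix (the commented-out line illustrates an uncatchable coercion error):

```nix
let
  # tryEval catches an explicit throw...
  caught = builtins.tryEval (throw "boom");
  # ...and a failed assert:
  alsoCaught = builtins.tryEval (assert false; 1);
in {
  inherit caught alsoCaught;  # both are { success = false; value = false; }
  # A coercion error is NOT catchable — uncommenting the next line aborts
  # the whole evaluation instead of returning { success = false; ... }:
  # notCaught = builtins.tryEval ("x" + 1);
}
```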
### 13.5 HOAS sentinel comparison
The `isConstantFamily` sentinel test in the elaborator applies two
distinct sentinel values and compares the results to detect whether
a binding body is dependent. Both Pi and Sigma paths compare
**elaborated kernel terms** (`H.elab r1.value == H.elab r2.value`)
rather than raw HOAS trees. This avoids false negatives from Nix's
function identity comparison (`==` on lambdas). However, if `H.elab`
itself produces structurally different terms for semantically
equivalent types (e.g., through different elaboration paths), false
negatives remain possible. This is a safe failure mode — the kernel
still type-checks correctly, but elaboration may require explicit
`_kernel` annotations unnecessarily.
### 13.6 StrEq neutral canonicalization
When one argument to `vStrEq` is neutral and the other is a literal,
the neutral's spine is extended with `EStrEq(literal)`. When both
arguments are neutral, the **left** neutral's spine is extended with
`EStrEq(right)`. This means `StrEq(x, y)` and `StrEq(y, x)` (where
both are neutral) produce different normal forms: `VNe(x, [EStrEq(y)])`
vs `VNe(y, [EStrEq(x)])`. Therefore `conv` will report them as
**not** definitionally equal, even though `StrEq` is semantically
symmetric. This is a safe conservatism: the kernel may reject some
provable equalities but never accepts a false one.
### 13.7 Extract uses type value threading (not sentinels)
The `extract` function threads kernel type values (`tyVal`) through
recursive extraction, rather than using sentinel-based non-dependence
tests. For Pi extraction, the codomain type is computed per-invocation
via `instantiate(tyVal.closure, kernelArg)`, supporting both dependent
and non-dependent function extraction. For Sigma extraction (records),
the second component's type is computed via
`instantiate(tyVal.closure, val.fst)`. A `reifyType : Val → HoasTree`
fallback converts kernel type values back to HOAS when the HOAS body
cannot be applied (e.g., when the body accesses record fields from a
neutral). `reifyType` loses sugar (VSigma → `H.sigma`, not
`H.record`) so the HOAS body is preferred when available.
### 13.8 Spine comparison complexity
`convSp` uses `builtins.elemAt` in a fold to compare neutral spines.
In Nix, `builtins.elemAt` on lists is O(1) (Nix lists are internally
vectors/arrays), so the actual complexity is O(n), not O(n²). This
was incorrectly flagged in an earlier audit.
---
## References
1. Coquand, T. et al. (2009). *A simple type-theoretic language: Mini-TT.*
2. Dunfield, J. & Krishnaswami, N. (2021). *Bidirectional Typing.* ACM Computing Surveys.
3. Kovács, A. (2022). *Generalized Universe Hierarchies.* CSL 2022.
4. Abel, A. & Chapman, J. (2014). *Normalization by Evaluation in the Delay Monad.*
5. Girard, J.-Y. (1972). *Interprétation fonctionnelle et élimination des coupures de l'arithmétique d'ordre supérieur.* Thèse d'État, Université Paris 7.
6. Hurkens, A. J. C. (1995). *A Simplification of Girard's Paradox.* TLCA 1995.
7. de Bruijn, N. (1972). *Lambda Calculus Notation with Nameless Dummies.*
8. Martin-Löf, P. (1984). *Intuitionistic Type Theory.* Bibliopolis.
9. Felicissimo, T. (2023). *Generic Bidirectional Typing for Dependent Type Theories.*
## Core API
### Adapt
Handler context transformation. Contravariant on context, covariant on continuation.
## `adapt`
Transform a handler's state context.
```
adapt : { get : P -> S, set : P -> S -> P } -> Handler -> Handler
```

Wraps a handler that works with child state S so it works with parent state P, using a get/set lens. Propagates both resume and abort.

```nix
counterHandler = { param, state }: { resume = null; state = state + param; };

adapted = adapt {
  get = s: s.counter;
  set = s: c: s // { counter = c; };
} counterHandler;
# adapted now works with { counter = 0; logs = []; } state
```

## `adaptHandlers`

Adapt an entire handler set (attrset of handlers) to a different state context. Applies the same get/set lens to every handler in the set.

```nix
stateHandlers = {
  get = { param, state }: { value = state; inherit state; };
  put = { param, state }: { value = null; state = param; };
};

adapted = adaptHandlers {
  get = s: s.data;
  set = s: d: s // { data = d; };
} stateHandlers;
```

### Comp

Computation ADT: introduction and elimination forms for Pure | Impure.

## `impure`

Create a suspended computation (OpCall constructor). Takes an effect and a continuation queue.

## `isPure`

Test whether a computation is Pure. For hot-path conditionals where match would allocate.

## `match`

Eliminate a computation by cases.

```
match comp {
  pure = a: ...;
  impure = effect: queue: ...;
}
```

Every function that consumes a Computation should go through match or isPure — never inspect `_tag` directly.

## `pure`

Lift a value into a pure computation (Return constructor).

### Kernel

Freer monad kernel: Return/OpCall ADT with FTCQueue bind, send, map, seq, pipe, kleisli.

## `bind`

Monadic bind: sequence two computations.

```
bind comp f = case comp of
  Pure a     -> f a
  Impure e q -> Impure e (snoc q f)
```

O(1) per bind via FTCQueue snoc (Kiselyov & Ishii 2015, Section 3.1).

## `kleisli`

Kleisli composition: compose two Kleisli arrows `(a -> M b)` and `(b -> M c)` into `(a -> M c)`.

## `map`

Map a function over the result of a computation (Functor instance).

## `pipe`

Chain a computation through a list of Kleisli arrows, threading results via bind.

## `send`

Send an effect request. Returns the handler's response via continuation.

## `seq`

Sequence a list of computations, threading state via bind. Returns the last result.

### Pipeline

Typed pipeline framework with composable stages. Stages are composable transformations executed with reader (immutable environment), error (collecting validation errors), and acc (non-fatal warnings) effects. The run function wires up all handlers and returns `{ value, errors, warnings, typeErrors }`.

```nix
let
  stage1 = pipeline.mkStage {
    name = "discover";
    transform = data:
      bind (pipeline.asks (env: env.config))
        (cfg: pure (data // { config = cfg; }));
  };
  result = pipeline.run { config = "prod"; } [ stage1 ];
in result  # result.value => { config = "prod"; }
```

## `compose`

Chain stages into a single computation.

```
compose : [Stage] -> Data -> Computation Data
```

Each stage's transform receives the output of the previous stage and returns a computation producing the next stage's input. Initial data seeds the pipeline.

## `mkStage`

Create a named pipeline stage.

```
mkStage : { name, description?, transform, inputType?, outputType? } -> Stage
```

transform : Data -> Computation Data
  Takes current pipeline data, uses effects (ask, raise, warn), returns a computation producing updated pipeline data.

inputType/outputType : optional type schemas for validation at stage boundaries (checked when provided). Validation uses fx.types.validate which sends typeCheck effects.

## `run`

Execute a pipeline with effect handling.

```
run : args -> [Stage] -> { value, errors, warnings, typeErrors }
```

args : { ... }
  Becomes the reader environment — stages access via ask/asks.

stages : [Stage]
  Ordered list of stages to execute.

Returns:
  value      — final pipeline data from last stage
  errors     — list of { message, context } from validation failures
  warnings   — list of non-fatal warning items
  typeErrors — list of type validation errors

### Queue

FTCQueue (catenable queue, after Kiselyov & Ishii 2015). O(1) snoc/append, amortized O(1) viewl.

## `append`

Concatenate two queues. O(1).

## `leaf`

Create a singleton queue containing one continuation function.

## `node`

Join two queues. O(1) — just creates a tree node.

## `qApp`

Apply a queue of continuations to a value. Processes continuations left-to-right: if a continuation returns Pure, feed the value to the next continuation. If it returns Impure, append the remaining queue to the effect's own queue and return.

## `singleton`

Create a queue with a single continuation. O(1).

## `snoc`

Append a continuation to the right of the queue. O(1).

## `viewl`

Extract the leftmost continuation from the queue. Amortized O(1).

```
Returns { head = fn; tail = queue | null; }
```

`tail` is null if the queue had only one element.

### Trampoline

Trampolined interpreter using builtins.genericClosure for O(1) stack depth.

## `handle`

Trampolined handler combinator with return clause.

```
handle : { return?, handlers, state? } -> Computation a -> { value, state }
```

Follows Kiselyov & Ishii's `handle_relay` pattern but trampolined via genericClosure for O(1) stack depth.

**Arguments** (attrset):
- `return` — `value -> state -> { value, state }`. How to transform the final Pure value. Default: identity.
- `handlers` — `{ effectName = { param, state }: { resume | abort, state }; }`. Each must return `{ resume; state; }` or `{ abort; state; }`.
- `state` — initial handler state. Default: null.

## `run`

Run a computation through the genericClosure trampoline.

```
run : Computation a -> Handlers -> State -> { value : a, state : State }
```

**Arguments:**
- `comp` — the freer monad computation to interpret
- `handlers` — `{ effectName = { param, state }: { resume | abort, state }; ... }`
- `initialState` — starting state passed to handlers

Handlers must return one of:

```
{ resume = value; state = newState; }  -- invoke continuation with value
{ abort = value; state = newState; }   -- discard continuation, halt
```

This is the defunctionalized encoding of Plotkin & Pretnar (2009): `resume` ≡ invoke continuation k(v), `abort` ≡ discard k.

Stack depth: O(1) — constant regardless of computation length. Time: O(n) where n = number of effects in the computation.

## Effects

### Acc

Accumulator effect: emit/emitAll/collect for incremental list building.

## `collect`

Read the current accumulated items.

```
collect : Computation [a]
```

## `emit`

Append a single item to the accumulator.

```
emit : a -> Computation null
```

## `emitAll`

Append a list of items to the accumulator.

```
emitAll : [a] -> Computation null
```

## `handler`

Standard accumulator handler. State is a list of accumulated items. Initial state: `[]`

```nix
handle { handlers = acc.handler; state = []; } comp
```

### Choice

Non-deterministic choice effect: choose/fail/guard with list handler.

## `choose`

Non-deterministic choice from a list of alternatives. The handler determines how alternatives are explored.

```
choose : [a] -> Computation a
```

## `fail`

Fail the current branch of non-deterministic computation. Equivalent to `choose []`.

```
fail : Computation a
```

## `guard`

Guard a condition: continue if true, fail if false.

```
guard : bool -> Computation null
```

## `initialState`

Initial state for the listAll handler.

## `listAll`

Handler that explores all non-deterministic branches and returns a list of all results. Empty choices abort that branch. State is `{ results : [a], pending : [Computation a] }`. After handling, results are in `state.results`.

```nix
let
  r = handle { handlers = choice.listAll; state = choice.initialState; } comp;
in r.state.results
```

### Conditions

CL-style condition system: signal/warn with restart-based recovery.
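A quick-start sketch in the style of the Linear section. This is a sketch only — the handler-set key (`condition`) is an assumption, and `bind`, `pure`, and `trampoline.handle` are presumed in scope as in the earlier examples; check the source for the real names:

```nix
# ASSUMPTION: the condition effect is keyed `condition` in the handler set.
let
  comp = bind (conditions.signal "parse-error" { line = 12; } [ "use-default" ])
    (choice:
      # The handler's { restart, value } choice arrives in the continuation.
      if choice.restart == "use-default"
      then pure choice.value
      else pure null);
  r = trampoline.handle {
    handlers.condition =
      conditions.withRestart "parse-error" "use-default" "fallback";
    state = null;
  } comp;
in r.value
```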
## `collectConditions`

Collecting handler: accumulates conditions in state, resumes with continue.

State shape: list of `{ name, data }`. Initial state: `[]`

## `fail`

Fail handler: throws on any condition. Ignores available restarts. Use as a last-resort handler.

## `ignore`

Ignore handler: resumes with null for any condition. All conditions are silently discarded.

## `signal`

Signal a condition. The handler chooses a restart strategy.

```
signal : string -> any -> [string] -> Computation any
```

**Arguments:**
- `name` — condition name (e.g. `"division-by-zero"`, `"file-not-found"`)
- `data` — condition data (error details, context)
- `restarts` — list of available restart names

The handler receives `{ name, data, restarts }` and returns a `{ restart, value }` attrset. The continuation receives this choice.

## `warn`

Signal a warning condition. Like signal but with a conventional `"muffle-warning"` restart. If the handler doesn't muffle, the computation continues normally.

```
warn : string -> any -> Computation null
```

## `withRestart`

Create a handler that invokes a specific restart for a named condition. For all other conditions, falls through (throws).

```
withRestart : string -> string -> any -> handler
```

**Arguments:**
- `condName` — condition name to match
- `restartName` — restart to invoke
- `restartVal` — value to pass via the restart

### Error

Error effect with contextual messages and multiple handler strategies.

## `collecting`

Collecting error handler: accumulates errors in state as a list. Resumes computation with null so subsequent effects still execute. Use when you want all errors, not just the first.

State shape: list of `{ message, context }`

## `raise`

Raise an error. Returns a Computation that sends an "error" effect. The handler determines what happens: throw, collect, or recover.

```
raise : string -> Computation a
```

## `raiseWith`

Raise an error with context. The context string describes where in the computation the error occurred, enabling stack-trace-like error reports when used with the collecting handler.

```
raiseWith : string -> string -> Computation a
```

## `result`

Result error handler: aborts computation with tagged Error value. Uses the non-resumption protocol to discard the continuation. Returns `{ _tag = "Error"; message; context; }` on error.

## `strict`

Strict error handler: throws on first error via builtins.throw. Use when errors should halt evaluation immediately. Includes context in the thrown message when available.

### Linear

Graded linear resource tracking: acquire/consume/release with usage enforcement. Each resource gets a capability token at acquire time. The graded handler covers linear (exactly once), affine (at most once via release), exact(n), and unlimited usage through a single maxUses parameter.

Quick start:

```nix
let
  comp = bind (linear.acquireLinear "secret")
    (token: bind (linear.consume token)
    (val: pure val));
in linear.run comp
```

For composition with other handlers, use handler/return/initialState with `adaptHandlers`.

## `acquire`

Acquire a graded linear resource. Returns a capability token.

```
acquire : { resource : a, maxUses : Int | null } -> Computation Token
```

The token wraps the resource with an ID for tracking. The handler maintains a resource map in its state, counting each consume call against the maxUses bound.

- `maxUses = 1` — Linear: exactly one consume required
- `maxUses = n` — Exact: exactly n consumes required
- `maxUses = null` — Unlimited: any number of consumes allowed

Tokens should be consumed exactly maxUses times, or explicitly released. At handler exit, the return clause (finalizer) checks: released → always OK, `maxUses = null` → always OK, otherwise → `currentUses` must equal `maxUses`.

## `acquireExact`

Acquire a resource that must be consumed exactly n times.

```
acquireExact : a -> Int -> Computation Token
```

## `acquireLinear`

Acquire a linear resource (exactly one consume required).

```
acquireLinear : a -> Computation Token
```

## `acquireUnlimited`

Acquire an unlimited resource (any number of consumes allowed).

```
acquireUnlimited : a -> Computation Token
```

## `consume`

Consume a capability token, returning the wrapped resource value.

```
consume : Token -> Computation a
```

Increments the token's usage counter. Aborts with `LinearityError` if:

- Token was already released (`"consume-after-release"`)
- Usage would exceed maxUses bound (`"exceeded-bound"`)

The returned value is the original resource passed to acquire.

## `handler`

Graded linear resource handler. Interprets linearAcquire, linearConsume, and linearRelease effects. Tracks resource usage in handler state.

Use with `trampoline.handle`:

```nix
handle {
  handlers = linear.handler;
  return = linear.return;
  state = linear.initialState;
} comp
```

Or use the convenience: `linear.run comp`

- `linearAcquire`: creates token, adds resource entry to state
- `linearConsume`: increments usage counter, returns resource value
- `linearRelease`: marks resource as released (finalizer skips it)

## `initialState`

Initial handler state for the linear resource handler.

```nix
{ nextId = 0; resources = {}; }
```

- `nextId`: monotonic counter for generating unique resource IDs.
- `resources`: map from ID (string) to resource tracking entry.

## `release`

Explicitly release a capability token without consuming it.

```
release : Token -> Computation null
```

Marks the resource as released. The finalizer skips released resources, so this allows affine usage (acquire then drop). Aborts with `LinearityError` on double-release.

## `return`

Finalizer return clause for the linear handler. Checks each resource in handler state:

- `released` → OK (explicitly dropped)
- `maxUses = null` → OK (unlimited)
- otherwise → `currentUses` must equal `maxUses`

On violation, wraps the original value in a `LinearityError` with details of each mismatched resource. On success, passes through unchanged. Runs on both normal return and abort paths.

## `run`

Run a computation with the graded linear handler.

```
run : Computation a -> { value : a | LinearityError, state : State }
```

Bundles handler, return clause, and initial state into one call. To compose with other handlers, use handler/return/initialState separately with `adaptHandlers`.

```nix
let
  comp = bind (acquireLinear "secret")
    (token: bind (consume token)
    (val: pure "got:${val}"));
in linear.run comp
# => { value = "got:secret"; state = { nextId = 1; resources = { ... }; }; }
```

### Reader

Read-only environment effect: ask/asks/local with standard handler.

## `ask`

Read the current environment.

```
ask : Computation env
```

## `asks`

Read a projection of the environment.

```
asks : (env -> a) -> Computation a
```

## `handler`

Standard reader handler. Interprets ask effects. The state IS the environment (immutable through the computation).

```nix
handle { handlers = reader.handler; state = myEnv; } comp
```

## `local`

Run a computation with a modified environment. Returns a new computation that transforms the environment before executing the inner computation. Since handlers are pure functions, local is implemented by wrapping the inner computation's ask effects with the modifier. In practice, use separate handler installation with the modified env.

```
local : (env -> env) -> Computation a -> Computation a
```

### State

Mutable state effect: get/put/modify with standard handler.

## `get`

Read the current state. Returns a Computation that, when handled, yields the current state value.

```
get : Computation s
```

## `gets`

Read a projection of the current state.

```
gets : (s -> a) -> Computation a
```

## `handler`

Standard state handler. Interprets get/put/modify effects.

Use with `trampoline.handle`:

```nix
handle { handlers = state.handler; state = initialState; } comp
```

- `get`: returns current state as value
- `put`: replaces state with param, returns null
- `modify`: applies param (a function) to state, returns null

## `modify`

Apply a function to the current state. Returns a Computation that, when handled, transforms the state via f and returns null.

```
modify : (s -> s) -> Computation null
```

## `put`

Replace the current state. Returns a Computation that, when handled, sets the state to the given value and returns null.

```
put : s -> Computation null
```

### Typecheck

Reusable typeCheck handlers: strict (throw), collecting (accumulate), logging (record all).

## `collecting`

Collecting typeCheck handler: accumulates errors in state. Resumes with `true` on success, `false` on failure (computation continues).

State shape: list of `{ context, typeName, actual, message }`. Initial state: `[]`

## `logging`

Logging typeCheck handler: records every check (pass or fail) in state. Always resumes with the actual check result (boolean).

State shape: list of `{ context, typeName, passed }`. Initial state: `[]`

## `strict`

Strict typeCheck handler: throws on first type error. Resumes with true on success (check passed). Use when type errors should halt evaluation immediately. State: unused (pass null).

### Writer

Append-only output effect: tell/tellAll with list-collecting handler.

## `handler`

Standard writer handler. Collects tell output in state as a list. Initial state: `[]`

```nix
handle { handlers = writer.handler; state = []; } comp
```

## `tell`

Append a value to the output log.

```
tell : w -> Computation null
```

## `tellAll`

Append a list of values to the output log.

```
tellAll : [w] -> Computation null
```

## Types

### Constructors

Type constructors: Record, ListOf, Maybe, Either, Variant.
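A hedged sketch of the constructors in combination. Assumptions: a `Str` base type exists alongside `Int` in `fx.types`, and the constructors expose the same `.check` guard seen on refinement types — verify the actual names against the source:

```nix
# Sketch only; `Str` and the `.check` guard on constructors are assumptions.
let
  inherit (fx.types) Int Str Record ListOf Maybe;
  Service = Record {
    name = Str;
    ports = ListOf Int;
    description = Maybe Str;   # null or a string
  };
in {
  ok = Service.check {
    name = "web"; ports = [ 80 443 ]; description = null;
    extra = true;              # extra fields are fine: open record semantics
  };
  bad = Service.check {
    name = "web"; ports = [ 80 "https" ]; description = null;
  };                           # element "https" fails ListOf Int
}
```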
## `Either` Tagged union of two types. Accepts `{ _tag = "Left"; value = a; }` or `{ _tag = "Right"; value = b; }`. ## `ListOf` Homogeneous list type. `ListOf Type` checks that all elements have the given type. Custom verifier sends per-element `typeCheck` effects with indexed context strings (e.g. `List[Int][2]`) for blame tracking. Unlike Sigma, elements are independent — no short-circuit. All elements are checked; the handler decides error policy (strict aborts on first, collecting gathers all). ## `Maybe` Option type. Maybe Type accepts null or a value of Type. ## `Record` Record type constructor. Takes a schema { field = Type; ... } and checks that a value has all required fields with correct types. Extra fields are permitted (open record semantics). ## `Variant` Discriminated union. Takes `{ tag = Type; ... }` schema. Accepts `{ _tag = "tag"; value = ...; }` where value has the corresponding type. ### Dependent Dependent contracts: Pi (Π), Sigma (Σ), Certified, Vector, DepRecord. Grounded in Martin-Löf (1984) "Intuitionistic Type Theory". ## `Certified` Certified value: `Σ(v:A).Proof(P(v))`. A dependent pair where: ``` fst : A — the value snd : true — proof witness (must be true AND predicate must hold) ``` The second component's type depends on the first: it checks both that the proof is `true` and that `predicate(fst)` holds. Certified is a first-order contract — both components are concrete data, so the contract is checked immediately and completely (like Sigma). The guard IS full membership. Construction: - `.certify v` — pure smart constructor (throws on invalid) - `.certifyE v` — effectful smart constructor (sends `typeCheck` effects) - `.check` — inherited from Sigma (full dependent pair check) - `.validate` — inherited from Sigma (effectful introduction check) The `.certifyE` constructor is NOT an introduction check — it's a convenience that takes a raw value, evaluates the predicate, and produces the Sigma pair `{ fst = v; snd = true; }`. 
The actual introduction check (`.validate`) is inherited from Sigma and verifies an already-formed pair. ## `DepRecord` Dependent record type built on nested Sigma. Schema is an ordered list of `{ name; type; }` where `type` can be: - A Type (static field) - A function (`partial-record → Type`) for dependent fields Isomorphic to nested Sigma types: ``` { a : A, b : B(a) } ≅ Σ(a:A).B(a) { a : A, b : B(a), c : C(a,b) } ≅ Σ(a:A).Σ(b:B(a)).C(a,b) ``` Values are nested Sigma pairs: ```nix { fst = a; snd = { fst = b; snd = c; }; } ``` Inherits from Sigma: `.validate` (effectful), `.proj1`, `.proj2`, `.pair`, `.pairE`, `.curry`, `.uncurry`. Use `.pack` to convert flat attrset → nested Sigma value. Use `.unpack` to convert nested Sigma value → flat attrset. ## `Pi` Dependent function type `Π(x:A).B(x)`. Arguments: - `domain` — Type A - `codomain` — A-value → Type (type family B indexed by domain values) - `universe` — Universe level (explicit parameter — see below) - `name` — optional display name == Higher-order contract with algebraic effects == Pi is a HIGHER-ORDER CONTRACT (Findler & Felleisen 2002). Higher-order contracts check function values differently from data values: a data contract is verified immediately and completely, but a function contract is verified incrementally at each application site. This is the standard, correct strategy for function contracts — not a deficit. The (Specification, Guard, Verifier) triple for Pi: ``` Guard (check): builtins.isFunction — the immediate first-order part of the contract. Soundly rejects non-functions. Verifier (validate): effectful guard (auto-derived, 1 arg) — wraps the guard in a typeCheck effect for blame tracking. Elimination (checkAt): deferred contract check (2 args) — verifies a specific application f(arg) by sending typeCheck effects for both domain (arg : A) and codomain (f(arg) : B(arg)). 
```

This is precisely the Findler-Felleisen decomposition: the immediate part (`isFunction`) is checked at introduction; the deferred part (domain + codomain) is checked at each elimination site via `checkAt`.

== Adequacy ==

```
check f ⟺ all typeCheck effects in (validate f) pass
```

Both `check` and `validate` verify the introduction form (is it a function?). `checkAt` verifies individual applications — the deferred contract.

== Universe level ==

Universe level is an explicit parameter. In MLTT, the level is computed as `max(i, sup_{a:A} level(B(a)))` by inspecting the syntax of B. For types with an explicit kernelType, the kernel computes and verifies levels via `checkTypeLevel`. The explicit universe parameter provides the level for the surface API's `.universe` field.

== MLTT rule mapping ==

```
Formation:           Pi { domain, codomain, universe }
Introduction check:  .check (guard: isFunction)
Introduction verify: .validate (effectful guard, auto-derived)
Elimination:         .apply (pure), .checkAt (effectful, deferred contract)
Computation:         β-reduction (Nix evaluation)
```

Operations:

- `.checkAt f arg` — deferred contract check at an elimination site
- `.apply arg` — pure elimination: compute the codomain type B(arg)
- `.compose f other` — compose Pi types (requires a witness function)
- `.domain` — the domain type A
- `.codomain` — the type family B

## `Sigma`

Dependent pair type `Σ(x:A).B(x)`. Arguments:

- `fst` — Type A (type of the first component)
- `snd` — A-value → Type (type family for the second component)
- `universe` — Universe level (explicit parameter)
- `name` — optional display name

Values are `{ fst; snd; }` where `fst : A` and `snd : B(fst)`.

== First-order contract — guard is exact ==

Sigma is a FIRST-ORDER CONTRACT: both components are concrete data, so the contract is checked immediately and completely. The guard (`check`) IS full membership — there is no over-approximation.

```
Guard (check):       fst:A ∧ snd:B(fst) — exact. G = ⟦Σ(x:A).B(x)⟧.
Verifier (validate): decomposed effectful check — sends separate typeCheck
                     effects for fst and snd for blame tracking.
```

This contrasts with Pi, where the guard over-approximates (`isFunction`) because functions are higher-order. Sigma pairs are data — the dependent relationship (snd's type depends on fst's value) can be fully verified because both values are available.

Adequacy:

```
T.check v ⟺ all typeCheck effects in T.validate v pass
```

This holds under the all-pass handler. The guard is exact and the decomposed verifier sends individual `typeCheck` effects per component — the all-pass handler's boolean state tracks whether all passed.

Totality: if the input is structurally malformed (not an attrset, or missing `fst`/`snd`), the verifier falls back to a single `typeCheck` for the whole type — failure goes through the effect system and never crashes Nix.

Universe level is an explicit parameter (computing `sup_{a:A} snd(a).universe` would require evaluating the type family on all domain values, same as Pi).

== MLTT rule mapping ==

```
Formation:    Sigma { fst, snd, universe }
Introduction: .check (exact guard), .validate (effectful, decomposed)
Elimination:  .proj1 (π₁), .proj2 (π₂)
Computation:  π₁(a,b) ≡ a, π₂(a,b) ≡ b
```

Operations:

- `.proj1 pair` — first projection π₁
- `.proj2 pair` — second projection π₂
- `.pair a b` — smart constructor (throws on invalid)
- `.validate v` — effectful: decomposed typeCheck effects for blame
- `.pairE a b` — effectful smart constructor
- `.pullback f g` — contravariant predicate pullback (see below)
- `.curry` / `.uncurry` — standard Sigma adjunction
- `.fstType` — the type A
- `.sndFamily` — the type family B

## `Vector`

Length-indexed list type family, built on Pi.

```
Vector(A) = Π(n:Nat).{xs : List(A) | |xs| = n}
```

This is the correct Martin-Löf encoding: Vector IS a Pi type. It inherits `.validate` (effectful), `.compose`, `.apply`, `.domain`, `.codomain` from Pi.
Usage:

```nix
Vector elemType            # the Pi type family (Nat → SizedList)
(Vector elemType).apply 3  # specific type for length 3
```

### Foundation

Type system foundation: Type constructor, check, validate, make, refine.

## `check`

Check whether a value inhabits a type. Returns a bool.

## `make`

Validate a value and return it, or throw on failure.

## `mkType`

Create a type from its kernel representation. A nix-effects type is defined by its `kernelType` — an HOAS type tree representing the type in the MLTT kernel. All fields are derived:

- `.check` = `decide(kernelType, v)` — the decision procedure
- `.universe` = `checkTypeLevel(kernelType)` — computed universe level
- `.kernelCheck` = same as `.check`
- `.prove` = kernel proof checking for HOAS terms

Arguments:

- `name` — Human-readable type name
- `kernelType` — HOAS type tree (required — this IS the type)
- `guard` — Optional runtime predicate for refinement types. When present, `.check = kernelDecide(v) && guard(v)` (conjunction): the kernel catches structural errors, while the guard handles residual constraints the kernel can't express (e.g. `x >= 0`).
- `verify` — Optional custom verifier (`self → value → Computation`). When null (the default), `validate` is auto-derived by wrapping `check` in a `typeCheck` effect. Supply a custom `verify` for types that decompose checking (e.g. Sigma sends separate effects for `fst` and `snd` for blame tracking).
- `description` — Documentation string (default = `name`)
- `universe` — Optional universe level override. When null (the default), computed from `checkTypeLevel(kernelType)`. Use when the kernelType is a fallback (e.g. `H.function_` for Pi) that doesn't capture the real universe level.
- `approximate` — When true, the kernelType is a sound but lossy approximation (e.g. `H.function_` for Pi, `H.any` for Sigma). Suppresses `_kernel`, `kernelCheck`, and `prove` on the result, since the kernel representation doesn't precisely capture this type.
The kernelType is still used internally for universe computation.

## `refine`

Narrow a type with an additional predicate. Creates a refinement type whose check = `kernelDecide(v) ∧ guard(v)` (conjunction). The base type's kernel provides structural checking; the guard handles the refinement predicate the kernel cannot express. Grounded in Freeman & Pfenning (1991), "Refinement Types for ML", and Rondon et al. (2008), "Liquid Types".

## `validate`

Standalone effectful validation with an explicit context string. Sends a `typeCheck` effect with the given type, value, and context. The handler receives `{ type, context, value }` and determines the response: throw, collect the error, log, or offer restarts. For typical use, prefer `type.validate` (auto-derived by `mkType`; uses the type's name as context). This 3-arg form is for cases where a custom context string is needed.

```
validate : Type → Value → String → Computation Bool
```

### Linear

Linear type constructors: structural guards for capability tokens. Pure type predicates that check token structure without consuming. Usage enforcement is in `effects/linear.nix` (separate concerns).

- `Linear(T)` — exactly one consume required
- `Affine(T)` — at most one consume (release allowed)
- `Graded(n, T)` — exactly n consumes (generalizes Linear/Affine)

See Orchard et al. (2019) for graded modal types.

## `Affine`

Affine type: a capability token that may be consumed at most once.

```
Affine : Type -> Type
```

Structurally identical to `Linear(T)`. The name communicates that the resource may be explicitly released (dropped) via `effects/linear.release` without consuming it — "at most once" vs Linear's "exactly once." The structural guard is the same: both check for a valid capability token with inner type T. The usage distinction (exactly-once vs at-most-once) is enforced by the effect handler, not the type system.

Operations:

- `.check v` — pure guard: is v a valid affine token wrapping T?
- `.validate v` — effectful: sends `typeCheck` for blame tracking
- `.innerType` — the wrapped type T

## `Graded`

Graded type: a capability token with a usage multiplicity annotation.

```
Graded : { maxUses : Int | null, innerType : Type } -> Type
```

Generalizes Linear and Affine via a `maxUses` parameter:

```nix
Graded { maxUses = 1;    innerType = T; }  # ≡ Linear(T)
Graded { maxUses = null; innerType = T; }  # ≡ Unlimited(T)
Graded { maxUses = n;    innerType = T; }  # ≡ Exact(n, T)
```

The structural guard is the same as Linear and Affine — token structure with an inner type check. The `maxUses` appears in the type name for documentation but is NOT checked by the guard (the grade lives in handler state, not in the token). The name uses ω for null (unlimited): `Graded(1, Int)`, `Graded(5, String)`, `Graded(ω, Bool)`.

From Orchard et al. (2019), "Quantitative Program Reasoning with Graded Modal Types" — semiring-indexed usage annotations where + models branching, × models sequencing, 1 = linear, ω = unlimited.

Operations:

- `.check v` — pure guard: is v a valid graded token wrapping T?
- `.validate v` — effectful: sends `typeCheck` for blame tracking
- `.innerType` — the wrapped type T
- `.maxUses` — the declared usage multiplicity

## `Linear`

Linear type: a capability token that must be consumed exactly once.

```
Linear : Type -> Type
```

Creates a type whose `check` verifies the capability token structure:

```nix
{ _linear = true; id = Int; resource = innerType; }
```

Pure structural guard — checking does not consume the token. `effects/linear.nix` tracks consumption separately.

Adequacy invariant:

```
Linear(T).check v ⟺ all typeCheck effects in Linear(T).validate v pass
```

Holds by construction via `mkType`'s auto-derived `validate`.

Operations:

- `.check v` — pure guard: is v a valid linear token wrapping T?
- `.validate v` — effectful: sends `typeCheck` for blame tracking
- `.innerType` — the wrapped type T

### Primitives

Primitive types: String, Int, Bool, Float, Attrs, Path, Function, Null, Unit, Any.

## `Any`

Top type. Every value inhabits Any.

## `Attrs`

Attribute set type (any attrset).

## `Bool`

Boolean type.

## `Float`

Float type.

## `Function`

Function type.

## `Int`

Integer type.

## `Null`

Null type. Only `null` inhabits it.

## `Path`

Path type.

## `String`

String type.

## `Unit`

Unit type. Isomorphic to Null — the trivial type with one inhabitant.

### Refinement

Refinement types and predicate combinators. Grounded in Freeman & Pfenning (1991) and Rondon et al. (2008).

## `allOf`

Combine predicates with conjunction: `(allOf [p1 p2]) v = p1 v && p2 v`.

## `anyOf`

Combine predicates with disjunction: `(anyOf [p1 p2]) v = p1 v || p2 v`.

## `inRange`

Predicate factory: `(inRange lo hi) v` = `lo <= v <= hi`.

## `matching`

Predicate factory: `(matching pattern) s` = s matches the regex `pattern`.

## `negate`

Negate a predicate: `(negate p) v = !(p v)`.

## `nonEmpty`

Predicate: string or list is non-empty.

## `nonNegative`

Predicate: value >= 0.

## `positive`

Predicate: value > 0.

## `refined`

Create a named refinement type.

```
refined : string -> Type -> (value -> bool) -> Type
```

### Universe

Universe hierarchy: `Type_0 : Type_1 : Type_2 : ...` — a lazy infinite tower.

## `level`

Get the universe level of a type.

## `typeAt`

Create the universe type at level n (cumulative). `Type_n` contains all types with universe ≤ n. `Type_n` itself has universe n + 1.

```
Type_n : Type_(n+1)   for all n
```

## Streams

### Combine

Stream combination: concat, interleave, zip, zipWith.

## `concat`

Concatenate two streams: all elements of the first, then all of the second.

```
concat : Computation (Step r a) -> Computation (Step s a) -> Computation (Step s a)
```

## `interleave`

Interleave two streams: alternate elements from each.
```
interleave : Computation (Step r a) -> Computation (Step s a) -> Computation (Step null a)
```

## `zip`

Zip two streams into a stream of pairs. Stops when either stream ends.

```
zip : Computation (Step r a) -> Computation (Step s b) -> Computation (Step null { fst : a, snd : b })
```

## `zipWith`

Zip two streams with a combining function.

```
zipWith : (a -> b -> c) -> Computation (Step r a) -> Computation (Step s b) -> Computation (Step null c)
```

### Core

Stream primitives: done/more/fromList/iterate/range/replicate.

## `done`

Terminate a stream with a final value.

```
done : a -> Computation (Step a b)
```

## `fromList`

Create a stream from a list.

```
fromList : [a] -> Computation (Step null a)
```

## `iterate`

Create an infinite stream by repeated application:

```
iterate f x = [x, f(x), f(f(x)), ...]
```

Must be consumed with a limiting combinator (take, takeWhile).

```
iterate : (a -> a) -> a -> Computation (Step r a)
```

## `more`

Yield an element and a continuation stream.

```
more : a -> Computation (Step r a) -> Computation (Step r a)
```

## `range`

Create a stream of integers from start (inclusive) to end (exclusive).

```
range : int -> int -> Computation (Step null int)
```

## `replicate`

Create a stream of n copies of a value.

```
replicate : int -> a -> Computation (Step null a)
```

### Limit

Stream limiting: take, takeWhile, drop.

## `drop`

Skip the first n elements of a stream.

```
drop : int -> Computation (Step r a) -> Computation (Step r a)
```

## `take`

Take the first n elements of a stream.

```
take : int -> Computation (Step r a) -> Computation (Step null a)
```

## `takeWhile`

Take elements while a predicate holds.

```
takeWhile : (a -> bool) -> Computation (Step r a) -> Computation (Step null a)
```

### Reduce

Stream reduction: fold, toList, length, sum, any, all.

## `all`

Check if all elements satisfy a predicate.
```
all : (a -> bool) -> Computation (Step r a) -> Computation bool
```

## `any`

Check if any element satisfies a predicate. Short-circuits on the first match (via lazy evaluation).

```
any : (a -> bool) -> Computation (Step r a) -> Computation bool
```

## `fold`

Left fold over a stream.

```
fold : (b -> a -> b) -> b -> Computation (Step r a) -> Computation b
```

## `length`

Count the number of elements in a stream.

```
length : Computation (Step r a) -> Computation int
```

## `sum`

Sum all numeric elements in a stream.

```
sum : Computation (Step r number) -> Computation number
```

## `toList`

Collect all stream elements into a list.

```
toList : Computation (Step r a) -> Computation [a]
```

### Transform

Stream transformations: map, filter, scanl.

## `filter`

Keep only the elements satisfying a predicate.

```
sfilter : (a -> bool) -> Computation (Step r a) -> Computation (Step r a)
```

## `map`

Map a function over each element of a stream.

```
smap : (a -> b) -> Computation (Step r a) -> Computation (Step r b)
```

## `scanl`

Accumulate a running fold over the stream, yielding each intermediate value.

```
scanl : (b -> a -> b) -> b -> Computation (Step r a) -> Computation (Step r b)
```

## Type Checker

### Check

Semi-trusted (Layer 1): uses the TCB (eval/quote/conv) and reports type errors via `send "typeError"`. Bugs here may produce wrong error messages but cannot cause unsoundness. Spec reference: kernel-spec.md §7–§9.

## Core Functions

- `check : Ctx → Tm → Val → Computation Tm` — checking mode (§7.4). Verifies that `tm` has type `ty` and returns an elaborated term.
- `infer : Ctx → Tm → Computation { term; type; }` — synthesis mode (§7.3). Infers the type of `tm` and returns the elaborated term with its type.
- `checkType : Ctx → Tm → Computation Tm` — verify that a term is a type (§7.5).
- `checkTypeLevel : Ctx → Tm → Computation { term; level; }` — like `checkType` but also returns the universe level (§8.2).
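A minimal sketch of checking mode, assuming this module's exports (`check`, `emptyCtx`, `runCheck`), the `mk*` term constructors from term.nix, and `eval` from eval.nix are all in scope:

```nix
# Check the identity function λ(x : Nat). x against Π(x : Nat). Nat.
let
  idTm = mkLam "x" mkNat (mkVar 0);        # term: λ(x : Nat). x
  idTy = eval [ ] (mkPi "x" mkNat mkNat);  # type as a Val (check takes a Val)
in
  # Checking mode: succeeds and returns the elaborated term; a mismatch
  # would surface as a `typeError` effect, which runCheck aborts on.
  runCheck (check emptyCtx idTm idTy)
```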
## Context Operations (§7.1)

- `emptyCtx` — empty typing context `{ env = []; types = []; depth = 0; }`
- `extend : Ctx → String → Val → Ctx` — add a binding (index 0 = most recent)
- `lookupType : Ctx → Int → Val` — look up a variable's type by index

## Test Helpers

- `runCheck : Computation → Value` — run a computation through the trampoline handler, aborting on `typeError` effects.
- `checkTm : Ctx → Tm → Val → Tm|Error` — check and unwrap.
- `inferTm : Ctx → Tm → { term; type; }|Error` — infer and unwrap.

## Key Behaviors

- **Sub rule**: when checking mode doesn't match (e.g., checking a variable), falls through to `infer` and uses `conv` to compare.
- **Cumulativity**: `U(i) ≤ U(j)` when `i ≤ j` (§8.3).
- **Large elimination**: motives may return any universe, enabling type-computing eliminators (`checkMotive`).
- **Trampolining**: Succ and Cons chains are checked iteratively (§11.3).

### Conv

Checks whether two values are definitionally equal at a given binding depth. Purely structural — no type information used, no eta expansion. Pure function — part of the TCB. Spec reference: kernel-spec.md §6.

## Core Functions

- `conv : Depth → Val → Val → Bool` — check definitional equality.
- `convSp : Depth → Spine → Spine → Bool` — check spine equality (same length, pairwise `convElim`).
- `convElim : Depth → Elim → Elim → Bool` — check elimination frame equality (same tag, recursively conv on carried values).

## Conversion Rules

- §6.1 **Structural**: same-constructor values with matching fields. Universe levels compared by `==`. Primitive literals by value.
- §6.2 **Binding forms**: Pi, Lam, Sigma compared under a fresh variable at depth d (instantiate both closures, compare at d+1).
- §6.3 **Compound values**: recursive on all components.
- §6.4 **Neutrals**: same head level and convertible spines.
- §6.5 **Catch-all**: different constructors → false.
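The rules above can be read off directly on small values. A sketch, assuming `conv` and the `v*` constructors from value.nix are in scope (and that `vSucc` takes the predecessor value as its one argument):

```nix
{
  # §6.1 structural: same constructor, matching fields → equal.
  same = conv 0 (vSucc vZero) (vSucc vZero);  # true
  # §6.5 catch-all: different constructors → not equal.
  diff = conv 0 (vSucc vZero) vZero;          # false
}
```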
## Trampolining

VSucc and VCons chains use `genericClosure` to avoid stack overflow on S^5000 or cons^5000 comparisons.

## No Eta

`conv` does not perform eta expansion: a neutral `f` and `λx. f(x)` are **not** definitionally equal. Cumulativity (`U(i) ≤ U(j)`) is handled in check.nix, not here.

### Elaborate

Bridges the fx.types layer to the kernel's term representation via the HOAS combinator layer. Provides the Nix ↔ kernel boundary. Spec reference: kernel-mvp-research.md §3.

## Type Elaboration

- `elaborateType : FxType → Hoas` — convert an fx.types type descriptor to an HOAS tree. Dispatches on: (1) the `_kernel` annotation, (2) structural fields (Pi: domain/codomain, Sigma: fstType/sndFamily), (3) name convention (Bool, Nat, String, Int, Float, ...). Dependent Pi/Sigma require an explicit `_kernel` annotation.

## Value Elaboration

- `elaborateValue : Hoas → NixVal → Hoas` — convert a Nix value to an HOAS term tree given its HOAS type. Bool→true_/false_, Int→natLit, List→cons chain, Sum→inl/inr, Sigma→pair. Trampolined for large lists.

## Structural Validation

- `validateValue : String → Hoas → NixVal → [{ path; msg; }]` — applicative structural validator. Mirrors `elaborateValue`'s recursion but accumulates all errors instead of throwing on the first. The path accumulator threads structural context (Reader effect). The error list is the free monoid (Writer effect). Empty list ↔ `elaborateValue` would succeed (soundness invariant).

## Value Extraction

- `extract : Hoas → Val → NixValue` — reverse of `elaborateValue`. Converts kernel values back to Nix values. VZero→0, VSucc^n→n, VCons chain→list, VInl/VInr→tagged union. Pi extraction wraps the VLam as a Nix function with boundary conversion. Opaque types (Attrs, Path, Function, Any) throw — the kernel discards payloads.
- `extractInner : Hoas → Val → Val → NixValue` — three-argument extraction with kernel type value threading. Supports dependent Pi/Sigma via closure instantiation instead of sentinel tests.
- `reifyType : Val → Hoas` — converts a kernel type value back to HOAS. Fallback for when HOAS body application fails (dependent types). Loses sugar (VSigma→sigma, not record).

## Decision Procedure

- `decide : Hoas → NixVal → Bool` — returns true iff elaboration and kernel type-checking both succeed. Uses `tryEval` for safety.
- `decideType : FxType → NixVal → Bool` — elaborate the type, then decide.

## Full Pipeline

- `verifyAndExtract : Hoas → Hoas → NixValue` — type-check an HOAS implementation against an HOAS type, evaluate, extract to a Nix value. Throws on type error.

### Eval

Pure evaluator: interprets kernel terms in an environment of values. Zero effect system imports — part of the trusted computing base (TCB). Spec reference: kernel-spec.md §4, §9.

## Core Functions

- `eval : Env → Tm → Val` — evaluate with default fuel (10M steps)
- `evalF : Int → Env → Tm → Val` — evaluate with an explicit fuel budget
- `instantiate : Closure → Val → Val` — apply a closure to an argument

## Elimination Helpers

- `vApp : Val → Val → Val` — apply a function value (beta-reduces VLam, extends the spine for VNe)
- `vFst`, `vSnd` — pair projections
- `vNatElim`, `vBoolElim`, `vListElim` — inductive eliminators
- `vAbsurd` — ex falso (only on neutrals)
- `vSumElim` — sum elimination
- `vJ` — identity elimination (computes to the base on VRefl)

## Trampolining (§11.3)

`vNatElim`, `vListElim`, `succ` chains, and `cons` chains use `builtins.genericClosure` to flatten recursive structures iteratively, guaranteeing O(1) stack depth on inputs like S^10000(0) or cons^5000.

## Fuel Mechanism (§9)

Each `evalF` call decrements a fuel counter. When fuel reaches 0, evaluation throws `"normalization budget exceeded"`. This bounds total work and prevents unbounded computation in the Nix evaluator. Default budget: 10,000,000 steps.

### Hoas

Higher-Order Abstract Syntax layer that lets you write kernel terms using Nix lambdas for variable binding.
The `elaborate` function compiles HOAS trees to de Bruijn indexed Tm terms. Spec reference: kernel-spec.md §2.

## Example

```nix
# Π(A:U₀). A → A
H.forall "A" (H.u 0) (A: H.forall "x" A (_: A))
```

## Type Combinators

- `nat`, `bool`, `unit`, `void` — base types
- `string`, `int_`, `float_`, `attrs`, `path`, `function_`, `any` — primitive types
- `listOf : Hoas → Hoas` — List(elem)
- `sum : Hoas → Hoas → Hoas` — Sum(left, right)
- `eq : Hoas → Hoas → Hoas → Hoas` — Eq(type, lhs, rhs)
- `u : Int → Hoas` — universe at a level
- `forall : String → Hoas → (Hoas → Hoas) → Hoas` — Π-type (Nix lambda for the body)
- `sigma : String → Hoas → (Hoas → Hoas) → Hoas` — Σ-type

## Compound Types (Sugar)

- `record : [{ name; type; }] → Hoas` — nested Sigma (sorted fields)
- `maybe : Hoas → Hoas` — Sum(inner, Unit)
- `variant : [{ tag; type; }] → Hoas` — nested Sum (sorted tags)

## Term Combinators

- `lam : String → Hoas → (Hoas → Hoas) → Hoas` — λ-abstraction
- `let_ : String → Hoas → Hoas → (Hoas → Hoas) → Hoas` — let binding
- `zero`, `succ`, `true_`, `false_`, `tt`, `refl` — intro forms
- `nil`, `cons`, `pair`, `inl`, `inr` — data constructors
- `stringLit`, `intLit`, `floatLit`, `attrsLit`, `pathLit`, `fnLit`, `anyLit` — primitive literals
- `absurd`, `ann`, `app`, `fst_`, `snd_` — elimination/annotation

## Eliminators

- `ind` — NatElim(motive, base, step, scrut)
- `boolElim` — BoolElim(motive, onTrue, onFalse, scrut)
- `listElim` — ListElim(elem, motive, onNil, onCons, scrut)
- `sumElim` — SumElim(left, right, motive, onLeft, onRight, scrut)
- `j` — J(type, lhs, motive, base, rhs, eq)

## Elaboration

- `elaborate : Int → Hoas → Tm` — compile at a given depth
- `elab : Hoas → Tm` — compile from depth 0

## Convenience

- `checkHoas : Hoas → Hoas → Tm|Error` — elaborate type+term, type-check
- `inferHoas : Hoas → { term; type; }|Error` — elaborate and infer
- `natLit : Int → Hoas` — build S^n(zero)

## Stack Safety

Binding chains (pi/lam/sigma/let), succ chains, and cons chains are
elaborated iteratively via `genericClosure` — safe to 8000+ depth.

### Quote

Converts values back to terms, translating de Bruijn levels to indices. Pure function — part of the TCB. Spec reference: kernel-spec.md §5.

## Core Functions

- `quote : Depth → Val → Tm` — read back a value at binding depth d. Level-to-index conversion: `index = depth - level - 1`.
- `quoteSp : Depth → Tm → Spine → Tm` — quote a spine of eliminators applied to a head term (folds left over the spine).
- `quoteElim : Depth → Tm → Elim → Tm` — quote a single elimination frame applied to a head term.
- `nf : Env → Tm → Tm` — normalize: `eval` then `quote`. Useful for testing roundtrip idempotency (`nf env (nf env tm) == nf env tm`).
- `lvl2Ix : Depth → Level → Index` — level-to-index helper.

## Trampolining

VSucc and VCons chains are quoted iteratively via `genericClosure` for O(1) stack depth on deep values (5000+ elements).

## Binder Quotation

For VPi, VLam, VSigma: instantiates the closure with a fresh variable at the current depth, then quotes the body at `depth + 1`.

### Term

Syntax of the kernel's term language. All 48 constructors produce attrsets with a `tag` field (not `_tag`, to distinguish kernel terms from effect system nodes). Binding is de Bruijn indexed: `mkVar i` refers to the i-th enclosing binder (0 = innermost). Name annotations (the `name` parameter on `mkPi`, `mkLam`, `mkSigma`, `mkLet`) are cosmetic — used only in error messages, never in equality checking. Spec reference: kernel-spec.md §2.

## Constructors

### Variables and Binding

- `mkVar : Int → Tm` — variable by de Bruijn index
- `mkLet : String → Tm → Tm → Tm → Tm` — `let name : type = val in body`
- `mkAnn : Tm → Tm → Tm` — type annotation `(term : type)`

### Functions (§2.2)

- `mkPi : String → Tm → Tm → Tm` — dependent function type `Π(name : domain). codomain`
- `mkLam : String → Tm → Tm → Tm` — lambda `λ(name : domain). body`
- `mkApp : Tm → Tm → Tm` — application `fn arg`

### Pairs (§2.3)

- `mkSigma : String → Tm → Tm → Tm` — dependent pair type `Σ(name : fst). snd`
- `mkPair : Tm → Tm → Tm` — pair constructor `(fst, snd)`
- `mkFst : Tm → Tm` — first projection
- `mkSnd : Tm → Tm` — second projection

### Inductive Types

- `mkNat`, `mkZero`, `mkSucc`, `mkNatElim` — natural numbers with eliminator
- `mkBool`, `mkTrue`, `mkFalse`, `mkBoolElim` — booleans with eliminator
- `mkList`, `mkNil`, `mkCons`, `mkListElim` — lists with eliminator
- `mkUnit`, `mkTt` — unit type and value
- `mkVoid`, `mkAbsurd` — empty type and ex falso
- `mkSum`, `mkInl`, `mkInr`, `mkSumElim` — disjoint sum with eliminator
- `mkEq`, `mkRefl`, `mkJ` — identity type with J eliminator

### Universes

- `mkU : Int → Tm` — universe at level i

### Axiomatized Primitives (§2.1)

- `mkString`, `mkInt`, `mkFloat`, `mkAttrs`, `mkPath`, `mkFunction`, `mkAny` — type formers
- `mkStringLit`, `mkIntLit`, `mkFloatLit`, `mkAttrsLit`, `mkPathLit`, `mkFnLit`, `mkAnyLit` — literal values

### Value

Values are the semantic domain produced by evaluation. They use de Bruijn *levels* (counting outward from the top of the context), not indices, which makes weakening trivial. Spec reference: kernel-spec.md §3.

## Closures

`mkClosure : Env → Tm → Closure` — defunctionalized closure. No Nix lambdas in the TCB; a closure is `{ env, body }` where `body` is a kernel Tm evaluated by `eval.instantiate`.
## Value Constructors

Each `v*` constructor mirrors a term constructor:

- `vLam`, `vPi` — function values/types (carry name, domain, closure)
- `vSigma`, `vPair` — pair types/values
- `vNat`, `vZero`, `vSucc` — natural number values
- `vBool`, `vTrue`, `vFalse` — boolean values
- `vList`, `vNil`, `vCons` — list values
- `vUnit`, `vTt` — unit
- `vVoid` — empty type
- `vSum`, `vInl`, `vInr` — sum values
- `vEq`, `vRefl` — identity values
- `vU` — universe values
- `vString`, `vInt`, `vFloat`, `vAttrs`, `vPath`, `vFunction`, `vAny` — primitive types
- `vStringLit`, `vIntLit`, `vFloatLit`, `vAttrsLit`, `vPathLit`, `vFnLit`, `vAnyLit` — primitive literals

## Neutrals

`vNe : Level → Spine → Val` — a stuck computation: a variable (identified by de Bruijn level) applied to a spine of eliminators.

`freshVar : Depth → Val` — neutral with an empty spine at the given depth. Used during type-checking to introduce fresh variables under binders.

## Elimination Frames (Spine Entries)

- `eApp`, `eFst`, `eSnd` — function/pair eliminators
- `eNatElim`, `eBoolElim`, `eListElim`, `eAbsurd`, `eSumElim`, `eJ` — inductive eliminators

### Verified

High-level combinators for writing kernel-checked implementations. Write programs with these combinators, then call `v.verify` to type-check and extract a Nix function that is correct by construction.
## Example

```nix
# Verified successor: Nat → Nat
v.verify
  (H.forall "x" H.nat (_: H.nat))
  (v.fn "x" H.nat (x: H.succ x))
# → Nix function: n → n + 1
```

## Literals

- `nat : Int → Hoas` — natural number literal (S^n(zero))
- `str : String → Hoas` — string literal
- `int_ : Int → Hoas` — integer literal
- `float_ : Float → Hoas` — float literal
- `true_`, `false_` — boolean literals
- `null_` — unit value (tt)

## Binding

- `fn : String → Hoas → (Hoas → Hoas) → Hoas` — lambda abstraction
- `let_ : String → Hoas → Hoas → (Hoas → Hoas) → Hoas` — let binding

## Data Operations

- `pair`, `fst`, `snd` — Σ-type construction and projection
- `field : Hoas → String → Hoas → Hoas` — record field projection by name
- `inl`, `inr` — Sum injection
- `app` — function application

## Eliminators (Constant Motive)

These auto-generate the motive `λ_.resultTy`, so you only supply the result type and the branches:

- `if_ : Hoas → Hoas → { then_; else_; } → Hoas` — Bool elimination
- `match : Hoas → Hoas → { zero; succ : k → ih → Hoas; } → Hoas` — Nat elimination
- `matchList : Hoas → Hoas → Hoas → { nil; cons : h → t → ih → Hoas; } → Hoas` — List elimination
- `matchSum : Hoas → Hoas → Hoas → Hoas → { left; right; } → Hoas` — Sum elimination

## Derived Combinators

- `map : Hoas → Hoas → Hoas → Hoas → Hoas` — map f over a list
- `fold : Hoas → Hoas → Hoas → Hoas → Hoas → Hoas` — fold over a list
- `filter : Hoas → Hoas → Hoas → Hoas` — filter a list by predicate

## Pipeline

- `verify : Hoas → Hoas → NixValue` — type-check + eval + extract
- `verifiedFn : Hoas → Hoas → VerifiedValue` — callable value with `_hoasImpl` for full kernel body verification in parent types
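Putting the eliminators and literals together, a sketch of a small verified predicate (assumes `H = fx.types.hoas` and `v = fx.types.verified` as in the earlier examples, and that `match` takes the result type first and the scrutinee second, matching the `if_` usage shown in the guide):

```nix
# Kernel-verified isZero : Nat → Bool, extracted as a plain Nix predicate.
isZero = v.verify
  (H.forall "n" H.nat (_: H.bool))
  (v.fn "n" H.nat (n:
    v.match H.bool n {
      zero = v.true_;          # 0 → true
      succ = k: ih: v.false_;  # S(k) → false (induction hypothesis unused)
    }));
```

Because the constant motive `λ_.Bool` is auto-generated, only the result type and the two branches need to be supplied.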