<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://verum-lang.org/blog</id>
    <title>Verum Blog</title>
    <updated>2026-04-17T00:00:00.000Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <link rel="alternate" href="https://verum-lang.org/blog"/>
    <subtitle>Verum Blog</subtitle>
    <icon>https://verum-lang.org/img/favicon.png</icon>
    <entry>
        <title type="html"><![CDATA[Verum, examined — a systems language for an age when humans write less code]]></title>
        <id>https://verum-lang.org/blog/verum-examined</id>
        <link href="https://verum-lang.org/blog/verum-examined"/>
        <updated>2026-04-17T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Most language announcements read as feature lists. This one won't. Verum exists because a specific, uncomfortable question has become unavoidable in the last two years: if large language models write an increasing fraction of our code, what stops the resulting systems from silently decaying? The older answers — code review, tests, strict type systems — are useful but incomplete. They treat a program's intent as something only humans ever possess. That assumption is breaking.]]></summary>
        <content type="html"><![CDATA[<p>Most language announcements read as feature lists. This one won't. Verum exists because a specific, uncomfortable question has become unavoidable in the last two years: <strong>if large language models write an increasing fraction of our code, what stops the resulting systems from silently decaying?</strong> The older answers — code review, tests, strict type systems — are useful but incomplete. They treat a program's intent as something only humans ever possess. That assumption is breaking.</p>
<p>This post explains what Verum actually is, feature by feature, grounded in the grammar and the compiler that implements it, with honest comparisons to the other languages these ideas come from. At the end we return to the question above: why this language, why now.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="1-the-premise-semantic-honesty">1. The premise: semantic honesty<a href="https://verum-lang.org/blog/verum-examined#1-the-premise-semantic-honesty" class="hash-link" aria-label="Direct link to 1. The premise: semantic honesty" title="Direct link to 1. The premise: semantic honesty" translate="no">​</a></h2>
<p>Verum's single foundational rule is <em>semantic honesty</em>: every name, every syntax form, every annotation must reflect what the compiler actually does with it. The name of a reference must describe its cost and its guarantee, not the duration of its target; the name of an effect must describe the effect, not the machinery that implements it; the name of an attribute must describe what the compiler will do on encountering it, not what it looked like in the language this one borrowed the idea from. No implicit coercions. No hidden globals. No magic that activates under some configuration but not another.</p>
<p>This sounds obvious. It is also the reason a great many language features you may know are <strong>not</strong> in Verum:</p>
<ul>
<li class="">No exceptions that unwind silently past function boundaries.</li>
<li class="">No <code>static</code> lifetime that means "statically checked" but is named after duration.</li>
<li class="">No ambient globals pretending to be language features. Dependencies travel in <code>using [...]</code> clauses, always.</li>
<li class="">No <code>!</code> macro suffix to mark "this isn't really a function call" — everything compile-time uses <code>@</code>, everything runtime does not.</li>
<li class="">No coherence loopholes; orphan and overlap rules are hard.</li>
</ul>
<p>Every one of these choices costs something (occasional verbosity, a few familiar idioms ruled out). The payoff is that the language stops lying to its reader. In 2024 the reader was human. In 2026 it is often not. A language that lies — even a little — to a model that doesn't know it's being lied to is a liability, not a convenience.</p>
<p>The three reserved keywords are <code>let</code>, <code>fn</code>, <code>is</code>. Everything else — <code>type</code>, <code>using</code>, <code>where</code>, <code>async</code>, <code>spawn</code>, <code>meta</code>, <code>theorem</code> — is a contextual keyword that rebinds to an identifier when used as one. The authoritative grammar is a single EBNF file of a little under twenty-five hundred lines; everything in the language below has a production in it.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="2-types-that-carry-their-invariants">2. Types that carry their invariants<a href="https://verum-lang.org/blog/verum-examined#2-types-that-carry-their-invariants" class="hash-link" aria-label="Direct link to 2. Types that carry their invariants" title="Direct link to 2. Types that carry their invariants" translate="no">​</a></h2>
<p>A refinement type is a base type plus a predicate: <code>Int { self &gt; 0 }</code>, <code>List&lt;T&gt; { self.is_sorted() }</code>, <code>Text { valid_email(self) }</code>. The predicate is part of the type — not a comment, not a test, not a linter annotation. SMT discharges it at compile time.</p>
<p>This idea is old (Rondon, Kawaguchi, Jhala's <em>Liquid Types</em>, 2008) and has been explored in Liquid Haskell, F*, LiquidJava, SPARK/Ada 2012. The gap between these and a production systems language has always been pragmatics — usable error messages, a memory model that survives ownership, and an escape hatch when the SMT solver gives up.</p>
<p>Here is what makes Verum's treatment specific.</p>
<p><strong>The refinement is syntactically part of the type.</strong> In Liquid Haskell you write an annotation comment <code>{-@ ... @-}</code> alongside a plain Haskell function; the refinement lives in a parallel universe. In F* you write <code>(n : nat{n &gt; 0})</code>, which is closer, but F* is not a systems language — its memory model is not <code>&amp;T</code>/<code>&amp;mut T</code>. In Verum:</p>
<div class="language-verum codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-verum codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><div class="token-line" style="color:#F8F8F2"><span class="token keyword-primary keyword" style="color:rgb(189, 147, 249);font-style:italic">type</span><span class="token plain"> </span><span class="token type-name class-name">Port</span><span class="token plain">      </span><span class="token keyword-reserved keyword" style="color:rgb(189, 147, 249);font-style:italic">is</span><span class="token plain"> </span><span class="token type-name class-name">Int</span><span class="token plain">  </span><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"> </span><span class="token number">1</span><span class="token plain"> </span><span class="token operator">&lt;=</span><span class="token plain"> </span><span class="token keyword-module keyword" style="color:rgb(189, 147, 249);font-style:italic">self</span><span class="token plain"> </span><span class="token logical-op operator">&amp;&amp;</span><span class="token plain"> </span><span class="token keyword-module keyword" style="color:rgb(189, 147, 249);font-style:italic">self</span><span class="token plain"> </span><span class="token operator">&lt;=</span><span class="token plain"> </span><span class="token number">65535</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></div><div class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token keyword-primary keyword" style="color:rgb(189, 147, 249);font-style:italic">type</span><span class="token 
plain"> </span><span class="token type-name class-name">NonEmpty</span><span class="token operator">&lt;</span><span class="token type-name class-name">T</span><span class="token operator">&gt;</span><span class="token plain"> </span><span class="token keyword-reserved keyword" style="color:rgb(189, 147, 249);font-style:italic">is</span><span class="token plain"> </span><span class="token type-name class-name">List</span><span class="token operator">&lt;</span><span class="token type-name class-name">T</span><span class="token operator">&gt;</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"> </span><span class="token keyword-module keyword" style="color:rgb(189, 147, 249);font-style:italic">self</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token plain">len</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token plain"> </span><span class="token operator">&gt;</span><span class="token plain"> </span><span class="token number">0</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></div><div class="token-line" style="color:#F8F8F2"><span class="token plain" style="display:inline-block"></span><br></div><div class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token attribute function" style="color:rgb(80, 250, 123)">@verify</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">formal</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token plain"></span><br></div><div class="token-line" style="color:#F8F8F2"><span class="token 
plain"></span><span class="token function-definition keyword" style="color:rgb(189, 147, 249);font-style:italic">fn </span><span class="token function-definition function" style="color:rgb(80, 250, 123)">bind</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">p</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token type-name class-name">Port</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token plain"> </span><span class="token operator">-&gt;</span><span class="token plain"> </span><span class="token type-name class-name">Socket</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"> </span><span class="token operator">..</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><span class="token plain">   </span><span class="token comment" style="color:rgb(98, 114, 164)">// the compiler knows p is in [1, 65535]</span><br></div></code></pre></div></div>
<p>If you pass <code>bind(70000)</code>, the error is reported at the call site, not at a runtime panic. If the value is dataflow-derived, the compiler narrows its refinement along control flow — a guard <code>if p &gt; 0 &amp;&amp; p &lt; 65536</code> is enough to prove the obligation.</p>
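<p>Concretely, the narrowing in that last sentence looks like this. A sketch reusing the <code>Port</code> and <code>bind</code> declarations above, with the out-of-range branch elided:</p>
<pre><code class="language-verum">fn serve(raw: Int) -&gt; Socket {
    // raw carries no refinement here; bind(raw) at this point would be
    // rejected at this call site with an unproven-obligation error.
    if raw &gt; 0 &amp;&amp; raw &lt; 65536 {
        // inside the guard, raw is narrowed to [1, 65535], which entails
        // Port's predicate, so the obligation discharges.
        bind(raw)
    } else {
        ...   // handle the out-of-range case
    }
}
</code></pre>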
<p><strong>Predicates are first-class.</strong> User-defined <code>@logic</code> functions (<code>@logic fn is_sorted&lt;T: Ord&gt;(xs: &amp;List&lt;T&gt;) -&gt; Bool { forall i in 0..xs.len()-1. xs[i] &lt;= xs[i+1] }</code>) are reflected as SMT axioms; calling them from a refinement is not a syntactic trick. This is closer to Dafny's ghost functions than to Rust's <code>const fn</code>.</p>
<p><strong>When SMT fails, there are tactics, not panic.</strong> The language admits twenty-two named tactics: <code>auto</code>, <code>simp</code>, <code>ring</code>, <code>field</code>, <code>omega</code>, <code>blast</code>, <code>smt</code>, <code>trivial</code>, <code>assumption</code>, <code>contradiction</code>, <code>induction</code>, <code>cases</code>, <code>rewrite</code>, <code>unfold</code>, <code>apply</code>, <code>exact</code>, <code>intro</code>, <code>intros</code>, <code>cubical</code>, <code>category_simp</code>, <code>category_law</code>, <code>descent_check</code>. User-registered tactics are referenced by plain identifier. Combinators — <code>try</code>, <code>else</code>, <code>repeat</code>, <code>first</code>, <code>all_goals</code> — compose them. The result is a Coq/Lean-style tactic language embedded in a systems language, not a separate prover invoked from the outside.</p>
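<p>A small worked sketch of those combinators. The <code>theorem</code> spelling here is a plausible reading of the grammar's keyword list, not a quoted example, and <code>append_len</code> is an illustrative name:</p>
<pre><code class="language-verum">theorem append_len&lt;T&gt;(xs: List&lt;T&gt;, ys: List&lt;T&gt;)
    : xs.append(ys).len() == xs.len() + ys.len()
{
    // try the cheap route first; fall back to induction if SMT gives up
    try smt else { induction xs; all_goals simp }
}
</code></pre>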
<p>The trade-off is real: refinement types do not come for free in compile time, and complex predicates can exceed the SMT solver's decidable fragment. That is why the next feature exists.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="3-gradual-verification--the-spectrum">3. Gradual verification — the spectrum<a href="https://verum-lang.org/blog/verum-examined#3-gradual-verification--the-spectrum" class="hash-link" aria-label="Direct link to 3. Gradual verification — the spectrum" title="Direct link to 3. Gradual verification — the spectrum" translate="no">​</a></h2>
<p>Most verified-language proposals ask the programmer to commit to formal proof up front. Verum does not. The <code>@verify(...)</code> attribute, the main interface to verification, names <strong>seven distinct strategies</strong>:</p>
<table><thead><tr><th>Strategy</th><th>Intent</th><th>Cost / guarantee</th></tr></thead><tbody><tr><td><code>runtime</code></td><td>Check the predicate at runtime</td><td>Cheapest, unverified</td></tr><tr><td><code>static</code></td><td>Type-level static checks only</td><td>Fast, partial</td></tr><tr><td><code>fast</code></td><td>Prefer speed over completeness</td><td>Fastest verify, may skip</td></tr><tr><td><code>formal</code></td><td>Balanced default</td><td>Standard SMT discharge</td></tr><tr><td><code>thorough</code></td><td>Race multiple techniques in parallel</td><td>Slower, robust</td></tr><tr><td><code>certified</code></td><td>Independent cross-verification</td><td>Slowest, strongest</td></tr><tr><td><code>synthesize</code></td><td>Generate a term satisfying the specification</td><td>Variable</td></tr></tbody></table>
<p>The grammar also accepts two alias spellings — <code>@verify(proof)</code> for <code>formal</code> and <code>@verify(reliable)</code> for <code>thorough</code> — so the surface keyword count is nine, but the behaviours collapse to these seven.</p>
<p>The same body of code moves up the ladder as trust becomes a requirement. Ship a library at <code>@verify(runtime)</code>. When a caller depends on a stronger guarantee, promote to <code>@verify(formal)</code> or <code>@verify(certified)</code> — no rewrite. If the solver times out, drop a tactic script or fall back to <code>@verify(runtime)</code> with a visible apology in the type.</p>
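<p>The ladder in practice (<code>NonZero</code> and <code>div</code> are illustrative names, not standard-library items):</p>
<pre><code class="language-verum">type NonZero is Int { self != 0 };

// first release: predicate checked at runtime, nothing proven
@verify(runtime)
fn div(a: Int, b: NonZero) -&gt; Int { ... }

// later: identical body, stronger guarantee; only the attribute changes
@verify(formal)
fn div(a: Int, b: NonZero) -&gt; Int { ... }
</code></pre>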
<p>No other language in production use has this shape. Liquid Haskell is all-or-nothing. F* is all-or-nothing. Dafny is all-or-nothing. Kotlin contracts, TypeScript <code>asserts</code> clauses, and Java JML annotations are strictly weaker (runtime-only or unsound). Rust has no SMT integration in the language at all. Coq and Lean are not systems languages.</p>
<p>The closest spiritual neighbour is probably <strong>SPARK/Ada</strong>, which has a gold/silver/bronze hierarchy for verification. But SPARK inherits Ada's surface and toolchain; Verum is the first attempt to put this hierarchy into a modern systems language whose syntax would feel familiar to a Rust, Swift, or Kotlin reader.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="4-dependent-types-and-cubical-hott--in-a-systems-language">4. Dependent types and cubical HoTT — in a systems language<a href="https://verum-lang.org/blog/verum-examined#4-dependent-types-and-cubical-hott--in-a-systems-language" class="hash-link" aria-label="Direct link to 4. Dependent types and cubical HoTT — in a systems language" title="Direct link to 4. Dependent types and cubical HoTT — in a systems language" translate="no">​</a></h2>
<p>Dependent types let a type depend on a value. The canonical example is <code>Vec&lt;T, n&gt;</code> — a vector whose length is part of its type, so that <code>vec_concat : Vec&lt;T, n&gt; → Vec&lt;T, m&gt; → Vec&lt;T, n + m&gt;</code> makes the shape law a type-level fact. Idris, Agda, Coq, Lean, F* all live in this space.</p>
<p>Verum includes dependent types, sigma-type refinements of the form <code>x: Int where x &gt; 0</code>, cubical path-equality types written <code>Path&lt;A&gt;(a, b)</code>, and a cubical normaliser with higher-inductive types and computational univalence. The dependent-type checker, the cubical normaliser, the HoTT primitives, and the verification pipeline all ship in the current release, and conformance tests cover every grammar production.</p>
<p>The combination — dependent types <strong>and</strong> <code>&amp;mut T</code> <strong>and</strong> systems-language syntax <strong>and</strong> production tooling — is rare. Idris 2 ships a linear-types extension but has no CBGR-style runtime safety; Lean 4 is a proof assistant first and a systems language a distant second; ATS has dependent types and linear types but a notoriously difficult surface.</p>
<p>The useful question is: <em>when does dependent typing actually earn its cost?</em> Verum's position is that it does when a library's API has shape invariants that today become runtime assertions or doc-comment folklore — tensor shape checks for ML (<code>Tensor&lt;T, [B, H, L, D]&gt;</code>), index bounds for buffer code, protocol state machines, cryptographic nonce uniqueness. For the rest, the refinement-type layer is sufficient.</p>
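<p>A sketch of what those shape invariants look like as signatures. The generic-parameter spelling is an assumption extrapolated from the grammar, and <code>concat</code>/<code>head</code> are illustrative:</p>
<pre><code class="language-verum">// the length law is a type-level fact, not a runtime assertion
fn concat&lt;T, n: Int, m: Int&gt;(a: Vec&lt;T, n&gt;, b: Vec&lt;T, m&gt;) -&gt; Vec&lt;T, n + m&gt; { ... }

// a sigma-style refinement on the index parameter rules out the empty case
fn head&lt;T, n: Int where n &gt; 0&gt;(v: &amp;Vec&lt;T, n&gt;) -&gt; &amp;T { ... }
</code></pre>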
<p>The cubical layer is where Verum becomes experimental. Computational univalence (Cohen/Coquand/Huber/Mörtberg, 2015) makes equivalence between types a definitional equality. It is powerful — you can transport programs along proofs of type isomorphism — and rarely needed. Verum ships it because the machinery was a prerequisite for the proof system, not because every program will use it.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="5-memory-without-a-one-size-answer-the-three-tier-reference-model">5. Memory without a one-size answer: the three-tier reference model<a href="https://verum-lang.org/blog/verum-examined#5-memory-without-a-one-size-answer-the-three-tier-reference-model" class="hash-link" aria-label="Direct link to 5. Memory without a one-size answer: the three-tier reference model" title="Direct link to 5. Memory without a one-size answer: the three-tier reference model" translate="no">​</a></h2>
<p>The memory-safety debate has historically offered two choices. Either you get garbage collection (Go, Java, Haskell, OCaml; cheap to write, unpredictable latency) or you get ownership and lifetimes (Rust, Cyclone; predictable, but one model for everything). Verum picks a third shape: <strong>three tiers of reference, chosen per use site</strong>.</p>
<table><thead><tr><th>Tier</th><th>Syntax</th><th>Cost</th><th>Guarantee</th></tr></thead><tbody><tr><td>0</td><td><code>&amp;T</code>, <code>&amp;mut T</code></td><td>~15 ns*</td><td>Capability-Based Generational References</td></tr><tr><td>1</td><td><code>&amp;checked T</code></td><td>0 ns</td><td>Compiler-proven safe (escape analysis)</td></tr><tr><td>2</td><td><code>&amp;unsafe T</code></td><td>0 ns</td><td>Caller proves safety; requires <code>unsafe</code></td></tr></tbody></table>
<p>* target overhead per dereference on current hardware; the precise cost is a small number of cycles for a single comparison and a predicted branch.</p>
<p>CBGR (Capability-Based Generational References) is the name of the Tier 0 mechanism. The compact reference form — the one you use when the pointee is a sized value — is sixteen bytes: an eight-byte pointer, a four-byte generation counter, and a four-byte word that packs a sixteen-bit epoch next to sixteen bits of capability flags. The extended form, used for slices, trait objects, and interior references, layers an eight-byte metadata word (length for slices, vtable for trait objects), a four-byte offset, and a four-byte reserved field on top of the compact one — thirty-two bytes total. The allocation header carries the current generation; on every dereference the reference's generation is compared against the header's. A free or explicit revoke atomically bumps the header's generation; every subsequent deref through a now-stale reference is rejected before it can touch memory. Capabilities are eight monotonic bits — <code>CAP_READ</code>, <code>CAP_WRITE</code>, <code>CAP_EXECUTE</code>, <code>CAP_DELEGATE</code>, <code>CAP_REVOKE</code>, <code>CAP_BORROWED</code>, <code>CAP_MUTABLE</code>, <code>CAP_NO_ESCAPE</code> — that can be attenuated, never expanded: an API that hands you a read-only reference cannot itself be used to widen it.</p>
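<p>The layout described above, drawn out byte for byte (the field names are descriptive, not the compiler's internal identifiers):</p>
<pre><code>compact reference (16 bytes):
+-------------+----------------+-----------+----------+
| pointer (8) | generation (4) | epoch (2) | caps (2) |
+-------------+----------------+-----------+----------+

extended reference (32 bytes) = compact +
+------------------------------+------------+--------------+
| metadata (8): len or vtable  | offset (4) | reserved (4) |
+------------------------------+------------+--------------+

deref check, conceptually:
    if ref.generation != header.generation { trap }   // before any access
</code></pre>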
<p>Tier 1 is the escape hatch. The compiler's reference-analysis suite — escape analysis, non-lexical lifetimes, Polonius-style borrow checking, points-to, dominance, type-sensitive and concurrency-sensitive flow, ownership and lifetime inference, plus tier-aware and array-bounds analysis — proves where Tier 0 can be lowered to a raw load. The compiler silently <strong>promotes</strong> <code>&amp;T</code> to <code>&amp;checked T</code> where this is safe; it never demotes silently — <code>&amp;unsafe T</code> is always a source-level opt-in written by the programmer.</p>
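<p>The three tiers side by side. Whether <code>unsafe</code> is spelled as a block or a function modifier is an assumption here; the text above only says Tier 2 requires it:</p>
<pre><code class="language-verum">fn total(xs: &amp;List&lt;Int&gt;) -&gt; Int { ... }             // Tier 0: generation check per deref

fn total_hot(xs: &amp;checked List&lt;Int&gt;) -&gt; Int { ... } // Tier 1: compiler-proven, raw loads

fn total_raw(xs: &amp;unsafe List&lt;Int&gt;) -&gt; Int {        // Tier 2: caller carries the proof
    unsafe { ... }
}
</code></pre>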
<p>Compare this to the prior art:</p>
<ul>
<li class=""><strong>Rust</strong>: one tier. <code>&amp;T</code>/<code>&amp;mut T</code> go through the borrow checker; if the check fails you rewrite the program. There is no "pay a little at runtime and move on" — the only escape is <code>unsafe { ... }</code>.</li>
<li class=""><strong>Cyclone, ATS</strong>: lifetimes or linear types, no runtime tier. Strong proofs, steep learning curve.</li>
<li class=""><strong>Swift, ObjC</strong>: ARC at runtime. One tier, always.</li>
<li class=""><strong>Pony</strong>: six reference capabilities. Closest in spirit to Verum's capability bits, but Pony binds the capability system to an actor model; Verum keeps it orthogonal to concurrency.</li>
<li class=""><strong>Fil-C, CheckedC</strong>: retrofit runtime checks onto C. Similar in motivation to CBGR but tied to C's surface.</li>
</ul>
<p>Verum's claim is not that three tiers are objectively better than one. It's that committing to one tier at the language level forces the programmer into a trade-off that should be made per function. The language should permit both answers; the compiler should make the cheaper answer automatic where it's provably safe.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="6-one-context-system-for-runtime-and-compile-time--and-why-it-isnt-an-effect-system">6. One context system for runtime and compile time — and why it isn't an effect system<a href="https://verum-lang.org/blog/verum-examined#6-one-context-system-for-runtime-and-compile-time--and-why-it-isnt-an-effect-system" class="hash-link" aria-label="Direct link to 6. One context system for runtime and compile time — and why it isn't an effect system" title="Direct link to 6. One context system for runtime and compile time — and why it isn't an effect system" translate="no">​</a></h2>
<p>Dependency injection, dynamic scoping, reader monads, algebraic effects, effect rows, context parameters — every modern language has a different name for the same underlying need: <strong>a function sometimes wants a value from an enclosing scope without passing it as a positional argument</strong>. The range of answers is wide, and the most expressive of them — algebraic effects — is worth a closer look before we say where Verum lands.</p>
<p><strong>The algebraic-effects option.</strong> Eff, Koka, Effekt, Frank, and OCaml 5 take the most expressive approach: treat every external interaction (logging, I/O, non-determinism, exceptions, parsing, mutable state) as an <em>effect operation</em> that a surrounding <em>handler</em> interprets. Handlers can resume the suspended computation with a value, abort it, run it many times (enabling non-deterministic search), or compose with other handlers to stack interpretations. Theoretically this subsumes dependency injection, exception handling, coroutines, generators, transactional state, and logic programming in a single mechanism — a very real achievement.</p>
<p>The cost of that generality is non-trivial. An effect operation can in principle capture its own continuation, which means every call site has to be compiled as if it might perform a stack-switch. Koka's evidence-passing transform reduces this cost by specialising handlers that never resume, but code in the general case still pays for a handler frame on every effectful call. Effekt compiles to capability-passing form. OCaml 5's fibres use a runtime stack-switching primitive. Published micro-benchmarks put the cost of a handled effect op in the tens of nanoseconds in the best case, higher with less specialised handlers. More importantly, <strong>the cost is paid whether or not the program actually uses the resumption/reinterpretation power</strong> — the surface-level function call is forced to go through the effect machinery because, in principle, the handler <em>might</em> do something non-trivial.</p>
<p>The empirical observation that drove Verum's design is that in real codebases, the vast majority of what looks like "effectful" code is simple dependency injection: <em>"give this function a <code>Logger</code>, a <code>Database</code>, a <code>Clock</code> — but let the test suite swap in mocks."</em> That use case is served by a vtable and task-local storage. It does not require delimited continuations. And for the rare cases where you do want reinterpretation — non-deterministic search, probabilistic programming, parser combinators — Verum offers metaprogramming and tactic languages that keep that power available without forcing every function in the program to pay for it.</p>
<p>So Verum makes a deliberate trade. The context system is <strong>capability-based dependency injection</strong>, not an algebraic-effect system. The syntactic surface is a single <code>using [...]</code> clause; the runtime cost is a single task-local lookup on the order of two to thirty nanoseconds depending on whether the context sits in the 256-slot inline array or in the dynamic fallback map, and often zero when the AOT compiler can monomorphise the provider away entirely. You lose the ability to <code>resume</code> a suspended operation from a handler. You gain a runtime cost that is independent of how many effectful operations the function performs, a testing story that is just "swap the provider," and a compilation model that every systems programmer already understands.</p>
<p>For reference, here is the landscape the <code>using [...]</code> clause has to compete with:</p>
<ul>
<li class=""><strong>Rust</strong>: services are threaded by hand through arguments, or through trait objects, or through ecosystem frameworks (<code>tokio::spawn_local</code>, <code>anyhow</code>-wrapped services). No language-level story. Async tasks lose their caller's logger unless you pass it explicitly.</li>
<li class=""><strong>Haskell</strong>: the <code>Reader</code> monad (or <code>ReaderT</code> transformer). Elegant, unifies with other effects via <code>mtl</code>, but "infectious" in that every function's type now mentions the transformer stack.</li>
<li class=""><strong>Scala 3</strong>: <code>given</code> / <code>using</code> — the closest syntactic ancestor. Resolution is compile-time, so there is no runtime lookup, but also no runtime provider swap without explicit helper machinery.</li>
<li class=""><strong>Kotlin</strong>: receivers for some cases and <code>CoroutineContext</code> for others — two disjoint mechanisms for the same problem, with different semantics for cancellation and inheritance.</li>
<li class=""><strong>Koka, Effekt, Eff, OCaml 5</strong>: algebraic effects, as discussed above. More expressive than Verum's contexts, more expensive per operation, and a steeper learning curve.</li>
<li class=""><strong>Verum</strong>: one clause, <code>using [...]</code>, with one cost model, one inheritance rule on <code>spawn</code>, and one lookup machinery.</li>
</ul>
<p>The grammar production for <code>using [C1, C2, ...]</code> applies <strong>identically</strong> to a runtime function taking a <code>Database</code>/<code>Logger</code>/<code>Clock</code> and to a <code>meta fn</code> taking a <code>TypeInfo</code>/<code>BuildAssets</code>/<code>Schema</code>. The call-site must provide a matching set; child tasks spawned with <code>spawn</code> inherit the parent's stack; <code>spawn using [A, B]</code> drops everything else. There is no ambient global; there is no conditional import; there is one lookup rule.</p>
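<p>One clause, both worlds. A sketch; the <code>meta fn</code> parameter, the spawned block, and the bodies are illustrative:</p>
<pre><code class="language-verum">// runtime dependency injection: callers (or tests) provide the stack
fn audit(event: Text) using [Logger, Clock] { ... }

// the same clause on a compile-time function
meta fn emit_schema(name: Text) using [TypeInfo, CompileDiag] { ... }

// a child task narrows its inheritance explicitly
spawn using [Logger] { ... }   // sees Logger; everything else is dropped
</code></pre>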
<p>The numbers are concrete. Fourteen compile-time meta-contexts are shipped with the language:</p>
<table><thead><tr><th>Meta-context</th><th>Purpose</th></tr></thead><tbody><tr><td><code>TypeInfo</code></td><td>Reflect on types: fields, variants, size, alignment, names</td></tr><tr><td><code>AstAccess</code></td><td>Inspect and transform AST nodes</td></tr><tr><td><code>CodeSearch</code></td><td>Query the whole program (definitions, callers, uses)</td></tr><tr><td><code>DepGraph</code></td><td>Navigate the dependency graph between items</td></tr><tr><td><code>Schema</code></td><td>Read project-level schemas (SQL, JSON, Proto)</td></tr><tr><td><code>Hygiene</code></td><td>Name generation and identifier freshness</td></tr><tr><td><code>MacroState</code></td><td>Per-expansion storage, guard recursion</td></tr><tr><td><code>SourceMap</code></td><td>Span manipulation and source-line lookup</td></tr><tr><td><code>StageInfo</code></td><td>Query the current meta-stage and phase</td></tr><tr><td><code>CompileDiag</code></td><td>Emit errors, warnings, help text</td></tr><tr><td><code>BuildAssets</code></td><td>Load files and emit constants at build time</td></tr><tr><td><code>ProjectInfo</code></td><td>Manifest fields, workspace members, feature flags</td></tr><tr><td><code>MetaRuntime</code></td><td>Controlled evaluation at compile time</td></tr><tr><td><code>MetaBench</code></td><td>Benchmark compile-time work</td></tr></tbody></table>
<p>The standard runtime set is ten contexts: <code>Logger</code>, <code>Database</code>, <code>Auth</code>, <code>Config</code>, <code>Cache</code>, <code>Metrics</code>, <code>Tracer</code>, <code>Clock</code>, <code>Random</code>, <code>FileSystem</code>. Across the fourteen meta-contexts there are on the order of two hundred and thirty public methods, each a typed, documented API — not a reflection surface groped through strings.</p>
<p>Why unify these two worlds? Because from the programmer's perspective, a <code>meta fn</code> that inspects a type via <code>TypeInfo</code> is doing the same thing as a runtime function that reads a config via <code>Config</code>: reaching into an outer scope for an authorised value. The surface syntax for both should be the same. Having <em>separate</em> systems for compile-time and runtime dependency is how languages grow two subtly different sets of rules for the same concept. That is exactly the kind of hidden complexity semantic honesty is meant to prevent.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="7-concurrency-structured-contextual-and-cancellable">7. Concurrency: structured, contextual, and cancellable<a href="https://verum-lang.org/blog/verum-examined#7-concurrency-structured-contextual-and-cancellable" class="hash-link" aria-label="Direct link to 7. Concurrency: structured, contextual, and cancellable" title="Direct link to 7. Concurrency: structured, contextual, and cancellable" translate="no">​</a></h2>
<p><code>async fn</code>, <code>await</code>, <code>spawn</code>, <code>nursery { ... }</code>, <code>select { ... }</code>, <code>for await</code>. None of these is novel individually: <code>async</code>/<code>await</code> comes from C#, structured concurrency is the Nathaniel J. Smith/Trio tradition, and <code>nursery</code> plus scoped cancellation grew out of it.</p>
<p>Verum's contribution is not a new primitive but the integration:</p>
<ul>
<li class="">A task's <code>using [...]</code> context stack is part of its identity. A child task inherits it by default; <code>spawn using [A, B]</code> drops everything else. There is no ambient cancellation token passed through magic.</li>
<li class="">The <code>nursery(on_error: cancel_all) { ... }</code> block owns every task spawned inside it. Leaving the scope waits for every child. No task outlives its scope. This is the Trio invariant, stated in the language.</li>
<li class=""><code>async fn</code> does not infect callees. A <code>fn</code> is <code>fn</code>, and <code>async fn</code> is different. There is no automatic async-in-async coloring beyond what is visible in the type. (Async colouring remains a real language-design cost, discussed openly — Verum does not claim to solve it, only to make it explicit.)</li>
</ul>
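<p>The three points above compose into something like the following sketch; the fetch functions and error type are placeholders, and only the keywords come from this post:</p>
<pre><code>async fn refresh() -&gt; Result&lt;(), NetError&gt; using [Database, Logger] {
    nursery(on_error: cancel_all) {
        spawn refresh_users();                // inherits [Database, Logger]
        spawn using [Logger] refresh_stats(); // this child sees Logger only
    }   // leaving the block waits for every child; none outlives the scope
    Ok(())
}
</code></pre>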
<p>Under this surface sits a work-stealing per-core executor, a platform-native I/O reactor, and a context stack that is saved at every <code>.await</code> and restored before resumption — so that a function suspended for an hour and resumed in a different worker thread still sees the same <code>using [Database, Logger]</code> stack it had when it yielded.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="8-the-runtime-in-one-structure">8. The runtime, in one structure<a href="https://verum-lang.org/blog/verum-examined#8-the-runtime-in-one-structure" class="hash-link" aria-label="Direct link to 8. The runtime, in one structure" title="Direct link to 8. The runtime, in one structure" translate="no">​</a></h2>
<p>Most languages treat memory, capabilities, errors, and concurrency as four separate systems that happen to coexist at runtime. The allocator is one library, the dependency-injection framework is another, error handling is structured exceptions, and the scheduler is a third thing entirely. Integration between them is the programmer's problem.</p>
<p>Verum merges all four into a single typed value — the <strong>Execution Environment</strong>, written <strong>θ+</strong> in the sources. Every task, from <code>fn main()</code> downwards, carries one. The layout is fixed, the size is 2,560 bytes, and the four pillars are always at known offsets:</p>
<ul>
<li class=""><strong>Memory</strong> (128 B) — CBGR safety tier, allocator reference, shared-ownership registry, generation and epoch trackers.</li>
<li class=""><strong>Capabilities</strong> (2,048 B) — a 256-slot inline array of typed contexts (compile-time-assigned slots reach constant-time lookup at ~2 ns), plus a map for dynamic ones at ~20 ns.</li>
<li class=""><strong>Recovery</strong> (192 B) — supervisor reference, eight inline circuit breakers (64 B each), four inline retry policies (32 B each), <code>defer</code> stack.</li>
<li class=""><strong>Concurrency</strong> (128 B) — executor handle, I/O driver reference, isolation model, parallelism configuration, task ID.</li>
</ul>
<p>Nothing in that list requires a heap allocation on the hot path. An inline circuit breaker is a struct, not a pointer-to-struct. A retry policy is a struct. A context in the fast path is an index into an array. The combined result: task spawn costs ~150 ns; task fork costs ~50–70 ns; a context lookup through a compile-time slot is two nanoseconds.</p>
<p>A programmer never writes <code>env.</code> anywhere. The mapping from language features to pillars is automatic: <code>&amp;T</code> reads the memory pillar; <code>using [Logger]</code> reads the capabilities pillar; <code>defer { ... }</code> writes the recovery pillar; <code>spawn</code> reads all four and forks them for the child. Four systems, one lookup.</p>
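<p>A short fragment makes the mapping concrete; the comments name the pillar each construct reads or writes, and the function bodies are placeholders:</p>
<pre><code>fn handle(req: &amp;Request) -&gt; Result&lt;Response, AppError&gt; using [Logger, Database] {
    //  &amp;Request    -&gt; memory pillar (generation/epoch checks)
    //  using [...]  -&gt; capabilities pillar (~2 ns slot lookup on the fast path)
    defer { release(req); }   //  -&gt; recovery pillar (defer stack, LIFO)
    spawn audit(req);         //  -&gt; reads all four pillars, forks them for the child
    Ok(process(req)?)
}
</code></pre>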
<p>This matters concretely when programs get large. The failure modes that plague long-running services — "the async runtime was spun up after my logger was configured, so logs from request 47 went to the void"; "a panic in a worker thread took down the supervisor that would have restarted it"; "a context set in a spawned task leaked after the task panicked" — disappear when the four concerns share one structure whose lifecycle is the task's lifecycle. A panic in Verum runs the task's defers, restores the <code>parent_snapshot</code> of the capability pillar, and forwards to the supervisor — all using a single unwinder pass over the same environment, not four separate cleanup protocols.</p>
<p>The runtime ships in <strong>five profiles</strong>: <code>full</code> for servers, <code>single_thread</code> for WASM and single-core targets, <code>no_async</code> for blocking CLIs, <code>no_heap</code> for real-time and safety-critical code, and <code>embedded</code> for microcontrollers. A program that requests more than its profile supports is a compile error, not a runtime failure. The same language, the same <code>ExecutionEnv</code> type, and the same source files — with different executor, allocator, and I/O-driver implementations chosen at link time.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="9-errors-as-values-supervision-as-a-language-feature">9. Errors as values, supervision as a language feature<a href="https://verum-lang.org/blog/verum-examined#9-errors-as-values-supervision-as-a-language-feature" class="hash-link" aria-label="Direct link to 9. Errors as values, supervision as a language feature" title="Direct link to 9. Errors as values, supervision as a language feature" translate="no">​</a></h2>
<p>Verum has no exceptions in the unwinding sense. It has three mechanisms that compose:</p>
<p><strong><code>Result&lt;T, E&gt;</code> and <code>throws E</code></strong>. Errors travel as values. The <code>throws E</code> clause on a function signature is a type-level declaration that the function may fail with errors of type <code>E</code>; the <code>?</code> operator propagates. Nothing here is new — the surprise is how far the type system carries it. An <code>E</code> that is a union of several specific error shapes can be narrowed at the catch site; a refinement type on an error payload is a type; <code>@verify(formal)</code> can prove a function whose signature is <code>-&gt; Result&lt;T, E&gt;</code> always returns <code>Ok</code> on a given subset of inputs.</p>
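<p>A sketch of the shape, combining the pieces named above; the function, its error types, and the <code>throw</code> expression are our illustration, not confirmed syntax:</p>
<pre><code>fn parse_port(s: Text) -&gt; Int { self &gt; 0 and self &lt; 65536 }
    throws ParseError | RangeError
{
    let n = s.to_int()?;                      // ? propagates the ParseError
    if n &lt; 1 or n &gt; 65535 { throw RangeError { value: n } }
    n
}
</code></pre>
<p>The union in the <code>throws</code> clause is what the catch site narrows against.</p>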
<p><strong><code>defer</code> and <code>errdefer</code></strong>. RAII cleanup without destructors. <code>defer { cleanup(); }</code> pushes a handler onto the task's <code>RecoveryContext.defer_stack</code>; when control leaves the scope — by return, by <code>?</code>-propagation, or by panic — the handler runs. <code>errdefer { rollback(); }</code> runs only on the error path. The stack is LIFO; combined with refinement types, this replaces a large class of "we forgot to release the lock in error path 17" bugs.</p>
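<p>The classic transaction shape shows how the two forms divide the work; the database API here is a placeholder:</p>
<pre><code>fn transfer(from: AccountId, to: AccountId, amount: Int) throws DbError using [Database] {
    let tx = begin()?;
    defer { tx.close(); }        // every exit: return, ?-propagation, panic
    errdefer { tx.rollback(); }  // error path only
    debit(tx, from, amount)?;
    credit(tx, to, amount)?;
    tx.commit()?                 // on success: commit runs, then close (LIFO)
}
</code></pre>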
<p><strong>Supervision trees</strong>. When a task fails in a way its caller cannot handle locally, Verum's runtime consults the task's supervisor (if any) and applies one of four strategies: <code>OneForOne</code> (restart only the failing child), <code>OneForAll</code> (restart every sibling), <code>RestForOne</code> (restart the failing child and everyone started after it), <code>SimpleOneForOne</code> (restart spec-homogeneous children). Each child carries a restart policy — <code>Permanent</code>, <code>Transient</code>, <code>Temporary</code> — and a <code>RestartIntensity</code> (max N restarts per M-millisecond window) that, when exceeded, escalates to the parent supervisor.</p>
<p>This is Erlang/OTP's idea, translated to a statically typed systems language. Four supervision strategies, three restart policies, configurable intensity, escalation to a parent supervisor, named children, orderly shutdown with per-child timeouts — the full OTP menu. A web server whose handlers crash does not take down the process; the supervisor restarts the handler, the circuit breaker in the task's <code>RecoveryContext</code> trips after five failures, and the log line has a stack trace.</p>
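<p>What declaring such a tree might look like, hedged heavily: the strategy and policy names below come from this post, but the builder API around them is entirely our invention:</p>
<pre><code>let sup = Supervisor::new(
    strategy: OneForOne,
    intensity: RestartIntensity { max_restarts: 5, window_ms: 10_000 },
);
sup.child("listener", restart: Permanent, start: || spawn serve());
sup.child("metrics",  restart: Transient, start: || spawn push_metrics());
// If "listener" exceeds 5 restarts in 10 s, the failure escalates to the parent.
</code></pre>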
<p>Most systems languages leave this story to frameworks. Verum gives it a type system, a runtime structure, and a set of primitives — because fault tolerance is a language concern if safety is.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="10-the-standard-library-is-verum">10. The standard library is Verum<a href="https://verum-lang.org/blog/verum-examined#10-the-standard-library-is-verum" class="hash-link" aria-label="Direct link to 10. The standard library is Verum" title="Direct link to 10. The standard library is Verum" translate="no">​</a></h2>
<p>Almost every production programming language has a layer it doesn't own. C's <code>stdio</code> is glibc (or musl, or Apple's libSystem). Rust's <code>std</code> builds on libc for anything OS-adjacent — threads, file I/O, the memory allocator. Go implements its own runtime yet still calls through the platform's libc on several targets. Swift's <code>Foundation</code> is Objective-C.</p>
<p>Verum's standard library is written in Verum. Specifically:</p>
<ul>
<li class="">The allocator — the capability-based generational-reference arena — is in <code>core/mem/allocator.vr</code>. No <code>malloc</code> dependency.</li>
<li class="">Threads, TLS, futexes, atomics — all in <code>core/sync/</code> and <code>core/runtime/thread.vr</code>, implemented over VBC intrinsics that map to platform syscalls directly. No <code>pthread</code>.</li>
<li class="">File I/O, networking, time — <code>core/io/</code>, <code>core/net/</code>, <code>core/time/</code>. Platform syscalls through VBC intrinsics, no libc.</li>
<li class="">Regex in <code>core/text/regex.vr</code>; JSON, base64, hex in <code>core/encoding/</code>; the tagged-literal compile-time validators for SQL, URL, UUID, email, CIDR, datetime, and the rest of the <code>tag#"..."</code> family in <code>core/text/tagged_literals.vr</code>. All <code>.vr</code>.</li>
<li class="">The concurrency runtime — executor, I/O driver, supervision tree, circuit breakers — all in <code>core/runtime/</code> and <code>core/async/</code>. No Rust <code>tokio</code> dependency, no C <code>libuv</code>.</li>
</ul>
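<p>The tagged-literal validators in that last group run at compile time; a sketch of the usage pattern, where only the <code>sql</code>, <code>url</code>, and <code>uuid</code> tags are taken from the post:</p>
<pre><code>let q  = sql#"SELECT id, name FROM users WHERE id = $1";   // validated at compile time
let u  = url#"https://verum-lang.org/blog";                // must parse as a URL
let id = uuid#"550e8400-e29b-41d4-a716-446655440000";      // must be a valid UUID
// A malformed literal is a compile error, not a runtime exception.
</code></pre>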
<p>The zero-FFI path works because VBC opcodes <code>0xF1</code> (mmap/munmap), <code>0xF2</code> (futex, atomics), <code>0xF4</code> (io_uring, kqueue, IOCP), and <code>0xF5</code> (clock_gettime) are first-class language primitives that the VBC interpreter and the LLVM backend both lower directly. There is no hidden C ABI anywhere in a Verum binary. The consequence for security audits and proof-carrying distribution is large: a <code>.cog</code> archive's VBC is a closed artefact that can be validated offline against declared capabilities without the validator needing to trust a distro's libc version.</p>
<p>On macOS the one exception is <code>libSystem.B.dylib</code>, Apple's stable ABI entry point — Apple doesn't guarantee syscall ABI stability, so programs linking directly to syscalls would break across macOS versions. <code>libSystem</code> is linked, nothing else. No Rust <code>std</code>, no glibc, no <code>libuv</code>.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="11-metaprogramming-without-a-second-language">11. Metaprogramming without a second language<a href="https://verum-lang.org/blog/verum-examined#11-metaprogramming-without-a-second-language" class="hash-link" aria-label="Direct link to 11. Metaprogramming without a second language" title="Direct link to 11. Metaprogramming without a second language" translate="no">​</a></h2>
<p>Rust's macros have an exclamation mark. Scheme's have <code>quasiquote</code>. Scala 3 has <code>inline</code> plus <code>Quotes</code>. Each tells the reader "you've stepped out of the main language now." Verum keeps you in one language. The compile-time sub-language <strong>is the same language</strong>, run at an earlier stage.</p>
<p>A <code>meta fn</code> is a function. A <code>quote { ... }</code> block is an expression of type <code>TokenStream</code>. An <code>@-attribute</code> invokes a meta function at a specific syntactic site. Splicing (<code>$expr</code>, <code>$[for ...]</code>) pastes a quoted value back. Staging is explicit (stage 0 = runtime, 1 = first meta, etc.).</p>
<p>The reason this matters for the rest of the language is that every attribute in Verum — <code>@derive</code>, <code>@verify</code>, <code>@repr</code>, <code>@specialize</code>, <code>@logic</code>, <code>@cfg</code>, <code>@intrinsic</code>, and more than forty others — is uniformly an invocation of something you could write yourself in a <code>meta fn</code>. There is no distinction between a language-level compile-time construct and a user-level one. You do not have to petition for a new attribute; you can write one.</p>
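<p>A minimal sketch of the pieces above in one place; the <code>Hygiene</code> method name is our assumption, while everything else uses constructs named in this section:</p>
<pre><code>meta fn make_getter(field: Text) -&gt; TokenStream using [Hygiene] {
    let name = Hygiene::fresh(field);   // hypothetical freshness call
    quote {
        fn $name(self) -&gt; Int { self.$field }
    }
}
// An @-attribute site invokes a meta fn like this one; @derive and the
// forty-plus built-ins are, by the post's claim, the same mechanism.
</code></pre>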
<p>Compare:</p>
<ul>
<li class="">Rust: proc-macros are a separate crate type with separate compilation rules and restrictions.</li>
<li class="">C++: templates + constexpr are two different languages glued together.</li>
<li class="">Lisp: <code>defmacro</code> is uniform but untyped.</li>
<li class="">Template Haskell: typed but infamously hostile to tooling.</li>
<li class="">Scala 3 macros: uniform-ish but dependent on the metaprogramming API.</li>
<li class="">Verum: one language, one type system, one set of rules, staged.</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="12-proof-carrying-code-and-the-distribution-model">12. Proof-carrying code and the distribution model<a href="https://verum-lang.org/blog/verum-examined#12-proof-carrying-code-and-the-distribution-model" class="hash-link" aria-label="Direct link to 12. Proof-carrying code and the distribution model" title="Direct link to 12. Proof-carrying code and the distribution model" translate="no">​</a></h2>
<p>A <code>cog</code> is Verum's package — not an archive of source, but an archive of VBC bytecode plus optional proof certificates, type metadata, and signatures. When a downstream consumer takes a dependency at <code>@verify(certified)</code>, they can validate the included proofs <strong>offline</strong>, without running the compiler over the source.</p>
<p>The precedent here is the Proof-Carrying Code tradition (Necula and Lee, 1996) and WebAssembly's validation step. Java's bytecode verifier checks type safety; WebAssembly's validator checks more. Verum goes further: the verifier can check refinement predicates up to <code>@verify(certified)</code>. Proof terms are exportable to Coq, Lean, Dedukti, and Metamath, so a paranoid consumer can re-validate against an unrelated tool.</p>
<p>This is not theatre. Supply-chain attacks on code registries are the main way modern software is compromised; the question "does this package do what it says?" is no longer hypothetical. A cog's certificate cannot rule out every attack, but it raises the lower bound of what a consumer knows about the code they just linked against.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="13-one-ir-two-executions">13. One IR, two executions<a href="https://verum-lang.org/blog/verum-examined#13-one-ir-two-executions" class="hash-link" aria-label="Direct link to 13. One IR, two executions" title="Direct link to 13. One IR, two executions" translate="no">​</a></h2>
<p>Verum is <strong>VBC-first</strong>. Every program compiles to VBC (Verum Bytecode); VBC is either interpreted directly or lowered through LLVM to native code ahead of time (with MLIR handling <code>@device(gpu)</code> kernels). There is no JIT tier and no second path from source to execution.</p>
<p>The consequence is architectural rather than performance-headline-worthy:</p>
<ul>
<li class=""><code>verum run</code> starts in milliseconds because it skips native codegen. Every VBC opcode has a direct interpreter handler; the primary plus extended opcode tables together approach the scale of a real machine's instruction set.</li>
<li class=""><code>verum build --release</code> runs the same front end, the same type checker, the same CBGR analysis, and only differs in the final lowering step.</li>
<li class="">Behaviour under the two paths is the same. A test that passes under the interpreter passes under AOT; a panic reproduces on both; a verification obligation discharges once.</li>
<li class="">Tooling — LSP, DAP, REPL, Playbook — sits on the same pipeline. There is no "dev mode semantics vs release mode semantics." Python grew that and now spends a significant chunk of its design energy managing the gap.</li>
</ul>
<p>Languages with this property: BEAM (Erlang/Elixir), the JVM, the CLR, LuaJIT. Languages without it: C, C++, Rust (separate rustdoc/test/compile paths with differences), Swift, Go. Verum is firmly in the VBC-first camp, which is unusual for a systems language targeting native.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="14-the-llm-era-angle--why-this-language-matters-now">14. The LLM-era angle — why this language matters now<a href="https://verum-lang.org/blog/verum-examined#14-the-llm-era-angle--why-this-language-matters-now" class="hash-link" aria-label="Direct link to 14. The LLM-era angle — why this language matters now" title="Direct link to 14. The LLM-era angle — why this language matters now" translate="no">​</a></h2>
<p>Here is where we return to the question at the top.</p>
<p>For roughly forty years a programming language's job was to help a human write software. Documentation, naming conventions, linters, tests, and type systems were all optimised for a specific reader: another human, probably tired, probably catching up on context. The human was the bottleneck, so the language should help the human.</p>
<p>That bottleneck has moved. In 2026, a growing percentage of production code is not written by the human holding the keyboard but by a language model they are prompting. The human reads, approves, and merges. Sometimes they don't read at all, and increasingly the tempo of work depends on their not having to.</p>
<p>This shift changes what a programming language is actually for in three ways, and each one is what Verum was built around:</p>
<p><strong>First, the spec moves from docstrings to types.</strong> An LLM that reads <code>/// Returns the first index where xs[i] == key; returns -1 if missing.</code> treats that English sentence as a suggestion. An LLM that reads <code>where ensures result.is_some() =&gt; xs[result.unwrap()] == *key</code> treats the refinement as a verifier it will be graded against. Types-as-specification is not new — Hoare's preconditions date to 1969 — but making it ergonomic enough that every function carries one was always the blocker. Semantic honesty plus gradual verification is how Verum attempts to break that blocker.</p>
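<p>Written out, the contrast above looks like this: the body is the ordinary linear scan, and the <code>ensures</code> clause is exactly the one quoted.</p>
<pre><code>fn find(xs: &amp;List&lt;Int&gt;, key: &amp;Int) -&gt; Option&lt;Int&gt;
    where ensures result.is_some() =&gt; xs[result.unwrap()] == *key
{
    for i in 0..xs.len() {
        if xs[i] == *key { return Some(i) }
    }
    None
}
</code></pre>
<p>The English docstring could drift from the body without complaint; the <code>ensures</code> clause cannot.</p>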
<p><strong>Second, the cost of hidden behaviour is asymmetric.</strong> When a human writes a function, they carry the context in their head: <em>this runs inside a tokio runtime, so a spawn here is fine; this module has an implicit logger, so <code>log::info!</code> works</em>. An LLM that has read ten million Rust projects has statistical knowledge of those conventions but does not carry <em>this</em> project's context. If the language lies — an ambient global, a hidden allocation, a coercion under a rarely-triggered flag — the LLM's training distribution will guess wrong, confidently. <code>using [...]</code> is boring when you write it by hand. It's load-bearing when a model writes it for you.</p>
<p><strong>Third, verification becomes the social contract.</strong> Before, an open-source library's reputation was built on its maintainer's judgment. That is still true, but many eyes now include non-human ones, and the fraction of code that is machine-transcribed from another machine's output will keep rising. The honest answer is not "trust the AI" or "never trust the AI." It is "make the evidence machine-checkable." Proof-carrying cogs, <code>@verify(certified)</code>, tactic scripts, exportable proof terms to Lean and Coq — these are instruments for a world where not every reader of your code is a person, and not every writer is either.</p>
<p>None of this is to say Verum "solves the LLM problem." It is to say that the language-design decisions Verum has been making for the last three years — explicit context, refinement types, gradual verification, semantic honesty, a single stable IR, proof-carrying distribution — happen to line up with what the current and next few years of software will need. They would have been defensible features in 2021. They are close to necessary features in 2026.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="15-trade-offs-verum-actually-makes">15. Trade-offs Verum actually makes<a href="https://verum-lang.org/blog/verum-examined#15-trade-offs-verum-actually-makes" class="hash-link" aria-label="Direct link to 15. Trade-offs Verum actually makes" title="Direct link to 15. Trade-offs Verum actually makes" translate="no">​</a></h2>
<p>A blog post about a language that does not list its costs is a marketing document. These are the real costs.</p>
<ul>
<li class=""><strong>Compile times.</strong> Refinement-type SMT discharge costs real seconds per function in hard cases. The incremental compilation cache, per-obligation fingerprinting, and the <code>@verify(fast)</code> strategy exist because this is a real tax. It is not gone; it is amortised.</li>
<li class=""><strong>Syntax weight.</strong> <code>using [...]</code>, <code>where ensures ...</code>, <code>{ self &gt; 0 }</code> are all more characters than their absent Rust equivalents. The tradeoff for explicitness is, always, verbosity. Verum bets the verbosity is worth it.</li>
<li class=""><strong>Learning curve.</strong> The refinement-type + dependent-type + cubical stack is academically heavy. You do not need any of it to ship a CLI — <code>@verify(runtime)</code> on plain <code>Int</code> and <code>Text</code> is a complete program — but the full surface is larger than Go's.</li>
<li class=""><strong>Ecosystem age.</strong> Verum's registry is new; the package count is measured in dozens, not tens of thousands. Using Verum for a problem that is two <code>cargo add</code> calls away in Rust today is a worse trade than Rust. That changes only with time and adoption.</li>
<li class=""><strong>Async colouring.</strong> <code>async fn</code> is a different type from <code>fn</code>. This is a language-design cost Koka and Scala have partially avoided with effect systems; Verum does not. The choice was deliberate — an explicit <code>async</code> colour is more honest than an implicit one — but it is a real cost.</li>
<li class=""><strong>LLVM dependency.</strong> AOT native builds require LLVM 21 (and MLIR for GPU targets). That is a several-hundred-megabyte build dependency. Distributions that cannot tolerate it can ship the interpreter only, but lose native speed.</li>
</ul>
<p>None of these are show-stoppers. All of them deserve to be named.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="16-closing">16. Closing<a href="https://verum-lang.org/blog/verum-examined#16-closing" class="hash-link" aria-label="Direct link to 16. Closing" title="Direct link to 16. Closing" translate="no">​</a></h2>
<p>The shortest honest description of Verum is this: it takes refinement types from Liquid Haskell, gradual verification from SPARK, a three-tier memory model descended from CBGR and Pony's capability ideas, a capability-based context system in the place where other languages grew algebraic effects, a dependent-type layer with cubical HoTT support, a single bytecode IR that runs both the interpreter and the AOT backend, a unified per-task execution environment that merges memory, capabilities, errors, and concurrency into one structure, OTP-style supervision in the language runtime, and a standard library that is itself written in Verum with no libc / <code>pthread</code> / Rust-std dependency. It wires all of that together under one rule — semantic honesty — and refuses to include features that break the rule.</p>
<p>The pitch is not that any one of these ideas is new. They are not. The pitch is that they have never been combined in a production systems language whose surface a Rust or Swift programmer could read comfortably, and that the current moment — where software increasingly travels through a language model before it reaches production — is the moment where having them combined actually pays.</p>
<p>Whether the bet is right will be settled by practice, not by essays. The language is open, documented end-to-end, and instrumented. If the questions raised above sound like yours, the easiest next thing is to read the <a class="" href="https://verum-lang.org/docs/getting-started/tour">language tour</a>, pick one example, and try to break it. We built the language expecting to be argued with.</p>
<p>— The Verum team</p>]]></content>
        <author>
            <name>Verum Team</name>
            <uri>https://github.com/verum-lang</uri>
        </author>
        <category label="design" term="design"/>
        <category label="philosophy" term="philosophy"/>
        <category label="verification" term="verification"/>
        <category label="llm" term="llm"/>
        <category label="types" term="types"/>
        <category label="memory" term="memory"/>
    </entry>
</feed>