
A Type-Theoretic Perspective on the DPLL Algorithm

id : a-type-theoretic-perspective-on-the-dpll-algorithm
types : algorithm

The Type-Theoretic DPLL (TT-DPLL) algorithm is an approach to determining the inhabitation of a type in the classically typed lambda calculus. It is derived from the standard DPLL algorithm for propositional satisfiability solving, reinterpreting DPLL's operations in terms of type construction and context manipulation by leveraging the Curry-Howard correspondence, where propositions are types and proofs are terms. The goal is to determine whether any context allows construction of a term inhabiting a given type.

This algorithm applies to types structured similarly to Conjunctive Normal Form (CNF) formulae: a product (conjunction) of sums (disjunctions) of "literal types." A literal type is either an atomic type (representing a propositional variable) or its negation (e.g., P \rightarrow \bot, where \bot is the empty type).

How it Works

The algorithm systematically explores the space of possible assumptions about atomic types to construct a term inhabiting the goal type. It operates through a recursive process:

Input

  • A goal type G_0 structured as a product of sums of literal types (e.g., (P \lor \neg Q) \land (R \lor Q) \equiv (P + (Q \rightarrow \bot)) \times (R + Q)).
  • An initial typing context \Gamma_0 (usually empty).

Output

  • A term t and a context \Delta such that \Delta \vdash t : G_0, indicating G_0 is inhabited. The typing context \Delta will contain the necessary assumptions (the "satisfying assignment").
  • Or, an indication that G_0 is uninhabitable (UNSAT), meaning \Gamma_0 \vdash G_0 \leftrightarrow \bot, possibly together with a term t : G_0 \rightarrow \bot.

Process

The core of the algorithm involves constructing a term for the current goal type G within the current context \Gamma. This involves propagation (simplifying the type based on context) and decisions (extending the context).

  1. Initialization:

    • Set current context \Gamma = \Gamma_0.
    • Set current goal type G = G_0.
    • Maintain a list/stack of decision points for backtracking. A decision involves assuming a term of type A (adding a:A to \Gamma) or its negation (adding \text{na}: A \rightarrow \bot to \Gamma).
  2. Propagation Phase (Type Simplification & Forced Moves):
    Repeatedly apply the following rules to simplify G or extend \Gamma based on logical consequence, until no more such simplifications can be made:

    • Product Consequence (Conjunct Elimination):
      If G is of the form T_1 \times T_2 \times \dots \times T_n:
      For each T_i, if \Gamma \vdash t_i : T_i can be readily established (e.g., T_i is in \Gamma, or T_i = \top (the unit type), or T_i is an atomic type A and a:A \in \Gamma), then T_i is considered satisfied. The goal effectively reduces to the product of the remaining unsatisfied T_j.
      If all T_i are satisfied, G becomes \top.
    • Sum Consequence (Disjunct Elimination / Unit Propagation Analogue):
      If a component T_i of G (from a product) or G itself is a sum type S = L_1 + L_2 + \dots + L_k:
      If for every literal type L_j except one (say L_m), its negation is provable in \Gamma (i.e., \Gamma \vdash proof_{\neg L_j} : L_j \rightarrow \bot), then to inhabit S one must inhabit L_m. The goal to prove S effectively becomes a goal to prove L_m.
      • This step uses [[mention:Beta Reduction|beta-reduction]] implicitly: if L_j = A and na: A \rightarrow \bot \in \Gamma, attempting to satisfy S via L_j by providing a':A would lead to na(a'):\bot.
    • Contextual Derivations & [[mention:Beta Reduction|Beta Reduction]]:
      If f: A \rightarrow B \in \Gamma and a:A \in \Gamma, then f(a):B is derivable (after beta-reduction). This new derivation f(a):B can be added to \Gamma (or used on-the-fly) for further propagation. If B = \bot, a conflict is found.
  3. Check State:

    • Success (SAT): If the current goal G simplifies to \top (all parts of the original G_0 are satisfied by \Gamma), then G_0 is inhabited. Return the current \Gamma (or the relevant parts forming the "assignment") and the constructed term.
    • Conflict (UNSAT sub-branch): If \Gamma \vdash \bot becomes derivable (e.g., a:A \in \Gamma and na:A \rightarrow \bot \in \Gamma, leading to na(a):\bot), or if the goal G simplifies to \bot:
      • A conflict is detected. Backtrack:
        • Revert \Gamma to its state before the most recent un-flipped decision.
        • Flip the decision (e.g., if a:A was assumed, now remove a:A and try assuming na:A \rightarrow \bot; mark this decision point as flipped).
        • If the decision was na:A \rightarrow \bot and it is being flipped, this path is exhausted.
        • If there are no decision points left to flip (all branches from \Gamma_0 lead to conflict), then G_0 is uninhabitable. Return UNSAT.
      • Continue from step 2 (Propagation Phase) with the new context.
  4. Decision Phase (Branching):
    If no propagation rule applies, and the state is not success or conflict:

    • Select an atomic type A that is part of the current goal G for which neither A nor A \rightarrow \bot is "decided" in \Gamma for the current branch.
      • Heuristic (Pure Literal Analogue): If an atomic type A appears "purely positively" in the remaining goal (i.e., not as A \rightarrow \bot, nor in a way that readily leads to \bot if A is inhabited), one might heuristically prefer to first try assuming a:A.
    • Record A and the choice (e.g., assuming A) as a new decision point.
    • Branch 1 (Assume A true):
      • Create \Gamma' = \Gamma, a:A (where a is a fresh name for the assumption).
      • Recursively continue the process from Step 2 (Propagation Phase) with context \Gamma' and goal G.
    • Branch 2 (If Branch 1 led to conflict and backtracking to this point):
      • The decision to assume a:A is already marked as flipped (or this is the second attempt after a:A failed).
      • Create \Gamma'' = \Gamma, na:A \rightarrow \bot (where na is fresh).
      • Recursively continue the process from Step 2 (Propagation Phase) with context \Gamma'' and goal G.
    • If both branches from this decision point eventually lead to conflict and backtrack here, then this decision point itself leads to UNSAT. Backtrack further.
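As a concrete sketch, the loop above can be written in Python, encoding atomic types as positive integers and a literal type A \rightarrow \bot as the corresponding negative integer; the context is just the set of assumed literal types, and actual term construction is elided. All names here are illustrative, not from any library.

```python
# A heavily simplified TT-DPLL sketch. Atomic types are positive ints;
# -n encodes the literal type n -> bot. The context Gamma is a set of
# assumed literal types; proof terms themselves are not built.

def tt_dpll(goal, context=frozenset()):
    """goal: a product of sums, as a list of lists of literal types.
    Returns a satisfying context, or None if the type is uninhabitable."""
    # Product consequence: drop sum types already inhabited from context.
    goal = [s for s in goal if not any(lit in context for lit in s)]
    # Propagation phase: sum consequence (the unit-propagation analogue).
    while True:
        unit = next((s for s in goal
                     if sum(-lit not in context for lit in s) == 1), None)
        if unit is None:
            break
        forced = next(lit for lit in unit if -lit not in context)
        context = context | {forced}                 # forced assumption
        goal = [s for s in goal if forced not in s]
    if not goal:
        return context                  # goal reduced to top: inhabited
    if any(all(-lit in context for lit in s) for s in goal):
        return None                     # some sum reduced to bot: conflict
    # Decision phase: branch on an undecided atomic type.
    atom = next(lit for s in goal for lit in s
                if lit not in context and -lit not in context)
    for decision in (atom, -atom):      # assume a:A, then na:A -> bot
        result = tt_dpll(goal, context | {decision})
        if result is not None:
            return result
    return None                         # both branches conflict: backtrack

# (P + (Q -> bot)) x (R + Q), with P=1, Q=2, R=3:
satisfying_context = tt_dpll([[1, -2], [3, 2]])
```

On this input the search finds the context {1, 3}, i.e. the assumptions p:P and r:R, which plays the role of \Delta in the output description above.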

Relevance to Classical DPLL

  • Propositional Variables \leftrightarrow Atomic Types: P, Q, \dots
  • Literals \leftrightarrow Literal Types: P or \neg P \leftrightarrow A or A \rightarrow \bot.
  • Clauses \leftrightarrow Sum Types: (L_1 \lor L_2) \leftrightarrow L_1' + L_2'.
  • CNF Formula \leftrightarrow Product of Sum Types: (C_1 \land C_2) \leftrightarrow C_1' \times C_2'.
  • Assignment \leftrightarrow Assumptions in Context \Gamma: \{P=\text{true}, Q=\text{false}\} \leftrightarrow \{p:P,\ nq:Q \rightarrow \bot\} \subseteq \Gamma.
  • Satisfiability \leftrightarrow Inhabitability: finding a satisfying assignment \leftrightarrow finding a context \Gamma and term t such that \Gamma \vdash t:G_0.
  • Unit Propagation \leftrightarrow Sum Consequence / Disjunct Elimination: if a sum type A+B must be inhabited and we know A \rightarrow \bot, we must inhabit B.
  • Boolean Constraint Propagation (BCP): The Propagation Phase aims to achieve a similar effect to BCP, deriving as many consequences as possible from current decisions before making a new one.
  • Backtracking: Handled by undoing context additions and trying alternative branches for decisions.

This type-theoretic interpretation provides a constructive view of satisfiability. A "satisfying assignment" corresponds to the set of assumptions needed to construct a proof term for the type representing the formula. The process of [[mention:Beta Reduction|beta-reduction]] is crucial for evaluating the consequences of assumptions, especially when functions (like negations A \rightarrow \bot) are applied.
This framework can be naturally extended to more expressive logics by using richer type theories (e.g., Martin-Löf Type Theory for constructive predicate logic).

Alloy* A General Purpose Higher-Order Relational Constraint Solver

id : alloy-a-general-purpose-higher-order-relational-constraint-solver
types : source

Paper Link: Here

Summary

Summary generated by Gemini 2.5 Pro Preview 03-25:

Comprehensive overview of the paper "Alloy*: A General-Purpose Higher-Order Relational Constraint Solver":

Core Problem:
The paper addresses the limitation of most constraint solvers, which are primarily restricted to first-order logic. While extremely useful, they struggle with problems inherently requiring higher-order quantification (quantifying over functions or relations), such as program synthesis or finding optimal structures (like a maximum clique in a graph). These tasks often involve patterns like "exists a structure P such that for all inputs I, property S holds" (∃∀). Currently, such problems are typically solved using domain-specific algorithms that embed a first-order solver (like SAT or SMT) within a loop, often implementing strategies like CounterExample-Guided Inductive Synthesis (CEGIS). This approach lacks generality and requires ad-hoc implementations for each new higher-order problem.

Proposed Solution: Alloy*
The paper introduces Alloy*, a novel constraint solver designed to handle higher-order relational logic directly. It extends the existing first-order bounded relational model finder Alloy (and its backend engine Kodkod) to support higher-order quantifiers natively within the Alloy language.

Key Concepts and Mechanisms:

  1. Higher-Order Quantification: Alloy* allows higher-order quantifiers (quantifying over relations, which represent sets of tuples or functions in Alloy) to appear anywhere the Alloy syntax permits.
  2. Generalized CEGIS: Instead of users implementing CEGIS externally, Alloy* internalizes a general-purpose CEGIS strategy to solve formulas involving the crucial ∃∀ quantifier pattern, which is common in higher-order problems.
  3. Formula Decomposition: Alloy* analyzes an input formula (potentially with nested higher-order quantifiers) and decomposes it into a tree of subproblems. Each node in this tree represents either:
    • A first-order formula (solved directly by Kodkod).
    • A disjunction involving at least one higher-order part (solved by trying disjuncts).
    • An ∃∀ higher-order pattern (solved using the internal CEGIS loop).
  4. CEGIS Loop in Alloy*: For ∃∀ formulas (∃p ∀e. S(p, e)):
    • Search: Find a candidate instance for p satisfying S(p, e) for some e.
    • Verify: Attempt to find a counterexample e such that S(p, e) fails for the candidate p. This involves solving ∃e. ¬S(p, e).
    • Induce: If a counterexample e_cex is found, add the constraint S(p, e_cex) to the search step and repeat. If no counterexample is found, the candidate p is a valid solution.
  5. Leveraging First-Order Solver Features: The framework relies on the underlying first-order solver (Kodkod) supporting features crucial for efficient CEGIS: partial instances (fixing parts of the model), incremental solving (adding constraints without restarting), atom-to-expression conversion (encoding counterexamples as constraints), and skolemization.
  6. Optimizations: The authors introduce optimizations like "Quantifier Domain Constraints" (using a when clause to prune the search space in CEGIS) and "Strictly First-Order Increments" (ensuring counterexample constraints are first-order to leverage incremental SAT solving).

Evaluation:
Alloy* was evaluated on two main fronts:

  1. Classical Graph Algorithms: Problems like max-clique, max-cut, etc., which have natural higher-order formulations. Alloy* scaled reasonably well, handling graphs up to 50 nodes for most problems within modest time limits, demonstrating feasibility for modeling and bounded verification.
  2. Program Synthesis (SyGuS): Alloy* was used to encode and solve a large subset of the Syntax-Guided Synthesis competition benchmarks. It performed competitively, scaling better than the reference synthesizers provided with the competition and even rivaling Sketch (a state-of-the-art synthesizer) on some benchmarks, despite being a general-purpose tool.

Contributions:

  • A framework for extending first-order solvers to handle higher-order logic generally.
  • A concrete implementation, Alloy*, extending Alloy/Kodkod.
  • Demonstration of its feasibility and competitive performance across different domains (graph algorithms, program synthesis) via case studies and benchmarks.
  • The identification and implementation of a generalized CEGIS strategy within a constraint solver.
  • A publicly released implementation for broader use.

Conclusion:
The paper argues that just as factoring out first-order solving into reusable SAT/SMT solvers advanced the field, factoring out higher-order solving into general-purpose tools like Alloy* could bring similar benefits. It presents Alloy* as the first general-purpose, bounded, sound, and complete (for the given bounds) higher-order relational constraint solver, suggesting that the time is ripe to treat higher-order constraint solving as a reusable component rather than requiring ad-hoc solutions for each problem.

Assignment

id : assignment
types : structure

In logic, an assignment (or valuation, interpretation) is a function that maps variables within a given logical formula to values.

For example, in propositional logic, propositional variables are mapped to truth values of true or false.

Assignments can be used to evaluate the truth value of a complex formula. A satisfying assignment is one that makes a formula evaluate to true.
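As a small illustration, a propositional formula can be evaluated under an assignment like so (the nested-tuple representation is an illustrative choice, not a standard one):

```python
# Evaluate a propositional formula under an assignment mapping
# variable names to truth values.

def evaluate(formula, assignment):
    op = formula[0]
    if op == 'var':                   # look the variable up in the assignment
        return assignment[formula[1]]
    if op == 'not':
        return not evaluate(formula[1], assignment)
    if op == 'and':
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == 'or':
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown operator: {op}")

# P and (not Q):
f = ('and', ('var', 'P'), ('not', ('var', 'Q')))
print(evaluate(f, {'P': True, 'Q': False}))   # True: a satisfying assignment
print(evaluate(f, {'P': True, 'Q': True}))    # False
```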

Axiom

id : axiom
types : structure

An axiom is a rule of inference with zero premises, meaning it is assumed to be true because it follows from nothing.

For example:

\frac{}{\Gamma \vdash \text{True}}\text{true}

Read: True can be derived as a constant from any context or even no context.

Boolean

id : boolean
types : structure

A boolean is a type with two terms: true and false.

Brouwer-Heyting-Kolmogorov interpretation

id : brouwer-heyting-kolmogorov-interpretation
types : argument

The Brouwer-Heyting-Kolmogorov interpretation (source) is an "interpretation" of intuitionistic logic. It defines truth constructively, meaning a proposition is true only if we can construct a proof of it. It follows that both the law of the excluded middle and double negation elimination are rejected: the former claims that every mathematical proposition or its negation has a proof, and the latter claims that every consistent (i.e. not leading to a contradiction) mathematical statement holds. Basically, in the BHK interpretation, "proving" things is much more concrete, and therefore harder to do, but more useful.

This way of looking at truth and proof corresponds cleanly with dependent types, since they require the same amount of constructive evidence that the BHK interpretation demands. They naturally model quantified statements and logical implications as types.

This is in contrast to Tarskian semantics, championed by Alfred Tarski, which defines truth in terms of correspondence with a model. In other words, every proposition is true or false, regardless of our ability to prove it, as determined by the "model" (see Theory link) applied to the symbols used. This is the basis of classical logic, which explains why strict dependent types are unintuitive from this perspective.

Implementations of the BHK interpretation include Implicational Intuitionistic Propositional Logic and Intuitionistic Propositional Logic, but most commonly and foundationally, Martin-Löf Type Theory.

Call With Current Continuation (call/cc)

id : call-with-current-continuation-callcc
types : structure

The call-with-current-continuation (CWCC) operator, commonly abbreviated as call/cc, is a control flow operator found in some programming languages and type theories. It allows a program to capture its own execution state (the "rest of the computation" or "continuation") at a particular point and resume it later, potentially multiple times or not at all.

In pseudocode, CWCC has the following type.

  • call/cc : {A,B : Type} ((A -> B) -> A) -> A
    • Or logically, it is a proof of the following proposition: \text{call/cc} : \forall A, B.\ ((A \rightarrow B) \rightarrow A) \rightarrow A

When call/cc is invoked, it is given a function, let's call it handler_fn. call/cc then calls handler_fn, passing it a special function representing the current continuation, let's call this k.

  • handler_fn: A function that takes the current continuation k as its argument.
  • k (the continuation): A function that represents the entire rest of the computation from the point call/cc was called. If k is ever called with a value v, the computation immediately jumps back to where call/cc was invoked, and the call/cc expression itself evaluates to v. Any computation that was pending within handler_fn after k was invoked is discarded.
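Python has no native call/cc, but the escape behaviour just described (invoking k discards the pending work in handler_fn) can be approximated with exceptions. This sketch models only one-shot, upward continuations, not full call/cc; the names are invented for illustration.

```python
class _Escape(Exception):
    """Carries a value back to the call_cc call site."""
    def __init__(self, tag, value):
        super().__init__(value)
        self.tag, self.value = tag, value

def call_cc(handler_fn):
    tag = object()                      # unique to this call_cc invocation
    def k(value):
        raise _Escape(tag, value)       # abandon the rest of handler_fn
    try:
        return handler_fn(k)            # k never used: normal result
    except _Escape as e:
        if e.tag is not tag:            # belongs to an outer call_cc: re-raise
            raise
        return e.value                  # k(value): call_cc evaluates to value

print(call_cc(lambda k: 42))            # 42  (k unused)
print(call_cc(lambda k: k(7) + 1000))   # 7   (the "+ 1000" is discarded)
```

The per-invocation tag ensures that nested uses of call_cc each catch only their own continuation's escape.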

Type Theoretic Perspective

In type theory, particularly in systems aiming to incorporate classical logic via the Curry-Howard Correspondence, call/cc is often introduced with a type equivalent to Peirce's Law:

\forall A, B.\ ((A \rightarrow B) \rightarrow A) \rightarrow A

This type signature is powerful. It essentially states: "For any types A and B, if you can produce an A by assuming you have a function that can turn an A into a B (the continuation k: A \rightarrow B), then you can produce an A."

This allows the derivation of classical logical principles that are not typically provable in intuitionistic systems. The empty type (\bot), representing falsehood, plays a crucial role when B is instantiated as \bot.

A negation \neg P is typically defined as P \rightarrow \bot.

Correspondence to Double Negation Elimination

[[mention:Double Negation Elimination]] (DNE) states that \neg \neg P \rightarrow P. In type theory, this is ((P \rightarrow \bot) \rightarrow \bot) \rightarrow P. call/cc allows us to construct a term of this type.

Derivation Sketch:

  1. Goal: Construct a term of type ((P \rightarrow \bot) \rightarrow \bot) \rightarrow P.
  2. Assume: We are given proof_{\neg\neg P} : (P \rightarrow \bot) \rightarrow \bot.
  3. Use call/cc: We instantiate call/cc with A=P and B=\bot. So call/cc has the type ((P \rightarrow \bot) \rightarrow P) \rightarrow P.
  4. Argument for call/cc: call/cc expects a function of type (P \rightarrow \bot) \rightarrow P. Let's call this function g.
    • g takes an argument k_P : P \rightarrow \bot. This k_P is the continuation captured by call/cc. If k_P is called with a value of type P, the call/cc expression will evaluate to that value, but the context implies k_P leads to \bot.
    • Inside g, we have:
      • proof_{\neg\neg P} : (P \rightarrow \bot) \rightarrow \bot (our initial assumption)
      • k_P : P \rightarrow \bot (the continuation)
    • We can apply proof_{\neg\neg P} to k_P: proof_{\neg\neg P}(k_P) yields a term of type \bot.
    • From \bot, we can derive any type, including P. This is done using the principle of ex falso quodlibet, often embodied by a term like \text{absurd}_P : \bot \rightarrow P.
    • So, \text{absurd}_P(proof_{\neg\neg P}(k_P)) is a term of type P.
    • Thus, g = \lambda k_P : (P \rightarrow \bot).\ \text{absurd}_P(proof_{\neg\neg P}(k_P)). This function g has type (P \rightarrow \bot) \rightarrow P.
  5. Result: \text{call/cc}(g) is a term of type P.

The complete term for DNE is:
\lambda proof_{\neg\neg P} : ((P \rightarrow \bot) \rightarrow \bot).\ \text{call/cc}(\lambda k_P : (P \rightarrow \bot).\ \text{absurd}_P(proof_{\neg\neg P}(k_P)))

This demonstrates how call/cc provides a constructive path (in terms of term construction) to DNE.

Correspondence to Law of the Excluded Middle

The Law of the Excluded Middle (LEM) states P \lor \neg P. In type theory, using sum types (+) for disjunction, this is P + (P \rightarrow \bot).

Derivation Sketch:

  1. Goal: Construct a term for X \equiv P + (P \rightarrow \bot).
  2. Use call/cc: We instantiate call/cc with A=X and B=\bot. So call/cc has the type ((X \rightarrow \bot) \rightarrow X) \rightarrow X.
  3. Argument for call/cc: call/cc expects a function of type (X \rightarrow \bot) \rightarrow X. Let's call this function g.
    • g takes an argument k_X : X \rightarrow \bot. This k_X is the continuation. (k_X means "assuming P \lor \neg P leads to contradiction".)
    • Inside g, we want to produce X. We can try to produce the \neg P part of X, which is P \rightarrow \bot.
      • To construct P \rightarrow \bot, we assume a hypothetical p_{hypo} : P.
      • With p_{hypo}, we can form \text{inl}(p_{hypo}) : P + (P \rightarrow \bot). This is a term of type X.
      • Now apply the continuation k_X: k_X(\text{inl}(p_{hypo})) yields a term of type \bot.
      • So, the function \lambda p_{hypo} : P.\ k_X(\text{inl}(p_{hypo})) is a term of type P \rightarrow \bot. Let this be proof_{\neg P}.
    • Now we can construct the right side of the sum: \text{inr}(proof_{\neg P}) : P + (P \rightarrow \bot). This is a term of type X.
    • Thus, g = \lambda k_X : ((P + (P \rightarrow \bot)) \rightarrow \bot).\ \text{inr}(\lambda p_{hypo} : P.\ k_X(\text{inl}(p_{hypo}))). This function g has type ((P + (P \rightarrow \bot)) \rightarrow \bot) \rightarrow (P + (P \rightarrow \bot)).
  4. Result: \text{call/cc}(g) is a term of type P + (P \rightarrow \bot).

This shows that call/cc (which embodies Peirce's Law) is sufficient to prove LEM.

Role in Classical Type Theories

The call/cc operator is a key ingredient in forming a Classically Typed Lambda Calculus. By adding call/cc to an intuitionistic type theory, one gains the ability to prove classical tautologies like LEM and DNE, making the logic of the type theory classical.

Its inclusion allows for reasoning by contradiction and other classical proof techniques to be directly represented as programs (terms) within the type system.

CDCL Algorithm

id : cdcl-algorithm
types : algorithm

The CDCL algorithm is a satisfiability solver for propositional formulae.
It is an improvement on the DPLL algorithm that enables more efficient traversal of the search space via learned clauses, which has made CDCL the basis of most modern high-performance SAT solvers.

How it Works

Input

  • A propositional formula in CNF.

Output

  • A satisfying assignment if the formula is satisfiable, or a report of unsatisfiability otherwise.

Process
CDCL refines DPLL by incorporating several key techniques:

  1. Decision Making (Branching):

    • Similar to DPLL, if no direct implications (like unit propagation) can be made, the algorithm chooses an unassigned variable (the decision variable) and assigns it a truth value (e.g., True). This is a decision level.
    • Sophisticated heuristics (e.g., VSIDS - Variable State Independent Decaying Sum) are used to select decision variables that are likely to lead to conflicts or solutions quickly.
  2. Boolean Constraint Propagation (BCP) (Unit Propagation):

    • After a decision (or learning a new clause), BCP is performed. This is identical to unit propagation in DPLL: if a clause becomes a unit clause (all literals but one are false under the current assignment), the remaining literal must be assigned a value to make the clause true.
    • This process is repeated iteratively until no more unit clauses are found or a conflict is detected.
    • Efficient data structures (e.g., two-watched literals) are crucial for fast BCP.
  3. Conflict Analysis and Clause Learning:

    • If BCP leads to a conflict (i.e., a clause becomes false under the current assignment, e.g., (A ∨ B) where A is False and B is False), the algorithm does not simply backtrack to the most recent decision.
    • Instead, it analyzes the implication graph that led to the conflict to identify the assignments responsible.
    • From this analysis, a new clause, called a learned clause (or conflict clause), is generated. This clause is redundant (implied by the original formula) but prunes the search space by preventing the same conflicting assignment combination from being explored again.
    • The learned clause is typically an asserting clause: at some backtrack level, it will become a unit clause, forcing a variable assignment.
  4. Non-Chronological Backtracking (Backjumping):

    • When a conflict occurs and a conflict clause is learned, the algorithm backtracks.
    • Instead of backtracking to the immediately preceding decision level (chronological backtracking), CDCL backtracks to the most recent decision level that is actually involved in the conflict, as determined by the conflict analysis. This level is often much earlier in the decision stack, allowing the solver to jump over irrelevant parts of the search space.
    • The learned clause is designed to become unit at the backtrack level, guiding the search.
  5. Restarts:

    • Periodically, the solver may discard the current partial assignment and restart the search from the beginning, while keeping the learned clauses.
    • Restarts help to escape "stuck" regions of the search space and can improve overall performance, often guided by restart policies (e.g., Luby sequence).
  6. Literal Phase Heuristics:

    • When making a decision, heuristics can also decide which phase (True or False) to assign to the variable first. Some solvers remember the last satisfying phase (phase saving).

The algorithm terminates when one of the following conditions is met:

  • Satisfiable: If the decision process leads to an assignment where all original clauses are satisfied, the formula is satisfiable.
  • Unsatisfiable: If conflict analysis at decision level 0 (no decisions made, only propagations from original unit clauses or learned clauses) derives an empty clause (a conflict), the formula is unsatisfiable.
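A sketch of BCP (step 2) in Python, recording for each propagated literal the clause that forced it; these antecedents are the raw material for the implication graph that conflict analysis (step 3) walks when building a learned clause. Literals are integers, -n standing for the negation of n; real solvers use two-watched literals rather than this naive scan.

```python
# Naive unit propagation with antecedent tracking (illustrative only).

def bcp(clauses, assignment, antecedent):
    """Propagate until fixpoint. Returns the conflicting clause, or None."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                     # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return clause                # all literals false: conflict
            if len(unassigned) == 1:         # unit clause: forced assignment
                lit = unassigned[0]
                assignment.add(lit)
                antecedent[lit] = clause     # remember why lit was forced
                changed = True
    return None

clauses = [[-1, 2], [-2, 3], [-3, -1]]       # decision: assume literal 1
assignment, antecedent = {1}, {}
conflict = bcp(clauses, assignment, antecedent)
print(conflict)   # [-3, -1]: the chain 1 -> 2 -> 3 falsifies this clause
```

Conflict analysis would now walk the antecedents of the literals in the conflicting clause back toward the decision, yielding the learned clause.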

Relevance

CDCL algorithms are significantly more powerful than basic DPLL, especially on large, structured industrial instances. The combination of clause learning, non-chronological backtracking, and efficient BCP allows them to navigate vast search spaces effectively. They are the core engine in many SMT solvers and have applications in AI, formal verification, and other areas.

Classic Dependently Typed Lambda Calculus

id : classic-dependently-typed-lambda-calculus
types : structure

A Classic Dependently Typed Lambda Calculus (CDTLC) refers to a type theory that extends a dependently-typed lambda calculus with some axiom that embodies the Law of the Excluded Middle, such as the call/cc operator. It can be seen either as a classical extension of a dependently typed lambda calculus, or as a dependently typed extension of a classically typed lambda calculus.

When classical axioms are added, DTLC (now CDTLC) becomes equivalent, in a sense, to first-order logic via the Curry-Howard correspondence. Under this equivalence, propositions in classical first-order logic correspond to types in CDTLC, and classical proofs correspond to terms in CDTLC.

This allows for the formalization of classical mathematics and reasoning within a typed lambda calculus framework, potentially leveraging the computational interpretations of proofs, though the constructive nature of evidence found in intuitionistic systems cannot be expected universally.

Classically Typed Lambda Calculus

id : classically-typed-lambda-calculus
types : structure

The classically typed lambda calculus (CTLC) is the intuitionistically typed lambda calculus together with an operator called call/cc or call-with-current-continuation which enables control flow manipulation and also just happens to correspond to the law of the excluded middle.

Due to the addition of LEM, CTLC directly corresponds to propositional logic via the Curry-Howard Correspondence.

Computable Function

id : computable-function
types : structure

A computable function is a function for which there exists an algorithm or a model of computation (like a Turing machine or the lambda calculus) that can determine the output for any given valid input in a finite number of steps.

Conjunctive Normal Form (CNF)

id : conjunctive-normal-form-cnf
types : structure

Conjunctive normal form is a specific constraint on the structure of a logical formula, often specifically referring to a propositional formula.

A formula in CNF is a conjunction of clauses, where each clause is a disjunction of propositional literals.

Context-Free Grammar

id : context-free-grammar
types : structure

A context-free grammar is a formal grammar whose classification function is restricted to the set of programs simulatable by a finite state machine plus a stack, i.e. a (nondeterministic) pushdown automaton.

Continuation Passing Style

id : continuation-passing-style
types : structure

Continuation-passing style (CPS) or the Continuation-Passing Style Transformation (CPST) is an alternative way to write expressions that allows precise management of what is done with the result of a function evaluation. It essentially adds an extra parameter to each function which the function then calls with whatever it wanted to return originally. The calling function then determines what gets done with the result.

A small example:

Direct Style:

def add(x, y):
  return x + y

result = add(5, 3)
print(result)  # Output: 8

Continuation-Passing Style:

def add_cps(x, y, continuation):
  continuation(x + y)

def print_result(result):
  print(result)

add_cps(5, 3, print_result)  # Output: 8

This transformation can even be done multiple times to the same program. Here's the CPST done twice:

def add_cps(x, y, continuation_inner):
  result = x + y
  continuation_inner(result)

# 2-CPS version of add_cps: the value flows through an intermediate
# continuation before reaching the outer one.
def add_2cps(x, y, continuation_inner_arg, continuation_outer):
  def intermediate_continuation(value_from_add_cps):
    continuation_outer(value_from_add_cps)
  add_cps(x, y, intermediate_continuation)

def print_result_2cps(value):
  print(value)

def original_inner_continuation(value):
  # Never called in this setup: add_2cps routes the value through
  # intermediate_continuation instead.
  print(f"(inner continuation received {value})")

add_2cps(5, 3, original_inner_continuation, print_result_2cps)  # Output: 8

This double version doesn't really seem that useful in practice though.

Continuation Passing Style Transformation (the single version) is useful in compilers for certain kinds of optimization, as well as for providing the conceptual basis for continuations and the [[mention: call with current continuation]] operator, paving the way to formalizing classical theorems in intuitionistic type theory.

Fun Fact: When modeling a programming language as a category where the types are objects and the morphisms are terms, the CPST corresponds to a specific case of the Yoneda embedding.

Curry-Howard Correspondence

id : curry-howard-correspondence
types : argument

The Curry-Howard correspondence (also known as the Curry-Howard isomorphism or the proofs-as-programs paradigm) proposes some kind of relation between logical frameworks and type theories.

It establishes a direct correspondence between:

  • Propositions and types: A logical proposition corresponds to a type in some type theory.
  • Proofs and programs: A proof of a proposition corresponds to a program of the corresponding type.
  • Proof simplification and program evaluation: The process of simplifying a proof (cut elimination) corresponds to the process of evaluating a program (\beta-reduction).
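As a small illustration, here is a hypothetical Python sketch (the names `weaken` and `chain` are invented, and `typing` annotations play the role of propositions): a term inhabiting a function type is simultaneously a program and a proof of the corresponding implication.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition A -> (B -> A); a program of this type is its proof.
def weaken(a: A) -> Callable[[B], A]:
    return lambda b: a

# Proposition (A -> B) -> ((B -> C) -> (A -> C)): implication chains.
# The proof is just function composition.
def chain(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    return lambda g: lambda a: g(f(a))
```

Evaluating these programs (β-reducing the terms) corresponds to simplifying the proofs they encode.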

Data Type

id : data-type
types : structure

A data type is a type used to classify different kinds of data, as opposed to representing logical statements.

Examples include Boolean, List, and inductively-defined term types such as those representing Lambda Calculus expressions.

DPLL Algorithm

id : dpll-algorithm
types : algorithm

The DPLL algorithm is an algorithm for propositional satisfiability solving. It is a backtracking-based search algorithm designed to determine the satisfiability of a propositional formula in conjunctive normal form (CNF).

How it Works

The algorithm systematically explores the space of all possible combinations of truth assignments for each of the variables in the formula. It operates through a recursive/iterative process described as follows:

Input

  • A propositional formula in conjunctive normal form (CNF).

Output

  • SAT, together with a satisfying assignment of truth values to the variables, or
  • UNSAT, indicating that no satisfying assignment exists.

Process

  • Initialize an empty list of assignments.
    • Assignments are distinguished as either decision or propagation assignments
  • Represent the CNF formula as a set of clauses. Clauses are sets of literals.
    • Literals are variables like A or their negations like \neg A.

    • Clauses represent disjunctions (ORs): e.g., A \lor \neg B \lor C \equiv \{A, \neg B, C\}.

    • The formula is a conjunction (AND) of these clauses: e.g., (A \lor \neg B) \land (C \lor B) \equiv \{\{A, \neg B\}, \{C, B\}\}.

    • The formula data structure must support efficient modifications and allow changes to be rolled back, e.g. by cloning for recursive calls or by maintaining a change log.

    • Simplify & Check UNSAT

      • Remove every clause containing a literal assigned true (the clause is satisfied).
      • Remove every literal assigned false from the clauses that contain it. If this empties a clause, a conflict has occurred: roll back to the most recent decision assignment and flip it. If no decisions are left to reverse, return UNSAT.
      • If all clauses have been removed, return SAT with the current assignment.
  • Repeatedly do one of the following, whichever one applies first:
    • Unassigned Atom Elimination (unit propagation): If there is a clause with only one atom that is not already assigned in the assignment:
      • Add to the assignment list whatever is needed to make that atom (and thus the clause) true. Simplify & Check UNSAT.
    • Non-Conflicting Atom Elimination (pure literal elimination): If a variable appears only as P or only as \neg P:
      • Assign the variable true if it appears as P everywhere, false otherwise. Simplify & Check UNSAT.
    • Branch:
      • Pick an unassigned variable and add a decision assignment for it to the assignment list. Simplify & Check UNSAT.
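The steps above can be sketched in Python. This is a toy implementation, not an efficient solver: clauses are sets of non-zero integers with -n standing for \neg n, and rollback is handled by passing fresh copies to recursive calls rather than a change log.

```python
def dpll(clauses, assignment=None):
    """Decide satisfiability of a CNF formula.

    clauses: list of sets of non-zero ints; -n denotes the negation of n.
    Returns a satisfying assignment (dict var -> bool) or None for UNSAT.
    """
    if assignment is None:
        assignment = {}
    if any(not c for c in clauses):
        return None                # an empty clause is unsatisfiable

    def simplify(cs, lit):
        out = []
        for c in cs:
            if lit in c:
                continue           # clause satisfied: drop it
            if -lit in c:
                c = c - {-lit}     # literal falsified: shrink the clause
                if not c:
                    return None    # empty clause: conflict
            out.append(c)
        return out

    def assign(cs, lit):
        new = dict(assignment)
        new[abs(lit)] = lit > 0
        return simplify(cs, lit), new

    if not clauses:
        return assignment          # all clauses satisfied: SAT
    # Unit propagation: a one-literal clause forces that literal.
    for c in clauses:
        if len(c) == 1:
            (lit,) = c
            rest, new = assign(clauses, lit)
            return dpll(rest, new) if rest is not None else None
    # Pure literal elimination: a literal whose negation never occurs.
    lits = {l for c in clauses for l in c}
    for lit in lits:
        if -lit not in lits:
            rest, new = assign(clauses, lit)
            return dpll(rest, new)
    # Branch on an arbitrary literal; backtrack by trying its negation.
    lit = next(iter(next(iter(clauses))))
    for choice in (lit, -lit):
        rest, new = assign(clauses, choice)
        if rest is not None:
            result = dpll(rest, new)
            if result is not None:
                return result
    return None
```

For example, dpll([{1, -2}, {3, 2}]) returns a (possibly partial) model, while dpll([{1}, {-1}]) returns None.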

Relevance

DPLL has since been surpassed by CDCL, but it is a good basis for learning about SAT-solving algorithms, and since CDCL builds on DPLL to some degree, it's a good toy model.

Efficient SMT Solving for Bit-Vectors and the Extensional Theory of Arrays

id : efficient-smt-solving-for-bit-vectors-and-the-extensional-theory-of-arrays
types : source

Paper Link: Here

Summary

Summary generated by Gemini 2.5 Pro Preview 05-06:

The document focuses on "Efficient SMT Solving for Bit-Vectors and the Extensional Theory of Arrays" (page 1). It details both the theoretical underpinnings and practical implementation aspects, particularly within the SMT solver Boolector.

I. Theory of Arrays (primarily from Chapter 2)

  1. Definition and Axioms (page 11, Section 2.2):

    • The document discusses the extensional theory of arrays, which allows reasoning about array elements and entire arrays (array equality).
    • The signature ΣA includes read, write, and =.
    • read(a,i) returns the value of array a at index i.
    • write(a,i,e) returns a new array, identical to a except at index i where it holds element e.
    • McCarthy's Axioms are fundamental:
      • (A1) Congruence: i = j ⇒ read(a,i) = read(a,j)
      • (A2) Read-over-write (same index): i = j ⇒ read(write(a,i,e),j) = e
      • (A3) Read-over-write (different index): i ≠ j ⇒ read(write(a,i,e),j) = read(a,j)
    • Axiom of Extensionality is crucial for array equality:
      • (A4) a = b ⇔ ∀i(read(a,i) = read(b,i))
      • An alternative form (A4') is given for quantifier-free contexts: a ≠ b ⇔ ∃λ (read(a,λ) ≠ read(b,λ)), meaning unequal arrays must differ at some witness index λ (page 12).
    • The quantifier-free fragment is denoted TA.
  2. Decision Procedure DPA (pages 17-33, Sections 2.6, 2.7):

    • A novel decision procedure DPA for the quantifier-free extensional theory of arrays combined with a base theory TB (e.g., bit-vectors) is presented.
    • Preprocessing (page 13, Section 2.4):
      • For each array inequality a ≠ c, a witness constraint based on (A4') is added: a ≠ c → ∃λ. read(a,λ) ≠ read(c,λ).
      • For each write(a,i,e), the constraint read(write(a,i,e),i) = e (from A2) is added. These make the formula equisatisfiable (page 14).
    • Formula Abstraction (page 15, Section 2.5): read operations are replaced by fresh abstraction variables, and array equalities by fresh propositional variables, creating an over-approximation α(π).
    • Abstraction Refinement Loop (page 17, Section 2.6):
      • DPB (decision procedure for the base theory) solves α(π) ∧ ξ (where ξ are refinement lemmas).
      • If a model σ is found, a consistency checker (page 18, Section 2.7) determines if σ corresponds to a valid model in TA.
      • Read Propagation (pages 21, 27): A mapping ρ tracks reads associated with array terms. Reads are propagated:
        • Downwards through writes (Rule D, page 21): If σ(α(i)) ≠ σ(α(j)), read(b,i) is propagated from write(a,j,e) to a.
        • Across array equalities (Rules R, L, page 27): If σ(α(a=c)) is true, reads are propagated between a and c.
        • Upwards from arrays through writes (Rule U, page 27): If σ(α(i)) ≠ σ(α(j)), read(b,i) from a is propagated to write(a,j,e). This respects axiom (A3).
      • Lemma Generation: If an inconsistency (e.g., violation of adapted congruence axiom (A1), see page 22) is found, a lemma based on the propagation path conditions is generated and added to ξ. These lemmas are TA-valid (page 34, Section 2.8).
    • The procedure is proven sound (page 34) and complete (page 39).
    • Complexity: The number of lemmas is bounded by O(n² · 2ⁿ) (page 43).
  3. Implementation Details for Arrays in Boolector:

    • Treating Writes as Reads (page 47, Section 2.11.4; page 61): A write(a,i,e) can be implicitly treated as a read on itself for index i, returning e. This avoids the explicit preprocessing step of adding read(write(a,i,e),i)=e.
    • Unconstrained Array Variables (page 75, Section 3.4.2):
      • read(a,i) where a is unconstrained is rewritten to a fresh variable.
      • write(a,i,v) where a and v are unconstrained is rewritten to a fresh array variable.
      • a = t where a is unconstrained is rewritten to a fresh boolean variable.

II. Theory of Bit-Vectors (primarily from Chapter 3 and Appendix A)

  1. General Handling:

    • Boolector primarily uses term rewriting and bit-blasting for bit-vectors (page 62).
    • The BTOR format supports bit-vector variables, constants (decimal, hex, binary), and operations with arbitrary bit-width (page 127, 129).
    • Semantics largely follow SMT-LIB QF_BV (page 129).
    • A key distinction: division by zero in BTOR results in the largest unsigned integer (page 129).
  2. Under-Approximation Techniques (pages 65-68, Section 3.3):

    • Aimed at speeding up satisfiable instances and producing "smaller" models by restricting bit domains.
    • Techniques (page 66):
      • Sign-extension: Most significant bits are forced equal to the (n-1)th bit (last effective bit).
      • Zero-extension/One-extension: Most significant bits are forced to 0 or 1.
      • Bit-partitioning: Bits are partitioned into equivalence classes, with all bits in a class having the same value (page 67).
    • CNF Layer Implementation (page 67, Section 3.3.1): Under-approximations are added as incremental clauses to the SAT solver, controlled by a fresh boolean assumption variable e. Refinement involves disabling e and introducing a new approximation with a new assumption variable (page 68).
    • Early Unsat Termination (page 69, Section 3.3.3): If the under-approximated formula is unsatisfiable without using the assumption variable e, the original formula is unsatisfiable.
  3. Unconstrained Bit-Vector Variables (page 72, Section 3.4.1):

    • If an operand v of a function f is unconstrained (appears only once) and f is "invertible" with respect to v, then f(...) can be replaced by a fresh bit-vector variable.
    • Example: v3 + t = v1 & v2. If v1, v2 are unconstrained, v1 & v2 can be replaced by v4. If v3 is unconstrained, v3 + t can be v5. The formula becomes v4 = v5 (page 74).
    • Corner cases exist (e.g., multiplication by an even constant, v < 000) where this is not sound (page 75).
  4. Symbolic Overflow Detection (pages 77-84, Section 3.5):

    • Boolector directly supports symbolic overflow predicates for fixed-size bit-vector arithmetic (add, subtract, multiply, divide) in two's complement.
    • Addition (page 78):
      • Unsigned: Carry-out of MSB c[n-1] is 1. Implemented by adding operands extended with a 0-bit and checking the new MSB r[n].
      • Signed: (Operands have same sign) AND (result has different sign), or c[n-1] ⊕ c[n-2] (page 79).
    • Subtraction (page 79): a - b as a + (-b).
      • Unsigned: Overflow if c[n-1] (from the addition a + (-b)) is 0.
      • Signed: Defined by specific cases of operand signs and result sign.
    • Multiplication (page 80): Considers an n+1 bit product r.
      • Unsigned: (Sum of leading zeros in a and b, Za + Zb) < n-1 OR r[n] is 1. A "leading zeros examination principle" is detailed (Figure 3.8, page 81; formula page 82).
      • Signed: (Sum of leading bits L(a) + L(b)) < n OR r[n] ⊕ r[n-1] is 1. A "leading bits examination principle" is detailed (Figure 3.9, page 83; formula page 84).
    • Division (page 84):
      • Signed: Overflow only if the smallest representable integer is divided by -1.
      • Unsigned: Never overflows.
  5. BTOR Format for Bit-Vectors (pages 129-131, Appendix A.2):

    • Lists unary (not, neg, redand, inc, etc.), binary (and, add, mul, eq, sll, uaddo, etc.), ternary (cond), and miscellaneous (slice) operators with their bit-width specifications.
    • Includes Verilog reduction operators, VHDL rotate operators, and specific overflow detection operators (e.g., umulo).

This summary covers the main theoretical aspects of bit-vectors and arrays as presented in the dissertation, including their formal definitions, the decision procedures developed, and key implementation strategies related to these theories within Boolector.

Empty Type

id : empty-type
types : structure

The empty type, denoted as \bot, Void, Never, 0, or ! is a type that contains no terms. It is impossible to construct an instance (i.e. inhabitant) of the empty type.

Under the Curry-Howard Correspondence (BHK interpretation), the empty type corresponds directly to falsity. Just as there is no proof of the proposition false, there are no inhabitants (values) of the empty type. A function that claims to return the empty type (e.g., f : A -> Void) signifies that the function can never actually be called with a valid value of type A (if the system is consistent), or that it never terminates normally (e.g., it loops forever or throws an exception). This is how negation is defined: the existence of a function A \rightarrow Void implies that it can never be evaluated, meaning that A must itself be an empty type and therefore false.

Set-theoretically it can be thought of as the empty set.

Existential Quantifier

id : existential-quantifier
types : structure

The existential quantifier, denoted by the symbol \exists (a left-to-right reflected "E"), is a type of quantifier used in first-order logic and other predicate logics. It expresses that there is at least one member of the domain of discourse for which a certain property or relation holds.

The expression \exists x. P(x) is read as "there exists an x such that P(x) is true" or "for some x, P(x) holds."
Here, P(x) is a predicate, and x is a variable bound by the quantifier. The statement \exists x. P(x) is true if and only if there is at least one value of x from the domain for which P(x) is true. If P(x) is false for every value of x in the domain, then \exists x. P(x) is false.

Under the Curry-Howard correspondence, the existential quantifier \exists corresponds to Σ-types (dependent pair types) in dependent type theory. A proof of \exists x : A. P(x) is a pair consisting of a witness a of type A and a proof of P(a).
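Over a finite domain, the two readings can be contrasted in a short Python sketch (illustrative only; `exists` and `find_witness` are invented names): the classical reading merely asks whether a witness exists, while the constructive reading actually produces the witness pair.

```python
def exists(domain, predicate):
    """Classical reading: is P(x) true for at least one x in the domain?"""
    return any(predicate(x) for x in domain)

def find_witness(domain, predicate):
    """Constructive (Σ-type) reading: a proof of ∃x.P(x) is a pair
    (witness, evidence); here the evidence is just the checked fact."""
    for x in domain:
        if predicate(x):
            return (x, predicate(x))
    return None

# ∃x in {1,2,3,4} such that x is even: witness 2.
```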

Expression

id : expression
types : structure

An expression or term is a kind of well-formed formula, usually used in the context of programming languages and type theory. Expressions can represent a value to be computed, a computation process itself, or a proof of some statement. They can be assigned types, and they are the inhabitants of those types.

Examples of terms:

  • In lambda calculus: x, \lambda x. x, (f\ a)
  • In arithmetic: x + 5, (y * 2) - z
  • In a programming language: \text{add}(\text{arg1}, \text{arg2}), a + b

False

id : false
types : structure

False is one of the two terms of the boolean type, the other being true.

It is often used as a constant within propositional logic, commonly represented by the symbol \bot.

Feasibility of Program Inference

id : feasibility-of-program-inference
types : argument

The Curry-Howard Correspondence establishes that programs are proofs and types are propositions. This suggests that techniques developed for automated theorem proving and finding proofs in logical systems, such as SAT/SMT solvers, could potentially be adapted to automatically finding or synthesizing programs that inhabit given types (i.e., satisfy type specifications).

Finite Automaton

id : finite-automaton
types : structure

A finite automaton is a model of computation that consists of a finite set of states and transitions between those states based on input symbols. It is the simplest model of computation, capable of recognizing regular languages.

First Order Logic

id : first-order-logic
types : structure

A First Order Logic (FOL) is a formal logic that builds on propositional logic (PL): it is essentially propositional logic with the addition of predicates and quantifiers. FOL is collectively an umbrella term for many theories such as Integer Arithmetic, Real Arithmetic, Bit-Vector, Array, and Set Theory. First Order Logic is generally assumed to be classical (i.e. it assumes principles like the law of the excluded middle), and is thus equivalent to the classically typed dependent lambda calculus via the Curry-Howard Correspondence. Its constructive counterpart is Intuitionistic First Order Logic.

FOL adds the following structures to propositional logic; the details of some of these structures depend on the specific FOL theory.

  • Sorts (sets of things)
  • Predicates: Express properties of objects or relationships between objects (e.g., P(x), R(x, y)).
  • Functions: Map objects to other objects (e.g., f(x), g(x, y)).
  • Quantifiers:

Examples:

  • "All humans are mortal" \Rightarrow \forall x. (\text{Human}(x) \rightarrow \text{Mortal}(x)).
  • "Joey and Jill can become friends with anyone" \Rightarrow \forall p : \text{Person}. (\text{CanBecomeFriendsWith}(\text{Joey}, p) \land \text{CanBecomeFriendsWith}(\text{Jill}, p)).

First Order Logic Theory

id : first-order-logic-theory
types : structure

A first order logic theory is a formal system that extends first order logic with additional structure. This structure comes in two flavors:

  1. Signature (Σ\Sigma): A set of non-logical symbols particular to the theory. This typically includes:

    • Constant symbols (e.g., 0, 1 in arithmetic).
    • Function symbols (e.g., +, \times in arithmetic, along with their arities).
    • Predicate symbols (e.g., \leq, = in arithmetic, along with their arities).
    • (Optionally, sorts if using many-sorted first-order logic, as seen in the examples within first order logic).
    • These can be thought of as Axioms.
  2. A collection of ways to construct more complicated formulas from basic symbols. Expressed as more inference rules.

A structure is considered a model of a first-order theory if it provides an interpretation for the symbols in the signature such that all axioms of the theory are true under that interpretation. Intuitively, this should be some kind of "physical realization" of a theory.

Examples

Many specific logical systems are instances of first-order theories:

Formal Grammar

id : formal-grammar
types : structure

A formal grammar is:

  • A symbol set: A set of symbols \Sigma
  • A characterizing function: A function, in some Turing-complete model of computation, that takes a sequence of symbols and returns true or false according to whether the sequence is valid within the grammar

Formal Logic

id : formal-logic
types : structure

A Formal Logic is a Formal System that provides the foundational syntax and rules of inference for constructing a logical system and their associated formulae.

Examples include Propositional Logic, Predicate Logic, also referred to as First Order Logic and various modal or temporal logics. Each logic allows reasoning about different kinds of statements and structures.

Conceptual Structure

Many formal logics can be conceptually characterized by:

  1. Syntax
    • A description of what structures exist in the logic (true, false, symbols, quantifiers, etc.) and what constitutes a well-formed formula
  2. Semantics

Formal System

id : formal-system
types : structure

A Formal System is a general term that includes a set of Well-Formed Formulas, created by a Formal Grammar, and some kind of inference rule, either logical or type-theoretic.

Examples of formal systems include instances of Formal Logic and Type Theory.

Function

id : function
types : structure

A function is a mapping between two sets, a domain and a codomain, such that every element in the domain is associated with exactly one element in the codomain.

Functions can be specified in various ways, including explicit formulas, algorithms, or tables of values. Not every function can be specified or computed; for those that can, see: Computable Function.

Function Type

id : function-type
types : structure

Function types, denoted as A \rightarrow B, represent functions that take an argument of type A and return a result of type B.

Logically, via the Curry-Howard Correspondence, function types can be interpreted as logical implication: to prove A \rightarrow B you need a proof that, given a proof of A, you can construct a proof of B. Set-theoretically, A \rightarrow B can be interpreted as the set of functions from set A to set B. In category theory, function types correspond to exponential objects.

Implicational Intuitionistic Propositional Logic

id : implicational-intuitionistic-propositional-logic
types : structure

Implicational Intuitionistic Propositional Logic (IIPL) is an incredibly simple formal logic wherein logical sentences are constructed using propositional variables combined with only one connective: logical implication.

It is equivalent to the simply typed lambda calculus via the Curry-Howard Correspondence where every proposition and proof in IIPL corresponds to a type and term in STLC.

Intuitionistic First Order Logic

id : intuitionistic-first-order-logic
types : structure

Intuitionistic First Order Logic (IFOL) is a Formal Logic that extends Intuitionistic Propositional Logic with quantifiers over individuals, but without assuming the Law of the Excluded Middle or its equivalents for quantified statements. It provides a constructive interpretation of logical connectives and quantifiers.

Differences from Traditional First Order Logic:

  1. Constructive Existence: To prove \exists x. P(x) in IFOL, one must provide a specific term t (a witness) and a proof of P(t). It's not enough to show that the non-existence of such an x leads to a contradiction.
  2. No Law of Excluded Middle for Quantifiers: The statement \forall x. (P(x) \lor \neg P(x)) is not generally a theorem in IFOL. Similarly, \neg (\forall x. P(x)) \leftrightarrow (\exists x. \neg P(x)) holds in one direction (\exists x. \neg P(x) \rightarrow \neg \forall x. P(x)), but the other direction (\neg \forall x. P(x) \rightarrow \exists x. \neg P(x)) is not generally provable.

Relationship to Type Theory

Intuitionistic First Order Logic corresponds to Martin-Lof Type Theory under the Curry-Howard Correspondence. Propositions in IFOL correspond to types in MLTT, and proofs in IFOL correspond to terms (programs) in MLTT.

Intuitionistic Propositional Logic

id : intuitionistic-propositional-logic
types : structure

Intuitionistic Propositional Logic (IPL) is a formal logic that is a variant of propositional logic in which the law of the excluded middle is not assumed as an axiom. IPL extends Implicational Intuitionistic Propositional Logic with the following additional constructions:

In IPL, a proposition is considered true only if there is a constructive proof for it. This means that to prove a statement or formula, one must provide a direct construction or evidence of its truth, rather than relying on the impossibility of its negation.

IPL is equivalent to the intuitionistically typed lambda calculus via the Curry-Howard Correspondence, where proofs in IPL correspond to terms in ITLC and propositions correspond to types.

Intuitionistically Typed Lambda Calculus

id : intuitionistically-typed-lambda-calculus
types : structure

The Intuitionistically typed lambda calculus (ITLC) is a type theory and extension of the simply-typed lambda calculus that, in addition to function types, adds:

The ITLC is not a standard term in the literature, but it does provide a core building block of other type theories.

It is equivalent to intuitionistic propositional logic via the Curry-Howard Correspondence where every type and term in ITLC corresponds to a proposition and proof in IPL.

Formally, the added rules can be described using the following sequent calculus declarations. (See more on the sequent calculus on the type theory page.)

\frac{}{\Gamma \vdash () : \text{Unit}} \text{unit}

\frac{}{\Gamma \vdash \text{absurd} : \text{Void} \rightarrow A} \text{absurd}

\frac{\Gamma \vdash a : A \quad \Gamma \vdash b : B}{\Gamma \vdash (a,b) : A \times B} \text{pair}

\frac{\Gamma \vdash t : A \times B}{\Gamma \vdash \text{fst}(t) : A} \text{fst}

\frac{\Gamma \vdash t : A \times B}{\Gamma \vdash \text{snd}(t) : B} \text{snd}

\frac{\Gamma \vdash a : A}{\Gamma \vdash \text{inl}(a) : A + B} \text{in-left}

\frac{\Gamma \vdash b : B}{\Gamma \vdash \text{inr}(b) : A + B} \text{in-right}

\frac{\Gamma \vdash s : A + B \quad \Gamma \vdash f : A \rightarrow C \quad \Gamma \vdash g : B \rightarrow C}{\Gamma \vdash \text{case}(s, f, g) : C} \text{case}
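These term formers can be mimicked with a hypothetical Python encoding (a sketch, not a typed implementation: tuples stand for products, tagged tuples for sums, and the names mirror the rules above):

```python
unit = ()                                    # the sole inhabitant of Unit

def pair(a, b): return (a, b)                # pair rule
def fst(t): return t[0]                      # fst rule
def snd(t): return t[1]                      # snd rule

def inl(a): return ("inl", a)                # in-left rule
def inr(b): return ("inr", b)                # in-right rule

def case(s, f, g):                           # case rule: eliminate A + B
    tag, value = s
    return f(value) if tag == "inl" else g(value)

def absurd(v):                               # Void has no inhabitants,
    raise AssertionError("unreachable")      # so this can never be reached
```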

Lambda Calculus

id : lambda-calculus
types : structure

The lambda calculus (λ-calculus) is a particular kind of expression (more broadly, a well-formed formula in some formal grammar) that encodes computation using function abstraction, variable binding, application, and substitution.

It is a universal Model of Computation, equivalent in power to Turing machines.

Fun fact: when implementing the lambda calculus, it can be encoded as a data type, for example in Rust (with Box providing the indirection that a recursive type requires):

enum LambdaExpression {
	Variable { name: String },
	Abstraction { bound: String, body: Box<LambdaExpression> },
	Application { left: Box<LambdaExpression>, right: Box<LambdaExpression> },
}

Law of the Excluded Middle

id : law-of-the-excluded-middle
types : structure

The law of the excluded middle (LEM) is an (optional) axiom in logic which presupposes that there is no "third value" besides a proposition A and its negation \neg A.

Formally, it can be expressed as:

A \lor \neg A

This principle is a key difference between intuitionistic propositional logic and propositional logic.

List

id : list
types : structure

A list (or sequence) is an abstract data type representing a finite, ordered collection of items where the same item may occur multiple times.

Logical Conjunction

id : logical-conjunction
types : structure

Logical conjunction is a binary logical operation typically meaning "and". In Propositional Logic, the conjunction of two formulae AA and BB, written ABA \land B, is True if and only if both AA and BB are true; it is False otherwise.

The truth table for logical conjunction is:

A B A ∧ B
True True True
True False False
False True False
False False False

Logical Disjunction

id : logical-disjunction
types : structure

Logical disjunction, a.k.a. "or", is a binary logical operation. In Propositional Logic, the disjunction of two formulae AA and BB, written ABA \lor B, is True if at least one of AA or BB is true, and False otherwise.

The truth table for logical disjunction is:

A B A ∨ B
True True True
True False True
False True True
False False False

Logical Formula

id : logical-formula
types : category

A logical formula is a well-formed formula associated with a formal logic, and can usually be interpreted as either true or false, potentially with respect to some assignment.

It is equivalent to a type from type theory.

Logical Implication

id : logical-implication
types : structure

Logical implication is a binary logical operation encoding the idea of "if... then...". In Propositional Logic, the implication ABA \rightarrow B is False only when AA is True and BB is False; it is true in all other cases.

The truth table for logical implication is:

A B A → B
True True True
True False False
False True True
False False True

Logical Negation

id : logical-negation
types : structure

Logical negation is a unary operation that reverses the truth value of a proposition. In Propositional Logic, the negation of a formula AA, written ¬A\lnot A, is True if AA is False, and False if AA is True.

The truth table for logical negation is:

A ¬A
True False
False True

Martin-Lof Type Theory

id : martin-lof-type-theory
types : structure

Martin-Lof Type Theory (MLTT) is a foundational Type Theory developed by Per Martin-Lof that builds on the simply typed lambda calculus by adding dependent types, identity types, universe types, and inductive types. It serves as a basis for modern proof assistants like Lean, Agda, and Coq.

MLTT adds the following to the simply typed lambda calculus (some of these additions also replace existing constructs).

  • Dependent Types: Types that can depend on values.

  • Identity Types (Id_A(x,y) or x =_A y): A type representing the proposition that x is equal to y (where x, y : A). Proofs of identity are terms of this type.

  • Universe Types (U_0, U_1, \dots): Types whose terms are themselves types. This allows for type-level computation and reasoning about types, forming a hierarchy to avoid paradoxes.

  • Inductive Types: Rich ways to define data types like natural numbers, lists, trees, etc., along with their principle of induction.

MLTT is inherently constructive. A proof of a proposition (represented as a type) is a program (a term inhabiting that type). For example, to prove an existentially quantified statement (a Σ-type), one must construct a term to "inhabit" the existential type and prove that the existential property is true.

It is related to Intuitionistic First Order Logic via the Curry-Howard Correspondence, but Martin-Lof Type Theory is more general, in the sense that most if not all first order logic theories can be formulated directly in MLTT via inductive types, instead of needing to be added via extra deduction rules.

Model of Computation

id : model-of-computation
types : structure

A model of computation is a formal, mathematical abstraction of computation. It defines a set of allowable operations and their effects, specifying how outputs are derived from inputs.

Different models capture different aspects or levels of computational power. Common examples include finite automata, pushdown automata, Turing machines, and lambda calculi. Many models, like Turing machines and lambda calculus, are equivalent in power (Turing-complete) and define the limits of what is theoretically computable.

Grammar Automaton Lambda Calculus Model
Regular Grammar Finite Automaton (FA) A function from \text{List[Bool]} \rightarrow \text{Bool} in the Simply Typed Lambda Calculus source
Context-Free Grammar Pushdown Automaton (PDA) Unknown
Context-Sensitive Grammar Linear Bounded Automaton (LBA) Unknown
Recursively Enumerable Grammar Turing Machine (TM) Untyped Lambda Calculus
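The first row of the table can be illustrated concretely: running a finite automaton over a binary alphabet is just a fold over a list of booleans. A toy Python sketch (the automaton and the name `even_trues` are invented for illustration):

```python
def even_trues(bits):
    """A two-state finite automaton run as a fold over List[Bool]:
    accepts exactly the strings with an even number of True bits
    (a regular language)."""
    state = True                 # start in the accepting "even" state
    for b in bits:
        state = state != b       # flip state on each True input
    return state
```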

Negation Type

id : negation-type
types : structure

Negation types, denoted as \neg A or Not A, represent the negation of a type A. They are constructed as function types from A to the empty type \bot, i.e., A \rightarrow \bot.

Logically, via the Curry-Howard Correspondence, negation types can be interpreted as logical negation: to prove \neg A you need a proof that, given a proof of A, you can construct a proof of \bot (i.e. a contradiction). Set-theoretically, if A is a set, \neg A can be thought of as the set of functions from A to the empty set.
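A small Python sketch (hypothetical names, with `NoReturn` standing in for \bot): given A \rightarrow B and \neg B, one can construct \neg A, so contraposition falls out of the encoding of negation as a function type.

```python
from typing import Callable, NoReturn, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# ¬X is encoded as Callable[[X], NoReturn]: a function to the empty type.
def contrapositive(f: Callable[[A], B],
                   not_b: Callable[[B], NoReturn]) -> Callable[[A], NoReturn]:
    # If a proof of A existed, f would yield a proof of B,
    # contradicting not_b; so no proof of A can exist.
    return lambda a: not_b(f(a))
```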

Nelson-Oppen Theory Combination Algorithm

id : nelson-oppen-theory-combination-algorithm
types : algorithm

The Nelson-Oppen theory combination algorithm is a technique used in SMT solvers to handle formulae involving multiple logical theories (e.g. arithmetic and uninterpreted functions). The core idea is to separate a mixed-theory formula into formulas that are only in one theory, connecting them via new variables.

This is achieved by:

  1. Identifying terms that mix symbols from different theories (e.g. f(x+1): f(...) is uninterpreted, x + 1 is arithmetic).
  2. Introducing "fresh" variables to represent these subterms within a clause belonging to a specific theory.
  3. Adding interface equality constraints that link these new variables back to the original terms they replaced.

Example: f(x+1) = y might be split into two parts:

  • Theory A (Arithmetic): v = x+1
  • Theory B (Uninterpreted Functions): f(v) = y
    (where v is a new shared variable)

This separation allows each theory's solver to independently check the separate parts, deriving new clauses/equalities, until we derive a contradiction or we find a valid assignment.
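The purification step can be sketched in Python (a toy: the tuple term representation, the `purify` helper, and the operator-based theory test are all invented for illustration; real solvers work over proper ASTs and signatures):

```python
import itertools

fresh = (f"v{i}" for i in itertools.count(1))  # fresh shared variables

def purify(term):
    """Split a mixed term into a pure term plus interface equalities.

    Terms are nested tuples like ("f", ("+", "x", 1)); here "+" is the
    only arithmetic operator and everything else is uninterpreted.
    """
    defs = []  # interface equalities, e.g. ("=", "v1", ("+", "x", 1))

    def theory(t):
        return "arith" if t[0] == "+" else "uf"

    def walk(t):
        if not isinstance(t, tuple):
            return t               # variables/constants stay as-is
        op, *args = t
        out = []
        for a in map(walk, args):
            if isinstance(a, tuple) and theory(a) != theory(t):
                v = next(fresh)    # alien subterm: name it
                defs.append(("=", v, a))
                a = v
            out.append(a)
        return (op, *out)

    return walk(term), defs
```

For example, purify(("f", ("+", "x", 1))) yields the pure term ("f", "v1") together with the interface equality ("=", "v1", ("+", "x", 1)), matching the worked example above.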

On the expressive power of simply typed and let-polymorphic lambda calculi

id : on-the-expressive-power-of-simply-typed-and-let-polymorphic-lambda-calculi
types : source

Paper Link: Here

Summary generated by Gemini 2.5 Pro Preview 05-06:

This paper, "On the Expressive Power of Simply Typed and Let-Polymorphic Lambda Calculi" by Gerd Hillebrand and Paris Kanellakis (posthumously), presents a comprehensive functional framework for descriptive computational complexity. It establishes precise syntactic characterizations for various complexity classes (Regular, First-order, PTIME, PSPACE, k-EXPTIME, and k-EXPSPACE for k \geq 1, and Elementary sets) using simply typed lambda calculus (TLC) and let-polymorphic lambda calculus (core-ML), augmented with constants and an equality predicate (TLC^=, core-ML^=).

Core Idea and Framework:
The central idea is that typed lambda terms can represent not only programs but also their inputs and outputs. The complexity of functions computable within this framework is directly tied to the "functionality order" of the lambda terms used. Specifically, increasing the maximum allowed functionality order by one corresponds to an increase in computational power, often by one alternation in an alternating Turing machine model.

Key Contributions:

  1. A Unified Functional Framework for Complexity: The paper completes a research program (initiated in [21, 19, 20]) to provide a functional counterpart to the logical framework (e.g., Fagin's theorem for NP, fixpoint logic for PTIME) in descriptive complexity.
  2. Syntactic Characterization of Complexity Classes: It provides fixed-order typed lambda calculi that precisely capture several important complexity classes.
  3. Highlighting the Role of Functionality Order: The paper demonstrates a clear hierarchy: higher functionality orders allow for the expression of more complex computations.
  4. The Importance of Constants and Equality: The inclusion of order 0 atomic constants (from a domain oo) and an order 1 equality predicate (Eq:ooτττEq: o \rightarrow o \rightarrow \tau \rightarrow \tau \rightarrow \tau) is crucial for moving beyond simple Boolean/regular functions to database-like queries and higher complexity classes.
  5. Semantic Evaluation Technique: This is the primary technical contribution. To prove upper bounds (i.e., that a language fragment only computes functions within a certain class), the paper introduces a "semantic evaluation" mechanism. This is necessary because standard β\beta-reduction can be too slow (e.g., a PTIME function might take exponential time to reduce syntactically). The semantic evaluator (EvalkEval_k) uses a hybrid strategy: call-by-value for redexes up to a certain order kk and call-by-name for higher-order redexes, effectively trading recursion depth for storage of intermediate results.
  6. Let-Polymorphism (core-ML): The framework is extended to core-ML (TLC with letlet expressions). Let-polymorphism often allows characterizations that are "program uniform" in TLC to become "language uniform" in core-ML, meaning inputs can be typed more generally.

Methodology and Definitions:

  • Lambda Calculi:
    • TLC: Church's simply typed lambda calculus.
    • TLC$^=$: TLC augmented with a type constant oo (for domain elements), a countable set of expression constants of type oo, and an equality constant EqEq.
    • core-ML / core-ML$^=$: The letlet-polymorphic versions.
  • Functionality Order: Defined recursively: order(type variable/constant) = 0; order(αβ\alpha \rightarrow \beta) = max(1 + order(α\alpha), order(β\beta)).
  • Input/Output Conventions:
    • Outputs are always Booleans (type Bool:=τττBool := \tau \rightarrow \tau \rightarrow \tau).
    • For pure TLC: Inputs are Booleans or lists of Booleans (encoded as list iterators).
    • For TLC$^=$: Inputs are ordered finite structures. The domain dd is a list of constants from OO. Relations rir_i are lists of tuples of constants from OO. These list iterators typically have a principal type of order 2.
  • Typing Conventions:
    • Language Uniform (e.g., TLIkLTLI_k^L): All program terms type their inputs in the same fixed way.
    • Program Uniform (e.g., TLIkPTLI_k^P): Each program has a fixed type, and its inputs must be typed to match that specific program. The same input data might have different types for different programs.
  • Lower Bounds (Expressibility): Achieved by simulating Turing machines. This involves:
    • Constructing Church numerals of (hyper)-exponential size.
    • Using higher-order types to create (hyper)-exponentially large domains (e.g., DkD_k).
    • Lifting input relations and predicates to these larger domains.
  • Upper Bounds (Containment): Proven using the novel semantic evaluation technique (EvalkEval_k), which evaluates terms in a finite model based on the constants present in the query and input. The complexity of EvalkEval_k is analyzed based on the order kk and the structure of the term.

Main Results:

  1. Pure TLC (without constants/equality):

    • BOOL (Language Uniform): Terms of type Bool...BoolBool \rightarrow ... \rightarrow Bool express exactly the Boolean functions.
    • REG (Program Uniform): Terms of type {Bool}Bool\{Bool\}^* \rightarrow Bool (list of Booleans to Boolean) express exactly the characteristic functions of regular languages.
  2. TLC$^=$ (with constants and equality):

    • First-Order Queries (Language Uniform): TLI$_0^L$ (lambda terms of order 3 over finite structures) capture first-order logic with order. This is analogous to the Schwichtenberg-Statman theorem for extended polynomials.
    • PTIME (Program Uniform): Characterized by TLC$^=$ terms of order 4.
    • PSPACE (Program Uniform): Characterized by TLC$^=$ terms of order 5.
    • k-EXPTIME (Program Uniform): Characterized by TLC$^=$ terms of order 2k+4.
    • k-EXPSPACE (Program Uniform): Characterized by TLC$^=$ terms of order 2k+5.
    • (The paper denotes these fragments as TLI$_k^P$ with different kk indexing, but the resulting orders are as above).
  3. core-ML$^=$ (with constants, equality, and let-polymorphism):

    • The program uniform characterizations for PTIME, PSPACE, k-EXPTIME, and k-EXPSPACE in TLC$^=$ become language uniform in core-ML$^=$.
    • MLREG also expresses regular languages language uniformly.

Significance and Implications:

  • Provides a deeper understanding of the expressive power inherent in the type systems of functional programming languages.
  • Establishes a direct link between a fundamental syntactic feature (functionality order) and computational complexity.
  • The characterizations are often more "economical" in terms of basic primitives compared to some logical characterizations.
  • The semantic evaluation technique is a significant contribution for analyzing the complexity of typed lambda calculi when syntactic reduction is insufficient.
  • The PTIME vs. PSPACE distinction is elegantly captured by a difference in functionality order (e.g., order 1 vs. order 2 types for intermediate data, leading to order 4 vs. order 5 for the overall language fragment).

Limitations (as noted in Discussion):
The framework, being deterministic, does not provide a syntactic characterization for NP. Capturing NP would likely require adding a non-deterministic primitive (like a "coin" operation), which would violate properties like Church-Rosser.

Personal Note:
The paper includes a touching dedication by Gerd Hillebrand to his co-author and mentor, Paris Kanellakis, who tragically passed away shortly after they finished the work.

In summary, this paper offers a robust and elegant framework connecting the typed lambda calculus, a cornerstone of functional programming, with fundamental computational complexity classes. Its key insights revolve around the role of functionality order and the novel semantic evaluation technique for proving tight complexity bounds.

Peirce's Law

id : peirces-law
types : structure

Peirce's Law is a principle in logic named after the philosopher and logician Charles Sanders Peirce. It can be defined as follows:

P,Q.((PQ)P)P\forall P, Q. ((P \rightarrow Q) \rightarrow P) \rightarrow P

Intuitively: "If it's true that PP is true whenever PP implying QQ is true, then PP must be true."

Or alternatively: "If we have a function ff that produces PP given any proof that PQP \rightarrow Q, then PP must be true. Classically, either PP holds (and we're done) or PP is false, in which case PQP \rightarrow Q holds vacuously, so ff produces PP — contradicting PP being false." This classical case-split is exactly why the law requires LEM and is not intuitionistically provable.

Peirce's Law is a theorem of classical propositional logic but is not provable in intuitionistic propositional logic. Adding Peirce's Law as an axiom to an intuitionistic system is one way to make the logic classical. It is equivalent to the law of the excluded middle in the sense that adding either to an intuitionistic logic yields a classical logic, but its definition mirrors the type-theoretic counterpart of LEM [[equivalent:call with current continuation]].
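The connection to call/cc can be illustrated by simulating an escape continuation with exceptions; `call_cc` and `peirce` are hypothetical helpers for this sketch, not a standard API:

```python
class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def call_cc(f):
    """Call f with an escape continuation k; invoking k(v) aborts f
    and makes call_cc return v."""
    class Tag(_Escape):       # fresh tag so nested call_cc's don't collide
        pass
    def k(v):
        raise Tag(v)
    try:
        return f(k)
    except Tag as e:
        return e.value

def peirce(f):
    """((P -> Q) -> P) -> P, realized by call/cc: hand f the escape
    continuation as its P -> Q argument."""
    return call_cc(f)

# If f ignores its argument and returns a P directly, we get that P;
# if f invokes the continuation with a P, that value escapes instead.
direct = peirce(lambda k: 42)
escaped = peirce(lambda k: k(7))
```

Here `direct` is `42` and `escaped` is `7`: either way a value of type P is produced, mirroring the classical case-split described above.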

Pi Type

id : pi-type
types : structure

A Π-type (Pi-type), also known as a dependent function type or dependent product type, is a type constructor in type theories with dependent types, such as Martin-Löf Type Theory.

Given a type AA and a family of types B(x)B(x) indexed by terms x:Ax : A, the Π-type, written as Πx:A.B(x)\Pi x:A. B(x) or (x:A)B(x)(x:A) \rightarrow B(x), represents the type of functions that take a term a:Aa : A and return a term of type B(a)B(a). The return type B(a)B(a) depends on the input value aa.

Under the Curry-Howard Correspondence, Π-types correspond to universal quantifiers (\forall) in logic. A proof of x:A.P(x)\forall x:A. P(x) is a function that, given any a:Aa:A, produces a proof of P(a)P(a).
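A minimal Lean 4 sketch of both readings (the definition names are illustrative):

```lean
-- A Π-type: the result type Fin (n + 1) depends on the argument n,
-- so zeroFin is a dependent function.
def zeroFin : (n : Nat) → Fin (n + 1) :=
  fun n => ⟨0, Nat.succ_pos n⟩

-- Under Curry-Howard, a Π over a proposition is a ∀-proof: given any n,
-- produce a proof of P(n).
theorem all_le_succ : ∀ n : Nat, n ≤ n + 1 :=
  fun n => Nat.le_succ n
```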

Predicate

id : predicate
types : structure

A predicate is a symbol in predicate logic and its supersets (such as first order logic) that represents a property of objects or a relation among objects. It can be thought of as a function that maps tuples of objects to a truth value (True or False).

  • A predicate with one argument (a unary predicate) expresses a property. For example, Human(x)\text{Human}(x) could mean "xx is human."
  • A predicate with two arguments (a binary predicate) expresses a relation between two objects. For example, Loves(x,y)\text{Loves}(x, y) could mean "xx loves yy."
  • A predicate with nn arguments (an nn-ary predicate) expresses a relation among nn objects.

When the variables in a predicate are instantiated with specific objects from a domain, the predicate becomes a proposition that is either true or false. For example, if Socrates\text{Socrates} is an object in the domain, Human(Socrates)\text{Human}(\text{Socrates}) is a proposition.

Predicates are fundamental building blocks for constructing logical formulas in predicate logic. Combined with quantifiers and logical connectives, they make assertions about domains of discourse; this combination of predicates, quantifiers, and connectives yields First Order Logic.

Principle of Explosion

id : principle-of-explosion
types : structure

The principle of explosion, also known as ex falso quodlibet (from a falsehood, anything follows), is a fundamental principle in logic. It states that from a contradiction, any statement can be derived.

Formally, it can be expressed as:

A\bot \rightarrow A

Product Type

id : product-type
types : structure

A product type, denoted as A×BA \times B or Pair A B, represents a value that is a pair of values, one of type AA and one of type BB. Values of a pair type are typically constructed using a pairing operation (e.g., (a, b) where a is of type A and b is of type B).

Logically via the Curry-Howard Correspondence product types can be interpreted as ABA \land B or logical conjunction, meaning to prove ABA \land B you need a proof of both AA and BB. Set-theoretically it can be interpreted as the cartesian product of sets AA and BB. In category theory product types correspond to categorical products.
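A minimal Python sketch of the pairing interpretation (the `Pair` class is illustrative):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# A product type A × B: a value of Pair[A, B] carries one A and one B.
# "Proving A ∧ B" corresponds to supplying both components.
@dataclass(frozen=True)
class Pair(Generic[A, B]):
    fst: A
    snd: B

p: Pair[int, str] = Pair(1, "one")
components = (p.fst, p.snd)   # the two projections
```

Using a frozen dataclass gives structural equality, matching the set-theoretic reading of pairs as elements of a cartesian product.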

Propositional Formula

id : propositional-formula
types : structure

A propositional formula is a type of logical formula that uses symbols to express the framework of propositional logic.

For the purposes of this ontology, we will use the same symbols that have been used in class.

  • Propositional variables are represented by single-letter names.
  • Logical conjunction of two sub-formulae is represented by ABA \land B, where AA and BB are any valid propositional formulae.
  • Logical disjunction of two sub-formulae is represented by ABA \lor B, where AA and BB are any valid propositional formulae.
  • Not ¬A\lnot A
  • Xor ABA \oplus B
  • Implies ABA \rightarrow B
  • True \top
  • False \bot

Propositional Literal

id : propositional-literal
types : structure

A propositional literal is either a propositional variable or its logical negation.

Propositional Logic

id : propositional-logic
types : structure

Propositional Logic (classical propositional logic) is intuitionistic propositional logic combined with the law of the excluded middle or double negation elimination (they are equivalent axioms), enabling proofs by contradiction.

I.e. adding either of these rules of inference encoding the LEM/double negation axiom:

\frac{}{\Gamma \vdash A \lor \neg A} \text{LEM}

\frac{}{\Gamma \vdash \neg \neg A \rightarrow A} \text{double-neg}

Propositional Variable

id : propositional-variable
types : structure

A propositional variable (or sentence letter) is a kind of variable and a basic element in propositional logic. It represents an atomic proposition—a statement that can be assigned either true or false.

In propositional formulae, they are often represented by single letters like p,q,r,A,Bp, q, r, A, B. An assignment maps propositional variables to truth values.

Pushdown Automaton

id : pushdown-automaton
types : structure

A pushdown automaton (PDA) is a type of finite automaton that employs a stack in addition to finite state control. It is a Model of Computation more capable than finite automata but less capable than Turing machines.

A defining property of pushdown automata is that they are the weakest Model of Computation capable of recognizing the context-free languages.

Quantifier

id : quantifier
types : structure

A quantifier is a construction defined by an inference rule that specifies how many individuals in the domain of discourse satisfy a proposition or property. Quantifiers bind variables in a formula, turning an open formula (which has a free variable and thus no definite truth value) into a formula that has a definite truth value (assuming all variables are bound).

The most common quantifiers are:

  • The universal quantifier (\forall), read "for all"
  • The existential quantifier (\exists), read "there exists"

In the wild

Quantifiers are essential components of First Order Logic, and higher order logics as well.

Rule of Inference

id : rule-of-inference
types : structure

A Rule of Inference is a clearly defined rule for drawing a conclusion from a set of premises. Every formal system has a deductive system as part of its formal grammar, consisting of rules of inference that allow us to derive theorems from axioms.

For an example of inference rules, see the typing rules of the simply typed lambda calculus.

Satisfiability

id : satisfiability
types : structure

Satisfiability is the property of a logical formula whereby there exists some satisfying assignment under which the formula evaluates to true.

It is related to, but distinct from, validity.

Satisfiability Solver

id : satisfiability-solver
types : category

A satisfiability solver is an algorithm that, given a logical formula, determines whether or not it is satisfiable, and if it is, returns a satisfying assignment, sometimes referred to as a model.
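A minimal (exponential-time) sketch of such a solver, under an assumed clause representation where a formula is a list of clauses, each clause a list of `(variable, polarity)` literals:

```python
from itertools import product

def brute_force_sat(clauses):
    """Try every assignment; return a model (dict) or None for UNSAT."""
    variables = sorted({v for clause in clauses for v, _ in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # A clause is satisfied if some literal matches the assignment.
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return assignment
    return None

# (p ∨ ¬q) ∧ q  is satisfied by p = true, q = true.
model = brute_force_sat([[("p", True), ("q", False)], [("q", True)]])
```

Real solvers such as DPLL and CDCL avoid this exhaustive enumeration, but the input/output contract is the same.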

Satisfying Assignment

id : satisfying-assignment
types : structure

A satisfying assignment is a kind of assignment which, taken with a logical formula, causes that formula to be interpreted as true.

Sigma Type

id : sigma-type
types : structure

A Σ-type (Sigma-type), also known as a dependent pair type or dependent sum type, is a type constructor in type theories with dependent types, such as Martin-Löf Type Theory.

Given a type AA and a family of types B(x)B(x) indexed by terms x:Ax : A, the Σ-type, written as Σx:A.B(x)\Sigma x:A. B(x) or (x:A)×B(x)(x:A) \times B(x), represents the type of pairs (a,b)(a, b) where a:Aa : A and b:B(a)b : B(a). The type of the second component B(a)B(a) depends on the value of the first component aa.

Under the Curry-Howard Correspondence, Σ-types correspond to existential quantifiers (\exists) in logic. A proof of x:A.P(x)\exists x:A. P(x) is a pair consisting of a witness a:Aa:A and a proof of P(a)P(a).
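A minimal Lean 4 sketch (definition names are illustrative; `Σ'` is Lean's dependent pair allowing a propositional second component):

```lean
-- A Σ-type: a witness paired with evidence about it; the type of the
-- second component mentions the first.
def evenWitness : Σ' n : Nat, n % 2 = 0 := ⟨4, rfl⟩

-- Under Curry-Howard, an ∃-proof is given the same way: a witness
-- together with a proof of the property for that witness.
theorem exists_even : ∃ n : Nat, n % 2 = 0 := ⟨4, rfl⟩
```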

Simply Typed Lambda Calculus

id : simply-typed-lambda-calculus
types : structure

The Simply typed lambda calculus (STLC) is a type theory that extends and restricts the lambda calculus by assigning [[uses:type|types]] to lambda terms.

In simply typed lambda calculus, every term has a unique type, and the type system enforces that functions are only applied to arguments of the correct type. This is achieved through type checking rules that verify the consistency of type assignments.

It is equivalent to implicational intuitionistic propositional logic via the Curry-Howard Correspondence where every type and term in STLC corresponds to a proposition and proof in IIPL.

The types in STLC consist of the following primitives:

  • Atomic (base) types
  • Function types A \rightarrow B

And the following typing rules; see the type theory page for more info on how these work.

Typing Rules:

Γ,x:Ab:BΓλx.b:ABfunction\frac{\Gamma, x:A \vdash b : B}{\Gamma \vdash \lambda x.b : A \rightarrow B} \text{function}

Γ(λx.b):ABΓa:AΓ(λx.b)ab[x:=a]:Bbeta-reduction\frac{\Gamma \vdash (\lambda x.b) : A \rightarrow B \quad \Gamma \vdash a : A}{\Gamma \vdash (\lambda x.b)\, a \equiv b[x:=a] : B} \text{beta-reduction}
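The rules can be turned into a small type checker. This Python sketch uses an assumed term encoding and also includes the standard variable and application rules, which the note above leaves implicit:

```python
# Types are "o" (an atomic type) or ("->", A, B). Terms are variable
# names, ("lam", x, A, body) with an annotated argument type, or
# ("app", f, a). The encoding is illustrative.
def typecheck(ctx, term):
    if isinstance(term, str):                      # variable rule: look up x
        return ctx[term]
    tag = term[0]
    if tag == "lam":                               # function rule
        _, x, arg_ty, body = term
        body_ty = typecheck({**ctx, x: arg_ty}, body)
        return ("->", arg_ty, body_ty)
    if tag == "app":                               # application rule
        _, f, a = term
        f_ty = typecheck(ctx, f)
        if f_ty[0] != "->" or typecheck(ctx, a) != f_ty[1]:
            raise TypeError("ill-typed application")
        return f_ty[2]
    raise ValueError(f"unknown term: {term!r}")

# λx:o. x  has type  o -> o
identity_ty = typecheck({}, ("lam", "x", "o", "x"))
```

Applying that identity to a variable of type `o` yields `o`, matching the beta-reduction rule's conclusion.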

SMT Solver

id : smt-solver
types : category

An SMT solver is an implementation of algorithms that solve first order logic formulas, usually combined with some extra theory TT.

SMT solvers work by combining the techniques of SAT solvers (like CDCL) with theory-specific solvers. The core SAT solver handles the boolean structure of the formula, while theory solvers handle the constraints imposed by the theory TT.

The process generally involves:

  1. Boolean Abstraction: The input formula is abstracted into a propositional formula by replacing theory atoms (theory-specific formulas that resolve to a boolean e.g., x+y>zx + y > z) with new propositional variables.

  2. SAT Solving: A SAT solver is used to find a satisfying assignment for the boolean abstraction, if any exists.

  3. Theory Check: The assignment from the SAT solver is given to the theory solver, which checks whether the assignment is consistent with the theory.

  4. Conflict Resolution: If the theory solver finds a conflict (i.e., the set of theory atoms is inconsistent), it generates a "theory lemma" (a constraint implied by the theory) that explains the conflict. This lemma is added to the boolean abstraction as a clause, and the process repeats from step 2.

  5. Satisfiable/Unsatisfiable: If the SAT solver finds an assignment that is consistent with the theory, the original formula is satisfiable. If the SAT solver exhausts all possibilities (including those refined by theory lemmas) without finding a consistent assignment, the original formula is unsatisfiable.
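Steps 2-5 above can be sketched as a loop; `sat_solve` and `theory_check` are hypothetical oracles standing in for the SAT solver and theory solver:

```python
def smt_loop(clauses, sat_solve, theory_check):
    """Lazy SMT loop. sat_solve returns a model of the boolean
    abstraction (a list of (var, polarity) literals) or None;
    theory_check returns None if the model is theory-consistent,
    else a conflicting subset of literals."""
    while True:
        model = sat_solve(clauses)
        if model is None:
            return None                     # UNSAT, even propositionally
        conflict = theory_check(model)
        if conflict is None:
            return model                    # theory-consistent: SAT
        # Theory lemma: at least one conflicting literal must flip.
        clauses = clauses + [[(v, not pol) for v, pol in conflict]]

# Tiny demo with stub oracles: the first propositional model is rejected
# by the theory, forcing the loop to find a second one.
_models = [[("x", True)], [("x", False)]]
def _stub_sat(clauses):
    return _models.pop(0)
def _stub_theory(model):
    return model if model == [("x", True)] else None

result = smt_loop([], _stub_sat, _stub_theory)
```

The returned `result` is the second model, `[("x", False)]`, after the conflict on the first was turned into a blocking clause.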

Start

id : start
types : start

Welcome to the Automated Reasoning Ontology!

This is an ontology to catalogue and investigate concepts in Automated Reasoning. Specifically, to answer the question: To what degree can existing logical frameworks and their satisfiability algorithms be generalized and applied to inferring programs in different type theories?

Before You Start

For an improved ontology viewing experience it is recommended to tweak the settings on the sidebar to hide uses and mention links.

Types

Ontologies have types for their nodes and links between nodes, and this one is no exception! The node and link types (and their meanings) are as follows:

Node Types

  • start - unique type of the node you are reading right now.
  • structure - a formally intensionally definable concept, i.e. assume you can define this mathematically.
  • category - an informally (extensionally) defined concept, perhaps defined in terms of shared experience or other colloquial terms.
  • algorithm - a specific function or program
  • source - a note linking to and sometimes summarizing an external document.
  • argument - a note arguing for a certain point of view (e.g., Feasibility of Program Inference).
  • undefined - notes that don't have a good category yet.

Link Types

  • is-a - if a concept (a structure or a category) inherits traits/properties from another concept, the concept inheriting traits should link to the supertype using the is-a link.
  • is-broadly - Basically the same as is-a, but for longer-range subtyping relations, in order to create a better visualization.
  • uses - if a concept uses another concept in its definition, for example, a structure that is constructed from other structures this link can be used.
  • part - if a concept is a part of another concept, i.e. an integral piece out of which the other is composed, this can be used as a "stronger" connection than uses to make the graph nicer (it exerts a stronger force in the force-graph).
  • mention - if any note mentions another note, but not in any kind of particular way, this link type may be used.
  • cites - should be used to link to source nodes, to cite that source.

If you're curious about the source code of this repository, check out the following link: https://github.com/zontasticality/ar-ontology

AI Disclosure

Gemini 2.5 Pro 03-25 and Gemini 2.5 Pro 05-06 output was used verbatim to help readers explore sources. Models were also used as a writing aid due to the sheer amount of content required to create a comprehensive ontology. For the ontology itself, AI output was used for idea generation and drafting only, then rewritten.

Sum Type

id : sum-type
types : structure

A sum type, denoted as A+BA + B or Either A B, represents a value that can be either of type AA or of type BB. Values of a sum type are typically tagged or constructed in a way that indicates which variant they belong to (e.g., Left(a) where a is of type A, or Right(b) where b is of type B).

Logically via the Curry-Howard Correspondence sum types can be interpreted as ABA \lor B or logical disjunction, meaning to prove ABA \lor B you only need a proof of one or the other (and an indication of which one you used). Set-theoretically it can be interpreted as the disjoint union of sets AA and BB. In category theory sum types correspond to coproducts.
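A minimal Python sketch of the tagged-variant interpretation (the `Left`/`Right` classes are illustrative):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")
B = TypeVar("B")

# A sum type A + B: a value is tagged as either Left (holding an A)
# or Right (holding a B).
@dataclass(frozen=True)
class Left(Generic[A]):
    value: A

@dataclass(frozen=True)
class Right(Generic[B]):
    value: B

Either = Union[Left[A], Right[B]]

def describe(e: "Either[int, str]") -> str:
    # Case analysis: the tag tells us which variant we hold,
    # mirroring proof by cases on A ∨ B.
    if isinstance(e, Left):
        return f"int {e.value}"
    return f"str {e.value}"

left_case = describe(Left(3))
right_case = describe(Right("hi"))
```

The `isinstance` branch is the elimination form: to use an `A + B` you must handle both variants, just as proving something from A ∨ B requires both cases.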

The Logic of Brouwer and Heyting

id : the-logic-of-brouwer-and-heyting
types : source

Paper Link: Here

This paper details the history of Intuitionistic Logic and specifically L. E. J. Brouwer's contributions to the field.

This is the main source for the Brouwer-Heyting-Kolmogorov Interpretation node.

Summary

Summary generated by Gemini 2.5 Pro Preview 05-06:

1. Introduction and Brouwer's Foundational Ideas:
The paper begins by situating intuitionistic logic as a system of reasoning developed from Brouwer's philosophical stance on mathematics.

  • Brouwer's Philosophy: Mathematics is an exclusively mental activity, independent of and prior to language and logic. Mathematical objects (including proofs) are mental constructions.
  • First Act of Intuitionism: This was the separation of mathematics from language, rooted in the "intuition of the bare two-oneness." This initially led to intuitionistic arithmetic and a theory of rationals but struggled with the continuum due to an insistence on predicative definability (finitely specifiable, hence denumerable entities).
  • Negation as Absurdity: Brouwer interpreted negation not as mere falsity but as absurdity (leading to a contradiction). He objected to the universal law of excluded middle (A¬AA \lor \neg A), seeing it as an a priori claim that every problem is solvable, and to the law of double negation (¬¬AA\neg\neg A \supset A) as asserting every consistent statement holds.
  • Early Developments: Brouwer's 1907 dissertation laid groundwork, criticizing existing philosophies of mathematics. His 1908 paper "The unreliability of the logical principles" questioned the universal validity of classical logical laws for infinite systems.

2. Formalization and Early Interpretations:

  • Heyting's Formalization (1930): Heyting, at Brouwer's suggestion, formalized intuitionistic propositional logic (IPC), first-order predicate logic (IQC), and parts of intuitionistic mathematics (arithmetic, species).
    • IPC: Presented with axioms for ,,,¬\supset, \land, \lor, \neg (e.g., ¬A(AB)\neg A \supset (A \supset B) effectively replacing A¬AA \lor \neg A). Heyting demonstrated the independence of his axioms.
    • IQC: Extended IPC with quantifiers (,\forall, \exists) and identity (\equiv). The universal quantifier implies a uniform construction; the existential quantifier implies a witness has been found or a method to find one is known.
  • Glivenko's Contributions (1928-1929): Independently worked on formalizing IPC. Proved Glivenko's Theorems:
    1. If AA is provable classically, ¬¬A\neg\neg A is provable intuitionistically.
    2. If ¬A\neg A is provable classically, ¬A\neg A is provable intuitionistically.
  • Kolmogorov's Problem Interpretation (1925, 1932):
    • His 1925 work on "minimal logic" (implication and negation) was largely unnoticed in the West.
    • His 1932 interpretation viewed logical formulas as "problems": ABA \land B is the problem of solving A and B; ABA \lor B is solving A or B; ABA \supset B is reducing the solution of B to A; ¬A\neg A is deriving a contradiction from a solution of A. This forms part of the Brouwer-Heyting-Kolmogorov (BHK) explication of intuitionistic logic.

3. Intuitionistic Arithmetic (HA):

  • Heyting's Arithmetic: Developed within his predicate calculus, including axioms for natural numbers, successor, and induction. He defined +,<,>+, <, >.
  • Kleene's Arithmetic (HA): Became the standard formalization, with constant 0, unary successor ', binary +,+, \cdot, and predicate ==. Includes logical axioms, induction schema, and specific arithmetical axioms.

4. Proof Theory of Intuitionistic Logic and Arithmetic:

  • Relationship to Classical Logic: IFOL (IQC) is a proper subsystem of classical FOL. ISOL is inconsistent with classical SOL. Intuitionistic language is richer.
  • Negative Translations (Gödel, Gentzen, Kolmogorov): Showed that classical first-order arithmetic can be faithfully translated into the negative fragment of intuitionistic arithmetic. This established proof-theoretic equivalence for certain classes of statements and clarified the relationship between classical and constructive consequences. For example, a formula AA is classically provable iff AA^* (its translation, often involving ¬¬\neg\neg for primes/disjunctions/existentials) is intuitionistically provable.
  • Constructive Properties (Disjunction and Existence):
    • Disjunction Property (DP): If ABA \lor B is a closed provable formula in IPC/IQC, then AA is provable or BB is provable.
    • Existence Property (EP/EPN): If xA(x)\exists x A(x) is a closed provable formula in IQC (or HA), then A(t)A(t) (or A(n)A(n) for a numeral nn in HA) is provable for some term tt. These were stated by Gödel and proved by Gentzen for logic, and by Kleene for arithmetic.
  • Gentzen's Sequent Calculus (LJ): Introduced a system for natural deduction where derivations are trees. His Hauptsatz (cut-elimination theorem) implies that proofs can be normalized, from which DP and EP follow more directly.
  • Admissible vs. Derivable Rules: A rule is derivable if its conclusion can be deduced from its premises within the system. It's admissible if, whenever its premises are theorems, its conclusion is also a theorem. In IPC, there are admissible rules that are not derivable (e.g., independence of premise rules like ¬A(BC)(¬AB)(¬AC)\neg A \supset (B \lor C) \vdash (\neg A \supset B) \lor (\neg A \supset C), Mints' rule). Visser's rules provide a basis for all admissible propositional rules. Markov's rule is an important admissible rule for IQC/HA involving decidable predicates.

5. Interpretations (Semantics) of Intuitionistic Logic and Arithmetic:

  • Algebraic Semantics: IPC can be interpreted in Heyting algebras (lattices with an additional operation \supset).
  • Topological Semantics (Mostowski, Rasiowa & Sikorski): IQC can be interpreted using open sets of a topological space. Both algebraic and topological semantics are sound and complete for IPC/IQC.
  • Recursive Realizability (Kleene, Nelson): For HA. A number ee "realizes" an arithmetical sentence EE if ee codes a construction satisfying EE according to BHK.
    • Nelson's Theorem: Every theorem of HA is realizable.
    • This interpretation proves the consistency of HA with Church's Thesis (CT0CT_0) (every effectively calculable function is recursive) and Markov's Principle (MPMP) (related to decidability of predicates over natural numbers).
  • Kripke Semantics (Beth, Kripke): "Possible worlds" semantics.
    • A Kripke model consists of a set of nodes (worlds) KK with a partial order \leq (accessibility) and a forcing relation \Vdash. Forcing is persistent (if kAk \Vdash A and kkk \leq k', then kAk' \Vdash A).
    • kABk \Vdash A \supset B iff for all kkk' \geq k, if kAk' \Vdash A then kBk' \Vdash B.
    • k¬Ak \Vdash \neg A iff for all kkk' \geq k, k⊮Ak' \not\Vdash A.
    • For IQC, domains D(k)D(k) are associated with nodes, expanding along \leq.
    • Sound and complete for IPC and IQC. Veldman provided an intuitionistically acceptable completeness proof for IQC using "exploding nodes."
  • Beth Models: Similar to Kripke models but with a different condition for forcing disjunctions and existential quantifiers based on "barring" by subtrees.

6. The Second Act of Intuitionism: The Continuum and Choice Sequences:
Brouwer's later work tackled the intuitionistic continuum.

  • Choice Sequences (infinitely proceeding sequences, ips): Accepted as legitimate mathematical entities, even if not fully defined in advance. Generated by "spreads" (laws defining allowed choices).
    • Fan/Finitary Spread: Only finitely many choices available at each step.
  • Brouwer's Fan Theorem: A total integer-valued function on a fan is uniformly continuous (its value is determined by a finite initial segment of the choice sequence input, with a uniform bound on the length of this segment).
  • Continuity Principle: Any total function defined on a spread (e.g., from choice sequences to numbers) is continuous. This principle contradicts classical logic (e.g., it implies ¬α(xα(x)=0¬xα(x)=0)\neg \forall \alpha (\forall x \alpha(x)=0 \lor \neg \forall x \alpha(x)=0)).
  • Bar Theorem (Bar Induction): A principle of induction over well-founded trees of finite sequences, essential for proving the Fan Theorem. Initially "proved" by Brouwer, later largely accepted as an axiom.
  • Kleene and Vesley's FIM (Foundations of Intuitionistic Mathematics, 1965): A formal system for intuitionistic analysis, two-sorted (numbers and choice sequences). Includes HA, countable choice, Brouwer's Principle for Functions (a strong form of continuous choice), and Bar Induction.
    • FIM is inconsistent with classical logic but consistent relative to its subsystem B (without Brouwer's Principle).
  • Heyting's Arithmetic of Species (HAS0HAS_0) and HAωHA^\omega (Arithmetic of Finite Types): Extensions of HA dealing with collections (species) and higher-type objects, respectively. Troelstra studied these, proving properties like the Uniformity Principle (UPUP) which also has non-classical consequences.

Conclusion:
The paper traces the evolution of intuitionistic logic from Brouwer's philosophical insights about mathematics as a mental construction, through Heyting's crucial formalizations, to the rich metamathematical study by later logicians. It highlights the distinctive features of intuitionistic logic (constructive proofs, interpretation of negation and quantifiers, non-classical principles for the continuum) and its complex relationship with classical logic, often shown to be equiconsistent via negative translations while diverging significantly in its treatment of higher-order concepts and the continuum. The development of various semantic interpretations (realizability, Kripke models) further clarified its structure and consistency.

Theorem

id : theorem
types : structure

A theorem is a well-formed formula that can be derived in a formal system through some combination of its rules of inference, and can therefore be reached by logical deduction.

Theory of Arrays

id : theory-of-arrays
types : structure

The Theory of Arrays is a First Order Logic Theory designed for reasoning about array data structures. It extends First Order Logic with specific sorts, function symbols, and axioms to model array operations. This theory is extensively used in SMT solvers for program verification.

Signature

The signature for the (extensional) theory of arrays typically includes:

  • Sorts:
    • Index: The type of array indices (e.g., integers, bit-vectors).
    • Value: The type of elements stored in the array.
    • Array: The type of arrays, often conceived as maps from Index to Value.
  • Function symbols:
    • select (or read): \text{select} : \text{Array} \times \text{Index} \rightarrow \text{Value}
      • Returns the value stored at a given index in an array.
    • store (or write): \text{store} : \text{Array} \times \text{Index} \times \text{Value} \rightarrow \text{Array}
      • Returns a new array that is identical to the input array except at the given index, where it stores the new value.
  • Predicate symbols:
    • Equality for arrays (=_A) is often included.

Axioms (McCarthy Axioms)

The core of the theory of arrays is defined by the McCarthy axioms:

  1. Read-over-write 1 (select-of-store 1): Reading at an index i from an array that was just written to at the same index i yields the value that was written.
    \forall a:\text{Array}, i:\text{Index}, v:\text{Value} . (\text{select}(\text{store}(a, i, v), i) = v)
  2. Read-over-write 2 (select-of-store 2): Reading at an index j from an array that was just written to at a different index i (where i \neq j) yields the same value as reading from the original array at index j.
    \forall a:\text{Array}, i, j:\text{Index}, v:\text{Value} . (i \neq j \rightarrow \text{select}(\text{store}(a, i, v), j) = \text{select}(a, j))

Optional Axiom of Extensionality:
If two arrays are different, then there must be an index at which their contents differ. Equivalently, if two arrays agree on all indices, they are equal.
\forall a, b:\text{Array} . ((\forall i:\text{Index} . \text{select}(a, i) = \text{select}(b, i)) \rightarrow a =_A b)
SMT solvers may support arrays with or without extensionality. Non-extensional arrays are simpler to handle but less expressive.
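The two read-over-write axioms can be illustrated with a small functional model of arrays in Python. This is an illustrative sketch (the names store and select merely mirror the theory's symbols), not an SMT solver's API:

```python
# Model an array as a function from indices to values.
def store(a, i, v):
    """Return a new array identical to `a`, except that index `i` maps to `v`."""
    return lambda j: v if j == i else a(j)

def select(a, i):
    """Return the value stored at index `i` in array `a`."""
    return a(i)

zeros = lambda j: 0              # the constant-zero array
a1 = store(zeros, 3, 42)

# Read-over-write 1: select(store(a, i, v), i) = v
assert select(a1, 3) == 42
# Read-over-write 2: i != j -> select(store(a, i, v), j) = select(a, j)
assert select(a1, 7) == select(zeros, 7)
```

Note that store is non-destructive: it returns a new array and leaves the original untouched, matching the theory's functional reading of write.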

Usage

The theory of arrays is fundamental for verifying properties of programs that manipulate array-like data structures. SMT solvers provide efficient decision procedures for this theory, often in combination with other theories like integer arithmetic (for array indices) or bit-vectors.

Source

Efficient SMT Solving for Bit-Vectors and the Extensional Theory of Arrays

Theory of Integer Arithmetic

id : theory-of-integer-arithmetic
types : structure

The Theory of Integer Arithmetic is a First Order Logic Theory that formalizes reasoning about the properties of integers. It extends First Order Logic with a specific signature and axioms tailored to integers.

Signature

The signature for the theory of integer arithmetic typically includes:

  • Sorts: A sort for Integer (\mathbb{Z}).
  • Constant symbols: 0, 1, -1, 2, -2, \dots
  • Function symbols:
    • Addition: + : \text{Integer} \times \text{Integer} \rightarrow \text{Integer}
    • Negation: - : \text{Integer} \rightarrow \text{Integer}
    • Multiplication: \times : \text{Integer} \times \text{Integer} \rightarrow \text{Integer}
  • Predicate symbols:
    • Equality: = : \text{Integer} \times \text{Integer} \rightarrow \text{Bool}
    • Less than: < : \text{Integer} \times \text{Integer} \rightarrow \text{Bool}
      (Other relations like >, \leq, \geq can be defined from these).

Axioms

The axioms depend on the richness of the theory:

  • Presburger Arithmetic: Includes axioms for addition, constants, and order. Multiplication by constants is allowed, but not general multiplication between variables. It is decidable.
  • Peano Arithmetic (PA): A more expressive theory that includes axioms for addition, multiplication, and induction. It is powerful enough to express many number-theoretic statements but is incomplete (by Gödel's incompleteness theorems) and undecidable.
  • Linear Integer Arithmetic (LIA): Commonly used in SMT solvers, this theory typically involves linear constraints (e.g., 2x + 3y \leq 5).

Common axioms include properties like:

  • Associativity and commutativity of + and \times.
  • Distributivity of \times over +.
  • Properties of 0 (identity for +) and 1 (identity for \times).
  • Order axioms (transitivity, totality of \leq).
  • Axioms relating operations and order (e.g., a \leq b \rightarrow a + c \leq b + c).
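These axiom schemas are universally quantified, so no finite test can verify them, but a brute-force check over a small sample of integers illustrates what each one asserts. A sketch using Python's built-in integers as the model:

```python
from itertools import product

samples = range(-3, 4)
for a, b, c in product(samples, repeat=3):
    assert a + b == b + a and a * b == b * a      # commutativity of + and *
    assert (a + b) + c == a + (b + c)             # associativity of +
    assert a * (b + c) == a * b + a * c           # distributivity of * over +
    assert a + 0 == a and a * 1 == a              # identity elements
    if a <= b:
        assert a + c <= b + c                     # order respects addition
```

A decision procedure, by contrast, must establish these facts for all integers symbolically, which is exactly what SMT theory solvers for LIA do.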

Usage

The theory of integer arithmetic is fundamental in mathematics, computer science for program verification, constraint solving, and automated theorem proving. SMT solvers often incorporate decision procedures for fragments of integer arithmetic.

Theory of Real Arithmetic

id : theory-of-real-arithmetic
types : structure

The Theory of Real Arithmetic is a First Order Logic Theory that formalizes reasoning about the properties of real numbers. It extends First Order Logic with a specific signature and axioms for real numbers.

Signature

  • Sorts: A sort for Real (\mathbb{R}).
  • Constant symbols: 0, 1 and potentially other rational constants.
  • Function symbols:
    • Addition: + : \text{Real} \times \text{Real} \rightarrow \text{Real}
    • Subtraction (or unary negation): - : \text{Real} \rightarrow \text{Real} (unary) or - : \text{Real} \times \text{Real} \rightarrow \text{Real} (binary)
    • Multiplication: \times : \text{Real} \times \text{Real} \rightarrow \text{Real}
  • Predicate symbols:
    • Equality: = : \text{Real} \times \text{Real} \rightarrow \text{Bool}
    • Less than: < : \text{Real} \times \text{Real} \rightarrow \text{Bool}

Theory of Uninterpreted Functions

id : theory-of-uninterpreted-functions
types : structure

The Theory of Uninterpreted Functions (TUF), a.k.a. the Theory of Equality with Uninterpreted Functions (EUF), is a First Order Logic Theory that deals with function symbols whose interpretation is not constrained beyond basic functional consistency. It extends First Order Logic by allowing function symbols without specific axiomatic definitions of their behavior (unlike, say, + in arithmetic, whose behavior is fixed by axioms).

Signature

The signature for TUF can include:

  • Sorts: One or more sorts for the domain and codomain of functions.
  • Constant symbols: Any constant symbols.
  • Function symbols: A set of function symbols f, g, h, \dots, each with a fixed arity (e.g., f : S_1 \times \dots \times S_n \rightarrow S_0). These functions are "uninterpreted", meaning their specific mapping is not defined by the theory.
  • Predicate symbols: The primary predicate is usually equality (=). Other predicate symbols can also be uninterpreted.

Axioms

The core axiom of the theory of uninterpreted functions is functional consistency (or congruence):
For any n-ary function symbol f:
\forall x_1, \dots, x_n, y_1, \dots, y_n . ((x_1 = y_1 \land \dots \land x_n = y_n) \rightarrow f(x_1, \dots, x_n) = f(y_1, \dots, y_n))

This axiom states that if all arguments to a function are pairwise equal, then the function applications must also be equal.

If uninterpreted predicates P are present, a similar axiom applies:
\forall x_1, \dots, x_n, y_1, \dots, y_n . ((x_1 = y_1 \land \dots \land x_n = y_n) \rightarrow (P(x_1, \dots, x_n) \leftrightarrow P(y_1, \dots, y_n)))

Additionally, the standard axioms of equality (reflexivity, symmetry, transitivity) are assumed.
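The congruence axiom is the basis of the congruence closure procedure for deciding quantifier-free EUF. The following is a naive sketch of that idea (a quadratic fixpoint over terms encoded as nested tuples — an illustrative encoding, not any solver's actual API or algorithm):

```python
class UnionFind:
    """Tracks equivalence classes of terms."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def subterms(t):
    yield t
    if isinstance(t, tuple):                  # ("f", arg1, ..., argn)
        for arg in t[1:]:
            yield from subterms(arg)

def entails(equalities, goal):
    """Do the given equalities entail the goal equality under congruence?"""
    terms = set()
    for a, b in list(equalities) + [goal]:
        terms |= set(subterms(a)) | set(subterms(b))
    uf = UnionFind()
    for a, b in equalities:
        uf.union(a, b)
    changed = True
    while changed:                            # apply congruence to a fixpoint
        changed = False
        for s in terms:
            for t in terms:
                if (isinstance(s, tuple) and isinstance(t, tuple)
                        and s[0] == t[0] and len(s) == len(t)
                        and uf.find(s) != uf.find(t)
                        and all(uf.find(x) == uf.find(y)
                                for x, y in zip(s[1:], t[1:]))):
                    uf.union(s, t)            # equal args force equal applications
                    changed = True
    return uf.find(goal[0]) == uf.find(goal[1])

# x = y entails f(x) = f(y), but not conversely:
assert entails([("x", "y")], (("f", "x"), ("f", "y")))
assert not entails([(("f", "x"), ("f", "y"))], ("x", "y"))
```

Production solvers use far more efficient congruence closure algorithms (e.g., with signature hashing), but the fixpoint above captures the same axiom.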

Usage

The theory of uninterpreted functions is fundamental in automated reasoning and SMT solvers because:

  1. Abstraction: It allows abstracting away the concrete details of complex functions while preserving essential properties related to equality. For example, a complex hash function can be modeled as an uninterpreted function if only its property of mapping equal inputs to equal outputs is relevant.
  2. Combination with Other Theories: TUF is often the base theory upon which other theories (like arithmetic, arrays) are built or combined using frameworks like Nelson-Oppen.
  3. Decidability: The quantifier-free fragment of TUF is efficiently decidable.

It's used extensively in program verification to model operations whose precise definitions are not needed for a particular proof, or to establish equivalences between program fragments.

True

id : true
types : structure

True is one of the two terms of the Boolean type, the other being False.

It is often used as a constant within Propositional Logic, commonly represented by the symbol \top.

Turing Machine

id : turing-machine
types : structure

A Turing machine is a mathematical model of computation that defines an abstract machine manipulating symbols on a strip of tape according to a table of rules. A Turing machine can simulate any computer program.

It consists of an infinite tape divided into cells, a head that can read and write symbols on the tape and move left or right along it one cell at a time, a state register, and a finite table of instructions.

The way a Turing machine works is as follows: the machine starts in a designated initial state. At each step of computation, it looks up the entry in its instruction table for the current state and the symbol under the head. That entry tells the machine what to do: write a symbol to the tape, move the head one cell to the left or right, and transition to a new state. It turns out that a Turing machine is equivalent in computational power to the lambda calculus.
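The table-driven operation described above can be sketched as a small simulator. The encoding here (states and symbols as strings, "_" as the blank symbol, rules mapping a (state, symbol) pair to a (new state, written symbol, move) triple) is an illustrative assumption, not a standard:

```python
def run_tm(tape, rules, state="start", halt="halt", max_steps=10_000):
    """Run a Turing machine until it halts or the step budget runs out."""
    cells = dict(enumerate(tape))        # sparse tape; unwritten cells are blank
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine that flips every bit and halts at the first blank cell:
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
assert run_tm("1011", flip) == "0100"
```

The step budget stands in for the fact that a real Turing machine need not halt at all: whether an arbitrary machine halts is the undecidable halting problem.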

Type

id : type
types : structure

A type is a core component of a type theory that classifies data or expressions based on their properties and the operations that can be performed on them.

For example, the type Nat, representing the natural numbers, constrains the possible values of any variable of that type: given some variable x of type Nat, one can be sure that x will not be the string "hello, world", etc.

In this way, types are used to prevent errors by ensuring that operations are applied to data of the appropriate kind.

Examples include data types, booleans, and function types.

Type Theory

id : type-theory
types : category

A type theory is a formal system, offered as an alternative to formal logic, for classifying and reasoning about types. It is a foundational theory in computer science and logic, used to define the structure of data and the behavior of programs.

Type theory is the mathematical formalization of the concept of types in programming languages. In that context, types are used to prevent errors by ensuring that operations are applied to data of the appropriate kind, but type theories are much more powerful than that and have the potential to enable the formalization of all of mathematics.

Typing Context

id : typing-context
types : structure

A typing context (\Gamma, \Delta) is broadly a map that associates free variables with their respective types. It is used to keep track of "known" variables and their types at a particular point in a program or proof.

For example, a context \Gamma = [x:A, y:B] declares that the variable x is assumed to have type A, and the variable y is assumed to have type B.

Purpose:

Typing contexts are used programmatically for type checking and inference, and they are used when defining typing rules to determine the type of an expression or to verify that an expression is well-typed.

Typing Rule

id : typing-rule
types : structure

A typing rule is a fundamental component of any type theory or system. It is a formal inference rule that defines how to determine or assign a type to an expression. Typing rules specify the conditions under which an expression is considered well-typed. They are the type-theory equivalent of logical rules of inference.

Form

Typing rules are inference rules that typically take the following structure:

\frac{\text{Premise}_1 \quad \text{Premise}_2 \quad \dots \quad \text{Premise}_n}{\text{Conclusion}} \quad (\text{Rule Name})

This means that if all premises (judgements above the line) are true, then the conclusion (the judgement below the line) can be derived.

Judgements are statements about expressions, types, and contexts. The most common form of judgement used in this ontology so far is the typing judgement:

  • \Gamma \vdash e : T
    This asserts that in the typing context \Gamma, the expression e has type T. A typing context \Gamma is a list of assumptions about the types of free variables, e.g., [x_1:A_1, x_2:A_2].

Here are two fundamental examples of typing rules, for function introduction and elimination:

\frac{\Gamma, x:A \vdash b : B}{\Gamma \vdash \lambda x.b : A \rightarrow B} \text{function-intro}

Read: If, in a context \Gamma extended with the assumption that variable x has type A, it can be derived that term b has type B, then it can be derived that the lambda abstraction \lambda x.b has the function type A \rightarrow B in the original context \Gamma. This rule introduces a term of a function type.

\frac{\Gamma \vdash f : A \rightarrow B \quad \Gamma \vdash a : A}{\Gamma \vdash (f\,a) : B} \text{function-elim (or app)}

Read: If it can be derived in context \Gamma that term f has a function type A \rightarrow B, and it can also be derived in context \Gamma that term a has type A (the argument type), then it can be derived that the application (f\,a) has type B (the result type) in the same context \Gamma. This rule eliminates (or uses) a term of a function type.

Generally, typing rules come in pairs:

  • Introduction rules (like function-intro) define how to construct or form a term of a particular type.
  • Elimination rules (like function-elim) define how a term of a particular type can be used or deconstructed.
  • Some type theories also include "computation rules", which specify how terms evaluate.

While the examples above primarily use the term typing judgement (\Gamma \vdash e : T), more complex type systems involve other forms of judgements, such as:

  • Type Formation: \Gamma \vdash A \; \text{type} (In context \Gamma, A is a well-formed type).
  • Type Equivalence: \Gamma \vdash A \equiv B \; \text{type} (In context \Gamma, types A and B are equivalent).
  • Term Equivalence: \Gamma \vdash t \equiv u : A (In context \Gamma, terms t and u are equivalent and have type A).

The specific set of judgements and rules defines the particular type theory.
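The function-intro and function-elim rules above translate directly into a type checker for the simply typed lambda calculus. A minimal sketch, assuming a hypothetical tuple encoding of terms — ("var", x), ("lam", x, A, body), ("app", f, a) — and of arrow types as ("->", A, B):

```python
def infer(ctx, term):
    """Infer the type of `term` in typing context `ctx` (a dict from variables to types)."""
    kind = term[0]
    if kind == "var":                     # variable rule: look up x : A in the context
        return ctx[term[1]]
    if kind == "lam":                     # function-intro: check the body under ctx, x : A
        _, x, a, body = term
        b = infer({**ctx, x: a}, body)
        return ("->", a, b)
    if kind == "app":                     # function-elim: f : A -> B applied to a : A
        f_type = infer(ctx, term[1])
        a_type = infer(ctx, term[2])
        if f_type[0] != "->" or f_type[1] != a_type:
            raise TypeError("ill-typed application")
        return f_type[2]
    raise ValueError(f"unknown term kind: {kind}")

# The identity function lambda x:A. x has type A -> A in the empty context:
identity = ("lam", "x", "A", ("var", "x"))
assert infer({}, identity) == ("->", "A", "A")
# Applying it to y : A (function-elim) yields a term of type A:
assert infer({"y": "A"}, ("app", identity, ("var", "y"))) == "A"
```

Each branch of infer is one typing rule: the premises are the recursive calls, and the returned type is the conclusion's type.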

Examples

Specific examples of typing rules can be found in the records for various type theories.

Unit Type

id : unit-type
types : structure

The unit type, denoted as \top, (), Unit, or 1, is a type that has exactly one canonical value (often written as (), unit, or *). It conveys no information beyond its own existence, similar to a void return type in some programming languages but representing a type with a single, trivial value rather than no value at all.

Under the Curry-Howard Correspondence, the unit type corresponds directly to the proposition that is always true. Just as the proposition True has a trivial, universally available proof, the unit type has a single, unique inhabitant that can always be constructed. Set-theoretically it can be thought of as the singleton set.

Universal Quantifier

id : universal-quantifier
types : structure

The universal quantifier, denoted by the symbol \forall (an inverted "A", meaning AAAAAAAAAAll), is a type of quantifier used in first-order logic and other predicate logics. It expresses that a certain property or relation holds for every member of the domain of discourse.

The expression \forall x, P(x) is read as "for all x, P(x) is true" or "for every x, P(x) holds."
Here, P(x) is a predicate, and x is a variable bound by the quantifier. The statement \forall x, P(x) is true if and only if P(x) is true for every possible value of x from the domain. If there is even one value of x for which P(x) is false, then \forall x, P(x) is false.

Under the Curry-Howard Correspondence, the universal quantifier \forall corresponds to \Pi-types (dependent function types) in type theory. A proof of \forall x:A.\, P(x) is a function that, given any term a of type A, produces a proof of P(a).

Validity

id : validity
types : structure

Validity is the property of some logical formulae whereby the formula always evaluates to true, regardless of the assignment of values to its variables.

It is related but distinct from satisfiability.
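For propositional formulae, validity can be decided by brute force: evaluate the formula under every possible assignment. A sketch with formulae encoded as Python functions over an assignment dict (an illustrative encoding, not a solver interface):

```python
from itertools import product

def is_valid(formula, variables):
    """True iff `formula` evaluates to True under every assignment."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# p or not p is valid; p alone is satisfiable but not valid:
assert is_valid(lambda v: v["p"] or not v["p"], ["p"])
assert not is_valid(lambda v: v["p"], ["p"])
```

This also shows the duality with satisfiability: a formula is valid exactly when its negation is unsatisfiable, which is how SAT solvers are used to check validity.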

Variable

id : variable
types : structure

A variable is a symbol which can be mapped to a value by an assignment.

Well-Formed Formula

id : well-formed-formula
types : structure

A well-formed formula of an associated formal grammar is a sequence of symbols for which the associated classifying function of that grammar returns true.

Note:

While "well-formed formula" (WFF) is a general term, specific kinds of WFFs in different domains often have more common names. For instance, in programming and type theory, WFFs that represent programs or computational units are frequently called terms or expressions. In logic, WFFs that represent statements are typically referred to as propositions or formulas.

The Automated Reasoning Ontology

The Automated Reasoners


Version 2.5.4 • License GPL-3.0-or-later
