roadmap

So this is awkward, but we don't actually have docs for Caffeine yet. That's what I'm working on right now. In the meantime, you can read my brain dump document!

latest: mar 23, 2026
done: ts-hash (type-directed hashing)
moved: ts-hash (type-directed hashing): planned -> in progress
moved: analysis prototype: planned -> in progress
done: the caffeine.pub website
done: re (monorepo config manager)
re (monorepo config manager) [complete]

Declarative TOML-based monorepo configuration with bidirectional sync

workspace.toml & project.toml parsing
Config files for package.json, tsconfig, prettier, VS Code settings, engines, scripts
bidirectional lens system
Edit generated JSON files and changes sync back to TOML via composable field lenses
WASM TOML mutation
Write mutate-toml to preserve comments and formatting in TOML files when syncing changes back
daemon mode
Watches config files, regenerates on change
make it robust
Not all fields are supported or synced yet; add support as needed
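The lens idea above can be sketched roughly as follows. The types and names here are illustrative, not re's actual API: a lens pairs a `get` (model to generated view) with a `put` (edited view plus old model back to an updated model), and field lenses compose to reach nested values.

```typescript
// A minimal sketch of a composable bidirectional field lens
// (illustrative types and names; not re's actual API).
type Lens<S, A> = {
  get: (source: S) => A;
  put: (source: S, value: A) => S;
};

// A lens focused on a single field of an object.
function field<S extends object, K extends keyof S>(key: K): Lens<S, S[K]> {
  return {
    get: (source) => source[key],
    put: (source, value) => ({ ...source, [key]: value } as S),
  };
}

// Composition: focus deeper by chaining an outer and an inner lens.
function compose<S, A, B>(outer: Lens<S, A>, inner: Lens<A, B>): Lens<S, B> {
  return {
    get: (source) => inner.get(outer.get(source)),
    put: (source, value) =>
      outer.put(source, inner.put(outer.get(source), value)),
  };
}

// Example: a package name shared between a TOML-derived model and a
// generated package.json view. Editing the view syncs back losslessly.
type Project = { pkg: { name: string; version: string } };
const nameLens = compose(
  field<Project, 'pkg'>('pkg'),
  field<Project['pkg'], 'name'>('name'),
);

const model: Project = { pkg: { name: 'caffeine', version: '1.0.0' } };
const updated = nameLens.put(model, 'caffeine-core');
```

Because `put` rebuilds the model around the edited field, sibling fields (here, `version`) survive a round trip untouched, which is what makes editing the generated JSON safe.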
ts-hash (type-directed hashing) [complete]

A TypeScript static analysis for fast hashing of data structures

type extraction
Walk interfaces, aliases, unions, intersections, generics, and produce a normalized type graph
hash function codegen
Generate specialized hash functions per type. Primitives inline, structs hash fields in declaration order, arrays stream elements
recursive and circular types
Recursive types emit composable helpers with cycle-safe refs. Supports self-recursive and mutually recursive types
generics strategy
Generic types produce genericized hash functions with trait-style constraints. Instantiated refs resolve concrete type arguments
CLI
Reads tsconfig, scans for @hash-annotated types, writes hash.gen.ts, and auto-adds a path alias. 147 tests
fuzz with random types
It should not loop forever
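As an illustration of the codegen strategy (hypothetical output with FNV-1a-style mixing; not ts-hash's real generated code), a specialized hasher for an assumed `interface User { id: number; tags: string[] }` might look like this:

```typescript
// Illustrative sketch of type-directed hash codegen output
// (hypothetical; not ts-hash's actual generated code).
const FNV_OFFSET = 0x811c9dc5;
const FNV_PRIME = 0x01000193;

// One 32-bit FNV-1a-style mixing step.
function mix(h: number, v: number): number {
  return Math.imul(h ^ (v >>> 0), FNV_PRIME) >>> 0;
}

// Primitive hashers; the codegen would inline these at call sites.
function hashNumber(h: number, n: number): number {
  return mix(h, n | 0); // n | 0 truncates to 32 bits for this sketch
}
function hashString(h: number, s: string): number {
  h = mix(h, s.length);
  for (let i = 0; i < s.length; i++) h = mix(h, s.charCodeAt(i));
  return h;
}

interface User { id: number; tags: string[] }

// Generated per-type hasher: struct fields in declaration order,
// arrays streamed element by element, with the length mixed in first.
function hashUser(u: User, h: number = FNV_OFFSET): number {
  h = hashNumber(h, u.id);
  h = hashNumber(h, u.tags.length);
  for (const t of u.tags) h = hashString(h, t);
  return h;
}
```

Threading the accumulator `h` through each field is what makes the per-type functions composable: a generated hasher for a struct containing a `User` would simply call `hashUser(u, h)` at the right position.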
analysis prototype [in progress]

Prototype an iterative analysis that discovers the call graph and runs an interprocedural, field-sensitive points-to analysis at the same time

depends on -> ts-hash
determine prototype language requirements
Closures (w/ forward decls), calls, objects, fields, ifs, loops, and breaks should suffice
lexer & parser
JS-like syntax for the above
iterative analysis
Discover call graph, run points-to analysis, reanalyze call graph, run points-to, iterate until fixpoint
satisfy all test cases
Mutual recursion, loops, tree recursion, higher-order functions, multi-step discovery
translate to field-sensitive SSA
Use the results of points-to analysis to lower functions and fields into SSA form (slower than Cytron et al.)
fuzz end to end
Discover examples of possible infinite loops in the implementation and address them in the analysis
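The iteration above can be sketched as a toy fixpoint loop. Everything here is illustrative, not the prototype's IR: the hypothetical program has `main` calling `make`, which returns the closure `work`; `main` then calls through the variable `f`. Resolving that indirect call needs points-to facts, and analyzing the newly reachable `work` adds a further call edge, which is why the two analyses must be iterated together.

```typescript
// Toy sketch of the iterate-until-fixpoint loop (illustrative encoding).
type Body = {
  assigns: [string, string][]; // dst = src (src may name a function)
  directCalls: string[];
  indirectCalls: string[];     // calls through a variable
};

const body: Record<string, Body> = {
  main: { assigns: [['f', 'ret_make']], directCalls: ['make'], indirectCalls: ['f'] },
  make: { assigns: [['ret_make', 'work']], directCalls: [], indirectCalls: [] },
  work: { assigns: [], directCalls: ['log'], indirectCalls: [] },
  log:  { assigns: [], directCalls: [], indirectCalls: [] },
};

function analyze(entry: string) {
  const reachable = new Set<string>([entry]);
  const callGraph = new Set<string>();             // "caller->callee"
  const pointsTo = new Map<string, Set<string>>(); // variable -> function targets

  // A bare function name is its own singleton target set.
  const targetsOf = (name: string): Set<string> =>
    body[name] ? new Set([name]) : pointsTo.get(name) ?? new Set<string>();

  let changed = true;
  while (changed) {
    changed = false;
    for (const fn of [...reachable]) {
      const { assigns, directCalls, indirectCalls } = body[fn];
      // Points-to step: propagate assignment targets.
      for (const [dst, src] of assigns) {
        const dstSet = pointsTo.get(dst) ?? new Set<string>();
        for (const t of targetsOf(src)) {
          if (!dstSet.has(t)) { dstSet.add(t); changed = true; }
        }
        pointsTo.set(dst, dstSet);
      }
      // Call-graph step: direct calls plus calls resolved via points-to.
      const callees = [...directCalls];
      for (const v of indirectCalls) callees.push(...targetsOf(v));
      for (const callee of callees) {
        const edge = `${fn}->${callee}`;
        if (!callGraph.has(edge)) { callGraph.add(edge); changed = true; }
        if (!reachable.has(callee)) { reachable.add(callee); changed = true; }
      }
    }
  }
  return { callGraph, pointsTo };
}
```

The loop terminates because both fact sets only grow and are bounded by the program's names; the fuzzing task above is about making sure the real analysis, with fields and contexts, keeps that property.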
open questions

What happens when a new function is added by the user? We don't want to recompute everything and end up slow like Crystal (relevant: https://arxiv.org/abs/2412.10632)

What are the edge cases where we lose context or precision in the points-to analysis? Even flow-insensitive field-sensitive interprocedural points-to is undecidable

documentation [planned]

We should document the internals of everything we're doing

write the caffeine.pub website
You're looking at it
document the analysis prototype
People should be able to understand how the analysis works and why it works
start a blog
Publish accessible posts on data-flow analysis
other fun things [exploring]

caffeine.pub rocks

a logo
What's the vibe?
contributor pathways
Document how a contributor can join the organization