Lichen Research — Ottawa, Canada
An independent research practice studying how AI systems remember. We build neuroplastic memory — Hebbian pathways, spreading activation, decay — and measure what it does on standardized benchmarks. Published paper (CCN 2026), open library (moss), private-preview partner (Hypha). We measure before claiming, cite every source, and publish what didn't work. Today we serve the research community; enterprise engagements come later, when the infrastructure for them is built.
Findings
These numbers came out of building and red-teaming a deployed AI agent over 120 days. They aren't claims — they're benchmark results, reproducible and documented.
Finding 01
In our controlled LoCoMo ablation, swapping the memory system while holding the model, prompts, and data constant changed accuracy by meaningful margins across categories. The largest shift came from per-category retrieval weighting — single-hop and multi-hop respond to different retrieval signals, and a one-size-fits-all pipeline leaves accuracy on the table.
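Per-category retrieval weighting can be sketched as follows. This is an illustrative toy, not Lichen's pipeline: the category names, signal names, and weight values are all assumptions, standing in for whatever signals the real system blends.

```python
# Hypothetical sketch: a question's category selects how much weight each
# retrieval signal gets. Categories, signals, and weights are illustrative.
CATEGORY_WEIGHTS = {
    "single_hop": {"semantic": 0.8, "lexical": 0.2},
    "multi_hop":  {"semantic": 0.4, "lexical": 0.6},
}

def weighted_score(category: str, semantic: float, lexical: float) -> float:
    """Blend retrieval signals using the weights for this question category."""
    w = CATEGORY_WEIGHTS[category]
    return w["semantic"] * semantic + w["lexical"] * lexical

def rank(category: str, candidates: list[dict]) -> list[dict]:
    """Order candidate memories by their category-weighted score."""
    return sorted(
        candidates,
        key=lambda c: weighted_score(category, c["semantic"], c["lexical"]),
        reverse=True,
    )
```

The point of the sketch: the same two candidates can rank in opposite orders depending on the question category, which is exactly why a one-size-fits-all pipeline leaves accuracy on the table.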
Finding 02
Standard memory systems retrieve by similarity. Under adversarial conditions they retrieve confidently wrong answers. Memories linked by Hebbian co-activation pathways develop lateral inhibition — semantically similar but contextually wrong memories suppress each other at recall time.
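The mechanism in Finding 02 can be sketched in a few lines. This is a toy model, not Lichen's implementation: the learning rate, inhibition strength, and similarity threshold are invented for illustration.

```python
from collections import defaultdict

LEARN_RATE = 0.2   # Hebbian strengthening per co-activation (illustrative)
INHIBITION = 0.5   # penalty between similar but unlinked candidates
SIM_THRESHOLD = 0.7

link = defaultdict(float)  # co-activation strength, keyed by sorted id pair

def co_activate(a: str, b: str) -> None:
    """Hebbian update: memories recalled together strengthen their link."""
    link[tuple(sorted((a, b)))] += LEARN_RATE

def recall_scores(candidates: dict[str, float],
                  similarity: dict[tuple[str, str], float]) -> dict[str, float]:
    """Start from similarity-to-query scores, then apply lateral inhibition:
    candidates that look alike but share no recall history suppress each other.
    Similarity keys are sorted (a, b) id pairs."""
    scores = dict(candidates)
    ids = list(candidates)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            key = tuple(sorted((a, b)))
            if similarity.get(key, 0.0) > SIM_THRESHOLD and link[key] == 0.0:
                penalty = INHIBITION * similarity[key]
                scores[a] -= penalty
                scores[b] -= penalty
    return scores
```

A lookalike with no shared recall history gets suppressed at recall time; once two memories have genuinely co-occurred in useful recalls, the suppression between them lifts.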
adversarial recall accuracy on LoCoMo (47/47 questions)
Finding 03
Most AI memory benchmarks test single-session recall. The LoCoMo benchmark tests across 10 conversations, 1,986 questions, multiple categories. The field's best public systems plateau around 86–92%. The hardest category — multi-hop reasoning across memory — still sits below 80% for every public system.
our baseline on LoCoMo full run (1,986 questions, local 27B model)
Finding 04
Holding our retrieval pipeline identical and swapping the language model, per-category accuracy shifts in a predictable pattern. Frontier-tier models close most of the remaining gap on categories a local model misses. Retrieval decides what the model gets to see; the model decides what it does with it. The decomposition is ongoing; full results will accompany our NeurIPS 2026 submission.
Research
Our first paper documents the neuroplastic memory system behind the findings above. A second paper extending the decomposition in Finding 04 is in preparation for NeurIPS 2026.
CCN 2026 — Extended Abstracts — New York City, August 2026
We describe an AI agent memory system grounded in Hebbian learning and spreading activation. Memories that co-occur in useful recalls strengthen their connections. Memories unused over time decay. The result is a retrieval system that learns from its own history — without retraining or fine-tuning.
Preprint available on request to kai@lichenresearch.ai.
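The retrieval loop the abstract describes can be sketched as spreading activation over a link graph, with Hebbian strengthening and decay. All parameter values here are invented for illustration; this is not the paper's code.

```python
from collections import defaultdict

DECAY = 0.95       # per-cycle multiplicative decay on every link
LEARN_RATE = 0.1   # strengthening for links that carried a useful recall
SPREAD = 0.5       # fraction of a node's activation passed to neighbours

edges = defaultdict(float)  # (src, dst) -> link strength

def spread_activation(seeds: dict[str, float], hops: int = 2) -> dict[str, float]:
    """Propagate activation outward from query-matched seed memories."""
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(hops):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            for (src, dst), w in edges.items():
                if src == node:
                    nxt[dst] += SPREAD * act * w
        for node, act in nxt.items():
            activation[node] += act
        frontier = nxt
    return dict(activation)

def reinforce(path: list[str]) -> None:
    """Hebbian step: strengthen links along a recall that proved useful."""
    for src, dst in zip(path, path[1:]):
        edges[(src, dst)] += LEARN_RATE

def decay_all() -> None:
    """Unused links fade; repeatedly reinforced ones survive."""
    for key in edges:
        edges[key] *= DECAY
```

Reinforcement and decay together are what let retrieval learn from its own history: no weights of the underlying model change, only the link graph.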
Method
Most AI memory work is engineering: how to store, index, and retrieve faster. We approach it as a measurement problem first.
Code
Open-source library: sanitized Hebbian memory, spreading activation, RRF (reciprocal rank fusion) retrieval, TReMu temporal disambiguation. Apache 2.0.
github.com/Lichen-Research-Inc/moss →
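Of the components listed above, reciprocal rank fusion has a standard published form, sketched here; this is the textbook formula, not moss's code. Each ranker contributes 1/(k + rank) per document, and the sums decide the fused order.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids into one order using
    reciprocal rank fusion. k=60 is the conventional default."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because each list contributes by rank rather than raw score, RRF fuses signals on incomparable scales (e.g. dense similarity and lexical match) without calibration.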
Hypha. A collaborative memory partner: an AI that grows hyphal pathways through your conversations. Structurally coupled, long-horizon, symbiotic.
Private Preview
Contact
Thirty minutes. No pitch — an honest assessment of fit.
Engagement scope is determined in the consultation. We price by complexity and outcome, not templates.
Receive findings and paper updates. No pitch, no cadence — only when there's something real to share.