Lichen Research — Ottawa, Canada

We find the numbers
that don't exist yet.

An independent research practice studying how AI systems remember. We build neuroplastic memory — Hebbian pathways, spreading activation, decay — and measure what it does on standardized benchmarks. Published paper (CCN 2026), open library (moss), private-preview partner (Hypha). We measure before claiming, cite every source, and publish what didn't work. Today we serve the research community; enterprise engagements come later, when the infrastructure for them is built.

What we've measured

These numbers came out of building and red-teaming a deployed AI agent over 120 days. They aren't claims — they're benchmark results, reproducible and documented.

Finding 01

Memory architecture matters more than you'd expect

In our controlled LoCoMo ablation, swapping the memory system while holding the model, prompts, and data constant changed accuracy by meaningful margins across categories. The largest shift came from per-category retrieval weighting — single-hop and multi-hop respond to different retrieval signals, and a one-size-fits-all pipeline leaves accuracy on the table.
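The idea of per-category weighting can be sketched in a few lines. This is a hypothetical illustration, not the deployed pipeline: the signal names (`similarity`, `recency`, `pathway`) and the weights are invented for the example.

```python
# Hypothetical sketch of per-category retrieval weighting: each question
# category blends the same raw signals differently. Signal names and
# weight values are illustrative, not the production configuration.
CATEGORY_WEIGHTS = {
    "single-hop": {"similarity": 0.7, "recency": 0.2, "pathway": 0.1},
    "multi-hop":  {"similarity": 0.4, "recency": 0.1, "pathway": 0.5},
}

def score(memory_signals, category):
    """Blend a memory's raw signals using this category's weights."""
    weights = CATEGORY_WEIGHTS[category]
    return sum(weights[name] * memory_signals[name] for name in weights)

def retrieve(candidates, category, k=2):
    """Return the top-k candidate memory ids for a question category."""
    ranked = sorted(candidates, key=lambda m: score(m["signals"], category),
                    reverse=True)
    return [m["id"] for m in ranked[:k]]
```

Under weights like these, a memory with strong pathway links can outrank a more similar one on multi-hop questions while losing to it on single-hop ones, which is the behavior a one-size-fits-all pipeline forgoes.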

Finding 02

Hebbian pathways resist retrieval bias

Standard memory systems retrieve by similarity. Under adversarial conditions they retrieve confidently wrong answers. Memories linked by Hebbian co-activation pathways develop lateral inhibition — semantically similar but contextually wrong memories suppress each other at recall time.

100%

adversarial recall accuracy on LoCoMo (47/47 questions)
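One minimal way to picture lateral inhibition at recall time, purely as a sketch: candidates that share no co-activation pathway suppress each other in proportion to each rival's similarity score. The function, its arguments, and the inhibition strength are assumptions for illustration, not the system described above.

```python
# Hypothetical sketch of lateral inhibition at recall: semantically similar
# candidates with no Hebbian pathway between them suppress one another.
# All names and the strength constant are illustrative.
def inhibit(scores, pathways, strength=0.3):
    """Suppress each candidate by rivals it shares no pathway with.

    scores:   {memory_id: similarity score}
    pathways: set of frozenset({id_a, id_b}) co-activation links
    """
    adjusted = {}
    for mid, s in scores.items():
        rivals = [r_s for rid, r_s in scores.items()
                  if rid != mid and frozenset((mid, rid)) not in pathways]
        adjusted[mid] = s - strength * sum(rivals)
    return adjusted
```

The effect: a confidently-similar but contextually wrong memory no longer wins on raw similarity alone, because unlinked rivals drag its score down.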

Finding 03

Long-conversation memory is unsolved

Most AI memory benchmarks test single-session recall. The LoCoMo benchmark tests across 10 conversations, 1,986 questions, multiple categories. The field's best public systems plateau around 86–92%. The hardest category — multi-hop reasoning across memory — still sits below 80% for every public system.

77.6%

our baseline on LoCoMo full ring (1,986 questions, local 27B model)

Finding 04

Retrieval has a ceiling the model has to cross

When we hold the retrieval pipeline identical and swap the language model, per-category accuracy shifts in a predictable pattern. Frontier-tier models close most of the remaining gap on categories a local model misses. Retrieval decides what the model gets to see — the model decides what it does with it. The decomposition is ongoing; full results will accompany our NeurIPS 2026 submission.


Published work

Our first paper documents the neuroplastic memory system behind the findings above. A second paper extending the decomposition in Finding 04 is in preparation for NeurIPS 2026.

CCN 2026 — Extended Abstracts — New York City, August 2026

Neuroplastic Memory Resists Retrieval Bias: Hebbian Pathways in a Deployed AI Agent

Kai Avery · Lichen Research

We describe an AI agent memory system grounded in Hebbian learning and spreading activation. Memories that co-occur in useful recalls strengthen their connections. Memories unused over time decay. The result is a retrieval system that learns from its own history — without retraining or fine-tuning.
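The two dynamics in that description, strengthening on co-recall and decay with disuse, can be sketched with a standard Hebbian-style update and exponential decay. The rates and function names here are illustrative assumptions, not the paper's parameters.

```python
import math

# Hypothetical sketch of the two dynamics described above. LEARNING_RATE
# and DECAY_RATE are illustrative, not the values used in the paper.
LEARNING_RATE = 0.1
DECAY_RATE = 0.01  # per time step

def strengthen(weights, recalled_together):
    """Hebbian step: pathways between co-recalled pairs move toward 1.0."""
    for pair in recalled_together:
        w = weights.get(pair, 0.0)
        weights[pair] = w + LEARNING_RATE * (1.0 - w)

def decay(weights, elapsed_steps):
    """Unused pathways fade exponentially toward zero."""
    for pair in weights:
        weights[pair] *= math.exp(-DECAY_RATE * elapsed_steps)
```

Because both updates act only on stored pathway weights, the retrieval behavior changes over time without any retraining or fine-tuning of the model itself, which is the property the abstract emphasizes.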

Preprint available on request to kai@lichenresearch.ai.

How we think about this

Most AI memory work is engineering work: how to store, index, and retrieve faster. We approach it as a measurement problem first.

What we build in the open

moss

Open-source library: sanitized Hebbian memory, spreading activation, RRF retrieval, TReMu temporal disambiguation. Apache 2.0.

github.com/Lichen-Research-Inc/moss →
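Of the components listed, reciprocal rank fusion (RRF) has a standard public formula: each document scores the sum of 1/(k + rank) across the ranked lists that contain it. A minimal sketch follows; the function name and signature are illustrative, not the moss API.

```python
# Reciprocal rank fusion: fuse several ranked lists into one.
# score(doc) = sum over lists of 1 / (k + rank), with k conventionally 60.
def rrf(rankings, k=60):
    """rankings: list of ranked lists of doc ids; returns fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

RRF needs only ranks, not comparable scores, which is why it is a common way to combine lexical and embedding retrievers in one pipeline.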

Hypha

A collaborative memory partner. An AI that grows hyphal pathways through your conversations — structurally coupled, long-horizon, symbiotic.

Private Preview

Make an inquiry

kai@lichenresearch.ai

Thirty minutes. No pitch — an honest assessment of fit.

Engagement scope is determined in the consultation. We price by complexity and outcome, not templates.

Receive findings and paper updates. No pitch, no cadence — only when there's something real to share.