SNL-1: White Paper

Title

From Tokens to Thought: Inside the Netti-AI Memory Graph

Author

SynaptechLabs

Date

June 2025

Abstract

Netti-AI is a biologically inspired neural reasoning engine designed to simulate human-like memory, context awareness, and cognitive association. At its core lies the Netti Memory Graph: a symbolic, weighted, and dynamic structure representing the relationships between concepts over time. This white paper dives into the internal workings of the memory graph, exploring how tokenized input is transformed into structured cognition through activation, recall, and emotional bias.

1. Introduction

Traditional neural networks operate on fixed architectures and numeric gradients. Netti-AI takes a symbolic approach in which individual neurons (or 'nodes') represent discrete tokens tagged by type, mood, role, or context. Connections form through co-activation, evolving a graph of meaningful associations that can be recalled, strengthened, or pruned.

Netti's architecture seeks to mirror the flexible, emergent, and recursive nature of thought: not just processing information, but growing understanding.

2. Tokenization and Input Flow

All input to Netti-AI is first passed through a Tokenizer, which parses and tags the data:

- word:home

- num:3.14

- mood:curious

- punc:?

These tokens are then injected into the neural graph as activation pulses. Their tags guide link formation, filtering, and neuron creation.
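The tag:value scheme above can be sketched as a small tokenizer. This is an illustrative assumption, not the engine's actual implementation: the tag set (word, num, punc) follows the examples listed above, while mood tags would come from a separate affective pass and are omitted here.

```python
import re

# Matches, in order of priority: numbers (with optional decimal part),
# word characters, then any single non-space symbol (punctuation).
TOKEN_PATTERN = re.compile(r"\d+(?:\.\d+)?|\w+|[^\w\s]")

def tokenize(text: str) -> list[str]:
    """Split raw input into Netti-style tag:value tokens."""
    tagged = []
    for tok in TOKEN_PATTERN.findall(text):
        if re.fullmatch(r"\d+(?:\.\d+)?", tok):
            tagged.append(f"num:{tok}")
        elif re.fullmatch(r"\w+", tok):
            tagged.append(f"word:{tok.lower()}")
        else:
            tagged.append(f"punc:{tok}")
    return tagged

print(tokenize("Is home 3.14?"))
# ['word:is', 'word:home', 'num:3.14', 'punc:?']
```

Each tagged token would then be injected into the graph as an activation pulse, with the tag prefix steering link formation.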

3. The Memory Graph

The Netti Memory Graph consists of:

- Nodes: Represent symbols (e.g., concepts, emotions, syntax units).

- Links: Directed, weighted, inhibitory or excitatory.

- Activation States: Transient values representing the current focus of thought.

- Decay Functions: Prevent runaway activation and enforce forgetting.

Nodes are linked through Hebbian-style learning: neurons that fire together, wire together.

The graph is context-sensitive: the same token may behave differently depending on mood, recent history, or activation neighborhood.
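The four ingredients listed above, together with the Hebbian rule, can be sketched in a few lines. All names, rates, and structures here are assumptions for illustration; the paper does not fix an API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    token: str
    activation: float = 0.0
    # target token -> signed weight (positive = excitatory, negative = inhibitory)
    links: dict[str, float] = field(default_factory=dict)

class MemoryGraph:
    def __init__(self, learn_rate: float = 0.1, decay: float = 0.9):
        self.nodes: dict[str, Node] = {}
        self.learn_rate = learn_rate
        self.decay = decay

    def node(self, token: str) -> Node:
        return self.nodes.setdefault(token, Node(token))

    def co_activate(self, a: str, b: str) -> None:
        """Hebbian rule: tokens that fire together get a stronger link."""
        na, nb = self.node(a), self.node(b)
        na.activation = nb.activation = 1.0
        na.links[b] = na.links.get(b, 0.0) + self.learn_rate

    def step(self) -> None:
        """Decay every activation so focus fades and forgetting is enforced."""
        for n in self.nodes.values():
            n.activation *= self.decay

g = MemoryGraph()
g.co_activate("word:home", "mood:curious")
g.co_activate("word:home", "mood:curious")
print(g.nodes["word:home"].links)  # {'mood:curious': 0.2}
```

Repeated co-activation strengthens the link, while each step of decay pulls unused activations back toward zero, giving the strengthen/prune dynamic described above.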

4. Activation and Propagation

Token input initiates propagation, where signals traverse the graph:

- Nodes accumulate activation.

- Links amplify or inhibit based on weight and type.

- High-activation clusters form thought patterns or associative flares.

Short-term context windows retain the latest active tokens, guiding predictions and recall.
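One propagation step under the rules above might look like the following sketch: activation flows along signed links, positive weights excite targets, negative weights inhibit them, and nodes accumulate the result. The dictionary representation and the floor at zero are illustrative assumptions.

```python
def propagate(activations: dict[str, float],
              links: dict[str, dict[str, float]]) -> dict[str, float]:
    """One spreading-activation step over a signed, weighted graph."""
    incoming: dict[str, float] = {}
    for src, act in activations.items():
        for dst, weight in links.get(src, {}).items():
            # Positive weights excite the target; negative weights inhibit it.
            incoming[dst] = incoming.get(dst, 0.0) + act * weight
    # Nodes accumulate: new activation = old + incoming, floored at zero.
    merged = dict(activations)
    for node, delta in incoming.items():
        merged[node] = max(0.0, merged.get(node, 0.0) + delta)
    return merged

links = {"word:home": {"word:family": 0.8, "word:work": -0.5}}
state = propagate({"word:home": 1.0, "word:work": 0.4}, links)
print(state)  # word:family is excited; word:work is pushed toward zero
```

Iterating this step lets high-activation clusters (the "associative flares" above) emerge, while inhibition keeps competing concepts suppressed.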

5. Mood and Symbolic Cognition

Mood vectors influence:

- Which links are more likely to be followed

- Whether a node is suppressed or highlighted

- What is predicted next

Thus, emotional state becomes a filter on memory and logic, creating a dynamic form of mood-biased cognition.
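One way to realize this filter, sketched below under invented assumptions (mood-tagged links and an exponential bias), is to rescale link weights by how well each link's mood tag matches the current mood vector before any link is followed.

```python
import math

def mood_biased_weights(links: dict[str, tuple[float, str]],
                        mood: dict[str, float]) -> dict[str, float]:
    """Rescale each link's weight by the current mood's affinity for its mood tag."""
    biased = {}
    for target, (weight, mood_tag) in links.items():
        # exp(>0) amplifies the link; exp(<0) suppresses it; exp(0) leaves it alone.
        bias = math.exp(mood.get(mood_tag, 0.0))
        biased[target] = weight * bias
    return biased

links = {"word:puzzle": (0.5, "curious"), "word:nap": (0.5, "calm")}
print(mood_biased_weights(links, {"curious": 1.0, "calm": -1.0}))
# In a curious mood, word:puzzle now outweighs word:nap.
```

The same link structure thus yields different traversals, suppressions, and predictions depending on the active mood vector.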

6. Recall and Association

- Episodic Memory: Sequences can be stored and retrieved as 'episodes' tagged with time and mood.

- Associative Queries: The engine can answer an assoc <token> query, listing a token's direct neighbors.

- Compression: Redundant activations are merged to simulate insight or abstraction.
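The first two recall mechanisms can be sketched as follows; the episode record format and the assoc helper are assumptions for illustration, not the engine's actual storage layout.

```python
# An episode: a token sequence tagged with time and mood, as described above.
episodes = [
    {"time": 1, "mood": "curious", "tokens": ["word:home", "word:garden"]},
]

# Direct associative links: token -> {neighbor: weight}.
links = {"word:home": {"word:family": 0.8, "word:garden": 0.6}}

def assoc(token: str) -> list[tuple[str, float]]:
    """Answer an assoc-style query: direct neighbors, strongest first."""
    return sorted(links.get(token, {}).items(), key=lambda kv: -kv[1])

print(assoc("word:home"))
# [('word:family', 0.8), ('word:garden', 0.6)]
```

Compression would then merge neighbors that repeatedly co-occur into a single abstract node, but that step depends on similarity measures the paper leaves unspecified.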

7. Visualization and Debugging

Netti supports Graphviz .dot exports to visualize its memory graph:

- Nodes sized by activation

- Colored by mood or token type

- Temporal activity trails available via CLI

These visualizations help developers and researchers understand how thought flows within the system.
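A minimal exporter in the spirit described above might look like this; the colour palette and the width-scaling rule are assumptions, and any Graphviz-compatible styling would serve equally well.

```python
TYPE_COLOURS = {"word": "lightblue", "num": "lightgreen",
                "mood": "gold", "punc": "gray"}

def to_dot(nodes: dict[str, float], links: dict[str, dict[str, float]]) -> str:
    """Emit a Graphviz .dot graph: node size from activation, colour from token type."""
    lines = ["digraph netti {"]
    for token, activation in nodes.items():
        colour = TYPE_COLOURS.get(token.split(":")[0], "white")
        size = 0.5 + activation  # more active nodes render larger
        lines.append(f'  "{token}" [width={size:.2f}, style=filled, fillcolor={colour}];')
    for src, targets in links.items():
        for dst, weight in targets.items():
            lines.append(f'  "{src}" -> "{dst}" [label="{weight:.2f}"];')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot({"word:home": 1.0, "mood:curious": 0.4},
             {"word:home": {"mood:curious": 0.7}})
print(dot)
```

The resulting text can be rendered with the standard Graphviz toolchain, e.g. `dot -Tpng graph.dot -o graph.png`.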

8. Future Work

Planned enhancements include:

- Long-range inhibitory motifs

- Pattern detectors for analogy or metaphor

- Symbolic math as a subgraph

- Memory grafting between agents

9. Conclusion

The Netti Memory Graph is more than a data structure: it is the cognitive substrate of an emerging artificial mind. By translating tokens into evolving symbolic relationships, Netti-AI mimics the rich interplay of memory, context, and emotion that underpins intelligent behavior. From tokens to thought, the journey is no longer purely computational; it is cognitive.

Contact

SynaptechLabs

Email: research@synaptechlabs.ai

Web: https://www.synaptechlabs.ai