
🧊 Lollms-VS-Coder: Agentic System Architecture & Protocols

1. The Agent Roster (Specialized Hierarchy)

Lollms operates on a Layered Autonomy model. Tasks are not handled by a single “god-model,” but delegated through a hierarchy of specialized personas.

🧞 The Lead Architect (The Genie)

  • Role: Primary mission orchestrator.
  • Function: Analyzes high-level objectives, builds multi-step plans, and delegates technical implementation to Specialists.
  • Cognition: Uses a high-frequency ReAct (Reason + Act) loop.

📚 The Lead Librarian

  • Role: Context Governance.
  • Function: Scans project structure (The Tree), identifies dependencies, and prunes/expands the context window to prevent “Context Overflow.”
  • Key Tool: auto_select_context_files.

๐Ÿ›ก๏ธ The Guardian (Verifier)

  • Role: Functional Integrity.
  • Function: Performs “Cold Audits” of generated code. Scans for syntax errors, missing imports, and logic flaws before the user sees the final output.
  • Key Tool: run_verification.

๐Ÿ› ๏ธ The Technical Specialists

  • Personas: Frontend, Backend, DevOps, Security Auditor, Embedded Expert.
  • Function: Executes surgical file operations (edit_code, generate_code) using the Aider Protocol.
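
The Layered Autonomy model above amounts to a routing step: the Lead Architect maps each plan step to the persona registered for its domain rather than implementing it itself. A minimal sketch; the registry shape and the `{"domain", "objective"}` step keys are illustrative assumptions, not the extension's actual API:

```python
def delegate(step: dict, specialists: dict):
    """Route one plan step to the specialist persona registered for
    its domain; the Architect never implements the step itself.
    (Step/registry shapes are assumptions for illustration.)"""
    specialist = specialists.get(step["domain"])
    if specialist is None:
        raise KeyError(f"no specialist registered for {step['domain']!r}")
    return specialist(step["objective"])
```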

2. Core Cognitive Techniques

🔄 The ReAct Protocol (Reason -> Act -> Observe)

Lollms does not guess. Every turn follows a strict sequence:

  1. Observe: Read current disk state, terminal output, or user feedback.
  2. Think: Hypothesize the next logical step and document it in the Scratchpad.
  3. Act: Execute exactly one tool call.
  4. Reflect: Analyze the tool’s output to determine if the hypothesis was correct.
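
The four steps above can be sketched as a single loop body, with the Scratchpad as an explicit log and a hard limit of one tool call per turn. All names here (`observe`, `think`, `tools`, `Turn`) are illustrative assumptions, not the extension's real interfaces:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    observation: str
    thought: str
    action: str   # name of the single tool invoked this turn
    result: str

def react_turn(observe, think, tools, scratchpad: list):
    """One Observe -> Think -> Act -> Reflect cycle.
    Exactly one tool call is made per turn; the full record is
    appended to the Scratchpad so the next turn can Reflect on it."""
    obs = observe()                                   # 1. Observe
    thought, tool_name, args = think(obs, scratchpad) # 2. Think
    result = tools[tool_name](**args)                 # 3. Act (one call)
    scratchpad.append(Turn(obs, thought, tool_name, result))  # 4. Reflect
    return result
```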

🔬 Reflexive Intelligence (Scientific Audit)

To prevent amnesia loops (repeatedly trying the same failing fix), Lollms uses a Failure Shaker:

  • Circuit Breaker: If an action results in a failure, that specific parameter set is “blacklisted” for the next 3 turns.
  • RCA (Root Cause Analysis): The agent is forced to explain why the previous attempt failed before being allowed to try a different approach.
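
A minimal sketch of the circuit-breaker half of the Failure Shaker, assuming the blacklist key is the (tool, parameters) pair and the cooldown is the 3 turns mentioned above; the class and method names are assumptions:

```python
class FailureShaker:
    """Blacklists a failing (tool, params) signature for a fixed
    number of turns so the agent cannot retry the identical fix.
    (Illustrative sketch; names and shapes are assumptions.)"""

    def __init__(self, cooldown_turns: int = 3):
        self.cooldown = cooldown_turns
        self.blacklist = {}  # signature -> turn number at which it expires
        self.turn = 0

    def next_turn(self):
        self.turn += 1

    def record_failure(self, tool: str, params: dict):
        sig = (tool, tuple(sorted(params.items())))
        self.blacklist[sig] = self.turn + self.cooldown

    def is_allowed(self, tool: str, params: dict) -> bool:
        sig = (tool, tuple(sorted(params.items())))
        expires = self.blacklist.get(sig)
        return expires is None or self.turn >= expires
```

Note that only the exact parameter set is blocked: the same tool with different arguments remains available, which is what forces the RCA step to produce a genuinely different approach.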

🧩 Spatial Awareness

Unlike standard LLMs that only see the current file, Lollms maintains Spatial Context:

  • The Tree Proxy: The AI sees the file tree with [C] (ContentLoaded) and [D] (DefinitionsOnly) markers.
  • Anti-Hallucination Guard: The agent is forbidden from assuming the contents of “Hidden” files. It must explicitly “reach out” via read_file to gain vision.
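
The Tree Proxy can be pictured as a plain-text rendering of per-file visibility states. The source only specifies the `[C]` and `[D]` markers; the `[H]` marker for hidden files and the state-name strings below are assumptions for illustration:

```python
def render_tree(files: dict) -> str:
    """Render a Tree Proxy: [C] = content loaded into context,
    [D] = definitions only, [H] = hidden (must be read_file'd
    before the agent may reason about its contents)."""
    lines = []
    for path, state in sorted(files.items()):
        marker = {"content": "[C]", "defs": "[D]"}.get(state, "[H]")
        lines.append(f"{marker} {path}")
    return "\n".join(lines)
```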

3. The Neural Memory System (Tiered RLM)

Lollms implements a human-like memory decay and consolidation system to keep the context lean but the “intelligence” permanent.

| Tier | Type | Persistence | Injection Logic |
| --- | --- | --- | --- |
| Tier 0 | ROM | Permanent | Immutable core protocols and system rules. |
| Tier 1 | Working | Session-based | Recent technical discoveries (e.g., “Server is on port 8080”). |
| Tier 2 | Latent | Disk-based | Archived engrams. Only the IDs are injected as a “searchable index.” |
| Tier 3 | Deep | Disk-based | Cold storage for old projects. Requires memory_search. |
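
The per-tier injection logic might look like the following sketch, where only Tiers 0 and 1 contribute full text, Tier 2 contributes an ID-only index, and Tier 3 is omitted entirely until fetched via memory_search. The `Engram` shape and section headers are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Engram:
    id: str
    text: str
    tier: int  # 0=ROM, 1=Working, 2=Latent, 3=Deep

def build_context(engrams: list) -> str:
    """Assemble the injected context per the tier table:
    Tiers 0/1 verbatim, Tier 2 as a searchable ID index,
    Tier 3 excluded (reachable only via memory_search)."""
    rules = "\n".join(e.text for e in engrams if e.tier == 0)
    working = "\n".join(e.text for e in engrams if e.tier == 1)
    index = ", ".join(e.id for e in engrams if e.tier == 2)
    return (f"# Core Protocols\n{rules}\n"
            f"# Working Memory\n{working}\n"
            f"# Latent Index\n{index}")
```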

🌙 The Dream Cycle

Every hour (or on-demand), the system performs a Dream Cycle:

  1. Decay: Importance scores of all memories drop by a fixed factor.
  2. Consolidation: Frequently accessed facts stay in Tier 1; unused facts move to Tier 2.
  3. Purge: Zero-importance facts are deleted to keep the “Project DNA” clean.
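
The three phases map naturally onto one pass over a memory store. The decay factor, the demotion threshold, and the epsilon used to treat an importance score as "zero" are all illustrative assumptions:

```python
def dream_cycle(memories: dict, decay: float = 0.9,
                demote_below: float = 0.5, purge_below: float = 0.01):
    """One Dream Cycle pass over {id: {"importance": float, "tier": int}}:
    decay all scores, demote faded Tier-1 facts to Tier 2, and purge
    near-zero facts. Tier 0 (ROM) is never purged."""
    for m in memories.values():
        m["importance"] *= decay                      # 1. Decay
        if m["tier"] == 1 and m["importance"] < demote_below:
            m["tier"] = 2                             # 2. Consolidation
    return {k: m for k, m in memories.items()         # 3. Purge
            if m["tier"] == 0 or m["importance"] >= purge_below}
```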

4. The Guardian Protocol (Self-Healing)

The “Definition of Done” for a Lollms agent is not “I wrote the code,” but “The code runs without errors.”

  1. Surgical Application: Changes are applied via AIDER (Search/Replace) to preserve local formatting.
  2. Diagnostic Pulse: After application, the system triggers a background VS Code diagnostic scan.
  3. Self-Heal Loop: If functional errors (red squiggles) are detected, the agent immediately enters Repair Mode, using the error trace to patch the logic in a recursive loop until 0 errors remain.
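
The three steps can be sketched as a bounded repair loop. The callbacks stand in for the real patch/diagnostic hooks, and the `max_rounds` budget is an assumed safeguard against a repair that never converges:

```python
def self_heal(apply_patch, get_diagnostics, repair, max_rounds: int = 5):
    """Guardian Protocol loop: apply the surgical patch, pulse the
    diagnostics, and keep repairing from the error trace until
    0 errors remain (or the assumed round budget is exhausted)."""
    apply_patch()                       # 1. Surgical Application
    for _ in range(max_rounds):
        errors = get_diagnostics()      # 2. Diagnostic Pulse
        if not errors:
            return True                 # Definition of Done reached
        repair(errors)                  # 3. Repair Mode, using the trace
    return False                        # escalate instead of looping forever
```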

5. Multi-Agent Collaboration (The Herd)

For complex brainstorming, Lollms utilizes The Herd Mode:

  • Phase 1 (Blind Brainstorming): Multiple models generate independent solutions to avoid anchoring bias.
  • Phase 2 (Peer Review): Specialists critique each other’s code (e.g., the Security Auditor reviews the Backend Specialist).
  • Phase 3 (Synthesis): The Lead Architect merges the consensus into a single, hardened implementation.
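
The three phases can be sketched as plain function composition, where Phase 1 shares no context between drafters (avoiding anchoring) and Phase 2 pairs every reviewer with every other specialist's draft. All hook signatures are assumptions:

```python
def herd_mode(specialists: dict, reviewers: dict, architect, task: str):
    """Three-phase Herd sketch. `specialists` maps name -> draft fn,
    `reviewers` maps name -> critique fn, and `architect` merges
    drafts plus critiques into one hardened result."""
    # Phase 1: blind, independent drafts (no shared context)
    drafts = {name: fn(task) for name, fn in specialists.items()}
    # Phase 2: each reviewer critiques every other specialist's draft
    critiques = {(r, s): fn(drafts[s])
                 for r, fn in reviewers.items()
                 for s in drafts if s != r}
    # Phase 3: the Lead Architect synthesizes the consensus
    return architect(drafts, critiques)
```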

6. Communication Protocols

💎 The Diamond Protocol (Skills)

Skills are “Verified Sources of Truth.” When a skill is active:

  • It overrides the model’s base training.
  • The model treats the skill content as a mandatory protocol (e.g., “Always use async/await”).

📋 The Mission Briefing

The Prime Directive for a discussion. It sits at the absolute top of the system prompt and cannot be “scrolled away” by long conversations, ensuring the AI never forgets the project’s core constraints.
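
Both communication protocols boil down to prompt ordering: pin the Mission Briefing first, inject active skills as mandatory protocol blocks, and let only the conversation history be truncated (oldest-first) when the window fills. A sketch under an assumed character budget; the real extension's token accounting and section labels will differ:

```python
def build_system_prompt(mission: str, skills: list, history: list,
                        budget: int = 4000) -> str:
    """Assemble a prompt where the Mission Briefing and skill
    protocols are fixed at the top and can never be 'scrolled away';
    only history is trimmed, keeping the newest messages."""
    head = [f"## MISSION BRIEFING (Prime Directive)\n{mission}"]
    head += [f"## MANDATORY PROTOCOL (Skill)\n{s}" for s in skills]
    fixed = "\n\n".join(head)
    room = budget - len(fixed)
    tail = []
    for msg in reversed(history):       # walk newest-to-oldest
        if room - len(msg) < 0:
            break                       # oldest messages fall off first
        tail.insert(0, msg)
        room -= len(msg)
    return fixed + "\n\n" + "\n".join(tail)
```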