1. The Agent Roster (Specialized Hierarchy)
Lollms operates on a Layered Autonomy model. Tasks are not handled by a single “god-model,” but delegated through a hierarchy of specialized personas.
🧠 The Lead Architect (The Genie)
- Role: Primary mission orchestrator.
- Function: Analyzes high-level objectives, builds multi-step plans, and delegates technical implementation to Specialists.
- Cognition: Uses a high-frequency ReAct (Reason + Act) loop.
📚 The Lead Librarian
- Role: Context Governance.
- Function: Scans project structure (The Tree), identifies dependencies, and prunes/expands the context window to prevent “Context Overflow.”
- Key Tool: `auto_select_context_files`.
🛡️ The Guardian (Verifier)
- Role: Functional Integrity.
- Function: Performs “Cold Audits” of generated code. Scans for syntax errors, missing imports, and logic flaws before the user sees the final output.
- Key Tool: `run_verification`.
🛠️ The Technical Specialists
- Personas: Frontend, Backend, DevOps, Security Auditor, Embedded Expert.
- Function: Executes surgical file operations (`edit_code`, `generate_code`) using the Aider Protocol.
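The hand-off in this hierarchy can be sketched as a simple skill-based router. Everything below is illustrative (the class and method names are not the real Lollms API), assuming the Lead Architect routes each task to the first specialist that advertises the required skill:

```python
from dataclasses import dataclass, field

@dataclass
class Specialist:
    name: str
    skills: set

@dataclass
class LeadArchitect:
    specialists: list = field(default_factory=list)

    def delegate(self, task_skill: str) -> str:
        """Route a task to the first specialist advertising the needed skill."""
        for s in self.specialists:
            if task_skill in s.skills:
                return s.name
        return "LeadArchitect"  # no match: the orchestrator handles it itself

architect = LeadArchitect([
    Specialist("Frontend", {"html", "css"}),
    Specialist("Backend", {"api", "db"}),
    Specialist("SecurityAuditor", {"audit"}),
])
assert architect.delegate("db") == "Backend"
```

The key design point of Layered Autonomy is visible even in this toy: the orchestrator never implements the task itself unless no specialist matches.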
2. Core Cognitive Techniques
🔁 The ReAct Protocol (Reason -> Act -> Observe)
Lollms does not guess. Every turn follows a strict sequence:
- Observe: Read current disk state, terminal output, or user feedback.
- Think: Hypothesize the next logical step and document it in the Scratchpad.
- Act: Execute exactly one tool call.
- Reflect: Analyze the tool’s output to determine if the hypothesis was correct.
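The four steps above can be sketched as a single turn function. This is a minimal sketch, not the real implementation: `tools`, the observation dict, and the scratchpad list are all stand-ins:

```python
def react_turn(observation: dict, tools: dict, scratchpad: list) -> dict:
    # Think: record the hypothesis in the scratchpad before acting.
    if observation.get("error"):
        hypothesis = ("fix_error", observation["error"])
    else:
        hypothesis = ("proceed", observation.get("goal", "unknown"))
    scratchpad.append(hypothesis)

    # Act: exactly one tool call per turn.
    tool_name = "repair" if hypothesis[0] == "fix_error" else "advance"
    result = tools[tool_name](observation)

    # Reflect: did the action confirm the hypothesis?
    return {"hypothesis": hypothesis, "result": result,
            "confirmed": result.get("ok", False)}

tools = {"repair": lambda o: {"ok": True}, "advance": lambda o: {"ok": True}}
pad = []
turn = react_turn({"goal": "build"}, tools, pad)
assert turn["confirmed"] and len(pad) == 1
```

Note the invariant the protocol enforces: the scratchpad entry is written *before* the tool call, so the Reflect step always has a concrete hypothesis to test against.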
🔬 Reflexive Intelligence (Scientific Audit)
To prevent amnesia loops (repeatedly trying the same failing fix), Lollms uses a Failure Shaker:
- Circuit Breaker: If an action results in a failure, that specific parameter set is “blacklisted” for the next 3 turns.
- RCA (Root Cause Analysis): The agent is forced to explain why the previous attempt failed before being allowed to try a different approach.
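The circuit-breaker half of this mechanism can be sketched as a blacklist keyed on the exact parameter set, expiring after three turns. Class and method names here are illustrative, not the actual Lollms API:

```python
class CircuitBreaker:
    def __init__(self, cooldown_turns: int = 3):
        self.cooldown = cooldown_turns
        self.blacklist = {}  # frozen parameter set -> turn at which it expires

    def record_failure(self, params: dict, turn: int):
        """Blacklist this exact parameter set for the next `cooldown` turns."""
        self.blacklist[frozenset(params.items())] = turn + self.cooldown

    def is_allowed(self, params: dict, turn: int) -> bool:
        expiry = self.blacklist.get(frozenset(params.items()))
        return expiry is None or turn >= expiry

cb = CircuitBreaker()
cb.record_failure({"cmd": "pip install x"}, turn=1)
assert not cb.is_allowed({"cmd": "pip install x"}, turn=2)  # still blacklisted
assert cb.is_allowed({"cmd": "pip install x"}, turn=4)      # cooldown elapsed
assert cb.is_allowed({"cmd": "pip install y"}, turn=2)      # different params OK
```

Keying on the full parameter set (rather than the tool name) is what breaks the amnesia loop: the agent may retry the same tool, but only with a different hypothesis.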
🧩 Spatial Awareness
Unlike standard LLMs that only see the current file, Lollms maintains Spatial Context:
- The Tree Proxy: The AI sees the file tree with `[C]` (Content Loaded) and `[D]` (Definitions Only) markers.
- Anti-Hallucination Guard: The agent is forbidden from assuming the contents of “Hidden” files. It must explicitly “reach out” via `read_file` to gain vision.
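A minimal sketch of what the Tree Proxy might render, assuming a simple map from path to loaded-state (the function name is hypothetical):

```python
def render_tree(files: dict) -> str:
    """files maps path -> True if content is loaded, False if only
    definitions are visible. Returns one marked line per file."""
    lines = []
    for path in sorted(files):
        marker = "[C]" if files[path] else "[D]"
        lines.append(f"{marker} {path}")
    return "\n".join(lines)

tree = render_tree({"app/main.py": True, "app/db.py": False})
assert tree.splitlines() == ["[D] app/db.py", "[C] app/main.py"]
```

The agent sees that `app/db.py` exists and which symbols it defines, but must call `read_file` before reasoning about its body.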
3. The Neural Memory System (Tiered RLM)
Lollms implements a human-like memory decay and consolidation system to keep the context lean but the “intelligence” permanent.
| Tier | Type | Persistence | Injection Logic |
|---|---|---|---|
| Tier 0 | ROM | Permanent | Immutable core protocols and system rules. |
| Tier 1 | Working | Session-based | Recent technical discoveries (e.g., “Server is on port 8080”). |
| Tier 2 | Latent | Disk-based | Archived engrams. Only the IDs are injected as a “searchable index.” |
| Tier 3 | Deep | Disk-based | Cold storage for old projects. Requires `memory_search`. |
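The injection logic in the table can be sketched as a single filter over stored memories. This is a hedged sketch, assuming each memory carries a `tier`, `id`, and `text` field (the field names are illustrative):

```python
def build_context(memories: list) -> str:
    """Tier 0/1 is injected verbatim; Tier 2 contributes only searchable
    engram IDs; Tier 3 is omitted until explicitly fetched via search."""
    parts = []
    for m in memories:
        if m["tier"] <= 1:
            parts.append(m["text"])              # full content injected
        elif m["tier"] == 2:
            parts.append(f"[engram:{m['id']}]")  # index entry only
        # tier 3: left out entirely; reachable via memory_search
    return "\n".join(parts)

mems = [
    {"tier": 0, "id": "rom1", "text": "Core protocol: never guess."},
    {"tier": 1, "id": "w1", "text": "Server is on port 8080"},
    {"tier": 2, "id": "e42", "text": "archived detail"},
    {"tier": 3, "id": "d9", "text": "old project notes"},
]
ctx = build_context(mems)
assert "port 8080" in ctx and "[engram:e42]" in ctx
assert "old project notes" not in ctx
```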
🌙 The Dream Cycle
Every hour (or on demand), the system performs a Dream Cycle:
- Decay: Importance scores of all memories drop by a fixed factor.
- Consolidation: Frequently accessed facts stay in Tier 1; unused facts move to Tier 2.
- Purge: Zero-importance facts are deleted to keep the “Project DNA” clean.
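One pass of the cycle can be sketched as follows. The decay factor and demotion threshold here are illustrative guesses, not the real tuning:

```python
DECAY = 0.5          # assumed fixed decay factor
DEMOTE_BELOW = 0.3   # assumed threshold for moving Tier 1 -> Tier 2

def dream_cycle(facts: list) -> list:
    survivors = []
    for f in facts:
        f = dict(f, importance=round(f["importance"] * DECAY, 3))  # Decay
        if f["importance"] <= 0.0:
            continue                           # Purge: drop dead facts
        if f["tier"] == 1 and f["importance"] < DEMOTE_BELOW:
            f["tier"] = 2                      # Consolidation: demote to latent
        survivors.append(f)
    return survivors

facts = [
    {"id": "hot", "tier": 1, "importance": 0.9},   # stays in Tier 1
    {"id": "cold", "tier": 1, "importance": 0.4},  # demoted to Tier 2
    {"id": "dead", "tier": 2, "importance": 0.0},  # purged
]
after = dream_cycle(facts)
assert [f["id"] for f in after] == ["hot", "cold"]
```

Frequently accessed facts keep their importance topped up between cycles, which is what lets them survive in Tier 1 while unused facts drift downward.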
4. The Guardian Protocol (Self-Healing)
The “Definition of Done” for a Lollms agent is not “I wrote the code,” but “The code runs without errors.”
- Surgical Application: Changes are applied via AIDER (Search/Replace) to preserve local formatting.
- Diagnostic Pulse: After application, the system triggers a background VS Code diagnostic scan.
- Self-Heal Loop: If functional errors (red squiggles) are detected, the agent immediately enters Repair Mode, using the error trace to patch the logic in a recursive loop until 0 errors remain.
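The repair loop can be sketched as a bounded fixed-point iteration. `run_diagnostics` and `repair` below are stand-ins for the real editor integration, and the retry budget is an assumption:

```python
def self_heal(code: str, run_diagnostics, repair, max_rounds: int = 5) -> str:
    """Apply diagnostics and repairs recursively until zero errors remain."""
    for _ in range(max_rounds):
        errors = run_diagnostics(code)
        if not errors:
            return code          # Definition of Done: zero diagnostics
        code = repair(code, errors)
    raise RuntimeError("could not reach a clean diagnostic state")

# Toy environment: one missing-import error that a single repair fixes.
diag = lambda c: [] if c.startswith("import os") else ["missing import: os"]
fix = lambda c, errs: "import os\n" + c
healed = self_heal("print(os.getcwd())", diag, fix)
assert healed == "import os\nprint(os.getcwd())"
```

The retry budget matters: without it, a repair that introduces a new error could loop forever, so a real system needs either a cap or the circuit breaker described earlier.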
5. Multi-Agent Collaboration (The Herd)
For complex brainstorming, Lollms utilizes The Herd Mode:
- Phase 1 (Blind Brainstorming): Multiple models generate independent solutions to avoid anchoring bias.
- Phase 2 (Peer Review): Specialists critique each other’s code (e.g., the Security Auditor reviews the Backend Specialist).
- Phase 3 (Synthesis): The Lead Architect merges the consensus into a single, hardened implementation.
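The three phases can be sketched with plain functions standing in for models. This is illustrative only; the real Herd dispatches to separate LLM backends:

```python
def herd(problem: str, models: dict, reviewers: dict) -> str:
    # Phase 1: blind brainstorming, each model answers without seeing the others
    drafts = {name: fn(problem) for name, fn in models.items()}
    # Phase 2: peer review, each draft is critiqued by a matched specialist
    reviews = {name: reviewers[name](draft) for name, draft in drafts.items()}
    # Phase 3: synthesis, drafts plus review notes are merged into one answer
    return " | ".join(f"{d} ({reviews[n]})" for n, d in sorted(drafts.items()))

models = {"backend": lambda p: "use FastAPI", "frontend": lambda p: "use React"}
reviewers = {"backend": lambda d: "add auth", "frontend": lambda d: "ok"}
merged = herd("build an app", models, reviewers)
assert "use FastAPI (add auth)" in merged
```

Phase 1 is deliberately isolated: because no draft sees another, the anchoring bias of the first answer cannot propagate into the rest of the herd.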
6. Communication Protocols
💎 The Diamond Protocol (Skills)
Skills are “Verified Sources of Truth.” When a skill is active:
- It overrides the model’s base training.
- The model treats the skill content as a mandatory protocol (e.g., “Always use async/await”).
📌 The Mission Briefing
The Prime Directive for a discussion. It sits at the absolute top of the system prompt and cannot be “scrolled away” by long conversations, ensuring the AI never forgets the project’s core constraints.
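Both protocols can be sketched with a single prompt assembler: the Mission Briefing is pinned at the top, active skills are injected as mandatory protocol, and only conversation history is ever truncated. The function name and layout are illustrative assumptions:

```python
def assemble_prompt(briefing: str, skills: list, history: list,
                    max_history: int = 2) -> str:
    """Pin the briefing and skills above a sliding window of history."""
    pinned = [f"# MISSION BRIEFING\n{briefing}"]
    pinned += [f"# SKILL (mandatory): {s}" for s in skills]
    # Truncate only from the oldest end: the briefing can never scroll away.
    return "\n".join(pinned + history[-max_history:])

prompt = assemble_prompt(
    "Ship v2 with zero regressions",
    ["Always use async/await"],
    ["turn1", "turn2", "turn3"],
)
assert prompt.splitlines()[1] == "Ship v2 with zero regressions"
assert "turn1" not in prompt and "turn3" in prompt
```

However long the conversation grows, the truncation window only ever eats old turns, so the Prime Directive and active skills stay in context on every call.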