Creating an n8n AI agent that manages a “MindGarden” (a folder structure with content) and builds its logic recursively from that content is a fantastic way to achieve a self-contained and self-optimizing knowledge system.
Here’s a breakdown of the design and key components for your n8n agent and MindGarden structure, focusing on keeping the n8n workflow “light” and pushing the “logic into the structure.”
🧠 MindGarden Structure & Logic
The core idea is to use the MindGarden’s file structure and content to store the agent’s logic, prompts, memory, and next steps.
1. Core Structure
Your MindGarden folder will need a few foundational files and folders:
| File/Folder | Purpose | Content Type |
|---|---|---|
| `index.md` | The Agent’s Instructions. Contains the initial prompt/goal for the agent for that specific project, as well as a high-level description of the MindGarden’s purpose. | Markdown |
| `_system_prompt.md` | The Agent’s Identity/Role. The core “System” prompt the LLM always sees. It instructs the LLM on its capabilities (create, modify, delete files), its goals, and its operating procedure (always check `_next_actions.md`). | Markdown |
| `_memory.json` | Memory/Context Storage. A structured file that stores key facts, past decisions, API results, or context that needs to persist across iterations (a sketch follows below). | JSON (for easy parsing) |
| `_next_actions.md` | Action Queue. The most critical file. The agent reads it at the start of each iteration and writes new actions to it at the end. It contains a prioritized, step-by-step list of what to do next. | Markdown list/YAML |
| `/prompts/` | Prompt Templates/Tools. A folder containing specialized prompts for common tasks (e.g., `research_query.md`, `summarize_file.md`, `refactor_content.md`). | Markdown |
| `/history/` | Audit/Traceability. A log of every action taken and the LLM response that generated it (e.g., `2025-09-30_action_1.md`). | Markdown |
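To make the state files concrete, here is a minimal sketch of what `_memory.json` might contain early in a project. The keys shown (`facts`, `decisions`, `created_files`, `next_research_query`) are illustrative placeholders, not a required schema:

```json
{
  "project": "Self-Healing Server Mesh",
  "facts": {
    "service_mesh_definition": "Pending research"
  },
  "decisions": [],
  "created_files": [],
  "next_research_query": null
}
```

Keeping it flat and predictable makes it easy for the n8n Code node to merge new keys back in without parsing errors.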
2. Logic via Structure (Key Concept)
The agent’s “program” isn’t in n8n nodes; it’s the process of reading and writing these files:
- Read State: Read `_system_prompt.md`, `index.md` (the goal), `_memory.json`, and, most importantly, `_next_actions.md`.
- Decide/Execute: Take the top action from `_next_actions.md`.
  - If the action requires an LLM, the LLM prompt is assembled from:
    - The System Prompt (`_system_prompt.md`).
    - The Goal (`index.md`).
    - Relevant Context (found via links or `_memory.json`).
    - The Action Description (from `_next_actions.md`).
    - The Action-Specific Prompt (e.g., a file loaded from `/prompts/`).
  - The LLM’s response must be an action command, e.g., `CREATE FILE "new_file.md" WITH CONTENT "..."` or `UPDATE _next_actions.md BY ADDING "..."` (a structured JSON variant is sketched below).
- Update State: Execute the LLM’s command (create/modify/delete file) and write the next set of actions (if any) back to `_next_actions.md`.
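If you prefer a structured reply over the plain-text command format, a minimal JSON action object could look like the sketch below. The field names are assumptions, not a fixed n8n or LLM convention:

```json
{
  "action": "CREATE_FILE",
  "path": "core_concepts.md",
  "content": "# Core Concepts\n...",
  "next_actions": [
    "Summarize the research findings in core_concepts.md"
  ]
}
```

A JSON shape like this is easier to validate in the Code node than free text, and a malformed response can simply be rejected and retried.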
⚙️ Light n8n Workflow Design
The n8n workflow primarily acts as the “executor” and “file manager.”
1. Trigger & Loop
- Trigger: A simple Manual Trigger or a Schedule Trigger (to run the agent every X minutes).
- Loop: Use n8n’s loop functionality or a simple IF/ELSE check on the content of `_next_actions.md` to determine whether another cycle is needed (a sketch of that check follows below).
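As an illustration, a Code node placed before the IF node could set a boolean flag from the file content. This is a minimal sketch; the field names `nextActions` and `shouldContinue` are placeholders for whatever your Read Files node actually outputs:

```javascript
// Code node: decide whether another iteration is needed.
// Assumes the previous node put the text of _next_actions.md into `nextActions`.
const text = ($input.first().json.nextActions || '').trim();

// Continue only if at least one non-empty line remains in the queue.
const hasPendingActions = text
  .split('\n')
  .some(line => line.trim().length > 0);

return [{ json: { shouldContinue: hasPendingActions } }];
```

The IF node then routes on `shouldContinue` to either loop back or stop the run.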
2. Key Nodes & Functionality
| Node Type | Purpose | Key Action/Logic |
|---|---|---|
| Read Files (File System/Cloud Storage) | Reads the core state files (`_system_prompt.md`, `_next_actions.md`, `_memory.json`). | Always reads these first to prime the agent. |
| Set (or Function) | Prompt Assembly. Combines the content from all the files into the final, comprehensive prompt for the LLM. | Ensures all context, the goal, and the system prompt are passed. |
| AI (OpenAI, Gemini, etc.) | The Agent’s Brain. Sends the assembled prompt and receives the response. | Input: full prompt. Output: a structured action command (e.g., JSON or a specific text format). |
| Code (or Function) | Action Parsing. Parses the LLM’s structured output to determine which file action to take (CREATE, MODIFY, DELETE, RESEARCH). | This logic is critical for making the LLM’s output actionable by n8n (see the parsing sketch below). |
| Write Files (File System/Cloud Storage) | Execution/Update. Executes the parsed action (creating a new file, writing the new `_next_actions.md`, or logging history). | The final step of the iteration, updating the MindGarden’s state. |
| HTTP Request (Optional) | Research/Online Access. Used when the action is RESEARCH; the LLM’s output contains the search query. | The n8n Code/Function node passes the query to this node. |
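Here is a sketch of what the Action Parsing step could look like in an n8n Code node, assuming the LLM was instructed to reply with the JSON action format sketched earlier. The `llmOutput` field and the action names are assumptions to adapt to your setup:

```javascript
// Code node: parse the LLM reply into an action n8n can execute.
// Assumes the AI node's text output is available as `llmOutput`.
const raw = $input.first().json.llmOutput || '';

// Strip an optional ```json fence before parsing.
const cleaned = raw.replace(/^```(?:json)?\s*/i, '').replace(/```\s*$/, '').trim();

let action;
try {
  action = JSON.parse(cleaned);
} catch (err) {
  // Fall back to a no-op so a malformed reply doesn't break the run.
  action = { action: 'NONE', error: `Unparseable LLM output: ${err.message}` };
}

const allowed = ['CREATE_FILE', 'MODIFY_FILE', 'DELETE_FILE', 'RESEARCH', 'NONE'];
if (!allowed.includes(action.action)) {
  action = { action: 'NONE', error: `Unknown action type: ${action.action}` };
}

// Downstream nodes (Write Files, HTTP Request) branch on `action.action`.
return [{ json: action }];
```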
3. Wiki Link Navigation
The agent finds files via Wiki Links (e.g., `[[Concept X]]`) by:
- The LLM being instructed to output the filename it needs to read in order to answer a question or continue an action.
- The n8n Code/Function node detecting the `[[...]]` syntax in the LLM’s output (a regex sketch follows below).
- The n8n workflow then using the File System Read node to fetch the content of the file named by the link and injecting it into the next prompt as context.
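A minimal sketch of the wiki-link detection in a Code node might look like this; the mapping from `[[Concept X]]` to `Concept X.md` is an assumed naming convention, not something n8n enforces:

```javascript
// Code node: extract [[Wiki Links]] from the LLM output and turn them into file paths.
const text = $input.first().json.llmOutput || '';

// Match every [[...]] occurrence, e.g. [[Concept X]] -> "Concept X".
const links = [...text.matchAll(/\[\[([^\]]+)\]\]/g)].map(m => m[1].trim());

// Assumed convention: "Concept X" lives at "Concept X.md" in the MindGarden root.
const filesToRead = links.map(name => `${name}.md`);

return [{ json: { filesToRead } }];
```

Each path can then be fed to the Read Files node, and the returned content appended to the next prompt.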
💡 First Concept (Your `index.md`)
When you are ready to start, the content of your first `index.md` will be the initial instructions for the agent.
For a first test, I recommend a concept that involves multiple steps, research, and file creation.
Example `index.md`:
# MindGarden Goal: Autonomous Agent Design
## Objective
Design the full specifications for a **"Self-Healing Server Mesh"** architecture.
## Instructions
1. **Research:** Start by researching the core concepts of "Service Mesh," "Consul," and "Chaos Engineering."
2. **File Creation:** Create a file named `core_concepts.md` and summarize the research.
3. **Propose Architecture:** Create a file named `architecture_proposal.md` with a diagram description (using text/markdown for now) for the Self-Healing Server Mesh. The proposal must address redundancy, automated failover, and observability.
4. **Action List Update:** Upon completion, update `_next_actions.md` with the single item: "Review and refine `architecture_proposal.md`."
## Initial `_next_actions.md`
1. Read `index.md` for the current goal.
2. Formulate a research query to understand "Service Mesh" principles.
3. Write the research query to `_memory.json` under the key `next_research_query`.
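Finally, to pair with this first `index.md`, a minimal `_system_prompt.md` might read roughly like the sketch below. The wording and the JSON reply format are assumptions you can adapt:

```markdown
You are the MindGarden agent. You maintain a folder of Markdown and JSON files
that together hold your goals, memory, and action queue.

On every iteration:
1. Read `_next_actions.md` and take only the top action.
2. Use `index.md` for the overall goal and `_memory.json` for persistent context.
3. Reply with exactly one action object, e.g.:
   {"action": "CREATE_FILE", "path": "core_concepts.md", "content": "...", "next_actions": ["..."]}
4. Always include an updated `next_actions` list so the queue never runs dry.
5. Use `[[Wiki Links]]` when you need the content of another file as context.
```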