Lár Engine

The Glass Box

A movement toward transparent, deterministic, and auditable AI workflows.

You're a developer. Your agent fails in production. What do you get? A 100-line stack trace. No state. No inputs. No visibility.

This is the Black Box Tax.

The silent penalty paid when systems hide the truth.

The Cost of Obscurity

Feature "Black Box" Frameworks Lár (The Glass Box)
Debugging Guesswork. 100-line stack traces from inside a "magic" executor. Precise. See the exact node, state, and error that caused the failure.
Auditability Paid add-on. Requires external tools to trace execution. Built-in. The "Flight Log" is the core output of the engine.
Control Chaotic. Agents "chat" to pass data. Order is unpredictable. Deterministic. You define the assembly line. Data flow is explicit.

The Flight Log

Lár produces a Flight Log for every run. It's not just a debug tool; it's an auditable record of every decision your agent made.

full_audit_log.json (run_id: 20251126_153129)
{
  "run_id": "20251126_153129",
  "timestamp": "2025-11-26T15:36:19.999670",
  "steps": [
    {
      "step": 0,
      "node": "LLMNode",
      "state_before": {
        "task": "What is the Lár Framework used for?"
      },
      "state_diff": {
        "added": {
          "category": "GENERAL"
        },
        "removed": {},
        "modified": {}
      },
      "run_metadata": {
        "prompt_tokens": 45,
        "output_tokens": 449,
        "total_tokens": 494,
        "model": "gemini/gemini-2.5-pro"
      },
      "outcome": "success"
    },
    {
      "step": 1,
      "node": "LLMNode",
      "state_before": {
        "task": "What is the Lár Framework used for?",
        "category": "GENERAL",
        "__last_run_metadata": null
      },
      "state_diff": {
        "added": {
          "search_query": "Lár Framework use cases"
        },
        "removed": {
          "__last_run_metadata": null
        },
        "modified": {}
      },
      "run_metadata": {
        "prompt_tokens": 83,
        "output_tokens": 1717,
        "total_tokens": 1800,
        "model": "gemini/gemini-2.5-pro"
      },
      "outcome": "success"
    },
    {
      "step": 2,
      "node": "ToolNode",
      "state_before": {
        "task": "What is the Lár Framework used for?",
        "category": "GENERAL",
        "__last_run_metadata": null,
        "search_query": "Lár Framework use cases"
      },
      "state_diff": {
        "added": {
          "retrieved_context": "=== Snath.ai & Lár Engine Knowledge Base ===\n\nProduct: lar-engine (The Open-Source Framework) & Snath.ai (The Platform)\n\nTopic: General Questions\n\nQuestion: What is Lár?\n\nAnswer: Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time. Because it's not a \"black box,\" you can see the full history log for every run.\n\nQuestion: What is Snath.ai?\n\nAnswer: Snath.ai is the commercial, cloud-based platform for building, deploying, and auditing lar agents at scale. It's the \"Datadog for AI Agents.\" It provides the UI, the database, and the \"glass box\" visualizer for your production-grade agents.\n\nQuestion: What is the \"Glass Box\" philosophy?\n\n--- Retrieved Context ---\n\nQuestion: How does Lár handle errors and retries?\n\nAnswer: Our LLMNode has a built-in, resilient retry mechanism for 429 (rate limit) errors. For all other errors (like SyntaxError or ValueError from a ToolNode), the ToolNode writes the error to the GraphState and routes to an error_node. This allows you to build auditable, self-correcting loops.\n\nQuestion: How do I install lar?\n\nAnswer: You can install the engine directly from PyPI: pip install lar-engine\n\n--- Retrieved Context ---\n\nQuestion: What is the \"Glass Box\" philosophy?\n\nAnswer: \"Glass Box\" means 100% auditability. Our lar engine's core output is a step-by-step log of every state change. Unlike \"black box\" frameworks that hide their logic, lar lets you see exactly why your agent failed, which node was responsible, and what data it was processing.\n\nQuestion: How do I get support?\n\nAnswer: For general questions, you can check our GitHub repositories. For enterprise support or questions about Snath.ai, please email [email protected].\n\nTopic: Billing & Subscriptions (Snath.ai Platform)\n\nQuestion: How much does lar cost?\n\nAnswer: The lar-engine is, and always will be, 100% free and open-source under an MIT license.\n\nQuestion: How does Snath.ai make money?\n\nAnswer: Snath.ai is our paid SaaS product. We charge a monthly subscription for teams to use our hosted \"AgentScope\" dashboard, persistent \"Event Sourcing\" database, and scalable agent-runner (API).\n\nQuestion: Do you have a \"Bring Your Own Key\" (BYOK) model?"
        },
        "removed": {
          "__last_run_metadata": null
        },
        "modified": {}
      },
      "run_metadata": null,
      "outcome": "success"
    },
    {
      "step": 3,
      "node": "RouterNode",
      "state_before": {
        "task": "What is the Lár Framework used for?",
        "category": "GENERAL",
        "__last_run_metadata": null,
        "search_query": "Lár Framework use cases",
        "retrieved_context": "..."
      },
      "state_diff": {
        "added": {},
        "removed": {
          "__last_run_metadata": null
        },
        "modified": {}
      },
      "run_metadata": null,
      "outcome": "success"
    },
    {
      "step": 4,
      "node": "LLMNode",
      "state_before": {
        "task": "What is the Lár Framework used for?",
        "category": "GENERAL",
        "__last_run_metadata": null,
        "search_query": "Lár Framework use cases",
        "retrieved_context": "..."
      },
      "state_diff": {
        "added": {
          "agent_answer": "Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time."
        },
        "removed": {
          "__last_run_metadata": null
        },
        "modified": {}
      },
      "run_metadata": {
        "prompt_tokens": 689,
        "output_tokens": 761,
        "total_tokens": 1450,
        "model": "gemini/gemini-2.5-pro"
      },
      "outcome": "success"
    },
    {
      "step": 5,
      "node": "AddValueNode",
      "state_before": {
        "task": "What is the Lár Framework used for?",
        "category": "GENERAL",
        "__last_run_metadata": null,
        "search_query": "Lár Framework use cases",
        "retrieved_context": "...",
        "agent_answer": "Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time."
      },
      "state_diff": {
        "added": {
          "final_response": "Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time."
        },
        "removed": {
          "__last_run_metadata": null
        },
        "modified": {}
      },
      "run_metadata": null,
      "outcome": "success"
    }
  ],
  "summary": {
    "total_steps": 6,
    "total_prompt_tokens": 817,
    "total_completion_tokens": 2927,
    "total_tokens": 3744
  }
}
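
Because the Flight Log is plain JSON, you can audit a run with nothing but the standard library. A minimal sketch, assuming only the fields shown in the example above (and the file name from the label):

import json

# Load the Flight Log shown above and summarize it step by step.
with open("full_audit_log.json") as f:
    log = json.load(f)

for step in log["steps"]:
    meta = step.get("run_metadata") or {}
    added_keys = list(step["state_diff"]["added"].keys())
    print(
        f"step {step['step']:>2}  {step['node']:<12}  {step['outcome']:<8}"
        f"  tokens={meta.get('total_tokens', 0):>5}  added={added_keys}"
    )

print("total tokens:", log["summary"]["total_tokens"])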

Lár eliminates the Black Box Tax entirely.

Built in the Trenches

17 Days to Launch
246 Total Commits
53 Bugs Crushed
~142 Caffeine Cups

🛠️ A Technical Note: "Is this just a wrapper?" No. Most platforms wrap existing API calls and call the result an 'agent'. We built the Lár Framework from scratch. It is a deterministic, graph-based execution engine designed for total state observability. We didn't want a black box, so we built a glass one.

Verify the Code on GitHub →

The Pledge

We believe auditing should be free. No developer should ever ship blind again.

Lár is not just a framework; it's a commitment to radical transparency. Every run produces a complete, immutable record of every state change.

The 6 Lár Primitives

We rejected complexity. No magic. Just Python.
Lár is built on just 6 "Lego bricks" that you can combine to build any agent.

The Engine: GraphExecutor. Runs one node at a time. Logs every step.
The Memory: GraphState. A simple object passed to every node.
The Brain: LLMNode. Calls Gemini/GPT. Handles retries & costs.
The Hands: ToolNode. Runs Python code. Safe & deterministic.
The Choice: RouterNode. Pure Python if/else logic. No "magic".
The Helper: Utility Nodes. Clean up state or add values.
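
To make the pattern concrete, here is a short, self-contained sketch of what these six bricks describe: nodes that read and write a shared state, and an executor that runs them one at a time and records what each one changed. This is an illustration of the pattern only, built from toy stand-ins; it is not the lar-engine API.

# Toy stand-ins for the primitive pattern: shared state, one node at a
# time, a step log for every transition. Not the lar-engine API.

class GraphState(dict):
    """The Memory: a plain mapping handed to every node."""

def router_node(state):
    """The Choice: pure Python if/else on the current state."""
    state["route"] = "greet" if "name" in state else "ask_name"
    return state

def tool_node(state):
    """The Hands: deterministic Python code."""
    state["greeting"] = f"Hello, {state.get('name', 'stranger')}!"
    return state

class GraphExecutor:
    """The Engine: runs nodes in the order you define and logs each step."""

    def __init__(self, nodes):
        self.nodes = nodes

    def run(self, state):
        step_log = []
        for i, node in enumerate(self.nodes):
            before = dict(state)
            state = node(state)
            added = {k: v for k, v in state.items() if k not in before}
            step_log.append({"step": i, "node": node.__name__, "added": added})
        return state, step_log

state, step_log = GraphExecutor([router_node, tool_node]).run(GraphState(name="Ada"))
print(step_log)  # every transition, in the order you defined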

Time Travel Debugging

To scale, we don't log the entire state every time. We log State Diffs.

The GraphExecutor yields a lightweight step_log that shows exactly what changed, so logs stay compact on long runs and every run can be replayed step by step.
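
The added / removed / modified shape in the log above falls out of a plain comparison between the state before and after a node runs, and replaying those diffs in order rebuilds any intermediate state. A sketch with plain dicts; the helper names are illustrative, not part of the lar-engine API:

def diff_state(before, after):
    # The same three buckets the Flight Log reports for every step.
    return {
        "added": {k: after[k] for k in after.keys() - before.keys()},
        "removed": {k: before[k] for k in before.keys() - after.keys()},
        "modified": {
            k: after[k] for k in before.keys() & after.keys() if before[k] != after[k]
        },
    }

def apply_diff(state, diff):
    # "Time travel": replay diffs in order to rebuild any later state.
    state = dict(state)
    state.update(diff["added"])
    state.update(diff["modified"])
    for key in diff["removed"]:
        state.pop(key, None)
    return state

before = {"task": "What is the Lár Framework used for?"}
after = {"task": "What is the Lár Framework used for?", "category": "GENERAL"}
d = diff_state(before, after)          # matches step 0 in the log above
assert apply_diff(before, d) == after  # replaying the diff recovers the state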

Show Your Agents Are Auditable

If you build an agent using the Lár Engine, you are building a dependable, verifiable system. Help us spread the philosophy of the "Glass Box" by displaying the badge below in your project's README.

By adopting this badge, you signal to users and collaborators that your agent is built for production reliability and auditability.

Glass Box Ready
A seal of transparency.

Ready to break the Black Box?