Deterministic AI Orchestration
Stop guessing with black boxes. Start engineering with auditable graphs.
The only Deterministic engine for the Agentic Age.
The "PyTorch" for Agents
Lár provides the computational graph primitives to build cognitive architectures, powered by LiteLLM for universal model access.
No magic. No hidden prompts. Just pure, debuggable Python.
Deterministic Primitives
Built on Graph Theory. Define nodes and strict edges. No "loops until success" magic.
Granular Cost Tracking
Track token usage and cost per node, not just per run. Know exactly which agent is burning your budget.
Zero Abstraction Leaks
You see the raw prompt. You see the raw response. You own the `traceback`. Nothing is hidden.
Universal Provider Support
Powered by LiteLLM. Switch between OpenAI, Anthropic, Bedrock, and Ollama in seconds. No refactoring required.
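In practice the switch is a one-string change, because Lár routes model calls through LiteLLM. A minimal sketch of that underlying LiteLLM call (not Lár's node API; the model names are illustrative):

from litellm import completion

response = completion(
    model="gpt-4o",  # swap for "claude-3-5-sonnet-20240620" or "ollama/llama3"
    messages=[{"role": "user", "content": "Summarize this ticket."}],
)
print(response.choices[0].message.content)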
"Did the AI just do that?"
A customer was refunded $500. A sensitive file was deleted. A regulated decision was made.
In the Black Box Era, you're guessing.
With Lár, you have the Flight Recorder.
The Paradigm Shift
From "Magic" to Engineering
We built the same agent using LangChain and Lár.
When we hit a 429 Rate Limit error, this is what happened:
The Black Box
The Lár Glass Box
Structured errors. Instant clarity.
{
  "run_id": "ab7c23bb-481e-4241",
  "step": 4,
  "node": "LLMNode",
  "outcome": "error",
  "error": {
    "code": 429,
    "message": "You exceeded your current quota...",
    "status": "RESOURCE_EXHAUSTED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.QuotaFailure",
        "violations": [
          {
            "quotaMetric": "generate_content_free_tier_requests",
            "quotaValue": "2"
          }
        ]
      }
    ]
  }
}

1. The Benchmark
Don't burn money on
Magic Loops.
We built the same "Corporate Swarm" using standard agent frameworks (Chat Loops) and Lár (Assembly Line).
The difference isn't just speed. It's viability.
Standard agents hit RecursionLimit at step 25.
Lár ran 10,000+ steps without a single error.
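The mechanics behind that number are simple. Here is a plain-Python sketch (illustrative only, not either framework's code): a self-calling chat loop adds a stack frame on every step and eventually hits a recursion cap, while an assembly line is a flat loop with constant stack depth.

def step(state):
    # One node of work; in a real graph this calls an LLM or a tool.
    state["i"] = state.get("i", 0) + 1
    state["done"] = state["i"] >= 10_000
    return state

def chat_loop(state):
    # Recursive "loop until success": every step adds a stack frame.
    if state.get("done"):
        return state
    return chat_loop(step(state))  # hits a recursion limit long before 10,000 steps

def assembly_line(state):
    # Explicit graph walk: same work, constant stack depth at any step count.
    while not state.get("done"):
        state = step(state)
    return state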
2. The Guardrails
Engineered for the
EU AI Act
Stop building "Black Box" agents that will be illegal in 2026. Lár is designed specifically for High-Risk AI Systems in Healthcare, Fintech, and Critical Infrastructure.
Native "State-Diff Ledger" produces forensic, immutable JSON logs for every step.
"Glass Box" architecture means no magic loops. Every decision path is explicit code.
**HumanJuryNode**: A dedicated primitive that enforces a "Hardware Stop," strictly preventing execution until a human explicitly approves via CLI or API.
Lár Juried Layer
The "Grand Unification" architecture for High-Risk AI.
It combines LLM Reasoning ("Proposer") with Deterministic Policy ("Jury") and Human Interrupts to stop hallucinations before they execute.
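As a plain-Python sketch of that pattern (illustrative only; this is not the HumanJuryNode API, and the policy limits are made up), a proposed action has to clear a deterministic policy and an explicit human approval before any side effect runs:

def jury(action):
    # Deterministic policy: only pre-approved action types within hard limits pass.
    return action.get("type") == "refund" and action.get("amount_usd", 0) <= 100

def human_approves(action):
    # The "Hardware Stop": nothing proceeds until a human answers at the CLI.
    return input(f"Approve {action}? [y/N] ").strip().lower() == "y"

def guarded_execute(proposed_action, execute):
    if not jury(proposed_action):            # 1. Jury veto
        raise PermissionError("Blocked by deterministic policy")
    if not human_approves(proposed_action):  # 2. Human interrupt
        raise PermissionError("Rejected by human reviewer")
    return execute(proposed_action)          # 3. Side effect runs only after both gates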
3. The Brain
Agentic Metacognition
Traditional agents have static brains. Lár agents have Dynamic Graphs. They can introspect, spot their own limitations, and spawn new subgraphs at runtime. See Examples →
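As a toy illustration of a dynamic graph (plain Python, not Lár's node API; every name here is made up), a node can inspect the shared state, notice a missing capability, and register a new node for the executor to reach later:

graph = {}

def planner(state):
    # Metacognition: the agent spots a gap in its own graph and fills it at runtime.
    if state.get("needs_fact_check") and "fact_checker" not in graph:
        graph["fact_checker"] = lambda s: {**s, "facts_verified": True}
    return state

graph["planner"] = planner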
Lár DMN
A Bicameral Cognitive Architecture that solves catastrophic forgetting.
It has a conscious mind (fast) and a subconscious mind (slow) that sleeps, dreams, and consolidates memories when you're away.
> Synthesizing narrative...
> Stored in Hippocampus.
Zero Maintenance.
Infinite Possibilities.
Other frameworks ship 500+ "wrapper tools" (e.g., HubSpotTool) that break whenever an API changes.
Lár takes a different approach. You don't need a library of stale wrappers. You need a prompt that teaches your IDE how to wrap any Python SDK in 30 seconds. Read the Pattern →
Use the official, latest SDK. No waiting for framework updates.
You generate the code. You read it. You own it.
User: "Make a Stripe tool for refunds"
# 2. IDE Generates Production Code (30s)
import stripe
from lar import ToolNode
def refund_charge(state):
stripe.api_key = state["stripe_key"]
return stripe.Refund.create(...)
# 3. Ready to use
stripe_tool = ToolNode(
tool_function=refund_charge,
...
)
Don't want to read docs?
Lár is Pure Python. There is no custom DSL or "Magic Chain" syntax to learn.
This means Cursor, Windsurf, and Copilot are already Lár experts.
User: "Build a Lár agent that researches stocks."
Cursor: "Done. Since Lár is just Python graphs, I used the `RAG Researcher` pattern. Here is the strict type-checked code..."
git clone snath-ai/lar
"Build a Research Agent"
Power Your IDE
Make Cursor or Windsurf an expert Lár Architect.
Reference the Master Rules file to load the constraints.
IDE_MASTER_PROMPT.md
Generate any integration (Stripe, Linear, etc.) in 30s.
IDE_INTEGRATION_PROMPT.md
Use the template to scaffold a new agent.
IDE_PROMPT_TEMPLATE.md
Ready for Production?
Lár is just a Python library. This means you deploy it like any other backend service. No proprietary "serving layers." No vendor lock-in.
from fastapi import FastAPI  # any web framework works; FastAPI shown here
from lar import GraphExecutor

app = FastAPI()
executor = GraphExecutor(...)  # wire up your Lár graph here

@app.post("/run")
def run_agent(task):
    return executor.run(task)
Deploy to AWS, Railway, or Heroku in minutes.
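Once it is running, your agent is an ordinary HTTP endpoint. For example, assuming the route sketched above with task passed as a query parameter:

import requests

resp = requests.post("https://your-agent.example.com/run", params={"task": "Research NVDA"})
print(resp.json())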
Ready to Build?
We have 30+ examples ranging from "Hello World" to "Self-Healing Swarms".
Browse the Full Library →
Built by @axdithyaxo