L2M Orchestration Studio

Visual AI Automation

Stop stitching tools together. Build one reliable AI flow.

L2M helps teams design, execute, monitor, and automate production-ready AI workflows with deterministic branching, guardrails, human approvals, and full execution traceability.

Capabilities

Everything your orchestration layer should have

Branching DAG Engine

Topological execution with `if`, `switch`, and `try/catch` for controlled, readable AI logic.
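To make the idea concrete, here is a minimal sketch of topological DAG execution with a deterministic `if` branch. The node and edge shapes are illustrative assumptions, not L2M's actual schema:

```javascript
// Sketch: topological execution of a tiny DAG with an `if` node.
// Node/edge shapes here are hypothetical, not L2M's real format.
function executeDag(nodes, edges, input) {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const indegree = new Map(nodes.map((n) => [n.id, 0]));
  for (const e of edges) indegree.set(e.to, indegree.get(e.to) + 1);

  const queue = [...indegree].filter(([, d]) => d === 0).map(([id]) => id);
  const outputs = { input };
  const skipped = new Set();

  while (queue.length) {
    const id = queue.shift();
    const node = byId.get(id);
    if (!skipped.has(id)) {
      if (node.type === "if") {
        // Deterministic branch: mark the untaken target as skipped.
        const taken = node.cond(outputs) ? node.then : node.else;
        for (const branch of [node.then, node.else]) {
          if (branch !== taken) skipped.add(branch);
        }
      } else if (node.run) {
        outputs[id] = node.run(outputs);
      }
    }
    for (const e of edges.filter((e) => e.from === id)) {
      if (skipped.has(id)) skipped.add(e.to); // propagate skip downstream
      indegree.set(e.to, indegree.get(e.to) - 1);
      if (indegree.get(e.to) === 0) queue.push(e.to);
    }
  }
  return outputs;
}

// Example: route long inputs to "truncate", short ones to "summarize".
const nodes = [
  { id: "check", type: "if", cond: (o) => o.input.length > 20, then: "truncate", else: "summarize" },
  { id: "truncate", run: (o) => o.input.slice(0, 20) + "…" },
  { id: "summarize", run: (o) => `summary: ${o.input}` },
];
const edges = [
  { from: "check", to: "truncate" },
  { from: "check", to: "summarize" },
];
const result = executeDag(nodes, edges, "hello world");
// result.summarize === "summary: hello world"; "truncate" never ran
```

Because branch selection is a plain boolean over prior node outputs, the same inputs always take the same path, which is what makes the flow readable and auditable.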

Execution History

Persistent audit logs with node-by-node trace for debugging, compliance, and quality review.
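A per-node trace record might look like the following; the field names are assumptions for illustration, not L2M's actual schema:

```javascript
// Hypothetical shape of one persisted trace entry for a single node run.
const traceEntry = {
  executionId: "exec_01",        // groups all node entries of one run
  nodeId: "summarize",           // which node this entry describes
  status: "success",             // or "error", "skipped"
  startedAt: "2025-01-01T09:00:00Z",
  durationMs: 412,
  input: { text: "quarterly numbers" },
  output: { summary: "numbers look fine" },
};
```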

Human-in-the-Loop

Pause on approval nodes, serialize state, and resume safely after approve or reject decisions.
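The pause/serialize/resume cycle can be sketched as below. Function names and the snapshot shape are assumptions, not L2M's API:

```javascript
// Sketch: run steps until an approval node, serialize state, resume later.
function runUntilApproval(steps, state = { index: 0, data: {} }) {
  for (let i = state.index; i < steps.length; i++) {
    const step = steps[i];
    if (step.type === "approval") {
      // Serialize everything needed to resume (e.g. persisted to SQLite).
      return { status: "paused", snapshot: JSON.stringify({ index: i + 1, data: state.data }) };
    }
    state.data[step.id] = step.run(state.data);
  }
  return { status: "done", data: state.data };
}

function resume(steps, snapshot, decision) {
  if (decision !== "approve") return { status: "rejected" };
  return runUntilApproval(steps, JSON.parse(snapshot));
}

const steps = [
  { id: "draft", run: () => "draft email" },
  { type: "approval", id: "review" },
  { id: "send", run: (d) => `sent: ${d.draft}` },
];
const paused = runUntilApproval(steps);       // stops at "review"
const done = resume(steps, paused.snapshot, "approve");
// done.data.send === "sent: draft email"
```

Because the snapshot is plain JSON, the process that resumes the run does not need any in-memory state from the process that paused it.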

Guardrails

Validate input and sanitize model output with deterministic checks and configurable failure behavior.
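A deterministic guardrail with configurable failure behavior could look like this sketch; the option names (`maxLength`, `blockedPatterns`, `onFail`) are illustrative, not L2M's real configuration:

```javascript
// Sketch: deterministic output check with "block" vs "sanitize" behavior.
function applyGuardrail(output, { maxLength, blockedPatterns, onFail }) {
  const violations = [];
  if (output.length > maxLength) violations.push("too_long");
  for (const p of blockedPatterns) if (p.test(output)) violations.push(`blocked:${p}`);

  if (violations.length === 0) return { ok: true, output };
  if (onFail === "block") return { ok: false, violations };

  // "sanitize": redact matches and truncate, deterministically.
  let clean = output;
  for (const p of blockedPatterns) clean = clean.replace(p, "[REDACTED]");
  return { ok: true, output: clean.slice(0, maxLength), violations };
}

const config = { maxLength: 100, blockedPatterns: [/\d{3}-\d{2}-\d{4}/], onFail: "sanitize" };
const checked = applyGuardrail("my ssn is 123-45-6789", config);
// checked.output === "my ssn is [REDACTED]"
```

The same checks applied to the same text always yield the same verdict, unlike LLM-based moderation, which is the point of keeping this layer deterministic.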

Streaming Chat + Widget

An SSE-powered chat interface inside the studio, plus an embeddable widget for external websites.
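On the client side, consuming an SSE stream amounts to splitting the byte stream into `event:`/`data:` frames. This simplified parser is a sketch of that step (it ignores multi-line `data:` joining and other details of the full SSE spec):

```javascript
// Sketch: parse a buffered SSE chunk into { event, data } records,
// as a streaming chat client would before appending tokens to the UI.
function parseSseChunk(chunk) {
  return chunk
    .split("\n\n")                     // frames are separated by blank lines
    .filter((block) => block.trim())
    .map((block) => {
      const event = { event: "message", data: "" };
      for (const line of block.split("\n")) {
        if (line.startsWith("event:")) event.event = line.slice(6).trim();
        if (line.startsWith("data:")) event.data += line.slice(5).trim();
      }
      return event;
    });
}

const events = parseSseChunk("event: token\ndata: Hel\n\nevent: token\ndata: lo\n\n");
// events → [{ event: "token", data: "Hel" }, { event: "token", data: "lo" }]
```

In a browser, `EventSource` (or `fetch` with a reader) does this framing for you; the sketch just shows what arrives on the wire.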

Code Node Sandbox

Run custom JavaScript transforms in a VM-isolated execution step with timeout controls.

Batch 5

Native cron scheduling for autonomous workflows

Add a `schedule_trigger` node to run workflows every day, every 15 minutes, or any cron interval with timezone support.

  • Scheduler initializes active workflows on API startup
  • Each cron run executes with `triggerType: "cron"` and is saved in execution history
  • Schedules reload dynamically when workflows are saved, imported, or deleted
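To see how a cron expression decides when to fire, here is a toy matcher for just the minute and hour fields; a real scheduler (and presumably L2M's) handles all five fields, ranges, and timezones:

```javascript
// Sketch: match the minute/hour fields of a cron expression against a Date.
// Supports "*", "*/n" steps, and comma lists; everything else is omitted.
function cronFieldMatches(field, value) {
  if (field === "*") return true;
  if (field.startsWith("*/")) return value % Number(field.slice(2)) === 0;
  return field.split(",").some((v) => Number(v) === value);
}

function shouldFire(expr, date) {
  const [min, hour] = expr.split(" ");
  return cronFieldMatches(min, date.getMinutes()) && cronFieldMatches(hour, date.getHours());
}

shouldFire("*/15 * * * *", new Date(2025, 0, 1, 9, 30)); // true: 30 is a multiple of 15
shouldFire("0 9 * * *", new Date(2025, 0, 1, 9, 30));    // false: minute is 30, not 0
```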

Common Schedules

`0 9 * * *`      Daily standup summary
`*/15 * * * *`   Quarter-hour monitoring loop
`0 0 * * 1`      Weekly Monday report

Stack

Practical architecture for fast iteration

Frontend: Vite + React Studio
Backend: Fastify + SQLite
Engine: Workflow DAG Executor
Integrations: RAG, MCP, LLM Providers

From local prototype to production automation

L2M is open-source and built for teams that want transparent AI workflow logic instead of opaque chains.