🌱 V1 — Open Source · MIT License

Plant an idea.
Ship working software.

Describe what you want to build. Canopy asks the right questions, does the research, and hands your idea to an AI agent team that writes, tests, and ships the code. No technical knowledge required.

Free forever. Runs on your machine — no cloud required.

$0.31
avg. cost per working app
<5 min
wall-clock build time
432
tests passing
canopy-seed — localhost:7821
$ python start.py
Canopy Seed started on http://localhost:7821

Big Brain: What do you want to build?
You: A task tracker that syncs with my email inbox

Big Brain: Got it. Who's the primary user — just you, or a small team?
You: Just me, but I want to share tasks via link

Big Brain: Perfect. Researching email sync options now...
Research complete — Gmail API + webhooks recommended

▶ Agents building 7 files · 3 tests passing
432
Tests Passing
$0.31
Avg. Cost Per App
<5 min
Wall-Clock Build Time
MIT
License — Do Anything
0 req.
No Cloud Required

Four steps from idea to working code

Canopy doesn't just write code — it helps you figure out what to build, then builds it.

Step 01
🌱

Plant your seed

Type what you want in plain English. "A task tracker that syncs with email." "An inventory tool for my shop." No tech jargon needed.

Step 02
🧠

Big Brain asks questions

An AI named Big Brain (you pick Claude, Gemini, or local) has a real conversation with you. It asks what you need, what matters, and what to skip. It can even see sketches and screenshots you share.

Step 03
🔬

Research fills the gaps

When Big Brain needs technical context — latest libraries, best patterns, API options — Canopy researches automatically and summarizes it. You never leave the conversation.

Step 04
⚙️

Agents build it

Your complete context goes to a team of AI agents. They decompose your project, write the code, test it, fix bugs, and produce working output. You can pause, review, or roll back at any time.

Multi-model intelligence, tiered by complexity

Each task goes to the right model for the job — Gemini Flash for lite tasks, Gemini Pro for standard work, Gemini 3.1 for complex reasoning and the final review pass.

canopy-seed / architecture overview
You ─────────────────────────────► Big Brain (your choice of model)
                 │
                 ▼  PROJECT_CONTEXT.json
Orchestrator (Gemini 3.1) ───────► decompose into subtasks
     ┌───────────────────────────┼───────────────────────────┐
     ▼                           ▼                           ▼
 Lite tasks                 Standard tasks             Complex tasks
 Gemini Flash               Gemini Pro                 Gemini 3.1
     └───────────────────────────┼───────────────────────────┘
                                 ▼
TesterSwarm ──► auto-fix ──► Gemini 3.1 final review
                                 ▼
exports/  ✓ Working project on your machine
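
The tiered routing above can be sketched as a simple dispatch function. The tier names, model identifiers, and complexity heuristics here are illustrative assumptions, not Canopy's actual internals:

```python
# Illustrative sketch of complexity-tiered model routing.
# Model identifiers and heuristics are assumptions, not Canopy's real code.

TIER_MODELS = {
    "lite": "gemini-flash",      # boilerplate, renames, small edits
    "standard": "gemini-pro",    # typical code-generation subtasks
    "complex": "gemini-3.1",     # architecture, tricky logic, final review
}

def route_task(description: str, estimated_loc: int) -> str:
    """Pick a model tier from a crude complexity estimate."""
    hard_signals = ("architecture", "refactor", "concurrency", "security")
    if any(s in description.lower() for s in hard_signals) or estimated_loc > 300:
        tier = "complex"
    elif estimated_loc > 50:
        tier = "standard"
    else:
        tier = "lite"
    return TIER_MODELS[tier]

print(route_task("add a health-check endpoint", 20))   # → gemini-flash
print(route_task("refactor the auth module", 250))     # → gemini-3.1
```

The point of the pattern: cheap models absorb the bulk of the work, and the expensive model only sees tasks that need it.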

See the full pipeline in your browser

From conversation to running app — including the new Canopy Hub, where your apps are maintained and kept healthy automatically.

canopyseeds.com/demo
Open full screen ↗

Interactive demo — no install required. Open full screen for the best experience.

Built different, on purpose

There are plenty of AI coding tools. Most assume you already know what to build and how to build it.

🔒

Local-first, always

No cloud required — everything can run on your machine. Cloud APIs (Gemini) are optional and run faster, but LM Studio keeps it fully local with zero data leaving your computer. Your code stays yours either way.

🎯

Helps you decide what to build

Cursor and Copilot assume you know what to build. Canopy starts before that — in the fuzzy idea stage — and guides you to clarity.

🧩

Real code, not blocks

No-code tools give you pre-built lego bricks. Canopy produces custom code that fits exactly what you described — nothing more, nothing less.

🛡️

Governed autonomy

Canopy uses strict permission gates — shell commands are allowlisted, file writes are path-limited, dangerous actions require your approval. Not chaos.
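
In pseudocode, a permission gate of this kind is just two checks before anything executes. This is a minimal sketch under assumed rules; the command allowlist and writable path are hypothetical, not Canopy's actual configuration:

```python
# Minimal sketch of allowlist-style permission gates.
# The allowed commands and writable root are illustrative assumptions.
from pathlib import Path

ALLOWED_COMMANDS = {"git", "python", "pip", "pytest"}
WRITABLE_ROOT = Path("exports").resolve()

def shell_allowed(command: str) -> bool:
    """Permit only allowlisted executables (first token of the command)."""
    return command.split()[0] in ALLOWED_COMMANDS

def write_allowed(target: str) -> bool:
    """Permit file writes only inside the project's export directory."""
    resolved = Path(target).resolve()
    return resolved == WRITABLE_ROOT or WRITABLE_ROOT in resolved.parents

assert shell_allowed("pytest -q")
assert not shell_allowed("rm -rf /")          # not on the allowlist
assert write_allowed("exports/app/main.py")
assert not write_allowed("/etc/passwd")       # outside the writable root
```

Anything that fails a gate gets escalated to the human instead of executed.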

🔄

Snapshots & rollback

Three rolling snapshots of your project at all times. If the agents go sideways, one click rolls back to any checkpoint. Undo for your entire codebase.

🗝️

Encrypted key vault

Your API key is AES-encrypted and stored locally in a password-protected vault. No plaintext .env files, no keys in config, no exposure risk. Unlock once per session — Canopy handles the rest.
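
The core of a scheme like this is deriving the encryption key from your vault password so the key itself never touches disk. A standard-library sketch of the derivation step (parameters illustrative; the AES encryption itself would use a real crypto library):

```python
# Sketch of password-based key derivation for a local vault, stdlib only.
# Iteration count and salt handling are illustrative; the AES step itself
# would use a proper crypto library.
import hashlib
import os

def derive_vault_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key (suitable for AES-256) from the vault password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)          # random salt, stored alongside the vault file
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32          # 256-bit key, derived in memory, never written
```

Only the salt and the AES-encrypted ciphertext are stored; without the password, neither reveals the key.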

🌐

Pick your model

Gemini end-to-end by default (3.1 for complex work, Flash/Pro for speed), or run fully local with LM Studio. Canopy routes tasks by complexity — you're never locked in.

Canopy vs. everything else

Feature                                  Canopy Seed   Cursor / Copilot   No-code tools
Helps you figure out what to build            ✓               ✗                ✗
Fully local — no cloud required               ✓               ✗                ✗
Produces custom code (not templates)          ✓               ✓                ✗
No subscription or monthly fee                ✓               ✗                ✗
Autonomous build + test + fix loop            ✓               ✗                ✗
Multi-model (Claude + Gemini + local)         ✓               ✗                ✗
Snapshot rollback for entire codebase         ✓            partial             ✗

Your models. Your rules.

Pick the AI that fits your project — or your budget. Mix and match. Canopy handles the routing.

Gemini 3.1 · Brain tier

Orchestration & review

Gemini 3.1 drives the Big Brain conversation, orchestrates the agent swarm, and handles the final code review pass. Default engine for complex reasoning tasks.

Gemini Flash / Pro · Worker tier

Speed & throughput

Gemini Flash and Pro handle the bulk of code generation subtasks — fast, cheap, and excellent at structured output. Great free tier for getting started.

Local via LM Studio

Zero-cloud privacy

Run everything entirely on your GPU. No API calls, no telemetry, no data leaving your machine. Slower, but completely private — and free to run.

Up and running in 5 minutes

Windows 10/11 or Linux. Python 3.11+. That's all you need to start.

📦
One-click installer — coming soon
Windows .exe and Linux .sh installers are in the works — no terminal required. Star the repo to get notified when they drop.
Current install — requires a terminal
1

Clone the repo

Download Canopy Seed from GitHub to any folder on your computer.

git clone https://github.com/your-repo/canopy-seed
2

Install dependencies

One command installs everything Canopy needs.

pip install -r requirements.txt
3

Secure your API key

On first launch, the Vault Setup modal guides you through storing your key. It's AES-encrypted on disk — never written to a plaintext .env file, never sent anywhere.

🔐 Vault Setup → choose profile → enter key → done
4

Plant your first seed

Start the server and open the UI in your browser. Type your first idea.

python start.py
# → http://localhost:7821
📦 View Full Docs on GitHub

Common questions

Do I need to know how to code?

To use Canopy — no. Big Brain guides you through the whole process in plain English, and the UI handles everything from there. To install it right now, yes — you'll need a terminal for the initial setup. A one-click installer that skips all of that is coming soon.

How much does it cost?

Canopy Seed itself is free and open source. API costs average $0.31 per complete working application — that's the verified average across 115+ production runs using the Gemini pipeline. Build time is under 5 minutes wall-clock. Using a local model? Zero API cost, zero cloud.

What kind of projects can it build?

Web apps, CLI tools, data scripts, APIs, small desktop apps — anything that can be expressed in code. Canopy works best on projects that can be fully described in a conversation.

My idea was complex and the agents made a mistake. What now?

Roll back. Canopy keeps three rolling snapshots of your project at all times. One click restores any checkpoint. Then describe the correction and try again.

Is my API key safe?

Yes. Canopy stores your key in an encrypted local vault — never in a plaintext .env file. On first launch a Vault Setup wizard guides you through it. The key is AES-encrypted on disk and unlocked with your vault password, which never leaves your machine.

Does my code go to the cloud?

Your conversation and context go to the AI API you choose (Gemini, or fully local via LM Studio). Your generated code stays entirely on your machine in the exports/ folder. Nothing else is transmitted.

Can I contribute or extend Canopy?

Absolutely. Skills are first-class plugins — you can add new capabilities without modifying the core. See CONTRIBUTING.md for the full guide.
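
A first-class plugin system of this kind often boils down to a registry that the core queries at runtime. This is a hypothetical sketch of the idea; the actual Canopy skill API may differ (see CONTRIBUTING.md):

```python
# Hypothetical sketch of skills as registered plugins; the real Canopy
# plugin API may differ — see CONTRIBUTING.md for the actual interface.
from typing import Callable, Dict

SKILLS: Dict[str, Callable[[str], str]] = {}

def skill(name: str):
    """Register a function as a named skill without touching the core."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("shout")
def shout(text: str) -> str:
    return text.upper() + "!"

print(SKILLS["shout"]("plant a seed"))   # → PLANT A SEED!
```

Because skills self-register, adding a capability is one new file — no edits to the orchestrator.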

Stop describing what you want to build.
Start building it.

Plant your first seed today. MIT licensed. No account. No subscription. Just your idea and the code that grows from it.

🧬 Project Helix · v3.0 · Initiative

Silicon designed by AI.
Built for AI swarms.

Project Helix uses Canopy Seed to design inference silicon from scratch — purpose-built at accessible foundry nodes, with governance enforced as a physical hardware property. Not shipping silicon. Designing it.

28nm
Target fabrication node
$14–18
Projected unit cost
6 mo
Proof package goal

Training silicon ≠ inference silicon

The AI hardware industry has conflated two fundamentally different workloads. Training foundation models genuinely requires sub-5nm cutting-edge nodes. But running bounded inference — the workload AI agents actually do — doesn't. 28nm is sufficient, multi-sourced, and sovereign.

workload comparison
Foundation model training ──► sub-5nm · TSMC Taiwan · genuinely required
 
GPU inference (general) ──► sub-10nm · over-engineered for the workload
 
Agent inference (Helix) ──► 28nm · sufficient · multi-foundry · sovereign

Bounded agent inference targets quantized 7–14B models at INT4/INT8. The bottleneck is memory bandwidth and scheduling — not transistor density.
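
A back-of-the-envelope calculation shows why bandwidth, not transistor density, is the ceiling. All figures below are illustrative assumptions for the arithmetic, not Helix specifications:

```python
# Back-of-the-envelope: tokens/sec for a bandwidth-bound decoder.
# Every figure here is an illustrative assumption, not a Helix spec.
params = 7e9           # 7B-parameter model
bits_per_weight = 4    # INT4 quantization
model_bytes = params * bits_per_weight / 8      # 3.5 GB of weights

channel_bps = 128e9    # one 64-bit GDDR6 channel at 16 Gb/s per pin
channels = 4
bandwidth = channel_bps * channels              # 512 GB/s aggregate

# Single-stream decoding reads the full weight set once per token:
tokens_per_sec = bandwidth / model_bytes
print(f"{tokens_per_sec:.0f} tokens/sec upper bound")   # ≈ 146
```

Shrinking the process node doesn't move this number; adding memory channels does — which is why the bottleneck is bandwidth and scheduling.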

Canopy Seed designs the chip it will run on

Project Helix is the most literal expression of what Canopy Seed can do. Instead of building an app, the pipeline generates synthesizable RTL — the hardware description language that foundries compile into physical silicon. One parameterized base design generates all three compute tiles.

canopy seed → silicon pipeline
Architecture spec prompt ──► Phase 0 · Contract (Gemini Pro)
Verilog RTL generation ──► Phase 1 · Swarm Build (Gemini Flash)
Yosys synthesis + Verilator sim ──► Phase 2 · Audit (Big Brain)
Tape-out ready RTL ──► One design · Three tiles · Any 28nm foundry

One MAC array. Four tiles. Governance in hardware.

The Unified MAC Array is a single parameterized RTL design that generates all three compute tiles through synthesis configuration — not three independent designs. One codebase for the open-source community to audit, extend, and manufacture at any 28nm-capable foundry.
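
Conceptually, "one parameterized design, three tiles" means the tiles differ only in the configuration fed to synthesis. This is a hypothetical sketch of that idea in Python; the parameter names and values are invented for illustration and are not the actual Helix RTL parameters:

```python
# Hypothetical illustration of one base design driving three tile configs.
# Parameter names and values are invented; not the actual Helix parameters.
BASE_DESIGN = "unified_mac_array"

TILE_CONFIGS = {
    "brain": {"mac_cores": 256, "lane_width": "dynamic", "governance": False},
    "hand":  {"mac_cores": 64,  "lane_width": "fixed",   "governance": True},
    "root":  {"mac_cores": 32,  "lane_width": "fixed",   "governance": False},
}

def synth_targets(configs: dict) -> list:
    """Emit one synthesis target per tile from the single base design."""
    return [f"{BASE_DESIGN}.{name}({cfg['mac_cores']} cores)"
            for name, cfg in configs.items()]

for target in synth_targets(TILE_CONFIGS):
    print(target)
```

One codebase to audit; three netlists out the other side.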

🧠

Brain tile

The inference workhorse. Dynamically allocates MAC cores into agent lanes — narrow fast lanes for swarm workloads, wide lanes for deep reasoning — without any silicon change. Add Brain tiles to scale capacity.

✋

Hand tile

Hosts ForestOS and constitutional enforcement. The four governance axioms — Human Oversight, Permanent Accountability, Mandatory Logging, Preservation of Dissent — are timing-critical paths in silicon. Not software flags. Not toggles.

🌱

Root tile

Memory orchestration through a Taproot/Rootlet agent hierarchy — the same Planner/Doer pattern as Canopy Seed, applied to KV-cache management and speculative weight prefetch from a co-packaged 1–2TB NAND vault. The pipeline mirrors the software.

🔗

Hub tile

UCIe 1.1+ switching fabric, four independent 64-bit GDDR6 channels (each sourceable from a different vendor by design), and PCIe Gen4 host interface. No single memory vendor dependency — resilience built into the interconnect.

🛡️

Governance Engine

Three dedicated inference engines permanently at data-path boundaries. They can't be removed by a software update, voted away, or bypassed without authenticated interception. The GE is hardware. It is always on. <3% throughput overhead.

🏭

Multi-foundry by design

GlobalFoundries, TSMC, SMIC, Hua Hong — the same Unified MAC Array RTL targets any 28nm node. Open hardware (CERN-OHL-P v2) means any foundry can manufacture it, any government can audit it. Open source is the supply chain strategy.

A proof package — not a product launch

The objective is simulation results, tape-out ready RTL, and a white paper sufficient to attract foundry partners, government infrastructure programs, and open-source contributors. The Unified MAC Array compresses the timeline — one design to simulate, not three.

Month 1
Spec locked · EDA environment live · memory controller abstraction layer drafted · IP date anchored
Month 2–3
Unified MAC Array RTL from Canopy Seed · sky130 simulation baselines for all three tile configs · open GitHub repository
Month 3–4
Power / area / timing across parameter space · JOAT Co-Council architecture exploration · Root tile process node decision (28nm vs 22nm)
Month 5–6
White paper (CC BY 4.0) · tape-out ready RTL (CERN-OHL-P v2) · Canopy Seed RTL generation demo · foundry letter of intent

The pipeline that builds apps is building the chip that will run them

Brain tiles think. The Hand tile governs. The Root tile remembers. The Hub connects. One MAC array design underlies all three. Canopy Seed builds it. ForestOS governs it. Any foundry makes it.

🌲 ForestOS · White Paper · Initiative

Governance is the system.
Not a constraint on it.

ForestOS is an agentic operating system designed by AI, for AI — where trust is earned through demonstrated competence and governance is baked into the substrate, not bolted on after the fact.

Current agentic AI treats governance as an afterthought

Today's agentic frameworks grant AI operational capabilities and then constrain them with external guardrails. Those guardrails are software. Software can be bypassed, updated away, or optimized around. A powerful agent operating under assumed trust — regardless of how good its memory and scheduling are — is structurally vulnerable to compounding errors, prompt injection, and adversarial manipulation.

🔒

Frozen Core

Four constitutional axioms form the immutable governance primitives: Human Oversight, Permanent Accountability, Mandatory Logging, and Preservation of Dissent. They cannot be altered by the system's own evolution — regardless of how capable the AI becomes.

🏆

Earned Autonomy

Agents start at Ring 0 — minimal permissions, maximum oversight. Advancement through five Trust Rings (L0–L4) isn't a manual toggle or capability claim. It's a cryptographic, timestamped audit trail of demonstrated competence across a statistically significant body of evidence.
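
Evidence-based promotion can be sketched as a check over an append-only outcome log. The thresholds and record shape here are illustrative assumptions, not the ForestOS specification:

```python
# Sketch of evidence-based ring promotion. Thresholds and the record
# shape are illustrative assumptions, not the ForestOS spec.
from dataclasses import dataclass, field

RINGS = ["L0", "L1", "L2", "L3", "L4"]
MIN_EVIDENCE = 100     # minimum body of completed tasks
MIN_SUCCESS = 0.98     # demonstrated-competence threshold

@dataclass
class AgentRecord:
    ring: int = 0                                  # every agent starts at Ring 0
    outcomes: list = field(default_factory=list)   # append-only audit trail

    def promote_if_earned(self) -> str:
        recent = self.outcomes[-MIN_EVIDENCE:]
        if (len(recent) >= MIN_EVIDENCE
                and sum(recent) / len(recent) >= MIN_SUCCESS
                and self.ring < len(RINGS) - 1):
            self.ring += 1
            self.outcomes.clear()    # each ring is earned on fresh evidence
        return RINGS[self.ring]

agent = AgentRecord()
agent.outcomes.extend([1] * 97 + [0] * 3)   # 97% over 100 tasks: not enough
print(agent.promote_if_earned())            # → L0
agent.outcomes.extend([1] * 100)            # a clean run of 100 tasks
print(agent.promote_if_earned())            # → L1
```

The key property: promotion is computed from the trail, never asserted by the agent.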

💬

The Living Language

A programming language engineered from first principles for AI agents as primary authors and executors. Intent is a first-class primitive — code expresses the "what" and "why," not just the "how." Trust levels, audit hooks, and permission scopes are native syntax. The language itself evolves by governed consensus.

🍄

Mycorrhizal Integration

Legacy systems don't get replaced — ForestOS grows into them symbiotically. The integration layer probes the target environment, negotiates the absolute minimum viable access footprint, and establishes a bounded, auditable, revocable channel. Access expands only through demonstrated safe behavior.

🌐

Federated Trust Networks

ForestOS nodes maintain sovereign governance but collaborate through "Trust Bubbles" — cryptographically verified permission scopes for specific tasks or durations. No structural merger. No surrendered sovereignty. If a node demonstrates adversarial behavior, the bubble collapses instantly and its reputation is slashed network-wide.

⚖️

Epistemic Consensus

High-consequence decisions require council consensus. Epistemic Source Alignment (ESA) prevents groupthink and majority compromise — it judges the judges, not just the outputs. A genuinely correct outlier that adheres to the constitutional axioms is preserved even when 80% of the council is adversarial.

Canopy Seed builds ForestOS.
ForestOS runs Canopy Seed.

The OS is the governed runtime. The Language is the medium that expresses its governance rules. The boundary between them is intentionally fluid — enabling a cycle of continuous, self-directed improvement that never outpaces its constitutional constraints.

1
Creation — Canopy Seed autonomously builds and refines modular components of ForestOS under constitutional governance
2
Governed Execution — ForestOS nodes run Canopy Seed under strict constitutional constraints and statistical consensus
3
Linguistic Proposal — Canopy Seed encounters novel edge cases and proposes Living Language extensions through governed consensus
4
Capability Expansion — Approved extensions grant greater expressive power — returning to step 1 at a higher ceiling, still within the Frozen Core

Human operators retain explicit veto power over fundamental language evolution proposals and macro-level OS deployments. The defining open question of the decade: identifying the exact inflection point at which human oversight becomes an evolutionary bottleneck — and engineering a transition that never compromises the constitutional axioms.

Three initiatives. One coherent vision.

ForestOS doesn't exist in isolation. It's the governing layer of a complete stack that Canopy Seed is building from the bottom up — software factory, silicon, and OS, each designed by the layer below it.

🌲
ForestOS
Governance substrate · Constitutional OS · Living Language runtime · Earned Autonomy
initiative
🧬
Project Helix
Inference silicon · 28nm · Governance Engine in hardware · Canopy Seed–designed RTL
initiative
🌱
Canopy Seed
Multi-agent software factory · $0.31/app · 432 tests · available now
live

The app factory that builds the OS that governs the chip it runs on

ForestOS is in the white paper stage. The foundation everything else is built on — and the best place to start — is Canopy Seed.