Give Shard one real job
Open the `research_brief` demo: add a question, paste a few source notes, and set simple rules for cost, trust, and where the work is allowed to run.
Shard routes AI workflow steps across personal, private, and public capacity. Then it gives you the answer, the receipts, and the provenance graph so you can see what really happened.
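To make the "simple rules" concrete, here is a minimal sketch of what such a policy could look like. The field names (`max_cost_usd`, `min_trust`, `allowed_tiers`) are illustrative assumptions, not Shard's actual schema.

```python
# Hypothetical policy shape for the research_brief demo.
# Every field name here is an assumption for illustration only.
policy = {
    "max_cost_usd": 0.50,                      # stop routing to paid capacity past this
    "min_trust": "team",                       # reject sources below this trust level
    "allowed_tiers": ["personal", "private"],  # where steps may run
}

def tier_allowed(tier: str) -> bool:
    """Check whether a tier is permitted by the policy above."""
    return tier in policy["allowed_tiers"]

print(tier_allowed("personal"))  # True
print(tier_allowed("public"))    # False
```

With `public` left out of `allowed_tiers`, every step in the workflow stays on your own or your team's machines.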
You ask: one research question.
Shard decides: which machine should handle each step.
You get back: an answer plus a step-by-step map.
How it works
You do not need to think about distributed systems to understand the demo. Ask one question, add a few notes, and Shard shows you the path it took.
Step 1
Start with the `research_brief` demo. Add a question, paste a few source notes, and set simple rules for cost, trust, and where the work is allowed to run.
Step 2
Planning, source summaries, and synthesis can run on your own machine, your team's machines, or public specialist capacity, depending on your policy.
Step 3
Shard returns the final result, the receipt chain, and a provenance graph so you can see what happened instead of guessing.
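The result bundle from step 3 can be pictured as a chain of per-step records, each saying where a step ran and why. This is a toy model of the idea; the record shape and field names are assumptions, not Shard's real output format.

```python
from dataclasses import dataclass

@dataclass
class Receipt:
    step: str    # which workflow step ran
    tier: str    # where it ran: personal, private, or public
    reason: str  # why the router chose that tier

# Illustrative receipt chain for the research_brief demo
# (step names and reasons are invented for this sketch).
receipts = [
    Receipt("plan", "personal", "low-latency local planning"),
    Receipt("summarize_sources", "private", "shared team capacity"),
    Receipt("synthesize", "public", "stronger synthesis worker"),
]

# The provenance "map": each step with where and why it ran.
for r in receipts:
    print(f"{r.step}: ran on {r.tier} ({r.reason})")
```

Reading the chain top to bottom answers the question most tools leave open: not just what the answer was, but how it was produced.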
Why it feels different
Most tools stop once the text is generated. Shard treats the route as part of the product, so the path is visible too.
Most AI tools hide the route
They give you an answer but not the path. When a workflow becomes slow, expensive, or unreliable, your team is left guessing.
Shard makes the route part of the product
It shows why a step stayed local, why it moved to a private node, why it reached public capacity, and what fallback happened if the first choice failed.
That matters for real teams
Engineers can debug workflows faster. Operators can understand cost. Leaders can trust that the system is doing what policy says it should do.
Three supply tiers
The same workflow can stay on your machine, move to company hardware, or reach the public market, and it crosses each of those boundaries only when your rules allow it.
Personal
Your own laptop or workstation. Best when you want low-latency local work and direct control.
Private
Your company or team-owned Shard machines. Best when you want shared internal capacity without using the public market first.
Public
Specialist capacity from the wider Shard mesh. Best when you need overflow capacity or a stronger synthesis worker.
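The three tiers above suggest a natural preference order with fallback: try the closest capacity first, then widen. Here is a toy router that captures that idea; it is a sketch of the concept, not Shard's actual routing logic.

```python
# Cheapest/closest tier first; widen only when needed.
TIER_ORDER = ["personal", "private", "public"]

def route(allowed: list[str], available: set[str]) -> str:
    """Toy fallback router: walk the preference order and take the
    first tier that policy allows and that is currently reachable."""
    for tier in TIER_ORDER:
        if tier in allowed and tier in available:
            return tier
    raise RuntimeError("no tier satisfies the policy")

# If the laptop is busy, the same step falls back to team hardware.
print(route(["personal", "private", "public"], {"private", "public"}))  # prints "private"
```

The point of the sketch is that the fallback is deterministic and policy-bound: a step reaches public capacity only when the rules permit it and nothing closer can serve it.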
The key idea
Shard is most exciting when the routing policy and the evidence stay attached to the workflow. That is what makes the system debuggable instead of mysterious.
Start here
If you want the fastest explanation, open the provenance demo first. It is the clearest proof of what makes Shard different from a normal AI endpoint.
Open the provenance page and run the `research_brief` workflow.
Open provenance demo
Run Shard on your own PC so your workflows can use your capacity first.
Open quick start
Use chat when you only need a familiar interface. Use workflows when you need routing evidence.
Open simple chat