Shard

See why your AI work ran where it did

Shard V1: Receipt-first workflow observability

The AI runtime that explains why each step ran where it did.

Shard routes AI workflow steps across personal, private, and public capacity. Then it gives you the answer, the receipts, and the provenance graph so you can see what really happened.

You ask

One research question

Shard decides

Which machine should handle each step

You get back

An answer plus a step-by-step map

How it works

The full idea fits in one loop.

You do not need to think about distributed systems to understand the demo. Ask one question, add a few notes, and Shard shows you the path it took.

Step 1

Give Shard one real job

Start with the `research_brief` demo. Add a question, paste a few source notes, and set simple rules for cost, trust, and where the work is allowed to run.
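The rules in this step could look something like the following sketch. This is an illustration only: the field names (`max_cost_usd`, `allowed_tiers`, and so on) are assumptions, not Shard's actual configuration format.

```python
# Hypothetical policy sketch -- field names are illustrative assumptions,
# not Shard's real configuration schema.
policy = {
    "workflow": "research_brief",
    "max_cost_usd": 0.50,                                # cap spend for the run
    "allowed_tiers": ["personal", "private", "public"],  # where steps may run
}

def is_allowed(tier: str) -> bool:
    """Check whether the policy permits running a step on a given tier."""
    return tier in policy["allowed_tiers"]

print(is_allowed("public"))  # True under this example policy
```

The point is only that cost, trust, and placement live in one small object attached to the workflow, rather than being scattered across tools.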

Step 2

Shard chooses the best place for each step

Planning, source summaries, and synthesis can run on your own machine, your team machines, or public specialist capacity depending on the policy.
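One way to picture the choice is a first-fit scan over tiers, preferring local capacity. This is a minimal sketch under assumed names; Shard's real scheduler is not described here.

```python
# Minimal routing sketch: pick the first allowed tier that can serve the step.
# The tier order and the capability map are assumptions for illustration.
TIERS = ["personal", "private", "public"]  # preferred order: local first

def route_step(needs_gpu: bool, allowed: list[str]) -> str:
    """Return the first policy-allowed tier capable of running the step."""
    capable = {"personal": not needs_gpu, "private": True, "public": True}
    for tier in TIERS:
        if tier in allowed and capable[tier]:
            return tier
    raise RuntimeError("no tier satisfies the policy")

print(route_step(False, ["personal", "public"]))  # -> personal
print(route_step(True, ["personal", "public"]))   # -> public
```

A light summarization step stays on your laptop; a GPU-hungry synthesis step spills to the first stronger tier the policy allows.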

Step 3

See the answer and the evidence

Shard returns the final result, the receipt chain, and a provenance graph so you can see what happened instead of guessing.
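The receipt-chain idea can be sketched as an append-only log where each entry links back to the previous one by hash, so the recorded path cannot be silently reordered. A toy illustration; Shard's actual receipt format is not documented here.

```python
import hashlib
import json

def add_receipt(chain: list[dict], step: str, tier: str) -> None:
    """Append a receipt that links to the previous entry by its hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    body = {"step": step, "ran_on": tier, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    chain.append(body)

chain: list[dict] = []
add_receipt(chain, "plan", "personal")
add_receipt(chain, "synthesize", "public")
print(chain[1]["prev"] == chain[0]["hash"])  # True: entries are linked
```

Each receipt records what ran, where it ran, and its place in the sequence, which is the raw material for the provenance graph.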

Why it feels different

Shard does not stop at the answer.

Most tools stop once the text is generated. Shard treats the route as part of the product, so the path is visible too.

Most AI tools hide the route

They give you an answer but not the path. When a workflow becomes slow, expensive, or unreliable, your team is left guessing.

Shard makes the route part of the product

It shows why a step stayed local, why it moved to a private node, why it reached public capacity, and what fallback happened if the first choice failed.

That matters for real teams

Engineers can debug workflows faster. Operators can understand cost. Leaders can trust that the system is doing what policy says it should do.

Three supply tiers

One workflow can use three kinds of capacity.

The same workflow can stay on your machine, move to company hardware, or reach the public market only when your rules allow it.

Personal

Your own laptop or workstation. Best when you want low-latency local work and direct control.

Private

Your company or team-owned Shard machines. Best when you want shared internal capacity without using the public market first.

Public

Specialist capacity from the wider Shard mesh. Best when you need overflow capacity or a stronger synthesis worker.
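Overflow across the three tiers can be pictured as escalation through the list above, with each failed attempt recorded. The escalation rule here is an assumption for illustration, not Shard's documented behavior.

```python
# Hedged sketch of tier fallback: try each allowed tier in order,
# keeping a record of what failed and what finally succeeded.
def run_with_fallback(step, tiers=("personal", "private", "public")):
    """Try each tier in order; return the result plus the attempt log."""
    attempts = []
    for tier in tiers:
        try:
            result = step(tier)
            attempts.append((tier, "ok"))
            return result, attempts
        except RuntimeError:
            attempts.append((tier, "failed"))
    raise RuntimeError("all tiers exhausted")

# Example: the personal tier is busy, so work spills to private capacity.
def demo_step(tier):
    if tier == "personal":
        raise RuntimeError("local GPU busy")
    return f"done on {tier}"

result, attempts = run_with_fallback(demo_step)
print(result)    # done on private
print(attempts)  # [('personal', 'failed'), ('private', 'ok')]
```

The attempt log is exactly the kind of evidence the receipts make visible: not just the answer, but which first choice failed and where the work landed instead.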

The key idea

Shard is most exciting when the routing policy and the evidence stay attached to the workflow. That is what makes the system debuggable instead of mysterious.

Start here

Choose the first Shard experience that fits you.

If you want the fastest explanation, open the provenance demo first. It is the clearest proof of what makes Shard different from a normal AI endpoint.

See the flagship demo

Open the provenance page and run the `research_brief` workflow.

Open provenance demo

Bring your own machine

Run Shard on your own PC so your workflows can use your capacity first.

Open quick start

Try simple chat

Use chat when you only need a familiar interface. Use workflows when you need routing evidence.

Open simple chat