fluree/db


Fluree

A graph database built for data that matters. Temporal, verifiable, standards-compliant.

Fluree stores data as RDF triples with complete history, integrated search, and fine-grained access control — in a single binary with no external dependencies.

Billions of triples on commodity hardware. Over 2M triples/second bulk import. Benchmark leader across 105 W3C SPARQL queries.

License: BSL 1.1

Note

Fluree Memory is part of the Fluree DB CLI: persistent, searchable memory for AI coding assistants. It gives Claude Code, Cursor, and other AI tools long-term project memory. Facts, decisions, and preferences persist across sessions in a Fluree ledger you control, scoped per-repo or per-user and shareable via git. Fluree Memory docs →

Install

Docker — pre-configured HTTP server, ready to accept queries on port 8090. Best for trying out the API or running Fluree as a service.

docker run -p 8090:8090 fluree/server:latest

Homebrew, shell installer, or Windows PowerShell — installs the fluree binary that bundles both the CLI and the embedded server (fluree server run).

# Homebrew (macOS / Linux)
brew install fluree/tap/fluree

# Shell installer (macOS / Linux)
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/fluree/db/releases/latest/download/fluree-db-cli-installer.sh | sh

# Windows (PowerShell)
irm https://github.com/fluree/db/releases/latest/download/fluree-db-cli-installer.ps1 | iex

Pre-built binaries and the changelog for every release are on the GitHub Releases page.

Zero to graph in 60 seconds

fluree init
fluree create movies

fluree insert '
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/> .

ex:blade-runner  a schema:Movie ;
  schema:name        "Blade Runner" ;
  schema:dateCreated "1982-06-25"^^<http://www.w3.org/2001/XMLSchema#date> ;
  schema:director    ex:ridley-scott .

ex:ridley-scott  a schema:Person ;
  schema:name "Ridley Scott" .

ex:alien  a schema:Movie ;
  schema:name        "Alien" ;
  schema:dateCreated "1979-05-25"^^<http://www.w3.org/2001/XMLSchema#date> ;
  schema:director    ex:ridley-scott .
'

fluree query --format table 'SELECT ?title ?date WHERE {
  ?movie a <http://schema.org/Movie> ;
         <http://schema.org/name> ?title ;
         <http://schema.org/dateCreated> ?date .
} ORDER BY ?date'
┌──────────────┬────────────┐
│ title        │ date       │
├──────────────┼────────────┤
│ Alien        │ 1979-05-25 │
│ Blade Runner │ 1982-06-25 │
└──────────────┴────────────┘

That's a SPARQL query. The same query in JSON-LD:

fluree query --jsonld '{
  "@context": { "schema": "http://schema.org/" },
  "select": ["?title", "?date"],
  "where": [
    { "@id": "?movie", "@type": "schema:Movie",
      "schema:name": "?title", "schema:dateCreated": "?date" }
  ],
  "orderBy": "?date"
}'

Both languages access the same engine — same features, same performance.

Now update the data and query the past:

# Give every Ridley Scott movie a genre
fluree update '
PREFIX schema: <http://schema.org/>
PREFIX ex:     <http://example.org/>
INSERT { ?movie schema:genre "sci-fi" }
WHERE  { ?movie schema:director ex:ridley-scott }
'

# What did the data look like before that update?
fluree query --at 1 'SELECT ?title ?genre WHERE {
  ?movie a <http://schema.org/Movie> ;
         <http://schema.org/name> ?title .
  OPTIONAL { ?movie <http://schema.org/genre> ?genre }
}'
# → Blade Runner (no genre), Alien (no genre)

# And now?
fluree query 'SELECT ?title ?genre WHERE {
  ?movie a <http://schema.org/Movie> ;
         <http://schema.org/name> ?title .
  OPTIONAL { ?movie <http://schema.org/genre> ?genre }
}'
# → Blade Runner "sci-fi", Alien "sci-fi"

Every change is preserved. Query any point in history by transaction number, ISO timestamp, or commit ID.

What makes Fluree different

Time travel

Every transaction is immutable. Query data as it existed at any point in time — by transaction number, ISO-8601 timestamp, or content-addressed commit ID. No special tables, no slowly-changing dimensions. It's built into the storage model.

fluree query --at 2024-06-15T00:00:00Z 'SELECT * WHERE { ?s ?p ?o }'

Learn more: Time travel concepts, time-travel cookbook.

Integrated search

BM25 full-text search and HNSW vector similarity are built into the query engine — not bolted-on external services. Search results participate in joins, filters, and aggregations like any other graph pattern.

{
  "@context": { "ex": "http://example.org/" },
  "from": "mydb:main",
  "where": [
    { "@id": "?doc", "ex:title": "?title" },
    ["bind", "?score", "(fulltext ?title \"knowledge graph\")"]
  ],
  "select": ["?doc", "?title", "?score"],
  "orderBy": [["desc", "?score"]],
  "limit": 10
}

For dedicated BM25 / HNSW graph sources, the same query engine drives the f:graphSource / f:searchText / f:queryVector patterns and can be backed by an embedded index or a remote fluree-search-httpd service.

Learn more: BM25 full-text, vector search, search cookbook.

Git-like data management

Branch, rebase, merge, push, pull — the same workflow developers already use for code, applied to data. Fork a dataset to experiment without affecting production. Merge when ready. Rebase to catch up with upstream changes. Every branch has its own independent commit history.

fluree branch create experiment
fluree use mydb:experiment
# ... make changes safely ...
fluree branch rebase experiment    # catch up with main
fluree branch merge experiment     # fast-forward merge into main
fluree branch drop experiment      # clean up

Learn more: branching cookbook, Ledgers and the nameservice.

Triple-level access control

Policies are data in the ledger, enforced at query and transaction time. Users see only what they're authorized to see, down to individual facts rather than rows or tables. No application-layer filtering required.
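Because policies are themselves graph data, granting access means inserting triples. The sketch below shows the general shape only; the f: property names and types are assumptions for illustration, not the confirmed vocabulary (the policy reference linked below documents the real terms):

```json
{
  "@context": { "f": "https://ns.flur.ee/ledger#", "schema": "http://schema.org/" },
  "@id": "ex:viewMovieTitles",
  "@type": "f:AccessPolicy",
  "f:action": { "@id": "f:view" },
  "f:targetClass": { "@id": "schema:Movie" },
  "f:targetProperty": { "@id": "schema:name" }
}
```

Under a policy of roughly this shape, its holders could read schema:name facts on movies and nothing else; every other triple is simply invisible to them at query time.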

See Policy enforcement for the model, the policy cookbook for worked examples, and Policy model and inputs for the reference.

Reasoning and inference

RDFS subclass/subproperty reasoning, OWL 2 RL forward-chaining, and user-defined Datalog rules. The database infers facts you didn't explicitly store.
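As a minimal sketch of what RDFS subclass reasoning buys you, with hypothetical ex: data:

```turtle
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/> .

# Ontology: every Movie is a CreativeWork
schema:Movie rdfs:subClassOf schema:CreativeWork .

# Instance data: only the subclass membership is stored
ex:alien a schema:Movie .
```

With RDFS reasoning enabled, `SELECT ?w WHERE { ?w a schema:CreativeWork }` matches ex:alien even though that triple was never written, because the subclass axiom licenses the inference.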

Learn more: Reasoning and inference, OWL & RDFS support reference, Datalog rules.

Standards-first

Full SPARQL 1.1 with zero compliance failures against the W3C test suite. Native JSON-LD for idiomatic JSON APIs. Both query languages access the same engine with the same capabilities — time travel, policies, graph sources, and all.

Learn more: SPARQL reference, JSON-LD Query reference, Standards and feature flags.

Also worth knowing

  • SHACL validation — declarative shape constraints enforced at transaction time, with violations reported per-target, per-property.
  • OWL ontology imports — pull external vocabularies into a ledger via f:schemaSource + owl:imports, materialized at commit time.
  • Apache Iceberg / R2RML — query Parquet warehouses and relational stores as first-class graph sources alongside native Fluree data.
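The SHACL bullet above uses the standard W3C shapes vocabulary. A minimal hypothetical shape, constraining the movie data from the quickstart:

```turtle
@prefix sh:     <http://www.w3.org/ns/shacl#> .
@prefix schema: <http://schema.org/> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:     <http://example.org/> .

# Every schema:Movie must have exactly one string-valued schema:name
ex:MovieShape a sh:NodeShape ;
  sh:targetClass schema:Movie ;
  sh:property [
    sh:path     schema:name ;
    sh:datatype xsd:string ;
    sh:minCount 1 ;
    sh:maxCount 1 ;
  ] .
```

A transaction that inserts a schema:Movie with no name, or two names, would be rejected with a violation report naming the offending node and property.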

Use it your way

CLI — Explore data, script pipelines, manage ledgers from the terminal.

fluree query -f report.rq --format csv > output.csv

HTTP Server — Run fluree server for a production API with OIDC auth, content negotiation, and OpenTelemetry.

fluree server run
curl -X POST http://localhost:8090/v1/fluree/query?ledger=mydb:main \
  -H "Content-Type: application/sparql-query" \
  -d 'SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10'

Rust library — Embed Fluree directly in your application. No server process needed.

let fluree = FlureeBuilder::memory().build_memory();
fluree.create_ledger("mydb").await?;

let result = fluree.graph("mydb:main")
    .query()
    .sparql("SELECT ?s WHERE { ?s a <http://schema.org/Person> }")
    .execute()
    .await?;

MCP server — Expose Fluree to AI assistants over the Model Context Protocol.

fluree mcp serve            # stdio transport for Claude Desktop, Cursor, etc.

Capabilities

Query languages: SPARQL 1.1, JSON-LD Query
Data formats: JSON-LD, Turtle, TriG, N-Triples, N-Quads
Time travel: Transaction number, ISO timestamp, commit ID
Full-text search: Integrated BM25 with Block-Max WAND
Vector search: Embedded HNSW or remote service
Reasoning: RDFS, OWL 2 QL, OWL 2 RL, Datalog rules
Access control: Triple-level policy enforcement
Geospatial: GeoSPARQL, S2 cell indexing
Verifiability: JWS-signed transactions, Verifiable Credentials
Data sources: Apache Iceberg, R2RML relational mappings
Storage backends: Memory, file, AWS S3 + DynamoDB, IPFS
Replication: Clone, push, pull between instances
Branching: Fork ledgers, independent commit histories
Observability: OpenTelemetry tracing, structured logging
Validation: SHACL shape constraints

Documentation

For documentation and more information, visit labs.flur.ee/docs.

Full documentation also lives in docs/.

License

Licensed under the Business Source License 1.1, with a Change Date to Apache License 2.0 as specified in the LICENSE file.
