Distributed DuckDB Instance
by citguru on 4/14/2026, 6:31:44 AM
https://github.com/citguru/openduck
Comments
by: atombender
How does this (or DuckLake, for that matter) handle sparseness and fragmentation of the differential storage? My experience with B+trees, at least, is that pages get spread all over the place: if you run a normal query, page 537 may be in layer 1, page 8374 in layer 2, and so on. A single query might need hundreds or thousands of pages, too scattered to be read efficiently in large sequential reads, so it requires a lot of random ones instead, which in turn means latency is very poor unless you cache aggressively. Neon deals with this through compaction and prewarming, I believe. Maybe DuckDB avoids this because column data tends to be more sequential, and something batches up bigger layers? Or maybe aggressive layer compaction?
4/14/2026, 11:53:29 AM
by: herpderperator
Does this help with DuckDB concurrency? My main gripe with DuckDB is that you can't write to it from multiple processes at the same time. If you open the database in write mode from one process, you cannot modify it from another process until the first process completely releases it. In fact, you cannot even read it from another process in this scenario.

So if you typically use a file-backed DuckDB database in one process and want to quickly modify something in that database using the DuckDB CLI (the way you might connect SequelPro or DBeaver to make changes to a DB while your main application is 'using' it), it complains that the file is locked by another process and doesn't let you connect at all.

This is unlike SQLite, which supports and handles this in a thread-safe manner out of the box. I know it's DuckDB's explicit design decision[0], but it would be amazing if DuckDB could behave more like SQLite when it comes to this sort of thing. DuckDB has incredible quality-of-life improvements, with many extra types and functions supported, not to mention all the SQL dialect enhancements allowing you to write much more concise SQL (they call it "Friendly SQL"), which executes super efficiently too.

[0] https://duckdb.org/docs/current/connect/concurrency
4/14/2026, 7:31:25 AM
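The SQLite behavior the comment above contrasts with DuckDB can be shown with Python's stdlib sqlite3 module. This is a minimal sketch using two connections in one process as stand-ins for two processes; with WAL journaling, a second connection can read the same file while the first still holds it open, which is exactly what DuckDB's exclusive write lock disallows.

```python
import os
import sqlite3
import tempfile

# One file, two independent connections (stand-ins for two processes).
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")  # readers don't block on the writer
writer.execute("CREATE TABLE warnings (msg TEXT)")
writer.execute("INSERT INTO warnings VALUES ('overheat')")
writer.commit()

# A second connection opens the same file while the first is still open.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT msg FROM warnings").fetchall()
print(rows)  # [('overheat',)]
```

With DuckDB the second `connect` on a write-locked file fails outright; the documented workaround is to have every process open the file read-only, which gives up writes entirely rather than serializing them the way SQLite does.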
by: citguru
This is an attempt to replicate MotherDuck's differential storage and implement hybrid query execution on top of DuckDB.
4/14/2026, 6:31:44 AM
by: decide1000
I built a distributed DuckDB setup using OpenRaft for state replication. Every node holds a full copy of the database. Writes go through Raft consensus, reads are local. It's more like etcd-with-DuckDB than MotherDuck-lite.

OpenDuck takes a different approach: query federation, with a gateway that splits execution across local and remote workers. My use case requires every node to serve reads independently with zero network latency, and to keep running if other nodes go down.

The PostgreSQL dependency for metadata feels heavy: now you're operating two database systems instead of one. In my setup DuckDB stores both the Raft log and the application data, so there's a single storage engine to reason about.

Not saying my approach is universally better. If you need to query across datasets that don't fit on a single machine, OpenDuck's architecture makes more sense. But if you want replicated state with strong consistency, Raft + DuckDB works very well.
4/14/2026, 9:12:21 AM
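The "writes go through the log, reads are local" pattern described above can be reduced to a toy sketch. This is not real Raft (no leader election, terms, or quorum voting, and a plain dict stands in for each node's local DuckDB file); the `Cluster`/`Node` names are made up for illustration. The point is only the shape: every committed log entry is applied in order on every node, so each node converges to a full local copy it can read without any network hop.

```python
class Node:
    """One replica: applies committed log entries to its local state."""

    def __init__(self):
        self.applied = 0   # index of the next log entry to apply
        self.state = {}    # stand-in for the node's local DuckDB file

    def apply(self, log):
        # Apply, in order, any entries this node has not yet seen.
        for key, value in log[self.applied:]:
            self.state[key] = value
        self.applied = len(log)


class Cluster:
    """Shared replicated log plus a set of full-copy replicas."""

    def __init__(self, n):
        self.log = []                                # Raft's job: replicate this safely
        self.nodes = [Node() for _ in range(n)]

    def write(self, key, value):
        # In a real deployment the entry is appended only after
        # consensus commits it on a quorum of nodes.
        self.log.append((key, value))
        for node in self.nodes:
            node.apply(self.log)

    def read(self, node_idx, key):
        # Reads are served entirely from one node's local copy.
        return self.nodes[node_idx].state.get(key)


cluster = Cluster(3)
cluster.write("region", "eu-west")
print(cluster.read(2, "region"))  # eu-west
```

Losing a node here costs nothing for reads on the survivors, which matches the stated requirement that every node keeps serving independently when peers go down.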
by: nehalem
I have a deep appreciation for DuckDB, but I am afraid the confluence of brilliant ideas makes it ever more complicated to adopt, and DuckLake is another example of this trend.

When I look at SQLite I see a clear message: a database in a file. I think DuckDB is that, too. But it's also an analytics engine like Polars, works with other DB engines, supports Parquet, comes with a UI, and has two separate warehouse ideas which both deviate from DuckDB's core ideas*.

* Yes, DuckLake and MotherDuck are separate entities, but they are still part of the ecosystem.
4/14/2026, 7:03:52 AM
by: jeadie
You might find https://github.com/apache/datafusion and https://github.com/datafusion-contrib/datafusion-federation of interest.
4/14/2026, 10:14:43 AM
by: arpinum
I read the code. It's a good case study of one-shot output from AI when you ask it to replicate a SaaS product. This is probably better than most because MotherDuck has been open about their techniques to build the product.

Obviously not a production implementation.
4/14/2026, 9:23:19 AM
by: Lucasoato
Last week I sent my first PR to DuckDB, to support Iceberg views in catalogs like Polaris! Let's hope for the best :)
4/14/2026, 7:12:11 AM
by: oulipo2
Seems cool! But it would be nice to have some real-world use cases to see actual usage patterns.

In my case, my systems can produce warnings when there are small system errors, which I want to aggregate and review (drill down into) from time to time.

I was hesitating between using something like OpenTelemetry to send logs/metrics for those, or just adding a "warnings" table to my TimescaleDB and using some aggregates to drill them down, possibly displaying some chunks to review.

Another possibility, to avoid TimescaleDB/ClickHouse and rely on S3 alone, would be to upload those as Parquet files to a bucket through DuckDB, and then query them from time to time for stats.

Would you have a recommendation?
4/14/2026, 8:05:07 AM