01HNNWZ0MV43FF 4 hours ago
> In this post we take you on a walkthrough on how you can use DataFusion
Thought it was gonna be a "build your own SQLite" or something
Gepsens 3 hours ago
I remember 2 years ago someone proposed adding stream processing to DataFusion, and PRs followed. But IMO stream processing is an entirely different beast; some people could use DF’s SQL engine for it, though. There are Rust projects like Arroyo.
knuckleheads an hour ago
I’ve been messing around with SQL and stream processing off and on for the last few months via https://github.com/zmaril/bpfquery and then https://github.com/zmaril/zquery, so I very much feel this comment. I didn’t want to build out my own stream processing architecture in bpfquery (it was getting pretty tough pretty fast), so I switched over to a DataFusion backend in zquery in the hopes that it could do stream processing well.
It handles static data really well, much better than the home-grown half engine I made in bpfquery, but streaming SQL isn’t easily possible at the moment: everybody is building their own implementations and trying to upstream what they need, with no coherent whole from DataFusion. I was looking into making an attempt with Arroyo sometime, but based on my last impression of it a while back, I think the authors want that code to be used as a standalone binary rather than as a library in something else. So maybe in a few years it’ll be as easy to make a streaming database as it is now to make a normal one, but that’s not the case currently.
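To make the batch side concrete: a minimal sketch of the static-data path that already works well, assuming a recent DataFusion release (the file name, table name, and columns here are made up, and the exact API can differ between versions).

```rust
// Register a static file with DataFusion and query it with plain SQL.
// This is the batch path only; unbounded sources, watermarks, and
// incremental state are the parts the streaming story still lacks.
use datafusion::error::Result;
use datafusion::prelude::{CsvReadOptions, SessionContext};

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Hypothetical static input file.
    ctx.register_csv("events", "events.csv", CsvReadOptions::new())
        .await?;

    // Ordinary batch SQL: DataFusion plans and executes this end to end.
    let df = ctx
        .sql("SELECT host, count(*) AS n FROM events GROUP BY host ORDER BY n DESC")
        .await?;
    df.show().await?;

    Ok(())
}
```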
hantusk 8 minutes ago
I agree. So many disparate solutions. The streaming SQL primitives are good enough by themselves (e.g. `tumble`, `hop`, or `session` windows; see the sketch below), but the infrastructural components are always rough in real-life use cases.
Crossing fingers for solutions like `https://github.com/feldera/feldera` to be wrapped in a nice database, for `https://materialize.com/` to solve their memory issues, or for `https://clickhouse.com/docs/en/materialized-view` to make streaming consumption reliable.
Various stream processing frameworks often have domain-specific languages with a lot of limitations on how aggregations and transformations can be expressed.
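For concreteness, here is the shape of one of those window primitives, shown as a query string inside Rust for consistency with the other sketches in this thread. This is illustrative only: the exact `tumble`/`hop`/`session` syntax varies by engine, and this is not a tested query against any particular one.

```rust
// Illustrative only: roughly what a tumbling-window aggregation looks like
// in Arroyo/Flink-style streaming SQL. `hop(...)` and `session(...)` windows
// slot into the same GROUP BY position; names and arguments vary by engine.
const TUMBLING_COUNT_PER_HOST: &str = r#"
    SELECT
        host,
        count(*) AS n
    FROM events
    GROUP BY
        host,
        -- one output row per host per non-overlapping 1-minute window
        tumble(interval '1 minute')
"#;

fn main() {
    // Nothing is executed here; this just prints the query text.
    println!("{}", TUMBLING_COUNT_PER_HOST);
}
```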
necubi 3 hours ago
Creator of Arroyo here—we agree that stream processing is a different beast and needs different infrastructure from a batch engine like DataFusion.
Our approach has been to take pieces of DF (including the SQL frontend and expression engine) and embed them in our own dataflow and operators. This allows us to support low latency, distribution, watermark processing, and consistent checkpointing.
But the great thing about DF is that it’s designed as a toolkit for SQL-oriented data processing, so it’s relatively easy to pick and use just the pieces you need.
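As a rough illustration of that "just the pieces you need" point, here is a minimal sketch that uses only DataFusion’s SQL frontend to produce a LogicalPlan and never touches its batch execution, which is roughly the kind of boundary described above. The table name and schema are made up, and the API is from a recent DataFusion release and may differ between versions.

```rust
use std::sync::Arc;

use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::datasource::empty::EmptyTable;
use datafusion::error::Result;
use datafusion::prelude::SessionContext;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Register a schema-only table so the planner can resolve column names;
    // no data ever flows through DataFusion here.
    let schema = Arc::new(Schema::new(vec![
        Field::new("host", DataType::Utf8, false),
        Field::new("bytes", DataType::Int64, false),
    ]));
    ctx.register_table("events", Arc::new(EmptyTable::new(schema)))?;

    // SQL frontend only: parse and plan, but do not execute.
    let plan = ctx
        .state()
        .create_logical_plan("SELECT host, sum(bytes) FROM events GROUP BY host")
        .await?;

    // From here a streaming engine can walk `plan` and build its own
    // operators (sources, windows, watermarks, checkpointing, ...).
    println!("{}", plan.display_indent());
    Ok(())
}
```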
dangoodmanUT 3 hours ago
this post feels like it's skipping over a lot of code that could be included
ambroodop 2 hours ago
Thanks for the feedback! The first version had a lot more detailed code, but we decided to link to our GitHub rather than copy all the code in. We wanted to illustrate the core touch points involved in extending DF.
JoeOfTexas 2 hours ago
Step 1. Choose a color
Step 2. Finish the database
ztratar 2 hours ago
I almost thought the opposite, but I’m no DB guy.