I noticed another interesting side effect of going straight to Parquet files and using #DuckDB: I can run DuckDB locally and query the files in the event store directly for easy investigation. That means less tooling I have to write and support. A simple shell script that sets up DuckDB and creates a view over the `read_parquet()` call is all I need.
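Something like this is the whole script. It's just a sketch under a few assumptions: the DuckDB CLI is installed, S3 credentials resolve through the standard AWS credential chain, and the bucket and prefix are placeholders.

```sh
#!/bin/sh
# Sketch of the setup script: write a DuckDB init file, then drop into
# an interactive shell with a view already defined over the event store.
cat > /tmp/duckdb-init.sql <<'SQL'
INSTALL httpfs;
LOAD httpfs;
-- Pull S3 credentials from the standard AWS credential chain
-- (assumes a recent DuckDB with the secrets manager).
CREATE SECRET (TYPE s3, PROVIDER credential_chain);
-- One view over every archived batch; new files show up automatically.
CREATE VIEW events AS
SELECT * FROM read_parquet('s3://my-event-store/events/*.parquet');
SQL
exec duckdb -init /tmp/duckdb-init.sql
```

From there it's plain SQL: `SELECT type, count(*) FROM events GROUP BY type;` and so on, straight against the archive.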

I've been writing a new event archiving service for the event-based environment at my new gig. It's a second chance to iterate on what we built at Community, which I talked about in my blog post on the subject. This time I'm writing it in #Golang, since I'm on my own on this stuff at the moment. I've also taken a different approach: events are archived to a local #DuckDB database as they come off the wire, and then I use DuckDB's native Parquet and S3 support to write the event batches out to S3, where they can be queried with Athena.
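In rough outline, the flow looks something like the sketch below. This is illustrative, not the actual service: it assumes the github.com/marcboeker/go-duckdb driver, and the schema, event values, bucket, and file names are all made up.

```go
package main

import (
	"database/sql"
	"log"

	// Assumption: the go-duckdb driver, which registers itself as "duckdb".
	_ "github.com/marcboeker/go-duckdb"
)

func main() {
	// The local DuckDB file acts as the staging buffer for events
	// as they come off the wire.
	db, err := sql.Open("duckdb", "events.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for _, q := range []string{
		`INSTALL httpfs`,
		`LOAD httpfs`,
		// Assumes S3 credentials come from the standard AWS chain.
		`CREATE SECRET (TYPE s3, PROVIDER credential_chain)`,
		// Hypothetical schema; the real one depends on the event shape.
		`CREATE TABLE IF NOT EXISTS events
			(id VARCHAR, type VARCHAR, ts TIMESTAMP, payload VARCHAR)`,
	} {
		if _, err := db.Exec(q); err != nil {
			log.Fatalf("%s: %v", q, err)
		}
	}

	// As each event arrives, append it locally; writes are cheap and local.
	if _, err := db.Exec(`INSERT INTO events VALUES (?, ?, now(), ?)`,
		"evt-0001", "user.created", `{"name":"example"}`); err != nil {
		log.Fatal(err)
	}

	// Periodically flush the batch to S3 as Parquet with DuckDB's COPY,
	// then clear the staging table. Bucket and key are placeholders.
	if _, err := db.Exec(
		`COPY events TO 's3://my-event-store/events/batch-0001.parquet'
			(FORMAT PARQUET)`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(`DELETE FROM events`); err != nil {
		log.Fatal(err)
	}
}
```

The nice part of this shape is that DuckDB does all of the Parquet encoding and S3 upload work in the `COPY` statement, so the Go side stays a thin loop of inserts and periodic flushes.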

This approach seems good so far; I'll learn more once it's running in production. I feel another blog post coming later this year...