Messing with #Golang KV DB Badger and finding it pretty cool. It's FAST.
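
Roughly what using it looks like, if you haven't tried it. This is a minimal sketch from memory: the path, key, and value are made up, and I'm assuming the v4 module path.

package main

import (
    "fmt"
    "log"

    badger "github.com/dgraph-io/badger/v4" // assumption: v4 module path
)

func main() {
    // Open (or create) a Badger database in a local directory.
    db, err := badger.Open(badger.DefaultOptions("/tmp/badger-demo"))
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Write a key in a read-write transaction.
    err = db.Update(func(txn *badger.Txn) error {
        return txn.Set([]byte("greeting"), []byte("hello"))
    })
    if err != nil {
        log.Fatal(err)
    }

    // Read it back in a read-only transaction.
    err = db.View(func(txn *badger.Txn) error {
        item, err := txn.Get([]byte("greeting"))
        if err != nil {
            return err
        }
        val, err := item.ValueCopy(nil)
        if err != nil {
            return err
        }
        fmt.Println(string(val))
        return nil
    })
    if err != nil {
        log.Fatal(err)
    }
}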

New blog post: "Parsing Protobuf Definitions with Tree-sitter". It's all about how I used #Treesitter to jumpstart some nice internal #Protobuf tooling at work in #Golang. https://relistan.com/parsing-protobuf-files-with-treesitter

This #Golang production project at work has been a lot of fun so far. Nearly 20K LoC and the static binary is about 45MB :)

I compared ZStandard compression against GZip in #Golang for a workload that I have at work. ZStandard produces binaries that are about 5% smaller but takes 50% less time to compress and 50% less time to decompress. That's a real boost.

The Go implementation is here
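
For the curious, the compression side of the comparison looked roughly like this. A sketch only: it assumes the github.com/klauspost/compress/zstd package alongside the stdlib gzip, and the input file is made up. Decompression follows the same pattern with zstd.NewReader / DecodeAll.

package main

import (
    "bytes"
    "compress/gzip"
    "fmt"
    "os"
    "time"

    "github.com/klauspost/compress/zstd"
)

func main() {
    data, err := os.ReadFile("testdata/payload.bin") // made-up input file
    if err != nil {
        panic(err)
    }

    // GZip: compress into a buffer and time it.
    start := time.Now()
    var gzBuf bytes.Buffer
    gw := gzip.NewWriter(&gzBuf)
    if _, err := gw.Write(data); err != nil {
        panic(err)
    }
    gw.Close()
    fmt.Printf("gzip: %d bytes in %v\n", gzBuf.Len(), time.Since(start))

    // ZStandard: EncodeAll is the simple one-shot API.
    enc, err := zstd.NewWriter(nil)
    if err != nil {
        panic(err)
    }
    defer enc.Close()

    start = time.Now()
    zsOut := enc.EncodeAll(data, nil)
    fmt.Printf("zstd: %d bytes in %v\n", len(zsOut), time.Since(start))
}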

Go `sqlmock` from DATA-DOG can be useful for making sure you know exactly which queries are being sent to the DB. But it has a limited set of type matchers, and when a matcher doesn't match, it's hard to figure out what was actually sent. It does, however, support custom matchers.

I figured out a little pattern I like where I pass `t` from the test (`testing.T`) into the custom matcher's struct and then use `t.Log()` to report what the heck was wrong with the value it received. So many WTFs saved when tests fail. #Golang

import (
    "database/sql/driver"
    "testing"
)

type MicrosTimestamp struct {
    t *testing.T
}

// Match implements sqlmock's Argument interface, logging why a value failed.
func (r MicrosTimestamp) Match(v driver.Value) bool {
    i, ok := v.(int64)
    if !ok {
        r.t.Logf("TESTS: MicrosTimestamp got %v which is not an int64", v)
        return false
    }
...
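
Using it in a test then looks something like this. A sketch only: the query and args are made up, and it assumes github.com/DATA-DOG/go-sqlmock imported as sqlmock.

func TestInsertEvent(t *testing.T) {
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatalf("failed to open sqlmock: %v", err)
    }
    defer db.Close()

    // The matcher carries `t`, so a mismatch explains itself in the test log.
    mock.ExpectExec("INSERT INTO events").
        WithArgs(sqlmock.AnyArg(), MicrosTimestamp{t: t}).
        WillReturnResult(sqlmock.NewResult(1, 1))

    // ... exercise the code under test with `db` here ...

    if err := mock.ExpectationsWereMet(); err != nil {
        t.Errorf("unmet expectations: %v", err)
    }
}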

I'm experimenting with using a couple of #Golang database libs together that I haven't used before: https://github.com/blockloop/scan and https://github.com/Masterminds/squirrel . So far I'm quite impressed with the simplicity and niceness of the combo. I also wrote some custom code generation to build models from the DB schema, and I'm happy with it so far.
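
To give a feel for the combo, here's a sketch of the pattern: squirrel builds the SQL, scan maps the rows onto structs. The User model and users table are made up for illustration; in my case the models come from the code generator.

package models // hypothetical package name

import (
    "database/sql"

    "github.com/Masterminds/squirrel"
    "github.com/blockloop/scan"
)

// User is a made-up model struct; scan maps columns via the db tags.
type User struct {
    ID   int64  `db:"id"`
    Name string `db:"name"`
}

// ActiveUsers shows the flow: build the query, run it, scan the rows.
func ActiveUsers(db *sql.DB) ([]User, error) {
    query, args, err := squirrel.
        Select("id", "name").
        From("users").
        Where(squirrel.Eq{"active": true}).
        PlaceholderFormat(squirrel.Dollar).
        ToSql()
    if err != nil {
        return nil, err
    }

    rows, err := db.Query(query, args...)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var users []User
    if err := scan.Rows(&users, rows); err != nil {
        return nil, err
    }
    return users, nil
}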

I've been writing a new event archiving service for the event-based environment at my new gig. It's a second chance to iterate on what we built at Community, which I talked about in my blog post on the subject. This time I'm writing it in #Golang because I'm on my own on this stuff at the moment. I've taken a different approach: events are archived to a local copy of #DuckDB as they come off the wire, and then I use DuckDB's native Parquet and S3 support to write the event batches out to S3, where they can be queried with Athena.

This approach seems to be good so far. I will learn more when I get it into production. I feel another blog post coming later this year...
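
The flush step is the fun part. Here's a rough sketch of the shape of it, with some assumptions: I'm showing the github.com/marcboeker/go-duckdb driver, the table, file, and bucket names are placeholders, and S3 credentials are configured separately via DuckDB's s3_* settings.

package archiver // hypothetical package name

import (
    "database/sql"
    "log"

    _ "github.com/marcboeker/go-duckdb" // assumption: database/sql driver for DuckDB
)

func flushBatchToS3() {
    // Events have already been written into a local DuckDB file as they arrived.
    db, err := sql.Open("duckdb", "/var/lib/archiver/events.duckdb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // DuckDB's httpfs extension provides the native S3 support.
    if _, err := db.Exec("INSTALL httpfs"); err != nil {
        log.Fatal(err)
    }
    if _, err := db.Exec("LOAD httpfs"); err != nil {
        log.Fatal(err)
    }

    // Write the current batch out to S3 as Parquet, where Athena can query it.
    _, err = db.Exec("COPY events TO 's3://my-archive-bucket/events/batch-0001.parquet' (FORMAT PARQUET)")
    if err != nil {
        log.Fatal(err)
    }
}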

If you are using the awesome Benthos framework in #Golang for stream processing, you might like to know that writing custom Bloblang functions is very low-hanging fruit.
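
Here's roughly what registering one looks like. A sketch only: the `shout` function is made up, and the import path assumes the Benthos v4 public plugin API.

package bloblangext // hypothetical package, imported by our Benthos-based main

import (
    "strings"

    "github.com/benthosdev/benthos/v4/public/bloblang" // assumption: v4 public plugin API
)

func init() {
    // Hypothetical function: `shout` upper-cases its string argument.
    spec := bloblang.NewPluginSpec().
        Param(bloblang.NewStringParam("value"))

    err := bloblang.RegisterFunctionV2("shout", spec,
        func(args *bloblang.ParsedParams) (bloblang.Function, error) {
            value, err := args.GetString("value")
            if err != nil {
                return nil, err
            }
            return func() (interface{}, error) {
                return strings.ToUpper(value) + "!", nil
            }, nil
        })
    if err != nil {
        panic(err)
    }
}

Once registered, it's callable from any mapping like a built-in, e.g. root.greeting = shout(value: this.name).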

We wanted a service that would optionally tail logs from #Kubernetes for apps we deploy and relay them over UDP syslog, in the existing JSON log format that we use on #Mesos.

  • It should make the log scrape/relay decision based on Annotations on the Pods.
  • It should rate limit by *pod* and not by host/node so that we don't overrun our log provider (e.g. when someone forgets to turn off debug logging) or starve other apps on the same node from being able to send their logs (the per-pod limiter is sketched below).
  • It should report rate limiting to our metrics system so we can track which pods are getting limited.

There was nothing that we could find that was able to do all of that. So I spent the last two days writing it in #Golang and we're doing initial deployment of that as a DaemonSet. Seems to work nicely 🎉
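
The per-pod rate limiting is the interesting bit, and golang.org/x/time/rate makes it pretty small. A rough sketch (names and limits are made up; the real thing also bumps a per-pod metric when a line is denied, per the list above):

package logrelay // hypothetical package name

import (
    "sync"

    "golang.org/x/time/rate"
)

// podLimiters hands out one token-bucket limiter per pod so a single noisy
// pod can't flood the log provider or starve other pods on the same node.
type podLimiters struct {
    mu       sync.Mutex
    limiters map[string]*rate.Limiter
    perPod   rate.Limit // log lines per second allowed per pod
    burst    int
}

func newPodLimiters(perPod rate.Limit, burst int) *podLimiters {
    return &podLimiters{
        limiters: make(map[string]*rate.Limiter),
        perPod:   perPod,
        burst:    burst,
    }
}

// Allow reports whether this pod may send another log line right now.
func (p *podLimiters) Allow(podName string) bool {
    p.mu.Lock()
    lim, ok := p.limiters[podName]
    if !ok {
        lim = rate.NewLimiter(p.perPod, p.burst)
        p.limiters[podName] = lim
    }
    p.mu.Unlock()
    return lim.Allow()
}

The point where Allow returns false is the natural place to hook in the metrics reporting for which pods are getting limited.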