Regarding contention, the answer definitely depends on how you host. We've had a lot of experience running different ML workloads, and from an SRE perspective we knew you'd need a variety of hosting styles for the models depending on the read/write patterns of your usage. Termite and the proxy service/operator support all of them: preloading and compiling models to prevent cold starts, or lazy loading to protect memory, with different pooling and caching strategies for bundling multiple models in the same Termite container.
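To make that concrete, the per-model knobs look roughly like this. This is a hypothetical sketch: the field names (`Preload`, `PoolSize`, `CacheTTL`, etc.) are illustrative, not Termite's actual configuration surface.

```go
package main

import "time"

// Hypothetical sketch of per-model hosting options; field names are
// illustrative, not Termite's actual configuration surface.
type ModelHosting struct {
	Preload  bool          // load and compile at container start: no cold starts, more resident memory
	LazyLoad bool          // defer loading to first request: protects memory when bundling many models
	PoolSize int           // cap on concurrent inference sessions for this model
	CacheTTL time.Duration // how long an idle lazily-loaded model stays resident before eviction
}

func main() {
	// Example: a hot model preloaded and pooled at 4 concurrent sessions.
	_ = ModelHosting{Preload: true, PoolSize: 4, CacheTTL: 10 * time.Minute}
}
```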
If a heavy indexing job is running on a CPU-only single-node deployment, it won't be using Raft (no replication). If it's running with a GPU, the job doesn't meaningfully share resources with the DB anyway. And if it's running distributed, contention isn't really an issue there either.
I'd be super interested to hear more about what you all do in this space. Currently Antfly (and Termite) doesn't handle custom content types explicitly because we've mostly focused on supporting the "classic" ones (application/pdf, image/png, image/jp2, etc.), but we've had to build a lot of the support for these things as custom support within the system. For instance, I chose jsonschema for the schema so users could do exactly what you're suggesting: custom content types indexed differently. The ML side also has to know how to support them (e.g. does a PDF get rendered and OCR'd then embedded, or does text extraction serve as a fallback). Would love to hear about what you all do and the types of media you make searchable today!
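To illustrate the idea (not Antfly's actual schema vocabulary; the "x-index" extension keyword and its pipeline names are made up for this sketch), a custom content type could carry its own indexing hints right in the jsonschema:

```go
package main

import "fmt"

func main() {
	// Illustrative only: a jsonschema-style field where a custom content
	// type declares how it should be indexed. "contentMediaType" and
	// "contentEncoding" are standard JSON Schema keywords; the "x-index"
	// extension is hypothetical.
	schema := `{
	  "type": "object",
	  "properties": {
	    "attachment": {
	      "type": "string",
	      "contentEncoding": "base64",
	      "contentMediaType": "application/x-geotiff",
	      "x-index": {
	        "pipeline": ["tile", "embed-image"],
	        "fallback": "metadata-only"
	      }
	    }
	  }
	}`
	fmt.Println(schema)
}
```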
Upon another look, it turns out we were actually missing the pause lock for the backfill operation during a shard split too. I also went ahead and added it to batch for good measure, although that case should be caught by the manager. Thank you for the report!
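For anyone curious, the pattern in spirit looks like the sketch below (simplified, not the actual Antfly code): the split takes a pause lock exclusively, and long-running operations like backfill hold it shared.

```go
package main

import "sync"

// Simplified sketch of the pattern, not the actual Antfly code: a shard
// split takes the pause lock exclusively, so long-running writers like
// backfill (and now batch) hold it shared for the duration of their work.
type Shard struct {
	pause sync.RWMutex // Split holds it for writing; Backfill/batch for reading
}

func (s *Shard) Split() {
	s.pause.Lock() // blocks until in-flight backfills/batches drain
	defer s.pause.Unlock()
	// ... move keyspace halves to the new shards ...
}

func (s *Shard) Backfill() {
	s.pause.RLock() // the lock that was missing for backfill
	defer s.pause.RUnlock()
	// ... rebuild index entries for this shard ...
}

func main() {
	s := &Shard{}
	go s.Backfill()
	s.Split()
}
```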
Possibly. Amazon and Google also made it possible for smaller startup DB companies to go that route with things like Valkey and OpenSearch. LLMs have made it super easy to transpile the ideas into whatever programming language you please, though; you just have to put in the time.
Nope! Awesome that you’re poking around though. I’m currently working on deterministic simulation testing and a feature set to allow pausing of index backfills, but it’s not fully implemented yet. Stay tuned!
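For anyone unfamiliar with deterministic simulation testing, the core trick is driving every source of nondeterminism from one seed so any failure replays exactly. A toy illustration of the idea, not the in-progress harness:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Toy illustration of deterministic simulation testing, not the
// in-progress Antfly harness: every "random" event (message delivery,
// crashes, backfill pauses) comes from a single seeded PRNG, so any
// failing seed can be replayed exactly.
func simulate(seed int64) bool {
	rng := rand.New(rand.NewSource(seed))
	for step := 0; step < 1000; step++ {
		switch rng.Intn(3) {
		case 0: // deliver a pending message
		case 1: // crash and restart a node
		case 2: // pause or resume an index backfill
		}
		// check invariants here; return false on a violation
	}
	return true
}

func main() {
	for seed := int64(0); seed < 100; seed++ {
		if !simulate(seed) {
			fmt.Printf("invariant violated at seed %d; rerun with it to debug\n", seed)
		}
	}
}
```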
Great question! I think the fundamentally hard problem with distributed systems (at least for me!) comes down to the complicated distributed state machines you have to manage, rather than memory management. I think async Rust gets in my way on those problems more than it helps (especially when it comes to Raft or Paxos). That said, with the new async Zig I’ve been excitedly implementing a swappable backend for the core database that I hope will be a nice marriage of performance and ergonomics.
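The seam for that is roughly an interface boundary like the one below. This is a hypothetical shape, not the actual Antfly interface; a Zig implementation would sit behind the same boundary (e.g. via the C ABI).

```go
package main

import "io"

// Hypothetical sketch of the swappable storage-backend seam; the method
// set is illustrative, not the actual Antfly interface.
type StorageBackend interface {
	Get(key []byte) ([]byte, error)
	Put(key, value []byte) error
	Delete(key []byte) error
	// Snapshot/Restore keep the Raft state machine's lifecycle
	// independent of which backend is compiled in.
	Snapshot() (io.ReadCloser, error)
	Restore(r io.Reader) error
}

func main() {} // definition only; nothing to run
```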
Fascinating! We settled on QUIC with Protobuf because, in our testing, it was more performant than gRPC once we accounted for backoff and failure cases (node startup ordering of server/client connections), and because it decouples us from gRPC library versions in Go, which have bitten us a number of times when juggling k8s, etcd, and Google dependencies in the same Go project. Plus, the performance bottleneck in most of the use cases we're specializing in is on the embedding/ML side of things.
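For flavor, the transport conceptually looks like the sketch below: quic-go streams carrying length-prefixed protobuf frames. It's a sketch, not our actual code; it assumes quic-go's context-taking DialAddr signature, uses a made-up "antfly" ALPN string, and elides real TLS setup, retries, and backoff.

```go
package main

import (
	"context"
	"crypto/tls"
	"encoding/binary"

	"github.com/quic-go/quic-go"
	"google.golang.org/protobuf/proto"
)

// Sketch of QUIC + protobuf framing. Real code needs proper TLS config,
// retries/backoff, and handling for node startup ordering.
func send(ctx context.Context, addr string, req proto.Message) error {
	conn, err := quic.DialAddr(ctx, addr,
		&tls.Config{NextProtos: []string{"antfly"}}, nil)
	if err != nil {
		return err
	}
	stream, err := conn.OpenStreamSync(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()

	payload, err := proto.Marshal(req)
	if err != nil {
		return err
	}
	// Length-prefix each message so the peer can frame its reads.
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
	if _, err := stream.Write(hdr[:]); err != nil {
		return err
	}
	_, err = stream.Write(payload)
	return err
}

func main() {} // usage: send(ctx, "host:port", someProtoMessage)
```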
I can't speak for everyone, but knowledge graphs are the "new hotness" of the AI space (RAG and MCP are seeing a lull in their hype cycles, I guess). I've used graphs professionally for a long time to connect relationships that SQL normal forms have trouble expressing non-recursively. E.g. I used graphs to define identity relationships between data sources hierarchically, and then had another graph relationship on top of that to define connections between those identities: users at one level and organizations at the next. Graphs as indexes let you express arbitrary relationships between data so the database can do more efficient lookups. Some folks now use them to express conceptual relationships between data for AI: if I have a bunch of images stored in Google Drive, I might want to abstract the concept of pets, and pets have a relationship with a human, etc. Then my database query for all pictures related to the dog-pets owned by some human becomes a tractable search instead of a scan of the whole corpus!
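A toy sketch of why the graph-as-index framing helps (hypothetical data; plain maps standing in for a real adjacency index):

```go
package main

import "fmt"

// Toy adjacency index with hypothetical data. The point: the query
// walks a handful of edges instead of scanning the whole image corpus.
var edges = map[string][]string{
	"human:alice": {"pet:rex"},        // alice owns rex (a dog)
	"pet:rex":     {"img:1", "img:7"}, // rex appears in these images
}

// picturesOfPets follows owner -> pet -> image edges.
func picturesOfPets(owner string) []string {
	var imgs []string
	for _, pet := range edges[owner] {
		imgs = append(imgs, edges[pet]...)
	}
	return imgs
}

func main() {
	fmt.Println(picturesOfPets("human:alice")) // [img:1 img:7]
}
```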
Let us know if you have any other questions!