> The compatibility with the AWS CLI was mostly excellent in my experiences.
Interesting, I've had the opposite experience. Every single AWS service I've ever tried to build tests around with LocalStack has run into compatibility issues. Usually something works in LocalStack but fails when it hits the real endpoint.
I guess the CLI itself has mostly worked; it's more that the LocalStack service doesn't behave the way the real service is documented to work.
Got any concrete examples? I've been happily using LocalStack for roughly a decade now and haven't run into a single compatibility issue, aside from the obvious gap of net-new services for the first N months after AWS product launches. Things like AppConfig, etc., but those gaps got filled in time. They clearly prioritized the most-used ~95% of features of each service first, and there's a long tail in some AWS services, as one might expect. I've never used the more esoteric feature sets of any of their services; those are the things that tend to end up deprecated. So requiring those long-tail feature sets may be the simple explanation for our very different experiences.
I think the argument still applies on a shorter timescale. The child mortality rate in the US fell from 26 per thousand in 1970 to 7 in 2020 [1]. It seems reasonable that some portion of kids that now have treatable but persistent illnesses such as allergies/asthma would have died just a few decades ago.
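The arithmetic behind that claim can be made concrete. A minimal sketch, using only the per-thousand figures quoted above:

```python
# Child mortality per 1,000 live births (figures quoted in the comment above).
rate_1970 = 26
rate_2020 = 7

# Of every 1,000 children born, roughly this many now survive
# who would have died at the 1970 rate.
survivors_per_1000 = rate_1970 - rate_2020
print(survivors_per_1000)                  # 19
print(f"{survivors_per_1000 / 1000:.1%}")  # 1.9%
```

So about 1.9% of each birth cohort are "extra" survivors relative to 1970, and it's plausible that some of them carry treatable chronic conditions like allergies or asthma.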
In Canada heating is the leading emitter. And even if power is the leading emitter in the OP's region, power plants are lowering their carbon output far faster than any other source.
Coal -> Natural Gas is still the leading source of emissions reduction, but that transition is nearing its completion. The shift to solar/wind is what will continue the transition.
So definitely possible to decrypt it without JumpWire, if you have the keys. There are some pieces of metadata we add in that we could make optional if you want to reduce the resulting ciphertext size. That metadata adds a few extra bytes, but it doesn't grow with the data size.
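The constant-overhead property is easy to demonstrate in isolation. This is a toy sketch only (the XOR keystream is NOT real crypto, and the `MAGIC`/version layout is hypothetical, not JumpWire's actual format): the point is that a fixed-size metadata header adds the same few bytes whether the payload is 16 bytes or 64 KB.

```python
import hashlib
import os
import struct

MAGIC = b"JW"   # hypothetical format tag, for illustration only
VERSION = 1

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256. Illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    # 2-byte magic + 1-byte version + 12-byte nonce = 15-byte fixed header.
    header = MAGIC + struct.pack(">B", VERSION) + nonce
    body = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return header + body

key = os.urandom(32)
for size in (16, 1024, 65536):
    ct = encrypt(key, b"x" * size)
    print(size, len(ct) - size)  # overhead is 15 bytes at every size
```

A real scheme would also carry an authentication tag (e.g. AES-GCM's 16 bytes), but that is likewise a fixed cost per ciphertext, not one that grows with the data.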
Thank you – I would recommend writing up a page with all these details in your docs, because it would reassure a whole lot of people who would be your target customers (like myself).
I might be biased because I'm a founder with a tech background, so I want those details. But even with them, as someone in your target market, my worries with these kinds of products tend to be more about things like:
- am I adding an unreliable piece of infra to my stack? This is going to be a critical gatekeeper, so a failure isn't just like my DB being down: as the only method of decrypting my data, can it fail in a way that results in permanent data loss (where I can't decrypt some subset of my data)?
- if I had to yank this out, what's the process? will I be stuck?
- what are the chances of us doing something stupid and locking OURSELVES out of our own data? What guardrails are available there?
- what is the key management story? (which answers a lot of the above questions)
- is this roll-your-own crypto (not just which algorithm, but how the messages are constructed, etc) or something standard and vetted? Because there's no secret sauce to be had there, it's more in making all those OTHER elements easier for me.
It depends on the use case. In our experience, it's been rare that queries on PII need to do anything more complex than substring matching (which we're working on support for). We're definitely not trying to be able to encrypt every column, just to make some common workflows around PII and PHI a lot easier.
Custom views can help, but it does mean you're dealing with access controls directly in the database which can be hard to manage. And the database is fully exposed through backups or engineers with server access.
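To make the limitation concrete, here's a minimal sketch (SQLite via Python's stdlib, with a hypothetical `users` table) of the view approach: the view masks the PII column for application queries, but the raw table is still fully readable by anyone with direct server or backup access.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', '123-45-6789')")

# A view that masks the SSN, keeping only the last four digits.
conn.execute("""
    CREATE VIEW users_masked AS
    SELECT id, name, 'XXX-XX-' || substr(ssn, -4) AS ssn FROM users
""")

# Application queries through the view see only masked data...
print(conn.execute("SELECT ssn FROM users_masked").fetchone()[0])  # XXX-XX-6789

# ...but a backup dump or an engineer with server access reads the raw table.
print(conn.execute("SELECT ssn FROM users").fetchone()[0])         # 123-45-6789
```

In a production database you'd also need GRANTs restricting roles to the view and not the base table, which is exactly the access-control sprawl that gets hard to manage.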
Tonic is awesome! We think of synthetic data/differential privacy as a different use case - trying to replicate data across scoped environments while preserving certain properties or distributions of the entire data set. There is a security/privacy component from scrubbing the data, but the original data source is unmodified, and that's where we feel risk lies. And the desired outcome isn't to add security but to produce a data set that "looks like" the original well enough for testing/modeling/analytics.
> Are the policies something like "retool" gets tokenized or faked data back, and the main app gets everything?
Yep, that's exactly right. Application credentials are grouped under classifications, and policies can be included/excluded across classifications. We aren't passing authz through JumpWire but for something like Retool you can configure it to connect through different proxies for different users.
> I prefer self-hosted and reasonably auditable code for such sensitive systems. Is that the case here?
Exactly. The engine which interacts with your data is almost always self-hosted, and the web app can be as well if needed.
> At my scale (50 person company), it works reasonably well enough with just me maintaining it.
Makes sense! No reason to add more tools to your stack yet if the custom process isn't too burdensome.