Because I hope that someone whose hands were required to assemble the recipe wouldn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.
A naive hope perhaps, and one that ignores the risk of LLMs simply producing a bad recipe by blindly combining various recipes from their training data.
As the parent comment said, the people seemed to be enjoying the food, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible, harmless ingredients that would combine into something harmful (when consumed in reasonable amounts).
This is exactly what makes it dangerous. Food can taste fine but still make you sick; not all bacterial contamination is going to taste off. I'm assuming you're not a chef, because if you were you'd know how absurd your statement is.
For a super simple example: if you don't properly handle or cook raw meat, you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you, but it might not be to the person preparing the food. Another example: rhubarb pie is supposed to be made with the leaves and not the stalk, because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around, but if you were just reading a ChatGPT recipe that made that mistake, maybe you wouldn't have caught it.
Because the implication is that a random human-generated recipe from wherever somehow carries less risk than the AI-generated one. People who would trust a 'bleach recipe' from an AI would also trust it from a TikTok video or whatever.
Edit: is it irrational to think this way when someone prepares your food?
>I've done tests where I ask Claude to turn a simple 1 line function which adds two numbers together into a 100 line function and when I asked it to simplify it down, it couldn't reduce it back to its original simple form after multiple attempts.
"Claude write a one-way function. Wait, no, not like that!"
>Also if it works eventually the world will come to be ruled by the severely brain-damaged clones of whichever billionaires survived this process, or their children.
Or come to be ruled by the trillionaire who invented/controlled the process that had all the other billionaires give him their money to buy a few more years.
Even if there is implied consent this way, they're probably not doing this: they're just finding peers sharing the torrent and receiving from them, so they have evidence of actual sharing.
The real Lambda on AWS receives a request, forwards it to your local dev environment to handle, and the response your local code returns is forwarded back through the real Lambda.
This way you can develop and test your code without constantly redeploying.
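To make the flow concrete, here's a minimal sketch of what the deployed "forwarding" handler could look like. This is an illustration of the relay idea only, not Stelvio's or SST's actual implementation; `LOCAL_DEV_URL` and `proxy_handler` are hypothetical names, and a real setup would need a tunnel so AWS can reach your machine.

```python
# Sketch: a Lambda handler deployed to AWS that relays each invocation
# to a local dev server and returns whatever the local code produced.
# (Hypothetical names; real tools use a tunnel/websocket, not a plain URL.)
import json
import urllib.request

LOCAL_DEV_URL = "http://localhost:8000/invoke"  # assumed tunnel endpoint

def proxy_handler(event, context):
    """Runs on real AWS Lambda; forwards the event to local dev code."""
    req = urllib.request.Request(
        LOCAL_DEV_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The local handler's response becomes the Lambda response.
        return json.loads(resp.read())
```

Since the proxy is deployed once, every edit to your local handler takes effect on the next invocation with no redeploy.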
Yeah, SST seems to be the closest thing to Stelvio, I guess.
I don't really think we're competitors; their focus is on the JS/TS ecosystem. As you suggested, Stelvio focuses on Python and aims to really nail the experience of deploying Python to AWS (and later potentially elsewhere), e.g. we resolve Python dependencies for Lambda functions and layers and package them for you.
In the long run we want Stelvio to be a go-to tool for deploying Python (with a nice TUI and web console to make it all really smooth).