Sharing my five cents on the matter: in another world, gaming, where scripting languages are embedded to support modding, I hope to see WASM take off as a way for modern modders to get into game development.
I've seen smaller developers experimenting with this, but I haven't heard of larger orgs doing it, possibly because UGC platforms took the place of modders as well. I come from an older world: twenty years ago, developers like me would have had their hands on an actual SDK that wasn't part of a long microtransaction pipeline.
In my org's case, we built an entire game engine on Lua, and had previously done Lua integration in the Source Engine. I would have loved to have had sandboxing from the start rather than trying to bolt security on after the fact.
To the article's point: even if you were to add sandboxing to those environments today, I suspect you'd still be faster than some of the fastest embedded scripting languages, because they're just that slow.
I've been looking into this. There seems to be some mostly-repeating 2D pattern in the LSB of the generated images. The magnitude of the noise seems to be larger in the pure-black image than in the pure-white one. My main goal is to doctor a real image to flag as positive for SynthID, but I imagine if you smoothed out the LSB, you might be able to make images (especially very bright images) no longer flag as SynthID? Of course, it's possible there's also noise in here from the image-generation process...
Gemini really doesn't like generating pure-white images but you can ask it to generate a "photograph of a pure-white image with a black border" and then crop it. So far I've just been looking at pure images and gradients, it's possible that more complex images have SynthID embedded in a more complicated way (e.g. a specific pattern in an embedding space).
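If anyone wants to poke at this themselves, here's a minimal sketch of the LSB inspection I'm describing. It assumes the image is an 8-bit RGB NumPy array; the synthetic test image and the planted noise are stand-ins, since I obviously can't reproduce SynthID's actual pattern here, and zeroing the LSB is just the "smoothing" hypothesis from above, not a confirmed way to strip the watermark:

```python
import numpy as np

def lsb_plane(img: np.ndarray) -> np.ndarray:
    """Extract the least-significant bit of each 8-bit channel value (0 or 1)."""
    return img & 1

def smooth_lsb(img: np.ndarray) -> np.ndarray:
    """Zero the LSB of every channel, destroying any pattern stored there.
    Changes each pixel value by at most 1, so the image looks identical."""
    return img & 0xFE

# Synthetic stand-in for a generated image: mid-gray with fake LSB noise.
rng = np.random.default_rng(0)
img = np.full((64, 64, 3), 128, dtype=np.uint8)
img |= rng.integers(0, 2, img.shape, dtype=np.uint8)  # plant noise in the LSB

plane = lsb_plane(img)      # visualize this to look for repeating 2D patterns
cleaned = smooth_lsb(img)   # visually indistinguishable, but every LSB is 0
```

To check a real generation, you'd load it with something like Pillow (`np.asarray(Image.open(...))`) instead of the synthetic array.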
I just tried this idea, and it looks like it isn't that simple.
> "Generate a pure white image."
It refused no matter how I phrased it ¯\_(ツ)_/¯
> "Generate a pure black image."
It did give me one. In a new chat, I asked Gemini to detect SynthID with "@synthid". It responded with:
> The image contains too little information to make a diagnosis regarding whether it was created with Google AI. It is primarily a solid black field, and such content typically lacks the necessary data for SynthID to provide a definitive result.
Further research: Does a gradient trigger SynthID? IDK, I have to get back to work.
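For anyone who wants to run the gradient test, here's a dependency-free way to generate one locally instead of asking Gemini for it. The size and filename are arbitrary; PPM is used only because it needs no libraries (you'd likely convert it to PNG before uploading to the detector):

```python
# Write a horizontal black-to-white gradient as a binary PPM (P6), stdlib only.
width, height = 512, 512
with open("gradient.ppm", "wb") as f:
    f.write(b"P6 %d %d 255\n" % (width, height))
    # One row: each pixel's R, G, B all equal to the scaled x position.
    row = bytes(v for x in range(width) for v in (x * 255 // (width - 1),) * 3)
    f.write(row * height)  # every row is identical in a horizontal gradient
```

A locally generated gradient should not flag, which would make it a useful negative control next to a Gemini-generated one.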
I got downvoted heavily about a year ago for saying we need to abandon Android and the industry needs to pivot back to just putting GNU/Linux on a phone already.
Of course, now Google is doing what Google was always going to do.
It seems like it's the cheapest way to access Claude Sonnet 4.5, but access to the model is clearly throttled compared to Claude Sonnet 4.5 on claude.ai.
That being said, I don't know why anyone would want to pay for LLM access anywhere else.
ChatGPT and claude.ai (free) and GitHub Copilot Pro ($100/yr) seem to be the best combination to me at the moment.
There's a lot of hate in this thread, but there are plenty of engineers champing at the bit for autonomous workflows, because browser-use isn't there yet, and cloud expenses from major providers are also unappealing when there's so much relatively powerful local compute available.
It’d be fine if they included a big disclaimer at the top that this is beta software and they’re not liable for blah blah blah, but without such a disclaimer it’s reasonable to assume the software is ready for production. I think much of the hate is coming from GH misrepresenting its software and people being surprised by the many minor bugs.