Hacker News | simonpure's comments

I've been working on a pure Clojure implementation of WebRTC Data Channels (SCTP over DTLS over UDP). The library provides a minimal, dependency-free (except for Clojure itself) way to establish peer-to-peer data channels on the JVM.

I've always wanted this and have used it to experiment with Gemini's cloud agent, Google Jules.

https://github.com/alpeware/datachannel-clj


Cool project. I'd be interested to hear your general impressions of Jules (as the somewhat forgotten agent).

Thanks! I noticed a big jump when they switched to Gemini 3.1 Pro; that's when it really became useful. I like that I can use it from my phone too.

It took a bit of trial and error, but I came up with a good ralph loop between GitHub Actions and Google Jules using the Jules API. Basically, I have Jules extend its TODO.md with the next set of tasks and open a PR, then run a GitHub Action with a few checks and auto-merge, and then call back into Jules to kick off the next cycle if there are still open tasks. It mostly just runs, and occasionally gets hung up on some question that I answer from my phone, usually just telling it to make a judgment call and keep the build green.

You can check out the prompt, action, and past PRs for examples, e.g. the Jules prompt is here: https://github.com/alpeware/datachannel-clj/blob/main/prompt...

Very cool, thanks! I have an under-utilized Google AI Max plan and this has inspired me to investigate Jules further (and Antigravity for that matter). It's been a while since I've checked in with Google's AI suite.

I have an upcoming project in Flutter, and maybe it's wishful thinking, but my intuition is that perhaps Google has top-tier LLM performance within their own ecosystem relative to peers.


I was wondering the same thing and learned that the model doesn't know about itself during training [0]

[0] https://developers.googleblog.com/closing-the-knowledge-gap-...


The model doesn't know itself, but all these larger models are generating a significant amount of synthetic data from the prior models, and the prior models are all context-bloated renditions; you fill the KV cache with whatever alignment you want, and then generate synthetic data.

That training on existing models is what brings out various things about other models. Then there are models that are just like snowballs: you build one iteration, then you give it its identity, then you train on that with the same synthetic generation.

So a model's training data could, at some point, include its own name.


I don’t think what you’re saying makes a lot of sense. You don’t “fill the KV cache with whatever alignment you want.” That doesn’t exist. The KV cache is an inference optimization, and is populated by running tokens through the model.

Synthetic data is generated by other models, and yes this is often where identity propagates.

I think by "snowballing" you mean something like iterative self-distillation? That's definitely not done unsupervised, because of the risk of model collapse; it's typically heavily curated and/or mixed with real data.


I've been impressed by Google Jules since the Gemini 3.1 Pro update. Sometimes it's been working on a task for 4h. I've now put it in a ralph loop using a GitHub Action to call itself and auto-merge PRs after the linter, formatter, and tests pass. It does still occasionally want my approval, but most of the time I just say "Sounds great!"

It's currently burning through the TESTING.md backlog: https://github.com/alpeware/datachannel-clj


There's DolphinGemma; no microchips needed -

https://blog.google/technology/ai/dolphingemma/


There's now also Gemini CLI GitHub Actions for a similar async experience -

https://github.com/google-github-actions/run-gemini-cli


He also published animated video versions:

45min https://youtu.be/xguam0TKMw8

5min https://youtu.be/BB2r_eOjsPw


So this guy just takes the time and money (I guess that part doesn’t hurt if you’ve got 15 billion) to write a book and produce an excellent video version for a broader audience, just for … what exactly?

According to his Wikipedia page, his employees are forced to agree to continuous recordings; the NYT claims he does some sort of (I assume perfectly legal) insider trading while acting like he's got the perfect system in place; he defended the CCP's actions on multiple occasions; and in general he does not seem like the type of guy to provide anything of value for free.

So can someone educate me on the angle? What’s the goal here?

And why would anyone read this and not get the feeling of being scammed? There is no way this guy is in it for the book sales.

Honestly, I'm sometimes questioning my sanity while reading the comments on videos like this. There are people acting like he's a modern Prometheus, sharing the knowledge of financial whatever. I would really be amazed if my cynicism turns out to be wrong on this one.


I work with a fellow that raves about this guy and his "historical views on empires".

I'm not convinced; history is written by the surviving victors and revised by those with a narrative.

I'm a firm believer that the market is not logical, predictable, or rational, but it is game-able.


> So can someone educate me on the angle? What’s the goal here?

Gain influence, increase his marketing and exposure, create new opportunities for himself. There's a good chance he also believes what he's preaching. All these things can be true at the same time.

He gives away a lot of the info in the book for free, so you can consider the book as just a method of marketing his ideas.



Not my project but also tinkering with something similar.

From the comments, it sounds like they'll open source something soon -

https://old.reddit.com/r/ChatGPTCoding/comments/1jibmtc/i_ma...


I was curious as well.

Not OP but via their website linked in their profile -

https://youtu.be/Tl3pGTYEd2I


One of the original authors also published a follow-up with some additional details and analysis that may be of interest (still reading through it myself) [0]

[0] https://eprint.iacr.org/2024/696

