Squarex's comments | Hacker News

The rumor was that 5.5 is a brand-new pretrain. Who knows, but it's 2x as expensive as 5.4, so that would check out.

If so, that would be big: they haven't been able to successfully pretrain a new model in close to two years (since 4o).

As a European federalist, I think it more likely the EU would implement these restrictions itself than step in against Spain.

In theory, we should already be protected against this via the various "net neutrality" directives, but as the US is currently showing us, laws and regulations are ultimately only worth as much as you're willing to enforce them. Still, things like this are supposed to be worth at least something:

> Regulation 2015/2120 also states that access providers “shall treat all traffic equally, when providing internet access services, without discrimination, restriction or interference, and irrespective of the sender and receiver, the content accessed or distributed, the applications or services used or provided, or the terminal equipment used,” although they are permitted to apply “reasonable traffic management measures.” In any case, those measures must be “transparent, non-discriminatory and proportionate, and shall not be based on commercial considerations but on objectively different technical quality of service requirements of specific categories of traffic” (Article 3.3) - https://www.cuatrecasas.com/en/global/intellectual-property/...

It remains to be seen whether something/someone will put a stop to La Liga's shenanigans; judges have seemed unwilling so far, and it's not a big enough problem for the average person to really care about (yet?).


The regulation has an opt-out for court orders, though, which these are.

Codex and Gemini CLI are already open source, and so are plenty of other agents. I don't think there is any moat in the Claude Code source.


Well, Claude does boast an absolutely cursed (and very buggy) React-based TUI renderer that I think the others lack! What if someone steals it and builds their own buggy TUI app?


Your favorite LLM is great at building a super buggy renderer, so that's no longer a moat


Gemini-cli is much worse in my experience but I agree


I think that's a real problem now. In our (Czech) parliament, almost every politician is a lawyer or a doctor; almost no other profession is represented.


It is behind a paywall, but the question itself seems trivial.


It is clearly not. Why would you think so?


The UX feels extremely similar, down to the elicitation ... but I did some more research ... they were started independently in April 2025. Therefore, one being a fork of the other is almost impossible, and there is no evidence for it. Also, opencode is in Go and Gemini CLI is in TypeScript.

Sadly my above misinformation can no longer be edited.


So you would deny children the greatest source of knowledge in history? I learned math and programming thanks to unlimited access to the web and would not be where I am without it.


>So you would deny children the greatest source of knowledge in history?

Absolutely.

This is much better than destroying "the greatest source of knowledge in history" to make it safe for kids.


This is a false dichotomy; we don't need to do either. Parents are responsible for keeping their children safe on the internet.


>I would not be where I am without it

First of all, you cannot know that, since plenty of people before you learnt that stuff from libraries.

>So you would deny children the greatest source of knowledge in history?

Yes, because other sources of knowledge exist and are much more appropriate for children. It is also the greatest source of despicable stuff in history. When you turn 18, have fun exploring the world wide web.


I still remember Gemini 1.5 Ultra and GPT-4.5 as extremely strong in some areas that no benchmark captures. They were probably not economical to serve on a $20 subscription, but they felt different, smarter in some ways. The benchmarks seem to be missing something, because Flash 3 was very close to 3 Pro on some benchmarks, but much, much dumber.


I wonder why they named it so similarly to the normal Codex model when it's much worse (though cool, of course).


Not sure what you mean. It IS the same model, just a smaller version of it. And gpt-5.3-codex is a smaller version of gpt-5.3 trained more on code and agentic tasks.

Their naming has been pretty consistent since gpt-5. For example, gpt-5.1-codex-max > gpt-5.1-codex > gpt-5.1-codex-mini.


What do you mean by "the same model, just a smaller version"? Codex should be a finetune of the "normal" version; where did you get that it's smaller? It's not as simple as taking some weights out of the model to create a new one; normally the mini or flash models are trained separately, on data from the larger model.
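
For what it's worth, "trained on data from the larger model" usually means knowledge distillation. A toy PyTorch sketch of the idea (the model sizes and stand-in networks here are illustrative assumptions, not anyone's actual training setup):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-ins: a frozen "large" teacher and a smaller student
    # with its own, separately initialized weights.
    vocab, d_teacher, d_student = 1000, 512, 128
    teacher = nn.Sequential(nn.Embedding(vocab, d_teacher),
                            nn.Linear(d_teacher, vocab)).eval()
    student = nn.Sequential(nn.Embedding(vocab, d_student),
                            nn.Linear(d_student, vocab))

    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
    T = 2.0                                    # distillation temperature
    tokens = torch.randint(0, vocab, (8, 32))  # dummy batch of token ids

    with torch.no_grad():                      # teacher only supplies targets
        teacher_logits = teacher(tokens)
    student_logits = student(tokens)

    # Train the student to match the teacher's softened output
    # distribution (KL divergence), i.e. "data from the larger model".
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    loss.backward()
    opt.step()

The point being: the student has its own, smaller set of weights trained from scratch; nothing is sliced out of the teacher.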


how are you so sure :)


or a simple cron :)

