Hacker News | past | comments | ask | show | jobs | submit | user34283's comments

It helps if you have a monopoly on app distribution for half of all phones, or video streaming.

Then you can afford zero support and still take 15-30%.


Does it do anything that GitLab does not?

From what I've heard about GitHub Actions, GitLab CI pipelines should be much better.

Not that I haven’t shot myself in the foot with GitLab pipelines on numerous occasions.


Hundreds or 10k+ users?

I imagine requirements and integrations may differ a lot. I have seen many incidents with a large instance.


I am using the Codex desktop app without the pi harness and my experience is quite different.

5.5 has been a noticeable improvement over 5.4, solving more complicated issues and faster too.

5.5 does not use a huge amount of my session limits with the $100 plan.

I use multiple conversations in parallel, all on xhigh effort with Fast on (2.5x consumption), and it’s still enough for me not to switch off Fast.

It also runs my tests, but I did not use TDD apart from sometimes telling it to cover an issue in a test before fixing it.


To evaluate the legal risks of using AI generated code, let’s consider how many lawsuits there have been over these concerns.

Inadvertent copyleft license violations: probably 0 lawsuits

A competitor copied your software and you could not defend your rights in court because it was made with AI: probably also 0

Users of agentic AI for software development: >10 million

The thinking here seems pretty clear to me.


This is a terrible take. Complex litigation takes longer to play out than the time span that agents have existed.

We also pay $300/month for Azure Desktop VMs.

We are paying for tens of thousands of those machines, even though everyone knows they are stupidly expensive and incredibly slow.


At this point it’s undeniable for my use cases.

After I discovered how to use git worktrees in Codex to work in three conversations in parallel, I am able to build apps with a scope that simply was not realistic before.
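For anyone unfamiliar with the setup, this is roughly what it looks like: one worktree per parallel conversation, each on its own branch. The repo path and branch names below are made up for illustration, and the toy repo exists only so the commands run end to end.

```shell
# Toy repo under a scratch directory so the commands below are runnable.
repo=$(mktemp -d)/myapp
mkdir -p "$repo" && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree per parallel agent session, each checked out on its own branch.
git worktree add ../myapp-feature-a -b feature-a
git worktree add ../myapp-feature-b -b feature-b
git worktree add ../myapp-feature-c -b feature-c

# Each directory is an independent checkout of the same repository,
# so three conversations can edit files without clobbering each other.
git worktree list
```

Each agent session then gets pointed at its own directory, and merges happen back on the main branch as each conversation finishes.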


You obviously are not reviewing the generated code in any detail before merging it. That is not sustainable, as the project will grow far larger than it needs to be.

I will see if that becomes a blocker.

There was one feature/screen that Codex built in a single 5k LOC file.

It was still perfectly capable of developing the feature and it was working as expected.

I had it break the feature down into multiple files, but if I hadn't seen it during the MR review, I would not have noticed. The large file did not seem to degrade the agent's performance.


It would be interesting to discover how large a project (in KLOC) an agent can effectively maintain without messing things up due to sheer size.

Three? Across how many projects?

One, thus the git worktrees.

You might think that this would lead to a mess with merge conflicts, but the agent can resolve them automatically.

I added an instruction to AGENTS.md so that before handoff it fetches and rebases, resolving conflicts if needed and rerunning the tests.
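The instruction itself is roughly this (paraphrased; the exact wording and the test command are project-specific):

```markdown
## Handoff checklist

Before handing work back:

1. Run `git fetch origin` and `git rebase origin/main`.
2. Resolve any merge conflicts yourself.
3. Re-run the full test suite and fix any failures before finishing.
```

Since each conversation works in its own worktree, this keeps every branch current with main and catches integration breakage before I ever look at the diff.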


I have a RTL8157 5 Gbps adapter from CableMatters.

Interestingly it seems to get burning hot on the MacBook M1 Pro while it remains cool on the M5 Pro model.

Maybe the workload is different, but I would not rule out some sort of hardware or driver difference. I only use a 1G port on my router at the moment.


Huh! That's very interesting.

I am definitely not the person to shed any light on what is going on, but you've added to my feeling that these adapters are all incomprehensible, so I'll try and do the same for you.

I have a USB-C Ethernet adapter (a Belkin USB-C to Ethernet + Charge Adapter, which I recommend if you need one). I ran out of USB-C ports one day and plugged it in through a USB-C to USB-A adapter instead. I must have done a fast.com speed test to make sure it wasn't going to slow things down drastically, and found that the latency was lower! Not by a huge amount, and I think the max speed was quicker without the adapter. But still, lower latency through a $1.50 Essager USB-C to USB-A adapter, bought from Shein or Shopee or somewhere silly!

I tested back and forth tons of times: with the adapter a few times, then without it a few times, even on multiple laptops. As much as I don't want to, I keep seeing lower latency through this cheap adapter.

Next step, I'll try USB-C to USB-A, then back through a USB-A to USB-C adapter. Who knows how fast my internet could be!


I used it last night for iOS app development and it felt like a noticeable improvement.

With the Pro plan it was available in both Codex and ChatGPT already when I first checked, which was within an hour of the release.


Based on my experience with Claude Code on the $20 plan I would not think so.

Opus 4.7 would blow through the session limits in 2-4 prompts. It was a noticeable further decrease in usage quota, which was already tight before.

Based on Anthropic's description, 4.7 was trained to think longer.

With GPT 5.5 yesterday, I felt it completed tasks noticeably faster than 5.4. I kept the xhigh effort setting.

