nathanielherman's comments

Claude Code hasn't updated yet, it seems, but I was able to test it using `claude --model claude-opus-4-7`

Or `/model claude-opus-4-7` from an existing session

edit: `/model claude-opus-4-7[1m]` to select the 1m context window version


API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"\"thinking.type.enabled\" is not supported for this model. Use \"thinking.type.adaptive\" and \"output_config.effort\" to control thinking behavior."},"request_id":"req_011Ca7enRv4CPAEqrigcRNvd"}

Eep. AFAIK the issues most people have been complaining about with Opus 4.6 recently are due to adaptive thinking. Looks like that's not only sticking around but is mandatory for this newer model.

edit: I still can't get it to work. Opus 4.6 can't even figure out what is wrong with my config. Speaking of which, Claude configuration is so confusing: there's an in-project .claude/ dir with a settings.json plus a settings.local.json file, then a global ~/.claude/ dir with the same configuration files. None of them define anything for adaptive thinking or an enabled thinking type; none of those strings exist anywhere on my machine. Running the latest version, 2.1.110.
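
For what it's worth, going by the error message alone, the raw API call would presumably need to look something like this (the field names come straight from the 400 error; the "high" effort value and the exact shape of output_config are my guess, not something I've seen documented):

  # "thinking.type": "adaptive" + "output_config.effort" are what the error asks for;
  # the actual effort value here is a guess
  $ curl https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d '{
        "model": "claude-opus-4-7",
        "max_tokens": 1024,
        "thinking": {"type": "adaptive"},
        "output_config": {"effort": "high"},
        "messages": [{"role": "user", "content": "hello"}]
      }'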


~~That just changes it to Opus 4, not Opus 4.7~~

My statusline showed _Opus 4_, but it did indeed accept this line.

I did change it to `/model claude-opus-4-7[1m]`, because otherwise it would pick the non-1M-context model.


Oh good call


Does it run for you? I can select it this way but it says 'There's an issue with the selected model (claude-opus-4-7). It may not exist or you may not have access to it. Run /model to pick a different model.'


Weird, yeah it works for me


Claude Code doesn't seem to have updated yet, but I was able to try it out by running `claude --model claude-opus-4-7`


`/model claude-opus-4-7[1m]`


I was a kid who started shaping surfboards as a hobby shortly before this happened. The hectic-ness was real: it was almost impossible to find a foam blank afterwards, blanks were more expensive, and they felt noticeably lower quality. I thought my hobby was going to come to a quick end. Glad it's turned around for the better over the years!


This experiment is a bit weird. If you look at https://github.com/matklad/lock-bench, this was run on a machine with 8 logical CPUs, but the test is using 32 threads. It's not that surprising that running 4x as many threads as there are CPUs doesn't make sense for spin locks.

I did a quick test on my Mac using 4 threads instead. At "heavy contention" the spin lock is actually 22% faster than parking_lot::Mutex. At "extreme contention", the spin lock is 22% slower than parking_lot::Mutex.

Heavy contention run:

  $ cargo run --release 4 64 10000 100
      Finished release [optimized] target(s) in 0.01s
      Running `target/release/lock-bench 4 64 10000 100`
  Options {
      n_threads: 4,
      n_locks: 64,
      n_ops: 10000,
      n_rounds: 100,
  }

  std::sync::Mutex     avg 2.822382ms   min 1.459601ms   max 3.342966ms  
  parking_lot::Mutex   avg 1.070323ms   min 760.52µs     max 1.212874ms  
  spin::Mutex          avg 879.457µs    min 681.836µs    max 990.38µs    
  AmdSpinlock          avg 915.096µs    min 445.494µs    max 1.003548ms  

  std::sync::Mutex     avg 2.832905ms   min 2.227285ms   max 3.46791ms   
  parking_lot::Mutex   avg 1.059368ms   min 507.346µs    max 1.263203ms  
  spin::Mutex          avg 873.197µs    min 432.016µs    max 1.062487ms  
  AmdSpinlock          avg 916.393µs    min 568.889µs    max 1.024317ms  

Extreme contention run:

  $ cargo run --release 4 2 10000 100
      Finished release [optimized] target(s) in 0.01s
      Running `target/release/lock-bench 4 2 10000 100`
  Options {
      n_threads: 4,
      n_locks: 2,
      n_ops: 10000,
      n_rounds: 100,
  }

  std::sync::Mutex     avg 4.552701ms   min 2.699316ms   max 5.42634ms   
  parking_lot::Mutex   avg 2.802124ms   min 1.398002ms   max 4.798426ms  
  spin::Mutex          avg 3.596568ms   min 1.66903ms    max 4.290803ms  
  AmdSpinlock          avg 3.470115ms   min 1.707714ms   max 4.118536ms  

  std::sync::Mutex     avg 4.486896ms   min 2.536907ms   max 5.821404ms  
  parking_lot::Mutex   avg 2.712171ms   min 1.508037ms   max 5.44592ms   
  spin::Mutex          avg 3.563192ms   min 1.700003ms   max 4.264851ms  
  AmdSpinlock          avg 3.643592ms   min 2.208522ms   max 4.856297ms


The top comment opens up the concept of latency versus throughput. My interpretation is that this experiment is demonstrating that optimizing only for latency has consequences elsewhere in the system. Which is not surprising at all, but then again I spend a lot of time explaining unsurprising things.

I remember a sort of sea change in my thinking on technical books during a period where I tended to keep them at work instead of at home. I noticed a curious pattern in which ones were getting borrowed and by whom. Reading material isn't only useful if it has something new to me in it. It's also useful if it presents information I already know and agree with, in a convenient format. Possibly more useful, in fact.


If you only have 4 threads, it's likely that all your CPUs are sharing caches, so you won't see the real downside of the spinlock. Spinlocks don't really fall apart until you have several sockets.
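
If you have a multi-socket Linux box handy, one way to check that is to pin the benchmark to a single socket and then let it span both. Something like the following (numactl flags are standard; whether NUMA nodes map one-to-one to sockets depends on the machine, so check lscpu first):

  # all 32 threads on one socket: contended cache lines stay inside one package
  $ numactl --cpunodebind=0 --membind=0 cargo run --release 32 2 10000 100

  # spread across both sockets: every contended line can bounce over the interconnect
  $ numactl --cpunodebind=0,1 cargo run --release 32 2 10000 100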


Note that I get a similar speedup with 6 and 8 threads on my Mac (which has 8 logical CPUs)

