Boris Cherny, creator of Claude Code, posted a six-part thread on Threads on how he and his team get the most out of Opus 4.7. The tips are small on their own but coherent together. I went through each one, cross-checked it against the Claude Code docs, the migration guide, and the Opus 4.7 announcement, and pulled out what I think actually matters.
I wanted per-turn visibility at the individual Claude Code chat session level. So I built a Claude Code skill, sessions-metric, that reads Claude Code's raw conversation logs and breaks down every response at the project and individual session level.
There are other popular usage tools (ccusage, ccburn, Claude-Code-Usage-Monitor, codeburn, etc.), but none of them operate at the level of an individual Claude Code chat session.
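Claude Code stores each chat session as a JSONL transcript (by default under ~/.claude/projects). A minimal sketch of the per-session aggregation such a skill performs might look like the following; the entry shape and usage field names are assumptions based on the current log format, not the skill's actual code:

```python
import json
from collections import defaultdict

def aggregate_session_usage(jsonl_lines):
    """Sum input/output tokens across assistant turns in one session transcript.

    Assumes each assistant entry carries a message.usage dict with
    input_tokens / output_tokens fields (field names may change between
    Claude Code versions).
    """
    totals = defaultdict(int)
    turns = 0
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        usage = (entry.get("message") or {}).get("usage")
        if entry.get("type") == "assistant" and usage:
            turns += 1
            totals["input_tokens"] += usage.get("input_tokens", 0)
            totals["output_tokens"] += usage.get("output_tokens", 0)
    return {"turns": turns, **totals}

# Two synthetic log entries standing in for a real transcript:
sample = [
    '{"type": "assistant", "message": {"usage": {"input_tokens": 1200, "output_tokens": 350}}}',
    '{"type": "assistant", "message": {"usage": {"input_tokens": 900, "output_tokens": 410}}}',
]
print(aggregate_session_usage(sample))
```

In practice you would glob the project directory for `*.jsonl` files and run this per file to get the per-session breakdown.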
A Claude Code/Cowork skill called ai-video-creator allows video generation through a unified API that aggregates ByteDance Seedance 2.0, Kling 3.0, Google Veo 3.1, Grok Imagine, Wan 2.7, Runway, ElevenLabs, and Suno AI behind a single authentication flow.
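One common way to put that many providers behind a single authentication flow is a dispatch table that maps a model name to its backend endpoint and normalizes the request. The sketch below is purely illustrative; the endpoint URLs and payload fields are hypothetical placeholders, not the skill's real routing:

```python
# Hypothetical unified video-generation dispatcher.
# Every endpoint URL and payload field here is an illustrative placeholder.
PROVIDERS = {
    "seedance-2.0": {"vendor": "ByteDance", "endpoint": "https://api.example.com/seedance"},
    "kling-3.0":    {"vendor": "Kling",     "endpoint": "https://api.example.com/kling"},
    "veo-3.1":      {"vendor": "Google",    "endpoint": "https://api.example.com/veo"},
}

def build_request(model: str, prompt: str, api_key: str) -> dict:
    """Normalize one prompt into a provider-specific request description."""
    if model not in PROVIDERS:
        raise ValueError(f"unknown model: {model}")
    provider = PROVIDERS[model]
    return {
        "url": provider["endpoint"],
        # One bearer token regardless of backend: the "single auth flow".
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"model": model, "prompt": prompt},
    }
```

The caller only ever sees one function and one key; the table absorbs the per-vendor differences.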
I built a Claude Code skill that generates images from the terminal and also via the Claude Desktop macOS app and Cowork. One command, any supported model, with transparent backgrounds, reference-image editing, prompt-engineering patterns, and composite banner generation built in.
The skill supports five AI image models through OpenRouter’s API, all proxied through Cloudflare AI Gateway for monitoring and cost control:
Gemini 3.1 Flash Image Preview (Google Nano Banana 2)
FLUX.2 Max
Riverflow v2 Pro
Seedream 4.5
GPT-5 Image
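Routing OpenRouter calls through Cloudflare AI Gateway generally just means swapping the base URL for a gateway-prefixed one. The sketch below builds such a request; the gateway account/name segments and the model slug are placeholders, and whether a given model returns images from the chat-completions endpoint depends on the model, so treat this as an assumption-laden outline rather than the skill's code:

```python
import json

# Assumed base URLs: OpenRouter's OpenAI-compatible endpoint, optionally
# proxied through a Cloudflare AI Gateway. ACCOUNT_ID and GATEWAY_NAME are
# placeholders you would replace with your own gateway's values.
OPENROUTER_DIRECT = "https://openrouter.ai/api/v1/chat/completions"
CF_GATEWAY = (
    "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME"
    "/openrouter/v1/chat/completions"
)

def build_image_request(prompt: str, api_key: str, via_gateway: bool = False):
    """Build (url, headers, body) for an image-generation chat request."""
    url = CF_GATEWAY if via_gateway else OPENROUTER_DIRECT
    body = {
        # Model slug is an assumption for illustration only.
        "model": "google/gemini-3.1-flash-image-preview",
        "messages": [{"role": "user", "content": prompt}],
        "modalities": ["image", "text"],  # request image output where supported
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(body).encode()
```

The gateway path buys you request logging, caching, and spend caps without changing anything else in the call.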
I tested Grok 4 Fast; it does a bit better than the Sonoma Alpha models, but is nowhere near Grok Code Fast 1, Claude, etc., for code analysis at least. I posted my comparison evals at https://github.com/centminmod/code-supernova-evaluation