xienze's comments

True or false: a person in a third world country has a lower carbon footprint than someone in a developed country.

Also true or false: an immigrant from a third world country will have a higher carbon footprint after they move to a developed country.

Maybe they are part of the problem.


Aren’t all these developed countries voluntarily self-depopulating by way of having birth rates below replacement? Seems like the problem will sort itself out if we can resist the urge to invite the entire third world to come in and instantly raise their carbon footprint to first world levels.

That effect is much too slow.

> Trump admin did put people in prison and then deported them, for doing nothing more than protesting.

Link? I’m guessing we’re going to see that this definition of “protesting” involves being aggressive and directly in the face of law enforcement officers, not merely holding a sign at a distance.


> Link? I’m guessing we’re going to see that this definition of “protesting” involves being aggressive and directly in the face of law enforcement officers, not merely holding a sign at a distance.

Please read up on this one example of a US permanent resident. And then justify the actions of the govt against Yunseo Chung.

https://humanrightsfirst.org/yunseo-chung-v-trump-administra...


I would consider them to not be a good choice for a role that requires remembering new information...

That doesn’t sound very reassuring. 3205 hours is a little over a year at 8 hours a day; be generous and call it two years of use. You’re babying it with low brightness, dynamic dimming, etc., and the fact that there’s anything, even if you have to “look for it”, is not a good sign.

I've had it for 1y 4mo.

> You’re babying it with low brightness

That's the same brightness I was using on my IPS. And if you watched the videos then you'd know that those people use OLEDs at "almost max brightness" and see no burn in.

> dynamic dimming

Such features are unnoticeable during normal use, and most of them are on by default.

> the fact that there’s anything, even if you have to “look for it”

Again, this is only noticeable if your room is completely dark and you're staring at gray content.

To counter your argument: backlight bleed on IPS is much worse, and it's very visible during normal use. To quote you: "the fact that there's anything, is not a good sign".

It's weird how you call OLEDs bad but completely ignore the downsides of IPS, and I'm not even gonna start on VA.

As things stand, OLED wins.


I _have_ watched those videos and they show burn-in after an alarmingly short amount of use. My current IPS monitor has been going strong for the past decade. I expect monitors to last at least that long. Get back to us about your burn-in after 8.6 more years.

Adding one more reference, here is a recent post to /r/monitors showing burn-in after 2 years of constant use: https://www.reddit.com/r/Monitors/comments/1pf0tmi/here_is_m...

And as a personal anecdote, I've experienced burn-in on my Pixel 3a after 2 years. When switching to a full-screen solid grey, you could clearly see the bottom button bar with the home/back buttons.


> I _have_ watched those videos and they show burn-in after an alarmingly short amount of use.

Debatable; it's not what those content creators say, and I definitely wouldn't call it alarming if it requires you to stare at a gray background in a dark room. It isn't even half as bad as IPS backlight bleeding or VA viewing angles. Different people have different standards. You don't have to buy it if you don't like it.

> Adding one more reference, here is a recent post to /r/monitors showing burn-in after 2 years of constant use

Pixel clean cannot run if the monitor is constantly receiving a signal.

Also show me an IPS after 2 years of constant use. The backlight can degrade as well.

[edit] Furthermore, the 3-year warranty covers burn-in, so I guess they would happily replace your monitor once the burn-in is visible during normal use. [/edit]

> Get back to us about your burn-in after 8.6 more years.

Feel free to ping my email (@gmail.com) at that time.

> I've experienced burn-in on my Pixel 3a after 2 years

That's a 2019 phone. My Nothing Phone 2 is from 2023. I've had it for 1y 8mo+ and have seen zero burn-in on the same gray test. For reference, I don't use automatic brightness and it's at almost full brightness all day (except evenings).


> it's not what those content creators say

Quoting one of your videos: "After 21 months, seeing these artifacts is certainly annoying". If I spend $3k on a monitor, it should _not_ be annoying after 2 years.

Also "If your primary monitor use case is productivity, you likely have up to 3 years of decent usage under normal conditions before burn-in starts to become a concern". I almost exclusively use my monitor for productivity, and it definitely needs to last more than 3 years.

> Pixel clean cannot run if the monitor is constantly receiving signal.

True, but pixel clean works by burning-in the rest of your (sub)pixels so that they are evenly burned. Therefore what you are seeing in that photo is permanent degradation of those (sub)pixels. The clean will smooth it out so it doesn't look bad, but those pixels will never be as bright again. That portion of their life is spent. It is an unavoidable part of how OLED works.

I agree that emissive displays are the future. But OLED is not the way to get there.


> Quoting one of your videos: "After 21 months, seeing these artifacts is certainly annoying". If I spend $3k on a monitor, it should _not_ be annoying after 2 years.

Why are you insisting on spending $3k? I've already said that $1k is enough. Also, he said "but generally speaking it hasn't been a noticeable problem in most tasks". In the same productivity scenario, with a gray background, IPS backlight bleeding would've been even worse (unless you win the lottery and there's little to no bleeding). RTings showed this on many monitors (unfortunately they paywalled everything due to AI [1]).

[1] https://www.youtube.com/watch?v=6DshOOs39vA

> It is an unavoidable part of how OLED works.

True.

> OLED is not the way to get there.

I'm not a screen scientist so I'll refrain from making a statement like this. I don't have anything to add. I guess we'll see how it pans out in the future.

Edit: I bought my FO32U2P mostly to reduce eye strain. 4K 240Hz with nice colors is also a very neat upgrade.


Also, my gf used to use a Samsung A40, which is also from 2019, and there's no burn-in. The only issue I see is slow response times. However, she upgraded in 2022 and again a few months ago.

> The problem is some developers now just submit code for review that they didn't bother to read.

Can you blame them? All the AI companies are saying “this does a better job than you ever could”, every discussion topic on AI includes at least one (totally organic, I’m sure) comment along the lines of “I’ve been developing software for over twenty years and these tools are going to replace me in six months. I’m learning how to be a plumber before I’m permanently unemployed.” So when Claude spits out something that seems to work with a short smoke test, how can you blame developers for thinking “damn the hype is real. LGTM”?


This is correct. And at this point (and I think you agree?) we have to take that critical thinking skill and stop letting it just happen to us.

It might seem hopeless. But on the other hand, the innate human BS detector is quite good. Imagine the state we'd be in if billions of dollars of messaging could be poured into our brains without any kind of subconscious filter telling us: hey, this doesn't seem right. We've already run that experiment for a century. And it turns out that the cure is not billions of dollars of counter-propaganda consisting of the truth (that would be hopeless, as the Truth doesn't have that kind of money).

We don’t have to be discouraged by whoever replies to you and says things like, oh my goodness the new Siri AI replaced my parenting skills just in the last two weeks, the progress is astounding (Siri, the kids are home and should be in bed by 21:00). Or by the hypothetical people in my replies insisting, no no people are stupid as bricks; all my neighbors buy the propaganda of [wrong side of the political aisle]. Etc. etc. ad nauseam.


I'm a 99% organic person (I suppose I have tooth fillings) and the new models write code better than I do.

I've been using LLMs for 14+ months now and they've exceeded my expectations.


So are you learning a trade? Or do you somehow think you’ll be one of the developers “good enough” to remain employed?

I have a physical goods side hustle already and I'm brainstorming ideas about a trade I can do that will benefit from my programming experience.

I'm thinking HVAC or painting lines in parking lots. HVAC because I can program smart systems, and parking lot lines because I can use Google Maps and algos to propose more efficient parking lot designs to existing business owners.

There is that paradox where, if something becomes cheaper, demand goes up (Jevons paradox), so we'll see what happens.

Finally, I'm a mediocre dev that can only handle 2-3 agents at a time so I probably won't be good enough.


Not only do they exceed expectations, but any time they fall down, you can improve your instructions to them. It's easy to get into a virtuous cycle.

> Can you blame them?

Yes I absolutely can and do blame them


> My experience with ESP32 development has been unreasonably positive. My codebase is sitting around 600k LoC and is the product of several hundred Opus 4.x Plan -> Agent -> Debug loops.

I feel like this is an example of people having different standards of what “good” code is and hence the differing opinions of how good these tools are. I’m not an embedded developer but 600K LOC seems like a lot in that context, doesn’t it? Again I could be way off base here but that sounds like there must be a lot of spaghetti and copy-paste all over the codebase for it to end up that large.


I don't think it's that large. Keep in mind embedded projects take few if any dependencies. The standard library in most languages is far bigger than 600k loc.

I work with ESP32 devices and 600k lines of code is insane.

I'm curious: What does this device do?

It's wild to come back to this after a day away and find that the takeaway from my attempt to answer the question is punditry about the size of my codebase, from people who don't have any idea what my device does.

Answering this question directly puts me in an awkward spot because I realized last fall that there was absolutely no way that I could talk about what I'm working on in a way that can be associated with my product because there's so much anti-AI activism right now. That sucks, because I'd like to be "loud and proud" but I have a family to feed. I strongly suspect that versions of my story are playing out for hundreds of entrepreneurs right now.

Here's what I can describe: it's an ESP32-P4 based consumer device with about 45 ESP-IDF components that all communicate over an event bus. There's a substantially modified LVGL front-end with a 3D rendering engine and SVG-like 2D animation in front of a driver for a customized variation of the ST7789. There is substantial custom code for both USB host and client functions across various modes of operation. There's custom drivers for several sensors and haptic feedback. There's a very elaborate menu UI system which is also backed by a BBS style terminal configuration system for power users. There's an assignable action system with about 40 actions that all have their own state machines and a lot of mutex locking. There's a very involved and feature-dense trigger scheduling system. There's a very flexible data stream routing matrix. There's a full suite of command line scripts for most functions. There's a self-hosted web app for configuration that also implements a screen share functionality via an HTML canvas object so that I can record videos of what's happening on the device with OBS without having to point a DSLR at it from a gantry.

Honestly, I could go on and on, but all of the people who think that 600kloc is a lot [sight unseen] are following YouTube tutorials and can eat me.

I responded to you because you asked politely. I hope it was an interesting reply.


It's less than you'd think. I'm using the 35B-A3B model on an A5000, which is something like a slightly faster 3080 with 24GB VRAM. I'm able to fit the entire Q4 model in memory with 128K context (and I think I would probably be able to do 256K since I still have like 4GB of VRAM free). The prompt processing is something like 1K tokens/second and generates around 100 tokens/second. Plenty fast for agentic use via Opencode.

There seem to be a lot of different Q4s of this model: https://www.reddit.com/r/LocalLLaMA/s/kHUnFWZXom

I'm curious which one you're using.


Unsloth Dynamic. Don't bother with anything else.
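
For what it's worth, on my A5000 the launch looks roughly like this (a sketch, not my exact flags; -ngl 99 just offloads all layers, and 131072 is the 128K context):

  llama-server \
    -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL \
    --jinja \
    -ngl 99 \
    -c 131072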

For anyone else trying to run this on a Mac with 32GB unified RAM, this is what worked for me:

First, make sure enough memory is allocated to the gpu:

  sudo sysctl -w iogpu.wired_limit_mb=24000
Then run llama.cpp but reduce RAM needs by limiting the context window and turning off vision support. (And turn off reasoning for now as it's not needed for simple queries.)

  llama-server \
    -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL \
    --jinja \
    --no-mmproj \
    --no-warmup \
    -np 1 \
    -c 8192 \
    -b 512 \
    --chat-template-kwargs '{"enable_thinking": false}'
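(For reference on the RAM-trimming flags: -c 8192 caps the context window, -b 512 lowers the batch size, -np 1 allows a single parallel slot, --no-mmproj skips loading the vision projector, and --no-warmup skips the initial warm-up run.)
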
You can also enable/disable thinking on a per-request basis:

  curl 'http://localhost:8080/v1/chat/completions' \
    --data-raw '{
      "messages": [{"role": "user", "content": "hello"}],
      "stream": false,
      "return_progress": false,
      "reasoning_format": "auto",
      "temperature": 0.8,
      "max_tokens": -1,
      "dynatemp_range": 0,
      "dynatemp_exponent": 1,
      "top_k": 40,
      "top_p": 0.95,
      "min_p": 0.05,
      "xtc_probability": 0,
      "xtc_threshold": 0.1,
      "typ_p": 1,
      "repeat_last_n": 64,
      "repeat_penalty": 1,
      "presence_penalty": 0,
      "frequency_penalty": 0,
      "dry_multiplier": 0,
      "dry_base": 1.75,
      "dry_allowed_length": 2,
      "dry_penalty_last_n": -1,
      "samplers": ["penalties", "dry", "top_n_sigma", "top_k", "typ_p", "top_p", "min_p", "xtc", "temperature"],
      "chat_template_kwargs": {"enable_thinking": true}
    }' | jq .
If anyone has any better suggestions, please comment :)

Shouldn't you be using MLX because it's optimised for Apple Silicon?

Many user benchmarks report up to 30% better memory usage and up to 50% higher token generation speed:

https://reddit.com/r/LocalLLaMA/comments/1fz6z79/lm_studio_s...

As the post says, LM Studio has an MLX backend which makes it easy to use.
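
If you'd rather not use LM Studio, mlx-lm also ships an OpenAI-compatible server; something like this should work (a sketch, untested — the flags and model choice are assumptions):

  pip install mlx-lm
  mlx_lm.server --model mlx-community/Qwen3.5-35B-A3B-4bit --port 8080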

If you still want to stick with llama-server and GGUF, look at llama-swap which allows you to run one frontend which provides a list of models and dynamically starts a llama-server process with the right model:

https://github.com/mostlygeek/llama-swap

(actually you could run any OpenAI-compatible server process with llama-swap)


I didn't know about llama-swap until yesterday. Apparently you can set it up such that it gives different 'model' choices which are the same model with different parameters. So, e.g. you can have 'thinking high', 'thinking medium' and 'no reasoning' versions of the same model, but only one copy of the model weights would be loaded into llama server's RAM.

Regarding mlx, I haven't tried it with this model. Does it work with unsloth dynamic quantization? I looked at mlx-community and found this one, but I'm not sure how it was quantized. The weights are about the same size as unsloth's 4-bit XL model: https://huggingface.co/mlx-community/Qwen3.5-35B-A3B-4bit/tr...


Yes that's right. The config is described by the developer here:

https://www.reddit.com/r/LocalLLaMA/comments/1rhohqk/comment...

And is in the sample config too:

https://github.com/mostlygeek/llama-swap/blob/main/config.ex...
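
Roughly, the idea looks like this (a minimal config sketch, untested; the flags mirror the llama-server invocation above, and ${PORT} is llama-swap's port placeholder):

  models:
    "qwen-thinking":
      cmd: >
        llama-server --port ${PORT} --jinja
        -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL
        --chat-template-kwargs '{"enable_thinking": true}'
    "qwen-no-think":
      cmd: >
        llama-server --port ${PORT} --jinja
        -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL
        --chat-template-kwargs '{"enable_thinking": false}'
Only one llama-server process runs at a time, so switching between these "models" swaps the flags without ever holding two copies of the weights in RAM.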

IIUC, MLX quants are not GGUFs for llama.cpp. They are a different file format which you use with the MLX inference server. LM Studio abstracts all that away, so you can just pick an MLX quant and it does all the hard work for you. I don't have a Mac, so I haven't looked into this in detail.


FYI, the UD quants of 3.5-35B-A3B are broken; use the bartowski or AesSedai ones.

They've uploaded the fix. If those are still broken something bad has happened.

UD-Q4_K_XL?

I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon ROCm for my card (6900 XT). Is AMD capable of anything these days?

> I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon ROCm for my card (6900 XT). Is AMD capable of anything these days?

Sure. Llama.cpp will happily run these kinds of LLMs using either HIP or Vulkan.

Vulkan is easier to get going with the Mesa OSS drivers under Linux; HIP might give you slightly better performance.
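
For example (a sketch, assuming a current llama.cpp checkout and working Vulkan drivers):

  cmake -B build -DGGML_VULKAN=ON
  cmake --build build --config Release
  ./build/bin/llama-server -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL -ngl 99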


The Vulkan backend for llama.cpp isn't that far behind ROCm for prompt processing and token generation speeds.

I think AMD just added ROCm support for RDNA2 recently? I can run torch and aisudio with it just fine.

They also finally fixed all the AI-related stuff building on Windows, so you're no longer limited to Linux for these.


I think you're both right. Prior to the other day, Claude had nowhere near as much mindshare with regular people as ChatGPT did. But now that they've stood up to Orange Man, they're heroes to a large segment of the population who would otherwise have never given them a second thought.

On a related note, since OpenAI is playing ball with Orange Man, they're public enemy #1 for this same segment, hence the calls to cancel subscriptions and boycott OpenAI.

By this time next week, most people will have forgotten about all of this.


The best container security in the world isn’t going to help you when the agent has credentials to third party services. Frankly, I don’t think bad actors care that much about exploiting agents to rm -rf /. It’s much more valuable to have your Google tokens or AWS credentials.
