Hacker News | RetroTechie's comments

Amazing value indeed!

That said: it's a bit sad there's so little (if anything) in the space between microcontrollers & feature-packed, Linux-capable SoCs.

I mean: these days a multi-core, 64-bit CPU & a few GB of RAM seems to be the absolute minimum for smartphones, tablets etc, let alone desktop-style work. But remember ~y2k, masses of people were using single-core, sub-1GHz CPUs with a few hundred MB of RAM or less. And running full-featured GUIs, Quake 1/2/3 & co, web surfing etc etc on that. GUIs have been done on sub-1MB RAM machines once.

Microcontrollers otoh seem to top out at ~512 KB RAM. I for one would love a part with integrated:

- Multi-core, but 32-bit CPU. 8+ cores cost 'nothing' in this context.

- Say, 8 MB+ RAM (up to a couple hundred MB)

- Simple 2D graphics, maybe a blitter, some sound hw etc

- A few options for display output. Like, DisplayPort & VGA.

Read: relatively low complexity, but with the speed & power-efficient integration of modern ICs. The RP2350pc goes in this direction, but just isn't (quite) there.


You might like the ESP32-P4


IIRC, you can use up to 16 MB of PSRAM with RP2350. Maybe up to 32 MB, not sure.

Many dev boards provide 8 MB PSRAM.


> A company of 100 engineers should probably have 10-20% of the team allocated to just internal tools and making things go faster.

But beware of Jevons paradox.

Say that eg. a software project has 10 developers, and each build takes ~15 minutes. Most developers would take at least some care to check their patches, understand how they work etc, before submitting. And then discuss follow-on steps with their team over a coffee.

Now imagine near-instant builds. A developer could copy-paste a fix & hit "submit", see "oh that doesn't work, let's try something else", and repeat. You'll agree that probably wouldn't help to improve the codebase quality. It would just increase the # of non-functional patches that can be tested & rejected in a given time span.
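To put rough (entirely made-up) numbers on it: if every attempt costs one build plus some thinking time, shrinking the build time mostly raises the attempt rate, not necessarily the quality. A back-of-envelope sketch:

```python
# Back-of-envelope sketch (all numbers hypothetical): how many
# edit-build-test attempts fit in an 8-hour day?
def attempts_per_day(build_min: float, think_min: float, day_hours: float = 8) -> int:
    """Each attempt costs one build plus some thinking time, in minutes."""
    return int(day_hours * 60 / (build_min + think_min))

careful = attempts_per_day(build_min=15, think_min=10)  # 15-min builds: 19 attempts
spammy  = attempts_per_day(build_min=0.1, think_min=1)  # near-instant: 436 attempts
print(careful, spammy)
```

Jevons in a nutshell: a ~20x jump in attempts, with nothing forcing each attempt to be any more thought-through.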

In other words: be careful what you wish for.


Both are important. Signals represent the data being worked on, and thus naming them can be useful.

But gates apply operations/functions to those signals, and naming that logic clarifies its purpose.
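A toy illustration of the two kinds of naming (sketched in Python rather than an HDL, names chosen for the example): the function name says which operation is applied, while the named intermediates say what data flows through it.

```python
# Toy full adder: "full_adder" names the logic (the gates' purpose),
# while "partial_sum" / "carry_out" name the signals (the data).
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    partial_sum = a ^ b                              # named intermediate signal
    sum_out = partial_sum ^ carry_in                 # named output signal
    carry_out = (a & b) | (partial_sum & carry_in)   # named output signal
    return sum_out, carry_out

print(full_adder(1, 1, 0))  # (0, 1): 1+1 = 0, carry 1
```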


Nothing new in the article imho. But it's a nice overview of what content creators are facing, and what to look for when carving out a niche.

The #1 point really: have access to data / experiences / expert knowledge that's unique & can't be distilled from public sources and/or scraped from the internet. This has always been the case. It just holds more weight when AI agents are everywhere.


Actually a small flathead screwdriver is a useful tool to pull an (EP)ROM from its socket. Been there, done that (80s/early 90s).

Might be fun though to check with ChatGPT on how to use a screwdriver in this context. :-)


That's what is expected to finally kill Moore's law: the economics. At some point it'll still be technically possible to fabricate smaller IC structures, stack more layers etc, but the tech to do so (and fabs to do it at scale) will be costly enough that it's just not worth it.

The other point is of course that a next-gen fab first needs to be built, and its yields brought up. Meanwhile the previous-gen fab already exists - with all the fine-tuning done & kinks ironed out. Not to mention maaanny applications simply don't need complex ICs (the typical 32-bit uC comes to mind, but even 8-bit ones are still around).


> Still amazes me how much we did with only 512MB.

Or the other way around: still amazes me how many GBs today's machines need to do conceptually simple things. Things that were done ages ago (successfully, if not better) on much, much, much lower-powered kit. Never mind CPU speed.

Eg. GUIs have been done on machines with a few hundred KB of RAM. 'Only 512MB' is already >1000x that.


> Imo there's an identifiable core common to all of these kinds of package managers (..)

Indeed. It's hard to see why eg. a prog language would need its own package management system.

Separate mechanics from policy. Different groups of software components in a system could have different policies for when to update what, what repositories are allowed etc. But then use the same (1, the system's) package manager to do the work.
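One existing example of this split is apt pinning: a single package manager does all the work, while per-repository policy decides what gets updated from where. A minimal sketch in apt_preferences format (the "tools.example.org" repository is hypothetical):

```
# /etc/apt/preferences.d/policy
# Default: take everything from the stable release...
Package: *
Pin: release a=stable
Pin-Priority: 900

# ...but let a faster-moving tools repository win for its packages
Package: *
Pin: origin "tools.example.org"
Pin-Priority: 990
```

Same mechanics (one package manager), different policies per group of components.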


It is easy to see why the system package managers that are in use are not sufficient. The distro packages are too bureaucratic and too opinionated about all the wrong things.

I don't disagree that it would be nice if there was more interoperability, but so far I haven't seen anyone even try to tackle the different needs that exist for different users. Heck, we really haven't seen anyone try to build a cross-language package system yet, and that should be a lot easier than trying to bridge the development-distro chasm.


> The distro packages are too bureaucratic and too opinionated about all the wrong things.

For example?


Author didn't name the source of an image in the article ("Dependency"):

https://xkcd.com/2347/

Uncool.



https://xkcd.com/license.html

Randall's license text is really chill; normally, to give attribution to an image from Commons, there would be more to do. But the article does hotlink the image to the page on xkcd.com, and that is clearly "stated in the LICENSE file".


Ah, you're correct. I text-searched the article for "xkcd" & that came up empty. But I missed that the image is a link.

s/uncool/okay/


How would one cool the CPU? I see no mounting holes for a cooler, yet the specs do seem to mention a CPU fan connector.

Glue a heatsink onto the CPU?

Also, the CPU TDP is missing from the specs?


> Also CPU TDP is missing from the specs?

About 10W maximum. It's a fairly low-power CPU intended for use in networking equipment.

https://community.nxp.com/pwmxy87654/attachments/pwmxy87654/...

