“Removal of powerpc as a release architecture” (debian.org)
173 points by bandrami on Oct 31, 2016 | 170 comments



Sad, but a day everyone knew would come. I totally understand the lack of support for a dead architecture. It's not like you can keep throwing new software at these machines and expect them to run it well anyway.

I have a 2003-04 era Powerbook with Ubuntu on it. It runs 10.04 just fine, but any newer and it's pretty damn slow. Basically the only thing I can do with it now is text based stuff, and not much of it. Some Python, bash, C++, etc. Can't even really browse the web, because the newer versions of any web browser won't run/compile.

Since I can do all of those same things on my current laptop just fine there's no motivation to use the PPC mac other than nostalgia. Imagine how the folks feel trying to support it now?


It frustrates me that computers become obsolete this way. We've generated an awful lot of waste and not a whole lot of progress. Everything I do now I could do ten years ago, it's just "prettier" now. I'd go back to one of these old machines in an instant if it didn't mean that reading articles on the web would be so painful.


This comment reminds me of a story I heard years, years ago. I was living in Central Asia, in one of the post-Soviet countries. (As an aside, I met quite a few people there who much preferred life under Soviet rule. Sometimes at parties in the countryside they'd even start singing rousing choruses of the Soviet anthem!)

Anyway, the apocryphal story goes, Khrushchev is visiting a Western washing machine factory. He sees all the different models coming off the line: this one has two agitators, that one has three speeds, whatever. He sees the churn of factory equipment. All that hustle and bustle to try and sell more washing machines.

He laments, "Oh, you capitalists, you're so inefficient. Look at how much time and effort you waste on all this competition! We Soviets made one washing machine, and it works!"


It's a little deceptive to label these kinds of sentiments as contrary to the premise of innovation.

Experiments, by nature, require expendable resources, but a finished appliance intended for a consumer, with no end-user-serviceable internals, is not the sort of thing you want spewing garbage in all directions.

The reality, though, is that we're all being railroaded down strict paths of planned obsolescence that are integrated into business models from day one.

Your waste and mine should not be interpreted as an unforced error. It's carefully planned, with the intent to fund more planning sessions to forecast more waste.

...at least down among these lower echelons we inhabit (speaking for myself, so as not to make assumptions about who reads this), and then there are those who ride above it all and skim from the fruits of our obesity.


There are side channels for places like apartment complexes that want a much longer device lifespan and are willing to pay for it. Planned obsolescence is much more common on the 'mass market' channels than the industrial channels.

You really can still buy refrigerators that will last 30 years. They just don't have built in ice makers etc.

PS: The 50-200% premium may seem steep, but if it lasts 5x as long then you more than break even.


Where can I buy these 30 year refrigerators?


For a consumer, a Sub-Zero fridge tends to last over 20 years on average, and significantly longer than that if you clean the condenser annually.

Past that, Arctic Air for example has commercial refrigerators which will last 30+ years if you do some occasional repairs. Though, they really don't look like home units.


> There are side channels for places like apartment complexes that want a much longer device lifespan and are willing to pay for it. Planned obsolescence is much more common on the 'mass market' channels than the industrial channels.

Most apartment buildings where I've lived have had these washing rooms, yes. Yeah, those machines are built like tanks and probably last at least a few decades. The size and weight of these machines are also more like a tank's than what you'd normally consider a washing machine's, so you wouldn't want one of those in your own apartment.. :)


You're wrong, I'm not lamenting progress.* The fact that our computers are so much faster now is pretty cool. But the fact that I NEED today's computer just to read an article or buy something online is absolutely ridiculous.

* Well, aside from the indirect effect, where the needing-the-hotness leads to dumping toxic waste in far off countries we don't care about.


Do you really? I used a Chromebook as my laptop for about a year without much issue. I upgraded this past year mainly to reduce my electricity consumption... the computer I had two desktops ago was plenty fast, and it's still in use (handed down) today.

I'm far more concerned with the churn in phones than I am in desktops, though some laptops are particularly bad long term devices.


Where I live you can get a phone contract that includes the latest model, a new phone, once a year. So if you sign up now for an iPhone 7, in a year you will get a 7s, a year after that the 8, and so on. You return the phone you have to the telecoms company.

I don't know what they do with all of the returned phones; I assume they sell them on refurbished, but I also assume that a percentage of those that are not broken will fall into the telecom's basket of not cost-effective to refurbish. Those are phones that would otherwise have remained in use. Even then, the amount of new phones being manufactured just because consumers want to consume the latest thing, with little to no real benefit, is wasteful.


> We've generated an awful lot of waste and not a whole lot of progress.

For some to succeed, others must fail. It's not possible to keep every CPU design ever conceived in production. PowerPC lost. There's not an infinite amount of software engineering resources to support every discontinued architecture.

> I'd go back to one of these old machines in an instant if it didn't mean that reading articles on the web would be so painful.

If you really feel strongly, this is an opportunity for you to get involved in Linux maintenance.


The underlying problem that the parent post was trying to point out is that the programs we run today largely do pretty much the same things that they did 10 years ago, but for some reason they require vastly more powerful hardware, with questionable gains.

A modern browser crawls on my old iBook G3, even when all it has to display is the HN homepage. And it crawls on the web in general; it would be clever engineering if it were slow only on websites that use super modern features, like WebGL, but the whole thing grinds to a halt when reading a newspaper's website, even though the website is fundamentally the same as it was ten years ago. The same goes for virtually every other kind of software.

My computer is at least an order of magnitude faster than the one I owned ten years ago, but the quality, or at least the speed, of my usage is pretty much the same. This sounds less like progress and more like running in circles :-).


What's the first thing that's drilled into you as a professional developer? Don't optimize prematurely!

Better hardware just moves the goalpost of where 'prematurely' is.

From a business perspective, if it runs fine on modern hardware, why spend money optimizing it?

This sort of means that better hardware should lower the price of software development. You should be able to replace devs competent at optimizing with cheaper mediocre devs, or the competent devs should be able to spend less time on the same software.

I get the impression this hasn't happened, but I haven't looked at actual statistics.

But also, software now has more features, there was nothing like modern web apps 15 years ago.

(Note: I'm making guesses on why things are as they are, not as I think they should be. I share your frustrations; why is my 1 year old budget smartphone, which is 4 times as powerful as the 1st laptop I owned, so laggy that it's nearly unusable?!)


> software now has more features, there was nothing like modern web apps 15 years ago.

You're confusing "features" with the technology delivering them. ;)

> why is my 1 year old budget smartphone, which is 4 times as powerful as the 1st laptop I owned, so laggy that it's nearly unusable?!

cough Java cough :)

A friend of mine used a Nokia N900 smartphone (running Maemo Linux), with a single-core 600 MHz CPU and 256 MB RAM. When it finally died, he moved to a recent Samsung Galaxy model, and complained about how much laggier it was when performing the same tasks that the ancient N900 breezed through at a buttery smooth 60 FPS.


> A modern browser crawls on my old iBook G3

OK, I agree that in the browser space (as opposed to the processor space) the past 5 years have brought a lot of churn and flabby software.

It's a hard problem, as web developers have an incentive to use all the horsepower available to them. The only way forward I can imagine is if everybody sandboxes their browser tabs to limit the resources available to them (e.g. using Linux cgroups), and web developers know they have to live within those constraints.

"cgroups in the browser" turned up this proposal (March 2016) https://news.ycombinator.com/item?id=11998210


> It's a hard problem as web developers have incentive to use all the horsepower available to them.

Although it's not only the fault of the browser devs, but probably mainly of JS/frontend devs... the result is that leaving a browser window open easily halves battery time, because even innocuous sites eat half a core of CPU. Also, the fan's always whirring.

Qubes OS is quite nice here. Just pause the VM with the browser in it -> 0 % CPU, but can still read what's on the display.


> Just pause the VM with the browser in it -> 0 % CPU, but can still read what's on the display.

On a related note, how is "halt all execution and scheduled future execution of scripts in the current tab" not a solved problem for mainstream browsers? Or maybe it is and somehow I missed it?


Not exactly that, but with Firefox you can tweak dom.min_background_timeout_value so that scripts in background tabs get invoked even less often than they already are. (Chrome does the same, but its minimal timeout value is, unfortunately, hard-coded.)
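
For reference, that's a single line in user.js in your Firefox profile directory; the value is in milliseconds, so this (illustrative) setting would let background-tab timers fire at most once every 10 seconds:

    // user.js: raise the minimum timeout for background tabs (value in ms)
    user_pref("dom.min_background_timeout_value", 10000);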


Fantastic; thanks!


I'd be happy with "tabs not visible go into stasis".


Vivaldi hibernates background tabs natively. Can't remember the details. There are extensions, but they just swap the site out for a static page.


There are some FF extensions that do just that, but some sites break as they're not expecting it.


Wouldn't those same sites break in exactly the same way if you were browsing them on a mobile browser and switched temporarily away to another app? Or suspended/resumed your desktop?


> The underlying problem that the parent post was trying to point out is that the programs we run today largely do pretty much the same things that they did 10 years ago, but for some reason they require vastly more powerful hardware, with questionable gains.

Oh God, not this story again.

I'll tell you what I told my uncle when he complained that his 200 MHz Pentium MMX with 64 MB of RAM, running Windows 2000, was slow on Facebook.

Yes, we do (pretty much) the same things we used to do ten years ago (this is a lie, actually). What has changed is the volume of data that we generate, send, receive and have to store and process.

The first digital camera I got for Christmas maybe 14 years ago took some nice photos. At high resolution, a photo would take 80 to 100 KB of storage. No metadata. And you could store up to 20 photos in high resolution. It was a kind of a "toy" camera, but still... Nowadays kids with digital cameras can take photos that take 7-9 megabytes per photo.

Your mail client does not send your username and password as cleartext anymore. Everything is encrypted.

Your www browser does not send your forms (and your passwords too) as cleartext anymore. Everything is encrypted.

Ten years ago there were no videos on Facebook, no realtime chat, no realtime video streaming. There was no Dropbox, and most webmails used to allow up to 100MB of storage.

Data grows, and this is why the industry puts out faster computers every year. And this is why computers become "old" and "slow".

Get over it.

============================

Now, we could ask ourselves, why do many websites have to be so fat? I would really love to see websites supporting an "ultralight" version.

Just like HN: the source is all tables, td and tr but... It renders well on mobile and works awesomely fast on a ten-year-old laptop (and I tried).


Seriously? The reason for things being this bloated is that we put every piece of junk in an inefficient web browser, in an inefficient scripting language, instead of the reasonably fast, compiled ones. Put Gmail and Facebook into a real desktop Qt program and a 10-15 year old laptop would be glad to run them, just like it happily ran Thunderbird, Pidgin, MSN Messenger, and all that jazz. Geez.


> Put gmail and facebook into a real desktop qt program and a 10-15 year old laptop would be glad to run them

Depends on how much mail you keep.

I use GMail's storage as it's intended, I don't delete anything other than spam. Over the course of however many years I've been using it my mailboxes have expanded to the point that simply keeping track of the current state of things causes Mail.app to drag my '08-era Macbook Pro to a crawl. That's a native app built by the designers of the platform. Thunderbird wasn't much better. Simply opening either one up basically maxes out 4GB of RAM, and unfortunately the model I have doesn't like 8GB.

I stopped using a local mail client and went back to webmail pretty much because of that.


Thunderbird is a funny example here, because all of its UI is rendered by a web browser...


It takes quite a bit of developer effort to write good desktop apps, especially if you want to make them cross-platform as well.

Developers are expensive, and getting pricier. CPU cycles get cheaper every year. Spending more cycles in order to spend fewer developer hours is better from the perspective of rational allocation of resources.


And don't underestimate shipping them!

I think there's a generation of people who, since they've never done it, have no idea how hard it is to ship an executable that will reliably run even across multiple patch levels of what is nominally the same OS. Let alone across different OS versions or even OSes. And then even knowing of crashes that happen on end-user boxes, let alone debugging them, is another pile of suck. And upgrades.

The web browser is a light-years better target, because it lets you run the servers yourself. And you can do things like only have one version deployed at a time! No more trying to make different client versions backwards and forwards compatible with multiple server versions, etc.


That is utterly untrue. Compare e.g. the terse, intuitive GUI tooling you get with Racket vs. the effort required to achieve an inferior result with Javascript and the framework of the week.


You need to hire a dev that actually knows Racket. Companies already have JS programmers around because they always need to build a front end.

Just compare mobile to desktop. Apps abound in mobile because the extra performance inefficiency actually matters enough that it's worthwhile to put in the engineering effort; on desktop the web app is feature-rich and fast enough.

There's a reason people use FB in a browser on desktop but the app on mobile.


I don't mean performance inefficiency, I mean that developing apps in JS is harder and slower than with other frameworks. I picked on Racket because it's possible to build a cross platform GUI app skeleton in less than a screen full of code, with no external dependencies.
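
Not Racket, but the same point holds in Python with the stdlib's tkinter; a complete cross-platform GUI skeleton, no external dependencies, well under a screenful:

    import tkinter as tk

    root = tk.Tk()
    root.title("Skeleton")

    # A label and a button are enough to show the shape of the thing.
    tk.Label(root, text="A whole cross-platform GUI app").pack(padx=20, pady=10)
    tk.Button(root, text="Quit", command=root.destroy).pack(pady=10)

    root.mainloop()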


> What has changed is the volume of data that we generate, send, receive and have to store and process.

I don't think this is true. The volume of data that a regular user generates, sends and receives has changed, but if this were the root of the problem, I'd expect to see CPUs stalling while waiting for the damn network to roll. What we see instead is CPUs spinning hot in order to display the latest list of hotties in your area.

> Nowadays kids with digital cameras can take photos that take 7-9 megabytes per photo.

And ten years ago, default settings in scanning software would result in 20-megabyte JPEGs that were processed just fine.

> Your mail client does not send your username and password as cleartext anymore. Everything is encrypted.

I think I first used SSL for email in mutt 1.4 (I might have been using it in Evolution before that, but mutt being mutt, it was a little painful to get it working, so I remember it well). That was more than ten years ago. OpenSSL itself is almost twenty years old, has been quite extensively used throughout its history, and it's the successor of another project.

> Your www browser does not send your forms (and your passwords too) as cleartext anymore. Everything is encrypted.

And yet wget-ing a page over HTTPS works fine on a 600 MHz, single-core, low-power i.MX6. So does browsing it on a 600 MHz PowerPC with a 2003-era browser, at least until the banner-carrying JS cavalry comes in.

(Not to mention -- again -- that HTTPS worked just fine ten years ago, it just wasn't as common; but it's well within the processing possibilities of a relatively modest CPU, and it's not like now your CPU is continuously spinning encrypted data for umpteen websites at the same time).

> Ten years ago there were no videos on Facebook,

This one might be true, but videos, real-time chat and real-time video streaming were certainly available on the web, even though not on the (still recently-launched) Facebook.

Dropbox wasn't launched yet, but they hardly invented file storage and distributed access, and I am not going to start explaining why the amount of storage that a webmail provider offers should have no impact over the speed of the client-side code of their webmail interface.


I think you're on to something but I would like to point out

> realtime chat

hasn't really improved over the past 25 years. Someday we'll have federated chat servers, but that seems more of a social engineering problem than a technical one.


But IRC was just plain text with URLs that you had to click on! Now you can post inline GIFs for everyone to see!

... Oh wait, that's actually not an improvement. :)

Also, FTFY:

> Someday we'll have federated chat servers again


> Ten years ago there were no videos on Facebook, no realtime chat, no realtime video streaming. There was no Dropbox, and most webmails used to allow up to 100MB of storage.

YouTube was around in 2005, realtime chat hasn't changed, cloud services and Facebook suck anyway, and I had file sharing long before then.

"Everything is encrypted" is fine as well.

I hear you on "everything is bigger now," but I don't see much benefit from it.

> Get over it.

Pff.


> YouTube was around in 2005

1080p? Your point is invalid.

> and I had file sharing long before then

Oh, and how much storage did the typical user have ready access to?


1080p60 is gradually happening, which is what's making my 2009 laptop suck at YouTube.

…that and a depressingly inefficient playback path. (It is not necessary to spin up my fans to play 1080p30, but it is from a Web browser.)


I am not saying nor implying that bigger is better.

Come on, people!

I am clearly stating that pretty much everything is bigger today, and old PCs have become slow because they weren't designed to handle such a volume of data.

My post was only an explanation.


> Now, we could ask ourselves, why do many websites have to be so fat?

Lots of javascript to get ad dollars.


Unfortunately, newspaper websites in particular are very different from how they were ten years ago. Newspapers for some reason deem it necessary to load every piece of Javascript ever written to display an article.


Browsing the Web from a Raspberry Pi (3) was an interesting experience. Browsing supposedly static content like news sites was often surprisingly painful, yet Google Docs was at least usable—alongside Roll20, which is also not exactly the lightest weight thing I use.

(I also came to loathe, even more, videos playing that I didn't ask for.)


I'll just leave this here: https://www.ublock.org/


uMatrix gives even more control.


The same criticism, that 'new software is just slower but doesn't offer much new', has always been made.

Browsers in 2016 do vastly more than they did in 2006, websites are richer and are much closer to apps than static pages.

The demand that "I want my browser to be exactly the same speed as it used to be, but also support all these new features" doesn't seem like a reasonable engineering ask. You optimize for the hardware that people will run, not the hardware of 10 years ago, and you also optimize for your slowest new use cases, not the old use cases which will be fast enough even without much optimization.

The argument also doesn't make logical sense. Ten years ago, there was someone saying software 'largely do pretty much the same things' compared to 1996, and ten years before that I'm sure someone was comparing 1996 software to 1986 and saying something similar. But software today looks nothing like in 1986.


I'd rather it stayed the exact same speed it used to be, even if that meant only providing the same features as I had in 2006. The web was pretty good in 2006!


That'll probably happen not long after hardware stops improving.


> The demand that "I want my browser to be exactly the same speed as it used to be, but also support all these new features"

Wait wait wait, hold up. I don't want my browsers to support those features. I just want to not need them in order to read a goddamn news article or access a web store. The problem isn't that these technologies are available, it's that they're unavoidable.


I don't see richer, more app-like web sites as a clear improvement. They often leave me feeling frustrated, like the designer's ego is more important than actual utility. Would you please, I sometimes want to say, just stop zooming around and show me the text already?


> The demand that "I want my browser to be exactly the same speed as it used to be, but also support all these new features" doesn't seem like a reasonable engineering ask.

Certainly. What does sound like a reasonable engineering request is "I want my browser to be roughly the same speed as it used to be when using these same features that it used to have". Some speed penalty is obviously to be tolerated because of the larger infrastructure (I was about to say boilerplate, but let's keep it civil, there's a legitimate need for it) required to service the extra features, but I think I'm being reasonably lenient even here, considering that this larger infrastructure runs on vastly, vastly superior hardware.


The problem isn't the browser - it's the newspaper's website.

Between display ads and a good dozen web analytics trackers, website bloat is the culprit.


Some websites work fine on a 10 year old browser with a 10 year old computer. Others are so saturated in crappy JavaScript and auto-playing videos that they'll give a modern high end machine a run for its money.

I'd go out on a limb and say tons of websites got way worse over the past 10 years.


I agree with you to a point. I'm doing the same thing I was 10 years ago, but my output is of much higher quality.

In 2007 I was developing websites; the ones I build now are considerably better in almost every way. I used Photoshop then too, but now the images I work with are far better resolution. Video production... you get the point.

So yes I'm doing much of the same thing, but my quality has improved significantly.


PowerPC may be dead, but POWER sure isn't.

My guess is that PowerPC will still hold some relevance for the time to come as a cheap and widely available testing architecture: if a piece of software works on ppc (BE), it most likely has no byte-ordering issues (at least none that show up in its tests) and should work on POWER or SPARC as well.
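
As a quick illustration (Python, purely a sketch) of the class of bug a big-endian test box flushes out: any code that serializes integers in native byte order produces different bytes on ppc than on x86, and a test run on BE hardware catches the assumption.

    import struct
    import sys

    value = 0x01020304
    native = struct.pack("=I", value)  # host byte order; differs across archs
    be = struct.pack(">I", value)      # explicit big-endian, same everywhere
    le = struct.pack("<I", value)      # explicit little-endian, same everywhere

    print(sys.byteorder)  # 'little' on x86/ARM-LE, 'big' on classic ppc
    # On a little-endian host, native == le; on ppc, native == be.
    # Any test comparing serialized bytes against a fixture fails on ppc
    # if the code wrongly assumed native order is little-endian.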


I think you'll find most developers wanting to get stuff working on Power will just try and get their hands on a POWER8 VM rather than relying on an ancient PowerPC box.


I still use my PowerMac G5 and it works fine performance wise. I'd be willing to bet it would run POWER8 code faster than an interpreter running on x64.

Although realistically I'd just log in to an instance somewhere in an IBM cloud.

And now I feel sad, like the sad mac face.


Modern POWER is ppc64le, so it's no use for this. On the other hand nearly every ARM machine can run big-endian.


I mostly referred to

> PowerPC lost.


"The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry."


Agree 100%. I'm hoping that RISC-V will be a nice plateau where we can hang out for a long, long time. It's within spitting distance of as good as the current mode of thought is going to get, and there's no commercial entity behind it to chase after differentiation and therefore churn.

It almost makes me hope that arches like the Mill don't pan out - if there's nothing obviously better, boring compatibility will win and bring with it the opportunity to build higher on a narrower, more stable base.


> It frustrates me that computers become obsolete this way.

Honestly, I'm optimistic. Technological progress is plateauing (at least single-threaded CPU speeds), and this will naturally lead to more stability.

I think the last few decades of rapid technological change have led us through a dark ages of sorts, and the next few decades will hopefully be a maturing of platforms and standards, at least as far as desktop/laptop computing goes.

To put it another way: Moore's law has led to a multitude of sins, and once we can no longer rely on Moore's law to make our computers faster, we're going to really think through past decisions and make smarter choices going forwards. It'll be the only way to keep making progress.

To bring this back to your comment specifically, I think computers will take longer and longer to become obsolete, as these technologies mature. I think this is already happening: I can quite happily use a 5 year old computer now for most current computing tasks, whereas this would not have been true a decade ago.


> Technological progress is plateauing (at least single-threaded CPU speeds), and this will naturally lead to more stability.

That's a surprisingly narrow perspective on technological progress (although convenient for your argument).

> once we can no longer rely on Moore's law to make our computers faster

Moore's law is still making our GPUs faster as planned. Our only hope is Amdahl's law. :)

> I think computers will take longer and longer to become obsolete

No need to look at the future. The average lifespan of a computer is already 4 years and maybe even more. We are starting to see the same larger lifespan for smartphones. (The biggest issue with keeping my phones for longer is actually the availability of Android updates.)


Sadly, the longer lifespan for phones is being sabotaged by epoxied-in batteries. My 2 year old Note 4 will be fine (quad core, 3 GB RAM, 96 GB storage) for many more years... only because I can replace the battery (which I've already done once).

Finding a replacement for my Note 4 with a replaceable battery has been tough. Hopefully in the next few years someone will offer one.


User-replaceable batteries are definitely a benefit, but that's a false dichotomy. There is such a thing as battery replacement service.


Ten years ago? More like 30 years: what can you do now that you couldn't do on an Amiga in 1985? Same stuff, faster and in higher resolution, that's all...


> Ten years ago? More like 30 years, what can you do now that you couldn't do on an Amiga in 1985?

Things like streaming HD video on YouTube for hours on a single battery charge?


I'm using a seven year old Macbook Pro for everything and it works just fine. I upgraded the RAM and maybe I'll get a SSD at some point. I really don't understand why people throw out their computers every two or three years. If Apple eventually stops supporting the hardware I'll switch to Linux.


The hardware progress is also slowing down. The difference between a PC from 1993 and one from 2000 was massive, but I'm using a seven year old desktop right now and it's running fine.

CPUs seem to have only about doubled in speed in that time. This thing has an i7 860 from 2009 that gets a PassMark score of 5083. The top-of-the-line i7 975 from that time has a score of 6215. A present day i7 6700K has a score of 11011.

Graphics cards have advanced a bit more strongly. But swap out an old disk drive for an SSD and PCs are really staying viable for a long time.


Amen. As long as the disks hold (and even that is a solved problem with RAID), mid-noughties hardware is still viable (e.g. with LXLE).


I thought most of that jump was made in the Sandy Bridge iteration and the rest have mostly been gains in power efficiency. Maybe I just don't push my gen 4 enough to notice the difference from my old gen 2.


What's that, a Core Duo? Core 2 Duo? I'm quite pleased with how well those devices are holding on.


From 2003 there's been a lot of progress... since around 2009, not much in raw per-core performance in CPUs, but a lot of effort around reducing power draw. GPUs are significantly faster today. The real slowdown started around 2009, but there has still been progress.

Also, "prettier" has a cost... it takes a lot of memory to pump that much data to a screen at a usable rate. Hell, even a modern powerhouse with a top of the line video card still cant play modern games at 4k with higher quality settings at a playable framerate, so there's room to grow.

Also, prettier has value, as does some of the interactive pieces that we tend to take advantage of. It isn't like things have stood still. Also, most of the work is about delivering overall value, not optimal performance, so there is still room... but not on very old hardware that doesn't have a lot of support.

Yeah, there are a few PowerPC users, but how many of them are contributing to upkeep... not many would be my guess, and maintaining testing labs is another issue entirely. I'd much rather get a great out-of-the-box experience on an under-5-year-old MacBook in a current Linux distro than on hardware that is much older.


It's really not that bad with NoScript; my primary machine is from 2009. I was considering buying a backup machine that was a few years older, and the main thing that kept me from pulling the trigger was that the LCD brightness was kinda crap, not anything to do with performance.


That you haven't taken advantage of anything new in the last decade doesn't seem like an indictment of technology so much as a declaration that you are happy to ossify. I certainly do new things all the time that came about in the last decade.


Oh, I do new things, too - but most of them are done somewhere cloud-ish, with the browser essentially simulating a thin client.

In fact, my set of normal-user essential local software is shrinking - I expect it to become just "a browser" within a few years. (As a developer, now that's a very different story.)


It's mainly the lack of a javascript JIT.

You can get a JIT for 32-bit x86 and ARM. You can get a JIT for 64-bit x86, ARM, and PowerPC.

In the modern world, no JIT means no web.

Right after that problem, those old PowerPC machines face a lack of RAM and a lack of well-accelerated 3D. It was typical to have 64 to 256 megabytes of RAM, with an upper limit of 1.5 gigabytes. There might have been a 3D accelerator, but it couldn't work in full resolution/color fast enough to keep up with modern compositing needs.

For us older people, these are all silly big numbers for reading web pages. Wasteful frameworks and advertising have bloated up the web, requiring us to follow the crowd in buying faster hardware just to get the performance we had two decades ago.


Not just the web: a year or so ago I tried importing a DXF file (old Autodesk export/import format) into a newer CAD program. It used up 2 GB of RAM and then died. I opened it in an old moldy Gerber viewer; it displayed instantly. It turned out to have about 10,000 elements. If I deleted half of them, the CAD program would display it after a while. As far as I could tell, each element in the DXF was consuming about 300-500 KB of RAM.

Reminds me of the Tom Waits song: What's he building in there.


Also, modern JavaScript is little-endian and PowerPC is big-endian.


This year I retired a Pentium III co-located 1U server. RedHat -> Ubuntu 6.06 Dapper Drake -> 8.04 Hardy Heron -> 12.04 Precise Pangolin. I upgraded the memory from 128MB to 512MB and replaced the two 6GB hard drives with 80GB drives. All this thing did was rsync and SOCKS5 over SSH on port 53 (which was very useful until ~2012).
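
(For anyone who never ran one of these: the "SOCKS5 over SSH on port 53" trick is a single client invocation. Hostname made up, but the flags are standard OpenSSH:)

    # local SOCKS5 proxy on port 1080, tunneled to an sshd listening on port 53
    ssh -N -D 1080 -p 53 user@colo-box.example.org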

Co-location was free for this thing, which kept it in service far longer than it should have been. It was nice to keep the same hardware functional for that long.


Note that "powerpc" is the 32-bit PowerPC, if I'm not mistaken.

ppc64el (64-bit little-endian PowerPC) is listed as a release architecture.


ppc64el isn't supported on legacy Motorola PowerPC CPUs or IBM POWER7 or lower, so a large swath of 64-bit PowerPC is also unsupported (POWER7, G5, etc.): https://wiki.debian.org/ppc64el

For comparison, Ubuntu discontinued official support for this platform in 2007 (!), although an "unofficial" community port lives on.


The port covered both 32 and 64 bit chips. The specific 64 bit little endian support (ie, your old iBook) was broken out into the ppc64el in either Squeeze or Jessie.


Apple PPC[64] devices were all big endian. Little-endian mode (ppc64le) was introduced on POWER8 circa 2013.


The chip itself is ambidextrous or whatever the word is.


Well, sort of. First of all, the 970 (aka G5) doesn't support little-endian. Second of all, you need board-level support, since the chip only does half of the work.


The board-level (chipset) thing hasn't been true since "true" LE support was introduced. So Power8 machines can switch arbitrarily between endianness by changing MSR.LE. In fact, you could even run exception vectors at a different endianness from other code (MSR.ILE != MSR.LE), and endianness doesn't need to match between a hypervisor and VM.


Yeah, that's right—I should have clarified that I was talking about pre-970 chips. But we're talking about older Apple laptops here.


It depends on what you mean; the PowerPC 970 (G5) did not support little endian mode at all. Earlier PowerPC devices (dating back to the PowerPC 601 IIRC) do in fact support runtime switching of endianness, although they were all 32-bit. PowerPC 64 LE is only supported on POWER8 (and apparently, per other commenters, requires hardware support although as a Mac developer I stopped paying attention after the 970 for some reason :)

I think it's probably fair to say all the Apple devices that shipped were big endian as the OS and firmware were, and while in theory the 32-bit machines could operate in little-endian mode I don't know of any distributions that actually did so, and 970 didn't support it.


Matrox (yes, the graphics card company) sold a PCI board with several PowerPC "G4" chips that ran little-endian Linux. They posted a smallish kernel patch for this.

MC/OS and Windows NT 4.0 also ran little-endian on PowerPC.

The hardware support was simple: swap all 8 byte lanes for every memory access. It only mattered when MMIO or DMA was involved, so a software-only alternative was to have the OS do that -- but disk access via DMA would make the software solution impractical.


Virtual PC on the Mac did use the pseudo-little-endian instructions/mode when emulating x86, this made a lot of work for Microsoft when the 970 came out without pseudo-little-endian.


bi-endian ;)


This is really sad. I was really hoping the Talos would get funded:

https://www.crowdsupply.com/raptor-computing-systems/talos-s...

...but it looks pretty far from its goal. :( Still, it's the best alternative architecture that's free of binary blobs and isn't ARM. If we do see open ppc hardware, this will mean one less potential OS we can use.

EDIT: it looks like they're just dropping older 32-bit PPC per the other comments. So, this doesn't seem as big a deal really.


The Talos is a POWER8, which is targeted by the ppc64el architecture.


It's difficult to justify spending $7k for a motherboard and CPU that only sometimes matches the performance of an Intel equivalent at $500. Only the most security-sensitive applications could justify that price. They need an investor with deep pockets and no compromising motives.


Well, ~$5235, but the CPU starting at $1135 isn't even the problem. It's the $4100 motherboard that should really be closer to $500. But it all seems sorta silly since the Tyan OpenPOWER machine was selling for ~$2750 or so a couple years ago.

I guess IBM got tired of throwing cash at it? But really, these guys (and ARM too) need economies of scale, and they aren't going to get it by selling a machine for 4x+ more than the equivalent Intel. Both of them need to look at some of the low cost integrated Intel motherboard/CPU combos, match one with slots, graphics, networking, USB, SATA, etc. on specs and cost, and start from there.

But they don't want to do it, because those machines don't have the fat margins that Intel makes on the E5/E7 lines, and they aren't willing to spend a few hundred million buying every hacker/Linux developer a desktop machine using their cores. So they will need a ton of runway, and a long term plan to beat Intel. Something I see that ARM may have, due to their huge advantage selling a few billion devices a quarter. IBM OTOH spent the last decade trying to sell overpriced POWER/Linux machines and failing, so they don't have another decade left. Plus, they can't even get their strategies straight enough to push one single Linux platform (because they are also pushing s390, and their software guys are all busy selling x86/Linux software).


Supermicro Power8 servers are competitive with Supermicro Intel servers; it's just Talos that's failing.


> It's difficult to justify spending $7k for a motherboard and CPU

$7,500 is for the complete workstation. The mainboard is $4,100:

> https://www.crowdsupply.com/raptor-computing-systems/talos-s...


And you need to add at least $1100-$3000 for the CPU, so it's still >=$5200. Note that your price for the "complete workstation" does not include a CPU either.


Right, $4100 for motherboard plus ~$3000 for CPU.

I should point out that I really want something like this to succeed, but it's in the catch-22 phase of not having enough demand to drive down costs to drive up demand.


I like the power ISA a lot, but as long as we're looking at alternative CPUs, I'd be interested in seeing the mill get a shot:

https://en.wikipedia.org/wiki/Mill_architecture


Not a meaningful comparison; POWER8 is being fabbed right now. The Mill doesn't even have an FPGA implementation (yet?). The Mill is a long way from usable.


Meanwhile, I'm continually amazed that s390x [1] remains supported. I've never seen such a system in the wild.

[1] https://en.wikipedia.org/wiki/Linux_on_z_Systems


I took a university course that was specifically about IBM midrange machines. The instructor had one on his desk. I remember that the whole architecture was based around virtual machines. One assignment involved downloading a 5250 emulator and logging into some IBM mainframe to leave a message. I even got a t-shirt for that.

Unfortunately, the instructor quit about a month into the course. One day, we just came into the room and some higher-up faculty were there instead. I don't know for sure what happened, but his LinkedIn suggested that he was hired by the government of Egypt. (Mind you, this was 2009.) It must have been very important.


That is IBM i, aka OS/400, a completely different beast than s390. It's one of the most underappreciated and misunderstood examples of systems software in the industry. It's survived four distinct CPU changes of the same magnitude as Apple m68k->ppc->ppc64->ix32->ix64, except retaining full compatibility back to the original (and even further back to System/3x). Because everything above the analogue of the syscall layer is machine independent code, you get new benefits with even minor hardware ISA upgrades for existing code without rebuilding. This is all basically "free" for the user without any thought or downside. IBM was also doing ahead of time Java compilation to MI for ~15 years. IIRC this was one of the fastest platforms for JVM apps for many years.

The closest equivalent outside OS/400 would be like using LLVM IR for the entire system. But the memory model is also very elegant, the Single Level Store allows for storage tiering without the kind of block layer hiding that most solutions use.. closest equivalent is ZFS with a L2ARC in the POSIX world.

The lessons learned from the machine dependent part of IBM i were used to build the pHyp hypervisor in POWER4+. This is a firmware level microkernel and very interesting in its own right.

This isn't to say you should go out and port everything to IBM i, just that it should be studied by any serious OS developer and academic. While there is a good book on the design by Frank Soltis, the chief architect, it does not go into anywhere near the depth that UNIX design books did even when covering the proprietary versions. I wish I could learn a lot more about it.


It's such a damn shame that IBM makes all of this stuff utterly inaccessible to the developer, hobbyist, and academic crowds.

In the past few years I've taken an interest in big-iron systems, and I've recently started spending time with OpenVMS in preparation for release of the x64 port. I spent a lot of time digging into System i, and z/OS, but my efforts were constantly frustrated by the inability to get my hands on real software and/or hardware, or even paid time on a machine.

Shoutout for the OpenVMS hobbyist program. Anyone interested in obscure commercial OS's with serious pedigree should give it a look. HPE's OpenVMS documentation portal is well laid out and straightforward - all the information is there.


There is http://pub400.com which'll give you free terminal access onto an IBM iSeries


The s390 is not a midrange platform: it evolved into what we call zSeries these days and is pretty much the highest end of IBM's portfolio.

5250s were terminals used with the System/34 family, which was succeeded by the AS/400 series which is now called iSeries and moved to POWER many years ago.


Would you expect to though? I would think most of them are squirreled away deep inside the private networks of banks and such. Not something you're going to be encountering unless you work in their specific niche.

I know someone working at one of the bigger banks and they do development on some z/Arch machines. From what he's said they live in what amounts to a vault and spend their time quietly processing transactions on a secure network.


Your description sounds so ... peaceful. Machines quietly processing transactions, on a secure network. Serenity.


I think that's just what they tell system admins when it's time to put a machine down. "Oh yes, it's in a very quiet, peaceful place where it can process things all day long securely."


IBM puts a lot of resources into it. It helps move mainframes and CIOs eat it up.

I've heard of a lot of PoC activity. When I tried it out many years ago, it really sucked. I think the use case that makes sense is when you have well defined workloads that have high availability requirements and license models that fit.


IBM dumping tons of cash into supporting it is why. Now that IBM has basically abandoned any PPC that isn't >= POWER8, and Freescale/NXP/Qualcomm are focusing on ARM, the few PPC users out there are on their own.


Support Contract of the Gods


Yeah, pretty much this

IBM is very good at charging astronomical quantities for legacy systems (though it seems those systems, both in hardware and in software, have been receiving updates - the processors in these systems look impressive)

Basically, why rewrite an older system for (1Mi/10Mi) when you can just throw money and get a brand new mainframe?


It's not the cost of the rewrite, it's the immense cost of problems relating to the upgrade.


I'm working with four of them as build minions for a project. We support both Power and Z as platforms (as well as x86_64)

I suspect the reason we're building software for them is that someone is using it. I know Hyperledger is supported on Bluemix to some degree.


In addition to the comment above, about sending old mainframes to deep blue pastures (so to speak), I love how your comment adds to the mystique: "I suspect ... someone is using [the platform]".

"Yes, my friend. Deep in the Dark Web, a hundred and one LANs beyond the routable Internet, there roams magnificent, ancient and magical creatures of legend - the great Zee. Quietly they tally numbers, and mete out just rewards to the poor souls trapped in this prosaic concrete reality, unable to so much as cross from our world to the Internet - let alone travel to these fabled realms of the deep, deep blue."


> "Yes, my friend. Deep in the Dark Web, a hundred and one LANs beyond the routable Internet, there roams magnificent, ancient and magical creatures of legend - the great Zee. Quietly they tally numbers, and mete out just rewards to the poor souls trapped in this prosaic concrete reality, unable to so much as cross from our world to the Internet - let alone travel to these fabled realms of the deep, deep blue."

I think I'll tweak this a bit and put it on a shirt.


I have no idea how you're going to fit all that on a tie or a pocket protector.

Some history on uniforms of those that have most closely attended the Zee herd: https://www-03.ibm.com/ibm/history/exhibits/waywewore/waywew...

I must say, this guy here, he's really onto something:

https://www-03.ibm.com/ibm/history/exhibits/waywewore/waywew...

Such power, and so much blue.

In all seriousness, please share your tweaks.


When I was a sysadmin intern at my college in New York, I worked on these systems daily. We also had a very close partnership with IBM since we're located in the same city as HQ.


I've never seen a z system, although I hear they are popular among certain circles.


Qemu can emulate it now :-)



I've worked with them since 2009. Experience varies 8).


I have an old PowerBook (from about 2005, not long before Apple's intel switch) that still runs fine, after replacing RAM a couple times. I use it as my back-up computer at the office to this day, sometimes bring it to the coffee shop to outhip the hipsters. It's been running the Debian PPC port for many years without much trouble.

Of course this does not come as a big surprise; I'll just keep running the most recent version until the hard drive dies and my current laptop becomes the back-up.

Are there any distros still standing with PPC support? I used to run Ubuntu, but it was dropped years ago.


Based on "The question of whether powerpc remains an architecture in the main archive or moves to ports is one for FTP masters, not the release team."

That would imply to me that you can continue to apt-get update/upgrade into eternity as long as you're following unstable or testing. There'll just never be a new stable again for PPC. Someday the FTP masters will remove PPC as an architecture, everything ends someday, but that's not the topic of this notification.
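
Concretely, "following unstable" is just a sources.list entry plus routine upgrades; a sketch, with the standard mirror path:

    # /etc/apt/sources.list
    deb http://ftp.debian.org/debian unstable main

    # ...and then, into eternity:
    apt-get update && apt-get dist-upgrade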

There's a process where the release team freezes and grabs a "testing" and polishes it and cleans it up until it feels it's a "stable", and that's never happening for PPC again: too many release critical architecture specific bugs, or the autobuilders are too slow, or nobody wants to own PPC as a release arch, or...

The people and teams who generate binaries for unstable and testing are not changing their behavior per this press release, this is a release team announcement.

Needless to say this has nothing to do with stuff like online documentation or mailing lists, WRT all the "Debian drops support" type of stories. All the announcement means is that, unless something very surprising happens, you're not getting a 9.0 install disk. ftpmaster isn't rm -Rf'ing the arch, nobody's shutting anything down, etc.

I mostly hang out with the freebsd world but I hung out in Debian world for a bit less than two decades, so I might be a little out of date, but probably not much.


Unfortunately powerpc will probably be removed from sid/unstable just like sparc recently was. The binaries are generated by extensive infrastructure that is a lot of work to maintain and needs enthusiastic developers to keep everything running; if there were such developers, they wouldn't have removed it as a release arch.

It may be added to debian-ports which is simultaneously the nursery for new debian archs and the nursing home for the abandoned which still have some support. ppc64 is there, sparc didn't make it. The attention has shifted to sparc64.

I was a Debian/powerpc user for many years and I helped fix builds and ported new packages. But all my macs died and the limitations of the machines available make it harder to justify replacing them from ebay. I imagine many other devs had similar experiences.


It's not Linux, but NetBSD still has its macppc port. And Gentoo.


Looks like Yellowdog Linux is discontinued, but at the time it was great: http://www.fixstars.com/en/technologies/linux/

One of the best features of that distribution was none of the standard pre-compiled root-kits would run on it, plus all the buffer-overflow attacks had to be customized for PowerPC, something few script kiddies had the capability of doing. It made for a surprisingly resilient system at the time.


OpenBSD does too


Ubuntu MATE supports PPC https://ubuntu-mate.org


From what I can tell, it uses Ubuntu's community supported release of PowerPC.


While it may be petty to say, I think it is accurate to point out that this sort of news tends to lead people to focus on the "winning" commodity architectures, in this case x64 and ARM, and minimize the use of other architectures.

If I were starting a business today, you'd have to give me free hardware that used a POWER or SPARC CPU to persuade me to use it. The last thing I want is to have to switch hardware platforms, and thus x64 and ARM are the safe choices.


You've kind of got the cart before the horse. People already minimize the use of PowerPC, and that's why Debian is dropping it.


Is there any popular hardware that is PowerPC?


I was surprised to find an unholy mix of Debian 5 & 6 running on powerpc when I enabled SSH access to my Western Digital MyBook Live Drives:

https://www.ifixit.com/Teardown/Western+Digital+My+Book+Live...

I don't know if they're still using powerpc on more modern iterations of these devices ...


Which is pretty poor, IMO. They can't afford to fund someone to put a little work back into Debian?



All "modern" hardware for AmigaOS is either discontinued or not yet shipping, and also very expensive for the performance. This seems to be the normal state of affairs - as far as I can tell, by the time any modern AmigaOS hardware has semi-decent driver support it's no longer available, and a bunch of the onboard hardware simply never works properly.


Those are 64-bit ppc CPUs, not the 32-bit that ppc references.


None of them are POWER8, the minimum to run Debian ppc64el[1]

[1] https://wiki.debian.org/ppc64el#Required_Hardware


WD Mybook Live (a networked hdd for home use). Which, ironically, uses Debian as its operating system internally.


Seems like Western Digital could step up and maintain the Debian build tools for powerpc.

...just kidding, they'll just keep shipping progressively more ancient and insecure versions of Debian.


Nintendo Wii U?


Along with all of the prior generation consoles - Xbox 360, PS3, Wii.


Quite a lot of automotive microcontrollers (e.g. for engine control, body control, gateways, ...) from Freescale are based on PowerPC. However you won't find Linux on these.


Yes, there is. The latest Powermacs. Hey, Debian! I have a suggestion! Drop everything except amd64! Who needs all the other crap anyway! Not even ARM, that's crap too!


Where were you when they asked for a volunteer to maintain the port? Debian doesn't have paid developers, you know.


Yepp, that's where the argument goes. "It's for free, don't complain!" Well, now that my projects depend on something that I took for granted, I agree that's my own fault. Next time I'll think before choosing between software distributions. Now, I mostly do not care to use or support Debian in any way. I want to USE it, not maintain. Silly me.


I just find it amazing. Where's your post shitting all over Apple, the company you paid to get your PC and OS, and which dropped support for it in 2011, half a decade before Debian did?

FOSS is truly a thankless vocation.


FOSS is a "thankless vocation" because it chose to be that way. Everyone who expects a thank you is an idiot and doesn't understand how people work. Linus Torvalds created the Kernel not for the thanks. His intention was that someone else has any use of it. Anyone who contributes bullshit code to the Kernel was and will be criticised even more severe than I with my rants here. Not because Linus isn't thankful, but because his project is important to him.

Now, with the above said, I also like it when people find any use for my software, and I feel well damn responsible for anything I contribute. I take the blame if anything goes wrong and if I'm not able to keep up with bug reports. But somehow the FOSS community got the idea that it's okay to just abandon a project because no one wants to participate anymore. Once they've established a certain circle of users, it's their goddamn responsibility to keep it running! It's a moral choice here, not a rational one! It doesn't matter whether they have time for it or don't! You do not abandon your child when it becomes annoying and a burden.


Court is now in session. The situation we are asked to deliberate upon today: Are the following situations morally equatable?

A) abuse and abandonment of a child by their caregivers, a grave injustice in our modern society, and

B) someone didn't keep writing software for me, for free, in their own spare time -- even when I decided to cry on the internet about it publicly

My friends: the choice before you is obvious.


Can you point me to your software so that I can hold you to that?


I doubt anyone is going to miss you or care about your use case if you have no interest in stepping up.

You are demanding that people that no longer have an interest in something, work for free, for your benefit, while you contribute nothing.


> I doubt anyone is going to miss you or care about your use case if you have no interest in stepping up.

As an ungrateful annoyance, I must disagree. I'll tell all 6 people I know to never use Debian again, despite the fact 4 of them don't know what it is. I'll stand up to anyone who dares suggest I'm lazy or ungrateful, but I'll do that only by not engaging them until the very last moment. Solidarity will triumph.

Also, my dad works at Nintendo and I'm important, and I'm going to get him to ban you from Online. You'll rue the day you stopped considering all my unstated desires and needs at all times, my friend.


That's the most common fallacy within the Linux community. I use Linux to do something else and contribute there. Wasn't that the intention behind Linux? Provide a common OS to create and contribute different things? I do not use Linux to maintain it. I use it to create something new, as an instrument. I neither have the knowledge, nor the time to acquire it, nor to maintain it. There are people who can do it better than myself. So, after all, it's my fault that maintainers abandon ports, because no one appreciates their work, including me?

All I read now is: how dare I raise the issue. Who am I anyway to raise such an issue, without even being willing to contribute. And that's just plain stupid. As an IT guy I'm beginning to hate anything that smells like a "software community", because it's not just the software that stinks, but also the arguments about it.


> So, after all, it's my fault that maintainers abandon ports, because no one appreciates their work, including me?

No. It is not your fault that maintainers eventually decide to move on. It is nobody's fault. In fact, many maintainers do work in spite of the fact nobody appreciates it. In some cases, it's even a good or great thing to move on -- maintenance is time consuming and there are many other valuable things to do in life, as opposed to writing software, or even simply working on software you think is more important.

But it is your fault for making shitty posts like a child on the internet, in attempts to allude that Debian, somehow, is just fucking around with you personally or something -- or that they do not have any rhyme or reason to their decisions to do things like this. Your post might as well have read "why not just get rid of ARM too lulzzzzz amirite guise?!?!?? debian is so dum", like a 4chan pepe meme regurgitator -- and it would have had the same amount of content. It's simply extremely ungrateful to people who have put in 10 years of work on the PPC port already, probably much of it their spare time, to act this way.

Nobody is maintaining the port anymore. That's unfortunate if you depended on it. I've been burned by lack of maintenance too. But, here -- it is not a conspiracy against the hardware or an affront to you individually. It's just the reality of the fact nobody wants to support it anymore, and the involved people are moving on. You need to accept that. There's no blame on anyone. It's simply the way it is.

The reason you're getting responses that are effectively just brushing you aside, FWIW, is because many people here are involved with or have maintained FOSS to a large degree, software that is diverse, that has been used by many people, and in many contexts. That includes me. Unfortunately, a large part of that territory involves dealing with ungrateful users like yourself, who not only do not understand how such projects tend to function or operate -- but also who interpret choices they dislike as personal attacks, and furthermore contribute nothing of value on their own, or even allege that the motivations behind some decisions are unsound (while, again, having no insight or ability to contribute meaningfully). And I don't mean code. Most users rarely even, if ever, say "thanks". I don't write free code to get thanks, I don't assume it. I do try to assume that when I write code, I won't be treated like shit from ungrateful nobodies.

And, as unfortunate as it is, and however much it pains me to say this directly to you -- experience is telling me that, right now frankly, you are just another ungrateful, whiny user who is unlikely to contribute anything of value or substance. And despite the potential possibility that you could contribute things of value, after extensive work -- it's really not in my interests to "work towards" accepting attitudes like yours. There are 10,000 people who are at least 10x as qualified as you while being nowhere near as annoying. I can wait for those people. Hell, I'd rather wait for people who are 1/4th as skilled as you and less annoying. At least I'd have the opportunity to mentor or help someone who isn't going to accuse me of bad faith every other decision.

Putting up with people like you does not make me any more excited to write or maintain my software, it makes me less excited. This isn't a happy thing to say. But it is the world we live in, and I have better things to do than appease users who act like brats.

You'll get less agitated responses if you drop the malicious implicit assumption that Debian or whatever is doing this to spite you, or that people are responding negatively to you are just doing so to troll you because "they can't understand your side".


An excellent response - and frankly much better than the poster deserved.


The latest Powermacs.. from 2006?


Most of this thread is concerned with the desktop, but PowerPC is widely used in automotive systems. Those systems are running either some form of Linux, an RTOS, or a bare-board exec. The modern PowerPC SoCs are available in very wide temperature ranges (-40 to +125 C), which makes them attractive for automotive. The QorIQ PPCs are used in networking also.


The Linux kernel still has support. Running Debian is no longer an option, but I doubt many embedded systems were doing that anyway. Most were probably running some Yocto image from the SoC vendor. So it just means cross compiling from source now.
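
For example, with a distro cross toolchain installed, a kernel build for 32-bit powerpc is roughly the following (illustrative; the toolchain prefix varies by distro):

    make ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu- defconfig
    make ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu- -j"$(nproc)"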


crap.

Since jessie, I haven't been able to maintain my old SPARC V100 server.

Now it's my PowerBook G3 Pismo :(

(To be fair, it wasn't very usable anymore: 45 seconds at 100% CPU to load the main Gmail page, just to give an idea.)


More detail on why would have been interesting, especially with other lesser-known architectures still supported.


No (credible) volunteers to manage the port, it seems: http://www.mail-archive.com/debian-powerpc@lists.debian.org/...


I didn't go look, but that was going to be my guess. Since Debian is volunteer-run, each architecture needs people to manage it, as well as adequate hardware to test/build on.


Excellent name. :-)


We've updated the title from “Debian drops support for PowerPC”, which breaks the guidelines by being editorialized.


It may well have been, but the title as it stands now is useless. I was forced to click through and read all the comments to find out what this was about.


The title “Release Architectures for Debian 9 'Stretch'” is precisely accurate; it's what the linked article is and was titled that way by the author for a reason. On the other hand, because the article states that the removal of PowerPC is the only change from Jessie we've reverted the title to something that expresses this.





