I've only heard of Scientific Linux once before (ironically, in an HN posting). Every company I work with (Bay Area), or have colleagues working at, uses CentOS - until they get a lot of stodgy enterprise types working for them, or need to call Oracle for support, or need to resell their product to customers, at which point they switch (mostly) over to Red Hat and start paying licensing fees.
What is other people's experience for *nix server environments that deploy more than a hundred or so servers? (Presumably, at low server count, it's whatever the guy deploying them decided to deploy at the time - Suse/Debian/Ubuntu might all be in the mix - maybe even RHEL.)
Personally, I'm an Ubuntu + OpenBSD guy for my workgroup / personal servers, except for Linode/Slicehost where I usually go with CentOS.
I personally moved my Linodes from CentOS to SL (since Linode doesn't offer an SL image, you start with the CentOS base, switch mirrors, and do an update - pretty painless) after CentOS took forever getting its 6.0 release out.
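For what it's worth, the in-place switch looks roughly like this. A sketch only - the mirror URL, package filename, and release number are from memory and purely illustrative, so check the current SL mirror layout for your release before running anything like it:

```shell
# Illustrative CentOS -> Scientific Linux switch on an existing install.
# Paths and versions below are examples, not exact values.

# 1. Remove the CentOS branding/repo package so SL's can take its place
rpm -e --nodeps centos-release

# 2. Install the SL release package, which drops SL repo files
#    into /etc/yum.repos.d/ (URL/filename are illustrative)
rpm -ivh http://ftp.scientificlinux.org/linux/scientific/6.1/x86_64/os/Packages/sl-release-6.1-1.x86_64.rpm

# 3. Flush cached metadata and pull everything over to SL's packages
yum clean all
yum distro-sync    # or plain "yum upgrade" on older yum versions
```

The nice part is that because SL rebuilds the same RHEL sources as CentOS, the package set lines up closely enough that a distro-sync rarely has conflicts.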
It wasn't that CentOS took a long time, it was that the whole process lacked transparency. You couldn't tell who was responsible for what or what the plans were. If they published status reports, it wasn't anywhere public (at least, nowhere reachable through their site, Google searches, or mentioned on mailing lists). At the time, I had no idea who was in charge of CentOS, who worked on it, or who kept it current with security updates. Even nowadays it's not so clear (the menu dropdown lists their team members, but seriously, the CentOS 2.1 lead is on there? Nobody is listed as being responsible for 6, and there's only one guy listed as being responsible for security).
Also, being on Linode means I don't use the SL kernel (with its custom patches). I use the ones provided by Linode, and security on that end is taken care of by them, so that difference doesn't affect me.
This isn't true; at least it wasn't a year ago. SL, out of the box, doesn't include all the cernlibs or the ROOT libs and is even a royal PITA to get these things to compile on. Don't even think about things like ADAMO or PAW. I tried to get SL working for a day. I gave up and just installed Ubuntu.
It was true at CERN while I was there (01/2003-12/2004).
I remember the IT division had a nice SL installation where all ATLAS projects were one yum away, if not included already in the default installation.
SL lost its way in recent years. Right now, it seems little more than a (basic, out-of-date) Red Hat clone. It could be much better - I don't understand why it doesn't include GEANT4, ROOT and the associated CERNlibs, matplotlib, Octave and R out of the box. If I have a distribution called 'Scientific Linux' sponsored by CERN and Fermilab, I expect that it at least has particle physics tools as part of the default distro. :-(
I don't really understand why science labs would need a customized distro. Can anyone clue me in? What payoff do they get from one that offsets the cost of creating and maintaining it?
When I worked at CERN, the Atlas project was developing high performance network stacks with low level IP protocols, to be able to cope with the data throughput from the particle accelerator.
Maybe that's part of the answer, but you can run a custom kernel in probably any distro - no need to provide an entire distro from scratch for that. So there must be more to it.
Well, they're not really creating a distro from scratch in the way that Debian maintainers pull source code from various upstream projects and spend weeks to months doing bug testing. The RHEL sources that the SL folks pull have already been bug tested and verified to work very well. The SL folks are compiling those sources (with their own modifications). Of course, you could do this with the Debian or Ubuntu source code, but I guess they figure if you're going to get source code from somewhere, it might as well be a well-known enterprise distro.
Why not just contribute the changes they need upstream where possible, and maintain a repo for additional "scientific" packages (or just fork and start new projects) when that strategy doesn't work?
Because then they would no longer have complete control over what gets in their packages. In a place like CERN, total control is something people like to have.
Probably for the sake of cooperation, so it's easier for people at the various labs to develop, package, and support scientific applications that are used across the community.
They have a "self support" license, which is basically just a subscription to a yum repo. In the net cost of my hardware and datacenter bills, it's a pretty tiny part. I'm glad to pay it to fund the development of something that saves me so much time and provides so much value.
If you're suggesting that other random free distros are suitable when one is looking at RHEL or an RHEL clone... well, that's rather silly.
Call it silly if you want, but yes, I would like to know the answer to that. I'm being perfectly serious. I've used several Linux distros (though maybe I'm not an expert), and at the end of the day, they all run the same programs.
RHEL is extremely stable and releases are supported for much longer than other Linux distros. Also, some commercial software is designed to work on specific versions of RHEL.
Great writeup outlining some of the features you get out of the box. But at the end of the day, this is another RHEL clone. Other clones are mentioned, like CentOS, but there is no clear motive to choose SciLinux over anything else.
I recommended the same thing to my clients for a while last year, CentOS 6 was not getting updates at all for a time. They seem to have fixed that now though, it looks to be safe to use CentOS 6 again. Of course, if you want fast updates, pay RedHat.
The SL additions seem very inconsequential for most users. The article doesn't imho give any good reasons to use SL instead of some other RHEL clone (such as CentOS).
SL has a couple of full-time employees building it. Also, they publish a lot of info about their build process; CentOS doesn't. Sometimes their releases are slower.
I quite agree. The article seems to say something along the lines of "It's Red Hat! But it's not Red Hat!" I really don't see what's so great about this distro (having used it myself once). If it works for the people at CERN, that's fine of course. However, I think there's little reason to call SL a 'great distro'.
When I last had to deal with SL (about two years ago at my job back then), it was a nightmare to deal with, and used Linux 2.6.9 (!) with literally thousands of custom patches. We needed to do some kernel development with it, which was incredibly painful… (since it was totally different from vanilla Linux)
Since they just repackage RHEL, you were probably running a 4.x release or an early 5.x release... SL is mostly for people interested in running RHEL who don't want to pay the licensing cost.
I vaguely recall statements to the effect that the SL folks didn't really want a lot of attention. Basically, that they're trying to meet the specific needs of the organizations supporting its maintenance, and are afraid of the demands on their already very limited resources should a large community coalesce around SL.
If my impression is right, then this article is a little misdirected. The small userbase and "wrong name" would not be considered problems by the project itself.