Red Hat Enterprise Linux Forks / Offshoots

Summary (with details below):

The "rhel-rebuild" mailing list is a "vendor-neutral" discussion forum for issues related to rebuilding and installing Linux systems based on the Red Hat Enterprise Linux SRPMS.

Critique of sundry RHEL forks and suggestions for future directions, written by Raimo Koski of Lineox:

See also....
Red Hat Enterprise Linux Rebuild mini-HOWTO, by Michael Redinger.

Notes on Listed Options

From: David O'Callaghan
X-Mailer: Ximian Evolution 1.4.5 (1.4.5-7)
Date: Fri, 28 Nov 2003 14:39:33 +0000
Subject: Re: [ILUG] Musings on RH Enterprise (long)

On Fri, 2003-11-28 at 12:58, Kenn Humborg wrote:
> I've been a Red Hat guy up til now, and have two
> critical boxes in the office running RH7.3. Now we
> need to decide what to do with them.

The same problem is facing us here: we have Red Hat 7.3 cluster machines that are supposed to be a stable platform for the stuff we're doing. Backporting fixes would be a lot of work unless it can be distributed amongst the RH7.3-using masses.

White Box Linux looks like it might help. They're attempting to build an unencumbered fork of RHEL 3 from Red Hat's sources with some success.

And I see Esat have a mirror...


Release information

This product is derived from the Free/Open Source Software made available by Red Hat, Inc., but IS NOT produced, maintained or supported by Red Hat. Specifically, this product is forked from the source code for Red Hat's _Red Hat Enterprise Linux 3_ product under the terms and conditions of its EULA.

There may be remaining packaging problems and other odd bugs. These are solely the responsibility of the White Box Linux effort and should not in any shape, manner or form reflect on the quality of Red Hat's commercial product. In fact, if you need a fully tested and supported OS you probably should go buy their box set.

A fair amount of effort has gone into removing Red Hat's trademarks and logos. Should you find one remaining, please report it so that it can be removed. Write me about this and any other problems, or join the devel list and dive in!

What is the goal for White Box Linux?

To provide an unencumbered RPM-based Linux distribution that retains enough compatibility with Red Hat Linux to allow easy upgrades and to retain compatibility with their errata SRPMs. Being based on RHEL 3 means that a machine should be able to avoid the upgrade treadmill until Oct 2008, since RHEL promises errata availability for five years from the date of initial release, and RHEL 3 shipped in Oct 2003.
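Tracking the errata boils down to rebuilding each source RPM that Red Hat releases. A minimal sketch of that step, assuming the rpm-build toolchain is present; the SRPM name here is made up for illustration:

```shell
# Hedged sketch: rebuild one errata SRPM into binary packages, as a
# rebuild project would do for each erratum. The SRPM name is invented.
srpm="bash-2.05b-41.9.src.rpm"   # hypothetical errata source package

if command -v rpmbuild >/dev/null 2>&1; then
    # Produces binary RPMs under the build tree (~/rpmbuild/RPMS or
    # /usr/src/redhat/RPMS on older systems); trademark-encumbered
    # packages would be patched before this step.
    rpmbuild --rebuild "$srpm" || echo "rebuild of $srpm failed"
else
    echo "rpmbuild not available; skipping rebuild of $srpm"
fi
```

The resulting binary RPMs can then be pushed into the project's update repository.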

Or more briefly, to fill the gap between Fedora and RHEL.
CentOS: Community ENTerprise Operating System

CentOS 2 and 3 are 100% compatible rebuilds of the RHEL 2 and 3 versions, in full compliance with Red Hat's redistribution requirements. It is for people who need an enterprise-class OS without the cost of certification and support.

CentOS-2 is a freely distributable OS built from the source at:

Before build, non-free packages are altered. Non-free packages would include those encumbered with a non-redistributable copyright or trademark.

This project is also the base on which cAos Core and cAos GP are built.

There will initially be support for x86. Updates are distributed via yum repositories.

CentOS-3 is a freely distributable OS built from the source at:

Before build, non-free packages are altered. Non-free packages would include those encumbered with a non-redistributable copyright or trademark.

The x86 and x86_64 architectures are currently supported. Updates are distributed via YUM repositories.
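Pulling updates from such a repository only requires pointing yum at the project's mirror tree. A hypothetical excerpt from /etc/yum.conf of the kind these projects ship (the mirror URL is a placeholder, not a real mirror):

```ini
# Hypothetical /etc/yum.conf excerpt; mirror.example.com is invented.
[main]
cachedir=/var/cache/yum
distroverpkg=centos-release

[base]
name=CentOS-3 - Base
baseurl=http://mirror.example.com/centos/3/os/i386/

[updates]
name=CentOS-3 - Updates
baseurl=http://mirror.example.com/centos/3/updates/i386/
```

Running `yum update` against these sections then pulls rebuilt errata packages much as up2date would against Red Hat Network.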
About cAos Linux

What is cAos Linux? cAos Linux is a community-maintained and -managed, RPM-based distribution of Linux. It combines aspects of Debian, RH/Fedora, and FreeBSD into a solution that is stable enough for servers and clusters, follows a long-term life cycle (3-5 years), and is built from current cutting-edge packages (as of today). It also includes many of the features that are considered standard for desktops and laptops, which makes it a very good general-purpose Linux distribution.
Tao Linux

Why did you create Tao Linux in the first place?

Like other folks, I was initially disappointed with Red Hat's product changes and shortened support windows for the free products. While I like Fedora on my laptop, the main feature of Red Hat Enterprise Linux that I was interested in is its long lifespan - five years of security updates. Eventually I realized, however, that it was still 'free software', and I could, if I wished, build an installable set of binaries from the RHEL SRPMs, compile updated SRPMs as they became available, and use something like 'yum' for keeping all these machines updated. For the boxes I'll run Tao on, I can support them myself. (I have, in fact, bought 2 copies of RHEL for our most important machines, where my co-workers have somebody else to call when I'm on vacation). Since rebuilding RHEL was no small feat, and I was going to the trouble anyway, I thought I'd take care of the trademark stuff, too, and make something I could redistribute and share with others. Of course, that turned out to be a lot more work than I had imagined.

I intend to run Tao Linux on the bulk of our Linux servers - mail, web, dhcp, spam filtering, clustering, etc. Tao puts me back where I want to be - when I need a new server, I just grab the boot.iso and go - no licensing, subscriptions, etc., and no worrying that security updates won't be available 6-9 months from now. For the bulk of server and workstation needs, I think Tao Linux fits. I do, however, run genuine RHEL on our primary fileserver. This is one of the two most mission-critical servers in my organization; and while I can easily support it myself, for the money it's worth having Red Hat standing behind it. I feel better when I go on vacation, too.

So, while I have gone to great lengths to parallel RHEL3 as closely as possible, Tao Linux is by no means equivalent. It is impossible (or at least very hard) to know the exact toolchain and libraries present when the original RPMs were built; my '' script can tell me how close individual RPMs are, so I know fairly well just how close (or far off) I am (see Design and Technical). Red Hat, Inc., also performs rigorous performance, stability, and stress testing on their software, which I cannot replicate. They also have a huge fleet of talented engineers and a large support infrastructure.
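One crude way to measure that kind of closeness is to compare the file manifests of an original package and its rebuild. A self-contained sketch of the idea; the two manifests below are invented, whereas real input would come from `rpm -qlp package.rpm`:

```shell
# Compare file lists of an "original" and a "rebuilt" package.
# The manifests are made up so the comparison runs anywhere.
orig_list=$(mktemp); rebuilt_list=$(mktemp)

printf '%s\n' /usr/bin/foo /usr/lib/libfoo.so.1 /usr/share/doc/foo/README \
    | sort > "$orig_list"
printf '%s\n' /usr/bin/foo /usr/lib/libfoo.so.1 \
    | sort > "$rebuilt_list"

# Lines unique to the original manifest = files the rebuild is missing.
missing=$(comm -23 "$orig_list" "$rebuilt_list" | wc -l)
echo "files missing from rebuild: $missing"

rm -f "$orig_list" "$rebuilt_list"
```

The same approach extends to comparing library dependencies (`rpm -qRp`) or per-file checksums, which is roughly what a rebuild-verification script has to do.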

That is all to say, if you're new to Linux, I wouldn't (currently) recommend running your mission-critical servers/services on Tao Linux. However, you could certainly download and try Tao Linux, and see if it will fit your needs, maybe even to help in your decision whether to buy RHEL3, which is comparable in some respects. Note, however, that I don't guarantee RHEL3 will support all the functionality of Tao Linux (or vice-versa). Since it includes some of the excellent system administration manuals from RHEL3, you might also use it for learning Linux, and experimenting with enterprise-class software.

[Tao Linux does not focus on staying self-hosting.]
Scientific Linux

Scientific Linux's History

Scientific Linux was first conceived when computer administrators from a couple of high-energy physics labs contacted Fermilab computer scientists and suggested a joint Linux collaboration.

Connie Sieh had been working on the newest version of anaconda (the S.L. installation program) at the time, and saw the potential it had. She soon had a working prototype based on Fermi Linux LTS 3.0.1. This first prototype was called HEPL, standing for High Energy Physics Linux.

From the beginning Scientific Linux was designed to be a community project. We solicited input from the labs and universities that originally contacted us, as well as other interested parties. We also designed the sites area to make it easy for sites to create their own distribution, as well as add to the mirrors without disturbing the main core distribution.

When it was shown to various labs, many people liked it, but they didn't like the name, because their labs weren't all dealing with high-energy physics. It also became apparent that if universities started using it, some of them might not have anything to do with physics at all. There was also the problem that there is actually a lab with the initials HEPL. And so the name eventually came to be Scientific Linux.

Scientific Linux 3.0.1 was released on May 10, 2004.

Long Term Support
Scientific Linux has plans to do security updates for 3 years from the initial release of an Enterprise product. So for the Scientific Linux 3.0.x line, we plan on doing security updates for 3 years from the release of Enterprise 3. When we get close to the end of that period we will set a more exact date.

There is a caveat to the above statement. This is just a plan. We do not, and cannot, guarantee our plan. There are too many factors for us to guarantee it. One of the possibilities is that the source RPMs from the Enterprise release will become unavailable. There are other factors that also may prevent a guarantee.

Major Releases
Scientific Linux has plans to make a major release based on each major release of Enterprise Linux. How soon after, we cannot say.

Minor Releases
Scientific Linux has plans to make a minor release based on each of the Enterprise Updates for the latest major release. Minor releases for the older major releases will occur much less frequently. So for the Scientific Linux 3.0.x line, we will make minor releases for each Enterprise Update, until Scientific Linux 4.0.x is released. We will then make the 4.0.x minor releases for each of the Enterprise 4 Updates, and only occasionally create a minor release for the 3.0.x line. The minor releases will be named according to their corresponding update release. Hence, Scientific Linux 3.0.1 corresponded with Update 1, and 3.0.2 will correspond with Update 2.

The minor releases will also be a time for the installer to be enhanced, programs to be added or removed, and other minor tweaking. Administrators should be able to use yum or apt to get from one minor release to another, without much hassle.
NPACI Rocks Cluster Distribution

(Quoting the User Guide's Introduction section:)

From a hardware component and raw processing power perspective, commodity clusters are phenomenal price/performance compute engines. However, if a scalable "cluster" management strategy is not adopted, the favorable economics of clusters are offset by the additional on-going personnel costs involved to "care and feed" for the machine. The complexity of cluster management (e.g., determining if all nodes have a consistent set of software) often overwhelms part-time cluster administrators, who are usually domain application scientists. When this occurs, machine state is forced to either of two extremes: the cluster is not stable due to configuration problems, or software becomes stale, security holes abound, and known software bugs remain unpatched.

While earlier clustering toolkits expend a great deal of effort (i.e., software) to compare configurations of nodes, Rocks makes complete Operating System (OS) installation on a node the basic management tool. With attention to complete automation of this process, it becomes faster to reinstall all nodes to a known configuration than it is to determine if nodes were out of synchronization in the first place. Unlike a user's desktop, the OS on a cluster node is considered to be soft state that can be changed and/or updated rapidly. This is clearly more heavyweight than the philosophy of configuration management tools [Cfengine] that perform exhaustive examination and parity checking of an installed OS. At first glance, it seems wrong to reinstall the OS when a configuration parameter needs to be changed. Indeed, for a single node this might seem too severe. However, this approach scales exceptionally well, making it a preferred mode for even a modest-sized cluster. Because the OS can be installed from scratch in a short period of time, different (and perhaps incompatible) application-specific configurations can easily be installed on nodes. In addition, this structure ensures any upgrade will not interfere with actively running jobs.

One of the key ingredients of Rocks is a robust mechanism to produce customized distributions (with security patches pre-applied) that define the complete set of software for a particular node. A cluster may require several node types, including compute nodes, frontend nodes, file servers, and monitoring nodes. Each of these roles requires a specialized software set. Within a distribution, different node types are defined with a machine-specific Red Hat Kickstart file, made from a Rocks Kickstart Graph.

A Kickstart file is a text-based description of all the software packages and software configuration to be deployed on a node. The Rocks Kickstart Graph is an XML-based tree structure used to define Red Hat Kickstart files. By using a graph, Rocks can efficiently define node types without duplicating shared components. Similar to mammalian species sharing 80% of their genes, Rocks node types share much of their software set. The Rocks Kickstart Graph easily defines the differences between node types without duplicating the description of their similarities. See the Bibliography section for papers that describe the design of this structure in more depth.
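For illustration, here is a stripped-down Kickstart file of the general kind such a graph might emit for a compute node. All values are invented for this sketch, not taken from an actual Rocks distribution:

```
# Hypothetical Kickstart fragment for a compute-node type.
install
lang en_US
keyboard us
rootpw --iscrypted <hash-goes-here>
clearpart --all
autopart
reboot

%packages
@ Base
openssh-server

%post
# Role-specific configuration contributed by the node's XML profile
echo compute > /etc/node-type
```

Because the graph factors out the shared sections (partitioning, base packages), only the role-specific `%packages` and `%post` pieces differ between node types.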

By leveraging this installation technology, we can abstract out many of the hardware differences and allow the Kickstart process to autodetect the correct hardware modules to load (e.g., disk subsystem type: SCSI, IDE, integrated RAID adapter; Ethernet interfaces; and high-speed network interfaces). Further, we benefit from the robust and rich support that commercial Linux distributions must have to be viable in today's rapidly advancing marketplace. [...]
StartCom Linux

The StartCom Linux operating systems are initially based on the Red Hat Enterprise AS-3 source code with reliability, security and efficiency in mind, modified to fit the various tasks each flavor of StartCom Linux is assigned to....
Eadem Enterprise AS

"Eadem Enterprise AS V3.0 is the core operating system and infrastructure enterprise Linux solution. Supporting the largest commodity-architecture servers -- with up to eight CPUs and 16GB of main memory -- and available with the highest levels of support, Eadem Server is the ultimate solution for large departmental and datacenter servers."
X/OS Linux

X/OS Linux is a freely-available GNU/Linux distribution focused on business and corporate users, featuring:
Pie Box Enterprise Linux

Pie Box Enterprise Linux AS 3.0 is fully compatible with Red Hat Enterprise Linux AS 3.0. The differences between the Pie Box Enterprise Linux AS 3.0 and Red Hat Enterprise Linux AS 3.0 distributions are limited to two packages. The images in these packages have been modified to remove Red Hat, Inc. trademarks in order to comply with their Subscription Agreement (EULA). Specifically, these packages are "redhat-logos" and "anaconda-images"; all other binaries remain unchanged. More information regarding this modification can be found on our products page and in Red Hat, Inc.'s Trademark Guidelines.

Our updates and repository service is very similar to the up2date service offered by Red Hat, Inc. It is a subscription-based service under which we distribute errata that we have compiled from source RPM packages that have been released by Red Hat, Inc. Like the Pie Box Enterprise Linux AS 3.0 distribution, we only modify two packages. As a result we can provide customers with trusted packages that are compatible with Pie Box Enterprise Linux AS 3.0, Red Hat Enterprise Linux AS 3.0, ES 3.0 and WS 3.0 whilst satisfying Red Hat, Inc.'s Subscription Agreement (EULA). More information regarding this repository can be found on our products page.