The history of SE Linux begins at the National Security Agency, where researchers were growing frustrated that flawed security assumptions in modern computing environments were leading inevitably to failure. The paper they published presented a pair of powerful insights: that computer security was fundamentally dependent on (operating) system-level security, and that system-level security depended on checking every system-level interaction against a strong security policy. Sounds simple in principle, but how could it work in practice?
The nominal design of computer security for mainstream operating systems had been the candy bar model: hard on the outside, soft on the inside. Indeed, the term “root exploit” refers to the fact that once somebody figures out how to become super-user on a system (Windows or Unix), they can compromise the system in any way they want. The “remote root exploit” is the scariest: it means an attacker can take over a machine from anywhere on the internet. Some estimate that up to 100M PCs have already been compromised in this way and are part of a large and growing zombie network, waiting to be called to participate in some massive cyber take-down. So far, the main comfort we can take is that competition between zombie overlords limits the size of any one such network to around 1 to 1.5 million PCs at a time. Obviously a single point of failure that grants unlimited access to an attacker is not a very strong security model.
The SE Linux model controls every single system-level interface, to the point where it is safe (as has been demonstrated by Russell Coker) to give random internet attackers remote root access to test machines. Over a period of more than two years (the full length of the experiment), nobody was able to crack Russell’s SE Linux machines, even when starting with root access.
The challenge of SE Linux, which seemed impossible to people unfamiliar with open source, was not only to create the mandatory security policy infrastructure for a large (but finite) number of system-level services, but to bring a potentially unbounded number of applications into correct conformance with such a complex policy. In the early days it wasn’t easy: mainstream applications like the Apache web server used system services in ways that were permitted by POSIX standards but were not safe from a security point of view. The first port of Apache to SE Linux showed hundreds of privilege violations. For months there were daily bug reports about SE Linux denying Apache’s attempted access to system-level resources or interfaces.
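Denials of this kind show up as AVC (access vector cache) messages in the audit log. As a rough sketch (the specific field values below are invented for illustration), this is what a denial looks like and how the standard audit tools are used to inspect it:

```shell
# An illustrative AVC denial as it might appear in /var/log/audit/audit.log
# (pid, file name, and contexts here are made up for the example):
#   avc: denied { read } for pid=2317 comm="httpd" name="index.html"
#        scontext=system_u:system_r:httpd_t tcontext=system_u:object_r:user_home_t tclass=file

# List recent AVC denials (requires the audit package; run as root):
ausearch -m avc -ts recent

# Feed the denials to audit2allow, which summarizes them and can generate
# a local policy module named "mylocal" for review:
ausearch -m avc -ts recent | audit2allow -M mylocal
```

In the Apache-taming days described above, each such denial was a bug report: either the policy was too strict and needed a rule, or the application was doing something it genuinely should not.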
But after a year’s worth of effort, Apache had been tamed, as had about a dozen other fundamental internet services: bind, ftp, smtp, nfs, and the like. Through successive releases, more and more low-level services, then higher-level services, were conformed to the mandatory access controls of SE Linux. And today, every single application shipped as part of Red Hat’s Enterprise Linux has an SE Linux profile, over 1500 in all!
But this is not an ad for a commercial product. It is an explanation of how the open source model, which allowed multiple parties to collaborate, to see the hard work done by one team and apply it to the challenges of their own work, succeeded in doing something that was widely considered impossible just 5 years ago. Today, as the #1 story of the SE Linux community shows, the greatest challenge to SE Linux is overcoming the myth that SE Linux should be disabled. I imagine it’s not unlike the challenge of early aviators trying to prove that their heavier-than-air machines actually flew, to an audience that refused to accept even the evidence of its own eyes.
My congratulations to the SE Linux team for demonstrating so convincingly the benefits of open source collaboration. And to my readers: enable SE Linux and worry less about zero-day attacks.
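For readers who want to act on that advice, here is a minimal sketch of checking and enabling SE Linux on a Red Hat-style system (paths and values are the standard ones; your distribution may differ):

```shell
# Report the current mode: Enforcing, Permissive, or Disabled.
getenforce

# Switch the running system to enforcing mode (root required).
# Use "setenforce 0" to drop back to permissive while debugging a policy.
setenforce 1

# To make the setting persist across reboots, edit /etc/selinux/config
# so it contains lines like these:
#   SELINUX=enforcing
#   SELINUXTYPE=targeted
```

Permissive mode is the usual halfway house: the policy is checked and denials are logged, but nothing is blocked, which makes it a safe way to audit a system before turning enforcement on.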