Can Secret Operating Systems Be Any More Secure Than Widely Available OSes?


An operating system produced by the National Security Agency or any of its partners should be more secure than anything we, the public, can get for our servers and home machines.

Maybe, but I’m not so sure. The same doubt applies to other national security agencies, law enforcement agencies, nuclear power plant operators, and the corporate headquarters of businesses such as defense contractors; and I may not have included every type of entity that should be on the list.

Public or well-known operating systems (OSes) get critiqued by large numbers of users. Microsoft Windows may generate more critiques than any other OS simply because it is so widely deployed and has substantial vulnerabilities, probably due to Microsoft’s business model of releasing versions despite flaws to be patched later; and all major OSes depend on frequent updates to meet new technology and new demands. Many of those criticisms are reported to other users and to an OS’s programmers and managers. The reports are variously motivated: a shared interest in security, the fun of uncovering problems, a desire to solve problems in code and thereby build a résumé for paid work somewhere, a duty of employment (perhaps at an OS firm), and a desire to cause harm (yes, I meant that; not all goals need be compatible with each other in the same person).

Secret operating systems, on the other hand, don’t attract much public attention, so they get little of the reporting that flows from most of those motivations. (A claim of lesser attention has been made even about a publicly available OS with a reputation for high security in website hosting.) They do attract enemy attention, and that’s critical, but enemies generally don’t report their findings and recommendations anywhere that developers of a secret OS can find them.

The NSA, like other agencies and enterprises that maintain their own OSes, is therefore crippled by the lack of reports arriving in its OS maintainers’ hands. It can compensate by having more people attack its OSes from inside, with permission: penetration testers (pen testers) or tiger teams. That requires many more people as the complexity of the OS grows, and such an OS is almost certainly complex and frequently revised, the revisions usually introducing new complexities. Often those people are good but not the best in their profession (they can’t all be the best), and usually institutional limits on their work mean they can’t act entirely like the bad actors they are supposed to mimic. Being too good at that kind of work can cost you your job, and occasionally even risk jail time. Some of those limits probably can’t be removed, for institutional reasons. The internal testing and reporting is therefore almost always both inadequate and expensive; and when a service is inadequate, cost-cutting becomes more attractive, which would undercut or destroy the service.

It may be that their solution is OS obscurity. An attacker will commonly try to deduce which OS runs a computer of interest, and there are often clues. The OS may broadcast a description of itself, or hints toward one; if the machine is connected to the Internet, a user agent such as a browser may send out a description of the operating system it runs on. Withholding that information is itself a clue, and broadcasting a false description invites attempts at disproof, attempts that often succeed. A study of the applications running on the computer may reveal some that run on very few OSes; if it can be confirmed that those applications are not just sitting there (as on a honeypot) or running under an OS emulator but actually run natively at times, then the set of possible OSes shrinks further. A custom OS might still present itself to an application just as another OS would, but the number of possibilities is reduced all the same. And if all public OSes are eliminated or unlikely, the attacker’s focus can shift to a custom OS, which implies design novelty (including conceptual novelty) but also a greater set of vulnerabilities.
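To make the user-agent clue concrete, here is a minimal sketch of passive OS guessing from a browser’s self-reported User-Agent header. It is illustrative only: the token table is my own assumption, not any real tool’s database, and serious fingerprinting tools such as p0f and Nmap rely on much deeper signals (TCP/IP stack quirks, for instance) rather than self-reported strings.

```python
# Illustrative token table (an assumption for this sketch, not a real
# tool's database). Order matters: an Android user agent also contains
# "Linux", and an iPhone user agent contains "like Mac OS X".
UA_OS_TOKENS = [
    ("Windows NT", "Windows"),
    ("Android", "Android"),
    ("iPhone", "iOS"),
    ("iPad", "iOS"),
    ("Mac OS X", "macOS"),
    ("Linux", "Linux"),
]

def guess_os(user_agent: str) -> str:
    """Best-effort OS guess from a self-reported User-Agent header."""
    for token, label in UA_OS_TOKENS:
        if token in user_agent:
            return label
    # An unrecognized (or absent) header is itself a clue, as noted above.
    return "unknown: possibly custom, spoofed, or deliberately withheld"

if __name__ == "__main__":
    ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    print(guess_os(ua))  # -> Windows
```

Even this toy version shows how fragile such surface clues are: they are trivially spoofed, which is why a false self-description invites the attempts at disproof mentioned above.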

The attacks can then begin, to see what would work.

Maybe the best solution is to use a popular OS with some customization for security. While some of the agencies in question could lawfully use closed-source OSes and modify them without notice to the proprietors, not all can; and even those that can would suffer from there being few people among closed-source users with expertise and critiques to share. To gain access to the expertise and critiques of many more people, open source is better. So the agencies and enterprises may be best off using an open-source operating system that is well reputed for security and for having a large number of people reporting bugs and feature requests (which suggests a large number of users examining it). The choice of operating system should get ground-up reconsideration every few years, in case of rerankings in security quality, in security reputation (a proxy for the part of security quality that one organization with finite staffing can’t know directly), and in the number of publicly known reviewers worldwide. The organization should invite security experts in many different OSes (including multiple experts in each OS) to study the chosen one, reprogram the OS to add security, and accept upstream updates, which it must also reprogram. To avoid disclosing where its modifications are located, the organization would need to learn that an update is pending, restore the OS essentially to its non-custom state, accept all updates, restore the customizations where compatible, and newly modify the updates as needed; the last two steps require rapid and comprehensive cross-platform testing, and that’s expensive. The whole procedure, sketched below, would be a lot of work, and it might have to be done daily.
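That restore-update-reapply cycle resembles maintaining a private patch series against an upstream source tree. Here is a minimal sketch of one pass of the cycle, driving git from Python; the repository path, remote, and branch names are all hypothetical, and a real pipeline would follow the rebase with the rebuild and comprehensive cross-platform test run described above.

```python
import subprocess

REPO = "/srv/os-src"        # hypothetical local clone of the OS source tree
UPSTREAM = "upstream/main"  # hypothetical branch tracking the public OS
CUSTOM = "hardened"         # hypothetical branch carrying the security patches

def git(*args: str) -> None:
    """Run a git command inside the source tree, failing loudly on error."""
    subprocess.run(["git", "-C", REPO, *args], check=True)

def one_update_pass() -> None:
    # Learn that updates are pending by fetching the public tree.
    git("fetch", "upstream")
    # Replay the local security patches on top of the fresh upstream state.
    # This collapses "restore to non-custom, accept all updates, restore
    # customizations where compatible" into one operation; any rebase
    # conflict marks a patch that must be newly rewritten by hand.
    git("checkout", CUSTOM)
    git("rebase", UPSTREAM)

one_update_pass()
```

The git mechanics themselves are cheap; as the paragraph above notes, the real cost lies in the comprehensive testing that has to follow every such pass.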

Wider-ranging creative thinking would also help. People who are expert in non-OS computer security, and people who are expert in noncomputer security, may bring or develop concepts that OS programmers can adapt for an OS.

Institutions often don’t do everything they should, and high-security agencies may be no different in that regard: they do more, but not everything. They doubtless attract a great many highly skilled, unreported probes and attacks, some of them legal and only some of them detected.

Big concepts can still be developed and implemented. The NSA created SELinux, which collects some of its concepts, and it doubtless has other security concepts up its sleeve. Nonetheless, SELinux was surely not developed in a day; doing that would have required a very large staff and a great deal of coordination, and coordinating creative people is a classic case of herding cats.