
Each Apollo mission prior to Apollo 11 was plagued with about 20,000 defects apiece. [Ralph Rene]

Is that a lot? Were they crucial systems? Mr. Rene either doesn't know or doesn't wish to disclose it. This is where Mr. Rene might have benefitted from some actual engineering experience.

The command module alone contained two million individual parts. Even if those 20,000 defects had been confined to the command module alone, that's one defective part for every 100 parts. Adding in the service module, the lunar module, and the Saturn V launch vehicle, the more accurate figure is one defect in 300 parts. Since this was a highly complex experimental spacecraft, this is not an unusually high number.
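The arithmetic above can be checked directly. This is a minimal sketch; the part counts are the approximate figures quoted in the text, and the 6-million-part total for the full stack is an assumption consistent with the "one defect in 300 parts" figure.

```python
# Sanity check of the defect-rate arithmetic.
# 20,000 is Rene's own defect figure; part counts are approximate.
defects = 20_000
command_module_parts = 2_000_000

# If all defects were confined to the command module:
rate_cm = command_module_parts // defects
print(f"one defect per {rate_cm} command-module parts")  # one per 100

# Assuming roughly 6 million parts across CM, SM, LM, and Saturn V
# (an assumption chosen to match the text's "one in 300" figure):
total_parts = 6_000_000
rate_total = total_parts // defects
print(f"one defect per {rate_total} parts overall")  # one per 300
```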

But of course a per-part measurement isn't exactly correct. Some of the defects were simply anomalies in performance, or procedural problems. The issue here is not design flaws or procedural errors, but rather a meaningful context in which to evaluate a reported number of defects. Non-engineers are alarmed by what appears to them to be a very large number of defects. But those unfamiliar with the rigors of design for manned spaceflight, and with the procedures for reporting and dispositioning defects, are not equipped to put that statistic in a meaningful framework, and so this amounts to a kind of deception on Mr. Rene's part. He too is unlikely to know whether this constitutes a large or small number of defects for a product of this type and size. But since alarmism fits well into his agenda, Mr. Rene is not served by a thorough investigation into these alleged defects.

One must also consider the nature of the engineering development process. When the product has reached an advanced stage of design, the quality control engineers will begin to examine the design and suggest methods for testing. When components of the product become available they are subjected to "unit testing" that may produce a limited number of defect reports. Not until the prototypes are fully assembled and submitted to flight test can the real process of quality control begin. At this point the focus of the engineers shifts from active design to reactive treatment of discovered defects.

If you plot defect reports against the development timeline, you discover that this pattern of testing produces the maximum number of defect reports shortly before deployment. That number then falls just as dramatically between the peak and the point of deployment.

Just one defect could have blown the whole thing. [Ralph Rene]

Only if it were a defect in a critical system for which there was no redundancy, such as the lunar module ascent engine. Implying that no defects of any kind were permitted during a successful flight is very naive.

Defect reports, or "chits" as they are known in the industry, come in many types and levels of severity. The most serious are "show-stoppers", which directly impact the safety of the crew and the reliability of the spacecraft. Leaks in the RCS fuel system, for example, would be show-stoppers. But the majority of chits are not for critical items. In fact as many as half the chits may be what are known informally as CYAs (for "cover your ass").

Since quality control engineers are often held just as responsible as design and manufacturing engineers for defects in products, they tend to write chits prolifically so that in the event of a future failure they can prove to their managers that they did their job. Often these CYA chits are nothing more than differences in the interpretation of the product's written specifications. But since little or nothing can or ought to be done about the CYA issues, many languish in defect tracking systems as "open" issues when in fact the designers have no intention of addressing them.

The defect reporting mechanism is also used to introduce requests for additional functionality or enhancements. Failure to address these defects does not necessarily diminish the safety or functionality of the spacecraft. These chits basically start out saying, "It would be nice if ..." Since these observations frequently illuminate deficiencies in the original specifications, it's worth paying attention to them. But it's wrong to classify them as something that must be corrected before a safe flight.
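The triage described above can be sketched in a few lines. This is a hypothetical illustration only: the record fields, severity labels, and example summaries are invented for the sketch and do not reflect any real NASA or contractor tracking system.

```python
# Hypothetical chit records; fields and severities are illustrative only.
chits = [
    {"id": 1, "severity": "show-stopper", "summary": "RCS fuel system leak"},
    {"id": 2, "severity": "minor", "summary": "spec interpretation differs (CYA)"},
    {"id": 3, "severity": "enhancement", "summary": "it would be nice if ..."},
    {"id": 4, "severity": "minor", "summary": "paperwork discrepancy (CYA)"},
]

# Only show-stoppers must be closed before flight; the rest may
# legitimately remain "open" at launch or be waived.
must_fix = [c for c in chits if c["severity"] == "show-stopper"]
may_waive = [c for c in chits if c["severity"] != "show-stopper"]

print(len(must_fix), "must be fixed;", len(may_waive), "may remain open")
```

The point of the sketch is simply that a raw chit count lumps all four records together, while a flight-readiness decision looks only at the first category.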

Understanding that Apollos 7, 8, 9, and 10 were essentially flight tests of the Apollo hardware prior to operational deployment, we realize that the purpose of these flights was to discover defects. The unmanned flight tests (Apollos 4, 5, and 6) established the basic spaceworthiness of the Apollo spacecraft. At this point it was determined that the spacecraft were capable of ferrying astronauts safely to space and back to earth, although they were not yet capable of executing a lunar landing mission. But since flight tests are conducted specifically in order to discover defects, it is not surprising that a large number of defects was discovered.

The operational flights too encountered in-flight defects. Apollo 13 is the notable example, but we can examine the defect lists for successful flights as well.

The readiness reviews that precede each manned space launch do not require that all defects be corrected prior to launch, only those defects which directly impact mission success or the safety of the crew. It is common to grant waivers for items that have redundant backups or established safety margins, or which are not mission critical. Although airlines don't appreciate this being generally known, hardly a commercial flight takes to the air without some number of components having been "red-tagged" as unusable. And if our automobiles were subject to the same rigorous inspection and acceptance criteria as a manned space launch, the number of "show-stoppers" might astound us.
