Aircraft Acquisition

With a new administration in place in Washington, supposedly dedicated to the task of improving the efficiency of government, the time is obviously propitious for suggesting a major overhaul in the methods now being used to procure our aircraft — the “acquisition process” as it has come to be called.

In the Spring issue of Wings of Gold, the subject was forcefully addressed by Admiral Moorer and Vice Admiral Cagle, both of whom called for eliminating the DSARC, and by Vice Admiral Seymour, who called for positive change in the entire acquisition process.

The need for significant revision should be apparent to everyone who looks at what has been done in the last 10 to 20 years and compares it with what could have been done. The solution to the problem should be equally apparent. We need only return to the system as it was practiced by the Navy for aircraft procurements in the late 1950s. That system had evolved over a period of years, with each procurement tailored to the particular circumstances then existing, while avoiding the mistakes made on previous programs. None of the actions taken from above to reform the system during and subsequent to the McNamara regime was either necessary or desirable. All were designed by relative amateurs and were recognized by the professionals at the time as being either solutions to problems which didn’t exist or solutions irrelevant to the problems that did exist. Those non-solutions run the gamut from McNamara’s “Program Definition Phase” and “Cost Reduction” to the Packard “DSARC”, “Prototyping”, and “Separate T & E”.

As many are aware, I am not considered exactly an unbiased observer in the area of how best to procure aircraft. Since I was convinced that the methods used in the Navy had been demonstrated to be superior to those of the other services, I consistently opposed attempts, usually made in the name of standardization, to have us adopt their methodologies or reintroduce practices we had previously discarded.

Although we managed to get through the initiation phase of the F-14 and S-3 procurements without complete defeat, gradual compromises have now just about eliminated all vestiges of the system as it was, as everyone now complies with the prescribed rules instead of circumventing them.

The historians of today have trouble documenting the system actually used internally by the Bureau of Aeronautics (one of the predecessors of the Naval Air Systems Command) prior to the McNamara era. An operative directive covering the procurement of major systems in fact did not exist at that time in the bureau, although on occasion audit officials or management commissions would recommend that one be produced. The Robertson Committee, a Blue Ribbon-type operation, was one example of this. In one of its 1956 reports (“A Program for Reducing the Time Cycle from Concept to Inventory, Manned Aircraft Weapon Systems”), it approved of the Navy’s procurement record, but directed that an “Instruction” be issued on the acquisition system being used. This was never accomplished, since the internal bureaucracy would not agree on the system as it actually existed, and no one wanted to degrade the system to match the charters of those parts of the organization which objected.


An “Instruction” was really unnecessary within the bureau since the basic design competition system was well known to all those actually involved. Up to that point, the Navy had not found it necessary to standardize procurement systems between its bureaus, except on a very broad policy level. Most reasonable people would agree that there was little to be gained by detailed standardization of procurement methods for such diverse products as aircraft, ships, and ammunition procured from quite different industries, by different personnel, in separate bureaus. On the other hand, compromises to achieve a single system could only reduce the efficiency of those several systems, each of which was considered more nearly optimum for its particular field by those using it.

In the absence of any official documentation, let me set down some of the ways in which we conducted the business of developing new aircraft, and the reasons therefor, before control of the procedures was taken from us. Obviously, this is my version of history, and it may be as incomplete and as inaccurate as the story of the blind men describing an elephant. If there are more competent observers in the readership, perhaps they can contribute to a fuller understanding of the situation, then and now.

In presentations made at the time, the basic steps in the acquisition process were listed briefly as: Establish Requirement; Define Program; Obtain Program Approval; Conduct Design Competition; Contract. Of these, a sound “Operational Requirement” is probably the most important single factor for a successful development. No amount of technical expertise or management attention can overcome the handicaps introduced by a faulty one.

Subsonic fighters do not grow into supersonic ones. Day fighters do not grow into good all-weather fighters (F3H, F7U). Short-ranged designs get shorter, and inadequate payloads shrink during development and operation. On the other hand, overstated requirements kill programs before they start.

When “Military Requirements” was a part of BuAer (instead of OpNav), the acquisition process started with either “Plans” writing a memorandum to “Engineering” outlining the characteristics needed in a new design, or “Engineering” writing to “Plans” detailing a new capability judged feasible. Conceptual studies were done both in-house and by industry, and almost always on an unfunded basis. Decisions to start programs were made on a judgmental basis by the Chief, BuAer, after considering — but not necessarily in writing — all the alternatives. By the early 1950s, the discipline of operational analysis had developed to a point that allowed its use internally to help in reaching those decisions, although then and now, the experienced professional, armed with the basic technical and cost characteristics, would usually not need the formal analysis to reach a sound conclusion. The closed loop generation of requirements balancing operational desires with technical and cost feasibility is as necessary today as it was then.

The “Requirements” served as the key for more detailed studies in the Navy and in industry, and were most effective when they spelled out minimum levels of performance to be met and minimum levels of equipment and armament to be carried. A nonspecific, generalized mission type of requirement, such as “Achieve air superiority over the battlefield in 1990 when operating from a carrier,” is virtually useless to any preliminary design organization, although this type of “Requirement” is frequently advocated in the belief that more creative solutions may emerge. All that really happens is that all the design organizations descend upon the Navy en masse trying to determine what is wanted. The process works far better when the professional operators spell out their needs with as much precision as possible, while still allowing freedom of acceptable choices in the design process.

In the period of the 1950s, with need and feasibility determined, the program was introduced into the budget cycle. After budget approval, and not before, a type specification was drawn up and a design competition held. The timing for this phase was normally planned as three months to prepare specifications and issue the “Request for Proposal” (or, earlier and better, an “Invitation to Bid”); three months for industry to prepare its proposals; three months for evaluation; and a final three months for decision, negotiation, and contract award. The entire effort was unfunded. (Industry was allowed “Bid and Proposal” expenses as part of overhead, so the government paid the bill indirectly if the manufacturer had ongoing production contracts.)

The reason for conducting the whole of the conceptual phase of development on an unfunded basis was obviously that this method was by far the simplest: it saved time, money, and effort, and sacrificed nothing of value. A cited disadvantage, used in part to change the rules, was that very small businesses could not compete because of the bid and study expenses involved. Since that type of bidder would not qualify for the later development award, it is not clear what is gained by paying him to compete in the conceptual phase. At least a year is added to the development cycle each time an open, competitive, funded phase is added. There are also some problems, usually unrecognized, in the real world in conducting study-type competitions. There is little objective information on which to base a selection, leading to an impossible task in justifying an award to a third party, and particularly to an unsuccessful bidder. The Air Force’s experience with “Source Selection” (note Source, not Design) using only brief management proposals was apparently unsatisfactory for the same reasons.

The real “benefit” may have been the ability to make politically acceptable decisions, or to practice “industrial statesmanship”, in the award of contracts. The Air Force returned to the practice of requiring design data in their “Source Selection” in the early 1960s, although they continued to emphasize “Source”.

The design competition method employed by the Navy was planned to permit the selection of the best design from those submitted under ground rules which tried to minimize time and cost for both the Navy and industry. The details of the competition process as practiced by the Navy, and how it differed from the other services, require too much space for this article, but a few of the fundamentals may be of interest. Normally the “Systems” were specified, and acceptable engines listed, allowing the aircraft itself to be the primary variable. Remember that the “program” had already been defined and authorized in the budget. The best design was selected on the basis of the Navy’s own estimates of performance, cost, flying qualities, logistics, etc. The Chief, BuAer, exercised his authority for making all selections and reported his decision to OpNav and Secretarial levels.

If the selected design matched or bettered the characteristics used in making the decision to include the program in the budget, a contract was immediately negotiated and awarded. If the design did not meet the earlier estimates, program rejustification was necessary, although this step in practice was almost never required.

The Navy relied on what was basically an airframe competition for a variety of reasons, among which, of course, was the fact that the system had worked reasonably well over the years. Adequate data were available to evaluate, and then to justify, the selection to everyone. The ground rules were well understood by industry, which accepted the fact that the bureau had the engineering talent to produce sound comparative data in the evaluation process, thus preventing competitions from becoming lying contests. (We can note a total lack of program failures caused by failure of the Navy’s engineers to predict aircraft weight, performance, etc., to an acceptable level of accuracy.)

When major systems, weapons, engine types, etc., are left as variables in a competition, one is forced to rely on more sophisticated analysis techniques, more difficult to define and usually much harder to accept.

Additionally, one runs the risk that a “best system” is in the “worst airplane”, or vice versa. Separation of the major variables into separate competitions eliminates that risk. (The S-3 competition was an example of a competition in which the “system” was left as a partial variable, but fortunately, the best system and best airframe were proposed by the same bidder.)

Separation of systems and engine selection from the airplane competition is also logical because each requires a longer development period than does the aircraft itself. If the aircraft development cycle is to be reduced to a reasonable length, both engines and major systems have to be funded and developed separately. The “Systems Approach”, requiring integrated development and funding, was instituted during the McNamara years. Ignored were the lessons taught by programs around the J40, J46 and T40 engine developments, and by lead-nosed fighters whose fire control systems were late. (These examples were actually separate developments, but with inadequate lead time over the airframes.)

The current Navy trainer competition seems to be a good example of how not to structure a program. There are so many variables that a single best selection would seem improbable from the collection of modified and new, foreign and domestic airframes, powered by one or two foreign or domestic engines, and each accompanied by a different ground-based training system and syllabus. Because of budgetary inadequacies, presumably, the development schedule is at least twice as long as it should be. The training system issue, at least, could have been decided well in advance of the airplane competition.

Leaving the competition process, it might be well to discuss some of the other parts of the development process, as it is now being practiced, which were adopted despite the lessons from the past. Competitive prototyping with fly-and-test-before-buy was reintroduced during the Packard era, although it had been discussed before his arrival as DepSecDef, and was strongly espoused by the GAO and by some of the think tank theorists. The Navy stopped the practice of the “prototype, fly, test, redesign, and produce” type of procurement before World War II. Time and cost penalties were too great as compared to concurrent development and production programs when a high probability of success could be predicted. The last Navy fighter to reach the fleet via the old prototype route was the F4U-1 Corsair, initiated in 1938. Every service fighter after that was developed in programs which authorized production prior to first flight. We did have a few prototype fighters as well, but none reached production, e.g., the XF8B-1, XF5U-1, XF14C-2, and XF15C-1.

As demonstrated over the years, the prototype approach saves money over a concurrent program only when the project fails and is terminated. The professionals in the development game can certainly discriminate between the designs bound to succeed and reach the fleet and those that probably will not. Those that are predicted to fail, or to offer no improvement in capability even if they succeed, should not be started, rather than prototyped (XFV-12).

As part of the Fleet Introduction of Replacement Models (FIRM) plan of the early 1950s, the Navy obtained approval to fund those aircraft developments designed for production and fleet use from the beginning almost wholly with production funds. R & D funding was used only for a “Phase I” effort, usually initial engineering through the mock-up, or for about a three-month span of effort. That change in funding rules provided a windfall of R & D funds which was unwisely exploited to initiate more programs than could be carried to completion. By the end of the decade, that lesson had been absorbed and more realistic, longer-range fiscal planning implemented. Unfortunately, the McNamara era purists arrived and reinstated full development funding with R & D monies, but without increasing the R & D share of the Navy’s aeronautical budget by the orders of magnitude required to compensate for the rules change. In fact, their detailed categorization of R & D funds into separate accounts, and the greatly increased scope of testing required to be included, made the R & D budget crunch far worse than it had been a decade earlier. Funding from a single pocket would appear to be a far simpler arrangement, and would facilitate needed trade-off decisions between continuing a design in production or starting a new one.

By the end of the 1950s, the Navy had returned to a fixed-price type of contracting for development and production after using cost-plus-fixed-fee (CPFF) contracts during and after WW II. CPFF contracts were much simpler both to negotiate and to administer, but their flexibility led to cost overruns, which necessitated cancelling many smaller programs in order to remain solvent. By requesting both firm and cost-plus bids, it was found that industry was willing to undertake total development and to offer production options on a fixed-price or fixed-price-ceiling basis for a reasonable number of aircraft. This method, with some variations, was used in procuring the first 200 CH-46s, the first 100 CH-53s, and about the first 200 A-7s. The OV-10 contract, which followed, provided for up to 500 aircraft, but the production options for that quantity were never exercised. The fixed-price type of contracting solved the cost overrun problem for the government, if not for the contractor. It also greatly increased the credibility of cost quotations, while the increased discipline necessary in defining the program was undoubtedly good for both parties.

At the end of the 1960s, both the F-14 and S-3 contracts were let using the same method of contracting, with an added feature providing for a 50 percent variation in production option quantities. The entire system proved feasible as long as the producer was not forced into accepting too large a cost exposure over too long a period. Lockheed produced all the S-3s within their contract ceiling, but Grumman found it impossible to accept the final production options without going bankrupt. A shorter period of years, as initially recommended by the Navy, would have eliminated that problem while retaining the basic advantages noted earlier. Acquisition instructions in the Packard era directed a return to CPFF development for reasons that I still do not understand, but which apparently were related to our F-14 and the Air Force’s C-5 financial problems. Neither of these, however, was caused by fixed-price R & D contracting.

Among the changes made in the acquisition process in the last 20 years has been the greatly increased emphasis on Program Management, with capital letters. It could be noted that there seems to be a fair degree of correlation between that growth in emphasis and the severity of the acquisition problem in terms of lengthened schedules and increased costs. The greater the management, the worse the problem. The former “Project Officer” in the services and “Project Engineer” in industry has been elevated by a couple of ranks, designated a “PM” and given “complete authority” for his program. Although the concept has been employed to different degrees within the services, the clearest effect has been the degradation of technical capability in all the agencies involved.

Fixed, or usually reduced, overall personnel ceilings have necessitated that the management growth be achieved at the expense of the functional disciplines, already weakened by previously forced decentralization moves. The PM, with responsibility restricted to only one program, tends to build a self-sufficient staff to overcome a perceived lack of responsiveness from supporting organizations already reduced in size, thereby further compounding the problem. In practice, the PM becomes a salesman for his program, too often ignoring the needs of his service as a whole. Nearly every so-called management improvement, from the “Systems Approach” of the 1950s on, has been introduced in other services or in industry and later adopted within the Navy under outside pressure, with no proof of efficacy. From a personal point of view, I believe that every reorganization and every so-called management innovation in the last 20 years made the task of starting and producing naval aircraft more difficult.

There are many other nonproductive management techniques which have been adopted since the relatively good, relatively old days. It must be time to get back to basics and get rid of the system which requires the development cycle to be several years longer for the F/A-18 than for the far more capable F-14 (even when one ignores the whole prototype phase of the former), and at least five years longer for the CH-53E than for the original CH-53A (ignoring the four years spent in unnecessary delays in starting the program). We should return to optimizing the naval aircraft acquisition process, rather than accepting compromise in the name of Federal procurement standardization. Perhaps we need a class-action malpractice suit against all those who have fouled up what was once a pretty fair system with a good track record, and which, even then, we knew could have been better.
