Since the dawn of the aviation era, cockpit design has grown increasingly complex as new technologies have enabled aircraft to fly farther, faster, and more efficiently than ever before. As fleets modernize and impose greater workloads on pilots, the risk of a pilot exceeding his or her workload limit has become manifest. Because of the unpredictable nature of man, this problem is impossible to eliminate completely.
However, the instances of occurrence can be drastically reduced by examining the nature of man, how he operates in the cockpit, and what must be done by engineers to design a system in which man and machine are ideally interfaced. The latter point involves an in-depth analysis of system design with an emphasis on human factors, biomechanics, cockpit controls, and display systems. By analyzing these components of cockpit design and determining which variables of each will yield the fewest errors, a system can be designed in which the Liveware-Hardware interface promotes safety and reduces mishap frequency.
The history of cockpit design can be traced as far back as the first balloon flights, in which a barometer was used to measure altitude. The Wright Brothers incorporated a string attached to the aircraft to indicate slips and skids (Hawkins, 241). However, the first real efforts toward human factors implementation in cockpit design began in the early 1930s. During this time, the United States Postal Service began flying aircraft in all-weather missions (Kane, 4:9). The greater reliance on instrumentation raised the question of where to put each display and control.
However, not much attention was focused on this area, as engineers cared more about getting the instrument into the cockpit than about how it would interface with the pilot (Sanders & McCormick, 39). In the mid- to late 1930s, the development of the first gyroscopic instruments forced engineers to make their first major human factors-related decision. Rudimentary situation indicators raised the question of whether the displays should reflect the view as seen from inside the cockpit, with the horizon moving behind a fixed miniature airplane, or as it would be seen from outside the aircraft.
Until the end of World War II, aircraft were manufactured using both types of display. This caused confusion among pilots who were familiar with one type of display but were flying an aircraft with the other. Several safety violations were attributed to this confusion, none of them fatal (Fitts, 20-21). Shortly after World War II, aircraft cockpits were standardized to the 'six-pack' configuration: a collection of the six critical flight instruments arranged in two rows of three directly in front of the pilot.
Arranged left to right, the top row held the airspeed indicator, artificial horizon, and altimeter; the bottom row held the turn coordinator, heading indicator, and vertical speed indicator. This arrangement of instruments provided easy transition training for pilots moving from one aircraft to another. In addition, instrument scanning was enhanced, because the instruments were strategically placed so the pilot could reference each instrument against the artificial horizon in a hub-and-spoke method (Fitts, 26-30). Since then, the bulk of human-interface development in the cockpit has been driven by technological achievements.
The dramatic increase in the complexity of aircraft after the dawn of the jet age brought with it a greater need than ever for automation beyond a simple autopilot. Human factors studies in other industries and within the military paved the way for some of the most recent technological innovations, such as the glass cockpit, the Head-Up Display (HUD), and other advanced panel displays. Although these systems are on the cutting edge of technology, they too are susceptible to design problems, some of which are responsible for the incidents and accidents mentioned earlier.
They will be discussed in further detail in another chapter (Hawkins, 249-54). A design team should support the concept that the pilot's interface with the system, including task needs, decision needs, feedback requirements, and responsibilities, must be a primary consideration in defining the system's functions and logic, as opposed to the system concept coming first and the user interface coming later, after the system's functionality is fully defined. There are numerous examples where human-centered design principles and processes could have been better applied to improve the design process and the final product.
Although manufacturers utilize human factors specialists to varying degrees, they are typically brought into the design effort in limited roles or late in the process, after the operational and functional requirements have been defined (Sanders & McCormick, 727-8). When the human factors specialist joins the design process late, his or her ability to influence the final design and facilitate incorporation of human-centered design principles is severely compromised. Human factors should be considered on par with the other disciplines involved in the design process.
The design process can be seen as a six-step sequence: determining the objectives and performance specifications, defining the system, basic system design, interface design, facilitator design, and testing and evaluation of the system. This model is theoretical, and few design efforts actually meet its performance objectives. Each step directly involves input from human factors data and incorporates it into the design philosophy (Bailey, 192-5). Determining the objectives and performance specifications includes defining the fundamental purpose of the system and evaluating what the system must do to achieve that purpose.
This also includes identifying the intended users of the system and what skills those operators will have. Fundamentally, this first step addresses a broad definition of what activity-based needs the system must address. The second step, definition of the system, determines the functions the system must perform to achieve the performance specifications (unlike the broader purpose-based evaluation in the first step). Here, the human factors specialists ensure that functions match the needs of the operator. During this step, functional flow diagrams can be drafted, but the design team must keep in mind that only general functions can be listed.
More specific system characteristics are covered in step three, basic system design (Sanders & McCormick, 728-9). The basic system design phase determines a number of variables, one of which is the allocation of functions among Liveware, Hardware, and Software. A sample allocation model considers five methods: mandatory, balance of value, utilitarian, affective and cognitive support, and dynamic. Mandatory allocation is the distribution of tasks based on limitations: there are some tasks that Liveware is incapable of handling, and likewise for Hardware.
Other considerations in mandatory allocation are laws and environmental restraints. Balance of value allocation holds that each task is either incapable of being done by Liveware or Hardware, is better done by one or the other, or can be done only by one of them. Utilitarian allocation is based on economic restraints. With the avionics package in many commercial jets costing as much as 15% of the overall aircraft price (Hawkins, 243), it would be very easy for design teams to allocate as many tasks to the operator as possible.
This, in fact, was standard practice before the advent of automation as it exists today. The antithesis of that philosophy is to automate as many tasks as possible to relieve pressure on the pilot. Affective and cognitive support allocation recognizes the unique needs of the Liveware component and assigns tasks to Hardware to provide as much information and decision-making support as possible. It also takes into account limitations, such as emotions and stress, which can impede Liveware performance.
Finally, dynamic allocation refers to an operator-controlled process in which the pilot can determine which functions should be delegated to the machine and which he or she should control at any time. Again, this allocation model is only theoretical, and a design process will often encompass all, or sometimes none, of these philosophies (Sanders & McCormick, 730-4). Basic system design also delegates Liveware performance requirements: characteristics that the operator must possess for the system to meet design specifications (such as accuracy, speed, training, and proficiency).
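The five allocation methods described above can be sketched as a simple decision routine. This is only an illustrative sketch: the function name, task fields, thresholds, and the ordering of the checks are assumptions made for clarity, not a prescribed algorithm from the allocation literature.

```python
# Illustrative sketch of the five function-allocation methods described in
# the text. All field names, thresholds, and the rule ordering are
# hypothetical assumptions, not an established allocation procedure.

def allocate(task):
    """Return 'liveware', 'hardware', or 'dynamic' for a task dict."""
    # Mandatory allocation: hard limits (capability, law, environment)
    if task.get("only_hardware"):
        return "hardware"
    if task.get("only_liveware"):
        return "liveware"
    # Utilitarian allocation: economic restraints favor the operator
    if task.get("automation_cost", 0) > task.get("budget", float("inf")):
        return "liveware"
    # Affective/cognitive support: offload to Hardware under high stress
    if task.get("stress_level", 0) > 0.7:
        return "hardware"
    # Balance of value: assign the task to whichever does it better
    if task.get("hardware_score", 0) > task.get("liveware_score", 0):
        return "hardware"
    if task.get("liveware_score", 0) > task.get("hardware_score", 0):
        return "liveware"
    # Dynamic allocation: leave the choice to the operator at run time
    return "dynamic"

print(allocate({"only_liveware": True}))                      # liveware
print(allocate({"hardware_score": 2, "liveware_score": 1}))   # hardware
print(allocate({}))                                           # dynamic
```

In practice, as the text notes, a real design effort may blend these philosophies or abandon them entirely; the point of the sketch is only that each method reduces to a different kind of decision rule.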
Once that is determined, an in-depth task description and analysis is created. This phase is essential to the human factors interface, because it analyzes the nature of the task and breaks it down into every step necessary to complete that task. The steps are further broken down to determine the following criteria: the stimulus required to initiate the step, the decision making that must be accomplished (if any), the actions required, the information needed, feedback, potential sources of error, and what needs to be done to accomplish successful step completion. Task analysis is the foremost method of defining the Liveware-Hardware interface.
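The criteria listed above map naturally onto a simple record structure, one per step of the analyzed task. The field names follow the criteria in the text, but the class itself and the example values are hypothetical illustrations, not a standard task-analysis format.

```python
# Illustrative sketch: one step of a task analysis, with fields drawn from
# the criteria listed in the text. The class and example values are
# hypothetical, not a standardized task-analysis record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskStep:
    stimulus: str                  # cue that initiates the step
    decision: str                  # decision making required, if any
    actions: List[str]             # actions the operator must take
    information: List[str]         # information needed to act
    feedback: str                  # how successful completion is confirmed
    error_sources: List[str] = field(default_factory=list)

# Hypothetical example: one step of an instrument approach
step = TaskStep(
    stimulus="glideslope indicator comes alive",
    decision="continue the approach or go around",
    actions=["reduce power", "extend flaps"],
    information=["airspeed", "altitude", "glideslope deviation"],
    feedback="glideslope needle centered",
    error_sources=["misread altimeter", "delayed power reduction"],
)
print(step.stimulus)
```

Breaking a task into records like this makes the Liveware-Hardware interface explicit: every stimulus and feedback item must come from a display, and every action must map to a control.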
It is imperative that a cockpit be designed using a process similar to this if it is to maintain effective communication between the operator and the machine (Bailey, 202-6). It is widely accepted that the equipment determines the job. Based on that assumption, operator participation in this design phase can greatly enhance job enlargement and enrichment (Sanders & McCormick, 737; Hawkins, 143-4). Interface design, the fourth process in the design model, analyzes the interfaces between all components of the SHEL model, with an emphasis on the human factors role in gathering and interpreting data.
During this stage, evaluations are made of suggested designs, human factors data is gathered (such as statistical data on body dimensions), and any gathered data is applied. Any application of data goes through a sub-process that determines the data's practical significance, its interface with the environment, the risks of implementation, and any trade-offs involved. The last item in this phase is conducting Liveware performance studies to determine the capabilities and limitations of that component in the suggested design.
The fifth step in the design stage is facilitator design. Facilitators are basically Software designs that enhance the Liveware-Hardware interface, such as operating manuals, placards, and graphs. Finally, the last design step is to test the proposed design and evaluate the human factors input and the interfaces between all components involved. Applying this process to each system design will enhance the operator's ability to control the system within desired specifications. Some of the specific design characteristics can be found in subsequent chapters.