Test conduct
Systems testing can be divided conveniently into three main areas of interest: the interface with the operator, performance, and system integration. The interface with the operator breaks down in turn into the areas of information and control. Information covers both the data input that the system needs from the operator, evaluating the ease with which it can be provided, and the information provided by the system to the operator. In the latter case it will be necessary to answer such questions as:
• Is the information presented in the required format?
• How easy is it to assimilate?
• What workload is required to monitor system performance?
• Are failures indicated adequately?
Assessing control aspects involves examining the range, degree and ease of control provided. In essence it answers the question ‘Can the operator make the system perform the required function within its capabilities, and can this be achieved easily?’ It should be remembered that there may be limits to the amount of control that an operator should be given. For example, it would not be desirable to allow a pilot the option of completely disabling visual or audio warnings. Another example would be providing individual lighting controls for each cockpit instrument: this would give the pilot complete control, but it would be difficult and time-consuming to exercise it.
Clearly, evaluating the adequacy of control and the operator interface is largely a subjective process, while evaluating performance can often be an objective process. System performance can be broken down into quantitative and qualitative performance. In the case of a navigation system the quantitative performance, or accuracy, is determined by comparing the actual position, measured from accurate maps or pre-surveyed points, with the displayed position. For a weapon system the accuracy achieved in firing tests is measured against the required accuracy. With other systems, such as displays and piloting vision aids, assessing performance relies to a large extent on the qualitative opinion of the pilot or operator. When dealing with qualitative aspects of systems testing it is important to describe fully the way that the performance affects the conduct of role tasks. As in all testing the fundamental question is ‘Does the system contribute sufficiently to the efficient conduct of the mission?’
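By way of illustration, the sketch below shows one way the quantitative comparison for a navigation system might be reduced to summary statistics. The position fixes, the use of the median radial error as a crude circular error probable (CEP) estimate, and the accuracy requirement are all assumed example values, not data from any particular trial.

    # Illustrative sketch: comparing displayed navigation positions against
    # pre-surveyed ground-truth points and summarising the radial errors.
    # All figures are assumed example values, not trial data.
    import math

    # (truth_east_m, truth_north_m, displayed_east_m, displayed_north_m)
    fixes = [
        (1000.0, 2000.0, 1012.0, 1991.0),
        (3500.0, 4200.0, 3488.0, 4215.0),
        (6000.0, 1500.0, 6021.0, 1507.0),
    ]

    # Radial (horizontal) error at each fix
    radial_errors = [math.hypot(de - te, dn - tn) for te, tn, de, dn in fixes]

    mean_error = sum(radial_errors) / len(radial_errors)
    cep50 = sorted(radial_errors)[len(radial_errors) // 2]  # median as a crude CEP estimate

    required_accuracy_m = 25.0  # assumed requirement, for illustration only
    print(f"Mean radial error: {mean_error:.1f} m")
    print(f"Approximate CEP:   {cep50:.1f} m")
    print("Requirement met" if cep50 <= required_accuracy_m else "Requirement not met")

In practice the error statistics would be compared against the specified accuracy over a representative set of routes and flight conditions, but the principle of measuring displayed position against surveyed truth remains the same.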
The last area to be discussed is systems integration. In the past most aircraft systems were stand-alone and the pilot was required to monitor the status and output of all the systems individually: the only integration which took place was in the pilot’s head! This led to a high level of pilot workload. As more and more systems have been added to rotorcraft and operational tasks have become more demanding, the need to off-load the crew has become increasingly important. The systems on board should share information so that the pilot or operator is not required to enter information more than once, pass information from one system to another or make unnecessary control selections. Examples of good integration might include automatic display of a cable hover screen when the sonar is armed or display of a pre-landing checklist as the final point on the navigation plan is approached. When assessing the adequacy of integration within an aircraft, realistic simulated missions are conducted and the actions required of each crew member are analyzed with regard to the systems. If any of the actions could be eliminated or made easier then the integration is deficient.
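As a minimal sketch of how such an analysis might be recorded, the example below tallies crew actions logged during a simulated mission and flags any data item that had to be entered into more than one system. The log format, crew roles and entries are assumptions chosen purely for illustration.

    # Illustrative sketch: flagging data entered into more than one system
    # during a simulated mission. Log format and entries are assumed.
    from collections import defaultdict

    # (crew_member, system, action, data_item) records gathered during the sortie
    action_log = [
        ("pilot",   "navigation", "enter",  "waypoint_5"),
        ("copilot", "radio",      "enter",  "waypoint_5"),   # same data re-entered
        ("copilot", "sonar",      "select", "cable_hover"),
        ("pilot",   "mission",    "enter",  "fuel_state"),
    ]

    entries_by_data = defaultdict(set)
    for crew, system, action, data in action_log:
        if action == "enter":
            entries_by_data[data].add(system)

    # A data item entered separately into two or more systems suggests an
    # integration deficiency: the systems are not sharing that information.
    for data, systems in entries_by_data.items():
        if len(systems) > 1:
            print(f"'{data}' entered separately into: {', '.join(sorted(systems))}")

The same kind of tally can be extended to unnecessary control selections or manual transfers of data between systems, giving the test team an objective record to support the judgement that integration is deficient.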