10.3 Monte Carlo Analysis

If you know system engineers, you have heard them talk about Monte Carlo. They do not mean the picturesque city, nestled in the foothills of the Alps on the shores of the azure-blue Mediterranean; they are referring to chance events, events too complex to model in all of their minute details.

We have already encountered some of them in the performance of INS and seekers. The complete error sources of INS sensors are so difficult to model that, for system-level analyses, they are represented by stochastic models like random walk, random bias, and white noise. Similarly, the IR sensor of Sec. 10.2.6 has a host of errors that model the imperfection of the electromechanical apparatus: aimpoint, jitter, bias, quantization, and blur of the focal plane array; rate gyros, gimbal noise, and bias of the mechanical assembly. Other uncertainties that affect the performance of an aerospace vehicle are airframe misalignments, erratic engine performance, and environmental effects of winds, gusts, and nonstandard atmospheric conditions. We already touched on nonstandard atmospheres in Chapter 8. In this section I will show you how to model winds and air turbulence, but first I must explain random events and their characterization.
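To make these three simple error models concrete, here is a minimal sketch, not taken from CADAC, that generates white-noise, random-bias, and random-walk sequences with NumPy; the standard deviations and time step are illustrative placeholders, not values from any particular sensor specification.

import numpy as np

rng = np.random.default_rng(seed=1)
n_steps, dt = 1000, 0.01                   # number of samples and time step [s], placeholders
sigma_wn, sigma_bias, sigma_rw = 0.1, 0.05, 0.02   # placeholder 1-sigma values

# white noise: a new, uncorrelated draw at every sample
white_noise = sigma_wn * rng.standard_normal(n_steps)

# random bias: a single draw at the start of the run, held constant thereafter
random_bias = sigma_bias * rng.standard_normal() * np.ones(n_steps)

# random walk: integrated white noise, growing without bound over the run
random_walk = sigma_rw * np.sqrt(dt) * np.cumsum(rng.standard_normal(n_steps))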

What a throng of possible errors! It is the responsibility of the system engineer to define them mathematically and model them in the vehicle simulation. The stochastic nature of these errors produces random results. If the simulation were linear and the error sources were all Gaussian, the outcome would also be Gaussian. Then the output covariances could be calculated directly from the input covariances, and a single run of a so-called covariance analysis would suffice. However, the world of aerospace vehicles can seldom be fully linearized. Our highly nonlinear five- and six-DoF simulations are witnesses to that fact. Therefore, many computer runs have to be executed, each time with a new draw from the random input. All output is then collected and analyzed.
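The contrast between a one-shot covariance analysis and the many-run Monte Carlo approach can be illustrated with a toy linear system. The matrix, input covariance, and run count below are arbitrary choices, not values from any vehicle simulation; the point is only that the single matrix product and the sample covariance of many runs converge to the same answer.

import numpy as np

rng = np.random.default_rng(seed=2)
A = np.array([[1.0, 0.5],
              [0.0, 1.2]])                 # toy linear "simulation": y = A x
P_x = np.diag([0.04, 0.09])                # covariance of the Gaussian input errors

P_y_exact = A @ P_x @ A.T                  # covariance analysis: one evaluation suffices

n_runs = 1000                              # Monte Carlo: many draws, then statistics
x = rng.multivariate_normal(np.zeros(2), P_x, size=n_runs)
y = x @ A.T
P_y_mc = np.cov(y, rowvar=False)           # approaches P_y_exact as n_runs grows

print(P_y_exact)
print(P_y_mc)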

The more runs you make, the closer to the truth you get; just like the more you frequent the casino of Monte Carlo the closer to bankruptcy you get—after all, the casino has to make a profit for Prince Rainier of Monaco. The Monte Carlo technique has its roots in the Manhattan Project, where it was used to simulate the probabilistic phenomenon of neutron diffusion in fissionable material. Hammersley and Handscomb (H&H)23 give a readable summary of the 1964 status, and Zarchan24 emphasizes guidance applications. As H&H point out, the Monte Carlo method plays an important role in experimental mathematics. It addresses problems in statistical mechanics, nuclear physics, and even genetics, which are otherwise impossible to solve. The specific Monte Carlo technique that applies to our problems is called the direct simulation method. Quoting from H&H, “… direct simulation of a probabilistic problem is the simplest form of the Monte Carlo method. It possesses little theoretical interest and so will not occupy us much in this book, but it remains one of the principal forms of Monte Carlo practice because it arises so often in various operational research problems.” Could it be, because of its lack of mathematical sophistication, that it is so often called the brute force method?

There are three elements that you need to focus on: the validation of the simulation, the input parameters, and the postprocessing of the output. If you read this chapter from the beginning and exercised some of the six-DoF examples, you have a good grasp of a typical simulation. As you build your own model, make sure that the level of detail is tailored to the particular problem. For trajectory studies you should concern yourself with aerodynamics, propulsion, winds, and nonstandard atmospheres; for targeting studies you have to add navigation and guidance uncertainties with realistic stochastic error models. Foremost, however, allow plenty of time and resources to verify your work. Test your simulation under various conditions, as you would test a prototype aircraft. Have other experts review your brainchild, and do not let fatherly pride keep you from accepting corrections.

The input parameters must be accurate, but it is sometimes difficult to pick good values for random initializations and stochastic parameters. Because their statistical distributions assume infinite sample size, you are hard pressed to find sufficient data to support your choices. That deficiency is particularly evident for new concepts that have little test data for backup.

Output data are plentiful when you make Monte Carlo runs. As a rule, the more replications you execute the better the results, but, oh horror!, the more data you have to analyze. Hopefully, your simulation environment has some or most of these chores automated. CADAC Studio provides you with a host of statistical analysis tools, which you can tailor to your needs.

To apply these methods, you have to know the bare essentials of statistics, probability, and random numbers. I assume that you have made their acquaintance so that I can concentrate here on the key elements of the Monte Carlo technique. Books by Gelb,25 Maybeck,26 or Stengel27 can help you overcome potential deficiencies.

The direct simulation technique of the Monte Carlo methodology addresses primarily questions of accuracy. How precisely can an aircraft navigate over water, how close will the space shuttle come to the space station, or where will the missile hit the target? I will introduce you to some of the key concepts in accuracy analysis, like the circular error probable (CEP), error ellipses, and the practice of delivery accuracy investigations. Then, using CADAC-generated diagrams, I will demonstrate with practical examples the usefulness of the Monte Carlo method in establishing the performance of an aerospace vehicle.

10.3.1 Accuracy Analysis

Let us assume the aerospace vehicle design is well established, be it as a concept or as hardware, and a validated simulation is available. You, as system engineer, have to answer questions on performance and accuracy. With your powerful PC at your beck and call, you load the simulation, provide the input data, execute run after run, and then sit there, overwhelmed by the output. You probably have two questions: 1) with the diversity of random input, what is the most likely statistical model of the output? and 2) how many runs are necessary for the output to be statistically significant?

Indeed, you may have a diverse array of input distributions. INS errors usually behave according to Gaussian statistics (normal distribution) and may be correlated in time (called Gauss-Markov processes). Similarly, seeker biases and noise behave mostly according to Gaussian and uniform distributions. More complex are the models of wind gusts that buffet your vehicle. Spectral densities with names like von Karman and Dryden have a long history in aircraft analysis. It gets even more complicated, however, for a terrain-following and obstacle-avoiding cruise missile. The terrain is modeled, unless you have actual data, by a second-order autocorrelation function, driven by white Gaussian noise, and you have to select three parameters to characterize the particular terrain roughness. Obstacles are generated by two stochastic functions: an exponential distribution that determines the distance to the next obstacle and a Rayleigh distribution that randomizes obstacle height. To get more insight, I recommend you scrutinize some of your favorite six-DoF models for their stochastic prowess or look at the CADAC simulations CRUISE5 and SRAAM6.
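As an illustration of the obstacle model just described, here is a minimal sketch, not the CRUISE5 implementation, that draws the distance to the next obstacle from an exponential distribution and the obstacle height from a Rayleigh distribution; the mean spacing and height scale are placeholder values chosen only for the example.

import numpy as np

rng = np.random.default_rng(seed=3)
n_obstacles = 20
mean_spacing_m = 500.0      # mean distance between obstacles [m], placeholder
height_scale_m = 15.0       # Rayleigh scale parameter for obstacle height [m], placeholder

spacing = rng.exponential(mean_spacing_m, n_obstacles)   # distance to the next obstacle
positions = np.cumsum(spacing)                           # downrange obstacle locations
heights = rng.rayleigh(height_scale_m, n_obstacles)      # randomized obstacle heights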

With that many random variables, taken from different types of distributions, modified and filtered by linear and nonlinear dynamics, the question is: what is the most likely distribution of the output parameters? The famous central limit theorem provides us with the answer. It asserts that the sum of n independent random variables has an approximately Gaussian distribution if n is large. This is good news indeed for our sophisticated six-DoF simulations: the more noise sources they contain, the more Gaussian-like their output will be. But how large is large enough? H&H state, “In practical cases, more often than not, n = 10 is a reasonable number, while n = 25 is effectively infinite.” Any respectable six-DoF simulation easily meets this condition. What a relief! We can use the well-established Gaussian statistical techniques to analyze the output.
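You can convince yourself of the theorem numerically. The toy experiment below sums n independent, decidedly non-Gaussian (uniform) error sources and shows the sample skewness staying near zero and the excess kurtosis shrinking toward the Gaussian value of zero as n grows from 2 to 25; it is purely illustrative and tied to no particular simulation.

import numpy as np

rng = np.random.default_rng(seed=4)
for n in (2, 10, 25):
    # sum n independent uniform error sources, 100,000 Monte Carlo trials each
    s = rng.uniform(-1.0, 1.0, size=(100_000, n)).sum(axis=1)
    z = (s - s.mean()) / s.std()
    skewness = (z**3).mean()              # 0 for a Gaussian
    excess_kurtosis = (z**4).mean() - 3.0  # 0 for a Gaussian
    print(n, round(skewness, 3), round(excess_kurtosis, 3))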

With the output being Gaussian distributed, we can also answer the question of how many replications are necessary for a Monte Carlo analysis. The answer is based on the calculation of confidence intervals and their relationship to the standard deviation. Zarchan24 provides a graph that relates the number of sample runs to confidence intervals. For instance, if 50 Monte Carlo runs produced a unit standard deviation estimate, we would have 95% confidence that the actual standard deviation is between 0.85 and 1.28. Increasing the run number to 200 would give us, at the same confidence level, an interval between 0.91 and 1.12. In my Monte Carlo studies I never use fewer than 30 runs (at 95%, 0.80 < σ < 1.43) and increase them to 100 runs (at 95%, 0.89 < σ < 1.18) if accurate estimates are essential.
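If you do not have Zarchan's graph at hand, such intervals can be computed directly, assuming the per-run outputs are Gaussian, from the chi-squared distribution of the sample variance. The sketch below prints the multiplying factors on the estimated standard deviation for several run counts; its numbers are close to, though not necessarily identical with, those quoted above, which are read off a graph.

import numpy as np
from scipy.stats import chi2

def sigma_confidence_factors(n_runs, confidence=0.95):
    """Return (low, high) multipliers on the estimated standard deviation."""
    dof = n_runs - 1
    alpha = 1.0 - confidence
    low = np.sqrt(dof / chi2.ppf(1.0 - alpha / 2.0, dof))
    high = np.sqrt(dof / chi2.ppf(alpha / 2.0, dof))
    return low, high

for n in (30, 50, 100, 200):
    low, high = sigma_confidence_factors(n)
    print(n, round(low, 2), round(high, 2))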

Are you now anxious to analyze your output? We will discuss first the statistical parameters of one-dimensional distributions, like altitude uncertainty or range error, the so-called univariate distributions, and then the two-dimensional distributions, like ground navigation error or miss distance on a surface target, the so-called bivariate distributions.
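Looking ahead to the bivariate case, the following sketch, a stand-in rather than a CADAC utility, estimates the mean impact point, the 1-sigma error-ellipse axes, and the CEP (here taken as the median miss radius about the mean impact point) from a cloud of Monte Carlo impact points; the sample data are synthetic.

import numpy as np

rng = np.random.default_rng(seed=5)
# synthetic impact points [m] in the target plane: (downrange, crossrange)
hits = rng.multivariate_normal([2.0, -1.0], [[9.0, 3.0], [3.0, 4.0]], size=100)

mean_impact = hits.mean(axis=0)               # bias of the impact pattern
P = np.cov(hits, rowvar=False)                # 2x2 sample covariance
eigvals, eigvecs = np.linalg.eigh(P)          # principal axes of the error ellipse
semi_axes = np.sqrt(eigvals)                  # 1-sigma semi-axes

radii = np.linalg.norm(hits - mean_impact, axis=1)
cep = np.median(radii)                        # radius of the circle containing 50% of the hits
print(mean_impact, semi_axes, cep)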