9.2.5.4 EO sensors

The technology leaps in focal plane arrays have made the EO sensor an excellent candidate for missile seekers. Detector costs have plummeted, whereas array sizes have increased. In military applications the infrared (IR) spectrum is preferred because it opens the envelope to adverse weather and night operations. We are particularly interested in the 8.5- to 12.5-μm wave band, where mercury-cadmium-telluride detectors operate at temperatures of 70 to 80 K. Besides passive sensors, which receive the thermal energy from emitting targets or reflected natural energy, there are also active sensors under development that emit and receive IR energy in radar fashion. They combine a laser emitter with radar processing techniques and are therefore called ladars. Modern CO2-based ladars operate at the 10.6-μm wavelength, a region of the spectrum where atmospheric attenuation is at a relative minimum.

We will concentrate here on the modeling of passive IR sensors, either used as hot-spot trackers or as imaging seekers. As in radar, our ambition is not in the detailed modeling of the processing algorithms—I leave this to the experts— but our interest is in a top-level representation of the errors that corrupt the LOS between the sensor and the target. The active ladar sensor, on the other hand, can be treated like a radar, and you can refer back to the preceding section for details.

Some of the important error sources of passive IR sensors are atmospheric attenuation (water vapor, mist, fog, rain, clouds), ground clutter, processing delays, and, of course, countermeasures. In addition, we have to model the dynamic errors like spectral target scintillation, radome diffraction, gimbal friction, cross coupling, and rate gyro errors. The dynamic errors were addressed in the section on dynamic seekers. In the following we look at the physical properties of the passive IR sensor and how they affect the acquisition and tracking performance.

IR sensors measure the heat energy and calculate the temperature gradients to produce a TV-like image at night as well as during the day. For a given sensor the acquisition range is a function of the radiation intensity of the target J_t, the S/N, the number of detectors n, and the dwell time calculated from the frame time T_f over the search area Ω (in steradian)

\[ R_{acq} = \left[ \frac{K J_t}{S/N} \sqrt{\frac{n T_f}{\Omega}} \,\right]^{1/2} \tag{9.91} \]

Notice the similarities with the radar equation (9.84). The radar cross section has been replaced by the radiation intensity of the target J_t, and the scan and frame time are synonymous. However, the detection range is inversely proportional to the square root of S/N, whereas the fourth root applies to radars. The difference is based on the fact that the emitted energy has to travel the distance twice for radars but only once for passive IR sensors. K represents the sensor-specific constant that contains such terms as aperture, focal length, detector detectivity, and losses. Equation (9.91) is valid for point sources against a clear background and without atmospheric attenuation. It describes the acquisition performance of a hot-spot sensor under ideal conditions quite well.
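The ideal-conditions acquisition range of Eq. (9.91) is straightforward to code. The sketch below is a minimal Python helper; the argument names and the grouping of terms are illustrative assumptions, but the scaling follows the discussion above: range grows with the target intensity and dwell time and shrinks with the square root of the threshold S/N.

```python
import math

def acquisition_range(K, J_t, n, T_f, omega, snr):
    """Acquisition range of a passive IR hot-spot sensor, Eq. (9.91).

    K     -- sensor-specific constant (aperture, focal length,
             detector detectivity, losses)
    J_t   -- radiation intensity of the target
    n     -- number of detectors
    T_f   -- frame time over the search area
    omega -- search area in steradian
    snr   -- threshold signal-to-noise ratio (natural units, not dB)
    """
    # Range is inversely proportional to the square root of S/N
    # (one-way path), unlike the fourth root of the radar equation.
    return math.sqrt(K * J_t * math.sqrt(n * T_f / omega) / snr)
```

Note the contrast with radar: quadrupling the threshold S/N here halves the range, whereas for a radar it would shrink the range only by the fourth root of four.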

If the target is embedded in a background with variable spectral radiation emittance, like a vehicle traveling over land, the noise level of the system is increased, and the acquisition range is decreased likewise. This deteriorating effect depends on many variables, e.g., terrain type, sun angle, and seasonal changes. For simple simulations we just increase the threshold S/N by the background conditions (S/N)_c.

Atmospheric attenuation is expressed as a loss per kilometer in decibels. It is a function of temperature, visibility, and humidity, as well as the spectral band of the sensor, and is formulated as an incremental signal-to-noise ratio Δ(S/N)_a. The threshold S/N is then

\[ S/N = (S/N)_s + (S/N)_c + \Delta(S/N)_a R \tag{9.92} \]

This equation is similar to Eq. (9.90) but warrants further explanation. The threshold S/N establishes the acquisition range through Eq. (9.91); i.e., as the missile approaches the target, the signal strength in the detector increases to a level at which the S/N for target detection is reached. Without ground clutter and atmospheric attenuation the sensor specifications require, for target detection to occur, that the signal be above the system noise by a certain factor. This is expressed by the sensor term (S/N)_s. Ground clutter raises this factor and is additive because we use logarithmic units. Furthermore, the atmospheric attenuation increases this factor even more; however, it is not constant but is a function of acquisition range.

To implement Eq. (9.92) in your simulation, keep a running account of this threshold S/N and calculate the acquisition range from Eq. (9.91) (do not forget to convert from decibels to natural units: x = 10^{dB/10}). As the missile approaches the target, its LOS range becomes equal to the acquisition range and target acquisition occurs.
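Because the atmospheric term of Eq. (9.92) grows with range while a higher threshold S/N shrinks the range of Eq. (9.91), the two equations must be solved together. One simple way is a fixed-point iteration, sketched below; the parameterization and function names are illustrative assumptions, not the author's implementation.

```python
import math

def db_to_natural(db):
    """Convert decibels to natural units: x = 10^(dB/10)."""
    return 10.0 ** (db / 10.0)

def solve_acquisition_range(K, J_t, n, T_f, omega,
                            snr_s_db, snr_c_db, d_snr_a_db_per_km=0.0,
                            r0_km=10.0, iterations=50):
    """Solve Eqs. (9.91) and (9.92) jointly for the acquisition range.

    The threshold S/N (in dB) is the sensor term plus the clutter term
    plus the atmospheric increment times the current range estimate;
    the range from Eq. (9.91) is then fed back until it settles.
    """
    r = r0_km
    for _ in range(iterations):
        snr_db = snr_s_db + snr_c_db + d_snr_a_db_per_km * r
        r = math.sqrt(K * J_t * math.sqrt(n * T_f / omega)
                      / db_to_natural(snr_db))
    return r
```

With the atmospheric increment set to zero the iteration reduces to the closed-form range; with attenuation switched on, the converged range is shorter, as expected.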

Once the seeker starts to track the target, the uncertainties are dominated by dynamic errors and not signal processing phenomena. Just consider that the beam width of an IR sensor in the 10-μm wave band and with an aperture of 10 cm is 0.1 mr, small enough to be overwhelmed by dynamic errors.

So far, we have limited our discussion to targets that are essentially point emitters; far-removed targets and objects with a strong radiating heat source fall into this category. Vintage IR seekers, like those of the Stinger and Sidewinder missiles, can only track such point sources. One of their drawbacks is that they are very susceptible to flare countermeasures. With the introduction of IR focal plane arrays, it has become feasible to image the target and to correlate the image with stored templates. If a match is found, the sensor locks on to the target and guides the missile to intercept. Sophisticated processing not only acquires the target but also classifies it and selects a particularly vulnerable aimpoint. Turn with me now to a top-level discussion of these imaging seekers.

The image of such a seeker is either produced by a line scanner or a staring array. In both cases we consider the number of pixels on target: the more pixels, the higher the resolution of the target. Processing the temperature gradients from the pixels forms the image.

As the missile approaches the target and the threshold S/N is exceeded, the seeker starts to image the area where the target is expected to be located. The processor compares the temperature gradients with a prestored template of the target. When a match is found, the difference between the predicted and actual target location is used to improve the navigation solution. This imaging/update cycle repeats until the target fills the array completely.

Modeling of the acquisition phase consists of two parts. First, we calculate the threshold S/N from Eq. (9.92) and the associated acquisition range, Eq. (9.91). This procedure represents a deterministic approach. An alternate stochastic model is based on curves of the probability of acquisition vs range-to-target with the target size and the atmospheric conditions as parameters. These curves, calculated or measured, approximate parabolas with vertices at the probability of one and decrease with range. With p, the parabola parameter, the probability of acquisition is

\[ P_{acq} = 1 - \frac{R^2}{4p} \tag{9.93} \]

Developing the tables of p = f{target size, atmospheric conditions} can involve time-consuming tests and calculations. So, be forewarned! As a simplified model, I have used a linear curve fit of p as a function of target size at fixed weather conditions. To determine the occurrence of the in-range event of a particular computer run, draw a number from a uniform distribution. If P_acq is greater than that number, the seeker starts imaging the scene.
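The stochastic acquisition model above can be sketched in a few lines of Python. The parabola is clipped at zero so that ranges beyond the vertex span never acquire; the function names and the injectable random draw are illustrative assumptions for testability.

```python
import random

def p_acq(range_to_target, p):
    """Probability of acquisition, Eq. (9.93): a downward-opening
    parabola with its vertex at probability one at zero range.
    p is the parabola parameter taken from the tables of target
    size and atmospheric conditions; clipped at zero."""
    return max(0.0, 1.0 - range_to_target ** 2 / (4.0 * p))

def acquisition_event(range_to_target, p, rng=random.random):
    """In-range event for one Monte Carlo run: the seeker starts
    imaging the scene if P_acq exceeds a uniform draw."""
    return p_acq(range_to_target, p) > rng()
```

In a Monte Carlo campaign each run makes one draw per evaluation, so short-range approaches acquire in nearly every run while ranges near the parabola's zero crossing acquire only occasionally.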

To ensure that the target is contained in the scene, the pixels must cover an area large enough to account for the pointing uncertainty of the sensor's centerline. This uncertainty is primarily determined by the midcourse navigation accuracy. With the INS position error given by its standard deviation σ_INS and the targeting error by σ_Tar, the pointing error is (in units of length)

\[ \sigma_p = \sqrt{\sigma_{INS}^2 + \sigma_{Tar}^2} \tag{9.94} \]

The second effect to be modeled is the target acquisition time, consisting of the template imaging and matching process. Before launch the three-dimensional target template is stored onboard the missile processor. It consists of high-contrast facets in the form of a wire-frame model. Once the sensor is within acquisition range, the three-dimensional template is readied for correlation by projecting it into the plane normal to the LOS. The pixels of the focal plane must cover this two-dimensional picture and the uncertainty area surrounding it. The time to image and process the data is directly proportional to the number of pixels so engaged.

Each pixel has an instantaneous field of view ε_i, given in radians. A typical value is 0.75 mr. We calculate the number N_a of pixels involved in the search process by covering three standard deviations or 99.7% of the pointing error (see Fig. 9.38):

\[ N_a = \left( \frac{6\sigma_p}{\epsilon_i R} \right)^2 \tag{9.95} \]


If we designate each pixel's imaging time as Δt_i and its processing time as Δt_p, then the duration of the acquisition T_a is

\[ T_a = N_a(\Delta t_i + \Delta t_p) \tag{9.96} \]
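Combining the pointing error, the pixel count, and the per-pixel times gives a compact acquisition-time model. The sketch below assumes the search footprint is a square spanning ±3σ of the pointing error, with each pixel subtending ε_i at range R; the helper names are illustrative.

```python
import math

def pointing_error(sigma_ins, sigma_tar):
    """RSS of INS and targeting errors, Eq. (9.94), in units of length."""
    return math.sqrt(sigma_ins ** 2 + sigma_tar ** 2)

def acquisition_time(sigma_p, eps_i, r, dt_image, dt_process):
    """Pixels engaged in the search and the acquisition duration T_a,
    Eq. (9.96).

    sigma_p    -- pointing error (length units)
    eps_i      -- instantaneous field of view of one pixel (rad)
    r          -- range to target (same length units as sigma_p)
    dt_image   -- imaging time per pixel
    dt_process -- processing time per pixel
    """
    # Square footprint covering +/-3 sigma in each dimension; each
    # pixel's linear footprint at range r is eps_i * r.
    pixels_per_side = math.ceil(6.0 * sigma_p / (eps_i * r))
    n_a = pixels_per_side ** 2
    return n_a, n_a * (dt_image + dt_process)
```

Note how the pixel count grows quadratically as the missile closes in (the pixel footprint ε_i R shrinks), so a sloppy midcourse navigation solution is punished twice: a larger σ_p and a longer T_a.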

In your simulation, tracking of the target should begin when the missile has entered the acquisition range and the acquisition time T_a has elapsed. At this instant the first navigation update is sent to the INS, and both the target location and INS navigation errors are reduced to the sensor's uncertainties.

After the first update the error basket has been reduced significantly, particularly by the elimination of the targeting error. Before acquisition the navigation solution was carried out in an absolute frame of reference. After acquisition the missile guides relative to the target, thus making the absolute targeting error irrelevant.

During tracking, the size and dynamics of the target determine the number of pixels engaged in the imaging and correlation process. For a stationary target we take three times the linear size of the target l_t. The number of active pixels is then

\[ N_t = \left( \frac{3 l_t}{\epsilon_i R} \right)^2 \]

and the duration of imaging and processing is

\[ T_t = N_t(\Delta t_i + \Delta t_p) \]

T_t is significantly smaller than T_a, and, therefore, the update interval during tracking is shorter than the acquisition time. Furthermore, most imaging seekers take advantage of the fact that imaging of the next frame can occur during processing of the preceding image. Because imaging is faster than processing, the update rate is determined by the processing of the pixels only. A 20-Hz update rate is the current state of the art. For a maneuvering target all pixels may be required to keep the target in the field of view. Then T_t may not be much smaller than T_a.
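The tracking-rate bookkeeping can be sketched as follows. The pipelining argument above means the slower of the two per-pixel steps (normally processing) alone sets the update period; the function names are illustrative assumptions.

```python
import math

def tracking_pixels(l_t, eps_i, r):
    """Active pixels during track of a stationary target: the
    footprint spans three times the linear target size l_t, with
    each pixel subtending eps_i (rad) at range r."""
    return math.ceil(3.0 * l_t / (eps_i * r)) ** 2

def update_period(n_pixels, dt_image, dt_process, pipelined=True):
    """Update period during track. A pipelined seeker images the
    next frame while processing the preceding one, so only the
    slower step (normally processing) sets the rate; without
    pipelining the two times add."""
    if pipelined:
        return n_pixels * max(dt_image, dt_process)
    return n_pixels * (dt_image + dt_process)
```

For a maneuvering target you would feed the full array size into `update_period` instead of `tracking_pixels`, which is why the tracking interval then approaches the acquisition time.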

The tracking accuracy of imaging seekers is not determined by the beam width of the pixels, but by the template matching process. During mission planning, photography is used to build a three-dimensional wire-frame model of the target. If the aspect angles and the range at which the picture was taken are known imprecisely, an error will creep into the tracking performance. Moreover, during target tracking the aspect angles and the range are corrupted by the INS errors. Both phenomena, prelaunch and in-flight distortions, are the primary contributors to the tracking errors.

The sensor measures the azimuth and elevation angles of the LOS to the target aimpoint. These angles are taken relative to the missile body. For gimbaled seekers they are the gimbal angles. The measurements are corrupted by the correlation process, consisting of the mission planning and tracking errors and the dynamic errors of the gimbals. For a well-designed and fabricated seeker the dominant errors are not caused by the gimbals but by the template matching process.

We model the mission planning and tracking angular distortions by ε_m and ε_t, respectively, and the range errors as ΔR_m and ΔR_t. The measurement errors in the azimuth and elevation planes can then be formulated as

\[
\begin{aligned}
\epsilon_{az} &= K_{\epsilon,az}(\epsilon_m + \epsilon_t) + K_{R,az}(\Delta R_m + \Delta R_t) \\
\epsilon_{el} &= K_{\epsilon,el}(\epsilon_m + \epsilon_t) + K_{R,el}(\Delta R_m + \Delta R_t)
\end{aligned}
\]

where the K are constants for a particular target, obtained from extensive testing and analysis. In your simulation you can keep the values of ε_m and ΔR_m fixed, whereas ε_t and ΔR_t are provided directly by the INS error model. If you execute Monte Carlo runs, you could interpret the values of ε_m and ΔR_m as standard deviations of a random Gaussian draw.
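The measurement error model is a direct transcription of the two equations above. In the sketch below the argument order and the Monte Carlo helper are illustrative assumptions; the K constants would come from the testing and analysis described in the text.

```python
import random

def los_measurement_errors(k_eps_az, k_r_az, k_eps_el, k_r_el,
                           eps_m, d_r_m, eps_t, d_r_t):
    """Azimuth and elevation LOS measurement errors built from
    mission-planning distortions (eps_m, d_r_m) and in-flight
    tracking distortions (eps_t, d_r_t); the K constants are
    target specific."""
    eps_az = k_eps_az * (eps_m + eps_t) + k_r_az * (d_r_m + d_r_t)
    eps_el = k_eps_el * (eps_m + eps_t) + k_r_el * (d_r_m + d_r_t)
    return eps_az, eps_el

def draw_planning_errors(sigma_eps_m, sigma_r_m, rng=random):
    """Monte Carlo option: interpret the mission-planning values
    as standard deviations of zero-mean Gaussian draws."""
    return rng.gauss(0.0, sigma_eps_m), rng.gauss(0.0, sigma_r_m)
```

In a deterministic run you would call `los_measurement_errors` with fixed ε_m and ΔR_m, while ε_t and ΔR_t come from the INS error model at each update; in a Monte Carlo campaign the planning errors are redrawn once per run.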

I have led you from simple kinematic seeker formulations to fairly complex imaging sensors and discussed both radar and IR implementations. As long as you pursue top-level system simulations, you should have enough information to model the seeker for your particular application; by the way, you can also include these seeker models in your six-DoF simulations. However, I caution you: if you embark on building a specific simulation for the development of a seeker, you must consult the seeker specialist and learn the finer points of seeker modeling.