CHAPTER 1:      INTRODUCTION & BRIEF HISTORY

 

 

  S1.C1.1        INTRODUCTION

RADAR is an acronym for RAdio Detection And Ranging. Radars operate in the microwave portion of the electromagnetic spectrum, beyond the visible and thermal infrared regions. Imaging radars are generally considered to include wavelengths from 1 mm to 1 m. Longer wavelengths (lower frequencies) are generally devoted to communication and navigation purposes, among other uses. Operating in the microwave region of the electromagnetic spectrum offers good signal penetration (reduced attenuation), especially through the atmosphere. The relationship of the microwave region to the shorter wavelengths used by remote sensing systems operating in the thermal, mid, and near infrared regions, as well as the visible and ultraviolet, can be seen in Figure S1.C1.D1. Imaging radars are not affected by cloud cover or haze as optical sensors are, and operate largely independently of weather conditions. Water clouds have a significant effect only on radars operating below 2 cm in wavelength; the effects of rain are relatively inconsequential at wavelengths above 4 cm.

Imaging radars operate at a specific wavelength or frequency. The counterpart in what is often referred to as the optical or multi-spectral region might be the bands or channels at which electromagnetic energy is recorded. However, whereas these bands cover a range of wavelengths (e.g. 0.4 - 0.5 micrometers), a radar system records the signal response from the ground or target at a single, specific wavelength. As a further comparison, the visible part of the electromagnetic spectrum can be said to include the red, green and blue spectral regions. Similarly, the active microwave region includes the X, C, L and K bands, among others, which refer to specific segments of the microwave portion of the electromagnetic spectrum. To illustrate, an "X" band system is a radar that operates at a single wavelength within this band (e.g. 3.2 cm). This alphabetic designation of radar wavelength regions is now relatively standardized, although it was established by the military in the early days of radar research for security reasons.

S1.C1.D1

         

Radar is an active sensor: it transmits a signal of electromagnetic energy, illuminates the terrain, and records or measures the response returned from the target or surface. Thus, the term "active microwave" is often synonymous with radar. As an active sensor, radar is independent of the sun and solar illumination conditions and can operate day or night. Radar has a variety of characteristics valuable to geoscientists. Perhaps the two most notable are its weather independence and 24-hour operation. Radar also provides a unique perspective of the landscape and many unique opportunities for quantitative terrain analysis.

 

S1.C1.2          BRIEF HISTORY

Imaging radar can be considered a relatively new remote sensing system in comparison to aerial photography. Civilian applications in the geosciences have only been widely investigated since the 1960s. However, the history of radar exploration and of the reflection of radio waves from objects predates this time by many years. The first experiments using radio waves close to microwave frequencies were carried out in the late nineteenth century by Heinrich Hertz, who showed that reflections could be received from both metallic and nonmetallic objects. The first patent for using radar as a ship detector was obtained by Huelsmeyer in 1904. The initial research and development of radar took place almost simultaneously in Germany, the USA and Great Britain. The detection of ships and aircraft and the use of radar as a navigation aid received serious attention in the USA and Great Britain in the 1920s and 1930s.

These early radar systems transmitted from the ground to the air and, in some instances, from airborne platforms to the ground. During this period, radar investigators noticed noise or ground clutter when monitoring hard targets such as ships and planes and attempted to eliminate it. Fortunately, it was soon recognized that this noise was in fact a crude image of the terrain. Radar systems used in most geoscience remote sensing applications today produce images of some kind. The product generated by these radars for the geosciences usually takes the form of continuous strip imagery or digital data. A scan in the direction perpendicular to the platform flight line, or track, is obtained by a round-trip pulse delay measurement, and the scan along the flight line is obtained by synchronizing the motion of the recording device with the signal return. The original imaging radars were built with mechanically rotated antennas producing a PPI, or circular, display.

The original PPI airborne radars were developed to aid in the navigation of aircraft. The idea was to present a picture to a pilot, or in some cases a bombardier, that allowed him to navigate to a known spot on the earth by providing a continuously updated "map" of the ground. Because the pilot could do most of his navigation by noting sharp boundaries between strongly contrasting surfaces such as water and land, or by locating large targets such as major buildings and cities, little attempt was made to achieve a good gray-scale rendition on the early airborne radars. Instead of looking at the horizon or skywards, the scanning antennas on these radars were directed downward towards the landscape. Radar return from the ground, often termed "clutter" and considered "noise" when operating from a ground-based position, became "signal" when operating from an aircraft.

The along-track, or azimuth, resolution of "brute force" or real aperture radar (RAR) systems is a function of the ratio of the wavelength to the aperture (antenna) size: the smaller the ratio, the better the resolution. Thus, decreasing the wavelength or increasing the antenna length provides better spatial resolution in the azimuth direction. The development of high-power microwave transmitters enabled the propagation of much shorter wavelengths, permitting the use of shorter antennas to achieve adequate resolution. The smaller antennas made airborne radar practical. After the war, civilian investigators began to investigate imaging radar systems more thoroughly for geoscience applications.
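This relation can be made concrete with a short numerical sketch. The Python snippet below (all numbers are illustrative assumptions, not taken from any particular system) evaluates the beam-limited azimuth resolution of a real aperture radar, showing how it degrades with range and improves with shorter wavelengths or longer antennas.

```python
# Real-aperture (RAR) azimuth resolution sketch: the azimuth beamwidth is
# roughly wavelength / antenna_length, so the ground footprint (and hence the
# resolution) grows linearly with range.  Numbers below are illustrative only.
def rar_azimuth_resolution(wavelength_m, antenna_length_m, slant_range_m):
    beamwidth_rad = wavelength_m / antenna_length_m   # approximate beamwidth
    return beamwidth_rad * slant_range_m              # azimuth footprint on the ground

for wavelength_m in (0.032, 0.23):                    # ~X-band (3.2 cm) vs ~L-band (23 cm)
    res = rar_azimuth_resolution(wavelength_m, antenna_length_m=2.0, slant_range_m=10_000.0)
    print(f"{wavelength_m * 100:.1f} cm wavelength, 2 m antenna, 10 km range "
          f"-> azimuth resolution ~{res:.0f} m")
```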

Various aspects of the progress in radar technology occurred in parallel because two types of radar were evolving: Side-Looking Airborne Radar (SLAR) and Synthetic Aperture Radar (SAR). SLAR systems looked off to the side of the aircraft and collected continuous strip imagery from a fixed antenna carried along by the moving platform (the aircraft). Although the concept of SLAR systems had been understood since the 1940s, it remained only a concept until the development of new techniques and more sophisticated components in the 1950s. During this period, Goodyear Corporation and Ohio State University, among others, conducted research into the electromagnetic reflection properties of natural surfaces and measurements of terrain backscattering for both static and airborne radars. Probably the greatest catalyst for introducing earth scientists to the research fields of radar and remote sensing was the First Symposium on Remote Sensing of Environment held at the University of Michigan in 1962.

The development of synthetic aperture radar, in which an artificially long antenna is synthesized from the signal history collected by a small antenna transported along the flight path, permits fine resolution in the azimuth direction using longer radar wavelengths. When azimuth resolution is defined by the beamwidth of the antenna, as it was in early real aperture radar systems, the resolution degrades in proportion to the radar range, and the antenna length becomes unmanageable for fine resolutions, long ranges and long wavelengths. Recording the amplitudes and phases of the radar returns and synthesizing from the recorded signal history an antenna whose length increases with range (i.e. SAR) enables fine azimuth resolution to be obtained at all ranges and at most radar wavelengths with modest-sized antennas. Crucial to SAR development, which took place under military security restrictions, was the ability to finely resolve Doppler frequencies via a frequency analysis of the reflected signal. In 1978, the launch of the SEASAT satellite greatly increased interest in imaging radars. Advancing technologies provided the means of collecting radar data in digital rather than analog format. This interest and active research by geoscientists was further augmented in the late 1970s and 1980s by imagery from NASA's Shuttle Imaging Radar (SIR-A and SIR-B) systems, along with several systematic airborne SAR projects.
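As a companion to the previous sketch, the snippet below (again with purely illustrative numbers) shows the classic stripmap SAR result: the usable synthetic aperture grows with range, so the achievable azimuth resolution settles at roughly half the physical antenna length, independent of range.

```python
# Synthetic-aperture sketch: the synthetic aperture length grows with range
# (L_syn ~ wavelength * R / D), and the resulting azimuth resolution
# (~ wavelength * R / (2 * L_syn) = D / 2) no longer depends on range.
def synthetic_aperture_length(wavelength_m, antenna_length_m, slant_range_m):
    return wavelength_m * slant_range_m / antenna_length_m

def sar_azimuth_resolution(antenna_length_m):
    return antenna_length_m / 2.0                         # classic stripmap result

for slant_range_m in (10_000.0, 100_000.0, 800_000.0):    # aircraft to satellite-like ranges
    l_syn = synthetic_aperture_length(0.032, 2.0, slant_range_m)
    print(f"range {slant_range_m / 1000:.0f} km: synthetic aperture ~{l_syn:.0f} m, "
          f"azimuth resolution ~{sar_azimuth_resolution(2.0):.1f} m")
```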

The 1990s bring to the fore not only past and present airborne SAR research efforts but also several satellite systems (e.g. ERS, JERS, RADARSAT, ALMAZ, and the Space Shuttle) and the promise of a continuous stream of readily available digital radar data comparable to that of the LANDSAT and SPOT series of multi-spectral imagers. The trend is towards multi-dimensional (multi-frequency, multi-polarization, multi-temporal, multiple incidence and look angle) digital radar data and the generation of "true" color radar imagery rather than "colorized" single-channel radar imagery. These developments in SAR imaging and remote sensing have opened new horizons for GIS (Geographical Information Systems).


CHAPTER 2:      SAR REMOTE SENSING

 

 

S1.C2.1          ADVANTAGES

There are certain advantages of active microwave remote sensing using SAR in comparison with conventional optical remote sensing:

·                     Radar is independent of cloud cover, weather and sunlight conditions.

·                     Penetrates haze, fog, clouds, rain, snowfall, smoke, and glare from sunlight reflections.

·                     As an active system, SAR provides its own illumination and is not dependent on light from the sun, thus permitting continuous day/night operation. Furthermore, clouds, fog and precipitation have no significant effect on microwaves, permitting all-weather imaging. The net result is an instrument capable of continuously observing dynamic phenomena such as ocean currents, sea ice motion, or changing patterns of vegetation.

·                     Some radar frequencies penetrate terrain surfaces and detect sub-surface features; they give access to information different from that detected in the visible and infrared regions.

·                     Sensitive to moisture and to the electrical (dielectric) properties of the surface.

·                     Enhances topographic features.

·                     Suitable for most land and sea applications, and a unique remote sensor for some special applications.

·                     The parameters of the microwave radiation are controllable and can be selected according to the objective of the investigation.

·                     For high resolution SAR, spatial resolution is independent of flight altitude.

 

S1.C2.2          APPLICATIONS

Following are a few of the application areas of SAR remote sensing:

·                     Defense & Military.

·                     Archeology and Anthropology.

·                     Cartography

·                     Geology: surveys, mineral resources.

·                     Land Use: urban land, agricultural land, soil survey, crop health, soil moisture, yield prediction, wildlife, forest inventory.

·                     Civil Engineering: site studies, water resources, transport facilities

·                     Water Resources: surface water, supply, pollution, underground water, snow & ice mapping.

·                     Coastal Studies: erosion, accretion, bathymetry, sewage, thermal and chemical pollution monitoring.

·                     Oceanography: surface temperature, bottom topography, winds, waves and current circulation, mapping of sea ice, oil pollution monitoring, sea levels and sea conditions.

·                     Meteorology: weather system tracking, weather forecasting, sounding for atmospheric profiles, cloud classification.

·                     Climatology: atmospheric monitoring, prediction of variations in the seasonal cycle, desertification.

·                     Natural disasters: floods, earthquakes, volcanoes, forest fires, subsurface coal fires, and landslides.

·                     Planetary Studies and Astronomy.

 

S1.C2.3          AIRCRAFT VS SATELLITE

Remote sensing of the earth from aircraft and from satellites is established in a number of areas of environmental science, and there are a number of applications where SAR remote sensing is particularly useful. Various considerations have to be taken into account when deciding between aircraft and satellite data. The fact that an aircraft flies so much lower than a satellite means that one can see more detail on the ground than one can from a satellite. However, although a satellite sees less detail, it may be more suitable for many purposes. A satellite has the advantage of regularity of coverage and a scale of coverage (in terms of area on the ground) that could never be achieved from an aircraft. On the other hand, the frequency of coverage of a given site by a satellite-flown instrument may be too low for some applications; for a small area, a light aircraft can be used to obtain a large number of images more frequently.

There are a number of factors to be considered in deciding whether to use aircraft or satellite data; these include:

·                     The extent of the area to be covered.

·                     The speed of the phenomenon to be observed.

·                     The detailed performance of the instruments available for flying on the aircraft or satellite.

·                     Availability and cost of data.

The last point in the list, which concerns the cost to the user, may seem a little surprising. Clearly it is more expensive to build a satellite platform and its sensor systems, to launch it, to control it in its orbit and to recover the data than it would be to buy and operate a light aircraft with a good sensor system. However, because these costs are shared among a very large number of users and images, the cost per scene to an individual user may nevertheless be lower than that of commissioning an aircraft survey.

The influence of the extent of the area to be studied on the choice of aircraft or satellite as a source of remote sensing data is closely related to the question of spatial resolution. Loosely speaking, we can think of the spatial resolution as the size of the smallest object that can be seen in a remote sensing image. Satellites are flown several hundred kilometers above the surface of the earth, whereas aircraft may fly very low, possibly only a few hundred meters above the surface. The fact that the aircraft is able to fly so low means that, with a given sensor, far more detail of the ground can be seen from the aircraft than could be seen by the same sensor on a satellite. However, there are many purposes for which the lower resolution available from a satellite is perfectly adequate and, compared to an aircraft, a satellite has several advantages. For instance, once launched into orbit a satellite simply continues in that orbit without consuming fuel for propulsion, since air resistance is negligible at the altitudes concerned. Occasional adjustments to the orbit may be made by remote command from the ground; these adjustments consume only a very small amount of fuel. The electrical energy needed to drive the instruments and transmitter on board the satellite is derived from large solar panels.

 

S1.C2.4          ASAR MODES

Figure S1.C2.D1 illustrates the three common airborne SAR imaging modes: spotlight, stripmap, and scan. During a spotlight-mode data collection, the sensor steers its antenna beam to continuously illuminate the terrain patch being imaged. In the stripmap mode, the antenna pointing is fixed relative to the flight line. The result is a moving antenna footprint that sweeps along a strip of terrain parallel to the path of motion. The stripmap mode involves either a broadside imaging geometry, which the figure illustrates, or a squinted imaging geometry; in the squinted stripmap mode, the antenna is pointed forward of or behind the normal to the flight line. In the scan mode, the sensor steers the antenna beam to illuminate a strip of terrain at any angle to the path of motion. The scan mode is a versatile operating mode that encompasses both the spotlight and stripmap modes as special cases.

Because the scan mode involves additional operational and processing complexity, spotlight and stripmap are the most common SAR modes. The spotlight mode is a practical choice when the mission objective is to collect fine-resolution data from one or more localized areas. The stripmap mode is more effective for coarse-resolution mapping of large regions.
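A rough back-of-the-envelope sketch of the trade-off between the two common modes is given below. The geometry and timing values (wavelength, antenna length, range, speed, and the spotlight dwell factor) are assumptions chosen only to illustrate that spotlight steering buys a longer dwell, and hence a longer synthetic aperture, over a fixed patch.

```python
# Stripmap vs. spotlight dwell-time sketch (illustrative numbers only).
wavelength_m = 0.032      # ~X-band
antenna_len_m = 2.0
slant_range_m = 15_000.0
speed_m_s = 100.0         # platform speed

# In stripmap mode a point stays in the beam only while the footprint passes over it.
footprint_m = wavelength_m / antenna_len_m * slant_range_m
stripmap_dwell_s = footprint_m / speed_m_s
print(f"stripmap: footprint ~{footprint_m:.0f} m, dwell ~{stripmap_dwell_s:.1f} s")

# In spotlight mode the beam is steered to hold the patch, say, three times longer,
# giving roughly three times the synthetic aperture and finer azimuth resolution,
# at the cost of imaging only that one patch instead of a continuous strip.
spotlight_dwell_s = 3.0 * stripmap_dwell_s
print(f"spotlight (assumed 3x steering): dwell ~{spotlight_dwell_s:.1f} s")
```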

A fourth operating mode, called inverse SAR (ISAR), produces radar signal data similar to that of spotlight-mode SAR. However, the ISAR mode differs in that data collection is accomplished with the radar stationary and the target moving. The signals are similar because it is the relative position and motion between the sensor and the scene being imaged that matter. Since the signals are similar, the processing required to produce an image is also similar.

S1.C2.D1
CHAPTER 3:      ASAR SYSTEMS

 

 

The various types of remote sensing radars are differentiated only by the design emphasis on one or more parameters, guided by the intended use of the system. Military radars usually are optimized to respond to point objects (hard targets). Such systems are characterized by fine resolution, frequency agility, moving-target capability and multiple-target dynamic tracking. In contrast, the primary objectives of most remote sensing radars are mapping and reflectivity estimation of relatively large areas. This leads to systems characterized by moderate resolution, post-detection averaging to improve image quality, and constraints on the calibration and stability of the final image products. The specifications of representative ASAR systems can be found in Appendix A.

The radars used in ASAR (Airborne Synthetic Aperture Radar) imaging systems have fan-shaped antenna beams: relatively wide patterns in range, to illuminate a respectable swath width, and rather narrow patterns in azimuth, which allows an image to be accumulated line by line. In its simplest form, an ASAR system for remote sensing can be viewed as shown in Figure S1.C3.D1. Most remote sensing radars are not this simple, since they use extensive signal processing. Signal-processing radars usually exploit the relative phases of a large number of individual single-pulse responses.

The hardware of most radars must perform similar functions. In contrast, there are fundamental differences between the processors needed to support the different kinds of radar systems. Simple radars use only detection and video display in their signal processing. For more sophisticated radars, extensive azimuth signal processing is done, for which coherence is necessary. Signal processing also includes detection and post-detection filter operations, such as data averaging.

Depending on the position of the radar platform and the method of echo data collection, an ASAR system has various modes of operation. In this chapter the entire ASAR system is viewed, based on these operating modes, as a composition of two logically distinct subsystems: the radar hardware and the signal processor.

           

S1.C3.D1

 

S1.C3.1          RADAR HARDWARE

A block diagram of the hardware needed by a typical remote sensing radar is shown in Figure S1.C3.D2. The radar generates a pulse for each transmission that is radiated by an antenna and propagates to the scene. A fraction of the incident field is reflected back towards the radar where it is gathered by the receiving antenna. The received signal, being weak, is vulnerable to additive noise, which accompanies the signal through the rest of the system. Both signal and noise are amplified in the receiver, and demodulated from the carrier frequency to a low-pass or video frequency. For most radars of interest in remote sensing, and for a SAR in particular, these video signals are written into memory, which holds the input data for subsequent signal processing.

After square-law detection, the radar output is an estimate of reflectivity, with units of power. If the measurement is one-dimensional, as it would be for an altimeter, then the output is usually in graphical form, with the horizontal axis proportional to time delay and the vertical axis proportional to radar reflectivity. For radars with a two-dimensional output format, as is the case for air traffic control radars, both spatial dimensions of the output are needed to represent position, and the reflectivity at each position is indicated by brightness. The output appears in one of several forms, such as video, digital or photographic. For remote sensing radars, the two-dimensional output is generally referred to as an image.

S1.C3.D2

1.1              Transmission: The heart of most radars is the timing and frequency control. At the outset, the radar shapes and transmits a pulse in response to a trigger from the control unit. This task may be distributed over several subsystems, or it may be centralized in one control computer. For a radar, "timing is everything"... almost. In the transmit chain, the trigger initiates generation of a pulse having a particular shape or envelope, which usually is expanded with a special phase code or a SAW device. The pulse modulates the radar radio-frequency (RF) carrier, whose frequency must be maintained accurately by the local oscillator in the control unit. The RF pulse is amplified in the transmitter, and the resulting high-power pulse reaches the antenna through a one-way device, usually a circulator, as shown in Figure S1.C3.D2. Most radars use the same antenna for both transmission and reception: the circulator directs outgoing signals from the transmitter to the antenna, and incoming signals from the antenna to the receiver. As such, the hardware comprising the timing and frequency control, pulse generation, high-power amplifier and so forth is not, by itself, critical from the imaging performance point of view.

1.2             Reception: When the signal arrives at the first receiver stage, it is extremely weak. The first task of the receiver is to amplify the signal to a more useful level, which is done in the radio-frequency low-noise amplifier. Unfortunately, the signal must compete with background noise: any physical device generates internal noise, which is added to the signal. Additive noise is most evident where the signal is weakest. For the systems of interest in remote sensing, the dominant additive noise is thermal noise in the "front end" of the system.

The information of interest is in the envelope and phase of the received signal, not in the carrier frequency itself. The carrier is removed by a process known as demodulation, which can be visualized as a simple frequency shift; the mean frequency after demodulation is always much less than the original radar frequency. To retain the signal phase information, quadrature demodulation must be used, which also simplifies the mathematics for remote sensing radars. Quadrature demodulation uses a pair of reference signals, both derived from the local oscillator (LO), which is part of the timing and frequency control unit. One of the reference signals is in phase with the system LO (0°), and the other is in quadrature with respect to the LO (90°). When both sine and cosine demodulation references are used in the same system, two output signals are available. These are known as the in-phase (I) and quadrature (Q) components.
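A minimal simulation of this I/Q scheme is sketched below, assuming a deliberately low "carrier" frequency and a crude moving-average low-pass filter so it runs quickly; the signal, carrier, and filter values are illustrative, not those of any real receiver.

```python
import numpy as np

# Quadrature (I/Q) demodulation sketch: mix the received signal with a 0-degree
# and a 90-degree reference derived from the LO, low-pass filter away the 2*fc
# term, and keep the complex envelope I + jQ.  All values are illustrative.
fs = 1.0e6                                   # simulation sample rate (Hz)
fc = 200.0e3                                 # stand-in carrier frequency (Hz)
t = np.arange(0.0, 2e-3, 1.0 / fs)

envelope = np.exp(-((t - 1e-3) ** 2) / (2 * (0.1e-3) ** 2))   # pulse envelope
true_phase = 0.7                                              # echo phase to be recovered
received = envelope * np.cos(2 * np.pi * fc * t + true_phase)

i_mixed = received * np.cos(2 * np.pi * fc * t)               # in-phase reference (0 deg)
q_mixed = -received * np.sin(2 * np.pi * fc * t)              # quadrature reference (90 deg)

lowpass = np.ones(50) / 50.0                                  # crude moving-average LPF
i_bb = np.convolve(i_mixed, lowpass, mode="same")
q_bb = np.convolve(q_mixed, lowpass, mode="same")

baseband = 2.0 * (i_bb + 1j * q_bb)                           # complex envelope
peak = int(np.argmax(np.abs(baseband)))
print(f"recovered phase {np.angle(baseband[peak]):.2f} rad (expected {true_phase})")
```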

After amplification, the signals in both the I and Q channels are sampled for analog-to-digital (A/D) conversion. The A/D stage creates two parallel digital data streams and forces quantization levels onto the incoming analog signals. The principal issues are the A/D sampling rate, quantization, and I/Q balance. Since the data are in complex form, the band-pass version of the Nyquist criterion applies: the sampling rate must exceed only the bandwidth of the signal, rather than twice the bandwidth as in the more familiar low-pass case. The choice of the number of digital bits of quantization implies a trade-off between data rate and quantization noise. Good design relies on gain control at the input to the A/D unit to adapt signal levels to an optimum dynamic range. For most well-implemented radars operating on signals within the designed dynamic range, the performance of A/D conversion is reliably linear. The signal data thereafter must be transferred to memory for subsequent processing. This may be done as simply as by a wire to a cathode-ray tube display, or in more elegant fashion, depending on system requirements.
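The two design points mentioned here, the complex (band-pass) sampling rule and the bit-depth versus quantization-noise trade, can be illustrated with the short sketch below; the bandwidth, sampling rate, and bit depths are arbitrary assumptions.

```python
import numpy as np

# Complex sampling and quantization sketch (illustrative values only).
signal_bandwidth_hz = 20.0e6
complex_sample_rate_hz = 24.0e6
# For I/Q (complex) data the rate need only exceed the bandwidth, not twice it.
assert complex_sample_rate_hz > signal_bandwidth_hz

def quantize(x, n_bits, full_scale=1.0):
    """Uniform quantizer clipped to +/- full_scale."""
    step = 2.0 * full_scale / (2 ** n_bits)
    return np.clip(np.round(x / step) * step, -full_scale, full_scale - step)

rng = np.random.default_rng(0)
samples = 0.3 * rng.standard_normal(100_000)       # signal kept inside the dynamic range
for bits in (4, 8):
    err = samples - quantize(samples, bits)
    snr_db = 10.0 * np.log10(np.mean(samples ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit A/D: ~{snr_db:.1f} dB signal-to-quantization-noise")
```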

1.3             Motion Compensation: Most remote sensing radars for aircraft are used from a moving platform. Moving platforms usually have a dominant velocity vector, but in addition to a constant speed along this vector, all real platforms have several degrees of freedom that allow small but sometimes troublesome extraneous movement. Possibilities include the three angles of rotation (pitch, roll and yaw), displacements horizontally or vertically away from the line of flight, and variations in the along-track speed. The task of the motion compensation system is to estimate and correct for these unwanted motions.

Several levels of correction may be required; they are listed here in increasing order of required accuracy. First, if the antenna pattern has angular beamwidths that are comparable to or smaller than the angular platform motions, then the antenna must be steered in compensation. Second, if the radar strays from the ideal line of flight by more than the range resolution, then the range timing must be adjusted to offset the effect. Third, if there are along-track velocity variations large enough to distort the pulse-to-pulse spatial scaling, then the motion compensation system must recognize this and adjust the radar timing accordingly. Fourth, and most critical, if the radar uses phase information for signal processing, then the motion compensation system must estimate the cross-track and vertical random motions to an accuracy of better than λ/10 and generate the appropriate phase corrections. Specialized radars may need variations on these compensations.
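The fourth, phase-level correction can be made concrete with the small sketch below: a line-of-sight displacement Δr changes the two-way path by 2Δr and therefore shifts the phase by 4πΔr/λ, which is the quantity the motion compensation system must estimate and remove. The wavelength and displacement values are illustrative assumptions.

```python
import numpy as np

# Phase error caused by uncompensated line-of-sight motion (illustrative values).
wavelength_m = 0.032                                     # ~X-band
displacements_m = np.array([0.001, 0.003, 0.010])        # unwanted cross-track motion

phase_error_rad = 4.0 * np.pi * displacements_m / wavelength_m   # two-way path: 2*dr
for dr, phi in zip(displacements_m, phase_error_rad):
    print(f"displacement {dr * 1000:.0f} mm -> phase error {np.degrees(phi):.0f} deg")

# Applying the correction to one recorded range line (stand-in data):
range_line = np.ones(8, dtype=complex)
corrected_line = range_line * np.exp(-1j * phase_error_rad[2])   # remove estimated error
```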

 

S1.C3.2          SIGNAL PROCESSOR

All radars of interest in remote sensing use processing over many pulses. As transmissions are repeated, the resulting amplitude range lines constitute the input signal set for the processor, as in Figure S1.C3.D3. Aircraft- or satellite-based side-looking systems, such as pulse-Doppler radars or SARs, are characterized by both range-processing and azimuth-processing blocks in series as an integral part of the SAR processor.

S1.C3.D3

 

Multi-pulse radars are important because they may be designed to retain signal phase structure in the azimuth dimension as well as in the range dimension. The data from each pulse are stored in memory and subsequently processed as a two-dimensional complex data set. The two spatial dimensions are slant range and azimuth position, which may be indexed by pulse number when the formulation is in terms of discrete pulses. The processing is designed to take advantage of systematic patterns in the details of the signal structure within the ensemble of received signals. From the data processing point of view, the several types of radars used in remote sensing are distinguished primarily by the specific signal structures in each instance.

The complex image is the result of all processing before detection. If the complete signal to this point is coherent and no phase errors have been introduced, then the complex image may be analyzed to provide an estimate of the phase of the reflectivity coefficient. The complex image phase is exploited by advanced techniques for further processing if required. Following the linear processing stages, the processor includes detection of the signal, usually implemented using a square-law non-linearity. Detection is needed to provide estimates of the power resulting from the scene reflectivity. For most natural scenes, the reflectivity of an area derives from many small individual echoes that add coherently. Finally, post-detection processing usually is used to improve certain image quality characteristics and to reduce the signal data volume.
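A compact illustration of square-law detection followed by post-detection (multi-look) averaging is given below, using simulated fully developed speckle over a uniform scene; the number of looks and samples are arbitrary assumptions.

```python
import numpy as np

# Square-law detection and multi-look averaging sketch (illustrative values).
rng = np.random.default_rng(1)
n = 4096
# Fully developed speckle: circular complex Gaussian returns from a uniform scene.
complex_line = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)

power = np.abs(complex_line) ** 2                  # square-law detection -> power estimate
looks = 4
multilook = power.reshape(-1, looks).mean(axis=1)  # average adjacent samples (4 looks)

print(f"single look: std/mean = {power.std() / power.mean():.2f}")              # ~1.0
print(f"{looks} looks:     std/mean = {multilook.std() / multilook.mean():.2f}")  # ~0.5
```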

2.1              Pulse Compression and Range Filters: The task of the range compression filter is to convert the received pulse into a short impulse that has the same bandwidth as the original pulse. Under the constraints that the additive noise is Gaussian and that the output signal-to-noise ratio is to be maximized, it may be shown that the optimum compression filter transfer function is the complex conjugate of the Fourier transform of the original pulse modulation. This configuration is known as a matched filter. Usually, the range compression filter uses a shaped frequency passband (weighting) to reduce sidelobes on the compressed pulse. Given that only a limited bandwidth is available, it is impossible to realize a pulse modulation that leaves no residue outside the desired peak response.

The main reason that radars use a phase-modulated (in practice, LFM) pulse is that the radar is then able to transmit a relatively large amount of energy in each pulse under the conditions that (1) the peak transmitted power cannot exceed a given value, and (2) the resolution, or pulse length after compression, should be small. The operation of such filters is often called "focusing" and is illustrated in Figure S1.C3.D4. The main feature is that for an input pulse p(t) of unity amplitude, time duration τ, and bandwidth Bp, the amplitude of the compressed pulse pc(t) at the output of the matched filter is substantially increased, and its width is sharply decreased. The quantity τBp is known as the time-bandwidth product (TBP), a parameter of fundamental importance for pulse compression systems. Prior to pulse compression, values of TBP > 100 are typical in remote sensing radars.

The theoretical resolution τo of the system is given by the inverse bandwidth of the channel. In an idealized pulse compression scheme, the bandwidths of the original modulated pulse and of the compressed pulse are the same. In practice, weighting is used on both the original pulse shape and the filter envelope to reduce the sidelobes of the compressed pulse. Weighting causes the output pulse to have a peak level less than the maximum and a width greater than the minimum. If the actual pulse width τ and output pulse width τo are measured, then the more general expression of the filter action is in terms of the range pulse compression ratio, CR = τ/τo, which would be equal to the TBP of the input pulse for an ideal unweighted filter. The action of the pulse compression filter may be incorporated explicitly into the model through linear convolution. The range matched filter is fR(t) = p*(-t).
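A minimal range compression sketch follows, assuming a unit-amplitude LFM chirp with a time-bandwidth product of 100 and a single point target in noise; the matched filter fR(t) = p*(-t) is applied in the frequency domain as multiplication by the conjugate pulse spectrum. All signal parameters are illustrative.

```python
import numpy as np

# LFM pulse compression (range matched filter) sketch, illustrative parameters.
fs = 100.0e6                         # sample rate (Hz)
pulse_length_s = 10.0e-6
bandwidth_hz = 10.0e6                # time-bandwidth product = 100
t = np.arange(0.0, pulse_length_s, 1.0 / fs)
chirp_rate = bandwidth_hz / pulse_length_s
pulse = np.exp(1j * np.pi * chirp_rate * (t - pulse_length_s / 2.0) ** 2)

# Received range line: the pulse returned from one point target, plus noise.
rng = np.random.default_rng(0)
received = np.zeros(2048, dtype=complex)
delay_samples = 300
received[delay_samples:delay_samples + pulse.size] += pulse
received += 0.1 * (rng.standard_normal(2048) + 1j * rng.standard_normal(2048))

# Matched filtering in the frequency domain: multiply by the conjugate spectrum.
compressed = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(pulse, received.size)))

peak = int(np.argmax(np.abs(compressed)))
print(f"compressed peak at sample {peak} (target placed at {delay_samples})")
print(f"compressed width ~1/B = {1e6 / bandwidth_hz:.2f} us vs. "
      f"{pulse_length_s * 1e6:.0f} us transmitted (ratio = TBP = 100)")
```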

 

S1.C3.D4

 

The output of each of the range filters in the processing model is the same as the input except for a narrower pulse shape and the presence of compression gain acting on the peak value of each pulse. The energy of each pulse is conserved through the matched filter. The peak gain benefits only those signals that have the matching phase structure; the noise components receive no compression gain. It follows that pulse compression is an effective technique when attempting to enhance the ratio of single-scatterer peak strength to the average noise level.

2.2       Azimuth Filters: Any systematic movement between the radar and the scene during the radar illumination interval leads to systematic phase shifts from each elemental reflector. These phase shifts are observable if the radar is coherent. If the phase histories have a predictable structure, and if the relative imaging geometry is known, then the phase shifts can be matched by a filter acting in the azimuth direction. In principle, azimuth matched filters have the same fundamental characteristics as range matched filters. In general, each azimuth filter is a linear summation over the sequence of input signals, analogous to the action of range pulse compression. When those signals have a large (azimuth) time-bandwidth product, compression gain and focusing may be achieved. The azimuth filter operator is written fA(m, i; R), where m is the reference azimuth position of the filter and i is the index of each range line. Usually, the azimuth filter has a parametric dependence on range R, carried explicitly in the expressions as needed.
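To make the analogy with range compression explicit, the sketch below builds the azimuth phase history of a single point target under the standard linear-FM approximation (azimuth FM rate Ka ≈ -2v²/(λR)) and compresses it with the conjugate, time-reversed reference; the geometry, PRF, and dwell time are illustrative assumptions, not parameters of any particular system.

```python
import numpy as np

# Azimuth (along-track) compression sketch, illustrative parameters only.
wavelength_m = 0.032
speed_m_s = 100.0
slant_range_m = 15_000.0
prf_hz = 500.0                                     # pulse repetition frequency
dwell_s = 1.0                                      # assumed illumination time

ka = -2.0 * speed_m_s ** 2 / (wavelength_m * slant_range_m)   # azimuth FM rate (Hz/s)
eta = np.arange(-dwell_s / 2.0, dwell_s / 2.0, 1.0 / prf_hz)  # slow-time axis

signal = np.exp(1j * np.pi * ka * eta ** 2)        # phase history of one point target
reference = np.conj(signal[::-1])                  # azimuth matched filter, s*(-eta)

compressed = np.convolve(signal, reference, mode="same")
peak = int(np.argmax(np.abs(compressed)))
print(f"Ka = {ka:.1f} Hz/s; azimuth peak at sample {peak} of {eta.size} (aperture centre)")
```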