
We depend on our cell phones, and they go with us wherever we go. No matter where you are on this planet, you will find people with their cell phones. They are an essential part of modern life.
Some of us fondly recall the halcyon pre-cell-phone days, when we lived out our private, disconnected lives in undistracted peace and tranquility. Yet there is a compulsion within all of us to be connected to others, and in the 1980s we imagined the benefits of ubiquitous personal communications. We now see the personal price we pay in lost privacy and long-lost anonymity. Being anonymous now and then can be a healthy thing.
Personal privacy has become a fading ideal.
Cell phones did not happen in a technical vacuum. They are the latest implementation of a broad range of technical disciplines that have been developed over many years. One of these is the ability to reliably communicate through dispersive RF communication channels.
Looking at the history of radio communications technology, we see that it has been about 120 years since it all began, depending on where you choose your starting date. If we begin our calendar at about 1900, then Guglielmo Marconi comes to mind: an inventor and experimenter who was also a clever entrepreneur. His work was basic, but it was new. It was designed in a “technical vacuum” and was essential to the developing field of radio frequency communications technology. He was awarded the Nobel Prize in Physics with Karl Ferdinand Braun in 1909 for their work in this field.
Marconi is credited with starting the radio communications revolution, and many people wanted to invest in it. The closest comparison to his situation might be the “Dotcom” era of the 1990s. Reputations and money were made and lost in astounding amounts in Marconi’s day, and there were few constraints. It was a new market with innovative technology and little regulation over these new inventions.

Guglielmo Marconi
We all know about the sinking of the RMS TITANIC, but that was not the first time radio was a deciding factor in addressing a maritime disaster. That honor belongs to the RMS REPUBLIC in 1909, where this new technology was instrumental in saving 1,500 lives.
However, it was the sinking of the RMS TITANIC on April 15th, 1912, that captured the public imagination because of the enormous loss of life. The public immediately recognized the importance of wireless communication. Congress quickly passed the Radio Act of 1912 requiring all maritime radio operators to be federally licensed and requiring them to maintain a constant radio alert for distress signals.
History and honor compel us to pause and recognize the radio operator of TITANIC, Jack Phillips, who remained at his post and went down with his ship. He was 25 years old. Those who survived the sinking of TITANIC owe their lives to his dedication. His assistant, Harold Bride, 22 years old, was washed off the deck of TITANIC when the ship went down but he managed to scramble onto an upturned lifeboat. He survived to tell the story.

Jack Phillips
From the communications perspective, what was once a theory became a possibility, which then became an opportunity, and then became an essential legal requirement. Public attention to this new capability led to a remarkable history of research and innovation in wireless communication technology that continues today.
Then came two world wars and the rapid development of high frequency communications. Long-range non-line-of-sight (NLOS) communications now became possible, but only when ionospheric conditions were favorable. The history of military and commercial aviation, as well as commercial broadcast radio and television, demonstrates the significance of RF communications in new and essential markets. At first radio was the voice of the world, then television, then the cell phone, and now the internet. None have been replaced. We just added more capabilities to them.
The issue driving the rapid development of radio communications after World War II was the rise of two great nuclear powers, the United States (USA) and the Union of Soviet Socialist Republics (USSR). The Cold War between these two nations confronted us all with horrific security concerns over nuclear and thermonuclear weapons delivered (at that time) by long-range bombers. Then came ballistic missiles, designed to wreak havoc and destruction from long distances. How could you detect such weapons, let alone engage them?
RADAR was the answer of course, but it was limited by the shape of the earth and the small physical size of the powerful new weapons being developed. These realities meant that we needed to install powerful radar stations as close as possible to our expected adversary and do it quickly.
What was once a multipolar world was now a bipolar world with the United States and the USSR building large networks of radar stations in remote locations. Strategic defense was at stake on both sides of this historic confrontation and communications was likewise strategic. Having radar stations was a good thing but these stations had to communicate their information to decision-makers who might be able to thwart or respond to a nuclear crisis.
Without reliable communications, our three-tier early warning radar systems were of little use. HF and shortwave radio simply would not do. Moreover, the first Soviet satellite (Sputnik) would not be launched until 1957 and the concept of global communication coverage by geostationary satellites had not yet approached reality.
Even with a geostationary satellite constellation, coverage was limited by the orbital radius of the satellite (6.61 earth radii) and the size of the spherical earth (radius of 3,963 miles). Line-of-sight communications with geostationary satellites could go no further than about 70 degrees of latitude. Nevertheless, we needed reliable communications at even higher northern latitudes for our early warning radars to be of any benefit. Moreover, geostationary satellites would not become a reality for roughly another decade; not because we did not have the electronics, but because we did not have rockets powerful enough to put satellites into high orbit.
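To make the geometry concrete, here is a minimal sketch of that coverage calculation, assuming a spherical earth and the 6.61-earth-radii orbital radius given above; the 10-degree minimum elevation angle is an added assumption, not a figure from the text:

```python
import math

# A minimal sketch of geostationary coverage geometry, assuming a spherical
# earth and a geostationary orbital radius of 6.61 earth radii (as above).
RE_OVER_R = 1.0 / 6.61   # earth radius / geostationary orbital radius

def max_latitude_deg(min_elev_deg):
    """Highest latitude that can still see a geostationary satellite at the
    same longitude with at least the given elevation angle. From triangle
    geometry: lat_max = 90 - elev - asin((Re/r) * cos(elev))."""
    e = math.radians(min_elev_deg)
    return 90.0 - min_elev_deg - math.degrees(math.asin(RE_OVER_R * math.cos(e)))

print(round(max_latitude_deg(0), 1))    # ~81.3 deg: the hard geometric horizon
print(round(max_latitude_deg(10), 1))   # ~71.4 deg: near the 70-deg practical limit
```

With a realistic minimum elevation angle of about 10 degrees, the usable coverage stops near 70 degrees of latitude, which is exactly the limitation that mattered for the high Arctic.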
It was during the early 1950s that engineers and scientists began to consider the possibilities of deploying meteor burst (developed by the Canadians in 1954) and tropospheric scatter (troposcatter) to meet these communications needs. Between the two, troposcatter won out. It was secure, it could not be jammed, and it provided communications over paths of up to 300 km.
I was in Greenland in the early 1980s when I was told of the history of the troposcatter system between Labrador, Canada, and Thule Air Base in Greenland. It is an interesting history, rich in perspective concerning microwave diversity communications.
As I recall the numbers told to me years ago, the plan was to provide 100 kilohertz of bandwidth between the two locations. In practice they were able to achieve a bandwidth of about three kilohertz, a huge disappointment. Today, we know why … but this was in the early 50s, and it was a big question.
A troposcatter system illuminates a small volume of the atmosphere in order to reflect tiny amounts of power from small variations in the refractive index of the atmosphere. The loss between transmit and receive is about 120 decibels, a large number. The problem is that we are trying to reflect power off of basically … nothing.

A NATO Communications Relay

There are two extreme limits of reflection in propagation:
- Specular reflection (mirror like)
- Diffuse reflection (like a fog bank, except constantly moving).
Troposcatter is the worst case for diffuse reflections. The transmitted signal may be pure (coherent) when sent, but by the time it is received it is a jumbled collection of many tiny reflections from oddities in the atmosphere that appear and vanish at random. The final received signal is an incoherent collection of many smaller signals on top of each other.
If you communicate slowly enough, you might make troposcatter work, but the coherence bandwidth was found to be far lower than what was needed. So, an experiment was designed to understand the boundaries of the problem.
Here is how this went:
The engineers assembled a two-channel receive system. They co-located two receive antennas and correlated the baseband signals against each other. They then measured the statistical dependence between them. They knew in advance that signals from the same location would have 100% correlation, but what was the correlation between channels as you move the receive antennas further away from each other? At what point do they become statistically independent?
The answer to this question provides insight into the properties of the diffuse propagation channel.
We observed that cross-correlation between channels dropped off rapidly at first and became statistically independent at about 10 wavelengths of separation. When three channels were used, each about 10 wavelengths from each other, the statistics of all three taken together was sufficiently reliable such that the full bandwidth of the channel could be restored. Antenna spacing was used, but base-band signal processing was the tool required to take three marginal channels and create one good one.
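As an illustrative sketch only (the original data is not available), here is how one might estimate that channel-to-channel correlation at baseband; the mapping from antenna spacing to correlation below is an assumed toy model, not the measured troposcatter statistics:

```python
import numpy as np

# Illustrative sketch: estimate the normalized cross-correlation between
# two receive channels, as in the two-antenna experiment described above.
rng = np.random.default_rng(0)
n = 100_000                              # baseband samples per channel

def two_channels(rho):
    """Two Rayleigh-faded baseband channels with target correlation rho."""
    a = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    b = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return a, rho * a + np.sqrt(1.0 - rho**2) * b

# Assumed spacing -> correlation pairs, echoing the observed rapid drop-off.
for spacing_wl, rho in [(1, 0.9), (3, 0.5), (10, 0.05)]:
    x, y = two_channels(rho)
    c = np.abs(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
    print(f"{spacing_wl:>2} wavelengths apart: measured correlation ~ {c:.2f}")
```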
We learned several things from this experiment:
- Troposcatter is the worst case of a diffuse propagation medium.
- A receive antenna separation of about 10 wavelengths produces statistical independence for each channel of this particular problem.
- There is a separation distance beyond which there is no further improvement (10 wavelengths).
- The statistical correlation between channels drops off rapidly with separation distance. Even a few wavelengths of separation are useful.
- Both baseband signal processing and spatial separation are required to restore the performance of a troposcatter communications channel.
Diversity reception and signal re-combination are powerful tools, and they became (and remain) the backbone of today’s microwave communications. In today’s terms, this would be called a SIMO (single transmit antenna, multiple receive antennas) system, an architecture still in service as the AN/TRC-170 used by the US Marine Corps.
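To make the recombination idea concrete, here is a minimal sketch of three-branch diversity combining. Maximal-ratio combining (MRC) is used as the example scheme; the text does not say which recombination method the original system employed:

```python
import numpy as np

# A minimal sketch of three-branch diversity recombination using
# maximal-ratio combining (MRC), shown here as an illustrative scheme.
rng = np.random.default_rng(1)
n_sym = 10_000
bits = rng.integers(0, 2, n_sym)
s = 2.0 * bits - 1.0                                   # BPSK symbols

# Three independently Rayleigh-faded receive channels plus noise.
h = (rng.standard_normal((3, n_sym)) +
     1j * rng.standard_normal((3, n_sym))) / np.sqrt(2)
noise = 0.5 * (rng.standard_normal((3, n_sym)) +
               1j * rng.standard_normal((3, n_sym)))
r = h * s + noise

# MRC: weight each branch by its conjugate channel gain, then sum. Branches
# that happen to be in a deep fade contribute little; strong ones dominate.
combined = np.sum(np.conj(h) * r, axis=0)
ber_mrc = np.mean((combined.real > 0) != (bits == 1))
ber_single = np.mean(((np.conj(h[0]) * r[0]).real > 0) != (bits == 1))
print(f"single channel BER ~ {ber_single:.3f}; 3-branch MRC BER ~ {ber_mrc:.3f}")
```

The point of the sketch is the one the experiment proved: three marginal channels, properly combined at baseband, outperform any one of them alone.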

AN/TRC-170
What followed was Operation Pole Vault, White Alice, Pine Tree, and Ace High … the backbone of strategic communications for the US Distant Early Warning (DEW) system and for NATO for several decades.
Satellite Coverage of the High Arctic
In 1982 my company introduced satellite communications into the high Arctic using the LES-9 satellite (launched in 1976, retired in 2020). It was a unique asset: while its orbit was geosynchronous, it was inclined 32.10 degrees from the equatorial plane. It provided radio coverage for several hours throughout the Arctic and, twelve hours later, the Antarctic. It was a UHF “bent pipe” transponder and, with the right antenna (RHCP in this case), we could send digital data (BPSK) from the central Arctic back to Santa Barbara, California, once a day. These were brief messages, about 2 minutes in length, but they carried strategically important data from several monitoring sites.
Santa Barbara is on the California coast. As the evening fog envelops the city, it deposits moisture on the power line insulators. Visible sparks can be seen across these high voltage insulators, raising RF noise levels too high to receive UHF satellite signals. So, we set up a remote receive site in the mountains surrounding the city and built a microwave transponder to send a much stronger signal back to the city.
We had originally planned to send our data to Santa Barbara using a Bell 202 telephone modem, but our 2400 bits per second of BPSK exceeded the capacity of the leased phone line, which suffered from low fidelity, crosstalk, poor bandwidth, and group delay.

The venerable Bell 202 modem was designed to work with dispersive phone line data communications because the phone lines of those days were analog, not digital. The only technique then available was adaptive channel equalization … an elaborate filtering protocol.
Then communications technology changed rapidly. Digital signal processors (DSPs) arrived. What once required a large electronic system could now be done with a single chip. To the data scientist, things were now possible that had long been considered too difficult to implement.
The Internet then arrived, and the public demanded faster data. Soon, data rates over analog landlines shot upwards, reaching 9600 bits per second over our old analog phone lines. Today, data rates of 256 kilobits to 25 megabits per second are common.
What happened? Landlines had not changed. However, digital signal processing found a way to make better use of bandwidth that is not normally used.
Short-range (last mile) bandwidth between your home and the local office is typically quite good, but the long-range voice band (300 to 3400 Hz) is not useful for high-speed data. Phone companies found ways to send and receive data at frequencies well above the normal audio band.
DSL (Digital Subscriber Line) and its variants (ADSL and VDSL) used a variety of competing modulation techniques. The dominant modulation technique, however, is OFDM (Orthogonal Frequency Division Multiplexing), which was developed by Robert W. Chang of Bell Laboratories and filed for patent in 1966 (US 3488445A).
However, Chang was not the first scientist to develop OFDM. Patent history tells us that he was preceded by R. R. Mosier and R. G. Clabaugh 8 years earlier with a system called Kineplex. Digging even deeper, we find that Pierre Deman (from France) patented the basic principles of OFDM in 1962 (patent US 3163718A).
Remarkably, these were all preceded by William H. Cherry (US 2870247) whose patent application in 1950 lay silent until it was finally awarded in 1959. Little is recorded about him or his contributions or why the U.S. Government chose to hold on to it for a decade.
Later, COFDM added forward error correction coding to improve data integrity. It buried redundant data in the signal in order to recover data lost in an uncertain environment. It was widely used after 1986 and relied on error-correction work by Richard Hamming and on the decoding algorithm Andrew Viterbi published in 1967. While ownership of this technology is claimed by several other scientists, Viterbi and Hamming chose not to file a patent on the advice of their lawyers. In retrospect, it was a wise move.
Andrea Giacomo Viterbi was born in Italy (a nod to his countryman, Guglielmo Marconi). Just before World War II, his family emigrated from Italy to the United States. He studied at MIT, where he received his BS and MS degrees, and he later received his PhD from the University of Southern California.
Andrew Viterbi led a spectacular business career. With his colleague Irwin Jacobs, he founded Linkabit Corporation and then Qualcomm, both in San Diego (my hometown). Both men became billionaires and are widely known for their very significant philanthropy.
Engineers found that you can divide the digital data signal into hundreds (or even thousands) of slow, narrowband channels. You can then transmit them simultaneously and reassemble and error-correct the data back to its original form.
This technique is now used in COFDM (Coded Orthogonal Frequency Division Multiplexing) microwave communications such as 5G, LTE, WiMax, and your television.
In COFDM, the incoming signal is divided into as many as 2000 narrowband channels before being transmitted. That is how your Wi-Fi transceiver works (52 channels) and your TV set (1705 or 6817 channels). In the early days of COFDM, the world relied on a low-cost TV receiver chip from China after the Federal Communications Commission changed the US TV modulation protocol to 8VSB. The rest of the world went with COFDM.
Where do we find these 2000 individual receivers? They are all bundled together into a single, high-speed Digital Signal Processor (DSP). DSP computing components have finally become fast enough to be able to process all these narrowband channels simultaneously.
Moreover, with forward error correction coding (FEC) we spread our data over several self-correcting channels, providing internal data redundancy.
Digital signal processing is essential because multipath nulls from specular reflectors recur every one-half wavelength. They appear as very narrowband notches in the baseband signal. If you lose a few channels of your original 2000, you replace the lost data from the embedded redundancy within the remaining channels.
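Here is a minimal sketch of that idea under illustrative assumptions (64 subcarriers, a 16-sample cyclic prefix, and a simple two-ray echo); real systems add the FEC coding (the “C” in COFDM) across subcarriers so that data lost in the notches can be restored:

```python
import numpy as np

# Minimal OFDM sketch: split data across narrowband subcarriers with an
# IFFT, pass it through a two-ray multipath channel that notches out a few
# subcarriers, and recover each subcarrier with a one-tap equalizer.
rng = np.random.default_rng(2)
N, CP = 64, 16
bits = rng.integers(0, 2, N)
X = 2.0 * bits - 1.0                         # one BPSK symbol per subcarrier

x = np.fft.ifft(X)                           # OFDM modulation (IFFT)
tx = np.concatenate([x[-CP:], x])            # prepend cyclic prefix

h = np.zeros(CP, dtype=complex)              # two-ray multipath channel
h[0], h[6] = 1.0, 0.8                        # direct ray + strong echo
rx = np.convolve(tx, h)[:len(tx)]
rx += 0.01 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

Y = np.fft.fft(rx[CP:CP + N])                # strip prefix, demodulate (FFT)
H = np.fft.fft(h, N)                         # gain seen by each subcarrier
X_hat = Y / H                                # one-tap equalizer per subcarrier

print("deepest notch |H|:", round(np.abs(H).min(), 2))   # a few carriers fade
print("bit errors:", int(np.sum((X_hat.real > 0) != (bits == 1))))
```

The one-tap equalizer line is the payoff: after the FFT, each narrowband channel sees only a single complex gain, so even a deeply notched channel is corrected with one division rather than an elaborate adaptive filter.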
COFDM does have practical problems. It requires a large amount of processor power which, in turn, requires a lot of DC power. It also places high demands on the linearity of the transmitter power amplifier in order to reduce intermodulation distortion between channels; in short, you need a very linear power amplifier.
Even so, this solved the specular multipath problem, a huge win for everyone. Microwave communications that were once made useless by multiple specular reflections and frequency selective fading now became fully useable.
The “King of the Hill” of COFDM technology is Qualcomm:

Even more interesting is that COFDM has followed a normal technology life cycle; most of the technical issues have been studied and resolved, as shown in the following:

It is clear that, while COFDM is an essential component of a successful data communication system, the amount of development peaked in about 2019 and has since declined to about 10% of what it was only a few years ago. The dip in the number of patent application filings in OFDM wireless communications after 2019 may be due to the emergence of software-defined radios (SDRs) and cognitive radios (CRs), which led system designers to turn their attention to new and more interesting challenges.
Digital signal processing using COFDM solves many multipath problems, yet even this technology has a plateau. Having soared to new heights, we must now solve the physical propagation problem … the Fresnel zone.
Fresnel Zones
Is a microwave signal (or light, for that matter) a longitudinal wave or a transverse wave? This is an important question, and it was debated among learned men for about 150 years.
The answer, of course, is that it is a transverse wave that propagates at the speed of light in a longitudinal direction. It is obvious now, but it was not obvious to Isaac Newton (1642-1726) and most of his technical friends … that is if he had friends, knowing how irascible he was. He was a hard guy to get close to. History presents him as being reclusive, vain, and vindictive in his later years.
On the other hand, it would have been fascinating to spend an evening with his contemporary, Christiaan Huygens (1629-1695). We do not hear much about him in modern conversation, but he was the equal of Isaac Newton … even better. He was, after all, the one who discovered the moon Titan with his newly designed telescope and, even more significant, he invented the pendulum clock which was the most precise means of keeping time for the next 300 years.
Huygens was a physicist, a mathematician, an engineer, an inventor, a diplomat, a musician, and a friend of the great scientists of his day: Galileo, Mersenne, Spinoza, Fermat, Locke, Cassini, and René Descartes. He even visited Newton during his lifetime and became a Fellow of the Royal Society (formed in 1660). Newton was committed to his corpuscular theory (light travels longitudinally through the “luminiferous aether”) while Huygens staked his reputation on the wave theory of light, which holds that light is a self-propagating transverse wave.
It was not until the 1820s that Augustin-Jean Fresnel (1788-1827), a French physicist, put the lie to Newton’s theory, because only the wave theory could match his data. The final axe would fall on the “aether issue” with the failure of the Michelson-Morley experiment in Cleveland, Ohio, in July 1887, also known as “the most famous failed experiment in history.”
The Michelson-Morley experiment was an attempt to measure the motion of the Earth with respect to the “luminiferous aether.” The problem is that there is no such thing. It was a well-designed experiment, but it was doomed because it was based on a false premise.
Albert Einstein then came along with his Special Theory of Relativity (1905) and wrote: “If the Michelson–Morley experiment had not brought us into serious embarrassment, no one would have regarded the relativity theory as a redemption.”
It is a curious thing that a major experimental failure became so essential to our larger understanding of physics. Yet that is a sign of the integrity of this lengthy process. Life teaches us that we learn more from our failures than we do from our successes.
Michelson did not slink into obscurity … he went on to receive the Nobel Prize in 1907 for his precision optical instruments and measurements, many of which were used in validating Einstein’s theory of general and special relativity.
The unfolding of the physics of electromagnetics (which includes light) goes back several hundred years. Fresnel now takes his place in that pantheon of great scientific minds. He died in 1827 at the age of 39 from tuberculosis, having lived through challenging times: the French Revolution, the Napoleonic Wars, and the later turbulent times of late-royalist France.
When you think of light or any other electromagnetic wave as travelling between two points in a 1-dimensional straight line, you are seeing the world as Isaac Newton saw it … and that is not quite correct. Huygens developed the correct picture in the 17th century, Fresnel expanded on it in the 19th century, it was confirmed by the failure of the Michelson-Morley experiment (1887), and Einstein explained it all with his Special Theory of Relativity (1905).
Of course, James Clerk Maxwell knew it all along before he published his famous equations in 1865, but he was a theoretician, not an experimentalist. Physics rests on two distinct camps: the theoretician (the math and the model) and the experimentalist (the experiment and the data). Historically, the data comes first and then the theoretician explains it. But there is a singular exception to this rule: Maxwell predicted the propagating electromagnetic wave before it was ever observed … then the data quickly followed. Truly remarkable.
If Newton’s view of the propagating electromagnetic wave is flawed, then how is the Huygens-Fresnel vision different? It is this: the propagating electromagnetic wave occupies a set of ellipsoids between two points with alternating phases. Why do we not see it that way? Because visible light is an electromagnetic wave at such a high frequency that the diameters of these ellipsoids are extremely small and appear as if they were a 1-dimensional line. However, if you reduce the frequency of the electromagnetic wave to the frequencies we use in communications, you must broaden your view of the structure of the propagating wave and the environment around it.
Fresnel makes no approximations concerning the structure of the electromagnetic wave. He deals with the geometry of the problem … including all the reflections that may occur in the environment.
Application
Now that we have a brief background in Fresnel zones, we need to segment the propagation problem into 4 distinct types … each having its own technical issues. These are as follows:
- The unobstructed 1st Fresnel zone in free space.
- The occluded 1st Fresnel zone (obstructions)
- 1st Fresnel zone with ground (reflections)
- Multiple Fresnel zones operating together (multipath).

Case 1: Unobstructed Fresnel zone in free space
Understanding the simple 1st Fresnel zone is fundamental to understanding all the others. Consider a simple point-to-point communication path from antenna A to antenna B:

We are interested in the 1st Fresnel zone because it envelops 80% of the power available to the receiver. In practice, the rest (20% of the total) is ignored, being minor compared to the dominant zone.
The equation for the radius of the 1st Fresnel zone at any point along the path is as follows:

$$ r_1 = \sqrt{\frac{\lambda\, d_1 d_2}{d_1 + d_2}} $$

where λ is the wavelength of the signal and d₁ and d₂ are the distances from that point to antennas A and B.

Using the half-way point, where the radius is at its maximum (d₁ = d₂ = D/2, giving r₁ = ½√(λD)), we compute the following table:
| Frequency | Wavelength, ft | Distance, miles | Max 1st Fresnel Zone Radius, ft | Radius / Path Length |
|---|---|---|---|---|
| 914 MHz | 1.076 | 1.0 | 37.70 | 0.714% |
| 1390 MHz | 0.707 | 1.0 | 30.56 | 0.578% |
| 2350 MHz | 0.4185 | 1.0 | 23.50 | 0.445% |
| 4700 MHz | 0.2093 | 1.0 | 16.62 | 0.315% |
We take note of the narrow width of the 1st Fresnel zone at these frequencies; it is far narrower than most people expect.
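As a quick check of the table, here is a minimal sketch of the mid-path radius calculation; the speed-of-light constant in feet per second is the only added assumption:

```python
import math

# Quick check of the table above using the 1st Fresnel zone radius formula.
C_FT = 983.571e6                  # speed of light, ft/s

def fresnel_r1_max_ft(freq_hz, path_miles):
    """Maximum (mid-path) radius of the 1st Fresnel zone, in feet:
    r1 = sqrt(lam * d1 * d2 / (d1 + d2)) with d1 = d2 = D/2."""
    lam = C_FT / freq_hz
    d = path_miles * 5280.0
    return 0.5 * math.sqrt(lam * d)

for f_mhz in (914, 1390, 2350, 4700):
    r = fresnel_r1_max_ft(f_mhz * 1e6, 1.0)
    print(f"{f_mhz} MHz: {r:5.2f} ft  ({100 * r / 5280:.3f}% of the path)")
```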
If you have no obstructions inside the 1st Fresnel zone, you will experience uninterrupted communications. Antenna heights are therefore chosen to keep the path at least one Fresnel zone radius above any obstructions.
Situations like this often occur in point-to-point microwave communications. Satellites, aircraft, and high-altitude UAVs are similar. These all follow the Friis transmission equation without modification:

$$ P_r = P_t\, G_t\, G_r \left(\frac{\lambda}{4\pi d}\right)^{2} $$

where P_t and P_r are the transmitted and received powers, G_t and G_r are the two antenna gains, and d is the distance between the antennas.
There is an issue here that often arises: small platforms (such as UAVs) require transmit and receive antennas to be close to each other. The design criteria for most radios are based on low noise figure, not selectivity. If you want to transmit at one frequency and receive at another, filtering is required so as to not jam (overload) your receiver.
Case 2: The occluded Fresnel zone (obstructions)

The issue here is obstruction.
Some years ago, a friend of mine was installing a microwave link for a television station near Atlanta, Georgia, and he was seeing a brief but total signal loss every few minutes. He spent two weeks trying to figure out what could be wrong with his equipment. He then realized that he was trying to send a microwave signal through one of the approach paths to the Atlanta airport, which he could not see. Every time a large plane passed through his signal path, his signal vanished and was then quickly restored. He had a Fresnel zone obstruction problem.
Monsieur Fresnel would have been proud.
When there is a significant obstruction (like a building) between antenna A and antenna B, the Huygens wavelets that occupy the 1st Fresnel zone try to recreate the signal from what remains unobstructed, based on the boundary conditions of the individual wavelets. In the event we need to send a signal around an obstacle, Monsieur Fresnel also gives us the tools to calculate the effects of knife-edge diffraction and rounded surface diffraction. We can model the electromagnetic wave into the shadow of an obstruction, but that is rarely done as these losses are high.
In the case of obstruction, there is a rule of thumb that has gained widespread acceptance: “Do not obstruct the 1st Fresnel zone by more than 40% of the radius.” There will be some additional losses, but they are not large.
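For example, using the 2350 MHz row of the table above: 40% of the 23.5 ft mid-path radius is about 9.4 ft, so an obstruction could intrude roughly 9 ft into the zone at mid-path before the added loss becomes significant.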
This case applies to point-to-point radio links, usually terrestrial.

Case 3: Fresnel zone with ground
The issue here is ground reflections.
When the 1st Fresnel zone intercepts ground, it creates a phase reversed copy of itself. The receiving antenna now sees two signals that can cancel each other.
Ground is the dominant reflection mechanism. What is non-intuitive is that only a very small area of ground creates meaningful reflections: a narrow but long elliptical surface where ground intercepts the 1st Fresnel zone. Moreover, the roughness of that area has little influence on the quality of the reflection. The Rayleigh Roughness Criterion tells us when a reflecting surface is smooth (specular) or rough (diffuse). It is a function of the height variation of the surface, the wavelength, and the angle of incidence, which is usually quite low.
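For reference (the formula is not given in the text), the Rayleigh criterion is commonly stated as: a surface behaves as a smooth, specular reflector when its height variation Δh satisfies

$$ \Delta h < \frac{\lambda}{8\sin\psi} $$

where ψ is the grazing angle. At the very low grazing angles typical of these ground reflections, sin ψ is small, so even visibly rough terrain passes the test; this is why surface roughness has so little influence here.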
Multi-path fading takes place fairly often in low-altitude UAV, UGV, and vehicle communications, and the fade point can be precisely calculated. Moreover, I have tested this in the field several times, and the data matches the model exactly.
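Here is a minimal sketch of that fade-point calculation using the classic two-ray flat-earth model; the frequency and antenna heights are illustrative assumptions, not values from the text:

```python
import math

# Two-ray (flat-earth) fade-point sketch. The direct ray and the
# phase-reversed ground reflection cancel wherever the path-length
# difference is a whole number of wavelengths.
C = 299.792458e6                  # speed of light, m/s
f = 914e6                         # carrier frequency, Hz (assumed)
lam = C / f
ht, hr = 10.0, 2.0                # antenna heights, m (assumed)

def excess_path(d):
    """Reflected-minus-direct path length over flat ground, meters."""
    return math.hypot(d, ht + hr) - math.hypot(d, ht - hr)

def null_distance(n, lo=1.0, hi=1e5):
    """Solve excess_path(d) = n * lam by bisection (it falls with distance)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if excess_path(mid) > n * lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (1, 2, 3):
    d = null_distance(n)
    approx = 2.0 * ht * hr / (n * lam)       # standard far-field estimate
    print(f"null {n}: {d:7.1f} m (far-field estimate {approx:7.1f} m)")
```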
Note that we are dealing here with reflections, not obstructions. The two Fresnel zones (direct and reflected) are identical to each other, and the information content of one is identical to the other. At a “multi-path null” there is no signal processing gain because there is nothing to work with: the two signals, being reversed in phase, cancel each other.
The size and shape of that intersection with ground is a 2-dimensional ellipse, similar in shape to the main ellipsoid but scaled in size. Reflections outside this ellipse no longer matter. It may be a huge ground plane, but only the reflections from within this very narrow area matter.
Ground plane material is of little significance. At frequencies above 300 MHz, the displacement term of the intrinsic impedance dominates the electromagnetic properties of both ground and seawater, making them behave as effectively homogeneous reflectors. Moreover, it is rare to encounter conductive metal surfaces in outdoor propagation studies.
The ground reflection problem can be solved by using two antennas, one higher than the other by 20-40%. This ensures that the multipath nulls of the two antennas do not occur at the same time.
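Continuing the illustrative two-ray sketch above: raising the receive antenna from 2.0 m to 2.6 m (30% higher) moves the first null from roughly 122 m out to roughly 159 m, so the two antennas do not fade at the same place.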
Case 4: Multiple Fresnel zones operating together (severe multipath).
The issue here is “overspreading,” where one signal overlaps another. This is a common occurrence with in-building WiFi systems using omnidirectional antennas where the local reflections from a transmitted signal create a self-induced noise floor. In the urban model, we assume that the environment is rich with several competing Fresnel zones from several directions, all jamming each other. There is no “null signal” condition. There is always some signal there to be processed from environmental reflections.
It is MIMO (multiple-input multiple-output) communications that now takes our COFDM signal processing to a new plateau. Here we use all our new signal processing techniques, and we design a waveform to reduce the effects of mutual interference. Then we introduce multiple transmit antennas and multiple receive antennas in order to create as many independent diversity channels as possible and sort it out later in the Digital Signal Processor.
Our purpose is to create as many independent signal channels as possible through any available independent Fresnel zones (not just the 1st Fresnel zone) to create one viable channel. Our goal is “Diversity by any and all means possible.”
In the case of MIMO, we require multiple antennas. The signal transmitted on each antenna is unique in both waveform and polarization, even if the antennas are co-located. Given the dynamic nature of the urban model, total fading is statistically unlikely.
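A minimal 2x2 sketch of this idea follows, using zero-forcing separation as the simplest illustrative detector; it is not presented as any particular product’s method, and real systems use more sophisticated processing:

```python
import numpy as np

# 2x2 MIMO sketch: two antennas transmit different symbol streams at the
# same time and frequency; the receiver, knowing the 2x2 channel matrix,
# separates them in the DSP by inverting the channel (zero-forcing).
rng = np.random.default_rng(3)
n_sym = 5

s = rng.choice([-1.0, 1.0], size=(2, n_sym))        # two BPSK streams
H = (rng.standard_normal((2, 2)) +                   # 2x2 fading channel
     1j * rng.standard_normal((2, 2))) / np.sqrt(2)
r = H @ s + 0.01 * (rng.standard_normal((2, n_sym)) +
                    1j * rng.standard_normal((2, n_sym)))

s_hat = np.sign((np.linalg.inv(H) @ r).real)         # zero-forcing detector
print("sent:\n", s)
print("recovered:\n", s_hat)
```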
The Antenna Problem
Multipath diversity … once the main curse of microwave communications … has now become its salvation. MIMO and COFDM provide us with powerful tools to break through the issues of diversity and overspreading of the communications channel.
What remains for the antenna developer is to develop new products and techniques which enable independent signal channels by providing spatial, polarization, and frequency diversity without increasing the size or weight of the antenna.
By design, antennas can provide any of several modes of physical diversity:
- Spatial diversity (physical distance)
- Polarization diversity
  - Vertical / Horizontal
  - Slant Left / Slant Right
  - Right Hand / Left Hand Circular Polarization (RHCP / LHCP)
- Frequency diversity
  - Multiple frequencies and/or frequency bands
  - Broadband
All these depend on the “use-case” of the system being developed. Each requires careful technical study of the application in order to have assurance of useful outcomes.
It may appear that we have reached a plateau in the development of microwave communication, yet that is not the case. In the rapid development of communications technology over the past 125 years, we seized on elegant solutions with hidden vulnerabilities.
For example, satellite warfare stands before us. A technically capable adversary can readily deny us access to our broad range of communications satellites, either by direct jamming or by destroying them outright. Destroying strategic satellites or rendering them incapable is far less costly than replacing them. There are many reasonable scenarios where this makes sense.
So we look behind us to revisit those resources we bypassed on our way to our present elegant solutions, and we use many of the same tools to recover non-satellite resources: HF, VHF, meteor burst, and troposcatter. These modes of communication were set aside in their prime because we lacked effective tools to support them. That is no longer the case.
With the tools now available to us, we find ourselves in a fertile new field of communications development with exciting new capabilities.
That which was once old has now become new again. The possibilities are exciting!