Introduction
The Internet now permeates virtually every society on earth and every level of those societies, and the rush to provide access has produced a customer base counted in the billions. Access to the Internet, however, requires both local access and a "pipe" to carry that local traffic to and from a point connected to one of the major switches that form the backbone of the Internet. These pipes and switches girdle the globe, but they are not accessible to two groups that differ widely in relative economic level. The first group is the hundreds of millions of people across vast areas of Africa and Asia, where a combination of poverty and lack of infrastructure makes it unlikely that fibre will be provided, at least in the near future. The second group consists of areas in relatively wealthy countries such as Canada and the USA where the population density is too low to justify construction of fibre trunks. The goal in both cases is to provide "broadband" service, as opposed to the dial-up service that has been available for some time to the group in the US and Canada. Despite the lack of access to fibre, it is feasible to provide broadband service to both groups by using "hot spots" for local access. These hot spots are in turn connected to a wireless backhaul network that carries the traffic, via one or more hops, to an Internet access point. In a developed country this access point will be a connection to the fibre infrastructure; in an underdeveloped country it will likely be a satellite link. The organisation that provides this backhaul and local access service is called a WISP (Wireless Internet Service Provider).
The Current System
The backhaul systems currently used to support the service described above have problems that can best be explained by looking at figure 1. Four local access points, A, B, C and D, are connected to an Internet access point I by wireless links 1, 2, 3 and 4. The links are full duplex: at each point there are four radio channels, two pairs of radios back-to-back. The access point at each site consists of one or more "hot spots", plus copper links supporting nearby clients, all connected to a router; at intermediate points the router has a duplex port connected to each pair of radios at that point.
Traffic picked up at site A is collected by the router, sent to the radio at that site and transmitted over link 1 to the router at access point B. The router at point B accepts the inbound traffic from point A, adds the local traffic generated at point B, and forwards the combined traffic to the router at point C. The process is repeated until the upstream traffic reaches point I and is sent on to the Internet. Traffic flowing towards point I is termed upstream traffic; traffic in the other direction is termed downstream. The process is reversed for traffic arriving from the Internet, but in this case the flow is smoother since no further traffic is added at intermediate points. Local traffic, that is traffic generated at points A, B, C or D and destined for local points, is generally so minimal that it is simply sent upstream to an Internet switching point where it is turned around and finds its way back.
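To make this aggregation concrete, the short sketch below tallies the upstream load that each link in figure 1 must carry; the per-site rates are purely illustrative assumptions, not measurements.

    # Illustrative sketch: upstream traffic accumulates hop by hop toward the
    # Internet access point I. The per-site rates (in Mbps) are assumed values.
    site_rates_mbps = {"A": 3.0, "B": 5.0, "C": 2.0, "D": 4.0}

    # Links are numbered as in figure 1: link 1 carries A's traffic to B,
    # link 2 carries A plus B to C, and so on; link 4 feeds the access point I.
    carried = 0.0
    for link_number, site in enumerate("ABCD", start=1):
        carried += site_rates_mbps[site]
        print(f"link {link_number}: {carried:.1f} Mbps upstream")

    # The last link always carries the sum of all upstream traffic, which is
    # why capacity pressure is greatest on the link nearest the access point.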
The Problems
The system described above provides access to the Internet, but there are problems that are due mainly to the inherent nature of packet switching. The basic problem encountered by every router is that traffic arrives randomly: while the average load over some extended period may be well below the capacity of the outgoing link, there will be periods when the number of packets arriving for a single output port exceeds the capacity of that port. When this occurs the traffic must be buffered, but theory and practice tell us that if the average load is high enough the buffer capacity will at times be exceeded and packets will have to be discarded. This is a regular occurrence in Internet operations. Therefore, in a typical backhaul network:
1. Full capacity of the link cannot be utilised. As stated above, even when average traffic is well below the capacity of the link, the "bunching" effect of packets arriving at one of the routers means that capacity can be temporarily exceeded. (A simulation sketch of this effect follows the list.)
2. A guaranteed level of service cannot be provided. If a customer at site A wishes to transmit, say, exactly 100 Kbps for some period of time, in effect requesting a fixed-size channel, that customer cannot be satisfied except when the total loading on the system is well below capacity.
3. It is often necessary to increase the capacity of the link closest to the Internet access point, e.g., link 4, to accommodate surges in traffic generated at sites A, B and C even though the overall average traffic does not exceed the capacity of the normal link 4. See point 1.
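As a rough illustration of point 1, the sketch below simulates a single output port with a finite buffer fed by bursty arrivals; the parameters are arbitrary assumptions chosen only to show that drops occur even though the average load stays below the link capacity.

    import random

    # Toy model of a single output port: one packet can be sent per time slot,
    # the buffer holds at most BUFFER_SLOTS packets, and arrivals are bursty.
    # All parameters are illustrative assumptions.
    random.seed(1)
    BUFFER_SLOTS = 15
    LOAD = 0.9           # average arrivals per slot, below the capacity of 1.0
    SLOTS = 100_000

    queue = dropped = offered = 0
    for _ in range(SLOTS):
        # Bursty arrivals: up to 10 packets per slot, mean equal to LOAD.
        arrivals = sum(1 for _ in range(10) if random.random() < LOAD / 10)
        offered += arrivals
        for _ in range(arrivals):
            if queue < BUFFER_SLOTS:
                queue += 1
            else:
                dropped += 1     # buffer full: the packet is discarded
        if queue:                # the port serves one packet per slot
            queue -= 1

    print(f"average load {LOAD:.2f}, packets dropped: {dropped} of {offered}")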
These problems are endemic to any packet network and as such are considered part of normal operation. But the performance can be greatly improved.
(IMAGE HERE: Figure 1 – Conventional Backhaul Network)
A DQ Backhaul Network
A DQ (Distributed Queue Switch Architecture) backhaul network utilises technology developed at the Illinois Institute of Technology in the 1990s by Graham Campbell and his students that enables a single communications channel, operating at any speed, to be efficiently shared amongst multiple users distributed over any distance. The technology is described in [1], [2], [3]. A DQ backhaul network uses this technology, along with enhancements, to provide a backhaul system that overcomes all the problems described in the previous section. We use figure 2 to explain the advantages of DQ. The environment is similar to that of a conventional backhaul network, with one major difference: the four links are "tied" together, forming a single full-duplex channel. The router at each point is connected, via what are little more than simple taps, to the upstream and downstream channels. DQ thus treats all four links as a single communications channel and in the process achieves much improved performance.
Benefits of DQ
The now single channel is divided into fixed-size slots on both the downstream and upstream channels. It is important to note that the bits at the physical layer remain synchronised; the slots are created at what is called the data link layer, so in effect the segmenting is carried out by software. Approximately 90% of the capacity is available for data, with the remainder of the bandwidth used to carry requests from the routers at A, B, C and D and to convey network status back to these same routers. The DQ algorithm is distributed in that there is no central control (DQ can be understood to stand for distributed queueing); the data slots are apportioned to the users at A, B, C and D such that 100% of these slots may be utilised. Requests are honoured in the order that they are made, a fair system, but priorities can be used to ensure that important traffic is transmitted before any lower-priority traffic. Also, in the case of a user at site A that would like to transmit packets at a rate equivalent to 100 Kbps, it is possible to allocate recurring data slots to that user equivalent to exactly 100 Kbps [4]. This latter feature is impossible to achieve in conventional packet-switched systems. The benefits are as follows (a simplified sketch of the distributed bookkeeping appears after the list):
1. Full utilisation of the channel is achieved by allocating access along the entire length of the network. Users at the furthest point on the network are treated equally with users next to the Internet access point.
2. The balanced loading eliminates the need to increase the capacity of the link(s) closest to the Internet access point.
3. Service is first-come, first-served, which is not possible in conventional packet networks.
4. Priorities are applied on a packet-by-packet basis, not statistically as in conventional packet networks.
5. There is no packet loss due to congestion.
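The following is a simplified sketch of the bookkeeping idea behind these benefits, not the published DQRAP algorithm itself: because every site hears the same broadcast of granted requests and served slots, each site can keep an identical copy of the shared queue counters with no central controller, and it transmits only when its own position reaches the head of the queue. The class below and the one-slot-per-request simplification are assumptions made purely for illustration.

    class Site:
        """Minimal, assumed illustration of distributed-queue bookkeeping."""

        def __init__(self, name):
            self.name = name
            self.queue_length = 0   # data slots waiting in the shared queue
            self.position = None    # this site's own place in that queue

        def on_request_heard(self, requester_name):
            # A granted request for one data slot was broadcast; every site
            # updates the same counter, so all copies stay identical.
            if requester_name == self.name and self.position is None:
                self.position = self.queue_length   # join the tail
            self.queue_length += 1

        def on_data_slot(self):
            # One data slot has just been served; advance the queue.
            if self.position == 0:
                print(f"{self.name} transmits")
                self.position = None
            elif self.position is not None:
                self.position -= 1
            self.queue_length = max(self.queue_length - 1, 0)


    sites = [Site(n) for n in "ABCD"]

    # B requests a slot, then D does; every site hears both requests.
    for requester in ("B", "D"):
        for s in sites:
            s.on_request_heard(requester)

    # Two data slots are then served: requests are honoured in the order made.
    for _ in range(2):
        for s in sites:
            s.on_data_slot()

Requests are served strictly in the order in which they were heard, which is what makes the first-come, first-served behaviour in the list above possible, and recurring slots can be set aside in the same way for a fixed-rate user [4].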
(IMAGE HERE: Figure 2 - Backhaul Network with DQ)
Versatility
DQ has been demonstrated on a single-line topology, which is very common, but Figure 3a illustrates a mesh topology in which the geographic distribution of the access sites makes it possible to establish more than one path to the Internet access point I. What is significant is that all the traffic must finally pass through one site, so that site is the limit with respect to capacity no matter how many parallel paths may exist further back in the network. DQ will work with a mesh topology by treating it as a tree-and-branch topology in which all the branches feed the root of the tree. The benefits described above are still achieved.
But there is a more interesting way to take advantage of DQ: treating the same network of Figure 3a as a ring, as shown in Figure 3b. The links are connected as shown in Figure 2, but at the Internet access point I there is the flexibility of treating the network as a true ring or as two separate legs, with the separating point moved to balance the load on the two legs. Figure 3c shows that if there is a break in the ring, the network automatically operates as two single legs.
The ring topology can be taken further by organising the radios to operate as two rings in which traffic flows completely around the ring in opposite directions. A station wishing to transmit monitors each channel and transmits on the ring that has less traffic, leading to a balancing of traffic across the two rings. The other DQ features of priorities, fixed bandwidth and full utilisation remain available.
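As a rough illustration of how a station might pick between the two counter-rotating rings, the sketch below simply compares the load the station has observed on each ring and places new traffic on the lighter one; the RingState structure and the observed figures are assumptions made for the example, not part of any protocol specification.

    from dataclasses import dataclass

    @dataclass
    class RingState:
        name: str
        queued_slots: int   # outstanding data slots observed on this ring

    def choose_ring(clockwise: RingState, counter_clockwise: RingState) -> RingState:
        # Pick whichever ring currently has fewer slots queued; ties go to
        # the clockwise ring simply to make the choice deterministic.
        return min((clockwise, counter_clockwise), key=lambda r: r.queued_slots)

    cw = RingState("clockwise", queued_slots=42)
    ccw = RingState("counter-clockwise", queued_slots=17)
    print(f"transmit on the {choose_ring(cw, ccw).name} ring")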
Comments
Up to this point, no issues specific to the wireless nature of the links have been discussed with respect to DQ operation. The radio channel does, however, impose several considerations that must be taken into account. The architecture shown in Figure 2 treats all the links in the backhaul network as a single link, with the capability of injecting packets into the stream at the junction points according to the rules of the protocol. It is important to note that if the links have different PHY characteristics (e.g., data rate), the shared channel must run at the lowest of those rates. Applying DQ, an average utilisation of up to 90% of the PHY-layer capacity can be used efficiently, with no congestion, no collisions and the ability to provision fixed-bandwidth channels. Furthermore, DQ is a robust technique that does not impose any restriction on where (in which layer) the error-correction mechanisms and possible retransmission techniques are located. For fixed radio links the time variations of the channel can be slow, so long segments, a feature of the protocol, can be used at the MAC layer with no provision for retransmission, improving the overall efficiency.
Regarding the PHY layer, in principle any modulation and coding scheme can be supported under DQ since DQ is implemented at the MAC layer. OFDMA, however, appears to be the most widely used PHY layer for current and future wireless systems, mainly because transmitters are improving at handling signals with a high PAPR (peak-to-average power ratio) without sacrificing efficiency or incurring losses due to other non-linearity effects. OFDMA is also very robust against multi-path propagation, although in principle this should not be a significant issue for fixed radio links, and it simplifies the frequency planning of the radio links.
Another requirement of the protocol is that synchronism must be maintained at the MAC layer across the junction points, but there is flexibility in how long it takes for the physical bits to traverse a junction, as long as that time does not vary. With OFDMA, where an FFT must be carried out for each symbol, the timing restrictions and synchronism may become critical. However, if the processing times are constant, the transmission instants for each packet injected into the data flow can be adjusted to keep synchronism at the MAC level. DQ is able to work in networks of any size and with any round-trip delay, using a technique called interleaved DQ [4], or pipelining. So, if the total delay from one end of the network to the destination is such that multiple packets can occupy the channel at once, interleaved DQ ensures that the near-optimum features of the system are maintained.
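To give a feel for why interleaving is needed on a long path, the back-of-the-envelope sketch below estimates how many slots are in flight during one feedback round trip; the link rate, slot size and distance are illustrative assumptions, and the calculation is the usual propagation-versus-slot-time argument rather than a figure taken from [4].

    import math

    # Back-of-the-envelope sketch; all figures are illustrative assumptions.
    LINK_RATE_BPS = 100e6        # 100 Mbps shared channel
    SLOT_BITS = 12_000           # roughly a 1500-byte data slot
    PATH_KM = 120                # end-to-end length of the backhaul chain
    PROP_S_PER_KM = 3.3e-6       # radio propagation, about the speed of light

    slot_time = SLOT_BITS / LINK_RATE_BPS
    round_trip = 2 * PATH_KM * PROP_S_PER_KM
    slots_in_flight = math.ceil(round_trip / slot_time)

    # Roughly this many interleaved queues keep the channel occupied while
    # request feedback propagates back to the far end of the network.
    print(f"slot time {slot_time * 1e6:.0f} us, round trip {round_trip * 1e6:.0f} us")
    print(f"interleaving depth needed: about {slots_in_flight} slots")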
Conclusion
We have described a method of providing backhaul capability to WISPs that is superior to any presently utilised: full bandwidth utilisation, true priorities, packets intermingled with fixed-bandwidth channels, and, using the ring topology, a level of "self-healing" if a break occurs. The DQ solution can also be applied to satellite systems, cell-phone towers and any other communications system that serially connects more than two sites.
References
[1] W. Xu and G. Campbell "DQRAP - A Distributed Queueing Random Access Protocol for a Broadcast Channel", presented at SIGCOMM '93, San Francisco, September 14, 1993. Computer Communication Review, Vol 23, No. 4, Oct 1993, pp. 270-278.
[2] C.T. Wu and G. Campbell, "Extended DQRAP (XDQRAP): A Cable TV Protocol Functioning as a Distributed Switch", Proceedings of 1st International Workshop on Community Networking, July 1994, San Francisco.
[3] H. J. Lin and G. Campbell, "PDQRAP - Prioritized Distributed Queueing Random Access Protocol", Proc. of 19th Conference on Local Computer Networks, Oct. 1994, pp 82 - 91.
[4] C. T. Wu and G. Campbell "CBR Channels on a DQRAP-based HFC Network", SPIE '95 (Photonics East), Philadelphia, PA Oct. 1995.