Review: Aerohive Dynamic Airtime Scheduling - Page 2

By Lisa Phifer

July 06, 2009

Controlling QoS

As explained in Part 1, HiveAPs are configured using a HiveUI Web interface or HiveManager appliance. QoS parameters, such as WMM priorities, can be tweaked using HiveUI, but Dynamic Airtime Scheduling must be enabled via HiveManager or the HiveAP CLI.

Dynamic Airtime Scheduling determines how a HiveAP shares a channel across all of the clients covered by a WLAN policy. This simple toggle works in conjunction with the HiveAP's other QoS control mechanisms, including traffic classification, per-user queuing, rate limits, scheduling, and WMM prioritization.

Within each HiveAP, a policy-driven QoS engine maps arriving frames into per-user queues based on MAC address, interface, SSID, TCP/UDP protocol, and/or markers (802.1p priority, WMM category, DiffServ tag). Each client gets eight user queues, which feed the radio's four WMM hardware queues; scheduling parameters control when frames move from user queues onto those hardware queues. For example, voice queues can be serviced by strict priority while best-effort queues are drained by weighted round robin. Maximum rates can also be applied to users, groups, or queues to cap total bandwidth consumption (Figure 1, below).
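To picture how this engine behaves, here is a minimal sketch in Python. It is our own simplification, not Aerohive code: it models only two traffic classes per client, and the MAC addresses, weights, and frame labels are made up for illustration.

```python
# Minimal sketch of per-user queuing (not Aerohive's implementation):
# queued voice frames are serviced by strict priority, then best-effort
# frames drain by weighted round robin. Names and weights are illustrative.
from collections import deque

VOICE, BEST_EFFORT = "voice", "best_effort"

class UserQueues:
    """Per-client queues, keyed by traffic class."""
    def __init__(self, mac, weight=1):
        self.mac = mac
        self.weight = weight                       # WRR weight for best effort
        self.queues = {VOICE: deque(), BEST_EFFORT: deque()}

    def enqueue(self, traffic_class, frame):
        self.queues[traffic_class].append(frame)

def schedule(users):
    """Yield (mac, class, frame) in service order."""
    # Strict priority: every queued voice frame goes out first.
    for u in users:
        while u.queues[VOICE]:
            yield u.mac, VOICE, u.queues[VOICE].popleft()
    # Weighted round robin: each pass, a user may send up to `weight` frames.
    while any(u.queues[BEST_EFFORT] for u in users):
        for u in users:
            for _ in range(u.weight):
                if u.queues[BEST_EFFORT]:
                    yield u.mac, BEST_EFFORT, u.queues[BEST_EFFORT].popleft()

# A handset's voice frames are always served before two laptops whose
# best-effort frames interleave 2:1 according to their weights.
handset = UserQueues("aa:bb:cc:00:00:01")
fast_pc = UserQueues("aa:bb:cc:00:00:02", weight=2)
slow_pc = UserQueues("aa:bb:cc:00:00:03", weight=1)
handset.enqueue(VOICE, "voice-1")
handset.enqueue(VOICE, "voice-2")
for i in range(4):
    fast_pc.enqueue(BEST_EFFORT, f"fast-{i}")
    slow_pc.enqueue(BEST_EFFORT, f"slow-{i}")

for mac, traffic_class, frame in schedule([handset, fast_pc, slow_pc]):
    print(mac, traffic_class, frame)
```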

Figure 1. Aerohive QoS Policies.

Turning Dynamic Airtime Scheduling on causes this engine to schedule traffic by airtime instead of bandwidth. Normally, two clients with the same weight would be allowed to send the same amount of data at a given priority, while a client with twice the weight could send double the data. With Dynamic Airtime Scheduling, a HiveAP makes its priority and weighting decisions based not on the number and size of frames, but on the time those frames actually take to transmit.
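To make that accounting difference concrete, here is a back-of-the-envelope sketch. It is our own simplification rather than Aerohive's code; the function name, units, and example rates are assumptions.

```python
# Our simplified view of what one frame "costs" a client's scheduling
# quota under bandwidth-based versus airtime-based accounting.
def frame_cost(frame_bytes: int, phy_rate_mbps: float, airtime_mode: bool) -> float:
    if airtime_mode:
        # Charge the microseconds the frame occupies the channel.
        return frame_bytes * 8 / phy_rate_mbps
    # Otherwise charge the bytes themselves, regardless of data rate.
    return float(frame_bytes)

# A 1500-byte frame costs both clients the same 1500 units in bandwidth
# mode, but roughly 44 us at 270 Mbps versus ~1850 us at 6.5 Mbps in
# airtime mode -- a 40x difference the scheduler can now see.
for rate in (270.0, 6.5):
    print(rate, frame_cost(1500, rate, airtime_mode=False),
          round(frame_cost(1500, rate, airtime_mode=True), 1))
```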

Thus, as a client's data rate starts to fall, Aerohive's scheduler can react. Faster clients will start receiving more transmissions sooner instead of waiting around for the slower client to finish. Conversely, as faster clients finish, that slower client gets to use bigger chunks of airtime, making up lost ground. This algorithm uses each HiveAP's visibility into actual transmit times, adjusting allocations when otherwise speedy clients hit dead spots or interference. But importantly, WMM priorities and rate limits are still enforced—for example, even slow VoIP handsets still need to exchange top-priority SIP frames frequently enough to avoid jitter.
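A quick numeric sketch, using idealized rates and ignoring protocol overhead, shows the scale of that effect when one of two equal-weight clients slows down; the rates chosen are our own examples.

```python
# Back-of-the-envelope comparison (ideal rates, no protocol overhead) of
# byte-fair versus airtime-fair sharing between one fast and one slow client.
fast_rate = 270e6   # bit/s: an 802.11n client close to the AP
slow_rate = 6.5e6   # bit/s: a client that has dropped to a very low rate

# Byte-fair: both clients are granted the same number of bits per cycle,
# so the channel spends most of its time serving the slow client and
# both end up with only a few Mbit/s of goodput.
bits_per_cycle = 1e6
cycle_seconds = bits_per_cycle / fast_rate + bits_per_cycle / slow_rate
print("byte-fair, per-client goodput: %.1f Mbit/s"
      % (bits_per_cycle / cycle_seconds / 1e6))

# Airtime-fair: each client owns half of every second of air, so the fast
# client keeps most of its speed while the slow one gets half its rate.
print("airtime-fair, fast client: %.1f Mbit/s" % (fast_rate * 0.5 / 1e6))
print("airtime-fair, slow client: %.1f Mbit/s" % (slow_rate * 0.5 / 1e6))
```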

Creating our test WLAN

To exercise this feature in our own live hive, we defined two WLAN policies: one for 2.4 GHz and another for 5 GHz (Figure 2, below). We tied each to a single SSID policy linking our "PerfTest" SSID to one radio band, supported data rates, MAC and traffic filters (null), and beaconed capabilities like WPA/WEP encryption (off) and WMM prioritization (on). We configured our 5 GHz radio profile to permit only 802.11n clients on a 40 MHz channel, with MAC frame aggregation on and the short guard interval (SGI) off. We configured our 2.4 GHz radio profile to permit 802.11g/n clients on a 20 MHz channel, without aggregation or SGI.
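For reference, here is how we would summarize those two profiles as data. The field names are our own shorthand for the settings just described, not HiveManager or HiveOS configuration syntax.

```python
# Our shorthand summary of the two radio profiles used in testing;
# field names are illustrative, not HiveOS/HiveManager syntax.
radio_profiles = {
    "PerfTest-5GHz": {
        "band": "5 GHz",
        "permitted_clients": "802.11n only",
        "channel_width_mhz": 40,
        "frame_aggregation": True,
        "short_guard_interval": False,
    },
    "PerfTest-2.4GHz": {
        "band": "2.4 GHz",
        "permitted_clients": "802.11g/n",
        "channel_width_mhz": 20,
        "frame_aggregation": False,
        "short_guard_interval": False,
    },
}
```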

Figure 2. WLAN Policy.

These single-SSID, single-band policies let us control multi-band clients without touching them mid-test. To change scenarios, we used our HiveManager to push a new policy under test to one HiveAP. Once activated, all clients automatically reassociated to that AP's "PerfTest" WLAN using the desired band/channel and data rates. This helped us limit the variables that changed during each run—even minor differences in client location or orientation can affect data rate.

Of course, in open air tests, one can't control everything. We used a WLAN analyzer to measure air quality and configured our HiveAP to use the cleanest available channel in each band, realizing there would be some interference at 2.4 GHz. But our objective wasn't to achieve the highest possible throughput. We wanted to measure the difference between our own clients, operating with and without slow client drag, and with and without Dynamic Airtime Scheduling.

At 5 GHz, we staged four 802.11n clients: three dual-band Linksys WPC600n clients and one TRENDnet dual-band TEW-664UB client. All four were placed close to the HiveAP, operating reliably at 270 Mbps. From an IxChariot console, we launched Ixia's High Performance Throughput script, sending each Wi-Fi client a 1 MB file over TCP from a GigE laptop tethered to the HiveAP's 10/100/1000 Ethernet port. This established our downlink baseline: 50 seconds per completed test run, averaging ~30 Mbps throughput per client.
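As a sanity check on that baseline, the per-client averages imply roughly 120 Mbps of aggregate downlink throughput; the arithmetic below is our own, not an IxChariot report.

```python
# Our own arithmetic on the 5 GHz baseline figures reported above.
clients = 4
per_client_mbps = 30.0
aggregate_mbps = clients * per_client_mbps       # ~120 Mbit/s downlink total
phy_rate_mbps = 270.0
efficiency = aggregate_mbps / phy_rate_mbps      # fraction of the PHY rate
print(f"aggregate ~{aggregate_mbps:.0f} Mbit/s, "
      f"about {efficiency:.0%} of the 270 Mbit/s PHY rate")
```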
