Review: Aerohive Dynamic Airtime Scheduling

By Lisa Phifer

July 06, 2009

Introducing drag

We then moved one WPC600n to a location where its Tx/Rx rate consistently fell to 81.5 Mbps, repeating the test without Dynamic Airtime Scheduling. As expected, the under-performing client hurt everyone. Not only did our distant client take 2:40 to finish, but fast clients now took nearly two minutes apiece (below). Moving one 802.11n client—identical in type and configuration—had degraded our entire WLAN's throughput.

Figure 3. Downlink IxChariot High Throughput Tests.

Next, we left those four 802.11n clients in place, checked the "Enable Dynamic Airtime Scheduling" box on our 5 GHz WLAN profile, pushed it to our HiveAP, and waited for associations to be reestablished. Fast client transfer times were cut in half. Even our distant client's download completed in 1:50 (vs. 2:40). Without a doubt, we saw better aggregate WLAN throughput with Dynamic Airtime Scheduling enabled than without.

Don't suspect beginner's luck or a one-time fluke. We repeated all of these 5 GHz tests several times and ran similar tests at 2.4 GHz, varying client device populations. Dynamic Airtime Scheduling consistently shortened our faster clients' download times and improved their throughputs, without visibly lengthening slower clients' downloads. We found this to be true whether clients were slowed by distance, interference, or protocol type. However, clients' Tx/Rx rates had to actually drop to produce these results: a "sticky client" that stubbornly refused to adjust its rate produced inconsistent outcomes.
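
To make the downlink arithmetic concrete, here is a minimal Python sketch of our own simplified model (not Aerohive's published algorithm). A per-packet round robin equalizes throughput, so every client's goodput collapses to a single value dominated by the slowest rate; dividing airtime evenly instead lets each client's goodput track its own rate. The 130 and 81 Mbps figures are illustrative.

    # Simplified scheduling model (our assumption, not Aerohive's algorithm).

    def round_robin(rates_mbps):
        # Equal packets per client: each "round" costs sum(1/r) of airtime,
        # so every client's goodput is pinned at 1 / sum(1/r).
        per_client = 1 / sum(1 / r for r in rates_mbps)
        return [per_client] * len(rates_mbps)

    def airtime_fair(rates_mbps):
        # Equal airtime per client: goodput scales with each client's own rate.
        n = len(rates_mbps)
        return [r / n for r in rates_mbps]

    # Three fast 802.11n clients plus one whose rate has adapted downward.
    rates = [130, 130, 130, 81]  # effective Tx/Rx rates in Mbps (illustrative)

    for name, fn in (("round robin", round_robin), ("airtime fair", airtime_fair)):
        per = fn(rates)
        print(f"{name:12s} per-client Mbps: " + ", ".join(f"{p:.1f}" for p in per)
              + f" | aggregate: {sum(per):.1f}")

In this toy model the fast clients gain while the slow client's share shrinks; it does not capture the second-order effect we observed, where the slow client's download also finished sooner, presumably because less contention means less retry overhead for everyone.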

Looking up

Note that these results illustrate Dynamic Airtime Scheduling's impact on downlink performance: predominantly one-way file transfers, with some uplink TCP ACK and control traffic. In real life, downlink traffic very often exceeds uplink traffic, but whether this is true for your WLAN really depends upon application mix.

An AP-driven optimization like Dynamic Airtime Scheduling cannot exert tight control over uplink performance because, under standard 802.11 DCF/HCF, APs cannot stop clients from transmitting. According to Aerohive VP of Product Management Adam Conway, uplink throughput is also more heavily affected by each client's Packet Error Rate (PER).
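
To put rough numbers on the PER point (our simplification, not HiveOS internals): every errored frame must be retransmitted, so a client's uplink goodput falls with its error rate regardless of how the AP schedules downlink traffic.

    # Back-of-the-envelope PER model (our assumption): with frame errors,
    # only the (1 - PER) fraction of transmissions delivers data, so
    # goodput ~ rate * (1 - PER), ignoring backoff and MAC overhead.

    def effective_rate(phy_rate_mbps, per):
        return phy_rate_mbps * (1 - per)

    for per in (0.0, 0.1, 0.3):
        print(f"PER {per:.0%}: a 130 Mbps client delivers ~{effective_rate(130, per):.0f} Mbps")

This is also why uniform-PER lab gear (like the VeriWave setup Conway mentions below) makes the scheduler's effect easy to isolate: with error rates equalized, airtime allocation is the only variable left.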

However, Dynamic Airtime Scheduling does take total client airtime (downstream plus upstream) into account when deciding how to schedule traffic. It can also influence the rate at which TCP applications send data upstream by slowing the return of TCP ACKs, applying upper-layer back-pressure to throttle selected clients. While this isn't as effective as airtime scheduling, Aerohive argues that it can still have a positive effect on uplink performance—especially when the overall traffic mix is bi-directional.
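
A toy model of that back-pressure (our illustration; the 64 KB window and delays below are not Aerohive parameters): a TCP sender's rate is roughly its window divided by the round-trip time, so holding a client's ACKs a few extra milliseconds inflates the RTT that client observes and throttles its uploads.

    # Sketch of ACK-based back-pressure (our illustration): TCP send rate
    # is roughly cwnd / RTT, so delaying ACK return inflates the sender's
    # observed RTT and caps its upstream rate.

    def tcp_rate_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

    window = 64 * 1024  # 64 KB window (illustrative)
    for ack_hold_ms in (0, 5, 20):
        rtt = 10 + ack_hold_ms  # 10 ms base RTT plus ACK hold time (illustrative)
        print(f"ACKs held {ack_hold_ms:2d} ms -> ~{tcp_rate_mbps(window, rtt):5.1f} Mbps")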

During our uplink-only tests, our four fast clients' upstream throughputs ranged from 30 to 50 Mbps, with an average run time of 30 seconds. Adding a slow client resulted in lower throughput and a longer run for that one client, but had no noteworthy impact on the faster clients. In this case, turning Dynamic Airtime Scheduling on did not visibly change our WLAN's performance.

According to Conway, "The reason you don't see much improvement [on your uplink test] is that there is a lot less precision with upstream performance (because we hold the RTT and don't actually affect the 802.11 MAC)." Furthermore, "the PER is more meaningful than Airtime Scheduling so you get desired results with or without Airtime Scheduling. If the PER is the same for all clients (like with VeriWave) then [Dynamic Airtime Scheduling] is really easy to see." He suggested that tests with more clients and bi-directional tests might generate more readily visible benefits.

We did run bi-directional tests, composed of three downloads and one upload. Slowing one download client degraded the throughputs experienced by other download clients, without impacting our upload client. When we enabled Dynamic Airtime Scheduling, all fast client downloads improved—but we still didn't see significant uplink impact. Conway demonstrated more impact in his own bi-directional IxChariot tests using mostly Intel a/b/g and n clients.

Figure 4. Bi-Directional IxChariot High Throughput Tests.

It may be that small bi-directional tests are more strongly influenced by client-specific behaviors. This is why we conducted our tests with three identical clients and one different client, sticking to 802.11n. Moving one of the identical clients let us cause location-based differences without being distracted by client-specific behaviors. On the other hand, including one different client gave us confidence that our results were not unique to one card.

Bottom line

Our tests, while conducted methodically and repeated to produce consistent results, are still informal open air tests. Although we ran 2.4 GHz tests, we chose to describe only 5 GHz results because that band was free of outside interferers. Similarly, although we tested some mixed 802.11g/n scenarios, we focused on our 802.11n results because those gains cannot be achieved through allocation based on protocol type alone.

In the end, it doesn't much matter whether a slow client is a cranky or mis-configured 802.11n device or a legacy 802.11g NIC that maxes out at 54 Mbps. Any client that requires longer transmit times can dominate the downlink—and we saw for ourselves that Dynamic Airtime Scheduling made a difference. Even though we did not experience much uplink benefit, we never paid a penalty for enabling Dynamic Airtime Scheduling. Admins may as well turn this feature on, available at no additional charge to Aerohive customers running HiveOS 3.2 or later.

Real-world benefits depend on client and application mix, so we encourage admins to run their own open air tests with representative devices and traffic. You may not see improvement in lightly used WLANs where clients operate at similar data rates, but throughput increases are likely in congested, dense WLANs where competition is high, and in sparse or diverse WLANs where one really bad apple can spoil the whole bunch.
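
For readers without IxChariot, here is one way (ours, not the article's test setup) to script a rough open air comparison: push TCP traffic to several wireless clients at once, each running "iperf3 -s", and record per-client throughput with Dynamic Airtime Scheduling off, then on. Hostnames and durations are placeholders for your own environment.

    # Concurrent downlink throughput snapshot using iperf3 (hostnames are
    # hypothetical; each client must be running "iperf3 -s").
    import json
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    CLIENTS = ["client-a.local", "client-b.local", "client-c.local", "client-d.local"]

    def downlink_mbps(host, seconds=60):
        """Send TCP traffic to `host` for `seconds`; parse iperf3's JSON report."""
        out = subprocess.run(
            ["iperf3", "-c", host, "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)["end"]["sum_sent"]["bits_per_second"] / 1e6

    # Start all transfers together so the clients actually contend for airtime.
    with ThreadPoolExecutor(max_workers=len(CLIENTS)) as pool:
        results = dict(zip(CLIENTS, pool.map(downlink_mbps, CLIENTS)))

    for host, mbps in results.items():
        print(f"{host}: {mbps:.1f} Mbps")

    # Re-run with Dynamic Airtime Scheduling toggled on the WLAN profile
    # and compare the two result sets.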

Lisa Phifer owns Core Competence, a consulting firm focused on business use of emerging network and security technologies. She has been involved in the design, implementation, assessment, and testing of NetSec products and services for over 25 years.
