Review: Aerohive HiveUI

By Lisa Phifer

March 27, 2009

With HiveUI, small businesses can tap Aerohive's cooperative control architecture to build feature-rich WLANs without having to pay for enterprise controllers or management systems.

Aerohive HiveOS 3.2 HiveUI

List Price: Included in HiveAP 320 ($1,299) or HiveAP 340 ($1,499)
Pros: Quick-and-easy setup; built-in RADIUS server and captive portal; attractively priced
Cons: Little status monitoring; no reporting; limited to one hive with up to 12 members


Aerohive's cooperative control strikes a unique balance in the WLAN architecture debate. HiveAPs are provisioned by a central manager, but otherwise operate autonomously. HiveAPs not only forward, filter, and shape traffic on their own--they even seek out neighbors to form a self-healing adaptive mesh that depends neither upon a WLAN controller nor a root node.

But until recently, a HiveManager appliance was required to configure HiveAPs. Starting at $2,999, that appliance was less expensive than enterprise WLAN controllers, but prohibitive for small businesses with just a few HiveAPs. In HiveOS release 3.2r1, Aerohive took aim at this untapped market by releasing HiveUI: an embedded Web manager that can provision up to 12 HiveAPs without a HiveManager appliance. To learn about the benefits and limitations of HiveUI, we decided to use it to build our own little "hive."

Assembling the pieces

Any Series 300 (802.11n) HiveAP running HiveOS 3.2r1 can be used to manage up to a dozen HiveAPs, including older Series 20 (802.11a/b/g) HiveAPs. For our review, we used one HiveAP 340 to provision a total of three HiveAP 340s and one HiveAP 320.

Both HiveAP models are currently available at a promotional price of $999 each, so our four-node dual-radio WLAN retailed for $3,996. In this small WLAN, doing without HiveManager cut our CAPEX by a compelling 43 percent. But HiveUI isn't an embedded HiveManager--it's a scaled-down GUI, designed for quick-and-easy setup of small hives with relatively basic needs.

For example, starting a hive is faster with HiveUI than with HiveManager. Just plug one HiveAP into your wired network, open a Web browser to that AP's DHCP-assigned IP, and launch the Startup Configuration page (see Figure 1). Promote that HiveAP to run HiveUI by checking "Server for WLAN Management;" all other HiveAPs default to "Client for WLAN Management." The only parameters that absolutely must be configured here are a hive name and the passphrase used to secure future traffic between hive members with WPA2-PSK.

Figure 1. HiveUI Startup Configuration

Like HiveManager, a HiveAP that runs HiveUI operates as a CAPWAP server. By default, all HiveAPs behave as CAPWAP clients, periodically broadcasting Discovery Requests until a Discovery Response is received from a CAPWAP Server. If the CAPWAP server is on the same wired or wireless segment, it hears those Discovery Requests and responds to them. If the CAPWAP server resides elsewhere, HiveAPs will find it by sending Discovery Requests to any IP bound to the hostname "hivemanager" or designated using the Startup Configuration page.
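To make that discovery sequence concrete, here is a minimal Python sketch of the client-side logic described above. It's an illustration, not Aerohive's code: real CAPWAP Discovery Requests are binary frames (on UDP port 5246, the CAPWAP control channel), not the placeholder payload used here, and the function names are our own.

    import socket

    CAPWAP_CONTROL_PORT = 5246   # CAPWAP control channel (UDP)
    DISCOVERY_TIMEOUT = 5        # seconds to wait for a Discovery Response

    def send_discovery(addr):
        """Send one Discovery Request and wait briefly for a response."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(DISCOVERY_TIMEOUT)
        try:
            # Placeholder payload; real CAPWAP frames are binary.
            sock.sendto(b"DISCOVERY-REQUEST", (addr, CAPWAP_CONTROL_PORT))
            _, server = sock.recvfrom(2048)   # Discovery Response
            return server[0]                  # the responding server's IP
        except socket.timeout:
            return None
        finally:
            sock.close()

    def discover_capwap_server(configured_ip=None):
        """Try each discovery method in the order described above."""
        # 1. Broadcast on the local segment; a server on the same
        #    wired or wireless segment answers directly.
        server = send_discovery("255.255.255.255")
        if server:
            return server
        # 2. Resolve the well-known hostname "hivemanager" via DNS.
        try:
            server = send_discovery(socket.gethostbyname("hivemanager"))
            if server:
                return server
        except socket.gaierror:
            pass
        # 3. Fall back to an IP set on the Startup Configuration page.
        return send_discovery(configured_ip) if configured_ip else None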

This standard CAPWAP exchange is how HiveUI learns about each brand new (or freshly reset) HiveAP on your network. However, auto-discovered HiveAPs cannot actually join your hive until an administrator moves them to the Managed HiveAP list. CAPWAP Join Request/Response messages are then exchanged, establishing the secure session that HiveUI uses to provision and maintain members of a single hive.

Getting this far took us five minutes when using a HiveAP with the latest HiveOS. Unfortunately, the first HiveAP we tried ran older firmware that merely burped out a cryptic error: "Failed opening required AhWebUIConf.class.php5." In fact, two of the five HiveAPs we received arrived with older firmware because our tests started right after HiveOS 3.2r1 was announced. This temporary mismatch was easily corrected, but played a role in a few other hiccups we encountered.

For comparison, we repeated set-up using a full-blown HiveManager appliance. It took us roughly 30 minutes to reach this same point because HiveManager requires console port CLI initialization, followed by Web map initialization. Topology maps are one of many features found in HiveManager, but not HiveUI--for good reason. With a dozen APs, creating tiered maps would add complexity for little benefit. But for enterprises with hundreds or thousands of HiveAPs, correlating devices to mapped locations (Site/Building/Floor) is an absolute necessity.

Branching out

Like many enterprise-class APs, 300 Series HiveAPs have two PoE-capable 10/100/1000 Ethernet ports for cabling to a wired backhaul network. Any HiveAP tethered this way is said to be operating as a "portal." (The second Ethernet interface can be used for dual-homing, failover, or providing bridged access to another wired segment.)

Alternatively, HiveAPs can behave as "mesh points" that establish wireless backhaul links through any nearby HiveAP portal. Each HiveAP 320 or 340 has one 2.4 GHz 802.11b/g/n radio (wifi0) and one 5 GHz 802.11a/n radio (wifi1). By default, wifi0 is configured for access; wifi1 for backhaul. This makes covering an unwired space simple: just provide AC power and the mesh point magically does the rest. So long as at least one other backhaul-capable HiveAP is within range, mesh points determine their own default route onto your wired network, using it to get an IP address and then find the HiveAP running HiveUI.
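Conceptually, a mesh point scores the neighbors it can hear and routes through the best backhaul candidate. The toy Python function below illustrates that idea only; Aerohive's AMRP routing is proprietary, so the neighbor fields and the strongest-signal tiebreak here are our own assumptions.

    # Toy model only -- not Aerohive's AMRP. Fields are illustrative.
    def pick_backhaul_neighbor(neighbors):
        """Choose an upstream link from backhaul-capable neighbors in range."""
        candidates = [n for n in neighbors if n["backhaul_capable"]]
        if not candidates:
            return None   # no portal or mesh point in range; keep scanning
        # Prefer the strongest link; a real mesh also weighs hops and load.
        return max(candidates, key=lambda n: n["rssi_dbm"])

    neighbors = [
        {"name": "HiveAP-portal", "backhaul_capable": True,  "rssi_dbm": -58},
        {"name": "HiveAP-access", "backhaul_capable": False, "rssi_dbm": -40},
    ]
    best = pick_backhaul_neighbor(neighbors)
    print(best["name"] if best else "no backhaul neighbor yet")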

In many enterprise WLANs, PoE provides simultaneous power and backhaul connectivity. However, in small businesses where PoE is uncommon, HiveAPs that are powered up before being cabled to Ethernet may form unintended wireless backhaul links. But even if accidental backhaul links do form, HiveAPs always fall back to Ethernet when available. (If you want wireless backhaul to take precedence, reconfigure Ethernet interfaces to bridge mode.)

WLANs that use wireless backhaul can do so in two ways: dedicate one radio to full-time backhaul (the default) or use both radios for access while designating one for backhaul failover. To enable the latter, radio profiles must be configured to trigger wireless failover whenever Ethernet is down for X seconds, reverting after Ethernet is back up for Y seconds. In practice, we found this a bit tricky: HiveAPs must fail over to a backhaul band/channel offered by a nearby portal--that cannot happen if every other HiveAP is still using both radios for access.
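The interplay of those two timers is easier to see in code. Here is a minimal sketch of the fail-over and revert logic as we understand it; the class and parameter names are ours, and HiveOS exposes these thresholds through radio profiles, not an API like this.

    class BackhaulFailover:
        """Sketch of the Ethernet-to-wireless backhaul failover timers."""

        def __init__(self, eth_down_secs, eth_up_secs):
            self.eth_down_secs = eth_down_secs   # "X": down this long -> fail over
            self.eth_up_secs = eth_up_secs       # "Y": up this long -> revert
            self.on_wireless = False
            self._last_state = None              # last observed Ethernet state
            self._changed_at = 0.0               # when that state began

        def observe(self, ethernet_up, now):
            """Feed one link-state sample; return the backhaul in use."""
            if ethernet_up != self._last_state:
                self._last_state = ethernet_up
                self._changed_at = now
            held = now - self._changed_at        # how long this state has persisted
            if not ethernet_up and not self.on_wireless and held >= self.eth_down_secs:
                self.on_wireless = True          # fail over to wireless backhaul
            elif ethernet_up and self.on_wireless and held >= self.eth_up_secs:
                self.on_wireless = False         # revert to Ethernet
            return "wireless" if self.on_wireless else "ethernet"

    fo = BackhaulFailover(eth_down_secs=10, eth_up_secs=30)
    print(fo.observe(ethernet_up=False, now=0))    # ethernet (grace period)
    print(fo.observe(ethernet_up=False, now=12))   # wireless after 10 s down
    print(fo.observe(ethernet_up=True,  now=20))   # still wireless
    print(fo.observe(ethernet_up=True,  now=55))   # ethernet after 30 s up

Even with the timers set correctly, the fail-over can only complete if a nearby portal is actually offering a backhaul channel--the constraint noted above.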

In fact, this failover configuration is one of the few wireless mesh options surfaced by HiveUI. Most other hive options exposed by HiveManager are hidden by HiveUI, including backhaul thresholds and traffic filters. In a pinch, HiveUI admins can use the HiveAP CLI to query mesh status (e.g., show amrp neighbor) as shown in Figure 2.

Figure 2. Managed Hive composed of portal and mesh points

We agree that small WLAN admins generally should not tweak parameters like the minimum signal strength required to form a backhaul link--useful in larger, high-density WLANs. For small businesses, mesh networks that are self-forming and self-healing are best. However, we think this audience could benefit from backhaul status-change and traffic-utilization summaries that HiveUI does not currently provide. For example, if essential parameters like the hive passphrase or backhaul radio profile are modified while a mesh point is down, that mesh point cannot reconnect--small WLAN admins may need hints to resolve such problems.
