Friday, February 15, 2008

What does a WAN accelerator appliance do?

There are several things this sort of appliance can accomplish. In a nutshell, the appliance can enforce quality of service rules, compress data, compress IP headers, accelerate TCP, accelerate CIFS (Common Internet File System), mitigate lost packets with forward error correction, and cache repeated data patterns at the byte level.

At a higher level, this sort of new product can enable server consolidation to a central site. It does this by lowering latency and raising effective throughput, making usable some applications that otherwise wouldn't perform acceptably across a WAN.

Consider this article from Silver Peak Systems:

The Emergence of Local Instance Networking (LIN)

By Craig Stouffer

As enterprises grow in size and enterprise applications become more critical to business operations, CIOs are faced with a design dilemma: should branch office infrastructure be centralized or distributed?

In a distributed implementation, e-mail servers, file servers and databases are placed locally within each branch location. While this typically provides the best possible performance to end users, it results in server sprawl, which can be costly to implement and creates a variety of management, security and compliance challenges.

The alternative is to consolidate server infrastructure into a select number of data centers, which enables all maintenance, troubleshooting, security policy enforcement, backups and auditing to be performed centrally. While this solves most of the challenges associated with server sprawl, it does not address one of the most important ones: performance. Most applications simply do not perform well over a wide area network (WAN) due to bandwidth and latency constraints.

Given the compelling arguments for server centralization, various solutions have emerged to try to improve application performance over enterprise WANs. WAN optimization products leverage compression and Quality of Service (QoS) techniques to maximize bandwidth utilization and prioritize enterprise traffic; application acceleration products employ application-specific caching and latency mitigation tools to improve performance on an application-by-application basis. While both generations of products have benefits, neither addresses the full set of challenges facing enterprise IT staff, from cost and performance to security and management.

A breakthrough approach is required to solve existing performance and scale limitations, while preserving application transparency. This is accomplished with Local Instance Networking, the first technology that provides all of the benefits of a centralized approach without compromising performance. LIN is the first network technology to improve application delivery while settling the centralized vs. distributed debate.

1st Generation: WAN Optimization

WAN optimization products are most often deployed as bandwidth band-aids, providing short-term benefits on congested WAN links where it is infeasible or too expensive to buy additional bandwidth. Although each vendor has its own proprietary implementation, WAN optimization solutions rely on two underlying technologies: compression and Quality of Service (QoS).

Compression

Compression is used to reduce the bandwidth consumed by traffic traversing the WAN.

The gains realized by compression techniques vary depending on the mix of traffic traversing the WAN. Text and spreadsheets, for example, are easy to compress, so they typically yield 25x performance gains. On the other hand, pre-compressed content, like zip files, cannot be compressed much further. On average, most enterprises deploying compression technology will see around a 50 percent improvement in WAN utilization, which is the equivalent of doubling the effective WAN bandwidth. This is often not enough performance improvement to justify the additional hardware expenditure and operation costs.

QoS

In an effort to maximize WAN utilization, most enterprises will oversubscribe their network. When demand exceeds the capacity of a WAN link and all traffic is contending for the same limited resource, less important traffic (such as Web browsing) may take bandwidth away from business-critical applications. To prevent this, most 1st generation WAN optimization solutions implement Quality of Service techniques to classify and prioritize traffic based on applications, users and other criteria.
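The classify-then-prioritize step can be sketched with a priority queue. This is an illustrative toy, not any vendor's implementation: real appliances classify by ports, DSCP markings, or deep packet inspection, whereas here each packet simply carries an application label (an assumption for brevity).

```python
import heapq

# Lower number = higher priority; unknown apps get a middle tier.
PRIORITY = {"erp": 0, "voip": 0, "web": 2}

def classify(packet):
    # Hypothetical classifier: reads a label instead of inspecting headers.
    return PRIORITY.get(packet["app"], 1)

queue = []
packets = [{"app": "web"}, {"app": "erp"}, {"app": "web"}, {"app": "voip"}]
for seq, pkt in enumerate(packets):
    # seq breaks ties so equal-priority packets keep arrival order (FIFO).
    heapq.heappush(queue, (classify(pkt), seq, pkt))

send_order = [heapq.heappop(queue)[2]["app"] for _ in range(len(queue))]
print(send_order)  # business-critical traffic drains first
```

When the link is oversubscribed, draining the queue in priority order is what keeps Web browsing from starving ERP or voice traffic.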

By using a combination of compression and QoS techniques, 1st generation WAN optimization products enable enterprises to get more out of their congested WAN links. In some instances, this saves money by delaying the purchase of additional bandwidth. However, this is often a short-term gain. It also does not address latency across the WAN, which has a significant impact on application performance.

It is important to note, however, that while compression and QoS are not sufficient on their own for enterprise-wide application delivery, they are essential components of newer, more comprehensive application acceleration solutions, such as Local Instance Networking.

2nd Generation: Application Acceleration

A second generation of products emerged to address some of the shortcomings of WAN optimization solutions. These application acceleration solutions can provide significant improvements by optimizing the performance of specific applications. However, the tradeoff is ease of use, manageability and long-term interoperability. There are two broad techniques used for application acceleration: application proxies/caches and latency compensation.

Application Proxies and Caches

Application proxies are used to locally simulate an application server, enabling specific content to be delivered locally with LAN-like performance. One example of a proxy-type device is the Web cache, which stores local copies of requested Web pages so that subsequent requests for the same URL can be serviced from the local appliance's disk rather than from the remote Web server. This technique provides a reasonable boost for static content. However, it does not work well for dynamic content or applications that require up-to-date information. Unfortunately, as most enterprise applications have been Webified, and Web content is expected to be very dynamic in nature, Web caches have reached a roadblock in terms of overall effectiveness.
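The cache-by-URL behavior described above reduces to a simple lookup table. A hedged sketch (the `Cache` class and `fetch_remote` callable are illustrative names, not a real product's API): the first request for a URL pays the WAN round trip, and repeats are served from the local store.

```python
class Cache:
    """Toy URL cache: first fetch goes to the origin, repeats are local."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch_remote):
        if url in self.store:
            self.hits += 1          # served locally, no WAN round trip
            return self.store[url]
        self.misses += 1            # must traverse the WAN
        body = fetch_remote(url)
        self.store[url] = body
        return body

cache = Cache()
for _ in range(3):
    # A static asset: identical bytes every time, so caching works.
    cache.get("http://intranet/logo.png", lambda u: b"<static bytes>")
print(cache.hits, cache.misses)  # 2 hits, 1 miss
```

The weakness follows directly from the design: dynamic pages return different bytes per request, so the stored copy is stale the moment it lands on disk.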

More recently, similar proxy approaches have been extended to file services. Wide Area File Services (WAFS) emerged as a way of implementing proxy file servers in distributed offices. By configuring clients to point to a WAFS share, the proxy file server can make remote content appear local. These devices terminate CIFS sessions, and then examine requests to see if the requested filename can be delivered locally. To achieve this, WAFS servers must replicate file locking semantics.

Although WAFS devices offer a number of specialized features, like the ability to authenticate users and read and write files even when the data center is unreachable (e.g., due to a network event), they create an enormous management burden. The branch office, in effect, is supporting a full-blown file server. This requires user and password updates and can lead to coherency issues when multiple versions of the same file exist in the network at the same time. In addition, the devices must be constantly updated to support the latest changes to file system protocols. As a result, rather than simplifying the branch office, these approaches can actually make things more complicated by introducing another vendor's implementation of a file system.

If performance gains are to be achieved across all applications, WAFS and Web caches have to be implemented in conjunction with other application-specific acceleration tools. In addition to being cost prohibitive, this is not scalable, as the applications themselves frequently undergo changes that require significant modification to those products that are used to accelerate them. This dynamic has already been witnessed in the e-mail space, where a variety of MS Exchange acceleration products were rendered obsolete when Microsoft moved from Exchange 2000 to Exchange/Outlook 2003.

Latency Compensation

An alternative approach to application acceleration is to reduce the amount of latency created by underlying protocols, like TCP. Latency results when chatty protocols communicate frequently with a server and are required to stop and wait for a response before the next step can proceed. The more steps, the longer the end user's perceived response time.

While these latency mitigation techniques are transparent at the application level, they still require termination and re-injection of TCP streams. Theoretically, this should not be an issue. In practice, however, it can be problematic because routing is often asymmetric: packets can take different inbound and outbound paths when communicating between different locations.

Fortunately, some of the latency compensation techniques that operate at the protocol level can provide non-intrusive benefits. These are leveraged by 3rd generation approaches to application delivery.

3rd Generation: Local Instance Networking

In addition to accelerating application performance, Local Instance Networking addresses server sprawl by providing a viable mechanism for centralizing branch office infrastructure while localizing information delivery.

Local Instance Networking inspects all WAN traffic and stores a local instance of information in an application-independent data store at each enterprise location. The local instance is transparently populated based on day-to-day usage, containing the subset of the enterprise's working data set that is most relevant to each location. Each piece of information is stored only once per location, enabling an appropriately sized LIN appliance to hold weeks' worth of data.

Local Instance Networking appliances examine outbound packets to see if a match exists in the local instance at the destination location. If a match exists, then the repetitive information is not sent across the WAN and instructions are sent to deliver the data locally. If the data has been modified, only the delta is transmitted across the WAN, maximizing bandwidth utilization and application performance.
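The match-or-send decision can be sketched as content fingerprinting. This is a hedged illustration of the general byte-level deduplication idea, not Silver Peak's actual algorithm: fixed-size chunking and the `encode` helper are simplifying assumptions (real products use more sophisticated chunking and delta encoding).

```python
import hashlib

CHUNK = 64  # fixed-size chunking: an assumption for simplicity

def encode(data, remote_store):
    """Emit ('ref', fingerprint) for chunks the peer already holds,
    ('raw', bytes) for chunks seen for the first time."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).digest()
        if fp in remote_store:
            out.append(("ref", fp))      # peer delivers the bytes locally
        else:
            remote_store[fp] = chunk     # first sight: ship the raw bytes
            out.append(("raw", chunk))
    return out

store = {}                               # models the remote local instance
payload = b"A" * 64 + b"B" * 64
first = encode(payload, store)           # every chunk is new
second = encode(payload, store)          # every chunk now matches the store
print([kind for kind, _ in first], [kind for kind, _ in second])
```

On the second transfer only short fingerprints cross the WAN, which is where the bandwidth and response-time gains come from; modified data would fall back to sending just the changed chunks.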

In a LIN implementation, all authentication, authorization, file and record locking is performed centrally by the native applications. This ensures 100 percent application coherency and future compatibility with new versions of applications. By working at the network (or packet) level, a Local Instance Network transparently supports all enterprise applications and transport methods, allowing for exceptionally simple deployments that provide immediate improvements to a wide variety of enterprise applications.

LIN appliances deliver the performance of distributed servers, without the cost and complexity. By operating at the network layer, they are completely transparent to all transport protocols (e.g., TCP and UDP), and provide significant benefits to all enterprise applications. By localizing information, yet centralizing management and control of branch office infrastructure, Local Instance Networking puts an end to server sprawl and the management, security, cost and compliance headaches that accompany it.

Source:
