Chapter 11: Selecting Technologies and Devices for Enterprise Networks

This chapter presents technologies for the remote-access and wide-area network (WAN) components of an enterprise network design. The chapter discusses physical and data link layer protocols and enterprise network devices, such as remote-access servers, routers, firewalls, and virtual private network (VPN) concentrators.

The chapter begins with a discussion of the following remote-access technologies:

- Point-to-Point Protocol (PPP)
- Cable modems
- Digital subscriber line (DSL)

After discussing remote-access technologies, the chapter presents options for selecting WAN and remote-access capacities with the North American Digital Hierarchy, the European E system, or the Synchronous Digital Hierarchy (SDH). The chapter continues with a discussion of the following WAN technologies:

- Leased lines
- Synchronous Optical Network (SONET)
- Frame Relay
- Asynchronous Transfer Mode (ATM)
- Metro Ethernet

The chapter then covers two topics that will help you complete your WAN design:

- Selecting routers for an enterprise WAN design
- Selecting a WAN service provider

The chapter concludes with an example of a WAN network design that was developed for a medium-sized company, Klamath Paper Products, Inc. The example indicates what technologies and devices were chosen for this customer based on the customer’s goals.

The technologies and devices you select for your particular network design customer will depend on bandwidth and quality of service (QoS) requirements, the network topology, business requirements and constraints, and technical goals (such as scalability, affordability, performance, and availability). An analysis of traffic flow and load, as discussed in Chapter 4, “Characterizing Network Traffic,” can help you accurately select capacities and devices. For some organizations, scalability is a key design goal. The selected WAN solution must have enough headroom for growth.
As discussed in this chapter, some WAN technologies are more scalable than others. Another key design goal for many organizations is to minimize the cost of WAN and remote-access circuits. Optimization techniques that reduce costs play an important role in most WAN and remote-access designs. Methods for merging separate voice, video, and data networks into a combined, cost-effective WAN also play an important role. These methods must handle the diverse QoS requirements of different applications.

Remote-Access Technologies

As organizations have become more mobile and geographically dispersed, remote-access technologies have become an important ingredient of many enterprise network designs. Enterprises use remote-access technologies to provide network access to telecommuters, employees in remote offices, and mobile workers who travel. An analysis of the location of user communities and their applications should form the basis of your remote-access design. It is important to recognize the location and number of full- and part-time telecommuters, how extensively mobile users access the network, and the location and scope of remote offices.

Remote offices include branch offices, sales offices, manufacturing sites, warehouses, retail stores, regional banks in the financial industry, and regional doctors’ offices in the health-care industry. Remote offices are also sometimes located at a business partner’s site (for example, a vendor or supplier). Typically, remote workers use such applications as email, web browsing, sales order-entry, and calendar applications to schedule meetings. Other, more bandwidth-intensive applications include downloading software or software updates, exchanging files with corporate servers, providing product demonstrations, managing the network from home, videoconferencing, and attending online classes. In the past, telecommuters and mobile users typically accessed the network using an analog modem line.
Analog modems take a long time to connect and have high latency and low speeds. (The highest speed available for analog modems is 56 kbps.) These days remote users need higher speeds, lower latency, and faster connection-establishment times. Analog modems have been replaced with small office/home office (SOHO) routers that support a cable or DSL modem. The sections that follow discuss these options and provide information on PPP, a protocol typically used with remote-access and other WAN technologies.

PPP

The Internet Engineering Task Force (IETF) developed PPP as a standard data link layer protocol for transporting various network layer protocols across serial, point-to-point links. PPP can be used to connect a single remote user to a central office, or to connect a remote office with many users to a central office. PPP is used with Integrated Services Digital Network (ISDN), analog lines, digital leased lines, and other WAN technologies. PPP provides the following services:

- Network layer protocol multiplexing
- Link configuration
- Link-quality testing
- Link-option negotiation
- Authentication
- Header compression
- Error detection

PPP has four functional layers:

- The physical layer is based on various international standards for serial communication, including EIA/TIA-232-C (formerly RS-232-C), EIA/TIA-422 (formerly RS-422), V.24, and V.35.
- The encapsulation of network layer datagrams is based on the standard High-Level Data Link Control (HDLC) protocol.
- The Link Control Protocol (LCP) is used for establishing, configuring, authenticating, testing, and terminating a data-link connection.
- A family of Network Control Protocols (NCP) is used for establishing and configuring various network layer protocols, such as IP, IPX, AppleTalk, and DECnet.

Multilink PPP and Multichassis Multilink PPP

Multilink PPP (MPPP) adds support for channel aggregation to PPP. Channel aggregation can be used for load sharing and providing extra bandwidth.
With channel aggregation, a device can automatically bring up additional channels as bandwidth requirements increase. Channel aggregation was popular with ISDN links, but it can be used on other types of serial interfaces also (a PC connected to two analog modems can use channel aggregation, for example). MPPP ensures that packets arrive in order at the receiving device. To accomplish this, MPPP encapsulates data in PPP and assigns a sequence number to datagrams. At the receiving device, PPP uses the sequence number to re-create the original data stream. Multiple channels appear as one logical link to upper-layer protocols.

Multichassis MPPP is a Cisco IOS Software enhancement to MPPP that allows channel aggregation across multiple remote-access servers at a central site. Multichassis MPPP allows WAN administrators to group multiple access servers into a single stack group. User traffic can be split and reassembled across multiple access servers in the stack group. Multichassis MPPP makes use of the Stack Group Bidding Protocol (SGBP), which defines a bidding process to allow access servers to elect a server to handle aggregation for an application. The server has the job of creating and managing a bundle of links for an application that requests channel aggregation. SGBP bidding can be weighted so that CPU-intensive processes, such as bundle creation, fragmentation and reassembly of packets, compression, and encryption, are offloaded to routers designated as offload servers. You can deploy a high-end router as an offload server. An Ethernet switch can connect members of the stack group and the offload server, as shown in Figure 11-1.

Figure 11-1 Multichassis Multilink PPP Stack Group and Offload Server

Password Authentication Protocol and Challenge Handshake Authentication Protocol

PPP supports two types of authentication:

- Password Authentication Protocol (PAP)
- Challenge Handshake Authentication Protocol (CHAP)

CHAP is more secure than PAP and is recommended.
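The fragmentation, sequencing, and reassembly behavior that MPPP uses to keep packets in order can be sketched in a few lines of Python. This is an illustration of the idea, not PPP framing: the fragment size, channel count, and dictionary layout are invented for the example.

```python
import random

def fragment(payload: bytes, num_channels: int, frag_size: int = 4):
    """Split a payload into sequence-numbered fragments, spread round-robin
    across the bundle's channels (illustrative sizes, not real PPP framing)."""
    frags = []
    for seq, start in enumerate(range(0, len(payload), frag_size)):
        frags.append({"seq": seq,
                      "channel": seq % num_channels,   # simple load sharing
                      "data": payload[start:start + frag_size]})
    return frags

def reassemble(fragments) -> bytes:
    """The receiver re-creates the original stream by ordering on sequence number."""
    return b"".join(f["data"] for f in sorted(fragments, key=lambda f: f["seq"]))

frags = fragment(b"multilink PPP demo", num_channels=2)
random.shuffle(frags)              # channels may deliver fragments out of order
print(reassemble(frags))           # b'multilink PPP demo'
```

Because every fragment carries a sequence number, the upper layers see one in-order logical link no matter how the individual channels interleave deliveries.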
(In fact, the RFC that discusses CHAP and PAP, RFC 1334, has been obsoleted by RFC 1994, which no longer mentions PAP.) With PAP, a user’s password is sent as clear text. An intruder can use a protocol analyzer to capture the password and later use the password to break into the network. CHAP provides protection against such attacks by verifying a remote node with a three-way handshake protocol and a variable challenge value that is unique and unpredictable. Verification happens upon link establishment and can be repeated any time during a session.

Figure 11-2 shows a CHAP sequence of events. When a remote node connects to an access server or router, the server or router sends back a challenge message with a challenge value that is based on an unpredictable random number. The remote station feeds the challenge value and the remote node’s password through a hash algorithm, resulting in a one-way hashed challenge response. The remote node sends the hashed challenge response to the server, along with a username that identifies the remote node. The server looks up the username and runs the associated password and the challenge through the same hash algorithm that the remote node used. If the remote node sent the correct password, the hash values will match and the server sends an accept message; otherwise, it sends a deny message.

Figure 11-2 Connection Establishment with the Challenge Handshake Authentication Protocol

Cable Modem Remote Access

A popular option for remote access is a cable modem, which operates over the coax cable used by cable TV (CATV) providers. Coax cable supports higher speeds than telephone lines, so cable modem solutions are much faster than analog modem solutions. Another benefit of cable modems is that no dialup is required. This is an advantage over analog modems that take a long time to dial and connect to a remote site.

Note: The term cable modem is somewhat misleading. A cable modem works more like a LAN interface than an analog modem.
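The hash computation at the heart of this exchange can be shown in Python. RFC 1994 specifies the response as an MD5 digest over the packet identifier, the secret (password), and the challenge; the identifier and password values below are hypothetical.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over identifier + secret + challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes,
                challenge: bytes, response: bytes) -> bool:
    """Server side: rerun the hash with the stored password and compare."""
    return chap_response(identifier, secret, challenge) == response

# Server sends an unpredictable challenge value
challenge = os.urandom(16)
identifier = 1

# Remote node hashes the challenge with its password; the password itself
# never crosses the link
response = chap_response(identifier, b"correct-password", challenge)

print(chap_verify(identifier, b"correct-password", challenge, response))  # True -> accept
print(chap_verify(identifier, b"wrong-password", challenge, response))    # False -> deny
```

Because each challenge is random and the hash is one-way, a captured response is useless for a later session, which is exactly the replay protection PAP lacks.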
Cable-network service providers offer hybrid fiber/coax (HFC) systems that connect CATV networks to the service provider’s high-speed fiber-optic network. The HFC systems allow users to connect their PCs or small LANs to the coax cable that enters their home or small office, and use this connection for high-speed access to the Internet or to an organization’s private network using VPN software.

The cable-network service provider operates a cable modem termination system (CMTS) that provides high-speed connectivity for numerous cable modems. Many cable providers use a specialized CMTS router for this purpose. The router is designed to be installed at a cable operator’s headend facility or distribution hub, and to function as the CMTS for subscriber end devices. The router forwards data upstream to connect with the Internet and/or the public switched telephone network (PSTN) if telephony applications are deployed.

Challenges Associated with Cable Modem Systems

A challenge with implementing a remote-access solution based on cable modems is that the CATV infrastructure was designed for broadcasting TV signals in just one direction: from the cable TV company to a person’s home. Data transmission, however, is bidirectional. Data travels from the provider to the home (or small office) and from the home to the provider. Because of the design of CATV networks, most cable-network services offer much more bandwidth for downstream traffic (from the service provider) than for upstream traffic (from the customer). An assumption is made that a lot of the data traveling from the home or small office consists of short acknowledgment packets and requires less bandwidth. This assumption is accurate for such applications as web browsing, but might not be accurate for other applications that will be deployed in your network design. A cable modem solution is not the best answer for peer-to-peer applications or client/server applications in which the client sends lots of data.
A typical cable-network system offers 25 to 50 Mbps downstream bandwidth and about 2 to 3 Mbps upstream bandwidth. Multiple users share the downstream and upstream bandwidth. Downstream data is seen by all active cable modems. Each cable modem filters out traffic that is not destined for it. Upstream bandwidth is allocated using timeslots. There are three types of timeslots:

- Reserved: A timeslot that is available only to a particular cable modem. The headend system at the provider’s site allocates reserved timeslots to cable modems using a bandwidth allocation algorithm.
- Contention: A timeslot that is available to all cable modems. If two cable modems transmit in the same timeslot, the packets collide and the data is lost. The headend system signals that no data was received, and the cable modems can try again after waiting a random amount of time. Contention timeslots are used for short data transmissions (including requests for a quantity of reserved timeslots for transmitting more data).
- Ranging: A timeslot that is used for clock correction. The headend system tells a cable modem to transmit during a ranging timeslot. The headend system measures the time to receive the transmission, and gives the cable modem a small positive or negative correction value for its local clock.

If you plan to use a cable modem solution for remote users or remote offices, be sure to query the service provider about the number of users who share a single cable and the types of applications they use. Provide the service provider with information about the bandwidth requirements of your users’ applications, based on the analysis of traffic characteristics that you did as part of the requirements-analysis phase of the network design project. Typically a service provider can give you an approximation of how much bandwidth is available per user of a cable-network system. Some systems also have the capability to guarantee a level of bandwidth for heavy-usage applications.
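As a rough way to frame that conversation with a provider, a back-of-envelope calculation like the following (a hypothetical helper, not a provider formula) shows how quickly shared segment bandwidth divides among simultaneously active users:

```python
def per_user_estimate(shared_mbps: float, active_users: int) -> float:
    """Worst-case per-user share of a cable segment's bandwidth when all
    listed users are active at once; real schedulers usually do better."""
    return shared_mbps / active_users

# Using the figures above: ~50 Mbps downstream and ~2.5 Mbps upstream, shared
for users in (10, 50, 200):
    down = per_user_estimate(50, users)
    up = per_user_estimate(2.5, users)
    print(f"{users:3d} active users: {down:6.2f} Mbps down, {up:6.3f} Mbps up")
```

Even this crude model makes the point: a segment that feels fast with 10 active users can fall below analog-modem upstream speeds with 200, which is why the number of subscribers per cable belongs in your requirements discussion.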
If your users require more bandwidth than the service provider can offer, then you should investigate using a leased line or Frame Relay circuit instead of a cable modem. Another concern with shared media such as cable modem systems is how to offer QoS for voice, video, and other delay-sensitive applications. If the service provider doesn’t have any good solutions for this problem, consider using a different WAN technology if your users rely on these delay-sensitive applications.

Digital Subscriber Line Remote Access

Another technology for remote access is digital subscriber line (DSL). Telephone companies offer DSL for high-speed data traffic over ordinary telephone wires. With DSL, a home office or small office can connect a DSL modem (or DSL router with a built-in modem) to a phone line and use this connection to reach a central-site intranet and/or the Internet. DSL operates over existing telephone lines between a telephone switching station and a home or office. Speeds depend on the type of DSL service and many physical layer factors, including the length of the circuit between the home or branch office and the telephone company, the wire gauge of the cable, the presence of bridged taps, and the presence of crosstalk or noise on the cable.

DSL supports asymmetric and symmetric communication. With asymmetric DSL (ADSL), traffic can move downstream from the provider and upstream from the end user at different speeds. An ADSL circuit has three channels:

- A high-speed downstream channel with speeds ranging from 1.544 to 12 Mbps
- A medium-speed duplex channel with speeds ranging from 16 to 640 kbps
- A plain old telephone service (POTS) 64-kbps channel for voice

With symmetric DSL (SDSL), traffic in either direction travels at the same speed, up to 1.544 Mbps. Unlike ADSL, SDSL does not allow POTS to run on the same line as data (although VoIP is feasible with SDSL).
SDSL is a viable business solution and a good choice for a small enterprise or branch office that hosts web or other services that send data as well as receive it.

Other DSL Implementations

DSL is sometimes called xDSL because of the many types of DSL technologies. In addition to ADSL and SDSL, providers in your area may support the following services:

- ISDN DSL (IDSL): A cross between ISDN and DSL. As with ISDN, IDSL uses a single wire pair to transmit data at 128 kbps in both directions and at distances of up to 15,000 to 18,000 feet (about 4600 to 5500 m). Unlike ISDN, IDSL does not use a signaling channel (a D channel).
- High-bit-rate DSL (HDSL): A mature technology that provides symmetric communications up to 1.544 Mbps over two wire pairs or 2.048 Mbps over three wire pairs. HDSL is a cost-effective alternative to a T1 or E1 circuit. HDSL is less expensive than T1 or E1 partly because it can run on poorer-quality lines without requiring any line conditioning. HDSL’s range is 12,000 to 15,000 feet (about 3700 to 4600 m). Providers can extend the range with signal repeaters. HDSL does not support access to the PSTN.
- HDSL-2: Provides symmetric communications of up to 1.544 Mbps, but differs from HDSL in that it uses a single wire pair. Engineers developed HDSL-2 to serve as a standard by which different vendors’ equipment could interoperate. The biggest advantage of HDSL-2 is that it is designed not to interfere with other services, in particular ADSL, which is the most popular type of DSL. A disadvantage of HDSL-2 is that it supports only a full rate, offering services only at 1.544 Mbps.
- G.SHDSL: Combines the best of SDSL and HDSL-2. The standard defines multiple rates, as SDSL does, but provides spectral compatibility with HDSL-2.
- Very-high-bit-rate DSL (VDSL): Provides data and PSTN service on a single twisted pair of wires at speeds up to 52 Mbps downstream and 16 Mbps upstream.
VDSL is reserved for users in close proximity to a central office. Long-Reach Ethernet (LRE) uses VDSL technology.

PPP and ADSL

ADSL designs use two popular PPP implementations: PPP over ATM (PPPoA) and PPP over Ethernet (PPPoE). In a PPPoA architecture, the customer premises equipment (CPE) acts as an Ethernet-to-WAN router and a PPP session is established between the CPE and a Layer 3 access concentrator in the service provider’s network. In a PPPoE architecture, the CPE acts as an Ethernet-to-WAN bridge. The client initiates a PPP session by encapsulating PPP frames into MAC frames and then bridging the frames over ATM/DSL to a gateway router at the service provider. From this point, the PPP sessions can be established, authenticated, and addressed. The client receives its IP address from the service provider, using PPP negotiation.

A PPPoA implementation involves configuring the CPE with PPP authentication information (login and password). This is the main advantage of this architecture over pure bridging implementations, as it provides per-session authentication, authorization, and accounting. Another advantage with PPPoA is that it uses ATM end to end. Therefore, the provider might find it easier to put a subscriber into a specific traffic class.

Selecting Remote-Access Devices for an Enterprise Network Design

The previous sections discussed remote-access technologies. This section covers selecting devices to implement those technologies. Selecting remote-access devices for an enterprise network design involves choosing devices for remote users and for a central site. Remote users include telecommuters, users in remote offices, and mobile users. The central site could be the corporate headquarters of a company, the core network of a university that has branch campuses, a medical facility that connects doctors’ offices, and so on.
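The PPPoE encapsulation described above can be sketched at the byte level, assuming the session-stage header layout defined in RFC 2516 (version and type sharing one byte, a code byte, a session ID, and a payload length). The session ID and payload bytes here are hypothetical placeholder values.

```python
import struct

PPPOE_SESSION_ETHERTYPE = 0x8864  # Ethernet type for PPPoE session-stage frames
PPP_PROTOCOL_IPV4 = 0x0021        # PPP protocol number for IPv4

def pppoe_session_frame(session_id: int, ppp_protocol: int,
                        payload: bytes) -> bytes:
    """Build a PPPoE session-stage header plus PPP payload (RFC 2516 layout).

    Byte 0: version 1 / type 1 packed as 0x11; byte 1: code 0x00 (session data);
    then a 2-byte session ID and a 2-byte length of the PPPoE payload, which
    includes the 2-byte PPP protocol field."""
    ppp = struct.pack("!H", ppp_protocol) + payload
    header = struct.pack("!BBHH", 0x11, 0x00, session_id, len(ppp))
    return header + ppp

frame = pppoe_session_frame(session_id=0x0001, ppp_protocol=PPP_PROTOCOL_IPV4,
                            payload=b"ip-payload")
print(frame[:6].hex())  # 11000001000c -> ver/type, code, session 1, length 12
```

The whole structure would then be placed inside an Ethernet frame with type 0x8864 and bridged toward the provider's gateway, which is what makes the CPE a bridge rather than a router in this architecture.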
Selecting Devices for Remote Users

The most important consideration when selecting a cable or DSL modem is that the modem must interoperate with the provider’s equipment. In some cases, the provider supplies the modem to avoid problems. In other cases, the provider supplies a list of products or standards that must be supported in a modem purchased by the end user, such as the Data Over Cable Service Interface Specification (DOCSIS) for cable modems in the United States.

Criteria for selecting a router for remote sites include the following:

- Security and VPN features
- Support for NAT
- Reliability
- Cost
- Ease of configuration and management
- Support for one or more high-speed Ethernet interfaces
- Support for the router to act as a wireless access point (if desired)
- Support for features that reduce line utilization, such as snapshot routing and compression
- Support for channel aggregation
- Support for QoS features to support VoIP or other applications with specific QoS requirements

Selecting Devices for the Central Site

The central site connects remote users who access the corporate network with cable modems, DSL modems, and VPN software. With a VPN, users gain network access through a service provider’s local network and send their data over encrypted tunnels to a VPN firewall or concentrator at the central site. Criteria for selecting central-site devices to support remote users include the criteria listed previously for a remote-site router as well as additional criteria related to VPN functionality.

Both routers and firewalls at the central site can act as the termination point for VPN tunnels. A generic router can become overwhelmed if a network supports many tunnels, however. If you expect the peak number of simultaneous users to reach 100, a dedicated firewall or VPN concentrator should be deployed. (A VPN concentrator is a standalone hardware platform that aggregates a large volume of simultaneous VPN connections.)
Generally, enterprises place the VPN firewall between a router that has access to the VPN and a router that forwards traffic into the campus network. Hence, the firewall should support at least two Ethernet interfaces of the flavor used in this module of the network design (Fast Ethernet, Gigabit Ethernet, and so on). When selecting a VPN firewall, make sure it will interoperate with the VPN client software on the users’ systems. Cisco, Microsoft, and other vendors provide client software. Also pay attention to the number of simultaneous tunnels the firewall supports and the amount of traffic it can forward. The firewall should have a fast processor, high-speed RAM, and support for redundant power supplies and hardware-assisted encryption. It should also support the following software features:

- Tunneling protocols, including IPsec, PPTP, and L2TP
- Encryption algorithms, including 56-bit DES, 168-bit Triple DES, Microsoft Encryption (MPPE), 40- and 128-bit RC4, and 128-, 192-, and 256-bit AES
- Authentication algorithms, including Message Digest 5 (MD5), Secure Hash Algorithm (SHA-1), Hashed Message Authentication Coding (HMAC) with MD5, and HMAC with SHA-1
- Network system protocols, such as DNS, DHCP, RADIUS, Kerberos, and LDAP
- Routing protocols
- Support for certificate authorities, such as Entrust, VeriSign, and Microsoft
- Network management using Secure Shell (SSH) or HTTP with Secure Sockets Layer (SSL)

WAN Technologies

This section covers WAN technologies that are typical options for connecting geographically dispersed sites in an enterprise network design. The section covers the most common and established WAN technologies, but the reader should also research new technologies as they gain industry acceptance. Wireless WAN technologies, for example, are not covered in this book, but are expected to expand the options available for WAN (and remote-access) networks in the future.
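Several of the authentication algorithms listed above are keyed hashes. A minimal Python sketch of HMAC with SHA-1, using the standard library rather than any particular firewall's implementation, shows how a receiver verifies that a tunneled packet was not altered in transit (the key and payload are hypothetical):

```python
import hmac
import hashlib

def hmac_sha1(key: bytes, message: bytes) -> bytes:
    """Keyed-hash message authentication (RFC 2104) with SHA-1, producing
    the 20-byte tag a sender attaches to each protected packet."""
    return hmac.new(key, message, hashlib.sha1).digest()

key = b"shared-secret"            # hypothetical key negotiated for the tunnel
packet = b"encapsulated payload"  # hypothetical tunneled data

tag = hmac_sha1(key, packet)
# Receiver recomputes the tag and compares in constant time
print(hmac.compare_digest(tag, hmac_sha1(key, packet)))             # True
print(hmac.compare_digest(tag, hmac_sha1(key, b"tampered bytes")))  # False
```

Because the tag depends on a secret key as well as the message, an attacker who modifies the payload cannot forge a matching tag, which is what distinguishes HMAC from a plain MD5 or SHA-1 digest.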
Recent changes in the WAN industry continue an evolution that began in the mid-1990s when the bandwidth and QoS requirements of corporations changed significantly due to new applications and expanded interconnectivity goals. As the need for WAN bandwidth accelerated, telephone companies upgraded their internal networks to use SONET and ATM technologies, and started offering new services to their customers. Today, an enterprise network architect has many options for WAN connectivity. The objective of this section is to present some of these options to help you select the right technologies for your customer.

Systems for Provisioning WAN Bandwidth

Regardless of the WAN technology you select, one critical network design step you must complete is selecting the amount of capacity that the WAN must provide. You need to consider capacity requirements for today and for the next 2 to 3 years. Selecting the right amount of capacity is often called provisioning. Provisioning requires an analysis of traffic flows, as described in Chapter 4, and an analysis of scalability goals, as described in Chapter 2, “Analyzing Technical Goals and Tradeoffs.” This section provides an overview of the bandwidth capacities that are available to handle traffic flows of different sizes.

WAN bandwidth for copper cabling is provisioned in North America and many other parts of the world using the North American Digital Hierarchy, which is shown in Table 11-1. A channel in the hierarchy is called a digital signal (DS). Digital signals are multiplexed together to form high-speed WAN circuits. DS-1 and DS-3 are the most commonly used capacities.
Table 11-1 North American Digital Hierarchy

Signal   Capacity       Number of DS-0s   Colloquial Name
DS-0     64 kbps        1                 Channel
DS-1     1.544 Mbps     24                T1
DS-1C    3.152 Mbps     48                T1C
DS-2     6.312 Mbps     96                T2
DS-3     44.736 Mbps    672               T3
DS-4     274.176 Mbps   4032              T4
DS-5     400.352 Mbps   5760              T5

In Europe, the Committee of European Postal and Telephone (CEPT) has defined a hierarchy called the E system, which is shown in Table 11-2.

Table 11-2 Committee of European Postal and Telephone (CEPT) Hierarchy

Signal   Capacity       Number of E1s
E0       64 kbps        N/A
E1       2.048 Mbps     1
E2       8.448 Mbps     4
E3       34.368 Mbps    16
E4       139.264 Mbps   64
E5       565.148 Mbps   256

The Synchronous Digital Hierarchy (SDH) is an international standard for data transmission over fiber-optic cables. SDH defines a standard rate of transmission of 51.84 Mbps, which is also called Synchronous Transport Signal level 1 (STS-1). Higher rates of transmission are a multiple of the basic STS-1 rate. The STS rates are the same as the SONET Optical Carrier (OC) levels, which are shown in Table 11-3.

Table 11-3 Synchronous Digital Hierarchy (SDH)

STS Rate   OC Level   Speed
STS-1      OC-1       51.84 Mbps
STS-3      OC-3       155.52 Mbps
STS-12     OC-12      622.08 Mbps
STS-24     OC-24      1.244 Gbps
STS-48     OC-48      2.488 Gbps
STS-96     OC-96      4.976 Gbps
STS-192    OC-192     9.952 Gbps

Leased Lines

The first WAN technology this chapter covers is the leased-line service offered by many telephone companies and other carriers. A leased line is a dedicated circuit that a customer leases from a carrier for a predetermined amount of time, usually for months or years. The line is dedicated to traffic for that customer and is used in a point-to-point topology between two sites on the customer’s enterprise network. Speeds range from 64 kbps (DS-0) to 45 Mbps (DS-3). Enterprises use leased lines for both voice and data traffic. Data traffic is typically encapsulated in a standard protocol such as PPP or HDLC. Dedicated leased lines have the advantage that they are a mature and proven technology.
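The arithmetic behind these hierarchies is simple multiples of a base rate, which can be checked with a short Python sketch (the helper names are ours, not standard terminology):

```python
DS0_KBPS = 64      # one 64-kbps digital signal, the common building block
STS1_MBPS = 51.84  # SDH/SONET base rate

def ds1_mbps(channels: int = 24, framing_kbps: int = 8) -> float:
    """A DS-1 carries 24 DS-0s plus 8 kbps of framing overhead: 1.544 Mbps."""
    return (channels * DS0_KBPS + framing_kbps) / 1000

def e1_mbps() -> float:
    """An E1 is 32 x 64-kbps timeslots (30 voice channels plus framing and
    signaling slots): 2.048 Mbps."""
    return 32 * DS0_KBPS / 1000

def oc_rate_mbps(n: int) -> float:
    """SONET OC-N / STS-N rates are exact multiples of the 51.84-Mbps STS-1."""
    return n * STS1_MBPS

print(ds1_mbps())                 # 1.544
print(e1_mbps())                  # 2.048
print(round(oc_rate_mbps(3), 2))  # 155.52
print(round(oc_rate_mbps(48), 2)) # 2488.32 (i.e., 2.488 Gbps)
```

Working the multiples this way is a quick sanity check when provisioning: if a flow analysis calls for 100 Mbps of WAN capacity, the tables show that neither a DS-3 nor an OC-1 suffices and the next standard step up is OC-3.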
Historically, they had the disadvantage that they were expensive, especially in some parts of Europe and Asia. As carriers upgrade their internal networks with more capacity, costs are dropping, however. Leased lines also have the advantage over most other services that they are dedicated to a single customer. The customer does not share the capacity with anyone. Most newer systems, such as cable modems, DSL, ATM, and Frame Relay, are based on a shared network inside the provider’s network.

Leased lines tend to be overlooked as a potential WAN solution because they are not a new technology. In some situations, however, they are the best option for simple point-to-point links. Leased lines are a good choice if the topology is truly point to point (and not likely to become point to multipoint in the near future), the pricing offered by the local carrier is attractive, and applications do not require advanced QoS features that would be difficult to implement in a simple leased-line network.

Synchronous Optical Network

The next WAN technology this chapter covers is Synchronous Optical Network (SONET), which is a physical layer specification for high-speed synchronous transmission of packets or cells over fiber-optic cabling. SONET was proposed by Bellcore in the mid-1980s and is now an international standard. SONET uses the SDH system with STS-1 as its basic building block. Service providers and carriers are making wide use of SONET in their internal networks. SONET is also gaining popularity within private networks to connect remote sites in a WAN or metropolitan-area network (MAN).

Both ATM and packet-based networks can be based on SONET. With packet transmission, SONET networks usually use PPP at the data link layer and IP at the network layer. Packet over SONET (POS) is expected to become quite popular as Internet and intranet traffic grows, and as new applications, such as digital video, demand the high speed, low latency, and low error rates that SONET can offer.
One of the main goals of SONET and SDH was to define higher speeds than the ones used by the North American Digital Hierarchy and the European E system, and to alleviate problems caused by incompatibilities in those systems. The creators of SONET and SDH defined high-speed capacities, starting with the 51.84-Mbps STS-1, that were approved by both North American and European standards bodies. Another goal of SONET was to support more efficient multiplexing and demultiplexing of individual signals. With SONET (SDH), it is easy to isolate one channel from a multiplexed circuit (for example, one phone call from a trunk line that carries numerous phone calls). With plesiochronous systems, such as the North American Digital Hierarchy and the European E system, isolating one channel is more difficult. Although isolating a 64-kbps channel from a DS-1 circuit is straightforward, isolating a 64-kbps channel from a DS-3 trunk requires demultiplexing to the DS-1 level first.

Note: The North American Digital Hierarchy and European E system are called plesiochronous systems. Plesio means “almost” in Greek. A truly synchronous system, such as SONET, supports more efficient multiplexing and demultiplexing than a plesiochronous (“almost synchronous”) system.

The SONET specification defines a four-layer protocol stack. The four layers have the following functions:

- Photonic layer: Specifies the physical characteristics of the optical equipment
- Section layer: Specifies the frame format and the conversion of frames to optical signals
- Line layer: Specifies synchronization and multiplexing onto SONET frames
- Path layer: Specifies end-to-end transport

Terminating multiplexers (implemented in switches or routers) provide user access to the SONET network. Terminating multiplexers turn electrical interfaces into optical signals and multiplex multiple payloads into the STS-N signals required for optical transport.
A SONET network is usually connected in a ring topology using two self-healing fiber paths. A path provides full-duplex communication and consists of a pair of fiber strands. One path acts as the full-time working transmission facility. The other path acts as a backup protection pair, remaining idle while the working path passes data. If an interruption occurs on the working path, data is automatically rerouted to the backup path within milliseconds. If both the working and protected pairs are cut, the ring wraps, and communication can still survive. Figure 11-3 shows a typical SONET network.

Figure 11-3 Redundant SONET Ring

Frame Relay

Frame Relay is a high-performance WAN protocol that operates at the physical and data link layers of the OSI reference model. Frame Relay emerged in the early 1990s as an enhancement to more complex packet-switched technologies, such as X.25. Whereas X.25 is optimized for excellent reliability on physical circuits with a high error rate, Frame Relay was developed with the assumption that facilities are no longer as error prone as they once were. This assumption allows Frame Relay to be more efficient and easier to implement than X.25.

Frame Relay offers a cost-effective method for connecting remote sites, typically at speeds from 64 kbps to 1.544 Mbps (a few providers offer DS-3 speeds for Frame Relay). Frame Relay offers more granularity in the selection of bandwidth assignments than leased lines, and also includes features for dynamic bandwidth allocation and congestion control to support bursty traffic flows. Frame Relay has become a popular replacement for both X.25 and leased-line networks because of its efficiency, flexible bandwidth support, and low latency. Frame Relay provides a connection-oriented data link layer service.
A pair of devices communicate over a Frame Relay virtual circuit, which is a logical connection created between two data terminal equipment (DTE) devices across a Frame Relay packet-switched network (PSN). Routers, for example, act as DTE devices and set up a virtual circuit for the purpose of transferring data. A virtual circuit can pass through any number of intermediate data circuit-terminating equipment (DCE) devices (switches) located within the Frame Relay PSN. Frame Relay virtual circuits fall into two categories:

- Switched virtual circuits (SVC): Temporary connections for supporting occasional data transfer
- Permanent virtual circuits (PVC): Permanently configured circuits that are established in advance of any data transfer

An SVC requires call setup and termination whenever there is data to send. With PVCs, the circuit is established by the provider, which means that troubleshooting is simplified. Most networks use PVCs rather than SVCs.

A Frame Relay virtual circuit is identified by a 10-bit data-link connection identifier (DLCI). DLCIs are assigned by a Frame Relay service provider (for example, a telephone company). Frame Relay DLCIs have local significance. Two DTE devices connected by a virtual circuit might use a different DLCI value to refer to the same circuit. Although you can think of the DLCI as an identifier for the entire virtual circuit, practically speaking, the DLCI refers to the connection from a DTE router to the DCE Frame Relay switch at the provider’s site. The DLCI might be different for the DTE-DCE connection at each end of the virtual circuit.

Frame Relay Hub-and-Spoke Topologies and Subinterfaces

Frame Relay networks are often designed in a hub-and-spoke topology, such as the topology shown in Figure 11-4. A central-site router in this topology can have many logical connections to remote sites with only one physical connection to the WAN, thus simplifying installation and management.
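The local significance of DLCIs can be illustrated with a small Python sketch; the router names and DLCI values below are invented for the example, and real DLCI-to-circuit mapping is done by the provider's switches, not by a lookup table like this one.

```python
# Each router's DLCI table has only local significance: the single PVC
# between RouterA and RouterB is known by a different DLCI at each end,
# because each DLCI really names the local DTE-to-DCE connection.
dlci_tables = {
    "RouterA": {102: "PVC to RouterB"},   # A's local DLCI for the circuit
    "RouterB": {201: "PVC to RouterA"},   # B's local DLCI for the same circuit
}

def resolve(router: str, dlci: int) -> str:
    """Look up which virtual circuit a locally significant DLCI refers to."""
    return dlci_tables[router].get(dlci, "unknown DLCI")

print(resolve("RouterA", 102))  # PVC to RouterB
print(resolve("RouterB", 201))  # PVC to RouterA -- same circuit, different DLCI
```

Keeping this picture in mind prevents a common configuration mistake: a DLCI configured on one router's interface only has to agree with the attached Frame Relay switch, not with the DLCI used at the far end of the circuit.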
One problem with a hub-and-spoke topology is that split horizon can limit routing. With split horizon, distance-vector routing protocols do not repeat information out the interface it was received on.