Introduction to Bridging and Switching

By dwirch | 2007-05-03

Bridges and switches operate principally at Layer 2 of the OSI reference model. As such, they are widely referred to as data-link layer devices. Bridges became commercially available in the early 1980s. At the time of their introduction, bridges connected and enabled packet forwarding between homogeneous networks. More recently, bridging between different networks also has been defined and standardized.

Switching and bridging technologies pass information by learning the addresses of connected devices, and then filtering and forwarding frames based on those learned addresses. Networks that employ bridging and switching normally see fewer collisions on their network segments.

Switching technology has emerged as the evolutionary heir to bridging-based internetworking solutions. Older bridges performed this function in software, whereas today's switches perform bridging in hardware, allowing for large increases in performance. In addition, a switch can apply this bridging function to every connected host, allowing full-duplex operation and virtually eliminating collisions.

Switching implementations now dominate applications in which bridging technologies were implemented in prior network designs. Superior throughput performance, higher port density, lower per-port cost, and greater flexibility have contributed to the emergence of switches as replacement technology for bridges and as complements to routing technology.

Functionality

Address learning

In order for bridges and switches to begin passing information to and from devices and segments, they must first learn the addresses associated with those devices and segments. Initially, they must let all information pass through them, even if that information is not intended for a device on the opposite side of the bridge or switch; this is known as flooding. Once the devices have allowed the information from the connecting segments to pass through, they can log the address information into tables for further use in forwarding and filtering.
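
To make this concrete, here is a minimal Python sketch of the learning step. The table layout and the names (mac_table, learn, ports_to_flood) are invented for illustration; they are not taken from any real switch software.

    mac_table = {}            # MAC address -> port number where it was last seen

    def learn(src_mac, in_port):
        """Record (or refresh) the port on which src_mac was last seen."""
        mac_table[src_mac] = in_port

    def ports_to_flood(all_ports, in_port):
        """Unknown destination: send out every port except the one the frame came in on."""
        return [p for p in all_ports if p != in_port]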

Forwarding / Filtering

Bridging and switching devices determine if incoming frames are destined for a device on the segment where they were generated. If so, the devices do not forward the frames to the other device ports. This is an example of filtering. If the MAC destination address is on another segment, the devices send the frames to the appropriate segment. This is known as forwarding.
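
Continuing the sketch above, a rough version of the forward/filter decision might look like this. It is purely illustrative, and broadcast handling is left out for brevity.

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        """Decide where a frame should go, using the table built by learn()."""
        learn(src_mac, in_port)
        out_port = mac_table.get(dst_mac)
        if out_port is None:
            return ports_to_flood(all_ports, in_port)   # unknown destination: flood
        if out_port == in_port:
            return []                                   # same segment: filter (drop)
        return [out_port]                               # different segment: forward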

Loop Avoidance

When the switched network includes loops for redundancy, an Ethernet switch can prevent duplicate frames from traveling over the redundant path if spanning tree protocol is configured.

Frame Transmission Modes

Cut-Through

In the cut-through mode, the switch checks the destination address (DA) as soon as the header is received and immediately begins forwarding the frame.

Store and Forward

In the store-and-forward mode, the switch must receive the complete frame before forwarding takes place. The destination and source addresses are read, the cyclic redundancy check (CRC) is performed, relevant filters are applied, and the frame is forwarded. If the CRC is bad, the frame is discarded. Latency through the switch varies with frame length.

Fragment Free

In the fragment-free mode, the switch reads the first 64 bytes of a frame before forwarding it. Collisions usually happen within the first 64 bytes of a frame, so by reading that far into each frame, the switch can filter out collision fragments.
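
The three modes differ mainly in how much of the frame the switch receives before making its forwarding decision. Here is a simplified Python sketch; the byte counts come from the text above, and the constant and function names are invented for the example.

    # Byte counts follow the text above; the names are ours, not Cisco's.
    ETHERNET_HEADER = 14       # destination MAC (6) + source MAC (6) + type/length (2)
    FRAGMENT_FREE_WINDOW = 64  # collisions normally show up within the first 64 bytes

    def bytes_before_forwarding(mode, frame_length):
        """How much of a frame each mode receives before it starts forwarding."""
        if mode == "cut-through":
            return ETHERNET_HEADER        # decide as soon as the destination address is in
        if mode == "fragment-free":
            return FRAGMENT_FREE_WINDOW   # frames shorter than this are collision fragments
        if mode == "store-and-forward":
            return frame_length           # buffer the whole frame, then check the CRC
        raise ValueError("unknown mode: " + mode)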

What is Redundant Topology?

Bridged networks, including switched networks, are commonly designed with redundant links and devices. Such designs eliminate the possibility that a single point of failure will result in loss of function for the entire switched network. Redundant topology is the duplication of switches or other devices/connections so that in the event of a failure the redundant devices, services, or connections can perform the work of those that failed.

While redundant designs may eliminate the single point of failure problem, they introduce several others that must be taken into account:

  • Without some loop avoidance service in operation, each switch will flood broadcasts endlessly. This situation is commonly called a broadcast storm.
  • Multiple copies of nonbroadcast frames may be delivered to destination stations. Many protocols expect to receive only a single copy of each transmission. Multiple copies of the same frame may cause unrecoverable errors.
  • Database instability in the MAC address table contents results from copies of the same frame being received on different ports of the switch. Data forwarding may be impaired when the switch consumes resources coping with address thrashing in the MAC address table.

Spanning-Tree Protocol

Spanning-Tree Protocol is a link management protocol that provides path redundancy while preventing undesirable loops in the network. For an Ethernet network to function properly, only one active path can exist between two stations. To provide path redundancy, Spanning-Tree Protocol defines a tree that spans all switches in an extended network.

The purpose of the Spanning-Tree Protocol is to maintain a loop-free network. A loop-free path is accomplished when a device recognizes a loop in the topology and blocks one or more redundant ports. Spanning-Tree Protocol forces certain redundant data paths into a standby (blocked) state. If one network segment in the spanning tree becomes unreachable, or if Spanning-Tree Protocol costs change, the spanning-tree algorithm reconfigures the spanning-tree topology and reestablishes the link by activating the standby path. Spanning-Tree Protocol operation is transparent to end stations, which are unaware whether they are connected to a single LAN segment or a switched LAN of multiple segments. Spanning-Tree Protocol continually explores the network so that the failure or addition of a link, switch, or bridge is discovered quickly. When the network topology changes, Spanning-Tree Protocol reconfigures switch or bridge ports to avoid loss of connectivity or creation of new loops.

Spanning-Tree Operation

The Spanning-Tree Protocol provides a loop-free network topology by:

  • Electing a Root Bridge
  • Electing Root Ports for Nonroot Bridges
  • Electing One Designated Port for each network segment.

A loop-free path is achieved when the switches and ports elected by this process recognize a loop in the topology and block one or more redundant ports.

Spanning-Tree Protocol operation requires that for a network, a root bridge is elected, root ports for non-root bridges are determined, and a designated port is selected for each segment. Ports are placed in forwarding or blocking states. Nondesignated ports are normally in blocking state to break the loop topology.

BPDUs are exchanged every 2 seconds. One of the pieces of information exchanged is the bridge ID, which consists of a bridge priority combined with the switch's MAC address. The bridge with the lowest bridge ID is elected as the root bridge of the network.
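
As a rough illustration of that election rule, the sketch below models each bridge ID as a (priority, MAC address) pair and picks the numerically lowest one. The switch names and values are made up for the example.

    # Each bridge ID is (priority, MAC address); the lowest value wins the election.
    bridges = {
        "SwitchA": (32768, "00:10:7b:aa:aa:aa"),
        "SwitchB": (32768, "00:10:7b:bb:bb:bb"),
        "SwitchC": (4096,  "00:10:7b:cc:cc:cc"),   # lower priority -> lowest bridge ID
    }

    def elect_root(bridge_ids):
        """Return the name of the bridge with the numerically lowest bridge ID."""
        return min(bridge_ids, key=lambda name: bridge_ids[name])

    print(elect_root(bridges))   # -> SwitchC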

Port States

Propagation delays can occur when protocol information is passed through a switched LAN. As a result, topology changes can take place at different times and at different places in a switched network. When a switch port transitions directly from non-participation in the stable topology to the forwarding state, it can create temporary data loops. Ports must wait for new topology information to propagate through the switched LAN before starting to forward frames. They must also allow the frame lifetime to expire for frames that have been forwarded using the old topology. Each port on a switch using Spanning-Tree Protocol exists in one of the following states:

  • Blocking
  • Listening
  • Learning
  • Forwarding

Movement of the Port States

From initialization to blocking – When Spanning-Tree is initialized, all ports start in the blocking state to prevent bridge loops. The port stays in a blocked state if the spanning tree determines that there is another path to the root bridge that has a better cost. Blocking ports can still receive BPDUs.

From blocking to listening or to disabled – Ports transition from the blocking state to the listening state. When a port is in the transitional listening state, it is able to check for BPDUs. This state indicates that the port is getting ready to forward but listens a little longer to make sure it will not create a loop.

From listening to learning or to disabled – When the port is in the learning state, it populates its MAC address table with the MAC addresses heard on its ports, but it does not forward frames.

From learning to forwarding or to disabled – In the forwarding state, the port is capable of sending and receiving data.

From forwarding to disabled – At any time the port can become nonoperational.
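
The transitions described above can be summarized as a small state table. The Python sketch below lists only the moves named in this article; it is an illustration, not Cisco's implementation.

    # Legal Spanning-Tree port-state transitions, as described above.
    TRANSITIONS = {
        "initialization": {"blocking"},
        "blocking":       {"listening", "disabled"},
        "listening":      {"learning", "disabled"},
        "learning":       {"forwarding", "disabled"},
        "forwarding":     {"disabled"},
    }

    def can_move(current, target):
        """True if Spanning-Tree allows a port to go from current to target."""
        return target in TRANSITIONS.get(current, set())

    print(can_move("blocking", "forwarding"))   # False: must pass through listening and learning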

Virtual LAN

The virtual LAN (VLAN) permits a group of users to share a common broadcast domain regardless of their physical location in the internetwork. Creating VLANs improves performance and security in the switched network by controlling broadcast propagation.

VLAN Characteristics

Within the switched internetwork, VLANs provide segmentation and organizational flexibility. Using VLAN technology, you can group switch ports and their connected users into logically defined communities of interest such as coworkers in the same department, a cross-functional product team, or diverse user groups sharing the same network application.

  • A VLAN is a logical broadcast domain that can span multiple physical LAN segments.
  • A VLAN can be designed to provide stations logically segmented by functions, project teams, or applications without regard to the physical location of users.
  • Each switch port can be assigned to only one VLAN.
  • Ports in a VLAN share broadcasts. Ports that do not belong to the same VLAN do not share broadcasts. This improves the overall performance of the network.
  • A VLAN can exist on a single switch or span across multiple switches.
  • VLANs can include stations in a single building or multiple-building infrastructures, or they can even connect across wide-area networks (WANs).
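
As a small illustration of the broadcast containment described in the list above, here is a Python sketch in which the port-to-VLAN mapping, port numbers, and function name are invented for the example.

    # Port -> VLAN assignment; each port belongs to exactly one VLAN.
    port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}

    def broadcast_ports(in_port):
        """A broadcast received on in_port is flooded only to ports in the same VLAN."""
        vlan = port_vlan[in_port]
        return [p for p, v in port_vlan.items() if v == vlan and p != in_port]

    print(broadcast_ports(1))   # -> [2, 5]; the ports in VLAN 20 never see the broadcast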

VLAN Assignment

Catalyst 1900 ports are configured with a VLAN membership mode that determines which VLAN they can belong to. Membership modes are assigned as either static or dynamic.

Static Assignment – The assignment of a VLAN to a port is statically configured by an administrator.

Dynamic Assignment – The Catalyst 1900 supports dynamic VLANs by using a VMPS (VLAN Membership Policy Server). The VMPS can be a Catalyst 5000 or an external server; the Catalyst 1900 cannot operate as the VMPS itself. The VMPS contains a database that maps MAC addresses to VLAN assignments. When a frame arrives on a dynamic port, the Catalyst 1900 queries the VMPS for the VLAN assignment based on the frame's source MAC address. A dynamic port can belong to only one VLAN at a time. Multiple hosts can be active on a dynamic port only if they all belong to the same VLAN.
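
The dynamic lookup can be pictured roughly as follows. The table contents and function name are hypothetical stand-ins, not the actual VMPS query mechanism.

    # Hypothetical stand-in for the VMPS database: source MAC -> VLAN.
    vmps_database = {
        "00:00:0c:12:34:56": 10,
        "00:00:0c:ab:cd:ef": 20,
    }

    def assign_dynamic_vlan(src_mac):
        """Return the VLAN for a frame arriving on a dynamic port, or None if
        the source MAC address is not in the VMPS database."""
        return vmps_database.get(src_mac)

    print(assign_dynamic_vlan("00:00:0c:12:34:56"))   # -> 10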

ISL Protocol

ISL, Inter-Switch Link, is a Cisco proprietary protocol for interconnecting multiple switches and for maintaining VLAN information as traffic goes between switches.

ISL Tagging

The ISL frame tagging used by the Catalyst series of switches is a low-latency mechanism for multiplexing traffic from multiple VLANs on a single physical path. It has been implemented for connections between switches, routers, and network interface cards used on nodes such as servers.

Ports configured as ISL trunks encapsulate each frame with a 26-byte ISL header and a 4-byte CRC before sending it out the trunk port.
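
Conceptually, the trunk port wraps each outgoing frame as sketched below. The 26-byte header and 4-byte CRC lengths come from the text above; the actual field layout is not modeled because it is not described here.

    ISL_HEADER_LEN = 26   # bytes, per the text above
    ISL_CRC_LEN = 4       # bytes

    def isl_encapsulate(frame, isl_header, isl_crc):
        """Sketch only: wrap a frame with the 26-byte ISL header and 4-byte CRC."""
        assert len(isl_header) == ISL_HEADER_LEN and len(isl_crc) == ISL_CRC_LEN
        return isl_header + frame + isl_crc

    # Every frame sent out an ISL trunk grows by 26 + 4 = 30 bytes of overhead.
    print(ISL_HEADER_LEN + ISL_CRC_LEN)   # -> 30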

VLAN Trunking Protocol (VTP)

VLAN Trunking Protocol (VTP) is a protocol used to distribute and synchronize identifying information about VLANs configured throughout a switched network.

Characteristics

Configurations made to a single VTP server are propagated across links to all connected switches in the network.

  • VTP allows switched network solutions to scale to large sizes by reducing the manual configuration needs in the network.
  • VTP is a Layer 2 messaging protocol that maintains VLAN configuration consistency by managing the additions, deletions, and name changes of VLANs across networks.
  • VTP minimizes misconfigurations and configuration inconsistencies that can cause problems, such as duplicate VLAN names or incorrect VLAN-type specifications.
  • A VTP domain is one switch or several interconnected switches sharing the same VTP environment. A switch is configured to be in only one VTP domain. By default, a Catalyst switch is in the no-management-domain state until it receives an advertisement for a domain over a trunk link, or until you configure a management domain.
  • VTP operates in one of three modes: server mode, client mode, or transparent mode. The default VTP mode is server mode, but VLANs are not propagated over the network until a management domain name is specified or learned.
  • VTP pruning is a configuration option that restricts flooded traffic within a management domain, so that broadcasts for a VLAN are not sent to switches that have no ports in that VLAN.
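
As a rough model of the propagation behavior described above, the sketch below copies a server's VLAN list to every client in the same management domain. The switch names, VLANs, and data structure are invented, and real VTP advertisements carry more information than this.

    # Hypothetical switches: name -> (vtp_mode, vtp_domain, local VLAN table)
    switches = {
        "Dist1": ("server", "CORP", {1: "default", 10: "engineering"}),
        "Acc1":  ("client", "CORP", {1: "default"}),
        "Acc2":  ("client", "CORP", {1: "default"}),
        "LabSw": ("client", "LAB",  {1: "default"}),
    }

    def propagate(server_name):
        """Copy the server's VLAN table to clients in the same VTP domain."""
        _, domain, vlans = switches[server_name]
        for name, (mode, dom, _) in switches.items():
            if name != server_name and dom == domain and mode == "client":
                switches[name] = (mode, dom, dict(vlans))

    propagate("Dist1")
    # Acc1 and Acc2 now know VLAN 10; LabSw (different domain) is untouched.
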
Author: dwirch

Derek Wirch is a seasoned IT professional with an impressive career dating back to 1986. He brings a wealth of knowledge and hands-on experience that is invaluable to those embarking on their journey in the tech industry.
