
R&M Data Center Handbook


R&M Data Center Handbook V2.0 © 08/2011 Reichle & De-Massari AG Page 1 of 156

Preface

The modern data center, along with the IT infrastructure, is the nerve center of an enterprise. Of key importance is not the data center's size but its capability to provide high availability and peak performance, 7 days a week, 365 days a year.

High-quality products alone are not enough to ensure continuous, reliable operation in the data center, nor is it enough simply to replace existing components with high-performance products. Of much greater importance is an integrated, forward-looking planning process based on detailed analyses that determine specific requirements and provide efficient solutions. Planning is the decisive factor in the construction of new systems as well as in expansions or relocations. In all cases, the data cabling system is the bedrock of all communication channels and therefore merits special attention. Data centers can have different basic structures, which in some cases are not standard-compliant, and it is essential to apply the parameters relevant to each case to create an efficient and economically viable nerve center.

Today more than ever, an efficient and reliable data center is a distinct competitive advantage. In addition to company-specific requirements, legal security standards and their impact need to be taken into account; these include directives such as the Sarbanes-Oxley Act (SOX), which provide the basic legal premises for data center planning. Furthermore, various standardization bodies are addressing cabling standards in data centers.

This handbook is designed to provide you with important knowledge and to point out the relationships between the multitude of challenges in the data center.

© 08/2011 Reichle & De-Massari AG (R&M) Version 2.0



Table of Contents

1. Introduction to Data Centers ..... 5
  1.1. Technical Terms ..... 5
  1.2. General Information ..... 5
  1.3. Standards ..... 8
  1.4. Availability ..... 8
  1.5. Market Segmentation ..... 9
  1.6. Cabling Systems ..... 10
  1.7. Intelligent Cable Management Solutions ..... 12
  1.8. Network Hardware and Virtualization ..... 12
  1.9. Data Center Energy Consumption ..... 13
  1.10. Energy Efficiency – Initiatives and Organizations ..... 14
  1.11. Green IT ..... 15
  1.12. Security Aspects ..... 16

2. Planning and Designing Data Centers ..... 20
  2.1. Data Center Types ..... 20
    2.1.1. Business Models and Services ..... 20
    2.1.2. Typology Overview ..... 21
    2.1.3. Number and Size of Data Centers ..... 22
  2.2. Classes (Downtime and Redundancy) ..... 23
    2.2.1. Tiers I – IV ..... 23
    2.2.2. Classification Impact on Communications Cabling ..... 24
  2.3. Governance, Risk Management and Compliance ..... 26
    2.3.1. IT Governance ..... 27
    2.3.2. IT Risk Management ..... 28
    2.3.3. IT Compliance ..... 28
    2.3.4. Standards and Regulations ..... 28
    2.3.5. Certifications and Audits ..... 33
    2.3.6. Potential Risks ..... 34
  2.4. Customer Perspective ..... 36
    2.4.1. Data Center Operators / Decision-Makers ..... 36
    2.4.2. Motivation of the Customer ..... 39
    2.4.3. Expectations of Customers ..... 40
    2.4.4. In-House or Outsourced Data Center ..... 43
  2.5. Aspects of the Planning of a Data Center ..... 44
    2.5.1. External Planning Support ..... 44
    2.5.2. Further Considerations for Planning ..... 45

3. Data Center Overview ..... 46
  3.1. Standards for Data Centers ..... 46
    3.1.1. Overview of Relevant Standards ..... 46
    3.1.2. ISO/IEC 24764 ..... 47
    3.1.3. EN 50173-5 ..... 49
    3.1.4. TIA-942 ..... 50
  3.2. Data Center Layout ..... 51
    3.2.1. Standard Requirements ..... 51
    3.2.2. Room Concepts ..... 52
    3.2.3. Security Zones ..... 53
  3.3. Network Hierarchy ..... 54
    3.3.1. Three-Tier Network ..... 54
    3.3.2. Access Layer ..... 55
    3.3.3. Aggregation / Distribution Layer ..... 55
    3.3.4. Core Layer / Backbone ..... 56
    3.3.5. Advantages of Hierarchical Networks ..... 57
  3.4. Cabling Architecture in the Data Center ..... 57
    3.4.1. Top of Rack (ToR) ..... 58
    3.4.2. End of Row (EoR) / Dual End of Row ..... 60
    3.4.3. Middle of Row (MoR) ..... 61
    3.4.4. Two-Row Switching ..... 62
    3.4.5. Other Variants ..... 62
  3.5. Data Center Infrastructure ..... 66
    3.5.1. Power Supply, Shielding and Grounding ..... 66
    3.5.2. Cooling, Hot and Cold Aisles ..... 69
    3.5.3. Hollow / Double Floors and Hollow Ceilings ..... 72
    3.5.4. Cable Runs and Routing ..... 73
    3.5.5. Basic Protection and Security ..... 75
  3.6. Active Components / Network Hardware ..... 78
    3.6.1. Introduction to Active Components ..... 79
    3.6.2. IT Infrastructure Basics (Server and Storage Systems) ..... 79
    3.6.3. Network Infrastructure Basics (NICs, Switches, Routers & Firewalls) ..... 84
    3.6.4. Connection Technologies / Interfaces (RJ45, SC, LC, MPO, GBICs/SFPs) ..... 87
    3.6.5. Energy Requirements of Copper and Fiber Optic Interfaces ..... 89
    3.6.6. Trends ..... 90
  3.7. Virtualization ..... 90
    3.7.1. Implementing Server / Storage / Client Virtualization ..... 91
    3.7.2. Converged Networks, Effect on Cabling ..... 93
    3.7.3. Trends ..... 94
  3.8. Transmission Protocols ..... 95
    3.8.1. Implementation (OSI & TCP/IP, Protocols) ..... 95
    3.8.2. Ethernet IEEE 802.3 ..... 98
    3.8.3. Fibre Channel (FC) ..... 102
    3.8.4. Fibre Channel over Ethernet (FCoE) ..... 104
    3.8.5. iSCSI & InfiniBand ..... 105
    3.8.6. Protocols for Redundant Paths ..... 106
    3.8.7. Data Center Bridging ..... 108
  3.9. Transmission Media ..... 109
    3.9.1. Coax and Twinax Cables ..... 110
    3.9.2. Twisted Copper Cables (Twisted Pair) ..... 110
    3.9.3. Plug Connectors for Twisted Copper Cables ..... 113
    3.9.4. Glass Fiber Cables (Fiber Optic) ..... 116
    3.9.5. Multimode, OM3/4 ..... 116
    3.9.6. Single Mode, OS1/2 ..... 117
    3.9.7. Plug Connectors for Glass Fiber Cables ..... 118
  3.10. Implementations and Analysis ..... 124
    3.10.1. Connection Technology for 40/100 Gigabit Ethernet (MPO/MTP®) ..... 126
    3.10.2. Migration Path to 40/100 Gigabit Ethernet ..... 129
    3.10.3. Power over Ethernet (PoE/PoEplus) ..... 135
    3.10.4. Short Links ..... 139
    3.10.5. Transmission Capacities of Class EA and FA Cabling Systems ..... 143
    3.10.6. EMC Behavior in Shielded and Unshielded Cabling Systems ..... 148
    3.10.7. Consolidating the Data Center and Floor Distributors ..... 155

4. Appendix ..... 156
  References ..... 156
  Standards ..... 156

All information provided in this handbook has been presented and verified to the best of our knowledge and in good faith. Neither the authors nor the publisher shall have any liability with respect to any damage caused directly or indirectly by the information contained in this book. All rights reserved. Dissemination and reproduction of this publication, texts or images, or extracts thereof, for any purpose and in any form whatsoever without the express written approval of the publisher is in breach of copyright and liable to prosecution. This includes copying, translating, use for teaching purposes or in electronic media. This book contains registered trademarks, registered trade names, and product names, and even if not specifically indicated as such, the corresponding regulations on protection apply.



1. Introduction to Data Centers

According to Gartner, cloud computing, social media and context-aware computing will shape the economy and IT in the coming years. Cloud computing is already changing the IT industry today, and more and more services and products are entering the market. Outsourcing projects and subtasks and procuring IT resources and services by means of the cloud (World Wide Web) is becoming a standard process. The attraction of cloud services is also the result of their considerable cost and efficiency advantages compared to traditional IT infrastructures. Server and storage virtualization help to optimize and concentrate IT infrastructures and save resources.

Consequently, high demands are placed on data center performance and availability, and server operations are required around the clock. A breakdown in a company's computer room can prove fatal, and it always results in enormous costs and dissatisfied users and customers.

1.1. Technical Terms

There are no standard, consistently used terms in the data center world, and companies' creativity is endless. We recommend always specifying what the supplier means or what the customer expects. The terms

data center – datacenter – data processing center – computer room – server room – server cabinet – colocation center – IT room – and more

stand for the physical structure that is designed to house and operate servers. Interpretations may vary. The term campus (or MAN) is also used for data centers spread over a larger area. We define "data center" as follows:

A data center is a building or premise that houses the central data processing equipment (i.e. servers and infrastructure required for operation) of one or several companies or organizations. The data center must consist of at least one separate room featuring independent power supply and climate control.

Consequently, there is a distinction between a data center and specific server cabinets or specific servers.

1.2. General Information

The data center is the core of a company; it creates the technical and infrastructural conditions required for virtually all business processes. It secures the installed servers and components in use, protects them from external dangers and provides the infrastructure required for continued reliable operation. The physical structure provides protection against unauthorized access, fire and natural disasters. The necessary power supply and climate control ensure reliability. Accurate, economical planning plus the sustainable operation of a modern data center are steps companies need to take to meet availability requirements.

The enormous infrastructural requirements involved lead many companies to opt for an external supplier and outsource their data center needs. Outsourcing companies (such as IBM, HP, CSC, etc.) offer their customers the use of data center infrastructures in which they manage the hardware as well as the system software. This offers the advantage of improved control of IT costs, combined with the highest security standards and the latest technology. In outsourcing, the systems and applications of a company are first migrated to the new data center in a migration project and are then run in the data center of the outsourcing company.

Along with data security, high availability and operational stability are top priorities. The data center must have the ability to run all the necessary applications and the required server types and classes. Due to the increasing number of real-time applications, the number of server variants is soaring as well. For the general performance of a server service, it does not matter whether the service, i.e. the software that is offered,

• runs with other services on the same server;
• is processed by a separate server computer; or
• is distributed to several servers because the load would otherwise be too great.



This last situation is especially common with public WWW servers, because heavily frequented Internet sites, such as search engines, large web shops or auction houses, generate a large quantity of data for processing. In this case, so-called load-balancing systems come into play. They are programmed to distribute incoming queries across several physical servers. The following basic types of server services exist:
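The distribution logic described above can be sketched in a few lines. The server names and the simple round-robin policy below are purely illustrative; real load balancers also consider server load, health checks and session affinity.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal sketch of a load-balancing system: incoming queries are
    distributed across several physical servers in strict rotation."""
    def __init__(self, pool):
        self._next_server = cycle(pool)  # endless rotation over the pool

    def route(self, request):
        # Assign the request to the next server in the rotation.
        return next(self._next_server)

# Hypothetical pool of three physical web servers behind one address.
balancer = RoundRobinBalancer(["web-01", "web-02", "web-03"])
assignments = [balancer.route(f"GET /search?q={i}") for i in range(6)]
```

Each of the three servers ends up handling every third query, which evens out the processing load as long as the requests are of similar cost.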

File Server
A file server is a computer attached to a network that provides shared access to computer files for workstations connected to that network. This means that users can send each other files and share them via a central exchange server. The special feature of file servers is that using them is as transparent as using a local file system on a workstation computer. Management of access rights is important, for not all users should have access to all files.

Print Server
A print server, or printer server, gives several users or client computers shared access to one or more printers. The biggest challenge is to automatically provide client computers with the appropriate printer driver for their respective operating systems, so that they can use the printer without having to install the driver software locally.

Mail Server
A mail server, or message transfer agent (MTA), does not necessarily have to be installed on an Internet provider's server; it can operate on the local network of a company. First, it is often an advantage when employees can communicate with each other by e-mail, and second, it is sometimes necessary to operate an internal e-mail server simply because Internet access is heavily restricted for security reasons, or communication between workstations and external mail servers is not allowed in the first place.

Web Server
The primary function of a web server (HTTP server) is to deliver web pages over a network to clients on request. Usually, this network is the public Internet (World Wide Web), but this form of information transfer is becoming more and more popular in local company networks (intranets) as well. The client uses a browser to request, display and explore web pages, and to follow, by mouse click, the hyperlinks contained within those pages, which link to other websites, files, etc. on the same or a different server.

Directory Services (DNS server)
Directory services are becoming increasingly important in IT.
A directory in this context is not a file system but an information structure, a standardized catalog of users, computers, peripherals and rights in a network. Information can be accessed network-wide via an entry in this directory, which means that directory services are a practical basis for numerous services, making work easier for administrators and life easier for users in large network environments. Here are some examples:

• Automatic software distribution and installation
• Roaming user profiles
• Single sign-on services
• Rights management based on computers, clients and properties

Application Server
An application server allows users on a network to share software applications running on the server.

• This can be a server in the LAN, which runs applications that are frequently used by clients (instead of pure file server services).

• Or it runs the application/business logic of a program (e.g. SAP R/3®) in a three-tier architecture (see section 3.3.1.), interacts with database servers and manages multiple client access.

• Or it is a web application server (WAS) for web applications, dynamically generating HTML pages (or providing web services) for an Internet or intranet connection.

Web services are automatic communication services over the Internet (or intranet) which can be found, interpreted and used through established and standardized procedures.
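As a minimal illustration of this idea, the sketch below models a web service as a function that accepts a standardized (JSON) request and returns a standardized reply. The service name, cities and readings are invented for illustration; a real deployment would expose such a function over HTTP.

```python
import json

# Hypothetical back-end data the service draws on.
READINGS = {"zurich": 18.5, "berlin": 16.0}

def temperature_service(request_body):
    """Toy web service: parse a JSON request, return a JSON response."""
    request = json.loads(request_body)
    city = request.get("city", "").lower()
    if city in READINGS:
        return json.dumps({"status": "ok", "celsius": READINGS[city]})
    return json.dumps({"status": "error", "message": "unknown city"})

reply = temperature_service('{"city": "Zurich"}')
```

Because both the request and the reply use an established, standardized format, any client that speaks JSON can find, interpret and use the service automatically, which is the defining property of a web service.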



Simple Application Services
In its simplest form, an application server merely stores the files of an application. The application is loaded into the client's random access memory (RAM) and launched there. This process differs only slightly from file serving. The application just needs to know that any necessary additional components or configuration data are located not on the computer it is launched on, but on the server from which it was loaded. This configuration makes sense when a high number of single-user application programs exist. It offers two advantages: first, it requires just one program installation on the server instead of an installation on each workstation, and second, costs are lower, since usually only one software license is required for the computer that runs the program. When an application is used by several computers, but not at the same time, the software can be installed on the server, used by all computers, and the licensing fee only needs to be paid once. Updating software is easier too.

Client–Server Applications
In the more complex forms of application servers, parts of a program, and in some cases the entire program, are launched directly on the server. In the case of large databases, for example, the actual data and the basic data management software are usually stored on the server. The components on client computers are called front ends, i.e. software components that provide an interface to the actual database.

Network
High-performance, fail-safe networks are required to enable client-server as well as server-server communication. Investing in server technology is not very profitable if the network is not qualitatively up to the task. Network components and transmission protocols are discussed in sections 3.6. and 3.8.
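The client-server split described above can be sketched with Python's built-in sqlite3 module standing in for the server-side database engine. The table and customer data are invented for illustration.

```python
import sqlite3

# Server side: the database engine holds the data and the basic
# data management logic (here: an in-memory SQLite database).
server_db = sqlite3.connect(":memory:")
server_db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
server_db.executemany("INSERT INTO customers VALUES (?, ?)",
                      [(1, "Alice"), (2, "Bob")])

def front_end_lookup(customer_id):
    """Client side: the front end only formulates the query and
    presents the result; it never touches the raw data files."""
    row = server_db.execute(
        "SELECT name FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    return row[0] if row else None
```

The front end exchanges only queries and results with the server, which is exactly why the network between the two, discussed in sections 3.6. and 3.8., must be fast and reliable.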
Network Structures and Availability
One way to increase the availability of a network is to design it for redundancy, meaning that several connections are provided to connect servers, and possibly storage systems, as well as network components to the network. This ensures that data traffic can still be maintained in case of a system, interface or connection failure. This is of critical importance especially for the backbone, which is literally the backbone of the infrastructure. This is why data center classifications were defined to specify different levels of network reliability. These are discussed in greater detail in section 2.2.

Access for Remote Users
In addition to the local network, many enterprises have remote users that must be connected. There are two categories of remote users: those working at a remote regional company office and those with mobile access or working from home. Two company sites are usually connected by means of a virtual private network (VPN), also called a site-to-site scenario. A VPN connection can also be established between individual users and the company's head office, which constitutes a site-to-end scenario. An SSL-encrypted connection is sufficient for providing access for a single user if necessary.

VPN Connections
In VPN connections, a VPN tunnel is established automatically and network data is transported between the company network and the client using encryption. In general, VPN connections support any kind of network traffic, i.e. terminal service sessions, e-mails, print data, file transfers and other data exchange. To provide the user with fast access to network resources, the transmission of the respective data packets must be prioritized over the rest of the network traffic. These requirements can be met with high-end routers. With regard to the operational reliability of Wide Area Network (WAN) links, these connections should also be laid out redundantly.
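The benefit of redundant paths can be quantified with a simple probability model. Assuming the paths fail independently of one another (an idealization), the connection is down only when every path is down at the same time; the figures below are illustrative.

```python
def combined_availability(path_availability, redundant_paths):
    """Availability of n independent redundant paths: the connection
    fails only if all paths fail simultaneously."""
    all_paths_fail = (1.0 - path_availability) ** redundant_paths
    return 1.0 - all_paths_fail

# A single path with 99% availability vs. two redundant paths.
single = combined_availability(0.99, 1)
dual = combined_availability(0.99, 2)
```

Doubling a 99% path raises the combined availability to 99.99%, which is why redundancy is the standard lever for meeting high availability targets.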



Storage Architectures

A storage server conventionally holds a sufficient number of local hard disk drives (DAS: Direct Attached Storage). Even with high numbers of hard disks there are rarely any technical problems, since modern servers can easily access several dozen hard disks thanks to RAID controllers (RAID: Redundant Array of Independent Disks). There is a trend emerging in data centers towards storage consolidation. This means that a storage network (SAN: Storage Area Network) is installed which includes one or several storage systems on which the servers store their data.
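As a side note to the RAID controllers mentioned above, the usable capacity of a disk array depends on the RAID level. A simplified calculator, covering only the common levels and ignoring hot spares and metadata overhead, might look like this:

```python
def usable_capacity_gb(disk_count, disk_size_gb, raid_level):
    """Simplified usable capacity for common RAID levels."""
    if raid_level == 0:      # striping, no redundancy
        return disk_count * disk_size_gb
    if raid_level == 1:      # mirroring: half the raw capacity
        return disk_count * disk_size_gb // 2
    if raid_level == 5:      # striping with one disk's worth of parity
        return (disk_count - 1) * disk_size_gb
    if raid_level == 6:      # striping with two disks' worth of parity
        return (disk_count - 2) * disk_size_gb
    raise ValueError("unsupported RAID level")
```

For example, four 1,000 GB disks yield 4,000 GB at RAID 0 but only 3,000 GB at RAID 5, the price paid for surviving a single disk failure.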

A storage-consolidated environment offers the following advantages:

• Better manageability (particularly for capacity extension etc.)

• Higher availability (requiring additional measures)

• Extended functions such as snapshotting, cloning, mirroring, etc.

SAN, NAS, iSCSI
Different technologies exist for accessing central storage resources:

• SAN: Even though the term SAN, Storage Area Network, is basically used for all kinds of storage networks independent of their technology, it is generally used today as a synonym for Fibre Channel SAN. Fibre Channel is currently the most widely used technology for storage networking. It is discussed in greater detail in section 3.8.3.

• NAS: Network Attached Storage, as the name suggests, is a storage system that can be connected directly to an existing LAN. NAS systems are primarily file servers. A point to mention is that there is no block-level access (exclusively assigned storage area) to a file share, so in general database files cannot be stored.

• iSCSI: iSCSI, Internet Small Computer System Interface, allows block-level access to a storage resource on the network as if it were locally connected through SCSI. Professional NAS systems (e.g. Network Appliance systems) support iSCSI access, so that the storage areas of these systems can also be used by database servers. On the server side, iSCSI can be used with conventional network cards. It is a cost-effective technology that is spreading fast.

However, the dominant technology in data center environments today is Fibre Channel, mainly because high-end storage systems have so far only been available with Fibre Channel connectivity.

1.3. Standards

The U.S. TIA-942 standard, the international ISO/IEC 24764 standard and the European EN 50173-5 standard define the basic infrastructure of data centers. They cover cabling systems, 19" technologies, power supply, climate control, grounding, etc., and also classify data centers into availability tiers. A detailed comparison of the different standards is provided in section 3.1.

1.4. Availability

Just a few years ago, there were still many enterprises that could have survived a failure of several hours of their IT infrastructure. Today, the number of companies that absolutely rely on the continuous availability of their IT is very high and growing. A study carried out by the Meta Group showed that when a company's key IT systems break down for more than 10 days, there is a 50 percent likelihood that the damage puts the company out of business within the following three to five years.



There are a number of classifications with regard to the reliability and availability of data centers, for example those issued by the Uptime Institute, BSI and BITKOM (overview in 2.1.2). The availability rating of a company's communications infrastructure is of crucial importance in developing, upgrading and analyzing an IT concept today. Consequently, the basic question is: "What is the maximum acceptable IT outage time for the company?"

These growing requirements for availability not only affect the IT infrastructure itself but also place high demands on the continuous securing of environmental conditions and supply. Redundancy in cooling and power supplies, multiple paths and interruption-free system maintenance have become standard for highly available IT infrastructures.

Before planning and coordinating technical components so as to achieve the targeted availability, there are additional points regarding risk assessment and site selection that must be considered. These include all possible site risks, be they of a geographical (air traffic, floods, etc.) or political (wars, conflicts, terrorist attacks, etc.) nature or determined by the surroundings (fire hazards due to gas stations, chemical storage facilities, etc.), which could affect the likelihood of a potential failure. Furthermore, potentially criminal behavior by company employees and by outsiders should also be taken into consideration.

Achieving high availability presupposes an analysis of technical options and solutions on the one hand, but it also requires the operator to plan and implement a comprehensive organizational structure, including the employment of trained service staff and the provision of spare parts and maintenance contracts. Precise instructions on how to behave in case of failure or emergency are also part of this list.

The term "availability" denotes the probability that a system is functioning as planned at a given point in time.

1.5. Market Segmentation

We can often read about Google's gigantic data centers. However, Google is no longer the only company running large server farms. Many companies like to keep their number of servers a secret, giving rise to speculation. Most of these are smaller companies, but there are also some extremely big ones among them. The following companies can be assumed to be running more than 50,000 servers:

• Google: There has been much speculation from the very beginning about the number of servers Google runs. According to estimates, the number presently amounts to over 500,000.

• Microsoft: The last time Microsoft disclosed their number of servers in operation was in the second quarter of 2008, and that number was 218,000 servers. In the meantime, it can be assumed that as a result of the new data center in Chicago, that number has grown to over 300,000. Provided the Chicago data center stays in operation, the number of Microsoft servers will continue to increase rapidly.

• Amazon: The company owns one of the biggest online shops in the world. Amazon reveals only little about their data center operations. However, one known fact is that they spent more than 86 million dollars on servers from Rackable.

• eBay: 160 million people are active on the platform of this large online auction house, plus another 443 million Skype users. This requires a gigantic data center. In total, more than 8.5 petabytes of eBay data are stored. It is difficult to estimate the number of servers, but it is safe to say that eBay is a member of the 50,000-server club.

Tier level | Introduced | Requirements
Tier I | In the 1960s | Single path for power and cooling distribution, no redundant components; 99.671% availability
Tier II | In the 1970s | Single path for power and cooling distribution, includes redundant components; 99.749% availability
Tier III | End of the 1980s | Multiple power and cooling distribution paths, but with only one active path; concurrently maintainable; 99.982% availability
Tier IV | 1994 | Multiple active power and cooling distribution paths, includes redundant components; fault-tolerant; 99.995% availability

Source: US Uptime Institute: Industry Standards Tier Classification
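The availability percentages in the tier table translate into concrete downtime budgets. A quick back-of-the-envelope conversion, ignoring leap years, can be sketched as:

```python
def annual_downtime_hours(availability_percent):
    """Expected downtime per year implied by an availability figure."""
    hours_per_year = 365 * 24  # 8760 hours, ignoring leap years
    return hours_per_year * (1.0 - availability_percent / 100.0)

# Availability figures from the Uptime Institute tier table above.
tiers = {"Tier I": 99.671, "Tier II": 99.749,
         "Tier III": 99.982, "Tier IV": 99.995}
downtime = {tier: round(annual_downtime_hours(pct), 1)
            for tier, pct in tiers.items()}
```

This works out to roughly 28.8 hours of downtime per year for Tier I, but only about 0.4 hours for Tier IV, which makes the practical difference between the tiers tangible.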



• Yahoo!: The Yahoo! data center is significantly smaller than that of Google or Microsoft. Yet, it can still be assumed that Yahoo! runs more than 50,000 servers to support their free hosting operation, the paid hosting services and the Yahoo! store.

• HP/EDS: This company operates huge data center capacities. Company documentation reveals that EDS runs more than 380,000 servers in 180 data centers.

• IBM: The area covered by the IBM data center measures 8 million square meters. This tells us that a very high number of servers is at work for the company and their customers.

• Facebook: The only information available is that Facebook has one data center with more than 10,000 servers. Since Facebook has over 200 million users storing over 40 billion pictures, it can safely be assumed that the number of servers is higher than 50,000. Facebook plans to use a data center architecture similar to Google's.

Google, eBay and Microsoft are known users of container data centers, which were a trend topic in 2010, according to Gartner. Other terms used to describe container-type data centers are:

modular container-sized data center – scalable modular data center (SMDC) – modular data center (MDC) – cube – black box – portable modular data center (PMDC) – ultra-modular data center – data center in-a-box – plug-and-play data center – performance-optimized data center – and more

Data center segmentation by center size in Germany, according to the German Federal Environment Agency (Bundesamt für Umwelt):

Data center type | Server cabinet | Server room | Small center | Medium center | Large center | Total
Total number of servers installed | 160,000 | 340,000 | 260,000 | 220,000 | 300,000 | 1,280,000
Percentage of servers installed | 12.5 % | 26.6 % | 20.3 % | 17.2 % | 23.4 % | 100 %
Number of data centers | 33,000 | 18,000 | 1,750 | 370 | 50 | 53,170
Percentage of all data centers | 62.0 % | 33.9 % | 3.3 % | 0.7 % | 0.1 % | 100 %

Data Center Inventory in Germany, as of November 2010
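The percentage rows in the inventory table follow directly from the counts. A quick consistency check, using the table's own figures, can be sketched as:

```python
# Installed servers per data center category, from the table above.
servers = {"Server cabinet": 160_000, "Server room": 340_000,
           "Small center": 260_000, "Medium center": 220_000,
           "Large center": 300_000}

total_servers = sum(servers.values())
# Share of all installed servers per category, rounded to one decimal.
shares = {category: round(100 * count / total_servers, 1)
          for category, count in servers.items()}
```

The computed shares (12.5 %, 26.6 %, 20.3 %, 17.2 %, 23.4 %) reproduce the table's percentage row exactly.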

See section 2.1.2. for additional data center types.

1.6. Cabling Systems

The communications cabling system is essential to the availability of IT applications in data centers. Without a high-performance cabling system, servers, switches, routers, storage devices and other equipment cannot communicate with each other to exchange, process and store data. However, cabling systems have often grown organically over time and are not fully capable of meeting today's requirements. The demands on data centers are high:

• High channel densities
• High transmission speeds
• Interruption-free hardware changes
• Ventilation aspects
• Support

Careful, foresighted planning and a well-structured communications cabling system should therefore be high on the list of responsibilities for data center operators. Furthermore, related regulations such as Basel II and SOX (U.S.) stipulate stringent transparency requirements.



System Selection

Due to the requirement of maximum availability and ever-increasing data transmission rates, quality demands on cabling components for data centers are considerably higher than on LAN products. Quality assurance thus starts in the early stages of planning, when selecting systems that meet the performance requirements listed here:

• Cable design of copper and fiber optic systems
• Bandwidth capacity of copper and fiber optic systems
• Insertion and return loss budget of fiber optic systems
• EMC immunity of copper systems
• Migration capacity to next higher speed classes
• 19" cabinet design and cable management

Both fiber optic and copper systems offer the option of industrially preassembled cabling components for turnkey, plug-and-play installations. These installations provide the highest possible reproducible quality, very good transmission characteristics and high operational reliability. Given the high requirements for availability and operational stability, shielded systems are the best choice for copper cabling. Global standards require a minimum of Class EA copper cabling.

The choice of supplier also requires careful consideration. The main requirements of a reliable supplier are – in addition to the quality of cabling components – professional expertise, experience with data centers and lasting supply performance.

Structure

Data centers are subject to constant change as a result of the short life cycles of active components. To avoid major changes to the cabling system with the introduction of every new device, a well-structured, transparent physical cabling infrastructure is recommended – one that is separated from the architecture and connects the various premises running the devices in a consistent, end-to-end structure. Data center layouts and cabling architecture are discussed in sections 3.2. and 3.4. respectively.

Redundancy and Reliability

The requirement of high availability necessitates the redundancy of connections and components. It must be possible to replace hardware without interrupting operation, and if a link fails, an alternative path must take over and run the application without any glitches. Proper planning of a comprehensive cabling platform is therefore essential, taking into account factors like bending radii, performance reliability and easy, reliable assembly during operation.
The availability of applications can be further increased by using preassembled cabling systems, which also reduce the amount of time installation staff need to spend in the data center's security area, both for the initial installation and for expansions or changes. This further enhances operational reliability. It is also important that all products be tested and documented under compliance with a quality management system.

Like the data center cabling itself, connections used for communication between data centers (e.g. redundant data centers and back-up data centers) and for outsourcing and the backup storage of data at a different location need to be redundant as well. The same applies to the integration of MAN and WAN provider networks.

Installation

In order to be qualified for installation and patching work in data centers, technicians need to be trained in the specifications of the cabling system. Suppliers like R&M provide comprehensive quality assurance throughout the entire value chain – from the manufacturing of components, through professional installation and implementation, up to professional maintenance of the cabling system. Another advantage of the above-mentioned factory-assembled cabling systems is the time saved during installation. Moreover, when extending capacity by adding IT equipment, the devices can be cabled together quickly using preassembled cabling, ensuring that the equipment and the associated IT applications are in operation in the shortest time; the same applies to hardware replacements.



Cabinet systems with a minimum width of 800 mm are recommended for the 19" server cabinet. They allow a consistent cable management system to be set up in both vertical and horizontal directions. Cabinet depth is usually determined by the passive and active components to be installed; a depth of 1000 mm to 1200 mm is recommended for the installation of active components. Ideally, a server cabinet has a modular structure, ensuring reliability at manageable costs. A modular cabinet can be dismantled or moved to a different location if needed. Modularity is also important with containment solutions and climate control concepts. Equally important is stability: given the high packing densities of modern server systems and storage solutions, including the power supply units, load capacities of over 1000 kg are required for server racks. This means that cabinet floors and sliding rails must also be designed for high loads; loads of 150 kg per floor tile or rail are feasible today. Cable management is another important aspect. With growing transmission speeds, it is absolutely essential to run power and data cables separately in copper cabling systems so as to avoid interference.

Due to increasing server performance and higher packing density in racks, ventilation concepts such as perforated doors and insulation between hot and cold areas in the rack are becoming more and more important. Further productivity-improving and energy-optimized solutions can be achieved by means of cold/hot aisle approaches, which are part of the rack solution.

Documentation and Labeling

A further essential prerequisite for good cabling system administration and sound planning of upgrades and extensions is meticulous, continuously updated documentation. There are numerous options available, from individual Excel lists to elaborate software-based documentation tools. It is absolutely essential that the documentation reflect the current state and the cabling that is actually installed at any given point in time. Related to documentation is the labeling of the cables; it should be unambiguous, easy to read and legible even in poor visibility conditions. Here too, numerous options exist, up to barcode-based identification labels. Which option is best depends on specific data center requirements. Maintaining uniform, company-wide nomenclature is also important, and to ensure unambiguous cable labeling, central data administration is recommended.

1.7. Intelligent Cable Management Solutions

Data and power supply cables need to be planned in advance and their routing must be documented to allow immediate response to changes in concepts and requirements. The cable routing on cable racks and runs needs to be planned meticulously. Raised floors are often used for cable routing and allow maintenance work to be carried out in the data center without the need for any structural work. Further information appears in sections 3.5.3. and 3.5.4. Cable penetration seals, however, often prove to be a weak point. Like walls, ceilings and doors, penetration seals need to fulfill all safety requirements with respect to fire, gas and water protection. To allow for swift and efficient changes and retrofits in the cabling, they also need to be flexible.

1.8. Network Hardware and Virtualization

A comprehensive examination of a data center must include server security needs and the network. Many companies have already converted their phone systems to Voice over IP (VoIP), and the virtualization of servers and storage systems is spreading rapidly. Virtualized clients will be the next step. This development means that business-critical services will run over data links, which also carry power to the terminal devices via Power over Ethernet (PoE). Along with the growing importance of network technology in ensuring uninterrupted business operations, security requirements are growing in this area too. The rack is the basis for dimensioning servers as well as components in network technology. Since active components are standardized to 19", network cabinets are usually based on the same platform. Requirements with regard to stability, fire protection and access control are comparable as well.



Network infrastructures in buildings are generally designed to last for 10 years; foresighted planning in the provisioning of network cabinets is therefore recommended. Accessories should be flexible to make sure that future developments can be easily integrated. In terms of interior construction, racks differ considerably. Due to the frequent switching at connection points, i.e. ports, of network components, cables in network cabinets need to be replaced more frequently than those in server cabinets. These processes, also called MACs (Moves, Adds, Changes), and the increasing density of ports make good cable management even more important. Top and bottom panels in the rack are subject to these demands as well: easy cable entry in these areas makes extensions easier and ensures short cable runs. Cable ducts and management panels ensure clean, fine distribution in the rack. In cable management, particular attention has to be paid to the stability of components. Minimum bending radii of cables must also be considered.

Climate control is another aspect which is rapidly gaining in importance with network cabinets. Switches and routers are increasingly powerful, thereby producing more heat. It is important to allow for expansion when selecting climate control devices, which range from passive cooling through the top panels, venting devices, double-walled housings and ventilators to top cooling units and cooling units between the racks. Find further information on network hardware and virtualization in sections 3.6. and 3.7. respectively.

1.9. Data Center Energy Consumption

Greenpeace claims that at current growth rates, data centers and telecommunication networks will consume about 1,963 billion kilowatt hours of electricity in 2020, which would translate into a threefold increase within 10 years. According to Greenpeace, that is more than the current electricity consumption of France, Germany, Canada and Brazil combined.
The powerful emergence of cloud computing and the use of the Internet as an operating system, not just as a way to access stored data, are reported to generate an enormous increase in the energy needs of the IT industry. Data centers are bursting at the seams. In their studies, Gartner analysts predict that data centers will reach their limits with respect to energy consumption and space requirements within the next three years. In their opinion, this is mostly due to technical developments such as new multi-core processors, blade servers and virtualization, which push density higher and higher. According to the analysts, density will increase at least tenfold in the next ten years, driving energy consumption for both operation and cooling to considerably higher levels. This is particularly true for data centers that have been in operation for a long time and use outdated technologies; in some of these, as much as 70% of the energy consumed is required just for cooling.
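The "threefold increase within 10 years" projection quoted above implies a compound annual growth rate that is easy to check. A minimal sketch (only the threefold-in-ten-years figure comes from the text; everything else is illustrative):

```python
# Annual growth rate implied by a threefold increase over 10 years.
years = 10
factor = 3.0
annual_growth = factor ** (1 / years) - 1
print(f"implied growth: {annual_growth:.1%} per year")  # about 11.6% per year
```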

The energy consumption of a processor today amounts to approx. 140 watts, that of a server approx. 500 watts and that of blade servers approx. 6300 watts. Each watt of computing performance requires roughly one watt of cooling. The table below summarizes the resulting energy cascade. Conclusion: 1 watt of IT savings = 3 watts of total savings!

In 2003, the average energy consumption per server cabinet amounted to 1.7 kW; in 2006 it was already up to 6.0 kW, and in 2008 it reached 8.0 kW. Today, energy consumption is 15 kW for a cabinet containing the maximum number of servers and 20 kW with the maximum number of blade servers. The overall energy consumption in a data center is broken down in the accompanying pie chart.

Energy consumption at server level       1.00 watt
DC/DC conversion                        + 0.18 watt
AC/DC conversion                        + 0.31 watt
Power distribution                      + 0.04 watt
Uninterruptible power supply (UPS)      + 0.14 watt
Cooling                                 + 1.07 watt
Building switchgear/transformer         + 0.10 watt
Total energy consumption                  2.84 watts

Source: Emerson
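The Emerson cascade above can be reproduced with a short calculation. The per-watt overhead figures are taken directly from the table; the rest is a sketch:

```python
# Energy cascade per 1 W of server-level consumption (Emerson figures above).
overheads = {
    "DC/DC conversion": 0.18,
    "AC/DC conversion": 0.31,
    "Power distribution": 0.04,
    "UPS": 0.14,
    "Cooling": 1.07,
    "Building switchgear/transformer": 0.10,
}

server_watts = 1.0
total = server_watts + sum(overheads.values())
print(f"Total facility draw per server watt: {total:.2f} W")

# Saving 1 W at the server level therefore saves about 2.84 W overall,
# which the handbook rounds to "1 watt IT savings = 3 watts of total savings".
```

This also makes the conclusion above concrete: every watt removed at the server level avoids nearly two additional watts of conversion, distribution and cooling overhead.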



1.10. Energy Efficiency – Initiatives and Organizations

There are numerous measures for increasing energy efficiency in data centers. The chain of cause and effect starts with applications, continues with IT hardware and ends with the power supply and cooling systems. A very important point is that measures taken at the very beginning of this chain – at the causes – are the most effective ones. When an application is no longer needed and the server in question is turned off, less power is consumed, losses in the uninterruptible power supply decrease, and so does the cooling load.

Virtualization – a way out of the trap

Virtualization is one of the most effective tools for a cost-efficient green computing solution. By partitioning physical servers into several virtual machines to process applications, companies can increase their server productivity and downsize their extensive server farms. This approach is so effective and energy-efficient that the Californian utility corporation PG&E offers incentive rebates of 300 to 600 U.S. dollars for each server that is eliminated thanks to Sun or VMware virtualization products. These rebate programs compare the energy consumption of existing systems with that of the systems in operation after virtualization. The refunds are paid once the qualified server consolidation project is implemented. They are calculated on the basis of the resulting net reduction in kilowatt hours, at a rate of 8 cents per kilowatt hour. The maximum rebate is 4 million U.S. dollars or 50 percent of the project costs. By implementing a virtual abstraction level to run different operating systems and applications, an individual server can be cloned and thus used more productively. Through virtualization, energy savings can be increased in practice by a factor of three to five – and even more in combination with a consolidation to high-performance multi-processor systems.
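The rebate arithmetic described above can be sketched as follows. The rate (8 cents/kWh) and the caps (4 million dollars or 50 percent of project cost) come from the text; the function name and the consumption figures in the example are hypothetical:

```python
# Sketch of the PG&E-style virtualization rebate described above.
# Rate and caps are from the text; the kWh figures below are invented.
RATE_USD_PER_KWH = 0.08
MAX_REBATE_USD = 4_000_000

def rebate(kwh_before: float, kwh_after: float, project_cost_usd: float) -> float:
    """Rebate based on net kWh reduction, capped at $4M or 50% of project cost."""
    saved_kwh = max(kwh_before - kwh_after, 0.0)
    uncapped = saved_kwh * RATE_USD_PER_KWH
    return min(uncapped, MAX_REBATE_USD, 0.5 * project_cost_usd)

# Hypothetical example: consolidating 20 physical servers onto 4 hosts.
print(rebate(kwh_before=87_600, kwh_after=17_520, project_cost_usd=50_000))
```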
Cooling – great potential for savings

A rethinking process is also taking place in the area of cooling. The cooling system of a data center is turning into a major design criterion. The continuous increase in processor performance is leading to a growing demand for energy, which in turn leads to considerably higher cooling loads. It therefore makes sense to cool partitioned sections of the data center individually, in accordance with the specific way heat is generated in each area. The challenge is to break the vicious circle of an increased need for energy leading to more heat, which in turn has to be cooled away, again consuming a lot of energy. Only an integrated, overall design for a data center and its cooling system allows performance requirements for productivity, availability and operational stability to be reconciled with an energy-efficient use of hardware. In some data centers, construction design focuses more on aesthetics than on efficiency, an example being hot aisle/cold aisle constructions. Water or liquid cooling can have an enormous impact on energy efficiency and is reported to be up to 3000 times more efficient than air cooling. Cooling is further discussed in section 3.5.2.

Key Efficiency Benchmarks

Different approaches are available for evaluating the efficient use of energy in a data center. The approach chosen by the Green Grid organization works with two key benchmarks: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE). PUE expresses how much total facility power is required per unit of IT equipment power, while DCiE rates what share of the energy actually reaches the IT equipment. Both values are calculated from total facility power and IT equipment power: DCiE is the quotient of IT equipment power and total facility power, i.e. the reciprocal of PUE (DCiE = 1/PUE), and is expressed as a percentage.

A DCiE of 30 percent means that only 30 percent of the energy is used to power the IT equipment; this corresponds to a PUE value of 3.3. The closer the PUE gets to 1, the more efficiently the data center uses its energy. Google claims a PUE value of 1.21 for six of its largest facilities. Total facility power includes the energy used by the power distribution switchboard, the uninterruptible power supply (UPS), the cooling system, climate control and all IT equipment, i.e. computers, servers and associated communication devices and peripherals.

PUE    DCiE    Level of efficiency
3.0    33%     very inefficient
2.5    40%     inefficient
2.0    50%     average
1.5    67%     efficient
1.2    83%     very efficient
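The two benchmarks defined above can be computed directly from two power readings. A minimal sketch (the function names are ours; the example figures are invented to reproduce the DCiE = 30% case from the text):

```python
# PUE and DCiE from facility and IT power readings (definitions above).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power per unit of IT power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE is the reciprocal of PUE, expressed as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Example from the text: a DCiE of 30% corresponds to a PUE of about 3.3.
print(round(pue(1000.0, 300.0), 1))   # 3.3
print(round(dcie(1000.0, 300.0), 1))  # 30.0
```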



In addition to the existing key benchmarks, the Green Grid is introducing two new metrics whose goal is to support IT departments in optimizing and expanding their data centers. These new metrics are called Carbon Usage Effectiveness (CUE) and Water Usage Effectiveness (WUE). CUE will help managers determine the amount of greenhouse gas emissions generated by the IT gear in a data center facility. Similarly, WUE will help managers determine the amount of water used to operate IT systems.

Initiatives and Organizations

The Green Grid is an American industry association of suppliers of supercomputers and chips, dedicated to the concept of environment-friendly IT, also called green IT. The consortium was founded by the companies AMD, APC, Cisco, Dell, Hewlett-Packard, IBM, Microsoft, Rackable Systems, Sun Microsystems and VMware, and its members are IT companies and professionals seeking to improve energy efficiency in data centers. The Green Grid develops manufacturer-independent standards (e.g. in the context of the IEEE P802.3az Energy Efficient Ethernet Task Force), as well as measuring systems and processes to lower energy consumption in data centers.

The European Code of Conduct on Data Centres Energy Efficiency was introduced by the European Commission in November 2008 to curb excessive energy consumption in data center environments. The code comprises a series of voluntary guidelines, recommendations and examples of best practices to improve energy efficiency. Data centers implementing an improvement program recognized by the EU Commission may use the code's logo, and every year, data centers that are especially successful receive an award. In the medium term, quantitative minimum requirements will also be defined.

The SPEC server benchmark test of the U.S. Standard Performance Evaluation Corporation (SPEC) has kicked off the efficiency competition in the IT hardware sector. Manufacturers are focusing on producing and testing configurations that are as efficient as possible.

1.11. Green IT

Green IT is about reducing operational costs while at the same time enhancing a data center's productivity. The target is measurable, short-term results. A top priority for IT managers in organizations is the efficiency of data centers. Financial service providers, with their enormous need for computing power, are not the only ones who can profit from green IT; today, companies in all industrial sectors are paying much more attention to their electricity bills. According to a study by Jonathan Koomey of the Lawrence Berkeley National Laboratory and Stanford University, energy costs for servers and data centers have doubled in the last five years.

In 2010, the U.S. Environmental Protection Agency (EPA) projected that the electricity consumption of data centers would double again in the coming five years, generating additional costs of 7.4 billion dollars per year. These bills land in the finance departments of the companies, so the pressure is now on IT managers to reduce energy costs, which in turn results in environmental benefits. Along with the automotive and real estate industries, ICT is one of the key starting points for improving climate protection. According to a Gartner study¹, ICT accounts for 2 to 2.5 percent of global CO2 emissions, about the same as the aviation industry. The study states that a quarter of these emissions are caused by large data centers and their constant need for cooling. In office buildings, IT equipment normally accounts for over 20 percent, in some offices for over 70 percent, of the energy used. In addition to the energy demand for the production and operation of hardware, i.e. computers, monitors, printers and phones, the materials used and production methods also need to be taken into account. This latter area includes the issue of hazardous substances: whether such substances are involved in production, or whether poisonous substances such as lead or bromine are contained in the end product and released during use or disposal. More detailed specifications are provided by the EU RoHS directive (Restriction of Hazardous Substances).

¹ Gartner (2007), “Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2 Emissions”



1.12. Security Aspects

In many countries, legislation stipulates that IT systems are an integral part of corporate processes and thus no longer only a tool to advance a company's success, but an element that is bound by law and in itself part of the corporate purpose. Laws and regulations such as the German Control and Transparency in Business Act (KonTraG), the Basel II Accord and the Sarbanes-Oxley Act (SOX) almost entirely integrate a company's own IT into the main corporate processes. As a consequence, IT managers have to deal with significant liability risks. Companies and organizations find themselves facing IT-related complications such as customer claims for compensation, productivity losses and negative effects on the corporate image, to name but a few. The need to set up IT systems that are more secure and more available is therefore a question of existential importance for companies.

The first step is a comprehensive analysis phase to identify weaknesses in IT structures, in order to determine the exact requirements for IT security. The next step is the planning process, taking all potential hazards in the environment of the data center into consideration and providing additional safeguards if necessary. A detailed plan must be worked out in advance to avoid a rude awakening, defining room allocations, transport paths, room heights, cable laying routes, raised floor heights and telecommunications systems.

Taking a broader view of IT security reveals that it goes beyond purely logical and technical security. In addition to firewalls, virus protection and storage concepts, it is essential to protect IT structures from physical damage. Regardless of the required protection class, ranging from basic protection to high availability with minimum downtime, it is essential to work out a requirement-specific IT security concept. Cost-effective IT security solutions are modular, so that they can meet specific requirements in a flexible manner. They are scalable, so that they can grow in line with the company, and, most of all, they are comprehensive, ensuring that in the event of any hazard the protection is actually in place. It is therefore paramount to know the potential hazards beforehand, as a basis for implementing a specific security solution.

Fire Risk

Only about 20 percent of all fires start in the server room or its immediate environment; nearly 80 percent start outside the IT structures. This risk therefore needs to be examined on two levels. Protection against fire originating in the server room can be provided by early fire detection (EFD) systems and by fire-alarm and extinguishing systems. These systems can also be designed redundantly, so that false alarms can be avoided. EFD systems draw ambient air from racks by means of active smoke extraction systems, which detect even the smallest, invisible smoke particles. Digital particle counters, as used in laser technology, can also be applied here.

Due to high air speeds in climate-controlled rooms, the smoke is rapidly diluted, meaning that EFD systems must have a sufficiently high detection sensitivity. Disturbances can be avoided or filtered out with filters and intelligent signal-processing algorithms. Professional suppliers also offer these systems in a version that combines fire-alarm and extinguishing systems, which can be easily integrated into 19" server racks for space efficiency.

Using non-poisonous extinguishing gases, fires can be put out in their pyrolysis phase (fire ignition phase), effectively minimizing potential damage and preventing the fire from spreading. Extinguishing gas works much faster than foam, powder or water, causes no damage and leaves no residue. In modern systems, gas cartridges can even be replaced and activated without the help of technicians. In addition to FM-200 and noble gases (e.g. argon), nitrogen, Inergen and carbon dioxide are also used to smother fires through oxygen displacement. Then there are extinguishing gases which put out fires by absorbing heat, e.g. the newer Novec™ 1230; their advantage is that only small quantities are required.

In addition to the use of extinguishing gases, the oxygen level in fire-hazard rooms such as data centers can be permanently reduced (so-called inertization). The ambient air is split into its individual components via an air decomposition system, reducing the oxygen level to approx. 15 percent and thereby providing a means of early fire prevention. This oxygen reduction does not mean that people cannot enter the data center, as it is in principle non-hazardous to humans. Both EFD systems and fire-alarm and extinguishing systems are now available from leading manufacturers in space-saving, installation-friendly 1U versions, showing that effective protection is not a question of space.

Fire in the data center: On the evening of March 1, 2011, a fire broke out in the high-performance data center in Berlin. For security reasons, all servers were shut down and the entire space was flooded with CO2 to smother the fire. (German Red Cross)



Water Risk

An often neglected danger for IT systems is water. The main threat is usually not pipe leaks or floods, but the extinguishing water used to fight the fire threats discussed above. The damage caused by a fire is often less severe than the damage caused by the water used to extinguish it.

This means that IT rooms need to stay watertight during the fire-fighting process, and they must be resistant to long-standing stagnant water such as occurs in a flood. Watertightness should be proven and independently certified as EN 60529-compliant. Protection from stagnant water over a period of 72 hours is the current state of the art for high-availability systems. The latest developments allow data centers to be equipped with wireless sensors that detect leaks early, send warning signals and automatically close doors if required. This is particularly important when highly efficient liquid-cooling systems are used for racks.

Smoke Risk

Even if a fire is not raging in the immediate vicinity of a data center, smoke still presents a risk of severe damage to IT equipment. In particular, burning plastics such as PVC create poisonous and corrosive smoke gases: burning one kilogram of PVC releases approx. 360 liters of hydrochloric acid gas and produces up to 4500 cubic meters of smoke gas. This can destroy an IT facility in a short time, a fact that reduces the mean time between failures (MTBF) substantially. MTBF is the predicted elapsed time between inherent failures of a system during operation.
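The PVC figures quoted above scale linearly with the amount of burning material, which makes the hazard easy to quantify. A minimal sketch (the per-kilogram figures are from the text; the function name and the example quantity are ours):

```python
# Smoke-gas volumes from burning PVC, using the per-kg figures quoted above:
# approx. 360 L of hydrochloric acid gas and up to 4500 m^3 of smoke gas per kg.
HCL_L_PER_KG_PVC = 360.0
SMOKE_M3_PER_KG_PVC = 4500.0

def smoke_gas(kg_pvc: float) -> tuple[float, float]:
    """Return (liters of HCl gas, cubic meters of smoke gas) for kg_pvc burned."""
    return kg_pvc * HCL_L_PER_KG_PVC, kg_pvc * SMOKE_M3_PER_KG_PVC

# Hypothetical example: 2.5 kg of burning cable sheathing.
hcl_l, smoke_m3 = smoke_gas(2.5)
print(hcl_l, smoke_m3)  # 900.0 11250.0
```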

Reliable protection can only be provided by hermetically sealed server rooms which are able to resist these hazardous gases and protect the valuable components against the threat. Smoke gas resistance tested in accordance with EN 18095 is essential. In Germany, the level of water and gas resistance is defined based on the IP categorization; a data center should have a protection class of IP56.

Power Supply Risk

Uninterruptible power supply (UPS) systems take over when the power network fails. Modern UPS systems (online systems) operate continually, supplying consumers via their power circuits; this eliminates the brief yet dangerous change-over time. The UPS system simply and reliably bridges the time until the power network is up and running again, and thanks to integrated batteries, UPS systems also operate continuously if power is off for a longer period of time. UPS systems are classified in accordance with EN 50091-3 and EN 62040-3 (VFI). For reliable breakdown protection, equipment used in data centers should fulfill the highest quality class, 1 VFI-SS-111.

UPS systems are divided into single-phase and multi-phase groups of 19" inserts and stand-alone units of different performance classes. The units provide a clean sinusoidal voltage and effectively balance out voltage peaks and "noise". Particularly user-friendly systems can be extended as required and retrofitted during operation. If, however, the power supply network stays offline for several hours, even the best batteries run out of power. This is where emergency power systems come into play: fully self-sufficient systems that independently generate the power needed to keep the data center running and to recharge the batteries in the UPS system. These emergency systems are usually diesel generators that start up in the event of a power failure, once the supply has been taken over by the UPS systems. New research indicates that in the future these diesel engines could be run on resource-conserving fuels such as vegetable oil, similar to a cogeneration (combined heat and power) unit. This would mean that the units could continuously generate power in an environmentally friendly manner without additional CO2 emissions, and this power could even be sold profitably when it is not required for data center operation. Fuel cells will also become increasingly important as power sources for emergency power systems. Fuel cells reduce total cost of ownership (TCO) and offer distinct advantages over battery-buffered back-up systems with respect to service life, temperature fluctuations and back-up times. In addition, they are very ecological because they generate pure water as a reaction product.

Air-Conditioning Risk

Modern blade server technologies boost productivity in data centers. Climate-control solutions are primarily concerned with transporting away the heat emitted in the process. Data center planners must remember that every productivity boost also increases the demand on the cooling capacity of the air-conditioning units in a data center.
With maximum thermal loads of up to 800 W/m², cooling units that are suspended from the ceiling or mounted to walls can be used. With thermal loads above 800 W/m², however, the data center requires floor-mounted air-conditioning units which blow the air downwards into the raised floor.
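The placement rule above amounts to a simple threshold check. A minimal sketch (the 800 W/m² threshold is from the text; the function name and return strings are ours):

```python
# Choose an air-conditioning approach from the area thermal load,
# using the 800 W/m^2 rule of thumb stated above.
def cooling_approach(thermal_load_w_per_m2: float) -> str:
    if thermal_load_w_per_m2 <= 800:
        return "ceiling- or wall-mounted cooling units"
    return "floor-mounted units blowing into the raised floor"

print(cooling_approach(600))   # ceiling- or wall-mounted cooling units
print(cooling_approach(1200))  # floor-mounted units blowing into the raised floor
```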



In general, air-conditioning units can be placed inside or outside a data center. Inside placement is better suited to rack-based cooling or the targeted cooling of hot spots in a server room: operating costs are lower, the associated noise stays inside the room, the units are protected from unauthorized access, and the protective wall is not weakened by additional openings for ventilation ducts. The advantages of outside placement are that service technicians do not need to enter the room for maintenance work, the air-conditioning units require no extra space in the data center, the fire load is not increased by the air-conditioning system, and a fresh air supply can generally be provided without additional cost. In most cases, redundancy in accordance with Tier III and IV (see section 2.2.1) is only required for wall- and ceiling-mounted air-conditioning units, which have a rather low cooling capacity. Where higher capacities, and thus stand-alone units, are required, an "N+1" redundancy in accordance with Tier II should be established: a number of units run continuously while an additional unit acts as a reserve. In order to establish and maintain the recommended relative humidity range of 40% to 60%, the air-conditioning units should feature both air humidifiers and dehumidifiers. A safe bet is to opt for a system certified by Eurovent (the interest group of European manufacturers of ventilation and air-conditioning systems). For cooling hot spots in data centers, the use of liquid-cooling packages is also an option. These packages extract the emitted heat along the entire length of the cabinet by means of redundant, high-performance fans, discharging it via an air/water heat exchanger into a cold water network or a cooler.

Dust Risk

Dust is the natural enemy of sensitive IT systems and does not belong in secure data centers. Fine dust particles can drastically reduce the life cycle of ventilators and other electronic units.
One main source of dust is maintenance work and staff, so any unnecessary intrusion into secured data centers must be avoided. An intelligent IT room security concept keeps the rooms absolutely dust-free, and the dust-free policy also applies to extension or upgrade work. The dust-tightness in place should comply with the specifications of EN 60529 and fulfill protection rating IP56 (the first characteristic numeral covering dust, the second water ingress; see Water Risk) in order to avoid unpleasant surprises at a later stage.

Unauthorized Access Risk

The data center is one of the most sensitive areas in a company. It is critical that only authorized persons have access and that every data center access is documented. A study by the International Computer Security Association (ICSA) showed that internal attacks on IT systems are more frequent than external ones. Protection of the data center must therefore first meet all requirements in terms of protection against unauthorized access, sabotage and espionage, and also ensure that authorized persons can enter only the rooms they need to perform their defined tasks. Burglary protection in accordance with EN 1627, resistance class III (RC III), is easily implemented. All processes are to be monitored and recorded in accordance with the relevant documentation and logging regulations. If possible, the air-conditioning system and electrical equipment should be physically separated from the servers so that they can be serviced from the outside. For access control purposes, biometric or standard access control solutions can be installed, or a combination of the two; biometric systems in combination with magnetic card scanners enhance the security level considerably. Most of all, the access control solution installed should fulfill the specific requirements of the operator. The highest level of security can be guaranteed by the new vein recognition technology.
Its decisive advantage is its precision: a false acceptance rate of less than 0.00008 percent and a false rejection rate of only 0.01 percent. Moreover, it ensures highly hygienic handling, since operating the device does not require direct contact. With video surveillance systems using image sensors in CCD and CMOS technology, up to 1,000 cameras can be managed with the matching software, regardless of manufacturer. Video surveillance systems provide transparency, monitoring and reliability in data centers. Advanced video management technology enables modern surveillance systems to manage and record alarm situations. For images to be analyzed and used as evidence, an intelligent system must provide the proper interfaces and processing capabilities.

Explosion Risk

The risk of terror attacks or other catastrophes that could trigger an explosion must be factored in right from the start when planning a highly available security room concept. Modern, certified server rooms need to undergo an explosion test in accordance with the SEAP standard. High-security modular server rooms are built with pressure-resilient wall panels to withstand heavy explosions, protecting valuable IT systems from irreparable damage. IT systems also need to be protected against debris and vandalism to ensure real all-round protection.
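To put the quoted recognition rates into perspective, the expected number of errors over a large number of attempts can be computed directly. The attempt count below is an assumption for illustration; only the two rates come from the text:

```python
# Expected error counts for the vein-recognition rates quoted above.
# The number of attempts is an assumed figure, not from the handbook.
FAR = 0.00008 / 100.0   # false acceptance rate, as a fraction
FRR = 0.01 / 100.0      # false rejection rate, as a fraction

attempts = 1_000_000
false_accepts = FAR * attempts
false_rejects = FRR * attempts
print(f"expected false accepts in {attempts:,} attempts: {false_accepts:.1f}")
print(f"expected false rejects in {attempts:,} attempts: {false_rejects:.0f}")
```

Even over a million authentication attempts, fewer than one false acceptance would be expected on average.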



Future Viability

In these times of short product life cycles and continually growing demands on IT systems, the long-term planning of communications structures is becoming increasingly complex for many companies. Future developments need to be taken into account when planning a data center: Will the data center become larger or smaller in the future? Will it require a new site, and how can an existing data center be kept secure during operation?

Are there options for setting up a data center outside company grounds, or for carrying out a complete relocation without an expensive dismantling of the components? A partner with professional experience in building data centers stands at the customer's side from the beginning of the project and does not abandon them once the center is completed, but provides long-term support to the company. Networks in data centers will undergo significant changes in the coming years, since corporate IT itself is being restructured. Trends such as virtualization and the resulting standardization of infrastructures will challenge the network architectures in use today.

Flexibility and Scalability

In order for data center operators to benefit from both high flexibility and investment security, they need to be able to choose suppliers of secure data centers on the basis of valid criteria. Certificates from independent test organizations make an important contribution to that end, as does an external inspection of the construction work during and after construction. Scalable solutions are indispensable for the efficient use of data center infrastructures; only suppliers who fulfill this requirement should be included in the planning process. A well-planned use of building and office space can minimize the risk of a total system failure by means of a decentralized arrangement of data center units. Using existing building and office space intelligently does not necessarily involve new construction: modular security room technologies also allow for the decentralized installation of secure server rooms. Thanks to this modularity, they can be integrated into existing structures cost-effectively, and they can easily be extended, changed or even relocated if necessary. This room structuring, designed to meet specific needs and requirements, translates into considerable cost savings for the operator.
Security rooms can often be rented or leased, which also makes short-term extensions easy to carry out.

Maintenance

Regardless of the size of an enterprise, the importance of the availability of IT systems is constantly growing. This is why maintenance work is also necessary once a secure data center is completed. Continuous, documented maintenance and monitoring of IT structures at defined intervals is indispensable in this day and age, yet it is often neglected. A long-term service concept must be established, one that is effective even when a failure does not occur. Only a robust concept of required services that is adapted to specific data center requirements provides all-around security for data centers.

Maintenance, service and warranties for each aspect, i.e. climate control, fire-alarm and early fire detection systems, UPS systems, emergency power systems, video surveillance and access control, need to be covered in an "all-round worry-free package" that also incorporates the physical and power environment of IT structures. A wide range of services is available, covering specific requirements. Case-by-case evaluation is required to determine whether a company needs full technical customer service, including monitoring and 24/7 availability, or a different solution.

Remote Surveillance

Special remote surveillance tools allow for external monitoring and control of all data center functions. In the event of an alarm, a pre-determined automatic alarm routine is activated without delay. This routine might be an optical or acoustic signal sent as a message through an interface to the administrator or a defined call center. In addition, these tools control the fire extinguishing systems and can be programmed to activate other measures in the alarm sequence plan. Color displays at the racks indicating the status of a system allow for visual inspection of system states, for example via a web cam.

Integrated Approach

A consultation process on data centers that takes into account the entire corporate structure follows an integrated approach. A comprehensive risk analysis, combined with the analysis and objective assessment of existing building structures at the locations in question, is a basic requirement. The entire planning process, including a customer requirement specification, is an integral element of an offer. This allows each business to realize a secure data center that meets all its requirements.



2. Planning and Designing Data Centers

The volumes of data that must be processed are constantly increasing, demanding higher productivity in data centers. At the same time, data center operators need to keep their operational costs in check, improve profitability and eliminate sources of error that could cause failures. Statistics show that many failures occurring in data centers are caused by problems in the passive infrastructure or by human error. Increasing availability starts with quality planning of the infrastructure and the cabling system.

2.1. Data Center Types

The data center sector is teeming with technical terms describing the various business models and services offered by data center operators. This section lists definitions of the key terms.

2.1.1. Business Models and Services

The term data center does not reveal what type of business model an operator runs or what type of data center facility it is. The terms

Housing – Hosting – Managed Services – Managed Hosting – Cloud Services – Outsourcing – and others

describe the services offered. What they have in common is that the operator provides and manages the data center's infrastructure. However, not every supplier of data center services actually runs his/her own data center. There are different business models in this area as well:

• The supplier is renting space from a data center operator;

• He acts as reseller for this data center operator;

• He is renting the space according to need.

In the case of housing services, the operator provides the rented data center space plus cabling. The customer usually supplies his own servers, switches, firewalls, etc. The equipment is managed by the customer or a hired third party. In the case of hosting services, the web host provides space on a server or storage space for use by the customer, along with Internet connectivity. The services include web hosting, share hosting, file hosting, free hosting, application service provider (ASP), Internet service provider (ISP), full service provider (FSP), cloud computing provider, and more. In both managed services and managed hosting, the contracting parties agree on specific services on a case-by-case basis. All conceivable combinations are possible: data center operators running the clients' servers, including the operating system level, data center operators providing the servers for their clients, etc. Even less clearly defined are the cloud services. Since they are a trend topic, nearly every company is offering cloud services, for example SaaS (Software as a Service), IaaS (Infrastructure as a Service), PaaS (Platform as a Service) or S+S (Software plus Service). It is important to first establish the specific components of a product portfolio. Pure housing services qualify as outsourcing, as does relocating the entire IT operation to be run by a third-party company.

Areas

The terms

collocation – open collocation – cage – suite – room – rack – cabinet – height unit – and others

refer to the space rented in a data center.



• Collocation (also spelled colocation) stands for a space or a room in a data center rented for the customer's own IT equipment. The term open collocation usually refers to a space shared by several customers.

• Cage – suite – room refer to a separate room or lockable section in a data center. This separate section is reserved for the exclusive use of the customer's IT equipment. The section's design depends on the operator but customers usually decide on matters such as cabling and infrastructure in these sections.

• Rack – cabinet: These terms are used interchangeably. Depending on the data center operator and the rented space, the customer can decide which models to use. Sometimes ¼ and ½ racks are available. Particularly in hosting services, the number of height units is also relevant.

2.1.2. Typology Overview

There is no reliable information available on the total number of data centers worldwide or on their differentiation and classification in terms of types and sizes. Different authors and organizations classify data centers into different types, but a globally applicable, consistent categorization does not yet exist. The approaches differ with regard to a data center's purpose:

Typology based on statistics

This typology has been applied to provide information about the energy consumption of data centers (see US-EPA 2007, TU Berlin 2008, Bailey et al. 2007). US-EPA uses the following terms, among others: server closet, server room, localized data center, mid-tier data center, enterprise-class data center. The German Federal Environment Agency further developed and refined this approach in a study of resource and energy consumption published in November 2010. Some of the figures are presented below in section 2.1.3.

Typology based on data center availability

Reliability or availability is an essential quality criterion of a data center. There are a number of data center classifications based on availability (BSI 2009, Uptime Institute 2006, BITKOM 2009, and others). The following table presents some examples; for each level there are guidelines for data center operators on how to achieve it.






BSI availability classes:

VK 0   ~59%       approx. 2-3 weeks/year   no requirements
VK 1   99%        87.66 hours/year         normal availability
VK 2   99.9%      8.76 hours/year          high availability
VK 3   99.99%     52.6 minutes/year        very high availability
VK 4   99.999%    5.26 minutes/year        highest availability
VK 5   99.9999%   0.526 minutes/year       disaster tolerant

Uptime Institute tier classes:

Tier I     99.671%   28.8 hours/year
Tier II    99.749%   22.0 hours/year
Tier III   99.982%   1.6 hours/year
Tier IV    99.995%   24 minutes/year

Availability categories (maximum downtime per year):

A   72 hours/year
B   24 hours/year
C   1 hour/year
D   10 minutes/year
E   0 minutes/year
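The downtime figures in these classifications follow directly from the availability percentages. A small sketch, assuming a 365-day year:

```python
# Downtime per year implied by an availability percentage (365-day year).
HOURS_PER_YEAR = 365 * 24  # 8,760 h

def downtime_hours_per_year(availability_percent: float) -> float:
    return HOURS_PER_YEAR * (1.0 - availability_percent / 100.0)

# A few levels from the tables above, for comparison.
for label, avail in [("VK 2", 99.9), ("Tier III", 99.982), ("VK 4", 99.999)]:
    h = downtime_hours_per_year(avail)
    print(f"{label}: {avail}% -> {h:.2f} h/year ({h * 60:.1f} min/year)")
```

The computed values match the tabulated ones: 99.9% corresponds to 8.76 hours and 99.999% to about 5.26 minutes of downtime per year.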



Typology based on data center purpose

In many cases, data centers are classified on the basis of the underlying business model, e.g. housing and collocation data centers, high-performance data centers, and others.

Typology based on operator type

Suppliers of data centers often categorize data centers by their operators, e.g. by industry sector (bank, automobile, research, telecommunication, etc.) or in terms of public authorities or private companies. Another categorization distinguishes between enterprise data centers and service data centers.

• Enterprise Data Centers (company-owned) Data centers performing all IT services for a company and belonging to that company (be it in the form of an independent company or a department).

• Service Data Centers (service providers) Data centers specialized in providing data center services for third parties. Typical services are housing, hosting, collocation and managed services, and exchange services for the exchange of data.

Enterprise data centers are increasingly being offered on the market as service data centers (service providers), particularly by public utility companies.

2.1.3. Number and Size of Data Centers

The above-mentioned study by the Federal Environment Agency includes a classification for the German market. The figures and data in the following table were taken from that study.

Data center type          Server cabinet   Server room   Small DC   Mid-size DC   Large DC

Ø number of servers       4.8              19            150        600           6,000
Ø total power             1.9 kW           11 kW         105 kW     550 kW        5,700 kW
Ø floor size              5 m2             20 m2         150 m2     600 m2        6,000 m2
Ø network, copper         30 m             170 m         6,750 m    90,000 m      900,000 m
Ø network, fiber optic    –                10 m          1,500 m    12,000 m      120,000 m

Number of data centers in Germany                                                 Total

2008                      33,000           18,000        1,750      370    50     53,170

Extrapolation for 2015:
Business as usual         34,000           17,000        2,150      680    80     53,910
Green IT                  27,000           14,000        1,750      540    75     43,365
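As a quick plausibility check, the 2008 figures can be aggregated into a nationwide power estimate by multiplying the count of each installation type by its average total power. This is a rough calculation from the table values only, not a figure stated in the study itself:

```python
# Rough aggregate of the 2008 figures: installations per type times the
# average total power per installation (table values only; the resulting
# nationwide total is our own back-of-the-envelope estimate).
types_2008 = {                        # (count, avg. total power in kW)
    "server cabinet":       (33_000,     1.9),
    "server room":          (18_000,    11.0),
    "small data center":    ( 1_750,   105.0),
    "mid-size data center": (   370,   550.0),
    "large data center":    (    50, 5_700.0),
}
total_mw = sum(n * kw for n, kw in types_2008.values()) / 1000.0
print(f"estimated total IT power, Germany 2008: {total_mw:.0f} MW")
```

The few dozen large data centers account for a share of the total comparable to the tens of thousands of server cabinets, which illustrates why both ends of the size spectrum matter for energy policy.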








Selection of housing and hosting service providers in Germany and Switzerland, 2011, with the number of data centers operated: Artmotion (1), COLT (>5), data11 (1), Databurg (1), datasource Switzerland (1), Equinix (>90), e-shelter (7), Global Switch (8), green.ch (4), Hetzner (>3), Host-Europe (2), I.T.E.N.O.S. (>8'000), interXion (28), Layerone (1), Level 3 (>200), Strato (2), Telecity (8), telemaxx (3).



2.2. Classes (Downtime and Redundancy)

The best-known classification system, developed by the Uptime Institute, defines availability in data centers for both physical structures and technical facilities. The Institute was founded in 1993, is based in Santa Fe (New Mexico, USA) and numbers approx. 100 members. The tier classes I to IV define the probability that a system will be functional over a specified period of time. SPOF (Single Point of Failure) refers to a component of the system whose failure causes the entire system to collapse. There can be no SPOF in a high-availability system.

2.2.1. Tiers I – IV

Among other factors, the tier classification is based on the elements of a data center's infrastructure. The lowest value of the individual elements (cooling, power supply, communication, monitoring, etc.) determines the overall evaluation. Also taken into account are the sustainability of measures, operational processes and service assurance. This is particularly evident in the transition from Tier II to Tier III, where the alternative supply path allows maintenance work to be performed without interfering with the operation of the data center, which in turn is reflected in the MTTR value (Mean Time to Repair).
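The link between repair time and availability can be illustrated with the common steady-state formula availability = MTBF / (MTBF + MTTR). The MTBF and MTTR figures below are assumptions for illustration, not values from the handbook:

```python
# Steady-state availability model (a common assumption for illustration;
# the handbook itself does not give this formula or these figures).
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time a component is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Halving the MTTR -- e.g. via an alternative supply path that allows
# concurrent maintenance -- raises availability without changing MTBF.
print(f"MTTR 8 h: {availability(10_000, 8):.4%}")
print(f"MTTR 4 h: {availability(10_000, 4):.4%}")
```

This is why the Tier II to Tier III transition, which shortens effective repair windows, shows up directly in the availability figures.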


                                    Tier I     Tier II    Tier III             Tier IV
Distribution paths
(power and cooling)                 1          1          1 active / 1 alt.    2 active
Redundancy, active components       N          N+1        N+1                  2 (N+1)
Redundancy, backbone                no         no         yes                  yes
Redundancy, horizontal cabling      no         no         no                   optional
Raised floors                       12"        18"        30"-36"              30"-36"
UPS / generator                     optional   yes        yes                  dual
Concurrently maintainable           no         no         yes                  yes
Fault tolerant                      no         no         no                   yes
Availability                        99.671%    99.749%    99.982%              99.995%

N: needed capacity. Source: Uptime Institute

Tier I

This class is appropriate for smaller companies and start-ups. There is no need for extranet applications, Internet use is mainly passive and availability is not a top priority.

A Tier I data center basically has non-redundant capacity components and single non-redundant distribution networks. An emergency power supply, an uninterruptible power supply and raised floors are not required; maintenance and repair work are scheduled, and failures of site infrastructure components will cause disruption of the data center.

Tier II

This is the class for companies that have already moved part of their business processes online, mostly during standard office and business hours. In the event of non-availability there will be delays but no data loss. There are no business-critical delays (non-time critical backups).

A Tier II data center has redundant capacity components (N+1) and single non-redundant distribution networks. An uninterruptible power supply and raised floors are required.

Tier III & IV

These are the classes for companies that use their IT equipment for internal and external electronic business processes and require 24-hour availability. Maintenance work or shut-down times have no damaging effects. Tier III and IV are the basis for companies operating in e-commerce, electronic market transactions or financial services (high-reliability security backups).

A Tier III data center has redundant capacity components and multiple distribution networks that serve the site's computer equipment. Generally, only one distribution network serves the computer equipment at any time.

A fault-tolerant (Tier IV) data center has redundant capacity components and multiple distribution networks simultaneously serving the site's computer equipment. All IT equipment is dual powered and installed properly so as to be compatible with the topology of the site's architecture.



2.2.2. Classification Impact on Communications Cabling

To ensure availability and reliability of the communications network in a data center, a redundant cabling configuration is required, in accordance with the respective Tier level. Data center layout is described in section 3.2 and cabling architecture in 3.4. The transmission media (copper and fiber optic) used for the specific areas are discussed in section 3.9.

Tier I: Backbone cabling, horizontal cabling and active network components are not redundant. The set-up is a simple star topology, so network operation can be interrupted. However, data integrity must be ensured.

Tier II: Here too, backbone cabling and horizontal cabling are not redundant, but the network components and their connections are. This set-up is also a star topology; network operation may only be interrupted at specified times, or only minimally during peak hours of operation.



Tier III: Both backbone cabling and active network components are redundant in this star topology, since network operation must be maintained without interruption within defined periods of time and during peak hours of operation.

Tier IV: Backbone cabling and all active components such as servers, switches, etc., as well as the uninterruptible power supply and the emergency power generator, are redundant, 2 x (N+1). Horizontal cabling may also be redundant. Uninterrupted system and network functions must be ensured 24/7. This Tier level is completely fault-tolerant, meaning that maintenance and repair work can be performed without interrupting operation. There is no SPOF (Single Point of Failure) at this Tier level.

The four classes Tier I to IV listed in the American standard TIA-942 originate with the Uptime Institute and were integrated into TIA-942 for classification purposes. However, it is important to note that the Tiers in TIA-942 only resemble the Uptime Institute Tiers on the surface. The Tier classification of the Uptime Institute is based on a comprehensive analysis of a data center from the point of view of the user and the operator. The TIA, on the other hand, is standards-oriented and bases its decisions on a Yes/No checklist.
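The availability gain from redundant distribution paths can be sketched with a simple probabilistic model. Independence of path failures is assumed here, which real installations only approximate:

```python
# Simple probabilistic sketch: with redundant, independent distribution
# paths, service fails only if all paths fail at once (independence is
# an assumption; common-cause failures reduce the gain in practice).
def parallel_availability(a_single: float, n_paths: int = 2) -> float:
    return 1.0 - (1.0 - a_single) ** n_paths

a = 0.999  # assumed availability of one path (99.9%)
print(f"one path : {a:.4%}")
print(f"two paths: {parallel_availability(a):.6%}")  # about 99.9999 %
```

Under this model, duplicating a 99.9% path squares the unavailability, which is why Tier IV's 2 x (N+1) design eliminates any single point of failure.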



For cabling structure designers, the TIA-942 Tiers offer an appropriate basis for designing a data center. The names differ as well: TIA-942 uses Arabic numerals (Tier 1 to 4), whereas the Uptime Institute uses Roman numerals (Tier I to IV).

Implementation Time and Investment Costs


                            Tier I      Tier II       Tier III        Tier IV
Implementation time         3 months    3-6 months    15-20 months    15-20 months
Relative investment costs   100%        150%          200%            250%
Costs per square meter      ~ $4,800    ~ $7,200      ~ $9,600        ~ $12,000

Source: Uptime Institute

2.3. Governance, Risk Management and Compliance

Today, a large number of organizations (public authorities and private companies) rely on modern information and communication technologies (ICT). Information technology provides the basis for business processes, and more and more companies are therefore highly dependent on IT systems that work flawlessly. In 2008, the Federal Statistical Office of Germany supplied the following information:

• 97% of all companies with more than 10 employees use IT systems;

• 77% of these companies present their products on the Internet;

• 95% of all companies in Germany make use of the Internet.

Since the 1960s, Switzerland has ranked second worldwide (behind the U.S.) in terms of IT use. In proportion to the size of its population, Switzerland uses more computers than other European countries.





Of businesses that suffered a major system failure, some never recover, some go out of business within three years, some are acquired or merge, and only some fully recover (source: Wirtschaftswoche magazine, issue 18, 24 April 1997).

Governance, risk management and compliance, or GRC, is an umbrella term that covers an organization's approach to these three areas.



In 2010, a scientifically established definition, validated by GRC experts, of Integrated Governance, Risk and Compliance Management was published (Racz et al. 2010): “GRC is a holistic approach that ensures an organization acts ethically and in accordance with its risk appetite, internal policies and external regulations through the alignment of strategy, processes, technology and people, to improve efficiency and effectiveness.”

Governance

Governance is corporate management based on defined guidelines. These guidelines include the definition of corporate goals, the planning of the resources necessary to accomplish them and the methods applied to implement them.

Risk Management

Risk management is the set of processes by which management identifies, analyzes and, where necessary, responds appropriately to risks that might adversely affect the realization of the organization's business objectives. Possible responses include early recognition, controlling, avoidance and damage control.

Compliance

Compliance means conforming to internal and external standards in the flow and processing of information. Requirements include parameters from standardization drafts, data access regulations and their legal bases.

The relevance of these three areas to IT systems is discussed in the following sections.

2.3.1. IT Governance

IT governance refers to the corporate structuring, controlling and monitoring of the IT systems and IT processes in a company. Its objective is to dovetail IT processes with corporate strategy; it is closely linked to overall corporate governance. The main components of IT governance are:

• Strategic Alignment: continuously aligning IT processes and structures with strategic enterprise objectives

• Resources Management: responsible and sustainable use of IT resources

• Risk Management: identification, evaluation and management of IT-related risks

• Performance Measurement: measuring the performance of IT processes and services

• Value Delivery: monitoring and evaluating IT contributions The implementation of IT governance is supported by powerful, internationally recognized procedures. These include:

• Control model – focus financial reporting: COSO (Committee of Sponsoring Organizations of the Treadway Commission)

• Corporate governance in information technology: ISO/IEC 38500:2008

• Control model for IT management: Cobit (Control Objectives for Information and Related Technology)

• Implementation of IT service management: ISO 20000, ITIL (Information Technology Infrastructure Library)


• Information security: ISO/IEC 27002 and basic IT protection catalogs




2.3.2. IT Risk Management

IT risk management helps to ensure that an organization's strategic objectives are not jeopardized by IT failure. The term risk refers to any negative deviation from planned values, whereas chance refers to any positive deviation. Reasons to implement an IT risk management system:

• Legal aspect

o Careful management (see SOX, KonTraG, RRG)

• Economic aspects

o Providing fundamental support by reducing errors and failures in IT systems

o Negotiating advantage with potential customers by enhancing their trust in the ability to supply

o Advantage in rating assessment of banks for credit approval

o Premium advantages with insurance policies (e.g. fire and business interruption insurance)

• Operational aspects

o Wide range of applications and saving options

There are four classic risk response strategies:

• Avoidance

• Mitigation

• Transfer

• Acceptance

IT risks can be categorized as follows:

• Organizational risks

• Legal and economic risks

• Infrastructural risks

• Application and process-related risks

2.3.3. IT Compliance

IT compliance means conforming to current IT-related requirements, i.e. laws, regulations, strategies and contracts. Compliance requirements typically include information security, availability, data storage and data protection. IT compliance mainly affects stock corporations and limited liability companies (Ltd.), since in these companies CEOs and management can be held personally liable for compliance with legal regulations; non-observance may result in civil and criminal proceedings. The Federal Data Protection Act in Germany specifies a custodial sentence of up to two years or a fine in the event of infringement. Frequently, there are also standards and good-practice guidelines to comply with, generally by contractual agreement with customers or competitors. The core task basically involves documentation and the resulting adaptation of IT resources, as well as the analysis and evaluation of potential problems and hazards (see also risk analysis). IT resources refer to hardware, software, IT infrastructures (buildings, networks), services (e.g. web services) and the roles and rights of software users. It is crucial that the implementation of compliance is understood to be a continuous process and not a short-term measure.

2.3.4. Standards and Regulations

There are numerous bodies worldwide responsible for developing and specifying security standards and regulations.



The German Federal Association for Information Technology, Telecommunications and New Media (BITKOM) has compiled basic standards on IT security and risk management in its compass of IT security standards (current edition 2009). Below are some extracts of particular relevance for data centers.

The compass groups the relevant standards and regulations into information security management systems (ISMS), security measures and monitoring, and risk management, and rates the relevance of each (high, medium or low) for different enterprise types: banks/insurances, authorities/administrations, consultancy firms, HW/SW manufacturers, IT service providers, the public health system, law firms, skilled trades and industry, service providers, and internationally oriented companies.

Information Security Management System (ISMS)

• ISO/IEC 27001

This standard describes the basic requirements for an ISMS in an organization (company or public authority). It emerged from the British standard BS 7799-2.

Its objective is to specify the requirements for an ISMS in a process approach.

The standard primarily addresses company management and IT security managers, and secondarily implementation managers, technicians and administrators.

The ISMS implementation can be audited by internal and external auditors.

• ISO/IEC 27002

This is a guide to information security management. The standard emerged from the British BS 7799-1.

Basically, this standard is to be applied where a need for information protection exists.

The standard addresses IT security managers.

• ISO/IEC 27006

Requirements for bodies providing audits and certifications of information security management systems.



• IT Baseline Protection

Since 1994, the Federal Office for Information Security (BSI) in Germany has been issuing the IT Baseline Protection Manual, which provides detailed descriptions of IT security measures and requirements for the IT security management.

In 2006 the manual was adapted to international standards, rendering it fully compatible with ISO/IEC 27001 while also incorporating the recommendations specified in ISO/IEC 27002.

ISO/IEC 27001 certification based on IT baseline protection can be applied for with the BSI.

Security Measures and Monitoring

The following standards deal with enhancing IT network security. IT network security is not restricted to internal corporate networks, but also includes the security of external network access points. A selection of standards follows below.

• ISO/IEC 18028 (soon 27033)

The objective of this standard is to focus on IT network security by specifying detailed guidelines aimed at different target groups within an organization. It includes security aspects in the handling, maintenance and operation of IT networks plus their external connections.

The standard comprises five parts:

1. Guidelines for network security
2. Guidelines for the design and implementation of network security
3. Securing communications between networks using security gateways
4. Remote access
5. Securing communications between networks using Virtual Private Networks (VPN)

• ISO/IEC 24762

This standard provides guidelines on the provision of information and communications technology disaster recovery (ICT DR) services. It includes requirements and best practices for implementing disaster recovery services for information and communications technologies, for example emergency workstations and alternate processing sites.

• BS 25777:2008

The objective of this standard is to establish and maintain an IT continuity management system.

Risk Management

• ISO/IEC 27005

This standard provides guidelines for systematic, process-oriented risk management that supports the risk management requirements of ISO/IEC 27001.

• MaRisk / MaRisk VA

The Minimum Requirements for Risk Management (MaRisk) for banks were first issued in 2005 by the German Federal Financial Supervisory Authority (BaFin). They include requirements for IT security and disaster recovery planning. In 2009 another edition was published, which was expanded to include insurance companies, leasing and factoring companies (MaRisk VA).

Relevant Standards


• COSO

The COSO model was developed by the organization of the same name (Committee of Sponsoring Organizations of the Treadway Commission). It provides a framework for internal control systems and describes:

o A method for introducing the primary components of an internal control system, e.g. the control environment in a company

o Risk evaluation procedures
o Specific control activities
o Information and communication measures in a company
o Measures required for the monitoring of the control system

COSO is the basis of reference models like Cobit and facilitates their introduction.



• ISO/IEC 38500

The standard "Corporate Governance in Information Technology – ISO/IEC 38500" describes six key principles for corporate governance in IT:

o Responsibility: adequate consideration of IT interests by the top management
o Strategy: include IT potential and IT strategy in corporate strategy planning and align IT strategy with corporate principles
o Acquisition: rigorously demand-oriented IT budget planning, based on transparent decision making
o Performance: align IT services with the requirements of corporate areas and departments
o Conformance: IT systems' compliance with applicable laws, regulations and internal and external standards
o Human Behavior: the needs of internal and external IT users are taken into account in the IT
The standard assigns three functions to each of these principles:

o Assessment: continuous assessment of IT performance
o Control: controlling the business-specific focus of IT measures
o Monitoring: systematic monitoring of IT compliance and IT systems productivity

• Cobit

To support management and IT departments in their IT governance responsibilities, the ISACA (Information Systems Audit and Control Association) created Cobit (Control Objectives for Information and Related Technology), a comprehensive control system and framework encompassing all aspects of information technology use, from planning through operation to disposal, thus providing a comprehensive view of IT issues.

Here are the Cobit elements and their respective target groups:

o Executive summary – senior executives such as the CEO, CIO
o Governance and control framework – senior operational management
o Implementation guide – middle management, directors
o Management guidelines – middle management, directors
o Control objectives – middle management
o IT assurance guide – line management and auditors


• ITIL

ITIL is a best practices reference model for IT service management (ITSM). The acronym ITIL originally derived from "IT Infrastructure Library", though the current third edition has a wider scope.

The objective of ITIL is to shift the scope of the IT organization from the purely technological to include processes, services and customer needs.

The ITIL V3 version improved the strategic planning process to dovetail IT service management with corporate strategy, and thus helped ensure compatibility with the IT service management standard ISO/IEC 20000.

IT professionals can acquire ITIL certification. There are three levels:

1. Foundation
2. Intermediate
3. Advanced

Companies can have their process management certified in accordance with the international standard ISO/IEC 20000 (formerly BS 15000).

• IDW PS 330 (Germany)

When performing an annual audit, an auditor is required by law to examine the (accounting-relevant) IT systems of the audited company. In accordance with articles 316 to 324 of the German Commercial Code (HGB), the auditor also assesses the regularity of the accounting system and thus compliance with principles of correct accounting (GoB) as stipulated in articles 238 ff., in article 257 of the HGB as well as in articles 145 to 147 of the Tax Code (AO).

Furthermore, IT systems are audited according to principles of regular data-processing supported accounting systems (GoBS) and the accompanying letter of the Federal Ministry of Finance (BMF). The principles of data access and verification of digital documents (GDPdU) as stipulated in the BMF letter, containing regulations for the storing of documents, need also to be considered.



Depending on the complexity of the IT systems in operation, comprehensive testing in accordance with the IDW Auditing Standard 330 (IDW PS 330) of the IT system or of selected units or subsystems of the IT system may be required.

• SWISS GAAP FER (Switzerland)

The Swiss GAAP FER focuses on the accounting system of small and medium-sized organizations and companies operating on a national level. Also included are non-profit organizations, pension funds, insurance companies and property and health insurers. These organizations are provided with an effective framework for authoritative accounting to provide a true and fair view of the company's net assets, financial position and earnings situation. Promoting communication with investors, banks and other interested parties is also a GAAP FER objective. Moreover, it increases comparability of annual financial reports across time and between organizations.


Regulations

• KonTraG (Germany)

The KonTraG (Control and Transparency in Business Act) came into effect in 1998. It is not a standalone law but a so-called amending act (Artikelgesetz), meaning that its amendments and changes are incorporated into other commercial laws such as the Stock Corporation Act, the Commercial Code or the Limited Liability Companies Act (GmbHG).

The KonTraG is aimed at establishing business control and transparency in stock corporations and limited liability companies. This is achieved by setting up a monitoring system for the early identification of developments that threaten their existence and by requiring management to implement a corporate risk management policy. The act stipulates personal liability of members of the board of management, the board of directors and the managing director in the event of any infringement.

• Accounting and Auditing Act (RRG, Switzerland)

The comprehensive revision of Switzerland's audit legislation in 2008 made risk assessment compulsory. It is now subject to review by the auditing body. Overall responsibility and responsibility for monitoring lies with the highest decision-making and governing body of the company, e.g. the board of directors in a stock corporation. Responsibility for introduction and implementation lies with the board of managers.

The revision of the auditing obligations is applicable to all corporate forms, i.e. stock corporations and companies in the form of limited partnerships, limited liability companies, collectives, and also foundations and associations. Publicly held companies and companies of economic significance in this respect need to subject their annual financial statements to proper auditing.

• SOX (US)

The Sarbanes-Oxley Act of 2002 (also called SOX, SarbOx or SOA) is a United States federal law enacted on July 30, 2002. It was enacted in reaction to a number of major corporate and accounting scandals, including those involving Enron and WorldCom. Its objective is to improve the reliability of financial reporting by the companies that make up the nation's securities market.

The bill defines responsibilities of management and external and internal auditors. The companies have to prove that they have a functional internal auditing system. The boards are responsible for the accuracy and validity of corporate financial reports.

The bill's provisions apply to all companies worldwide that are listed on an American stock exchange, and, in certain cases, their subsidiaries as well.


• EURO-SOX (EU)

The 8th EU Directive, also known as EURO-SOX, came into effect in 2006. It is aimed at establishing an internationally recognized regulation for the auditing of financial statements in the European Union (EU). It closely resembles its American equivalent, the SOX act.

But unlike SOX, EURO-SOX applies to all capital companies, not only to market-listed companies. Small and medium-sized companies are required to address issues such as risk management, IT security and security audits.

In Germany, the EU directive was incorporated into the Accounting Law Modernization Act (BilMoG), turning it into an applicable national law, mandatory as of financial year 2010-1.



• Basel II

The term Basel II refers to the entire set of directives on equity capital put forward by the Basel Committee on Banking Supervision over the past years. These directives apply to all banking institutions and financial service providers. In Switzerland, implementation was carried out by FINMA; in Germany, the rules were implemented through the German Banking Act (KWG), the Solvency Regulation (SolvV) and the Minimum Requirements for Risk Management (MaRisk).

Basel II primarily relates to internal banking regulations. Banks apply the same credit-risk-related standards to their customers that they themselves have to comply with. This means that in the private sector as well, loan conditions depend directly on credit risks.

• Basel III

The term Basel III refers to the rules and regulations adopted by the Basel Committee at the Bank for International Settlements in Basel (Switzerland) as an extension of the existing equity capital regulations for financial institutions. These supplementary rules and regulations are based on the experiences with Basel II, issued in 2007, and on knowledge gained as a result of the global financial crisis of 2007.

The provisional end version of Basel III was published in December 2010, even though individual aspects are still under discussion. Implementation within the European Union is achieved by amendments to the Capital Requirements Directive (CRD) and expected to come into effect gradually starting in 2013.

• Solvency II

Solvency II is the insurance industry's equivalent of Basel II. It is scheduled to come into effect in European Union member states in the first quarter of 2013.

• Federal Data Protection Act (BDSG, Germany) / Data Protection Act (DSG, CH)

Data protection acts regulate the handling of personal data. From a data center perspective, the technical and organizational measures that might be necessary are of primary relevance, especially those concerning regulations on access control and availability control.

In addition to national legislation, there are state-specific and canton-specific data protection laws, in Germany and Switzerland respectively.

2.3.5. Certifications and Audits

Certification and audit preparations are costly and time-consuming, so every organization must ask itself whether certification makes economic sense. Possible motivations and reasons for certification are:

• Competitive differentiation: A certification can strengthen customer, staff and public trust in an organization. It can also make an organization stand out among competitors, provided it is not a mass product. Certifications are also gaining importance in invitations to tender.

• Internal benefit of a certification: A certification can have the additional benefit of reducing possible security weaknesses.

• Regulations

The major certifications and audits in the data center sector are:

• DIN EN ISO 9001 – Quality Management

• ISO 27001 – Information Security Management Systems

• ISO 27001 based on IT Baseline Protection

• ISO 20000 – IT Service Management

• DIN EN ISO 14001 – Environmental Management

• SAS 70 Type II certification – SOX relevant

• IDW PS 951 – German equivalent of the American SAS 70

• Data Center Star Audit – ECO Association (Association of the German Internet Economy)

• TÜV certification for data centers – (Technical Inspection Association, Germany)

• TÜV certification for energy-efficient data centers



2.3.6. Potential Risks

According to a 2010 report by the IT Policy Compliance Group, 80 percent of companies have poor visibility into their IT risks, taking three to nine months or longer to classify their IT risk levels. Inability to prioritize risks, the lack of a comprehensive risk view and inadequate control assessments all contribute to this problem.

Liability Risks

Need for regulation, need for action:

Strategic tasks
• Responsibilities: Management / CEO, Supervisory Board
• Legislation: see regulations
• Potential damage and losses: losses due to system failure, insolvency, increased costs of corporate loans, loss of insurance coverage, image loss, monetary fines

Conceptual tasks
• Responsibilities: Management / CEO, Data Protection Officer, Head of IT
• Legislation: see regulations, employment contract
• Potential damage and losses: see strategic tasks, plus data loss, unauthorized access, virus infection, losses due to failed projects, loss of claims against suppliers, loss of development know-how

Operative tasks
• Responsibilities: Management / CEO, Data Protection Officer, Head of IT, staff
• Legislation: see regulations, employment contract, Commercial Code (HGB), Copyright Act (UrhG), Penal Code (StGB)
• Potential damage and losses: no annual audit confirmation, taxation assessment, imprisonment, corporate shutdown/loss of production, capital losses, image loss, loss of business partners or data

Excerpt: Liability Risk Matrix (Matrix der Haftungsrisiken), Bitkom, as of March 2005

Companies have taken to including limited liability for negligent behavior in agreements between managing directors and the company. In the case of limited liability companies, the shareholders have the option of formally approving the managing director's conduct, rendering any claims for damages by the company null and void. This is not the case with stock corporations, where approval of management does not result in a waiver of damage claims.

Good insurance is a valuable asset because...

… Managers fail to see their liability risks
… Managers are usually not clear about the scope of their liability risk
… There is a growing risk of being held liable for mistakes at work and losing everything in the process
… Managing directors and management can be held liable if they fail to provide sufficient IT security for their companies
… The boss is not the only one held liable!

Operational Risks

Operational risks – as defined in Basel II – are "the risks of loss resulting from inadequate or failed internal procedures, people and systems or from external events". The primary focus, in terms of operational risks, is on:

• Information system failures
• Security policy
• Human resource security
• Physical and environmental security
• IT system productivity
• Keeping systems, procedures and documentation up to date
• Information and data of all business operations
• Evaluation and identification of key figures
• A clear, previously drawn-up emergency plan
• Backups of the entire database



Source: Marsh – business continuity benchmark report 2010 – survey of 225 companies

Among the factors that can disrupt business processes are:

• Natural disasters, acts of God (fire, flooding, earthquake, etc.)
• Breaches of data and information security
• Fraud, cybercrime and industrial espionage
• Technical failure and loss of business-critical data
• Terror attacks and the transport of hazardous goods
• Disruption of business processes or the supply chain, and disturbances on business partners' premises
• Organizational deficiencies, strikes, lockouts
• Official decrees

"Experience tells us that a fire can break out at any given moment. When decades go by without a fire, it doesn't mean that there is no danger; it simply means that we have been lucky so far. Yet the odds are still the same, and a fire can still break out at any given moment." This succinct statement, made by a Higher Administrative Court in Germany in 1987, still says it all.

It is obvious that data centers must be equipped with reliable, fast, early fire-detection systems, extinguishing systems and fire prevention systems. Water as an extinguishing agent, however, is the wrong choice altogether. Today, specialized companies supply specific extinguishing systems for every type of fire. In the case of an expansion or later security measures in the data center, specific planning of these systems is important.

Cost of Downtime

In 2010, market researchers from Aberdeen Research determined that only 3 percent of companies had run a data center with 100-percent availability over the past twelve months. Only 4 percent of companies stated that the availability of their data center was 99.999 percent.
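For orientation, availability percentages like those quoted above translate directly into annual downtime. The following sketch assumes nothing beyond a 365-day year; the helper function is illustrative, not part of any cited study:

```python
# Convert an availability percentage into expected annual downtime.
# Assumes a non-leap year of 365 days; purely illustrative.

def annual_downtime_minutes(availability_percent: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability_percent / 100) * minutes_per_year

# "Five nines" (99.999%) allows only about 5.3 minutes of downtime per year:
print(round(annual_downtime_minutes(99.999), 1))     # -> 5.3
# 99.9% already corresponds to roughly 8.8 hours per year:
print(round(annual_downtime_minutes(99.9) / 60, 1))  # -> 8.8
```

This is why each additional "nine" of availability is disproportionately expensive: the tolerated outage window shrinks by a factor of ten every time.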



The following overview shows how to calculate the cost of downtime.

Costs arising during the downtime:

• Lost end-user productivity = hourly cost of users impacted × hours of downtime
• Lost IT productivity = hourly cost of the IT staff impacted × hours of downtime
• Lost sales = loss of sales per hour × hours of downtime

Costs arising after the downtime:

• Other loss of business = image damage (customer service) + costs of overtime + missed deadlines + contractual fines or fees

Source: securitymanager 2007
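The formulas above combine into a simple cost model. The following is a minimal sketch; all concrete euro figures in the example are hypothetical and not taken from the cited source:

```python
# Downtime cost model following the formulas above.
# All concrete figures in the example call are hypothetical.

def downtime_cost(hours_down: float,
                  hourly_user_cost: float,
                  hourly_it_cost: float,
                  hourly_sales_loss: float,
                  follow_up_costs: float = 0.0) -> float:
    """Costs during the outage scale with its duration; follow-up costs
    (image damage, overtime, fines) are added as a lump sum afterwards."""
    lost_user_productivity = hourly_user_cost * hours_down
    lost_it_productivity = hourly_it_cost * hours_down
    lost_sales = hourly_sales_loss * hours_down
    return (lost_user_productivity + lost_it_productivity
            + lost_sales + follow_up_costs)

# Example: a 4-hour outage affecting 200 users at EUR 50/h each,
# IT staff at EUR 400/h in total, EUR 5,000 lost sales per hour,
# and EUR 10,000 in follow-up costs.
print(downtime_cost(4, 200 * 50, 400, 5000, 10000))  # -> 71600
```

The model makes the structure of the table explicit: the first three terms are linear in downtime hours, so halving the recovery time halves them, while the follow-up costs remain fixed.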

The report "Avoidable Cost of Downtime" was commissioned by CA in 2010. The survey covered more than 1,800 companies, 202 of them in Germany. It reveals that systems in German companies are down for an average of 14 hours per year, 8 hours of which are required for full data restoration.

A publication by Marsh before the year 2000 provided the following figures:

• Costs of a 24-hour downtime are approximately:

o Amazon: 4.5 million US dollars
o Yahoo!: 1.6 million US dollars
o Dell: 36.0 million US dollars
o Cisco: 30.0 million US dollars

• AOL, Aug. 6, 1996, breakdown of 24 hours due to maintenance and human error, estimated costs were 3 million US dollars for discounts

• AT&T, April 4, 1998, 6- to 26-hour downtime due to a software conversion; costs were 40 million US dollars for discounts

IT system downtimes that last longer than three days are considered existence-threatening for companies! 54 percent of customers switched to a competitor as a result of an IT system failure (Symantec study, 2011).

2.4. Customer Perspective

The following sections focus on the point of view of customers and decision-makers: their motives and what they expect from a data center, taking into account operational costs and energy consumption costs. The various aspects to consider when deciding between outsourcing and operating your own data center will also be discussed.

2.4.1. Data Center Operators / Decision-Makers

When discussing decision-makers and influencers with regard to data center decisions, it is best to distinguish between these two types:

• enterprise data centers and

• service data centers

[Figure: cost comparison by country – France € 499,358; Germany € 389,157; Norway € 320,069; Spain € 316,304; Sweden € 279,626; Netherlands € 274,752; Finland € 263,314; UK € 244,888; Denmark € 161,655; Belgium € 84,050; Italy € 33,844]



Enterprise Data Centers

Organizations and persons in charge of data centers need to be differentiated by data center type – from the single server cabinet to the large data center.

There are three organizational groups in a company that influence the purchasing decision:

• ICT
• Facility management
• Procurement

The IT department is basically an internal customer of facility management; the relationship between the two is often characterized by compartmentalized thinking. Thinking patterns in the IT department are restructured in cycles of 2 to 3 years, whereas facility management tends to move in sync with the construction industry, i.e. in a 20-year rhythm. The facility staff in charge of data centers are mostly positioned in middle management or at department head level, and they usually have a technical background. Facility departments vary considerably between companies. Companies dealing in rental objects often have none, or one with little influence. In many cases facility management is outsourced.

It is similar with IT; the following subunits are the most influential:

• IT operations (infrastructure management, technology management, IT production, technical services, infrastructure services, data center, and others)

• Network planning (network technology, network management, and others)

• Heads of projects for new applications, infrastructure replacement, IT risk management measures, and others

Another strong influencing factor is that management (CIO, Head of Information Technology, IT Director, IT Manager) often changes or reverses decisions made in facility management. Management decisions might be affected by all-in-one contracts, supplier contracts with service providers, and corporate policy. If and to what extent the procurement department is involved in purchase decisions mainly depends on its position and on the corporate structure.

Also very powerful in terms of their influence on decisions are corporate group directives, especially in the case of takeovers and mergers: procurement practices might change drastically, projects may be postponed or cancelled, and new contact persons have to be found.

[Diagram: decision-making structure in enterprise data centers. Corporate group directives sit above the CIO / Head of Information Technology / IT Director / IT Manager, who is responsible for planning and the design of the IT architecture. Below are the IT department (IT operations, infrastructure management/services, technology management, IT production, technical services/support, data center), facility management, network planning/network technology, and the purchasing department / technical procurement. External influencers: consultants, data center planners, suppliers, installers, service providers, partner companies and rental property owners.]



Moreover, the influence of external consultants and experts also needs to be taken into account, the most important being:

• Consultants held in high esteem by a company's management and contracted to provide support with a project.

• Data center planners who are consulted because of their special know-how and relevant reference projects.

• Suppliers of hardware, software and data center equipment who are interested in selling their products. If there is a longstanding business relationship, their recommendations must not be underestimated. Moreover, they are certain to have advance information on the ongoing project.

• Installers (electrical work, cabling measures) who did good work for the company in the past. Their preference for systems and solutions as well as a good price offer usually leads to successful recommendations.

• Suppliers like system integrators and technology partners who have successfully worked for the company before.

• Partner companies with specific requirements for their suppliers and partner companies, with long-standing contacts at management level exchanging recommendations.

• Rental property owners who have incorporated facility management into their lease terms.

Service Data Centers

Here, the main influencers are the:

• the technical department (operations) and
• network management

When larger tasks are planned (extensions, conversions, new constructions or individual customer projects), project groups are formed to prepare the management decisions. Most of these organizations work with a flat hierarchy, and the decision-makers are found in management. With global data center operators, specifications for products and partners are increasingly being managed centrally.

Here too, the influence of external consultants and experts is considerable. As a rule, data center planners (engineering firms, specialized consultancy firms) take on the overall planning of a data center – from finding the right location to the building shell, safety and fire protection facilities, power supply and raised floor design. In some cases they are also involved in the selection of server cabinets, but they are usually not concerned with passive telecommunications infrastructures.

However, the market is undergoing changes. Data center planners are often commissioned to take on the entire project (new construction, extension, upgrade), including the passive telecommunications infrastructure. For individual projects they usually work in partnership with industry-specific companies. Often these planners also support tendering and contracting procedures and are consulted by management during decision processes.

[Diagram: decision-making structure in service data centers. Corporate group directives sit above the technical department (operations, heads of projects) and network management. External influencers: consultants, data center planners, suppliers, installers, service providers, partner companies and rental property owners.]



There is also a trend for companies to offer a complete package by way of acquisitions – from planning to construction up to the complete equipment, including cabinets and cabling – thus using the "one-stop shopping" aspect as a selling point. Typical examples are Emerson and the Friedhelm Loh Group.

Often, service data center customers themselves hire service providers to equip their rented space, or they define the specifications for products and solutions that service providers have to comply with. Service providers equipping data centers strongly influence the product selection. They usually have two sections: network planning and installation. Data center operators like to work with companies they know and trust, and they are open to their product recommendations and positive experience reports. This in turn affects the operators' purchase decisions, also due to their assumption that it will lower installation costs. When the data center is located in a rental property, the owner's specifications must also be respected.

2.4.2. Motivation of the Customer

A data center project ties up considerable staff and financial resources, and it involves risks. Listed below are reasons for a company to undertake such a project – again divided by data center operator type:

Enterprise Data Centers

• Increasing occurrence of IT failures

• Existing capacities are not sufficient to meet the growing requirements, often due to increased power needs and climate control requirements

• Technological developments in applications and concepts, such as the server and storage landscape, and virtualization and consolidation measures

• Latency requirements, resulting from business models and applications

• Requirements resulting from IT risk management and IT compliance, such as:

o Backup data centers, disaster data centers o Increasing availability o Structuring problems o The data center is located in a danger zone o Inspection of security concepts

• Consequences of IT governance requirements

• Focusing on core competencies of the company

• The existing data center will not be available in the future because it is being relocated or the operator will cease operation

• The systems installed are not state-of-the-art and no longer profitable

• The terms of existing contracts are expiring (outsourcing, service contracts, rental agreements, hardware, and others)

• Cost-saving requirements

• Data center consolidation

• Corporate restructurings

• Centralization requirements

• Continual improvement process

• And many more

Service Data Centers

• Development of additional areas

• Development of a new site

• Customer requirements

• Continual improvement process

• Improving performance features

• Cost reduction

• Corporate directives

• And many more



2.4.3. Expectations of Customers

The following sections focus on customer implementations of enterprise data centers. Service data center operators can draw on their experience in their efforts to meet the expectations of customers.

Availability

The first section discusses availability, including hardware and software security, reliable infrastructure and organizational security. In connection with data centers, the main points are:

• Uninterrupted IT operations
• 24/7 availability, 24/7 monitoring, 24/7 access
• Flexibility in power supply and climate control to meet specific needs (rising trend)
• On-demand models can be supported
• Structural and technical fire protection
• Security systems such as burglar alarms, access control, video surveillance, building security, security personnel, security lighting, central building control systems, etc.
• Choice between several carriers, redundant connections
• Cost optimization
• Green IT, sustainability
• Energy efficiency
• Short response times for upgrades and extensions
• Low latencies to meet the growing requirements in terms of internet presence
• Data centers should be outside of flood-risk areas and away from dangerous production sites, airports, power plants or other high-risk zones
• Distance between two data center sites
• Profitability

It can be observed that availability requirements are very high at the beginning of a project, but they are not necessarily reflected in the expectations in terms of price per m²/kVA.

Availability = security = costs

Companies are strongly recommended to review their availability requirements on a project-specific level where appropriate and to evaluate alternative concepts (e.g. a two-location concept). Determining availability has an impact not only on installation costs but also on operating costs. The Swiss Federal Office of Energy (SFOE) has developed a cost model for transparent analysis; the graphic below is part of it.

Figure: Data center cost model by availability (capital costs, operational costs excl. power, energy costs)

Source: Department of the Environment, Transport, Energy and Communications, Swiss Federal Office of Energy, Oct. 2008



According to an analysis by IBM, data center operational costs accumulated over a period of 20 years are 3 to 6 times higher than the original investment costs, with energy costs constituting 75 percent of the operational costs. The following table shows the utilization rate of the components in data centers on the basis of the Tier levels, described in section 2.2.1.

Degree of redundancy      N     N+1   N+1   N+1   N+1   2N    2(N+1)  2(N+1)  2(N+1)  2(N+1)
Configuration             1     1+1   2+1   3+1   4+1   2     2(1+1)  2(2+1)  2(3+1)  2(4+1)
Number of components      1     2     3     4     5     2     4       6       8       10
Utilization rate          100%  50%   66%   75%   80%   50%   25%     33%     37.5%   40%
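As a minimal sketch, the utilization figures in the table above can be reproduced with a small helper (the function name and signature are illustrative, not from the handbook):

```python
# Component utilization for the redundancy configurations in the table above.
# In an n+1 arrangement, n components carry the full load and one is a spare;
# mirroring the whole system (2N, 2(n+1)) halves the utilization again.

def utilization(active: int, total: int, mirrored: bool = False) -> float:
    """Share of installed capacity actually carrying load, as a fraction."""
    u = active / total
    return u / 2 if mirrored else u

# Examples matching the table:
# utilization(3, 4)                 -> 0.75   (3+1)
# utilization(1, 2)                 -> 0.5    (1+1)
# utilization(3, 4, mirrored=True)  -> 0.375  (2(3+1))
```

Higher tiers buy availability at the cost of idle capacity: a 2(1+1) system leaves 75 percent of the installed components unused in normal operation.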

According to an IBM analysis, the increase in cost between a Tier-III and a Tier-IV installation amounts to 37 percent.

Energy Costs

The growing use of ICT also has an impact on the environment. ICT-related energy consumption in Germany increased from 38 TWh in 2001 to 55 TWh in 2007, equaling 10.5 percent of the country's total energy consumption. The strongest growth was recorded in ICT infrastructures, i.e. servers and data centers. Data center energy consumption in Germany amounted to 10.1 TWh in 2008, generating CO2 emissions of 6.4 million tons. Approx. 50 percent of this energy consumption and the related CO2 emissions is attributable not to the actual IT systems but to the surrounding infrastructure, e.g. the cooling system and power supply.

Since 2006, the importance of developing measures to reduce energy consumption in data centers has increased. Innovations relating to power supply and climate control contribute to reductions in energy consumption, as do other cost-effective measures such as arranging the racks in hot aisle/cold aisle configurations or avoiding network cables in front of ventilation fans. Various guidelines and recommendations on measures to increase energy efficiency in data centers are available, issued, for example, by BITKOM or solution providers. In some cases, financial incentives are available for energy-saving measures in data centers.

A trend analysis commissioned by the German Federal Environment Agency (UBA) indicated that data center energy consumption in Germany will increase from 10.1 TWh in 2008 to 14.2 TWh in 2015 in a "business as usual" scenario, while in a "Green IT" scenario it could be reduced to 6.0 TWh by 2015.

Data center types and quantities in Germany   2008     2015 business as usual   2015 Green IT
Server cabinets                               33,000   34,000                   27,000
Server rooms                                  18,000   17,000                   14,000
Small data centers                            1,750    2,150                    1,750
Medium data centers                           370      680                      540
Large data centers                            50       90                       75
Energy demand in TWh                          10.1     14.2                     6.0
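As a rough cross-check of the figures above, one can derive the emission factor implied by the 2008 numbers and apply it to the 2015 scenarios. This is a back-of-envelope sketch; the factor is inferred from the text, not an official value.

```python
# Emission factor implied by the handbook's 2008 figures (a sketch; this is
# derived from the text above, not an official grid emission factor).

EMISSIONS_2008_MT = 6.4   # million tonnes of CO2 in 2008
DEMAND_2008_TWH = 10.1    # data center energy demand in 2008, TWh

# Implied factor in Mt CO2 per TWh (~0.63)
factor = EMISSIONS_2008_MT / DEMAND_2008_TWH

for label, twh in (("Business as usual 2015", 14.2), ("Green IT 2015", 6.0)):
    print(f"{label}: {twh * factor:.1f} Mt CO2")
```

At the same factor, the two scenarios would differ by more than 5 Mt of CO2 per year, which illustrates why the "Green IT" path attracted policy attention.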

What cannot be measured cannot be optimized. This is the problem with many data centers: a large number of companies cannot provide accurate information on how much energy is needed for the ICT equipment alone, expressed as a proportion of overall consumption. A 2010 study by the ECO association revealed that only 33% of data center operators employ staff responsible for energy efficiency, and at least 37.5% of operators run operations without specialized personnel responsible for the IT systems in their data centers. For 31.25% of all operators, energy consumption does not affect the IT budget at all.



There are several key benchmarks for assessing the energy efficiency of a data center. They were introduced in section 1.10. Here is some additional information:

Green Grid (USA)

PUE (Power Usage Effectiveness): the ratio of power consumed by the entire facility to the power consumed by the IT equipment alone.

DCIE (Data Center Infrastructure Efficiency): the ratio of power consumed by the IT equipment to the power consumed by the entire facility (= 1/PUE, i.e. the reciprocal of PUE).

IEP (IT Equipment Power): the actual power load associated with all of the IT equipment, including computing, storage and networking equipment.

TFP (Total Facility Power): the actual power load associated with the data center facility, including climate control (cooling), power, surveillance, lighting, etc.

Uptime Institute (USA)

SI-EER (Site Infrastructure Energy Efficiency Ratio): the ratio of power consumed by the entire facility to the power consumed by the IT equipment alone.

IT-PEW (IT Productivity per Embedded Watt): the ratio of IT productivity (network transactions, storage or computing cycles) to the IT equipment power consumption.

DC-EEP: value derived by multiplying SI-EER by IT-PEW.
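The Green Grid metrics above reduce to two one-line ratios; the following sketch shows how they relate (function and variable names such as tfp_kw and iep_kw are our own, not from the handbook):

```python
# Minimal sketch of the Green Grid ratios described above.

def pue(tfp_kw: float, iep_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return tfp_kw / iep_kw

def dcie(tfp_kw: float, iep_kw: float) -> float:
    """Data Center Infrastructure Efficiency: the reciprocal of PUE."""
    return iep_kw / tfp_kw

# A facility drawing 1,100 kW in total for a 500 kW IT load:
# pue(1100, 500) -> 2.2, i.e. 1.2 W of overhead per watt of IT power.
```

A lower PUE (closer to 1.0) means less infrastructure overhead; DCIE expresses the same fact as the share of facility power that actually reaches the IT equipment.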

The most widely used benchmark is the PUE, but caution is advised since the data used in the calculation is not always comparable. The PUE table further below is an approximate evaluation from the study carried out by the German Federal Environment Agency (UBA) in 2010. "Good" PUE values currently try to outdo one another on the data center market; there are even claims of values below 1, which seems hard to believe.

As discussed above, operational costs are usually many times higher than acquisition costs, the PUE of a data center being a key factor. Below, expected acquisition costs are compared with energy costs, using a server as an example. Not included are the related data center costs for rack space, maintenance, monitoring, etc.

Server hardware acquisition cost 1,500.00 €

Life span 4 years

Power consumption (in operation) 400 watts

Hours per year 24 hr. x 30.5 days x 12 months = 8,784 hr.

Power consumption per year 8,784 hr. x 0.4 kW = 3,513.6 kWh

Power costs per kWh 0.15 €

Power costs per year 3,513.6 kWh x 0.15 € = 527.04 €

Power costs 1 year 4 years

PUE = 3.0 527.04 € x 3 = 1,581.12 € 6,324.48 €

PUE = 2.2 527.04 € x 2.2 = 1,159.49 € 4,637.95 €

PUE = 1.6 527.04 € x 1.6 = 843.26 € 3,373.06 €
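The worked example above can be reproduced programmatically. This sketch uses the handbook's figures (400 W, EUR 0.15/kWh, 8,784 h/year); the function and constant names are illustrative, not from the handbook:

```python
# Annual and multi-year energy cost of a single server under different PUE
# values, reproducing the table above. 8,784 h corresponds to the handbook's
# 24 h x 30.5 days x 12 months approximation.

HOURS_PER_YEAR = 24 * 30.5 * 12   # 8,784 h
POWER_KW = 0.4                    # server draw in operation
PRICE_EUR_PER_KWH = 0.15

def energy_cost(pue: float, years: int = 1) -> float:
    """Energy cost in EUR, including infrastructure overhead via PUE."""
    kwh = HOURS_PER_YEAR * POWER_KW * years
    return round(kwh * PRICE_EUR_PER_KWH * pue, 2)

for p in (3.0, 2.2, 1.6):
    print(f"PUE {p}: {energy_cost(p)} EUR/year, {energy_cost(p, years=4)} EUR over 4 years")
```

Over the server's 4-year life span, moving from a PUE of 3.0 to 1.6 saves almost 3,000 EUR per server, roughly twice the hardware's acquisition cost.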


Data Center Type      PUE 2008   PUE 2015 Green IT
Server cabinet        1.3        1.2
Server room           1.8        1.5
Small data center     2.1        1.5
Medium data center    2.2        1.6
Large data center     2.2        1.6



2.4.4. In-House or Outsourced Data Center

IT departments generally realize, before the critical point in time has arrived, that something needs to be done about their data center situation. The company then faces the question of whether to outsource its data center entirely, outsource only parts of it, or not outsource at all.

In the following sections it is assumed that the company decides against any outsourcing beyond housing and hosting services, leaving us with the following alternatives to discuss:

• Owning the data center
• Using housing or hosting services from a hosting provider (renting)
• A mixture of owning and renting

The arguments against outsourcing are mostly of a psychological or organizational nature or are security-related.

Psychological Reasons

A perceived loss of control is probably the strongest of the psychological reasons. Among the arguments stated are that response times to failure alerts would be longer and that it would not be entirely clear who had access to the computers. In addition, IT departments are often under the impression that they would become less important within the company if this core activity were outsourced.

There are valid arguments to counter these concerns, because the issues raised can be resolved by working out adequate procedures and contracts with data center operators.

Organizational Reasons

The most frequently cited organizational arguments against outsourcing include the travel distances for IT staff, the necessary coordination of internal and external staff, an inability to react immediately to hardware events, limited influence on scheduled maintenance work, etc. Actual access needs and all the tasks involved must be investigated and clearly defined in advance. Many data center operators employ technical staff on-site who can carry out simple hardware jobs on behalf of the customer. Experience shows that most of the work can be done remotely. If on-site presence is required more frequently, the company can opt for a local operator.

As the number of customers in a data center grows, an individual customer's influence on scheduled maintenance work decreases. This is where the availability level of the data center comes into play, because redundancy allows maintenance without interrupting operation.

Security-Related Reasons

A perceived lack of security is often cited as the main argument against outsourcing a data center. When no regulations exist that stipulate data be processed on company premises, these arguments are easy to counter. Here are some examples:

• Data center providers generally work in accordance with audited processes that define security in different areas.

• The distance between the data center and company premises is a security factor.

• The data centers are usually "invisible", meaning they are not signposted and, on customer request, operators guarantee anonymity.

• It is easier to implement access limitations for external staff.

It is recommended that these arguments be weighed, assessed and incorporated when calculating the budget. When comparing data center operators, it is important to acquire a comprehensive list of all the costs involved; the expenses listed for energy consumption often do not include the power consumption of the IT equipment (see also PUE).

A popular option is combining a company-owned data center with renting space from a data center operator. This solution offers the following advantages:

• Company backup concepts can be realized.

• Capacity bottleneck situations can be compensated.

• The required flexibility (on demand) can be defined by contract.

• A higher security level is reached.

The bottom line is that there is no universally applicable right decision. Each concept has its advantages and requirements that must be assessed from a company-specific point of view.



2.5. Aspects of the Planning of a Data Center

What does the perfect data center look like? Planning a data center involves a good deal of communication, because there is no such thing as the perfect data center. Construction costs as well as operational costs of a data center are closely linked to the requirements placed on it. It is therefore advisable to pay particular attention to the analysis phase and the definition of requirements and concepts. Corporate strategy and requirements arising from IT governance and IT risk management must also be integrated.

Whether it is the construction of a new data center, the extension of an existing one, the outsourcing of services (housing, hosting, managed services, cloud services) or yet another data center concept, the basic principle is always the same: there is no universal solution. A succinct analysis of the actual situation and the determination of requirements are indispensable prerequisites. A factor that cannot be underestimated is the definition of realistic performance parameters.

Selection of the site for a data center is increasingly subject to economic considerations and available resources. Two crucial factors are the power supply and IP integration. Then there are the challenges of physical safety, i.e. potential hazards in the vicinity, risks due to natural disasters (flooding, storms, lightning, earthquakes) and also sabotage.

2.5.1. External Planning Support

It must be emphasized that planners and consultants are often only called in when the damage has already been done. Server parks, phone networks and complex information systems are realized without the help of specialized planners or professional consultants, and when the facility is commissioned, deficiencies are often revealed that could have been avoided if professionals with comprehensive expertise had been consulted. Another important aspect is a consultant's or planner's experience in data center issues.
Management members, IT and infrastructure managers (electrical and air-conditioning engineers, utilities engineers, architects, security experts, network specialists, etc.) quite often do not speak the same language – even within the same company. One essential prerequisite of a successful data center project, however, is a perfect flow of communication and the integration of all the areas concerned – one of the most important tasks of the project leader. Companies often hire external consultants for analysis, definition of requirements and location research. It is essential that these external consultants maintain very close communication with management, the IT department and infrastructure managers to allow them to establish company-specific requirements that are economically feasible. The tasks of a data center planner are:

• Initial assessment of the facts
• Planning (iterative process), defining system specifications
• Call for tender
• Tender evaluation
• Supervision, quality assurance
• Inspection and approval
• Warranty points
• Project management

In Switzerland, planning work is usually carried out in compliance with SIA guidelines (Swiss Engineer and Architects Association) and in Germany with the HOAI (Fees Regulations for Architects and Engineers).

Figure: Data center planning phases (analysis, initial assessment, planning, call for tender, tender evaluation, supervision, inspection and acceptance, warranty, project management)



The occupation of data center planner does not have an officially defined profile! Data center planners are often engineers with a specific technical background who have acquired work experience in data center environments. Due to the high demand, the group of data center planners is growing rapidly. However, their service portfolios vary, and the selection and hiring of a planner depends on the client's ideas and requirements. These range from data center contractor (from planning to completion) to professional consultant for a subtask. The search term "data center planning" yields a wide range of service providers, such as:

• Companies who have added data center planning and all aspects connected with it to their existing portfolio. Typical examples are IBM, HP, Emerson or Rittal. The advantage stated is the one-stop-shopping approach.

• Companies who have specialized in data center planning. In many cases these are experts in specific industrial areas, who subcontract other experts themselves if required. Some of these employ engineers from all technical fields required for an overall data center planning.

• Building engineering consultants and engineering firms

• Companies specialized in structured cabling systems

• Suppliers of data center infrastructures (e.g. air-conditioning, fire protection, power supply)

• Suppliers of passive network components

• Data center operators

• Consultancy firms

• Power supply companies with Energy Performance Contracting (EPC)

In many cases, depending on in-house experience and skills, potential clients only hire experts for specific areas or consult a quality assurance professional.

2.5.2. Further Considerations for Planning

Structured cabling systems and application-neutral communications cabling (see also EN 50173, EN 50173-5 and EN 50174) are not a high priority for data center planners. However, experts estimate that today as much as 70% of all IT failures are caused by faults in the cabling infrastructure. Since the introduction of Ethernet, the data rate has increased by a factor of 10 every 6 years. This generates an increased demand for bandwidth that a large number of networks installed around the world are not capable of transmitting.

In network planning, however, many essential data center planning aspects need to be considered and implemented, namely:

• Security, availability

• Impact on energy consumption – energy efficiency

• Redundancy

• Installation and maintenance costs

• Delivery times

• Flexibility for changes

• Productivity



3. Data Center Overview

The purpose of this section is to provide a comprehensive overview of the relevant standards and prevailing technologies in data centers. It discusses data center layout, overall infrastructure, zones and hierarchies. In addition to active components and network and virtualization technologies, this section primarily deals with transmission protocols and media in data centers. It also examines current and future LAN technologies as well as cabling architectures. This is because data center planners, operators and managers can successfully manage current and upcoming tasks only through a holistic view.

3.1. Standards for Data Centers

In earlier times, a room in which one or more servers, a telephone system and active network components were located was often called a computer room. However, because of the development of cloud computing, virtualization technologies and an increased tendency to use outsourcing, "computer room" requirements became more complex. The single room was replaced by a room structure whose individual rooms are assigned defined functions.

The data center is now divided up into an entrance area, the actual computer room and a work area for administrators, right through to separate rooms for UPS batteries, emergency power generators and cooling. In addition, attention must be given to matters like access control, video monitoring and alarm systems. The emphasis on active and passive components requires an improved infrastructure, including equipment such as a cooling system and power supply, which in turn affects installation and construction as well as the data cabling structure. The point-to-point connections that were used previously are being replaced by a structured cabling system, which allows rapid restoration in case of a fault, effortless system expansion and easy administration.

The data center layout, hierarchical structure and individual zones and their functions are described in detail in sections 3.2 and 3.3 below.

3.1.1. Overview of Relevant Standards

Increasing demands on data centers have led standardization bodies to take this development into account and to take a close look at this topic. Cabling standards issued by international and national committees describe the structure and characteristics of a cabling system almost identically, but differ in terms of content. There are three major organizations worldwide concerned with the standardization of data centers: the ISO/IEC (International Organization for Standardization / International Electrotechnical Commission) develops international standards, CENELEC (Comité Européen de Normalisation Électrotechnique) European standards, and ANSI (American National Standards Institute) American standards. There are other organizations active in the data center area, though these play a subordinate role to the ones above. The standardization bodies also remain in contact with one another in an attempt to avoid differing interpretations of the same topic, but this is not always successful. To achieve harmonization in Europe, CENELEC and ISO exchange information with national committees (SEV/SNV, VDE/DIN, ÖVE, etc.). Data center cabling is covered by the following standards:

• ISO/IEC 24764
• EN 50173-5
• TIA-942

All three standards focus on cabling, but describe the requirements for a data center in different ways. New standards were developed for this area so as not to change the general (generic) cabling structure too greatly. The central focus of these standards is to cover the structure and performance of data center cabling. The goal was to move away from the point-to-point connections commonly used in data centers and construct a structure that is flexible, scalable, clear and permits changes, troubleshooting and expansion.


R&M Data Center Handbook V2.0 © 08/2011 Reichle & De-Massari AG Page 47 of 156

Where do the differences lie between the different standards for a data center? A "cabling planner" designing a generic building cabling system must consider space requirements, the connection for potential equalization and other factors, but in a data center the planner must also consider things like low space availability, air conditioning, high power consumption, redundancy and failure safety, including access restriction. The type of data center and the future prospects for transmission protocols and data rates also affect the development of standards. These requirements and the interfaces to other standards are treated differently, depending upon the standard. The most important points in the different standards are classified in the following table.

Criteria                                          ISO/IEC 24764    EN 50173-5         TIA-942
Structure                                         ✓                ✓                  ✓
Cabling performance                               ✓                ✓                  ✓
Redundancy                                        ✓                ✓                  ✓
Grounding/potential equalization                  IEC 60364-1      EN 50310           ✓ 2
Tier classification                               –                –                  ✓
Cable routing                                     IEC 14763-2 1    EN 50174-2/A1      ✓ 2
Ceilings and double floors                        IEC 14763-2 1    EN 50174-2/A1      ✓ 6
Floor load                                        –                –                  ✓ 2
Space requirements (ceiling height, door width)   IEC 14763-2 1    EN 50174-2/A1 3    ✓
Power supply/UPS                                  –                –                  ✓
Fire protection/safety                            –                EN 50174-2/A1 4    ✓ 4
Cooling                                           –                –                  ✓
Lighting                                          –                –                  ✓
Administration/labeling                           IEC 14763-1 5    EN 50174-2/A1 5    ✓ 1
Temperature/humidity                              –                –                  ✓ 1

✓ = covered by the standard itself, – = not covered; 1 not data center-specific, 2 refers to TIA-607, 3 only door widths and ceiling height, 4 refers to local standards, 5 refers to complexity level, 6 refers to TIA-569

At first glance, we see that TIA-942 devotes much more attention to the subject of data centers than EN 50173-5 or ISO/IEC 24764. Many important terms and their descriptions are found in additional ISO/IEC and EN documents and are kept at a very general level. Some IEC documents were created a few years ago and therefore do not reflect the current state of technology. There are also differences in terminology in the central areas of these cabling standards. ISO terminology is used in this section since the ISO is the worldwide international organization; ANSI terms are used for items for which the ISO has no corresponding terminology.

3.1.2. ISO/IEC 24764

The configuration and structure of an installation per ISO/IEC 24764 must be carried out in accordance with this standard and/or ISO 11801. Testing methods for copper cabling must follow IEC 61935-1, and for optical fiber cabling the IEC 14763-3 standard must be followed. In addition, a quality plan and installation guidelines in accordance with IEC 14763-2 must be observed for any compliant installation. Since IEC 14763-2 is an older document, ISO/IEC 18010 must be consulted for cable routing issues. This standard not only contains descriptions of cable routing, but also rudimentary instructions concerning failure safety. Nevertheless, neither IEC 14763-2 nor ISO/IEC 18010 is explicitly designed for data center requirements.

The cabling system is arranged in a tree structure that starts out from a central distributor. Point-to-point connections must be avoided wherever possible. Exceptions are allowed as long as active devices are located very close to one another or cannot communicate over the structured cabling system.
Not included in this structure are cabling in a local distributor per ISO 11801 and cabling at the interface to external networks.



The data center layout, its individual zones and their functions are described in detail in section 3.2 below.

Evolution of the basic computer room into a data center involves increased layout requirements. As mentioned at the start, it has become necessary to separate the different areas of the data center into rooms. It must be clearly indicated that the network structure in the data center is separate from the building cabling. In addition, the connection to the internal network must be established over a separate distributor that is spatially detached from the data center. The interface to the external network (ENI) is created either within or outside of the data center. All other functional elements must exist in the data center permanently and always be accessible. Cabling components, patch cords and connection cables, as well as distributors for the building's own network, are not components of the standard. As the table below shows, ISO 11801 terminology differs from that of ISO/IEC 24764.

Different distributor functions may be combined, depending upon the size of the data center. However, at least one main distributor must exist. Redundancy is not explicitly prescribed, but is referred to under failure safety. The standard gives information on options for redundant connections, cable paths and distributors. Like the consolidation point, the local distribution point (LDP) is located either in the ceiling or in the raised floor. Patching is not permitted there, since the connection must be wired through.

ISO/IEC 24764 refers to ISO 11801 regarding performance. The properties described in detail in ISO 11801 apply to all transmission links or channels and components. The difference is that for the data center, minimum requirements for main and area distributor cabling apply. A cabling class that satisfies the applications listed in ISO 11801 is permitted for the network access connection.

ISO/IEC 24764                      ISO 11801
External network interface (ENI)   Campus distributor (CD)
Network access cable               Primary cable
Main distributor (MD)              Building distributor (BD)
Main distributor cable             Secondary cable
Zone distributor (ZD)              Floor distributor (FD)
Area distributor cable             Tertiary/horizontal cable
Local distribution point (LDP)     Consolidation point (CP)
Local distribution point cable     Consolidation point cable
Equipment outlet (EO)              Telecommunications outlet (TO)

Figure: ISO/IEC 24764 cabling structure (ENI: External Network Interface, MD: Main Distributor, ZD: Zone Distributor, LDP: Local Distribution Point, EO: Equipment Outlet, EQP: Equipment)



Cabling Infrastructure:

• Copper cables: Category 6A, Category 7, Category 7A
• Copper transmission link or channel: Class EA, Class F, Class FA
• Multi-mode optical fiber cables: OM3, OM4
• Single-mode optical fiber cables: OS1, OS2

Connectors for Copper Cabling:

• IEC 60603-7-51 (shielded) or IEC 60603-7-41 (unshielded) for Category 6A
• IEC 60603-7-7/71 for Category 7 and 7A
• IEC 61076-3-104 for Category 7 and 7A

Connectors for Optical Fibers:

• LC duplex per IEC 61754-20 for GA and ENS
• MPO/MTP® multi-fiber connectors per IEC 61754-7 for 12 or 24 fibers

It is important that the appropriate polarity be observed in multi-fiber connections so that transmitter and receiver are correctly connected. ISO/IEC 24764 references ISO 14763-2 in this regard. The polarity of multi-fiber connections is not described there, due to the age of the document.

3.1.3. EN 50173-5

This document is essentially the same as ISO/IEC 24764, but has a few minor differences. The last changes were established through an amendment; therefore both EN 50173-5 and EN 50173-5/A1 are necessary for the complete overview. This also applies to all other standards in the EN 50173 series. EN 50173-5 also uses the hierarchical structure per EN 50173-1 and -2. A standard-compliant installation is achieved through application of both the EN 50173 series and the EN 50174 series with its amendments. In particular, an entire section about data centers has been added to EN 50174-2/A1. The following topics are covered in this section:

• Design proposals
• Cable routing
• Separation of power and data cables
• Raised floors and ceilings
• Use of pre-assembled cables
• Entrance room
• Room requirements

In contrast to ISO 11801, this standard also uses optical fiber classes other than OF-300, OF-500 and OF-2000; possible classes are OF-100 to OF-10000. Cabling performance is determined by EN 50173-1 and EN 50173-2. However, minimum requirements for data center cabling were also established in the standard. A cabling class that satisfies the applications listed in ISO 11801 is also permitted for the network access connection in EN 50173-5.

Cabling Infrastructure:

• Copper cable: Category 6A, Category 7, Category 7A
• Copper transmission path: Class EA, Class F, Class FA
• Multi-mode optical fiber cable: OM3, OM4
• Single-mode optical fiber cable: OS1, OS2

Plug Connectors for Copper Cabling:

• IEC 60603-7-51 (shielded) or IEC 60603-7-41 (unshielded) for Category 6A
• IEC 60603-7-7/71 for Category 7 and 7A
• IEC 61076-3-104 for Category 7 and 7A

Plug Connectors for Optical Fibers:

• LC duplex per IEC 61754-20 for GA and ENS
• MPO/MTP® multi-fiber connectors per IEC 61754-7 for 12 or 24 fibers

Though different options are listed for the use of multi-fiber connections, polarity is not covered in any of these standards. Detailed information on this appears in sections 3.10.1 and 3.10.2.



3.1.4. TIA-942

Without a doubt, the American standard is currently the most comprehensive body of standards for data centers. Not only is cabling described, but also the entire data center environment including the geographical location, as well as a discussion of seismic activity. Among other things, the data center is divided into so-called "tiers", or different classes of availability (see section 2.2.1). The data center must satisfy specific requirements for each class in order to guarantee availability. What differences from ISO/IEC exist, apart from cabling performance?

• With TIA-942, the interface to the external network may not be located in the computer room.
• The maximum length of horizontal cabling may not exceed 300 meters, even with optical fibers.
• No minimum requirements are provided for cabling.
• The limits for TIA Category 6A are less strict than those for Category 6A in ISO/IEC 24764 or EN 50173-5.

The lowest common denominator of the data center standards listed is cabling. For performance, TIA-942 references the EIA/TIA-568 series. TIA-942 itself is divided into three parts: TIA-942, TIA-942-1 and TIA-942-2. All data center details are covered in the main standard; the use of coaxial cables is covered in TIA-942-1; and TIA-942-2 introduces the latest version of the Category 6A (copper) performance class, laser-optimized fibers (OM3 and higher), changed temperature and ambient humidity limits, as well as other changes in content. TIA defines cabling as follows.

Cabling Infrastructure:

• Copper cable: Category 3 – Category 6A
• Copper transmission path: Category 3 – Category 6A (Category 6A is recommended; TIA does not use classes)
• Multi-mode optical fiber cable: OM1, OM2, OM3 (OM3 recommended)
• Single-mode optical fiber cable: 9/125 µm
• Coaxial cable: 75 ohm

Connectors for Copper Cabling:

• ANSI/TIA/EIA-568-B.2

Plug Connectors for Optical Fibers:

• SC duplex (568SC) per TIA-604-3
• MPO/MTP® multi-fiber connector per TIA-604-5 (12-fiber only)
• Other duplex connectors such as LC may be used.

A new version of TIA-942 (TIA-942-A) was in the planning stages at the time this document was published. Since the revision affects important areas, a preview of the new version is worthwhile:

• Combining TIA-942, TIA-942-1 and TIA-942-2 into one document
• Harmonization with TIA-568-C.0, TIA-568-C.2 and TIA-568-C.3
• Grounding is removed and integrated into TIA-607-B
• Administration is integrated into TIA-606-B
• Distribution cabinets and the separation of data and power cables are integrated into TIA-569-C
• The length restriction for optical fiber paths in horizontal cabling is rescinded.
• Category 3 and Category 5 are no longer allowed for horizontal cabling.
• The optical fibers for backbone cabling are OM3 and OM4; OM1 and OM2 are no longer allowed.
• LC and MPO plugs are a mandatory requirement for the data center.
• Adoption of ISO/IEC 24764 terminology

Combining Bodies of Standards

None of the bodies of standards is complete: a body will either omit an examination of the overall data center, or its permitted performance classes will not allow for scaling to higher data rates. It is therefore recommended that all relevant standards be consulted when planning a data center, and that the most suitable standard be applied for each specific requirement. For example, cabling performance should be taken from the standard with the highest performance requirements for cabling components, while other properties should be taken from whichever standard handles them best.

Recommendation:

A specification of the standards used must therefore be listed in the customer requirement specification in order to avoid ambiguous requirements!



3.2. Data Center Layout

In spite of the virtualization trend currently seen in the software market – keyword: server consolidation – there is high demand on the horizon for renewing data centers and server rooms. It is estimated that over 85 percent of all data centers in Germany are over ten years old. A study by Actionable Research, recently commissioned by Avocent, also examined the global situation in this area: approximately 35 percent of all data centers worldwide are currently undergoing a redesign.

3.2.1. Standard Requirements

Even if small differences between generic and data center cabling exist between the American TIA-942 standard and the international and European standards, these bodies of standards essentially differ only in how they name functional elements.

| ISO/IEC 24764 | ISO 11801 | TIA-942 |
|---|---|---|
| External Network Interface (ENI) | Campus Distributor (CD) | ENI (External Network Interface), ER (Entrance Room) |
| Network Access Cable | Primary Cable | Backbone Cabling |
| Main Distributor (MD) | Building Distributor (BD) | MC (Main Cross Connect), MDA (Main Distribution Area) |
| Main Distributor Cable | Secondary Cable | Backbone Cabling |
| Zone Distributor (ZD) | Floor Distributor (FD) | HC (Horizontal Cross Connect), HDA (Horizontal Distribution Area) |
| Zone Distributor Cable | Tertiary/Horizontal Cable | Horizontal Cabling |
| Local Distribution Point (LDP) | Consolidation Point (CP) | CP (Consolidation Point), ZDA (Zone Distribution Area) |
| Local Distribution Point Cable | Consolidation Point Cable | Horizontal Cabling |
| Equipment Outlet (EO) | Telecommunications Outlet (TO) | EO (Equipment Outlet), EDA (Equipment Distribution Area) |
A unique assignment of functional elements to rooms exists, though in TIA-942 it is the data center areas, rather than the functional elements, that are in the forefront. A detailed examination shows that this topology is the same for ISO/IEC 24764 and EN 50173-5.
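As a toy illustration only (the data structure and function names below are our own, not part of any standard), the naming equivalences between the bodies of standards can be expressed as a simple lookup:

```python
# Illustrative mapping of functional-element names across the standards,
# compiled from the comparison above (abbreviated; ISO/IEC 24764 names as keys).
EQUIVALENTS = {
    "ENI": {"iso_11801": "Campus Distributor (CD)",
            "tia_942": "Entrance Room (ER)"},
    "MD":  {"iso_11801": "Building Distributor (BD)",
            "tia_942": "Main Distribution Area (MDA)"},
    "ZD":  {"iso_11801": "Floor Distributor (FD)",
            "tia_942": "Horizontal Distribution Area (HDA)"},
    "LDP": {"iso_11801": "Consolidation Point (CP)",
            "tia_942": "Zone Distribution Area (ZDA)"},
    "EO":  {"iso_11801": "Telecommunications Outlet (TO)",
            "tia_942": "Equipment Distribution Area (EDA)"},
}

def translate(element_24764: str, target: str) -> str:
    """Return the equivalent name of an ISO/IEC 24764 element in another standard."""
    return EQUIVALENTS[element_24764][target]

print(translate("MD", "tia_942"))   # Main Distribution Area (MDA)
```

Such a table can help keep planning documents consistent when, as recommended above, several bodies of standards are combined in one project.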



Individual areas in the data center are therefore assigned specific functions.

• Entrance Room (External Network Interface): This is the entrance area to the network in the data center, which provides for access to the public network (the Internet provider) and can be connected multiple times, depending upon the “tier” level that is selected. In smaller networks, the External Network Interface can be connected directly to the Horizontal Distribution Area (Zone Distributor).

• Main Distribution Area (Main Distributor): This area represents the core of the data center and therefore forms the “lifeblood” of the network, which is why redundant connections and components are of crucial importance in this area in particular. All backbone data traffic is controlled here, which is why this point in the network is known as the Core Layer. However, the Aggregation Layer (or Distribution Layer), whose aggregation/distribution switches bundle Access Layer data traffic and forward it to the core, is also located in this area.

• Horizontal Distribution Area (Zone Distributor): In this “interface” between backbone and horizontal cabling, the data traffic of the access switches which control the data exchange with terminal devices is “passed over” to the aggregation layer. This area in the network is known as the Access Layer.

• Zone Distribution Area (Local Distribution Point): This area serves as an “interim distribution” point to the Equipment Distribution Area; it can be used for reasons of space and placed in the raised floor, for example. The Raised Floor Solution from R&M, developed for this purpose, provides an ideal option for this process, since it allows up to 288 connections per box and, thanks to its modular design, is freely configurable.

• Telecom Room: This is where the connection to the internal network is located.

• The Operation Center, Support Room and Offices are rooms for data center personnel.

Backbone cabling is preferably laid with fiber optic cables, and horizontal cabling with copper cables. The transmission media recommended for this purpose are detailed in section 3.9. Possible transmission protocols and the associated maximum transmission rates (see section 3.8) are determined at the same time that cable types are selected, which is why this is a significant decision that determines the future viability of the data center. An equally important factor in data center scalability is the choice of cabling architecture, which in turn influences network availability and determines rack arrangement. Different cabling architectures such as “End of Row” and “Top of Rack”, along with their advantages and disadvantages, are listed in section 3.4. Additional designs for the physical infrastructure of a data center are described in section 3.5.

3.2.2. Room Concepts

Different room concepts exist with regard to data center layout. The classical Room-in-Room Concept, with separate technical and IT security rooms that house any type and number of server racks and network cabinets, is equipped with raised floors, dropped ceilings if necessary, active and passive fire protection and a cooling system.

The large room container is a Modular Container Concept with separate units for air conditioning and energy, as well as an IT container to house server, storage and network devices. The Self-Sufficient Outdoor Data Center, with its own combined heat and power plant, also provides a transportable data center infrastructure, but is independent of the external energy supply. The associated power plant supplies both energy and cold water, the latter through an absorption chiller.

The redundant, automated Compact Data Center represents an entire, completely redundant data center which includes infrastructure and high-performance servers in one housing. Due to its high power density, this “Mini Data Center” is also a suitable platform for private cloud computing.



3.2.3. Security Zones

Information technology security is a broad term which includes logical data security, physical system security, and organizational process security. The goal of a comprehensive security concept is to examine all areas, detect and assess risks early on, and take measures so that a company’s competitive position in the market is not put at risk. When a company’s IT infrastructure and its different IT functional areas are taken into consideration, a well thought-out design can reduce or even eliminate significant physical security risks. Both the locations of IT areas and the spatial assignment of different functions play a decisive role in this process.

Functional Areas

The design of an IT infrastructure, and therefore the selection of a data center location, is based on a company’s specific data security concept, which reflects its availability requirements and the direction of corporate policy. The following criteria should be examined when considering the physical security of a data center location:

• Low potential of danger through neighboring uses, adjacent areas or functions

• Avoidance of risks through media and supply lines, tremors, chemicals, etc. which may impair the physical security of IT systems

• Prevention of possible dangers through natural hazards (water, storms, lightning, earthquakes) – assessment of the characteristics of a region

• The data center as a separate, independent functional area

• Protection from sabotage via a “protected” location

• An assessment of the danger potential based on the social position of the company

If all risk factors and basic company-specific conditions are taken into consideration, not only can dangers be eliminated in advance during the conception of the IT infrastructure, but expenditures and costs can also be avoided. When designing and planning a data center, its different functional areas are arranged in accordance with their security requirements and their importance to the data center’s functional IT integrity. The different functional areas can be divided up as follows:

| Security Zone | Function | Marking (example) |
|---|---|---|
| 1 | Site | white |
| 2 | Semi-public area, adjacent office spaces | green |
| 3 | Operating areas, auxiliary rooms for IT | yellow |
| 4 | Technical systems for IT operation | blue |
| 5 | IT and network infrastructure | red |

Arrangement of Security Zones

When the different security zones are shown schematically, one example arrangement results as follows: the IT area (red) is located on the inside and is protected by the adjacent zones 3 and 4 (yellow/blue), while security zones 1 and 2 (white/green) form the outer layer. Separating functional areas limits the possibilities for accessing sensitive areas, so possible sabotage is prevented. This ensures, for example, that a maintenance technician for the air conditioning systems or the UPS only has access to the technical areas (blue) of the company and not to the IT area (red). The locations of the different functional areas, as well as the division into security zones (or security lines), are key to ensuring the security of the IT infrastructure. However, continuous IT availability can be realized only within the overall context of a comprehensive security concept that considers all IT security areas.




3.3. Network Hierarchy

A hierarchical network design subdivides the network into discrete layers. Each of these layers provides specific functions that define its role within the overall network. Separating the different functions provided in a network makes the network design modular, which also results in optimal scalability and performance. Compared to other network designs, a hierarchical network is easier to administer and expand, and problems can be solved more quickly.

3.3.1. Three Tier Network

Three Tier Networks consist of an Access Layer with switches to desktops, servers and storage resources; an Aggregation/Distribution Layer, in which switching centers combine and protect (e.g. via firewalls) the data streams forwarded from the Access Layer; and the Core Switch Layer, which regulates the traffic in the backbone.

Three Tier Networks evolved from Two Tier Networks toward the end of the 1990s, when Two Tier Networks were pushing their capacity limits. The resulting bottleneck was rectified by introducing an additional aggregation layer. From a technical point of view, the addition of a third layer was therefore a cost-effective, temporary solution to the performance problems of that time. Both network architectures are governed by the Spanning Tree Protocol (STP). This method, developed in 1985 by Radia Perlman, determines how switching traffic in the network behaves. After 25 years, however, STP is pushing its limits. The IETF (Internet Engineering Task Force) therefore intends to replace STP with the TRILL protocol (Transparent Interconnection of Lots of Links). Network protocols for redundant paths are described in more detail in section 3.8.6. Trends such as virtualization and the associated infrastructure standardization represent a new challenge for these Three Tier Networks.
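The essence of spanning-tree behaviour can be sketched in a few lines. The code below is a highly simplified illustration (a hypothetical topology with integer bridge IDs, no BPDU exchange, port roles or timers as in the real protocol): the bridge with the lowest ID becomes root, a breadth-first tree from the root decides which links forward, and all remaining links are blocked to keep the topology loop-free.

```python
from collections import deque

def spanning_tree(bridges, links):
    """Toy spanning-tree sketch: returns (root, forwarding links, blocked links)."""
    root = min(bridges)                        # lowest bridge ID wins the root election
    adjacency = {b: [] for b in bridges}
    for a, b in links:
        adjacency[a].append(b)
        adjacency[b].append(a)
    forwarding, visited, queue = set(), {root}, deque([root])
    while queue:                               # breadth-first tree from the root
        node = queue.popleft()
        for neighbour in sorted(adjacency[node]):
            if neighbour not in visited:
                visited.add(neighbour)
                forwarding.add(frozenset((node, neighbour)))
                queue.append(neighbour)
    blocked = {frozenset(l) for l in links} - forwarding
    return root, forwarding, blocked

# Triangle of three switches: one link must be blocked to break the loop.
root, fwd, blk = spanning_tree([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
print(root, len(fwd), len(blk))   # 1 2 1
```

The blocked link is exactly the redundancy that STP sacrifices, which is why fabric approaches such as TRILL, mentioned above, aim to keep all links active.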

Rampant server virtualization is creating increasing complexity. For example, ten virtual servers can operate on one physical computer and can also be moved from hardware to hardware as needed, in a very flexible manner. So if a network previously had to manage the data traffic of 1,000 servers, this now becomes 10,000 virtual machines – which, to make matters worse, are constantly in motion. Classic networks built on three tiers, which have dominated data centers since the late 90s, are having more and more trouble with this complexity. The call is becoming increasingly loud for a flat architecture, in which the network resembles a fabric of nodes with equal rights. Instead of just point-to-point connections, cross-connections between nodes are also possible and increase the performance of this type of network.



3.3.2. Access Layer

The Access Layer creates the connection to terminal devices such as PCs, printers and IP phones, and arranges access to the remainder of the network. This layer can include routers, switches, bridges, hubs and wireless access points for wireless LAN. The primary purpose of the Access Layer is to provide a mechanism which controls how devices are connected to the network and which devices are allowed to communicate in the network. For this reason, LAN switches in this layer must support features like port security, VLANs, Power over Ethernet (PoE) and other necessary functionalities. Switches in this layer are often also known as Edge Switches. Multiple access switches can often be stacked into one virtual switch, which can then be managed and configured through one master device, since the stack is managed as a single object. Manufacturers sometimes also offer complete sets of security features for connection and access control. One example is user authentication at the switch port in accordance with IEEE 802.1X, so that only authorized users have access to the network while malicious data traffic is prevented from spreading. New Low Latency Switches specifically address the need for I/O consolidation in the Access Layer, for example in densely packed server racks. Cabling and management are simplified dramatically, power consumption and running costs drop, and availability increases. These switches already support Fibre Channel over Ethernet (FCoE) and thus the construction of a Unified Fabric. A Unified Fabric combines server and storage networks into one common, uniformly administered platform, which paves the way for extensive virtualization of all services and resources in the data center.

Product example from Cisco
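The port-security feature described for the Access Layer can be sketched as a toy model (our own illustration, not any vendor's implementation; the MAC addresses are made up): a switch port admits frames only from addresses it has learned, up to a configured limit.

```python
# Toy sketch of Access Layer port security: a port learns up to `max_macs`
# source MAC addresses and drops frames from any other address.
class SecurePort:
    def __init__(self, max_macs=1):
        self.max_macs = max_macs
        self.allowed = set()

    def admit(self, src_mac):
        """Learn up to max_macs addresses; reject frames from any other MAC."""
        if src_mac in self.allowed:
            return True
        if len(self.allowed) < self.max_macs:
            self.allowed.add(src_mac)   # "sticky" learning of the first MAC(s)
            return True
        return False                    # security violation: frame dropped

port = SecurePort(max_macs=1)
print(port.admit("00:11:22:33:44:55"))  # True  (learned)
print(port.admit("00:11:22:33:44:55"))  # True  (already known)
print(port.admit("66:77:88:99:aa:bb"))  # False (violation)
```

Real switches additionally offer violation actions (shutdown, restrict) and aging of learned addresses, which this sketch omits.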

3.3.3. Aggregation / Distribution Layer

The Aggregation Layer, also known as the Distribution Layer, combines the data received from the switches in the Access Layer before it is forwarded on to the Core Layer to be routed to the final receiver. This layer controls the flow of network data with the help of policies and establishes the broadcast domains by carrying out the routing functions between the VLANs (virtual LANs) defined in the Access Layer. This routing typically occurs in this layer because Aggregation Switches have higher processing capacities than switches in the Access Layer. Aggregation Switches therefore relieve Core Switches, which are already used to capacity forwarding very large volumes of data, of the need to carry out this routing function. Through the use of VLANs, the data on a switch can be subdivided into separate sub-networks. For example, data in a university can be subdivided by addressee into data for individual departments, for students and for visitors. In addition, Aggregation Switches also support ACLs (Access Control Lists), which are able to control how data is transported through the network. An ACL allows the switch to reject specific data types and accept others, and thus to control which network devices may communicate in the network. The use of ACLs is processor-intensive, since the switch must examine each individual packet to determine whether it matches one of the ACL rules defined for the switch. This examination is carried out in the Aggregation Layer, since the switches in that layer generally have the processing capacity to manage the additional load. Aggregation Switches are usually high-performance devices that offer high availability and redundancy in order to ensure network reliability. Besides, the use of ACLs in this layer is comparatively simple: instead of configuring ACLs on every Access Switch in the network, they are configured on the Aggregation Switches, of which there are fewer, making ACL administration significantly easier.

Combining links helps avoid bottlenecks, which is highly important in this layer. If multiple switch ports are combined, data throughput can be multiplied by their number (e.g. 8 x 10 Gbit/s = 80 Gbit/s). The term Link Aggregation is used in accordance with IEEE 802.3ad to describe combined (parallel-switched) switch ports, a process called EtherChannel by Cisco and trunking by other providers. Due to the functions they provide, Aggregation Switches are placed under severe load by the network. It is important that these switches support redundancy for the sake of availability: the loss of one Aggregation Switch can have significant repercussions on the rest of the network, since all data from the Access Layer is forwarded to these switches. Aggregation Switches are therefore generally implemented in pairs so as to ensure availability. In addition, it is recommended that the switches used here support redundant network components that can be replaced during continuous operation.
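The per-packet rule matching that makes ACLs processor-intensive can be illustrated with a minimal sketch (the rule set and field names below are hypothetical, not any vendor's syntax): rules are checked in order and the first match decides.

```python
# Minimal sketch of access-control-list evaluation: first matching rule wins.
# A field value of None in a rule acts as a wildcard.
ACL = [
    {"action": "deny",   "proto": "tcp", "dst_port": 23},    # block telnet
    {"action": "permit", "proto": "tcp", "dst_port": None},  # allow other TCP
    {"action": "deny",   "proto": None,  "dst_port": None},  # implicit deny of the rest
]

def evaluate(packet):
    """Check the packet against each rule in order, as an ACL-capable switch
    must do for every packet (hence the processing cost)."""
    for rule in ACL:
        if rule["proto"] not in (None, packet["proto"]):
            continue
        if rule["dst_port"] not in (None, packet["dst_port"]):
            continue
        return rule["action"]

print(evaluate({"proto": "tcp", "dst_port": 23}))   # deny
print(evaluate({"proto": "tcp", "dst_port": 443}))  # permit
print(evaluate({"proto": "udp", "dst_port": 53}))   # deny
```

Because every packet walks the rule list, placing the ACLs on the few Aggregation Switches rather than on every Access Switch concentrates this cost where the processing capacity exists.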



Finally, Aggregation Switches must also support Quality of Service (QoS) in order to maintain the prioritization of data coming from the Access Switches, if QoS is implemented on them as well. Priority policies ensure that audio and video communication is guaranteed enough bandwidth for an acceptable quality of service. All switches which forward data of this type must support QoS to ensure prioritization of voice data throughout the entire network.

Product example from Cisco
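The prioritization idea behind QoS can be sketched as a strict-priority scheduler (an illustration only; real switches combine several queueing disciplines, and the class-to-priority mapping here is assumed): voice frames are always dequeued before bulk traffic, while a sequence number preserves FIFO order within each class.

```python
import heapq

# Assumed class priorities: lower value = served first.
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

queue, seq = [], 0

def enqueue(traffic_class, frame):
    global seq
    heapq.heappush(queue, (PRIORITY[traffic_class], seq, frame))
    seq += 1   # sequence number keeps FIFO order within a class

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("bulk", "file-chunk-1")
enqueue("voice", "rtp-1")
enqueue("bulk", "file-chunk-2")
enqueue("voice", "rtp-2")
print([dequeue() for _ in range(4)])
# ['rtp-1', 'rtp-2', 'file-chunk-1', 'file-chunk-2']
```

Even though the bulk frames arrived first, both voice frames leave first, which is exactly the guarantee the priority policies above are meant to provide.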

3.3.4. Core Layer / Backbone

The Core Layer in the hierarchical design is the high-speed “backbone” of the network. This layer is critical for connectivity between the devices in the Aggregation Layer, which means that devices in this layer must also have high availability and be laid out redundantly. The core area can also be connected to Internet resources. It combines the data from all aggregation devices and must therefore be able to forward large volumes of data quickly. In smaller networks it is not unusual to implement a collapsed core model, in which the Aggregation and Core Layers are combined into one layer. Core Switches are responsible for handling most data in a switched LAN. As a result, the forwarding rate is one of the most important criteria when selecting these devices. Naturally, the Core Switch must also support Link Aggregation so as to ensure that Aggregation Switches have sufficient bandwidth available. Due to the severe load placed on Core Switches, they tend to generate more heat than Access or Aggregation Switches, so one must make certain that sufficient cooling is provided for them. Many Core Switches offer the ability to replace fans without having to turn off the switch.

Product example from Cisco

QoS (Quality of Service) is also an essential service component for Core Switches. Internet providers (who provide IP, data storage, e-mail and other services) and company WANs (Wide Area Networks), for example, have contributed significantly to a rise in the volume of voice and video traffic. Company- and time-critical data such as voice data should receive higher QoS guarantees in the network core and at the edges of the network than less critical data, such as file transfers or e-mail.

Since access to high-speed WANs is often prohibitively expensive, adding bandwidth in the Core Layer is not always the solution. Because QoS enables a software-based approach to data prioritization, Core Switches can provide a cost-effective way of using existing bandwidth optimally and flexibly.



3.3.5. Advantages of Hierarchical Networks

A number of advantages are associated with hierarchical network designs:

• Scalability
• Redundancy
• Performance
• Security
• Ease of administration
• Maintainability

Since hierarchical networks are by nature scalable in an easy, modular fashion, they are also very maintainable. Maintenance of other network topologies becomes increasingly complicated as the network grows. In addition, certain network design models impose a fixed boundary on network growth; if this boundary is exceeded, maintenance becomes too complicated and too expensive. In the hierarchical design model, switching functions are defined for every layer, simplifying the selection of the correct switch. This is not to say that adding switches to one layer can never lead to a bottleneck or some other restriction in another layer. In a fully meshed topology, all switches must be high-performance devices so the topology can achieve its maximum performance, because every switch must be capable of performing all network functions. By contrast, switching functions in a hierarchical model are differentiated by layer, so more cost-effective switches can be used in the Access Layer than in the Aggregation and Core Layers. Before designing the network, its diameter must first be examined. Although a diameter is traditionally specified as a length, in network technology this measure of a network’s size is expressed as a number of devices: the network diameter refers to the number of devices that a data packet must pass in order to reach its recipient. Small networks can therefore ensure a lower, predictable latency between devices. Latency refers to the time a network device requires to process a packet or a frame. Each switch in the network must determine the destination MAC address of the frame, look it up in its MAC address table and then forward the frame over the corresponding port. If an Ethernet frame must pass through a number of switches, the latencies add up, even when the entire process only lasts a fraction of a second.
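The notions of network diameter and cumulative latency can be made concrete with a small sketch. The topology and the per-hop latency below are assumed values purely for illustration:

```python
from collections import deque

# Sketch: diameter measured as the number of switch hops a frame traverses,
# and the switching latency this adds up to. Topology and per-hop latency
# (microseconds) are assumed example values.
LINKS = {
    "access1": ["agg1"],
    "agg1":    ["access1", "core", "agg2"],
    "agg2":    ["agg1", "access2"],
    "core":    ["agg1"],
    "access2": ["agg2"],
}
PER_HOP_US = 4  # assumed lookup-and-forward time per switch

def hops(src, dst):
    """Shortest hop count between two nodes (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for n in LINKS[node]:
            if n not in seen:
                seen.add(n)
                queue.append((n, dist + 1))

h = hops("access1", "access2")
print(h, "hops,", h * PER_HOP_US, "microseconds of switching latency")
```

Shrinking the diameter (e.g. by flattening the hierarchy) directly reduces the summed per-switch latency, which is the motivation behind the flat fabric architectures mentioned earlier.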
Very large volumes of data are generally transferred over “data center” switches, since these handle communication for both server-to-server and client-to-server data. For this reason, switches intended for data centers offer higher performance than those intended for terminal devices. Active network components and their additional functionalities are covered in section 3.6. Data centers are geared to tasks and requirements which include mass storage in addition to computing operations. Networks for these data centers are high- and maximum-performance networks in which data transfer rates in the gigabit range are achieved. Various high-speed technologies are therefore used in data center architectures, and data rates are increased using aggregation. Storage traffic (SAN) is handled over Fibre Channel (FC), client-server communication over Ethernet, and server-server communication over InfiniBand, for example. These different network concepts are increasingly being replaced by 10 Gigabit Ethernet. The advantage of this technology is that 40 Gigabit and 100 Gigabit Ethernet versions also exist. 40/100 Gigabit Ethernet is therefore also suited for the Core and Aggregation Layers in data center architectures and for use with ToR switches. Different Ethernet versions are listed in section 3.8.2 and Ethernet migration paths in section 3.10.2.

3.4. Cabling Architecture in the Data Center

The placement of network components in the data center is an area with high potential for optimization. So-called ToR (Top of Rack) concepts have advantages and disadvantages compared to EoR/MoR (End of Row/Middle of Row) concepts. One advantage of ToR lies in its effective cabling with short paths to the server, whereas one disadvantage is its early aggregation and potential “overbooking” in the uplink direction. This is a point in favor of EoR concepts, which use chassis with high port densities and serve as a central fabric for all servers. One will usually find a mix in which 1 Gbit/s connections are aggregated to 10 Gbit/s in the ToR area and combined with direct 10 Gbit/s server connections on an EoR system.




[Figure: cabling concept overview – application servers/storage library connected via LAN, KVM and SAN switches, with uplinks to the aggregation & core layer]



The different cabling architectures, as well as their advantages and disadvantages, are described below. Each server used in the concepts is connected via 3 connections; for redundancy, 5 connections are provided (+1 for LAN, +1 for SAN).

• LAN : Ethernet over copper or fiber optic cables (1/10/40/100 gigabit)

• SAN: Fibre Channel (FC) over fiber optic cables or Fibre Channel over Ethernet (FCoE) over fiber optic or copper cables

• KVM: Keyboard/video/mouse signal transmission over copper cables

KVM (keyboard/video/mouse) switches are available ranging from stand-alone solutions for 8 to 32 servers up to complex multi-user systems for data center applications with up to 2,048 computers. The location of the computers is irrelevant: they can be accessed and administered locally or over TCP/IP networks – worldwide!

3.4.1. Top of Rack (ToR)

Top of Rack (ToR) is a networking concept that was developed specifically for virtualized data centers. As the name implies, a ToR switch or rack switch (functioning as an Access Switch) is installed in the top of the rack so as to simplify the cabling to individual servers (patch cords). The ToR concept is suited for high-performance blade servers and supports migration up to 10 Gigabit Ethernet. The ToR architecture implements high-speed connections between servers, storage systems and other devices in the data center.

ToR switches have slots for transceiver modules, enabling optimal and cost-effective adaptation to 10 Gigabit Ethernet or 40/100 Gigabit Ethernet. The port densities that can be achieved in this way depend on the transceiver module used. Thus, a ToR switch of one height unit can have 48 ports with SFP+ modules or 44 ports with QSFP modules. The non-blocking data throughput of ToR switches can be one terabit per second (Tbit/s) or higher. Different transceiver modules are described in section 3.6.4.
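The port-density figures above can be checked with back-of-envelope arithmetic (a sketch only; full duplex is counted once here, and vendors sometimes quote doubled full-duplex numbers):

```python
# Aggregate throughput of a ToR switch from port count x port speed,
# using the port densities quoted above.
def aggregate_gbit(ports, gbit_per_port):
    return ports * gbit_per_port

sfp_plus = aggregate_gbit(48, 10)   # 48 SFP+ ports at 10 Gbit/s
qsfp     = aggregate_gbit(44, 40)   # 44 QSFP ports at 40 Gbit/s
print(sfp_plus, "Gbit/s with SFP+;", qsfp, "Gbit/s with QSFP")
# 480 Gbit/s with SFP+; 1760 Gbit/s with QSFP
```

The QSFP configuration lands well above 1 Tbit/s, consistent with the "one terabit per second or higher" figure for a 1U ToR switch.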



New ToR switches sometimes support Data Center Bridging (DCB), and therefore Lossless Ethernet and Converged Enhanced Ethernet (CEE). The operation and significance of Data Center Bridging is explained in section 3.8.7. If one limits the ToR concept to LAN communication, the result is a cabling architecture in which the SAN and KVM connections are designed as in an EoR architecture, and an additional rack is therefore required to house them. This frees up space for additional servers in the individual racks. The port assignment of the SAN switches is optimized, which can mean that fewer SAN switches are needed. The Cisco version represents another example of a ToR architecture: LAN fabric extenders are used in place of ToR switches and can also support Fibre Channel over Ethernet (FCoE) in addition to 10 Gigabit Ethernet connectivity. By using these extenders of the Unified Computing System (UCS) architecture, Cisco provides standardization of the LAN/SAN networks, also known as I/O consolidation. Though these various ToR concepts appear to be different, they nevertheless have common features.

Advantages:

• Smaller cable volume (space savings in horizontal cabling and reduced installation costs)
• Suitable for high server density (blade servers)
• Easy to add additional servers

Disadvantages:

• No optimal assignment of LAN ports (no efficient use of all switch ports, possibly unnecessary additional switches)
• Inflexible relation between Access and Aggregation, which leads to problems with increased server performance, aggregation of blade servers and the introduction of 100 GbE – a scalability and future-proofing concern

[Figures: ToR cabling variants – LAN, KVM and SAN switches with application servers/storage library, and uplinks to the aggregation & core layer]







3.4.2. End of Row (EoR) / Dual End of Row
In an End of Row architecture, a group of server cabinets is supplied by one or more switches. Data cables are installed in a star pattern from the switches to the individual server cabinets. This generally means 3 cables (LAN, SAN, KVM) per server, or 5 with redundant LAN and SAN connections. For 32 1U servers, at least 96 data cables per cabinet are therefore required.
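The cable count above follows directly from the per-server rule of thumb. As a minimal sketch (the function name and the non-redundant/redundant split are illustrative, taken from the 3-vs-5-cable rule in the text):

```python
# Rough estimate of horizontal cable counts per cabinet in an EoR layout.
# Rule of thumb from the text: 3 cables per server (LAN, SAN, KVM),
# or 5 with redundant LAN and SAN connections.

def cables_per_cabinet(servers: int, redundant: bool = False) -> int:
    """LAN + SAN + KVM per server; redundancy adds a second LAN and SAN cable."""
    per_server = 5 if redundant else 3
    return servers * per_server

print(cables_per_cabinet(32))                  # 96 cables, as in the text
print(cables_per_cabinet(32, redundant=True))  # 160 cables with redundancy
```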


[Figure: End of Row cabling architecture – legend: LAN switch, KVM switch, SAN switch; LAN, KVM and SAN connections; Aggregation & Core; application server or storage library]









Since 2010, the performance of virtualized servers has increased dramatically through the systematic introduction of SR-IOV (Single Root I/O Virtualization) by leading processor manufacturers. Up to that point, data throughput in virtualized systems was limited to 3 – 4 Gbit/s, and even less than 1 Gbit/s in continuous operation; a doubled 1 GbE connection was therefore the right solution. The modern alternative is to connect servers directly to high-performance aggregation switches. This solution is possible because I/O performance in a virtualized system is currently about 20 – 30 Gbit/s, since SR-IOV and suitable hardware free the hypervisor (the virtualization software which creates an environment for virtual machines) from switching tasks wherever possible. There are server blades that must each be connected with dual 10 GbE links to reach their full potential. Blade systems with up to 8 blade servers must therefore be connected at 100 GbE. Blade system manufacturers report they are currently working on improving internal aggregation, so a 100 GbE connection is expected to be offered by 2011. A conventional ToR structure would therefore be completely overloaded. In addition, no cost-effective switches are available on the market which provide the necessary flexibility and can be equipped with new, high-quality control functions such as DCB, which only make sense when they work end to end. An EoR structure therefore has the following features.

Advantages:

• Flexible, scalable solution (future-proof)
• Optimal LAN port assignment (efficient)
• Simplified moves/adds/changes
• Space optimization in racks for servers

Disadvantages:

• Greater cable volume in horizontal cabling
• Concentration of Access Switches
• Many cable patchings at EoR switches (server and uplink ports)



Dual End of Row
The Dual End of Row variant is attractive for reasons of redundancy, since the concentration of Access Switches is split between the two ends of the cabinet row and the corresponding cable paths are also separated as a result. Apart from that, the solution corresponds to the EoR principle.


[Figure: Dual End of Row cabling architecture – legend: LAN switch, KVM switch, SAN switch; LAN, KVM and SAN connections; uplink; redundancy; Aggregation & Core; application server or storage library]

The following features are provided with this cabling architecture, in addition to the redundancy option.

Advantages:

• Flexible, scalable solution (future-proof)
• Relatively good LAN port assignment (efficient)
• Simplified moves/adds/changes
• Space optimization, allowing for server expansion

Disadvantages:

• Greater cable volume in horizontal cabling
• Concentration of Access Switches in 2 places
• Many cable patchings at EoR switches (server and uplink ports), but half as many as with EoR
• Longer rows of cabinets if necessary (+1 rack)

3.4.3. Middle of Row (MoR)
In MoR concepts, the concentration points containing the Access Switches are not located at the ends, but centrally within the rows of racks.

Advantages: (similar to EoR)

• Flexible, scalable solution
• Optimal port assignment
• Easy MAC (moves/adds/changes)
• Easier server expansion
• Shorter distances than with EoR

Disadvantages: (like EoR)

• Greater cable volume in horizontal cabling
• Many cable patchings at the MoR switches



[Figure: Middle of Row cabling architecture – legend: LAN switch, KVM switch, SAN switch; LAN, KVM and SAN connections; uplink; Aggregation & Core; application server or storage library]



3.4.4. Two Row Switching
In this concept, the terminal devices (servers and storage systems) as well as the network components (switches) are housed in separate cabinet rows, which can be a sensible option in smaller data centers.








[Figure: Two Row Switching – network components (LAN, KVM and SAN switches) and terminal devices (application servers, storage library) in separate cabinet rows; legend: LAN, KVM and SAN connections; uplink; Aggregation & Core]








This cabling structure, which is comparable to the Dual End of Row concept, stands out because of the following features.

Advantages: (similar to EoR)

• Flexible, scalable solution (future-proof)
• Relatively good LAN port assignment (efficient)
• Simplified moves/adds/changes
• Space optimization in racks, allowing for server expansion
• Shorter distances in the backbone

Disadvantages: (like EoR)

• Greater cable volume in horizontal cabling
• Concentration of Access Switches in 2 places
• Many cable patchings at the switches (server and uplink ports), but half as many as with EoR

3.4.5. Other Variants
Other possible variants on cabling structures are listed below. As mentioned at the start, combinations or hybrid forms of the described cabling architectures are used in practice, depending upon physical conditions and requirements. The same is true of the selection of transmission media, which can determine data center sustainability in terms of migration options (scalability). An overview of the different transmission protocols, the media they support and their associated length restrictions can be found in section 3.8.



Integrated Switching
Blade servers are used in integrated switching. They consist of a housing with integrated plug-in cards for switches and servers. Cabling is usually reduced to FO backbone cabling, though this is not the case in the following configuration.


[Figure: Integrated switching with blade servers – legend: LAN switch, KVM switch, SAN switch; LAN, KVM and SAN connections; uplink; Aggregation & Core; application server or storage library]





Integrated switching generally includes I/O consolidation as well, which means that communication with the SAN environment is implemented using FCoE (Fibre Channel over Ethernet).

Pod Systems
Pod systems are a group (12 to 24) of self-contained rack systems. They are highly optimized and efficiently constructed with regard to power consumption, cooling performance and cabling, so as to enable rapid replication. In addition, each pod system can be monitored and evaluated. Because of their modularity, pod systems can be put into use regardless of the size of the data center. In organizations that require higher data center capacity, units are replicated one after another as necessary until the required IT performance is achieved. With smaller pod systems, lab and data center capacities can be expanded in a standard manner, depending upon business development. In this way, the organization can react flexibly to acquisitions or other business expansions, corporate consolidations or other developments dictated by the market. Any consolidated solution must consider the "single point of failure" problem: systems and their paths of communication should be laid out redundantly.

Uplinks
All terminal devices in the network are connected to ports on switches (previously hubs), which act as distribution centers and forward the data traffic to the next higher network level in the hierarchy (also see section 3.3). This means that the forwarding system must support some multiple of the data rates of the individually connected devices, so that the connection does not become a "bottleneck". These connections, so-called uplinks, must therefore be designed for high performance. For example, if all subscribers at a 24-port switch want to transfer data at the same time at an unrestricted rate of 1 Gbit/s, the uplink must make at least 24 Gbit/s available. However, since this represents a theoretical assumption, a 10 Gbit/s connection, which can transfer 20 Gbit/s in full-duplex operation, would be used in practice in this case. Uplink ports can also be used as media converters (copper to fiber) and can exist as permanently installed interfaces or be modular (free slot), which increases flexibility and scalability, since a wide range of interfaces is offered by most manufacturers (also see section 3.6.4).
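The uplink sizing argument above can be sketched as a simple oversubscription calculation. The figures are the example from the text (24 × 1 Gbit/s access ports, one 10 GbE uplink); the function name is illustrative:

```python
# Worst-case aggregate access demand divided by installed uplink capacity
# gives the oversubscription ratio of a switch uplink.

def oversubscription(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Ratio of theoretical access demand to uplink capacity (1.0 = non-blocking)."""
    return (ports * port_gbps) / uplink_gbps

ratio = oversubscription(ports=24, port_gbps=1.0, uplink_gbps=10.0)
print(f"{ratio:.1f}:1")  # 2.4:1 – acceptable in practice per the text
```

A ratio of 1.0 or less would be strictly non-blocking; the text's point is that a moderate ratio such as 2.4:1 is acceptable because all ports rarely transmit at full rate simultaneously.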



In contrast to user ports, the transmission and receiving lines of a switch's uplink port are interchanged. In modern devices, ports are equipped with functionality known as auto uplink (or Auto MDI-X). This allows them to recognize on their own whether it is necessary to swap the transmission and receiving lines, so separate uplink ports or crossover cables are unnecessary. Connecting switches in series by using uplinks is known as cascading. The following illustration demonstrates cascading from the Access to the Aggregation area. In the left image, these connections are established over permanently installed cables; in the right image, by means of patch cords over the cable duct.


[Figures: Cascading from Access to Aggregation – left: over permanently installed cables; right: via patch cords over the cable duct. Legend: LAN switch, KVM switch, SAN switch; LAN, KVM and SAN connections; uplink; Aggregation & Core]



These variants can be a sensible option for short distances, especially since there are two fewer connections in between and patch panels can be saved at the same time.

Arranging Patch Panels
Patch panels do not have to be installed horizontally, or in the rack at all. There are attractive alternatives, if only from the viewpoint of making efficient use of the already scarce free space in the data center.



[Figures: Alternative patch panel placement – vertical mounting; floor box. Legend: LAN, KVM and SAN connections]





R&M provides space-saving solutions for this purpose. For example, 3 patch panels can be installed in the Robust (CMS) cabinet type on both the front and rear 19" mounting plane, each on the left and right side of the 19" sections, as shown in the center image. By using high-density panels with 48 ports each, 288 connections can be built into a rack of that type, in both the front and rear – and the full installation height is still available. Similarly, 288 connections (copper or fiber optic) can be supported with R&M's "Raised Floor Solution", an under-floor distributor solution that can mount up to six 19" patch panels (image to the right). The distributor platform is designed for raised floors (600 x 600 mm) and therefore allows the use of narrow racks (server racks), which in turn leads to space savings in the data center. When installing these solutions, take care that the supply of cooled air is not blocked.

Cable Patches for Network Components
In smaller data centers, distribution panels and active network components are often housed in the same rack, or racks are located directly next to one another.




[Figures: Patching of network components – three variants (interconnect, combined, cross-connect). Legend: LAN switch, KVM switch, SAN switch; LAN, KVM and SAN connections; uplink; redundancy]

In the image on the left above, the components are separated into different racks which are connected to one another with patch cords. In the center example, all components are located in the same cabinet, which simplifies patching. In the illustration on the right, the active network devices are separated once again, but are now routed into the second cabinet over installed cables to patch panels, which is solved most easily using pre-assembled cables. Patching is then performed on these patch panels.

SAN
These days, companies generally use Ethernet for TCP/IP networks and Fibre Channel for Storage Area Networks (SAN) in data centers. Storage Area Networks are implemented in companies and organizations that, in contrast to LANs, require access to block I/O for applications like booting, mail servers, file servers or large databases. However, this area is undergoing a radical change, and Fibre Channel over Ethernet (FCoE) and the iSCSI method are competing to take the place of Fibre Channel technology in the data center. Many networks in the data center are still based on a "classic" setup: Fibre Channel is used for the SAN, Ethernet for the LAN (both as a server interconnect and for server-client communication). Accordingly, both a network interface card (NIC), frequently a 10GbE NIC, and an appropriate FC host bus adapter (HBA) are built into each server. Nevertheless, the medium-term trend is toward a converged infrastructure, as the formerly separate LAN and SAN are merging more and more. Both economic and operational aspects are points in favor of this, one example being the standardization of the components in use. The convergence of FC to FCoE is described in section 3.7.2.



3.5. Data Center Infrastructure
An enormous savings potential in the areas of energy demand and CO2 reduction can result from properly designing a data center and installing servers and racks. This especially applies to the layout of rack rows and the routing of heated and cooled air. Even small changes in arranging the supply and return air can lead to dramatic improvements and higher energy efficiency. This in turn assumes an "intelligent" room concept, which saves space yet at the same time allows for expansion, saves energy and is cost-effective. In addition to expandability, one must make sure, when equipping a data center, that the facility is easy to maintain. Data center accesses (doors) and the structural engineering must be designed in such a way that large, heavy active components and machines can be installed, maintained and replaced. Access doors must therefore be at least 1 meter wide and 2.1 meters high, lockable and have no thresholds. The room must have a headroom of at least 2.6 meters from the floor to the lowest installations, and a floor loading capacity of more than 750 kg/m² (2.75 meters and 1,200 kg/m² is better still). Placing cabinets in rows makes for arrangements that are advantageous for cooling active components. One must make sure that aisles have sufficient space to allow for installing and dismantling active components. Since active components have featured ever greater installation depths in recent years, an aisle width of at least 1.2 meters must be provided. If aisles cannot be made equally wide, the planner should make the aisle at the front of the cabinet row the larger one. If a raised floor is used, cabinet rows should be placed so that at least one side of the cabinets is flush with the grid of the floor tiles, and so that at least one row of floor tiles per aisle can be opened.

3.5.1. Power Supply, Shielding and Grounding
Proper grounding is a key factor in any building.
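The minimum room requirements listed above can be expressed as a simple checklist. A minimal sketch – the limits are taken from the text, while the key names and function are illustrative assumptions:

```python
# Checklist of the minimum room requirements from the text:
# doors >= 1.0 x 2.1 m, >= 2.6 m headroom, >= 750 kg/m2 floor load,
# >= 1.2 m aisle width.

MINIMUMS = {
    "door_width_m": 1.0,
    "door_height_m": 2.1,
    "headroom_m": 2.6,
    "floor_load_kg_m2": 750,
    "aisle_width_m": 1.2,
}

def check_room(spec: dict) -> list:
    """Return the names of all requirements the room fails to meet."""
    return [key for key, minimum in MINIMUMS.items() if spec.get(key, 0) < minimum]

room = {"door_width_m": 1.1, "door_height_m": 2.1, "headroom_m": 2.75,
        "floor_load_kg_m2": 1200, "aisle_width_m": 1.2}
print(check_room(room))  # [] -> all minimums met
```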
It protects people within the building from dangers that originate from electricity. Regulations for grounding systems in individual countries are established locally, and installation companies are generally familiar with them. With the introduction of 10 Gigabit Ethernet, shielded cabling systems have been gaining in popularity around the world. However, since unshielded data cabling systems have predominated in most of the world up to this point, it is important to have exact knowledge about the grounding of shielded patch panels and the options available for it.

Importance of Grounding
Every building in which electrical devices are operated requires an appropriate grounding concept. Grounding and the ground connection affect safety, functionality and electromagnetic interference resistance (EMC: electromagnetic compatibility). By definition, electromagnetic compatibility is the ability of an electrical installation to operate in a satisfactory manner in its electromagnetic environment, without improperly affecting the environment in which other installations operate. The grounding system of buildings in which IT devices are operated must therefore satisfy the following requirements:

• Safety from electrical hazards

• Reliable signal reference within the entire installation

• Satisfactory EMC behavior, so all electronic devices work together without trouble

In addition to current local regulations, the following international standards provide valuable aid in this area: IEC 60364-5-548, EN 50310, ITU-T K.31, EN 50174-2, EN 60950, TIA-607-A. Ideally, low- and high-frequency potential equalization for EMC is meshed as tightly as possible between all metal masses, housings (racks), machine and system components.

Alternating Current Distribution System
A TN-S system should always be used to separate the neutral conductor of the alternating current distribution system from the grounding network. TN-C systems with PEN conductors (protective earth and neutral combined) may not be used for internal installations serving IT devices. There are essentially two possible configurations for building grounding systems, described as follows.



Grounding System as a Tree Structure
A tree or star configuration was traditionally the preferred configuration for the grounding system in the telecommunications sector. In a tree structure, the individual grounding conductors are only connected with one another at a central grounding point. This method avoids ground loops and reduces the interference from low-frequency interference voltages (hum).

Grounding System as a Mesh Structure
The basic concept of a meshed system is not to avoid ground loops, but to keep these loops as small as possible and to distribute the currents flowing in them as evenly as possible. Meshed structures are almost always used today to ground high-frequency data transfer systems, since it is extremely difficult to achieve a proper tree structure in modern building environments. The building as a whole must have as many suitable grounding points as possible if this type of grounding is to be used. All metallic building components must always be connected to the grounding system using appropriate connection components. The conductive surfaces and cross-sections of these connection elements (e.g. metal strips and bars, bus connections, etc.) must be as large as possible so that they can draw off high-frequency currents with little resistance.

[Figure: Grounding system as a tree structure – TGBB: Telecommunications Ground Bonding Backbone; MET: Main Earthing Terminal; building structural steel; horizontal cable. Note: power outlet connections shown are logical, not physical.]



In buildings in which no continuous meshed structure can be set up for the earthing system, the situation can be improved by setting up cells. A locally meshed grounding system of this type is achieved by using cable ducts made of metal, conductive raised floors or parallel copper conductors.

Grounding Options for Patch Panels
All patch panels should be suitable for both tree and meshed grounding systems, so that customers retain complete flexibility. In addition to the grounding system in use (mesh or tree), the choice of method also depends on whether or not the 19" cabinet or mounting rail is conductive. The earthing connection for patch panels may not be daisy-chained from one patch panel to another, since this increases impedance. Every patch panel in the cabinet must be grounded individually, in a tree structure, to the cabinet or to ground busbars.

Grounding and Ground Connection in the Cabinet
Once the patch panel has been bonded to the cabinet, the cabinet itself must be bonded to ground. In a tree configuration it would connect to the telecommunications main grounding busbar (TMGB) or a telecommunications grounding busbar (TGB); in a mesh configuration it would be connected to the nearest grounding point of the building structure. It is crucial that the impedance of the current path to ground is as low as possible. The impedance (Z) depends on the ohmic resistance (R) and the line inductance (L):

Z = √(R² + (2π · f · L)²)
This relationship should be considered when choosing the earthing strap to bond the cabinet to ground. Flat cables, woven bands or copper sheet metal strips exhibit low-impedance characteristics. At a minimum, a stranded conductor with a cross-section of 4 – 6 mm² should be used (16 mm² is better). With regard to EMC, an HF-shielded housing with improved shielding effectiveness may be required in applications with high-frequency field-related influences. A definite statement regarding which housing version is required to satisfy specific standard limits can be made only after measurements have been taken.
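Why the conductor shape matters can be illustrated numerically: at high frequencies the inductive term dominates the impedance. A minimal sketch – the example values (roughly 1 µH of inductance and 5 mΩ of resistance for about one meter of round conductor) are typical order-of-magnitude assumptions, not measured data:

```python
import math

# Magnitude of the grounding-path impedance Z = R + j*2*pi*f*L.
# At mains frequency the ohmic resistance dominates; at HF the
# inductance does, which is why flat, low-inductance straps are preferred.

def impedance(r_ohm: float, l_henry: float, f_hz: float) -> float:
    """Magnitude of Z for a series R-L grounding path at frequency f."""
    return math.sqrt(r_ohm ** 2 + (2 * math.pi * f_hz * l_henry) ** 2)

R, L = 5e-3, 1e-6  # assumed: ~1 m of round conductor
print(impedance(R, L, 50))    # ~0.005 ohm: R dominates at 50 Hz
print(impedance(R, L, 10e6))  # ~62.8 ohm: inductance dominates at 10 MHz
```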





[Figure: Grounding system as a mesh structure – building structural steel; metallic conduit. Note: power connections shown are logical, not physical.]



3.5.2. Cooling, Hot and Cold Aisles
Every kilowatt (kW) of electrical power that is used by IT devices is later released as heat. This heat must be drawn away from the device, cabinet and room so that operating temperatures are kept constant. Air conditioning systems that operate in a variety of ways and offer different levels of performance are used to draw this heat away. Providing air conditioning to IT systems is crucial for their availability and security. The increasing integration and packing density of processors and computer/server systems causes a level of waste heat that was unimaginable in such a limited space only a few years ago. Various solutions for air conditioning exist on the market and are based on power and dissipation loss, in other words the waste heat of the IT components in use. With more than 130 W/cm² per CPU – this corresponds to two standard electric light bulbs per square centimeter – the job of "data center air conditioning" has been gaining in importance. This power density results in heat loads much greater than 1 kW per square meter. According to actual measurements and practical experience, dissipation loss of up to 8 kW can still be handled in a rack or housing by a conventional air conditioning system that is implemented in a raised floor and realized through cooling air, as still exists in many data centers. However, the air flow system of the raised floor implemented in conventional mainframe data centers can no longer meet today's in some instances extremely high requirements. Although a refrigerating capacity of 1 to 3 kW per 19" cabinet was sufficient for decades, current cooling capacities per rack must be increased significantly. Current devices installed in a 19" cabinet with 42 height units can take in over 30 kW of electrical power and therefore emit over 30 kW of heat. A further increase in cooling capacity can be expected as device performance continues to increase while device sizes become smaller.
In order to improve the performance of existing air conditioning solutions that use raised floors, containment is currently provided for active components, which are arranged in accordance with the cold aisle/hot aisle principle. Cold and hot aisles are sometimes enclosed in order to enable higher heat emissions per rack. A cabling solution that is installed in a raised floor, and that is well arranged and professionally optimized in accordance with space requirements, can likewise contribute to improved air conditioning. The decision criteria for an air conditioning solution include, among other things, the maximum expected dissipation loss, operating costs, acquisition costs, installation conditions, expansion costs, future-proofing, and the costs of downtime and physical security. There are basically two common air conditioning types:

• Closed-circuit air conditioning

• Direct cooling

Solution per the cold aisle/hot aisle principle – closed-circuit air conditioning (Source: BITKOM)

Closed-Circuit Air Conditioning Operation
In the past, room conditions for data center air conditioning were limited to room temperatures of 20° C to 25° C and relative humidity of 40% to 55% (RH). Now requirements for supply and exhaust air are also being addressed because of the cold aisle/hot aisle principle, since room conditions in the actual sense are no longer found in the entire space to be air conditioned. Conditions for supply air in the cold aisle should be between 18° C and 27° C, depending upon the application – relative supply air humidity should be between 40% and 60% RH. Lower humidity leads to electrostatic charging, high humidity to corrosion of electrical and electronic components. Operating conditions with extremely cold temperatures of under 18° C and high humidity, which lead to the formation of condensation on IT devices, must always be avoided. A maximum temperature fluctuation of 5° C per hour must also be taken into consideration.
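The supply-air limits above lend themselves to a simple monitoring check. A minimal sketch – the thresholds come from the text, while the function itself is an illustrative assumption:

```python
# Check cold-aisle supply air conditions against the limits in the text:
# 18-27 C temperature, 40-60 % RH, at most 5 C change per hour.

def supply_air_ok(temp_c: float, rh_pct: float, delta_t_per_h: float) -> list:
    """Return a list of findings; an empty list means conditions are in range."""
    issues = []
    if not 18 <= temp_c <= 27:
        issues.append("supply air temperature out of range (18-27 C)")
    if not 40 <= rh_pct <= 60:
        issues.append("relative humidity out of range (40-60 % RH)")
    if delta_t_per_h > 5:
        issues.append("temperature fluctuation above 5 C per hour")
    return issues

print(supply_air_ok(22, 50, 2))  # [] -> within limits
print(supply_air_ok(16, 65, 6))  # three findings, including condensation risk
```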



Higher temperatures may be selected today than previously, thanks to the improved manufacturing quality of silicon chips. Preset temperatures of 24° C to 27° C and relative humidities of about 60% are currently state of the art. In the closed-circuit air conditioning principle, the air cooled by the air conditioning system circulates to the IT components, takes in heat, and the warmed air then returns to the air conditioning system as return air to be re-cooled. Only a small amount of outside air is introduced into the room that is to be air-conditioned and used for air exchange.

Optimal conditions with respect to temperature and relative humidity can only be achieved with closed-circuit air-conditioning units, so-called precision air-conditioning units. The energy used in these systems is better utilized, i.e. reducing the temperature of the return air is the first priority. These units stand in contrast to the comfort air-conditioning units used for residential and office spaces, such as split units, which continuously use a large portion of the energy they consume to dehumidify recirculated air. This can lead not only to critical room conditions but also to significantly higher operating costs, which is why their use is not economically feasible in data centers.

Rack arrangement and air flow are both crucial factors in the performance of closed-circuit air conditioning systems. This is why 19" cabinets, in particular, are currently arranged in accordance with the so-called hot aisle/cold aisle principle, so as to produce, as far as possible, the horizontal air flow required by IT and network components. In this arrangement, the air flow is forced to take in the heat from the active components on its path from the raised floor back to the air conditioning. A distinction is made between self-contained duct systems (Plenum Feed, Plenum Return) and circulation via the room air (Room Feed, Room Return).

One must make sure that the 19" cabinets correspond to the circulation principle that is selected. In "Room Feed, Room Return", cabinet doors (if they exist) must be at least 60% perforated in order to permit air circulation. The floor of the cabinet should be sealed to the raised floor at least on the warm side, in order to avoid mixing air currents. A cooling principle of this type is suited for powers up to approximately 10 kW per cabinet. In "Plenum Feed, Plenum Return", it is absolutely necessary that the cabinet has doors and that they can be sealed airtight. By contrast, the floor must be sealed on the warm side and open on the cold side. These solutions are suitable for powers of approximately 15 kW.

Within the cabinet no openings may exist between the cold side and the warm side, to ensure that mixtures and microcirculations are avoided. Unused height units and feedthroughs should be closed with blind panels and seals. Performance can be further optimized by containment of the cold or warm aisle (Contained Cold/Warm Aisle). Of crucial importance is the height of the raised floor, over which the area of the data center is to be provided with cold air. Supply air comes out via slot plates or grids in the cold aisle. After it is warmed by the IT devices, the return air reaches the air conditioning system again to be re-cooled.
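The relation between heat load, temperature rise and required cooling airflow can be estimated with the standard sensible-heat formula Q = P / (ρ · cp · ΔT). A back-of-the-envelope sketch – the air properties and the 12 K temperature rise are assumed illustrative values, not figures from the text:

```python
# Volume flow of cooling air needed to absorb a given heat load,
# using Q = P / (rho * cp * dT) with assumed air properties.

RHO_AIR = 1.2    # kg/m3, density of air (assumed)
CP_AIR = 1005.0  # J/(kg K), specific heat of air (assumed)

def airflow_m3_per_h(heat_kw: float, delta_t_k: float) -> float:
    """Cooling airflow in m3/h for a heat load heat_kw at temperature rise delta_t_k."""
    return heat_kw * 1000 * 3600 / (RHO_AIR * CP_AIR * delta_t_k)

print(round(airflow_m3_per_h(10, 12)))  # ~2488 m3/h for a 10 kW cabinet
```

This illustrates why high-density cabinets quickly exceed what an open raised-floor plenum can deliver and push operators toward containment or water-cooled racks.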

Direct-Cooling Principle – Water-Cooled Server Rack
Direct cooling of racks must be implemented when heat loads exceed 10 to 15 kW per cabinet. This is realized via a heat exchanger installed in the immediate vicinity of the servers – usually a heat exchanger cooled with cold water, arranged either below or next to the 19" fixtures. Up to 40 kW of heat per rack can be drawn off in this way. A cold-water infrastructure must be provided in the rack area for this method. The water-cooled racks ensure the proper climatic conditions for their server cabinets, and are therefore self-sufficient with respect to the room air-conditioning system. In buildings with little clearance between floors, water-cooled server racks are a good option for drawing off high heat loads safely without having to use a raised floor. The high-performance cooling required for higher powers also includes cold and hot aisle containment.

Plenum Feed, Plenum Return (Source: Conteg)



The following illustrations show the power densities that can be realized with various air conditioning solutions. (Source: BITKOM)

Air flow in the data center is essential for achieving optimal air conditioning. The question is whether hot or cold aisles make sense economically and are energy-efficient. In a cold aisle system, cold air flows through the raised floor. The aisle between the server cabinets is enclosed to be airtight. Since the floor is only opened up in this encapsulated area, cold air only flows in here. Data center operators therefore often supply this inlet with higher pressure. Nevertheless, the excess pressure should be moderate, since the fans in the servers regulate air volume automatically. These are fine points, but they contribute greatly to optimizing the PUE value. Computers blow waste heat into the remaining space. It rises and forms a cushion of warm air under the ceiling. The closed-circuit cooling units suck this air out again, cool it and feed it back over the raised floor into the cold aisle between the racks. This way warm and cold air do not mix, and unnecessary losses or higher cooling requirements are avoided.
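The PUE value mentioned above is defined as total facility power divided by IT power; 1.0 is the theoretical optimum, and everything above it goes to cooling, UPS losses, lighting and the like. A minimal sketch with assumed sample figures:

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# The closer to 1.0, the less energy is spent on non-IT overhead.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE ratio; total_facility_kw includes IT, cooling, UPS losses, lighting."""
    return total_facility_kw / it_kw

print(pue(800, 500))  # 1.6 -> 300 kW of overhead for 500 kW of IT load
```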

[Figures (top views): Air conditioning via raised floor without racks arranged for ventilation; via raised floor with racks arranged in cold/hot aisles; via raised floor with cold aisle containment; air conditioning system with water-cooled containment for hot aisles; with water-cooled containment for cold aisles; air conditioning with water-cooled rack (sealed system)]



[Figures: Cold aisle and hot aisle containment – legend: cold aisle (Kaltgang), hot aisle (Warmgang), cold air corridor (Kaltluft Korridor), data cables (Datenkabel)]


The server cabinets in a hot aisle containment, by contrast, are located in two rows opposite one another, with their back sides facing each other. Air flows through the computers from the outside to the inside via an air duct, so the waste heat is concentrated in the hot aisle between the rows of racks. The raised floor between them is sealed tight. The warm air is then fed directly into the closed-circuit cooling units. These use a heat exchanger to cool the warmed air down to the specified supply air temperature, and then blow it into the outer room. The difference in temperature between supply air and exhaust air can be monitored and controlled better here than in the cold aisle process. On the other hand, supply air temperature and flow velocity are easier to regulate in the cold aisle process because of the smaller volume. Which advantages and disadvantages prevail in the end must be examined in an overall consideration of the data center.

No matter what approach is selected for waste heat management, one must always make sure that air circulation is not obstructed or even entirely blocked when setting up horizontal cabling (in the raised floor or under the ceiling) and vertical cable routing (in the 19” cabinet when connecting to active components). In this way, cabling too can contribute to the energy efficiency of the data center.

3.5.3. Raised Floors and Dropped Ceilings

The raised floor, ideally 80 centimeters in height, also has a great influence in the area of hot and cold aisles. Among its other functions, the floor is used to house cabling. Since the floor also serves as the air duct, cables must not run across the direction of the air flow and thus block its path. One must therefore consider running the cabling in the ceiling and using the raised floor only for cooling, or at least restricting cabling to the area of the hot aisle.
Depending upon building conditions, possible solutions for data center cable routing include the raised floor, cable trays suspended from the ceiling, or a combination of the two. The supply of cold air to server cabinets must not be restricted when cables are routed through the raised floor. It is recommended that copper and fiber optic cables be laid in separate cable trays. Data and power supply cables must be separated in accordance with EN 50174-2. The cable trays at the bottom need to remain accessible when multiple cable routes are arranged on top of one another.

It is recommended that racks be connected with cabling along the aisles.

In this arrangement, the less voluminous power supply cables are routed under the cold aisle directly on the floor base, while the more voluminous data cables are routed under the hot aisle in cable trays mounted on the raised floor supports.

If the air-conditioning units in this arrangement are also placed in the extension of the aisles, corridors result under the raised floor that can route cold air to the server cabinets without significant obstacles. If there is no raised floor in the data center, or if the raised floor should remain clear for the cooling system, the cabling must be laid in cable trays suspended from the ceiling. One must also make sure that the distances between power and data cables prescribed in EN 50174-2 are observed. Arranging cable routes vertically on top of one another directly above the rows of cabinets is one possible solution. In this case, the cable routing must not obstruct the lighting provided for the server cabinets, cover security sensors, or obstruct any existing fire extinguishing systems. The use of metal cable routing systems for power cables and copper data cables improves EMC protection. Cable routing systems made of plastic, with arcs and cable inlets that maintain the bending radii, like R&M’s Raceway System, are ideal for fiber optic cabling.

Hot aisle Cold aisle

Perforated base plates

Data cables Data cables

Power supply

Power supply

Cold air corridor



A maximum filling level of 40% should be assumed when dimensioning the cable tray. The use of small-diameter cables can be important in this regard, since they significantly reduce the volume required in the cross-section of the cable routing system, as well as its weight. The AWG26 installation cable (Ø: 0.40 mm) from R&M is one example of this cable type; it complies with all defined cabling standards up to a maximum cable length of 55 m, and only takes up about 60% of the volume of a standard AWG22 cable (Ø: 0.64 mm).

Mixed cable routing systems are increasingly gaining in popularity, where basic data center conditions permit this method to be used. In this setup, data cables are routed in a cable routing system under the ceiling, while the power supply is routed in the raised floor. This ensures the necessary distance between data cabling and the power supply in an elegant way that does not require extensive planning, and cooling capacity is not impaired. All other requirements already mentioned must be fulfilled here as well.

3.5.4. Cable Runs and Routing

The cable routing system in the data center must fulfill the following demands:

• It must satisfy cabling requirements so no losses in performance occur

• It must not obstruct the cooling of active components

• It must fulfil requirements of electromagnetic compatibility (EMC)

• It must support maintenance, changes and upgrades

Depending upon building conditions, possible solutions for cable routing in the data center are the raised floor, cable trays suspended from the ceiling, or a combination of these two solutions, as already described in the preceding section. The cable routing in many “historically developed” data centers and server rooms can only be described as catastrophic. This is due to a lack of two things:

• In principle, the installation of a “structured cabling” system in the data center is a “modern” concept. For a long time, server connections were created using patch cords routed spontaneously in the raised floor between the servers and switches. An infrastructure of cable routing systems did not exist in the raised floor, and in many cases still does not, so “criss-cross cabling” is the result.

• In many cases, the patch cords that were laid were documented only poorly or not at all, so it was impossible to remove connecting cords that were no longer needed or defective.

The result is very often raised floors that are completely overfilled, even though cable routing is in principle only an additional function of the raised floor in the data center. In most data centers these raised floors are also required for server cabinet ventilation and cooling. An air flow in the raised floor with as few obstructions as possible is rising in importance, if only in view of the increasingly greater cooling capacities required for systems. Any kind of chaotic cabling interferes with this. Considering that cabling systems are constantly changing, it becomes very difficult for air conditioning technicians to dimension the air flow. This is a point in favor of cabling technologies like glass fibers or multi-core cables, which save on cable volume, but it also argues for using cable routing systems like trays or mesh cable trays. These systems ensure that cable routing is orderly, prevent the proliferation of cables, and protect the cables that are laid. Which system provides which advantages?


Mixed cable routing under ceiling and in hollow floor

Max. filling level 40%

Metal cable routing for copper cabling

Plastic cable routing for fiber optics

Metal cable routing for power supply

2.6 m min.
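The 40% maximum filling level shown above can be checked with a short calculation: the sum of the cable cross-sections must not exceed 40% of the tray cross-section. A minimal sketch, in which the cable count, outside diameter and tray dimensions are assumed example values:

```python
import math

MAX_FILL = 0.40  # recommended maximum filling level for a cable tray

def fill_level(cable_diameters_mm, tray_width_mm, tray_height_mm):
    """Ratio of total cable cross-section to tray cross-section."""
    cable_area = sum(math.pi * (d / 2.0) ** 2 for d in cable_diameters_mm)
    return cable_area / (tray_width_mm * tray_height_mm)

if __name__ == "__main__":
    # Hypothetical: 200 cables of 6.0 mm OD in a 300 mm x 60 mm tray
    level = fill_level([6.0] * 200, 300.0, 60.0)
    print(f"fill level: {level:.0%}, within limit: {level <= MAX_FILL}")
```

A calculation like this also makes the benefit of small-diameter cables immediately visible: reducing the outside diameter reduces the occupied cross-section quadratically.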



Tray

The advantage of a cable tray is that cables are better protected mechanically. According to the results of a test carried out by the laboratories of Labor, AEMC Mesure and CETIM, a solid tray does not provide essentially better electromagnetic protection than a mesh cable tray; both produce the same “Faraday cage” effect (source: www.cablofil.at). A key disadvantage of trays is that only trays which prevent cable damage at the cable entry and exit points with appropriate edge protection should be used. If an exit point is added later, it is very difficult to implement the edge protection, and the cables could be damaged. Providing an additional cover reduces the danger that “external cables”, such as power cables from post-construction cabling, are laid on data cables and impair their operation.

Mesh Cable Tray

A mesh cable tray does not provide the same mechanical protection as the solid cable tray described above. At higher cable densities, the support points on the grid bars may mechanically damage the cables at the bottom of the tray. This risk can be reduced considerably if a metal plate, which prevents point pressure on the cables, is placed on the base of the mesh cable tray. The advantage of the “open” mesh cable tray lies in the improved ability to route cables out of it. Using an additional cover is an exceptional case with mesh cable trays, but is possible with certain manufacturers.

Laying Patch Cords

If appropriate solutions exist for laying installation cables, the question remains: where should patch cords be put, if not in the raised floor?
The solution to this problem becomes clear by comparing the distribution elements found in EN 50173-1 and EN 50173-3: In principle, the Area Distributor (HC: horizontal cross connect) assumes the function of the Floor Distributor, and the Device Connection (EO: Equipment Outlet, in the Distribution Switchboard) assumes the function of the telecommunication outlet (accordingly in one room). This telecommunication outlet is located in a cabinet that is comparable in principle to an office room. No one would think of supplying another room by means of a long patch cord from an outlet in an office. Similarly, in the data center only one element (e.g. a server) should be supplied from a device connection in the same cabinet. If this principle is followed, there will be no patch cords, or at least significantly fewer, routed from one cabinet to another, sparing the raised floor. Of course, patching from one cabinet to another cannot be excluded completely. Nevertheless, this should be implemented outside of the raised floor. Three technical variants for this are worth mentioning:

• Implementing trays or mesh cable trays above cabinets. The advantages and disadvantages of both systems were already described above.

• The clearly more elegant alternative to metal trays or mesh cable trays are special cable routing tray systems, such as the Raceway System from R&M. This system, with its distinctive yellow color, is likewise mounted on top of the cabinets. It provides a wide variety of shaped pieces (e.g. T pieces, crosses, exit/entry pieces, etc.) which fit together in such a way as to ensure that cable bending radii are maintained, an important factor for fiber optic cables.

• If no option is available for installing additional cable routing systems above the row of cabinets, or more space in the cabinet is needed for active components, the “underground” variant is one possible solution. In this case patch panels are mounted in the raised floor in a box, e.g. the Raised Floor Solution from R&M, where up to 288 connections can be realized. A floor box is placed in front of each cabinet and the patch cords are then routed directly from it into the appropriate cabinet to the active components.

The same savings in rack space can be achieved with R&M’s Robust Cabinet (CMS), where patch panels are installed vertically at the sides. Here as well, patch cords remain within one rack.

These solutions were already shown in a cabling architecture context, in section 3.4.5 “Other Variants”. Clear cable patching makes IMAC (Installation, Moves, Adds, Changes) work in the data center significantly easier.



3.5.5. Basic Protection and Security

Ensuring data security – e.g. with backups and high-availability solutions – is not the only challenge that faces data center administrators. They must also make sure that the data center itself and the devices in it are protected from environmental influences, sabotage, theft, accidents, and similar incidents. Many of the following items were already mentioned in detail under the topic of “Security Aspects” in section 1.12. An overview of essential data center dangers and corresponding countermeasures is again provided here.

In general, security in a data center is always relative. It depends, for one, on the potential for danger, and also on any security measures already taken. A high level of security can be achieved even in an extremely dangerous environment by means of extremely strong safeguards. However, protecting a data center located in an earthquake area, a military IT installation under threat, or a site in a similar situation does have its price. It therefore makes sense for those responsible for IT in the data center to assess the potential for danger before beginning their security projects, and to relate it to the desired level of security. They can then work out the measures required for realizing this level of security.

Threat Scenarios

The first step in safeguarding a data center against dangers always consists of correctly assessing the actual threats. These are different for every installation. Installations in a military zone have requirements different from those of charitable organizations, and data centers in especially hot regions must place a higher emphasis on air conditioning their server rooms than IT companies in Scandinavia. The IT departments concerned should therefore always ask themselves the following questions:

• How often does the danger in question occur?
• What consequences will it have?
• Are these consequences justifiable economically?

But what elements make up the threats that typically endanger IT infrastructures and provide the basis for the questions listed above? Possible catastrophic losses must be listed first in this connection. Floods and hurricanes have been occurring with disproportionate frequency in central Europe in recent years, and if an IT department assumes that this trend will continue, it must absolutely make sure its data center is well secured against storms, lightning and high water. Losses from vandalism, theft and sabotage have reached a similar level of significance. High-performance building security systems are used so that IT infrastructures can be protected as well as possible. Finally, the area of technical disturbances also plays an important role in security.

Picture supplied by Lucerne town hall

Catastrophic Losses

Our definition of the term “catastrophe” is very liberal, and we assign everything to this area that is related to damages caused by fire, lightning and water, as well as other damage caused by storms. The quality of the data center building plays an especially big part in resisting storms. The roof should be robust enough so tiles do not fly off during the first hurricane and rain water is not able to penetrate unhindered; the cellar must be sealed and dry, and doors and windows should have the ability to withstand the storm, even at high wind forces.

At first glance, these points appear to be obvious, and also apply to all serious residential buildings – in fact, practically all buildings in central Europe fulfill these basic requirements. Nevertheless, it often makes perfect sense, for temporary offsite installations and when erecting IT facilities abroad, to take these basic requirements into consideration.



Fire and Lightning

Quite a few security measures that are not at all obvious can be taken when it comes to fire protection. An obvious first step that makes sense is to implement the required precautions in the rooms or buildings to be protected. This includes smoke and heat detectors, flame sensors, emergency buttons as well as surveillance cameras that sound alarms when fire erupts, or ensure that security officers become aware of this situation faster.

In addition, one must ensure that fire department reaction time is as short as possible. Companies with a fire department on site have a clear advantage here – but it can also be of advantage to carry out regular fire drills that include the local fire department.

As lightning is one of the most common causes of fires, a lightning rod is mandatory for every data center building. In addition, the responsible employees should provide all IT components with surge protection, to avoid the damage that can occur even from distant lightning strikes. Structural measures can also help: When constructing a data center, some architects go so far as to use concrete that contains metal, so that a Faraday cage is produced. This ensures that the IT infrastructure is well protected against lightning damage.

Other Causes of Fires

Lightning is not the only cause of fires. Other causes are also associated with electricity; furthermore, overheating and explosions play an important role in connection with fire. Among other causes, fire that is started by electricity can erupt when the current strength or voltage of a power connection is too high for the device that is running on it.

Defective cable insulation also represents a danger, since it can produce sparks that ignite flammable materials located nearby. The parties responsible should therefore make sure not only that the cable insulation is in order, but also that as few flammable materials as possible exist in the data center. This tip may sound excessive, but in practice piles of paper and packaging lying around, overflowing wastepaper baskets, dust bunnies and similar debris often make fires possible. Regular data center cleaning therefore fulfills not only a health need, but also a safety need. In this connection, IT employees should also make sure that smoking is prohibited in data center rooms, and that a sufficient number of fire extinguishers are available.

Other prevention methods can also be implemented. These include fire doors and especially secure cabling. With fire doors, it is often enough to implement measures that stop the spread of smoke, since smoke itself can have a devastating effect on IT environments. However, in environments at especially high risk of fire, installing “proper” fire doors that are able to contain a fire for a certain amount of time will definitely provide additional benefits. With regard to cabling, flame-retardant cables can be laid, and so-called fire barriers can be installed that secure the cable environment from fires. These ensure that cable fires do not get oxygen, and also prevent the fire from quickly spreading to other areas.

Fire Fighting

Once a fire occurs, the data center infrastructure should already have the means to fight it actively. Sprinkler systems are especially important in this area. They are relatively inexpensive and easily maintained, but have the problem that the water can cause tremendous damage to IT components. Apart from that, sprinkler systems provide only a few advantages when it comes to fighting concealed fires, like those in server cabinets or cable shafts. CO2 fire extinguishing systems represent a sensible alternative to sprinkler systems in many environments. These operate on the principle of smothering burning fires, and therefore do not require water. However, they also bring disadvantages: For one, they represent a great danger to any individuals who are still in the room, and for another, they have no cooling effect, so they are unable to contain damage caused by the generation of heat. A further fire-fighting method is the use of noble gases. Fire extinguishing systems based on noble gases (like argon) also reduce the oxygen content of the air and thus suffocate flames. However, they are less dangerous to humans than CO2, so individuals who are in the burning room are not in mortal danger. In addition, argon does not cause damage to IT products. The drawback: these systems are rather expensive.



Water Damage

Water damage in the data center results not only from the extinguishing water used in fighting fires, but also from pipe breaks, floods and building damage. A certain amount of water damage can therefore be prevented by avoiding the use of a sprinkler system.

Nevertheless, the parties responsible for IT in the data center should also consider devices that protect against water damage. First on the list are humidity sensors that monitor a room and sound an alarm when humidity appears – on the floor, for example. Surveillance cameras can also be used for this same purpose; they keep security officers informed at all times of the state of individual rooms, and thus shorten reaction time in case of damage.

In many cases, another sensible option is to place server cabinets and similarly delicate components in rooms in such a way that they do not suffer damage when small amounts of water enter. Ideally, the staff on watch will have shut off the main tap of the defective pipe before major flooding occurs.

Technical Malfunctions

Technical malfunctions have a great effect on the availability of the applications operated in the data center. Power failures rank among the most common technical malfunctions, and can be prevented, or their effects reduced, through the use of UPSs, emergency generators and redundant power supply systems. In many cases, electromagnetic fields can affect data transfer. This problem primarily occurs when power and data cables are laid together in one cable tray. However, other factors like mechanical stresses and high temperatures also have a great effect on the functioning of cable connections. Most difficulties in this area can therefore be easily avoided: It is enough to house power and data cables in physically separate locations, to use shielded data cables, to prevent direct pressure on the cables and to provide sufficient cooling. For this latter measure, the parties responsible must make sure that data center air conditioning systems do not just cool the devices that generate heat in server rooms, but also cool cable paths exposed to heat – for example, cables under the roof.

Surveillance Systems

Most of the points covered in the section on catastrophes, such as threats resulting from fire and water, only cause significant damage when they have the opportunity to affect IT infrastructures over a certain period of time. It is therefore especially important to constantly monitor server cabinets, computer rooms and entire buildings, so that reaction times are extremely short in the event of damage. Automatic surveillance systems provide great benefits in this area; they keep an eye on data center parameters like temperature, humidity, power supply, dew point and smoke development, and at the same time can automatically initiate countermeasures (like increasing fan speed when a temperature limit value is exceeded) or send warning messages. Ideally, systems like these do not operate from just one central point.
Instead, their sensors should be distributed over the entire infrastructure, and their results should be reported via IP to a console that is accessible to all IT employees. They then use the same cabling as the data center IT components to transfer their data, which makes it unnecessary to lay new cables. In addition, they should be arbitrarily scalable, so they can grow along with the requirements of the data center infrastructure. Some high-performance products have the ability to integrate IP-based microphones and video cameras, so they can transmit real-time images and audio recordings of the environment to be protected. Many of these products even offer motion detectors, door sensors and card readers, as well as the ability to control door locks remotely. If a company’s budget for a data center surveillance solution is not sufficient, the company can still consider purchasing and installing individual sensors and cameras. However, proceeding in this fashion only makes sense in smaller environments, since many sensors – for example, most fire sensors – cannot be managed centrally, and must therefore be installed at a location where an employee is able to hear at all times whether an alarm has gone off. With cameras, especially in consideration of current price trends, it always makes sense to use IP-based solutions to avoid having to use dedicated video cables.
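The threshold logic of such a surveillance system can be illustrated with a small sketch. The sensor names, limit values and actions below are purely hypothetical examples, not the interface of any real product:

```python
# Illustrative sketch of threshold-based monitoring: distributed sensors
# report readings, and exceeded limits trigger countermeasures or warnings.
# All names, limits and actions are hypothetical example values.

THRESHOLDS = {
    "temperature_c": 27.0,   # above this: raise fan speed
    "humidity_pct": 60.0,    # above this: send a warning
}

ACTIONS = {
    "temperature_c": "increase fan speed",
    "humidity_pct": "send humidity warning",
}

def evaluate(readings: dict) -> list:
    """Return the list of actions triggered by the given sensor readings."""
    return [ACTIONS[key] for key, limit in THRESHOLDS.items()
            if readings.get(key, 0.0) > limit]

if __name__ == "__main__":
    print(evaluate({"temperature_c": 29.5, "humidity_pct": 45.0}))
```

In a real installation, the readings would arrive over IP from distributed sensors, and the triggered actions would be forwarded to fan controllers or to the alerting console.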



Vandalism, Theft, Burglary and Sabotage

Security products which control access to the data center building play a primary role in preventing burglaries, sabotage and theft. These products help to ensure that “villains” do not have any access whatsoever to the data center infrastructure. Modern access control systems manage individual employee authorizations, usually through a central server, and make it possible for employees to identify themselves by means of card readers, keypads, iris scanners, fingerprints and similar means.

If data center management implements a well-thought-out system that only opens up the routes employees need to perform their daily work, it automatically minimizes the risk of misuse. However, this does not rule out professional evildoers from outside attempting to enter the data center complex by force. Access control systems should therefore, if possible, be combined with camera surveillance systems, security guards and alarm systems, since these go a long way toward keeping unauthorized individuals from entering the building at all.

As we saw in the preceding section, data center surveillance systems with door lock controls and door sensors, which trigger alarms automatically when room or cabinet doors are opened (for example, between 8:00 in the evening and 6:00 in the morning), supplement the products mentioned above and increase the overall level of security. “Vandalism sensors” measure vibrations and also help to record concrete data that document senseless destruction. The audio recordings already mentioned can also be of use in this area.

Conclusion

The possibilities for influencing the security of a data center are by no means limited to the planning phase. Many improvements can still be implemented at a later point in time. The implementation of noble gas fire extinguishing systems and high-performance systems for data center surveillance and access control are just two examples in this regard. It is therefore absolutely essential that those responsible for the data center continue, at regular intervals, to devote time to this topic and analyze the changes that have come to light in their environment, so they can then adapt their security concept to the modified requirements.

3.6. Active Components / Network Hardware

The purpose of this section is to give an overview of the active components in an ICT environment, as well as their functions. These include terminal devices like servers, PCs and printers, as well as network devices like switches, routers and firewalls, each of which also contains software. One can also differentiate between passive hardware, such as cabling, and active hardware, such as the devices listed above.

Active network components are electronic systems that are required to amplify, convert, identify, control and forward streams of data. Network devices are a prerequisite for computer workstations being able to communicate with one another, so they can send, receive and understand data. The network here consists of independent computers that are linked, but separate from one another, and which access common resources. In general, a distinction is made between LAN (local area network / private networks), WAN (wide area network / public networks), MAN (metropolitan area network / city-wide networks) and SAN (storage area network / storage networks).



3.6.1. Introduction to Active Components

Small teams that want to exchange files with one another through their workstations have needs that are different from those of large organizations, in which thousands of users must access specific databases. It is for that reason that a distinction is made between so-called client/server networks, in which central host computers (servers) are made available, and peer-to-peer networks, in which individual computers (clients) have equal rights to release and use resources.

• In general, a Client/Server network differentiates between two types of computers that make up the network: The server, as host, is a computer which provides resources and functions to individual users from a central location; the client, as customer, takes advantage of these services. The services provided by servers are quite diverse: They range from simple file servers which distribute files over the network or release disk space for other uses, to printer servers, mail servers and other communication servers, up to specialized services like database or application servers.

• As its name implies, a Peer-to-Peer Network consists principally of workstations with equal rights. Every user is able to release the resources from his/her own computer to other users in the network. This means that all computers in the network, to a certain extent, provide server services. In turn, this also means that data cannot be stored in one central location.

In practice, however, hybrids of these network types are encountered more frequently than pure client/server or absolute peer-to-peer networks. For example, one could imagine the following situation in a company:

Tasks like direct communication (e-mail), Internet access (through a proxy server or a router) or solutions for data backup are made available by means of central servers; by contrast, access to files within departments or to a colleague’s printer within an office is controlled through peer-to-peer processes that bypass servers. In a stricter sense, it is actually special software components, rather than specific computers, that are designated as “clients” or “servers”. This also results in differences in active network components for small and large organizations. The functions provided must satisfy different requirements, whether these concern security (downtime, access, monitoring), performance (modular expandability), connection technology (supported protocols and media) or other specific functions that are desired or required.

3.6.2. IT Infrastructure Basics (Server and Storage Systems)

Active components, if they are network-compatible, have LAN interfaces of various types. These are specified by the technology used for the transmission (transmission protocol, e.g. Ethernet), speed (100 Mbit/s, 1/10 Gbit/s), media support (MM/SM, copper/fiber) and interface for the physical connection (RJ45, LC, etc.). There are two types of latencies (delays) in the transmission of data for network-compatible devices:

• Transfer delay (in µs/ns, based upon medium and distance)

• Switching delay (in ms/µs, based upon device function, switching/routing)

One must therefore pay special attention to the latency of active components to ensure that the delay in data communication is as low as possible. This does not imply, however, that cabling quality has no effect on data transmission. On the contrary: factors like crosstalk, attenuation, return loss (RL), etc. play an important role in clean signal transmission. In addition, the intervals between signals are becoming shorter as a result of continually growing transmission rates, while signal levels are becoming smaller as a result of higher-order modulation schemes. Transmissions are therefore becoming more susceptible to interference, i.e. more sensitive to outside influences. Performance reserves that can stand up to future data rates are consequently in ever greater demand for cabling components. In the final analysis, the typical investment period for cabling extends to over 10 years, compared to 2 to 3 years for active components. Trying to save costs on data center cabling would therefore be the wrong investment approach, especially since its share of the total investment for a data center is very small (5 % to 7 %). In addition, having to replace cabling involves enormous expenditure and, in the case of switches for example, leads to a major interruption in the data center if redundancy does not exist. Decision makers must be aware that the cabling structure in the data center, along with all of its components, represents the basis of all forms of communication (e-mail, VoIP, etc.) – for a long period of time!
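To make the two delay types concrete, a small back-of-the-envelope calculation can help. This sketch assumes a typical signal propagation speed of roughly two-thirds the speed of light (about 5 ns per meter) in copper and fiber; the per-hop switching delay used below is an illustrative figure, not a vendor specification:

```python
# Rough latency estimate for a network path: propagation (transfer)
# delay plus per-hop switching delay, as discussed above.
# Assumption: ~5 ns per meter of cable (about 2/3 the speed of light).

NS_PER_METER = 5.0

def transfer_delay_ns(distance_m: float) -> float:
    """Propagation delay over the cable itself, in nanoseconds."""
    return distance_m * NS_PER_METER

def total_latency_us(distance_m: float, hops: int, per_hop_us: float) -> float:
    """Propagation delay plus a per-hop switching delay, in microseconds."""
    return transfer_delay_ns(distance_m) / 1000.0 + hops * per_hop_us

# Example: a 90 m run through two switches at an assumed 2 us each.
latency = total_latency_us(90, hops=2, per_hop_us=2.0)
```

The example illustrates the point made above: over LAN distances the switching delay of the active components, not the cable, dominates total latency.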


Page 80 of 156 © 08/2011 Reichle & De-Massari AG R&M Data Center Handbook V2.0

Server (Single Server)

This term denotes either a software component (program), in the context of the client/server model, or the hardware component (computer) on which this software runs. The term "host" is also used in technical terminology for the server hardware. Whether "server" means a host or a software component can therefore only be determined from context or background information.

Server as software:

A server is a program which provides a service. In the context of the client/server model, another program, the client, can use this service. Clients and servers can run as programs on different computers or on the same computer.

In general, the concept can be extended to mean a group of servers which provide a group of services. Examples are mail servers, (extended) web servers, application servers and database servers (also see section 1.2).

Server as hardware:

The term server for hardware is used…

• …as a term for a computer on which a server program or a group of server programs run and provide basic services, as already described above.

• …as a term for a computer whose hardware is adapted specifically to server applications, partly through specific performance features (e.g. higher I/O throughput, higher RAM, numerous CPUs, high reliability, but minimal graphics performance).

Server Farm

A server farm is a group of networked server hosts of the same kind that are connected into one logical system. It optimizes internal processes by distributing the load over the individual servers and speeds up computing processes by taking advantage of the computing power of multiple servers. A server farm uses appropriate software for the load distribution.

Virtual Server

If the performance provided by a single host is not enough to manage the tasks of a server, several hosts can be interconnected into one group, also called a computer cluster. This is done by installing a software component on all hosts which causes the cluster to appear as a single server to its clients. Which host is actually executing which part of a user's task remains hidden from that user, who is connected to the server through his/her client. This server is thus a distributed system. The reverse situation also exists, in which multiple software servers are installed on one high-performance host; in this case it remains hidden from users that the different services are in reality being handled by a single host. Examples of well-known providers of virtualization solutions include VMware, Citrix and Microsoft.

Rack Server

Rack servers combine high performance in a small amount of space. They can be employed very flexibly and are therefore the first choice for constructing IT infrastructures that are scalable and universal. In June 2010, TecChannel compiled the following list of the most popular rack servers from its extensive, detailed product database:
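The load distribution a server farm performs can be sketched in a few lines. The host names and the simple round-robin policy below are purely illustrative; real farms use more elaborate balancing software:

```python
# Minimal sketch of server-farm load distribution: requests are spread
# round-robin over a group of hosts that appears to clients as one
# logical server. Host names are illustrative.
from itertools import cycle

class ServerFarm:
    def __init__(self, hosts):
        self._hosts = cycle(hosts)   # endless rotation over the farm

    def dispatch(self, request):
        """Pick the next host in rotation and hand it the request."""
        host = next(self._hosts)
        return host, request

farm = ServerFarm(["web-01", "web-02", "web-03"])
assignments = [farm.dispatch(f"req-{i}")[0] for i in range(6)]
# Each host receives every third request.
```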

• HP ProLiant DL380

• IBM System x3950

• Dell PowerEdge R710

• HP ProLiant DL370 G6

• HP ProLiant DL785

• Dell PowerEdge 2970

• Dell PowerEdge R905

• Dell PowerEdge R805

• Dell PowerEdge R610

• HP ProLiant DL580

Inside view of an HP ProLiant DL380



As an illustration, the key performance features of the first three models are listed below:

                     HP ProLiant DL380            IBM System x3950    Dell PowerEdge R710
Construction         2U rack                      4U rack             2U rack
Main memory (max.)   144 GByte                    64 GByte            144 GByte
Processor            Intel Xeon 5500/5600 series  Intel Xeon 7400     Intel Xeon 5500/5600 series
Storage (max.)       16 TByte                     2.35 TByte          6 TByte
Network ports        4 x Gbit Ethernet            2 x Gbit Ethernet   2 x 10 Gbit Ethernet
Power supply         2 x 750 watts                2 x 1,440 watts     2 x 570 watts

Pizza Box

The term pizza box, when used in a server context, is a slang term for server housings in 19-inch technology with a single height unit (1U).

Blade Server

Blade systems (blade, blade server, blade center) are one of the most modern server designs and the fastest growing segment of the server market. However, they do not differ from traditional rack servers in terms of operation and the execution of applications, which makes it relatively easy to use blades for existing software systems. The most important selection criteria for blade servers are the type of application expected to run on the server and the expected workload. In terms of maintainability, provisioning and monitoring, blades on the whole deliver more today than their 19-inch predecessors, yet are economical when it comes to energy and cooling. A Blade Center (an IBM term), for example, provides the infrastructure required by the blades installed in it: in addition to the power supply, this includes optical drives, network switches, Fibre Channel switches (for the storage connection) and other components. The advantages of blades lie in their compact design, high power density, scalability and flexibility, a more straightforward cabling system with significantly lower cabling expenditure, and quick and easy maintenance. In addition, only a single keyboard-video-mouse controller (KVM) is required for the rack system. A flexible system management solution always pays off, especially in the area of server virtualization. Since multiple virtual servers are usually executed on one computer in this situation, such a server also requires multiple connections to the network; otherwise, one must resort to costly address conversion processes similar to NAT (Network Address Translation). In addition, separating networks increases the level of security.
Many manufacturers allow up to 24 network connections to be provided for one physical server for this purpose, without administrators having to change the existing network infrastructure. This simplifies integration of the blade system into the existing infrastructure. TecChannel created the following list of the most popular blade servers from its extensive, detailed product database:

• 1st place: IBM BladeCenter S

• 2nd place: Dell PowerEdge M610

• 3rd place: Fujitsu Primergy BX900

• 4th place: Fujitsu Primergy BX600 S3

• 5th place: Fujitsu Primergy BX400

• 6th place: IBM BladeCenter H

• 7th place: Dell PowerEdge M910

• 8th place: Dell PowerEdge M710

• 9th place: HP ProLiant BL460c

• 10th place: Dell PowerEdge M710HD

This list of manufacturers must also be extended by Oracle (Sun Microsystems), Transtec, Cisco with its UCS systems (Unified Computing System) and other popular manufacturers.

IBM BladeCenter S



The key performance features of the first three models are listed below:

                      IBM BladeCenter S                  Dell PowerEdge M610                      Fujitsu Primergy BX900
Construction          7U rack                            10U rack                                 10U rack
Server blade module   BladeCenter HS22                   PowerEdge M610                           Primergy BX920 S1
Front mounting slots  6 x 2-CPU plug-in unit and others  16 x half height or 8 x full height      18 x half height or 9 x full height
Processor             Intel Xeon 5500 series             Intel Xeon 5500 and 5600 series          Intel Xeon E5500 series
Network ports         2 x Gbit Ethernet with TCP/IP      2 x Gbit Ethernet (CMC); 1 x ARI (iKVM)  4 x Gbit Ethernet
                      Offload Engine (TOE)
Power supply          4 x 950 / 1,450 watts              3 x non-redundant or 6 x redundant,      6 x 1,165 watts
                                                         2,360 watts


The mainframe computer platform (also known as a large computer or host), once given up for dead, has been experiencing a second life. At this point, IBM, with its System z machines, is practically the only supplier of these monoliths. The mainframe falls under the category of servers as well. System i (formerly called AS/400, eServer iSeries or System i5) is a computer series from IBM. IBM's System i has a proprietary operating system called i5/OS and its own database called DB2, upon which a vast number of installations run commercial applications for managing typical company business processes, as server or client/server applications. These can typically be found in medium-sized companies. The System i also falls under the server category. Some of these systems can be installed in racks.

Storage Systems

Storage systems are systems for online data processing as well as for data storage, archiving and backup. Depending upon the requirements of the application and the required access times, storage systems either operate as primary components, such as mass storage systems in the form of hard disk storage or disk arrays, or as secondary storage systems such as jukeboxes and tape backup systems. Various transmission technologies are available for storage networks:

• DAS – Direct Attached Storage

• NAS – Network Attached Storage

• SAN – Storage Area Networks

• FC – Fibre Channel (details in section 3.8.3)

• FCoE – Fibre Channel over Ethernet (details in section 3.8.4)

Some of these storage architectures have already been mentioned in section 1.2, which describes the basic elements of a data center.

Storage Networks

NAS and SAN are the best-known approaches for storing data in company networks. New SAN technologies like Fibre Channel over Ethernet have been gaining in popularity, because the I/O consolidation that FCoE achieves by merging storage traffic with the Ethernet LAN is attractive for infrastructure reasons (see section 3.7.2). Nevertheless, all storage solutions have disadvantages as well as advantages, and these need to be weighed in order to implement future-proof storage solutions.



DAS – Direct Attached Storage

Direct Attached Storage (DAS), or Server Attached Storage, refers to hard disks in a separate housing that are connected to a single host. The usual interfaces for this implementation are SCSI (Small Computer System Interface) and SAS (Serial Attached SCSI). The parallel SCSI interface, with Ultra-320 SCSI as its final standard, was the forerunner of SAS. However, it had reached the physical limits of SCSI, since the propagation delays of individual bits on the parallel bus differed too much. The clock rate on the bus had to be limited so that the slowest and the fastest bit could still be evaluated at the bit sample time, which conflicted with the goal of continuously increasing bus performance. As a result, a new, serial interface was designed that offered greater performance reserves. Since the serial ATA (S-ATA) interface had already been introduced into desktop PCs a few years earlier, it made sense to keep SCSI's successor largely compatible with S-ATA, in order to reduce development and manufacturing costs through reuse. In principle, all block-oriented transmission protocols can be used for direct (point-to-point) connections.

NAS – Network Attached Storage

Network Attached Storage (NAS) describes an easy-to-manage file server. NAS is generally used to provide independent storage capacity in a computer network without great effort. NAS uses the existing Ethernet network with a TCP/IP-based protocol like NFS (Network File System) or CIFS (Common Internet File System), so computers connected to the network can access the data media. NAS devices often operate purely as file servers, although they generally provide many more functions than just making storage available over the network.

In contrast to Direct Attached Storage, a NAS is therefore always either an independent computer (host) or a virtual computer (Virtual Storage Appliance, VSA for short) with its own operating system, and is integrated into the network as such. Many systems also provide RAID functions to prevent data loss arising from defects. NAS systems can manage large volumes of data for company use, and high-performance hard disks and caches make extensive data volumes quickly accessible to users. Professional NAS solutions are well suited for consolidating file services in companies: they are high-performance, redundant and therefore fail-safe, and represent an alternative to traditional Windows/Linux/Unix file servers. Due to the hardware they use and their ease of administration, NAS solutions are significantly cheaper to implement than comparable SAN solutions; however, this comes at the expense of performance.

SAN – Storage Area Networks

A Storage Area Network (SAN) in a data processing context denotes a network used to connect hard disk subsystems and tape libraries to server systems. SANs are designed for serial, continuous, high-speed transfers of large volumes of data (up to 16 Gbit/s). They are currently based on implementations of the Fibre Channel standards for high-availability, high-performance installations. Servers are connected to the FC SAN using Host Bus Adapters (HBA). The adjacent graphic shows a typical network configuration in which the SAN is run as a separate network by means of Fibre Channel. With the continued development of FCoE (Fibre Channel over Ethernet), this network will merge with the Ethernet-based LAN in the future.

FC and the migration to FCoE is shown in sections 3.7.2 and 3.8.3 as well as 3.8.4.


[Figure: typical FC SAN topology – LAN aggregation & core switch, FC storage; fiber optic (FO) and copper links]



Storage Solutions

The demand for storage capacity has been growing steadily as a result of increasing volumes of data as well as requirements for data backup and archiving. Cost efficiency in company storage therefore remains a central theme for IT managers. Their goal is to achieve high storage efficiency by using existing means and hardware in combination with storage virtualization, cloud storage (Storage as a Service) and consolidation. In new storage acquisitions, the Unified Storage approach, a single system that supports all storage protocols, has been gaining popularity.

Racks that contain only hard disks are finding their way into more and more data centers and are being supplied preassembled by solution providers such as:

• NetApp
• EMC
• HDS Hitachi Data Systems
• HP
• IBM
• Oracle (Sun Microsystems)

Secondary storage solutions like jukeboxes and tape backup systems no longer play a part in current storage solutions. However, for all the storage technologies they use, IT managers must not ignore conventional backup or the data retention that is legally required (key word: compliance).

3.6.3. Network Infrastructure Basics (NICs, Switches, Routers & Firewalls)

Networks can be classified in different ways using different criteria. A basic distinction is made between private (LAN) and public (WAN) networks, which also use different transmission technologies. One typically encounters baseband networks in LANs (Ethernet, and in earlier times also Token Ring and FDDI) and broadband networks in WANs (ATM, xDSL, etc.). Routers, which possess a LAN interface as well as a WAN port, handle the LAN/WAN transition. Thanks to its continuing development and easy implementation, Ethernet is also pushing more and more into the technologies for public networks, particularly for providers who operate city-wide networks (MANs). The components of a network infrastructure can basically be divided into two groups:

• Passive network components, which provide the physical network structure, such as cables (copper and fiber optic), outlets, distribution panels, cabinets, etc.

• Active network components, which process and forward (amplify, convert, distribute, filter, etc.) data.

The following sections cover active network components in detail, to give an understanding of their function and areas of application.

NIC – Network Interface Card

A network interface card (NIC) is usually integrated ("onboard") in a network-compatible device and creates the physical connection to the network by means of an appropriate access method. Ethernet in all its forms (different media and speeds) is currently used almost exclusively as the access method, also known as the MAC protocol (Media Access Control). Each NIC has a unique hardware address, also called a MAC address, which switches require in order to forward Ethernet frames. The first 3 bytes of this 6-byte MAC address are assigned by the IEEE to the manufacturer, who uses the remaining 3 bytes as a serial number. Therefore, in addition to the configured IP address required for routing, every device integrated into an Ethernet LAN also possesses a MAC address, which is used by switches for forwarding.

Repeaters and Hubs

These network components are not used much today, since, for reasons of performance, current LAN networks are built to be fully switched. They were used as signal amplifiers (repeaters) and distributors (hubs) in shared Ethernets, in which network subscribers still had to share the overall bandwidth. Since these devices are rarely used today, we omit a detailed description of them.
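The MAC address layout described above for NICs (3-byte manufacturer OUI plus 3-byte serial number) can be illustrated with a short helper; the address used is an arbitrary example:

```python
# Split a MAC address into its IEEE-assigned manufacturer part (OUI,
# first 3 bytes) and the manufacturer's serial number (last 3 bytes).

def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC like '00:1A:2B:3C:4D:5E' into (OUI, serial)."""
    octets = mac.split(":")
    if len(octets) != 6:
        raise ValueError("expected six colon-separated octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
# oui == "00:1A:2B" (manufacturer), serial == "3C:4D:5E"
```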



Switches

A "standard" layer 2 switch divides a network up into physical subnets and thus increases network bandwidth (from shared to switched): each individual subscriber can now use the full bandwidth that previously had to be shared on a hub, provided the subscriber alone is connected to a switch port (segment). Segmenting is achieved by the switch remembering which destination MAC address can be reached at which port. To do this, the switch builds a Source Address Table (SAT) recording, for each physical port (switch connection), the NIC addresses of the terminal devices from which it has received frames. If a received destination address is not yet known, i.e. not yet present in the SAT, the switch forwards the frame to all ports, an operation known as flooding. When response frames come back from recipients, the switch notes their MAC addresses and the associated ports (entries in the table) and from then on sends data only there. Switches therefore learn the MAC addresses of connected devices automatically, which is why they do not have to be configured unless additional specific functions are required, of which there can be several. A switch operates on the Data Link Layer (layer 2, MAC layer) of the OSI model (see section 3.8.1) and works like a bridge; manufacturers therefore also use terms like bridging switch or switching bridge. Bridges were the actual forerunners of switches and generally have only two ports available for segmenting one LAN. A switch in this context is a multi-port bridge, and because of the interconnection of the corresponding ports, such a device could also be called a matrix switch. Different switches can be distinguished based on their performance and other factors, using the following features:

• Number of MAC addresses that can be stored (SAT table size)

• Method by which a received data packet is forwarded (switching method)

• Latency (delay) of the data packets that are forwarded

Cut-Through
The switch forwards the frame immediately after it reads the destination address.
Advantage: latency (the delay between receiving and forwarding) is extremely small.
Disadvantage: defective data packets are not identified and are forwarded to the recipient anyway.

Store-and-Forward
The switch receives the entire frame and saves it in a buffer, where the packet is checked and processed using different filters. Only after that is the packet forwarded to the destination port.
Advantage: defective data packets can be sorted out beforehand.
Disadvantage: storing and checking data packets causes a delay that depends on the size of the frame.

Combination of Cut-Through and Store-and-Forward
Many switches operate using both methods: cut-through is used as long as only a few defective frames come up; if faults become more frequent, the switch changes over to store-and-forward.

Fragment-Free
The switch receives the first 64 bytes of the Ethernet frame and forwards the data if this portion has no errors, the rationale being that most errors and collisions occur in the first 64 bytes. In spite of its effectiveness, this method is seldom used.
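The SAT learning behavior described above, including flooding of unknown destinations, can be sketched as a minimal model. Port numbers and MAC strings are illustrative, and a real switch would also age out stale entries:

```python
# Minimal model of a learning layer 2 switch: record source MAC ->
# ingress port in the SAT, forward known destinations to one port,
# and flood unknown destinations to all other ports.

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.ports = list(range(num_ports))
        self.sat: dict[str, int] = {}  # MAC address -> port

    def handle_frame(self, in_port: int, src: str, dst: str) -> list[int]:
        self.sat[src] = in_port            # learn where src is reachable
        if dst in self.sat:
            return [self.sat[dst]]         # forward to the known port only
        # unknown destination: flood to every port except the ingress port
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(4)
flooded = sw.handle_frame(0, src="aa", dst="bb")  # "bb" unknown -> flood
direct = sw.handle_frame(2, src="bb", dst="aa")   # "aa" learned -> port 0
```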

The expression layer 3 switch is somewhat misleading, since it describes a multi-functional device that combines router and switch; brouter was once also a term for this. In routing, the forwarding decision is made on the basis of OSI layer 3 information, i.e. an IP address. A layer 3 switch can therefore assign different domains (IP subnets) to individual ports and operate as a switch within these domains, while also controlling the routing between them. A wide variety of switch designs are available, from the smallest devices with 5 ports up to modular backbone switches providing hundreds of high-speed ports, along with a multitude of extra functions too numerous to list here. The different switch types in the data center (access, aggregation, core) and their primary functions were described in section 3.3. The functions and protocols required for redundancy in the data center are listed in section 3.8.6.
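The routing decision a layer 3 switch makes between IP subnets boils down to a longest-prefix match on the destination address. A minimal sketch using Python's standard ipaddress module, with an illustrative routing table:

```python
# Longest-prefix match: of all routes whose prefix contains the
# destination address, the most specific (longest) prefix wins.
import ipaddress

routes = {  # illustrative routing table: prefix -> next hop / interface
    ipaddress.ip_network("10.0.0.0/8"): "core-uplink",
    ipaddress.ip_network("10.1.0.0/16"): "vlan-servers",
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

hop = lookup("10.1.2.3")  # /16 is more specific than /8
```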



Router

A router connects a number of network segments, which may operate with different transmission protocols (LAN/WAN). It operates on layer 3, the Network Layer of the OSI reference model (see section 3.8.1).

As a network node between two or more different networks or subnets, a router has an interface and an IP address in each network, so it can communicate within each of these networks. When a data packet reaches the router, the router reads its destination IP address to determine the appropriate interface and the shortest path over which the packet will reach the destination network. In the private sector, one is acquainted with DSL routers or WLAN routers (wireless LAN). Many routers include an integrated firewall which protects the network from unauthorized access, one step towards increasing network security. Many manufacturers do not put high-speed routers (carrier-class routers, backbone routers or hardware routers) under a separate heading; they market them together with high-end switches (layer 3 and higher, enterprise class). This makes sense, as today's high-end switches often possess routing functionality as well.

Compared to switches, routers ensure better isolation of data traffic, since, for example, they do not forward broadcasts by default. However, routers generally slow down the data transfer process. Nevertheless, in branched networks, especially in WANs, they route data to the destination more effectively. On the other hand, routers are generally more expensive than switches. When considering a purchase, the requirements to be satisfied must therefore be analyzed.

Load Balancing

Load balancing is a common data center application which can be implemented in large switching/routing engines. The term generally refers to server load balancing (SLB), a method used in network technology to distribute load across multiple separate hosts in the network. Load balancers operate at different layers of the OSI model. Server load balancing comes into use wherever a great number of clients create a high density of requests and would thus overload a single server. Typical criteria for determining the need for SLB include the data rate, the number of clients and the request rate.

Firewall

A firewall is a software component that restricts access to the network on the basis of sender or destination address and the services used, up to OSI layer 7. The firewall monitors the data running through it and uses established rules to decide whether or not to let specific network packets through, thereby attempting to stop unauthorized access to the network. Otherwise, a security vulnerability in the network could serve as a basis for performing unauthorized actions on a host. A distinction is made, on the basis of where the firewall software is installed, between a personal firewall (also known as a desktop firewall) and an external firewall (also known as a network or hardware firewall).
In contrast to a personal firewall, the software for an external firewall does not run on the system to be protected, but on a separate device (appliance) which connects the networks or network segments to one another and restricts access between them by means of the firewall software. Other firewall functions include intrusion detection and prevention (IDP), which checks the data transfer for abnormalities, as well as content/URL filtering, virus checking and spam filtering. High-end devices apply these functions to data transfers of up to 30 Gbit/s.
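The rule-based decision a packet filter makes can be sketched as an ordered, first-match rule list. The addresses, ports and actions below are illustrative only; real firewalls track connection state and inspect far more header fields:

```python
# Minimal packet-filter sketch: rules are checked in order, the first
# matching rule decides, and an implicit default applies otherwise.
# None acts as a wildcard; the prefixes/ports are illustrative.

RULES = [
    ("10.0.",   25, "deny"),    # no SMTP out of the internal range
    ("10.0.", None, "allow"),   # everything else from inside
    (None,     443, "allow"),   # HTTPS from anywhere
]

def decide(src_ip: str, dst_port: int, default: str = "deny") -> str:
    for src_prefix, port, action in RULES:
        if src_prefix is not None and not src_ip.startswith(src_prefix):
            continue
        if port is not None and port != dst_port:
            continue
        return action               # first match wins
    return default

verdict = decide("10.0.3.7", 25)    # first rule matches
```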

The Cisco Nexus 7000 series is a modular, data center class switching system.

With the introduction of its SuperMassive E10000, Sonicwall has announced a high-end firewall series. This allows companies to protect entire data centers from intruders.



3.6.4. Connection Technologies / Interfaces (RJ45, SC, LC, MPO, GBICs/SFPs)

Active network components have different interfaces available, and different methods exist for linking up different devices:

• Permanently installed interfaces (the network technology in use must be known for the purchase decision)

• Flexible interfaces, so-called GBICs (Gigabit Interface Converters) / SFPs (Small Form-factor Pluggable)

Data centers support different transmission media (listed in section 3.9), depending on the network technology they use (transmission protocols are listed in section 3.8). These media can be classified into two groups:

• Copper cables with RJ45 plug type

• Fiber optic cables (multi-mode / single-mode) with SC / SC-RJ, LC / duplex plug types

GBICs / SFPs are available for both media types. Gigabit Ethernet can be implemented over copper cables without a problem for normal LAN distances. This becomes more difficult with 10 gigabit Ethernet, even though network interface cards with a copper interface for 10GbE are available; the IEEE 802.3an standard defines transmission at 10 Gbit/s via twisted-pair cabling over a maximum length of 100 m. In the backbone area, fiber optic cables are usually used for 10 gigabit Ethernet, with LC plugs in single-mode implementations for greater distances. So-called next-generation networks – 40/100 gigabit Ethernet networks with data rates of 40 Gbit/s and 100 Gbit/s respectively – must, by standard, be implemented using high-quality OM3 and OM4 fiber optic cables and MPO-based (Multi-fiber Push-On) parallel optic transceivers in combination with equally high-quality MPO/MTP® connection technology. 12-fiber MPO/MTP® plugs are used for the transfer of 40 gigabit Ethernet: the four outer fibers on one side send and the four outer fibers on the other side receive, while the middle channels remain unused. By contrast, the transfer of 100 Gbit/s requires 10 fibers each for transmission and for reception. This is realized either by means of the 10 middle channels of two 12-fiber MPOs/MTPs®, or with a single 24-fiber MPO/MTP® plug. Additional information on this follows in sections 3.10.1 and 3.10.2.

GBIC / SFP

The prevailing design for expansion slots is the Small Form-Factor Pluggable (SFP), also known as Mini-GBIC, SFF GBIC, GLC, "New GBIC" or "Next Generation GBIC"; it is usually just called Mini-GBIC for short. Mini-GBICs reach their physical limits beyond gigabit Ethernet, since they are only specified up to 5 Gbit/s. Higher speeds like 10 gigabit Ethernet require, in addition to SFP+, the somewhat larger XFP or XENPAK modules. XENPAK already has two successors, XPAK and X2.
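The 12-fiber MPO/MTP lane usage described above for 40 gigabit Ethernet can be sketched as a position-to-role mapping. The numbering below is illustrative; the authoritative assignment is given in the relevant cabling standard:

```python
# Sketch of 12-fiber MPO/MTP lane usage for 40 gigabit Ethernet:
# four outer fibers transmit, four outer fibers on the other side
# receive, and the four middle fibers stay unused.
# Position numbering is illustrative, not normative.

def mpo12_roles() -> dict[int, str]:
    roles = {}
    for pos in range(1, 13):
        if pos <= 4:
            roles[pos] = "Tx"       # four 10 Gbit/s transmit lanes
        elif pos >= 9:
            roles[pos] = "Rx"       # four 10 Gbit/s receive lanes
        else:
            roles[pos] = "unused"   # middle four fibers
    return roles

roles = mpo12_roles()
active = [p for p, r in roles.items() if r != "unused"]  # 8 live fibers
```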

The QSFP module (Quad Small Form-Factor Pluggable) is a transceiver module for 40 gigabit Ethernet and is meant to replace four SFP modules. The compact QSFP module has dimensions of 8.5 mm x 18.3 mm x 52.4 mm, and can be used for 40 gigabit Ethernet, Fibre Channel and InfiniBand. The idea for new designs arises from the need to house more connections in the same area.

• 1 gigabit Ethernet: Mini-GBIC, SFP

• 10 gigabit Ethernet: SFP+, XFP, XENPAK, XPAK, X2

• 40 gigabit Ethernet: QSFP, MPO

• 100 gigabit Ethernet: MPO

The official GBIC standard is governed by the SFF Committee (http://www.sffcommittee.org).

Cisco Nexus 7000 series 48-port gigabit Ethernet SFP module

Netgear module / 1000Base-T SFP / RJ-45 / GBIC

Mellanox ConnectX®-2 VPI adapter card, 1 x QSFP, QDR



A GBIC module is inserted into an electrical interface in order, for example, to convert it to an optical connection. The type of the signal to be transmitted can be adapted to specific transmission requirements by using GBICs. Individual GBICs, also known as transceivers, are available both as electrical and as optical interfaces made up of separate transmitting and receiving devices. A transceiver module can be replaced during normal operation (hot-swappable) and operates as a fiber module, typically at wavelengths of 850 nm, 1310 nm and 1550 nm. There are also third-party suppliers who provide transceivers, including models with an intelligent configurator for flexible use in any system. However, one must first check whether the switch/router manufacturer blocks such "open" modules before they can be used.

Example of an SFP switching solution from Hewlett Packard: HP E6200-24G-mGBIC yl switch

Key Features

• Distribution layer
• Layer 2 to 4 and intelligent edge feature set
• High performance, 10-GbE uplinks
• Low-cost mini-GBIC connectivity; ports: 24 open mini-GBIC (SFP) slots
• Supports a maximum of four 10-GbE ports with an optional module

Transceiver modules:

Product name | Product code | Performance description
HP X131 10G X2 SC ER Transceiver | J8438A | X2 form factor, 10-gigabit ER; up to 30 km on singlemode fiber (40 km on engineered links)
HP X130 CX4 Optical Media Converter | J8439A | Optical media converter extending CX4 (10G copper) over multimode fiber up to 300 m
HP X131 10G X2 SC SR Transceiver | J8436A | X2 form factor, 10-gigabit SR; up to 300 m on multimode fiber
HP X111 100M SFP LC FX Transceiver | J9054B | SFP 100Base-FX; 100 Mbit/s full duplex up to 2 km on multimode fiber
HP X131 10G X2 SC LR Transceiver | J8437A | X2 form factor, 10-gigabit LR; up to 10 km on singlemode fiber
HP X131 10G X2 SC LRM Transceiver | J9144A | X2 form factor, 10-gigabit LRM; up to 220 m on legacy multimode fiber
HP X112 100M SFP LC BX-D Transceiver | J9099B | SFP 100-Megabit BX (bi-directional) "downstream"; 100 Mbit/s full duplex up to 10 km on one strand of singlemode fiber
HP X112 100M SFP LC BX-U Transceiver | J9100B | SFP 100-Megabit BX (bi-directional) "upstream"; 100 Mbit/s full duplex up to 10 km on one strand of singlemode fiber
HP X121 1G SFP LC LH Transceiver | J4860C | SFP gigabit LH; full duplex up to 70 km on singlemode fiber
HP X121 1G SFP LC SX Transceiver | J4858C | SFP gigabit SX; full duplex up to 550 m on multimode fiber
HP X121 1G SFP LC LX Transceiver | J4859C | SFP gigabit LX; full duplex up to 10 km (singlemode) or 550 m (multimode)
HP X121 1G SFP RJ45 T Transceiver | J8177C | SFP gigabit copper; full duplex up to 100 m on Category 5 or better cable
HP X122 1G SFP LC BX-D Transceiver | J9142B | SFP gigabit BX (bi-directional) "downstream"; full duplex up to 10 km on one strand of singlemode fiber
HP X122 1G SFP LC BX-U Transceiver | J9143B | SFP gigabit BX (bi-directional) "upstream"; full duplex up to 10 km on one strand of singlemode fiber



3.6.5. Energy Requirements of Copper and Fiber Optic Interfaces

Acquisition costs for active network components with fiber optic connections are significantly higher than those for active network components with RJ45 connections for copper cables. However, their different energy requirements must not be ignored. If a copper solution is compared to a fiber optic solution on the basis of power consumption, a surprising picture results.

Energy requirement when using a 10 Gbit/s switch chassis:

* Average based on manufacturer specifications

A switch equipped with fiber optic ports needs only about half the energy required by a copper solution! If the total energy requirements are taken into consideration (see section 1.9), the following picture results:

If these values are now compared using different PUE factors, significant differences in operating costs result (see section 2.4.3). Operating costs exceed acquisition costs after only a few years. The proportionate data center costs (like maintenance, monitoring, room costs, etc.) have yet to be considered.

Power costs for switching hardware | Fiber optic switch | Copper switch
Power consumption (during operation) | 4.4 kW | 8.2 kW
Hours per year | 8,784 h (24 h x 30.5 days x 12 months) | 8,784 h
Power consumption per year | 38,650 kWh | 72,029 kWh
Power costs per kWh | 0.15 € | 0.15 €
Power costs per year | 5,797.50 € | 10,804.35 €

Energy costs by energy efficiency:

PUE factor = 3.0 | 17,392.50 € / year | 32,413.05 € / year
PUE factor = 2.2 | 12,754.50 € / year | 23,769.57 € / year
PUE factor = 1.6 | 9,276.00 € / year | 17,286.96 € / year

336 copper connections | Power
7 modules with 48 copper ports at 6 W* | 2,016 W
Total ventilation | 188 W
NMS module | 89 W
10 Gb fiber uplinks | 587 W
Total (with copper ports) | 2.88 kW

336 fiber optic connections | Power
7 modules with 48 FO ports at 2 W* | 672 W
Total ventilation | 188 W
NMS module | 89 W
10 Gb fiber uplinks | 587 W
Total (with fiber optic ports) | 1.54 kW

Power consumption – FO switch | 1.54 kW
DC/DC conversion | 0.28 kW
AC/DC conversion | 0.48 kW
Power distribution | 0.06 kW
Uninterrupted power supply | 0.22 kW
Cooling | 1.65 kW
Switchgear in energy distribution | 0.15 kW
Total power requirement | 4.4 kW

Power consumption – copper switch | 2.88 kW
DC/DC conversion | 0.52 kW
AC/DC conversion | 0.89 kW
Power distribution | 0.12 kW
Uninterrupted power supply | 0.40 kW
Cooling | 3.08 kW
Switchgear in energy distribution | 0.29 kW
Total power requirement | 8.2 kW
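The two totals are simply the sum of the listed components, and dividing total facility power by the power of the switch itself yields the efficiency factor implied by these tables; this "implied PUE" reading is our own cross-check, not a figure from the handbook:

```python
# Power per component in kW, as listed in the tables above
fo = {"switch": 1.54, "dc_dc": 0.28, "ac_dc": 0.48, "distribution": 0.06,
      "ups": 0.22, "cooling": 1.65, "switchgear": 0.15}
cu = {"switch": 2.88, "dc_dc": 0.52, "ac_dc": 0.89, "distribution": 0.12,
      "ups": 0.40, "cooling": 3.08, "switchgear": 0.29}

total_fo, total_cu = sum(fo.values()), sum(cu.values())
print(f"FO total:     {total_fo:.2f} kW (table rounds to 4.4 kW)")
print(f"Copper total: {total_cu:.2f} kW (table rounds to 8.2 kW)")

# Implied PUE = total facility power / power of the IT load itself
print(f"Implied PUE:  {total_fo / fo['switch']:.2f} (FO), "
      f"{total_cu / cu['switch']:.2f} (copper)")
```

Both configurations imply roughly the same overhead factor, which is consistent with the PUE-based comparison in the table above.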



3.6.6. Trends

Some trends which increase the demands on network hardware and on the data center as a whole are listed below.

• Data volumes as well as bandwidth demands are increasing (Context Aware Computing) as ICT is used everywhere and for everything, a result of the combination/convergence of WLAN technology and mobile terminal devices. The increasing use of live video over networks is contributing to this trend.

• Latency is tolerated less and less (real-time applications).

• The higher power densities of more efficient servers require data center changes.

• High ICT availability is becoming increasingly important in companies.

• Companies must professionalize their data centers, whether this be through the construction of new facilities, modernization or outsourcing.

• Cloud computing is resulting in higher data network loads.

• Virtualization-related themes are gaining greater importance.

• Archiving is required both for legislative and for risk-management purposes. The issue has been promoted for years but is not yet fully resolved. As companies take on these topics, the related area of storage and networks will gain urgency.

• Energy efficiency, sustainability and reducing costs will continue to drive the market. It is expected that these themes will extend to active components in data center infrastructures.

• Plug-and-play solutions will make an entrance, since the labor market does not provide the required personnel and/or work must be completed in ever shorter time frames. In addition, data centers are service providers within their companies and must deliver "immediately" if projects require them to do so. Oftentimes those responsible for the data center are not consulted during project planning.

• The companies within a value chain will continue to build up direct communication. Other communities will develop (such as the Proximity Services of the German stock exchange) and/or links between data centers will be created.

3.7. Virtualization

Traditionally, all applications run separately on individual servers; servers are loaded unevenly, and in most cases, for reasons of performance, more servers are in operation than are required. However, developments in operating systems allow more than one application to be provided per server. As a result, operating resources are loaded closer to capacity through this "virtualization" of services, and the actual number of physical servers is drastically reduced. A distinction is made between two different types of virtualization:

• Virtualization by means of virtualization software (e.g. VMware)

• Virtualization at the hardware level (e.g. AMD64 with Pacifica)

The virtualization of servers and storage systems continues to make great strides, as already mentioned in sections 1.8 and 1.10. The advantages of this development are obvious:

• Less server hardware is required.

• Energy requirements can be reduced.

• Resource management is simplified.

• The patching between components is clearer.

• Security management can be improved.

• Recovery is simplified.

• Application flexibility is increased.

• Acquisition and operating costs can be reduced.

• and much more.



The advance of virtualization can include the following effects on data center cabling:

• Higher-performance connections, so-called "fat pipes", are required because of increased network traffic (data volumes).

• More connections are required for each server and between switches because of aggregation (trunking).

• An increased number of additional redundant connections are created as a result of increased demands on data center failure safety.

3.7.1. Implementing Server / Storage / Client Virtualization

The primary goal of virtualization is to provide the user a layer of abstraction that isolates him or her from the actual hardware, i.e. computing power and disk space. A logical layer is implemented between users and resources so as to hide the physical hardware environment. In the process, every user is led to believe (as far as possible) that he or she is the sole user of a resource. Multiple heterogeneous hardware resources are combined into one homogeneous environment. It is generally the operating system's job to manage this in a way that is invisible and transparent to users.

The level of capacity utilization of all infrastructure components increases in data centers as a result of the increased use of virtual systems like blade servers and enterprise cloud storage. For example, where servers were previously loaded to only 15 to 25 percent, this value increases to 70 or 80 percent in a virtualized environment. As a result, systems consume significantly more power and in turn generate many times more waste heat. The adjacent graphic shows the development of server compaction.
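The utilization figures above translate directly into a consolidation ratio. A rough sketch of that arithmetic (the function name and the constant-workload assumption are ours; 20 % and 75 % are mid-points of the ranges quoted above):

```python
import math

def consolidated_servers(n_physical, util_before, util_after):
    """How many virtualization hosts carry the same aggregate workload.

    Assumes the total workload stays constant and is simply repacked
    onto fewer, better-utilized machines.
    """
    workload = n_physical * util_before      # total work in "server units"
    return math.ceil(workload / util_after)  # hosts needed at higher utilization

# 100 legacy servers at ~20 % load repacked onto hosts running at ~75 %
print(consolidated_servers(100, 0.20, 0.75))  # → 27
```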

Client Virtualization, or so-called desktop virtualization, is a result of the ongoing development of Server and Storage Virtualization. In this process, a complete PC desktop is virtualized in the data center instead of just an individual component or application. This allows multiple users to execute application programs on a remote computer simultaneously and independently of one another. The virtualization of workstations involves individually configured operating system instances that are provided for individual users through a host. Each user therefore works in his or her own virtual system environment that in principle behaves like a complete local computer. This differs from a terminal server, in which multiple users share resources in a specially configured operating system. The advantages of a virtual client lie in its individuality and in the ability to run the hosts at a central location. Resources are optimized through the common use of hardware.

Disadvantages arise as a result of operating systems which are provided redundantly (and the associated demand for resources), as well as the necessity of providing network communication to operate these systems.



Classic Structure

The adjacent graphic shows an example of a classic server and storage structure in a data center with separate networks. Each server is connected physically to two networks: the LAN for communication and the SAN for storage media. This leads to an inefficient use of the available storage capacities, since the storage of each application is handled separately and its reserved resources must be made available. Servers are typically networked together via copper cables in the LAN/access area of the data center. Glass fibers are the future-proof cable of choice in the aggregation and core areas. Fibre Channel is the typical application for data center storage.

SAN Structure

A network with SAN switches can greatly increase the efficiency of storage space in the network. Data can be brought together into fewer systems, which perform better and are better utilized. This behavior in the LAN, or more precisely the SAN, is like that of the Internet, where storage space exists in a "cloud" whose actual location is unimportant. Storage consolidation and storage virtualization are terms that often come up in this context. The advantage of Fibre Channel is that it optimizes the transfer of mass data, i.e. a block-oriented data transfer. This advantage over Ethernet has led to Fibre Channel being the most commonly used transmission protocol in SANs.


[Diagrams: classic structure and SAN structure, each with LAN aggregation & core switch and FC SAN switch; FO and copper links indicated]



3.7.2. Converged Networks, Effect on Cabling

Classic LANs and SANs use different transmission protocols, typically Ethernet for the LAN and Fibre Channel for the SAN. However, LANs as communication networks and SANs as storage networks are merging (converging) more and more. Current transmission protocols under continuous development support this standardization and are paving the way for a structure with FCoE, Fibre Channel over Ethernet. The 3-stage migration path per the Fibre Channel Industry Association (FCIA: www.fibrechannel.org) is presented below. It is based on hardware reusability and the life cycles of the individual components.

Stage 1

The introduction of a new switch, a so-called Converged Network Switch, allows existing hardware to be reused. In smaller data centers it is recommended that storage systems be connected directly via this converged switch. A dedicated SAN switch would be required in larger structures so as to avoid bottlenecks. Converged adapters are required for the server connection. The FCIA recommends SFP+ as its transceiver interface of choice (fiber optic & copper). Since a copper solution is possible only up to 10 meters using Twinax cables, fiber optic cables must replace copper ones in these "converging" networks.

Stage 2

The Converged Switch is upgraded with an FCoE uplink in a second migration stage and connected to FCoE in the backbone.

Stage 3

The final FCoE solution is connected directly to the core network. FC as an independent protocol has disappeared from the network.


[Diagrams: migration stages — LAN aggregation & core switch with FC storage; 10G converged network switch with FC SAN switch (SFP+: Twinax or FO); 10G LAN access switch with FC storage; final stage with FCoE storage]




3.7.3. Trends

More than half (54 percent) of the IT infrastructure components in the SME segment in Germany have already been moved into the "cloud". According to one study, this places Germany somewhere in the middle of this sector when compared with all of Europe. Number one is Russia with 79 percent, followed closely by Spain with 77 percent and France with 63 percent. The Dutch are currently still the most reluctant in this area, with 40 percent.

Over 1,600 decision makers throughout Europe from businesses with up to 250 employees were asked about their use of virtualization and cloud computing for a study by the market research institute Dynamic Markets, commissioned by VMware. In addition to 204 mid-sized companies from Germany, companies from France, Great Britain, Italy, the Netherlands, Poland, Russia and Spain also participated in the survey, carried out from the end of January to the middle of February 2011.

The cloud service used most often by mid-sized companies in Germany is memory outsourcing (68 percent). Applications used in the cloud include e-mail services (55 percent) and office applications for processing files, tables and spreadsheets (53 percent). According to the study, the company management in over a third of the companies surveyed supports the cloud strategy. This is a sign that German mid-sized companies do not judge cloud computing to be a purely IT-related topic.

The management of the companies surveyed see cloud computing's greatest advantages not only in lower costs for IT maintenance (42 percent), hardware equipment (38 percent) and power (31 percent), but also in improved productivity and cooperation of the IT personnel themselves. All in all, 76 percent of German SMEs stated they achieved lasting benefits through cloud computing. Accordingly, about a third of the companies surveyed are also planning to extend their cloud activities in the next twelve months.

Virtualization as a Prerequisite for Cloud Computing

A key technical prerequisite for providing IT services quickly and flexibly is virtualization. This was the opinion of 82 percent of the SMEs in Germany. 75 percent of the German mid-sized companies surveyed have already virtualized, which is greater than the average for all of Europe (73 percent). Russia is again a step ahead in this area with 96 percent. "The technological basis for setting up a cloud environment is a high level of virtualization, ideally 100 percent," according to Martin Drissner of the systems and consulting company Fritz & Macziol. "That's the only way you can guarantee that resources can be distributed and used flexibly." According to Drissner, the cloud is not a revolution but the next logical step in IT development. Cloud computing simplifies and speeds up processes and also lowers costs. It is recommended that a private cloud, i.e. a solution internal to the company, be set up as a first step into the world of cloud computing.

Reasons for Virtualization

Cost-related arguments for virtualization are moving more and more into the background. 
Where previously points like reduction in costs for hardware (35 percent), maintenance (36 percent) and power (28 percent) were often the main drivers for virtualization in the data center, criteria like higher application availability (31 percent), greater agility (30 percent) and better disaster recovery (28 percent) are becoming more and more important, according to the survey. In addition, the aspect of gaining more time and resources is becoming an increasingly higher priority for those surveyed (27 percent), showing a desire to devote more time to projects that are more innovative than just focusing attention on purely IT operations.



3.8. Transmission Protocols

In order to transfer information from a transmitter to one or more recipients, the data must be packed properly for transmission, in accordance with how it is transmitted. This is the job of transmission protocols in data communication. A connection-oriented protocol first establishes a connection, then sends the data, has receipt confirmed, and finally ends the connection when the transmission is finished. By contrast, a connectionless protocol entirely omits the steps of establishing and ending the connection and confirming the transmission, which is why such a transmission is less reliable but faster. Protocols therefore establish different rules. A protocol is an agreement on how a connection, communication and data transfer between two parties must be executed. Protocols can be implemented in hardware, software, or a combination of the two. Protocols are divided up into:

• Transport-oriented protocols (OSI layer 1–4) & application-oriented protocols (OSI layer 5–7)

• Routable and non-routable protocols (ability to forward through routers, IP = routable)

• Router protocols (path selection decision for routers, e.g. RIP, OSPF, BGP, IGRP, etc.)

A transmission protocol (also known as a network protocol) is a precise arrangement by which data are exchanged between computers or processors that are connected to one another via a network (distributed system). This arrangement consists of a set of rules and formats (syntax) which specify the communication behavior of the communicating instances in the computers (semantics). The exchange of messages requires an interaction of different protocols, which assume different tasks. The individual protocols are organized into layers so as to control the complexity associated with the process. In such an architecture, each protocol belongs to a specified layer and is responsible for carrying out specific tasks (for example, checking data for completeness – layer 2). Protocols at higher layers use the services of protocols at lower layers (e.g. layer 3 relies on the fact that all data arrived in full). The protocols structured in this way together form a protocol stack (today typically the TCP/IP protocol stack).

3.8.1. Implementation (OSI & TCP/IP, Protocols)

The OSI reference model was drafted starting in 1977 by the International Organization for Standardization as a basis for developing communication standards, and published in 1984 (ISO 7498). The goal of Open Systems Interconnection (OSI) is a communication system within heterogeneous networks, especially between different computer worlds, that is based on basic services supported by applications. These basic services include, for example, file transfer, the virtual terminal, remote access to files and the exchange of electronic mail. In addition to the actual application data, these and all other communication applications require additional structural and procedural information, which is specified in the OSI protocols.
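The difference between connection-oriented and connectionless transmission described above can be illustrated with the two transport protocols of the TCP/IP stack, TCP and UDP. A minimal sketch using Python's standard socket API (the loopback echo server is purely illustrative):

```python
import socket
import threading

HOST = "127.0.0.1"

# Connection-oriented (TCP): set up a listener first ...
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))              # port 0: let the OS pick a free port
PORT = srv.getsockname()[1]
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()             # connection established (handshake)
    with conn:
        conn.sendall(conn.recv(1024))  # confirmed, in-order byte stream

threading.Thread(target=echo_once, daemon=True).start()

# ... then connect -> send -> receive -> close (explicit teardown)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    reply = cli.recv(1024)
srv.close()
print(reply)

# Connectionless (UDP): a datagram is simply sent - no handshake,
# no confirmation, no teardown; faster, but delivery is not guaranteed.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"fire and forget", (HOST, PORT))
```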

[Diagram: the seven OSI layers (7 Application, 6 Presentation, 5 Session, 4 Transport, 3 Network, 2 Data Link, 1 Physical) with side labels: Application, Computer, Active components, Passive components]



Layer 1 – Physical Layer / Bit Transmission Layer

This layer is responsible for establishing and ending connections as well as for the protocol information pattern. It specifies, among other things, which voltage level represents the logical information "1" or "0".

Cables and plugs, electrical and optical signals can be assigned to this layer.

Layer 2 – Data Link Layer

The bitstream is examined for transmission errors in the Data Link Layer, and error-correcting functions are used if necessary. This layer is therefore also called the Security Layer. It is also responsible for flow control, in case data arrives faster than it can be processed. In cell transfers like ATM, cells are numbered so the recipient can put cells, which sometimes arrive out of order due to the use of different transmission paths, back into their original order. In LAN applications, this layer is subdivided into the sublayers Logical Link Control (LLC) and Media Access Control (MAC). The MAC layer assigns transmission rights. The best known transmission protocol defined by the IEEE is Ethernet (IEEE 802.3).

Network adapters as well as switches (layer 2) can be assigned to this layer.

Layer 3 – Network Layer

This layer is responsible for routing information that is transported over multiple heterogeneous networks. The model assumes that the data from the Data Link Layer (layer 2) is correct and free of errors. IP network addresses are read and evaluated, and the information is passed to the next stages (networks) through the use of routing tables. In this process, routing parameters like tariff rates, maximum bit transmission rates, network capacities, thresholds and information on service quality are reported to routers by means of appropriate configurations.

In addition to routers, layer 3 switches are also part of this layer.

Layer 4 – Transport Layer

This layer is responsible for controlling end-to-end communication. Its information is therefore not evaluated in the intermediate stations (switches or routers). If there is information in this layer indicating that a connection is unstable or has failed, this layer takes care of re-establishing the connection (handshake). In addition, this layer is sometimes responsible for address conversions.

In the meantime, devices designated as layer 4 switches are available. This is intended to stress the fact that certain protocol functions from this layer are integrated into these devices.

Layer 5 – Session Layer

The primary task of the Session Layer is to establish and terminate a session for the Application Layer. In the process, this layer also specifies whether the connection is full-duplex or half-duplex. This layer is also responsible for monitoring and synchronizing the data stream, as well as other tasks.










[Diagram: IEEE 802 structure above the network layer (Internet/IP) — IEEE 802.2 Logical Link Control and the Media Access Control sublayer with IEEE 802.1 MAC Bridging, above IEEE 802.3 Ethernet (10Base…, 100Base…, 1000Base…, 10GBase…) and IEEE 802.11 Wireless LAN (frequency hopping / direct sequence; 802.11a at 5 GHz, 802.11b/g at 2.4 GHz)]





Layer 6 – Presentation Layer

Different computer systems use different structures for data formatting. Layer 6 interprets the data and ensures the syntax is uniform and correct (character set, coding language). Here, the data format from the transmitter is converted into a format that is not specific to any terminal device. Cryptography is used in this layer for data security purposes.

Layer 7 – Application Layer

In this layer, the data that was transmitted is made available to the user, or rather to his or her programs. Standardization is hardest for this layer because of the multitude of applications that are available. These include well-known application protocols like HTTP, FTP, SMTP, DNS, DHCP, etc.

Order of events in a transmission

If a piece of information is being transmitted from one device to another, the transmitter begins with the information flow in layer 7 and forwards it down, layer by layer, right down to layer 1. In the process, each layer places a so-called header containing control information in front of the actual information to be transmitted. So, for example, the IP source and destination addresses as well as other information are added in layer 3, and routers use this information for path selection. An "FCS" field (checksum field) is added in layer 2 for a simple error check. By adding all this extra information, the total amount of information transmitted on the cable becomes far greater than the actual net information. However, the length of an Ethernet frame is limited: its minimum length is 64 bytes, and its maximum 1,518 bytes. The frame length was expanded by 4 bytes to 1,522 bytes for the purpose of tagging VLANs, as a result of the frame extension established in IEEE 802.3ac.

VLAN

Virtual networks or virtual LANs (VLANs) are a technological concept for implementing logical work groups within a network. A network of this type is implemented via LAN switching or virtual routing on the link layer or on the network layer. 
Virtual networks span a great number of switches, which for their part are connected to one another through a backbone.

Half- / Full-Duplex

In a telecommunications context, the terms full-duplex and half-duplex describe a transmission of information with respect to direction and time. Half-duplex generally means that only one transmission channel is available: data can be either sent or received, but never both at the same time. In full-duplex operation, two stations connected with one another can send and receive data simultaneously. Data streams are transmitted in both directions at the same speed. For gigabit Ethernet (1 Gbit/s) in full-duplex operation, for example, this translates into an aggregate transmission capacity of 2 Gbit/s.

OSI & TCP/IP

Toward the end of the 1960s, as the Cold War reached its high point, the United States Department of Defense (DoD) demanded a network technology that would be very secure against attacks. The network should be able to continue to operate even in the case of a nuclear war. Data transmission over telephone lines was not suitable for this purpose, since these were too vulnerable to attacks. For this reason, the United States Department of Defense commissioned the Advanced Research Projects Agency (ARPA) to develop a reliable network technology. ARPA was founded in 1957 as a reaction to the launch of Sputnik by the USSR, and had the task of developing technologies of use to the military. ARPA was later renamed the Defense Advanced Research Projects Agency (DARPA), since its interests primarily served military purposes. ARPA was not an organization that employed scientists and researchers, but one which handed out contracts to universities and research institutes. Over time, as ARPANET grew, it became clear that the protocols selected up to that point were no longer suitable for operating a larger network that also connected many (sub)networks to one another. 
For this reason, further research projects were initiated, which led to the development of the TCP/IP protocols, or the TCP/IP model, in 1974. TCP/IP (Transmission Control Protocol / Internet Protocol) was developed with the goal of connecting various networks of different types with one another for purposes of data transmission. In order to drive the integration of the TCP/IP protocols into ARPANET, (D)ARPA commissioned the company Bolt, Beranek & Newman (BBN) and the University of California at Berkeley to integrate TCP/IP into Berkeley UNIX. This also laid the foundation for the success of TCP/IP in the UNIX world.
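The layer-by-layer encapsulation described above under "Order of events in a transmission" can be sketched as follows; the header contents here are simplified placeholders, not real protocol formats:

```python
# Simplified illustration of OSI-style encapsulation: each layer prepends
# its header to the payload handed down from the layer above. Header
# contents are toy placeholders, not real frame formats.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"L4|" + app_data                        # layer 4: transport header
    packet = b"L3|src_ip,dst_ip|" + segment            # layer 3: addresses for routing
    frame = b"L2|src_mac,dst_mac|" + packet + b"|FCS"  # layer 2: MAC header + checksum
    # Ethernet bounds the frame: minimum 64 bytes (short frames are padded),
    # maximum 1,518 bytes (1,522 with a VLAN tag). Only the padding is
    # modeled here.
    return frame.ljust(64, b"\x00")

wire = encapsulate(b"GET /index.html")
print(len(wire))  # total on the wire is far larger than the 15-byte payload
```

As in the text, the gross amount of data on the cable grows with every layer, which is why the net throughput is always below the nominal line rate.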



Although the OSI model is recognized worldwide, TCP/IP represents the technically open standard on whose basis the Internet was developed. The TCP/IP reference model and the TCP/IP protocol stack enable the exchange of data between any two computers located anywhere in the world. The TCP/IP model is historically just as significant as the standards which formed the basis for successful developments in the telephone, electrical engineering, rail transportation, television and video industries. No general agreement exists on how TCP/IP should be described in a layer model. While the OSI model has an academic character, the TCP/IP model is closer to programmer reality and leans on the structure of existing protocols. However, both models – OSI as well as TCP/IP – have one thing in common: data are passed down the stack during transmission. When receiving data from the network, the path through the stack leads back to the top. A comparison of both models appears below.

The best known use of protocols takes place around the Internet, where they take care of:

• Loading websites (HTTP or HTTPS)

• Sending and receiving e-mails (SMTP & POP3/IMAP)

• File uploads and downloads (FTP, HTTP or HTTPS)

3.8.2. Ethernet IEEE 802.3

The IEEE (Institute of Electrical and Electronics Engineers) is a professional organization that has been active worldwide since 1884 and currently has more than 380,000 members from over 150 countries (2007). This largest technical organization, with headquarters in New York, is divided into numerous so-called societies that concentrate on the special fields of electrical and information technology. Project work is concentrated in approximately 300 country-based groups. Through publications like its journal IEEE Spectrum, the organization also contributes to providing interdisciplinary information on, and discussion of, the social consequences of new technologies.

IEEE 802 is an IEEE project which started in February 1980, hence the designation 802. This project concerns itself with standards in the area of local networks (LAN) and establishes network standards for layers 1 and 2 of the OSI model. However, IEEE 802 teams also give tips for sensibly integrating systems into one overall view (network management, internetworking, ISO interaction). Different teams were formed within the 802 project, and new teams also deal with new project aspects as needed. The oldest study group is the CSMA/CD group IEEE 802.3 for Ethernet. CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is a description of the original access method. The central topic for this study group is the discussion of high-speed protocols. The group is divided into various subgroups which concentrate their efforts on optical fibers in the backbone, inter-repeater links, layer management, and other topics.

OSI layers                | TCP/IP layers    | Protocol examples                       | Coupling elements
7 Application             | 4 Application    | HTTP, FTP, SMTP, POP, DNS, DHCP, Telnet | Gateway, Content Switch, Layer 4 to 7 Switch
6 Presentation            |                  |                                         |
5 Session                 |                  |                                         |
4 Transport               | 3 Transport      | TCP, UDP                                |
3 Network                 | 2 Internet       | IP, ICMP, IGMP                          | Router, Layer 3 Switch
2 Data Link               | 1 Network access | Ethernet, Token Ring, Token Bus, FDDI   | Bridge, Switch
1 Bit transfer (physical) |                  |                                         | Cabling, repeater, hub, media converter


R&M Data Center Handbook V2.0 © 08/2011 Reichle & De-Massari AG Page 99 of 156

The most important teams within the IEEE 802.3 study group include:

• IEEE 802.3ae with 10 Gbit/s Ethernet, since 2002

• IEEE 802.3af with DTE Power via MDI, since 2003 (PoE, Power over Ethernet)

• IEEE 802.3an with 10GBase-T, since 2006

• IEEE 802.3aq with 10GBase-LRM, since 2006

• IEEE 802.3at with DTE Power Enhancements, since 2009 (PoE enhancements)

• IEEE 802.3ba with 40 Gbit/s and 100 Gbit/s Ethernet, since 2010

• IEEE 802.3bg with 40 Gbit/s serial, in progress

The IEEE 802.3ba standard makes provisions, for 40GBASE-SR4 as well as 100GBASE-SR10, for maximum lengths of 100 meters over OM3 glass fibers, regardless of data rate. The OM4 glass fiber is the preferred choice for longer links (larger data centers, campus backbones, etc.) and applications with lower power budgets (e.g. device connections in data centers); it supports 40GBASE-SR4 and 100GBASE-SR10 applications up to a length of 150 meters. The 100-meter link length over OM3 supports – regardless of architecture and size – approximately 85 % of all channels in the data center. The 150-meter link length over OM4 fibers covers almost 100 % of the required range.

The common lengths for both data rates are made possible through parallel optics. In this process, the signals are transmitted over multiple 10 Gbit/s channels. The 40 Gbit/s solution uses four glass fibers in each of the two directions for this purpose, while the 100 Gbit/s solution requires ten fibers in each direction. MPO technology is used for the plug connection (12 fibers with 8 in use for 40 GbE, or 24 fibers with 20 in use for 100 GbE; see the migration concept in section 3.10.2).
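The fiber counts mentioned above follow directly from the lane arithmetic. A quick check (10 Gbit/s per lane, one fiber per lane and direction):

```python
LANE_RATE_GBIT = 10  # per-channel rate, the same as in 10 Gigabit Ethernet

def fibers_required(total_rate_gbit: int) -> int:
    """Parallel optics: one fiber per 10 Gbit/s lane, per direction."""
    lanes = total_rate_gbit // LANE_RATE_GBIT
    return 2 * lanes  # transmit + receive

print(fibers_required(40))   # 8 of the 12 MPO positions are used for 40 GbE
print(fibers_required(100))  # 20 of the 24 MPO positions are used for 100 GbE
```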

Since the data rate per channel is the same as in the 10 gigabit Ethernet (IEEE 802.3ae) standard, the inevitable question is why the maximum length for the optical link via OM3 was reduced from 300 to only 100 meters, and from 550 to 150 meters over OM4. The goal was to reduce link costs; in particular, transceiver (combined transmitter and receiver component) prices could be reduced. The effects of this cost reduction can be found in the relaxed technical specifications for transceivers, which make it possible to produce smaller, more cost-effective components. The greatest relaxation of these specifications comes in the greater tolerances for jitter. Jitter is a time-related signal fluctuation that occurs during bit transmission and is caused by factors like noise or intersymbol interference. In data center applications like 40GBASE-SR4 and 100GBASE-SR10, jitter begins in the output signal from the transmitter chip and accumulates further over the optical transmitter, the glass fiber, the optical receiver and the receiver chip, i.e. through the entire link. Since a large part of the jitter budget is now used up by electronic link components, overall jitter accumulation is only acceptable if the optical channel is virtually perfect and lossless. Lasers used in data centers are usually surface-emitting semiconductor lasers, so-called Vertical Cavity Surface Emitting Lasers (VCSEL, pronounced "vixel"). VCSELs are very attractive for networks in data centers because they are easy and cost-effective to produce. However, they do have a relatively large spectral width for lasers. And exactly the specifications for this spectral width were also relaxed during the transition from 10 to 40/100 GbE. The laser's maximum permitted spectral width was increased from 0.45 to 0.65 nm by IEEE 802.3ba, which leads to increased chromatic dispersion. Chromatic dispersion is the phenomenon of an optical pulse spreading during its transmission through the optical fiber.
A glass fiber has different refractive indices for different wavelengths. That is why some wavelengths spread faster than others. Since a pulse consists of multiple wavelengths, it will inevitably spread out to a certain degree and thus cause a certain dispersion.
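The effect of the relaxed spectral width can be estimated with the usual first-order formula: pulse broadening ≈ |D| · L · Δλ. The dispersion coefficient below is only an assumed order of magnitude for silica fiber near 850 nm, not a value from the standard:

```python
def pulse_broadening_ps(dispersion_ps_nm_km: float, length_km: float,
                        spectral_width_nm: float) -> float:
    """First-order chromatic dispersion broadening: dt = |D| * L * d_lambda."""
    return abs(dispersion_ps_nm_km) * length_km * spectral_width_nm

D = 100.0  # ps/(nm*km), assumed order of magnitude near 850 nm
old = pulse_broadening_ps(D, 0.1, 0.45)  # old 0.45 nm spectral width, 100 m link
new = pulse_broadening_ps(D, 0.1, 0.65)  # relaxed 0.65 nm spectral width
print(f"{old:.2f} ps -> {new:.2f} ps")   # 4.50 ps -> 6.50 ps over 100 m
```

Widening the permitted spectral width from 0.45 to 0.65 nm thus increases the dispersion-induced broadening by roughly 44 %, which eats into the link's timing budget.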

(Figure: a binary 10-Gbit/s signal, and the same 10-Gbit/s signal distorted through dispersion.)


All optical signals consist of a wavelength range. Though this spectrum may amount to only a fraction of a nanometer, a limited spectral range always exists. The spreading of the pulse (dispersion) in a fiber depends approximately on the square of the spectral width. This effect, in combination with the additional jitter, is the reason for the shorter maximum lengths of OM3 and OM4 fibers. Total system costs for optical cabling are significantly reduced as a result of the specifications being relaxed, but to the detriment of link lengths and permitted insertion losses. While maximum losses of 2.6 dB over OM3 were permitted in the 10GBASE-SR application, these dwindle down to only 1.9 dB, or 1.5 dB for OM4, in 40GBASE-SR4 and 100GBASE-SR10 applications (see tables further below). Planners, installers and users must be aware that selecting a provider like R&M, whose products guarantee minimal optical losses, is crucial for the performance of the next generation of fiber optic networks. The great success of Ethernet up to the present day can be traced back to the affordability, reliability, ease of use and scalability of the technology. For these reasons, most network traffic these days begins and ends its trip on Ethernet. Since the computing power of applications grows more slowly than data traffic as networks are aggregated, the development of 40 and 100 gigabit Ethernet came at just the right time to allow Ethernet to continue its triumphant march. These high speeds give the technology a perspective for the coming years in companies, in data centers and in a growing number of carrier networks.

Ethernet Applications for Copper Cables with Cat.5, Cat.6 & Cat.6A

Category & Class acc. ISO/IEC 11801 | Cat.5 - Class D       | Cat.6 - Class E       | Cat.6A - Class EA
Cable types                         | shielded & unshielded | shielded & unshielded | shielded & unshielded
Topology                            | PL + Channel*         | PL + Channel*         | PL + Channel*

IEEE 802.3ab - 1000BASE-T

Installation cable AWG | Wire type | Cat.5 PL / Channel | Cat.6 PL / Channel | Cat.6A PL / Channel
26                     | Solid     | 60 m / 70 m        | 55 m / 65 m        | 55 m / 65 m
26                     | Flexible  | 60 m / 70 m        | 55 m / 65 m        | 55 m / 65 m
24                     | Solid     | 90 m / 100 m       |                    |
23                     | Solid     |                    | 90 m / 100 m       | 90 m / 100 m
22                     | Solid     |                    | 90 m / 100 m       | 90 m / 100 m

IEEE 802.3an - 10GBASE-T

Installation cable AWG | Wire type | Cat.5 PL / Channel | Cat.6 PL / Channel | Cat.6A PL / Channel
26                     | Solid     |                    | 65 m°              | 55 m / 65 m
26                     | Flexible  |                    | 65 m°              | 55 m / 65 m
24                     | Solid     |                    |                    |
23                     | Solid     |                    | 100 m°             | 90 m / 100 m
22                     | Solid     |                    |                    | 90 m / 100 m

* Channel calculation based on 2 x 5 m patch cords (flexible) + permanent link; permanent link and channel lengths are reduced if the patch cords (flexible cable) exceed 10 m
° Cable tested up to 450 MHz

10GBase-T, 10 gigabit data transmission over copper cables, was standardized in the summer of 2006. It uses a 4-pair cable (transmission on all 4 pairs) and allows a segment length of 100 m. There is a "higher speed study group" for 40GBase-T and 100GBase-T; at this point, 40 Gbit/s over copper is possible only over 10 m via Twinax cables or 1 m via backplane. A 40GBase-T standard is expected, but one for 100GBase-T is rather doubtful.



Ethernet Applications for Fiber Optic Cables with OM1 & OM2

Fiber type acc. ISO/IEC 11801       | OM1 850nm | OM1 1300nm | OM2 850nm | OM2 1300nm
Overfilled modal bandwidth (MHz*km) | 200       | 500        | 500       | 500

Standard       | Application | Source | OM1             | OM2
ISO/IEC 8802-3 | 100BASE-FX  | LED    | 2 km / 11.0 dB  | 2 km / 11.0 dB
IEEE 802.3z    | 1000BASE-SX | LED    | 275 m / 2.6 dB  | 550 m / 3.56 dB
IEEE 802.3z    | 1000BASE-LX | LED    | 550 m / 2.35 dB | 550 m / 2.35 dB
IEEE 802.3ae   | 10GBASE-SR  | VCSEL  | 33 m / 2.5 dB   | 82 m / 2.3 dB
IEEE 802.3ae   | 10GBASE-LX4 | WDM    | 300 m / 2.0 dB  | 300 m / 2.0 dB
IEEE 802.3ae   | 10GBASE-LRM | OFL    | 220 m / 1.9 dB  | 220 m / 1.9 dB
IEEE 802.3ae   | 10GBASE-SW  | VCSEL  | 33 m / 2.5 dB   | 82 m / 2.3 dB

Values are "maximum length / channel insertion loss".

Ethernet Applications for Fiber Optic Cables with OM3, OM4 & OS2

Fiber type acc. ISO/IEC 11801              | OM3 850nm | OM3 1300nm | OM4 850nm | OM4 1300nm | OS2 1310nm | OS2 1550nm
Overfilled modal bandwidth (MHz*km)        | 1500      | 500        | 3500      | 500        |            |
Eff. laser launch modal bandwidth (MHz*km) | 2000      |            | 4700      |            | NA         | NA

Standard       | Application (y, z) | Source | OM3                   | OM4                   | OS2
ISO/IEC 8802-3 | 100BASE-FX         | LED    | 2 km / 11.0 dB        | 2 km / 11.0 dB        |
IEEE 802.3z    | 1000BASE-SX        | LED    | 550 m / 3.56 dB       | 550 m / 3.56 dB       |
IEEE 802.3z    | 1000BASE-LX        | LED    | 550 m / 2.35 dB       | 550 m / 2.35 dB       | 5 km / 4.57 dB
IEEE 802.3ae   | 10GBASE-SR         | VCSEL  | 300 m / 2.6 dB        | 550 m                 |
IEEE 802.3ae   | 10GBASE-LR         | Laser  |                       |                       | 10 km / 6.0 dB
IEEE 802.3ae   | 10GBASE-ER         | Laser  |                       |                       | 30 km / 11.0 dB; 40 km / 11.0 dB
IEEE 802.3ae   | 10GBASE-LX4 (k)    | WDM    | 300 m / 2.0 dB        | 300 m / 2.0 dB        | 10 km / 6.0 dB
IEEE 802.3ae   | 10GBASE-LRM        | OFL    | 220 m / 1.9 dB        | 220 m / 1.9 dB        |
IEEE 802.3ae   | 10GBASE-SW         | VCSEL  | 300 m / 2.6 dB        |                       |
IEEE 802.3ae   | 10GBASE-LW         | Laser  |                       |                       | 10 km / 6.0 dB
IEEE 802.3ae   | 10GBASE-EW         | Laser  |                       |                       | 30 km / 11.0 dB; 40 km / 11.0 dB
IEEE 802.3ba   | 40GBASE-LR4 (k)    | WDM    |                       |                       | 10 km / 6.7 dB (a)
IEEE 802.3ba   | 40GBASE-SR4 (o)    | VCSEL  | 100 m / 1.9 dB (b, c) | 150 m / 1.5 dB (b, d) |
IEEE 802.3ba   | 100GBASE-LR4 (k)   | WDM    |                       |                       | 10 km / 6.3 dB (a)
IEEE 802.3ba   | 100GBASE-ER4 (k)   | WDM    |                       |                       | 30 km (15); 40 km (18) (a)
IEEE 802.3ba   | 100GBASE-SR10 (o)  | VCSEL  | 100 m / 1.9 dB (b, c) | 150 m / 1.5 dB (b, d) |

Values are "maximum length / channel insertion loss".

a These channel insertion loss values include cable, connectors and splices.
b The channel insertion loss is calculated using the maximum distances specified in Table 86-2 and a cabled optical fiber attenuation of 3.5 dB/km at 850 nm, plus an allocation for connection and splice loss.
c 1.5 dB allocated for connection and splice loss.
d 1.0 dB allocated for connection and splice loss.
k Wavelength-division-multiplexed lane assignment
o Number of fiber pairs
y Wavelength: S = short (850 nm) / L = long (1300/1310 nm) / E = extra long (1550 nm)
z Encoding: X = 8B/10B data coding / R = 64B/66B data coding / W = 64B/66B with WIS (WAN Interface Sublayer)
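Footnotes b–d above describe how the 40/100GBASE-SR channel insertion loss budgets are derived; a sketch of that calculation:

```python
FIBER_ATTEN_DB_PER_KM = 3.5  # cabled attenuation at 850 nm (footnote b)

def channel_insertion_loss(length_m: float, connection_allowance_db: float) -> float:
    """Fiber attenuation over the link plus the connector/splice allowance."""
    return FIBER_ATTEN_DB_PER_KM * length_m / 1000 + connection_allowance_db

om3 = channel_insertion_loss(100, 1.5)  # footnote c: 1.5 dB for OM3 at 100 m
om4 = channel_insertion_loss(150, 1.0)  # footnote d: 1.0 dB for OM4 at 150 m
print(om3, om4)  # close to the 1.9 / 1.5 dB budgets in the table above
```

The results (about 1.85 dB and 1.53 dB) match the rounded 1.9 dB and 1.5 dB figures in the table, and they show how little of the budget remains for connectors and splices.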



Trend
Servers are already being equipped with a 10 gigabit Ethernet interface as "state of the art". Blade system manufacturers are planning to install 40 Gbit/s adapters. Switch manufacturers are likewise providing their equipment with 40 and 100 Gbit/s uplinks. The adjacent graphic shows the development of Ethernet technologies over time. (Source: Intel and Broadcom, 2007)

3.8.3. Fibre Channel (FC)
Fibre Channel was designed for the serial, continuous high-speed transmission of large volumes of data. Most storage area networks (SAN) today are based on the implementation of Fibre Channel standards. The data transmission rates that can be achieved with this technology reach 16 Gbit/s. The most common transmission media are copper cables within storage devices, and fiber optic cables for connecting storage systems to one another. Just as every network interface card in a conventional network (LAN) has a MAC address, each device in Fibre Channel has a WWNN (World Wide Node Name) as well as a WWPN (World Wide Port Name) for every port in the device. These are 64-bit values that uniquely identify each Fibre Channel device. Fibre Channel devices can have more than one port; in this case the device still has only one WWNN, but it has as many WWPNs as it has ports. The WWNN and WWPNs are generally very similar. In the higher layers of the OSI model, Fibre Channel serves as a connection medium for other protocols, e.g. the SCSI already mentioned or even IP. This has the advantage that only minor changes are required to drivers and software. Fibre Channel has a user data capacity of over 90 %, while Ethernet can fill only between 20 % and 60 % of the maximum possible transmission rate with a useful load.

Topologies
In general, there are two different types of Fibre Channel implementations: Switched Fabric, usually known simply as Fibre Channel or FC-SW for short, and Arbitrated Loop, or FC-AL. In Fibre Channel Switched Fabric, point-to-point connections are switched between terminal devices, while Fibre Channel Arbitrated Loop is a logical bus in which all terminal devices share the common data transmission rate.
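The 64-bit WWNN/WWPN identifiers described above are commonly written as eight colon-separated octets. A small sketch with hypothetical example values:

```python
def format_wwn(value: int) -> str:
    """Render a 64-bit World Wide Name as eight colon-separated octets."""
    if not 0 <= value < 2**64:
        raise ValueError("WWN must fit in 64 bits")
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(56, -8, -8))

# Hypothetical node name and a port name on the same HBA: as noted above,
# the WWPN typically differs from the WWNN only in a few bits.
wwnn = 0x200000E08B000001
wwpn = 0x210000E08B000001
print(format_wwn(wwnn))  # 20:00:00:e0:8b:00:00:01
print(format_wwn(wwpn))  # 21:00:00:e0:8b:00:00:01
```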

Apple Quad-Channel 4Gb Fibre Channel PCI Express Card host bus adapter - PCI Express x8 - 4 connections



Arbitrated Loop (FC-AL) is also known as Low Cost Fibre Channel, since it is a popular choice for entry into the world of storage area networks. FC-AL implementations are frequently found in small clusters in which multiple physical nodes directly access a common mass storage system. FC-AL makes it possible to operate up to 127 devices on one logical bus. All devices share the data transmission rate that is available there (133 Mbit/s to 8 Gbit/s, depending upon the technology in use). Cabling is implemented in a star shape over a Fibre Channel hub. However, it is also possible to connect devices one after the other, since many Fibre Channel devices have two inputs or outputs available. Implementing the cabling as a ring is not common.

Fibre Channel Switched Fabric (FC-SW) is the implementation of Fibre Channel which provides the highest performance and the greatest degree of failure safety. Switched Fabric is generally what is meant when the term Fibre Channel is used. The Fibre Channel switch or director is located at the center of the switched fabric. All other devices are connected with one another through this device, so it is possible to switch direct point-to-point connections between any two connected devices via the Fibre Channel switch. In order to increase failure safety even more, many installations are moving to a redundant dual fabric in their Fibre Channel implementations. Such a system operates two completely independent switched fabrics; each storage subsystem and each server is connected to each of the two fabrics using at least one HBA (Host Bus Adapter). In addition to the failure of specific data paths, the overall system can even cope with the failure of one entire fabric, since there is no longer a single point of failure. This capability plays an important part in the area of high availability, and therefore in the data center as well.

Fibre Channel Applications for Fiber Optic Cables with OM1 – OM4 & OS2

Source: thefoa.org

In summary, Fibre Channel-based storage solutions offer the following advantages:

• Very broad support is provided by hardware and software manufacturers.
• At this point, the technology has reached a high level of maturity.
• The system is very high-performance.
• High-availability solutions can be set up (redundancy).

Fiber type acc. ISO/IEC 11801              | OM1         | OM2         | OM3         | OM4         | OS2
Wavelength                                 | 850/1300 nm | 850/1300 nm | 850/1300 nm | 850/1300 nm | 1310/1550 nm
Overfilled modal bandwidth (MHz*km)        | 200/500     | 500/500     | 1500/500    | 3500/500    |
Eff. laser launch modal bandwidth (MHz*km) |             |             | 2000        | 4700        | NA

Application                       | OM1            | OM2            | OM3            | OM4            | OS2
1G FC 100-MX-SN-I (1062 Mbaud)    | 300 m / 3 dB   | 500 m / 3.9 dB | 860 m / 4.6 dB | 860 m / 4.6 dB |
1G FC 100-SM-LC-L                 |                |                |                |                | 10 km / 7.8 dB
2G FC 200-MX-SN-I (2125 Mbaud)    | 150 m / 2.1 dB | 300 m / 2.6 dB | 500 m / 3.3 dB | 500 m / 3.3 dB |
2G FC 200-SM-LC-L                 |                |                |                |                | 10 km / 7.8 dB
4G FC 400-MX-SN-I (4250 Mbaud)    | 70 m / 1.8 dB  | 150 m / 2.1 dB | 380 m / 2.9 dB | 400 m / 3.0 dB |
4G FC 400-SM-LC-M                 |                |                |                |                | 4 km / 4.8 dB
4G FC 400-SM-LC-L                 |                |                |                |                | 10 km
8G FC 800-M5-SN-I                 |                | 50 m / 1.7 dB  | 150 m / 2.0 dB | 190 m          |
8G FC 800-SM-LC-I                 |                |                |                |                | 1.4 km
8G FC 800-SM-LC-L                 |                |                |                |                | 10 km / 6.4 dB
10G FC 1200-MX-SN-I (10512 Mbaud) | 33 m / 2.4 dB  | 82 m / 2.2 dB  | 300 m / 2.6 dB | 300 m / 2.6 dB |
10G FC 1200-SM-LL-L               |                |                |                |                | 10 km / 6.0 dB
16G FC 1600-MX-SN (14025 Mbaud)   |                | 35 m / 1.6 dB  | 100 m / 1.9 dB | 125 m / 1.9 dB |
16G FC 1600-SM-LC-L               |                |                |                |                | 10 km / 6.4 dB
16G FC 1600-SM-LC-I               |                |                |                |                | 2 km / 2.6 dB

Values are "maximum length / channel insertion loss".



3.8.4. Fibre Channel over Ethernet (FCoE)
FCoE is a protocol for transmitting Fibre Channel frames in networks based on full-duplex Ethernet. The essential goal in introducing FCoE is to promote I/O consolidation based on Ethernet (IEEE 802.3), with an eye to reducing the physical complexity of network structures, especially in data centers. FCoE makes it possible to use one standard physical infrastructure for the transmission of both Fibre Channel and conventional Ethernet. The essential advantages of this technology include scalability and the high bandwidths of Ethernet-based networks, commonly 10 Gbit/s at the present time. On the other hand, the use of Ethernet for the transport of Fibre Channel frames also brings to bear the disadvantages of the classic Ethernet protocol, such as frame loss in overload situations. Therefore some improvements to the standard are necessary in order to ensure reliable Ethernet-based transmission. These enhancements are being driven forward with Data Center Bridging (see section 3.8.7). A further definite advantage can be seen in the virtualization strategy that currently prevails among many data center providers, since FCoE in practice also represents a form of virtualization technology based on physical media, and as such can partially be applied in host systems for virtualized servers. Consolidation strategies of this nature can therefore …

… reduce expenditures and costs for a physical infrastructure that consists of network elements and cables.

… reduce the number and total costs of the network interface cards required in terminal devices like servers.

… reduce operating costs (power supply and heat dissipation).

Converged 10 GbE
Converged 10 GbE is a standard for networks which merge 10 Gbit/s Ethernet and 10 Gbit/s Fibre Channel. Fibre Channel over Ethernet (FCoE) also falls under this convergence approach. The FC packets are encapsulated in the header of the Ethernet frame, which then allows the Converged Ethernet topology to be used. If switches are provided with FCoE support wherever possible (because of the different packet sizes), they are then transparent for FC and iSCSI storage and for the LAN.

A typical Fibre Channel data frame has a payload of 2,112 bytes (data), a header, and a CRC field (checksum). A conventional Ethernet frame has a maximum size of 1,518 bytes. In order to achieve good performance, one must avoid having to break an FCoE frame into two Ethernet frames and then reassemble them. One must therefore use a jumbo frame, or at least a "baby jumbo", which is not (yet) standardized but is offered by various manufacturers anyway. The 3-stage migration path from Fibre Channel to Fibre Channel over Ethernet per the Fibre Channel Industry Association (FCIA) was already shown in section 3.7.2. An essential requirement of the FCoE-supporting Ethernet environment is that it be "lossless", which means that no packets may be thrown away. Manufacturers implement "lossless" Ethernet by registering an imminent buffer overflow and then pausing transmission, since in general Ethernet frames are lost only in case of a buffer overflow.
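The frame-size arithmetic can be sketched as follows. The encapsulation overhead figures are assumptions for illustration (Ethernet header, VLAN tag, FCoE header, frame check sequence), not values from the FCoE standard:

```python
FC_PAYLOAD_MAX = 2112  # FC data field, from the text
FC_HEADER = 24         # FC frame header (assumed size)
FC_CRC = 4
# Assumed encapsulation overhead: Eth header + VLAN tag + FCoE header + Eth FCS
FCOE_OVERHEAD = 14 + 4 + 14 + 4

fc_frame = FC_PAYLOAD_MAX + FC_HEADER + FC_CRC  # full FC frame: 2140 bytes
fcoe_frame = fc_frame + FCOE_OVERHEAD           # on the Ethernet wire: 2176 bytes
STANDARD_MTU = 1500

print(fcoe_frame, fcoe_frame > STANDARD_MTU)  # 2176 True
```

Since a full FC frame clearly exceeds the standard 1500-byte Ethernet payload, an MTU of roughly 2.2 KB ("baby jumbo") is the minimum needed to carry it without fragmentation.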


(Figure: A Converged Network Adapter (CNA) in the server carries both data and FCoE over 10GBase-T to a Converged Network Switch, which splits the traffic into Ethernet data (1/10 Gbps) and FC storage (1/2/4/8 Gbps). A second diagram illustrates FC encapsulation: the Fibre Channel header and payload are prefixed with an FCoE header for transport in an Ethernet frame.)


3.8.5. iSCSI & InfiniBand
Another driver of the continued development of Ethernet is the connection of storage resources to the server landscape via iSCSI.

iSCSI (Internet Small Computer System Interface)
At the beginning, iSCSI supported speeds of 1 Gbit/s, which was no real competition for the Fibre Channel technology then generally used in SANs, since those storage networks allowed for connections of 4 and today even 16 Gbit/s. However, the introduction of 10 gigabit Ethernet (10 GbE) brought change in this area as well (see graphic). Network managers are now increasingly toying with the idea of using Ethernet not just for their LAN, but of integrating their SAN traffic over the same physical network. Such a change not only lowers the number of physical connections in the network, but also reduces general network complexity as well as the associated costs, since a separate, costly Fibre Channel network becomes unnecessary through the use of iSCSI over Ethernet as an alternative to FCoE. iSCSI is a method which enables the use of the SCSI protocol over TCP/IP. As in normal SCSI, there is a controller (initiator) which directs communication. Storage devices (hard disks, tape drives, optical drives, etc.) are called targets. Each iSCSI node possesses a name up to 255 bytes long as well as an alias, both independent of its IP address. In this way, a storage array can be found even if it is moved to another network subsegment.

The use of iSCSI enables access to the storage network through a virtual point-to-point connection without separate storage devices having to be installed. Existing network components (switches) can be used, since no special new hardware is required for the node connections, as is the case with Fibre Channel. Access to hard disks is on a block basis and is therefore also suitable for databases; the access is transparent as well, appearing at the application level as access to a local hard disk. Its great advantage over a classic network is its high level of security, since iSCSI attaches great importance to flawless authentication and the security of iSCSI packets, which are transported over the network encrypted. The performance possible with this technology lies definitely below that of a local SCSI system, due to the higher latencies in the network. However, with bandwidths currently starting at 1 Gbit/s (or 125 MB/s), a large volume of data can be stored.
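iSCSI node names commonly use the IQN format defined in RFC 3720 (iqn.&lt;year-month&gt;.&lt;reversed domain&gt;:&lt;identifier&gt;). A minimal sketch with hypothetical names:

```python
def make_iqn(domain: str, year_month: str, identifier: str) -> str:
    """Build an iSCSI Qualified Name: iqn.<yyyy-mm>.<reversed-domain>:<id>."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    name = f"iqn.{year_month}.{reversed_domain}:{identifier}"
    if len(name.encode()) > 255:  # iSCSI node names are limited to 255 bytes
        raise ValueError("iSCSI name exceeds 255 bytes")
    return name

# Hypothetical target name for a storage array:
target = make_iqn("example.com", "2011-08", "storage.disk1")
print(target)  # iqn.2011-08.com.example:storage.disk1
```

Because the name is independent of the IP address, the same target can be discovered again after the array moves to another subnet, exactly as described above.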

InfiniBand
Since InfiniBand is scalable and supports quality of service (QoS) as well as failover (redundancy), this technology was originally used for the connections between HPC servers. Since then, servers and storage systems have also been connected via InfiniBand to speed up the data transfer process. InfiniBand is designed as a switched I/O system and, via the I/O switch, connects separate output units, processor nodes and I/O platforms like mass storage devices at a high data rate. The connection is carried out over a switching fabric. The point-to-point connections operate in full-duplex mode. Data transmission in InfiniBand is packet-oriented, and data packets have a length of 4,096 bytes. In addition to its payload, every data packet has a header with addresses and error correction. Up to 64,000 addressable devices are supported.

(Figure: Overview of Ethernet development. Time scale with the introduction timeframes of the technologies.)



The InfiniBand concept distinguishes four network components: the Host Channel Adapter (HCA), the Target Channel Adapter (TCA), the switch and the router. Every component has one or more ports and can be connected to another component using a speed class (1x, 4x, 8x, 12x).

So, for example, an HCA can be connected with one or more switches, which in turn connect input/output devices to the fabric via a TCA. As a result, the Host Channel Adapter can communicate with one or more Target Channel Adapters via the switch, and multi-point connections can be set up through the switch. The InfiniBand router's method of operation is essentially comparable to that of a switch; however, the router can transfer data from local subnets to other subnets.
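The speed classes and the 8B/10B coding used by the original standard translate into usable data rates as follows (2.5 Gbit/s signaling per link):

```python
SDR_LANE_GBIT = 2.5      # signaling rate per link (1x), original standard rate
CODING_EFFICIENCY = 0.8  # 8B/10B: 8 data bits carried per 10 line bits

def usable_rate_mb_s(lanes: int) -> float:
    """Usable data rate in MB/s for a bundled InfiniBand connection."""
    data_gbit = SDR_LANE_GBIT * lanes * CODING_EFFICIENCY
    return data_gbit * 1000 / 8  # Gbit/s -> MB/s

for lanes in (1, 4, 12):
    print(f"{lanes:2d}x: {usable_rate_mb_s(lanes):.0f} MB/s")
# 1x: 250 MB/s, 4x: 1000 MB/s (1 GB/s), 12x: 3000 MB/s (3 GB/s)
```

This reproduces the figures cited in this section: 250 MB/s per 1x link, 1 GB/s usable at 10 Gbit/s (4x), and 3 GB/s at 30 Gbit/s (12x).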

InfiniBand is normally transmitted over copper cables, like those also used for 10 gigabit Ethernet. Transmission paths of up to 15 meters are possible. If longer paths need to be bridged, fiber optic media converters can be used, which convert the InfiniBand channels onto individual fiber pairs. Optical ribbon cables with MPO plugs are used for this purpose, in other words the same plug type as in 40/100 gigabit Ethernet. In the early years, the transmission rate was 2.5 Gbit/s, which with 8B/10B coding resulted in a usable transmission rate of 250 MB/s per link in both directions. In addition to this standard data rate (1x), InfiniBand defines bundling of 4, 8 and 12 links for higher transmission rates. The resulting transmission speeds lie at 10 Gbit/s (a 1 GB/s usable data rate) and 30 Gbit/s (3 GB/s). The range for a connection with glass fibers (single mode) can come to 10 kilometers. Newer InfiniBand developments have a ten times higher data rate of 25 Gbit/s per link and reach transmission rates at levels 1x, 4x, 8x and 12x of between 25 Gbit/s and 300 Gbit/s.

3.8.6. Protocols for Redundant Paths
If Fibre Channel over Ethernet is integrated into the data center, or voice and video into the LAN, then chances are good that the network will have to undergo a redesign. This is because current network designs build on the provision of substantial overcapacities. The aggregation/distribution and core layers are often laid out as a tree structure in a multiply redundant manner. In this case, the redundancy is only used in case of a fault. As a result, large bandwidth resources are squandered and the available redundant paths are used only insufficiently or inefficiently, if at all. However, short paths, and thus lower delay times, are the top priority for new real-time applications in networks. So-called "shortest path bridging" will implement far-reaching changes in this area. The most important protocols developed for network redundancy are listed below.
Each of these protocols has its own specific characteristics.

Spanning Tree
The best-known and oldest protocol that prevents "loops" in star-structured Ethernet networks and therefore allows redundant paths is the Spanning Tree Protocol (STP). The method was standardized in IEEE 802.1D and came into use in the world of layer 2 switching. The STP method works as follows:

• If redundant network paths exist, only one path is active. The others are passive – or actually they are “turned off” (for loop prevention).

• The switches determine independently which paths are active and which are passive (negotiation). Manual configuration is not required.

• If an active network path fails, a recalculation is performed for all network paths and the required redundant connection is built.
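The result of this negotiation can be illustrated with a toy computation. Real STP exchanges BPDUs and weighs port costs; the sketch below only reproduces the outcome, a loop-free active topology with the remaining redundant links blocked (bridge IDs and links are hypothetical):

```python
from collections import deque

def spanning_tree(links, bridges):
    """Toy spanning tree: BFS from the elected root (lowest bridge ID).

    Returns (active, blocked) link sets; blocked links stay physically
    connected but carry no traffic, preventing loops."""
    root = min(bridges)  # the lowest bridge ID wins the root election
    active, visited, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for a, b in links:
            if node in (a, b):
                other = b if node == a else a
                if other not in visited:
                    visited.add(other)
                    active.add((a, b))
                    queue.append(other)
    return active, set(links) - active

# Four switches in a redundant square topology:
links = [(1, 2), (2, 3), (3, 4), (4, 1)]
active, blocked = spanning_tree(links, {1, 2, 3, 4})
print(len(active), len(blocked))  # 3 1 -> one redundant link is turned off
```

If the blocked link's bandwidth is counted, half of one path's capacity sits idle, which is exactly the load-distribution disadvantage discussed next.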

(Figure: IB1X plug for 1x InfiniBand / IB12X plug for 12x InfiniBand. Source: Sierra Technologies)



Layer 2 switching with Spanning Tree has two essential disadvantages:

• Half of the network paths that are available are not used – there is therefore no load distribution.

• The calculation of network paths in case of a failure can take a full 30 seconds for a network that is large and very complex. The network functions in a very restricted manner during this time.

The Rapid Spanning Tree Protocol (RSTP) was designed as a solution for this second problem, and has also been standardized for quite some time. Its basic idea lies in not stopping communication completely when a network path fails, but continuing to work with the old configuration, in which a few connections do not work, until the recalculation is completed. It is true this solution does not resolve the problem immediately. However, in case of a more "minor" failure, such as a path failing at the "edge" of the network, the network does not immediately come to a total standstill, and downtime in this case amounts to only about 1 second. Since after more than 25 years the Spanning Tree method is reaching its limits, the IETF (Internet Engineering Task Force) wants to replace it with the TRILL protocol (Transparent Interconnection of Lots of Links). The more elegant method is layer 3 switching (IP address) which, strictly speaking, represents routing rather than classic switching. Routing protocols such as RIP and OSPF come into use in the layer 3 environment, which is why neither the switched-off network paths nor the relatively extensive recalculations after the failure of an active path apply here. Layer 3 switching requires extremely careful planning of IP address ranges, since a separate segment must be implemented for each switch. In practice, one frequently finds a combination which uses layer 3 switching in the core area and layer 2 switching in the aggregation/access or floor area. Planning a high-performance and fail-safe network within a building, building complex or campus is already a costly procedure. It is an even more complex process to construct a network that comprises multiple locations within a city (MAN, Metropolitan Area Network). In contrast to a WAN implementation, all locations in a MAN are connected at a very high speed. The core of a MAN is a ring, based on gigabit Ethernet for example.
The ring is formed through point-to-point connections between the switches at the separate locations. A suitable method must ensure that an alternate path is selected in case a connection between two locations fails. In general, this is realized through layer 3 switching. The primary location, which is generally also the server location, is equipped with redundant core switches which are all connected to the ring.

Link Aggregation
Link Aggregation is another IEEE standard (802.3ad, 802.1AX since 2008) and designates a method for bundling multiple physical LAN interfaces into one logical channel. This technology was originally used exclusively to increase the data throughput between two Ethernet switches. However, current implementations can also connect servers and other systems via link aggregation, e.g. in the data center. Since incompatibilities and technical differences exist when aggregating Ethernet interfaces, the following manufacturer-specific terms are used as synonyms:

• Bonding, in the Linux environment
• Etherchannel, for Cisco
• Load balancing, for Hewlett-Packard
• Trunking, for 3Com and Sun Microsystems
• Teaming, for Novell Netware
• Bundling, as the German term

Link Aggregation is only intended for Ethernet and assumes that all ports that are to be aggregated have identical speeds and support full-duplex. The possible number of aggregated ports differs based on manufacturer implementation. The method only makes sense when the fastest available interfaces are bundled: four times Fast Ethernet at 100 Mbit/s (full-duplex) corresponds to a maximum of 800 Mbit/s, whereas four times gigabit Ethernet gives 8 Gbit/s. Another advantage is increased failure safety. If one connection fails, the logical channel, and with it a transmission path, continues to exist, even if at a reduced data throughput.
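Implementations typically keep each traffic flow on one physical member link, commonly by hashing address fields, so frame ordering within a flow is preserved. A minimal sketch; the hash inputs and the hash function itself vary by manufacturer:

```python
def select_member(src_mac: str, dst_mac: str, member_count: int) -> int:
    """Pick the physical link for a flow by hashing its address pair.

    Keeping one flow on one link preserves frame ordering; real devices
    may also hash IP addresses and TCP/UDP ports."""
    key = f"{src_mac}-{dst_mac}"
    return sum(key.encode()) % member_count  # deterministic toy hash

# A 4-link bundle; hypothetical MAC addresses:
flow_a = select_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
flow_b = select_member("00:11:22:33:44:55", "66:77:88:99:aa:bc", 4)
print(flow_a, flow_b)  # each flow deterministically maps to one member link
```

This per-flow distribution is also why a single flow can never exceed the speed of one member link, even though the bundle's aggregate capacity is higher.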



Shortest Path Bridging
In Shortest Path Bridging (IEEE 802.1aq), the Shortest Path Tree (SPT) is calculated for each individual node using a link state protocol. The link state protocol is used to detect and distribute the shortest connection structures for all bridges/switches in the SPB area. Depending on the SPB variant, the MAC addresses of participating stations and information on the service affiliations of interfaces are distributed as well; this SPB method is called SPBM, which stands for SPB-MAC. The topology data helps each node determine the most economical connections from itself to all other participating nodes. That way each switch or router always has an overall view of all paths available between the nodes. In case of a change in topology, for example when a switch is added or a link deactivated or activated, all switches must re-learn the entire topology. This process may sometimes last several minutes and depends upon the number of switches and links participating in the switchover. Nevertheless, Shortest Path Bridging offers a number of advantages:

• It is possible to construct networks in the Ethernet that are truly meshed.

• Network loads are distributed equally over the entire network topology.

• Redundant connections are no longer deactivated automatically, but used to actively transport the data that is to be transmitted.

• The network design becomes flatter and performance between network nodes increases.

Dynamic negotiation must be mentioned as one disadvantage of Shortest Path Bridging. Transmission paths in this topology change very quickly, which makes traffic management and troubleshooting more difficult. The first products to support the new TRILL and SPB switch mechanisms are expected by the middle of 2011. It remains to be seen whether present switches can be upgraded to these standards. Network design will change drastically as a result of Shortest Path Bridging. One should therefore prepare intensively for future requirements and start to lay the groundwork for a modern LAN early on.

3.8.7. Data Center Bridging

The trend in the data center is moving toward convergent networks. Here, different data streams like TCP/IP, FCoE and iSCSI are transported over Ethernet. However, Ethernet is not especially suited for loss-free transmission. This is why Data Center Bridging (DCB) is being developed by the IEEE. Cisco uses the term “Data Center Ethernet” for this technology; IBM uses the name “Converged Enhanced Ethernet”. Since DCB requires specialized hardware, upgrading switches that have been in use up to this point is not an easy matter. For purposes of investment security, one must therefore make sure that DCB is supported when purchasing switches. Four different standards are behind Data Center Bridging, for the purpose of drastically increasing network controllability, reliability and responsiveness:

• Priority-based Flow Control (PFC, 802.1Qbb)

• Enhanced Transmission Selection (ETS, 802.1Qaz)

• DCB Exchange Protocol (DCBX, 802.1Qaz)

• Congestion Notification (CN, 802.1Qau)

All four elements are independent of one another. However, they only achieve their full effect when working together:

• PFC enhances the current Ethernet stop mechanism. The current mechanism stops the entire transmission, while PFC stops only a single channel. For this purpose the link is divided into eight independent channels, which can be paused and restarted individually.

• DCBX enables switches to exchange their configurations with one another.

• ETS is used to define different priority classes within a PFC channel and to process them in different ways.

• CN stands for a type of traffic management that restricts the flow of traffic (rate limiting). A notification service causes the source to lower its transmission speed (optional).

DCB allows multiple protocols with different requirements to be handled over layer 2 in a 10 Gigabit Ethernet infrastructure. This method can therefore also be used in FCoE environments. Up to this point, only three manufacturers are known to support DCB: Cisco, Brocade and Blade Network Technologies.
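As a rough illustration of the PFC idea described above (a toy model, not an implementation of 802.1Qbb): pausing one priority channel leaves the other seven usable, whereas a classic Ethernet PAUSE frame would stop the whole link.

```python
# Toy model of Priority-based Flow Control: eight independent priority
# channels on one link, each of which can be paused and resumed on its own.
class PfcLink:
    NUM_CHANNELS = 8  # 802.1Qbb operates on eight priorities

    def __init__(self):
        self.paused = [False] * self.NUM_CHANNELS

    def pause(self, priority: int) -> None:
        self.paused[priority] = True

    def resume(self, priority: int) -> None:
        self.paused[priority] = False

    def can_send(self, priority: int) -> bool:
        return not self.paused[priority]

link = PfcLink()
link.pause(3)                  # e.g. the congested storage traffic class
assert not link.can_send(3)    # only this channel stops...
assert all(link.can_send(p) for p in range(8) if p != 3)  # ...the rest flow on
```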



3.9 Transmission Media

Although the foundation for the standardization of structured communication cabling was laid in the 1990s and standards have been continuously developed to date, the data center and server room area was neglected during this entire period. The first ANSI (American National Standards Institute) standard for data centers, TIA-942, was published only in 2005, whereupon international and European standards followed. The result of this neglect becomes obvious when entering many data centers; one finds a passive IT infrastructure that was implemented without a solid, needs-oriented design and has become increasingly chaotic over the years. This chaos is further intensified – in spite of the introduction of VM solutions – by the increasing number of servers, which may have very different requirements for transmission technology or the type of LAN interface supported. The time has therefore come to rethink the technologies practiced up to this point as well as the alternative options available, both in the realization of new data centers and in the redesign of existing ones. The following section discusses these alternatives, and in the process attempts to describe methods that remain practically relevant and realistic.

Definition and Specification

Any sensible planning of technical measures must always be preceded by an analysis of requirements. The first step to that end is to recognize or identify the components that will be subject to these requirements. Obviously, cabling involves components which make it possible to connect a server to a LAN connection. In retrospect, one can say that the structured communication cabling approach in accordance with EN 50173 (or ISO/IEC 11801 or EIA/TIA 568) was successful. This is because this standardization approach ended up introducing a cabling infrastructure that can be scaled according to needs and permits a variety of quality grades.
It was only as a result of the specifications in these standards that stability and security were guaranteed to planners and users, which led to the long service life of current cabling systems. Put more simply, the secret to the success of the standards implemented in this area consists of two principles: the specification of cabling components, and the recommendation (not regulation!) for linking these components together in specific topology variants. A fundamental strength of the favored star-shaped topology is its scalability, since through the use of different materials every layer of the hierarchy can be scaled irrespective of the other layers. New supplementary standards that are based on this proven star structure were therefore developed for data center cabling systems. They take advantage of this strength and are adapted to these environmental requirements, as already described in section 3.1. The following improvements, among others, are achieved by using standards-compliant cabling systems:

• Distributors are laid out and positioned better, in a way that minimizes patch and device connection cable lengths and provides optimal redundancies in the cabling system.

• Work areas that are susceptible to faults are limited to the patching area in the network cabinet and in the server cabinet. If the cables used there are damaged, the cables in the raised floor do not need to be replaced as well.

• The selection of the modules in the copper patch panel as well as the network cabinet and server cabinet can be made independently of the usual requirement for RJ45 compatibility. A connection cable can be used to adapt the plug to any connecting jack for active network components or network interface cards. A similar level of freedom is also provided by glass fibers. In this case, connectors of the highest technical quality are to be used in patch panels.

• Inaccessible areas can be continuously documented and labeled.

Now that the reasons for implementing a structured cabling system in a data center have been determined as above, we will specify the requirements for materials, especially requirements with regard to data communication. A fierce debate currently prevails over the question of whether glass fiber or copper should be the first choice of media in a data center. In the opinion of the author, there is no solid reason for recommending one of these media types over the other in this area, nor is one strictly necessary in terms of a structured cabling system. However, it is certainly worthwhile to examine the advantages and disadvantages of both technologies in light of their use in data centers.



3.9.1 Coax and Twinax Cables

The importance of asymmetric cable media like coaxial and Twinax cables is increasingly being restricted to areas outside of the data center network infrastructure. This is because symmetric cables, so-called twisted pair cables, are prevailing in the copper area of the structured communication cabling system. In data centers, coax cables (also called coaxial cables) are therefore found predominantly in cabling for video surveillance systems and analog KVM switches. Copper cables are often used in mass storage systems, since the connection paths between individual Fibre Channel devices in these systems are generally not very long. A total of three different copper cable types (Video Coax, Miniature Coax and Shielded Twisted Pair) are defined for FC with FC-DB9 connectors. Paths of up to 25 meters between individual devices can be realized using these cables. In terms of copper transmission media, Fibre Channel differentiates between intra-cabinet and inter-cabinet cables. Intra-cabinet cables are suitable for short connections within a housing of up to 13 meters. These cables are also known as passive copper cables. Inter-cabinet cables are suitable for connections of up to 30 meters between two FC ports, with transmission rates of 1.0625 Gbit/s. These cables are also known as active copper cables. The glass fiber media supported by FC are listed in the table on page 103, along with their corresponding length restrictions. IEEE 802.3ak dates from 2004 and was the first Ethernet standard for 10 Gbit/s connections over copper cable. IEEE 802.3ak defines connections in accordance with 10GBase-CX4. The standard allows for transmission at a maximum rate of 10 Gbit/s by means of Twinax cabling, over a maximum length of 15 meters. However, this speed can only be guaranteed under optimal conditions (connectors, cables, etc.).
10GBase-CX4 uses a XAUI 4-lane interface for connection, a copper cabling system which is also used for CX4-compatible InfiniBand (see 3.8.5). In 40 and 100 Gigabit Ethernet (40GBase-CR4 and 100GBase-CR10), a maximum connection length of 10 meters is supported using 4-pair or 10-pair Twinax cables respectively (double Twinaxial IB4X or IB10X cables). The examples above are the best-known areas of application for asymmetric cables in data centers. As already mentioned, future-oriented copper cable systems for network infrastructures consist of symmetric cables, or twisted copper cables.

3.9.2 Twisted Copper Cables (Twisted Pair)

In designing a data cabling system in data centers that is geared for future use, there really is no alternative to using twisted pair, shielded cables and connectors. This is not a big problem for cabling systems in the German-speaking world, since this is already a de facto standard for most installations. The possibility also exists to choose your required bandwidth from the large selection of shielded cables and connectors (note: Category 6 or 6A are also available in an unshielded version).

Cabling components based on copper (cables and connecting elements) can therefore be differentiated as follows:

• Category 6 (specified up to a bandwidth of 250 MHz)

• Category 6A (EIA/TIA) / Category 6A (ISO/IEC) (specified up to a bandwidth of 500 MHz)

• Category 7 (specified up to a bandwidth of 600 MHz)

• Category 7A (specified up to a bandwidth of 1,000 MHz)

As already mentioned in section 3.1.2 on page 47, ISO/IEC 24764 references ISO/IEC 11801 in the area of performance; the current version of that standard was developed as follows:

• With its IEEE 802.3an standard from 2006, the IEEE not only published the transmission protocol for 10 gigabit Ethernet (10GBase-T), but also defined the minimum standards for channels to be able to transmit 10 gigabit up to 100 meters over twisted copper lines.

• EIA/TIA followed with its 568B.2-10 standard in 2008 which set higher minimum standards for channels, and also specified requirements for components: Category 6A was born.

• In the same year, ISO/IEC published Amendment 1 to ISO/IEC 11801, which formulated even stricter requirements for channels: channel Class EA was defined. The definition of components remained open.

• This gap was closed in 2010 with the publication of Amendment 2 to ISO/IEC 11801. This new standard defines components, i.e. Category 6A cables and plug connections – note the subscript A – in the process setting requirements which surpass those of EIA/TIA.
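The bandwidth figures above can be captured in a small lookup. This is a sketch using values from this section; the helper function name is our own:

```python
# Specified upper frequency per copper cabling category (values from
# this section of the handbook).
CATEGORY_BANDWIDTH_MHZ = {
    "Cat. 6": 250,
    "Cat. 6A": 500,   # both the EIA/TIA and the ISO/IEC variant
    "Cat. 7": 600,
    "Cat. 7A": 1000,
}

def supports_10gbase_t(category: str) -> bool:
    # IEEE 802.3an (10GBase-T) requires channel performance up to 500 MHz
    return CATEGORY_BANDWIDTH_MHZ[category] >= 500

assert supports_10gbase_t("Cat. 6A") and supports_10gbase_t("Cat. 7A")
assert not supports_10gbase_t("Cat. 6")
```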




Frequency     Application       EIA/TIA                                  ISO/IEC
                                Channel            Components            Channel            Components
1-250 MHz     1GBase-T          Cat. 6             Cat. 6                Class E            Cat. 6
1-500 MHz     10GBase-T         Cat. 6A            Cat. 6A               Class EA           Cat. 6A
              (IEEE 802.3an)    EIA/TIA 568B.2-10  EIA/TIA 568B.2-10     ISO/IEC 11801      ISO/IEC 11801
                                (2008)             (2008)                Amd. 1 (2008)      Amd. 2 (2010)
1-600 MHz     –                 –                  –                     Class F            Cat. 7
1-1000 MHz    –                 –                  –                     Class FA           Cat. 7A

Current cabling standards for typical data center requirements

Channel standards in accordance with EIA/TIA Cat. 6A show a moderate drop of 27 dB in the NEXT curve starting from 330 MHz, while a straight line is defined for the channel in accordance with ISO/IEC Class EA. A design in accordance with ISO/IEC therefore provides the best possible transmission performance with maximum availability in twisted pair copper cabling based on RJ45 technology. At 500 MHz, the NEXT performance required for Class EA must be 1.8 dB better than for a Cat. 6A channel. In practice, this higher demand leads to higher network reliability, and in turn to fewer transmission errors. This also lays the basis for a longer service and overall life of the cabling infrastructure.

Cable Selection

It is generally known that maximum bandwidth usually also allows a maximum data rate. If cable prices are equal in terms of length, preference should be given to the cable type of the highest quality (i.e. Category 7A), unless this results in clear drawbacks, such as a large outer diameter. This is because this cable category is theoretically more difficult to install and leads to larger cable routing systems, both of which can increase costs. However, experience with many tenders shows that providers very rarely demand an additional price for laying cables of a higher quality. In addition, the process for planning cable routing systems generally does not differ for Category 7/7A and Category 6A media. There is therefore no striking disadvantage to installing top quality cables in the data center. In terms of shielding, one must take into account that the use of increasingly higher modulation levels certainly reduces the required bandwidth, but in turn makes new protocols increasingly susceptible to external electromagnetic influences. The symbol spacing of 10 Gigabit Ethernet is approximately 100 times smaller than that of 1 Gigabit Ethernet.
Nevertheless, unshielded cabling systems can be used for 10GBase-T if appropriate basic and environmental conditions are satisfied. This is because UTP cabling systems require additional protective measures to support 10GBase-T, such as:

• Careful separation of data cables and power supply cables or other potential sources of interference (minimum distance of 30 cm between data and power cables)

• Use of a metallic cable routing system for data cables

• Prevention of the use of wireless communication devices in the vicinity of the cabling system

• Prevention of electrostatic discharges through protective measures common in electronics manufacturing, like conductive workstations, ESD wrist bands, anti-static floors, etc.

The effects and costs of additional protective measures as well as operational constraints must also be taken into consideration when deciding between shielded and unshielded cabling for 10GBase-T. The EMC (electromagnetic compatibility) behavior of shielded and unshielded cabling systems for 10GBase-T is described in detail in section 3.10.6, by means of an independent examination.
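To put the dB figures used in this section into linear terms: a decibel difference converts to a power ratio via 10^(dB/10), so the 1.8 dB stricter Class EA NEXT requirement at 500 MHz corresponds to roughly 1.5 times less coupled crosstalk power. A quick check (illustrative helper, not from the standards themselves):

```python
import math

def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference into a linear power ratio."""
    return 10 ** (db / 10)

# 1.8 dB better NEXT -> about 1.5x less coupled crosstalk power
assert math.isclose(db_to_power_ratio(1.8), 1.51, rel_tol=0.01)
# For comparison: 3 dB corresponds to a factor of about 2
assert math.isclose(db_to_power_ratio(3.0), 2.0, rel_tol=0.01)
```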

Comparing IEEE 802.3an, TIA and ISO/IEC: NEXT limit values for channels



Since old cable terms related to shielding were not standard, were inconsistent, and often led to confusion, a new naming system in the form XX/YZZ was introduced with ISO/IEC 11801 (2002).

• XX stands for overall shielding
  o U = no shielding (unshielded)
  o F = foil shield
  o S = braided shield
  o SF = braid and foil shield

• Y stands for pair shielding
  o U = no shielding (unshielded)
  o F = foil shield
  o S = braided shield

• ZZ always stands for TP = twisted pair

Twisted pair cables in the following shield variants are available on the market:

The introduction of thin, lightweight AWG26 cables (with a wire diameter of 0.405 mm, as compared to 0.644 mm for AWG22), together with enormous advances in shielding and connection technology, is making massive savings in cabling costs possible while at the same time increasing the performance and efficiency of passive infrastructures. Savings of up to 30 percent in cabling volume and weight are possible using these cables. However, some details come into play with regard to planning, product selection and installation in order to achieve sufficient attenuation reserves in the channel and permanent link, and in turn full operational reliability of the cabling system. AWG stands for American Wire Gauge and is a wire diameter coding system used to characterize stranded electrical lines as well as solid wires; it is primarily used in the field of electrical engineering. The following core diameters are used for communication cabling:

• AWG22 / Ø 0.644 mm
• AWG23 / Ø 0.573 mm
• AWG24 / Ø 0.511 mm
• AWG26 / Ø 0.405 mm

In addition to actually transmitting data, a structured cabling system with twisted copper cables allows frequently used terminal devices to be supplied with power from a remote location. Such a system based on the IEEE 802.3af and IEEE 802.3at standards, better known by the names “Power over Ethernet” (PoE) and “Power over Ethernet Plus” (PoEplus), can use data cables to supply terminal devices such as wireless access points, VoIP telephones and IP cameras with the electrical energy they require. You can find detailed information on this process in section 3.10.3.

Pre-Assembled Systems / Plug-and-Play Solutions

There are manufacturing concepts that provide decided advantages when a data center is expected to undergo dynamic changes. All installations described up to this point (copper and glass fiber cabling) require that an installation cable be laid and be provided with plugs or jacks at both ends. The termination process requires a high level of craftsmanship. Quick, spontaneous changes to the positions of connections in a data center are only possible under certain conditions, and are therefore expensive. Most cabling system manufacturers provide alternatives in the form of systems made up of the following individual components:

• Trunk cables consisting of multiple twisted-pair cables or multi-fiber cables, terminated at both ends with normal plug connectors or jacks, or even with a special manufacturer-specific connector.

Manufacturers also provide a measurement report for all connections, especially for glass fiber systems.

• Modular terminal blocks which can be mounted in the cabinet using 19” technology or even in special double floor systems. Trunk cables can be connected to this block at the input side using the special connector, and then completely normal RJ45 jacks or even glass fiber couplers are available on the output side.

At this point we should again mention that the availability of data center applications can be increased by using cabling systems pre-assembled at the factory. This is because these systems minimize the time spent by installation personnel (third-party personnel) in the security area of the data center during both initial installation of the cabling system as well as during any expansions or changes, which translates into an additional plus in terms of operational reliability.
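Returning to the AWG coding described above: the listed diameters follow the standard AWG formula d = 0.127 mm · 92^((36 − n)/39), which can be verified quickly (sketch; the function name is our own):

```python
def awg_diameter_mm(awg: int) -> float:
    """Standard AWG formula: d = 0.127 mm * 92^((36 - n) / 39)."""
    return 0.127 * 92 ** ((36 - awg) / 39)

# Cross-check against the core diameters listed in the handbook
for awg, expected_mm in [(22, 0.644), (23, 0.573), (24, 0.511), (26, 0.405)]:
    assert abs(awg_diameter_mm(awg) - expected_mm) < 0.001
```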


[Table: twisted pair shield variants available on the market, organized by overall shield (foil and/or wire mesh, partly optional) and core pair shield (foil) – e.g. F/UTP, SF/UTP, U/FTP, F/FTP, S/FTP and SF/FTP]
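The XX/YZZ designations introduced with ISO/IEC 11801 (2002) can be decoded mechanically. A small sketch (the function name is illustrative):

```python
# Shield codes as defined by the ISO/IEC 11801 (2002) naming system
SHIELD_CODES = {"U": "no shielding", "F": "foil shield",
                "S": "braided shield", "SF": "braid and foil shield"}

def describe_cable(designation: str) -> str:
    """Parse an XX/YZZ designation, e.g. 'S/FTP' -> overall braid,
    foil-shielded pairs. ZZ is always TP (twisted pair)."""
    overall, rest = designation.split("/")
    pair, core = rest[:-2], rest[-2:]
    assert core == "TP", "ZZ part must be TP (twisted pair)"
    return f"overall: {SHIELD_CODES[overall]}, pairs: {SHIELD_CODES[pair]}"

assert describe_cable("S/FTP") == "overall: braided shield, pairs: foil shield"
assert describe_cable("U/UTP") == "overall: no shielding, pairs: no shielding"
```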



In addition, factory pre-assembled cabling systems offer an advantage during installation in the form of time savings, which also saves costs. In case capacity must be expanded because of an increase in IT devices, these cabling systems, and in turn the actual IT application, can be linked together and put into operation easily and quickly by internal operational personnel. These ready-to-operate systems for so-called “plug-and-play installations” are often freely configurable as well, and have reproducible, consistently high quality and therefore excellent transmission properties. In addition, no acceptance test is required after these cabling systems are installed, as long as the cables and connection elements were not damaged during the installation. Furthermore, these solutions provide high investment protection, since they can be uninstalled and reused.

The increase in density of glass fiber cabling in data centers has led to parallel optic connection technology, and in turn to innovative multi-fiber connectors that are as compact as an RJ45 plug connection. Multi Fiber Push-On (MPO) technology has proven itself to be a practicable solution in this area. This is because four or even ten times the number of plug connections must now be provided to ports for 40/100 Gigabit Ethernet. This high number of connectors can no longer be managed with conventional single connectors. This is why the multi-fiber MPO connector, which can contact 12 or 24 fibers in a minimal amount of space, was included in the IEEE 802.3ba standard for 40GBase-SR4 and 100GBase-SR10. It is also clear that an MPO connector with 12, 24 or even up to 72 fibers can no longer be assembled on site. The use of MPO connectors therefore always implies the use of trunk cables which are pre-assembled and delivered in the desired length.

Please find more detailed information on parallel optic connection technology using MPO in section 3.10.1.

3.9.3 Plug Connectors for Twisted Copper Cables

The RJ45 connection system has been developed over decades, from a simple telephone plug to a high-performance data connector. The RJ45 plug and jack interface was defined by ISO/IEC JTC1/SC25/WG3 in the international ISO/IEC 11801 standard, and is considered a component of universal cabling systems. It is used as an eight-pole miniature plug system in connection with shielded and unshielded twisted copper cables. The RJ45 is the most widely distributed of all data connectors. It is used both for connecting terminal devices like PCs, printers, switches, routers, etc. – in the outlets and on connection/patch cables – and in distribution frames for laying out and distributing horizontal cabling. The transmission properties of the RJ45 connection system are essentially determined by its small dimensions. Parameters like frequency range, attenuation, crosstalk and ACR are significant in determining transmission behavior. Crosstalk is especially critical, since pairs which are guided separately in the cable must be laid over one another in the connection system (1-2, 3-6, 4-5, 7-8). Connector characteristics must be adapted to those of the cable within the structured cabling system. In order to compare and adjust these characteristics, connectors, like data cables, connection cables, patch cords and other cabling components, are defined by the categories already mentioned.
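The pair-to-pin layout mentioned above (1-2, 3-6, 4-5, 7-8) can be made concrete. The helper below is illustrative; it shows why one pair is the classic crosstalk hotspot: the pair on pins 3-6 straddles the pair on pins 4-5.

```python
# Pin assignment of the four pairs in an RJ45 jack, as named in the text.
PAIR_PINS = {1: (4, 5), 2: (1, 2), 3: (3, 6), 4: (7, 8)}

def is_split_around(outer: tuple, inner: tuple) -> bool:
    """True if the outer pair's pins straddle the inner pair's pins."""
    (a, b), (c, d) = sorted(outer), sorted(inner)
    return a < c and b > d

# Pair 3 (pins 3-6) is routed around pair 1 (pins 4-5): this is the main
# coupling the compensation elements in a module must balance out.
assert is_split_around(PAIR_PINS[3], PAIR_PINS[1])
# Pairs 2 (1-2) and 4 (7-8) sit at the edges and do not straddle anything.
assert not is_split_around(PAIR_PINS[2], PAIR_PINS[4])
```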

Pre-assembled shielded cable fitted with 6A modules from R&M

Trunk cable fitted with MPO/MTP® connectors from R&M



The closeness of the contacts, as well as the wire routing mentioned above, have proven to be problematic for high-frequency behavior. Only connection modules that were optimized with respect to contact geometry satisfy the Category 6A standards (both EIA/TIA and ISO/IEC). As in channel technology (see page 111), higher performance can be achieved with a Cat. 6A connector designed in accordance with ISO/IEC specifications than with a Cat. 6A plug connector in accordance with EIA/TIA specifications. According to the component standards, NEXT may fall off with a 40 dB slope starting from 250 MHz for an EIA/TIA Cat. 6A connector, but only with a 30 dB slope for an ISO/IEC Cat. 6A connector. At 500 MHz, this means that an ISO/IEC Cat. 6A module must achieve NEXT performance that is 3 dB better than an EIA/TIA Cat. 6A module (see the graphic to the right). This is significant. To achieve this performance, new modules must be developed from the ground up. The required NEXT performance cannot be reached through a change in existing designs alone – as can often be seen in Cat. 6A modules available on the market. Above all, more compensation elements are required to balance out additional coupling effects like cross-modal couplings. Greater effort is required to separate pairs from one another. The termination process must be performed in a very precise manner and guaranteed to be without error, to ensure consistent signal transmission. R&M has succeeded in doing just that with its Cat. 6A module, even with enormous safety reserves over and above the strict international standard. This innovative solution is illustrated by the product example for Cat. 6A below. The requirements for the transmission performance of Cat. 7/7A can only be achieved by assigning the pair pins (i.e. 1-2 and 7-8) to the outer corners of the jack, which restricts flexibility considerably. For this reason, the RJ45 connector is not used to connect Cat. 7 and 7A cables.
The GG45 connector, the ARJ45 connector and the TERA connector were developed and standardized for these categories. These connection systems have a different contact geometry, by means of which the critical center pairs can be separated from one another.

Selecting Connection Technology

Compared to the “cable question”, the “connector question” is more difficult to solve if one consistently chooses to use cabling of Class F or Class FA, which, however, is not required for 10GBase-T (see table on page 111). This is because the only choice that remains when opting for Category 7 or 7A connection systems is between the TERA connection system and the GG45 and ARJ45 connector types.

TERA connection system (Siemon Company), GG45 connection system (Nexans), ARJ45 connection system (Stewart Connector)

The advantage of the GG45 jack lies in its compatibility with existing RJ45 connection cables, since all previous cables can be re-used up to the first use of a link for Class F(A). However, tenders in which Category 6A technology and GG45-based technology were compared show that the use of GG45 leads to about triple the price in overall connection technology (mind you: a factor of 3 for materials and assembly!), and this in turn is what turns off many clients. The additional cost of the TERA connection system is not as high, but on the other hand this system forces an immediate use of new connection and patch cables (TERA on RJ45). As in the case of GG45, many clients accept this restriction only reluctantly.











ISO/IEC Cat. 6A vs. TIA Cat. 6A NEXT Values for Plug Connections



ARJ45 (augmented RJ) likewise satisfies Category 7A standards and thus those of link Class FA. Furthermore, ARJ45 connection systems can be used not only for 10 Gigabit Ethernet but also in other high-speed networks like InfiniBand or Fibre Channel (FC), or to replace Serial Attached SCSI (SAS) or PCI Express (PCIe). ARJ45 is interoperable with the GG45 plug connector, which has the same areas of application. Even its form factor is identical to that of the RJ45 connector. However, the question remains open of whether one must use a Category 7A cabling system at all, or whether a more cost-effective Category 6A system, which is fully usable for 10 Gbit/s, would not be sufficient. This variant represents a sensible solution because of its more cost-effective connection technology.

Category 6A from R&M

For all processes from product development to production, R&M’s primary goal is to guarantee that 100 percent of the modules it provides will always deliver the expected performance. This means that every single module must always achieve the desired performance everywhere, regardless of who is performing the termination. A pilot project carried out last year, using over 2,000 Class EA permanent links of different lengths, showed that all installers could achieve exceptionally good NEXT performance using this module (4 to 11 dB of reserve).

The Cat. 6A module from R&M was designed from the ground up to be easy to operate, yet provide maximum performance during operation – in other words, where it matters. Its innovative wiring technology with an automatic cutting process guarantees an absolutely consistent, precise termination of the wires, independent of the installer. The use of all four module sides and its unique pyramid shape maximizes the distance between the pairs in the module. The X Separator comes into use here and isolates pairs from one another to minimize the NEXT effect of the cable connection on performance.

All these product features ensure that Category 6A modules from R&M always achieve outstanding, consistent performance, with enormous safety reserves for signal transmission.

Conclusion

What should a data center planner or operator select after deciding to go with copper for cabling components? The current viewpoint, for financial and technical reasons, is that a shielded Category 7A cable combined with a shielded Category 6A connection system is definitely the best choice.

Security Solutions for Cabling Systems

A quality cabling system can have a positive effect on data center security, and appropriate planning will allow it to be designed in such a way that it can be adapted to increasing requirements at any time. Clarity, simplicity and usability play a major role in this process. And in the end, the cost-benefit ratio should also be taken into consideration in product selection. Often a simple, yet consistent, mechanical solution is more effective than an expensive active component or complex software solution. To use an example, R&M consistently extends its principle of modularity into the area of security. A central element here is color coding. Colored patch cords are generally used in data centers. The purpose of the colors is to indicate different types of IT services. This method is effective but inefficient, since it makes it necessary for the data center to stock up on different types of cables. In addition, it is not always possible to ensure that the correct colors are used – especially after large-scale system failures occur, or in the case of changes that take place over a long period of time. Incorrectly patched color-coded cables have a detrimental effect on the IT service. This may result in chain reactions up to and including total system failure. By contrast, R&M’s security solution is based on color marking components for the ends of patch cords. They are easy to put on and replace. The cable itself does not need to be pulled out or removed.
A color coding system that is equally easy to apply can be used for patch panels. This allows connections to be assigned unambiguously. In addition, this system reduces costs for stocking up on patch cords. Furthermore, the autonomous security system R&M offers as an extension to its cabling systems is characterized by mechanical coding and locking solutions. This system ensures that cables are not connected to incorrect ports, or that a cable – whether it be a copper or glass fiber cable – always stays in its place. The connection can only be opened with a special key.

Cat. 6A module with X Separator from R&M


Page 116 of 156 © 08/2011 Reichle & De-Massari AG R&M Data Center Handbook V2.0

R&M’s visual and mechanical coding system and locking solution together provide an enormous improvement to passive data center security.

Security solutions from R&M, ranging from insertable color coding units (left) to locks on patch cords (right)

3.9.4 Glass Fiber Cables (Fiber Optic)

Glass fiber is a fascinating medium. It promises bandwidth that is virtually without limits. However, it is also a delicate, brittle medium, and it takes a skilled person to handle glass fiber cabling. Nevertheless, with proper planning and well-thought-out product selection, data centers can get a successful and problem-free start into their glass fiber future. The continuously growing requirements of data centers leave no other choice. Nowadays, data centers must include glass fibers in their planning to cope with the migration to 40/100 Gigabit Ethernet, growing data volumes and many other requirements beyond that. Scalability, performance, access time, security, flexibility, and so on can be optimized through the use of glass fibers. Advances in glass fiber technology, practical knowledge from manufacturers and revolutionary standards help users to make the right decisions during the planning process and to invest in a future-proof installation. An overview appears below of the fundamental topics, terminologies and issues that play a role in planning a glass fiber cabling system in a data center. We then present an orientation for decision makers, and information on some progressive glass fiber solutions that are currently available on the market.

3.9.5 Multimode, OM3/4

Optical fibers can be differentiated by medium: glass (Glass Optical Fiber, GOF) and plastic (Plastic Optical Fiber, POF). Due to its clearly lower performance capabilities, POF is not suitable for data center applications. One can differentiate between three types of glass fibers:

• Multimode glass fibers with step index profile

• Multimode glass fibers with gradient index profile

• Single mode glass fibers

Multimode and single mode specify whether the glass fiber is designed for multiple modes or only for one mode. By means of a "smooth" transition between the high and low refractive index of the medium, the modes in multimode glass fibers with a gradient index profile, in contrast to those with a step index profile, no longer run straight but in a wave shape around the fiber axis, which evens out modal delays (minimization of modal dispersion). Because of its good cost-benefit ratio (production expense to bandwidth length product), this glass fiber type has virtually established itself as the standard fiber for high-speed connections over short to medium distances, e.g. in LANs and in data centers. Apart from attenuation in dB/km, the bandwidth length product (BLP) is the most important performance indicator for glass fibers. The BLP is specified in MHz*km: a BLP of 1,000 MHz*km means that the usable bandwidth comes to 1,000 MHz over 1,000 m or 2,000 MHz over 500 m.
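The BLP trade-off described above can be sketched in a few lines (an illustrative sketch; the function name is ours):

```python
# Illustrative sketch: usable bandwidth of a multimode fiber link from
# its bandwidth length product (BLP).
def usable_bandwidth_mhz(blp_mhz_km: float, length_m: float) -> float:
    """Usable modal bandwidth in MHz for a link of the given length."""
    return blp_mhz_km / (length_m / 1000.0)

# A BLP of 1,000 MHz*km yields 1,000 MHz over 1,000 m ...
print(usable_bandwidth_mhz(1000, 1000))  # 1000.0
# ... and 2,000 MHz over 500 m, as stated above.
print(usable_bandwidth_mhz(1000, 500))   # 2000.0
```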

Structure of a glass fiber (core: 10 µm single mode, 50 and 62.5 µm multimode): core glass (10, 50 or 62.5 µm), cladding glass (125 µm), primary coating (0.2 to 0.3 mm), secondary coating (0.6 to 0.9 mm)



OM3 and OM4 are laser-optimized 50/125 µm multimode glass fibers. While OM1 and OM2 fibers are used with LEDs as signal sources, lasers are used for category OM3 and OM4 fibers. In general, these are VCSELs (Vertical-Cavity Surface-Emitting Lasers), which are considerably more cost-effective than Fabry-Perot lasers, for example. Lasers have the advantage that, unlike LEDs, they are not limited to a maximum frequency of 622 Mbit/s and can therefore transmit higher data rates. The transmission capacities of multimode fibers of (almost) all types allow a data center to have a centrally constructed data cabling system. In contrast, a copper solution starting out from a single area distributor is unlikely to work in room sizes greater than about 1,600 m². But just as the argument for a single central cabling system with glass fiber to the desk makes little sense in office cabling, there is also no specified requirement for a central cabling system in the data center. Instead, the standard permits the creation of different areas with different media and correspondingly different length restrictions (see tables on pages 100 and 101). So if the data center is to cover a greater area, a main distributor must be provided which connects the linked area distributors over either copper or glass fiber. The cabling coming from the area distributor could then again be realized in twisted pairs, similar to the floor distributors. In contrast to glass fibers, which, when using a medium of at least OM3 quality, are without a doubt well suited for data rates higher than 10 Gbit/s, a similar suitability for twisted pair cables has not yet been established. If one follows ISO/IEC 24764, EN 50173-5 or the new TIA-942-A, new installations of OM1 and OM2 are no longer allowed. This raises the question: what should one do with cabling systems where OM1 or OM2 was already used? Do these need to be removed?
Of course, the problem of mixing glass fiber types does not come up only in data centers. In any glass fiber environment that has "evolved", the question must be asked whether an incorrect "interconnection" of old and new fibers could lead to errors. According to cable manufacturers, opinions among experts are split on this point. The predominant opinion is that with short cable paths, where an incorrect use of connection cables is likely, the differences in modal dispersion should not have a negative effect. If this opinion proves true, older fibers may certainly be kept in the data center as well, especially when incorrect connections can be avoided through the use of different connectors, codings or color markings on the adapters.

Conclusion

Anyone examining the future of data centers must inevitably devote special attention to OM4 glass fibers. And there are a few reasons in favor of a consistent use of these fibers already today. They offer additional headroom for insertion loss over the entire channel (channel insertion loss) and therefore for more plug connections. Similarly, the use of OM4 results in higher reliability of the overall network, a factor which will play a decisive role in upcoming applications of 40 and 100 Gigabit Ethernet. Finally, OM4 and its 150-meter range (as opposed to 100 meters with OM3) provide a good reserve length, which is also covered by a higher attenuation reserve.

3.9.6 Single Mode, OS1/2

In contrast to multimode glass fibers, only one light path exists in single mode glass fibers due to their extremely thin core, typically 9 µm in diameter. Since multi-path propagation with signal delay differences between different modes is therefore impossible, extremely high transmission rates can be achieved over great distances. Also known as mono-mode fibers, single mode fibers can be used in the wavelength range from 1,280 to 1,650 nm, in the second or third optical window. This results in a theoretical bandwidth of about 53 THz. On the other hand, single mode glass fibers place maximum demands on light injection and connection technology. They therefore come into use primarily in high-performance areas like MAN and WAN backbones. In order to do justice to the higher demands of optical networks with WDM and DWDM technology (Dense Wavelength Division Multiplexing), there exist dispersion-optimized mono-mode fibers: the Non Dispersion Shifted Fiber (NDSF), Dispersion Shifted Fiber (DSF) and Non Zero Dispersion Shifted Fiber (NZDSF), which were standardized by the ITU in its G.650 ff. recommendations.
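The quoted bandwidth of roughly 53 THz follows directly from converting the window limits into frequencies; a minimal sketch (the function name is ours):

```python
# Illustrative sketch: optical bandwidth of the 1,280-1,650 nm single mode
# window, obtained by converting both wavelength limits to frequencies.
C = 299_792_458.0  # speed of light in m/s

def window_bandwidth_thz(lambda_min_nm: float, lambda_max_nm: float) -> float:
    f_high = C / (lambda_min_nm * 1e-9)  # shortest wavelength -> highest frequency
    f_low = C / (lambda_max_nm * 1e-9)
    return (f_high - f_low) / 1e12

print(round(window_bandwidth_thz(1280, 1650), 1))  # ~52.5, i.e. about 53 THz
```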



The following table shows the specifications of all standardized multimode and single-mode glass fiber types:

Fiber Types and Categories

                                Multimode                                             Single mode
ISO/IEC 11801 class             OM1          OM2          OM3          OM4          OS1           OS2
IEC 60793-2 category            A1b          A1a.1        A1a.2        A1a.3        B1.1          B1.3
(per IEC 60793-2-10 for multimode, IEC 60793-2-50 for single mode)
ITU-T type                      G.651        G.651        G.651        G.651        G.652         G.652
Core/cladding (typical)         62.5/125 µm  50/125 µm    50/125 µm    50/125 µm    9(10)/125 µm  9/125 µm
Numerical aperture              0.275        0.2          0.2          0.2          --            --
Attenuation (typical)
  at 850 nm                     3.5 dB/km    3.5 dB/km    3.5 dB/km    3.5 dB/km    --            --
  at 1,300/1,310 nm             1.5 dB/km    1.5 dB/km    1.5 dB/km    1.5 dB/km    1.0 dB/km     0.4 dB/km
Bandwidth length product (BLP)
  at 850 nm                     200 MHz*km   500 MHz*km   1.5 GHz*km   3.5 GHz*km   --            --
  at 1,300 nm                   500 MHz*km   500 MHz*km   500 MHz*km   500 MHz*km   --            --
Effective modal bandwidth
  at 850 nm                     --           --           2 GHz*km     4.7 GHz*km   --            --

3.9.7 Plug Connectors for Glass Fiber Cables

So we come to another point which must be defined: which is the best fiber optic connector? The standards in this area leave us very much on our own, since no clear focus exists on just one or two systems for glass fibers, as in the area of copper-based technologies. If you look back over the past 20 years, you will find that almost none of the modern plug connectors for active components possesses the same timelessness as the RJ45 does in the world of copper.

Maximum transmission rates and universal network availability can be achieved through the use of high-quality plug connectors in all WAN areas, over the Metropolitan Area and Campus Network up to the backbone and subscriber terminal. A basic knowledge of quality grades for fiber optic connectors is indispensable for planners and installers. The following section provides information on current standards and discusses their relevance for product selection.

Connector Quality and Attenuation

Naturally, as in any other detachable connection in electrical and communication technology, losses in signal transmission occur with fiber optic connectors as well. Therefore, the primary goal in the development, manufacture and application of high-quality fiber optic connectors is to eliminate the causes of loss at fiber junctions wherever possible. This can only be achieved by means of specialized knowledge and years of experience in the areas of optical signal transmission and high-precision production processes. The extremely small diameter of glass fiber cores requires a maximum degree of mechanical and optical precision. Working with tolerances of 0.1 to 0.5 µm (much smaller than a grain of dust), manufacturers are moving toward the limits of fine mechanics and advancing their processes into the field of microsystems technology. No compromises can be made in this area.



In contrast to their electrical counterparts, fiber optic connectors make no distinction between plugs and jacks. A fiber optic connector includes a ferrule that accepts the end of the fiber and positions it precisely. Two connectors are joined by means of an adapter with an alignment sleeve, so a complete connection consists of a connector/adapter/connector combination. Both ferrules with their fiber ends must meet inside the connection so precisely that as little light energy as possible is lost or scattered back (return loss). A crucial factor in this process is the geometric alignment and processing of the fibers in the connector. Core diameters of 9 µm for single mode and 50 µm or 62.5 µm for multimode fibers, and ferrules with diameters of 2.5 mm or 1.25 mm, make it impossible to inspect the connector visually without aids. Of course, one can determine on site immediately whether a connector snaps and locks into place correctly. For all other properties – the "inner qualities" like attenuation, return loss or mechanical strength – users must be able to rely on manufacturer specifications. Analogous to copper cabling, an assignment into classes also exists for fiber optic channel links – not to be confused with the categories (see table above) that describe fiber type and material. Per DIN EN 50173, a channel contains permanently installed links (permanent links) as well as patch and device connection cables. The associated standardized transmission link classes like OF-300, OF-500 or OF-2000 show the permitted attenuation in decibels (dB) and maximum fiber length in meters, as displayed in the following table.

Transmission Link Classes and Attenuation

                                        Maximum transmission link attenuation in dB
Class     Realized in category          Multimode            Single mode           Max. length
                                        850 nm    1,300 nm   1,310 nm   1,550 nm
OF-300    OM1 to OM4, OS1, OS2          2.55      1.95       1.80       1.80       300 m
OF-500    OM1 to OM4, OS1, OS2          3.25      2.25       2.00       2.00       500 m
OF-2000   OM1 to OM4, OS1, OS2          8.50      4.50       3.50       3.50       2,000 m
OF-5000   OS1, OS2                      --        --         4.00       4.00       5,000 m
OF-10000  OS1, OS2                      --        --         4.00       4.00       10,000 m

The limit value for attenuation in transmission link classes OF-300, OF-500 and OF-2000 is based on the assumption that 1.5 dB must be calculated as the connection allocation (2 dB for OF-5000 and OF-10000). Using OF-500 with multimode glass fibers at 850 nm as an example, this results in the following table value: 1.5 dB connection allocation + 500 m x 3.5 dB/1,000 m = 3.25 dB. The attenuation budget (see the calculation example above) must be maintained in order to ensure reliable transmission. This is especially important for concrete data center applications like 10 Gigabit Ethernet and the 40/100 Gigabit Ethernet protocols in accordance with IEEE 802.3ba. In these cases, an extremely low attenuation budget must be taken into account. Maximum link ranges result accordingly (see tables on page 101). The transmission quality of a fiber optic connector is essentially determined by two features:

• Insertion Loss (IL) Ratio of light output in fiber cores before and after the connection

• Return Loss (RL) Amount of light at the connection point that is reflected back to the light source

The smaller the IL value and the larger the RL value, the better the signal transmission in a plug connection will be. The ISO/IEC 11801 and EN 50173-1 standards require the following values for both single mode and multimode:

Insertion Loss (IL)                        Return Loss (RL)
< 0.75 dB for 100% of plug connections     > 20 dB for multimode glass fibers
< 0.50 dB for 95% of plug connections      > 35 dB for single mode glass fibers
< 0.35 dB for 50% of plug connections
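The class limits in the transmission link class table above follow a simple rule: a fixed connection allowance (1.5 dB, or 2 dB for OF-5000/OF-10000) plus length times fiber attenuation. A minimal sketch of that calculation (the function name is ours):

```python
# Illustrative sketch: permitted link attenuation for the OF classes
# = connection allowance + (length / 1,000 m) * fiber attenuation per km.
def of_class_limit_db(length_m: float, atten_db_per_km: float,
                      allowance_db: float = 1.5) -> float:
    return allowance_db + (length_m / 1000.0) * atten_db_per_km

print(of_class_limit_db(500, 3.5))                     # 3.25 -> OF-500 at 850 nm
print(of_class_limit_db(2000, 3.5))                    # 8.5  -> OF-2000 at 850 nm
print(of_class_limit_db(5000, 0.4, allowance_db=2.0))  # 4.0  -> OF-5000, single mode
```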



As already mentioned, return loss is a measure of the amount of light at the connection point that is reflected back to the light source. The higher the RL decibel value, the lower the reflections. Typical RL values lie at 35 to 50 dB for PC, 60 to 90 dB for APC and 20 to 40 dB for multimode fibers. Initially, the end surface of fiber optic connectors was polished at a 90° angle to the fiber axis. Current standards require Physical Contact (PC) or Angled Physical Contact (APC). The expression HRL (High Return Loss) is also used fairly often and means the same thing as APC. In PC surface polishing, the ferrule gets a convex polished end surface so the fiber cores make contact at their highest elevation points. As a result, the creation of reflections at the contact point is reduced. An additional improvement in return loss is achieved by means of APC angled polishing technology. Here, the convex end surfaces of the ferrule are polished at an angle (8°) to the axis of the fiber. SC connectors are also offered with a 9° angled polishing. They have IL and RL values that are identical to the 8° version and have therefore not gained acceptance worldwide. As a comparison: the fiber itself has a return loss of 79.4 dB for 1310 nm, 81.7 dB for 1550 nm and 82.2 dB for 1625 nm (all values for a pulse length of 1 ns).

Quality Grades

A channel's attenuation budget is burdened substantially by the connections. In order to guarantee the compatibility of fiber optic connectors from different manufacturers, manufacturer-neutral attenuation values and geometric parameters for single mode connectors were defined in 2007 via the IEC 61753 and IEC 61755-3-1/-2 standards. The attenuation values established for random connector pairings, also known as each-to-each or random-mate, come significantly closer to actual operating connections than the attenuation values specified by manufacturers. One novelty of these quality grades is their requirements for mean (typical) values. This provides an optimal basis for the calculation of path attenuation: instead of calculating connectors using maximum values, the specified mean values can be used. A grade M was also considered for multimode connectors in the drafts of the standards, but it was rejected in the standard that was adopted. Since then, manufacturers and planners have been getting by with information from older or accompanying standards to find guideline values for multimode connectors. R&M uses grade M in the way it was described shortly before the publication of IEC 61753-1. These values were supported by ISO/IEC with a required insertion loss of < 0.75 dB per connector. These standards for multimode connector quality were already insufficient with the widespread introduction of 10 Gigabit Ethernet, and especially for future Ethernet technologies which provide 40 Gbit/s and 100 Gbit/s.

PC Physical Contact

APC Angled Physical Contact

Different amounts of light or modes are diffused and scattered back, depending on the junction of the two fibers, eccentricities, scratches and impurities (red arrow). A PC connector that is polished and cleaned well has about 14.7 dB RL against air and 45 to 50 dB when plugged in.

With APC connectors, modes are also scattered back as a result of the 8° or 9° polishing, though at an angle that is greater than the acceptance angle for total reflection. As a result, these modes are not transmitted. The calculation of the acceptance angle shows that all modes which have an angle greater than about 7.5° are decoupled after a few centimeters and therefore do not reach the source and interfere with it. A quality APC connector has at least 55 dB RL against air and 60 to 90 dB when plugged in.

sin Θ = NA  ⇒  Θ = sin⁻¹(NA) = sin⁻¹(0.13) ≈ 7.47°
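Two of the figures above can be checked from first principles; a sketch, assuming a glass refractive index of n = 1.45 (not stated in the text):

```python
import math

# 1) Return loss of a flat glass/air interface (PC end face against air),
#    from the Fresnel reflection with an assumed index n = 1.45.
n = 1.45
reflectance = ((n - 1) / (n + 1)) ** 2
rl_pc_air_db = -10 * math.log10(reflectance)
print(round(rl_pc_air_db, 1))  # ~14.7 dB, the value quoted for PC against air

# 2) Acceptance angle for a numerical aperture of 0.13, matching the
#    roughly 7.5 degree limit quoted above.
theta_deg = math.degrees(math.asin(0.13))
print(round(theta_deg, 2))  # ~7.47 degrees
```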




An example of this:

It should be possible to transmit 10 Gigabit Ethernet per the IEEE 802.3ae standard over the maximum distance of 300 m with OM3 fibers. In accordance with the standard, 1.5 dB will remain for connection losses after deducting fiber attenuation and power penalties (graphic, red line). Operating with current grade M multimode connectors and an insertion loss of 0.75 dB per connector, only two plug connections would therefore be possible. However, this is not very realistic. Looking at it from the other side, assume that we want to use two MPO and two LC plug connections. With an insertion loss of 0.75 dB per connection, total attenuation will amount to 3 dB. As a result, the link could only measure about 225 meters (graphic, green line). The same example, using a data rate of 40 Gbit/s, decreases the link to 25 meters. This shows that the attenuation budget available for connections becomes smaller and smaller as the data rate increases. Since up to this point the IEC 61755 group of standards could not present any reliable, and above all meaningful, values, R&M defined its own quality grades for multimode connectors – based on the empirical values acquired over years of experience. These multimode grades are based on the classification for single mode, extended by grade EM and grade 5 for PCF and POF connectors. Grades in accordance with IEC 61753-1:

Graphic: Determining the attenuation budget of an OM3 fiber with 10 Gigabit Ethernet
Blue: remaining attenuation budget for connections
Red: remaining budget for a 300-meter length
Green: remaining line length for two MPO and two LC connectors

IL, Insertion Loss   > 97 %              Mean        Comments
Grade A              < 0.15 dB           < 0.07 dB   Not yet conclusively established
Grade B              < 0.25 dB           < 0.12 dB
Grade C              < 0.50 dB           < 0.25 dB
Grade D              < 1.00 dB           < 0.50 dB
Grade M              < 0.75 dB (100 %)   < 0.35 dB   Not specified in IEC 61753-1

RL, Return Loss      100 %     Comments
Grade 1              > 60 dB   > 55 dB in unplugged state (APC only)
Grade 2              > 45 dB
Grade 3              > 35 dB
Grade 4              > 26 dB

Grades for R&M multimode connectors:

           IL 100 %   IL 95 %   IL median   RL                Comments
Grade AM   0.25 dB    0.15 dB   0.10 dB     Grade 2 (45 dB)   Only LSH, SC and LC single-fiber connectors
Grade BM   0.50 dB    0.25 dB   0.15 dB     Grade 3 (35 dB)   Only single-fiber connectors
Grade CM   0.60 dB    0.35 dB   0.20 dB     Grade 4 (26 dB)   Multi-fiber connectors
Grade DM   0.75 dB    0.50 dB   0.35 dB     Grade 4 (26 dB)   Similar to current grade M
Grade EM   1.00 dB    0.75 dB   0.40 dB     Grade 5 (20 dB)   For PCF and POF
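To see how such grade values are used in planning, the arithmetic can be sketched as follows. The "certainty" model here (the budget is only exceeded if every connector simultaneously exceeds its percentile value) is our simplifying assumption for illustration, as are the function names:

```python
# Illustrative sketch: total insertion loss for a set of connectors, plus a
# simple certainty estimate (assumption: budget exceeded only if every
# connector exceeds its percentile value at the same time).
def plan_connections(ils_db, percentile):
    total = sum(ils_db)
    certainty = 1 - (1 - percentile) ** len(ils_db)
    return total, certainty

# Two MPO grade CM (0.35 dB) and two LC grade BM (0.25 dB), 95% values:
total, cert = plan_connections([0.35, 0.35, 0.25, 0.25], 0.95)
print(round(total, 2), round(cert * 100, 4))  # 1.2 dB at 99.9994 % certainty

# The same connectors using the median values (0.20 dB and 0.15 dB):
total, cert = plan_connections([0.20, 0.20, 0.15, 0.15], 0.50)
print(round(total, 2), round(cert * 100, 2))  # 0.7 dB at 93.75 % certainty
```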



According to the graphic (page 121, red line), a maximum attenuation of 1.5 dB is available for four plug connections. Using the new multimode grades, we get a maximum attenuation of 1.2 dB with a certainty of 99.9994% for two MPO grade CM connectors and two LC grade BM connectors, or a maximum of 0.7 dB with a certainty of 93.75%. The maximum target length of 300 m can therefore be achieved without a problem. In this way, users gain freedom and security in planning in multimode as well. If more connections are required, one simply selects connectors from a higher quality class. The previous practice of using maximum values leads to incorrect specifications; the median or 95% value should be used in the planning process instead. As already required with single mode grades, multimode grades also require each-to-each guaranteed values (i.e. where connectors are measured against connectors). Information taken from measurements against a reference connector – a normal procedure with competitors – is not helpful, but instead leads to pseudo-security in the planning process. To be able to plan a future-proof network and to cope with the diminishing attenuation budgets, a reliable classification system by means of grades for single mode and multimode components is crucial.

Connector Types

The LC and MPO connectors were defined for data center applications in accordance with the ISO/IEC 24764, EN 50173-5 and TIA-942 standards for fiber optic cabling systems.

MPO (IEC 61754-7)

The MPO (Multi-fiber Push-On) connector is based on a plastic ferrule that allows up to 24 fibers to be housed in one connector; connectors with up to 72 fibers are already in development. The connector stands out for its compact design and easy operation, but brings disadvantages in optical performance and reliability. It is of crucial importance because of its increased packing density and its role in the migration to 40/100 Gigabit Ethernet.

LC Connector (IEC 61754-20)

This connector is part of a new generation of compact connectors. It was developed by Lucent (LC stands for Lucent Connector). Its design is based on a 1.25 mm-diameter ferrule. The duplex coupling matches the size of an SC coupling. As a result, it can achieve extremely high packing densities, which makes the connector attractive for use in data centers.

Other common connector types, sorted by IEC 61754-x, include:

ST Connector (also called BFOC, IEC 61754-2)

These connectors, with a bayonet locking mechanism, were the first PC connectors (1996). Thanks to the bayonet lock and extremely robust design, these connectors can still be found in LAN networks around the world (primarily in industry). ST designates a "straight" type.

DIN/LSA (optical fiber plug connector, version A, IEC 61754-3, DIN 47256)

Compact connector with screw lock, only known in the German-speaking world.

SC Connector (IEC 61754-4)

This connector type, with its square design and push/pull system, is recommended for new installations (SC stands for Square Connector or Subscriber Connector). It makes high packing density possible because of its compact design, and can be combined into duplex and multiple connections. Despite its age, the SC continues to gain in importance because of its outstanding properties. To this day, it remains the most important WAN connector worldwide, usually as a duplex version, due to its good optical properties.



MU Connector (IEC 61754-6)

The first small form factor connector was the MU connector. Based on its 1.25 mm ferrule, its appearance and functionality are similar to the SC, at half the model size.

FC (Fibre Connector, IEC 61754-13)

A robust, proven first-generation connector and the first true WAN connector; millions are still in use. It is disadvantageous for use in confined spaces because of its screw cap and is therefore not considered for current racks with high packing densities.

E-2000™ (LSH, IEC 61754-15)

This connector is a development of Diamond SA and is geared to LAN and CATV applications. It is produced by three licensed manufacturers in Switzerland, which has also led to its unequalled quality standard. The integrated dust shutter not only provides protection from dust and scratches, but also from laser beams. The connector can be locked using frames and levers which can be color coded and mechanically coded.

MT-RJ (IEC 61754-18)

The MT-RJ is a connector for LAN applications. Its appearance matches that of the familiar RJ45 connector from the copper world. It is used as a duplex connector.

F-3000 (IEC 61754-20 compatible)

An LC-compatible connector with a dust and laser protection cover.

F-SMA (Sub-Miniature Assembly, IEC 61754-22)

Connector with screw cap and no physical contact between the ferrules. The first standardized optical fiber connector, but today only used for PCF/HCS and POF.

LX.5 (IEC 61754-23)

Similar to the LC and F-3000 in size and shape, but only partially compatible with these connectors because of different ferrule distances in duplex mode. Developed by ADC.

SC-RJ (IEC 61754-24)

As the name reveals, the developers at R&M geared this connector to the RJ45 format. Two SCs form one unit in the design size of an RJ45, corresponding to SFF (Small Form Factor). The 2.5 mm ferrule sleeve technology is used for this connector. Compared to the 1.25 mm ferrule, it is more robust and more reliable. The SC-RJ therefore stands out not only because of its compact design, but also through its optical and mechanical performance.

It is a jack-of-all-trades – usable from grade A to M, from single mode to POF, from WAN to LAN, in the lab or outdoors. R&M has published a white paper on the SC-RJ ("SC-RJ – Reliability for every Category").

Adapters

The best optical fiber connector will not be of any use at all if it is combined with a bad adapter. This is why R&M adopted its grade philosophy for adapters as well. The following values were the basis for this:

Optical properties (IL, Insertion Loss)   Grade B   Grade C   Grade D           Grade M   Grade N
Sleeve material                           Ceramic   Ceramic   Phosphor bronze   Ceramic   Phosphor bronze
Attenuation variation (IL) delta,
per IEC 61300-3-4                         0.1 dB    0.2 dB    0.3 dB            0.2 dB    0.3 dB




Despite the validity of cabling standards for data centers, it must be assumed that a cabling system designed in accordance with these standards will not necessarily ensure the same usability period as is required by standards for office cabling. Designs with a requirements-oriented cabling system, and the modified design and material selection derived from this, must not be described as incorrect. They represent an alternative and may be a reasonable option in smaller data centers or server rooms. Data centers must be planned like a separate building within a building, and a cabling system that is structured and static therefore becomes more likely – at least between distributors. In any event, planners must ensure that capacity (in terms of transmission rate) is valued more highly than flexibility. Quality connection technology must ensure that fast changes and fast repairs are possible using standard materials. This is the only way to guarantee the high availabilities that are required. Data center planners and operators will also face the question of whether using single mode glass fiber is still strictly necessary. The performance level of the laser-optimized multimode glass fibers available today already opens up numerous options for installation. The tables above showed what potentials can be covered with OM3 and OM4 fibers. Do single mode fibers therefore have to be included in examinations of transmission rate and transmission link?

A criterion that may make single mode installations absolutely necessary is, for example, the length restriction for OM3/OM4 in IEEE 802.3ba (40/100 GbE). The 150-meter limitation of OM4 fiber for 40/100 Gigabit Ethernet would likely be exceeded in installations with extensive data center layouts. Outsourced units, like backup systems stored in a separate building, could not be integrated at all. In this case, single mode is the only technical option and a cost comparison is not necessary. The high costs of transceivers for single mode transmission technology are a crucial factor in configurations that can be implemented with either multimode or single mode. Experience shows that a single mode link costs three to four times more than a multimode link. With multimode, up to double the port density can be achieved. However, highly sophisticated parallel-optic OM4 infrastructures also require considerable investments. The extent to which the additional expenses are compensated by lower cabling costs (CWDM on single mode vs. parallel-optic OM4 systems) can only be determined by an overall examination of the scenario in question. In addition, the comparably higher energy consumption is also a factor: single mode consumes about three to five times more watts per port than multimode. Whether one uses multimode length restrictions as a basis for a decision in favor of single mode depends upon strategic considerations in addition to a cost comparison. Single mode offers a maximum of performance reserves, in other words a guaranteed future beyond current standards. Planners who want to eliminate restricting factors over multiple system generations and are willing to accept a bandwidth/length overkill in existing applications should plan on using single mode.

3.10 Implementations and Analyses

The 40 GbE specification is geared to High-Performance Computing (HPC) and storage devices, and is very well suited for the data center environment.
It supports servers, high-performance clusters, blade servers and storage networks (SAN). The 100 GbE specification is focused on core network applications – switching, routing, interconnecting data centers, Internet exchanges (Internet nodes) and service provider peering points. High-performance computing environments with bandwidth-intensive applications like Video-on-Demand (VoD) will profit from 100 GbE technology. Since data center transmission links typically do not exceed 100 m, multimode 40/100 GbE components will be significantly more economical than OS1 and OS2 single mode components, and still noticeably improve performance.



In contrast to some previous speed upgrades, in which new optical transceiver technologies, WDM (Wavelength Division Multiplexing) and glass fibers with higher bandwidth were implemented, the conversion to 40 GbE and 100 GbE is implemented using existing photonic technology on a single connector interface with several transceiver pairs. This interface, known as Multi-fiber Push-On (MPO), presents an arrangement of glass fibers – up to twelve per row and up to two rows per ferrule (24 fibers) – in a single connector. It is relatively easy to implement 40 GbE, where four glass fiber pairs are used with a single transceiver with an MPO interface. Far more difficult is the conversion to 100 GbE, which uses ten glass fiber pairs. There are a number of possibilities for arranging fibers on the MPO interface – right up to the two-row, 24-fiber MPO connector. The next great challenge for cable manufacturers is the difference in signal delay time (delay skew). A single transceiver for four fibers must separate four signals and then combine them back together. These signals are synchronized; every offset between the arrival of the first signal and the arrival of the last has a detrimental effect on overall throughput, since the signals can only be re-bundled when all have arrived. The difference in signal delay, which in the past played a secondary role for serial devices, has now become a central aspect of the process. Ensuring the conformity of these high-performance cables requires carefully worked-out specifications and a manufacturing process with the strictest of tolerances. It is up to cable manufacturers to implement their own processes for achieving conformity with standards during the cable design process. This includes the use of ribbon fiber technology and, more and more, the use of traditionally manufactured glass fiber cables.
Traditional cables can provide advantages with respect to strength and flexibility while still offering high fiber density: up to 144 fibers in one cable with a 15 mm diameter. Many large data centers with tens of thousands of fiber optic links need solutions with extremely high density that can be installed quickly, managed easily and scaled to high bandwidth requirements.

Another important factor is retaining the correct fiber arrangement (polarity), regardless of whether the optical fiber installation is performed on site in a conventional manner or with pre-assembled plug-and-play components. The transmitting port at one end of the channel must be linked to the receiving port at the other end, which seems obvious. Nevertheless, this led to confusion with older single-mode simplex ST and FC connectors; back then it was usually enough to swap the connector at one end if a link could not be established. With the increased use of MPO multi-fiber connectors, simply turning the connector around is no longer an option: the twelve fibers in the ferrule have already been ordered in the factory. There are various options for assigning fibers. These options are part of the different cabling standards and are documented in the fiber assignment list. It is important to know that in two of the polarity variants listed, the component assignments at the two ends of the transmission channel differ from each other, which can easily lead to problems when connectors are re-plugged and the glass fiber components were not installed according to instructions.

Introductory information on parallel optical connection technology and MPO technology follows below. In addition, performance and quality criteria are listed to give decision makers an initial orientation when planning their optical fiber strategy and selecting connection technology.
The migration path to 40/100 GbE is also described, along with the polarity methods. The last section also covers the bundling of power and data transmission in one cable: Power over Ethernet (PoE and PoEplus). A separate power feed for IP cameras, wireless access points, IP telephones and other devices becomes unnecessary. Application possibilities increase greatly when higher power can be provided, which is why PoEplus was introduced in 2009. More electrical energy on the data cable automatically means more heat in the wire cores, and that is a risk factor. Anyone planning data networks that use PoEplus must therefore be especially careful when selecting a cabling system and must accept certain limitations under some circumstances. However, the heating problem can be managed through consistent compliance with existing and future standards, so that no problems arise during data transmission. There is another risk factor to consider: the danger of contact damage from sparks when unplugging under load. R&M tests show that high-quality, stable solutions will ensure continuous contact quality. The following explanations provide support for planning data networks that use PoE and refer to still-unresolved questions in the current standardization process.

Amendment 1 (2008) to the ISO/IEC 11801 standard changed the definition of the Class EA cabling structure. As a consequence, permanent links below 15 m are no longer allowed in the reference implementation. However, link lengths shorter than 15 m are common in various applications, especially in data centers. Information therefore follows on the latest developments in the area of standardized copper cabling solutions for short links, with new options that provide orientation for planning high-performance data networks and decision aids for product selection.



Two examinations follow. First, the capabilities of Class EA and FA cabling systems, i.e. Cat. 6A and Cat. 7A components, are examined more closely and compared with one another, including a comparison of the transmission capacities of the different classes and a list of critical cabling parameters. In addition, the EMC behavior of shielded and unshielded cabling systems for 10GBase-T is examined using an independent study.

3.10.1 Connection Technology for 40/100 Gigabit Ethernet (MPO/MTP®)

As shown in the previous chapter, parallel optical channels with multimode optical fibers of the categories OM3 and OM4 are used for implementing 40 GbE and 100 GbE. The small diameter of the optical fibers poses no problems in laying the lines, but the ports suddenly have to accommodate four or even ten times the number of connections. This large number can no longer be covered with conventional individual connectors. That is why the IEEE 802.3ba standard incorporates the MPO connector for 40GBASE-SR4 and 100GBASE-SR10. It can contact 12 or 24 fibers in the tiniest of spaces. This chapter describes this type of connector and explains how it differs from the much improved MTP® connectors of the kind R&M offers.

The MPO connector (known as multi-fiber push-on and also as multi-path push-on) is a multi-fiber connector defined according to IEC 61754-7 and TIA/EIA 604-5 that can accommodate up to 72 fibers in the tiniest of spaces, comparable to an RJ45 connector.

MPO connectors are most commonly used for 12 or 24 fibers.

The push-pull interlock with sliding sleeve and two alignment pins are meant to position the MPO connector exactly for more than 1000 insertion cycles.

Eight fibers are needed for 40 GbE and twenty for 100 GbE. That means four contacts remain unused in each case. The diagrams below show the connection pattern:

MPO connectors, 12-fiber (left) and 24-fiber (right). The fibers for sending (red) and receiving (green) are color-coded.

As with every connector, the quality of the connection for the MPO connector depends on the precision of the contacting. In this case, however, that precision must be executed 12-fold or 24-fold. The fibers are usually glued into holes within the ferrule body. Such holes have to be larger than the fiber itself to allow the fiber to be fed through, so there is always a certain amount of play in the hole. This play causes two errors that are crucial for attenuation:

• Angle error (angle of deviation):

The fiber is not exactly parallel in the hole but at an angle of deviation. The fibers are therefore inclined when they come into contact with each other in the connector and are also radially offset in relationship to each other. The fibers are also subject to greater mechanical loading.

• Radial displacement (concentricity):

The two fiber cores in a connection do not meet flush but are somewhat offset in relation to each other. This is quantified as concentricity: the true center of the fiber hole is rotated once around the reference center, producing a circle whose diameter is defined as the concentricity value. In accordance with EN 50377-15-1, the concentricity of fiber holes in the MPO ferrule may be no more than 5 µm.

In connection with radial displacement, one often also speaks of eccentricity. Eccentricity is a vector indicating the distance of the true radial center of the cylinder from the reference center and the direction of the deviation. The concentricity value is therefore twice as large as the eccentricity but contains no information about the direction of deviation.
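The relationship between the two quantities can be written down directly: concentricity is the diameter of the circle swept by the true center, i.e. twice the eccentricity magnitude. A minimal sketch with invented measurement coordinates:

```python
import math

# Sketch: eccentricity vs. concentricity of a fiber hole in an MPO ferrule.
# Coordinates are invented µm offsets; the 5 µm limit is from EN 50377-15-1.

def eccentricity_um(ref_xy, true_xy):
    """Magnitude of the vector from reference center to true hole center."""
    return math.hypot(true_xy[0] - ref_xy[0], true_xy[1] - ref_xy[1])

def concentricity_um(ref_xy, true_xy):
    # Rotating the true center once around the reference center sweeps a
    # circle whose diameter (= 2 x eccentricity) is the concentricity value.
    return 2.0 * eccentricity_um(ref_xy, true_xy)

ref = (0.0, 0.0)    # reference center
hole = (1.5, 2.0)   # measured hole center (illustrative)
c = concentricity_um(ref, hole)
print(f"concentricity: {c:.2f} µm -> {'PASS' if c <= 5.0 else 'FAIL'}")
```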

MPO Connector for accommodating 24 fibers



In both cases, the consequences are higher insertion loss and return loss, because part of the light is decoupled or reflected instead of being transmitted. Because of the delicate multi-fiber structure and narrow tolerances involved, MPO connectors (and the MTP® connectors described below) are no longer terminated on site. MPO/MTP® connectors are therefore sold already terminated together with trunk cables. With this arrangement, customers have to plan line lengths precisely but are also assured top quality and short installation times.

MTP® connectors with Elite® ferrules from R&M

In MPO connectors, the fibers are glued directly into the plastic body of the ferrule, making later reworking difficult and putting limits on manufacturing accuracy. Angle errors and radial displacement can be reduced only to a certain degree. To achieve tighter tolerances and better attenuation values, the American connectivity specialist US Conec developed the MTP® connector (MTP = mechanical transfer push-on). It has better optical and mechanical quality than the MPO. Unlike the MPO connector, an MTP® connector consists of a housing and a separate MT ferrule (MT = mechanical transfer). The MTP® connector differs from the MPO in various ways, e.g. rounded pins and oval-shaped compression springs. These features prevent scratches during plug-in and protect the fibers in the transition area from connector to cable. This design makes the connectors highly stable mechanically. The MT ferrule is a multi-fiber ferrule in which the fiber alignment depends on the eccentricity and positioning of the fibers and of the holes for the centering pins. The centering pins control fiber alignment during insertion. Since the housing is detachable, the ferrules can undergo interferometric measurement and subsequent processing during the manufacturing process. In addition, the gender of the connector (male/female) can still be changed on site at the last minute.
A further advance from US Conec is that the ferrules can move longitudinally in the housing, which allows the contact pressure to be defined. A further improvement is the multimode (MM) MT Elite® ferrule. It has about 50 percent less insertion loss than, and the same return loss as, the MM MT standard ferrule, which typically has 0.1 dB insertion loss. Tighter process tolerances are what led to these better values. Series of measurements from the R&M laboratory confirm the advantages of the R&M Elite® ferrule. The histograms in the figure below and the table illustrate the qualitative differences between the two ferrule models. The random-mated measurements with R&M MTP® patch cords were carried out in accordance with IEC 61300-3-45, one group with Standard and another with Elite® ferrules. Connections with R&M MTP® Elite® plugs, for example, have a 50 percent probability of exhibiting an insertion loss of just 0.06 dB or less, and a 95 percent probability of an optical loss of 0.18 dB or less.

Insertion loss characteristic of Standard (left) and Elite ferrules (right) from the series of R&M measurements

50% and 95% values for Standard and Elite® ferrules

For its own MPO solutions, R&M makes exclusive use of MTP® connectors with Elite® ferrules. However, it has further improved the already good measured values in a finishing process specially developed for the purpose. In other words, R&M has not only tightened the EN-specified tolerances for the endface geometry of MPO connectors but also defined new parameters. Finishing is done in a special high-end production line.

Ferrule type    50%          95%
Standard        ≤ 0.09 dB    ≤ 0.25 dB
Elite®          ≤ 0.06 dB    ≤ 0.18 dB
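The 50 % and 95 % values above are percentiles of the measured insertion-loss distribution from the random-mated series. A minimal sketch of how such values are derived from a sample set, using invented measurement data:

```python
import statistics

# Sketch: deriving 50% / 95% insertion-loss values from random-mated
# measurements. The sample values below are invented for illustration.
samples_db = [0.03, 0.04, 0.05, 0.05, 0.06, 0.07, 0.08, 0.10, 0.12, 0.17]

# n=100 yields 99 cut points; index 49 is the 50th percentile (median),
# index 94 the 95th percentile.
quantiles = statistics.quantiles(samples_db, n=100, method="inclusive")
il_50 = quantiles[49]
il_95 = quantiles[94]
print(f"IL(50%) = {il_50:.3f} dB, IL(95%) = {il_95:.4f} dB")
```

In practice such statistics are computed over large measurement series, which is why the quoted values characterize the product rather than any single mated pair.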



The criteria R&M sets for each processing step and parameter and for the statistical processes in quality management are fundamentally more stringent than the relevant standards in order to guarantee users maximum operational reliability.

Fiber Protrusion

For instance, EN 50377-15-1 requires that fibers in MPO/MTP® connectors protrude at least 1 µm beyond the end of the ferrule and have physical contact with their counterpart. The reason is that the ferrule of an MPO/MTP® connector has an extremely large contact surface area, which distributes the spring force pressing the ferrules together evenly across all fibers in each connector. If fibers are too short, there is a risk that only the surfaces of the two mated ferrules will come into contact, preventing the fibers from touching. Fiber height is therefore a critical factor for connection performance.
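The protrusion requirement can be expressed as a simple check over interferometric height readings. A minimal sketch with invented values; the 1 µm minimum is the EN 50377-15-1 figure quoted above, while the height readings themselves are hypothetical:

```python
# Sketch: checking fiber protrusion from interferometric height readings.
# Heights in µm beyond the ferrule endface; values invented for illustration.
heights_um = [1.9, 2.1, 2.0, 2.2, 2.0, 2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0]

MIN_PROTRUSION_UM = 1.0  # EN 50377-15-1: every fiber must protrude >= 1 µm

protrusion_ok = min(heights_um) >= MIN_PROTRUSION_UM
# Spread between the shortest and the longest fiber (to be minimized):
height_range_um = max(heights_um) - min(heights_um)

print(f"all fibers protrude >= {MIN_PROTRUSION_UM} µm: {protrusion_ok}")
print(f"height range: {height_range_um:.2f} µm")
```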

At the same time, all fiber heights in an MPO/MTP® connector have to be within a certain range. This range is defined as the distance between the shortest and the longest fiber and naturally has to be minimized to ensure that all fibers touch in a connection. The situation is exacerbated by the fact that the fiber heights do not follow a linear arrangement. Variation in fiber heights arises during the polishing of the fiber endfaces and can only be reduced by working with extreme care using high-quality polishing equipment and suitable polishing agents. Although operators should always try to keep all fibers at the same height during polishing, a certain deviation in the sub-micrometer range cannot be avoided entirely. R&M has optimized this work step in such a way as to tighten the tight tolerances of EN 50377-15-1 yet again, to 1 to 3.5 µm. One has to keep in mind not only the height differences across all 12 or 24 fibers but also those between adjoining fibers. As mentioned above, the spring force is ideally distributed evenly across all fibers. At the same time, fibers become compressed when subjected to pressure. If a short fiber is flanked by two adjoining higher fibers, there is a chance it will not contact its counterpart in the other connector, increasing insertion loss and return loss.

Core Dip

Particular attention must be paid to what is called core dip. This is where the fibers come into contact with each other in a connection, and it is a major factor in determining insertion loss and return loss. Core dip is a depression in the fiber core, as illustrated in the figure. It occurs during polishing because the core is somewhat softer than the fiber cladding due to its composition. EN 50377-15-1 says the depth of the core dip may not exceed 100 nm. In the case of a 24-fiber MPO/MTP® connector for 100 GbE applications, the problem of optimum geometry becomes even more complex, because these models have two rows of twelve fibers each that have to be polished. Optimizing the polish geometry is the prerequisite for ensuring the high quality of the connectors.

Further parameters

Besides fiber height and core dip, there are further parameters of crucial importance for the surface quality of fiber endfaces and thus for the transmission properties and longevity of a connector.
Ferrule radius and fiber radius are two examples. Instead of focusing on just one value here, the geometry must be examined as a whole during production and quality control. The tight tolerances R&M sets are tougher than those in EN 50377-15-1 and ensure the top quality of the connectors. But 100 percent testing during production is what guarantees compliance with these tolerances. R&M therefore subjects all connector endfaces to an interferometric inspection (see figure). Defective ferrules are reworked until they comply with the specified tolerances.



The insertion loss and return loss of all connectors are also checked. To reduce the insertion loss of a connection, the offset of the two connected fibers must be as small as possible. R&M likewise measures the internal polarity of the fibers in all MPO/MTP® cables to be sure that it is correct. Polarity refers to how the fibers are connected, e.g. fiber Tx1 (transmitting channel 1) leads to Rx1 (receiving channel 1), Tx2 to Rx2, Tx3 to Rx3, etc.

Trunk cables

On-site termination of an MPO/MTP® connector with 12, 24 or even up to 72 fibers is obviously no longer possible. In other words, if you use MPO connectors you also have to use trunk cables delivered already cut to length and terminated. This approach requires greater care in planning but has a number of advantages:

• Higher quality

Higher quality can usually be achieved with factory termination and testing of each individual product. A test certificate issued by the factory also serves as long-term documentation and as quality control.

• Minimal skew

The smallest possible skew between the four or ten parallel fibers is crucial to the success of a parallel optical connection. Only then can the information be successfully synchronized and assembled again at the destination. The skew can be measured and minimized with factory-terminated trunk cables.

• Shorter installation time

The pre-terminated MPO cable system can be incorporated and immediately plugged in with its plug and play design. This design greatly reduces the installation time.

• Better protection

All termination is done in the factory, so cables and connectors are completely protected from ambient influences. FO lines lying about in the open in splice trays are exposed at least to the ambient air and may age more rapidly as a result.

• Smaller volume of cable

Smaller diameters can be achieved in the production of MPO cabling from FO loose tube cables. The situation changes accordingly: cable volume decreases, conditions for air conditioning in data centers improve and the fire load declines.

• Lower total costs

In splice solutions, a number of not always clearly predictable factors set costs soaring, e.g. splicing involving much time and equipment, skilled labor, meters of cable, pigtails, splice trays, splice protection and holders. By comparison, pre-terminated trunk cables not only have technical advantages but also usually involve lower total costs than splice solutions.

3.10.2 Migration Path to 40/100 Gigabit Ethernet

MPO connectors contact up to 24 fibers in a single connection. A connection must be stable and its ends correctly aligned; these aspects are essential for achieving the required transmission parameters. A defective connection may even damage components or cause the link to fail altogether. MPO connectors are available in a male version (with pins) or a female version (without pins). The pins ensure that the fronts of the connectors are exactly aligned on contact and that the endfaces of the fibers are not offset. Two other clearly visible features are the noses and guide grooves (keys) on the top side. They ensure that the adapter holds the connector with the correct ends aligned with each other.

MPO male with pins (left) and MPO female without pins (right)



Adapters

There are two types of MPO adapters, based on the placement of the key:

• Type A: Key-up to key-down

Here the key is up on one side and down on the other. The two connectors are connected turned 180° in relation to each other.

• Type B: Key-up to key-up

Both keys are up. The two connectors are connected while in the same position in relation to each other.

Connection rules

1. When creating an MPO connection, always use one male connector and one female connector plus one MPO adapter.

2. Never connect a male to a male or a female to a female. With a female-to-female connection, the fiber cores of the two connectors will not be at the exact same height because the guide pins are missing. That will lead to losses in performance. A male-to-male connection has even more disastrous results. There the guide pins hit against guide pins so no contact is established. This can also damage the connectors.

3. Never dismantle an MPO connector. The pins are difficult to detach from an MPO connector and the fibers might break in the process. In addition, the warranty becomes null and void if you open the connector housing!
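The mating rules above can be encoded as a simple check, which is useful when validating planned links in bulk. A minimal sketch; the function name and message texts are illustrative, not part of any standard:

```python
# Sketch: encoding the MPO mating rules as a simple validity check.
def check_mpo_mating(end_a: str, end_b: str) -> str:
    """end_a / end_b are 'male' (with pins) or 'female' (without pins)."""
    if {end_a, end_b} == {"male", "female"}:
        return "OK: pins align the two ferrules exactly"
    if end_a == end_b == "male":
        # Guide pins hit guide pins: no contact, possible damage.
        return "ERROR: male-to-male - no contact, risk of connector damage"
    # Female-to-female: no guide pins, fiber cores not at the same height.
    return "ERROR: female-to-female - fibers misaligned, performance loss"

print(check_mpo_mating("male", "female"))
print(check_mpo_mating("female", "female"))
```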

Cables

MPO cables are delivered already terminated. This approach requires greater care in planning in advance but has a number of advantages: shorter installation times, tested and guaranteed quality, and greater reliability.

Trunk cables/patch cables

Trunk cables serve as a permanent link connecting the MPO modules to each other. Trunk cables are available with 12, 24, 48 and 72 fibers. Their ends are terminated with the customer's choice of 12-fiber or 24-fiber MPO connectors. MPO patch cords will not be used until 40/100G active devices are employed (with MPO interface). The ends of MPO patch cords are terminated with the customer's choice of 12-fiber or 24-fiber MPO connectors.
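The fiber counts above determine how many trunks a given number of channels needs. A minimal sketch under the simplifying assumption that one channel is never split across trunks (so a 12-fiber trunk carries exactly one 8-fiber 40 GbE channel, with four fibers left dark); the function and its defaults are illustrative:

```python
import math

# Sketch: 12-fiber trunks needed for a number of parallel channels.
FIBERS_PER_CHANNEL = {"10GbE": 2, "40GbE": 8, "100GbE": 20}

def trunks_needed(channel: str, count: int, fibers_per_trunk: int = 12) -> int:
    # Assumption: a channel is not split across trunks.
    channels_per_trunk = fibers_per_trunk // FIBERS_PER_CHANNEL[channel]
    if channels_per_trunk == 0:
        # e.g. 100GbE over 12-fiber trunks: two trunks (or one MPO-24) per channel
        return count * math.ceil(FIBERS_PER_CHANNEL[channel] / fibers_per_trunk)
    return math.ceil(count / channels_per_trunk)

print(trunks_needed("10GbE", 24))   # one 12-fiber trunk carries six duplex channels
print(trunks_needed("40GbE", 6))
print(trunks_needed("100GbE", 3))
```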

Trunk/patch cables are available in a male-male version (left) and a female-female version (right)

Harness cables

Harness cables provide a transition from multi-fiber cables to individual fibers or duplex connectors. The 12-fiber harness cables available from R&M are terminated with male or female connectors on the MPO side; the whips are available with LC or SC connectors.

Y cables

Y cables are generally used in the 2-to-1 version. A typical application is to join two 12-fiber trunk cables to a 24-fiber patch cord as part of a migration to 100 GbE. The rather rare 1-to-3 version allows three 8-fiber MTP® connectors to be joined to a 24-fiber permanent link, e.g. for migration to 40 GbE.

Adapters key-up to key-down (left), key-up to key-up (right)



Duplex patch cables

Classic duplex cables are involved here, not MPO cables. They are available in a cross-over version (A-to-A) or a straight-through version (A-to-B) and are terminated with LC or SC connectors.

Modules and adapter plates

Modules and adapter plates are what connect the permanent link to the patch cable. The MPO module enables the user to take the fibers brought in by a trunk cable and distribute them to duplex cables. As pre-assembled units, the MPO modules are fitted with 12 or 24 fibers and have LC, SC or E-2000™* adapters on the front side and MPO at the rear.

The adapter plate connects the MPO trunk cable with an MPO patch cord or harness cable. MPO adapter plates are available with 6 or 12 MTP adapters, type A or type B.

MPO module with LC duplex adapters MPO adapter plate (6 x MPO/MTP®)

The polarity methods

The connectors and adapters are coded throughout to ensure the correct orientation of the plug connection. The three polarity methods A, B and C as defined in TIA-568-C are used, for their part, to guarantee the right bi-directional allocation. This chapter describes these methods briefly.

Method A

Method A uses straight-through type A backbones (pin1 to pin1) and type A (key-up to key-down) MPO adapters. On one end of the link is a straight-through patch cord (A-to-B), on the other end is a cross-over patch cord (A-to-A). A pair-wise flip is done on the patch side. Note that only one A-to-A patch cord may be used for each link.

MPO components from R&M have been available for Method A since 2007. It can be implemented quite easily, because only one cassette type is needed, and it is probably the most widespread method.

Method B

Method B uses cross-over type B backbones (pin 1 to pin 12) and type B (key-up to key-up) MPO adapters. However, the type B adapters are used differently on the two ends (key-up to key-up versus key-down to key-down), which requires more planning effort and expense. A straight-through patch cord (A-to-B) is used on both ends of the link.

Method B does not enjoy wide use because of the greater planning effort and expense involved and because single-mode MPO connectors cannot be used. R&M does not support this method either, or does so only on request.

Method C

Method C uses pair-wise flipped type C backbones and type A (key-up to key-down) MPO adapters. A straight-through patch cord (A-to-B) is used on both ends of the link. In other words, the pair-wise flip of polarity occurs in the backbone, which increases the planning effort and expense for linked backbones. In even-numbered linked backbones, an A-to-A patch cord is needed.

Method C is not very widespread either because of the greater planning effort and expense involved and because it does not offer a way of migrating to 40/100GbE. R&M does not support Method C or does so only on request.

* E-2000™, manufactured under license from Diamond SA, Losone



Method R

Method R (a designation defined by R&M) has been available since 2011. It requires just one type of patch cord (A-to-B). The cross-over of the fibers for duplex signal transmission (10GBase-SR) takes place in the pre-assembled cassette. The connectivity scheme for trunk cable and patch cord, i.e. the light guidance, always remains the same, even for parallel transmission (as in Method B) when setting up 40/100 GbE installations. That means capacity can be expanded directly in an uncomplicated and inexpensive manner: the only thing that has to be done is to replace the cassettes with panels. The table below summarizes the described methods once again:

Polarity method   MPO/MTP cable   MPO module                          Duplex patch cords
A                 Type A          Type A (type A adapter)             1 x A-to-B, 1 x A-to-A
B                 Type B          Type B1, Type B2 (type B adapter)   2 x A-to-B
C                 Type C          Type A (type A adapter)             2 x A-to-B
R                 Type B          Type R (type B adapter)             2 x A-to-B

Polarity method   MPO/MTP cable   Adapter plate   MPO/MTP patch cords
A                 Type A          Type A          1 x Type A, 1 x Type B
B                 Type B          Type B          2 x Type B

Polarity, methods and component types
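The type A/B/C fiber mappings behind these methods can be written down and checked programmatically. A minimal sketch for the 12-fiber case, following the type definitions given above (type A straight through, type B reversed, type C flipped pair-wise); the helper names are illustrative:

```python
# Sketch: fiber-position mappings of the MPO cable types (positions 1..12)
# and a polarity check composed along a channel.
TYPE_A = {i: i for i in range(1, 13)}                          # straight through
TYPE_B = {i: 13 - i for i in range(1, 13)}                     # reversed: 1 -> 12
TYPE_C = {i: i + 1 if i % 2 else i - 1 for i in range(1, 13)}  # pair-wise flip

def chain(*mappings):
    """Compose fiber mappings along a channel, left to right."""
    result = {}
    for start in range(1, 13):
        pos = start
        for mapping in mappings:
            pos = mapping[pos]
        result[start] = pos
    return result

# Method C: the pair-wise flip happens in the backbone, so a fiber entering
# on position 1 leaves on position 2 (its duplex partner):
assert chain(TYPE_C)[1] == 2
# Two linked type C backbones cancel out (straight through again), which is
# why an even number of linked backbones needs an A-to-A patch cord:
assert chain(TYPE_C, TYPE_C) == TYPE_A
print("polarity checks passed")
```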

The complete rebuilding of a data center is certainly not an everyday event. When it is done, the operator has the option of relying immediately on the newest technologies and laying the groundwork for higher bandwidths. Gradually converting or expanding existing infrastructure to accommodate 100 Gbit/s will be a common occurrence in the years ahead; indeed, it will have to be. A useful approach involves successively replacing first the existing passive components, then the active components as they become available and reasonably affordable. The capacity expansion is usually done in three steps:

Type A: MPO/MTP cable, key-up to key-down; fibers 1-12 connected straight through (position 1 to position 1)
Type B: MPO/MTP cable, key-up to key-up; fiber positions reversed (position 1 to position 12)
Type C: MPO/MTP cable, key-up to key-down, flipped pair-wise (positions 1-2, 3-4, ... swapped)
Type A adapter: key-up to key-down; Type B adapter: key-up to key-up
Duplex patch cables: A-to-B (straight through) and A-to-A (cross-over)


The capacity expansion in existing 10G environments

The standards TIA-942, DIN EN 50173-5 and ISO/IEC 24764 provide specifications for network planning in data centers. The steps below assume a correspondingly well-planned and installed network and merely describe the migration to 100 GbE. The first step in the migration from 10 GbE to 40/100 GbE involves the expansion of capacity in an existing 10 GbE environment. A 12-fiber MPO cable serves as the permanent link (backbone). MPO modules and patch cords establish the connection to the 10G switches. Different constellations emerge depending on the legacy infrastructure and the polarity method used.

Method A

10G, Case 1 - an MPO trunk cable (type A, male-male) serves as the permanent link (middle). MPO modules (type A, female) provide the transition to the LC Duplex patch cords A-to-B (left) and A-to-A (right).

10G, Case 2 - an MPO trunk cable (type A, male-male) serves as the permanent link (middle). An MPO module (type A, female) provides the transition to LC Duplex patch cord A-to-B (left), adapter plate (type A) and harness cable (female) as an alternative to the combination of MPO module and LC Duplex patch cord.

10G, Case 3 - duplex connection without permanent link, consisting of LC Duplex patch cord A-to-B (left), MPO module (type A, female) and harness cable (male).

Method R

10G - an MPO trunk cable (type B, male-male) serves as a permanent link (middle), MPO modules (type R, female) provide the transition to the LC Duplex patch cords A-to-B (left, right).

Important: Harness cables cannot be used with Method R!

The capacity expansion from 10G to 40G

If the next step involves replacing the 10G switches with 40G versions, MPO adapter plates can simply be installed instead of the MPO modules to make the next adaptation.

Here, too, the installer has to keep in mind the polarity method used.

Method A

Replacement of the MPO modules (type A) with MPO adapter plates (type A) and the LC Duplex patch cords with MPO patch cords type A, female-female (left) and type B, female-female (right). The permanent link (type A, male-male) remains.



Method B

Replacement of the MPO modules (type R) with MPO adapter plates (type B) and the LC Duplex patch cords with MPO patch cords type B, female-female (left, right). The permanent link (type B, male-male) remains.

The upgrade from 40G to 100G

Finally, in the last step, 100G switches are installed. This requires the use of 24-fiber MPO cables. The existing 12-fiber connection (permanent link) can either be expanded with the addition of a second 12-fiber connection or be replaced with the installation of a 24-fiber connection.

Method A

Capacity expansion for the MPO trunk cable (type A, male-male) with the addition of a second trunk cable; the MPO adapter plates (type A) remain unchanged; the MPO patch cords are replaced with Y cables. (left: Y cable female-female type A; right: Y cable female-female type B)

The MPO-24 solution - using one MPO-24 trunk cable (type A, male-male); the MPO adapter plates (type A) remain unchanged. The patch cords used are MPO-24 patch cords type A, female-female (left) and type B, female-female (right).

Method B

Capacity expansion for the MPO trunk cable (type B, male-male) with the addition of a second trunk cable; the adapter plates (type B) remain unchanged; the MPO patch cords are replaced with Y cables. (left, right: Y cable female-female type B)

The MPO-24 solution - using one MPO-24 trunk cable (type B male-male); the MPO adapter plates (type B) remain unchanged. The patch cords used are MPO-24 patch cords type B, female-female (left, right).

Summary

Planners and managers at data centers face new requirements with the introduction of MPO components and parallel optical connections. They will have to plan cable lengths carefully, select the right MPO types, keep polarities in mind across the entire link and calculate attenuation budgets with precision. Short-term changes are expensive and planning mistakes can be costly. The changeover is worthwhile nevertheless, especially since technology will make it mandatory in the medium term. It therefore makes sense to lay the groundwork early and at least adapt the passive components to meet the upcoming requirements. Short installation periods, tested and verified quality for each individual component, and reliable operation and investment protection for years to come are the three big advantages that make the greater effort and expense more than worthwhile.
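Calculating an attenuation budget can be sketched as a simple sum of fiber loss and connection losses, compared against the channel limit of the intended application. A minimal sketch, assuming the commonly cited figures of 3.5 dB/km for OM3 at 850 nm and a 1.9 dB channel limit for 40GBASE-SR4 over 100 m of OM3; verify both against the current standard editions before relying on them:

```python
# Sketch: channel attenuation budget for 40GBASE-SR4 over OM3.
# 1.9 dB channel limit and 3.5 dB/km fiber loss are commonly cited
# IEEE 802.3ba / ISO figures (assumptions to be verified).
CHANNEL_BUDGET_DB = 1.9
FIBER_LOSS_DB_PER_KM = 3.5  # OM3 at 850 nm

def channel_loss_db(length_m: float, connector_losses_db: list) -> float:
    """Total channel loss: fiber attenuation plus all mated-connection losses."""
    return length_m / 1000.0 * FIBER_LOSS_DB_PER_KM + sum(connector_losses_db)

# 80 m trunk with two MPO connections, using the 95 % Elite ferrule value:
loss = channel_loss_db(80.0, [0.18, 0.18])
verdict = "within" if loss <= CHANNEL_BUDGET_DB else "over"
print(f"channel loss: {loss:.2f} dB ({verdict} the {CHANNEL_BUDGET_DB} dB budget)")
```

Low-loss connections matter here: with standard-ferrule 95 % values the same channel consumes noticeably more of the budget, which limits how many patching points the link can contain.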



3.10.3 Power over Ethernet (PoE/PoEplus)

Since its introduction in 2003, PoE has grown into a thriving market and is forecast to continue to grow significantly in the future. Market research group Dell'Oro predicts that in 2011 there will be 100 million PoE-enabled devices sold, as well as over 140 million PoE ports in sourcing equipment such as switches (see figure below). The market study "Power over Ethernet: Global Market Opportunity Analysis" by VDC (Venture Development Corporation, Natick, Massachusetts) discusses the adoption drivers of the leading PoE applications, IP phones and wireless access points (WAPs), and the leading vendors of these devices. According to the research, the demand for enterprise WAPs will increase by nearly 50 % per year through 2012.

Initially, a maximum power of 12.95 W at the powered device (PD) was defined by the standard. Since the introduction of Power over Ethernet, demand for higher power grew in order to supply devices such as:

• IP cameras with pan/tilt/zoom functions
• VoIP video phones
• POS terminals
• Multiband wireless access points (IEEE 802.11n)
• RFID readers, etc.

Therefore, a new standard was developed with the objective of providing a minimum of 24 W of power at the PD:

• IEEE 802.3af: Power over Ethernet (PoE) = 12.95 W of power
• IEEE 802.3at: Power over Ethernet (PoEplus) = 25.5 W of power on average
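The power values of the two standards follow from a simple budget: the power injected by the PSE at its minimum voltage, minus the I²R loss in the DC loop resistance of the cabling. A minimal sketch; the loop resistances of 20 Ω (Type 1, Cat. 3 worst case) and 12.5 Ω (Type 2, Cat. 5) are commonly cited assumptions and are not taken from this handbook:

```python
def power_at_pd(v_pse_min, i_max, r_loop):
    """Worst-case power available at the powered device (PD):
    power injected by the PSE minus the I^2 * R loss in the cable loop."""
    return v_pse_min * i_max - i_max ** 2 * r_loop

# IEEE 802.3af (Type 1): 44 V minimum PSE voltage, 350 mA, assumed 20-ohm loop
print(round(power_at_pd(44.0, 0.35, 20.0), 2))   # -> 12.95 (W)

# IEEE 802.3at Type 2 (PoEplus): 50 V minimum, 600 mA, assumed 12.5-ohm loop
print(round(power_at_pd(50.0, 0.60, 12.5), 2))   # -> 25.5 (W)
```

With these assumptions the budget reproduces exactly the 12.95 W and 25.5 W figures of the two standards.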

The newer standard, IEEE 802.3at (PoEplus), was released in October 2009. The definition of powered devices (PD) and power sourcing equipment (PSE) is the same as in the PoE standard IEEE 802.3af. One possible endspan option is pictured on the right.

Application Objectives The objectives of the IEEE standards group were as follows:

1. PoEplus will enhance 802.3af and work within its defined framework.
2. The target infrastructure for PoEplus will be ISO/IEC 11801-1995 Class D / ANSI/TIA/EIA-568.B-2 category 5 (or better) systems with a DC loop resistance no greater than 25 ohms.
3. IEEE STD 802.3 will continue to comply with the limited power source and SELV requirements as defined in ISO/IEC 60950.
4. The PoEplus power sourcing equipment (PSE) will operate in modes compatible with the existing requirements of IEEE STD 802.3af as well as in enhanced modes (see Table below).
5. PoEplus will support a minimum of 24 watts of power at the powered device (PD).
6. PoEplus PDs, which require a PoEplus PSE, will provide the user with an active indication when connected to a legacy 802.3af PSE. This indication is in addition to any optional management indication that may be provided.
7. The standard will not preclude the ability to meet FCC / CISPR / EN Class A, Class B, Performance Criteria A and B with data for all supported PHYs.
8. Extend power classification to support PoEplus modes.
9. Support the operation of midspan PSEs for 1000BASE-T.
10. PoEplus PDs within the power range of 802.3af will work properly with 802.3af PSEs.



The initial goal was to provide 30 W of power at the PD over two pairs, but this was revised down to an average power of 25.5 W. The goal of doubling this power by transmitting over four pairs was also dropped, but could be revisited in a later version of the standard.

Great care was taken to remain backward compatible and to continue support for legacy PoE, or low-power, devices. The following terminology was therefore introduced to differentiate between low-power devices and the new higher-power devices:

• Type 1: low power
• Type 2: high power

The Table on the right summarizes some of the differences between PoE and PoEplus.

With the higher currents flowing through the cabling, heating became an important consideration. Some vendors have recommended using higher-category cabling to reduce these effects. Below we look at this issue in detail.

Heat Considerations for Cabling

The transmission of power over generic cabling results in a temperature rise in the cabling that depends on the amount of power transferred and the conductor size. A cable in the middle of a bundle will naturally be somewhat warmer because its heat cannot dissipate. As the temperature of the cable increases (ambient + temperature rise), insertion loss also increases, which can reduce the maximum allowed cable length.

In addition, the maximum temperature (ambient + increase) is limited to 60°C according to standards.

Therefore, two limiting factors are given:

• Reduction of the maximum allowed cable length due to higher cable insertion loss from higher temperatures

• The maximum specified temperature of 60°C given in the standard

The temperature rise for various cable types was determined through tests performed by the IEEE 802.3at PoEplus working group (measured in bundles of 100 cables; see Table on the right). The investigations showed that the use of AWG 23 or AWG 22 cable for PoEplus is not absolutely necessary at room temperature. The problem of increased cable temperatures with PoEplus should be considered with long cable lengths or long patch cords and with high ambient temperatures, such as those found in tropical environments. With an unshielded Cat. 5e/u cable, an additional temperature increase of 10°C from PoEplus at an ambient temperature of 40°C would mean a reduction in the allowed permanent link length of approximately 7 m. With a shielded Cat. 5e/s cable at an ambient temperature of 40°C, this reduction would be only approximately 1 m. The 7 m or 1 m reduction in link length could be compensated for with a higher-category cable with a larger wire diameter. However, a careful review of the cost-benefit relationship of such a solution is recommended. It should also be considered that the length restrictions for Class E and F are much more severe than those for PoEplus and may limit the applicable permanent link length. In any case, when planning an installation for PoEplus, extra care must be taken to consider the consequences of heat dissipation, both in the cable and in the surroundings, regardless of which cable is used.
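The mechanism behind these length reductions can be illustrated with the temperature derating of insertion loss. The sketch below is a simplified model of ours using generic derating coefficients of the kind found in cabling standards (assumed here: 0.4 %/K for unshielded cable between 20°C and 40°C, 0.6 %/K above 40°C, 0.2 %/K for shielded cable); it illustrates the principle only and is more pessimistic than the measured 7 m and 1 m figures quoted above.

```python
def il_derating_factor(temp_c, shielded=False):
    """Insertion-loss multiplier relative to 20 degC, using assumed generic
    derating coefficients: unshielded 0.4 %/K (20..40 degC) plus 0.6 %/K
    above 40 degC; shielded 0.2 %/K throughout."""
    if shielded:
        return 1.0 + 0.002 * max(0.0, temp_c - 20.0)
    factor = 1.0 + 0.004 * min(max(0.0, temp_c - 20.0), 20.0)
    if temp_c > 40.0:
        factor += 0.006 * (temp_c - 40.0)
    return factor

def max_permanent_link(temp_c, shielded=False, nominal_m=90.0):
    """Longest permanent link whose insertion loss at temp_c still fits
    the insertion-loss budget of a 90 m link at 20 degC."""
    return nominal_m / il_derating_factor(temp_c, shielded)

# 40 degC ambient plus 10 K PoEplus self-heating, unshielded cable bundle
print(round(max_permanent_link(50.0, shielded=False), 1))   # about 79 m
# 40 degC ambient plus 8 K rise, shielded cable
print(round(max_permanent_link(48.0, shielded=True), 1))    # about 85 m
```

The model confirms the qualitative picture above: shielded cables heat up less and derate less, so they lose far less link length under PoEplus.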

PD Operation based on PSE Version

                          PoE PSE                                   PoEplus PSE
PoE-PD                    Operates                                  Operates
PoEplus-PD < 12.95 W      Operates                                  Operates (see Note)
PoEplus-PD > 12.95 W      PD will provide user activity indication  Operates (see Note)

PD = Powered Device, PSE = Power Sourcing Equipment
Note: Operates with extended power classification

Compatibility between PoE and PoEplus versions of the PD and PSE

                    PoE                PoEplus
Cable requirement   Cat. 3 or better   Type 1: Cat. 3 or better
                                       Type 2: Cat. 5 or better
PSE current (A)     0.35 A             Type 1: 0.35 A
                                       Type 2: 0.6 A
PSE voltage (Vdc)   44-57 Vdc          Type 1: 44-57 Vdc
                                       Type 2: 50-57 Vdc
PD current (A)      0.35 A             Type 1: 0.35 A
                                       Type 2: 0.6 A
PD voltage (Vdc)    37-57 Vdc          Type 1: 37-57 Vdc
                                       Type 2: 47-57 Vdc

Differences between PoE and PoEplus / Source: Ethernet Alliance, 8/2008

Cable Type   Profile   Approx. Temp. Rise
Cat. 5e/u    AWG 24    10°C
Cat. 5e/s    AWG 24     8°C
Cat. 6/u     AWG 24+    8°C
Cat. 6A/u    AWG 23     6°C
Cat. 6A/s    AWG 23     5°C
Cat. 7       AWG 22     4°C

Temperature rise operating PoEplus



Connector Considerations R&M investigated the effects of PoE on the connector, specifically the damage from sparking that may be caused by unmating a connection under power. In addition, R&M co-authored a technical report on this subject that will be published by IEC SC48B: "The effects of engaging and separating under electrical load on connector interfaces used in Power-over-Ethernet (PoE) applications". In this paper, the concept of the nominal contact area was introduced. During the mating operation, the point of contact between A (plug) and B (module) moves along the surface of the contacts from the point of first contact (the connect/disconnect area) to the point of final rest (the nominal contact area). These two areas are separated by the wiping zone (figure below).

Illustration of the nominal contact area concept



The investigations showed that the traditional design of modular connectors described in the IEC 60603 standards ensures that the zone where contact is broken and sparking can occur is separate from the zone where contact between plug and jack is made during normal operation (the nominal contact area). The picture on the left illustrates a good contact design, where the damage does not affect the contact zone. The picture on the right shows a bad contact design, where the contacts are folded and the damage lies in the contact zone (overlap of the nominal contact area and the connect/disconnect area).

Good contact design (R&M module, left); bad contact design due to overlapping of the nominal contact and connect/disconnect areas (right). Photos: R&M

The increased power of PoEplus may cause a larger spark upon disconnection, which will aggravate this problem. In addition, with new Category 6A, 7 and 7A connecting hardware, the contact design may deviate significantly from more traditional designs and thus be affected by the electrical discharges. Unfortunately, the standards bodies have not yet fully addressed this concern. Test methods and specifications have not been finalised to ensure that connecting hardware will meet the demands of PoEplus. Efforts to date in both the IEEE and ISO/IEC bodies have focused mainly on establishing limits for cable heating. Until the connecting hardware is also addressed, a guarantee of PoEplus support for a cabling system is premature. R&M will continue to push for resolution of this issue in the appropriate cabling standards bodies and will inform customers as new information becomes available.

Conclusions The success of PoE to date and the demand for PoEplus indicate that this technology has met a need in the market that will continue to grow and expand. There are many issues to take into account when implementing this technology, including power sourcing and backup in the rack and dealing with the additional heat produced there by the use of PoEplus-capable switches. The cabling system also needs to be carefully considered. Much effort has been invested in examining the effects of increased heat in the cabling. As we have seen, the combination of high ambient heat and the effects of PoEplus can lead to length limitations for all types of cabling. Customers are therefore encouraged to choose the appropriate cabling for their application and requirements after reviewing the specifications and guidelines provided. The same amount of effort needs to be invested in examining the effects of unmating connecting hardware under power. Unfortunately, to date, this work has not been undertaken by the standards bodies, and thus no specifications or test methods currently exist to ensure compatibility with the upcoming PoEplus standard.



3.10.4 Short Links The standard ISO/IEC 11801 Amendment 2 specifies that Class EA can only be fulfilled with Cat. 6A components when the minimum permanent link (PL) length of 15 m is observed. The use of shorter PLs is allowed when the manufacturer guarantees that the PL requirements are met.

The wish for shorter links and the question of headroom Today, many users tend to shorten links in order to save material, energy and costs. But what effect does this have on the performance and transmission reliability of the network? Shortening the distance between the two modules in a 2-connector permanent link lowers the attenuation of the cable between the modules, which in turn increases the interfering effect of the far module. The standard, however, accounts for only one module in its calculation, which is problematic: in particular, the limit values for NEXT and RL (return loss) can no longer be maintained. This is why the standard defines a minimum link length of 15 m. What happens if a high-quality module is paired with a high-quality cable? The result is a PL with a large amount of headroom over the length range from 15 m to 90 m. Using the R&M Advanced System, for example, ensures a typical NEXT headroom of 6 dB at 15 m and 11 dB at 90 m (see Figure below).

NEXT headroom of permanent links, measured in an installation using the new Cat. 6A module from R&M. The zero line shows the limit value in accordance with ISO/IEC 11801 Amendment 2.

In the R&M freenet warranty program, R&M guarantees a NEXT headroom of 4 dB for permanent links above 15 m built with components from the R&M Advanced System. Alternatively, the large headroom gives customers the possibility of setting up very short PLs which fulfill all the requirements of the standard ISO/IEC 11801 Amendment 2.

The advantage of additional headroom Basically, it makes sense to lay permanent links that are exactly as long as needed. However, that would mean that links would often be shorter than the minimum length of 15 m. When components are used that only just meet the standard requirements, the permanent links must be artificially extended to 15 m with loops to make sure that the limit values are met. This not only entails additional costs; the cable loops also take up space in the cable conduits and interfere with ventilation, which increases infrastructure costs and energy consumption.



Thanks to the large headroom at standard lengths, the use of R&M's Cat. 6A module allows the minimum length to be shortened down to 2 m. This is sufficient to cover the lengths customarily used in data centers.

On average, the cables between individual server cabinets and the network cabinet need to be 4 to 5 meters long if they are routed from above over a cable guidance system, or 7 to 8 meters if they are routed from below through the raised floor. The loops to extend the cables to 15 m are not required when the Class EA/Cat. 6A solution developed by R&M is used.

How to measure short permanent links? According to ISO/IEC 11801 Amendment 2 (Table A.5), less stringent requirements apply from 450 MHz upwards to short 2-connector permanent links with an insertion loss of less than 12 dB. Even though only the range between 450 MHz and 500 MHz is affected, the field measuring device should always be set to "Low IL". The maximum lowering of the NEXT requirement is 1.4 dB. The Figures below show an example of a measuring arrangement and the results.

Testing of short permanent link (configuration PL2) with the Fluke DTX 1800 Cable Analyzer.

Test result. The permanent link of 2.1 m in length meets all requirements of ISO/IEC 11801.



NEXT measuring of a permanent link of 2 m in length, set up with the R&M components.

RL measurement of the same permanent link



The permanent link model of ISO/IEC 11801 Amendment 2 ISO/IEC distinguishes between four configurations of the permanent link (Figure on right) that are relevant for network planning. The determining parameters for transmission quality are NEXT (near-end crosstalk) and RL (return loss). For the frequency range 1 ≤ f ≤ 300 MHz, ISO/IEC calculates the NEXT limit values [in dB] of all four configurations with a formula of the form

NEXT = −20 · lg( 10^(−NEXT_connector(f)/20) + 10^(−NEXT_cable(f)/20) )   [1]
The two terms in the bracket refer to the effect of the module at the near end and the effect of the cable. ISO/IEC specifically states that they should not be taken as individual limit values. This means that the overall limit value can be reached with very good cables and lower-quality modules, or with very good modules and lower-quality cables. Different formulas apply to the frequency range 300 ≤ f ≤ 500 MHz. ISO/IEC calculates the NEXT limit values for configuration PL2 [in dB] as follows:

87.46 − 21.57 · lg(f)   [2] (blue line in Figure)

A more moderate requirement applies to configuration PL3 with an additional consolidation point in the frequency range 300 ≤ f ≤ 500 MHz; it is calculated with this formula:

102.22 − 27.54 · lg(f)   [3] (violet line in Figure)

Less stringent requirements apply to the configurations PL1, PL2 and CP2 in the frequency range above 450 MHz: if the insertion loss (IL) at 450 MHz is lower than 12 dB, the required NEXT curve can be lowered by the amount calculated with the following formula:

1.4 · ((f − 450)/50)   [4] (yellow line in Figure)
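Formulas [2] to [4] can be evaluated directly, with f in MHz. A small sketch (the function names are ours):

```python
import math

def next_limit_pl2(f_mhz):
    """NEXT limit [dB] for configuration PL2, 300 MHz <= f <= 500 MHz (formula [2])."""
    return 87.46 - 21.57 * math.log10(f_mhz)

def next_limit_pl3(f_mhz):
    """NEXT limit [dB] for configuration PL3 with consolidation point,
    300 MHz <= f <= 500 MHz (formula [3])."""
    return 102.22 - 27.54 * math.log10(f_mhz)

def short_link_relaxation(f_mhz):
    """Allowed lowering of the NEXT limit [dB] above 450 MHz for short links
    with IL(450 MHz) < 12 dB (formula [4]); at most 1.4 dB at 500 MHz."""
    return max(0.0, 1.4 * (f_mhz - 450.0) / 50.0)

f = 500.0
print(f"PL2 limit at {f:.0f} MHz:  {next_limit_pl2(f):.2f} dB")    # -> 29.24 dB
print(f"Short-link limit:        {next_limit_pl2(f) - short_link_relaxation(f):.2f} dB")
```

At 500 MHz the relaxation reaches its maximum of 1.4 dB, matching the figure quoted in the measurement section above.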

Conclusion Short links in structured cabling are not only possible but, combined with high-quality Cat. 6A components, even recommended. Cat. 6A modules and installation cables from R&M allow permanent links to be set up with advantages that have a positive impact on costs:

• easier installation
• no unnecessary cable loops
• material savings of up to two thirds
• improved ventilation, lower energy consumption
• reduced costs of installation and operations

In the R&M freenet warranty program, R&M guarantees a NEXT headroom of 4 dB for permanent links above 15 m built with components from the R&M Advanced System. Alternatively, the large headroom gives customers the possibility of setting up very short PLs which fulfill all the requirements of the standard ISO/IEC 11801 Amendment 2.

Permanent link configurations acc. to ISO/IEC 11801 Amendment 2








NEXT limit values for the permanent link configurations 2 and 3 and for short links according to ISO/IEC 11801 Amendment 2.



3.10.5 Transmission Capacities of Class EA and FA Cabling Systems Some cabling manufacturers frequently cite 40GBase-T as the killer application for Class FA and Cat. 7A components. Other applications to legitimize Class FA are not in sight. There were earlier discussions on broadcasting cable TV over twisted pair (CATV over TP) or on using one cable for several applications (cable sharing), but lower classes and RJ45 connector systems can demonstrably handle these applications. The standardization committees also failed to require more stringent coupling attenuation for higher classes, so EMC protection cannot be used as an argument either. When Cat. 6 was defined, there was no application for it either, but Cat. 6 ultimately paved the way for the development of 10GBase-T. Compared to then, however, there is much more knowledge today about the capabilities and limitations of active transmission technology. 10GBase-T approaches the theoretical transmission capacity of Class EA like no other application before it. In order for 10GBase-T to run on Class EA cabling, the application must have active echo and crosstalk cancellation. For this, a test pattern is sent over the channel during set-up and the effects on the other pairs are stored. In operation, the stored unwanted signals (noise) are used as correction factors in digital signal processing [DSP] and subtracted from the actual signal. The improvements achieved in 10GBase-T are 55 dB for RL, 40 dB for NEXT and 25 dB for FEXT. Adjoining channels are not connected or synchronized, so no DSP can be conducted between them. Active noise cancellation is therefore impossible for crosstalk from one channel to the next (alien crosstalk).

Signal and noise spectrum of 10GBase-T The data stream is split up among the four pairs for 10GBase-T and then modulated with a pseudo-random code (Tomlinson-Harashima Precoding [THP]) to obtain an even spectral distribution of power independent of the data. Using a Fourier transformation, one can calculate the power spectral density [PSD] of the application from the time-based signal, which is similar to PAM-16. IEEE specified this spectral distribution of power over frequency in the standard for 10GBase-T. That is important in this context because all cabling parameters are defined as a function of frequency, and their influence on the spectrum can thus be calculated. If attenuation is subtracted from the transmission spectrum, for example, the spectrum of the received signal can be calculated. The parameters RL, NEXT, FEXT, ANEXT, AFEXT etc. can be subtracted to obtain the spectra of the various noise components. With the active noise cancellation of the DSP taken into account, one can calculate the actual signal and noise distribution at the receiver. The figure below contains a diagram showing these interactions.

Theoretical procedure for determining the spectral distribution of signal and noise

With this PSD, the relative magnitudes of the signal and the different noise source contributions can be compared with each other. That is what makes this procedure particularly interesting. The next figure shows the signal spectrum, the total noise spectrum and the different components of noise for an unscreened 100-m Class EA channel. IL and RL serve as examples to show how the cabling parameters are subtracted from the transmission spectrum. In the case of RL, the influence of active noise cancellation is also taken into account.
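The procedure described above, subtracting the cabling parameters (in dB) from the transmit PSD and summing the individual noise spectra power-wise in the linear domain, can be sketched as follows. All numeric values are purely illustrative placeholders, not values from the IEEE specification:

```python
import math

def db_to_mw(p_dbm):
    """Convert a power level in dBm (per Hz bin) to milliwatts."""
    return 10 ** (p_dbm / 10)

def received_psd(tx_psd_dbm_hz, il_db):
    """Receive spectrum: transmit PSD minus insertion loss, bin by bin."""
    return [p - a for p, a in zip(tx_psd_dbm_hz, il_db)]

def total_noise_psd(*components_dbm_hz):
    """Sum several noise spectra (RL, NEXT, ANEXT, ...) in the linear
    power domain, then convert back to dBm/Hz."""
    total = []
    for bins in zip(*components_dbm_hz):
        total.append(10 * math.log10(sum(db_to_mw(b) for b in bins)))
    return total

# Hypothetical three-bin example (low / mid / high frequency):
tx          = [-80.0, -85.0, -95.0]      # transmit PSD [dBm/Hz]
il          = [10.0, 20.0, 35.0]         # insertion loss [dB]
rl_noise    = [-130.0, -125.0, -120.0]   # residual RL noise after cancellation
anext_noise = [-135.0, -128.0, -118.0]   # alien NEXT (cannot be cancelled)

rx    = received_psd(tx, il)
noise = total_noise_psd(rl_noise, anext_noise)
snr   = [s - n for s, n in zip(rx, noise)]
print(snr)
```

The bin where the SNR crosses zero would correspond to the signal/noise intersection used below to define the bandwidth.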



The intersection between signal strength and noise strength often serves in cabling technology to define the bandwidth. If no active noise cancellation is done, return loss is the parameter that defines the noise power. In this case, the bandwidth is around 50 MHz. To achieve the bandwidth of 66 MHz needed for Gigabit Ethernet (1000Base-T), active noise cancellation of return loss is needed even at that level. With the active noise cancellation realized in 10GBase-T, the bandwidth improves to 450 MHz (see Figure).

Comparison of spectral noise components

If one compares the different noise parameters with each other, it is striking that alien crosstalk contributes the most to noise in Class EA as it is defined today. With modern data transmission featuring active noise cancellation, this means the achievable bandwidth is limited by alien crosstalk, which cannot be improved electronically. One can only increase bandwidth by improving alien crosstalk, i.e. ANEXT [alien NEXT] and AFEXT [alien FEXT]. The cable jacket diameter required to meet the specification for alien crosstalk in unscreened cabling already hits users' limits of acceptance. In order to increase bandwidth further, one has no choice but to switch to screened systems. With the effect of the additional screening, alien crosstalk becomes so slight in these systems that it can be ignored for these considerations for the time being. Return loss [RL] is the biggest source of internal noise in cabling, causing 61 % of the total. It is followed by FEXT, which causes 27 %, and NEXT, which generates 12 %. Unfortunately, RL is precisely one of the parameters whose full potential is already nearly exhausted; improvements are no longer easy to achieve. This situation is underscored by the fact that RL is defined exactly the same for Classes EA and FA. That means the biggest noise component remains unchanged when one switches from Class EA to Class FA.

Transmission capacity of different cabling classes C.E. Shannon was one of the scientists who laid the mathematical foundations for digital signal transmission in the 1940s. The Shannon capacity is a fundamental law of information theory based on the physical limitations of the transmission channel (entropy). It defines the maximum rate at which data can be sent over a given transmission channel. The Shannon capacity cannot be exceeded by any technical means.
According to Shannon, the maximum channel capacity can be calculated as follows:

KS = XT · B · log2(1 + S/N)   [bit/s]

Abbreviations:
KS = channel capacity (according to Shannon)
XT = channel-dependent factor (dependent on the media used, modulation methods and other factors; ranges from 0 to 1)
B = bandwidth of the signal used (3 dB points)
S = received signal power in W
N = received noise power in W

The higher the bandwidth of the signal and the transmission power used, and the smaller the noise level, the higher the possible data transmission rate. In cabling, the noise level is usually derived from the transmission power in a fixed ratio, e.g. as with RL. In this case, the signal-to-noise ratio cannot be improved by increasing the transmission power.
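The Shannon formula can be applied directly. The sketch below uses an assumed bandwidth and signal-to-noise ratio purely for illustration; reproducing the capacity values discussed in this section would require the actual integrated PSDs and the channel factor XT:

```python
import math

def shannon_capacity(bandwidth_hz, s_watts, n_watts, xt=1.0):
    """KS = XT * B * log2(1 + S/N)  [bit/s]"""
    return xt * bandwidth_hz * math.log2(1 + s_watts / n_watts)

# Illustrative numbers only: 400 MHz bandwidth, 30 dB signal-to-noise ratio
snr_db = 30.0
c = shannon_capacity(400e6, 10 ** (snr_db / 10), 1.0)
print(f"{c / 1e9:.2f} Gbit/s")   # -> 3.99 Gbit/s for these assumed values
```

Note that capacity grows only logarithmically with S/N but linearly with bandwidth, which is why the discussion below concentrates on the parameters that limit usable bandwidth.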



One can now calculate the signal and noise power from the power density spectra of the received signal and the total noise. The integral of the area under the signal spectrum corresponds to the signal power S, and that under the noise spectrum to the noise power N. S and N can then be applied directly in the Shannon formula. The interesting thing about calculating S and N from the power density spectrum is that one can change the individual cabling parameters independently of each other and investigate their influence on channel capacity. That means the Shannon capacities can be calculated for the different classes of cabling.

Comparison of the Shannon capacities of different cabling channels at 400 MHz

The Shannon capacities of different cabling configurations are compared in the Figure. This comparison shows which changes lead to a substantial increase in channel capacity and are therefore cost effective and yield actual added value. The first line in the picture corresponds to an unscreened Class EA channel in accordance with the standard. This line serves as the reference for comparison with the other configurations. The switch from unscreened to screened cabling improves alien crosstalk, thereby increasing the channel capacity by 14 %. The switch from a Cat. 6A cable to a Cat. 7A cable mainly reduces attenuation and thus increases the signal level on the receiver side. The channel capacity increases by a further 5 % in the process, to 119 %. If one also replaces the RJ45 Cat. 6A jacks with Cat. 7A jacks, the channel capacity rises by only 1 %. The explanation for this minimal increase is that the new jacks mainly improve NEXT and FEXT, but these values are already sufficiently low due to the active noise cancellation.

Outlook for possible 40GBase-T over 100 meters IEEE specified that a channel capacity of 20 Gbit/s is needed to realize 10GBase-T. That corresponds well to the 21.7 Gbit/s capacity achieved with the Class EA channel at 400 MHz. One can therefore assume that 40GBase-T requires a channel capacity of 80 Gbit/s. Using the power density spectrum, one can now investigate what the ideal situation would be for signal and noise. The signal spectrum has maximum strength if one uses a cable with a maximum wire diameter. The considerations below are based on an AWG 22 cable, assuming customers would not accept cables with larger diameters for reasons of handling and space consumption. The lowest achievable noise level is limited by thermal noise. This noise is produced by the copper atoms in the cable, which move due to their temperature.
It equals -167 dBm/Hz at room temperature. Since it is practically infeasible to cool cabling artificially, e.g. with liquid nitrogen, this noise level cannot be reduced further. To keep total noise in the range of this irreducible thermal noise, one must reduce all noise factors to the point where they lie below the level of thermal noise even in a worst-case scenario. For Class FA cabling, that is true if the following basic conditions are met:



Alien crosstalk:
PSANEXT = 30 dB higher than the FA level
PSAFEXT = 25 dB higher than the FA level

Active noise cancellation (values in parentheses: 10GBase-T):
RL = 90 dB (55 dB)
NEXT = 50 dB (40 dB)
FEXT = 35 dB (25 dB)

One can now calculate the Shannon capacity for this hypothetical cabling system based on the above assumptions (refer to Figure). One achieves a channel capacity of 80 Gbit/s at a bandwidth of around 1 GHz. For a PAM signal, a bandwidth of 1000 MHz corresponds to a symbol rate of 2000 MSymbols/s, or 2 GBd (baud). To achieve a throughput of 40 Gbit/s with this symbol rate, each pair must carry 5 bits per symbol. That corresponds to PAM-32 coding.
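The symbol-rate arithmetic above (40 Gbit/s over four pairs at 2 GBd per pair, hence 5 bits per symbol and PAM-32) can be checked in a few lines; the helper function is a sketch of ours:

```python
import math

def pam_levels(total_rate_bps, pairs, symbol_rate_baud):
    """Number of amplitude levels needed per symbol for a PAM scheme
    carrying total_rate_bps split across the given number of pairs."""
    bits_per_symbol = total_rate_bps / pairs / symbol_rate_baud
    return 2 ** math.ceil(bits_per_symbol)

# 40GBase-T sketch: 40 Gbit/s over 4 pairs at 2 GBd per pair
print(pam_levels(40e9, 4, 2e9))     # -> 32, i.e. PAM-32

# For comparison, 10GBase-T: 10 Gbit/s over 4 pairs at 800 MBd per pair
print(pam_levels(10e9, 4, 800e6))   # -> 16, matching the PAM-16 mentioned above
```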

Channel capacity according to Shannon for the proposed 40GBase-T channel over 100 m

The channel bandwidth is approximately 1.4 GHz for the conditions mentioned above. That means the portions of the signal above 1.4 GHz arrive at the receiver below the level of thermal noise. At 1 GHz, the signal-to-noise ratio amounts to 14 dB. This should allow a 40GBase-T protocol to be operated. On the other hand, the Shannon capacity at 1.4 GHz is just around 110 Gbit/s, which is not sufficient for the operation of 100GBase-T. For 100-m cabling, one can conclude that it is physically impossible to develop a 100GBase-T. By contrast, 40GBase-T appears to be technically feasible, albeit extremely challenging:

• 90 dB RL compensation >>> A/D converter with better resolution (+6 bits)

• Clock rate 2.5 times higher than for 10GBase-T

• Substantial heat generation >>> reduced port density in active equipment

• Extremely low signal level >>> EMC protection needed

The cabling attenuation exceeds 67 dB at 1000 MHz. In addition, the available voltage range is subdivided into 32 levels with PAM-32. That means the distance from one level to the next at the receiver is just 0.03 mV, which is more than 20 times less than in 10GBase-T. The protection from outside noise must therefore also be improved accordingly.
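The 0.03 mV figure follows from the numbers given above: a 2 V transmit range attenuated by 67 dB and divided into PAM-32 levels. A small sketch (the function name is ours):

```python
def level_spacing_mv(v_range_v, attenuation_db, levels):
    """Voltage distance between adjacent PAM levels at the receiver,
    after the transmit range has been attenuated by the cabling."""
    received_range = v_range_v * 10 ** (-attenuation_db / 20)
    return received_range / (levels - 1) * 1000  # in mV

# +/- 1 V transmit range, 67 dB attenuation at 1000 MHz, PAM-32
print(round(level_spacing_mv(2.0, 67.0, 32), 3))   # -> 0.029, i.e. about 0.03 mV
```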

Alternative approaches for 40GBase-T Many experts think the requirements for a 40G protocol over 100 m are too challenging for use under field conditions. Moreover, most experts consider the main use of 40GBase-T to be in data centers, to connect servers in the access area, and not in classic structured horizontal cabling. The cabling lengths required in the DC are much shorter, so a reduced transmission distance is certainly reasonable. For instance, if the PL is fixed at a maximum of 45 m and the patch cable lengths at a total of 10 m, attenuation falls to the point that at 1 GHz and PAM-32 the signal distance from level to level is the same as it is now with 10GBase-T (417 MHz and PAM-16). That seems to be a good compromise between achievable cable length and noise resistance and is a good basis for further considerations. One positive side effect of the reduced attenuation is that somewhat more noise can be allowed while still maintaining the same signal-to-noise ratio:



Alien crosstalk:
PSANEXT = 25 dB higher than the FA level
PSAFEXT = 25 dB higher than the FA level

Active noise cancellation (values in parentheses: 10GBase-T):
RL = 75 dB (55 dB)
NEXT = 45 dB (40 dB)
FEXT = 30 dB (25 dB)

A Shannon capacity of 86 Gbit/s is achieved based on the above assumptions.

Power density spectrum of 40GBase-T cabling shortened to the greatest possible extent

Since alien crosstalk must be improved by an additional 25 dB relative to the Class FA channel to achieve the required channel capacity, the Class FA channel as it is defined today will not suffice to support a 40-Gbit/s connection. An active noise cancellation of 75 dB for RL will also be required. To achieve this level, one will probably have to use a higher resolution for the A/D converter than was the case with 10GBase-T. This higher resolution will probably also improve NEXT and FEXT to a similar extent and create a certain reserve. With this reserve, the use of RJ45 Cat. 6A jacks cannot be completely ruled out. By applying the postulated parameters in the power density spectrum and then calculating the Shannon capacity, one can compare the suitability of the different types of cabling for the 40GBase-T channel.

Comparison of channel capacities at 1 GHz for a 45-m PL and active noise cancellation of RL = 75 dB, PSNEXT = 60 dB and PSFEXT = 45 dB

Class EA is only defined up to 500 MHz. The curves for the parameters between 500 MHz and 1 GHz were linearly extrapolated for these calculations and are thus debatable. The figure does clearly show, however, that cabling of Classes EA through FA does not suffice to achieve the required channel capacity for 40GBase-T. Alien crosstalk (ANEXT and AFEXT) is the limiting factor for these cabling classes. By improving alien crosstalk by 25 dB, one could achieve the required channel capacity for 40GBase-T, regardless of whether one uses a Cat. 7A or a Cat. 6A connector system.



The Cat. 7A cable has to be newly specified with better alien crosstalk characteristics in order to improve this parameter. To illustrate this difference, the newly specified cable is referred to here as Cat. 7B. That designation is not based on any standard, however. The key to future-proofing the cabling therefore lies in the specification of a new Cat. 7B cable and not necessarily in the use of a Cat. 7A connector system.

Summary

• 10GBase-T up to 100 m appears to be the limit of what is feasible for unscreened cabling.

• 40GBase-T up to 100 m is technically feasible but appears to be too challenging for field use. Presumably it will be defined with reduced cable length (e.g. with 45 m PL).

• 100GBase-T up to 100 m appears to be technically unfeasible.

• Today's Class FA standardization does not support 40GBase-T because the specification for alien crosstalk is not stringent enough >>> a new Class FB will be needed.

• It has not yet been shown that a Cat. 7A connector system is necessary to achieve 40GBase-T. Given the uncertainties about connector system requirements, one should wait for clarification in the standardization before using Cat. 7A connecting hardware. Cat. 7A cables with improved alien crosstalk (Cat. 7B) can be used to be 40GBase-T ready.

3.10.6 EMC Behavior in Shielded and Unshielded Cabling Systems

The motivation to investigate once again the EMC behavior of shielded and unshielded cabling, a topic that has been examined a number of times already, is the introduction of the new application 10GBase-T. The use of ever higher orders of modulation may well reduce the necessary bandwidth, but it makes the new protocols more and more susceptible to external interference. Here it is worth noting that the separation between symbols with 10GBase-T is approximately 100x smaller than with 1000Base-T (see figure below).

Comparison of the signal strength of various Ethernet protocols at the receiver

The figure shows the relative signal strength at the receiver for various protocols after the signal has been transmitted through 100 m of Class EA cabling. As a general principle, the higher the signal's frequency or bandwidth, the larger the attenuation through the cabling. For EMC reasons, the level of the output signal is not increased above +/- 1 V. Of this 2 V range, depending on the frequency and attenuation in the cabling, an ever smaller part reaches the receiver. In addition, the voltage differences from one symbol to the next get increasingly smaller as a result of higher-order modulation. While in the past the separation between symbols at the receiver decreased by roughly a factor of three for each step from 10M to 100M to 1G, in the final step from 1G to 10G it decreased by a factor of 100.
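The factor-of-100 statement can be made concrete with a toy model. The PAM level counts are those of the protocols (PAM-5 for 1000Base-T, PAM-16 as the basis of 10GBase-T's coding); the insertion-loss figures below are illustrative assumptions, not measured values:

```python
def rx_symbol_spacing(levels, attenuation_db, swing_v=2.0):
    """Voltage gap between adjacent PAM levels at the receiver.

    The transmitter spreads the levels across a fixed +/-1 V swing;
    cable attenuation (in dB, as a voltage ratio) shrinks that gap further.
    """
    tx_spacing = swing_v / (levels - 1)
    return tx_spacing * 10 ** (-attenuation_db / 20)

# Illustrative insertion losses over 100 m of Class EA cabling:
spacing_1g = rx_symbol_spacing(5, 24)    # 1000Base-T, PAM-5
spacing_10g = rx_symbol_spacing(16, 52)  # 10GBase-T, PAM-16 basis
print(round(spacing_1g / spacing_10g))   # roughly two orders of magnitude
```

The more levels and the more attenuation, the thinner the remaining voltage margin, which is why the same external noise hits 10GBase-T so much harder.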



If, for visualization, some noise is added, it becomes obvious that the sensitivity to interference increases tremendously at faster data-transmission rates. During 2008, a group of cable suppliers, including R&M, came together to compare the EMC behavior of various cabling types in a neutral environment. To ensure the independence and objectivity of the results, a third-party test laboratory was commissioned with the investigation, specifically the "Gesellschaft für Hochfrequenz Messtechnik GHMT" ("Organization for High Frequency Testing") located in the German town of Bexbach. The goal of the study was to answer the following questions concerning the selection of the cabling system:

• Which parameters must be estimated in order to draw meaningful conclusions about the EMC behavior of shielded and unshielded cabling systems?

• Which special measures are necessary when using shielded or unshielded cabling in order to ensure operation that conforms to legal requirements and standards?

• How does the EMC behavior of shielded and unshielded cabling compare during the operation of 1G and 10GBase-T?

The basic idea was not to pit shielded and unshielded systems against each other, but rather to clearly indicate the basic conditions that must be adhered to in order to ensure problem-free operation of 10GBase-T that is in conformance with legal requirements. By doing so, assistance can be offered to planners and end customers so they can avoid confusion and uncertainty when selecting a cabling system. The results presented are exclusively those from the independent EMC study carried out by GHMT; the interpretation of the results and the conclusions drawn are, however, based on analyses by R&M.

Equipment under test and test set-up

The examination of six cabling systems was planned:

• 1x unshielded Cat 6

• 2x unshielded Cat 6A

• 3x shielded Cat 6A with shielded twisted pairs (foil) and different degrees of coverage of the braided shield (U-FTP without braided shield, S-STP “light” with moderate braided shield, S-STP with good braided shield).

The figure below shows the results of the preliminary measurements according to ISO/IEC 11801 (2008-04). The values given are the margin vs. the Class EA limits, with the exception of coupling attenuation, which is given as an absolute value.

Cable parameters in accordance with ISO/IEC 11801 (2008-04)

Surprisingly, the transmission parameters (IL, NEXT, PS NEXT, TCL and RL) of the Cat 6A UTP systems are at a similar level to those of the shielded systems. Except for the older Cat 6 system, the values are more or less comparable to each other. Noticeable is that the PS ANEXT requirements are barely reached, if at all, by the unshielded systems. As soon as shielding is present, the ANEXT values improve by 30 to 40 dB, so much that they no longer present a problem.



With the EMC parameter "coupling attenuation", the difference between the systems becomes very clear. The Cat 6 system deviates widely from the necessary values, while the newer UTP systems meet the requirements. Here too, there is a clear difference between the shielded and unshielded cabling systems. In international standardization, TCL is used as the EMC parameter for unshielded systems instead of the coupling attenuation that is used with shielded systems. The comparison of TCL and coupling-attenuation values for Systems 0 to 2, however, puts this practice into question: the relatively small difference in TCL between System 0 and Systems 1 or 2 corresponds to a huge difference in coupling attenuation (Pass for TCL, Fail for coupling attenuation). Based on these poor results, no EMC measurements were carried out with System 0 for the remainder of the study. As is common with all EMC investigations, cabling systems cannot be examined for EMC aspects on their own, but only in connection with the active components as a total system. The following active components were used:

• Switch: Extreme Networks; Summit X450a-24t (Slot: XGM2-2bt)

• Server: IBM X3550 (Intel 10 Gigabit AT server adapter)

Both servers, which created data traffic of 10 Gbit/s, were connected to a switch through the cabling. The figure shows a diagram of the test set-up.

Functional diagram of the test set-up

In order to be able to test the EMC properties from all sides, all data-transmission components for the test were installed on a turntable (d = 2.5 m) in an anechoic EMC test chamber. An anechoic test chamber is a room that prevents electric or acoustic reflections by means of absorbing layers on the walls, and it shields the experiment from external influences. A 19” rack containing the switches and the servers was placed in the middle of the turntable, with an open 19” rack with patch panels on each side. The panels and active components were connected with 2 m patch cords. Each end of the tested 90 m installed cables was terminated to one of the RJ45 panels. To achieve a high degree of reproducibility and a well-defined test set-up, the cables were fixed to a wooden cable rack of the kind suggested as a test set-up by CENELEC TC 46X WG3. The rack was located behind the 19” racks. Basically, this is a system set-up that simulates the situation in a data center.

Radiated power in accordance with EN 55022

This test is mandatory under the EU's EMC directive and thus has the force of law. The operation of equipment and systems that do not pass this test is forbidden in the EU. The test is designed to recognize radiated interference from the tested system that could impair the operation of receiving equipment such as broadcast radios, TVs and telecommunication equipment.



The test system's receiving antenna is erected at a distance of 3 m from the equipment under test, and the radiated power is logged in both horizontal and vertical polarization. There are two limits: Class A (for office environments) with somewhat less stringent requirements, and Class B (for residential areas) with correspondingly tougher requirements.

Emission measurements for a typical shielded and unshielded system (limits: red: Class B, violet: Class A)

As an example, the figure above compares the emission measurements of System 2 and System 5 for a 10GBase-T transmission. It is obvious that the unshielded system exceeds the limits for Class B several times, while the shielded system provides better protection, especially in the upper frequency range, and thus meets the requirements. Here it makes no difference whether the shielded cabling was grounded on one side or both. Because 10GBase-T uses the frequency range up to more than 400 MHz for data transmission, it can be assumed that for unshielded systems it would not be possible to reduce the radiation enough even if filtering techniques were applied. The limits for Class A, in contrast, are met by both cabling types. Other measurements show that for 1000Base-T, both shielded and unshielded cabling can comply with the limits for Class B. From these results it can be concluded that, at least in the EU, unshielded cabling systems should not be operated with 10GBase-T in residential environments. The responsibility for EMC compliance lies with the system operator. In an office environment, and that also includes a data center, it is permissible to operate an unshielded cabling system with 10GBase-T.

Immunity from external interference

The EMC immunity tests are based on the EN 61000-4-X series of standards and were carried out following the E1/E2/E3 conditions for different electromagnetic environments in accordance with the MICE table of EN 50173-1 (see next figure). In these tests the continuous transmission of data between the servers was monitored with a protocol analyzer, and the transmission system was then subjected to defined stress conditions. It was recorded from what stress level onward the transmission rate was impaired.



Stress conditions for Classes E1 to E3 in accordance with the MICE table from EN 50173-1

Immunity against high-frequency radio waves

The test outlined in EN 61000-4-3 serves to examine the immunity of the equipment under test (EUT) against radiated electromagnetic fields in the frequency range from 80 MHz to 2.0 GHz. This test simulates the influence of interference sources such as radio broadcast and TV transmitters, mobile phones, wireless networks and others. The transmitting antenna was set up at a distance of 3 m from the EUT, which was then irradiated from all four sides. The field strength, measured at the location of the equipment under test, was selected in accordance with the MICE table (see figure, "Radiated high frequency").

Results in accordance with EN 61000-4-3 (left) / Operational test of a mobile phone (right)

All shielded cabling variants are well suited for 10GBase-T operation in offices and light industrial areas. For harsher industrial areas, an additional overall braided shield (S-FTP design) is necessary. Single- or double-sided grounding has no influence on the immunity against external radiation. Good unshielded cabling is suitable for 1000Base-T in offices and areas used for light industry. For the use of 10GBase-T with unshielded cabling, however, additional protective measures are necessary, e.g. metallic raceway systems and/or additional separation from the sources of interference. An additionally performed operational test confirmed the high sensitivity of the unshielded systems to wireless communications equipment in the 2 m to 70 cm bands when operating with 10GBase-T. If a personal radio or mobile phone was operated at a distance of 3 m (see figure above), with unshielded cabling there was an interruption in the data transmission, while all shielded systems experienced no impairment of the transmission.

Immunity against interference due to power cables

A test in accordance with EN 61000-4-4 was carried out in order to check the immunity of the equipment under test (EUT) against repeated, fast transients as produced by the switching of inductive loads (motors), by relay contact chatter and by electronic ballasts for fluorescent lamps. In order to achieve reproducible coupling between the power cable and the EUT, a standardized capacitive coupling clamp was used in this test. The disturbances covered voltage peaks of 260 to 4000 V with a wave shape of 5/50 ns and a separation of 0.2 ms. The voltage levels in accordance with the MICE table were applied (see figure, "Fast transient (burst)").



Results in accordance with EN 61000-4-4 (left) / Results of the operational test "mesh cable tray" (right)

The figure on the left shows the results of this test. All shielded systems allow the use of 10GBase-T in all environmental conditions (E1, E2, E3). At the same time, higher quality shielding provided better immunity against fast transients. High-quality unshielded cabling allows the use of 1000Base-T in an office environment. For an industrial environment and for 10GBase-T, shielded cabling is necessary. Unshielded cabling, if it is to support 10GBase-T, needs additional protective measures such as the careful separation of the data cables from the power cables. A double-sided ground improves the immunity of shielded cabling against external fast transients beyond the minimum requirements of the standard. If the shield is not continuous, its effectiveness with 10GBase-T is negated, and the protection is then the same as with unshielded cabling. At lower frequencies (such as those used in 1000Base-T), a certain protective effect of the non-continuous shield is apparent, especially with double-sided grounding. An operational test with fluorescent lamps located 0.5 m from the data cable showed that the conditions in the standard-based test were entirely realistic. The interference that arises when a fluorescent lamp is switched on influenced the 10GBase-T data transmission in the same way as during the test according to the standard. Not only the lamp itself, but also its power cable, created interference. It must therefore be ensured that there is sufficient separation of both from the data cabling. In order to compare the standard test using the coupling clamp to an actual installation situation, the "mesh cable tray" experiment was also carried out. The data cables and a power cable were laid in a mesh cable tray with a total length of 30 m and with a constant separation of 0 to 50 cm. The interference signal according to the standard was then applied to the power cable. The figure shows the results of these measurements. A comparison with the results of the test according to the standard shows that the standardized test simulates a cable separation of approximately 1 to 2 cm. In order to guarantee the operation of 10GBase-T, a separation between the data and power cables of at least 30 cm must be maintained with an unshielded system. The shielded cabling met the requirements even without any separation between the cables. According to EN 50174-2, a separation of only 2 cm is defined for the unshielded system in this configuration, which however is insufficient for 10GBase-T. In other words, with unshielded cables, when using 10GBase-T, a far greater separation must be maintained than is defined in the standard. A further test in accordance with EN 61000-4-6 was carried out to check the system's immunity against conducted RF interference in the range from 150 kHz to 80 MHz on power cables located nearby. Power cables can act as antennas for high-frequency interference coming from external sources (such as shortwave and VHF transmitters) or can intentionally carry a powerline signal. In this test, too, the previously mentioned coupling clamp was used. The stress levels were chosen in accordance with the MICE table (see figure, "Conducted high frequency"). The results corresponded to the well-known pattern: the shielded cabling meets all requirements for 10GBase-T. Unshielded cabling meets the requirements for offices and areas with light industry for 1000Base-T; for 10GBase-T transmissions, however, additional protective measures such as increased separation of the data cabling from the power cabling are necessary.

Immunity from magnetic fields arising from power cables

This test in accordance with EN 61000-4-8 checks the ability of a system to function in the presence of strong magnetic fields at 50 Hz. These magnetic fields can be generated by power lines (cables or busbars) or power-distribution equipment (transformers, distribution panels). The stress levels were selected in accordance with the MICE table (see figure, "Magnetic fields"). All cabling fulfills the highest environmental class (E3) with both 1000Base-T and 10GBase-T. No difference was determined between the susceptibility of shielded and unshielded cabling. No heightened susceptibility of the shielded cabling based on ground loops was observed.
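The data/power separation results reported in the burst and mesh-cable-tray tests can be condensed into a small lookup. This is a sketch summarizing the study's findings above; the 1000Base-T figure is taken as being of the order of the EN 50174-2 value, not a measured result:

```python
# Minimum data-to-power cable separation suggested by the test results (cm).
REQUIRED_SEPARATION_CM = {
    ("unshielded", "10GBase-T"): 30,  # mesh-tray test: at least 30 cm
    ("unshielded", "1000Base-T"): 2,  # order of the EN 50174-2 figure
    ("shielded", "10GBase-T"): 0,     # passed with no separation at all
    ("shielded", "1000Base-T"): 0,
}

def required_separation_cm(cabling: str, protocol: str) -> int:
    """Look up the minimum separation observed to be sufficient."""
    return REQUIRED_SEPARATION_CM[(cabling, protocol)]

print(required_separation_cm("unshielded", "10GBase-T"))  # -> 30
```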



Immunity against electrostatic discharge

This test in accordance with EN 61000-4-2 checks the immunity of a system against electrostatic discharge. This phenomenon, which we all know from everyday life when a spark jumps from a finger to a conductive surface, can be generated in a reproducible manner with a test rig with metallic test fingers. Environmental and climatic conditions – such as low humidity, insulating plastic floors and clothing made of synthetic fibers – can promote electrostatic charging. The test points were selected to simulate normal contact with the cabling during operation and maintenance. At each test point, 10 sparks per polarity were generated with a separation of more than 1 second. The test levels were set in accordance with the MICE table (see figure, "Electrostatic discharge – contact / air"). The shielded cabling systems did not show sensitivity to electrostatic discharges; no faults arose. With unshielded cable, the active devices reacted very sensitively to discharges as soon as these could reach the signal conductors. The good performance of shielded cabling can be attributed to the fact that the shield serves as a bypass path for the flashover, so that no energy makes its way inside the cable. For 10GBase-T operation with unshielded cabling, additional measures must ensure that no electrostatic discharges can take place. Suitable protective measures are well known from electronics manufacturing and include grounding stations, ESD wrist straps, antistatic floors, etc.

Summary

It has been shown that the introduction of 10GBase-T in fact has a considerable impact on the selection of cabling. The increased sensitivity of 10GBase-T transmissions compared to 1000Base-T was clearly evident with unshielded cabling in terms of immunity against external interference. In order to guarantee the operation of 10GBase-T, it is not sufficient to pay attention to the cabling alone; the environmental conditions must also be considered and the cabling components must be properly selected. Coupling attenuation can serve as a qualitative comparative parameter for the EMC behavior of cabling. Summing up, this investigation has shown that shielded cabling can be used for 10GBase-T without any problems in every environmental class. The following applies: the better the quality of the shielding, the smaller the emissions and the better the immunity of the cabling against external interference. Unshielded cabling, in contrast, is suited for 10GBase-T only outside residential areas and only in conjunction with additional preventive measures. Within the EU, such cabling shall be used only outside residential areas, in dedicated work areas (like offices, data centers, etc.). In choosing between shielded and unshielded cabling for 10GBase-T, the need for additional protective measures and the resulting operational limitations must be taken into consideration.

Recommendations for the operation of 10GBase-T

In industrial environments (Classes E2 and E3), shielded cabling should be used. In harsher industrial environments (E3), an S-FTP shield design with an overall braided shield is necessary and, if possible, double-sided grounding should be applied to the cabling. In residential areas, unshielded cabling should not be used. In office areas and data centers with unshielded cabling, the above-mentioned additional protective measures should be applied.



3.10.7 Consolidating the Data Center and Floor Distributors

When a new floor distributor system is planned as part of a cabling redesign, the new floor distributor is usually placed in the data center or server room. This offers the advantage of using a quality infrastructure that generally already exists, including a raised floor, cooling system, UPS, etc. However, modern EMC-based designs advise against this if copper is used as the tertiary medium. The reason is that these designs are based on a division of the building into different lightning protection zones. The overvoltage caused by an indirect lightning strike will be greatest in the outer area of the building (lightning protection zone 0) and will lead to an extremely high induced current. To put it simply, metallic lines entering a building from the outside will conduct this high current into the building (lightning protection zone 1). Therefore, in a conventional telephone cabling system, surge protection was provided at the building entry. However, there are also other zones in the building itself that would have a problem with the residual current remaining behind the surge protection element; electronic components could be destroyed by this current. This led to the definition of further lightning protection zones, in which the remaining current diminishes after each surge protection element in subsequent zones. Data center server units have a very high protection requirement. These units are "packed" in lightning protection zone 2, so one must take care to minimize the lightning's residual current within the data center. Standards-compliant designs make provisions that each metallic conductor routed into or out of the data center be secured with a surge protection element. If a floor distributor with a twisted-pair concept is placed in a data center, each twisted-pair line would therefore have to be protected with a surge protection element of sufficient quality (suppliers include Phoenix and other companies). This is very rarely put into practice in the data cabling system (in contrast to the power cabling system!), yet the risk does exist.

Recommendation

When setting up a high-availability data center, disregarding the generally established lightning protection zone concept (DIN EN 62305 / VDE 0185-305) would be negligent; a floor distributor should therefore not be placed in the data center if at all possible.



4. Appendix

References

Content from the following encyclopedias, studies, manuals and articles was used in this handbook or can be consulted as additional references:

BFE
• Energy-Efficient Data Centers through Sensitization via Transparent Cost Accounting (original German title: Stromeffiziente Rechenzentren durch Sensibilisierung über eine transparente Kostenrechnung)

BITKOM
• Operationally Reliable Data Centers (original German title: Betriebssichere Rechenzentren)
• Compliance in IT Outsourcing Projects (original German title: Compliance in IT-Outsourcing-Projekten)
• Energy Efficiency in the Data Center (original German title: Energieeffizienz im Rechenzentrum)
• IT Security Compass (original German title: Kompass der IT-Sicherheitsstandards)
• Liability Risk Matrix (original German title: Matrix der Haftungsrisiken)
• Planning Guide for Operationally Reliable Data Centers (original German title: Planungshilfe betriebssichere Rechenzentren)
• Certification of Information Security in the Enterprise (original German title: Zertifizierung von Informationssicherheit im Unternehmen)

CA
• The Avoidable Cost of Downtime

COSO
• Internal Monitoring of Financial Reporting – Manual for Smaller Corporations – Volume I: Summary (original German title: Interne Überwachung der Finanzberichterstattung – Leitfaden für kleinere Aktiengesellschaften – Band I Zusammenfassung)

IT Governance Institute
• Cobit 4.1

Symantec
• IT Risk Management Report 2: Myths and Realities

UBA
• Material Stock of Data Centers in Germany (original German title: Materialbestand der Rechenzentren in Deutschland)
• Future Market for Energy-Efficient Data Centers (original German title: Zukunftsmarkt Energieeffiziente Rechenzentren)

Information from suppliers like IBM, HP, CISCO, DELL, I.T.E.N.O.S., SonicWall and others was also added.

Standards

ANSI/TIA-942 Telecommunication Standard for Data Centers
ANSI/TIA-942-1 Data Center Coaxial Cabling Specification and Application Distances
ANSI/TIA-942-2 Telecommunication Standard for Data Centers, Addendum 2 – Additional Guidelines for Data Centers
ANSI/TIA-568-C.0 Generic Telecommunications Cabling for Customer Premises
ANSI/TIA-568-C.1 Commercial Building Telecommunications Cabling Standard
ANSI/TIA-568-C.2 Balanced Twisted-Pair Telecommunications Cabling and Components Standard
ANSI/TIA-568-C.3 Optical Fiber Cabling and Components Standard
ANSI/TIA/EIA-606-A Administration Standard for Commercial Telecommunications Infrastructure
ANSI/EIA/TIA-607-A Grounding and Bonding Requirements for Telecommunications
ANSI/TIA-569-B Commercial Building Standard for Telecommunications Pathways and Spaces
EIA/TIA-568B.2-10
EN 50310 Application of equipotential bonding and earthing in buildings with information technology equipment
EN 50173-1 + EN 50173-1/A1 (2009) Information technology – Generic cabling systems – Part 1: General requirements
EN 50173-2 + EN 50173-2/A1 Information technology – Generic cabling systems – Part 2: Office premises
EN 50173-5 + EN 50173-5/A1 Information technology – Generic cabling systems – Part 5: Data centers
EN 50174-1 Information technology – Cabling installation – Part 1: Specification and quality assurance
EN 50174-2 Information technology – Cabling installation – Part 2: Installation planning and practices inside buildings
EN 50346 + A1 + A2 Information technology – Cabling installation – Testing of installed cabling
IEEE 802.3an
ISO/IEC 11801 2nd Edition + Amendment 1 (04/2008) + Amendment 2 (02/2010) Information technology – Generic cabling for customer premises
ISO/IEC 24764 Information technology – Generic cabling systems for data centers
ISO/IEC 14763-1 + AMD 1 Information technology – Implementation and operation of customer premises cabling – Part 1: Administration; Amendment 1
ISO/IEC 14763-2 Information technology – Implementation and operation of customer premises cabling – Part 2: Planning and installation
ISO/IEC 14763-3 Information technology – Implementation and operation of customer premises cabling – Part 3: Testing of optical fibre cabling (ISO/IEC 14763-3:2006 + A1:2009)
ISO/IEC 18010 Information technology – Pathways and spaces for customer premises cabling
IEC 61935-1 Specification for the testing of balanced and coaxial information technology cabling – Part 1: Installed balanced cabling as specified in EN 50173 standards (IEC 61935-1:2009, modified); German version EN 61935-1:2009

Please find additional information on R&M products and solutions on our website: www.rdm.com


Headquarters
Reichle & De-Massari AG
Binzstrasse 31
CHE-8620 Wetzikon/Switzerland
Phone +41 (0)44 933 81 11
Fax +41 (0)44 930 49 41


Your local R&M partners: Australia, Austria, Belgium, Bulgaria, Czech Republic, China, Denmark, Egypt, France, Finland, Germany, Great Britain, Hungary, India, Italy, Japan, Jordan, Kingdom of Saudi Arabia, Korea, Netherlands, Norway, Poland, Portugal, Romania, Russia, Singapore, Spain, Sweden, Switzerland, United Arab Emirates
