{{Refimprove|date=July 2015}}
 +
[[File:NetworkOperations.jpg|thumb|right|An operation engineer overseeing a network operations control room of a data center]]
 +
 +
A '''data center''' ([[American English]]) or '''data centre''' ([[British English]]) is a [[Building|facility]] used to house [[Computer|computer systems]] and associated components, such as [[telecommunication]]s and [[computer data storage|storage systems]]. It generally includes [[Redundancy (engineering)|redundant]] or backup components and infrastructure for [[power supply]], data communications connections, environmental controls (e.g. air conditioning, fire suppression) and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town.<ref name=NYT92212>{{cite news|title=Power, Pollution and the Internet|url=https://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html|accessdate=2012-09-25|newspaper=The New York Times|date=September 22, 2012|author=James Glanz}}</ref><ref name="ReferenceDC2">{{cite journal|url=https://www.academia.edu/6982393/Power_Management_Techniques_for_Data_Centers_A_Survey|title=Power Management Techniques for Data Centers: A Survey|first=Sparsh|last=Mittal|publisher=}}</ref>
 +
 +
==History==
 +
[[File:NASAComputerRoom7090.NARA.jpg|thumb|NASA mission control computer room circa 1962]]
 +
{{Unreferenced section|date=August 2014}}
 +
Data centers have their roots in the huge computer rooms of the 1940s, typified by [[ENIAC]], one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard [[19-inch rack|racks]] to mount equipment, [[raised floor]]s, and [[cable tray]]s (installed overhead or under the elevated floor). A single [[Mainframe computer|mainframe]] required a great deal of power, and had to be cooled to avoid overheating. Security became important&nbsp;– computers were expensive, and were often used for [[military]] purposes. Basic design-guidelines for controlling access to the computer room were therefore devised.
 +
 +
During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as [[information technology]] (IT) [[IT operations|operations]] started to grow in complexity, organizations grew aware of the need to control IT resources. The advent of [[Unix]] in the early 1970s led to the proliferation of freely available Unix-compatible [[personal computer|PC]] operating systems, such as [[Linux]], during the 1990s. The machines running them were called "[[Server (computing)|servers]]", as [[timesharing]] [[operating system]]s like Unix rely heavily on the [[client-server model]] to facilitate sharing unique resources between multiple users. The availability of inexpensive [[Networking hardware|networking]] equipment, coupled with new standards for network [[structured cabling]], made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time.{{citation needed|date=September 2015}}
 +
 +
The boom of data centers came during the [[dot-com bubble]] of 1997–2000. [[Company|Companies]] needed fast [[Internet]] connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called '''Internet data centers''' (IDCs), which provided [[customer|commercial clients]] with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward private data centers, and were adopted largely because of their practical results. Data centers for cloud computing are called '''cloud data centers''' (CDCs). Nowadays, however, the distinction between these terms has almost disappeared, and they are being integrated into the single term "data center".
 +
 +
With an increase in the uptake of [[cloud computing]], business and government organizations scrutinize data centers to a higher degree in areas such as security, availability, environmental impact and adherence to standards. Standards documents from accredited [[professional]] groups, such as the [[Telecommunications Industry Association]], specify the requirements for data-center design. Well-known operational metrics for [[data availability|data-center availability]] can serve to evaluate the [[Business Impact Analysis|commercial impact]] of a disruption. Development continues in operational practice, and also in environmentally-friendly data-center design. Data centers typically cost a lot to build and to maintain.{{citation needed|date=September 2015}}
 +
 +
==Requirements for modern data centers==
 +
[[File:Datacenter-telecom.jpg|thumb|left|Racks of telecommunications equipment in part of a data center]]
 +
[[IT operations]] are a crucial aspect of most organizational operations around the world. One of the main concerns is [[business continuity]]; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of mechanical cooling and power systems (including emergency backup power generators) serving the data center along with fiber optic cables.
 +
 +
The [[Telecommunications Industry Association]]'s Telecommunications Infrastructure Standard for Data Centers<ref>{{cite web|url=http://www.tia-942.org|title=TIA-942 Certified Data Centers -  Consultants - Auditors -  TIA-942.org|website=www.tia-942.org}}</ref> specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.<ref>{{cite web|url=http://www.tiaonline.org/standards/ |title=Archived copy |accessdate=2011-11-07 |deadurl=yes |archiveurl=https://web.archive.org/web/20111106042758/http://www.tiaonline.org/standards/ |archivedate=2011-11-06 |df= }}</ref>
 +
 +
Telcordia GR-3160, ''NEBS Requirements for Telecommunications Data Center Equipment and Spaces'',<ref>{{cite web|url=http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-3160&|title=GR-3160 - Telecommunications Data Center - Telcordia|website=telecom-info.telcordia.com}}</ref> provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
 +
* Operate and manage a carrier's telecommunication network
 +
* Provide data center based applications directly to the carrier's customers
 +
* Provide hosted applications for a third party to provide services to their customers
 +
* Provide a combination of these and similar data center applications
 +
 +
Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation. Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers.
 +
 +
Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment, and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over-construction or, perhaps worse, under-construction that fails to meet future needs.
 +
 +
The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.<ref>{{cite book | first=Victor | last=Kasacavage | year=2002 | page=227 | title=Complete book of remote access: connectivity and security | series=The Auerbach Best Practices Series | publisher=CRC Press | isbn=0-8493-1253-1
 +
}}</ref><ref>{{cite book |author1=Burkey, Roxanne E. |author2=Breakfield, Charles V. | year=2000 | title=Designing a total data solution: technology, implementation and deployment | page=24 | series=Auerbach Best Practices | publisher=CRC Press | isbn=0-8493-0893-3 }}</ref>
 +
 +
There is a trend to modernize data centers in order to take advantage of the performance and [[Electrical efficiency|energy efficiency]] increases of newer IT equipment and capabilities, such as [[cloud computing]]. This process is also known as data center transformation.<ref name="mspmentor.net">{{cite web|url=http://www.mspmentor.net/2011/08/17/hp-updates-data-transformation-solutions/|title=Mukhar, Nicholas. "HP Updates Data Center Transformation Solutions," August 17, 2011 |publisher=}}</ref>
 +
 +
Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company [[International Data Corporation]] (IDC) puts the average age of a data center at nine years old.<ref name="mspmentor.net"/> [[Gartner]], another research company, says data centers older than seven years are obsolete.<ref>{{cite web|url=https://www.forbes.com/2010/03/12/cloud-computing-ibm-technology-cio-network-data-centers.html |title=Sperling, Ed. "Next-Generation Data Centers," Forbes, March 15. 2010 |publisher=Forbes.com |date= |accessdate=2013-08-30}}</ref> The growth in data (163 zettabytes by 2025<ref>{{Cite web|url=https://www.seagate.com/files/www-content/our-story/trends/files/Seagate-WP-DataAge2025-March-2017.pdf|title=IDC white paper, sponsored by Seagate|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>) is one factor driving the need for data centers to modernize.
 +
 +
In May 2011, data center research organization [[Uptime Institute]] reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.<ref>{{cite web|url=http://www.cio.com/article/681897/Data_Centers_Turn_to_Outsourcing_to_Meet_Capacity_Needs|title=Data Centers Turn to Outsourcing to Meet Capacity Needs|first=James|last=Niccolai|publisher=}}</ref>
 +
 +
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.<ref>{{cite web|url=http://www.datacenterknowledge.com/archives/2010/08/03/three-signs-it%E2%80%99s-time-to-transform-your-data-center/|title=Tang, Helen. "Three Signs it's time to transform your data center," August 3, 2010, Data Center Knowledge |publisher=}}</ref> The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, [[automation]] and security.
 +
* Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. It also helps to reduce the amount of hardware and the number of software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer equipment that provides increased capacity and performance. Computing, networking and management platforms are standardized so they are easier to manage.<ref name="datacenterknowledge.com">[http://www.datacenterknowledge.com/archives/2007/05/16/complexity-growing-data-center-challenge/ Miller, Rich. "Complexity: Growing Data Center Challenge," Data Center Knowledge, May 16, 2007]</ref>
 +
* Virtualization: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. Virtualization helps to lower capital and operational expenses,<ref>[http://virtualization.tmcnet.com/topics/virtualization/articles/193652-carousels-expert-walks-through-major-benefits-virtualization.htm Sims, David. "Carousel's Expert Walks Through Major Benefits of Virtualization," TMC Net, July 6, 2010]</ref> and reduce energy consumption.<ref>{{cite web|last=Delahunty |first=Stephen |title=The New urgency for Server Virtualization |work=InformationWeek |date=August 15, 2011 |url=http://www.informationweek.com/news/government/enterprise-architecture/231300585 |archive-url=https://web.archive.org/web/20120402220551/http://www.informationweek.com/news/government/enterprise-architecture/231300585 |archive-date=2012-04-02 |deadurl=yes }}</ref> Virtualization technologies are also used to create virtual desktops, which can then be hosted in data centers and rented out on a subscription basis.<ref>{{cite web|title=HVD: the cloud's silver lining |url=http://www.intrinsictechnology.co.uk/FileUploads/HVD_Whitepaper.pdf |archive-url=http://webarchive.nationalarchives.gov.uk/20121002231021/http%3A//www.intrinsictechnology.co.uk/FileUploads/HVD_Whitepaper.pdf |dead-url=yes |archive-date=2012-10-02 |publisher=Intrinsic Technology |accessdate=2012-08-30 }}</ref> Data released by investment bank Lazard Capital Markets reported that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.<ref>{{cite web|url=http://www.datacenterknowledge.com/archives/2008/12/02/gartner-virtualization-disrupts-server-vendors/|title=Gartner: Virtualization Disrupts Server Vendors|date=2 December 2008|publisher=}}</ref>
 +
* Automation: Data center automation involves automating tasks such as [[provisioning]], configuration, [[Patch (computing)|patching]], release management and compliance. Because enterprises suffer from a shortage of skilled IT workers,<ref name="datacenterknowledge.com"/> automating these tasks makes data center operations more efficient.
 +
* Securing: In modern data centers, the security of data on virtual systems is integrated with existing security of physical infrastructures.<ref>{{cite web|url=http://lippisreport.com/2011/05/securing-the-data-center-transformation-aligning-security-and-data-center-dynamics/|title=Ritter, Ted. Nemertes Research, "Securing the Data-Center Transformation Aligning Security and Data-Center Dynamics"|publisher=}}</ref> The security of a modern data center must take into account physical security, network security, and data and user security.
 +
 +
==Carrier neutrality==
 +
Today many data centers are run by [[Internet service provider]]s solely for the purpose of hosting their own and third party [[Server (computing)|servers]].
 +
 +
However, data centers were traditionally either built for the sole use of one large company, or as [[carrier hotel]]s or [[Network-neutral data center]]s.
 +
 +
These facilities enable interconnection of carriers and partners, and act as regional fiber hubs serving local business in addition to hosting content [[Server (computing)|servers]].
 +
 +
==Data center levels and tiers==
 +
<!-- linked from [[data availability]] -->
 +
The [[Telecommunications Industry Association]] is a trade association accredited by ANSI (American National Standards Institute). In 2005 it published ANSI/TIA-942, Telecommunications Infrastructure Standard for Data Centers, which defined four levels of data centers in a thorough, quantifiable manner.<ref>{{cite web|url=http://global.ihs.com/doc_detail.cfm?currency_code=USD&customer_id=2125452B2C0A&oshid=2125452B2C0A&shopping_cart_id=292558332C4A2020495A4D3B200A&country_code=US&lang_code=ENGL&item_s_key=00414811&item_key_date=940819&input_doc_number=TIA-942&input_doc_title= |title=Telecommunications Infrastructure Standard for Data Centers |website=ihs.com |date=2005-04-12 |accessdate=2017-02-28}}</ref> TIA-942 was amended in 2008, 2010, 2014 and 2017. ''TIA-942: Data Center Standards Overview'' describes the requirements for the data center infrastructure. The simplest is a Level 1 data center, which is basically a [[server room]] following basic guidelines for the installation of computer systems. The most stringent level is a Level 4 data center, which is designed to host the most mission-critical computer systems, with fully redundant subsystems and the ability to operate continuously for an indefinite period of time during primary power outages.
 +
 +
The [[Uptime Institute]], a data center research and professional-services organization based in Seattle, WA defined what is commonly referred to today as "Tiers" or more accurately, the "Tier Standard". Uptime's Tier Standard levels describe the availability of data processing from the hardware at a location. The higher the Tier level, the greater the expected availability. The Uptime Institute Tier Standards are shown below.<ref>A document from the Uptime Institute describing the different tiers (click through the download page) {{cite web|url=http://uptimeinstitute.org/index.php?option=com_docman&task=doc_download&gid=82|title=Data Center Site Infrastructure Tier Standard: Topology|date=2010-02-13|publisher=Uptime Institute|format=PDF|accessdate=2010-02-13|deadurl=yes|archiveurl=https://web.archive.org/web/20100613072610/http://uptimeinstitute.org/index.php?option=com_docman&task=doc_download&gid=82|archivedate=2010-06-13|df=}}</ref><ref>The rating guidelines from the Uptime Institute {{cite web|url=http://professionalservices.uptimeinstitute.com/UIPS_PDF/TierStandard.pdf |title=Data Center Site Infrastructure Tier Standard: Topology |date=2010-02-13 |publisher=Uptime Institute |format=PDF |accessdate=2010-02-13 |deadurl=yes |archiveurl=https://web.archive.org/web/20091007121511/http://professionalservices.uptimeinstitute.com/UIPS_PDF/TierStandard.pdf |archivedate=2009-10-07 |df= }}</ref>
 +
 +
For the 2014 TIA-942 revision, the TIA organization and Uptime Institute mutually agreed{{citation needed|date=July 2017}} that TIA would remove any use of the word "Tier" from their published TIA-942 specifications, reserving that terminology to be solely used by Uptime Institute to describe its system.
 +
 +
Other classifications exist as well. For instance, the German Datacenter Star Audit program uses an auditing process to certify five levels of "gratification" that affect data center criticality.
 +
 +
{| class="wikitable"
|+ Uptime Institute's Tier Standards
|-
! Tier level
! Requirements
|-
! I
|
* Single non-redundant distribution path serving the critical loads
* Non-redundant critical capacity components
|-
! II
|
* Meets all Tier I requirements, in addition to:
* Redundant critical capacity components
* Critical capacity components must be able to be isolated and removed from service while still providing N capacity to the critical loads.
|-
! III
|
* Meets all Tier II requirements, in addition to:
* Multiple independent, distinct distribution paths serving the IT equipment critical loads
* All IT equipment must be dual-powered and provided with two redundant, distinct UPS feeders. Single-corded IT devices must use a point-of-use transfer switch to allow the device to receive power from and select between the two UPS feeders.
* Each and every critical capacity component, distribution path and component of any critical system must be able to be isolated for planned events (replacement, maintenance, or upgrade) while still providing N capacity to the critical loads, in a manner fully compatible with the topology of the site's architecture.
* Onsite energy production systems (such as engine generator systems) must not have runtime limitations at the site conditions and design load.
|-
! IV
|
* Meets all Tier III requirements, in addition to:
* Multiple independent, distinct and active distribution paths serving the critical loads
* Compartmentalization of critical capacity components and distribution paths
* Critical systems must be able to autonomously provide N capacity to the critical loads after any single fault or failure
* Continuous cooling is required for IT and UPS systems.
|}
 +
 +
The industry's data center resiliency systems were proposed at a time when availability was expressed as a theoretical figure, with a certain number of "nines" to the right of the decimal point. Because it has generally been agreed that this approach was somewhat misleading and too simplistic, vendors today usually discuss availability in terms of details that they can actually affect, and in much more specific terms. Hence, the leveling systems available today no longer define their results in percentages of uptime.
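As an illustration of what such "nines" imply in practice, the short Python sketch below converts an availability percentage into annual downtime; the percentages used are examples only and are not taken from any Tier specification.

<syntaxhighlight lang="python">
# Convert an availability percentage ("nines") into allowable downtime per year.
# Illustrative figures only; these are not Uptime Institute or TIA values.
HOURS_PER_YEAR = 365.25 * 24

def annual_downtime_hours(availability_percent):
    """Hours per year a system may be down at the given availability."""
    return HOURS_PER_YEAR * (1.0 - availability_percent / 100.0)

for availability in (99.0, 99.9, 99.99, 99.999):
    print(f"{availability:7.3f}% available -> {annual_downtime_hours(availability):7.2f} h downtime/year")
</syntaxhighlight>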
 +
 +
Note: The Uptime Institute also classifies the Tiers for each of the three phases of a data center: its design documents, the constructed facility and its ongoing operational sustainability.<ref name="uptimeinstitute">{{cite web|url=http://uptimeinstitute.com/TierCertification/|title=Uptime Institute - Tier Certification|publisher=uptimeinstitute.com|accessdate=2014-08-27}}</ref>
 +
 +
==Design considerations==
 +
[[File:Rack001.jpg|thumb|right|A typical server rack, commonly seen in [[colocation center|colocation]]]]
 +
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in [[19 inch rack]] cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size from [[Rack unit|1U servers]] to large freestanding storage silos which occupy many square feet of floor space. Some equipment such as [[mainframe computer]]s and [[computer storage|storage]] devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use [[intermodal container|shipping containers]] packed with 1,000 or more servers each;<ref>{{cite web|url=https://www.youtube.com/watch?v=zRwPSFpLX8I|title=Google Container Datacenter Tour (video)}}</ref> when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).<ref>{{cite web| title=Walking the talk: Microsoft builds first major container-based data center| url=http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519| archiveurl=https://web.archive.org/web/20080612193106/http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519| archivedate=2008-06-12| accessdate=2008-09-22}}</ref>
 +
 +
Local building codes may govern the minimum ceiling heights.
 +
 +
===Design programming===
 +
Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project.<ref>Cherry, Edith. "Architectural Programming: Introduction", Whole Building Design Guide, Sept. 2, 2009</ref> Other than the architecture of the building itself there are three elements to design programming for data centers: facility topology design (space planning), engineering infrastructure design (mechanical systems such as cooling and electrical systems including power) and technology infrastructure design (cable plant). Each will be influenced by performance assessments and modelling to identify gaps pertaining to the owner's performance wishes of the facility over time.
 +
 +
Various vendors who provide data center design services define the steps of data center design slightly differently, but all address the same basic aspects as given below.
 +
 +
===Modeling criteria===
 +
Modeling criteria are used to develop future scenarios for space, power, cooling, and costs in the data center.<ref>{{cite web|url=http://www.networkcomputing.com/data-center/231000669|title=Romonet Offers Predictive Modeling Tool For Data Center Planning|date=29 June 2011|publisher=}}</ref> The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations. The purpose of this is to allow for efficient use of the existing mechanical and electrical systems and also growth in the existing data center without the need for developing new buildings and further upgrading of incoming power supply.
 +
 +
===Design recommendations===
 +
Design recommendations/plans generally follow the modelling criteria phase. The optimal technology infrastructure is identified and planning criteria are developed, such as critical power capacities, overall data center power requirements using an agreed-upon PUE ([[power usage effectiveness]]), mechanical cooling capacities, kilowatts per cabinet, raised floor space, and the resiliency level for the facility.
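As a rough illustration of how these planning criteria interact, the Python sketch below derives an overall facility power requirement from a cabinet count, an assumed load per cabinet and an agreed-upon PUE; all figures are hypothetical.

<syntaxhighlight lang="python">
# Rough capacity-planning arithmetic; all inputs are hypothetical assumptions.
cabinets = 200           # number of IT cabinets planned
kw_per_cabinet = 5.0     # assumed average IT load per cabinet (kW)
target_pue = 1.5         # agreed-upon power usage effectiveness

it_load_kw = cabinets * kw_per_cabinet      # critical (IT) power requirement
facility_load_kw = it_load_kw * target_pue  # total facility power, including cooling and losses
overhead_kw = facility_load_kw - it_load_kw # mechanical and electrical overhead

print(f"IT load:       {it_load_kw:.0f} kW")
print(f"Facility load: {facility_load_kw:.0f} kW at PUE {target_pue}")
print(f"Overhead:      {overhead_kw:.0f} kW")
</syntaxhighlight>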
 +
 +
===Conceptual design===
 +
Conceptual designs embody the design recommendations or plans and should take into account "what-if" scenarios to ensure all operational outcomes are met in order to future-proof the facility. Conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability. Future-proofing will also include expansion capabilities, often provided in modern data centers through modular designs.  These allow for more raised floor space to be fitted out in the data center while using the existing major electrical plant of the facility.
 +
 +
===Detailed design===
 +
Detailed design is undertaken once the appropriate conceptual design is determined, typically including a proof of concept. The detailed design phase should include the detailed architectural, structural, mechanical and electrical information and specification of the facility. At this stage, facility schematics and construction documents are developed, along with schematics, performance specifications and specific detailing of all technology infrastructure, detailed [[IT infrastructure]] design, and IT infrastructure documentation.
 +
 +
===Mechanical engineering infrastructure designs===
 +
[[File:CRAC Cabinets 2.jpg|thumb|CRAC Air Handler]]
 +
Mechanical engineering infrastructure design addresses mechanical systems involved in maintaining the interior environment of a data center, such as heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on.<ref name="nxtbook.com">{{cite web|url=http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26|title=BICSI News Magazine - May/June 2010|website=www.nxtbook.com}}</ref>
 +
This stage of the design process should be aimed at saving space and costs, while ensuring business and reliability objectives are met as well as achieving PUE and green requirements.<ref>Data Center Energy Management: Best Practices Checklist: Mechanical, Lawrence Berkeley National Laboratory {{cite web |url=http://hightech.lbl.gov/dctraining/strategies/mam.html |title=Archived copy |accessdate=2012-02-08 |deadurl=yes |archiveurl=https://web.archive.org/web/20120223225537/http://hightech.lbl.gov/DCTraining/strategies/mam.html |archivedate=2012-02-23 |df= }}</ref> Modern designs include modularizing and scaling IT loads, and making sure capital spending on the building construction is optimized.
 +
 +
===Electrical engineering infrastructure design===
 +
Electrical engineering infrastructure design is focused on designing electrical configurations that accommodate various reliability requirements and data center sizes. Aspects may include utility service planning; distribution, switching and bypass from power sources; uninterruptible power supply (UPS) systems; and more.<ref name="nxtbook.com"/>
 +
 +
These designs should dovetail with energy standards and best practices while also meeting business objectives. Electrical configurations should be optimized and operationally compatible with the data center user's capabilities. Modern electrical design is modular and scalable,<ref>{{cite web|url=http://www.datacenterjournal.com/design/hedging-your-data-center-power/|title=Hedging Your Data Center Power|publisher=}}</ref> and is available for low- and medium-voltage requirements as well as DC (direct current).
 +
 +
===Technology infrastructure design===
 +
[[File:Under Floor Cable Runs Tee.jpg|thumb|Under Floor Cable Runs]]
 +
Technology infrastructure design addresses the telecommunications cabling systems that run throughout data centers. There are cabling systems for all data center environments, including horizontal cabling, voice, modem, and facsimile telecommunications services, premises switching equipment, computer and telecommunications management connections, keyboard/video/mouse connections and data communications.<ref>{{cite web|url=http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26|title=BICSI News Magazine - May/June 2010|website=www.nxtbook.com}}</ref> Wide area, local area, and storage area networks should link with other building signaling systems (e.g. fire, security, power, HVAC, EMS).
 +
 +
===Availability expectations===
 +
The higher the availability needs of a data center, the higher the capital and operational costs of building and managing it. Business needs should dictate the level of availability required, and that level should be evaluated based on a characterization of the criticality of the IT systems and on estimated cost analyses from modeled scenarios. In other words, how can an appropriate level of availability best be met by design criteria to avoid financial and operational risks as a result of downtime?
 +
If the estimated cost of downtime within a specified time unit exceeds the amortized capital costs and operational expenses, a higher level of availability should be factored into the data center design. If the cost of avoiding downtime greatly exceeds the cost of downtime itself, a lower level of availability should be factored into the design.<ref>Clark, Jeffrey. "The Price of Data Center Availability—How much availability do you need?", Oct. 12, 2011, The Data Center Journal {{cite web |url=http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability |title=Archived copy |accessdate=2012-02-08 |deadurl=yes |archiveurl=https://web.archive.org/web/20111203145721/http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability |archivedate=2011-12-03 |df= }}</ref>
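A minimal sketch of that trade-off, with entirely hypothetical costs, compares the annualized cost of additional redundancy against the expected downtime cost it avoids.

<syntaxhighlight lang="python">
# Compare the annualized cost of a higher availability level against the expected
# downtime cost it avoids. All figures are hypothetical assumptions.
def expected_downtime_cost(availability, cost_per_hour, hours_per_year=8766.0):
    """Expected annual downtime cost at a given availability (0..1)."""
    return (1.0 - availability) * hours_per_year * cost_per_hour

cost_per_hour = 50_000.0          # assumed business cost of one hour of downtime
baseline = expected_downtime_cost(0.999, cost_per_hour)   # lower-availability design
upgraded = expected_downtime_cost(0.9999, cost_per_hour)  # higher-availability design
extra_cost_per_year = 300_000.0   # assumed amortized cost of the extra redundancy

savings = baseline - upgraded
print(f"Avoided downtime cost per year: ${savings:,.0f}")
print("Higher availability justified" if savings > extra_cost_per_year else "Not justified")
</syntaxhighlight>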
 +
 +
===Site selection===
 +
Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines and emergency services can affect costs, risk, security and other factors to be taken into consideration for data center design.  Whilst a wide array of location factors are taken into account (e.g. flight paths, neighbouring uses, geological risks) access to suitable available power is often the longest lead time item. Location affects data center design also because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling.<ref>{{cite web|url=http://searchcio.techtarget.com/news/1312614/Five-tips-on-selecting-a-data-center-location|title=Five tips on selecting a data center location|publisher=}}</ref> For example, the topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate.
 +
 +
===Modularity and flexibility===
 +
[[File:Cabinet Asile.jpg|thumb|Cabinet aisle in a data center]]
 +
{{main article|Modular data center}}
 +
 +
Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.<ref>Niles, Susan. "Standardization and Modularity in Data Center Physical Infrastructure," 2011, Schneider Electric, page 4. {{cite web |url=http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf |title=Archived copy |accessdate=2012-02-08 |deadurl=yes |archiveurl=https://web.archive.org/web/20120416120624/http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf |archivedate=2012-04-16 |df= }}</ref>
 +
 +
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.<ref>{{cite web|url=http://www.datacenterknowledge.com/archives/2011/09/08/strategies-for-the-containerized-data-center/|title=Strategies for the Containerized Data Center|date=8 September 2011|publisher=}}</ref> But it can also be described as a design style in which components of the data center are prefabricated and standardized so that they can be constructed, moved or added to quickly as needs change.<ref>{{cite web|url=http://www.infoworld.com/d/green-it/hp-says-prefab-data-center-cuts-costs-in-half-837?page=0,0|title=HP says prefab data center cuts costs in half|first=James|last=Niccolai|publisher=}}</ref>
 +
 +
===Environmental control===
 +
{{main article|Data center environmental control}}
 +
The physical environment of a data center is rigorously controlled.
 +
[[Air conditioning]] is used to control the temperature and humidity in the data center. [[ASHRAE]]'s "Thermal Guidelines for Data Processing Environments"<ref>{{cite book|title=Thermal Guidelines for Data Processing Environments|year=2012|publisher=American Society of Heating, Refrigerating and Air-Conditioning Engineers|isbn=978-1936504-33-6|author=ASHRAE Technical Committee 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment|edition=3}}</ref> recommends a temperature range of {{convert|18|–|27|C|F}}, a dew point range of {{convert|-9| to|15|C|F}}, and ideal relative humidity of 60%, with an allowable range of 40% to 60% for data center environments.<ref name=ServersCheck>{{Cite web| title = Best Practices for data center monitoring and server room monitoring  | url=https://serverscheck.com/sensors/temperature_best_practices.asp | author = ServersCheck | accessdate = 2016-10-07}}</ref> The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control [[humidity]] by cooling the return space air below the [[dew point]]. Too much humidity, and water may begin to [[condensation|condense]] on internal components. If the atmosphere is too dry, ancillary humidification systems may add water vapor, because humidity that is too low can result in [[electrostatics|static electricity]] discharge problems which may damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.
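The sketch below shows how the ASHRAE ranges quoted above might be checked programmatically against sensor readings; the readings themselves are invented for illustration.

<syntaxhighlight lang="python">
# Check environmental sensor readings against the ASHRAE recommended ranges quoted
# above (18-27 C, dew point -9 to 15 C, 40-60% relative humidity). Readings are invented.
RANGES = {
    "temperature_c": (18.0, 27.0),
    "dew_point_c": (-9.0, 15.0),
    "relative_humidity_pct": (40.0, 60.0),
}

reading = {"temperature_c": 24.5, "dew_point_c": 11.0, "relative_humidity_pct": 65.0}

for name, value in reading.items():
    low, high = RANGES[name]
    status = "OK" if low <= value <= high else "OUT OF RANGE"
    print(f"{name}: {value} ({status}, recommended {low} to {high})")
</syntaxhighlight>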
 +
 +
Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. At least one data center (located in [[Upstate New York]]) will cool servers using outside air during the winter. They do not use chillers/air conditioners, which creates potential energy savings in the millions.<ref>{{cite news| url=https://www.reuters.com/article/pressRelease/idUS141369+14-Sep-2009+PRN20090914 | work=Reuters | title=tw telecom and NYSERDA Announce Co-location Expansion | date=2009-09-14}}</ref> Increasingly, indirect air cooling<ref>{{cite web|url=http://www.datacenterdynamics.com/focus/archive/2013/09/air-air-combat-indirect-air-cooling-wars-0|title=Air to air combat - indirect air cooling wars|publisher=}}</ref> is being deployed in data centers globally; it has the advantage of more efficient cooling, which lowers power consumption costs in the data center. Many newly constructed data centers are also using indirect evaporative cooling (IDEC) units, as well as other environmental features such as sea water, to minimize the amount of energy needed to cool the space.
 +
 +
Telcordia ''NEBS: Raised Floor Generic Requirements for Network and Data Centers'',<ref>{{cite web|url=http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-2930&|title=GR-2930 - NEBS: Raised Floor Requirements - Telcordia|website=telecom-info.telcordia.com}}</ref> GR-2930 presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.
 +
 +
There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of [[raised floor]]s include stringer, stringerless,  and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.
 +
* '''''Stringered raised floors''''' - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
 +
* '''''Stringerless raised floors''''' - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.
 +
* '''''Structural platforms''''' - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.
 +
 +
Data centers typically have [[raised floor]]ing made up of {{convert|60|cm|ft|abbr=on|0}} removable square tiles. The trend is towards {{convert|80|-|100|cm|in|abbr=on}} void to cater for better and uniform air distribution. These provide a [[plenum space|plenum]] for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling.
 +
 +
====Metal whiskers====
 +
Raised floors and other metal structures such as cable trays and ventilation ducts have caused many problems with [[zinc whiskers]] in the past, and they are likely still present in many data centers. This happens when microscopic metallic filaments form on the zinc or tin coatings that protect many metal structures and electronic components from corrosion. Maintenance on a raised floor or installation of cable etc. can dislodge the whiskers, which enter the airflow and may short-circuit server components or power supplies, sometimes through a high-current metal-vapor [[plasma arc]]. This phenomenon is not unique to data centers, and has also caused catastrophic failures of satellites and military hardware.<ref>{{cite web|title=NASA - metal whiskers research|url=http://nepp.nasa.gov/whisker/other_whisker/index.htm|publisher=NASA|accessdate=2011-08-01}}</ref>
 +
 +
===Electrical power===
 +
 +
[[File:Datacenter Backup Batteries.jpg|thumb|right|A bank of batteries in a large data center, used to provide power until diesel generators can start]]
 +
 +
Backup power consists of one or more [[uninterruptible power supply|uninterruptible power supplies]], battery banks, and/or [[Diesel generator|diesel]] / [[gas turbine]] generators.<ref>Detailed explanation of UPS topologies {{cite web|url=http://www.emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf |format=PDF |title=EVALUATING THE ECONOMIC IMPACT OF UPS TECHNOLOGY |deadurl=yes |archiveurl=https://web.archive.org/web/20101122074817/http://emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf |archivedate=2010-11-22 |df= }}</ref>
 +
 +
To prevent [[single point of failure|single points of failure]], all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve [[N+1 redundancy]] in the systems. [[Transfer switch#Static transfer switch|Static transfer switches]] are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
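To make the redundancy arithmetic concrete, the following hedged sketch checks whether a set of UPS modules still covers the critical ("N") load after the single largest module fails; the module ratings and load are assumptions.

<syntaxhighlight lang="python">
# N+1 check: does the remaining UPS capacity still cover the critical load after
# the single largest module fails? Module ratings and load are assumptions.
def survives_single_failure(module_ratings_kw, critical_load_kw):
    if not module_ratings_kw:
        return False
    remaining = sum(module_ratings_kw) - max(module_ratings_kw)
    return remaining >= critical_load_kw

modules = [500.0, 500.0, 500.0]  # three 500 kW UPS modules
load = 900.0                     # critical IT load in kW

print("N+1 satisfied" if survives_single_failure(modules, load) else "Not N+1")
</syntaxhighlight>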
 +
 +
===Low-voltage cable routing===
 +
Data cabling is typically routed through overhead [[cable tray]]s in modern data centers. But some{{Who|date=May 2012}} still recommend under-raised-floor cabling for security reasons, and to allow for the addition of cooling systems above the racks should that enhancement become necessary. Smaller/less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a [[Data center environmental control#Aisle containment|hot aisle]] arrangement to maximize airflow efficiency.
 +
 +
===Fire protection===
 +
[[File:FM200 Three.jpg|thumb|[[FM200]] Fire Suppression Tanks]]
 +
Data centers feature [[fire protection]] systems, including [[passive fire protection|passive]] and [[active fire protection|active]] design elements, as well as implementation of [[fire prevention]] programs in operations. [[Smoke detectors]] are usually installed to provide early warning of a fire at its incipient stage. This allows investigation, interruption of power, and manual fire suppression using hand-held fire extinguishers before the fire grows to a large size. An [[active fire protection]] system, such as a [[fire sprinkler system]] or a [[clean agent]] gaseous fire suppression system, is often provided to control a full-scale fire if it develops. High-sensitivity smoke detectors, such as [[aspirating smoke detector]]s, which activate [[clean agent]] gaseous fire suppression systems, activate earlier than fire sprinklers.
 +
 +
* Sprinklers = structure protection and building life safety.
 +
* Clean agents = business continuity and asset protection.
 +
* No water = no collateral damage or clean up.
 +
 +
Passive fire protection elements include the installation of [[Firewall (construction)|fire walls]] around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems. Fire wall penetrations into the server room, such as cable penetrations, coolant line penetrations and air ducts, must be provided with fire rated penetration assemblies, such as [[fire stop]]ping.
 +
 +
===Security===
 +
{{main|Data center security}}
 +
Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including a layered security system often starting with fencing, [[bollard]]s and [[mantrap (access control)|mantraps]].<ref>{{cite web|author=Sarah D. Scalet |url=http://www.csoonline.com/article/220665 |title=19 Ways to Build Physical Security Into a Data Center |publisher=Csoonline.com |date=2005-11-01 |accessdate=2013-08-30}}</ref> [[Video camera]] surveillance and permanent [[security guard]]s are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition [[mantrap (access control)|mantraps]] is becoming commonplace.
 +
 +
Documenting access is required by some data protection regulations. To do so, some organizations use access control systems that provide a logging report of accesses. Logging can occur at the main entrance, at the entrances to mechanical rooms and white spaces, as well as at the equipment cabinets. Modern access control at the cabinet allows for integration with intelligent [[power distribution units]] so that the locks can be powered and networked through the same appliance.<ref>{{Citation|title=Systems and methods for controlling an electronic lock for a remote device|date=2016-08-01|url=https://patents.google.com/patent/US9865109B2/en|accessdate=2018-04-25}}</ref>
 +
 +
==Energy use==
 +
[[File:Google Data Center, The Dalles.jpg|thumb|[[Google Data Centers|Google Data Center]], [[The Dalles, Oregon]]]]
 +
{{main article|IT energy management}}
 +
 +
Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.<ref>{{cite web|url=http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html|title=Data Center Energy Consumption Trends|publisher=U.S. Department of Energy|accessdate=2010-06-10}}</ref> For higher power density facilities, electricity costs are a dominant [[operating expense]] and account for over 10% of the [[total cost of ownership]] (TCO) of a data center.<ref>[http://www.intel.com/assets/pdf/general/servertrendsreleasecomplete-v25.pdf J. Koomey, C. Belady, M. Patterson, A. Santos, K.D. Lange: Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers] Released on the web August 17th, 2009.</ref> By 2012, the cost of power for the data center was expected to exceed the cost of the original capital investment.<ref>{{cite web|url=http://www1.eere.energy.gov/femp/pdfs/data_center_qsguide.pdf |title=Quick Start Guide to Increase Data Center Energy Efficiency |publisher=U.S. Department of Energy |accessdate=2010-06-10 |deadurl=yes |archiveurl=https://web.archive.org/web/20101122035456/http://www1.eere.energy.gov/femp/pdfs/data_center_qsguide.pdf |archivedate=2010-11-22 |df= }}</ref>
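A back-of-the-envelope sketch of why electricity dominates operating expense follows; the IT load, PUE and tariff are assumed values, not figures from the cited studies.

<syntaxhighlight lang="python">
# Back-of-the-envelope annual electricity cost for a data center.
# The IT load, PUE and tariff below are assumptions for illustration only.
it_load_kw = 1_000.0       # average IT load
pue = 1.8                  # facility-level power usage effectiveness
tariff_usd_per_kwh = 0.10  # assumed electricity price
hours_per_year = 8_766

facility_kwh = it_load_kw * pue * hours_per_year
annual_cost = facility_kwh * tariff_usd_per_kwh
print(f"Energy drawn: {facility_kwh:,.0f} kWh/year")
print(f"Energy cost:  ${annual_cost:,.0f}/year")
</syntaxhighlight>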
 +
 +
According to a [[Greenpeace]] study, in 2012, data centers represented 21% of the electricity consumed by the IT sector, which was about 382 billion kWh a year.<ref>{{Cite web|url=https://storage.googleapis.com/p4-production-content/international/wp-content/uploads/2017/01/35f0ac1a-clickclean2016-hires.pdf|title=CLICKING CLEAN: WHO IS WINNING  THE RACE TO BUILD  A GREEN INTERNET|last=Greenpeace|first=|date=2017|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> U.S. data centers use more than 90 billion kWh of electricity a year. Global data centers used roughly 416 TWh in 2016, nearly 40% more than the entire United Kingdom.<ref>{{Cite news|url=https://www.forbes.com/sites/forbestechcouncil/2017/12/15/why-energy-is-a-big-and-rapidly-growing-problem-for-data-centers/|title=Why Energy Is A Big And Rapidly Growing Problem For Data Centers|last=Danilak|first=Radoslav|work=Forbes|access-date=2018-07-06|language=en}}</ref>
 +
 +
===Greenhouse gas emissions===
 +
In 2007 the entire [[information and communication technologies]] or ICT sector was estimated to be responsible for roughly 2% of global [[Greenhouse gas|carbon emissions]], with data centers accounting for 14% of the ICT footprint.<ref name="smart1">{{cite web|url=http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf |title=Smart 2020: Enabling the low carbon economy in the information age |publisher=The Climate Group for the Global e-Sustainability Initiative |accessdate=2008-05-11 |deadurl=yes |archiveurl=https://web.archive.org/web/20110728032834/http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf |archivedate=2011-07-28 |df= }}</ref> The US EPA estimates that servers and data centers were responsible for up to 1.5% of the total US electricity consumption,<ref name="energystar1">{{cite web|url=http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf|title=Report to Congress on Server and Data Center Energy Efficiency|publisher=U.S. Environmental Protection Agency ENERGY STAR Program}}</ref> or roughly 0.5% of US GHG emissions,<ref>[http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf A calculation of data center electricity burden cited in the Report to Congress on Server and Data Center Energy Efficiency] and electricity generation contributions to green house gas emissions published by the EPA in the [http://epa.gov/climatechange/emissions/downloads10/US-GHG-Inventory-2010_ExecutiveSummary.pdf Greenhouse Gas Emissions Inventory Report]. Retrieved 2010-06-08.</ref> for 2007. Under a business-as-usual scenario, greenhouse gas emissions from data centers are projected to more than double from 2007 levels by 2020.<ref name="smart1"/>
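Emissions figures of this kind can be approximated from energy use and grid carbon intensity; the sketch below uses assumed values purely for illustration and does not reproduce the cited estimates.

<syntaxhighlight lang="python">
# Rough carbon-footprint estimate from annual energy use and grid carbon intensity.
# Both inputs are assumptions, not measurements from the cited studies.
annual_energy_kwh = 15_000_000   # assumed facility consumption
grid_intensity_kg_per_kwh = 0.4  # assumed grid carbon intensity (kg CO2e per kWh)

emissions_tonnes = annual_energy_kwh * grid_intensity_kg_per_kwh / 1000.0
print(f"Estimated emissions: {emissions_tonnes:,.0f} t CO2e/year")
</syntaxhighlight>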
 +
 +
Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and lots of renewable electricity is available, the environmental effects will be more moderate. Thus countries with favorable conditions, such as Canada,<ref>[https://www.theglobeandmail.com/report-on-business/canada-called-prime-real-estate-for-massive-data-computers/article2071677/ Canada Called Prime Real Estate for Massive Data Computers - Globe & Mail] Retrieved June 29, 2011.</ref> Finland,<ref>[http://datacenter-siting.weebly.com/ Finland - First Choice for Siting Your Cloud Computing Data Center.]. Retrieved 4 August 2010.</ref> Sweden,<ref>{{cite web|url=http://www.stockholmbusinessregion.se/templates/page____41724.aspx?epslanguage=EN|title=Stockholm sets sights on data center customers|accessdate=4 August 2010|archiveurl=https://web.archive.org/web/20100819190918/http://www.stockholmbusinessregion.se/templates/page____41724.aspx?epslanguage=EN|archivedate=19 August 2010}}</ref> Norway,<ref>[http://www.innovasjonnorge.no/en/start-page/invest-in-norway/industries/datacenters/ In a world of rapidly increasing carbon emissions from the ICT industry, Norway offers a sustainable solution] Retrieved 1 March 2016.</ref> and Switzerland,<ref>[http://www.greenbiz.com/news/2010/06/30/swiss-carbon-neutral-servers-hit-cloud Swiss Carbon-Neutral Servers Hit the Cloud.]. Retrieved 4 August 2010.</ref> are trying to attract cloud computing data centers.
 +
 +
According to an 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions will more than triple by 2020.<ref>{{Cite news |author=Katrice R. Jalbuena |title=Green business news. |publisher=EcoSeed |date=October 15, 2010 |url=http://ecoseed.org/en/business-article-list/article/1-business/8219-i-t-industry-risks-output-cut-in-low-carbon-economy |accessdate=2010-11-11 |deadurl=yes |archiveurl=https://web.archive.org/web/20160618081417/http://ecoseed.org/en/business-article-list/article/1-business/8219-i-t-industry-risks-output-cut-in-low-carbon-economy |archivedate=2016-06-18 |df= }}</ref>
 +
 +
===Energy efficiency===
 +
The most commonly used metric to determine the energy efficiency of a data center is [[power usage effectiveness]], or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.
 +
 +
:<math> \mathrm{PUE}  =  {\mbox{Total Facility Power} \over \mbox{IT Equipment Power}} </math>
 +
 +
Total facility power consists of power used by IT equipment plus any overhead power consumed by anything that is not considered a computing or data communication device (i.e. cooling, lighting, etc.). An ideal PUE is 1.0 for the hypothetical situation of zero overhead power. The average data center in the US has a PUE of 2.0,<ref name="energystar1"/> meaning that the facility uses two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.<ref>{{cite web|url=https://microsite.accenture.com/svlgreport/Documents/pdf/SVLG_Report.pdf|title=Data Center Energy Forecast|publisher=Silicon Valley Leadership Group}}</ref> Some large data center operators like [[Microsoft]] and [[Yahoo!]] have published projections of PUE for facilities in development; [[Google]] publishes quarterly actual efficiency performance from data centers in operation.<ref>{{cite web|url=https://www.google.com/about/datacenters/efficiency/internal/|title=Efficiency: How we do it – Data centers|publisher=Google|accessdate=2015-01-19}}</ref>
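For example, the ratio can be computed directly from metered facility and IT power; the meter readings in the sketch below are invented for illustration.

<syntaxhighlight lang="python">
# Compute PUE from metered power draw. The meter readings are invented.
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness = total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

print(f"PUE = {pue(total_facility_kw=1200.0, it_equipment_kw=600.0):.2f}")  # -> 2.00
</syntaxhighlight>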
 +
 +
The [[U.S. Environmental Protection Agency]] has an [[Energy Star]] rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.<ref>Commentary on introduction of Energy Star for Data Centers {{cite web|title=Introducing EPA ENERGY STAR for Data Centers |url=http://www.emerson.com/edc/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx |format=Web site |publisher=Jack Pouchet |accessdate=2010-09-27 |date=2010-09-27 |deadurl=yes |archiveurl=https://web.archive.org/web/20100925210539/http://emerson.com/edc/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx |archivedate=2010-09-25 |df= }}</ref> The United States passed the Energy Efficiency Improvement Act of 2015, which requires federal facilities — including data centers — to operate more efficiently. In 2014, California enacted [[California Energy Code|title 24]] of the California Code of Regulations, which mandates that every newly constructed data center must have some form of airflow containment in place, as a measure to optimize energy efficiency.
 +
 +
The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.<ref>{{cite web|url=http://iet.jrc.ec.europa.eu/energyefficiency/ict-codes-conduct/data-centres-energy-efficiency |title=EU Code of Conduct for Data Centres |publisher=iet.jrc.ec.europa.eu |date= |accessdate=2013-08-30 }}</ref>
 +
 +
===Energy use analysis===
 +
Often, the first step toward curbing energy use in a data center is to understand how energy is being used in the data center. Multiple types of analysis exist to measure data center energy use. Aspects measured include not just energy used by IT equipment itself, but also by the data center facility equipment, such as chillers and fans.<ref>{{cite web|url=http://www.gtsi.com/cms/documents/white-papers/green-it.pdf|title=UNICOM Global :: Home|website=www.gtsi.com}}</ref> Recent research has shown the substantial amount of energy that could be conserved by optimizing IT refresh rates and increasing server utilization.<ref>{{cite web|url=http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8263130|title=IEEE Xplore Full-Text PDF:|website=ieeexplore.ieee.org}}</ref>
 +
 +
===Power and cooling analysis===
 +
Power is the largest recurring cost to the user of a data center.<ref name=DRJ_Choosing>{{Citation
 +
| title = Choosing a Data Center
 +
| url = http://www.atlantic.net/images/pdf/choosing_a_data_center.pdf
 +
| publisher = Disaster Recovery Journal
 +
| year = 2009
 +
| author = Cosmano, Joe
 +
| accessdate = 2012-07-21
 +
}}</ref>  A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures.<ref>{{cite web|url=http://www.internetnews.com/xSP/article.php/3690651/HPs+Green+Data+Center+Portfolio+Keeps+Growing.htm|title=HP's Green Data Center Portfolio Keeps Growing - InternetNews.|website=www.internetnews.com}}</ref> A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity.<ref name=Inc_Howtochoose>{{Citation
 +
| title = How to Choose a Data Center
 +
| url = http://www.inc.com/guides/2010/11/how-to-choose-a-data-center_pagen_2.html
 +
| year = 2010
 +
| author = Inc. staff
 +
| accessdate = 2012-07-21
 +
}}</ref> Cooling is the second-largest power consumer in a data center after the servers themselves. Cooling energy ranges from about 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers.
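As an illustrative sketch only (the readings, thresholds and target band below are hypothetical, not part of any cited assessment methodology), a first-pass thermal review can be reduced to computing the cooling share of total energy and flagging rack-inlet temperatures that fall outside a target band:

<source lang="python">
# Hypothetical monthly energy figures (kWh) and rack inlet temperatures (deg C)
it_energy = 300_000.0
cooling_energy = 90_000.0
other_overhead = 30_000.0

total = it_energy + cooling_energy + other_overhead
print(f"Cooling share of total energy: {cooling_energy / total:.0%}")
# ~21% here; efficient sites are nearer 10%, standard air-cooled sites up to ~45%

inlet_temps = {"rack-A1": 21.5, "rack-A2": 29.0, "rack-B1": 16.0}
low, high = 18.0, 27.0  # assumed target band for rack inlet air
for rack, temp in inlet_temps.items():
    if temp > high:
        print(f"{rack}: {temp} C - potential hot spot")
    elif temp < low:
        print(f"{rack}: {temp} C - over-cooled area, could host a denser load")
</source>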
 +
 +
===Energy efficiency analysis===
 +
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical analysis benchmarks a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and reports air-management metrics.<ref>{{cite web|url=http://www.triplepundit.com/2011/04/hp-launches-program-companies-integrate-manage-energy-carbon-reduction-strategies/|title=Siranosian, Kathryn. "HP Shows Companies How to Integrate Energy Management and Carbon Reduction," TriplePundit, April 5, 2011.|publisher=}}</ref> A limitation of most current metrics and approaches, however, is that they do not include the IT equipment itself in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.<ref>{{cite web|url=http://ieeexplore.ieee.org/document/7927928/|title=Architectural Principles for Energy-Aware Internet-Scale Applications - IEEE Journals & Magazine|website=ieeexplore.ieee.org}}</ref>
 +
 +
===Computational fluid dynamics (CFD) analysis===
 +
{{main article|Computational fluid dynamics}}
 +
 +
This type of analysis uses sophisticated tools and numerical modeling to understand the unique thermal conditions present in each data center, predicting its temperature, airflow, and pressure behavior in order to assess performance and energy consumption.<ref>[http://blog.transitionaldata.com/aggregate/bid/37840/Seeing-the-Invisible-Data-Center-with-CFD-Modeling-Software Bullock, Michael. "Computation Fluid Dynamics - Hot topic at Data Center World," Transitional Data Services, March 18, 2010.] {{webarchive|url=https://web.archive.org/web/20120103183406/http://blog.transitionaldata.com/aggregate/bid/37840/Seeing-the-Invisible-Data-Center-with-CFD-Modeling-Software|date=January 3, 2012}}</ref> By modeling these environmental conditions, CFD analysis can predict the impact of mixing high-density racks with low-density racks<ref>{{cite web|url=http://www.thegreengrid.org/~/media/WhitePapers/White_Paper_27_Impact_of_Virtualization_Data_On_Center_Physical_Infrastructure_020210.pdf?lang=en|title=Bouley, Dennis (editor). "Impact of Virtualization on Data Center Physical Infrastructure," The Green grid, 2010.|publisher=}}</ref> and the resulting demand on cooling resources, as well as the consequences of poor infrastructure-management practices and of AC failure or AC shutdown for scheduled maintenance.
 +
 +
===Thermal zone mapping===
 +
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.<ref>{{cite web|url=http://searchdatacenter.techtarget.com/news/1265634/HP-Thermal-Zone-Mapping-plots-data-center-hot-spots|title=HP Thermal Zone Mapping plots data center hot spots|publisher=}}</ref>
 +
 +
This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
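A minimal sketch of the idea, assuming point sensors with known coordinates (all values below are hypothetical): readings are binned into a coarse three-dimensional grid so that hot and cool zones can be identified.

<source lang="python">
from collections import defaultdict

# Hypothetical sensor readings: (x, y, z) position in metres -> temperature in deg C
readings = {
    (1.0, 2.0, 0.5): 24.0,
    (1.2, 2.1, 1.8): 31.0,
    (6.0, 4.0, 0.5): 19.5,
}

CELL = 2.0  # grid cell size in metres

zones = defaultdict(list)
for (x, y, z), temp in readings.items():
    cell = (int(x // CELL), int(y // CELL), int(z // CELL))
    zones[cell].append(temp)

# Average temperature per zone; cool zones are candidates for critical servers
for cell, temps in sorted(zones.items()):
    avg = sum(temps) / len(temps)
    label = "cool" if avg < 22.0 else "hot" if avg > 27.0 else "normal"
    print(cell, f"{avg:.1f} C", label)
</source>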
 +
 +
===Green data centers===
 +
{{Main|Green Data Center}}
 +
[[File:Magazin Vauban E.jpg|thumb| This water-cooled data center in the [[Independent Port of Strasbourg|Port of Strasbourg]], France claims the attribute ''green''.]]
 +
Data centers use a lot of power, for two main purposes: running the actual equipment and cooling that equipment. The first category is addressed by designing computers and storage systems that are increasingly power-efficient.<ref name="ReferenceDC2"/> To bring down cooling costs, data center designers try to use natural ways to cool the equipment. Many data centers are located near good fiber connectivity, power grid connections and concentrations of people available to manage the equipment, but a data center can also be located far from its users if it requires little local management. 'Mass' data centers such as those of Google or Facebook are examples: they are built around many standardized servers and storage arrays, while the actual users of the systems are located all around the world. After the initial build, the number of staff required to keep a data center running is often relatively low, especially for data centers that provide mass storage or computing power and do not need to be near population centers. Data centers in arctic locations, where outside air provides all of the cooling, are becoming more popular, as cooling and electricity are the two main variable cost components.<ref>{{cite web|url=http://www.gizmag.com/fjord-cooled-data-center/20938/|title=Fjord-cooled DC in Norway claims to be greenest|access-date=23 December 2011}}</ref>
 +
 +
=== Energy reuse ===
 +
Reusing the heat produced by data centers is a topic of ongoing discussion. Heat from air-cooled data centers is very difficult to reuse, so data center infrastructures are more often being equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout the data center. Different liquid cooling techniques can be mixed and matched to create a fully liquid-cooled infrastructure that captures all heat in water. The liquid technologies fall into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid). This combination of technologies allows the creation of a [[thermal cascade]] as part of [[temperature chaining]] scenarios, producing high-temperature water outputs from the data center.
 +
 +
==Network infrastructure==
 +
[[File:Paris servers DSC00190.jpg|thumb|left|An example of "rack mounted" servers]]
 +
Communications in data centers today are most often based on [[computer network|networks]] running the [[Internet protocol|IP]] [[protocol (computing)|protocol]] suite. Data centers contain a set of [[Router (computing)|routers]] and [[Network switch|switches]] that transport traffic between the servers and to the outside world<ref>{{cite journal|last1=Noormohammadpour|first1=Mohammad|last2=Raghavendra|first2=Cauligi|title=Datacenter Traffic Control: Understanding Techniques and Tradeoffs|journal=Communications Surveys & Tutorials, IEEE|date=16 July 2018|volume=20|issue=2|page=1492-1525|url=http://ieeexplore.ieee.org/document/8207422/}}</ref> which are connected according to the [[data center network architectures|data center network architecture]]. [[Redundancy (engineering)|Redundancy]] of the Internet connection is often provided by using two or more upstream service providers (see [[Multihoming]]).
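As a hedged illustration of checking redundant upstream connectivity (a crude reachability probe rather than real BGP-level monitoring; the gateway addresses below are placeholder documentation addresses, not real providers):

<source lang="python">
import subprocess

# Hypothetical gateway addresses for two upstream providers (multihoming)
uplinks = {"provider-A": "203.0.113.1", "provider-B": "198.51.100.1"}

for name, gateway in uplinks.items():
    # "-c 2" sends two ICMP echo requests (Linux ping syntax)
    result = subprocess.run(["ping", "-c", "2", gateway],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    status = "up" if result.returncode == 0 else "DOWN"
    print(f"{name} ({gateway}): {status}")
</source>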
 +
 +
Some of the servers at the data center are used for running the basic Internet and [[intranet]] services needed by internal users in the organization, e.g., e-mail servers, [[proxy server]]s, and [[Domain Name System|DNS]] servers.
 +
 +
Network security elements are also usually deployed: [[firewall (networking)|firewalls]], [[VPN]] [[Gateway (computer networking)|gateways]], [[intrusion detection system]]s, etc. Monitoring systems for the network and for some of the applications are also common, as are additional off-site monitoring systems in case communications inside the data center fail.
 +
 +
==Data center infrastructure management==
 +
Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. Implemented through specialized software, hardware and sensors, DCIM provides a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.
 +
 +
Depending on the type of implementation, DCIM products can help data center managers identify and eliminate sources of risk to increase availability of critical IT systems. DCIM products also can be used to identify interdependencies between facility and IT infrastructures to alert the facility manager to gaps in system redundancy, and provide dynamic, holistic benchmarks on power consumption and efficiency to measure the effectiveness of "green IT" initiatives.
 +
 +
Measuring and understanding data center efficiency metrics is important. Much of the discussion in this area has focused on energy issues, but metrics beyond PUE can give a more detailed picture of data center operations. Server, storage, and staff utilization metrics can contribute to a more complete view of an enterprise data center. In many cases, disk capacity goes unused, and organizations often run their servers at 20% utilization or less.<ref>{{cite web|url=http://content.dell.com/us/en/enterprise/d/large-business/measure-data-center-efficiency.aspx |title=Measuring Data Center Efficiency: Easier Said Than Done |publisher=Dell.com |accessdate=2012-06-25 |deadurl=yes |archiveurl=https://web.archive.org/web/20101027083349/http://content.dell.com/us/en/enterprise/d/large-business/measure-data-center-efficiency.aspx |archivedate=2010-10-27 |df= }}</ref> More effective automation tools can also increase the number of servers or virtual machines that a single administrator can manage.
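A hedged sketch of such a utilization report, using made-up inventory figures rather than any particular DCIM product's API:

<source lang="python">
# Hypothetical inventory: per-server average CPU utilization and disk usage
servers = [
    {"name": "app-01", "cpu_util": 0.15, "disk_used_tb": 0.8, "disk_total_tb": 4.0},
    {"name": "app-02", "cpu_util": 0.22, "disk_used_tb": 1.2, "disk_total_tb": 4.0},
    {"name": "db-01",  "cpu_util": 0.55, "disk_used_tb": 3.1, "disk_total_tb": 4.0},
]

avg_cpu = sum(s["cpu_util"] for s in servers) / len(servers)
disk_used = sum(s["disk_used_tb"] for s in servers)
disk_total = sum(s["disk_total_tb"] for s in servers)

print(f"Average CPU utilization: {avg_cpu:.0%}")
print(f"Disk capacity in use: {disk_used / disk_total:.0%}")

# Servers running at 20% utilization or less are consolidation candidates
for s in servers:
    if s["cpu_util"] <= 0.20:
        print(f"{s['name']} is underutilized ({s['cpu_util']:.0%})")
</source>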
 +
 +
DCIM providers are increasingly linking with [[computational fluid dynamics]] providers to predict complex airflow patterns in the data center. The CFD component is necessary to quantify the impact of planned future changes on cooling resilience, capacity and efficiency.<ref name="gartner">{{cite web|url=http://www.gartner.com/it-glossary/computational-fluid-dynamic-cfd-analysis|title=Computational-Fluid-Dynamic (CFD) Analysis &#124; Gartner IT Glossary|publisher=gartner.com|accessdate=2014-08-27}}</ref>
 +
 +
==Managing the capacity of a data center==
 +
{{unreferenced section|date=August 2016}}
 +
[[File:Capacity of a datacenter - Life Cycle.jpg|thumbnail|left|Capacity of a datacenter - Life Cycle]]
 +
Several parameters may limit the capacity of a data center. Over the long term, the main limitations are available floor area and available power. In the first stage of its life cycle, a data center's occupied space grows more rapidly than its consumed energy. As new IT technologies become ever denser, the need for energy becomes dominant, first equaling and then overtaking the need for area (the second and third phases of the cycle). The proliferation of connected objects and the growing needs for storage and data processing force data centers to grow ever more rapidly, so it is important to define a data center strategy before being cornered. The decision, design and construction cycle lasts several years, so this strategic consideration should begin once the data center reaches about 50% of its power capacity. Maximum occupation of a data center should be stabilized at around 85%, whether in power or in occupied area; the margin preserved in this way provides a rotation zone for hardware replacement and allows temporary cohabitation of old and new generations of equipment. If this limit were exceeded for a prolonged period, it would no longer be possible to replace equipment, which would invariably lead to smothering of the information system. The data center is a resource of the information system in its own right, with its own time and management constraints (a life span of around 25 years), and therefore needs to be taken into account in medium-term information-system planning (between 3 and 5 years).
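To make these thresholds concrete, the following sketch checks a facility against the 50% planning trigger and the 85% occupation ceiling described above; the capacity figures themselves are hypothetical.

<source lang="python">
# Hypothetical facility figures
power_capacity_kw = 2000.0
power_in_use_kw = 1150.0
floor_capacity_m2 = 1500.0
floor_in_use_m2 = 1000.0

PLANNING_TRIGGER = 0.50    # start strategic planning for the next capacity step
OCCUPATION_CEILING = 0.85  # keep a rotation zone for hardware replacement

ratios = {
    "power": power_in_use_kw / power_capacity_kw,
    "floor area": floor_in_use_m2 / floor_capacity_m2,
}

for name, ratio in ratios.items():
    if ratio >= OCCUPATION_CEILING:
        print(f"{name}: {ratio:.0%} - over the 85% ceiling, hardware replacement at risk")
    elif ratio >= PLANNING_TRIGGER:
        print(f"{name}: {ratio:.0%} - begin planning the next capacity step")
    else:
        print(f"{name}: {ratio:.0%} - within comfortable limits")
</source>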
 +
 +
==Applications==
 +
[[File:IBMPortableModularDataCenter.jpg|thumb|right|A 40-foot [[Portable Modular Data Center]]]]
 +
 +
The main purpose of a data center is to run the IT systems and applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from [[enterprise software]] vendors. Common examples of such applications are [[Enterprise resource planning|ERP]] and [[Customer relationship management|CRM]] systems.
 +
 +
A data center may be concerned with just [[operations architecture]] or it may provide other services as well.
 +
 +
Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are [[database]]s, [[file server]]s, [[application server]]s, [[middleware]], and various others.
 +
 +
Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center, often in conjunction with [[Tape drive|backup tapes]]. Backups can be taken from servers locally onto tapes; however, tapes stored on site pose a security risk and are susceptible to fire and flooding. Larger companies may therefore send their backups off site for added security by backing up to a data center: encrypted backups can be sent over the Internet to another data center, where they can be stored securely.
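A minimal sketch of the encrypt-before-shipping idea, assuming the third-party Python <code>cryptography</code> package; the file names are placeholders, and a real backup service would also handle key management and transport.

<source lang="python">
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key must be stored securely and separately from the backup
key = Fernet.generate_key()
cipher = Fernet(key)

with open("backup.tar", "rb") as f:       # placeholder backup archive
    ciphertext = cipher.encrypt(f.read())

with open("backup.tar.enc", "wb") as f:   # this file can be shipped off site
    f.write(ciphertext)

# At the remote data center, with the same key:
# plaintext = Fernet(key).decrypt(ciphertext)
</source>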
 +
 +
For quick deployment or [[disaster recovery]], several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short time. Companies such as
 +
[[File:Edge Night 02.jpg|thumb|A modular data center connected to the power grid at a utility substation]]
 +
* [[Cisco Systems]],<ref>{{cite web|title=Info and video about Cisco's solution |url=http://www.datacenterknowledge.com/archives/2008/May/15/ciscos_mobile_emergency_data_center.html |publisher=Datacentreknowledge |accessdate=2008-05-11 |date=May 15, 2007 |deadurl=yes |archiveurl=https://web.archive.org/web/20080519213241/http://www.datacenterknowledge.com/archives/2008/May/15/ciscos_mobile_emergency_data_center.html |archivedate=2008-05-19 |df= }}</ref>
 +
* [[Sun Microsystems]] ([[Sun Modular Datacenter]]),<ref>{{cite web|url=http://www.sun.com/products/sunmd/s20/specifications.jsp|archiveurl=https://web.archive.org/web/20080513090300/http://www.sun.com/products/sunmd/s20/specifications.jsp|archivedate=2008-05-13|title=Technical specs of Sun's Blackbox|accessdate=2008-05-11}}</ref><ref>And English Wiki article on [[Sun Modular Datacenter|Sun's modular datacentre]]</ref>
 +
* [[Groupe Bull|Bull]] (mobull),<ref>{{cite web|title=Mobull Plug and Boot Datacenter|url=http://www.bull.com/extreme-computing/mobull.html|archive-url=https://web.archive.org/web/20101119103409/http://bull.com/extreme-computing/mobull.html|dead-url=yes|archive-date=2010-11-19|publisher=Bull|first=Daniel|last=Kidger|accessdate=2011-05-24}}</ref>
 +
* [[IBM]] ([[Portable Modular Data Center]]),
 +
* [[Schneider-Electric]] ([[Portable Modular Data Center]]),
 +
* [[Hewlett-Packard|HP]] ([[HP Performance Optimized Datacenter|Performance Optimized Datacenter]]),<ref>{{cite web|url=http://h18004.www1.hp.com/products/servers/solutions/datacentersolutions/pod/index.html |title=HP Performance Optimized Datacenter (POD) 20c and 40c - Product Overview |publisher=H18004.www1.hp.com |date= |accessdate=2013-08-30}}</ref>
 +
* [[ZTE Corporation]],
 +
* [[FiberHome Technologies Group]] (FitMDC Modular Data Center Solution), <ref>{{cite web|title=FitMDC Modular Data Center Solution|url=https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId=22662281}}</ref>
 +
* [[Huawei]] (Container Data Center Solution),<ref>{{cite web|title=Huawei's Container Data Center Solution|url=http://www.huawei.com/ilink/enenterprise/download/HW_143893|publisher=Huawei|accessdate=2014-05-17}}</ref>
 +
* [[Google]] ([[Google Modular Data Center]]) have developed systems that could be used for this purpose.<ref>{{cite web|url=http://www.crn.com/hardware/208403225 |publisher=ChannelWeb |accessdate=2008-05-11 |title=IBM's Project Big Green Takes Second Step |first=Brian |last=Kraemer |date=June 11, 2008 |deadurl=yes |archiveurl=https://web.archive.org/web/20080611114732/http://www.crn.com/hardware/208403225 |archivedate=2008-06-11 |df= }}</ref><ref>{{cite web|url=http://hightech.lbl.gov/documents/data_centers/modular-dc-procurement-guide.pdf |title=Modular/Container Data Centers Procurement Guide: Optimizing for Energy Efficiency and Quick Deployment |format=PDF |date= |accessdate=2013-08-30 |deadurl=yes |archiveurl=https://web.archive.org/web/20130531191212/http://hightech.lbl.gov/documents/data_centers/modular-dc-procurement-guide.pdf |archivedate=2013-05-31 |df= }}</ref>
 +
* BASELAYER has a patent on the software defined modular data center.<ref>{{Citation|title = System and method of providing computer resources|url = http://www.google.com/patents/US8434804|date = May 7, 2013|accessdate = 2016-02-24|first = George|last = Slessman}}</ref><ref>{{Cite web|title = Modular Data Center Firm IO to Split Into Two Companies|url = http://www.datacenterknowledge.com/archives/2014/12/02/modular-data-center-firm-io-to-split-into-two-companies/|website = Data Center Knowledge|access-date = 2016-02-24|language = en-US}}</ref>
 +
 +
==US wholesale and retail colocation providers==
 +
According to data provided in the third quarter of 2013 by Synergy Research Group, "the scale of the wholesale colocation market in the United States is very significant relative to the retail market, with Q3 wholesale revenues reaching almost $700 million. [[Digital Realty]] Trust is the wholesale market leader, followed at a distance by [[DuPont Fabros]]." Synergy Research also described the US colocation market as the most mature and well-developed in the world, based on revenue and the continued adoption of cloud infrastructure services.
 +
;Estimates from Synergy Research Group's Q3 2013 data.<ref name="srgresearch">{{cite web|url=https://www.srgresearch.com/articles/mature-us-colocation-market-led-equinix-and-centurylink-savvis|title=Mature US Colocation Market Led by Equinix and CenturyLink-Savvis &#124; Synergy Research Group|author=Synergy Research Group, Reno, NV|publisher=srgresearch.com|accessdate=2014-08-27}}</ref>
 +
 +
{| class="wikitable sortable"
 +
|-
 +
!Rank !! Company name !! US market share
 +
|-
 +
!1
 +
| Various providers || 34%
 +
|-
 +
!2
 +
| [[Equinix]] || 18%
 +
|-
 +
!3
 +
| [[CenturyLink-Savvis]] || 8%
 +
|-
 +
!4
 +
| [[SunGard]] || 5%
 +
|-
 +
!5
 +
| [[AT&T]] || 5%
 +
|-
 +
!6
 +
| [[Verizon]] || 5%
 +
|-
 +
!7
 +
| Telx || 4%
 +
|-
 +
!8
 +
| [[CyrusOne]] || 4%
 +
|-
 +
!9
 +
| [[Level 3 Communications]] || 3%
 +
|-
 +
!10
 +
| [[Internap]] || 2%
 +
|}
 +
 +
==See also==
 +
{{columns-list|colwidth=20em|
 +
* [[Central apparatus room]]
 +
* [[Colocation center]]
 +
* [[Disaster recovery]]
 +
* [[Dynamic Infrastructure]]
 +
* [[Electrical network]]
 +
* [[HVAC]]
 +
* [[Internet exchange point]]
 +
* [[Internet hosting service]]
 +
* [[Modular data center]]
 +
* [[Neher–McGrath]]
 +
* [[Network operations center]]
 +
* [[Open Compute Project]], by [[Facebook]]
 +
* [[Peering]]
 +
* [[Server farm]]
 +
* [[Server room]]
 +
* [[Server Room Environment Monitoring System]]
 +
* [[Server sprawl]]
 +
* [[Sun Modular Datacenter]]
 +
* [[Telecommunications network]]
 +
* [[Utah Data Center]]
 +
* [[Web hosting service]]
 +
}}
 +
 +
==References==
 +
{{Reflist|30em}}
 +
 +
==External links==
 +
{{Commons category|Data centers}}
 +
{{wikibooks|The Design and Organization of Data Centers}}
 +
{{wiktionary}}
 +
* [https://web.archive.org/web/20060929131812/http://hightech.lbl.gov/datacenters.html Lawrence Berkeley Lab] - Research, development, demonstration, and deployment of energy-efficient technologies and practices for data centers
 +
* [https://web.archive.org/web/20110723081149/http://hightech.lbl.gov/dc-powering/faq.html DC Power For Data Centers Of The Future] - FAQ: 380VDC testing and demonstration at a Sun data center.
 +
* [http://media.wix.com/ugd/fb8983_e929404b24874e4fa7a8279f1cda58f8.pdf White Paper] - Property Taxes: The New Challenge for Data Centers
 +
* [https://www.dceureca.eu/ The European Commission H2020 EURECA Data Centre Project] - Data centre energy efficiency guidelines, extensive online training material, case studies/lectures (under events page), and tools.
 +
 +
{{Authority control}}
 +
{{Cloud computing}}
 +
 +
{{DEFAULTSORT:Data Center}}
 +
[[Category:Computer networking]]
 +
[[Category:Applications of distributed computing]]
 +
[[Category:Cloud storage]]
 +
[[Category:Data management]]
 +
[[Category:Distributed data storage]]
 +
[[Category:Distributed data storage systems]]
 +
[[Category:Servers (computing)]]
 +
[[Category:Data centers| ]]
The boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide commercial clients with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results. Data centers for cloud computing are called cloud data centers (CDCs). But nowadays, the division of these terms has almost disappeared and they are being integrated into a term "data center".

With an increase in the uptake of cloud computing, business and government organizations scrutinize data centers to a higher degree in areas such as security, availability, environmental impact and adherence to standards. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data-center design. Well-known operational metrics for data-center availability can serve to evaluate the commercial impact of a disruption. Development continues in operational practice, and also in environmentally-friendly data-center design. Data centers typically cost a lot to build and to maintain.Template:Citation needed

Requirements for modern data centers

File:Datacenter-telecom.jpg
Racks of telecommunications equipment in part of a data center

IT operations are a crucial aspect of most organizational operations around the world. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of mechanical cooling and power systems (including emergency backup power generators) serving the data center along with fiber optic cables.

The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers<ref>Template:Cite web</ref> specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.<ref>Template:Cite web</ref>

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces,<ref>Template:Cite web</ref> provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:

  • Operate and manage a carrier's telecommunication network
  • Provide data center based applications directly to the carrier's customers
  • Provide hosted applications for a third party to provide services to their customers
  • Provide a combination of these and similar data center applications

Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation. Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers.

Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment, and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over construction, or perhaps worse — under construction that fails to meet future needs.

The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.<ref>Template:Cite book</ref><ref>Template:Cite book</ref>

There is a trend to modernize data centers in order to take advantage of the performance and energy efficiency increases of newer IT equipment and capabilities, such as cloud computing. This process is also known as data center transformation.<ref name="mspmentor.net">Template:Cite web</ref>

Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old.<ref name="mspmentor.net"/> Gartner, another research company, says data centers older than seven years are obsolete.<ref>Template:Cite web</ref> The growth in data (163 zettabytes by 2025<ref>Template:Cite web</ref>) is one factor driving the need for data centers to modernize.

In May 2011, data center research organization Uptime Institute reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.<ref>Template:Cite web</ref>

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.<ref>Template:Cite web</ref> The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.

  • Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. This project also helps to reduce the number of hardware, software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer ones that provide increased capacity and performance. Computing, networking and management platforms are standardized so they are easier to manage.<ref name="datacenterknowledge.com">

Miller, Rich. "Complexity: Growing Data Center Challenge," Data Center Knowledge, May 16, 2007</ref>

  • Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple data center equipment, such as servers. Virtualization helps to lower capital and operational expenses,<ref>

Sims, David. "Carousel's Expert Walks Through Major Benefits of Virtualization," TMC Net, July 6, 2010</ref> and reduce energy consumption.<ref>Template:Cite web</ref> Virtualization technologies are also used to create virtual desktops, which can then be hosted in data centers and rented out on a subscription basis.<ref>Template:Cite web</ref> Data released by investment bank Lazard Capital Markets reports that 48 percent of enterprise operations will be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.<ref>Template:Cite web</ref>

  • Automating: Data center automation involves automating tasks such as provisioning, configuration, patching, release management and compliance. As enterprises suffer from few skilled IT workers,<ref name="datacenterknowledge.com"/> automating tasks make data centers operations more efficient.
  • Securing: In modern data centers, the security of data on virtual systems is integrated with existing security of physical infrastructures.<ref>Template:Cite web</ref> The security of a modern data center must take into account physical security, network security, and data and user security.

Carrier neutrality

Today many data centers are run by Internet service providers solely for the purpose of hosting their own and third party servers.

However traditionally data centers were either built for the sole use of one large company, or as carrier hotels or Network-neutral data centers.

These facilities enable interconnection of carriers and partners, and act as regional fiber hubs serving local business in addition to hosting content servers.

Data center levels and tiers

The Telecommunications Industry Association is a trade association accredited by ANSI (American National Standards Institute). In 2005 it published ANSI/TIA-942, Telecommunications Infrastructure Standard for Data Centers, which defined four levels of data centers in a thorough, quantifiable manner.<ref>Template:Cite web</ref> TIA-942 was amended in 2008, 2010, 2014 and 2017. TIA-942:Data Center Standards Overview describes the requirements for the data center infrastructure. The simplest is a Level 1 data center, which is basically a server room, following basic guidelines for the installation of computer systems. The most stringent level is a Level 4 data center, which is designed to host the most mission critical computer systems, with fully redundant subsystems, the ability to continuously operate for an indefinite period of time during primary power outages.

The Uptime Institute, a data center research and professional-services organization based in Seattle, WA defined what is commonly referred to today as "Tiers" or more accurately, the "Tier Standard". Uptime's Tier Standard levels describe the availability of data processing from the hardware at a location. The higher the Tier level, the greater the expected availability. The Uptime Institute Tier Standards are shown below.<ref>A document from the Uptime Institute describing the different tiers (click through the download page) Template:Cite web</ref><ref>The rating guidelines from the Uptime Institute Template:Cite web</ref>

For the 2014 TIA-942 revision, the TIA organization and Uptime Institute mutually agreedTemplate:Citation needed that TIA would remove any use of the word "Tier" from their published TIA-942 specifications, reserving that terminology to be solely used by Uptime Institute to describe its system.

Other classifications exist as well. For instance, the German Datacenter Star Audit program uses an auditing process to certify five levels of "gratification" that affect data center criticality.

Uptime Institute's Tier Standards
Tier level Requirements
I
  • Single non-redundant distribution path serving the critical loads
  • Non-redundant critical capacity components
II
  • Meets all Tier I requirements, in addition to:
  • Redundant critical capacity components
  • Critical capacity components must be able to be isolated and removed from service while still providing N capacity to the critical loads.
III
  • Meets all Tier II requirements in addition to:
  • Multiple independent distinct distribution paths serving the IT equipment critical loads
  • All IT equipment must be dual-powered provided with two redundant, distinct UPS feeders. Single-corded IT devices must use a Point of Use Transfer Switch to allow the device to receive power from and select between the two UPS feeders.
  • Each and every critical capacity component, distribution path and component of any critical system must be able to be fully compatible with the topology of a site's architecture isolated for planned events (replacement, maintenance, or upgrade) while still providing N capacity to the critical loads.
  • Onsite energy production systems (such as engine generator systems) must not have runtime limitations at the site conditions and design load.
IV
  • Meets all Tier III requirements in addition to:
  • Multiple independent distinct and active distribution paths serving the critical loads
  • Compartmentalization of critical capacity components and distribution paths
  • Critical systems must be able to autonomously provide N capacity to the critical loads after any single fault or failure
  • Continuous Cooling is required for IT and UPS systems.

While any of the industry's data center resiliency systems were proposed at a time when availability was expressed as a theory, and a certain number of 'Nines' on the right side of the decimal point, it has generally been agreed that this approach was somewhat deceptive or too simplistic, so vendors today usually discuss availability in details that they can actually affect, and in much more specific terms. Hence, the leveling systems available today no longer define their results in percentages of uptime.

Note: The Uptime Institute also classifies the Tiers for each of the three phases of a data center, its design documents, the constructed facility and its ongoing operational sustainability.<ref name="uptimeinstitute">Template:Cite web</ref>

Design considerations

File:Rack001.jpg
A typical server rack, commonly seen in colocation

A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers to large freestanding storage silos which occupy many square feet of floor space. Some equipment such as mainframe computers and storage devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each;<ref>Template:Cite web</ref> when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).<ref>Template:Cite web</ref>

Local building codes may govern the minimum ceiling heights.

Design programming

Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project.<ref>Cherry, Edith. "Architectural Programming: Introduction", Whole Building Design Guide, Sept. 2, 2009</ref> Other than the architecture of the building itself there are three elements to design programming for data centers: facility topology design (space planning), engineering infrastructure design (mechanical systems such as cooling and electrical systems including power) and technology infrastructure design (cable plant). Each will be influenced by performance assessments and modelling to identify gaps pertaining to the owner's performance wishes of the facility over time.

Various vendors who provide data center design services define the steps of data center design slightly differently, but all address the same basic aspects as given below.

Modeling criteria

Modeling criteria are used to develop future scenarios for space, power, cooling, and costs in the data center.<ref>Template:Cite web</ref> The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations. The purpose of this is to allow for efficient use of the existing mechanical and electrical systems and also growth in the existing data center without the need for developing new buildings and further upgrading of incoming power supply.

Design recommendations

Design recommendations/plans generally follow the modelling criteria phase. The optimal technology infrastructure is identified and planning criteria are developed, such as critical power capacities, overall data center power requirements using an agreed upon PUE (power utilization efficiency), mechanical cooling capacities, kilowatts per cabinet, raised floor space, and the resiliency level for the facility.

Conceptual design

Conceptual designs embody the design recommendations or plans and should take into account "what-if" scenarios to ensure all operational outcomes are met in order to future-proof the facility. Conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability. Future-proofing will also include expansion capabilities, often provided in modern data centers through modular designs. These allow for more raised floor space to be fitted out in the data center while using the existing major electrical plant of the facility.

Detailed design

Detailed design is undertaken once the appropriate conceptual design is determined, typically including a proof of concept. The detailed design phase should include the detailed architectural, structural, mechanical and electrical information and specification of the facility. At this stage development of facility schematics and construction documents as well as schematics and performance specification and specific detailing of all technology infrastructure, detailed IT infrastructure design and IT infrastructure documentation are produced.

Mechanical engineering infrastructure designs

File:CRAC Cabinets 2.jpg
CRAC Air Handler

Mechanical engineering infrastructure design addresses mechanical systems involved in maintaining the interior environment of a data center, such as heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on.<ref name="nxtbook.com">Template:Cite web</ref> This stage of the design process should be aimed at saving space and costs, while ensuring business and reliability objectives are met as well as achieving PUE and green requirements.<ref>Data Center Energy Management: Best Practices Checklist: Mechanical, Lawrence Berkeley National Laboratory Template:Cite web</ref> Modern designs include modularizing and scaling IT loads, and making sure capital spending on the building construction is optimized.

Electrical engineering infrastructure design

Electrical Engineering infrastructure design is focused on designing electrical configurations that accommodate various reliability requirements and data center sizes. Aspects may include utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more.<ref name="nxtbook.com"/>

These designs should dovetail to energy standards and best practices while also meeting business objectives. Electrical configurations should be optimized and operationally compatible with the data center user's capabilities. Modern electrical design is modular and scalable,<ref>Template:Cite web</ref> and is available for low and medium voltage requirements as well as DC (direct current).

Technology infrastructure design

Technology infrastructure design addresses the telecommunications cabling systems that run throughout data centers. There are cabling systems for all data center environments, including horizontal cabling, voice, modem, and facsimile telecommunications services, premises switching equipment, computer and telecommunications management connections, keyboard/video/mouse connections and data communications.<ref>Template:Cite web</ref> Wide area, local area, and storage area networks should link with other building signaling systems (e.g. fire, security, power, HVAC, EMS).

Availability expectations

The higher the availability needs of a data center, the higher the capital and operational costs of building and managing it. Business needs should dictate the level of availability required and should be evaluated based on characterization of the criticality of IT systems estimated cost analyses from modeled scenarios. In other words, how can an appropriate level of availability best be met by design criteria to avoid financial and operational risks as a result of downtime? If the estimated cost of downtime within a specified time unit exceeds the amortized capital costs and operational expenses, a higher level of availability should be factored into the data center design. If the cost of avoiding downtime greatly exceeds the cost of downtime itself, a lower level of availability should be factored into the design.<ref>Clark, Jeffrey. "The Price of Data Center Availability—How much availability do you need?", Oct. 12, 2011, The Data Center Journal Template:Cite web</ref>

Site selection

Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines and emergency services can affect costs, risk, security and other factors to be taken into consideration for data center design. Whilst a wide array of location factors are taken into account (e.g. flight paths, neighbouring uses, geological risks) access to suitable available power is often the longest lead time item. Location affects data center design also because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling.<ref>Template:Cite web</ref> For example, the topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate.

Modularity and flexibility

File:Cabinet Asile.jpg
Cabinet aisle in a data center

Template:Main article

Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.<ref>Niles, Susan. "Standardization and Modularity in Data Center Physical Infrastructure," 2011, Schneider Electric, page 4. Template:Cite web</ref>

A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.<ref>Template:Cite web</ref> But it can also be described as a design style in which components of the data center are prefabricated and standardized so that they can be constructed, moved or added to quickly as needs change.<ref>Template:Cite web</ref>

Environmental control

Template:Main article The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments"<ref>Template:Cite book</ref> recommends a temperature range of Template:Convert, a dew point range of Template:Convert, and ideal relative humidity of 60%, with an allowable range of 40% to 60% for data center environments.<ref name=ServersCheck>Template:Cite web</ref> The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return space air below the dew point. Too much humidity, and water may begin to condense on internal components. In case of a dry atmosphere, ancillary humidification systems may add water vapor if the humidity is too low, which can result in static electricity discharge problems which may damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.

Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. At least one data center (located in Upstate New York) will cool servers using outside air during the winter. They do not use chillers/air conditioners, which creates potential energy savings in the millions.<ref>Template:Cite news</ref> Increasingly indirect air cooling<ref>Template:Cite web</ref> is being deployed in data centers globally which has the advantage of more efficient cooling which lowers power consumption costs in the data center. Many newly constructed data centers are also using Indirect Evaporative Cooling (IDEC) units as well as other environmental features such as sea water to minimize the amount of energy needed to cool the space.

Telcordia NEBS: Raised Floor Generic Requirements for Network and Data Centers,<ref>Template:Cite web</ref> GR-2930 presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.

There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringer, stringerless, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.

  • Stringered raised floors - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
  • Stringerless raised floors - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.
  • Structural platforms - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.

Data centers typically have raised flooring made up of Template:Convert removable square tiles. The trend is towards Template:Convert void to cater for better and uniform air distribution. These provide a plenum for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling.

Metal whiskers

Raised floors and other metal structures such as cable trays and ventilation ducts have caused many problems with zinc whiskers in the past, and likely are still present in many data centers. This happens when microscopic metallic filaments form on metals such as zinc or tin that protect many metal structures and electronic components from corrosion. Maintenance on a raised floor or installing of cable etc. can dislodge the whiskers, which enter the airflow and may short circuit server components or power supplies, sometimes through a high current metal vapor plasma arc. This phenomenon is not unique to data centers, and has also caused catastrophic failures of satellites and military hardware.<ref>Template:Cite web</ref>

Electrical power

File:Datacenter Backup Batteries.jpg
A bank of batteries in a large data center, used to provide power until diesel generators can start

Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators.<ref>Detailed explanation of UPS topologies Template:Cite web</ref>

To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.

Low-voltage cable routing

Data cabling is typically routed through overhead cable trays in modern data centers. But someTemplate:Who are still recommending under raised floor cabling for security reasons and to consider the addition of cooling systems above the racks in case this enhancement is necessary. Smaller/less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.

Fire protection

File:FM200 Three.jpg
FM200 Fire Suppression Tanks

Data centers feature fire protection systems, including passive and Active Design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. This allows investigation, interruption of power, and manual fire suppression using hand held fire extinguishers before the fire grows to a large size. An active fire protection system, such as a fire sprinkler system or a clean agent fire suppression gaseous system, is often provided to control a full scale fire if it develops. High sensitivity smoke detectors, such as aspirating smoke detectors, activating clean agent fire suppression gaseous systems activate earlier than fire sprinklers.

  • Sprinklers = structure protection and building life safety.
  • Clean agents = business continuity and asset protection.
  • No water = no collateral damage or clean up.

Passive fire protection elements include the installation of fire walls around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems. Fire wall penetrations into the server room, such as cable penetrations, coolant line penetrations and air ducts, must be provided with fire rated penetration assemblies, such as fire stopping.

Security

Template:Main Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including a layered security system often starting with fencing, bollards and mantraps.<ref>Template:Cite web</ref> Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of finger print recognition mantraps is starting to be commonplace.

Documenting access is required by some data protection regulations. To do so, some organizations use access control systems that provide a logging report of accesses. Logging can occur at the main entrance, at the entrances to mechanical rooms and white spaces, as well as in at the equipment cabinets. Modern access control at the cabinet allows for integration with intelligent power distribution units so that the locks can be powered and networked through the same appliance.<ref>Template:Citation</ref>

==Energy use==


Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.<ref>Template:Cite web</ref> For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.<ref>J. Koomey, C. Belady, M. Patterson, A. Santos, K.D. Lange: Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers. Released on the web August 17th, 2009.</ref> By 2012 the cost of power for the data center was expected to exceed the cost of the original capital investment.<ref>Template:Cite web</ref>
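
To see why electricity dominates operating expense at higher densities, the back-of-the-envelope sketch below estimates annual energy cost from an assumed IT load, PUE and tariff; all of the input figures are invented for the example.

<pre>
# Back-of-the-envelope sketch: annual electricity cost of a data center.
# All numbers below are assumptions chosen only to illustrate the arithmetic.

it_load_kw = 1_000          # average IT load (1 MW)
pue = 1.8                   # assumed power usage effectiveness
tariff_per_kwh = 0.10       # assumed electricity price in USD/kWh
hours_per_year = 8_760

facility_load_kw = it_load_kw * pue
annual_kwh = facility_load_kw * hours_per_year
annual_cost = annual_kwh * tariff_per_kwh

print(f"Facility load: {facility_load_kw:.0f} kW")
print(f"Annual energy: {annual_kwh/1e6:.1f} GWh")
print(f"Annual electricity cost: ${annual_cost:,.0f}")
</pre>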

According to a Greenpeace study, in 2012 data centers represented 21% of the electricity consumed by the IT sector, which was about 382 billion kWh a year.<ref>Template:Cite web</ref> U.S. data centers use more than 90 billion kWh of electricity a year. Global data centers used roughly 416 TWh in 2016, nearly 40% more than the entire United Kingdom consumed.<ref>Template:Cite news</ref>

===Greenhouse gas emissions===

In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint.<ref name="smart1">Template:Cite web</ref> The US EPA estimates that servers and data centers were responsible for up to 1.5% of total US electricity consumption,<ref name="energystar1">Template:Cite web</ref> or roughly 0.5% of US greenhouse gas emissions,<ref>A calculation of data center electricity burden cited in the Report to Congress on Server and Data Center Energy Efficiency and electricity generation contributions to green house gas emissions published by the EPA in the Greenhouse Gas Emissions Inventory Report. Retrieved 2010-06-08.</ref> for 2007. Under a business-as-usual scenario, greenhouse gas emissions from data centers were projected to more than double from 2007 levels by 2020.<ref name="smart1"/>

Siting is one of the factors that affect the energy consumption and environmental impact of a data center. In areas where the climate favors cooling and plenty of renewable electricity is available, the environmental impact will be more moderate. Countries with such favorable conditions, including Canada,<ref>Canada Called Prime Real Estate for Massive Data Computers - Globe & Mail. Retrieved June 29, 2011.</ref> Finland,<ref>Finland - First Choice for Siting Your Cloud Computing Data Center. Retrieved 4 August 2010.</ref> Sweden,<ref>Template:Cite web</ref> Norway<ref>In a world of rapidly increasing carbon emissions from the ICT industry, Norway offers a sustainable solution. Retrieved 1 March 2016.</ref> and Switzerland,<ref>Swiss Carbon-Neutral Servers Hit the Cloud. Retrieved 4 August 2010.</ref> are trying to attract cloud computing data centers.

An 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore concluded that data center-related emissions would more than triple by 2020.<ref>Template:Cite news</ref>

===Energy efficiency===

The most commonly used metric to determine the energy efficiency of a data center is power usage effectiveness, or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.

<math> \mathrm{PUE} = {\mbox{Total Facility Power} \over \mbox{IT Equipment Power}} </math>

Total facility power consists of power used by IT equipment plus any overhead power consumed by anything that is not considered a computing or data communication device (i.e. cooling, lighting, etc.). An ideal PUE is 1.0 for the hypothetical situation of zero overhead power. The average data center in the US has a PUE of 2.0,<ref name="energystar1"/> meaning that the facility uses two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.<ref>Template:Cite web</ref> Some large data center operators like Microsoft and Yahoo! have published projections of PUE for facilities in development; Google publishes quarterly actual efficiency performance from data centers in operation.<ref>Template:Cite web</ref>
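
A minimal sketch of the PUE calculation defined above; the meter readings are invented example values chosen to match the averages quoted in the text.

<pre>
# Minimal PUE sketch: total facility power divided by IT equipment power.
# The meter readings are invented example values.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

print(pue(2_000, 1_000))   # 2.0 -> roughly the reported US average
print(pue(1_200, 1_000))   # 1.2 -> close to state-of-the-art facilities
</pre>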

The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.<ref>Commentary on introduction of Energy Star for Data Centers Template:Cite web</ref> The United States passed the Energy Efficiency Improvement Act of 2015, which requires federal facilities, including data centers, to operate more efficiently. In 2014, California enacted Title 24 of the California Code of Regulations, which mandates that every newly constructed data center must have some form of airflow containment in place as a measure to optimize energy efficiency.

The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.<ref>Template:Cite web</ref>

===Energy use analysis===

Often, the first step toward curbing energy use in a data center is to understand how that energy is being used. Multiple types of analysis exist to measure data center energy use. Aspects measured include not just the energy used by IT equipment itself, but also the energy used by facility equipment such as chillers and fans.<ref>Template:Cite web</ref> Recent research has shown that a substantial amount of energy could be conserved by optimizing IT refresh rates and increasing server utilization.<ref>Template:Cite web</ref>

===Power and cooling analysis===

Power is the largest recurring cost to the user of a data center.<ref name=DRJ_Choosing>Template:Citation</ref> A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures.<ref>Template:Cite web</ref> A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity.<ref name=Inc_Howtochoose>Template:Citation</ref> Cooling is the second largest power consumer in a data center after the servers themselves; cooling energy ranges from about 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers.
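
As a small illustration, the sketch below computes two figures such an assessment might report, the cooling share of total energy and an average power density; all inputs are assumed example values.

<pre>
# Sketch of two figures a power and cooling analysis might report:
# the cooling share of total energy, and an average power density.
# All inputs are assumed example values.

total_energy_kwh = 10_000_000     # annual facility energy
cooling_energy_kwh = 3_500_000    # annual energy used by cooling
white_space_sqft = 20_000         # raised-floor area
peak_it_load_kw = 1_500           # peak IT load

cooling_share = cooling_energy_kwh / total_energy_kwh
power_density = peak_it_load_kw / white_space_sqft * 1000  # watts per sq ft

print(f"Cooling share of total energy: {cooling_share:.0%}")   # 35%
print(f"Average power density: {power_density:.0f} W/sq ft")    # 75 W/sq ft
</pre>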

===Energy efficiency analysis===

An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.<ref>Template:Cite web</ref> However, a limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.<ref>Template:Cite web</ref>

===Computational fluid dynamics (CFD) analysis===


This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, using numerical modeling to predict the temperature, airflow and pressure behavior of the facility and to assess its performance and energy consumption.<ref>Bullock, Michael. "Computation Fluid Dynamics - Hot topic at Data Center World," Transitional Data Services, March 18, 2010. Template:Webarchive</ref> By predicting the effects of these environmental conditions, CFD analysis can be used to predict the impact of high-density racks mixed with low-density racks<ref>Template:Cite web</ref> and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance.
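
Real CFD work requires dedicated simulation packages, but the underlying idea of predicting a temperature field numerically can be illustrated with a toy two-dimensional steady-state heat solver; the grid size, boundary temperature, hypothetical hot-spot racks and iteration count below are arbitrary, and the sketch is not a substitute for an actual CFD tool.

<pre>
# Toy illustration of numerical temperature modelling (NOT real CFD):
# a 2-D steady-state heat equation solved by Jacobi iteration.
# Grid size, boundary temperature and rack "hot spots" are arbitrary.

import copy

ROWS, COLS = 10, 20
grid = [[22.0] * COLS for _ in range(ROWS)]   # ambient 22 C everywhere

hot_spots = {(3, 5): 45.0, (6, 14): 50.0}     # hypothetical high-density racks

for _ in range(500):                          # fixed number of Jacobi sweeps
    new = copy.deepcopy(grid)
    for r in range(1, ROWS - 1):
        for c in range(1, COLS - 1):
            new[r][c] = 0.25 * (grid[r-1][c] + grid[r+1][c] +
                                grid[r][c-1] + grid[r][c+1])
    for (r, c), temp in hot_spots.items():    # hold rack cells at fixed temps
        new[r][c] = temp
    grid = new

print(f"Hottest cell: {max(max(row) for row in grid):.1f} C")
</pre>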

===Thermal zone mapping===

Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.<ref>Template:Cite web</ref>

This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
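
A minimal sketch of the zoning idea, assuming a handful of invented sensor readings; a real system would interpolate many more sensors into a full three-dimensional model.

<pre>
# Minimal sketch of thermal zone mapping: classify sensor locations into
# hot and cool zones by temperature threshold. Sensor positions and
# readings are invented for the example.

sensors = {
    # (row, rack) -> inlet temperature in C
    (1, 1): 21.5, (1, 2): 22.0, (1, 3): 27.5,
    (2, 1): 20.8, (2, 2): 26.9, (2, 3): 28.4,
}

HOT_THRESHOLD_C = 25.0

hot_zones = sorted(loc for loc, t in sensors.items() if t >= HOT_THRESHOLD_C)
cool_zones = sorted(loc for loc, t in sensors.items() if t < HOT_THRESHOLD_C)

print("Hot zones:", hot_zones)    # candidates for containment or extra cooling
print("Cool zones:", cool_zones)  # candidates for hosting critical servers
</pre>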

===Green data centers===


[[File:Magazin Vauban E.jpg|thumb|This water-cooled data center in the Port of Strasbourg, France claims the attribute ''green''.]]

Data centers use a lot of power, for two main purposes: running the actual equipment and cooling that equipment. The first category is addressed by designing computers and storage systems that are increasingly power-efficient.<ref name="ReferenceDC2"/> To bring down cooling costs, data center designers try to use natural ways to cool the equipment. Many data centers are located near good fiber connectivity, power grid connections and concentrations of people who can manage the equipment, but a data center can also be miles away from its users and require little local management. Examples are the 'mass' data centers of operators like Google or Facebook: these are built around many standardized servers and storage arrays, and the actual users of the systems are located all around the world. After the initial build, the staff required to keep a data center running is often relatively small, especially for facilities that provide mass storage or computing power and do not need to be near population centers. Data centers in arctic locations, where outside air provides all of the cooling, are becoming more popular as cooling and electricity are the two main variable cost components.<ref>Template:Cite web</ref>

===Energy reuse===

The practice of cooling data centers is a topic of discussion. It is very difficult to reuse the heat that comes from air-cooled data centers; for this reason, data center infrastructures are more often being equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout the data center. Different liquid cooling techniques can be mixed and matched to create a fully liquid-cooled infrastructure that captures all heat in water. Liquid cooling technologies fall into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid). This combination of technologies allows the creation of a thermal cascade, as part of temperature chaining scenarios, to produce high-temperature water outputs from the data center.
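
As a rough illustration of the arithmetic behind heat reuse, the sketch below estimates recoverable heat and the water flow needed to carry it; the IT load, capture fraction and water temperatures are assumptions, not measured values.

<pre>
# Toy estimate of recoverable heat from a liquid-cooled data center.
# The IT load, capture fraction and temperatures are assumptions made
# only to illustrate the arithmetic of heat reuse.

it_load_kw = 2_000            # electrical IT load, nearly all ends up as heat
capture_fraction = 0.85       # assumed share of heat captured in the water loop
supply_temp_c = 45            # assumed warm-water output temperature
return_temp_c = 35            # assumed return temperature from the heat user

recoverable_heat_kw = it_load_kw * capture_fraction

# Water flow needed to carry that heat: Q = m_dot * c_p * dT
cp_water = 4.186              # specific heat of water, kJ/(kg*K)
delta_t = supply_temp_c - return_temp_c
flow_kg_per_s = recoverable_heat_kw / (cp_water * delta_t)

print(f"Recoverable heat: {recoverable_heat_kw:.0f} kW thermal")
print(f"Required water flow: {flow_kg_per_s:.1f} kg/s at dT = {delta_t} K")
</pre>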

==Network infrastructure==

[[File:Paris servers DSC00190.jpg|thumb|An example of rack-mounted servers]]

Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world,<ref>Template:Cite journal</ref> connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see multihoming).

Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.

Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems and so on. Monitoring systems for the network and some of the applications are also common, as are additional off-site monitoring systems in case communications inside the data center fail.

==Data center infrastructure management==

Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM provides a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.

Depending on the type of implementation, DCIM products can help data center managers identify and eliminate sources of risk in order to increase the availability of critical IT systems. DCIM products can also be used to identify interdependencies between facility and IT infrastructures, to alert the facility manager to gaps in system redundancy, and to provide dynamic, holistic benchmarks on power consumption and efficiency to measure the effectiveness of "green IT" initiatives.

It is important to measure and understand data center efficiency metrics. Much of the discussion in this area has focused on energy issues, but other metrics beyond PUE can give a more detailed picture of data center operations. Server, storage, and staff utilization metrics can contribute to a more complete view of an enterprise data center. In many cases, disk capacity goes unused, and many organizations run their servers at 20% utilization or less.<ref>Template:Cite web</ref> More effective automation tools can also increase the number of servers or virtual machines that a single administrator can handle.
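
A minimal sketch of such utilization metrics, using invented sample figures for CPU utilization and disk capacity.

<pre>
# Sketch of the utilization metrics mentioned above: average CPU
# utilization across servers and the share of provisioned disk in use.
# The sample numbers are invented for illustration.

servers_cpu_util = [0.12, 0.18, 0.25, 0.08, 0.31, 0.15]   # fraction busy
disk_provisioned_tb = 500
disk_used_tb = 180

avg_cpu_util = sum(servers_cpu_util) / len(servers_cpu_util)
disk_util = disk_used_tb / disk_provisioned_tb

print(f"Average server CPU utilization: {avg_cpu_util:.0%}")  # ~18%
print(f"Storage utilization: {disk_util:.0%}")                # 36%
</pre>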

DCIM providers are increasingly linking with computational fluid dynamics providers to predict complex airflow patterns in the data center. The CFD component is necessary to quantify the impact of planned future changes on cooling resilience, capacity and efficiency.<ref name="gartner">Template:Cite web</ref>

==Managing the capacity of a data center==


[[File:Capacity of a datacenter - Life Cycle.jpg|thumb|Capacity of a data center: life cycle]]

Several parameters may limit the capacity of a data center. For long-term usage, the main limitations are available area and available power. In the first stage of its life cycle, a data center sees its occupied space grow more rapidly than its consumed energy. With the constant densification of new IT technologies, the need for energy becomes dominant, first equaling and then overtaking the need for area (the second and third phases of the cycle). The development and multiplication of connected objects, together with growing needs for storage and data processing, force data centers to grow more and more rapidly.

It is therefore important to define a data center strategy before being cornered. The decision, design and building cycle lasts several years, so it is imperative to initiate this strategic consideration when the data center reaches about 50% of its power capacity. Maximum occupation of a data center should be stabilized at around 85%, whether in power or in occupied area. The margin thus preserved provides a rotation zone for managing hardware replacement and allows the temporary cohabitation of old and new generations of equipment. If this limit is exceeded for a long period, it becomes impossible to carry out hardware replacement, which invariably leads to smothering the information system. The data center is a resource of the information system in its own right, with its own constraints of time and management (a life span of around 25 years), and therefore needs to be taken into consideration in the information system's medium-term planning (three to five years).
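
A minimal sketch of the two thresholds described above (roughly 50% as the planning trigger and roughly 85% as the practical ceiling), using invented site figures.

<pre>
# Minimal sketch of the capacity thresholds described above: start the
# next data center project at ~50% of power capacity, and treat ~85%
# occupation (power or floor area) as the practical ceiling.
# The site figures are invented example values.

PLANNING_TRIGGER = 0.50
OCCUPANCY_CEILING = 0.85

site = {"power_used_kw": 1_300, "power_capacity_kw": 2_000,
        "area_used_m2": 900, "area_capacity_m2": 1_500}

power_ratio = site["power_used_kw"] / site["power_capacity_kw"]
area_ratio = site["area_used_m2"] / site["area_capacity_m2"]

if max(power_ratio, area_ratio) >= OCCUPANCY_CEILING:
    print("Above 85%: hardware replacement headroom is effectively gone")
elif max(power_ratio, area_ratio) >= PLANNING_TRIGGER:
    print("Past 50%: start planning the next facility or extension")
else:
    print("Within comfortable capacity")
</pre>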

==Applications==

The main purpose of a data center is to run the IT systems and applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from enterprise software vendors. Common examples of such applications are ERP and CRM systems.

A data center may be concerned with just operations architecture or it may provide other services as well.

Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are databases, file servers, application servers, middleware, and various others.

Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center, often in conjunction with backup tapes. Backups can be taken off servers locally onto tapes; however, tapes stored on site pose a security threat and are also susceptible to fire and flooding. Larger companies may also send their backups off site for added security, which can be done by backing up to a data center. Encrypted backups can be sent over the Internet to another data center, where they can be stored securely.

For quick deployment or disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short time.

[[File:Edge Night 02.jpg|thumb|A modular data center connected to the power grid at a utility substation]]

==US wholesale and retail colocation providers==

According to data provided in the third quarter of 2013 by Synergy Research Group, "the scale of the wholesale colocation market in the United States is very significant relative to the retail market, with Q3 wholesale revenues reaching almost $700 million. Digital Realty Trust is the wholesale market leader, followed at a distance by DuPont Fabros." Synergy Research also described the US colocation market as the most mature and well-developed in the world, based on revenue and the continued adoption of cloud infrastructure services.

{| class="wikitable"
|+ Estimates from Synergy Research Group's Q3 2013 data<ref name="srgresearch">Template:Cite web</ref>
! Rank !! Company name !! US market share
|-
| 1 || Various providers || 34%
|-
| 2 || Equinix || 18%
|-
| 3 || CenturyLink-Savvis || 8%
|-
| 4 || SunGard || 5%
|-
| 5 || AT&T || 5%
|-
| 6 || Verizon || 5%
|-
| 7 || Telx || 4%
|-
| 8 || CyrusOne || 4%
|-
| 9 || Level 3 Communications || 3%
|-
| 10 || Internap || 2%
|}

==See also==


==References==

"Firebase - CrunchBase". CrunchBase. Retrieved June 11, 2014.

==External links==





==Pranala Menarik==