IT operations are a crucial aspect of most organizations, and a reliable infrastructure is necessary to minimize any chance of disruption. As the complexity of IT has grown, the demand for computing power has increased the electrical power required for data centers. According to an EPA report to Congress, demand for data processing and storage is driven by several factors, including the increased use of electronic transactions in financial services, the shift to electronic medical records in healthcare, the growth in global commerce and services, the adoption of satellite navigation and electronic shipment tracking in transportation, and the adoption of Smart Meters in Smart Grid applications. Today’s data centers account for roughly 25% of corporate IT budgets, and expanding data center requirements have driven a steady increase in energy consumption.
Historically, the main concerns of data center operations have been availability, performance, security, and resilience. Increased storage needs have been met simply by adding more servers and storage, expanding the necessary cooling and power distribution infrastructure and, if necessary, building out into more space or adding a new data center. As a result, data center operation has grown highly inefficient. Despite servers’ immense energy consumption, non-virtualized server utilization rarely exceeds 6%, and facility utilization can be as low as 50%. Data center emissions are expected to quadruple by 2020, and data center electricity consumption is already almost 0.2% of world consumption. The power and cooling infrastructure that supports the IT equipment accounts for roughly 50% of the total energy consumption of data centers.
ScottMadden has developed an approach for analyzing data center requirements and driving improvements through retrofits of existing data centers. Our approach takes into account the technological requirements, the physical attributes of a data center, and the rigorous measurement and verification program needed to ensure that improvements actually capture the energy efficiency gains and the resultant greenhouse gas reductions.
Our approach addresses the latest trends in data center management, such as virtualization and cloud computing, and provides a framework for developing the metrics needed to drive changes in data center performance.
Purpose
- Effective data center management is crucial to corporate operations, with several key drivers
- Business continuity
- Data management and security
- Data center operating costs
- Energy usage and GHG management
- ScottMadden merges three unique knowledge and skill sets to assist companies in optimizing data center operations and maintenance costs and performance
- Direct experience in IT systems and data center management
- Multiple people certified in IPMVP Measurement and Verification (M&V) and facility energy audit
- Certification with the Point Carbon/GHG Institute in GHG management
- ScottMadden has significant performance management experience with energy companies and Fortune 500 non-energy corporations and understands how to develop and bring to bear performance management solutions
Introduction to Data Centers
Effective data center management is crucial to management of corporate business continuity, IT spend, energy consumption and GHG emissions
- A data center is a centralized repository, either physical or virtual, for the storage, management, and dissemination of data and information organized around a particular body of knowledge or pertaining to a particular business
- A data center generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices
- The expanded use of data centers came during the 1980s and 1990s. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations
- Data centers are typically very expensive to build and maintain, with large company installations costing up to $100 million and exceeding 100,000 sq ft
- Data center design, construction, and operation have developed into an established discipline. Standard documents from accredited professional groups specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption
- IT operations are a crucial aspect of most organizational operations. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Data centers have to offer a secure environment which minimizes the chances of a security breach
- A data center must also maintain high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation
Today’s Data Center Environment
- The demand for computing power has increased the electrical power required for data centers1: according to an EPA report to Congress, demand for data processing and storage is driven by several factors:
- The increased use of electronic transactions in financial services, such as online banking and electronic trading
- The growing use of Internet communication and entertainment
- The shift to electronic medical records for healthcare
- The growth in global commerce and services
- The adoption of Smart Meters in Smart Grid applications can increase data points by a significant factor over traditional electric meters (estimates are from hundreds to thousands of times more data)
- The adoption of satellite navigation and electronic shipment tracking in transportation
- Today’s data centers account for roughly 25 percent of corporate IT budgets
- Managing Moore’s Law: the doubling of computing power every 18 months has defined the fundamental forces that have shaped the modern data center. There is no sign of an end to this process, but the IT industry is being forced to think about how it manages the rising demand for data center power2
- The virtualized data center: virtualization exploits modern processor power to consolidate servers, saving both energy and space. Historical server utilization rates were typically under 20 percent. Virtualization pushes server utilization above 80 percent, making better use of hardware investments and reducing energy and cooling costs. Virtualization tools are now relatively mature and becoming widely adopted. Their impact includes (a simplified consolidation sketch appears at the end of this section):
- Dramatic reductions in the number of servers required to support a given platform and a corresponding reduction in the platform’s data center square footage requirements
- Servers used for virtualization are often blade servers – less expensive Intel-based machines running MS Windows or Linux – with smaller footprints, so more servers can be supported per data center square foot
- Fewer, smaller servers mean less overall space consumed – allowing for data center consolidation:
- Remaining data centers become densely populated (more servers per square foot), increasing total server power consumption and overall cooling requirements (which also consume more power). Power and cooling capacity, rather than square footage, is now the constraining factor on a data center, so energy efficiency gains can increase data center capacity
- The resulting denser environment can result in higher data center power consumption
- Chip manufacturers are providing more efficient chips
- AMD sensed an opportunity a few years ago and took market share away from Intel by rolling out more efficient, cooler-running chips
- Intel leveraged their R&D machinery to reverse this trend and now evaluates new performance gains against the cost of power use
- Cloud computing continues to grow as a data center consideration – as companies move from private data centers and private platforms to managed service provider cloud environments, an individual data center’s power consumption will be reduced – and offset by the cloud provider’s potentially more efficient power consumption
- Dynamic infrastructure: the difference in refresh rates between IT equipment and the power and cooling infrastructure is a critical pressure point in the modern data center. While IT servers might be replaced every three years, the infrastructure is expected to last 10 to 15 years. As IT provisioning becomes more dynamic under the influence of virtualization and cloud computing, a static view of the infrastructure is no longer feasible. In addition, there is strong financial pressure to avoid spending hundreds of millions of dollars on new data centers – pressure that is fueling the retrofit of older data centers to improve energy efficiency and extend their cooling capacity
- Integrated management and monitoring systems: a dynamic data center is a much more complex operation than a traditional static model. As such, it requires more sophisticated management tools and a holistic view of the entire ecosystem. It also requires much closer working between facilities and IT professionals, which means the data and tools must be able to work together to optimize the use of power, cooling, and IT resources
- Data center inefficiency still remains a problem:
- Despite their immense energy consumption, non-virtualized server utilization rarely exceeds 6 percent and facility utilization can be as low as 50 percent1
- Data center emissions are expected to quadruple by 2020 - data center electricity consumption is almost 0.2 percent of world consumption2
- In 2006, the volume server was responsible for the majority (68 percent) of electricity consumed by IT equipment1
- The power and cooling infrastructure that supports the IT equipment accounted for roughly 50 percent of the total energy consumption of data centers1
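As a rough illustration of the consolidation arithmetic behind the virtualization figures above, the sketch below estimates server count and IT load before and after virtualization. The 20 percent and 80 percent utilization rates come from the discussion above; the fleet size and per-server wattage are hypothetical assumptions:

```python
# Rough consolidation sketch for the virtualization discussion above.
# The 20% and 80% utilization figures come from the text; the fleet size
# and per-server wattage are hypothetical assumptions for illustration.

legacy_servers = 1000          # assumed non-virtualized fleet size
legacy_utilization = 0.20      # typical non-virtualized utilization (per text)
target_utilization = 0.80      # achievable with virtualization (per text)
watts_per_server = 400         # assumed average draw per physical server

# Useful compute capacity stays constant; higher utilization needs fewer hosts.
useful_capacity = legacy_servers * legacy_utilization
virtualized_servers = round(useful_capacity / target_utilization)

legacy_kw = legacy_servers * watts_per_server / 1000
virtualized_kw = virtualized_servers * watts_per_server / 1000

print(f"Physical servers: {legacy_servers} -> {virtualized_servers}")
print(f"IT load: {legacy_kw:.0f} kW -> {virtualized_kw:.0f} kW "
      f"({1 - virtualized_kw / legacy_kw:.0%} reduction, before cooling savings)")
```

In practice, consolidated hosts tend to be more heavily loaded and draw more power per box, so measured savings will be smaller than this idealized ratio, consistent with the denser-environment caveat above.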
Data Center Opportunities
- Green data center market: The term “green data center” does not lend itself to precise definition. It is a broad label for a range of initiatives that address the environmental impact of data centers, particularly their energy efficiency. The green data center may also be referred to as a low-carbon data center, an energy-efficient data center, or even a sustainable data center1:
- The green data center market can be defined as the revenue opportunity generated by the operational improvements and technical innovations that are increasing the energy efficiency of data centers
- The need for a modular, adaptable, and energy-efficient model for the data center is opening opportunities across the data center supply chain
- Power, cooling, and IT infrastructure suppliers have an enormous opportunity over the next five years to support the reengineering of the data center infrastructure
- Short-term focus will remain on the technologies developed over the last four or five years that address shortfalls in the performance of older products
- Going forward, new opportunities are emerging that are centered on greater integration between the data center infrastructure and the IT assets, which will involve new forms of monitoring, more sophisticated management tools and greater automation
- Real-world efforts to implement energy-efficiency upgrades in data centers have proven highly cost effective, as illustrated by a collection of 36 examples from a major telecommunications company. A total one-time investment of over $500,000 yielded $2,000,000 per year in energy savings, for an average payback time of only three months (see the simple payback sketch at the end of this section). In ten of the cases, the improvements were made at no cost (e.g., changes in operations and maintenance procedures)1
- According to an EPA data center study, site infrastructure and volume servers continue to be the top contributors to a data center’s energy usage. By locating inefficiencies in these systems, data centers can greatly reduce their energy usage. The Lawrence Berkeley National Laboratory has identified several best practices for optimizing energy efficiency and facility performance for data centers, including: airflow management, air handler systems, humidification, plant optimization, IT equipment selection, electrical infrastructure, lighting, and commissioning and retro-commissioning2
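The payback arithmetic behind the telecommunications example above is straightforward; the sketch below reproduces it and can be reused to screen other retrofit candidates (the investment and savings figures are the ones cited above):

```python
# Simple payback for the retrofit example cited above: a one-time
# investment of ~$500,000 yielding ~$2,000,000 per year in energy savings.

one_time_investment = 500_000      # total one-time investment ($)
annual_energy_savings = 2_000_000  # energy savings ($ per year)

payback_months = one_time_investment / annual_energy_savings * 12
print(f"Simple payback: {payback_months:.0f} months")   # ~3 months
```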
Data Center Upgrades and Improvements
- Airflow management – The efficiency and effectiveness of a data center conditioning system is heavily influenced by the path, temperature, and quantity of the cooling air delivered to the IT equipment and of the waste hot air removed from it
- Eliminate mixing and recirculation of hot equipment exhaust air (e.g., hot aisle/cold aisle configuration, ventilated racks, flexible barriers, etc.)
- Maintain a larger temperature difference between hot air return and cold air intake
- Maximize return air temperature by supplying air directly to the loads
- Air handler systems – The air handler fan is typically the second largest energy use in the mechanical system, and can even exceed the energy use of the cooling plant in some cases. Optimizing the air handler system for data center use, as opposed to relying on traditional air handler design rules developed over years of office system design, is essential to achieve an efficient and cost effective system
- Minimize fan power requirements
- Use an optimized airside economizer
- Use large centralized air handlers
- Humidification – Humidification specifications and systems have often been found to be excessive and/or wasteful in data center facilities. A careful, site specific design approach to these energy-intensive systems is usually needed to avoid energy waste:
- Eliminate over humidification and/or dehumidification
- Use efficient humidification technology and develop monitoring and testing intervals
- Plant optimization – When a chilled water plant is used, all the standard design best practices apply, with a few additions. The unusual nature of a data center load, which is mostly independent of outside air temperature and solar loads, makes free cooling very attractive and increases the importance of efficiency over first cost
- Uninterruptible power supplies (UPS) — Efficiency losses in a data center's UPS represent about 5 percent to 12 percent of all the energy consumed in data centers and can total hundreds of thousands of kWh per year. Manufacturer specifications can differ widely from measured results because of differences in loading conditions and test procedures. There may also be differences between efficiencies measured under reference conditions and under "in-use" conditions in data centers. Work is underway to estimate how much energy could be saved by improving UPS efficiency, develop standardized efficiency testing protocols, measure the efficiencies of UPSs across a range of load conditions, and propose efficiency metrics for use by the marketplace in comparing units for purchase
- Electrical infrastructure – Protection from power loss is a common characteristic of data center facilities. Such protection comes at a significant first cost and also carries a continuous power usage cost that can be reduced through careful design and selection
- Design an uninterruptible power supply (UPS) system for efficiency and select the most efficient UPS possible
- Use self-generation for large installations
- Direct liquid cooling – Water can hold and transfer heat faster than air. It would take 140 cubic feet of air (equivalent to 1,280 gallons) to transfer 1 kWh, as opposed to only 1 gallon of water. As technology advances, data centers are looking toward direct liquid cooling options for their server racks and, ideally, for each individual component within the IT equipment
- Lighting – Data centers are typically lightly occupied. While lighting is a small portion of the total power usage of a data center, it can often be safely reduced through mature, inexpensive technologies and designs
- Use active sensors to shut off lights when data center is unoccupied
- Design light circuiting and switching to allow for greater manual control
- Power supplies - Power supplies convert high voltage AC power into the low voltage DC power needed by the circuitry found within servers, routers, hubs, switches, data storage units, and other electronic equipment used in data centers and commercial buildings. Typical server power supplies operate at roughly 65 percent to 75 percent efficiency, meaning that 25 to 35 percent of all the energy consumed by servers is wasted (converted to heat) within their power supplies. The technology exists to achieve efficiencies of 80 percent to 90 percent in conventional server power supplies (a rough savings sketch appears at the end of this section)
- Direct DC power — Even greater efficiencies might be possible by systematically replacing the chain of AC power generation, AC-DC-AC uninterruptible power supplies, and AC-DC power supplies with a direct DC power system in data centers. Work is underway to categorize current server power supplies by type, characterize the energy savings opportunity, develop standardized efficiency testing protocols, measure efficiencies across a range of load conditions, and propose efficiency metrics that can be employed by power supply buyers, manufacturers, and utilities to accelerate sales of efficient designs for use in data centers
- Commissioning and retro-commissioning – An efficient data center not only requires a reliable and efficient design, it also requires proper construction and operation of the space. Commissioning is a methodical and thorough process to ensure the systems are installed and operating correctly in all aspects, including efficiency
- Engage additional design expertise for review and guidance
- Perform system commissioning and retro-commissioning
- Utilize appropriate measurement and verification (M&V) programs for ongoing performance management
- IT equipment selection - The IT equipment is the reason for the facility. Increasingly, there are reasonable opportunities to increase the efficiency of IT equipment through equipment selection, reducing energy use directly at the load level and reducing the need for mechanical infrastructure
- Efficient microprocessors
- Multiple-core processors
- Dynamic frequency and voltage scaling
- Virtualization opportunities
- Efficient server equipment
- Power management and virtualization
- High efficiency power supplies
- Internal variable speed fans for on-demand cooling
- Cool equipment racks
- Fundamental process change — In addition to a host of promising hardware "fixes,” more fundamental innovations have been proposed, such as shifting all software to the server farms (and away from desktop personal computers). While this would not save energy in the data center itself, it could reduce the overall demand by simplifying and reducing the power requirements of hundreds of millions of personal computers. It could also enable more people to telecommute
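To make the power supply efficiency figures above more concrete, the following sketch estimates the annual energy that could be saved by moving from the typical 65 to 75 percent efficiency range to the achievable 80 to 90 percent range. The IT load, operating hours, and electricity rate are illustrative assumptions, not figures from this report:

```python
# Illustrative savings from raising server power supply efficiency from the
# typical 65-75% range to the achievable 80-90% range cited above.
# The IT load, hours, and electricity rate are assumptions, not report data.

it_load_kw = 500            # assumed power actually delivered to server electronics
hours_per_year = 8760
price_per_kwh = 0.10        # assumed blended electricity rate ($/kWh)

def annual_input_kwh(efficiency):
    """AC energy drawn per year to deliver it_load_kw at a given PSU efficiency."""
    return it_load_kw / efficiency * hours_per_year

typical_kwh = annual_input_kwh(0.70)    # midpoint of 65-75% (typical, per text)
improved_kwh = annual_input_kwh(0.85)   # midpoint of 80-90% (achievable, per text)

saved_kwh = typical_kwh - improved_kwh
print(f"Energy saved: {saved_kwh:,.0f} kWh/year "
      f"(about ${saved_kwh * price_per_kwh:,.0f}/year at ${price_per_kwh}/kWh)")
# Every kWh not converted to heat in the power supply also avoids cooling load,
# so facility-level savings are larger than this IT-side estimate.
```

The same arithmetic applies to UPS losses (the 5 percent to 12 percent range cited above).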
Potential GHG Reduction in Data Center Improvements
- In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2 percent of global carbon emissions with data centers accounting for 14 percent of the ICT footprint
- The U.S. EPA estimates that servers and data centers are responsible for up to 1.5 percent of the total U.S. electricity consumption, or roughly 0.5 percent of U.S. GHG emissions for 20071
- Although data centers are not a primary source of GHG emissions, their high level of electricity consumption and the rapid growth of that consumption threaten GHG emission reduction initiatives1
- Corporate responsibility initiatives in the U.S. have led to reporting of greenhouse gas emissions, and government agencies are striving to meet energy goals – it is expected that GHG initiatives will focus on data center efficiency improvements1
- According to an analysis by McKinsey & Company, data center GHG emissions totaled 80 MMTCO2, and are expected to increase by approximately 425 percent by 20203
- Business decisions by companies could result in a higher data center carbon footprint (e.g., new growth, new products and services, merger and acquisition activity, macroeconomic shock, etc.)2
- By locating inefficiencies in a data center’s energy consumption, its carbon footprint can be minimized (a simple conversion sketch appears at the end of this section)
- There is a growing trend toward LEED certification of data centers, which has been shown to be a driver for increased energy reductions
- Under a business-as-usual (BAU) scenario, greenhouse gas emissions from data centers are projected to more than double from 2007 levels by 20202
- However, in a 2007 Report to Congress, the EPA suggested that energy efficiency improvements could decrease carbon dioxide emissions by 15-47 MMTCO2 under three different energy efficiency scenarios1
- Data center GHG reduction opportunities
- Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where climate favors cooling and renewable electricity is available, the environmental effects could be more moderate - thus countries with favorable conditions, such as Finland, Sweden, and Switzerland, are trying to attract cloud computing data centers
- Cloud computing – Many companies have experienced thousands of dollars in annual savings by outsourcing their computing needs to third parties. Since third parties focus on maximizing performance and efficiency of their computing operations, they are able to operate at higher levels of utilization than individual companies, thus decreasing greenhouse gas emissions. An Accenture case study of a Microsoft comparison of onsite carbon emissions vs. cloud computing carbon emissions showed a reduction of more than 90 percent for small deployments of about 100 users, 60 to 90 percent for medium-sized deployments of about 1,000 users, and 30 to 60 percent for large deployments of about 10,000 users when using cloud computing3
- CHP (Combined Heat and Power Systems) and DG (Distributed Generation) – Installation of CHP and DG systems with absorption cooling can reduce energy costs by producing power more cheaply onsite than if purchased from a utility. In addition, it can reduce air pollutants through increased efficiency and use of cleaner technologies, which in turn results in lower levels of fossil fuel combustion and reduced emissions of CO21
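As a rough bridge between the electricity and emissions figures above, the sketch below converts annual data center electricity use into CO2 using an assumed grid emission factor. Both the consumption figure and the emission factor are illustrative and should be replaced with site- and region-specific values:

```python
# Back-of-the-envelope conversion of data center electricity use to CO2.
# Both the consumption figure and the grid emission factor are assumptions;
# substitute metered consumption and the relevant regional/utility factor.

annual_consumption_kwh = 10_000_000   # assumed annual facility consumption (kWh)
emission_factor_kg_per_kwh = 0.5      # assumed grid average (kg CO2 per kWh)

baseline_tonnes = annual_consumption_kwh * emission_factor_kg_per_kwh / 1000
print(f"Baseline: {baseline_tonnes:,.0f} tonnes CO2 per year")

# Efficiency gains reduce emissions proportionally if the emission factor is
# held constant; e.g., a 30% energy reduction:
print(f"After 30% energy reduction: {baseline_tonnes * 0.7:,.0f} tonnes CO2 per year")
```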
M&V Requirements in Data Center Improvements
- Several metrics have been defined in order to measure data center efficiency, locate improvement opportunities, and benchmark data centers competitively
- The Uptime Institute metrics1
- Established the DC-EER (Data Center Energy Efficiency Ratio), which comprises:
- SI-EER (Site Infrastructure Energy Efficiency Ratio) – the ratio of power consumed by the entire facility to the power consumed by the IT equipment (see PUE)
- IT-EER (Information Technology Energy Efficiency Ratio) – the computed performance of IT equipment per embedded watt of power consumption
- Established four metrics for data center “greenness” – which provide accountabilities for management, IT, and corporate real estate
- IT strategy
- IT hardware asset utilization
- IT energy and power efficient hardware deployment
- Site physical infrastructure overhead
- Established three measurement points in the data center where readings can be taken to determine energy inefficiencies in practices and technologies
- At the meter – determines the demand charge the data center must pay to the utility
- At the plug – determines the alternating current (AC) power consumption at the hardware plug
- Hardware compute load – determines the number of watts of direct current (DC) power that are consumed within the IT equipment
- Lawrence Berkeley National Labs (LBNL)1
- Proposed an overall energy performance benchmark for data centers:
- EUI = kWh / sq. ft. / year, computed per facility
- The Green Grid consortium2
- Proposed formulas to benchmark a data center’s energy efficiency and to locate opportunities to improve a data center’s operational efficiency (a calculation sketch appears at the end of this section)
- PUE (Power Usage Effectiveness) = Total facility power / IT equipment power
- DCE (Data Center Efficiency) = IT equipment power / Total facility power
- Explaining the variables
- IT equipment power – the load associated with all of the IT equipment (e.g., compute, storage, and network equipment) as well as supplemental equipment (KVM switches, monitors, etc.)
- Total facility power – the IT equipment load plus everything that supports it (e.g., UPS, PDUs, batteries, generators, etc.)
- The Green Grid has a more defined PUE metric system for dedicated data centers, and PUE metrics for data centers in mixed-use buildings are being defined
- ASHRAE3
- Established metrics for determining a data center’s thermal performance
- RCI (Rack Cooling Index) is a measure of the absence of over-temperatures; the lower the percentage, the greater the probability that equipment experiences temperatures above the maximum allowable temperature
- RTI (Return Temperature Index) is a measure of the energy performance of the air management system; deviations from 100 percent are a sign of declining performance
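The headline efficiency metrics above are simple ratios of measured quantities. The sketch below computes PUE, DCE, and an LBNL-style EUI from illustrative readings; the power levels and floor area are assumptions, not benchmark values:

```python
# The headline ratios defined above, computed from illustrative readings.
# Power levels and floor area are assumptions, not benchmark values.

total_facility_kw = 1500    # total facility power: IT plus power and cooling overhead
it_equipment_kw = 900       # IT equipment power (compute, storage, network, etc.)
floor_area_sqft = 20_000    # assumed facility floor area
hours_per_year = 8760

pue = total_facility_kw / it_equipment_kw            # Green Grid PUE
dce = it_equipment_kw / total_facility_kw            # Green Grid DCE (the inverse of PUE)
eui = total_facility_kw * hours_per_year / floor_area_sqft  # LBNL-style kWh/sq ft/year

print(f"PUE = {pue:.2f}, DCE = {dce:.0%}, EUI = {eui:,.0f} kWh/sq ft/year")
```

Tracking these ratios over time, rather than as one-off snapshots, is what makes them useful for the M&V programs described above.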
ScottMadden Data Center Optimization Methodology
Optimizing data center performance requires careful examination of capacity, costs, energy usage, cooling requirements, and greenhouse gas reduction goals as well as a careful projection of capital outlays and implementation milestones
- Assess current data center operations
- Data center capacity
- Power and cooling requirements
- Current energy usage
- Current data center cost structure
- Current GHG emissions
- Project future data center operations
- Future capacity needs
- Future computing needs
- Security requirements
- Business continuity requirements
- Growth of existing offerings
- Impacts of potential new service offerings (internally identified and/or market change driven)
- Future IT budgets
- Energy efficiency targets
- Develop options for data center improvements
- Server virtualization options
- Computing power requirements (e.g., cloud computing, etc.)
- Power consumption improvement options
- Data center cooling options
- GHG requirements
- M&V and carbon management
- Explore investments in power saving technologies
- Develop strategy for data center optimization
- Plan for data center scaling requirements
- Plan for data center power and cooling usage
- Plan for M&V and carbon management
- Plan for attaining budgeted data center requirements
- Evaluate potential for LEED certification
- Develop data center improvement implementation plan
- Develop gap analysis
- Build gap closure plan
- Cost
- Schedule
- Milestones
- M&V implementation
- Energy/carbon performance monitoring
- Establish data center metrics and goals