Looking back on 10 years of building world-class data centers
November 9, 2021, by Tom Furlong
In 2008, Meta (then Facebook) was nowhere near the size it is today. Before we built our first data center in Prineville, Oregon, and founded the Open Compute Project, we did what many other companies that need data center capacity do — we leased or rented data center space from colocation providers. This sort of arrangement works fine unless the market experiences a major impact … something like the 2008 financial crisis.
The financial crisis hit the data center business right at a time when we were in the middle of a negotiation with one of the big colocation providers. They didn’t want to commit to all this spending until they had a better idea of what 2009 would be like. This was totally understandable from a business perspective, but it put us, as a potential customer, in a rather uncomfortable position.
We ended up making smaller deals, but they weren’t efficient from the standpoint of what we ultimately wanted — a way to handle how rapidly Facebook was growing. On the Infrastructure team, we always wanted an infrastructure that facilitates the growth of the business rather than holding it back. That’s not easy when your plan for the next two years effectively gets thrown in the trash.
That was the moment where we really asked what we could do to ensure that the company had the infrastructure it would need going forward. The only answer was that we had to take control of our data centers, which meant designing and building our own.
In 2009, we started looking at what it would really mean to build and operate our own data centers, and what our goals should be. We knew we wanted the most efficient data center and server ecosystem possible. To do that, we decided to create an infrastructure that was open and modular, with disaggregated hardware, and software that is resilient and portable. Having disaggregated hardware — breaking down traditional data center technologies into their core components — makes it easy and efficient to upgrade our hardware as new technologies become available. And having software that can move around and be resilient during outages allows us to minimize the number of redundant systems and build less physical infrastructure. It means the data centers will be less expensive to build and operate, and more efficient.
The napkin
I had previously designed and constructed data centers for Exodus Communications and Yahoo, so I knew what we needed to do and who I wanted to work with on this for Meta: Jay Park, a brilliant electrical engineer I had worked with at Exodus, who I ultimately brought on to lead the Data Center Design & Engineering team. Jay joined the team in early 2009, and we spent those first six months trying to decide exactly what the scope for this project would be. We had an idea that there is a symbiosis between the data center itself and the hardware inside it, so we were standing up the data center and hardware development teams at the same time.
When we think about designing a data center for high availability — operating with limited to no downtime — one thing to remember is that less is often more. Less equipment can yield higher reliability because you've eliminated potential points of equipment failure. Jay's view of the electrical system was the same: we want to limit the number of times we convert electricity from one voltage to another, because each conversion results in some loss of efficiency. Every time you do that — whether you're going from utility voltage to medium voltage to the voltage used inside the data centers — some energy is lost in the form of heat from the transformer. That's inefficient, and efficiency has always been a core objective of the Infrastructure team.
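To make the arithmetic behind "less is more" concrete, here is a minimal sketch in Python. Multiplying per-stage efficiencies shows why every conversion you remove matters; the stage efficiencies below are illustrative assumptions, not our measured figures.

```python
# Illustrative comparison of cumulative efficiency for two power chains.
# Stage efficiencies are assumed values for illustration only.

def chain_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# A traditional chain: utility -> UPS (double conversion) -> PDU transformer -> server PSU
traditional = chain_efficiency([0.98, 0.92, 0.98, 0.90])

# A reduced chain: utility -> single transformation -> server PSU
reduced = chain_efficiency([0.98, 0.95])

print(f"traditional chain: {traditional:.1%} of input power reaches the servers")
print(f"reduced chain:     {reduced:.1%} of input power reaches the servers")
# Every stage removed is energy that is no longer lost as transformer/PSU heat.
```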
The challenge was how to deal with these transitions in the electrical system, plus the fact that we have to convert from AC to DC. You need AC voltage driving your servers, but you also need a DC battery of some kind to power things in case of an outage. Some big data centers use very large battery banks that serve the whole facility. In our case, we opted to keep the batteries inside the same racks the servers are in. The catch, however, was that there weren't any server power supplies available that could switch between AC input and the DC voltage supplied by the batteries.
Then Jay had an epiphany. He told me he was lying in bed, thinking about our need for this shift from AC to DC, when the idea hit. He jumped up, and all he had at his bedside was a napkin. He jotted down what he thought this electrical circuit would look like, then went to the hardware team the next day and asked if they could make it work.
That was the origin of our highly efficient electrical system, which uses fewer transitions, and the idea that the servers themselves could toggle between AC and DC reasonably simply and quickly. Once this piece of the puzzle was in place, it laid the groundwork for us to start designing and building our very first data center in Prineville.
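As a rough illustration of that circuit idea, the sketch below models a server power supply that prefers utility AC and toggles to the in-rack battery's DC bus during an outage. The class, the selection logic, and the voltages are hypothetical stand-ins, not the actual power design.

```python
# Hypothetical sketch of the rack-level power concept described above:
# servers draw utility AC in normal operation and fall back to the
# in-rack battery's DC bus during an outage. Names and voltages are
# illustrative assumptions, not actual specifications.

from dataclasses import dataclass

@dataclass
class PowerFeed:
    name: str
    available: bool
    voltage: float  # nominal volts

def select_input(utility_ac: PowerFeed, rack_battery_dc: PowerFeed) -> PowerFeed:
    """Prefer utility AC; toggle to the rack battery's DC feed on outage."""
    if utility_ac.available:
        return utility_ac
    if rack_battery_dc.available:
        return rack_battery_dc
    raise RuntimeError("no power source available")

utility = PowerFeed("utility AC", available=False, voltage=277.0)
battery = PowerFeed("rack battery DC", available=True, voltage=48.0)

feed = select_input(utility, battery)
print(f"server PSU drawing from: {feed.name} ({feed.voltage} V)")
```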
Once we aligned on the strategy to limit the electrical conversions in the system, we sought the most efficient way to remove the heat that's generated when conversions are necessary. That meant thinking about things like making the servers a bit taller than usual, allowing for bigger heat sinks, and creating efficient airflow through the data center itself.
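A quick back-of-envelope calculation shows why heat sinks and airflow matter together. Using the standard sensible-heat relation, the airflow needed to carry away a given heat load shrinks as the allowed inlet-to-outlet temperature rise grows; all numbers below are assumed for illustration.

```python
# Back-of-envelope airflow needed to carry server heat away, using the
# standard sensible-heat relation Q = m_dot * cp * dT. All numbers are
# illustrative assumptions, not measurements from our facilities.

AIR_DENSITY = 1.2  # kg/m^3, roughly sea level at room temperature
AIR_CP = 1005.0    # J/(kg*K), specific heat of air

def airflow_m3_per_s(heat_watts: float, delta_t_kelvin: float) -> float:
    """Volumetric airflow required to remove heat_watts at a given temperature rise."""
    mass_flow = heat_watts / (AIR_CP * delta_t_kelvin)  # kg/s
    return mass_flow / AIR_DENSITY                      # m^3/s

# Example: a 10 kW rack with a 12 K allowed inlet-to-outlet rise
flow = airflow_m3_per_s(10_000, 12)
print(f"required airflow: {flow:.2f} m^3/s ({flow * 2118.88:.0f} CFM)")
# A larger allowed temperature rise (taller servers, bigger heat sinks,
# cleaner flow paths) means less air to move, and therefore less fan energy.
```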
We knew we wanted to avoid large-scale mechanical cooling (e.g., air or water cooled chillers) because they were very energy intensive and would’ve led to a significant reduction in overall electrical efficiency of the data center. One idea was to run outside air through the data center and let that be part of the cooling medium. Instead of a traditional air conditioning system, then, we’d have one that uses outside air and direct evaporative cooling to cool the servers and remove the heat generated from the servers from the building entirely.
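A direct evaporative cooler can be approximated with a standard effectiveness model: supply air approaches the outdoor wet-bulb temperature. The sketch below uses an assumed effectiveness of 0.9, a typical value for media-based coolers rather than a figure from our facilities.

```python
# Sketch of the direct evaporative cooling idea: supply air approaches the
# outdoor wet-bulb temperature. The 0.9 effectiveness is an assumed,
# typical value for media-based coolers, not a facility specification.

def evaporative_supply_temp(dry_bulb_c: float, wet_bulb_c: float,
                            effectiveness: float = 0.9) -> float:
    """Supply temperature of a direct evaporative cooler."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Example: a hot, dry afternoon at 35 C dry bulb and 18 C wet bulb
supply = evaporative_supply_temp(35.0, 18.0)
print(f"supply air: {supply:.1f} C")  # ~19.7 C, cool enough for server inlets
# No compressors involved: the energy cost is fans and water, which is why
# this approach is so much more efficient than mechanical chillers.
```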
What’s more, today we use an indirect cooling system in locations with less than ideal environmental conditions (e.g., extreme humidity or high dust levels) that could interfere with direct cooling. Not only do these indirect cooling systems protect our servers and equipment, but they’re also more energy- and water-efficient than traditional air conditioners or water chillers. Strategies like this have allowed us to build data centers that use at least 50 percent less water than typical data centers.
In the 10 years since we built our first data center in Prineville, the fundamental concepts of our original design have remained the same. But we’re continually making optimizations. Most significantly, we’ve added additional power and cooling to handle our increasing network requirements.
In 2018, for example, we introduced our StatePoint Liquid Cooling (SPLC) system into our data centers. SPLC is a first-of-its-kind liquid cooling system that is energy- and water-efficient and allows us to build new data centers in areas where direct cooling isn’t a viable solution. It is probably the single most significant change to our original design and will continue to influence future data center designs.
The original focus on minimizing electrical voltage transitions and determining how best to cool are still core attributes of our data centers. It’s why Facebook’s facilities are some of the most efficient in the world. On average, our data centers use 32 percent less energy and 80 percent less water than the industry standard.
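For readers who want to sanity-check claims like these, the standard industry metrics are PUE (power usage effectiveness) and WUE (water usage effectiveness). The sketch below computes both; the sample figures are invented for illustration, not our published measurements.

```python
# The standard efficiency metrics behind comparisons like the one above.
# Example values are illustrative assumptions, not published measurements.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return water_liters / it_equipment_kwh

# Hypothetical year of operation for one building:
print(f"PUE: {pue(120_000_000, 109_000_000):.2f}")    # ~1.10
print(f"WUE: {wue(20_000_000, 109_000_000):.2f} L/kWh")
```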
Software plays an important role in all of this as well. As I mentioned, we knew from the start that software resiliency would play a big part in our data centers’ efficiency. Take my word for it when I say that, back in 2009, the software couldn’t do any of the things it can do today. The strides we made in terms of the ability and the resiliency on the software side are unbelievable. For example, today we employ a series of software tools that help our engineers detect, diagnose, remediate, and repair peripheral component interconnect express (PCIe) hardware faults in our data centers.
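As a simplified picture of what such a tool does, the sketch below triages a PCIe device from its error counters and decides between a software remediation and a physical repair ticket. The thresholds, names, and fault model are hypothetical and do not describe our actual tooling.

```python
# A hypothetical sketch of the detect -> diagnose -> remediate -> repair
# flow described above. Function names, thresholds, and the fault model
# are invented for illustration.

from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    RESET_LINK = auto()            # software remediation: retrain the PCIe link
    CREATE_REPAIR_TICKET = auto()  # hand off to a data center technician

@dataclass
class PcieDeviceHealth:
    host: str
    device: str
    correctable_errors_per_hour: float
    uncorrectable_errors: int

def triage(health: PcieDeviceHealth) -> Action:
    """Decide how to handle an unhealthy PCIe device."""
    if health.uncorrectable_errors > 0:
        return Action.CREATE_REPAIR_TICKET   # likely a real hardware fault
    if health.correctable_errors_per_hour > 100:
        return Action.RESET_LINK             # often recoverable in software
    return Action.NONE

sample = PcieDeviceHealth("host123", "nic0",
                          correctable_errors_per_hour=250,
                          uncorrectable_errors=0)
print(triage(sample))  # Action.RESET_LINK
```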
If I were to characterize the differences between how we thought about our data center program and how more traditional industries do, I think we were much more calculating about trying to assess risk versus the reward to efficiency. And risk can be mitigated by software being more resilient. Software optimizations allow us, for example, to move the server workload away from one data center to another in an emergency without interrupting any of our services.
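Conceptually, such a drain can be modeled as re-weighting traffic across regions. The sketch below zeroes out one data center's share and renormalizes the rest; the weighted-routing model and the region names are illustrative only.

```python
# A simplified sketch of draining workload from one data center to others,
# as in the emergency scenario described above. The weighted-routing model
# and region names are illustrative assumptions.

def drain_region(weights: dict[str, float], drained: str) -> dict[str, float]:
    """Set the drained region's traffic share to zero and renormalize the rest."""
    remaining = {r: w for r, w in weights.items() if r != drained}
    total = sum(remaining.values())
    if total == 0:
        raise RuntimeError("no healthy regions left to absorb traffic")
    redistributed = {r: w / total for r, w in remaining.items()}
    redistributed[drained] = 0.0
    return redistributed

# Traffic shares before an incident in 'prineville':
before = {"prineville": 0.4, "forest_city": 0.3, "lulea": 0.3}
after = drain_region(before, "prineville")
print(after)  # {'forest_city': 0.5, 'lulea': 0.5, 'prineville': 0.0}
# Because services are resilient and portable, the shift happens without
# user-visible interruption; capacity planning must leave headroom for it.
```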
Now that we have 10 years of history behind us, we’re thinking about the next 10 years and beyond. We share our designs, motherboards, schematics, and more through the Open Compute Project in the hope of spurring collective innovation. In 2021, we’ve furthered our disaggregation efforts by working with new chipmakers and OEMs to expand the open hardware in our data centers. Open hardware drives innovation, and working with more vendors means more opportunity to develop next-generation hardware to support current and emerging features across Meta’s family of technologies.
As I’m writing this, we have 48 active buildings and another 47 buildings under construction, so we’re going to have more than 70 buildings in the near future that all look like our original concept. But they also need to stay relevant and in line with future trends — particularly when it comes to sustainability.
In 2020, we reached net zero emissions in our direct operations. Our global operations are now supported by 100 percent renewable energy. As of today, we have contracted for over 7 gigawatts of new wind and solar energy, all on the same grids as the data centers they support.
The data centers we build in the future will continue this trend. We think about sustainability at every step, from the energy sources that power them all the way down to the design and construction of the data centers themselves. For example, we have set ambitious goals to reach net zero emissions for our value chain and be water positive by 2030, meaning we will restore more water to local watersheds than our data centers consume. In building our newest data centers, we've been able to divert, on average, 80 percent of our waste footprint away from landfills by reusing and recycling materials.
There is a lot of activity in the data center and construction industries today, which puts pressure on us to find the right sites and partners. It also means we need to create more flexible site selection and construction processes.
All this effort also involves looking at our vendors and contractors more as partners. We can't just make this about dollars. We have to make it about performance. We have to make it about driving best practices and continuous improvement.
But that’s not the way the construction industry typically works. So, we’ve had to bring a lot of our own ideas about running operations and making improvements and impress them on the companies we work with.
Moving into the data center arena was never going to be easy. But I think we’ve ended up with an amazing program at a scale that I never, ever would have imagined. And we’re always being asked to do more. That’s the business challenge, and it’s probably one of the main things that keep me and my team coming in to work every day. We have this enormous challenge ahead of us to do something that is unbelievably massive at scale.