Data centers and the cloud – an integral part of the digital world in which almost all user data, photos, music and films are stored – are also huge energy guzzlers. Ironically, most of the energy used to keep them running goes not into data processing but into keeping the servers cool.
This problem is exacerbated by the complex design of modern servers, which leads to high operating temperatures, according to David Atienza Alonso, head of the embedded systems laboratory at the Swiss Federal Institute of Technology in Lausanne (EPFL). “As a result, servers cannot operate at their full potential without the risk of overheating and system failures,” he told journalists who visited the EPFL campus in the hilly city of Lausanne on the shores of Lake Geneva, halfway between the Jura Mountains and the Swiss Alps.
Faced with this problem, a new server architecture being developed at EPFL is experimenting with a so-called “multi-core architecture template with an integrated on-chip microfluidic fuel cell network” – meaning that it employs tiny chip-level microfluidic channels, and the liquid flowing through them both cools the servers and converts waste heat into electricity. Etching layers of small channels between the layers of silicon and then pumping liquid through those channels makes it theoretically possible to extract heat from a stacked chip fast enough to keep it running without overheating.
This on-chip microfluidic fuel cell network is one of several solutions being tested around the world to manage the heat generated by modern servers during operation. Other technical interventions include an experiment by a US-based company called Subsea Cloud, which proposes building commercial data centers in deep sea waters and claims it is close to physically launching an underwater capsule near Port Angeles, Washington state.
Microsoft has also proposed something similar: build a large tube with closed ends, place servers inside, and lower it to the sea floor. As part of this effort, Microsoft’s Project Natick team dropped its Northern Isles data center 34 meters to the seabed off Scotland’s Orkney Islands in spring 2018, and over the next two years team members tested and monitored the performance and reliability of the data center’s servers. The team hypothesized that a sealed container on the seabed could offer opportunities to improve the overall reliability of data centers. Lessons learned from Project Natick will feed into Microsoft’s data center sustainability strategy on energy, waste and water, Ben Cutler, a project manager in Microsoft’s Special Projects Research Group who leads Project Natick, said in an official blog after the data center was retrieved in 2020.
The reason for all these experiments lies in how computer chips are constructed today: they receive electrical power via thin copper wires and emit the heat they generate into the surrounding air, forcing the air conditioners in server rooms to work overtime. The need for a continuous flow of air to dissipate heat has forced chip designers to rely on a more or less flat design for chip packaging. This is extremely inefficient in terms of space usage, especially as integrated circuit technology is continually scaled down to smaller transistor sizes to keep up with the increasing computing demands of today’s home and office applications.
By using fluidic channels through which water flows, designers can exploit water’s much higher heat absorption capacity compared to air, making it possible to cool chip components that sit closer together, Atienza Alonso said. This allows components to be stacked on top of each other in a three-dimensional arrangement, improving server efficiency and making servers much denser in terms of storage capacity.
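A back-of-envelope calculation shows why water is so much more effective a coolant per unit volume than air. The figures below are generic textbook property values at roughly room temperature, not numbers from the EPFL project:

```python
# Rough comparison: heat carried away per unit volume, per degree of
# temperature rise, for water versus air. Property values are standard
# textbook figures at ~25 degrees C.

WATER_DENSITY = 997.0         # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
AIR_DENSITY = 1.18            # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

water_volumetric = WATER_DENSITY * WATER_SPECIFIC_HEAT  # J/(m^3*K)
air_volumetric = AIR_DENSITY * AIR_SPECIFIC_HEAT        # J/(m^3*K)

ratio = water_volumetric / air_volumetric
print(f"Water stores roughly {ratio:,.0f}x more heat per unit volume than air")
```

On these figures the ratio comes out in the thousands, which is why a trickle of liquid through microscopic channels can do the work of a room full of air conditioners.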
According to Atienza Alonso, the EPFL project intends to completely overhaul the current computer server architecture to drastically improve its energy efficiency and that of the data centers it serves. The 3D architecture his team is designing, he said, can simultaneously overcome “the worst of power and cooling problems” by employing what he calls a “heterogeneous computing architecture template”: the integrated microfluidic fuel cell channels recycle the energy expended on cooling, recovering up to 40 percent of the energy typically consumed by data centers. With further gains expected as microfluidic cell array technology matures, a data center’s energy consumption should fall significantly as more computation is performed with the same amount of energy.
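The 40 percent figure can be put in perspective with simple arithmetic. The 10 MW facility size below is a purely hypothetical illustration, not a number from the project:

```python
# Illustrative arithmetic for the "up to 40 percent" recovery claim.
# The 10 MW facility draw is a hypothetical figure chosen for illustration.

facility_draw_mw = 10.0    # hypothetical total data-center power draw
recovery_fraction = 0.40   # upper bound cited for the EPFL approach

recovered_mw = facility_draw_mw * recovery_fraction
net_draw_mw = facility_draw_mw - recovered_mw
print(f"Recovered: {recovered_mw} MW; net grid draw: {net_draw_mw} MW")
```

In other words, at the claimed upper bound a facility of that size would effectively pull only 6 MW from the grid for every 10 MW of computing it delivers.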
“Thanks to the integration of new optimized computing architectures and accelerators, the next generation of workloads can run much more efficiently in the cloud,” said Atienza Alonso. “As a result, servers in data centers can serve many more applications with much less energy, dramatically reducing the carbon footprint of the IT and cloud computing sector.”
Ultimately, if any or all of these experiments work and can be deployed at scale, it could result in a quantum leap in the way typical data centers and the cloud work. Using a liquid coolant inside the chip is an idea that’s been discussed for some time, with engineers at IBM originally proposing it almost a decade ago to address the problem of cooling 3D chips. However, as these cooling solutions are nearing market maturity, 3D server stacking is now being viewed as a potentially game-changing step in increasing server performance.
Any breakthrough technology would be welcome news in countries experiencing ever-increasing data consumption, which drives the need for data storage and processing and the growing demand for data centers. In most countries, including India, storing data locally is becoming increasingly important as privacy and security become top priorities.
The USA dominates the world with over 2,500 data centers, while Germany has around 490. India ranks thirteenth among the countries with the highest number of data centers, although the country’s data center capacity has been growing rapidly – pegged at 637 MW in the first half of 2022 and expected to double to 1,318 MW by 2024. Mumbai hosts almost half of the country’s data centers, followed by Bengaluru and Chennai.
(The author was in Switzerland on a trip arranged by the Swiss government)