
Today’s technology would not exist without datacenters. Six in 10 people use modern web services such as social media, web search, video streaming, online banking, and online healthcare, all of which depend on datacenters that scale to hundreds of thousands of high-end servers. Researchers at Carnegie Mellon are rethinking datacenter architecture to improve its cost- and energy-efficiency, sustainability, and equity.

Cavernous and cold, datacenters house thousands of servers that store information and route signals for billions of users. Everything from a typical Google search to a messaging conversation is directed through a datacenter. Traditionally, large-scale datacenters have taken a performance-first approach: responses must reach the user quickly enough to sustain a positive experience and keep users from abandoning the service. For every user request, service operators typically aim to respond within 300 milliseconds so that the response is perceived as instantaneous. With the surge of devices and users coming online daily and the growing amount of data being exchanged, the demand for faster, more efficient cloud services is rising sharply.
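The 300-millisecond target mentioned above is usually enforced on the slowest requests, not the average. A minimal sketch (with invented latency data, not measurements from any real service) of checking a service's 99th-percentile latency against that target:

```python
import random

# Per-request response-time target (ms) for a response to feel instantaneous,
# as described in the article.
SLO_MS = 300

def p99(latencies_ms):
    """Return the 99th-percentile latency from a list of samples."""
    ordered = sorted(latencies_ms)
    index = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[index]

# Simulated request latencies in milliseconds (hypothetical data).
random.seed(0)
samples = [random.gauss(120, 40) for _ in range(10_000)]
tail = p99(samples)
print(f"p99 latency: {tail:.1f} ms; meets 300 ms target: {tail <= SLO_MS}")
```

Operators track the tail (p99) rather than the mean because even a 1% rate of slow responses is visible to heavy users who issue many requests.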

“When you think about all of this in the systems context of a datacenter setting, which is servicing all of these users, the data they are interested in, as well as the increased functionality, it’s becoming challenging to deal with user requirements in a way that the user is still seeing responses that they need,” says Akshitha Sriraman, assistant professor of electrical and computer engineering. “Users expect to see their search results in an instant, or what appears instantaneously. Their expectations from the applications are also growing but at no cost to them.”


An expert in computer architecture and systems software, Sriraman is rethinking datacenter computing across hardware and software systems to enable efficient, sustainable, and equitable large-scale web systems.

The fundamental architecture of modern datacenter servers is essentially unchanged from that of the desktop PCs of the 1980s. Computations are faster, of course, but the basic design of a central processing unit supported by memory and storage is the same. Because datacenter workloads have evolved so rapidly, it is worth stepping back to reconsider how best to design hardware and software systems for them.

“We are redesigning datacenters from first principles, thinking about what these servers should look like at the hardware level in a way that they can be cost- and energy-efficient,” says Sriraman. “And second, we are looking at how to program this new hardware, and what kinds of software paradigms are needed to take advantage of that hardware in a more efficient way.”

Currently, hardware architects build specialized hardware for each service operation, which is economically impractical at datacenter scale. To improve cost-efficiency without compromising performance, Sriraman proposes identifying and accelerating important and common operations across diverse services. To accelerate common data orchestration operations that handle ever-growing data, instead of using traditional compute-centric hardware architectures, Sriraman will introduce novel data-centric architectures that eliminate data movement overheads.
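One way to find which operations are worth accelerating in shared hardware is to profile many services and rank operations that are both widespread and costly in aggregate. The sketch below illustrates that idea with invented profile data; the service names and cycle percentages are hypothetical, not from Sriraman's work:

```python
from collections import Counter

# Hypothetical per-service profiles: fraction of CPU cycles (%) spent in
# each operation. Real data would come from datacenter-wide profiling.
profiles = {
    "search":    {"serialization": 14.0, "compression": 9.0, "ranking": 30.0},
    "messaging": {"serialization": 12.0, "encryption": 8.0, "routing": 5.0},
    "video":     {"compression": 22.0, "serialization": 6.0, "encoding": 40.0},
}

totals = Counter()    # total % of cycles per operation, summed over services
coverage = Counter()  # number of services that spend time in each operation

for service, ops in profiles.items():
    for op, pct in ops.items():
        totals[op] += pct
        coverage[op] += 1

# Operations present in every service, ranked by aggregate cost: these are
# the candidates for a shared accelerator rather than per-service hardware.
common = [(op, totals[op]) for op in totals if coverage[op] == len(profiles)]
for op, cost in sorted(common, key=lambda kv: -kv[1]):
    print(f"{op}: {cost:.1f}% of cycles across all services")
```

An accelerator for one such common operation amortizes its design cost over every service, which is what makes the approach economical at datacenter scale.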


“Introducing cost-efficient, high-performance hardware will enable new companies and industries to accessibly enter the technology field,” says Sriraman.

With this increase in demand, the logical answer is to keep building larger and more numerous datacenters. However, this is not sustainable in the long term. Not only are these colossal datacenters extremely expensive to build and maintain, but their carbon footprint is massive.

“To enable sustainable datacenters, we must carbon-efficiently architect and manufacture hardware and make the most out of existing hardware,” explains Sriraman. “Datacenters must adopt the mindset of reducing, reusing, and recycling hardware.”

Building on this theme, Sriraman has formulated carbon-efficiency metrics to help identify services that perform acceptably on older servers, ultimately extending the server lifetime.
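The intuition behind such metrics can be sketched with a back-of-the-envelope model: a server's annual carbon cost is its manufacturing ("embodied") carbon amortized over its lifetime, plus the operational carbon of the electricity it consumes. All numbers below are invented for illustration, not figures from Sriraman's research:

```python
# Assumed constants (illustrative, not measured values).
EMBODIED_KG = 1000.0    # kg CO2e to manufacture one server
GRID_KG_PER_KWH = 0.4   # carbon intensity of the electricity grid

def annual_carbon(lifetime_years, avg_power_watts):
    """kg CO2e per year: amortized embodied carbon + operational carbon."""
    operational = avg_power_watts / 1000 * 24 * 365 * GRID_KG_PER_KWH
    return EMBODIED_KG / lifetime_years + operational

# Running the same 300 W server for 6 years instead of 4 spreads the
# fixed embodied cost over more years of useful work.
print(f"4-year lifetime: {annual_carbon(4, 300):.0f} kg CO2e/yr")
print(f"6-year lifetime: {annual_carbon(6, 300):.0f} kg CO2e/yr")
```

The model makes the trade-off concrete: extending lifetime always lowers the amortized embodied term, so an older server is the carbon-efficient choice whenever its performance on a given service remains acceptable.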

“Extending server lifetime minimizes hardware manufacturing’s carbon footprint, which reduces anthropogenic climate change’s effect,” she explains.

While efficiency is important, Sriraman is also using equity as a systems design consideration to identify and mitigate web systems’ inequities. By defining demographic-driven bias metrics based on age, race, and other factors, Sriraman is building datacenter systems that mitigate inequitable decisions. Making equity a first-order concern will also allow us to prioritize building web systems for rural communities. In collaboration with CMU-Africa, the team is working on developing and deploying web systems that work under stringent systems constraints in rural African communities.
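One simple form a demographic-driven metric could take is comparing a service's response time across user groups and flagging the gap. This is a hypothetical sketch; the group names, latency samples, and the specific ratio used here are invented for illustration and are not Sriraman's published metrics:

```python
from statistics import mean

# Hypothetical latency samples (ms) for two user populations.
latencies_ms = {
    "urban_broadband": [110, 95, 120, 105],
    "rural_satellite": [480, 510, 450, 530],
}

def disparity_ratio(groups):
    """Ratio of the worst group's mean latency to the best group's."""
    means = [mean(samples) for samples in groups.values()]
    return max(means) / min(means)

ratio = disparity_ratio(latencies_ms)
print(f"worst-to-best latency disparity: {ratio:.1f}x")
```

Treating such a ratio as a first-order design target, alongside throughput and cost, is what lets a system surface and then mitigate inequitable outcomes rather than averaging them away.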

“Introducing equity as a systems consideration elevates historically underserved communities, which could lift more than a billion people out of poverty,” says Sriraman.

The driving force behind the internet and the devices we rely on, datacenters are the beating heart of everyday technology. By reimagining them to be more efficient, sustainable, and equitable, Sriraman’s work stands to benefit industry, users, and the environment alike.