How the shift to a factory construction mindset can help data centers meet the demands of the AI era
Demand for data centers has never been higher. Where a new data center build was once a rare occurrence, operators now face the prospect of building multiple facilities concurrently, potentially across different sites, while also contending with external pressures: the surge in demand driven by the rise of AI, the power crunch, tighter environmental regulations, site selection, and the ongoing skills drought.
Instead of taking up to five years to bring a site online, the average development cycle is down to 18-24 months for a first facility in a given territory, with aggressive expansion after that. The solution is increasingly seen as lying in prefabricated modules, which are assembled off-site and delivered ready to be installed in place with little or no on-site assembly.
This represents a big change from the traditional method of building everything from scratch on-site, and it opens up many opportunities for refining and streamlining the process at a time when speed and efficiency are top priorities to service the overwhelming demand for new facilities.
Juan Colina, Eaton’s EMEA data center segment leader, is an evangelist for the many benefits of moving to modular data center construction, which can include purpose-designed prefabricated pod and skid systems for power and/or IT that are pre-commissioned and factory tested. As he explains, this approach has been particularly beneficial for the rise in Tier II and III markets, where data center construction can be more challenging and workforces have less experience:
“Modularity and a programmatic approach to building data centers create multiple advantages. This includes increasing workforce options with people who may not have previous experience in data center construction, an overall decrease in total cost of ownership, and an increase in quality and consistency across sites.”
Overcoming supply chain woes
Another key challenge in recent years has been supply chain disruption, caused by a combination of factors ranging from geopolitical issues to manufacturers unable to scale up output to meet the exponential growth in demand. Many operators have overcome this challenge by working with suppliers and partners who can secure goods and raw materials faster.
Colina explains that prefabrication is an ideal way to achieve this. By bringing critical components and labor expertise together in a central location, modules can be assembled in a manufacturing environment, offering more control, reducing shipping costs, and creating consolidated purchasing leverage on prices.
Right skills, right place
As we’ve already alluded to, the skills shortage remains a key challenge in data center development. It’s not just a question of having the right skillsets; it’s having them in the places where they’re actually needed. You may have a brilliant team on the payroll, but asking them to relocate to a new site every time you want to expand your footprint is not tenable, not least for the workers and their families.

Once again, this demonstrates an advantage of modularity. “By moving construction primarily to a factory environment, you are able to offer consistent work at a consistent location. That way you have a regular workforce who you can invest in, upskill, and offer a more conventional workday, with regular hours and a predictable commute,” says Colina.
Many data centers today are being built in phases to support future growth, he explains. Building in a modular way allows you to duplicate your processes and speed up construction. The best practices and learnings transfer between phases, and a more regimented workflow means that if phases are being built simultaneously, there’s less likelihood of builds interfering with each other.
A module is for life
Once a data center or campus is up and running, the approach can be applied to the entire lifecycle. If critical components break beyond economical on-site repair, you can simply remove a faulty module and replace it with a duplicate. Alternatively, if new equipment has to be built to order, doing so in an off-site factory environment reduces the disruption caused to the rest of the facility. It can also be beneficial when it becomes time to upgrade.
“Modularity gives you the opportunity to more easily upgrade and move your systems. The portability of modular enables you to relocate an existing pod to a new location and provides the ability to upgrade to newer pods if business needs change,” says Colina.
He explains that this upgrade can apply to many different facets of the data center environment:
“Imagine, for example, that a revolutionary new type of high-capacity battery becomes available that can offer 24-hour redundancy power in case of grid failure. If your current setup only offers two hours, you’ll be keen to upgrade, and replacing a power module for one that contains this new super-battery will be far easier than taking out a series of disparate components. This can apply to almost any characteristic of the data center, both for upgrading existing sites and improving future ones.”
Think in modules
To really leverage the benefits of modular data centers, you have to go in with a modular mindset from the beginning. Retrofitting modularity is much more challenging than ensuring that all stakeholders are working toward a modular outcome from the outset.
Colina advises that while not everything in the data center has to be modularized, it is important to take time at the blueprint stage to identify areas where a modular approach will be beneficial. These can vary from project to project, with barriers ranging from parts availability to site accessibility and equipment, any of which could require some sort of hybrid approach.
Whatever you decide, the key learning is to make those decisions and identify roadblocks while you’re still at the drafting table:
“Modularity helps create cost efficiencies when at the design stage. The lower upfront capital required is matched with further capital investment as power needs escalate. This scalability in capital deployment is completed with lower construction and engineering cost due to the standardization of designs.”
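The phased capital deployment Colina describes can be illustrated with a toy calculation. The figures below are invented for the sake of the example (they are not Eaton's, nor drawn from any real project): a modular operator funds one prefabricated block at a time as demand materializes, rather than committing the full build cost on day one.

```python
# Hypothetical illustration of phased vs. monolithic capital deployment.
# All figures are invented for this example.

MONOLITHIC_UPFRONT = 100.0  # full 40 MW build funded on day one ($M)

module_cost = 24.0          # per prefabricated 10 MW block ($M)
phases = [0, 12, 24, 36]    # month at which each block is commissioned

def capital_deployed_by(month: int) -> float:
    """Capital committed by a given month under the phased, modular plan."""
    return sum(module_cost for start in phases if start <= month)

print(capital_deployed_by(0))   # only the first block is funded up front
print(capital_deployed_by(36))  # full build-out, spread over three years
```

The point of the sketch is simply that capital tracks demand: at month zero the modular plan has committed roughly a quarter of what the monolithic build requires, with the remainder deployed only as power needs escalate.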
For Colina, the key is to establish the “building blocks” from the outset. These can then be applied to whatever scenario, workload, or density arises far more quickly and easily than building à la carte.

He goes on to warn that, as it is incredibly likely that demands and densities will continue to increase, it is vitally important to have a strategy. Most people didn’t see the sudden uptake of AI coming, and as he attests, the speed of change continues to surprise, adding:
“We have to consider that sites might have additional power requirements once they are being deployed. Keeping an open mind at the very beginning can help decrease cost and increase speed of implementing such solutions later on.”
But what if a new build isn’t an option? While it’s always easier to start afresh than retrofit, modular options can play a vital role in making an existing data center fit for future demands. It won’t work for every environment, but if there’s no spare footprint inside the data hall, you could consider external modules, such as backup batteries, to augment an existing setup.
“There will need to be a balanced approach to how we augment existing sites. It’s not practical to bring modularity to bear on every single site, but there’s plenty of very capable, professional engineers who will be able to identify where it’s best positioned to be utilized,” says Colina.
Greener modules
Since the influx of AI workloads, the burgeoning issue of environmental protection and sustainability has been somewhat overshadowed, but it remains a primary concern, and one that will be of increasing importance in the years to come, as more stringent standards are required at both a local and federal level.
Colina gives the example of Eaton’s EnergyAware grid-interactive UPS, which allows data centers to transfer excess stored power back into the grid at peak times. Additionally, when the power stored in the UPS comes from renewable sources, it further helps to reduce carbon footprint, improve sustainability metrics, and demonstrate a data center operator’s commitment to being a good “grid citizen,” serving the community in which it operates.
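The underlying dispatch idea behind a grid-interactive UPS can be sketched in a few lines. This is an illustrative toy rule, not the EnergyAware control logic: the UPS exports stored energy during grid peaks, but never draws below the reserve it must hold for its primary backup duty.

```python
# Toy sketch of a grid-interactive UPS dispatch rule (illustrative only;
# the reserve fraction and export limit are assumed values, not Eaton's).

RESERVE_FRACTION = 0.8  # always keep 80% of charge for outage protection

def export_kw(state_of_charge: float, grid_peak: bool, max_export_kw: float) -> float:
    """Power (kW) to export to the grid this interval."""
    if grid_peak and state_of_charge > RESERVE_FRACTION:
        return max_export_kw
    return 0.0

print(export_kw(0.95, grid_peak=True, max_export_kw=250.0))   # exports at peak
print(export_kw(0.95, grid_peak=False, max_export_kw=250.0))  # holds charge
print(export_kw(0.50, grid_peak=True, max_export_kw=250.0))   # below reserve
```

The design choice worth noting is the hard reserve floor: backup protection always takes priority over grid services, which is what lets an operator offer flexibility without compromising uptime.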
He explains that a controlled approach is key, and this is, again, an area where modularity can help. Building off-site allows components to be constructed under controlled conditions, protected from environmental contamination, while making the best use of responsibly sourced steel and other ethically sourced components that might be harder to distribute across multiple on-site projects.
As Scope 3 emissions move into sharper focus, a modular approach is once again hugely beneficial, as it reduces the number of journeys taken by staff and equipment to and from disparate locations in favor of a single destination that can be further enhanced by sourcing labor and equipment locally.
Beginning to end
Modularity has benefits throughout the lifecycle of the data center. Colina talks of the “Implement, Operate, Retire” lifecycle of facilities. So far, we’ve spoken mostly about the ‘implement’ aspect, but during operations, modularity can bring additional advantages into play.
Citing Eaton’s Center for Intelligent Power in Dublin, he points to the development of predictive analytics tools that leverage AI to assess the wear and tear on key components, learning as they go until they can spot maintenance issues, resolve them before they result in downtime, and even apply KPIs to critical components.
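The core idea behind such predictive maintenance can be shown with a minimal sketch: flag a component for service when a monitored wear metric trends past a threshold. This toy rolling-average check is an assumption for illustration only, not Eaton’s tooling, and the battery-resistance figures are invented.

```python
# Minimal sketch of threshold-based predictive maintenance (illustrative;
# real tools use far richer models than a rolling average).

from statistics import mean

def needs_service(readings: list[float], threshold: float, window: int = 5) -> bool:
    """Flag a component when the rolling average of its recent wear
    readings exceeds the service threshold."""
    if len(readings) < window:
        return False  # not enough history to judge a trend
    return mean(readings[-window:]) > threshold

# e.g. UPS battery internal resistance (milliohms), rising as cells age
history = [4.1, 4.2, 4.2, 4.4, 4.6, 4.9, 5.3]
print(needs_service(history, threshold=4.5))  # recent trend crosses the line
```

Averaging over a window rather than reacting to a single reading is what separates trend detection from noise, which is the same intuition, vastly scaled up, behind the AI-driven tools Colina describes.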
This is particularly important when it comes to bringing equipment, and even the entire data center, to end-of-life.
The modular future
So what does the future hold for modular data centers? There’s little doubt that they are increasingly being seen as the solution for operators looking to bolster speed and scale. This will become increasingly important for use cases such as autonomous vehicles, which are likely to reach mainstream adoption in a number of territories over the next decade.
These vehicles require a huge amount of compute and data transfer to operate, especially as AI learns how to refine their driving style. All this data will need to be harnessed and controlled, particularly if the knowledge base is to be pooled and disseminated to other vehicles.
At the heart of this revolution is Edge compute, which is still a burgeoning technology. Specific types of inference AI workloads, such as autonomous vehicles, industrial automation and healthcare monitoring, are of most use when the data center is as close to the end user as possible.
This requires a huge investment in Edge facilities, rolled out in a compressed space of time to mitigate cost of real estate and get as close as possible to where the demand is being generated. Once again, modularity is able to serve the situation brilliantly, allowing Edge sites to be populated quickly and easily.