The Data Center Frontier Show
Author: Endeavor Business Media
Copyright Data Center Frontier LLC © 2019
Description
Welcome to the Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.
176 Episodes
In this Data Center Frontier Trends Summit 2025 session—moderated by Stu Dyer (CBRE) with panelists Aad den Elzen (Solar Turbines/Caterpillar), Creede Williams (Exigent Energy Partners), and Adam Michaelis (PointOne Data Centers)—the conversation centered on a hard truth of the AI buildout: power is now the limiting factor, and the grid isn’t keeping pace.
Dyer framed how quickly the market has escalated, from “big” 48MW campuses a decade ago to today’s expectations of 500MW-to-gigawatt-scale capacity. With utility timelines stretched and interconnection uncertainty rising, the panel argued that natural gas has moved from taboo to toolkit—often the fastest route to firm power at meaningful scale.
Williams, speaking from the IPP perspective, emphasized that speed-to-power requires firm fuel and financeable infrastructure, warning that “interruptible” gas or unclear supply economics can undermine both reliability and underwriting. Den Elzen noted that gas is already a proven solution across data center deployments, and in many cases is evolving from a “bridge” to a durable complement to the grid—especially when modular approaches improve resiliency and enable phased buildouts. Michaelis described how operators are building internal “power plant literacy,” hiring specialists and partnering with experienced power developers because data center teams can’t assume they can self-perform generation projects.
The panel also “de-mystified” key technology choices—reciprocating engines vs. turbines—as tradeoffs among lead time, footprint, ramp speed, fuel flexibility, efficiency, staffing, and long-term futureproofing. On AI-era operations, the group underscored that extreme load swings can’t be handled by rotating generation alone, requiring system-level design with controls, batteries, capacitors, and close coordination with tenant load profiles.
Audience questions pushed into public policy and perception: rate impacts, permitting, and the long-term mix of gas, grid, and emerging options like SMRs. The panel’s consensus: behind-the-meter generation can help shield ratepayers from grid-upgrade costs, but permitting remains locally driven and politically sensitive—making industry communication and advocacy increasingly important.
Bottom line: in the new data center reality, natural gas is here—often not as a perfect answer, but as the one that matches the industry’s near-term demands for speed, scale, and firm power.
In this episode, we crack open the world of ILA (In-Line Amplifier) huts, the unassuming shelters that quietly power fiber connectivity. Like mini utility substations of the fiber world, these small, secure, and distributed facilities keep internet, voice, and data networks running reliably, especially over long distances or in developing areas. From the analog roots of signal amplification to today’s digital optical technologies, this conversation explores how ILAs are redefining long-haul fiber transport.
We’ll discuss how these compact, often rural, mini data centers are engineered and built to boost light signals across vast distances. But it’s not just about the tech. There are real-world challenges to deploying ILAs: from acquiring land in varied environments, to coordinating civil construction at often-isolated sites. You’ll learn why site selection is as much about geology and permitting as it is about signal loss, and what factors can make or break an ILA deployment.
We also explore the growing role of hyperscalers and colocation providers in driving ILA expansion, adjacent revenue opportunities, and what ILA facilities can mean for the future of rural connectivity.
Tune in to find out how the pulse of long-haul fiber is beating louder than ever.
In this panel session from the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., JLL’s Sean Farney moderates a high-energy panel on how the industry is fast-tracking AI capacity in a world of power constraints, grid delays, and record-low vacancy.
Under the banner “Scaling AI: The Role of Adaptive Reuse and Power-Rich Sites in GPU Deployment,” the discussion dives into why U.S. colocation vacancy is hovering near 2%, how power has become the ultimate limiter on AI revenue, and what it really takes to stand up GPU-heavy infrastructure at speed.
Schneider Electric’s Lovisa Tedestedt, Aligned Data Centers’ Phill Lawson-Shanks, and Sapphire Gas Solutions’ Scott Johns unpack the real-world strategies they’re deploying today—from adaptive reuse of industrial sites and factory-built modular systems, to behind-the-fence natural gas, microgrids, and emerging hydrogen and RNG pathways. Along the way, they explore the coming “AI inference edge,” the rebirth of the enterprise data center, and how AI is already being used to optimize data center design and operations.
During this talk, you’ll learn:
* Why record-low vacancy and long interconnection queues are reshaping AI deployment strategy.
* How adaptive reuse of legacy industrial and commercial real estate can unlock gigawatt-scale capacity and community benefits.
* The growing role of liquid cooling, modular skids, and grid-to-chip efficiency in getting more power to GPUs.
* How behind-the-meter gas, virtual pipelines, and microgrids are bridging multi-year grid delays.
* Why many experts expect a renaissance of enterprise data centers for AI inference at the edge.
Moderator:
Sean Farney, VP, Data Centers, Jones Lang LaSalle (JLL)
Panelists:
Tony Grayson, General Manager, Northstar
Lovisa Tedestedt, Strategic Account Executive – Cloud & Service Providers, Schneider Electric
Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers
Scott Johns, Chief Commercial Officer, Sapphire Gas Solutions
Recorded live at the 2025 Data Center Frontier Trends Summit in Reston, VA, this panel brings together leading voices from the utility, IPP, and data center worlds to tackle one of the defining issues of the AI era: power.
Moderated by Buddy Rizer, Executive Director of Economic Development for Loudoun County, the session features:
Jeff Barber, VP Global Data Centers, Bloom Energy
Bob Kinscherf, VP National Accounts, Constellation
Stan Blackwell, Director, Data Center Practice, Dominion Energy
Joel Jansen, SVP Regulated Commercial Operations, American Electric Power
David McCall, VP of Innovation, QTS Data Centers
Together they explore how hyperscale and AI workloads are stressing today’s grid, why transmission has become the critical bottleneck, and how on-site and behind-the-meter solutions are evolving from “bridge power” into strategic infrastructure.
The panel dives into the role of gas-fired generation and fuel cells, emerging options like SMRs and geothermal, the realities of demand response and curtailment, and what it will take to recruit the next generation of engineers into this rapidly changing ecosystem.
If you want a grounded, candid look at how energy providers and data center operators are working together to unlock new capacity for AI campuses, this conversation is a must-listen.
Live from the Data Center Frontier Trends Summit 2025 – Reston, VA
In this episode, we bring you a featured panel from the Data Center Frontier Trends Summit 2025 (Aug. 26-28), sponsored by Schneider Electric. DCF Editor in Chief Matt Vincent moderates a fast-paced, highly practical conversation on what “AI for good” really looks like inside the modern data center—both in how we build for AI workloads and how we use AI to run facilities more intelligently.
Expert panelists included:
Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric
Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters
Andrew Whitmore, VP of Sales, Motivair
Together they unpack:
How AI is driving unprecedented scale—from megawatt data halls to gigawatt AI “factories” and 100–600 kW rack roadmaps
What Schneider and NVIDIA are learning from real-world testing of Blackwell and NVL72-class reference designs
Why liquid cooling is no longer optional for high-density AI, and how to retrofit thousands of brownfield, air-cooled sites
How Compass is using AI, predictive analytics, and condition-based maintenance to cut manual interventions and OPEX
The shift from “constructing” to assembling data centers via modular, prefab approaches
The role of AI in grid-aware operations, energy storage, and more sustainable build and operations practices
Where power architectures, 800V DC, and industry standards will take us over the next five years
If you want a grounded, operator-level view into how AI is reshaping data center design, cooling, power, and operations—beyond the hype—this DCF Trends Summit session is a must-listen.
On this episode of The Data Center Frontier Show, Editor in Chief Matt Vincent sits down with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, to unpack Flex’s bold new integrated data center platform as unveiled at the 2025 OCP Global Summit.
Flex says the AI era has broken traditional data center models, pushing power, cooling, and compute to the point where they can no longer be engineered separately. Their answer is a globally manufactured, pre-engineered platform that unifies these components into modular pods and skids, designed to cut deployment timelines by up to 30 percent and support gigawatt-scale AI campuses.
Rob and Chris explain how Flex is blending JetCool’s chip-level liquid cooling with scalable rack-level CDUs; how higher-voltage DC architectures (400V today, 800V next) will reshape power delivery; and why Flex’s 110-site global manufacturing footprint gives it a unique advantage in speed and resilience.
They also explore Flex’s lifecycle intelligence strategy, the company’s circular-economy approach to modular design, and their view of the “data center of 2030”—a landscape defined by converged power and IT, liquid cooling as default, and modular units capable of being deployed in 30–60 days.
It’s a deep look at how one of the world’s largest manufacturers plans to redefine AI-scale infrastructure.
Artificial intelligence is completely changing how data centers are built and operated. What used to be relatively stable IT environments are now turning into massive power ecosystems. The main reason is simple — AI workloads need far more computing power, and that means far more energy.
We’re already seeing a sharp rise in total power consumption across the industry, but what’s even more striking is how much power is packed into each rack. Not long ago, most racks were designed for 5 to 15 kilowatts. Today, AI-heavy setups are hitting 50 to 70 kW, and the next generation could reach up to 1 megawatt per rack. That’s a huge jump — and it’s forcing everyone in the industry to rethink power delivery, cooling, and overall site design.
At those levels, traditional AC power distribution starts to reach its limits. That’s why many experts are already discussing a move toward high-voltage DC systems, possibly around 800 volts. DC systems can reduce conversion losses and handle higher densities more efficiently, which makes them a serious option for the future.
But with all this growth comes a big question: how do we stay responsible? Data centers are quickly becoming some of the largest power users on the planet. Society is starting to pay attention, and communities near these sites are asking fair questions — where will all this power come from, and how will it affect the grid or the environment? Building ever-bigger data centers isn’t enough; we need to make sure they’re sustainable and accepted by the public.
The next challenge is feasibility. Supplying hundreds of megawatts to a single facility is no small task. In many regions, grid capacity is already stretched, and new connections take years to approve. Add the unpredictable nature of AI power spikes, and you’ve got a real engineering and planning problem on your hands. The only realistic path forward is to make data centers more flexible — to let them pull energy from different sources, balance loads dynamically, and even generate some of their own power on-site.
That’s where ComAp’s systems come in. We help data center operators manage this complexity by making it simple to connect and control multiple energy sources — from renewables like solar or wind, to backup generators, to grid-scale connections. Our control systems allow operators to build hybrid setups that can adapt in real time, reduce emissions, and still keep reliability at 100%.
Just as importantly, ComAp helps with the grid integration side. When a single data center can draw as much power as a small city, it’s no longer just a “consumer” — it becomes part of the grid ecosystem. Our technology helps make that relationship smoother, allowing these large sites to interact intelligently with utilities and maintain overall grid stability.
And while today’s discussion is mostly around AC power, ComAp is already prepared for the DC future. The same principles and reliability that have powered AC systems for decades will carry over to DC-based data centers. We’ve built our solutions to be flexible enough for that transition—so operators don’t have to wait for the technology to catch up.
In short, AI is driving a complete rethink of how data centers are powered. The demand and density will keep rising, and the pressure to stay responsible and sustainable will only grow stronger. The operators who succeed will be those who find smart ways to integrate different energy sources, keep efficiency high, and plan for the next generation of infrastructure.
That’s the space where ComAp is making a real difference.
In this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent sits down with Bill Severn, CEO of 1623 Farnam, to explore how the Omaha carrier hotel is becoming a critical aggregation hub for AI, cloud, and regional edge growth. A featured speaker on The Distributed Data Frontier panel at the 2025 DCF Trends Summit, Severn frames the edge not as a location but as the convergence of eyeballs, network density, and content—a definition that underpins Farnam’s strategy and rise in the Midwest.
Since acquiring the facility in 2018, 1623 Farnam has transformed an underappreciated office tower on the 41st parallel into a thriving interconnection nexus with more than 40 broadband providers, 60+ carriers, and growing hyperscale presence. The AI era is accelerating that momentum: over 5,000 new fiber strands are being added into the building, with another 5,000 strands expanding Meet-Me Room capacity in 2025 alone. Severn remains bullish on interconnection for the next several years as hyperscalers plan deployments out to 2029 and beyond.
The conversation also dives into multi-cloud routing needs across the region—where enterprises increasingly rely on Farnam for direct access to Google Central, Microsoft ExpressRoute, and global application-specific cloud regions. Energy efficiency has become a meaningful differentiator as well, with the facility operating below a 1.5 PUE, thanks to renewable chilled water, closed-loop cooling, and extensive free cooling cycles.
Severn highlights a growing emphasis on strategic content partnerships that help CDNs and providers justify regional expansion, pointing to past co-investments that rapidly scaled traffic from 100G to more than 600 Gbps. Meanwhile, AI deployments are already arriving at pace, requiring collaborative engineering to fit cabinet weight, elevator limitations, and 40–50 kW rack densities within a non–purpose-built structure.
As AI adoption accelerates and interconnection demand surges across the heartland, 1623 Farnam is positioning itself as one of the Midwest’s most important digital crossroads—linking hyperscale backbones, cloud onramps, and emerging AI inference clusters into a cohesive regional edge.
In this episode, Matt Vincent, Editor in Chief at Data Center Frontier, is joined by Rob Macchi, Vice President of Data Center Solutions at Wesco, to explore how companies can stay ahead of the curve with smarter, more resilient construction strategies. From site selection to integrating emerging technologies, Wesco helps organizations build data centers that are not only efficient but future-ready. Listen now to learn more!
In this episode of the Data Center Frontier Show, we sit down with Ryan Mallory, the newly appointed CEO of Flexential, who took the helm from Chris Downie in a planned leadership transition in October.
Mallory outlines Flexential's strategic focus on the AI-driven future, positioning the company at the critical "inference edge" where enterprise CPU meets AI GPU. He breaks down the AI infrastructure boom into a clear three-stage build cycle and explains why the enterprise "killer app"—Agentic AI—plays directly into Flexential's strengths in interconnection and multi-tenant solutions.
We also dive into:
Power Strategy: How Flexential's modular, 36-72 MW build strategy avoids community strain and wins utility favor.
Product Roadmap: The evolution to Gen 5 and Gen 6 data centers, blending air and liquid cooling for mixed-density AI workloads.
The Bold Bet: Mallory's vision for the next 2-3 years, which involves "bending the physics curve" with geospatial energy and transmission to overcome terrestrial limits.
Tune in for an insightful conversation on power, planning, and the future of data center infrastructure.
On this episode of the Data Center Frontier Show, DartPoints CEO Scott Willis joins Editor in Chief Matt Vincent to discuss why regional data centers are becoming central to the future of AI and digital infrastructure. Fresh off his appearance on the Distributed Edge panel at the 2025 DCF Trends Summit, Willis breaks down how DartPoints is positioning itself in non-tier-one markets across the Midwest, Southeast, and South Central regions—locations he believes will play an increasingly critical role as AI workloads move closer to users.
Willis explains that DartPoints’ strategy hinges on a deeply interconnected regional footprint built around carrier-rich facilities and strong fiber connectivity. This fabric is already supporting latency-sensitive workloads such as AI inference and specialized healthcare applications, and Willis expects that demand to accelerate as enterprises seek performance closer to population centers.
Following a recent recapitalization with NOVA Infrastructure and Orion Infrastructure Capital, DartPoints has launched four new expansion sites designed from the ground up for higher-density, AI-oriented workloads. These facilities target rack densities from 30 kW to 120 kW and are sized in the 10–50 MW range—large enough for meaningful HPC and AI deployments but nimble enough to move faster than hyperscale builds constrained by long power queues.
Speed to market is a defining advantage for DartPoints. Willis emphasizes the company’s focus on brownfield opportunities where utility infrastructure already exists, reducing deployment timelines dramatically. For cooling, DartPoints is designing flexible environments that leverage advanced air systems for 30–40 kW racks and liquid cooling for higher densities, ensuring the ability to support the full spectrum of enterprise, HPC, and edge-adjacent AI needs.
Willis also highlights the importance of community partnership. DartPoints’ facilities have smaller footprints and lower power impact than hyperscale campuses, allowing the company to serve as a local economic catalyst while minimizing noise and aesthetic concerns.
Looking ahead to 2026, Willis sees the industry entering a phase where AI demand becomes broader and more distributed, making regional markets indispensable. DartPoints plans to continue expanding through organic growth and targeted M&A while maintaining its focus on interconnection, high-density readiness, and rapid, community-aligned deployment.
Tune in to hear how DartPoints is shaping the next chapter of distributed digital infrastructure—and why the market is finally moving toward the regional edge model Willis has championed.
In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Ed Nichols, President and CEO of Expanse Energy / RRPT Hydro, and Gregory Tarver, Chief Electrical Engineer, about a new kind of hydropower built for the AI era.
RRPT Hydro’s piston-driven gravity and buoyancy system generates electricity without dams or flowing rivers—using the downward pull of gravity and the upward lift of buoyancy in sealed cylinders. Once started, the system runs self-sufficiently, producing predictable, zero-emission power.
Designed for modular, scalable deployment—from 15 kW to 1 GW—the technology can be installed underground or above ground, enabling data centers to power themselves behind the meter while reducing grid strain and even selling excess energy back to communities.
At an estimated Levelized Cost of Energy of $3.50/MWh, RRPT Hydro could dramatically undercut traditional renewables and fossil power. The company is advancing toward commercial readiness (TRL 7–9) and aims to build a 1 MW pilot plant within 12–15 months.
Nichols and Tarver describe this moonshot innovation, introduced at the 2025 DCF Trends Summit, as a “Wright Brothers moment” for hydropower—one that could redefine sustainable baseload energy for data centers and beyond.
Listen now to explore how RRPT Hydro’s patented piston-driven system could reshape the physics, economics, and deployment model of clean energy.
At this year’s Data Center Frontier Trends Summit, Honghai Song, founder of Canyon Magnet Energy, presented his company’s breakthrough superconducting magnet technology during the “6 Moonshot Trends for the 2026 Data Center Frontier” panel—showcasing how high-temperature superconductors (HTS) could reshape both fusion energy and AI data-center power systems.
In this episode of the Data Center Frontier Show, Editor in Chief Matt Vincent speaks with Song about how Canyon Magnet Energy—founded in 2023 and based in New Jersey, with roots at Stony Brook University—is bridging fusion research and AI infrastructure through next-generation magnet and energy-storage technology.
Song explains how HTS magnets, made from REBCO (Rare Earth Barium Copper Oxide), operate at 77 Kelvin with zero electrical resistance, opening the door to new kinds of super-efficient power transmission, storage, and distribution. The company’s SMASH (Superconducting Magnetic Storage Hybrid) system is designed to deliver instant bursts of energy—within milliseconds—to stabilize GPU-driven AI workloads that traditional batteries and grids can’t respond to fast enough.
Canyon Magnet Energy is currently developing small-scale demonstration projects pairing SMES systems with AI racks, exploring integration with DC power architectures and liquid-cooling infrastructure. The long-term roadmap envisions multi-mile superconducting DC lines connecting renewables to data centers—and ultimately, fusion power plants providing virtually unlimited clean energy.
Supported by an NG Accelerate grant from New Jersey, the company is now seeking data-center partners and investors to bring these technologies from the lab into the field.
Who is Packet Power?
Since 2008, Packet Power has been at the forefront of energy and environmental monitoring, pioneering wireless solutions that helped define the modern Internet of Things (IoT). Built on the belief that energy is the new cost frontier of computation, Packet Power enables organizations to understand exactly where, when, and how energy is used—and at what cost.
As AI-driven workloads push energy demand to record levels, Packet Power’s mission of complete energy traceability has never been more critical. Their systems are trusted worldwide for providing secure, out-of-band monitoring that remains fully independent of operational data networks.
Introducing the All-New High-Density Power Monitor
Packet Power’s newest innovation, the High-Density Power Monitor, is redefining what’s possible in energy monitoring. At just under 6 cubic inches, it’s the smallest and most scalable multi-circuit power monitoring system on the market, capable of tracking 120 circuits in a space smaller than what’s inside a standard light switch.
The High-Density Power Monitor eliminates bulky hardware, complex wiring, and lengthy installations. It’s plug-and-play simple, seamlessly integrates with Packet Power’s EMX software or any third-party monitoring platform, and supports both wired and wireless connectivity—including secure, air-gapped environments.
Solving the Challenges of Modern Power Monitoring
The High-Density Power Monitor is engineered for the next generation of high-performance systems and facilities. It tackles five key challenges:
Power Density: Monitors high-load environments with unmatched precision.
Circuit Density: Tracks more circuits per module than any competitor.
Physical Density: Fits anywhere, from PDUs to sub-panels to embedded devices.
Installation Simplicity: Snaps into place—no tools, no complexity.
Connection Flexibility: Wireless, wired, LAN, cloud, or cellular—you can mix and match freely.
Whether managing a single rack or thousands of devices, Packet Power ensures monitoring 1 device is as easy as monitoring 1,000.
Why It Matters Now
Today’s computing environments are experiencing an energy density arms race—with systems consuming megawatts of power in a single cabinet. New cooling methods, extreme power densities, and evolving form factors demand monitoring solutions that can keep up. Packet Power’s new High-Density Power Monitor meets that challenge head-on, offering the scalability, adaptability, and visibility needed to manage energy use in the AI era.
Perfect for Any Application
This solution is ideal for:
High-density servers and compute cabinets
Distribution panels, PDUs, and busway components
Embedded monitoring in OEM systems
Large-scale deployments requiring fleet-level simplicity
+ more!
Whether for new installations or retrofits of existing buildings, Packet Power systems deliver vendor-agnostic integration and proven scalability, with unmatched turnaround times and products made in the USA for BABA compliance.
Learn More!
Discover the true meaning of small & mighty:
👉 Visit PacketPower.com/high-density-power-monitor
📧 Contact sales@packetpower.com
In this episode of The Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent talks with Yuval Boger, Chief Commercial Officer at QuEra Computing, about the fast-evolving intersection of quantum and AI-accelerated supercomputing.
QuEra, a Boston-based pioneer in neutral-atom quantum computers, recently expanded its $230 million funding round with new investment from NVentures (NVIDIA’s venture arm) and announced a Nature-published breakthrough in algorithmic fault tolerance that dramatically cuts runtime overhead for error-corrected quantum algorithms.
Boger explains how QuEra’s systems, operating at room temperature and using identical rubidium atoms as qubits, offer scalable, power-efficient performance for HPC and cloud environments.
He details the company's collaborations with NVIDIA, AWS, and global supercomputing centers integrating quantum processors alongside GPUs, and outlines why neutral-atom architectures could soon deliver practical, fault-tolerant quantum advantage.
Listen as Boger discusses QuEra’s technology roadmap, market position, and the coming inflection point where hybrid quantum-classical systems move from the lab into the data center mainstream.
Matt Vincent, Editor-in-Chief of Data Center Frontier, sits down with Angela Capon, Vice President of Marketing at EdgeConneX, to discuss the groundbreaking collaboration between EdgeConneX and the Duke of Edinburgh's International Award Program.
Charting the Future of AI Storage Infrastructure
In this episode, Solidigm Director of Strategic Planning Brian Jacobosky guides listeners through a tech-forward conversation on how storage infrastructure is helping redefine the AI-era data center. The discussion frames storage as more than just a cost factor; it's also a strategic building block for performance, efficiency, and savings.
Storage Moves to the Center of AI Data Infrastructure
Jacobosky explains how, in the AI-driven era, storage is being elevated from an afterthought measured in dollars per gigabyte to a core priority: maximizing GPU utilization, managing soaring power draw, and unlocking space savings. He illustrates how every watt and every square inch counts. As GPU compute scales dramatically, storage efficiency is being engineered to enable maximum density and throughput.
High-Capacity SSDs as a Game-Changer
Jacobosky spotlights Solidigm D5-P5336 122TB SSDs as emblematic of the shift. Rather than a simple technical refresh, these drives represent a tectonic realignment in how data centers are being designed for huge capacity and optimized performance. With all-flash deployments offering up to nine times the space savings compared to hybrid architectures, Jacobosky underscores how SSD density can enable more GPU scale within fixed power and space budgets—gains that could even pave the way to a 1-petabyte SSD by the end of the decade.
Embedded Efficiency
The episode brings environmental considerations to the forefront. Jacobosky shares how an “all‑SSD” strategy can dramatically slash physical footprints as well as energy consumption. From data center buildout through end of lifecycle drive retirement, efficiency is driving both operational cost savings and ESG benefits — helping reduce concrete and steel usage, power draw, and e‑waste.
Pioneering Storage Architectures and Cooling Innovation
Listeners learn how AI-first innovators like Neo Cloud-style providers and sovereign AI operators lead the charge in deploying next-generation storage. Jacobosky also previews the Solidigm PS-1010 E1.S form factor, an NVIDIA fanless server solution that enables direct‑to‑chip Cold-Plate-Cooled SSDs integrated into GPU servers. He predicts that this systems-level integration will become a standard for high-density AI infrastructure.
Storage as a Strategic Investment
Solidigm challenges the notion that high-capacity storage is cost prohibitive. Within the framework of the AI token economy, Jacobosky explains, the true measures become minimizing cost per token and time to first token; when storage is optimized for performance, capacity, and efficiency, the total cost of ownership (TCO) often proves favorable on first evaluation.
Looking Ahead: Memory Wall, Inference Workloads, Liquid Cooling
Jacobosky ends with a look ahead to where storage innovation will lead in the next five years. As AI models grow in size and complexity, he argues, storage is increasingly acting as an extension of memory, breaking through the “memory wall” for large inference workloads. Companies will design infrastructure from the ground up with liquid-cooling, future-scalable storage, and storage that supports massive model deployments without compromising latency.
This episode is essential listening for data center architects, AI infrastructure strategists, and sustainability leaders looking to understand how storage is fast becoming a defining factor in the AI-ready data centers of the future.
Florida is emerging as one of the most promising new frontiers for data center growth — combining power availability, policy alignment, and strategic geography in ways that mirror the early success of Northern Virginia.
In this episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent sits down with Buddy Rizer, Executive Director of Loudoun County Economic Development, and Lila Jaber, Founder of the Florida’s Women in Energy Leadership Forum and former Chair of the Florida Public Service Commission. Together, they explore how Florida is building the foundation for large-scale digital infrastructure and AI data center investment.
Episode Highlights:
Energy Advantage: While Loudoun County faces a 600-megawatt deficit and rising demand, Florida enjoys excess generation capacity, proactive utilities, and growing renewable integration. Utilities like FPL and Duke Energy are preparing for hyperscale and AI-driven loads with new tariff structures and grid-hardening investments.
Tax Incentives & Workforce: Florida’s extended data center sales tax exemption through 2037 and its raised 100-megawatt IT load threshold signal a commitment to hyperscale development. The state’s universities and workforce programs are aligned with this tech growth, producing top talent in engineering and applied sciences.
Strategic Location: As a digital gateway to Latin America and the Caribbean, Florida’s connectivity advantage—especially around Miami—is attracting hyperscale and AI operators looking to expand globally.
Market Outlook: Industry insiders predict that within the next year, a major data center player will establish a significant footprint in Florida. Multiple campuses are expected to follow, driven by the state’s power resilience, policy stability, and collaborative approach between utilities, developers, and government leaders.
Why It Matters:
Florida’s combination of energy abundance, policy foresight, and strategic geography positions it as the next great growth market for digital infrastructure and AI-ready data centers in North America.
This podcast explores the rapidly evolving thermal and water challenges facing today’s data centers as AI workloads push rack densities to unprecedented levels. The discussion highlights the risks and opportunities tied to liquid cooling—from pre-commissioning practices and real-time monitoring to system integration and water stewardship—and shows how Ecolab’s innovative approaches to thermal management can not only solve operational constraints but also deliver competitive advantage by improving efficiency, reducing resource consumption, and strengthening sustainability commitments.
Join Bill Tierney of The Data Center Construction Alliance, as he discusses some of the emerging challenges facing data center development today. Topics will include how increasing collaboration between OEMs, owners, contractors, and sub-contractors is leading to some exciting and innovative solutions in the design and construction of data centers. He will also share some examples of how collaboration has led to new ideas and methodologies in the field.