    How neocloud Nscale is navigating the AI infrastructure boom

By Daniel Snow | April 15, 2026 | 9 Mins Read


    AI is no longer a side bet for most enterprises—it’s quickly becoming core to how products are built, decisions are made, and work gets done. That shift is colliding with a hard physical reality: the compute behind modern AI runs hot, dense, power-hungry—and remains scarce. It’s pushing data center operators to deliver far more capacity, far faster, often with projects measured in the hundreds of megawatts and timelines that feel closer to months than years.


    The result is a new infrastructure arms race, with ‘neoclouds’ emerging alongside hyperscalers to deliver AI-first capacity. And that arms race is attracting massive amounts of capital. Data center investment has surged into the hundreds of billions of dollars annually, with PwC estimating that meeting capacity requirements will require about US$2 trillion in capex by 2030. The financing playbook is evolving in kind. New deal structures are emerging, blurring the lines between customer contracts, infrastructure finance, and hardware supply—and raising new questions about risk sharing and resilience.

Nidhi Chappell has had a front-row seat to how these dynamics are reshaping strategy. She currently serves in the C-suite at Europe-based neocloud Nscale as Global President of AI Infrastructure, and previously held a senior AI infrastructure leadership role at Microsoft Azure. In this edited version of her interview with strategy+business, Chappell outlines what C-suite leaders often underestimate about scaling AI capacity—including the operating-model and talent implications of running AI-dense infrastructure, and the growing importance of transparency in energy and water use. She also describes the shifts she sees shaping the next generation of facilities—more modular, more instrumented, and more tightly orchestrated—and what that means for executive decisions on capacity strategy, partner ecosystems, and the long-term trade-offs among speed, efficiency, and control.

    S+B: What strikes you most about this particular moment for the data center industry?
    NIDHI CHAPPELL:
    The pace is unlike anything I’ve seen before. In the early days of cloud, the demands were big, but they were still within the bounds of what existing data center designs could deliver. Now, we’re being asked to build sites that can deliver hundreds of megawatts, support racks over 100 kW, and do it all in regions that prioritize renewable power. And do it in 12 months, not three years.

    We’ve also seen the shift in density from 6 kW racks to more than 130 kW per rack in a very short space of time. That’s a profound change. It impacts everything, from the power distribution system to the cooling topology to the physical structure of the data center itself. It’s no longer possible to design around air cooling, as liquid and immersion cooling are now the baseline.
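The scale of that density shift is easier to grasp with some back-of-the-envelope arithmetic. The per-rack figures below are from the interview; the 100 MW site budget is an illustrative assumption, and overheads for cooling and power distribution are ignored:

```python
# Back-of-the-envelope: how many racks a fixed IT power budget supports
# at cloud-era vs. AI-era rack densities. The per-rack figures come from
# the interview; the 100 MW site budget is an illustrative assumption.

def racks_supported(site_mw: float, kw_per_rack: float) -> int:
    """Racks a site can power, ignoring cooling/distribution overhead."""
    return int(site_mw * 1000 // kw_per_rack)

site_mw = 100  # hypothetical 100 MW IT load
legacy = racks_supported(site_mw, 6)     # ~6 kW cloud-era racks
ai_era = racks_supported(site_mw, 130)   # 130+ kW GPU racks

print(f"6 kW racks:   {legacy}")   # 16666
print(f"130 kW racks: {ai_era}")   # 769
```

The same power envelope that once fed tens of thousands of racks now feeds well under a thousand, which is why the change ripples through power distribution, cooling topology, and the physical structure of the building.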

    From a strategy perspective, this has accelerated our move toward modular design. Traditional builds can take too long and often can be too rigid. By using prefabricated modules and digital twins, we can design around each graphics processing unit (GPU) generation’s specific thermals and power draw before the kit even arrives. That allows us to deploy at speed, without sacrificing performance or sustainability.

    Finally, it’s different in terms of who’s at the table. AI infrastructure is now a board-level topic for banks, governments, universities, and industrial firms. That wasn’t the case during the early days of cloud migration and digitization. The stakeholder mix is broader, and expectations are higher.


    S+B: How does Nscale’s business model compare to those of traditional hyperscalers?
    CHAPPELL:
    We’re structured differently than a traditional hyperscaler. We’re vertically integrated, which means we design, build, own, and operate our infrastructure, from the data centers themselves to the orchestration software stack running on top. This allows us to optimize for AI performance, not just generic cloud workloads. We build specifically for high-density, GPU-based systems from the ground up. The traditional hyperscaler model is largely based on general-purpose compute and long-term capital cycles built around multi-tenant availability zones.

    Performance efficiency, rapid deployment, and adaptability to new compute requirements are especially critical in AI, where hardware generations refresh every 12 to 18 months. Our approach keeps pace with that cadence and allows quick and efficient upgrades of our infrastructure.

    S+B: When cloud adoption accelerated, neoclouds emerged and then eventually many consolidated. Are there signs that this time will be different?
    CHAPPELL:
Nscale exists because the market has shifted. The previous generation of companies hasn’t adapted to meet the needs of customers. Customers now expect predictable access to large, contiguous blocks of AI-ready capacity, delivered on clear timelines and run with consistent performance.

The density, cooling requirements, and energy needs of AI workloads are rewriting the rulebook. They require a different design approach, different construction methods, and different operational disciplines. Essentially, we’re building for an entirely new set of demands, which results in a more defensible business model.

    Also, neoclouds differ significantly in how deep they go into the stack. For instance, some rent GPUs in co-location facilities and offer an API layer on top. Nscale takes full ownership from ground to cloud: we own the data centers, the software, and the hardware, which includes the power, cooling, networking, orchestration, and sustainability profile. That allows us to integrate things like closed-loop liquid cooling and digital twins into the facility architecture. Since we’re architected in this way, we’re providing a sovereign solution so that our customers can run AI workloads under their own legal, operational, and security frameworks.

    S+B: Speaking of sovereignty, a recent survey we ran with industry execs showed data sovereignty was among their top concerns, second only to cost. Is this a trend you see among your customers?
    CHAPPELL:
    Sovereignty is certainly becoming increasingly important across sectors, particularly in heavily regulated industries such as healthcare, finance, and government. We’re building across the globe to provide sovereign AI solutions to countries that want the benefits and security of having compute located within their borders or within their regulatory ecosystem.

    S+B: There’s talk of ‘circular financing’ in the AI data center boom. How do you view this trend, and what safeguards do you see for sustainable growth?
    CHAPPELL:
    As demand grows for compute, it’s natural to see new financing models develop around it. The capital intensity of AI infrastructure has brought in new kinds of investors. We’re now seeing sovereign wealth funds, infrastructure private equity, and corporates from the chip ecosystem come to the table. It reflects just how strategic compute capacity has become.

    But it also means expectations around scale, timelines, and returns have tightened. To operate at this level, you need partners who understand the hardware refresh cycle and the realities of deployment in emerging regions. That’s why vertical integration matters. It provides control over timelines and performance.

S+B: As the number of data centers expands, what factors worry the industry most about keeping up with demand?
    CHAPPELL:
    The big one is talent. We talk a lot about energy supply and cooling, but we don’t talk enough about the people needed to build and operate these facilities. As you move into remote regions where renewable energy is available, specialist labor is harder to find.

    That’s why we partner with local technical colleges and run apprenticeships to develop those skills where we have operations. But across the board, there’s a need for more structured pathways into this sector. The complexity of AI infrastructure demands a workforce that understands everything from mechanical engineering to software orchestration, so knowledge sharing across disciplines is more crucial than ever before. The engineer running the liquid cooling system needs to understand the workload it’s supporting. The technician doing GPU swaps should understand how model performance is affected by thermal stability. We need more cross-disciplinary training.

    What’s also changing is how we operate. As the infrastructure becomes AI-native, operations have to become AI-native too. That means building systems in which people are augmented by automation so teams can focus on high-value and complex tasks with greater precision.

    S+B: Regarding concerns over energy and water usage, what’s the message to those who are kept up at night by these concerns?
    CHAPPELL:
Transparency is critical, and we need to get better as an industry at publishing real metrics, such as megawatts used, percentage from renewable sources, and efficiency benchmarks like power usage effectiveness (PUE). At Nscale, we’re operating at around a 1.1 PUE, which makes ours some of the most efficient operations I’ve seen in my career. We’re also designing systems that capture and reuse waste heat. For example, in Glomfjord, Norway, waste heat goes directly into local aquaculture.
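PUE is a simple ratio: total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. A minimal sketch, where the 1.1 figure comes from the interview but the absolute power numbers are made up for illustration:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (zero cooling/distribution overhead)."""
    return total_facility_kw / it_kw

# Illustrative numbers: 110 MW drawn from the grid to run a 100 MW IT load,
# i.e., 10% of incoming power goes to cooling, conversion, and distribution.
print(round(pue(110_000, 100_000), 2))  # 1.1
```

At a PUE of 1.1, only about 10% of incoming power is overhead; legacy air-cooled facilities have historically run meaningfully higher.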

    The key is to design infrastructure to be efficient by default. And to use natural cooling where possible. At Glomfjord, we’ve been able to eliminate diesel generator emissions and instead draw on the reliability of Norway’s renewable grid and robust systems to maintain uptime. The technology to build efficiently exists; it now needs to be prioritized.

S+B: What are some other recent innovations in data center technology or operations that excite you most and that people may not be aware of?
    CHAPPELL:
    Digital twin technology, which I mentioned earlier, is hugely valuable in enabling us to simulate the entire site—power, cooling, compute—before it’s built. That means we can test different hardware configurations, spot thermal bottlenecks, and validate airflow or coolant routing months before deployment. It saves time and cuts out the guesswork.
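A toy illustration of the kind of pre-deployment check a digital twin enables: validating that a planned rack layout stays within a cooling loop's heat-removal budget before any hardware arrives. All names, capacities, and assignments here are hypothetical, and a real twin would model far more (airflow, coolant routing, transients):

```python
# Toy pre-deployment check in the spirit of a digital twin: verify that
# each cooling loop can absorb the heat of the racks assigned to it.
# All capacities and assignments are hypothetical.

from dataclasses import dataclass

@dataclass
class CoolingLoop:
    name: str
    capacity_kw: float  # heat the loop can remove

def loop_is_viable(loop: CoolingLoop, rack_kw: list[float]) -> bool:
    """True if the loop's capacity covers the total rack heat load."""
    return sum(rack_kw) <= loop.capacity_kw

loop_a = CoolingLoop("loop-A", capacity_kw=1000)
print(loop_is_viable(loop_a, [130] * 7))  # 910 kW load  -> True
print(loop_is_viable(loop_a, [130] * 8))  # 1040 kW load -> False
```

Catching the second configuration in simulation, rather than after the kit is racked, is the time savings the interview describes.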

    S+B: If you were to design the ‘data center of the future,’ what would it look like and why?
    CHAPPELL:
    It would be fully modular, highly prefabricated, and designed for continuous operation through AI-native refresh cycles. That means swappable blocks of compute, power, and cooling that can be replaced independently.

    It would operate like a next-generation intelligence factory: producing tokens, inference results, and model training runs continuously. It would have integrated crane systems, looped liquid cooling with isolation valves, and AI-led orchestration systems that optimize energy use and performance in real time. And most importantly, it would sit close to abundant renewable energy.

    Data centers won’t exist in isolation—they’ll be part of a telco AI fabric. A distributed layer of edge AI nodes will sit inside telco networks to deliver ultra-low latency intelligence where it’s needed.

    Author profile:

    • David De Lallo is a contributing editor for PwC and s+b.
    Topics: energy, global expansion, infrastructure, tech sector, telecommunications


