Lenovo is stepping deeper into large‑scale artificial intelligence with a new AI Cloud Gigafactory program built together with Nvidia and unveiled at CES 2026 in Las Vegas. The company wants cloud providers to bring large AI data centers online far faster than the multi‑year build cycles that have been the norm.
In the joint program, Lenovo provides its Neptune liquid‑cooled hybrid AI infrastructure, manufacturing, and services, while Nvidia supplies its newest accelerated computing platforms. The blueprint is meant to work at gigawatt scale so operators can scale to millions of GPUs for next‑generation workloads like agentic AI and high‑end physics and simulation.
Key ideas behind the Gigafactory approach include:
- Treating AI data centers like repeatable “factories” that can be stamped out in multiple locations.
- Using liquid‑cooled racks to pack in more GPUs per rack while controlling energy use and heat.
- Designing around a single metric, time‑to‑first‑token, which measures how quickly a deployment begins serving real workloads.
Lenovo chairman and CEO Yuanqing Yang said the Gigafactory with Nvidia should set a new benchmark for scalable AI factory design and help the “world’s most advanced AI environments” come online in record time. Nvidia chief Jensen Huang framed these builds as AI factories that companies will either construct or rent in order to produce intelligence for their own industries.
Time‑to‑first‑token: the new AI metric
One idea that runs through the Lenovo–Nvidia story is time‑to‑first‑token, or TTFT, which they now present as a critical way to measure AI factory performance. Instead of only counting total compute or raw power, TTFT looks at how fast an investment in GPUs and infrastructure turns into a live AI service answering real queries.
The Gigafactory framework is built to:
- Move cloud customers from concept to production in weeks through pre‑engineered components and reference designs.
- Combine accelerated computing, storage, networking, and software into a single integrated path.
- Offer full‑lifecycle services, from initial design to ongoing optimization and governance.
On the hardware side, Lenovo will ship fully liquid‑cooled platforms based on Nvidia’s Blackwell Ultra architecture, including GB300 NVL72 racks with 72 GPUs and 36 Grace CPUs in one system. The program will also support Nvidia’s new Vera Rubin NVL72 system with Rubin GPUs, Vera CPUs, ConnectX‑9 SuperNICs, BlueField‑4 DPUs, and Spectrum‑6 Ethernet for even larger AI training and inference clusters.
For enterprise buyers and cloud providers watching capex, the TTFT focus links these technical choices to payback: the sooner models start serving customers, the faster those investments can be justified.
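As a rough illustration of what TTFT captures, here is a minimal sketch of how the metric is typically measured for a streaming model response. The model stream below is a simulated stand‑in, not a real Lenovo or Nvidia API, and all names are hypothetical:

```python
import time

def first_token_latency(stream):
    """Measure time-to-first-token (TTFT): the delay between issuing a
    request and receiving the first token of the streamed response."""
    start = time.perf_counter()
    for token in stream:  # blocks until the first token arrives
        return time.perf_counter() - start, token
    return None, None  # empty stream: no token ever arrived

# Hypothetical stand-in for a real model's streaming API: the first
# token arrives after a simulated 50 ms "prefill" delay.
def fake_model_stream():
    time.sleep(0.05)
    yield "Hello"
    yield ", world"

ttft, token = first_token_latency(fake_model_stream())
print(f"TTFT: {ttft * 1000:.0f} ms, first token: {token!r}")
```

In the Gigafactory framing, the same clock starts not at request time but at project kickoff: the metric of interest is how long it takes the whole facility, not just one query, to produce its first useful token.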
Qira: personal AI across Lenovo and Motorola devices
Alongside the data‑center push, Lenovo is expanding its AI lineup with Qira, a personal AI system designed to run across Lenovo PCs and Motorola devices. At CES in Las Vegas, Yang presented Qira as part of a broader AI effort that also includes concept glasses and a wearable assistant under “Project Maxwell.”
Reports describe Qira as a cross‑device personal AI that stays active in the background and moves with the user rather than living inside one app. The system is meant to work on:
- Lenovo and Motorola PCs and laptops
- Motorola smartphones and tablets
- Wearables and other devices that Lenovo is showing as future concepts
Qira will be able to tap services from partners such as Expedia and other providers, turning the assistant into a travel and daily‑life companion on top of work tasks. With Lenovo’s wider Hybrid AI Advantage and emerging platforms like Agentic AI and xIQ, the goal is to give enterprises a full stack that spans data‑center factories and AI agents running on employee devices.
For consumers and prosumers, the promise is that content, context, and tasks can follow them more smoothly between phone, PC, and emerging form factors like AR glasses. That idea also helps Lenovo position its hardware ecosystem more directly against other platform players that already blend devices and AI services.
How the tie-up reshapes Lenovo’s AI roadmap
This new phase of the Lenovo–Nvidia relationship pushes the partnership beyond earlier hybrid AI announcements into a more aggressive, factory‑style model for AI infrastructure. By pairing the Gigafactory design with Qira and Agentic AI, Lenovo is signaling that it wants to own more of the AI lifecycle, from GPU‑dense racks to cross‑device assistants and governed enterprise agents.
For cloud providers, the Gigafactory program offers:
- Standardized blueprints for building or expanding AI data centers at gigawatt scale.
- Access to the latest Nvidia hardware in integrated, liquid‑cooled systems.
- Lifecycle services and libraries of ready‑to‑adapt AI use cases.
For enterprises and end users, the Qira and Agentic AI stack promises:
- Personal AI that spans Lenovo–Motorola hardware and links into daily workflows.
- AI agents that can be created, governed, and deployed with a focus on measurable business outcomes.
- A bridge between local, on‑device intelligence and large models running in the Gigafactory cloud.
If Lenovo executes, the company could move from being primarily seen as a PC and server maker to being recognized as a full‑stack AI infrastructure and experience provider. The pace of deployment and customer adoption over the next year will show whether the Gigafactory model and Qira can stand out in an increasingly crowded AI race.