Two of the biggest winners over the past year — chipmaker Nvidia and drugmaker Novo Nordisk — are joining forces to set up one of the world’s most advanced AI supercomputers in Denmark.
Named Gefion, after the Norse goddess of ploughing and abundance, and built by Atos Group subsidiary Eviden, it will be based on the Nvidia DGX SuperPod architecture. Gefion will feature over 1,500 of Nvidia’s H100 Tensor Core GPUs and deliver six exaflops of FP8 AI performance. Furthermore, it will connect to the Nvidia CUDA Quantum open-source software platform, which allows for simulations on hybrid quantum-classical computers.
The supercomputer will belong to the “Danish Center for AI Innovation,” and be hosted by a Digital Realty data centre, which uses 100% renewable energy. Novo Nordisk, the maker of diabetes-turned-weight-loss drug Ozempic/Wegovy, is investing around €80mn in the project. The Export and Investment Fund of Denmark is also contributing around €8mn.
“Groundbreaking scientific discoveries are based on data, and AI has now provided us with an unprecedented opportunity to accelerate research within, for example, human and planetary health,” said Mads Krogsgaard Thomsen, CEO of the Novo Nordisk Foundation, in a statement.
Clarifying the partnership agreement in a press briefing, Kimberly Powell, VP of healthcare at Nvidia, said “In our collaboration agreement, we’ll be taking all of this generative AI and bring it over to their sovereign AI infrastructure so that [Denmark] can really push into advancing medicine, quantum computing, and social sciences.”
The geopolitics of AI
Meanwhile, AI and the supercomputers that power its training are no longer the domain of a select few researchers. As the technology becomes ever more ubiquitous, so does the compute needed to train larger and larger AI models or run sophisticated simulations.
Its significance ranges from productivity and efficiency gains to military and cybersecurity applications — something that has not passed unnoticed by national security policymakers, as illustrated by US restrictions on exports to China of hardware used to train AI.
“In the current geopolitical climate, it is important that we strengthen our strategic positions,” Morten Bødskov, Danish Minister of Industry, Business and Financial Affairs, stated (albeit steering clear of the phrase of the day, “digital sovereignty”).
But staying on top of the high-performance computing (HPC) and AI training game is a difficult task. The fact that Gefion, a supercomputer announced today and intended to be fully operational in 2025, will feature the Nvidia H100 GPU is a testament to just how quickly things move in the realm of HPC and artificial intelligence.
Just yesterday Nvidia, now the third most valuable company in the world, launched its latest AI chip, the Blackwell, which it says is 4x faster than the H100. Two Blackwell GPUs will also combine with one Grace CPU into a superchip, built for “trillion-scale parameter generative AI.”
The European race to exascale
Denmark’s new supercomputer will be in good European company. The UK is building an exascale supercomputer in Edinburgh, as well as an AI supercomputer in Bristol. Joint undertaking EuroHPC is supporting not one but two exascale supercomputers in the EU. The project is installing Jupiter at the Jülich Supercomputing Centre, Germany, this year, and a second system called Jules Verne will come online in France in 2025.
An exascale supercomputer is one that can exceed the threshold of one billion billion calculations, or one exaflop, per second. To put that into context, the work of one second on an exascale computer is the equivalent of one calculation every second for 31,688,765,000 years.
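As a quick sanity check on that figure, the conversion is simple division (here assuming a Julian year of 365.25 days; the article's rounding may differ slightly):

```python
# One exaflop = 10**18 floating-point operations per second.
ops_per_second = 10**18

# Seconds in a Julian year of 365.25 days.
seconds_per_year = 365.25 * 24 * 60 * 60  # 31,557,600

# Years needed to match one exascale-second at one calculation per second.
years = ops_per_second / seconds_per_year
print(f"{years:,.0f} years")  # roughly 31.7 billion years
```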
Until Jupiter comes online, there is only one confirmed exascale supercomputer in the world — Frontier, housed at the Oak Ridge Leadership Computing Facility in Tennessee, USA. (China has stopped disclosing its supercomputer capacity, so, according to experts, we cannot be 100% certain.)
So how, then, can Gefion claim six exaflops of AI performance, you may wonder. Well, “AI performance” means that it has this capacity specifically in tasks related to AI — typically measured at low precision, such as FP8. (Nvidia’s own Eos supercomputer, running at an undisclosed location, has 18.4 exaflops of AI performance.) However, to be considered an actual exascale computer, a system must perform at exascale across a wide range of computational tasks, not just AI.
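The headline number is plausible back-of-the-envelope arithmetic if we assume Nvidia's published peak of roughly 3.96 petaflops of FP8 throughput per H100 SXM GPU (with sparsity) — an assumption on our part, since the announcement does not specify the exact SKU or final GPU count:

```python
# Assumed peak FP8 throughput per H100 GPU, with sparsity (petaflops).
# Nvidia's datasheet quotes ~3.96 PFLOPS for the SXM variant; treat as approximate.
fp8_pflops_per_gpu = 3.96
num_gpus = 1500  # "over 1,500" H100s

# Aggregate peak, converted from petaflops to exaflops.
total_exaflops = fp8_pflops_per_gpu * num_gpus / 1000
print(f"{total_exaflops:.2f} exaflops FP8")  # ~5.94, i.e. the quoted six
```

Real sustained throughput would be lower than this theoretical peak, but it shows where the "six exaflops" figure comes from.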
Either way, this will not be the last push for more national capabilities in HPC and AI. And while Intel and AMD are inching closer to Nvidia’s offerings, the latter looks set to reign a little longer.