NVIDIA is expected to unveil its next-generation GB300 AI server platform at GTC 2025, which takes place March 17-20, with some big surprises from its new AI GPUs.
In new leaks from UDN, we're learning that NVIDIA's new B300 AI GPUs will scream along with up to 1400W of power, delivering 1.5x the FP4 performance of a single B200 AI GPU. We can also expect the HBM capacity of each GPU to increase from a maximum of 192GB of HBM3E on the B200 to a much bigger 288GB of HBM3E on the B300.
NVIDIA's current GB200 AI platform uses 192GB of HBM3E in an 8-Hi stack configuration, while the next-gen GB300 will move to 288GB of HBM3E in a larger 12-Hi stack, which SK hynix has been working on for a while now.
The number of fast connector components and network cards has also been upgraded on GB300 AI server platforms, with the optical module bumped from 800G to an ultra-fast 1.6T. UDN reports that performance and equipment have "been improved in all aspects" and that the platform is NVIDIA's next "market-grabbing weapon".
Other upgrades coming with GB300 include a slot design, an LPCAMM-based computing board, and a capacitor tray that may become standard in the next-gen GB300 NVL72 AI server cabinet, with UDN adding that the BBU (battery backup unit) "may be optional".
Now... the cost.
The supply chain expects the mass-production price of a BBU module to be around $300, putting the total BBU cost of a GB300 AI server at an estimated $1500, while the production price of a supercapacitor is expected to be between $20 and $25... and the GB300 NVL72 AI server cabinet requires over 300 of those supercapacitors.
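To put those leaked figures together, here's a quick back-of-the-envelope sketch. The supercapacitor count ("over 300") and the per-unit prices come from the UDN report; the five-module BBU count is our own inference from the $1500 total divided by $300 per module, so treat it as an assumption.

```python
# Rough cost estimate from the leaked GB300 supply-chain figures.
# Assumption: 5 BBU modules per server, inferred from $1500 / $300.

BBU_MODULE_PRICE = 300            # USD, estimated mass-production price
BBU_MODULES_PER_SERVER = 5        # assumed: $1500 total / $300 per module
SUPERCAP_PRICE_RANGE = (20, 25)   # USD per supercapacitor
SUPERCAPS_PER_CABINET = 300       # "over 300" per GB300 NVL72 cabinet

bbu_total = BBU_MODULE_PRICE * BBU_MODULES_PER_SERVER
supercap_low, supercap_high = (p * SUPERCAPS_PER_CABINET
                               for p in SUPERCAP_PRICE_RANGE)

print(f"BBU total per server: ${bbu_total}")                       # $1500
print(f"Supercaps per cabinet: ${supercap_low}-${supercap_high}")  # $6000-$7500
```

So on these leaked numbers, the supercapacitors alone would add roughly $6000 to $7500 per NVL72 cabinet, on top of the ~$1500 for the BBU.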