MLPerf Inference v5.0 results with Supermicro’s GH200 Grace Hopper Superchip-based Server and Red Hat OpenShift


On April 2, 2025, MLCommons published the industry-standard MLPerf Inference v5.0 datacenter results. Red Hat and Supermicro submitted strong results for the popular Llama-2-70b model with Red Hat OpenShift running on Supermicro's dual-GPU GH200 Grace Hopper Superchip 144GB server. This was the first time anyone has submitted an MLPerf result with OpenShift on GH200. You can view these results at mlcommons.org.

Llama2-70b

Meta released the Llama-2-70b model on July 18, 2023. The model is open source and part of the very popular Llama family of models, which ranges from 7 billion to 70 billion parameters.
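The article does not describe the benchmark setup here, but as a rough illustration of what a single inference request against a served Llama-2-70b endpoint can look like, below is a minimal sketch assuming an OpenAI-compatible completion API. The endpoint URL and model identifier are placeholders, not details taken from the submission.

```python
# Minimal sketch: send one prompt to a hypothetical OpenAI-compatible
# endpoint serving Llama-2-70b. The URL and model name are placeholders;
# the article does not specify the serving stack used in the submission.
import requests

ENDPOINT = "http://llama2-70b.example.com/v1/completions"  # placeholder URL

payload = {
    "model": "meta-llama/Llama-2-70b-chat-hf",  # assumed model identifier
    "prompt": "Summarize the benefits of running AI inference on Kubernetes.",
    "max_tokens": 128,
    "temperature": 0.0,
}

# Send the request and print the generated text from the first choice.
response = requests.post(ENDPOINT, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```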
