
The 4th Gen Intel Xeon Scalable processor with Intel® Advanced Matrix Extensions (Intel AMX), a new built-in AI accelerator, allows customers to extend the general-purpose Xeon server platform to cover even more DL use cases, including DL training and fine-tuning. It is in these use cases that Xeon Scalable delivers the best total cost of ownership (TCO) and year-round utilization. AMX is a dedicated matrix multiplication engine built into every core of 4th Gen Intel Xeon Scalable processors, and this dedicated AI engine is optimized to deliver up to 6x higher gen-to-gen DL training model performance using industry-standard frameworks. The 4th Gen Intel Xeon Scalable processors with Intel AMX deliver this performance out of the box across multiple industry-standard frameworks, integrated with end-to-end data science tools and a broad ecosystem of smart solutions from partners. Developers only need to use the latest framework releases of TensorFlow and PyTorch to unleash this performance.

In cases where a server or a cluster of servers is used predominantly for DL training and inference compute, the Habana Gaudi2 accelerator is the optimal accelerator: it is purpose-designed to deliver the best DL performance and TCO for these dedicated use cases.

About the Results for Xeon: Intel submitted MLPerf Training v2.1 results on the 4th Gen Intel Xeon Scalable processor product line across a range of workloads. The Intel Xeon Scalable processor was the only CPU submitted for MLPerf v2.1, once again demonstrating that it is the best server CPU for AI training and enabling customers to use their shared infrastructure to train anywhere, anytime.
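To show what "latest framework releases" means in practice, here is a minimal sketch of a fine-tuning step in PyTorch using CPU autocast with bfloat16; on 4th Gen Xeon processors, recent PyTorch releases route these bfloat16 matrix multiplications through oneDNN kernels that can use Intel AMX. The model, batch, and hyperparameters below are illustrative placeholders, not taken from Intel's submissions.

```python
import torch
import torch.nn as nn

# Placeholder model and data; any torch.nn module works the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 512)          # dummy batch
targets = torch.randint(0, 10, (64,))  # dummy labels

# bfloat16 autocast on CPU: on 4th Gen Xeon, PyTorch's oneDNN backend
# can dispatch these matrix multiplications to Intel AMX instructions.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = loss_fn(model(inputs), targets)

loss.backward()   # gradients are computed outside the autocast region
optimizer.step()
optimizer.zero_grad()
```

TensorFlow exposes the same capability through its mixed-precision policy (for example, `tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")`); no code changes beyond the precision policy are required to benefit from the accelerator.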

We have replaced copyright-sensitive source code with generic or emulated source code. Please contact us if you need further information.

HPL-AI stands for "High Performance LINPACK for Accelerator Introspection," a benchmark that uses mixed-precision arithmetic to solve a system of linear equations and seeks to highlight the convergence of HPC and artificial intelligence (AI) workloads.
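As a sketch of the technique behind that description, the following example solves a linear system with a single low-precision LU factorization plus double-precision iterative refinement, which is the core idea of HPL-AI-style mixed-precision solvers. It uses NumPy/SciPy with float32 standing in for the FP16/bfloat16 used by real implementations; it is an illustration of the method, not the benchmark's actual code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iters=50):
    """Solve Ax = b with low-precision LU plus FP64 iterative refinement."""
    # Factor once in low precision (float32 here; real HPL-AI runs use
    # FP16/bfloat16 on accelerators for the expensive O(n^3) part).
    lu, piv = lu_factor(A.astype(np.float32))

    # Initial low-precision solve, promoted to float64.
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

    for _ in range(max_iters):
        r = b - A @ x  # residual computed in full float64 precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Reuse the cheap low-precision factors to compute a correction.
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned system
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # ~1e-13 or smaller
```

The refinement loop restores double-precision accuracy while the dominant O(n^3) factorization runs at low precision, which is exactly why the benchmark can exploit AI-oriented hardware for a nominally high-precision result.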
