MLPerf Inference v1.0: 2000 Suite Results, New Power Measurements

There has long been a strong desire for an industry-standard set of machine learning benchmarks, akin to the SPEC benchmarks for CPUs, to enable comparison of competing solutions. Over the past two years, MLCommons, an open engineering consortium, has been developing and disclosing its MLPerf benchmarks for training and inference, with key consortium members releasing benchmark numbers as the series of tests has been refined. Today we see the full launch of MLPerf Inference v1.0, along with ~2000 results submitted to the database. Alongside this launch, a new MLPerf power measurement technique is also being disclosed, providing additional metadata for these test results.