EEMBC's MLMark™ benchmark is the first in a series designed to measure aspects of Machine Learning performance on embedded edge-device platforms.
Portland, Ore.—July 29, 2019—EEMBC, an industry consortium that develops benchmarks for embedded hardware and software, today announced the release of MLMark, the first Machine Learning benchmark specifically designed with commercial applications in mind.
As the market for Machine Learning-enabled embedded platforms rapidly expands, there has been intense demand for a consistent benchmarking standard to create transparency and accountability in an industry awash in hype. Creating relevant standards for neural-network inference behavior has been challenging, however, given the pace of advancement and the wide variety of SDKs and compute libraries.
MLMark addresses this confusion by focusing on three measurements deemed significant in the majority of commercial ML applications: inference throughput, latency, and prediction accuracy.
"Hardware platforms for running AI are proliferating, in a wide variety of architectures—both repurposed and custom-built—and the industry has had a hard time finding a common baseline for measuring their performance and accuracy," says Sujeeth Joseph, Chief Product Officer at Ignitarium. "The release of MLMark comes at just the right time to fill a crucial gap in the AI inference toolkit. It's a welcome addition to EEMBC's highly respected suite of benchmarks."
The largest single category of ML application today is image recognition, so MLMark's measurements focus on visual object-detection and image-classification tasks. It comes with three vision models ready for immediate use: ResNet-50 v1, MobileNet v1.0, and SSDMobileNet v1.0. In addition to a hardware-independent TensorFlow implementation, MLMark also includes C++ code for a variety of target hardware.
"With this benchmark, we're helping to solve the 'scavenger hunt' problem that faces newcomers to Machine Learning - there are so many competing technologies that it's hard to know where to start," says EEMBC President Peter Torelli. "MLMark includes a collection of models, ground truths, and source code right out of the box, giving recent entrants a useful overview, and helping the entire industry move toward a more coherent, collaborative footing."
Beyond this educational role, MLMark also enables engineers to build better performance models, helping them identify bottlenecks in ML-specific tasks when developing new products. And as the market expands, the MLMark score database provides both a snapshot of the current market and a historical record (and predictor) of industry trends.
By creating strict requirements for gathering data, disclosing results, and publishing scores, MLMark establishes a standardized environment where competing technologies can be evaluated on their merits rather than their marketing messages, raising the quality of the ML space as a whole by incentivizing the market to focus on improving performance. "EEMBC has a 20-year track record of delivering intuitive benchmarks that enable the embedded industry," explains Torelli, "and we're excited to add machine learning to our repertoire. It's a crucial technology, and ultimately it should be a capability, not a contest."
For more information, please visit the EEMBC MLMark website.
EEMBC Contact: