Benchmark Characterization
Preliminary results on an EEMBC benchmark characterization project carried out at North Carolina State University are now available.
The EEMBC Board of Directors observed the 10th anniversary of EEMBC’s founding at its meeting in Sunnyvale, Calif. on May 22. The festivities included special presentations by EEMBC veterans Alan Anderson of Analog Devices, Sergei Larin of Freescale Semiconductor, John Hogan of MIPS, Geoff Lees of NXP Semiconductor, Roger Shepherd of STMicroelectronics, and William Bryant of Sun Microsystems.
EEMBC Calendar
Markus Levy presents "Performance Evaluation of SMP Power Architectures" at the Power Architecture Development Conference, September 24-25 in Austin, Texas. More info is at www.power.org/devcon/07.
Shay Gal-On (EEMBC), Rob Cosaro (NXP), and John Goodacre (ARM) will present on several EEMBC related topics at the upcoming ARM Developers Conference being held at the Santa Clara Convention Center on October 2-4, 2007. More info is at www.rtcgroup.com/arm/2007/.
Markus Levy will discuss multicore benchmarking at the Multicore Expo being held in Tokyo, Japan on October 31 and November 1, 2007. More info is at www.multicore-expo.com.
Letter from the President
How to Win at Embedded Processor Benchmarking
The guaranteed way to be a performance winner at embedded processor benchmarking is to use a 12 GHz processor with 512 kbytes of L1 cache, 64 Mbytes of L2 cache, and a perfectly tuned compiler. But this genre of "benchmarking" has little to do with helping the system designer make intelligent choices among processors. In the real world, the required/desired performance of an embedded processor depends on the application, and system designers typically want no more or less performance than the application needs. This is why the most useful benchmarks aren't those that showcase only speed, but rather ones that provide a variety of information that can be used to make useful assessments.
Since the inception of EEMBC, processor vendors and system designers have used the benchmark scores to make high-level comparisons between processors. However, these scores (especially the out-of-the-box scores) also reflect the capabilities of the compilers, and we have seen very significant performance differences between compilers running on the same processor platform. Although scores are usually published only for the compiler that generates the best results, embedded system programmers would doubtless like to see scores generated with other compilers, as well as with other compiler options (e.g., optimizing for performance versus for code size).
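To make the compiler's influence tangible, the following Python sketch shows one way to time the same benchmark kernel built with different compilers and optimization settings. It is purely illustrative and not part of any EEMBC harness: the file name benchmark.c, the compilers, and the flags are assumptions, and wall-clock timing of a whole process is far cruder than EEMBC's instrumented measurements.

import os
import subprocess
import time

# Hypothetical build configurations to compare; adjust to the toolchains at hand.
CONFIGS = [
    ("gcc", ["-O2"]),    # tuned for speed
    ("gcc", ["-Os"]),    # tuned for code size
    ("clang", ["-O2"]),
]

def build_and_run(cc, flags, src="benchmark.c", exe="./bench_out"):
    # Build the kernel, then time one complete run of the resulting binary.
    subprocess.run([cc, *flags, "-o", exe, src], check=True)
    start = time.perf_counter()
    subprocess.run([exe], check=True)
    elapsed = time.perf_counter() - start
    return elapsed, os.path.getsize(exe)

for cc, flags in CONFIGS:
    seconds, size = build_and_run(cc, flags)
    print(f"{cc:>5} {' '.join(flags):<4}  {seconds:8.3f} s  {size:8d} bytes")

Reporting run time next to binary size in this way also makes the performance-versus-code-size trade-off visible at a glance.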
With EnergyBench, EEMBC added a new dimension to the comparisons that can be made between processors. The balance of performance and energy is a critical design concern for just about any application. Hence, it makes sense to generate benchmark scores (and EnergyBench scores) for a processor platform running at a variety of operating frequencies and potentially different operating voltages. Peeling off another layer of this onion, you may also notice that performance does not necessarily scale linearly with changes in the operating frequency, as it's entirely possible that the core-to-memory clock ratio also changes. This is the kind of information that system designers need.
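As a back-of-the-envelope illustration of that point (not the EnergyBench methodology), the short Python sketch below takes throughput and average power at several core frequencies and reports energy per iteration alongside the speedup relative to the lowest clock. The numbers are placeholders chosen only to show performance scaling by less than the clock ratio.

# Placeholder measurements: (core clock in MHz, iterations/second, average watts).
# These are not real device data; they merely illustrate sub-linear scaling when
# the core-to-memory clock ratio changes with frequency.
measurements = [
    (104, 1.00e6, 0.045),
    (208, 1.85e6, 0.110),   # under 2x throughput despite 2x core clock
    (416, 3.10e6, 0.260),
]

base_mhz, base_rate, _ = measurements[0]
for mhz, rate, watts in measurements:
    energy_uj = watts / rate * 1e6            # microjoules per iteration
    print(f"{mhz:4d} MHz: {mhz / base_mhz:4.2f}x clock, "
          f"{rate / base_rate:4.2f}x performance, {energy_uj:6.3f} uJ/iteration")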
Another way to provide still more useful and interesting information for system designers is to do multiple benchmark runs that isolate the performance impact of various device features and system-level parameters. For example, NXP Semiconductors recently published certified performance/energy scores for its LPC3180 microcontroller that were obtained under four distinct conditions: with cache and floating-point unit both on, both off, and in one-on, one-off combinations. Certainly, in the test with the cache off and floating-point unit off, the microcontroller was not represented in its best light, but this information has allowed system designers to see the impact of those features and thus judge how much energy (no pun intended) they should put into optimizing their applications to fully take advantage of them. In the case of the floating-point unit, the performance and energy scores (comparing FPU on and off) provide quite valuable information for determining whether to spend the ‘extra money’ on a microcontroller with or without an FPU (which remains an unusual feature within the low-end microcontroller market).
Of course, there are many other processor- and system-level parameters that could be varied to generate additional performance and energy data. For example, you could vary the memory options, such as DDR2 versus DDR3, or the cache size (if using a simulator). You could do separate benchmark runs to assess the impact of on-chip versus off-chip memory, hardware acceleration, or special instructions. Although the last test in particular might require some hand tuning of benchmark code, all of these runs would provide valuable information for system designers, and they give our industry's trade magazines something to write about, as demonstrated by the widespread coverage of the aforementioned LPC3180 benchmark scores. I've been saying for years that the fastest processor isn't always the "bestest." I encourage all EEMBC members to take advantage of the manifold opportunities presented by our benchmarks to confirm or challenge this assumption with benchmark runs that go beyond the "out-of-the-box" and "optimized" paradigms and help engineers make even better design choices.
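For members who want to organize such multi-configuration runs, here is a hypothetical Python skeleton in the spirit of the LPC3180 example. The functions apply_config() and run_benchmark() are stand-ins for whatever board-support or simulator hooks a real harness would use; they are not EEMBC APIs.

from itertools import product

def apply_config(cache_on, fpu_on):
    # Placeholder: a real harness would program the cache controller here and
    # select an FPU or software floating-point build of the benchmark.
    pass

def run_benchmark():
    # Placeholder: a real harness would return measured (iterations/second,
    # millijoules per iteration) for the current configuration.
    return 0.0, 0.0

# Sweep the 2x2 matrix of cache and FPU settings and tabulate the results.
print(f"{'cache':>6} {'fpu':>6} {'iter/s':>10} {'mJ/iter':>8}")
for cache_on, fpu_on in product((True, False), repeat=2):
    apply_config(cache_on, fpu_on)
    perf, energy = run_benchmark()
    print(f"{str(cache_on):>6} {str(fpu_on):>6} {perf:10.1f} {energy:8.3f}")

The same loop structure extends naturally to additional axes such as memory type or on-chip versus off-chip placement, at the cost of a larger configuration matrix.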
Markus Levy
EEMBC President
New Benchmark Scores
NEWS BRIEFS
The new chair of EEMBC's Networking Subcommittee is Raghib Hussain, Vice President of Software Engineering and CTO at Cavium Networks.
Cypress Semiconductor has joined the EEMBC Automotive/Industrial subcommittee. "The development of system-level benchmarks that allow engineers to make unbiased comparisons between platforms is an important activity, and we’re happy to lend our expertise to the EEMBC," said Cypress' Ata Khan, vice president of technical staff. "We have a unique perspective in this area based on the architecture of our PSoC® mixed-signal arrays, which include a microcontroller core and both analog and digital programmable blocks."
EEMBC will release the next generation of its office automation benchmarks, OABench™ Version 2.0, Part 1 in September. The four benchmark kernels included in the suite are Bezier, Dithering, Rotate, and Text Parsing. Part 2 of the suite, which will include a special embedded version of Ghostscript, is expected for release in Q4.
Multicore processing has become prevalent in many areas of the embedded industry, creating a pressing need for benchmarks that help chip and system designers understand its performance benefits and bottlenecks. This area has been a primary focus for EEMBC this year, and the Consortium is now discussing how to create realistic workloads that will sufficiently stress multicore processors.