Certified Performance Analysis for Embedded Systems Designers

Summer 2008

EEMBC Calendar


EEMBC Journal readers are entitled to a 15% registration discount at Digital Power Forum '08, September 15-17 in San Francisco. To receive your discount, enter code "EEMBC508" when entering payment information at checkout. Presentations at the conference, for which EEMBC is a sponsor, will include a talk by EEMBC President Markus Levy on "Enabling System Developers with Tools That Support Accurate Power Measurement" as part of an "Optimizing Energy Efficiency" track beginning at 1 p.m. on Tuesday, September 16. The Digital Power Forum exhibition will feature a demonstration of EEMBC's EnergyBench performance/energy tool at the EEMBC booth. More information.


EEMBC is also among the sponsors of this year's Hot Chips symposium (August 24-26, Stanford University) and Portable Design Conference & Exhibition (September 18, San Jose). More information on Hot Chips. More information on Portable Design Conference & Exhibition.


On October 21-23 at the Santa Clara Convention Center, the AdvancedTCA Summit will bring together developers and end users of equipment meeting the Advanced Telecommunications Computing Architecture (AdvancedTCA) specification. Topics will include creating standards-based telecom equipment, reducing equipment cost and development time, and making equipment more flexible and more maintainable. Presentations at the conference, for which EEMBC is a sponsor, will include a talk related to EEMBC's new Hypervisor working group. Early registration savings: all EEMBC Journal subscribers will receive a pre-registration discount; simply enter code SPGP when registering. EEMBC will also have a booth in the exhibition, featuring a demonstration of MultiBench, a new multicore benchmark suite. More information.


Chris Fournier of AMD will present "Analyzing Embedded Multicore Processor and System Capabilities," which includes an overview of MultiBench, EEMBC's new multicore benchmark software, at the Embedded Systems Conference in Boston. This year's show takes place October 26-30 at the Hynes Convention Center. More information.

 

Letter from the President

 

Evolution is the key to survival. Hence, we have all witnessed the evolution of the embedded processor industry from simple microprocessors with a few integrated peripherals to extremely complex SoCs containing multiple cores and an abundance of sophisticated peripherals and memory hierarchies.

EEMBC has evolved along with the industry. We've expanded our benchmarks to include EnergyBench, a tool that measures the energy cost of processor performance, and our first set of multicore benchmarks has made its debut. New benchmark programs support a wide variety of embedded technologies, including hypervisors and networking/telecom. We're also beginning an evolution toward a completely different benchmark approach, which I'll describe below.

In the last issue of EEMBC Journal, we discussed MultiBench, our new suite of multicore benchmarks. MultiBench is composed of 36 workloads that utilize data decomposition to analyze memory bottlenecks, operating system overhead, and thread synchronization. Associated with these workloads, we have recently standardized on a set of single-number "marks," or consolidated scores, to help you more quickly assess the performance of a multicore processor. Shay Gal-On, EEMBC director of software engineering, discusses these marks in detail in his column in this issue of EEMBC Journal.

In the meantime, EEMBC is working on an extended version of MultiBench that will utilize functional decomposition to further stress the ability of a multicore processor to manage multiple threads and pass data between them. The API will allow handling . . . more . . .


Marks for MultiBench
By Shay Gal-On, Director of Software Engineering

Analyzing multicore platform performance is a remarkably complex task. Unlike our single-core, single-threaded benchmarks, MultiBench exercises the processor and its memory hierarchy, the operating system scheduler, and other factors. The number of simultaneous contexts and work items, as well as the associated data sizes, has a huge effect on performance. Initial data we have collected in the lab indicates that it is not possible to represent the capabilities of a multicore platform with a single number. Instead, we derived three numbers that best reflect multicore processor throughput and performance scaling.

MultiBench 1.0 (known internally as RC1) contains 36 workloads. These workloads use common embedded algorithms in a way that enables multiple cores to enhance performance. When running these workloads, the user may change the number of work items running in parallel, as well as the affinity of each instance of the work item, to take advantage of available computing resources in the target platform.

To provide as much insight as possible when evaluating architectures, EEMBC has created several "marks," or consolidated scores, for MultiBench. The consolidated score for a given device is based on individual scores in a group of related workloads. Furthermore, each mark is based on two figures of merit derived from each workload:

  1. Performance Factor – This figure of merit captures the best throughput the platform achieves on a workload, defined in iterations per second (the number of times the platform can execute the workload each second). The configuration that yields the best throughput is platform dependent.
  2. Scaling Factor – This figure of merit defines how well performance scales when more computing resources are brought to bear on the workload. There are several ways of utilizing computing resources in parallel, and MultiBench 1.0 workloads work with the MultiBench framework to test most of them (the exception being functional decomposition, which will be addressed in the next revision of MultiBench). By limiting the available resources to executing only one work item at a time and comparing that throughput to the best throughput for the workload, we gain insight into how well the platform scales for that workload. To provide performance-scaling information at the mark level, we also define an associated Scale Factor: first calculate the geometric mean of the throughput for the workloads with only one work item at a time enabled, then divide the performance mark by that number. A sketch of this calculation appears after this list.
SingleWorkerMark – This mark consolidates the best throughput of workloads with only one work item that uses only one worker. The throughput factor . . . more . . .
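
To illustrate how these figures of merit combine, here is a minimal Python sketch of the Scale Factor calculation described above. The workload names and throughput numbers are hypothetical examples, and the sketch follows the description in this column rather than EEMBC's official reporting tools.

    # Minimal sketch of the mark and Scale Factor calculation described above.
    # Workload names and throughput values are hypothetical examples.
    from math import prod

    def geometric_mean(values):
        return prod(values) ** (1.0 / len(values))

    # Best observed throughput per workload, in iterations per second.
    best_throughput = {"workload-a": 12.5, "workload-b": 8.0, "workload-c": 3.2}

    # Throughput when resources are limited to one work item at a time.
    single_item_throughput = {"workload-a": 4.1, "workload-b": 2.9, "workload-c": 1.1}

    # The performance mark consolidates the best-throughput results.
    performance_mark = geometric_mean(list(best_throughput.values()))

    # The Scale Factor divides the performance mark by the geometric mean
    # of the single-work-item throughput.
    scale_factor = performance_mark / geometric_mean(list(single_item_throughput.values()))

    print(f"Performance mark: {performance_mark:.2f}")
    print(f"Scale Factor:     {scale_factor:.2f}")
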
New Benchmark Scores
IBM 750CL - 1000 MHz
Software: GHS 4.2.3
Hardware/Production Silicon

DENBench 1.0 (out-of-the-box)
Networking 2.0 (out-of-the-box)
OABench 2.0 (out-of-the-box)
TeleBench 1.1 (out-of-the-box)
ConsumerBench 1.1 (out-of-the-box)
OABench 1.1 (out-of-the-box)
AutoBench 1.1 (out-of-the-box)

News Briefs

EEMBC has two new members. Open Kernel Labs (OK Labs) has joined the consortium's new Hypervisor Subcommittee, while Samsung has joined the EEMBC Board of Directors.

The EEMBC Web site has a convenient new feature that allows you to export side-by-side comparisons of multiple score reports to an Excel spreadsheet. Simply select the score reports you want to compare, click "View Report," then click on "Export Report to Excel." This feature makes it much easier to compare scores.

The next meeting of the EEMBC Board of Directors and Working Groups will take place September 9 and 10 in Santa Barbara, Calif. at The Hotel Santa Barbara. Technical meetings will take place on September 10. If you are not a member (yet) and would like to attend the technical meetings as a guest, send an email to sb_guest@eembc.org to inquire.

If you do not wish to receive e-mail from EEMBC, you can unsubscribe. EEMBC sends no more than one e-mail per month to registered users at www.eembc.org. Continuing your subscription ensures you'll be notified when new scores and other important announcements are available.