Industry-Standard Benchmarks for Embedded Systems
EEMBC, an industry alliance, develops benchmarks to help system designers select the optimal processors and understand the performance and energy characteristics of their systems. EEMBC has benchmark suites targeting mobile devices (for phones and tablets), networking, ultra-low power microcontrollers, the Internet of Things (IoT), digital media, automotive, and other application areas. EEMBC also has benchmarks for general-purpose performance analysis including CoreMark, MultiBench (multicore), and FPMark (floating-point).

CoreMark-Pro FAQ


What are the run rules and allowances for CoreMark-PRO?
  1. For the base run rules:
    • Each workload must run for at least 1000 times the minimum timer resolution. For example, on a system with a 10 ms timer tick, each workload must run for at least 10 s.
    • To report results, the build target "certify" must be used, or that process must be followed. Each workload must report no errors when run with -v1.
    • All workloads within CoreMark-PRO must be compiled and linked with the same flags. These flags must be disclosed and/or reported with any publication of CoreMark-PRO scores.
  2. Base run rule allowances:
    • Profile-guided optimization is allowed in the base run. If used, it must be applied to all workloads.
    • You may change the number of iterations.
    • You may change the toolchain and build/load/run options.
    • You may change the implementation of the porting files under the mith/al subtree.
    • You may change makefiles or use IDE projects.
  3. Base run rule restrictions:
    • You cannot change the source files under the benchmarks or workloads folders.
  4. Full Fury (optimized):
    • Full Fury runs are permitted only for EEMBC members. Publishing CoreMark-PRO results obtained under Full Fury rules, or disclosing them by other means, requires certification of the results by the EEMBC Technology Center (a free service for EEMBC members).
  5. Format for reporting results (this includes data sheets, presentations, marketing materials, etc.):
    • CoreMark-PRO 1.0.x : N / C [/ P] [/ M]
    • For example: CoreMark-PRO 1.0.x : 128 / GCC 4.1.2 -O2 / 2  or CoreMark-PRO 1.0.x : 1400 / GCC 3.4 -O4
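The timing requirement in the first base run rule is easy to check mechanically. The sketch below is a hypothetical helper (not part of the CoreMark-PRO kit) that derives the minimum required runtime per workload from a platform's timer resolution:

```python
def min_runtime_seconds(timer_resolution_s: float) -> float:
    """Each workload must run for at least 1000x the minimum timer resolution."""
    return 1000 * timer_resolution_s

# Example: a system with a 10 ms timer tick needs at least 10 s per workload.
print(min_runtime_seconds(0.010))  # 10.0
```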
How does this compare to CoreMark?

1. Both have open score submission.

2. Both CoreMark and CoreMark-PRO perform self-verification.

3. CoreMark has one integer workload with four functions; CoreMark-PRO has five integer and four floating-point workloads.

4. CoreMark is relatively small (2k code, 16k data); CoreMark-PRO is large (42k–3MB per context).

5. CoreMark targets the processor core; CoreMark-PRO targets the processor and memory subsystems, and adds multicore support.

Does this replace CoreMark?

Absolutely not. CoreMark continues to grow in popularity for analyzing microcontrollers and many embedded processors.

What if I don't have hardware floating-point?

You can use software floating-point emulation, but be warned: it will be slow.

Can I optimize the code, or must it be used out of the box?

If you're not publishing CoreMark-PRO results, you can do what you wish.

What is the run-time of CoreMark-Pro?

The subtests might seem short enough that the cores can run entirely at their burst frequency, which could suggest the benchmark does not measure sustained execution (and that smartphones/tablets would therefore never throttle during a run). However, each subtest's score is measured in iterations per second (a bandwidth measure), and the user may choose the number of iterations. For CoreMark-PRO inside AndEBench-Pro, the iteration counts were set so that each workload runs for 2 s on a reference device. The requirement is not on run time as such, but on the accuracy required from the timing mechanism.
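The relationship between iteration count, run time, and score can be sketched as follows. These helper names are illustrative only; they mirror the iterations-per-second scoring and the 2 s target runtime described above, not actual CoreMark-PRO code:

```python
import math

def score_iterations_per_second(iterations: int, elapsed_s: float) -> float:
    """Subtest score: iterations completed per second (a bandwidth measure)."""
    return iterations / elapsed_s

def iterations_for_target(per_iteration_s: float, target_runtime_s: float = 2.0) -> int:
    """Pick an iteration count so a workload runs roughly target_runtime_s,
    as was done for CoreMark-PRO inside AndEBench-Pro (2 s on a reference device)."""
    return max(1, math.ceil(target_runtime_s / per_iteration_s))

# Example: if one iteration takes 50 ms, 40 iterations give a ~2 s run,
# and the resulting score is 20.0 iterations/s.
n = iterations_for_target(0.050)
print(n, score_iterations_per_second(n, n * 0.050))  # 40 20.0
```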