Microbenchmark Technology

Micro-benchmarks are at the heart of multicore timing analysis solutions.
Maspatechnologies builds on long-term expertise in the design and deployment of specialized micro-benchmarks for timing analysis and identification of performance bottlenecks.

Micro-benchmarking for analyzing timing interference

Maspatechnologies micro-benchmarks are small, well-crafted pieces of code that operate at the lowest interface between hardware and software. Each one is carefully designed and refined to stress a specific interference channel of a shared hardware resource.

Shared hardware resources in multicore platforms are the precondition for timing interference: a delay is potentially incurred whenever multiple simultaneous requests must be arbitrated. By generating specific activities on shared resources, micro-benchmarks offer a powerful tool to bring out multicore timing interference and, ultimately, to analyze the impact of interference channels on software timing. To this end, micro-benchmarks can be selectively deployed to exert configurable, quantifiable pressure on a specific resource, enabling diverse verification strategies.

Micro-benchmarks are designed to exhibit a single, clearly defined behavior and to trigger a predefined effect on a specific interference channel of a hardware resource, while minimizing, as far as possible, the contention generated on other interference channels.
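
As an illustration, the sketch below (our own minimal example, not Maspatechnologies' production code) shows the shape such a micro-benchmark can take: a pointer-chasing read loop whose working set exceeds the private caches, so that nearly every access becomes a request on the shared cache/memory channel while arithmetic, branch, and write channels stay almost idle. All sizes, names, and the target channel are illustrative assumptions.

    /* Minimal sketch of a read micro-benchmark (illustrative only):
     * a pointer-chasing loop whose working set exceeds the private
     * caches, so nearly every access becomes a request on the shared
     * cache/memory channel. */
    #include <stddef.h>

    #define WSET_BYTES (4u * 1024u * 1024u) /* assumed to exceed private L1/L2 */
    #define LINE_BYTES 64u                  /* assumed cache-line size */
    #define NODES      (WSET_BYTES / LINE_BYTES)

    typedef struct node {
        struct node *next;
        char pad[LINE_BYTES - sizeof(struct node *)]; /* one node per line */
    } node_t;

    static node_t pool[NODES];

    /* Link the nodes into a single cycle with a fixed stride; production
     * benchmarks would typically randomize the permutation to defeat
     * hardware prefetchers entirely. */
    static node_t *build_chain(void) {
        size_t cur = 0;
        for (size_t i = 0; i < NODES; i++) {
            size_t nxt = (cur + 17) % NODES; /* 17 is co-prime with NODES */
            pool[cur].next = &pool[nxt];
            cur = nxt;
        }
        return &pool[0];
    }

    /* One dependent load per iteration: a steady, quantifiable stream of
     * read requests on the target channel, and little activity elsewhere. */
    void read_pressure(long iterations) {
        volatile node_t *p = build_chain();
        while (iterations--)
            p = p->next;
    }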

Micro-benchmark key features

Maspatechnologies micro-benchmarks stand out from traditional benchmarking approaches through a distinctive set of features:
  • Single-behavior applications
  • Put high, quantifiable pressure on specific interference channels (ICHs) of shared resources
  • Specifically designed to expose ICH impact on software timing
  • Avoid, as far as possible, creating contention on other ICHs
  • Validated via Performance Monitoring Counters (PMCs)
  • Qualifiable technology

Our micro-benchmarks build on long-standing hardware and analysis expertise matured over more than 40 years of combined experience.

We offer a wide catalogue of micro-benchmarks for the well-known MPSoCs used in critical domains. Following an incremental approach, we cover both on-chip and off-chip hardware resources: on-core resources (private caches, interconnects), off-core resources (shared caches, DMA controllers), GPUs and other accelerators, and I/O interfaces.

Comprehensive catalogue

Maspatechnologies builds on a consolidated database of more than 300 micro-benchmarks that can be tailored to different hardware and software configurations and accommodate customer-specific verification requirements.
Micro-benchmarks offer extensive coverage of the most common resources. Solutions for target-specific components are also available on demand.
  • Request patterns: RR, RW, WR, WW (see the sketch after this list)
  • MMU, Paging, Memory partitioning
  • Memory ranks, banks and the like
  • Fairness assessment
  • Contention analysis
  • Routing impact
  • Cache hierarchies (L1, L2, …)
  • Inter-level inclusion policies
  • Coherence
  • Partitioning
  • Floating-Point Units (FPUs)
  • Graphics Processing Units (GPUs)
  • Digital Signal Processors (DSPs)
  • Correctness assessment
  • Accuracy assessment
  • Homogeneity
  • Segregation mechanisms
  • Dynamic voltage and frequency scaling (DVFS) behavior
  • Dynamic power/thermal caps
  • Controllers and channels
  • Operation modes
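
Regarding the request-pattern entry above: RR, RW, WR, and WW are commonly read as the combination of access types issued by the task under analysis and by its contenders. The sketch below is our own minimal illustration of such read and write streams; buffer sizes, the line size, and the function names are assumptions, not the catalogue's actual code.

    /* Minimal sketch of read and write request streams, assuming the
     * RR/RW/WR/WW patterns denote the access types of the task under
     * analysis and its contenders. Sizes are illustrative. */
    #include <stdint.h>

    #define BUF_BYTES (4u * 1024u * 1024u) /* large enough to spill into the shared level */
    #define LINE      64u                  /* assumed cache-line size */

    static uint8_t buf[BUF_BYTES];

    /* One read request per cache line. */
    uint64_t read_stream(long iters) {
        uint64_t sum = 0;
        while (iters--)
            for (uint32_t off = 0; off < BUF_BYTES; off += LINE)
                sum += *(volatile uint8_t *)&buf[off];
        return sum;
    }

    /* One write request per cache line. */
    void write_stream(long iters) {
        while (iters--)
            for (uint32_t off = 0; off < BUF_BYTES; off += LINE)
                *(volatile uint8_t *)&buf[off] = (uint8_t)off;
    }

    /* A "WR" experiment would then pin write_stream() on the core under
     * analysis and read_stream() on the contending cores; running all
     * four combinations bounds the channel's sensitivity to each mix. */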

Micro-benchmark Verification and Validation

Micro-benchmarks are a powerful enabler for your multicore verification process.
For this reason, they must undergo a rigorous verification and validation campaign to guarantee that they comply with their design and ultimately produce the expected effects on multicore execution.

Micro-benchmark design and development follows the classic V-model software development life-cycle. Each micro-benchmark is developed starting from a formalization of its intended behavior as a traceable requirement.

In the classic V-model, each design activity on the left side of the V corresponds to a verification activity on the right side. During the testing phases, it is fundamental to verify that the micro-benchmarks behave as expected. Notably, correctness does not depend exclusively on functional behavior: the micro-benchmark must also be guaranteed to cause the expected degree of contention on the target component. To verify the latter, we build on the information collected from the Hardware Event Monitors (HEMs) normally available in multicore hardware. To ensure that the collected metrics are valid, we independently verify the results produced by the HEMs.
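
For illustration, a minimal sketch of this kind of check on a Linux host, using the perf_event_open() system call to read one HEM (the last-level-cache miss counter) around a run of the hypothetical read_pressure() kernel sketched earlier; on a bare-metal certification target, the same comparison would be made by reading the PMU registers directly.

    /* Illustrative sketch: count last-level-cache misses around one run
     * of the micro-benchmark and compare against its design. Linux
     * perf_event_open() stands in for direct PMU access here. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    void read_pressure(long iterations); /* hypothetical kernel, see above */

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CACHE_MISSES; /* last-level cache misses */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        read_pressure(1000000L);
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        long long misses = 0;
        if (read(fd, &misses, sizeof(misses)) != sizeof(misses)) return 1;
        close(fd);

        /* Accept the benchmark only if the count matches its design:
         * here, roughly one miss per dependent load. */
        printf("LLC misses: %lld (expected ~1 per iteration)\n", misses);
        return 0;
    }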

For each micro-benchmark, verification evidence and support documents for certification are produced following a requirements-based testing approach inspired by DO-178C principles. Verification artifacts include verification requirements, test design and procedures, and test reports.

Micro-benchmark verification artifacts feed directly into the certification documentation and process, as supporting evidence on which to build the certification argument for the analyzed system. Micro-benchmarks can be leveraged to demonstrate Freedom From Interference (ISO 26262, automotive) and to cover interference channel identification, classification, and bounding (CAST-32A, avionics).

Micro-benchmark tailoring and adaptation

While micro-benchmarks may share high-level design features, they are inherently platform-specific, and some degree of tailoring is always necessary, either to adapt to the specific hardware and software configuration or to address specific customer requirements.

The adaptation of the micro-benchmark technology to a specific project follows a well-structured process covering analysis, development, and verification.

Requirement definition and hardware analysis

Specific verification requirements drive the qualitative and quantitative hardware analysis effort to identify the sources of multicore timing interference in the considered platform.

Micro-benchmark tailoring and porting
A set of micro-benchmarks is tailored and ported to the target configuration. This happens only after a validation campaign targeting the event monitors, as the latter are used to verify micro-benchmark behavior. At the end of this phase, micro-benchmarks are selected to cover the verification requirements.
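
As a sketch of what such an event-monitor validation step can look like: execute a known number of countable events and check that the HEM reports it within tolerance. The pmu_select()/pmu_read() accessors, the event encoding, and the tolerance below are hypothetical placeholders for a target's PMU interface, not an actual Maspatechnologies procedure.

    /* Illustrative sketch of validating an event monitor itself: retire
     * a known number of loads and compare against the HEM reading.
     * pmu_select()/pmu_read() are hypothetical PMU accessors. */
    #include <stdint.h>
    #include <stdio.h>

    extern void     pmu_select(unsigned event_id); /* hypothetical */
    extern uint64_t pmu_read(void);                /* hypothetical */

    #define EV_LOADS_RETIRED 0x24u  /* hypothetical event encoding */
    #define N_LOADS          100000u
    #define TOLERANCE_PCT    1u     /* allow 1% noise (interrupts, etc.) */

    static volatile uint64_t cell;

    int hem_is_valid(void) {
        pmu_select(EV_LOADS_RETIRED);
        uint64_t before = pmu_read();

        for (unsigned i = 0; i < N_LOADS; i++)
            (void)cell;  /* one volatile load per iteration */

        uint64_t counted = pmu_read() - before;
        uint64_t error = counted > N_LOADS ? counted - N_LOADS
                                           : N_LOADS - counted;
        if (error * 100u > (uint64_t)N_LOADS * TOLERANCE_PCT) {
            fprintf(stderr, "HEM mismatch: counted %llu, expected %u\n",
                    (unsigned long long)counted, N_LOADS);
            return 0;
        }
        return 1; /* HEM accepted for use as verification evidence */
    }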

Test design and execution
Tests deploying the selected micro-benchmarks are designed and executed on the target platform. Finally, the results are analyzed and the conclusions formalized in a set of verification artifacts, which can be used as part of the certification project.

Evidence for multicore certification

The evidence you need to support your certification arguments over multicore execution

We design and deploy platform-specific micro-benchmarks to confirm, identify, and assess the potential sources of multicore timing interference (Interference Channel Identification). Test designs are formalized and executed to produce trustworthy evidence on the worst-case impact of multicore interference on the execution time of a target application under specific hardware and software configurations. Maspatechnologies micro-benchmark technology supports your multicore certification projects by providing the necessary evidence upon which to build a certification argument on the absence, or tight control, of sources of interference, as required by CAST-32A and ISO 26262.