Designing Computer Architecture Research Workloads
Abstract
Designers of microarchitectures for general-purpose microprocessors once based their design decisions on experts' intuition and rules of thumb. Since the mid-1980s, however, microarchitecture research has become a systematic process that relies extensively on simulation tools. Although architectural simulators model microarchitectures at a high abstraction level, the increasing complexity of both the microarchitectures and the applications that run on them makes these simulators very time-consuming. Simulators must execute huge numbers of instructions to create a workload representative of real applications. The Standard Performance Evaluation Corporation's (SPEC) CPU2000 benchmark suite, for example, has many more dynamic instructions than CPU95, which it replaced. Although real-hardware evaluations benefit from this increase, using architectural simulators for such large instruction counts becomes infeasible. The dynamic instruction count of the SPEC CPU2000 benchmark parser with its reference input is about 500 billion instructions, or three weeks of simulation at 300,000 instructions per second. Multiplying this by the number of benchmarks that must be run for a huge number of design points creates an unreasonably long simulation time, stretching the time to market. Running the simulations in parallel instead results in a huge equipment cost.

To address this problem, we can use reduced input sets instead of reference input sets. The ideal reduced input set has a limited dynamic instruction count but produces program behavior comparable to that of the reference input set. MinneSPEC collects a number of reduced input sets for some CPU2000 benchmarks. It proposes three reduced inputs: smred for short simulations, mdred for medium-length simulations, and lgred for full-length, reportable simulations. Although a number of techniques, such as truncating or modifying the inputs, can derive these reduced input sets from the reference inputs, it is unclear whether a reduced input set will produce behavior similar to that of the program running its reference input.

We have developed a methodology that reliably quantifies similarity of program behavior. With it, we can validate MinneSPEC; that is, we can verify whether the reduced input sets result in program behavior similar to that of the reference inputs. To overcome the shortcomings of previous work, our methodology uses metrics that are closely related to performance, and it applies statistical data analysis techniques to compute similarity in program behavior from uncorrelated workload characteristics.
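As a concrete illustration of the simulation-time arithmetic and of the idea of comparing program behavior through uncorrelated characteristics, the sketch below uses Python with NumPy. It is illustrative only and rests on assumptions the abstract does not spell out: the workload characteristics are random placeholders rather than measured data, and principal component analysis is used here as one common way to obtain uncorrelated characteristics; it is not the authors' actual tooling.

import numpy as np

# Simulation-time estimate quoted in the abstract: roughly 500 billion
# dynamic instructions simulated at 300,000 instructions per second.
instructions = 500e9
sim_rate = 300e3                         # simulated instructions per second
days = instructions / sim_rate / 86400   # 86,400 seconds per day
print(f"parser with reference input: about {days:.0f} days of simulation")

# Hypothetical workload-characteristic matrix: one row per (benchmark, input)
# pair, one column per characteristic (instruction mix, branch behavior,
# cache behavior, ...). The values are random stand-ins, not measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))              # 8 program/input pairs, 5 characteristics
labels = [f"program{i}" for i in range(8)]

# Standardize the characteristics, then decorrelate them with principal
# component analysis (eigendecomposition of the correlation matrix).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]        # strongest components first
scores = Z @ eigvecs[:, order]           # uncorrelated coordinates

# Treat the first row as a reference input and the second as a candidate
# reduced input; a small distance in the decorrelated space indicates that
# the two inputs exercise the machine in a similar way.
reference, reduced = scores[0], scores[1]
print(f"distance({labels[0]}, {labels[1]}) = {np.linalg.norm(reference - reduced):.3f}")

In such a setting, a reduced input would be judged a good stand-in for its reference input when the distance between the two in the decorrelated space is small compared with the distances between unrelated program/input pairs.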
Similar resources
Research statement - Architectural Considerations for Big Data
Big data analytics are the driving force behind the current revolution in computing architectures and system designs. While a plethora of research has focused on designing the computing fabric for big data analytics, limited research has focused on building a storage system architecture to enable big data acceleration. My current research bridges this gap by designing storage systems for big da...
The Parallel Research Kernels: A tool for architecture and programming system investigation
We present the Parallel Research Kernels, a collection of kernels supporting research on parallel computer systems. This set of kernels covers the most common patterns of communication, computation and synchronization encountered in parallel HPC applications. By focusing on these kernels instead of specific workloads, one can design an effective parallel computer system without needing to make ...
Challenges in Computer Architecture Evaluation
Computer architecture’s heavy emphasis on simulation effectively discourages the research community from exploring other useful and possibly more informative modeling techniques. The few published papers using and proposing analytic models have not stimulated significant follow-up efforts. Stories abound of such papers receiving knee-jerk negative rejections from program co...
Capturing Locality of Reference and Branch Predictability of Programs in Synthetic Workloads
A synthetic workload whose performance correlates well with long-running application programs is of great benefit to the computer architecture community because it reduces simulation time, fosters benchmark sharing by abstracting proprietary codes, and enables analysis of futuristic workloads by altering program characteristics. Recent research [2] [14] has demonstrated that it is possible to a...
An Integrated Simulation Environment for Parallel and Distributed System Prototyping
The process of designing parallel and distributed computer systems requires predicting performance in response to given workloads. The scope and interaction of applications, operating systems, communication networks, processors, and other hardware and software lead to substantial system complexity. Development of virtual prototypes in lieu of physical prototypes can result in tremendous savings...
Publication date: 2003