Search results for: instruction fetch
Number of results: 42508
Despite the extensive deployment of multi-core architectures in the past few years, the design and optimization of each single processing core is still a fresh field in computing. On the other hand, having a design procedure (used to solve the problems related to the design of a single processing core) makes it possible to apply the proposed solutions to special-purpose processing cores. The i...
The instruction fetch mechanism is a performance bottleneck of a superscalar processor. Fetch performance can be improved with the aid of an instruction memory known as a trace cache. This paper presents analytical expressions that describe the instruction fetch performance of a trace cache microarchitecture. The instruction fetch rates predicted by the expressions differ by seven percent from the si...
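The snippet cuts off before the expressions themselves appear; as a hedged illustration of the kind of analytical model trace cache studies typically use (the symbols below are assumptions for illustration, not the paper's notation), an effective fetch rate can be written as a hit-rate-weighted mix of trace cache and instruction cache supply:

\[
R_{\text{fetch}} \approx h_{tc}\, L_{t} + (1 - h_{tc})\, W_{ic}
\]

where \(h_{tc}\) is the trace cache hit rate, \(L_{t}\) the average number of instructions delivered per trace hit, and \(W_{ic}\) the average number of instructions supplied per fetch from the conventional instruction cache on a miss.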
The purpose of this work is to improve the performance of a 16-bit stack processor. This processor is suitable for embedded applications. A stack processor has the advantage of low complexity, but its performance can be improved. Observing that instruction fetch consumes 53% of the execution cycles, improving instruction fetch is the primary goal of this work. The proposed scheme uses 16-bit...
In the pursuit of instruction-level parallelism, significant demands are placed on a processor's instruction delivery mechanism. Delivering the performance necessary to meet future processor execution targets requires that the performance of the instruction delivery mechanism scale with the execution core. Attaining these targets is a challenging task due to I-cache misses, branch mispredictio...
Simultaneous Multithreading (SMT) is a technique that permits multiple threads to execute in parallel within a single processor. Usually, an SMT processor uses shared instruction queues to collect instructions from the different threads. Hence, an SMT processor’s performance depends on how the instruction fetch unit fills these instruction queues. On each cycle the fetch unit must judiciously d...
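The snippet stops before describing how that per-cycle decision is made; as a hedged sketch of one well-known approach to it, an ICOUNT-style policy fetches from the threads that currently have the fewest instructions in the front-end queues (the function and variable names below are illustrative, not taken from the paper):

```python
# Hedged sketch: ICOUNT-style SMT fetch selection.
# Each cycle, prefer the threads with the fewest instructions already
# occupying the shared front-end queues, so no single thread clogs them.

def select_fetch_threads(inflight_counts, threads_per_cycle=2):
    """inflight_counts: dict thread_id -> instructions in decode/queue stages."""
    ranked = sorted(inflight_counts, key=lambda t: inflight_counts[t])
    return ranked[:threads_per_cycle]

# Example: threads 2 and 0 are least represented, so they fetch this cycle.
print(select_fetch_threads({0: 12, 1: 30, 2: 4, 3: 18}))  # [2, 0]
```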
In embedded processors, instruction fetch and decode can consume more than 40% of processor power. An instruction filter cache can be placed between the CPU core and the instruction cache to service the instruction stream. Power savings in instruction fetch result from accesses to a small cache. In this paper, we introduce a decode filter cache to provide a decoded instruction stream. On a hit in t...
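As a hedged sketch of how a small filter cache in front of the instruction cache can short-circuit fetch and decode (the structure, sizes, and field names are illustrative assumptions, not the paper's design):

```python
# Hedged sketch: a tiny direct-mapped filter cache that serves already
# decoded instructions before the larger I-cache is touched. Hits avoid
# both an I-cache access and a decode, which is where the power savings
# described above would come from.

FILTER_LINES = 16  # deliberately small so the structure stays cheap

class DecodeFilterCache:
    def __init__(self):
        self.tags = [None] * FILTER_LINES
        self.decoded = [None] * FILTER_LINES

    def fetch(self, pc, icache_fetch, decode):
        idx = (pc // 4) % FILTER_LINES
        tag = pc // (4 * FILTER_LINES)
        if self.tags[idx] == tag:        # hit: decoded instruction already cached
            return self.decoded[idx]
        raw = icache_fetch(pc)           # miss: fall back to the I-cache + decoder
        self.decoded[idx] = decode(raw)
        self.tags[idx] = tag
        return self.decoded[idx]

# Example with stand-in I-cache and decoder callables.
fc = DecodeFilterCache()
print(fc.fetch(0x40, lambda pc: pc, lambda raw: ("decoded", raw)))
```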
In designing today's mobile embedded systems such as cellular phones and PDAs, power consumption is an important design constraint. In a CMOS circuit, switching activity accounts for over 90% of total power dissipation. In this paper, we describe a method of encoding opcodes for low-power instruction fetch by reducing the switching activity from the instruction fetch logic. To reduce the swit...
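A hedged illustration of the metric such an opcode encoding targets: the number of bit toggles (Hamming distance) between consecutive opcodes on the fetch path. The two 8-bit encodings below are made up purely for illustration:

```python
# Hedged sketch: switching activity between consecutive opcodes, measured
# as the Hamming distance of their binary encodings. A low-power opcode
# assignment tries to give frequently adjacent instructions encodings
# that differ in few bits.

def toggles(a, b):
    return bin(a ^ b).count("1")

def switching_activity(opcode_stream):
    return sum(toggles(x, y) for x, y in zip(opcode_stream, opcode_stream[1:]))

# The same instruction sequence under two hypothetical 8-bit encodings:
naive     = [0b00000001, 0b11111110, 0b00000011, 0b11111100]
low_power = [0b00000001, 0b00000011, 0b00000010, 0b00000110]
print(switching_activity(naive), switching_activity(low_power))  # 23 vs 3
```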
We describe a method for supporting static branch prediction on a decoupled fetch-execute pipeline. Using instruction buffers to decouple instruction fetch from the execute pipeline is an effective way to minimize instruction cache penalties by allowing instruction fetch and stall and miss handling to proceed independently of the execution pipeline. Dynamic branch prediction is typically used with su...
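One common static scheme that a decoupled fetch unit could apply without any dynamic predictor state is backward-taken/forward-not-taken (BTFN); the sketch below is an illustrative assumption, not the mechanism of the paper above:

```python
# Hedged sketch: static backward-taken / forward-not-taken (BTFN) prediction,
# a compile-time rule a decoupled fetch unit can apply with no predictor tables.

def predict_taken(branch_pc, target_pc):
    """Backward branches (typically loops) are predicted taken; forward ones not taken."""
    return target_pc < branch_pc

def next_fetch_pc(branch_pc, target_pc, inst_size=4):
    return target_pc if predict_taken(branch_pc, target_pc) else branch_pc + inst_size

print(hex(next_fetch_pc(0x1000, 0x0F80)))  # loop back edge -> fetch continues at 0xf80
print(hex(next_fetch_pc(0x1000, 0x1040)))  # forward branch  -> fall through to 0x1004
```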
Computer system performance is highly dependent on a high access rate and a low miss rate in the instruction cache, which also have implications for the energy consumed by fetching instructions. Simulation experiments on a small scalar processor typical of embedded systems show that up to 20% of the overall processor energy is consumed in the instruction fetch path and that as much as 23% of the execut...
GPUs are massively multithreaded architectures designed to exploit data-level parallelism in applications. Instruction fetch and the memory system are two key components in the design of a GPU. In this paper we study the effect of the fetch policy and the memory system on the performance of a GPU kernel. We vary the fetch and memory scheduling policies and analyze the performance of GPU kernels. As part of...
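As a hedged sketch of what varying a fetch/scheduling policy can mean in practice, two simple warp selection rules, round-robin and greedy-then-oldest, are contrasted below (the function names and data structures are illustrative assumptions, not the paper's simulator):

```python
# Hedged sketch: two simple warp fetch/issue policies a GPU front-end
# study might compare.

def round_robin(ready_warps, last_issued):
    """Cycle through ready warps, starting after the last one issued."""
    ordered = sorted(ready_warps)
    for w in ordered:
        if w > last_issued:
            return w
    return ordered[0]  # wrap around

def greedy_then_oldest(ready_warps, current, oldest_first):
    """Keep issuing the current warp while it is ready; otherwise pick the oldest."""
    if current in ready_warps:
        return current
    return min(ready_warps, key=oldest_first.index)

print(round_robin({0, 2, 5}, last_issued=2))                          # -> 5
print(greedy_then_oldest({0, 5}, current=2, oldest_first=[5, 0, 2]))  # -> 5
```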
[Chart: number of search results per year]