Search results for: instruction cache

Number of results: 56814

2003
Alberto F. de Souza

Dynamically Trace Scheduled VLIW (DTSVLIW) machines have two execution engines, a Scheduler Engine and a VLIW Engine, and two instruction caches, an Instruction Cache and a VLIW Cache. The Scheduler Engine fetches instructions from the Instruction Cache and executes them singly, the first time, using a simple pipelined processor. In addition, it dynamically schedules the instruction trace ...
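
A rough Python sketch of the two-engine split described in this abstract: the first execution is handled singly and later visits are served from a trace cache. The names, the fixed trace length, and the scheduling policy are illustrative assumptions, not the paper's design.

    # Toy model: first execution goes through the "Scheduler Engine" path and
    # records a trace; repeated execution replays it from the VLIW cache.
    TRACE_LEN = 4                                 # assumed fixed trace length
    vliw_cache = {}                               # trace start PC -> scheduled instructions

    def run(program):
        pc, executed = 0, []
        while pc < len(program):
            if pc in vliw_cache:                  # VLIW Engine: replay a scheduled trace
                trace = vliw_cache[pc]
            else:                                 # Scheduler Engine: execute singly and
                trace = program[pc:pc + TRACE_LEN]   # schedule the trace for later reuse
                vliw_cache[pc] = list(trace)
            executed.extend(trace)
            pc += len(trace)
        return executed

    prog = ["load", "add", "mul", "store", "add", "branch"]
    run(prog)                                     # first run builds traces
    print(run(prog))                              # second run replays from the VLIW cache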

2002
Koji Inoue Vasily G. Moshnyaga Kazuaki Murakami

This paper proposes a low-energy instruction-cache architecture, called the history-based tag-comparison (HBTC) cache. The HBTC cache attempts to reuse tag-comparison results to avoid unnecessary way activation in set-associative caches. The cache records tag-comparison results in an extended BTB and reuses them to directly select only the hit way that contains the target instruction. In...
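
A minimal sketch of the way-reuse idea described here, assuming a 4-way set-associative cache and using a plain dict in place of the extended BTB; replacement and invalidation details are simplified.

    NUM_SETS, NUM_WAYS = 64, 4
    tags = [[None] * NUM_WAYS for _ in range(NUM_SETS)]   # tag array
    way_history = {}                                      # block address -> remembered hit way

    def fetch(block_addr):
        s, tag = block_addr % NUM_SETS, block_addr // NUM_SETS
        if block_addr in way_history:                     # reuse the recorded comparison result:
            w = way_history[block_addr]                   # activate only the remembered way
            if tags[s][w] == tag:
                return "hit", 1                           # one way activated -> energy saved
            del way_history[block_addr]                   # stale history: fall back to full lookup
        for w in range(NUM_WAYS):                         # normal lookup: all ways activated
            if tags[s][w] == tag:
                way_history[block_addr] = w               # record the result for next time
                return "hit", NUM_WAYS
        victim = block_addr % NUM_WAYS                    # toy replacement choice
        tags[s][victim] = tag
        way_history[block_addr] = victim
        return "miss", NUM_WAYS

    fetch(0x1234)                                         # miss, fills the line
    print(fetch(0x1234))                                  # ('hit', 1): single-way access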

1997
Hans-Joachim Stolberg Masao Ikekawa Ichiro Kuroda

Real-time operation of signal processing applications on multimedia RISC processors is often limited by high instruction cache miss rates of direct-mapped caches. In this paper, a heuristic approach is presented which reduces high instruction cache miss rates in direct-mapped caches by code positioning. The proposed algorithm rearranges functions in memory based on trace data so as to minimize ...
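
An illustrative greedy layout pass in the spirit of trace-based code positioning, assuming per-function sizes and a trace of function-to-function transitions; the paper's actual heuristic is not reproduced here.

    from collections import Counter

    def position_functions(sizes, trace):
        # Count how often each pair of functions runs back-to-back in the trace.
        pair_freq = Counter(zip(trace, trace[1:]))
        order, placed = [], set()
        # Greedily place the hottest pairs next to each other so they cannot map
        # to the same sets of a direct-mapped instruction cache.
        for (a, b), _ in pair_freq.most_common():
            for f in (a, b):
                if f not in placed:
                    order.append(f)
                    placed.add(f)
        order += [f for f in sizes if f not in placed]    # cold functions go last
        layout, addr = {}, 0
        for f in order:                                   # assign contiguous addresses
            layout[f] = addr
            addr += sizes[f]
        return layout

    sizes = {"main": 0x200, "decode": 0x400, "idct": 0x300}
    print(position_functions(sizes, ["main", "decode", "idct", "decode", "idct"]))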

2001
Vijayalakshmi Srinivasan Edward S. Davidson Gary S. Tyson Mark J. Charney Thomas R. Puzak

Instruction cache misses stall the fetch stage of the processor pipeline and hence affect instruction supply to the processor. Instruction prefetching has been proposed as a mechanism to reduce instruction cache (I-cache) misses. However, a prefetch is effective only if accurate and initiated sufficiently early to cover the miss penalty. This paper presents a new hardware-based instruction pref...
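
A small numeric sketch of the effectiveness condition stated here: a prefetch hides the miss penalty only if it is accurate and issued early enough. The cycle numbers are assumed for illustration.

    def cycles_hidden(issue_cycle, use_cycle, miss_penalty, prediction_correct):
        if not prediction_correct:
            return 0                                  # inaccurate prefetch: no benefit
        lead_time = use_cycle - issue_cycle           # how early the prefetch was launched
        return min(max(lead_time, 0), miss_penalty)   # at best the full penalty is hidden

    print(cycles_hidden(100, 130, miss_penalty=50, prediction_correct=True))  # 30: issued too late
    print(cycles_hidden(100, 180, miss_penalty=50, prediction_correct=True))  # 50: fully covered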

2007
Theodore H. Romer Dennis Lee Brian N. Bershad J. Bradley Chen

Modern microprocessors tend to use on-chip caches that are much smaller than the working set size of many interesting computations. In such situations, cache performance can be improved through selective caching: the use of cache replacement policies in which data fetched from memory, although forwarded to the CPU, is not necessarily loaded into the cache. This paper introduces a selectiv...
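
A minimal sketch of the selective-caching read path described here: a missed line is always forwarded to the requester but only allocated in the cache when a policy predicts reuse. The policy below is a deliberately trivial stand-in.

    cache, CACHE_CAPACITY = {}, 4

    def should_cache(addr, reuse_hint):
        # Stand-in policy; real proposals use profiling or hardware reuse
        # prediction rather than an explicit hint argument.
        return reuse_hint

    def load(addr, memory, reuse_hint=True):
        if addr in cache:
            return cache[addr]                       # hit: served from the cache
        line = memory[addr]                          # miss: fetched from memory
        if should_cache(addr, reuse_hint) and len(cache) < CACHE_CAPACITY:
            cache[addr] = line                       # allocate only if the policy agrees
        return line                                  # always forwarded to the CPU

    memory = {a: f"line{a}" for a in range(16)}
    load(3, memory, reuse_hint=False)                # streamed past the cache
    print(3 in cache, load(3, memory))               # False line3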

2001
Weiyu Tang Rajesh K. Gupta Alexandru Nicolau

Filter cache has been proposed as an energy saving architectural feature [9]. A filter cache is placed between the CPU and the instruction cache (I-cache) to provide the instruction stream. Energy savings result from accesses to a small cache. There is however loss of performance when instructions are not found in the filter cache. The majority of the energy savings from the filter cache in hig...
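
A toy model of the arrangement described here: a tiny filter cache is probed first and the larger I-cache only on a filter miss. The sizes and relative energy costs are made-up constants used to show where the savings and the performance loss come from.

    FILTER_LINES, E_FILTER, E_ICACHE = 8, 1, 10      # assumed relative access energies
    filter_cache = {}

    def fetch(addr):
        if addr in filter_cache:
            return E_FILTER                          # filter hit: small-cache access only
        if len(filter_cache) >= FILTER_LINES:        # crude FIFO-style eviction
            filter_cache.pop(next(iter(filter_cache)))
        filter_cache[addr] = True
        return E_FILTER + E_ICACHE                   # filter miss: extra access (and latency)

    loop = [0, 1, 2, 3] * 100                        # a small loop fits in the filter cache
    print(sum(fetch(a) for a in loop))               # 440, versus 4000 with the I-cache alone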

2007
Naeem Zafar Azeemi

Next generation multimedia mobile phones that use the high-bandwidth 3G cellular radio network consume more power. Multimedia algorithms such as speech and video transcodecs have very large instruction footprints and consequently stall due to instruction cache misses. Conflicts in on-chip caches contribute a large fraction of the CPU cycle penalty and hence increase power consumption. Ma...

Journal: Microprocessing and Microprogramming, 1993
Seong Baeg Kim Myung Soon Park Sun-Ho Park Sang Lyul Min Heonshik Shin Chong-Sang Kim Deog-Kyoon Jeong

We propose and analyze an adaptive instruction prefetch scheme, called threaded prefetching, that makes use of history information to guide the prefetching. The scheme is based on the observation that control flow paths are likely to repeat themselves. In the proposed scheme, we associate with each instruction block a number of threads that indicate the instruction blocks that have been brought i...
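
An illustrative sketch of history-guided prefetching in the spirit of the description above: each instruction block remembers which blocks followed it on earlier paths, and those are prefetched on the next visit. The paper's thread-management details are not reproduced.

    from collections import defaultdict

    threads = defaultdict(list)       # block -> blocks historically fetched after it
    prefetched, last_block = set(), None

    def access(block):
        global last_block
        hit = block in prefetched
        if last_block is not None and block not in threads[last_block]:
            threads[last_block].append(block)        # record the observed successor
        for nxt in threads[block]:
            prefetched.add(nxt)                      # prefetch historical successors
        last_block = block
        return hit

    path = ["A", "B", "C", "A", "B", "C"]            # a repeating control flow path
    print([access(b) for b in path])                 # the second pass benefits from history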

2011
Arnaldo Azevedo Ben H. H. Juurlink

In this paper we propose an instruction to accelerate software caches. While DMAs are very efficient for predictable data sets that can be fetched before they are needed, they introduce a large latency overhead for computations with unpredictable access behavior. Software caches are advantageous when the data set is not predictable but exhibits locality. However, software caches also incur a la...
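
A sketch of the plain software-cache lookup that such an instruction would aim to accelerate: every access pays for an address split, a tag compare, and a branch executed in software. Line size and table layout are assumed for illustration.

    LINE_SIZE, NUM_LINES = 128, 64
    tag_array = [None] * NUM_LINES
    data_array = [None] * NUM_LINES

    def sw_cache_read(addr, backing_store):
        line_addr = addr // LINE_SIZE
        index = line_addr % NUM_LINES
        if tag_array[index] == line_addr:            # hit path, still several instructions
            line = data_array[index]
        else:                                        # miss path: refill the line (e.g. via DMA)
            base = line_addr * LINE_SIZE
            line = backing_store[base:base + LINE_SIZE]
            tag_array[index], data_array[index] = line_addr, line
        return line[addr % LINE_SIZE]

    store = bytes(range(256)) * 64                   # 16 KiB backing buffer
    print(sw_cache_read(5, store), sw_cache_read(5, store))   # second read hits in software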

Journal: J. Inf. Sci. Eng., 2010
Jih-Ching Chiu Yu-Liang Chou Tseng-Kuei Lin

The potential performance of superscalar processors can be exploited only when the processor is fed with sufficient instruction bandwidth. The front-end units, the Instruction Stream Buffer (ISB) and the fetcher, are the key elements for achieving this goal. Current ISBs cannot support instruction streaming beyond a basic block. In x86 processors, the split-line instruction problem worsens this ...

Chart: number of search results per year
