Jackpot Stealing Information From Large Caches via Huge Pages

Authors

  • Gorka Irazoqui Apecechea
  • Thomas Eisenbarth
  • Berk Sunar
Abstract

The cloud computing infrastructure relies on virtualized servers that provide isolation across guest OSs through sandboxing. Past work demonstrated that this isolation is imperfect by exploiting hardware-level information leakage to gain access to sensitive information across co-located virtual machines (VMs). In response, virtualization companies and cloud service providers have disabled features such as deduplication to prevent such attacks. In this work, we introduce a fine-grained cross-core cache attack that exploits access time variations in the last-level cache. The attack exploits huge pages to work across VM boundaries without requiring deduplication, and no configuration changes on the victim OS are needed, making the attack quite viable. Furthermore, only machine co-location is required, while the attacker and victim can reside on different cores of the machine. Our new attack is a variation of the prime and probe cache attack, whose applicability has so far been limited to the L1 cache. In contrast, our attack works in the spirit of the flush and reload attack, but targets the shared L3 cache instead. Indeed, by adjusting to the huge page size, our attack can be customized to work at virtually any cache level/size. We demonstrate the viability of the attack by targeting the OpenSSL 1.0.1f implementation of AES. The attack recovers AES keys in the cross-VM setting on Xen 4.1 with deduplication disabled, and is only slightly less efficient than the flush and reload attack. Given that huge pages are a standard feature enabled in the memory management unit of modern OSs, and that no assumptions beyond co-location are needed, the attack we present poses a significant risk to existing cloud servers.


Similar Articles

Bringing the Web to the Network Edge: Large Caches and Satellite Distribution

In this paper we discuss the performance of a Web caching distribution where caches are interconnected through a satellite channel. Web caching is emerging as an important way to reduce client-perceived latency and network resource requirements in the Internet. A satellite distribution is being extensively deployed to offer broadcast services avoiding highly congested terrestrial links. When a ...


Web Acceleration for Electronic Commerce Applications

Response time is one key point of differentiation among electronic commerce (e-commerce) Web sites. For many e-commerce sites, Web pages are created dynamically based on the current state of a business stored in database systems. Snafus and slow-downs during special events or peak times demonstrate the challenges of engineering high-performance database-driven e-commerce Web sites. One way to achieve...


L2-Cache Miss Profiling on the p690 for a Large-scale Database Application

This paper profiles L2-cache data-load misses generated by the TPC-C benchmark executed on 8- and 32-way configurations of the IBM eServer pSeries 690 (p690). Using sampled performance monitor event traces, the resolution sites of L2-cache data-load misses are identified. To determine ways to enhance performance, the heavily hit resolution sites, L3 caches and main memory, are studied with respec...


The LSAM Proxy Cache - a Multicast Distributed Virtual Cache

The LSAM Proxy is a multicast distributed web cache that provides automated multicast push of web pages, based on self-configuring interest groups. The proxy is designed to reduce network and server load, and to provide increased client performance for associated groups of web pages, called ‘affinity groups.’ These affinity groups track the shifting popularity of web sites, such as for the Su...


Improving Performance of Large Physically Indexed Caches by Decoupling Memory Addresses from Cache Addresses

Modern CPUs often use large physically-indexed caches that are direct-mapped or have low associativities. Such caches do not interact well with virtual memory systems. An improperly placed physical page will end up in a wrong place in the cache, causing excessive conflicts with other cached pages. Page coloring has been proposed to reduce the conflict misses by carefully placing pages in the ph...




Journal:
  • IACR Cryptology ePrint Archive

Volume 2014, Issue -

Pages -

Publication date: 2014