Sustaining Moore's Law Through Inexactness

Authors

  • John Augustine
  • Krishna V. Palem
  • Parishkrati
Abstract

Inexact computing aims to compute good solutions that require considerably fewer resources – typically energy – than computing exact solutions. While inexactness is motivated by concerns derived from technology scaling and Moore’s law, there is no formal or foundational framework for reasoning about this novel approach to designing algorithms. In this work, we present a fundamental relationship between the quality of computing the value of a boolean function and the energy needed to compute it, in a mathematically rigorous and general setting. On this basis, one can study the tradeoff between the quality of the solution to a problem and the amount of energy that is consumed. We accomplish this by introducing a computational model that classifies problems based on notions of symmetry inspired by physics. We show that some problems are symmetric in that every input bit is, in a sense, equally important, while other problems display a great deal of asymmetry in the importance of input bits. We believe that our model is novel and provides a foundation for inexact computing. Building on this, we show that asymmetric problems allow us to invest resources favoring the important bits – a feature that can be leveraged to design efficient inexact algorithms. On the negative side, and in contrast, we prove that the best inexact algorithms for symmetric problems are no better than simply reducing the resource investment uniformly across all bits. Akin to classical theories concerned with space and time complexity, we believe the ability to classify problems as shown in our paper will serve as a basis for formally reasoning about the effectiveness of inexactness in the context of a range of computational problems, with energy being the primary resource.

Many believe that the exponential scaling afforded by Moore’s law [Moore, 1965] is reaching its limits as transistors approach nanometer scales. Many of these limitations stem from physical limits ranging over thermodynamics and electromagnetic noise [Kish, 2002] and optics [Ito and Okazaki, 2000]. Given that information technology is the prime beneficiary of Moore’s law, computers, memories, and related chip technologies are likely to be affected the most. Given the tremendous value of sustaining Moore’s law to information technology in a broad sense, much effort has gone into doing so, notably through innovations in materials science and electrical engineering. With this focus on information technology, a central tenet of these innovations has been to preserve the behavior of CMOS transistors and of the computing systems built from them. While many of these innovations revolve around non-traditional materials such as graphene [Novoselov et al., 2004, Chodos, 2004] supplementing or even replacing [Anthony, 2014] CMOS, exciting developments based on entirely alternate and potentially radical models of computing have also emerged. Notable examples include DNA [Adleman, 1994, Boneh et al., 1996] and quantum computing frameworks [Benioff, 1980, Feynman, 1982, Deutsch, 1985]. However, these exciting approaches and alternate models face a common and potentially steep hurdle to becoming deployable technologies, which has led to the preeminence of CMOS as the material of choice. This brings the importance of Moore’s law back to the fore, and consequently, for the foreseeable future, the centrality of CMOS to growth in information technology remains.
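As a rough, purely illustrative companion to the symmetric/asymmetric distinction described in the abstract above, the following Python sketch contrasts a symmetric function (parity) with an asymmetric one (an n-bit threshold comparison) under a toy noise model in which each input bit is read incorrectly with probability exp(-e), where e is the energy invested in that bit. The function names, the noise model, and all constants are assumptions made for this sketch; they are not the formal model developed in the paper.

```python
# Toy illustration (not the paper's model): compare uniform vs. skewed energy
# allocation for a symmetric and an asymmetric boolean function.
import math
import random

def noisy_read(bits, energies):
    """Return a copy of bits where bit i is flipped with probability exp(-energies[i])."""
    return [b ^ (random.random() < math.exp(-e)) for b, e in zip(bits, energies)]

def parity(bits):
    """Symmetric: every input bit matters equally; flipping any single bit flips the output."""
    return sum(bits) % 2

def threshold(bits, t=150):
    """Asymmetric: is the number encoded by bits (MSB first) at least t?
    High-order bits influence the answer far more than low-order ones."""
    n = len(bits)
    value = sum(b << (n - 1 - i) for i, b in enumerate(bits))
    return int(value >= t)

def success_rate(f, energies, n, trials=20000):
    """Monte-Carlo estimate of Pr[f(noisy reading of x) == f(x)] over random inputs x."""
    ok = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        ok += f(noisy_read(x, energies)) == f(x)
    return ok / trials

if __name__ == "__main__":
    n, budget = 8, 16.0                        # total energy budget spread over n bits
    uniform = [budget / n] * n                 # invest equally in every bit
    weights = list(range(n, 0, -1))            # favour high-order ("important") bits
    skewed = [budget * w / sum(weights) for w in weights]

    for name, f in [("parity (symmetric)    ", parity), ("threshold (asymmetric)", threshold)]:
        print(name,
              "uniform:", round(success_rate(f, uniform, n), 3),
              "skewed:", round(success_rate(f, skewed, n), 3))
```

Under this assumed model, skewing the fixed energy budget toward the high-order bits tends to help the threshold comparison but not parity, which is the qualitative behavior the abstract describes for asymmetric versus symmetric problems.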
The central theme of this paper is to develop a coherent theoretical foundation with the goal of reconciling these competing concerns. On the one hand, continuing with CMOS-centric systems is widely believed to result in hardware that is likely to be erroneous or to function incorrectly in part. On the other hand, dating back to the days of Alan Turing [1936], and as explicitly tackled by von Neumann [1956], a computer — the ubiquitous vehicle of information technology — carries an unstated expectation that it has to function correctly. This expectation that computers always function correctly is at the very heart of the alarm about the doomsday scenario associated with the end of Moore’s law. For if one could use computers with faulty components as they are, with concomitant but acceptable errors in the computation, we could continue to use CMOS transistors, albeit operating in a potentially unreliable regime. Over the past decade, this unorthodox approach, in which a computer and related hardware such as memory are built out of faulty components and used in this potentially faulty mode, referred to as inexact computing, has emerged as a viable way of coping with the Moore’s law cliff. Palem and Lingamneni [2013] and Palem [2014] (and references therein) provide a reasonable overview of inexact computing practice.

At its core, the counterintuitive thesis behind inexactness is that, perhaps surprisingly, working with faulty components can in fact result in computing systems that are thermodynamically more efficient [Palem, 2003a,b, Korkmaz et al., 2006]. This approach simultaneously addresses another hurdle facing the sustenance of Moore’s law. Often referred to as the energy wall or power wall, energy dissipation has reached such prohibitive levels that coping with it is the predominant concern in building computer systems today. For example, to quote from a New York Times article [Markoff, 2015] about the potential afforded by inexact computing: “If such a computer were built in today’s technologies, a so-called exascale computer would consume electricity equivalent to 200,000 homes and might cost $20 million or more to operate.”

While individual technological artifacts demonstrating the viability of inexact computing may be many, a coherent understanding of how to design algorithms — essential to using inexact computing at large scale — and of the inherent limits to the power of this idea is lacking. Such characterizations are typically the purview of theoretical computer science, where questions of designing efficient algorithms, and of the inherent limits to being able to do so, are studied. While algorithm design is concerned with finding efficient ways of solving problems, inherent limits allow us to understand what is not possible under any circumstance within the context of a mathematically well-defined model. Understanding what is inherent to computing in abstract terms has been a significant part of these enquiries, and has evolved into the field referred to as computational complexity [Arora and Barak, 2009, Moore and Mertens, 2011]. In this paper, we present a computational complexity theoretic foundation for characterizing inexactness, to the best of our knowledge for the first time. As in classical complexity theory, the atomic object at the heart of our foundation is a bit of information.
However, inexactness allows something entirely novel: each bit is characterized by two attributes or dimensions, a cost and a quality. Historically, physical energy was the cost and the probability of correctness was the quality [Palem, 2003b, 2005]. As shown in Figure 1 (originally reported in [Korkmaz et al., 2006]), under this interpretation, a cost versus quality relationship was measured in the context of physically constructed CMOS gates. More recently, Frustaci et al. [2015] have presented a voltage-scaled SRAM along with a characterization of the energy/error tradeoff, where, unsurprisingly, we see that the bitcell error rate (BER) drops exponentially as Vdd increases.
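The following minimal sketch mirrors the per-bit cost/quality picture described above, assuming an illustrative exponential error-versus-voltage relation and the standard quadratic dependence of dynamic switching energy on supply voltage. The constants K and C_EFF are arbitrary assumptions for this sketch and are not fitted to the measurements of Korkmaz et al. [2006] or Frustaci et al. [2015].

```python
# Illustrative per-bit cost/quality tradeoff (assumed model, arbitrary constants).
import math

K = 12.0      # assumed steepness of the exponential error-vs-voltage relation
C_EFF = 1.0   # assumed effective switching capacitance, arbitrary units

def bit_error_rate(vdd):
    """Illustrative model: per-bit error probability falls exponentially as Vdd rises."""
    return math.exp(-K * vdd)

def switching_energy(vdd):
    """Dynamic switching energy per bit grows roughly as C * Vdd^2."""
    return C_EFF * vdd ** 2

if __name__ == "__main__":
    print(f"{'Vdd':>5} {'energy/bit':>12} {'bit error rate':>16}")
    for vdd in (0.4, 0.6, 0.8, 1.0, 1.2):
        print(f"{vdd:>5.1f} {switching_energy(vdd):>12.3f} {bit_error_rate(vdd):>16.2e}")
```

Under this assumed model, lowering Vdd buys a roughly quadratic energy saving per bit at the price of an exponentially growing error probability, which is the shape of tradeoff that makes selectively spending energy on the important bits attractive.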

Similar articles

Sustaining Moore's law and the US economy

COMPUTING IN SCIENCE & ENGINEERING Semiconductors are indispensable to the modern economy. The semiconductor— through its synchronous increase in power and decline in price—has contributed significantly to the United States’ economic growth over the past decade.1,2 The technological advances spurred by this regular and predictable rate of growth in power (as described by Moore’s law) have enhan...

Chips, Architectures and Algorithms: Reflections on the Exponential Growth of Digital Signal Processing Capability

The celebrated 1965 prediction by Gordon Moore regarding exponential improvements in integrated circuit density is so widely known, and has proven so accurate, that it has been elevated to the status of a “law”. Less appreciated is the fact that many areas of computation have benefited equally from progress in algorithms. In this paper we compare and contrast the contributions to progress in si...

Reversible Evolvable Networks: A Reversible Evolvable Boolean Network Architecture and Methodology to Overcome the Heat Generation Problem in Molecular Scale Brain Building

Today’s irreversible computing style, in which bits of information are routinely wiped out (e.g. a NAND gate has 2 input bits, and only 1 output bit), cannot continue. If Moore's Law remains valid until 2020, as many commentators think, then the heat generated in molecular scale circuits that Moore's Law will provide, would be so intense that they will explode [Hall 1992]. To avoid such heat ge...


Journal:
  • CoRR

Volume: abs/1705.01497  Issue:

Pages: -

Publication date: 2017