Philosophical Issues in Computer Science

Authors

  • Ralph Kopperman
  • Steve Matthews
  • Homeira Pajoohesh
Abstract

The traditional overlap between computer science and philosophy centres upon the question of the sense in which a computer may be said to think. A lesser-known issue of potential common interest is the ontology of the semantics computer science assigns to programming languages such as Pascal or Java. We argue that the usual purely mathematical approach of denotational semantics necessarily excludes important semantic notions such as program reliability. By studying the ontology of determinism versus non-determinism we deduce the presence of vagueness in programming language semantics, which we then compare to the Sorites paradox. Parallels are drawn between Williamson’s treatment of the paradox using ignorance and generalised metric topology.

Introduction

Computer science is now well established as an academic discipline in universities throughout the world. Often a separate department will exist specifically for the large number of undergraduates who wish to major in the subject they loosely term ‘computers’. The power which computers bring both to the individual and to society to change the world, for better or for worse, is self-evident. What is less clear is the extent to which the use of computers changes our comprehension of the world. While internet communication makes the world appear smaller, it does not mean we have created a new world as such. However, popular culture can clearly visualise the existence of inorganic life forms such as Commander Data in Star Trek. If there is no conceptual problem with android life forms, can we, within the academic discipline of computer science, not give substance to the notion of a computer existing? Usually this question is reduced to: can a computer think? If so, it is considered to exist; otherwise not. But this begs the question: what is intelligence? Interesting as that question is, it does not address notions of computer existence which, while successfully employed in computer science, bear no obvious relation to intelligence.

Due to Gödel’s incompleteness theorems, computers necessarily have to process incomplete data, giving rise to consideration of the semantic concept of partial information. Mathematics before the 1960s had no ready-made answers on how to model information which, in a well-defined sense, had bits missing. Computer scientists wishing to understand the semantics of programming languages had to find ways to denote information which could, quite legitimately and usefully, be incomplete. An analogy can be drawn from mathematics, where there are many applications requiring a number i such that i² = −1. As clearly no such real number exists, we creatively extend the real numbers to include the appropriately termed imaginary numbers, one of which is i = √−1. The mathematical solution for denoting partiality accords a pivotal role to nothing, that is, information totally devoid of all content. In what sense, then, can nothing be said to exist? Posed as an oxymoron: what is the sound of silence? Fortunately we have available to us the inspired mathematics developed largely by one person, Dana Scott, to resolve the potential logical paradox of self-reference embodied in any non-trivial deterministic programming language. Roughly speaking, a computer program is deterministic if each time it is executed the same output necessarily results. What are the implicit philosophical premises about the nature of information assumed in Scott’s ground-breaking technical work?
And what do these premises tell us about why his work has great difficulty in generalising to non-deterministic programs, those where one execution may legitimately yield different results from another? In this paper we take a non-deterministic program to be one where, at one or more points in the program’s execution, the computer may choose any one from a finite set of commands specified in the program to execute next. A non-deterministic program thus specifies a possibly infinite number of possible execution sequences. The work in this paper is an attempt to identify the key premises about the fundamental nature of information underpinning Scott’s work, and to demonstrate how computer science has successfully generalised the ontology of information inherited from mathematics. This paper is intended to promote research into ontological backdrops for existing practical work on the epistemology of reasoning, knowledge representation, and knowledge acquisition using computers. A clearer understanding of such ontology could make existing highly technical work on the semantics of programming languages more accessible to programmers. For philosophy, this work is intended to promote debate upon how we might ultimately develop a theory of everything to reconcile the essence of mathematics with that of computer science.

Premises for partiality

We now attempt to identify the implicit premises in Scott’s approach to modelling partial information in the semantics of programming languages.

Premise 1: programming language semantics is certain knowledge. Assigning a semantics to a program is the problem of determining and expressing all of its certain properties, these being properties which are provable in an appropriate logic of programs. By implication, there is no room here for approximation, lack of clarity, or ignorance of program properties. A program is understood to be exactly what it is, not part of what it is, nor what it might be.

Premise 2: a program’s semantics is mathematically denotable. This premise asserts that a mathematical model can be constructed in which each program is assigned a single value of that model as its meaning.

Premise 3: the semantics of a program is its observable behaviour. The semantics of a deterministic program is the totality of effects which can be observed and recorded (by a human or another program) in its one, and only possible, execution sequence. At the risk of using an anthropomorphism, we may say that a deterministic program is what it does. In contrast, a non-deterministic program may, when executed, produce one of many possible execution sequences. What, then, may we say a non-deterministic program does?

Attempts have been made to generalise Scott’s approach to non-deterministic programs, so providing, in accordance with premise 2, a denotation for non-determinism. By premise 1 we require certain knowledge of all properties of each non-deterministic program, so the meaning of a non-deterministic program has to entail all the certain properties of each and every possible execution sequence. Thus non-determinism, it is reasoned by some, can be captured as a set of determinisms, a determinism being the meaning assigned to a deterministic program. And so, by premise 2, we need a mathematical model in which a value is a set of determinisms. If S is the set of all possible determinisms for deterministic programs, then all we should need is a so-called power domain of subsets of S, each such subset being the meaning assigned to a non-deterministic program.
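To make the set-of-determinisms reading concrete, here is a minimal sketch of our own (not taken from the paper), written in Turbo/Free Pascal style, of a program with a single choice point. The call to Random merely stands in for the machine’s free choice; pseudo-randomness is of course only an approximation of genuine non-determinism.

  program NonDetSketch;
  { A non-deterministic program with one choice point.  Random only    }
  { stands in for the machine's free choice; pseudo-randomness is not  }
  { claimed to be genuine non-determinism.                             }
  begin
    Randomize;
    if Random(2) = 0 then      { the computer chooses one of two commands }
      writeln('heads')
    else
      writeln('tails')
    { Under the set-of-determinisms reading, the meaning of this       }
    { program is the set of its two possible deterministic behaviours: }
    { the execution that prints heads and the one that prints tails.   }
  end.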
The power domain approach thus presumes non-determinism to be a set of determinisms: non-determinism is defined in terms of the primitive notion of determinism, and as such is a kind of, we may say, multiple determinism. Process calculi such as Milner’s Calculus of Communicating Systems (CCS) [Mi89] have instead presumed non-determinism to be a primitive notion, rather than a derivative of determinism. The process P + Q (pronounced P or Q) is the process which can either behave like process P or behave like process Q; ‘+’ is introduced as a primitive notion of choice. While the program may specify what choices there are, just as a coin has the sides heads and tails, it is the computer executing the program that makes the choice, as in the tossing of the coin. CCS thus reverses the approach of power domains, asserting choice to be a core primitive notion and determinism to be a process having just one choice. In other words, determinism is a choice of one. And so, in accordance with premise 2, what is the denotation of choice? An equivalence relation, termed a bisimulation (pronounced bi-simulation), is introduced on processes whose semantics are to be regarded as logically equivalent. (A binary relation ≡ over a set A is an equivalence if, for all a, b, c in A: a ≡ a; if a ≡ b then b ≡ a; and if a ≡ b and b ≡ c then a ≡ c. The equivalence class of a is the set of all those members of A equivalent to a.) The denotation of a process is its equivalence class.
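By contrast, the following sketch of our own (an informal Pascal analogue, not CCS itself, again in Turbo/Free Pascal style) treats choice as the primitive notion: the listed branches are what the process can do, and the single branch taken on a given run is what it does.

  program ChoiceSketch;
  { An informal analogue of the CCS process P + Q with choice taken    }
  { as primitive.  The menu of branches is what the process can do;    }
  { the branch the machine picks is what it does on this run.  Random  }
  { again only stands in for the machine's free choice.                }
  begin
    writeln('can do: behave like P');    { the choices the process offers }
    writeln('can do: behave like Q');
    Randomize;
    case Random(2) of                    { the machine makes the choice }
      0: writeln('does: behave like P');
      1: writeln('does: behave like Q')
    end
    { A deterministic process is then a choice of one branch. }
  end.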
Premise 4: the semantics of a program is precisely that information which can be discovered by means of Turing computation. This premise asserts that we can know whatever the computer can reveal to us by way of (Turing) computation, but no more. It is thus, by presumption, not permitted to enhance our understanding of the program’s meaning by use of clever reasoning which would go beyond the Church-Turing hypothesis, universally accepted in computer science as the definition of what is computable. The semantics of a program is thus necessarily taken to be precisely the computation(s) it describes. For a deterministic program we observe what it does, as this word was used earlier, in its one and only possible computation. For a non-deterministic program, such as a process in CCS, we need to observe both what choices are available and what each such choice (when chosen) does. The required notion of observation is thus twofold: what a process can do and what it does. (The terms does and can do as used in this paper are common parlance in computer science; they have no anthropomorphic connotations, referring merely to the underlying capabilities of the computer system used to execute programs.) As a deterministic program can only do what it does, the notion of can do is safely dropped from consideration in its semantics.

Premise 5: nothing is observable. The problem of how to assign a denotation to a non-terminating loop in a deterministic program written in a typical Pascal-like language loosely corresponds to the problem of assigning a semantics to recursive function theory, which in turn is similar to the problem of how to demonstrate the logical consistency of self-applications such as x(x) in the lambda calculus. The initial and instrumental technical concept in Scott’s work is to utilise the notion of ⊥ (pronounced bottom). For Pascal-like languages, ⊥ can be understood as a non-terminating program which, while remaining alive, never progresses in any observable sense of producing further output. For example, the following Pascal code will run forever, but never produce any output.

  while true do begin end;
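Extending that example with a sketch of our own (not from the paper, Turbo/Free Pascal style) makes the partiality visible: for one input the program produces an observable answer, while for every other input it enters the same non-terminating loop, so its observable behaviour on those inputs is exactly the nothing denoted by ⊥.

  program PartialSketch;
  { For input 0 the program terminates with observable output; for    }
  { any other input it loops forever, remaining alive but producing   }
  { nothing observable.  A denotation of its meaning must therefore   }
  { record bottom alongside the ordinary answer.                      }
  var
    n : integer;
  begin
    readln(n);
    if n = 0 then
      writeln('done')
    else
      while true do begin end    { alive, yet observably nothing: bottom }
  end.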

Publication date: 2004