The Margin-for-Error Principle Revised

Author

  • Julien Dutant
Abstract

Williamson’s anti-luminosity argument purports to show that principle KK is incompatible with a plausible Margin-for-Error requirement for inexact knowledge. In this paper I advocate an alternative conception of the requirement which blocks the anti-luminosity argument, and I argue that the revised principle provides a better account of inexact and higher-order knowledge.

Williamson (2000) has argued that the claim that one is always in a position to know that one knows (henceforth, principle KK) is incompatible with a plausible margin-for-error requirement for inexact knowledge. The requirement is based on the appealing claim that knowledge requires one’s belief to be safely true. But if Williamson is right, it leads to the unexpectedly strong conclusion that it is impossible for any creature to reach arbitrarily high orders of knowledge with respect to any item of inexact knowledge. Several authors have responded by rejecting safety altogether.[1] By contrast, I will rely here on a conception of inexact knowledge advocated by Halpern (2004) to formulate a revised Margin-for-Error principle that both preserves the safety intuitions and blocks Williamson’s anti-luminosity argument.

Williamson’s argument is restated in section 1. The revised principle is presented and shown to block the argument in section 2. Section 3 spells out the infallibilist conception of knowledge implicit in the revision, and applies it to higher-order knowledge in order to formulate a condition for (KK) to hold. Finally, section 4 offers two arguments in favour of the new account.

1. Williamson’s anti-luminosity argument

Mr Magoo wonders how tall a certain tree is. Magoo’s estimations of the tree’s height are imperfect, but they allow him to gain some knowledge of the tree’s height, for instance that it is not 1000 inches high. Let us assume that Magoo is liable to make errors of plus or minus 1 inch about the tree’s height. (For simplicity, I will abbreviate “the tree is i inches tall” by “the tree is i”.) According to Williamson (2000), Magoo cannot know of a tree of i that it is not i+1, because he could easily mistake a tree of i+1 for a tree of i. Thus Magoo’s knowledge is subject to the following Margin-for-Error requirement:

(WM) For all i, if the tree is i+1, Magoo does not know that the tree is not i.

Williamson shows that if Magoo knows (WM), as seems possible, principle (KK) leads to a contradiction.[2] Let the tree be 100. Magoo knows that it is not 0. By (KK), he knows that he knows it. Since he knows (WM), he knows that, if he knows it, the tree is not 1. By inference, he knows that the tree is not 1. By reiterating those steps, he knows that the tree is not 100. But that is impossible, since knowledge is factive and the tree is 100.

[1] For instance Brueckner and Fiocco (2002).
[2] All the conditionals in the paper are material implications.

The case involves several simplifications: Magoo’s inaccuracy is itself precise and constant across heights, and he knows it accurately. However, the argument would go through if we assumed instead that Magoo’s inaccuracy was always substantially more than a tenth of an inch, and that Magoo knew that it is at least a tenth of an inch.

The argument assumes that Magoo has some inexact knowledge (namely, that the tree is not 0), that knowledge is factive, and that a restricted version of epistemic closure holds. I do not think it promising to challenge those assumptions, so I will not discuss them any further.
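To make the iteration concrete, here is a minimal executable sketch, not from the paper: it models Williamson’s conception in Python, with cases as exact heights and knowledge requiring truth throughout the 1-inch margin. The names HEIGHTS, MARGIN and K are hypothetical illustrations.

```python
# A toy model of the fixed-margin conception (illustrative names; not
# from the paper). Cases are exact heights; knowing p at height h
# requires p to hold at every height within the 1-inch margin.

HEIGHTS = range(0, 201)
MARGIN = 1

def K(p):
    """Heights at which the proposition p (a set of heights) is known:
    p must hold throughout the margin of error."""
    return {h for h in HEIGHTS
            if all(h2 in p for h2 in HEIGHTS if abs(h - h2) <= MARGIN)}

# Start from Magoo's inexact knowledge: the tree is not 0.
p = {h for h in HEIGHTS if h != 0}

# Each round of (KK) + known (WM) + closure replaces "Magoo knows p"
# with "Magoo knows that he knows p", which on this model shaves one
# inch off the known proposition.
for _ in range(100):
    p = K(p)

print(100 in p)  # False: the conclusion contradicts factivity at 100.
```

Each application of K strips exactly one inch: after n rounds only heights above n remain, so a hundred rounds reproduce the step-by-step reasoning above and reach the tree’s actual height.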
The remaining options are to reject (KK), to reject (WM), or to deny Magoo’s knowledge of (WM). Williamson rejects (KK).[3] The consequence is that even an idealized Magoo could not reach arbitrarily high orders of knowledge that the tree is not 0.[4]

[3] Dokic and Egré’s (2004) solution involves distinguishing perceptive knowledge, for which (WM) holds but not (KK), from reflective knowledge, for which (KK) holds but not (WM). Bonnay and Egré (2006) advocate a non-standard semantics for “knows” in which (WM) holds but cannot be known.
[4] Unless one makes the ad hoc move of restricting (KK) to a subset of the propositions about the tree’s height.

2. The revised principle

Williamson (2000, p. 115) introduces (WM) in the following passage:

   To know that the tree is i inches tall, Mr Magoo would have to judge that it is i inches tall; but even if he so judges and in fact the tree is i inches tall, he is merely guessing; for all he knows it is really i-1 or i+1 inches tall. He does not know that it is not. Equally, if the tree is i-1 or i+1 inches tall, he does not know that it is not i inches tall.

The last sentence states (WM). But the first states a different principle, which I shall now present.

In Williamson’s terminology, a “case” is individuated by the objective facts, such as the tree’s being 100. Halpern (2004) advocates an alternative framework in which cases are individuated by the objective facts together with the subject’s state. For instance, given Magoo’s limited powers of discrimination, there are three possible cases in which the tree is 100:

1. The tree is 100 and Magoo estimates it as being 99.
2. The tree is 100 and Magoo estimates it as being 100.
3. The tree is 100 and Magoo estimates it as being 101.

Conversely, if Magoo estimates the tree as being 100, it might really be 99, 100 or 101. Hence if he judged that the tree is not 101, he would not be safe from error. That suggests a revised Margin-for-Error requirement:

(RM) For all i, if Magoo estimates the tree as being i, he does not know that it is not i+1.

In some respects, (RM) is stronger than (WM). Suppose Magoo estimates the tree as being 100 while it is really 99. (WM) does not rule out Magoo’s knowing that it is not 101. But (RM) does, because Magoo could have made the same estimation if the tree were 101.

We can go further and state a sufficient condition for Magoo’s knowledge. If Magoo estimates the tree as being i, then it cannot be greater than i+1. Thus he is safe from error if he judges that it is not:

(SC) If Magoo estimates the tree as being i, then for any j greater than i+1, Magoo knows that the tree is not j.

If (SC) holds, Williamson’s (WM) is false. Suppose Magoo estimates the tree as being 99 while it is in fact 100. By (SC), he knows that it is not 101. So it is possible for Magoo to know that the tree is not 101 while it is 100. In that respect, (RM) is weaker than (WM).

To see what goes on, consider the following conditionals:

(a) If the tree is 100, Magoo knows that it is not 101.
(b) If the tree is 100, Magoo does not know that it is not 101.

Williamson’s framework makes it seem that either (a) or (b) must be true. Since (a) credits Magoo with exact knowledge, Williamson holds (b), that is, (WM). But on the present framework, their truth-values depend on Magoo’s estimations. Thus:

(c) If the tree is 100 and Magoo estimates it as being 99, he knows that it is not 101.
(d) If the tree is 100 and Magoo estimates it as being 100 or 101, he does not know that it is not 101.

(The sketch below illustrates (c), (d) and (SC) in a toy model.)
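Here is a companion sketch of the revised framework, again hypothetical rather than from the paper: a case is a (height, estimate) pair within the 1-inch margin, and Magoo knows p in a case just in case p holds in every case sharing his estimate. The names CASES and knows are illustrative.

```python
# A toy model of the revised framework (illustrative names; not from
# the paper). A case pairs the tree's height with Magoo's estimate.

HEIGHTS = range(0, 201)
CASES = {(h, e) for h in HEIGHTS for e in HEIGHTS if abs(h - e) <= 1}

def knows(case, p):
    """p is a predicate on heights; it is known in a case iff it holds
    at every height compatible with the estimate made in that case."""
    _, e = case
    return all(p(h) for (h, e2) in CASES if e2 == e)

not_101 = lambda h: h != 101

# (c), the crucial case (100, E99): Magoo knows the tree is not 101.
print(knows((100, 99), not_101))   # True

# (d): estimating 100 or 101 leaves height 101 open, so no knowledge,
# exactly as (RM) requires.
print(knows((100, 100), not_101))  # False
print(knows((100, 101), not_101))  # False

# (SC): estimating i yields knowledge that the tree is not j for any
# j greater than i + 1; e.g. estimating 99 rules out 102.
print(knows((99, 99), lambda h: h != 102))  # True
```

More generally, for any i below the top of the scale the case (i+1, i) shares the estimate made in (i, i), so the model validates (RM) across the board while still verifying (c).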
Now one can see how (c) blocks Williamson’s anti-luminosity argument. Suppose Magoo estimates the tree as being 99. By (SC), he knows that it is not 101. By (KK), he knows that he knows so. However, his knowing that the tree is not 101 does not entail that the tree is not 100. So Magoo cannot infer that it is not 100. The sorites-like series cannot get off the ground.

The two conceptions are contrasted in the following schemas, where the columns give the tree’s possible heights, Ei stands for “Magoo estimates the tree as being i”, p is the proposition that the tree is not 101, and Kp the proposition that Magoo knows p. Empty cells are impossible cases.

Williamson’s MFE:

          99     100    101
  E101           ¬Kp    ¬Kp
  E100    Kp     ¬Kp    ¬Kp
  E99     Kp     ¬Kp
          p      p      ¬p

Revised MFE:

          99     100    101
  E101           ¬Kp    ¬Kp
  E100    ¬Kp    ¬Kp    ¬Kp
  E99     Kp     Kp
          p      p      ¬p

The crucial case that blocks Williamson’s argument is (100, E99).[5]

[5] Halpern (2004) rejects Williamson’s (MFE) for substantially the same reasons.

3. Higher-order knowledge

The conception of knowledge implicit in the revised principle can be generalized. Let p be a basic contingent proposition, such as “the tree is i”, and let E be the p-relevant estimation made by a subject S:

(IK) S knows that p iff necessarily, if S makes estimation E, then p is true.

That is infallibilism. Scepticism threatens: surely, a mad scientist could have Magoo estimating that the tree is 100 while there is no tree. However, that can be avoided in several ways: by individuating estimations broadly, by restricting the necessity operator in the contextualist’s way (remote possibilities are ignored in common contexts), or by restricting it in the subject-sensitive invariantist’s way (remote possibilities are not genuine in S’s particular situation). The details do not matter here.

Estimations are subjective states that S could be in only if certain facts obtain. They roughly correspond to Lewis’s (1996) notion of evidence. Thus when I say that an estimation E is incompatible with p, I do not mean that E’s content implies not-p, but that S’s being in E implies not-p. I need not specify estimations further here: they can be experiences, judgements about the way things look, or beliefs.

(IK) generalizes to higher-order knowledge if we allow p to range over higher-order propositions and S to make higher-order estimations, that is, beliefs that one is in some lower-order state or has some lower-order knowledge. Suppose Magoo estimates that he estimates that the tree is 100. Then, by (IK):

(e) Magoo knows that he estimates the tree as being 100 iff necessarily, if he estimates that he estimates so, he does estimate so.

(f) Magoo knows that he knows that the tree is not 98 iff necessarily, if he estimates that he estimates that the tree is 100, he knows that the tree is not 98.

And similarly for higher orders. (Strange things happen if one’s higher-order estimations are more sensitive to facts than lower-order ones, but let us assume that they are not.)

Consequently, (KK) holds between two orders only if Magoo’s higher-order estimation of his lower-order state is perfect. Assume Magoo is liable to make errors of 1 about his lower-order estimations, which is likely if they are experiences. Suppose he estimates the tree as being 100, so he knows that it is not 98. He also rightly estimates that he estimates so, but that higher-order estimation is compatible with his estimating the tree as being 99, in which case he would not have known that the tree is not 98. So he does not know that he knows it. (The sketch below models this failure of (KK).)
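The following sketch, also hypothetical and not from the paper, models this failure of (KK) under (IK): cases now record the height, Magoo’s first-order estimate e1 and his second-order estimate e2, with e2 liable to an error of 1 about e1, as assumed in the text.

```python
# A toy model of higher-order knowledge under (IK) (illustrative names;
# not from the paper). e1 estimates the height; e2 estimates e1.

HEIGHTS = range(0, 201)
CASES = {(h, e1, e2)
         for h in HEIGHTS
         for e1 in (h - 1, h, h + 1) if e1 in HEIGHTS
         for e2 in (e1 - 1, e1, e1 + 1) if e2 in HEIGHTS}

def knows(case, p):
    """First order: p holds at every height compatible with e1."""
    _, e1, _ = case
    return all(p(h) for (h, f1, _) in CASES if f1 == e1)

def knows_knows(case, p):
    """Second order, by (IK): Magoo knows p in every case compatible
    with his higher-order estimation e2."""
    _, _, e2 = case
    return all(knows(c, p) for c in CASES if c[2] == e2)

not_98 = lambda h: h != 98

# Tree is 100; Magoo estimates 100 and rightly estimates that he does.
case = (100, 100, 100)
print(knows(case, not_98))        # True: e1 = 100 leaves 99-101 open.
print(knows_knows(case, not_98))  # False: e2 = 100 is compatible with
                                  # e1 = 99, which leaves 98 open.
```

Making the higher-order estimate track the lower-order one perfectly (e2 = e1 in every case) restores (KK) between the two orders, which is exactly the condition stated above.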
The revised principle thus leaves it open whether (KK) holds between any two orders. For instance, it is compatible with the following attractive view: from a certain order n on, the very same cognitive state of ours implements our estimation that p, our estimation that we estimate that p, and so on. That is reflected in our inability to distinguish those orders. But by the same token, we cannot fail to estimate that we estimate that p when we estimate that p, so (KK) holds from n on. By contrast, we do distinguish the first orders, but we are liable to make errors about our own states at those orders, so (KK) fails to hold there. On that view, if the tree is 100, then the closer i is to 100, the less likely Magoo is to know that he knows that the tree is not i; nevertheless, he has arbitrarily high orders of knowledge that it is not 0.

4. Two arguments for the revision

There are two significant reasons to prefer the revised account over Williamson’s.

First, (WM) implies that inexact knowledge states are strictly more informative than their content. For instance, if Magoo knows that the tree is between 99 and 101, then the tree is neither 99 nor 101. So the information carried by Magoo’s knowledge state is richer than what Magoo knows. That is the deep reason for (KK)’s failure: additional information is required for Magoo to know that he knows it. On the revised conception, knowledge states are just as informative as their content: Magoo’s knowing that the tree is between 99 and 101 only indicates that the tree is between 99 and 101. That is welcome, since there is no principled reason why the information carried by a subject’s state should be partly hidden from her.

Second, (WM) implies that subjects infer their own states from the state of the world, in spite of any direct information they might have about those states. Consider why (WM) rules out Magoo’s knowing that he knows that the tree is not 101 when it is actually 100 and he estimates it as being 99. For all Magoo knows, the tree might be 100. But if it were 100, Magoo might be estimating it as being 101. Hence, the reasoning goes, for all Magoo knows, he might be estimating it as being 101. (Therefore, for all he knows, it might be 101.) But that move is unacceptable if Magoo is aware that he does not estimate it as being 101. So (WM) is cogent only if Magoo has no direct information about his own states. According to (RM), by contrast, the cases that are epistemically accessible from a given estimation are only the cases in which Magoo makes the same estimation. For instance, if Magoo estimates that he estimates the tree as being 99, and would not do so if he estimated it as being 101, it is false that for all he knows, he might be estimating it as being 101. Thus Magoo can have direct information about his own states.

5. Conclusion

I have defended a revision of Williamson’s (2000) Margin-for-Error principle for inexact knowledge that preserves the safety intuitions while avoiding the implausible consequence that no possible creature can satisfy (KK) with respect to any piece of inexact knowledge. The revised principle provides a more flexible account of higher-order knowledge, allowing (KK) to hold or fail between any two orders, and I have formulated the condition under which it holds. I have also argued that the revised principle avoids two problematic consequences of Williamson’s account: namely, that some of the information carried by knowledge states is necessarily hidden, and that subjects cannot rely on direct information about their own states.
References

Bonnay, D. and Egré, P. 2006. ‘A non-standard semantics for inexact knowledge with introspection’. In S. Artemov and R. Parikh, eds, Proceedings of the ESSLLI 2006 Workshop on Rationality and Knowledge. Malaga.

Brueckner, A. and Fiocco, M. O. 2002. ‘Williamson’s Anti-Luminosity Argument’. Philosophical Studies 110: 285–293.

Dokic, J. and Egré, P. 2004. ‘Margin for error and the transparency of knowledge’. Under submission.

Halpern, J. Y. 2004. ‘Intransitivity and vagueness’. In D. Dubois, Ch. Welty and M.-A. Williams, eds, Principles of Knowledge Representation and Reasoning: Proceedings of the Ninth International Conference (KR2004). Menlo Park, CA: The AAAI Press.

Lewis, D. K. 1996. ‘Elusive knowledge’. Australasian Journal of Philosophy 74: 549–567.

Weatherson, B. 2004. ‘Luminous margins’. Australasian Journal of Philosophy 82: 373–383.

Williamson, T. 2000. Knowledge and its Limits. Oxford: Oxford University Press.
