Testing Multiple Forecasters∗

Authors

  • Yossi Feinberg
  • Colin Stewart
Abstract

We consider a cross-calibration test of predictions by multiple potential experts in a stochastic environment. This test checks whether each expert is calibrated conditional on the predictions made by other experts. We show that this test is good in the sense that a true expert—one informed of the true distribution of the process—is guaranteed to pass the test no matter what the other potential experts do, and false experts will fail the test on all but a small (category one) set of true distributions. Furthermore, even when there is no true expert present, a test similar to cross-calibration cannot be simultaneously manipulated by multiple false experts, but at the cost of failing some true experts.

∗We wish to thank Nabil Al-Najjar, Brendan Beare, Dean Foster, Sergiu Hart, Stephen Morris, Wojciech Olszewski, Alvaro Sandroni, Jakub Steiner, and Jonathan Weinstein for helpful comments and suggestions. The first author gratefully acknowledges the support of NSF grant IIS-0205633 and the hospitality of the Institute for Advanced Studies at the Hebrew University.
†email: [email protected]
‡email: [email protected]
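The cross-calibration idea can be illustrated with a toy sketch. This is not the paper's formal test (which is defined over infinite histories with suitable tolerances); it is a minimal finite-sample illustration, assuming binary outcomes, two forecasters, and forecasts bucketed to a one-decimal grid. The function name and bucketing scheme are illustrative choices, not from the paper.

```python
from collections import defaultdict

def cross_calibration_scores(forecasts_a, forecasts_b, outcomes):
    """For each joint forecast cell (p_a, p_b), compare each expert's
    stated probability with the empirical frequency of the outcome on
    the periods where that joint cell occurred.

    Returns {(p_a, p_b): (error_a, error_b, n_periods)}, where a small
    error for an expert means that expert was well calibrated
    conditional on the other expert's prediction in that cell.
    """
    cells = defaultdict(list)
    for pa, pb, y in zip(forecasts_a, forecasts_b, outcomes):
        # Bucket forecasts to one decimal place so cells accumulate data.
        key = (round(pa, 1), round(pb, 1))
        cells[key].append(y)
    scores = {}
    for (pa, pb), ys in cells.items():
        freq = sum(ys) / len(ys)  # empirical frequency of the outcome
        scores[(pa, pb)] = (abs(pa - freq), abs(pb - freq), len(ys))
    return scores

# Expert A predicts the realized outcome exactly; expert B always says 0.5.
# Conditional on B's prediction, A's errors are zero while B's are not.
s = cross_calibration_scores([0.0, 0.0, 1.0, 1.0], [0.5] * 4, [0, 0, 1, 1])
```

Ordinary calibration would condition only on an expert's own forecast; conditioning on the joint profile is what prevents a false expert from hiding behind coarse conditioning events.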


Related articles

Incentive-Compatible Forecasting Competitions

We consider the design of forecasting competitions in which multiple forecasters make predictions about one or more independent events and compete for a single prize. We have two objectives: (1) to award the prize to the most accurate forecaster, and (2) to incentivize forecasters to report truthfully, so that forecasts are informative and forecasters need not spend any cognitive effort strateg...


Testing for multiple-period predictability between serially dependent time series

This paper reports the results of a simulation study that considers the finite-sample performances of a range of approaches for testing multiple-period predictability between two potentially serially correlated time series. In many empirically relevant situations, but not all, most of the test statistics considered are significantly oversized. In contrast, both an analytical approach proposed i...


Testing the Value of Probability Forecasts for Calibrated Combining.

We combine the probability forecasts of a real GDP decline from the U.S. Survey of Professional Forecasters, after trimming the forecasts that do not have "value", as measured by the Kuiper Skill Score and in the sense of Merton (1981). For this purpose, we use a simple test to evaluate the probability forecasts. The proposed test does not require the probabilities to be converted to binary for...


Efficient Testing of Forecasts

Each day a weather forecaster predicts a probability of each type of weather for the next day. After n days, all the predicted probabilities and the real weather data are sent to a test which decides whether to accept the forecaster as possessing predicting power. Consider tests such that forecasters who know the distribution of nature are passed with high probability. Sandroni shows that any s...


The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions

Many methods have been proposed for making use of multiple experts to predict uncertain events such as election outcomes, ranging from simple averaging of individual predictions to complex collaborative structures such as prediction markets or structured group decision making processes. We used a panel of more than 2,000 forecasters to systematically compare the performance of four different co...


Using a rolling training approach to improve judgmental extrapolations elicited from forecasters with technical knowledge

Several biases and inefficiencies are commonly associated with the judgmental extrapolation of time series even when forecasters have technical knowledge about forecasting. This study examines the effectiveness of using a rolling training approach, based on feedback, to improve the accuracy of forecasts elicited from people with such knowledge. In an experiment forecasters were asked to make mu...




Journal:

Volume   Issue 

Pages  -

Publication date: 2007