Addictive Games: Case Study on Multi-Armed Bandit Game


Abstract

The attraction of games comes from the player being able to have fun while playing. Gambling games, which are based on the variable-ratio reinforcement schedule from Skinner's experiments, are the most typical addictive games, so it is necessary to clarify why gambling is simple yet addictive. The multi-armed bandit game is a typical testbed for Skinner-box design and is popular in gambling houses, which makes it a good example to analyze. This article focuses on extending the motion-in-mind model to the game-playing scene, quantifying the player's psychological inclination from simulation and experimental data. By relating it to the quantification of satisfaction and play comfort, the player's feeling of expectation is discussed from an energy perspective. Two different energies are proposed: a player-side energy (Er) and a game-side energy (Ei). Their difference, denoted Ed, expresses the psychological gap between player and game. Ten settings of the bandit's mass parameter m were simulated. It was found that the setting with the best balance of confidence and entry difficulty (Ei) can balance the player's expectation. The results show that Ed has the largest gap at m = 0.3 and m = 0.7, which expresses that the player will be motivated but not reconciled. Moreover, addiction is most likely to occur for m ∈ [0.5, 0.7]. Such an approach can also help developers and educators increase the efficiency of edutainment games and make them more attractive.
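The abstract's energy-gap idea can be illustrated with a small sketch. The paper's exact Er/Ei formulas are only in the full text; as an assumption, this sketch uses the motion-in-mind convention that the win rate v = 1 - m acts as a velocity and E(m) = m·v² as a potential energy of play, and checks how that quantity varies across ten mass settings:

```python
import random

# Illustrative sketch only: the paper's exact Er/Ei definitions are in the
# full text. We assume the motion-in-mind convention that the win rate
# v = 1 - m acts as a velocity and E(m) = m * v^2 as a potential energy.

def energy(m):
    v = 1.0 - m          # win rate (velocity analogue) for mass m
    return m * v * v     # assumed potential energy of play

def simulated_energy(m, pulls=10000, seed=0):
    """Monte Carlo version: estimate the win rate from simulated pulls."""
    rng = random.Random(seed)
    wins = sum(rng.random() < 1.0 - m for _ in range(pulls))
    v_hat = wins / pulls                  # empirically observed win rate
    return m * v_hat * v_hat

settings = [round(0.1 * k, 1) for k in range(10)]   # ten mass settings
peak = max(settings, key=energy)                    # setting with maximal energy
```

Among the ten settings, the assumed analytic curve peaks at m = 0.3, which matches one of the two masses the abstract singles out as having the biggest gap.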


Similar resources

Noise Free Multi-armed Bandit Game

We study the loss version of adversarial multi-armed bandit problems with one lossless arm. We show an adversary's strategy that forces any player to suffer K − 1 − O(1/T) loss, where K is the number of arms and T is the number of rounds.
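The intuition behind such a lower bound can be sketched with a simple adaptive adversary: it keeps a shrinking candidate set and charges loss only to arms outside the surviving candidate, so exactly one arm stays lossless for the whole game. This is an illustrative construction of my own, not necessarily the adversary from the paper:

```python
# Illustrative adaptive adversary: one arm remains lossless for all rounds,
# yet the player pays 1 for each candidate arm it tries and eliminates.
# This is a sketch in the spirit of the stated bound, not the paper's proof.

def play(K, T, player):
    """Run `player` (a function from the loss history to an arm index)
    against the adaptive adversary; return the player's total loss."""
    candidates = set(range(K))
    history, total = [], 0
    for _ in range(T):
        arm = player(history)
        if arm in candidates and len(candidates) > 1:
            candidates.discard(arm)       # charge the arm, drop it from the set
            loss = 1
        elif arm in candidates:
            loss = 0                      # the single surviving lossless arm
        else:
            loss = 1                      # already-eliminated arms stay lossy
        history.append((arm, loss))
        total += loss
    return total

def scanning_player(history):
    """Naive player: scan arms in order, then stay once a 0-loss is seen."""
    if history and history[-1][1] == 0:
        return history[-1][0]             # stick with the lossless arm
    return len({a for a, l in history if l == 1})   # try the next fresh arm
```

Against this adversary, the scanning player pays exactly K − 1 before settling on the surviving arm, matching the leading term of the bound.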


Mistake Bounds on Noise-Free Multi-Armed Bandit Game

We study the {0, 1}-loss version of adaptive adversarial multi-armed bandit problems with α (≥ 1) lossless arms. For this problem, we show a tight bound of K − α − Θ(1/T) on the minimax expected number of mistakes (1-losses), where K is the number of arms and T is the number of rounds.


Multi-Armed Bandit for Pricing

This paper is about the study of Multi–Armed Bandit (MAB) approaches for pricing applications, where a seller needs to identify the selling price for a particular kind of item that maximizes her/his profit without knowing the buyer demand. We propose modifications to the popular Upper Confidence Bound (UCB) bandit algorithm exploiting two peculiarities of pricing applications: 1) as the selling...
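The abstract builds on the standard UCB approach. As a baseline sketch (plain UCB1, not the paper's modified variant), each candidate price can be treated as an arm whose reward is the normalized revenue, with a hypothetical demand curve standing in for the unknown buyer behavior:

```python
import math
import random

def ucb1_pricing(prices, demand, rounds=5000, seed=1):
    """Plain UCB1 baseline (not the paper's modified algorithm): each
    candidate price is an arm; posting it yields the price, normalized to
    [0, 1] as UCB1 assumes, if the buyer accepts. `demand(p)` is the
    buyer's acceptance probability, unknown to the seller."""
    rng = random.Random(seed)
    top = max(prices)
    n = [0] * len(prices)          # times each price was posted
    s = [0.0] * len(prices)        # summed normalized revenue per price
    for t in range(1, rounds + 1):
        if t <= len(prices):       # post each price once to initialize
            i = t - 1
        else:                      # mean revenue + exploration bonus
            i = max(range(len(prices)),
                    key=lambda a: s[a] / n[a] + math.sqrt(2 * math.log(t) / n[a]))
        sold = rng.random() < demand(prices[i])
        n[i] += 1
        s[i] += prices[i] / top if sold else 0.0
    return prices[max(range(len(prices)), key=lambda a: n[a])]  # most-posted price
```

For example, with a hypothetical linear demand `lambda p: max(0.0, 1 - p / 12)` over prices [2, 4, 6, 8], expected revenue is maximized at price 6, and UCB1 gradually concentrates its posts there.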


Online Multi-Armed Bandit

We introduce a novel variant of the multi-armed bandit problem, in which bandits are streamed one at a time to the player, and at each point, the player can either choose to pull the current bandit or move on to the next bandit. Once a player has moved on from a bandit, they may never visit it again, which is a crucial difference between our problem and classic multi-armed bandit problems. In t...
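The streamed pull-or-move-on setting can be sketched in a few lines. The authors' actual algorithm lies beyond the truncated excerpt, so the strategy below is only a naive illustrative one (trial pulls followed by a commit, with a hypothetical acceptance threshold):

```python
import random

def stream_play(means, budget=1000, trial=20, threshold=0.6, seed=2):
    """Illustrative play of the streaming-bandit setting: bandits arrive one
    at a time; the player may pull the current bandit or move on, never
    returning. Naive strategy (not the paper's): sample each arriving bandit
    `trial` times and commit all remaining budget to the first one whose
    empirical mean clears `threshold`."""
    rng = random.Random(seed)
    reward, pulls = 0.0, 0
    for mu in means:                      # bandits streamed in order
        wins = 0
        for _ in range(trial):            # trial pulls on the current bandit
            if pulls >= budget:
                return reward
            hit = rng.random() < mu
            wins += hit
            reward += hit
            pulls += 1
        if wins / trial >= threshold:     # commit: spend the rest here
            while pulls < budget:
                reward += rng.random() < mu
                pulls += 1
            return reward
    return reward                         # moved past every bandit
```

The no-revisit constraint shows up in the outer loop: once the player leaves a bandit with a sub-threshold trial mean, that bandit is gone for good, which is exactly what separates this variant from the classic problem.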


Monotone multi-armed bandit allocations

We present a novel angle for multi-armed bandits (henceforth abbreviated MAB) which follows from the recent work on MAB mechanisms (Babaioff et al., 2009; Devanur and Kakade, 2009; Babaioff et al., 2010). The new problem is, essentially, about designing MAB algorithms under an additional constraint motivated by their application to MAB mechanisms. This note is self-contained, although some fami...



Journal

Journal title: Information

Year: 2021

ISSN: 2078-2489

DOI: https://doi.org/10.3390/info12120521