Similar Resources
Adaptive Submodular Ranking
We study a general stochastic ranking problem where an algorithm needs to adaptively select a sequence of elements so as to “cover” a random scenario (drawn from a known distribution) at minimum expected cost. The coverage of each scenario is captured by an individual submodular function, where the scenario is said to be covered when its function value goes above some threshold. We obtain a log...
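To make the coverage criterion concrete, here is a minimal sketch (not the paper's algorithm) of the threshold-coverage check and of the expected cost of a fixed selection order; it assumes each scenario i comes with a monotone submodular function f_i and a threshold Q_i, and the names `scenarios`, `cost`, and `sequence` are illustrative.

```python
# Illustrative sketch of threshold coverage for a stochastic ranking objective.
# Assumes each scenario is a triple (probability, f_i, Q_i), where f_i maps a
# set of selected elements to a number and Q_i is the coverage threshold.

def is_covered(selected, f_i, Q_i):
    """A scenario is covered once its submodular value reaches its threshold."""
    return f_i(selected) >= Q_i

def expected_cover_cost(sequence, scenarios, cost):
    """Expected cost to cover a random scenario when elements are picked in the
    given (non-adaptive) order; the adaptive setting additionally lets later
    picks depend on feedback, which this simplified sketch omits."""
    total = 0.0
    for prob, f_i, Q_i in scenarios:
        selected, spent = set(), 0.0
        for e in sequence:
            if is_covered(selected, f_i, Q_i):
                break
            selected.add(e)
            spent += cost[e]
        total += prob * spent
    return total
```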
Submodular Optimization with Routing Constraints
Submodular optimization, particularly under cardinality or cost constraints, has received considerable attention, stemming from its breadth of application, ranging from sensor placement to targeted marketing. However, the constraints faced in many real domains are more complex. We investigate an important and very general class of problems of maximizing a submodular function subject to general ...
Online Submodular Set Cover, Ranking, and Repeated Active Learning
We propose an online prediction version of submodular set cover with connections to ranking and repeated active learning. In each round, the learning algorithm chooses a sequence of items. The algorithm then receives a monotone submodular function and suffers loss equal to the cover time of the function: the number of items needed, when items are selected in order of the chosen sequence, to ach...
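As a concrete illustration of the loss described above, here is a minimal sketch (an assumed interface, not the paper's code) that computes the cover time of a monotone submodular function `f` for a chosen ordering of items; the penalty convention for an uncovered sequence is an assumption.

```python
def cover_time(sequence, f, target):
    """Number of items, taken in the order of `sequence`, needed until the
    monotone submodular function f reaches `target` (e.g., its maximum value).
    The len(sequence) + 1 return value for never covering is an assumed
    convention, not the paper's definition."""
    selected = set()
    for t, item in enumerate(sequence, start=1):
        selected.add(item)
        if f(selected) >= target:
            return t
    return len(sequence) + 1
```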
Non-Monotone Adaptive Submodular Maximization
A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies f...
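To illustrate the sequential select-then-observe pattern the abstract describes, here is a minimal adaptive-greedy policy skeleton (a generic sketch, not the paper's algorithm); `expected_gain` and `observe` are hypothetical callbacks supplied by the application.

```python
def adaptive_greedy(ground_set, k, expected_gain, observe):
    """Pick k elements one at a time; after each pick, record the stochastic
    feedback and condition later choices on everything observed so far.
    `expected_gain(e, observations)` estimates the conditional expected
    marginal benefit of e; `observe(e)` returns the realized feedback."""
    observations = {}
    chosen = []
    remaining = set(ground_set)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining, key=lambda e: expected_gain(e, observations))
        chosen.append(best)
        remaining.remove(best)
        observations[best] = observe(best)  # stochastic feedback drives future picks
    return chosen, observations
```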
Adaptive Submodular Optimization under Matroid Constraints
Many important problems in discrete optimization require maximization of a monotonic submodular function subject to matroid constraints. For these problems, a simple greedy algorithm is guaranteed to obtain near-optimal solutions. In this article, we extend this classic result to a general class of adaptive optimization problems under partial observability, where each choice can depend on obser...
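For reference, the classic (non-adaptive) greedy that the abstract alludes to can be sketched as follows; `is_independent` is an assumed membership oracle for the matroid and `f` a monotone submodular set function.

```python
def greedy_matroid(ground_set, f, is_independent):
    """Classic greedy for monotone submodular maximization subject to a
    matroid constraint: repeatedly add the feasible element with the largest
    marginal gain. This greedy achieves a 1/2 approximation for a single
    matroid constraint."""
    solution = set()
    candidates = set(ground_set)
    while candidates:
        # Keep only elements whose addition stays independent in the matroid.
        feasible = [e for e in candidates if is_independent(solution | {e})]
        if not feasible:
            break
        best = max(feasible, key=lambda e: f(solution | {e}) - f(solution))
        if f(solution | {best}) - f(solution) <= 0:
            break  # no positive marginal gain left
        solution.add(best)
        candidates.remove(best)
    return solution
```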
Journal
Journal title: Operations Research
Year: 2020
ISSN: 0030-364X,1526-5463
DOI: 10.1287/opre.2019.1889