Self-Supervised Self-Supervision by Combining Deep Learning and Probabilistic Logic

Authors

Abstract

Labeling training examples at scale is a perennial challenge in machine learning. Self-supervision methods compensate for the lack of direct supervision by leveraging prior knowledge to automatically generate noisy labeled examples. Deep probabilistic logic (DPL) is a unifying framework for self-supervised learning that represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end with variational EM. While DPL is successful at combining pre-specified self-supervision, manually crafting self-supervision to attain high accuracy may still be tedious and challenging. In this paper, we propose Self-Supervised Self-Supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial "seed," S4 iteratively uses the deep neural network to propose new self-supervision. These are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments show that S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
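As a rough illustration of the loop described in the abstract, the following Python sketch shows how an initial seed of self-supervision could be grown iteratively. The function names (train_dpl, propose_rules, expert_verify) are assumptions for illustration, not the authors' released implementation.

# Minimal, illustrative sketch of the S4 loop (hypothetical names;
# not the authors' code).
def s4_loop(seed_rules, unlabeled_data, train_dpl, propose_rules,
            expert_verify=None, num_iterations=5):
    """Iteratively grow the set of self-supervision rules.

    seed_rules:    initial hand-written self-supervision
    train_dpl:     fits the DPL model (neural net + probabilistic logic)
                   on the current rules via variational EM
    propose_rules: uses the trained model to suggest candidate new rules
    expert_verify: optional callback; if given, candidates are shown to a
                   human (feature-based active learning), otherwise the
                   candidates are added directly (structured self-training)
    """
    rules = list(seed_rules)
    model = None
    for _ in range(num_iterations):
        # Latent labels are inferred from the current rules and the
        # neural network is refit inside train_dpl.
        model = train_dpl(rules, unlabeled_data)

        # Propose new self-supervision from the trained model.
        candidates = propose_rules(model, unlabeled_data)

        if expert_verify is not None:
            accepted = [r for r in candidates if expert_verify(r)]
        else:
            accepted = candidates  # add directly: structured self-training

        if not accepted:
            break
        rules.extend(accepted)
    return model, rules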


Similar Articles

Towards Lifelong Self-Supervision: A Deep Learning Direction for Robotics

Despite outstanding success in vision amongst other domains, many of the recent deep learning approaches have evident drawbacks for robots. This manuscript surveys recent work in the literature that pertains to applying deep learning systems to the robotics domain, either as means of estimation or as a tool to resolve motor commands directly from raw percepts. These recent advances are only a pi...


Self-Supervision for Reinforcement Learning

Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquit...
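As a hedged illustration only (the snippet above is truncated, and the helper names forward_model, inverse_model, and aux_weight are assumptions rather than that paper's API), a combined objective of this kind might look as follows in PyTorch-style code, with dense auxiliary losses built from (state, action, successor) tuples supplementing a sparse reward signal.

# Illustrative sketch: policy loss augmented with self-supervised
# auxiliary losses over state transitions (hypothetical names).
import torch

def total_loss(policy_loss, state, action, next_state,
               forward_model, inverse_model, aux_weight=0.1):
    # Forward-dynamics auxiliary task: predict the successor state.
    forward_loss = torch.nn.functional.mse_loss(
        forward_model(state, action), next_state)
    # Inverse-dynamics auxiliary task: recover the action from the transition.
    inverse_loss = torch.nn.functional.mse_loss(
        inverse_model(state, next_state), action)
    # Auxiliary losses provide a dense signal even when reward is sparse.
    return policy_loss + aux_weight * (forward_loss + inverse_loss)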


Self-supervised Learning of Geometrically Stable Features Through Probabilistic Introspection

One of the most promising directions of deep learning is the development of self-supervised methods that can substantially reduce the quantity of manually-labeled training data required to learn a model. Several recent contributions, in particular, have proposed self-supervision techniques suitable for tasks such as image classification. In this work, we look instead at self-supervision for geo...


Self-Supervised Deep Visuomotor Learning from Motor Unit Feedback

Despite recent success in a number of domains with deep learning, expensive data collection and the need for large datasets become a major drawback for deep learning with real robotic platforms. As a result, much of the successful work in deep learning has been limited to domains where large datasets are readily available or easily collected. To address this issue, we leverage closed-loop cont...


Self Paced Deep Learning for Weakly Supervised Object Detection

In a weakly-supervised scenario, object detectors need to be trained using image-level annotation only. Since bounding-box-level ground truth is not available, most of the solutions proposed so far are based on an iterative approach in which the classifier, obtained in the previous iteration, is used to predict the objects' positions, which are used for training in the current itera...
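The following rough sketch (hypothetical helpers init_detector, train_detector, predict_boxes; not that paper's implementation) illustrates such an iterative pseudo-labeling loop, where boxes predicted by the previous round's detector are kept only if their class appears among the image-level labels.

# Illustrative iterative weakly-supervised detection loop (assumed names).
def iterative_weak_detection(images, image_labels, init_detector,
                             train_detector, predict_boxes, num_rounds=3):
    detector = init_detector
    for _ in range(num_rounds):
        pseudo_boxes = []
        for img, labels in zip(images, image_labels):
            # Keep predicted boxes whose class matches an image-level label.
            boxes = [b for b in predict_boxes(detector, img)
                     if b.label in labels]
            pseudo_boxes.append(boxes)
        # Retrain on the pseudo ground truth produced by the previous round.
        detector = train_detector(images, pseudo_boxes)
    return detector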



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i6.16631