Self-Supervised Representation Learning for Evolutionary Neural Architecture Search

Authors

Abstract

Recently proposed neural architecture search (NAS) algorithms adopt neural predictors to accelerate the architecture search. The capability of neural predictors to accurately predict the performance metrics of neural architectures is critical to NAS, but obtaining training datasets for neural predictors is often time-consuming. How to obtain a neural predictor with high prediction accuracy using a small amount of training data is a central problem for predictor-based NAS. Here, a new architecture encoding scheme is first devised to calculate the graph edit distance of neural architectures, which overcomes the drawbacks of existing vector-based encoding schemes. To enhance the predictive performance of neural predictors, two self-supervised learning methods are proposed to pre-train the architecture embedding part of neural predictors so that it generates a meaningful representation of neural architectures. The first method designs a graph neural network-based model with two independent branches and utilizes the graph edit distance of two different neural architectures as supervision to force the model to generate meaningful architecture representations. Inspired by contrastive learning, the second method presents a new contrastive learning algorithm that utilizes a central feature vector as a proxy to contrast positive pairs against negative pairs. Experimental results illustrate that the pre-trained neural predictors can achieve comparable or superior performance to their supervised counterparts while using only half of the training samples. The effectiveness of the methods is further validated by integrating the pre-trained predictors into a neural predictor guided evolutionary neural architecture search (NPENAS) algorithm, which achieves state-of-the-art performance on the NASBench-101, NASBench-201, and DARTS benchmarks.
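To make the first pre-training idea concrete, the sketch below shows a twin-branch model trained to regress the graph edit distance (GED) of architecture pairs. This is a minimal illustration under stated assumptions, not the paper's implementation: the branches here are plain MLPs over fixed-length encoding vectors (the paper uses graph neural network branches over its devised encoding scheme), and the placeholder target stands in for the true GED computation.

# Minimal sketch of GED-supervised pre-training (assumptions noted above).
import torch
import torch.nn as nn

class GEDPretrainer(nn.Module):
    """Two independent branches embed a pair of architectures; a head
    regresses their graph edit distance from the concatenated embeddings."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        def branch() -> nn.Sequential:
            return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))
        self.branch_a, self.branch_b = branch(), branch()
        self.head = nn.Linear(2 * emb_dim, 1)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.branch_a(x1), self.branch_b(x2)], dim=-1)
        return self.head(z).squeeze(-1)

model = GEDPretrainer(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    # Random vectors stand in for unlabeled architecture encodings;
    # real pre-training would sample pairs from the search space.
    x1, x2 = torch.randn(64, 16), torch.randn(64, 16)
    target = (x1 - x2).abs().mean(dim=-1)  # placeholder for the true GED
    loss = nn.functional.mse_loss(model(x1, x2), target)
    opt.zero_grad(); loss.backward(); opt.step()

After pre-training, the embedding branch would initialize the architecture-embedding part of the neural predictor, which is then fine-tuned on the small set of accuracy-labeled architectures.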

Similar Articles

Supervised Learning for Self-Generating Neural Networks

In this paper, supervised learning for the Self-Generating Neural Network (SGNN) method, which was originally developed for unsupervised learning, is discussed. An information-analytical method is proposed to assign weights to attributes in the training examples when class information is available. This significantly improves the learning speed and the accuracy of the SGNN classifier. ...

Self-organizing Neural Architecture for Reinforcement Learning

Self-organizing neural networks are typically associated with unsupervised learning. This paper presents a self-organizing neural architecture, known as TD-FALCON, that learns cognitive codes across multi-modal pattern spaces, involving states, actions, and rewards, and is capable of adapting and functioning in a dynamic environment with external evaluative feedback signals. We present a case s...

Neural Architecture Search with Reinforcement Learning

Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of t...
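As a rough sketch of the loop this abstract describes (an illustration, not the paper's code), a recurrent controller samples architecture tokens and is updated with the REINFORCE policy gradient; evaluate_arch below is a hypothetical stand-in for training the sampled child network and measuring its validation accuracy.

# Hedged sketch of an RNN controller trained with REINFORCE.
import torch
import torch.nn as nn

class Controller(nn.Module):
    def __init__(self, vocab: int = 8, steps: int = 6, hidden: int = 32):
        super().__init__()
        self.steps, self.hidden = steps, hidden
        self.embed = nn.Embedding(vocab, hidden)
        self.cell = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab)

    def sample(self):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        tok = torch.zeros(1, dtype=torch.long)  # start token
        tokens, log_probs = [], []
        for _ in range(self.steps):
            h, c = self.cell(self.embed(tok), (h, c))
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            tokens.append(int(tok))
            log_probs.append(dist.log_prob(tok))
        return tokens, torch.stack(log_probs).sum()

def evaluate_arch(tokens):
    # Hypothetical reward: real NAS would decode the tokens into a
    # child network, train it, and return its validation accuracy.
    return 1.0 / (1.0 + abs(sum(tokens) - 20))

ctrl = Controller()
opt = torch.optim.Adam(ctrl.parameters(), lr=1e-2)
baseline = 0.0
for _ in range(200):
    tokens, log_prob = ctrl.sample()
    reward = evaluate_arch(tokens)
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    loss = -(reward - baseline) * log_prob     # REINFORCE gradient estimator
    opt.zero_grad(); loss.backward(); opt.step()

Subtracting a moving-average baseline from the reward is a standard variance-reduction trick in policy-gradient NAS.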

The Search for the Self in Beckett's Theatre: Waiting for Godot and Endgame

This thesis is based upon the works of Samuel Beckett, one of the greatest writers of contemporary literature. Here, I have tried to focus on one of the main themes in Beckett's works: the search for the real "me" or the real self, which is a problem to be solved not only for Beckett's man but also for each of us. I have tried to show Beckett's techniques in approaching this unattainable goal, base...

Evolutionary Architecture Search For Deep Multitask Networks

Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance on each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: there are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceed human design ability, making it a compelling domain for ...

Journal

Journal: IEEE Computational Intelligence Magazine

Year: 2021

ISSN: 1556-6048, 1556-603X

DOI: https://doi.org/10.1109/mci.2021.3084415