Progressive Network Grafting for Few-Shot Knowledge Distillation

Authors

Abstract

Knowledge distillation has demonstrated encouraging performances in deep model compression. Most existing approaches, however, require massive labeled data to accomplish the knowledge transfer, making model compression a cumbersome and costly process. In this paper, we investigate the practical few-shot knowledge distillation scenario, where we assume that only a few samples without human annotations are available for each category. To this end, we introduce a principled dual-stage distillation scheme tailored for few-shot data. In the first step, we graft the student blocks one by one onto the teacher, and learn the parameters of each grafted block intertwined with those of the other teacher blocks. In the second step, the trained student blocks are progressively connected and then grafted together onto the teacher network, allowing the learned student blocks to adapt to one another and eventually replace the teacher network. Experiments demonstrate that our approach, with only a few unlabeled samples, achieves gratifying results on CIFAR10, CIFAR100, and ILSVRC-2012. On CIFAR10 and CIFAR100, our performance is even on par with that of schemes that utilize the full datasets. The source code is available at https://github.com/zju-vipa/NetGraft.
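The dual-stage scheme can be illustrated with a short PyTorch sketch. The code below assumes the teacher and student have already been split into the same number of aligned blocks and uses a simple KL-based output-matching loss on unlabeled data; the function names, loss choice, and hyperparameters are illustrative assumptions rather than the authors' exact NetGraft implementation.

```python
# Minimal sketch of the dual-stage grafting idea (assumptions noted above).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def distill_loss(student_logits, teacher_logits, T=4.0):
    """Match the grafted network's soft predictions to the full teacher's."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def graft_stage1(teacher_blocks, student_blocks, loader, device, epochs=10):
    """Stage 1: graft each student block, one at a time, into the frozen teacher."""
    teacher = nn.Sequential(*teacher_blocks).to(device).eval()
    for i, s_block in enumerate(student_blocks):
        hybrid_blocks = [copy.deepcopy(b) for b in teacher_blocks]
        hybrid_blocks[i] = s_block                      # replace block i with the student block
        hybrid = nn.Sequential(*hybrid_blocks).to(device)
        for j, b in enumerate(hybrid):
            if j != i:                                  # only the grafted block is trainable
                b.requires_grad_(False)
                b.eval()
        opt = torch.optim.Adam(s_block.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x in loader:                            # a few unlabeled images per class
                x = x.to(device)
                with torch.no_grad():
                    t_out = teacher(x)
                loss = distill_loss(hybrid(x), t_out)
                opt.zero_grad()
                loss.backward()
                opt.step()


def graft_stage2(teacher_blocks, student_blocks, loader, device, epochs=10):
    """Stage 2: progressively connect the trained student blocks, grafting the
    first k of them onto the remaining teacher blocks and adapting them jointly."""
    teacher = nn.Sequential(*teacher_blocks).to(device).eval()
    n = len(student_blocks)
    for k in range(2, n + 1):
        hybrid = nn.Sequential(*list(student_blocks[:k]),
                               *list(teacher_blocks[k:])).to(device)
        opt = torch.optim.Adam(
            [p for b in student_blocks[:k] for p in b.parameters()], lr=1e-3
        )
        for _ in range(epochs):
            for x in loader:
                x = x.to(device)
                with torch.no_grad():
                    t_out = teacher(x)
                loss = distill_loss(hybrid(x), t_out)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return nn.Sequential(*student_blocks)               # all teacher blocks replaced
```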

Related articles

Dynamic Input Structure and Network Assembly for Few-Shot Learning

The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but...

Learning to Compare: Relation Network for Few-Shot Learning

We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is de...
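As a rough illustration of the comparison step described in this abstract, the snippet below scores query/class embedding pairs with a small learned relation module; the architecture and names are placeholders, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class RelationHead(nn.Module):
    """Learned similarity: scores how related a (query, class) embedding pair is."""

    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, query_emb, class_emb):
        # query_emb: [Q, d], class_emb: [C, d] -> relation scores in [0, 1], shape [Q, C]
        q = query_emb.unsqueeze(1).expand(-1, class_emb.size(0), -1)
        c = class_emb.unsqueeze(0).expand(query_emb.size(0), -1, -1)
        return self.net(torch.cat([q, c], dim=-1)).squeeze(-1)
```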

Few-shot Object Detection

In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named “few-shot object detection”. The key challenge consists in generating as many trustworthy training samples as possible from the pool. Using a few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, e...
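The iterate-between-training-and-selection idea reads naturally as a self-training loop. The sketch below is a generic, hedged illustration with hypothetical `train_model` and `predict_with_score` callables standing in for the paper's detection-specific components.

```python
def self_train(model, labeled, unlabeled_pool, train_model, predict_with_score,
               rounds=5, threshold=0.9):
    """Alternate between (re)training and mining high-confidence pseudo-labels.

    `train_model(model, dataset)` and `predict_with_score(model, x) -> (label, score)`
    are hypothetical stand-ins for the task-specific training and inference steps.
    """
    training_set = list(labeled)           # start from the few labeled seeds
    pool = list(unlabeled_pool)
    for _ in range(rounds):
        train_model(model, training_set)
        predictions = [(x, *predict_with_score(model, x)) for x in pool]
        confident = [(x, label) for x, label, score in predictions if score >= threshold]
        training_set += confident          # trust only high-confidence pseudo-labels
        selected = {id(x) for x, _ in confident}
        pool = [x for x in pool if id(x) not in selected]
    return model
```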

Few-shot Learning

Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose ...

Prototypical Networks for Few-shot Learning

A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics the test setting. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space...
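The prototype-based classification rule can be written in a few lines. The sketch below assumes a given embedding network `embed` and uses squared Euclidean distance; the names are chosen for illustration rather than taken from the paper's reference code.

```python
import torch


def proto_classify(embed, support_x, support_y, query_x, n_classes):
    """Classify queries by distance to class prototypes (mean support embeddings)."""
    z_support = embed(support_x)                   # [n_support, d]
    z_query = embed(query_x)                       # [n_query, d]
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0)      # prototype = per-class mean embedding
        for c in range(n_classes)
    ])                                             # [n_classes, d]
    sq_dists = torch.cdist(z_query, prototypes) ** 2
    return (-sq_dists).softmax(dim=1)              # distribution over the episode's classes
```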

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i3.16356