Stop-Gradient Softmax Loss for Deep Metric Learning

Authors

Abstract

Deep metric learning aims to learn a feature space that models the similarity between images, and feature normalization is a critical step for boosting performance. However, directly optimizing the L2-normalized softmax loss causes the network to fail to converge. Therefore, some SOTA approaches append a scale layer after the inner product to relieve the convergence problem, but this incurs a new problem: it is difficult to learn the best scaling parameters. In this letter, we look into the characteristics of softmax-based approaches and propose a novel learning objective function, Stop-Gradient Softmax Loss (SGSL), to solve the convergence problem in softmax-based deep metric learning with L2-normalization. In addition, we found a useful trick named Remove the last BN-ReLU (RBR), which removes the last BN-ReLU in the backbone to reduce the learning burden of the model. Experimental results on four fine-grained image retrieval benchmarks show that our proposed approach outperforms most existing approaches, i.e., it achieves 75.9% on CUB-200-2011, 94.7% on CARS196 and 83.1% on SOP, outperforming other approaches by at least 1.7%, 2.9% and 1.7% on Recall@1.
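Only the abstract is reproduced here, so the following PyTorch code is a minimal sketch of the two ingredients it names, not the paper's exact method: the detach() placement in stop_gradient_softmax_loss is one plausible guess at where SGSL stops the gradient, and the class-proxy matrix and the EmbeddingHead used to illustrate RBR are hypothetical helpers introduced for this sketch.

```python
# Illustrative sketch only; the exact SGSL and RBR definitions are in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_softmax_loss(embeddings, proxies, labels, scale=1.0):
    # Plain L2-normalized softmax loss: cosine logits + cross-entropy.
    # With scale = 1 the logits live in [-1, 1], the regime the abstract
    # says fails to converge unless a scale layer is learned.
    logits = scale * F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()
    return F.cross_entropy(logits, labels)


def stop_gradient_softmax_loss(embeddings, proxies, labels):
    # Hypothetical stop-gradient variant: detach the logits inside the
    # softmax normalizer so its log-sum-exp is treated as a constant in
    # the backward pass and only the target cosine receives gradient.
    logits = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()
    target = logits.gather(1, labels.unsqueeze(1))                 # (B, 1)
    log_denom = torch.logsumexp(logits.detach(), dim=1, keepdim=True)
    return (log_denom - target).mean()          # -log softmax, denominator detached


class EmbeddingHead(nn.Module):
    # Toy stand-in for the RBR trick: the flag drops the trailing BN-ReLU
    # so the embedding is the network's last operation. (In the paper, RBR
    # removes the last BN-ReLU inside the backbone itself.)
    def __init__(self, in_dim=2048, embed_dim=512, keep_last_bn_relu=True):
        super().__init__()
        layers = [nn.Linear(in_dim, embed_dim)]
        if keep_last_bn_relu:
            layers += [nn.BatchNorm1d(embed_dim), nn.ReLU(inplace=True)]
        self.head = nn.Sequential(*layers)

    def forward(self, x):
        return self.head(x)


# Toy usage: 8 samples, 4 classes, 16-dim embeddings.
emb = torch.randn(8, 16, requires_grad=True)
proxies = torch.randn(4, 16, requires_grad=True)
labels = torch.randint(0, 4, (8,))
stop_gradient_softmax_loss(emb, proxies, labels).backward()
```

With the normalizer detached, the gradient of this sketch's loss with respect to the target cosine is a constant, so it cannot vanish when the unscaled cosine logits make the softmax nearly uniform; that is one way a stop-gradient can sidestep the convergence issue the abstract describes.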


Similar Articles

Semantic Softmax Loss for Zero-Shot Learning

A typical pipeline for Zero-Shot Learning (ZSL) is to integrate the visual features and the class semantic descriptors into a multimodal framework with a linear or bilinear model. However, the visual features and the class semantic descriptors lie in different structural spaces, so a linear or bilinear model cannot capture the semantic interactions between the different modalities well. In this le...

Full text

Significance of Softmax-based Features over Metric Learning-based Features

The extraction of useful deep features is important for many computer vision tasks. Deep features extracted from classification networks have proved to perform well in those tasks. To obtain features of greater usefulness, end-to-end distance metric learning (DML) has been applied to train the feature extractor directly. End-to-end DML approaches such as Magnet Loss and lifted structured featur...

Full text

Cold-Start Reinforcement Learning with Softmax Policy Gradient

Policy-gradient approaches to reinforcement learning have two common and undesirable overhead procedures, namely warm-start training and sample variance reduction. In this paper, we describe a reinforcement learning method based on a softmax value function that requires neither of these procedures. Our method combines the advantages of policy-gradient methods with the efficiency and simplicity ...

Full text

Soft-Margin Softmax for Deep Classification

In deep classification, the softmax loss (Softmax) is arguably one of the most commonly used components to train deep convolutional neural networks (CNNs). However, such a widely used loss is limited because it does not explicitly encourage the discriminability of features. Recently, the large-margin softmax loss (L-Softmax [14]) was proposed to explicitly enhance feature discrimination, with hard mar...

Full text

Deep metric learning for multi-labelled radiographs

Many radiological studies can reveal the presence of several co-existing abnormalities, each one represented by a distinct visual pattern. In this article we address the problem of learning a distance metric for plain radiographs that captures a notion of “radiological similarity”: two chest radiographs are considered to be similar if they share similar abnormalities. Deep convolutional neural ...

Full text


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i3.25421