Search results for: stip 1

Number of results: 2752655

2013
Qianru Sun Hong Liu

Automatically inferring ongoing activities aims to enable the early recognition of unfinished activities, which is quite meaningful for applications such as online human-machine interaction and security monitoring. State-of-the-art methods use spatio-temporal interest point (STIP) based features as the low-level video description to handle complex scenes [1, 2, 3]. While the existing problem ...

Journal: Computer Vision and Image Understanding, 2013
Yingying Zhu Nandita M. Nayak Utkarsh Gaur Bi Song Amit K. Roy-Chowdhury

In this paper, a novel generalized framework of activity representation and recognition based on a ‘string of feature graphs (SFG)’ model is introduced. The proposed framework represents a visual activity as a string of feature graphs, where the string elements are initially matched using a graph-based spectral technique, followed by a dynamic programming scheme for matching the complete string...
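
As a rough illustration of the string-of-feature-graphs idea (not the paper's actual matching algorithm), the sketch below compares two activities by scoring pairs of graphs with a simple spectral signature and aligning the two strings with dynamic programming. The adjacency matrices, the signature length, and the DTW-style recursion are all illustrative assumptions.

```python
# Illustrative reconstruction of the SFG matching idea (not the paper's algorithm):
# each activity is a string of small feature graphs, graphs are compared through a
# spectral signature, and the two strings are aligned by dynamic programming.
import numpy as np

def spectral_signature(adjacency, k=8):
    """Sorted Laplacian eigenvalues, zero-padded/truncated to a fixed length k."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigvals = np.sort(np.linalg.eigvalsh(laplacian))
    out = np.zeros(k)
    out[:min(k, len(eigvals))] = eigvals[:k]
    return out

def graph_distance(a, b):
    return np.linalg.norm(spectral_signature(a) - spectral_signature(b))

def string_distance(string_a, string_b):
    """DTW-style dynamic programming over two strings of feature graphs."""
    n, m = len(string_a), len(string_b)
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = graph_distance(string_a[i - 1], string_b[j - 1])
            dp[i, j] = cost + min(dp[i - 1, j], dp[i, j - 1], dp[i - 1, j - 1])
    return dp[n, m]

# Hypothetical usage: random symmetric adjacency matrices stand in for the
# per-segment feature graphs of two observed activities.
rng = np.random.default_rng(1)
def random_graph(n):
    upper = np.triu(rng.random((n, n)) < 0.4, 1)
    return (upper | upper.T).astype(float)

activity_a = [random_graph(int(rng.integers(4, 8))) for _ in range(10)]
activity_b = [random_graph(int(rng.integers(4, 8))) for _ in range(12)]
print(string_distance(activity_a, activity_b))
```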

2013
Maximilian Panzner Oliver Beyer Philipp Cimiano

In this paper we present an online approach to human activity classification based on Online Growing Neural Gas (OGNG). In contrast to state-of-the-art approaches that perform training in an offline fashion, our approach is online in the sense that it circumvents the need to store any training examples, processing the data on the fly and in one pass. The approach is thus particularly suitable i...
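
To illustrate the one-pass, no-stored-examples property the abstract refers to, here is a compact sketch of a textbook Growing Neural Gas learner. It is a simplified GNG (edge maintenance and unit insertion are abbreviated), not the paper's OGNG variant, and all parameter values and the streamed descriptors are assumptions.

```python
# Simplified Growing Neural Gas: each sample is processed once and then discarded.
import numpy as np

class GrowingNeuralGas:
    def __init__(self, dim, eps_winner=0.05, eps_neighbor=0.006,
                 max_age=50, insert_every=100, alpha=0.5, decay=0.995):
        rng = np.random.default_rng(0)
        self.units = [rng.normal(size=dim), rng.normal(size=dim)]  # prototype vectors
        self.errors = [0.0, 0.0]                                   # accumulated error per unit
        self.edges = {(0, 1): 0}                                   # edge -> age
        self.params = (eps_winner, eps_neighbor, max_age, insert_every, alpha, decay)
        self.t = 0

    def partial_fit(self, x):
        """Process one sample online: adapt the winner and its topological neighbors."""
        eps_w, eps_n, max_age, insert_every, alpha, decay = self.params
        d = [np.linalg.norm(x - u) for u in self.units]
        s1, s2 = np.argsort(d)[:2]
        self.errors[s1] += d[s1] ** 2
        self.units[s1] += eps_w * (x - self.units[s1])
        for (a, b) in list(self.edges):              # age the winner's edges,
            if s1 in (a, b):                         # drag its neighbors along,
                self.edges[(a, b)] += 1              # and drop edges that are too old
                other = b if a == s1 else a
                self.units[other] += eps_n * (x - self.units[other])
                if self.edges[(a, b)] > max_age:
                    del self.edges[(a, b)]
        self.edges[tuple(sorted((int(s1), int(s2))))] = 0
        self.t += 1
        if self.t % insert_every == 0:               # grow: insert a unit near the
            q = int(np.argmax(self.errors))          # highest-error region (simplified)
            f = max(range(len(self.units)),
                    key=lambda i: self.errors[i] if tuple(sorted((q, i))) in self.edges else -1)
            self.units.append(0.5 * (self.units[q] + self.units[f]))
            self.errors[q] *= alpha
            self.errors[f] *= alpha
            self.errors.append(self.errors[q])
            self.edges[tuple(sorted((q, len(self.units) - 1)))] = 0
        self.errors = [e * decay for e in self.errors]

# Hypothetical usage: stream per-frame descriptors one at a time, never storing them.
gng = GrowingNeuralGas(dim=16)
stream = np.random.default_rng(2).normal(size=(500, 16))
for frame_descriptor in stream:
    gng.partial_fit(frame_descriptor)
print(len(gng.units), "prototype units after one pass")
```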

2013
Mahmood Karimian Mostafa Tavassolipour Shohreh Kasaei

In large databases, the lack of labeled training data leads to major difficulties in classification. Semi-supervised algorithms are employed to mitigate this problem, and video databases are a prime example of such a scenario. Fortunately, graph-based methods have been shown to form promising platforms for semi-supervised video classification. Based on the multimodal characteristics of video data, different fe...
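
For context, a minimal graph-based semi-supervised baseline can be sketched with scikit-learn's label spreading over a kNN graph. The feature matrix below is a random placeholder standing in for multimodal video descriptors, and the labeling ratio is an arbitrary assumption, so this is only a schematic illustration, not the paper's method.

```python
# Graph-based semi-supervised classification sketch: propagate the few known
# labels over a kNN similarity graph built from the (placeholder) video features.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, size=(50, 32)),
               rng.normal(3, 1, size=(50, 32))])   # two hypothetical action classes
y = np.array([0] * 50 + [1] * 50)

# Keep labels for only 10 of the 100 "videos"; mark the rest as unlabeled (-1).
y_partial = np.full_like(y, -1)
labeled = rng.choice(len(y), size=10, replace=False)
y_partial[labeled] = y[labeled]

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)
print("accuracy on unlabeled videos:",
      (model.transduction_ == y)[y_partial == -1].mean())
```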

2012
Qianru Sun Hong Liu

Classifying realistic human actions in video remains challenging due to the intra-class variability and inter-class ambiguity of action classes. Recently, Spatial-Temporal Interest Point (STIP) based local features have shown great promise in complex action analysis. However, these methods are limited in that they typically rely on the Bag-of-Words (BoW) algorithm, which can hardly discriminate actions...
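
The STIP + BoW pipeline criticized here can be illustrated with a short sketch: cluster pre-extracted STIP descriptors into a visual vocabulary, encode each video as an orderless codeword histogram, and train a linear classifier. This is a generic reconstruction with placeholder data (random arrays standing in for HOG/HOF descriptors), not the authors' code; the descriptor dimensionality, vocabulary size, and class count are assumptions.

```python
# Minimal Bag-of-Words pipeline over pre-extracted STIP descriptors (a sketch,
# not any specific paper's implementation).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: each video has a variable number of 162-dimensional STIP
# descriptors (72 HOG + 90 HOF), plus an action label.
videos = [rng.normal(size=(int(rng.integers(50, 200)), 162)) for _ in range(20)]
labels = rng.integers(0, 3, size=len(videos))        # 3 hypothetical action classes

# 1. Build the visual vocabulary by clustering all training descriptors.
codebook = KMeans(n_clusters=64, n_init=10, random_state=0)
codebook.fit(np.vstack(videos))

# 2. Encode each video as a normalized histogram of codeword assignments.
def bow_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(v) for v in videos])

# 3. Train a linear classifier on the histograms. This orderless representation
# discards spatio-temporal layout, which is the discriminability limitation the
# abstract points at.
clf = LinearSVC().fit(X, labels)
print(clf.score(X, labels))
```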

2014
Michalis Vrigkas Christophoros Nikou Ioannis A. Kakadiaris

A human behavior recognition method with an application to political speech videos is presented. We focus on modeling the behavior of a subject with a conditional random field (CRF). The unary terms of the CRF employ spatiotemporal features (i.e., HOG3D, STIP and LBP). The pairwise terms are based on kinematic features such as the velocity and the acceleration of the subject. As an exact soluti...
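
The unary/pairwise decomposition described above can be sketched as a linear-chain CRF scorer: per-frame appearance features feed the unary potentials, inter-frame kinematic features feed the pairwise potentials, and MAP decoding is done with Viterbi. All weights and features below are random placeholders, and plain Viterbi stands in for whatever inference the paper actually uses.

```python
# Schematic linear-chain CRF: unary terms from appearance descriptors (stand-ins
# for HOG3D/STIP/LBP) and pairwise terms from kinematic features.
import numpy as np

rng = np.random.default_rng(4)
T, n_labels, d_app, d_kin = 30, 4, 64, 6      # frames, behavior labels, feature dims

appearance = rng.normal(size=(T, d_app))      # per-frame spatiotemporal features
kinematics = rng.normal(size=(T - 1, d_kin))  # velocity/acceleration between frames

W_unary = rng.normal(size=(n_labels, d_app))
W_pair = rng.normal(size=(n_labels, n_labels, d_kin))

unary = appearance @ W_unary.T                           # (T, n_labels)
pairwise = np.einsum("abk,tk->tab", W_pair, kinematics)  # (T-1, n_labels, n_labels)

def viterbi(unary, pairwise):
    """MAP label sequence for a linear-chain CRF (max-sum dynamic programming)."""
    T, L = unary.shape
    score = unary[0].copy()
    backptr = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        total = score[:, None] + pairwise[t - 1] + unary[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    labels = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        labels.append(int(backptr[t, labels[-1]]))
    return labels[::-1]

print(viterbi(unary, pairwise))
```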

2016
Mohammad Reza Farahvash Ghasemali Khorasani Yadollah Mahdiani Ahmad Reza Taheri

BACKGROUND Early postoperative edema and ecchymosis are the most common factors complicating initial patient perceptions of rhinoplasty. The current study was conducted to determine the effects of longer Steri-Strip tape, extended over the malar and cheek areas, on the control and reduction of ecchymosis. METHODS In a randomized controlled clinical trial, 64 patients who underwent rhinoplasty wer...

[Chart: number of search results per year]