Hierarchical Saliency-based Representation for Human Interaction Recognition
First published: 2014-12-24
Abstract: Recognizing human interactions is one of the most important problems in computer vision and impacts a wide range of applications. This paper presents a new method for recognizing two-person interactions using a hierarchical saliency-based representation. Hierarchical saliency is defined as the Salient Action at the highest level, the Salient Point at the middle level, and the Salient Joint at the lowest level of an interaction, each determined by the greatest spatial-temporal positional change at that level. Given the saliency of interactions at different levels, several types of features are extracted according to the discriminative characteristics of behaviors, such as spatial displacement and direction relations. Since few test datasets are publicly available, we created a new dataset, K3HI, containing eight types of interactions, captured with a new depth sensor, the Microsoft Kinect. The method was tested using a multi-class SVM classifier; our experimental results demonstrate that the hierarchical saliency-based representation achieves an average recognition accuracy of 90.29%, outperforming methods based on other features.
Keywords: pattern recognition; human interaction recognition; Kinect; hierarchical saliency
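The abstract states that saliency at each level is determined by the greatest spatial-temporal positional change. A minimal sketch of how the lowest level, the Salient Joint, might be selected from Kinect skeleton data is shown below; the function name, array layout, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def salient_joint(skeleton_seq):
    """Pick the joint with the greatest total spatial-temporal change.

    skeleton_seq: (T, J, 3) array of J joint positions over T frames
    (layout assumed for illustration).
    """
    # Frame-to-frame displacement magnitude of each joint: (T-1, J).
    disp = np.linalg.norm(np.diff(skeleton_seq, axis=0), axis=2)
    # Sum displacements over time; the joint with the maximum total
    # positional change is taken as salient.
    total = disp.sum(axis=0)
    return int(np.argmax(total))

# Toy example: three joints, five frames; only joint 1 moves.
seq = np.zeros((5, 3, 3))
seq[:, 1, 0] = np.arange(5)  # joint 1 translates along x
print(salient_joint(seq))  # 1
```

The same max-displacement criterion could be applied analogously at the higher levels (Salient Point, Salient Action) over coarser spatial-temporal units.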