Robust Action Recognition Using Multi-Scale Spatial-Temporal Concatenations of Local Features as Natural Action Structures
Abstract
Human and many other animals can detect, recognize, and classify natural actions in a very short time. How this is achieved by the visual system and how to make machines understand natural actions have been the focus of neurobiological studies and computational modeling in the last several decades. A key issue is what spatial-temporal features should be encoded and what the characteristics of their occurrences are in natural actions. Current global encoding schemes depend heavily on segmenting while local encoding schemes lack descriptive power. Here, we propose natural action structures, i.e., multi-size, multi-scale, spatial-temporal concatenations of local features, as the basic features for representing natural actions. In this concept, any action is a spatial-temporal concatenation of a set of natural action structures, which convey a full range of information about natural actions. We took several steps to extract these structures. First, we sampled a large number of sequences of patches at multiple spatial-temporal scales. Second, we performed independent component analysis on the patch sequences and classified the independent components into clusters. Finally, we compiled a large set of natural action structures, with each corresponding to a unique combination of the clusters at the selected spatial-temporal scales. To classify human actions, we used a set of informative natural action structures as inputs to two widely used models. We found that the natural action structures obtained here achieved a significantly better recognition performance than low-level features and that the performance was better than or comparable to the best current models. We also found that the classification performance with natural action structures as features was slightly affected by changes of scale and artificially added noise. 
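The extraction pipeline described above (multi-scale patch-sequence sampling, component analysis, clustering, and cross-scale combination) can be sketched as a toy in Python. This is an illustrative sketch only: it uses PCA whitening as a simple stand-in for the paper's independent component analysis, a minimal k-means in place of the paper's clustering procedure, and arbitrary patch sizes, counts, and cluster numbers not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": (frames, height, width) of random values.
video = rng.standard_normal((30, 64, 64))

def sample_patch_sequences(video, size, t_len, n):
    """Sample n spatial-temporal patch sequences of spatial size `size`
    and temporal length `t_len`, flattened to vectors."""
    T, H, W = video.shape
    out = np.empty((n, t_len * size * size))
    for i in range(n):
        t = rng.integers(0, T - t_len + 1)
        y = rng.integers(0, H - size + 1)
        x = rng.integers(0, W - size + 1)
        out[i] = video[t:t + t_len, y:y + size, x:x + size].ravel()
    return out

def kmeans(X, k, iters=20):
    """Minimal k-means, standing in for the clustering of components."""
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Step 1: sample patch sequences at multiple spatial-temporal scales.
scales = [(8, 3), (16, 5)]  # (spatial size, temporal length) -- illustrative
per_scale_labels = []
for size, t_len in scales:
    X = sample_patch_sequences(video, size, t_len, n=200)
    # Step 2: whiten via PCA (a stand-in for independent component analysis).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = (Xc @ Vt[:20].T) / (S[:20] + 1e-8)
    # Step 3: cluster the component coefficients.
    per_scale_labels.append(kmeans(comps, k=4).tolist())

# A "natural action structure" is then a unique combination of cluster
# ids across the selected scales.
codes = list(zip(*per_scale_labels))
print(len(codes), codes[0])
```

Each sampled location is thus described by a tuple of cluster memberships, one per scale, and the vocabulary of distinct tuples plays the role of the structure set used for classification.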
We concluded that the natural action structures proposed here can be used as the basic encoding units of actions and may hold the key to natural action understanding.
Citation
PLoS One. 2012 Oct 4; 7(10): e46686
doi: 10.1371/journal.pone.0046686
Related articles
- Multi-scale spatial concatenations of local features in natural scenes and scene classification.
- Authors: Zhu X, Yang Z
- Issue date: 2013
- Assessment and statistical modeling of the relationship between remotely sensed aerosol optical depth and PM2.5 in the eastern United States.
- Authors: Paciorek CJ, Liu Y, HEI Health Review Committee
- Issue date: 2012 May
- Data-driven spatio-temporal RGBD feature encoding for action recognition in operating rooms.
- Authors: Twinanda AP, Alkan EO, Gangi A, de Mathelin M, Padoy N
- Issue date: 2015 Jun
- In Vivo Observations of Rapid Scattered Light Changes Associated with Neurophysiological Activity.
- Authors: Frostig RD, Rector DM, Yao X, Harper RM, George JS
- Issue date: 2009
- Performance of a Computational Model of the Mammalian Olfactory System.
- Authors: Persaud KC, Marco S, Gutiérrez-Gálvez A, Benjaminsson S, Herman P, Lansner A
- Issue date: 2013
Related items
Showing items related by title, author, creator and subject.
- What the "Moonwalk" Illusion Reveals about the Perception of Relative Depth from Motion
  Kromrey, Sarah; Bart, Evgeniy; Hegdé, Jay; Brain & Behavior Discovery Institute; Vision Discovery Institute; Department of Ophthalmology (2011-06-22)
  When one visual object moves behind another, the object farther from the viewer is progressively occluded and/or disoccluded by the nearer object. For nearly half a century, this dynamic occlusion cue has been thought to be sufficient by itself for determining the relative depth of the two objects. This view is consistent with the self-evident geometric fact that the surface undergoing dynamic occlusion is always farther from the viewer than the occluding surface. Here we use a contextual manipulation of a previously known motion illusion, which we refer to as the "Moonwalk" illusion, to demonstrate that the visual system cannot determine relative depth from dynamic occlusion alone. Indeed, in the Moonwalk illusion, human observers perceive a relative depth contrary to the dynamic occlusion cue. However, the perception of the expected relative depth is restored by contextual manipulations unrelated to dynamic occlusion. On the other hand, we show that an Ideal Observer can determine relative depth using dynamic occlusion alone in the same Moonwalk stimuli, indicating that the dynamic occlusion cue is, in principle, sufficient for determining relative depth. Our results indicate that in order to correctly perceive relative depth from dynamic occlusion, the human brain, unlike the Ideal Observer, needs additional segmentation information that delineates the occluder from the occluded object. Thus, neural mechanisms of object segmentation must, in addition to motion mechanisms that extract information about relative depth, play a crucial role in the perception of relative depth from motion.
- A Hierarchical Probabilistic Model for Rapid Object Categorization in Natural Scenes
  He, Xiaofu; Yang, Zhiyong; Tsien, Joe Z.; Brain & Behavior Discovery Institute; Department of Ophthalmology; Department of Neurology (2011-05-25)
  Humans can categorize objects in complex natural scenes within 100–150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional (e.g., ~6,000 in a leading model) feature space and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization.
- Workstation Configuration Policy
  Information Technology Support and Services; Georgia Health Sciences University (2005-12)
  The purpose of this document is to establish standards for the base configuration of workstation computers that are authorized to operate within Georgia Health Sciences University. Since data that is created, manipulated and stored on these systems may be proprietary, sensitive or legally protected, it is essential that the computer systems and computer network, as well as the data they store and process, be operated and maintained in a secure environment and in a responsible manner. It is also critical that these systems and machines be protected from misuse and unauthorized access. Therefore, ITSS requires that all access to workstations be authorized and that all data be safeguarded.