Visual region understanding: unsupervised extraction and abstraction
The ability to gain a conceptual understanding of the world in uncontrolled environments is the ultimate goal of vision-based computer systems. Technological
societies today rely heavily on surveillance and security infrastructure, robotics, medical image analysis, visual data categorisation and search, and smart-device user interaction, to name a few. Of all the complex problems tackled
by computer vision today in the context of these technologies, the one that lies closest to the original goals of the field is the subarea of unsupervised scene analysis, or scene modelling. However, its common reliance on low-level features does not strike
a good balance between generality and discriminative ability, both a result and a symptom of the sensory and semantic gaps between low-level computer
representations and high-level human descriptions.
In this research we explore a general framework that addresses the fundamental
problem of universal unsupervised extraction of semantically meaningful visual
regions and their behaviours. For this purpose we address issues related to
(i) spatial and spatiotemporal segmentation for region extraction, (ii) region shape modelling, and (iii) the online categorisation of visual object classes and the spatiotemporal analysis of their behaviours. Under this framework we propose (a)
a unified region merging method and spatiotemporal region reduction, (b) shape
representation by the optimisation and novel simplification of contour-based growing neural gases, and (c) a foundation for the analysis of visual object motion properties using a shape- and appearance-based nearest-centroid classification algorithm
and trajectory plots for the obtained region classes.
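To make the region merging principle in (a) concrete, the following is a minimal sketch of best-merge iteration on an integer label map. The similarity measure (mean-intensity difference), the 4-neighbourhood adjacency, and all names here are illustrative assumptions, not the method proposed in this work:

```python
import numpy as np

def merge_regions(labels, image, threshold):
    """Best-merge region merging sketch: repeatedly merge the single most
    similar pair of adjacent regions until the best pair's mean-intensity
    difference exceeds the threshold. `labels` is an integer label map."""
    labels = labels.copy()
    while True:
        means = {i: image[labels == i].mean() for i in np.unique(labels)}
        # adjacency from horizontal and vertical 4-neighbour label edges
        right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], 1)
        down = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], 1)
        edges = np.concatenate([right, down])
        edges = edges[edges[:, 0] != edges[:, 1]]
        if len(edges) == 0:
            break  # a single region remains
        pairs = set(map(tuple, np.sort(edges, 1)))
        # best merge: the adjacent pair with the smallest mean difference
        i, j = min(pairs, key=lambda p: abs(means[p[0]] - means[p[1]]))
        if abs(means[i] - means[j]) > threshold:
            break  # time-adaptive thresholding would vary this bound
        labels[labels == j] = i  # absorb region j into region i
    return labels
```

A real implementation would grow regions in parallel and adapt the threshold over time, as described below; this sketch only shows the best-merge core.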
Specifically, we formulate a region merging spatial segmentation mechanism
that combines and adapts features previously shown to be individually useful,
namely parallel region growing, the best-merge criterion, a time-adaptive threshold, and region reduction techniques. For spatiotemporal region refinement we
consider both scalar intensity differences and vector optical flow. To model the shapes of the visual regions thus obtained, we adapt the growing neural gas for
rapid region contour representation and propose a contour simplification technique. A fast unsupervised nearest-centroid online learning technique then groups observed region instances into classes, for which we are able to analyse spatial
presence and spatiotemporal trajectories. The results of this analysis show semantic correlations with real-world object behaviour. Evaluation of all steps against
standard metrics on standard datasets validates their performance.
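As a rough illustration of how an unsupervised nearest-centroid online learner can group region instances into classes, here is a minimal sketch. The `radius` threshold, the running-mean centroid update, and all names are assumptions for illustration; in this work the feature vectors would combine shape and appearance descriptors:

```python
import numpy as np

class OnlineNearestCentroid:
    """Unsupervised online nearest-centroid grouping sketch: each incoming
    feature vector joins the nearest existing class if that class centroid
    lies within `radius`, updating the centroid incrementally; otherwise
    the vector founds a new class."""

    def __init__(self, radius):
        self.radius = radius
        self.centroids = []  # running mean feature vector per class
        self.counts = []     # number of instances assigned per class

    def observe(self, x):
        """Assign one region instance to a class and return its class id."""
        x = np.asarray(x, dtype=float)
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                # incremental running-mean update of the winning centroid
                self.counts[k] += 1
                self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
                return k
        # no sufficiently close class: create a new one
        self.centroids.append(x)
        self.counts.append(1)
        return len(self.centroids) - 1
```

Feeding the class ids and region positions over time into per-class plots would then yield the spatial-presence maps and spatiotemporal trajectories analysed above.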