Robust and fast scene recognition in robotics through the automatic identification of meaningful images
Scene recognition remains an important topic in many fields, and robotics is no exception. Nevertheless, this task is view-dependent, which means that there are preferable directions from which to recognize a particular scene. This occurs in both human and computer vision-based classification, which often turn out to be biased. In our case, instead of trying to improve the generalization capability across different view directions, we have opted to develop a system capable of filtering out noisy or meaningless images while retaining those views from which the scene can likely be correctly identified. Our proposal relies on a heuristic metric based on the detection of key-points on 3D meshes (Harris 3D). This metric is then used to build a model that combines a Minimum Spanning Tree and an SVM. We have performed an extensive set of experiments addressing (a) the search for efficient visual descriptors, (b) the analysis of the extent to which our heuristic metric resembles human criteria for relevance and, finally, (c) the experimental validation of our complete proposal. In these experiments we have used both a public image database and images collected at our research center.
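The pipeline described above (relevance-based filtering of views, followed by a Minimum Spanning Tree over the retained views and an SVM classifier) can be illustrated with a minimal sketch. This is not the paper's implementation: the descriptors, labels, and per-view relevance scores are synthetic stand-ins (in the actual system, relevance would come from Harris 3D key-point detection on the scene's 3D mesh), and the relevance threshold of 0.3 is an arbitrary choice for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: descriptor vectors for views of two scenes,
# plus a per-view relevance score. In the paper, relevance derives
# from Harris 3D key-point counts on the mesh; random here.
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
relevance = rng.uniform(0, 1, 40)

# 1) Filter out low-relevance (noisy or meaningless) views.
#    The 0.3 threshold is an illustrative assumption.
keep = relevance > 0.3
Xk, yk = X[keep], y[keep]

# 2) Build a Minimum Spanning Tree over pairwise descriptor
#    distances to capture the structure of the retained views.
D = squareform(pdist(Xk))
mst = minimum_spanning_tree(D).toarray()

# 3) Train an SVM on the retained, relevant views only.
clf = SVC(kernel="rbf").fit(Xk, yk)
acc = clf.score(Xk, yk)
```

A connected MST over n retained views always has n - 1 edges, which gives a quick sanity check on step 2; the SVM in step 3 then operates only on views that passed the relevance filter.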