Implementing environment comprehension in machines is a major challenge for mankind. The ability to detect, classify and interrelate the perceptible elements captured by cameras is the link between our global network of cameras and Artificial Intelligence. The next great leap will therefore occur when digital captures of the outside world can be interpreted by neural networks, without requiring a human interpreter and translator. To make pictures understandable by machines, they must first be reduced to atomic, describable concepts such as points, lines or ellipses. The simplest forms, like points or lines, are referred to as primitives. The most traditional techniques detect primitives and classify them according to their apparent attributes. During these early stages, many problems had to be solved, caused by the limitations of digital technology, the similarity between primitives within the same image, the impossibility of characterizing them unequivocally, and the nature of light capture, with its changes in illumination and contrast. These approaches for the description and matching of primitives have been developed in parallel with spatial abstraction methods, so that nowadays it is common to derive from a series of pictures a unique 3D representation that includes estimates of some of the captured primitives, together with the relative position and orientation of the cameras. This thesis focuses on a single kind of primitive: the straight line segment. It covers straight segment matching between images and the other operations that lead to the creation of 3D representations from these detected primitives. Straight line segments are frequently found in captures of man-made environments. The inclusion of straight lines in 3D representations provides structural information about the captured shapes and their boundaries, such as the intersections of planar structures.
Keywords: Line matching, 3D abstraction, Structure-From-Motion