Three-dimensional reconstruction and texturing of museographic objects using multiple images and stereoscopic depth map fusion
Abstract
Many techniques exist to generate a 3D model from a limited set of photographs. Extracting depth information from stereoscopic correspondences generally produces a large number of depth maps that must then be merged. Generating a dense representation of the object (shape-from-shading, level-set minimization) can be time-consuming. Shape-from-silhouettes and hybrid techniques, which carve an approximate model under consistency constraints, suffer from a lack of scalability. In the context of the GANTOM project, we developed an acquisition system based on an automatic turntable, high-resolution digital cameras (up to 4 Mpixels) and accurate encoders. This setup allows a very fine calibration of the acquisition process, with an accuracy better than 0.5 mm. In this paper, we present the GANTOM software system, which can be subdivided into the following components:
- Depth map extraction using stereoscopic correspondence; different search spaces are compared.
- Depth map fusion using a voxel-based algorithm, called shape-from-depth, which is a generalization of shape-from-silhouettes (see the sketch after this list).
- Conversion from the volumetric model to a polyhedral model.
- Texture extraction using the polyhedral model and the photographs as inputs.
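As a rough illustration of the shape-from-depth fusion step, the sketch below carves every voxel that at least one depth map proves to lie in free space; only voxels consistent with all views survive, and the scheme reduces to shape-from-silhouettes when the depth maps are replaced by binary silhouettes. The pinhole model P = K[R|t], the data layout and all names (voxel_centers, views, tolerance) are assumptions made for this example, not the actual GANTOM implementation.

```python
import numpy as np

def shape_from_depth(voxel_centers, views, tolerance=1e-3):
    """Keep only the voxels that no depth map proves to be empty space.

    voxel_centers : (N, 3) array of voxel centres in world coordinates.
    views         : list of dicts, each with
                    'P'     -> (3, 4) projection matrix P = K [R | t],
                    'depth' -> (H, W) depth map (np.inf where no measurement).
    Returns an (N,) boolean occupancy mask.
    """
    n = len(voxel_centers)
    occupied = np.ones(n, dtype=bool)
    hom = np.hstack([voxel_centers, np.ones((n, 1))])   # homogeneous coordinates

    for view in views:
        P, depth_map = view['P'], view['depth']
        h, w = depth_map.shape

        proj = hom @ P.T            # (N, 3) homogeneous image coordinates
        z = proj[:, 2]              # depth of each voxel in this camera
        in_front = z > 0

        # Pixel coordinates of the voxels that project in front of the camera.
        u = np.zeros(n, dtype=int)
        v = np.zeros(n, dtype=int)
        u[in_front] = np.round(proj[in_front, 0] / z[in_front]).astype(int)
        v[in_front] = np.round(proj[in_front, 1] / z[in_front]).astype(int)

        visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.where(visible & occupied)[0]

        measured = depth_map[v[idx], u[idx]]
        # A voxel strictly closer to the camera than the measured surface
        # lies in free space for this view and is carved away.
        occupied[idx[z[idx] < measured - tolerance]] = False

    return occupied
```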