Dissertation: Efficient 3D Reconstruction using Point Cloud Registration
RGB-D sensors have become popular and affordable in recent years. Depth sensors such as the Microsoft Kinect can capture point clouds at 30 frames per second. By aligning successive point clouds with the Iterative Closest Point (ICP) algorithm, a 3D model of the scanned object can be constructed.
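The alignment step described above can be illustrated with a minimal point-to-point ICP sketch. This is not the dissertation's implementation; it is an assumed, simplified version using NumPy, with brute-force nearest-neighbour matching and the Kabsch/SVD method for the rigid transform. The function names (`best_fit_transform`, `icp`) are illustrative, not from the source.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Kabsch: rigid (R, t) minimizing ||src @ R.T + t - dst|| for paired points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30, tol=1e-8):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch fits."""
    cur = src.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds; use a k-d tree at scale).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        err = d[np.arange(len(cur)), idx].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
    # Recover the total transform mapping the original src onto its aligned position.
    R_total, t_total = best_fit_transform(src, cur)
    return R_total, t_total, err
```

For example, aligning a cloud against a slightly rotated and translated copy of itself should recover the applied transform; real Kinect frames additionally need outlier rejection and downsampling before this step works reliably.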
Several 3D reconstruction methods and scanners are available today. They can be categorised as contact scanners, non-contact active scanners (for example, RGB-D sensors), and non-contact passive scanners (for example, multiple cameras forming stereoscopic vision). Based on a review of 3D reconstruction and modelling research, the most effective method for creating models of small to medium-sized real-life objects located indoors has been implemented. Through analysis of different ICP algorithms for converting point clouds into a 3D model, the best algorithm for each scanning scenario has been identified. The accuracy of the models created can be measured by the number of vertices produced, the number of error points, and the number of blind spots (empty regions in the model).
The process sets spatial boundaries on the capture volume and filters out environmental noise, so that the finished model replicates the real-life object scanned.
The tool implemented can be used by museums to replicate and 3D print their artefacts for display while safely securing the originals.