Most of our 3D models start with a surface model of a specimen. Some of these were created with a Microscribe 3DLX point digitizer. More recently, many of our models have been generated with a Creaform HandyScan laser-scanning digitizer. Models from the invertebrate Type and Figured Collection were produced using a “visual hull” approach, implemented in the commercial software 3DSOM Pro. Still others were derived from photogrammetry (using PhotoModeler, Autodesk 123D Catch, or VisualSFM + CMPMVS) or from X-ray computed tomography scans.
Texture mapping, which gives the models a photorealistic appearance, has been done mostly with 3DSOM Pro. In this procedure, 2D images of a specimen are extracted from their background and then projected onto the surface of the model, aligned to the model's topographic features. Multiple images taken from different positions around the form are blended to approximate the local color and texture of the original specimen. Onscreen exploration of these texture-mapped models closely approximates the experience of handling the original specimens in person.
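The projection-and-blending idea described above can be sketched in a few lines of NumPy. This is an illustrative per-vertex version, not 3DSOM Pro's actual algorithm: each calibrated photograph (intrinsics `K`, pose `R`, `t` are assumed inputs) is projected onto the mesh vertices, and each view's color contribution is weighted by how directly that view faces the surface.

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project 3D vertices into a camera image (pinhole model)."""
    cam = (R @ vertices.T).T + t               # world -> camera coordinates
    uv = (K @ cam.T).T                         # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]   # pixel coordinates, depth

def blend_vertex_colors(vertices, normals, views):
    """Blend per-vertex colors from several photographs, weighting each
    view by the cosine of the angle between the vertex normal and the
    direction from the vertex to that view's camera center."""
    colors = np.zeros((len(vertices), 3))
    weights = np.zeros(len(vertices))
    for image, K, R, t in views:
        uv, depth = project_vertices(vertices, K, R, t)
        h, w = image.shape[:2]
        # View direction from each vertex toward the camera center (world coords)
        cam_center = -R.T @ t
        view_dir = cam_center - vertices
        view_dir /= np.linalg.norm(view_dir, axis=1, keepdims=True)
        w_cos = np.einsum('ij,ij->i', normals, view_dir)
        # Keep vertices in front of the camera, facing it, and inside the image
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        valid = (depth > 0) & (w_cos > 0) & \
                (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors[valid] += w_cos[valid, None] * image[v[valid], u[valid]]
        weights[valid] += w_cos[valid]
    ok = weights > 0
    colors[ok] /= weights[ok, None]            # normalize the weighted sum
    return colors
```

Views that graze the surface at a shallow angle receive little weight, which is why oblique photographs contribute less than head-on ones; production tools add occlusion testing and seam-aware blending on top of this basic scheme.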
Wu, C. (2013). Towards Linear-time Incremental Structure from Motion. 3DV.
Wu, C. (2007). SiftGPU: A GPU implementation of Scale Invariant Feature Transform (SIFT). http://cs.unc.edu/~ccwu/siftgpu.
Wu, C., Agarwal, S., Curless, B., Seitz, S.M. (2011). Multicore Bundle Adjustment. CVPR.
Jancosek, M., Pajdla, T. (2011). Multi-View Reconstruction Preserving Weakly-Supported Surfaces. CVPR.