Automating image capture for 3D scanning through photogrammetry

Sep 29, 2018

Michael J. Bennett

We love it when our readers get in touch with us to share their stories. This article was contributed to DIYP by a member of our community. If you would like to contribute an article, please contact us here.

3D data creation is part of a growing trend in the use of computational imaging techniques within cultural heritage digitization shops. In particular, operational adoption of photogrammetry has been witnessed at such institutions as the Minneapolis Institute of Art (MIA), the Smithsonian, and the University of Virginia Library.

3D data use cases abound. Models can be displayed and manipulated in various viewers, 3D printed, or repurposed in VR environments. Virtual models can also serve as teaching tools, support conservation condition assessments of objects over time, and open new lines of inquiry and digital scholarship around such data sets.

One of the current bottlenecks in the multi-step workflow that leads to the creation of original 3D data is the capture stage. In the case of photogrammetry, automating original 2D image capture under controlled shooting conditions is one way not only to scale up data creation but also to make the data more accurate and easier for 3D post-processing software to work with.

As we recently began to build out our own 3D capture capabilities at the University of Connecticut Library’s Digital Production Lab, we decided to look at existing automated systems with an eye toward customizing a rig that would best fit our space, budget, and anticipated requirements. In collaboration with ace systems integrator Michael Ulsaker, we co-designed and installed the following structure in our studio:

University of Connecticut Digital Production Lab 3D Capture System

Salient features include an automated 360-degree spin turntable and a camera column that can be programmed to seamlessly control movements along the X, Y, and Z axes during a given shooting session. Both the turntable and camera are driven by an integrated combination of five stepper motors.

Cognisys NEMA 17 Stepper Motor and Canon 5D II

All of this movement is coordinated through a linked pair of Cognisys Stackshot 3X modules. Each module, which in essence acts like a programmable logic controller, has a haptic touchscreen that serves as a GUI for the Cognisys software.

Cognisys Stackshot 3X Controllers
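As a rough illustration of the kind of coordination logic such a controller runs, here is a minimal Python sketch of the move, settle, and shoot loop. The `RigSim` class and its methods are purely hypothetical stand-ins for whatever interface a motion controller exposes, not any actual Cognisys API:

```python
import time

class RigSim:
    """Toy stand-in for the controller pair; it only records the
    commands it receives so the move -> settle -> shoot sequence can
    be seen end to end. Illustrative, not the Cognisys interface."""
    def __init__(self):
        self.log = []
    def rotate_turntable(self, deg):
        self.log.append(("rotate", deg))
    def move_camera(self, elev_deg):
        self.log.append(("elevate", elev_deg))
    def trigger_shutter(self):
        self.log.append(("shoot",))

def run_pass(rig, elevations=(15, 45, 75), stops=24, settle_s=0.0):
    """One hemispheric pass: per camera elevation, step the turntable
    through a full spin, pausing before each exposure."""
    step = 360.0 / stops
    for elev in elevations:
        rig.move_camera(elev)
        for _ in range(stops):
            rig.rotate_turntable(step)
            time.sleep(settle_s)  # let vibration die down before firing
            rig.trigger_shutter()

rig = RigSim()
run_pass(rig)
print(sum(1 for e in rig.log if e == ("shoot",)), "exposures")  # 72
```

The settle delay matters in practice: firing immediately after a motor move risks motion blur, which photogrammetry software tolerates poorly.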

Successful photogrammetry requires a 2D image set of an object captured from overlapping look angles. This must be done comprehensively across a subject’s entire surface to give post-processing software the best opportunity to create 3D data from the original. Turntables are a good capture solution in this scenario, as they help control the needed overlap from shot to shot and permit consistent stationary lighting to be built into the overall design. Beyond object movement, however, the camera must still be repositioned to different look angles above the subject after each 360-degree spin for optimal capture coverage. This is where precisely programmed turntable rotation and camera movement come together to create high-quality source imaging:

YouTube video
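To make the rotation-and-elevation interplay concrete, here is a sketch of how one might plan the turntable stops per elevation ring so that consecutive frames keep a target horizontal overlap. The field-of-view, overlap, and elevation values are illustrative assumptions, not the Lab's actual settings:

```python
import math

def capture_schedule(elevations_deg=(15, 35, 55, 75),
                     h_overlap=0.66, hfov_deg=40.0):
    """Return (elevation, turntable_angle) stops for one hemispheric pass.

    Each elevation ring gets enough turntable stops that consecutive
    frames overlap horizontally by roughly h_overlap of the camera's
    horizontal field of view (hfov_deg).
    """
    shots = []
    for elev in elevations_deg:
        # Angular step that still leaves the requested overlap.
        step = hfov_deg * (1.0 - h_overlap)
        n = max(1, math.ceil(360.0 / step))
        step = 360.0 / n  # distribute stops evenly around the spin
        for i in range(n):
            shots.append((elev, round(i * step, 2)))
    return shots

plan = capture_schedule()
print(len(plan), "stops; first ring begins:", plan[:3])
```

With these assumed values the plan comes to 27 stops per ring across four rings, or 108 exposures per pass; tighter overlap or a narrower lens drives the count up quickly, which is exactly why automating the capture stage pays off.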

The net result is a series of hemispheric image sets, viewed here in Agisoft Photoscan, where each individual 2D image capture is represented by a blue rectangle around the generated 3D model:

Model View, Agisoft Photoscan

Once exported from post-processing software, the model can then be uploaded to an online viewer site like Sketchfab, where it may be shared more broadly with the online world.

Though the Polaroid Test model presented common photogrammetric challenges, such as specular highlights from its more reflective surfaces and self-occluded areas along the bellows, this initial trial, exported straight from Photoscan, was promising nonetheless. A second test, this time of a small gift-store duck with a terracotta-like surface, was handled more elegantly by the software, which produced a watertight model.

After our initial test phase concludes, we hope to eventually begin work on aspects of the Connecticut Archaeology Center’s bone collection and selections from the department of Ecology and Evolutionary Biology’s Biodiversity Research Collections, both of which are housed nearby on campus.

About the Author

Michael J. Bennett is Head of Digital Imaging and Conservation at the University of Connecticut. There, he oversees the digital capture and conservation operations for the University’s archives and special collections. His research interests include technologies and techniques that focus on digitization, post-processing, and 2D and 3D data formats. You can find out more about Michael and see his work over on his website, Tundra Graphics.

This article was also published here, and shared with permission.
