By: Dr. Julie Winchester, Dr. Doug Boyer, and Jocelyn Triplett
MorphoSource is a publicly accessible web data repository that allows museums, researchers, and scholars to upload, archive, curate, and share 3D data (as well as 2D images and video) representing physical objects of scholarly interest, mostly biological natural history specimens and cultural heritage objects. For educators, students, and interested members of the general public, MorphoSource is an online gateway to the 3D collections of some of the world’s largest museums, allowing direct viewing and “virtual handling” of objects critical to scientific understanding and the advancement of human knowledge.
3D models, CT and photogrammetry image stacks, 2D images, and videos are contributed to MorphoSource by a community of more than 2,300 users. The repository holds over 210,000 media datasets, including 95,000 published 3D models that can be viewed and interacted with immediately in a user’s browser, with no download required; 45,000 of these models can also be openly downloaded for further visualization and measurement. Data are contributed by large and small museums of natural history and cultural heritage, by researchers and research laboratories, and by facilities that manage scanning and imaging devices, such as those served and supported by the NoCTURN network. Researchers can use the repository to share findings and obtain permanent identifiers for their media datasets, while imaging facilities and object-managing collections can use MorphoSource to track the impact of datasets and to directly manage datasets representing physical objects imaged by their facility or managed by their organization.
MorphoSource Media 000679621, skull of an olive sea snake (Aipysurus laevis, L-KS:0564) with individual bones colored. Created and uploaded by Jaimi Gray.
Recently, we unveiled a new viewer for 3D models on MorphoSource. An instance of this viewer, called aleph-r3f, is embedded above; please check it out! Interactively viewing 3D content on the web is an essential part of MorphoSource: it lets users engage with and evaluate data without having to download each individual dataset just to find out whether it is right for them.
While we have always provided a 3D viewer on MorphoSource, we are excited about the advantages that aleph-r3f offers. The viewer allows users to orbit around 3D models and view them from many different angles, optionally while visualizing a coordinate grid or 3D axis markers to help orient the user within 3D space. It also provides annotation tools, so that users can highlight points of interest on models and export these annotations in CSV or JSON formats. For users interested in performing research on models, the annotation tool can be used to collect landmark data for geometric morphometric analysis. The viewer also provides tools to measure lengths and angles. These can be measured between points of interest on the model (object measurement), or a user can simply lay a ruler “over the screen” (screen measurement) to gauge the length of an object or the angles between its margins without needing to fuss over specific point coordinates on the model surface. Relatedly, the viewer offers two render modes: perspective and orthographic. The orthographic view is required when measuring from the “screen plane,” but it may also be useful for documenting object shape from standardized anatomical views. The viewer can also be easily embedded in non-MorphoSource webpages, for use in coursework or elsewhere.
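To illustrate how exported point annotations might feed into morphometric work, here is a minimal sketch that parses a landmark CSV and computes an inter-landmark distance. The column names (`label`, `x`, `y`, `z`) and the landmark names are hypothetical assumptions for illustration, not the viewer’s documented export format.

```python
import csv
import io
import math

# Hypothetical annotation export from the viewer; the real CSV layout
# produced by aleph-r3f may use different column names.
exported_csv = """label,x,y,z
bregma,0.0,12.4,3.1
lambda,0.0,4.8,2.9
"""

# Parse each row into a dict mapping landmark label -> (x, y, z) tuple.
landmarks = {}
for row in csv.DictReader(io.StringIO(exported_csv)):
    landmarks[row["label"]] = (
        float(row["x"]),
        float(row["y"]),
        float(row["z"]),
    )

def distance(a: str, b: str) -> float:
    """Euclidean distance between two named 3D landmarks."""
    return math.dist(landmarks[a], landmarks[b])

print(f"bregma-lambda distance: {distance('bregma', 'lambda'):.3f}")
```

From a table of such pairwise distances (or the raw coordinates themselves), standard geometric morphometric analyses can proceed in whatever toolkit the researcher prefers.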
For more technical users, aleph-r3f is fully open source software, available on GitHub and NPM for use outside of MorphoSource. It is built on Three.js and react-three-fiber, so it speaks the language of one of the most widely used 3D web frameworks while doing so in a modern, React-based way. Aleph-r3f can also be integrated directly with Universal Viewer, a standards-compliant IIIF viewer for both 3D and non-3D image and video content. We are excited to see where we can take the viewer moving forward!