Reality capture? Scanning a person in 3D? Photogrammetry? I had no clue about any of this technology before learning about it this week in Prof. Dan Pacheco’s Emerging Media Platforms class at Syracuse University.
Reality capture is using technology to create a digital 3D model of an object or person from the real world.
Pacheco showed a colleague using a structure sensor to completely scan him, which produced a 3D model that could then be printed on a 3D printer, creating a professor action figure. This 3D model of the professor was also animated by auto-rigging a virtual skeleton beneath his virtual skin.
We then learned about photogrammetry, which is using multiple 2D photos to reconstruct a 3D image. The Smithsonian Institution is using this technique to make 3D scans of its collection of artifacts available online. Curators are then annotating the 3D models so you can take virtual tours without visiting Washington, D.C.
This is fascinating stuff. The class had to try its hand at annotating an artifact with facts to tell a story. I struggled with the assignment, but on Sketchfab.com, I was able to download this 3D model of Il Duomo di Firenze, the magnificent cathedral in Florence, Italy, and tell a bit of a story about its construction.
As this reality capture technology improves and becomes easier to use, I can see it being used in journalism. A sportswriter who wanted to analyze the swing of Red Sox slugger Mookie Betts could create 3D models and animation and have a physicist explain the physics of his hitting style.
I’m a health writer who has written about long delays in emergency rooms, so I could see creating a 3D model of a hospital emergency department and tracking patients as they travel through it, to see where the process bogs down.
I could see a virtual Anderson Cooper being placed at the scene of a wildfire so that he could safely walk us through how the fire is spreading and how firefighters are trying to contain it.
To help readers understand how a crime scene investigation works, I can see creating 3D models of a CSI tech moving about the scene, dusting for fingerprints and checking for DNA and blood spatter. This could give viewers an understanding of how forensics works without reporters contaminating the crime scene.
How might we field test these ideas? One hypothesis would be that a 3D model of Cooper at the wildfire would cost less than flying him there. To measure whether that was the case, we'd have to pay the 3D techs to do the production, have aerial drone footage for them to build the scene, and make scans of Cooper. We could total the cost of that and compare it to a flight, housing, equipment and meals for Cooper and a film crew.
That would give us a cost breakdown to measure our success, but what about the intangibles? On the plus side, it's safer, but wouldn't viewers lose Cooper's real-world reporting from the scene as he talked to fire victims fleeing their homes? How about the empathy factor? That's hard to measure, but if we're showing a sanitized 3D version from a distance, viewers might not see the horror of the victims' situation and might be less likely to donate to their cause.
If I tried the 3D re-creation of the hospital ER, my hypothesis is that viewers would engage with the story in greater numbers than if I simply showed them still photos or a video of patients going through the ER. I could produce a behind-the-scenes video of an ER and survey the viewers, asking them what they thought of it, what they would change at the ER and whether what they saw made them angry. I could measure their engagement with the video on social media. I could then produce a 3D re-creation of the ER, show it to viewers, ask them the same questions, and see if I received more likes, shares, comments and retweets.
I’m not convinced that going to all this trouble would be better than simply videotaping these scenes for viewers, but as the technology improves, it might be worth exploring for some stories like these.