Construction jobsites are teeming with data as site conditions evolve, projects progress and people, materials and vehicles move throughout the area. But contractors have yet to tap into the full data stream, according to Shaun Lewis, senior manager of reality capture at Clark Construction.
The industry could take a page from Amazon’s book when it comes to maximizing use of all the data that’s available, he told attendees at the Advancing Construction Quality conference in Baltimore this month, pointing to the e-tailer’s recommendations based on customers’ interests.
Lewis highlighted three reality capture tools in particular that can help firms reap cost savings from data that’s there for the taking. Documenting site conditions at different moments throughout a project can help firms catch design mistakes before they’re built, plan realistic task sequences, verify that subcontractors’ billing matches the work performed and more.
1. 3D laser scanners
Laser scanners are among the most common reality capture tools used on jobsites, said Lewis, and firms don’t necessarily have to pay for top-of-the-line hardware to collect data that will be useful. “One-button hardware” products “sacrifice a little bit of accuracy” compared with total stations and other higher grade scanners, he said, but they’re far more user-friendly and easy to introduce across projects.
What’s even more important than the hardware, according to Lewis, is the software used with it. Companies that feed their data into software that generates point clouds — millions of data points that together form a digital picture of an environment — can test, measure and experiment in a digital space that’s true to jobsite conditions (within a half-inch or so) but without the risk and consequences of failure.
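The kind of digital measurement a point cloud enables can be illustrated with a toy example (the coordinates and dimensions below are invented for illustration; real clouds come from scanner-processing software and contain millions of points):

```python
import numpy as np

# Toy "point cloud": each row is an (x, y, z) coordinate in feet.
# A real scan would contain millions of such points.
cloud = np.array([
    [0.0, 0.0, 0.0],   # floor corner
    [12.0, 0.0, 0.0],  # opposite corner along one wall
    [12.0, 0.0, 9.5],  # top of that wall
])

# Measure clearances digitally instead of on site:
wall_length = np.linalg.norm(cloud[1] - cloud[0])
wall_height = np.linalg.norm(cloud[2] - cloud[1])
print(wall_length, wall_height)  # 12.0 9.5
```

Because every point carries real-world coordinates, any distance, area or volume on the jobsite can be checked from a desk, which is what makes experiments like the generator-removal study below possible.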
In one case study Lewis referenced, a Clark Construction team had to determine whether a very large generator could be removed through a smaller door during a renovation. They mapped out the generator in a mesh model format, tested different sequences in a “Google Street View”-style environment, and identified the sequence that would need to be followed to successfully remove the system.
2. Drones

Unmanned aerial vehicles (UAVs), or drones, can be used with various software programs to stitch together hundreds of jobsite photos for a comprehensive aerial picture of project progress. They can also be used to measure distances and quantities that are tied to project costs. By capturing the volume of a particular dirt pile, for example, a team can determine the number of trucks that will be needed to remove it, Lewis said. If that pile includes hazardous materials, he added, the team can calculate what removing it will cost and factor that into their budget.
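The truck-count arithmetic Lewis describes can be sketched as a simple calculation (the volumes, truck capacity and swell factor below are illustrative assumptions, not figures from the talk):

```python
import math

def trucks_needed(pile_volume_yd3: float, truck_capacity_yd3: float,
                  swell_factor: float = 1.25) -> int:
    """Estimate haul trucks needed to remove a stockpile.

    Excavated soil expands ("swells") once loosened, so the in-place
    volume measured from drone photogrammetry is scaled up before
    dividing by truck capacity.
    """
    loose_volume = pile_volume_yd3 * swell_factor
    return math.ceil(loose_volume / truck_capacity_yd3)

# Hypothetical example: a 400-cubic-yard pile hauled in 14 yd3 trucks
print(trucks_needed(400, 14))  # 36 loads at a 1.25 swell factor
```

The same measured volume could be multiplied by a per-load disposal rate to budget the removal of a hazardous-material pile.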
But collecting drone data comes with two fairly significant constraints, according to Lewis. The first is that drones only offer the view from above, as opposed to the more sweeping views that laser scanners can offer. But scanners also have their limitations, Lewis noted, in that they can only collect data within the line of sight. There’s no reason the industry won’t see some combination of the two technologies down the road, though, he said.
The second constraint is heavy regulation of drone use, especially in dense urban markets. In Washington, D.C., where Clark Construction is based, drones are either “a no-go,” for the most part, Lewis said, or the lengthy approval process is not worth the effort. Instead, Clark is exploring the use of cameras on tower cranes, which sweep the site along a path much like the one a drone would take.
3. 3D cameras with AI photo tagging
3D cameras are an accessible reality capture tool, according to Lewis, because the most expensive devices they require are already present on most jobsites — iPads and iPhones. A full setup including a camera, 360-degree light, battery, charger and monopod cost Clark $610, he said, not including the Apple devices they were already using.
Photo documentation software is more expensive, and using it to give project stakeholders access to photos in a 360-degree environment is both a blessing and a curse, Lewis joked. Within certain programs, teams can evaluate construction progress in one area of the site by viewing side-by-side panoramas captured a few months apart. Plus, “if you had an incident, you can go back a day before, a week before, and see what led up to that incident,” Lewis said.
When artificial intelligence tools are added to the mix, they can analyze photos to flag safety risks like workers not wearing hardhats, he said, and distinguish between materials on the site such as concrete and rebar. This object recognition capability is refined through machine learning, Lewis said: an expert builder or safety professional, for example, can confirm whether an AI conclusion was correct or incorrect. If told it was incorrect, the AI can “learn” from the mistake and reduce its chances of erring again.
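The confirm-or-correct loop Lewis describes is, in spirit, human-in-the-loop labeling. A minimal sketch of the bookkeeping side (the class, photo IDs and label names are hypothetical, not any vendor’s API):

```python
from collections import defaultdict

class HardhatFlagger:
    """Toy human-in-the-loop loop: the model flags photos, a safety
    professional confirms or rejects each flag, and the feedback is
    stored as labeled training data for the next retraining pass."""

    def __init__(self):
        self.training_data = []                  # (photo_id, label) pairs
        self.feedback_counts = defaultdict(int)  # running accuracy tally

    def review(self, photo_id: str, predicted_flag: str, reviewer_agrees: bool):
        # A rejected flag becomes a negative example for retraining.
        label = predicted_flag if reviewer_agrees else "not_" + predicted_flag
        self.training_data.append((photo_id, label))
        self.feedback_counts["correct" if reviewer_agrees else "incorrect"] += 1

flagger = HardhatFlagger()
flagger.review("IMG_0142", "missing_hardhat", reviewer_agrees=True)
flagger.review("IMG_0178", "missing_hardhat", reviewer_agrees=False)
print(dict(flagger.feedback_counts))  # {'correct': 1, 'incorrect': 1}
```

Periodically retraining the model on the accumulated `training_data` is what lets the system “learn” not to repeat a mistake.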
There’s no shortage of data on the jobsite, nor of tools to collect it, and firms that want to introduce reality capture don’t need to radically change what they’re doing on projects. As elaborate as point clouds and other advanced models look, “it’s all just data and what you do with it is really the key,” Lewis said.