Monday, July 26, 2021

Processing the Data - Terrestrial Laser Scanning at Piney Croft

After we captured the data in the field, I returned to the lab a few days later to get hands-on experience with the post-processing component of the Leica RTC360 workflow. Both photogrammetry and laser scanning demand considerable processing power and wait time, but because laser scanning produces so much more data, it takes far longer to process. Once the scans were loaded, the software presented a digital site map we could manipulate, with a series of dots representing each scan position we had monitored on the iPad in the field, complete with every link we made on the ground.

Each point on the map comes with a smattering of data, creating a partial point cloud of everything the scanner was able to see from its location. Each link that connects two points essentially pulls those two partial point clouds into one frame; once every point is linked, the combined point cloud should resemble an accurate depiction of what we scanned. The links also serve as the main way to align each partial point cloud, a process similar to alignment in photogrammetry. Once a link is made between two scans, the main workflow involves inspecting the pair to ensure they are aligned properly and, if not, fixing them.
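To make the idea of a "link" concrete, here is a minimal sketch (not the Leica software, just an illustration in NumPy): a link between two scans amounts to a rigid transform, a rotation R and translation t, that maps one partial point cloud into the coordinate frame of the other so the two can be merged. The point values and the transform below are invented toy data.

```python
import numpy as np

def apply_link(points, R, t):
    """Map an (N, 3) partial point cloud into the linked scan's frame."""
    return points @ R.T + t

# Two toy partial clouds of the same corner seen from different scanner spots.
scan_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
# scan_b is the same corner observed from a station offset by (2, 0, 0):
scan_b = scan_a - np.array([2.0, 0.0, 0.0])

# The "link" for this pair is an identity rotation plus a (2, 0, 0) shift.
R = np.eye(3)
t = np.array([2.0, 0.0, 0.0])

# Merging the linked clouds puts both scans in one shared frame.
merged = np.vstack([scan_a, apply_link(scan_b, R, t)])
```

Chaining links like this across every scan position is what stitches all the partial clouds into a single site-wide point cloud.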

Figure 1

This alignment process is depicted in Figure 2, a top-down view of the site showing an orange depiction, one of the partial point clouds, and a teal depiction, the currently selected partial point cloud that is part of this link. This particular set of scans had a hard time aligning in the field, which would leave a cluttered and confusing point cloud if left alone. The goal is to pivot the teal scan to match the orange so it sits directly on top of the original. Once the top-down view is aligned, we switch to a lateral viewpoint to best align the floors and ceilings of the orange and teal scans to one another. Upon verifying that both the lateral and top-down views are aligned, we clicked "join and optimize" in the lower right-hand corner to create the link and let the program search for more potential links on its own. The rest of the post-processing mainly involves finding scans that could make good links, verifying the alignment of each one, and letting the program attempt to find links on its own. The result looks like the right screen in Figure 1: a clean and concise scan.
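The "pivot the teal scan onto the orange" step can be sketched mathematically. Assuming corresponding points between the two clouds are known (registration software typically finds them automatically, e.g. with ICP), the Kabsch algorithm computes the best-fit rotation and translation. This is an illustration of the underlying idea, not Leica's implementation; the toy clouds below are invented.

```python
import numpy as np

def kabsch(source, target):
    """Best-fit rigid transform: source @ R.T + t approximates target."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: the "teal" cloud is the "orange" cloud rotated 90° about z.
orange = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
teal = orange @ Rz.T

R, t = kabsch(teal, orange)
aligned = teal @ R.T + t  # teal now sits directly on top of orange
```

The same least-squares fit underlies the "join and optimize" idea: each confirmed link constrains the relative pose of two scans, and the software refines all poses together.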

This leaves us with an accurate point cloud ready to be showcased on the Chronopoints website, yet the data can be developed further for other ends. Everything around the structure, including the front and back yards, can be cut from the point cloud to remove unwanted data. The remaining structure could then be meshed in another program to create a 3D object that could be 3D printed, much like in my previous internship. A texture can also be applied to that mesh to produce a very accurate and eye-catching model that could serve as a great addition to a video game engine like Unreal or to a detailed form of digital storytelling.
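Cutting the yards out of the point cloud boils down to a bounding-box filter on the point coordinates. Dedicated tools (CloudCompare, Open3D, and the like) wrap this same idea with interactive selection; the sketch below shows the core operation on invented toy data.

```python
import numpy as np

def crop_to_box(points, box_min, box_max):
    """Keep only points inside the axis-aligned box [box_min, box_max]."""
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]

# Toy cloud: two points on the structure and two out in the yards.
cloud = np.array([[0.5,  0.5, 1.0],   # structure
                  [1.5,  0.8, 2.0],   # structure
                  [12.0, 0.0, 0.1],   # front yard
                  [-9.0, 3.0, 0.1]])  # back yard

structure = crop_to_box(cloud,
                        np.array([0.0, 0.0, 0.0]),
                        np.array([2.0, 2.0, 3.0]))
# Only the two structure points survive the crop.
```

The cropped cloud is then what gets handed off to a meshing tool to produce the printable, texturable 3D object.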


Figure 2
