Friday, November 20, 2020

Racing against the Sunset: Shifting the Process to Reach Deadlines

 

I spent this past week creating these merged, complete models, then, after noticing slight imperfections or areas I knew I could improve, going back to take more photos and refine those areas. Two of the six models are now complete. Although six was an arbitrary goal set at the beginning and remains flexible, I am determined to finish the other four this coming weekend in order to meet the December 1st deadline for the conference.

A recording of my 15-minute PowerPoint presentation on the project is due on the first of December for the Society for Historical Archaeology (SHA) conference. I am willing to reduce the number of models from six if need be, but I intend to try to meet this initial goal. That means finishing the rest of the models and accepting the small imperfections in each one instead of the slight tweaking I have been doing this past week. I can always go back and tweak them after the December 1st deadline, or mention them among the limitations of the project. Beyond creating these models, the goal is to print them and evaluate them in relation to the low-cost methods I am using.

I adapted some of my methods over the past week and learned more about the capabilities of Metashape, which continues to surprise me the more I use it. A limitation last week was the processing time for each model, so, with deadlines approaching, I turned the detail setting down from “High” to “Medium,” which processes in about 5 minutes instead of an hour and saved me a great deal of time on this project. This does mean the overall quality of the model is lower, though still more than acceptable. I also found that the arrowheads pose a unique alignment problem: the automatic alignment of the different chunks does not really work for these sharp, flat pieces. I learned how to align the chunks manually, which is a little more time-intensive but was the last step to creating the finished model and, overall, was not too difficult.

I am surprised by the quality of these 3D models, since the photos themselves were taken on my phone, which is part of the low-cost approach of the project. I have realized over the course of this project that it is the quality of the input that most drastically affects the output. The greatest impact on these models is the number of photos collected in each profile: the fewer the photos, the blurrier and worse the final result, while a higher photo count produces a crisp, high-definition model. Incomplete or missing sections of certain models have already sent me back to restart the process, which is only made worse if I rush the photography, since the low resolution forces me to go back and redo it correctly. This project has taught me to do my due diligence on the first step of the modelling process: building off of a poor photo collection is like building on a shaky foundation, and no amount of effort afterward will make up for it.

This is one of the completed models. The lower half, before the merging process, had fewer photos in its profile and resulted in a blurry product. Once merged with the higher-resolution upper half, it creates a slightly jarring result. I am unsure whether this will translate through the resin 3D printing process.


Update: Considering this is the last blog of this internship for the Fall 2020 semester, I wanted to update my progress on the project, since I wrote this post a few days ago, and showcase the goal I achieved.

I finished three other models and was able to print them over the weekend, so I wanted to showcase them here. The scale on some of the models is a bit off but can easily be fixed down the road.





Friday, November 13, 2020

Good Things Come to Those Who Wait: a more detailed look into merging 3-D models


The title of this blog sums up this past week pretty well. I work for OCPS as a middle school teacher and only have time at night or on the weekends to work on this project; I no longer have the luxury of working whole days on it as I have done in the past. This matters because the merging process took around 30-45 minutes each time I ran it. I created many models this way before I realized an error in what I was doing: I could not separate the points based on confidence, as I learned last week, because I had rendered all the models with the advanced setting “calculate point confidence” turned off by default. Having this enabled is necessary for cleaning the model, as I will demonstrate in this week's blog. This meant rerunning the merging processes, and enabling “calculate point confidence” increased processing time even further, from 30-45 minutes to between an hour and an hour and 15 minutes. On top of that, if I notice an area of the model that is shaky and could use more photos, the waiting process has to start all over again. This is why I describe this week as a lot of waiting: a single mistake can cost me a whole evening.

After hours of researching on the internet why I was having such difficulty merging models, I discovered that Metashape has an automatic alignment method that does not require manually finding key points of similarity around the model. It took some time to get the hang of it, but having the program detect the similarities itself made the process much easier.

Once the models were merged, I realized I had forgotten to cut away the table surface the artifact was sitting on before merging them. I was still able to cut it away afterward, which was not too much of a problem considering the alternative was another hour of waiting. I could now view the model as a heat map that organizes all of the points based on how confident the program is in their placement. Red and orange points are those the program is unsure of; they cloud the model and give it a dirty or fuzzy appearance. Points that shift from green to blue are high-confidence points, where the program is confident in its placement. I was able to filter the view to separate high- and low-confidence points, and from there delete all the low-confidence points, leaving behind a clean model.
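Conceptually, this cleaning step amounts to thresholding points by their confidence value. The sketch below is a plain-Python illustration of that idea, not Metashape's actual API (in Metashape this is done through the interface's confidence filter); the coordinates, confidence values, and threshold are made up for the example.

```python
# Illustration of confidence-based point cleaning: each dense-cloud point
# carries a confidence value (roughly, how much agreement there was on its
# placement). Points below a chosen threshold are the "red and orange"
# fuzz; dropping them leaves a cleaner cloud.

def clean_point_cloud(points, min_confidence=2):
    """Keep only points whose confidence meets the threshold.

    points: list of (x, y, z, confidence) tuples.
    """
    return [p for p in points if p[3] >= min_confidence]

# Hypothetical cloud: three well-supported points, two noisy ones.
cloud = [
    (0.1, 0.2, 0.3, 9),   # high confidence ("blue")
    (0.4, 0.1, 0.2, 6),   # good confidence ("green")
    (0.9, 0.9, 0.8, 1),   # low confidence ("red"): fuzz around the model
    (0.5, 0.5, 0.5, 7),
    (1.2, 1.1, 0.9, 0),   # low confidence ("orange")
]

cleaned = clean_point_cloud(cloud, min_confidence=2)
print(len(cleaned))  # 3 points survive the cleaning
```

The real cloud has millions of points rather than five, which is why letting the program filter them beats deleting the fuzz by hand.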


This is the confidence map. The key in the lower left shows that the colder the color, the higher the confidence value.
The red and orange points are filtered out from the high-confidence points and then deleted. These points are what cloud the model and give it a fuzzy appearance, due to their inaccurate locations.
Only the high-confidence points are left behind.

The dense point cloud after the confidence cleaning process. Notice how this is cleaner than any other dense point cloud I have showcased thus far.

From here I realized I would need to go back and take even more pictures of the bottom side of the model, because the lip of the conch is undefined and curved rather than a strong, sharp edge. This artifact was included in the collection as one of the more difficult models to create, and so far it has cost me, and will continue to cost me, extra processing time as I try to get the inner fold of the shell to render. With this streamlined process for cleaning the model, however, I am confident that this one will be completed as well.


The above photo shows the end product that is fully textured. This is the cleanest model I have made yet and looks perfect.

This is the problem mentioned in the above paragraph. The inner lip of the conch did not render any points due to a shortage of light and photos.


Friday, November 6, 2020

The Merging Process

 

This past week I worked on figuring out how to merge two models together. The process is a little longer than I anticipated, but that does not change my view that it is worth it. I did not encounter this problem earlier because the test models I was capturing sat on a base that I did not need to capture, and in retrospect it probably would have been best to learn the merging process earlier.

I managed to successfully merge one of the artifacts, while the rest are at different stages of the process. The process involves placing a set of numbered points on each model, the key being to place point 1 in the same location on both models: point 1 sits on the same spot on the artifact in each. This is repeated at least three times, though the more points placed, the higher the accuracy of the merge. The purpose of these points is to tell the program which locations match up across both models so it can align them correctly. The difficulty lies in finding a spot on the model that is identifiable, which is not always easy depending on the model; finding the right ridge on an arrowhead takes time, as does finding multiple identifiable points.
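The idea behind those matched points can be sketched in a few lines of plain Python. This is a deliberately simplified illustration, not Metashape's method: it only recovers the translation between the two models by matching the centroids of the marker sets, whereas the real alignment solves for rotation and scale as well. All coordinates are made up.

```python
# Simplified sketch of marker-based alignment: point 1, 2, 3 are placed
# on the same physical features of both half-models, and the matched
# pairs drive the alignment. Here we only compute the translation that
# moves one marker set onto the other (centroid matching); Metashape
# solves the full rigid transform, rotation included.

def centroid(points):
    """Average position of a set of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_translation(markers_a, markers_b):
    """Translation that moves model B's markers onto model A's."""
    ca, cb = centroid(markers_a), centroid(markers_b)
    return tuple(ca[i] - cb[i] for i in range(3))

# Markers 1-3 on the same artifact features, in each model's own frame.
top_half    = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
bottom_half = [(2.0, 3.0, 1.0), (5.0, 3.0, 1.0), (2.0, 6.0, 1.0)]

shift = align_translation(top_half, bottom_half)
print(shift)  # (-2.0, -3.0, -1.0): offset to apply to the bottom half
```

Three correspondences are the minimum because fewer leave the rotation ambiguous; extra markers average out small placement errors, which is why more points give a more accurate merge.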

Thankfully I learned that, once the models are merged, Metashape can categorize all of the millions of points in the point cloud and separate them based on a certainty rating displayed like a heat map. I can then easily remove all the points Metashape created but was uncertain of, while preserving the majority of points that are slotted into their correct positions. This can save me a lot of time cleaning the models, compared to removing the uncertain points by hand. This is an advanced feature I haven't seen in free modelling programs, and the longer I use this program, the more I come to understand its complexity.

I still need to take this process slowly as I double-check and review the steps I am taking to get more used to it, but I will go ahead and print two copies of a broken arrowhead, one created from the merged models and one with the blank side filled in by default, to best see and compare the differences between them. Perhaps the detail will transfer well into physical form, or perhaps it will be too hard to tell.