Thursday, October 1, 2020

I've spent this past week moving into and unpacking my apartment, and I also tried my hand at photogrammetry once again to see if I could solve my previous problem. I tried a few different methods, built the models through the rendering process, and had varying degrees of success.

The first method I tried was the same one I used two weeks ago, but without the turntable; I simply turned the model by hand. This method failed once again: the cameras would not align properly, which ends up rendering only half of the model. The reason for this failure is that the photo box leaves a visible texture in the background of all of the photos.

The second method was the same as above, but I moved the camera much closer to the model so that nothing but the model and the area immediately around it was in frame. This way, not only would more detail be captured, but there would be no extra data or texture to confuse the program. This method was a success, and the cameras actually aligned around the model, meaning it rendered three dimensionally.
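For anyone who wants to script this step rather than click through it, below is a minimal sketch of photo alignment using Metashape's Python API. The file paths and project name are placeholders, not my actual files, and the argument names follow the 1.x API, so check the reference for your version.

    import Metashape

    # Hypothetical paths; replace with your own photo set.
    photos = ["photos/IMG_0001.jpg", "photos/IMG_0002.jpg"]

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(photos)

    # Detect matching features across the photos, then estimate the
    # camera positions (the blue squares in the screenshots below).
    chunk.matchPhotos(downscale=1, generic_preselection=True)
    chunk.alignCameras()

    doc.save("test_model.psx")  # hypothetical project name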

Each blue square marks where the program detected my camera was, relative to the model, when I took each photo.

From here I cleaned up any stray points in the sparse point cloud so the dense point cloud would contain only the data I need. I then built the mesh, which takes all of the dense points and essentially connects them, turning the model into something that could be solid. The last step was building the texture, which brings all of the definition onto the model. Since this is just a test model while I wait for the artifacts, I did not spend much time cleaning away all of the white noise around it. Doing so would make the model more defined and sharp, but it takes time, since you have to go in by hand and delete each stray segment without deleting the part of the model it is attached to.
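These remaining steps can be scripted as well. Here is a rough sketch continuing from the alignment above; the parameter values are guesses rather than the exact settings I used, and some calls were renamed in later Metashape releases (for example, buildDenseCloud became buildPointCloud).

    import Metashape

    doc = Metashape.Document()
    doc.open("test_model.psx")  # project saved after alignment (hypothetical)
    chunk = doc.chunk

    # Optional cleanup: drop sparse points with high reprojection error
    # instead of deleting every stray point by hand.
    f = Metashape.PointCloud.Filter()
    f.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
    f.removePoints(0.5)  # threshold is a guess; tune per project

    # Dense point cloud from depth maps.
    chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
    chunk.buildDenseCloud()

    # Connect the dense points into a solid surface.
    chunk.buildModel(surface_type=Metashape.Arbitrary,
                     source_data=Metashape.DenseCloudData)

    # Project the photo detail onto the mesh.
    chunk.buildUV(mapping_mode=Metashape.GenericMapping)
    chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                       texture_size=4096)

    doc.save()
    chunk.exportModel(path="test_model.obj")  # for the Blender step later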

This is the rendered texture, the final step in Metashape.

This is a close-up of the wireframe.

The third method was to physically move around the model, albeit with worse lighting. Despite the success of the previous method, I decided it was still worth doing so I could compare the two and decide which produces a better result. This method was also a success, and all of the cameras rendered in the correct area. However, it is clear the cameras do not form a perfect circle, nor were the photos taken from exactly the same height, since I took each photo by hand rather than with a tripod. Despite this, I was able to take a lot of photos from around the model, and the results turned out really well. There was more work cleaning out stray data such as parts of my couch, TV, and desk (a scripted alternative is sketched below). I completed the steps mentioned above to render the model, and it came out with what appears to be a higher resolution because of the additional photos I took. Overall, I will try physically moving around the model more often, since it produced great results despite shaky hands, uneven photo heights, and so on.

These cameras are spaced unevenly and sit at uneven heights because I shot by hand as I moved around the model.

This is the rendered texture for the method where I physically moved around the model.
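One way to cut down on that hand cleanup, which I have not tried yet but which Metashape supports, is to shrink the reconstruction region around the aligned model so that the couch, TV, and desk never make it into the dense cloud in the first place. A rough sketch, with a hypothetical project name and a guessed scale factor:

    import Metashape

    doc = Metashape.Document()
    doc.open("walkaround_model.psx")  # hypothetical project name
    chunk = doc.chunk

    # Shrink the bounding region around the aligned model so stray
    # geometry (couch, TV, desk) is excluded from later build steps.
    region = chunk.region
    region.size = region.size * 0.5  # scale factor is a guess; adjust visually
    chunk.region = region

    doc.save()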

The next step is to import these models into Blender, fill in the gaps and holes, and make them printable on a 3D printer, where I can even do a test print to see how accurate it is to the original. Hopefully within the next week or two I can start work on the actual artifacts.
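As a preview of that step, here is a minimal Blender Python sketch for importing a model, filling its holes, and exporting an STL for printing. The file names are placeholders, and the operator names are from the 2.8x API.

    import bpy

    # Import the model exported from Metashape (hypothetical path).
    bpy.ops.import_scene.obj(filepath="test_model.obj")
    obj = bpy.context.selected_objects[0]
    bpy.context.view_layer.objects.active = obj

    # Fill holes and fix normals so the mesh is watertight enough to slice.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.fill_holes(sides=0)  # sides=0 fills holes of any size
    bpy.ops.mesh.normals_make_consistent(inside=False)
    bpy.ops.object.mode_set(mode='OBJECT')

    # Export for the 3D printer.
    bpy.ops.export_mesh.stl(filepath="test_model.stl", use_selection=True)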
