Photogrammetry Pt. 3

The third lesson on photogrammetry was the one where we could finally make 3D models from the data sets we created last time. By taking photos of various objects and then converting them to PNGs, we had already gone through the ‘capture’ and ‘processing’ phases. As I said in my last lesson, I don’t think we took enough pictures: the recommended minimum was around 80 and the only set that exceeded that was the photos of Tegan. Nevertheless, I was still excited to see what would happen in the final stage, ‘reconstruction’, since I was curious to see how the program would react to the photos I took.

To recap, reconstruction is the process in which your photos are transformed into a three-dimensional representation. The main stages are alignment, depth extraction, reconstruction and texturing, all of which are largely handled by the software. We also went over three examples of software that can be used to do this.
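Just to keep it straight in my own head, here is that pipeline written out as a tiny Python sketch (purely my own plain-English summary of the stages, not Meshroom’s actual node names or outputs):

```python
# My own rough summary of the reconstruction stages - not Meshroom's
# actual node names or outputs, just the order things happen in.
PIPELINE = [
    ("alignment", "match features between photos and work out where each camera was"),
    ("depth extraction", "estimate a depth map for each photo from the aligned cameras"),
    ("reconstruction", "fuse the depth maps into a single mesh"),
    ("texturing", "project the original photos back onto the mesh as a texture"),
]

for stage, description in PIPELINE:
    print(f"{stage:>16}: {description}")
```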

 

Metashape – Paid

 

 

The commercial version of this program can be quite expensive, but the educational license is fairly affordable. Metashape gives you lifetime access for a one-time fee and is a great introduction to Photogrammetry, as the process is simplified. It can be a little difficult to get good alignment, but once you do, the quality is great.

 

Reality Capture – Pay Per Input

 

 

In Reality Capture, any scans that you want to export are bought on a pay-per-input (PPI) basis, but the program itself can be trialled for free. PPI can be really affordable: a small object can cost about the same as a coffee. Essentially, you pay to attach a license to the images you use, which gives you the freedom to reprocess them at any time. The results are great, but there is a pretty steep learning curve to navigating all of the features.

 

Meshroom – Free

 

 

Finally, we have Meshroom, a free Photogrammetry program that can be installed and run on most computers without admin privileges, and it is the one we would be using. It often requires several attempts at processing and some node tweaking to get right, and it can be slightly slower than the paid software; however, it can handle alignment rather better than Metashape.

 

Setting up Meshroom & Running a Project

 

First, we had to unzip the Meshroom folder from the insight files and double-click the Meshroom.exe application to run it. Once I had saved the new project to the desktop, I took a good look at the program, since it was completely new to me. Certain aspects were recognizable, such as the area where we would be importing our photos, and the panels next to it where a selected photo can be viewed and the reconstruction can be watched as it renders.

 

 

In the graph editor at the bottom, we learned that you scroll to zoom in or out and hold the mouse wheel to pan around. It was also pointed out to us that every box is a node, and the program works through each one automatically for the data set provided, although some can be toggled or tweaked. That is what we did to speed the process up: we clicked on ‘feature extraction’, went to the three dots in the corner and then, in the advanced attributes, unticked ‘force CPU extraction’. With this turned off, the feature extraction runs on the GPU instead of the CPU, which is much faster.
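As far as I understand it, the GPU path for feature extraction (and the depth-map step later on) needs a CUDA-capable NVIDIA card, so it is worth checking that one is actually present before unticking the box. A quick, rough check from Python, assuming the NVIDIA driver’s nvidia-smi tool is installed and on the PATH:

```python
import shutil
import subprocess

def has_nvidia_gpu() -> bool:
    """Return True if nvidia-smi is available and reports at least one GPU."""
    # Assumes the NVIDIA driver (and therefore nvidia-smi) is installed;
    # if it isn't on the PATH, this simply reports no GPU.
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi", "--list-gpus"],
                            capture_output=True, text=True)
    return result.returncode == 0 and bool(result.stdout.strip())

if has_nvidia_gpu():
    print("NVIDIA GPU found - unticking 'force CPU extraction' should help.")
else:
    print("No NVIDIA GPU detected - better to leave 'force CPU extraction' ticked.")
```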

After this, we loaded the images in and clicked ‘start’. There was no indication that anything was happening except for the long bar of colours that appeared right underneath it. Red or black would mean that something had gone wrong, and the corresponding node that had failed or stopped would likewise turn red.
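As a side note, Meshroom also comes with a command-line batch tool, so the same load-the-images-and-press-start step can be scripted. The sketch below assumes the tool is called meshroom_batch.exe (older releases name it differently and the flags may vary) and uses made-up folder paths:

```python
import subprocess
from pathlib import Path

# Placeholder paths - adjust to wherever Meshroom was unzipped and the photos live.
MESHROOM_DIR = Path(r"C:\Meshroom")          # folder the Meshroom zip was extracted to
IMAGES_DIR = Path(r"C:\Scans\grogu_photos")  # the data set captured last lesson
OUTPUT_DIR = Path(r"C:\Scans\grogu_output")  # where the finished files should end up

OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

# Runs the default photogrammetry pipeline without the GUI, the equivalent of
# importing the images and clicking 'start'.
subprocess.run(
    [str(MESHROOM_DIR / "meshroom_batch.exe"),
     "--input", str(IMAGES_DIR),
     "--output", str(OUTPUT_DIR)],
    check=True,
)
```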

 

I chose to do the Grogu set, even though I predicted that the model wouldn’t be very complete.

 

Thankfully, my bar didn’t have any red or black and I took this as confirmation that the software was running and had accepted the images.

 

Green – Completed

Orange – In Progress

Blue – Not Started

 

The scanning took the longest out of the entire process, and there wasn’t much else to do in the program apart from sitting, waiting and watching the model slowly take shape in the 3D viewer. To occupy my time, I started writing up my notes from the lesson alongside my screenshots for this blog and only occasionally checked in on the reconstruction. If I wanted to see a particular node’s progress in more detail, all I had to do was click on the box with the orange line and go to ‘log’ in the panel on the right. It shows you what is happening in real time, which can be useful when there is a problem and its root cause needs to be found.
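Those logs also end up as plain text files inside the project’s MeshroomCache folder, one subfolder per node, so they can be skimmed outside the GUI too. A small sketch of that idea, with the cache path made up and the log file naming only assumed (it seems to vary between versions):

```python
from pathlib import Path

# Placeholder path - MeshroomCache sits next to the saved project file.
CACHE_DIR = Path(r"C:\Scans\grogu_project\MeshroomCache")

# Each node gets its own subfolder in the cache; match anything log-like,
# since the exact file names may differ between Meshroom versions.
for log_file in sorted(CACHE_DIR.rglob("*log*")):
    if not log_file.is_file():
        continue
    node_name = log_file.relative_to(CACHE_DIR).parts[0]
    last_lines = log_file.read_text(errors="ignore").splitlines()[-3:]
    print(f"--- {node_name} ({log_file.name}) ---")
    for line in last_lines:
        print(f"    {line}")
```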

 

 

Once Grogu began to be visible in the 3D viewer, I started panning around the model and exploring it. I found it a little strange at first – the controls were more finicky than in Maya and the model seemed to be rendering at an angle – but some of my questions were answered. For example, the viewer framed the model from roughly the same distance I had taken the pictures from. From this, I gathered that how close you get determines how much floor and background end up visible, and although you can zoom in, in my case I would have preferred to have been a little closer to the figurine. It was still really interesting to see, though, and it makes you think about how far technology has advanced and what kind of possibilities will open up in the future. Seeing the program interpret an object from a handful of images and actually construct a representation of it in three-dimensional space is really cool!

 

 

I wasn’t expecting the mesh to look like that. It was built up of many overlapping spheres that roughly matched the colours of the subject. They had no shading, however, which made them look like flat circles from whichever angle you viewed them. Eventually, the bar at the top was 100% green, meaning the software had finished and had created the best 3D model it could from the images provided. We then double-clicked the node labelled ‘meshing’, which loaded the model if it wasn’t there already. To see what it looked like without all of the little spheres, you would click the ‘x’ next to ‘structurefrommotion’, unticking it and stripping back the colour – the spheres are what the structure-from-motion step produces, which is why hiding that node removes them.

 

 

As you can see from above, mine wasn’t smooth or refined at all. There were a lot of bumps, lumps, cracks and holes, and entire chunks were missing, but even so, I was still very impressed by the likeness to the real object. You could tell that it was Grogu because the general shape of the head, face and body was there – even the folds of the clothes were captured! Still, it looked a little scary and bizarre, and I wanted to see it in colour again. I did this by double-clicking the ‘texture’ node.

 

 

 

I was amazed – Grogu had appeared! The model looked ten times better with the texture, even though it wasn’t perfect by any means. The bumpiness of the model seemed to have been smoothed over, replaced by the scraggy, cracked colours that the program had managed to pick up for each area. The ears were getting a little lost against the background, and I wondered if that was because of the colour of the wall behind the subject or if I just needed more pictures. That much was certain – I definitely should have taken more pictures. At the time, I was very unsure how much to move between shots and whether I had enough overlap across each area. Luckily, there is a really nifty technique in Meshroom for seeing which areas don’t have enough data: in the panel on the right, under the Display section with the three bars, you select the camera icon and then click an image, and the program syncs the model up with that image so that they are in exactly the same position.

 

 

You could also control the transparency to see more or less of the original image underneath the model. From there, I observed that I needed to take more pictures of the back (you can see a massive chunk missing towards the top of the back) and also of the ears. In truth, a second picture for every one I had would have benefitted the render greatly overall and might have helped with the many minuscule cracks and holes in the texture too.
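As a rough rule of thumb for next time (my own back-of-the-envelope numbers, not something from the lesson): if you orbit the subject in a full circle, the angle you move between shots sets how many photos each ring needs, and halving that angle doubles the count.

```python
import math

def photos_per_ring(step_degrees: float) -> int:
    """How many photos a full 360-degree orbit needs at a given angular step."""
    return math.ceil(360 / step_degrees)

# Spreading the ~80-photo minimum from last lesson over three heights means
# roughly 27 photos per ring, i.e. a step of about 13 degrees between shots.
for step in (20, 15, 13, 10, 5):
    print(f"{step:>2} degrees between shots -> {photos_per_ring(step):>2} photos per ring")
```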

 

 

To conclude, if I could go back and create the same data set again, I would take more photos with smaller gaps between each one, get closer to the subject with the camera and possibly add a random squiggly pattern underneath the object to help with alignment. These are things I realised only after my first attempt and exploration of the program, but the process of Photogrammetry is enjoyable overall – I had a lot of fun with these three sessions, especially because I had more physical input into the result than usual, having taken the pictures myself and directly influenced how the model would turn out. I also find it cool that, if need be, you can import the model into Maya, as long as the OBJ and MTL files and the texture images were saved; in our case, they were in the ‘MeshroomCache’ folder. If I had some extra time, I would have loved to see what kind of renders could come out of Maya with the scan as the base model.
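If I do find that extra time, even gathering those export files could be scripted. The sketch below assumes the usual Texturing output names (an OBJ, an MTL and the texture images) and a made-up cache path, so treat it as a starting point rather than the exact layout:

```python
import shutil
from pathlib import Path

# Placeholder paths - MeshroomCache sits next to the saved project file.
CACHE_DIR = Path(r"C:\Scans\grogu_project\MeshroomCache")
EXPORT_DIR = Path(r"C:\Scans\grogu_for_maya")

EXPORT_DIR.mkdir(parents=True, exist_ok=True)

# The Texturing node's cache folder is assumed to hold the OBJ, MTL and
# texture images that Maya needs; copy them all into one tidy folder.
for node_dir in CACHE_DIR.glob("Texturing/*"):
    for file in node_dir.iterdir():
        if file.suffix.lower() in {".obj", ".mtl", ".png", ".exr"}:
            shutil.copy2(file, EXPORT_DIR / file.name)
            print(f"Copied {file.name}")
```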
