Thomas Kole talked about his new tech, which captures low-poly copies of complex 3D environments with techniques similar to photogrammetry.
I’m Thomas Kole, and together with Daan Niphuis at Force Field VR we developed a method to convert complex environments into simple 3D meshes with only one texture, in Unreal Engine 4.
Daan Niphuis is an Engine Programmer, and I’m a Technical Artist intern, though this is my last week at the company. Force Field VR is a studio in Amsterdam that grew out of Vanguard Games, known for Halo: Spartan Assault and Halo: Spartan Strike.
Under Force Field, they produced Landfall and Terminal, for the Oculus Rift and Gear VR respectively. The system we developed looks a lot like Photogrammetry or Photoscanning, but cuts some tricky steps. The scenes you see here are from Unreal Tournament.
Generating Low Poly Models of Large Environments
When you’re making an open-world game, or a game with large-scale levels in general, showing large structures in the distance can often be a challenge. Usually the process involves manually making a low-poly version of the distant geometry and somehow baking the textures down to it. This system could automate that process.
Automating the Process
The process is not 100% automated, but it requires very few manual steps. Essentially, an artist would set up a number of camera positions from which he’d like to capture the environment, and execute a function that starts the capturing.
He then takes the data into Meshlab, processes it, and puts it back into Unreal. Depending on the size and complexity of the scene, the process should not take more than 1 hour.
Photogrammetry
Photogrammetry works by comparing many photos and looking for similarities. From these similarities, it can reconstruct a sparse 3D point cloud, then look for even more similarities and reconstruct a dense 3D point cloud. We can skip this step, because we can extract this information per photo directly from UE4. We capture the environment from a number of locations, in all directions, four captures in total per direction. This way we capture base colors, normals, and world positions, which we compose into one big point cloud. We also capture a high-resolution screenshot from that point, which we use to project the textures from at the end of the process.
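To make the composition step concrete, here is a minimal sketch (not the studio’s actual tooling) of merging per-view captures into a single colored point cloud that Meshlab can read. It assumes each capture was dumped to disk as three HxWx3 NumPy arrays for world position, normal, and base color; the file naming and the convention that sky pixels are written as zeros are hypothetical.

# point_cloud_compose.py -- merge per-view captures into one PLY point cloud.
# Assumed (hypothetical) file layout per view:
#   captures/<view>_worldpos.npy   float, world-space XYZ per pixel
#   captures/<view>_normal.npy     float, world-space normal per pixel
#   captures/<view>_basecolor.npy  float 0..1, RGB per pixel
import glob
import numpy as np

def load_view(prefix):
    pos = np.load(prefix + "_worldpos.npy").reshape(-1, 3)
    nrm = np.load(prefix + "_normal.npy").reshape(-1, 3)
    col = np.load(prefix + "_basecolor.npy").reshape(-1, 3)
    # Drop pixels that hit the sky / far plane (assumed to be written as zeros).
    mask = np.any(pos != 0.0, axis=1)
    return pos[mask], nrm[mask], col[mask]

def write_ply(path, pos, nrm, col):
    col8 = np.clip(col * 255.0, 0, 255).astype(np.uint8)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(pos)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property float nx\nproperty float ny\nproperty float nz\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for p, n, c in zip(pos, nrm, col8):
            f.write(f"{p[0]} {p[1]} {p[2]} {n[0]} {n[1]} {n[2]} {c[0]} {c[1]} {c[2]}\n")

if __name__ == "__main__":
    views = sorted(set(p.rsplit("_", 1)[0] for p in glob.glob("captures/*_worldpos.npy")))
    parts = [load_view(v) for v in views]
    write_ply("pointcloud.ply",
              np.concatenate([p[0] for p in parts]),
              np.concatenate([p[1] for p in parts]),
              np.concatenate([p[2] for p in parts]))

Keeping the per-point normals is what lets the reconstruction step later orient the surface correctly.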
With this point cloud we generate a new mesh in Meshlab. This mesh has the same shape and contour as the environment, but it’s very high-poly. It is then reduced, unwrapped, and textured for the final model.
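That Meshlab stage could also be scripted. Below is a rough sketch using pymeshlab, assuming Screened Poisson reconstruction and quadric edge collapse decimation are acceptable stand-ins for whatever the artist would run in the GUI; the filter names are from recent pymeshlab releases and differ in older ones, so check them against your install.

# meshlab_rebuild.py -- the Meshlab stage, scripted with pymeshlab instead of the GUI.
# Older pymeshlab releases named these filters e.g.
# "surface_reconstruction_screened_poisson" and
# "simplification_quadric_edge_collapse_decimation".
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("pointcloud.ply")  # oriented, colored point cloud from the captures

# Rebuild a surface from the point cloud (Screened Poisson reconstruction).
ms.apply_filter("generate_surface_reconstruction_screened_poisson", depth=10)

# Decimate the very high-poly result; the target face count is up to you.
ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                targetfacenum=20000, preservenormal=True)

ms.save_current_mesh("environment_lowpoly.obj")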
UV Work
UV unwrapping, sometimes called mesh parameterization, is always tricky, and it took a large chunk of the research time. Initially I wanted to do that process entirely in Meshlab, but it did not produce good enough results. UVs should have large chunks, little stretching, and no overlap: three criteria which always conflict. I found that Maya’s automatic unwrapping, together with its packing, works pretty well. There’s a plugin for Blender called Auto Seams Unwrap that produces even better patches, but it can take a long time to compute (sometimes over half an hour for a very complicated mesh). This step could be automated further with a script, as sketched below.
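As an illustration of that scripting route, here is a minimal headless Blender sketch. It uses the built-in Smart UV Project as a stand-in rather than the Auto Seams Unwrap add-on, the file paths are hypothetical, and the operator names assume a recent Blender (older releases used operators like import_scene.obj / export_scene.obj, with angle_limit in degrees).

# unwrap_blender.py -- run headlessly with:
#   blender --background --python unwrap_blender.py
import math
import bpy

# Start from an empty scene and import the reduced mesh.
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.wm.obj_import(filepath="environment_lowpoly.obj")

obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Unwrap with Smart UV Project: large islands, little stretching, no overlap --
# the three conflicting criteria mentioned above.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=math.radians(66.0), island_margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')

bpy.ops.wm.obj_export(filepath="environment_lowpoly_unwrapped.obj")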
Capturing Information
In this case, we capture the final color of the scene – with lighting and all. This means that the final model can be used with an unlit shader, which is very cheap. But that does mean that all dynamic lighting is lost.
However, the system could be modified to capture base colors, normals, and (optionally) roughness instead, for dynamically lit scenes.
Small lights in the environment could even be baked to an emissive texture for additional detail.
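For readers who want to experiment with which buffers to grab, here is a heavily hedged sketch using UE4’s Python Editor Script Plugin. It is not the capture code described above: it only shows switching an existing SceneCapture2D actor between the final color, base color, and normal buffers. Exporting the render target to disk, and capturing world positions (which needs a custom material), are left out.

# capture_buffers.py -- run from the UE4 editor with the Python plugin enabled.
# Hypothetical setup: one SceneCapture2D actor already placed in the level,
# with a render target assigned to its capture component.
import unreal

actors = unreal.EditorLevelLibrary.get_all_level_actors()
capture_actor = next(a for a in actors if isinstance(a, unreal.SceneCapture2D))
component = capture_actor.get_component_by_class(unreal.SceneCaptureComponent2D)

# Final lit color suits an unlit distant mesh; base color + normals suit a re-lit one.
sources = {
    "final_color": unreal.SceneCaptureSource.SCS_FINAL_COLOR_LDR,
    "base_color": unreal.SceneCaptureSource.SCS_BASE_COLOR,
    "normals": unreal.SceneCaptureSource.SCS_NORMAL,
}

for name, source in sources.items():
    component.set_editor_property("capture_source", source)
    component.capture_scene()  # renders into the assigned render target
    unreal.log("captured buffer: " + name)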
Optimization
Of course there’s a huge loss in geometry detail when you run an environment through this pipeline. However, the final polycount is in your hands.
Everything is baked into one final texture, so fine texture detail is also lost. High-frequency elements such as wires, chain-link fences, and thin meshes can be problematic too.
For the captures of Unreal Tournament, I tried to take those out.
The one thing that the process does preserve very well is contour and shape, which is perfect for distant geometry.
Use Cases
There are all sorts of uses for this technology. The most obvious one is distant geometry.
But you could also use it for marketing purposes, uploading 3D action scenes of a game to a site like Sketchfab.
If you want to read more, you can find my article on my portfolio.
Thomas Kole, Technical Artist
Interview conducted by Kirill Tokarev.