Hi Joris,
Thanks very much for your detailed response; I really appreciate you taking the time to explain the basics of Source and why .obj files are ill-suited to becoming maps.
My goal here is not to create a map based on someplace I have been, but to create an easily repeatable process that many people can follow to make maps from places they scan, without necessarily having much experience with either Hammer or 3ds Max. The device is primarily being marketed for real estate and home-remodeling purposes, but I'd like to prove that it has utility in the game sector as well by making mapmaking more accessible to the masses. I think this could generate a lot of interest in mapmaking from people who have never really considered it before.
The camera is not suited to scanning individual objects. It is tripod-mounted and rotates through 360 degrees to capture its surroundings, and it is only accurate to within an inch of a feature's actual dimensions. While it generates far more polygons than a map needs (the model I'm testing with has about 17k), they are distributed such that objects you might want to use as models get relatively few of them and come out jagged and unrealistic. It can also stitch multiple captures (each time it spins and records its surroundings) into a single model, which is what makes it so good at modeling large spaces.
The textures it generates are all 2048x2048, which is pretty big compared to the numbers you mentioned, but at least they are powers of 2.
The ideal solution that I'm imagining for this is a tool that is able to break a model down into its most basic surfaces (floors, walls, and ceilings) and turn those into brushes, while applying the textures properly. Is there anything like that in existence?
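To make the idea concrete, here's a rough sketch of the first step such a tool might take: bucketing a scanned mesh's triangles into floor, wall, and ceiling candidates by how their face normals align with the up axis. This is just an illustration of the concept, not an existing tool; the function names, the Z-up convention, and the threshold are all my own assumptions.

```python
# Hypothetical sketch: classify mesh triangles as floor/wall/ceiling
# by face normal, as a first step toward grouping scan geometry into
# brush-like planar surfaces. Z-up convention and threshold are assumptions.

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c), each vertex a 3-tuple."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    nx, ny, nz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

def classify(normal, up_threshold=0.9):
    """Bucket a face by how its normal aligns with the Z (up) axis."""
    nz = normal[2]
    if nz > up_threshold:
        return "floor"    # faces pointing up
    if nz < -up_threshold:
        return "ceiling"  # faces pointing down
    if abs(nz) < (1 - up_threshold):
        return "wall"     # roughly vertical faces
    return "other"        # slopes, furniture, scan noise

# Tiny example: one upward-facing triangle and one vertical triangle.
floor_tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))  # normal points up (+Z)
wall_tri = ((0, 0, 0), (1, 0, 0), (0, 0, 1))   # normal is horizontal

print(classify(face_normal(*floor_tri)))  # floor
print(classify(face_normal(*wall_tri)))   # wall
```

A real pipeline would then have to merge coplanar faces into larger quads, snap them to the Hammer grid, and reproject the scan textures onto the resulting brush faces, which is where most of the hard work would be.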
Shawn,
Thanks for taking the time to look at the technology, and thanks for all your work on Wall Worm. I haven't had the chance to learn all the intricacies of it but it seems like a really cool tool.
From what I've seen of Image Modeler, I'm not quite sure it's a good fit here. While the camera is capable of making 360 panoramas, its real strength is that it also collects 3D data, so I don't think discarding all of that would result in a better end product.
Your sky texture idea sounds pretty neat, although I must admit I'm not familiar with how those work. Unfortunately, the camera does not work well outdoors. It uses an IR projector to measure depth, so either sunlight interferes with the dot pattern or it's too dark to get decent RGB data. Would it still have any utility for that purpose?
Again, thank you both for taking interest in the matter(port (sorry)) and taking the time to write thoughtful responses to a newb such as myself. What you do is super-awesome and I'd love to get more people interested in it.