Thursday, March 8, 2012

Benchmarking Models

For a while I have been wondering how to find out just how resource-hungry any given model is. Obviously, the more detail in a model, the harder the PC has to work, but exactly how much, and which parts are the hungriest, is somewhat of a grey area for me. I was hoping someone had made a program into which you could load a model and it would tell you how much CPU it takes to load, how much RAM it requires, etc. Unfortunately, it doesn't appear that this exists. This is understandable, considering the different needs of engines, the different PC environments, and I assume a whole range of other crap. But I still want to have something to benchmark my models against, dammit!

I would assume that the amount of RAM used would directly correlate to the file size of the model. The way I understand it, the textures need to be loaded into RAM along with the polygonal data of the model itself. That's my official hypothesis.

Enter Task Manager

Task Manager shows CPU and RAM usage in the Processes tab, and this is really the only thing I have to go off. To get a handle on it, I did a few experiments. I recently completed this model for Mist of Stagnation:

This is an environment piece, based on a crashed bomber plane.

And I decided to use it for this test. It has a few resource-hungry aspects, mainly the five 2048x2048 textures (totally overkill, I know), so I figured the changes would be easiest to see in Task Manager. For starters, the packed .blend file is 18mb.
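Out of curiosity, here's a quick back-of-the-envelope sketch of what those textures alone might cost once loaded. This assumes they get decompressed to raw 32-bit RGBA in RAM, with no mipmaps; both are my assumptions, not anything Blender documents:

```python
# Rough texture-memory estimate: an uncompressed 32-bit RGBA image
# costs width * height * 4 bytes once it's sitting in RAM.
def texture_mb(width, height, count=1, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * count / (1024 * 1024)

# The five 2048x2048 textures on the windmill:
print(texture_mb(2048, 2048, count=5))  # -> 80.0 (mb, before any mipmaps)
```

If that assumption holds, the textures alone could account for something like 80mb once unpacked, which is worth keeping in mind next to the 18mb on disk.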

I decided to start with a tare, so I started up Blender a few times to get an average idle RAM usage. It was about 55mb.

I assume 55,000K means 55mb? (It does, near enough: Task Manager reports memory in kilobytes.)
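If squinting at Task Manager gets old, the same reading can be scripted. Here's a minimal sketch using the third-party psutil library to take the tare and the loaded reading automatically; the process name and the 30-second pause for loading the model by hand are my own assumptions:

```python
import time
import psutil  # third-party: pip install psutil

def resident_mb(process_name="blender.exe"):
    """Sum the resident memory of all matching processes, in mb."""
    total = 0
    for proc in psutil.process_iter(attrs=["name", "memory_info"]):
        name = proc.info["name"] or ""
        if process_name.lower() in name.lower():
            total += proc.info["memory_info"].rss
    return total / (1024 * 1024)

idle = resident_mb()          # tare: Blender sitting idle
time.sleep(30)                # load the model by hand in the meantime
print(resident_mb() - idle)   # the "less idle" figure from the tables
```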

I loaded a number of things into Blender to see what difference each made: the windmill model, a highpoly untextured model I am working on, and the default cube:

What                              Task mgr reading (mb)    Less idle (mb)
Blender idle                      55                       0
Windmill                          134                      79
Highpoly                          64                       9
2 windmills on the same layer     136                      81
2 windmills on separate layers    137                      82
I also closed and restarted Blender in between each test.

As you can see, Blender loaded a whole lot more than 18mb of data. Where did all this extra data go? Well, I have NFI, but I would assume it is for modelling-related stuff. Also note that duplicating the model increased RAM usage by only about 1mb, and the unpacked, mesh-only file is 750kb. This shows how relatively negligible polygons are to RAM usage.
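To put a rough number on that, here's the same kind of back-of-the-envelope sketch for mesh data, assuming a simple layout of position, normal and UV per vertex; the 32-byte figure and the 20,000-vertex count are illustrative guesses, not measurements:

```python
# Rough mesh-memory estimate: 3 floats position + 3 floats normal
# + 2 floats UV = 8 floats * 4 bytes = 32 bytes per vertex.
def mesh_mb(vertex_count, bytes_per_vertex=32):
    return vertex_count * bytes_per_vertex / (1024 * 1024)

print(mesh_mb(20_000))            # ~0.6 mb for a 20,000-vertex prop
print(2048 * 2048 * 4 / 1024**2)  # 16.0 mb for a single raw 2048 RGBA map
```

Even a fairly dense mesh is a rounding error next to one big texture, which lines up with the ~1mb bump from duplicating the windmill.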

Since a game engine is designed to use minimal resources, I would assume that UDK would have lower overhead, since it doesn't need the extra features Blender has. So I decided to test the model in UDK. Keep in mind, the UPK containing nothing but the textures, materials and model required for the windmill alone is 27mb. I assume this extra size is due to the model being animated, and therefore carrying skeletal and animation data.

What                      Task mgr reading (mb)    Less idle (mb)
UDK Editor idle           344                      0
Windmill in mdl viewer    361                      17
Windmill in viewport      362                      18
Both                      365                      21
The UDK seems to carry less overhead.

So this looks much more like the 18mb of the packed Blender file, which makes sense if only the textures and polygonal data actually get loaded, even though the UPK file itself is 27mb. I think the animation and skeletal data might not have been loaded in this instance, since neither the model viewer nor the viewport was animating, but I really don't know.
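One possible explanation for UDK's smaller footprint, and this is my assumption rather than anything I've verified: game engines typically keep textures block-compressed (DXT) in memory rather than expanding them to raw RGBA the way an editing tool might. The arithmetic at least lands in the right neighbourhood:

```python
# DXT/BC block compression has a fixed cost per pixel:
# DXT1 = 4 bits, DXT5 = 8 bits, versus 32 bits for raw RGBA.
def dxt_mb(width, height, count=1, bits_per_pixel=8):
    return width * height * bits_per_pixel / 8 * count / (1024 * 1024)

print(dxt_mb(2048, 2048, count=5))                    # 20.0 mb as DXT5
print(dxt_mb(2048, 2048, count=5, bits_per_pixel=4))  # 10.0 mb as DXT1
```

Five DXT5 textures at 20mb is a lot closer to the 17-21mb jumps in the table than the 80mb you'd expect for raw RGBA.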

Whilst this experiment was a moderate success, I am not utterly convinced. I don't know whether Task Manager accounts for VRAM or the GPU at all, so for all I know these numbers could reflect system RAM alone and hide a bunch of work the video card is doing. I did cement a few things in my mind along the way, though, like the benefit of keeping a stripped, mesh-only file to see how big your mesh data is (it is probably better to look at an exported .ase file than the .blend file) and how to pack and unpack .blend files.
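For what it's worth, the packing and unpacking can also be done from Blender's Python console. This is just a sketch of the operators behind the File > External Data menu; 'USE_LOCAL' is one of several unpack methods, and the output filename is my own example:

```python
import bpy

bpy.ops.file.pack_all()                      # embed all textures in the .blend
bpy.ops.file.unpack_all(method='USE_LOCAL')  # write them back out to disk

# Save a copy to compare file sizes before and after packing
# ("//" is Blender's prefix for a path relative to the .blend):
bpy.ops.wm.save_as_mainfile(filepath="//windmill_packed.blend", copy=True)
```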

The main lesson to take away from this, I think, is one that was already in my noggin and just needed to be learned properly: the smaller the file size, the better. Not too hard.
