2 Replies - 1296 Views - Last Post: 19 November 2014 - 03:22 AM

#1 BBeck
  • Here to help.
Reputation: 792 • Posts: 1,886 • Joined: 24-April 12
Multiple meshes in a single vertex array

Posted 15 November 2014 - 04:43 AM

I don't know if anyone's doing XNA these days, but I figured something out that I can't find documented on the Internet anywhere. I spent a day or so figuring it out the hard way.

What I was doing was building my own model class. I know XNA has a model class, but DX11 doesn't and I'm using XNA to prototype my process that I'll eventually be building out in DX11. This probably applies to DX as well, although I haven't done it in DX yet, so I can't be certain of that.

I wanted to put my entire model, with all of its child meshes, in a single vertex buffer (or vertex array in this case). In my example, I wanted the car body and all of its wheels to be in the same buffer even though they are actually separate meshes.

DrawUserIndexedPrimitives allows for this, although I can't find an example anywhere on the Internet where someone has actually made use of it. Probably due to the confusion I'm about to explain, everyone just loads a single mesh into the vertex array and sets all the offset parameters to zero.

Here's an example from Microsoft's website:

GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
        PrimitiveType.TriangleList, quad.Vertices, 0, 4, quad.Indexes, 0, 2);

Here's Microsoft's documentation:

public void DrawUserIndexedPrimitives<T> (
         PrimitiveType primitiveType,
         T[] vertexData,
         int vertexOffset,
         int numVertices,
         int[] indexData,
         int indexOffset,
         int primitiveCount,
         VertexDeclaration vertexDeclaration
) where T : struct



So, the zeros in that call are vertexOffset and indexOffset. I suspect part of the reason everyone sets these to zero in every example I can find on the Internet is confusion about how they actually work.

First of all, vertexOffset works the way you would probably imagine: if you have a second or seventh mesh, you can put all of your meshes in vertexData, and then when you make the draw call you merely tell it which vertex to start drawing from (vertexOffset) and how many vertices (numVertices) you want drawn.

The confusion comes with indexOffset. When I built indexData, I numbered all my indices according to which vertex I wanted them to match up to. So, if I had 1,000 indices, they were numbered 0 through 999.

Turns out that's not the way it works. Somewhat unintuitively, every mesh in vertexData has its own indices starting from zero. So, if you have 7 meshes, there will be 7 zeros in indexData: 7 completely separate sets of indices. If you had 10 meshes in vertexData and 1,000 indices total, indexData might contain the sequence 0 to 99 ten times over, assuming all 10 meshes had the same number of indices per mesh.

XNA (or DX) will add the vertexOffset to each index value on its own, without you doing anything. So for the third mesh, the first vertex might be element 423 of vertexData, and you (or at least I) would think the first index should be 423 to point to that vertex. But it turns out the first index has to be zero, even though it's the third mesh in the buffer, because the runtime automatically adds the vertexOffset of 423; if you also bake 423 into the index, you'll be pointing way out in left field at 846. It has to be zero because 423 + 0 is the first vertex of that submesh, which is where you need to start drawing.
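To make that concrete, here's a sketch of what I mean, along the lines of the Microsoft example above. All the mesh sizes (a 400-vertex car body, 100-vertex wheels) are made-up numbers for illustration; the point is that every sub-mesh's index values start at zero, while the offsets select where each sub-mesh lives in the shared arrays:

```csharp
// Hypothetical layout: car body has 400 vertices / 600 indices,
// each of 4 wheels has 100 vertices / 150 indices,
// all packed into one vertex array and one index array.
VertexPositionNormalTexture[] vertexData =
    new VertexPositionNormalTexture[400 + 4 * 100];
int[] indexData = new int[600 + 4 * 150];

// ... fill vertexData and indexData here. Crucially, each wheel's index
// VALUES run 0..99, even though the first wheel's vertices actually
// start at element 400 of vertexData -- the draw call adds vertexOffset.

// Draw the car body: vertices 0..399, 600 indices = 200 triangles.
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
    PrimitiveType.TriangleList,
    vertexData, 0, 400,     // vertexOffset = 0, numVertices = 400
    indexData, 0, 200);     // indexOffset = 0, primitiveCount = 200

// Draw the third wheel: its vertices start at 400 + 2 * 100 = 600,
// its indices start at position 600 + 2 * 150 = 900 within indexData,
// but the index values stored there are still just 0..99.
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
    PrimitiveType.TriangleList,
    vertexData, 600, 100,   // vertexOffset = 600, numVertices = 100
    indexData, 900, 50);    // indexOffset = 900, primitiveCount = 50
```

If I had numbered the wheel's indices 600..699 to match their absolute positions in vertexData, the runtime would have added the vertexOffset of 600 on top and read garbage from element 1200 onward.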

This came as a big surprise to me and was a bug in my code for a while until I figured it out. I can't find anywhere on the Internet where it's documented, or where DrawUserIndexedPrimitives is shown drawing multiple meshes out of one vertex array/buffer. (In this case it's actually an array and not a buffer, because it's not stored on the graphics card.) Anyway, I thought I'd share what I learned in case someone else comes across this problem. So, now at least there will be one place where it's documented.

This post has been edited by BBeck: 15 November 2014 - 04:50 AM


Replies To: Multiple meshes in a single vertex array

#2 tHc
  • D.I.C Head
Reputation: 22 • Posts: 69 • Joined: 12-October 13
Re: Multiple meshes in a single vertex array

Posted 18 November 2014 - 12:44 PM

Helpful as usual, Mr. Beck. Interesting post; it's good to see you're still at it.

#3 BBeck
Re: Multiple meshes in a single vertex array

Posted 19 November 2014 - 03:22 AM

Glad to see you're still at it as well! It seems like the forums are really dying, and I'm wondering where all the game programmers have gone and why there isn't a new generation of game programmers coming in.

I guess everyone's doing Unity these days. I can see some wisdom in that, but I've still decided to pursue DX11 after playing around with Unity for several months last year.

DX11 is really challenging though. Then again, that's exactly why I like it.

Right now I'm trying to figure out the whole modeling thing from start to finish. And it's both really cool and really frustrating at the same time, because it seems like it's always 2 steps forward and 1 step back.

My current project is to create models in Blender, write a custom Blender exporter in Python to extract exactly the data I need, then read in the data file, parse it, and populate a custom model class I've built, which uses a vertex buffer to draw the model and allows for rigid animation.

I'm really close to having it work in XNA. I've actually got some pretty complex models, with a few thousand polygons, drawing what appears to be correctly. And this is so low level that the whole process could easily be replicated in DX or OpenGL or anything else that lets you directly manipulate the graphics card. Theoretically, I could even build my own system and not need DX or OpenGL, because it's all math and such. It's a deep understanding of what's going on in any computer graphics environment.

But I keep hitting glitches and bugs. One of my most recent is that Python/Blender was giving me UV coordinates in exponential notation, and my code wasn't handling that because I was expecting plain decimal numbers. My code was basically interpreting any coordinates it didn't understand as 0,0 rather than crashing. Once I taught it how to read exponential notation, things really started straightening up, but I've still got a couple of issues.
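For what it's worth, on the .NET side float.Parse already accepts exponential notation if you ask for it; the sample values below are made up, but this is the sort of thing I mean:

```csharp
using System.Globalization;

// Blender/Python can write a UV like "6.25e-02" instead of "0.0625".
// NumberStyles.Float includes AllowExponent, so this form parses directly,
// and InvariantCulture avoids surprises with the decimal separator.
string rawU = "6.25e-02";
string rawV = "1.0e+00";

float u = float.Parse(rawU, NumberStyles.Float, CultureInfo.InvariantCulture);
float v = float.Parse(rawV, NumberStyles.Float, CultureInfo.InvariantCulture);

// u is 0.0625f and v is 1.0f, with no hand-rolled exponent handling needed.
```

So if you're parsing exported text by hand, letting the framework's parser handle the exponent form is less fragile than silently mapping unrecognized values to 0,0.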

I'm so close, yet so far.

Once I get that, I hope to make some videos on how to do it in DX. I may also start trying to figure out how to manually do skinned animation. I think I understand most of the theory at this point, but haven't tried any of it. And it can get pretty complex with different types of animation blending and whatnot. Guess I should worry about what I'm currently working on rather than that.

Anyway, good to hear you're still at it!
