Diffuse light stuck? How to just animate object?

33 Replies - 3160 Views - Last Post: 06 June 2015 - 09:48 AM

#1 thelastofus

  • New D.I.C Head

Reputation: 0
  • Posts: 23
  • Joined: 16-November 14

Diffuse light stuck? How to just animate object?

Posted 27 May 2015 - 05:54 AM

I've started doing HLSL. Basically I just want to understand HLSL, but to be honest I've read the HLSL introduction and getting-started material again and again and I still can't understand it. I know about the vertex shader input and output and the pixel shader, and the importance of world, view, and projection, but that's about it. So I tried messing around with some shader code but can't get my desired output.

This is what I get:

[screenshot: the box lit only on one side]

The thing about that is, the light is stuck on just the one side, and it also rotates around with the object, which is not what I want. This is the result I want.

Here is my source code

The Effects file

//Effect parameters (set from the draw code below).
float4x4 world;
float4x4 view;
float4x4 projection;
float3 lightDirection;
float4 lightColor;
float4 ambientColor;


struct VertexShaderOutput
{
	float4 Position : POSITION;
	float4 Color : COLOR0;
};


struct PixelShaderInput
{
	float4 Color: COLOR0;
};



VertexShaderOutput DiffuseLighting(
	float3 position : POSITION,
	float3 normal : NORMAL)
{

	VertexShaderOutput output;

	//generate the world-view-proj matrix
	float4x4 wvp = mul(mul(world, view), projection);

	//transform the input position to the output
	output.Position = mul(float4(position, 1.0), wvp);

	float3 worldNormal = mul(normal, world);


	float diffuseIntensity = saturate(dot(-lightDirection, worldNormal));


	float4 diffuseColor = lightColor * diffuseIntensity;



	output.Color = diffuseColor + ambientColor;
	output.Color.a = 1.0;


	//return the output structure
	return output;
}

float4 SimplePixelShader(PixelShaderInput input) : COLOR
{
	return input.Color;
}


My draw method

foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (ModelMeshPart part in mesh.MeshParts)
                {

                    
                    part.Effect = effect;
                    effect.Parameters["world"].SetValue(world * mesh.ParentBone.Transform);
                    effect.Parameters["view"].SetValue(view);
                    effect.Parameters["projection"].SetValue(projection);
                    effect.Parameters["lightDirection"].SetValue(new Vector3(1, 0, 0));
                    effect.Parameters["lightColor"].SetValue(Color.Green.ToVector4());
                    effect.Parameters["ambientColor"].SetValue(Color.DarkSlateGray.ToVector4());

                }

                for (int i = 0; i < effect.CurrentTechnique.Passes.Count; i++)
                {
                    effect.CurrentTechnique.Passes[i].Apply();
                    mesh.Draw();
                }

              
            }


All I want is to keep the light source fixed in one position so that it illuminates the side of the box while the box is rotating.


Replies To: Diffuse light stuck? How to just animate object?

#2 thelastofus

  • New D.I.C Head

Reputation: 0
  • Posts: 23
  • Joined: 16-November 14

Re: Diffuse light stuck? How to just animate object?

Posted 27 May 2015 - 08:23 AM

Okay, I got it.

I was rotating the camera view instead of the world matrix.

This is the solution:

world *= Matrix.CreateFromAxisAngle(new Vector3(0, 1, 0), elapsedTime*1);
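In context, it sits in my Update method, roughly like this (just a sketch; elapsedTime is the seconds since the last frame):

protected override void Update(GameTime gameTime)
{
    float elapsedTime = (float)gameTime.ElapsedGameTime.TotalSeconds;

    // Rotate the model's world matrix a little more each frame.
    // The light direction passed to the shader never changes, so the lit side
    // stays facing the light while the box turns underneath it.
    world *= Matrix.CreateFromAxisAngle(new Vector3(0, 1, 0), elapsedTime * 1);

    base.Update(gameTime);
}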

#3 BBeck

  • Here to help.

Reputation: 792
  • Posts: 1,886
  • Joined: 24-April 12

Re: Diffuse light stuck? How to just animate object?

Posted 27 May 2015 - 08:52 AM

Ok. I see several problems. First, your vertex shader output IS your pixel shader input. It goes through the rasterizer in between, but the vertex shader output feeds the rasterizer, which feeds the pixel shader. So for all practical purposes, the vertex shader output is the pixel shader input. In other words, you have a vertex shader input and output, and there is no separate pixel shader input or output; or rather, they're inferred, because the vertex shader output is the pixel shader input and the pixel shader output is always a color. It's the color of the pixel. That's all a pixel shader does: determine the color of the pixel being drawn.

You basically have two functions that you can call whatever you want, but one of them is the vertex shader and the other is the pixel shader. I usually call them "VertexShader" and "PixelShader". You're actually passing entire buffers at once as parameters. That's why you see a vertex shader input structure and a vertex shader output. It's really just a row of bytes, and the structure tells the shader how to interpret those bytes. It should match the vertex layout of the vertices you are sending in the vertex buffer.
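For example, a C#-side vertex type that lines up with a position + normal input like yours might look roughly like this (just a sketch, not code from your project; the struct name is made up):

// Matches an HLSL input of: float3 position : POSITION; float3 normal : NORMAL;
public struct VertexPositionNormal : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;

    // The declaration tells the GPU how to interpret that row of bytes for each vertex.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0));

    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}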

I was actually thinking about beginning the production work today to produce a video on pretty much this exact subject. The Blinn-Phong shader is pretty much your most basic shader that is actually useful. Here's the HLSL code for the Blinn-Phong shader I've recently been using in XNA.

float4x4 World;
float4x4 View;
float4x4 Projection;

float4 AmbientLightColor;
float3 DiffuseLightDirection;
float Padding;
float4 DiffuseLightColor;
float4 CameraPosition;

texture ColorMap;

sampler2D TextureSampler = sampler_state
{
	texture = (ColorMap);
	magfilter = LINEAR;
	minfilter = LINEAR;
	AddressU = CLAMP;
	AddressV = CLAMP;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
	float2 UV : TEXCOORD0;
	float3 Normal : NORMAL;
	float4 Color : COLOR;
};

struct SmoothVertexShaderOutput
{
    float4 Position : POSITION0;
	float3 WorldSpacePosition : TEXCOORD2;
	float4 Color : COLOR;
	float3 Normal : NORMAL;
	float2 UV : TEXCOORD0;
};


struct FlatVertexShaderOutput
{
    float4 Position : POSITION0;
	float3 WorldSpacePosition : TEXCOORD2;
	float4 Color : COLOR;
	float2 UV : TEXCOORD0;
};


float4 BlinnSpecular(float3 LightDirection, float4 LightColor, float3 PixelNormal, float3 CameraDirection, float SpecularPower)
{
	float3 HalfwayNormal;
	float4 SpecularLight;
	float SpecularHighlightAmount;


	HalfwayNormal = normalize(LightDirection + CameraDirection);
	SpecularHighlightAmount = pow(saturate(dot(PixelNormal, HalfwayNormal)), SpecularPower);
	SpecularLight = SpecularHighlightAmount * LightColor;

	return SpecularLight;
}


float4 PhongSpecular(float3 LightDirection, float4 LightColor, float3 PixelNormal, float3 CameraDirection, float SpecularPower)
{
	float3 ReflectedLightDirection;	
	float4 SpecularLight;
	float SpecularHighlightAmount;


	ReflectedLightDirection = 2.0f * PixelNormal * saturate(dot(PixelNormal, LightDirection)) - LightDirection;
	SpecularHighlightAmount = pow(saturate(dot(ReflectedLightDirection, CameraDirection)), SpecularPower);
	SpecularLight = SpecularHighlightAmount * LightColor; 

	return SpecularLight;
}


SmoothVertexShaderOutput SmoothVertexShaderFunction(VertexShaderInput Input)
{
    SmoothVertexShaderOutput Output;
	float4 WorldPosition;
	float4 ViewPosition;


	Input.Position.w = 1.0f;
    WorldPosition = mul(Input.Position, World);
	Output.WorldSpacePosition = WorldPosition;	//World position without camera conversion.
    ViewPosition = mul(WorldPosition, View);
    Output.Position = mul(ViewPosition, Projection);

	Output.Normal = mul(Input.Normal, (float3x3)World); 
	Output.Normal = normalize(Output.Normal);

	Output.Color = Input.Color;

	Output.UV = Input.UV;

    return Output;
}


FlatVertexShaderOutput FlatVertexShaderFunction(VertexShaderInput Input)
{
    FlatVertexShaderOutput Output;
	float4 WorldPosition;
	float4 ViewPosition;


	Input.Position.w = 1.0f;
    WorldPosition = mul(Input.Position, World);
	Output.WorldSpacePosition = WorldPosition;	//World position without camera conversion.
    ViewPosition = mul(WorldPosition, View);
    Output.Position = mul(ViewPosition, Projection);

	Output.Color = Input.Color;

	Output.UV = Input.UV;

    return Output;
}


float4 SmoothPixelShaderFunction(SmoothVertexShaderOutput Input) : SV_TARGET
{
	float3 LightDirection;
	float DiffuseLightPercentage;
	float4 OutputColor;
	float4 SpecularColor;
	float3 CameraDirection;	//Float3 because the w component really doesn't belong in a 3D vector normal.
	float4 AmbientLight;
	float4 DiffuseLight;
	float4 Texel;


	LightDirection = -DiffuseLightDirection;	//Normal must face into the light, rather than WITH the light to be lit up.
	DiffuseLightPercentage = saturate(dot(Input.Normal, LightDirection));	//Percentage is based on angle between the direction of light and the vertex's normal. 
	DiffuseLight = saturate((DiffuseLightColor * Input.Color) * DiffuseLightPercentage);	//Apply only the percentage of the diffuse color. Saturate clamps output between 0.0 and 1.0.

	CameraDirection = normalize(CameraPosition - Input.WorldSpacePosition);	//Create a normal that points in the direction from the pixel to the camera.

	if (DiffuseLightPercentage == 0.0f) 
	{
		SpecularColor  = float4(0.0f, 0.0f, 0.0f, 1.0f);
	}
	else
	{
		//SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, Input.Normal, CameraDirection, 45.0f);
		SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, Input.Normal, CameraDirection, 35.0f);
	}

	Texel = tex2D(TextureSampler, Input.UV);
	Texel.a = 1;
	//OutputColor =  saturate((AmbientLightColor * Input.Color) + DiffuseLight * DiffuseLightPercentage + SpecularColor);
	OutputColor =  saturate(AmbientLightColor + Input.Color + Texel * DiffuseLightPercentage + SpecularColor);
	//OutputColor = saturate(Texel);

    return OutputColor;
}


float4 FlatPixelShaderFunction(FlatVertexShaderOutput Input) : SV_TARGET
{
	float3 LightDirection;
	float DiffuseLightPercentage;
	float4 OutputColor;
	float4 SpecularColor;
	float3 CameraDirection;	//Float3 because the w component really doesn't belong in a 3D vector normal.
	float4 AmbientLight;
	float4 DiffuseLight;
	float3 FaceNormal; 
	float4 Texel;

	
	FaceNormal = cross(ddy(Input.WorldSpacePosition.xyz), ddx(Input.WorldSpacePosition.xyz));	//Use the partial differential of contiguous screen coordinates to obtain 3D positions on the triangle surface that correspond to 2 horizontal and 2 vertical positions in screen coordinates. Take the difference between the 2 positions to form a vector and use the two vectors to get a tangent cross product.
	FaceNormal = normalize(FaceNormal);

	LightDirection = -DiffuseLightDirection;	//Normal must face into the light, rather than WITH the light to be lit up.
	DiffuseLightPercentage = saturate(dot(FaceNormal, LightDirection));	//Percentage is based on angle between the direction of light and the vertex's normal. 
	DiffuseLight = saturate((DiffuseLightColor * Input.Color) * DiffuseLightPercentage);	//Apply only the percentage of the diffuse color. Saturate clamps output between 0.0 and 1.0.

	CameraDirection = normalize(CameraPosition - Input.WorldSpacePosition);	//Create a normal that points in the direction from the pixel to the camera.

	if (DiffuseLightPercentage == 0.0f) 
	{
		SpecularColor  = float4(0.0f, 0.0f, 0.0f, 1.0f);
	}
	else
	{
		SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, FaceNormal, CameraDirection, 10.0f);
		//SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, FaceNormal, CameraDirection, 45.0f);
	}

	Texel = tex2D(TextureSampler, Input.UV);
	Texel.a = 1;
	//OutputColor =  saturate((AmbientLightColor * Input.Color) + DiffuseLight * DiffuseLightPercentage + SpecularColor);
	OutputColor =  saturate(AmbientLightColor + Input.Color + Texel * DiffuseLightPercentage + SpecularColor);
	//OutputColor =  saturate(Texel);

    return OutputColor;
}


technique SmoothShadedTechnique
{
    pass Pass1
    {
        VertexShader = compile vs_3_0 SmoothVertexShaderFunction();
        PixelShader = compile ps_3_0 SmoothPixelShaderFunction();
    }
}


technique FlatShadedTechnique
{
    pass Pass1
    {
        VertexShader = compile vs_3_0 FlatVertexShaderFunction();
        PixelShader = compile ps_3_0 FlatPixelShaderFunction();
    }
}



Notice that my vertex shader is FlatVertexShaderFunction() and it has the structure/buffer VertexShaderInput as the input and outputs FlatVertexShaderOutput.

This actually wasn't put together to teach, so it's maybe a little more complicated than necessary. It's a shader that can do texturing, and it does both Blinn and Phong specular lighting. Phong basically invented this method of lighting where you have smooth shading and a specular highlight; the highlight is Phong's work, calculated with a reflection vector. Blinn came along later and figured out another way to produce the highlight using a half vector. Same thing, just a different way of doing it, which is why it's generally called Blinn-Phong either way.

This is actually two shaders that both can be used to do Blinn shading or Phong shading. I use "techniques" to put two completely separate shaders in one file together. In DirectX you may end up having the vertex shader and the pixel shader in completely separate files. So, a lot of this stuff is very optional.

But using separate techniques allows both shaders to live in one file. Flat, or faceted, shading has basically been dropped in DirectX (and XNA is built on DX9). There's no way to do it directly. Everything is smooth shaded, which is what you generally want most of the time anyway. I have a second shader that does flat shading. It calculates a normal for the entire face/triangle that the pixel is sitting on, for every pixel of that triangle. Smooth shading passes vertex normals, which are the directions that each of the corners of the triangle face. So there is no face normal, and you have to have a face normal for flat shading. So it calculates a face normal by taking the cross product of two tangent vectors built from neighboring pixel positions on the face of the triangle (that's what the ddx/ddy calls in the flat pixel shader are doing). It's a pretty cool trick that took me a couple of years to find. And it works.

But you can ignore that whole shader and just look at the smooth shading technique shaders.

Somewhere I have a Blinn-Phong shader with good comments in it. I'll post this and come back in a little bit after I see what I can come up with.

#4 BBeck

  • Here to help.

Reputation: 792
  • Posts: 1,886
  • Joined: 24-April 12

Re: Diffuse light stuck? How to just animate object?

Posted 27 May 2015 - 09:03 AM

Here's a similar shader in HLSL for DirectX 11. Notice it's close to the XNA version, but there are some differences. It doesn't include flat shading and is just one technique. In fact, the way it's used, the pixel shader and the vertex shader might as well be in completely separate shader files, but it has different commenting and takes another look at the same thing.

cbuffer MatrixConstantBuffer : register(cb0)	//The main program assigns this to the Vertex Shader.
{
	matrix WorldMatrix;
	matrix ViewMatrix;
	matrix ProjectionMatrix;
}


cbuffer ParametersBuffer : register(cb0)	//The main program assigns this to the Pixel Shader.
{
	float4 AmbientLightColor; 
	float3 DiffuseLightDirection;	//Must be normalized before passing as a parameter. So, that it can be used to calculate angles.
	float Padding;
	float4 DiffuseLightColor;
	float4 CameraPosition;
}


struct VertexShaderInput
{
    float4 InputPosition : POSITION;
	float2 InputUV : TEXCOORD0;
	float3 InputNormal : NORMAL;
	float4 InputColor : COLOR;
};

struct PixelShaderInput
{
    float4 Position : SV_POSITION;
	float3 WorldSpacePosition : TEXCOORD1;
    float4 Color : COLOR;
	nointerpolation float3 Normal : NORMAL;		//Nointerpolation turns off smooth shading.
};


float4 BlinnSpecular(float3 LightDirection, float4 LightColor, float3 PixelNormal, float3 CameraDirection, float SpecularPower)
{
	float3 HalfwayNormal;
	float4 SpecularLight;
	float SpecularHighlightAmount;


	HalfwayNormal = normalize(LightDirection + CameraDirection);
	SpecularHighlightAmount = pow(saturate(dot(PixelNormal, HalfwayNormal)), SpecularPower);
	SpecularLight = SpecularHighlightAmount * LightColor; 

	return SpecularLight;
}


float4 PhongSpecular(float3 LightDirection, float4 LightColor, float3 PixelNormal, float3 CameraDirection, float SpecularPower)
{
	float3 ReflectedLightDirection;	
	float4 SpecularLight;
	float SpecularHighlightAmount;


	ReflectedLightDirection = 2.0f * PixelNormal * saturate(dot(PixelNormal, LightDirection)) - LightDirection;
	SpecularHighlightAmount = pow(saturate(dot(ReflectedLightDirection, CameraDirection)), SpecularPower);
	SpecularLight = SpecularHighlightAmount * LightColor; 

	return SpecularLight;
}


PixelShaderInput VertexShaderMain(VertexShaderInput Input)
{
    PixelShaderInput Output;

	Input.InputPosition.w = 1.0f;	//This is actually brought in as 3D instead of 4D and so we have to correct it for matrix calculations.
    Output.Position = mul(Input.InputPosition, WorldMatrix);
	Output.WorldSpacePosition = Output.Position;
	Output.Position = mul(Output.Position, ViewMatrix);
	Output.Position = mul(Output.Position, ProjectionMatrix);


	Output.Normal = mul(Input.InputNormal, (float3x3)WorldMatrix);	//Only the Object's world matrix need be applied, not the 2 camera matrices. Float3x3 conversion is because it's a float3 instead of a float4.
	Output.Normal = normalize(Output.Normal);	//Normalize the normal in case the matrix math de-normalized it.

    Output.Color = Input.InputColor;

    return Output;
}


float4 PixelShaderMain(PixelShaderInput Input) : SV_TARGET
{
	float3 LightDirection;
	float DiffuseLightPercentage;
	float4 OutputColor;
	float4 SpecularColor;
	float3 CameraDirection;	//Float3 because the w component really doesn't belong in a 3D vector normal.
	float4 AmbientLight;
	float4 DiffuseLight;



	LightDirection = -DiffuseLightDirection;	//Normal must face into the light, rather than WITH the light to be lit up.
	DiffuseLightPercentage = saturate(dot(Input.Normal, LightDirection));	//Percentage is based on angle between the direction of light and the vertex's normal. 
	DiffuseLight = saturate((DiffuseLightColor * Input.Color) * DiffuseLightPercentage);	//Apply only the percentage of the diffuse color. Saturate clamps output between 0.0 and 1.0.

	CameraDirection = normalize(CameraPosition - Input.WorldSpacePosition);	//Create a normal that points in the direction from the pixel to the camera.

	if (DiffuseLightPercentage == 0.0f) 
	{
		SpecularColor  = float4(0.0f, 0.0f, 0.0f, 1.0f);
	}
	else
	{
		//SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, Input.Normal, CameraDirection, 45.0f);
		//SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor+Input.Color, Input.Normal, CameraDirection, 45.0f);
		SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, Input.Normal, CameraDirection, 45.0f);
	}
	//AmbientLightColor + Input.Color + 
	//OutputColor = saturate(AmbientLightColor + DiffuseLight * DiffuseLightPercentage + SpecularColor);
	OutputColor = saturate(AmbientLightColor + DiffuseLight + SpecularColor);

    return OutputColor;
}



Somewhere I have a Blinn-Phong shader that is fully commented, but I can't seem to find it anywhere here. I'll have to post it when I find it.

Good to see you got the problem solved!

#5 thelastofus

  • New D.I.C Head

Reputation: 0
  • Posts: 23
  • Joined: 16-November 14

Re: Diffuse light stuck? How to just animate object?

Posted 28 May 2015 - 04:57 AM

I don't know. The Blinn-Phong code looks intimidating and complicated. I wonder if it's suitable for beginner level.

Btw, since you know a lot about these things, what is your definition of normals? Like this:

Quote

"In the first two lines, we just get a copy of the light direction and normal that are normalized"
quote from RB Whitaker's specular lighting tutorial.

Cause I don't get it. I was thinking about surface normals/vertex normals. I'm not really sure.

#6 BBeck

  • Here to help.

Reputation: 792
  • Posts: 1,886
  • Joined: 24-April 12

Re: Diffuse light stuck? How to just animate object?

Posted 28 May 2015 - 09:50 AM

Blinn-Phong is pretty much your most basic useful 3D shader. I'm thinking of doing a tutorial on Blinn-Phong that first shows the algorithm without the specular part. All the Blinn-Phong specular does is put a bright "dot" on the surface to make it appear slick and "shiny". Most of the algorithm is actually Gouraud shading. Gouraud shading is one step up from flat shading, which you can see examples of in that link. (Note that they seem to say some things that are false in that Wikipedia link, and they also confuse flat, Gouraud, and Phong shading. Flat looks faceted. Gouraud is smooth shading. Phong added the specular highlight bright spot. Blinn came up with a different algorithm to do the same thing as Phong.)

So, first we should probably define what a normal is. And to do that, you need to define what a vector is. A vector is an arrow. In our case it's a 3D arrow represented by two positions in 3D space. All we care about is the direction it points and how long the arrow is. Because of that, we can always assume that the tail of the vector/arrow is at the origin (x=0, y=0, z=0), so you only need to store the position of the arrow head in the vector object (Vector3 in XNA or float3 in HLSL; we sometimes throw in a 4th component as a w value, which is almost always set to 0 for a vector and 1 for a position, but that's getting more complicated).

So, when you see a 3D position stored in the vector that's the position of the arrow head and you can calculate the direction and length of the vector/arrow with that information. XNA will calculate the length for you and you normally don't need to know what the direction actually is as long as it's pointing where it should be.

So, sometimes we don't really care about the length/amount/magnitude of the vector/arrow; all we care about is the direction. In those cases we "normalize" the vector by setting its length to 1. This changes the vector to have a length of 1, but it does not change the direction it points. When a vector is normalized we call it a "normal". If you had set its length to zero it would no longer exist. That's why we chose a length of one to represent a vector without an amount. Plus, you can easily change a normalized vector to any length you want by multiplying it by the length you want.

So, in short, a normal is an arrow with a length of one stored as a vector that represents a direction of something.
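In XNA code, normalizing is literally one call (a quick sketch; the numbers are just an example):

Vector3 v = new Vector3(3f, 0f, 4f);    // an arrow of length 5 pointing somewhere
Vector3 n = Vector3.Normalize(v);       // same direction, length 1: (0.6, 0, 0.8)
Vector3 longer = n * 7f;                // same direction again, now with a length of 7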

Check out my YouTube video for an in-depth explanation of vectors and normals. The YouTube channel is VirtuallyProgramming.com and my website of the same name has a lot of writing about this under "Fundamentals", I think in the XNA section of the website. (My footer has a hyperlink to the website and the YouTube channel.)

The way flat shading works is by calculating a normal (an arrow that tells us a direction) that tells us which direction the triangle, or face, is facing. The omnipresent "diffuse" light is nothing more than a normal too: a normal that tells us what direction the light shines in, and it's the same for everything in the 3D world. If the surface/face/triangle faces exactly into the light (directly opposite the direction the light shines in), then it gets 100% of the color of the light, which is generally mixed with the color of the object. As the face turns away from the light direction towards 90 degrees away, it gets less and less of the light's color (and its own), until at 90 degrees it gets no light at all. Anything facing further away than that also gets no light.
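Here's a rough CPU-side sketch of that rule (the shader does the same thing with dot() and saturate(); the method name is just something I made up):

static float DiffusePercentage(Vector3 surfaceNormal, Vector3 lightShinesIn)
{
    // Flip the light so we compare against the direction pointing *into* the light,
    // then clamp to zero so anything facing more than 90 degrees away gets no light.
    float amount = Vector3.Dot(Vector3.Normalize(surfaceNormal), -Vector3.Normalize(lightShinesIn));
    return MathHelper.Clamp(amount, 0f, 1f);
}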

You can easily use a vector cross product to calculate a vector that points straight out of the plane that two vectors live on. This is vector math and again check out my video. But two sides of any triangle can be thought of as vectors that point from one corner to the two other corners. Since they are part of the same face, they obviously live on that 2D plane in 3D space. Their cross product gives a vector that points perfectly perpendicular out of that plane or out of that surface. It tells you what direction that triangle faces. And if you can do a vector cross product it's a simple calculation.

The cross product's length is the product of the two side lengths times the sine of the angle between them, so even normalizing both side vectors first isn't enough unless they happen to be at right angles. It's best and easiest to just normalize the result vector, setting its length to 1 and throwing away whatever length it had.
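In XNA-style C# the whole face-normal calculation is only a couple of lines (a sketch; HLSL has cross() and normalize() that work the same way):

static Vector3 FaceNormal(Vector3 cornerA, Vector3 cornerB, Vector3 cornerC)
{
    Vector3 edge1 = cornerB - cornerA;    // two sides of the triangle as vectors
    Vector3 edge2 = cornerC - cornerA;

    // The cross product points straight out of the triangle's plane.
    // Corner order (winding) decides which side of the triangle it points out of.
    return Vector3.Normalize(Vector3.Cross(edge1, edge2));
}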

Then it's just a matter of representing the diffuse light source as a vector that shines in a specified direction and doing the math to calculate the angle between the two vectors/normals to determine what percentage they are between 0 and 90 degrees and you get that much percentage of the light on the surface.

That's flat shading, which is not so much in use anymore. It's actually a little tricky to do this in XNA nowadays, partially because XNA uses Shader Model 3. In DirectX you can use Shader Model 5, which makes flat shading very simple by just adding one word (nointerpolation) to the shader. But that doesn't work in XNA, so you have to actually calculate a face normal.

Anyway, by default XNA wants to do Gouraud shading. Gouraud shading is very similar in concept. Instead of using a normal for the face to determine what direction the triangle faces, you use a separate normal at every corner, or vertex. The vertices are the corners of the triangle and they store more than just a position. The first thing you probably want them to store is a normal that tells us what direction that corner faces.

You can do flat shading this way by having all 3 corner normals point in the direction of the face. Then there's basically no difference to flat shading. But by having the 3 normals point in different directions you can sort of simulate "bending" the triangle to no longer look flat. Lighting is calculated basically just like flat shading, but you do it for each corner of the triangle which can face in different directions. You then interpolate the color value between the corners.

This is really what the rasterizer does between the vertex shader and pixel shader. Your vertex shader passes vertices of the triangle to the rasterizer. The rasterizer fills in, or shades in, the triangle between the three points in 2D space (that wvp matrix multiplication turned the 3D positions into 2D positions to draw on a 2D screen in the vertex shader).

By interpolating between the corners you get a smooth change in color between corners. You can add colors to vertices and the colors will interpolate and blend across the face of the triangle. But here we're interpolating or blending the normals of the vertices/corners so that if one vertex normal should give us 100% of the light and the other is 90 degrees different and facing away from the light then the light will be blended across the face from light to dark.

When you have thousands of triangles/polygons/faces in your model these changes in lighting will be very slight and it will just make the object look smooth instead of faceted like with flat shading.

Notice that the faces/triangles are still flat, but you have basically created an illusion that they are curved by interpolating the lighting across their face.

And that's the difference between flat shading and Gouraud shading.
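To make the vertex-normal part concrete, here's a sketch using XNA's built-in vertex type (the positions, UVs, and the amount of "bend" are made-up numbers):

Vector3 a = new Vector3(0, 0, 0), b = new Vector3(1, 0, 0), c = new Vector3(0, 1, 0);
Vector3 faceNormal = Vector3.Normalize(Vector3.Cross(b - a, c - a));

// Flat look: every corner carries the face normal, so the whole triangle gets one lighting value.
VertexPositionNormalTexture[] flat =
{
    new VertexPositionNormalTexture(a, faceNormal, Vector2.Zero),
    new VertexPositionNormalTexture(b, faceNormal, Vector2.UnitX),
    new VertexPositionNormalTexture(c, faceNormal, Vector2.UnitY),
};

// "Bent" look: each corner gets its own direction, and the rasterizer interpolates the lighting
// between them, which is what creates the illusion of a curved surface.
VertexPositionNormalTexture[] smooth =
{
    new VertexPositionNormalTexture(a, Vector3.Normalize(faceNormal + new Vector3(-0.3f, -0.3f, 0f)), Vector2.Zero),
    new VertexPositionNormalTexture(b, Vector3.Normalize(faceNormal + new Vector3( 0.3f, -0.3f, 0f)), Vector2.UnitX),
    new VertexPositionNormalTexture(c, Vector3.Normalize(faceNormal + new Vector3( 0f,    0.3f, 0f)), Vector2.UnitY),
};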

Blinn-Phong takes Gouraud shading and adds a specular highlight spot to simulate shininess. And it is pretty much your most basic of all shaders.

There's another type of shader known as a post processing effect that is maybe easier to get into. Blinn-Phong draws the objects on the screen. Post processing effects take a copy of what is drawn to the screen and then use a shader to process the entire 2D image. For example, you might make the whole image gray scale. Or you might do blur or bloom to the image. Post processing effects don't use vector and matrix math as much as your drawing shaders do.

Really, I don't consider HLSL to be a beginner subject. That's why in all of my XNA examples I went out of my way to use BasicEffect in every example rather than HLSL. It's a Blinn-Phong shader that also does textures.

When you get into HLSL you need to be extremely comfortable with matrices and vectors. You may not have to master Linear Algebra to do it, but you at least need to be comfortable with matrices and vectors to write even the most basic drawing shaders like Blinn-Phong, which is just one step up from Gouraud shading, pretty much the simplest useful 3D shader you can write for drawing.

That's why I started off my YouTube channel with Vectors and Matrices. I'm focusing on DirectX11 now instead of XNA and in DX11 there is no BasicEffect for you to use or a Sprite class even. So, you have to basically write a Blinn-Phong shader on day one just to see anything on the screen. That's pretty rough for beginners, but I hope to do enough in video tutorials to explain it where beginners can hopefully understand.

I'm actually a bit disappointed that so many XNA tutorials used HLSL instead of the BasicEffect. For the beginner, I think getting used to BasicEffect is a lot more useful and allows you to get on with other things like making skyboxes and terrains and stuff without having to learn a lot of Linear Algebra. Until you know matrices and vectors, you're not going to get very far with HLSL and matrices and vectors tend to intimidate people starting out.

BasicEffect is a pretty good basic Blinn-Phong shader. I don't think it can do complex graphics using normal maps, specular maps, and so on, but it's as good as any Blinn-Phong shader you're likely to use with the XNA built in model class which I also believe is not capable of using normal maps or specular maps. So, by using BasicEffect you are learning how to use a Blinn-Phong shader without having to write one yourself.

This is one of the ways I think XNA helps beginners "ease" into 3D game programming, as opposed to DX11, where you have to write a Blinn-Phong shader on day one in order to put just about anything on the screen, and there is no model class, so get ready to write your own (you can also use the DirectX Tool Kit, which at least gives you a model and sprite class).

Anyway, read this. Go through my matrix and vector videos. Come back and ask questions and maybe we can get you understanding how to write a Blinn-Phong shader. If you're not ready for Blinn-Phong, you're not ready for drawing with HLSL, except maybe to try post-processing effect shaders in HLSL, since I think those involve less Linear Algebra. The problem with learning post-processing effects is that you have to use render targets, which is also a bit of an intermediate subject. (Basically you draw to a memory buffer instead of the screen, do your post-processing effect, and then draw the buffer to the screen.)

You could also stick with BasicEffect instead of jumping immediately into HLSL. Even then I would recommend watching my videos, as they will introduce you to matrices, vectors, and gimbal lock as well as 3D motion. Even if you just watch and don't understand much, I think watching them will help you with 3D game programming in XNA or with any other platform. Although XNA using the Sprite, Model, and BasicEffect classes doesn't require you to have a deep understanding of vectors, matrices, or gimbal lock, it helps a lot if you at least know the basics of these subjects. And hopefully I'm a good enough teacher that you walk away from those videos knowing pretty much everything you need to know about those subjects to do HLSL and 3D game programming, and just need to practice after that.

It's about 5 hours of lecture between the first 3 videos. Maybe take multiple sessions to watch it. Realize you don't have to have a deep understanding of this stuff to use it. You mostly just need to understand it at a high level because XNA knows how to do matrix and vector math for you. So, all you really need to know is why you would want to multiply two matrices together (to combine their information) or why you would want to subtract one vector from another (to determine the direction and distance between their arrow heads). So, don't get intimidated by the math because XNA is basically like using a calculator and you almost never need to know exactly how the math is done as long as you understand why you are adding, subtracting, or multiplying.

And it doesn't really matter what platform you are working with; you need a solid high-level understanding of matrices and vectors in order to do 3D graphics. XNA helps you avoid this at first by giving you a built-in model and BasicEffect class. But the sooner you can learn this stuff, the better off you'll be doing 3D game graphics. Some authors will spend about a third of the book burning this stuff into your head. In XNA it's less important, because you don't have to do HLSL and such. I decided that teaching DX I would spend a few hours lecturing on this stuff, primarily because you have to get into HLSL from day one in DX11 and you're going to be hopelessly lost in HLSL without a solid understanding of matrices and vectors. You'll use them in other areas of 3D graphics too. And almost everything that involves moving things in the 3D world, or any sort of animation, is matrix multiplication. So you need to learn that pretty quickly too. (Not how to do the math, as you can just say MatrixA = MatrixA * MatrixB in XNA and it will do the math for you, but why you multiply two matrices together. Also, note that you pretty much never divide, add, or subtract matrices in 3D game programming. If you want to uncombine them, which you would think would be division or really subtraction, you multiply by the inverse of the matrix, which I cover in the video. Then again, you would think that combining matrices would be addition when in fact it's multiplication. Watch the video to find out why.)
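A tiny sketch of what that looks like in XNA (the matrices and the amount of rotation are placeholders):

Matrix world = Matrix.CreateRotationY(MathHelper.PiOver4) * Matrix.CreateTranslation(5f, 0f, 0f);
Matrix change = Matrix.CreateRotationY(0.1f);    // some change you want to fold in

world = world * change;                  // "combine" the change into the world matrix (multiply, don't add)
world = world * Matrix.Invert(change);   // "uncombine" it again (multiply by the inverse, don't subtract)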

This post has been edited by BBeck: 28 May 2015 - 10:25 AM


#7 BBeck

  • Here to help.

Reputation: 792
  • Posts: 1,886
  • Joined: 24-April 12

Re: Diffuse light stuck? How to just animate object?

Posted 28 May 2015 - 10:55 AM

Interesting. Almost all of the stuff I'm seeing on the Internet confuses Phong shading and Gouraud shading. No wonder it took me so long to get it straight. Now I can write the algorithm, so I know exactly what is what. Flat shading computes lighting with a single normal for the face, which gives it that faceted look. Gouraud figured out how to do smooth shading by giving every vertex its own normal and interpolating the lighting calculation between the triangle corners to create the illusion that it is not flat. (Look at the edges of the triangles and you'll see it's not actually curved.)

Phong came up with an algorithm to add to Gouraud shading to simulate shininess. By making the shiny spot large the object appears dull, and by making it small the object appears slick or even wet. It helps tremendously to have a lot of triangles to spread the effect across. A lot of the examples call Blinn-Phong shading "Gouraud" shading when in fact they are just using fewer triangles. Your triangle/polygon count has nothing to do with this; they are just mislabeling it. If it has a specular highlight it is Blinn-Phong, even if it's flat shaded. The highlight itself is what makes it Blinn-Phong. Flat shading versus smooth shading is a matter of being Gouraud or not. So Blinn-Phong can be used with flat shading. It's just that, in the order these things came along, Blinn-Phong specular highlights arrived long after flat shading had fallen out of use and everyone had gone to Gouraud (smooth) shading.

There are still times when flat shading is better. Models tend to share vertices. And a vertex can only have one normal. So a cube may only have 8 vertices where the corners can only point in 8 directions. That's a real problem if you want the sides of the cube to appear flat. There are a couple of solutions that I've explored. The first is to stop sharing vertices which in some cases is the preferred way to go. Then you have 36 vertices in your cube which is a whole lot more. But you can make them all point in the same flat directions even when their positions are on top of one another.

If you don't want to do that and want the faceted look, you need to do what I did in the shaders I posted where you calculate a face normal or otherwise tell the rasterizer to not interpolate values between the corners. Then you can get the advantages of shared vertices while flat shading.

You can still calculate a specular highlight either way. The whole principle there is that light will leave a surface at the exact opposite angle that it hits the surface at. So, if there is a 20 degree angle between the surface and the angle of the light, the light will reflect at a 20 degree angle from the surface in the exact opposite direction. Specular highlights work by calculating whether the camera is in the path of the reflection and looking towards the reflection or not, and what percentage it is between looking at the reflection and not looking at the reflection. So if the camera is not looking down the path of the reflection the surface gets 0% of the specular light. If it is perfectly aligned with the path of the reflection it gets 100% of the specular light color on the surface.

That's basically Phong specular shading. Blinn just does a trick to simplify the math slightly, using a half vector instead of the full reflection calculation.

As intimidating as specular highlights are, it really just boils down to "Are you looking down the path of the reflected light off the surface? If no, then don't do anything but draw normally. If yes, add the light color to that pixel on the surface so that it appears brighter, to produce a shiny spot."
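Here's a rough CPU-side sketch of that test (it mirrors the PhongSpecular function in the shaders I posted above; the method name is mine):

static float SpecularAmount(Vector3 surfaceNormal, Vector3 toLight, Vector3 toCamera, float specularPower)
{
    // Bounce the "towards the light" vector off the surface.
    float facing = Math.Max(Vector3.Dot(surfaceNormal, toLight), 0f);
    Vector3 reflected = 2f * facing * surfaceNormal - toLight;

    // Ask how closely the camera direction lines up with that reflection (0 = not at all, 1 = dead on).
    float lineUp = MathHelper.Clamp(Vector3.Dot(Vector3.Normalize(reflected), Vector3.Normalize(toCamera)), 0f, 1f);

    // The power just tightens the bright spot: a higher power gives a smaller, sharper highlight.
    return (float)Math.Pow(lineUp, specularPower);
}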

But again, notice the heavy use of vectors here which is why I say you need to be comfortable with vectors for this stuff.

After this, you add textures. You can assign colors to each vertex. Normally they would all get the same color pretty much although you might be able to make one face a different color, but shared vertices mean the colors are going to smear between faces when it interpolates just like the flat face vs. smooth face problem.

Instead, what you probably want to do is to tell the rasterizer to color the pixels of the triangle to match a photograph. You can add the light color (such as white) to this color and use specular highlights or smooth shading in addition to the texture color which is what I do in the examples I posted. You use a "sampler" to sample the pixels of the photograph and map them to the surface of the triangle. Each corner of the triangle (vertex) gets a mapping coordinate called a UV coordinate that tells the sampler and rasterizer where that corner of the triangle maps to in the photograph. It's like pinning that corner/vertex to that position in the photo. Then the sampler and rasterizer interpolate the position between the corners to calculate a UV coordinate for each pixel that is passed to the pixel shader.
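A quick sketch of pinning a quad's corners to a photo with XNA's VertexPositionTexture (the positions are placeholders; UVs run from 0,0 at one corner of the image to 1,1 at the opposite corner):

VertexPositionTexture[] quad =
{
    new VertexPositionTexture(new Vector3(-1f,  1f, 0f), new Vector2(0f, 0f)),  // top-left corner pinned to the top-left of the photo
    new VertexPositionTexture(new Vector3( 1f,  1f, 0f), new Vector2(1f, 0f)),  // top-right -> top-right
    new VertexPositionTexture(new Vector3(-1f, -1f, 0f), new Vector2(0f, 1f)),  // bottom-left -> bottom-left
    new VertexPositionTexture(new Vector3( 1f, -1f, 0f), new Vector2(1f, 1f)),  // bottom-right -> bottom-right
};
// Every pixel inside the triangles gets a UV interpolated from these corners,
// and the sampler looks up that spot in the photograph for the pixel's color.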

That's texturing which could be used with flat shading, smooth shading, and/or specular highlights. Now you're pretty much up to my level with shader algorithms although I've also done some post processing stuff.

My next step will be to try and add a normal map to the model, UV map that to a normal map photo, and have the shader compute a separate normal for each pixel in the pixel shader according to the normals in the texture/photo. I think I know how to do it, but have not actually done it yet. I'm going to attempt it in DirectX soon. And that will be normal mapping. Parallax mapping, specular mapping, reflection mapping, occlusion mapping, and so forth should all be basically the same thing. And notice it's all built on top of Gouraud and Blinn-Phong (although you could theoretically do flat shading with it as well). Also, note that normal mapping uses more vector math and normals. Most of the other types of mapping will too.

Collision detection also uses vectors when you get ready to make things touch one another.

This post has been edited by BBeck: 28 May 2015 - 11:02 AM


#8 thelastofus

  • New D.I.C Head

Reputation: 0
  • Posts: 23
  • Joined: 16-November 14

Re: Diffuse light stuck? How to just animate object?

Posted 29 May 2015 - 05:25 AM

Haven't finished reading your answer yet, but I have a strong urge to suggest this: please do a tutorial. It would be even better if you could explain the math in a separate playlist. :D

#9 thelastofus

  • New D.I.C Head

Reputation: 0
  • Posts: 23
  • Joined: 16-November 14

Re: Diffuse light stuck? How to just animate object?

Posted 29 May 2015 - 07:40 AM

Ugh, too many vectors and matrices. So confusing, I have a headache right now.

I think I understand it a little bit, but there's a lot I don't understand, and I don't really know what the correct question is.

First, um, is this what you're talking about with flat shading, Gouraud, and Phong?

[image: flat, Gouraud, and Phong shading comparison]

In my understanding, when you said shading can create the illusion that a surface is curved, okay, that's hard to visualize. Cause the flat one is just all squares in there.

Is this what all 3D software does?

When you're talking about smooth shading, are you talking only about Gouraud and Blinn and Phong?


Maybe Blinn-Phong is too much for me right now, and even looking at tutorials on the specular thing, I really can't understand them. Up to this point the only HLSL code I can understand is diffuse, and even that gives me a hard time.


There aren't many tutorials on using BasicEffect. Even on the MS community pages, they use HLSL.


Lastly, I've seen your videos, and they look great. Although, could you add some kind of marker so we know which number or vector you are pointing at? You know, like what others do, for example circling a vector to indicate that "this" vector will do something that affects "this" other vector. Like that? Sorry, lol, I know it's hard to make a video.

#10 thelastofus

  • New D.I.C Head

Reputation: 0
  • Posts: 23
  • Joined: 16-November 14

Re: Diffuse light stuck? How to just animate object?

Posted 29 May 2015 - 07:48 AM

Hi, I just saw a tutorial on YouTube that also explains what you've just said about shading and normals and how they affect things, although without the math. Is it possible you could create this kind of tutorial, but with the math explained? I'm kind of a visual learner :D

Just wanted to add that.

#11 BBeck

  • Here to help.

Reputation: 792
  • Posts: 1,886
  • Joined: 24-April 12

Re: Diffuse light stuck? How to just animate object?

Posted 29 May 2015 - 11:31 AM

The images you posted don't appear to have come through.


This picture, if I can get it to show here, shows flat, Gouraud, and Phong shading.
[image: an apple rendered with flat shading, Gouraud shading, and Phong shading]

With the flat shading example, you can see how it looks faceted like a gem. But also look at how the edges of the apple are straight lines because of it. Look at the next example which is Gouraud shading. Notice how smooth it looks in comparison even though this is the exact same model/mesh. However, notice how straight the edges are because the illusion does not cover that up. The apple appears to be much more rounded, but that is an illusion created by the gradient color blending across the surface of the flat triangles. Compare it to the example labeled "ray tracing".

The bright "reflection" dot on the apple is what Blinn-Phong does. I think they may have also textured the apple for the Phong example which is not necessarily part of Phong. You can texture with any of these. Notice that the edges even on the Phong example are flat. Again, the "roundness" of the apple is an illusion because the mesh itself is very much still faceted and the triangles are not rounded out the way they appear to be, which becomes obvious if you watch the edges of the object. You could probably see it even better if you could spin the apple.

The vector and matrix stuff is a lot to take in if you have never been exposed to it before. Vectors were one of the reasons about 75% of my trigonometry class dropped out before the end of the semester. It's intimidating. But it's really not that hard once you "get" it. You could probably fit everything there is to know about vectors in game programming on a post card. My video pretty much covers all of it, which is why it's kind of long.

Don't get too bogged down trying to understand the math, because XNA, and even DX11 or whatever you're working in, will do the math for you like a calculator. The big "take away" for vectors is that they are a "concept" where an amount and a direction are tied together. We most easily think of them as arrows, where the arrow points in the direction and the amount is represented by the length of the arrow. A normal is a vector whose length we set to one to represent the fact that we no longer care about its length and only care about the direction.

You see a whole lot of normals in 3D game programming but all they are is an arrow that keeps track of a 3D direction for us.

Usually it's the vector dot product and cross product that really blow people's minds. But that's not that big of a deal really, because you basically just use them for 2 different things. The dot product is used to project a mathematical shadow; you use that for collision detection, for one thing. The cross product basically gives you a vector that points straight out of the plane or triangle that the two original vectors share. That's useful for a lot of things, but it's the same reason over and over.

With matrices, other than the difference between world, view, and projection matrices, you mostly just need to know that a world matrix stores an orientation, scale, and position for an object in the 3D world. To change any of those things, you load up a second matrix with the change, like a rotation, and then you multiply the two matrices together to combine their information; the result matrix is the orientation, scale, and position after the change, which you can store back as the object's current world matrix to make the change permanent. A view matrix is the same thing, except it's the world matrix of a camera and it's backwards, or inverted. And the projection matrix is what takes the 3D vertex positions and flattens them out into 2D to be drawn on a 2D computer monitor.
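Here's roughly what those three look like in XNA terms (a sketch; all the numbers are placeholders):

Matrix world = Matrix.CreateScale(1f)                               // scale
             * Matrix.CreateRotationY(MathHelper.PiOver2)           // orientation
             * Matrix.CreateTranslation(0f, 0f, -5f);               // position in the 3D world

Matrix view = Matrix.CreateLookAt(                                  // the camera's (inverted) world matrix
    new Vector3(0f, 2f, 10f),                                       // where the camera sits
    Vector3.Zero,                                                   // what it looks at
    Vector3.Up);

Matrix projection = Matrix.CreatePerspectiveFieldOfView(            // flattens the 3D scene onto the 2D screen
    MathHelper.PiOver4, 16f / 9f, 0.1f, 1000f);                     // field of view, aspect ratio, near, far

// To change the object, build a change matrix and multiply it in:
world = Matrix.CreateRotationY(0.02f) * world;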

That's most of it right there.

But don't expect to "get" it immediately. You're going to need to see some code examples and practice a bit, but it will eventually click. And it's on video, so you can watch it 12 times if you need to. I might recommend watching it once without making any effort to understand. And then coming back and watching it again on another day to try and understand it.

Yes, all 3D software is using something like Gouraud shading with Blinn-Phong. Loading a vertex buffer up with vertices that represent the corners of triangles; sending that buffer to the vertex shader to place them on the 2D screen (plus any other modifications you like in the vertex shader); sending that to the rasterizer to shade in the area between the triangle's vertices on the now-2D screen; the rasterizer sending each pixel that it interpolates between the 3 corners of the triangle to the pixel shader for any changes you want to make; and the pixel shader writing the pixel to the screen: that is how the graphics card works.

So, DirectX and OpenGL are doing that because that's how the electronics of the graphics card draw stuff to the screen. Even 2D is done by drawing a quad (rectangle) using two triangles and texturing it. In DX11, it's actually 3D even when it looks 2D. XNA is built on top of DirectX 9. Pretty much everything that does 3D graphics is either built on top of DirectX or OpenGL. DX uses HLSL and OpenGL uses GLSL, which are very similar.

You could say that smooth shading is pretty much anything that is not flat shading. Gouraud is the part that smooth shades it and Blinn-Phong just adds the specular bright spot. But you can't really do Blinn-Phong by itself. It has to be done on top of either flat shading or Gouraud. So now that I think about it smooth shading really only applies to Gouraud and not Blinn-Phong because Blinn-Phong could be done on top of flat shading or smooth shading. But most of the time Blinn-Phong is called smooth shading because it's kind of rare for anyone to do flat shading anymore with or without Blinn-Phong and it's built on top of Gouraud which is smooth shading.

No. There's not much out there on BasicEffect. I learned a lot of it the hard way by just using it. I'll write another post to get into that.

This post has been edited by BBeck: 29 May 2015 - 12:05 PM


#12 BBeck

  • Here to help.

Reputation: 792
  • Posts: 1,886
  • Joined: 24-April 12

Re: Diffuse light stuck? How to just animate object?

Posted 29 May 2015 - 01:38 PM

So, BasicEffect is XNA's primary built in shader that allows you to get stuff done without learning HLSL. It's really a Blinn-Phong shader that does texturing. It's a real "workhorse" that almost has too many features.

If you look at all of its members, it's a pretty long list of settings that you can change.

You don't have to set most of them.

Here is some code that I've been working on as a tutorial:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Media;

namespace HLSLShadersDemo
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        Model ProfessorZombie;
        Matrix ProjectionMatrix;
        Matrix ViewMatrix;
        Matrix ZombiesWorldMatrix;
        Matrix[] ProfessorsBoneWorldMatrices;
        Vector3 CameraPostion;


        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }


        protected override void Initialize()
        {
            ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Adapter.CurrentDisplayMode.AspectRatio, 0.01f, 1000f);
            ZombiesWorldMatrix = Matrix.Identity;
            CameraPostion = new Vector3(0f, 1.68f, 1f);
            base.Initialize();
        }


        protected override void LoadContent()
        {
            ProfessorZombie = Content.Load<Model>("ProfessorZombie");
            ProfessorsBoneWorldMatrices = new  Matrix[ProfessorZombie.Bones.Count];
        }


        protected override void UnloadContent()
        {

        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            ViewMatrix = Matrix.CreateLookAt(CameraPostion, CameraPostion + Vector3.Forward, Vector3.Up);

            base.Update(gameTime);
        }


        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            ProfessorZombie.CopyAbsoluteBoneTransformsTo(ProfessorsBoneWorldMatrices); //Load the world matrices of every bone in the model into the array.
            foreach (ModelMesh Mesh in ProfessorZombie.Meshes)
            {
                foreach (BasicEffect Effect in Mesh.Effects)    //Every Mesh has a different world matrix that has to be passed to the shader and the shader here is stored as part of the mesh. The model object basically draws itself using the information stored in the model object.
                {
                    Effect.Projection = ProjectionMatrix;   //Set the camera's projection.
                    Effect.View = ViewMatrix;   //Set the camera position.

                    Effect.World = ProfessorsBoneWorldMatrices[Mesh.ParentBone.Index] * ZombiesWorldMatrix; //Each part's world matrix is relative to the parent part. 

                    Effect.EnableDefaultLighting(); //Little more detailed scene lighting.
                }
                Mesh.Draw();    //Draw this mesh and then go to the next mesh.
            }



            base.Draw(gameTime);
        }    

    }
}




It draws a zombie on the screen. The zombie is a .fbx model from Blender. There's quite a bit of stuff in here that is not really necessary for a lot of models. First of all, the built in model class pretty much has BasicEffect built into it and it knows how to draw itself with code like Mesh.Draw().

If you have a model with sub-meshes, like a car with wheels, or like this model where the eyeballs are actually separate meshes from the body, each part gets its own world matrix, which is why I call ProfessorZombie.CopyAbsoluteBoneTransformsTo(ProfessorsBoneWorldMatrices); in the code. It's also the reason for the foreach loops in the drawing. Notice that section of code uses BasicEffect. The inner loop sets the "effect" for each submesh in the model, which was set up so that completely different effects could be used for different parts of the model, although I've never seen anyone do that in XNA. There's a lot of cool stuff they built into XNA that no one ever uses.

So, since each part gets its own effect, I have to set the Projection, View, and World matrix for each part. The View and Projection matrices are the same for every part, but since they're stored with each part they have to be set to the same thing for each. Why in the world you would ever want them to be different for different parts I have no idea, since they are all parts of the same model. But having a separate world matrix for each part of the model allows them to be animated separately in what is called rigid animation.

Effect.World = ProfessorsBoneWorldMatrices[Mesh.ParentBone.Index] * ZombiesWorldMatrix; is setting the world matrix for every part to be a child of the ZombiesWorldMatrix which would allow me to move the whole model as one piece by moving that parent matrix. I'm basically getting the world matrix for each part and then just assigning it again to be the same thing. There's no animation happening in this code.

But other than using the 3 matrices with BasicEffect, the only other parameter/member that I set is EnableDefaultLighting();

Turning default lighting off, if I recall correctly, allows you to change most of the other parameters.
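For example, instead of calling EnableDefaultLighting() in the draw loop above, you could set a handful of the members below by hand; something roughly like this (a sketch; the colors and numbers are placeholders to experiment with):

Effect.LightingEnabled = true;                                                       // do lighting, but set the lights up myself
Effect.AmbientLightColor = new Vector3(0.15f, 0.15f, 0.2f);
Effect.DirectionalLight0.Enabled = true;
Effect.DirectionalLight0.Direction = Vector3.Normalize(new Vector3(1f, -1f, 0f));    // must be normalized
Effect.DirectionalLight0.DiffuseColor = new Vector3(0.9f, 0.9f, 0.8f);
Effect.DirectionalLight0.SpecularColor = new Vector3(0.3f, 0.3f, 0.3f);
Effect.SpecularPower = 32f;
Effect.PreferPerPixelLighting = true;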

The parameters/members are:

Quote

Alpha Gets or sets the material alpha which determines its transparency. Range is between 1 (fully opaque) and 0 (fully transparent).
AmbientLightColor Gets or sets the ambient color for a light, the range of color values is from 0 to 1.
CurrentTechnique (Inherited from Effect.)
DiffuseColor Gets or sets the diffuse color for a material, the range of color values is from 0 to 1.
DirectionalLight0 Gets the first directional light for this effect.
DirectionalLight1 Gets the second directional light for this effect.
DirectionalLight2 Gets the third directional light for this effect.
EmissiveColor Gets or sets the emissive color for a material, the range of color values is from 0 to 1.
FogColor Gets or sets the fog color, the range of color values is from 0 to 1.
FogEnabled Enables fog.
FogEnd Gets or sets the maximum z value for fog, which ranges from 0 to 1.
FogStart Gets or sets the minimum z value for fog, which ranges from 0 to 1.
GraphicsDevice (Inherited from GraphicsResource.)
IsDisposed (Inherited from GraphicsResource.)
LightingEnabled Enables lighting for this effect.
Name (Inherited from GraphicsResource.)
Parameters (Inherited from Effect.)
PreferPerPixelLighting Gets or sets a value indicating whether per-pixel lighting should be used if it is available. Per-pixel lighting is available if a graphics adapter supports Pixel Shader Model 2.0.
Projection Gets or sets the projection matrix.
SpecularColor Gets or sets the specular color for a material, the range of color values is from 0 to 1.
SpecularPower Gets or sets the specular power of this effect material.
Tag (Inherited from GraphicsResource.)
Techniques (Inherited from Effect.)
Texture Gets or sets a texture to be applied by this effect.
TextureEnabled Enables textures for this effect.
VertexColorEnabled Enables use of vertex colors for this effect.
View Gets or sets the view matrix.
World Gets or sets the world matrix.



Alpha - I believe this is transparency for whatever is currently being drawn as a whole. I don't think I've ever used this because you normally assign transparency to each vertex as part of its color. RGBA = Red, Green, Blue, Alpha transparency. Those 3 colors of light combine to make any color imaginable.

AmbientLightColor - Ambient light is light that everything in the scene receives. It's "supposed" to be the color of shadows in a daylight kind of lighting situation. It colors both the parts of the model in diffuse light and not in diffuse light. In other words, it colors everything in the scene equally.

CurrentTechnique - You probably won't use techniques at first. Techniques allow you to put multiple shaders in one shader file.

DiffuseColor - The color of the diffuse light. The diffuse light is the light I've talked about before where anything facing more than 90 degrees away from the direction of the light gets none of the light (although it will still get ambient light and maybe the color of the object).

DirectionalLight0 - I've never used more than 1 directional light but there are 2 more if you think you need them. Directional lights are your Diffuse lights. They are nothing more than a direction and a light color. So, everything in the scene gets lit up by them the same way. It basically simulates the way sunlight works. This doesn't work nearly so well for indoor lighting where the lighting is far more complex.

DirectionalLight1, DirectionalLight2 - The second and third directional lights, which work the same way as the first.

EmissiveColor - Emissive color basically ignores all the other lighting to simulate something that gives off its own light. A light bulb that's turned on might have an emissive color, which is the color of the light it's emitting. If something you draw is supposed to be the light source, you might want to use emissive color for it. I've never used it.

FogColor - Fog is not as easy to explain. Basically, it's simulated fog, a trick used in the old days of 3D graphics because computers couldn't draw very far into the distance. Say you're in a village but the computer is not powerful enough to draw more than one block of the village at a time: there's nothing on the screen for the next block until you get close, and then it suddenly pops up out of nowhere. So, they would put fog in the scene to hide that, and objects would start drawing just before they emerged from the fog. I don't see it used as much anymore, but it can still be useful. In my terrain example code on my website, I used fog to gray out trees in the distance.

It doesn't work like true fog. With true fog, I would not have been able to see the sky through fog that thick, but this fog only affects things that are drawn with it turned on. So, a lot of times you see the background normally while the object takes on the fog color, which just turns it into a silhouette. In a way, this fog is really another illusion.

FogEnabled - Turns the fog on and off. Pretty straightforward.

FogEnd - I think the fog starts at FogStart and reaches 100% at FogEnd. Anything farther away than FogEnd is going to be completely obscured by the fog effect.

FogStart - See above.

LightingEnabled - I'm not sure of the difference between this and EnableDefaultLighting(). I think this has to be turned on for the directional diffuse light and such to work.

PreferPerPixelLighting - Looks a little better graphically when turned on. Runs a little slower, but on modern computers you're unlikely to have a performance problem here.

Projection - The scene's projection matrix.
SpecularColor - The Blinn-Phong specular light can make that bright spot any color you like to maybe suggest the light it is reflecting has that color.

SpecularPower - Lets you control the size of the Blinn-Phong bright spot: the higher the power, the smaller and tighter the spot. A small spot makes the object look super shiny, almost wet; the bigger the spot gets, the more dull and matte-finished the object appears.

Texture - Model files generally have the name of the texture file stored in them. But you can assign a texture to the model here that is not necessarily the same one it was created with.

TextureEnabled - Must be set to false if there is no texture. Must be set to true in order to see the texture.

VertexColorEnabled - Each vertex can have a color. But if an object uses a texture, you may or may not want to use the vertex color since it may be largely redundant. There are a few graphics tricks you can do with this on, using the colors stored in a vertex, along with a texture. But most of the time you can turn this off if you are using textures.

View - The scene's camera.
World - The world matrix of the object you are about to draw.
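
Putting several of those together, here's a rough sketch of how you might configure BasicEffect by hand instead of calling EnableDefaultLighting() (the specific color values and numbers are just made-up examples, not anything from my real code):

BasicEffect Effect = new BasicEffect(GraphicsDevice);

Effect.LightingEnabled = true;                               // Has to be on for the lights below to do anything.
Effect.PreferPerPixelLighting = true;                        // Looks a little nicer, runs a little slower.
Effect.AmbientLightColor = new Vector3(0.2f, 0.2f, 0.25f);   // Fills in the "shadow" side of objects.

Effect.DirectionalLight0.Enabled = true;
Effect.DirectionalLight0.Direction = Vector3.Normalize(new Vector3(1f, -1f, 0f));  // Direction the light travels in.
Effect.DirectionalLight0.DiffuseColor = new Vector3(1f, 1f, 1f);                   // The directional "sunlight" color.
Effect.DirectionalLight0.SpecularColor = new Vector3(1f, 1f, 1f);                  // Color of the shiny highlight.
Effect.SpecularPower = 32f;                                  // Higher power = smaller, tighter, shinier spot.

Effect.TextureEnabled = false;                               // No texture on this one...
Effect.VertexColorEnabled = true;                            // ...so use the colors stored in the vertices instead.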


BasicEffect is pretty straightforward once you get the difference between diffuse light and ambient light, which I think are terrible names, but they are the industry standard for Blinn-Phong shading. The directional light, defined by a direction vector, is the "diffuse" light. Here's one to confuse you: ambient light is diffuse and diffuse light is ambient. Both words basically apply to both types of light, so don't try to make sense of the names. Diffuse light is your directional light that basically simulates something like sunlight: a very large light source that covers the entire scene. By itself, the sides of objects facing away from the light would get no light and presumably would be black and unseen against a black background.

You could maybe call the side facing away "in shadow", but there really are no shadows with this type of lighting: objects will not cast shadows on other objects, and they will not cast shadows on themselves. In real life, though, it's pretty rare for a shadow to be pitch black. You can normally see into shadows and see the objects in them; they're not black, just darker than the lit area, partially because light reflects off of surfaces.

If you go outside and look at the shadow cast by a building, the sunlight cannot shine into that shadow, yet it's well lit compared to night time. Where does the light come from? Mostly from the blue sky casting blue light down into the shadowed area, as well as light that reflects off everything nearby, such as walls, which can influence the color inside the shadow. This is why shadows are often thought of as slightly blue.

Anyway, ambient light is supposed to fill in those "shadow" areas (the back sides of objects not lit by the direct diffuse light). For an outdoor scene, I would recommend setting the diffuse light to a color that is almost white with a hint of yellow, to represent sunlight, and the ambient light to a white that is just a little blue. The shadow side gets darker the darker you make the ambient light, so don't make it too bright or it won't look like shadow. I would gray it out a bit as needed by reducing the red, green, and blue values by an equal amount: all grays have equal red, green, and blue, so to darken a color you reduce all 3 by the same amount. If you change them by unequal amounts you will also change the hue, not just darken it.
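
So, as a rough example (again, made-up numbers, tweak them to taste):

// Almost white with a hint of yellow for the sunlight, and a darker, slightly blue gray for ambient.
Effect.DirectionalLight0.DiffuseColor = new Vector3(1.0f, 0.97f, 0.88f);
Effect.AmbientLightColor = new Vector3(0.30f, 0.32f, 0.38f);

// To darken a color without changing its hue, subtract the same amount from red, green, and blue.
Effect.AmbientLightColor = Effect.AmbientLightColor - new Vector3(0.1f, 0.1f, 0.1f);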

You might want to know that everything is drawn with a vertex buffer. XNA has a Model class that will draw 3D models from .fbx files for you, so it contains a vertex buffer that you never see. If you want to draw stuff procedurally, like a terrain for example, you have to build your own vertex buffer and send it to the shader.

My Textured Grid tutorial on my website is an XNA example that builds a vertex buffer and draws it using BasicEffect. The TexturedGridComponent game component class (an intermediate topic in and of itself explained in my first tutorial) looks something like this:

 public class TexturedGridComponent : Microsoft.Xna.Framework.DrawableGameComponent  //Changed to DrawableGameComponent after adding game component.
    {
        BasicEffect Effect;
        RasterizerState RS;
        SamplerState SS;
        Matrix ProjectionMatrix;
        Matrix ViewMatrix;
        float[,] HeightMap;
        VertexPositionTexture[] GridVertices;
        VertexBuffer GridVertexBuffer;
        int[] GridIndices;
        IndexBuffer GridIndexBuffer;
        Texture2D HeightMapImage;   //Photo containing the HeightMap data.
        Texture2D TerrainTexture;
        const int GridSize = 512;   //The grid is always square. 
        const int QuadSize = 2;     //Size of a grid square in meters.
        const int OuterEdge = 1;
        const float TerrainHeight = 220f;


...


  public override void Initialize()
        {
            Effect = new BasicEffect(Game.GraphicsDevice);
            Effect.TextureEnabled = true;
            Effect.VertexColorEnabled = false;
            Effect.LightingEnabled = false;
            Effect.PreferPerPixelLighting = true;
            Effect.FogEnabled = true;
            Effect.FogColor = Color.Black.ToVector3();
            Effect.FogStart = 20f;
            Effect.FogEnd = 110f;
            Effect.World = Matrix.Identity;

            RS = new RasterizerState();
            RS.FillMode = FillMode.Solid;
            RS.MultiSampleAntiAlias = true;
            Game.GraphicsDevice.RasterizerState = RS;

            SS = new SamplerState();
            SS.Filter = TextureFilter.Anisotropic;  //http://blogs.msdn.com/b/shawnhar/archive/2009/09/14/texture-filtering-mipmaps.aspx
            SS.MaxAnisotropy = 16;  //http://blogs.msdn.com/b/shawnhar/archive/2009/09/24/texture-filtering-anisotropy.aspx
            Game.GraphicsDevice.SamplerStates[0] = SS;


            HeightMap = new float[GridSize + OuterEdge, GridSize + OuterEdge];     //Instantiate the HeightMap array.


            base.Initialize();
        }


        protected override void LoadContent()
        {
            HeightMapImage = Game.Content.Load<Texture2D>("HeightMap512");
            TerrainTexture = Game.Content.Load<Texture2D>("GridTexture");
            Effect.Texture = TerrainTexture;
            Effect.TextureEnabled = true;
            LoadTerrainHeightData();
            DefineGrid();


            base.LoadContent();
        }
...

 public override void Draw(GameTime gameTime)
        {

            Effect.View = ViewMatrix;
            Effect.Projection = ProjectionMatrix;


            foreach (EffectPass pass in Effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                //GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionTexture>(PrimitiveType.TriangleStrip, GridVertices, 0, GridVertices.Length, GridIndices, 0, ((GridSize + 2) * (GridSize) * 2) - 2, VertexPositionTexture.VertexDeclaration);
                GraphicsDevice.Indices = GridIndexBuffer;
                GraphicsDevice.SetVertexBuffer(GridVertexBuffer);
                GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleStrip, 0, 0, GridVertices.Length, 0, ((GridSize + 2) * (GridSize) * 2) - 2);
            }


            base.Draw(gameTime);
        }




Anyway, maybe that will help with BasicEffect a bit. Mostly you just need to pass it the world, view, and projection matrices, and then you can set the other parameters if you like. There are also different ways to call and use it. If you're using the built-in Model class, you have to set up BasicEffect for each piece of the model, but the model knows how to draw itself.

If you're going to tell the graphics card to draw something directly, you'll use a vertex buffer, call something like GraphicsDevice.DrawIndexedPrimitives() (which also requires an index buffer), and apply the pass yourself. In that case you aren't looping through pieces of a model, since it's probably all one piece, so you just set the parameters of BasicEffect, which are primarily the View and Projection matrices for the camera. The World matrix can often be set to an identity matrix, because the vertex positions of something like a terrain are fixed in place: you don't really need a world matrix for something that is created in its final position and never moves.

Anyway, maybe that will help you get started with BasicEffect. I think there's only one code tutorial on my website that doesn't use BasicEffect: the terrain example with HLSL, where I do a post-processing blur effect to make the terrain "glow". Some of the examples (on the example page; unlike the tutorials, they weren't designed to teach and may not be well commented) use HLSL a little, like the water example. But a lot of that probably uses BasicEffect as well.

I really tried to use BasicEffect as much as I could in my tutorials, because I think it's far easier for people to use than trying to understand HLSL. And there's so much to learn starting out that if you can take that off the plate for the moment, I think it really helps. I don't know why so many books and tutorials throw everyone into the deep end with HLSL right from the beginning (I'm talking to you, Riemer Grootjans! Just kidding. I mean, I think he should have made better use of BasicEffect, but I love his books, and his XNA 3.1 book is one of the best books on game programming I have. I learned a whole lot from his books. Not to mention he has some pretty good HLSL examples if you're going to do HLSL.)

I didn't even really start moving more towards HLSL until I started doing DX11 where there is no BasicEffect and you have to write your own shaders for everything.

RB has got a tutorial that talks about BasicEffect vs. HLSL.

If you're going to do XNA, you will want to bookmark Shawn Hargreaves' site.

There are actually, I believe, 5 effect classes built into XNA, although I've only used BasicEffect and AlphaTestEffect (for transparency). There are also DualTextureEffect, EnvironmentMapEffect, and SkinnedEffect. I never really used them, largely because no one does, so I never found much information on them. I think DualTextureEffect can be used for things like shadow mapping, and EnvironmentMapEffect can be used for reflections with cube mapping. SkinnedEffect was intended for skinned animation, which Microsoft did not build into XNA (although there's a tutorial from Microsoft on how to extend the Content Pipeline and the XNA Model class yourself to do skinned animation).
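
For what it's worth, AlphaTestEffect gets used pretty much the same way as BasicEffect. Here's a minimal sketch, assuming you just want to skip drawing pixels that are mostly transparent (SomeTextureWithAlpha is just a stand-in name for whatever texture you load):

AlphaTestEffect ATEffect = new AlphaTestEffect(GraphicsDevice);
ATEffect.World = Matrix.Identity;
ATEffect.View = ViewMatrix;
ATEffect.Projection = ProjectionMatrix;
ATEffect.Texture = SomeTextureWithAlpha;          // A texture that has an alpha channel.
ATEffect.AlphaFunction = CompareFunction.Greater; // Only draw pixels whose alpha is greater than...
ATEffect.ReferenceAlpha = 128;                    // ...this value (0-255). Everything else is discarded.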

This post has been edited by BBeck: 29 May 2015 - 01:45 PM


#13 thelastofus

Re: Diffuse light stuck? How to just animate object?

Posted 29 May 2015 - 08:09 PM

Quote

You might want to know that everything is drawn with a vertex buffer. XNA has a Model class that will draw 3D models from .fbx files for you, so it contains a vertex buffer that you never see. If you want to draw stuff procedurally, like a terrain for example, you have to build your own vertex buffer and send it to the shader.


I see. So that's why I saw some tutorials that get the vertices from the .fbx file and set them on the graphics device's vertex buffer, while others don't bother and just go straight to using mesh.Draw(). I tried both and it doesn't make much of a difference.


Quote

No. There's not much out there on BasicEffect. I learned a lot of it the hard way by just using it

I blame MS for naming it BasicEffect. Now people think it's just so basic, why not go learn the HLSL basics instead? :D

Quote

The vector and matrix stuff is a lot to take in if you have never been exposed to it before. Vectors were one of the reasons about 75% of my trigonometry class dropped out before the end of the semester. It's intimidating. But it's really not that hard once you "get" it. You could probably fit everything there is to know about vectors in game programming on a postcard. My video pretty much covers all of it, which is why it's kind of long.

For me, the reason I didn't even bother listening to my math teacher is that I didn't see any practical application for it. I mean, it's not like you're walking along, suddenly see something, and start wondering about the math behind it. I only learned its importance when I got into game dev, which is why I regret not bothering to learn math earlier.

This YouTube video is interesting. Can I do that with BasicEffect?

#14 thelastofus

Re: Diffuse light stuck? How to just animate object?

Posted 30 May 2015 - 02:59 AM

Just finished watching the vector and matrix video. I will watch it again from time to time, since I can't memorize all of it in one go.

One thing about matrices: what's the point of the w component? I mean, if it's 0 it's a direction and if it's 1 it's a position. But then why did you add a formula to the w component's row? Shouldn't that all be 0? I don't understand the use of it. Like when you said to think of it as a box.

So I'm thinking: say I have a model part like a hand. This hand has an invisible box on it, so the hand is inside this invisible box. This box is composed of all the math you could use for transforming the model (rotation, scaling, translation). So all of these transformations are just vector computations, and they're happening in 3D space. So for this w component, why is there an option for it to be either a direction or a position? I just don't get this w thing.
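
To show what I mean, here's roughly how I picture the two cases (just using XNA's Vector3.Transform and Vector3.TransformNormal as an example):

Matrix translate = Matrix.CreateTranslation(10, 0, 0);

// Treated as a position (w = 1): the translation part of the matrix applies.
Vector3 position = Vector3.Transform(new Vector3(1, 2, 3), translate);        // gives (11, 2, 3)

// Treated as a direction (w = 0): the translation part is ignored.
Vector3 direction = Vector3.TransformNormal(new Vector3(1, 2, 3), translate); // gives (1, 2, 3)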


Oh and I read your matrix article

Quote

They don't realize that a rotation matrix says "rotate the vertices by this angle around the origin" and that that results in a rotation only when the object is at the origin. They also may not realize that there's nothing wrong with popping the model back to the origin, applying the rotation or scale, and then popping it right back to where it was between Draw frames. Since it happens between Draw frames it won't be seen on screen.


Are you talking about the back buffer on this one?

#15 thelastofus

Re: Diffuse light stuck? How to just animate object?

Posted 30 May 2015 - 06:55 AM

Ugh, I can't edit my post. Sorry for posting so many times.
