Texture Cubes
Also known as cube maps, a texture cube is a set of six 2D texture maps corresponding to the faces of an axis-aligned cube. They are used when lighting a scene to project environment colors onto it.
There's a Photoshop plugin that lets you create cube maps. Lay the six faces out in a horizontal strip in the order +X, -X, +Y, -Y, +Z, -Z. Go to File > Save a Copy and change the file type to D3D/DDS. A dialog will pop up asking how you want to configure the export; use these settings:
- 8.8.8.8 ARGB 32 bpp | unsigned
- no mipmaps
- cube map
A cube map acts as if its center is at the origin; stretching will occur if you move away from that center. With cube maps we're no longer dealing with just two (u, v) coordinates. We're using x, y, and z: conceptually, a ray is cast from the origin of the world out to the surrounding sphere, and the resulting xyz direction is translated into the cube map's coordinates.
Structures for the cube map: the VS_OUTPUT struct includes a texture coordinate that's a float3 instead of a float2.
/******** Data Structures ********/
struct VS_INPUT
{
    float4 ObjectPosition : POSITION;
};

struct VS_OUTPUT
{
    float4 Position : SV_Position;
    float3 TextureCoordinate : TEXCOORD; // float3, not float2
};
Vertex Shader:
/****************** Vertex Shader *********************/
VS_OUTPUT vertex_shader(VS_INPUT IN)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    OUT.Position = mul(mul(mul(IN.ObjectPosition, World), View), Projection);
    OUT.TextureCoordinate = IN.ObjectPosition.xyz; // a direction, not a UV position

    return OUT;
}
Pixel Shader: because SkyBoxTexture is a cube map, Sample must be given a float3 direction; a float2 will not work.
/*************** Pixel Shader ***********************/
float4 pixel_shader(VS_OUTPUT IN) : SV_Target
{
    return SkyBoxTexture.Sample(TrilinearSampler, IN.TextureCoordinate);
}
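The shaders above reference SkyBoxTexture, TrilinearSampler, and the World/View/Projection matrices without showing their declarations. A minimal sketch of what those might look like (only the names come from the code above; the cbuffer layout and sampler settings are assumptions):
/******** Resources (sketch) ********/
cbuffer CBufferPerObject
{
    float4x4 World;
    float4x4 View;
    float4x4 Projection;
}

TextureCube SkyBoxTexture;    // the cube map itself

SamplerState TrilinearSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};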
Environment Mapping/Mirror
AKA reflection mapping. It approximates reflective surfaces, such as chrome. You need to compute the reflection vector of the view ray hitting the surface, which depends on the view direction and the surface normal.
The view ray hits the surface and reflects; the reflected vector continues out and hits the cube-mapped environment. The color it hits there is brought back to the surface and used as the reflection.
Formula (I is the incident vector, N is the surface normal):
R = I - 2 * N * (I · N)
There is a reflect intrinsic function in HLSL that computes the reflection vector for you.
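For reference, a small sketch showing that the intrinsic matches the formula above (my_reflect is a hypothetical helper, not part of HLSL):
// Equivalent to the HLSL intrinsic reflect(I, N), where I is the incident
// vector and N is the (normalized) surface normal.
float3 my_reflect(float3 I, float3 N)
{
    return I - 2.0f * N * dot(I, N);
}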
Data Structs
/******** Data Structures ********/
struct VS_INPUT
{
    float4 ObjectPosition : POSITION;
    float2 TextureCoordinate : TEXCOORD;
    float3 Normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 Position : SV_Position;
    float2 TextureCoordinate : TEXCOORD;
    float3 ReflectionVector : TEXCOORD1;
};
Vertex Shader, here we're using the reflect function:
/****************** Vertex Shader *********************/
VS_OUTPUT vertex_shader(VS_INPUT IN)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    OUT.Position = mul(mul(mul(IN.ObjectPosition, World), View), Projection);
    OUT.TextureCoordinate = get_corrected_texture_coordinates(IN.TextureCoordinate);

    float3 worldPosition = mul(IN.ObjectPosition, World).xyz;        // world-space position
    float3 incident = normalize(worldPosition - CameraPosition);     // view direction: camera -> surface
    float3 normal = normalize(mul(float4(IN.Normal, 0), World).xyz); // w = 0: transform the direction only
    OUT.ReflectionVector = reflect(incident, normal);                // HLSL intrinsic function

    return OUT;
}
Pixel Shader: when the cube map is used as an environment map, it is sampled through the EnvironmentMap texture using the reflection vector:
/*************** Pixel Shader ***********************/
float4 pixel_shader(VS_OUTPUT IN) : SV_Target
{
    float4 OUT = (float4)0;

    float4 color = ColorTexture.Sample(TrilinearSampler, IN.TextureCoordinate);
    float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);
    float3 environment = EnvironmentMap.Sample(TrilinearSampler, IN.ReflectionVector).rgb;
    float3 reflection = get_vector_color_contribution(EnvColor, environment);

    OUT.rgb = lerp(ambient, reflection, ReflectionAmount);
    OUT.a = color.a;

    return OUT;
}
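This shader references several constants and a helper that aren't shown in these notes (AmbientColor, EnvColor, ReflectionAmount, CameraPosition, ColorTexture, EnvironmentMap, and get_vector_color_contribution). A minimal sketch of plausible declarations, assuming the common pattern of storing a color's intensity in its alpha channel:
/******** Resources and Helpers (sketch) ********/
cbuffer CBufferPerFrame
{
    float4 AmbientColor;     // rgb = color, a = intensity
    float4 EnvColor;         // rgb = environment tint, a = intensity
    float ReflectionAmount;  // 0 = no reflection, 1 = pure mirror
    float3 CameraPosition;
}

Texture2D ColorTexture;      // the object's regular diffuse texture
TextureCube EnvironmentMap;  // the cube map sampled with the reflection vector

// Scales a color by a light's rgb and the intensity stored in its alpha.
float3 get_vector_color_contribution(float4 light, float3 color)
{
    return light.rgb * light.a * color;
}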
[Figure: the reflection of the environment map is visible on the object]
Many environment maps don't match the object or scene. The solution is dynamic environment maps: each frame, place a camera at the object with a 90-degree FOV and render the scene along each of the six orthogonal directions. This is very expensive and very slow, so only use it for the primary object and render at a lower resolution.
Fog
Objects fade into the background as their distance from the camera increases. You need to know where the fog begins (fog start), over what distance it fully obscures objects (fog range), and the color contributed by the fog.
V is the vector from the camera to the surface; its length is the distance between them.
FogAmount = (|V| - FogStart) / FogRange
FinalColor = lerp(litColor, FogColor, FogAmount)
/*********** Utility Function ****************/
float get_fog_amount(float3 viewDirection, float fogStart, float fogRange)
{
    return saturate((length(viewDirection) - fogStart) / fogRange);
}
This gives us a ratio (clamped to [0, 1]) of how much fog there is, which we can lerp with. length(viewDirection) gives the scalar distance from the camera to the object's surface. Aside from this, it's just Phong lighting again.
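A minimal sketch of how that fog amount might be applied at the end of a pixel shader; FogColor, FogStart, and FogRange come from the formulas above, while the IN.ViewDirection field (assumed to hold CameraPosition - worldPosition from the vertex shader) and the constant buffer layout are assumptions:
/*************** Fog Applied in the Pixel Shader (sketch) ***********************/
float4 pixel_shader(VS_OUTPUT IN) : SV_Target
{
    // ... regular Phong lighting computed here ...
    float3 litColor = float3(1.0f, 1.0f, 1.0f); // stand-in for the full lighting result

    float fogAmount = get_fog_amount(IN.ViewDirection, FogStart, FogRange);
    return float4(lerp(litColor, FogColor, fogAmount), 1.0f);
}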
Color Blending
Color blending happens when the frame buffer already has a color value and a new pixel color must be combined with it. HLSL BlendState objects behave similarly to rasterizer states. The new color is the source; the pre-existing color is the destination.
Output = (Source × SourceFactor) BlendOperator (Destination × DestinationFactor)
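As a concrete example, a minimal sketch of a BlendState for standard alpha blending, using the Effects-framework state-object syntax (the name AlphaBlendingState is assumed, not from the original notes):
/*************** Blend State (sketch) ***********************/
BlendState AlphaBlendingState
{
    BlendEnable[0] = true;
    SrcBlend = SRC_ALPHA;      // source factor: the new pixel's alpha
    DestBlend = INV_SRC_ALPHA; // destination factor: 1 - source alpha
    BlendOp = ADD;             // operator that combines the two terms
};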