Monday, 31 March 2025

Illumination Models

 Basic Lighting Models

The Three Illumination Models:
  • Ambient: the color of objects in the scene with no direct light on them (i.e. when in shadow). It's not black; it still has some hue associated with it. We don't want pure black: if anything goes to (0, 0, 0), information is clipped off and lost. 
  • Diffuse: the color of an object when a light is shining on it.
  • Specular: the hot spot on the object, the reflection of the light source on the surface. (PBR handles this differently.)
  • Combined (Phong): the final result, with all three lighting models added together. 

Constant Buffer (cbuffer): a block of memory shared with the shaders, typically updated once per frame or per object.
    The view direction is the camera position minus the object position. The light direction is the light position minus the object position.
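In symbols (using $P$ for the object's world position, $C$ for the camera position, and $L_{pos}$ for the light position):

```latex
\vec{V} = \mathrm{normalize}(C - P)
\qquad
\vec{L} = \mathrm{normalize}(L_{pos} - P)
```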

Ambient Lighting


/********** Constants ********************/
cbuffer CBufferPerFrame
{
  float4 AmbientColor;
  float4 LightColor;
  float3 LightDirection;
  float3 CameraPosition;
}

cbuffer CBufferPerObject
{
  float4 SpecularColor;
  float SpecularPower;
}

Pixel shader for ambient color:

float4 ambientPixelShader(VS_OUTPUT IN): SV_Target
{
  float4 OUT = (float4)0;
  
  OUT = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  OUT.rgb *= AmbientColor.rgb * AmbientColor.a;
  
  return OUT;
}
    The ambient color is effectively the color of the shadows. This is similar to last week's shader, except we added the extra line multiplying by the ambient color's RGB and its alpha. The alpha is multiplied in separately because transparent things let less light through and so appear darker. Ambient is ultimately just an offset for your light, pushing it up a bit to make sure nothing is totally black. (The vertex shader doesn't do anything special.)
 

Diffuse Lighting

    Diffuse is the simplest shading model. Where ambient light is flat and provides the minimum amount of color everywhere, diffuse light uses the object's normal to determine the gradient of how lit the object appears. If the normal points directly at the light, the angle between the light and the normal is 0 degrees; cosine of 0 is 1, so that point receives the greatest amount of light. If the normal is orthogonal to the light, the angle between them is 90 degrees and the point receives the least amount of light (cosine of 90 is 0). 
    Lambert's cosine law: "Brightness of a surface is directly proportional to the cosine of the angle between the light vector and the surface normal."
    Light bounces irregularly off the 'microfacets' in the material. 
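Lambert's law as it is used in the shaders below (the max clamp corresponds to the `if (n_dot_l > 0)` branch):

```latex
I_{diffuse} = \max(\vec{N}\cdot\vec{L},\,0)\;\times\;C_{light}\;\times\;C_{surface}
```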

Dot Product Reminder

    Lambert's cosine law uses the dot product to find the cosine of the angle between the light vector and the surface normal. 
    *Only true when the light and normal vectors are unit vectors. 
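In symbols (this is why both vectors must be unit length):

```latex
\vec{a}\cdot\vec{b} = \lVert\vec{a}\rVert\,\lVert\vec{b}\rVert\cos\theta
\quad\Rightarrow\quad
\vec{a}\cdot\vec{b} = \cos\theta
\;\text{ when }\;
\lVert\vec{a}\rVert = \lVert\vec{b}\rVert = 1
```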

    Assuming a directional light: a light source infinitely far away, so all rays reaching the surface are parallel and the light is just a direction vector. D = L - O.
Vertex Shader Lambert 
VS_OUTPUT lambertVertexShader(VS_INPUT IN)
{
  VS_OUTPUT OUT = (VS_OUTPUT)0;
  
  OUT.Position = mul(mul(mul(IN.ObjectPosition, World), View), Projection);    // Flip the coordinates if in HLSL
  OUT.TextureCoordinate = get_corrected_texture_coordinates(IN.TextureCoordinate);
  
  OUT.Normal = normalize(mul(float4(IN.Normal, 0.0), World).xyz); // remember that a 0 at the end makes it a direction, 1 is a point
  OUT.LightDirection = normalize(-LightDirection);
  
  return OUT;
}
Pixel Shader Lambert 
float4 lambertPixelShader(VS_OUTPUT IN): SV_Target
{
  float4 OUT = (float4)0;
  
  float3 normal = normalize(IN.Normal);  // renormalize: interpolation during rasterization changes the vector's length
  float3 lightDirection = normalize(IN.LightDirection);
  float n_dot_l = dot(lightDirection, normal);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float3 ambient = AmbientColor.rgb * AmbientColor.a * color.rgb;
  float3 diffuse = (float3)0;
  
  if (n_dot_l > 0)
  {
    diffuse = LightColor.rgb * LightColor.a * n_dot_l * color.rgb;
  }
  
  OUT.rgb = ambient + diffuse;
  OUT.a = color.a;
  
  return OUT;
}

Specular (Phong Shading)


    The reflection vector makes the same angle with the surface normal as the light vector, just mirrored to the other side. You take the dot product of the view direction and the reflection vector to find how bright the hot spot is. 
    The Blinn-Phong shader is the optimized version.
    R is the reflection vector, V is the view direction, and the specular power controls the size of the highlight. 
    The specular component is added to ambient and diffuse.
    You now need the camera position in CBufferPerFrame in order to calculate the view and reflection vectors. 
    You also need CBufferPerObject, because different objects have different specular colors and powers. 
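The reflection-vector formula and the Phong specular term, written out (the first expression is exactly what `2*n_dot_l*normal - lightDirection` computes in the pixel shader below, with $s$ the specular power):

```latex
\vec{R} = 2(\vec{N}\cdot\vec{L})\,\vec{N} - \vec{L}
\qquad
I_{specular} = (\vec{R}\cdot\vec{V})^{s}
```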

Vertex Shader Phong

VS_OUTPUT phongVertexShader(VS_INPUT IN)
{
  VS_OUTPUT OUT = (VS_OUTPUT)0;
  
  OUT.Position = mul(mul(mul(IN.ObjectPosition, World), View), Projection);    // Flip the coordinates if in HLSL
  OUT.TextureCoordinate = get_corrected_texture_coordinates(IN.TextureCoordinate);
  
  OUT.Normal = normalize(mul(float4(IN.Normal, 0.0), World).xyz);
  OUT.LightDirection = normalize(-LightDirection);
  
  float3 worldPosition = mul(IN.ObjectPosition, World).xyz;
  OUT.ViewDirection = normalize(CameraPosition - worldPosition);
  
  return OUT;
}

Pixel Shader Phong

float4 phongPixelShader(VS_OUTPUT IN): SV_Target
{
  float4 OUT = (float4)0;
  
  float3 normal = normalize(IN.Normal);
  float3 lightDirection = normalize(IN.LightDirection);
  float n_dot_l = dot(lightDirection, normal);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float3 ambient = AmbientColor.rgb * AmbientColor.a * color.rgb;
  float3 diffuse = (float3)0;
  
  float3 viewDirection = normalize(IN.ViewDirection);
  float3 specular = (float3)0;
  
  if (n_dot_l > 0)
  {
    diffuse = LightColor.rgb * LightColor.a * n_dot_l * color.rgb;
    float3 reflectionVector = normalize(2*n_dot_l*normal-lightDirection);
    
    float specDot = dot(reflectionVector, viewDirection);
    float specSat = saturate(specDot);
    float specPow = pow(specSat, SpecularPower);
    float specMin = min(specPow, color.a);
    specular = SpecularColor.rgb * SpecularColor.a * specMin;
  }
  
  OUT.rgb = ambient + diffuse + specular;
  OUT.a = color.a;
  
  return OUT;
}

Blinn Phong Modification

    You can replace the reflection vector with a 'half-vector': the normalized sum of the light direction and the view direction.
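The half-vector and the Blinn-Phong specular term it replaces the reflection term with:

```latex
\vec{H} = \frac{\vec{L} + \vec{V}}{\lVert\vec{L} + \vec{V}\rVert}
\qquad
I_{specular} = (\vec{N}\cdot\vec{H})^{s}
```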

Pixel Shader BlinnPhong
float4 blinnPhongPixelShader(VS_OUTPUT IN): SV_Target
{
  float4 OUT = (float4)0;
  
  float3 normal = normalize(IN.Normal);
  float3 lightDirection = normalize(IN.LightDirection);
  float n_dot_l = dot(lightDirection, normal);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float3 ambient = AmbientColor.rgb * AmbientColor.a * color.rgb;
  float3 diffuse = (float3)0;
  
  float3 viewDirection = normalize(IN.ViewDirection);
  float3 specular = (float3)0;
  
  if (n_dot_l > 0)
  {
    diffuse = LightColor.rgb * LightColor.a * n_dot_l * color.rgb;
    
    float3 halfVector = normalize(lightDirection + viewDirection);  // use the normalized view direction computed above
    
    float specDot = dot(normal, halfVector);
    float specSat = saturate(specDot);
    float specPow = pow(specSat, SpecularPower);
    float specMin = min(specPow, color.a);
    specular = SpecularColor.rgb * SpecularColor.a * specMin;
  }
  
  OUT.rgb = ambient + diffuse + specular;
  OUT.a = color.a;
  
  return OUT;
}
    Switching between the two, you'll notice that the Blinn-Phong specular hot spot is a bit bigger than the Phong one (the N·H angle is roughly half the R·V angle, so a noticeably larger specular power is needed to match). You can adjust the specular power to change the highlight size. 

HLSL intrinsic
Since people do this so often, HLSL has an intrinsic, lit(), that calculates the lighting coefficients for you, taking in the light dot product (N·L), the half-vector dot product (N·H), and the specular power. 
Pixel Shader Blinn-Phong intrinsics
float4 bfIntrinsicsPixelShader(VS_OUTPUT IN): SV_Target
{
  float4 OUT = (float4)0;
  
  float3 normal = normalize(IN.Normal);
  float3 lightDirection = normalize(IN.LightDirection);
  float3 viewDirection = normalize(IN.ViewDirection);
  float n_dot_l = dot(lightDirection, normal);
  float3 halfVector = normalize(lightDirection + viewDirection);
  float n_dot_h = dot(normal, halfVector);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float4 lightCoefficients = lit(n_dot_l, n_dot_h, SpecularPower);
  
  float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);
  float3 diffuse = get_vector_color_contribution(LightColor, lightCoefficients.y*color.rgb);
  float3 specular = get_scalar_color_contribution(SpecularColor, min(lightCoefficients.z, color.w));
  
  OUT.rgb = ambient + diffuse + specular;
  OUT.a = color.a;
  
  return OUT;
}
    lit() gives you back a float4: the y component holds the diffuse coefficient and the z component holds the specular coefficient (x is the ambient coefficient, always 1, and w is also always 1). 
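A minimal Python sketch of what `lit` computes, purely as an illustration of the returned components (this follows the documented clamping behavior of the intrinsic; it is not HLSL):

```python
def lit(n_dot_l, n_dot_h, specular_power):
    """Emulates the HLSL lit() intrinsic: returns (ambient, diffuse, specular, 1)."""
    ambient = 1.0
    diffuse = max(n_dot_l, 0.0)
    # The specular coefficient is zeroed when the surface faces away from the light.
    if n_dot_l < 0.0 or n_dot_h < 0.0:
        specular = 0.0
    else:
        specular = max(n_dot_h, 0.0) ** specular_power
    return (ambient, diffuse, specular, 1.0)

print(lit(0.5, 0.8, 8))   # y carries the diffuse coefficient, z the specular
print(lit(-0.2, 0.8, 8))  # back-facing surface: diffuse and specular are both 0
```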
    The get_vector_color_contribution and get_scalar_color_contribution functions come from the 'Common.fxh' file included in the folder (we have a #include "Common.fxh" at the top of the file). 
#ifndef _COMMON_FXH  //if not defined, define it
#define _COMMON_FXH

/**********Resources****************/

#define FLIP_TEXTURE_Y  1

/* Globals to be defined by user */
matrix World;
matrix View;
matrix Projection;

/********** Utility Functions ************/
float2 get_corrected_texture_coordinates(float2 textureCoordinate)
{
  #if FLIP_TEXTURE_Y
    return float2(textureCoordinate.x, 1.0 - textureCoordinate.y);
  #else
    return textureCoordinate;
  #endif
}

float3 get_vector_color_contribution(float4 light, float3 color)
{
  return light.rgb * light.a * color;
}

float3 get_scalar_color_contribution(float4 light, float color)
{
  return light.rgb * light.a * color;
}

#endif

Types of Light

Point Light

    For a point light, the light's position is what matters. It's your basic lightbulb, giving off rays in all directions. It deals with an attenuation radius: how the light falls off. The attenuation curve is usually inverse-square. 
    Changing the radius of a point light changes the attenuation curve: a higher radius means a less drastic change in light across the falloff. 

    Light attenuation is computed from the distance to the light relative to the light's radius. 
    Typically the attenuation value is stored in the w component of the light direction vector.
    The vertex shader is similar to what we've done before, except this time to get the light direction we subtract the world position from the light position and normalize it (we don't need to negate it like in the directional case). The attenuation ratio is then the distance to the light divided by the light radius.
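The attenuation equation, written to match what the vertex shader computes (distance to the light over the light's radius, clamped to [0, 1]):

```latex
att = \mathrm{saturate}\!\left(1 - \frac{\lVert L_{pos} - P \rVert}{r}\right)
```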
/*************** Vertex Shader **********/
VS_OUTPUT pointVertexShader(VS_INPUT IN)
{
  VS_OUTPUT OUT = (VS_OUTPUT)0;
  matrix WorldViewProjection = mul(mul(World, View), Projection);
  
  OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
  OUT.TextureCoordinate = get_corrected_texture_coordinates(IN.TextureCoordinate);
  OUT.Normal = normalize(mul(float4(IN.Normal, 0), World).xyz);
  
  float3 worldPosition = mul(IN.ObjectPosition, World).xyz;
  float3 lightDirection = LightPosition - worldPosition;
  OUT.LightDirection.xyz = normalize(lightDirection);
  
  float attenuationRatio = length(lightDirection)/LightRadius;  // use the unnormalized vector; the normalized one always has length 1
  OUT.LightDirection.w = saturate(1.0 - attenuationRatio);
  OUT.ViewDirection = normalize(CameraPosition - worldPosition);
  
  return OUT;
}
 
    For the pixel shader, using the lit function, you use the y value for the diffuse and the z value for the specular. You don't need the x value (the ambient coefficient, always 1) or the w value (also always 1). 
/*************** Pixel Shader ***********/
float4 pointPixelShader(VS_OUTPUT IN):SV_Target
{
  float4 OUT = (float4)0;
  
  float3 normal = normalize(IN.Normal);
  float3 lightDirection = normalize(IN.LightDirection.xyz);  // normalize only xyz; w carries the attenuation
  float3 viewDirection = normalize(IN.ViewDirection);
  float n_dot_l = dot(lightDirection, normal);
  float3 halfVector = normalize(lightDirection + viewDirection);
  float n_dot_h = dot(normal, halfVector);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float4 lightCoefficients = lit(n_dot_l, n_dot_h, SpecularPower);
  
  float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);
  float3 diffuse = IN.LightDirection.w * get_vector_color_contribution(LightColor, lightCoefficients.y * color.rgb);
  float3 specular = IN.LightDirection.w * get_scalar_color_contribution(SpecularColor, min(lightCoefficients.z, color.w));
  
  OUT.rgb = ambient + diffuse + specular;
  OUT.a = color.a;
  
  return OUT;
}

    However, you might notice some artifacts in the specular hot spot. 

    This is because the light direction and attenuation were computed per vertex and then interpolated across each triangle during rasterization, which loses accuracy. You need to calculate the light direction and attenuation ratio again in the pixel shader. 

    You need a new output struct for the vertex shader that includes the attenuation ratio:
struct VS_POUTPUT
{
  float4 Position : SV_Position;
  float3 Normal : NORMAL;
  float2 TextureCoordinate : TEXCOORD0;
  float3 WorldPosition : TEXCOORD1;
  float Attenuation : TEXCOORD2;
};
    Pixel Shader:
float4 pixelPointPixelShader(VS_POUTPUT IN):SV_Target
{
  float4 OUT = (float4)0;
  float3 lightDirection = normalize(LightPosition - IN.WorldPosition);
  float3 viewDirection = normalize(CameraPosition - IN.WorldPosition);
  
  float3 normal = normalize(IN.Normal);
  float n_dot_l = dot(lightDirection, normal);
  float3 halfVector = normalize(lightDirection + viewDirection);
  float n_dot_h = dot(normal, halfVector);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float4 lightCoefficients = lit(n_dot_l, n_dot_h, SpecularPower);
  
  float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);
  float3 diffuse = IN.Attenuation * get_vector_color_contribution(LightColor, lightCoefficients.y * color.rgb);
  float3 specular = IN.Attenuation * get_scalar_color_contribution(SpecularColor, min(lightCoefficients.z, color.w));
  
  OUT.rgb = ambient + diffuse + specular;
  OUT.a = color.a;
  
  return OUT;
}

Spotlight

    A spotlight has an inner cone and an outer cone. Anything outside the outer cone gets a value of 0; anything inside the inner cone gets a value of 1. So you're dealing with a location, a direction, an outer angle, and an inner angle. 
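The falloff between the two cones is done with HLSL's `smoothstep(edge0, edge1, x)`, which is the standard cubic Hermite interpolation. A Python sketch of it, purely as illustration:

```python
def smoothstep(edge0, edge1, x):
    """Cubic Hermite interpolation: 0 at edge0, 1 at edge1, smooth in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

# Hypothetical edge values: below edge0 -> no light, past edge1 -> full light.
print(smoothstep(0.2, 0.8, 0.1))  # below edge0
print(smoothstep(0.2, 0.8, 0.5))  # partway between the cones
print(smoothstep(0.2, 0.8, 0.9))  # past edge1
```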

For the struct:
struct VS_SOUTPUT
{
  float4 Position : SV_Position;
  float3 Normal:NORMAL;
  float2 TextureCoordinate : TEXCOORD0;
  float3 WorldPosition : TEXCOORD1;
  float Attenuation: TEXCOORD2;
  float3 LightLookAt:TEXCOORD3;
};
For the vertex shader:
VS_SOUTPUT spotVertexShader(VS_INPUT IN)
{
  VS_SOUTPUT OUT = (VS_SOUTPUT)0;
  matrix WorldViewProjection = mul(mul(World, View), Projection);
  
  OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
  OUT.WorldPosition = mul(IN.ObjectPosition, World).xyz;
  OUT.TextureCoordinate = get_corrected_texture_coordinates(IN.TextureCoordinate);
  OUT.Normal = normalize(mul(float4(IN.Normal, 0), World).xyz);
  

  float3 lightDirection = LightPosition - OUT.WorldPosition;
  float3 nLightDirection = normalize(lightDirection);
  
  OUT.Attenuation = saturate(1.0 - (length(lightDirection)/LightRadius)); // use the unnormalized light direction: its length is the distance to the light
  OUT.LightLookAt = -LightLookAt;
  
  return OUT;
}

Pixel Shader:

float4 spotPixelShader(VS_SOUTPUT IN):SV_Target
{
  float4 OUT = (float4)0;
  float3 lightDirection = normalize(LightPosition - IN.WorldPosition);
  float3 viewDirection = normalize(CameraPosition - IN.WorldPosition);
  
  float3 normal = normalize(IN.Normal);
  float n_dot_l = dot(lightDirection, normal);
  float3 halfVector = normalize(lightDirection + viewDirection);
  float n_dot_h = dot(normal, halfVector);
  
  float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
  float4 lightCoefficients = lit(n_dot_l, n_dot_h, SpecularPower);
  
  float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);
  float3 diffuse = IN.Attenuation * get_vector_color_contribution(LightColor, lightCoefficients.y * color.rgb);
  float3 specular = IN.Attenuation * get_scalar_color_contribution(SpecularColor, min(lightCoefficients.z, color.w));
  
  float3 lightLookAt = normalize(IN.LightLookAt);
  float spotFactor = 0.0;
  float lightAngle = dot(lightLookAt, lightDirection);  // use the normalized, negated look-at from the vertex shader
  if(lightAngle > 0.0)
  {
    spotFactor = smoothstep(SpotLightInnerAngle, SpotLightOuterAngle, lightAngle);
  }
  
  OUT.rgb = ambient + spotFactor*(diffuse + specular);
  OUT.a = color.a;
  
  return OUT;
}

    There is a helper struct in the course's common shader files called LIGHT_CONTRIBUTION_DATA (not an HLSL built-in) that holds all the information necessary to do the lighting calculations. It helps when you have multiple lights, since you need to calculate the contribution for every light. 

Sunday, 30 March 2025

Shader Tool Week 1

 Direct3D 11 Graphics Pipeline

    The rasterizing pipeline. 
        DirectX is how the CPU and the GPU communicate with each other. The CPU side uses C++ while the GPU side uses HLSL. 

    The pipeline

Vertex shader: transforms the mesh's vertices
Rasterizer stage: identifies which pixels are going to be rendered 
Pixel shader: decides what color each pixel has

Input-Assembler

  • entry point
  • assembles Data into primitives
  • gets the data from the cpu to start
  • receives: Vertices from Vertex Buffers (bulk of memory of vertices), indices from index buffers, Control Point Patch Lists
  • outputs: primitives

Vertex Buffers    

    Each vertex contains at least a position (a corner is one point, but that point becomes a vertex for each connected polygon).
    Vertices can also contain color, texture, and normal information.
    Input layout: a template or organizational format you need to conform to in order to work with the engine.

Index buffers

  • feed indices to input assembler.
  • saves memory and speed
  • indexes of positional information
  • a face of a cube has 4 vertices, and a cube has 6 faces, so that's 24 unique vertices.

How much do you save with index buffers?
    It doesn't seem like that big a saving at first, but it adds up when the vertices carry additional information beyond just position. 
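A worked example of the savings, with assumed sizes (a 32-byte vertex carrying position + normal + UV, and 16-bit indices) for a cube drawn as a triangle list:

```python
VERTEX_SIZE = 32   # assumed: float3 position + float3 normal + float2 UV
INDEX_SIZE = 2     # assumed: 16-bit index

# Without an index buffer: 12 triangles x 3 vertices each = 36 full vertices.
no_index = 36 * VERTEX_SIZE

# With an index buffer: 24 unique vertices (4 per face x 6 faces) + 36 indices.
with_index = 24 * VERTEX_SIZE + 36 * INDEX_SIZE

print(no_index, with_index)  # the gap widens as each vertex carries more data
```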

Primitive Types

A primitive is a topology defined in the vertex buffer.
Types:
  • Point list
  • Line list
  • Line list with adjacency (includes neighboring points, e.g. a pre-point and a post-point, for use in later stages)
  • Line strip
  • Triangle list
  • Triangle strip
  • Control point patch lists (the topology needed for the tessellation stage)



Shader Tool first shader

    Creating a constant buffer, used to store shared variables. You want as few draw calls as possible.

    Vertex shader: transforms every vertex from object model space to screen space

Semantics: placed after variables. A semantic tells the system how to treat the variable. For the vertex shader output position you need the semantic SV_Position.

Tessellation is hard to integrate in Unreal Engine. 

    Geometry shader: can add and subtract vertices (hair, grass, fuzz). They're kind of clunky and expensive, so not used as much (Unity makes this easy to use).
Stream-output stage: lets you write vertex data back out to a buffer in memory.
Rasterization: flattens the 3D scene into a 2D image, a grid of pixels.
Interpolation of per-vertex data: uses bilinear/trilinear interpolation.

Output-merger/blending stage:
Takes the existing render target contents and the new fragments and resolves what has been occluded.

Technique: how to put all the stages together.





Making structs: similar to classes but more primitive; we can use them later. 
VS_OUTPUT is a struct containing a float4 position (among other fields). The expression (VS_OUTPUT)0 casts zero to the VS_OUTPUT type, initializing all of its members to 0.

To comment, you can use // for a single line, or /**/ for multiline comments. 
Preprocessor
    Definitions made at the top of the file. If the compiler finds any instance of the name, it immediately replaces it with the defined value. 
    Whenever it comes across FLIP_TEXTURE_Y, it will immediately replace it with the value 1.


float4x4 and matrix mean the same thing.

HLSL Texture Usage
First you need to declare a texture object (the image). Then you declare and initialize a texture sampler, which describes how the image will be sampled. Then you sample the texture using the declared sampler (most of the time this happens in the pixel/fragment shader).

Texture Filtering
Magnification: when there are more pixels on screen than there are texels; the screen area is larger than the texture map.
Minification: when each pixel covers more than one texel. More complicated than magnification; handled with mipmapping.

Texture Filtering
for magnification
Point Filtering: use the texel color closest to the pixel center. Fastest technique, lowest quality (blocky).

Linear Interpolation: uses bilinear interpolation. Slower but smoother.


Anisotropic Filtering: good for objects oblique to the camera; reduces distortion and improves the rendered output. Most expensive and best looking.
    For texture minification, the same filters could be used but it's more complicated: the ignored texels produce artifacts and decrease rendering quality, so down-sampled images are better. Mipmapping: instead of always sampling the full-size texture map, it produces smaller maps, each submap half the size of the previous one. Prebaked, not real time. 
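One worked fact about the cost: since each mip level has half the width and height (so a quarter of the texels) of the previous one, the whole chain only adds about a third more memory over the base texture:

```latex
\sum_{k=0}^{\infty} \left(\tfrac{1}{4}\right)^{k} = \frac{4}{3}
\quad\Rightarrow\quad
\text{full mip chain} \approx 1.33\times \text{the base texture's memory}
```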

Texture evaluation: point filtering selects the nearest mip level, or linear filtering blends the two surrounding mip levels. The selected mipmap is then sampled with point, linear, or anisotropic filtering. 

Friday, 7 March 2025

Houdini Workshop Week 8 - Spring 2025

     Make a new geo object named FluidSim; inside, make 3 primitive spheres and merge them together. Position them in a triangle formation; the exact size and position don't matter too much. 


        Make a FLIP DOP source node; you'll see that the spheres are now voxels. These will be the sources of our fluid sim. 
    When you want to preset a velocity, you have to manipulate the point attributes. Velocity acts on a point: you look for the v attribute and set it to some vector, v = (x, y, z).
    Make a new polygon sphere, big enough that it intersects all of our other spheres.

    We're going to get the normals from the big sphere, flip them so they face inward, and then apply them to the tiny spheres so that their velocity points toward the center. 


    A point is a position holding information; a vertex is associated with a polygon, which is why when you look closely at these green lines there are 6 of them. Change the normal node to add normals to points and you'll see only one line.
    Now create an attribute transfer node so we can transfer the normals. Make sure 'Reverse Normals' is checked on the normal node. 
    We want a little arc of water before it goes down, so move the source sphere upward. 
    Create a point wrangle with this VEX expression:
@v = @N *5;

    We're setting the velocity equal to 5 times the normal vector (normals should be normalized, so length 1).
    Now we'll make the bathtub. Put down a tube SOP and change the radii and height to make it similar to a basin. Make sure it's a polygon. We used 50 columns. 
    We want to extrude the top face. Turn on the polygon numbers to see which face number is the top; for us it was face 0, so in the group field we put '0'.
    Create another polyextrude node. The face numbers changed again, so find the inner face again and extrude it downwards. 
    We used an edit node afterward to make the basin even deeper: we took the inner and outer face numbers and set translate Y to a negative number to stretch it out.  

    Add a null output node and you're done with the basin. 

Making the Dop net:



    Create a DOP network node and go inside it. For a DOP network we need a source, a fluid object (a Houdini abstract object that acts as a database to store the sim data), and a collision system. Create a volume source node and change Initialize to 'Source FLIP'; this reinitializes the node. Change Input to 'First Context Geometry' so the first input on the dopnet node feeds into this node. 

    Next you need a FLIP object: your database. This does not take an input. 
    Create a FLIP solver node. Its first input should be the FLIP object and its fourth input should be the volume source node you've made. 
    Press Alt+E on the 'Particles Per Voxel' parameter and write this expression in the expression editor:
ceil(pow(ch("../flipobject1/gridscale"), 3))
    This gets the grid scale of our FLIP object, raises it to the power of 3, and rounds it up. 
    Now we need a static object node for our basin. Change its SOP Path parameter to our basin geo. 
    In the static object node, change Collisions > Bullet Data to 'Concave', and Collisions > RBD Solver > Collision Detection to 'Use Surface Collisions'.
    Now add a static solver for your static object, a merge node to merge it with the FLIP solver, and a gravity node to finish it. 
    Outside the DOP network, add a particle fluid surface node and change the voxel size to 2 so it's simpler and takes less time to compute.

    Then get a ROP node and export. The settings should be similar to before. 

INTO UNREAL

    When importing into Unreal, make sure Generate Missing Collision is on, Vertex Color Import Option is Replace, and Normal Import Method is 'Import Normals and Tangents'. 
    For the static mesh, look in its Details tab, find 'UE4 Compatible UVs', and make sure it's ticked on. 
    
    Bring in all the images that were created. For the EXRs, right click > Scripted Asset Actions > Houdini Config Textures for VAT (HDR). For the PNG image (the color lookup table), you need to use the SDR option. You'll know it worked if the filter is set to Nearest, the texture group for the EXRs is 16Bit Data, and the PNG's is 8Bit Data for SDR.

    The material is the same setup as before: tangent space normals off, 4 custom UVs, and the Houdini VAT node brought in for dynamic remeshing. 

    Make a material instance and attach the rotation, position, and lookup table images to it, then apply it to the mesh. If the mesh looks weird at first, close Unreal and reopen it; this may fix it.     

Note: I had an issue where the ray traced shadows setting was turned on and I was getting shadows as if the polygons were still floating in space. Turning it off fixed it. 
