
Shaders: there is nowhere to hide from them and no way to avoid them. These days, computer graphics and 3D engines are everywhere. Every device has a GPU and supports the most common frameworks that use it. So, for everyone working on something with 3D models, the moment comes when a plain material isn't enough.
We want real-time deformations without burdening the CPU with piles of secondary data. We want water with at least some reflections and some waves or foam, without silly tricks that will haunt our dreams.

The Purpose of This Guide

Don't panic; we are here to help. Together we can tame the beast, and this series will give you some hints about the monstrosity. We can't guarantee that by the end of these articles you will become the next shader wizard. However, you will gain enough understanding to move on to more esoteric books with more mathematically demanding explanations, and you will know when and how to use shaders.

Tools for future battles

In this series of articles, we will focus on two different ways to deliver 3D content: Unity and Three.js.
Why Unity? Because Unity is free for personal use and is probably the best-known engine, with a wide user base. Furthermore, it implements several ways to write shaders, and the one we'll use is not too different from the approaches found in many other engines.
Three.js, on the other hand, is the most useful JavaScript library for rendering 3D models. It can run shaders on the web and is compatible with a wide range of devices.
Why not Unreal? Unreal is a beautiful engine with enormous capabilities, but it tends to steer the user down a fixed, easy path when writing shaders. Moreover, it offers a node editor that, frankly, is too good to pass up. All the concepts and information presented here apply to other engines such as Unreal, but putting them into practice there may require some extra work.

Understanding our enemy and why we need to win

We can't rely on the CPU, not because it lacks power, but because in the vast majority of cases it doesn't have enough cores.
Picture it this way: our CPU is a magic cube pulsating with alien energy that converts every bit of information it touches into something we need. (If the CPU is multi-core, we have as many magic cubes as the CPU has logical cores.)
An image, however, is composed of height times width pixels, so a 4096 px x 4096 px image contains 16,777,216 of them. If our magic cube had to touch every single pixel in order to process that image, there would be no way to avoid an immense queue. It would be an incredible waste of time, especially considering that this must be done multiple times every second.
This is exactly where parallel processing becomes the best solution. Instead of a couple of big, powerful processors, or cubes, it is much smarter to have lots of tiny micro-cubes running in parallel at the same time. That is what a GPU is: multiple microprocessors that execute operations blindly and without memory, with no direct connection to the other threads.
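To get a feel for the numbers, here is the arithmetic above as a quick sketch. The 60 fps target and the per-pixel `shade` function are made up for illustration; the point is that a fragment shader is essentially a pure, memoryless function run once per pixel, which is what makes it safe to execute millions of them in parallel:

```javascript
// Pixel-count arithmetic for a 4096 x 4096 image, as in the text.
const width = 4096;
const height = 4096;
const pixels = width * height;   // 16,777,216 pixels in a single frame
const fps = 60;                  // hypothetical real-time target
const perSecond = pixels * fps;  // over a billion per-pixel jobs every second

// A fragment shader behaves like a pure per-pixel function: each pixel's
// result depends only on its own inputs, never on its neighbours.
const shade = (x, y) => (x ^ y) & 0xff; // made-up per-pixel function

console.log(pixels, perSecond, shade(3, 5));
```

On a CPU those sixteen million calls would form one long queue per core; on a GPU they are spread across thousands of tiny processors at once.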

Don't let a mysterious and obscure language stop you.

There are many languages available for writing shaders. The most common are HLSL and GLSL, but there are variants and sub-languages that share a common root. Each language tends to implement things in its own way, but it is always possible to understand a technique in one language and adapt it to another.

For example, this is a basic shader written in Unity:

Shader "Unlit/NewUnlitShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                UNITY_TRANSFER_FOG(o,o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.uv);
                // apply fog
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
    }
}

And this is a vertex shader written for Three.js:

    varying vec3 vUv; // note: despite the name, this carries the vertex position

    void main() {
      vUv = position; // pass the object-space position on to the fragment shader

      vec4 modelViewPosition = modelViewMatrix * vec4(position, 1.0);
      gl_Position = projectionMatrix * modelViewPosition;
    }
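In Three.js, a shader like this is plugged into a `THREE.ShaderMaterial`. The following is only an illustrative sketch, not part of the article's code: it assumes three.js is loaded, it reads the vertex shader from a hypothetical `<script>` tag with id `vertexShader`, and it supplies a trivial made-up fragment shader so the material is complete.

```javascript
// Sketch only: requires the three.js library and a script tag holding
// the vertex shader above (both are assumptions, not from the article).
const material = new THREE.ShaderMaterial({
  vertexShader: document.getElementById('vertexShader').textContent,
  fragmentShader: `
    varying vec3 vUv;
    void main() {
      gl_FragColor = vec4(vUv, 1.0); // color each pixel by its position
    }
  `,
});
const mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
```

Three.js compiles both programs and runs the vertex shader once per vertex and the fragment shader once per pixel the mesh covers.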

This all sounds terribly complicated, doesn't it? Don't worry: starting with the next article, everything will become clearer and more understandable.
