The relationship between AI and Nvidia’s chips

It all starts with video games

The company (Nvidia), founded in 1993 at a California Denny’s, became a breakthrough hit in the late ’90s and early 2000s because it led the world in the creation of Graphics Processing Units, which are the chips that allow for good 3D graphics in video games. Nvidia’s chips were used in the Xbox, PlayStation 3 and Nintendo Switch, and are a staple in pretty much every gaming computer.

A graphics processing unit (GPU) is different from a standard computer chip (a central processing unit, or CPU) because it does one thing really well:

  • It uses advanced math to turn 3D models into images on a screen. For video games, this means sweet graphics.

So, Nvidia had been doing cool stuff for decades. But then computer programmers stumbled upon more use cases for GPUs.

The first new use case stems from the fact that GPUs can solve certain complex math problems very quickly, but use a lot of energy (electricity) to do so.

So, when the anonymous creator of Bitcoin wanted a way to limit the number of imaginary digital coins that could be minted, he(?) set up a system that would throttle the creation of new coins by forcing you to solve a mathematical riddle with your GPU first.

The result: with crypto mining, you could literally use GPUs to mint (theoretical) money. Your costs would come in the form of buying the hardware and paying for the electricity to operate it.

This did result in the production and proliferation of a lot of GPUs, which eventually found their third (and much more productive) purpose.

Why AI and 3D video games are the same

The building blocks of an AI model are “tokens,” which are words or parts of words. (For example, “think” is a token, and “-ing” is also a token, and the two tokens can be connected to form the word “thinking.”)

Once you have tokens, you need to connect them in a way that helps your model understand how often they are associated with one another. The way you do this is with very advanced mathematics (linear algebra) that eventually forms vectors that connect every token to every other token.

To visualize this, imagine you’re standing out in a field looking up at the night sky. Every star is a token/word. Now, imagine a bunch of lines drawn from every star to every other star. With millions of stars, you’d have trillions and trillions of lines (vectors) connecting them. And from Earth, this would look sort of like a giant 3D web of vectors.

Doing the math to connect all the tokens in all the languages for all the content in the world requires a lot of computing power. And GPUs – which were designed specifically for advanced multi-dimensional math for video games – are the perfect chips for the job.

This is how a video game company ended up at the cutting edge of AI, jockeying with Microsoft and Apple for the title of most valuable company in the world.

Show/Hide element in Blazor WebAssembly

Blazor WebAssembly doesn’t allow direct manipulation of the DOM. Here is how to show/hide a DOM element without using JavaScript interop:

The hidden HTML attribute hides an element outright:

<p hidden>This paragraph should be hidden.</p>

To bind it to the model:

<p hidden="@HideLabel">I am hidden when HideLabel == true</p>

<p hidden="@(!HideLabel)">I am hidden when HideLabel == false</p>

<button @onclick="@Toggle">Show/Hide</button>

@code {
    // Bound to the hidden attributes above; toggled by the button.
    private bool HideLabel { get; set; } = false;

    private void Toggle()
    {
        HideLabel = !HideLabel;
    }
}

Edit: You can also use a CSS class to hide/show an element. (d-none is Bootstrap’s display: none utility; there is no d-show class, so emit nothing when the element should be visible.)

<div class="font-italic @(HideLabel ? "d-none" : "")">
    I am hidden when HideLabel == true
</div>
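
For completeness, an @if block is another common approach. Unlike the hidden attribute, it removes the element from the render tree entirely instead of just hiding it:

@if (!HideLabel)
{
    <p>I am rendered only when HideLabel == false.</p>
}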

Reference

https://stackoverflow.com/questions/63693734/how-to-show-hide-an-element-in-real-time-blazor

Convert Enum to List in C#

We can use LINQ for this:

public class EnumModel
{
    public int Value { get; set; }
    public string Name { get; set; }
}

public enum MyEnum
{
    Name1 = 1,
    Name2 = 2,
    Name3 = 3
}

public class Test
{
    List<EnumModel> enums = ((MyEnum[])Enum.GetValues(typeof(MyEnum)))
        .Select(c => new EnumModel() { Value = (int)c, Name = c.ToString() })
        .ToList();

    // A list of names only, doing away with the need for EnumModel
    List<string> myNames = ((MyEnum[])Enum.GetValues(typeof(MyEnum)))
        .Select(c => c.ToString())
        .ToList();

    // A list of values only, doing away with the need for EnumModel
    List<int> myValues = ((MyEnum[])Enum.GetValues(typeof(MyEnum)))
        .Select(c => (int)c)
        .ToList();

    // A dictionary of <string, int>
    Dictionary<string, int> myDic = ((MyEnum[])Enum.GetValues(typeof(MyEnum)))
        .ToDictionary(k => k.ToString(), v => (int)v);
}
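
On .NET 5 or later, the generic Enum.GetValues<TEnum>() overload avoids the cast entirely. The first list above, rewritten:

List<EnumModel> enums = Enum.GetValues<MyEnum>()
    .Select(c => new EnumModel() { Value = (int)c, Name = c.ToString() })
    .ToList();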

Reference

https://stackoverflow.com/questions/1167361/how-do-i-convert-an-enum-to-a-list-in-c

StateHasChanged() vs InvokeAsync(StateHasChanged) in Blazor WebAssembly

I have tried calling StateHasChanged(), instead of InvokeAsync(StateHasChanged), in a Timer’s Elapsed event, and it works as expected.

That must have been on WebAssembly. If you tried that on Blazor Server, I would expect an exception: StateHasChanged() checks whether it is running on the right thread.

The core issue is that rendering and calling StateHasChanged() both have to happen on the main (UI) thread. Strictly speaking, that is “on the SynchronizationContext”, but for all intents and purposes you can think of it as a single thread, just as in WinForms, WPF and other GUIs. The virtual DOM is not thread-safe.

The main Blazor life-cycle events (OnInitialized, OnAfterRender, button click handlers) all execute on that special thread, so in the rare case that you need StateHasChanged() there, it can be called without InvokeAsync().

A Timer is different: it is an “external event”, so you can’t be sure it will execute on the correct thread. InvokeAsync() delegates the work to Blazor’s SynchronizationContext, which will ensure it runs on the main thread.

But Blazor WebAssembly only has one thread, so for the time being external events always run on the main thread too. That means that when you get this Invoke pattern wrong, you won’t notice anything, until one day, when Blazor WebAssembly finally gets real threads, your code will fail. That is what is happening with your Timer experiment.
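
A minimal sketch of the safe pattern, assuming a System.Timers.Timer inside a component:

@using System.Timers
@implements IDisposable

<p>Ticks: @_ticks</p>

@code {
    private Timer? _timer;
    private int _ticks;

    protected override void OnInitialized()
    {
        _timer = new Timer(1000); // fire once per second
        _timer.Elapsed += (_, _) =>
        {
            _ticks++;
            // Elapsed fires outside Blazor's SynchronizationContext,
            // so marshal the re-render back onto it:
            _ = InvokeAsync(StateHasChanged);
        };
        _timer.Start();
    }

    public void Dispose() => _timer?.Dispose();
}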

What is “Blazor’s synchronization context”?

In .NET, a synchronization context determines what happens with (after) await. Different platforms have different settings; the Blazor SynchronizationContext is a lot like that of WinForms and WPF. Mainly, the default is .ConfigureAwait(true): resume on the same thread/context.

I sometimes see .ConfigureAwait(false) in top-level Blazor WebAssembly code. That too will blow up when we get real threads there. It is fine to use in services called from Blazor, but not in the top-level methods.

And finally, the difference between await InvokeAsync(StateHasChanged) and await InvokeAsync(() => StateHasChanged()) is just about lambdas in C#, nothing to do with Blazor. The first, shorter form is a little more efficient.
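
Side by side:

// Method group: passes StateHasChanged directly as the delegate.
await InvokeAsync(StateHasChanged);

// Lambda: allocates an extra delegate that merely wraps the same call.
await InvokeAsync(() => StateHasChanged());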

I also sometimes see InvokeAsync() called without await.

That will work, and it is probably better than the other option: making the calling method (like the Timer’s Elapsed handler) async void. So do use it from a synchronous code path.

Reference

https://stackoverflow.com/questions/65230621/statehaschanged-vs-invokeasyncstatehaschanged-in-blazor

400 vs 422 response to a POST action

There are three possible types of client errors on API calls that receive request bodies:

Sending invalid JSON will result in a 400 Bad Request response:

HTTP/1.1 400 Bad Request
Content-Length: 35

{"message":"Problems parsing JSON"}

Sending the wrong type of JSON values will result in a 400 Bad Request response:

HTTP/1.1 400 Bad Request
Content-Length: 40

{"message":"Body should be a JSON object"}

Sending invalid fields will result in a 422 Unprocessable Entity response:

HTTP/1.1 422 Unprocessable Entity
Content-Length: 149

{
  "message": "Validation Failed",
  "errors": [
    {
      "resource": "Issue",
      "field": "title",
      "code": "missing_field"
    }
  ]
}
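
In ASP.NET Core, for example, the split could look like the following minimal sketch (IssuesController and IssueDto are hypothetical names; the [ApiController] attribute already returns 400 on its own for unparseable JSON):

[ApiController]
[Route("api/[controller]")]
public class IssuesController : ControllerBase
{
    [HttpPost]
    public IActionResult Create([FromBody] IssueDto? issue)
    {
        // Malformed JSON never reaches this point: [ApiController]
        // short-circuits it with an automatic 400 Bad Request.

        if (issue is null)
            return BadRequest(new { message = "Body should be a JSON object" }); // 400

        if (string.IsNullOrWhiteSpace(issue.Title))
            return UnprocessableEntity(new // 422
            {
                message = "Validation Failed",
                errors = new[] { new { resource = "Issue", field = "title", code = "missing_field" } }
            });

        return Ok(issue);
    }
}

public record IssueDto(string? Title);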

Reference

https://stackoverflow.com/questions/16133923/400-vs-422-response-to-post-of-data