Azure AppInsights integration in Blazor WASM

Integrating Azure Application Insights in a Blazor WebAssembly (WASM) app is possible, though it requires special handling since Blazor WASM runs entirely in the browser, and you can’t use the full .NET SDK for Application Insights like you can in server-side apps.

Here’s how you can set it up using the JavaScript SDK (since Blazor WASM ultimately runs in the browser):


✅ Step-by-Step Guide

1. Create Application Insights Resource (if not done)

  • Go to Azure Portal → Create a resource → Application Insights.
  • Choose General → Application Insights, select a region, etc.
  • After creation, copy the Instrumentation Key or Connection String.

2. Add the Application Insights JavaScript SDK

In your Blazor WebAssembly project:

Modify wwwroot/index.html (for standalone Blazor WASM)

<!-- Application Insights JavaScript SDK (v2). The legacy ai.0.js loader snippet
     and its az416426.vo.msecnd.net CDN are deprecated, so load the current SDK
     bundle instead. -->
<script src="https://js.monitor.azure.com/scripts/b/ai.2.min.js"></script>
<script type="text/javascript">
  // The CDN bundle exposes the SDK under the Microsoft.ApplicationInsights global.
  // A connection string is preferred; instrumentationKey: "..." also still works.
  var appInsights = new Microsoft.ApplicationInsights.ApplicationInsights({
    config: {
      connectionString: "YOUR_CONNECTION_STRING"
    }
  });
  appInsights.loadAppInsights();

  // Expose globally so Blazor JS interop can call it, and log the initial page view.
  window.appInsights = appInsights;
  appInsights.trackPageView();
</script>

3. Call App Insights from C# Code

You can invoke JavaScript from your Blazor C# code like this:

Create a service to interact with JS (e.g., AppInsightsService.cs):

using Microsoft.JSInterop;
using System.Threading.Tasks;

// Thin C# wrapper over the Application Insights JavaScript API via JS interop.
public class AppInsightsService
{
    private readonly IJSRuntime _jsRuntime;

    public AppInsightsService(IJSRuntime jsRuntime)
    {
        _jsRuntime = jsRuntime;
    }

    // v2 JS API signature: appInsights.trackEvent({ name: "..." })
    public async Task TrackEventAsync(string eventName)
    {
        await _jsRuntime.InvokeVoidAsync("appInsights.trackEvent", new { name = eventName });
    }

    // The v2 JS API expects an exception object; an anonymous { message } is the
    // simplest shape that serializes usefully from C#.
    public async Task TrackExceptionAsync(string errorMessage)
    {
        await _jsRuntime.InvokeVoidAsync("appInsights.trackException", new
        {
            exception = new { message = errorMessage }
        });
    }

    public async Task TrackPageViewAsync(string pageName)
    {
        await _jsRuntime.InvokeVoidAsync("appInsights.trackPageView", new { name = pageName });
    }
}

4. Register the Service

In Program.cs:

builder.Services.AddScoped<AppInsightsService>();

5. Use in Your Components

@inject AppInsightsService AppInsights

<button @onclick="TrackEvent">Track Event</button>

@code {
    private async Task TrackEvent()
    {
        await AppInsights.TrackEventAsync("ButtonClicked");
    }
}

🧠 Notes

  • Only client-side telemetry will be captured (JS-side) — no automatic dependency tracking, for example.
  • If you need full telemetry, consider the hosted Blazor WASM model and use the Application Insights server-side SDK (e.g., the ASP.NET Core SDK) in the backend.

Supervised and Unsupervised Learning

Supervised Learning

Definition:
The model learns from labeled data — meaning each input has a corresponding correct output.

Goal:
Predict an output (label) from input data.

Examples:

  • Email spam detection (Spam / Not Spam)
  • Predicting house prices (Price in $)
  • Handwriting recognition (0–9 digits)

Types:

  • Classification (output is a category): e.g., cat vs dog
  • Regression (output is a number): e.g., predicting temperature

Requires Labels? ✅ Yes

Example Dataset:

Input Features | Label
“Free offer now” (email text) | Spam
3 bedrooms, 2 baths, 1500 sq ft | $350,000
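
To make this concrete, here is a minimal supervised-regression sketch with scikit-learn (an illustrative example with made-up numbers, not part of the original post):

from sklearn.linear_model import LinearRegression

# Toy labeled dataset: [bedrooms, baths, sq ft] -> price in $
X = [[3, 2, 1500], [4, 3, 2200], [2, 1, 900], [5, 4, 3000]]
y = [350_000, 500_000, 220_000, 700_000]

model = LinearRegression()
model.fit(X, y)                       # learn from labeled examples
print(model.predict([[3, 2, 1600]]))  # predict a price for an unseen house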

🔍 Unsupervised Learning

Definition:
The model learns patterns from unlabeled data — it finds structure or groupings on its own.

Goal:
Explore data and find hidden patterns or groupings.

Examples:

  • Customer segmentation (group customers by behavior)
  • Anomaly detection (detect fraud)
  • Topic modeling (find topics in articles)

Types:

  • Clustering: Group similar data points (e.g., K-Means)
  • Dimensionality Reduction: Simplify data (e.g., PCA)

Requires Labels? ❌ No

Example Dataset:

Input Features
Age: 25, Spent: $200
Age: 40, Spent: $800

(The model might discover two customer groups: low-spenders vs high-spenders)
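
As an illustrative sketch (assumed data, not from the original post), K-Means can discover those groups like this:

from sklearn.cluster import KMeans

# Unlabeled customer data: [age, amount spent]
X = [[25, 200], [22, 180], [40, 800], [45, 850], [30, 220]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)  # assigns each customer to a cluster
print(labels)                   # e.g., [0 0 1 1 0]: low- vs high-spenders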


✅ Quick Comparison

Feature | Supervised Learning | Unsupervised Learning
Labels | Required | Not required
Goal | Predict outputs | Discover patterns
Output | Known | Unknown
Examples | Classification, Regression | Clustering, Dimensionality Reduction
Algorithms | Linear Regression, SVM, Random Forest | K-Means, PCA, DBSCAN

Supervised Learning Use Cases

1. Email Spam Detection

  • ✅ Label: Spam or Not Spam
  • 📍 Tech companies like Google use supervised models to filter email inboxes.

2. Fraud Detection in Banking

  • ✅ Label: Fraudulent or Legitimate transaction
  • 🏦 Banks use models trained on historical transactions to flag fraud in real-time.

3. Loan Approval Prediction

  • ✅ Label: Approved / Rejected
  • 📊 Based on income, credit history, and employment data, banks decide whether to approve loans.

4. Disease Diagnosis

  • ✅ Label: Disease present / not present
  • 🏥 Healthcare systems train models to detect diseases like cancer using medical images or lab reports.

5. Customer Churn Prediction

  • ✅ Label: Will churn / Won’t churn
  • 📞 Telecom companies predict if a customer is likely to cancel a subscription based on usage data.

🔍 Unsupervised Learning Use Cases

1. Customer Segmentation

  • ❌ No labels — model groups customers by behavior or demographics.
  • 🛒 E-commerce platforms use this for targeted marketing (e.g., Amazon, Shopify).

2. Anomaly Detection

  • ❌ No labeled “anomalies” — model detects outliers.
  • 🛡️ Used in cybersecurity to detect network intrusions or malware.

3. Market Basket Analysis

  • ❌ No prior labels — finds item combinations frequently bought together.
  • 🛍️ Supermarkets like Walmart use this to optimize product placement.

4. Topic Modeling in Text Data

  • ❌ No labels — model finds topics in documents or articles.
  • 📚 News agencies use it to auto-categorize stories or summarize themes.

5. Image Compression (PCA)

  • ❌ No labels — model reduces dimensionality.
  • 📷 Used in storing or transmitting large image datasets efficiently.
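
As a rough illustration of the idea (random arrays standing in for real images, not part of the original post), PCA can compress and approximately reconstruct data like this:

import numpy as np
from sklearn.decomposition import PCA

# Treat each row as a flattened 8x8 grayscale "image" (64 pixels).
rng = np.random.default_rng(0)
images = rng.random((100, 64))

pca = PCA(n_components=16)                     # keep 16 of 64 dimensions
compressed = pca.fit_transform(images)         # compact representation
restored = pca.inverse_transform(compressed)   # lossy reconstruction
print(compressed.shape, restored.shape)        # (100, 16) (100, 64)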

🚀 In Summary:

Industry | Supervised Example | Unsupervised Example
Finance | Loan approval | Fraud pattern detection
Healthcare | Diagnosing diseases from scans | Grouping patient records
E-commerce | Predicting purchase behavior | Customer segmentation
Cybersecurity | Predicting malicious URLs | Anomaly detection in traffic logs
Retail | Forecasting sales | Market basket analysis

Training, Validation and Test Data in Machine Learning

Training Data

  • Purpose: Used to teach (train) the model.
  • Contents: Contains both input features and corresponding output labels (in supervised learning).
  • Usage: The model learns patterns, relationships, and parameters from this data.
  • Size: Typically the largest portion of the dataset (e.g., 70–80%).

Example:
If you’re training a model to recognize handwritten digits:

  • Input: Images of digits
  • Label: The digit (0–9)

Test Data

  • Purpose: Used to evaluate how well the model performs on unseen data.
  • Contents: Same format as training data (features + labels), but not used during training.
  • Usage: Helps assess model accuracy, generalization, and potential overfitting.
  • Size: Smaller portion of the dataset (e.g., 20–30%).

Key Point: It simulates real-world data the model will encounter in production.

Validation Data

  • Purpose: Used to tune the model’s hyperparameters and monitor performance during training.
  • Contents: Same format as training/test data — includes input features and labels.
  • Usage:
    • Helps choose the best version of the model (e.g., best number of layers, learning rate).
    • Detects overfitting early by evaluating on data not seen during weight updates.
  • Not used to directly train the model (no weight updates from validation data).
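
A common way to produce the three sets is two successive random splits. Here is a minimal sketch with scikit-learn (the sizes are illustrative):

from sklearn.model_selection import train_test_split

X = list(range(100))    # stand-in features
y = [v % 2 for v in X]  # stand-in labels

# First carve out 20% as the test set, then 25% of the remainder as
# validation, leaving a 60/20/20 train/validation/test split overall.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20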

Summary Table

Aspect | Training Data | Validation Data | Test Data
Used for | Training the model | Tuning the model | Final evaluation
Used during | Model training | Model training | After model training
Updates model? | Yes | No | No
Known to model? | Yes | Seen during training (no weight updates) | Never seen before

Tip:

In practice, for small datasets, we often use cross-validation, where the validation set rotates among the data to make the most of limited samples.
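
For instance, here is a minimal 5-fold cross-validation sketch with scikit-learn (an illustrative example, not from the original post):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Each of the 5 folds takes a turn as the validation set.
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())  # average accuracy across folds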

Typical Size Ranges for Small Datasets

Dataset Type | Number of Samples (roughly)
Very Small | < 500 samples
Small | 500 – 10,000 samples
Medium | 10,000 – 100,000 samples
Large | 100,000+ samples

Why Size Matters

  • Small datasets are more prone to:
    • Overfitting – model memorizes data instead of learning general patterns.
    • High variance in performance depending on the data split.
  • Big models (e.g., deep neural networks) usually need large datasets to perform well.

💡 Common Examples

  • Medical diagnosis: Often < 5,000 patient records → small dataset.
  • NLP for niche domains: < 10,000 labeled texts → small.
  • Handwritten digit dataset (MNIST): 60,000 training images → medium-sized.

🔁 Tip for Small Datasets

If your dataset is small:

  1. Use cross-validation (like 5-fold or 10-fold).
  2. Consider simpler models (e.g., logistic regression, decision trees).
  3. Use data augmentation (e.g., rotate/scale images, reword texts).
  4. Apply transfer learning if using deep learning (e.g., pre-trained models like BERT, ResNet).

Export SonarQube issues (Community Edition) to an Excel File

SonarQube Community Edition has no built-in way to export issues to an Excel file. Here are the steps to export them:

  1. Install Python from https://www.python.org/downloads. Go through the custom installation, specify a manual path (e.g., C:\Python313), and check all checkboxes.

  2. Verify the installation by running python --version in a command prompt.

  3. Clone the following repository from GitHub: https://github.com/talha2k/sonarqube-issues-export-to-excel

  4. Open sonar-export.py in a Python IDE (IDLE) and edit the SonarQube URL, project key, and token. Save the file.

  5. Run the script:

python sonar-export.py

The script will fetch the issues and save them to an Excel file named sonarqube_issues.xlsx.
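
For reference, the core of such a script looks roughly like this. It is a hedged sketch assuming the standard /api/issues/search Web API, a user token, and the requests, pandas, and openpyxl packages (pip install requests pandas openpyxl); adjust the placeholder values to your setup:

import requests
import pandas as pd

SONAR_URL = "http://localhost:9000"  # your SonarQube server
PROJECT_KEY = "my-project"           # placeholder: your project key
TOKEN = "squ_..."                    # placeholder: a SonarQube user token

issues, page = [], 1
while True:
    resp = requests.get(
        f"{SONAR_URL}/api/issues/search",
        params={"componentKeys": PROJECT_KEY, "ps": 500, "p": page},
        auth=(TOKEN, ""),            # token as username, empty password
    )
    resp.raise_for_status()
    data = resp.json()
    issues.extend(data["issues"])
    if page * 500 >= data["total"]:  # note: the API caps results at 10,000
        break
    page += 1

pd.DataFrame(issues).to_excel("sonarqube_issues.xlsx", index=False)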

Hope this will help someone.

Install SonarQube using Docker

Sonar is a popular open-source platform for continuous inspection of code quality. One of the easiest ways to install and run Sonar is with Docker, a containerization platform that makes it easy to deploy and manage applications.

Prerequisites

Before getting started, you will need to have Docker installed on your machine. If you do not have Docker installed, you can download and install it from the Docker website.

Step 1: Pull the Sonar Docker Image

The first step in installing Sonar with Docker is to pull the Sonar Docker image from the Docker Hub repository. To do this, open a terminal or command prompt and run the following command:

docker pull sonarqube

This will download the latest version of the Sonar Docker image to your machine.

Step 2: Create a Docker Network

Next, we need to create a Docker network that will allow the Sonar container to communicate with the database container. To create a Docker network, run the following command:

docker network create sonar-network

Step 3: Start a Database Container

Sonar requires a database to store its data. In this example, we will use a PostgreSQL database; recent SonarQube releases also support Microsoft SQL Server and Oracle (MySQL support was dropped in SonarQube 7.9). Note that PostgreSQL 9.6 is end-of-life and no longer supported by current SonarQube versions, so use a recent image. To start a PostgreSQL database container, run the following command:

docker run -d --name sonar-db --network sonar-network -e POSTGRES_USER=sonar -e POSTGRES_PASSWORD=sonar -e POSTGRES_DB=sonar postgres:13

Step 4: Start the Sonar Container

Once the database container is running, we can start the Sonar container. Note that the current image reads its JDBC settings from the SONAR_JDBC_* environment variables (the older SONARQUBE_JDBC_* names are deprecated). To start the container, run the following command:

docker run -d --name sonar -p 9000:9000 --network sonar-network -e SONAR_JDBC_URL=jdbc:postgresql://sonar-db:5432/sonar -e SONAR_JDBC_USERNAME=sonar -e SONAR_JDBC_PASSWORD=sonar sonarqube
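
Note: on Linux hosts, SonarQube's embedded Elasticsearch usually requires a higher memory-map limit, otherwise the container exits during startup. Raise it on the host before starting the container:

sudo sysctl -w vm.max_map_count=262144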

Step 5: Access the Sonar Dashboard

Once the Sonar container is running, you can access the Sonar dashboard by opening a web browser and navigating to http://localhost:9000. The default username and password are admin and admin, respectively.