Now Static Site Generation (SSG)

Ok, I ditched the old Angular CSR website for this new statically generated site. It’s so easy now with AI that it took me maybe a couple of hours to get it all live with GitHub Copilot. The prompt was something like:

let’s convert this Angular CSR into an SSG website. We should read the posts from a posts folder in the repo instead of our Firestore database. We should build the pages on every merge into main and deploy to GitHub Pages instead of Firebase. And let’s replace Google Analytics with Umami.is

From there it asked me a couple of questions about what to use and we were done.

There were a couple of follow-ups with style fixes and a problem with the base URL (when using a GitHub Pages URL versus a custom domain), but it pretty much worked out of the box. Check out the generated README below.
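For reference, the base URL behaviour in Astro comes down to the site and base options in astro.config.mjs. The sketch below uses placeholder values; the actual config Copilot generated may differ.

```javascript
// astro.config.mjs — illustrative values only
import { defineConfig } from 'astro/config';

export default defineConfig({
  // Full URL of the deployed site; used for canonical URLs and sitemaps
  site: 'https://example.com',
  // On a GitHub Pages project URL (user.github.io/repo) this would be '/repo';
  // with a custom domain it can stay '/'
  base: '/',
});
```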

Stack

  • Astro — static site generator
  • GitHub Actions — builds and deploys on every push to master
  • GitHub Pages — hosting, served via custom domain
  • Umami — privacy-focused analytics

Structure

site/                  # Astro project
├── src/
│   ├── content/
│   │   ├── posts/     # Published blog posts (.md)
│   │   └── drafts/    # Unpublished drafts (not built)
│   ├── layouts/       # BaseLayout and PostLayout
│   └── pages/         # index, /post/[slug], 404
└── public/            # Static assets (images, CSS, favicon)

Writing a post

  1. Create site/src/content/posts/<slug>.md with the required frontmatter:
    ---
    title: Your Post Title
    publishedOn: YYYY-MM-DD
    ---
  2. Push to master — GitHub Actions builds and deploys automatically.

To draft a post without publishing it, place the file in src/content/drafts/ instead.
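If the project uses Astro content collections, the required frontmatter above would typically be enforced by a schema. The file and shape below are an illustrative sketch, not necessarily what was generated.

```javascript
// site/src/content/config.ts — hypothetical schema for the posts collection
import { defineCollection, z } from 'astro:content';

const posts = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    // Coerces the YYYY-MM-DD frontmatter string into a Date
    publishedOn: z.coerce.date(),
  }),
});

export const collections = { posts };
```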

Local development

cd site
npm install
npm run dev      # http://localhost:4321
npm run build    # production build → dist/
npm run preview  # preview production build

Multiple environments on the same Service Fabric cluster


When developing a service on Azure Service Fabric you can work on your local cluster, and it works quite well. You can choose between a one-node or a five-node local cluster (check how to prepare your development environment in the Microsoft docs). But if you are part of a team, you probably want everyone on the team to be able to access the cluster. In our team we deployed a cluster in Azure for this and called it the dev environment. Then you have the QA team that needs to test, but this environment is too volatile for that: one minute it’s fine, and the next it’s gone after we deployed something that broke the application. We needed a staging environment, but we didn’t want to deploy another cluster (you need to pay for it and all that).

Deploying the same application type multiple times on the same cluster

Follow these two steps to deploy the same application type multiple times on the same cluster, so the deployments act as multiple environments.

Parametrise any service URLs

If you have multiple services talking to each other, you need to move any service URLs into your environment parameters files. In our dev.xml application parameters file we’ll have something like this:

<Application 
    xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    Name="fabric:/MyApp.dev" 
    xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
     <Parameter Name="MyApiServiceOneUri" Value="fabric:/MyApp.dev/ServiceOne" />
     <Parameter Name="MyActorUri" Value="fabric:/MyApp.dev/ActorOne" />
  </Parameters>
</Application>

and below is our staging.xml parameters file

<Application 
    xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    Name="fabric:/MyApp.staging" 
    xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
     <Parameter Name="MyApiServiceOneUri" Value="fabric:/MyApp.staging/ServiceOne" />
     <Parameter Name="MyActorUri" Value="fabric:/MyApp.staging/ActorOne" />
  </Parameters>
</Application>

The important thing to notice in both parameters files above is the application name: we added a suffix with the environment name (in this case .dev and .staging) and used the matching URIs for the app’s services and actors.
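How these parameters reach the running services depends on your setup; one common route is a ConfigOverrides block in the ApplicationManifest that feeds them into a service’s Settings.xml. The section and package names below are illustrative.

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ServiceOnePkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides>
    <ConfigOverride Name="Config">
      <Settings>
        <Section Name="ServiceUris">
          <!-- Resolved from dev.xml or staging.xml at deployment time -->
          <Parameter Name="MyApiServiceOneUri" Value="[MyApiServiceOneUri]" />
        </Section>
      </Settings>
    </ConfigOverride>
  </ConfigOverrides>
</ServiceManifestImport>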

Use different ports

If you are specifying ports explicitly in your service endpoints, make sure you use different ports for each environment. We can add another parameter to our dev.xml and staging.xml parameters files.

<Application 
    xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    Name="fabric:/MyApp.staging" 
    xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
     <Parameter Name="MyApiServiceOneUri" Value="fabric:/MyApp.staging/ServiceOne" />
     <Parameter Name="MyApiServiceOnePort" Value="9051" />
     <Parameter Name="MyActorUri" Value="fabric:/MyApp.staging/ActorOne" />
  </Parameters>
</Application>

When picking static ports, remember this:

By design static ports should not overlap with the application port range specified in the ClusterManifest. If you specify a static port, assign it outside of the application port range, otherwise it will result in port conflicts (Specify resources in a service manifest).

To use the port from your parameters file in your service, you can add a ResourceOverrides section to the ApplicationManifest as indicated here. Locate the ServiceManifestImport of the service whose endpoint port you want to override and add the ResourceOverrides section.

<ResourceOverrides>
   <Endpoints>
      <Endpoint Name="ServiceEndpoint" Port="[MyApiServiceOnePort]" />
   </Endpoints>
</ResourceOverrides>

Also, remember to declare the parameter at the top of the ApplicationManifest file.

<Parameters>
    <Parameter Name="MyApiServiceOnePort" DefaultValue="9999" />
</Parameters>

And that’s it, ready to deploy 🚀


Can't connect to internet from WSL 2?


This seems to be a pretty common problem. Below is what worked for me; I found it here.

From cmd, execute:

netsh winsock reset
netsh int ip reset all
netsh winhttp reset proxy
ipconfig /flushdns

From WSL, execute:

sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
sudo bash -c 'echo "nameserver 8.8.4.4" >> /etc/resolv.conf'
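Note that by default WSL regenerates /etc/resolv.conf on every restart, so the change above may not survive. Disabling that generation in /etc/wsl.conf (a documented WSL setting) makes it stick:

```ini
# /etc/wsl.conf — stop WSL from overwriting /etc/resolv.conf on startup
[network]
generateResolvConf = false
```

After adding this, restart WSL (wsl --shutdown from cmd) and recreate /etc/resolv.conf with the nameserver lines above.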

Azure Service Fabric in production

It’s been a while since we deployed our first project to a Service Fabric cluster at work.

I’d like to share what we’ve done and the bits we’ve learned in the process.

The first thing…

Deploying the cluster

We had to create a cluster, and we had to be able to recreate exactly the same cluster quickly and with minimum effort. Using the Azure Portal interface was not an option; an ARM template was the way to go.

We based our cluster template on one of Ivan Gavryliuk’s templates; I recommend his Pluralsight course Using Azure Service Fabric in Production. Ivan’s PowerShell script will create the resource group (if needed) and key vault, import the certificate, and then apply the ARM template.

You can find more template examples in the Azure Samples repository on GitHub.

Your template will change based on your needs: operating system, VM size, load balancer, durability tier, et cetera. I’ll describe the changes we made to the template and PowerShell scripts.

Load balancer

In our case we use two Azure load balancers: a public load balancer for cluster management, and a private one with rules to expose the services we need to access from our private virtual network. We use nginx as a reverse proxy for the services that need to be accessed from the internet.

Check the load balancers’ configuration in this gist.

Docker images cleanup

After a couple of months of pushing upgrades to a container-based service in our staging cluster, we ran into disk space problems. If you are going to run containers in your cluster and don’t want to log in to each node to prune your Docker images, you will need a couple of settings.

  • PruneContainerImages, if set to true, will:

Remove unused application container images from nodes. When an ApplicationType is unregistered from the Service Fabric cluster, the container images that were used by this application will be removed on nodes where it was downloaded by Service Fabric. The pruning runs every hour, so it may take up to one hour (plus time to prune the image) for images to be removed from the cluster. Service Fabric will never download or remove images not related to an application. Unrelated images that were downloaded manually or otherwise must be removed explicitly.

  • ContainerImagesToSkip will prevent the deletion of the listed images.

More info about these settings can be found here and here. This is how the settings look in the template:

{
    "fabricSettings": [
        {
            "name": "Hosting",
            "parameters": [
                {
                    "name": "PruneContainerImages",
                    "value": "True"
                },
                {
                    "name": "ContainerImagesToSkip",
                    "value": "microsoft/windowsservercore|microsoft/nanoserver"
                }
            ]
        }
    ]
}

Service instance count

Another problem we ran into was changing the instance count when upgrading an application. Suppose you deployed your application for the first time on your cluster with InstanceCount="2", and later you realise you actually need one instance of your service running on every node, so you change it to InstanceCount="-1" and deploy, only to receive something like:

Start-ServiceFabricApplicationUpgrade : Default service descriptions can not be modified as part of upgrade.
To allow it, set EnableDefaultServicesUpgrade to true.

And that’s actually all you need to do: just add it to the cluster settings in your template as below.

{
    "fabricSettings": [
        {
            "name": "ClusterManager",
            "parameters": [
                {
                    "name": "EnableDefaultServicesUpgrade",
                    "value": "true"
                }
            ]
        }
    ]
}

You can read a bit more about the behaviour of changing default services during application upgrades here.

Security

There is a good article from Microsoft describing all the cluster security scenarios.

In our case, we use:

  • Wildcard certificate for server identity and SSL encryption of HTTP communication
  • Self-signed client certificates for users and Azure DevOps Pipelines

The PowerShell script to prepare for cluster deployment looks really similar to Ivan’s script. Instead of uploading a self-signed certificate to the vault, we upload our wildcard certificate and create three self-signed certificates for client access:

  • Read only access
  • Admin access
  • Another admin access for Azure DevOps Pipelines

. "$PSScriptRoot\..\Common.ps1"

# Declare some variables
$ResourceGroupName = "everything-sfcluster"
$Location = "North Europe"
$KeyVaultName = "cluster-vault"

# Check that you're logged in to Azure 
# before running anything at all, the call will
# exit the script if you're not
CheckLoggedIn

# Ensure resource group we are deploying to exists.
EnsureResourceGroup $ResourceGroupName $Location

# Ensure that the Key Vault resource exists.
$keyVault = EnsureKeyVault $KeyVaultName $ResourceGroupName $Location

# Upload the buyagift wildcard certificate
$cert = UploadCertificate $KeyVaultName "certName" $PSScriptRoot "certPassword"

# Create three self-signed certificates and return thumbprints to use in the cluster template
$readOnlyThumb, $adminThumb, $devOpsThumb = EnsureSelfSignedClientCertificates $PSScriptRoot

To use our wildcard certificate we also had to change the Service Fabric Explorer URL and create a DNS A record pointing to the DNS name of the public IP associated with our public load balancer. Our A record looks like this: service-fabric-explorer.our-domain.com => xxxxxxx.northeurope.cloudapp.azure.com

Check below the relevant properties of the Service Fabric resource in the ARM template.

"properties": {
    "certificateCommonNames": {
        "commonNames": [
            {
                "certificateCommonName": "*.our-domain.com", // remember the * for a wildcard certificate
                "certificateIssuerThumbprint": ""
            }
        ],
        "x509StoreName": "My"
    },
    "clientCertificateThumbprints": [
        {
            "isAdmin": false,
            "certificateThumbprint": "[parameters('readOnlyThumb')]" // thumbprint from the previous PowerShell script
        },
        {
            "isAdmin": true,
            "certificateThumbprint": "[parameters('adminThumb')]" // thumbprint from the previous PowerShell script
        },
        {
            "isAdmin": false,
            "certificateThumbprint": "[parameters('devOpsThumb')]" // thumbprint from the previous PowerShell script
        }
    ],
    "managementEndpoint": "[concat('https://service-fabric-explorer.our-domain.com:', variables('fabricHttpGatewayPort'))]"
}

And the following in the Virtual Machine Scale Set resource:

"osProfile": {
    "adminPassword": "[parameters('adminPassword')]",
    "adminUsername": "[parameters('adminUsername')]",
    "computernamePrefix": "[parameters('vmNodeType0Name')]",
    "secrets": [
        {
            "sourceVault": {
                "id": "[parameters('sourceVaultValue')]"
            },
            "vaultCertificates": [
                {
                    "certificateStore": "My",
                    "certificateUrl": "[parameters('certificateUrlValue')]" // secret ID of the certificate uploaded to the vault by the previous PowerShell script
                }
            ]
        }
    ]
},

One important thing to remember is to replace your server certificate before it expires. If the certificate expires you’ll lose connection to the cluster: Service Fabric Explorer will stop working and you won’t be able to deploy anything. It happened to another team’s cluster in the company; you get an “Upgrade service unreachable” message in the Azure Portal, and the list of nodes and applications is empty.

The message had a link to a Cluster not reachable document on GitHub describing possible causes and mitigations. In our case it was the certificate.

If it has already happened, head over to the fix expired cluster certificate steps document from Microsoft.

Follow up

I think this is enough for now; I’ll continue in a follow-up post soon.


Search Non-ASCII characters


I can’t remember where I got this one, but here it is:

[^\x00-\x7f]

Just be sure to search using regular expressions.

Visual Studio Code regular expression search
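The same character class works outside the editor too; for example, in JavaScript:

```javascript
// Matches any character outside the 7-bit ASCII range (0x00–0x7F)
const nonAscii = /[^\x00-\x7f]/g;

const text = 'Café naïve señor';
console.log(text.match(nonAscii)); // [ 'é', 'ï', 'ñ' ]
```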


Slug or Permalink


Turns out that what I call a permalink is actually called a slug: the permalink is the full URL, and the slug is “the part of a URL that identifies a page in human-readable keywords”. You can read more about it here and here.

Below is the function I used to slugify blog titles. I got it from Matt Hagemann; this is the link to the gist.

function slugify(string) {
  const a = 'àáâäæãåāăąçćčđďèéêëēėęěğǵḧîïíīįìłḿñńǹňôöòóœøōõṕŕřßśšşșťțûüùúūǘůűųẃẍÿýžźż·/_,:;'
  const b = 'aaaaaaaaaacccddeeeeeeeegghiiiiiilmnnnnooooooooprrsssssttuuuuuuuuuwxyyzzz------'
  const p = new RegExp(a.split('').join('|'), 'g')

  return string.toString().toLowerCase()
    .replace(/\s+/g, '-') // Replace spaces with -
    .replace(p, c => b.charAt(a.indexOf(c))) // Replace special characters
    .replace(/&/g, '-and-') // Replace & with 'and'
    .replace(/[^\w\-]+/g, '') // Remove all non-word characters
    .replace(/\-\-+/g, '-') // Replace multiple - with single -
    .replace(/^-+/, '') // Trim - from start of text
    .replace(/-+$/, '') // Trim - from end of text
}

1st version

First buggy version of the blog is out!