Free Subscription Ended (and how to set up cost alerts)

I had quite a lot to do over the last couple of weeks and barely found time to continue my journey to becoming an Azure Solutions Architect. My free subscription ended and Microsoft kindly asks me to upgrade.

Understanding the Costs

I want to understand the costs better! I guess this is something everyone who uses their own credit card wants to know. So the first thing I did was delete all my resources without thinking too much; I had nothing worth keeping.

As the costs are a bit unpredictable for me at the moment, I wanted to be sure not to spend too much money. In the cost management section of the subscription I stumbled upon cost alerts. Sounds like a good start!

Cost Alerts

Here is a step-by-step guide on how I created my first budget with a cost alert.

First we open the subscription. In the left side navigation we will see “Cost alerts”. The view will be empty, as there are no cost alerts yet, but from there we can create a new budget.


This will open the creation form and I think it is quite obvious what we can configure here.

If we scroll down a little bit, we will see a summary of the costs so far. This may help to define a meaningful budget.

On the next page we can define the actual alerts. I decided to alert myself once I spend a small amount of money. The alert should be sent to my email address.
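Out of curiosity I also checked whether this can be scripted. The Azure CLI has a consumption budget command group; below is a rough sketch with made-up names, amounts and dates – I haven't verified every flag, and the email notifications may still need to be configured in the portal or via an ARM template.

az consumption budget create \
  --budget-name my-monthly-budget \
  --amount 10 \
  --category cost \
  --time-grain monthly \
  --start-date 2020-08-01 \
  --end-date 2021-08-01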

Conclusion

Creating a budget and cost alerts is quite simple. What I miss is an option to send a test email, to see if it passes all spam filters and to get a feeling for what information I would get. If Microsoft took this up, it would be a useful extension, I believe.

Getting Started with Azure

I’ve already learned quite a bit about Azure networking on my way to becoming an Azure Solutions Architect. Reading is great, but hands-on experience is really helpful to strengthen the knowledge. But how to start?

Luckily there is an Azure Free Account. Subscribing to it is very easy and you get a bit of play money. Starting on a fresh subscription makes sense in my view as you can play around with the most basic building blocks easily and it’s not yet polluted with a lot of mysterious stuff that someone else has created.

Although I already knew some of the basic concepts, it is still great to play around with them to understand better how things relate. So here are a bunch of things I learned after playing around on my fresh subscription for a few hours.

The below may not be 100% accurate. It’s just how I personally understand the concepts after a few hours of usage. So apologies for any misinformation.

Portal

The Azure portal is the web application that you can use to administer your Azure resources. I’m not sure if you can find every option for every little detail, but the most common tasks can be done here (creating VMs, networks, etc.). To get started I think the portal is really good. In practice you would rather use another option (PowerShell, Azure CLI, ARM templates).

Subscription

Everything starts with a subscription. The subscription is the root building block to which you can have access. If you create the free Azure account, Microsoft creates a subscription for you and adds your user as its administrator. Your Azure (or Microsoft AD?) identity lives separately from the subscription; you can get access to one or many subscriptions.

Microsoft bills you based on the subscription, so you could use subscriptions to structure things along this dimension (e.g. one subscription per team, per solution, per environment, ..). This really depends on your organization.
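As an aside, if you prefer the CLI over the portal, listing and switching subscriptions is straightforward (the subscription name here is a placeholder):

az account list --output table
az account set --subscription "My Subscription"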

I’m not sure how flexible you can move resources between subscriptions (or if it is possible at all). I need to try that out later.
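From what I’ve read, many resource types can be moved with az resource move – an untested sketch with placeholder values:

az resource move \
  --ids <resource-id> \
  --destination-group target-rg \
  --destination-subscription-id <target-subscription-id>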

Resource Group

Inside a subscription you can manage resource groups. Resource groups act as containers for your resources (e.g. they contain your VMs). When you create a resource group you have to specify a location. This was not clear to me at first, but after reading a bit about it: the metadata about the managed resources is stored within the resource group – so you basically decide where this information is stored.

VM

Creating a virtual machine is really simple. You just click through the wizard and you have your VM up and running. A VM is a resource like everything else. I started off creating two Linux virtual machines using the Ubuntu image. Ultimately I want to play around with the networking. But it is quite an experience to create a VM without any hurdles. After a few minutes you can SSH into your VM and do whatever you like.

Step by Step

I guess this article is more like a diary, and writing stuff down helps me describe the concepts in my own words. But I assume the value for readers is rather limited. So to have something useful, here are the step-by-step instructions.

1. Create a Resource Group

You can create an RG alongside creating a VM. But I decided to create it separately, starting from the subscription.

List subscriptions

If you are on the home screen of the portal, click on the “Subscriptions” tile in the top section.

This will lead you to an overview screen where you see all your subscriptions. Here you could also add additional subscriptions. But for now just click on the subscription that was generated for your free account.

This leads you to the details of the subscription. Here you can find a lot of useful information, but for now we want to create two new resource groups inside the subscription. You find the resource groups in the left hand navigation.

This brings you to the overview page for all RGs in the selected subscription. Here you can create new RGs.

You basically need to pass a name. The region is relevant for storing the metadata about the resources managed inside the resource group. So you can still create VMs wherever you like; it does not limit you later.
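For reference, the CLI equivalent is a one-liner (name and region are placeholders):

az group create --name my-rg --location westeurope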

2. Create a VM

Now that we have a resource group, we can create resources belonging to the group. Let’s create a Linux VM, starting from the home screen.

Next hit the add button to start the VM creation wizard.

I created a Linux VM with the Ubuntu image and decided to start with a username/password login. To reach the VM from my machine I allow inbound traffic on the SSH and HTTP ports.

After hitting the review and create button everything is validated. If your selections are OK you can hit the create button, your VM is provisioned, and after a few minutes it is ready to use.
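For completeness, here is roughly the same setup with the Azure CLI – a sketch with placeholder names and a password you should pick yourself. I believe SSH is opened by default for Linux images when creating through the CLI; HTTP is opened explicitly.

az vm create \
  --resource-group my-rg \
  --name my-ubuntu-vm \
  --image UbuntuLTS \
  --admin-username azureuser \
  --admin-password '<a-strong-password>'

az vm open-port --resource-group my-rg --name my-ubuntu-vm --port 80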

Summary

  1. You can have access to one or many subscriptions
  2. Subscriptions are the level at which you are billed
  3. How you organize subscriptions depends on your organizational structure (e.g. per team, per environment, etc.)
  4. Resource groups organize resources logically, whatever that means for you (depends on the organization in your company)
  5. VMs and other resources are bound to a resource group
  6. Creating a VM can be done in a few simple steps going through the wizard
  7. Whenever I wanted to know a bit more about an option it was really simple to find the documentation. The documentation is really good!

Becoming an Azure Solution Architect

I decided to become an Azure Solutions Architect. I’m by no means an expert at this stage, but the main reason I’m writing a blog is to write down what I’ve learned.

Why Solution Architect

I haven’t checked out all the other possible certifications in detail, but I felt like this gives me the ability to navigate through the Azure world quite well. Maybe not in all details, but to a level that allows me to design proper solutions and give guidance to other teams.

If you want to know a lot of stuff to a certain level, so that you can make decisions, maybe this is also the track for you.

Current Situation

I don’t know much about Azure. But I do have more than 20 years of experience with software development and I know how to build solutions from nothing to production. My hope is that this experience helps me during this journey.

I did the AZ-900 fundamentals certification and it was really tough. Although I got the certificate, the questions were quite difficult for me.

Also, I do have experience with “Cloud”. I was even interviewed in 2014 by the Swiss IT Magazine as one of the first cloud providers of online accounting services. But how we thought about “Cloud” back then is nothing compared to the services we have today.

https://www.itmagazine.ch/artikel/58115/Cloud-Strategien_fuer_Schweizer_KMU.html

First Steps

First of all we need to take a look at what is required to become an Azure Solutions Architect. Microsoft really did a great job with their documentation, and they provide a lot of free online learning material.

Here you can read about the certification and what’s required to get the certification: https://docs.microsoft.com/en-us/learn/certifications/azure-solutions-architect

We need to pass two exams: AZ-303 and AZ-304.

First of all I will go through the online training material and try out as many things as possible – and hopefully I find the time to document as much as I can.

Being a top-down person, I usually try to get an overview of what is around and figure out how things relate and what they are good for. Mind maps are a technique I personally use a lot. This is how far I got – not that much. Looking forward to learning many new things. 🙂

Typolino – Firebase and GitLab CI/CD

I decided to migrate the Typolino application to GitLab and this blog post describes how I did it.

Until now I hosted my code on GitHub or Bitbucket and wasn’t really interested in GitLab. But after taking a closer look I realized that GitLab has some very nice features on board, specifically in the area of CI/CD. I’m really new to GitLab, so apologies if I did some things wrong – happy to learn how to do it better.

First steps

The first thing was to create an account on GitLab. I just used one of my social logins and I was in. This one was very easy.

Next I had to import my project from Bitbucket. There are a few import options, and after granting GitLab access to my repositories I just needed to select the repository I’d like to import.

Just hit that import button and you are done.

Importing repositories is really simple, I was positively surprised how well that worked.

Update local project

Being a lazy guy, I really didn’t want to set up my project again on all my machines. Git makes this quite easy as well:

  • remove the existing origin remote
  • add a new origin remote
  • push to the new origin

On the GitLab project page you can copy your new remote URL:

git remote remove origin
git remote add origin https://your-remote-url/...
git push origin master

..after that I could just continue to work on my code and everything was migrated to GitLab.

CI/CD

The reason I wanted to move to GitLab was to play around with CI/CD. The idea of pushing some code and letting someone else build and deploy my stuff is very tempting. Especially as this is more of a personal project, I also don’t need something fancy. The main concern I had was: how can I log in to Firebase?

But let me first explain the steps I usually took on my local machine before.

ng build --prod
firebase deploy --only hosting

But of course there are more steps if you set up a machine from scratch, and all these commands need to run in the CI/CD pipeline as well. So I would need to do something like this:

npm install -g @angular/cli
npm install -g firebase-tools
firebase login
ng build --prod
firebase deploy --only hosting

Luckily the CI/CD feature can easily be enabled for a repository. I just had to add a file describing the pipeline and that’s it! This file resides in the root of the repository and is named .gitlab-ci.yml

image: node:latest

stages:
  - build
  - deploy

# the dist/ folder is cached so the deploy stage can
# pick up the output of the build stage
cache:
  paths:
    - dist/

build:
  stage: build
  script:
    - npm install -g @angular/cli
    - npm install
    - ng build --prod

deploy:
  stage: deploy
  script:
    - npm install -g firebase-tools
    # $FIREBASE_TOKEN is a CI/CD variable configured in GitLab
    - firebase deploy --only hosting --token $FIREBASE_TOKEN

My pipeline has two stages:

  • build the application
  • deploy the files to Firebase

As these stages run in separate containers but are dependent upon each other I had to cache the dist/ folder. This ensures the files built in the build stage are also available in the deploy stage.
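As an aside, GitLab also has a dedicated artifacts feature for passing build output between stages, which is arguably the more idiomatic choice here – an untested variant of the build job:

build:
  stage: build
  script:
    - npm install -g @angular/cli
    - npm install
    - ng build --prod
  # artifacts from this job are passed on to later stages automatically
  artifacts:
    paths:
      - dist/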

My main concern using GitLab was that I would have to put my credentials into the code (to log in to Firebase). But luckily Firebase allows you to generate an access token that you can just pass along with your commands (as seen above). I configured the actual token as a variable, which is injected automatically into your build. So this is a very nice way to keep your secrets secret.

Get a token

On your local development machine just run firebase login:ci and follow the steps. It is very well described in the Firebase Docs. It will spit out the token on the command line and you can grab it there.
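For reference, the flow looks roughly like this – run it locally, never in the pipeline:

firebase login:ci
# prints a long-lived token; store it as the FIREBASE_TOKEN
# CI/CD variable in GitLab instead of committing it anywhere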

Configure a CI/CD Variable

Not sure if there is a simpler way, but I had to move my project into a group. Then in the group settings for CI/CD you can expand the variables section and define your custom variables.

Just add a new variable and you can use it right away in your pipeline.

Run the pipeline

Just commit some code and the pipeline starts automatically.

Just deployed a new version to production

Conclusion

It’s pretty cool how easy everything was to set up and configure. I still feel in control of every single step, and the possibility to store secrets in a secure way resolved one of my main concerns. I’ve just scratched the surface of GitLab, but this project helped me understand some of the concepts better and I’m looking forward to migrating a bigger project and setting up a nice pipeline.

Typolino – Analyze Bundle Size

Finally I had some time to fix some bugs on the weekend – and I switched all queries to get(), as I don’t see the point in live updates for this application. One may ask why use Firebase at all – but I have to say it’s more than just the realtime features.

How to

To find out more about the bundle size of your application you can use a bundle analyzer. The tool is really easy to install and use.

npm install --save-dev webpack-bundle-analyzer 

Next, build the application and generate the stats file, which is required to analyze the bundles.

ng build --stats-json 

And finally run the tool:

npx webpack-bundle-analyzer .\dist\typolino\stats-es2015.json

If you want to see the effects of tree shaking you can analyze the prod bundles as well (same as above, but run ng build --prod --stats-json), as shown below.
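In full, assuming the same output path as above:

ng build --prod --stats-json
npx webpack-bundle-analyzer .\dist\typolino\stats-es2015.json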

Typolino – Add animations

I never used animations with Angular. Funny as I remember having used the AngularJS variant quite excessively.

So why not try it out. I’m pretty sure there is a cleverer way of doing it but maybe I just need to get used to the system first.

It seems one can simply define the animations as part of a component: basically you define a trigger and possible transitions. As I wanted to make the image border light up nicely I decided to use keyframes.

import { Component } from '@angular/core';
import {
  animate,
  keyframes,
  style,
  transition,
  trigger,
} from '@angular/animations';

@Component({
  selector: 'app-lesson',
  templateUrl: './lesson.component.html',
  styleUrls: ['./lesson.component.less'],
  animations: [
    trigger('lastInput', [
      transition('* => correct', [
        animate(
          '0.4s',
          keyframes([
            style({ backgroundColor: '#21e6c1' }),
            style({ backgroundColor: '#dbdbdb' }),
          ])
        ),
      ]),
      transition('* => wrong', [
        animate(
          '0.4s',
          keyframes([
            style({ backgroundColor: '#ffbd69' }),
            style({ backgroundColor: '#dbdbdb' }),
          ])
        ),
      ]),
    ]),
  ],
})

And this is the template part:

<img
  [@lastInput]="lastInput"
  (@lastInput.done)="resetAnimation()"
  class="lesson__img"
  [hidden]="!imageLoaded"
  [src]="currentWord.imageUrl"
  (load)="loaded()"
/>

Basically this lets the image “listen” to changes of the state. Angular evaluates lastInput and triggers the defined animation. So whenever we set lastInput to either correct or wrong, the animation is triggered. We can now trigger it programmatically when a letter is typed:

// keypress handler of the component ("handleKey" is an illustrative name)
handleKey(event: KeyboardEvent) {
  const newWord = this.userWord + event.key;
  if (this.currentWord.word.toUpperCase().startsWith(newWord.toUpperCase())) {
    this.lastInput = 'correct';
    this.userWord = newWord;
    this.checkWord();
  } else {
    this.lastInput = 'wrong';
    this.millisSpent += 2_000; // time penalty for a wrong letter
    this.highscoreMissed = this.millisSpent > this.lesson.bestTime;
  }
}

To ensure we can play the animation over and over, we somehow need to reset the trigger. I'm really not sure if there isn't an easier way, but (@lastInput.done)="resetAnimation()" solves the problem for me.

  resetAnimation() {
    this.lastInput = null;
  }

Conclusion

I’m really not an expert on the animations part of Angular. But it looks pretty well thought out and I feel like I’m in control of the animations.

Typolino – Web Worker Revisited

The first iteration of the web worker was OK and did the job. But somehow it didn’t feel good enough, and I wanted to rewrite it using RxJS. The main reason: with RxJS I have built-in functionality to control concurrency, and I don’t want a lesson with a lot of images to go crazy. So I decided to rewrite everything and (although painful) to always pass all data down the stream. I’m not sure if this would be considered good practice, but I wanted to try it out. It seems quite natural to use tap() and/or access variables from an enclosing scope inside an operator – but if you think about proper decomposition, purity and testability..

/// <reference lib="webworker" />

import { Lesson } from './lesson';
import { environment } from '@typolino/environments/environment';
import * as firebase from 'firebase/app';
import 'firebase/storage';
import 'firebase/auth';

import { from, of } from 'rxjs';
import { mergeMap, withLatestFrom } from 'rxjs/operators';

const firebaseConfig = environment.firebase;
firebase.initializeApp(firebaseConfig);

addEventListener('message', ({ data }) => {
  const lesson = data as Lesson;

  from(lesson.words)
    .pipe(
      withLatestFrom(of(lesson)),
      mergeMap(
        ([word, lesson]) =>
          from(
            firebase
              .storage()
              .ref(`${lesson.id}/${word.imageId}`)
              .getDownloadURL()
          ).pipe(withLatestFrom(of(word))),
        5 // concurrency
      ),
      mergeMap(([downloadUrl, word]) =>
        from(
          fetch(downloadUrl, {
            mode: 'no-cors',
            cache: 'default',
          })
        ).pipe(withLatestFrom(of(downloadUrl), of(word)))
      )
    )
    .subscribe(([response, downloadUrl, word]) => {
      // the response itself is unused – the fetch just warms the browser cache
      word.imageUrl = downloadUrl;
      postMessage({
        imageId: word.imageId,
        imageUrl: downloadUrl,
      });
    });
});

It does almost the same as before, but I had to rewrite some parts. The good thing is that we can now control concurrency and add delays etc. as we wish.

Processing the messages has changed a bit too, as we don’t have the index anymore. I don’t think it is terribly inefficient, but could be improved:

worker.onmessage = ({ data }) => {
  from(lesson.words)
    .pipe(
      withLatestFrom(of(data)),
      filter(([word, data]) => word.imageId === data.imageId)
    )
    .subscribe(([word, data]) => {
      word.imageUrl = data.imageUrl;
    });
};
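As a side note, a plain Array.prototype.find would achieve the same without spinning up an observable – a minimal sketch, assuming imageIds are unique:

worker.onmessage = ({ data }) => {
  // look up the word that belongs to the loaded image and attach the URL
  const word = lesson.words.find((w) => w.imageId === data.imageId);
  if (word) {
    word.imageUrl = data.imageUrl;
  }
};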

Typolino – Web Worker

Just for fun I was thinking of fetching the image download URLs, and the images themselves, in a web worker. If the URL is already there, perfect; if not, we wait for the web worker to complete its task.

Actually, web workers are quite well supported and it is super simple to add your own web worker to an Angular application. I think the easiest way to describe a web worker is as a background script, or some sort of thread that you can spawn to do some work for you.

If you have a strong Java background you are surely familiar with threads and the issues that arise if you misuse synchronization locks. Web workers are simpler, as they provide a clear message-passing interface to communicate with your application. You don’t have to care about synchronization.

Create a Web Worker

To create a new web worker we can use our beloved ng g command.

ng g web-worker image-loader

This will create a more or less empty web worker that we can use (see the skeleton after the list below). The web worker interface is really simple:

  • We can post messages to it
  • We can get messages from it
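For reference, the generated skeleton looks roughly like this (from memory – it may differ slightly between CLI versions):

/// <reference lib="webworker" />

addEventListener('message', ({ data }) => {
  const response = `worker response to ${data}`;
  postMessage(response);
});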

So here is what we would like to achieve: once we start a Typolino lesson, we pass the lesson to the worker so it loads the data in the background. Once we’ve got an image URL we try to fetch it. To be honest I’m not 100% sure we win anything (maybe it’s even worse, given the serialization overhead) as the operations are asynchronous by nature anyhow – but why not try it with web workers.

/// <reference lib="webworker" />

import { Lesson } from './lesson';
import { environment } from '@typolino/environments/environment';
import * as firebase from 'firebase';

const firebaseConfig = environment.firebase;
firebase.initializeApp(firebaseConfig);

addEventListener('message', ({ data }) => {
  const lesson = data as Lesson;
  lesson.words.forEach((word, index) => {
    firebase
      .storage()
      .ref(`${lesson.id}/${word.imageId}`)
      .getDownloadURL()
      .then((url) =>
        Promise.all([
          fetch(url, {
            mode: 'no-cors',
            cache: 'default',
          }),
          Promise.resolve(url),
        ])
      )
      .then(([response, url]) => {
        postMessage({
          index,
          url,
        });
      });
  });
});

Create the worker..

const worker = new Worker('@typolino/app/image-loader.worker', {
  type: 'module',
});

..and in the lesson.component

ngOnInit(): void {
  this.route.paramMap
    .pipe(
      first(),
      map((params) => params.get('lessonId')),
      switchMap((lessonId: string) =>
        this.lessonService.selectLesson(lessonId)
      )
    )
    .subscribe((lesson) => {
      this.lesson = lesson;
      this.setupWord(this.lesson.words[0]);

      worker.onmessage = ({ data }) => {
        this.lesson.words[data.index].imageUrl = data.url;
      };

      worker.postMessage(this.lesson);
    });
}

It works! When we load a lesson we see that the browser is fetching all the URLs and images. As soon as the first image is available it should be rendered and the data is read from the cache as expected.

Conclusion

Using web workers is quite straightforward. Of course it’s a bit cheap, as I’m only supporting modern browsers – but it is way more fun to code like this. When using Typolino the images are just there – for me the application feels really fast. There are definitely other techniques, but it was fun trying this out.

Typolino – Prefetching Assets

Caching has a big effect. But a first time visitor might still have to wait for certain resources to download.

To improve the experience for the users we can try to prefetch resources that we most probably will need later (e.g. after the login). In our example application Typolino the candidates are found easily:

  • the alphabet.mp3 audio file
  • the placeholder image
  • the card image (which besides making the UI look a bit fancier is totally useless)

For this we can add the prefetch instructions directly to our index.html

<link rel="prefetch" href="assets/alphabet.mp3" as="fetch">
<link rel="prefetch" href="assets/img_placeholder.jpg" as="image">
<link rel="prefetch" href="assets/lesson_1.jpg" as="image">

If we clear the cache and just navigate to our login page we will see that the browser is fetching the files as soon as it finds some time for it:

…and later when we actually need the files…

Find the full source code below in case something is a bit out of context.

<!doctype html>
<html lang="en">

<head>
  <meta charset="utf-8">
  <title>Typolino</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link href="https://fonts.googleapis.com/css2?family=Baloo+Paaji+2&family=Roboto:wght@300&display=swap" rel="stylesheet">
  <link rel="prefetch" href="assets/alphabet.mp3" as="fetch">
  <link rel="prefetch" href="assets/img_placeholder.jpg" as="image">
  <link rel="prefetch" href="assets/lesson_1.jpg" as="image">
</head>

<body>
  <app-root></app-root>
</body>

</html>

Conclusion

Pre-fetching resources can improve the overall UX of your application, and it’s worth having a look at this capability. For Typolino it may look rather simple, and I’m not sure we can easily extend this to Firebase queries as well (I don’t really want to construct the URL myself), but I’m sure you will find a chunk, image, script or other resource that may be required in just a moment.

Typolino – Cache Control

To improve the UX it is important to serve content as fast as possible. The Firebase hosting is pretty clever, but the images we serve from storage have a default cache configuration that disallows caching the content. We can control the Cache-Control header when uploading the images.

bucket.file(imagePath).createWriteStream({
  metadata: {
    // allow clients to cache the image for 7 days
    cacheControl: 'public, max-age=604800',
  },
});

This will allow caching the content for 7 days. Still – accessing the images is quite slow, as we first need to get the actual download URL. I wonder whether it would make sense to store the actual URL in the DB so that we can save the additional lookup call. We could also add some PWA features to preload the images.