Becoming an Azure Solution Architect

I decided to become an Azure Solution Architect. I’m by no means an expert at this stage; the main reason I’m writing this blog is to write down what I’ve learned.

Why Solution Architect

I haven’t checked out all the other possible certifications in detail, but I felt like this gives me the ability to navigate through the Azure world quite well. Maybe not in all details, but to a level that allows me to design proper solutions and give guidance to other teams.

If you want to know a lot of things to a certain level so that you can make decisions, maybe this is the track for you too.

Current Situation

I don’t know much about Azure. But I do have more than 20 years of experience with software development and I know how to build solutions from nothing to production. My hope is that this experience helps me during this journey.

I did the AZ-900 fundamentals certification and it was really tough. Although I got the certificate, the questions were quite difficult for me.

Also, I do have experience with “Cloud”. I was even interviewed by the Swiss IT Magazine in 2014 as one of the first cloud providers of online accounting services. But how we thought about “Cloud” back then is nothing compared to the services we have today.

First Steps

First of all we need to take a look at what is required to become an Azure Solutions Architect. Microsoft really did a great job with their documentation, and they provide a lot of free online learning material.

Here you can read about the certification and what’s required to get the certification:

We need to pass two exams: AZ-303 and AZ-304.

To start, I will go through the online training material and try out as many things as possible – and hopefully I find the time to document as much as possible.

Being a top-down person, I usually try to get an overview of what is out there and figure out how things relate and what they are good for. Mind maps are a technique I personally use a lot for this. So far I haven’t gotten that far – looking forward to learning many new things. 🙂

Typolino – Firebase and GitLab CI/CD

I decided to migrate the Typolino application to GitLab and this blog post describes how I did it.

Until now I hosted my code on GitHub or Bitbucket and wasn’t really interested in GitLab. But after taking a closer look I realized that GitLab has some very nice features on board, specifically in the area of CI/CD. I’m really new to GitLab, so apologies if I did some things wrong – happy to learn how to do it better.

First steps

The first thing was to create an account on GitLab. I just used one of my social logins and I was in. This one was very easy.


Next I had to import my project from Bitbucket. There are a few import options, and after granting GitLab access to my repositories I just needed to select the repository I’d like to import.

Just hit that import button and you are done.

Importing repositories is really simple, I was positively surprised how well that worked.

Update local project

Being a lazy guy I really didn’t want to set up my project again on all my machines. Git makes this one quite easy as well:

  • remove the existing origin remote
  • add a new origin remote (you can copy the new remote URL from the GitLab project page)
  • push to the new origin

git remote remove origin
git remote add origin https://your-remote-url/...
git push origin master

…after that I could just continue working on my code, and everything was migrated to GitLab.
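If you want to convince yourself the switch worked before pushing, `git remote -v` lists the configured remotes. A quick sketch in a throwaway repository (the URL is the same placeholder as above, not a real remote):

```shell
# Dry run in a throwaway repo to see what the remote commands do.
git init -q demo && cd demo
git remote add origin https://your-remote-url/...
git remote -v
# origin  https://your-remote-url/... (fetch)
# origin  https://your-remote-url/... (push)
```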


The reason I wanted to move to GitLab was to play around with CI/CD. The idea to push some code and let someone else just build and deploy my stuff is very tempting. Specifically as this is more like a personal project I also don’t need something fancy. The main concern I had was: how can I login to Firebase?

But let me first explain the steps I usually took on my local machine before.

ng build --prod
firebase deploy --only hosting

But of course there are more steps if you set up a machine from scratch, and all these commands need to run in the CI/CD pipeline as well, so I would need to do something like this:

npm install -g @angular/cli
npm install -g firebase-tools
firebase login
ng build --prod
firebase deploy --only hosting

Luckily the CI/CD feature can easily be enabled for a repository. I just had to add a file describing the pipeline and that’s it! This file resides in the root of the repository and is named .gitlab-ci.yml

image: node:latest

# note: the job names and the stages/cache/script keys were lost in
# formatting and are reconstructed here
stages:
  - build
  - deploy

cache:
  paths:
    - dist/

build:
  stage: build
  script:
    - npm install -g @angular/cli
    - npm install
    - ng build --prod

deploy:
  stage: deploy
  script:
    - npm install -g firebase-tools
    - firebase deploy --only hosting --token $FIREBASE_TOKEN

My pipeline has two stages:

  • build the application
  • deploy the files to Firebase

As these stages run in separate containers but are dependent upon each other I had to cache the dist/ folder. This ensures the files built in the build stage are also available in the deploy stage.
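Spelled out on its own, the cache declaration is tiny (a sketch; GitLab’s `artifacts` feature would be an alternative mechanism for passing build output between stages):

```yaml
cache:
  paths:
    - dist/
```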

My main concern using GitLab was that I would have to put my credentials into the code (login to Firebase). But luckily Firebase allows you to generate an access token that you can just pass along with your commands (as seen above). I configured the actual token as a variable, which is injected automatically in your build. So this is a very nice way to keep your secrets secret.

Get a token

On your local development machine just run firebase login:ci and follow the steps. It is very well described in the Firebase Docs. It will spit out the token on the command line and you can grab it there.

Configure a CI/CD Variable

Not sure if there is a simpler way, but I had to move my project into a group. Then on the group settings for CI/CD you can expand the variables section and define your custom variables.

Just add a new variable and you can use it right away in your pipeline.

Run the pipeline

Just commit some code and the pipeline starts automatically.

Just deployed a new version to production


It’s pretty cool how easy everything was to set up and configure. I still feel in control of every single step, and the possibility to store secrets in a secure way resolved one of my main concerns. I’ve just scratched the surface of GitLab, but this project helped me understand some of the concepts better, and I’m looking forward to migrating a bigger project and setting up a nice pipeline.

Typolino – Analyze Bundle Size

Finally I had some time to fix some bugs over the weekend – and I switched all queries to get(), as I don’t see the point of live updates for this application. One may ask why use Firebase at all then, but I have to say it offers more than just the realtime features.

How to

To find out more about the bundle size of your application you can use a bundle analyzer. The tool is really easy to install and use.

npm install --save-dev webpack-bundle-analyzer 

Build the application and generate the stats file, which is required to analyze the bundles.

ng build --stats-json 

And finally run the tool

npx webpack-bundle-analyzer .\dist\typolino\stats-es2015.json

If you want to see the effects of tree shaking you can analyze the prod bundles as well (same as above, but run ng build --prod --stats-json).

Typolino – Add animations

I have never used animations with Angular. Funny, as I remember using the AngularJS variant quite extensively.

So why not try it out. I’m pretty sure there is a cleverer way of doing it but maybe I just need to get used to the system first.

Seems as if one can simply define the animations as part of a component. Basically you define a trigger and possible transitions. As I wanted to make the image border light up nicely I decided to use keyframe style.

@Component({
  selector: 'app-lesson',
  templateUrl: './lesson.component.html',
  styleUrls: ['./lesson.component.less'],
  animations: [
    trigger('lastInput', [
      transition('* => correct', [
        // the animate duration was lost in formatting; 0.5s is a reconstruction
        animate('0.5s', keyframes([
          style({ backgroundColor: '#21e6c1' }),
          style({ backgroundColor: '#dbdbdb' }),
        ])),
      ]),
      transition('* => wrong', [
        animate('0.5s', keyframes([
          style({ backgroundColor: '#ffbd69' }),
          style({ backgroundColor: '#dbdbdb' }),
        ])),
      ]),
    ]),
  ],
})

And this is the template part – it boils down to binding the trigger on the image, roughly:

<img [@lastInput]="lastInput" (@lastInput.done)="resetAnimation()">
Basically this lets the image “listen” to changes of the state. Angular evaluates lastInput and triggers the defined animation. So whenever we set lastInput to either correct or wrong, the animation is triggered. We can now trigger it programmatically when a letter is typed:

  onKeyPress(event: KeyboardEvent) { // method name reconstructed
    const newWord = this.userWord + event.key;
    if (this.currentWord.word.toUpperCase().startsWith(newWord.toUpperCase())) {
      this.lastInput = 'correct';
      this.userWord = newWord;
    } else {
      this.lastInput = 'wrong';
      this.millisSpent += 2_000; // penalty
      this.highscoreMissed = this.millisSpent > this.lesson.bestTime;
    }
  }

To ensure we can play the animation over and over, we somehow need to reset the trigger. Really not sure if there isn’t an easier way, but (@lastInput.done)="resetAnimation()" solves the problem for me.

  resetAnimation() {
    this.lastInput = null;
  }


I’m really not an expert on the animations part of Angular. But it looks pretty thought through and I feel like I’m in control of the animations.

Typolino – Web Worker Revisited

The first iteration of the web worker was OK and did the job. But somehow it didn’t feel good enough, so I wanted to rewrite it using RxJS. The main reason: RxJS gives me built-in functionality to control concurrency, and I don’t want a lesson with a lot of images to go crazy. Therefore I decided to rewrite everything and (although painful) always pass all data down the stream. Not sure if this would be considered good practice, but I wanted to try it out. It seems quite natural to use tap() and/or access variables from an enclosing scope inside an operator – but if you think about proper decomposition, purity and testability, passing everything down the stream felt cleaner.

/// <reference lib="webworker" />

import { Lesson } from './lesson';
import { environment } from '@typolino/environments/environment';
import * as firebase from 'firebase/app';
import 'firebase/storage';
import 'firebase/auth';

import { from, of } from 'rxjs';
import { mergeMap, withLatestFrom } from 'rxjs/operators';

const firebaseConfig = environment.firebase;
firebase.initializeApp(firebaseConfig);

addEventListener('message', ({ data }) => {
  const lesson = data as Lesson;

  from(lesson.words)
    .pipe(
      // resolve the download URL for every word, at most five at a time
      // (the exact storage path was lost in formatting and is reconstructed)
      mergeMap(
        (word) =>
          from(firebase.storage().ref(word.imageId).getDownloadURL()).pipe(
            withLatestFrom(of(word))
          ),
        5 // concurrency
      ),
      // fetch the image so it lands in the browser cache
      mergeMap(([downloadUrl, word]) =>
        from(
          fetch(downloadUrl, {
            mode: 'no-cors',
            cache: 'default',
          })
        ).pipe(withLatestFrom(of(downloadUrl), of(word)))
      )
    )
    .subscribe(([response, downloadUrl, word]) => {
      word.imageUrl = downloadUrl;
      postMessage({
        imageId: word.imageId,
        imageUrl: downloadUrl,
      });
    });
});

It does almost the same as before, but I had to rewrite some parts. The good thing is that we can control concurrency now and add delays etc. as we wish.

Processing the messages has changed a bit too, as we don’t have the index anymore. I don’t think it is terribly inefficient, but it could be improved:

worker.onmessage = ({ data }) => {
  from(this.lesson.words)
    .pipe(
      withLatestFrom(of(data)),
      filter(([word, data]) => word.imageId === data.imageId)
    )
    .subscribe(([word, data]) => {
      word.imageUrl = data.imageUrl;
    });
};
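One possible improvement (my own sketch, not what the post ships): index the words by imageId once per lesson, so each worker message becomes a constant-time lookup instead of a scan. The Word shape and URL below are simplified placeholders:

```typescript
// Simplified stand-in for the lesson word objects
interface Word {
  imageId: string;
  imageUrl?: string;
}

const words: Word[] = [{ imageId: 'rose.jpg' }, { imageId: 'tulpe.jpg' }];

// Build the index once when the lesson is loaded
const byImageId = new Map<string, Word>();
for (const w of words) {
  byImageId.set(w.imageId, w);
}

// In the onmessage handler: O(1) lookup instead of scanning all words
const data = { imageId: 'rose.jpg', imageUrl: 'https://example.invalid/rose.jpg' };
const word = byImageId.get(data.imageId);
if (word) {
  word.imageUrl = data.imageUrl;
}
```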

Typolino – Web Worker

Just for fun I was thinking of using a web worker to fetch the image download URLs and the images themselves. If the URL is already there, perfect; if not, we wait for the web worker to complete its task.

Actually web workers are quite well supported and it is super simple to add your own web worker to an Angular application. I think the easiest way to describe a web worker would be a background script or some sort of thread that you can spawn that does some things.

If you have a strong Java background you are for sure familiar with threads and the issues that arise if you misuse synchronization locks. Web workers are simpler as they provide a clear interface to communicate with your application. You don’t have to care about synchronization.

Create a Web Worker

To create a new web worker we can use our beloved ng g command.

ng g web-worker image-loader

This will create a more or less empty web worker that we can use. The web worker interface is really simple:

  • We can post messages to it
  • We can get messages from it

So what we would like to achieve: once we start a Typolino lesson we pass the lesson to the worker so it can load the data in the background. Once we’ve got an image URL we try to fetch it. To be honest I’m not 100% sure we win anything (maybe it’s even worse, given the serialization overhead) as the operations are asynchronous by nature anyhow – but why not try it with web workers.
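The messages exchanged with the worker are plain structured-cloneable objects. A sketch of the shapes as I infer them from the snippets in this post (the interface names are my own):

```typescript
// What the component posts to the worker: the lesson with its words
interface WordInfo {
  imageId: string;
  imageUrl?: string;
}

interface LessonMessage {
  words: WordInfo[];
}

// What the worker posts back, once per resolved image
interface ImageLoadedMessage {
  index: number; // position of the word in lesson.words
  url: string;   // resolved download URL
}

// The component then patches the matching word:
const lesson: LessonMessage = { words: [{ imageId: 'rose.jpg' }] };
const reply: ImageLoadedMessage = { index: 0, url: 'https://example.invalid/rose.jpg' };
lesson.words[reply.index].imageUrl = reply.url;
```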

/// <reference lib="webworker" />

import { Lesson } from './lesson';
import { environment } from '@typolino/environments/environment';
import * as firebase from 'firebase';

const firebaseConfig = environment.firebase;
firebase.initializeApp(firebaseConfig);

addEventListener('message', ({ data }) => {
  const lesson = data as Lesson;
  lesson.words.forEach((word, index) => {
    firebase
      .storage()
      .ref(word.imageId) // exact storage path reconstructed
      .getDownloadURL()
      .then((url) =>
        Promise.all([
          fetch(url, {
            mode: 'no-cors',
            cache: 'default',
          }),
          url,
        ])
      )
      .then(([response, url]) => {
        postMessage({ index, url });
      });
  });
});

Create the worker..

const worker = new Worker('@typolino/app/image-loader.worker', {
  type: 'module',
});

..and in the lesson.component

  ngOnInit(): void {
    // this.route and this.lessonService are assumed names; the snippet
    // lost its surrounding lines
    this.route.paramMap
      .pipe(
        map((params) => params.get('lessonId')),
        switchMap((lessonId: string) => this.lessonService.getLesson(lessonId))
      )
      .subscribe((lesson) => {
        this.lesson = lesson;
        worker.postMessage(lesson);

        worker.onmessage = ({ data }) => {
          this.lesson.words[data.index].imageUrl = data.url;
        };
      });
  }


It works! When we load a lesson we see that the browser is fetching all the URLs and images. As soon as the first image is available it should be rendered and the data is read from the cache as expected.


Using web workers is quite straightforward. Of course it’s a bit cheap as I’m only supporting modern browsers – but it is way more fun to code like this. When using Typolino the images are just there; for me the application feels really fast. There are definitely other techniques, but it was fun trying this out.

Typolino – Prefetching Assets

Caching has a big effect. But a first time visitor might still have to wait for certain resources to download.

To improve the experience for the users we can try to prefetch resources that we most probably will need later (e.g. after the login). In our example application Typolino the candidates are found easily:

  • the alphabet.mp3 audio file
  • the placeholder image
  • the card image (which besides making the UI look a bit fancier is totally useless)

For this we can add the prefetch instructions directly to our index.html

<link rel="prefetch" href="assets/alphabet.mp3" as="fetch">
<link rel="prefetch" href="assets/img_placeholder.jpg" as="image">
<link rel="prefetch" href="assets/lesson_1.jpg" as="image">

If we clear the cache and just navigate to our login page we will see that the browser is fetching the files as soon as it finds some time for it:

…and later when we actually need the files…

Find the full source code below in case something is a bit out of context.

<!doctype html>
<html lang="en">

<head>
  <meta charset="utf-8">
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link href="" rel="stylesheet">
  <link rel="prefetch" href="assets/alphabet.mp3" as="fetch">
  <link rel="prefetch" href="assets/img_placeholder.jpg" as="image">
  <link rel="prefetch" href="assets/lesson_1.jpg" as="image">
</head>

<body>
  <!-- root component tag assumed; the body was lost in formatting -->
  <app-root></app-root>
</body>

</html>



Pre-fetching resources can improve the overall UX of your application, so it’s worth having a look at this capability. For Typolino it may look rather simple, and I’m not so sure we can easily extend this to some Firebase queries as well (I don’t really want to construct the URLs myself), but I’m sure you will find a chunk, image, script or any other resource that may be required in just a moment.

Typolino – Cache Control

To improve the UX it is important to serve content as fast as possible. Firebase Hosting is pretty clever, but the images we serve from the storage have a default cache configuration that disallows caching the content. We can control the Cache-Control header when uploading the images.

bucket.file(imagePath).createWriteStream({ // surrounding call reconstructed from the upload script
    metadata: {
        cacheControl: 'public, max-age=604800'
    }
})

This allows caching the content for 7 days. Still, accessing the images is quite slow as we first need to get the actual download URL. I wonder whether it would make sense to store the actual URL in the DB so that we can save the additional lookup call. We could also add some PWA features to preload the images.
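The max-age value is plain seconds. A tiny sketch of where 604800 comes from, computing the value instead of hard-coding it:

```typescript
// 7 days, expressed in seconds (the unit Cache-Control expects)
const SEVEN_DAYS_IN_SECONDS = 7 * 24 * 60 * 60; // 604800

const cacheControl = `public, max-age=${SEVEN_DAYS_IN_SECONDS}`;
```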

Typolino – Upload Lesson

Finally I found some time to write about the upload utility. Maybe we can add an online editor later, but to get started quickly I decided to use a plain Node application running on my local machine using the Firebase Admin SDK.

download the private key – and NEVER share it!

Now we are already connected to our DB and storage and can upload and manipulate the data as we need. Each lesson is stored in a folder containing the actual lesson structure and all the media files.

each lesson is in a separate folder

And this is an example of a single lesson:

The lesson.json file has the same content as we will upload to the DB.

{
  "name": "Blumen",
  "description": "Lerne die Blumen kennen",
  "difficulty": "EASY",
  "words": [
    {
      "word": "Rose",
      "sentence": "",
      "imageId": "rose.jpg",
      "audioId": "rose.mp4"
    },
    {
      "word": "Tulpe",
      "sentence": ".",
      "imageId": "tulpe.jpg",
      "audioId": "tulpe.mp4"
    },
    {
      "word": "Lilie",
      "sentence": "",
      "imageId": "lilie.jpg",
      "audioId": "lilie.mp4"
    },
    {
      "word": "Seerose",
      "sentence": "",
      "imageId": "seerose.jpg",
      "audioId": "seerose.mp4"
    },
    {
      "word": "Sonnenblume",
      "sentence": "",
      "imageId": "sonnenblume.jpg",
      "audioId": "sonnenblume.mp4"
    }
  ]
}
Note I just configured the audioId, but we are not using it yet. So.. you can just ignore it for now.

Upload Script

The upload script is really simple and can be used like:

node upload.js <folder>

It will look for the lesson.json file, create a lesson document in the DB and upload all non-JSON files to the storage. It expects that you have no typos in the configuration – but it’s not hard to fix a typo and just upload again.

We will discuss this topic later and (hopefully) improve even more. As many cloud services have a pay-as-you-go model (OpEx), it is important to minimize these costs as much as possible. Having to download multi-MB images just to render them at a small 640px × 480px resolution doesn’t make much sense. I love to tune things as much as possible; it feels like being back in the times when resources were scarce, and finding creative ways to save some money is really cool. 🙂 One aspect we definitely need to look into later is caching.

Long story short: we transform every image before uploading to fit our target resolution. For this I found a cool library called Jimp.

(async () => {
    const admin = require("firebase-admin");
    const fs = require("fs");
    const Jimp = require("jimp");

    const arguments = process.argv;
    const lessonFolder = arguments[2]; // find a better way e.g. oclif?

    console.log(`You would like to upload folder ${lessonFolder}`);

    const serviceAccount = require("C:/Dev/typolino-firebase-adminsdk-4dg10-b2047407d9.json");
    const app = admin.initializeApp({
        credential: admin.credential.cert(serviceAccount),
        databaseURL: "",
    });

    // upload the data
    const lesson = require(`./${lessonFolder}/lesson.json`);
    const firestore = app.firestore();
    const storage =;
    const bucket = storage.bucket("gs://");

    // create basic structure (collection name reconstructed)
    await firestore.collection("lessons").doc(lessonFolder).set(lesson);

    // upload all files
    fs.readdirSync(lessonFolder)
        .filter((file) => !file.endsWith(".json"))
        .forEach(async (file) => {
            const imagePath = `${lessonFolder}/${file}`;
            const image = await;
            await image.resize(640, 480);
            await image.quality(80);
            bucket.file(imagePath).createWriteStream().end(await image.getBufferAsync("image/jpeg"));
        });
})();

Not much tuning yet, but at least we don’t upload super big files. That’s all! For now it is good enough and does the job.

Typolino – Adding Speech

If we want to add sound to Typolino we need to consider the proper format. I’m not an expert, but a quick search usually answers these kinds of questions, and Wikipedia provides a good overview. I don’t care too much about the lossiness of the format as my audio capture device is pretty cheap anyhow. So let’s try mp3.

To record the audio I usually use Audacity. The tool is great and full of features. To make it a bit more complicated to code, let’s try to keep one audio file and just jump to the correct location for every character. Only the seek map needs to be configured and only one audio resource needs to be managed. This is a similar approach to what we used to do with CSS sprites. Having little to no experience I would say: just try it. 🙂
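The audio-sprite idea boils down to an offset map: each letter gets a start position and a duration within the single mp3 file. As a sketch (the helper and the durations below are made up for illustration, not the real measurements), such a map could be built like this:

```typescript
// A Howler-style sprite map: letter -> [start offset in ms, duration in ms]
type Sprite = Record<string, [number, number]>;

// Hypothetical helper: lay the clips out back to back with a small gap
// so one letter doesn't bleed into the next.
function buildSprite(durations: Record<string, number>, gap = 10): Sprite {
  const sprite: Sprite = {};
  let offset = 0;
  for (const [letter, duration] of Object.entries(durations)) {
    sprite[letter] = [offset, duration];
    offset += duration + gap;
  }
  return sprite;
}

const sprite = buildSprite({ A: 380, B: 450, C: 530 });
// sprite.A → [0, 380], sprite.B → [390, 450], sprite.C → [850, 530]
```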


Recording was quite straightforward. I just had to find a moment of (almost) silence. Audacity is really cool – they have so many built-in effects. I was only interested in the ones to remove silence and suppress noise. I think especially noise suppression smooths the waveform quite a bit and helps keep the file size smaller. But really – not an expert. 🙂

The exported mp3 file is roughly 200 KB; without noise suppression it was around 260 KB. Not sure what else we can do to improve it, but for now I’m quite happy.


I keep the file as an asset. For this I just put it into the assets folder.

Seek and Play

For our audio sprite we need a service. It seems to be a bit cumbersome to use the native API so I found this Angular *wrapper* which looks quite promising:

Installation is simple:

 npm install --save angular-audio-context 

…and it looks way too complex. I don’t want to deal with all the details of the Web Audio API. But as I’m writing while I’m coding, sometimes you change your mind. Not sure how I found this one (it was not obvious to me at least), but it looks so much easier and seems to offer exactly what I need:

It even supports Audio sprites, which.. kind of answers my initial question. 🙂 So let’s try this one.

npm install --save howler

import { Howl } from 'howler';
import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root',
})
export class AudioService {
  private sound: Howl;

  constructor() {
    this.sound = new Howl({
      src: ['assets/alphabet.mp3'],
      sprite: {
        A: [0, 380],
        B: [389, 450],
        C: [830, 530],
        D: [1360, 420],
        E: [1805, 460],
        F: [2280, 360],
        G: [2650, 500],
        H: [3115, 490],
        I: [3615, 460],
        J: [4065, 360],
        K: [4440, 470],
        L: [4920, 510],
        M: [5430, 480],
        N: [5920, 530],
        O: [6460, 490],
        P: [6960, 460],
        Q: [7427, 510],
        R: [7937, 465],
        S: [8413, 557],
        T: [8982, 495],
        U: [9467, 470],
        V: [9950, 470],
        W: [10439, 528],
        X: [10972, 560],
        Y: [11548, 770],
        Z: [12380, 365],
      },
    });
  }

  play(letter: string) {;
  }
}


It works! Not sure how well it will work in the wild, but it was interesting to implement this functionality. It took a bit of time to set up the sprite, but howler.js just worked out of the box. Just for the sake of completeness: at first I was thinking of using the speech synthesis API, but I wasn’t so happy with the results.