Launch of our new Kubernetes tech-tutorial series in April

By the end of the course you will have developed a Logging Application that is deliberately divided into multiple individual components following a microservice approach. The aim is to familiarise you with how those components and the Kubernetes mechanics interact with each other.

We work with Kubernetes every day for some major companies, with a strong focus on building lightweight, reliable in-house products and taking technology to the next level.

Four years ago we decided to focus on the cloud platform and moved our team's technology stack to application development and infrastructure models that use the benefits of the cloud.

If you have ever tried to move a whole development team to a new technology stack in a reasonable time, you know it can be a time-consuming task. It requires not only good organisation, knowledge transfer, a lot of communication and endurance, but also open-minded and savvy people on your team.

Leaving known technologies behind and admitting that you are starting on unfamiliar terrain, with tons of resources out on the web and lots of research, trial and error, and lost time ahead of you, sounds scary at first.

The goal of this tutorial is to give you insights into cloud development and its architecture, so that you gain the confidence to combine different tools for targeted tasks and, in doing so, understand and apply the main benefits of the cloud platform:

  • Automation 
  • Scalability
  • Independence
  • Efficiency
  • Security 

Starting from Docker and Kubernetes logs as the data source for our Logging Application, we take it from there step by step to a functional UI and condition-based monitoring alerts. We will use the advantages of Fluent Bit to ship the collected logs to our HTTP endpoint, written in Go. Our endpoint will store the data in the time-series database TimescaleDB, an extension for Postgres that provides features like time buckets and data aggregation over time. The data will be served by Hasura, our primary GraphQL backend API, allowing the user interface to receive live subscriptions through an Apollo client.

The outcome is a simple monitoring application on top of the Laravel framework, with all components running on your local Kubernetes cluster in Minikube. Finally, we will deploy it to a real cloud environment for testing purposes.
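Running the components on a local Minikube cluster means each one gets its own Kubernetes manifest. As a hedged sketch of the shape such a manifest takes, here is a minimal Deployment for the log endpoint; the names, labels, image, and port are illustrative assumptions, not the course's actual files.

```yaml
# Minimal Deployment sketch for the Go log endpoint.
# Image name, labels, and port are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-endpoint
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-endpoint
  template:
    metadata:
      labels:
        app: log-endpoint
    spec:
      containers:
        - name: log-endpoint
          image: example/log-endpoint:latest  # hypothetical image
          ports:
            - containerPort: 8080
```

A matching Service would then expose port 8080 inside the cluster so Fluent Bit can reach the endpoint by name.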

The course consists of the following chapters:

  • Kubernetes
  • Logs Aggregation
  • Time-Series Database
  • Hasura GraphQL
  • VueJS/VueX/Apollo
  • Alerts & Monitoring

Each chapter contains between 4 and 10 video lectures with hands-on implementation.

The course will be available on Udemy and on a dedicated website, which will launch by the end of March. In addition to the standard course program, the dedicated website will offer real-time annotations of important steps and terminal commands, as well as two additional chapters:

  • Crash courses and Basics of Docker and Kubernetes
  • Guided Test Deployment of your Logging Application 

If you are interested in this Kubernetes course, you can send us a note via our contact form.
Author: Celia Grieme, CEO Impaddo