Monitoring and debugging distributed systems is hard. In this episode, we catch up with Kelsey Hightower, Stevenson Jean-Pierre, and Carlisia Thompson to get their insights on how to approach these challenges and talk about the tools and practices that make complex distributed systems more observable.
k0s is an all-inclusive Kubernetes distribution with all the required bells and whistles preconfigured to make building a Kubernetes cluster a matter of just copying an executable to every host and running it.
We’re talking with Gerhard Lazu, our resident SRE, ops, and infrastructure expert about the evolution of Changelog’s infrastructure, what’s new in 2020, and what we’re planning for in 2021. The most notable change? We’re now running on Linode Kubernetes Engine (LKE)! We even test the resilience of this new infrastructure by purposefully taking the site down. That’s near the end, so don’t miss it!
In this post I share the latest 2020 and beyond details for changelog.com’s infrastructure.
Why Kubernetes? How is Kubernetes simpler than what we had before? What was our journey to running production on Kubernetes? What worked well? What could have been better? What comes next for changelog.com? Read this post and listen to episode #419 to learn all the details.
This segment will be included in a podcast near you soon enough, but we thought it’d be fun to share the video as a standalone since we watched the whole thing play out via K9s.
kubectl is the new SSH. If you are using it to update production workloads, you are doing it wrong. See examples of how to automate application updates.
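The declarative alternative the post argues for is keeping manifests in version control and letting CI apply them, so no human ever edits production by hand. A minimal sketch of what that looks like (app name, registry, and tag are hypothetical):

```yaml
# deploy/app.yaml — lives in git; CI runs `kubectl apply -f deploy/` on merge
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          # CI bumps this tag on release — never a human running kubectl
          image: registry.example.com/myapp:1.4.2
```

The point isn't the manifest itself but who changes it: a pipeline commit is auditable and repeatable in a way an ad-hoc kubectl session never is.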
We’re using this in our new Kubernetes-based infrastructure (more details on that coming to a podcast near you). Keel runs as a single container, scanning Kubernetes and Helm releases for outdated images. Super cool stuff, and even has a web interface (which we’re not using yet, but should).
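Keel is driven by annotations on your workloads — you opt a Deployment in and pick an update policy. A sketch of what that might look like (policy and trigger values come from Keel's docs; the app name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                          # hypothetical
  annotations:
    keel.sh/policy: minor              # auto-update on new minor/patch tags
    keel.sh/trigger: poll              # poll the registry instead of waiting for webhooks
    keel.sh/pollSchedule: "@every 10m"
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:1.4.2
```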
We’ve linked K9s up in the past, but I’ve been playing with it today and I just had to share it again. Gerhard has us up and running on LKE (more on that coming to the blog and podcast soon) so I’ve had a chance to kick the tires a bit.
I have no idea how any of this magic works, but I do know that I like it and I’m excited to learn more. Here’s a screen grab of its Pulses feature, which gives you an overview of your entire cluster.
Tightly integrated with GitLab, GitHub, and Bitbucket, Gitpod automatically and continuously prebuilds dev environments for all your branches. As a result, team members can instantly start coding with fresh, ephemeral and fully-compiled dev environments - whether you’re building a new feature, fixing a bug, or doing a code review.
How do you respond when someone asks:
Is Kubernetes right for us?
Where do you start? Let’s talk about IT modernisation, beginning with the problem that needs to be solved, and exploring any constraints that are obvious.
In the search for a comfy and portable developer experience, I’ve made a lot of compromises in the past. The experience has gotten significantly better recently thanks to VS Code and Kubernetes. This workflow also works well on underpowered laptops or when working with lots of different and conflicting versions of Python or Ruby.
This is a solid, balanced piece that doesn’t overly sell the workflow and walks you through setting it up for yourself.
Application deployment and management should be automated, auditable, and easy to understand, and that’s what beetle tries to achieve in a simple manner. Beetle automates the deployment and rollback of your applications in multi-cluster, multi-namespace Kubernetes environments. It’s easy to integrate with via API endpoints & webhooks to fit a variety of workflows.
This article compares six static tools to validate and score Kubernetes YAML files for best practices and compliance.
One of the challenges with YAML is that it’s rather hard to express constraints or relationships between manifest files.
What if you wish to check that all images deployed into the cluster are pulled from a trusted registry?
How can you prevent Deployments that don’t have PodDisruptionBudgets from being submitted to the cluster?
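Tools like the six in the article encode such rules as policies. For intuition, the trusted-registry check from the question above boils down to something like this (a stdlib-only sketch, not any particular tool's implementation; registry names are made up):

```python
# Walk a parsed Deployment manifest and flag container images that
# aren't pulled from a trusted registry — the kind of rule a policy
# tool evaluates against every manifest before it reaches the cluster.
TRUSTED_REGISTRIES = ("registry.example.com/", "gcr.io/our-project/")

def untrusted_images(manifest: dict) -> list:
    """Return the images in a Deployment-like manifest that don't
    come from a trusted registry."""
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    return [
        c["image"]
        for c in containers
        if not c["image"].startswith(TRUSTED_REGISTRIES)
    ]
```

A bare `image: nginx` (implicitly Docker Hub) would be flagged, while an image from the allow-listed registry passes — which is exactly the constraint that's awkward to express in YAML itself.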
I was recently involved in an interesting project: deploying a full production and development environment on a very budget-constrained Kubernetes cluster, managed through GKE. A big departure from my usual situation, where I have a nearly unlimited budget for my cluster. The issues I ran into, and the solutions for them, were actually the inspiration to start this blog, just so I could write this post.
This is a great reason to start a new blog 👏
Yeah, this might be crazy… Crazy like a FOX
Remember that README that answers the age old question:
What happens when you type google.com into your browser’s address box and press enter?
Well, the format is back with a Kubernetes focus, this time answering:
Imagine I want to deploy nginx to a Kubernetes cluster. I’d probably type something like this in my terminal:
kubectl run nginx --image=nginx --replicas=3
and hit enter. After a few seconds, I should see three nginx pods spread across all my worker nodes. It works like magic, and that’s great! But what’s really going on under the hood?
Bryan Liles joins Johnny and Mat for a wide-ranging discussion that starts with the question: what even is enterprise Go?
Lens is a standalone application for MacOS, Windows, and Linux. It’s open source and free.
If you’re using Docker, the next natural step seems to be Kubernetes, aka K8s. Or is it? If you’re part of a small team, Kubernetes probably isn’t for you: it’s a lot of pain with very little benefit.
- Amazing usability and end user experience
- Real-time cluster state visualization
- Resource utilization charts and trends with history powered by built-in Prometheus
- Terminal access to nodes and containers
- Fully featured role-based access control management
- Dashboard access and functionality limited by RBAC
Video demo here.
KBall interviews Brian Leroux in a wide-ranging discussion covering “Progressive Bundling” with native ES Modules, building infrastructure as code, and what the future of JamStack and serverless deployment might look like.
Gone are the days of contending with dozens of README files just to get the right version of helm and to install a chart with sane defaults.
arkade (ark for short) provides a clean CLI with strongly-typed flags to install charts and apps to your cluster in one command.
Unpopular opinion! Monoliths are the future because the problem people are trying to solve with microservices doesn’t really line up with reality. Just to be honest - and I’ve done this before, gone from microservices to monoliths and back again. Both directions.
Kubernetes has won and the cloud is a moving target. But, one thing that often gets lost in the mix with all the Cloud Native talk is the productivity costs associated with keeping up.
In the US alone, over 70% of enterprises have adopted or are currently adopting cloud-native architecture, causing a surge in developers who are trying to learn the stack.
It’s called the “cutting edge” for a reason…
Staying on the cutting edge…one critical area of productivity loss is keeping up with all the changing technologies.
Cloud-native architecture is still being developed, and learning the latest technologies is a moving target. At the same time, most computer science and software engineering programs don’t delve into the heart of these technologies. At best, graduates will have limited experience working with a handful of these cloud technologies…
Depending on your perspective or seat at the table, these hidden costs could be a good thing.
In this episode, we’re joined by Kelsey Hightower to discuss the evolution of cloud infrastructure management, the role Kubernetes and its API play in it, and how we, as developers and operators, should be adapting to these changes.