Application deployment and management should be automated, auditable, and easy to understand, and that’s what Beetle tries to achieve in a simple manner. Beetle automates the deployment and rollback of your applications in multi-cluster, multi-namespace Kubernetes environments. It’s easy to integrate with through API endpoints and webhooks to fit a variety of workflows.
This article compares six static analysis tools that validate and score Kubernetes YAML files for best practices and compliance.
One of the challenges with YAML is that it’s rather hard to express constraints or relationships between manifest files.
What if you wish to check that all images deployed into the cluster are pulled from a trusted registry?
How can you prevent Deployments that don’t have PodDisruptionBudgets from being submitted to the cluster?
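Plain YAML can’t express either rule, which is where the policy tools in the comparison come in. For a rough feel of the first check, here’s a one-liner audit you can run by hand (the image-listing jsonpath is straight from the kubectl docs; registry.example.com is a placeholder for whatever registry you trust):

```
# List every container image in the cluster, one per line,
# then flag any that aren't from the trusted registry.
kubectl get pods --all-namespaces -o jsonpath="{..image}" \
  | tr -s ' ' '\n' | sort -u \
  | grep -v '^registry\.example\.com/' \
  || echo "all images come from the trusted registry"
```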
I was recently involved in an interesting project: deploying a full production and development environment on a very budget-constrained Kubernetes cluster managed through GKE. A big departure from my usual situation, where I have a nearly unlimited budget for my cluster. The issues I ran into, and the solutions for them, were actually the inspiration to start this blog, just so I could write this post.
This is a great reason to start a new blog 👏
Yeah, this might be crazy… Crazy like a FOX
Remember that README that answers the age-old question:
What happens when you type google.com into your browser’s address box and press enter?
Well, the format is back with a Kubernetes focus, this time answering:
Imagine I want to deploy nginx to a Kubernetes cluster. I’d probably type something like this in my terminal:
kubectl run nginx --image=nginx --replicas=3
and hit enter. After a few seconds, I should see three nginx pods spread across all my worker nodes. It works like magic, and that’s great! But what’s really going on under the hood?
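The article traces the full path from CLI to running pods. If you’d like to watch a slice of it live, kubectl’s verbosity flag dumps the HTTP calls it makes to the API server; the command below assumes a kubectl old enough to still accept --replicas on run:

```
# -v=8 logs the REST requests and responses kubectl exchanges
# with the API server, so you can see the objects being created
kubectl run nginx --image=nginx --replicas=3 -v=8
```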
Bryan Liles joins Johnny and Mat for a wide-ranging discussion that starts with the question: what even is enterprise Go?
Lens is a standalone application for macOS, Windows, and Linux. It’s open source and free.
If you’re using Docker, the next natural step seems to be Kubernetes, aka K8s. Or is it? If you’re part of a small team, Kubernetes probably isn’t for you: it’s a lot of pain with very little benefit.
- Amazing usability and end user experience
- Real-time cluster state visualization
- Resource utilization charts and trends with history powered by built-in Prometheus
- Terminal access to nodes and containers
- Fully featured role based access control management
- Dashboard access and functionality limited by RBAC
Video demo here.
KBall interviews Brian Leroux in a wide-ranging discussion covering “Progressive Bundling” with native ES Modules, building infrastructure as code, and what the future of JamStack and serverless deployment might look like.
Gone are the days of contending with dozens of README files just to get the right version of helm and to install a chart with sane defaults.
arkade (ark for short) provides a clean CLI with strongly-typed flags to install charts and apps to your cluster in one command.
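For instance, installing something like OpenFaaS or an ingress controller is a single command each (app names come from arkade’s catalog; exact flags can differ between versions):

```
arkade install openfaas        # one command, sane defaults baked in
arkade install ingress-nginx   # no README spelunking required
arkade info openfaas           # show connection details afterwards
```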
Unpopular opinion! Monoliths are the future because the problem people are trying to solve with microservices doesn’t really line up with reality. Just to be honest - and I’ve done this before, gone from microservices to monoliths and back again. Both directions.
Kubernetes has won and the cloud is a moving target. But, one thing that often gets lost in the mix with all the Cloud Native talk is the productivity costs associated with keeping up.
In the US alone, over 70% of enterprises have adopted or are currently adopting cloud-native architecture, causing a surge in developers who are trying to learn the stack.
It’s called the “cutting edge” for a reason…
Staying on the cutting edge… one critical area of productivity loss is keeping up with all the changing technologies.
Cloud-native architecture is still evolving, and learning the latest technologies is a moving target. At the same time, most computer science and software engineering programs don’t delve into the heart of these technologies. At best, graduates will have limited experience working with a handful of these cloud technologies…
Depending on your perspective or seat at the table, these hidden costs could be a good thing.
In this episode, we’re joined by Kelsey Hightower to discuss the evolution of cloud infrastructure management, the role Kubernetes and its API play in it, and how we, as developers and operators, should be adapting to these changes.
Patrick DeVivo pointed tickgit at Kubernetes’ source code and discovered that the team has a lot TODO…
- 2,380 TODOs across 1,230 files from 363 distinct authors
- 489 TODOs were added in 2019 so far
- 860 days (or 2.3 years) is the average age of a TODO
That’s just a taste of what they found. The article has more info and some analysis to boot.
Chaos Mesh is a cloud-native Chaos Engineering platform that orchestrates chaos on Kubernetes environments. At the current stage, it has the following components:
- Chaos Operator: the core component for chaos orchestration. Fully open sourced.
- Chaos Dashboard: a visualized panel that shows the impacts of chaos experiments on the online services of the system. Under development; it currently only supports chaos experiments on TiDB (https://github.com/pingcap/tidb).
For the uninitiated, chaos engineering is when you unleash havoc on your system to prove out its resiliency (or lack thereof).
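To give a flavor of how the Chaos Operator is driven, here’s a sketch of a pod-kill experiment expressed as a custom resource. Field names follow the Chaos Mesh docs, but the CRD schema has shifted between versions, so treat this as an assumption-heavy illustration rather than something to paste into a cluster:

```
# Sketch only: verify the CRD fields against your installed
# Chaos Mesh version before applying anything like this.
kubectl apply -f - <<EOF
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: kill-one-nginx
  namespace: chaos-testing
spec:
  action: pod-kill      # kill matching pods to test resiliency
  mode: one             # target one randomly-chosen pod at a time
  selector:
    labelSelectors:
      app: nginx
EOF
```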
Gerhard is back for part two of our interviews at KubeCon 2019. Join him as he goes deep on Prometheus with Björn Rabenstein, Ben Kochie, and Frederic Branczyk… Grafana with Tom Wilkie and Ed Welch… and Crossplane with Jared Watts, Marques Johansson, and Dan Mangum.
Don’t miss part one with Bryan Liles, Priyanka Sharma, Natasha Woods, & Alexis Richardson.
Changelog’s resident infrastructure expert Gerhard Lazu is on location at KubeCon 2019. This is part one of a two-part series from the world’s largest open source conference. In this episode you’ll hear from event co-chair Bryan Liles, Priyanka Sharma and Natasha Woods from GitLab, and Alexis Richardson from Weaveworks.
Stay tuned for part two’s deep dives into Prometheus, Grafana, and Crossplane.
What do you do when you have CronJobs running in your Kubernetes cluster and want to know when a job fails? Do you manually check the execution status? Painful. Or do you perhaps rely on roundabout Prometheus queries, adding unnecessary overhead? Not ideal… But worry not! Instead, let me suggest a way to immediately receive notifications when jobs fail to execute, using two nifty tools…
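For the record, the manual check looks something like this, and nobody wants to babysit it (the jsonpath filter assumes failed Jobs populate .status.failed):

```
# Eyeball the COMPLETIONS column across all namespaces...
kubectl get jobs --all-namespaces
# ...or fish out only the Jobs reporting failures
kubectl get jobs --all-namespaces -o jsonpath=\
'{range .items[?(@.status.failed>0)]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'
```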
How do you know if your Kubernetes cluster is production-ready?
If you’re a beginner, it’s hard to tell what you’re missing. The subject is so vast and it’s easy to lose sight of the right path to production.
And even if you’re an expert, remembering all networking, storage, cluster, and application development best practices is impossible. There are so many.
Here is a curated list of Kubernetes best practices to help you drive your roadmap to production.
Check things off the list and keep track as you go. ✅
Johnny and Mat are joined by Kris Nova and Joe Beda to talk about Kubernetes and Cloud Native. They discuss the rise of “Cloud Native” applications as facilitated by Kubernetes, good places to use Kubernetes, the challenges faced running such a big open source project, Kubernetes’ extensibility, and how Kubernetes fits into the larger Cloud Native world.
You should have a plan to roll back releases that aren’t fit for production. In Kubernetes, rolling updates are the default strategy to release software.
In a nutshell, you deploy a newer version of your app and Kubernetes makes sure the rollout happens without disrupting live traffic. However, even if you use techniques such as rolling updates, there’s still a risk that your application won’t work the way you expect at the end of the deployment.
Kubernetes has a built-in mechanism for rollbacks. Learn how it works in this article.
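The short version, in case you’ve never needed it (deployment/myapp is a placeholder name):

```
kubectl rollout history deployment/myapp               # list recorded revisions
kubectl rollout undo deployment/myapp                  # back to the previous revision
kubectl rollout undo deployment/myapp --to-revision=2  # or jump to a specific one
```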
Have you ever created a Kubernetes cluster and wondered what type of worker nodes you should use? For example, if you’re on AWS, should you use many small and cheap t2.micro instances, or a few powerful m5.xlarge instances?
This article discusses the pros and cons of using different worker node sizes in your cluster.
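One number worth comparing either way: each node reserves CPU and memory for the kubelet and system daemons, so what pods can actually use is the node’s allocatable, and that fixed overhead eats a bigger share of small instances. A quick way to see it, using kubectl’s custom-columns output:

```
# Compare raw capacity with what pods can actually schedule onto
kubectl get nodes -o custom-columns=\
NAME:.metadata.name,\
CPU_CAP:.status.capacity.cpu,\
CPU_ALLOC:.status.allocatable.cpu,\
MEM_CAP:.status.capacity.memory,\
MEM_ALLOC:.status.allocatable.memory
```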