Fedora CoreOS is a container-focused, (mostly) immutable Linux distribution designed to be lightweight and secure. It features Ignition as an early-boot provisioning system that eliminates the need for post-boot configuration, OSTree as an atomic update mechanism, and Podman as a secure, daemonless container runtime.
If you’ve ever asked yourself WHY you need to SSH in to configure a system, why your cloud server’s OS ships with inkjet printer packages, or how you can escape the burden of critical-but-uninspired kernel updates… then check out Fedora CoreOS!
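To make that first-boot provisioning concrete, here’s a minimal sketch of the Ignition workflow: you write a human-friendly Butane (YAML) config and transpile it into the Ignition JSON the machine consumes on first boot. The SSH key below is a placeholder, and the final (commented) step assumes the `butane` tool is installed.

```shell
# Write a minimal Butane config: create the default "core" user with an
# SSH key so no post-boot configuration is ever needed.
cat > config.bu <<'EOF'
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3...placeholder-key
EOF

# Transpile Butane YAML into the Ignition JSON the machine reads at
# first boot (requires the butane tool to be installed):
# butane --pretty --strict config.bu > config.ign
```

Point your install (or your cloud instance’s user data) at the resulting `config.ign` and the machine comes up fully configured, with nothing left to do over SSH.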
If you’ve been following along in the open source news cycle lately, you’ve probably heard that Red Hat has dropped the docker container runtime engine from both its Red Hat Enterprise Linux (RHEL) and CentOS Linux distributions.
I must not be following along, because that’s news to me.
That being the case, what do you do when you need to deploy containers? Fortunately, Red Hat has created a near drop-in replacement for docker called Podman.
Podman is a rename of kpod, sorta. The new thing is actually called libpod, and Podman is the CLI for that library. It’s all a bit confusing, but what’s cool is that none of it requires a daemon like Docker Engine.
If you’d like to give it a go, this walk-through by The New Stack will get you started.
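If you want a feel for just how “drop-in” it is, here’s a sketch. The shim below is a common idiom; the commented commands assume Podman is installed when you actually run them.

```shell
# Since Podman's CLI mirrors docker's, a one-line shim covers most
# day-to-day workflows:
docker() { podman "$@"; }

# Familiar commands then work unchanged -- rootless and daemonless:
# docker run -d -p 8080:80 nginx
# docker ps
# docker build -t myapp .
```

Because there’s no daemon, containers run as child processes of the CLI itself, which is also what makes rootless operation straightforward.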
Containerization is one of the trendiest topics in the cloud economy and the IT ecosystem. The container landscape can be confusing at times, so this post may help you untangle some tricky concepts around Docker and containers. We’ll also look at how the containerization ecosystem has evolved and the state of containerization in 2019.
Put on your swimming suit, because this is a deep dive. 🏊♀️🏊
The most widely used container runtime in High Performance Computing now runs on Mac, allowing any developer to package their entire application into a single container. Putting everything into a single file with no daemon required opens up broader possibilities on macOS, but I’ll let an expert like Greg Kurtzer speak to that. :)
This was a brief topic of conversation when we had Greg on The Changelog a few weeks back.
We’re talking with Greg Kurtzer, the founder of CentOS, Warewulf, and most recently Singularity — an open source container platform designed to be simple, fast, and secure. Singularity is optimized for enterprise and high-performance computing workloads. What’s interesting is how Singularity allows untrusted users to run untrusted containers in a trusted way. We cover the backstory, Singularity Pro and how they’re not holding the open source community version hostage, as well as how Singularity is being used to containerize and support workflows in artificial intelligence, machine learning, deep learning, and more.
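For a feel of the “single file, no daemon” model, here’s a sketch of the basic Singularity workflow (the image name is illustrative, and all commands assume Singularity is installed):

```shell
# Pull an image from Docker Hub into a single SIF file on disk:
#   singularity pull lolcow.sif docker://godlovedc/lolcow

# Run it as an ordinary, unprivileged user -- no daemon involved:
#   singularity run lolcow.sif

# Or drop into a shell inside the container:
#   singularity shell lolcow.sif
```

Since the whole container is one file, moving a workload to an HPC cluster can be as simple as `scp`-ing the `.sif` over.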
One of the most exciting announcements from last week’s AWS re:Invent was Firecracker — an open source project that delivers the speed of containers with the security of VMs.
Firecracker’s focus is transient, short-lived processes, so it differs from containers in that it’s optimized for startup speed.
Why can’t we use containers? The answer is simple — slower cold start. While LXC and Docker are certainly faster and lighter than full-blown virtual machines, they still don’t match the speed expected by functions.
There are also some security wins with how Firecracker is architected:
Firecracker takes a radically different approach to isolation. It takes advantage of the acceleration from KVM, which is built into every Linux Kernel with version 4.14 or above. KVM, the Kernel Virtual Machine, is a type-1 hypervisor that works in tandem with the hardware virtualization capabilities exposed by Intel and AMD.
There’s a lot to be intrigued by here. We should probably line up an episode on Firecracker. In the meantime, click through to go deeper on the topic.
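To make the “speed of containers, security of VMs” pitch concrete, here’s a sketch of booting a microVM from a JSON config. The kernel and rootfs paths are illustrative, and the commented launch step assumes `firecracker` is installed on a KVM-capable Linux host:

```shell
# Describe the microVM: a kernel, a root drive, and a tiny resource
# footprint (1 vCPU, 128 MiB RAM):
cat > vm-config.json <<'EOF'
{
  "boot-source": {
    "kernel_image_path": "vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    { "drive_id": "rootfs", "path_on_host": "rootfs.ext4",
      "is_root_device": true, "is_read_only": false }
  ],
  "machine-config": { "vcpu_count": 1, "mem_size_mib": 128 }
}
EOF

# Boot it (requires /dev/kvm on the host):
# firecracker --api-sock /tmp/fc.sock --config-file vm-config.json
```

Firecracker also exposes the same settings over a REST API on that Unix socket, which is how orchestrators drive fleets of microVMs programmatically.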
Disclaimer: no servers were harmed in the taping of this show. We hosted a special discussion with Jeremy Daly, Kevin Ball, Nick Nisi, and Christopher Hiller on the ideas around serverless, managed services, Functions as a Service (FaaS), micro-services, nano-services, all-the-services!
CodeSandbox Containers was just announced by Ives van Hoorne on Hacker Noon.
But you gotta use it so they can test things and get it right.
We can only test CodeSandbox Containers fully when we have other people using it. … Please don’t use it for any project with files you don’t want publicly exposed. There’s also the chance that the service might be down because of things that we haven’t foreseen yet, in which case you’ll see a nice warning message.
We will dedicate the coming months to squashing every bug we can find; when we think that CodeSandbox Containers is stable enough to remove the beta warning, we will announce this.
This is the battle cry that started the Open Container Initiative. But in reality, are multi-cloud and vendor lock-in genuine concerns for software teams? Tyler Treat writes on his personal blog:
We want to be cloud-agnostic. We need to avoid vendor lock-in. We want to be able to shift workloads seamlessly between cloud providers. Let me say it again: multi-cloud is a trap. Outside of appeasing a few major retailers who might not be too keen on stuff running in Amazon data centers, I can think of few reasons why multi-cloud should be a priority for organizations of any scale.
There seems to be some confusion around sandboxing containers as of late, mostly because of the recent launch of gvisor… There is a large amount of ignorance towards the existing defaults to make containers secure. Which is crazy since I have written many blog posts on it and given many talks on the subject.
Jessie has been doing yeoman’s work on Linux kernel isolation and container security for a while now, but much of that work has been overlooked or disregarded by others in the community. I’m on the outside looking in, so it’s tough to say exactly what’s going on, but according to Jessie:
When you work at a large organization you are surrounded by an echo chamber. So if everyone in the org is saying “containers are not secure,” you are bound to believe it and not research actual facts.
That doesn’t mean Jessie thinks containers are secure (click through to read her take on that). There’s a lot to dig into here and think about. I’ll pull out one last point:
I am not trying to throw shade at gvisor but merely clear up some FUD in the world of open source marketing. I truly believe that people choosing projects to use should research into them and not just choose something shiny that came out of Big Corp.
Now that’s a sentiment I can get behind! Oh, and listen to this related episode of The Changelog if you haven’t yet. It’s a must-listen for all developers.
Why does this exist?
Containers are not a sandbox. While containers have revolutionized how we develop, package, and deploy applications, running untrusted or potentially malicious code without additional isolation is not a good idea. The efficiency and performance gains from using a single, shared kernel also mean that container escape is possible with a single vulnerability.
gVisor takes a distinct approach to container sandboxing and makes a different set of technical trade-offs compared to existing sandbox technologies, thus providing new tools and ideas for the container security landscape.
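As a sketch of what trying it looks like in practice: gVisor ships a runtime binary called `runsc` that you register with Docker, then opt into per container. The path below is illustrative, and the real config file lives at `/etc/docker/daemon.json` (root required):

```shell
# Register runsc as an additional Docker runtime (illustrative path;
# the real config is /etc/docker/daemon.json):
cat > daemon.json <<'EOF'
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
EOF

# After restarting dockerd, sandbox a single container by opting in:
# docker run --rm --runtime=runsc alpine uname -a
```

A container run this way gets gVisor’s user-space kernel between it and the host, so a single kernel vulnerability no longer means escape; gVisor’s docs note the reported kernel version differs from the host’s, which makes `uname` a quick sanity check that the sandbox is in play.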
Titus powers critical aspects of the Netflix business, from video streaming, recommendations, and machine learning to big data, content encoding, studio technology, internal engineering tools, and other Netflix workloads.
So, why is Netflix open sourcing Titus?
…we’ve been asked over and over again, “When will you open source Titus?” It was clear that we were discussing ideas, problems, and solutions that resonated with those at a variety of companies, both large and small. We hope that by sharing Titus we are able to help accelerate like-minded teams, and to bring the lessons we’ve learned forward in the container management community.
The question is, is it too late for Titus to gain traction in a world where Kubernetes has seemingly already won?
This is a big deal. We’ve been tracking CoreOS since the beginning — we’re huge fans of Alex, Brandon and the team behind CoreOS.
Red Hat has signed a definitive agreement to acquire CoreOS, Inc., an innovator and leader in Kubernetes and container-native solutions, for a purchase price of $250 million.
Red Hat is a publicly traded company, and while this announcement hasn’t really impacted shareholder value (yet), we, the open source community, have been immeasurably impacted by the team behind CoreOS.
Also, check out Alex Polvi’s announcement on the CoreOS blog which includes some details and backstory.