
pmig
Show HN: Glasskube – Open Source Kubernetes Package Manager, alternative to Helm github.com

Hello HN, we're Philip and Louis from Glasskube (https://github.com/glasskube/glasskube). We're working on an open-source package manager for Kubernetes. It's an alternative to tools like Helm or Kustomize, primarily focused on making deploying, updating, and configuring Kubernetes packages simpler and a lot faster. Here is a demo video (https://www.youtube.com/watch?v=aIeTHGWsG2c#t=17s) with quick start instructions.

Most developers working with Kubernetes use Helm, an open-source tool created during a hackathon nine years ago. However, with the number of Kubernetes packages rapidly growing to over 800 on the CNCF landscape today, the requirements have changed, and we believe it's time for a new package manager. Every engineer we talked to has a love-hate relationship with Helm, and we also found ourselves defaulting to Helm despite its shortcomings, due to a lack of alternatives.

We have spent enough time trying to get Helm to do what we need. From looking for the correct chart, trying to learn how each value affects the components, and hand-crafting a schemaless values.yaml file, to debugging the final release when it inevitably fails to install, the experience of using Helm is, for the most part, time-consuming and cumbersome.

Charts often become more complex, requiring the use of sub-charts. These umbrella charts tend to be even harder to maintain and upgrade, because so many different components are bundled into a single release.

We talked to over 100 developers and found that everyone had developed their own little workarounds, some working better than others. We collected the feedback and poured everything we learned into a new package manager. We want to build something that is as easy to use as Homebrew or npm and make package management on Kubernetes as easy as on every other platform.

Some of the features Glasskube already supports are:

Typesafe package configuration via UI or interactive CLI to inject values from other packages, ConfigMaps, and Secrets.

Browse our central package repository so there is no need to look for a Helm repository to find a specific package.

All packages are dependency-aware, so they can be used and referenced by multiple other packages, even across namespaces. We validate the complete dependency tree, so packages get installed in the correct namespace.

Preview and perform pending updates to your desired version with a single click. All updates are tested in the Glasskube test suite before becoming available in the public repository.

Use multiple repositories and publish your own private packages (e.g., packages for your company's internal services, so all developers have up-to-date, easily configured versions of them).

All features are available via UI or interactive CLI. You can also manage all packages via GitOps.
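
To make this concrete: every package is a custom resource you can commit to a GitOps repo. Here is a rough, illustrative sketch of a package CR (field names are simplified for illustration; see https://glasskube.dev/docs/design/package-config/ for the actual schema):

  # Illustrative sketch only, not the exact schema
  apiVersion: packages.glasskube.dev/v1alpha1
  kind: Package
  metadata:
    name: cert-manager
  spec:
    packageInfo:
      name: cert-manager
      version: v1.14.2            # hypothetical version
    values:
      installCRDs:
        value: "true"             # plain typed value
      acmeEmail:
        valueFrom:                # injected from a ConfigMap
          configMapRef:
            name: platform-config
            key: admin-email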

Currently, we are focused on enhancing the user experience, aiming to save engineers as much time as possible. We are still using Helm and Manifests under the hood. However, together with the community, we plan to develop an entirely new packaging and bundling format for all cloud-native packages. This will provide package developers with a straightforward way to define how to install and configure packages, offer simple upgrade paths, and enable us to provide feedback, crash reports, and analytics to every developer working on Kubernetes packages.

We also started working on a cloud version. You can pre-sign up here in case you are interested: https://glasskube.cloud

We'd greatly appreciate any feedback you have and hope you get the chance to try out Glasskube.


guhcampos3 months ago

I think this might be a step in the right direction, but my main problem with Kubernetes package management today might not be fixable by a package manager, sadly. The biggest issue I have in my daily life is handling the multiple levels of nested YAML and the unpredictability of the results.

Think of an ArgoCD ApplicationSet that generates a bunch of Applications. Those Applications render a bunch of Helm charts, and inside those charts there are CRDs used by some random operator like Strimzi, Grafana or Vector.

Given YAML's lack of syntax and the absence of any sort of standard for rendering templates, it's practically impossible to know what YAML is actually being injected into the Kubernetes API when you make a top-level change. It's trial and error, expensive blue-green deployments, and hundreds of minutes of debugging all the way, every month.
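
To make the nesting concrete, here is a sketch of the kind of stack I mean (all names made up):

  apiVersion: argoproj.io/v1alpha1
  kind: ApplicationSet             # level 1: generates Applications per cluster
  metadata:
    name: kafka-stack
  spec:
    generators:
      - list:
          elements:
            - cluster: prod-eu
            - cluster: prod-us
    template:
      metadata:
        name: 'kafka-{{cluster}}'
      spec:
        project: default
        source:
          repoURL: https://charts.example.com
          chart: kafka-stack       # level 2: an umbrella Helm chart
          helm:
            values: |
              strimzi:
                enabled: true      # level 3: the chart templates Strimzi CRs
        destination:
          name: '{{cluster}}'
  # level 4: the Strimzi operator turns those CRs into StatefulSets;
  # the YAML that actually hits the API is four rendering hops from this file.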

theLiminator3 months ago

The wide adoption of YAML for devops-adjacent tooling was a mistake.

I think proper programming language support is the way to go.

Ideally a static type system that isn't Turing complete and is guaranteed to terminate. So something like Starlark with types.

ants_everywhere3 months ago

The idea behind declarative config is that, empirically, programmatic config was bad at scale.

If your config is the source of truth of what your infra should be, then you can use source control tools to roll back to a known good state, or to binary search for when a problem was introduced.

If you use programmatic config, then you can't find out the intended state of your system without executing a program. You can't grep through program executions in a meaningful way, especially at scale. So you can't do even simple things like search for a string.

Guaranteeing termination is helpful, but it doesn't solve the main problem that programmatic config puts a large complexity barrier between you and the ability to understand your infrastructure.

Tools like Helm give up a fair amount of this declarative benefit. And IMO that's one of the reasons why it's almost always a worse experience to use a Helm chart than to just render the chart once and for all and forget Helm ever existed.

nostrebored3 months ago

> The idea behind declarative config is that, empirically, programmatic config was bad at scale.

Languages can be declarative or imperative. For instance, Pulumi and CDK are declarative.

> If you use programmatic config, then you can't find out the intended state of your system without executing a program. You can't grep through program executions in a meaningful way, especially at scale. So you can't do even simple things like search for a string.

I don’t understand — nothing stops a language from having an intermediate compilation step that shows the intended state and is searchable. Beyond that, programmatic config means you can add in hooks to make plans or runs more interrogatable.

> Guaranteeing termination is helpful, but it doesn't solve the main problem that programmatic config puts a large complexity barrier between you and the ability to understand your infrastructure.

It seems like this is untrue — having seen templated IaC that is hundreds of thousands of lines, and CDK that defers that complexity to an abstraction I have to understand only once, I'd always take the latter.

Agreed that helm use is a giant mistake and liability.

nijave3 months ago

I think codegen/compilation is a middle ground here. A higher-level language like Starlark can be compiled down to a set of instructions that provide the described guarantees.

This is how Pants (a build system) works. You have declarative Starlark, which supports basic programming semantics, and this generates a state that the engine reads and tries to produce.

I've been meaning to dive into jsonnet for a while, but it'd be good to have a higher-level representation that doesn't rely on sophisticated templating and substitution engines like current k8s tooling does.

Compare k8s to Terraform, where you have modules, composability, and variables. These can be achieved in k8s, but you need to layer on more tooling (Kustomize, Helm, etc.). There could be a richer config system than "shove it in YAML".

Things like explicit ordering and dependencies are hard to represent in pure YAML, since they're "just text fields" without additional tools.
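
Concretely, ordering today gets bolted on through string-typed annotations, e.g. (real annotation keys, illustrative resource):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: db-migrate
    annotations:
      argocd.argoproj.io/sync-wave: "1"  # ArgoCD: apply after wave 0
      helm.sh/hook: pre-install          # Helm: hook semantics in a string
      helm.sh/hook-weight: "5"
  # Nothing in the schema relates wave "1" to wave "0";
  # the tooling just sorts text fields.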

tkz13123 months ago

If you restrict your language to pure functions only, then it is quite possible to have a system be both declarative and reproducible while having more expressivity than YAML.

pkage3 months ago

And indeed, this is the approach that config-centric languages like Nickel[0] take.

[0]: https://nickel-lang.org/

verdverm3 months ago

The priority field in Nickel seems a lot like CSS weighting, though more explicit. I suspect it will cause headaches at scale.

verdverm3 months ago

Have you looked at CUE? (https://cuelang.org/docs/concept/the-logic-of-cue/)

CUE is also pragmatic in that it has integrations with yaml, json, jsonschema, openapi, protobuf

dewbrite3 months ago

I've tried out Pkl, which is similar in spirit, and I think it's a real solution for k8s manifests. The only thing holding it back is industry adoption, imo. It's leagues better than Helm, and mostly better than Kustomize.

See also: KCL, which is very similar and might _actually_ be "the winner". Time will tell.

verdverm3 months ago

I don't expect a winner personally, rather that there will be dozens of alternatives always. Like build systems, deployments are quite bespoke to organizations and legacy has a way of sticking around for a long time

Having used CUE, mainly outside of Kubernetes, I cannot see myself switching to KCL. I really like having a configuration language that isn't so tied to a specific system and which I can use with the Go SDK

pmigop3 months ago

So what's your take on https://github.com/stripe/skycfg? Do you also have experience with it?

verdverm3 months ago

No, I went with CUE instead of Starlark

dilyevsky3 months ago

> So something like Starlark with types.

This exists for k8s[0]. There have been other projects based on the same library[1]; I heard Reddit did something similar internally.

[0] - https://github.com/cruise-automation/isopod

[1] - https://github.com/stripe/skycfg

kminehart3 months ago

If I were in charge of our infra automation I would have done this. We opted for jsonnet instead, which is an absolute nightmare, or at least the way we've set it up is.

okamiueru3 months ago

My take on this is that the issue is not declarative infrastructure resources, but a tendency to over-complicate the infrastructure.

For example: you have a problem that is suitable for some message queue -> Apache Kafka. Now you have 7 new problems, and the complexity warrants perhaps 3 other services, and on, and on.

pmigop3 months ago

Complexity is something that needs to be introduced carefully. It makes things harder if you introduce it too early, but everything will break in a big bang if you introduce it too late.

Nowadays you can also start with a lightweight MQ like RabbitMQ and decouple your service into just a handful of components. This will set you up for scalability without introducing massive overhead.

In the end it is also always a knowledge game: how experienced are you, and how much time are you willing to invest in learning and understanding a technology?

davidmdm913 months ago

I have been developing my own package manager, and my core idea is that proper programming languages are the right level for describing packages.

Programs take inputs and can output arbitrary data such as resources. However, they can do so with type safety and everything else a programming ecosystem offers.

For asset distribution it uses wasm, and that's it!

If you want to check it out, it's here: GitHub: https://github.com/davidmdm/yoke / docs: https://davidmdm.github.io/yoke-website

I like that you said:

> I think proper programming language support is the way to go.

I think we need to stop writing new ways of generating YAML, since we already have the perfect way of doing so: typed languages!

verdverm3 months ago

> that isn't Turing complete and is guaranteed to terminate

This means general-purpose languages do not qualify, and more generally, no general recursion.

davidmdm913 months ago

Why limit yourself to those types of tools?

To protect against somebody writing a non-terminating program?

General programming languages come with a lot of general-purpose benefits from their ecosystems, like package managers (npm, cargo, Go modules, etc.).

They have test runners, and control flow.

Lots of them already have type definitions for Kubernetes, and if you are working in Go you have access to almost the entire Kubernetes ecosystem.

Maybe we are throwing the baby out with the bath water when we disqualify general purpose languages?

verdverm3 months ago

Because people write code that is hard to understand. Configuration doesn't need all that. What it needs is to be provably correct and easy for someone to make predictable changes to under high pressure (when prod is down). Guaranteed termination is one of the features of a Turing-incomplete language, not the goal. You don't want inheritance either, because it becomes hard to know where and when a value gets set (which is effectively what Helm overlays via multiple -f flags are).
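
A minimal sketch of the -f problem:

  # base.yaml
  image:
    tag: "1.2.0"
  replicas: 3

  # prod.yaml, passed later on the command line:
  #   helm install app ./chart -f base.yaml -f prod.yaml
  image:
    tag: "1.1.7"

  # The last file silently wins: the effective tag is 1.1.7,
  # and nothing tells you base.yaml was overridden.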

You speak as if Turing-incomplete languages cannot have the control structures, tooling, and ecosystems we enjoy elsewhere, which would be the wrong assessment. I recommend taking a look at CUE to see how this can be true.

The OpenAPI specs are probably better than the Go language types for k8s. They have more of the validation information and you can get at the CRDs / versions actually running in the cluster.

davidmdm913 months ago

I am not saying that Turing-incomplete languages don't or can't be a good fit for this task.

However there’s no reason we should rule out general purpose languages.

We have a lot of configuration-based IaC and configuration tooling à la Jsonnet and CUE, and yet these are riddled with their own problems and DX issues.

Anyway, we don't need to see eye to eye on this, but I respect your position.

MrDarcy3 months ago

> However there’s no reason we should rule out general purpose languages.

We've learned the hard way that general-purpose languages are poor for configuration at scale. I know firsthand, having worked on some of the larger prod infrastructures out there.

At scale, the best SREs out there still have trouble reasoning about the system and end up pushing bad config that takes down prod.

Languages like CUE really are different and better. CUE in particular hits the right balance for configuration of millions of lines of k8s yaml.

davidmdm913 months ago

I actually really like CUE. I use it fairly extensively as a YAML replacement where I can, and at my work we've done our best to integrate CUE with our charts to validate the values used to invoke them and to unify the value space.

However, there's something about a full-blown general-purpose language that is so much more flexible.

I don't think the fact that people can and do write bad programs disqualifies general-purpose languages from being great tools to build packages.

I am sure there is equally bad CUE, Jsonnet, Pkl, etc. out there.

Other than cdk8s, I don't know of other tools in this space that have tried to use general-purpose languages to define their packages, and I think cdk8s users are generally happy. Much more so than Helm users, at least.

I am not sure I can agree with this statement:

> We've learned the hard way that general-purpose languages are poor for configuration at scale

I think we've just assumed this, or seen a Pulumi project we didn't like working in.

I believe and hope there will be plenty of room to experiment and innovate in this space!

verdverm3 months ago

The difference is in shared understanding. With tools like CUE or Starlark, you can learn one system and everyone can reason across each other's work. With imperative languages, every instance is a snowflake and creates significant mental overhead. It's the flexibility that is actually the problem for configuration. I get there is/was a trend towards general purpose languages in DevOps, but I think we are post peak on the adventure.

verdverm3 months ago

> CUE in particular hits the right balance for configuration of millions of lines of k8s yaml.

CUE was created by the same person who wrote the Borg precursor and also worked on BCL & GCL.

mitjam3 months ago

In addition, it is actually hard not to make a template language accidentally Turing complete. Here is an entertaining list of accidentally Turing complete things: https://beza1e1.tuxen.de/articles/accidentally_turing_comple...

shepherdjerred3 months ago

cdk8s + TypeScript is my favorite option.

Here's how I use it: https://github.com/shepherdjerred/homelab/tree/main/cdk8s

https://github.com/cdk8s-team/cdk8s

dventimi3 months ago

> I think proper programming language support is the way to go.

Personally, I would prefer a SQLite database. Ok I'll show myself out.

Aeolun3 months ago

I think you can use Pulumi for helm? Or maybe just straight up kube.

taspeotis3 months ago

Yelling At My Laptop


zikohh3 months ago

Have you tried the rendered manifests pattern? https://akuity.io/blog/the-rendered-manifests-pattern/

granra3 months ago

I read this article a while ago and it seems like the most sane way of dealing with this. Which tool you use to render the manifests doesn't even matter anymore.

rorychatt3 months ago

While I agree generally with the pattern (dynamically generating manifests, and using pipelines to coordinate pattern changes), I could never quite figure out the value of using branches instead of folders (with CODEOWNERS restrictions) or repositories (to enforce other types of rules if needed).

I can't quite put my finger on it, but having multiple, orphaned commit histories inside a single repository sounds off, even if technically feasible.

theptip3 months ago

I believe the idea is that it makes it very explicit to track the provenance of code between environments, e.g. merging staging->master is a branch merge operation. And all the changes are explicitly tracked in CI as a diff.

With directories you need to resort to diffing to spot any changes between files in folders.

That said, there are some merge conflict scenarios that make it a little annoying to do in practice. The author doesn't seem to mention this one, but if you have a workflow where hotfixes can get promoted from older versions (e.g. prod runs 1.0.0, staging is running 1.1.0, and you need to cut 1.0.1), then you can hit merge conflicts and the dream of a simple "click to release" workflow evaporates.

rorychatt3 months ago

> I believe the idea is that it makes it very explicit to track the provenance of code between environments, e.g. merging staging->master is a branch merge operation.

That isn't quite my understanding - but I am happy to be corrected.

There wouldn't be a staging->main flow. Rather, CI would be pushing main->dev|staging|prod, as disconnected branches.

My understanding of the problem being solved is how to see what is actually changing when moving between module versions, by explicitly outputting the dynamic manifest results. I.e., instead of the commit diff showing 4.3 -> 5.0, it shows the actual Ingress / Service / etc. being updated.

> With directories you need to resort to diffing to spot any changes between files in folders.

Couldn't you just review the commit that instigated the change to that file? If the CI is authoring the change, the commit would still be atomic and contain all the other changes.

> but if you have a workflow where hot-fixes can get promoted from older versions

Yeah 100%.

In either case, I'm not saying it's wrong by any stretch.

It just feels 'weird' to use branches to represent codebases which will never interact or be merged into each other.

alfons_foobar3 months ago

Glad I am not the only one feeling "weird" about the separate branches thing :D

Probably just a matter of taste, but I think having the files for different environments "side by side" makes it actually easier to compare them if needed, and you still have the full commit history for tracking changes to each environment.

theptip3 months ago

Sorry, typo, you’re quite right, I meant to say staging->prod is a merge. So your promotion history (including theoretically which staging releases don’t get promoted) can be observed from the ‘git log’. (I don’t think you want to push main->prod directly, as then your workflow doesn’t guarantee that you ran staging tests.)

When I played with this we had auto-push to dev, then click-button to merge to staging, then trigger some soak tests and optionally promote to prod if it looks good. The dream is you can just click CI actions to promote (asserting tests passed).

> Couldn't you just review the commit that instigated the change to that file?

In general though a release will have tens or hundreds of commits; you also want a way to say “show me all the commits included in this release” and “show me the full diff of all commits in this release for this file(s)”.

> In either case, I'm not saying it's wrong by any stretch.

Yeah, I like some conceptual aspects of this but ultimately couldn’t get the tooling and workflow to fit together when I last tried this (probably 5 years ago at this point to be fair).

rorychatt3 months ago

> staging->prod is a merge

I might be misunderstanding what you mean by staging in this case. If so, my bad!

I don't think staging ever actually gets merged into prod via git history, but is rather maintained as separate commit trees.

The way that I visualised the steps in this flow was something like:

  - Developer Commits code to feature branch
  - Developer Opens PR to Main from feature branch: Ephemeral tests, linting, validation etc occurs
  - Dev Merges PR
  - CI checks out main, finds the helm charts that have changed, and runs the equivalent of `helm template mychart`, and caches the results
  - CI then checks out staging (which is an entirely different HEAD, and structure), finds the relevant folder where that chart will sit, wipes the contents, and checks in the new chart contents.
  - Argo watches branch, applies changes as they appear
  - CI waits for validation test process to occur
  - CI then checks out prod, and carries out the same process (i.e. no merge step from staging to production).
In that model, there isn't actually ever a merge conflict that can occur between staging and prod, because you're not dealing with merging at all.

The way you then deal with a delta (like ver 1.0.1 in your earlier example) is to create a PR directly against the Prod branch, and then next time you do a full release, it just carries out the usual process, 'ignoring' what was there previously.

It basically re-invents the Terraform delta flow, but instead of the changes being shown by Terraform comparing state and template, it's comparing template to template in Git.
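
A rough sketch of the render-and-push step, assuming GitHub Actions (branch and file names illustrative):

  on:
    push:
      branches: [main]
  jobs:
    render:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
          with:
            fetch-depth: 0               # we need the env branches too
        - run: |
            helm template my-app charts/my-app \
              -f envs/staging/values.yaml > /tmp/rendered.yaml
        - run: |
            # git identity setup omitted for brevity
            git checkout staging
            mkdir -p apps/my-app
            cp /tmp/rendered.yaml apps/my-app/manifests.yaml
            git add . && git commit -m "render my-app for staging"
            git push origin staging
  # Argo watches the staging branch and applies whatever lands there;
  # prod gets the same treatment after the validation step passes.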

> ultimately couldn’t get the tooling and workflow to fit together when I last tried this

I genuinely feel like this is the bane of most tooling in this space. Getting stuff from 'I can run this job execution on my desktop' to 'this process can scale across multiple teams, integrated across many toolchains and deployment environments, with sane defaults' still feels like a mess today.

edit: HN Formatting

appplication3 months ago

Interesting, we have a system (different context, though it does use yaml) that allows nested configurations, and arrived at a similar solution, where nested configs (implicit/human interface) are compiled to fully qualified specifications (explicit/machine interface). It works quite well for managing e.g. batch configurations with plenty of customization.

I was unaware there was a name for this pattern, thank you.

nijave3 months ago

This pattern is powerful since you can pick arbitrary tooling and easily make modifications with your own tooling. For instance substituting variables/placeholders or applying static analysis.

pmigop3 months ago

This is actually a problem we want to focus on with Glasskube Cloud (https://glasskube.cloud/), where our glasskube[bot] will comment on your GitOps pull request with an exact diff of the resources that will get changed across all connected clusters. This diff will be performed by a controller running inside your cluster.

Think of it as codecov analysis, but just for resource changes.

granra3 months ago

IMO the pull request should be the diff.

https://akuity.io/blog/the-rendered-manifests-pattern/

ForHackernews3 months ago

This sounds like terraform. Is this TF for k8s?

pmigop3 months ago

That's an interesting analogy, but I think it's a stretch.

sandwitches3 months ago

The solution is to not use Kubernetes.

llama0523 months ago

This looks like an interesting take on package management. Would be cool for homebrew clusters and the like.

However, something like helmfile with Renovate paired with a pipeline is my personal preference, even if just for ensuring things remain consistent in a repo.

The `update all` button, for instance, seems terrifying on a cluster that means anything at all. Nonetheless, it's still cool for personal projects and the like!

The package controller reminds me a lot of Tiller from older versions of Helm; it became a big security issue for a lot of companies, so much so that Helm 3 removed it and did everything client-side via ConfigMaps. Curious how this project plans on overcoming that.

pmigop3 months ago

Thanks for your input, let me comment on your points one by one.

> However, something like helmfile with Renovate paired with a pipeline is my personal preference, even if just for ensuring things remain consistent in a repo.

Glasskube packages can also be put inside a GitOps repository, as every package is a CR (custom resource). They can even be configured via the CLI using the `--dry-run` and `--output yaml` flags and then put into Git. In addition, we are working on a pull request to support package updates via Renovate: https://github.com/renovatebot/renovate/issues/29322

> The package controller reminds me a lot of Tiller from older versions of Helm; it became a big security issue for a lot of companies, so much so that Helm 3 removed it and did everything client-side via ConfigMaps. Curious how this project plans on overcoming that.

As Helm 3 is now a client-side-only tool, it can't enforce any RBAC by itself. OLM introduced Operator Groups (https://olm.operatorframework.io/docs/advanced-tasks/operato...), which introduce permissions at the operator level. We might introduce something similar for Glasskube packages. Glasskube itself will still need to be quite powerful, but we can then scope packages and introduce granular permissions.

0xbadcafebee3 months ago

Application packages are (traditionally) versioned immutable binaries that come with pre- and post-install steps. They are built with a specific platform in mind, with specific dependencies in mind, and with extremely limited configuration at install time. This is what makes packages work pretty well: they are designed for a very specific circumstance, and allow as little change as possible at install time.

Even with all that said, operating system packages require a vast amount of testing, development, and patching, constantly, even within those small parameters. Packages feel easy because potentially hundreds of hours of development and testing have gone into that package you're installing now, on that platform you're on now, with the components and versions you have now.

Kubernetes "packages" aren't really packages. They are a set of instructions for which components to install and configure, which often involves multiple distinct sets of applications. This is different in a couple of ways: 1) K8s "packages" are often extremely "loose" in their definition, leading to a lot of variability, and 2) they are built by all kinds of people, in all kinds of ways, making all kinds of assumptions about the state of the system they're being installed into.

There's actually multiple layers of dependencies and configuration that have to come together correctly for a Kubernetes "package" to work. The K8s API version has to be right, the way the K8s components are installed and running have to be right, the ACLs have to be right, there has to be no other installed component which could conflict, the version of the components and containers being installed by the package need to be pinned (and compatible with everything else in the cluster), and the user has to configure everything properly. Upgrades are similarly chaotic, as there's no sense of a stable release tree, or rolling releases. It's like installing random .deb or .rpm or .dmg files into your OS and hoping for the best.

Nothing exists that does all of this today. To make Kubernetes packaging as seamless as binary platform-specific packaging, you need an entire community of maintainers, and either a rolling-release style (à la Homebrew) or stable versioned release branches. You basically need a project like ArtifactHub or Homebrew to manage all of the packages in one way. That's a big undertaking, and in no way profitable.

pmigop3 months ago

We started out similar to Homebrew by putting packages inside our "core" Glasskube package repository (https://github.com/glasskube/packages), where all updates are centrally stored and tested by our CI/CD workflows, so users can enjoy tested and seamless upgrades. Users can of course host their own private repositories (and packages), but we want to provide an opinionated set of packages ourselves.

Building packages for different Kubernetes versions or environments is something we have also thought about; it will need to happen at some point so we can bake more configuration into the build step.

verdverm3 months ago

I have a hard time seeing how a k8s package manager could ever be as simple as brew or apt. One reason that stands out is that I have different values depending on what environment I'm targeting, and almost every user has a snowflake; that's just the way it is. The idea of a REPL-like prompt or web UI for setting those values is not appealing to me.

The main pains remain unaddressed:

- authoring helm charts sucks

- managing different values per environment

- connecting values across charts so I don't have to
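
That last one, concretely: today the same fact gets hand-copied into the values files of two different charts (names illustrative):

  # values for the app chart
  database:
    host: pg-main.db.svc.cluster.local

  # values for the metrics chart, same fact copied by hand
  datasource:
    postgres:
      host: pg-main.db.svc.cluster.local

  # Rename the database cluster and you must remember to update both.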

pmigop3 months ago

We are absolutely seeing a lot of snowflake clusters in the wild. This is also a hot topic at all the cloud-native conferences I have attended lately.

Platform teams try to create internal developer platforms to further standardize Kubernetes configurations across teams and clusters, where developers can make only minor modifications. In my experience, the goal is to reduce snowflake configurations. This is also a reason why we created Glasskube in the first place.

> - authoring helm charts sucks

Yes, 100%, and we are on a mission to change this in the future.

> - managing different values per environment

Glasskube packages are still configurable, but come with meaningful default values.

> - connecting values across charts so I don't have to

This is already possible: you can easily reference configuration values from other packages via Glasskube, so you don't need to provide the same values multiple times.

verdverm3 months ago

> This is already possible: you can easily reference configuration values from other packages

This misses the point: what we need is something more like Terraform, which has a way to get dynamic information from resource values that are assigned by the system. One such example would be the secret that the postgres operator generates, which the API server that needs access must use to connect.
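
What we have today with plain Kubernetes primitives: the consumer hardcodes the conventional name of the operator-generated Secret (names illustrative):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api-server
  spec:
    selector:
      matchLabels: {app: api-server}
    template:
      metadata:
        labels: {app: api-server}
      spec:
        containers:
          - name: api
            image: example/api:1.0
            env:
              - name: DATABASE_URL
                valueFrom:
                  secretKeyRef:
                    name: pg-main-app  # created by the operator; known
                    key: uri           # only by naming convention
  # There is no way to say "whatever secret the operator produced".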

> > - managing different values per environment

Most charts already come with meaningful defaults. The issue is that you need simpler defaults for multiple environments that the user doesn't have to think about. There ought to be some higher-level information coming into the pipeline that tells the tool what environment I'm working with and assigns certain values automatically.

> Yes, 100%, and we are on a mission to change this in the future.

Configuration needs a proper language. Please avoid YAML and anything bespoke. There are a few configuration languages emerging; CUE is my personal pick in the horse race.

pmigop3 months ago

Thanks for clarifying your points.

> This misses the point: what we need is something more like Terraform, which has a way to get dynamic information from resource values that are assigned by the system. One such example would be the secret that the postgres operator generates, which the API server that needs access must use to connect.

It is also already possible to inject values from secrets at runtime. You can, for example, create a Glasskube package that has a dependency on CNPG, add a `cluster.yaml` to your package, and then dynamically patch the connection string (or credentials) from it into your deployment.

See the "ValueFrom" section of our configuration documentation for the exact inner workings: https://glasskube.dev/docs/design/package-config/

verdverm3 months ago

> See the "ValueFrom" section of our configuration documentation

... this is what we have today. How do I know what value to patch in from, i.e. what is the name of the secret?

Looking at that link makes me think this is like another layer of Helm on Helm, especially with the same Go template values in YAML that are going to be fed into Helm templates under the hood.

Putting more YAML on top of templated YAML is not the way to create the next package manager for k8s.

linuxdude3143 months ago

Agreed, this makes little sense to me.

Fundamentally there’s no such thing as a k8s “package”. OLM is great for packaging operators, but I don’t see why we need yet another Helm.

It was a mistake that shouldn’t be repeated.

lars_francke3 months ago

We[1] build a lot of Kubernetes operators[2] and deal a lot with Helm and OLM issues. Good luck!

One immediate question: Your docs say "Upgrading CRDs will be taken care of by Glasskube to ensure CRs and its operators don't get out-of-sync." but searching for "CRD" in your docs doesn't lead to any concrete results.

This is one of our biggest pain points with Helm right now. Can you share your plans?

[1] <https://stackable.tech/en/>

[2] <https://www.youtube.com/watch?v=Q8OSYOgBdCc>

pmigop3 months ago

Packages available in the public Glasskube repo are configured in a way that makes sure changes in CRDs get applied (either via manifests or the helm-controller).

We will update the docs though.

revel3 months ago

In my opinion, Kubernetes is fundamentally hamstrung by the overly simplistic operator model. I really like the general idea, but it's not really possible to reduce the entire model down to "current state, desired state, next action." It means that an entire workflow ends up in the next-action logic, but with so many operators looking at the same system state, it's not really possible to know how the various components will interact. The problems with Helm are a subcase of this larger issue.

By analogy, this is the same issue as frontend programming faces with the DOM. Introducing a VDOM / reducer paradigm (like react) would go a long way towards solving these problems.

ants_everywhere3 months ago

> it's not really possible to reduce the entire model down to "current state, desired state, next action."

This is basically how control theory works in general though. You have a state, a goal, and a perturbation toward the goal. I think this is the right level of abstraction if you want a powerful and flexible tool.

> it's not really possible to know how the various components will interact... Introducing a VDOM / reducer paradigm (like react) would go a long way towards solving these problems.

I think the problem here is that the physical characteristics and layouts of the machines make such a huge difference that it would be prohibitively costly to virtualize or simulate this in a meaningful way. So instead, people use subsets of the physical structure to verify that configuration states work. You do this by having `dev`, `staging`, and `prod` environments, using colored deployments, canary analysis, partial rollouts, etc.

lucianbr3 months ago

> I think this is the right level of abstraction if you want a powerful and flexible tool.

This says nothing about ease of use. And for software development, ease of use matters. Otherwise we would all use assembler, or at most C++. They're very powerful and flexible.

If anything, too much power and flexibility is a problem.

linuxdude3143 months ago

If you need something like this to use k8s you’re probably better off with a different solution.

Kubernetes is not intended to be something you use with no background. It's hard, and without building a full-blown PaaS there's no getting away from that complexity.

k8sagic3 months ago

I think you are marketing this thing wrong.

This has very little to do with Helm. For me, Helm is primarily not a package manager; it's a templating language, and configuring and installing charts to a k8s cluster happens through Kubeapps, the Helm CLI, or ArgoCD.

This approach also kills, for me, the really awesome IaC paradigm: I bootstrap ArgoCD and after that reference only Git repos.

Your demo doesn't talk about HOW someone would use your templating features (like your 'form' support) but shows everything besides that.

I honestly like this, as I still have the feeling that something is wrong with Helm, but the way you are approaching it, I think it will fail. It will not gain enough traction, as bigger companies do not need your tooling. Kubeapps works really well, and Helm too (if you want to replace Helm, you will probably keep the Helm support in there for a long time).

The problems Helm has: it gets convoluted when Helm charts become big, the templates folder is a shit place for basically having everything in there, YAML is not that good for templating, and values.yaml files become way too big.

pmigop3 months ago

Our demo is more end-user focused. You can find more information about our configuration options in our package configuration documentation: https://glasskube.dev/docs/design/package-config/

k8sagic3 months ago

So you combine helm with kustomize patching?

That just solves a subset of the issues Helm and Kustomize have right now.

pmigop3 months ago

Yes, we do; in fact, you can find the exact comparison here: https://glasskube.dev/docs/comparisons/helm/

redrove3 months ago

I don’t mean for this to sound condescending or dismissive, BUT if you don’t think Helm is primarily a package manager you haven’t worked with infrastructure deployed in k8s much.

k8sagic3 months ago

I'm running a 500-node cluster, 3 private ones, and 5 in an open-source context.

I see this primarily from a business/ops perspective, and I do not install Helm charts manually through the CLI except for testing.

We provide Kubeapps as the package manager / interface for providing Helm charts, circumventing the package manager features of Helm.

For smaller use cases, we use ArgoCD for IaC, and the Helm charts are only there to have a package to reference. Again, no usage of Helm as a package manager.

redrove3 months ago

I’ve been doing this for about 8 years and have seen a few hundred clusters at dozens of orgs, been using GitOps for the last 4ish years.

Invariably they all involved using Helm as a package manager to deploy off-the-shelf infrastructure with minor adjustments. I still don't see your point; we can just agree to disagree.

k8sagic3 months ago

I'm happy to discuss this topic and to clarify it.

For me, the 'package manager' aspects are more than the templating and having a zip file; it is more what apt etc. do.

So: using Helm and its remote repositories, using the Helm CLI, etc.

But we use Kubeapps or ArgoCD to install the Helm packages, and we download all Helm charts before we deploy them (due to security requirements).

We leverage 100% IaC. Therefore we bootstrap ArgoCD and then install everything through ArgoCD. Only Helm charts for our customers/colleagues are installed through Kubeapps.

redrove3 months ago

I don't doubt your experience and what you do at your org; yeah, that falls into a bit of a different category since you use Kubeapps.

I guess what I’m saying is: in my experience across many organizations, helm is indeed treated more like a package manager (like I described above). Your workplace seems to be in the minority.

Hope that made sense :)

tflinton3 months ago

The selling point of being faster than Helm isn't a very big draw to me. I never felt the problem with Helm was its speed.

slipheen3 months ago

This looks interesting, thanks for sharing it.

Feel free to disregard, but it would help me understand if you briefly explain how this fits in with / compares to existing tools like argocd.

I watched your video and I saw that Argo was one of the tools you were installing, so clearly this is occupying a different niche, but I'm not sure what that is yet :)

pmigop3 months ago

ArgoCD is a great tool for syncing the state from your GitOps repository to your cluster; it helps by visualizing the installed resources and surfacing potential errors.

It is often used by developers to get a glimpse of the state of a company's core applications without cluster access.

Glasskube focuses on the packages your core application depends on: managing the life cycle of these infrastructure components, testing updates, and providing upgrade paths. You can still put Glasskube packages into your GitOps repo and sync them via ArgoCD into the cluster. Our PackageController will do the rest.

speedgoose3 months ago

Do you plan to eventually support alternatives to the mix of Go templates in YAML? It's my main issue with Helm charts, and I dream about Pkl Helm charts.

verdverm3 months ago

I dream of Helm adopting CUE as an option for chart templates and values. They are both written in Go, which makes the integration a possibility. I've played around with a wrapper that renders the CUE to YAML before running Helm, which effectively means Helm is deploying hardcoded templates, lifting the values/templates merging out of Helm proper.

kingcan3 months ago

Have you tried https://timoni.sh/?

pmigop3 months ago

Yes, but no concrete plans.

We already looked into Pkl, but this would require every package author to either have Java (and pkl) running on their system, or we would need to package the JRE (and pkl) in order to make it work properly. But Kubernetes examples are already out there (https://github.com/apple/pkl-k8s-examples) and we are keeping an eye on it.

mdaniel3 months ago

I don't believe that's true; they have pkl in other languages too, and I double-checked that it doesn't require the JRE: https://news.ycombinator.com/item?id=40146077 (tl;dr = https://github.com/apple/pkl-go/blob/v0.6.0/.circleci/config... which shows both pkl-go as well as how they themselves use a single binary in a CircleCI setup)

I haven't tried to integrate pkl-go into something like Glasskube, so I am open to that part being painful, because software gonna software, but I believe the general statement of pkl being Java-only is incorrect.

gtirloni3 months ago

Same here. It's where I spend 90% of my time working with Helm charts and I absolutely despise it.

woile3 months ago

Has anyone tried https://github.com/stefanprodan/timoni ? Seems like a good alternative to Helm.

verdverm3 months ago

I tried it early on, being a big CUE fan.

Pros:

- comes from someone with deep k8s experience

- has features for secrets and dynamic information based on k8s version and CRDs

- thinks about the full life-cycle and e2e process

Cons: (at the time)

- holds CUE weird: there are places where they overwrite values (Helm-style), which is antithetical to the CUE philosophy. This was rationalized to me as keeping with the Helm mindset rather than using the CUE mindset, because it is what people are used to. I think this misses the big opportunity for CUE in k8s config.

- has its own module system that probably won't integrate with CUE's (as it stands today); granted, CUE's module system wasn't released at the time, but it seems the intention is to have a separate system, because the goal is to align with k8s more than with CUE

- Didn't allow for Helm modules to be dependencies. This seems to have since changed, but requires you to use FluxCD (?)

I didn't ever adopt it because of the philosophical differences (me coming from CUE, Stefan from k8s). I have seen others speak highly of it; it's definitely worth checking out to see if it is something you'd like. I have plans for something similar once an internal Go package in CUE is made publicly available (https://github.com/cue-lang/cue/commits/master/internal/core...). The plan is to combine the CUE dep package with OpenTofu's graph solver to power a config system that can span across the spaces.

geiser3 months ago

Reviewing their docs I found this: https://glasskube.dev/docs/comparisons/timoni/ (I have personally never worked with Timoni.)

fsniper3 months ago

When I check the packages repo, I see that they are YAML files linking to Helm charts. Is this only capable of deploying Helm packages? Is there a packaging solution included too?

pmigop3 months ago

> We are still using Helm and Manifests under the hood. However, together with the community, we plan to develop an entirely new packaging and bundling format for all cloud-native packages.

Yes, at the moment Glasskube packages wrap around Helm charts and manifests. We don't have a dedicated packaging format as of now.

We are actively looking into OCI images and other possibilities for bundling Kubernetes packages, but are focusing on features like multi-namespace dependencies and simplicity at the moment.

How would you like Kubernetes packages to be packaged, and what should be avoided, from your perspective?

ryanisnan3 months ago

This is the worst of both worlds.

Helm, as an abstraction layer, is a real pain in the ass. Having another abstraction layer here is pure madness.

I wish you luck, but I do not wish to board your boat.

fsniper3 months ago

This was what I understood too.

As a power Kubernetes user providing a Kubernetes-based PaaS to internal customers, we are not looking for more GUIs or abstractions over Helm.

There are already a lot of solutions out there in this area, like Helmsman, ArgoCD, helmfile, and Crossplane's helm provider. And we prefer Kubernetes resources + GitOps-based automation over any other fancy tools.

Most of the time, the problems around Helm are its string-based templating and lack of type safety. That's why Timoni looks like a more promising solution in this space. Its lack of packages is the limiting factor.

Another interesting approach is kapp-controller and the Carvel tooling. Packaging Helm charts, OCI images, etc. as OCI artifacts to use as a combined source is really interesting. We were considering using kapp-controller; however, our current dependence on Helm and some architectural concerns caused us to pass on it for now.

As to the question of what the selling points of a new "package manager" could be:

* Timoni-like packages that have templating with type safety, or

* A large package pool, or

* An abstraction over Helm packages that could add type safety, or better yet, automatic or at least semi-automatic conversion of Helm charts (one can dream ;))

* Full management through the Kubernetes API / CRDs

* Multicluster management, or fleet management

meling3 months ago

YAML files should be avoided.

noisy_boy3 months ago

Google makes so many things - I wonder why they didn't bother to build a simple programming language that is specifically suited for managing configs.

q3k3 months ago

There's Starlark...

...and CUE, and Jsonnet...

...and probably no-one wants to see BCL/GCL see the light of day :)

danmur3 months ago

"Inspired by the simplicity of Homebrew and npm" had me until here :P

impalallama3 months ago

Definitely looks like a solid improvement over Helm when it comes to just downloading and installing Kubernetes packages, but it's still too early for me to have a solid opinion, since the aspect of actually building and distributing your own chart doesn't seem to have been tackled, which is basically 90% of the use case for Helm, despite it calling itself a package manager, ha.

necubi3 months ago

As someone with an open-source project that ships a Helm package, I wish you the best of luck. Helm has many flaws, and we can definitely do better.

It sounds like your pitch is focused on users, but I think you might want to think about how to attract a package ecosystem. Some things that would make my life easier as a packager:

* A statically typed config language. It's insane that we're generating YAML files with Go templates. I'm a big fan of jsonnet, but whatever it is, it should not be possible to have errors at runtime.

* A better way to document to users the various options and how they play together. For any moderately complex package it becomes very challenging to communicate how to configure the chart for your environment

* Better testing infrastructure; am I creating valid k8s resources given all of the possible configuration options?

cbanek3 months ago

I've had to use helm and argocd in anger at work, and I feel like the real problem is not helm, nor argocd. They are tricky to learn but in general pretty straightforward once you get it.

What was really annoying was the constant moving and changing of the YAML that Kubernetes wanted. After you update your cluster and things stop working, it's really not about the layers on top; it's about Kubernetes and having to keep up with the new beta versions and the sunsetting of other things.
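
The canonical example of that churn (these are real API versions):

  # Served until Kubernetes 1.21, removed in 1.22:
  apiVersion: extensions/v1beta1
  kind: Ingress
  ---
  # Its replacement, GA since 1.19, with a changed spec shape
  # (backend.serviceName became backend.service.name, etc.):
  apiVersion: networking.k8s.io/v1
  kind: Ingress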

TheDong3 months ago

> I feel like the real problem is not helm, nor argocd ... What was really annoying was the constant moving and changing of the YAML that Kubernetes wanted.

That to me sounds like you're angry at helm and argocd, but don't realize it.

The Kubernetes apiserver publishes all the resources it supports, including custom resource definitions, with typed specifications that can be used to validate what you're submitting client-side.

If Helm weren't a dumb layer of YAML templating, it could tell you locally, like compiling a typed programming language, "this chart won't work on your cluster because you have the beta version of this CRD and need the alpha version", or it could even transform things into the correct version.

The Kubernetes API provides everything that's needed to statically verify what Groups/Kinds/Versions exist, and tooling like Helm is just too dumb to work with it.

arccy3 months ago

That's what the --api-versions flag in Helm is for. If you look at the Helm commands ArgoCD uses, you'll see a very long string of flags that pass every available API version to Helm.

rad_gruchalski3 months ago

We started with some pretty cool Jsonnet-based build infrastructure and we're pretty happy with it. Grafana Tanka is also okay; there's tooling to generate Jsonnet libraries for anything that has a CRD, and there's jsonnet-bundler for package management.

One can throw Flux and Git* actions or what not in the mix. The outcome is a boring CI/CD implementation. Boring is good. Boring and powerful because of how Jsonnet works.

It's pretty neat in the right hands.

redbackthomson3 months ago

First off I just wanted to say I think it's great that you're attempting to tackle the problem that is Kubernetes package management. I work at a Kubernetes SaaS startup and spend many hours working with YAML and Helm charts every day, so I absolutely feel the pain that comes with it.

That being said, I'm confused as to where Glasskube is positioned in solving this problem. In the title of this post, you are claiming Glasskube is an "alternative to Helm", although in your documentation you have a "Glasskube vs Helm" guide that explicitly states that "Glasskube is not a full replacement of Helm". I'm trying to understand how these two statements can both be true. To make things more confusing, under the hood Glasskube repositories appear to be a repackaging of a Helm repository, albeit with a nicer UI.

From what I've gathered after reading the docs, Glasskube is being positioned as an easier way to interact with Helm charts - offering some easy-to-use tooling for upgrades and dependency management. To me, that doesn't exactly feel like it replaces Helm, but simply supplements my use of it, because it doesn't actually combat the real problems of using Helm.

My biggest pain points, some of which I don't think Glasskube is addressing, that I think are at the crux of switching off Helm:

- The arbitrary nature of how value files are laid out - every chart appears to have its own standards for which fields should be exposed and the nomenclature for exposing them

- Helm releases frequently get stuck when updating or rolling back, and can't be fixed without being uninstalled and reinstalled

- I need to reference the Helm chart values file to know what is exposed and what values and types are accepted (Glasskube's schema'd values files do address this! Yay!)

Apart from the Helm chart values schema, I don't think Glasskube solves these fundamental problems. So I'm not sure why I would spend the large amount of effort to migrate to this new paradigm if the same problems could still cause headaches.

Lastly, I would also concur with @llama052's comment, that an "update all" button will always be forbidden in my, and probably most other, companies. Considering the serious lack of standardisation that comes with Helm chart versioning (whether the app version changes between charts, whether roles or role bindings need to be updated, whether values have been deprecated or their defaults have changed, etc.), it's incredibly risky to update a Helm chart without understanding the implications that come with it. Typically our engineers have to review the release notes for the application between the two Helm chart versions, at least test in dev and staging for a few days, and only then can we feel comfortable releasing the changes - one chart at a time. Not to mention that if you are in charge of running a system with multiple applications, you probably want to use GitOps, and in that case a version upgrade would require a commit to the Git repository and not just a push of a button on the infra IDP.

leroman3 months ago

Somehow I am able to get by with Kustomize. Can't stand the mess that is Helm and its ecosystem.

joshuak3 months ago

Agreed.

Adding black boxes on top of black boxes is not a good way to abstract complexity. Helm does nothing more than any template engine does, yet requires me to trust not only the competency of some random chart author but also that they will correctly account for how my k8s environment is configured.

When I inevitably have to debug some deployment, now I'm digging through not only the raw k8s config, but also whatever complexity Helm has added on to obfuscate that k8s config complexity.

Helm is an illusion. All it does is hide important details from you.

cortesoft3 months ago

I always see comments like this, and as someone who used Kustomize first and then moved to Helm, it really doesn't fit with my experience.

I found Kustomize extremely annoying to work with. Changing simple configuration options required way too much work.

Take a simple example: changing the URL value of an ingress. This is something every deploy is going to have to set, since it will be different for every cluster.

In Kustomize, I first have to find the ingress resource, then recreate the nesting properly in my kustomize file, and then repeat that for every deployment.

In Helm, I just change one entry in a values file that is clearly named, so I know what I am setting.
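
Concretely (hostname illustrative), the Helm side is one clearly named line:

  # values.yaml
  ingress:
    host: app.eu1.example.com

  # vs. the Kustomize side: a strategic-merge patch that recreates the
  # resource's nesting, referenced from kustomization.yaml, per deployment
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app
  spec:
    rules:
      - host: app.eu1.example.com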

In addition, it is REALLY hard to refactor resources once there are a lot of Kustomization files. If I want to redo how the deployment works, I have to change every Kustomize file that any other repo using my project uses. If I have a lot of other people who are pulling in my project and then using kustomize on top of it, we have to coordinate any changes, because changing the structure breaks all Kustomizations.

With Helm, as long as I keep the same values file structure, I am free to move things around however I want. I can use values in completely new locations without having to change anything about the values files themselves.

I just don't see how it is easier. I find it a lot easier to read a default values file and figure out what every setting does rather than read 20 k8s yaml files trying to figure out what does what.

In some ways, I kind of feel like the Kustomize enthusiasts LIKE the things I find annoying about it; they think you SHOULD have to read every resource and fully understand it, and they don't want anyone to be able to change anything without every kustomizer also changing things. I get the theory that everyone should know the underlying resources, but in practicality I find Kustomize to be the wrong level of abstraction for what I want to do.

k8sToGo3 months ago

Do you rewrite all the apps that are out there as your own Kustomize configurations, or what do you mean by "its ecosystem"?

leroman3 months ago

Kustomize is basically a higher-level file for K8s deployments. I have all the resources as declarative code that gets deployed when I apply the relevant directory: Istio + SSL certs + services and any other resource, multiple projects with cross-project communication and provisioning, etc.

leroman3 months ago

By "ecosystem" I mean all the charts that get shared; whenever I try to look under the hood I instantly regret it.

lmm3 months ago

So what do you do when you need to deploy something a bit fiddly with multiple components (e.g. Cassandra) that would just be importing a shared chart in Helm? Do you rewrite that yourself in Kustomize?

leroman3 months ago

Mostly look for an operator for that

epgui3 months ago

My immediate question is whether/how this can be an improvement over using the Terraform Helm provider.

If I need a better GUI anywhere, it’s probably in EKS, or something that makes working with EKS a bit less painful.

pmigop3 months ago

Helm (also in combination with Terraform) is a client-side tool for deploying charts to your cluster.

Glasskube, on the other hand, is a package manager where you can look up, install, and configure packages via a CLI and UI, and overcome some of the shortcomings of Helm.

fsniper3 months ago

I think there is a fundamental misunderstanding about what a package manager is.

From Wikipedia: "A package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer in a consistent manner." https://en.wikipedia.org/wiki/Package_manager

Helm is a package manager, as it consistently:

* Can pull and deploy applications via packages

* Can manage (upgrade/reconfigure/delete) deployed applications

* Can search and find Helm charts

So the difference is it lacks a GUI? AFAIK, a GUI was never a requirement for a package manager.

And another perspective is: as Glasskube does not provide a packaging mechanism and uses Helm in the backend (established in another question, to which I'll also reply), it's not really a package manager but a frontend to another one. (Example: dpkg is the package manager; apt-get/apt/aptitude are frontends.)

Also, IMHO, considering the CNCF landscape, Glasskube is positioned more as a Continuous Delivery tool than a package manager. But this is my take.

epgui3 months ago

Isn't Helm typically described as a package manager for Kubernetes?[0][1][2]

But more importantly, what I'm getting at is that with Terraform I get infrastructure as code.

[0] "The package manager for Kubernetes" https://helm.sh/

[1] "Get up to speed with Helm, the preeminent package manager for the Kubernetes container orchestration system." https://www.oreilly.com/library/view/learning-helm/978149208...

[2] "Helm is a package manager for Kubernetes." https://en.wikipedia.org/wiki/Helm_(package_manager)

SOLAR_FIELDS3 months ago

Helm is actually three things

- package manager

- templating engine

- deployment tool

You’ll hear various opinions on how good it is at each of these roles. In my personal experience it is a decent package manager, a poor but serviceable templating engine, and a horrifically bad deployment tool.

Normally you pair Helm with something like Flux or Argo if you want IaC

epgui3 months ago

That was just the comment I needed, just at the moment I needed it. Thanks!

SOLAR_FIELDS3 months ago

Some additional context:

IMO Terraform is probably not the right tool for the job of managing deployments. It can do it, but like Helm itself, it's also not super great at doing cluster deployments. If you're looking for a good GUI-like experience, Argo is a good option.

I like Terraform for managing infra, and it's good at a lot of things, but managing deployments on a cluster with IaC is not one of them. Why? Mainly because deployments are much more dynamic than infrastructure, and the amount of throat-clearing required for Terraform to perform a state diff is much, much higher than with other tech. Much better to look at the tech I mentioned (Argo and Flux) for that, because they do state diffs for these things in milliseconds. I'll leave it to the reader to figure out how long Terraform takes to do this.

It's possible to go entirely in the "everything is a kube manifest" direction using technologies like Crossplane and Cluster API and (for AWS) technologies like ACK. But I don't think these technologies are entirely there yet in terms of maturity, so in my recent designs I usually settle for provisioning the cluster and doing initial bootstrapping with Terraform before mostly handing off to Argo for deployment, but then doing this weird counterbalance of having to go back to Terraform when infra stuff is necessary. I can see a future world, however, where you bootstrap management clusters with something like Terraform but then basically everything else, both infra (clusters, buckets, IAM, etc.) and deployments (Helm), is declarative through tech like Argo and Crossplane provisioning.
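
For reference, the thing you hand off to is just a declarative Application resource like this (names illustrative):

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: my-app
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://github.com/example/deployments
      targetRevision: main
      path: apps/my-app
    destination:
      server: https://kubernetes.default.svc
      namespace: my-app
    syncPolicy:
      automated:
        prune: true  # Argo reconciles continuously; no plan/apply cycle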

The tough part right now is when you have application devs who need to provision infrastructure and then deploy on top of it. Right now that looks like asking your developer to write some app-specific Terraform (S3/IAM/KMS/Redis/whatever) and then deploying their app on top of it with Argo or Flux or whatnot. The ideal maybe looks like using the same tech for both eventually, as well as even provisioning the cluster that the stack runs on with the same tech.

hodgesrm3 months ago

I see that Glasskube is licensed under Apache 2.0. Do you plan to contribute it to the CNCF? Many of us are very skeptical of base infra that is not protected by a foundation.

pmigop3 months ago

The problem with contributing Glasskube to the CNCF at the moment is that we would also need to contribute our logo and trademark, something we are currently not willing to do. We look at Grafana as a successful example that did not contribute its core product to the CNCF and is still widely adopted.

Cyphus3 months ago

Looks like they were accepted into the CNCF Landscape at least: https://glasskube.dev/blog/cncf-landscape/

curiousdeadcat3 months ago

Nickel feels like my end game for this kind of thing.

