

Garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.
Here is an example of authentik deployed using helm and fluxcd.
Firstly, I want to say that I started with podman (an alternative to docker) and ansible, but I quickly ran into issues. The last issue I encountered, and the last straw, was that after creating a container, Ansible would not actually change it unless I used ansible to destroy and recreate the container.
Without quadlets, podman manages its own state, which has issues, and that was the entire reason I was looking into alternatives to podman for managing state.
More research: https://github.com/linux-system-roles/podman: I found an ansible role to generate podman quadlets, but I don't really want to include ansible roles in my existing ansible roles. Also, it takes kubernetes yaml as input, which is very complex for what I am trying to do. At that point, why not just use a single node kubernetes cluster and let kubernetes manage state?
So I switched to Kubernetes.
To answer some of your questions:
Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible ??
So what I (and the industry) use is called "GitOps". Essentially, you have a git repo, and the software automatically pulls the git repo and applies the configs.
Here is my gitops repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options like Rancher's Fleet or ArgoCD, the most popular one.
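To give a rough idea of what "the software pulls the repo and applies it" looks like, here's a minimal FluxCD sketch (the repo url, names, and paths are made up for illustration, not copied from my config):

```yaml
# Tell flux which git repo to watch...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-config
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example/flux-config   # placeholder repo
  ref:
    branch: main
---
# ...and which folder of manifests in it to apply to the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-config
  path: ./apps        # folder of yaml manifests in the repo
  prune: true         # remove things from the cluster when they're deleted from git
```

Once that's in place, changing a yaml file in git is all it takes to change the cluster.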
As a tip, you can search github for pieces of code to reuse. I usually do path:*.y*ml keywords keywords to search for appropriate pieces of yaml.
I see little to no example on how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
So the first issue is that Kubernetes doesn’t really have “containers”. Instead, the smallest controllable unit in Kubernetes is a “pod”, which is a collection of containers that share a network device. Of course, pods for selfhosted services like the type this community is interested in will rarely have more than one container in them.
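To make that concrete, here's roughly what a bare single-container pod looks like (the image and ports are just illustrative, and in practice you'd usually wrap this in a Deployment instead of creating pods directly):

```yaml
# A single-container pod: the smallest unit kubernetes will schedule.
apiVersion: v1
kind: Pod
metadata:
  name: pihole
spec:
  containers:
    - name: pihole
      image: pihole/pihole:latest   # illustrative image tag
      ports:
        - containerPort: 80   # web UI
        - containerPort: 53   # DNS
```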
There are ways to convert a docker-compose to a kubernetes pod.
But in general, Kubernetes doesn’t use compose files for premade services, but instead helm charts. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.
Even official doc seems broken. Am I really supposed to run many helm commands (some of them somehow just fail) and try and get ssl certs just to have Rancher and its dashboard
So what you're supposed to do is deploy an "ingress" (k3s comes with traefik by default), and then use cert-manager to automatically get letsencrypt certs for ingress "objects".
Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it was also compatible with other ingress software.
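As a rough sketch of what that looks like (the hostname, service name/port, and the "letsencrypt" ClusterIssuer name are assumptions, not from my config):

```yaml
# An ingress annotated so cert-manager fetches a letsencrypt cert for it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navidrome
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumes a ClusterIssuer with this name exists
spec:
  ingressClassName: traefik    # k3s ships traefik by default
  rules:
    - host: music.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: navidrome
                port:
                  number: 4533
  tls:
    - hosts:
        - music.example.com
      secretName: navidrome-tls   # cert-manager creates/fills this secret
```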
Although it seems complex, I've come to really, really love Kubernetes because of features mentioned here. Especially the declarative part, where all my services are defined as code in a git repo.
No, because it has a "termination clause", where if Watcom is suing you, you can't use the software anymore while you are being sued.
https://en.m.wikipedia.org/wiki/Sybase_Open_Watcom_Public_License
See the first bit, and the linked discussion by Debian developers.
I actually tried this right after I made this post, and it was nowhere near as smooth as I wanted. KDE would put the window that I had assigned to all desktops on top whenever I would switch virtual desktops.
I found a solution though: it looks like mpv has support.
Maybe nginx proxy manager can do this.
I took a look through the twitter, which someone mentioned in another thread.
Given the 4chan-like aesthetic of your twitter post, I decided to take a look through the boards, and it took me less than a minute to find the n word being used.
Oh, and all the accounts are truly anonymous, rather than pseudonymous, which must make moderation a nightmare. Moderation being technically possible doesn't make it easy or practical to do.
I don’t want an unmoderated experience by default, either.
No, I’m good. I think I’ll stay far away from plebbit.
To be pedantic, lemmy is federated, rather than decentralized (e.g. a direct p2p architecture).
With decentralization, moderation is much harder than with federation, so many people aren't fans.
I’m not spotting it. “AI” is only mentioned once.
The key and secret in the docker compose don't seem to be API keys, but keys for directus itself (which, upon a careful reread of the article, I realize is not FOSS, which might be another reason people don't like it).
Directus does seem to have some integration with openai, but it requires at least an api key and this blog post doesn’t mention any of that.
The current setup they are using doesn’t seem to actually connect to openai at all.
There's only one project that provides truly static/relocatable python that works on both glibc and musl: https://github.com/leleliu008/python-distribution
There is also the python provided by APE/cosmo, and they have two other distributions containing various goodies: pypack1 and pypack2. https://cosmo.zip/pub/cosmos/bin/
But this came at the cost of discontinuing support for Android & Windows
I don't care about android support, except for the competition, and I don't really know about Windows support. Right now, RDP is used to authenticate to and manage the machines, but maybe a portable VNC we could quickly spin up, so more than one person can be on the same machine, would be useful.
My original thought was to replace insecure services in place with secure ones via something like docker containers or nix. But I think many of the machines have too little ram for the bundled libraries of those services to be viable. I actually tested replacing apache, but it simply wouldn't launch (I think the machine only had 2 GB of ram?).
There are a few reasons why I really like it being public, even though it means I have to be careful not to share sensitive stuff.
This isn't exactly what you want. But I use a static site generator called quarto, with a fulltext search engine (that operates entirely locally!). (Although there are other options.)
Although I call it a “blog”, it really is more of a personal data dump for me, where I put all my notes down and also record all my processes as I work through projects. Whenever I am redoing something I know I did in an old project, or something I saved here (but disguised as a blogpost), I can just search for it.
Here is my site: https://moonpiedumplings.github.io/ . You can try the search at the top right (requires javascript).
Are you using rpmfusion?
Lol I misread it too.
There is literally no way to do performant e2ee at large scale. e2ee works by encrypting every message for every recipient, on the user's device.
At 1000 users, that’s basically a public room.
I have been using your stuff since they were called toolpacks.
https://moonpiedumplings.github.io/playground/ape-experiments/
Welcome to Lemmy, Azathothas. It’s nice to see more and more usernames I recognize show up here.
I think a browser extension, similar to tor snowflake would be a good way to do this.
There's a source port of at least portal 1.
https://github.com/AruMoon/source-engine
Here’s the active fork of the original project. Going through the issues of the original project, it seems to have support for building for 64 bit platforms.
No portal 2 support though. But mentioned in the issues of nileusr's repo is this: https://github.com/EpicSentry/P2ASW , which is interesting.
You should look into “Configuration as code”, where you use automation via various methods and store the code in a git repo. The other commenter in the thread is a good example of this methodology, using Terraform and Ansible, but there are many ways to do this.
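For a feel of what that looks like with ansible specifically, here's a tiny sketch (placeholder hosts and packages) that would live in a git repo and be applied with ansible-playbook:

```yaml
# site.yml - a minimal, illustrative playbook
- hosts: homelab          # placeholder inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Make sure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The point isn't the specific tool; it's that the desired state of your machines lives in version-controlled files instead of in your shell history.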
That’s CI 🙃
Confusing terms, but yeah. With ArgoCD and FluxCD, they just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install “helmreleases” but argo has something similar.
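For reference, a "helmrelease" is just more yaml that flux applies, roughly like this (the chart repo url, chart name, and values here are placeholders, not from my repo):

```yaml
# Where the chart comes from...
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: pihole
  namespace: default
spec:
  interval: 1h
  url: https://charts.example.com   # placeholder chart repository
---
# ...and how to install it, values included.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: pihole
  namespace: default
spec:
  interval: 30m
  chart:
    spec:
      chart: pihole
      sourceRef:
        kind: HelmRepository
        name: pihole
  values:            # chart-specific settings, illustrative only
    persistentVolumeClaim:
      enabled: true
```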