Process for setup of Uyuni under k8s #9369

Open
rhar78 opened this issue Oct 16, 2024 · 3 comments
Labels
question Further information is requested

Comments

@rhar78

rhar78 commented Oct 16, 2024

Question

What do you want to know about Uyuni?
I have a number of questions related to deploying Uyuni into (ideally) a scalable k8s environment.

Is there a clearly defined document for deploying the server component in a k8s environment? The current document seems to be very brief unless I am missing critical links.

Currently it seems as though the official container is very much a stateful monolith. The DB, mail server, reverse proxy, systemd services, and a dozen or so PVCs will result in problems that we can't easily monitor, and performance issues in any of those components can't be resolved through scaling. Is it fair to say that the current Helm chart only supports a single replica? This seems to be the case.

Further, it looks as though the Helm chart is broken by default, with some chart values expected by the templates but never mentioned or documented.

Also, when using the Helm chart the application starts in a failed state. Is the migration script the only means to get an existing Uyuni instance to the k8s container?

Or is there a method for deploying a new instance from scratch? Ideally one that is not monolithic as described above.

It's difficult to confirm this due to the lack of documentation.

Alternatively, must Uyuni be run in a docker/podman VM instead? If so, is it possible to set up an HA VM pair, for example via vSphere or Proxmox?

Version of Uyuni Server and Proxy (if used)

Uyuni Server 2024.07, currently a VM.

zypper info Uyuni-Server-release and zypper info Uyuni-Proxy-release (if used)
N/A

rhar78 added the question label on Oct 16, 2024
@rjmateus
Member

Is it fair to say that the current Helm chart only supports a single replica?
This is currently the case: you can only have one replica. In the future we will continue working on splitting the monolithic container.

Further, it looks as though the Helm chart is broken by default,
I think this will be solved in the next version.

Also, when using the Helm chart the application starts in a failed state. Is the migration script the only means to get an existing Uyuni instance to the k8s container?
A fresh installation should work.

Or is there a method for deploying a new instance from scratch? Ideally one that is not monolithic as described above.
Only the monolithic deployment is available. We will split it in the future; this was just the first step of the process.

It's difficult to confirm this due to the lack of documentation.
Documentation needs some love in this area.

Alternatively, must Uyuni be run in a docker/podman VM instead? If so, is it possible to set up an HA VM pair, for example via vSphere or Proxmox?

Uyuni will run using podman. We have a set of tools to help: mgradm and mgrctl.
We don't provide any HA mechanism. However, users can set up a mechanism to back up the podman volumes folder /var/lib/containers/storage/volumes/etc-sssd/. Users can even attach an extra disk and configure Uyuni to mount that storage in the podman volumes location. This way, if the machine dies, you can just attach that disk to a new one and run the install again, which will re-use the volumes and keep all the data.
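
For illustration only, a minimal sketch of that backup idea (the backup destination is a placeholder, and the server container should be stopped first so the data on disk is consistent):

    # Copy the podman volumes folder to a backup location (destination path is hypothetical).
    sudo tar -C /var/lib/containers/storage/volumes -czpf /backup/uyuni-volumes.tar.gz .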

@cbosdo anything to add?

@cbosdo
Contributor

cbosdo commented Oct 16, 2024

Is there a clearly defined document for deploying the server component in a k8s environment? The current document seems to be very brief unless I am missing critical links.

Indeed, running Uyuni on Kubernetes technically works, but it does not yet meet the standards Kubernetes users expect, so the documentation is not done yet since things could still evolve a lot in the future.

Currently it seems as though the official container is very much a stateful monolith. The DB, mail server, reverse proxy, systemd services, and a dozen or so PVCs will result in problems that we can't easily monitor, and performance issues in any of those components can't be resolved through scaling. Is it fair to say that the current Helm chart only supports a single replica? This seems to be the case.

At the moment, the Uyuni server is a single container that cannot be scaled. The only advantages of running it on Kubernetes would be resilience and getting it away from a virtual machine. Scalability will not be in scope any time soon, as it requires cutting the big container into smaller scalable ones. Remember that a journey starts with a single step... and this is the first one we just made.

Further, it looks as though the Helm chart is broken by default, with some chart values expected by the templates but never mentioned or documented.

The Helm chart cannot be used without mgradm for now. mgradm performs some tasks, like the setup, that can't fit in a Helm chart yet: the setup still requires systemd to be running in the container.

Also, when using the Helm chart the application starts in a failed state. Is the migration script the only means to get an existing Uyuni instance to the k8s container?

mgradm install kubernetes is the way to install Uyuni on Kubernetes right now. I know it's not what users expect, and I plan to improve this during the next hackweek.
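
For illustration, a minimal sketch of that flow (the FQDN is a placeholder; exact flags and prompts depend on the uyuni-tools version):

    # Deploys the monolithic Uyuni server to the cluster of the current kubectl context;
    # mgradm drives the Helm chart and runs the setup inside the container.
    mgradm install kubernetes uyuni.example.com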

Even the migration script has some flaws that I started addressing in a refactoring that is not yet in a mergeable state.

Or is there a method for deploying a new instance from scratch? Ideally one that is not monolithic as described above.

mgradm install kubernetes... but there is no way to deploy a non-monolithic Uyuni as it doesn't exist yet.

Alternatively, must Uyuni be run in a docker/podman VM instead? If so, is it possible to set up an HA VM pair, for example via vSphere or Proxmox?

Setting it up with podman is the tested scenario; docker won't work for sure. Use mgradm install podman for this.
HA has never been supported for Uyuni and won't be any time soon. The only thing you can do is store your podman volumes on a shared device so that it's easy to start the container from another VM if needed. Load balancing with two running Uyuni instances is not possible.
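
A minimal sketch of that shared-volumes recovery, assuming the podman volumes live on a shared disk (the device name and FQDN are placeholders):

    # On the replacement VM, mount the shared device at the podman volumes location...
    sudo mount /dev/sdb /var/lib/containers/storage/volumes
    # ...then run the install again: it re-uses the existing volumes and keeps the data.
    mgradm install podman uyuni.example.com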

@rhar78
Author

rhar78 commented Oct 17, 2024

Thank you for your responses. This information has been very helpful.
