Clarify minimum pxc.size #151

Open

SlavikCA opened this issue Jun 1, 2024 · 1 comment

SlavikCA commented Jun 1, 2024

Here is the description of pxc.size:

| **Description** | The size of the Percona XtraDB cluster must be 3 or 5 for [High Availability](https://www.percona.com/doc/percona-xtradb-cluster/5.7/intro.html). other values are allowed if the `spec.allowUnsafeConfigurations` key is set to true |

It links to documentation, which says:

The recommended configuration is to have at least 3 nodes, but you can have 2 nodes as well

Similarly here:
https://github.com/percona/k8spxc-docs/blob/main/docs/operator.md?plain=1#L39

allowUnsafeConfigurations ... Prevents users from configuring a cluster with unsafe parameters such as starting the cluster with the number of Percona XtraDB Cluster instances which is less than 3, more than 5, or is an even number...

But here is another description, which is confusing:
https://github.com/percona/percona-helm-charts/blob/main/charts/pxc-db/README.md?plain=1#L52

PXC Cluster target member (pod) quantity. Can't even if allowUnsafeConfigurations is true

Does it mean "Can't be even"? Or what does it mean?
Should it be "Can't be even number if allowUnsafeConfigurations is false"?
Or "Can't be even number unless allowUnsafeConfigurations is true"?

SlavikCA (Author) commented Jun 1, 2024

I have a 2-node Kubernetes cluster, so I tried to create a 2-node DB cluster:

helm install my-op percona/pxc-operator --namespace mysql  --create-namespace

helm install db2 percona/pxc-db --namespace mysql \
 --set pxc.volumeSpec.resources.requests.storage=20Gi \
 --set spec.allowUnsafeConfigurations=true \
 --set pxc.size=2 \
 --set haproxy.size=2
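
(To rule out a typo on my side, this is how I'd double-check what values Helm actually recorded for the release; a sketch:)

# show the user-supplied values for the db2 release
helm get values db2 --namespace mysql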

But it is stuck in the initializing status and shows that the size is 3 (I expected it to be 2):

Message:
pxc: 0/3 nodes are available: 1 Insufficient cpu, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/etcd: true}. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
Observed Generation: 1
Pmm:
Proxysql:
Pxc:
Label Selector Path: app.kubernetes.io/component=pxc,app.kubernetes.io/instance=db2-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
Message: 0/3 nodes are available: 1 Insufficient cpu, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/etcd: true}. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
Ready: 2
Size: 3
Status: initializing

# kubectl get pxc -n mysql
NAME         ENDPOINT                   STATUS         PXC   PROXYSQL   HAPROXY   AGE
db2-pxc-db   db2-pxc-db-haproxy.mysql   initializing   2                2         17m
Full output
kubectl describe pxc -n mysql
Name:         db2-pxc-db
Namespace:    mysql
Labels:       app.kubernetes.io/instance=db2
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=pxc-db
              app.kubernetes.io/version=1.14.0
              helm.sh/chart=pxc-db-1.14.3
Annotations:  meta.helm.sh/release-name: db2
              meta.helm.sh/release-namespace: mysql
API Version:  pxc.percona.com/v1
Kind:         PerconaXtraDBCluster
Metadata:
  Creation Timestamp:  2024-06-01T14:32:03Z
  Finalizers:
    delete-pxc-pods-in-order
  Generation:        1
  Resource Version:  77646791
  UID:               08d5191f-2bd5-4ccb-9c68-d4dd067b0f81
Spec:
  Backup:
    Image:  percona/percona-xtradb-cluster-operator:1.14.0-pxc8.0-backup-pxb8.0.35
    Pitr:
      Enabled:  false
    Schedule:
    Storages:
  Cr Version:                    1.14.0
  Enable CR Validation Webhook:  false
  Haproxy:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Annotations:
    Enabled:       true
    Grace Period:  30
    Image:         percona/percona-xtradb-cluster-operator:1.14.0-haproxy
    Labels:
    Liveness Delay Sec:  300
    Liveness Probes:
      Failure Threshold:      4
      Initial Delay Seconds:  60
      Period Seconds:         30
      Success Threshold:      1
      Timeout Seconds:        5
    Node Selector:
    Pod Disruption Budget:
      Max Unavailable:    1
    Readiness Delay Sec:  15
    Readiness Probes:
      Failure Threshold:      3
      Initial Delay Seconds:  15
      Period Seconds:         5
      Success Threshold:      1
      Timeout Seconds:        1
    Resources:
      Limits:
      Requests:
        Cpu:     600m
        Memory:  1G
    Sidecar PV Cs:
    Sidecar Resources:
      Limits:
      Requests:
    Sidecar Volumes:
    Sidecars:
    Size:  2
    Tolerations:
    Volume Spec:
      Empty Dir:
  Log Collector Secret Name:  db2-pxc-db-log-collector
  Logcollector:
    Enabled:  true
    Image:    percona/percona-xtradb-cluster-operator:1.14.0-logcollector
    Resources:
      Limits:
      Requests:
        Cpu:     200m
        Memory:  100M
  Pause:         false
  Pmm:
    Enabled:  false
  Proxysql:
    Enabled:  false
  Pxc:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Annotations:
    Auto Recovery:  true
    Grace Period:   600
    Image:          percona/percona-xtradb-cluster:8.0.36-28.1
    Labels:
    Liveness Delay Sec:  300
    Liveness Probes:
      Failure Threshold:      3
      Initial Delay Seconds:  300
      Period Seconds:         10
      Success Threshold:      1
      Timeout Seconds:        5
    Node Selector:
    Pod Disruption Budget:
      Max Unavailable:    1
    Readiness Delay Sec:  15
    Readiness Probes:
      Failure Threshold:      5
      Initial Delay Seconds:  15
      Period Seconds:         30
      Success Threshold:      1
      Timeout Seconds:        15
    Resources:
      Limits:
      Requests:
        Cpu:     600m
        Memory:  1G
    Sidecar PV Cs:
    Sidecar Resources:
      Limits:
      Requests:
    Sidecar Volumes:
    Sidecars:
    Size:  2
    Tolerations:
    Volume Spec:
      Persistent Volume Claim:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         8Gi
  Secrets Name:              db2-pxc-db-secrets
  Ssl Internal Secret Name:  db2-pxc-db-ssl-internal
  Ssl Secret Name:           db2-pxc-db-ssl
  Update Strategy:           SmartUpdate
  Upgrade Options:
    Apply:                     disabled
    Schedule:                  0 4 * * *
    Version Service Endpoint:  https://check.percona.com
  Vault Secret Name:           db2-pxc-db-vault
Status:
  Backup:
  Conditions:
    Last Transition Time:  2024-06-01T14:32:04Z
    Status:                True
    Type:                  initializing
  Haproxy:
    Label Selector Path:  app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=db2-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    Ready:                2
    Size:                 2
    Status:               ready
  Host:                   db2-pxc-db-haproxy.mysql
  Logcollector:
  Message:
    pxc: 0/3 nodes are available: 1 Insufficient cpu, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/etcd: true}. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
  Observed Generation:  1
  Pmm:
  Proxysql:
  Pxc:
    Label Selector Path:  app.kubernetes.io/component=pxc,app.kubernetes.io/instance=db2-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    Message:              0/3 nodes are available: 1 Insufficient cpu, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/etcd: true}. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
    Ready:                2
    Size:                 3
    Status:               initializing
  Ready:                  4
  Size:                   5
  State:                  initializing
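
Note that in the Spec above, Pxc Size is 2 and no Allow Unsafe Configurations field appears at all, yet Status reports Size: 3. To compare what the CR actually contains against the status the operator reports, something like this should work (a sketch, using the field name from the docs quoted earlier):

kubectl get pxc db2-pxc-db -n mysql \
 -o jsonpath='{.spec.allowUnsafeConfigurations} {.spec.pxc.size} {.status.pxc.size}{"\n"}'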
