
Error: getaddrinfo EAI_AGAIN haproxy #2558

Open · justinkumpe opened this issue Oct 9, 2024 · 4 comments


justinkumpe commented Oct 9, 2024

I'm trying to deploy Infisical via Docker Swarm. When I run the db migration I get the error below. This is the first time I have used Docker Swarm, so I'm sure I am missing something.

> backend@1.0.0 migration:latest
> knex --knexfile ./src/db/knexfile.ts --client pg migrate:latest

Requiring external module ts-node/register
Working directory changed to /backend/src/db
Using environment: production
getaddrinfo EAI_AGAIN haproxy
Error: getaddrinfo EAI_AGAIN haproxy
    at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)
@Ritish134

Error: getaddrinfo EAI_AGAIN haproxy
    at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)

EAI_AGAIN means the lookup of haproxy failed with a temporary resolver error, so this is probably a DNS resolution issue with the host in DB_HOST. Check that:

  1. your backend and haproxy services are on the same network
  2. DB_HOST points at the correct host so the db migration can reach the database
  3. if both of the above look right, try to ping the haproxy service from inside the backend container (see the sketch below)
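
A minimal sketch of those checks, assuming the stack was deployed under the name "infisical" so services carry that prefix (adjust names to match the docker service ls output):

# 1. Compare the networks each service is attached to
docker service inspect infisical_infisical --format '{{json .Spec.TaskTemplate.Networks}}'
docker service inspect infisical_haproxy --format '{{json .Spec.TaskTemplate.Networks}}'

# 2. Confirm the connection string actually points at the haproxy service name
grep DB_CONNECTION_URI .env

# 3. Resolve and ping the name from inside a running backend task
docker exec -it $(docker ps -q -f name=infisical_infisical | head -n 1) sh -c 'ping -c 2 haproxy'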


justinkumpe commented Oct 12, 2024

I know very little about Docker, so bear with me. I followed the Docker Swarm instructions in the Infisical docs, so I assume it is all configured correctly. From what I can tell, the backend and haproxy are on the same network. If I am doing this correctly, I am unable to ping haproxy.

What I did to ping haproxy:

docker exec -it infisical-backend sh
/backend $ ping haproxy
ping: bad address 'haproxy'
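
For what it's worth, BusyBox ping reports "bad address" when the name fails to resolve, not when ping itself is broken, so this does look like a DNS failure. A couple of alternative lookups from the same shell (a sketch; assumes the BusyBox/Alpine userland that the /backend $ prompt suggests):

nslookup haproxy          # queries Docker's embedded DNS at 127.0.0.11
nslookup tasks.haproxy    # tasks.<service> lists individual task IPs in Swarm
getent hosts haproxy      # if getent is present; exits non-zero when DNS fails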

infisical_stack.yaml:

services:
  haproxy:
    image: haproxy:latest
    ports:
      - '7001:7000'
      - '5002:5433' # Postgres master 
      - '5003:5434' # Postgres read 
      - '6379:6379'
      - '8080:8080'    
    networks:
      - infisical
    configs:
      - source: haproxy-config
        target: /usr/local/etc/haproxy/haproxy.cfg
    deploy:
      mode: global
          
  infisical:
    container_name: infisical-backend
    image: infisical/infisical:v0.88.0-postgres
    env_file: .env
    networks:
      - infisical
    secrets:
      - env_file
    deploy:
      replicas: 5
    
  etcd1:
    image: ghcr.io/zalando/spilo-16:3.2-p2
    networks:
      - infisical
    environment:
      ETCD_UNSUPPORTED_ARCH: arm64
    container_name: demo-etcd1
    deploy:
      placement:
        constraints:
          - node.labels.name == node1
    hostname: etcd1
    command: |
      etcd --name etcd1 
      --listen-client-urls http://0.0.0.0:2379 
      --listen-peer-urls=http://0.0.0.0:2380 
      --advertise-client-urls http://etcd1:2379 
      --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 
      --initial-advertise-peer-urls=http://etcd1:2380 
      --initial-cluster-state=new

  etcd2:
    image: ghcr.io/zalando/spilo-16:3.2-p2
    networks:
      - infisical
    environment:
      ETCD_UNSUPPORTED_ARCH: arm64
    container_name: demo-etcd2
    hostname: etcd2
    deploy:
      placement:
        constraints:
          - node.labels.name == node2
    command: |
      etcd --name etcd2 
      --listen-client-urls http://0.0.0.0:2379 
      --listen-peer-urls=http://0.0.0.0:2380 
      --advertise-client-urls http://etcd2:2379 
      --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 
      --initial-advertise-peer-urls=http://etcd2:2380 
      --initial-cluster-state=new

  etcd3:
    image: ghcr.io/zalando/spilo-16:3.2-p2
    networks:
      - infisical
    environment:
      ETCD_UNSUPPORTED_ARCH: arm64
    container_name: demo-etcd3
    hostname: etcd3
    deploy:
      placement:
        constraints:
          - node.labels.name == node3
    command: |
      etcd --name etcd3 
      --listen-client-urls http://0.0.0.0:2379 
      --listen-peer-urls=http://0.0.0.0:2380 
      --advertise-client-urls http://etcd3:2379 
      --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380 
      --initial-advertise-peer-urls=http://etcd3:2380 
      --initial-cluster-state=new

  spolo1:
    image: ghcr.io/zalando/spilo-16:3.2-p2
    container_name: postgres-1
    networks:
      - infisical
    hostname: postgres-1
    environment:
        ETCD_HOSTS: etcd1:2379,etcd2:2379,etcd3:2379
        PGPASSWORD_SUPERUSER: "postgres"
        PGUSER_SUPERUSER: "postgres"
        SCOPE: infisical
    volumes:
      - postgres_data1:/home/postgres/pgdata
    deploy:
      placement:
        constraints:
          - node.labels.name == node1

  spolo2:
    image: ghcr.io/zalando/spilo-16:3.2-p2
    container_name: postgres-2
    networks:
      - infisical
    hostname: postgres-2
    environment:
        ETCD_HOSTS: etcd1:2379,etcd2:2379,etcd3:2379
        PGPASSWORD_SUPERUSER: "postgres"
        PGUSER_SUPERUSER: "postgres"
        SCOPE: infisical
    volumes:
      - postgres_data2:/home/postgres/pgdata
    deploy:
      placement:
        constraints:
          - node.labels.name == node2

  spolo3:
    image: ghcr.io/zalando/spilo-16:3.2-p2
    container_name: postgres-3
    networks:
      - infisical
    hostname: postgres-3
    environment:
        ETCD_HOSTS: etcd1:2379,etcd2:2379,etcd3:2379
        PGPASSWORD_SUPERUSER: "postgres"
        PGUSER_SUPERUSER: "postgres"
        SCOPE: infisical
    volumes:
      - postgres_data3:/home/postgres/pgdata
    deploy:
      placement:
        constraints:
          - node.labels.name == node3


  redis_replica0:
    image: bitnami/redis:6.2.10
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=123456
    networks:
      - infisical
    deploy:
      placement:
        constraints:
          - node.labels.name == node1

  redis_replica1:
    image: bitnami/redis:6.2.10
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis_replica0
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=123456
      - REDIS_PASSWORD=123456
    networks:
      - infisical
    deploy:
      placement:
        constraints:
          - node.labels.name == node2

  redis_replica2:
    image: bitnami/redis:6.2.10
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis_replica0
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=123456
      - REDIS_PASSWORD=123456
    networks:
      - infisical
    deploy:
      placement:
        constraints:
          - node.labels.name == node3

  redis_sentinel1:
    image: bitnami/redis-sentinel:6.2.10
    environment:
      - REDIS_SENTINEL_QUORUM=2
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=5000
      - REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
      - REDIS_SENTINEL_PORT_NUMBER=26379
      - REDIS_MASTER_HOST=redis_replica1
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=123456
    networks:
      - infisical
    deploy:
      placement:
        constraints:
          - node.labels.name == node1

  redis_sentinel2:
    image: bitnami/redis-sentinel:6.2.10
    environment:
      - REDIS_SENTINEL_QUORUM=2
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=5000
      - REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
      - REDIS_SENTINEL_PORT_NUMBER=26379
      - REDIS_MASTER_HOST=redis_replica1
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=123456
    networks:
      - infisical
    deploy:
      placement:
        constraints:
          - node.labels.name == node2

  redis_sentinel3:
    image: bitnami/redis-sentinel:6.2.10
    environment:
      - REDIS_SENTINEL_QUORUM=2
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=5000
      - REDIS_SENTINEL_FAILOVER_TIMEOUT=60000
      - REDIS_SENTINEL_PORT_NUMBER=26379
      - REDIS_MASTER_HOST=redis_replica1
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=123456
    networks:
      - infisical
    deploy:
      placement:
        constraints:
          - node.labels.name == node3

networks:
  infisical:


volumes:
  postgres_data1:
  postgres_data2:
  postgres_data3:
  postgres_data4:
  redis0:
  redis1:
  redis2:

configs:
  haproxy-config:
    file: ./haproxy.cfg

secrets:
  env_file:
    file: .env
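
A side note on the networks: block above: overlay networks created by docker stack deploy are not attachable by default, so a standalone docker run container cannot join them, and Swarm service names will not resolve from it. A hedged tweak (compose file format 3.2+; an assumption, not something the thread or the Infisical docs prescribe):

networks:
  infisical:
    driver: overlay
    attachable: true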

@Ritish134

Check if the haproxy service is up and running:
docker service ls

If haproxy is running, check the logs for the haproxy service using:
docker service logs haproxy


justinkumpe commented Oct 13, 2024

When I run docker service ls I get:

ID             NAME                        MODE         REPLICAS   IMAGE                                  PORTS
pjos183a0g39   infisical_etcd1             replicated   1/1        ghcr.io/zalando/spilo-16:3.2-p2        
u7hyc3r22yrk   infisical_etcd2             replicated   1/1        ghcr.io/zalando/spilo-16:3.2-p2        
v1ixxwzv615w   infisical_etcd3             replicated   1/1        ghcr.io/zalando/spilo-16:3.2-p2        
j0yas2ldmd5q   infisical_haproxy           global       3/3        haproxy:latest                         *:5002-5003->5433-5434/tcp, *:6379->6379/tcp, *:7001->7000/tcp, *:8080->8080/tcp
kgp0p5r2anyv   infisical_infisical         replicated   1/5        infisical/infisical:v0.88.0-postgres   
p88ccjbg70l8   infisical_redis_replica0    replicated   1/1        bitnami/redis:6.2.10                   
ird0y9mnolcc   infisical_redis_replica1    replicated   1/1        bitnami/redis:6.2.10                   
unap05hbcc3t   infisical_redis_replica2    replicated   1/1        bitnami/redis:6.2.10                   
9irvwx8xzczn   infisical_redis_sentinel1   replicated   1/1        bitnami/redis-sentinel:6.2.10          
7hv9u5dzcyxw   infisical_redis_sentinel2   replicated   1/1        bitnami/redis-sentinel:6.2.10          
da0y5linaav7   infisical_redis_sentinel3   replicated   1/1        bitnami/redis-sentinel:6.2.10          
z57l3qd0w3xi   infisical_spolo1            replicated   1/1        ghcr.io/zalando/spilo-16:3.2-p2        
kbox56p0ee8i   infisical_spolo2            replicated   1/1        ghcr.io/zalando/spilo-16:3.2-p2        
qfou0chjsbl9   infisical_spolo3            replicated   1/1        ghcr.io/zalando/spilo-16:3.2-p2   
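
One detail in this output: infisical_infisical sits at 1/5 replicas, which suggests the backend tasks are repeatedly failing to start. Standard Swarm commands to see why (a sketch, using the service name from the table above):

# show every task for the service, with untruncated error messages
docker service ps infisical_infisical --no-trunc

# tail recent logs from all tasks of the service
docker service logs --tail 100 infisical_infisical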

Running docker service logs haproxy gives me:

no such task or service: haproxy

but if I run docker service logs infisical_haproxy:

infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:39998 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:40044 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48114 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48126 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48146 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48184 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48226 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48248 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:48264 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:34848 to 10.0.1.56:5433 (postgres_master/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:35696 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:35716 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:35726 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:35748 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:35776 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.4:35792 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39694 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:46886 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:46918 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:46956 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:46982 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:47014 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:47036 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39672 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:41410 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39728 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39760 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39778 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39794 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:39826 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:43324 to 10.0.1.56:6379 (redis_master_frontend/TCP)
infisical_haproxy.0.m4wvji0i1py5@vps11.kumpedns.us    | Connect from 10.0.1.7:43342 to 10.0.1.56:6379 (redis_master_frontend/TCP)

I also tried running the db migration with infisical_haproxy instead of haproxy and hit the same issue:

root@vps11:~# docker run --env DB_CONNECTION_URI=postgres://postgres:postgres@infisical_haproxy:5433/postgres?sslmode=no-verify infisical/infisical:v0.88.0-postgres  npm run migration:latest

> backend@1.0.0 migration:latest
> npm run auditlog-migration:latest && knex --knexfile ./src/db/knexfile.ts --client pg migrate:latest


> backend@1.0.0 auditlog-migration:latest
> knex --knexfile ./src/db/auditlog-knexfile.ts --client pg migrate:latest

Requiring external module ts-node/register
Working directory changed to /backend/src/db
Dedicated audit log database not found. No further migrations necessary
Requiring external module ts-node/register
Working directory changed to /backend/src/db
Using environment: production
getaddrinfo ENOTFOUND infisical_haproxy
Error: getaddrinfo ENOTFOUND infisical_haproxy
    at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26)
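
Two things stand out here. The error changed from EAI_AGAIN (a temporary resolver failure) to ENOTFOUND (the name definitively does not exist), which fits a plain docker run container: it lands on the default bridge network, where Swarm's embedded DNS entries for the stack's services simply do not exist. If the infisical overlay network is made attachable (see the note under the stack file above), the one-off migration container can be joined to it, where the service name should resolve. A hedged sketch, assuming the stack name "infisical" makes the network infisical_infisical and that the haproxy alias resolves on it:

docker run --rm \
  --network infisical_infisical \
  --env DB_CONNECTION_URI='postgres://postgres:postgres@haproxy:5433/postgres?sslmode=no-verify' \
  infisical/infisical:v0.88.0-postgres \
  npm run migration:latest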
