
Slack Notification Aggregation Error #6482

@lucaskulla

📡 I have found these related issues/pull requests

Notification is not working - Aggregation error

πŸ›‘οΈ Security Policy

πŸ“ Description

No response

👟 Reproduction steps

Uptime Kuma version: 2.0.2 (dockerized)
Environment: Kubernetes / Rancher RKE2

WebSocket support: enabled

Proxy: tried with and without one

Reproduce:

  1. Run Uptime Kuma.
  2. Add a new service.
  3. Add a new Slack notification:
    3.1. Go to Slack and create a new webhook.
    3.2. Send a manual POST request to that Slack webhook and check that it works (see the sketch below); if yes, go on.
    3.3. Copy the webhook URL into the Uptime Kuma webhook URL field.
  4. Leave everything else at its defaults when setting up the notification.
  5. Save.
  6. Test the notification.
  7. Wait around 30 seconds.
  8. See the error message "AggregateError".

I have tried it with different services and always get the same error.
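
For step 3.2, here is a minimal sketch of the manual webhook check, assuming Node 18+ with axios installed (the same HTTP client Uptime Kuma itself uses, per the stack trace below); the webhook URL is a placeholder:

```js
// Minimal manual Slack webhook check (step 3.2).
// Assumes Node 18+ with axios installed; the URL below is a placeholder.
const axios = require("axios");

const webhookUrl = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

axios
    .post(webhookUrl, { text: "Uptime Kuma webhook test" }, { timeout: 10000 })
    .then((res) => console.log("Slack responded:", res.status, res.data))
    .catch((err) => console.error("Request failed:", err.message));
```

If this request succeeds from inside the same pod but the notification test still fails, the webhook and the pod's egress are fine and the problem lies closer to Uptime Kuma itself.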

👀 Expected behavior

The Slack notification is delivered.

😓 Actual Behavior

Nothing reaches Slack; only an AggregateError is raised.

🐻 Uptime-Kuma Version

2.0.2

💻 Operating System and Arch

Kubernetes RKE2

🌐 Browser

Safari / Chrome

🖥️ Deployment Environment

  • Runtime Environment:
    • Kubernetes Version: v1.33.5+rke2r1

PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-data
  namespace: limbs
  labels:
    app: uptime-kuma
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Deployment:

```yaml
# Uptime Kuma Deployment
# Self-hosted monitoring tool for tracking uptime and status
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: limbs
  labels:
    app: uptime-kuma
spec:
  replicas: 1 
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      # Security context for the pod
      securityContext:
        fsGroup: 1000
      
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:2.0.2
          imagePullPolicy: IfNotPresent
          
          ports:
            - name: http
              containerPort: 3001
              protocol: TCP
          env:
            - name: UPTIME_KUMA_PORT
              value: "3001"
            - name: NODE_ENV
              value: "production"
          
          # Volume mounts
          volumeMounts:
            - name: data
              mountPath: /app/data
          
          # Resource limits and requests
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          
          # Liveness probe - checks if the container is alive
          livenessProbe:
            httpGet:
              path: /
              port: http
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          
          # Readiness probe - checks if the container is ready to serve traffic
          readinessProbe:
            httpGet:
              path: /
              port: http
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
      
      # Volumes
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data
```

Service:

```yaml
# Uptime Kuma Service
# Exposes the Uptime Kuma application within the cluster
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
  namespace: limbs
  labels:
    app: uptime-kuma
spec:
  type: ClusterIP
  selector:
    app: uptime-kuma
  ports:
    - name: http
      protocol: TCP
      port: 3001
      targetPort: http
  sessionAffinity: ClientIP # Important for WebSocket connections
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # 3 hours
```
Ingress:

```yaml
# Uptime Kuma Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma-ingress
  namespace: limbs
  labels:
    app: uptime-kuma
  annotations:
    # WebSocket support - critical for Uptime Kuma
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    
    # WebSocket specific configuration
    nginx.ingress.kubernetes.io/websocket-services: "uptime-kuma"
    
    # Body size limit
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    
    # SSL Redirect
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"

spec:
  ingressClassName: nginx
  
  rules:
    - host: status.limbs.dkfz.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: uptime-kuma
                port:
                  number: 3001
```

πŸ“ Relevant log output

```
Error: Error: AggregateError
    at Slack.throwGeneralAxiosError (/app/server/notification-providers/notification-provider.js:122:15)
    at Slack.send (/app/server/notification-providers/slack.js:175:18)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Socket.<anonymous> (/app/server/server.js:1486:27)
```
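
The trace swallows the underlying socket errors. On Node 18+ a failed connection to a hostname that resolves to several addresses (hooks.slack.com publishes both IPv4 and IPv6 records) is reported as an AggregateError holding one error per attempted address; in Kubernetes this often points at missing IPv6 egress or blocked outbound traffic from the pod. As a hedged diagnostic sketch (assumes Node 18+ and axios, which the stack trace shows Uptime Kuma uses; the URL is a placeholder), run this inside the pod to unwrap the individual errors:

```js
// Hedged diagnostic: unwrap the per-address errors hidden behind
// "AggregateError". Assumes Node 18+ and axios; the URL is a placeholder.
const axios = require("axios");

const webhookUrl = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

axios.post(webhookUrl, { text: "probe" }, { timeout: 10000 }).catch((err) => {
    const cause = err.cause ?? err; // axios 1.x keeps the original error here
    if (cause instanceof AggregateError) {
        // Typically one failed connect attempt per resolved address,
        // e.g. ENETUNREACH for IPv6 plus a refusal or timeout for IPv4.
        for (const e of cause.errors) {
            console.error(e.code, e.address ?? "", e.message);
        }
    } else {
        console.error(cause);
    }
});
```

If the probe only shows IPv6 failures (e.g. ENETUNREACH), forcing IPv4 resolution first, for instance via NODE_OPTIONS=--dns-result-order=ipv4first on the container, is a commonly reported workaround.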
