Labels: bug (Something isn't working)
Description
I have found these related issues/pull requests
Notification is not working - Aggregation error
Security Policy
- I have read and agree to Uptime Kuma's Security Policy.
Description
No response
Reproduction steps
Uptime Kuma version: 2.0.2 (dockerized)
Environment: Kubernetes / Rancher RKE2
WebSocket support: activated
Proxy: tried with and without
Reproduce:
1. Run Uptime Kuma.
2. Add a new service.
3. Add a new Slack notification:
   1. Do a manual POST request to the Slack webhook and check that it works; if yes, continue.
   2. Go to Slack and create a new webhook.
   3. Copy the webhook URL into the Uptime Kuma webhook URL field; leave all other notification settings at their defaults.
4. Save.
5. Test the notification.
6. Wait around 30 seconds.
7. See the error message "AggregateError".

I have tried it with different services and always get the same error.
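The manual POST check in step 3 can be sketched as a small script; this is my own illustration, not part of Uptime Kuma, and the webhook URL shown is a placeholder you would replace with your real one:

```python
import json
import urllib.request

# Placeholder; substitute your actual Slack incoming-webhook URL
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_payload(text: str) -> bytes:
    # Slack incoming webhooks expect a JSON body with a "text" field
    return json.dumps({"text": text}).encode("utf-8")

def post_test_message(url: str, text: str = "uptime-kuma webhook check") -> int:
    # Returns the HTTP status code; Slack answers 200 (body "ok") on success
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    print(post_test_message(SLACK_WEBHOOK_URL))
```

If this succeeds from inside the pod but Uptime Kuma still raises AggregateError, the problem is likely in Uptime Kuma's outbound networking rather than the webhook itself.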
Expected behavior
The Slack notification is delivered.
Actual Behavior
Nothing is delivered; only an AggregateError is logged.
Uptime-Kuma Version
2.0.2
Operating System and Arch
Kubernetes RKE2
Browser
Safari / Chrome
Deployment Environment
- Runtime Environment:
- Kubernetes Version: v1.33.5+rke2r1
PVC

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-data
  namespace: limbs
  labels:
    app: uptime-kuma
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Deployment
```yaml
# Uptime Kuma Deployment
# Self-hosted monitoring tool for tracking uptime and status
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: limbs
  labels:
    app: uptime-kuma
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      # Security context for the pod
      securityContext:
        fsGroup: 1000
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:2.0.2
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 3001
              protocol: TCP
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          env:
            - name: UPTIME_KUMA_PORT
              value: "3001"
            - name: NODE_ENV
              value: "production"
          # Volume mounts
          volumeMounts:
            - name: data
              mountPath: /app/data
          # Resource limits and requests
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          # Liveness probe - checks if the container is alive
          livenessProbe:
            httpGet:
              path: /
              port: http
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          # Readiness probe - checks if the container is ready to serve traffic
          readinessProbe:
            httpGet:
              path: /
              port: http
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
      # Volumes
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data
```

Service
```yaml
# Uptime Kuma Service
# Exposes the Uptime Kuma application within the cluster
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
  namespace: limbs
  labels:
    app: uptime-kuma
spec:
  type: ClusterIP
  selector:
    app: uptime-kuma
  ports:
    - name: http
      protocol: TCP
      port: 3001
      targetPort: http
  sessionAffinity: ClientIP  # Important for WebSocket connections
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # 3 hours
```

Ingress
```yaml
# Uptime Kuma Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma-ingress
  namespace: limbs
  labels:
    app: uptime-kuma
  annotations:
    # WebSocket support - critical for Uptime Kuma
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    # WebSocket specific configuration
    nginx.ingress.kubernetes.io/websocket-services: "uptime-kuma"
    # Body size limit
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    # SSL Redirect
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - host: status.limbs.dkfz.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: uptime-kuma
                port:
                  number: 3001
```

Relevant log output
```
Error: Error: AggregateError
    at Slack.throwGeneralAxiosError (/app/server/notification-providers/notification-provider.js:122:15)
    at Slack.send (/app/server/notification-providers/slack.js:175:18)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Socket.<anonymous> (/app/server/server.js:1486:27)
```
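In recent Node.js versions, an AggregateError from an HTTP client typically means every underlying connection attempt failed, for example when DNS returns both A and AAAA records but the pod has no working IPv6 egress. A small diagnostic sketch to run inside the pod (via kubectl exec); the helper name and the use of hooks.slack.com here are my own assumptions for illustration:

```python
import socket

def summarize_addrinfo(infos):
    # Group resolved addresses by family (IPv4 vs IPv6) so you can see
    # whether the pod is being handed IPv6 addresses it cannot reach
    families = {"IPv4": [], "IPv6": []}
    for family, _socktype, _proto, _canonname, sockaddr in infos:
        if family == socket.AF_INET:
            families["IPv4"].append(sockaddr[0])
        elif family == socket.AF_INET6:
            families["IPv6"].append(sockaddr[0])
    return families

if __name__ == "__main__":
    # Inspect what DNS returns for Slack's webhook endpoint from the pod
    infos = socket.getaddrinfo("hooks.slack.com", 443, proto=socket.IPPROTO_TCP)
    print(summarize_addrinfo(infos))
```

If only IPv6 addresses are returned, or IPv6 connections fail while IPv4 works, that would point at cluster egress/DNS configuration rather than at Uptime Kuma itself.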