# Sentry Deployment
## Self-Hosted Deployment - Quick Install - Test Validation
### Dependencies
- docker
- docker-compose
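A minimal sketch of the docker-compose quick install for test validation, assuming the official getsentry/self-hosted repository; the release tag shown is only illustrative.

```bash
# Quick install for testing only; assumes docker and docker-compose are already installed.
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
git checkout 25.3.0          # illustrative tag; pick the release you intend to run
./install.sh                 # generates configs, runs migrations, prompts for an admin user
docker compose up -d         # start the full stack
docker compose ps            # verify all containers are healthy
```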
## Self-Hosted Deployment - Cluster Deployment
### Chart Repository
Note: this Helm chart is community-maintained, not an official offering; Sentry officially only provides the docker-compose deployment method.
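A minimal sketch of adding the chart repository and installing the release, assuming the community sentry-kubernetes chart repository and a values file edited as described below:

```bash
# Assumes the community chart repo; adjust namespace and values file to your environment.
helm repo add sentry https://sentry-kubernetes.github.io/charts
helm repo update
helm install sentry sentry/sentry \
  -n sentry --create-namespace \
  -f values.yaml \
  --wait --timeout 20m    # the first install runs db/snuba init hooks, which are slow
```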
### Base Components (New Version)
### Modifying the Helm Chart
#### sentry/templates/snuba/_helper-snuba.tpl
> The following classifies these storage_sets in Sentry Snuba by purpose, organized by functional module:
>
>>```
>>"storage_sets": {
>> "cdc",
>> "discover",
>> "events",
>> "events_ro",
>> "metrics",
>> "migrations", # 存储 Snuba 的数据库迁移记录(内部管理用),无需分片
>> "outcomes",
>> "querylog",
>> "sessions",
>> "transactions",
>> "profiles",
>> "functions",
>> "replays",
>> "generic_metrics_sets",
>> "generic_metrics_distributions",
>> "search_issues", # 实验性功能
>> "generic_metrics_counters",
>> "spans",
>> "events_analytics_platform",
>> "group_attributes",
>> "generic_metrics_gauges",
>> "metrics_summaries", # 实验性功能
>> "profile_chunks",
>>},
>>```
>
> Core Functionality
| Storage Set | Purpose | Backing Table | Suggested Sharding Key |
| ------------ | ---------------------------------------------------------------------------------------- | ------------------ | --------------------- |
| events | Stores all error events; Sentry's core data table. | events_local | sipHash64(event_id) |
| transactions | Stores performance-monitoring transaction events (APM data). | transactions_local | sipHash64(event_id) |
| sessions | Stores release-health session data, used for per-release crash rate and user retention. | sessions_local | sipHash64(session_id) |
| profiles | Stores profiling data for code-level performance analysis. | profiles_local | sipHash64(profile_id) |
| replays | Stores Session Replay data recording user interaction traces. | replays_local | sipHash64(replay_id) |
> Metrics and Monitoring
| Storage Set | Purpose | Backing Table | Suggested Sharding Key |
| ----------------------------- | --------------------------------------------------------------------------------------------- | ----------------------------------- | --------------------- |
| metrics | Stores custom and system metrics (legacy metrics system). | metrics_local | sipHash64(project_id) |
| generic_metrics_sets | Stores set-type metrics (e.g. unique user counts); part of the newer generic metrics system. | generic_metrics_sets_local | sipHash64(project_id) |
| generic_metrics_distributions | Stores distribution-type metrics (e.g. latency percentiles); generic metrics system. | generic_metrics_distributions_local | sipHash64(project_id) |
| generic_metrics_counters | Stores counter-type metrics (e.g. request counts); generic metrics system. | generic_metrics_counters_local | sipHash64(project_id) |
| generic_metrics_gauges | Stores gauge-type metrics (e.g. current memory usage); generic metrics system. | generic_metrics_gauges_local | sipHash64(project_id) |
| metrics_summaries | Stores aggregated metric summaries (experimental). | metrics_summaries_local | rand() |
> Debugging and Diagnostics
| Storage Set | Purpose | Backing Table | Suggested Sharding Key |
| -------------- | ----------------------------------------------------------------------- | -------------------- | --------------------- |
| spans | Stores distributed-tracing span data (APM trace links). | spans_local | sipHash64(trace_id) |
| functions | Stores function-level profiling data (e.g. hot functions in code). | functions_local | sipHash64(profile_id) |
| profile_chunks | Stores raw profiling data chunks used for detailed profiling analysis. | profile_chunks_local | sipHash64(profile_id) |
> System and Operations
| Storage Set | Purpose | Backing Table | Suggested Sharding Key |
| ----------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------------------ |
| outcomes | Stores event-processing outcomes (e.g. dropped, rate-limited, accepted), used to monitor Sentry's own ingestion. | outcomes_local | rand() |
| querylog | Records Snuba's query log for auditing and query-performance analysis. | querylog_local | rand() |
| migrations | Stores Snuba's database migration records (internal bookkeeping). | no dedicated table; migration metadata only | no sharding needed |
> Advanced Features
| Storage Set | Purpose | Backing Table | Suggested Sharding Key |
| ------------------------- | --------------------------------------------------------------------------------------------- | ------------------------------- | --------------------- |
| cdc | Change Data Capture, used for real-time data pipelines (e.g. Kafka synchronization). | cdc_local | sipHash64(message_id) |
| discover | Derived dataset backing Discover queries (an aggregated view over events and transactions). | no dedicated table; logical view | no sharding needed |
| events_ro | Read-only replica of the events table, used to offload query traffic. | events_ro_local | same as events |
| search_issues | Event index supporting full-text search (experimental). | search_issues_local | sipHash64(event_id) |
| group_attributes | Stores extended issue attributes (e.g. tags, priority). | group_attributes_local | sipHash64(group_id) |
| events_analytics_platform | Data for the enterprise events analytics platform (advanced analytics). | events_analytics_platform_local | sipHash64(event_id) |
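After modifying `_helper-snuba.tpl`, one way to check which storage_sets end up in the rendered Snuba settings is to render the chart locally and grep the output; a sketch, assuming the chart source is checked out at ./sentry and the values file described below:

```bash
# Render the chart without installing and inspect the generated Snuba storage_sets;
# chart path and release name are placeholders.
helm template sentry ./sentry -f values.yaml \
  | grep -n -A 25 '"storage_sets"'
```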
#### sentry/values.yaml
prefix:

# Set this to true to support IPV6 networks
ipv6: false

global:
  # Set SAMPLED_DEFAULT_RATE parameter for all projects
  # sampledDefaultRate: 1.0
  nodeSelector: {}
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: serv.type
                operator: In
                values:
                  - md
  tolerations:
    - key: "serv.type"
      operator: "Equal"
      value: "md"
      effect: "NoSchedule"
  sidecars: []
  volumes: []

user:
  create: true
  email: sentry.notify@hannto.com
  password: **
  ## set this value to an existingSecret name to create the admin user with the password in the secret
  # existingSecret: sentry-admin-password
  ## set this value to an existingSecretKey which holds the password to be used for sentry admin user
  ## default key is admin-password
  # existingSecretKey: admin-password

## this is required on the first installation, as sentry has to be initialized first
## recommended to set false for updating the helm chart afterwards,
## as you will have some downtime on each update if it's a hook
## deploys relay & snuba consumers as post hooks
asHook: true
images:
  sentry:
    repository: hub.kce.ksyun.com/middleware/sentry
    # tag: Chart.AppVersion
    tag: 25.3.0
    pullPolicy: IfNotPresent
    imagePullSecrets:
      - name: ksyunregistrykey
  snuba:
    repository: hub.kce.ksyun.com/middleware/snuba
    tag: 25.2.0
    pullPolicy: IfNotPresent
    imagePullSecrets:
      - name: ksyunregistrykey
  relay:
    repository: hub.kce.ksyun.com/middleware/relay
    tag: 25.2.0
    pullPolicy: IfNotPresent
    imagePullSecrets:
      - name: ksyunregistrykey
  symbolicator:
    # repository: getsentry/symbolicator
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []
  vroom:
    # repository: getsentry/vroom
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []
serviceAccount:
  # serviceAccount.annotations -- Additional Service Account annotations.
  annotations: {}
  # serviceAccount.enabled -- If true, a custom Service Account will be used.
  enabled: false
  # serviceAccount.name -- The base name of the ServiceAccount to use. Will be appended with e.g. snuba-api or web for the pods accordingly.
  name: "sentry"
  # serviceAccount.automountServiceAccountToken -- Automount API credentials for a Service Account.
  automountServiceAccountToken: true
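The images block above points at a private registry (hub.kce.ksyun.com) and references an imagePullSecret named ksyunregistrykey; a sketch of creating that secret in the target namespace (credentials are placeholders):

```bash
# Create the pull secret referenced by images.*.imagePullSecrets; values are placeholders.
kubectl create secret docker-registry ksyunregistrykey \
  --docker-server=hub.kce.ksyun.com \
  --docker-username='<registry-user>' \
  --docker-password='<registry-password>' \
  -n sentry
```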
vroom: # annotations: {} # args: [] replicas: 1 env: [] probeFailureThreshold: 5 probeInitialDelaySeconds: 10 probePeriodSeconds: 10 probeSuccessThreshold: 1 probeTimeoutSeconds: 2 resources: {} # requests: # cpu: 100m # memory: 700Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # priorityClassName: “” service: annotations: {} # tolerations: [] # podLabels: {}
autoscaling: enabled: false minReplicas: 2 maxReplicas: 5 targetCPUUtilizationPercentage: 50 sidecars: [] # topologySpreadConstraints: [] volumes: [] volumeMounts: []
relay: enabled: true # annotations: {} replicas: 1 # args: [] mode: managed env: [] probeFailureThreshold: 5 probeInitialDelaySeconds: 10 probePeriodSeconds: 10 probeSuccessThreshold: 1 probeTimeoutSeconds: 2 resources: {} # requests: # cpu: 100m # memory: 700Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} # healthCheck: # readinessRequestPath: “” securityContext: {} # if you are using GKE Ingress controller use ‘securityPolicy’ to add Google Cloud Armor Ingress policy securityPolicy: “” # if you are using GKE Ingress controller use ‘customResponseHeaders’ to add custom response header customResponseHeaders: [] containerSecurityContext: {} service: annotations: {} # tolerations: [] # podLabels: {} # priorityClassName: “” autoscaling: enabled: true minReplicas: 2 maxReplicas: 5 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] volumeMounts: [] init: resources: {} # additionalArgs: [] # credentialsSubcommand: “” # env: [] # volumes: [] # volumeMounts: [] # cache: # envelopeBufferSize: 1000 # logging: # level: info # format: json processing: kafkaConfig: messageMaxBytes: 50000000 # messageTimeoutMs: # requestTimeoutMs: # deliveryTimeoutMs: # apiVersionRequestTimeoutMs:
# additionalKafkaConfig:
# - name: compression.type
# value: "lz4"
## Override custom Kafka topic names
## WARNING: If you update this and you are also using the Kafka subchart,
## you need to update the provisioned Topic names in this values as well!
kafkaTopicOverrides:
  prefix: ""

## enable and reference the volume
geodata:
  accountID: ""
  licenseKey: ""
  editionIDs: ""
  persistence:
    ## If defined, storageClassName: <storageClass>
sentry: # to not generate a sentry-secret, use these 2 values to reference an existing secret # existingSecret: “my-secret” # existingSecretKey: “my-secret-key” singleOrganization: true web: enabled: true # if using filestore backend filesystem with RWO access, set strategyType to Recreate strategyType: RollingUpdate replicas: 3 env: [] existingSecretEnv: “” probeFailureThreshold: 5 probeInitialDelaySeconds: 10 probePeriodSeconds: 10 probeSuccessThreshold: 1 probeTimeoutSeconds: 2 resources: {} # requests: # cpu: 200m # memory: 850Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} # if you are using GKE Ingress controller use ‘securityPolicy’ to add Google Cloud Armor Ingress policy securityPolicy: “” # if you are using GKE Ingress controller use ‘customResponseHeaders’ to add custom response header customResponseHeaders: [] containerSecurityContext: {} service: annotations: {} # tolerations: [] # podLabels: {} # Mount and use custom CA # customCA: # secretName: custom-ca # item: ca.crt logLevel: “WARNING” # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL logFormat: “human” # human|machine autoscaling: enabled: true minReplicas: 3 maxReplicas: 5 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] volumeMounts: [] # workers: 3
features:
  orgSubdomains: false
  vstsLimitedScopes: true
  enableProfiling: false
  enableSessionReplay: true
  enableFeedback: false
  enableSpan: false
# example customFeature to enable Metrics(beta) https://docs.sentry.io/product/metrics/ # customFeatures: # - organizations:custom-metric # - organizations:custom-metrics-experimental # - organizations:derive-code-mappings # other feature here https://github.com/getsentry/sentry/blob/24.11.2/src/sentry/features/temporary.py
worker: enabled: true # annotations: {} replicas: 3 # concurrency: 4 env: [] existingSecretEnv: “” resources: {} # requests: # cpu: 1000m # memory: 1100Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} # tolerations: [] # podLabels: {} logLevel: “WARNING” # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL logFormat: “human” # human|machine # excludeQueues: “” # maxTasksPerChild: 1000 # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 3 maxReplicas: 5 targetCPUUtilizationPercentage: 50 livenessProbe: enabled: true periodSeconds: 60 timeoutSeconds: 10 failureThreshold: 3 sidecars: [] # securityContext: {} # containerSecurityContext: {} # priorityClassName: “” topologySpreadConstraints: [] volumes: [] volumeMounts: []
# allows to dedicate some workers to specific queues workerEvents: ## If the number of exceptions increases, it is recommended to enable workerEvents enabled: false # annotations: {} queues: “events.save_event,post_process_errors” ## When increasing the number of exceptions and enabling workerEvents, it is recommended to increase the number of their replicas replicas: 1 # concurrency: 4 env: [] resources: {} affinity: {} nodeSelector: {} # tolerations: [] # podLabels: {} # logLevel: “WARNING” # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL # logFormat: “machine” # human|machine # maxTasksPerChild: 1000 # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: false minReplicas: 2 maxReplicas: 5 targetCPUUtilizationPercentage: 50 livenessProbe: enabled: false periodSeconds: 60 timeoutSeconds: 10 failureThreshold: 3 sidecars: [] # securityContext: {} # containerSecurityContext: {} # priorityClassName: “” topologySpreadConstraints: [] volumes: [] volumeMounts: []
# allows to dedicate some workers to specific queues workerTransactions: enabled: false # annotations: {} queues: “events.save_event_transaction,post_process_transactions” replicas: 1 # concurrency: 4 env: [] resources: {} affinity: {} nodeSelector: {} # tolerations: [] # podLabels: {} # logLevel: “WARNING” # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL # logFormat: “machine” # human|machine # maxTasksPerChild: 1000 # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: false minReplicas: 2 maxReplicas: 5 targetCPUUtilizationPercentage: 50 livenessProbe: enabled: false periodSeconds: 60 timeoutSeconds: 10 failureThreshold: 3 sidecars: [] # securityContext: {} # containerSecurityContext: {} # priorityClassName: “” topologySpreadConstraints: [] volumes: [] volumeMounts: []
ingestConsumerAttachments: enabled: true replicas: 1 # concurrency: 4 # maxBatchTimeMs: 20000 # maxPollIntervalMs: 30000 env: [] resources: {} # requests: # cpu: 200m # memory: 700Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # maxBatchSize: “” # logLevel: info # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
ingestConsumerEvents: enabled: true replicas: 1 # concurrency: 4 env: [] resources: {} # requests: # cpu: 300m # memory: 500Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # maxBatchSize: “” # logLevel: “info” # inputBlockSize: “” # maxBatchTimeMs: “”
# it's better to use prometheus adapter and scale based on
# the size of the rabbitmq queue
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 3
targetCPUUtilizationPercentage: 50
sidecars: []
topologySpreadConstraints: []
volumes: []
livenessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 320
# volumeMounts:
# - mountPath: /dev/shm
# name: dshm
# autoOffsetReset: "earliest"
# noStrictOffsetReset: false
ingestConsumerTransactions: enabled: true replicas: 1 # concurrency: 4 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # maxBatchSize: “” # logLevel: “info” # inputBlockSize: “” # maxBatchTimeMs: “”
# it's better to use prometheus adapter and scale based on
# the size of the rabbitmq queue
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 3
targetCPUUtilizationPercentage: 50
sidecars: []
topologySpreadConstraints: []
volumes: []
livenessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 320
# volumeMounts:
# - mountPath: /dev/shm
# name: dshm
# autoOffsetReset: "earliest"
# noStrictOffsetReset: false
ingestReplayRecordings: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
ingestProfiles: replicas: 1 env: [] resources: {} affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
ingestOccurrences: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
ingestFeedback: enabled: false replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: false minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
ingestMonitors: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
monitorsClockTasks: enabled: false replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: false sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
monitorsClockTick: enabled: false replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: false sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
billingMetricsConsumer: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 100m # memory: 250Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
genericMetricsConsumer: enabled: true replicas: 1 # concurrency: 4 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # maxPollIntervalMs: “” # logLevel: “info” # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: true minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
metricsConsumer: enabled: true replicas: 1 # concurrency: 4 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} # logLevel: “info” # maxPollIntervalMs: “” # it’s better to use prometheus adapter and scale based on # the size of the rabbitmq queue autoscaling: enabled: false minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: # - mountPath: /dev/shm # name: dshm # autoOffsetReset: “earliest” # noStrictOffsetReset: false
cron: enabled: true replicas: 1 env: [] resources: {} affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] # volumeMounts: [] # logLevel: “WARNING” # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL # logFormat: “machine” # human|machine
subscriptionConsumerEvents: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false # volumeMounts: []
subscriptionConsumerTransactions: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false # volumeMounts: []
postProcessForwardErrors: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 150m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] # volumeMounts: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false
postProcessForwardTransactions: enabled: true replicas: 1 # processes: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumeMounts: [] # autoOffsetReset: “earliest” # noStrictOffsetReset: false
postProcessForwardIssuePlatform: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 300m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] # volumeMounts: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false
subscriptionConsumerGenericMetrics: enabled: true replicas: 1 # concurrency: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] # volumeMounts: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false
subscriptionConsumerMetrics: enabled: true replicas: 1 # concurrency: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} containerSecurityContext: {} # tolerations: [] # podLabels: {} sidecars: [] topologySpreadConstraints: [] volumes: [] # volumeMounts: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false
cleanup: successfulJobsHistoryLimit: 5 failedJobsHistoryLimit: 5 activeDeadlineSeconds: 100 concurrencyPolicy: Allow concurrency: 1 enabled: true schedule: “0 0 * * *” days: 90 # logLevel: INFO logLevel: ‘’ # securityContext: {} # containerSecurityContext: {} sidecars: [] volumes: [] # volumeMounts: [] serviceAccount: {}
# Sentry settings of connections to Kafka
kafka:
  message:
    max:
      bytes: 50000000
  compression:
    type: # 'gzip', 'snappy', 'lz4', 'zstd'
  socket:
    timeout:
      ms: 1000
snuba: api: enabled: true replicas: 1 # set command to [“snuba”,”api”] if securityContext.runAsUser > 0 # see: https://github.com/getsentry/snuba/issues/956 command: [] # - snuba # - api env: [] probeInitialDelaySeconds: 10 liveness: timeoutSeconds: 2 readiness: timeoutSeconds: 2 resources: {} # requests: # cpu: 100m # memory: 150Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: serv.type operator: In values: - md nodeSelector: {} securityContext: {} containerSecurityContext: {} service: annotations: {} # tolerations: [] # podLabels: {}
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 50
sidecars: []
topologySpreadConstraints: []
volumes: []
# volumeMounts: []
consumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # noStrictOffsetReset: false # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
outcomesConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” # noStrictOffsetReset: false maxBatchSize: “3” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # processes: “” # inputBlockSize: “” # outputBlockSize: “” # maxBatchTimeMs: “” # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
outcomesBillingConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” # noStrictOffsetReset: false maxBatchSize: “3” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
replacer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” # maxBatchTimeMs: “” # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # volumes: [] # volumeMounts: [] # noStrictOffsetReset: false
metricsConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumes: [] # volumeMounts: [] # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false
subscriptionConsumerEvents: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumes: [] # volumeMounts: [] # autoOffsetReset: “earliest” # noStrictOffsetReset: false
genericMetricsCountersConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumes: [] # volumeMounts: [] # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false
genericMetricsDistributionConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumes: [] # volumeMounts: [] # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false
genericMetricsSetsConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumes: [] # volumeMounts: [] # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false
subscriptionConsumerMetrics: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # volumes: [] # volumeMounts: []
subscriptionConsumerTransactions: enabled: true replicas: 1 env: [] resources: {} # requests: # cpu: 200m # memory: 500Mi affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # volumes: [] # volumeMounts: [] livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # autoOffsetReset: “earliest” # noStrictOffsetReset: false
replaysConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
transactionsConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
profilingProfilesConsumer: replicas: 1 env: [] resources: {} affinity: {} sidecars: [] nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false
# volumeMounts:
# - mountPath: /dev/shm
# name: dshm
# volumes:
# - name: dshm
# emptyDir:
# medium: Memory
profilingFunctionsConsumer: replicas: 1 env: [] resources: {} affinity: {} sidecars: [] nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
issueOccurrenceConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
spansConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false # volumeMounts: # - mountPath: /dev/shm # name: dshm # volumes: # - name: dshm # emptyDir: # medium: Memory
groupAttributesConsumer: enabled: true replicas: 1 env: [] resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # autoOffsetReset: “earliest” livenessProbe: enabled: true initialDelaySeconds: 5 periodSeconds: 320 # maxBatchSize: “” # processes: “” # inputBlockSize: “” # outputBlockSize: “” maxBatchTimeMs: 750 # queuedMaxMessagesKbytes: “” # queuedMinMessages: “” # noStrictOffsetReset: false
# volumeMounts:
# - mountPath: /dev/shm
# name: dshm
# volumes:
# - name: dshm
# emptyDir:
# medium: Memory
dbInitJob:
  env: []
migrateJob:
  env: []
clickhouse:
  maxConnections: 100
rustConsumer: false
hooks:
enabled: true
preUpgrade: false
removeOnSuccess: true
activeDeadlineSeconds: 2000
shareProcessNamespace: false
dbCheck:
enabled: true
image:
# repository: subfuzion/netcat
# tag: latest
# pullPolicy: IfNotPresent
imagePullSecrets: []
env: []
# podLabels: {}
podAnnotations: {}
resources:
limits:
memory: 64Mi
requests:
cpu: 100m
memory: 64Mi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: serv.type
operator: In
values:
- md
tolerations:
  - key: "serv.type"
    operator: "Equal"
    value: "md"
    effect: "NoSchedule"
nodeSelector: {}
securityContext: {}
containerSecurityContext: {}
# tolerations: []
# volumes: []
# volumeMounts: []
dbInit:
enabled: true
env: []
# podLabels: {}
podAnnotations: {}
resources:
limits:
memory: 2048Mi
requests:
cpu: 300m
memory: 2048Mi
sidecars: []
volumes: []
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: serv.type
operator: In
values:
- md
tolerations:
  - key: "serv.type"
    operator: "Equal"
    value: "md"
    effect: "NoSchedule"
nodeSelector: {}
# tolerations: []
# volumes: []
# volumeMounts: []
snubaInit:
enabled: true
# As snubaInit doesn't support configuring partition and replication factor, you can disable
# snubaInit's kafka topic creation by setting `kafka.enabled` to `false`,
# and create the topics using `kafka.provisioning.topics` with the desired partition and replication factor.
# Note that when you set `kafka.enabled` to `false`, the snuba component might fail to start
# if newly added topics are not created by `kafka.provisioning`.
kafka:
enabled: true
# podLabels: {}
podAnnotations: {}
resources: {}
#limits:
# cpu: 2000m
# memory: 1Gi
#requests:
# cpu: 700m
# memory: 1Gi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: serv.type
operator: In
values:
- md
tolerations:
  - key: "serv.type"
    operator: "Equal"
    value: "md"
    effect: "NoSchedule"
nodeSelector: {}
# tolerations: []
# volumes: []
# volumeMounts: []
snubaMigrate:
enabled: true
# podLabels: {}
# volumes: []
# volumeMounts: []
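If the snuba-migrate hook is disabled or fails, the migrations can also be run by hand inside a Snuba pod; a sketch, assuming the release name sentry and the snuba-api Deployment created by the chart:

```bash
# Run Snuba's ClickHouse migrations manually; deployment and namespace names are placeholders.
kubectl -n sentry exec -it deploy/sentry-snuba-api -- snuba migrations migrate --force
# List migration status afterwards
kubectl -n sentry exec -it deploy/sentry-snuba-api -- snuba migrations list
```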
system:
  ## be sure to include the scheme on the url, for example: "https://sentry.example.com"
  url: "https://sentry.htdevops.com"
  adminEmail: "yingliang.feng@hannto.com"
  ## This should only be used if you're installing Sentry behind your company's firewall.
  public: false
  ## This will generate one for you (it must be given upon updates)
  # secretKey: "xx"
mail:
  # For example: smtp
  backend: smtp
  useTls: true
  useSsl: false
  username: "sentry.notify@hannto.com"
  password: "HntSN2017"
  # existingSecret: secret-name
  ## set existingSecretKey if key name inside existingSecret is different from 'mail-password'
  # existingSecretKey: secret-key-name
  port: 587
  host: "smtp.partner.outlook.cn"
  from: "sentry.notify@hannto.com"
symbolicator: enabled: false api: usedeployment: true # Set true to use Deployment, false for StatefulSet persistence: enabled: true # Set true for using PersistentVolumeClaim, false for emptyDir accessModes: [“ReadWriteOnce”] storageClassName: “ksc-kfs” size: “10Gi” replicas: 1 env: [] probeInitialDelaySeconds: 10 resources: {} affinity: {} nodeSelector: {} securityContext: {} topologySpreadConstraints: [] containerSecurityContext: {} # tolerations: [] # podLabels: {} # priorityClassName: “xxx” config: |- # See: https://getsentry.github.io/symbolicator/#configuration cache_dir: “/data” bind: “0.0.0.0:3021” logging: level: “warn” metrics: statsd: null prefix: “symbolicator” sentry_dsn: null connect_to_reserved_ips: true # caches: # downloaded: # max_unused_for: 1w # retry_misses_after: 5m # retry_malformed_after: 5m # derived: # max_unused_for: 1w # retry_misses_after: 5m # retry_malformed_after: 5m # diagnostics: # retention: 1w
1
2
3
4
5
6
7
8
9
10
# TODO autoscaling is not yet implemented
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 50
# volumes: []
# volumeMounts: []
# sidecars: []
# TODO The cleanup cronjob is not yet implemented
cleanup:
  enabled: false
  # podLabels: {}
  # affinity: {}
  # env: []
  # volumes: []
  # sidecars: []
auth:
  register: false
service:
  name: sentry
  type: ClusterIP
  externalPort: 9000
  annotations: {}
  # externalIPs:
  #   - 192.168.0.1
  # loadBalancerSourceRanges: []
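With service.type ClusterIP, a quick way to reach the web UI before the ingress is ready is a port-forward; a sketch, assuming the release exposes a Service named sentry on port 9000 as configured above:

```bash
# Temporary access to the web UI without the ingress; namespace is a placeholder.
kubectl -n sentry port-forward svc/sentry 9000:9000
# then browse http://127.0.0.1:9000
```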
# https://github.com/settings/apps (Create a Github App)
github: {}
# github:
#   appId: "xxxx"
#   appName: MyAppName
#   clientId: "xxxxx"
#   clientSecret: "xxxxx"
#   privateKey: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpA" !!!! Don't forget a trailing \n
#   webhookSecret: "xxxxx"
#
#   Note: if you use `existingSecret`, all of the above `clientId`, `clientSecret`, `privateKey`, `webhookSecret`
#   params would be ignored, because the chart will assume that they are stored in `existingSecret`. So you
#   must define all required keys and set them at least to empty strings if they are not needed in the `existingSecret`
#   secret (client-id, client-secret, webhook-secret, private-key)
#
#   existingSecret: "xxxxx"
#   existingSecretPrivateKeyKey: ""     # by default "private-key"
#   existingSecretWebhookSecretKey: ""  # by default "webhook-secret"
#   existingSecretClientIdKey: ""       # by default "client-id"
#   existingSecretClientSecretKey: ""   # by default "client-secret"
#
# Reference -> https://docs.sentry.io/product/integrations/source-code-mgmt/github/
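If the GitHub integration is enabled via existingSecret rather than inline values, the secret must contain all four keys listed above; a sketch of creating it (all values and the secret name are placeholders):

```bash
# Keys must match the defaults the chart expects: client-id, client-secret, webhook-secret, private-key.
kubectl -n sentry create secret generic sentry-github \
  --from-literal=client-id='<github-app-client-id>' \
  --from-literal=client-secret='<github-app-client-secret>' \
  --from-literal=webhook-secret='<github-app-webhook-secret>' \
  --from-file=private-key=./github-app.private-key.pem
```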
# https://developers.google.com/identity/sign-in/web/server-side-flow#step_1_create_a_client_id_and_client_secret
google: {}
# google:
#   clientId: ""
#   clientSecret: ""
#   existingSecret: "xxxxx"
#   existingSecretClientIdKey: ""     # by default "client-id"
#   existingSecretClientSecretKey: "" # by default "client-secret"

slack: {}
# slack:
#   clientId: ""
#   clientSecret: ""
#   signingSecret: ""
#   existingSecret: "xxxxx"
#   existingSecretClientId: ""      # by default "client-id"
#   existingSecretClientSecret: ""  # by default "client-secret"
#   existingSecretSigningSecret: "" # by default "signing-secret"
# Reference -> https://develop.sentry.dev/integrations/slack/

discord: {}
# discord:
#   applicationId: ""
#   publicKey: ""
#   clientSecret: ""
#   botToken: ""
#   existingSecret: "xxxxx"
#   existingSecretApplicationId: "" # by default "application-id"
#   existingSecretPublicKey: ""     # by default "public-key"
#   existingSecretClientSecret: ""  # by default "client-secret"
#   existingSecretBotToken: ""      # by default "bot-token"
# Reference -> https://develop.sentry.dev/integrations/discord/

openai: {}
# openai:
#   existingSecret: "xxxxx"
#   existingSecretKey: "" # by default "api-token"
nginx: enabled: true # true, if Safari compatibility is needed containerPort: 8080 existingServerBlockConfigmap: ‘{{ template “sentry.fullname” . }}’ resources: {} replicaCount: 1 nodeSelector: {} affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: “serv.type” operator: In values: - md tolerations: - key: “serv.type” operator: “Equal” value: “md” effect: “NoSchedule” service: type: ClusterIP ports: http: 80 extraLocationSnippet: false customReadinessProbe: tcpSocket: port: http initialDelaySeconds: 5 timeoutSeconds: 3 periodSeconds: 5 successThreshold: 1 failureThreshold: 3 # extraLocationSnippet: | # location /admin { # allow 1.2.3.4; # VPN network # deny all; # proxy_pass http://sentry; # } # Use this to enable an extra service account # serviceAccount: # create: false # name: nginx metrics: serviceMonitor: {}
ingress: enabled: true # If you are using traefik ingress controller, switch this to ‘traefik’ # if you are using AWS ALB Ingress controller, switch this to ‘aws-alb’ # if you are using GKE Ingress controller, switch this to ‘gke’ regexPathStyle: nginx ingressClassName: ingress-nginx-ops # If you are using AWS ALB Ingress controller, switch to true if you want activate the http to https redirection. alb: httpRedirect: false # Add custom labels for ingress resource # labels: # annotations: # If you are using nginx ingress controller, please use at least those 2 annotations # kubernetes.io/ingress.class: nginx # nginx.ingress.kubernetes.io/use-regex: “true” # https://github.com/getsentry/self-hosted/issues/1927 # nginx.ingress.kubernetes.io/proxy-buffers-number: “16” # nginx.ingress.kubernetes.io/proxy-buffer-size: “32k” # hostname: “sentry.htdevops.com” # additionalHostNames: [] # tls:
  - secretName: "sentry-htdevops-com"
    hosts:
      - "sentry.htdevops.com"
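The tls entry above references a secret named sentry-htdevops-com; if it is not managed by cert-manager, it can be created from an existing certificate and key, for example:

```bash
# Certificate and key file names are placeholders; the secret name must match ingress.tls[].secretName.
kubectl -n sentry create secret tls sentry-htdevops-com \
  --cert=sentry.htdevops.com.crt \
  --key=sentry.htdevops.com.key
```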
filestore:
  # Set to one of filesystem, gcs or s3 as supported by Sentry.
  backend: filesystem

  filesystem:
    path: /var/lib/sentry/files
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## database data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "ksc-kfs"
accessMode: ReadWriteMany # Set to ReadWriteMany for Replays to work
size: 10Gi
## Whether to mount the persistent volume to the Sentry worker and
## cron deployments. This setting needs to be enabled for some advanced
## Sentry features, such as private source maps. If you disable this
## setting, the Sentry workers will not have access to artifacts you upload
## through the web deployment.
## Please note that you may need to change your accessMode to ReadWriteMany
## if you plan on having the web, worker and cron deployments run on
## different nodes.
persistentWorkers: false
## If existingClaim is specified, no PVC will be created and this claim will
## be used
existingClaim: ""
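When existingClaim is used instead of the chart-managed PVC, the claim must exist beforehand with ReadWriteMany access (as noted above for multi-node web/worker/cron); a minimal sketch, assuming the ksc-kfs storage class used elsewhere in this values file and a placeholder claim name:

```bash
# Pre-create a shared filestore claim and point filestore.filesystem.persistence.existingClaim at it.
kubectl -n sentry apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sentry-filestore
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ksc-kfs
  resources:
    requests:
      storage: 10Gi
EOF
```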
gcs: {} ## Point this at a pre-configured secret containing a service account. The resulting ## secret will be mounted at /var/run/secrets/google # secretName: # credentialsFile: credentials.json # bucketName:
## Currently unconfigured and changing this has no impact on the template configuration. ## Note that you can use a secret with default references “s3-access-key-id” and “s3-secret-access-key”. ## Otherwise, you can use custom secret references, or use plain text values. s3: {} # existingSecret: # accessKeyIdRef: # secretAccessKeyRef: # accessKey: # secretKey: # bucketName: # endpointUrl: # signature_version: # region_name: # default_acl:
config: # No YAML Extension Config Given configYml: {} sentryConfPy: | # No Python Extension Config Given snubaSettingsPy: | # No Python Extension Config Given relay: | # No YAML relay config given web: httpKeepalive: 15 maxRequests: 100000 maxRequestsDelta: 500 maxWorkerLifetime: 86400
clickhouse: enabled: true nodeSelector: {} clickhouse: replicas: “3” configmap: remote_servers: internal_replication: true replica: backup: enabled: true zookeeper_servers: enabled: true config: - index: “clickhouse” hostTemplate: “{{ .Release.Name }}-zookeeper-clickhouse” port: “2181” users: enabled: false user: # the first user will be used if enabled - name: default config: password: “fVoQdJPbzkXCvUVNQLU3” networks: - ::/0 profile: default quota: default
persistentVolumeClaim:
enabled: true
dataPersistentVolume:
enabled: true
storageClassName: "ksc-kfs"
accessModes:
- "ReadWriteMany"
storage: "30Gi"
## Use this to enable an extra service account
# serviceAccount:
#   annotations: {}
#   enabled: false
#   name: "sentry-clickhouse"
#   automountServiceAccountToken: true
## This value is only used when clickhouse.enabled is set to false
##
externalClickhouse:
  ## Hostname or ip address of external clickhouse
  ##
  host: "clickhouse"
  tcpPort: 9000
  httpPort: 8123
  username: default
  password: ""
  database: default
  singleNode: true
  # existingSecret: secret-name
  ## set existingSecretKey if key name inside existingSecret is different from 'postgres-password'
  # existingSecretKey: secret-key-name
  ## Cluster name, can be found in config
  ## (https://clickhouse.tech/docs/en/operations/server-configuration-parameters/settings/#server-settings-remote-servers)
  ## or by executing `select * from system.clusters`
  ##
  # clusterName: test_shard_localhost
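When pointing at an external ClickHouse cluster, the clusterName above must match what the server reports; a sketch of checking it from inside the Kubernetes cluster (host, credentials, namespace, and image tag are placeholders):

```bash
# List the clusters ClickHouse knows about; clusterName must match the `cluster` column.
kubectl -n sentry run clickhouse-check --rm -it --restart=Never \
  --image=clickhouse/clickhouse-server:23.8 -- \
  clickhouse-client --host clickhouse --user default \
  --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters"
```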
## Settings for Zookeeper.
## See https://github.com/bitnami/charts/tree/master/bitnami/zookeeper
zookeeper:
  enabled: true
  nameOverride: zookeeper-clickhouse
  replicaCount: 3
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: serv.type
                operator: In
                values:
                  - md
  tolerations:
    - key: "serv.type"
      operator: "Equal"
      value: "md"
      effect: "NoSchedule"
  nodeSelector: {}
  # tolerations: []
  ## When increasing the number of exceptions, you need to increase persistence.size
  # persistence:
  #   size: 8Gi
## Settings for Kafka.
## See https://github.com/bitnami/charts/tree/master/bitnami/kafka
kafka:
enabled: true
provisioning:
## Increasing the replicationFactor enhances data reliability during Kafka pod failures by replicating data across multiple brokers.
# Note that existing topics will remain with replicationFactor: 1 when updated; a verification sketch using kafka-topics.sh follows this Kafka section.
replicationFactor: 3
enabled: true
# Topic list is based on files below.
# - https://github.com/getsentry/snuba/blob/master/snuba/utils/streams/topics.py
# - https://github.com/getsentry/sentry/blob/master/src/sentry/conf/types/kafka_definition.py
## Default number of partitions for topics when unspecified
##
# numPartitions: 1
# Note that the snuba component might fail if you set `hooks.snubaInit.kafka.enabled` to `false`
# and remove the topics from this default topic list.
topics:
- name: events
## Number of partitions for this topic
# partitions: 1
  config:
    "message.timestamp.type": LogAppendTime
- name: event-replacements
- name: snuba-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: cdc
- name: transactions
  config:
    "message.timestamp.type": LogAppendTime
- name: snuba-transactions-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: snuba-metrics
  config:
    "message.timestamp.type": LogAppendTime
- name: outcomes
- name: outcomes-dlq
- name: outcomes-billing
- name: outcomes-billing-dlq
- name: ingest-sessions
- name: snuba-metrics-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: scheduled-subscriptions-events
- name: scheduled-subscriptions-transactions
- name: scheduled-subscriptions-metrics
- name: scheduled-subscriptions-generic-metrics-sets
- name: scheduled-subscriptions-generic-metrics-distributions
- name: scheduled-subscriptions-generic-metrics-counters
- name: scheduled-subscriptions-generic-metrics-gauges
- name: events-subscription-results
- name: transactions-subscription-results
- name: metrics-subscription-results
- name: generic-metrics-subscription-results
- name: snuba-queries
  config:
    "message.timestamp.type": LogAppendTime
- name: processed-profiles
  config:
    "message.timestamp.type": LogAppendTime
- name: profiles-call-tree
- name: snuba-profile-chunks
- name: ingest-replay-events
  config:
    "message.timestamp.type": LogAppendTime
    "max.message.bytes": "15000000"
- name: snuba-generic-metrics
  config:
    "message.timestamp.type": LogAppendTime
- name: snuba-generic-metrics-sets-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: snuba-generic-metrics-distributions-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: snuba-generic-metrics-counters-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: snuba-generic-metrics-gauges-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: generic-events
  config:
    "message.timestamp.type": LogAppendTime
- name: snuba-generic-events-commit-log
  config:
    "cleanup.policy": "compact,delete"
    "min.compaction.lag.ms": "3600000"
- name: group-attributes
  config:
    "message.timestamp.type": LogAppendTime
- name: snuba-dead-letter-metrics
- name: snuba-dead-letter-generic-metrics
- name: snuba-dead-letter-replays
- name: snuba-dead-letter-generic-events
- name: snuba-dead-letter-querylog
- name: snuba-dead-letter-group-attributes
- name: ingest-attachments
- name: ingest-attachments-dlq
- name: ingest-transactions
- name: ingest-transactions-dlq
- name: ingest-transactions-backlog
- name: ingest-events-dlq
- name: ingest-events
## If the number of exceptions increases, it is recommended to increase the number of partitions for ingest-events
# partitions: 1
- name: ingest-replay-recordings
- name: ingest-metrics
- name: ingest-metrics-dlq
- name: ingest-performance-metrics
- name: ingest-feedback-events
- name: ingest-feedback-events-dlq
- name: ingest-monitors
- name: monitors-clock-tasks
- name: monitors-clock-tick
- name: monitors-incident-occurrences
- name: profiles
- name: ingest-occurrences
- name: snuba-spans
- name: snuba-eap-spans-commit-log
- name: scheduled-subscriptions-eap-spans
- name: eap-spans-subscription-results
- name: snuba-eap-mutations
- name: snuba-lw-deletions-generic-events
- name: shared-resources-usage
- name: buffered-segments
- name: buffered-segments-dlq
- name: uptime-configs
- name: uptime-results
- name: snuba-uptime-results
- name: task-worker
- name: snuba-ourlogs
listeners:
  client:
    protocol: "PLAINTEXT"
  controller:
    protocol: "PLAINTEXT"
  interbroker:
    protocol: "PLAINTEXT"
  external:
    protocol: "PLAINTEXT"
zookeeper:
enabled: false
kraft:
enabled: true
controller:
replicaCount: 3
nodeSelector: {}
# tolerations: []
## if the load on the kafka controller increases, resourcesPreset must be increased
# resourcesPreset: small # small, medium, large, xlarge, 2xlarge
## if the load on the kafka controller increases, persistence.size must be increased
# persistence:
# size: 8Gi
## Use this to enable an extra service account
# serviceAccount:
# create: false
# name: kafka
## Use this to enable an extra service account
# zookeeper:
#   serviceAccount:
#     create: false
#     name: zookeeper
# sasl:
# ## Credentials for client communications.
# ## @param sasl.client.users list of usernames for client communications when SASL is enabled
# ## @param sasl.client.passwords list of passwords for client communications when SASL is enabled, must match the number of client.users
# ## First user and password will be used if enabled
# client:
# users:
# - sentry
# passwords:
# - password
# ## @param sasl.enabledMechanisms Comma-separated list of allowed SASL mechanisms when SASL listeners are configured. Allowed types: `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `OAUTHBEARER`
# enabledMechanisms: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
# listeners:
# ## @param listeners.client.protocol Security protocol for the Kafka client listener. Allowed values are 'PLAINTEXT', 'SASL_PLAINTEXT', 'SASL_SSL' and 'SSL'
# client:
# protocol: SASL_PLAINTEXT
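To confirm that the provisioned topics above were created with the intended settings (for example the compacted commit-log topics), you can describe them from inside a broker pod. A minimal sketch, assuming the release and namespace are both named `sentry` and the Bitnami Kafka pods are therefore called `sentry-kafka-controller-*`:

```bash
# List the provisioned topics (pod and namespace names are assumptions)
kubectl -n sentry exec -it sentry-kafka-controller-0 -- \
  kafka-topics.sh --bootstrap-server localhost:9092 --list

# Check that a commit-log topic really carries the compaction settings from the values above
kubectl -n sentry exec -it sentry-kafka-controller-0 -- \
  kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name snuba-generic-events-commit-log --describe
```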
## This value is only used when kafka.enabled is set to false
##
externalKafka:
  ## Multi hosts and ports of external kafka
  ##
  # cluster:
  #   - host: "233.5.100.28"
  #     port: 9092
  #   - host: "kafka-confluent-2"
  #     port: 9093
  #   - host: "kafka-confluent-3"
  #     port: 9094
  ## Or Hostname (ip address) of external kafka
  # host: "kafka-confluent"
  ## and port of external kafka
  # port: 9092
  compression:
    type: # 'gzip', 'snappy', 'lz4', 'zstd'
  message:
    max:
      bytes: 50000000
  sasl:
    mechanism: None # PLAIN,SCRAM-256,SCRAM-512
    username: None
    password: None
  security:
    protocol: plaintext # 'PLAINTEXT', 'SASL_PLAINTEXT', 'SASL_SSL' and 'SSL'
  socket:
    timeout:
      ms: 1000
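If an external Kafka is used instead (kafka.enabled set to false), it is worth checking that the brokers are reachable from inside the cluster before installing the chart. A small sketch using kcat; the broker address is the placeholder from the commented example above, and the image tag and namespace are assumptions:

```bash
# One-off pod that prints broker/topic metadata and then exits
kubectl -n sentry run kcat --rm -it --restart=Never \
  --image=edenhill/kcat:1.7.1 -- -b 233.5.100.28:9092 -L
```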
sourcemaps:
  enabled: false
redis:
  enabled: false
  replica:
    replicaCount: 1
    nodeSelector: {}
    # tolerations: []
  auth:
    enabled: false
    sentinel: false
    ## Just omit the password field if your redis cluster doesn't use password
    # password: redis
    # existingSecret: secret-name
    ## set existingSecretPasswordKey if key name inside existingSecret is different from redis-password
    # existingSecretPasswordKey: secret-key-name
  nameOverride: sentry-redis
  master:
    persistence:
      enabled: true
    nodeSelector: {}
    # tolerations: []
  ## Use this to enable an extra service account
  # serviceAccount:
  #   create: false
  #   name: sentry-redis
## This value is only used when redis.enabled is set to false
##
externalRedis:
  ## Hostname or ip address of external redis cluster
  ##
  host: "10.101.8.148"
  port: 6379
  ## Just omit the password field if your redis cluster doesn't use password
  password: **
  # existingSecret: secret-name
  ## set existingSecretKey if key name inside existingSecret is different from redis-password
  # existingSecretKey: secret-key-name
  ## Integer database number to use for redis (This is an integer)
  db: 3
  ## Use ssl for the connection to Redis (True/False)
  # ssl: false
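A similar reachability check can be done for the external Redis before disabling the bundled one. A sketch using the host, port and db number shown above (image tag and namespace are assumptions; add `-a '<password>'` if the instance requires auth):

```bash
# One-off Redis connectivity check from inside the cluster
kubectl -n sentry run redis-check --rm -it --restart=Never \
  --image=redis:7 -- redis-cli -h 10.101.8.148 -p 6379 -n 3 ping
```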
postgresql:
  enabled: false
  nameOverride: sentry-postgresql
  auth:
    database: sentry
  replication:
    enabled: false
    readReplicas: 2
    synchronousCommit: "on"
    numSynchronousReplicas: 1
    applicationName: sentry
  ## Use this to enable an extra service account
  # serviceAccount:
  #   enabled: false
  ## Default connection max age is 0 (unlimited connections)
  ## Set to a higher number to close connections after a period of time in seconds
  connMaxAge: 0
  ## If you are increasing the number of replicas, you need to increase max_connections
  # primary:
  #   extendedConfiguration: |
  #     max_connections=100
  #   nodeSelector: {}
  #   tolerations: []
  ## When increasing the number of exceptions, you need to increase persistence.size
  # primary:
  #   persistence:
  #     size: 8Gi
## This value is only used when postgresql.enabled is set to false
## Set either externalPostgresql.password or externalPostgresql.existingSecret to configure password
externalPostgresql:
  host: 10.102.3.229
  port: 5432
  username: sentry_admin
  password: **
  # existingSecret: secret-name
  # set existingSecretKeys in a secret, if not specified, value from the secret won't be used
  # if externalPostgresql.existingSecret is used, externalPostgresql.existingSecretKeys.password must be specified.
  existingSecretKeys: {}
  # password: postgresql-password # Required if existingSecret is used. Key in existingSecret.
  # username: username
  # database: database
  # port: port
  # host: host
  database: sentry_v25_3_0
  # sslMode: require
  ## Default connection max age is 0 (unlimited connections)
  ## Set to a higher number to close connections after a period of time in seconds
  connMaxAge: 0
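The database referenced above has to exist before Sentry runs its migrations. A hedged sketch for creating it on the external PostgreSQL, assuming a superuser called postgres and reusing the host, owner and database name from the values above (`<superuser-password>` is a placeholder):

```bash
# Create the Sentry database on the external PostgreSQL if it does not exist yet
kubectl -n sentry run psql-init --rm -it --restart=Never --image=postgres:14 -- \
  psql "host=10.102.3.229 port=5432 user=postgres password=<superuser-password>" \
  -c "CREATE DATABASE sentry_v25_3_0 OWNER sentry_admin;"
```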
rabbitmq:
  ## If disabled, Redis will be used instead as the broker.
  enabled: false
  vhost: /
  clustering:
    forceBoot: true
    rebalance: true
  replicaCount: 1
  auth:
    erlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
    username: guest
    password: guest
  nameOverride: ""
  pdb:
    create: true
  persistence:
    enabled: true
  resources: {}
  memoryHighWatermark: {}
    # enabled: true
    # type: relative
    # value: 0.4
  extraSecrets:
    load-definition:
      load_definition.json: |
        {
          "users": [
            {
              "name": "{{ .Values.auth.username }}",
              "password": "{{ .Values.auth.password }}",
              "tags": "administrator"
            }
          ],
          "permissions": [
            {
              "user": "{{ .Values.auth.username }}",
              "vhost": "/",
              "configure": ".*",
              "write": ".*",
              "read": ".*"
            }
          ],
          "policies": [
            {
              "name": "ha-all",
              "pattern": ".*",
              "vhost": "/",
              "definition": {
                "ha-mode": "all",
                "ha-sync-mode": "automatic",
                "ha-sync-batch-size": 1
              }
            }
          ],
          "vhosts": [
            {
              "name": "/"
            }
          ]
        }
  loadDefinition:
    enabled: true
    existingSecret: load-definition
  extraConfiguration: |
    load_definitions = /app/load_definition.json
  ## Use this to enable an extra service account
  # serviceAccount:
  #   create: false
  #   name: rabbitmq
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
      path: "/metrics/per-object" # https://www.rabbitmq.com/docs/prometheus
      labels:
        release: "prometheus-operator" # helm release of kube-prometheus-stack
memcached:
  memoryLimit: "2048"
  maxItemSize: "26214400"
  args:
    - "memcached"
    - "-u memcached"
    - "-p 11211"
    - "-v"
    - "-m $(MEMCACHED_MEMORY_LIMIT)"
    - "-I $(MEMCACHED_MAX_ITEM_SIZE)"
  extraEnvVarsCM: "sentry-memcached"
  nodeSelector: {}
  # tolerations: []
## Prometheus Exporter / Metrics
##
metrics:
  enabled: false

  podAnnotations: {}

  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 3
    successThreshold: 1

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 100Mi
  # requests:
  #   cpu: 100m
  #   memory: 100Mi

  nodeSelector: {}
  tolerations: []
  affinity: {}
  securityContext: {}
  containerSecurityContext: {}

  volumes: []
  sidecars: []

  # schedulerName:
  # Optional extra labels for pod, i.e. redis-client: "true"
  # podLabels: {}
  service:
    type: ClusterIP
    labels: {}

  image:
    repository: hub.kce.ksyun.com/middleware/sentry-statsd-exporter
    tag: v0.17.0
    pullPolicy: IfNotPresent
    imagePullSecrets:
      - name: ksyunregistrykey

  # Enable this if you're using https://github.com/coreos/prometheus-operator
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    namespace: ""
    namespaceSelector: {}
    # Default: scrape .Release.Namespace only
    # To scrape all, use the following:
    # namespaceSelector:
    #   any: true
    scrapeInterval: 30s
    # honorLabels: true
    relabelings: []
    metricRelabelings: []
revisionHistoryLimit: 10
dnsPolicy: "ClusterFirst"
dnsConfig:
nameservers: []
searches: []
options: []
extraManifests: []
pgbouncer:
  enabled: false
  postgres:
    cp_max: 10
    cp_min: 5
    host: ''
    dbname: ''
    user: ''
    password: ''
  image:
    repository: "bitnami/pgbouncer"
    tag: "1.23.1-debian-12-r5"
    pullPolicy: IfNotPresent
  replicas: 2
  podDisruptionBudget:
    enabled: true
    # Define either 'minAvailable' or 'maxUnavailable', never both.
    minAvailable: 1
    # -- Maximum unavailable pods set in PodDisruptionBudget. If set, 'minAvailable' is ignored.
    # maxUnavailable: 1
  authType: "md5"
  maxClientConn: "8192"
  poolSize: "50"
  poolMode: "transaction"
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 25%
  priorityClassName: ''
  topologySpreadConstraints: []
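With the values adjusted, the release can be installed or upgraded. A minimal sketch, assuming the community sentry-kubernetes/charts repository, a release named `sentry` in the `sentry` namespace, and the edited file saved as `values.yaml`:

```bash
# Add the (non-official) chart repository and apply the customised values
helm repo add sentry https://sentry-kubernetes.github.io/charts
helm repo update
helm upgrade --install sentry sentry/sentry \
  -n sentry --create-namespace \
  -f values.yaml \
  --timeout 20m
```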
#### sentry/charts/clickhouse/values.yaml
## Issues Encountered
### snuba-metrics-consumer fails on startup
> snuba.clickhouse.errors.ClickhouseWriterError: Method write is not supported by storage Distributed with more than one shard and no sharding key provided (version 21.8.13.6 (official build))
>
>> Cause: the newer Snuba release does not match the existing ClickHouse table schema (a fresh deployment of the latest version hits the same issue).
>>
>> Fix: drop the old table default.metrics_raw_v2_dist and recreate it with the new definition.
$ kubectl exec -it sentry-clickhouse-0 -- bash
root@sentry-clickhouse-0:/# clickhouse-client -h sentry-clickhouse
ClickHouse client version 21.8.13.6 (official build).
Connecting to sentry-clickhouse:9000 as user default.
Connected to ClickHouse server version 21.8.13 revision 54449.
sentry-clickhouse :) SHOW TABLES;
SHOW TABLES
Query id: 00a56a30-1f3a-47bc-9017-0d7b0e753e32
┌─name───────────────────────────────────────────┐
│ discover_dist │
│ discover_local │
│ errors_dist │
│ errors_dist_ro │
│ errors_local │
│ functions_local │
│ functions_mv_dist │
│ functions_mv_local │
│ functions_raw_dist │
│ functions_raw_local │
│ generic_metric_counters_aggregated_dist │
│ generic_metric_counters_aggregated_local │
│ generic_metric_counters_aggregation_mv │
│ generic_metric_counters_aggregation_mv_v2 │
│ generic_metric_counters_raw_dist │
│ generic_metric_counters_raw_local │
│ generic_metric_distributions_aggregated_dist │
│ generic_metric_distributions_aggregated_local │
│ generic_metric_distributions_aggregation_mv │
│ generic_metric_distributions_aggregation_mv_v2 │
│ generic_metric_distributions_raw_dist │
│ generic_metric_distributions_raw_local │
│ generic_metric_gauges_aggregated_dist │
│ generic_metric_gauges_aggregated_local │
│ generic_metric_gauges_aggregation_mv │
│ generic_metric_gauges_raw_dist │
│ generic_metric_gauges_raw_local │
│ generic_metric_sets_aggregated_dist │
│ generic_metric_sets_aggregation_mv │
│ generic_metric_sets_aggregation_mv_v2 │
│ generic_metric_sets_local │
│ generic_metric_sets_raw_dist │
│ generic_metric_sets_raw_local │
│ group_attributes_dist │
│ group_attributes_local │
│ groupassignee_dist │
│ groupassignee_local │
│ groupedmessage_dist │
│ groupedmessage_local │
│ metrics_counters_polymorphic_mv_v4_local │
│ metrics_counters_v2_dist │
│ metrics_counters_v2_local │
│ metrics_distributions_polymorphic_mv_v4_local │
│ metrics_distributions_v2_dist │
│ metrics_distributions_v2_local │
│ metrics_raw_v2_dist │
│ metrics_raw_v2_local │
│ metrics_sets_polymorphic_mv_v4_local │
│ metrics_sets_v2_dist │
│ metrics_sets_v2_local │
│ migrations_dist │
│ migrations_local │
│ outcomes_hourly_dist │
│ outcomes_hourly_local │
│ outcomes_mv_hourly_local │
│ outcomes_raw_dist │
│ outcomes_raw_local │
│ profiles_dist │
│ profiles_local │
│ querylog_dist │
│ querylog_local │
│ replays_dist │
│ replays_local │
│ search_issues_dist │
│ search_issues_dist_v2 │
│ search_issues_local │
│ search_issues_local_v2 │
│ sessions_hourly_dist │
│ sessions_hourly_local │
│ sessions_hourly_mv_local │
│ sessions_raw_dist │
│ sessions_raw_local │
│ spans_dist │
│ spans_local │
│ test_migration_dist │
│ test_migration_local │
│ transactions_dist │
│ transactions_local │
└────────────────────────────────────────────────┘
78 rows in set. Elapsed: 0.012 sec.
sentry-clickhouse :) DESCRIBE TABLE default.metrics_raw_v2_dist;
DESCRIBE TABLE default.metrics_raw_v2_dist
Query id: 28daccfc-d3dc-4caa-8adc-3d2a08abf20d
┌─name────────────────────┬─type───────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ use_case_id │ LowCardinality(String) │ │ │ │ │ │
│ org_id │ UInt64 │ │ │ │ │ │
│ project_id │ UInt64 │ │ │ │ │ │
│ metric_id │ UInt64 │ │ │ │ │ │
│ timestamp │ DateTime │ │ │ │ │ │
│ tags.key │ Array(UInt64) │ │ │ │ │ │
│ tags.value │ Array(UInt64) │ │ │ │ │ │
│ metric_type │ LowCardinality(String) │ │ │ │ │ │
│ set_values │ Array(UInt64) │ │ │ │ │ │
│ count_value │ Float64 │ │ │ │ │ │
│ distribution_values │ Array(Float64) │ │ │ │ │ │
│ materialization_version │ UInt8 │ │ │ │ │ │
│ retention_days │ UInt16 │ │ │ │ │ │
│ partition │ UInt16 │ │ │ │ │ │
│ offset │ UInt64 │ │ │ │ │ │
│ timeseries_id │ UInt32 │ │ │ │ │ │
└─────────────────────────┴────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
16 rows in set. Elapsed: 0.004 sec.
sentry-clickhouse :) SELECT COUNT(*) FROM default.metrics_raw_v2_dist;
SELECT COUNT(*) FROM default.metrics_raw_v2_dist
Query id: 1d5e2a59-4f88-4274-80a1-828d4d0acbc1
┌─count()─┐
│       0 │
└─────────┘
1 rows in set. Elapsed: 0.025 sec.
sentry-clickhouse :) DROP table default.metrics_raw_v2_dist ON CLUSTER 'sentry-clickhouse' SYNC;
DROP TABLE default.metrics_raw_v2_dist ON CLUSTER sentry-clickhouse
NO DELAY
Query id: 83f1a44a-15dd-4491-97e5-b2dd16e31676
┌─host────────────────────────────────────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ sentry-clickhouse-2.sentry-clickhouse-headless.sentry.svc.cluster.local │ 9000 │      0 │       │                   2 │                0 │
│ sentry-clickhouse-1.sentry-clickhouse-headless.sentry.svc.cluster.local │ 9000 │      0 │       │                   1 │                0 │
└─────────────────────────────────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host────────────────────────────────────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ sentry-clickhouse-0.sentry-clickhouse-headless.sentry.svc.cluster.local │ 9000 │      0 │       │                   0 │                0 │
└─────────────────────────────────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
3 rows in set. Elapsed: 0.203 sec.
sentry-clickhouse :) CREATE TABLE default.metrics_raw_v2_dist ON CLUSTER 'sentry-clickhouse'
:-] (
:-]     use_case_id LowCardinality(String),
:-]     org_id UInt64,
:-]     project_id UInt64,
:-]     metric_id UInt64,
:-]     timestamp DateTime,
:-]     tags.key Array(UInt64),
:-]     tags.value Array(UInt64),
:-]     metric_type LowCardinality(String),
:-]     set_values Array(UInt64),
:-]     count_value Float64,
:-]     distribution_values Array(Float64),
:-]     materialization_version UInt8,
:-]     retention_days UInt16,
:-]     partition UInt16,
:-]     offset UInt64,
:-]     timeseries_id UInt32
:-] )
:-] ENGINE = Distributed('sentry-clickhouse', 'default', 'metrics_raw_v2_local', sipHash64('timeseries_id'));
CREATE TABLE default.metrics_raw_v2_dist ON CLUSTER sentry-clickhouse
(
    use_case_id LowCardinality(String),
    org_id UInt64,
    project_id UInt64,
    metric_id UInt64,
    timestamp DateTime,
    tags.key Array(UInt64),
    tags.value Array(UInt64),
    metric_type LowCardinality(String),
    set_values Array(UInt64),
    count_value Float64,
    distribution_values Array(Float64),
    materialization_version UInt8,
    retention_days UInt16,
    partition UInt16,
    offset UInt64,
    timeseries_id UInt32
)
ENGINE = Distributed('sentry-clickhouse', 'default', 'metrics_raw_v2_local', sipHash64('timeseries_id'))
Query id: 011e2e89-faea-4e14-88d7-68c7679c1293
┌─host────────────────────────────────────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ sentry-clickhouse-2.sentry-clickhouse-headless.sentry.svc.cluster.local │ 9000 │      0 │       │                   2 │                0 │
│ sentry-clickhouse-1.sentry-clickhouse-headless.sentry.svc.cluster.local │ 9000 │      0 │       │                   1 │                0 │
│ sentry-clickhouse-0.sentry-clickhouse-headless.sentry.svc.cluster.local │ 9000 │      0 │       │                   0 │                0 │
└─────────────────────────────────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
3 rows in set. Elapsed: 0.253 sec.
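After the table has been recreated on all shards, it can help to confirm the schema once more and restart the failing consumer so it resumes from a clean state. A sketch, assuming the Deployment is named `sentry-snuba-metrics-consumer` in the `sentry` namespace (the chart's default naming):

```bash
# Verify the recreated distributed table, then restart the consumer
kubectl -n sentry exec -it sentry-clickhouse-0 -- \
  clickhouse-client -h sentry-clickhouse \
  --query "SHOW CREATE TABLE default.metrics_raw_v2_dist"

kubectl -n sentry rollout restart deployment sentry-snuba-metrics-consumer
kubectl -n sentry rollout status deployment sentry-snuba-metrics-consumer
```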
### BadGatewayError: GET /api/0/issues/{issueId}/attachments/ 502
> This error is reported when loading all events for an issue (the All Events tab).
>
> Solution: [All Events tab not loading #2340](https://github.com/getsentry/self-hosted/issues/2329#issuecomment-1672747556)
>
>> Cause: there is another nginx in front of the Sentry nginx, and with too many events the proxied responses fail. Enable buffering there as follows:
vi /etc/nginx/nginx.conf
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
# service nginx restart
```

>> Adjusting the ingress-nginx configuration
>>
>>> In the ConfigMap ingress-nginx-controller (or the equivalent config for your controller), add the following keys under data:
>>>
>>>```
>>>data:
>>>  proxy-buffering: "on"          # enable buffering
>>>  proxy-buffer-size: "128k"      # size of a single buffer
>>>  proxy-buffers-number: "4"      # number of buffers
>>>  proxy-buffers-size: "1024k"    # total buffer size (4*256k=1024k)
>>>```
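The same ConfigMap change can also be applied from the command line. A sketch mirroring the keys above, assuming the controller's ConfigMap is called ingress-nginx-controller in the ingress-nginx namespace:

```bash
# Merge the buffering keys into the ingress-nginx ConfigMap; the controller reloads its config automatically
kubectl -n ingress-nginx patch configmap ingress-nginx-controller --type merge \
  -p '{"data":{"proxy-buffering":"on","proxy-buffer-size":"128k","proxy-buffers-number":"4","proxy-buffers-size":"1024k"}}'
```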
### SMTPRecipientsRefused: {'Admin': (501, b'5.1.3 Invalid address')}
Cause: "Admin" is not a valid email address (it is missing the @ sign and a domain). The default Admin account is invalid and can simply be removed.
There are two ways to fix this:
1. When first deploying Sentry, configure a valid admin account in the values file:

   user:
     create: true
     email: sentry.notify@hannto.com
     password: ******

2. Delete the Admin account directly in the web UI.
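If the deployment is already running, a valid superuser can also be created without redeploying, and the invalid Admin account then removed in the UI. A sketch, assuming the web Deployment is named `sentry-web` in the `sentry` namespace and reusing the e-mail from option 1 (`<new-password>` is a placeholder):

```bash
# Create a valid superuser from an existing Sentry web pod
kubectl -n sentry exec -it deploy/sentry-web -- \
  sentry createuser --email sentry.notify@hannto.com --password '<new-password>' --superuser
```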