A practical migration story from Promtail to Grafana Alloy for Kubernetes logging, without downtime and with better log pipelines.
A practical guide to migrating your Kubernetes logging stack from the soon-to-be-deprecated Promtail to Grafana Alloy, without disrupting production.
For years, Promtail was our go-to choice for shipping Kubernetes logs to Grafana Loki. Its configuration was simple, it was lightweight, and it "just worked".
Then, in early 2025, Grafana Labs made an announcement: Promtail will be deprecated in February 2026.
Their official recommendation? Migrate to Grafana Alloy.
This isn't about Promtail being "bad". It's about strategic direction: Grafana Labs is consolidating its collectors, including Promtail and Grafana Agent, into a single agent, Alloy.
We had two choices: keep running Promtail until the end-of-life date forced a rushed migration, or migrate early on our own schedule.
We chose option 2.
What you need to know: Promtail still works today, but it is in maintenance mode; new development happens in Alloy, and support ends in February 2026.
Alloy isn't just a "Promtail replacement". It's a rethink of what an observability agent should be.
Alloy is Grafana's distribution of the OpenTelemetry Collector: a single agent that can collect logs, metrics, traces, and profiles, wired together with a declarative pipeline configuration.
The main advantages for us: one agent instead of several, composable processing pipelines, and native Kubernetes service discovery.
We didn't rip Promtail out overnight. Here's how we did it: we deployed Alloy side by side with Promtail, verified in Loki that labels and log volume matched, and only then retired Promtail.
The result: zero production downtime.
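During the side-by-side phase, it helps to tell the two agents' streams apart in Loki. One way to do that (a sketch; the agent label name is arbitrary, not something we ran verbatim) is to add an external label to everything Alloy ships:
loki.write "default" {
  endpoint {
    url       = "http://loki:3100/loki/api/v1/push"
    tenant_id = "tenant1"
  }

  // Tag every stream shipped by Alloy so it can be compared against
  // Promtail's output during the parallel run; drop this after cutover.
  external_labels = {
    agent = "alloy",
  }
}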
Here's what our production Alloy setup looks like:
loki.write "default" {
endpoint {
url = "http://loki:3100/loki/api/v1/push"
tenant_id = "tenant1"
}
}
We built the pipeline to drop health-check noise, parse CRI output and nested JSON, extract tracing fields, and mask sensitive data:
loki.process "pipeline" {
// Drop kube-probe
stage.drop {
expression = ".*kube-probe.*"
}
// CRI log parsing
stage.cri {}
// Promote the extracted "app" field to the service_name label
stage.labels {
values = {
service_name = "app",
}
}
// Extract trace fields
stage.json {
expressions = {
trace_id = "trace_id",
span_id = "span_id",
}
}
// Parse nested JSON
stage.json {
expressions = {
message = "message",
}
}
stage.json {
source = "message"
expressions = {
request_body = "request_body",
}
}
// Mask passwords
stage.regex {
expression = "\\\"password\\\"\\s*:\\s*\\\"(?P<password>[^\\\"]+)\\\""
}
stage.replace {
expression = "\\\"password\\\"\\s*:\\s*\\\"(?P<password>[^\\\"]+)\\\""
replace = "\"***\""
}
// Similar masking for token, otp, recaptcha_token...
stage.output {
source = "."
}
forward_to = [loki.write.default.receiver]
}
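To make the masking concrete, here is a hypothetical before/after on an invented log line:
// Before: {"trace_id":"abc123","password":"hunter2"}
// After:  {"trace_id":"abc123","password":"***"}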
Alloy discovers pods and extracts their metadata:
discovery.kubernetes "pods" {
role = "pod"
}
discovery.relabel "pods" {
targets = discovery.kubernetes.pods.targets
// Extract namespace
rule {
source_labels = ["__meta_kubernetes_namespace"]
target_label = "namespace"
}
// Extract pod name
rule {
source_labels = ["__meta_kubernetes_pod_name"]
target_label = "pod"
}
// Extract container name
rule {
source_labels = ["__meta_kubernetes_pod_container_name"]
target_label = "container"
}
// Create job label
rule {
source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_name"]
separator = "/"
target_label = "job"
}
// Extract app label
rule {
source_labels = ["__meta_kubernetes_pod_label_app"]
target_label = "app"
}
// Extract node name as instance
rule {
source_labels = ["__meta_kubernetes_pod_node_name"]
target_label = "instance"
}
// Build log file path
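// For example, namespace "prod", pod "api-123", uid "uid-1", and container "app"
// (hypothetical values) produce:
//   __path__ = "/var/log/pods/*prod_api-123_uid-1*/app/*.log"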
rule {
source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_name", "__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
separator = "/"
regex = "(.+)/(.+)/(.+)/(.+)"
replacement = "/var/log/pods/*${1}_${2}_${3}*/${4}/*.log"
target_label = "__path__"
}
}
local.file_match "pods" {
path_targets = discovery.relabel.pods.output
}
loki.source.file "pods" {
targets = local.file_match.pods.targets
forward_to = [loki.process.pipeline.receiver]
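// Start at the end of each file on the first run so Alloy does not
// re-ingest logs that Promtail already shipped.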
tail_from_end = true
}
We deploy Alloy as a DaemonSet using the grafana/alloy Helm chart, with these values:
alloy:
configMap:
create: true
content: |
logging {
level = "info"
format = "logfmt"
}
loki.write "default" {
endpoint {
url = "http://YOUR_LOKI_HOST/loki/api/v1/push"
tenant_id = "tenant1"
}
}
loki.process "pipeline" {
// drop kube-probe
stage.drop {
expression = ".*kube-probe.*"
}
// CRI log parsing
stage.cri {}
// promote the extracted "app" field to the service_name label
stage.labels {
values = {
service_name = "app",
}
}
// extract tracing fields
stage.json {
expressions = {
trace_id = "trace_id",
span_id = "span_id",
}
}
// extract message
stage.json {
expressions = {
message = "message",
}
}
// parse nested json from message
stage.json {
source = "message"
expressions = {
request_body = "request_body",
}
}
// mask password
stage.regex {
expression = "\\\"password\\\"\\s*:\\s*\\\"(?P<password>[^\\\"]+)\\\""
}
stage.replace {
expression = "\\\"password\\\"\\s*:\\s*\\\"(?P<password>[^\\\"]+)\\\""
replace = "\"***\""
}
// mask token
stage.regex {
expression = "\\\"token\\\"\\s*:\\s*\\\"(?P<token>[^\\\"]+)\\\""
}
stage.replace {
expression = "\\\"token\\\"\\s*:\\s*\\\"(?P<token>[^\\\"]+)\\\""
replace = "\"****\""
}
// mask otp
stage.regex {
expression = "\\\"otp\\\"\\s*:\\s*\\\"(?P<otp>[^\\\"]+)\\\""
}
stage.replace {
expression = "\\\"otp\\\"\\s*:\\s*\\\"(?P<otp>[^\\\"]+)\\\""
replace = "\"***\""
}
// mask recaptcha token
stage.regex {
expression = "\\\"recaptcha_token\\\"\\s*:\\s*\\\"(?P<recaptcha_token>[^\\\"]+)\\\""
}
stage.replace {
expression = "\\\"recaptcha_token\\\"\\s*:\\s*\\\"(?P<recaptcha_token>[^\\\"]+)\\\""
replace = "\"***\""
}
// output final log line
stage.output {
source = "."
}
forward_to = [loki.write.default.receiver]
}
discovery.kubernetes "pods" {
role = "pod"
}
discovery.relabel "pods" {
targets = discovery.kubernetes.pods.targets
rule {
source_labels = ["__meta_kubernetes_namespace"]
target_label = "namespace"
}
rule {
source_labels = ["__meta_kubernetes_pod_name"]
target_label = "pod"
}
rule {
source_labels = ["__meta_kubernetes_pod_container_name"]
target_label = "container"
}
rule {
source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_name"]
separator = "/"
target_label = "job"
}
rule {
source_labels = ["__meta_kubernetes_pod_label_app"]
target_label = "app"
}
rule {
source_labels = ["__meta_kubernetes_pod_node_name"]
target_label = "instance"
}
rule {
source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_name", "__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
separator = "/"
regex = "(.+)/(.+)/(.+)/(.+)"
replacement = "/var/log/pods/*${1}_${2}_${3}*/${4}/*.log"
target_label = "__path__"
}
}
local.file_match "pods" {
path_targets = discovery.relabel.pods.output
}
loki.source.file "pods" {
targets = local.file_match.pods.targets
forward_to = [loki.process.pipeline.receiver]
tail_from_end = true
}
mounts:
# -- Mount /var/log from the host into the container for log collection.
varlog: true
dockercontainers: false
controller:
type: daemonset
nodeSelector: {}
tolerations:
- operator: Exists
effect: NoSchedule
initContainers:
- name: sysctl
image: busybox
securityContext:
privileged: true
runAsUser: 0
command:
- sh
- -c
- |
sysctl -w fs.inotify.max_user_instances=1024
sysctl -w fs.inotify.max_user_watches=1048576
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
serviceAccount:
create: true
name: alloy
rbac:
create: true
ingress:
enabled: false
podLabels:
app: alloy
After the full cutover, here's what we observed:
✅ Logs flowed normally, with no data loss
✅ Labels stayed consistent, so dashboards kept working
✅ A cleaner pipeline, with better data quality
✅ Sensitive data is masked, improving security
✅ Stable resource usage, with no performance regression
Bonus: we're now ready to ingest traces and metrics without adding another agent.
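For example, the same Alloy DaemonSet could start scraping Prometheus metrics with just two more components. A minimal sketch, assuming a Prometheus-compatible backend (the remote-write URL is a placeholder):
// Reuse the pod discovery we already have for logs.
prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    // Placeholder: point this at your Prometheus-compatible backend.
    url = "http://YOUR_METRICS_HOST/api/v1/push"
  }
}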
Don't wait for the deadline. Start now, while you still have time to test properly.
Running both agents side by side gave us confidence and a fallback plan.
Make sure Alloy's labels match Promtail's. This is what keeps dashboards working; see the sketch below for a quick fix when they drift.
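If one of Alloy's labels comes out differently from Promtail's (say your dashboards expect pod_name instead of pod; both names here are hypothetical), one extra rule at the end of the discovery.relabel block restores the old name:
// Copy the "pod" label to the "pod_name" label the dashboards expect.
rule {
  source_labels = ["pod"]
  target_label  = "pod_name"
}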
We were surprised by how much better Alloy handles complex JSON parsing and data masking.
Despite being "new", Alloy is stable. It is built on mature OpenTelemetry Collector technology.
If you're still running Promtail, ask yourself: Are you counting on it past February 2026? Are you running separate agents for logs, metrics, and traces? Would a rushed, deadline-driven migration hurt your team?
If the answer to any of these is "yes", it's time to start planning your Alloy migration.
The short roadmap: run Alloy alongside Promtail, port your pipeline and match your labels, verify the output in Loki, then cut over and retire Promtail.
Migrating from Promtail to Alloy isn't really an "upgrade"; it's about staying current with Grafana's roadmap.
Promtail isn't dead yet, but it will be. February 2026 is closer than you think.
Start the migration now. Your future self (and your team) will thank you.
Questions about migrating to Alloy? Drop them in the comments and I'll help where I can.
Did this article help? Give it a clap 👏 and share it with your team.