Centralized Logging with Loki and Grafana: The Lightweight ELK Alternative
You're running a dozen Docker containers. Something breaks at 2 AM. You SSH in, run docker logs on six different containers, scroll through thousands of lines, and try to piece together what happened. By the time you find the relevant log line, you've lost 45 minutes and your patience.
Centralized logging solves this. Every log from every service flows to one place. You search across all of them at once, filter by time range, and correlate events across services.
The traditional answer is the ELK stack (Elasticsearch, Logstash, Kibana). It works, but it's a resource monster — Elasticsearch alone wants 4-8 GB of RAM. Grafana Loki takes a different approach: it indexes only metadata (labels), not the full log content, which makes it dramatically lighter. If you're already running Grafana for metrics, Loki slides in as a natural companion.
How Loki Works
Unlike Elasticsearch, which builds a full-text index of every log line, Loki stores logs as compressed chunks and only indexes the labels (like service name, hostname, log level). This means:
- Much less RAM — Loki runs comfortably on 256-512 MB
- Cheaper storage — No massive index to maintain
- Simpler operations — Fewer moving parts to break
The trade-off: searching log content is slower than Elasticsearch because Loki has to scan chunks rather than hitting an index. For most self-hosters, this is negligible — your log volume isn't big enough for it to matter.
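To make the trade-off concrete, here is the difference in LogQL terms (queries are illustrative; the syntax is covered in detail below): a label matcher is answered from the index, while a line filter has to decompress and scan the matching chunks.

```logql
# Answered from the label index: fast
{container="nginx"}

# Scans chunk contents for the string: slower, but fine at homelab volume
{container="nginx"} |= "timeout"
```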
Docker Deployment
```yaml
# docker-compose.yml
services:
  loki:
    image: grafana/loki:3.4
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki_data:/loki
    command: -config.file=/etc/loki/local-config.yaml
    restart: unless-stopped

  promtail:
    image: grafana/promtail:3.4
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: -config.file=/etc/promtail/config.yml
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      GF_SECURITY_ADMIN_PASSWORD: changeme
    restart: unless-stopped

volumes:
  loki_data:
  grafana_data:
```
Loki configuration
Create `loki-config.yml`:
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  retention_period: 30d

compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  # Loki 3.x requires a delete request store when retention is enabled
  delete_request_store: filesystem
```
Promtail configuration
Create `promtail-config.yml`:
```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # Docker container logs (discovered via the Docker socket)
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      # Strip the leading "/" from container names
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'stream'
      # Static job label so queries like {job="docker"} work
      - target_label: 'job'
        replacement: 'docker'

  # System logs
  - job_name: syslog
    static_configs:
      - targets: [localhost]
        labels:
          job: syslog
          __path__: /var/log/syslog
```
With both config files in place, start the stack:

```sh
docker compose up -d
```
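Once the containers are up, you can confirm Loki is accepting traffic via its readiness endpoint before wiring up Grafana:

```sh
# Prints "ready" once Loki has finished starting
curl http://localhost:3100/ready
```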
Connecting Grafana to Loki
- Open Grafana at http://your-server:3000
- Go to Connections > Data sources > Add data source
- Select Loki
- Set URL to http://loki:3100
- Click Save & test
Now navigate to Explore and select the Loki data source. You should see logs flowing in from your Docker containers.
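If you would rather not click through the UI, Grafana can also provision the data source from a file at startup. A minimal sketch, assuming you mount it at Grafana's provisioning path (the filename is arbitrary):

```yaml
# ./provisioning/datasources/loki.yml
# Mount to /etc/grafana/provisioning/datasources/ in the grafana service
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```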
LogQL: Querying Your Logs
LogQL is Loki's query language. It's inspired by PromQL but designed for logs.
Basic queries
```logql
# All logs from a specific container
{container="nginx"}

# Filter by content
{container="nginx"} |= "error"

# Exclude lines
{container="nginx"} != "healthcheck"

# Regex matching
{container="nginx"} |~ "status=(4|5)\\d{2}"
```
Parsing and filtering
```logql
# Parse JSON logs and filter on an extracted field
{container="myapp"} | json | level="error"

# Parse key-value (logfmt) logs
{container="myapp"} | logfmt | status >= 500

# Per-second rate of error lines, averaged over a 1-minute window
rate({container="myapp"} |= "error" [1m])
```
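One more stage worth knowing: line_format rewrites the displayed line using fields extracted by json or logfmt. The field names here are illustrative, assuming your app logs JSON with method and path keys:

```logql
# Show only the method and path fields from JSON logs
{container="myapp"} | json | line_format "{{.method}} {{.path}}"
```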
Useful dashboard queries
```logql
# Error rate across all containers
sum(rate({job="docker"} |= "error" [5m])) by (container)

# Top 10 containers by log volume
topk(10, sum(rate({job="docker"}[5m])) by (container))

# HTTP 5xx responses from nginx
{container="nginx"} | pattern `<ip> - - <_> "<method> <path> <_>" <status> <_>` | status >= 500
```
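The pattern query above assumes nginx's default combined log format. For reference, an access-log line like this (illustrative) would parse, yielding status=502 and path=/api/users:

```
203.0.113.7 - - [10/Oct/2025:02:14:03 +0000] "GET /api/users HTTP/1.1" 502 1234
```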
Setting Up Alerts
Grafana can alert on log patterns — for example, notify you when error rates spike:
- In Grafana, go to Alerting > Alert rules > New alert rule
- Use a LogQL query like `sum(rate({container="myapp"} |= "error" [5m]))`
- Set a threshold (e.g., 10 errors per minute; since rate() returns per-second values, that's a threshold of about 0.17)
- Configure a notification channel (email, Slack, Discord)
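Grafana's UI is the easiest route, but Loki also ships a ruler component that evaluates Prometheus-style alert rules written in LogQL. A sketch of such a rule file, assuming the ruler is enabled and reads from the rules_directory configured earlier (with auth disabled, rules live under a tenant directory named fake; the names and threshold are illustrative):

```yaml
# /loki/rules/fake/rules.yml
groups:
  - name: myapp-errors
    rules:
      - alert: HighErrorRate
        # ~10 errors/minute expressed as a per-second rate
        expr: sum(rate({container="myapp"} |= "error" [5m])) > 0.17
        for: 5m
        labels:
          severity: warning
```

Firing these requires pointing the ruler at an Alertmanager, so for a small setup the Grafana-side alerting above is usually less work.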
Loki vs ELK Stack
| Feature | Loki + Grafana | ELK Stack |
|---|---|---|
| RAM usage | 256-512 MB | 4-8 GB minimum |
| Full-text indexing | No (label index only) | Yes |
| Search speed (content) | Slower | Fast |
| Search speed (labels) | Fast | Fast |
| Storage efficiency | Very good | Good |
| Setup complexity | Low | High |
| Operations burden | Low | Significant |
| Grafana integration | Native | Requires config |
| Cost at scale | Low | High |
When to choose Loki
- You're already using Grafana for metrics
- You don't need blazing-fast full-text search across terabytes of logs
- You want a lightweight logging solution that won't overload your server
- You have a homelab or small infrastructure (1-20 services)
When to choose ELK
- You need full-text search across massive log volumes
- You're doing security analytics (SIEM) with complex queries
- You need Kibana's specific visualization capabilities
- You have dedicated resources for Elasticsearch operations
Retention and Storage
Loki compresses logs efficiently. Typical storage usage:
- 10 services logging normally — roughly 1-5 GB per month
- 30-day retention is a good default for homelabs
- 90-day retention if you have storage to spare
Retention is configured in `loki-config.yml` under `limits_config.retention_period`; the compactor handles cleanup automatically.
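If some streams deserve different retention than others (say, a chatty debug container), Loki also supports per-stream overrides in the same block. A sketch, with an illustrative selector:

```yaml
limits_config:
  retention_period: 30d          # default for everything else
  retention_stream:
    - selector: '{container="debug-app"}'
      priority: 1
      period: 7d
```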
Multi-Host Collection
If you have multiple servers, run Promtail on each one pointing to the same Loki instance:
```yaml
# promtail-config.yml on remote host
clients:
  - url: http://loki-server:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      # Tag every line with this machine's name
      - target_label: 'host'
        replacement: 'server-02'
```
Adding a host label lets you filter logs per machine in Grafana: `{host="server-02", container="nginx"}`.
The Bottom Line
Loki fills the gap between "running docker logs manually" and "operating a full ELK cluster." It gives you centralized, searchable, alertable logs with a fraction of the resource cost. If you're running Grafana for monitoring (and you should be), adding Loki for logs is a natural next step that takes about 30 minutes to set up and transforms how you debug issues across your self-hosted services.