# Self-Hosting MinIO: S3-Compatible Object Storage on Your Own Hardware
Cloud storage bills add up. If you're paying AWS or Backblaze for S3 storage and your data lives on predictable infrastructure, running your own S3-compatible storage can save real money.
MinIO is a high-performance, S3-compatible object storage server. It speaks the same API as Amazon S3, which means any tool, library, or application that works with S3 works with MinIO — no code changes needed.
## Why Self-Host Object Storage?
Object storage (like S3) is fundamentally different from traditional file storage:
- Flat namespace: No directories, just buckets and keys
- HTTP API: Access files over HTTP/HTTPS with standard S3 API calls
- Metadata-rich: Each object carries custom metadata
- Scalable: Designed to handle millions of objects
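The "flat namespace" point is worth internalizing: a key like `photos/2024/cat.jpg` is just a string, and "folders" are an illusion the S3 API creates by listing with a delimiter. A minimal sketch of that grouping behavior (pure Python, no server involved; the function name is ours, not part of any SDK):

```python
def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Mimic S3 list-objects semantics: split matching keys into direct
    'objects' and collapsed 'common prefixes' (pseudo-folders)."""
    objects, common_prefixes = [], []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter looks like a folder
            folder = prefix + rest.split(delimiter, 1)[0] + delimiter
            if folder not in common_prefixes:
                common_prefixes.append(folder)
        else:
            objects.append(key)
    return objects, common_prefixes

keys = ["readme.txt", "photos/2024/cat.jpg", "photos/2023/dog.jpg", "photos/list.txt"]
print(list_with_delimiter(keys))
# (['readme.txt'], ['photos/'])
print(list_with_delimiter(keys, prefix="photos/"))
# (['photos/list.txt'], ['photos/2024/', 'photos/2023/'])
```

This is exactly what `list_objects_v2` does with the `Delimiter` parameter: the server never stores a directory tree, it just collapses key prefixes at listing time.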
Self-hosting object storage makes sense when:
- You're already paying for server storage and want to use it as S3
- You need S3-compatible storage for applications that expect it (backup tools, media servers, databases)
- You want to keep data on-premises for compliance or privacy
- You're tired of cloud egress fees
## MinIO vs. Cloud S3 vs. Garage
| Feature | MinIO | AWS S3 | Garage |
|---|---|---|---|
| S3 compatible | Yes (full) | Native | Yes (partial) |
| Self-hosted | Yes | No | Yes |
| Performance | Very fast | Fast | Moderate |
| Clustering | Yes | N/A (managed) | Yes |
| Erasure coding | Yes | Yes | Yes |
| Web console | Yes | Yes (AWS Console) | No (CLI only) |
| Resource usage | Moderate | N/A | Low |
| Maturity | Very mature | N/A | Newer |
| Versioning | Yes | Yes | No |
| Lifecycle rules | Yes | Yes | No |
| Cost | Free + hardware | $0.023/GB/mo + egress | Free + hardware |
### Why MinIO over alternatives
MinIO is the default choice for self-hosted S3 because:
- Full S3 API compatibility — most alternatives only support a subset
- Battle-tested — used in production by thousands of organizations
- Excellent performance — can saturate network links on modern hardware
- Good documentation and a large community
- Web console for visual bucket management
Garage is worth considering if you need a lightweight, distributed option for a homelab cluster. It uses far less RAM but doesn't support all S3 features.
## Self-Hosting MinIO: Setup
### Server requirements
MinIO's requirements depend on your workload:
- Minimum (single node): 1 GB RAM, 2 CPU cores, any storage
- Recommended: 4 GB RAM, 4 CPU cores, fast disks (SSD or NVMe for metadata-heavy workloads)
- Storage: As much as you need — MinIO is limited by your disks, not software
- Network: Gigabit minimum, 10GbE recommended for large transfers
### Single-node Docker Compose setup
For most homelab and small business use cases, a single-node setup is plenty:
```yaml
version: "3.8"

services:
  minio:
    container_name: minio
    image: quay.io/minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: your-secure-password-here
    ports:
      - "9000:9000" # S3 API
      - "9001:9001" # Web console
    volumes:
      - /path/to/storage:/data
    restart: always
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 30s
      timeout: 20s
      retries: 3
```
Change `MINIO_ROOT_PASSWORD` to something secure. The `/path/to/storage` mount is where all your objects will be stored.
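A quick way to generate a strong value for `MINIO_ROOT_PASSWORD` (any random-string generator works; this one uses Python's stdlib `secrets` module):

```python
import secrets

# 32 URL-safe random characters; paste the output into MINIO_ROOT_PASSWORD
print(secrets.token_urlsafe(24))
```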
### Starting MinIO

```bash
docker compose up -d
```

Once the container is up, two endpoints are available:

- S3 API: `http://your-server:9000`
- Web console: `http://your-server:9001`
Log into the web console with your root credentials to create buckets and manage access.
## Using MinIO with S3 Tools
### MinIO Client (mc)
MinIO ships its own CLI client:
```bash
# Install mc
curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/

# Configure your MinIO server
mc alias set myminio http://your-server:9000 minioadmin your-secure-password-here

# Create a bucket
mc mb myminio/my-bucket

# Upload a file
mc cp myfile.txt myminio/my-bucket/

# List bucket contents
mc ls myminio/my-bucket/

# Sync a directory
mc mirror /local/path/ myminio/my-bucket/
```
### AWS CLI
Since MinIO speaks S3, the standard AWS CLI works:
```bash
aws --endpoint-url http://your-server:9000 s3 ls
aws --endpoint-url http://your-server:9000 s3 cp myfile.txt s3://my-bucket/
aws --endpoint-url http://your-server:9000 s3 sync /local/path/ s3://my-bucket/
```
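Typing `--endpoint-url` on every command gets tedious. Recent AWS CLI versions (2.13 and later) can read the endpoint from a profile; a sketch assuming a profile named `minio`:

```ini
# ~/.aws/config
[profile minio]
endpoint_url = http://your-server:9000
region = us-east-1
```

```ini
# ~/.aws/credentials
[minio]
aws_access_key_id = minioadmin
aws_secret_access_key = your-secure-password-here
```

After that, `aws --profile minio s3 ls` works without the flag.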
### Any S3 SDK
Python (boto3), Go, Node.js, Java — any S3 SDK works with MinIO. Just change the endpoint:
```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://your-server:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="your-secure-password-here",
)

# Upload a file
s3.upload_file("local-file.txt", "my-bucket", "remote-file.txt")

# List objects
response = s3.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])
```
## Common Use Cases
### Backup target
MinIO is an excellent target for backup tools that support S3:
- Restic: `restic -r s3:http://your-server:9000/backup-bucket init`
- BorgBackup (via rclone): configure rclone with an S3 backend, then use `borg` with `rclone mount`
- Duplicati: add S3-compatible storage in the UI and point it at MinIO
- Proxmox Backup Server: supports S3 as a storage backend
### Docker registry
Run a private container registry backed by MinIO:
```yaml
registry:
  image: registry:2
  environment:
    REGISTRY_STORAGE: s3
    REGISTRY_STORAGE_S3_ACCESSKEY: minioadmin
    REGISTRY_STORAGE_S3_SECRETKEY: your-secure-password-here
    REGISTRY_STORAGE_S3_REGION: us-east-1
    REGISTRY_STORAGE_S3_REGIONENDPOINT: http://minio:9000
    REGISTRY_STORAGE_S3_BUCKET: docker-registry
    REGISTRY_STORAGE_S3_SECURE: "false"
```
### Database backups
Most databases can dump to S3-compatible storage:
```bash
# PostgreSQL backup to MinIO
pg_dump mydb | mc pipe myminio/db-backups/mydb-$(date +%Y%m%d).sql

# MySQL backup to MinIO
mysqldump mydb | mc pipe myminio/db-backups/mydb-$(date +%Y%m%d).sql
```
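Dated dumps pile up, so you'll eventually want retention. A sketch of the pruning logic, assuming the `mydb-YYYYMMDD.sql` naming from the commands above (the function name is ours; in practice you'd pass the returned keys to `mc rm`):

```python
from datetime import date, timedelta

def keys_to_prune(keys, keep_days, today=None):
    """Return backup keys older than keep_days, given names like mydb-20240131.sql."""
    today = today or date.today()
    cutoff = today - timedelta(days=keep_days)
    stale = []
    for key in keys:
        stamp = key.rsplit("-", 1)[-1].removesuffix(".sql")  # e.g. "20240131"
        backup_date = date(int(stamp[:4]), int(stamp[4:6]), int(stamp[6:8]))
        if backup_date < cutoff:
            stale.append(key)
    return stale

keys = ["mydb-20240101.sql", "mydb-20240125.sql", "mydb-20240130.sql"]
print(keys_to_prune(keys, keep_days=7, today=date(2024, 1, 31)))
# ['mydb-20240101.sql']
```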
### Application storage
Many self-hosted apps support S3 for file storage:
- Nextcloud: External storage plugin with S3 backend
- Gitea/Forgejo: LFS objects in S3
- Mastodon: Media attachments in S3
- Immich: Can store photos in S3 (experimental)
## Access Control
Don't use root credentials for applications. Create dedicated access keys:
### Via web console

- Go to Identity → Users in the web console
- Create a new user with an access key and secret key
- Attach a policy (e.g., the built-in `readwrite` for full access, or a custom policy scoped to specific buckets)
### Via mc
```bash
# Create a user
mc admin user add myminio app-user app-secret-key

# Create a policy that grants access to one bucket
cat > /tmp/app-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::app-bucket",
        "arn:aws:s3:::app-bucket/*"
      ]
    }
  ]
}
EOF

mc admin policy create myminio app-policy /tmp/app-policy.json
mc admin policy attach myminio app-policy --user app-user
```
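If you run several apps, generating the per-bucket policy beats hand-editing JSON each time. A small stdlib-only sketch (the helper name is ours) that produces the same document as above for any bucket:

```python
import json

def bucket_policy(bucket):
    """Build an S3-style policy document granting full access to one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:*"],
                # Both ARNs are needed: the bucket itself (for listing)
                # and its objects (for get/put/delete)
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

print(json.dumps(bucket_policy("app-bucket"), indent=2))
```

Write the output to a file and feed it to `mc admin policy create` as shown above.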
## Performance Tuning
### Disk layout
For best performance:
- Use dedicated disks for MinIO, not shared with the OS
- XFS is the recommended filesystem (ext4 works fine too)
- NVMe or SSD for metadata-heavy workloads (many small files)
- HDD is fine for large files (backups, media)
### Network
MinIO can easily saturate a gigabit connection. If you're moving large amounts of data:
- Use 10GbE or faster networking
- Enable jumbo frames if your network supports them
- Place MinIO and its clients on the same network segment to avoid router bottlenecks
### Erasure coding (multi-drive)
With 4+ drives, MinIO can use erasure coding for data protection:
```yaml
command: server /data{1...4} --console-address ":9001"
volumes:
  - /mnt/disk1:/data1
  - /mnt/disk2:/data2
  - /mnt/disk3:/data3
  - /mnt/disk4:/data4
```
This gives you redundancy — you can lose 2 of 4 drives without data loss (with default settings). It's like RAID but at the application level.
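The capacity trade-off is easy to work out: with N drives and M parity shards, usable capacity is (N − M)/N of raw, and M drive failures are tolerated for reads. A quick calculator sketch (the function is ours; MinIO picks the parity default per set size, e.g. 2 for a 4-drive set, so treat parity as an input):

```python
def erasure_capacity(drives, parity, drive_tb):
    """Usable capacity and fault tolerance for an erasure-coded drive set."""
    data_shards = drives - parity
    return {
        "usable_tb": data_shards * drive_tb,   # parity shards cost capacity
        "tolerated_failures": parity,          # drives you can lose and still read
    }

print(erasure_capacity(drives=4, parity=2, drive_tb=4))
# {'usable_tb': 8, 'tolerated_failures': 2}
```

So four 4 TB drives with 2 parity shards behave like an 8 TB volume that survives two dead drives, the same overhead as a mirror but spread across the set.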
## HTTPS Setup
For production use, always run MinIO behind HTTPS:
### Option 1: Reverse proxy (recommended)

Use Caddy, Nginx, or Traefik in front of MinIO. This is the simplest approach and lets you use the same reverse proxy for all your services. A Caddyfile example:

```
s3.yourdomain.com {
    reverse_proxy localhost:9000
}

console.yourdomain.com {
    reverse_proxy localhost:9001
}
```
### Option 2: MinIO native TLS
Place your certificates in MinIO's config directory:
```bash
mkdir -p ~/.minio/certs
cp public.crt ~/.minio/certs/
cp private.key ~/.minio/certs/
```
MinIO will automatically detect and use the certificates.
## Monitoring

MinIO exposes Prometheus metrics at `/minio/v2/metrics/cluster`:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: minio
    metrics_path: /minio/v2/metrics/cluster
    static_configs:
      - targets: ["minio:9000"]
```
Key metrics to watch:
- `minio_bucket_usage_total_bytes`: storage used per bucket
- `minio_s3_requests_total`: request counts by type (GET, PUT, DELETE)
- `minio_node_disk_free_bytes`: available disk space
- `minio_s3_requests_errors_total`: error rates
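For a quick check without standing up Prometheus, you can fetch the metrics endpoint yourself (note: MinIO requires a bearer token for it unless you set `MINIO_PROMETHEUS_AUTH_TYPE=public`). A sketch of parsing the text exposition format, run here against a hypothetical sample rather than a live server:

```python
def parse_metrics(text):
    """Parse Prometheus text exposition format into {metric_with_labels: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank, HELP, and TYPE lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Hypothetical sample of what /minio/v2/metrics/cluster might return
sample = """\
# HELP minio_node_disk_free_bytes Free disk space
minio_node_disk_free_bytes{disk="/data"} 1.2e+12
minio_bucket_usage_total_bytes{bucket="my-bucket"} 5.4e+09
"""
m = parse_metrics(sample)
print(m['minio_node_disk_free_bytes{disk="/data"}'])
# 1200000000000.0
```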
## Honest Trade-offs
MinIO is great if you:
- Need S3-compatible storage for applications and backup tools
- Want to avoid cloud storage costs and egress fees
- Have spare disk capacity on an existing server
- Need fast, reliable object storage
Consider cloud S3 if you:
- Need geographic redundancy (multiple regions)
- Don't want to manage storage hardware
- Only store small amounts of data (cloud S3 is cheap at low volumes)
- Need 99.999999999% durability guarantees
Consider Garage if you:
- Want distributed storage across multiple low-power nodes
- Need to minimize RAM usage (Garage uses much less than MinIO)
- Don't need full S3 API compatibility
The bottom line: MinIO is the gold standard for self-hosted object storage. If you need S3 compatibility (and many self-hosted applications do), it's the obvious choice: fast, mature, well documented, and complete enough in its S3 API coverage that you'll rarely hit surprising incompatibilities.