
Self-Hosting MinIO: S3-Compatible Object Storage on Your Own Hardware

Infrastructure · 2026-02-08 · 6 min read · Tags: minio, s3, object-storage, backup, infrastructure
By the Selfhosted Guides Editorial Team, self-hosting practitioners covering open source software, home lab infrastructure, and data sovereignty.

Cloud storage bills add up. If you're paying AWS or Backblaze for S3 storage and your data lives on predictable infrastructure, running your own S3-compatible storage can save real money.


MinIO is a high-performance, S3-compatible object storage server. It speaks the same API as Amazon S3, which means any tool, library, or application that works with S3 works with MinIO — no code changes needed.


Why Self-Host Object Storage?

Object storage (like S3) is fundamentally different from traditional file storage: instead of a directory tree accessed through POSIX file operations, you get a flat namespace of buckets and keys accessed over an HTTP API, with metadata attached to each object.

Self-hosting it makes sense when:

  - Your storage footprint is large and predictable, so per-GB cloud pricing stops paying off
  - You read your data back often, so egress fees would dominate the bill
  - You want your data on hardware you control

MinIO vs. Cloud S3 vs. Garage

Feature          | MinIO           | AWS S3                | Garage
-----------------|-----------------|-----------------------|----------------
S3 compatible    | Yes (full)      | Native                | Yes (partial)
Self-hosted      | Yes             | No                    | Yes
Performance      | Very fast       | Fast                  | Moderate
Clustering       | Yes             | N/A (managed)         | Yes
Erasure coding   | Yes             | Yes                   | Yes
Web console      | Yes             | Yes (AWS Console)     | No (CLI only)
Resource usage   | Moderate        | N/A                   | Low
Maturity         | Very mature     | N/A                   | Newer
Versioning       | Yes             | Yes                   | No
Lifecycle rules  | Yes             | Yes                   | No
Cost             | Free + hardware | $0.023/GB/mo + egress | Free + hardware
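Using the S3 storage price from the table, a rough break-even sketch. The hardware cost and capacity below are illustrative assumptions, and power, bandwidth, and egress are ignored:

```python
# Rough break-even estimate: self-hosted hardware vs. S3 standard storage.
# $0.023/GB/month is the S3 figure from the comparison table; the hardware
# cost and data size are illustrative assumptions.

S3_PRICE_PER_GB_MONTH = 0.023

def monthly_s3_cost(gb: float) -> float:
    """Monthly S3 storage cost in USD (storage only, egress excluded)."""
    return gb * S3_PRICE_PER_GB_MONTH

def breakeven_months(hardware_cost: float, gb: float) -> float:
    """Months until the hardware cost equals cumulative S3 storage spend."""
    return hardware_cost / monthly_s3_cost(gb)

print(monthly_s3_cost(4000))        # monthly cost of 4 TB in S3
print(breakeven_months(600, 4000))  # months for a $600 server storing 4 TB
```

For a few hundred gigabytes the cloud bill is trivial; the math only starts favoring self-hosting at multi-terabyte scale.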

Why MinIO over alternatives

MinIO is the default choice for self-hosted S3 because:

  - Its S3 API coverage is essentially complete, including versioning, lifecycle rules, and bucket policies
  - It's very fast, even on modest hardware
  - The built-in web console makes buckets, users, and policies easy to manage
  - It's mature, widely deployed, and well documented

Garage is worth considering if you need a lightweight, distributed option for a homelab cluster. It uses far less RAM but doesn't support all S3 features.

Self-Hosting MinIO: Setup

Server requirements

MinIO's requirements depend on your workload:

  - A light homelab workload runs comfortably on roughly 2 CPU cores and 2-4 GB of RAM
  - Busy or multi-user workloads benefit mostly from more RAM and faster disks (SSD or NVMe)
  - Erasure coding needs at least 4 drives, ideally identical in size

Single-node Docker Compose setup

For most homelab and small business use cases, a single-node setup is plenty:

version: "3.8"

services:
  minio:
    container_name: minio
    image: quay.io/minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: your-secure-password-here
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # Web console
    volumes:
      - /path/to/storage:/data
    restart: always
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 30s
      timeout: 20s
      retries: 3

Change MINIO_ROOT_PASSWORD to something secure. The /path/to/storage mount is where all your objects will be stored.
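For the password itself, a quick way to generate something strong is Python's secrets module:

```python
# Generate a strong MINIO_ROOT_PASSWORD instead of a guessable placeholder.
import secrets

password = secrets.token_urlsafe(24)  # 24 random bytes -> 32 URL-safe characters
print(password)
```

Paste the output into the compose file, or better, load it from an env file kept out of version control.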

Starting MinIO

docker compose up -d

Open the web console at http://your-server:9001 and log in with your root credentials to create buckets and manage access.


Using MinIO with S3 Tools

MinIO Client (mc)

MinIO ships its own CLI client:

# Install mc
curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/

# Configure your MinIO server
mc alias set myminio http://your-server:9000 minioadmin your-secure-password-here

# Create a bucket
mc mb myminio/my-bucket

# Upload a file
mc cp myfile.txt myminio/my-bucket/

# List bucket contents
mc ls myminio/my-bucket/

# Sync a directory
mc mirror /local/path/ myminio/my-bucket/

AWS CLI

Since MinIO speaks S3, the standard AWS CLI works:

aws --endpoint-url http://your-server:9000 s3 ls
aws --endpoint-url http://your-server:9000 s3 cp myfile.txt s3://my-bucket/
aws --endpoint-url http://your-server:9000 s3 sync /local/path/ s3://my-bucket/

Any S3 SDK

Python (boto3), Go, Node.js, Java — any S3 SDK works with MinIO. Just change the endpoint:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://your-server:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="your-secure-password-here",
)

# Upload a file
s3.upload_file("local-file.txt", "my-bucket", "remote-file.txt")

# List objects
response = s3.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])

Common Use Cases

Backup target

MinIO is an excellent target for backup tools that support S3:

  - restic, Kopia, and Duplicati back up directly to any S3 endpoint
  - rclone can sync or mirror whole directory trees to it
  - Velero can use it for Kubernetes cluster backups

Docker registry

Run a private container registry backed by MinIO:

registry:
  image: registry:2
  environment:
    REGISTRY_STORAGE: s3
    REGISTRY_STORAGE_S3_ACCESSKEY: minioadmin
    REGISTRY_STORAGE_S3_SECRETKEY: your-secure-password-here
    REGISTRY_STORAGE_S3_REGION: us-east-1
    REGISTRY_STORAGE_S3_REGIONENDPOINT: http://minio:9000
    REGISTRY_STORAGE_S3_BUCKET: docker-registry
    REGISTRY_STORAGE_S3_SECURE: "false"

Database backups

Most databases can dump to S3-compatible storage:

# PostgreSQL backup to MinIO
pg_dump mydb | mc pipe myminio/db-backups/mydb-$(date +%Y%m%d).sql

# MySQL backup to MinIO
mysqldump mydb | mc pipe myminio/db-backups/mydb-$(date +%Y%m%d).sql
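The dated key names above make retention easy to script. A hypothetical pruning helper, assuming keys follow the mydb-YYYYMMDD.sql pattern from the commands above; it only selects the keys to delete, and actually removing them (via mc rm or an SDK) is left to you:

```python
# Hypothetical retention helper: given backup object keys named
# mydb-YYYYMMDD.sql, return everything older than the newest `keep`
# backups, i.e. the candidates to delete.

def prune_candidates(keys: list[str], keep: int = 7) -> list[str]:
    # YYYYMMDD sorts correctly as a plain string, newest last.
    ordered = sorted(keys)
    return ordered[:-keep] if len(ordered) > keep else []

keys = [f"mydb-202601{day:02d}.sql" for day in range(1, 11)]  # 10 daily dumps
print(prune_candidates(keys, keep=7))  # the 3 oldest keys
```

Note that MinIO's built-in lifecycle rules can expire old objects for you too; a script like this is only needed when you want custom logic.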

Application storage

Many self-hosted apps support S3 for file storage:

  - Nextcloud (primary or external storage)
  - Mastodon (media attachments)
  - GitLab and Gitea (artifacts, attachments, LFS objects)
  - PeerTube (video storage)

Access Control

Don't use root credentials for applications. Create dedicated access keys:

Via web console

  1. Go to Identity → Users in the web console
  2. Create a new user with an access key and secret key
  3. Attach a policy (e.g., readwrite for full access to specific buckets)

Via mc

# Create a user
mc admin user add myminio app-user app-secret-key

# Create a policy that grants access to one bucket
cat > /tmp/app-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::app-bucket",
        "arn:aws:s3:::app-bucket/*"
      ]
    }
  ]
}
EOF

mc admin policy create myminio app-policy /tmp/app-policy.json
mc admin policy attach myminio app-policy --user app-user
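If you manage several apps, generating the policy JSON beats hand-editing it. A small sketch that produces the same shape of policy as above for any bucket name:

```python
import json

# Build the single-bucket policy shown above for an arbitrary bucket name,
# rather than hand-editing the JSON for each application.

def bucket_policy(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:*"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # the bucket itself
                    f"arn:aws:s3:::{bucket}/*",     # every object in it
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(bucket_policy("app-bucket"))
```

Write the output to a file and feed it to mc admin policy create as shown above.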

Performance Tuning

Disk layout

For best performance:

  - Format drives with XFS and dedicate them to MinIO, per MinIO's own recommendations
  - Present drives to MinIO individually (JBOD) and let erasure coding handle redundancy, rather than layering MinIO on hardware RAID
  - Prefer locally attached disks; network filesystems like NFS undercut both performance and consistency

Network

MinIO can easily saturate a gigabit connection. If you're moving large amounts of data:

  - Use wired networking; Wi-Fi will be the bottleneck long before the disks are
  - Consider 2.5 GbE or 10 GbE if your drives can outrun gigabit
  - Keep MinIO and its heaviest clients on the same network segment

Erasure coding (multi-drive)

With 4+ drives, MinIO can use erasure coding for data protection:

command: server /data{1...4} --console-address ":9001"
volumes:
  - /mnt/disk1:/data1
  - /mnt/disk2:/data2
  - /mnt/disk3:/data3
  - /mnt/disk4:/data4

This gives you redundancy — you can lose 2 of 4 drives without data loss (with default settings). It's like RAID but at the application level.
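The capacity math behind this is simple. A back-of-envelope sketch, assuming EC:M-style parity (the drive size is an illustrative assumption):

```python
# Back-of-envelope erasure coding math: with N drives and M parity shards,
# M drive failures are tolerated and usable capacity is (N - M) / N of raw.

def ec_summary(drives: int, parity: int, drive_tb: float) -> dict:
    usable = (drives - parity) * drive_tb
    return {
        "tolerated_failures": parity,
        "usable_tb": usable,
        "efficiency": usable / (drives * drive_tb),
    }

# The 4-drive layout above with 2 parity shards: lose any 2 drives,
# keep half the raw capacity as usable space.
print(ec_summary(drives=4, parity=2, drive_tb=4.0))
```

More drives with the same parity count improve efficiency: 8 drives with 2 parity shards yields 75% usable space instead of 50%.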

HTTPS Setup

For production use, always run MinIO behind HTTPS:

Option 1: Reverse proxy (recommended)

Use Caddy, Nginx, or Traefik in front of MinIO. This is the simplest approach and lets you use the same reverse proxy for all your services. For example, with Caddy:

s3.yourdomain.com {
    reverse_proxy localhost:9000
}

console.yourdomain.com {
    reverse_proxy localhost:9001
}

Option 2: MinIO native TLS

Place your certificates in MinIO's config directory:

mkdir -p ~/.minio/certs
cp public.crt ~/.minio/certs/
cp private.key ~/.minio/certs/

MinIO will automatically detect and use the certificates.

Monitoring

MinIO exposes Prometheus metrics at /minio/v2/metrics/cluster. By default the endpoint requires a bearer token (generate one with mc admin prometheus generate); set MINIO_PROMETHEUS_AUTH_TYPE=public if you'd rather scrape it unauthenticated:

# prometheus.yml
scrape_configs:
  - job_name: minio
    metrics_path: /minio/v2/metrics/cluster
    static_configs:
      - targets: ["minio:9000"]

Key metrics to watch:

  - Usable free capacity (e.g. minio_cluster_capacity_usable_free_bytes), so a full cluster never takes you by surprise
  - S3 request and error rates
  - Drive and node health, especially offline drives in erasure-coded setups
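If you'd rather check a metric from a script than run a full Prometheus stack, the text exposition format is easy to parse. A minimal sketch (the sample lines are illustrative, not captured MinIO output):

```python
# Minimal sketch: parse Prometheus text-format metrics such as those
# served by MinIO's metrics endpoint. Handles plain "name value" lines;
# skips comments, HELP, and TYPE lines.

def parse_metrics(text: str) -> dict:
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comment, HELP, or TYPE line
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics

sample = """
# HELP minio_cluster_capacity_usable_free_bytes illustrative sample
minio_cluster_capacity_usable_free_bytes 1.2e+12
minio_node_drive_total 4
"""
print(parse_metrics(sample))
```

Pair this with urllib or curl against the metrics endpoint and you have a one-file disk-space alert.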

Honest Trade-offs

MinIO is great if you:

  - Already run servers and want S3-compatible storage under your own roof
  - Store enough data that per-GB and egress pricing add up
  - Need the full S3 feature set (versioning, lifecycle rules, bucket policies)

Consider cloud S3 if you:

  - Store little enough data that the bill is negligible
  - Need multi-region durability without operating anything yourself
  - Don't want to own hardware failures, upgrades, and backups

Consider Garage if you:

  - Want a lightweight cluster spread across a few low-power machines
  - Can live without versioning, lifecycle rules, and a web console

The bottom line: MinIO is the gold standard for self-hosted object storage. If you need S3 compatibility — and many self-hosted applications do — MinIO is the obvious choice. It's fast, mature, well-documented, and its near-complete S3 API coverage means you'll rarely hit surprising incompatibilities.

Get free weekly tips in your inbox. Subscribe to Self-Hosted Weekly