Running a Private Container Registry: Docker Registry and Harbor Self-Hosted
Every time you run docker pull nginx, you're downloading an image from Docker Hub -- a public container registry. Docker Hub works fine for pulling official images, but it has rate limits (100 pulls per 6 hours for anonymous users), no privacy for your custom images on the free tier, and no control over availability.
If you're building custom Docker images for your homelab, deploying private applications, or running CI/CD pipelines that build containers, you need your own registry. A private registry gives you unlimited pulls, complete privacy, and total control over your container images.

Why Run Your Own Registry?
Docker Hub rate limits -- If you're rebuilding containers frequently (CI/CD pipelines, development workflows), you'll hit the 100 pulls/6 hours limit fast. A local registry has no limits.
Privacy -- Your custom application images contain your code, configuration, and sometimes secrets baked into layers. A private registry keeps them off public infrastructure.
Speed -- Pulling images from a local registry is significantly faster than pulling from Docker Hub, especially for large images. On a gigabit LAN, a 500 MB image pulls in seconds instead of minutes.
Availability -- Your deployments don't fail because Docker Hub is having an outage (which happens more often than you'd think).
Mirror/cache -- A local registry can cache Docker Hub images, so frequently-used base images are always available locally.
Docker Registry vs Harbor
There are two main options for self-hosted container registries. Here's how they compare:
| Feature | Docker Registry | Harbor |
|---|---|---|
| Complexity | Minimal | Moderate |
| Setup time | 15 minutes | 1-2 hours |
| Web UI | None | Full web interface |
| Authentication | Basic auth or token | OIDC, LDAP, local accounts |
| Vulnerability scanning | No | Yes (Trivy integrated) |
| Image signing | No | Yes (Cosign/Notary) |
| Replication | No | Yes (multi-registry sync) |
| Garbage collection | Manual CLI | Scheduled via UI |
| Access control | All-or-nothing | Per-project RBAC |
| Resource usage | ~50 MB RAM | ~2 GB RAM |
| CNCF project | No | Yes (graduated) |
| Best for | Small homelab, single user | Teams, security-conscious setups |
Choose Docker Registry if you need a simple, lightweight solution for storing your own images. It's just a container that serves images -- nothing more.
Choose Harbor if you want vulnerability scanning, a proper web UI, multi-user access control, or you're running anything that resembles a production environment.
Setting Up Docker Registry
The simplest private registry you can run:
Basic setup
```yaml
services:
  registry:
    image: registry:2
    container_name: registry
    restart: unless-stopped
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - registry_data:/var/lib/registry
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: "true"

volumes:
  registry_data:
```

```shell
docker compose up -d
```
That's it. You now have a private registry at localhost:5000. Push an image to it:
```shell
# Tag an existing image for your registry
docker tag myapp:latest localhost:5000/myapp:latest

# Push it
docker push localhost:5000/myapp:latest

# Pull it back
docker pull localhost:5000/myapp:latest
```
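You can also ask the registry directly what it's storing. The `/v2/_catalog` and `/v2/<name>/tags/list` endpoints are part of the standard Distribution API; the repository name below is just an example, and the queries are guarded so the snippet is safe to dry-run:

```shell
REGISTRY="localhost:5000"
CATALOG_URL="http://${REGISTRY}/v2/_catalog"
TAGS_URL="http://${REGISTRY}/v2/myapp/tags/list"

# Only query if the registry is actually reachable
if curl -fs "${CATALOG_URL}" > /dev/null 2>&1; then
  # List all repositories in the registry
  curl -s "${CATALOG_URL}"
  # List tags for one repository
  curl -s "${TAGS_URL}"
fi
```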
Adding authentication
A registry without authentication is an open door. Add basic auth with htpasswd:
```shell
# Create a password file
mkdir -p /opt/registry/auth
docker run --rm --entrypoint htpasswd httpd:2 \
  -Bbn myuser mypassword > /opt/registry/auth/htpasswd
```
Updated compose file:
```yaml
services:
  registry:
    image: registry:2
    container_name: registry
    restart: unless-stopped
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - registry_data:/var/lib/registry
      - /opt/registry/auth:/auth
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: "Private Registry"
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_STORAGE_DELETE_ENABLED: "true"

volumes:
  registry_data:
```
Now you need to log in before pushing or pulling:
```shell
docker login localhost:5000
# Enter username and password
```
Adding TLS
If your registry needs to be accessible over the network (not just localhost), you must add TLS. Docker refuses to communicate with non-HTTPS registries by default, and for good reason.
If you're already running a reverse proxy (Traefik, Caddy, Nginx Proxy Manager), route traffic through it and let it handle TLS. Otherwise, configure TLS directly:
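With Traefik, for example, TLS termination can live entirely in container labels. This is a sketch, assuming an existing Traefik instance with a certificate resolver named letsencrypt and the registry service from the basic setup above:

```yaml
services:
  registry:
    # ...same registry service as in the basic setup...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.registry.rule=Host(`registry.yourdomain.com`)"
      - "traefik.http.routers.registry.tls.certresolver=letsencrypt"
      - "traefik.http.services.registry.loadbalancer.server.port=5000"
```

With this approach the registry itself keeps listening on plain HTTP internally, and only the proxy is exposed.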
```yaml
services:
  registry:
    image: registry:2
    container_name: registry
    restart: unless-stopped
    ports:
      - "443:5000"
    volumes:
      - registry_data:/var/lib/registry
      - /opt/registry/auth:/auth
      - /opt/registry/certs:/certs
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: "Private Registry"
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
      REGISTRY_STORAGE_DELETE_ENABLED: "true"

volumes:
  registry_data:
```
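If the certificate comes from a private CA rather than a public one, each Docker client also needs to trust it. Docker looks for per-registry CA certificates under /etc/docker/certs.d/<host:port>/; the paths below are examples, and the copy is guarded so the snippet is safe to dry-run:

```shell
REGISTRY_HOST="registry.yourdomain.com"
CA_CERT="/opt/registry/certs/ca.crt"

# Copy the CA into Docker's per-registry trust directory (needs root)
if [ -f "${CA_CERT}" ]; then
  sudo mkdir -p "/etc/docker/certs.d/${REGISTRY_HOST}"
  sudo cp "${CA_CERT}" "/etc/docker/certs.d/${REGISTRY_HOST}/ca.crt"
fi
```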
Using it as a Docker Hub mirror
Cache Docker Hub images locally so pulls are fast and don't count against rate limits:
```yaml
services:
  registry-mirror:
    image: registry:2
    container_name: registry-mirror
    restart: unless-stopped
    ports:
      - "127.0.0.1:5001:5000"
    volumes:
      - mirror_data:/var/lib/registry
    environment:
      REGISTRY_PROXY_REMOTEURL: "https://registry-1.docker.io"

volumes:
  mirror_data:
```
Configure Docker to use the mirror by adding to /etc/docker/daemon.json:
```json
{
  "registry-mirrors": ["http://localhost:5001"]
}
```
Restart Docker (sudo systemctl restart docker), and all pulls from Docker Hub will be cached locally.
Setting Up Harbor
Harbor is a full-featured registry platform. The setup is more involved but the capabilities are significantly greater.
Prerequisites
Harbor requires Docker Compose and at least 2 GB of RAM. Download the installer:
```shell
# Download the latest Harbor release
HARBOR_VERSION="2.11.0"
wget "https://github.com/goharbor/harbor/releases/download/v${HARBOR_VERSION}/harbor-offline-installer-v${HARBOR_VERSION}.tgz"
tar xzf "harbor-offline-installer-v${HARBOR_VERSION}.tgz"
cd harbor
```
Configure Harbor
Copy and edit the configuration file:
```shell
cp harbor.yml.tmpl harbor.yml
```
Key settings in harbor.yml:
```yaml
hostname: registry.yourdomain.com

# HTTPS configuration
https:
  port: 443
  certificate: /opt/harbor/certs/fullchain.pem
  private_key: /opt/harbor/certs/privkey.pem

# Initial admin password (change after first login)
harbor_admin_password: Harbor12345

# Database configuration
database:
  password: change-this-to-something-random
  max_idle_conns: 50
  max_open_conns: 100

# Storage
data_volume: /opt/harbor/data

# Enable vulnerability scanning
trivy:
  ignore_unfixed: false
  security_check: vuln
  insecure: false
```
Install and start
```shell
sudo ./install.sh --with-trivy
```
Harbor will pull its images and start all components. Access the web UI at https://registry.yourdomain.com and log in with admin / Harbor12345.
Using Harbor
```shell
# Log in
docker login registry.yourdomain.com

# Tag and push (Harbor uses projects for organization)
docker tag myapp:latest registry.yourdomain.com/myproject/myapp:latest
docker push registry.yourdomain.com/myproject/myapp:latest
```
Harbor's web UI lets you browse images, see vulnerability scan results, manage users and projects, and configure replication to other registries.
Storage Backends
Both Docker Registry and Harbor support multiple storage backends:
| Backend | Docker Registry | Harbor | Best for |
|---|---|---|---|
| Local filesystem | Yes (default) | Yes (default) | Small setups, fast access |
| S3/MinIO | Yes | Yes | Scalable, durable storage |
| Azure Blob | Yes | Yes | Azure-based infra |
| Google Cloud Storage | Yes | Yes | GCP-based infra |
| OpenStack Swift | Yes | Yes | OpenStack environments |
For most homelabs, local filesystem storage is fine. If you're storing more than a few hundred GB of images, consider S3-compatible storage with MinIO:
# Docker Registry with MinIO backend
environment:
REGISTRY_STORAGE: s3
REGISTRY_STORAGE_S3_ACCESSKEY: minioadmin
REGISTRY_STORAGE_S3_SECRETKEY: minioadmin
REGISTRY_STORAGE_S3_REGION: us-east-1
REGISTRY_STORAGE_S3_BUCKET: registry
REGISTRY_STORAGE_S3_REGIONENDPOINT: http://minio:9000
REGISTRY_STORAGE_S3_SECURE: "false"
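The endpoint above assumes a MinIO service reachable as minio:9000 on the same Docker network. A minimal sketch of that service (change the default credentials, which are MinIO's well-known defaults):

```yaml
services:
  minio:
    image: minio/minio
    container_name: minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data

volumes:
  minio_data:
```

You'll also need to create the `registry` bucket (via the MinIO console or `mc`) before the registry can write to it.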
Garbage Collection
Container images are built in layers. When you push a new version of an image, the old layers aren't automatically deleted -- they accumulate. Garbage collection reclaims disk space by removing unreferenced layers.
Docker Registry
Garbage collection requires stopping the registry or running in read-only mode:
```shell
# Run garbage collection (dry run first)
docker exec registry bin/registry garbage-collect \
  /etc/docker/registry/config.yml --dry-run

# Actual cleanup
docker exec registry bin/registry garbage-collect \
  /etc/docker/registry/config.yml
```
Schedule this weekly via cron. Pick a window when nothing is pushing (or switch the registry to read-only mode first), since running garbage collection while pushes are in flight can delete layers that are still needed:
```shell
# Weekly garbage collection, Sunday at 3 AM
0 3 * * 0 docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml >> /var/log/registry-gc.log 2>&1
```
Harbor
Harbor has built-in scheduled garbage collection accessible from the web UI under Administration > Clean Up. Set it to run weekly and it handles everything automatically.
CI/CD Integration
A private registry really shines when integrated with CI/CD. Here's a basic example with a Woodpecker CI pipeline:
```yaml
# .woodpecker.yml
steps:
  build:
    image: docker:latest
    commands:
      - docker build -t registry.yourdomain.com/myproject/myapp:${CI_COMMIT_SHA} .
      - docker push registry.yourdomain.com/myproject/myapp:${CI_COMMIT_SHA}
      - docker tag registry.yourdomain.com/myproject/myapp:${CI_COMMIT_SHA} registry.yourdomain.com/myproject/myapp:latest
      - docker push registry.yourdomain.com/myproject/myapp:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
Every commit builds a new image, tags it with the commit SHA for traceability, and also updates the latest tag. Pulls are instant because everything is on your local network.
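The pipeline above assumes the runner is already authenticated against the registry. One way to handle that is a login step that reads credentials from Woodpecker secrets; this sketch uses the `from_secret` syntax of recent Woodpecker versions, and the secret names are hypothetical:

```yaml
steps:
  login:
    image: docker:latest
    commands:
      - echo "$REGISTRY_PASSWORD" | docker login registry.yourdomain.com -u "$REGISTRY_USER" --password-stdin
    environment:
      REGISTRY_USER:
        from_secret: registry_user
      REGISTRY_PASSWORD:
        from_secret: registry_password
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

Using `--password-stdin` keeps the password out of the process list and build logs.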
Security Best Practices
Always use TLS -- Never run a registry over plain HTTP on a network. Docker's "insecure registries" workaround exists but should only be used for local development.
Enable image signing -- signature verification ensures clients only run images that were actually built by you. Harbor supports signing and verification natively via Cosign; for plain Docker Registry, Docker Content Trust (DCT) is the built-in option.
Scan images for vulnerabilities -- Harbor's integrated Trivy scanner checks every pushed image against known CVE databases. If you're using plain Docker Registry, run Trivy separately.
Use read-only credentials for pulls -- Don't give your deployment pipeline push access. Create separate accounts for pushing (CI/CD) and pulling (production servers).
Limit network access -- Bind the registry to localhost or a private network interface. Only expose it through a reverse proxy with proper authentication.
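The scanning advice above doesn't require Harbor: standalone Trivy can scan images pulled from any registry. A sketch, where the image name is an example and the call is guarded so the snippet is safe to run without Trivy installed:

```shell
IMAGE="registry.yourdomain.com/myproject/myapp:latest"

# Scan only if Trivy is installed; --exit-code 1 makes the command fail
# on serious findings, which is useful as a CI gate
if command -v trivy > /dev/null 2>&1; then
  trivy image --severity HIGH,CRITICAL --exit-code 1 "${IMAGE}"
fi
```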
The Bottom Line
For a single-user homelab, Docker Registry with basic auth takes 15 minutes to set up and solves the Docker Hub rate limit problem while giving you a fast, private store for your custom images. For anything more -- multiple users, vulnerability scanning, audit trails -- Harbor is the standard. Both are straightforward to operate and dramatically improve your container workflow once you stop depending on Docker Hub for everything.
