Docker Hub is about to implement much stricter pull rate limits starting April 1st, 2025. If you're running CI/CD pipelines—especially on self-hosted runners—this could hurt. GitHub-hosted GitHub Actions runners get a pass here because of their IP whitelisting agreements with Docker. This is a rare situation in which running your GitHub Actions on their slower, much more expensive machines has saved you from some pain! If you've been happily using a free Docker Hub account and want to keep doing so, you'll want to keep reading...
Docker is tightening their pull limits to the extent that everyone who is not on a Pro, Team, or Business plan is on notice:
As expected, there was public outcry. Yet I don't think even Docker anticipated the level of pushback they got: if you check earlier versions of their announcement, you'll see that they quickly reacted to the bad press, pushed the in-effect date back from March 1st to April 1st, and changed various values in this table. Here's a screenshot my cofounder took around the day this was first announced:
As much as I enjoy the discourse, this is not a writeup about whether the change is fair; for that, we have Hacker News. What frustrated me most was Docker's short notice before implementing this change.
The immediate impact will hit CI pipelines across organizations of all sizes. Even a modest test suite running a few containers can quickly exceed the new limit of 10 pulls per hour per IP for unauthenticated requests. A small team of 2-3 engineers pushing multiple commits or running parallel jobs might hit this limit in minutes. Miss just one authentication step in your workflow, and you'll face those dreaded 429 (Too Many Requests) errors, grinding your dev cycles to a halt.
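To make that concrete, consider a hypothetical test stack; the services and image tags below are illustrative, not from any real project:

# docker-compose.test.yml: three pulled public images plus one locally built image
services:
  db:
    image: postgres:16
  cache:
    image: redis:7
  proxy:
    image: nginx:1.27
  app:
    build: .

Each CI run pulls three public images, so three parallel jobs behind the same NAT gateway already mean nine pulls from a single push, and the next push within the hour takes you past the 10-pull limit.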
Production deployments using unauthenticated Docker Hub access are at risk too. Many production environments—surprisingly—pull public containers without proper authentication. As your CD pipeline pulls images across multiple environments or regions, you can quickly hit rate limits and temporarily break your deployment pipeline.
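If your production workloads run on Kubernetes, for instance, a sketch of the fix is to reference a registry credential from each pod spec. This assumes a dockerhub-creds secret of type kubernetes.io/dockerconfigjson already exists in the namespace:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  # Authenticate the image pull instead of relying on anonymous access.
  imagePullSecrets:
    - name: dockerhub-creds   # assumed to exist; create it with your Docker Hub token
  containers:
    - name: web
      image: nginx:1.27

The same applies to Deployments and other workload templates via spec.template.spec.imagePullSecrets.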
Option 1 requires you to create your own Docker registry mirror that caches pulls of public images:
# config.yml for the distribution registry, acting as a pull-through cache
version: 0.1  # required top-level field in the registry config format
storage:
  s3:
    accesskey: {{ minio_access_key }}
    secretkey: {{ minio_secret_key }}
    region: us-east-1
    bucket: docker-registry
    regionendpoint: http://{{ minio_endpoint }}:9000
    secure: false
    v4auth: true
    chunksize: 5242880
    rootdirectory: /
  cache:
    blobdescriptor: inmemory
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
proxy:
  # Upstream registry to mirror; cached blobs are served locally on later pulls.
  remoteurl: https://registry-1.docker.io
http:
  addr: "{{ registry_address }}"
  relativeurls: false
  draintimeout: 60s
  secret: "{{ http_secret }}"
This will ensure you reduce your hits on the Docker Hub registry, but it requires a solid 2-3 days of engineering time to set up properly, plus ongoing maintenance to keep it highly available and to scale out your storage cluster. You'll need to navigate storage configuration, network setup across all your runners, and probably a month of back-and-forth with that security-obsessed CIO who keeps asking "but why can't you just pay for Docker?!".
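For what it's worth, actually running the mirror can be as simple as the sketch below, assuming the config above is saved as config.yml (the registry:2 image reads its configuration from /etc/docker/registry/config.yml):

services:
  registry-mirror:
    image: registry:2
    restart: always
    ports:
      - "5000:5000"  # expose the mirror to your runners
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml:ro

Each runner's Docker daemon then needs to be pointed at it, e.g. by adding "registry-mirrors": ["http://your-mirror.internal:5000"] (the hostname is a placeholder) to /etc/docker/daemon.json and restarting the daemon.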
Option 2 requires adding authentication to every workflow, which raises your limit to 100 pulls per hour:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Your existing steps...
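One gotcha worth flagging: docker/login-action only authenticates pulls made inside your steps. Jobs that use container: or services: pull their images before any step runs, so those need credentials declared separately (the node:20 image here is just an example):

jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: node:20
      credentials:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    steps:
      - run: node --version

The services: block accepts the same credentials shape.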
However, this solution only partially works: it's a mere 10x increase on your pull limit, and more importantly it involves significant organizational overhead. While it may seem straightforward at first, consider the scale: a mid-sized company with 40+ engineers typically manages 50+ repositories containing 250+ workflow files. For larger organizations with 200+ engineers, these numbers balloon—potentially reaching 2,000+ workflows spread across hundreds of repositories.
Without a comprehensive solution, humans will be humans, and you'll likely experience weeks of intermittent CI failures as each undiscovered workflow hits rate limits at unpredictable times, creating a long tail of disruption that affects developer productivity and release schedules.
Option 3 keeps your life simple and stress-free: just use Blacksmith runners. Blacksmith runners give you the benefits of option 1—a Docker pull-through cache mirror enabled by default—without the organizational overhead of option 2, thanks to our migration wizard, which automatically opens a PR for each repository for you to review. Try it for free!
As a side effect, you'll not only have dealt with the new Docker Hub usage limits, but you'll have reduced your CI complexity while improving CI performance. Blacksmith handles all your infrastructure, scaling, and storage needs behind the scenes.
jobs:
  build:
    runs-on: blacksmith
    # The rest of your workflow remains the same
While solving rate limits is the immediate concern, we're taking this opportunity even further. Our Sticky Disks don't just circumvent Docker's rate limits; they eliminate the entire Docker pull problem for both public AND private images:
With Sticky Disks, we're not just helping you avoid the April 1st Docker rate limits - we're trying to make those pulls in your workflow a no-op.
You have a week left before Docker's new pull limits take effect on April 1st. Docker changing the rules on this timeline is an infrastructure headache, but don't wait until your CI pipelines start breaking and your team loses hours fighting flaky CI. We offer a quick yet effective solution today that can have your CI pipelines fully protected in under 5 minutes: no auditing hundreds of workflows, no setting up complex mirrors, and no sudden disruptions to your development process.