When Go Container Images Use More Space Than Python
I noticed something interesting recently while working with container images for a Go app and a Python app. As I pushed new versions of each image, the remote repository size grew faster for the Go image than the Python image.
This was unexpected, but I realized what was happening when I remembered that Docker images consist of layers and that Go and Python work very differently when it comes to how you create container images from them.
This post demonstrates what I saw in GCP Artifact Registry and goes into detail about what’s going on with the image layers for each app. It also discusses how this might impact repository storage used in remote repositories and Kubernetes clusters.
Pushing the first Go image
Here’s how I ran into this situation. I had a Go app. Here is a simple example of one, taken from an example repo published alongside this post.
total 24
drwxr-xr-x 2 matt matt 4096 Nov 8 15:23 .
drwxr-xr-x 7 matt matt 4096 Nov 8 15:42 ..
-rw-r--r-- 1 matt matt 201 Nov 8 13:43 Dockerfile
-rwxr-xr-x 1 matt matt 240 Nov 8 15:23 build_and_push.sh
-rw-r--r-- 1 matt matt 30 Nov 8 13:40 go.mod
-rw-r--r-- 1 matt matt 75 Nov 8 15:23 main.go
The app content is really just in main.go.
package main
import "fmt"
func main() {
fmt.Println("Hello from Go!")
}
I can run the following command to create an Artifact Registry repository in my GCP project.
gcloud artifacts repositories create go \
--repository-format=docker \
--location=northamerica-northeast2 \
--disable-vulnerability-scanning
When it’s created, I can see the repository in the GCP UI. It’s empty right now, so it shows size as 0 B.

The Go app’s Dockerfile:
FROM golang:1.25.4 AS build-stage
WORKDIR /build
COPY go.mod ./
COPY *.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -o /app
FROM scratch
WORKDIR /
COPY --from=build-stage /app /app
CMD ["/app"]
This is a typical multi-stage build for Go apps which produces a pretty small image.
I run the following to build and push the image to Artifact Registry.
TAG_LOCAL="img-size-demo-go"
TAG_AR="northamerica-northeast2-docker.pkg.dev/docker-image-size-demo/go/app"
docker buildx build --provenance=false --sbom=false -t $TAG_LOCAL .
docker tag $TAG_LOCAL $TAG_AR
docker push $TAG_AR
Note that I’m using docker buildx build with --provenance=false --sbom=false instead of a plain docker build because Buildx attaches attestation artifacts like these by default. Disabling them keeps the Artifact Registry output cleaner for this demo.
I can use docker images to see details of the built image. (Some docker images details omitted for brevity.)
REPOSITORY TAG IMAGE ID SIZE
img-size-demo-go latest 404d701942c4 3.61MB
.../go/app latest 404d701942c4 3.61MB
And the GCP UI shows that some space is now being used in the repo.

When I click through to the digests for the app image in the repo, I see one digest, representing the one version of my image I’ve pushed (which happens to contain a single layer). Compressed, it’s 1.3 MB.

Updating the Go app and pushing the image
I make a change to the source code of the app, so that the next image build will produce a new image.
import "fmt"
func main() {
- fmt.Println("Hello from Go!")
+ fmt.Println("Hello from Go!!")
}
Then, I build and push the image again, and use docker images to inspect what was built:
REPOSITORY TAG IMAGE ID SIZE
img-size-demo-go latest c02779f83c2f 3.61MB
.../go/app latest c02779f83c2f 3.61MB
<none> <none> 404d701942c4 3.61MB
The latest image has a different image ID and is about the same size.
In the Artifact Registry UI, I can see the repository size has grown to 2.6 MB.

And when I click through to the digests for the app image, I see two digests, each 1.3 MB.

So far, things made sense to me. Every time I push a new image, I’m increasing the space used in the image repo by another 1.3 MB.
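In other words, the Go repo’s growth is linear in the number of pushes: every push contributes a full new layer, and nothing is shared between versions. A quick sketch of that (the per-push layer size is an approximation from the UI):

```python
LAYER_MB = 1.3  # approximate compressed size of the Go app's single layer

def go_repo_mb(pushes):
    # Every push uploads a brand-new layer; no layer is shared between versions.
    return LAYER_MB * pushes

print(f"{go_repo_mb(2):.1f} MB after 2 pushes")  # matches the repo size seen in the UI
```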
Pushing the first Python image
Things looked a bit different for Python though. Here are the steps I went through and what I observed for Python.
Like with Go, I have a simple app:
total 20
drwxr-xr-x 2 matt matt 4096 Nov 8 15:23 .
drwxr-xr-x 7 matt matt 4096 Nov 8 15:42 ..
-rw-r--r-- 1 matt matt 126 Nov 8 15:35 Dockerfile
-rwxr-xr-x 1 matt matt 248 Nov 8 15:24 build_and_push.sh
-rw-r--r-- 1 matt matt 28 Nov 8 15:23 main.py
Its main.py:
print('Hello from Python!')
I can run the following command to create an Artifact Registry repository for it in my GCP project. I’m creating a second repository for this app instead of re-using the first repository created in the steps above because it will help demonstrate how the size of a repository grows each time a Python image is pushed.
gcloud artifacts repositories create python \
--repository-format=docker \
--location=northamerica-northeast2 \
--disable-vulnerability-scanning
Like with the Go repo, the repo’s size is shown as 0 B when it’s first created.

The Python app’s Dockerfile:
FROM python:3.14.0-slim
WORKDIR /build
COPY . .
CMD ["python", "main.py"]
I’m using a -slim base image because I don’t need the extra tooling the full base image includes but -slim leaves out, such as the toolchain for compiling dependencies from source.
I run the following to build and push the image to Artifact Registry.
TAG_LOCAL="img-size-demo-python"
TAG_AR="northamerica-northeast2-docker.pkg.dev/docker-image-size-demo/python/app"
docker buildx build --provenance=false --sbom=false -t $TAG_LOCAL .
docker tag $TAG_LOCAL $TAG_AR
docker push $TAG_AR
The details from docker images for the built image:
REPOSITORY TAG IMAGE ID SIZE
img-size-demo-python latest a2f1db29a6e9 177MB
.../python/app latest a2f1db29a6e9 177MB
The GCP UI shows some space is now being used in the new repo.

Clicking through to the digests for the app image in this repo shows one 41.2 MB digest.

Updating the Python app and pushing the image
So far, everything looks like it did for Go. I push an image and the size of the repo increases. But, here’s where things start to look quite different for Python.
Like with the Go app, I make a change to the source code of the Python app.
-print('Hello from Python!')
+print('Hello from Python!!')
Then, I build and push the image again. While the image is pushing, some output appears that hints at the effect I’m about to see in Artifact Registry. For most layers, I see lines like this:
464ba63211a0: Layer already exists
And I see only one line like this:
21d7a00b2424: Pushed

The Python repo has only increased in size by about 0.1 MB, from 41.2 MB to 41.3 MB. In fact, based on my analysis (described further below), I think the UI was rounding this increase in size up to 0.1 MB. It was likely far less.
Clicking through to the digests for the Python app shows two digests that each have a compressed size of 41.2 MB.

Images and layers
What I observed was that each time the Python app is updated and an up-to-date image for it is pushed, the repository only grows slightly.
At first, this baffled me. I was used to thinking of Python apps as “larger” than Go apps. Indeed, when I pushed the first version of each app’s image, the Python app’s image was larger (by about 10x), and the Python repository used that much more storage space.
But then I remembered that at a lower level, images are made up of image layers, and if you set up your Dockerfile properly, you can take advantage of the way image layers work to use less storage space (among other benefits) as you push updates to your apps’ images.
A tool to inspect image layers would help break this down. The Artifact Registry UI has this kind of tooling built in. I can click through to the first digest of the Go app, and see tabs in the UI for inspecting it.


The “Manifest” tab shows the following JSON.
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"digest": "sha256:d64b19d6d7228d606a19f74d424ea6118df33bfc4c2bb8c40b90520952145504",
"size": 718
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"digest": "sha256:f2cfd1c431aa44425f5a08b7d09266be2e1ae3949b96ec1a0bc2f5dc610a7e2b",
"size": 1351732
}
]
}
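The sizes shown in the UI can be reproduced from this manifest by summing the compressed layer sizes. A small sketch, using the manifest JSON above:

```python
import json

# The Go image manifest shown above, as returned by the registry.
manifest = json.loads("""
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:d64b19d6d7228d606a19f74d424ea6118df33bfc4c2bb8c40b90520952145504",
    "size": 718
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "digest": "sha256:f2cfd1c431aa44425f5a08b7d09266be2e1ae3949b96ec1a0bc2f5dc610a7e2b",
      "size": 1351732
    }
  ]
}
""")

# Sum the compressed layer sizes -- this is what the registry stores per image.
total = sum(layer["size"] for layer in manifest["layers"])
print(f"{len(manifest['layers'])} layer(s), {total / 1_000_000:.2f} MB compressed")
# -> 1 layer(s), 1.35 MB compressed
```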
The important part here is layers and the digest of the only layer in this image, which ends in e2b.
The following is what that info looks like for the second digest of the Go image (some details omitted for brevity):
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"digest": "sha256:ef02ed5ec528c344438bbd7389fb359adc2e5cd49ffcc7ea84075344aeb47af0",
"size": 1351728
}
]
This digest also has a single layer, but it ends in af0 this time. It’s about the same size, in bytes, as the old layer.
The layer in the second digest contains an entirely new binary, created by the Go compiler when I recompiled my app after making that small change. Changing just one source code file was enough to require an entirely new binary to be built. Each time I re-built the app, I was throwing away the old 1.3 MB compressed layer and replacing it with a new 1.3 MB one.
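Layer digests are content addresses: a SHA-256 hash over the (compressed) layer contents. Any change at all produces an unrelated digest, so the registry treats the result as a brand-new blob. A toy illustration, where the byte strings stand in for the binaries compiled before and after the one-character change:

```python
import hashlib

# Stand-ins for the app binary compiled before and after the source change.
binary_v1 = b"Hello from Go!" * 1000
binary_v2 = b"Hello from Go!!" * 1000

digest_v1 = hashlib.sha256(binary_v1).hexdigest()
digest_v2 = hashlib.sha256(binary_v2).hexdigest()

# A one-byte difference in content yields a completely different digest,
# so the registry sees a new blob and must store it in full.
print(digest_v1[:12], digest_v2[:12])
```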
The repository stores both because I haven’t deleted the old one from it yet and I haven’t set up any form of cleanup to automatically delete old and/or unused images.
Things look different for Python. The layer data for the first image pushed is the following (all except layer digests omitted for brevity).
sha256:d7ecded7702a5dbf6d0f79a71edc34b534d08f3051980e2c948fba72db3197fc
sha256:963010aaad3b35fe41f5accfb3da3e6a12ca711542ae537fdb283b5b30058c79
sha256:f2b97c97aa66d4fb1e4a8405a2e403985ddedb59766575db075d172896fe2797
sha256:feae240f352d7a935929cf27eb013ca06372a3cf81337987f48b3afc15d99746
sha256:464ba63211a0444141ae1d9c3b635611072c3f5906850e77a8ae4c11b3c6bf3d
sha256:0de8ac4d628a22a5cd9b980fc0c46475309775e77a29725e9a0a4324c54795c4
This image has more layers because the base Python image (python:3.14.0-slim) has multiple layers. You can see the base image layer details on Docker Hub.
The layer data for the second image digest:
sha256:d7ecded7702a5dbf6d0f79a71edc34b534d08f3051980e2c948fba72db3197fc
sha256:963010aaad3b35fe41f5accfb3da3e6a12ca711542ae537fdb283b5b30058c79
sha256:f2b97c97aa66d4fb1e4a8405a2e403985ddedb59766575db075d172896fe2797
sha256:feae240f352d7a935929cf27eb013ca06372a3cf81337987f48b3afc15d99746
sha256:464ba63211a0444141ae1d9c3b635611072c3f5906850e77a8ae4c11b3c6bf3d
sha256:21d7a00b2424f347fa72e2935aa53211de957663ea7f8704b77142bd21b8b347
Each image digest has the same first five layers (7fc, c79, 797, 746, and f3d). But, each image has a different sixth layer (5c4 for the first one and 347 for the second one).
These final layers are small. In the case of the second image digest, it’s just 486 B!
This is why the remote repository grew only slightly each time I pushed an updated image. With Python apps, only one small layer changes each time you update the app’s source code. The interpreter and standard library live in the base image layers, which don’t change just because the app’s source code changed.
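A registry deduplicates blobs by digest, so what it actually stores is the union of unique layers across all pushed images. A toy sketch using the shortened digests above (the lists are illustrative, not real registry data):

```python
# Layer digests (shortened) for the two Python image pushes shown above.
v1 = ["7fc", "c79", "797", "746", "f3d", "5c4"]  # first push
v2 = ["7fc", "c79", "797", "746", "f3d", "347"]  # second push

# A registry stores each blob once, keyed by digest; pushing v2 after v1
# only uploads the layers whose digests the registry has not seen before.
stored = set(v1)
new_blobs = [d for d in v2 if d not in stored]
stored.update(v2)

print(new_blobs)    # the only upload is the small final app layer
print(len(stored))  # 7 unique blobs backing 12 layer references
```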
Go and Python Dependencies
Realizing that the Python base image layers were being re-used each time, greatly reducing how much the remote repository grows with each app update, also made me realize this affects app dependencies too.
The proper way to make a Dockerfile for a Python app that has dependencies is like this:
FROM python:3.14.0-slim
WORKDIR /build
COPY requirements.txt ./
RUN python -m pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
The RUN ... pip install line is sandwiched between the base image and the step that copies in the app’s source code (COPY . .).
Note that requirements.txt is copied in on its own, before the rest of the source code. A RUN step can only see files that have already been copied into the image, so pip install needs requirements.txt to be present. Copying only requirements.txt at that point also means the dependency layer is rebuilt only when the dependencies change, not on every source code change.
This order of operations in the Docker build causes an image layer to exist just for the Python dependencies installed by pip. That means that if the app’s dependencies didn’t change, but its source code did, that’s yet another opportunity for the remote repository to re-use an image layer. Even if my app were using large libraries, like NumPy or machine learning libraries, the remote repository would still only grow in size by the size of my app’s source code each time.
With Go, things are different for libraries. Go libraries are distributed as Go source code, and Go apps are typically compiled to single static binaries. Therefore, even if the library code hasn’t changed when the app’s source code has changed, there is no image layer dedicated only to library code to be re-used. The binary grows to include all the library code, combined with the app’s source code, each time the app is built again.
We can also see this reflected in the Go app’s Dockerfile. It’s a multi-stage build where the second stage is the following.
FROM scratch
WORKDIR /
COPY --from=build-stage /app /app
CMD ["/app"]
No matter which Go libraries were used, there is that single file (called app in this example) that is copied into the final stage. For a given set of app source code files, this compiled binary file could be small or large. It depends which libraries were used.
As an example, consider what happens to the example Go app above when the logrus library is used in it:
package main
import "github.com/sirupsen/logrus"
func main() {
logrus.Info("Hello from Go!")
}
When I build a Docker image from this app, the image is 4.29 MB (compared to the 3.61 MB before using the library).
How repositories grow over time
This brings me to the main realization I had, the one that made this observation seem notable enough to cross over from Docker-tutorial curiosity to real-world operations impact.
If I’m making apps using Go and I’m not careful, my Docker repositories could grow over time to use more space than I expect them to. I might think only a few KB are being added each time, when in reality it’s many MB, especially when I’m using many libraries in a Go app or embedding other binaries (like utils such as kubectl) into it.
When I run an experiment where I have a Go app using logrus and a Python app using Pandas, and I push the images multiple times after making one line changes to each app’s source code, I observe the Artifact Registry repository sizes grow the following way after each image push.
| nth push | Go (MB) | Python (MB) |
|----------|---------|-------------|
| 1 | 1.5 | 114.7 |
| 2 | 3.1 | 114.7 |
| 3 | 4.6 | 114.7 |
The Go repository is growing by about 1.5 MB (compressed) with each push, while the Python repository barely registers any change at all. Each time, the Python repository’s compressed size still rounds to 114.7 MB.
Real-world behaviour involves many builds and pushes accumulating over time, so to represent that I speed things up by running 10 builds and pushes at a time. The following is what that looks like for the Go app.
for i in {1..10}; do
sed -i 's/Hello from Go!/Hello from Go!!/g' main.go
./build_and_push.sh # the steps from above that build and push the image
done
| nth push | Go (MB) | Python (MB) |
|----------|---------|-------------|
| 1 | 1.5 | 114.7 |
| 2 | 3.1 | 114.7 |
| 3 | 4.6 | 114.7 |
| 13 | 19.6 | 114.8 |
| 23 | 35.6 | 114.9 |
| 33 | 51 | 115 |
| 43 | 66.5 | 115 |
| 53 | 82 | 115.1 |
| 63 | 97.4 | 115.2 |
| 73 | 112.9 | 115.3 |
| 74 | 114.4 | 115.3 |
| 75 | 116 | 115.3 |
I find that after the 75th build and push, the repository storing the Go images is using more storage space than the repository storing the Python images.
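This crossover can be sanity-checked with a rough linear model. The rates below are eyeballed approximations from the table, not exact measurements, so treat the result as a ballpark figure:

```python
# Rough linear model of the repository sizes (approximate rates from the table).
GO_MB_PER_PUSH = 1.55   # each push adds a full ~1.55 MB binary layer
PY_BASE_MB = 114.7      # shared base + dependency layers, stored once
PY_MB_PER_PUSH = 0.008  # only the tiny source-code layer is new each push

def go_repo(n):
    return GO_MB_PER_PUSH * n

def py_repo(n):
    return PY_BASE_MB + PY_MB_PER_PUSH * n

# Find the first push at which the Go repo overtakes the Python repo.
n = 1
while go_repo(n) <= py_repo(n):
    n += 1
print(n)
```

Under these assumed rates, the model puts the crossover in the mid-70s of pushes, consistent with the table’s trajectory.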
The remote repository isn’t the only thing storing these images, though. If I were deploying the apps to Kubernetes, the clusters would need to pull each new image to deploy the latest version of the app. Without cleanup configured in the clusters, the nodes would end up storing all these images for a long time, potentially longer than is needed for quick rollback.
This gives me something to think about when it comes to keeping apps running in production and managing my infrastructure like Docker image repositories. I should implement automated cleanup procedures so that I avoid storing more image layers long term than I need to.
Would I use Python instead of Go just to take advantage of long-term caching of base image layers? No, there’s a time and place for each tool. Go is still a fantastic language for creating apps, especially small ones for components like Kubernetes controllers.
I simply need to make sure I’m optimizing my environment for Go when the time comes for that optimization.
Accompanying code
See docker-image-size-demo for accompanying code.