$ docker push xxxxxx.gra5.container-registry.ovh.net/xxxxxxxxxxxxx:latest
The push refers to repository [xxxxxx.gra5.container-registry.ovh.net/xxxxxxxxxxxxx]
1e1342d93521: Preparing
f7a255c0ab52: Preparing
800e62dacf38: Preparing
1ff62ebfdf00: Preparing
3e207b409db3: Preparing
800e62dacf38: Layer already exists
1ff62ebfdf00: Layer already exists
3e207b409db3: Layer already exists
f7a255c0ab52: Layer already exists
unknown blob
Hello, do you know where I can find the total size of my registry? In the OVH Manager the size is still 0.
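For what it's worth, a minimal sketch of how to query the size yourself, assuming the OVH Managed Private Registry exposes the standard Harbor API (the host, project name, and credentials below are placeholders, and older Harbor releases serve the API under /api/ instead of /api/v2.0/):

# Returns the project summary; with quotas enabled, the "quota" field
# reports "used" storage in bytes.
curl -s -u "myuser:mypassword" \
  "https://xxxxxx.gra5.container-registry.ovh.net/api/v2.0/projects/myproject/summary"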
I also have a question: is there an example or some documentation for setting up an nginx front proxy (proxy_pass) to hide the OVH URL of my registry and expose it as registry.mycompany.com? Thanks in advance.
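For reference, a minimal nginx sketch of that kind of front proxy (an untested assumption, not OVH documentation; server_name, certificate paths, and the upstream host are placeholders based on the URLs in this thread):

server {
    listen 443 ssl;
    server_name registry.mycompany.com;

    # Placeholder certificate paths; Docker clients require TLS on the proxy.
    ssl_certificate     /etc/nginx/certs/registry.crt;
    ssl_certificate_key /etc/nginx/certs/registry.key;

    # Image layers can be large; disable the request body size limit.
    client_max_body_size 0;

    location /v2/ {
        # Forward Docker Registry API calls to the OVH registry.
        proxy_pass https://xxxxxx.gra5.container-registry.ovh.net;
        # The registry is virtual-hosted on its own name, so keep that Host.
        proxy_set_header Host xxxxxx.gra5.container-registry.ovh.net;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 900;
    }
}

Note that the registry can still redirect blob downloads to its backing object storage (storage.gra.cloud.ovh.net appears in the errors further down), so those URLs may stay visible to clients even behind the proxy.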
Hi, we can't push an image. We made five attempts between 12:04 and 15:11 CEST:
The push refers to repository [xxxxxx.gra5.container-registry.ovh.net/xxxxxxxxx]
869b88ad82bf: Preparing
39f5759ee91a: Preparing
9fcb4b1f6588: Preparing
d5c7fefe9138: Preparing
7a0035f07138: Preparing
f567c7d48345: Preparing
923ea7ff5381: Preparing
b0adf2da66d5: Preparing
66de6bafdc9e: Preparing
90139b0092c1: Preparing
2e3d924ff114: Preparing
372f309d119c: Preparing
50644c29ef5a: Preparing
923ea7ff5381: Waiting
b0adf2da66d5: Waiting
50644c29ef5a: Waiting
66de6bafdc9e: Waiting
2e3d924ff114: Waiting
90139b0092c1: Waiting
372f309d119c: Waiting
f567c7d48345: Waiting
7a0035f07138: Layer already exists
d5c7fefe9138: Layer already exists
923ea7ff5381: Layer already exists
f567c7d48345: Layer already exists
b0adf2da66d5: Layer already exists
66de6bafdc9e: Layer already exists
90139b0092c1: Layer already exists
2e3d924ff114: Layer already exists
372f309d119c: Layer already exists
50644c29ef5a: Layer already exists
39f5759ee91a: Retrying in 5 seconds
39f5759ee91a: Retrying in 4 seconds
39f5759ee91a: Retrying in 3 seconds
39f5759ee91a: Retrying in 2 seconds
39f5759ee91a: Retrying in 1 second
39f5759ee91a: Retrying in 10 seconds
39f5759ee91a: Retrying in 9 seconds
39f5759ee91a: Retrying in 8 seconds
39f5759ee91a: Retrying in 7 seconds
39f5759ee91a: Retrying in 6 seconds
39f5759ee91a: Retrying in 5 seconds
39f5759ee91a: Retrying in 4 seconds
39f5759ee91a: Retrying in 3 seconds
39f5759ee91a: Retrying in 2 seconds
39f5759ee91a: Retrying in 1 second
869b88ad82bf: Pushed
9fcb4b1f6588: Pushed
39f5759ee91a: Pushed
received unexpected HTTP status: 500 Internal Server Error
Is there anything going on?
error pushing image: failed to push to destination xxx.gra5.container-registry.ovh.net/project/repo:tag: PUT https://xxx.gra5.container-registry.ovh.net/v2/repo/repo/blobs/uploads/a28bfac9-a522-4249-8a11-b7c8341ce79a?_state=REDACTED&digest=sha256%3A33df0b4b677132b4f20990e1443b5b8b809f2d3bf3ce22fd3c7a2ec5a36623bb: UNKNOWN: unknown error; map[DriverName:s3aws Path:/docker/registry/v2/repositories/project/repo/_upload/a28bfac9-a522-4249-8a11-b7c8341ce79a/data]
error pushing image: failed to push to destination XXX.XXX.container-registry.ovh.net/library/XXX-XXX:rkYuxab: Head "https://storage.gra.cloud.ovh.net/XXX-d697-4588-bfda-eb24eaa98aa6/files/docker/registry/v2/blobs/sha256/f2/f26661f4bd05f0baedb0825bd32a7704570e64aa6f587ff7e343e8f62d4270c5/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=18677960e2d84ec482a66615d254962a%2F20210114%2FGRA%2Fs3%2Faws4_request&X-Amz-Date=20210114T080021Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=XXX": read tcp 10.42.208.30:46442->126.96.36.199:443: read: connection reset by peer
PUT https://XXX.gra7.container-registry.ovh.net/v2/library/tools-XXX/blobs/uploads/72f8e4f6-6c04-4022-9a82-51f91ac29cdd?_state=REDACTED&digest=sha256%3A1064dab604aaf3f83a795daf993808926d6c578bee4ebbc16cbfb86eb4c18129: UNKNOWN: unknown error; map[DriverName:s3aws Path:/docker/registry/v2/repositories/library/tools-XXX/_uploads/72f8e4f6-6c04-4022-9a82-51f91ac29cdd/data]
I'm getting an "unknown blob" error while pushing my images. This causes my deploy pipeline to fail. If this continues, I will have to leave the OVH Docker Registry, because it's just not stable enough to use in production. My pipelines are long, and seeing them fail after 15 minutes is annoying. I've already posted messages here about the same issue (Nov 18, 2020). Have you isolated the issue on your side? What can you do to prevent it in the future?
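In the meantime, a minimal shell sketch of a workaround (not a fix): retry the push a few times before failing the pipeline, since the errors look transient. The image reference, retry count, and delay are placeholders to adapt to your pipeline:

#!/bin/sh
# Placeholder image reference; replace with your own registry/repo:tag.
IMAGE="xxxxxx.gra5.container-registry.ovh.net/project/repo:tag"
n=0
until docker push "$IMAGE"; do
  n=$((n+1))
  if [ "$n" -ge 3 ]; then
    echo "push failed after $n attempts" >&2
    exit 1
  fi
  echo "push attempt $n failed, retrying in 30s..." >&2
  sleep 30
done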