Git fetch failed with exit code 128; back off <whatever> seconds before retrying.
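In the meantime, a backoff-and-retry can be scripted inside the job itself. A minimal sketch, assuming a bash shell; the function name, attempt count, and base delay below are all made up for illustration, not a built-in runner feature:

```shell
#!/usr/bin/env bash
# Hypothetical retry helper: retries a command with exponential
# backoff (2s, 4s, 8s, ...) up to a fixed number of attempts.
retry_with_backoff() {
  local max_attempts=5 base_delay=2 attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after ${attempt} attempts" >&2
      return 1
    fi
    sleep $(( base_delay ** attempt ))
    attempt=$(( attempt + 1 ))
  done
}

# Usage in a job script, e.g.:
# retry_with_backoff git fetch origin
```

GitLab CI also has a job-level `retry:` keyword (with `when:` values such as `runner_system_failure`), but as far as I know it does not expose a configurable delay between attempts, which is why a script-level helper like this can still be useful.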
Hi, I'm trying to set up Selenium tests with GitLab CI, but the tests fail without creating any artifacts or any console logging from inside the steps or describes (though there is some logging from inside a file imported by the config file). The job output looks like this:
npm run confidence-check --host=selenium__standalone-chrome
wdio testConfig/conf.js --cucumberOpts.tagExpression @testlogin
Execution of 3 workers started at 2022-06-07T18:52:19.086Z
[0-0] RUNNING in chrome - /features/Login.feature
[0-0] FAILED in chrome - /features/Login.feature
Spec Files: 0 passed, 1 failed, 2 skipped, 3 total (100% completed) in 00:00:02
Uploading artifacts for failed job
Uploading artifacts...
e2e/errorShots: found 1 matching files and directories (these directories are all empty)
e2e/testResults: found 1 matching files and directories
e2e/allure-report: found 1 matching files and directories
Uploading artifacts as "archive" to coordinator... 201 Created  id=400 responseStatus=201 Created token=K2Lx3dVt
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
And the pipeline stage looks like this:
e2e:chrome:
  image: node:lts-alpine3.14
  stage: confidence-check
  services:
    - selenium/standalone-chrome
  script:
    - echo "====== Change to test directory ======"
    - cd e2e
    - echo "====== Install python ======"
    - apk add --update --no-cache py-pip
    - echo "====== Set up python build envt ======"
    - apk add python3 make g++
    - echo "====== Install dependencies ======"
    - npm install --legacy-peer-deps
    - echo "====== Create directories ======"
    - mkdir testResults
    - mkdir errorShots
    - mkdir allure-results
    - mkdir allure-report
    - echo "====== Run tests ======"
    - npm run confidence-check --host=selenium__standalone-chrome
    - allure generate -c ./allure-results -o ./allure-report
  artifacts:
    when: always
    paths:
      - e2e/errorShots
      - e2e/testResults
      - e2e/allure-report
Hi! I am trying to migrate from Terraform state in S3 to GitLab-managed Terraform state and get:
Acquiring state lock. This may take a few moments...
│ Error: Error acquiring the state lock
│ Error message: HTTP remote state endpoint invalid auth
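In my experience, "invalid auth" during lock acquisition usually points at the credentials handed to the HTTP backend rather than at the lock itself. A sketch of the migration, assuming GitLab.com and placeholder values throughout (project ID, state name, username, and a personal access token with `api` scope are all made up):

```shell
# Placeholders: adjust project ID, state name, user, and token.
PROJECT_ID=12345
STATE_NAME=prod
GITLAB_USER=my-user
GITLAB_TOKEN=glpat-xxxxxxxx   # personal access token with api scope

terraform init -migrate-state \
  -backend-config="address=https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${STATE_NAME}" \
  -backend-config="lock_address=https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${STATE_NAME}/lock" \
  -backend-config="unlock_address=https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${STATE_NAME}/lock" \
  -backend-config="username=${GITLAB_USER}" \
  -backend-config="password=${GITLAB_TOKEN}" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE"
```

The same credentials can also be supplied via the `TF_HTTP_USERNAME` and `TF_HTTP_PASSWORD` environment variables instead of `-backend-config` flags.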
Is there an issue currently with the Dependency Proxy? I've been getting intermittent CI job failures with errors like this:
ERROR: Job failed: failed to pull image "gitlab.com:443/<redacted>/dependency_proxy/containers/docker:20.10.7-dind" with specified policies [always]: Error response from daemon: received unexpected HTTP status: 500 Internal Server Error (manager.go:203:0s)
I've also had failures when trying to build an image that uses the dependency proxy in its FROM instruction.
There are general pipeline settings that were introduced in v15. As an example, I want to programmatically change the option shown in the screenshot above.
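Many project-level CI/CD settings map to attributes on the Projects API, so one way to change them programmatically is a PUT against /projects/:id. A sketch, assuming the option in question is exposed there; the attribute used below, ci_default_git_depth, is only an example, so check the Projects API docs for the attribute matching your screenshot:

```shell
# Placeholders: project ID and token; ci_default_git_depth is
# just an example attribute, swap in the one you actually need.
curl --request PUT \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --data "ci_default_git_depth=20" \
  "https://gitlab.com/api/v4/projects/12345"
```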
Is there a way to customise the schedule of the Container Registry Cleanup job?
For context, we use the Docker Registry HTTP API to create a new tag for an existing image without having to pull and push the image.
This morning an image that was tagged in this way disappeared mysteriously from the registry, and it was at about the same time that the Cleanup job ran.
This might be a bug with the cleanup job, but I'm not sure how to give you a reproducible bug report if I can't control when the job runs! (And if I could ensure the job runs during the night, it wouldn't be a problem anymore.)
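For completeness, the retag-without-pull trick is the standard manifest GET/PUT sequence from the Docker Registry HTTP API. A sketch with placeholder names (registry host, repository path, tags, and token handling below are all assumptions):

```shell
# Placeholders: registry, repository path, tags, and auth token.
REGISTRY=registry.gitlab.com
REPO=mygroup/myproject/myimage
OLD_TAG=build-1234
NEW_TAG=release-1.0
TOKEN=...   # bearer token for the registry

# Fetch the existing manifest, pinning the exact media type so the
# digest is preserved byte-for-byte.
MEDIA_TYPE=application/vnd.docker.distribution.manifest.v2+json
curl -fsS \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: ${MEDIA_TYPE}" \
  -o manifest.json \
  "https://${REGISTRY}/v2/${REPO}/manifests/${OLD_TAG}"

# Push the same manifest back under the new tag.
curl -fsS -X PUT \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: ${MEDIA_TYPE}" \
  --data-binary @manifest.json \
  "https://${REGISTRY}/v2/${REPO}/manifests/${NEW_TAG}"
```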
Is it possible to set gitlab.rb variables via environment variables from the host? So let's say I define
ROOT_PASSWORD=foo on the host system; now in the docker-compose file I would like to use that env var so its value is used for gitlab_rails['initial_root_password']. Passing environment variables to containers is described here: https://docs.docker.com/compose/environment-variables/#pass-environment-variables-to-containers
The tricky part is GITLAB_OMNIBUS_CONFIG, as this is a multiline string. I tried the following, but without success: the root password was not set, and instead I had to look up and use the generated initial_root_password file.
version: '3.6'
services:
  web:
    image: 'my-registry/dh/gitlab/gitlab-ee:15.0.3-ee.0'
    restart: always
    hostname: 'gitlab.example.com'
    environment:
      SMTP_PASSWORD:
      ROOT_PASSWORD:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.my-domain.com'
        ...
        gitlab_rails['smtp_password'] = ENV['SMTP_PASSWORD']
        gitlab_rails['initial_root_password'] = ENV['ROOT_PASSWORD']
        ...
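One workaround worth trying: Compose interpolates ${VAR} references from the host environment when it parses the file, before the container ever starts, so the value can be substituted directly into the multiline string instead of being read via ENV inside the container. A sketch (untested against this exact image):

```yaml
# Compose replaces ${ROOT_PASSWORD} from the host environment at
# parse time, so no ENV lookup is needed inside the container.
services:
  web:
    image: 'my-registry/dh/gitlab/gitlab-ee:15.0.3-ee.0'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.my-domain.com'
        gitlab_rails['initial_root_password'] = '${ROOT_PASSWORD}'
```

Separately, note that initial_root_password is only honored the very first time the instance is configured; if the container has already been reconfigured once, changing it later has no effect, which could also explain the symptom you saw.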
Hello, I have an issue with submodules and gitlab-runner.
I have a project with submodules, and recursive submodules, like:
This project has been running CI for a few months without any issue, but today I want to remove top/sub-b/subsub-a, and I get this error during repo initialization in CI:
fatal: No url found for submodule path 'top/sub-b/subsub-a' in .gitmodules fatal: run_command returned non-zero status while recursing in the nested submodules of top/sub-b
Initially, GIT_STRATEGY is set to fetch. If I set GIT_STRATEGY to clone, it fixes the issue for this particular runner. But I have several runners, and I can't force the choice of the runner.
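One runner-agnostic workaround is to stop the runner from updating submodules during its get-sources step and handle them explicitly in the job, after clearing out the stale registration. A sketch for the job definition (untested; adjust to your layout):

```yaml
variables:
  # Let the job handle submodules itself instead of the runner's
  # get-sources step, which trips over the removed submodule.
  GIT_SUBMODULE_STRATEGY: none

before_script:
  # Drop stale submodule state left over from previous fetch-based
  # checkouts, then re-initialize from the current .gitmodules.
  - git submodule deinit --all --force
  - git submodule sync --recursive
  - git submodule update --init --recursive --force
```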
gitlabUrl: "https://gitlab.foo.fr/"
imagePullPolicy: IfNotPresent
unregisterRunners: true
concurrent: 2
checkInterval: 10

## Configure integrated Prometheus metrics exporter
## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
metrics:
  enabled: true
  service:
    enabled: true
  serviceMonitor:
    enabled: true

## Configuration for the Pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: rockylinux:8.5
  privileged: true
  tags: "privileged,large"
  runUntagged: false

  ## Configure environment variables that will be injected to the pods that are created while
  ## the build is running. These variables are passed as parameters to the
  ## gitlab-runner register command.
  ##
  ## Note that envVars (see below) are only present in the runner pod, not the pods that are
  ## created for each build.
  ##
  ## ref: https://docs.gitlab.com/runner/commands/#gitlab-runner-register
  ##
  env:
    HOME: /tmp

  config: |
    [[runners]]
      [runners.kubernetes]
        privileged = true
        # build container
        cpu_limit = "2"
        memory_limit = "5Gi"
        # service containers
        service_cpu_limit = "1"
        service_memory_limit = "1Gi"
        # helper container
        helper_cpu_limit = "1"
        helper_memory_limit = "1Gi"
        [runners.kubernetes.volumes]
          [[runners.kubernetes.volumes.host_path]]
            name = "var-dbus"
            host_path = "/var/run/dbus"
            mount_path = "/var/run/dbus"
            read_only = false
          [[runners.kubernetes.volumes.host_path]]
            name = "run-dbus"
            host_path = "/run/dbus"
            mount_path = "/run/dbus"
            read_only = false

## Configure environment variables that will be present when the registration command runs
## This provides further control over the registration process and the config.toml file
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html
##
envVars:
  - name: HOME
    value: /home/gitlab-runner