Hey everyone,
I'm currently having a bit of trouble understanding GitLab's whitelisting.
I set up Rack Attack as described here:
https://docs.gitlab.com/ee/security/rack_attack.html#settings
but the whitelisted IPs still get blocked.
grep "Rack_Attack" /var/log/gitlab/gitlab-rails/auth.log because to many requests goes to "/jwt/auth?account=..."
Does anybody know how to do it correctly?
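For reference, this is roughly what I have in /etc/gitlab/gitlab.rb on my Omnibus install, based on that page, followed by gitlab-ctl reconfigure (the IPs and limits below are placeholders, not my real values):

gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => true,
  # IPs that should never be throttled (placeholders):
  'ip_whitelist' => ["127.0.0.1", "192.168.1.1"],
  # allow this many Git HTTP auth attempts per IP ...
  'maxretry' => 10,
  # ... within this window (seconds) ...
  'findtime' => 60,
  # ... and ban the IP for this long (seconds) once exceeded:
  'bantime' => 3600
}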
Hi all,
can anybody tell me whether you are having any problems with the GitLab git push options at the moment? I mean especially the
script:
  - git push origin $MERGE_BRANCH_NAME -o merge_request.create -o merge_request.target=master -o merge_request.remove_source_branch -o merge_request.merge_when_pipeline_succeeds
when creating a merge-request that should be automatically merged upon successful pipeline completion.
Yesterday evening my last pipeline succeeded and the merge request was automatically created & merged, but today I'm getting the response
"To create a merge request for merge-branch-1599640565, visit:
https://gitlab.com/..."
and the MR is NOT created :-(
$ docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
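For context, the job is defined roughly like this; as far as I understand, the docker:dind service (or a DOCKER_HOST pointing at a reachable daemon) is what the docker CLI needs here. The job name and image tags below are placeholders:

build-image:
  image: docker:19.03.12              # placeholder tag
  services:
    - docker:19.03.12-dind            # provides the daemon the docker CLI talks to
  variables:
    DOCKER_HOST: tcp://docker:2375    # point the CLI at the dind service
    DOCKER_TLS_CERTDIR: ""            # disable TLS for simplicity; the daemon then listens on 2375
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY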
Hey, I'm trying to connect to my self-hosted MicroK8s cluster.
But I'm getting
There was a problem authenticating with your cluster. Please ensure your CA Certificate and Token are valid.
Output of sudo gitlab-ctl tail -f
when updating the Kubernetes cluster setting:
{
"method":"PATCH",
"path":"/ochorocho/composer-test/-/clusters/22",
"format":"html",
"controller":"Projects::ClustersController",
"action":"update",
"status":302,
"location":"https://gitlab.knallimall.org/ochorocho/composer-test/-/clusters/22",
"time":"2020-09-10T22:08:21.120Z",
"params":[
{
"key":"utf8",
"value":"✓"
},
{
"key":"_method",
"value":"patch"
},
{
"key":"authenticity_token",
"value":"[FILTERED]"
},
{
"key":"cluster",
"value":{
"name":"MicroK8s",
"platform_kubernetes_attributes":{
"api_url":"https://ochorocho.ddns.net:16443/api/v1/",
"ca_cert":"-----BEGIN CERTIFICATE-----\r\nMIIDATCCAemgAwIBAgIJAIpuVGLR9iLFMA0GCSqGSIb3DQEBCwUAMBcxFTATBgNV\r\nBAMMDDEwLjE1Mi4xODMuMTAeFw0yMDA5MTAxODQ0MjJaFw0zMDA5MDgxODQ0MjJa\r\nMBcxFTATBgNVBAMMDDEwLjE1Mi4xODMuMTCCASIwDQYJKoZIhvcNAQEBBQADggEP\r\nADCCAQoCggEBAMc46zPNDPL2UPmcjbbYO/Tk1Y0JE1ft0EerIwV7Ef4Fo7L3NRZ4\r\nz+uaK6PxQW4PsP8DmsHbU6FUU5g435gTWOxajH9rArNutniJZtz7tBLnxpnIGgm7\r\n6pMZ8+stY9BUO10ALdLLrS6SFUGgiAQNYzN8qxDlvJmXLxUL8AiTkTKP8HiwEd\r\n3alCuwFp2z5CA47X9ar1mBW/U84BRJKXQJm+aEOCSOhxkzsxIQYOD3AuQpOo9lN\r\nWrRS9GsSaJXPWZkJPyvyd+dO9lDiLbb6nGCBbgm2Td/c7rXd8GcXJ6Cs3TOMdD\r\nsChouEgpp10b9duJ/A6UwcD863RMrBqSSTcCAwEAAaNQME4wHQYDVR0OBBYEFFIv\r\njdZMn2bsYIfGPEmw7duXbnKwMB8GA1UdIwQYMBaAFFIvjdZMn2bsYIfGPEmw7duX\r\nbnKwMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAE9Yyi4OA40YO6o+\r\nalnuzYV5zZBGBr2rptZCYw1eO1QvNZh96kJ89WGRakh5WmpCdQMBR/LBZGcaWtk8\r\n8Wl7qgEvqX3xQqtRN/eBaNXVqAefKH9xirSjmD0AE8i3gGliXB/7U0tS4WqOrRSg\r\nmv1Y8KaL3PwSHV4/pF8e8ehHOGpC+eV9baPgvKDS/H3sqjrFwUB7Jdn81JXLBvDO\r\nS33v/k1WOBRCmsUmsnE9Xqw22gXu1/kgtBhB7jrODY8mDf/ZJ1M9xRPZ2LouqUDe\r\nHo55pbzAiu/BoGATaNfit2Fj57YIV6RxlyjZBNL7nXPy0bC2WlmZa4lg0ryh3v4X\r\n5xTHbAk=\r\n-----END CERTIFICATE-----",
"token":"[FILTERED]",
"namespace":"",
"id":"22"
},
"managed":"1"
}
},
{
"key":"namespace_id",
"value":"ochorocho"
},
{
"key":"project_id",
"value":"composer-test"
},
{
"key":"id",
"value":"22"
}
],
"remote_ip":"84.191.xxx.xxx",
"user_id":2,
"username":"ochorocho",
"ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:80.0) Gecko/20100101 Firefox/80.0",
"correlation_id":"0Cs04roVAI5",
"meta.user":"ochorocho",
"meta.project":"ochorocho/composer-test",
"meta.root_namespace":"ochorocho",
"meta.caller_id":"Projects::ClustersController#update",
"redis_calls":4,
"redis_duration_s":0.001124,
"redis_read_bytes":1589,
"redis_write_bytes":722,
"redis_cache_calls":3,
"redis_cache_duration_s":0.000824,
"redis_cache_read_bytes":205,
"redis_cache_write_bytes":137,
"redis_shared_state_calls":1,
"redis_shared_state_duration_s":0.0003,
"redis_shared_state_read_bytes":1384,
"redis_shared_state_write_bytes":585,
"queue_duration_s":0.00326,
"cpu_s":0.06,
"db_duration_s":0.0721,
"view_duration_s":0.0,
"duration_s":0.15489,
"db_count":18,
"db_write_count":1,
"db_cached_count":1
}
I used a self-signed cert and added it to the trusted certs to circumvent SSL issues, in case this matters.
How can I fix this issue or debug it further?
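In case it's relevant, this is roughly how I pulled the CA certificate and token out of the cluster, following the generic "add existing cluster" steps; <secret-name> is a placeholder for whatever microk8s kubectl get secrets lists:

# list the secrets to find the service-account token secret (the name varies):
microk8s kubectl get secrets
# extract the CA certificate from that secret (replace <secret-name>):
microk8s kubectl get secret <secret-name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
# extract the bearer token GitLab should authenticate with:
microk8s kubectl get secret <secret-name> -o jsonpath="{.data.token}" | base64 --decode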
Does anybody know what this log message in the middle of a CI run means?
[runner-0277ea0f-project-12612417-concurrent-0:25622] Read -1, expected 23552, errno = 38
For example here: https://gitlab.com/npneq/inq/-/jobs/737608492#L5676
(It doesn't make the runner fail per se, but it fills up the log file beyond 4 MB.)
This typically happens when running with mpirun.
$ pip install awsebcli -q --upgrade
pyrsistent requires Python '>=3.5' but the running Python is 2.7.13
ERROR: Job failed: exit code 1
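If the job is simply running on a Python 2 image, something along these lines should get pip running under Python 3; the job name and image tag below are assumptions on my part, not from the original pipeline:

deploy:
  image: python:3.8     # any Python >= 3.5 image should satisfy pyrsistent
  script:
    - pip install awsebcli -q --upgrade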
I have a test stage followed by an rpm:publish stage, and I would like to skip compiling in the second stage and instead get the compiled files from the first stage. What is the ideal way of doing this?
While the cache could be configured to pass intermediate build results between stages, this should be done with artifacts instead.
However, this doesn't need to be an artifact that's saved and stored somewhere, because it is only needed for the consecutive stage. Hence I am a little confused.
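From the docs I understand something like this is what's intended; the stage names test and rpm:publish come from my pipeline, while the job names, paths, commands and expiry below are placeholders:

stages:
  - test
  - rpm:publish

compile:
  stage: test
  script:
    - make build                 # placeholder for the actual compile step
  artifacts:
    paths:
      - build/                   # compiled files handed to the next stage
    expire_in: 1 hour            # only kept as long as the following stage needs them

publish:
  stage: rpm:publish
  script:
    - ls build/                  # artifacts from the test stage are restored automatically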
allow_failure: true
on the previous step instead?
rules:
  - if: $CI_COMMIT_BRANCH =~ /^release\/v\d+.\d+.\d+$/
    when: manual
    allow_failure: true
Hi, wondering if anybody could help me with an issue with using trigger / bridge jobs.
I have Project A which contains global variables in the CI YAML. Project B downstream also contains global variables which I do NOT want to override. I understand upstream variables have precedence over downstream vars, so I want to avoid passing Project A's global vars to Project B.
My .gitlab-ci.yml for Project A looks like:
variables:
  MY_VAR: "set in the upstream job"

downstream_project_b:
  stage: trigger_downstream
  variables: {}
  trigger: myprojects/project_b
In Project B I have:
variables:
  MY_VAR: "set in the downstream job"

test:
  stage: test
  script:
    - echo "$MY_VAR"
As you can see I have attempted to unset the global vars in Project A by using variables: {}
in the job; however, $MY_VAR is still being passed to Project B and overriding $MY_VAR there. Is there any way I can unset it? Thanks in advance for any tips!
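One thing I'm planning to try is inherit:, which as far as I can tell is the documented way (GitLab 12.9+) to stop a trigger job from forwarding the global variables downstream; sketch below, untested on this project:

downstream_project_b:
  stage: trigger_downstream
  inherit:
    variables: false             # don't forward Project A's global variables to the downstream pipeline
  trigger: myprojects/project_b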
environment:
  name: review/$PROJECT_NAME-$CI_COMMIT_REF_NAME
  url: http://$CI_PROJECT_ID-$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  on_stop: reviewApiStop
That works because GitLab respects review/* as part of Auto DevOps and provides the magic sauce to make it work, but if I try the same with staging like so:
environment:
  name: staging/$PROJECT_NAME
  url: http://$CI_PROJECT_PATH_SLUG-staging-api.$KUBE_INGRESS_BASE_DOMAIN
It doesn't work, because Auto DevOps only handles staging and not staging/*.