OK, figured it out. The Docker image for the agent in https://docs.gocd.org/current/gocd_on_kubernetes/importing_a_sample_workflow.html
(gocddemo/gocd-agent-dind:webinar) is running an old version of the GoCD agent (18.3.0-6540).
The tutorial needs to be updated.
@lucaferrarihotmail I’m not sure how that would work in general, GoCD or not. Not saying it isn’t possible, just not straightforward or obvious.
How would a single OS process that crashes midway be picked up by another process on another host from where it left off? There would have to be some state saved, synchronously, and that state would have to be transferred to the new host after the initial crash. Right?
If you are talking about picking up jobs that didn’t finish in a stage, GoCD already does this: when a stage fails midway and you re-run it, GoCD doesn’t re-execute the jobs that already completed within the stage, and only runs the jobs that were unfinished or pending.
At the task level, however, it is not possible to continue unfinished tasks on another agent after the original one dies.
Could you not use an IP address instead? The error, though, indicates a name-resolution problem.
Anyhow, what do you mean by proxy? A reverse HTTP proxy, a forward proxy, or something like a SOCKS proxy?
If your DNS resolution is happening through the proxy, you might check there first, since you got a resolution error.
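If it turns out the agent does need to go through the proxy, one way to do that for a JVM process like the GoCD agent is via the standard Java networking properties. A sketch, assuming the agent picks up extra JVM flags from the `AGENT_STARTUP_ARGS` environment variable and with `proxy.example.com:3128` as a placeholder for your proxy:

```shell
# Sketch: route the GoCD agent's HTTP(S) traffic through a proxy using the
# standard JVM networking properties. proxy.example.com:3128 is a placeholder;
# adjust nonProxyHosts for anything the agent should reach directly.
export AGENT_STARTUP_ARGS="-Dhttp.proxyHost=proxy.example.com \
 -Dhttp.proxyPort=3128 \
 -Dhttps.proxyHost=proxy.example.com \
 -Dhttps.proxyPort=3128 \
 -Dhttp.nonProxyHosts=localhost|127.0.0.1"
```

Note these JVM properties only cover HTTP/HTTPS; they won't help if the proxy is SOCKS-only.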
@lucaferrarihotmail sorry, I don’t know how that would work.
GoCD does not have native support for such a thing, and I imagine no other CI software does either. This sounds like custom functionality specific to your application, so I’m afraid there’s nothing we can do here for that specific solution.
Feel free to elaborate on the specifics of your problem to see if there are alternative approaches, other than the one you hypothesized.
Hi R.Rajalakshmi, it would be more helpful to pose a specific question rather than just saying that you need help.
Try to be as specific as possible:
Hello! Can a task stop the pipeline without being shown as failed? My use case is a pipeline that deploys to a special staging server whenever there are new commits on the main branch. The pipeline is placed behind our testing pipeline so that it is triggered only when all the tests were successful. This is the task that checks for the main branch:
```shell
source ./app_env_vars
if [[ "$APP_CURRENT_BRANCH" == "main" ]]; then
  echo "Yes, it's the main branch!"
  exit 0
else
  echo "That's not the main branch: $APP_CURRENT_BRANCH"
  exit 1
fi
```
It works, but it's distracting that the pipeline is shown as failed when the latest tested commits were not on main.
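Since GoCD has no "skipped" status for a task, one workaround (a sketch of mine, not a GoCD feature) is to fold the deploy into the branch-check task and always succeed, so the pipeline stays green either way; `deploy` below is a hypothetical stand-in for the real deploy command:

```shell
#!/bin/sh
# Workaround sketch: instead of failing on non-main branches, do the deploy
# inside the same task and succeed either way, so the pipeline never shows
# red for a skipped deploy. "deploy" is a hypothetical stand-in.
deploy() { echo "deploying to staging..."; }

check_and_deploy() {
  # In the real task this would first run: . ./app_env_vars
  if [ "$APP_CURRENT_BRANCH" = "main" ]; then
    echo "Yes, it's the main branch!"
    deploy
  else
    echo "That's not the main branch: $APP_CURRENT_BRANCH (skipping deploy)"
  fi
  return 0   # succeed either way so the stage stays green
}
```

The trade-off is that a skipped deploy is then only visible in the console log, not in the pipeline view.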
Replace the old database file (cruise.h2.db) with the newly generated one from the prior step (cruise.mv.db; note the name change).
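The swap itself can be scripted; a minimal sketch, assuming the GoCD server is stopped first and the paths are passed in (on Linux package installs the database directory is often /var/lib/go-server/db/h2db, but verify for your install):

```shell
#!/bin/sh
# Sketch of swapping the old H2 database file for the newly generated MV-store
# file. Stop the GoCD server before running this; paths below are assumptions.
swap_db() {
  db_dir="$1"   # directory containing cruise.h2.db
  new_db="$2"   # path to the freshly generated cruise.mv.db
  mv "$db_dir/cruise.h2.db" "$db_dir/cruise.h2.db.bak"  # keep the old file as a backup
  cp "$new_db" "$db_dir/cruise.mv.db"                   # note the new file name
}

# Example (hypothetical paths):
# swap_db /var/lib/go-server/db/h2db /tmp/migration-output/cruise.mv.db
```

Keeping the .bak copy means you can roll back by reversing the rename if the server fails to start on the new file.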