    Naresh Rayapati (Rao)
    @nrayapati
    Awesome!
    jzr1991
    @jzr1991
    A nice easy one: what's the recommended place to put a "global variable", as it were? One that you might pass in to most of your libraries, but that doesn't fit in application_environments because (a) it's always the same, and (b) you don't want to have to specify the environment on every step (i.e. build_to dev). In this instance for me, it's the application name. I could use the Jenkins job name, but it's not guaranteed to be what I want to use.
    steven-terrana
    @steven-terrana

    sweet! i can answer this one satisfactorily! :)

    JTE exposes a variable called pipelineConfig that you can access in your libraries.

    so if you have a pipeline configuration like:

    applicationName = "my cool app name"
    
    libraries{ … } 
    application_environments{ … }

    then you can say pipelineConfig.applicationName
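
    To make that concrete, a library step could read the value directly; a minimal sketch (the step name and echo are illustrative, not from the thread):

```groovy
// hypothetical someStep.groovy inside a library:
// pipelineConfig exposes the whole pipeline configuration,
// so top-level keys like applicationName are available here
void call(){
    echo "building ${pipelineConfig.applicationName}"
}
```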

    jzr1991
    @jzr1991
    Easy, thanks!
    mfuxi
    @mfuxi

    Hi,

    I just followed this example:
    https://www.jenkins.io/blog/2019/05/09/templating-engine/

    and I'm getting the following message in my pipeline:
    [JTE] Library maven does not have a configuration file.

    What am I missing here?

    steven-terrana
    @steven-terrana

    hey @mfuxi - you're not missing anything. Libraries in JTE can have a library configuration file (library_config.groovy) that lets you validate the configurations users provide.

    JTE logs if a library doesn’t have a library config file but it doesn’t impact anything.

    docs: https://boozallen.github.io/sdp-docs/jte/1.7.1/library-development/validate_library_parameters.html
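
    For reference, a library configuration file is just a library_config.groovy at the root of the library; a minimal sketch based on my reading of the validation docs linked above (the field names here are illustrative, not the maven library's actual parameters):

```groovy
// hypothetical library_config.groovy for the maven library:
// declares which library parameters are required vs optional
// so JTE can validate what users put in pipeline_config.groovy
fields{
    required{
        goals = String        // e.g. "clean package"
    }
    optional{
        skipTests = Boolean
    }
}
```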

    mfuxi
    @mfuxi
    Thanks @steven-terrana
    mfuxi
    @mfuxi

    I don't understand two basic things:
    1) How do I pass the agent parameter when working this way (JTE)?
    2) How do we pass the repository parameter? Let's say our Jenkinsfile calls a step build() which builds our source code; how do I specify the source repo at the start? checkout scm?

    Also, working this way means I will mostly use the scripted pipeline style and no declarative at all, right?

    1 reply
    Vincent Letarouilly
    @Opa-
    Hello @steven-terrana
    Would you be interested in a new feature that would allow JTE to run inside a node?
    I have a Jenkins instance with 3 agents to run the builds. The master is not powerful and we decided not to build projects on the master and have 0 executors on it.
    The issue I'm facing with JTE is that the loading of a JTE build runs on the Jenkins master instance no matter what, and when there are a lot of projects triggering at the same time, or when adding a new multibranch project that needs to build all branches, it puts a high load on the master instance. Having that code run on a node would be very appreciated. If it could be an opt-in option, that would be a plus, to let users choose. I'm not sure it's doable though; maybe you have some input on this?
    Thanks
    2 replies
    mfuxi
    @mfuxi

    ok, so I created node/node.groovy, which looks like this:

    void call(Closure body){
        steps.node(config.label ?: ""){
            body()
        }
    }

    and I got my pipeline_config.groovy:

    libraries{
        node{
        label = "myLabel"
        }
        maven
    }

    and my maven library just runs maven on some source code
    But i'm getting:
    hudson.model.Computer$TerminationRequest: Termination requested by Thread ....
    And I see down below:
    java.lang.IllegalStateException: There is no body to invoke

    8 replies
    steven-terrana
    @steven-terrana

    @mfuxi so it looks like your pipeline configuration should be:

    libraries{
      node{
        label = "myLabel"
      }
      maven
    }

    and then your pipeline template should be:

    build()

    and that ought to work based on what you’ve shown me

    assuming that maven build step is called build.groovy
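
    In other words, the maven library's build.groovy would wrap its work in the node library's step; a sketch, assuming a plain mvn invocation is what that library does:

```groovy
// hypothetical build.groovy in the maven library:
// node{} here resolves to the node library's step shown above,
// which allocates an agent matching the configured label
void call(){
    node{
        sh "mvn clean package"
    }
}
```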
    Gregory Paciga
    @gpaciga

    I'm confused about what the conventions with node allocation are... Looking at sdp-libraries and other examples I've seen online, it looks like typically the steps inside *.groovy files in libraries define

    stage("something") {
        node {
            ....
        }
    }

    but I'm used to a pattern like

    node {
        stage("checkout") { ... }
        stage("unit test") { ... }
        stage("build") { ... }
        stage("etc") { ... }
    }

    so each stage is guaranteed to run on the same node and code only needs to be checked out once. Am I missing something basic? Fixing a particular label doesn't seem to be the solution if multiple nodes are available.

    makashu
    @makashu
    Hi @steven-terrana just recently started exploring JTE and looks great approach to me!
    I am trying to run a post step via the Jenkinsfile to always call the office365connector function, but it seems it's not working for me & I am not doing it right. Can you or anyone please advise on what I'm missing?
    Snippet below from Jenkinsfile & the pipeline error:
    post {
        always {
            echo 'One way or another, I have finished'
            msteams()
        }
    }
    and error:
    [JTE] Library office365Connector does not have a configuration file.
    [JTE] Obtained Pipeline Template from job configuration
    [Pipeline] End of Pipeline
    java.lang.NoSuchMethodError: No such DSL method 'post' found among steps [addBadge, addErrorBadge, addHtmlBadge, addInfoBadge, addShortText, addWarningBadge, archive, artifactPromotion, bat, bbs_checkout, build, catchError, checkout, createSummary, deleteDir, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, emailext, emailextrecipients, envVarsForTool, error, fileExists, findFiles, getContext, git, input, isUnix, jiraAddComment, jiraAddWatcher, jiraAssignIssue, jiraAssignableUserSearch, jiraComment, jiraDeleteAttachment,
    steven-terrana
    @steven-terrana

    hey @gpaciga - there isn’t really a right or wrong way to do it. I like to think about each library as a totally self-encapsulated thing - so to me, relying on a node block already being present (and maybe files already being present) is an external dependency to the library i’d rather avoid.

    That being said, the libraries i work on are used in many different environments by many different teams - so i need to lean towards being cautious and making sure it’ll always work regardless of who is consuming the library. our strategy for handling this is to checkout the source code and create a stash… and then unstash that across the nodes that need it. this isn’t a great solution for large repositories though

    if it’s just you, or just a central team that’s going to be creating the templates and using the libraries then you can really write them however you want to.

    Using a single node block can certainly be a lot more convenient.
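
    The checkout-and-stash strategy mentioned above could look something like this (file names, step contents, and the stash name are all illustrative):

```groovy
// hypothetical checkout.groovy: check out once, stash the workspace
void call(){
    stage("checkout"){
        node{
            checkout scm
            stash name: "source"   // make the source reusable elsewhere
        }
    }
}

// hypothetical test.groovy: a later step, possibly on a different node
void call(){
    stage("unit test"){
        node{
            unstash "source"       // restore the checked-out code
            sh "make test"
        }
    }
}
```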

    1 reply

    @makashu - hey! it looks like you’re trying to use declarative syntax instead of scripted syntax, which is not supported.

    the most “idiomatic” way to do something like that in JTE would be to create a step that has a CleanUp or Notify annotation on it. that annotation will register your step to always run at the end of the pipeline (assuming the library contributing the step has been loaded)

    @CleanUp
    void call(context){
      println "this will always run after the template"
    }

    https://boozallen.github.io/sdp-docs/jte/1.7.1/library-development/lifecycle_hooks.html

    makashu
    @makashu
    thanks @steven-terrana will try that appreciate your quick response!
    Gregory Paciga
    @gpaciga
    I'm running into issues with a regular pipeline job. Is it not possible to load a Jenkinsfile or pipeline config from SCM? It looks like I can only specify them inline in the UI. And, when I do, providing a Jenkinsfile works but providing a pipeline config does not.
    Gregory Paciga
    @gpaciga
    actually ignore the second part - obvious issue was forgetting override=true on the global pipeline config. The first part holds though, no way to set Jenkinsfile/pipeline config from SCM?
    steven-terrana
    @steven-terrana

    right now for regular pipeline jobs you can only define a template and pipeline configuration in the Jenkins UI.

    it’d be possible to allow it to be defined in an SCM - no one’s asked for it before! i think most folks end up just using a MultiBranch Pipeline Job.

    but feel free to open an issue on the repo (https://github.com/jenkinsci/templating-engine-plugin) and i’d be happy to explain how to do it or get to it when i can

    not a great solution - but right now you could create a multi-branch job and filter it to just the branch you care about.

    i do think it’s worthwhile to add SCM support for regular pipeline jobs

    Gregory Paciga
    @gpaciga
    ok thanks for that confirmation! so far we've done everything with regular pipeline jobs since different branches had to be built different ways - but next thing on my list is to look into using multibranch pipelines instead
    steven-terrana
    @steven-terrana

    might be worth taking a look at the Pipeline Libraries i’ve helped to build for our organization.

    https://boozallen.github.io/sdp-docs/sdp-libraries/2.2/index.html

    specifically - the github library.

    For this use case we typically define Keywords that are regular expressions that map to branch names:

    keywords{
        master  =  /^[Mm]aster$/
        develop =  /^[Dd]evelop(ment|er|)$/ 
        hotfix  =  /^[Hh]ot[Ff]ix-/ 
        release =  /^[Rr]elease-(\d+\.)*\d$/
        feature = /^JIRA-(\d+)*$/
    }

    and then those work well with our library so you can write pipelines using our library like:

    on_commit to: feature, {
      // do something on a commit to a feature branch
      // named after some JIRA ticket 
    }
    
    on_pull_request to: develop, from: feature, {
      // do something on PRs to develop
    }
    
    on_merge to: develop , {
      // do something specifically on merge of a PR to develop
    }
    
    on_merge to: master, {
      // do something on a merge to master
    }

    etc

    Gregory Paciga
    @gpaciga
    yeah, I'd like that approach, but would need a Bitbucket Server library. I did some poking on how to identify the source branch, and it's definitely doable with their REST API and probably their Java SDK too, but I'm not sure how to go about taking it any further than that.
    although i probably don't even need the from: functionality
    Scott Crosby
    @iscooter
    This is curious. I just saw JTE for the first time. I created something similar a few months ago to enable managing hundreds of pipelines easily. https://github.com/iscooter/nopipeline-containers
    I used callable objects within the DSL to control pipeline level logic for integrating apps or stages in containers. Has anyone used JTE to extend app-in-docker to the developer?
    steven-terrana
    @steven-terrana

    @iscooter hello! not exactly sure what you mean by “app-in-docker”.

    i can tell you that i typically recommend libraries use container-images for pipeline runtime environments using the Docker Pipeline Plugin but don’t typically go so far as to recommend the 3 musketeers pattern and rely exclusively on Make/Compose/Docker.

    that being said - there’s no reason you couldn’t build your libraries that way.. where each “step” is really just a Make target.

    JTE might be a nice way to orchestrate your nopipeline-containers framework, by letting you centralize a pipeline template that loads the shared library instead of needing to put that in every repo

    Daniel Fullarton
    @linead
    Hey @steven-terrana - ever tackled mono repos with JTE?
    4 replies
    Scott Crosby
    @iscooter
    Thanks @steven-terrana, that was a dumb question, half-baked on my part. The route I’ve taken is to define the pipeline stage order and actions in JSON. There’s no Jenkinsfile, JSL or script to expose to the user. They just commit the actionable code, and the “type” of pipeline (make, python, container, etc) just does that work. Integration is simple: from Groovy, we parse the JSON and process it in a runner with Python. We have one Jenkinsfile in hundreds of repos, where the user is directed to only change the JSON as needed, or we define a new param for them. The 3 musketeers pattern is only for the... how do we say... Jenkins nuts like us :)
    I am however still interested to dive into these libs you’ve baked up. I see lots of opportunity. Plus the JTE framework sucks it in with less work for me. Just need to see how to add it in seamlessly with our set of vars/call() functions.
    2 replies
    Gregory Paciga
    @gpaciga

    so I (perhaps naively) thought that @AfterStep and @BeforeStep would trigger on calls to stage(). I take it they only work on steps defined within JTE? Is there a way to put hooks on stage() calls instead? My use case right now is that I'm trying to put together a way of reporting on the results each stage individually. I'm putting together a Jenkinsfile with a bunch of on_commit and on_pull_request calls (from my own library), each one wrapping a different set of stage() calls that aren't abstracted into JTE yet. Using @AfterStep and @BeforeStep to act as listeners, the list of steps the pipeline goes through looks like

    on_pull_request
    on_commit
    on_commit
    on_commit

    when in fact only one of these will ever actually do anything. What I'd like to see is the list of stages that actually ran. Maybe this is more of a Jenkins-in-general question than a JTE one, just that the lifecycle hooks had me optimistic that I'd be able to do this easily within JTE.

    13 replies
    Gregory Paciga
    @gpaciga
    do stages have to be JTE-defined steps, or can I do arbitrary code? e.g.
    stages {
        some_stage_name {
            stage("my stage") {
                // normal stage as I'd define it in a scripted pipeline
            }
        }
    }
    steven-terrana
    @steven-terrana
    they have to be JTE steps
    Gregory Paciga
    @gpaciga
    hmm I'm this close to defining a JTE step that just wraps a vanilla stage with my before and after logic built in, but that feels pretty hacky
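
    For what it's worth, that wrapper idea could be as small as this (the step name and the before/after logic are purely illustrative, not an established pattern):

```groovy
// hypothetical wrapped_stage.groovy:
// runs a vanilla stage with reporting hooks around the body
void call(String name, Closure body){
    echo "starting stage: ${name}"   // before-stage reporting
    stage(name){
        body()
    }
    echo "finished stage: ${name}"   // after-stage reporting
}
```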
    steven-terrana
    @steven-terrana
    lol it’s one of those… if you really understand what you’re doing then i’d call it more clever than hacky. but with great power…..
    Gregory Paciga
    @gpaciga
    I think I clearly don't :) Something for me to sleep on, though.
    steven-terrana
    @steven-terrana
    if you decide to do that.. feel free to post here and i’d be happy to tell you if there’s any glaring issues you’ll run into.
    freestyler164
    @freestyler164
    Hi there, since context.status is not available anymore, I was trying to use currentBuild.result. Is this available at @CleanUp? I seem to get a null value in CleanUp
    freestyler164
    @freestyler164


    currentBuild.currentResult seems to be the one I was looking for.

    Gregory Paciga
    @gpaciga
    regarding library configs, if I have an optional config, does it make sense to set it to a default value if it's not provided, within @Init or maybe a @Validate?
    @Init
    void call(Map args = [:], Closure body) {
        config.myparam = config.myparam ?: 'defaultValue'
    }
    steven-terrana
    @steven-terrana
    i don’t think updates to the config variable will persist, so you’d have to do it in each step. It would be an interesting idea to be able to declare the default value for optional parameters inside your library configuration file
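
    Per-step defaulting would just be a local fallback inside each step that reads the optional value (names are illustrative):

```groovy
// resolve the optional config locally instead of mutating config
void call(){
    String myparam = config.myparam ?: "defaultValue"
    echo "using myparam = ${myparam}"
}
```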
    Gregory Paciga
    @gpaciga
    they do seem to persist, but sounds like that might be a happy accident rather than purposeful
    i guess the usual pattern is to just check if it's defined in the steps that need it?
    steven-terrana
    @steven-terrana

    hahah i feel like that’s a happy accident that i should probably break tbh..

    bc imagine library A contributes a step A and library B contributes a step B..

    in A i could do: B.config = [:] and break another library’s step

    it assumes malicious intent bc you’d have to go pretty out of your way to hit that situation
    but i don’t think it should be possible

    i do think it would make sense for library configs to declare default values.

    or for you to only be able to manipulate that config variable within the same library

    steven-terrana
    @steven-terrana
    Thoughts? i’m open to ideas here on how you all think it should work