    steven-terrana
    @steven-terrana
    @tapasmishra. sure, this oughta do it:
    // e.g. run from the Jenkins Script Console: switches every multibranch project over to the JTE project factory
    import jenkins.model.Jenkins
    import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
    import org.boozallen.plugins.jte.job.TemplateBranchProjectFactory

    Jenkins.get().getItems().findAll{ it instanceof WorkflowMultiBranchProject }.each{ job ->
      if( !(job.getProjectFactory() instanceof TemplateBranchProjectFactory) ){
        println "Noncompliant project: ${job.getFullName()}"
        println "Changing project to use JTE."
        job.setProjectFactory(new TemplateBranchProjectFactory())
      }
    }

    Jenkins.get().save()
    Rasmus Praestholm
    @Cervator
    Question: I had more or less assumed that JTE libraries and regular pipeline libraries were interchangeable to some degree - but now that I'm looking at how you might evolve from a regular Jenkinsfile + pipeline libs to offering the useful libraries via JTE as well, I'm not so sure anymore?
    I had figured a typical workflow to adopt the JTE might be that teams with existing Jenkinsfiles + libs would move as much of the goodness into their libs, then we could steal their work for the greater glory of the enterprise and deliver the same via minimal templates and the same libs, while other teams still use the libs directly with regular Jenkinsfiles (until ready to accept the mantle of responsibility the JTE would press upon them)
    steven-terrana
    @steven-terrana

    hey @Cervator!

    that’s an interesting use case. the challenge would be that libraries in JTE are loaded differently than regular jenkins shared libraries.

    so what i would probably recommend is that you do a mini migration up front.

    Pipeline templates in JTE get executed the same way as regular Jenkinsfiles.

    so you could copy and paste their Jenkinsfile as their pipeline template and it would work.

    and then over time, you can migrate their libraries from being regular jenkins pipeline libraries to being JTE libraries.
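
    (to illustrate that layout difference with a made-up maven library: a regular shared library exposes steps under vars/, whereas a JTE library is just a directory named after the library with its step files at the root)

    // regular Jenkins shared library
    vars/
      mavenBuild.groovy         // exposed as mavenBuild()

    // roughly equivalent JTE library
    maven/
      build.groovy              // exposed as build()
      library_config.groovy     // optional configuration validation for the library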

    In fact, by default JTE will look for a Jenkinsfile in the repository so long as allow_scm_jenkinsfile = true in the aggregated pipeline configuration.

    so you’d just need to change things from a Jenkins configuration side to be using JTE and they probably wouldn’t have to change anything for it to still work.
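
    as a rough sketch, the aggregated pipeline configuration for that transition period might just be (the library name here is a placeholder):

    // hypothetical governance-tier pipeline_config.groovy during the transition
    allow_scm_jenkinsfile = true // keep executing each repo's existing Jenkinsfile as its template

    libraries{
      sonarqube{
        // placeholder: libraries you migrate to JTE would get configured here over time
      }
    }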

    Rasmus Praestholm
    @Cervator
    Curiously, do you think it would be conceptually possible to load a JTE library from a named definition for a regular lib? As in, a lib inclusion in the JTE named MyLibX would first look in the JTE defined lib repo but if not found then look in the active master and if there's a regular pipeline lib definition named MyLibX then go retrieve it from its repo? Or even allow the cascading loading with merge/override type functionality that way too. I don't mind digging in a bit if it means a better plugin down the line, but I figure you'd have an idea if there's an architectural obstacle to even the concept of doing it :-)
    Partly I'm wondering because there are so many teams and libs, with some wanting to remain on an older version of some things for :reasons: for some period of time. Some teams are allergic to upgrades until they can pick a later time to try out the new stuff themselves. Which could be alright if we just do an upgrade in "permissive" mode of sorts that would let them define lib levels themselves until they're satisfied with testing, or run out of time at which point the top governance tier then starts enforcing versions again
    And yeah I've been playing with it just dropping in regular Jenkinsfiles :-) The lib and other config stuff is what I've dealt with less. In open source land I just needed to be able to use one Jenkinsfile across 200 repos and that's working great
    steven-terrana
    @steven-terrana
    i’m gonna ramble a bit in my answer to that :) the upfront TL;DR is: yes it’s conceptually possible.
    longer answer coming haha
    Rasmus Praestholm
    @Cervator
    Great, i'm a fan of long rambles! :grin:
    steven-terrana
    @steven-terrana

    so! JTE went through many design iterations. it actually started out itself as a regular old Jenkins Shared Library and worked by dynamically loading other Jenkins Shared Libraries from a config file.

    over time, to minimize the amount of configuration needed and generally lower the technical barrier to entry, it became a plugin!

    the biggest challenge that regular Jenkins Shared Libraries present for JTE’s use case is that you can load the same step from the vars directory of different libraries and it doesn’t fail. if i remember correctly, the first implementation of the step that gets loaded is the one that gets invoked.

    there was also the challenge of autowiring the config variable that libraries get access to.

    so going into the weeds a bit deeper here, and i might be overexplaining some concepts so forgive me if i am, JTE takes advantage of the Groovy Binding. The TL;DR of groovy bindings is that any time you declare a variable without giving it a type, it gets stored in the binding.

    i.e.,

    x = 3     // equivalent to getBinding().setVariable("x", 3)
    def x = 3 // this actually gets transformed into a field on the class that groovy creates from the script during compilation

    the binding is shared, so when JTE is initializing the runtime environment for the pipeline template, what we’re actually doing is creating a series of objects representing the different “things” in JTE (steps, application environments, stages, keywords, etc) and storing them in a custom binding implementation that is able to track what objects in the binding came from JTE.

    by tracking that, we can now throw exceptions if multiple instances are created for the same thing (for example, loading the same step from two different libraries).
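
    conceptually, it looks something like this (an illustrative sketch with made-up class and method names, not the plugin's actual source):

    // illustrative sketch only, not the plugin's actual classes
    class TemplateBinding extends Binding {
        Set<String> jteManaged = [] as Set  // names that JTE itself injected

        void setJteVariable(String name, Object value, String sourceLibrary) {
            if (jteManaged.contains(name)) {
                throw new IllegalStateException("step '${name}' already loaded; library '${sourceLibrary}' tried to load it again")
            }
            jteManaged.add(name)
            setVariable(name, value)
        }
    }

    def binding = new TemplateBinding()
    binding.setJteVariable("build", {}, "maven")
    binding.setJteVariable("build", {}, "gradle") // throws: two libraries loading the same step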

    This implementation also gave us a ton more flexibility, so we can use metaprogramming to automagically wire up the config variable based upon the step’s library configuration.

    TL;DR -> regular jenkins shared libraries are implemented differently and would make it difficult to maintain some of the functionality of JTE (primarily the autowired config variable)

    ————

    however!

    we recently released a feature that lets you extend JTE to add custom library providers. so now, you can load libraries for JTE either from an SCM repository or packaged as a plugin. This extension point is exposed, so it would be possible to add additional library providers.

    i’m mentioning this, because it would be possible to create a Library Provider that lets you pull in regular Jenkins Shared Libraries so they can be used either in JTE or in regular old Jenkinsfiles.

    if you’re interested in pursuing that use case, i’d be more than happy to work with you to show which classes can be extended and how JTE initialization works in general
    i’m going to be pretty out of pocket until after KubeCon next week though
    Rasmus Praestholm
    @Cervator
    oh you'll be at KubeCon then? Me too! I'm even supposed to do some presentation at the CodeFresh Zero Day thing, i really should actually finish that thing already :-)
    also: "JTE went through many design iterations. it actually started out itself as a regular old Jenkins Shared Library and worked by dynamically loading other Jenkins Shared Libraries from a config file." - yeah that's exactly what i'm trying to prevent here haha. I figured that would be reinventing the wheel, and kept pointing at the JTE to insist the thing already exists, so don't rebuild it!
    Rasmus Praestholm
    @Cervator
    and yeah, that field thing sounds very familiar, i've had to do some acrobatics before related to that :-) definitely interested in digging deeper there. Maybe we could even find a quiet moment at KubeCon? I'm not even sure yet which sessions to catch myself
    steven-terrana
    @steven-terrana
    :)
    but yeah, i’d love to meet! feel free to message me on here next week and we’ll find some time
    Rasmus Praestholm
    @Cervator
    Nice! I'll aim to make it there for sure :+1:
    steven-terrana
    @steven-terrana

    @Cervator looking forward to it.

    in the meantime, i’ve just added an ADOPTERS file for JTE. I’d love to showcase your use case and in general, have a forum to track our community of users!

    If your use of JTE is something that you can share, i’d love it if you’d be able to add a short blurb.

    Docs page: https://jenkinsci.github.io/templating-engine-plugin/master/ADOPTERS.html

    https://github.com/jenkinsci/templating-engine-plugin/blob/master/docs/ADOPTERS.rst

    same goes for anyone else here using JTE! It would be great to capture your use case, and you’ll be featured in the JTE docs!
    cc: @linead
    Rasmus Praestholm
    @Cervator
    Sure! I'm actively using it to build 200 game content modules for Terasology, just need to make it all more visible then i'd love to write a thing about it. Then also trying to convince a work client to use it, that'd probably be a ways out (might not really start till January)
    Terasology being my open source project, that is, so that's easy to share. And it'll be a lot more than 200 over time, which is why just having a central JTE-hosted Jenkinsfile + a GitHub Org job is so great :-)
    steven-terrana
    @steven-terrana

    that’s awesome. really glad it’s adding value for you.
    alright, gotta get back to building the kubecon demo :)

    looking forward to catching up next week.

    Rasmus Praestholm
    @Cervator
    likewise, see you there :wave:
    Joost Schriek
    @joostschriek
    heya - i'm evaluating using the JTE plugin for some fun projects, but am running into some issues surrounding agent selection. i couldn't find that much on the docs site
    i basically want to include some libraries (e.g. kaniko and helm) and have the library specify the k8s pod, meaning that kaniko and helm each have a separate container they run in
    instead of having a big k8s deployment upfront, it would use the container and podtemplates used in the kubernetes-plugin
    i'm seeing the pods being created eventually, but right off the bat it seems to create and delete them almost instantly for about 3 mins
    is there a source file or some docs that i could look at for a more "advanced" pod deployment type of scenario?
    steven-terrana
    @steven-terrana

    hey @joostschriek - JTE doesn’t actually do anything within libraries for agent selection. This is coming up frequently enough that i’ll add it to my backlog to write up a docs page about it :)

    Writing library code is essentially the same as writing jenkins pipeline-as-code just with some syntactic sugar (like the config variable to access a library configuration).

    so i would probably do something like:

    String podLabel = config.podLabel // from pipeline_config.groovy library config 
    podTemplate {
        node(podLabel) {
            stage('Run shell') {
                sh 'echo hello world'
            }
        }
    }

    and then each library could specify

    libraries{
      kaniko{
        podLabel = "kaniko"
      }
      helm {
        podLabel = "helm"
      }
    }

    (* this assumes you have pod templates defined in jenkins for kaniko and helm..)

    not sure if this exactly matches the use case you’re trying to achieve, but the general guidance is the same. externalize the configuration via the pipeline_config.groovy file to dynamically specify which agent the library should use.
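
    for example, a library step (a hypothetical kaniko/build.groovy; the image field, the workspace stash name, and the executor flags are all assumptions) might look like:

    // hypothetical kaniko/build.groovy library step
    void call() {
        node(config.podLabel) {     // podLabel comes from this library's block in pipeline_config.groovy
            unstash "workspace"     // assumes a workspace stash exists; adjust to your setup
            sh "/kaniko/executor --context . --destination ${config.image}"
        }
    }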

    Joost Schriek
    @joostschriek
    Hm, i tried doing something like that (only with the more specific container function that is included in the kubernetes plugin, to make sure code gets executed on a specific container within a pod) but it was taking forever and sometimes just timed out. maybe i just typo'd something and didn't see it.
    Enrique Fernández-Polo Puertas
    @Quaiks

    Hello!! I've been looking at the code and I think the implementation is not prepared for ephemeral agents that are reused throughout the build, especially for Kubernetes.

    The plugin calls the node step twice even before the actual pipeline implementation (once for checking out the code and once for archiving the pipeline config). I think the most common scenario is to create a new pod for the entire pipeline and get rid of it at the end. This means that all the "logic" of the pipeline has to run inside a podTemplate { node(POD_LABEL) { theJTEstuff() } } block

    What do you think?

    I have created this PR, which is a huge WIP: jenkinsci/templating-engine-plugin#34. I am not able to build the plugin locally and test it :(
    Enrique Fernández-Polo Puertas
    @Quaiks
    It would be awesome to have something like this at the end
    allow_scm_jenkinsfile = false
    skip_default_checkout = false
    startPod {
      build()
    }
    steven-terrana
    @steven-terrana

    hey @Quaiks - i agree that the use of the node blocks isn’t perfect right now.

    The bare minimum improvement would be to let you define agent labels from the pipeline config.
    but JTE is first and foremost a framework for pipeline templating and governance that ideally should not care what tools are being used.
    I think the optimal solution would be for the framework to allow users to override specific steps of the initialization process like creating the workspace stash, etc.
    perhaps the framework can check for the implementation of a step that overrides these behaviors, and if not present, falls back to the default.

    So - to start with that idea, https://github.com/jenkinsci/templating-engine-plugin/blob/master/src/main/resources/org/boozallen/plugins/jte/TemplateEntryPoint.groovy

    Could instead look for steps called createWorkspaceStash and archiveConfig, and leverage those implementations if present.
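
    in pseudocode, the idea would be something like (the helper names here are made up, not the real entry point internals):

    // conceptual pseudocode only, not the actual TemplateEntryPoint code;
    // defaultCreateWorkspaceStash() is a made-up placeholder for today's default behavior
    if (binding.hasVariable("createWorkspaceStash")) {
        binding.getVariable("createWorkspaceStash").call() // a loaded library overrides the default
    } else {
        defaultCreateWorkspaceStash()
    }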

    i do disagree with what you’re calling the most common scenario. For me, the most common scenario is that each library has an associated container image that contains runtime dependencies for that step. like a sonar-scanner image for doing sonarqube analysis, a helm image for doing helm deployments, a dind image for building container images, etc.

    The use case you’re talking about is perfectly valid though, and can be implemented right now with JTE (assuming those changes to the template entry point are made).

    in that case, you would have a library that contributes a step startPod that takes in a closure argument, makes an invocation to get an agent using the pod template, and then invokes the closure inside of it.

    so long as none of the steps invoked within that closure (in the example, build) invoke their own node blocks, then that would work how you’d like it to
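
    a minimal sketch of what that library step could look like (the file name, config.podYaml field, and reliance on the kubernetes plugin's podTemplate/POD_LABEL are all assumptions):

    // hypothetical startPod.groovy step in a JTE library
    void call(Closure body) {
        podTemplate(yaml: config.podYaml) { // pod spec supplied via the library's configuration
            node(POD_LABEL) {               // POD_LABEL comes from the kubernetes plugin
                body()                      // e.g. build(), as long as it doesn't open its own node block
            }
        }
    }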

    Enrique Fernández-Polo Puertas
    @Quaiks
    that's exactly what I did!
    it's working like a charm
    about the common scenario, I mean disposable agents that are created on the fly. Instead of an image, it's a pod in k8s
    About the default checkout jenkinsci/templating-engine-plugin#35
    steven-terrana
    @steven-terrana
    awesome! i saw your PR, love the idea! just need to find a good place to put those configuration options in the documentation and then we’re good to merge it
    Enrique Fernández-Polo Puertas
    @Quaiks
    Hello again!!
    Enrique Fernández-Polo Puertas
    @Quaiks
    is it possible to pass additional arguments to a hook??
    just the context is not enough for us
    JoshD
    @jd0x

    so one thing that would work for that is to have a library that implements a node step and takes a configuration option for the label you’d want to apply.

    node.groovy

    void call(Closure body){
        steps.node(config.label ?: ""){
            body()
        }
    }

    So if this was in a library called… example.. then your configuration would look like:

    libraries{
        example{
            label = "your custom node label"
        }
    }

    and then you could have a library_config.groovy file in the library for setting the field to be a string

    fields{
        required{
            label = String
        }
    }

    Hi all! Currently having trouble getting my agent node labels applied. Using the approach above, do I execute the node.groovy function by adding node() to my Jenkinsfile?

    JoshD
    @jd0x
    when I renamed my lib source and function, the wrapper was not applied. Is there a requirement for it to be named node.groovy?
    steven-terrana
    @steven-terrana

    @Quaiks - passing arguments directly from the template to the hooks is not possible because, well, the templates don’t invoke hooks.

    Hooks do have access to their library configuration just like anything else though - so you could have

    libraries{
      myLibraryWithHook{
        myHookArg = "whatever"
      }
    }

    and then if you had myLibraryWithHook/someHook.groovy

    @BeforeStep
    void call(context){
        println "the hook context is -> ${context}"
        println "my hook library configuration -> ${config}"
        println config.myHookArg
    }

    does this fit your use case?

    @jd0x - hello!

    if you loaded the library, then your node step would be invoked every time you did something like

    node{
      // some pipeline code
    }

    that example assumes that there aren’t any calls to a node block already passing a label though, since the step only takes a Closure parameter

    so if you’re positive that the library is being loaded and it still doesn’t seem to be executing your version of the node step, then i’d look for cases where you’re doing something like:

    node("some-label"){
      // pipeline code
    }

    if you want to be able to support both use cases, you can: update that example node implementation to take a label parameter that defaults to null, use it if it gets passed via the method invocation, and otherwise fall back to the one defined in the library config.

    node.groovy:

    void call(String label = null, Closure body){
        steps.node(label ?: config.label ?: ""){
            body()
        }
    }
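
    so, hypothetically, both of these forms in a template would then route through the library's node step:

    node {                  // no label passed, falls back to config.label (or "")
      sh "echo default label"
    }

    node("some-label") {    // an explicit label wins over the library configuration
      sh "echo explicit label"
    }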