    Felix Kling
    @KlingFelix
    @TehMeh Just to catch up: Are things working now?
    Simonas
    @TehMeh
    Hi @KlingFelix , thanks for asking!
    I did not bother with updating the interface. It's probably a matter of time before Pythia 8.3 comes to MadGraph; until then I'll be using my version, since it does the job.
    abdtnourji
    @abdtnourji

    Dear expert,

    First of all, I wish you a happy new year!

    I am a PhD student and I have started to use MadMiner for my analysis.

    First, I started with the tutorial (everything works fine), then I tried to adapt the code to my case. I generated new ttbar data using the dim6top_LO_UFO model and defined the parameter space.

    Then, I tried to add a spin correlation observable (very important for my analysis), but I do not know how to do it.

    To compute spin correlations, one needs to:
    Define the pt, eta, energy/mass, and phi for particles like top/antitop and lepton/antilepton
    Be able to use TLorentzVector methods:
    SetPtEtaPhiM, SetPtEtaPhiE, Boost …
    Below, I put an example of how to compute one of these spin correlation observables using C++.

    Thank you in advance.

    A.Tnourji.

    // Method to compute cos(theta) in the direction of top quark

    inline auto cos_theta_helicity (int sign = 1) {
    return [sign] (TLorentzVector top, TLorentzVector t_parent,
                   const TLorentzVector& ttbar, TLorentzVector lepton) {

    // Boost to the ttbar CM
    TVector3 boost_to_ttbar = ttbar.BoostVector();
    boost_to_ttbar *= -1;

    t_parent.Boost(boost_to_ttbar);
    top.Boost(boost_to_ttbar);
    lepton.Boost(boost_to_ttbar);

    TVector3 boost_to_t_parent = t_parent.BoostVector();
    boost_to_t_parent *= -1;

    lepton.Boost(boost_to_t_parent);

    TVector3 k_axis = top.Vect().Unit();
    k_axis *= sign;
    float theta = lepton.Vect().Unit().Dot(k_axis); // cos(theta) helicity

    // In case we get a NaN, move it to the underflow bin ---//
    if (std::isnan(theta)) return float(-55);
    else return theta;
    };
    }

    Felix Kling
    @KlingFelix

    Hi @abdtnourji,

    Sure, this should be possible within MadMiner.

    Defining observables happens in the LHEReader or DelphesReader class. There are two options for how to define an observable. For simple observables, such as the pT of the leading jet, you can use

    lhe.add_observable('pt_j1', 'j[0].pt')

    Additionally, you can also define more complex observables through a function of the final state 4-momenta. For example, this one calculates the Collins-Soper angle (the angle of the top quark in the tt rest frame with respect to the beam axis in the lab frame). Note that MadMiner uses the Python library scikit-hep for 4-vectors (see here for documentation: https://github.com/scikit-hep/scikit-hep/blob/master/skhep/math/vectors.py), which should have all the functions you need.

    def function_obs1(p,l,a,j,met):
        #momenta
        ptop=p[0]+p[1]+p[2]
        ptt=p[0]+p[1]+p[2]+p[3]+p[4]+p[5]
        #boost top momenta in tt frame
        pt_ = ptop.boost(ptt.boostvector)
        #create vectors
        vz=Vector3D(0.,0.,1.)
        vt=pt_.vector.unit()
        #calculate obs
        obs  = vz.dot(vt)
        return np.arccos(obs)

    and then call it with

    lhe.add_observable_from_function('obs1', function_obs1)   

    I hope that helps,

    Best, Felix

    abdtnourji
    @abdtnourji

    @KlingFelix thank you very much. This is the exact thing I was looking for.
    I have one more question, just to understand:

    I don't understand how these two lines are able to compute the pT of the top and ttbar:
    ptop=p[0]+p[1]+p[2]
    ptt=p[0]+p[1]+p[2]+p[3]+p[4]+p[5]

    Since I also want to compute the pt/eta/phi of the top/anti-top and lepton/anti-lepton using the equations above, and I don't know how I can do it.

    thank you in advance.
    A.Tnourji

    alexander-held
    @alexander-held
    The lines you mention compute the four-vectors for top/ttbar, not just the pT. The pT is a property (https://github.com/scikit-hep/scikit-hep/blob/4e2c8aa94a8a7f9adea67df38d509242f598965d/skhep/math/vectors.py#L788-L791) that you can get via e.g. ptop.pt or ptop.perp.
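    As a rough illustration of what those properties compute (a pure-Python sketch using standard hadron-collider conventions, not the scikit-hep implementation itself):

```python
import math

# Standard definitions of the transverse and angular properties
# of a momentum vector with components (px, py, pz).
def pt(px, py):
    return math.hypot(px, py)          # transverse momentum

def phi(px, py):
    return math.atan2(py, px)          # azimuthal angle

def eta(px, py, pz):
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))  # pseudorapidity
```

    So something like ptop.pt in scikit-hep corresponds to pt(ptop.px, ptop.py) here, computed from the summed components.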
    abdtnourji
    @abdtnourji

    @alexander-held @KlingFelix
    Many thanks!
    Sorry for my ignorance, but I still have some confusion!

    For example: ptop=p[0]+p[1]+p[2]
    What is the meaning of p[0], p[1] and p[2]? And how do we know that this is the top quark? (Same for ptt?)

    Thanks a lot!

    abdtnourji
    @abdtnourji

    Hello,
    @alexander-held @KlingFelix @johannbrehmer I will be happy to hear from you because I am still having a problem with the basic understanding of MadMiner.

    Thank you

    Felix Kling
    @KlingFelix

    @abdtnourji The LHEReader provides two options for how to access the final state 4-momenta. One option is to use p[i], which refers to the 4-momentum of the i-th final state particle listed in the LHE file. (Note that this ordering encodes additional information which might not always be observable.) Alternatively, you can use a[i], j[i], l[i], ..., referring to the photons, jets and leptons, which are sorted by pT (so no hidden unobservable information is encoded in the ordering). With the DelphesReader you only have the a[i], j[i], l[i], ... option.

    So ptop=p[0]+p[1]+p[2] is the combined 4-momentum of the first 3 particles of an event in the LHE file. So for my example (which was ttH with t > mu v b) it would be the momentum of the top at truth level.
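    To make the ordering concrete, here is a toy sketch (the FourVec class is a hypothetical stand-in, not MadMiner's vector type): summing the first three p[i] gives the top four-momentum only because the process definition puts the top's daughters first in the LHE file.

```python
import math

class FourVec:
    """Toy stand-in for a four-momentum (hypothetical, for illustration)."""
    def __init__(self, px, py, pz, e):
        self.px, self.py, self.pz, self.e = px, py, pz, e

    def __add__(self, other):
        return FourVec(self.px + other.px, self.py + other.py,
                       self.pz + other.pz, self.e + other.e)

    @property
    def pt(self):
        return math.hypot(self.px, self.py)

# Pretend LHE ordering from the process definition: p[0]=b, p[1]=l+, p[2]=vl
p = [FourVec(10., 0., 5., 15.), FourVec(0., 20., -3., 22.),
     FourVec(5., 5., 1., 8.)]
ptop = p[0] + p[1] + p[2]   # combined truth-level top four-momentum
```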

    abdtnourji
    @abdtnourji

    @alexander-held @KlingFelix @johannbrehmer

    Dear expert,

    When trying to add a spin correlation observable (ttbar dilepton events), the distribution I get using MadMiner is different compared to what I get using my own code (reading the LHE file and then extracting the observable).

    The observable I want to add is: cos(theta[k, +]),

    where theta is the angle between the momentum direction (denoted the k axis) of the charged lepton from the top decay (e.g. the lepton with charge +) in the top rest frame and the top momentum direction in the ttbar centre-of-mass frame.

    Please find below how I defined this observable in MadMiner and in my code. The result is shown here:
    https://cernbox.cern.ch/index.php/s/Hoh3HFQvKLP5KCW

    Please let me know how I can modify the MadMiner function so I can get the same distribution.

    Cheers
    A.TNOURJI

    Using MadMiner function :

    def cos_p_helicity(p, l, a, j, met):
        # define top/ttbar Lorentz vectors
        top = p[0] + p[1] + p[2]
        top_parent = p[0] + p[1] + p[2]
        ttbar = p[0] + p[1] + p[2] + p[3] + p[4] + p[5]

        # define lepton Lorentz vectors
        lepton1 = l[0]
        lepton2 = l[1]
        lep_plus = LorentzVector()

        if l[0].charge > 0:
            lep_plus = lepton1
        elif l[1].charge > 0:
            lep_plus = lepton2

        # boost to the ttbar CM
        boost_ttbar = ttbar.boostvector
        boost_ttbar *= -1

        # boost top momentum to the ttbar CM
        top.boost(boost_ttbar)
        top_parent.boost(boost_ttbar)

        # boost to top parent CM
        boost_parent = top_parent.boostvector
        boost_parent *= -1

        lep_plus.boost(boost_parent)

        # create k vector
        k_axis = top.vector.unit()
        k_axis *= 1

        vlep = lep_plus.vector.unit()
        theta = vlep.dot(k_axis)
        return theta

    and in my code (C++):

    auto cos_theta_helicity (TLorentzVector top, TLorentzVector t_parent, const TLorentzVector& ttbar, TLorentzVector lepton) {

    // Boost to the ttbar CM
    TVector3 boost_to_ttbar = ttbar.BoostVector();
    boost_to_ttbar *= -1;

    t_parent.Boost(boost_to_ttbar);
    top.Boost(boost_to_ttbar);

    TVector3 boost_to_t_parent = t_parent.BoostVector();
    boost_to_t_parent *= -1;

    lepton.Boost(boost_to_t_parent);

    TVector3 k_axis = top.Vect().Unit();
    k_axis *= 1;
    float theta = lepton.Vect().Unit().Dot(k_axis); // cos(theta) helicity

    // In case we get a NaN, move it to the underflow bin ---//
    if (std::isnan(theta)) return float(-55);
    else return theta;
    }

    Felix Kling
    @KlingFelix
    Hi @abdtnourji, I think you need to do top = top.boost(boost_ttbar). Just doing top.boost(boost_ttbar) doesn't do anything.
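    The distinction Felix points out, sketched with a toy class (hypothetical, not the scikit-hep LorentzVector): if boost returns a new vector rather than modifying the object in place, the return value must be kept.

```python
class Vec:
    """Toy immutable-style vector (hypothetical)."""
    def __init__(self, x):
        self.x = x

    def boost(self, dx):
        # returns a NEW object; self is left untouched
        return Vec(self.x + dx)

v = Vec(1.0)
v.boost(2.0)       # return value discarded: v is unchanged
assert v.x == 1.0
v = v.boost(2.0)   # keep the boosted vector by reassigning
assert v.x == 3.0
```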
    abdtnourji
    @abdtnourji

    Hi @KlingFelix , thanks a lot for your help. I tried your solution, but I still do not get the same distribution.

    Here is the updated code:

    def cos_p_helicity(p, l, a, j, met):
        # define lepton Lorentz vectors
        lepton1 = l[0]
        lepton2 = l[1]
        lep_plus = LorentzVector()

        if l[0].charge > 0:
            lep_plus = lepton1
        elif l[1].charge > 0:
            lep_plus = lepton2

        # define top/ttbar Lorentz vectors
        top = p[0] + p[1] + p[2]
        top_parent = p[0] + p[1] + p[2]
        ttbar = p[0] + p[1] + p[2] + p[3] + p[4] + p[5]

        # boost to the ttbar CM
        boost_ttbar = ttbar.boostvector
        boost_ttbar *= -1

        # boost top momentum to the ttbar CM
        top_ = top.boost(boost_ttbar)
        topp_ = top_parent.boost(boost_ttbar)

        # create k vector
        k_axis = top_.vector.unit()
        k_axis *= 1

        # boost to top parent CM
        boost_parent = topp_.boostvector
        boost_parent *= -1

        lep_p = lep_plus.boost(boost_parent)

        vlep = lep_p.vector.unit()
        theta = vlep.dot(k_axis)
        return theta
    abdtnourji
    @abdtnourji
    Hi @KlingFelix
    I was wondering if you have had a chance to look at my error yet? Your feedback would be of great assistance.
    A.TNOURJI
    alexander-held
    @alexander-held
    Since you have two different implementations, I would suggest picking a single event and calculating the observable with both implementations. If the results agree (for this single event), try to find an event where they disagree. Once you have found such an event, you can add printouts / breakpoints to the code to find the line where the implementations give different results.
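    Alex's suggestion can be sketched as a small harness (names are illustrative, not MadMiner API): run both implementations over the same events and stop at the first disagreement.

```python
def find_first_mismatch(events, impl_a, impl_b, tol=1e-6):
    """Return the index of the first event where the two implementations
    disagree beyond tol, or None if they all agree."""
    for i, event in enumerate(events):
        a, b = impl_a(event), impl_b(event)
        if abs(a - b) > tol:
            print(f"event {i}: {a} vs {b}")
            return i
    return None
```

    Once the first mismatching event is known, both codes can be stepped through line by line on that single event.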
    Felix Kling
    @KlingFelix
    Dear @abdtnourji .
    I agree with Alex - the simplest way of trying to find the error is to compare the results of your two codes after different lines of the code. There is one more thing that caught my eye: at first, you define the top quark using the p[i], which corresponds to the momenta of the i-th particle in the LHE file. But you use the l[i] to define the lepton. While there is nothing wrong with this, it would have been much easier to use something like lep_plus = p[1] (assuming that you defined the decay t > b l+ vl in the process, and l+ is indeed the 2nd particle in the LHE file).
    BiscetSJ
    @BiscetSJ

    Dear developers,

    we are trying to compare the difference between models trained with different numbers of epochs and we have a few questions about it:
    1) Is it possible to load an already trained model and continue training it with a different number of epochs? Or does training always create a brand new model?
    2) Is it possible to see the contents of the nodes' weights directly from MadMiner?
    3) Does MadMiner have a function to save validation loss values to a file?

    Thank you!

    Johann Brehmer
    @johannbrehmer

    Hi all, chiming in for the last questions.

    1) Yes, this should be possible. Though we don't save the optimizer state together with the network weights, so this may lead to different results than training the model in one go for longer.

    2) Yes, you can access e.g. ParameterizedRatioEstimator.model.parameters().
    3) Basically yes, ParameterizedRatioEstimator.train() etc. return numpy arrays with train and validation losses per epoch.
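    A minimal way to persist those per-epoch losses for later inspection (the save/load helpers here are illustrative, not part of MadMiner):

```python
import json

def save_losses(train_loss, val_loss, path):
    """Write per-epoch train/validation losses to a JSON file."""
    with open(path, "w") as f:
        json.dump({"train": list(train_loss), "val": list(val_loss)}, f)

def load_losses(path):
    """Read the losses back as two lists."""
    with open(path) as f:
        d = json.load(f)
    return d["train"], d["val"]
```

    For example, train_loss, val_loss = estimator.train(...) followed by save_losses(train_loss, val_loss, "losses.json"), assuming the return shape described above.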
    Johann Brehmer
    @johannbrehmer
    Of course, if you want to have more control over all of these things, you may want to write your own ML model and training loop and simply use the numpy arrays with training data from MadMiner and the loss functions from our papers.
    Sorry for the brevity – please let us know if this makes sense!
    Sinclert Pérez
    @Sinclert

    Hi :wave:

    I am currently trying to use MadMiner with Python 3.7. I have checked the whole MadGraph5 - LHAPDF6 - Pythia8 stack, and Python 3.7+ is fully supported in their modern versions.

    The versions I am using, are:

    • MadGraph: 2.9.3.
    • LHAPDF6: 6.3.0.
    • Pythia: 8.2 (Yes, I know version 8.3 is not supported by MadGraph).

    I am currently facing a bug when setting "reweight=ON" in my mg_commands.mg5 card. Apparently... MadGraph? Pythia8? needs some Fortran matrix file during runtime that is not being generated beforehand.

    The error traceback is as follows:

    FileNotFoundError: [Errno 2] No such file or directory: '<MY_PATH>/.workdir/mg_processes/signal/rw_me/SubProcesses/P1_ud_udh_no_az_h_aa/matrix.f'
    Related File: <MY_PATH>/.workdir/mg_processes/signal/rw_me/SubProcesses/P1_ud_udh_no_az_h_aa/matrix.f

    My intuition is that, after the "PDF hosting migration" that LHAPDF6 underwent, I would need to manually download some very specific PDF to make reweighting work.
    (For clarification, PDF here is not what we usually mean by a PDF document; it is some kind of small dataset.)

    Has anyone successfully reweighted events using Python 3.7+?
    And if so, with which versions?
    Felix Kling
    @KlingFelix

    Hi @Sinclert ,

    the file matrix.f should contain the information on the matrix element describing your process. It should automatically be created by MadGraph when you generate your process. That's pretty much the whole purpose of MadGraph: automatically creating a function that calculates the matrix element for the process you specified. If this file is missing, I guess something went wrong when building the process.

    There should be a log file called "generate.log" that is created by MadMiner. It contains all the output from MadGraph that you would usually see in the terminal, and should help you find out what went wrong.

    Best, Felix

    Sinclert Pérez
    @Sinclert

    Hi @KlingFelix ,

    I have checked the generate.log file, but it does not contain any errors. Nevertheless, I found many references to the folder P1_qq_qqh_h_aa, which is strangely similar to the one where the matrix.f file should be, but is not (P1_ud_udh_no_az_h_aa).

    These names are very cryptic to me, but if you think they may contain relevant information for debugging the issue, please let me know.

    Also, I should mention that I have not touched my generation / Pythia8 execution code since the last time I ran this workflow (when it ran perfectly fine), apart from the Madminer.run_multiple() argument python2_override=True, which has been replaced by python_executable="python3".
    Not sure if this change may have other implications under the hood, as I was not aware of the motivation for python_executable to be offered as an argument.
    Sinclert Pérez
    @Sinclert

    I should also mention an INFO entry within the generate.log file, saying: Some T-channel width have been set to zero [new since 2.8.0] if you want to keep this width please set "zerowidth_tchannel" to False.

    Again, not sure if this is related to the folder that gets generated at the end of the process.

    Felix Kling
    @KlingFelix

    Hi @Sinclert ,

    The name P1_qq_qqh_h_aa means that you had a process q q > q q h with h > gamma gamma, where q should refer to the light (massless) quarks. The corresponding MadGraph syntax is generate p p > j j h QCD=0, h > a a. So this is vector boson fusion Higgs production.

    The no_az means that you exclude any intermediate Z bosons and photons, but just keep the W-boson diagrams (for example u ubar > d dbar h).

    I suspect that your problem is with MadGraph, not with Pythia8. One thing you could easily try is to run MG manually, and generate some events for the same process. Then you will see if something fails.

    I am not sure about the T-channel-width issue. That's not familiar to me. Hopefully it's not relevant.

    Best, Felix

    Sinclert Pérez
    @Sinclert

    Hi @KlingFelix ,

    Thanks for the quick reply. I know there are a lot of moving pieces and it can be tough to help debugging.

    I have tried generating the events (with reweighting turned ON) without generating the intermediate shell scripts (therefore, having the Madminer.run_multiple argument only_prepare_script set to False). Turns out that helped! :smile:

    Now the …/rw_me/SubProcesses/P1_ud_udh_no_az_h_aa/matrix.f file exists, although there is a new error further down the process that I will deal with later on. Seeing how performing the event generation directly with Madminer seemed to work, while performing it indirectly seemed to fail: are you, or @johannbrehmer, aware of any limitations when using the python_executable="python3" argument on the intermediate shell scripts?

    (Again, it is the only code part I changed).

    Felix Kling
    @KlingFelix

    Dear @Sinclert ,

    Regarding the only_prepare_script: yes, that makes sense. If only_prepare_script=True then MadMiner only prepares the scripts (and I think also generates the process), but it does not generate any events.

    Regarding Python3: MadGraph used to only work with Python2. But since Python2 is now not maintained anymore, they were forced to make it work with Python3 as well. Personally, I never managed to make it work with Python3, but I have also not tried very hard since it worked with Python2. Johann managed to make everything work with Python3, I think.

    Best, Felix

    Sinclert Pérez
    @Sinclert

    Thanks.

    Happy to hear how Johann managed it :)

    Sinclert Pérez
    @Sinclert

    Hey there :wave:

    Regarding my last messages and the conversation I had with @KlingFelix: I finally debugged what was going on regarding the missing matrix.f file, and why everything works when generating the events without creating the intermediate shell scripts, while the error appears when using the intermediate scripts (through the usage of the only_prepare_scripts=True flag on the Madminer.run_multiple function).

    Sinclert Pérez
    @Sinclert
    Turns out some recent version of… MadGraph? …Pythia8? checks whether a folder called rw_me (rw stands for reweight) exists within the process directory. If it does, it automatically stops part of the reweighting process, leaving the rw_me folder as is.
    In my case, I was wrapping the execution of the event-generation intermediate scripts into my own shell script, which removed and created the rw_me directory before every run:
    ...
    rm -rf "${SIGNAL_ABS_PATH}/rw_me"
    mkdir -p "${SIGNAL_ABS_PATH}/rw_me"   # <- Here is the problem
    ...
    Sinclert Pérez
    @Sinclert
    I am sharing my findings here because the debugging process has been quite painful, due to the lack of traceability of this behaviour (when was it introduced? Who knows! 🤷‍♂) and the silliness of the behaviour itself (who would do something like this? 🤦)
    Felix Kling
    @KlingFelix
    Thank you @Sinclert for explaining the issue.
    Ricardo Barrué
    @ricardobarrue_gitlab
    Hello! My name is Ricardo; I am a 1st-year PhD student with the ATLAS experiment, currently looking at the possibility of using MadMiner in an analysis. I have a question: what exactly are the parton shower/hadronization and detector latent variables (where are they defined)? Has there been any progress in integrating MadMiner with different event generators, parton shower and hadronization programs, and also (most importantly for us) Geant4?
    Felix Kling
    @KlingFelix

    Dear @ricardobarrue_gitlab. MadMiner indirectly considers all the latent variables that are defined during the modeling of the shower/hadronization/detector. Think, for example, of the 4-momenta of hadronic particles provided by Pythia as latent variables of the hadronization stage. However, we do not use the latent variables directly (but only the fact that latent variables exist), so we don't need to specify them.

    As far as I know, we have not integrated other generators or detector simulations, such as Geant4, yet. But let me ask around, in case I missed something.

    Ricardo Barrué
    @ricardobarrue_gitlab

    Aren't the values of the latent variables actually necessary to calculate the joint quantities ?

    Felix Kling
    @KlingFelix

    Dear @ricardobarrue_gitlab. No, (most of) the values of the latent variables are not necessary. That's because the joint quantities just depend on the matrix elements of the hard process, which themselves only depend on the parton-level kinematics. So one only needs to know the parton-level momenta, which are stored in the LHE file.

    I have talked with Kyle Cranmer, who mentioned that there are discussions within ATLAS on MadMiner. I would suggest contacting him to find out if there is any particular progress. As I am not an ATLAS member, I don't know about these collaboration-internal discussions and efforts.

    Ricardo Barrué
    @ricardobarrue_gitlab
    Thank you very much, Felix! One more thing: I am training the network in a batch system and would like to access the training and validation losses to be able to check the network for overtraining. I saved the model (actually an Ensemble, but the options are the same) with the option save_model=True, which saves the full PyTorch model. Do you know if/how I can access the losses from the model files? I tried searching online, but the answers I found are inconclusive at best.
    Felix Kling
    @KlingFelix
    The answer to BiscetSJ's post from Feb 22nd might answer your question.
    Ricardo Barrué
    @ricardobarrue_gitlab

    Hey Felix, sorry to be bothering again with this, but wouldn't one also need to know the helicities, charges and polarisation states of the parton-level final-state particles?

    Felix Kling
    @KlingFelix
    @ricardobarrue_gitlab Ah, good question. In principle you are right, and the matrix element depends on the helicity. However, final-state helicities are generally not observable. Therefore, by default, we ask MadGraph to calculate the matrix element summed over helicities. You could also ask MadGraph to calculate the matrix elements for fixed helicities, but then you also need to tell the reweighting tool to do the same. I remember that there was some issue, which I think was related to how helicities are stored in the LHE file, but I cannot recall the details. I would suggest to keep summing over helicities.
    Ricardo Barrué
    @ricardobarrue_gitlab

    Hello all, I come to you with a different question, this time regarding the memory use in the training of the ML methods. I am working in WH production, and I am training a SALLY method, with an ensemble of 5 networks of 50 layers each, using the memmap feature. I am training with two different inputs, which have either 60 or 48 variables, and the samples that I use are separated by charge and flavor of the lepton from the W boson. The 5 training samples for each of the processes occupy a total of 350MB (signal-only, 300k events) and 1275MB (w/ backgrounds, 1.05M events). Checking with nvidia-smi, I see that the training uses a very small amount of GPU memory (~600MB).

    For some reason that I don't understand (since my first contact with PyTorch was through MadMiner), the training is requiring 40+ GB of virtual memory, which is making it impossible for me to train the networks, even on our institute's batch system. Have any of you experienced this? Do you think the memory use can be controlled at the MadMiner level, or does one need to go into the lower-level PyTorch code?

    PS: regarding @BiscetSJ 's post from Feb 22nd on returning train and validation losses. I saw that they are returned via text to the command line, but at least for the ensemble module (with SALLY estimators), the train function returns a None object.

    Felix Kling
    @KlingFelix
    Hi @ricardobarrue_gitlab I am not sure I can help you with the memory problem. Maybe someone else here can comment on it? @johannbrehmer perhaps? Do you really use 50 layers? How many neurons per layer? 50 layers seems like a lot, especially for SALLY. Or do you mean 50 neurons?
    Ricardo Barrué
    @ricardobarrue_gitlab
    Yeah, sorry. It's 50 neurons per layer.