    pmg
    @pmg7557_twitter
    @ncharles Done: request #18203 created. Hope it is clear.
    Nicolas Charles
    @ncharles
    crystal clear
    thank you
    norbertoaquino
    @norbertoaquino
    Hello, I am planning to install PostgreSQL on a separate machine, which version of PostgreSQL is supported with rudder?
    Alexis Mousset
    @amousset
    At least 9.2 for Rudder 6.1 IIRC
    10 recommended
    cc @ncharles
    Francois Armand
    @fanf
    yep, that's it
    actually: any postgres newer than 9.2 will work, we just don't use newer features to remain compatible. But with a newer version, you will benefit from all the perf/tooling/etc enhancements of postgres
    norbertoaquino
    @norbertoaquino
    Great! thank you @fanf @amousset
    Nicolas Charles
    @ncharles
    Latest postgresql is the best choice if possible
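    (For reference, a quick way to check which PostgreSQL version a Rudder server is actually running, assuming psql access to the Rudder database:)

        -- full server version string, e.g. "PostgreSQL 12.4 on x86_64..."
        SELECT version();
        -- or just the version number
        SHOW server_version;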
    realitygaps
    @realitygaps
    We have noticed that slapd is regularly spiking to 85+% cpu on our rudder server. Is there a known issue related to this?
    it's version 6.0.9
    Alexis Mousset
    @amousset
    can it be during policy generation?
    realitygaps
    @realitygaps
    hmm it could be, how would i easily check?
    I'm not manually generating them when it happens
    Alexis Mousset
    @amousset
    they can happen automatically when there are changes in inventories or applied policies
    realitygaps
    @realitygaps
    there weren't any changes being made in the past day
    Alexis Mousset
    @amousset
    do you have monitoring graphs showing this behavior?
    realitygaps
    @realitygaps
    am checking. it seems the largest spikes are indeed at midnight, which makes sense. but the last couple of days there have been more occurrences outside of that time
    i have some basic cpu graphs
    it seems to be only after the update to 6.0.9 that it spiked at other times
    but i will keep an eye on it the coming day and see
    also, are there any risks from doing a vacuum?
    Nicolas Charles
    @ncharles
    It is most likely when you receive inventories
    at midnight there is also maintenance of the database, which uses resources.
    Doing a simple vacuum (or vacuum analyze) on the database is pretty safe. Avoid doing it around midnight, and if you can increase maintenance_work_mem beforehand, it will help make it faster
    If you can identify when the spikes were, you can check the Event Logs (Utilities/event logs page) to see what happened at that time
    Finally, 6.1 improves the slapd part by adding indexes. It does help slapd a lot
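    (A minimal sketch of the vacuum advice above, assuming psql access to the Rudder database; the memory value is only an illustrative example, not a recommendation from the discussion:)

        -- run outside the midnight maintenance window
        SET maintenance_work_mem = '512MB';  -- temporarily raise memory for this session's maintenance tasks
        VACUUM ANALYZE;                      -- reclaims dead rows and refreshes planner statistics; safe to run online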
    realitygaps
    @realitygaps
    thanks we will try a vacuum then and take a look at event logs
    Stephane Paillet
    @spaillet_gitlab
    hello
    Stephane Paillet
    @spaillet_gitlab
    I would like to know if there is a way to use LDAP authentication without the Authentication backends plugin?
    Nicolas Charles
    @ncharles
    Hi ! Unfortunately no, you need the plugin to use ldap auth
    realitygaps
    @realitygaps
    our policy regeneration is currently giving errors ending with 'timed out after PT1H', any ideas how to fix this?
    " Policy update error for process '10452' at 2020-09-21 17:38:43 <- Error when executing hooks in directory '/opt/rudder/etc/hooks.d/policy-generation-finished'. <- Inconsistancy: Hook ''/opt/rudder/etc/hooks.d/policy-generation-finished' with environment parameters:"
    (also, 'inconsistancy' is spelt 'inconsistency' :)
    currently we can't regenerate policies due to this issue though
    realitygaps
    @realitygaps
    ⇨ Policy update error for process '10453' at 2020-09-21 18:38:49
    ⇨ Error when executing hooks in directory '/opt/rudder/etc/hooks.d/policy-generation-finished'.
    ⇨ Inconsistancy: Hook ''/opt/rudder/etc/hooks.d/policy-generation-finished' with environment parameters: [[RUDDER_NODE_IDS:] [RUDDER_GENERATION_DATETIME:2020-09-21T17:38:43.499+02:00] [RUDDER_END_GENERATION_DATETIME:2020-09-21T17:38:49.814+02:00] [RUDDER_NODE_IDS_PATH:/var/rudder/policy-generation-info/last-updated-nodeids] [RUDDER_NUMBER_NODES_UPDATED:0] [RUDDER_ROOT_POLICY_SERVER_UPDATED:1]]' timed out after PT1H
    pmg
    @pmg7557_twitter
    Hello, I set up a 'Job scheduled' directive from 0 to 24 but the report says 'Schedule is not valid (from 00 to 24'. Is it a new bug in this directive? Do you recommend using this directive or a cron job?
    realitygaps
    @realitygaps
    hmm may have found an issue on the machine causing this
    it was an issue on our end in the end
    Nicolas Charles
    @ncharles
    @pmg7557_twitter I think it's a limit issue: the schedule is between 0 and 24, and we ensure that it's strictly between 0 and 24. Can you try with 0 to 23 or 1 to 24 ?
    @realitygaps what was the issue ?
    Francois Armand
    @fanf
    @realitygaps "pth1" is a (not well know, sorry about that) standard for time period, so it says that after 1 hour, hooks timed out. If you found the problem, perfect! Else, look at what can cause hooks in /opt/rudder/etc/hooks.d/policy-generation-finished to stop. You can have more detailed logs about hooks by changing log level in /opt/rudder/etc/logback.xml, the part about hooks: <logger name="hooks" level="info" <- use debug or trace (which is very very verbose)
    realitygaps
    @realitygaps
    thanks @fanf i did look through logback.xml but didn't see anything clear, it ended up being a max open files issue as java opened a lot of sockets
    pmg
    @pmg7557_twitter
    @ncharles 0 to 23 works, it can be considered a UI bug to improve: if people select 0-24, force it internally to 0-23.
    Francois Armand
    @fanf
    @realitygaps you mean for changing the log level in https://github.com/Normation/rudder/blob/branches/rudder/6.1/webapp/sources/rudder/rudder-web/src/main/resources/logback.xml#L466? Or in the resulting logs in /var/log/rudder/webapp/2020_09_22.stderrout.log, there wasn't anything useful? I suspect the second; I believe we don't see the max open files limit from the app. How did you find out?
    Nicolas Charles
    @ncharles
    @pmg7557_twitter i think it should consider 24 as 23h59
    realitygaps
    @realitygaps
    @fanf we found it in the syslog, not in the application logs
    just didn't think to check there first :|