João Rebelo
@joaorebelo-ar
@satterly hi :D
Nick Satterly
@satterly
The `service` field is used for classifying/grouping, for visualisation and downstream processing.
Kjetil
@kjetilmjos
@satterly I made a gist showing how parsing and validation of different types of environment variables could be done. What do you think? https://gist.github.com/kjetilmjos/d377d2976b974d2137c5760f6322e841
alexb145
@alexb145
hi, what's the Groups tab used for? It always shows "Sorry, nothing to display here :(" for us, not sure if that's expected. How do we add anything there?
Nick Satterly
@satterly
@kjetilmjos it looks like it could work but there's almost certainly large scope for bugs or user error with converting things like dictionaries.
A popular cloud stack tool called terraform takes this approach...

From environment variables
Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_region variable can be set to set the region variable.

Note: Environment variables can only populate string-type variables. List and map type variables must be populated via one of the other …
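A minimal sketch of the kind of typed env-var parsing under discussion (all names here are made up for illustration; this is not Alerta's actual implementation, and a JSON-encoded string is just one possible way to carry lists/dicts):

```python
import json
import os

def env_var(name, default=None, cast=None):
    """Read an environment variable, optionally casting the raw string."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        return cast(raw) if cast else raw
    except ValueError:
        # json.JSONDecodeError subclasses ValueError, so bad JSON lands here too
        raise SystemExit('invalid value for %s: %r' % (name, raw))

def as_bool(value):
    """Interpret common truthy strings."""
    return value.lower() in ('1', 'true', 'yes', 'on')

# hypothetical usage: a JSON-encoded string can carry a list or dict, e.g.
# EXAMPLE_ORIGINS='["http://localhost", "http://localhost:8000"]'
DEBUG = env_var('EXAMPLE_DEBUG', default=False, cast=as_bool)
ORIGINS = env_var('EXAMPLE_ORIGINS', default=[], cast=json.loads)
```

The upside is that every variable stays a plain string in the environment; the downside, as noted above, is that complex values are easy to get wrong by hand.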

Nick Satterly
@satterly
@alexb145 you mean the Groups menu option from the navigation menu on the left? User groups. It isn't well documented, but you can define groups that map to roles and assign users to those groups when using basic auth.
Would that be useful to you? I could write some docs on how it works.
@PedroMSantosD :+1:
alexb145
@alexb145
@satterly yes, it would be helpful. I've seen the roles/scopes too, but again I have no idea how these work either. Is it all defined inside environment variables? It's not really clear at all, so a simple doc would help a lot.
Ngo Quang Hoa
@ngohoa211
@satterly hi, what does mailer need amqp for? I read the code and still can't figure it out. Please explain a little about its flow, e.g. why does it need to heartbeat alerta?
Kjetil
@kjetilmjos

@satterly from what I can read about how terraform deals with environment variables, they support complex structures like lists and dicts but recommend using configuration files for complex structures. See the section "Complex-typed Values" here:
https://www.terraform.io/docs/configuration/variables.html

Found an example with different datatypes under the TF_VAR_name section here:
https://www.terraform.io/docs/commands/environment-variables.html

I totally agree that setting complex dictionaries and lists in environment variables is not practical. But I think alerta should accept it if the user wants it.
The validation of the inputs in my example could be made a lot more refined. If you have an example of a more complex config input I'm happy to make a better validation example for it.

Nick Satterly
@satterly

Here’s an example config for LDAP …

LDAP_URL = 'ldap://localhost:389'  # replace with your LDAP server
LDAP_DOMAINS = {
    'my-domain.com': 'uid=%s,ou=users,dc=my-domain,dc=com'
}
LDAP_DOMAINS_BASEDN = {
    'my-domain.com': 'dc=my-domain,dc=com'
}
LDAP_DOMAINS_GROUP = {
    'my-domain.com': '(&(memberUid={username})(objectClass=groupOfUniqueNames))'
    # OR, matching on the user DN:
    # 'my-domain.com': '(&(member={userdn})(objectClass=groupOfUniqueNames))'
    # OR, matching on the email address:
    # 'my-domain.com': '(&(member={email})(objectClass=groupOfUniqueNames))'
}

https://docs.alerta.io/en/latest/authentication.html#basic-auth-using-ldap

And here’s the CORS_ORIGINS default config…
CORS_ORIGINS = [
    # 'http://try.alerta.io',
    # 'http://explorer.alerta.io',
    'http://localhost',
    'http://localhost:8000',
    r'https?://\w*\.?local\.alerta\.io:?\d*/?.*'  # => http(s)://*.local.alerta.io:<port>
]
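For illustration, the regex entry in that list accepts URLs like these (a quick check of the pattern as pasted, nothing more):

```python
import re

# the regex entry from the CORS_ORIGINS list above
pattern = r'https?://\w*\.?local\.alerta\.io:?\d*/?.*'

# scheme, optional subdomain, optional port and path are all allowed
assert re.fullmatch(pattern, 'http://local.alerta.io')
assert re.fullmatch(pattern, 'https://web.local.alerta.io:8080/alerts')

# an unrelated host does not match
assert not re.fullmatch(pattern, 'http://evil.example.com')
```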
And a SAML2 config …
CONFIG = {
    "entityid": "%s/idp.xml" % BASE,
    "description": "My IDP",
    "valid_for": 168,
    "service": {
        "aa": {
            "endpoints": {
                "attribute_service": [
                    ("%s/attr" % BASE, BINDING_SOAP)
                ]
            },
            "name_id_format": [NAMEID_FORMAT_TRANSIENT,
                               NAMEID_FORMAT_PERSISTENT]
        },
        "aq": {
            "endpoints": {
                "authn_query_service": [
                    ("%s/aqs" % BASE, BINDING_SOAP)
                ]
            },
        },
        "idp": {
            "name": "Rolands IdP",
            "endpoints": {
                "single_sign_on_service": [
                    ("%s/sso/redirect" % BASE, BINDING_HTTP_REDIRECT),
                    ("%s/sso/post" % BASE, BINDING_HTTP_POST),
                    ("%s/sso/art" % BASE, BINDING_HTTP_ARTIFACT),
                    ("%s/sso/ecp" % BASE, BINDING_SOAP)
                ],
                "single_logout_service": [
                    ("%s/slo/soap" % BASE, BINDING_SOAP),
                    ("%s/slo/post" % BASE, BINDING_HTTP_POST),
                    ("%s/slo/redirect" % BASE, BINDING_HTTP_REDIRECT)
                ],
                "artifact_resolve_service": [
                    ("%s/ars" % BASE, BINDING_SOAP)
                ],
                "assertion_id_request_service": [
                    ("%s/airs" % BASE, BINDING_URI)
                ],
                "manage_name_id_service": [
                    ("%s/mni/soap" % BASE, BINDING_SOAP),
                    ("%s/mni/post" % BASE, BINDING_HTTP_POST),
                    ("%s/mni/redirect" % BASE, BINDING_HTTP_REDIRECT),
                    ("%s/mni/art" % BASE, BINDING_HTTP_ARTIFACT)
                ],
                "name_id_mapping_service": [
                    ("%s/nim" % BASE, BINDING_SOAP),
                ],
            },
            "policy": {
                "default": {
                    "lifetime": {"minutes": 15},
                    "attribute_restrictions": None, # means all I have
                    "name_form": NAME_FORMAT_URI,
                    "entity_categories": ["swamid", "edugain"]
                },
            },
            "subject_data": "./idp.subject",
            "name_id_format": [NAMEID_FORMAT_TRANSIENT,
                               NAMEID_FORMAT_PERSISTENT]
        },
    },
    "debug": 1,
    "key_file": full_path("pki/mykey.pem"),
    "cert_file": full_path("pki/mycert.pem"),
    "metadata": {
        "local": [full_path("../sp-wsgi/sp.xml")],
    },
    "organization": {
        "display_name": "Rolands Identiteter",
        "name": "Rolands Identiteter",
        "url": "http://www.example.com",
    },
    "contact_person": [
        {
            "contact_type": "technical",
            "given_name": "Roland",
            "sur_name": "Hedberg",
            "email_address": "technical@example.com"
        }, {
            "contact_type": "support",
            "given_name": "Support",
            "email_address": "support@example.com"
        },
    ],
    # This database holds the map between a subject's local identifier and
    # the identifier returned to a SP
    "xmlsec_binary": xmlsec_path,
    #"attribute_map_dir": "../attributemaps",
    "logger": {
        "rotating": {
            "filename": "idp.log",
            "maxBytes": 500000,
            "backupCount": 5,
        },
        "loglevel": "debug",
    }
}
Nick Satterly
@satterly
@kjetilmjos ^^^
@te4336 You probably haven't configured the correct Alerta URL in New Relic. Please open a github issue providing all the information required in the issue template and I’ll help you troubleshoot. https://github.com/alerta/alerta/issues/new/choose
@ngohoa211 mailer uses a message queue because it waits 30 seconds or so to see if an alert is cleared before sending an email. This way the amount of spam email is greatly reduced.

It is specifically designed to reduce the number of unnecessary emails by ensuring that alerts meet the following criteria:

- must not be a duplicate alert (ie. repeat != True)
- must have status of open or closed
- must have a current severity OR previous severity of critical or major
- must not have been cleared down within 30 seconds (to prevent flapping alerts spamming)
To achieve the above, alerts are actually held for a minimum of 30 seconds before they generate emails.

https://github.com/alerta/alerta-contrib/tree/master/integrations/mailer#overview
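The hold-and-suppress behaviour could be sketched roughly like this (a simplification for illustration only; the function and variable names are made up, not the actual mailer code):

```python
import time

HOLD_TIME = 30  # seconds to hold an alert before emailing

on_hold = {}  # alert id -> (alert, time received)

def receive(alert):
    """Queue a new alert instead of emailing immediately."""
    if alert.get('repeat'):          # duplicates never generate email
        return
    on_hold[alert['id']] = (alert, time.time())

def clear(alert_id):
    """An alert cleared within the hold window sends nothing."""
    on_hold.pop(alert_id, None)

def flush(now=None):
    """Email the alerts that survived the hold window."""
    now = now or time.time()
    sent = []
    for alert_id, (alert, received) in list(on_hold.items()):
        if now - received >= HOLD_TIME:
            sent.append(alert)       # stand-in for actually sending an email
            del on_hold[alert_id]
    return sent
```

A flapping alert that clears within 30 seconds never reaches `flush`, which is the spam reduction described above.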

Ngo Quang Hoa
@ngohoa211

thanks. When I enable mailer, in the config file "/etc/alertad.conf", do I need to set:

PLUGINS=['amqp','alerta-mailer']

or just:

PLUGINS=['amqp']
@satterly
oh, stupid question: mailer is an integration, not a plugin :)
João Rebelo
@joaorebelo-ar
@satterly thanks for the response :) One more thing, which I don't know if it has already been asked: is there a way to disable the sound for specific environments? We only care about Production, everything else is white noise, but currently all our environments ping whenever an alarm pops up.
Nick Satterly
@satterly
@joaorebelo-ar if you choose the Production tab in the web UI it will only ping when there is an Open alert for the Production environment.
Ah, after some testing I can see that this isn't working as it should. That's a bug.
João Rebelo
@joaorebelo-ar
@satterly thanks :)
alexb145
@alexb145
[screenshot: image.png]
@satterly this seems like a bug: we get this if we type anything in the search bar at the top and then, after searching, click the x to delete it.
This also makes all alerts disappear.
And there are also things that we cannot find by searching using the top bar but which we can find using the filters on the right side.
Kjetil
@kjetilmjos

@satterly updated my gist with examples for a couple of the configs you sent.
https://gist.github.com/kjetilmjos/d377d2976b974d2137c5760f6322e841

Complex things like the SAML2 one would not be practical to have as an environment variable, but by setting up a schema that matches allowed/required parameters it can be done.
Is there any validation on config entries when configured from the config file?

If you approve of this way of doing validation, I think it could be an idea to start with the variables already in config.py and then add new ones as needed, so as not to take on too much at a time.

Grayson Head
@graysonhead

Has anyone had any luck running more recent versions of Alerta in Kubernetes? It seems that 7.x.x versions at some point started expecting to have write access to the /app directory, which isn't possible in my configuration since /app is a config volume (and thus read-only). Is there any way to change this behavior?

$ kubectl logs -f alerta-web-5bbb9476c5-m92g7 -n monitoring
touch: cannot touch '/app/.run_once': Read-only file system
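For reference, one common way around a whole read-only config volume is to mount only the config file into the directory via subPath, leaving the rest of the directory writable. This is an untested, hypothetical fragment; the paths and names are guesses, not taken from the Alerta image:

```yaml
# hypothetical workaround sketch: mount just the config file, not all of /app
volumes:
  - name: alerta-config
    configMap:
      name: alerta-config
containers:
  - name: alerta-web
    volumeMounts:
      - name: alerta-config
        mountPath: /app/alertad.conf   # assumed location of the config file
        subPath: alertad.conf
        readOnly: true
```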

Nick Satterly
@satterly
Do you have an example kubernetes deployment file?
Grayson Head
@graysonhead
Let me find a sanitized one
Grayson Head
@graysonhead
@satterly This is what we are running in our test environment, I just tested again and confirmed that 7.3.1 doesn't work (with the error above), 6.8.5 does. I haven't tried any releases in between. https://pastebin.com/K48grGM8
Nick Satterly
@satterly
Thanks for that. I've been considering for a while now splitting the single "alerta-web" docker image into two, one for the web UI and one for the API. I don't know if that would make any difference here but I'd be interested in your opinion as someone who's trying to deploy Alerta into Kubernetes.
Nick Satterly
@satterly
@alexb145 obviously a bug. The input box is probably handling an empty string differently to null, or something. I’ll log an issue and check it out. Thanks for reporting.
Grayson Head
@graysonhead
@satterly I think that is a good idea. Currently if the DB doesn't connect Kubernetes won't attempt to restart the container because the main process doesn't exit on error, I imagine separating the two would allow you to exit the main loop on error for the API server if it can't connect to the DB for whatever reason.
The alternative I've considered is setting up a health check script that sees if the DB is connected and tells the Kube API there is a problem if it isn't, but that is much less "works out of the box" friendly
Nick Satterly
@satterly
I see. Thanks that’s good to know.
Grayson Head
@graysonhead
The only thing that is difficult is needing to populate the config with the endpoint of the API so the webui can connect to it. You don't necessarily know what that address is beforehand since it gets assigned by your Cloud Provider on deployment. (you can reference it via DNS within the cluster, but that doesn't help a client accessing it from outside the cluster)
That might be fixable by having the webui proxy the API requests when someone is accessing the webui, and then you would have a separate endpoint for API only stuff
João Rebelo
@joaorebelo-ar
@satterly could you point me to where in the alerta code you set the history update time field? thank you
Nick Satterly
@satterly
@graysonhead Thanks that’s interesting food for thought.
@joaorebelo-ar it's done for deduplicated alerts, correlated alerts and new alerts.