Rules engine for AWS management, DSL in yaml for query, filter, and actions on resources
Is `not` not allowed within a query?
- name: ec2-stopped-over-25-notify-untag-notstopped
  resource: aws.ec2
  description: |
    Remove the notification tag from instances that are no longer stopped.
  query:
    - not:
        - instance-state-name: stopped
  filters:
    - "tag:cc-stopped-notified-25": present
  actions:
    - type: untag
      tags: ["cc-stopped-notified-25"]
My error is: `EC2 Query Filter invalid filter name {'not': [{'instance-state-name': 'stopped'}]}`
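The error suggests the EC2 `query` block only accepts raw EC2 API filter names (e.g. `instance-state-name`), not boolean operators like `not`. A hedged sketch of one workaround is to move the negation into the `filters` block instead, matching on the instance's `State.Name` attribute from the describe response:

```yaml
- name: ec2-stopped-over-25-notify-untag-notstopped
  resource: aws.ec2
  description: |
    Remove the notification tag from instances that are no longer stopped.
  filters:
    - not:
        - State.Name: stopped
    - "tag:cc-stopped-notified-25": present
  actions:
    - type: untag
      tags: ["cc-stopped-notified-25"]
```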
Hi Guys,
I have a policy for terminating EC2 instances 31 days after creation, with a 10-day warning, but the warning and the actual termination happen on the same day. This policy used to work but now does not behave as expected.
The policy is as below.
https://gist.github.com/satvan/cd8e2920265810b3e7eaa642f43db145
Hi Guys,
I'm writing a policy to delete non-tag-compliant RDS instances and EBS volumes in my account. But, can I take a snapshot of EBS and RDS automatically while performing the delete action? If yes, how can I specify that in my action block?
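A hedged sketch of how this might look: the `aws.ebs` resource has a `snapshot` action that can be listed before `delete`, and the `aws.rds` `delete` action accepts a `skip-snapshot` option (set to false to take a final snapshot). The tag filter below is a placeholder for your actual compliance filter:

```yaml
policies:
  - name: ebs-snapshot-then-delete
    resource: aws.ebs
    filters:
      - "tag:Owner": absent   # placeholder for your tag-compliance filter
    actions:
      - snapshot
      - delete
  - name: rds-delete-with-final-snapshot
    resource: aws.rds
    filters:
      - "tag:Owner": absent   # placeholder for your tag-compliance filter
    actions:
      - type: delete
        skip-snapshot: false
```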
Guys,
Yesterday I did a pip upgrade on c7n, c7n-mailer and c7n-org. The old versions were:
c7n 0.9.8
c7n-mailer 0.6.6
c7n-org 0.6.6
The new ones are:
c7n 0.9.9
c7n-mailer 0.6.7
c7n-org 0.6.7
But after this, I get "cannot find credentials". I am using a regular IAM role attached to the EC2 instance.
Any ideas ?
Traceback (most recent call last):
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n_org/cli.py", line 563, in run_account
resources = p.run()
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/policy.py", line 1181, in call
resources = mode.run()
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/policy.py", line 275, in run
with self.policy.ctx:
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/ctx.py", line 88, in enter
update_session(local_session(self.session_factory))
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/utils.py", line 318, in local_session
s = factory()
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/credentials.py", line 46, in call
region or self.region, self.external_id)
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/credentials.py", line 104, in assumed_session
metadata=refresh(),
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/credentials.py", line 95, in refresh
session, region).assume_role, parameters)['Credentials']
File "/home/c7n/c7n/lib64/python3.7/site-packages/c7n/utils.py", line 438, in _retry
return func(*args, kw)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/client.py", line 663, in _make_api_call
operation_model, request_dict, request_context)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/client.py", line 682, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, kwargs)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(kwargs)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/signers.py", line 162, in sign
auth.add_auth(request)
File "/home/c7n/c7n/lib64/python3.7/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
- name: vpcflowlogs-cloudwatch-log-retention
  resource: aws.log-group
  mode:
    type: cloudtrail
    role: arn:aws:iam::{account_id}:role/somerole
    events:
      - source: "logs.amazonaws.com"
        event: CreateLogGroup
        ids: "requestParameters.logGroupName"
  filters:
    - type: value
      key: logGroupName
      op: regex
      value: "VPCFlowLogs"
    - retentionInDays: absent
    - "tag:buid": absent
  actions:
    - type: retention
      days: 30
    - type: tag
      key: "buid"
      value: "24005"
Does anyone know where I can find the event keys I can work with, like in this particular example from the documentation?
- name: no-ec2-public-ips
  resource: aws.ec2
  mode:
    type: cloudtrail
    events:
      - RunInstances
  filters:
    - type: event
      key: "detail.requestParameters.networkInterfaceSet.items[].associatePublicIpAddress"
      value: true
  actions:
    - type: terminate
      force: true
I don't see any event that has the pattern "detail.requestParameters.networkInterfaceSet.items[].associatePublicIpAddress".
I want to filter on aws.iam-user.credential.user_creation_time being greater than a day ago. I'm not sure if I can use value_type: age to accomplish this, since it's in a date format? Any advice is welcome! Full policy:
- name: iam-user-mfa-disabled
  resource: iam-user
  title: IAM users who don't have MFA enabled
  severity: MEDIUM
  filters:
    - type: credential
      key: mfa_active
      value: false
    - type: credential
      key: password_enabled
      value: true
    - type: credential
      key: user_creation_time
      op: gt
      value_type: age
      value: 1
Hi, I have an accounts.yml with the details of 50 AWS org accounts (including account_name, account_id, and a list of all regions in which I want to deploy my policies). But there is a requirement to deploy a new policy in only one region (say, us-east-1) for all 50 accounts using the existing accounts.yml file. Is there a way to specify the region explicitly in the deployment command, so the policy is not deployed to the other regions mentioned in accounts.yml?
Sample accounts.yml file is below:
---
accounts:
  - account_id: '000000000000'
    name: my-aws-account
    regions:
      - us-east-1
      - eu-central-1
      - ap-south-1
      - ap-southeast-2
    role: arn:aws:iam::xxxxxxx:role/cross_access_role
    vars:
      mail-CC: xxxx@abc.com
      mail-CC_01: yyyy@abc.com
      rate: cron(30 13 ? * MON-FRI *)
    tags:
      - type:production
      - status:deployed
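A hedged sketch, assuming c7n-org's `-r`/`--region` flag behaves as documented (overriding the per-account region lists in accounts.yml for that invocation):

```shell
# Deploy only to us-east-1, ignoring the regions listed per-account
# in accounts.yml; the -r flag overrides them for this run.
c7n-org run -c accounts.yml -s output -u policy.yml -r us-east-1
```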
:59.261Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc https://monitoring.us-east-1.amazonaws.com:443 "POST / HTTP/1.1" 200 212
[INFO] 2021-01-07T17:42:59.265Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Start 1 of 1 instances
[DEBUG] 2021-01-07T17:42:59.316Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Starting new HTTPS connection (1): ec2.us-east-1.amazonaws.com:443
[DEBUG] 2021-01-07T17:43:00.48Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc https://ec2.us-east-1.amazonaws.com:443 "POST / HTTP/1.1" 500 None
[DEBUG] 2021-01-07T17:43:00.803Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Resetting dropped connection: ec2.us-east-1.amazonaws.com
[DEBUG] 2021-01-07T17:43:01.381Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc https://ec2.us-east-1.amazonaws.com:443 "POST / HTTP/1.1" 500 None
[DEBUG] 2021-01-07T17:43:01.727Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Resetting dropped connection: ec2.us-east-1.amazonaws.com
[DEBUG] 2021-01-07T17:43:02.884Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc https://ec2.us-east-1.amazonaws.com:443 "POST / HTTP/1.1" 500 None
[DEBUG] 2021-01-07T17:43:05.46Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Resetting dropped connection: ec2.us-east-1.amazonaws.com
[DEBUG] 2021-01-07T17:43:05.652Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc https://ec2.us-east-1.amazonaws.com:443 "POST / HTTP/1.1" 500 None
[DEBUG] 2021-01-07T17:43:11.806Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Resetting dropped connection: ec2.us-east-1.amazonaws.com
[DEBUG] 2021-01-07T17:43:12.334Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc https://ec2.us-east-1.amazonaws.com:443 "POST / HTTP/1.1" 500 None
[ERROR] 2021-01-07T17:43:12.336Z 1c883970-34f2-4cd6-9e0b-7c4e8c0b4fcc Error while executing policy
Traceback (most recent call last):
File "/var/task/c7n/policy.py", line 338, in run
results = a.process(resources)
File "/var/task/c7n/resources/ec2.py", line 769, in process
fails = self.process_instance_set(client, batch, itype, izone)
File "/var/task/c7n/resources/ec2.py", line 788, in process_instance_set
retry(client.start_instances, InstanceIds=instance_ids)
File "/var/task/c7n/utils.py", line 348, in _retry
return func(*args, **kw)
File "/var/runtime/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (InternalError) when calling the StartInstances operation (reached max retries: 4): An internal error has occurredAn error occurred (InternalError) when calling the StartInstances operation (reached max retries: 4): An internal error has occurred: ClientError
Traceback (most recent call last):
File "/var/task/custodian_policy.py", line 4, in run
return handler.dispatch_event(event, context)
File "/var/task/c7n/handler.py", line 109, in dispatch_event
p.push(event, context)
File "/var/task/c7n/policy.py", line 749, in push
return mode.run(event, lambda_ctx)
File "/var/task/c7n/policy.py", line 552, in run
return PullMode.run(self)
File "/var/task/c7n/policy.py", line 338, in run
results = a.process(resources)
File "/var/task/c7n/resources/ec2.py", line 769, in process
fails = self.process_instance_set(client, batch, itype, izone)
File "/var/task/c7n/resources/ec2.py", line 788, in process_instance_set
retry(client.start_instances, InstanceIds=instance_ids)
File "/var/task/c7n/utils.py", line 348, in _retry
return func(*args, **kw)
File "/var/runtime/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (InternalError) when calling the StartInstances operation (reached max retries: 4): An internal error has occurred
Hi All,
I have an accounts.yml with details of 50 AWS accounts. Each account has one or more regions (different for each account) where policies are deployed. I want to deploy the policies for global resources in only one region. Is it possible to do that without using the -r option, or can I make the c7n-org run command pick the first region from the list of regions mentioned in the accounts.yml file?
Hi All,
I have a strange requirement for our infrastructure and thought it would be possible to solve my case with Custodian, but something seems wrong.
my existing policy:
policies:
  - name: s3-tags-check
    resource: s3
    description: |
      Report on S3 buckets that do not meet tag compliance policies
    filters:
      - type: value
        key: Name
        op: not-equal
        value: "tag:Name"
I just need to find all S3 buckets where the Name of the bucket is not equal to the Tags:Name value. Is it possible to do this with CC?
Hi All,
I am trying to create a c7n policy to get SES resources, but I did not find any reference that this resource type exists in the official docs https://cloudcustodian.io/docs/aws/resources/index.html. What I want to achieve is to retrieve the SES resource configuration and invoke a custom Lambda action that does some checks. But I can't find a resource type for SES (like there is for, e.g., s3). Do you know if I can achieve this in a different way, or if the SES resource is under development?
I'm getting the below error while applying a lifecycle rule to my S3 buckets using Cloud Custodian:
[ERROR] ClientError: An error occurred (InvalidRequest) when calling the PutBucketLifecycleConfiguration operation: Filter element can only be used in Lifecycle V2.
My action block in the policy is as below:
- type: configure-lifecycle
  rules:
    - ID: delete-older-versions-of-s3-objects
      Status: Enabled
      Filter:
        Prefix: ""
      NoncurrentVersionExpiration:
        NoncurrentDays: 7
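A hedged guess at the cause: S3 rejects a lifecycle configuration that mixes the legacy V1 rule shape (top-level `Prefix`) with the V2 shape (`Filter`), and since `configure-lifecycle` merges your rule with the bucket's existing rules, a pre-existing V1 rule could trigger this even though the rule above uses `Filter`. One way around it is to restate any existing rules in V2 form in the same action; the second rule below is a hypothetical example of such a restated rule:

```yaml
- type: configure-lifecycle
  rules:
    - ID: delete-older-versions-of-s3-objects
      Status: Enabled
      Filter:
        Prefix: ""
      NoncurrentVersionExpiration:
        NoncurrentDays: 7
    - ID: existing-expiration-rule   # hypothetical pre-existing rule, restated with Filter
      Status: Enabled
      Filter:
        Prefix: "logs/"
      Expiration:
        Days: 365
```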
Hi all
I am new to CC and trying to find out how I can transition SecurityHub findings' compliance_status to "PASSED" or workflow_status to "RESOLVED" using CC.
I have the below policy-1 to find EC2 instances without an Owner tag and set a "WARNING". After the initial findings, I went and tagged the EC2 instances and executed policy-1, but I do not see any change (PASSED/RESOLVED) in the findings.
So I created a policy-2 (it must have the same policy name to work) with compliance_status "PASSED" and executed it to see the compliance status update to "PASSED".
I would like to know if there is a better way to achieve this scenario of updating the findings to "PASSED"/"RESOLVED" without using a duplicate (repeated) policy?
policy-1:
- name: ec2-missing-Owner
  resource: aws.ec2
  filters:
    - State.Name: running
    - "tag:owner": absent
  actions:
    - type: post-finding
      severity_normalized: 30
      compliance_status: WARNING
      types:
        - "Software and Configuration Checks/AWS Security Best Practices/ec2 missing owner"
policy-2:
- name: ec2-missing-Owner
  resource: aws.ec2
  filters:
    - State.Name: running
    - "tag:owner": present
  actions:
    - type: post-finding
      compliance_status: PASSED
      types:
        - "Software and Configuration Checks/AWS Security Best Practices/ec2 missing owner"
policies:
  - name: cp-ami
    resource: ami
    filters:
      - type: value
        key: tag:environment
        value: prod
      - type: value
        key: tag:backup_tier
        value: bronze
      - type: value
        key: tag:use
        value: seige
    actions:
      - type: copy
        region: us-west-2
      - type: copy-related-tag
        resource: ami
        key: ImageId
        tags: "*"
Can `post-finding` under `actions` post the findings to a different account?
- name: ec2-missing-Owner
  resource: aws.ec2
  filters:
    - State.Name: running
    - "tag:owner": absent
  #### Can the above filter/check run on AWS account 1234567890, and ####
  #### can the findings be posted to the SecurityHub of account 0987654321? ####
  actions:
    - type: post-finding
      severity_normalized: 30
      compliance_status: WARNING
      types:
        - "Software and Configuration Checks/AWS Security Best Practices/ec2 missing owner"
I would like to use the azure-event-grid execution method on an event grid that has other subscriptions publishing to it. When performing actions on filtered events, I would like to take that action on the originating subscription. In AWS, this can be accomplished via the member-role execution option in the comparable cloudtrail execution method, which assumes roles (using the {account_id} replacement). Is there any way to switch subscriptions for actions?
I'm curious because the Event Grid Functions documentation states, "Currently, Event Grid Functions are only supported at the subscription level." and I can't find any documentation on the execution-options object. Any insights?
Alerting on root login from master account where cloudtrail is aggregated?
We are using the AWS Organizations feature to easily aggregate cloudtrail logs into a master account. Previously we had used a normal c7n mode cloudtrail to track these root logins. However, it appears that this aggregation cannot be used to track these events from member accounts. The aggregated events are flowing into cloudwatch logs too, do we need to alert from this data stream? Is it even possible to deploy an event based alert via c7n that creates cloudwatch event rule against cloudwatch logs? Here is our old method we ran against all accounts:
mode:
  type: cloudtrail
  role: arn:aws:iam::{account_id}:role/custodian-lambda-role
  events:
    - ConsoleLogin
filters:
  - type: event
    key: "detail.userIdentity.type"
    value_type: swap
    op: in
    value: Root
actions:
Hi, I am using Cloud Custodian to filter my EC2 AMIs. I already have a custodian policy as follows:
- name: ec2-ami-deregister-by-tag-filter
  resource: ami
  comment: |
    Check for EC2 AMIs which do not have the following tags and deregister them.
  filters:
    - "tag:Owner": absent
    - "tag:Environment": absent
    - "tag:Purpose": absent
    - "tag:Retention": absent
  actions:
    - type: deregister
The problem with the above policy is that all tags need a capitalized tag name (i.e. Owner, Environment, Purpose, Retention). I am planning to add some flexibility, allowing users to tag without capitalizing the tag name (i.e. owner, environment, purpose, retention). For that, I have added some and/or conditions, but they are not working:
- name: ec2-ami-deregister-by-tag-filter
  resource: ami
  comment: |
    Check for EC2 AMIs which do not have the following tags and deregister them.
  filters:
    - and:
        - "tag:Owner": absent
        - "tag:Environment": absent
        - "tag:Purpose": absent
        - "tag:Retention": absent
    - or:
        - "tag:owner": absent
        - "tag:environment": absent
        - "tag:purpose": absent
        - "tag:retention": absent
  actions:
    - type: deregister
What changes are required in the above policy to achieve my goal?
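A hedged sketch of one possible fix, assuming the intent is "deregister only when, for each required tag, neither casing is present": the combined and/or above still requires all capitalized tags to be absent at once, whereas grouping the two casings of each tag in its own `and` block treats a tag as satisfied when either casing exists:

```yaml
- name: ec2-ami-deregister-by-tag-filter
  resource: ami
  comment: |
    Deregister AMIs that are missing every required tag in either casing.
  filters:
    - and:
        - "tag:Owner": absent
        - "tag:owner": absent
    - and:
        - "tag:Environment": absent
        - "tag:environment": absent
    - and:
        - "tag:Purpose": absent
        - "tag:purpose": absent
    - and:
        - "tag:Retention": absent
        - "tag:retention": absent
  actions:
    - type: deregister
```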
Guys,
I have this policy to exclude security groups whose Name tag does not contain "glue", but I still see that string in the email I get.
policies:
I have experience using c7n, but typically when it comes to just managing a few accounts. My team is currently decommissioning our existing governance solution and going to implement c7n across the organization which currently has 200+ AWS accounts.
We were planning on using c7n-org to distribute CloudTrail-based lambdas as well as a few daily periodic ones (kind of like a custodian sweep of things that may have been missed, e.g. an AWS issue with EventBridge). After reading through a number of conversations here, it is obvious that many others are doing something similar. However, it seems there is a trade-off between a centralized run (event-based lambdas in the master account along with c7n-org polling) and a distributed setup across accounts, and I am having a hard time determining whether we should continue with the architecture path we have chosen.
Any help regarding the dilemma I am in, would be greatly appreciated.
Hello, I am trying to create a CC policy to retrieve a report of all IAM users in all the AWS accounts who have an access-key creation date greater than 90 days ago. Here's the policy, written based on the documentation:
policies:
  - name: iam-user-access-keys-older-than-90-days
    description: Retrieve all IAM users who have access keys older than 90 days
    resource: iam-user
    filters:
      - type: access-key
        key: Status
        value: Active
      - type: access-key
        match-operator: and
        key: CreateDate
        op: greater-than
        value: 90
        value_type: age
The command used to create the report is
c7n-org report -c ~/accounts.yml -s output --region all -u iam-user-audit.yml
No report is generated when the command is executed. I have checked the AWS accounts, and multiple accounts have IAM users with access keys created more than 90 days ago. No errors are generated, but the report is blank, as shown below:
Account,Region,Policy,UserName,CreateDate
Is the CC policy correct to retrieve the list of IAM user accounts?
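One thing worth checking (a hedged suggestion, not a confirmed diagnosis): `c7n-org report` reads the resource records written to the output directory by a previous run, so a blank report can simply mean the policy has not yet been executed against that output directory:

```shell
# Run the policy first so resource records are written under ./output,
# then generate the report from the same output directory.
c7n-org run -c ~/accounts.yml -s output -u iam-user-audit.yml
c7n-org report -c ~/accounts.yml -s output -u iam-user-audit.yml
```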
Hi, I am trying to catch S3 buckets with cross-environment access, i.e. any Test or Prod environment S3 buckets that are accessible from lower-environment accounts. We have a different OU for each environment's accounts. I tried to catch this using the cross-account filter with a whitelisted OU.
policies:
  - name: core-s3-bucket-cross-account
    resource: s3
    filters:
      - type: cross-account
        whitelist_orgids:
          - ou-xxx-xxxxx
But it seems it only catches buckets that explicitly mention the OU in the bucket access policy; it does not catch a bucket that grants access to a specific account within the OU. Is there any way I can catch cross-environment buckets?