@shaswatgupta I've got aws-cli version 1.15.4, and when I run
aws acm request-certificate help
I do see the --validation-method option. What is the exact error you're getting?
aws acm request-certificate --domain-name example.com --validation-method DNS works just fine for me
@angrychimp Thanks for replying! I think the crux of it is authenticating for "least privilege".
As an example, I have a pipeline that runs on-prem from which I need to deploy a lambda. I want to use a least privilege role that only has access to create/update that lambda (or, at least some measure of least privilege).
I have currently built this pipeline with a manual gate for a user to supply STS keys, but I'm working towards fully automating the acquisition of AWS keys to push the code to S3, then passing those keys to TF to do the actual provisioning of the Lambda. (I wouldn't want to give our CI server keys with full access to our AWS accounts.)
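If it helps, the assume-role step can be sketched like this; the role ARN, account ID, and session name are all placeholders I made up, not anything from your setup:

```shell
# Sketch: assume a least-privilege deploy role and export the temporary
# credentials. Role ARN and session name below are invented placeholders.
ROLE_ARN='arn:aws:iam::123456789012:role/lambda-deploy-only'
SESSION_NAME="ci-deploy-$(date +%s)"

if command -v aws >/dev/null 2>&1; then
  # --query pulls the three credential fields out as tab-separated text.
  read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<EOF
$(aws sts assume-role \
    --role-arn "$ROLE_ARN" \
    --role-session-name "$SESSION_NAME" \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
EOF
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
fi
```

The temporary credentials expire on their own (an hour by default), so the CI server never holds long-lived keys, and both the CLI and Terraform pick up the AWS_* environment variables automatically.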
I have also created an AWS support ticket when I posted here where the support person remarked that utilizing STS would be the best option and to use a SAML assertion for authentication (but, they said acquiring the SAML assertion response is outside their purview, understandably).
But, automating this authentication step is where I'm at. I'm exploring methods for doing this; otherwise I'm going back to the security team and submitting that "we don't seem to gain much by storing a SAML assertion instead of IAM User keys".
@bogeylnj There has to be trust at some point along the chain, right? If you're looking for actual pipeline automation, I see there being two options. If you want to use external authentication you could use Directory Service to tie an external user (such as an Active Directory service account) to a role, which in turn could allow for STS. But then you have to figure out how to allow your pipeline resources to authenticate with AD (again, as an example), so you still have credentials hard coded somewhere.
In my opinion the safest thing to do is create an automation IAM user with limited access to what you need - S3, Lambda, etc. - and really restrict access to the resources it needs to manage. You can use ARN conditions to make sure you're only touching relevant resources.
You have to hard-code the IAM API key/secret into your pipeline, but at least it can only do exactly what you want it to do. You can use CloudTrail to audit activity and ensure no one is using your pipeline incorrectly. Then just make sure you control access to where ever those keys are stored.
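To make that concrete, a least-privilege policy for such an automation user might look like the sketch below; the function name, bucket, account ID, and user name are all invented placeholders, not a recommendation of exact actions for any real pipeline:

```shell
# Write a narrowly scoped policy document; every resource name here is a
# placeholder for illustration only.
cat > /tmp/lambda-deploy-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeployOnePipelinesLambdaOnly",
      "Effect": "Allow",
      "Action": [
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "arn:aws:lambda:us-west-2:123456789012:function:my-pipeline-fn"
    },
    {
      "Sid": "ReadWriteDeployArtifacts",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-deploy-bucket/artifacts/*"
    }
  ]
}
EOF

# Attach it to the (hypothetical) automation user.
if command -v aws >/dev/null 2>&1; then
  aws iam put-user-policy \
    --user-name ci-deployer \
    --policy-name lambda-deploy-only \
    --policy-document file:///tmp/lambda-deploy-policy.json
fi
```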
I completely agree with your first sentence and IAM User recommendation; but I am not savvy enough at this point to confront the security requirements. (Thanks again for your replies - very helpful)
Our AWS Console access is being controlled via federated identities and conditional policies: Okta MFA > STS > IAM role with specific resources and conditions. So, everything you describe makes sense.
When I question the security requirement, I think persistent keys will be one of the cons they present with IAM Users. But, this is something that I want to clarify more and more, lately. And, your suggestion supports that.
Some concerns I see remain, though. In your experience, are IAM Users used predominantly, or are others attempting to avoid them? Most things I find just talk about IAM Users with the required access (which, in the case of a CI/CD server, might be quite broad).
I'm following "Cross-Account Log Data Sharing with Subscriptions" from the docs at https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateDestination.html and I keep getting:
An error occurred (InvalidParameterException) when calling the PutDestination operation: Could not deliver test message to specified destination. Check if the destination is valid.
This is my command:
aws logs put-destination --destination-name snplydst --target-arn arn:aws:kinesis:region:999999999999:stream/RecipientStream --role-arn arn:aws:iam::999999999999:role/CWLtoKinesisRole
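One common cause of that InvalidParameterException is the role's trust policy: CloudWatch Logs has to be able to assume CWLtoKinesisRole before it can deliver the test message. It's also worth double-checking that the target ARN uses a real region rather than the literal word "region", that the stream is ACTIVE, and that the role's permissions allow kinesis:PutRecord. A sketch of what the trust policy should contain (role name taken from your command; this assumes the modern logs.amazonaws.com service principal):

```shell
# Trust policy allowing the CloudWatch Logs service to assume the role.
cat > /tmp/cwl-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Apply it to the role from the command above (guarded so it only runs
# where the CLI is installed and credentialed).
if command -v aws >/dev/null 2>&1; then
  aws iam update-assume-role-policy \
    --role-name CWLtoKinesisRole \
    --policy-document file:///tmp/cwl-trust-policy.json
fi
```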
Hey people, I am trying to create an AMI from an OVA and have been running into trouble. I am using the Amazon Linux box.
I am now receiving this error after inputting this command into the CLI:
aws ec2 import-image --description "Vormetric DSM 6.0" --disk-containers file://containers.json
This is the error message that then appears:
Could not connect to the endpoint URL: "https://ec2.us.east.1.amazonaws.com/"
I did a reset; the original error message was that it wasn't locating the region us-east-1, even after running:
aws configure set region us-east-1
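For what it's worth, the endpoint in the error, ec2.us.east.1.amazonaws.com, is the giveaway: the CLI splices the configured region into the hostname, so a region stored as us.east.1 (dots instead of dashes) produces exactly that unreachable URL. A purely local sanity check, with the region value hard-coded here as an example (substitute the output of aws configure get region):

```shell
# Check that a region string looks like "us-east-1", not "us.east.1".
REGION='us-east-1'   # example value; substitute: aws configure get region

case "$REGION" in
  *.*)             REGION_STATUS="has-dots" ;;   # dots mean a bad endpoint
  [a-z]*-*-[0-9]*) REGION_STATUS="ok" ;;         # e.g. us-east-1, eu-west-2
  *)               REGION_STATUS="unusual" ;;
esac
echo "region '$REGION': $REGION_STATUS"
```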
$ aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-west-2:044443245626:targetgroup/T-1/62a3060e529c7e69
An error occurred (ValidationError) when calling the DescribeTags operation: 'arn:aws:elasticloadbalancing:us-west-2:044443245626:targetgroup/T-1/62a3060e529c7e69' must be in ARN format
hey all. Can I get Secrets Manager values with a trailing slash from Parameter Store? https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html shows examples of a single key without a path. These are the errors I get when trying to get the SM values with and without a trailing slash (/dev/my/key is the credential in SM; also note the "null" in the first error):
$ aws ssm get-parameter --name /aws/reference/secretsmanager/dev/my/key --with-decryption
An error occurred (ParameterNotFound) when calling the GetParameter operation: An error occurred (ParameterNotFound) when referencing Secrets Manager: Secret aws/reference/secretsmanager/dev/my/keynull not found.
$ aws ssm get-parameter --name /aws/reference/secretsmanager//dev/my/key --with-decryption
An error occurred (ValidationException) when calling the GetParameter operation: Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_
It works for an existing SM value created without trailing slash. Should I escape the trailing slash somehow?
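Not sure about escaping the slash, but if the goal is just to read the value, you could bypass the Parameter Store path restrictions and call Secrets Manager directly; the secret name here is just your example value:

```shell
# Fetch a secret straight from Secrets Manager instead of through the
# /aws/reference/secretsmanager/ path in Parameter Store.
SECRET_ID='dev/my/key'   # example name from the question above

if command -v aws >/dev/null 2>&1; then
  aws secretsmanager get-secret-value \
    --secret-id "$SECRET_ID" \
    --query SecretString \
    --output text
fi
```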
Get-ECRLoginCommand : The security token included in the request is invalid.
I checked my creds and all seems to be okay, and when I fire the same get-creds call on my *nix instance it seems to work.
I have TestSyncCommand.test_warning_on_invalid_timestamp failing. It looks like the test utilities are trying to generate an invalid timestamp, but it's not throwing the expected OverflowError on this system. Does this sound possible, and/or does anyone have a working OSX setup that I could compare notes with?
aws cloudformation create-stack --stack-name my-new-stack --template-url https://s3-us-west-2.amazonaws.com/path/to/the/template --capabilities CAPABILITY_NAMED_IAM --region us-west-2 --parameters file:///path/to/json
Is there a way to get aws s3 sync to throttle down disk I/O? My intention was to sync ~600 GB of pictures to S3, but every time I try to do that, the disk usage jumps up to 100% and makes a huge impact on the site operation. I saw in strace output that the tool walks recursively over the directory tree and collects stats, so I'd love to find a way to decrease the rate of those operations, if possible.
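There's no flag on sync itself for disk I/O, but two levers may help: the documented s3 config settings max_concurrent_requests and max_bandwidth (these throttle transfers, not the stat() walk), and running the whole command under ionice so the directory scan only gets idle disk time on Linux. The bucket and paths below are placeholders:

```shell
# Throttle transfer parallelism/bandwidth via the CLI's s3 settings,
# then run the sync at idle I/O priority. Paths/bucket are placeholders.
SYNC_CMD="ionice -c3 nice -n 19 aws s3 sync /var/www/pictures s3://my-backup-bucket/pictures"

if command -v aws >/dev/null 2>&1; then
  aws configure set default.s3.max_concurrent_requests 2
  aws configure set default.s3.max_bandwidth 20MB/s
fi

# ionice -c3 = "idle" scheduling class: the sync only gets disk time
# that no other process wants. Only run where both tools exist.
if command -v ionice >/dev/null 2>&1 && command -v aws >/dev/null 2>&1; then
  $SYNC_CMD
fi
```

Note that max_bandwidth limits network throughput, not the recursive stat() pass you saw in strace; ionice is the main lever for that part.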