aws ecs register-task-definition, and
aws ecs update-service commands. The results look as expected: a new image is pushed, a new task definition revision is created, and the service is updated with the new revision, but the task (even though it's tagged latest) still uses the old image. If I pull the newly pushed image to my local machine I can see the changes, but not on the service.
aws ecs register-task-definition. I don't think it's a race condition either, because I can run these commands multiple times and it still uses the same image I started with.
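One thing worth ruling out here: even with a new task definition revision, running tasks referencing a :latest tag can keep a stale image until they are actually replaced. A hedged sketch (the cluster and service names are placeholders, not from the discussion above) using the `--force-new-deployment` flag of `aws ecs update-service`:

```shell
# Hedged sketch: "my-cluster" and "my-service" are made-up placeholders.
force_redeploy() {
  local cluster=$1 service=$2
  # --force-new-deployment restarts the service's tasks even when the
  # task definition revision is unchanged, which makes ECS re-pull the
  # :latest image onto the container instances.
  aws ecs update-service \
    --cluster "$cluster" \
    --service "$service" \
    --force-new-deployment
}
```

Calling `force_redeploy my-cluster my-service` after pushing a new :latest image is one way to rule out stale-image caching as the cause.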
s3api. You can use
aws s3 ls <bucket>/<prefix> to list files in your source, then iterate over those to copy them to a target using
aws s3api copy-object --copy-source <bucket>/<prefix> --bucket <target_bucket> --key <target_prefix>
aws s3 ls --recursive
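The list-then-copy idea above can be sketched as a small loop. Assumptions worth flagging: the bucket and prefix names are placeholders, and the `awk '{print $4}'` parse assumes keys without spaces (since `aws s3 ls` prints date, time, size, then key):

```shell
# Hedged sketch of listing a prefix and server-side copying each object.
# Keys containing spaces would need a sturdier parse than awk '{print $4}'.
copy_prefix() {
  local src_bucket=$1 src_prefix=$2 tgt_bucket=$3 tgt_prefix=$4
  aws s3 ls "s3://$src_bucket/$src_prefix" | awk '{print $4}' |
  while read -r key; do
    [ -n "$key" ] || continue
    # copy-object is a server-side copy; nothing is downloaded locally.
    aws s3api copy-object \
      --copy-source "$src_bucket/$src_prefix$key" \
      --bucket "$tgt_bucket" \
      --key "$tgt_prefix$key"
  done
}
```

For a straight prefix-to-prefix copy, `aws s3 sync s3://src-bucket/prefix s3://tgt-bucket/prefix` does the same thing in one command; the loop is mainly useful when you need per-object logic.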
redis, key <=> value file
AWS_SECRET_KEY, but the aws command line still complains:
Unable to locate credentials. You can configure credentials by running "aws configure".
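Worth noting for that error: the CLI does not read a variable named AWS_SECRET_KEY. The environment variables it honors are AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (plus AWS_SESSION_TOKEN for temporary credentials). The values below are obviously placeholders:

```shell
# Placeholder values only -- substitute real credentials, or better,
# use `aws configure` / the shared credentials file instead.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKey"
# Only needed when using temporary (STS) credentials:
# export AWS_SESSION_TOKEN="exampleSessionToken"
```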
aws acm request-certificate help does not show any such option even after installing awscli version 1.15.37, which was released a few hours ago. I tried using --validation-method DNS but awscli throws an error.
@shaswatgupta I've got aws-cli version 1.15.4 and when I run
aws acm request-certificate help I do see the --validation-method option.
What is the exact error you're getting?
aws acm request-certificate --domain-name example.com --validation-method DNS works just fine for me
@angrychimp Thanks for replying! I think the crux of it is authenticating for "least privilege".
As an example, I have a pipeline that runs on-prem from which I need to deploy a lambda. I want to use a least privilege role that only has access to create/update that lambda (or, at least some measure of least privilege).
I have created this pipeline currently with a manual gate for a user to supply STS keys, but am working towards fully automating the acquisition of AWS keys to push the code to S3 and then passing those keys to TF to do the actual provisioning of the lambda (I wouldn't want to give our CI server keys with full access to AWS accounts).
I also created an AWS support ticket when I posted here; the support person remarked that utilizing STS would be the best option and to use a SAML assertion for authentication (but they said acquiring the SAML assertion response is outside their purview, understandably).
But, automating this authentication step is where I'm at. I'm exploring methods for doing this, or I'm going back to the security team and submitting that 'we don't seem to gain much by storing a SAML assertion instead of IAM User keys'.
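If the team does land on STS, the mechanical part of the automation is small. A hedged sketch (the role ARN and session name are invented for illustration) of assuming a role and exporting the temporary credentials for subsequent aws or terraform calls:

```shell
# Hedged sketch: the role ARN used later is hypothetical. Assumes the
# caller already holds credentials allowed to call sts:AssumeRole.
assume_role() {
  local role_arn=$1 session_name=$2
  # --query extracts the three credential fields as tab-separated text.
  read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<EOF
$(aws sts assume-role \
    --role-arn "$role_arn" \
    --role-session-name "$session_name" \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
EOF
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}
```

Usage would look like `assume_role "arn:aws:iam::123456789012:role/ci-lambda-deploy" "pipeline-run"` (hypothetical names), after which every aws call in the pipeline runs under the limited role until the credentials expire.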
@bogeylnj There has to be trust at some point along the chain, right? If you're looking for actual pipeline automation, I see there being two options. If you want to use external authentication you could use Directory Service to tie an external user (such as an Active Directory service account) to a role, which in turn could allow for STS. But then you have to figure out how to allow your pipeline resources to authenticate with AD (again, as an example), so you still have credentials hard coded somewhere.
In my opinion the safest thing to do is create an automation IAM user with limited access to what you need - S3, Lambda, etc. - and really restrict access to the resources it needs to manage. You can use ARN conditions to make sure you're only touching relevant resources.
You have to hard-code the IAM API key/secret into your pipeline, but at least it can only do exactly what you want it to do. You can use CloudTrail to audit activity and ensure no one is using your pipeline incorrectly. Then just make sure you control access to wherever those keys are stored.
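The "restrict by ARN" idea above can be made concrete with a policy document. Everything here (account id, function name, bucket) is a made-up illustration of the shape, not a recommendation of these exact actions:

```shell
# Hypothetical least-privilege policy for a pipeline user: only the one
# Lambda function it deploys and the one artifact bucket it stages through.
POLICY='{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["lambda:UpdateFunctionCode", "lambda:UpdateFunctionConfiguration"],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-deployed-fn"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-artifact-bucket/*"
    }
  ]
}'

create_pipeline_policy() {
  aws iam create-policy \
    --policy-name pipeline-least-privilege \
    --policy-document "$POLICY"
}
```

Attach the resulting policy to the automation IAM user, and CloudTrail will show exactly which of those two resources each pipeline run touched.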
I completely agree with your first sentence and IAM User recommendation; but I am not savvy enough at this point to confront the security requirements. (Thanks again for your replies - very helpful)
Our AWS Console access is controlled via Federated Identities and Conditional Policies:
Okta MFA > STS > IAM role with specific resources and conditions. So, everything you describe makes sense.
When I question the security requirement, I think persistent keys will be one of the cons they raise against IAM Users. But this is something I've been wanting to clarify more and more lately, and your suggestion supports that.
Some concerns I see are:
In your experience, are IAM Users used predominantly or are others attempting to avoid them? Most things I find just talk about IAM Users with required access (which in the case of a CI/CD server might be
"Cross-Account Log Data Sharing with Subscriptions" from the docs at
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateDestination.html and I keep getting
An error occurred (InvalidParameterException) when calling the PutDestination operation: Could not deliver test message to specified destination. Check if the destination is valid.
This is my command:
aws logs put-destination --destination-name snplydst --target-arn arn:aws:kinesis:region:999999999999:stream/RecipientStream --role-arn arn:aws:iam::999999999999:role/CWLtoKinesisRole
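Two prerequisites commonly behind that InvalidParameterException, sketched below using the names from the failing command (treating them as likely causes, not a confirmed diagnosis): the Kinesis stream must be ACTIVE before put-destination can deliver its test message, and the role must be assumable by the CloudWatch Logs service principal:

```shell
# Sketch: check the stream is ACTIVE and inspect the role's trust policy.
# Stream and role names mirror the placeholders in the failing command.
check_destination_prereqs() {
  # put-destination's test message fails while the stream is still CREATING.
  aws kinesis describe-stream \
    --stream-name RecipientStream \
    --query 'StreamDescription.StreamStatus' \
    --output text

  # The trust policy must allow the CloudWatch Logs service principal
  # (logs.<region>.amazonaws.com) to assume CWLtoKinesisRole.
  aws iam get-role \
    --role-name CWLtoKinesisRole \
    --query 'Role.AssumeRolePolicyDocument'
}
```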