You can call #terminate on an instance or on a batch of instances:

ec2 = Aws::EC2::Resource.new

# single instance terminate
ec2.instance('id').terminate

# batch terminate
ec2.instances(filters: […]).terminate
With regard to Instances, would you be opposed to exposing a "Delete" action on the Instance resource to also map to "TerminateInstances", in addition to the existing "Terminate" action that maps to "TerminateInstances"?
It just seems more CRUD-like and consistent for resources to have a "Create" and a "Delete" pair, rather than instances having "CreateInstances" and "TerminateInstances".
For example, here: https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/apis/EC2.resources.json#L14-L23
CreateInstances gets mapped to RunInstances. It just seems like having the "Delete" complement would be nice. Something like this in the actions for an Instance resource:
"Delete": {
  "request": {
    "operation": "TerminateInstances",
    "params": [
      { "target": "InstanceIds[0]", "source": "identifier", "name": "Id" }
    ]
  }
}
If you're not opposed, I could submit a PR. If you are opposed, no worries, just thought we'd throw that idea out there. :)
I expected #wait_until to be run after the process was complete, and that's how I've seen other async waiters implemented.
That's not what I meant. What I don't like is having two entries in the resource definition for two actions that are exact copies of each other. One action should be an alias, not a straight-up copy, of the other. As I pointed out, we don't have aliases.
With regard to the create method being an alias, it is not: it is the only action present that calls RunInstances. I would make the same argument if the request were to add a "RunInstances" action that did the same thing as the current create.
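To make the alias-versus-copy distinction concrete, here is a pure-Ruby sketch: an alias shares one definition, while a copied resource entry would duplicate it. The Instance class and return value below are illustrative only, not SDK code.

```ruby
# Illustrative only: shows aliasing one action to another at the Ruby
# level, instead of keeping two identical definitions.
class Instance
  def terminate
    "TerminateInstances"
  end

  # One name, one definition: #delete delegates to the same method body.
  alias_method :delete, :terminate
end

Instance.new.delete # behaves exactly like #terminate
```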
require 'aws-sdk'
s3 = Aws::S3::Resource.new(
  region: 'us-east-1', # this should be the region your bucket was created in
  credentials: Aws::Credentials.new('YOUR-ACCESS-KEY', 'YOUR-SECRET-ACCESS-KEY')
)
s3.bucket('bucket-name').object('target-filename').upload_file('/path/to/source/file')
You can also use Aws::S3::Client to upload files, but you have to manage the upload more yourself. You have to choose between using #put_object or using the multipart upload APIs. Also, you need to open the source file for reading, etc. The #upload_file method will intelligently switch to a managed multipart upload, using multiple threads for performance on large objects.
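A sketch of that switching behavior. The 15 MB threshold matches the v2 SDK's default as far as I know, but treat the constant and the upload_strategy helper as illustrative assumptions rather than SDK API:

```ruby
# Simplified model of how #upload_file picks a strategy by object size.
# MULTIPART_THRESHOLD and upload_strategy are illustrative names, not
# part of the SDK; the real threshold may differ.
MULTIPART_THRESHOLD = 15 * 1024 * 1024 # assume ~15 MB

def upload_strategy(file_size)
  file_size >= MULTIPART_THRESHOLD ? :multipart : :put_object
end

upload_strategy(1 * 1024 * 1024)   # small file => :put_object
upload_strategy(100 * 1024 * 1024) # large file => :multipart
```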
I am back again, still missing something. This seems like it should work, but it doesn't: I get an AccessDenied error. I am trying to just list the objects in a bucket; these are objects that I have successfully put there using my Ruby script (thanks, Trevor!). This is the script I am using:
credentials = Aws::Credentials.new(@aws_access_key_id, @aws_secret_access_key)
client = Aws::S3::Client.new(
  region: 'us-east-1', # this should be the region your bucket was created in
  credentials: credentials
)
client.list_objects(bucket: "<bucket-name>")
It is the last line that throws the error:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
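One common cause of this AccessDenied, worth checking (not a certain diagnosis): ListObjects requires the s3:ListBucket permission granted on the bucket ARN itself, whereas put/get permissions are typically granted on the `<bucket-name>/*` object ARNs, so credentials that can upload objects can still be denied a listing. A minimal policy statement along these lines would allow it (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket-name>"
    }
  ]
}
```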
#upload_file. I was hoping to find more information regarding that in the v2 SDK documentation, but was unable to. Could you point me to the correct place? Also, what would be the proper way to use Aws::S3::Client with multipart uploads? We've been struggling with how to create and manage parts for the object.
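For the multipart question, the manual flow with Aws::S3::Client (v2) is create_multipart_upload, then upload_part for each chunk, then complete_multipart_upload. Below is a runnable sketch of the part-chunking side only; the client calls appear in comments, and build_parts is a hypothetical helper, not an SDK method:

```ruby
require 'stringio'

# Parts must be at least 5 MB, except the last one.
PART_SIZE = 5 * 1024 * 1024

# Reads an IO in fixed-size chunks, numbering each part. In a real
# upload you would send each chunk with client.upload_part and keep the
# returned ETag alongside its part number for complete_multipart_upload.
def build_parts(io, part_size = PART_SIZE)
  parts = []
  part_number = 1
  while (chunk = io.read(part_size))
    # Real flow (commented out, needs a bucket and an upload_id):
    #   resp = client.upload_part(bucket: bucket, key: key,
    #     upload_id: upload_id, part_number: part_number, body: chunk)
    #   record resp.etag with this part_number
    parts << { part_number: part_number, size: chunk.bytesize }
    part_number += 1
  end
  # Real flow: pass { parts: [{ part_number: ..., etag: ... }, ...] }
  # to client.complete_multipart_upload.
  parts
end

build_parts(StringIO.new('a' * 12), 5) # three parts: sizes 5, 5, 2
```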