require 'aws-sdk'
s3 = Aws::S3::Resource.new(
  region: 'us-east-1', # this should be the region your bucket was created in
  credentials: Aws::Credentials.new('YOUR-ACCESS-KEY', 'YOUR-SECRET-ACCESS-KEY')
)
s3.bucket('bucket-name').object('target-filename').upload_file('/path/to/source/file')
You can also use Aws::S3::Client to upload files, but you have to manage the upload more. You have to choose between using #put_object or the multipart file APIs. Also, you need to open the source file for reading, etc. The #upload_file method will intelligently switch to a managed multipart upload using multiple threads for performance on large objects.
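To make the trade-off concrete, here is a hedged sketch of both approaches, assuming the v2 aws-sdk gem. The method names and the bucket/key/path arguments are placeholders, not from the thread:

```ruby
# Low-level path: Aws::S3::Client#put_object. You open the file and send it
# in a single request yourself; multipart handling would be on you.
def put_with_client(bucket, key, path)
  require 'aws-sdk' # v2 gem
  client = Aws::S3::Client.new(region: 'us-east-1')
  File.open(path, 'rb') do |file|
    client.put_object(bucket: bucket, key: key, body: file)
  end
end

# High-level path: Object#upload_file. The SDK opens the file and switches
# to a managed multipart upload for large objects automatically.
def upload_with_resource(bucket, key, path)
  require 'aws-sdk' # v2 gem
  s3 = Aws::S3::Resource.new(region: 'us-east-1')
  s3.bucket(bucket).object(key).upload_file(path)
end
```

For most file-on-disk uploads the second form is less code and handles large objects for you.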
I am back again, still missing something. This seems like it should work, but it doesn't; I get an AccessDenied error. I am trying to list the objects in a bucket, objects that I successfully put there using my Ruby script (thanks Trevor!). This is the script I am running:
credentials = Aws::Credentials.new(@aws_access_key_id, @aws_secret_access_key)
client = Aws::S3::Client.new(
  region: 'us-east-1', # this should be the region your bucket was created in
  credentials: credentials
)
client.list_objects(bucket: "<bucket-name>")
It is the last line that throws the error:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
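A common cause of AccessDenied here, separate from the thread itself: the s3:ListBucket permission applies to the bucket ARN (arn:aws:s3:::bucket-name), while s3:PutObject and s3:GetObject apply to object ARNs (arn:aws:s3:::bucket-name/*). A policy that grants only object-level actions lets puts succeed while listing fails. A minimal hypothetical policy statement allowing the list call, with bucket-name as a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
```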
Regarding #upload_file: I was hoping to find more information on it in the v2 SDK documentation but was unable to. Could you point me to the correct place? Also, what would be the proper way to use Aws::S3::Client with a multipart upload? We've been struggling with how to create and manage parts for the object.
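For reference, managing parts with the plain client means driving create_multipart_upload, upload_part, and complete_multipart_upload yourself. A hedged sketch, assuming the v2 aws-sdk gem; the bucket/key/path names are placeholders, and parts other than the last must be at least 5MB:

```ruby
PART_SIZE = 5 * 1024 * 1024 # 5MB, the minimum S3 part size

# Pure helper: how many parts a file of +size+ bytes needs.
def part_count(size, part_size = PART_SIZE)
  (size.to_f / part_size).ceil
end

def multipart_upload(bucket, key, path)
  require 'aws-sdk' # v2 gem
  client = Aws::S3::Client.new(region: 'us-east-1')
  resp = client.create_multipart_upload(bucket: bucket, key: key)
  upload_id = resp.upload_id
  parts = []
  File.open(path, 'rb') do |file|
    part_number = 1
    while (chunk = file.read(PART_SIZE))
      part = client.upload_part(
        bucket: bucket, key: key, upload_id: upload_id,
        part_number: part_number, body: chunk)
      parts << { etag: part.etag, part_number: part_number }
      part_number += 1
    end
  end
  # S3 requires the etag and part number of every part to finish the upload.
  client.complete_multipart_upload(
    bucket: bucket, key: key, upload_id: upload_id,
    multipart_upload: { parts: parts })
rescue => e
  # Abort so the partial parts do not keep accruing storage charges.
  client.abort_multipart_upload(bucket: bucket, key: key, upload_id: upload_id) if client && upload_id
  raise e
end
```

This is the bookkeeping that #upload_file does for you, which is why the managed method is preferable when it fits.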
The EC2 documentation lists GroupName and GroupDescription as the required parameters, but the EC2.api.json metadata, and therefore the Ruby client, expects GroupName and Description (instead of GroupDescription). Also, why Name instead of Id? https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/apis/EC2.resources.json#L68
The Aws::SQS::Client#send_message_batch method allows you to send up to 10 messages in a single request. The limit of 10 is imposed by the service, not the client. There is also a maximum payload size of 256KB per request, so if your messages are larger, you may not be able to send 10 in a single request without exceeding that limit.
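A sketch of working around the 10-message cap by slicing a larger list into batches, assuming the v2 aws-sdk gem. The queue URL and message bodies are placeholders, and this sketch does not check the 256KB per-request payload cap:

```ruby
# Pure helper: split message bodies into batch-entry arrays of at most 10,
# the per-request limit imposed by SQS. Entry ids only need to be unique
# within a single batch, so the slice index suffices.
def sqs_batches(bodies)
  bodies.each_slice(10).map do |slice|
    slice.each_with_index.map do |body, i|
      { id: i.to_s, message_body: body }
    end
  end
end

def send_all(queue_url, bodies)
  require 'aws-sdk' # v2 gem
  sqs = Aws::SQS::Client.new(region: 'us-east-1')
  sqs_batches(bodies).each do |entries|
    sqs.send_message_batch(queue_url: queue_url, entries: entries)
  end
end
```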
@nitinmohan87 I'll try to answer these in order. The AWS EC2 documentation documents the wire protocol: the format of the query string parameters as they are marshalled onto an HTTP request, either as a GET query string or via a POST body. That representation is an implementation detail. The AWS SDKs all share a model of the API that gives friendlier, context-sensitive names for many things. Some of these names provide a pluralized name for a list where the wire protocol uses a singular name. Some remove prefixes that can be inferred from context.
When using the Ruby SDK, you should be referencing its API documentation, not service protocol API docs.
We use the aws-sdk Ruby gem and have just upgraded from v1 to v2. I know that this is a breaking change, but I can't seem to find any documentation on what the breaking changes are and how to fix them. Does such a doc exist, or is there one in the works? Thanks!
@BrentWheeldon There is not an upgrading document currently. I hope to put together an upgrading guide soon. You can use v1 and v2 in the same application.
# in your Gemfile
gem 'aws-sdk', '~> 2'
gem 'aws-sdk-v1'
# in your app
require 'aws-sdk'
require 'aws-sdk-v1'
AWS # v1 namespace
Aws # v2 namespace
This allows you to upgrade gradually.
@trevorrowe our main use case is something like:
s3 = AWS::S3.new(access_key_id: key_id, secret_access_key: access_key)
bucket = s3.buckets[bucket_name]
bucket.objects["file_name"].write(contents)
Seems like we now need to use Aws::S3::Client, specify the region as well as our credentials, and then use the put_object method, rather than everything being a hash?
@BrentWheeldon The client class is optional. You can use Aws::S3::Resource instead. That said, there is no longer a default region. The old default was 'us-east-1', so specifying that explicitly should still work:
s3 = Aws::S3::Resource.new(
  access_key_id: key_id,
  secret_access_key: access_key,
  region: 'us-east-1')
Using the s3 resource object, you can upload an object in one of two ways: the #put method or the #upload_file method. Using #put will upload the entire object in a single request, but it is not recommended for large objects. If you prefer to upload a file from disk, use the #upload_file method.
obj = s3.bucket(bucket_name).object('file_name')
# single request, up to 5GB, contents can be a string or an IO object
obj.put(body: contents)
# managed upload, intelligently uses the multipart API for larger objects. Better reliability and performance
obj.upload_file('/path/to/source/file')