@BrentWheeldon There is no upgrading guide currently. I hope to put one together soon. In the meantime, you can use v1 and v2 in the same application:
# in your Gemfile
gem 'aws-sdk', '~> 2'
gem 'aws-sdk-v1'

# in your app
require 'aws-sdk'
require 'aws-sdk-v1'

AWS # v1 namespace
Aws # v2 namespace
This allows you to upgrade gradually.
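For example, a minimal sketch of the two SDKs running side by side (the region is a placeholder, and credentials are assumed to come from the environment or shared config):

require 'aws-sdk'     # v2, defines the Aws namespace
require 'aws-sdk-v1'  # v1, defines the AWS namespace

s3_v1 = AWS::S3.new                               # v1 interface
s3_v2 = Aws::S3::Client.new(region: 'us-east-1')  # v2 client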
@trevorrowe our main use case is something like:
s3 = AWS::S3.new(access_key_id: key_id, secret_access_key: access_key)
bucket = s3.buckets[bucket_name]
bucket.objects["file_name"].write(contents)
Seems like we now need to use AWS::S3::Client, specify the region as well as our credentials, and then use the put_object method rather than everything being a hash?
@BrentWheeldon The client class is optional. You can use Aws::S3::Resource. That said, there is no longer a default region. The old default was 'us-east-1', so specifying that region preserves the old behavior:
s3 = Aws::S3::Resource.new(
  access_key_id: key_id,
  secret_access_key: access_key,
  region: 'us-east-1')
Using the s3 resource object, you can upload an object in one of two ways. You can use the #put method, or you can use the #upload_file method. Using #put will upload the entire object in a single request, but it is not recommended for large objects. If you prefer to upload a file from disk, use the #upload_file method.
obj = s3.bucket(bucket_name).object('file_name')

# single request, up to 5GB, contents can be a string or an IO object
obj.put(body: contents)

# managed upload; intelligently uses the multipart API for larger objects,
# with better reliability and performance
obj.upload_file('/path/to/source/file')
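Since #put also accepts an IO object, you can stream a file in a single request as well; a minimal sketch (the file path is a placeholder):

File.open('/path/to/source/file', 'rb') do |file|
  obj.put(body: file) # streams the file contents in one request
end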
s3_bucket.presigned_post(key: …) in v1 gives a fancy object with fields and url that accepts a multipart post. Is there an equivalent in v2? s3_bucket.object(key).presigned_url(:put) gives a string url that stores the body as the object, which isn't ideal.
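For reference, a sketch of the v2 behavior described above (the expires_in value and the raw Net::HTTP PUT are illustrative assumptions):

require 'net/http'

url = s3_bucket.object(key).presigned_url(:put, expires_in: 3600)

# unlike v1's presigned_post, this is a plain URL: the request body
# becomes the object, with no multipart form fields involved
uri = URI.parse(url)
Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.send_request('PUT', uri.request_uri, 'file contents')
end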
Will there be any feature additions to the CloudFormation.resources.json? I am finding some of the actions (like GetTemplate, GetStackPolicy, etc.) are missing, and some of the resource schemas do not have path information.
For example,
"Stacks": {
"request": { "operation": "DescribeStacks" },
"resource": {
"type": "Stack",
"identifiers": [
{ "target": "Name", "source": "response", "path": "Stacks[].StackName" }
]
}
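For context, a definition like this is what drives stack enumeration on the v2 resource interface; a minimal sketch (the region is a placeholder):

cfn = Aws::CloudFormation::Resource.new(region: 'us-east-1')

# backed by the "Stacks" association above: calls DescribeStacks and
# extracts each stack's name via the "Stacks[].StackName" path
cfn.stacks.each do |stack|
  puts stack.name
end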
keyName)?

@nitinmohan87 The casing of attributes as they appear in the EC2.resources.json document is consistent with the names as they appear in the EC2.api.json document. They do not reference the names as they appear on the wire. This is standard across all of the various AWS protocols. All of the AWS SDKs (Java, .NET, PHP, Ruby, Python, CLI, JavaScript, and Go) use the member names as they appear in the API instead of the wire format names. For example, when working with Amazon EC2, if you describe instances, you get back over the wire something like (nested in the response):
<reservationSet>
  <item>
    <instanceSet>
      <item>
        <instanceId>i-1234578</instanceId>
        …
      </item>
    </instanceSet>
  </item>
</reservationSet>
…
None of our SDKs represent the “item” in the reservation list, or in the instance list. Also, none of our SDKs return these two lists as “reservationSet” or “instanceSet”. They all use the names “Reservations” and “Instances”. This is how they are documented in the EC2.api.json document. These deltas are not limited to Amazon EC2 either; they appear in many, if not most, of the APIs.
Essentially, the API document describes a webservice’s public interface, with just enough information to marshal and unmarshal requests and responses, given details such as the protocol and the shape and shape-reference traits.
For consistency, the resource models all reference the API names, not the wire format names.
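A short Ruby sketch of what this looks like in practice (the region is a placeholder):

ec2 = Aws::EC2::Client.new(region: 'us-east-1')
resp = ec2.describe_instances

# API member names, not the wire names ("reservationSet"/"instanceSet")
resp.reservations.each do |reservation|
  reservation.instances.each do |instance|
    puts instance.instance_id
  end
end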
resp = sqs.receive_message(
  queue_url: "...",
  attribute_names: ["All"],
  # message_attribute_names: ["MessageAttributeName", '...'],
  max_number_of_messages: 10,
  visibility_timeout: 1,
  wait_time_seconds: 1,
)
aws configure on the command line

Aws::S3::Resource
def exists?(key)
  object(key).metadata # triggers a HEAD Object request; raises if the key is missing
  true
rescue ::Aws::S3::Errors::NotFound
  false
end
s3.client.get_bucket_location(bucket: 'bucketname') && true rescue false
— https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/features/s3/step_definitions.rb#L75
#exists? check methods are on the public backlog (https://github.com/aws/aws-sdk-ruby/blob/master/FEATURE_REQUESTS.md#adding-exists-method-to-resource-classes). There is a reasonable plan in place; it's mostly down to augmenting the resource definitions with a reference to which waiter should be invoked.
@trevorrowe @jbr I have come up with this one for now:
$s3 = Aws::S3::Resource.new(region: 'us-east-1')

def exists?(bucket)
  $s3.bucket(bucket).wait_until_exists do |w|
    w.interval = 1
    w.max_attempts = 1
  end
  true
rescue Aws::Waiters::Errors::WaiterFailed
  false
end
Works well for now.
#exists? is going to be implemented in terms of #wait_until_exists instead of the other way around? I would have expected #wait_until_exists to poll #exists?, instead of the latter being the single-attempt special case of the former.
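For illustration, a sketch of that alternative layering (the method name, parameters, and defaults here are hypothetical, not the SDK's API):

# hypothetical: a waiter built on top of a primitive #exists? check
def wait_until_exists(key, max_attempts: 20, interval: 5)
  max_attempts.times do
    return true if exists?(key)
    sleep interval
  end
  raise "#{key} still does not exist after #{max_attempts} attempts"
end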