Brent Wheeldon
@BrentWheeldon
Thanks, @trevorrowe. We'll just lock to v1 for now. We don't have much code using it, so it won't take us long to upgrade once we know where to start.
Trevor Rowe
@trevorrowe
@BrentWheeldon I can definitely answer specific questions. If you are using v1 clients, almost nothing has changed. Most of what is different is basic configuration and some of the object-oriented interfaces on services.
Brent Wheeldon
@BrentWheeldon

@trevorrowe our main use case is something like:

s3 = AWS::S3.new(access_key_id: key_id, secret_access_key: access_key)
bucket = s3.buckets[bucket_name]
bucket.objects["file_name"].write(contents)

Seems like we now need to use AWS::S3::Client, specify the region as well as our credentials, and then use the put_object method rather than everything being a hash?

Trevor Rowe
@trevorrowe

@BrentWheeldon The client class is optional. You can use Aws::S3::Resource. That said, there is no longer a default region. The old default was 'us-east-1', so passing that explicitly preserves the old behavior:

s3 = Aws::S3::Resource.new(
  access_key_id: key_id,
  secret_access_key: access_key,
  region: 'us-east-1')

Using the s3 resource object, you can upload an object in one of two ways: the #put method or the #upload_file method. Using #put uploads the entire object in a single request, which is not recommended for large objects. If you prefer to upload a file from disk, use the #upload_file method.

obj = s3.bucket(bucket_name).object('file_name')

# single request, up to 5GB, contents can be a string or an IO object
obj.put(body: contents)

# managed upload, intelligently uses the multipart API for larger objects. Better reliability and performance
obj.upload_file('/path/to/source/file')
Brent Wheeldon
@BrentWheeldon
Nice, thanks for that info!
William Wilson
@millenniumbrain
Excuse me, I was wondering how to format the options in the upload_file method in order to grant public read access. I am having trouble finding the string it wants.
Trevor Rowe
@trevorrowe
Same options as Client#put_object, for example upload_file('source', acl: 'public-read')
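A fuller sketch in context (a hedged example; the bucket name, key, and source path here are placeholders):

s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-bucket').object('file_name')

# any Client#put_object option can be passed alongside the source path;
# here the uploaded object is granted public read access
obj.upload_file('/path/to/source/file', acl: 'public-read')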
William Wilson
@millenniumbrain
Ah, thanks!
Will Weaver
@funkymonkeymonk
Hey all, I'm trying to scope out how much effort it would take to add in the ability to use managed policies. Has there been any previous work done with this? Does anyone have any suggestions as to the scope of the change?
Trevor Rowe
@trevorrowe
@funkymonkeymonk If you are talking about adding the resource abstractions for managed policies, I’ve got good news for you. A user of the preview release of Boto3 already did the legwork. The Ruby SDK, Boto3, PHP, .NET, and Java SDKs share these definitions, so once this gets merged upstream, I can add these to the Ruby SDK:
boto/boto3#71
Will Weaver
@funkymonkeymonk
You make me very happy. Thanks @trevorrowe
Trevor Rowe
@trevorrowe
@funkymonkeymonk No problem!
Jacob Rothstein
@jbr
Anyone around who can answer a v1->v2 upgrade question?
s3_bucket.presigned_post(key: …) in v1 gives a fancy object with fields and a url that accepts a multipart post. Is there an equivalent in v2? s3_bucket.object(key).presigned_url(:put) gives a string url that stores the body as the object, which isn't ideal.
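For reference, a sketch of the two calls I mean (the keys here are placeholders):

# v1: returns an AWS::S3::PresignedPost whose #url and #fields can back
# a browser-based multipart POST form
post = s3_bucket.presigned_post(key: 'uploads/example')
post.url    # the form action
post.fields # hidden fields to include in the form

# v2: returns just a signed string; a PUT to it stores the request body
# as the object
url = s3_bucket.object('uploads/example').presigned_url(:put)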
Nitin
@nitinmohan87

Will there be any feature additions to CloudFormation.resources.json? I am finding that some of the actions (like GetTemplate, GetStackPolicy, etc.) are missing, and some of the resource schemas do not have path information.

For example,

 "Stacks": {
  "request": { "operation": "DescribeStacks" },
    "resource": {
      "type": "Stack",
      "identifiers": [
        { "target": "Name", "source": "response", "path": "Stacks[].StackName" }
      ]
  }
Trevor Rowe
@trevorrowe
@nitinmohan87 The existing CloudFormation resource document was submitted by a user. If it is incomplete, then we can definitely amend it to add the missing functionality, especially adding actions or paths.
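For example, adding an action would look something like this (a sketch following the format of the existing definitions; the exact params mapping would need checking against the API document):

"actions": {
  "GetTemplate": {
    "request": {
      "operation": "GetTemplate",
      "params": [
        { "target": "StackName", "source": "identifier", "name": "Name" }
      ]
    }
  }
}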
Nitin
@nitinmohan87
I see, thanks for the info. Also, I am not finding a resources file for ElasticLoadBalancing. Is it being actively worked on?
Trevor Rowe
@trevorrowe
@nitinmohan87 Not that I am aware of.
Nitin
@nitinmohan87
Why are the attributes in EC2.resources.json upper camel-cased (https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/apis/EC2.resources.json#L39) while the EC2 API returns the attributes in the response lower camel-cased (e.g. keyName)?
It may not matter for the Ruby SDK since it transforms everything to snake_case, but wouldn't it be nice if the resources.json files could mimic the AWS response casing as much as possible?
Trevor Rowe
@trevorrowe

@nitinmohan87 The casing of attributes as they appear in the EC2.resources.json document is consistent with the names as they appear in the EC2.api.json document. They do not reference the names as they appear on the wire. This is standard across all of the various AWS protocols. ALL of the AWS SDKs (Java, .NET, PHP, Ruby, Python, CLI, JavaScript, and Go) use the member names as they appear in the API instead of the wire format names. For example, when working with Amazon EC2, if you describe instances, you get back over the wire something like (nested in the response):

<reservationSet>
  <item>
    <instanceSet>
      <item>
        <instanceId>i-1234578</instanceId>
      </item>
    </instanceSet>
  </item>
</reservationSet>

None of our SDKs represents the “item” elements in the reservation list or the instance list. Also, none of our SDKs returns these two lists as “reservationSet” or “instanceSet”; they all use the names “Reservations” and “Instances”, which is how they are documented in the EC2.api.json document. These deltas are not limited to Amazon EC2 either; they appear in many, if not most, of the APIs.

Essentially, the API document describes a webservice’s public interface, with just enough information to marshal and unmarshal requests and responses, given information such as the protocol and the shape and shape-reference traits.

For consistency, the resource models all reference the API names, not the wire format names.
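
To make that concrete, here is a minimal sketch of how those names surface through the v2 Ruby client (the region is arbitrary):

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# the response uses the API member names ("Reservations", "Instances"),
# snake_cased by the Ruby SDK, not the wire names ("reservationSet",
# "instanceSet")
ec2.describe_instances.reservations.each do |reservation|
  reservation.instances.each do |instance|
    puts instance.instance_id # arrives on the wire as <instanceId>
  end
end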

Nitin
@nitinmohan87
That makes sense, thanks for the explanation.
Trevor Rowe
@trevorrowe
@nitinmohan87 I’d be curious at some point, if possible, to see what you are building with the resource definitions. Purely a curiosity of mine, and definitely not necessary. If you ever feel like showing it off, I’d be interested in meeting up in a Google hangout some time.
Nitin
@nitinmohan87
We are building a proxy that exposes a standardized JSON interface and a RESTful interface. I am part of a team working with @tve.
Trevor Rowe
@trevorrowe
@nitinmohan87 I recall that. I’d be curious to see how you are using it and talk to you about your use case in more depth, especially as it pertains to the resource models.
Nitin
@nitinmohan87
Sure, let me discuss with our team and try to get the proper information.
Federico Gonzalez
@fedegl
Can anyone help me fetch SQS messages with the new SDK?
There is no documentation or examples showing how to do it.
resp = sqs.receive_message(
  queue_url: "...",
  attribute_names: ["All"],
  #message_attribute_names: ["MessageAttributeName", '...'],
  max_number_of_messages: 10,
  visibility_timeout: 1,
  wait_time_seconds: 1,
)
Trevor Rowe
@trevorrowe

@fedegl Today I merged a pull request that adds a QueuePoller utility. You can read the examples on the pull request for more information: aws/aws-sdk-ruby#740

This will go out with the next stable release, which I expect some time this week.
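
A minimal sketch of the interface, based on the examples in that pull request (the queue URL is a placeholder):

poller = Aws::SQS::QueuePoller.new('https://sqs.us-east-1.amazonaws.com/12345/my-queue')

# polls in a loop, yielding each received message; a message is deleted
# from the queue when the block returns without raising
poller.poll do |msg|
  puts msg.body
end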

Emelia Smith
@ThisIsMissEm
Has anyone had any luck using DynamoDB Local with the latest SDK?
I keep getting a missing credentials error, despite having done aws configure on the command line
abalakersky
@abalakersky

Previously, with the v1 SDK, it was possible to do something like this to check if a bucket exists:

s3 = AWS::S3.new(credential_provider: credentials)
s3bucket = s3.buckets[bucket_name]
if s3bucket.exists?

Is there a way to do the same with v2?

Jacob Rothstein
@jbr
:+1: @abalakersky I’ve run into the same limitation. Somewhere in the docs they said “soon”.
I can’t remember where I saw that, but I think it was for all Aws::S3::Resources
I’d love a better solution than this, though
  def exists?(key)
    object(key).metadata
    true
  rescue ::Aws::S3::Errors::NotFound
    false
  end
abalakersky
@abalakersky
Thanks @jbr I'll play around with this for now. Better solution would be nice though.
Jacob Rothstein
@jbr
I’m pretty sure that Aws::S3::Object#metadata just sends a HEAD request, which would be the same as #exists?, although using exceptions for flow control is certainly a smell.
Jacob Rothstein
@jbr
@abalakersky I just took a look at the specs; they’re effectively using s3.client.get_bucket_location(bucket: 'bucketname') && true rescue false (https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/features/s3/step_definitions.rb#L75)
Trevor Rowe
@trevorrowe
@miksago I would check where aws configure saved your credentials. I believe older versions of the CLI saved credentials to ~/.aws/config instead of ~/.aws/credentials. The SDK will only load credentials from the latter file.
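If moving the file doesn’t help, here is a workaround sketch: supply credentials explicitly and point the client at DynamoDB Local via the endpoint option (the port assumes DynamoDB Local’s default, and the key values are fakes, which DynamoDB Local accepts):

ddb = Aws::DynamoDB::Client.new(
  region: 'us-east-1',
  endpoint: 'http://localhost:8000', # DynamoDB Local's default port
  credentials: Aws::Credentials.new('fake_key_id', 'fake_secret')
)

puts ddb.list_tables.table_names.inspect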
Trevor Rowe
@trevorrowe
@abalakersky @jbr The #exists? check methods are on the public backlog (https://github.com/aws/aws-sdk-ruby/blob/master/FEATURE_REQUESTS.md#adding-exists-method-to-resource-classes). There is a reasonable plan in place; it's mostly down to augmenting the resource definitions with a reference to which waiter should be invoked.
abalakersky
@abalakersky
Thank you @trevorrowe
Emelia Smith
@ThisIsMissEm
I figured out my problem with credentials. It turns out the awscli that you install through Homebrew stores its credentials in ~/.aws/config rather than ~/.aws/credentials, which is where the Ruby gem expects it.
abalakersky
@abalakersky

@trevorrowe @jbr I have come up with this one for now:

$s3 = Aws::S3::Resource.new(region: 'us-east-1')

def exists?(bucket)
  begin
    $s3.bucket(bucket).wait_until_exists do |w|
      w.interval = 1
      w.max_attempts = 1
    end
    true
  rescue Aws::Waiters::Errors::WaiterFailed
    false
  end
end

Works well for now.

Jacob Rothstein
@jbr
@trevorrowe Is there some AWS-wide reason #exists? is going to be implemented in terms of #wait_until_exists instead of the other way around? I would have expected #wait_until_exists to poll #exists?, rather than the latter being the single-attempt special case of the former.
Jacob Rothstein
@jbr
(I’m assuming that what @abalakersky posted is along the lines of what you were referring to with “which waiter to be invoked”.)
abalakersky
@abalakersky
I cannot figure out why my code display came out mis-formatted, though. Still, it is readable.
Trevor Rowe
@trevorrowe

@jbr There is not. I’m open to discussion on the implementation. The reason I’m inclined to have the #exists? method poll an exists waiter once is twofold:

  • It expands the number of waiters available for polling from the client level.
  • The #exists? check benefits from the inherited waiter ability to have multiple success and failure states that can match on HTTP status codes, extracted response data, error codes, etc.

The solution should also avoid one-off hand-coded solutions that cannot be shared with the other AWS SDKs.

The example from @abalakersky is in line with what I am envisioning.