Thorsten von Eicken
@tve
In particular, I'm looking at EC2 DescribeKeyPairs. The API call returns something like:
<DescribeKeyPairsResponse xmlns="http://ec2.amazonaws.com/doc/2014-10-01/">
    <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId> 
    <keySet>
      <item>
         <keyName>my-key-pair</keyName>
         <keyFingerprint>1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f</keyFingerprint>
      </item>
   </keySet>
</DescribeKeyPairsResponse>
and the metadata says:
    "DescribeKeyPairs":{
      "name":"DescribeKeyPairs",
      "http":{
        "method":"POST",
        "requestUri":"/"
      },
      "input":{"shape":"DescribeKeyPairsRequest"},
      "output":{"shape":"DescribeKeyPairsResult"}
    },
    "DescribeKeyPairsResult":{
      "type":"structure",
      "members":{
        "KeyPairs":{
          "shape":"KeyPairList",
          "locationName":"keySet"
        }
      }
    },
So the fact that the response contains an outer DescribeKeyPairsResponse wrapper seems to be implicit?
Yet, for CloudFormation the responses have the same kind of wrapper and it's explicitly named in the metadata. For example, CreateStack:
<CreateStackResult>
  <StackId>arn:aws:cloudformation:us-east-1:123456789:stack/MyStack/aaf549a0-a413-11df-adb3-5081b3858e83</StackId>
</CreateStackResult>
Thorsten von Eicken
@tve
And the metadata has the expected resultWrapper stanza:
      "output":{
        "shape":"CreateStackOutput",
        "resultWrapper":"CreateStackResult"
      },
Trevor Rowe
@trevorrowe

Good question. Have you noticed the five protocol types given in the “metadata” of each API? They are json, rest-json, rest-xml, query, and ec2. Almost all new services are being released using the json protocol. Most of the older APIs, which make up the majority of services, use query.

Query is an RPC-style protocol: requests are sent via POST, with the request body being URL-encoded query strings. The response body is returned as XML.

The ec2 protocol is very close to the query protocol, but not quite. It diverges in a few ways, primarily in how the input is serialized, but also slightly on output: it does not contain the extra wrapping element that standard query services use. In the Ruby SDK I have a different response handler that extends the query handler but does not apply a wrapping element when parsing.
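The divergence described above could be sketched in plain Ruby with REXML; the method name and dispatch below are illustrative, not the SDK's actual internals, and the XML samples are trimmed versions of the responses quoted earlier:

```ruby
require 'rexml/document'

# Return the element whose children are the result members,
# depending on which protocol the service uses.
def result_element(protocol, operation, xml)
  doc = REXML::Document.new(xml)
  case protocol
  when 'query'
    # Standard query responses nest results in an <OperationResult> wrapper.
    doc.root.elements["#{operation}Result"]
  when 'ec2'
    # EC2 responses omit the wrapper; members hang off the response root.
    doc.root
  else
    raise ArgumentError, "unhandled protocol: #{protocol}"
  end
end

ec2_xml = <<-XML
<DescribeKeyPairsResponse>
  <keySet><item><keyName>my-key-pair</keyName></item></keySet>
</DescribeKeyPairsResponse>
XML

query_xml = <<-XML
<CreateStackResponse>
  <CreateStackResult><StackId>arn:aws:cloudformation:...</StackId></CreateStackResult>
</CreateStackResponse>
XML

result_element('ec2', 'DescribeKeyPairs', ec2_xml).name  # "DescribeKeyPairsResponse"
result_element('query', 'CreateStack', query_xml).name   # "CreateStackResult"
```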

pedroadsl
@pedroadsl
Hi folks, what is the best way to extend the waiters to cover Elastic Beanstalk?
Thorsten von Eicken
@tve
trevor: thanks for the response. I'll add a "dispatch" based on the protocol type in my code
Trevor Rowe
@trevorrowe

The waiters are defined as JSON documents. For example, here are the waiters for Glacier: https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/apis/Glacier.waiters.json

Basically, a waiter has a name, a default delay and max attempts, and then a list of acceptors that can fail or succeed the waiter. There are a few matcher types; you can look at the other waiters in the same folder for more examples. If you put something together that you are happy with, please feel free to submit a pull request and I’ll review it.
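For a sense of the shape, a hypothetical Elastic Beanstalk waiter in that JSON format might look like the following (the DescribeEnvironments operation and its Status values are real, but the waiter name, delay, and attempt counts are illustrative guesses, not an official definition):

```json
{
  "version": 2,
  "waiters": {
    "EnvironmentReady": {
      "operation": "DescribeEnvironments",
      "delay": 20,
      "maxAttempts": 20,
      "acceptors": [
        {
          "state": "success",
          "matcher": "pathAll",
          "argument": "Environments[].Status",
          "expected": "Ready"
        },
        {
          "state": "failure",
          "matcher": "pathAny",
          "argument": "Environments[].Status",
          "expected": "Terminated"
        }
      ]
    }
  }
}
```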

@tve I’m curious, and maybe you’ve mentioned it before, but what are you building with the API models?
pedroadsl
@pedroadsl
@trevorrowe Thanks for your reply and the information. I will have a look at the other examples and I hope to put something together :D. Have a nice night.
David Newkerk
@dnewkerk
sweet! I just finished automating building my full OpsWorks stack for my Rails app using just the Ruby SDK v2 in a rake task. Tried out waiters for the first time also so I could have the task wait to proceed while my RDS instance spins up, very cool! Loving the new SDK so far
Trevor Rowe
@trevorrowe
@dnewkerk Thanks for the positive feedback! If you run across anything that you think could be improved, please pass it along. Otherwise, I’m glad you had a good experience with the SDK.
David Newkerk
@dnewkerk
@trevorrowe cool will do, thanks for all your hard work on it! I tried my hand last night at building some mini s3 helper tasks for myself as well using S3::Resource (until now I've only worked with the Client) and that turned out nicely as well :D
sidenote - really love that you guys are on gitter now :D I got into using gitter recently for discussing several open source projects and love it
Daniel Collis-Puro
@djcp
What I hope is a basic question: is there a way to create a VPC with tags in a single call? Or must one use ec2_client.create_vpc and then Aws::EC2::Vpc#create_tags in a subsequent call?
If the answer is "no", that's fine. When I attempted to use Aws::EC2::Vpc#create_tags right after creating the VPC, though, I got an Aws::EC2::Errors::InvalidVpcIDNotFound error, I assume because it hadn't propagated across AWS yet. Is this what I'd want a waiter for?
Trevor Rowe
@trevorrowe
Currently it is two API calls to create and then tag. Waiters can poll and wait for a resource to enter a given state. You should be able to call ec2_client.wait_until(:vpc_available, vpc_ids:[id]) and then call your tagging operation.
Tim Tischler
@tischler
What does `Aws::S3::Errors::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.` mean?

```
client = Aws::S3::Client.new(
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['SECRET_ACCESS_KEY'],
  region: 'us-west-2'
)

resp = client.list_objects(bucket: 'my-backup')
```

Tim Tischler
@tischler
and the result is
```
Aws::S3::Errors::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/aws-sdk-core/plugins/s3_sse_cpk.rb:18:in `call'
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/seahorse/client/plugins/param_conversion.rb:22:in `call'
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/aws-sdk-core/plugins/response_paging.rb:10:in `call'
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/seahorse/client/plugins/response_target.rb:18:in `call'
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/seahorse/client/request.rb:70:in `send_request'
/Users/tischler/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/aws-sdk-core-2.0.26/lib/seahorse/client/base.rb:216:in `block (2 levels) in define_operation_methods'
```

baah...
Trevor Rowe
@trevorrowe

If you rescue that error, you can inspect the HTTP response body:

begin
  resp = client.list_objects(bucket: 'my-backup')
rescue Aws::S3::Errors::PermanentRedirect => error
  puts error.context.http_response.body.read
end

My guess is the bucket you are addressing is not in the ‘us-west-2’ region. You can call #get_bucket_location to determine the actual region.

Nikola Velkovski
@parabolic
Hi Folks
can somebody help me with the instance role credentials?
I am having a hard time getting it to work
my code works perfectly with credentials from a JSON file, but I need to utilize the instance role permissions
can somebody give me a working example of AssumeRoleCredentials?
Nikola Velkovski
@parabolic
Ok found it
if anybody needs an example ping me
:)
Daniel Collis-Puro
@djcp
Thanks for your answer yesterday, @trevorrowe
Robin van Wijngaarden
@robinvw1
I'm doing a head_object with the Ruby AWS SDK (v2). According to the documentation I should rescue Aws::S3::Errors::NoSuchKey -> http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Client.html#head_object-instance_method
but I'm receiving Aws::S3::Errors::NotFound
is the documentation up-to-date?
Trevor Rowe
@trevorrowe
@robinvw1 That is a documentation bug / limitation. Normally, if you perform an operation against an object that does not exist, e.g. a GET object, Amazon S3 will return a response with an XML body indicating that there is no such key. However, per the HTTP spec, the response to a HEAD request may not have a body. For this reason, the SDK has no error message or code to parse, so it falls back on NotFound.
I've considered guessing, but this proved problematic, as it is not possible to know if the 404 is the result of no such bucket or no such key.

We need to change the docs to reflect that the SDK will raise a NotFound.

chrishawk
@chrishawk
Does anyone know if there is a way of writing data to an S3 object? It looks like object.write was removed in favor of object.upload_file in the new v2 SDK.
Trevor Rowe
@trevorrowe
You can call Aws::S3::Client#put_object, or Aws::S3::Object#put
chrishawk
@chrishawk
wow awesome thanks
I don't know how I missed that
What about object.move_to?
what's the equivalent in v2?
I've been trying copy_object but it produces an Aws::S3::Errors::AllAccessDisabled error I can't figure out
Trevor Rowe
@trevorrowe
The #move_to method was a wrapper around #copy_object. If you share a gist, then I can see if I can help debug your call.
Robin van Wijngaarden
@robinvw1
@trevorrowe thanks for your answer :)
Trevor Rowe
@trevorrowe
@robinvw1 No problem!
chrishawk
@chrishawk
I actually have a stackoverflow question with it