Erik Straub
@brkattk
regardless of bucket name, it should determine which account to use from the credentials passed to Aws::S3::Client, right?
my issue is that the default doesn’t even have a bucket named my.bucket
I’ve got a test for this:

I have 2 profiles in my ~/.aws/credentials that are completely separate AWS accounts (one is my personal, one for my company)

I created and ran this script. I do not have a bucket named reports in my personal AWS account, but this prints the first object from my company account’s reports bucket

require 'rubygems'
require 'aws-sdk'

creds1 = Aws::SharedCredentials.new(profile_name: 'erik')
s3_client1 = Aws::S3::Client.new(region: 'us-east-1', credentials: creds1)
s3_resource1 = Aws::S3::Resource.new(client: s3_client1)
s3_bucket1 = s3_resource1.bucket('reports')
object = s3_bucket1.objects.first
p object
Trevor Rowe
@trevorrowe
What is Aws::SharedCredentials.new doing if not what you expect? The ability to write to a bucket is not strictly limited to a single account. By default only the bucket owner may write, but you can open the bucket to public writes, or even to whitelisted users and accounts.
Erik Straub
@brkattk
ugh.. hang on this might be my own stupidity
Trevor Rowe
@trevorrowe
When you construct a shared credentials object, inspect the #access_key_id and #secret_access_key. Are they for the correct profile?
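For example, a quick sanity check along these lines shows which keys the profile actually resolves to (a minimal sketch; the profile name is a placeholder):
creds = Aws::SharedCredentials.new(profile_name: 'erik')
puts creds.credentials.access_key_id      # resolved key for the profile
puts creds.credentials.secret_access_key  # only print this as a local sanity check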
Erik Straub
@brkattk
:cry: :gun: I definitely had a policy on that bucket that was open to any authenticated user
now I’m getting the appropriate AccessDenied exception
Peter Mounce
@petemounce
Hi @trevorrowe - could you give me a steer on how to create a waiter for a Windows EC2 password being updated since <some date>?
The reason is that I have EC2 instances where ec2config is configured to set a new administrator password on each launch. A reboot counts (apparently), so I'm baking an AMI, need to orchestrate reboots, and therefore need to wait for the password to both be available and different.
The EC2 GetPasswordData operation helpfully returns a timestamp - so I think I just need to know how to create a waiter that takes a date/time as a parameter to compare against
Trevor Rowe
@trevorrowe
Currently the client waiter interface only accepts inputs that are valid request parameters. That said, there is a custom waiter interface on the resource classes where you can pass a block and wait on it to return a truthy value.
instance = Aws::EC2::Resource.new.instances.first
old_timestamp = instance.password_data.timestamp
instance.reboot
instance.wait_until { |i| i.password_data.timestamp != old_timestamp }
Peter Mounce
@petemounce
@trevorrowe thanks; perfect!
one thing i did run into was that response in the block was nil on at least the first attempt, which seemed odd
i wanted to do
... do |attempts, response|
  puts "#{attempts}; timestamp this time: #{response.password_data.timestamp}"
end
but response was nil
Trevor Rowe
@trevorrowe
@petemounce Do you mean for the before_wait callback?
Peter Mounce
@petemounce
@trevorrowe yes
proc = Proc.new do |attempts, response|
  logger.debug "Waiting for password data timestamp to be newer", {pwd_timestamp: instance.password_data.timestamp, newer_than: since}
end
instance.wait_until(max_attempts: 30, delay: 10, before_wait: proc) { |i| i.password_data.timestamp != since }
i is an Instance resource
Peter Mounce
@petemounce
@trevorrowe is there a way to change the default waiter settings globally? I was using the standard settings, and I'm getting RequestLimitExceeded exceptions, presumably because of all the Describe* requests.
to combat that, I've changed my waiters to look like, for example,
w.max_attempts = 15
w.interval = 0
w.before_attempt do |a, resp|
  pause_exponentially(a) # back off based on the attempt count
  logger.debug "Waiting for password...", {instance_id: id, attempt_count: a}
end

def pause_exponentially(n, seed=2, exp=1.4)
  sleep(seed * (exp ** n))
end
it would be really handy to be able to change that in just a single place (probably while still being able to configure the seed and exponent for each waiter)
Peter Mounce
@petemounce
actually, one thing I thought of - it would be nice to just tell the waiter definition the maximum time it should wait, rather than do the maths to figure out what that will end up being (regardless of whether the strategy is linear or exponential)
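Until something like that exists, a small helper can at least do that arithmetic for a fixed delay (a minimal sketch; waiter_options is a made-up name, and total_wait, instance, and since are placeholders reusing names from the discussion above):
# Turn a total wait budget (in seconds) into waiter options with a fixed delay.
def waiter_options(total_wait:, delay: 10)
  { delay: delay, max_attempts: (total_wait / delay.to_f).ceil }
end

instance.wait_until(waiter_options(total_wait: 300)) { |i| i.password_data.timestamp != since }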
Trevor Rowe
@trevorrowe

There is not currently an accessible way to change the default waiter values. The waiter provider is a private interface and is subject to change. If you are comfortable poking into something that may change in the future, you could do the following:

Aws::EC2::Client.waiters.instance_variable_get("@waiters")[:instance_running][:delay] = 20

I’d be open to suggestions on how to make this more flexible; until then, I’ve left the interfaces marked @api private.
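Along the same lines, the same private hash could be walked to bump every EC2 waiter's delay in one place (a sketch built only on the snippet above, so any SDK upgrade may break it):
# Sketch only: relies on the private @waiters hash shown above.
waiters = Aws::EC2::Client.waiters.instance_variable_get("@waiters")
waiters.each_value { |definition| definition[:delay] = 20 }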

Peter Mounce
@petemounce
@trevorrowe thanks; I don't have good ideas there beyond maybe Aws.config[:default_waiter_strategy] or something hand-wavy like that.
Peter Mounce
@petemounce
@trevorrowe in v1 of the SDK, I'm reasonably sure AWS::EC2::Image had an exists? method...? v2 does not. Aws::EC2::Instance does. The API reference doesn't show either Instance or Image as having an attribute that sounds like 'exists', other than maybe state on Instance.
but even that is tenuous
I can use the wait_for_exists instance method on Instance (hehe), but there's no equivalent on Image.
... could there be?
Trevor Rowe
@trevorrowe
@petemounce The exists? method will spring into existence if an appropriate waiter is created and linked in the resource definition.
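Until such a waiter is defined, one hedged workaround is to poll DescribeImages directly (sketch only; the helper name and AMI ID are made up, and the exact NotFound error class name is an assumption):
ec2 = Aws::EC2::Client.new

# Returns true once the AMI shows up in DescribeImages.
def image_exists?(ec2, image_id)
  !ec2.describe_images(image_ids: [image_id]).images.empty?
rescue Aws::EC2::Errors::InvalidAMIIDNotFound # assumed class name for the InvalidAMIID.NotFound error code
  false
end

sleep 5 until image_exists?(ec2, 'ami-12345678')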
亀山 翔大
@ShotaKameyama
Hi there, I simply want to know how to send an SMS to a mobile phone in Japan from a Rails app using Amazon SNS. Are there any useful links to help? Thanks.
Trevor Rowe
@trevorrowe
@ShotaKameyama There is an SNS developer guide that goes through the major steps: http://docs.aws.amazon.com/sns/latest/dg/SMSMessages.html - You can use Aws::SNS::Resource from the aws-sdk gem to make the API calls.
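For the publish call itself, a minimal sketch with the v2 SDK might look like this (the region and the E.164 phone number are placeholders):
sns = Aws::SNS::Client.new(region: 'ap-northeast-1')
sns.publish(
  phone_number: '+815012345678', # destination in E.164 format
  message: 'Hello from Amazon SNS'
)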
亀山 翔大
@ShotaKameyama
@trevorrowe Thank you for the quick reply! I will check the link which you showed me.
kevin-staiger
@kevin-staiger
@trevorrowe Hi, I am using aws-sdk-core-ruby to create a Lambda Java function from a jar file in an S3 folder; however, when I run my command I get an error:
`validate!': parameter validator found 2 errors: (ArgumentError)
  • unexpected value at params[:code][:s3_bucket]
  • unexpected value at params[:code][:s3_key]
but I am just passing in strings for the bucket name and the S3 key of my jar
it works when I pass these values in the console
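For reference, the call shape the validator is checking looks roughly like this (a sketch; every name, ARN, bucket, and key below is a placeholder):
lambda_client = Aws::Lambda::Client.new
lambda_client.create_function(
  function_name: 'my-java-function',
  runtime: 'java8',
  role: 'arn:aws:iam::123456789012:role/lambda-execution-role',
  handler: 'example.Handler::handleRequest',
  code: {
    s3_bucket: 'my-bucket',            # plain bucket name string
    s3_key: 'path/to/my-function.jar'  # plain object key string
  }
)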
kevin-staiger
@kevin-staiger
ok nevermind, just needed to update my gem version
Alex Wood
@awood45
what version did you update to/from?
kalpitad
@kalpitad
Hi there, doing more stubbing in my tests and have a question (issue #187)… is it possible to stub a response to a request for a particular DynamoDB table? i.e. can I further specify the table name somehow in this line of code?… Aws.config[:dynamodb][:stub_responses] = {put_item: 'ProvisionedThroughputExceededException'}
Trevor Rowe
@trevorrowe
You should rely on your test framework to provide that functionality. If I were using RSpec, I would do it this way:
allow_any_instance_of(Aws::DynamoDB::Client).to receive(:put_item).
  with(hash_including(table_name: 'my-table')).
  and_raise(Aws::DynamoDB::Errors::ProvisionedThroughputExceededException)
kalpitad
@kalpitad
Got it, thanks @trevorrowe!
kalpitad
@kalpitad
hey @trevorrowe, just an FYI, I couldn’t get this to work by just raising Aws::DynamoDB::Errors::ProvisionedThroughputExceededException. I kept getting an ArgumentError (wrong number of arguments (0 for 2)). I finally figured out that this was due to the error class’s constructor. Perhaps this is expected, but I didn’t realize it. Passing nil for both params made it work.
allow_any_instance_of(Aws::DynamoDB::Client).
  to receive(:put_item).
  with(hash_including(table_name: "Users")).
  and_raise(Aws::DynamoDB::Errors::ProvisionedThroughputExceededException.new(nil, nil))
kalpitad
@kalpitad

hey guys, for the most part, the response stubbing is working great! I believe I am seeing an issue, however, when a test runs code that makes more than one DynamoDB request, where the second or third request is the one that has a stubbed response.

For example... I have a method in my code that does a query request to DynamoDB, followed by a get_item. If I stub the query method like so: Aws.config[:dynamodb][:stub_responses] = {query: 'ProvisionedThroughputExceededException'}, the exception is raised and the right thing happens. This first test works fine.

My second test is to allow the query to succeed and instead have the get_item request raise ProvisionedThroughputExceededException. So the stub looks like this: Aws.config[:dynamodb][:stub_responses] = {get_item: 'ProvisionedThroughputExceededException'}. When I run this test (standalone and separate from the first one), the query returns nothing, even though there are objects to return (i.e. if I comment out the response stubbing line, the query works). So it seems as if my response stub for get_item is affecting my query request.

Do you guys have any thoughts on this? Thanks!

Trevor Rowe
@trevorrowe
@kalpitad If you enable response stubbing, all responses are stubbed for the client, not just specific operations. When you do not provide stub data, a default empty response is returned.
If you want the client to make actual requests for some operations, you will need to not enable response stubbing and instead use your test framework to stub the specific responses. If you are using RSpec, you could do this:
ddb = Aws::DynamoDB::Client.new
allow(ddb).to receive(:query).and_return(ddb.stub_data(:query, {}))
allow(ddb).to receive(:get_item).and_raise(Aws::DynamoDB::Errors::ProvisionedThroughputExceededException.new(nil, nil))
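A related option (sketch only; the canned data below is arbitrary) is the SDK's per-client stubbing, so that only this client instance returns stubbed responses rather than everything configured through Aws.config:
ddb = Aws::DynamoDB::Client.new(stub_responses: true)
ddb.stub_responses(:query, items: [], count: 0)                          # canned query data
ddb.stub_responses(:get_item, 'ProvisionedThroughputExceededException')  # stubbed error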
kalpitad
@kalpitad
@trevorrowe ohhhhhhhh! OK, that explains a lot, thank you. :+1: