Peter Mounce
@petemounce
to combat that, i've changed my waiters to look like, for example,
w.max_attempts = 15
w.interval = 0
w.before_attempt do |a, resp|
  pause_exponentially(a) # a = number of attempts made so far
  logger.debug "Waiting for password...", {instance_id: id, attempt_count: a}
end

def pause_exponentially(n, seed=2, exp=1.4)
  sleep(seed * (exp ** n))
end
it would be really handy to be able to change that in just a single place (while probably still being able to configure the seed and exponent, even if that's the wrong mathematical term, for each waiter)
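A minimal sketch of what such a single place might look like today, as a plain app-level helper wrapped around the pattern above (the helper name and defaults are made up, not an SDK feature):

```
# Hypothetical helper: applies the same backoff settings to any waiter config.
def configure_backoff(waiter, max_attempts = 15, seed = 2, exp = 1.4)
  waiter.max_attempts = max_attempts
  waiter.interval = 0
  waiter.before_attempt do |a, resp|
    sleep(seed * (exp ** a)) # same exponential pause as pause_exponentially above
  end
end
```

Each existing before_attempt block then collapses to a single configure_backoff(w) call.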
Peter Mounce
@petemounce
actually, one thing i thought of - it would be nice to just tell the waiter definition the maximum time it should wait, rather than do maths to figure out what that will end up being (regardless of whether the strategy is linear or exponential)
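For reference, the maths in question for the exponential strategy above is just a geometric series; a throwaway calculation (illustrative only, and the exact total depends on whether the attempt counter starts at 0 or 1):

```
# Total sleep produced by pause_exponentially across all attempts.
def max_total_wait(max_attempts, seed = 2, exp = 1.4)
  (0...max_attempts).inject(0) { |total, n| total + seed * (exp ** n) }
end

max_total_wait(15) # => ~773 seconds with the defaults above (counter starting at 0)
```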
Trevor Rowe
@trevorrowe

There is currently no public way to change the default waiter values. The waiter provider is a private interface and is subject to change. If you are comfortable poking into something that may change in the future, you could do the following:

Aws::EC2::Client.waiters.instance_variable_get("@waiters")[:instance_running][:delay] = 20

I’d be open to suggestions on how to make this more flexible; until then, I’ve left the interfaces marked @api private.

Peter Mounce
@petemounce
@trevorrowe thanks; i don't have good ideas there beyond maybe Aws.config[:default_waiter_strategy] or something hand-wavey like that.
Peter Mounce
@petemounce
@trevorrowe in v1 of the sdk, I'm reasonably sure AWS::EC2::Image had an exists? method...? v2 does not, though Aws::EC2::Instance does. The API reference doesn't show an attribute that sounds like 'exists' on either Instance or Image, other than maybe state on Instance.
but even that is tenuous
i can use the wait_for_exists instance method on Instance (hehe), but there's no equivalent on Image.
... could there be?
Trevor Rowe
@trevorrowe
@petemounce The exists? method will spring into existence if an appropriate waiter is created and linked in the resource definition.
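For context, here is what an already-linked waiter provides on Instance in v2 (a sketch; the region and instance id are placeholders). Image would pick up the same methods once an image-exists waiter is defined and linked in its resource definition:

```
ec2 = Aws::EC2::Resource.new(region: 'us-east-1') # region is an assumption
instance = ec2.instance('i-0123456789abcdef0')    # placeholder id
instance.exists? # => true/false, backed by the instance-exists waiter
```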
亀山 翔大
@ShotaKameyama
Hi there, I simply want to know how to send an SMS to a mobile phone in Japan from a Rails app using Amazon SNS. Are there any useful links to help? Thanks.
Trevor Rowe
@trevorrowe
@ShotaKameyama There is an SNS developer guide that goes through the major steps: http://docs.aws.amazon.com/sns/latest/dg/SMSMessages.html - You can use Aws::SNS::Resource from the aws-sdk gem to make the API calls.
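A minimal sketch of the publish call for an SMS, assuming a recent enough SDK release with direct SMS publishing and that SMS is enabled for the account; the region choice and the phone number (E.164 format) are placeholders:

```
sns = Aws::SNS::Client.new(region: 'us-east-1') # region is an assumption
sns.publish(
  phone_number: '+81XXXXXXXXXX', # placeholder Japanese mobile number
  message: 'Hello from the Rails app'
)
```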
亀山 翔大
@ShotaKameyama
@trevorrowe Thank you for the quick reply! I will check the link which you showed me.
kevin-staiger
@kevin-staiger
@trevorrowe Hi, I am using aws-sdk-core-ruby to create a Lambda Java function from a jar file in an S3 folder; however, when I run my command I get an error:
`validate!': parameter validator found 2 errors: (ArgumentError)
  • unexpected value at params[:code][:s3_bucket]
  • unexpected value at params[:code][:s3_key]
but I am just passing in the bucket name and the S3 key of my jar as strings
it works when I pass these values in the console
kevin-staiger
@kevin-staiger
ok nevermind, just needed to update my gem version
Alex Wood
@awood45
what version did you update to/from?
kalpitad
@kalpitad
Hi there, doing more stubbing in my tests and have a question (issue #187)… is it possible to stub a response to a request for a particular DynamoDB table? i.e. can I further specify the table name somehow in this line of code?… Aws.config[:dynamodb][:stub_responses] = {put_item: 'ProvisionedThroughputExceededException'}
Trevor Rowe
@trevorrowe
You should rely on your test framework to provide that functionality. If I were using RSpec, I would do it this way:
allow_any_instance_of(Aws::DynamoDB::Client).to receive(:put_item).
  with(hash_including(table_name: 'my-table')).
  and_raise(Aws::DynamoDB::Errors::ProvisionedThroughputExceededException)
kalpitad
@kalpitad
Got it, thanks @trevorrowe!
kalpitad
@kalpitad
hey @trevorrowe, just an FYI, I couldn’t get this to work by just raising Aws::DynamoDB::Errors::ProvisionedThroughputExceededException. I kept getting an ArgumentError returned (ArgumentError: wrong number of arguments (0 for 2)). I finally figured out that this had to do with the constructor of the error. Perhaps this is expected, but I didn’t realize it. Setting nil for both params made this work.
allow_any_instance_of(Aws::DynamoDB::Client).to receive(:put_item).
  with(hash_including(table_name: "Users")).
  and_raise(Aws::DynamoDB::Errors::ProvisionedThroughputExceededException.new(nil, nil))
kalpitad
@kalpitad

hey guys, for the most part, the response stubbing is working great! I believe that I am seeing an issue, however, when a test runs code that has more than one DynamoDB request, where the second or third is the one that has a stubbed response.

For example... I have a method in my code that does a query request to DynamoDB, followed by a get_item. If I stub the query method like so: Aws.config[:dynamodb][:stub_responses] = {query: 'ProvisionedThroughputExceededException'}, the exception is raised and the right thing happens. This first test works fine.

My second test is to allow the query to succeed and instead have the get_item request raise ProvisionedThroughputExceededException. So, the stub looks like this: Aws.config[:dynamodb][:stub_responses] = {get_item: 'ProvisionedThroughputExceededException'}. When I run this test (standalone and separate from the first one), the query returns nothing, even though there are objects to return (i.e. if I comment out the response stubbing line, the query works). So, it seems as if my response stub for get_item is affecting my query request.

Do you guys have any thoughts on this? Thanks!

Trevor Rowe
@trevorrowe
@kalpitad If you enable response stubbing, all responses are stubbed for the client, not just specific operations. When you do not provide stub data, a default empty response is returned.
If you want a client to make actual requests for some operations, you will need to not enable response stubbing and instead use your test framework to stub the specific responses. If you are using RSpec, you could do this:
ddb = Aws::DynamoDB::Client.new
allow(ddb).to receive(:query).and_return(ddb.stub_data(:query, {}))
allow(ddb).to receive(:get_item).and_raise(Aws::DynamoDB::Errors::ProvisionedThroughputExceededException.new(nil, nil))
kalpitad
@kalpitad
@trevorrowe ohhhhhhhh! OK, that explains a lot, thank you. :+1:
As always, I appreciate the guidance! :)
Trevor Rowe
@trevorrowe
No problem!
kevin-staiger
@kevin-staiger
Hey, I have a question. Right now AWS does not support using an S3 file in one region to create a Lambda function in another region. My workaround is to get the S3 object into memory and then upload it to Lambda from my own box to create the function in the other region.
this works fine for creating the lambda
however, when I call update function
I receive the error ruby-1.9.3-p547/gems/aws-sdk-core-2.1.18/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call': Could not unzip uploaded file. Please check your file, then try to upload again. (Aws::Lambda::Errors::InvalidParameterValueException)
I am using Ruby 1.9.3 and the aws-sdk-core gem v2.1.18
when I spoke with AWS support, they said they can successfully run this code:

```
lambda = Aws::Lambda::Client.new(access_key_id: '***',
                               secret_access_key: '***',
                                       region: 'us-west-2')

@function_name = '***'
@s3_jar_path = '***.jar'
s3 = Aws::S3::Client.new(access_key_id: '***',
                               secret_access_key: '***',
                               http_wire_trace: true,
                               region: 'us-east-1'
                               )
lambda_jar = s3.get_object(bucket: '****', key: @s3_jar_path)


lambda.update_function_configuration({
    function_name:  @function_name, 
    role: "***",
    handler: "***", 
    timeout: 15,
    memory_size: 512,
})
lambda.update_function_code({
    function_name:  @function_name,
    zip_file: lambda_jar.body
})
```
but when I try to run it, I receive the zip error
kevin-staiger
@kevin-staiger
any ideas? is this an aws-sdk-core vs aws-sdk issue?
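Purely a guess rather than a confirmed fix, but it may be worth ruling out how the S3 response body is handed to zip_file, for example by passing the raw bytes instead of the IO-like body object:

```
lambda.update_function_code(
  function_name: @function_name,
  zip_file: lambda_jar.body.read # bytes rather than the IO returned by get_object
)
```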
Preethi Ramesh
@pree011235
Hi @trevorrowe, can you please give me some clarification regarding the calls I'm making to get the access token from AWS Cognito? I am implementing the [Developer Authenticated Identities workflow](http://docs.aws.amazon.com/cognito/latest/developerguide/developer-authenticated-identities.html), where I authenticate the user on my backend. My code:
permanent_aws_creds = Aws::SharedCredentials.new(profile_name: 'ABCD')
cognitoIndentityClient = Aws::CognitoIdentity::Client.new(
    region: 'us-east-1',
    credentials: permanent_aws_creds,
)
developerProviderName = '1.Got From Developer Provider Name under Custom in Cognito Console'
IdentityTokenProvider = '2. Where do I get this from?'
identityPoolId = 'us-east-1:Xxxxx'
resp = cognitoIndentityClient.get_open_id_token_for_developer_identity(
    identity_pool_id: identityPoolId,
    logins: {
        developerProviderName => IdentityTokenProvider
    }
)
resp2 = cognitoIndentityClient.get_credentials_for_identity(
    {
        identity_id: resp['identity_id'],
        logins: {
            developerProviderName => resp['token'] # Am I passing in the right token?
        }
    }
)
Trevor Rowe
@trevorrowe
Sorry, I don’t have any practical working experience with Cognito. I’d offer suggestions if I were familiar with their workflows, but I am not.
Preethi Ramesh
@pree011235
Ah, no problem. Thanks.
Preethi Ramesh
@pree011235
For anyone who's interested, BTW, I managed to figure it out. All I had to do was substitute cognito-identity.amazonaws.com for the developerProviderName key in get_credentials_for_identity().
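A sketch of that substitution applied to the earlier snippet (same variables as in the code above):

```
resp2 = cognitoIndentityClient.get_credentials_for_identity(
  identity_id: resp['identity_id'],
  logins: {
    'cognito-identity.amazonaws.com' => resp['token']
  }
)
```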
Evan Worley
@evanworley
Hi All - Is there any way to generate a presigned_url for a multi part upload? Specifically to upload one of the parts? I've been trying several different things and am not having any luck. I've found the object.presigned_url method as well as the Presigner class, but this does not seem to support the uploadId and partNumber parameters that need to go into the url as query parameters.
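An untested sketch: Aws::S3::Presigner#presigned_url takes a client operation name plus that operation's params, so passing :upload_part with upload_id and part_number may produce a URL carrying the uploadId/partNumber query parameters (bucket, key and upload id below are placeholders):

```
signer = Aws::S3::Presigner.new
part_url = signer.presigned_url(:upload_part,
  bucket: 'my-bucket',            # placeholder
  key: 'big-object',              # placeholder
  upload_id: 'EXAMPLE_UPLOAD_ID', # from create_multipart_upload
  part_number: 1
)
```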
edmondtm
@edmondtm
@trevorrowe Is there a log file for SES? I am using AWS SES on Elastic Beanstalk. My delayed_job showed the email was sent, and there is no error in the production.rb log files or in my email. I did not receive the emails at my test email address.
I am using aws-sdk, aws-sdk-rails and aws-ses gems.
Laura
@pelx
Hi, I'm trying to play my audio files from an S3 bucket: <%= audio_tag url_for(song.key), class: "audio-play" %>, where url_for = Aws::S3::Presigner.new.presigned_url(S3_BUCKET, BUCKET, song_key). It does not work; the value I get for the URL is the same as song.key. Please help.
Doug
@Doug-AWS

@pelx I get a valid pre-signed URL for my object "test" in my bucket "doug-test" with the following code:

signer = Aws::S3::Presigner.new
url = signer.presigned_url(:get_object, bucket: 'doug-test', key: 'test')
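Applied to the audio_tag example above, assuming S3_BUCKET holds the bucket name as a string and song.key is the object key:

```
<%= audio_tag Aws::S3::Presigner.new.presigned_url(:get_object, bucket: S3_BUCKET, key: song.key), class: "audio-play" %>
```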

What do you get?