Ciaran Evans
@ciaranevans
This is within a Step Functions flow, so if I invoke a flow, most of the time this lambda passes the first time, but sometimes it fails and then all 3 retries fail too
Andy Loughran
@andylockran
Hey team - I'm trying to upload a file to AWS S3 using a presigned URL, but with SSE added in
I've set the bucket default to AES256, but I really want to set SSE explicitly on the request. However, if I add the SSE headers to the request I get a signature mismatch
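A plausible sketch of what may be going on: the SSE header has to be included when the URL is signed, and the uploader then has to send exactly the same header back. Bucket and key names here are illustrative, not from the chat:

import boto3

s3 = boto3.client('s3')
# Sign the SSE header into the URL; bucket/key are placeholders.
url = s3.generate_presigned_url(
    'put_object',
    Params={
        'Bucket': 'my-bucket',
        'Key': 'my-key',
        'ServerSideEncryption': 'AES256',
    },
    ExpiresIn=3600,
)
# The PUT request must then carry the matching header, otherwise
# S3 rejects it with a signature mismatch:
#   x-amz-server-side-encryption: AES256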
Sunil Chopra
@dayer4b
has anybody been able to use boto3 with SSO credentials that come from aws2 configure sso?
It does not work for me. The v2 release of the AWS CLI supports authentication with SSO, but it seems to deposit credentials in a non-standard place that boto3 misses.
Corey Quinn
@QuinnyPig
I was under the impression those credentials were set as env variables, no?
James Saryerwinnie
@jamesls
@dayer4b It's not in boto3 just yet. It's only in the v2 branch for botocore. I think the plan is to eventually support loading these creds in all the SDKs.
Leon
@LamedB_twitter
hi, I couldn't find a way to fetch the role_arn from the .aws/config file by profile. I'd prefer not to use assume_role with a hardcoded value but to read it from the config file
James Saryerwinnie
@jamesls
You can grab it from .full_config on a botocore session
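A minimal sketch of that suggestion, assuming the profile is named dev and its .aws/config entry has a role_arn key:

import botocore.session

session = botocore.session.Session(profile='dev')
# full_config is the parsed contents of ~/.aws/config (and credentials)
profile_config = session.full_config['profiles']['dev']
role_arn = profile_config.get('role_arn')
print(role_arn)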
Leon
@LamedB_twitter
I only have this: boto3.Session(profile_name='dev'). How do I create a botocore session?
Got it, thanks!! @jamesls
James Saryerwinnie
@jamesls
Nice!
Ashi
@ashi4rou_twitter
Can anyone suggest a way to query Redshift data using a Python API?
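boto3 manages Redshift clusters but doesn't run SQL against them; a common route is a plain PostgreSQL driver such as psycopg2. A sketch with placeholder connection details:

import psycopg2

# All connection details here are placeholders.
conn = psycopg2.connect(
    host='my-cluster.abc123.us-east-1.redshift.amazonaws.com',
    port=5439,
    dbname='dev',
    user='awsuser',
    password='***',
)
with conn, conn.cursor() as cur:
    cur.execute('SELECT COUNT(*) FROM my_table')
    print(cur.fetchone())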
Rod Knowlton
@codelahoma
Hey all, I'm looking at upgrading, but the guide hasn't changed in eighteen months. So I gotta ask: is the guide out of date, or have there been no breaking changes since 1.9.0?
James Saryerwinnie
@jamesls
The changelog would give you a more detailed list of changes (https://github.com/boto/boto3/blob/develop/CHANGELOG.rst), but there shouldn't be any breaking changes in general for 1.x. The only thing I can think of is the removal of vendored requests in botocore, but that's really only relevant if you're on lambda.
Marcelo Gobelli
@decareano
when I run the code below in AWS Lambda I get: "expected string or bytes-like object". When I run it locally, boto3 creates the bucket. Any ideas?
import logging
import boto3
from botocore.exceptions import ClientError


def lambda_handler(bucket_name, region=None):
    """Create an S3 bucket in a specified region

    If a region is not specified, the bucket is created in the S3 default
    region (us-east-1).

    :param bucket_name: Bucket to create
    :param region: String region to create bucket in, e.g., 'us-west-2'
    :return: True if bucket created, else False
    """

    # Create bucket
    try:
        if region is None:
            s3_client = boto3.client('s3')
            s3_client.create_bucket(Bucket=bucket_name)
        else:
            s3_client = boto3.client('s3', region_name=region)
            location = {'LocationConstraint': region}
            s3_client.create_bucket(Bucket=bucket_name,
                                    CreateBucketConfiguration=location)
    except ClientError as e:
        logging.error(e)
        return False
    return True
James Saryerwinnie
@jamesls
Is that your actual lambda handler (i.e you've configured that as the handler value when you created the lambda function)?
Marcelo Gobelli
@decareano
well, the original code was def create_bucket(), but I was under the impression that Lambda wants the function to be named "lambda_handler"
also, locally I call the function with create_bucket('awscajondedata', 'us-west-1')
but I don't know how the function is called in AWS. It's a bit confusing to me.
James Saryerwinnie
@jamesls
so in Lambda your "lambda handler" has a specific signature you have to adhere to: def my_handler(event, context), where event is specific to how the lambda function is being invoked and context is a LambdaContext that gives you additional info about the runtime. So just have your lambda handler call this function
So:
def lambda_handler(event, context):
    return create_bucket("awscajondedata", "us-west-2")
sri s
@sri41_gitlab

Is this the right forum to ask this question?
In pycurl, the options below are set to upload to an S3 bucket through a proxy.
How can I achieve the same using boto3?

I was using boto3.resource to create the S3 object, then setting the bucket name and uploading the file

'''
curl.setopt(pycurl.PROXY_SSLCERT, "./GXXXXXXXX.pem")
curl.setopt(pycurl.PROXY_SSLKEY, "./GXXXXXXXX.pem")
curl.setopt(pycurl.PROXY_CAINFO, "./GXXXXXXXX.pem")
'''

'''
cfile = 'xxxx.pem'
s3Obj = boto3.resource(
    's3',
    aws_access_key_id=tokenCred['access_key'],
    aws_secret_access_key=tokenCred['secret_access_key'],
    aws_session_token=tokenCred['session_token'],
    region_name=tokenDetails['region'],
    config=Config(proxies=proxy, client_cert=cfile),
    use_ssl=True,
    verify=False,
)
s3bucket = s3Obj.Bucket(tokenDetails['bucket_name'])

s3bucket.upload_file(fileLoc, directory + '/' + file)
'''

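For the proxy-side certificate options specifically, newer botocore releases expose a proxies_config setting on Config; a sketch assuming a recent botocore, a placeholder proxy URL, and the file names from the pycurl snippet:

import boto3
from botocore.config import Config

# assumption: requires a botocore version that supports proxies_config
config = Config(
    proxies={'https': 'https://my-proxy:8443'},  # placeholder proxy URL
    proxies_config={
        'proxy_client_cert': './GXXXXXXXX.pem',  # counterpart of PROXY_SSLCERT/PROXY_SSLKEY
        'proxy_ca_bundle': './GXXXXXXXX.pem',    # counterpart of PROXY_CAINFO
    },
)
s3 = boto3.resource('s3', config=config)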
Marcelo Gobelli
@decareano
@jamesls, it's working. thanks for the help
Joe Sweeney
@jswny
Does calling table = dynamodb.Table('name') error if the table doesn't exist yet?
And does that call create a table if it doesn't exist? Or do you have to call create_table
James Saryerwinnie
@jamesls
It does not error if the table does not exist.
Joe Sweeney
@jswny
@jamesls It won't create the table either right?
If it doesn't exist
James Saryerwinnie
@jamesls
Correct
Joe Sweeney
@jswny
Thanks!
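For reference, the Table object is lazy and makes no API call until used; a sketch of one way to check existence, with a placeholder table name:

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('name')  # lazy: no request is made yet

try:
    table.load()  # issues DescribeTable
    print('table exists, status:', table.table_status)
except ClientError as e:
    if e.response['Error']['Code'] == 'ResourceNotFoundException':
        print('table does not exist; call create_table first')
    else:
        raise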
Hopkins Nji
@HopkinsNji_twitter
I have a resource in an account; how can I find all IAM permissions which allow access to it? Loaded question.
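There's no single API that lists every permission granting access, but the IAM policy simulator can test known principals against the resource; a partial sketch with placeholder ARNs:

import boto3

iam = boto3.client('iam')
# All ARNs are placeholders; run this per principal you want to audit.
resp = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:role/MyRole',
    ActionNames=['s3:GetObject'],
    ResourceArns=['arn:aws:s3:::my-bucket/*'],
)
for result in resp['EvaluationResults']:
    print(result['EvalActionName'], result['EvalDecision'])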
Amos Kyler
@amoskyler
I need to sign a request for an S3-compliant link where the bucket name is empty - I can't seem to find a way to pass an empty bucket name (either no Bucket param or an empty string).
Is there any guidance on how to accomplish this with boto3?
Wojciech Pietrzak
@astropanic
Hi, I get a permission denied from s3.get_object when running my python script with aws-vault, but issuing the same request with plain aws s3api calls (also under aws-vault) works just fine. What is the issue? Am I missing something?
import boto3

s3 = boto3.client('s3')
my_bucket = 'xxxxx'
key = 'xxxxx'

response = s3.get_object(Bucket=my_bucket, Key=key)
print(response)
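One way to narrow this down (a diagnostic sketch, not from the chat) is to confirm the script resolves to the same identity as the CLI:

import boto3

# Compare this ARN with `aws sts get-caller-identity` under aws-vault;
# a mismatch usually means the script is picking up different credentials.
print(boto3.client('sts').get_caller_identity()['Arn'])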
ChernikovP
@ChernikovP

I'm trying to guard against working with non-existing alarms.

cloudwatch = boto3.resource('cloudwatch')
alarm = cloudwatch.Alarm('non_existing_alarm')
print(type(alarm))  # <class 'boto3.resources.factory.cloudwatch.Alarm'>, not NoneType

but if I try something like print(alarm.alarm_description) I get AttributeError: 'NoneType' object has no attribute 'get'.

I expected alarm = cloudwatch.Alarm('non_existing_alarm') to either throw an error or return None, but now I'm stuck on how to check whether the returned object is meaningful or not. Am I missing something?

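As with the DynamoDB Table question above, the Alarm resource is lazy; a sketch of an existence check via the client API:

import boto3

client = boto3.client('cloudwatch')
resp = client.describe_alarms(AlarmNames=['non_existing_alarm'])
if resp['MetricAlarms']:
    print('alarm exists:', resp['MetricAlarms'][0]['AlarmDescription'])
else:
    print('no such alarm')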
przem123
@przem123
Can boto3 be compiled and used on architectures like powerpc64 (LE & BE) and s390x?
Francisco Albert Albusac
@tatitati
guys, if I do ecs.run_task(....), does that spin up a new container?
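Roughly, yes: run_task launches a new task (one or more containers) from a task definition. A sketch with placeholder names:

import boto3

ecs = boto3.client('ecs')
# Cluster and task definition names are placeholders.
response = ecs.run_task(
    cluster='my-cluster',
    taskDefinition='my-task:1',
    count=1,
)
print(response['tasks'][0]['lastStatus'])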
Ramesh534
@Ramesh534
Hello Guys,
I don't have any experience with boto3. Can you guys please point me to where to start, and share some documents and material to learn from?
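The boto3 quickstart in the official docs is the usual starting point; as a first taste, a minimal script (assuming credentials are already configured):

import boto3

# Lists the S3 buckets the configured credentials can see.
s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])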
OverlordQ
@OverlordQ
So I have a long-running daemon process on an EC2 instance that does some cross-account work. To work in its own account it has an EC2 IAM instance profile attached. Is there still a way to get the Assume Role provider to handle the cred refreshing?
Or do I just need to specify a credential_source of instance metadata when setting the role_arn and it'll automagically work
James Saryerwinnie
@jamesls
yeah if you want the source credentials for the assume role call to be from imds then you can set the credential_source and it'll just automagically work
OverlordQ
@OverlordQ
Cool, was hoping to avoid having to roll a hacky token refresher
James Saryerwinnie
@jamesls
yeah it can get quite involved. That's some of the most complex code in boto3/botocore.
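A minimal sketch of the setup described here, with illustrative profile and role names:

# ~/.aws/config (illustrative):
# [profile cross-account]
# role_arn = arn:aws:iam::123456789012:role/WorkerRole
# credential_source = Ec2InstanceMetadata

import boto3

# botocore pulls the base credentials from instance metadata and
# refreshes the assumed-role credentials automatically as they expire.
session = boto3.Session(profile_name='cross-account')
print(session.client('sts').get_caller_identity()['Arn'])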
Anubhav
@anubhav6663
Is there a way by which we can close EC2 connections explicitly? As of now I have a lot of long-running connections and most of them are going into the CLOSE_WAIT state. I want to close these connections. How can I make sure all of them get closed?
I know that after a certain number of requests (https://stackoverflow.com/questions/18383839/python-s3-boto-connection-close-causes-an-error/24958783) S3 starts closing the connection, and that causes the CLOSE_WAIT state. What can be done to prevent this?
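There's no per-socket handle exposed, but the pool can be bounded, and recent boto3/botocore releases add an explicit close(); a sketch under those assumptions:

import boto3
from botocore.config import Config

# Bound the connection pool so idle sockets don't accumulate.
client = boto3.client('s3', config=Config(max_pool_connections=10))

# ... use the client ...

# assumption: close() is only available in newer boto3/botocore releases
client.close()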
Ryan Delaney
@rpdelaney
the docs for ECS.Waiter.TasksStopped say "an error is returned after 100 failed checks", but the specified Return is "None". What gets returned on an error? Or is an exception raised?
Ryan Delaney
@rpdelaney
Looks like an exception is raised: botocore.exceptions.WaiterError: Waiter TasksStopped failed: Max attempts exceeded
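A sketch of catching that exception, with placeholder cluster and task values:

import boto3
from botocore.exceptions import WaiterError

ecs = boto3.client('ecs')
waiter = ecs.get_waiter('tasks_stopped')
try:
    # Cluster and task ARN are placeholders.
    waiter.wait(cluster='my-cluster', tasks=['my-task-arn'])
except WaiterError as e:
    print('waiter gave up:', e)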