Brandon Martin
@codedmart
I am trying to figure out where/how to track this down.
I am using a service that is S3 compatible, but is not S3. I receive that error whenever I try to getObject a file that was putObject'ed in chunks. Meaning any file I upload that is smaller than the ChunkSize and gets uploaded as one chunk, I can getObject fine.
But any file that is uploaded in chunks that I try to getObject I see that error.
From above.
I have tested with s3cmd and the nodejs aws-sdk and the objects are in the bucket properly and I can get them just fine with those tools.
What intrigues me most is that I am not sure this is even related to amazonka, as I tried with aws (https://www.stackage.org/lts-10.3/package/aws-0.18) and received the same error.
Brandon Martin
@codedmart
So maybe an http-conduit issue, as they share that dependency?
Brandon Martin
@codedmart
Also I am seeing a SignatureDoesNotMatch when I try to use poContentEncoding.
Brandon Martin
@codedmart
@axman6 or @brendanhay Any ideas or suggestions?
One thing I figured out is that if I fork amazonka and add an Accept-Encoding header to GetObject, it works.
Alex Mason
@axman6
I'm trying to write a small script which uses amazonka, and I'm struggling with how best to take auth info from the user. Ideally I would like to be able to take an optional profile name and have newEnv look for that profile in the default profile file location, or the environment provided by the container, etc. Other tools generally just have a --profile argument, and it feels to me like the Discover option with a Maybe Text for the profile name would be useful, as there is currently no way to automate discovery with a specific profile name.
Alex Mason
@axman6
(ping brendanhay, I'd love your thoughts on this)
Leif Warner
@LeifW
@axman6 Let me know what you come up with for naming amazon-s3-streaming. I was doing the same thing for glacier, and called it amazonka-glacier-conduit, but then I saw that glacier-prefixed packages were discouraged.
I still want the name to have "amazonka" in it to indicate it's an amazonka lib. Also I was wondering about module naming...
Leif Warner
@LeifW
Maybe conduit-amazonka-glacier?
Alex Mason
@axman6
LeifW: yeah I guess s3-streaming-amazonka could work
Leif Warner
@LeifW
@axman6 As far as taking Env config from a user - like they pass in a profile name and you read the creds from e.g. a YAML settings file?
There's Read and To/From JSON instances for the Env stuff - so parsing stuff in there from a YAML file using the yaml lib would be trivial (it re-uses aeson's FromJSON typeclasses)
Alex Mason
@axman6
I basically want the behaviour of Discover but being able to specify a profile if it uses a .aws/credentials file and multiple profiles are specified
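A minimal sketch of what that could look like, assuming amazonka 1.6's Credentials constructors (FromFile for a named profile in the shared credentials file, Discover for the normal lookup chain); the helper name and path handling below are just for illustration:

    -- Pick credentials from an optional --profile argument; fall back to the
    -- usual Discover chain when no profile is given. Assumes amazonka 1.6,
    -- plus the directory and filepath packages for building the path.
    import           Data.Text        (Text)
    import           Network.AWS      (Credentials (Discover, FromFile), Env, newEnv)
    import           System.Directory (getHomeDirectory)
    import           System.FilePath  ((</>))

    envForProfile :: Maybe Text -> IO Env
    envForProfile mprofile = do
      home <- getHomeDirectory
      let credsFile = home </> ".aws" </> "credentials"
      newEnv $ case mprofile of
        Just profile -> FromFile profile credsFile  -- explicit profile from the shared credentials file
        Nothing      -> Discover                    -- environment, instance metadata, etc.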
Utku Demir
@utdemir
Am I doing this wrong? (Getting a strict ByteString from an S3 getObject call):
      runResourceT . runAWS env $ do
        gors <- send $
          getObject
            (S3.BucketName bucketName)
            (ObjectKey p)
        unless (gors ^. gorsResponseStatus `div` 100 == 2) $
          throwIO . InvokeException $
            "Downloading result failed. Status code: " <> T.pack (show $ gors ^. gorsResponseStatus)
        let body = _streamBody (gors ^. gorsBody)
        chunks <- lift . runConduit $ body .| sinkList
        return $! mconcat chunks

Throws

HttpExceptionRequest Request {
  host = "s3.amazonaws.com"
  port = 443
  secure = True
  requestHeaders = [("Host","s3.amazonaws.com"),("X-Amz-Date","20180623T123650Z"),("X-Amz-Content-SHA256","..."),("Authorization","<REDACTED>")]
  path = "..."
  queryString = ""
  method = "GET"
  proxy = Nothing
  rawBody = False
  redirectCount = 0
  responseTimeout = ResponseTimeoutMicro 70000000
  requestVersion = HTTP/1.1
}
ConnectionClosed

The exception is thrown by the conduit, not from the send.
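Not an explanation of the ConnectionClosed itself, but for reference, a shorter way to drain a response body is amazonka-core's sinkBody helper; a hedged sketch, assuming Network.AWS.Data.Body and conduit-extra's Data.Conduit.Binary.sinkLbs (the function name and signature below are placeholders):

    import           Control.Lens          ((^.))
    import qualified Data.ByteString.Lazy  as LBS
    import qualified Data.Conduit.Binary   as CB
    import           Network.AWS           (Env, runAWS, runResourceT, send)
    import           Network.AWS.Data.Body (sinkBody)
    import           Network.AWS.S3

    fetchObject :: Env -> BucketName -> ObjectKey -> IO LBS.ByteString
    fetchObject env bucket key =
      runResourceT . runAWS env $ do
        gors <- send (getObject bucket key)
        -- sinkBody runs the streaming response body through the given sink
        sinkBody (gors ^. gorsBody) CB.sinkLbs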
Leif Warner
@LeifW
@axman6 Is your amazonka s3-conduit lib working for you to upload large blobs in constant memory?
I kind of cribbed from you, and I call Glacier's uploadMultipartPart from inside mapM on the conduit of ByteStrings, but it seems that holds on to memory. If I replace that with liftIO $ runResourceT $ runReaderT (uploadMultipartPart ...) (basically forcing the connection pool, or whatever that ResourceT stuff is for, to run for just that time), the problem goes away.
Or if I just replace the call to send uploadMultipartPart with ByteString.appendFile ...
Alex Mason
@axman6
hmm, I believe it was working fine last time I tested it, it happily uploaded in constant memory
did you have a look at the application in Main.hs?
it shows how to do streaming upload and concurrent upload of a file
Leif Warner
@LeifW
I'm using runResourceT $ runReaderT instead of runResourceT $ runAWS, but I wouldn't think that would make much of a difference - with AWS just being a newtype wrapper for ReaderT IO
I can try an S3 upload of a large file and see if it works for me in constant space.
Leif Warner
@LeifW
Know how to change the endpoint? I'm getting a "301 Moved Permanently: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint." trying to upload to a bucket in us-west-2
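If the underlying issue is that requests are going to the default endpoint rather than the bucket's region, a hedged sketch of pointing them at us-west-2 (Oregon), assuming amazonka 1.6: either fix the region on the Env or scope a block of calls with within. The bucket and key names below are placeholders:

    import           Control.Lens   ((&), (.~))
    import           Network.AWS    (Credentials (Discover), Region (Oregon),
                                     envRegion, newEnv, runAWS, runResourceT,
                                     send, within)
    import           Network.AWS.S3 (BucketName (..), ObjectKey (..), getObject)

    example :: IO ()
    example = do
      env <- newEnv Discover
      -- Option 1: every request made with this Env targets us-west-2.
      let envWest = env & envRegion .~ Oregon
      -- Option 2: override the region only for this block of requests.
      _ <- runResourceT . runAWS envWest . within Oregon $
             send (getObject (BucketName "my-bucket") (ObjectKey "some/key"))
      pure ()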
Leif Warner
@LeifW
@axman6 Yeah, cool - the S3 streaming upload works: I uploaded a 153MB file to S3 with +RTS -s -M40m, and it used "8,843,136 bytes maximum residency"
So I'll have to look into what's going on with my usage. Maybe because I'm throwing away the UploadPart response?
Leif Warner
@LeifW
Oh, well, the upload from file in there uses concurrentUpload, which doesn't look like it's using Conduit, and it's doing its own stuff with the http connection manager...
If I cat the file and pipe it to the stdin of that program, it uses the conduit upload, which crashes with heap exhaustion even if I set the heap up to -M100m
Leif Warner
@LeifW
Maybe runResourceT is not intended for the top-level of the app, but narrowly scoped around each request?
Alex Mason
@axman6
LeifW: hmm, possibly - it's been a long time since I've looked at this code, so I'd need some time to context switch. I'll try and find some time this weekend to see what's happening
Leif Warner
@LeifW
I wrapped every send call in my code with "runResourceT", and the problem went away. I left some details on the amazonka ticket I opened: brendanhay/amazonka#475
I'd maybe want to profile this, but I guess something's being left open on the request / responses and not freed until "runResourceT" is called. Not sure if that's intended behaviour. I thought ResourceT was supposed to be a "failsafe", but that you should have other means of freeing the resource sooner.
When you send a big request body to AWS, and get back a bodyless response with some headers, it certainly seems like you shouldn't have dangling resources for that request still lying around.
Leif Warner
@LeifW
Effectively I replaced the MonadResource constraint in my app with MonadUnliftIO by doing that.
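A minimal sketch of that workaround, assuming amazonka 1.6 plus resourcet and unliftio-core: each send gets its own ResourceT scope, so whatever the request/response was holding is released as soon as the call returns (the helper name is just for illustration):

    import           Control.Monad.IO.Unlift      (MonadUnliftIO)
    import           Control.Monad.Trans.Resource (runResourceT)
    import           Network.AWS                  (AWSRequest (..), Env, runAWS, send)

    -- Run a single amazonka request in its own ResourceT scope.
    sendScoped :: (MonadUnliftIO m, AWSRequest a) => Env -> a -> m (Rs a)
    sendScoped env rq = runResourceT (runAWS env (send rq))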
Alex Mason
@axman6
brendanhay: thanks for the 1.6.1 release, once that's in the new stackage LTS our build times will go down significantly, I'm unreasonably excited about this
حبيب الامين
@habibalamin
Can someone explain to me why amazonka's ObjectKey type has a FromText instance that can fail with Maybe, yet at the same time, has an _ObjectKey :: Iso' ObjectKey Text that converts cleanly between the two? What's the canonical way to construct an ObjectKey when I have a Text?
Right now, I'm just manually constructing it the same way the lens Iso does, as I don't actually know how to use lens Isos.
Utku Demir
@utdemir
@habibalamin Why don't you just use the constructor?
حبيب الامين
@habibalamin

Right, that's what I meant when I said “I'm just manually constructing it the same way the lens Iso does”.

However, I get the feeling that I shouldn't be doing it that way. I get the feeling I should be either using the lens Iso or using fromText.

Since the lens Iso is saying there's a direct correspondence, I don't want to have to deal with getting Maybe ObjectKey so that rules out fromText.

So it seems like I should use the lens Iso, except I don't know how.
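Since the question is really just how to get from Text to ObjectKey, a small illustration of the equivalent spellings, assuming amazonka-s3 exports the ObjectKey constructor and the _ObjectKey iso mentioned above:

    import Control.Lens   (review, (#))
    import Data.Text      (Text)
    import Network.AWS.S3 (ObjectKey (..), _ObjectKey)

    keyFromCtor :: Text -> ObjectKey
    keyFromCtor t = ObjectKey t        -- plain constructor

    keyFromIso :: Text -> ObjectKey
    keyFromIso t = review _ObjectKey t -- run the Iso "backwards", Text -> ObjectKey

    keyFromIso' :: Text -> ObjectKey
    keyFromIso' t = _ObjectKey # t     -- same as review, operator form

All three are total; fromText is the one that can report a parse failure, which is why it returns a wrapped result rather than a bare ObjectKey.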

Alex Mason
@axman6
Anyone have any thoughts on my comments on brendanhay/amazonka#540? It seems likely that other packages might also be broken, because signingName is being ignored in the service descriptions