ronytesler
@ronytesler
someone?
ronytesler
@ronytesler
when I add this line: import S3 from 'aws-sdk/clients/s3'; I get this error at var zeroBuffer = new Buffer(intSize); zeroBuffer.fill(0);:
Buffer is not defined
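(The thread never resolves this, but the usual cause is that Node globals like Buffer don't exist in the browser bundle. A minimal sketch of a common workaround, assuming Meteor's meteor-node-stubs supplies the buffer polyfill; this is an assumption, not something confirmed in the chat:)

// Client-side entry point; run this before any module that imports aws-sdk
// (e.g. put it in its own file and import that file first, since ES imports hoist)
import { Buffer } from 'buffer'; // polyfill assumed to come via meteor-node-stubs

if (typeof global !== 'undefined' && !global.Buffer) {
  global.Buffer = Buffer; // satisfies the SDK's `new Buffer(...)` calls
}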
ronytesler
@ronytesler
onBeforeUpload is called, but onAfterUpload is not called
I get this on the console:
[FilesCollection] [UploadInstance] [sendEOF] false
from core.js
Tathan
@Tathan1047
Hello, good afternoon. A question about this package, could someone help me?
I'm uploading the file and everything runs correctly, but when I try to download it I get a 404
and I'm specifying the exact path where the file is located
Sarajin
@Sarajin
Wonder if anyone is around to answer a question about ostrio:files?
dr.dimitru
@dr-dimitru
@Sarajin yes, but it's better to open a new issue on GitHub, as it would get much more attention
Sarajin
@Sarajin
@dr-dimitru It wasn't really a bug or issue. Would that matter?
Sarajin
@Sarajin
I'll just type it here, and you can tell me if it should be opened on GitHub. I'm using ostrio:files in a Meteor app that uses S3 as the storage area; it works great so far. I noticed it has some kind of versioning ability, but it seems more like a static list of possible versions (as the example shows different video encoding formats, etc.). Can it be used as a versioning mechanic where the file can be updated with multiple versions (1, 2, 3) at some point in the future, only returning, say, the latest version of the file? I'm trying to look through and figure out if it's possible, but I don't see some kind of update function on an already existing file. Hopefully that makes sense. If it is possible, and you know of an example somewhere, that would be great.
Thanks!
dr.dimitru
@dr-dimitru

@Sarajin questions are welcome in our GitHub issues as well.

Regarding your question:

  1. There is no built-in mechanism for uploading/adding different file versions; you have to update the MongoDB record with your own/3rd-party code (see the sketch below)
  2. There is a built-in mechanism to retrieve/download a file using its "version"
  3. Yes, it's suitable for versioning
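(A minimal sketch of that two-step flow, assuming a FilesCollection named files; fileId and the 'v2' version name are hypothetical, and step 1 presumes your own code already stored the new binary:)

import { FilesCollection } from 'meteor/ostrio:files';

const files = new FilesCollection({ collectionName: 'files' });

// 1. No built-in upload path for extra versions: update the MongoDB record yourself
files.collection.update(fileId, {
  $set: {
    'versions.v2': {
      extension: 'mp4',
      size: 1048576,   // byte size of the new binary
      type: 'video/mp4',
      meta: {},        // e.g. a pointer to where your own code stored it
    },
  },
});

// 2. Retrieval by version name is built in:
const fileRef = files.findOne(fileId);
const urlToV2 = files.link(fileRef, 'v2'); // download link for that version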
Sarajin
@Sarajin
@dr-dimitru Thanks for answering!
dr.dimitru
@dr-dimitru
@Sarajin 👨‍💻👍
LaloMores
@LaloMores
Hi all. I just found this package and want to know if it integrates well on mobile. The online demo works awesome in the browser (also in the mobile browser). I wonder if it integrates well with the Cordova context, and if there are specific instructions on what to do to achieve that, which I don't seem to find (do I need to install any Cordova plugins to access the file system, for example?)
dr.dimitru
@dr-dimitru
Hey @LaloMores I recommend checking open and closed issues, especially those marked as security (you will get why). This package is tested on Cordova by community members (again, go to the issues). Also check this repo: https://github.com/risetechnologies/cookieTest
LaloMores
@LaloMores
Alright. Thanks @dr-dimitru It seems all is fixed though? That repo runs all tests nice and smooth in the browser, but I couldn't deploy it to the phone for some reason (everything breaks after the first APK build)
dr.dimitru
@dr-dimitru
@LaloMores start a new issue: https://github.com/risetechnologies/cookieTest/issues
LaloMores
@LaloMores
@dr-dimitru done
gjdanskij
@gjdanskij
Hello, I'm looking for help integrating this awesome module.
I have a Meteor web app and a regular Express.js web app on the same server, so I have a question: can I add files to the Meteor-Files collection from Express.js?
Is that possible to do?
Please advise how I can properly do that.
dr.dimitru
@dr-dimitru
Hello @gjdanskij, theoretically yes, as both run in a Node.js environment. But I'm not sure how much time and hassle it would cause. I'd need a more detailed overview of your current solution.
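(One hedged way to wire this up, as an illustration; the thread doesn't settle on a design. Have the Express app write uploads to a directory both apps can reach, then let the Meteor side register the file with FilesCollection#addFile, which adds a file that already exists on the server's file system. The uploads collection and the path below are hypothetical:)

// Meteor server side; `/shared/uploads/report.pdf` was written by the Express app
uploads.addFile('/shared/uploads/report.pdf', {
  fileName: 'report.pdf',
  type: 'application/pdf',
  meta: { source: 'express' },
}, (error, fileRef) => {
  if (error) throw error;
  console.log('Registered in Meteor-Files:', fileRef._id);
});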
Timo Schneider
@tschneid

Hello, I'm using the S3 integration (with 1.13.0) and have a question regarding downloading files from S3.

The S3 integration is inapplicable for us when handling big files (1GB+), since s3.getObject in interceptDownload first downloads files into memory and then serves them to the client. So our machines ran out of memory and stalled. Therefore, I'd like to stream the data to the client, but the following solution does not work for me:

const stream = s3.getObject(opts).createReadStream();
fileCollection.serve(http, fileRef, fileRef.versions[version], version, stream);

Do you see any flaws in this solution? Could you give me any pointers on why this isn't working? Thanks!

Diesmaster
@Diesmaster
Hey, I have a question. I am trying to show an image that was uploaded to my MongoDB on a website. I tried to get the file path and put it into the src of an img; however, this gives me a broken image. Do you guys know what to do? (Sorry if this is a dumb question, but I am relatively new to Meteor in general.)
dr.dimitru
@dr-dimitru
@tschneid check out the official AWS:S3 docs, look for streaming and piping, then pass a readable stream to the .serve() method:
fileCollection.serve(http, fileRef, fileRef.versions[version], version, readableStream);
@Diesmaster Let me know if that helps
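(For @Diesmaster's broken-image question, a minimal sketch of the usual pattern; the actual link shared in the chat isn't preserved, and the Images collection and query here are hypothetical. Build the src with FilesCollection#link rather than using the raw file path:)

// Client side, Blaze template helper
Template.avatar.helpers({
  imgSrc() {
    const fileRef = Images.findOne({}); // however you select the file
    return fileRef ? Images.link(fileRef) : ''; // served URL, not fileRef.path
  },
});

// In the template: <img src="{{imgSrc}}" />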
Diesmaster
@Diesmaster
This message was deleted
Diesmaster
@Diesmaster
it worked.
thanks for the link :-P
dr.dimitru
@dr-dimitru
@Diesmaster 👍
Diesmaster
@Diesmaster
@dr-dimitru is there a way to update the metadata? I tried finding it in the manual :-)
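(The thread leaves this one unanswered, but since meta is just a field on the file's MongoDB document, a hedged sketch is a plain collection update; the names are illustrative:)

// Server side: files.collection is the underlying Mongo.Collection
files.collection.update(fileId, {
  $set: { 'meta.caption': 'New caption' },
});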
Joseph Rousseau
@joerou
@tschneid I am now working on a project that seems to be running into the same problem you describe with handling big files. Did you figure out a solution to this?
dr.dimitru
@dr-dimitru
@tschneid @joerou have you checked the streaming docs by AWS:S3?
I know @tschneid mentioned streaming wasn't working for you; any further details on why, and what exactly wasn't working?
Joseph Rousseau
@joerou
@dr-dimitru Thanks for the quick response, and sorry for the delay on my end! According to the docs, it should be exactly as @tschneid listed above: the readable stream, s3.getObject(opts).createReadStream();, should be passed to the .serve() method. This results in my server not crashing as it was before, but I am only able to play the first 19 seconds of the video. I'm still testing and playing with it, but I was curious whether he had found a working solution, or what exactly wasn't working for him, to see if it is the same problem.
Timo Schneider
@tschneid

@joerou I cannot exactly recall what the problem was, but I remember that I also was not able to play videos inline. There was always some edge case in which streams weren't working. What I ended up doing is (1) keeping the default implementation with s3.getObject since that works for playing inline videos and (2) downloading the video via a signed URL (that expires after 5 seconds) when a download parameter is passed. Something like this:

const downloadViaSignedUrl = async (...) => {
    // opts = ...
    const url = await s3.getSignedUrl('getObject', opts);
    // http.response is Node's ServerResponse, so redirect manually:
    http.response.writeHead(302, { Location: url });
    http.response.end();
};

interceptDownload(...) {
    // ...
    if (http.params.query.download && http.params.query.download === 'true') {
        downloadViaSignedUrl(http, fileRef, version);
    } else {
        downloadViaGetObject(http, fileRef, version);
    }
    return true;
}

No insights on why streams won't work with Meteor-Files, sorry.

dr.dimitru
@dr-dimitru

@joerou @tschneid I believe you're looking for chunk-range support, not streaming. All you need to implement chunked download support is to read the "Range" request header and pass it to the .getObject method; please see the demo code here:

Quick sample:

        // Context: inside interceptDownload(http, fileRef, version).
        // Assumes `_` is underscore/lodash, `stream` is Node's stream module,
        // `client` is a configured AWS.S3 instance, and `opts` holds Bucket/Key.
        if (http.request.headers.range) {
          const vRef  = fileRef.versions[version];
          let range   = _.clone(http.request.headers.range);
          const array = range.split(/bytes=([0-9]*)-([0-9]*)/);
          const start = parseInt(array[1]);
          let end     = parseInt(array[2]);
          if (isNaN(end)) {
            // Request data from AWS:S3 by small chunks
            end       = (start + this.chunkSize) - 1;
            if (end >= vRef.size) {
              end     = vRef.size - 1;
            }
          }
          opts.Range   = `bytes=${start}-${end}`;
          http.request.headers.range = `bytes=${start}-${end}`;
        }

        const fileColl = this;
        client.getObject(opts, function (error) {
          // Note: inside this callback `this` is the AWS response object
          if (error) {
            console.error(error);
            if (!http.response.finished) {
              http.response.end();
            }
          } else {
            if (http.request.headers.range && this.httpResponse.headers['content-range']) {
              // Set proper range header in accordance with what is returned from AWS:S3
              http.request.headers.range = this.httpResponse.headers['content-range'].split('/')[0].replace('bytes ', 'bytes=');
            }

            // Imitate proper streaming:
            const dataStream = new stream.PassThrough();
            fileColl.serve(http, fileRef, fileRef.versions[version], version, dataStream);
            dataStream.end(this.data.Body);
          }
        });
Timo Schneider
@tschneid
Thanks, @dr-dimitru. I've implemented that for inline videos. However, then you will still need to differentiate between playing a video inline (which perfectly works with the code provided in the docs) or requesting to download the full file (for which no Range headers are sent and the file is fully downloaded to the server's memory). So, the problem I had only occurred when a user wanted to download a (big) file. And for that, I chose to use signed URLs in the end (since the approach I posted above on Nov 17 wasn't working). If you, @joerou, come up with a solution using streams I would be glad to hear about it!
dr.dimitru
@dr-dimitru
@tschneid it's obviously something wrong on the AWS API client end. Could you test to make sure AWS is sending the whole file, instead of chunks, inside the created stream?
Another solution on my mind: as long as you have a signed URL, you can use the request or request-libcurl NPM package to pipe the HTTP request, which is a streaming data source by default.
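(A minimal sketch of that piping idea with the request package; hedged: signedUrl is assumed to come from s3.getSignedUrl, and request-libcurl's API differs slightly:)

const request = require('request');

// Inside interceptDownload: stream S3's HTTP response straight to the client
request(signedUrl)
  .on('error', (err) => {
    console.error(err);
    if (!http.response.finished) http.response.end();
  })
  .pipe(http.response);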
dr.dimitru
@dr-dimitru
@tschneid
Let me know if that was helpful :)
Joseph Rousseau
@joerou
@dr-dimitru I can't seem to get it to work inline for large video files using the code above. It seems you are correct, though: AWS is sending the whole file, and loading that into memory is what is crashing the app. If I have a small video file, it works perfectly.
@tschneid I'll keep you posted here if I find a solution :)
dr.dimitru
@dr-dimitru

@joerou

I can't seem to get it work inline for large video files using the code above.

Do you get an error? Exception? What exactly doesn't work?

Joseph Rousseau
@joerou
@dr-dimitru No, not specifically; most of the time it just crashes. Last night I got an out-of-memory exception... I didn't have much time, but I started to look into it, and it appears that the first block (below) is not always getting executed. Is it possible that these headers are not being set properly?
if (http.request.headers.range) {
  const vRef  = fileRef.versions[version];
  let range   = _.clone(http.request.headers.range);
  const array = range.split(/bytes=([0-9]*)-([0-9]*)/);
  const start = parseInt(array[1]);
  let end     = parseInt(array[2]);
  if (isNaN(end)) {
    // Request data from AWS:S3 by small chunks
    end       = (start + this.chunkSize) - 1;
    if (end >= vRef.size) {
      end     = vRef.size - 1;
    }
  }
  opts.Range   = `bytes=${start}-${end}`;
  http.request.headers.range = `bytes=${start}-${end}`;
}
Joseph Rousseau
@joerou
I got it to work like this, using the example off of the AWS site. So I am wondering if maybe it has to do with using the callback instead of hooking into the events?
const s3Stream = s3.getObject(opts).createReadStream();

// Listen for errors returned by the service
s3Stream.on('error', function (err) {
  // NoSuchKey: The specified key does not exist
  console.error(err);
});

s3Stream.pipe(http.response).on('error', function (err) {
  // capture any errors that occur when writing data to the response
  console.error('File Stream:', err);
}).on('close', function () {
  console.log('Done.');
});
dr.dimitru
@dr-dimitru

@joerou

is it possible that these headers are not being set properly?

Yes, because it's up to the browser to send a partial-file request with the Range header. Not all browsers send this header, and not in all cases.
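(A hedged way to combine the two findings above, as an illustration rather than something from the thread: keep the Range-header path for inline playback, and fall back to @joerou's working pipe for full downloads, which arrive without a Range header, so the file never has to fit in memory:)

// Inside interceptDownload, before the ranged getObject call:
if (!http.request.headers.range) {
  s3.getObject(opts).createReadStream()
    .on('error', (err) => {
      console.error(err);
      if (!http.response.finished) http.response.end();
    })
    .pipe(http.response);
  return true; // tell Meteor-Files the download was handled here
}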