dr.dimitru
@dr-dimitru
@Diesmaster Let me know if that helps
Diesmaster
@Diesmaster
This message was deleted
Diesmaster
@Diesmaster
it worked.
thanks for the link :-P
dr.dimitru
@dr-dimitru
@Diesmaster πŸ‘
Diesmaster
@Diesmaster
@dr-dimitru is there a way to update the metadata? I tried finding it in the manual :-)
Joseph Rousseau
@joerou
@tschneid I am now working on a project that seems to be running into the same problem you describe with handling big files. Did you figure out a solution to this?
dr.dimitru
@dr-dimitru
@tschneid @joerou have you checked the streaming docs for AWS S3?
I know @tschneid mentioned streaming wasn't working; any further details on why, and what exactly wasn't working for you?
Joseph Rousseau
@joerou
@dr-dimitru Thanks for the quick response, sorry for the delay on my end! It seems to me that, according to the docs, it should be exactly as @tschneid listed above: the readable stream from s3.getObject(opts).createReadStream() should be passed to the .serve() method. This results in my server no longer crashing as it did before, but I am only able to play the first 19 seconds of the video. I'm still testing and playing with it, but was curious whether he had found a working solution, or what exactly wasn't working for him, to see if it is the same problem.
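A minimal sketch of that setup inside interceptDownload, assuming the S3 key is stored on the file's meta (the bucket name, key location, and s3 client are illustrative):

// Inside interceptDownload(http, fileRef, version) -- illustrative sketch
const vRef = fileRef.versions[version];
const opts = {
  Bucket: 'my-bucket',      // assumed bucket name
  Key: vRef.meta.pipePath   // assumed location of the S3 key on the file doc
};

// Readable stream from S3, handed to Meteor-Files' .serve()
const readStream = s3.getObject(opts).createReadStream();
readStream.on('error', (err) => {
  console.error(err);
  if (!http.response.finished) {
    http.response.end();
  }
});
this.serve(http, fileRef, vRef, version, readStream);
return true;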
Timo Schneider
@tschneid

@joerou I cannot exactly recall what the problem was, but I remember that I also was not able to play videos inline. There was always some edge case in which streams weren't working. What I ended up doing is (1) keeping the default implementation with s3.getObject since that works for playing inline videos and (2) downloading the video via a signed URL (that expires after 5 seconds) when a download parameter is passed. Something like this:

const downloadViaSignedUrl = async (http, fileRef, version) => {
    // opts = ...
    const url = await s3.getSignedUrl('getObject', opts);
    http.response.redirect(url);
};

interceptDownload(http, fileRef, version) {
    // ...
    if (http.params.query.download && http.params.query.download === 'true') {
        downloadViaSignedUrl(http, fileRef, version);
    } else {
        downloadViaGetObject(http, fileRef, version);
    }
    return true;
}

No insights on why streams won't work with Meteor-Files, sorry.

dr.dimitru
@dr-dimitru

@joerou @tschneid I believe you're looking for chunk-range support, not streaming. All you need to implement chunk download support is to read the "Range" request header and pass it to the .getObject method; please see the demo code here:

Quick sample:

        if (http.request.headers.range) {
          const vRef  = fileRef.versions[version];
          let range   = _.clone(http.request.headers.range);
          const array = range.split(/bytes=([0-9]*)-([0-9]*)/);
          const start = parseInt(array[1]);
          let end     = parseInt(array[2]);
          if (isNaN(end)) {
            // Request data from AWS:S3 by small chunks
            end       = (start + this.chunkSize) - 1;
            if (end >= vRef.size) {
              end     = vRef.size - 1;
            }
          }
          opts.Range   = `bytes=${start}-${end}`;
          http.request.headers.range = `bytes=${start}-${end}`;
        }

        const fileColl = this;
        client.getObject(opts, function (error) {
          if (error) {
            console.error(error);
            if (!http.response.finished) {
              http.response.end();
            }
          } else {
            if (http.request.headers.range && this.httpResponse.headers['content-range']) {
              // Set proper range header in according to what is returned from AWS:S3
              http.request.headers.range = this.httpResponse.headers['content-range'].split('/')[0].replace('bytes ', 'bytes=');
            }

            // Imitate proper streaming:
            const dataStream = new stream.PassThrough();
            fileColl.serve(http, fileRef, fileRef.versions[version], version, dataStream);
            dataStream.end(this.data.Body);
          }
        });
Timo Schneider
@tschneid
Thanks, @dr-dimitru. I've implemented that for inline videos. However, you will then still need to differentiate between playing a video inline (which works perfectly with the code provided in the docs) and requesting to download the full file (for which no Range headers are sent and the file is fully downloaded into the server's memory). So the problem I had only occurred when a user wanted to download a (big) file, and for that I chose to use signed URLs in the end (since the approach I posted above on Nov 17 wasn't working). If you, @joerou, come up with a solution using streams I would be glad to hear about it!
dr.dimitru
@dr-dimitru
@tschneid it's obviously something wrong on the AWS API client end. Could you test to make sure AWS is sending the whole file instead of chunks inside the created stream?
Another solution on my mind: as long as you have a signed URL, you can use the request or request-libcurl NPM package to pipe the HTTP request, which is a streaming data source by default.
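A rough sketch of that idea with the request package, meant to live inside interceptDownload (untested; the getSignedUrl opts and expiry are assumptions):

const request = require('request');

// opts = { Bucket: ..., Key: ..., Expires: 5 } -- assumed signed-URL params
const url = s3.getSignedUrl('getObject', opts);

// Pipe the signed-URL response straight to the client;
// `request` streams the response body by default
request.get(url)
  .on('error', (err) => {
    console.error(err);
    if (!http.response.finished) {
      http.response.end();
    }
  })
  .pipe(http.response);
return true;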
dr.dimitru
@dr-dimitru
@tschneid
Let me know if that was helpful :)
Joseph Rousseau
@joerou
@dr-dimitru I can't seem to get it to work inline for large video files using the code above. It seems you are correct, though, that AWS is sending the whole file, and loading that into memory is what is crashing the app. With a small video file it otherwise works perfectly.
@tschneid I'll keep you posted here if I find a solution :)
dr.dimitru
@dr-dimitru

@joerou

I can't seem to get it to work inline for large video files using the code above.

Do you get an error? Exception? What exactly doesn't work?

Joseph Rousseau
@joerou
@dr-dimitru no, not specifically; most of the time it just crashes. Last night I got an out-of-memory exception. I didn't have much time, but I started to look into it and it appears that the first block (below) is not always getting executed. Is it possible that these headers are not being set properly?
if (http.request.headers.range) {
  const vRef  = fileRef.versions[version];
  let range   = _.clone(http.request.headers.range);
  const array = range.split(/bytes=([0-9]*)-([0-9]*)/);
  const start = parseInt(array[1]);
  let end     = parseInt(array[2]);
  if (isNaN(end)) {
    // Request data from AWS:S3 by small chunks
    end       = (start + this.chunkSize) - 1;
    if (end >= vRef.size) {
      end     = vRef.size - 1;
    }
  }
  opts.Range   = `bytes=${start}-${end}`;
  http.request.headers.range = `bytes=${start}-${end}`;
}
Joseph Rousseau
@joerou
I got it to work like this, using the example from the AWS site. So I am wondering if maybe it has to do with using the callback instead of hooking into the events?
const s3Stream = s3.getObject(opts).createReadStream();

        // Listen for errors returned by the service
        s3Stream.on('error', function(err) {
          // NoSuchKey: The specified key does not exist
          console.error(err);
        });

        s3Stream.pipe(http.response).on('error', function(err) {
          // capture any errors that occur when writing data to the response
          console.error('File Stream:', err);
        }).on('close', function() {
          console.log('Done.');
        });
dr.dimitru
@dr-dimitru

@joerou

Is it possible that these headers are not being set properly?

Yes, because it's up to the browser to send a partial-file request with the Range header. Not all browsers send this header, and not in all cases.

@joerou Feel free to send a PR to our docs once this code is tested
Joseph Rousseau
@joerou

@dr-dimitru Ok. I think the better option would probably be to check whether the header is set and, if not, check the size of the file being requested; if it's a large one, only return the first chunk? I'm not sure whether, once started, the browser will catch on and keep requesting the rest; worth testing though. The problem with the current method is that it just streams directly from start to finish, so whereas with your solution I can "skip" to the middle of the video and start playback, with this solution I cannot (if that makes sense?)

Once I get a nice solution I'll definitely make the PR :)

dr.dimitru
@dr-dimitru
@joerou streaming and downloading by chunks are two different approaches. I'm not sure whether the browser will request further chunks; I only tested it with the inline player, which requests the file partially as playback progresses.
@joerou I'd use s3Stream.pipe(http.response) for downloading the whole file and let the HTTP protocol manage progressive download by itself,
and use the Range header for playback (e.g. inline).
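Putting those two suggestions together, one possible shape for interceptDownload (a sketch only; the bucket/key names and the serveRangeFromS3 helper are hypothetical, its body would be the chunked getObject example above):

interceptDownload(http, fileRef, version) {
  const vRef = fileRef.versions[version];
  const opts = { Bucket: 'my-bucket', Key: vRef.meta.pipePath }; // assumed names

  if (http.request.headers.range) {
    // Inline playback: honour the Range header and serve partial content
    serveRangeFromS3(this, http, fileRef, version, opts); // hypothetical helper
  } else {
    // Full download: pipe the S3 read stream and let HTTP handle progress
    s3.getObject(opts).createReadStream()
      .on('error', (err) => console.error(err))
      .pipe(http.response);
  }
  return true;
}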
plvenkatesh92
@plvenkatesh92
I am using the ostrio:files plugin to add, update, or remove an image. I am not able to preview the uploaded image. I am using React as the front end. Can anyone help me?
Eric Burel
@eric-burel
Hi guys, I can't find any documentation about controlling access to files so that they are available only to certain users. I am probably not the first to want that; is there any resource I could use?
dr.dimitru
@dr-dimitru
@plvenkatesh92 you have to implement it yourself, showing the image as base64 after the user has selected a file. This is not related to the library, and you can find a lot of examples on the internet.
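A minimal React sketch of that idea (not specific to this library; the component and handler names are illustrative): read the selected file as a data URL and use it as the image src.

import React, { useState } from 'react';

function ImagePreview() {
  const [previewSrc, setPreviewSrc] = useState(null);

  const onFileChange = (event) => {
    const file = event.target.files[0];
    if (!file) return;
    const reader = new FileReader();
    // Turn the selected file into a base64 data URL for the preview
    reader.onload = () => setPreviewSrc(reader.result);
    reader.readAsDataURL(file);
  };

  return (
    <div>
      <input type="file" accept="image/*" onChange={onFileChange} />
      {previewSrc && <img src={previewSrc} alt="preview" />}
    </div>
  );
}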
Eric Burel
@eric-burel
Thanks! So to create a sharing system, I guess you would create another collection storing the files' permissions plus the fileId, which you can then query in the "protected" callback.
Eric Burel
@eric-burel
Also, how is this.userId populated? I can't seem to get it defined.
nvm, by using this.request I can retrieve the user, that's nice
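A sketch of that sharing idea, assuming a FilePermissions collection with { fileId, userId } documents (the collection and field names are illustrative):

import { FilesCollection } from 'meteor/ostrio:files';

const Files = new FilesCollection({
  collectionName: 'files',
  protected(fileObj) {
    // this.userId is set when Meteor-Files can resolve the login token;
    // otherwise this.request (the raw HTTP request) can be used to
    // identify the user yourself.
    if (!this.userId) {
      return false;
    }
    // Allow access only if a permission record exists for this user/file pair
    return !!FilePermissions.findOne({ fileId: fileObj._id, userId: this.userId });
  }
});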
Eric Burel
@eric-burel
hmmm, protected can't be an async function? adding async makes the check fail systematically
dr.dimitru
@dr-dimitru

protected can't be an async function? adding async makes the check fail systematically

Use fibers/future
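A sketch of wrapping an asynchronous check with fibers/future so protected can stay synchronous (checkAccess is a hypothetical async permission check returning a Promise):

import Future from 'fibers/future';
import { FilesCollection } from 'meteor/ostrio:files';

const Files = new FilesCollection({
  collectionName: 'files',
  protected(fileObj) {
    const future = new Future();
    checkAccess(this.userId, fileObj)            // hypothetical async check
      .then((allowed) => future.return(allowed))
      .catch((error) => future.throw(error));
    // Block this fiber until the promise settles, then return a plain boolean
    return future.wait();
  }
});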

kakadais
@kakadais
What a great channel here. Nice to meet you guys.
kakadais
@kakadais
I just opened a new issue with a suggestion on GitHub. Not an urgent one, but please have a look when you guys are available.
dr.dimitru
@dr-dimitru

Hello @kakadais ,

Thank you for your proposal. We are looking forward to a major release of the Meteor-Files package, with the idea of pluggable adapters for 3rd-party storage, and we will take your suggestions into consideration.

To be honest, upload retries should be managed by the adapter/3rd-party storage implementation.

ecarlotti
@ecarlotti
Is there a sample of how to use Meteor-Files for creating temporary files on the server?
ecarlotti
@ecarlotti

what I need to do is:

  1. Create a temporary file;
  2. Write lots of data to it - this includes looping through a collection and exporting data, so I need to be able to "append" data to the temp file;
  3. Return the newly generated temp file to the client - it will be downloaded there immediately;
  4. After the download ends, the file has to be removed/destroyed.

From all I have read about this component, I think I would be able to do all that with it - is that correct?
Can anyone give me any instructions on how I should proceed in order to do that?

-= Thanks in advance =-

dr.dimitru
@dr-dimitru

Hello @ecarlotti ,

I don't think you need any 3rd-party library at all; everything you described is available in Node.js.

This library is more about file management than about reading/writing.
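For the temp-file side of that, plain Node.js is enough; a rough sketch (the collection name, file format, and cleanup policy are assumptions):

import fs from 'fs';
import os from 'os';
import path from 'path';

// 1. Create a temporary file
const tmpPath = path.join(os.tmpdir(), `export-${Date.now()}.csv`);
const out = fs.createWriteStream(tmpPath);

// 2. Append data while looping over the collection
MyCollection.find().forEach((doc) => {
  out.write(`${doc._id},${doc.name}\n`);
});
out.end();

out.on('finish', () => {
  // 3. Serve tmpPath to the client (e.g. via the webapp route sketched below),
  // 4. then remove it once the download has finished:
  // fs.unlink(tmpPath, (err) => err && console.error(err));
});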
ecarlotti
@ecarlotti
@dr-dimitru, thanks a lot for your answer - I'm quickly coming to that conclusion, but it seems using the component makes my life easier in terms of managing the temp files - the component gives me a ready-to-use Meteor download mechanism as well as all the event handlers I would otherwise have to create myself. It seems to be a good way to go...
dr.dimitru
@dr-dimitru
@ecarlotti the download endpoint is a fraction of its codebase. You can achieve the same with middleware created using the webapp module.
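A minimal sketch of such a download endpoint with the webapp package (the route, temp path, and content type are illustrative):

import { WebApp } from 'meteor/webapp';
import fs from 'fs';

WebApp.connectHandlers.use('/download-export', (req, res) => {
  const tmpPath = '/tmp/export-123.csv'; // assumed: resolved from the request
  res.writeHead(200, {
    'Content-Type': 'text/csv',
    'Content-Disposition': 'attachment; filename="export.csv"'
  });
  fs.createReadStream(tmpPath)
    .on('end', () => fs.unlink(tmpPath, (err) => err && console.error(err)))
    .pipe(res);
});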
Fei Yan
@kernelogic
Hi guys, I am trying to achieve a very simple feature: allowing my Meteor application to download a file already saved on the server's disk. I googled a lot and this package seems to serve my purpose, but I feel it's overkill. So my question is: is this package the best fit for my need, or is there anything else out there?
dr.dimitru
@dr-dimitru
@kernelogic it should fit your needs
use .addFile, then fileURL to get a downloadable URL
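Roughly like this (the collection name and file path are illustrative; .link() is the JS counterpart of the fileURL Blaze helper):

// Server: register an existing file on disk with the FilesCollection
Files.addFile('/data/reports/report.pdf', {
  fileName: 'report.pdf',
  type: 'application/pdf'
}, (error, fileRef) => {
  if (error) {
    console.error(error);
  } else {
    // Downloadable URL for this file
    console.log(Files.link(fileRef));
  }
});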
Fei Yan
@kernelogic
Thanks. Also, is there a way to do pagination if I have a large number of files?
dr.dimitru
@dr-dimitru
@kernelogic yes, like with any other Meteor collection
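For example, a paginated publication over the underlying Mongo collection (a sketch; the publication name and page size are assumptions):

// Server
Meteor.publish('files.page', function (page = 0, pageSize = 20) {
  return Files.collection.find({}, {
    sort: { _id: 1 },       // any stable sort will do
    skip: page * pageSize,
    limit: pageSize
  });
});

// Client
Meteor.subscribe('files.page', 2); // third page of 20 files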