dr.dimitru
@dr-dimitru
Let me know if that was helpful :)
Joseph Rousseau
@joerou
@dr-dimitru I can't seem to get it to work inline for large video files using the code above. It seems you are correct, though, that AWS is sending the whole file, and loading that into memory is what is crashing the code. If I have a small video file, it works perfectly.
@tschneid I'll keep you posted here if I find a solution :)
dr.dimitru
@dr-dimitru

@joerou

I can't seem to get it to work inline for large video files using the code above.

Do you get an error? Exception? What exactly doesn't work?

Joseph Rousseau
@joerou
@dr-dimitru no, not specifically; most of the time it just crashes. Last night I got an out-of-memory exception... I didn't have much time, but I started looking into it last night and it appears that the first block (below) is not always getting executed. Is it possible that these headers are not being set properly?
// Inside interceptDownload: honour a partial (Range) request, if one was sent
if (http.request.headers.range) {
  const vRef  = fileRef.versions[version];
  const range = _.clone(http.request.headers.range);
  const array = range.split(/bytes=([0-9]*)-([0-9]*)/);
  const start = parseInt(array[1], 10);
  let end     = parseInt(array[2], 10);
  if (isNaN(end)) {
    // Request data from AWS:S3 in small chunks
    end = (start + this.chunkSize) - 1;
    if (end >= vRef.size) {
      end = vRef.size - 1;
    }
  }
  opts.Range = `bytes=${start}-${end}`;
  http.request.headers.range = `bytes=${start}-${end}`;
}
Joseph Rousseau
@joerou
I got it to work like this using the example off of the AWS site, so I am wondering if maybe it has to do with using the callback instead of hooking into the events?
const s3Stream = s3.getObject(opts).createReadStream();

// Listen for errors returned by the service
s3Stream.on('error', function (err) {
  // NoSuchKey: The specified key does not exist
  console.error(err);
});

s3Stream.pipe(http.response).on('error', function (err) {
  // Capture any errors that occur when writing data to the response
  console.error('File Stream:', err);
}).on('close', function () {
  console.log('Done.');
});
dr.dimitru
@dr-dimitru

@joerou

is it possible that these headers are not being set properly?

Yes, because it's up to the browser to send a partial file request with the Range header. Not all browsers send this header, and not in all cases.

@joerou Feel free to send a PR to our docs once this code is tested
Joseph Rousseau
@joerou

@dr-dimitru Ok. I think the better option would probably be to check if the header is set, and if not, check the size of the file being requested; if it's a large one, then only return the first chunk? I'm not sure whether, once started, the browser will catch on and keep requesting the rest - worth testing though. The problem with the current method is that it just streams directly from start to finish, so whereas with your solution I can "skip" to the middle of the video and start playback, with this solution I cannot (if that makes sense?)

Once I get a nice solution I'll definitely make the PR :)

dr.dimitru
@dr-dimitru
@joerou streaming and downloading by chunks are two different approaches. I'm not sure the browser will request further chunks; I've only tested it with the inline player, which requests the file partially as playback progresses.
@joerou I'd use s3Stream.pipe(http.response) for downloading the whole file and let the HTTP protocol manage progressive download by itself,
and use the Range header for playback (e.g. inline)
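A minimal sketch of how those two snippets fit together inside interceptDownload, assuming the opts object ({ Bucket, Key }) and the Range block posted earlier; this is a hedged example, not the package's official recipe:

// opts.Range was (optionally) set by the Range block above; whole-file
// downloads leave it unset and simply stream start to finish, letting HTTP
// manage progressive download by itself. For ranged playback a complete
// implementation would also send a 206 status and a Content-Range header.
const s3Stream = s3.getObject(opts).createReadStream();

s3Stream.on('error', (err) => {
  console.error(err);
  if (!http.response.headersSent) {
    http.response.statusCode = 500;
  }
  http.response.end();
});

s3Stream.pipe(http.response);
return true; // tell Meteor-Files the download was intercepted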
plvenkatesh92
@plvenkatesh92
I am using the ostrio:files plugin to add, update, or remove an image. I am not able to preview the uploaded image. I am using React.js as a front end. Can anyone help me?
Eric Burel
@eric-burel
Hi guys, I can't find any documentation about controlling access to files so that they are available only to certain users. I am probably not the first to do that; is there any resource I could use?
dr.dimitru
@dr-dimitru
@plvenkatesh92 you have to implement it yourself, showing the image as base64 after the user has selected a file. This is not related to the library, and you can find a lot of examples on the internet
Eric Burel
@eric-burel
Thanks! So to create a sharing system, I guess you would create another collection storing the files' permissions plus the fileId, which you can then query in the "protected" callback
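A minimal sketch of that idea, assuming a hypothetical FilePermissions collection of { fileId, userId } documents (not part of Meteor-Files itself):

import { Mongo } from 'meteor/mongo';
import { FilesCollection } from 'meteor/ostrio:files';

const FilePermissions = new Mongo.Collection('filePermissions'); // hypothetical

const ProtectedFiles = new FilesCollection({
  collectionName: 'protectedFiles',
  protected(fileObj) {
    // Respond with 401 unless the requesting user has a permission record
    if (!this.userId || !fileObj) {
      return false;
    }
    return !!FilePermissions.findOne({ fileId: fileObj._id, userId: this.userId });
  }
});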
Eric Burel
@eric-burel
Also, how is this.userId populated? I can't seem to get it defined
nvm, by using this.request I can retrieve the user, that's nice
Eric Burel
@eric-burel
hmmm, protected can't be an async function? adding async makes the check fail systematically
dr.dimitru
@dr-dimitru

protected can't be an async function? adding async makes the check fail systematically

Use fibers/future
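A rough sketch of that suggestion, where checkAccessAsync is a hypothetical Promise-returning check supplied by your app:

import Future from 'fibers/future';
import { FilesCollection } from 'meteor/ostrio:files';

const Files = new FilesCollection({
  collectionName: 'files',
  protected(fileObj) {
    const future = new Future();
    checkAccessAsync(this.userId, fileObj && fileObj._id) // hypothetical async check
      .then((allowed) => future.return(!!allowed))
      .catch((err) => {
        console.error(err);
        future.return(false);
      });
    // Blocks the current fiber (not the event loop) until the Promise settles,
    // so protected itself stays a synchronous function.
    return future.wait();
  }
});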

kakadais
@kakadais
What a great channel here-. Nice to meet you guys.
kakadais
@kakadais
I just opened a new issue with a suggestion on GitHub. Not an urgent one, but please take a look when you guys are available-
dr.dimitru
@dr-dimitru

Hello @kakadais ,

Thank you for your proposal. We are looking forward to releasing a major version of the Meteor-Files package, with the idea of pluggable adapters for 3rd-party storage, and we will take your suggestions into consideration.

To be honest, upload retries should be managed by the adapter/3rd-party storage implementation

ecarlotti
@ecarlotti
Is there a sample of how to use Meteor-Files for creating temporary files on the server?
ecarlotti
@ecarlotti

what I need to do is:

  1. Create a temporary file;
  2. Write lots of data to it - This includes looping through a collection and exporting data, so I need to be able to "append" data to the temp file;
  3. return the newly generated temp file to the client - It will be immediately downloaded there;
  4. After the download ends, the file has to be removed/destroyed.

From all I've read about this component I think I would be able to do all that with it - is that correct?
Can anyone give me any instructions on how I should proceed in order to do that?

-= Thanks in advance =-

dr.dimitru
@dr-dimitru

Hello @ecarlotti ,

I don't think you need any 3rd-party library at all; everything you described is available in Node.js

This library is more about file management, rather than reading/writing
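A plain Node.js sketch of steps 1, 2, and 4 from the list above (the temp path, exportCursor, and field names are assumptions):

import fs from 'fs';
import os from 'os';
import path from 'path';

const tmpPath = path.join(os.tmpdir(), `export-${Date.now()}.csv`); // 1. create a temp file
const out = fs.createWriteStream(tmpPath);

exportCursor.forEach((doc) => {              // 2. append data while looping the collection
  out.write(`${doc._id},${doc.value}\n`);
});
out.end();

// 4. after the client has downloaded it (step 3), remove the file:
// fs.unlink(tmpPath, (err) => err && console.error(err));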
ecarlotti
@ecarlotti
@dr-dimitru , Thanks a lot for your answer - I'm quickly getting to that conclusion, but it seems using the component makes my life easier in terms of managing the temp files - The component gives me a ready-to-use Meteor download mechanism as well as all the event handlers I would have to otherwise create myself. It seems to be a good way to go...
dr.dimitru
@dr-dimitru
@ecarlotti the download endpoint is a fraction of its codebase. You can achieve the same with middleware created using the webapp module
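A hedged sketch of that webapp approach; the /export route and buildExportFile helper are made-up names for illustration:

import { WebApp } from 'meteor/webapp';
import fs from 'fs';

WebApp.connectHandlers.use('/export', (req, res) => {
  const tmpPath = buildExportFile();         // hypothetical helper that writes the temp file
  res.writeHead(200, {
    'Content-Type': 'text/csv',
    'Content-Disposition': 'attachment; filename="export.csv"'
  });
  fs.createReadStream(tmpPath).pipe(res);
  // Remove the temp file once the response has been fully sent
  res.on('finish', () => fs.unlink(tmpPath, (err) => err && console.error(err)));
});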
Fei Yan
@kernelogic
Hi guys, I am trying to achieve a very simple feature: allow my Meteor application to download a file already saved on the server disk. I googled a lot and this package seems to serve my purpose, but I feel it's overkill. So my question is: is this package the best fit for my need, or is there anything else out there?
dr.dimitru
@dr-dimitru
@kernelogic it should fit your needs
use .addFile, then fileURL to get a downloadable URL
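For example (a sketch only; the path, collection name, and metadata are assumptions, and .link() on the server is the counterpart of the fileURL template helper):

import { FilesCollection } from 'meteor/ostrio:files';

const Downloads = new FilesCollection({ collectionName: 'downloads' });

// Register a file that already exists on the server's disk
Downloads.addFile('/data/reports/report.pdf', {
  fileName: 'report.pdf',
  type: 'application/pdf'
}, (err, fileRef) => {
  if (err) {
    console.error(err);
  } else {
    console.log(Downloads.link(fileRef));    // downloadable URL
  }
});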
Fei Yan
@kernelogic
Thanks. Also is there a way to do pagination if I have a large amount of files?
dr.dimitru
@dr-dimitru
@kernelogic yes, like with any other Meteor collection
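For instance, a paginated publication over the underlying MongoDB collection might look like this (publication name, sort field, and page size are assumptions; Downloads is the collection from the sketch above):

import { Meteor } from 'meteor/meteor';

// Server
Meteor.publish('files.page', function (page = 0, pageSize = 20) {
  return Downloads.collection.find({}, {
    sort: { name: 1 },
    skip: page * pageSize,
    limit: pageSize
  });
});

// Client
Meteor.subscribe('files.page', 2);                         // third page
const files = Downloads.find({}, { sort: { name: 1 } }).fetch();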
kakadais
@kakadais
I've always wondered, are you doing these Veliov things alone? @dr-dimitru
dr.dimitru
@dr-dimitru
@kakadais Nearly 98% of open source — yes
kakadais
@kakadais
Damn good and cheers- ;)
_Saeed_
@Saeed-Bahrami
Hi
How do I get the file link after upload in Meteor-Files?
Hello friends, I am uploading my text editor (Quill Editor) files using React and Meteor-Files.
But I do not know exactly how to get the link after uploading and put it in the editor.
Do I have to subscribe to get the link to that file after upload?
And if so, is there a way to subscribe to that file right away?
I asked here too:
@dr-dimitru?
dr.dimitru
@dr-dimitru
Hello @Saeed-Bahrami I updated the .link() method documentation; hopefully its usage is more straightforward now. Also answered on your veliovgroup/Meteor-Files#769
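A client-side sketch of getting the link right after upload (the Images collection, pickedFile, and insertIntoEditor are assumed names for illustration):

const upload = Images.insert({
  file: pickedFile,
  chunkSize: 'dynamic'
}, false);

upload.on('end', function (error, fileObj) {
  if (error) {
    console.error(error);
  } else {
    // For non-public collections, make sure the file document is published
    // and subscribed to on the client before building the link.
    const url = Images.link(fileObj);
    insertIntoEditor(url);                   // hypothetical Quill helper
  }
});

upload.start();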
Mickael Faivre-Maçon
@micktaiwan
Hello, is there a way to remove a particular version of a file?
dr.dimitru
@dr-dimitru
@micktaiwan yes, https://github.com/veliovgroup/Meteor-Files/blob/master/docs/unlink.md
FilesCollection#unlink(fileObj, 'version')
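A small usage sketch ('thumbnail' is an assumed version name, Images an assumed collection):

const fileDoc = Images.collection.findOne({ _id: fileId });
// Removes only that version's file from storage; other versions and the
// collection record itself are left in place.
Images.unlink(fileDoc, 'thumbnail');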
kakadais
@kakadais

Hello guys-

Meteor-Files is great for integrating a file server as a single instance, so scaling servers is much easier than before, but multi-core support would be good to have from the start.

Have you any idea for clustering for Meteor?

I'm not expecting something at the Node or system level like pm2/Passenger, but a higher-level integration such as kadira:clustering, which seems to be unmaintained.

Any idea?

dr.dimitru
@dr-dimitru
@kakadais this is related to DevOps and is way beyond this package. There are a lot of ways to implement horizontally scaled and decentralized Node.js apps, as well as a lot of ways to do it wrong. I'd recommend looking at DNS balancing, HAProxy, nginx, and Phusion Passenger
kakadais
@kakadais

@dr-dimitru Oh, I think I was misunderstood; that's why I said not system-level things, since I have knowledge of that field and have used those tools.

I recently watched the video below about the cluster package and realized microservices for Meteor were already being considered back in 2015,
and I thought that this model is super great.

https://www.youtube.com/watch?v=oudsAQZkvzQ&feature=youtu.be&t=15m27s

I think the reason people haven't implemented their systems as microservices is that it's annoying. If we handle the whole system level to scale up/out, then configuration and flexibility hold us back on even a single change, so some people use the NetflixOSS stack to build their DevOps efficiently.

But after looking into the NetflixOSS tools, they are still quite complex because of too many options, and most of them are not required for small/medium systems, so we couldn't start on the same footing.

The meteorhacks/cluster package is simple enough, and especially its multi-core support strategy using Node workers is cool and still works well, so I got curious whether there is any movement around this.

I should dig some more into this movement in Meteor or Node- ;)
Such a needed and interesting area-