    Sameroom
    @sameroom-bot
    [Sean Farley, chainer] But since I'm not on that team, I'm not sure which issues would be good ones to start with
    [Sean Farley, chainer] I think the Tokyo team will be online soon, so hopefully someone else will be able to answer!
    Sameroom
    @sameroom-bot

    [Masaki Kozuki, chainer] I think there are a few options: we're currently implementing new features for ChainerX and rewriting the tests of chainer.function & chainer.links :-)
    IMO, rewriting the chainer.function tests is the best way to start contributing, because there are PRs to refer to and it helps you understand the structure of the chainer repository.

    NOTE: I'm not an official member either.

    Sameroom
    @sameroom-bot
    [Sean Farley, chainer] @mishrasalil23 If you're interested in learning the workflow process, the team mentioned that it'd be easiest to contribute to the documentation
    [Sean Farley, chainer] Alternatively, there's a prio:low tag in the issues section
    [Sean Farley, chainer] Also, here's a helpful guide for getting started: https://docs.chainer.org/en/latest/contribution.html
    [Sean Farley, chainer] Hope that helps!
    Sameroom
    @sameroom-bot

    [Seiya Tokui, chainer] As @crcrpar wrote, test refinements and ChainerX routines are good for contribution. These are tracked by issues pinned at the top of the issue list (#6423, #6071, and #6628). Each has a list of tasks (bullet points or a spreadsheet), and each task is separate and can be done without interfering with the others. Already-completed tasks can be used as a reference for how to do the job.

    Documentation is also good. It tends to have a shorter review process, so you can quickly walk through the commit-PR-review-fix-CI-merge cycle.

    [Seiya Tokui, chainer] cat:enhancement issues, which indicate that the fix should not require interface changes, would also be good to try.
    Sameroom
    @sameroom-bot
    [fool, chainer] Thank you all for your suggestions. I will start soon with writing tests and documentation.
    Sameroom
    @sameroom-bot
    [Leow Chee Siang, chainer] Hi, I have a question about Chainer models: does Chainer provide any official guide on how to use a trained model in C++?
    [Shimpei Sawada, chainer] Chainer compiler exists, but it is experimental
    https://github.com/pfnet-research/chainer-compiler
    Sameroom
    @sameroom-bot
    [Albert Kahira, chainer] Why does the compute_mean.py script in the imagenet example give this error: "ValueError: operands could not be broadcast together with shapes (3,250,250) (3,150,200) (3,250,250)"?
    Sameroom
    @sameroom-bot
    [Seiya Tokui, chainer] I guess your dataset contains images of different sizes.
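    A minimal numpy sketch of what's going on (shapes taken from the error message; the crop to a common size is a hypothetical illustration, not what compute_mean.py does — the imagenet example expects all images to already share one size):

```python
import numpy as np

# Two "images" with the mismatched shapes from the error message.
a = np.zeros((3, 250, 250), dtype=np.float32)
b = np.zeros((3, 150, 200), dtype=np.float32)

try:
    mean = (a + b) / 2  # elementwise add requires broadcast-compatible shapes
except ValueError as err:
    print("ValueError:", err)

# One fix: bring every image to a common size (here by cropping) first.
h, w = 150, 200  # hypothetical common size
mean = (a[:, :h, :w] + b) / 2
print(mean.shape)  # (3, 150, 200)
```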
    Sameroom
    @sameroom-bot
    [Leow Chee Siang, chainer] While training my model I use the following converter. My images are 64x64 RGB, so I convert them to (batchsize, channel, height, width) inputs to train the model. The model can be trained and loaded with Chainer serializers, so I think my inputs are correct..
    def convert(batch, device):
        batchData = [X for X, _ in batch]
        batchLabel = [Y for _, Y in batch]

        # NHWC -> NCHW, the axis order Chainer links expect
        data = xp.array(batchData, dtype=xp.float32).transpose([0, 3, 1, 2])

        data = data / 255  # scale pixel values to [0, 1]
        label = xp.array(batchLabel, dtype=xp.int32)

        return (data, label)
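    The transpose in the converter above is the key step: it reorders axes from NHWC (the layout most image loaders produce) to NCHW (the layout Chainer links expect). A minimal numpy stand-in (`xp` in the original is numpy or cupy; the batch shape is assumed from the message):

```python
import numpy as np

# A batch of four 64x64 RGB images in NHWC order.
batch = np.random.randint(0, 256, size=(4, 64, 64, 3)).astype(np.float32)

# Reorder NHWC -> NCHW and scale to [0, 1], as the converter does.
data = batch.transpose([0, 3, 1, 2]) / 255
print(data.shape)  # (4, 3, 64, 64)
```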
    Sameroom
    @sameroom-bot
    [UmashankarTriforce, chainer] Can someone explain what backward or backward_gpu aims to do in chainer functions?
    Sameroom
    @sameroom-bot
    [Leow Chee Siang, chainer] Hi, I have a feature request about the ConnectionistTemporalClassification function. I hope you can add a flag to this function to indicate whether the inputs have already been passed through softmax. It is very confusing that, by default, it always applies softmax to the inputs within the function…
    Sameroom
    @sameroom-bot
    [Cloud Han, chainer] I see on the sprint, @beam2d added "Make code reading slides public". What are the slides about? What makes them take so looooooooooong to be made public? (thinking face emoji) FYI https://github.com/chainer/chainer/projects/2#card-23325377
    Sameroom
    @sameroom-bot
    [Seiya Tokui, chainer] Oh, sorry for the delay. The slides introduce the concept and internals of the ChainerX design/implementation. We will publish them shortly. Thanks for your interest!
    Sameroom
    @sameroom-bot
    [Juan C., chainer] hi! I'm trying to build a network that takes a 4-dimensional numpy array of floats as input. Is there any Dataset implementation that I can use?
    Sameroom
    @sameroom-bot
    [Juan C., chainer] Another question: is it possible to implement an attention mechanism for an LSTM layer? If so, how could I? Currently my layer-building code looks like this: res = ch.Sequential(L.LSTM(100, 100))
    Sameroom
    @sameroom-bot
    [Wang Yu, chainer] Hello, Mr. Masaki Kozuki, I’m sorry to bother you. When I use ‘links.NStepBiGRU’, a ‘segmentation fault’ error always occurs, but there is no problem with ‘NStepBiLSTM’. Do you know what’s wrong here?
    Sameroom
    @sameroom-bot
    [Do Anh Tuan, chainer] hi
    [Do Anh Tuan, chainer] I investigate YOLOv2 and Chainer
    [Do Anh Tuan, chainer] How can I load a custom dataset for training with Chainer?
    Sameroom
    @sameroom-bot
    [Albert Kahira, chainer] What is Chainer's default behavior when feeding data to the input layer? Is a new batch fed as soon as the first batch leaves the input layer, or does it wait until the end of backprop to feed another batch? Is it possible to change this behavior and create some sort of pipeline where a new batch is fed as soon as the previous one leaves the input layer?
    Sameroom
    @sameroom-bot
    [Wang Yu, chainer] Hello, when I train a model with the trainer and without the trainer, I find the performance is different (the former is better). Do you know the reason?
    Sameroom
    @sameroom-bot
    [Kai Liang, chainer] Hello, I'm using ChainerRL for research. For some reason I cannot reproduce the exact same experiment twice. I've used the seed import chainerrl.misc.random_seed and env.seed(). I'm curious about the initialization weights of the agent (whether they are the same). Does anyone know how to print out a ChainerRL agent's weights? Thanks!
    Sameroom
    @sameroom-bot
    [xixi, chainer] Hi, everyone! I met a problem with multiplication. I noticed that '*' in Chainer can perform multiplication, but it didn't work on a Parameter. Any suggestions?
    Sameroom
    @sameroom-bot
    [Albert Kahira, chainer] Has anyone implemented spatial parallelism with chainer?
    Sameroom
    @sameroom-bot
    [Albert Kahira, chainer] Hi, I am trying to do padding in a Conv layer. But instead of the usual padding with zeros, I want to determine the values to pad with (say 0.2578578). Is this possible?
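    One way to sketch the idea with plain numpy (this is an illustration of constant-value padding applied to the input before the convolution, not an official Chainer answer; the pad value is the one from the question):

```python
import numpy as np

# One 3-channel 4x4 feature map in NCHW order.
x = np.ones((1, 3, 4, 4), dtype=np.float32)

# Pad height and width by 1 on each side with a custom value instead of zeros;
# np.pad's constant_values parameter accepts an arbitrary fill value.
padded = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)),
                mode="constant", constant_values=0.2578578)
print(padded.shape)  # (1, 3, 6, 6)
```

The pre-padded array could then be fed to a convolution configured with pad=0, so the layer itself adds no extra zeros.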
    Sameroom
    @sameroom-bot
    [Harshan Baskar, chainer] Hi, since Chainer has moved focus to PyTorch, is there a possibility of Chainer participating in GSoC 2020?
    Sameroom
    @sameroom-bot
    [Parth Dode, chainer] I am curious about the same thing @Harshan, did anyone answer you in personal chat?
    Sameroom
    @sameroom-bot
    [Harshan Baskar, chainer] @Parth Dode not yet
    [Harshan Baskar, chainer] I heard that there was a possibility for GSoC before the move to PyTorch. I have no idea after that
    Sameroom
    @sameroom-bot
    [Emilio, chainer] Unfortunately it's now unlikely that Chainer would participate in GSoC next year.
    However we may do so with CuPy or our other libraries such as Optuna instead.
    Sameroom
    @sameroom-bot

    [Andrew Summers, chainer] I'm having trouble getting the fallback mode to work. Theoretically, I should be able to replace:
    import numpy as np

    with:
    from cupyx.fallback_mode import numpy as np

    This should work, right? I have a project that uses intersect1d, and it fails on that (which requires me to manually convert from cupy to numpy . . . which is the whole point of the fallback mode). Am I doing something wrong?

    Sameroom
    @sameroom-bot
    [Mark Turner, chainer] Hello! Hope I'm not piling on with the questions: Is it possible to use the ParallelUpdater with Sequential chains that contain functions? E.g. Sequential(L.Linear(None, 50), F.relu). I'm getting a KeyError: 'b' in line 606, in addgrad: dst[name].addgrad(src[name]). I think this comes from when it iterates over children and indices, as self._children of the Sequential doesn't contain the ReLU functions, but the index of the Sequential link still does. This then means it looks for the weight and bias values in F.relu.__dict__, which is just empty. (There's probably a bug somewhere else in my code that actually causes this, but I want to make sure.)
    Sameroom
    @sameroom-bot
    [Harshan Baskar, chainer] Hi! I have a question related to CuPy's memory management. Is it intended for CuPy to retain GPU memory even after a CuPy object is deleted (i.e. the GPU memory is released only when the Python process is killed)?
    Sameroom
    @sameroom-bot
    [Harshan Baskar, chainer] Say I declare an array and delete it. Now, when I declare another array of the same size, I see that the GPU memory usage doesn't increase. This, I assume, is because the deleted array's memory is reused for the new array. However, in a case where the array is deleted and there is no active CuPy object using that memory, I see that the memory is still flagged as 'in-use' (using nvidia-smi). Another parallel process which uses the GPU cannot use that memory. Is this intended?
    Sameroom
    @sameroom-bot
    [Mrinal, chainer] Hello everyone
    [Mrinal, chainer] I just got to know about chained
    [Mrinal, chainer] Chainer*
    [Mrinal, chainer] Are there any video tutorials which could teach about this technology?