[Masaki Kozuki, chainer] I think there are a few options; implementing new features for ChainerX and rewriting the chainer.functions tests are the best places to start contributing, because there are existing PRs to refer to, and the work helps you understand the structure of the chainer repository.
NOTE: I'm not an official member either.
prio:low tag in the issues section
[Seiya Tokui, chainer] As @crcrpar wrote, test refinements and ChainerX routines are good for contribution. These are tracked by issues pinned at the top of the issue list (#6423, #6071, and #6628). Each has a list of tasks (bullet points or a spreadsheet), and each task is separate and can be done without interfering with the others. Already-completed tasks can be used as a reference for how to do the job.
Documentation is also good. It tends to have a shorter review process, so you can quickly walk through the commit-PR-review-fix-CI-merge cycle.
cat:enhancement issues, which indicate that the fix should not require interface changes, would also be good to try.
def convert(batch, device):
    # Split the (image, label) pairs; `xp` is numpy or cupy depending on the device.
    batch_data = [x for x, _ in batch]
    batch_label = [y for _, y in batch]
    # NHWC -> NCHW, then scale pixel values to [0, 1].
    data = xp.array(batch_data, dtype=xp.float32).transpose([0, 3, 1, 2])
    data = data / 255
    label = xp.array(batch_label, dtype=xp.int32)
    return (data, label)
env.seed(). I'm curious about the initialization weights of the agent (whether they are the same). Does anyone know how to print out a ChainerRL agent's weights? Thanks!
[Andrew Summers, chainer] I'm having trouble getting the fallback mode to work. Theoretically, I should be able to replace:
import numpy as np
with:
from cupyx.fallback_mode import numpy as np
This should work, right? I have a project that uses
intersect1d, and it fails on that call (which forces me to manually convert from cupy to numpy... avoiding exactly that is the whole point of fallback mode). Am I doing something wrong?
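For context, the dispatch idea behind a fallback namespace can be sketched in plain Python: attribute lookups try a primary (GPU) namespace first and fall back to a secondary (CPU) one when the name is missing. The `_Primary` and `_Fallback` classes below are hypothetical stand-ins for cupy and numpy, not the real libraries; this is only a sketch of the mechanism, not cupyx's actual implementation.

```python
class _Primary:
    """Stand-in for the cupy namespace: implements only `add`."""
    @staticmethod
    def add(a, b):
        return a + b


class _Fallback:
    """Stand-in for the numpy namespace: also implements `intersect1d`."""
    @staticmethod
    def add(a, b):
        return a + b

    @staticmethod
    def intersect1d(a, b):
        return sorted(set(a) & set(b))


class FallbackNamespace:
    """Serve attributes from the primary namespace; fall back otherwise."""
    def __init__(self, primary, fallback):
        self._primary = primary
        self._fallback = fallback

    def __getattr__(self, name):
        if hasattr(self._primary, name):
            return getattr(self._primary, name)
        return getattr(self._fallback, name)


np_like = FallbackNamespace(_Primary, _Fallback)
print(np_like.add(1, 2))                            # served by the primary
print(np_like.intersect1d([1, 2, 3], [2, 3, 4]))    # falls back to the secondary
```

If `intersect1d` still errors out under the real fallback mode, the dispatch may be reaching cupy but failing inside it, rather than falling back, which would match the behavior described above.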
[Mark Turner, chainer] Hello! Hope I'm not piling on with the questions: Is it possible to use ParallelUpdater with Sequential chains that contain functions? E.g.
Sequential(L.Linear(None, 50), F.relu)
I'm getting an error:
line 606, in addgrad: dst[name].addgrad(src[name]). I think this comes from when it iterates over children and indices:
self._children of the Sequential doesn't contain the ReLU functions, but the index of the Sequential link still does. It then looks for the weight and bias values in
F.relu.__dict__, which is empty. (There is probably a bug somewhere else in my code that actually causes this, but I want to make sure.)
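The mismatch described above can be reproduced with a minimal mock (this is a hypothetical reconstruction, not Chainer's actual addgrad code): the parent iterates over every entry of the Sequential, but a bare function like F.relu carries no parameters, so a per-name lookup in its __dict__ fails.

```python
class Param:
    """Stand-in for a chainer.Parameter with a gradient."""
    def __init__(self, value):
        self.grad = value

    def addgrad(self, other):
        self.grad += other.grad


class LinearLike:
    """Stand-in for L.Linear: holds named parameters in __dict__."""
    def __init__(self):
        self.W = Param(1.0)
        self.b = Param(0.5)


def relu_like(x):
    """Stand-in for F.relu: a bare function with no parameters."""
    return max(x, 0)


dst_layers = [LinearLike(), relu_like]
src_layers = [LinearLike(), relu_like]

# Iterating over *all* entries works for the linear layer but raises
# KeyError for the function, since a function's __dict__ has no parameters.
for dst, src in zip(dst_layers, src_layers):
    for name in ("W", "b"):
        try:
            dst.__dict__[name].addgrad(src.__dict__[name])
        except KeyError:
            print(f"no parameter {name!r} on {dst}")
```

This matches the symptom in the traceback: the accumulation succeeds for the Linear link and blows up when the iteration reaches the parameterless function entry.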
nvidia-smi). Another parallel process that uses the GPU cannot use that memory. Is this intended?