    Shunta Saito
    @mitmul
    -sameroom portal
    Sameroom
    @sameroom-bot
    <Sameroom> Your Portal URL is https://sameroom.io/xZLaqHjL -- you can send the URL to someone on a different team to share this room. Note: you can connect more than two teams this way.
    <Sameroom> I've connected 1 new room #vision (chainer) on Slack. See map
    Sameroom
    @sameroom-bot
    [Satoshi Tsutsui, chainer] I wonder if there's a tutorial on how to use ChainerCV for training an object detector on our own dataset. I need to compare multiple object detectors (YOLO, SSD, Faster R-CNN) on my own dataset. I don't want to use a different public implementation for each detector, so I wonder if ChainerCV provides a unified framework.
    [Tommi Kerola, chainer] https://github.com/chainer/chainercv/blob/master/examples/detection/visualize_models.py
    The above code compares different detectors.
    You can modify VOCBboxDataset to match your own dataset. It just needs to return the image and bboxes.
    [Tommi Kerola, chainer] (Not sure about any tutorial though, but at least example code.)
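Tommi's suggestion above (a dataset only needs to return the image and bboxes) can be sketched without the chainercv dependency. This is a minimal stand-in following the `(img, bbox, label)` convention that `VOCBboxDataset` uses; the class name and in-memory annotation layout are hypothetical, and in a real script the class would subclass `chainer.dataset.DatasetMixin`:

```python
import numpy as np

class MyBboxDataset:
    """Minimal bbox-dataset sketch (hypothetical): each example is
    (image, bounding boxes, class labels), mirroring VOCBboxDataset."""

    def __init__(self, annotations):
        # annotations: list of (image_array, list_of_boxes, list_of_labels)
        self.annotations = annotations

    def __len__(self):
        return len(self.annotations)

    def get_example(self, i):
        img, boxes, labels = self.annotations[i]
        # ChainerCV expects float32 CHW images, an (R, 4) float32 bbox array
        # in (y_min, x_min, y_max, x_max) order, and int32 labels.
        img = np.asarray(img, dtype=np.float32)
        bbox = np.asarray(boxes, dtype=np.float32).reshape(-1, 4)
        label = np.asarray(labels, dtype=np.int32)
        return img, bbox, label
```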
    Sameroom
    @sameroom-bot
    [Tommi Kerola, chainer] And you should then be able to train on your own custom dataset using this example code, replacing SSD with any other detector: https://github.com/chainer/chainercv/tree/master/examples/ssd
    [Satoshi Tsutsui, chainer] Oh this is nice. Thanks Tommi!
    Sameroom
    @sameroom-bot
    [Satoshi Tsutsui, chainer] Okay, I spent half a day just getting Faster R-CNN training started on my own dataset.
    [Satoshi Tsutsui, chainer] It seems to be working, but I won't be sure until the results come in.
    Sameroom
    @sameroom-bot

    [Satoshi Tsutsui, chainer] I had to read some source code just to apply an off-the-shelf model to my problem, which might indicate a lack of documentation. Here are the points I got stuck on:
    Prepare Dataset: VOCBboxDataset is not designed for general object detection datasets. We probably need a general class that works for any kind of bounding-box-based detection dataset.
    Evaluate: There are some tricks you need to keep in mind when you evaluate. By default, it estimates a lower mAP. See chainer/chainercv#624
    Nice Tutorial: I strongly agree with this issue: chainer/chainercv#391 It would be nice to have a tutorial on how to train our own object detector, not just on PASCAL VOC.

    Also, similar to the image classification example, it would be nice to have a single training script that can switch between multiple models via a command-line argument, which is what I want to do. I'll open an issue for this.
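The model-switching script described above usually comes down to an `argparse` choice mapped onto model constructors. A minimal sketch; the registry entries below are placeholders standing in for the real ChainerCV classes (SSD300, FasterRCNNVGG16, etc.), which this snippet does not import:

```python
import argparse

# Hypothetical registry: in a real training script the values would be the
# ChainerCV model classes themselves, called with the dataset's class count.
MODELS = {
    'ssd300': lambda n_class: ('SSD300', n_class),
    'faster_rcnn': lambda n_class: ('FasterRCNNVGG16', n_class),
}

def build_model(argv=None):
    """Parse --model/--n-class and build the selected detector."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', choices=sorted(MODELS), default='ssd300')
    parser.add_argument('--n-class', type=int, default=20)
    args = parser.parse_args(argv)
    return MODELS[args.model](args.n_class)
```

The same parsed arguments can then feed a shared training loop, so only the registry changes when a new detector is added.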

    Sameroom
    @sameroom-bot
    [Wonjoon Goo, chainer] Does Chainer have a map_fn-like function that can parallelize a function call over the first dimension of the data?
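As far as I know Chainer has no built-in `map_fn`; most Chainer functions already operate over the batch axis, and when they don't, the usual pattern is to apply the function per slice and stack the results (with `chainer.functions.stack` on Variables). The same pattern in plain NumPy terms, assuming `fn` returns arrays of a common shape:

```python
import numpy as np

def map_over_first_axis(fn, xs):
    # map_fn-style helper sketch: apply fn to each slice along axis 0
    # and stack the results back into one array. Note this is a Python
    # loop, not a parallelized primitive.
    return np.stack([fn(x) for x in xs], axis=0)
```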
    Sameroom
    @sameroom-bot
    [Daniel Angelov, chainer] Hi all,
    In regards to the above comments about VOCBboxDataset: there really needs to be a tiny exit (remove the check for difficulty, or default to easy if none is specified), which would in turn allow loading any dataset as long as it follows that style. Works like a charm for me! (I can submit a PR with the changes if anyone is interested.)
    My question is about training bbox models. I've loaded my own data, trained on it, and am in general satisfied with the performance of the model. However, I want to be able to provide external hard-negative data. I am aware that SSD, Faster R-CNN, etc. use the out-of-bbox regions as hard negatives within the training set. Is there a way to provide the model with data that contains zero detections (i.e. data that the model usually confuses with one of the classes)?
    I see several possible ways to tackle this:
    1. Dig into the code and see where it breaks when an image has zero detections (in order to generate a loss towards not detecting any bbox); on an initial quick inspection this doesn't look trivial.
    2. Add an extra dummy class that is a random single pixel across my dataset, allowing this dummy class to be present in my negative data for the true classes I am interested in. A drawback would be that I need to change the weight of this class (as it would be a random pixel) with respect to the other classes. I haven't noticed this functionality in Chainer; Keras has it (https://datascience.stackexchange.com/questions/13490/how-to-set-class-weights-for-imbalanced-classes-in-keras) rather than sampling the classes in an imbalanced way.
    3. Any other suggestions?
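On option 2, the class weighting Keras offers amounts to scaling each example's loss by the weight of its true class. If I recall correctly, Chainer's `F.softmax_cross_entropy` accepts a `class_weight` argument for exactly this; here is what that weighting computes, sketched in plain NumPy rather than the Chainer API:

```python
import numpy as np

def weighted_softmax_cross_entropy(logits, labels, class_weight):
    # logits: (N, C), labels: (N,), class_weight: (C,)
    # Per-example negative log-likelihood, scaled by the weight of the
    # example's true class, then averaged by total weight (sketch only).
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_p[np.arange(len(labels)), labels]
    w = class_weight[labels]
    return (w * nll).sum() / w.sum()
```

With uniform weights this reduces to the ordinary mean cross-entropy; raising the weight of a rare class increases its contribution to the gradient without resampling the data.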
    safak17
    @safak17
    Hello!
    safak17
    @safak17
    trainer.run() raises:
    TypeError: list indices must be integers or slices, not NoneType

    (3, 336, 596) float32   [the line above is printed 17 times]
    Exception in main training loop: list indices must be integers or slices, not NoneType
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py", line 316, in run
        update()
      File "/usr/local/lib/python3.6/dist-packages/chainer/training/updaters/standard_updater.py", line 175, in update
        self.update_core()
      File "/usr/local/lib/python3.6/dist-packages/chainer/training/updaters/standard_updater.py", line 181, in update_core
        in_arrays = convert._call_converter(self.converter, batch, self.device)
      File "/usr/local/lib/python3.6/dist-packages/chainer/dataset/convert.py", line 73, in _call_converter
        return converter(batch, device)
      File "/usr/local/lib/python3.6/dist-packages/chainer/dataset/convert.py", line 58, in wrap_call
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/chainer/dataset/convert.py", line 223, in concat_examples
        [example[i] for example in batch], padding[i])))
      File "/usr/local/lib/python3.6/dist-packages/chainer/dataset/convert.py", line 254, in _concat_arrays
        [array[None] for array in arrays])
      File "/usr/local/lib/python3.6/dist-packages/chainer/dataset/convert.py", line 254, in <listcomp>
        [array[None] for array in arrays])
    Will finalize trainer extensions and updater before reraising the exception.
    (3, 336, 596) float32   [the line above is printed 15 times]

    TypeError                                 Traceback (most recent call last)
    <ipython-input-55-041e2033e90a> in <module>()
    ----> 1 trainer.run()

    9 frames
    /usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
        347                     f.write('Traceback (most recent call last):\n')
        348                     traceback.print_tb(sys.exc_info()[2])
    --> 349                     six.reraise(*excinfo)
        350         finally:
        351             for _, entry in extensions:

    /usr/local/lib/python3.6/dist-packages/six.py in reraise(tp, value, tb)
        691             if value.__traceback__ is not tb:
        692                 raise value.with_traceback(tb)
    --> 693             raise value
        694         finally:
        695             value = None

    /usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
        314                 self.observation = {}
        315                 with reporter.scope(self.observation):
    --> 316                     update()
        317                 for name, entry in extensions:
        318                     if entry.trigger(self):

    /usr/local/lib/python3.6/dist-packages/chainer/training/updaters/standard_updater.py in update(self)
        173
        174         """
    --> 175         self.update_core()
        176         self.iteration += 1
        177

    /usr/local/lib/python3.6/dist-packages/chainer/training/updaters/standard_updater.py in update_core(self)
        179         iterator = self._iterators['main']
        180         batch = iterator.next()
    --> 181         in_arrays = convert._call_converter(self.converter, batch, self.device)
        182
        183         optimizer = self._optimizers['main']

    /usr/local/lib/python3.6/dist-packages/chainer/dataset/convert.py in _call_converter(converter, batch, device)
         71     if getattr(converter, '__is_decorated_converter', False):
         72         # New-style converter
    ---> 73         return
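For what it's worth, the failing line `[array[None] for array in arrays]` only works when each `array` is an ndarray (where `[None]` adds a batch axis); if a dataset example contains a plain Python list, `list[None]` raises exactly this TypeError. A likely cause here (assumption: some field of the example, e.g. the labels, was never converted to an array). A small sketch of the symptom and the usual fix:

```python
import numpy as np

# An ndarray field works: indexing with None adds a leading batch axis.
good = np.zeros((3, 336, 596), dtype=np.float32)
assert good[None].shape == (1, 3, 336, 596)

# A plain-list field (hypothetical, e.g. labels returned as a list)
# reproduces the trainer's error inside concat_examples.
bad = [1, 2, 3]
try:
    bad[None]
except TypeError as e:
    print(e)  # list indices must be integers or slices, not NoneType

# Fix: make every field of an example an ndarray before returning it.
fixed = np.asarray(bad, dtype=np.int32)
assert fixed[None].shape == (1, 3)
```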