1- The main reason is that we are not a "high-level framework API", and trying to add all the features that both Ignite and Lightning provide is futile. So we had to choose one, and one of our collaborators uses PL. I will investigate what is required to run on Ignite; if it is not too much, I will make a tutorial or extract some methods to make it easy to use. (I guess that BaaL can easily be used in both cases.)
2- Yes! We will make the code available as soon as we publish our paper on BaaL. We are also working on a weakly-supervised segmentation paper, which will be released as soon as it is published.
If you have any questions on semantic segmentation, feel free to ask here! We would be happy to help you :)
We can probably provide some scripts with PascalVOC/COCO. When doing satellite imagery, we have many optimizations available to make this easier, like storing predictions in float16, predicting on a portion of the pool, etc. We will make a guide for these memory-hungry use cases.
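To give an idea of the "predict on a portion of the pool" trick: the sketch below is purely illustrative (the helper name `sample_pool_indices` is hypothetical, not part of BaaL). It draws the fraction of the unlabelled pool that will actually be scored, which, combined with float16 predictions, keeps memory manageable.

```python
import random

def sample_pool_indices(pool_size, fraction, seed=None):
    # Hypothetical helper: pick a random fraction of the pool to predict on.
    rng = random.Random(seed)
    n_keep = max(1, int(pool_size * fraction))
    # Sample without replacement so each pool item is scored at most once.
    return sorted(rng.sample(range(pool_size), n_keep))
```

For example, with a pool of 100k images and `fraction=0.1`, only 10k images go through MC inference at each active step.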
Right now, the state-of-the-art for segmentation is to simply compute BALD or BatchBALD per pixel and take the mean per image. I know there are some techniques that create "regions" or superpixels and compute the maximum of each region before taking the mean. We have not tried these methods yet.
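For the curious, the per-pixel scheme can be sketched as follows. This is a simplified pure-Python illustration (function names are mine, not BaaL's): BALD is the mutual information, i.e. the entropy of the mean predictive distribution minus the mean of the per-sample entropies, computed per pixel and then averaged over the image.

```python
import math

def entropy(p):
    # Shannon entropy of a discrete distribution, in nats.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def bald_image_score(mc_probs):
    # mc_probs[s][px] is the class distribution for pixel `px` under MC
    # sample `s`: shape [n_mc_samples][n_pixels][n_classes].
    n_mc = len(mc_probs)
    n_px = len(mc_probs[0])
    per_pixel = []
    for px in range(n_px):
        dists = [mc_probs[s][px] for s in range(n_mc)]
        mean = [sum(d[c] for d in dists) / n_mc for c in range(len(dists[0]))]
        # BALD = H[mean prediction] - E[H[prediction]]
        per_pixel.append(entropy(mean) - sum(entropy(d) for d in dists) / n_mc)
    # Mean over pixels gives the image-level score.
    return sum(per_pixel) / n_px
```

Pixels where the MC samples disagree get a high score; pixels where every sample returns the same distribution score zero.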
Sounds good!
Right now, the state-of-the-art for Seg is to simply compute BALD or BatchBALD per pixel and take the mean per image.
Yes, I had a chat with one of the authors of BatchBALD and it seems he wasn't too optimistic about how much there is to gain from active learning on image segmentation tasks... Anyway, I'm looking forward to testing your implementation :)
And if I can provide some help with a tutorial of BaaL with Ignite, do not hesitate to ask me.
I think I'm done, but I would need your help to review it.
Basically, I'm making a new engine that performs Monte-Carlo sampling; then, when we are done with training, we perform the active step (prediction, heuristic, labelling).
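To make the flow concrete, here is a toy sketch of that loop. All names here (`mc_predict`, `active_step`, the variance heuristic) are illustrative stand-ins, not the BaaL or Ignite API: after training, we run MC sampling on the pool, rank items with a heuristic, and move the top-scoring ones to the labelled set.

```python
import random

def mc_predict(x, iterations, rng):
    # Stand-in for MC-Dropout inference: several stochastic predictions.
    return [x + rng.gauss(0.0, 1.0) for _ in range(iterations)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

def active_step(pool, labelled, query_size, iterations=20, seed=0):
    rng = random.Random(seed)
    # 1) Prediction: MC sampling over the pool.
    # 2) Heuristic: rank pool items by predictive variance.
    scores = [variance(mc_predict(x, iterations, rng)) for x in pool]
    ranked = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)
    # 3) Labelling: move the most uncertain items out of the pool.
    for i in sorted(ranked[:query_size], reverse=True):
        labelled.append(pool.pop(i))
    return pool, labelled
```

The real engine would of course call the trained model and a BaaL heuristic instead of these toys, but the train / MC-predict / rank / label sequence is the same.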
What do you think?
Oh, my bad, I took the example from Ignite without thinking.
You're right about mc_inference: it is in the pytorch_lightning branch, but it should be merged quickly.
If we can extract all the utils necessary for both frameworks and make proper tutorials for both, I think this can be very valuable!
I updated the gist with what is lacking. But they are true for PL as well.
Sorry for the late answer, I'm really busy with our NeurIPS submission.
One metric I think would be useful to PyTorch Ignite is ECE, the Expected Calibration Error.
We do have an implementation here:
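For context, here is a minimal self-contained sketch of ECE (this is not the linked implementation, just an illustration): bin samples by confidence, then sum the per-bin gaps between average confidence and empirical accuracy, weighted by bin size.

```python
def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    # Assign each sample to a confidence bin in [0, 1].
    bins = [[] for _ in range(n_bins)]
    for conf, pred, label in zip(confidences, predictions, labels):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, pred == label))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, hit in b if hit) / len(b)
        # Weighted gap between confidence and accuracy in this bin.
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece
```

A perfectly calibrated model (confidence matches accuracy in every bin) gets ECE 0; an overconfident one gets a large positive value.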
I'm trying to build a FAQ. Here is something that can be useful to both of you.
model = YourModel()
dataset = YourDataset()
wrapper = ModelWrapper(model, criterion=None)
heuristic = BALD()
uncertainty = heuristic.get_uncertainties(
    wrapper.predict_on_dataset(dataset, batch_size=32, iterations=20, use_cuda=True)
)
If memory is an issue, you can use both
Hello, we have an issue with the recently merged PyTorch Lightning API, in particular with the partial uncertainty sampling (with the
I would like to get the input of the community.
In PL, it is the responsibility of the user to make the DataLoaders. To stay in line with this design, we let the user define pool_loader, which defines the pool DataLoader. If the user wants to do partial uncertainty sampling, they can make a Subset of the pool there. The issue is to recover this set of indices.
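To illustrate the index-recovery problem: when the heuristic ranks items by their position within the Subset, those positions must be translated back through the subset's index list (torch.utils.data.Subset keeps this as its `indices` attribute) before labelling the original pool. A minimal pure-Python sketch, with a hypothetical helper name:

```python
def map_to_pool(ranked_positions, subset_indices):
    # `ranked_positions` are relative to the Subset the user built;
    # translate them to absolute indices into the full unlabelled pool.
    return [subset_indices[pos] for pos in ranked_positions]
```

E.g. if the subset was built from pool indices [10, 42, 7] and the heuristic ranks positions [2, 0] highest, the items to label are pool indices [7, 10]. The hard part is that the API only sees the DataLoader, not the indices the user chose.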
I'm open to any other suggestions! :)