    Soodabeh Ghaffari
    @Hajar66_gitlab
    [four screenshots attached, 2022-09-20]
    Soodabeh Ghaffari
    @Hajar66_gitlab
    Do I need to install any packages? Please note that the compute nodes on our cluster do not have access to the internet.
    Bharath Ramsundar
    @rbharath
    Can you install through pip instead of conda?
    Most of our testing/install is done through pip these days so that may help
    Actually, on second look, I think this particular featurizer is failing because of lack of internet access. It looks like the GCN featurizer requires internet access. We should try to make a version of DeepChem that is stable without internet access. Could you raise an issue on GitHub?
    Soodabeh Ghaffari
    @Hajar66_gitlab
    @rbharath Thank you! I tried installing deepchem via pip but the issue persists.
    I will raise an issue on GitHub.
    Soodabeh Ghaffari
    @Hajar66_gitlab
    @rbharath I was wondering how often developers respond to issues on the DeepChem GitHub.
    Bharath Ramsundar
    @rbharath
    We try to respond as often as we can. Your issue in particular is complex, since it will require some testing to get a non-internet stable version running. I'll try to get non-internet install stability in as a feature for either this upcoming major release or the one following.
    Yoochan Myung
    @YoochanMyung

    Hi there,
    I got stuck using MPNNModel with MolGraphConvFeaturizer, which is listed as an acceptable featurizer for MPNNModel. If I call model.fit(), I get AttributeError: 'GraphData' object has no attribute 'get_num_atoms'. I used the following code:

    import deepchem as dc

    # input_pd is a pandas DataFrame with 'smiles' and 'property' columns
    featurizer = dc.feat.MolGraphConvFeaturizer()
    _feature = featurizer.featurize(input_pd['smiles'].to_list())

    train_dataset = dc.data.NumpyDataset(X=_feature, y=input_pd['property'].to_list())
    model = dc.models.MPNNModel(n_tasks=1, mode='regression')
    model.fit(train_dataset, nb_epoch=50)

    And the error is as below:

       1174 start = 0
       1175 for im, mol in enumerate(X_b):
    -> 1176   n_atoms = mol.get_num_atoms()
       1177   # number of atoms in each molecule
       1178   atom_split.extend([im] * n_atoms)
    
    AttributeError: 'GraphData' object has no attribute 'get_num_atoms'
    Bharath Ramsundar
    @rbharath
    Ah, I think that's a bug. The new PyTorch MPNNModel uses MolGraphConvFeaturizer, but the default TF one uses WeaveFeaturizer: https://github.com/deepchem/deepchem/blob/master/deepchem/models/tests/test_graph_models.py#L258
    See this test for a usage example.
    You can get the PyTorch one from `deepchem.models.torch_models.MPNNModel`,
    and it will work with MolGraphConvFeaturizer.
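    A minimal sketch of that suggestion, assuming input_pd is a pandas DataFrame with 'smiles' and 'property' columns as in the snippet above (use_edges=True is an assumption here, since MPNN-style message passing generally expects bond features):

    import deepchem as dc
    from deepchem.models.torch_models import MPNNModel  # PyTorch implementation

    # use_edges=True adds bond features to each GraphData object
    featurizer = dc.feat.MolGraphConvFeaturizer(use_edges=True)
    features = featurizer.featurize(input_pd['smiles'].to_list())

    train_dataset = dc.data.NumpyDataset(X=features, y=input_pd['property'].to_list())
    model = MPNNModel(n_tasks=1, mode='regression')
    model.fit(train_dataset, nb_epoch=50)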
    Valentina Frenkel
    @valentina04belen_gitlab
    Hi! I recently discovered deepchem <3 I aim to use the protein-ligand interaction tools.
    I'm trying to run the Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb notebook, but when running acm.fit(train, nb_epoch=max_epochs, max_checkpoints_to_keep=1, callbacks=[val_cb]) I get ValueError: could not broadcast input array from shape (154,) into shape (1,) (I tested on Colab and in a local container; same error in both). Has the size of the imported data changed?
    vinay-hebb
    @vinay-hebb

    I am planning to contribute to deepchem for the first time. I am trying to port LSTMStep to PyTorch as a first step. Pre-commit hooks like flake8 are failing due to changes which I haven't made (as below). Is there a general procedure for such scenarios? Should I fix these checks, or is there a way to ignore them and proceed with the commit and push?

    deepchem/models/tests/test_layers.py:8:3: F401 'tensorflow.python.framework.test_util' imported but unused
    deepchem/models/tests/test_layers.py:124:3: F841 local variable 'out_channels' is assigned to but never used
    deepchem/models/tests/test_layers.py:169:3: F841 local variable 'out_channels' is assigned to but never used
    deepchem/models/tests/test_layers.py:211:3: F401 'tensorflow as tf' imported but unused
    deepchem/models/tests/test_layers.py:213:3: F841 local variable 'out_channels' is assigned to but never used
    deepchem/models/tests/test_layers.py:232:3: E265 block comment should start with '# '
    deepchem/models/tests/test_layers.py:239:3: E265 block comment should start with '# '
    deepchem/models/tests/test_layers.py:290:3: F841 local variable 'n_atoms' is assigned to but never used
    deepchem/models/tests/test_layers.py:310:3: F841 local variable 'max_depth' is assigned to but never used
    deepchem/models/tests/test_layers.py:501:3: F841 local variable 'result' is assigned to but never used
    deepchem/models/tests/test_layers.py:574:3: E265 block comment should start with '# '
    deepchem/models/tests/test_layers.py:579:3: F841 local variable 'outputs' is assigned to but never used
    deepchem/models/tests/test_layers.py:585:7: E265 block comment should start with '# '
    deepchem/models/tests/test_layers.py:587:3: E266 too many leading '#' for block comment
    deepchem/models/tests/test_layers.py:588:3: E266 too many leading '#' for block comment
    deepchem/models/tests/test_layers.py:608:3: F841 local variable 'outputs' is assigned to but never used

    Bharath Ramsundar
    @rbharath
    I won't be able to make tomorrow's 9am PST OH
    CHANG-Shaole
    @CHANG-Shaole
    Hi, thanks for such a great Python package for computational chemistry.
    I want to ask some questions about how to use it better.
    I used a graph model created by the deepchem.models.GraphConvModel class, and I set the parameters n_tasks=1, dropout=0.5, optimizer=dc.models.optimizers.Adam(learning_rate=0.001), batch_size=128, graph_conv_layers=[32, 32], dense_layer_size=64, number_atom_features=75, mode="regression" to train the model. The first question is how I can visualize the model to see what the layers' structure actually is, similar to a PyTorch model summary.
    After training the first model, I also want to train it on another dataset using transfer learning. So the second question is how I can change the model's structure, for example how to add another dense layer at the end of the model. The last question is how to freeze the weights of certain layers; when using transfer learning, we sometimes need to freeze some model weights.
    Any answers would be appreciated.
    Thanks in advance!
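    For reference, a minimal sketch of the setup described above, assuming `dataset` is an already-featurized dc.data.Dataset (the parameter values are the ones listed in the message):

    import deepchem as dc

    model = dc.models.GraphConvModel(
        n_tasks=1,
        graph_conv_layers=[32, 32],
        dense_layer_size=64,
        dropout=0.5,
        number_atom_features=75,
        batch_size=128,
        mode="regression",
        optimizer=dc.models.optimizers.Adam(learning_rate=0.001))
    model.fit(dataset, nb_epoch=100)  # epoch count chosen arbitrarily for illustration

    # Once the model has been built (e.g. after fit), the underlying
    # tf.keras.Model is available as model.model, and model.model.summary()
    # may help inspect the layer structure.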
    Bharath Ramsundar
    @rbharath
    @CHANG-Shaole Come by the office hours: https://forum.deepchem.io/t/announcing-the-deepchem-office-hours/293, 9am PST daily on weekdays
    Layla Hosseini-Gerami
    @laylagerami

    Thanks for the wonderful resource. I'm having a few issues with scikit-learn models. Most of them work fine, but I get the following error only with GaussianProcessClassifier, KNeighborsClassifier, MLPClassifier and QuadraticDiscriminantAnalysis (RandomForestClassifier, for example, works fine and gives the expected output).

    TypeError Traceback (most recent call last)
    /tmp/ipykernel_29886/3380587234.py in <cell line: 2>()
    
      1 model = dc.models.SklearnModel(GaussianProcessClassifier())
    ----> 2 model.fit(train_dataset)
    3 model.predict(test_dataset)
    
    ~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/deepchem/models/sklearn_models/sklearn_model.py in fit(self, dataset)
        109   # Some scikit-learn models don't use weights.
        110   if self.use_weights:
    --> 111     self.model.fit(X, y, w)
        112     return
        113   self.model.fit(X, y)
    
    TypeError: fit() takes 3 positional arguments but 4 were given

    Any ideas what's going wrong here?
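    A possible explanation, sketched below as an illustration rather than a confirmed diagnosis: the traceback shows SklearnModel passing the weight array as a third positional argument to fit(), and the failing classifiers are exactly the ones whose scikit-learn fit() does not accept a sample_weight argument:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.gaussian_process import GaussianProcessClassifier

    # toy data standing in for the featurized dataset
    X = np.random.rand(10, 4)
    y = np.array([0, 1] * 5)
    w = np.ones(10)

    # RandomForestClassifier.fit accepts a positional sample_weight,
    # so the extra weight argument is harmless:
    RandomForestClassifier().fit(X, y, w)

    # GaussianProcessClassifier.fit only takes (X, y), so the same call
    # reproduces the TypeError above:
    GaussianProcessClassifier().fit(X, y, w)  # TypeError: fit() takes 3 positional arguments but 4 were given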

    Bharath Ramsundar
    @rbharath
    Running a few minutes late for today's OH
    Bharath Ramsundar
    @rbharath
    @laylagerami Hmm, this might be a bug. Would you mind raising an issue on GitHub?
    Bharath Ramsundar
    @rbharath
    I won't be at OH today (10/20) due to a one-time scheduling conflict. I will be back as usual tomorrow
    Bharath Ramsundar
    @rbharath
    I won't be at OH today due to a scheduling conflict. Will be back on Monday as usual
    Fernando Schimidt
    @Schimidt99
    Hello guys!
    I would like to use the USPTO database, but when I write the line:
    tasks, datasets, transformers = dc.molnet.load_uspto(featurizer='RxnFeaturizer', splitter='random')
    I receive the error:
    NameError: name 'RobertaTokenizerFast' is not defined
    Anyone have an idea how to solve this?
    Ashwin Murali
    @Suzukazole
    @Schimidt99 could you check if you have transformers installed?
    Fernando Schimidt
    @Schimidt99
    yes, before starting the tests I installed "transformers":
    pip install transformers
    from transformers import RobertaTokenizerFast, BatchEncoding, BertTokenizer
    from tokenizers import Encoding
    I work in Google Colab
    Fernando Schimidt
    @Schimidt99
    Now I understand that I need to set up some virtual environment to work with the USPTO database in Google Colab. There are no hints about this at https://deepchem.readthedocs.io/
    Does anyone here have any tips?
    Ashwin Murali
    @Suzukazole
    Hey, that does not seem to be necessary. Here is a colab notebook https://colab.research.google.com/drive/14DEzysC0rLYysC-9bAh4l-W-zCwg6tcw?usp=sharing where I demonstrate how to use it starting from scratch.
    If you still have issues with using it, you could share your notebook with me.
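    For anyone hitting the same NameError, a minimal sketch of the working setup (assuming a fresh Colab runtime; per the imports above, the missing RobertaTokenizerFast comes from the HuggingFace transformers package):

    # in a fresh Colab cell: !pip install deepchem transformers
    import deepchem as dc

    # RxnFeaturizer needs the transformers package installed; without it,
    # load_uspto fails with NameError: name 'RobertaTokenizerFast' is not defined
    tasks, datasets, transformers = dc.molnet.load_uspto(
        featurizer='RxnFeaturizer', splitter='random')
    train, valid, test = datasets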
    Fernando Schimidt
    @Schimidt99
    Thank you very much!!! @Suzukazole
    now it works!
    Bharath Ramsundar
    @rbharath
    I am running 15 minutes late for OH today. Will be on at 9:15 am PST
    Bharath Ramsundar
    @rbharath
    Running 10 minutes late for OH today. Will be on at 9:10am PST
    Bharath Ramsundar
    @rbharath
    I won't be able to make OH today but will be back on tomorrow as usual
    Bharath Ramsundar
    @rbharath
    Running 15 minutes late for OH today
    Bharath Ramsundar
    @rbharath
    Sorry, forgot to mention, but I am out of office today. Will be back for OH tomorrow
    hemogoblin
    @hemogoblin:anontier.nl
    Hello world!