Hanna L Tischer
@hannalt
Hey, I was wondering what the best way to parse the CSV file for ConceptNet is? Also, is there a file that only has English words?
Rejnald Lleshi
@rlleshi
Hi everyone! Whenever I query the REST API like so: 'http://api.conceptnet.io/c/en/' + word + '?limit=2000', I get a variable number of results even though I am setting limit=2000. Any idea why this is happening?
pzdkn
@pzdkn
@hannalt hi Hanna, I wrote a simple Python script that filters English assertions from the raw assertions.csv; you can take a look here:
https://pastebin.com/gPHNnuQ2
hi guys, I wonder how you can get the links to WordNet from the assertions.csv?
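For anyone who can't reach the pastebin, here is a minimal sketch of the same idea. It assumes the 5.x dump layout, where each row of the assertions file is tab-separated: edge URI, relation, start node, end node, JSON metadata. English-only filtering then means keeping rows whose start and end nodes both live under /c/en/:

```python
import csv
import gzip

def is_english(row):
    """True if both endpoints of an assertion row are English concepts.

    Assumes the 5.x dump layout: URI, relation, start, end, JSON metadata.
    """
    start, end = row[2], row[3]
    return start.startswith("/c/en/") and end.startswith("/c/en/")

def filter_english(in_path, out_path):
    # Stream the gzipped dump line by line so memory use stays flat.
    with gzip.open(in_path, "rt", encoding="utf-8") as src, \
            open(out_path, "w", encoding="utf-8", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            if is_english(row):
                writer.writerow(row)
```

Filtering on only one endpoint (start or end) keeps cross-lingual edges too, if that's what you want.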
Kiko
@pretty_flacko99_twitter
Hello everyone, I am trying to import the ConceptNet graph into Neo4j. I found some instances on GitHub: https://github.com/redsk/neo_concept but they were all done a long time ago and the ConceptNet structure is different now. Can anyone help me extract ConceptNet into the nodes and relationships CSV files that I will later import into the Neo4j database? Thank you in advance.
theolivenbaum
@theolivenbaum
Hi Kiko - I wrote a reader for it that you could use to dump a CSV file with minor modifications: https://github.com/curiosity-ai/catalyst-conceptnet - or just ingest it directly using the Neo4j driver for C#
Lalchand
@lalchand-pandia
hi, I want to download all the relations with surfaceText in CSV format. Is there a URL to do that directly?
Lalchand
@lalchand-pandia
@rspeer I want to get results with relation=IsA and English as the language preference.
So, I am using a python request:
response = requests.get('http://api.conceptnet.io/query?rel=/r/IsA&lang=en')
Still, I am getting results in Japanese.
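One possible explanation (an assumption on my part, not confirmed in this thread): the /query endpoint may simply ignore a lang parameter, so rel=/r/IsA matches IsA edges in every language. A workaround is to filter the returned edges client-side by their node URIs; a sketch using only the standard library:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def keep_english(edges):
    """Keep only edges whose start and end nodes are English concepts."""
    return [
        e for e in edges
        if e["start"]["@id"].startswith("/c/en/")
        and e["end"]["@id"].startswith("/c/en/")
    ]

def english_isa(limit=200):
    # Over-fetch, then drop non-English edges locally.
    qs = urlencode({"rel": "/r/IsA", "limit": limit})
    with urlopen("http://api.conceptnet.io/query?" + qs) as resp:
        data = json.load(resp)
    return keep_english(data.get("edges", []))
```

Because the filtering happens after the fetch, you may need a larger limit than the number of English edges you actually want.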
92komal
@92komal
hello everyone!!
92komal
@92komal
This is a program extracting relations using the ConceptNet API. I am new and have little knowledge of ConceptNet. I have been getting an error for a very long time.
This is the error :
multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x7f9d326b5940>'. Reason: 'TypeError("cannot pickle '_io.BufferedReader' object")
I appreciate your help. thanks in advance
황다영
@wisenut_hhhddd56_gitlab

Has anyone gotten an error like this?
return _wf_cache[args]
KeyError: ('##', 'ja', 'best', 0.0)

During handling of the above exception, another exception occurred:

Robyn Speer
@rspeer
I think the above is an issue I responded to over e-mail as well, but it's a problem with the dependencies of wordfreq, which is a dependency of ConceptNet. I've been working on untangling those dependencies. pip install --upgrade wordfreq should solve it.
@92komal I'd need a lot more context in your case, such as why multiprocessing and pickle are involved. Neither of those modules are involved in ConceptNet as far as I know.
oh, now I see the code. Anyway, right, that's a problem with something you're trying to send between processes in multiprocessing, and I believe you'd encounter it no matter what API you're using.
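The failure can be demonstrated without ConceptNet at all: multiprocessing moves every argument and result between processes with pickle, and open file handles are not picklable. The usual fix is to pass paths (plain strings) and open files inside the worker. A minimal sketch with hypothetical helper names:

```python
import pickle

def is_picklable(obj):
    """multiprocessing serializes arguments and results with pickle;
    anything that fails here will fail inside a Pool the same way."""
    try:
        pickle.dumps(obj)
        return True
    except TypeError:
        return False

def count_lines(path):
    """Worker that takes a picklable *path* and opens the file itself,
    instead of receiving an open (unpicklable) file handle."""
    with open(path, "rb") as f:
        return sum(1 for _ in f)
```

With pool.map(count_lines, list_of_paths) the arguments crossing the process boundary are plain strings, so the "cannot pickle '_io.BufferedReader'" error never arises.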
Anmol Kalia
@AnmolKalia13_twitter
Hey,
Can someone point me to the best way to get all concepts that have an /r/PartOf relation with a given concept, e.g. airplane?
Thanks!
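A hedged sketch of one way to do this with the public /query endpoint: PartOf edges run from part to whole (e.g. wing PartOf airplane), so putting the given concept in the end parameter should return its parts. The endpoint and parameters are from the public API; the helper functions are mine:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "http://api.conceptnet.io/query"

def partof_query_url(concept, limit=100):
    """Edges are (part, /r/PartOf, whole), so the given concept goes in `end`."""
    qs = urlencode({"end": f"/c/en/{concept}", "rel": "/r/PartOf", "limit": limit})
    return API + "?" + qs

def parts_of(concept, limit=100):
    with urlopen(partof_query_url(concept, limit)) as resp:
        data = json.load(resp)
    # The start node of each returned edge is the part.
    return [e["start"]["label"] for e in data.get("edges", [])]
```

Swapping end for start in the query would give the opposite direction: things the concept is itself a part of.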
Walt96
@Walt96
Hello guys, are servers down today? i keep getting error 500
JafarMansouri
@JafarMansouri
Hi. How can I get the embedding vector for a sentence using ConceptNet? Thanks
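ConceptNet itself doesn't produce sentence vectors; its companion embeddings (ConceptNet Numberbatch) are word/phrase vectors. A common rough approximation, not an official ConceptNet feature, is to average the Numberbatch vectors of the words in the sentence and normalize:

```python
import math

def sentence_vector(sentence, word_vectors):
    """Crude sentence embedding: normalized mean of the known word vectors.

    `word_vectors` maps lowercase words to lists of floats, e.g. loaded
    from the numberbatch-en text file. Returns None if no word is known.
    """
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    if not vecs:
        return None
    dim = len(vecs[0])
    mean = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in mean))
    return [x / norm for x in mean] if norm > 0 else mean
```

For anything serious, a dedicated sentence-embedding model is a better fit; this is just the cheapest thing that works with Numberbatch vectors alone.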
Alan Hogue
@alan134_gitlab
I'm seeing the same server errors.
I have a question. I see in the CSV download that there are a lot of relations between (to/from?) WordNet lexemes, but I don't see WordNet sense keys or anything that I recognize as pointing uniquely to something in WordNet. We'd like to use the two together, so my question is: does a unique WN identifier exist anywhere in the data, perhaps accessible through the API that is currently down? Thanks!
Navneet Agarwal
@navneet-ag

Hello guys, are servers down today? i keep getting error 500

Hello guys, are the servers still down ? (accessed at 6:24 am, 3 June 2021)

Walt96
@Walt96

Hello guys, are servers down today? i keep getting error 500

Hello guys, are the servers still down ? (accessed at 6:24 am, 3 June 2021)

Yes, i still get the error unfortunately...

Navneet Agarwal
@navneet-ag
Yep, I also keep getting the error.
Jean-Louis Villecroze
@CocoaGeek_twitter
Getting error 500 as well (both for the http and https server) :(
Jean-Louis Villecroze
@C0c0aG33k
Left an issue on github for the Server Error we have been seeing for a while now ...
Jean-Louis Villecroze
@C0c0aG33k
It got fixed (once they saw the issue on github) :)
YassirMatrane
@YassirMatrane
Hello guys, I hope you're doing well. Can I use ConceptNet through Python?
JieSun1990
@JieSun1990
Same question as above. Is there a step-by-step guide to downloading the ConceptNet dataset?
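No build step is needed just to get the data: the assertions are published as a single gzipped CSV dump. A sketch of fetching it from Python; the exact URL is an assumption based on the 5.7.0 release layout, so check the project wiki's Downloads page for the current version:

```python
import urllib.request

# Assumed URL pattern for the 5.7.0-era dumps; verify against the wiki.
DUMP_URL = (
    "https://s3.amazonaws.com/conceptnet/downloads/2019/edges/"
    "conceptnet-assertions-{version}.csv.gz"
)

def dump_url(version="5.7.0"):
    return DUMP_URL.format(version=version)

def download_assertions(version="5.7.0", dest=None):
    """Fetch the full assertions dump (large; plan for several GB of disk)."""
    dest = dest or f"conceptnet-assertions-{version}.csv.gz"
    urllib.request.urlretrieve(dump_url(version), dest)
    return dest
```

The full build process (Snakefile, PostgreSQL, etc.) is only needed if you want to run your own API server or rebuild the vectors.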
Bibi Rabiya
@CoderPhoenix
Hello everyone,
I was going through all the ConceptNet wikis and FAQs to find out about the advantages of creating our own copy of ConceptNet on AWS in terms of performance, but didn't find any.
Can anyone tell me what the performance difference is between calling ConceptNet via its web REST API and building your own copy of ConceptNet on AWS? For the web API it is 3600 requests per hour, as given in the wiki, but what about using our own AWS container? Assume that the data remains the same as ConceptNet's.
Thanks!
ahmed
@ahmedbahaabas_twitter
Hello... How can I use ConceptNet to search for a keyword's relations or class? Please, I need a clear tutorial.
patham9
@patham9:matrix.org [m]
I can give you an example query I'm using in my project @ahmedbahaabas_twitter give me a few minutes
patham9
@patham9:matrix.org [m]
Example, querying what a cat is made of, limiting to at most 10 results: http://api.conceptnet.io/query?start=/c/en/cat&rel=/r/MadeOf&limit=10
If cat were a material, we could also ask what is made of cat; in that case "start" would be replaced with "end", since cat would then be the second argument of the MadeOf relation.
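The start/end swap described above can be wrapped in a small helper. The query parameters are from the public API; the function and its direction names are mine:

```python
from urllib.parse import urlencode

def made_of_url(concept, direction="materials", limit=10):
    """Build a /query URL for MadeOf edges.

    direction="materials": what is `concept` made of (concept as start)
    direction="products":  what is made of `concept` (concept as end)
    """
    key = "start" if direction == "materials" else "end"
    qs = urlencode({key: f"/c/en/{concept}", "rel": "/r/MadeOf", "limit": limit})
    return "http://api.conceptnet.io/query?" + qs
```

So made_of_url("cat") reproduces the example query above, and made_of_url("wool", direction="products") asks the reverse question.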
Constanza
@cfierro94
Hi! The API is not working for me. I've tried https://conceptnet.io/ and http://conceptnet5.media.mit.edu/ - is this a known issue?
Also, is building it (https://github.com/commonsense/conceptnet5/wiki/Build-process) the only way to use the data, or is there a flat downloadable file?
krgallagher
@krgallagher
The API is down for me as well
Bancherd
@Bancherd-DeLong
Hi, a newbie here. I tried to run "snakemake data/vectors/mini.h5" and received this error:
File "/home/bancherd/.local/lib/python3.8/site-packages/wordfreq/tokens.py", line 264, in tokenize
tokens = _mecab_tokenize(text, language.language)
File "/home/bancherd/.local/lib/python3.8/site-packages/wordfreq/mecab.py", line 40, in mecab_tokenize
MECAB_ANALYZERS[lang] = make_mecab_analyzer(lang)
File "/home/bancherd/.local/lib/python3.8/site-packages/wordfreq/mecab.py", line 20, in make_mecab_analyzer
import ipadic
ModuleNotFoundError: No module named 'ipadic'
[Sun Aug 22 17:01:09 2021]
Error in rule miniaturize:
jobid: 0
output: data/vectors/mini.h5
shell:
cn5-vectors miniaturize data/vectors/numberbatch-biased.h5 data/vectors/w2v-google-news.h5 data/vectors/mini.h5
(exited with non-zero exit code)
I tried to look for "ipadic", without success. Can anyone suggest solutions? Thank you!
Bancherd
@Bancherd-DeLong

Despite the warning on PyPI, I went ahead, installed "ipadic", and reran the script. I got the following (different) error: Building prefix dict from /home/bancherd/.local/lib/python3.8/site-packages/wordfreq/data/jieba_zh.txt ...
Dumping model to file cache /tmp/jieba.u600b79f75cbc9b33aa477293be70c0e2.cache
Loading model cost 0.057 seconds.
Prefix dict has been built successfully.
/usr/bin/bash: line 1: 37532 Killed cn5-vectors miniaturize data/vectors/numberbatch-biased.h5 data/vectors/w2v-google-news.h5 data/vectors/mini.h5
[Sun Aug 22 20:16:41 2021]
Error in rule miniaturize:
jobid: 0
output: data/vectors/mini.h5
shell:

Bancherd
@Bancherd-DeLong
Problem solved. I had too many open Chrome tabs (even with 32 GB of memory / Ubuntu 20.04).
Evelyne
@echevry_gitlab
Hello, I am reviving an old project, and I am trying to get associations through a list. In the past, the query was: https://api.conceptnet.io/assoc/list/en/toast,cereal,juice@0.5,egg. Is this feature still available? If yes, where can I find the documentation? This query no longer works with the current API...
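As far as I can tell, the old /assoc endpoints are gone from the current API. The closest surviving feature I'm aware of is /relatedness, which scores a single pair of terms; relatedness to a *list* would then have to be approximated client-side, e.g. by averaging pairwise scores. This is my workaround, not an official replacement:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def relatedness(term1, term2):
    """Score one pair of English terms with the /relatedness endpoint."""
    qs = urlencode({"node1": f"/c/en/{term1}", "node2": f"/c/en/{term2}"})
    with urlopen("http://api.conceptnet.io/relatedness?" + qs) as resp:
        return json.load(resp)["value"]

def list_relatedness(term, terms, score=relatedness):
    """Approximate association with a list as the mean pairwise score.

    `score` is injectable so the averaging logic can be exercised
    without hitting the network.
    """
    scores = [score(term, t) for t in terms]
    return sum(scores) / len(scores)
```

Note this ignores the per-term weights (the @0.5 syntax) the old /assoc queries supported; a weighted mean would be the natural extension.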
Gwen Rehrig
@dr-gwen

Hi all, I'm also building ConceptNet5 for the first time on a machine running Ubuntu 20.04 with 32 GB of RAM. I was able to run ./build.sh without any obvious errors that I saw in the output, but pytest is returning failed and skipped tests. Specifically:
test_languages.py fails (316); the error message indicates it is unable to find the language_data module (traced to line 809 in .../langcodes/__init__.py).
test_json_ld.py fails as well, with a KeyError on line 82 (which is: "quiz = ld[api('/c/en/quiz')]") and line 161 ("rel = ld[vocab('rel')]")

Do these errors indicate that the installation was not successful and I should re-install? Or, have others encountered the same issues and have solutions? I did check the documentation and Googled the errors, but did not find any relevant troubleshooting solutions. Any suggestions would be appreciated.

Bancherd
@Bancherd-DeLong
Hmm, could someone please tell me how to generate the "numberbatch.txt.gz" file? It is NOT simply a matter of specifying it in the snakemake file, is it?
Carlos F. Enguix
@cenguix
Hi folks, I am currently writing a research survey paper about open knowledge graphs, and I am including ConceptNet. I would appreciate any link/info indicating repository size and related fresh stats. I look forward to hearing from you. Best regards, Carlos F. Enguix
Sergey
@zababurinsv

Hi, I want to build a ConceptNet node.

But when I run build.sh I get this error:

Error in rule convert_opensubtitles_ft:
    jobid: 0
    output: data/vectors/fasttext-opensubtitles.h5

RuleException:
CalledProcessError in line 663 of /home/zb/Desktop/conceptnet/Snakefile:
Command 'set -euo pipefail;  CONCEPTNET_DATA=data cn5-vectors convert_fasttext -n 2000000 data/raw/vectors/ft-opensubtitles.vec.gz data/vectors/fasttext-opensubtitles.h5' returned non-zero exit status 137.
  File "/home/zb/Desktop/conceptnet/Snakefile", line 663, in __rule_convert_opensubtitles_ft
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
Exiting because a job execution failed. Look above for error message
[Sat Sep 11 19:11:39 2021]
Finished job 206.
371 of 472 steps (79%) done
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/zb/Desktop/conceptnet/.snakemake/log/2021-09-11T182112.270477.snakemake.log

What could be the reason for this?

Sergey
@zababurinsv

How can I proceed with the installation instead of starting over?

I have:
300 GB of free disk space
At least 30 GB of available RAM
The time and bandwidth to download 24 GB of raw data

I start build.sh, and at
464 of 472 steps (98%) done
I get an error.

/usr/bin/bash: line 1: 22394 Killed                  cn5-vectors intersect data/vectors/crawl-300d-2M-retrofit.h5 data/vectors/w2v-google-news-retrofit.h5 data/vectors/glove12-840B-retrofit.h5 data/vectors/fasttext-opensubtitles-retrofit.h5 data/vectors/numberbatch-retrofitted.h5 data/vectors/intersection-projection.h5
[Mon Sep 13 03:26:07 2021]
Error in rule merge_intersect:
    jobid: 177
    output: data/vectors/numberbatch-retrofitted.h5, data/vectors/intersection-projection.h5
    shell:
        cn5-vectors intersect data/vectors/crawl-300d-2M-retrofit.h5 data/vectors/w2v-google-news-retrofit.h5 data/vectors/glove12-840B-retrofit.h5 data/vectors/fasttext-opensubtitles-retrofit.h5 data/vectors/numberbatch-retrofitted.h5 data/vectors/intersection-projection.h5
        (exited with non-zero exit code)

Removing temporary output file data/psql/edges_gin.csv.
[Mon Sep 13 03:27:40 2021]
Finished job 3.
464 of 472 steps (98%) done
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/zb/Desktop/conceptnet5/.snakemake/log/2021-09-12T221420.189372.snakemake.log
Sergey
@zababurinsv
After 6 hours of installation, I got an error.
Please tell me, is it possible to continue the installation after a failure?
Bancherd
@Bancherd-DeLong
It continues from just before the error, so it should restart at around step 465.