lisa563
@lisa563
I set the Score threshold to 0.8 and Max recommendations to 1 on the Recommenders page. When I annotate a file, there are many predicted results per tag, far more than one, and many of the predicted results have a Confidence value less than 0.8. Is there a problem?
Richard Eckart de Castilho
@reckart
The score threshold is not a confidence threshold
it causes the recommender to run an internal evaluation
if the internal evaluation yields a score lower than the threshold, then the recommender is not used at all
the max recommendations setting for a recommender says how many recommendations per location (begin/end) the recommender should yield
if you get more than one recommendation with exactly the same location from the same recommender (with max rec = 1), then we may have a bug
there is a recommender sidebar on the left side of the annotation page which has more details
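To make the distinction concrete, here is a minimal sketch of how the score threshold gates a whole recommender rather than filtering individual suggestions by confidence. The names are illustrative, not INCEpTION's actual API.

```python
# Illustrative sketch (not INCEpTION code): the score threshold gates whole
# recommenders based on their internal evaluation score; it does not filter
# individual suggestions by confidence.

def select_recommenders(recommenders, evaluate, score_threshold):
    """Run the internal evaluation and keep only recommenders that pass."""
    selected = []
    for recommender in recommenders:
        evaluation_score = evaluate(recommender)  # internal evaluation of the recommender
        if evaluation_score >= score_threshold:
            selected.append(recommender)  # this recommender will be used for predictions
        # otherwise the recommender is not used at all
    return selected
```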
Richard Eckart de Castilho
@reckart
@lisa563 would you find a confidence threshold useful?
lisa563
@lisa563
Thanks for your answer. When I set Max. suggestions to 1 in the recommender sidebar on the annotation page, I still get far more than one recommendation per tag. Is this a bug?
Richard Eckart de Castilho
@reckart
I think we have now updated the documentation to make this clearer
@lisa563 setting max recommendations to 1 in the recommender sidebar means "per location, only show the top X recommendations accumulated from all recommender suggestions for that location"
so if you have 3 recommenders and each produces 3 suggestions, you could end up with 9 suggestions - let's say some of them have the same label, so they are conflated and you end up with 5 suggestions
these should then be ordered by score and the top X (i.e. 1) recommendations are displayed
so for any given tuple of [begin, end, layer, feature] there should only be 1 suggestion shown
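An illustrative sketch of that accumulation, assuming each suggestion carries begin, end, layer, feature, label and score. The names are made up for illustration and are not INCEpTION's actual data model.

```python
# Illustrative sketch (not INCEpTION code): accumulate suggestions from all
# recommenders, conflate duplicates of the same label per location, then show
# only the top X suggestions per (begin, end, layer, feature) tuple.

from collections import defaultdict

def top_suggestions(suggestions, max_suggestions=1):
    by_location = defaultdict(dict)  # (begin, end, layer, feature) -> {label: best score}
    for s in suggestions:
        key = (s["begin"], s["end"], s["layer"], s["feature"])
        label, score = s["label"], s["score"]
        # Conflate suggestions with the same label at the same location,
        # keeping the best score.
        if label not in by_location[key] or score > by_location[key][label]:
            by_location[key][label] = score
    shown = {}
    for key, labels in by_location.items():
        ranked = sorted(labels.items(), key=lambda item: item[1], reverse=True)
        shown[key] = ranked[:max_suggestions]  # top X per location
    return shown
```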
lisa563
@lisa563
I see. Thank you very much for your answer.
lisa563
@lisa563
When I switch to the PDF editor via Settings in the Document panel at the top, I don't see the content of the PDF document. Are there any other settings I need to change?
Richard Eckart de Castilho
@reckart
@lisa563 what do you see instead of the content? Is the document you try to view in PDF mode actually a PDF file?
lisa563
@lisa563
Yes, the document I try to view in PDF mode is actually a PDF file. I can't see any content, it's blank, but there is content when I switch to the brat editor. I am using the Chrome browser.
Richard Eckart de Castilho
@reckart
@lisa563 Hm - I have heard of such issues but never been able to reproduce them. Would you be able to locate the Chrome Developer Tools and check if you see any kind of error there, e.g. in the "Console" part?
To open the developer console in Google Chrome, open the Chrome menu in the upper-right corner of the browser window and select More Tools > Developer Tools. You can also use the shortcut Option + ⌘ + J (on macOS) or Shift + Ctrl + J (on Windows/Linux).
lisa563
@lisa563
There are two errors in the console. The first error: "VM354:17 Uncaught TypeError: Cannot read property 'findAnnotationById' of undefined". The second error: "Mixed Content: The page at 'https://morbo.ukp.informatik.tu-darmstadt.de/annotation.html?4&p=549#!p=549&d=2240' was loaded over HTTPS, but requested an insecure resource 'http://morbo.ukp.informatik.tu-darmstadt.de/resources/pdfanno/index.html?pdf=http://morbo.ukp.informatik.tu-darmstadt.de/annotation.html?4-1.0-centerArea-editor-vis&p=549&pdftxt=http://morbo.ukp.informatik.tu-darmstadt.de/annotation.html?4-1.1-centerArea-editor-vis&p=549&api=http://morbo.ukp.informatik.tu-darmstadt.de/annotation.html?4-1.2-centerArea-editor-vis&p=549'. This request has been blocked; the content must be served over HTTPS."
Richard Eckart de Castilho
@reckart
Ok, the second one is likely the problem
The page is served via HTTPS but the PDF annotator tries to load its data via HTTP
that is something we need to check
If you download the software from our website and try it out on your local machine, chances are good that the PDF will load
we'll try fixing it for the upcoming bugfix release - thanks!
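For readers hitting the same problem, here is a hypothetical sketch of the scheme mismatch behind that "Mixed Content" error. The assumption (not confirmed above) is that the application builds absolute URLs for the PDF viewer and, when it sits behind an HTTPS-terminating reverse proxy, has to use the scheme the client actually used (e.g. from an X-Forwarded-Proto header) rather than the scheme of its local connector. The helper name is made up for illustration.

```python
# Hypothetical illustration (not INCEpTION code): if absolute http:// URLs are
# generated while the page itself is served over https://, the browser blocks
# the requests as mixed content.

from typing import Optional
from urllib.parse import urlunsplit

def viewer_resource_url(host: str, path: str, local_scheme: str,
                        forwarded_proto: Optional[str]) -> str:
    # Behind a TLS-terminating reverse proxy, the forwarded scheme reflects
    # what the client actually used; fall back to the local connector's scheme.
    scheme = forwarded_proto or local_scheme
    return urlunsplit((scheme, host, path, "", ""))

# Local connector speaks plain HTTP and no forwarded scheme is passed on:
# the generated URL is http:// and the browser blocks it on an https:// page.
print(viewer_resource_url("morbo.ukp.informatik.tu-darmstadt.de",
                          "/resources/pdfanno/index.html", "http", None))
# With the forwarded scheme honoured, the URL matches the page and loads fine.
print(viewer_resource_url("morbo.ukp.informatik.tu-darmstadt.de",
                          "/resources/pdfanno/index.html", "http", "https"))
```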
lisa563
@lisa563
OK, I will try it, thanks.
Three other questions. First, when I upload an HTML document and select HTML as the format, the import is unsuccessful; there is no response or prompt after about 2 seconds of loading.
Second, if I run the INCEpTION platform on my local machine, what kind of computer configuration is recommended?
Third, is it possible to upload Word format files?
I'd appreciate your help, thank you all.
Richard Eckart de Castilho
@reckart
HTML upload is broken in the current release - fix is already in place and will be included in 0.18.2
There is no specific machine recommendation - it really depends on how you use it: data size, whether you use recommenders, knowledge bases, etc.
But it is strongly recommended that you use a proper MariaDB/MySQL database if you are not just testing around
also, make sure you make regular backups / project exports
Word files are not supported atm
Adding some basic support for word files shouldn't be too difficult though. If you would like to see that, best open a feature request: https://github.com/inception-project/inception/issues/new/choose
lisa563
@lisa563
Thank you very much for your answer.
Richard Eckart de Castilho
@reckart
Star/watch the GitHub project, then you should get an email when 0.18.2 is released
lisa563
@lisa563
OK, thank you.
lisa563
@lisa563
I'm very sorry, I still have some questions.
When annotating, if the content I want to label spans 2 lines, what should I do to mark it all at once?
Approximately how many files do I need to annotate before the predictions become more accurate?
lisa563
@lisa563
After I annotated 150 files, there are still many prediction results with a confidence of 0.1 or 0.2. How can I reduce these low-confidence prediction results? Are the annotated files not enough? Or is there some configuration I need to do?
Jan-Christoph Klie
@jcklie
@lisa563 Regarding the recommender, that totally depends on your task and the recommender you choose. What exactly are you annotating and using? You can automatically remove recommenders with low confidence by setting a threshold in the recommender settings; the suggestions of that recommender are then not shown
Richard Eckart de Castilho
@reckart
@lisa563 for curiosity: did you annotate those 150 files on our demo server?
lisa563
@lisa563
For the recommender, I set the tool to Multi-Token Sequence Classifier (OpenNLP NER) and set the Score threshold to 0.7, but there are still many prediction results with a confidence lower than 0.7.
Yes, I annotated those 150 files on your demo server.
Richard Eckart de Castilho
@reckart
@lisa563 please be aware that there is no security on the demo server, as everybody uses the shared demo account, and that we regularly wipe it. We can only warn against doing any serious work on the demo server.
Richard Eckart de Castilho
@reckart
@lisa563 regarding the confidence threshold: I have opened an issue for it: inception-project/inception#2028
lisa563
@lisa563
Thanks
lisa563
@lisa563
When I made annotations in the Named Entity layer, I set the recommender to use the Multi-Token Sequence Classifier (OpenNLP NER) tool, so I couldn't select content spanning multiple lines at once. When "Allow crossing sentence boundaries" is checked in the layer configuration, the OpenNLP NER tool can no longer be used. What should I do?
Richard Eckart de Castilho
@reckart
Sequence labelling across sentence boundaries is not supported. If you want to annotate entire sentences, set the annotation granularity from "tokens" to "sentences" and then you can choose another, more suitable type of OpenNLP recommender. Mind that you might have to create a custom span layer for that - I do not know if the built-in NE layer accepts being configured to entire sentences.
lisa563
@lisa563
After setting the annotation granularity from "tokens" to "sentences", what type of OpenNLP recommender is suitable?