    ituri
    @ituri
    Could be reddit.
    totti4ever
    @totti4ever
    Mainly that, I guess, yes
    Plus the discussion regarding merging back into paperless I assume
    Jonas Winkler
    @jonaswinkler
    How do you properly test these CI/CD scripts, without actually doing the thing? :) Of course I managed to mess up some of the conditions and steps.
    Johann Bauer
    @bauerj
    Hey Jonas, I'm currently looking into bauerj/paperless_app#34. I was wondering if it would be easy to change the API to allow uploading multiple images to be merged into a single document.
    The alternative would be for us to add some code to create PDF documents.
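    A minimal sketch of that second option (building one PDF from several uploaded images) using Pillow, which can write multi-page PDFs; the file names here are placeholders:

        from PIL import Image

        def images_to_pdf(image_paths, output_path):
            """Combine several scanned images into a single multi-page PDF."""
            pages = [Image.open(p).convert("RGB") for p in image_paths]
            first, rest = pages[0], pages[1:]
            first.save(output_path, format="PDF", save_all=True, append_images=rest)

        # hypothetical usage
        images_to_pdf(["page1.jpg", "page2.jpg"], "scan.pdf")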
    Jonas Winkler
    @jonaswinkler
    See the referenced issue.
    VitalerHummer
    @VitalerHummer
    Hello people, I am an idiot and deleted the consumer account thinking it was superfluous. What do I do now?
    Jonas Winkler
    @jonaswinkler
    @VitalerHummer don't have access right now, but you should be able to just create it again. Username 'consumer', nothing else required. If possible, leave password blank and that will disable login.
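    For reference, a sketch of re-creating that account from the Django shell (python3 manage.py shell inside the paperless environment), assuming the standard Django user model; creating the user without a password leaves it with an unusable password, which is what disables login:

        from django.contrib.auth import get_user_model

        # create_user() with no password calls set_unusable_password(),
        # so the account exists but cannot be used to log in.
        User = get_user_model()
        User.objects.create_user(username="consumer")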
    Johann Bauer
    @bauerj
    Is there any way to redo OCR on all my documents?
    As a side effect, this will also update the content field of all processed documents.
    BTW: I'm not happy with performance of 1.1.0 on Raspberry Pi at all, this might take a little longer
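    The reply Johann thanks Jonas for below isn't quoted here; judging by the "updates the content field" remark, it presumably points to paperless-ng's document_archiver management command, which re-runs OCR while re-creating the archived versions of documents. A hedged sketch (the command name and the overwrite option are assumptions, not quoted in this chat):

        # roughly equivalent to: python3 manage.py document_archiver --overwrite
        from django.core.management import call_command

        call_command("document_archiver", overwrite=True)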
    Johann Bauer
    @bauerj
    Ah, perfect. Thank you!
    Jonas Winkler
    @jonaswinkler
    Turns out most of the performance issues on Raspberry Pi are caused by the logging system of paperless. Since its inception, paperless logged messages to the database. On Raspberry Pi, especially with SQLite, this might cause "database is locked" errors during consumption when ANY kind of debug log is written to the database at the same time. That's really bad.
    Also really long wait times on some operations that log things to the database.
    I'm on the verge of removing all this database logging and logging to files instead.
    There's also no log rotation on that database table, so it might get quite big rather quickly.
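    A sketch of the direction described above (logging to rotating files instead of the database) using Django's standard LOGGING setting; the path, sizes, and logger names are illustrative, not paperless defaults:

        # settings.py (sketch): send paperless log output to a rotating file.
        LOGGING = {
            "version": 1,
            "disable_existing_loggers": False,
            "handlers": {
                "file": {
                    "class": "logging.handlers.RotatingFileHandler",
                    "filename": "/var/log/paperless/paperless.log",
                    "maxBytes": 1_000_000,  # rotate after roughly 1 MB
                    "backupCount": 5,       # keep five rotated files
                },
            },
            "loggers": {
                "paperless": {"handlers": ["file"], "level": "DEBUG"},
                "documents": {"handlers": ["file"], "level": "DEBUG"},
            },
        }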
    Jonas Winkler
    @jonaswinkler
    Bad news is that the log viewer on the front end won't be as nice as it is right now.
    Marco Everts
    @Marcopolox13
    Hey, since I updated paperless-ng a few days ago I have problems while consuming PDFs, always some error with "Cannot allocate memory". A system restart didn't fix it.
    I also set PAPERLESS_TASK_WORKERS and PAPERLESS_THREADS_PER_WORKER in the config, but it didn't help.
    Jonas Winkler
    @jonaswinkler
    @Marcopolox13 since this is a memory issue, I'd like to know some details about your system and setup. Please provide: Host OS, CPU core count, RAM amount, installation method, and any custom settings, apart from those you already mentioned. I assume this is on 1.1.0
    Also, host platform (arm / arm64 / amd64)
    Also, the entire error log.
    Jonas Winkler
    @jonaswinkler
    might want to open an [other] ticket for that.
    Marco Everts
    @Marcopolox13
    I installed it via Docker on a Debian system with 1 vCPU and 2 GB RAM / 50 GB disk.
    Before the update it worked fine.
    It hangs at "Retrieving date from document..."
    The entire error log:

    Result: <class 'redis.exceptions.InvalidResponse'> returned a result with an error set : MemoryError

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
        res = f(*task["args"], **task["kwargs"])
      File "/usr/src/paperless/src/documents/tasks.py", line 82, in consume_file
        task_id=task_id
      File "/usr/src/paperless/src/documents/consumer.py", line 267, in try_consume_file
        classifier = load_classifier()
      File "/usr/src/paperless/src/documents/classifier.py", line 36, in load_classifier
        classifier = cache.get("paperless-classifier", version=version)
      File "/usr/local/lib/python3.7/site-packages/django_redis/cache.py", line 87, in get
        value = self._get(key, default, version, client)
      File "/usr/local/lib/python3.7/site-packages/django_redis/cache.py", line 27, in _decorator
        return method(self, *args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/django_redis/cache.py", line 94, in _get
        return self.client.get(key, default=default, version=version, client=client)
      File "/usr/local/lib/python3.7/site-packages/django_redis/client/default.py", line 220, in get
        value = client.get(key)
      File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 1606, in get
        return self.execute_command('GET', name)
      File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 901, in execute_command
        return self.parse_response(conn, command_name, **options)
      File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 915, in parse_response
        response = connection.read_response()
      File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 739, in read_response
        response = self._parser.read_response()
      File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 471, in read_response
        response = self._reader.gets()
    SystemError: <class 'redis.exceptions.InvalidResponse'> returned a result with an error set

    System is x64
    Jonas Winkler
    @jonaswinkler
    Thank you. I'll investigate.
    Jonas Winkler
    @jonaswinkler
    For the time being, I'd suggest going with 1.0; switching between these two versions is perfectly safe.
    Marco Everts
    @Marcopolox13
    OK, can I just go back by editing the docker-compose file to 1.0 instead of latest?
    Jonas Winkler
    @jonaswinkler
    exactly.
    Marco Everts
    @Marcopolox13
    Ok cool
    Jonas Winkler
    @jonaswinkler
    after that, docker-compose pull while the instance is down, then docker-compose up and you're good to go.
    Marco Everts
    @Marcopolox13
    I hope this will help for now, because I updated from, I think, 0.9.10 to 1.1.
    Or might this be the problem?
    Jonas Winkler
    @jonaswinkler
    According to the information you posted, the change that's responsible for your issues happened between 1.0 and 1.1.
    Marco Everts
    @Marcopolox13
    Ah OK cool - will try it later at home
    Thanks
    Marco Everts
    @Marcopolox13
    Hey, I did it like you said; at first it worked perfectly. Everything that was in the consume folder got consumed correctly, but now, when I try loading new PDFs in, I get the error again.

    consuming document Scan02122021170418.pdf: [Errno 12] Cannot allocate memory

    2/12/21, 5:04 PM INFO Consuming Scan02122021170418.pdf

    2/12/21, 4:59 PM ERROR Error while consuming document 20210212171335_001.pdf: [Errno 12] Cannot allocate memory

    2/12/21, 4:59 PM INFO Consuming 20210212171335_001.pdf

    Jonas Winkler
    @jonaswinkler
    @Marcopolox13 I've addressed this issue in 1.1.2.
    Marco Everts
    @Marcopolox13
    Ok cool thanks
    Lenny. [m]
    @lenny:com.flipdot.dev
    hey

    so I have a general question:
    I'm currently ingesting a lot of docs, and I sometimes create new correspondents or tags when editing the data, with word matching and stuff.

    However, when I go to the next document, one that clearly matches these rules, they are not auto-applied.

    Will new tag rules and such only be applied to documents ingested after their creation? Is there a way to rerun tagging?
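    Matching rules are normally only evaluated at consumption time; for documents that are already in the database, paperless-ng ships a retagger management command that re-applies them. A hedged sketch (the option names are assumptions based on the documented flags, not something quoted in this chat):

        # roughly equivalent to: python3 manage.py document_retagger --tags --correspondent --document_type
        from django.core.management import call_command

        call_command("document_retagger", tags=True, correspondent=True, document_type=True)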