Starbeamrainbowlabs
@sbrl
Currently the storage system for the user accounts isn't scalable though - it uses peppermint.json
I can say that I've got a wiki that's ~570 pages and 70K+ words, and it performs fine :D
Particularly the full-text search engine (~330ms single word, ~500ms 2-word searches). I'm very pleased with that bit.
sunjam
@sunjam
hmm, I'd love to store my wiki pages on nextcloud in specified directory. Nextcloud also has full-text search via elastic + tesseract OCR
Starbeamrainbowlabs
@sbrl
Ah
You can tell Pepperminty Wiki you want it to store its data in an arbitrary directory
sunjam
@sunjam
Nice!
Starbeamrainbowlabs
@sbrl
The trick would be getting nextcloud to recognise that
It's the data_storage_dir setting in peppermint.json
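For anyone following along, a minimal peppermint.json fragment setting this might look like the following (the path here is a placeholder, and all other settings are omitted):

```json
{
    "data_storage_dir": "/var/www/wiki-data"
}
```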
sunjam
@sunjam
Cool, I'll look into it once I have pepperminty up and running
Starbeamrainbowlabs
@sbrl
(peppermint.json is auto-generated on first load)
Awesome!
I've even got an auto-generated configuration guide that details every setting and what it does
My general rule is "if I might want to change it later, make it a setting"
SpcCw
@SpcCw
Hey, I seem to be stuck with this "block access to peppermint.json" stuff. I wanted to check out the wiki, so I installed a clean Debian 10 with nginx and PHP 7.3. When I try to start it I get an "Error: peppermint.json.compromised exists on disk..." message. But I've blocked access to it on the webserver? If I go to http://192.168.220.227/peppermint.json I get a 404 (I also tried with 403) - still the same warning. What am I missing here?
Seems like I found the setting that skips this check, but... :)
Starbeamrainbowlabs
@sbrl
Hey, @SpcCw! Basically, you need to delete the peppermint.json.compromised file on disk
it creates it next to peppermint.json
The check will fail if peppermint.json returns a 2XX code
The problem is that the check is part of the first-run wizard, but for performance reasons I can't keep re-running it on every request, you see
it's actually the Pepperminty Wiki core that displays that message
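For reference, a typical nginx rule to block direct requests to the settings file might look like this (a sketch, assuming a standard server block; this is what the check probes for):

```nginx
# Return 403 for direct requests to the settings file
location = /peppermint.json {
    return 403;
}
```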
Starbeamrainbowlabs
@sbrl
Let me know if that fixes it :-)
SpcCw
@SpcCw

Hey, @sbrl :)

I got around this by disabling the check in the settings file, so this is no problem.

The problem is that in my setup (Debian 10, nginx and PHP 7.3 from their repo, all very basic) for some reason this check always succeeds unless disabled in settings. If I delete "peppermint.json.compromised" and reload the page then it just gets created again. At the same time I am very sure the access is blocked as my attempts to go to "http://.../peppermint.json" result in 404 or 403 (depending how I set it).

E.g.: # curl http://172.17.156.215/peppermint.json

<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>

It looks like the check goes around the webserver somehow or doesn't recognize the 404 response from nginx :)

Starbeamrainbowlabs
@sbrl
Ah, glad you got it sorted @SpcCw! It's probable that there's a bug in the detection system. I suspect that the issue is that it's very difficult to tell what the original URL you loaded the page on was. I use this to swap out index.php / the query string for peppermint.json with a regular expression and then make another request.
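The URL swap described there can be sketched roughly like this (a Python sketch of the idea only, not the actual PHP implementation; the function name is hypothetical):

```python
import re

def peppermint_json_url(request_url: str) -> str:
    """Derive the URL to probe for peppermint.json from the page URL
    by replacing a trailing index.php and/or query string."""
    return re.sub(r"(index\.php)?(\?.*)?$", "peppermint.json", request_url, count=1)

print(peppermint_json_url("http://192.168.220.227/index.php?action=view"))
# → http://192.168.220.227/peppermint.json
```

The check then requests that derived URL and flags a problem if the response comes back with a 2XX status code, per the explanation above.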
Sean Feeney
@SeanFromIT
Trying to get started. ._extra_data seems to not be getting populated
(the folder itself gets created, but it's empty)

So when I load index.php I get these errors:
Warning: require_once(._extra_data/parser-parsedown/Parsedown.php): failed to open stream: No such file or directory in /path/to/html/index.php on line 9571

Fatal error: require_once(): Failed opening required '._extra_data/parser-parsedown/Parsedown.php' (include_path='.:/path/to/pear/php') in /path/to/html/index.php on line 9571

SpcCw
@SpcCw
@SeanFromIT in my case I had the same problem and it was a permissions issue - it looks like PHP couldn't write to it. chmod -R 777 fixed it (for a production environment I'd give write access to the webserver only, but I'm just testing).
Starbeamrainbowlabs
@sbrl
Ah, I've been having a bunch of trouble with that actually @SeanFromIT. I actually fixed a number of bugs in the system - hence the multiple hotfix point releases :P
@SpcCw: Ah, I see. I should definitely add a check in for that then when extracting - it should give a nice error message.
Starbeamrainbowlabs
@sbrl
@SeanFromIT: Let me know if @SpcCw's solution doesn't work. Ideally the folder that index.php is located in should be owned by the same user and group as the PHP process accessing it
I recommend 0775 for directories, and 0664 for files
If it still doesn't work, is there anything else in the PHP error log at all?
Starbeamrainbowlabs
@sbrl
^---- Also, if it doesn't work, try this special development version. I've added, like, all the error messages to the unpacking system
Sean Feeney
@SeanFromIT

After some tweaking of php.ini I was able to get it to dump the errors to a log file. After chmod'ing what I thought was all the things, I'm still getting the same errors. In addition to the above errors, which happen every time index.php is loaded, if I delete index.php and re-upload it, the first-run errors are as follows:

Warning: ZipArchive::extractTo(): Invalid or uninitialized Zip object in /path/to/html/index.php on line 1358

Warning: ZipArchive::close(): Invalid or uninitialized Zip object in /path/to/html/index.php on line 1359

Warning: session_start(): Cannot start session when headers already sent in /path/to/html/index.php on line 1369

Warning: Cannot modify header information - headers already sent by (output started at /path/to/html/index.php:1358) in /path/to/html/index.php on line 1371

Warning: Cannot modify header information - headers already sent by (output started at /path/to/html/index.php:1358) in /path/to/html/index.php on line 1529

Starbeamrainbowlabs
@sbrl
That's really strange @SeanFromIT.
What version of Pepperminty Wiki are you using? I've tried looking up line 1359 in the latest dev version and the latest release, but both show a blank line
Can you try the version I link to just above this message please?
And also, can you try appending ?action=debug&secret=YOUR_SECRET to index.php please? You can find your secret in peppermint.json under the secret property
if you could PM or email me the output of that it'll give me a ton more information about your issue
Sean Feeney
@SeanFromIT
[image attachment: image.png]
is this normal?
Starbeamrainbowlabs
@sbrl
Ouch, no. They should be showing as #0 #1 #2 if they are all edits to the same page.
The diamond indicates that you're a moderator or better
See admindisplaychar in the reference on how to disable that
@SeanFromIT
Starbeamrainbowlabs
@sbrl
Could you PM me your pageindex.json file please? You can also contact me privately using any of the other methods on my website, such as Keybase or Email
Starbeamrainbowlabs
@sbrl
Thanks, @SeanFromIT. Bug squashed! Another point release incoming.
Weird workaround: Create a page called history, and page revision numbers will increment normally.
Starbeamrainbowlabs
@sbrl
I can't wait to release v0.20, as I've got a seriously cool new search system. I've blogged about the new backend here, if anyone's interested in the details: https://starbeamrainbowlabs.com/blog/article.php?article=posts/372-next-gen-search-1-backend-storage.html