David Jencks
@djencks_gitlab
Thanks! Do you expect logging to get into 3.0? That's a much bigger effort than upgrading asciidoctor. I'm a bit bogged down in my efforts there, but might get farther with advice.
Dan Allen
@mojavelinux
3.0 or 3.1. It depends on how long it takes
David Jencks
@djencks_gitlab
Thanks!
Dan Allen
@mojavelinux
:+1:
David Jencks
@djencks_gitlab
What ties all the packages in antora together so that running yarn at the base populates all the node_modules in the packages?
Ewan Edwards
@eskwayrd
package.json. If your project depends on Antora, your package.json file describes it as a dependency. When npm resolves the Antora module, it finds its package.json file and resolves its dependencies, and so on.
For this, yarn is a synonym for npm. yarn used to be notably faster than npm, but npm has caught up and performance is similar-ish.
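For example, a site that consumes Antora might declare something like this in its package.json (the version ranges here are just illustrative):
  {
    "name": "my-docs-site",
    "devDependencies": {
      "@antora/cli": "^2.0.0",
      "@antora/site-generator-default": "^2.0.0"
    }
  }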
David Jencks
@djencks_gitlab
yes. I'm trying to understand Antora's project structure itself. I've imitated a lot of it but running yarn, npm, or lerna bootstrap isn't populating the packages/* node_modules. Maybe I have a typo....
no typo I can see :-(
Ewan Edwards
@eskwayrd
Wherever you run npm/yarn, unless you specify the global flag, the node_modules folder is created/updated within the current directory, not in any packages or other folder.
David Jencks
@djencks_gitlab
well, something populates Antora's packages/*/node_modules.... I'm hoping to find out what. Lerna seems like it's supposed to, but it's not working for me yet.
Ewan Edwards
@eskwayrd
Are you building a package for distribution? If not, there is no packages folder. When I install Antora, all of its packages live under @antora, with this folder structure:
$ tree -d @antora/
@antora
├── asciidoc-loader
│   └── lib
│       ├── converter
│       ├── image
│       ├── include
│       ├── util
│       └── xref
├── cli
│   ├── bin
│   └── lib
│       └── commander
├── content-aggregator
│   └── lib
├── content-classifier
│   └── lib
│       └── util
├── document-converter
│   └── lib
├── expand-path-helper
│   └── lib
├── navigation-builder
│   └── lib
├── page-composer
│   └── lib
├── playbook-builder
│   └── lib
│       └── config
├── redirect-producer
│   └── lib
├── site-generator-default
│   └── lib
├── site-mapper
│   └── lib
├── site-publisher
│   └── lib
│       └── providers
│           └── common
└── ui-loader
    └── lib
David Jencks
@djencks_gitlab
yes. I'm starting with checking out antora from gitlab. When I run yarn in the checkout, magically the packages/*/node_modules get populated. I'm setting up another project where I want that to happen too :-)
Ewan Edwards
@eskwayrd
This seems like the bit of the package.json file that you may be missing:
  "workspaces": [
    "packages/*"
  ]
But, I've never published an NPM package before, so I don't know all of the details.
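Roughly, I'd expect the root package.json to look something like this (just a sketch based on the yarn workspaces docs, not Antora's actual file):
  {
    "name": "my-project",
    "version": "1.0.0",
    "workspaces": [
      "packages/*"
    ]
  }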
David Jencks
@djencks_gitlab
I thought that was part of it too, but I have it and it's still not enough.
Ewan Edwards
@eskwayrd
Does your lerna.json file have this line:
  "useWorkspaces": true,
David Jencks
@djencks_gitlab
yes
Ewan Edwards
@eskwayrd
When I check out the Antora repo and run npm i, the only node_modules folder that gets created under packages is asciidoc-loader/test/fixtures/node_modules.
David Jencks
@djencks_gitlab
hmmm.... something populated all those node_modules, apparently not what I thought.
Ewan Edwards
@eskwayrd
I suspect that the packages folder is using lerna's workspaces features, which is (possibly greatly over-simplified) a way to declare that dependencies are local/unpublished: they are the canonical source for those modules.
David Jencks
@djencks_gitlab
yes, that's part of what I need...
David Jencks
@djencks_gitlab
could I ask what happens when you run yarn i instead of npm i?
Ewan Edwards
@eskwayrd
That did populate a number of node_modules folders:
$ find packages -type d -name 'node_modules'
packages/asciidoc-loader/test/fixtures/node_modules
packages/content-aggregator/node_modules
packages/playbook-builder/node_modules
packages/ui-loader/node_modules
packages/page-composer/node_modules
Does your lerna.json have this line:
  "npmClient": "yarn",
David Jencks
@djencks_gitlab
yes
Ewan Edwards
@eskwayrd
Also, package.json likely needs to have "private": true set.
David Jencks
@djencks_gitlab
got that :-). I notice lerna bootstrap says lerna info bootstrap root only which doesn't seem right.
Ewan Edwards
@eskwayrd
That's what I see too.
At least, on the Antora repo.
When I run lerna bootstrap, the yarn install happens after the lerna info bootstrap root only line. I think yarn is populating the node_modules folders for each package.
David Jencks
@djencks_gitlab
I thought that was what was supposed to happen....
Ewan Edwards
@eskwayrd
Yep, deleting all of the node_modules folders within the Antora repo, and running yarn re-populates all of the node_modules folders.
For your new project, do your package folders each contain a package.json file?
David Jencks
@djencks_gitlab
yes. I wonder if all this is working properly and I've misdiagnosed the error I'm getting.... perhaps all the links are present but I'm not referencing the pipeline project properly from Antora's command line.
Ewan Edwards
@eskwayrd
Note that yarn de-dupes dependencies. If your project's package.json requires package A, and a package in the packages folder also requires A, the dependency is hoisted into the project's root node_modules folder only. Dependencies unique to a package in the packages folder get managed in that package's own node_modules folder.
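For example, if both the root project and a (made-up) packages/foo depend on A, but only foo depends on B, you'd typically end up with:
  node_modules/A              <- hoisted, shared by root and foo
  packages/foo/node_modules/B <- unique to foo, kept per-package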
David Jencks
@djencks_gitlab
Looking deeper into the root node_modules, I suspect yarn/lerna are working properly and there's something else wrong, such as how I'm trying to invoke antora....
I have to leave for a while, thanks for your help! I love that article you found, it helps me understand what's going on much better!
Ewan Edwards
@eskwayrd
:ok_hand: It's been interesting poking into these bits.
David Jencks
@djencks_gitlab
I've run into a problem making antora generate pdfs, adapting asciidoctor-pdf.js. Single pages work pretty well, and I've been working to print larger bits by manually constructing pages that include other pages; basically replacing the xrefs in a nav file with includes. This is similar to what the Uyuni (formerly SUSE Manager) docs do, although they use asciidoctor-pdf (ruby). The pdfs look pretty good, with page numbers, TOC, index, etc., but the internal xrefs don't work! This isn't surprising, since they started out as between-page xrefs. Does anyone have any ideas how to detect and rewrite these links? For instance, do includes leave some trace in the asciidoctor parse tree that could be detected by a tree processor? Can the include processor attach a map of included file name to first included section? Or could it add an anchor for the page start?
I guess you could use a different strategy if you are running in this "mode"
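For the anchor idea, I'm picturing something along these lines (an untested sketch against the plain Asciidoctor.js extension API, not Antora's own include processor; the anchor naming is made up):
  const fs = require('fs')
  const ospath = require('path')

  module.exports.register = (registry) => {
    registry.includeProcessor(function () {
      this.handles((target) => target.endsWith('.adoc'))
      this.process((doc, reader, target, attrs) => {
        // resolve the included page relative to the document's base dir
        // (real resolution would need to follow Antora's resource IDs)
        const file = ospath.resolve(doc.getBaseDir(), target)
        const contents = fs.readFileSync(file, 'utf8')
        // prepend an anchor derived from the target so intra-document
        // xrefs have a stable id to point at once the pages are merged
        const anchorId = target.replace(/[/.]/g, '-')
        reader.pushInclude('[[' + anchorId + ']]\n' + contents, file, target, 1, attrs)
      })
    })
  }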
danyill
@danyill
@djencks_gitlab thanks for your work on pdf, I'm looking forward to trying it at some stage.
David Jencks
@djencks_gitlab
@Mogztter Yes, so far I'm using the built-in antora xref processing. Basically I'm wondering how to gather the information I'd need to transform what are now intra-document references. Does the parse tree retain indications of inclusions, or do I have to write an include processor and gather that myself during inclusion?
David Jencks
@djencks_gitlab
When I look at that in a tree processor, it's undefined :-(
I haven't found any information in the accessible parse tree that indicates inclusions happened, so I'm going to try to modify the antora include processor.
David Jencks
@djencks_gitlab
I was able to find enough information in an include processor and use it in the xref processor so that intra-document pdf links now get transformed and work (at least sometimes :-).
I'm really wondering, though, what should happen to links outside the document to other parts of the "original" site? For instance, if each module is glommed into one pdf, what should happen to an xref from one module to another? I guess a starting point would be to link to the published site url for the html page.