[setuptools-scm](https://pypi.org/project/setuptools-scm/) is the best alternative. I haven't completely switched to it for Satpy yet (PR waiting to be merged), but I'd like to use it for vispy too. Let me know what you think, or if you have other ideas/concerns.
@kmuehlbauer Yeah no problem. I typed that out before I had to go offline so left out some details.
Right now we manually set the version of vispy in `vispy/__init__.py`. When we make a release we update it (e.g. to `0.6.0`), make the release, then update it again to the next development version.
With versioneer or setuptools-scm, the tools are set up to look at the latest git tag matching a specific pattern (ex. `vX.Y.Z`). If the current commit is the same as the git tag, then `vispy.__version__` will be the version in the tag (ex. `0.6.0`). If the current commit is a couple commits beyond a tag, then you'll get something like `0.6.0+2.abc123`, where `abc123` is the current commit hash. If the current working directory is dirty (uncommitted changes), then you'll get a version like `0.6.0+abc123.dirty` or something like that.
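The behavior above can be sketched as a small function. This is purely illustrative and not either tool's real algorithm (their exact version formats differ slightly); it just mirrors the examples given:

```python
def scm_version(tag, distance, commit, dirty):
    """Roughly mimic how versioneer/setuptools-scm derive a version string.

    Illustrative sketch only: exactly on a tag -> the tag's version;
    some commits past a tag -> add the distance and commit hash;
    uncommitted local changes -> append a "dirty" marker.
    """
    version = tag.lstrip("v")
    if distance:
        version += f"+{distance}.{commit}"
    if dirty:
        version += ".dirty" if distance else f"+{commit}.dirty"
    return version

print(scm_version("v0.6.0", 0, "abc123", dirty=False))  # exactly on the tag
print(scm_version("v0.6.0", 2, "abc123", dirty=False))  # two commits past it
print(scm_version("v0.6.0", 0, "abc123", dirty=True))   # dirty working dir
```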
This means that whenever we need to make a release, especially because of the automated PyPI deployment I just got working, all we have to do is `git tag -a v0.7.0 -m "Version 0.7.0"; git push --follow-tags`. The CIs will clone the repository, use the current git tag as the version of the package, build the sdist/wheel for that version, then deploy it. Then, as we merge more PRs/features, the version we install from GitHub will automatically increase from the last released version.
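For reference, adopting setuptools-scm is mostly configuration. A minimal sketch of what the `setup.py` change could look like (this is the standard setuptools-scm usage pattern, not vispy's actual setup file):

```python
# setup.py (sketch) -- let setuptools-scm derive the version from git tags
from setuptools import setup

setup(
    name="vispy",
    use_scm_version=True,                 # version derived from latest vX.Y.Z tag
    setup_requires=["setuptools_scm"],
)
```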
The `sdist` isn't built first, and the extra files needed for versioneer to work don't meet standard style rules (IIRC).
`.obj` files have normals defined, so wouldn't it be advantageous to provide them?
Even if we load the `.obj` and all of the information from the `.obj` file, there is no builtin Visual that takes mesh information and a texture...right?
I think there are some limitations in the current system:
The reader returns `(vertices, faces, normals, texcoords)` or something similar. This assumes texcoords are vertex attributes (see above). We probably need to be more flexible. More generally, there would be arrays of data (vertex array, normals array, texture coordinates array) and arrays of indices telling how the arrays of data are mapped to the face corners (currently only the "face indices" are well supported).
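To make the "arrays of data plus arrays of indices" idea concrete, here is a rough sketch. The class name and layout are my assumptions (not an existing vispy API): each attribute gets its own data array and its own per-corner index array, matching `.obj`'s independent `v/vt` face indices:

```python
import numpy as np

class FlexibleMeshData:
    """Sketch: per-attribute data arrays plus per-corner index arrays."""

    def __init__(self, vertices, faces, texcoords=None, texcoord_indices=None):
        self.vertices = np.asarray(vertices, dtype=np.float32)  # (n_verts, 3)
        self.faces = np.asarray(faces, dtype=np.uint32)         # (n_faces, 3)
        self.texcoords = (None if texcoords is None
                          else np.asarray(texcoords, dtype=np.float32))
        # Indices mapping each face corner to a row of `texcoords`;
        # independent from `faces`, as in .obj "v/vt" face entries.
        self.texcoord_indices = (None if texcoord_indices is None
                                 else np.asarray(texcoord_indices, dtype=np.uint32))

    def corner_positions(self):
        # (n_faces, 3, 3): vertex positions gathered per face corner
        return self.vertices[self.faces]

    def corner_texcoords(self):
        # (n_faces, 3, 2): texture coordinates gathered per face corner
        return self.texcoords[self.texcoord_indices]
```

Normals would follow the same pattern (a `normals` array plus `normal_indices`), covering `.obj`'s full `v/vt/vn` corner syntax.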
Internal representation / API:
These are some thoughts and suggestions I could gather. Not sure if it's totally clear/correct. Let's discuss.
@asnt What are your thoughts on depending on a library like https://pypi.org/project/PyWavefront/?
If a user has it installed then they can load `.obj` files; if not, then they can't. We could keep the "dumb" reader for now.
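That optional-dependency idea could look something like this. The fallback reader and the integration point are hypothetical; `pywavefront.Wavefront` is PyWavefront's actual entry point:

```python
def _simple_obj_reader(path):
    """Stand-in for the existing "dumb" .obj reader (hypothetical)."""
    return ("simple-reader", path)

def load_obj(path):
    # Use PyWavefront when it is installed; otherwise fall back to the
    # basic built-in reader so .obj loading keeps working either way.
    try:
        import pywavefront
    except ImportError:
        return _simple_obj_reader(path)
    return pywavefront.Wavefront(path)
```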
@asnt What I want to avoid is maintaining our own functionality and/or library; I would hope to depend on someone else's work. Granted, it is entirely possible that pywavefront will die and go away, BUT I would much rather contribute to an open source project, make it better, and keep it alive that way. Looks like pywavefront has a roadmap that includes using numpy arrays: pywavefront/PyWavefront#92
And from what I can tell it is possible to access the individual elements (vertices, etc) as lists.
Side note: I have an optimization for your code: if you read all the vertices first and determine how many there are, you can pre-initialize (`np.empty`) the arrays for all the other types, so you don't waste memory/performance creating lists and converting them to numpy arrays. You can instead set the value at each index: `norms[idx] = value`.