
Discuss Boost.Geometry (Generic Geometry Library) - https://github.com/boostorg/geometry


@vissarion Thanks for pointing that out; my response was incomplete there. The O(n) estimate given was based on the assumptions that

a) if I don't end up implementing an incremental algorithm that does this better, I would go with some additional Delaunay triangulation vertex deletion algorithm with an expected runtime that is some function of the degree of the deleted vertex, and

b) this degree would be much smaller than n, so O(n) would be the time complexity of deleting the vertex from a vector.

Hello, I am a GSoC 2019 student working on `boost::astronomy`. I am integrating the astronomical coordinate system with `boost::units`. Currently the coordinate system uses Boost.Geometry as the underlying framework and uses `geometry::model::point` to store the coordinate data. The underlying data is always converted to SI units first and then stored.

I am facing some confusion and would like to get some suggestions. Suppose two coordinates are stored: `(100cm, 200cm, 300cm)` and `(400cm, 500cm, 600cm)`. The expected cross product result would be `(-30000cm, 60000cm, -30000cm)`.

But as I mentioned earlier, all the coordinates are stored in SI units, so the actual points inside the class would be `(1m, 2m, 3m)` and `(4m, 5m, 6m)`, and their cross product would be `(-3m, 6m, -3m)`. Converting it back to `cm` while returning, because the user originally stored it in `cm`, will result in `(-300cm, 600cm, -300cm)`.

Please correct me where I am wrong and tell me what the correct answer is.

Conceptually, you're multiplying two vectors together to get a third, with a direction perpendicular to the first two and a magnitude equal to the product of the magnitudes of the two vectors times the sine of the angle between them. Angles are dimensionless, but when you multiply lengths you get areas.

For Euclidean geometry, anyway.

Hi, quick question. This is unrelated to GSoC, but since the 1.71 deadline is approaching, I'd be very thankful if anybody could take a look at #584 (pull request). We use it in our codebase; it is a slight implementation change to the matrix_transformer strategy that allows it to work for arbitrary dimensions (rather than just 3 to 3, 3 to 2, 2 to 2) and allows composing transformers across dimensions. A test case is included.

@awulkiew can you suggest how I would add support for these types in bg::intersection? (E.g. 1. implement circularstring, compoundcurve and curvepolygon concepts; 2. implement bg::models for them; 3...n: what else do I need to implement to be able to use bg::intersection for them? Also, I guess calculation operations with circularstring etc. require a parameter for how many points to approximate the arc?)

To be clear: as a first step, I would appreciate knowing the minimum work required to make bg::intersection work for these types/concepts.

(E.g. I have already implemented a circularstring concept locally, but, as it consists of three points, it only works for converting to WKB, where these three points are the only thing required.)

@meastp Separate models for them are the most straightforward solution indeed. Besides that:

You have to implement all strategies that are used in intersection, i.e. strategies calculating:

- the intersection of 2 segments
- whether a point is inside a polygon
- on which side of a segment a point lies
- area
- envelope and expand for circular segments
- whether a box is disjoint from a circular segment
- possibly some others...

Many strategies would stay the same, e.g. regarding point/point and box/box operations.

The problem I see is that internally we consider segments as defined by 2 points, but in this case a segment is defined by 3 points, so I'd expect that to be an issue. You'd probably need some kind of thin wrapper around 3 points defining a segment between the 2 first points. And this `circular_referring_segment` should be passed into the utilities/strategies in algorithms instead of e.g. 2 points, a linear segment, or `referring_segment`. So you'd also need some utility returning the regular/linear `referring_segment` or the `circular_referring_segment` based on the tag of a geometry. Unless I'm missing something and it is possible to define a circular segment based on 2 points? If the above fails and parts of circular segments cannot be represented as described above, you'd have to write new versions of the algorithms for these geometries as well as the strategies.

But I think it should be possible to do it with 1 and 2. You could start with something simpler than the intersection of 2 polygons, e.g. `relate(point, polygon)`, which needs fewer strategies. Or even `area(polygon)`, which needs only 1 strategy, so you could straight away test whether you need a separate algorithm or whether `circular_referring_segment` would be enough.

If you define a separate tag for your geometry, e.g. circular_polygon_tag, and call an algorithm, the compiler should complain that the dispatch::algorithm is not implemented for this tag. Then you have to decide whether to reimplement the algorithm or simply write a dispatch for your tag and derive from the old implementation.

Then the compiler will start to complain about default strategies not implemented (or maybe first it will complain about that). At that point you have to implement them as described above.

@awulkiew thank you for the thorough description :) I agree with you that 2 points alone aren't sufficient to represent an arc: three points, or two points plus a radius, are needed (and I think three points is the better option, because tiny floating-point rounding makes 2 points + radius fluctuate).

I see @meastp follows SQL MM/OGC terminology, which is good as it helps to have such a common reference when discussing or reviewing BG against existing implementations like PostGIS or SQL Server.

Regarding the number of points in a `CircularString`: AFAIK, the only requirement is that it has an odd number of points, so it is composed of connected 3-point arc segments:

- 3 points means it is a single arc
- 5 points means it is two arc segments
- 5 points with the 2nd in the middle of the 1st arc and the 4th in the middle of the 2nd arc compose a circle

etc.

Sorry if I wasn't clear enough. I wasn't talking about representing circular segments in a geometry like a circular string. I was specifically talking about representing a segment internally in algorithms, e.g. when you pass them into the intersection strategy deep inside a set operation.

Right now we assume that a segment is defined by 2 points: we take them directly from the geometry, passing pointers/references into referring_segment and passing the 2-point segment into the intersection strategy. We then assume/expect that the result of the intersection of 2 segments is either 0 or 1 point, unless they are collinear, in which case the number is 2.

With circular segments, both of these assumptions/expectations are incorrect. The shape of a segment between any two points is defined by a third point (and in the general case, like Bezier curves, probably by more points around it) either before or after a pair of points, and because of the non-linear shape the number of intersection points of circular segments may be 0, 1 or 2 even if they are not collinear. This means that at least some parts of the algorithms would have to be modified as well.

@meastp Hi, you may count me among those who are interested in CurvePolygon. The codebase at my job has demand for that kind of geometry, and maybe I can contribute to it during work hours in the future. More specifically, we would need it in 2D, Cartesian coordinates, and we would probably use the algorithms area, envelope, within (point in CurvePolygon) and intersects (CurvePolygon and CurvePolygon).

Btw, what's the state of WKB? Do you think we should release it?

@tinko92 I think you are much more familiar with the code base than I am, but I have started with tiny baby steps to get familiar: creating tags, concepts and models for circular_segment and circularstring. What is a good algorithm to start implementing, to get familiar with the strategies architecture? I'm currently trying intersects, but there are a lot of algorithms behind it...

Those are often used as steps of more complex algorithms. Starting with such simple, self-contained algorithms will confirm the new circular types are recognised as geometries and make the work easier to review, before jumping to complex behemoths ;)

@meastp I also think starting from `envelope` or `length` should be the entry point. `area` could also be not that difficult (maybe replacing the determinants in the shoelace formula with the area under the circular arc); for `centroid` I am not sure, but those are a good start. The side predicates are implemented by @tinko92 here: boostorg/geometry#617. In general, let me know if I can help too.

The `convex hull` should be as simple as (or simpler than, because the arcs are sorted) the one for the union of circles; see for example https://hal.inria.fr/inria-00074391/document. So computing the intersection is algorithmically much simpler, but the code in Boost.Geometry is maybe too much to start with.

@meastp I will look into this too, after finishing work on some branches I'm trying to get merged. Will you work on this on your fork of Boost.Geometry? Then I'll know where to look for it. Also, thanks for thinking I'm familiar with the code base ;) Incidentally, just after I read that, I noticed that I had unknowingly reimplemented an existing algorithm. I will familiarize myself with the implementations of algorithms for polygons and I'll try to think about how some of that can be done for CurvePolygon.

Speaking of which, @vissarion: my implementation of uniform point distribution for LineStrings caches intermediate lengths and segments so that a single point can be interpolated in logarithmic, rather than linear, time using binary search over the accumulated lengths. I now use line_interpolate for generating the point on a single segment.

Do you think it would be sensible to make that caching in some way an option for line_interpolate for the benefit of users who repeatedly interpolate single points with varying max_distance over the same LineString? I am asking because I do not know what this algorithm is generally used for.

@tinko92 Regarding caching, I do not know of any particular use case for `line_interpolate`; I just implemented it since it seems useful in GIS and there are many implementations of it in various libraries (`PostGIS` included). But since you have already implemented it, I think it doesn't hurt to add this functionality to `line_interpolate`, as long as it does not overcomplicate the user interface.

Regarding uniform points generated on a linestring, thanks for sharing your approach. I haven't reviewed your PR yet ;) An alternative would be to generate random numbers in [0, len] (where len is the length of the input linestring), sort them, and then simply traverse the linestring, creating points at the distances from the sorted list. That should be more efficient, as far as I can see.

Hi @vissarion, thanks for your thoughts. I look forward to discussing this during the review. Regarding ordering random numbers: this can probably be faster in some use cases; I think the asymptotic time complexity should be O(n * log(n) + m) with O(n) auxiliary space, where n is the number of samples and m is the length of the LineString.

I think my approach is in O(m + n * log(m)) with O(m) auxiliary space, so it depends on whether n is larger than m.

The issue, though, as far as I can see, is that this is incompatible with the current interface, which is modeled after the random distribution classes in Boost.Random, where samples are always generated one at a time (at least as far as the distribution interface is concerned).
