These are chat archives for kirkshoop/kirkshoop.github.io
Reading through the Single description, it isn't clear to me how one would build in task cancellation. A future is both a way to transport a value as well as a handle to control the task that generates it. Splitting out
execute_on() would seem to make it impossible to change the execution context without causing a task to be scheduled twice. Also, you mention "some will poll on a thread" - I hope no continuations actually do this. The then() callback should be scheduled for execution directly from the context of the task, but the context in which it executes needs to be defaultable from the promise and overridable by the code attaching the continuation.
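A minimal sketch of that then() policy (all names here are invented for illustration, not the proposal's types): the promise carries a default context for its continuations, and the code attaching the continuation may supply an override.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch: the promise carries a default context for
// continuations; the attaching code may override it. then() schedules the
// continuation directly from the task's context onto the chosen one.
using context = std::function<void(std::function<void()>)>;

struct promise_like {
    int value;
    context default_ctx;  // defaultable from the promise

    void then(std::function<void(int)> k, context override_ctx = nullptr) {
        context& ctx = override_ctx ? override_ctx : default_ctx;
        int v = value;
        ctx([k, v] { k(v); });  // hop onto the selected context to deliver
    }
};
```

The point of the sketch is only the selection rule: the override wins when present, otherwise the promise's default is used.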
I very much like the idea of trying to split out concepts and algorithms - but I do not see how to do so. For example, the requirements of get() are:
The associated task is guaranteed to be scheduled on a context other than the calling context, or can be promoted to execute in the calling context. To be promotable a task must either be fully resolved (a nullary function) or all arguments must be supplied by promotable tasks, and must not be required to be serialized with other tasks or dependent on a particular execution context.
So is this a concept? A precondition? In practice, using get() or wait() within a tasking system is incredibly error-prone. Even if we introduce the notion of a "gettable" future, what restriction do we put on it to guarantee the above? If we want to support systems without threads then it has to be promotable, but if instead we want to use get() as a synchronization primitive to replace condition variables, then we absolutely don't want that requirement.
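The promotability rule quoted above can be modeled as a toy recursive check (hypothetical names; this only captures the nullary/arguments/pinned-context parts of the rule, not the serialization requirement):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy model: a task is promotable to the calling context if it is fully
// resolved (nullary) or all of its arguments come from promotable tasks,
// and it is not pinned to a particular execution context.
struct task {
    bool pinned_to_context = false;             // requires a specific context
    std::vector<std::shared_ptr<task>> inputs;  // tasks supplying arguments
};

bool promotable(const task& t) {
    if (t.pinned_to_context) return false;
    for (const auto& in : t.inputs)
        if (!promotable(*in)) return false;     // every input must qualify
    return true;                                // nullary tasks qualify
}
```

A single pinned input anywhere in the argument tree makes the whole task non-promotable, which is what makes the requirement hard to state as a simple concept.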
We clearly have too many "things" mixed up in the idea of a future, and though I'm sure a Single is a useful construct and might make a better basis type for some constructs, my initial impression is that it is of limited use.
lifetime::stop() - if I understand correctly, this goes in the wrong direction. It is forcing a broken promise as opposed to informing upstream tasks that a value is no longer necessary. Regarding
consume_on() - let's say I have a process which is producing values that generally should be consumed on the UI thread. So I return a single as
return http.get() | consume_on(UX);, and this works for most cases. However, the client wants to attach a performance-critical continuation that could be scheduled for immediate execution. So they have something like
result | consume_on(immediate_scheduler), so the entire pipe looks like:
http.get() | consume_on(UX) | consume_on(immediate), with the intent that the second consume_on() replaces the first. Defaulting to immediate execution is error-prone. See my writeup here. get() and wait() are "just an algorithm" - but very vexing to use correctly. Implemented as a condition variable they are an immediate deadlock if you are not in a threaded environment. Used within a thread pool they can create very difficult to diagnose deadlocks... My understanding is Microsoft goes to considerable lengths to be able to promote tasks created from std::async() so they can execute those tasks within a thread pool and not deadlock; however, that doesn't work in the presence of continuations (at least there is no reasonable way to implement it that I'm aware of). This is why get() and wait() were not included in our proposal, but from the notes from the standard meeting there is a desire for having them available to be used as one would a condition variable, as a lower-level synchronization mechanism... I can implement that, but I'm really not certain it is a good idea.
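The promotion strategy mentioned above can be sketched as a toy (hypothetical names, not Microsoft's actual mechanism): if the deferred work may legally run on the calling thread, get() executes it inline instead of blocking on a condition variable, which is exactly the block that deadlocks a single-threaded environment.

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>

// Toy sketch: get() promotes the task into the calling context when that is
// allowed, instead of blocking on a condition variable (which can never be
// satisfied when there is no other thread to resolve the value).
template <class T>
struct deferred {
    std::function<T()> work;  // not yet scheduled anywhere
    bool promotable = true;   // safe to execute on the calling thread?

    T get() {
        if (promotable) return work();  // promote: run inline, never block
        // A real implementation would park the calling thread here; in a
        // single-threaded environment that wait would deadlock.
        throw std::logic_error("get() would block with no thread to run the task");
    }
};
```

As the message notes, once continuations are attached this inline promotion stops being viable, which is the crux of the objection to get()/wait().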
Lifetime::stop() is bound to the consumer. The stop signal travels from the consumer to the producer.
In normal operation the SingleSubscription enforces the scope contract by calling lifetime.stop() itself.
Cancelation from an algorithm (e.g. timeout(duration)) or from user code calling lifetime.stop() performs a well-defined race to propagate the stop signal from the consumer to the producer.
When there are multiple algorithms, there can be multiple nested Lifetime 'scopes', and each lifetime stops its own scope. For example, a
take_until(SingleDeferred) algorithm that receives the
value() from the other SingleDeferred will call
destination.error(), which then calls
lifetime.stop() when it returns.
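A minimal sketch of that well-defined race (invented names, not the proposal's types): stop() can be reached from several places at once - an algorithm like timeout(), user code, a nested scope - so an atomic exchange ensures only the first caller propagates the signal upstream.

```cpp
#include <atomic>
#include <cassert>
#include <functional>

// Toy sketch of the consumer-to-producer stop signal: the atomic exchange
// makes the race well-defined -- exactly one caller of stop() wins and
// invokes the producer-side cancellation hook.
struct lifetime {
    std::atomic<bool> stopped{false};
    std::function<void()> on_stop;  // producer-side cancellation hook

    void stop() {
        if (!stopped.exchange(true) && on_stop) on_stop();
    }
};
```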
the entire pipe looks like: http.get() | consume_on(UX) | consume_on(immediate) with the intent that the second consume_on() replaces the first.
consume_on() does not override; in the above, the
| consume_on(immediate) would always be a noop, because the
immediate context is a noop. The value would still be moved onto the UX thread.
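A minimal model of that behavior (invented types, not the proposal's): each consume_on() appends a delivery hop through the given context, so a later consume_on(immediate) adds an empty hop rather than replacing the earlier one.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Toy model: consume_on() adds a hop through the given context on the way
// to the consumer. An immediate context is an empty hop, so
// "| consume_on(immediate)" is a noop, not an override -- the value is
// still delivered through the earlier UX hop.
using context = std::function<void(std::function<void()>)>;
using single_int = std::function<void(std::function<void(int)>)>;

single_int consume_on(single_int src, context ctx) {
    return [src, ctx](std::function<void(int)> sink) {
        src([ctx, sink](int v) { ctx([sink, v] { sink(v); }); });
    };
}
```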
consume_on() should be used at the largest scope possible. Only introduce queueing when required.
return http.get() | consume_on(UX);
I would not recommend this pattern. A function that returns a stream should leave it on its natural context; only the caller knows if the context needs to be shifted.
get() and wait() are "just an algorithm" - but very vexing to use correctly.
get() and wait() were not included in our proposal, but from the notes from the standard meeting there is a desire for having them available to be used as one would a condition variable, as a lower-level synchronization mechanism... I can implement that, but I'm really not certain it is a good idea.
I think that using concepts to extract the algorithms makes
wait() more palatable to include.
Perhaps using awkward names, similar to
reinterpret_cast<>(), would make their usage stand out in code.
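Naming sketch only (everything here is hypothetical): give the blocking algorithms a deliberately awkward, searchable name so that - like reinterpret_cast<>() - their use stands out in review.

```cpp
#include <cassert>

// Hypothetical naming sketch. A real implementation would park the calling
// thread; this toy just resolves a synchronous source so the example is
// self-contained. The point is the loud, greppable name.
template <class Single>
auto blocking_unsafe_get(Single s) -> decltype(s()) {
    return s();
}
```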
Defaulting to immediate execution is error-prone.
The power of the concepts is that this tradeoff can be explored in different implementations.
C++ usually favors designs where the default is risky but performant (STL containers are not thread-safe). However,
std::shared_ptr always uses interlocked ref-counts, which is slower.
In this case, promises that default to immediate, trampoline, and task execution can all coexist and compose using the same algorithms. Over time, this will result in guidelines and perhaps tooling (GSL) that will use the best strategy for the task at hand.
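A minimal sketch of that coexistence (hypothetical names): two execution policies behind one interface - a thread-pool "task" policy would slot in the same way - with the algorithm written once against that interface.

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Toy sketch: interchangeable execution policies sharing one interface.
struct immediate_policy {
    void schedule(std::function<void()> f) { f(); }  // run inline
};

struct trampoline_policy {
    std::queue<std::function<void()>> q;
    void schedule(std::function<void()> f) { q.push(std::move(f)); }
    void drain() {
        while (!q.empty()) { auto f = std::move(q.front()); q.pop(); f(); }
    }
};

// One algorithm, written once, composing with any policy.
template <class Exec>
void then_value(Exec& e, int v, std::function<void(int)> k) {
    e.schedule([v, k] { k(v); });
}
```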