func getPeopleAPI(id: Int) -> IO<Never, SWAPIPeopleResponse?> {
    let components = URLComponents(string: "https://swapi.co/api/people/\(id)")!
    return URLSession.shared
        .dataTaskIO(with: components.url(relativeTo: nil)!)
        .map { _, data in data }
        .map { data in
            try? JSONDecoder().decode(SWAPIPeopleResponse?.self, from: data)
        }
        .handleError({ _ in nil })^   // decoding failures become nil
        .mapLeft({ _ in abort() })    // collapse the error channel to Never
}
On the other hand, it seems odd to me to model your problem as a right-side Optional (maybe I'm wrong and you do want to wrap this network call in IO).
You are working with: ERROR + (ERROR + VALUE) = ERROR + ERROR + VALUE, so you can end up with two possible errors (one from IO, another from the Optional, losing its type). Have you thought about changing the API to IO<ApiError, SWAPIPeopleResponse>?
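For illustration, that suggestion could look roughly like this (ApiError, the decode helper and the error-casting mapLeft are made up for the sketch; the rest mirrors the snippet above, and SWAPIPeopleResponse is assumed to be Decodable):

import Bow
import BowEffects
import Foundation

// Hypothetical error type, just for the sketch.
enum ApiError: Error {
    case network(Error)
    case decoding
}

func getPeopleAPI(id: Int) -> IO<ApiError, SWAPIPeopleResponse> {
    let url = URL(string: "https://swapi.co/api/people/\(id)")!

    // A decoding failure becomes a typed error instead of nil.
    func decode(_ data: Data) -> IO<Error, SWAPIPeopleResponse> {
        guard let response = try? JSONDecoder().decode(SWAPIPeopleResponse.self, from: data) else {
            return IO<Error, SWAPIPeopleResponse>.raiseError(ApiError.decoding)^
        }
        return IO<Error, SWAPIPeopleResponse>.pure(response)^
    }

    return URLSession.shared
        .dataTaskIO(with: url)
        .map { _, data in data }
        .flatMap(decode)^
        .mapLeft { error in (error as? ApiError) ?? ApiError.network(error) }
}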
@miguelangel-dev you are right, I shouldn't model it like that. For the network call, I should pass the error to the caller (that is what I will do at a later time), but I still need a way (the one you've given me) to go back to a Left of Never, because in my redux implementation I don't want the core implementation to know about errors: errors should be handled early (in the reducer part) and modelled as a state published by the store.
I'll show you when I'm done with the whole implementation 🙂
I will change it to a fatalError as suggested.
Thanks to you two for your help
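A minimal sketch of that change on the tail of the pipeline above (the squashErrors helper name is made up; the point is only replacing abort() with fatalError when collapsing the error channel to Never):

// Errors are handled just above, so reaching mapLeft here would be a programming error.
func squashErrors<A>(_ io: IO<Error, A?>) -> IO<Never, A?> {
    io.handleError({ _ in nil })^
        .mapLeft({ _ in fatalError("unreachable: errors already handled") })
}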
Hi Marcelo, it depends on the kind of project you are looking for. For now, as OSS, we are working on refactoring another library, nef, from Bash to FP in pure Swift: https://github.com/bow-swift/nef - and you can find the project using an FP architecture thanks to Bow: https://github.com/bow-swift/nef/tree/develop/project
It is nice because in the same project you can find an example of how to write a functional library (API) and how to use it (CLI) in pure Swift with Bow
hi everyone!
what would be the right way to execute an IO on a background queue and get back to the main queue to use the result?
I tried
effect.attempt(on: .global(qos: .background)).unsafeRunAsync(on: .main,
    { resultAction in
        print(resultAction)
        self.send(resultAction.rightValue) // Should never error out
    })
but it seems to block the UI thread
let action = IO<Never, Action>.var()
let sendEffect = binding(
    continueOn(.global(qos: .userInitiated)),
    action <- effect,
    |<-ConsoleIO.print("action is \(action.get)"),
    continueOn(.main),
    |<-IO.invoke { self.send(action.get) },
    yield: ()
)^
try! sendEffect.unsafeRunSync()
attempt is synchronous. Even though you are passing a different queue to run it, it will block the current queue until it finishes, and then do the unsafeRunAsync. The second one should work fine if you call sendEffect.unsafeRunAsync instead of unsafeRunSync. It is the same reason: it blocks because it is synchronous.
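In other words, with the sendEffect comprehension defined above:

// Blocks the current queue until the whole IO has finished:
try! sendEffect.unsafeRunSync()

// Runs asynchronously; the result is delivered to the callback:
sendEffect.unsafeRunAsync({ _ in })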
public func send(_ action: Action) {
    let effects = self.reducer(&self.value, action)
    print(effects)
    effects.forEach { effect in
        let action = IO<Never, Action>.var()
        let sendEffect = binding(
            continueOn(.global(qos: .userInitiated)),
            action <- effect,
            |<-ConsoleIO.print("action is \(action.get)"),
            continueOn(.main),
            |<-IO.invoke { self.send(action.get) },
            yield: ()
        )^
        sendEffect.unsafeRunAsync({ _ in })
    }
}
effect.unsafeRunAsync(on: .global(qos: .background),
    { resultAction in
        print(resultAction)
        self.send(resultAction.rightValue) // Should never error out
    })
let single: IO<Never, [Void]> = effects.traverse { effect in
    let action = IO<Never, Action>.var()
    return binding(
        continueOn(.global(qos: .userInitiated)),
        action <- effect,
        continueOn(.main),
        |<-IO.invoke { self.send(action.get) },
        yield: ())
}^
single.unsafeRunAsync(on: .global(qos: .userInitiated)) { _ in }
traverse will get you a single IO describing the effects of your array of IOs, collecting all the results in an array. You can also use parTraverse if you'd like the effects to be executed in parallel. send could be something like:
public func send(_ action: Action) {
    let effects = self.reducer(&self.value, action)
    let single: UIO<[Void]> = effects.traverse { effect in
        let action = UIO<Action>.var()
        return binding(
            continueOn(.global(qos: .userInitiated)),
            action <- effect,
            continueOn(.main),
            |<-UIO.invoke { self.send(action.get) },
            yield: ())
    }^
    single.unsafeRunAsync(on: .global(qos: .userInitiated)) { _ in }
}
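If you want the effects to run concurrently, the parTraverse variant mentioned above would look almost identical (a sketch, assuming parTraverse is called the same way as traverse; only the name and the local variable change):

let parallel: UIO<[Void]> = effects.parTraverse { effect in
    let action = UIO<Action>.var()
    return binding(
        continueOn(.global(qos: .userInitiated)),
        action <- effect,
        continueOn(.main),
        |<-UIO.invoke { self.send(action.get) },
        yield: ())
}^
parallel.unsafeRunAsync(on: .global(qos: .userInitiated)) { _ in }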
oh great, I will try that, thank you for your help 🙂.
another question on the API: wouldn't it be great to have this possibility:
effect.execute(on: .global(qos: .background)) // execute the IO on the background thread
    .unsafeRunAsync(on: .main, // run the callback on the main thread
        { resultAction in
            print(resultAction)
            self.send(resultAction.rightValue) // Should never error out
        })
I find it hard to resort to a monad comprehension for a simple case like this, and the empty unsafeRunAsync closure is bothering me :). But maybe something like that already exists and I may have missed it.
Once you call unsafeRunSync or unsafeRunAsync, you shouldn't make any other operation. We could add a default empty closure to the API, so that you don't have to pass it yourself. There is continueOn, which you can invoke to switch the queue of whatever comes next. In any case, you can still do what you are suggesting by calling attempt (your execute in the example above) and then the unsafeRun. You are calling DispatchQueue.main.async and then launching the unsafeRunAsync also in the main queue, and causing a deadlock.
I added the DispatchQueue.main.async just before pushing the code, because it was working as I wanted with it. I didn't have that when I was asking initially.
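As a stopgap, that default empty closure could also be approximated with a small local extension (not part of Bow's API, just a sketch):

extension IO {
    // Run the IO asynchronously and discard the result, so the call site
    // doesn't need to pass an empty closure itself.
    func unsafeRunAsyncAndForget() {
        unsafeRunAsync({ _ in })
    }
}

// Usage: sendEffect.unsafeRunAsyncAndForget()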
Thank you for helping me understand that, it is really nice of you. Don't hesitate to tell me if it's bothering you, I don't want to take too much of your time. :)
so by doing something like this:
public func send(_ action: Action) {
    print("send method called on \(DispatchQueue.currentLabel)")
    let effects = self.reducer(&self.value, action)
    print(effects)
    effects.forEach { effect in
        effect.attempt(on: .global(qos: .userInitiated))
            .unsafeRunAsync(on: .main,
                { resultAction in
                    print("Resulting action: \(resultAction) on \(DispatchQueue.currentLabel)")
                    self.send(resultAction.rightValue) // Should never error out
                })
    }
}
(I don't have the traverse yet, but I want to understand it with my initial code 🙂)
shouldn't I get the same result as with your proposed solution? But that's not the case, I still get the deadlock. So what would be the right way to do it this way?
Ok, I think I've put my finger on what is bothering me: unsafeRunAsync runs the whole computation of the IO, not just the final callback, on the queue that we pass, meaning that it will wait (because it wraps an unsafeRunSync call) for my IO computation to finish before executing the callback on the given queue.
Am I right about that?
I assumed the IO would be executed asynchronously on whatever queue was previously given, then the callback would be called when the computation finished, and that this callback would run on the queue given through the unsafeRunAsync method.