
@tirumalesh123 `lift` is just another name for `map`, with a slightly rearranged signature. `map` normally looks like `def map(fa: F[A])(f: A => B): F[B]`. If you rearrange the arguments you get `def lift(f: A => B): F[A] => F[B]`, so you can look at `Functor` as the API for lifting functions of one argument into `F`. This extends to `Applicative`, which lets you lift functions of multiple arguments into `F`, and ultimately into `Monad`, which gives you the ability to change the structure of one computation based on the result of another.
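A minimal sketch of that rearrangement (simplified signatures, not the cats definitions verbatim), with an `Option` instance to show `lift` in action:

```scala
// `lift` is just `map` with its arguments rearranged.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]

  // turn an A => B into an F[A] => F[B]
  def lift[A, B](f: A => B): F[A] => F[B] =
    fa => map(fa)(f)
}

object OptionFunctor extends Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

// a plain Int => Int, lifted into Option
val liftedInc: Option[Int] => Option[Int] =
  OptionFunctor.lift((x: Int) => x + 1)

// liftedInc(Some(1)) == Some(2); liftedInc(None) == None
```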

Let's say I have a method which returns an `Option`. If I want to get the value inside the `Option`, should I use lifting functions?

yeah, pretty much. You should just `map` (or `flatMap`) and transform the `Option` that way. I don't particularly like saying "the value inside the `F`" because it's misleading in the long run, but it's a decent approximation at first

not just monads

take for instance a deserializer, aka `Deserializer[A](run: Array[Byte] => A)`

it has a map operator: given `A => B`, it gives us a `Deserializer[B]`

And its very interesting close relative `Serializer[A]`, and the more mysterious contramap

It's a good short exercise that gives a lot of design decisions to think about.
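One way the exercise might come out (a sketch, not a library's actual API): `map` adapts the output of a `Deserializer`, while `contramap` adapts the *input* of a `Serializer`, so the function arrow points the other way:

```scala
final case class Deserializer[A](run: Array[Byte] => A) {
  // covariant: post-process the decoded value
  def map[B](f: A => B): Deserializer[B] =
    Deserializer(bytes => f(run(bytes)))
}

final case class Serializer[A](run: A => Array[Byte]) {
  // contravariant: pre-process the value before encoding
  def contramap[B](f: B => A): Serializer[B] =
    Serializer(b => run(f(b)))
}

val stringDeser: Deserializer[String] =
  Deserializer(bytes => new String(bytes, "UTF-8"))
val intDeser: Deserializer[Int] = stringDeser.map(_.toInt)

val stringSer: Serializer[String] = Serializer(s => s.getBytes("UTF-8"))
val intSer: Serializer[Int] = stringSer.contramap(_.toString)
```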

maybe those aren’t “projects”, but they’re good for learning language fundamentals and the collections API. is that what you currently want to learn?

trying to build some kind of actual useful thing (whether it’s command line, web, desktop, or whatever) usually involves learning a bunch of APIs at the same time you’re trying to learn fundamentals, which can be pretty distracting

But otherwise yeah what Seth said. Go through some exercises. It really doesn't matter which.

@dsebban_twitter

Are you saying this because of other Monads, like State and IO, that are not containers, as opposed to List and Option?

yes

and even List or Option can be viewed as either containers or computations. Also, I'm actually quite keen on explaining the datatypes (State, IO, List, Option) and the idea of higher-kinded types to represent computation *separately* (and before) from whatever algebra they happen to form (e.g. Monad). Consequently, I don't like (when teaching) saying that IO or Option are monads. They *form* a Monad. This might seem like pedantry but it's actually quite important for a few reasons imho (which I might expand on if you care). Obviously I probably do say "the IO monad" a bunch of times when talking casually, but I consider that an abuse of notation, for brevity only

As a data type Either is a simple disjunction, but as a computation it can model exception-handling.

Etc.
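To make the Either point concrete, here's a small sketch (hypothetical `parsePort`/`checkRange` helpers, assuming Scala 2.13 for `toIntOption`): as data it's just one of two values; as a computation, `flatMap` short-circuits on the first `Left`, modelling exception handling without throwing:

```scala
// as a data type: a simple disjunction, one of two values
val parsedOrRaw: Either[String, Int] = Right(42)

// as a computation: flatMap stops at the first Left
def parsePort(s: String): Either[String, Int] =
  s.toIntOption.toRight(s"not a number: $s")

def checkRange(p: Int): Either[String, Int] =
  if (p >= 1 && p <= 65535) Right(p) else Left(s"out of range: $p")

val ok  = parsePort("8080").flatMap(checkRange)  // Right(8080)
val bad = parsePort("abc").flatMap(checkRange)   // Left("not a number: abc")
```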

@SystemFw likely has a more fundamental take, I just felt like butting in ;-)

but basically it starts from thinking about what types are for (and not what they are against, to quote Connor McBride), to the fact that they are given meaning and semantics not by their underlying representation, but by the operations you define on them (let's say just functions for now).

The second step would be defining algebras as specifications of *part* of the behaviour of a data type. This works really well with typeclasses, which give you a *has a*, rather than the *is a*, relationship you typically get from OO-style interfaces. The result of this thought process is that given a datatype, part of what it can do is specified by operations that are unique to that type, and part by algebras (i.e. Monoid, Functor, and so on). This kind of reasoning can be explained with simple types and lower-kinded typeclasses only.
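A sketch of that *has a* relationship (a hypothetical minimal `Monoid`, not the cats one): the algebra lives in a separate instance value, rather than `Int` inheriting from an interface:

```scala
// the algebra: a specification of part of a type's behaviour
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

// "Int has a Monoid" -- expressed as a value, not via inheritance
implicit val intAddition: Monoid[Int] = new Monoid[Int] {
  def empty: Int = 0
  def combine(x: Int, y: Int): Int = x + y
}

// code can then depend on the algebra alone
def combineAll[A](as: List[A])(implicit M: Monoid[A]): A =
  as.foldLeft(M.empty)(M.combine)

// combineAll(List(1, 2, 3)) == 6
```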

Then, you move on to explaining how types of higher kind can be used not just for containers, but for computations as well: List and Option are interesting because they can be viewed both ways, but there are some things that really only make sense as computations (like State or IO).

At this point you can actually explain F-A-M as algebras that specify part of the behaviour of higher kinded types:

- Functor lifts functions of one argument into F
- Applicative lifts functions of n arguments into F
- Monad gives you context sensitivity: the ability to change the *structure* of a computation based on the *result* of a previous one
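The three algebras above can be sketched as typeclasses over a higher-kinded `F` (simplified signatures, not the cats hierarchy verbatim), with an `Option` instance to show they are satisfiable:

```scala
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]  // lift 1-argument functions
}

trait Applicative[F[_]] extends Functor[F] {
  def pure[A](a: A): F[A]
  // lift n-argument functions (n = 2 shown; higher arities follow)
  def map2[A, B, C](fa: F[A], fb: F[B])(f: (A, B) => C): F[C]
}

trait Monad[F[_]] extends Applicative[F] {
  // the structure of the next computation depends on the previous result
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

object OptionMonad extends Monad[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  def pure[A](a: A): Option[A] = Some(a)
  def map2[A, B, C](fa: Option[A], fb: Option[B])(f: (A, B) => C): Option[C] =
    fa.flatMap(a => fb.map(b => f(a, b)))
  def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] = fa.flatMap(f)
}
```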

The final bit is learning how to operate on types based on their algebras and operations only, while being agnostic to their representation (e.g. avoiding pattern matching on `Option`): this is propaedeutic to learning about types which either have an opaque representation (`IO`) or a very complex one (`fs2.Stream`). It also means that you can tackle a new library which exposes different types, and basically know most of what you need to do to use it once you know which algebras it forms (e.g. doobie `ConnectionIO`).
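A sketch of representation-agnostic code (using a hypothetical minimal `Monad` typeclass, redefined here to keep the snippet self-contained): `sumBoth` never pattern-matches on a concrete type, so the same program runs in `Option`, `List`, or any other `F` that forms a Monad:

```scala
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  def map[A, B](fa: F[A])(f: A => B): F[B] = flatMap(fa)(a => pure(f(a)))
}

// written against the algebra only, agnostic to the representation
def sumBoth[F[_]](fa: F[Int], fb: F[Int])(implicit M: Monad[F]): F[Int] =
  M.flatMap(fa)(a => M.map(fb)(b => a + b))

implicit val optionMonad: Monad[Option] = new Monad[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] = fa.flatMap(f)
}

implicit val listMonad: Monad[List] = new Monad[List] {
  def pure[A](a: A): List[A] = List(a)
  def flatMap[A, B](fa: List[A])(f: A => List[B]): List[B] = fa.flatMap(f)
}

// sumBoth(Option(1), Option(2)) == Some(3)
// sumBoth(List(1, 2), List(10)) == List(11, 12)
```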

I guess the main problem with this approach is that it requires some upfront motivation, and ideally a mentor to give you clear explanations and "unstuck" you along the way

@dsebban_twitter

I think completeness of understanding is important; shallow understanding of these concepts will limit you and eventually make you give up on FP. I appreciate the answers; piecing together data types -> HKT -> computation -> algebras -> monad is indeed the clearest explanation I have seen so far

note that the fact that algebras only describe *part* of the behaviour of a data type really is crucial: if you understand that, you realise that "how do I extract a value out of a monad" is a question that literally makes no sense on multiple levels: the monad is the algebra, not the type, and extraction appears nowhere in the definition of monad, although it might be part of the behaviour of a given datatype which also happens to form a monad (like State), but makes no sense for others (like `IO`)
@marcinsokrates_twitter if nobody answers here about `Vector` branching factor, you might try the scala/contributors room, and/or https://contributors.scala-lang.org
Also I was specifically thinking that in the case of varargs like in `Vector("a","b","c","d")`, since in 2.13 these will be passed as an `ArraySeq[String]`, we could just share the underlying array with the arrayseq like...

```
object Vector {
  def apply[A](elems: A*): Vector[A] =
    if (elems.length <= 32 && elems.unsafeArray.isInstanceOf[Array[AnyRef]]) {
      new Vector(0, elems.unsafeArray, 0)
    } else { ... }
}
```

Need to try it at home

That's neat