This is the only part that's worth factoring out, IMHO. Still not a true non-greedy match, but I guess we're not going to see that in fastparse anytime soon, so this is probably as good as it gets.
def anyCharExcept[_:P, T](p: => P[T]): P[Unit] = P( (!p ~ AnyChar) )
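To show what that combinator buys you, here's a hedged usage sketch: the `comment` rule, the `NonGreedy` object name, and the input are all made up for illustration, but the idea is "consume characters one at a time until the terminator would match":

```scala
import fastparse._, NoWhitespace._

object NonGreedy {
  // Consume one character, but only if `p` does NOT match at the current position.
  def anyCharExcept[_: P, T](p: => P[T]): P[Unit] = P( !p ~ AnyChar )

  // Hypothetical usage: capture everything up to (but not including) "-->".
  def comment[_: P]: P[String] = P( "<!--" ~ anyCharExcept("-->").rep.! ~ "-->" )
}
```

So `parse("<!-- hi -->", NonGreedy.comment(_))` captures `" hi "` as the body, because the repetition stops exactly where `"-->"` would start matching.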
Hi, I'm trying to reuse part of the scalaparse parser in my own fastparse-based parser. Specifically, I'd like to reuse the string parsing.
However, I cannot figure out how to capture the result of the String subparser that I found in the
Ideally, I'd like to write something like `private def string[_: P]: P[String] = P(scalaparse.Scala.Literals.NoInterp.String).!`, but this does not compile.
Any pointers on how to achieve this?
`[Sth](implicit x: P[Sth])`
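To spell that hint out: `[_: P]` is just a context bound, so a rule declared with it desugars to a method taking an implicit `P[_]` (the `ParsingRun`). A small sketch, with made-up rule names, showing the two equivalent forms:

```scala
import fastparse._, NoWhitespace._

object Desugared {
  // Context-bound form, as usually written in fastparse code:
  def digits[_: P]: P[Unit] = P( CharIn("0-9").rep(1) )

  // The same definition with the implicit spelled out explicitly:
  def digitsExplicit[Sth](implicit x: P[Sth]): P[Unit] = P( CharIn("0-9").rep(1) )
}
```

Both can be passed to `parse` the same way, e.g. `parse("123", Desugared.digits(_))`.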
@reid-spencer thanks for asking that good question. I'm probably in the same situation: I have many small parser functions that add whitespace explicitly between tokens all over the place, and it feels really silly and like poor form. I now have something to look at when I revisit my code.
Using the different implicits, as Li suggested, does wonders. But if your whole parser lives in a single file with several objects/traits, it can be a nightmare. You can break it down into single traits (or objects), place one trait per file (Java style), and then use one implicit per file/trait. Or, if you need to keep some traits together, import inside your traits rather than at file level. This way you get fine-grained control over your whitespace policy. It's very nice.
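As a concrete sketch of that layout (trait names and rules here are hypothetical): each trait imports its own whitespace implicit locally, so token-level rules stay whitespace-sensitive while grammar-level rules skip whitespace between parts:

```scala
import fastparse._

// "One trait per file" style: each trait picks its own whitespace policy.
trait Tokens {
  import NoWhitespace._           // tokens: no implicit whitespace skipping
  def ident[_: P]: P[String] = P( CharIn("a-z").rep(1).! )
}

trait Exprs { this: Tokens =>
  import MultiLineWhitespace._    // grammar rules: skip spaces/newlines around `~`
  def pair[_: P]: P[(String, String)] = P( ident ~ "," ~ ident )
}

object Parsers extends Tokens with Exprs
```

The whitespace implicit is resolved at each rule's definition site, so `ident` stays strict even when called from `pair`: `parse("ab , cd", Parsers.pair(_))` yields `("ab", "cd")`.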
private val reservedTokens: Map[String, TokenType] = Map(
  "(" -> LeftParen,
  ")" -> RightParen,
  //...
)

def toParser[_: P](kv: (String, TokenType)): P[TokenType] = P(kv._1).map(_ => kv._2)
def combine[_: P](acc: P[TokenType], value: P[TokenType]): P[TokenType] = P(acc | value)
def all[_: P] = reservedTokens.map(toParser).reduce[P[TokenType]](combine)
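A hedged alternative sketch for the same idea (the `TokenType` hierarchy and object name here are stand-ins): instead of building one parser per entry and reducing with `|`, you can match all the literals in one pass with `StringIn`, capture the matched text, and look up its token type in the map:

```scala
import fastparse._, NoWhitespace._

sealed trait TokenType
case object LeftParen extends TokenType
case object RightParen extends TokenType

object Reserved {
  val reservedTokens: Map[String, TokenType] = Map("(" -> LeftParen, ")" -> RightParen)

  // StringIn tries all literals at once; the captured text keys into the map.
  def all[_: P]: P[TokenType] = P( StringIn("(", ")").! ).map(reservedTokens)
}
```

Note that `StringIn` requires literal string arguments (it's a macro), so the literals are repeated rather than derived from the map; the map is only used for the lookup after the match.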