hi everyone, after yesterday, even though I still couldn't solve the problem, I found out this is a standard, described below; I am not sure whether it is in scope of this library
text.lines after decoding, it should be easy to handle with a regex
"^tagname:(.*)" should then capture properly after you ensured your data arrives in lines in the pipe
fs2.data.csv.CellDecoder#javaTimeDecoder is private. Apart from implementing something similar to javaTimeDecoder myself, is there another way to achieve this with this library?
fs2.data.csv.CellDecoder#javaTimeDecoder is private as it's not safe given its (current) signature. But I totally see your use case and will create an issue to expose something appropriate. Until then, you'll have to do it yourself, sorry!
CellDecoder[String].emap(s => Either.catchOnly[DateTimeParseException](doYourParsing(s)).leftMap(new DecoderError("Couldn't parse my format", _)))
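Until something is exposed by the library, a concrete sketch along those lines might look like this; the LocalDate target type and the dd.MM.yyyy pattern are assumptions for illustration, and the DecoderError usage mirrors the snippet above:

import java.time.LocalDate
import java.time.format.{DateTimeFormatter, DateTimeParseException}

import cats.syntax.all._
import fs2.data.csv.{CellDecoder, DecoderError}

// assumed custom date format; adjust to whatever your cells actually contain
val myFormat = DateTimeFormatter.ofPattern("dd.MM.yyyy")

implicit val localDateCellDecoder: CellDecoder[LocalDate] =
  CellDecoder[String].emap { s =>
    Either
      .catchOnly[DateTimeParseException](LocalDate.parse(s, myFormat))
      .leftMap(t => new DecoderError(s"Couldn't parse '$s' as a date", t))
  }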
Is there anything in the lib to work with strings with regard to parsing to rows? For example, I am currently crunching csv files from s3, using the fs2-aws-s3 lib, and have something like:
s3Client.readFileMultipart(BucketName(bucket), FileKey(fileKey), partSizeMB)
  .through(fs2.text.utf8Decode)
  .through(fs2.text.lines)
  .through(rows[IO]())
  .through(headers[IO, String])
However rows expects a Stream[F, Char], not a Stream[F, String]
rows handles it, including corner cases (when a newline occurs within a row)
flatMap(Stream.emits(_)) won't be required anymore from version 1.x onward
Replace fs2.text.lines with .flatMap(fs2.Stream.emits(_)) and it should work out of the box
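Putting the advice together for the 0.x pipes used above, a rough sketch of the adjusted pipeline (the parseCsv name is made up, and the byte source, e.g. the readFileMultipart stream, is kept abstract) could be:

import cats.effect.IO
import fs2.Stream
import fs2.data.csv.{headers, rows, CsvRow}

def parseCsv(bytes: Stream[IO, Byte]): Stream[IO, CsvRow[String]] =
  bytes
    .through(fs2.text.utf8Decode)  // bytes -> strings
    .flatMap(Stream.emits(_))      // strings -> chars, replaces fs2.text.lines
    .through(rows[IO]())           // chars -> raw rows, newlines inside quoted cells are handled
    .through(headers[IO, String])  // raw rows -> rows keyed by their header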
java.time codecs. https://twitter.com/lucassatabin/status/1373402923154161665 From now on, development will focus on the 1.0.0 release, which will bring new and better CSV pipes, as well as easier support for textual inputs (and much more in CBOR, XML, ...). The main branch will become the one targeting 1.0.0 soon(-ish) and the 0.x series will be renamed maintenance and will only accept bug fixes. I wish you all a nice Sunday.
The main branch is now the default one; the former master was renamed to 0.x and should only be targeted for bugfix PRs from now on. I will update CI configuration and documentation accordingly now. This is the official start of 1.x development!
Pipe[F, XmlEvent, A] for xml streams and Pipe[F, Token, A] for json streams
Stream[F, Byte] to a Stream[F, Char] (i.e. applying a Pipe[F, Byte, Char] beforehand)
Currently the pipes only accept Char, but the new 1.0 (released RC1 just a few days ago) has some support for using non-Char streams directly using the CharLikeChunks abstraction. IIRC we don't have documentation on that one yet, but if you look at the modified pipes and this: https://github.com/satabin/fs2-data/blob/main/text/shared/src/main/scala/fs2/data/text/package.scala I hope you can figure it out easily (in the end, it boils down to choosing the correct import). Let us know if you get stuck there!
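For instance, with the 1.0 pipes it seems to come down to a single import selecting the right CharLikeChunks instance; the jsonTokens helper and the UTF-8 assumption below are only for illustration:

import cats.effect.IO
import fs2.Stream
import fs2.data.json.{tokens, Token}
import fs2.data.text.utf8._ // assumed to provide CharLikeChunks for UTF-8 byte streams

// no separate Pipe[IO, Byte, Char] step needed anymore
def jsonTokens(bytes: Stream[IO, Byte]): Stream[IO, Token] =
  bytes.through(tokens)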
There is now documentation on the CharLikeChunk concept and how to parse byte streams. @dylemma you can have a look here: https://fs2-data.gnieh.org/documentation/#reading-text-inputs-from-a-file
I hadn't realized the text package was there. That should work out nicely since an end user can just import the one they want from that package, and my implicits that work in terms of CharLikeChunks should just work. I'll poke at this a bit more later tonight.