A relaxed chat room about all things Scala. Beginner questions welcome. https://scala-lang.org/conduct/ applies
Upgrading our code from Spark 2.4.7 to 3.0.1, and getting this deprecation warning: object typed in package scalalang is deprecated (since 3.0.0): please use untyped builtin aggregate functions
in this code:
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.expressions.scalalang.typed

val meanScores: Dataset[(Int, Double)] = classified
  .filter(e => e._2 >= 0.0 && e._2 <= 1.0)    // keep only scores in [0, 1]
  .groupByKey { case (typeId, _) => typeId }  // key by the Int type id
  .agg(typed.avg(_._2))                       // typed.avg is what Spark 3.0 deprecates
But I can’t figure out what it’s supposed to look like. All the examples I could find online use that typed.someAggregateFunction pattern...
classified in this example is a Dataset[(Int, Double)] (an actual Dataset, not the DataFrame alias, I believe).
DataFrames?
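FWIW, I think the replacement the message is hinting at is the untyped builtin avg from org.apache.spark.sql.functions, turned back into a typed column with .as[Double] so it still fits into groupByKey(...).agg(...). Something like this should work on Spark 3 (an untested sketch; it assumes your SparkSession is in scope as spark for the implicits, and that the tuple columns have their default _1/_2 names):

import org.apache.spark.sql.functions.avg
import spark.implicits._  // provides the Encoder[Double] that .as[Double] needs

val meanScores: Dataset[(Int, Double)] = classified
  .filter(e => e._2 >= 0.0 && e._2 <= 1.0)
  .groupByKey { case (typeId, _) => typeId }
  .agg(avg("_2").as[Double])  // untyped builtin aggregate, made typed again via .as[Double]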
If you are training on a single GPU, the most important thing is keeping the GPU 'fed' (i.e. streaming data to it fast enough that the compute units aren't sitting idle waiting on memory transfers).
If you are training on a cluster of GPUs (I don't have the budget for this), the most important thing is probably figuring out how to synchronize the weight updates across the GPUs.
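On the single-GPU point, the usual way to keep the GPU 'fed' is to overlap host-side batch preparation with device compute: a background thread fills a small bounded queue of ready batches so the training loop never stalls on loading. A rough sketch of that producer/consumer pattern (in Scala since that's this room's language; loadBatch and trainStep are hypothetical stand-ins for your data pipeline and your framework's GPU step):

import java.util.concurrent.ArrayBlockingQueue

object PrefetchSketch {
  type Batch = Array[Float]  // stand-in for your framework's batch type

  def loadBatch(i: Int): Batch = Array.fill(1024)(i.toFloat)  // hypothetical CPU-side load/augment
  def trainStep(batch: Batch): Unit = ()                      // hypothetical GPU forward/backward step

  def main(args: Array[String]): Unit = {
    val numBatches = 100
    val queue = new ArrayBlockingQueue[Batch](4)  // a small buffer is enough to hide loading latency

    // Producer: prepares the next batches on the CPU while the GPU works on earlier ones.
    val loader = new Thread(() => (0 until numBatches).foreach(i => queue.put(loadBatch(i))))
    loader.start()

    // Consumer: the training loop only blocks if the loader falls behind.
    (0 until numBatches).foreach(_ => trainStep(queue.take()))
    loader.join()
  }
}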
main) it will work fine.