A relaxed chat room about all things Scala. Beginner questions welcome. http://scala-lang.org/conduct/ applies
Array((1, 2))
why can't i use Array((1, 2))(0)
scala.reflect.ClassTag
@HabrisMun_twitter sadly you have encountered one of the unpleasant little corner cases of Scala.
Let's check the documentation for Array.apply; it says something like:
def apply[T](xs: T*)(implicit arg0: ClassTag[T]): Array[T]
So, it seems that it takes a second parameter group with an implicit argument, some ClassTag thing. That is due to the way arrays work at the JVM level; it is an implementation detail that you shouldn't worry about too much (at least for now).
So when you do:
List(1, 2, 3)(0)
It is actually doing:
List(1, 2, 3).apply(0)
However, in your case, the compiler believes that you really want to pass the implicit parameter explicitly, instead of calling apply.
You can solve it by splitting the call onto a second line or by calling apply directly, like:
val arr = Array((1, 2))
val first = arr(0)
// or
Array((1, 2)).apply(0)
In both cases, what happens is that the compiler fills in the implicit parameter for you (that is the point of it being implicit), and it is clear that you really want to call apply.
Hope that explains the problem.
BTW, just for the record, this is one of the things that Scala 3 solves (which is great!!), because there you have to mark an explicitly passed implicit argument with the using keyword, so the call is no longer ambiguous.
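For illustration, a minimal sketch of how that looks in Scala 3 (untested here; assumes a recent Scala 3 compiler):

// In Scala 3 the trailing (0) can only mean .apply(0), because an
// explicit implicit argument must now be written with `using`:
val first = Array((1, 2))(0) // (1, 2)

// Supplying the ClassTag explicitly now looks like this:
import scala.reflect.ClassTag
val arr = Array((1, 2))(using summon[ClassTag[(Int, Int)]])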
f5 every minute.
Array is super-specialized for low-level JVM stuff and you rarely need it in normal code.
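For instance (a quick sketch), the default immutable collections avoid the ClassTag wrinkle entirely:

// Vector.apply has a single parameter list and no ClassTag,
// so indexing right after construction just works:
val xs = Vector((1, 2))
xs(0) // (1, 2)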
@saif-ellafi you want a private constructor + a factory.
final class Foo private (...)

object Foo {
  def apply(...): Foo = {
    // validate preconditions here, then construct.
    new Foo(...)
  }
}
You may even return an Option[Foo] instead to handle unmatched pre-conditions.
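As a concrete sketch of that pattern (Port and its bounds are just a made-up example, not from the original question):

// Private constructor: the only way in is through the factory.
final class Port private (val value: Int)

object Port {
  // Returns None when the precondition fails.
  def apply(value: Int): Option[Port] =
    if (value >= 0 && value <= 65535) Some(new Port(value)) else None
}

Port(8080) // Some(port)
Port(-1)   // None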
I have 25 Spark jobs in Scala, and each job requires configuration property values. At the moment every class reads the config loader itself, so each job ends up with duplicate variables/functions. I am thinking of creating a common class, passing args to its constructor to generate the property variables, and then creating an instance of it in all the Spark jobs to access each variable. Another thought is, instead of a class, to create a Scala object and import that object._, so I don't have to create mappings such as:
val commonUtil = new ConfigLoader
val database_password = commonUtil.database_password
val database_port = commonUtil.database_port
etc. What is the best practice here?
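For illustration, a minimal sketch of the object approach being described (the property names and the use of Typesafe Config are assumptions, not from the original question):

import com.typesafe.config.ConfigFactory

// Hypothetical shared holder: the config is loaded once, and every
// Spark job just imports the members instead of re-reading it.
object JobConfig {
  private val conf = ConfigFactory.load()

  val databasePassword: String = conf.getString("db.password")
  val databasePort: Int        = conf.getInt("db.port")
}

// In a job:
// import JobConfig._
// buildConnection(databasePort, databasePassword)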
import scala.collection.mutable

def enumerationToList[A](e: java.util.Enumeration[A]): List[A] = {
  val builder = new mutable.ListBuffer[A]
  while (e.hasMoreElements()) {
    builder += e.nextElement()
  }
  builder.result()
}
@ChristopherDavenport if you can use Java 9 you may use this:
import scala.collection.mutable

def enumerationToList[A](e: java.util.Enumeration[A]): List[A] = {
  val builder = new mutable.ListBuffer[A]
  // Enumeration.asIterator is Java 9+; ListBuffer.addOne is Scala 2.13+.
  e.asIterator.forEachRemaining(builder.addOne) // Disclaimer, I do not have Java 9 so I couldn't test that it compiles.
  builder.result()
}
Not sure if it looks cleaner for you.
Apart from that, I do not see any other way.
while, and never open the file again lol
java.util.Collections.list(e).asScala
e.asScala.toList
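A sketch of those two one-liners with the imports they need (assuming Scala 2.13; on 2.12 the equivalent import is scala.collection.JavaConverters._):

import scala.jdk.CollectionConverters._

def viaCollections[A](e: java.util.Enumeration[A]): List[A] =
  java.util.Collections.list(e).asScala.toList // ArrayList -> Buffer -> List

def viaAsScala[A](e: java.util.Enumeration[A]): List[A] =
  e.asScala.toList // Enumeration -> Iterator -> List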
traverse and run some operation across the keys.