@Narasimhaporeddy Hi, I think you may have a documentation/jobserver version mismatch:
if you use jobserver 0.8.0, please check the documentation for that version: https://github.com/spark-jobserver/spark-jobserver/tree/0.8.0
`db/combineddao/postgresql/migration` was introduced only recently and is not part of the 0.8.0 release.
I think the hdfs+postgres DAO would work only if you use the master branch.
Alternatively, you may use version 0.10.0 with the hdfs+h2 DAO (https://github.com/spark-jobserver/spark-jobserver/tree/e3c3d3ce9ba81b63608130d3904161c8246fe064)
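On the master branch the combined DAO is selected via configuration. A sketch of what that section can look like, with the key layout and DAO class names recalled from the master-branch `local.conf` template (so please verify them against your checkout before using):

```hocon
spark.jobserver {
  combineddao {
    rootdir = /tmp/combineddao
    binarydao {
      # stores uploaded binaries (e.g. on HDFS)
      class = spark.jobserver.io.HdfsBinaryDAO
    }
    metadatadao {
      # stores job/context metadata in a SQL backend
      class = spark.jobserver.io.MetaDataSqlDAO
    }
  }
}
```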
Hi @valan4ik, do we support UTF-8 Spark context names? For example:
curl -i -d "" 'http://<host>:8090/contexts/Sparkconext漢字?num-cpu-cores=2&memory-per-node=512M&context-factory=spark.jobserver.context.SessionContextFactory'
I got the following error upon executing the above curl command:
HTTP/1.1 400 Bad Request
Server: spray-can/1.3.4
Date: Fri, 31 Jul 2020 00:00:36 GMT
Content-Type: text/plain; charset=UTF-8
Connection: close
Content-Length: 65
Illegal request-target, unexpected character '₩' at position 23
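spray-can (like most HTTP servers) rejects raw non-ASCII bytes in the request-target, so a UTF-8 context name has to be percent-encoded before it goes into the URL. A minimal sketch using the JDK's `URLEncoder`; the host placeholder and context name are just the ones from the curl command above:

```scala
import java.net.URLEncoder
import java.nio.charset.StandardCharsets

object EncodeContextName {
  def main(args: Array[String]): Unit = {
    val name = "Sparkconext漢字"
    // percent-encode the UTF-8 bytes of the context name
    val encoded = URLEncoder.encode(name, StandardCharsets.UTF_8.name())
    // "Sparkconext漢字" becomes "Sparkconext%E6%BC%A2%E5%AD%97"
    println(s"http://<host>:8090/contexts/$encoded")
  }
}
```

Note that `URLEncoder` does form encoding (a space becomes `+`), which is fine here because context names should not contain spaces anyway; whether the server then accepts the decoded name is a separate question.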
Hi all, I was running a concurrency benchmark on spark-jobserver using JMeter, but I am not able to achieve high concurrency even with increasing cores.
```scala
override def runJob(sparkSession: SparkSession, runtime: JobEnvironment, data: JobData): JobOutput = {
  Map("data" -> 1)  // no-op: returns immediately, no Spark work at all
}
```
I am not running any actual Spark job here,
yet I am not able to achieve more than 8 queries per second. Same results on 4-, 8-, and 16-core AWS EC2 machines.
I have created 4 contexts and maintain a concurrency of 5 per context,
with max-jobs-per-context = number of cores on the machine.
Can anyone tell me what could be going wrong here?
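For reference, `max-jobs-per-context` lives under the `spark.jobserver` section of the config. A sketch of the relevant fragment; the value is just an example, not a recommendation:

```hocon
spark.jobserver {
  # upper bound on the number of jobs running concurrently in one context
  max-jobs-per-context = 16
}
```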