    Felix Gao
    I do think I have installed the dependencies correctly
    Ivy Default Cache set to: /Users/ggao/.ivy2/cache
    The jars for the packages stored in: /Users/ggao/.ivy2/jars
    :: loading settings :: url = jar:file:/usr/local/Cellar/apache-spark/2.4.4/libexec/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
    org.apache.spark#spark-avro_2.11 added as a dependency
    ml.combust.mleap#mleap-spark_2.11 added as a dependency
    :: resolving dependencies :: org.apache.spark#spark-submit-parent-dbeefc3f-8e12-443d-8629-8adf19670d42;1.0
        confs: [default]
        found org.apache.spark#spark-avro_2.11;2.4.4 in central
        found org.spark-project.spark#unused;1.0.0 in local-m2-cache
        found ml.combust.mleap#mleap-spark_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-spark-base_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-runtime_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-core_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-base_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-tensor_2.11;0.15.0 in central
        found io.spray#spray-json_2.11;1.3.2 in central
        found com.github.rwl#jtransforms;2.4.0 in central
        found ml.combust.bundle#bundle-ml_2.11;0.15.0 in central
        found com.google.protobuf#protobuf-java;3.5.1 in central
        found com.thesamet.scalapb#scalapb-runtime_2.11;0.7.1 in local-m2-cache
        found com.thesamet.scalapb#lenses_2.11;0.7.0-test2 in local-m2-cache
        found com.lihaoyi#fastparse_2.11;1.0.0 in local-m2-cache
        found com.lihaoyi#fastparse-utils_2.11;1.0.0 in local-m2-cache
        found com.lihaoyi#sourcecode_2.11;0.1.4 in local-m2-cache
        found com.jsuereth#scala-arm_2.11;2.0 in central
        found com.typesafe#config;1.3.0 in local-m2-cache
        found commons-io#commons-io;2.5 in local-m2-cache
        found org.scala-lang#scala-reflect;2.11.8 in local-m2-cache
        found ml.combust.bundle#bundle-hdfs_2.11;0.15.0 in central
    :: resolution report :: resolve 547ms :: artifacts dl 16ms
        :: modules in use:
        com.github.rwl#jtransforms;2.4.0 from central in [default]
        com.google.protobuf#protobuf-java;3.5.1 from central in [default]
        com.jsuereth#scala-arm_2.11;2.0 from central in [default]
        com.lihaoyi#fastparse-utils_2.11;1.0.0 from local-m2-cache in [default]
        com.lihaoyi#fastparse_2.11;1.0.0 from local-m2-cache in [default]
        com.lihaoyi#sourcecode_2.11;0.1.4 from local-m2-cache in [default]
        com.thesamet.scalapb#lenses_2.11;0.7.0-test2 from local-m2-cache in [default]
        com.thesamet.scalapb#scalapb-runtime_2.11;0.7.1 from local-m2-cache in [default]
        com.typesafe#config;1.3.0 from local-m2-cache in [default]
        commons-io#commons-io;2.5 from local-m2-cache in [default]
        io.spray#spray-json_2.11;1.3.2 from central in [default]
        ml.combust.bundle#bundle-hdfs_2.11;0.15.0 from central in [default]
        ml.combust.bundle#bundle-ml_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-base_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-core_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-runtime_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-spark-base_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-spark_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-tensor_2.11;0.15.0 from central in [default]
        org.apache.spark#spark-avro_2.11;2.4.4 from central in [default]
        org.scala-lang#scala-reflect;2.11.8 from local-m2-cache in [default]
        org.spark-project.spark#unused;1.0.0 from local-m2-cache in [default]
        :: evicted modules:
        com.google.protobuf#protobuf-java;3.5.0 by [com.google.protobuf#protobuf-java;3.5.1] in [default]
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        |      default     |   23  |   0   |   0   |   1   ||   22  |   0   |
        confs: [default]
        0 artifacts copied, 22 already retrieved (0kB/15ms)
    Akarsh Gupta
    Hi everyone, has anyone seen this problem with XGBoost serving, where the predictions from Spark and MLeap serving are different? I am using MLeap version 0.11.
    Luca Giovagnoli
    @akarsh3007 what transformers are you using? Is it similar to combust/mleap#596?
    I cannot find the file "bundle.json" in my model, which contains one custom transformer. Does anyone know about this?
    The files in the zip look like this
    Ganesh Krishnan
    Does MLeap support Spark LDA? I can see combust/mleap#144 with LDA support, but neither the documentation nor our code seems to work.
    Luca Giovagnoli
    Hi @ancasarb, do you know if MLeap Runtime is thread-safe? I can't see many ‘synchronized’ blocks in the codebase https://github.com/combust/mleap/search?l=Scala&q=synchronized so I assume it’s not. I wonder if there have been any clear reports of it being non-thread-safe.
    Anca Sarb
    Hi @lucagiovagnoli, do you mean things like the FrameReader(s), RowTransformer, Transformer, FrameWriter(s) etc?
    If so, then yes, they’re thread-safe. There’s no need for synchronization; most of them are stateless.
    We have had all of these beans wired as singleton beans (if you’re familiar with the Spring framework in Java) without any issues for 3+ years.
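A minimal sketch of what this looks like in practice, assuming mleap-runtime 0.15.x; the bundle path "/tmp/model.zip" and the pool size are invented for illustration, not from the chat:

```scala
// Sketch only: one shared, stateless MLeap Transformer scored from many
// threads with no synchronization, mirroring the singleton-bean usage above.
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import ml.combust.bundle.BundleFile
import ml.combust.mleap.runtime.MleapSupport._
import ml.combust.mleap.runtime.frame.DefaultLeapFrame
import resource._

object SharedTransformer {
  // Loaded once at startup and reused everywhere (hypothetical path).
  val transformer = (for (bf <- managed(BundleFile("jar:file:/tmp/model.zip"))) yield {
    bf.loadMleapBundle().get.root
  }).opt.get

  implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(8))

  // Safe to call concurrently: transform does not mutate the transformer.
  def score(frame: DefaultLeapFrame): Future[DefaultLeapFrame] =
    Future(transformer.transform(frame).get)
}
```

Because the transformer holds no mutable state, many requests can call score concurrently without locks.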
    Luca Giovagnoli
    @ancasarb thanks so much for sharing your valuable experience. I’m not familiar with beans, but I’m going to read up on them now :)
    Transformer and RowTransformer are what we’re using, so that sounds great!
    Daniel Hen
    Hi all, I wanted to ask a junior question :)
    I have a Spark model (XGBoost4J), already serialized in the famous MLeap bundle JSON. Now I'd like to deploy it to some service on Docker/Kubernetes and start querying it. My question is: where do I put the parameters that are relevant to each request? If I have, say, 1000 features and only 500 of them are relevant, how should I tackle this use case? Where should I start? The documentation is not that clear about this use case. Thank you!
    Hi @hollinwilkins, how can I get the list of all the deployed models?
    @ancasarb How can I replace the value of a column in a leap frame using .withColumn? Or any other helpful advice.
    How can I execute the transformation of a custom transformer when I have the same input and output columns?
    @ancasarb How can I check whether a column exists in a leap frame?
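A hedged sketch of the two leap-frame operations asked about above (checking for a column, and replacing one), assuming mleap-runtime 0.15.x; the schema, field names, and UDF here are hypothetical:

```scala
import ml.combust.mleap.core.types.{ScalarType, StructField, StructType}
import ml.combust.mleap.runtime.frame.{DefaultLeapFrame, Row}
import ml.combust.mleap.runtime.function.UserDefinedFunction

// A small hypothetical frame.
val schema = StructType(
  StructField("text", ScalarType.String),
  StructField("count", ScalarType.Int)).get
val frame = DefaultLeapFrame(schema, Seq(Row("hello", 1), Row("world", 2)))

// Checking whether a column exists.
val hasCount = frame.schema.hasField("count")

// There is no in-place replacement: compute a new column from the old one
// with a UDF, then drop the old column.
val doubled: UserDefinedFunction = (c: Int) => c * 2
val replaced = for {
  withTmp <- frame.withColumn("count2", "count")(doubled)
  dropped <- withTmp.drop("count")
} yield dropped
```

The new column keeps its new name; if the original name must survive, the frame can be rebuilt from its rows with the desired schema.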
    Gustavo Salazar Torres
    Hey guys, I'm working on a Golang library for MLeap; to begin with, I'm trying to bring in Word2Vec models. So far my problem has been understanding how the JSON model is parsed. Is there any documentation about this?
    marvin xu
    "Failed to find a default value for splits": has anyone met this problem while saving a model with MLeap?
    Anca Sarb
    @marvinxu-free I replied on the GitHub issue.
    Anca Sarb
    @prafulrana21 at the moment it seems we don’t have an endpoint for that if you’re using the Spring Boot service (https://github.com/combust/mleap/tree/master/mleap-spring-boot). Let me know if you’re interested in adding one!
    Anca Sarb
    @mtsol I’ve replied to the questions here combust/mleap#660, hope it helps!
    marvin xu
    @ancasarb I have reopened the issue; please take a look on GitHub.
    marvin xu
    @ancasarb combust/mleap#676
    Anca Sarb
    Hi @here, I’ve just released the latest version of MLeap (0.16.0), both the Scala projects and the PyPI package. Release notes are at https://github.com/combust/mleap/blob/master/RELEASE_NOTES.md. Thank you all for your contributions and support! I will be updating the documentation in the next few days.
    Hi @here, is there any example of how to serialize a trained TensorFlow model to an MLeap bundle? The documentation proposes using the TensorFlow freeze_graph function, but it's unclear how that can be used to generate an MLeap bundle.
    Nastasia Saby
    Does anyone know how to load back a RandomForest model written in Scikit-Learn, please? I would like to use PySpark to do that, but I can't find a good way.
    Daniel Hen
    Did anyone ever try to save an XGBoost4J model (as part of a Spark pipeline, via Bundle.ML) and load it in a Docker container behind a REST API? I'm having some difficulties...
    @ancasarb can you kindly assist?
    Thank you!
    Nastasia Saby
    Hello, I'm still stuck. Do you know if it is possible to save a Scikit-Learn model/pipeline as a zip? I can't find a way to do that. Thank you.
    Nastasia Saby
    I found a solution. My problem was linked to "databricks". If anyone else is interested, I explained my workaround here: combust/mleap#690
    I am trying to use MLeap to log a logistic regression (pyspark.ml.classification.LogisticRegression) model by doing mlflow.mleap.log_model(spark_model=model, sample_input=test_data.limit(1), artifact_path=SAGEMAKER_APP_NAME), and then deploying to SageMaker. But when I use boto3 to make the prediction call, the SageMaker endpoint only returns the prediction label (1 or 0), without the probability value. Is there anywhere I can look to debug this problem?
    marvin xu
    java.util.NoSuchElementException: key not found: org.apache.spark.ml.PipelineModel, has anyone met this problem?
    Using mleap-spark_2.3.0, serializing the pipeline model succeeds in a local environment, while it fails in cluster mode.
    Also, it seems that reference.conf in mleap-xgboost-spark overwrites reference.conf in mleap-spark.
    @Daniel8hen did you get any resolution?
    Monark Singh

    Hi guys,

    Is there any way we can load models if mleap-runtime is added as a dependency in a Java app? I couldn't figure it out from the available Javadoc.

    I could only find the API way of loading the models.
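For reference, mleap-runtime can load a bundle entirely in-process, with no serving API involved. A sketch in Scala (it translates directly to a Java app with the same dependency), where the bundle path is a placeholder:

```scala
// Sketch: loading an MLeap bundle directly with mleap-runtime (no REST server).
// "/tmp/model.zip" is a hypothetical path, not from the chat.
import ml.combust.bundle.BundleFile
import ml.combust.mleap.runtime.MleapSupport._
import resource._

val transformer = (for (bf <- managed(BundleFile("jar:file:/tmp/model.zip"))) yield {
  bf.loadMleapBundle().get.root
}).opt.get

// transformer.transform(leapFrame) can then be called in-process.
```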

    Andrea Guidi

    Hello everybody :)

    I am new to this chat. I was reading the MLeap documentation, and I really think it's a great product. My only concern is that (as far as I know) it's not possible to use Spark NLP annotators or any other Python NLP package. Did anybody manage to build a pipeline with a lemmatizer or any other processing step that is not included in the default Spark ML or sklearn modules?

    Is there any way of serializing a List[String] in a separate folder, as is done in the DecisionTree and GBT serializers?
    Alex Holmes
    Hi folks - is there a rough sense of when JDK 11 / Spark 3 support may be added, as per combust/mleap#475? Thanks so much.
    @ancasarb hey, hope you're doing well! Could you take a look at https://github.com/combust/mleap/pull/719/files?

    Hi folks, I am trying to run an MLeap transform using gRPC, but it fails on the server. Can someone help?

    grpcurl -plaintext -proto grpc.proto -proto mleap.proto -proto bundle.proto -d "$jsg" ml.combust.mleap.pb.Mleap/Transform

    Getting this error on the server:

    Exception in thread "grpc-default-executor-31" java.lang.Error: java.lang.ClassNotFoundException: json.DefaultFrameReader
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: json.DefaultFrameReader
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at ml.combust.mleap.runtime.serialization.FrameReader$.apply(FrameReader.scala:20)
    at ml.combust.mleap.grpc.server.GrpcServer.transform(GrpcServer.scala:86)
    at ml.combust.mleap.pb.MleapGrpc$anon$9.invoke(MleapGrpc.scala:291)
    at ml.combust.mleap.pb.MleapGrpc
    at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
    at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:272)
    at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:653)
    at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
    at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    ... 2 more

    Hi folks, I have opened a GitHub issue for the above error too; it's an Internal error if I try with ml.combust.mleap.json: https://github.com/combust/mleap/issues/730#issuecomment-740596806
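The ClassNotFoundException: json.DefaultFrameReader in the trace above is consistent with FrameReader receiving the short format name "json": the format string is used as a class-name prefix when the reader is resolved by reflection, so the fully qualified builtin format name is required. A sketch, assuming mleap-runtime 0.15+:

```scala
import ml.combust.mleap.runtime.serialization.{BuiltinFormats, FrameReader}

// FrameReader resolves "<format>.DefaultFrameReader", so the short name
// "json" yields ClassNotFoundException: json.DefaultFrameReader.
// Pass the fully qualified builtin format instead:
val reader = FrameReader(BuiltinFormats.json) // "ml.combust.mleap.json"
```

The Internal error seen with ml.combust.mleap.json is a separate problem from the class-loading failure.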
    Hi all, I opened an issue on MLeap.
    I want to serialize the feature importance vector, but MLeap does not support it. Is there any resolution?
    Hi all, I was wondering if it is intentional that the mleap-spring-boot docker image is only available in snapshot versions? https://hub.docker.com/r/combustml/mleap-spring-boot/tags
    I think so.
    Anca Sarb
    Hey @shmyer, I don’t think there is. I will make sure that we publish a release version going forward as well.
    Hey @mtsol, I will reply on the issue you raised shortly.