    karthik-bs
    @karthik-bs
    I would be reading a large JSON blob of data from a database, so would I essentially convert this into a leap frame and pass it to the pipeline?
    I would need to do some aggregation, compute lag in the data, and then implement a rule-based algorithm that would score yes or no based on the rule.
    This is my present use case; eventually this rule-based method would go away and be replaced by an ML classifier.
    Any pointers on how I could leverage MLeap to implement this functionality?
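Before any MLeap or Spark wiring, the aggregation/lag/rule step can be prototyped in plain Python; everything below (the record shape, the "value" field, the threshold) is made up for illustration, not part of any MLeap API:

```python
import json

def score(records, threshold=10.0):
    """Toy rule-based scorer: compute lag-1 differences of `value`
    and answer "yes"/"no" based on whether the mean lag exceeds a threshold."""
    values = [r["value"] for r in records]
    # lag-1 difference: value[i] - value[i-1]
    lags = [b - a for a, b in zip(values, values[1:])]
    mean_lag = sum(lags) / len(lags) if lags else 0.0
    return "yes" if mean_lag > threshold else "no"

# A JSON blob as it might come out of the database (shape is hypothetical):
blob = '[{"value": 1.0}, {"value": 20.0}, {"value": 45.0}]'
print(score(json.loads(blob)))  # lags 19.0 and 24.0, mean 21.5 > 10.0 -> "yes"
```

Once the rule is replaced by an ML classifier, only the `score` step would change; the aggregation/lag feature computation stays the same.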
    Bill Lindsay
    @yazbread
    For model management, when loading or unloading a model: are the timeout settings in seconds? Also, the Spring Boot app returns 202 when loading a model even if it didn't get loaded, so I need to do a GET call to see if it is actually there. I pass in Integer.MAX_VALUE for both the disk and memory timeouts. How can I tell if the model got unloaded due to a timeout?
    Felix Gao
    @gaotangfeifei_twitter

    Hi, I am new to MLeap and am trying the Airbnb example. I have encountered the following error:

    ERROR:root:Exception while sending command.
    Traceback (most recent call last):
      File "/usr/local/Cellar/apache-spark/2.4.4/libexec/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1159, in send_command
        raise Py4JNetworkError("Answer from Java side is empty")
    py4j.protocol.Py4JNetworkError: Answer from Java side is empty
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/Cellar/apache-spark/2.4.4/libexec/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 985, in send_command
        response = connection.send_command(command)
      File "/usr/local/Cellar/apache-spark/2.4.4/libexec/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1164, in send_command
        "Error while receiving", e, proto.ERROR_ON_RECEIVE)
    py4j.protocol.Py4JNetworkError: Error while receiving

    I am using Spark 2.4.4 and I have installed mleap using spark-defaults.conf

    spark.jars.packages  org.apache.spark:spark-avro_2.11:2.4.4,ml.combust.mleap:mleap-spark_2.11:0.15.0

    My terminal is showing an exception with NoClassDefFoundError:

    Exception in thread "Thread-4" java.lang.NoClassDefFoundError: ml/combust/bundle/serializer/SerializationFormat
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at py4j.reflection.CurrentThreadClassLoadingStrategy.classForName(CurrentThreadClassLoadingStrategy.java:40)
        at py4j.reflection.ReflectionUtil.classForName(ReflectionUtil.java:51)
        at py4j.reflection.TypeUtil.forName(TypeUtil.java:243)
        at py4j.commands.ReflectionCommand.getUnknownMember(ReflectionCommand.java:175)
        at py4j.commands.ReflectionCommand.execute(ReflectionCommand.java:87)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: ml.combust.bundle.serializer.SerializationFormat
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 9 more
    I do think I have installed the dependencies correctly
    Ivy Default Cache set to: /Users/ggao/.ivy2/cache
    The jars for the packages stored in: /Users/ggao/.ivy2/jars
    :: loading settings :: url = jar:file:/usr/local/Cellar/apache-spark/2.4.4/libexec/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
    org.apache.spark#spark-avro_2.11 added as a dependency
    ml.combust.mleap#mleap-spark_2.11 added as a dependency
    :: resolving dependencies :: org.apache.spark#spark-submit-parent-dbeefc3f-8e12-443d-8629-8adf19670d42;1.0
        confs: [default]
        found org.apache.spark#spark-avro_2.11;2.4.4 in central
        found org.spark-project.spark#unused;1.0.0 in local-m2-cache
        found ml.combust.mleap#mleap-spark_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-spark-base_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-runtime_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-core_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-base_2.11;0.15.0 in central
        found ml.combust.mleap#mleap-tensor_2.11;0.15.0 in central
        found io.spray#spray-json_2.11;1.3.2 in central
        found com.github.rwl#jtransforms;2.4.0 in central
        found ml.combust.bundle#bundle-ml_2.11;0.15.0 in central
        found com.google.protobuf#protobuf-java;3.5.1 in central
        found com.thesamet.scalapb#scalapb-runtime_2.11;0.7.1 in local-m2-cache
        found com.thesamet.scalapb#lenses_2.11;0.7.0-test2 in local-m2-cache
        found com.lihaoyi#fastparse_2.11;1.0.0 in local-m2-cache
        found com.lihaoyi#fastparse-utils_2.11;1.0.0 in local-m2-cache
        found com.lihaoyi#sourcecode_2.11;0.1.4 in local-m2-cache
        found com.jsuereth#scala-arm_2.11;2.0 in central
        found com.typesafe#config;1.3.0 in local-m2-cache
        found commons-io#commons-io;2.5 in local-m2-cache
        found org.scala-lang#scala-reflect;2.11.8 in local-m2-cache
        found ml.combust.bundle#bundle-hdfs_2.11;0.15.0 in central
    :: resolution report :: resolve 547ms :: artifacts dl 16ms
        :: modules in use:
        com.github.rwl#jtransforms;2.4.0 from central in [default]
        com.google.protobuf#protobuf-java;3.5.1 from central in [default]
        com.jsuereth#scala-arm_2.11;2.0 from central in [default]
        com.lihaoyi#fastparse-utils_2.11;1.0.0 from local-m2-cache in [default]
        com.lihaoyi#fastparse_2.11;1.0.0 from local-m2-cache in [default]
        com.lihaoyi#sourcecode_2.11;0.1.4 from local-m2-cache in [default]
        com.thesamet.scalapb#lenses_2.11;0.7.0-test2 from local-m2-cache in [default]
        com.thesamet.scalapb#scalapb-runtime_2.11;0.7.1 from local-m2-cache in [default]
        com.typesafe#config;1.3.0 from local-m2-cache in [default]
        commons-io#commons-io;2.5 from local-m2-cache in [default]
        io.spray#spray-json_2.11;1.3.2 from central in [default]
        ml.combust.bundle#bundle-hdfs_2.11;0.15.0 from central in [default]
        ml.combust.bundle#bundle-ml_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-base_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-core_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-runtime_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-spark-base_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-spark_2.11;0.15.0 from central in [default]
        ml.combust.mleap#mleap-tensor_2.11;0.15.0 from central in [default]
        org.apache.spark#spark-avro_2.11;2.4.4 from central in [default]
        org.scala-lang#scala-reflect;2.11.8 from local-m2-cache in [default]
        org.spark-project.spark#unused;1.0.0 from local-m2-cache in [default]
        :: evicted modules:
        com.google.protobuf#protobuf-java;3.5.0 by [com.google.protobuf#protobuf-java;3.5.1] in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   23  |   0   |   0   |   1   ||   22  |   0   |
        ---------------------------------------------------------------------
    ...
        confs: [default]
        0 artifacts copied, 22 already retrieved (0kB/15ms)
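One thing worth checking (an assumption, not a confirmed diagnosis): a NoClassDefFoundError raised at py4j reflection time, even though Ivy resolved all artifacts, often means the resolved jars never made it onto the driver JVM's classpath. A spark-defaults.conf sketch that exposes the resolved jar explicitly (the jar path is illustrative):

```
spark.jars.packages  org.apache.spark:spark-avro_2.11:2.4.4,ml.combust.mleap:mleap-spark_2.11:0.15.0
# Also expose the Ivy-resolved jars to the driver classloader (path illustrative):
spark.driver.extraClassPath  /Users/ggao/.ivy2/jars/ml.combust.mleap_mleap-spark_2.11-0.15.0.jar
```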
    Akarsh Gupta
    @akarsh3007
    Hi everyone, has anyone seen this problem with XGBoost serving where the predictions in Spark and MLeap serving are different? I am using MLeap version 0.11.
    Luca Giovagnoli
    @lucagiovagnoli
    @akarsh3007 what transformers are you using? Is it similar to combust/mleap#596?
    王伟
    @woneway
    I cannot find the file "bundle.json" in my model, which contains one custom transformer. Does anyone know about this?
    (image attached)
    The files in the zip look like this.
    Ganesh Krishnan
    @ganeshkrishnan1
    Does MLeap support Spark LDA? I can see combust/mleap#144 adding LDA support, but neither the documentation nor our code seems to work.
    Luca Giovagnoli
    @lucagiovagnoli
    hi @ancasarb, do you know if MLeap Runtime is thread-safe? I cannot see many ‘synchronized’ functions in the codebase https://github.com/combust/mleap/search?l=Scala&q=synchronized so I assume it’s not. I wonder if there have been any clear reports of it being non-thread-safe.
    Anca Sarb
    @ancasarb
    Hi @lucagiovagnoli, do you mean things like the FrameReader(s), RowTransformer, Transformer, FrameWriter(s) etc?
    If so, then yes, they’re thread-safe. There’s no need for synchronization; most are stateless.
    We have had all these beans wired as singleton beans (if you’re familiar with the Spring framework in Java) without any issues for 3+ years.
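The singleton pattern described here can be illustrated generically: one stateless, immutable transformer instance shared across threads with no locking. This is a pure-Python analogy, not MLeap API:

```python
from concurrent.futures import ThreadPoolExecutor

class StatelessTransformer:
    """Analogy for an MLeap Transformer: all configuration is set once and
    never mutated, and transform() touches no shared mutable state, so a
    single instance can be shared across threads without synchronization."""
    def __init__(self, scale):
        self.scale = scale  # immutable after construction

    def transform(self, row):
        return [x * self.scale for x in row]

# Wired once and reused everywhere, like a singleton bean:
shared = StatelessTransformer(scale=2.0)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(shared.transform, [[1.0], [2.0], [3.0]]))
print(results)  # [[2.0], [4.0], [6.0]]
```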
    Luca Giovagnoli
    @lucagiovagnoli
    @ancasarb thanks so much for sharing your valued experience. I’m not familiar with beans but I’m going to read up about it now :)
    Transformer and RowTransformer is what we’re using, so that sounds great!
    Daniel Hen
    @Daniel8hen
    Hi all, I wanted to ask a junior question :)
    I have a Spark model (XGBoost4J), already serialized in the famous MLeap bundle JSON. Now I'd like to deploy it to some service on Docker/Kubernetes and start querying it. My question is: where do I put the parameters that are relevant to each request? If I have, say, 1000 features and only 500 of them are relevant, how should I tackle this use case? Where should I start? The documentation is not that clear about this use case. Thank you!
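If the target is an MLeap-serving style REST endpoint, requests are leap frames in JSON: the schema in the request lists the fields the pipeline's input schema expects, so only those columns need to be sent. A minimal payload sketch (field names and types are hypothetical):

```python
import json

# A leap-frame JSON payload: a schema plus rows aligned with that schema.
# Only the fields the pipeline's input schema requires need to appear;
# "feature_a"/"feature_b" are placeholders for real feature names.
payload = {
    "schema": {
        "fields": [
            {"name": "feature_a", "type": "double"},
            {"name": "feature_b", "type": "double"},
        ]
    },
    "rows": [[1.5, 2.5]],
}
body = json.dumps(payload)
print(body)
```

Each row is a plain list ordered to match `schema.fields`, so a request carrying 500 relevant features would declare exactly those 500 fields.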
    prafulrana21
    @prafulrana21
    hi @hollinwilkins, how can I get the list of all the deployed models?
    mtsol
    @mtsol
    @ancasarb How can I replace the value of a column in a leap frame using .withColumn? Or any helpful advice.
    mtsol
    @mtsol
    how can I execute the transformation of a custom transformer when I have the same input and output columns?
    mtsol
    @mtsol
    @ancasarb how can I check if a column exists in a leap frame?
    Gustavo Salazar Torres
    @tavoaqp
    hey guys, I'm working on a Golang library for MLeap; to begin with, I'm trying to bring in Word2Vec models. So far my problem has been understanding how the JSON model is parsed. Is there any documentation about this?
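For orientation, an MLeap bundle zip is laid out roughly as below (the stage directory name is illustrative): bundle.json sits at the top level with bundle-wide metadata, and each node in the pipeline serializes its own model.json/node.json pair:

```
simple-pipeline.zip
├── bundle.json               # bundle-level metadata: uid, name, format, MLeap version
└── root/                     # the root node (usually the Pipeline itself)
    ├── model.json            # op type ("pipeline") and its attributes
    ├── node.json             # node name and input/output shapes
    └── word2vec_abc123.node/ # one directory per stage (name illustrative)
        ├── model.json        # op type and model attributes (e.g. word vectors)
        └── node.json
```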
    marvin xu
    @marvinxu-free
    Has anyone met the error "Failed to find a default value for splits" while saving a model with MLeap?
    Anca Sarb
    @ancasarb
    @marvinxu-free I’ve replied on the GitHub issue.
    Anca Sarb
    @ancasarb
    @prafulrana21 at the moment it seems we don’t have an endpoint for that if you’re using the spring boot service (https://github.com/combust/mleap/tree/master/mleap-spring-boot). Let me know if you’re interested in adding one!
    Anca Sarb
    @ancasarb
    @mtsol I’ve replied to the questions here combust/mleap#660, hope it helps!
    marvin xu
    @marvinxu-free
    @ancasarb I have reopened the issue; please take a look at it on GitHub.
    marvin xu
    @marvinxu-free
    @ancasarb combust/mleap#676
    Anca Sarb
    @ancasarb
    Hi @here, I’ve just released the latest version of MLeap (0.16.0), both the Scala projects and the PyPI package. Release notes are at https://github.com/combust/mleap/blob/master/RELEASE_NOTES.md. Thank you all for your contributions and support! I will be updating the documentation in the next few days.
    Igor
    @GoshaP
    Hi, @here. Is there any example of how to serialize a trained TensorFlow model to an MLeap bundle? The documentation proposes using the TensorFlow freeze_graph function, but it's unclear how that can be used to generate an MLeap bundle.
    Nastasia Saby
    @NastasiaSaby
    Does anyone know how to load back a RandomForest model written in scikit-learn, please? I would like to use PySpark to do that, but I can't find a good way.
    Daniel Hen
    @Daniel8hen
    Did anyone ever try to save an XGBoost4J model (as part of a Spark pipeline, via Bundle.ML) and load it in a Docker container behind a REST API? I'm having some difficulties...
    @ancasarb could you kindly assist?
    Thank you!
    Nastasia Saby
    @NastasiaSaby
    Hello. I'm still stuck. Do you know if it is possible to save a model/pipeline built with scikit-learn as a zip? I can't find a way to do that. Thank you.
    Nastasia Saby
    @NastasiaSaby
    I found a solution. My problem was linked to "databricks". If anyone else is interested, I explained my workaround here: combust/mleap#690
    wyan
    @kungfunerd_twitter
    I am trying to use MLeap to log a logistic regression (pyspark.ml.classification.LogisticRegression) model by doing mlflow.mleap.log_model(spark_model=model, sample_input=test_data.limit(1), artifact_path=SAGEMAKER_APP_NAME) and then deploying to SageMaker. But when I use boto3 to make the prediction call, the SageMaker endpoint only returns the prediction label (1 or 0), without the probability value. Is there anywhere I can look to debug this problem?
    marvin xu
    @marvinxu-free
    Has anyone met this problem: java.util.NoSuchElementException: key not found: org.apache.spark.ml.PipelineModel?
    Using mleap-spark_2.3.0, serializing the pipeline model in a local environment succeeds, while it fails in cluster mode.
    Also, it seems reference.conf in mleap-xgboost-spark overwrites reference.conf in mleap-spark.
    @Daniel8hen did you get any resolution?
    Monark Singh
    @monark789_gitlab

    Hi Guys,

    Is there any way to load models when mleap-runtime is added as a dependency in a Java app? I couldn't figure it out from the available Javadoc.

    I could only find the API way of loading the models.

    Andrea Guidi
    @guidiandrea

    Hello everybody :)

    I am new to this chat. I was reading the MLeap documentation and I really think it's a great product. My only concern is that (as far as I know) it's not possible to use Spark-NLP annotators or any other Python NLP package. Did anybody manage to build a pipeline with a lemmatizer or any other processing step that is not included in the default Spark ML or sklearn modules?

    mtsol
    @mtsol
    Is there any way of serializing a List[String] into a separate folder, as is done in the DecisionTree and GBT serializers?
    Alex Holmes
    @alexholmes
    hi folks - is there a rough sense of when JDK 11 / Spark 3 support may be added, as per combust/mleap#475? thanks so much
    Talal
    @talalryz
    @ancasarb hey, hope you're doing well! Could you take a look at https://github.com/combust/mleap/pull/719/files