    lukas-mi
    @lukas-mi
    Thanks for the response, found the usage of the underlying Undertow server. So I guess I'll just override cask.Main.main and create the server directly. Though it would be nice for defaultHandler to be exposed publicly without needing to extend cask.Main, because right now I only need defaultHandler from it :).
    Li Haoyi
    @lihaoyi
    yeah cask.Main is a bit of an opaque blob; I never really thought about providing a nice API, if you have some ideas PRs are welcome. But for now, it's hopefully small enough you can deconstruct it and use the pieces you care about directly
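
    A minimal sketch of that deconstruction, assuming cask.Main's defaultHandler, host and port members are accessible from a subclass (member names may differ between cask versions):

    object MyRoutes extends cask.Routes {
      @cask.get("/")
      def hello() = "Hello World"
      initialize()
    }

    object App extends cask.Main {
      val allRoutes = Seq(MyRoutes)

      // bypass cask.Main.main and drive Undertow directly, reusing cask's routing handler
      override def main(args: Array[String]): Unit = {
        val server = io.undertow.Undertow.builder()
          .addHttpListener(port, host)
          .setHandler(defaultHandler)
          .build()
        server.start()
      }
    }
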
    jodersky
    @jodersky
    Happy new year! @lihaoyi I just saw someone mention that the undertow version used by cask contains some vulnerabilities. Is it ok for you if I merge this https://github.com/lihaoyi/cask/pull/42/files?
    Li Haoyi
    @lihaoyi
    go for it
    jodersky
    @jodersky
    I forgot that the upgrade to undertow changes the cookie handling slightly. It will no longer handle cookie values with spaces implicitly; those must now be quoted explicitly. This doesn't affect cask itself, but one of the tests now fails because of a missing feature in scala-requests (lihaoyi/requests-scala#73)
    I can update the test for now, but ideally we'd also fix scala-requests to handle spaces in cookies correctly
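
    For illustration, with the upgraded Undertow a cookie value containing a space would now have to be sent quoted explicitly, e.g. (a hypothetical response, not one of the cask tests):

    cask.Response(
      "ok",
      cookies = Seq(cask.Cookie("greeting", "\"hello world\""))  // value quoted explicitly
    )
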
    Li Haoyi
    @lihaoyi
    sounds good
    Simon Parten
    @Quafadas
    Does anyone have a link to a repo with Cask at the back and Scala.js at the front? Does such a thing exist / work "comfortably" together? Intuitively it feels like it could work well...
    discobaba
    @uncleweirdo_twitter
    I'm trying to blend cask with a legacy web app (Turbine) so I can slowly convert to a steady diet of scala. I replaced an external web server with embedded undertow thinking I could just split the requests by prefix. That basically works. However I'm not being particularly adept at figuring out how to share the session. Turbine is creating a cookie with the standard JSESSIONID and passing it around, but that cookie isn't showing up in the cask session. Can anyone send me down the right path?
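
    One way to at least read the container's cookie from a cask endpoint is the documented cask.Cookie parameter binding, where the parameter name matches the cookie name; a sketch only, not a full session-sharing solution:

    @cask.get("/whoami")
    def whoami(JSESSIONID: cask.Cookie) = {
      // the raw session id issued by Turbine; looking up the actual session is still up to you
      JSESSIONID.value
    }
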
    Siddhartha Gadgil
    @siddhartha-gadgil
    Simon Parten
    @Quafadas
    @siddhartha-gadgil Thanks. I'll have a play with these ... I think it's exactly what I was looking for. Much appreciated.
    daymo
    @daymo
    I'm seeing a warning in Firefox that says "The script from “http://localhost:8080/static/app.js” was loaded even though its MIME type (“”) is not a valid JavaScript MIME type." And indeed, when I check the Content-Type header in the browser, it is not set. How can I set it in Cask? I tried naively
    script(type := "text/javascript", src := "/static/app.js") but it didn't make the warning disappear.
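
    As a workaround sketch (the file location below is an assumption), the script could be served through a regular endpoint that sets the Content-Type header explicitly:

    @cask.get("/static/app.js")
    def appJs() = {
      val bytes = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("static/app.js"))
      cask.Response(bytes, headers = Seq("Content-Type" -> "text/javascript"))
    }
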
    simanto604newscred
    @simanto604newscred
    Hi all, is it possible to generate an API spec (i.e. Swagger) using Cask?
    @lihaoyi
    jodersky
    @jodersky
    that's not built into cask. However, since routing information is available at compile-time, you may be able to build a solution that generates swagger docs for you
    simanto604newscred
    @simanto604newscred
    @jodersky Do you have any reference example of doing so?
    jodersky
    @jodersky
    I don't have any example, but I can give you an overview of how the problem could be approached. When you call initialize() in a cask Routes object, it will traverse all methods defined in the object and derive routing information for all methods annotated with an "endpoint" annotation (common endpoints are defined in the cask.endpoints package, for example cask.get). This information is contained in a RoutesEndpointMetadata object and is used by the final HTTP request routing mechanism (essentially the code that instructs the webserver how to map HTTP requests to method calls)
    I think that it would be possible to inspect this metadata to generate swagger specs for most simple endpoints
    (but do note that I'm not too familiar with what swagger has to offer, and that I don't know if a cask endpoint can always cleanly be mapped to an OpenAPI spec)
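
    A very rough sketch of that inspection; the member names used here (caskMetadata, value, endpoint.path) refer to cask's internal API and may differ between versions:

    object MyRoutes extends cask.Routes {
      @cask.get("/hello/:name")
      def hello(name: String) = s"Hello $name"
      initialize()
    }

    // walk the metadata that initialize() derived and print each endpoint's path,
    // the kind of information a swagger/OpenAPI generator would start from
    for (m <- MyRoutes.caskMetadata.value) println(m.endpoint.path)
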
    jodersky
    @jodersky
    I just published 0.7.10, featuring support for the latest Scala 3.0.0-RC3.
    Anatolii Kmetiuk
    @anatoliykmetyuk
    Hi everyone! Is there any reason why Cask's CI wasn't updated to GitHub Actions? I see all the rest of the projects from com-lihaoyi org are updated but not this one. Are there any blockers?
    jodersky
    @jodersky
    No blockers, someone just needs to find the time to do it :)
    jodersky
    @jodersky
    0.7.11 is out, featuring support for the latest Scala version, 3.0.0
    zetashift
    @zetashift
    Thanks for all the hard work, cask is great!
    megri
    @megri

    I just tried the new version on 3.0 and I can't replicate com-lihaoyi/cask#30 any longer. I might as well close it.

    However, one thing that struck me is that cask returns a 405 (method not allowed) for undefined routes if the method has not been used for at least one other route. It seems like this should be a 404 instead?

    jodersky
    @jodersky
    might be related to this recent PR com-lihaoyi/cask#46
    cask groups all route tries per method, and the lookup errors out early, so a 405 is returned instead of a 404
    jodersky
    @jodersky
    It looks like a proper fix would be to change cask's internal routing system to include methods in the dispatch trie, instead of having one trie per method
    megri
    @megri
    I think a good structure would be something like Map[Path, Map[Method, Endpoint]]
    But paths would have to be iterated on every request for more complex paths. Perhaps that's why the dispatch trie groups by methods.
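
    An illustrative sketch of that shape (not cask's actual internals), showing how a path miss versus a method miss would map to 404 versus 405:

    type Method = String
    type Handler = String  // stand-in for a real endpoint

    val routes: Map[List[String], Map[Method, Handler]] = Map(
      List("user") -> Map("get" -> "getUser", "post" -> "createUser")
    )

    def status(path: List[String], method: Method): Int = routes.get(path) match {
      case None           => 404  // unknown path
      case Some(byMethod) => if (byMethod.contains(method)) 200 else 405  // known path, wrong method
    }
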
    jodersky
    @jodersky
    I'm thinking that we could fold all the per-method tries into a single one, but append the method as the final path component
    what would need to be added to the trie is the ability to distinguish between method lookup failures and regular path failures
    we could of course also have a nested method trie for every path trie, but that IMO seems more complex than augmenting the existing dispatch trie
    Anatolii Kmetiuk
    @anatoliykmetyuk
    Hi everyone, is there any chance anyone can look into com-lihaoyi/cask#50?
    megri
    @megri
    @jodersky I've made a PR for #51 with the shape DispatchTrie[Map[String, …]]. It's basically the old structure inverted.
    Martin Ceronio
    @mydoghasworms

    Hi! I am new to both Scala and Cask. I am creating a small solution using Cask to run on a local machine and need a way to start and stop the server on an ad-hoc basis.

    Based on the examples in the Cask documentation, it seems that Cask is designed to always start automatically (either by extending cask.MainRoutes with the route definitions or extending cask.Main to include multiple route definitions).

    Is there a way I can embed Cask into an application, from which I can control starting and stopping it as required?

    Li Haoyi
    @lihaoyi
    @mydoghasworms look at what the Cask test suite does, and do that
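
    A rough sketch of such an embedding, loosely modelled on driving Undertow directly; the EmbeddedCask class and its start/stop methods are made up for illustration:

    class EmbeddedCask(val allRoutes: Seq[cask.Routes],
                       override val host: String = "localhost",
                       override val port: Int = 8080) extends cask.Main {
      private var server: io.undertow.Undertow = null

      def start(): Unit = {
        server = io.undertow.Undertow.builder()
          .addHttpListener(port, host)
          .setHandler(defaultHandler)
          .build()
        server.start()
      }

      def stop(): Unit = if (server != null) server.stop()
    }

    // usage, given any cask.Routes object:
    //   val srv = new EmbeddedCask(Seq(MyRoutes)); srv.start(); ...; srv.stop()
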
    Martin Ceronio
    @mydoghasworms
    Thanks @lihaoyi
    gaminricks87
    @gaminricks87:matrix.org
    [m]

    @lihaoyi: Hi! I am new to Scala and have been looking everywhere for something like Flask in Scala, and Cask seems like the best minimalistic option.
    Is something like what the code below does possible in Cask, and if so, how can I implement it? Please help!

    app = Flask(__name__)

    @app.route("/get_emp_info", methods=['POST'])
    def get_employee_record():
        input_data = json.loads(request.get_json())
        out_data = input_data.to_dict(orient='records')
        return jsonify(out_data)

    if __name__ == "__main__":
        app.run(host='0.0.0.0', port=6123)

    Even if you can provide some link that does something very close to this, that'd be helpful. I'd basically want to be able to run it as a Scala RESTful application.

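    A hedged sketch of a roughly equivalent cask application; the route and field names simply mirror the Flask snippet above, and the posted JSON is parsed and echoed back:

    object EmpApp extends cask.MainRoutes {
      override def host = "0.0.0.0"
      override def port = 6123

      @cask.post("/get_emp_info")
      def getEmployeeRecord(request: cask.Request) = {
        val inputData = ujson.read(request.bytes)  // parse the POSTed JSON body
        cask.Response(
          ujson.write(inputData),                  // echo it back as a JSON string
          headers = Seq("Content-Type" -> "application/json")
        )
      }

      initialize()
    }
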
    gaminricks87
    @gaminricks87:matrix.org
    [m]
    Hi, is it possible to set the port number and host in the minimal application example type of program?
    Also, why does reading JSON into a Spark DataFrame and returning the DataFrame back as a JSON string take so long with the read option json... Are there any alternatives?
    jodersky
    @jodersky
    @gaminricks87:matrix.org check out some of the examples on how cask can read JSON. You can override port and host in Main (https://github.com/com-lihaoyi/cask/blob/f3fd29a9206cee465812a80fe6bd04c6c63500c2/cask/src/cask/main/Main.scala#L36-L37). Unfortunately I can't help with the Spark dataframes issue
    gaminricks87
    @gaminricks87:matrix.org
    [m]
    Thank you @jodersky and @Quafadas

    request: cask.Request

    How can we read this if it is JSON? Like request.get_json() in Python?

    jodersky
    @jodersky

    You can access the request's body and give it to your JSON parser, for example

    @cask.get("/")
      def foo(req: cask.Request) = {
        val json = upickle.default.read[ujson.Value](req.bytes)
        ...
      }

    However, there are more idiomatic ways to read JSON in cask. The two that come to mind are:

    1. if you're using ujson, you can follow the example here https://com-lihaoyi.github.io/cask/#receiving-form-encoded-or-json-data (a short sketch of that style follows below)
    2. for an alternate json library, you could implement your own endpoints https://com-lihaoyi.github.io/cask/#custom-endpoints
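
    A short sketch of the first, documented style, where fields of the posted JSON body bind directly to the endpoint's parameters (roughly the example from the linked docs):

    @cask.postJson("/json")
    def jsonEndpoint(value1: ujson.Value, value2: Seq[Int]) = {
      "OK " + value1 + " " + value2
    }
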
    gaminricks87
    @gaminricks87:matrix.org
    [m]
    @jodersky: Appreciate the quick response.
    I'm unable to access the request's body as you mentioned above; I'm only getting a readAllBytes() option
    gaminricks87
    @gaminricks87:matrix.org
    [m]
    I can access req.data, which returns an object like this: io.undertow.io.UndertowInputStream@6d81f169
    jodersky
    @jodersky
    strange, the bytes method is implemented right in the request class https://github.com/com-lihaoyi/cask/blob/master/cask/src/cask/model/Params.scala#L20