    baaastijn
    @baaastijn
    @NicoCG70_twitter it’s now working :)
    Nicocg70
    @NicoCG70_twitter
    Thanks... I will try soon. Are commercial contacts here too?
    baaastijn
    @baaastijn
    you mean on the channel? I don't think so
    In short: we first developed AI Marketplace with pre-trained models, FREE for all, but with some limitations (number of calls, etc.)
    A month ago we released ML Serving in the OVH control panel,
    allowing you to deploy your own models
    or OVH pre-trained ones.
    So far we have only pushed Sentiment Analysis, but we are here to discuss if you need something else.
    I'm the Product Manager for Data, and I will act as the bridge with the commercial team if required.
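A minimal sketch of what calling one of those pre-trained marketplace models could look like from Python. Note the route, JSON field names, and auth header here are assumptions for illustration, not OVH's documented API:

```python
import json
import urllib.request

# Hypothetical endpoint: the real route on market-place.ai.ovh.net may differ.
SENTIMENT_URL = "https://market-place.ai.ovh.net/sentiment-analysis/analyze"

def build_request(text: str, token: str) -> urllib.request.Request:
    """Build a JSON POST request for the (hypothetical) sentiment endpoint."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        SENTIMENT_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("OVH ML Serving works great", "my-api-token")
    with urllib.request.urlopen(req) as resp:  # performs the actual HTTP call
        print(json.load(resp))
```

The call-count limitation mentioned above would surface as an HTTP error from `urlopen`, so production code should catch `urllib.error.HTTPError`.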
    Nicocg70
    @NicoCG70_twitter
    DeOldify works perfectly, many thanks @baaastijn
    🎙Jean-Louis Queguiner
    @JiliJeanlouis_twitter
    Sorry @NicoCG70_twitter, my mistake on the swagger.
    Fabien Antoine
    @rhanka
    Hi all, just discovered https://market-place.ai.ovh.net, exciting
    Some colleagues worked on https://ia-flash.fr, which seems to be quite a bit better for car recognition (~20 million car images for training the kernel). How are the APIs integrated into the catalogue?
    Would also be interested to integrate https://deces.matchid.io, which relies on an API that allows bulk matching (still under construction but will be stable for bulk in 1 month). Useful for liberal professions and large administrations (banking, retirement payments, etc.)
    Fabien Antoine
    @rhanka
    (this last one will have an ML-based rescoring feature soon but is not fully ML; the first one is in PyTorch)
    baaastijn
    @baaastijn
    hello @rhanka ! For the marketplace, new models are integrated by OVH on demand
    recently we also launched a service to deploy your own models: ML Serving
    it's like this marketplace but integrated in the OVH control panel
    We allow customers to deploy their own models, but also to select pretrained models like IA Flash from a catalog
    I see that IA Flash is under the Apache 2 license, cool!
    I'll check with our team
    Thomas PEDOT
    @slamer59
    Hello, I wonder how to set up an environment with Jupyter or whatever
    The idea is to just launch this instance when I need it.
    baaastijn
    @baaastijn
    hello @slamer59 , so you want an instance with Jupyter installed, and to be able to stop/start it?
    Thomas PEDOT
    @slamer59
    Yes! Like an on-demand instance
    Thomas PEDOT
    @slamer59
    It's OK, I found where to set this up. (y)
    baaastijn
    @baaastijn
    public cloud VM + snapshot (I'd do it like that)
    I have a Jupyter running like this for 3 euros per month (VM S1-2)
    Thomas PEDOT
    @slamer59
    I will do that with a GPU session. Once I can connect to it ...
    Thomas PEDOT
    @slamer59
    I don't know which password I need to type...
    Sylvain Corlay
    @SylvainCorlay
    @/all
    info2000
    @info2000
    Hello, do models saved with TensorFlow 2.1 work on OVH AI Serving?
    baaastijn
    @baaastijn
    re, sorry, I misread it. <=1.15 for the TensorFlow version, cc @info2000
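A quick client-side guard for that constraint might look like this. The <=1.15 cutoff comes from the message above; the helper name and signature are mine:

```python
def tf_version_supported(version: str, max_version: tuple = (1, 15)) -> bool:
    """Return True if a TensorFlow version string (e.g. '1.15.2') is at or
    below the 1.15 line that ML Serving currently accepts (assumption from
    the chat above; check the current docs before relying on it)."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) <= max_version

# Check before exporting a SavedModel for upload:
# tf_version_supported("1.15.2")  -> True
# tf_version_supported("2.1.0")   -> False
```

Comparing `(major, minor)` tuples avoids the classic string-comparison bug where "1.9" sorts after "1.15".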
    info2000
    @info2000
    @baaastijn thanks, the sample is very similar to TF 2 syntax.
    Any timeline to accept TF2? Thanks
    vballu
    @vballu

    Hi everyone!
    After a long year with a waiting-to-be-used Prescience token, I am finally starting to explore this tool.
    Yet, I have some issues connecting my Warp10 backend to it. I always get the 'unable to fetch data from time series backend' error message. I assume it comes from a misconfiguration when I add a source, but I can't figure out how to fix it, because my WarpScript works on a Quantum instance. So I assume the expected backend URL must be the address only, without the endpoint (/api/v0/exec). I also assumed that the data can be numerous GTS, even though I have tried with only one GTS as output.

    So, I'm now clueless and some help would be really appreciated :)

    Maël LE GAL
    @mael-le-gal
    Hello @vballu
    • What is the ID of your Prescience project, please? (you can see it once you're logged in to Prescience, in your browser, at the bottom left corner)
    • Have you created a Warp10 Source or a TimeSerie Source?
    vballu
    @vballu
    Hello @mael-le-gal
    We have tried both warp10 and TimeSerie sources.
    The project id is c5ea632f-5184-4659-8bde-ae11c303aca8
    Maël LE GAL
    @mael-le-gal
    In the logs I see an error 500 received from the Warp10 server, saying: LIMIT cannot extend limit past 1000000
    The WarpScript query is probably returning too many points
    To answer your other questions:
    • yes, the backend URL should be filled in without the HTTP path (/api/v0/exec)
    • If you use the TimeSerie source you can only have a single GTS
    • If you use the Warp10 source you can have several
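The first bullet can be sketched as a tiny helper showing how the full exec endpoint would be derived from the backend address you enter in the source configuration (the /api/v0/exec path is the Warp10 exec route mentioned above; the function name is mine):

```python
def warp10_exec_endpoint(backend_url: str) -> str:
    """Given the backend address entered in the source configuration
    (address only, no HTTP path), return the full Warp10 exec endpoint
    that the client would call."""
    return backend_url.rstrip("/") + "/api/v0/exec"

# warp10_exec_endpoint("https://warp10.example.com")  -> "https://warp10.example.com/api/v0/exec"
```

Entering the full exec URL as the backend address would produce a doubled path like /api/v0/exec/api/v0/exec, which matches the "unable to fetch data" symptom described above.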
    vballu
    @vballu
    Okay, that is strange, because in my last tries I voluntarily reduced the fetch span a lot to avoid this problem.
    I will try to increase the hard limit. I'll let you know. Thank you for your answer