Hello, I'm following the getting started guide for PreScience and I get an error when I try to train the model. Any idea?
Hello, this is a known issue that we are trying to fix. Have you tried relaunching the model training? Otherwise, one solution could be to recreate the dataset.
@ChrisRannou_twitter yes, I had already tried retraining, but without success. I'll try recreating the dataset. Thanks.
Hello, does anyone plan to be at the OVH Summit this week?
It seems that you have a track, @jqueguiner, at 15:00?
Maël LE GAL
Almost all of the team will be at the summit, yes.
Do you have anything particular planned outside of the main program?
Maël LE GAL
Throughout the event you will find us at the Public Cloud booth.
Guillaume will present a demo of the AI MarketPlace at 13:25
Adrien will present a demo of our serving engine at 14:40
Adrien and I will run a lab on Prescience at 13:00
Christophe & Clément will present a breakout session about Machine Learning & Time-series at 15:00
Jean-Louis will host a round-table discussion on AI at 15:00
Schedules may change
Thanks a lot Maël !
Does the lab about Prescience require any technical knowledge?
Maël LE GAL
Basic knowledge of Machine Learning will help a lot, but we will try to keep things as simple as possible. You'll just need to bring a laptop. We will explain the two ways of using Prescience, and attendees will choose the option they are most comfortable with:
From the UI (which is the simplest)
From a Jupyter notebook (you will need some Python skills)
Hello! We are currently working on several near-real-time operations. We would like to avoid pre-computing intermediate GTS. Yet we currently spend more than 10 minutes on just a count over a 24 h window with many GTS and values. Do you have any advice to address this problem? Is the pre-computation needed?
Hello, thanks @cedricmourizard_twitter, we will have a look :)
Hello, @vballu. Are you using Prescience with the metrics connector ?
@jagwar, hello. I'm not using Prescience for this kind of operation; it's more of a data engineering issue. After some investigation, I'm pretty sure we can't handle it the way we wanted to proceed (i.e. real-time processing over millions of GTS with huge JSON strings as values). Network and memory are bottlenecks that are expensive to manage for a low return in terms of performance. I think we will proceed by rethinking our data pipeline so the data is easier and lighter to handle. Prescience will come in a second step.
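As a side note on the windowed count discussed above: one generic way to avoid re-scanning a full 24 h window on every query is to keep the window in memory and update the aggregate incrementally, so each query is O(points evicted) rather than O(window size). A minimal Python sketch of the idea (all names are hypothetical; this is not a Warp 10 or Prescience API):

```python
from collections import deque
from datetime import datetime, timedelta

class RollingCount:
    """Incrementally maintained count over a sliding time window.

    Instead of recomputing the count from raw storage on each query,
    new points are appended and expired points are evicted lazily.
    """

    def __init__(self, window=timedelta(hours=24)):
        self.window = window
        self.points = deque()  # (timestamp, value) pairs, oldest first

    def add(self, ts, value):
        # Assumes points arrive in timestamp order.
        self.points.append((ts, value))
        self._evict(ts)

    def count(self, now):
        self._evict(now)
        return len(self.points)

    def _evict(self, now):
        # Drop everything older than the window start.
        cutoff = now - self.window
        while self.points and self.points[0][0] < cutoff:
            self.points.popleft()


# One point per hour for 30 hours; only the last 24 h are counted.
rc = RollingCount()
t0 = datetime(2019, 10, 1)
for h in range(30):
    rc.add(t0 + timedelta(hours=h), 1.0)
print(rc.count(t0 + timedelta(hours=29)))  # points from hour 5 to 29 remain
```

The same pattern extends to sums or averages by keeping a running accumulator alongside the deque; the pre-computation the thread mentions is essentially this, persisted as an intermediate GTS.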