For text recognition, I haven't yet tried the OCR API on landscape pictures with tags or things like that, for example.
OCR is the text-extraction part.
Thanks for the feedback.
Hey, this could be a bit of a silly question. I've been playing around with the Image Deblurring API. Is there any way to get better results? Looking at the source code, you can clearly run training on it to improve results. Is this something I could do just by sending in loads of images?
Hi @pictowolf, thanks for testing our products. For now we are focusing on providing state-of-the-art models on this platform with no specific training, but you can indeed train a model on your own. We will make more progress on this in the future. One colour-related improvement, for example: the black-and-white deblurring model is more efficient than the colour one because there are fewer channels to manage, so one option is to combine the black-and-white deblurring model with image colorization, especially the example-based colorization method.
For now the focus is to keep providing state-of-the-art APIs in various domains such as speech-to-text, text-to-speech, text-to-text and image-to-image (including video). A Custom Studio for building your own models will be available soon.
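To illustrate the "deblur in grayscale, then colorize" idea mentioned above, here is a minimal NumPy sketch. It uses a crude unsharp mask as a stand-in for a learned deblurring model and borrows chroma from a reference image as a stand-in for example-based colorization; all function names here are hypothetical and are not the Marketplace API.

```python
import numpy as np

def rgb_to_luma(img):
    # ITU-R BT.601 luma weights: collapse 3 channels to 1.
    return img @ np.array([0.299, 0.587, 0.114])

def unsharp_mask(gray, amount=1.0):
    # Crude single-channel "deblur" stand-in: sharpen by subtracting
    # a 3x3 box blur (a real deblurring model would be learned).
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(gray, 1, mode="edge")
    blur = sum(
        k[i, j] * pad[i:i + gray.shape[0], j:j + gray.shape[1]]
        for i in range(3) for j in range(3)
    )
    return np.clip(gray + amount * (gray - blur), 0.0, 1.0)

def recolor_from_example(sharp_luma, reference_rgb):
    # Example-based colorization stand-in: borrow the chroma
    # (colour minus luma) of a reference image and add it back
    # onto the sharpened luma channel.
    ref_luma = rgb_to_luma(reference_rgb)[..., None]
    chroma = reference_rgb - ref_luma
    return np.clip(sharp_luma[..., None] + chroma, 0.0, 1.0)

# Toy 8x8 "blurry" image; here the blurry input doubles as the
# colour reference, but any aligned example image would do.
rng = np.random.default_rng(0)
blurry = rng.random((8, 8, 3))
sharp = recolor_from_example(unsharp_mask(rgb_to_luma(blurry)), blurry)
print(sharp.shape)  # (8, 8, 3)
```

The single-channel step is where the "fewer channels to manage" efficiency comes from: the deblurring work runs on one plane instead of three, and colour is reattached afterwards.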
Hi @jqueguiner Thanks for the information. I will keep following the progress, great work so far and looking forward to seeing what else comes out :)
Hey @pictowolf, thanks! What's missing, or what could be improved, from your point of view?
Hi @baaastijn So I've been looking at different types of image processing. I know Jean said you are focusing on providing new models currently. It could be interesting to see things like https://github.com/jqueguiner/DeOldify and https://github.com/alexjc/neural-enhance on the marketplace. In terms of improvements, responsiveness and ease of use have been good so far. I am interested in the Custom Studio mentioned, though.
Hello, I'm following the getting-started guide for PreScience and I get an error when I try to train the model. Any idea?
Hello, this is a known issue that we are trying to fix. Have you tried relaunching the model training? Otherwise, one workaround could be to recreate the dataset.
@ChrisRannou_twitter Yes, I had already tried re-running the training, but with no result. I'll try recreating the dataset. Thanks.
Hello guys, does anyone plan to be at the OVH Summit this week?
It seems that you have a talk at 15:00, @jqueguiner?
Maël LE GAL
Yes, almost the whole team will be at the summit.
Do you have anything particular planned outside the main program?
Maël LE GAL
Throughout the event you will find us at the Public Cloud booth.
Guillaume will present a demo of the AI MarketPlace at 13:25
Adrien will present a demo of our serving engine at 14:40
Adrien and I will run a lab on Prescience at 13:00
Christophe & Clément will present a breakout session about Machine Learning & Time-series at 15:00
Jean-Louis will moderate a round-table discussion on AI at 15:00
Schedules may change
Thanks a lot Maël !
Does the lab about Prescience require any technical knowledge?
Maël LE GAL
Basic knowledge of Machine Learning will help a lot, but we will try to keep it as simple as possible. You'll just need to bring a laptop. We will explain the two ways of using Prescience, and attendees can choose the one they are comfortable with:
From the UI (which is the simplest)
From a Jupyter notebook (you will need some skills with Python)
Hello! We are currently working on several near-real-time operations. We would like to avoid pre-computing intermediate GTS. Yet we currently spend more than 10 minutes on just a count over a 24h window with lots of GTS and values. Do you have any advice to address this problem? Is the pre-computation needed?
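For what it's worth, the pre-computation being asked about usually means rolling raw datapoints up into coarse buckets at ingest time, so a 24h count becomes a sum of 24 small pre-computed values instead of a scan over millions of raw points. A minimal Python sketch of that idea (hypothetical, not Warp 10 / WarpScript code):

```python
from collections import defaultdict

# (series_id, hour_index) -> number of datapoints in that hour.
hourly_counts = defaultdict(int)

def ingest(series_id, ts_seconds):
    # Called once per incoming datapoint: bump the hourly bucket.
    hourly_counts[(series_id, ts_seconds // 3600)] += 1

def count_last_24h(series_id, now_seconds):
    # Sum the 24 most recent hourly buckets instead of scanning
    # every raw datapoint in the window.
    current_hour = now_seconds // 3600
    return sum(hourly_counts[(series_id, h)]
               for h in range(current_hour - 23, current_hour + 1))

# Example: 3 points inside the last 24h, 1 outside the window.
now = 100 * 3600
ingest("cpu", now - 10)
ingest("cpu", now - 5 * 3600)
ingest("cpu", now - 23 * 3600)
ingest("cpu", now - 30 * 3600)  # too old, not counted
print(count_last_24h("cpu", now))  # 3
```

The trade-off is the usual one: a little extra work and storage per write buys constant-time window queries, at the cost of hour-granularity windows.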
Hello, thanks @cedricmourizard_twitter, we will have a look :)
Hello, @vballu. Are you using Prescience with the metrics connector ?
@jagwar, hello. I'm not using Prescience for this kind of operation; it is more a data-engineering issue. After some investigation, I'm pretty sure it can't be handled the way we wanted to proceed (i.e. real-time processing over millions of GTS with huge JSON strings as values). Network and memory are expensive bottlenecks to manage for a low return in terms of performance. I think we will rethink our data pipeline to produce data that is easier and lighter to handle. Prescience will come in a second phase.