I assumed the plugin installation would do that.
That's a possible improvement for later. Jenkins has the concept of tools, and if I remember correctly, other plug-ins like the Selenium Plugin fetch their required dependencies. The tricky part here is that some of these dependencies are OS-specific and could fail to install from Jenkins/the JVM.
Easier to install via pip or conda.
Do you have steps to perform before issuing the pip install? Thank you.
All I did was create a venv (python -m venv venv) and activate it (source ./venv/bin/activate). You could include these two steps in your container, though you would also need to either copy a layer with python/pip from another container, or modify your Dockerfile to fetch these two, plus run pip install -r requirements.txt or add the dependencies in another RUN command in your container. HTH
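In case it helps, here is a minimal sketch of those steps as plain shell, roughly as they could sit in a Dockerfile RUN instruction or a Jenkins build step (the venv directory and requirements.txt are just the names used above):

```sh
# Minimal sketch of the steps described above; paths and file names are assumptions.
python3 -m venv venv               # create an isolated virtual environment
. ./venv/bin/activate              # activate it ('source ./venv/bin/activate' in bash)
pip install -r requirements.txt    # install the job's Python dependencies

# Note: in a Dockerfile each RUN starts a fresh shell, so either chain these
# with && in a single RUN, or skip activation and call ./venv/bin/pip directly.
```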
Hi @chinnusujitha , a Jenkins job can take one or many parameters. You can configure a parameter of type File; or you can use a string containing an S3 object ID, or maybe another identifier for your file in another file system.
After that your model could be deployed to your environment. If you need to run code, the machine-learning-plugin developed during last GSoC might be useful. But if your deployment involves only copying a trained model somewhere to use in a web app or scripts, then I think all you'll need is vanilla Jenkins.
Maybe some Shell or Groovy to assist with what you need. Plus, you can use any Jenkins plug-in to help you keep track of metadata in your job (Ioannis M. has a lot of experience doing that), report on the execution, archive artefacts, etc. Hope that helps (and I'm learning Airflow, so if you write about it anywhere, let me know as I'd be interested to read it and see how you implemented it).
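To make the parameter idea concrete, a hedged sketch (not something specific to the plug-in): once the job defines a string parameter, you can trigger it over Jenkins' REST API and pass the S3 object key as that parameter. The job name, parameter name, and credential variables below are all hypothetical placeholders:

```sh
# Hypothetical example: trigger a parameterized Jenkins job, passing an S3 object
# key as a string parameter. JENKINS_URL, the 'train-model' job name, DATASET_S3_KEY
# and the credential variables are placeholders, not names from the plug-in.
curl -X POST "$JENKINS_URL/job/train-model/buildWithParameters" \
     --user "$JENKINS_USER:$JENKINS_API_TOKEN" \
     --data-urlencode "DATASET_S3_KEY=datasets/iris/train.csv"
```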
Hi @DasithEdirisinghe , we are not involved with the selection of projects for GSoC 2021 yet. There's a separate process where Jenkins needs to apply to GSoC as an organization (at least I think that's how it works). Developers here are also free to get involved by submitting proposals that Jenkins will include in its submission to Google.
You can get involved with the plug-in, but in my opinion you should first look at projects that you are either using or are interested in, and start to get involved. Whether the project is selected for GSoC or not, you can still use it as a reference in your application to one of the selected GSoC projects :-) so find something that really motivates you, and where you feel genuinely happy working and contributing.
And stay tuned for updates about GSoC 2021 here, probably next year I think.
Hi, I am having an issue with this plug-in. Installing the plug-in went well; it is the Kernel Configuration "Test connection" that I cannot seem to get to work.
The failure always shows "No python3 kernel available", but this is not true.
:~$ jupyter kernelspec list
Available kernels:
  python3    /home/vince/.local/share/jupyter/kernels/python3
Do you have any suggestion as to where I need to look?
$PATH could be the issue. Check the readme installation instructions and compare them with the steps you used to set up your environment. You might need to install the Python dependencies directly or with conda/venv/etc., and make sure Jenkins has them loaded.
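If it helps, a rough way to check that from a shell, assuming Jenkins runs as a local jenkins user (the package names below are an assumption, so double-check the plug-in README for the exact list):

```sh
# Compare the environment Jenkins actually runs with against your own user's:
sudo -u jenkins -i printenv PATH
sudo -u jenkins -i jupyter kernelspec list

# If no python3 kernel shows up for the jenkins user, installing the kernel
# pieces for that user may help (package names are an assumption; see the README):
sudo -u jenkins -i python3 -m pip install --user jupyter ipykernel
```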