dragonchaser
@dragonchaser:matrix.datenschmutz.space
[m]
/cc @SamuAlfageme
there are lots of timeouts towards the registry for example
butonic
@butonic:matrix.org
[m]
@labkode @glpatcern @SamuAlfageme any insights on the CI status?
Giuseppe Lo Presti
@glpatcern
I understood from @SamuAlfageme that some interventions were going on in the image registry, or Sam, did you find anything else?
butonic
@butonic:matrix.org
[m]
ok, latest runs seem to go smoother ...
Michael Barz
@micbar
@labkode @glpatcern Seems that the reva CI on edge and experimental has been broken since cs3org/reva@636b2b6
I wonder what happened. Did you change the codacy secret?
Giuseppe Lo Presti
@glpatcern
@micbar there's some active ongoing work on that, yes, because we really have to get out of Drone - our license expired long ago
The target is to move everything to GitHub workflows
Hugo Labrador
@labkode
I pinged the person working on that
Vasco Guita
@vascoguita
@micbar We are moving out of Drone. Drone was running the "coverage" job, which used the Codacy API, but now we use the Codacy/GitHub integration, so we don't need the API anymore.
Therefore I removed the API access token (maybe I shouldn't have done it yet). Anyway, rebasing on master should solve the problem.
Or just cherry-pick that commit
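A minimal sketch of those two options, assuming the cs3org/reva remote is named upstream; <commit-sha> is a placeholder, not a value from this thread:

    # Either rebase the PR branch on the current master ...
    git fetch upstream
    git rebase upstream/master
    # ... or pick only the commit that drops the Codacy API usage.
    git cherry-pick <commit-sha>   # <commit-sha> is a placeholder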
Michael Barz
@micbar

The target is to move everything to GitHub workflows

Seen that. I hope it doesn't break anything during our planned code freeze.

Michael Barz
@micbar
@labkode I wrote you an email. We need to sync these changes. Today it cost us a 4h delay on the reva experimental branch.
Michael Barz
@micbar
@vascoguita where did the code coverage report go? IMO we need to send the coverage report to Codacy. Can you do that with the Codacy/GitHub integration?
Michael Barz
@micbar
We have two test pipelines which create code coverage: unit tests and integration tests.
Both coverage reports should be sent to Codacy and aggregated.
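For reference, this is roughly how two partial reports are uploaded and aggregated with the Codacy coverage reporter; the report file names are assumptions and this is only a sketch, not necessarily what the reva pipelines do:

    # Requires a Codacy project or repository token in the environment.
    # Upload each pipeline's coverage as a partial result ...
    bash <(curl -Ls https://coverage.codacy.com/get.sh) report --partial -r unit_coverage.out
    bash <(curl -Ls https://coverage.codacy.com/get.sh) report --partial -r integration_coverage.out
    # ... then ask Codacy to merge the partial results into one final report.
    bash <(curl -Ls https://coverage.codacy.com/get.sh) final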
Vasco Guita
@vascoguita
@micbar they are still in the same place: https://app.codacy.com/gh/cs3org/reva/dashboard
But now, instead of being triggered by those 2 pipelines, Codacy runs for every PR/push
Vasco Guita
@vascoguita
Have you found it?
This is the one for edge – https://app.codacy.com/gh/cs3org/reva/dashboard?branch=edge
And this is the one for experimental – https://app.codacy.com/gh/cs3org/reva/dashboard?branch=experimental
Michael Barz
@micbar
ok, found it.
Hugo Labrador
@labkode
We're having issues with the CI, it hasn't been working since yesterday. Looks like a kernel issue, we're debugging
butonic
@butonic:matrix.org
[m]
Thx!
Hugo Labrador
@labkode
The CI box has a faulty line card that will be replaced, we'll ping you back when the box is ready
That's why some jobs are not passing and timing out
Michael Barz
@micbar
@labkode Is there an ETA for the fix?
Hugo Labrador
@labkode
Hopefully today; the line card replacement didn't help
Hugo Labrador
@labkode
@micbar CI looks better now
We're restarting some of the jobs
Michael Barz
@micbar
@labkode Thanks. Seems something is not fixed yet
1669122996455.png
@vascoguita I don't understand the context of cs3org/reva#3483
Vasco Guita
@vascoguita
@micbar That PR was not supposed to be merged. It was a test to see how Drone would behave with fewer concurrent containers.
I reverted the merge already.
Michael Barz
@micbar
Ah, ok.
Thanks for the explanation
PR had no description ;-)
Vasco Guita
@vascoguita
Next time I will add a note saying 'Please don't merge'
Antoon P.
@redblom

There's an issue with my changelog/unreleased file in my PR that I don't understand (https://drone.cernbox.cern.ch/cs3org/reva/9879/2/1).

CONFLICT (file location): changelog/unreleased/rclone-tpc.md added in d2438df128b3e61bc1cea3856fc727a96e0f0793 inside a directory that was renamed in HEAD, suggesting it should perhaps be moved to changelog/1.20.0_2022-11-24/rclone-tpc.md.

I'm thinking it's in the right location. Anyone? Or does it have to do with the above-mentioned issues?

Vasco Guita
@vascoguita
Try to rebase
gmgigi96 created a tag that moved changelog/unreleased to changelog/1.20.0_2022-11-24
If you rebase on upstream/master, the issue should be solved.
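A minimal sketch of that rebase, assuming the cs3org/reva remote is named upstream and the PR branch is checked out:

    # Base the branch on a master that already contains the renamed
    # changelog directory, so the file-location hint goes away.
    git fetch upstream
    git rebase upstream/master
    # Update the PR without overwriting anything pushed in the meantime.
    git push --force-with-lease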
Antoon P.
@redblom
Yes. Thanks.
Michael Barz
@micbar
@vascoguita We have a problem with this GitHub action https://github.com/cs3org/reva/actions/runs/3656717393/jobs/6179494364
Can you give us a hint?
Michael Barz
@micbar
@vascoguita Yesterday it was working fine. Did something change in your infra?
Vasco Guita
@vascoguita
nothing changed
I'll take a look now
Vasco Guita
@vascoguita
Well, it seems that the problem mysteriously solved itself. It might indeed have been some temporary network problem in the CERN infrastructure. There's nothing special about this test
Michael Barz
@micbar
@michielbdejong the Drone CI was migrated to drone.owncloud.com. Please rebase your PRs