Vasco Guita
@vascoguita
@micbar they are still in the same place: https://app.codacy.com/gh/cs3org/reva/dashboard
But now, instead of being triggered only for those 2 pipelines, Codacy runs for every PR/push
Vasco Guita
@vascoguita
Have you found it?
This is the one for edge – https://app.codacy.com/gh/cs3org/reva/dashboard?branch=edge
And this is the one for experimental – https://app.codacy.com/gh/cs3org/reva/dashboard?branch=experimental
Michael Barz
@micbar
ok, found it.
Hugo Labrador
@labkode
We're having issues with the CI; it hasn't been working since yesterday. It looks like a kernel issue; we're debugging
butonic
@butonic:matrix.org [m]
Thx!
Hugo Labrador
@labkode
The CI box has a faulty line card that will be replaced; we'll ping you back when the box is ready
That's why some jobs are not passing and timing out
Michael Barz
@micbar
@labkode Is there an ETA for the fix?
Hugo Labrador
@labkode
Hopefully today; the line card replacement didn't help
Hugo Labrador
@labkode
@micbar CI looks better now
We're restarting some of the jobs
Michael Barz
@micbar
@labkode Thanks. Seems something is not fixed yet
[attached screenshot: 1669122996455.png]
@vascoguita I don't understand the context of cs3org/reva#3483
Vasco Guita
@vascoguita
@micbar That PR was not supposed to be merged. It was a test to see how Drone would behave with fewer concurrent containers.
I reverted the merge already.
Michael Barz
@micbar
Ah, ok.
Thanks for the explanation
PR had no description ;-)
Vasco Guita
@vascoguita
Next time I will add a note saying 'Please don't merge'
Antoon P.
@redblom

There's an issue with my changelog/unreleased file in my PR that I don't understand (https://drone.cernbox.cern.ch/cs3org/reva/9879/2/1).

CONFLICT (file location): changelog/unreleased/rclone-tpc.md added in d2438df128b3e61bc1cea3856fc727a96e0f0793 inside a directory that was renamed in HEAD, suggesting it should perhaps be moved to changelog/1.20.0_2022-11-24/rclone-tpc.md.

I think it's in the right location. Anyone? Or does it have to do with the above-mentioned issues?

Vasco Guita
@vascoguita
Try to rebase
gmgigi96 created a tag that moved changelog/unreleased to changelog/1.20.0_2022-11-24
If you rebase on upstream/master, the issue should be solved
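Roughly like this (just a sketch; it assumes your clone has a remote named upstream pointing at cs3org/reva and your PR branch is checked out):
```sh
# fetch the latest master from the cs3org/reva remote
git fetch upstream
# replay your PR commits on top of it; resolve the changelog conflicts as git suggests
git rebase upstream/master
# update the PR branch (a force push is needed after a rebase)
git push --force-with-lease
```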
Antoon P.
@redblom
Yes. Thanks.
Michael Barz
@micbar
@vascoguita We have a problem with this github action https://github.com/cs3org/reva/actions/runs/3656717393/jobs/6179494364
Can you give us a hint?
Michael Barz
@micbar
@vascoguita Yesterday it was working fine. Did something change in your infra?
Vasco Guita
@vascoguita
nothing changed
I'll take a look now
Vasco Guita
@vascoguita
Well, it seems the problem mysteriously solved itself. It might indeed have been a temporary network problem in the CERN infrastructure. There's nothing special about this test
Michael Barz
@micbar
@michielbdejong the drone CI was migrated to drone.owncloud.com. Please rebase your PRs
Michiel de Jong
@michielbdejong
Hi! I could use some advice about my development setup; I'm finding the edit - build - test cycle very slow nowadays, is it really necessary to build revad entirely (taking about 2 minutes on my machine) each time I add a log statement or fix the next typo? Or is everybody else using a step debugger for this?
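(For reference, a rough sketch of the incremental flow with the standard Go toolchain; the cmd/revad path matches the repo layout, while the -c flag and config file name are illustrative:)
```sh
# Go keeps a build cache, so only packages whose files changed get recompiled;
# a full multi-minute build should normally only happen after a clean checkout or `go clean -cache`.
go build -o revad ./cmd/revad

# for quick edit-run cycles, build and start the daemon in one step
go run ./cmd/revad -c revad.toml
```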
Vasco Guita
@vascoguita
Hi, today I'll migrate the self-hosted runners from cs3org/reva to organization level, so that they can be shared by other repositories under the cs3org organization.
For this reason there might be some CI downtime. I'll write you back once the migration is concluded.
Vasco Guita
@vascoguita
Done, all the runners have been migrated to organization level. Apparently there was no downtime and everything should work seamlessly
David Antos
@david-antos
And also here: a kind reminder, can we hope to have cs3org/reva#3121 merged?
Michiel de Jong
@michielbdejong
+1, this is currently blocking us from sending/accepting OCM invites on the ScienceMesh network.
Hugo Labrador
@labkode
Hi, I merged yesterday
Michiel de Jong
@michielbdejong
Thanks a lot!
Antoon P.
@redblom
The build for cs3org/cs3apis#191 was killed for no apparent reason... Anyone?
And best wishes for 2023 for all!
Michiel de Jong
@michielbdejong
@labkode or @ishank011 can you help us document the verify_request_hostname config entry for ocm provider authorizers? pondersource/sciencemesh-php#122
David Christofas
@C0rby
@labkode, could you please take a look at this CS3 API change? cs3org/cs3apis#192
I need this to continue my work on a feature. Thanks. :)
Antoon P.
@redblom
Is it possible to re-run the build on cs3org/cs3apis#191?
Vasco Guita
@vascoguita
The drone instance used to test cs3org/cs3apis#191 no longer exists. New PRs will trigger the ownCloud drone instance. Could you close that PR and open a new one?
Antoon P.
@redblom
Done, Thanks!
Vasco Guita
@vascoguita
I see that it passed now :) You're welcome
Vasco Guita
@vascoguita
Some of the GitHub self-hosted runners ran out of disk space, causing CI jobs to fail.
I've replaced half of them with larger nodes; the other half will be replaced today.
If you have jobs that failed because of this, you may restart them.
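(If restarting from the command line is easier, a sketch using the GitHub CLI; the run ID is a placeholder you'd take from the list output:)
```sh
# list recent workflow runs for the repository
gh run list --repo cs3org/reva --limit 10
# re-run only the failed jobs of a given run
gh run rerun <run-id> --failed --repo cs3org/reva
```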
Vasco Guita
@vascoguita
Our self-hosted GitHub runners have been upgraded:
We now have 10 runners, each with 4 vCPUs, 8 GB RAM, and 40 GB of disk.
Gianmaria Del Monte
@gmgigi96
@micbar Could you please retrigger the CI for the bindings of the cs3apis?
Vasco Guita
@vascoguita
We're experiencing some instability in the GitHub Actions tests, namely the litmus and acceptance tests. If they fail for you, just restart them.