Will Johnson
@johnsonw
Notice the volume id is 22.
Looking above, volume 22 is /dev/mapper/mpathm and is marked as deleted, so it’s probably safe to say that this is the old value; we just need to update the managed target table so that the volume_node_id is 1130 instead of 22. I made this change locally and that removed the item from the volumes page.
But I just thought I would verify that /dev/mapper/mpathl is the correct path for OST0009.
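(A minimal sketch of the fix described above, assuming IML’s backing store is the PostgreSQL “chroma” database and the managed target table is the Django table chroma_core_managedtarget; both of those names are assumptions. Only the column volume_node_id and the ids 22 and 1130 come from the chat.)

    import psycopg2  # assumes the psycopg2 PostgreSQL driver is installed

    # Assumed names: database "chroma", table "chroma_core_managedtarget".
    # The ids come straight from the chat: 22 is the deleted volume node
    # (/dev/mapper/mpathm), 1130 is the live one.
    conn = psycopg2.connect(dbname="chroma")
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE chroma_core_managedtarget SET volume_node_id = %s "
            "WHERE volume_node_id = %s",
            (1130, 22),
        )
    conn.close()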
Jason Williams
@uberlinuxguy
Interesting. One sec, let me see. I uploaded a file to https://www.marcc.jhu.edu/downloads/lustre/ which is the contents of all of the targets files from /var/lib/chroma/
Will Johnson
@johnsonw
:+1:
Jason Williams
@uberlinuxguy
OST0009 actually maps to mpathl on both nodes.
Will Johnson
@johnsonw
ok let me check something
Ok, and we’re sure that’s correct?
yguvvala
@yguvvala

Looks correct to me:

oss01: /var/lib/chroma/targets/ZmE0NjQxYmUtZDZlNi00N2Y4LTk1NzgtYzBmZTk4OGMxMzNi:{"target_name": "scratch-OST0009", "device_type": "linux", "bdev": "/dev/mapper/mpathl", "mntpt": "/mnt/scratch-OST0009", "backfstype": "ext4"}

oss02: /var/lib/chroma/targets/ZmE0NjQxYmUtZDZlNi00N2Y4LTk1NzgtYzBmZTk4OGMxMzNi:{"target_name": "scratch-OST0009", "device_type": "linux", "bdev": "/dev/mapper/mpathl", "mntpt": "/mnt/scratch-OST0009", "backfstype": "ext4"}
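(A minimal sketch for cross-checking records like the two above: each file under /var/lib/chroma/targets/ appears to hold one JSON object in the format shown, so printing target_name against bdev for every file makes the mapping easy to compare between hosts.)

    import glob
    import json

    # Each file under /var/lib/chroma/targets/ holds one JSON record like
    # the ones pasted above; print target_name -> bdev so the mapping can
    # be compared across oss01 and oss02.
    for path in sorted(glob.glob("/var/lib/chroma/targets/*")):
        with open(path) as f:
            rec = json.load(f)
        print(rec["target_name"], "->", rec["bdev"])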

Will Johnson
@johnsonw
ok just wanted to make sure
yguvvala
@yguvvala
@uberlinuxguy @johnsonw I am looking at the "targets_readout.txt" file and it looks like that is correct...
Will Johnson
@johnsonw
Ok, thanks.
Jason Williams
@uberlinuxguy
@johnsonw yes, sorry I was interrupted for a bit there.
Will Johnson
@johnsonw
No problem Jason. I’m making progress and will keep you posted.
yguvvala
@yguvvala
@uberlinuxguy just checking on the status.
Will Johnson
@johnsonw
@yguvvala I’m working on a script to resolve some database issues.
Will Johnson
@johnsonw
Hi @uberlinuxguy, I’ve created a script that cleans up the majority of the volumes on the volumes page. Here is a screenshot of my local volume page after running the script:
image.png
Let’s sync up tomorrow morning and I’ll send you the script.
Jason Williams
@uberlinuxguy
Hi @johnsonw Let me know when you are available. I am in the office now.
Will Johnson
@johnsonw
Hi @uberlinuxguy, the script is able to resolve the majority of the items on the volumes page, but there are four items that have only one entry in the list you sent:
- 600a098000591fd00000012b5527440d - scratch-OST003d
- 600a09800060f4860000059e55274536 - scratch-OST0043
- 600a0980006355d50000050155274407 - scratch-OST0045
- 600a0980006355d50000050955274432 - scratch-OST0047
Since each of these has only one entry, the script can’t create an HA pair for it (see the sketch below).
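(A purely illustrative sketch of the constraint: an HA pair needs a volume-node entry on each server of a failover couple, so a serial seen only once, here because oss12’s entries are missing, can’t be paired. The first serial below is real, from the list above; the second is a made-up placeholder.)

    from collections import defaultdict

    # (volume serial, host) rows as they might appear in the list; the
    # first serial comes from the chat, the second is fabricated for the
    # example.
    entries = [
        ("600a098000591fd00000012b5527440d", "oss11"),  # scratch-OST003d
        ("placeholder-serial-0000", "oss09"),
        ("placeholder-serial-0000", "oss10"),
    ]

    by_serial = defaultdict(list)
    for serial, host in entries:
        by_serial[serial].append(host)

    for serial, hosts in by_serial.items():
        if len(hosts) < 2:
            print(f"{serial}: only one entry ({hosts[0]}), cannot create HA pair")
        else:
            print(f"{serial}: HA pair {hosts[0]} <-> {hosts[1]}")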
Jason Williams
@uberlinuxguy
which list was that in?
Jason Williams
@uberlinuxguy
ah because oss12 is offline
Will Johnson
@johnsonw
Do a search for “ost003d”, for example, and you will see there is only one entry.
ah ok that makes sense
Jason Williams
@uberlinuxguy
all of those seem to come from oss11/oss12
Will Johnson
@johnsonw
Yes
Do you know if you will be able to get oss12 back up?
In either case, I can still send the script your way and it will resolve the majority of the items in the list.
Jason Williams
@uberlinuxguy
ok
I am going to try to boot oss12 now
Will Johnson
@johnsonw
Ok. Also, please make sure you back up the database before running the script.
Do you want me to e-mail the script, or is it ok to paste it into this chat?
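(A hedged sketch of that backup step, assuming a PostgreSQL database and role both named “chroma”; those names are assumptions, and Jason’s existing dump command serves the same purpose.)

    import subprocess

    # Dump the (assumed) "chroma" database to a file before the cleanup
    # script touches anything; pg_dump must be on PATH and the role needs
    # read access.
    subprocess.run(
        ["pg_dump", "-U", "chroma", "-f", "chroma-before-cleanup.sql", "chroma"],
        check=True,
    )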
Jason Williams
@uberlinuxguy
I can just re-run the same dump command I ran to send you the db, right?
Will Johnson
@johnsonw
Yes.
Jason Williams
@uberlinuxguy
you can paste it into chat I suppose
Will Johnson
@johnsonw
I can also put it on the ticket. That’s probably better. Is that ok?
Jason Williams
@uberlinuxguy
Yeah sure
Will Johnson
@johnsonw
Script posted: intel-hpdd/intel-manager-for-lustre#637.
Jason Williams
@uberlinuxguy
can I add the targets that should be on oss12 to the target_list variable before I run this?
Will Johnson
@johnsonw
It should be fine, but just keep in mind that it hasn’t been tested.
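(Illustration only: the actual script and its target_list live in the ticket above. Assuming target_list is a plain list of target names, the oss11/oss12 additions Jason asks about might look like this; the target names are the four from the list earlier in the chat.)

    # Hypothetical shape of the script's target_list; the real variable is
    # defined in the script on intel-hpdd/intel-manager-for-lustre#637.
    target_list = [
        "scratch-OST0009",
        # ... existing entries ...
    ]

    # Targets that should live on the oss11/oss12 pair, per the chat:
    target_list += [
        "scratch-OST003d",
        "scratch-OST0043",
        "scratch-OST0045",
        "scratch-OST0047",
    ]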
Jason Williams
@uberlinuxguy
kk
Jason Williams
@uberlinuxguy
Awesome, that cleared the volumes page.
And the number of volumes per host is now 12.
Will Johnson
@johnsonw
Excellent news :clap:
And failover still works correctly?
Jason Williams
@uberlinuxguy
I’m failing a couple of the targets back to their primary hosts and that seems to be working too.
Will Johnson
@johnsonw
awesome :+1: