Andy Fiddaman
@citrus-it
is format any better?
Jakub Eliasz
@jakubeliasz
hm yes
  1. c6t0025385791B01A45d0 <Samsung-SSD 970 PRO 512GB-1B2QEXP7-476.94GB>
weird
let me look at nappit
Andy Fiddaman
@citrus-it
I don't know napp-it, but maybe now that the drive is labelled, it will be happier
destroying the pool will leave the label, so I'd do that
Jakub Eliasz
@jakubeliasz
yes nappit seems happy
ctmblake
@ctmblake
Solaris doesn't change the device name; it's always based on the ctd identifier, and you have the LSI utilities to tell you the enclosure.
Jakub Eliasz
@jakubeliasz
cool guys! thanks a lot! I guess that this would be the workaround for nappit
to init the label manually by creating a dummy pool
Andy Fiddaman
@citrus-it
Yes, or with fdisk, but zpool is sometimes easier to remember the syntax for :)
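A rough sketch of the dummy-pool labelling trick being discussed, assuming the disk is otherwise unused (the pool name "scratch" is just a placeholder; the device name is the one from the chat above):
`
# create a throwaway pool: ZFS writes an EFI label spanning the whole disk
zpool create scratch c6t0025385791B01A45d0

# destroying the pool removes the pool but leaves the label behind
zpool destroy scratch

# confirm the partition table is still there
prtvtoc /dev/rdsk/c6t0025385791B01A45d0s0
`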
Jakub Eliasz
@jakubeliasz
fdisk c6t0025385791B01A45d0
gave me interactive menu
Andy Fiddaman
@citrus-it
You want something like fdisk -E /dev/rdsk/c6t0025385791B01A45d0p0
Jakub Eliasz
@jakubeliasz
I will try with the other ssd that was not init'ed yet
Andy Fiddaman
@citrus-it
This message was deleted
Jakub Eliasz
@jakubeliasz
rdsk so raw yes, fdisk will complain if I try to use dsk (block)
Andy Fiddaman
@citrus-it
-E just says to create an EFI partition table, and create a partition spanning the disk
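For reference, a minimal sketch of that fdisk route on the same disk (p0 is the whole-disk device node; adjust the device name as needed):
`
# write an EFI label with a single partition spanning the whole disk
fdisk -E /dev/rdsk/c6t0025385791B01A45d0p0

# sanity-check the result
prtvtoc /dev/rdsk/c6t0025385791B01A45d0s0
`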
Jakub Eliasz
@jakubeliasz
thanks a lot again for your fast reaction! :) helps a lot to have a fast ZIL mirror
tho 512GB is probably overkill sizewise
guenther-alka
@guenther-alka
Napp-it v18 shows the NVMe as removed when there is no label. If you manually add/remove the disk to a pool, it is shown properly in the disk list. Napp-it from the current 19.06/10 home versions onwards shows disks without a label as normal disks: https://napp-it.org/downloads/changelog_en.html
Andy Fiddaman
@citrus-it
@guenther-alka thanks :)
Jaakko Linnosaari
@jlinnosa_twitter
hmm, is there a way to configure r151032 to provide a console on both serial and VGA?
budachst
@budachst

I am trying the latest omniosce stable on a new Supermicro. This one has two i40e NICs on board. After setting up the aggregation and vlan on a LACP trunk to a Cisco Nexus, I am unable to ping the network's gateway, which happens to be at .1 - I can ping all other IPs in that subnet.

The network-settings seem straightforward:

`root@jvmhh-archiv:~# dladm show-phys
LINK        MEDIA       STATE   SPEED   DUPLEX  DEVICE
i40e0       Ethernet    up      10000   full    i40e0
i40e1       Ethernet    up      10000   full    i40e1

root@jvmhh-archiv:~# dladm show-aggr aggr0
LINK        POLICY  ADDRPOLICY  LACPACTIVITY  LACPTIMER  FLAGS
aggr0       L4      auto        active        short      -----

root@jvmhh-archiv:~# dladm show-vlan
LINK        VID   OVER    FLAGS
vlan16      16    aggr0   -----
vlan14      14    aggr0   -----

root@jvmhh-archiv:~# ipadm show-if
IFNAME      STATE  CURRENT       PERSISTENT
lo0         ok     -m-v------46  ---
vlan14      down   bm--------46  -46
vlan16      ok     bm--------46  -46

root@jvmhh-archiv:~# ipadm show-addr
ADDROBJ     TYPE    STATE  ADDR
lo0/v4      static  ok     127.0.0.1/8
vlan16/v4   static  ok     10.11.14.125/24
lo0/v6      static  ok     ::1/128`

The router address is 10.11.14.1, which is the only one I cannot ping from the host. Does anyone have any idea what the problem could be?

budachst
@budachst
WTF… without doing anything, it started to work after an hour or so… this is really strange…
Andy Fiddaman
@citrus-it
Is it possible that the gateway had cached the wrong MAC address for the IP?
budachst
@budachst

That came to my mind as well, but for nearly 90 minutes? I have never seen that happen. Also, I do think that the MAC address for the IP changed when I installed OmniOS on the host; it had been running CentOS 8 before. So it would have had to be a new entry, no?

However, it works and that's enough atm. ;)

Andy Fiddaman
@citrus-it
I've had that problem before with a Cisco router - IIRC they have a very long arp cache by default
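If it ever recurs, a sketch of how one might check for a stale entry on both sides (the Cisco commands are generic IOS/NX-OS style and may differ on the actual switch; the IPs are the ones from this thread):
`
# on the OmniOS host: which MAC does the gateway currently resolve to?
arp -a | grep 10.11.14.1

# flush the local entry so it gets re-resolved
arp -d 10.11.14.1

# on the Cisco side: inspect and clear the cached entry for the host
show ip arp 10.11.14.125
clear ip arp 10.11.14.125
`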
budachst
@budachst
I will check up on that.

Now that I have omniosce running, I tried importing the zpool that I created when this host was running CentOS 8 with ZoL 0.8, but ZFS on omniosce won't import it due to corrupt data on all member drives. These drives are iSCSI LUNs, but I'd assume it's rather the zpool format that ZFS on omniosce doesn't support:

`
root@jvmhh-archiv:~# zpool import
   pool: vsmPool01
     id: 17650354432715805290
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-5E
 config:

    vsmPool01                                  UNAVAIL  insufficient replicas
      raidz1-0                                 UNAVAIL  insufficient replicas
        c0t600140556534D484430312D373934370d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934360d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934350d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934340d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934330d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934320d0  UNAVAIL  corrupted data
      raidz1-1                                 UNAVAIL  insufficient replicas
        c0t600140556534D484430312D373934310d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934300d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373933390d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373933380d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373933370d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373933360d0  UNAVAIL  corrupted data
      raidz1-2                                 UNAVAIL  insufficient replicas
        c0t600140556534D484430312D373935390d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935380d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935370d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935360d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935350d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935340d0  UNAVAIL  corrupted data
      raidz1-3                                 UNAVAIL  insufficient replicas
        c0t600140556534D484430312D373935330d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935320d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935310d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373935300d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934390d0  UNAVAIL  corrupted data
        c0t600140556534D484430312D373934380d0  UNAVAIL  corrupted data

`

Andy Fiddaman
@citrus-it
It is probably the devid that ZoL has put into the disk labels. You can tell ZoL to export the pool with those labels blank
At least, it's worth a try to rule it out
on Linux, export ZFS_VDEV_DEVID_OPT_OUT=YES
then import and export the pool
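A sketch of that sequence on the Linux side, before handing the LUNs back to OmniOS (pool name taken from the output above):
`
# tell ZoL not to write devid strings into the vdev labels
export ZFS_VDEV_DEVID_OPT_OUT=YES

# re-import and cleanly export so the labels get rewritten
zpool import vsmPool01
zpool export vsmPool01
`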
budachst
@budachst
Ahh… okay, I may try that if I have enough time - the pool has been exported from ZoL on this machine, which doesn't run it any longer now that omniosce is on it… ;) If my colleague complains enough, I will go for it.
budachst
@budachst
@citrus-it Tried what you suggested, but omnios is still unable to import the pool. Looks like I am going to destroy and recreate it.
budachst
@budachst

So it looks like there are some compatibility issues when using ZFS on omniosce and ZoL. A pool that I created on omniosce r151032 would only mount read-only on ZoL 0.8 on CentOS.

Anyone else experienced something similar?
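One way to narrow down feature-flag mismatches between the two ZFS implementations, as a sketch ("tank" is a placeholder pool name):
`
# which features does the pool have enabled/active?
zpool get all tank | grep feature@

# which features does this ZFS build understand?
zpool upgrade -v

# a read-only import can work when only read-compatible features differ
zpool import -o readonly=on tank
`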

Thomas Wagner
@tomww_twitter
anyone of you attending 36C3 Chaos Communication Congress in Leipzig this year?
It would be cool if we could meet!
Carsten Grzemba
@cgrzemba
there are no tickets anymore for 36C3, are there?
nomad
@discard-this
Hello. Anyone have docs or pointers for best practices for encryption for 151032? We're looking at it for an offsite backup host I'm about to build.
Tobias Oetiker
@oetiker
hi @discard-this zfs encryption is brand new in r32, hence there are no best practices yet ...
Andy Fiddaman
@citrus-it
The zfs man page has pretty good documentation on how to create an encrypted dataset and, on OmniOS at least, encrypted datasets are mounted at boot if the key can be found
What's missing is any key management system…
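For anyone following along, the basic flow from the zfs man page looks roughly like this; the dataset names and key file path are placeholders:
`
# create an encrypted dataset, prompting for a passphrase
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt rpool/secure

# after a reboot: load the key and mount the dataset
zfs load-key rpool/secure
zfs mount rpool/secure

# optionally point keylocation at a file so the key can be found at boot
zfs set keylocation=file:///root/secure.key rpool/secure
`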
nomad
@discard-this
It's the key management that seems to be the "best practices" issue.
$BOSS is now saying we shouldn't bother with encryption.
guenther-alka
@guenther-alka

In the EU encryption is essential, as the General Data Protection Regulation demands that personal data be protected in a state-of-the-art manner, so everyone needs or wants encryption on a filer and backup.

btw. for the next napp-it I will include an https-based key management system where keys are kept on an external and/or encrypted filesystem, the keys are additionally encrypted with a user access key, and there is an option to lock/unlock a filesystem via SMB.

bronkoo
@bronkoo

Has anyone compared CIFS throughput over a single 10Gbit session after upgrading OmniOS 151030 -> 151032 from a Linux client?

On a CentOS Linux release 7.6.1810 client (mount option 'vers=1.0' was necessary since the 7.5 -> 7.6 update)

OmniOS 151030
$ sync && dd if=/dev/zero of=/mnt/staff/data.dat count=10 bs=1G && sync && dd if=/mnt/staff/data.dat of=/dev/null count=10 bs=1G
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 13,313 s, 807 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 65,279 s, 164 MB/s

OmniOS 151032
$ sync && dd if=/dev/zero of=/mnt/staff/data.dat count=10 bs=1G && sync && dd if=/mnt/staff/data.dat of=/dev/null count=10 bs=1G
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 26.7616 s, 401 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 31.6324 s, 339 MB/s

OmniOS 151032 (SMB3: CentOS 7.6 no longer needs the mount option 'vers=1.0')
$ sync && dd if=/dev/zero of=/mnt/staff/data.dat count=10 bs=1G && sync && dd if=/mnt/staff/data.dat of=/dev/null count=10 bs=1G
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 61.4288 s, 175 MB/s
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 76.3618 s, 141 MB/s

Can someone confirm that?
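For reference, the client-side mounts behind those numbers would look something like this; the server and share names are placeholders, only the vers= option changes:
`
# SMB1, as needed on CentOS 7.6 against r151030
mount -t cifs //omnios/staff /mnt/staff -o vers=1.0,username=backup

# SMB3 against r151032 (no vers=1.0 required any more)
mount -t cifs //omnios/staff /mnt/staff -o vers=3.0,username=backup
`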

bronkoo
@bronkoo
Since I'm upgrading all pools, I can't go back to double-check.
Maybe the first run was on OmniOS 151028; I have archived those results.