Lost filesystems after upgrading Jessie to Stretch?

Postby brashquido » 2020-05-05 05:17

Hi All,

I've been running OMV (OpenMediaVault) on my NAS for a few years and finally found some time to go through the upgrade process. Unfortunately it doesn't seem to have gone too well: my data filesystems are no longer accessible. My basic setup is as follows:

1 x 30GB SSD (sda) - OMV install
2 x 500GB - (sdb/c) - Software RAID1
8 x 4TB - (sdd ~ k) - Using UnionFS and SnapRaid

Foolishly, I did NOT take an OMV backup prior to the upgrade; however, I do have a backup of all my data encrypted in the cloud. I've already calculated that restoring everything over my connection would take 35+ days at pretty much full speed 24/7, so my preference is to fix my NAS rather than rebuild it and restore. I have super basic Linux skills, so I'm hoping someone can point me in the right direction.

Following is the output from lsblk, which shows all but 2 of the drives used in SnapRAID / UnionFS as ZFS members, even though I've never installed ZFS:
Code:
root@TryanNAS:~# lsblk --fs
NAME    FSTYPE            LABEL              UUID                                 MOUNTPOINT
sda
├─sda1  ext4                                 89f815c4-6bd7-4413-858b-6d0c90f62acc /
├─sda2
└─sda5  swap                                 847f2b9e-a5e7-4ecb-8d09-bac4e0d3cdb9 [SWAP]
sdb     linux_raid_member TryanNAS:NASSYSTEM cb72ad6f-68a3-a644-2d86-6f7eb3a87494
└─md127
sdc     linux_raid_member TryanNAS:NASSYSTEM cb72ad6f-68a3-a644-2d86-6f7eb3a87494
└─md127
sdd     zfs_member
└─sdd1  zfs_member
sde     zfs_member
└─sde1  zfs_member
sdf     zfs_member
└─sdf1  zfs_member
sdg     zfs_member
└─sdg1  zfs_member
sdh     zfs_member
└─sdh1  zfs_member
sdi
└─sdi1  ext4              4TB6               b6cfc90a-b9b9-4531-a93c-cf7ce22c6866 /srv/dev-disk-by-label-4TB6
sdj
└─sdj1  ext4              4TB8               855af507-ae2c-45ed-a2d6-6aaade3f9279 /srv/dev-disk-by-label-4TB8
sdk     zfs_member
└─sdk1  zfs_member


As far as I can tell, the result of this is that udev is unable to auto-generate the /dev/disk/by-label entries required for the filesystem mounts in my fstab (a quick check for this follows the fstab below):

Code:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=89f815c4-6bd7-4413-858b-6d0c90f62acc /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=847f2b9e-a5e7-4ecb-8d09-bac4e0d3cdb9 none            swap    sw              0       0
tmpfs           /tmp            tmpfs   defaults        0       0
# >>> [openmediavault]
/dev/disk/by-label/NASSYSTEM /srv/dev-disk-by-label-NASSYSTEM ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB1 /srv/dev-disk-by-label-4TB1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB2 /srv/dev-disk-by-label-4TB2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB3 /srv/dev-disk-by-label-4TB3 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB4 /srv/dev-disk-by-label-4TB4 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB5 /srv/dev-disk-by-label-4TB5 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB6 /srv/dev-disk-by-label-4TB6 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB7 /srv/dev-disk-by-label-4TB7 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/4TB8 /srv/dev-disk-by-label-4TB8 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/srv/dev-disk-by-label-4TB1:/srv/dev-disk-by-label-4TB2:/srv/dev-disk-by-label-4TB3:/srv/dev-disk-by-label-4TB4:/srv/dev-disk-by-label-4TB5:/srv/dev-disk-by-label-4TB6:/srv/dev-disk-by-label-4TB7 /srv/b8e6f38f-a691-44c2-a67f-667ce22630a3 fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=epmfs,minfreespace=100G 0 0
/srv/b8e6f38f-a691-44c2-a67f-667ce22630a3/NASDATA /export/NASDATA none bind,nofail 0 0
/srv/b8e6f38f-a691-44c2-a67f-667ce22630a3/NASDATA /sftp/nasftp/NASDATA none bind,rw,nofail 0 0
# <<< [openmediavault]
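
For reference, I believe the missing symlinks can be confirmed read-only with something like this (using /dev/sdd1 as an example of one of the affected partitions):

Code:
# list whichever by-label symlinks udev has actually created
ls -l /dev/disk/by-label/
# show the filesystem properties udev has recorded for an affected partition
udevadm info --query=property --name=/dev/sdd1 | grep ID_FS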


As you can see, all filesystem sources are referenced by label. If I use blkid, as udev does, to query the "ID_FS_LABEL_ENC" property, this is what I get on sdj1 (which is mounting) versus sdd1 and md127 (which aren't):

SDJ1
Code:
root@TryanNAS:~# blkid -o udev -p /dev/sdj1
ID_FS_LABEL=4TB8
ID_FS_LABEL_ENC=4TB8
ID_FS_UUID=855af507-ae2c-45ed-a2d6-6aaade3f9279
ID_FS_UUID_ENC=855af507-ae2c-45ed-a2d6-6aaade3f9279
ID_FS_VERSION=1.0
ID_FS_TYPE=ext4
ID_FS_USAGE=filesystem
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_UUID=2f99cd10-2041-4f97-aa76-6f2208d3805d
ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SIZE=7814035087
ID_PART_ENTRY_DISK=8:144
root@TryanNAS:~#


SDD1

Code:
root@TryanNAS:~# blkid -o udev -p /dev/sdd1
ID_FS_AMBIVALENT=filesystem:ext4:1.0 filesystem:zfs_member:5000

MD127
Code:
root@TryanNAS:~# blkid -o udev -p /dev/md127
ID_FS_AMBIVALENT=filesystem:ext4:1.0 filesystem:zfs_member:5000


The weird thing is that if I use something like e2label, the expected label is returned in every single case. Can anyone offer any advice as to what is going on with my filesystems here? Why are blkid (and other filesystem query tools such as lsblk and wipefs) unable to read my filesystem labels?
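
In case it is useful, I assume the conflicting signatures themselves can be listed read-only with something like the following (wipefs run without options only prints what it finds and erases nothing; /dev/sdd1 is just one affected partition as an example):

Code:
# wipefs with no options only lists signatures and their offsets; nothing is erased
wipefs /dev/sdd1
# low-level probe showing everything blkid can detect on the partition
blkid -p -o full /dev/sdd1
# e2label reads the ext4 superblock directly, so it still returns the label
e2label /dev/sdd1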

Re: Lost filesystems after upgrading Jessie to Stretch?

Postby Head_on_a_Stick » 2020-05-05 11:39

brashquido wrote: I've been running OMV

This forum is for users of Debian; it is not for users of Debian-based derivatives.

https://forum.openmediavault.org/