Cleanly unmount during reboot

Posted by cdlenfert 
November 19, 2018 09:34PM
I've got some hfsplus-format drives connected to my Debian Stretch box. If I stop the Samba service, unmount the drives (via umount -a), and then reboot, everything comes back up working great. If I just reboot without unmounting the drives, the hfsplus shares come back up mounted read-only, and the only fix I've found is connecting them to my Mac and running Disk Utility repairs on them. fsck.hfsplus doesn't work on them for some reason.

Any idea how I can keep these drives read/write a little more consistently (safely ejecting / cleanly unmounting) during a reboot?

I'll convert them to EXT3 or 4 (not sure which would be best) if I must, but I'm also curious why they aren't unmounted as part of the reboot process.

Thanks for any insights.
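For reference, the manual workaround described above boils down to something like this (a sketch only; the service name and the use of umount -a follow what is described in this thread, so adjust to your own setup):

```shell
# Sketch of the manual pre-reboot sequence described above
systemctl stop smbd.service   # stop Samba so no clients hold files open
sync                          # flush pending writes to disk
umount -a                     # unmount everything that can be unmounted
reboot
```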
Re: Cleanly unmount during reboot
November 19, 2018 11:31PM
cdlenfert,

The hfsplus drive mounting default is usually read-only. Do you explicitly mount it RW at some point while the system is running, or do you use an automount mechanism such as the udev rules I posted in the Wiki thread?


Quote

udev

Automount USB drives with udev rules using disk label
https://forum.doozan.com/read.php?2,24139

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: Cleanly unmount during reboot
November 21, 2018 09:55AM
bodhi - I'm able to mount read/write via my entries in /etc/fstab:
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
LABEL=rootfs    /               ext3    noatime,errors=remount-ro 0 1
tmpfs          /tmp            tmpfs   defaults          0       0
LABEL=MGB-500	/mnt/pogo1	hfsplus	nofail,force,rw	0	0

I recently added the `nofail` option because the boot process seemed to hang on occasion, and I was hoping that would let booting continue if there were issues mounting a drive.

I've also been seeing some weird behavior with the Samba sharing of these drives, where there are over 20 process IDs for the service.
root@Pogoplug:~# systemctl status smbd.service
* smbd.service - Samba SMB Daemon
   Loaded: loaded (/lib/systemd/system/smbd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-11-20 19:37:30 PST; 12h ago
     Docs: man:smbd(8)
           man:samba(7)
           man:smb.conf(5)
  Process: 4231 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 3479 (smbd)
   Status: "smbd: ready to serve connections..."
   CGroup: /system.slice/smbd.service
           |-1702 /usr/sbin/smbd
           |-1787 /usr/sbin/smbd
           |-2872 /usr/sbin/smbd
           |-2889 /usr/sbin/smbd
           |-2911 /usr/sbin/smbd
           |-2935 /usr/sbin/smbd
           |-2943 /usr/sbin/smbd
           |-2979 /usr/sbin/smbd
           |-2986 /usr/sbin/smbd
           |-2987 /usr/sbin/smbd
           |-2990 /usr/sbin/smbd
           |-2996 /usr/sbin/smbd
           |-3034 /usr/sbin/smbd
           |-3052 /usr/sbin/smbd
           |-3054 /usr/sbin/smbd
           |-3063 /usr/sbin/smbd
           |-3246 /usr/sbin/smbd
           |-3252 /usr/sbin/smbd
           |-3255 /usr/sbin/smbd
           |-3258 /usr/sbin/smbd
           |-3262 /usr/sbin/smbd
           |-3267 /usr/sbin/smbd
           |-3269 /usr/sbin/smbd
           |-3350 /usr/sbin/smbd
           |-3479 /usr/sbin/smbd
           |-3480 /usr/sbin/smbd
           |-3481 /usr/sbin/smbd
           |-3483 /usr/sbin/smbd
           |-3492 /usr/sbin/smbd
           |-3500 /usr/sbin/smbd
           |-3534 /usr/sbin/smbd
           `-4712 /usr/sbin/smbd

I realize this may drift off topic, but it seems like it could be related to the inconsistency in the mounting and access level of the hfsplus drives.

And disk-related commands keep getting hung up (like fdisk -l, or trying to mkdir on a drive, or stopping/starting the smbd service). I'm leaning toward converting one of the drives to EXT3, copying everything over from the HFS+ drives, then converting the other drives and using them as backups. Maybe that would yield the best results, but then these drives would be dedicated purely to my Pogoplug and no longer pluggable into Macs.
Re: Cleanly unmount during reboot
November 21, 2018 05:28PM
cdlenfert,

I use HFSplus drives mostly as sneakernet, to transfer large files that take too long to copy over NFS. I've never seen these problems. I will update the udev rules to include how I mount these HFSplus drives. I am sure udev rules will perform better in automounting the drive as RW.

-bodhi
Re: Cleanly unmount during reboot
November 21, 2018 05:40PM
Thanks bodhi, that offers some hope that I can stick with this format and it won't be too painful. I'll look out for that update.

Do your HFSplus drives not become read-only after an unexpected power loss, or during a reboot? That's the case for me. As long as I umount everything before rebooting, I can still read/write; but if I just run the reboot command or toggle power, the drives come back mounted read-only, seemingly because of some level of data corruption (I'm not too familiar with the details) that can only be remedied by plugging the drive into my Mac and running a Disk Utility repair. I can then plug it back into the pogo, run mount -a, and the drive is read/write once again.

I've turned off journaling, but I'm wondering if there's some other issue in the way the drives were formatted that keeps Linux from playing nice with them after reboots (other than the mounting process itself).
Re: Cleanly unmount during reboot
November 21, 2018 08:02PM
cdlenfert,

I've updated the howto in the Wiki thread:

https://forum.doozan.com/read.php?2,23630,73604#msg-73604

It now includes mounting HFSplus in Read/Write mode.
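bodhi's actual, tested rules are in the Wiki link above. Purely to illustrate the shape such a rule can take, a hypothetical automount rule might look like the following (the match keys, mount point, and options here are my assumptions, not bodhi's exact rules):

```
# /etc/udev/rules.d/99-hfsplus-automount.rules -- hypothetical sketch only;
# see the Wiki thread above for the actual, tested rules.
# Mount an HFS+ partition read/write under /media/<label> when it is plugged in.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="hfsplus", ENV{ID_FS_LABEL}=="?*", \
  RUN+="/bin/mkdir -p /media/%E{ID_FS_LABEL}", \
  RUN+="/bin/mount -t hfsplus -o rw,force /dev/%k /media/%E{ID_FS_LABEL}"
```

One caveat: on newer systemd versions, mounts made from udev RUN handlers can end up confined to udevd's private mount namespace, which is one more reason to prefer the tested rules from the Wiki over this sketch.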


> do your HFSplus drives not become read-only after
> an unexpected power loss, or during a reboot?
> That's the case for me. As long as I umount
> everything before rebooting, I remain able to
> read/write but if I just do a reboot command or
> toggle power, the drives are mounted read only
> seemingly because of some level of data corruption
> (I'm not too familiar) that can only be remedied if
> I plug the drive into my Mac and run Disk Utility
> repair. I can then plug back in to the pogo, run a
> mount -a and the drive is read/write once again.

No. I've never seen data corruption using udev rules. Usually I don't leave the HFSplus drive plugged into the NAS box permanently, if I remember to remove it. But for the times I forgot for many days, it did not cause a problem. Your use case is different, though, so you will have to test the udev rules to make sure.

However, as I cautioned users in the Howto post, HFSplus is unsupported in mainline Linux. Write capability should be used with caution (keep backups, and only use it for unimportant data).

> I've turned off journaling, but I'm wondering if
> there's some other issue in the way the drives
> were formatted that keeps Linux from playing nice
> with them after reboots (other than the mounting
> process itself).


I don't think it is journaling. The issue looks to me like a normal unsynced shutdown of the drive. You relied on the system to sync and unmount the drive, and it probably lacks the equivalent of this udev rules trigger.
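One possible way to give the system that missing sync-and-unmount step at shutdown (my own suggestion, not something from the Wiki thread) is a small systemd unit whose stop action runs during reboot. The unit name and mount point below are placeholders for this setup:

```
# /etc/systemd/system/hfsplus-unmount.service -- hypothetical sketch
[Unit]
Description=Sync and unmount HFS+ drive before shutdown/reboot
# Ordered before smbd so that, at shutdown, smbd stops first
# and no Samba client still holds the mount busy.
Before=smbd.service shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/bin/sh -c 'sync; umount /mnt/pogo1'

[Install]
WantedBy=multi-user.target
```

Because systemd stops units in the reverse of their start order, a unit started early at boot with RemainAfterExit=yes gets its ExecStop run late in the shutdown sequence, after Samba has already exited.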

Also, I am not sure whether fsck.hfsplus works properly during the rootfs mounting phase. You might need to tweak the initramfs to force a check when the box loses power and comes back up (there is not much else you can do other than getting fsck.hfsplus to work).

I have high hopes that the udev rules will work for your reboot. If there is still a problem, I could look into how to change the rules to help with that.

-bodhi



Edited 1 time(s). Last edit at 11/21/2018 08:03PM by bodhi.
Re: Cleanly unmount during reboot
November 21, 2018 09:42PM
You could always try forcing an fsck during boot through fstab; it would add some time to the boot, but you would always have read/write.
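Concretely, forcing a boot-time check means setting the sixth fstab field (the fsck pass number) to a non-zero value. Building on the fstab entry quoted earlier in the thread, that would look something like:

```
# <file system>  <mount point>  <type>    <options>        <dump>  <pass>
LABEL=MGB-500    /mnt/pogo1     hfsplus   nofail,force,rw  0       2
# pass=2: check this filesystem at boot, after the root filesystem (pass=1)
```

Note this only helps if fsck.hfsplus itself succeeds, which the later posts in this thread show is not yet the case on the Pogoplug.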

That said, data corruption can be a problem with HFS+ drives in Linux; I've had drives get messed up when moving between Linux and macOS.
Re: Cleanly unmount during reboot
November 22, 2018 06:53PM
I'll try out the udev rules. Looks easy enough to implement. I notice this will change where my drives are mounted. No big deal, since a simple change to the smb.conf file should make external devices see them the same anyway.

One thing that's still puzzling me is why fsck.hfsplus is failing on my drives.

root@Pogoplug:~# fsck.hfsplus -f /dev/sdb2
** /dev/sdb2
** Checking HFS Plus volume.
** Volume check failed

I moved one of the drives that failed in this same way from my Pogoplug to my Raspberry Pi, installed hfsprogs, hfsutils, and hfsplus, and ran the same fsck.hfsplus command; it scanned and fixed the drive just fine. I was then able to get write access after remounting the drive. I need to be able to do the same on the Pogo, but I'm not sure how to find out why it failed. I get next to nothing on good ol' Google.
Re: Cleanly unmount during reboot
November 23, 2018 12:39AM
> I moved one of the drives that failed in this same
> way from my Pogoplug to my Raspberry Pi and
> installed hfsprogs, hfsutils, hfsplus and ran the
> same fsck.hfsplus command and it scanned and fixed
> the drive just fine. I was then able to get write
> access after remounting the drive. I need to be
> able to do the same on the Pogo, but I'm not sure
> how to find out why it's failed. I get next to
> nothing on good ol' Google.

Start by checking the versions of the hfsprogs utilities on both sides. And check the kernel and Debian versions if necessary.
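A quick way to gather those version details on each box might be (assuming Debian-based systems on both sides, as in this thread):

```shell
# Compare tool and system versions on the Pogoplug and the Raspberry Pi
uname -srm                                   # kernel release and architecture
cat /etc/debian_version 2>/dev/null || true  # Debian release, if present
dpkg-query -W hfsprogs 2>/dev/null || echo "hfsprogs not installed"
```

Run the same three commands on both machines and compare the output; a mismatched hfsprogs or kernel version would be the first suspect.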

-bodhi