Discussion:
ZFS: I/O error - blocks larger than 16777216 are not supported
KIRIYAMA Kazuhiko
2018-06-21 01:36:05 UTC
Permalink
Hi all,

I previously reported a problem where ZFS boot was disabled [1],
and found that the issue arises from the RAID configuration [2].
So I rebuilt with RAID5 and re-installed 12.0-CURRENT
(r333982), but it failed to boot with:

ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
ZFS: I/O error - blocks larger than 16777216 are not supported
ZFS: can't find dataset u
Default: zroot/<0x0>:

In this case, the reason is "blocks larger than 16777216 are
not supported", and I guess this means that datasets with a
recordsize greater than 8GB are NOT supported by the FreeBSD
boot loader (zpool-features(7)). Is that true?

My zpool features are as follows:

# kldload zfs
# zpool import
pool: zroot
id: 13407092850382881815
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: http://illumos.org/msg/ZFS-8000-EY
config:

zroot ONLINE
mfid0p3 ONLINE
# zpool import -fR /mnt zroot
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 19.9T 129G 19.7T - 0% 0% 1.00x ONLINE /mnt
# zpool get all zroot
NAME PROPERTY VALUE SOURCE
zroot size 19.9T -
zroot capacity 0% -
zroot altroot /mnt local
zroot health ONLINE -
zroot guid 13407092850382881815 default
zroot version - default
zroot bootfs zroot/ROOT/default local
zroot delegation on default
zroot autoreplace off default
zroot cachefile none local
zroot failmode wait default
zroot listsnapshots off default
zroot autoexpand off default
zroot dedupditto 0 default
zroot dedupratio 1.00x -
zroot free 19.7T -
zroot allocated 129G -
zroot readonly off -
zroot comment - default
zroot expandsize - -
zroot freeing 0 default
zroot fragmentation 0% -
zroot leaked 0 default
zroot feature@async_destroy enabled local
zroot feature@empty_bpobj active local
zroot feature@lz4_compress active local
zroot feature@multi_vdev_crash_dump enabled local
zroot feature@spacemap_histogram active local
zroot feature@enabled_txg active local
zroot feature@hole_birth active local
zroot feature@extensible_dataset enabled local
zroot feature@embedded_data active local
zroot feature@bookmarks enabled local
zroot feature@filesystem_limits enabled local
zroot feature@large_blocks enabled local
zroot feature@sha512 enabled local
zroot feature@skein enabled local
zroot unsupported@com.delphix:device_removal inactive local
zroot unsupported@com.delphix:obsolete_counts inactive local
zroot unsupported@com.delphix:zpool_checkpoint inactive local
#

Regards

[1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
[2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910

---
KIRIYAMA Kazuhiko
Allan Jude
2018-06-21 03:34:48 UTC
Permalink
Post by KIRIYAMA Kazuhiko
[...]
ZFS: I/O error - blocks larger than 16777216 are not supported
ZFS: can't find dataset u
In this case, the reason is "blocks larger than 16777216 are
not supported", and I guess this means that datasets with a
recordsize greater than 8GB are NOT supported by the FreeBSD
boot loader (zpool-features(7)). Is that true?
[...]
I am guessing it means something is corrupt, as 16MB (16777216
bytes, i.e. 2^24) is the maximum size of a record in ZFS. Also,
the 'large_blocks' feature is 'enabled', not 'active', so this
suggests you do not have any records larger than 128KB on your
pool.
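
To double-check, you can list the recordsize of every dataset on
the pool (a sketch, using the pool name from your listing; with
large_blocks merely 'enabled', none of these should exceed the
128K default):

# zfs get -r -o name,value recordsize zroot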
--
Allan Jude
Toomas Soome
2018-06-21 05:38:12 UTC
Permalink
Post by Allan Jude
Post by KIRIYAMA Kazuhiko
[...]
ZFS: I/O error - blocks larger than 16777216 are not supported
[...]
I am guessing it means something is corrupt, as 16MB (16777216
bytes, i.e. 2^24) is the maximum size of a record in ZFS. Also,
the 'large_blocks' feature is 'enabled', not 'active', so this
suggests you do not have any records larger than 128KB on your
pool.
Yes indeed, the value printed is 1 << 24, which is the current
limit. However, I would start by reinstalling gptzfsboot on the
freebsd-boot partition.
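
Something along these lines should do it (a sketch; this assumes
the freebsd-boot partition is index 1 on mfid0, so verify the
index with gpart show first):

# gpart show mfid0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0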

rgds,
toomas
KIRIYAMA Kazuhiko
2018-06-21 06:00:48 UTC
Permalink
At Wed, 20 Jun 2018 23:34:48 -0400,
Post by Allan Jude
Post by KIRIYAMA Kazuhiko
[...]
I am guessing it means something is corrupt, as 16MB (16777216
bytes, i.e. 2^24) is the maximum size of a record in ZFS. Also,
the 'large_blocks' feature is 'enabled', not 'active', so this
suggests you do not have any records larger than 128KB on your
pool.
As I mentioned above, [2] suggests that ZFS on RAID disks has
serious bugs except in mirror configurations. Anyway, I have
given up on using ZFS on RAID{5,6}* until Bug 151910 [2] is
fixed.
---
KIRIYAMA Kazuhiko
Toomas Soome
2018-06-21 07:48:28 UTC
Permalink
Post by KIRIYAMA Kazuhiko
At Wed, 20 Jun 2018 23:34:48 -0400,
Post by Allan Jude
[...]
As I mentioned above, [2] suggests that ZFS on RAID disks has
serious bugs except in mirror configurations. Anyway, I have
given up on using ZFS on RAID{5,6}* until Bug 151910 [2] is
fixed.
If you boot from a USB stick (or CD), press Esc at the boot
loader menu and enter "lsdev -v". What sector and disk sizes
are reported?
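
For comparison, you can also check what the running kernel sees
for the same device (mfid0 taken from your zpool listing; the
BIOS view reported by lsdev may well differ, which is exactly
what we want to find out):

# diskinfo -v mfid0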

The issue [2] is a mix of ancient FreeBSD (v8.1 is mentioned there) and RAID LUNs with a 512B sector size and 15TB total size - are you really sure your BIOS can actually address a 15TB LUN (with a 512B sector size)? Note that the problem with large disks can hide itself until the pool has filled up enough that essential files are stored above the limit, meaning you may have a "perfectly working" setup until at some point, after the next update, it suddenly stops working.
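As a rough illustration: if the BIOS (or the controller
firmware) internally truncates LBAs to 32 bits - a common
practical limit, though this is an assumption about your
particular hardware - then with 512B sectors only the first

2^32 sectors x 512 bytes/sector = 2 TiB

of a 15TB LUN is reachable from the loader, and anything stored
above that offset is unreadable at boot time.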

Note that for the BIOS version of the boot loader we only have INT13h, and it really is limited. The UEFI version uses the EFI_BLOCK_IO API, which usually can handle large sectors and disk sizes better.

rgds,
toomas
