Discussion:
ZFS: alignment/boundary for partition type freebsd-zfs
Allan Jude
2017-12-26 16:44:29 UTC
Permalink
Running recent CURRENT on most of our lab's boxes, I needed to replace and restore a
ZFS RAIDZ pool. To do so, I had to partition the disks I was about to replace.
Well, the drives in question are 4k-sector drives with 512b emulation, as most of
them are today. I created the one and only partition on each 4 TB drive via the command
sequence
gpart create -s GPT adaX
gpart add -t freebsd-zfs -a 4k -l nameXX adaX
After doing this on all the drives I was about to replace, something drove me to check on
the net, and I found a lot of websites giving advice on how to prepare large, modern
drives for ZFS. I think the GNOP trick is no longer necessary, but many blogs
recommend performing
gpart add -t freebsd-zfs -b 1m -a 4k -l nameXX adaX
to put the partition boundary at the 1 MiB mark. I didn't do that; my
partitions now all start at block 40.
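[Editorial note: the alignment question above can be settled with a quick sanity check. This is a sketch using the numbers from the message: a partition starting at LBA 40 on a drive with 512-byte logical sectors begins at byte offset 40 * 512 = 20480, which divides evenly by 4096.]

```shell
#!/bin/sh
# Check whether a partition's starting LBA is 4k-aligned on a 512e drive.
start_lba=40        # first block of the partition (from "gpart show")
logical=512         # logical sector size the drive reports
align=4096          # physical sector size we want to align to

offset=$((start_lba * logical))
if [ $((offset % align)) -eq 0 ]; then
    echo "LBA ${start_lba}: aligned (byte offset ${offset})"
else
    echo "LBA ${start_lba}: NOT aligned (byte offset ${offset})"
fi
```

So a start at block 40 is already 4k-aligned; the 1 MiB offset only changes where the partition starts, not its alignment.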
My question is: will this have severe performance consequences, or is it negligible?
Since most of the websites I found via "zfs freebsd alignment" are from years ago, I'm
a bit confused now, and the thought of repeating this days-long resilvering process
made me lose some more hair beyond the usual "fallout" ...
Thanks in advance,
Oliver
The 1 MB alignment is not required. It is just what I do to leave room
for other partition types before the ZFS partition.

However, the replacement for the GNOP hack is separate. In addition to
aligning the partitions to 4k, you have to tell ZFS that the drive is 4k:

sysctl vfs.zfs.min_auto_ashift=12

(2^12 = 4096)

Before you create the pool, or add additional vdevs.
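[Editorial note: to keep Allan's setting across reboots, the same sysctl can go into /etc/sysctl.conf, the standard FreeBSD mechanism for boot-time sysctls. A sketch:]

```
# Apply immediately (must happen BEFORE zpool create / zpool add):
#   sysctl vfs.zfs.min_auto_ashift=12
#
# Persist across reboots in /etc/sysctl.conf:
vfs.zfs.min_auto_ashift=12
```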
--
Allan Jude
O. Hartmann
2017-12-26 17:04:44 UTC
Permalink
On Tue, 26 Dec 2017 11:44:29 -0500
Post by Allan Jude
[...]
However, the replacement for the GNOP hack is separate. In addition to
aligning the partitions to 4k, you have to tell ZFS that the drive is 4k:
sysctl vfs.zfs.min_auto_ashift=12
(2^12 = 4096)
Before you create the pool, or add additional vdevs.
I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created the vdev. What is
the consequence of that for the pool? I was under the impression that this is necessary
for "native 4k" drives.

How can I check what ashift is in effect for a specific vdev?
--
O. Hartmann

I object to the use or transfer of my data for advertising purposes or for
market or opinion research (§ 28 Abs. 4 BDSG).
Alan Somers
2017-12-26 17:13:09 UTC
Permalink
Post by O. Hartmann
[...]
I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created the
vdev. What is the consequence of that for the pool? I was under the
impression that this is necessary for "native 4k" drives.
How can I check what ashift is in effect for a specific vdev?
It's only necessary if your drive stupidly fails to report its physical
sector size correctly, and no other FreeBSD developer has already written a
quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one of the
partitions in the ZFS raid group in question. It should print either
"ashift: 12" or "ashift: 9".
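[Editorial note: a sketch of extracting the ashift value Alan refers to. The one-liner in the comment is what you would run on the live system; the sample label text below is hypothetical, mimicking zdb's output format, so the parsing step can be shown self-contained.]

```shell
#!/bin/sh
# Extract the ashift value from zdb label output.
# On a live system:  zdb -l /dev/adaXXXpY | awk '/ashift/ {print $2; exit}'
# Here the parser runs over a captured sample (hypothetical values).
sample='    version: 5000
    name: tank
    ashift: 12
    asize: 3998639460352'

ashift=$(printf '%s\n' "$sample" | awk '/ashift/ {print $2; exit}')
echo "ashift=${ashift}  (sector size $((1 << ashift)) bytes)"
```

ashift 12 means 2^12 = 4096-byte sectors; ashift 9 means 2^9 = 512-byte sectors.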

-Alan
O. Hartmann
2017-12-26 17:31:05 UTC
Permalink
On Tue, 26 Dec 2017 10:13:09 -0700
Post by Alan Somers
[...]
It's only necessary if your drive stupidly fails to report its physical
sector size correctly, and no other FreeBSD developer has already written a
quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one of the
partitions in the ZFS raid group in question. It should print either
"ashift: 12" or "ashift: 9".
_______________________________________________
https://lists.freebsd.org/mailman/listinfo/freebsd-current
I checked as suggested and all partitions report ashift: 12.

So I guess I'm safe and sound and do not need to rebuild the pools ...?
--
O. Hartmann

O. Hartmann
2017-12-26 17:09:01 UTC
Permalink
On Tue, 26 Dec 2017 11:44:29 -0500
Post by Allan Jude
[...]
sysctl vfs.zfs.min_auto_ashift=12
(2^12 = 4096)
Before you create the pool, or add additional vdevs.
I just checked with "zdb" what ashift is reported for my pool(s), and the
result is "ashift: 12".
--
O. Hartmann

Rodney W. Grimes
2017-12-26 17:31:53 UTC
Permalink
Post by Alan Somers
[...]
It's only necessary if your drive stupidly fails to report its physical
sector size correctly, and no other FreeBSD developer has already written a
quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one of the
partitions in the ZFS raid group in question. It should print either
"ashift: 12" or "ashift: 9".
And more than likely, if you used bsdinstall from one of the distributions
to set up the system you created the ZFS pool from, the sysctl is already
in /boot/loader.conf: the default for all(?) recent bsdinstalls is that 4k
is used and the sysctl gets written to /boot/loader.conf at install time,
so from then on all pools you create will also be 4k. You have to change a
default during the system install to change this to 512.
--
Rod Grimes ***@freebsd.org
Steven Hartland
2017-12-26 20:20:00 UTC
Permalink
You only need to set the min if the drives hide their true sector size, as
Allan mentioned. camcontrol identify <drive> is one of the easiest ways to
check this.

If the pool reports ashift 12, then ZFS correctly detected the drives as 4K,
so that part is good.
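[Editorial note: a sketch of the camcontrol check Steven describes. The sample line below is assumed output mimicking camcontrol's usual "sector size" line, not captured from a real drive, so the detection logic can be shown self-contained; on a live system you would pipe `camcontrol identify ada0` instead.]

```shell
#!/bin/sh
# Detect a 512e drive (512-byte logical, 4096-byte physical sectors)
# from a camcontrol identify "sector size" line.
# Live system:  camcontrol identify ada0 | grep 'sector size'
line='sector size           logical 512, physical 4096, offset 0'

logical=$(printf '%s\n' "$line" | sed 's/.*logical \([0-9]*\).*/\1/')
physical=$(printf '%s\n' "$line" | sed 's/.*physical \([0-9]*\).*/\1/')

if [ "$logical" -eq 512 ] && [ "$physical" -eq 4096 ]; then
    echo "512e drive: align partitions and ashift to ${physical}"
fi
```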

On Tue, 26 Dec 2017 at 20:15, Rodney W. Grimes <
Post by Rodney W. Grimes
[...]
And more than likely if you used the bsdinstall from one of the
distributions to set up the system you created the ZFS pool from, the
sysctl is already in /boot/loader.conf: the 4k default is used and the
sysctl gets written to /boot/loader.conf at install time, so from then on
all pools you create will also be 4k. You have to change a default during
the system install to change this to 512.
O. Hartmann
2017-12-26 20:24:12 UTC
Permalink
On Tue, 26 Dec 2017 09:31:53 -0800 (PST)
Post by Rodney W. Grimes
[...]
the sysctl gets written to /boot/loader.conf at install time, so from then
on all pools you create will also be 4k. You have to change a default
during the system install to change this to 512.
I never used any installation scripts so far.

Before I replaced the pool's drives, I tried to search for information on how to do it. This
important tiny fact must have slipped through - or it is very badly documented. I didn't
find a hint in tuning(7), which is the man page I consulted first.

Luckily, as Allan Jude stated, the disks were recognized correctly (I guess the stripesize
is used instead of the blocksize?).
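[Editorial note: as Steven confirms below, ZFS derives the ashift from the stripesize that GEOM reports (the physical sector size, shown by `diskinfo -v ada0`), not from the 512-byte logical blocksize. The ashift is simply log2 of that value; a sketch of the computation, with the 4096-byte stripesize of these drives assumed:]

```shell
#!/bin/sh
# ashift = log2(stripesize): count how many halvings reach 1.
# On a live system the stripesize comes from:  diskinfo -v ada0
stripesize=4096     # physical sector size reported via GEOM (assumed)

size=$stripesize
ashift=0
while [ "$size" -gt 1 ]; do
    size=$((size / 2))
    ashift=$((ashift + 1))
done
echo "stripesize ${stripesize} -> ashift ${ashift}"
```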
--
O. Hartmann

Steven Hartland
2017-12-26 22:44:08 UTC
Permalink
Yes, it knows how to figure it out based on the stripe size.
Post by O. Hartmann
[...]
Luckily, as Allan Jude stated, the disks were recognized correctly (I guess
the stripesize is used instead of the blocksize?).