Why can't I import a ZFS pool without partitioning the data disk with fdisk?
I have a strange situation here: I am unable to import a ZFS pool that I brought from another OS unless I fdisk the pool's disk. I am puzzled as to why this is happening and am hoping for an answer or advice.
The flow:
A ZFS pool containing a single 3 TB disk is created on a "NAS4Free 9.3.0.2" system (FreeBSD).
I export the pool and attach it to a "NexentaStor 4.0.4" system (OpenSolaris).
zpool import then shows
root@nexenta:/volumes# zpool import
   pool: tank1
     id: 17717822833491017053
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
         devices and try again.
    see: http://illumos.org/msg/ZFS-8000-3C
 config:

        tank1                      UNAVAIL  insufficient replicas
          c2t50014EE2B5B23B15d0p0  UNAVAIL  cannot open
Running zdb -l on the disk shows label 0 and label 1 as expected, but:
--------------------------------------------
LABEL 2
--------------------------------------------
failed to read label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to read label 3
I then run fdisk c2t50014EE2B5B23B15d0p0. It says the disk is not initialized and offers to initialize it and create one Linux partition. I let it do so, then optionally delete the new partition and save.
At this point an MBR is created on the first sector of the disk. The disk as a whole is still a device of the zpool.
With the MBR on the disk, I am able to import the pool as expected.
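For reference, the MBR that fdisk wrote is just sector 0 ending in the boot signature 0x55AA; nothing else on the disk changed. A minimal sketch of checking for that signature, demonstrated on a scratch file rather than real hardware:

```shell
# Create a scratch 512-byte "sector 0" and stamp the MBR boot signature
# (0x55 0xAA, octal \125 \252) at byte offsets 510-511.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null
printf '\125\252' | dd of="$IMG" bs=1 seek=510 conv=notrunc 2>/dev/null
# Read back the last two bytes of the sector; a disk carrying an MBR
# shows 55 aa here.
SIG=$(od -An -tx1 -j 510 -N 2 "$IMG" | tr -d ' ')
echo "$SIG"
rm -f "$IMG"
```

Against the real disk this would be something like `od -An -tx1 -j 510 -N 2` pointed at the raw device (the exact raw-device path for c2t50014EE2B5B23B15d0p0 is an assumption here).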
Important details:
The same process with a 256 GB disk works as expected, without any fdisk involvement, so I suspect this issue is related to disks over 2 TB.
What I have tried:
I checked whether the disk size is detected correctly, and identically, on the different systems. The disk geometry reported by fdisk on NexentaStor seems to differ from what other systems report, but I am not certain how to verify this.
Why does creating an MBR on such a disk allow the ZFS labels at the end of the disk to be read properly?
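Some arithmetic that may be behind this (a sketch, using a nominal 3 TB byte count rather than this disk's exact size): ZFS keeps four 256 KiB vdev labels, two at the start of the device and two in the last 512 KiB, so on a 3 TB disk labels 2 and 3 sit past the 2 TiB point that 32-bit sector addressing can reach.

```shell
# ZFS vdev labels are 256 KiB each: L0 at offset 0, L1 at 256 KiB,
# L2 and L3 in the last 512 KiB of the device.
DISK_BYTES=3000592982016            # nominal 3 TB disk (assumed figure)
LABEL=$((256 * 1024))
L2_OFF=$((DISK_BYTES - 2 * LABEL))
L3_OFF=$((DISK_BYTES - LABEL))
echo "label2 at byte $L2_OFF, label3 at byte $L3_OFF"
# Both offsets lie beyond 2 TiB (2199023255552 bytes), so any code path
# that mis-sizes or truncates the device at 2 TiB cannot reach them --
# consistent with zdb finding labels 0/1 but not 2/3.
```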
freebsd zfs freenas opensolaris nexenta
asked Sep 30 '15 at 14:10 by Arik Yavilevich
edited Oct 1 '15 at 10:38 by Dominique
1 Answer
If ZFS uses a whole disk, it writes an EFI label to the disk.
Did you check whether there is an EFI label present on the disk?
I know that FreeBSD does things differently than Solaris. IIRC, the recommendation on FreeBSD is to manually write an EFI label before initializing ZFS.
Note that with a 512-byte sector size, the maximum disk size addressable with fdisk (MBR) labels is 2 TB.
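That limit falls out of the MBR on-disk format: partition entries store start and length as 32-bit sector counts, so the addressable ceiling is 2^32 sectors times the sector size. A quick sketch of the arithmetic:

```shell
# MBR partition entries use 32-bit LBA fields: at most 2^32 sectors.
MAX_SECTORS=$((1 << 32))
echo "512-byte sectors: $((MAX_SECTORS * 512)) bytes (2 TiB)"
echo "4 KiB sectors:    $((MAX_SECTORS * 4096)) bytes (16 TiB)"
```

So with 512-byte sectors anything past 2 TiB is unaddressable through MBR-style bookkeeping, which is presumably why the 3 TB disk behaves differently from the 256 GB one.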
schily, thanks for pointing me in this direction. It does seem there are differences between FreeBSD and Solaris ZFS behavior. The disk I was working with didn't have any partitioning or labeling on it: no GPT/EFI or MBR, just a zeroed disk with ZFS on the whole disk (not on a partition). This is how NAS4Free creates ZFS volumes by default. Once I created an MBR sector with fdisk (not a GPT), that was enough for Solaris to detect the volume. Strangely, it didn't matter for the smaller disk. I also experimented with using GPT partitions for ZFS, and that worked as well.
– Arik Yavilevich
Oct 1 '15 at 5:17
answered Sep 30 '15 at 14:35 by schily
Thanks for contributing an answer to Unix & Linux Stack Exchange!