From 86b7b1ec3745e2a4f97aa4964f9bb91caa65efc6 Mon Sep 17 00:00:00 2001 From: mm Date: Thu, 5 Jan 2012 10:55:17 +0000 Subject: [PATCH 32/65] MFC zfs manpage update (r227646, r227648, r227649, r227752, r228019, r228045, r228054, r228055) MFC r227646 (partial): Update and desolarization of zfs(8) and zpool(8) manual pages: - synchronized to match new vendor code (Illumos rev. 13513) [1] - removed references to sun commands (replaced with FreeBSD commands) - removed ATTRIBUTES sections - updated SEE ALSO sections - properly updated copyright information (required by CDDL) zfs(8) only: - replaced "Zones" section with new "Jails" section - removed misleading "ZFS Volumes as Swap or Dump Devices" section - updated shareiscsi and sharesmb option information (not supported on FreeBSD) - replace zoned property with jailed property zpool(8) only: - updated device names in examples MFC r227648: Fix reference to fsync(2). Add more references to SEE ALSO section. MFC r227649: More zfs(8) manpage fixes: - remove shareiscsi property - mark casesensitivity property as unsupported - remove reference to Solaris Administration Guide MFC r227752 (partial): Update and desolarization of zdb(8) and zstreamdump(1) manual pages: - synchronized to match new vendor code [1] - removed ATTRIBUTES sections - updated SEE ALSO sections - properly updated copyright information (required by CDDL) MFC r228019: Update ZFS manual pages to a mdoc(7) reimplementation. The zfs(8) and zpool(8) manual pages now match the state of the ZFS module and have been customized for FreeBSD. The new texts of the "Deduplication" subsection in zfs(8), the zpool "split" command, the zfs "dedup" property and several other missing parts have been added from illumos or OpenSolaris snv_134 (CDDL-licensed). The mdoc(7) reimplementation of whole manual pages, the descriptions of the zpool "readonly" property, "zfs diff" command and descriptions of several other missing command flags and/or options were authored by myself. MFC r228045: Add missing -n flag to "zpool import" description. MFC r228054: Add missing warning to zfs(8) for using "zfs destroy" with -r and -R flags. MFC r228055: Use singular form for zfs destroy snapshot in zfs(8). Obtained from: Illumos (as of rev. 13513:f84d4672fdbd) [1] git-svn-id: http://svn.freebsd.org/base/stable/9@229576 ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f (cherry picked from commit 8f4870de3a2d0f3c14631198721182e55d657616) Signed-off-by: Xin Li --- cddl/contrib/opensolaris/cmd/zdb/zdb.8 | 151 +- cddl/contrib/opensolaris/cmd/zfs/zfs.8 | 6001 ++++++++++---------- cddl/contrib/opensolaris/cmd/zpool/zpool.8 | 3610 ++++++------ .../opensolaris/cmd/zstreamdump/zstreamdump.1 | 126 +- 4 files changed, 4939 insertions(+), 4949 deletions(-) diff --git a/cddl/contrib/opensolaris/cmd/zdb/zdb.8 b/cddl/contrib/opensolaris/cmd/zdb/zdb.8 index f601825..f84bf9a 100644 --- a/cddl/contrib/opensolaris/cmd/zdb/zdb.8 +++ b/cddl/contrib/opensolaris/cmd/zdb/zdb.8 @@ -1,84 +1,79 @@ '\" te +.\" Copyright (c) 2011, Martin Matuska . +.\" All Rights Reserved. +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or http://www.opensolaris.org/os/licensing. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. 
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
 .\" Copyright (c) 2004, Sun Microsystems, Inc. All Rights Reserved.
-.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License.
-.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. See the License for the specific language governing permissions and limitations under the License.
-.\" When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]
-.TH zdb 1M "31 Oct 2005" "SunOS 5.11" "System Administration Commands"
-.SH NAME
-zdb \- ZFS debugger
-.SH SYNOPSIS
-.LP
-.nf
-\fBzdb\fR \fIpool\fR
-.fi
-
-.SH DESCRIPTION
-.sp
-.LP
-The \fBzdb\fR command is used by support engineers to diagnose failures and gather statistics. Since the \fBZFS\fR file system is always consistent on disk and is self-repairing, \fBzdb\fR should only be run under the direction by a support engineer.
-.sp
-.LP
-If no arguments are specified, \fBzdb\fR, performs basic consistency checks on the pool and associated datasets, and report any problems detected.
-.sp
-.LP
-Any options supported by this command are internal to Sun and subject to change at any time.
-.SH EXIT STATUS
-.sp
-.LP
+.\"
+.\" $FreeBSD$
+.\"
+.Dd November 26, 2011
+.Dt ZDB 8
+.Os
+.Sh NAME
+.Nm zdb
+.Nd ZFS debugger
+.Sh SYNOPSIS
+.Nm
+.Ar pool
+.Sh DESCRIPTION
+The
+.Nm
+command is used by support engineers to diagnose failures and
+gather statistics. Since the
+.Tn ZFS
+file system is always consistent on disk and is self-repairing,
+.Nm
+should only be run under the direction of a support engineer.
+.Pp
+If no arguments are specified,
+.Nm
+performs basic consistency checks on the pool and associated datasets, and
+reports any problems detected.
+.Pp
+Any options supported by this command are internal to Sun and subject to change
+at any time.
+.Sh EXIT STATUS
 The following exit values are returned:
-.sp
-.ne 2
-.mk
-.na
-\fB\fB0\fR\fR
-.ad
-.RS 5n
-.rt
+.Bl -tag -offset 2n -width 2n
+.It 0
 The pool is consistent.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fB1\fR\fR
-.ad
-.RS 5n
-.rt
+.It 1
 An error was detected.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fB2\fR\fR
-.ad
-.RS 5n
-.rt
+.It 2
 Invalid command line options were specified.
-.RE
-
-.SH ATTRIBUTES
-.sp
-.LP
-See \fBattributes\fR(5) for descriptions of the following attributes:
-.sp
-
-.sp
-.TS
-tab() box;
-cw(2.75i) |cw(2.75i)
-lw(2.75i) |lw(2.75i)
-.
-ATTRIBUTE TYPEATTRIBUTE VALUE
-_
-AvailabilitySUNWzfsu
-_
-Interface StabilityUnstable
-.TE
-
-.SH SEE ALSO
-.sp
-.LP
-\fBzfs\fR(1M), \fBzpool\fR(1M), \fBattributes\fR(5)
+.El
+.Sh SEE ALSO
+.Xr zfs 8 ,
+.Xr zpool 8
+.Sh AUTHORS
+This manual page is a
+.Xr mdoc 7
+reimplementation of the
+.Tn OpenSolaris
+manual page
+.Em zdb(1M) ,
+modified and customized for
+.Fx
+and licensed under the
+.Tn Common Development and Distribution License
+.Pq Tn CDDL .
+.Pp +The +.Xr mdoc 7 +implementation of this manual page was initially written by +.An Martin Matuska Aq mm@FreeBSD.org . diff --git a/cddl/contrib/opensolaris/cmd/zfs/zfs.8 b/cddl/contrib/opensolaris/cmd/zfs/zfs.8 index 03deef2..d1c282a 100644 --- a/cddl/contrib/opensolaris/cmd/zfs/zfs.8 +++ b/cddl/contrib/opensolaris/cmd/zfs/zfs.8 @@ -1,2701 +1,2726 @@ '\" te -.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. -.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. -.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with -.\" the fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner] -.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. -.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with -.\" the fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner] +.\" Copyright (c) 2011, Martin Matuska . +.\" All Rights Reserved. +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or http://www.opensolaris.org/os/licensing. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" Copyright (c) 2010, Sun Microsystems, Inc. All Rights Reserved. .\" Copyright 2011 Nexenta Systems, Inc. All rights reserved. -.\" Copyright 2011 by Delphix. All rights reserved. -.TH zfs 1M "24 Sep 2009" "SunOS 5.11" "System Administration Commands" -.SH NAME -zfs \- configures ZFS file systems -.SH SYNOPSIS -.LP -.nf -\fBzfs\fR [\fB-?\fR] -.fi - -.LP -.nf -\fBzfs\fR \fBcreate\fR [\fB-p\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... \fIfilesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBcreate\fR [\fB-ps\fR] [\fB-b\fR \fIblocksize\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... 
\fB-V\fR \fIsize\fR \fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBdestroy\fR [\fB-rRf\fR] \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBdestroy\fR [\fB-rRd\fR] \fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBsnapshot\fR [\fB-r\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR]... - \fIfilesystem@snapname\fR|\fIvolume@snapname\fR -.fi - -.LP -.nf -\fBzfs\fR \fBrollback\fR [\fB-rRf\fR] \fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBclone\fR [\fB-p\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... \fIsnapshot\fR \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBpromote\fR \fIclone-filesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBrename\fR \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR - \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBrename\fR [\fB-p\fR] \fIfilesystem\fR|\fIvolume\fR \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBrename\fR \fB-r\fR \fIsnapshot\fR \fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBrename\fR \fB-u\fR [\fB-p\fR] \fIfilesystem\fR \fIfilesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBlist\fR [\fB-r\fR|\fB-d\fR \fIdepth\fR][\fB-H\fR][\fB-o\fR \fIproperty\fR[,...]] [\fB-t\fR \fItype\fR[,...]] - [\fB-s\fR \fIproperty\fR] ... [\fB-S\fR \fIproperty\fR] ... [\fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR] ... -.fi - -.LP -.nf -\fBzfs\fR \fBset\fR \fIproperty\fR=\fIvalue\fR \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR ... -.fi - -.LP -.nf -\fBzfs\fR \fBget\fR [\fB-r\fR|\fB-d\fR \fIdepth\fR][\fB-Hp\fR][\fB-o\fR \fIfield\fR[,...]] [\fB-s\fR \fIsource\fR[,...]] - "\fIall\fR" | \fIproperty\fR[,...] \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR ... -.fi - -.LP -.nf -\fBzfs\fR \fBinherit\fR [\fB-r\fR] \fIproperty\fR \fIfilesystem\fR|\fIvolume|snapshot\fR ... -.fi - -.LP -.nf -\fBzfs\fR \fBupgrade\fR [\fB-v\fR] -.fi - -.LP -.nf -\fBzfs\fR \fBupgrade\fR [\fB-r\fR] [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIfilesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBuserspace\fR [\fB-niHp\fR] [\fB-o\fR \fIfield\fR[,...]] [\fB-sS\fR \fIfield\fR] ... - [\fB-t\fR \fItype\fR [,...]] \fIfilesystem\fR|\fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBgroupspace\fR [\fB-niHp\fR] [\fB-o\fR \fIfield\fR[,...]] [\fB-sS\fR \fIfield\fR] ... - [\fB-t\fR \fItype\fR [,...]] \fIfilesystem\fR|\fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBmount\fR -.fi - -.LP -.nf -\fBzfs\fR \fBmount\fR [\fB-vO\fR] [\fB-o \fIoptions\fR\fR] \fB-a\fR | \fIfilesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBunmount\fR [\fB-f\fR] \fB-a\fR | \fIfilesystem\fR|\fImountpoint\fR -.fi - -.LP -.nf -\fBzfs\fR \fBshare\fR \fB-a\fR | \fIfilesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBunshare\fR \fB-a\fR \fIfilesystem\fR|\fImountpoint\fR -.fi - -.LP -.nf -\fBzfs\fR \fBsend\fR [\fB-vR\fR] [\fB-\fR[\fBiI\fR] \fIsnapshot\fR] \fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBreceive\fR [\fB-vnFu\fR] \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR -.fi - -.LP -.nf -\fBzfs\fR \fBreceive\fR [\fB-vnFu\fR] \fB-d\fR \fIfilesystem\fR -.fi - -.LP -.nf -\fBzfs\fR \fBallow\fR \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBallow\fR [\fB-ldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] \fIperm\fR|\fI@setname\fR[,...] - \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBallow\fR [\fB-ld\fR] \fB-e\fR \fIperm\fR|@\fIsetname\fR[,...] \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBallow\fR \fB-c\fR \fIperm\fR|@\fIsetname\fR[,...] \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBallow\fR \fB-s\fR @\fIsetname\fR \fIperm\fR|@\fIsetname\fR[,...] 
\fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBunallow\fR [\fB-rldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] [\fIperm\fR|@\fIsetname\fR[,... ]] - \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBunallow\fR [\fB-rld\fR] \fB-e\fR [\fIperm\fR|@\fIsetname\fR[,... ]] \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBunallow\fR [\fB-r\fR] \fB-c\fR [\fIperm\fR|@\fIsetname\fR[ ... ]] \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBunallow\fR [\fB-r\fR] \fB-s\fR @\fIsetname\fR [\fIperm\fR|@\fIsetname\fR[,... ]] \fIfilesystem\fR|\fIvolume\fR -.fi - -.LP -.nf -\fBzfs\fR \fBhold\fR [\fB-r\fR] \fItag\fR \fIsnapshot\fR... -.fi - -.LP -.nf -\fBzfs\fR \fBholds\fR [\fB-r\fR] \fIsnapshot\fR... -.fi - -.LP -.nf -\fBzfs\fR \fBrelease\fR [\fB-r\fR] \fItag\fR \fIsnapshot\fR... -.fi - -\fBzfs\fR \fBjail\fR \fBjailid\fR \fB\fIfilesystem\fR\fR -.fi -.LP -.nf -\fBzfs\fR \fBunjail\fR \fBjailid\fR \fB\fIfilesystem\fR\fR -.fi - -.SH DESCRIPTION -.sp -.LP -The \fBzfs\fR command configures \fBZFS\fR datasets within a \fBZFS\fR storage pool, as described in \fBzpool\fR(1M). A dataset is identified by a unique path within the \fBZFS\fR namespace. For example: -.sp -.in +2 -.nf -pool/{filesystem,volume,snapshot} -.fi -.in -2 -.sp - -.sp -.LP -where the maximum length of a dataset name is \fBMAXNAMELEN\fR (256 bytes). -.sp -.LP +.\" Copyright (c) 2011 by Delphix. All rights reserved. +.\" Copyright (c) 2011, Pawel Jakub Dawidek +.\" +.\" $FreeBSD$ +.\" +.Dd November 26, 2011 +.Dt ZFS 8 +.Os +.Sh NAME +.Nm zfs +.Nd configures ZFS file systems +.Sh SYNOPSIS +.Nm +.Op Fl \&? +.Nm +.Cm create +.Op Fl p +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... filesystem +.Nm +.Cm create +.Op Fl ps +.Op Fl b Ar blocksize +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Fl V +.Ar size volume +.Nm +.Cm destroy +.Op Fl rRf +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm destroy +.Op Fl rRd +.Ar snapshot +.Nm +.Cm snapshot +.Op Fl r +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... filesystem@snapname Ns | Ns Ar volume@snapname +.Nm +.Cm rollback +.Op Fl rRf +.Ar snapshot +.Nm +.Cm clone +.Op Fl p +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... snapshot filesystem Ns | Ns Ar volume +.Nm +.Cm promote +.Ar clone-filesystem +.Nm +.Cm rename +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm +.Cm rename +.Fl p +.Ar filesystem Ns | Ns Ar volume +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm rename +.Fl r +.Ar snapshot snapshot +.Nm +.Cm rename +.Fl u +.Op Fl p +.Ar filesystem filesystem +.Nm +.Cm list +.Op Fl r Ns | Ns Fl d Ar depth +.Op Fl H +.Op Fl o Ar property Ns Op , Ns Ar ... +.Op Fl t Ar type Ns Op , Ns Ar ... +.Op Fl s Ar property +.Ar ... +.Op Fl S Ar property +.Ar ... +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm +.Cm set +.Ar property Ns = Ns Ar value +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm +.Cm get +.Op Fl r Ns | Ns Fl d Ar depth +.Op Fl Hp +.Op Fl o Ar all | field Ns Op , Ns Ar ... +.Op Fl s Ar source Ns Op , Ns Ar ... +.Ar all | property Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm +.Cm inherit +.Op Fl rS +.Ar property +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm +.Cm upgrade +.Op Fl v +.Nm +.Cm upgrade +.Op Fl r +.Op Fl V Ar version +.Fl a | Ar filesystem +.Nm +.Cm userspace +.Op Fl niHp +.Op Fl o Ar field Ns Op , Ns Ar ... +.Op Fl sS Ar field +.Ar ... +.Op Fl t Ar type Ns Op , Ns Ar ... 
+.Ar filesystem Ns | Ns Ar snapshot +.Nm +.Cm groupspace +.Op Fl niHp +.Op Fl o Ar field Ns Op , Ns Ar ... +.Op Fl sS Ar field +.Ar ... +.Op Fl t Ar type Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar snapshot +.Nm +.Cm mount +.Nm +.Cm mount +.Op Fl vO +.Op Fl o Ar property Ns Op , Ns Ar ... +.Fl a | Ar filesystem +.Nm +.Cm unmount +.Op Fl f +.Fl a | Ar filesystem Ns | Ns Ar mountpoint +.Nm +.Cm share +.Fl a | Ar filesystem +.Nm +.Cm unshare +.Fl a | Ar filesystem Ns | Ns Ar mountpoint +.Nm +.Cm send +.Op Fl DvRp +.Op Fl i Ar snapshot | Fl I Ar snapshot +.Ar snapshot +.Nm +.Cm receive +.Op Fl vnFu +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm +.Cm receive +.Op Fl vnFu +.Op Fl d | e +.Ar filesystem +.Nm +.Cm allow +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm allow +.Op Fl ldug +.Cm everyone Ns | Ns Ar user Ns | Ns Ar group Ns Op , Ns Ar ... +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm allow +.Op Fl ld +.Fl e +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm allow +.Fl c +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm allow +.Fl s +.Ar @setname +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm unallow +.Op Fl rldug +.Cm everyone Ns | Ns Ar user Ns | Ns Ar group Ns Op , Ns Ar ... +.Op Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm unallow +.Op Fl rld +.Fl e +.Op Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm unallow +.Op Fl r +.Fl c +.Op Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm unallow +.Op Fl r +.Fl s +.Ar @setname +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Nm +.Cm hold +.Op Fl r +.Ar tag snapshot ... +.Nm +.Cm holds +.Op Fl r +.Ar snapshot ... +.Nm +.Cm release +.Op Fl r +.Ar tag snapshot ... +.Nm +.Cm diff +.Op Fl FHt +.Ar snapshot +.Op Ar snapshot Ns | Ns Ar filesystem +.Nm +.Cm jail +.Ar jailid filesystem +.Nm +.Cm unjail +.Ar jailid filesystem +.Sh DESCRIPTION +The +.Nm +command configures +.Tn ZFS +datasets within a +.Tn ZFS +storage pool, as described in +.Xr zpool 8 . +A dataset is identified by a unique path within the +.Tn ZFS +namespace. For example: +.Bd -ragged -offset 4n +.No pool/ Ns Brq filesystem,volume,snapshot +.Ed +.Pp +where the maximum length of a dataset name is +.Dv MAXNAMELEN +(256 bytes). +.Pp A dataset can be one of the following: -.sp -.ne 2 -.mk -.na -\fB\fIfile system\fR\fR -.ad -.sp .6 -.RS 4n -A \fBZFS\fR dataset of type \fBfilesystem\fR can be mounted within the standard system namespace and behaves like other file systems. While \fBZFS\fR file systems are designed to be \fBPOSIX\fR compliant, known issues exist that prevent compliance in some cases. Applications that depend on standards conformance might fail due to nonstandard behavior when checking file system free space. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -A logical volume exported as a raw or block device. This type of dataset should only be used under special circumstances. File systems are typically used in most environments. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -A read-only version of a file system or volume at a given point in time. It is specified as \fIfilesystem@name\fR or \fIvolume@name\fR. 
-.RE - -.SS "ZFS File System Hierarchy" -.sp -.LP -A \fBZFS\fR storage pool is a logical collection of devices that provide space for datasets. A storage pool is also the root of the \fBZFS\fR file system hierarchy. -.sp -.LP -The root of the pool can be accessed as a file system, such as mounting and unmounting, taking snapshots, and setting properties. The physical storage characteristics, however, are managed by the \fBzpool\fR(1M) command. -.sp -.LP -See \fBzpool\fR(1M) for more information on creating and administering pools. -.SS "Snapshots" -.sp -.LP -A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes more data than would otherwise be shared with the active dataset. -.sp -.LP -Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back, but cannot be accessed independently. -.sp -.LP -File system snapshots can be accessed under the \fB\&.zfs/snapshot\fR directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the \fB\&.zfs\fR directory can be controlled by the \fBsnapdir\fR property. -.SS "Clones" -.sp -.LP -A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space. -.sp -.LP -Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The \fBorigin\fR property exposes this dependency, and the \fBdestroy\fR command lists any such dependencies, if they exist. -.sp -.LP -The clone parent-child dependency relationship can be reversed by using the \fBpromote\fR subcommand. This causes the "origin" file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from. -.SS "Mount Points" -.sp -.LP -Creating a \fBZFS\fR file system is a simple operation, so the number of file systems per system is likely to be numerous. To cope with this, \fBZFS\fR automatically manages mounting and unmounting file systems without the need to edit the \fB/etc/vfstab\fR file. All automatically managed file systems are mounted by \fBZFS\fR at boot time. -.sp -.LP -By default, file systems are mounted under \fB/\fIpath\fR\fR, where \fIpath\fR is the name of the file system in the \fBZFS\fR namespace. Directories are created and destroyed as needed. -.sp -.LP -A file system can also have a mount point set in the \fBmountpoint\fR property. This directory is created as needed, and \fBZFS\fR automatically mounts the file system when the \fBzfs mount -a\fR command is invoked (without editing \fB/etc/vfstab\fR). The \fBmountpoint\fR property can be inherited, so if \fBpool/home\fR has a mount point of \fB/export/stuff\fR, then \fBpool/home/user\fR automatically inherits a mount point of \fB/export/stuff/user\fR. -.sp -.LP -A file system \fBmountpoint\fR property of \fBnone\fR prevents the file system from being mounted. -.sp -.LP -If needed, \fBZFS\fR file systems can also be managed with traditional tools (\fBmount\fR, \fBumount\fR, \fB/etc/vfstab\fR). 
If a file system's mount point is set to \fBlegacy\fR, \fBZFS\fR makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. -.SS "Zones" -.sp -.LP -A \fBZFS\fR file system can be added to a non-global zone by using the \fBzonecfg\fR \fBadd fs\fR subcommand. A \fBZFS\fR file system that is added to a non-global zone must have its \fBmountpoint\fR property set to \fBlegacy\fR. -.sp -.LP -The physical properties of an added file system are controlled by the global administrator. However, the zone administrator can create, modify, or destroy files within the added file system, depending on how the file system is mounted. -.sp -.LP -A dataset can also be delegated to a non-global zone by using the \fBzonecfg\fR \fBadd dataset\fR subcommand. You cannot delegate a dataset to one zone and the children of the same dataset to another zone. The zone administrator can change properties of the dataset or any of its children. However, the \fBquota\fR property is controlled by the global administrator. -.sp -.LP -A \fBZFS\fR volume can be added as a device to a non-global zone by using the \fBzonecfg\fR \fBadd device\fR subcommand. However, its physical properties can be modified only by the global administrator. -.sp -.LP -For more information about \fBzonecfg\fR syntax, see \fBzonecfg\fR(1M). -.sp -.LP -After a dataset is delegated to a non-global zone, the \fBzoned\fR property is automatically set. A zoned file system cannot be mounted in the global zone, since the zone administrator might have to set the mount point to an unacceptable value. -.sp -.LP -The global administrator can forcibly clear the \fBzoned\fR property, though this should be done with extreme care. The global administrator should verify that all the mount points are acceptable before clearing the property. -.SS "Native Properties" -.sp -.LP -Properties are divided into two types, native properties and user-defined (or "user") properties. Native properties either export internal statistics or control \fBZFS\fR behavior. In addition, native properties are either editable or read-only. User properties have no effect on \fBZFS\fR behavior, but you can use them to annotate datasets in a way that is meaningful in your environment. For more information about user properties, see the "User Properties" section, below. -.sp -.LP -Every dataset has a set of properties that export statistics about the dataset as well as control various behaviors. Properties are inherited from the parent unless overridden by the child. Some properties apply only to certain types of datasets (file systems, volumes, or snapshots). -.sp -.LP -The values of numeric properties can be specified using human-readable suffixes (for example, \fBk\fR, \fBKB\fR, \fBM\fR, \fBGb\fR, and so forth, up to \fBZ\fR for zettabyte). The following are all valid (and equal) specifications: -.sp -.in +2 -.nf +.Bl -hang -width 12n +.It Sy file system +A +.Tn ZFS +dataset of type +.Em filesystem +can be mounted within the standard system namespace and behaves like other file +systems. While +.Tn ZFS +file systems are designed to be +.Tn POSIX +compliant, known issues exist that prevent compliance in some cases. +Applications that depend on standards conformance might fail due to nonstandard +behavior when checking file system free space. +.It Sy volume +A logical volume exported as a raw or block device. This type of dataset should +only be used under special circumstances. 
File systems are typically used in +most environments. +.It Sy snapshot +A read-only version of a file system or volume at a given point in time. It is +specified as +.Em filesystem@name +or +.Em volume@name . +.El +.Ss ZFS File System Hierarchy +A +.Tn ZFS +storage pool is a logical collection of devices that provide space for +datasets. A storage pool is also the root of the +.Tn ZFS +file system hierarchy. +.Pp +The root of the pool can be accessed as a file system, such as mounting and +unmounting, taking snapshots, and setting properties. The physical storage +characteristics, however, are managed by the +.Xr zpool 8 +command. +.Pp +See +.Xr zpool 8 +for more information on creating and administering pools. +.Ss Snapshots +A snapshot is a read-only copy of a file system or volume. Snapshots can be +created extremely quickly, and initially consume no additional space within the +pool. As data within the active dataset changes, the snapshot consumes more +data than would otherwise be shared with the active dataset. +.Pp +Snapshots can have arbitrary names. Snapshots of volumes can be cloned or +rolled back, but cannot be accessed independently. +.Pp +File system snapshots can be accessed under the +.Pa \&.zfs/snapshot +directory in the root of the file system. Snapshots are automatically mounted +on demand and may be unmounted at regular intervals. The visibility of the +.Pa \&.zfs +directory can be controlled by the +.Sy snapdir +property. +.Ss Clones +A clone is a writable volume or file system whose initial contents are the same +as another dataset. As with snapshots, creating a clone is nearly +instantaneous, and initially consumes no additional space. +.Pp +Clones can only be created from a snapshot. When a snapshot is cloned, it +creates an implicit dependency between the parent and child. Even though the +clone is created somewhere else in the dataset hierarchy, the original snapshot +cannot be destroyed as long as a clone exists. The +.Sy origin +property exposes this dependency, and the +.Cm destroy +command lists any such dependencies, if they exist. +.Pp +The clone parent-child dependency relationship can be reversed by using the +.Cm promote +subcommand. This causes the "origin" file system to become a clone of the +specified file system, which makes it possible to destroy the file system that +the clone was created from. +.Ss Mount Points +Creating a +.Tn ZFS +file system is a simple operation, so the number of file systems per system is +likely to be numerous. To cope with this, +.Tn ZFS +automatically manages mounting and unmounting file systems without the need to +edit the +.Pa /etc/fstab +file. All automatically managed file systems are mounted by +.Tn ZFS +at boot time. +.Pp +By default, file systems are mounted under +.Pa /path , +where +.Ar path +is the name of the file system in the +.Tn ZFS +namespace. Directories are created and destroyed as needed. +.Pp +A file system can also have a mount point set in the +.Sy mountpoint +property. This directory is created as needed, and +.Tn ZFS +automatically mounts the file system when the +.Qq Nm Cm mount Fl a +command is invoked (without editing +.Pa /etc/fstab Ns ). +The +.Sy mountpoint +property can be inherited, so if +.Em pool/home +has a mount point of +.Pa /home , +then +.Em pool/home/user +automatically inherits a mount point of +.Pa /home/user . +.Pp +A file system +.Sy mountpoint +property of +.Cm none +prevents the file system from being mounted. 
+.Pp
+If needed,
+.Tn ZFS
+file systems can also be managed with traditional tools
+.Pq Xr mount 8 , Xr umount 8 , Xr fstab 5 .
+If a file system's mount point is set to
+.Cm legacy ,
+.Tn ZFS
+makes no attempt to manage the file system, and the administrator is
+responsible for mounting and unmounting the file system.
+.Ss Jails
+.No A Tn ZFS
+dataset can be attached to a jail by using the
+.Qq Nm Cm jail
+subcommand. You cannot attach a dataset to one jail and the children of the
+same dataset to another jail. To allow management of the dataset from within
+a jail, the
+.Sy jailed
+property has to be set. The
+.Sy quota
+property cannot be changed from within a jail.
+.Pp
+.No A Tn ZFS
+dataset can be detached from a jail using the
+.Qq Nm Cm unjail
+subcommand.
+.Pp
+After a dataset is attached to a jail and the jailed property is set, a jailed
+file system cannot be mounted outside the jail, since the jail administrator
+might have set the mount point to an unacceptable value.
+.Ss Deduplication
+Deduplication is the process for removing redundant data at the block level,
+reducing the total amount of data stored. If a file system has the
+.Cm dedup
+property enabled, duplicate data blocks are removed synchronously. The result
+is that only unique data is stored and common components are shared among
+files.
+.Ss Native Properties
+Properties are divided into two types, native properties and user-defined (or
+"user") properties. Native properties either export internal statistics or
+control
+.Tn ZFS
+behavior. In addition, native properties are either editable or read-only. User
+properties have no effect on
+.Tn ZFS
+behavior, but you can use them to annotate datasets in a way that is meaningful
+in your environment. For more information about user properties, see the
+.Qq Sx User Properties
+section, below.
+.Pp
+Every dataset has a set of properties that export statistics about the dataset
+as well as control various behaviors. Properties are inherited from the parent
+unless overridden by the child. Some properties apply only to certain types of
+datasets (file systems, volumes, or snapshots).
+.Pp
+The values of numeric properties can be specified using human-readable suffixes
+(for example,
+.Sy k , KB , M , Gb ,
+and so forth, up to
+.Sy Z
+for zettabyte). The following are all valid (and equal) specifications:
+.Bd -ragged -offset 4n
 1536M, 1.5g, 1.50GB
The \fBused\fR property includes descendant datasets, and, for clones, does not include the space shared with the origin snapshot. For snapshots, the \fBcompressratio\fR is the same as the \fBrefcompressratio\fR property. Compression can be turned on by running: \fBzfs set compression=on \fIdataset\fR\fR. The default value is \fBoff\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcreation\fR\fR -.ad -.sp .6 -.RS 4n +.Ed +.Pp +The values of non-numeric properties are case sensitive and must be lowercase, +except for +.Sy mountpoint , sharenfs , No and Sy sharesmb . +.Pp +The following native properties consist of read-only statistics about the +dataset. These properties can be neither set, nor inherited. Native properties +apply to all dataset types unless otherwise noted. +.Bl -tag -width 2n +.It Sy available +The amount of space available to the dataset and all its children, assuming +that there is no other activity in the pool. Because space is shared within a +pool, availability can be limited by any number of factors, including physical +pool size, quotas, reservations, or other datasets within the pool. +.Pp +This property can also be referred to by its shortened column name, +.Sy avail . +.It Sy compressratio +For non-snapshots, the compression ratio achieved for the +.Sy used +space of this dataset, expressed as a multiplier. The +.Sy used +property includes descendant datasets, and, for clones, does not include +the space shared with the origin snapshot. For snapshots, the +.Sy compressratio +is the same as the +.Sy refcompressratio +property. Compression can be turned on by running: +.Qq Nm Cm set compression=on Ar dataset +The default value is +.Cm off . +.It Sy creation The time this dataset was created. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBdefer_destroy\fR\fR -.ad -.sp .6 -.RS 4n -This property is \fBon\fR if the snapshot has been marked for deferred destroy by using the \fBzfs destroy\fR \fB-d\fR command. Otherwise, the property is \fBoff\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBmounted\fR\fR -.ad -.sp .6 -.RS 4n -For file systems, indicates whether the file system is currently mounted. This property can be either \fByes\fR or \fBno\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBorigin\fR\fR -.ad -.sp .6 -.RS 4n -For cloned file systems or volumes, the snapshot from which the clone was created. The origin cannot be destroyed (even with the \fB-r\fR or \fB-f\fR options) so long as a clone exists. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBreferenced\fR\fR -.ad -.sp .6 -.RS 4n -The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical. -.sp -This property can also be referred to by its shortened column name, \fBrefer\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBrefcompressratio\fR\fR -.ad -.sp .6 -.RS 4n -The compression ratio achieved for the \fBreferenced\fR space of this dataset, expressed as a multiplier. See also the \fBcompressratio\fR property. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBtype\fR\fR -.ad -.sp .6 -.RS 4n -The type of dataset: \fBfilesystem\fR, \fBvolume\fR, or \fBsnapshot\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBused\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space consumed by this dataset and all its descendents. This is the value that is checked against this dataset's quota and reservation. 
The space used does not include this dataset's reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that are freed if this dataset is recursively destroyed, is the greater of its space used and its reservation. -.sp -When snapshots (see the "Snapshots" section) are created, their space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, space that was previously shared becomes unique to the snapshot, and counted in the snapshot's space used. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots. -.sp -The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using \fBfsync\fR(3c) or \fBO_SYNC\fR does not necessarily guarantee that the space usage information is updated immediately. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBusedby*\fR\fR -.ad -.sp .6 -.RS 4n -The \fBusedby*\fR properties decompose the \fBused\fR properties into the various reasons that space is used. Specifically, \fBused\fR = \fBusedbychildren\fR + \fBusedbydataset\fR + \fBusedbyrefreservation\fR +, \fBusedbysnapshots\fR. These properties are only available for datasets created on \fBzpool\fR "version 13" pools. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBusedbychildren\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBusedbydataset\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space used by this dataset itself, which would be freed if the dataset were destroyed (after first removing any \fBrefreservation\fR and destroying any necessary snapshots or descendents). -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBusedbyrefreservation\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space used by a \fBrefreservation\fR set on this dataset, which would be freed if the \fBrefreservation\fR was removed. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBusedbysnapshots\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' \fBused\fR properties because space can be shared by multiple snapshots. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBuserused@\fR\fIuser\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space consumed by the specified user in this dataset. Space is charged to the owner of each file, as displayed by \fBls\fR \fB-l\fR. The amount of space charged is displayed by \fBdu\fR and \fBls\fR \fB-s\fR. See the \fBzfs userspace\fR subcommand for more information. -.sp -Unprivileged users can access only their own space usage. The root user, or a user who has been granted the \fBuserused\fR privilege with \fBzfs allow\fR, can access everyone's usage. -.sp -The \fBuserused@\fR... properties are not displayed by \fBzfs get all\fR. 
The user's name must be appended after the \fB@\fR symbol, using one of the following forms: -.RS +4 -.TP -.ie t \(bu -.el o -\fIPOSIX name\fR (for example, \fBjoe\fR) -.RE -.RS +4 -.TP -.ie t \(bu -.el o -\fIPOSIX numeric ID\fR (for example, \fB789\fR) -.RE -.RS +4 -.TP -.ie t \(bu -.el o -\fISID name\fR (for example, \fBjoe.smith@mydomain\fR) -.RE -.RS +4 -.TP -.ie t \(bu -.el o -\fISID numeric ID\fR (for example, \fBS-1-123-456-789\fR) -.RE -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBuserrefs\fR\fR -.ad -.sp .6 -.RS 4n -This property is set to the number of user holds on this snapshot. User holds are set by using the \fBzfs hold\fR command. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBgroupused@\fR\fIgroup\fR\fR -.ad -.sp .6 -.RS 4n -The amount of space consumed by the specified group in this dataset. Space is charged to the group of each file, as displayed by \fBls\fR \fB-l\fR. See the \fBuserused@\fR\fIuser\fR property for more information. -.sp -Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the \fBgroupused\fR privilege with \fBzfs allow\fR, can access all groups' usage. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBvolblocksize\fR=\fIblocksize\fR\fR -.ad -.sp .6 -.RS 4n -For volumes, specifies the block size of the volume. The \fBblocksize\fR cannot be changed once the volume has been written, so it should be set at volume creation time. The default \fBblocksize\fR for volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid. -.sp -This property can also be referred to by its shortened column name, \fBvolblock\fR. -.RE - -.sp -.LP -The following native properties can be used to change the behavior of a \fBZFS\fR dataset. -.sp -.ne 2 -.mk -.na -\fB\fBaclinherit\fR=\fBdiscard\fR | \fBnoallow\fR | \fBrestricted\fR | \fBpassthrough\fR | \fBpassthrough-x\fR\fR -.ad -.sp .6 -.RS 4n -Controls how \fBACL\fR entries are inherited when files and directories are created. A file system with an \fBaclinherit\fR property of \fBdiscard\fR does not inherit any \fBACL\fR entries. A file system with an \fBaclinherit\fR property value of \fBnoallow\fR only inherits inheritable \fBACL\fR entries that specify "deny" permissions. The property value \fBrestricted\fR (the default) removes the \fBwrite_acl\fR and \fBwrite_owner\fR permissions when the \fBACL\fR entry is inherited. A file system with an \fBaclinherit\fR property value of \fBpassthrough\fR inherits all inheritable \fBACL\fR entries without any modifications made to the \fBACL\fR entries when they are inherited. A file system with an \fBaclinherit\fR property value of \fBpassthrough-x\fR has the same meaning as \fBpassthrough\fR, except that the \fBowner@\fR, \fBgroup@\fR, and \fBeveryone@\fR \fBACE\fRs inherit the execute permission only if the file creation mode also requests the execute bit. -.sp -When the property value is set to \fBpassthrough\fR, files are created with a mode determined by the inheritable \fBACE\fRs. If no inheritable \fBACE\fRs exist that affect the mode, then the mode is set in accordance to the requested mode from the application. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBaclmode\fR=\fBdiscard\fR | \fBgroupmask\fR | \fBpassthrough\fR\fR -.ad -.sp .6 -.RS 4n -Controls how an \fBACL\fR is modified during \fBchmod\fR(2). A file system with an \fBaclmode\fR property of \fBdiscard\fR (the default) deletes all \fBACL\fR entries that do not represent the mode of the file. 
An \fBaclmode\fR property of \fBgroupmask\fR reduces permissions granted in all \fBALLOW\fR entries found in the \fBACL\fR such that they are no greater than the group permissions specified by \fBchmod\fR. A file system with an \fBaclmode\fR property of \fBpassthrough\fR indicates that no changes are made to the \fBACL\fR other than creating or updating the necessary \fBACL\fR entries to represent the new mode of the file or directory. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBatime\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The default value is \fBon\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcanmount\fR=\fBon\fR | \fBoff\fR | \fBnoauto\fR\fR -.ad -.sp .6 -.RS 4n -If this property is set to \fBoff\fR, the file system cannot be mounted, and is ignored by \fBzfs mount -a\fR. Setting this property to \fBoff\fR is similar to setting the \fBmountpoint\fR property to \fBnone\fR, except that the dataset still has a normal \fBmountpoint\fR property, which can be inherited. Setting this property to \fBoff\fR allows datasets to be used solely as a mechanism to inherit properties. One example of setting \fBcanmount=\fR\fBoff\fR is to have two datasets with the same \fBmountpoint\fR, so that the children of both datasets appear in the same directory, but might have different inherited characteristics. -.sp -When the \fBnoauto\fR option is set, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the \fBzfs mount -a\fR command or unmounted by the \fBzfs unmount -a\fR command. -.sp +.It Sy defer_destroy +This property is +.Cm on +if the snapshot has been marked for deferred destroy by using the +.Qq Nm Cm destroy -d +command. Otherwise, the property is +.Cm off . +.It Sy mounted +For file systems, indicates whether the file system is currently mounted. This +property can be either +.Cm yes +or +.Cm no . +.It Sy origin +For cloned file systems or volumes, the snapshot from which the clone was +created. See also the +.Sy clones +property. +.It Sy referenced +The amount of data that is accessible by this dataset, which may or may not be +shared with other datasets in the pool. When a snapshot or clone is created, it +initially references the same amount of space as the file system or snapshot it +was created from, since its contents are identical. +.Pp +This property can also be referred to by its shortened column name, +.Sy refer . +.It Sy refcompressratio +The compression ratio achieved for the +.Sy referenced +space of this dataset, expressed as a multiplier. See also the +.Sy compressratio +property. +.It Sy type +The type of dataset: +.Sy filesystem , volume , No or Sy snapshot . +.It Sy used +The amount of space consumed by this dataset and all its descendents. This is +the value that is checked against this dataset's quota and reservation. The +space used does not include this dataset's reservation, but does take into +account the reservations of any descendent datasets. The amount of space that a +dataset consumes from its parent, as well as the amount of space that are freed +if this dataset is recursively destroyed, is the greater of its space used and +its reservation. 
+.Pp +When snapshots (see the +.Qq Sx Snapshots +section) are created, their space is +initially shared between the snapshot and the file system, and possibly with +previous snapshots. As the file system changes, space that was previously +shared becomes unique to the snapshot, and counted in the snapshot's space +used. Additionally, deleting snapshots can increase the amount of space unique +to (and used by) other snapshots. +.Pp +The amount of space used, available, or referenced does not take into account +pending changes. Pending changes are generally accounted for within a few +seconds. Committing a change to a disk using +.Xr fsync 2 +or +.Sy O_SYNC +does not necessarily guarantee that the space usage information is updated +immediately. +.It Sy usedby* +The +.Sy usedby* +properties decompose the +.Sy used +properties into the various reasons that space is used. Specifically, +.Sy used No = +.Sy usedbysnapshots + usedbydataset + usedbychildren + usedbyrefreservation . +These properties are only available for datasets created +with +.Tn ZFS +pool version 13 pools and higher. +.It Sy usedbysnapshots +The amount of space consumed by snapshots of this dataset. In particular, it is +the amount of space that would be freed if all of this dataset's snapshots were +destroyed. Note that this is not simply the sum of the snapshots' +.Sy used +properties because space can be shared by multiple snapshots. +.It Sy usedbydataset +The amount of space used by this dataset itself, which would be freed if the +dataset were destroyed (after first removing any +.Sy refreservation +and destroying any necessary snapshots or descendents). +.It Sy usedbychildren +The amount of space used by children of this dataset, which would be freed if +all the dataset's children were destroyed. +.It Sy usedbyrefreservation +The amount of space used by a +.Sy refreservation +set on this dataset, which would be freed if the +.Sy refreservation +was removed. +.It Sy userused@ Ns Ar user +The amount of space consumed by the specified user in this dataset. Space is +charged to the owner of each file, as displayed by +.Qq Nm ls Fl l . +The amount of space charged is displayed by +.Qq Nm du +and +.Qq Nm ls Fl s . +See the +.Qq Nm Cm userspace +subcommand for more information. +.Pp +Unprivileged users can access only their own space usage. The root user, or a +user who has been granted the +.Sy userused +privilege with +.Qq Nm Cm allow , +can access everyone's usage. +.Pp +The +.Sy userused@ Ns ... +properties are not displayed by +.Qq Nm Cm get all . +The user's name must be appended after the +.Sy @ +symbol, using one of the following forms: +.Bl -bullet -offset 2n +.It +POSIX name (for example, +.Em joe Ns ) +.It +POSIX numeric ID (for example, +.Em 1001 Ns ) +.El +.It Sy userrefs +This property is set to the number of user holds on this snapshot. User holds +are set by using the +.Qq Nm Cm hold +command. +.It Sy groupused@ Ns Ar group +The amount of space consumed by the specified group in this dataset. Space is +charged to the group of each file, as displayed by +.Nm ls Fl l . +See the +.Sy userused@ Ns Ar user +property for more information. +.Pp +Unprivileged users can only access their own groups' space usage. The root +user, or a user who has been granted the +.Sy groupused +privilege with +.Qq Nm Cm allow , +can access all groups' usage. +.It Sy volblocksize Ns = Ns Ar blocksize +For volumes, specifies the block size of the volume. 
The +.Ar blocksize +cannot be changed once the volume has been written, so it should be set at +volume creation time. The default +.Ar blocksize +for volumes is 8 Kbytes. Any +power of 2 from 512 bytes to 128 Kbytes is valid. +.Pp +This property can also be referred to by its shortened column name, +.Sy volblock . +.El +.Pp +The following native properties can be used to change the behavior of a +.Tn ZFS +dataset. +.Bl -tag -width 2n +.It Xo +.Sy aclinherit Ns = Ns Cm discard | +.Cm noallow | +.Cm restricted | +.Cm passthrough | +.Cm passthrough-x +.Xc +Controls how +.Tn ACL +entries are inherited when files and directories are created. A file system +with an +.Sy aclinherit +property of +.Cm discard +does not inherit any +.Tn ACL +entries. A file system with an +.Sy aclinherit +property value of +.Cm noallow +only inherits inheritable +.Tn ACL +entries that specify "deny" permissions. The property value +.Cm restricted +(the default) removes the +.Em write_acl +and +.Em write_owner +permissions when the +.Tn ACL +entry is inherited. A file system with an +.Sy aclinherit +property value of +.Cm passthrough +inherits all inheritable +.Tn ACL +entries without any modifications made to the +.Tn ACL +entries when they are inherited. A file system with an +.Sy aclinherit +property value of +.Cm passthrough-x +has the same meaning as +.Cm passthrough , +except that the +.Em owner@ , group@ , No and Em everyone@ Tn ACE Ns s +inherit the execute permission only if the file creation mode also requests the +execute bit. +.Pp +When the property value is set to +.Cm passthrough , +files are created with a mode determined by the inheritable +.Tn ACE Ns s. +If no inheritable +.Tn ACE Ns s +exist that affect the mode, then the mode is set in accordance to the requested +mode from the application. +.It Sy aclmode Ns = Ns Cm discard | groupmask | passthrough +Controls how an +.Tn ACL +is modified during +.Xr chmod 2 . +A file system with an +.Sy aclmode +property of +.Cm discard +(the default) deletes all +.Tn ACL +entries that do not represent the mode of the file. An +.Sy aclmode +property of +.Cm groupmask +reduces permissions granted in all +.Em ALLOW +entries found in the +.Tn ACL +such that they are no greater than the group permissions specified by +.Xr chmod 2 . +A file system with an +.Sy aclmode +property of +.Cm passthrough +indicates that no changes are made to the +.Tn ACL +other than creating or updating the necessary +.Tn ACL +entries to represent the new mode of the file or directory. +.It Sy atime Ns = Ns Cm on | off +Controls whether the access time for files is updated when they are read. +Turning this property off avoids producing write traffic when reading files and +can result in significant performance gains, though it might confuse mailers +and other similar utilities. The default value is +.Cm on . +.It Sy canmount Ns = Ns Cm on | off | noauto +If this property is set to +.Cm off , +the file system cannot be mounted, and is ignored by +.Qq Nm Cm mount Fl a . +Setting this property to +.Cm off +is similar to setting the +.Sy mountpoint +property to +.Cm none , +except that the dataset still has a normal +.Sy mountpoint +property, which can be inherited. Setting this property to +.Cm off +allows datasets to be used solely as a mechanism to inherit properties. 
One +example of setting +.Sy canmount Ns = Ns Cm off +is to have two datasets with the same +.Sy mountpoint , +so that the children of both datasets appear in the same directory, but might +have different inherited characteristics. +.Pp +When the +.Cm noauto +value is set, a dataset can only be mounted and unmounted explicitly. The +dataset is not mounted automatically when the dataset is created or imported, +nor is it mounted by the +.Qq Nm Cm mount Fl a +command or unmounted by the +.Qq Nm Cm umount Fl a +command. +.Pp This property is not inherited. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBchecksum\fR=\fBon\fR | \fBoff\fR | \fBfletcher2,\fR| \fBfletcher4\fR | \fBsha256\fR\fR -.ad -.sp .6 -.RS 4n -Controls the checksum used to verify data integrity. The default value is \fBon\fR, which automatically selects an appropriate algorithm (currently, \fBfletcher4\fR, but this may change in future releases). The value \fBoff\fR disables integrity checking on user data. Disabling checksums is \fBNOT\fR a recommended practice. -.sp +.It Sy checksum Ns = Ns Cm on | off | fletcher2 | fletcher4 +Controls the checksum used to verify data integrity. The default value is +.Cm on , +which automatically selects an appropriate algorithm (currently, +.Cm fletcher4 , +but this may change in future releases). The value +.Cm off +disables integrity checking on user data. Disabling checksums is +.Em NOT +a recommended practice. +.It Sy compression Ns = Ns Cm on | off | lzjb | gzip | gzip- Ns Ar N | Cm zle +Controls the compression algorithm used for this dataset. The +.CM lzjb +compression algorithm is optimized for performance while providing decent data +compression. Setting compression to +.Cm on +uses the +.Cm lzjb +compression algorithm. The +.Cm gzip +compression algorithm uses the same compression as the +.Xr gzip 1 +command. You can specify the +.Cm gzip +level by using the value +.Cm gzip- Ns Ar N +where +.Ar N +is an integer from 1 (fastest) to 9 (best compression ratio). Currently, +.Cm gzip +is equivalent to +.Cm gzip-6 +(which is also the default for +.Xr gzip 1 Ns ). +The +.Cm zle +compression algorithm compresses runs of zeros. +.Pp +This property can also be referred to by its shortened column name +.Cm compress . Changing this property affects only newly-written data. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcompression\fR=\fBon\fR | \fBoff\fR | \fBlzjb\fR | \fBgzip\fR | \fBgzip-\fR\fIN\fR\fR -.ad -.sp .6 -.RS 4n -Controls the compression algorithm used for this dataset. The \fBlzjb\fR compression algorithm is optimized for performance while providing decent data compression. Setting compression to \fBon\fR uses the \fBlzjb\fR compression algorithm. The \fBgzip\fR compression algorithm uses the same compression as the \fBgzip\fR(1) command. You can specify the \fBgzip\fR level by using the value \fBgzip-\fR\fIN\fR where \fIN\fR is an integer from 1 (fastest) to 9 (best compression ratio). Currently, \fBgzip\fR is equivalent to \fBgzip-6\fR (which is also the default for \fBgzip\fR(1)). -.sp -This property can also be referred to by its shortened column name \fBcompress\fR. Changing this property affects only newly-written data. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcopies\fR=\fB1\fR | \fB2\fR | \fB3\fR\fR -.ad -.sp .6 -.RS 4n -Controls the number of copies of data stored for this dataset. These copies are in addition to any redundancy provided by the pool, for example, mirroring or RAID-Z. The copies are stored on different disks, if possible. 
The space used by multiple copies is charged to the associated file and dataset, changing the \fBused\fR property and counting against quotas and reservations. -.sp -Changing this property only affects newly-written data. Therefore, set this property at file system creation time by using the \fB-o\fR \fBcopies=\fR\fIN\fR option. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBdevices\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether device nodes can be opened on this file system. The default value is \fBon\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBexec\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether processes can be executed from within this file system. The default value is \fBon\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBmountpoint\fR=\fIpath\fR | \fBnone\fR | \fBlegacy\fR\fR -.ad -.sp .6 -.RS 4n -Controls the mount point used for this file system. See the "Mount Points" section for more information on how this property is used. -.sp -When the \fBmountpoint\fR property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is \fBlegacy\fR, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously \fBlegacy\fR or \fBnone\fR, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBnbmand\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the file system should be mounted with \fBnbmand\fR (Non Blocking mandatory locks). This is used for \fBCIFS\fR clients. Changes to this property only take effect when the file system is umounted and remounted. See \fBmount\fR(1M) for more information on \fBnbmand\fR mounts. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBprimarycache\fR=\fBall\fR | \fBnone\fR | \fBmetadata\fR\fR -.ad -.sp .6 -.RS 4n -Controls what is cached in the primary cache (ARC). If this property is set to \fBall\fR, then both user data and metadata is cached. If this property is set to \fBnone\fR, then neither user data nor metadata is cached. If this property is set to \fBmetadata\fR, then only metadata is cached. The default value is \fBall\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBquota\fR=\fIsize\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used. This includes all space consumed by descendents, including file systems and snapshots. Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit. -.sp -Quotas cannot be set on volumes, as the \fBvolsize\fR property acts as an implicit quota. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBuserquota@\fR\fIuser\fR=\fIsize\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -Limits the amount of space consumed by the specified user. User space consumption is identified by the \fBuserspace@\fR\fIuser\fR property. -.sp -Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed their quota before the system notices that they are over quota and begins to refuse additional writes with the \fBEDQUOT\fR error message . See the \fBzfs userspace\fR subcommand for more information. -.sp -Unprivileged users can only access their own groups' space usage. 
The root user, or a user who has been granted the \fBuserquota\fR privilege with \fBzfs allow\fR, can get and set everyone's quota. -.sp -This property is not available on volumes, on file systems before version 4, or on pools before version 15. The \fBuserquota@\fR... properties are not displayed by \fBzfs get all\fR. The user's name must be appended after the \fB@\fR symbol, using one of the following forms: -.RS +4 -.TP -.ie t \(bu -.el o -\fIPOSIX name\fR (for example, \fBjoe\fR) -.RE -.RS +4 -.TP -.ie t \(bu -.el o -\fIPOSIX numeric ID\fR (for example, \fB789\fR) -.RE -.RS +4 -.TP -.ie t \(bu -.el o -\fISID name\fR (for example, \fBjoe.smith@mydomain\fR) -.RE -.RS +4 -.TP -.ie t \(bu -.el o -\fISID numeric ID\fR (for example, \fBS-1-123-456-789\fR) -.RE -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBgroupquota@\fR\fIgroup\fR=\fIsize\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -Limits the amount of space consumed by the specified group. Group space consumption is identified by the \fBuserquota@\fR\fIuser\fR property. -.sp -Unprivileged users can access only their own groups' space usage. The root user, or a user who has been granted the \fBgroupquota\fR privilege with \fBzfs allow\fR, can get and set all groups' quotas. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBreadonly\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether this dataset can be modified. The default value is \fBoff\fR. -.sp -This property can also be referred to by its shortened column name, \fBrdonly\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBrecordsize\fR=\fIsize\fR\fR -.ad -.sp .6 -.RS 4n -Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. \fBZFS\fR automatically tunes block sizes according to internal algorithms optimized for typical access patterns. -.sp -For databases that create very large files but access them in small random chunks, these algorithms may be suboptimal. Specifying a \fBrecordsize\fR greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance. -.sp -The size specified must be a power of two greater than or equal to 512 and less than or equal to 128 Kbytes. -.sp -Changing the file system's \fBrecordsize\fR affects only files created afterward; existing files are unaffected. -.sp -This property can also be referred to by its shortened column name, \fBrecsize\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBrefquota\fR=\fIsize\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -Limits the amount of space a dataset can consume. This property enforces a hard limit on the amount of space used. This hard limit does not include space used by descendents, including file systems and snapshots. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBrefreservation\fR=\fIsize\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -The minimum amount of space guaranteed to a dataset, not including its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by \fBrefreservation\fR. The \fBrefreservation\fR reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations. 
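For illustration only (the pool and dataset names below are hypothetical), these space accounting properties are normally adjusted with the set subcommand:

    # zfs set quota=50G tank/home
    # zfs set userquota@joe=10G tank/home
    # zfs set refquota=20G tank/home/joe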
-.sp -If \fBrefreservation\fR is set, a snapshot is only allowed if there is enough free pool space outside of this reservation to accommodate the current number of "referenced" bytes in the dataset. -.sp -This property can also be referred to by its shortened column name, \fBrefreserv\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBreservation\fR=\fIsize\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -The minimum amount of space guaranteed to a dataset and its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by its reservation. Reservations are accounted for in the parent datasets' space used, and count against the parent datasets' quotas and reservations. -.sp -This property can also be referred to by its shortened column name, \fBreserv\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBsecondarycache\fR=\fBall\fR | \fBnone\fR | \fBmetadata\fR\fR -.ad -.sp .6 -.RS 4n -Controls what is cached in the secondary cache (L2ARC). If this property is set to \fBall\fR, then both user data and metadata is cached. If this property is set to \fBnone\fR, then neither user data nor metadata is cached. If this property is set to \fBmetadata\fR, then only metadata is cached. The default value is \fBall\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBsetuid\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the set-\fBUID\fR bit is respected for the file system. The default value is \fBon\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBshareiscsi\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Like the \fBsharenfs\fR property, \fBshareiscsi\fR indicates whether a \fBZFS\fR volume is exported as an \fBiSCSI\fR target. The acceptable values for this property are \fBon\fR, \fBoff\fR, and \fBtype=disk\fR. The default value is \fBoff\fR. In the future, other target types might be supported. For example, \fBtape\fR. -.sp -You might want to set \fBshareiscsi=on\fR for a file system so that all \fBZFS\fR volumes within the file system are shared by default. However, setting this property on a file system has no direct effect. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBsharesmb\fR=\fBon\fR | \fBoff\fR | \fIopts\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the file system is shared by using the Solaris \fBCIFS\fR service, and what options are to be used. A file system with the \fBsharesmb\fR property set to \fBoff\fR is managed through traditional tools such as \fBsharemgr\fR(1M). Otherwise, the file system is automatically shared and unshared with the \fBzfs share\fR and \fBzfs unshare\fR commands. If the property is set to \fBon\fR, the \fBsharemgr\fR(1M) command is invoked with no options. Otherwise, the \fBsharemgr\fR(1M) command is invoked with options equivalent to the contents of this property. -.sp -Because \fBSMB\fR shares requires a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name, which would be illegal in the resource name, are replaced with underscore (\fB_\fR) characters. A pseudo property "name" is also supported that allows you to replace the data set name with a specified name. The specified name is then used to replace the prefix dataset in the case of inheritance. For example, if the dataset \fBdata/home/john\fR is set to \fBname=john\fR, then \fBdata/home/john\fR has a resource name of \fBjohn\fR. If a child dataset of \fBdata/home/john/backups\fR, it has a resource name of \fBjohn_backups\fR. 
-.sp -When SMB shares are created, the SMB share name appears as an entry in the \fB\&.zfs/shares\fR directory. You can use the \fBls\fR or \fBchmod\fR command to display the share-level ACLs on the entries in this directory. -.sp -When the \fBsharesmb\fR property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously set to \fBoff\fR, or if they were shared before the property was changed. If the new property is set to \fBoff\fR, the file systems are unshared. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBsharenfs\fR=\fBon\fR | \fBoff\fR | \fIopts\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the file system is shared via \fBNFS\fR, and what options are used. A file system with a \fBsharenfs\fR property of \fBoff\fR is managed through traditional tools such as \fBshare\fR(1M), \fBunshare\fR(1M), and \fBdfstab\fR(4). Otherwise, the file system is automatically shared and unshared with the \fBzfs share\fR and \fBzfs unshare\fR commands. If the property is set to \fBon\fR, the \fBshare\fR(1M) command is invoked with no options. Otherwise, the \fBshare\fR(1M) command is invoked with options equivalent to the contents of this property. -.sp -When the \fBsharenfs\fR property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously \fBoff\fR, or if they were shared before the property was changed. If the new property is \fBoff\fR, the file systems are unshared. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBlogbias\fR = \fBlatency\fR | \fBthroughput\fR\fR -.ad -.sp .6 -.RS 4n -Provide a hint to ZFS about handling of synchronous requests in this dataset. If \fBlogbias\fR is set to \fBlatency\fR (the default), ZFS will use pool log devices (if configured) to handle the requests at low latency. If \fBlogbias\fR is set to \fBthroughput\fR, ZFS will not use configured pool log devices. ZFS will instead optimize synchronous operations for global pool throughput and efficient use of resources. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBsnapdir\fR=\fBhidden\fR | \fBvisible\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the \fB\&.zfs\fR directory is hidden or visible in the root of the file system as discussed in the "Snapshots" section. The default value is \fBhidden\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBversion\fR=\fB1\fR | \fB2\fR | \fBcurrent\fR\fR -.ad -.sp .6 -.RS 4n -The on-disk version of this file system, which is independent of the pool version. This property can only be set to later supported versions. See the \fBzfs upgrade\fR command. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBvolsize\fR=\fIsize\fR\fR -.ad -.sp .6 -.RS 4n -For volumes, specifies the logical size of the volume. By default, creating a volume establishes a reservation of equal size. For storage pools with a version number of 9 or higher, a \fBrefreservation\fR is set instead. Any changes to \fBvolsize\fR are reflected in an equivalent change to the reservation (or \fBrefreservation\fR). The \fBvolsize\fR can only be set to a multiple of \fBvolblocksize\fR, and cannot be zero. -.sp -The reservation is kept equal to the volume's logical size to prevent unexpected behavior for consumers. Without the reservation, the volume could run out of space, resulting in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while it is in use (particularly when shrinking the size). 
Extreme care should be used when adjusting the volume size. -.sp -Though not recommended, a "sparse volume" (also known as "thin provisioning") can be created by specifying the \fB-s\fR option to the \fBzfs create -V\fR command, or by changing the reservation after the volume has been created. A "sparse volume" is a volume where the reservation is less then the volume size. Consequently, writes to a sparse volume can fail with \fBENOSPC\fR when the pool is low on space. For a sparse volume, changes to \fBvolsize\fR are not reflected in the reservation. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBvscan\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether regular files should be scanned for viruses when a file is opened and closed. In addition to enabling this property, the virus scan service must also be enabled for virus scanning to occur. The default value is \fBoff\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBxattr\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether extended attributes are enabled for this file system. The default value is \fBon\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzoned\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the dataset is managed from a non-global zone. See the "Zones" section for more information. The default value is \fBoff\fR. -.RE - -.sp -.LP -The following three properties cannot be changed after the file system is created, and therefore, should be set when the file system is created. If the properties are not set with the \fBzfs create\fR or \fBzpool create\fR commands, these properties are inherited from the parent dataset. If the parent dataset lacks these properties due to having been created prior to these features being supported, the new file system will have the default values for these properties. -.sp -.ne 2 -.mk -.na -\fB\fBcasesensitivity\fR=\fBsensitive\fR | \fBinsensitive\fR | \fBmixed\fR\fR -.ad -.sp .6 -.RS 4n -Indicates whether the file name matching algorithm used by the file system should be case-sensitive, case-insensitive, or allow a combination of both styles of matching. The default value for the \fBcasesensitivity\fR property is \fBsensitive\fR. Traditionally, UNIX and POSIX file systems have case-sensitive file names. -.sp -The \fBmixed\fR value for the \fBcasesensitivity\fR property indicates that the file system can support requests for both case-sensitive and case-insensitive matching behavior. Currently, case-insensitive matching behavior on a file system that supports mixed behavior is limited to the Solaris CIFS server product. For more information about the \fBmixed\fR value behavior, see the \fISolaris ZFS Administration Guide\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBnormalization\fR = \fBnone\fR | \fBformC\fR | \fBformD\fR | \fBformKC\fR | \fBformKD\fR\fR -.ad -.sp .6 -.RS 4n -Indicates whether the file system should perform a \fBunicode\fR normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified, names are normalized as part of any comparison process. If this property is set to a legal value other than \fBnone\fR, and the \fButf8only\fR property was left unspecified, the \fButf8only\fR property is automatically set to \fBon\fR. The default value of the \fBnormalization\fR property is \fBnone\fR. This property cannot be changed after the file system is created. 
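As a brief sketch (the dataset name is invented for this example), such creation-time properties are supplied when the file system is created:

    # zfs create -o normalization=formD -o utf8only=on tank/export/shared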
-.RE - -.sp -.ne 2 -.mk -.na -\fBjailed =\fIon\fR | \fIoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether the dataset is managed from within a jail. The default value is "off". -.RE - -.sp -.ne 2 -.mk -.na -\fB\fButf8only\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Indicates whether the file system should reject file names that include characters that are not present in the \fBUTF-8\fR character code set. If this property is explicitly set to \fBoff\fR, the normalization property must either not be explicitly set or be set to \fBnone\fR. The default value for the \fButf8only\fR property is \fBoff\fR. This property cannot be changed after the file system is created. -.RE - -.sp -.LP -The \fBcasesensitivity\fR, \fBnormalization\fR, and \fButf8only\fR properties are also new permissions that can be assigned to non-privileged users by using the \fBZFS\fR delegated administration feature. -.SS "Temporary Mount Point Properties" -.sp -.LP -When a file system is mounted, either through \fBmount\fR(1M) for legacy mounts or the \fBzfs mount\fR command for normal file systems, its mount options are set according to its properties. The correlation between properties and mount options is as follows: -.sp -.in +2 -.nf - PROPERTY MOUNT OPTION - devices devices/nodevices - exec exec/noexec - readonly ro/rw - setuid setuid/nosetuid - xattr xattr/noxattr -.fi -.in -2 -.sp - -.sp -.LP -In addition, these options can be set on a per-mount basis using the \fB-o\fR option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The \fB-nosuid\fR option is an alias for \fBnodevices,nosetuid\fR. These properties are reported as "temporary" by the \fBzfs get\fR command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings. -.SS "User Properties" -.sp -.LP -In addition to the standard native properties, \fBZFS\fR supports arbitrary user properties. User properties have no effect on \fBZFS\fR behavior, but applications or administrators can use them to annotate datasets (file systems, volumes, and snapshots). -.sp -.LP -User property names must contain a colon (\fB:\fR) character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (\fB:\fR), dash (\fB-\fR), period (\fB\&.\fR), and underscore (\fB_\fR). The expected convention is that the property name is divided into two portions such as \fImodule\fR\fB:\fR\fIproperty\fR, but this namespace is not enforced by \fBZFS\fR. User property names can be at most 256 characters, and cannot begin with a dash (\fB-\fR). -.sp -.LP -When making programmatic use of user properties, it is strongly suggested to use a reversed \fBDNS\fR domain name for the \fImodule\fR component of property names to reduce the chance that two independently-developed packages use the same property name for different purposes. Property names beginning with \fBcom.sun\fR. are reserved for use by Sun Microsystems. -.sp -.LP -The values of user properties are arbitrary strings, are always inherited, and are never validated. All of the commands that operate on properties (\fBzfs list\fR, \fBzfs get\fR, \fBzfs set\fR, and so forth) can be used to manipulate both native properties and user properties. Use the \fBzfs inherit\fR command to clear a user property . If the property is not defined in any parent dataset, it is removed entirely. 
Property values are limited to 1024 characters.
-.SS "ZFS Volumes as Swap or Dump Devices"
-.sp
-.LP
-During an initial installation or a live upgrade from a \fBUFS\fR file system, a swap device and dump device are created on \fBZFS\fR volumes in the \fBZFS\fR root pool. By default, the swap area size is based on 1/2 the size of physical memory up to 2 Gbytes. The size of the dump device depends on the kernel's requirements at installation time. Separate \fBZFS\fR volumes must be used for the swap area and dump devices. Do not swap to a file on a \fBZFS\fR file system. A \fBZFS\fR swap file configuration is not supported.
-.sp
-.LP
-If you need to change your swap area or dump device after the system is installed or upgraded, use the \fBswap\fR(1M) and \fBdumpadm\fR(1M) commands. If you need to change the size of your swap area or dump device, see the \fISolaris ZFS Administration Guide\fR.
-.SH SUBCOMMANDS
-.sp
-.LP
-All subcommands that modify state are logged persistently to the pool in their original form.
-.sp
-.ne 2
-.mk
-.na
-\fB\fBzfs ?\fR\fR
-.ad
-.sp .6
-.RS 4n
+.It Sy copies Ns = Ns Cm 1 | 2 | 3
+Controls the number of copies of data stored for this dataset. These copies are
+in addition to any redundancy provided by the pool, for example, mirroring or
+RAID-Z. The copies are stored on different disks, if possible. The space used
+by multiple copies is charged to the associated file and dataset, changing the
+.Sy used
+property and counting against quotas and reservations.
+.Pp
+Changing this property only affects newly-written data. Therefore, set this
+property at file system creation time by using the
+.Fl o Cm copies= Ns Ar N
+option.
+.It Sy dedup Ns = Ns Cm on | off | verify | sha256 Ns Op Cm ,verify
+Configures deduplication for a dataset. The default value is
+.Cm off .
+The default deduplication checksum is
+.Cm sha256
+(this may change in the future).
+When
+.Sy dedup
+is enabled, the checksum defined here overrides the
+.Sy checksum
+property. Setting the value to
+.Cm verify
+has the same effect as the setting
+.Cm sha256,verify .
+.Pp
+If set to
+.Cm verify ,
+.Tn ZFS
+will do a byte-to-byte comparison in case of two blocks having the same
+signature to make sure the block contents are identical.
+.It Sy devices Ns = Ns Cm on | off
+The
+.Sy devices
+property is currently not supported on
+.Fx .
+.It Sy exec Ns = Ns Cm on | off
+Controls whether processes can be executed from within this file system. The
+default value is
+.Cm on .
+.It Sy mlslabel Ns = Ns Ar label | Cm none
+The
+.Sy mlslabel
+property is currently not supported on
+.Fx .
+.It Sy mountpoint Ns = Ns Ar path | Cm none | legacy
+Controls the mount point used for this file system. See the
+.Qq Sx Mount Points
+section for more information on how this property is used.
+.Pp
+When the
+.Sy mountpoint
+property is changed for a file system, the file system and any children that
+inherit the mount point are unmounted. If the new value is
+.Cm legacy ,
+then they remain unmounted. Otherwise, they are automatically remounted in the
+new location if the property was previously
+.Cm legacy
+or
+.Cm none ,
+or if they were mounted before the property was changed. In addition, any
+shared file systems are unshared and shared in the new location.
+.It Sy nbmand Ns = Ns Cm on | off
+The
+.Sy nbmand
+property is currently not supported on
+.Fx .
+.It Sy primarycache Ns = Ns Cm all | none | metadata
+Controls what is cached in the primary cache (ARC).
If this property is set to +.Cm all , +then both user data and metadata is cached. If this property is set to +.Cm none , +then neither user data nor metadata is cached. If this property is set to +.Cm metadata , +then only metadata is cached. The default value is +.Cm all . +.It Sy quota Ns = Ns Ar size | Cm none +Limits the amount of space a dataset and its descendents can consume. This +property enforces a hard limit on the amount of space used. This includes all +space consumed by descendents, including file systems and snapshots. Setting a +quota on a descendent of a dataset that already has a quota does not override +the ancestor's quota, but rather imposes an additional limit. +.Pp +Quotas cannot be set on volumes, as the +.Sy volsize +property acts as an implicit quota. +.It Sy userquota@ Ns Ar user Ns = Ns Ar size | Cm none +Limits the amount of space consumed by the specified user. +Similar to the +.Sy refquota +property, the +.Sy userquota +space calculation does not include space that is used by descendent datasets, +such as snapshots and clones. User space consumption is identified by the +.Sy userspace@ Ns Ar user +property. +.sp +Enforcement of user quotas may be delayed by several seconds. This delay means +that a user might exceed their quota before the system notices that they are +over quota and begins to refuse additional writes with the +.Em EDQUOT +error message. See the +.Cm userspace +subcommand for more information. +.sp +Unprivileged users can only access their own groups' space usage. The root +user, or a user who has been granted the +.Sy userquota +privilege with +.Qq Nm Cm allow , +can get and set everyone's quota. +.sp +This property is not available on volumes, on file systems before version 4, or +on pools before version 15. The +.Sy userquota@ Ns ... +properties are not displayed by +.Qq Nm Cm get all . +The user's name must be appended after the +.Sy @ +symbol, using one of the following forms: +.Bl -bullet -offset 2n +.It +POSIX name (for example, +.Em joe Ns ) +.It +POSIX numeric ID (for example, +.Em 1001 Ns ) +.El +.It Sy groupquota@ Ns Ar group Ns = Ns Ar size | Cm none +Limits the amount of space consumed by the specified group. Group space +consumption is identified by the +.Sy userquota@ Ns Ar user +property. +.sp +Unprivileged users can access only their own groups' space usage. The root +user, or a user who has been granted the +.Sy groupquota +privilege with +.Qq Nm Cm allow , +can get and set all groups' quotas. +.It Sy readonly Ns = Ns Cm on | off +Controls whether this dataset can be modified. The default value is +.Cm off . +.It Sy recordsize Ns = Ns Ar size +Specifies a suggested block size for files in the file system. This property is +designed solely for use with database workloads that access files in fixed-size +records. +.Tn ZFS +automatically tunes block sizes according to internal algorithms optimized for +typical access patterns. +.Pp +For databases that create very large files but access them in small random +chunks, these algorithms may be suboptimal. Specifying a +.Sy recordsize +greater than or equal to the record size of the database can result in +significant performance gains. Use of this property for general purpose file +systems is strongly discouraged, and may adversely affect performance. +.Pp +The size specified must be a power of two greater than or equal to 512 and less +than or equal to 128 Kbytes. +.Pp +Changing the file system's +.Sy recordsize +affects only files created afterward; existing files are unaffected. 
+.sp
+This property can also be referred to by its shortened column name,
+.Sy recsize .
+.It Sy refquota Ns = Ns Ar size | Cm none
+Limits the amount of space a dataset can consume. This property enforces a hard
+limit on the amount of space used. This hard limit does not include space used
+by descendents, including file systems and snapshots.
+.It Sy refreservation Ns = Ns Ar size | Cm none
+The minimum amount of space guaranteed to a dataset, not including its
+descendents. When the amount of space used is below this value, the dataset is
+treated as if it were taking up the amount of space specified by
+.Sy refreservation .
+The
+.Sy refreservation
+reservation is accounted for in the parent datasets' space used, and counts
+against the parent datasets' quotas and reservations.
+.sp
+If
+.Sy refreservation
+is set, a snapshot is only allowed if there is enough free pool space outside
+of this reservation to accommodate the current number of "referenced" bytes in
+the dataset.
+.sp
+This property can also be referred to by its shortened column name,
+.Sy refreserv .
+.It Sy reservation Ns = Ns Ar size | Cm none
+The minimum amount of space guaranteed to a dataset and its descendents. When
+the amount of space used is below this value, the dataset is treated as if it
+were taking up the amount of space specified by its reservation. Reservations
+are accounted for in the parent datasets' space used, and count against the
+parent datasets' quotas and reservations.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy reserv .
+.It Sy secondarycache Ns = Ns Cm all | none | metadata
+Controls what is cached in the secondary cache (L2ARC). If this property is set
+to
+.Cm all ,
+then both user data and metadata is cached. If this property is set to
+.Cm none ,
+then neither user data nor metadata is cached. If this property is set to
+.Cm metadata ,
+then only metadata is cached. The default value is
+.Cm all .
+.It Sy setuid Ns = Ns Cm on | off
+Controls whether the
+.No set- Ns Tn UID
+bit is respected for the file system. The default value is
+.Cm on .
+.It Sy sharesmb Ns = Ns Cm on | off | Ar opts
+The
+.Sy sharesmb
+property currently has no effect on
+.Fx .
+.It Sy sharenfs Ns = Ns Cm on | off | Ar opts
+Controls whether the file system is shared via
+.Tn NFS ,
+and what options are used. A file system with a
+.Sy sharenfs
+property of
+.Cm off
+is managed the traditional way via
+.Xr exports 5 .
+Otherwise, the file system is automatically shared and unshared with the
+.Qq Nm Cm share
+and
+.Qq Nm Cm unshare
+commands. If the property is set to
+.Cm on
+no
+.Tn NFS
+export options are used. Otherwise,
+.Tn NFS
+export options are equivalent to the contents of this property. The export
+options may be comma-separated. See
+.Xr exports 5
+for a list of valid options.
+.Pp
+When the
+.Sy sharenfs
+property is changed for a dataset, the
+.Xr mountd 8
+daemon is reloaded.
+.It Sy logbias Ns = Ns Cm latency | throughput
+Provide a hint to
+.Tn ZFS
+about handling of synchronous requests in this dataset.
+If
+.Sy logbias
+is set to
+.Cm latency
+(the default),
+.Tn ZFS
+will use pool log devices (if configured) to handle the requests at low
+latency. If
+.Sy logbias
+is set to
+.Cm throughput ,
+.Tn ZFS
+will not use configured pool log devices.
+.Tn ZFS
+will instead optimize synchronous operations for global pool throughput and
+efficient use of resources.
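For example (the dataset names here are hypothetical), both of the properties just described are changed with the set subcommand:

    # zfs set sharenfs=on tank/export
    # zfs set logbias=throughput tank/db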
+.It Sy snapdir Ns = Ns Cm hidden | visible
+Controls whether the
+.Pa \&.zfs
+directory is hidden or visible in the root of the file system as discussed in
+the
+.Qq Sx Snapshots
+section. The default value is
+.Cm hidden .
+.It Sy sync Ns = Ns Cm standard | always | disabled
+Controls the behavior of synchronous requests (e.g.
+.Xr fsync 2 ,
+O_DSYNC). This property accepts the following values:
+.Bl -tag -offset 4n -width 8n
+.It Sy standard
+This is the POSIX specified behavior of ensuring all synchronous requests are
+written to stable storage and all devices are flushed to ensure data is not
+cached by device controllers (this is the default).
+.It Sy always
+All file system transactions are written and flushed before their system calls
+return. This has a large performance penalty.
+.It Sy disabled
+Disables synchronous requests. File system transactions are only committed to
+stable storage periodically. This option will give the highest performance.
+However, it is very dangerous as
+.Tn ZFS
+would be ignoring the synchronous transaction demands of applications such as
+databases or
+.Tn NFS .
+Administrators should only use this option when the risks are understood.
+.El
+.It Sy volsize Ns = Ns Ar size
+For volumes, specifies the logical size of the volume. By default, creating a
+volume establishes a reservation of equal size. For storage pools with a
+version number of 9 or higher, a
+.Sy refreservation
+is set instead. Any changes to
+.Sy volsize
+are reflected in an equivalent change to the reservation (or
+.Sy refreservation Ns ).
+The
+.Sy volsize
+can only be set to a multiple of
+.Cm volblocksize ,
+and cannot be zero.
+.Pp
+The reservation is kept equal to the volume's logical size to prevent
+unexpected behavior for consumers. Without the reservation, the volume could
+run out of space, resulting in undefined behavior or data corruption, depending
+on how the volume is used. These effects can also occur when the volume size is
+changed while it is in use (particularly when shrinking the size). Extreme care
+should be used when adjusting the volume size.
+.sp
+Though not recommended, a "sparse volume" (also known as "thin provisioning")
+can be created by specifying the
+.Fl s
+option to the
+.Qq Nm Cm create Fl V
+command, or by changing the reservation after the volume has been created. A
+"sparse volume" is a volume where the reservation is less than the volume size.
+Consequently, writes to a sparse volume can fail with
+.Sy ENOSPC
+when the pool is low on space. For a sparse volume, changes to
+.Sy volsize
+are not reflected in the reservation.
+.It Sy vscan Ns = Ns Cm off | on
+The
+.Sy vscan
+property is currently not supported on
+.Fx .
+.It Sy xattr Ns = Ns Cm off | on
+The
+.Sy xattr
+property is currently not supported on
+.Fx .
+.It Sy jailed Ns = Ns Cm off | on
+Controls whether the dataset is managed from a jail. See the
+.Qq Sx Jails
+section for more information. The default value is
+.Cm off .
+.El
+.Pp
+The following three properties cannot be changed after the file system is
+created, and therefore, should be set when the file system is created. If the
+properties are not set with the
+.Qq Nm Cm create
+or
+.Nm zpool Cm create
+commands, these properties are inherited from the parent dataset. If the parent
+dataset lacks these properties due to having been created prior to these
+features being supported, the new file system will have the default values for
+these properties.
+.Bl -tag -width 4n +.It Sy casesensitivity Ns = Ns Cm sensitive | insensitive | mixed +The +.Sy casesensitivity +property is currently not supported on +.Fx . +.It Sy normalization Ns = Ns Cm none | formC | formD | formKC | formKD +Indicates whether the file system should perform a +.Sy unicode +normalization of file names whenever two file names are compared, and which +normalization algorithm should be used. File names are always stored +unmodified, names are normalized as part of any comparison process. If this +property is set to a legal value other than +.Cm none , +and the +.Sy utf8only +property was left unspecified, the +.Sy utf8only +property is automatically set to +.Cm on . +The default value of the +.Sy normalization +property is +.Cm none . +This property cannot be changed after the file system is created. +.It Sy utf8only Ns = Ns Cm on | off +Indicates whether the file system should reject file names that include +characters that are not present in the +.Sy UTF-8 +character code set. If this property is explicitly set to +.Cm off , +the normalization property must either not be explicitly set or be set to +.Cm none . +The default value for the +.Sy utf8only +property is +.Cm off . +This property cannot be changed after the file system is created. +.El +.Pp +The +.Sy casesensitivity , normalization , No and Sy utf8only +properties are also new permissions that can be assigned to non-privileged +users by using the +.Tn ZFS +delegated administration feature. +.Ss Temporary Mount Point Properties +When a file system is mounted, either through +.Xr mount 8 +for legacy mounts or the +.Qq Nm Cm mount +command for normal file systems, its mount options are set according to its +properties. The correlation between properties and mount options is as follows: +.Bl -column -offset 4n "PROPERTY" "MOUNT OPTION" +.It PROPERTY MOUNT OPTION +.It atime atime/noatime +.It exec exec/noexec +.It readonly ro/rw +.It setuid suid/nosuid +.El +.Pp +In addition, these options can be set on a per-mount basis using the +.Fl o +option, without affecting the property that is stored on disk. The values +specified on the command line override the values stored in the dataset. These +properties are reported as "temporary" by the +.Qq Nm Cm get +command. If the properties are changed while the dataset is mounted, the new +setting overrides any temporary settings. +.Ss User Properties +In addition to the standard native properties, +.Tn ZFS +supports arbitrary user properties. User properties have no effect on +.Tn ZFS +behavior, but applications or administrators can use them to annotate datasets +(file systems, volumes, and snapshots). +.Pp +User property names must contain a colon +.Pq Sy \&: +character to distinguish them from native properties. They may contain +lowercase letters, numbers, and the following punctuation characters: colon +.Pq Sy \&: , +dash +.Pq Sy \&- , +period +.Pq Sy \&. +and underscore +.Pq Sy \&_ . +The expected convention is that the property name is divided into two portions +such as +.Em module Ns Sy \&: Ns Em property , +but this namespace is not enforced by +.Tn ZFS . +User property names can be at most 256 characters, and cannot begin with a dash +.Pq Sy \&- . +.Pp +When making programmatic use of user properties, it is strongly suggested to +use a reversed +.Tn DNS +domain name for the +.Ar module +component of property names to reduce the chance that two +independently-developed packages use the same property name for different +purposes. 
Property names beginning with +.Em com.sun +are reserved for use by Sun Microsystems. +.Pp +The values of user properties are arbitrary strings, are always inherited, and +are never validated. All of the commands that operate on properties +.Po +.Qq Nm Cm list , +.Qq Nm Cm get , +.Qq Nm Cm set +and so forth +.Pc +can be used to manipulate both native properties and user properties. Use the +.Qq Nm Cm inherit +command to clear a user property. If the property is not defined in any parent +dataset, it is removed entirely. Property values are limited to 1024 +characters. +.Sh SUBCOMMANDS +All subcommands that modify state are logged persistently to the pool in their +original form. +.Bl -tag -width 2n +.It Xo +.Nm +.Op Fl \&? +.Xc +.Pp Displays a help message. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs create\fR [\fB-p\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... \fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Creates a new \fBZFS\fR file system. The file system is automatically mounted according to the \fBmountpoint\fR property inherited from the parent. -.sp -.ne 2 -.mk -.na -\fB\fB-p\fR\fR -.ad -.sp .6 -.RS 4n -Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the \fBmountpoint\fR property inherited from their parent. Any property specified on the command line using the \fB-o\fR option is ignored. If the target filesystem already exists, the operation completes successfully. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty\fR=\fIvalue\fR\fR -.ad -.sp .6 -.RS 4n -Sets the specified property as if the command \fBzfs set\fR \fIproperty\fR=\fIvalue\fR was invoked at the same time the dataset was created. Any editable \fBZFS\fR property can also be set at creation time. Multiple \fB-o\fR options can be specified. An error results if the same property is specified in multiple \fB-o\fR options. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs create\fR [\fB-ps\fR] [\fB-b\fR \fIblocksize\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... \fB-V\fR \fIsize\fR \fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Creates a volume of the given size. The volume is exported as a block device in \fB/dev/zvol/{dsk,rdsk}/\fR\fIpath\fR, where \fIpath\fR is the name of the volume in the \fBZFS\fR namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created. -.sp -\fIsize\fR is automatically rounded up to the nearest 128 Kbytes to ensure that the volume has an integral number of blocks regardless of \fIblocksize\fR. -.sp -.ne 2 -.mk -.na -\fB\fB-p\fR\fR -.ad -.sp .6 -.RS 4n -Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the \fBmountpoint\fR property inherited from their parent. Any property specified on the command line using the \fB-o\fR option is ignored. If the target filesystem already exists, the operation completes successfully. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-s\fR\fR -.ad -.sp .6 -.RS 4n -Creates a sparse volume with no reservation. See \fBvolsize\fR in the Native Properties section for more information about sparse volumes. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty\fR=\fIvalue\fR\fR -.ad -.sp .6 -.RS 4n -Sets the specified property as if the \fBzfs set\fR \fIproperty\fR=\fIvalue\fR command was invoked at the same time the dataset was created. Any editable \fBZFS\fR property can also be set at creation time. Multiple \fB-o\fR options can be specified. 
An error results if the same property is specified in multiple \fB-o\fR options. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-b\fR \fIblocksize\fR\fR -.ad -.sp .6 -.RS 4n -Equivalent to \fB-o\fR \fBvolblocksize\fR=\fIblocksize\fR. If this option is specified in conjunction with \fB-o\fR \fBvolblocksize\fR, the resulting behavior is undefined. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs destroy\fR [\fB-rRf\fR] \fIfilesystem\fR|\fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Destroys the given dataset. By default, the command unshares any file systems that are currently shared, unmounts any file systems that are currently mounted, and refuses to destroy a dataset that has active dependents (children or clones). -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n +.It Xo +.Nm +.Cm create +.Op Fl p +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... filesystem +.Xc +.Pp +Creates a new +.Tn ZFS +file system. The file system is automatically mounted according to the +.Sy mountpoint +property inherited from the parent. +.Bl -tag -width indent +.It Fl p +Creates all the non-existing parent datasets. Datasets created in this manner +are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. Any property specified on the command +line using the +.Fl o +option is ignored. If the target filesystem already exists, the operation +completes successfully. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property as if the command +.Qq Nm Cm set Ar property Ns = Ns Ar value +was invoked at the same time the dataset was created. Any editable +.Tn ZFS +property can also be set at creation time. Multiple +.Fl o +options can be specified. An error results if the same property is specified in +multiple +.Fl o +options. +.El +.It Xo +.Nm +.Cm create +.Op Fl ps +.Op Fl b Ar blocksize +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Fl V +.Ar size volume +.Xc +.Pp +Creates a volume of the given size. The volume is exported as a block device in +.Pa /dev/zvol/path , +where +.Ar path +is the name of the volume in the +.Tn ZFS +namespace. The size represents the logical size as exported by the device. By +default, a reservation of equal size is created. +.Pp +.Ar size +is automatically rounded up to the nearest 128 Kbytes to ensure that +the volume has an integral number of blocks regardless of +.Ar blocksize . +.Bl -tag -width indent +.It Fl p +Creates all the non-existing parent datasets. Datasets created in this manner +are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. Any property specified on the command +line using the +.Fl o +option is ignored. If the target filesystem already exists, the operation +completes successfully. +.It Fl s +Creates a sparse volume with no reservation. See +.Sy volsize +in the +.Qq Sx Native Properties +section for more information about sparse volumes. +.It Fl b Ar blocksize +Equivalent to +.Fl o Cm volblocksize Ns = Ns Ar blocksize . +If this option is specified in conjunction with +.Fl o Cm volblocksize , +the resulting behavior is undefined. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property as if the +.Qq Nm Cm set Ar property Ns = Ns Ar value +command was invoked at the same time the dataset was created. Any editable +.Tn ZFS +property can also be set at creation time. Multiple +.Fl o +options can be specified. An error results if the same property is specified in +multiple +.Fl o +options. 
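A short illustration of the volume creation forms described above (pool and volume names are hypothetical); a regular and a sparse volume might be created as follows:

    # zfs create -V 4G tank/vols/vol0
    # zfs create -s -b 64K -V 100G tank/vols/sparse0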
+.El +.It Xo +.Nm +.Cm destroy +.Op Fl rRf +.Ar filesystem Ns | Ns Ar volume +.Xc +.Pp +Destroys the given dataset. By default, the command unshares any file systems +that are currently shared, unmounts any file systems that are currently +mounted, and refuses to destroy a dataset that has active dependents (children +or clones). +.Bl -tag -width indent +.It Fl r Recursively destroy all children. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR\fR -.ad -.sp .6 -.RS 4n -Recursively destroy all dependents, including cloned file systems outside the target hierarchy. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.sp .6 -.RS 4n -Force an unmount of any file systems using the \fBunmount -f\fR command. This option has no effect on non-file systems or unmounted file systems. -.RE - -Extreme care should be taken when applying either the \fB-r\fR or the \fB-R\fR options, as they can destroy large portions of a pool and cause unexpected behavior for mounted file systems in use. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs destroy\fR [\fB-rRd\fR] \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -The given snapshot is destroyed immediately if and only if the \fBzfs destroy\fR command without the \fB-d\fR option would have destroyed it. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero. -.sp -If the snapshot does not qualify for immediate destruction, it is marked for deferred deletion. In this state, it exists as a usable, visible snapshot until both of the preconditions listed above are met, at which point it is destroyed. -.sp -.ne 2 -.mk -.na -\fB\fB-d\fR\fR -.ad -.sp .6 -.RS 4n -Defer snapshot deletion. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Destroy (or mark for deferred deletion) all snapshots with this name in descendent file systems. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl R +Recursively destroy all dependents, including cloned file systems outside the +target hierarchy. +.It Fl f +Force an unmount of any file systems using the +.Qq Nm Cm unmount Fl f +command. This option has no effect on non-file systems or unmounted file +systems. +.El +.Pp +Extreme care should be taken when applying either the +.Fl r +or the +.Fl R +options, as they can destroy large portions of a pool and cause unexpected +behavior for mounted file systems in use. +.It Xo +.Nm +.Cm destroy +.Op Fl rRd +.Ar snapshot +.Xc +.Pp +The given snapshot is destroyed immediately if and only if the +.Qq Nm Cm destroy +command without the +.Fl d +option would have destroyed it. Such immediate destruction would occur, for +example, if the snapshot had no clones and the user-initiated reference count +were zero. +.Pp +If the snapshot does not qualify for immediate destruction, it is marked for +deferred deletion. In this state, it exists as a usable, visible snapshot until +both of the preconditions listed above are met, at which point it is destroyed. +.Bl -tag -width indent +.It Fl r +Destroy (or mark for deferred deletion) all snapshots with this name in +descendent file systems. +.It Fl R Recursively destroy all dependents. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs snapshot\fR [\fB-r\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... \fIfilesystem@snapname\fR|\fIvolume@snapname\fR\fR -.ad -.sp .6 -.RS 4n -Creates a snapshot with the given name. All previous modifications by successful system calls to the file system are part of the snapshot. See the "Snapshots" section for details. 
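For instance (dataset and snapshot names are hypothetical; note the warning above about the -r and -R flags):

    # zfs destroy tank/tmp/scratch
    # zfs destroy -r tank/home/old
    # zfs destroy -d tank/home@monday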
-.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Recursively create snapshots of all descendent datasets. Snapshots are taken atomically, so that all recursive snapshots correspond to the same moment in time. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty\fR=\fIvalue\fR\fR -.ad -.sp .6 -.RS 4n -Sets the specified property; see \fBzfs create\fR for details. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs rollback\fR [\fB-rRf\fR] \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Roll back the given dataset to a previous snapshot. When a dataset is rolled back, all data that has changed since the snapshot is discarded, and the dataset reverts to the state at the time of the snapshot. By default, the command refuses to roll back to a snapshot other than the most recent one. In order to do so, all intermediate snapshots must be destroyed by specifying the \fB-r\fR option. -.sp -The \fB-rR\fR options do not recursively destroy the child snapshots of a recursive snapshot. Only the top-level recursive snapshot is destroyed by either of these options. To completely roll back a recursive snapshot, you must rollback the individual child snapshots. -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl d +Defer snapshot deletion. +.El +.Pp +Extreme care should be taken when applying either the +.Fl r +or the +.Fl R +options, as they can destroy large portions of a pool and cause unexpected +behavior for mounted file systems in use. +.It Xo +.Nm +.Cm snapshot +.Op Fl r +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Ar filesystem@snapname Ns | Ns volume@snapname +.Xc +.Pp +Creates a snapshot with the given name. All previous modifications by +successful system calls to the file system are part of the snapshot. See the +.Qq Sx Snapshots +section for details. +.Bl -tag -width indent +.It Fl r +Recursively create snapshots of all descendent datasets. Snapshots are taken +atomically, so that all recursive snapshots correspond to the same moment in +time. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property; see +.Qq Nm Cm create +for details. +.El +.It Xo +.Nm +.Cm rollback +.Op Fl rRf +.Ar snapshot +.Xc +.Pp +Roll back the given dataset to a previous snapshot. When a dataset is rolled +back, all data that has changed since the snapshot is discarded, and the +dataset reverts to the state at the time of the snapshot. By default, the +command refuses to roll back to a snapshot other than the most recent one. In +order to do so, all intermediate snapshots must be destroyed by specifying the +.Fl r +option. +.Bl -tag -width indent +.It Fl r Recursively destroy any snapshots more recent than the one specified. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR\fR -.ad -.sp .6 -.RS 4n -Recursively destroy any more recent snapshots, as well as any clones of those snapshots. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.sp .6 -.RS 4n -Used with the \fB-R\fR option to force an unmount of any clone file systems that are to be destroyed. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs clone\fR [\fB-p\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... \fIsnapshot\fR \fIfilesystem\fR|\fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Creates a clone of the given snapshot. See the "Clones" section for details. The target dataset can be located anywhere in the \fBZFS\fR hierarchy, and is created as the same type as the original. -.sp -.ne 2 -.mk -.na -\fB\fB-p\fR\fR -.ad -.sp .6 -.RS 4n -Creates all the non-existing parent datasets. 
Datasets created in this manner are automatically mounted according to the \fBmountpoint\fR property inherited from their parent. If the target filesystem or volume already exists, the operation completes successfully. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty\fR=\fIvalue\fR\fR -.ad -.sp .6 -.RS 4n -Sets the specified property; see \fBzfs create\fR for details. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs promote\fR \fIclone-filesystem\fR\fR -.ad -.sp .6 -.RS 4n -Promotes a clone file system to no longer be dependent on its "origin" snapshot. This makes it possible to destroy the file system that the clone was created from. The clone parent-child dependency relationship is reversed, so that the origin file system becomes a clone of the specified file system. -.sp -The snapshot that was cloned, and any snapshots previous to this snapshot, are now owned by the promoted clone. The space they use moves from the origin file system to the promoted clone, so enough space must be available to accommodate these snapshots. No new space is consumed by this operation, but the space accounting is adjusted. The promoted clone must not have any conflicting snapshot names of its own. The \fBrename\fR subcommand can be used to rename any conflicting snapshots. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs rename\fR \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR\fR -.ad -.br -.na -\fB\fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR\fR -.ad -.br -.na -\fB\fBzfs rename\fR [\fB-p\fR] \fIfilesystem\fR|\fIvolume\fR \fIfilesystem\fR|\fIvolume\fR\fR -.ad -.br -.na -\fB\fBzfs rename\fR \fB-u\fR [\fB-p\fR] \fIfilesystem\fR \fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Renames the given dataset. The new target can be located anywhere in the \fBZFS\fR hierarchy, with the exception of snapshots. Snapshots can only be renamed within the parent file system or volume. When renaming a snapshot, the parent file system of the snapshot does not need to be specified as part of the second argument. Renamed file systems can inherit new mount points, in which case they are unmounted and remounted at the new mount point. -.sp -.ne 2 -.mk -.na -\fB\fB-p\fR\fR -.ad -.sp .6 -.RS 4n -Creates all the nonexistent parent datasets. Datasets created in this manner are automatically mounted according to the \fBmountpoint\fR property inherited from their parent. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-u\fR\fR -.ad -.sp .6 -.RS 4n -Do not remount file systems during rename. If a file system's \fBmountpoint\fR property is set to \fBlegacy\fR or \fBnone\fR, file system is not unmounted even if this option is not given. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs rename\fR \fB-r\fR \fIsnapshot\fR \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Recursively rename the snapshots of all descendent datasets. Snapshots are the only dataset that can be renamed recursively. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs\fR \fBlist\fR [\fB-r\fR|\fB-d\fR \fIdepth\fR] [\fB-H\fR] [\fB-o\fR \fIproperty\fR[,\fI\&...\fR]] [ \fB-t\fR \fItype\fR[,\fI\&...\fR]] [ \fB-s\fR \fIproperty\fR ] ... [ \fB-S\fR \fIproperty\fR ] ... [\fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR] ...\fR -.ad -.sp .6 -.RS 4n -Lists the property information for the given datasets in tabular form. If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the \fBlistsnaps\fR property is \fBon\fR (the default is \fBoff\fR) . 
The following fields are displayed, \fBname,used,available,referenced,mountpoint\fR. -.sp -.ne 2 -.mk -.na -\fB\fB-H\fR\fR -.ad -.sp .6 -.RS 4n -Used for scripting mode. Do not print headers and separate fields by a single tab instead of arbitrary white space. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Recursively display any children of the dataset on the command line. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-d\fR \fIdepth\fR\fR -.ad -.sp .6 -.RS 4n -Recursively display any children of the dataset, limiting the recursion to \fIdepth\fR. A depth of \fB1\fR will display only the dataset and its direct children. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl R +Recursively destroy any more recent snapshots, as well as any clones of those +snapshots. +.It Fl f +Used with the +.Fl R +option to force an unmount of any clone file systems that are to be destroyed. +.El +.It Xo +.Nm +.Cm clone +.Op Fl p +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... snapshot filesystem Ns | Ns Ar volume +.Xc +.Pp +Creates a clone of the given snapshot. See the +.Qq Sx Clones +section for details. The target dataset can be located anywhere in the +.Tn ZFS +hierarchy, and is created as the same type as the original. +.Bl -tag -width indent +.It Fl p +Creates all the non-existing parent datasets. Datasets created in this manner +are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. If the target filesystem or volume +already exists, the operation completes successfully. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property; see +.Qq Nm Cm create +for details. +.El +.It Xo +.Nm +.Cm promote +.Ar clone-filesystem +.Xc +.Pp +Promotes a clone file system to no longer be dependent on its "origin" +snapshot. This makes it possible to destroy the file system that the clone was +created from. The clone parent-child dependency relationship is reversed, so +that the origin file system becomes a clone of the specified file system. +.Pp +The snapshot that was cloned, and any snapshots previous to this snapshot, are +now owned by the promoted clone. The space they use moves from the origin file +system to the promoted clone, so enough space must be available to accommodate +these snapshots. No new space is consumed by this operation, but the space +accounting is adjusted. The promoted clone must not have any conflicting +snapshot names of its own. The +.Cm rename +subcommand can be used to rename any conflicting snapshots. +.It Xo +.Nm +.Cm rename +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.It Xo +.Nm +.Cm rename +.Fl p +.Ar filesystem Ns | Ns Ar volume +.Ar filesystem Ns | Ns Ar volume +.Xc +.It Xo +.Nm +.Cm rename +.Fl u +.Op Fl p +.Ar filesystem filesystem +.Xc +.Pp +Renames the given dataset. The new target can be located anywhere in the +.Tn ZFS +hierarchy, with the exception of snapshots. Snapshots can only be renamed +within the parent file system or volume. When renaming a snapshot, the parent +file system of the snapshot does not need to be specified as part of the second +argument. Renamed file systems can inherit new mount points, in which case they +are unmounted and remounted at the new mount point. +.Bl -tag -width indent +.It Fl p +Creates all the nonexistent parent datasets. Datasets created in this manner +are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. 
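As a hypothetical example of the rename forms described here (names invented for illustration):

    # zfs rename tank/home/joe tank/home/joe_old
    # zfs rename -r tank/home@yesterday tank/home@tuesday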
+.It Fl u
+Do not remount file systems during rename. If a file system's
+.Sy mountpoint
+property is set to
+.Cm legacy
+or
+.Cm none ,
+the file system is not unmounted even if this option is not given.
+.El
+.It Xo
+.Nm
+.Cm rename
+.Fl r
+.Ar snapshot snapshot
+.Xc
+.Pp
+Recursively rename the snapshots of all descendent datasets. Snapshots are the
+only dataset that can be renamed recursively.
+.It Xo
+.Nm
+.Cm list
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl H
+.Op Fl o Ar property Ns Op , Ns Ar ...
+.Op Fl t Ar type Ns Op , Ns Ar ...
+.Op Fl s Ar property
+.Ar ...
+.Op Fl S Ar property
+.Ar ...
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Xc
+.Pp
+Lists the property information for the given datasets in tabular form. If
+specified, you can list property information by the absolute pathname or the
+relative pathname. By default, all file systems and volumes are displayed.
+Snapshots are displayed if the
+.Sy listsnaps
+property is
+.Cm on
+(the default is
+.Cm off Ns ).
+The following fields are displayed,
+.Sy name , used , available , referenced , mountpoint .
+.Bl -tag -width indent
+.It Fl r
+Recursively display any children of the dataset on the command line.
+.It Fl d Ar depth
+Recursively display any children of the dataset, limiting the recursion to
+.Ar depth .
+A depth of
+.Sy 1
+will display only the dataset and its direct children.
+.It Fl H
+Used for scripting mode. Do not print headers and separate fields by a single
+tab instead of arbitrary white space.
+.It Fl o Ar property Ns Op , Ns Ar ...
A comma-separated list of properties to display. The property must be:
-.RS +4
-.TP
-.ie t \(bu
-.el o
+.Bl -bullet -offset 2n
+.It
+One of the properties described in the
+.Qq Sx Native Properties
+section
+.It
A user property
-.RE
-.RS +4
-.TP
-.ie t \(bu
-.el o
-The value \fBname\fR to display the dataset name
-.RE
-.RS +4
-.TP
-.ie t \(bu
-.el o
-The value \fBspace\fR to display space usage properties on file systems and volumes. This is a shortcut for specifying \fB-o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild\fR \fB-t filesystem,volume\fR syntax.
-.RE
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fB-s\fR \fIproperty\fR\fR
-.ad
-.sp .6
-.RS 4n
-A property for sorting the output by column in ascending order based on the value of the property. The property must be one of the properties described in the "Properties" section, or the special value \fBname\fR to sort by the dataset name. Multiple properties can be specified at one time using multiple \fB-s\fR property options. Multiple \fB-s\fR options are evaluated from left to right in decreasing order of importance.
-.sp
+.It
+The value
+.Cm name
+to display the dataset name
+.It
+The value
+.Cm space
+to display space usage properties on file systems and volumes. This is a
+shortcut for specifying
+.Fl o
+.Sy name,avail,used,usedsnap,usedds,usedrefreserv,usedchild
+.Fl t
+.Sy filesystem,volume
+syntax.
+.El
+.It Fl t Ar type Ns Op , Ns Ar ...
+A comma-separated list of types to display, where
+.Ar type
+is one of
+.Sy filesystem , snapshot , volume , No or Sy all .
+For example, specifying
+.Fl t Cm snapshot
+displays only snapshots.
+.It Fl s Ar property
+A property for sorting the output by column in ascending order based on the
+value of the property. The property must be one of the properties described in
+the
+.Qq Sx Properties
+section, or the special value
+.Cm name
+to sort by the dataset name.
Multiple properties can be specified at one time +using multiple +.Fl s +property options. Multiple +.Fl s +options are evaluated from left to right in decreasing order of importance. +.Pp The following is a list of sorting criteria: -.RS +4 -.TP -.ie t \(bu -.el o +.Bl -bullet -offset 2n +.It Numeric types sort in numeric order. -.RE -.RS +4 -.TP -.ie t \(bu -.el o +.It String types sort in alphabetical order. -.RE -.RS +4 -.TP -.ie t \(bu -.el o -Types inappropriate for a row sort that row to the literal bottom, regardless of the specified ordering. -.RE -.RS +4 -.TP -.ie t \(bu -.el o -If no sorting options are specified the existing behavior of \fBzfs list\fR is preserved. -.RE -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-S\fR \fIproperty\fR\fR -.ad -.sp .6 -.RS 4n -Same as the \fB-s\fR option, but sorts by property in descending order. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-t\fR \fItype\fR\fR -.ad -.sp .6 -.RS 4n -A comma-separated list of types to display, where \fItype\fR is one of \fBfilesystem\fR, \fBsnapshot\fR , \fBvolume\fR, or \fBall\fR. For example, specifying \fB-t snapshot\fR displays only snapshots. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs set\fR \fIproperty\fR=\fIvalue\fR \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR ...\fR -.ad -.sp .6 -.RS 4n -Sets the property to the given value for each dataset. Only some properties can be edited. See the "Properties" section for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of \fBB\fR, \fBK\fR, \fBM\fR, \fBG\fR, \fBT\fR, \fBP\fR, \fBE\fR, \fBZ\fR (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the "User Properties" section. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs get\fR [\fB-r\fR|\fB-d\fR \fIdepth\fR] [\fB-Hp\fR] [\fB-o\fR \fIfield\fR[,...] [\fB-s\fR \fIsource\fR[,...] "\fIall\fR" | \fIproperty\fR[,...] \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR ...\fR -.ad -.sp .6 -.RS 4n -Displays properties for the given datasets. If no datasets are specified, then the command displays properties for all datasets on the system. For each property, the following columns are displayed: -.sp -.in +2 -.nf - name Dataset name - property Property name - value Property value - source Property source. Can either be local, default, - temporary, inherited, or none (-). -.fi -.in -2 -.sp - -All columns are displayed by default, though this can be controlled by using the \fB-o\fR option. This command takes a comma-separated list of properties as described in the "Native Properties" and "User Properties" sections. -.sp -The special value \fBall\fR can be used to display all properties that apply to the given dataset's type (filesystem, volume, or snapshot). -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n +.It +Types inappropriate for a row sort that row to the literal bottom, regardless +of the specified ordering. +.It +If no sorting options are specified the existing behavior of +.Qq Nm Cm list +is preserved. +.El +.It Fl S Ar property +Same as the +.Fl s +option, but sorts by property in descending order. +.El +.It Xo +.Nm +.Cm set +.Ar property Ns = Ns Ar value +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.Pp +Sets the property to the given value for each dataset. Only some properties can +be edited. 
See the "Properties" section for more information on what properties +can be set and acceptable values. Numeric values can be specified as exact +values, or in a human-readable form with a suffix of +.Sy B , K , M , G , T , P , E , Z +(for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or +zettabytes, respectively). User properties can be set on snapshots. For more +information, see the +.Qq Sx User Properties +section. +.It Xo +.Nm +.Cm get +.Op Fl r Ns | Ns Fl d Ar depth +.Op Fl Hp +.Op Fl o Ar all | field Ns Op , Ns Ar ... +.Op Fl s Ar source Ns Op , Ns Ar ... +.Ar all | property Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.Pp +Displays properties for the given datasets. If no datasets are specified, then +the command displays properties for all datasets on the system. For each +property, the following columns are displayed: +.Pp +.Bl -hang -width "property" -offset indent -compact +.It name +Dataset name +.It property +Property name +.It value +Property value +.It source +Property source. Can either be local, default, temporary, inherited, or none +(\&-). +.El +.Pp +All columns except the +.Sy RECEIVED +column are displayed by default. The columns to display can be specified +by using the +.Fl o +option. This command takes a comma-separated list of properties as described in +the +.Qq Sx Native Properties +and +.Qq Sx User Properties +sections. +.Pp +The special value +.Cm all +can be used to display all properties that apply to the given dataset's type +(filesystem, volume, or snapshot). +.Bl -tag -width indent +.It Fl r Recursively display properties for any children. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-d\fR \fIdepth\fR\fR -.ad -.sp .6 -.RS 4n -Recursively display any children of the dataset, limiting the recursion to \fIdepth\fR. A depth of \fB1\fR will display only the dataset and its direct children. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-H\fR\fR -.ad -.sp .6 -.RS 4n -Display output in a form more easily parsed by scripts. Any headers are omitted, and fields are explicitly separated by a single tab instead of an arbitrary amount of space. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIfield\fR\fR -.ad -.sp .6 -.RS 4n -A comma-separated list of columns to display. \fBname,property,value,source\fR is the default value. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-s\fR \fIsource\fR\fR -.ad -.sp .6 -.RS 4n -A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: \fBlocal,default,inherited,temporary,none\fR. The default value is all sources. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-p\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl d Ar depth +Recursively display any children of the dataset, limiting the recursion to +.Ar depth . +A depth of +.Sy 1 +will display only the dataset and its direct children. +.It Fl H +Display output in a form more easily parsed by scripts. Any headers are +omitted, and fields are explicitly separated by a single tab instead of an +arbitrary amount of space. +.It Fl p Display numbers in parseable (exact) values. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs inherit\fR [\fB-r\fR] \fIproperty\fR \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR ...\fR -.ad -.sp .6 -.RS 4n -Clears the specified property, causing it to be inherited from an ancestor. If no ancestor has the property set, then the default value is used. 
See the "Properties" section for a listing of default values, and details on which properties can be inherited. -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl o Cm all | Ar field Ns Op , Ns Ar ... +A comma-separated list of columns to display. Supported values are +.Sy name,property,value,received,source . +Default values are +.Sy name,property,value,source . +The keyword +.Cm all +specifies all columns. +.It Fl s Ar source Ns Op , Ns Ar ... +A comma-separated list of sources to display. Those properties coming from a +source other than those in this list are ignored. Each source must be one of +the following: +.Sy local,default,inherited,temporary,received,none . +The default value is all sources. +.El +.It Xo +.Nm +.Cm inherit +.Op Fl rS +.Ar property +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.Pp +Clears the specified property, causing it to be inherited from an ancestor. If +no ancestor has the property set, then the default value is used. See the +.Qq Sx Properties +section for a listing of default values, and details on which properties can be +inherited. +.Bl -tag -width indent +.It Fl r Recursively inherit the given property for all children. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs upgrade\fR [\fB-v\fR]\fR -.ad -.sp .6 -.RS 4n +.It Fl S +For properties with a received value, revert to this value. This flag has no +effect on properties that do not have a received value. +.El +.It Xo +.Nm +.Cm upgrade +.Op Fl v +.Xc +.Pp Displays a list of file systems that are not the most recent version. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs upgrade\fR [\fB-r\fR] [\fB-V\fR \fIversion\fR] [\fB-a\fR | \fIfilesystem\fR]\fR -.ad -.sp .6 -.RS 4n -Upgrades file systems to a new on-disk version. Once this is done, the file systems will no longer be accessible on systems running older versions of the software. \fBzfs send\fR streams generated from new snapshots of these file systems cannot be accessed on systems running older versions of the software. -.sp -In general, the file system version is independent of the pool version. See \fBzpool\fR(1M) for information on the \fBzpool upgrade\fR command. -.sp -In some cases, the file system version and the pool version are interrelated and the pool version must be upgraded before the file system version can be upgraded. -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.sp .6 -.RS 4n +.Bl -tag -width indent +.It Fl v +Displays +.Tn ZFS +filesystem versions supported by the current software. The current +.Tn ZFS +filesystem version and all previous supported versions are displayed, along +with an explanation of the features provided with each version. +.El +.It Xo +.Nm +.Cm upgrade +.Op Fl r +.Op Fl V Ar version +.Fl a | Ar filesystem +.Xc +.Pp +Upgrades file systems to a new on-disk version. Once this is done, the file +systems will no longer be accessible on systems running older versions of the +software. +.Qq Nm Cm send +streams generated from new snapshots of these file systems cannot be accessed +on systems running older versions of the software. +.Pp +In general, the file system version is independent of the pool version. See +.Xr zpool 8 +for information on the +.Nm zpool Cm upgrade +command. +.Pp +In some cases, the file system version and the pool version are interrelated +and the pool version must be upgraded before the file system version can be +upgraded. +.Bl -tag -width indent +.It Fl r +Upgrade the specified file system and all descendent file systems. 
+.It Fl V Ar version +Upgrade to the specified +.Ar version . +If the +.Fl V +flag is not specified, this command upgrades to the most recent version. This +option can only be used to increase the version number, and only up to the most +recent version supported by this software. +.It Fl a Upgrade all file systems on all imported pools. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Upgrade the specified file system. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Upgrade the specified file system and all descendent file systems -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-V\fR \fIversion\fR\fR -.ad -.sp .6 -.RS 4n -Upgrade to the specified \fIversion\fR. If the \fB-V\fR flag is not specified, this command upgrades to the most recent version. This option can only be used to increase the version number, and only up to the most recent version supported by this software. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs userspace\fR [\fB-niHp\fR] [\fB-o\fR \fIfield\fR[,...]] [\fB-sS\fR \fIfield\fR]... [\fB-t\fR \fItype\fR [,...]] \fIfilesystem\fR | \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot. This corresponds to the \fBuserused@\fR\fIuser\fR and \fBuserquota@\fR\fIuser\fR properties. -.sp -.ne 2 -.mk -.na -\fB\fB-n\fR\fR -.ad -.sp .6 -.RS 4n +.It Ar filesystem +Upgrade the specified file system. +.El +.It Xo +.Nm +.Cm userspace +.Op Fl niHp +.Op Fl o Ar field Ns Op , Ns Ar ... +.Op Fl sS Ar field +.Ar ... +.Op Fl t Ar type Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar snapshot +.Xc +.Pp +Displays space consumed by, and quotas on, each user in the specified +filesystem or snapshot. This corresponds to the +.Sy userused@ Ns Ar user +and +.Sy userquota@ Ns Ar user +properties. +.Bl -tag -width indent +.It Fl n Print numeric ID instead of user/group name. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-H\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl H Do not print headers, use tab-delimited output. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-p\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl p Use exact (parseable) numeric output. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIfield\fR[,...]\fR -.ad -.sp .6 -.RS 4n -Display only the specified fields from the following set, \fBtype,name,used,quota\fR.The default is to display all fields. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-s\fR \fIfield\fR\fR -.ad -.sp .6 -.RS 4n -Sort output by this field. The \fIs\fR and \fIS\fR flags may be specified multiple times to sort first by one field, then by another. The default is \fB-s type\fR \fB-s name\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-S\fR \fIfield\fR\fR -.ad -.sp .6 -.RS 4n -Sort by this field in reverse order. See \fB-s\fR. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-t\fR \fItype\fR[,...]\fR -.ad -.sp .6 -.RS 4n -Print only the specified types from the following set, \fBall,posixuser,smbuser,posixgroup,smbgroup\fR. -.sp -The default is \fB-t posixuser,smbuser\fR -.sp +.It Fl o Ar field Ns Op , Ns Ar ... +Display only the specified fields from the following set, +.Sy type,name,used,quota . +The default is to display all fields. +.It Fl s Ar field +Sort output by this field. The +.Fl s +and +.Fl S +flags may be specified multiple times to sort first by one field, then by +another. The default is +.Fl s Cm type Fl s Cm name . +.It Fl S Ar field +Sort by this field in reverse order. See +.Fl s . +.It Fl t Ar type Ns Op , Ns Ar ... +Print only the specified types from the following set, +.Sy all,posixuser,smbuser,posixgroup,smbgroup . 
+.Pp +The default is +.Fl t Cm posixuser,smbuser . +.Pp The default can be changed to include group types. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-i\fR\fR -.ad -.sp .6 -.RS 4n -Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping exists. Normal POSIX interfaces (for example, \fBstat\fR(2), \fBls\fR \fB-l\fR) perform this translation, so the \fB-i\fR option allows the output from \fBzfs userspace\fR to be compared directly with those utilities. However, \fB-i\fR may lead to confusion if some files were created by an SMB user before a SMB-to-POSIX name mapping was established. In such a case, some files are owned by the SMB entity and some by the POSIX entity. However, the \fB-i\fR option will report that the POSIX entity has the total usage and quota for both. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs groupspace\fR [\fB-niHp\fR] [\fB-o\fR \fIfield\fR[,...]] [\fB-sS\fR \fIfield\fR]... [\fB-t\fR \fItype\fR [,...]] \fIfilesystem\fR | \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Displays space consumed by, and quotas on, each group in the specified filesystem or snapshot. This subcommand is identical to \fBzfs userspace\fR, except that the default types to display are \fB-t posixgroup,smbgroup\fR. -.sp -.in +2 -.nf -- -.fi -.in -2 -.sp - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs mount\fR\fR -.ad -.sp .6 -.RS 4n -Displays all \fBZFS\fR file systems currently mounted. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs mount\fR [\fB-vO\fR] [\fB-o\fR \fIoptions\fR] \fB-a\fR | \fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Mounts \fBZFS\fR file systems. Invoked automatically as part of the boot process. -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIoptions\fR\fR -.ad -.sp .6 -.RS 4n -An optional, comma-separated list of mount options to use temporarily for the duration of the mount. See the "Temporary Mount Point Properties" section for details. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-O\fR\fR -.ad -.sp .6 -.RS 4n -Perform an overlay mount. See \fBmount\fR(1M) for more information. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-v\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl i +Translate SID to POSIX ID. This flag has currently no effect on +.Fx . +.El +.It Xo +.Nm +.Cm groupspace +.Op Fl niHp +.Op Fl o Ar field Ns Op , Ns Ar ... +.Op Fl sS Ar field +.Ar ... +.Op Fl t Ar type Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar snapshot +.Xc +.Pp +Displays space consumed by, and quotas on, each group in the specified +filesystem or snapshot. This subcommand is identical to +.Qq Nm Cm userspace , +except that the default types to display are +.Fl t Sy posixgroup,smbgroup . +.It Xo +.Nm +.Cm mount +.Xc +.Pp +Displays all +.Tn ZFS +file systems currently mounted. +.Bl -tag -width indent +.It Fl f +.El +.It Xo +.Nm +.Cm mount +.Op Fl vO +.Op Fl o Ar property Ns Op , Ns Ar ... +.Fl a | Ar filesystem +.Xc +.Pp +Mounts +.Tn ZFS +file systems. +.Bl -tag -width indent +.It Fl v Report mount progress. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.sp .6 -.RS 4n -Mount all available \fBZFS\fR file systems. Invoked automatically as part of the boot process. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl O +Perform an overlay mount. Overlay mounts are not supported on +.Fx . +.It Fl o Ar property Ns Op , Ns Ar ... +An optional, comma-separated list of mount options to use temporarily for the +duration of the mount. See the +.Qq Sx Temporary Mount Point Properties +section for details. +.It Fl a +Mount all available +.Tn ZFS +file systems. +This command may be executed on +.Fx +system startup by +.Pa /etc/rc.d/zfs . 
+For more information, see variable +.Va zfs_enable +in +.Xr rc.conf 5 . +.It Ar filesystem Mount the specified filesystem. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs unmount\fR [\fB-f\fR] \fB-a\fR | \fIfilesystem\fR|\fImountpoint\fR\fR -.ad -.sp .6 -.RS 4n -Unmounts currently mounted \fBZFS\fR file systems. Invoked automatically as part of the shutdown process. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.sp .6 -.RS 4n +.El +.It Xo +.Nm +.Cm unmount +.Op Fl f +.Fl a | Ar filesystem Ns | Ns Ar mountpoint +.Xc +.Pp +Unmounts currently mounted +.Tn ZFS +file systems. +.Bl -tag -width indent +.It Fl f Forcefully unmount the file system, even if it is currently in use. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.sp .6 -.RS 4n -Unmount all available \fBZFS\fR file systems. Invoked automatically as part of the boot process. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIfilesystem\fR|\fImountpoint\fR\fR -.ad -.sp .6 -.RS 4n -Unmount the specified filesystem. The command can also be given a path to a \fBZFS\fR file system mount point on the system. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs share\fR \fB-a\fR | \fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Shares available \fBZFS\fR file systems. -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.sp .6 -.RS 4n -Share all available \fBZFS\fR file systems. Invoked automatically as part of the boot process. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Share the specified filesystem according to the \fBsharenfs\fR and \fBsharesmb\fR properties. File systems are shared when the \fBsharenfs\fR or \fBsharesmb\fR property is set. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs unshare\fR \fB-a\fR | \fIfilesystem\fR|\fImountpoint\fR\fR -.ad -.sp .6 -.RS 4n -Unshares currently shared \fBZFS\fR file systems. This is invoked automatically as part of the shutdown process. -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.sp .6 -.RS 4n -Unshare all available \fBZFS\fR file systems. Invoked automatically as part of the boot process. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fIfilesystem\fR|\fImountpoint\fR\fR -.ad -.sp .6 -.RS 4n -Unshare the specified filesystem. The command can also be given a path to a \fBZFS\fR file system shared on the system. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs send\fR [\fB-vR\fR] [\fB-\fR[\fBiI\fR] \fIsnapshot\fR] \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Creates a stream representation of the second \fIsnapshot\fR, which is written to standard output. The output can be redirected to a file or to a different system (for example, using \fBssh\fR(1). By default, a full stream is generated. -.sp -.ne 2 -.mk -.na -\fB\fB-i\fR \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Generate an incremental stream from the first \fIsnapshot\fR to the second \fIsnapshot\fR. The incremental source (the first \fIsnapshot\fR) can be specified as the last component of the snapshot name (for example, the part after the \fB@\fR), and it is assumed to be from the same file system as the second \fIsnapshot\fR. -.sp -If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, \fBpool/fs@origin\fR, not just \fB@origin\fR). -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-I\fR \fIsnapshot\fR\fR -.ad -.sp .6 -.RS 4n -Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, \fB-I @a fs@d\fR is similar to \fB-i @a fs@b; -i @b fs@c; -i @c fs@d\fR. The incremental source snapshot may be specified as with the \fB-i\fR option. 
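A small sketch of a temporary read-only mount followed by a forced unmount, assuming a hypothetical dataset that is still in use:

    # zfs mount -o ro tank/backup
    # zfs unmount -f tank/backup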
-.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR\fR -.ad -.sp .6 -.RS 4n -Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved. -.sp -If the \fB-i\fR or \fB-I\fR flags are used in conjunction with the \fB-R\fR flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the \fB-F\fR flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-v\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl a +Unmount all available +.Tn ZFS +file systems. +.It Ar filesystem | mountpoint +Unmount the specified filesystem. The command can also be given a path to a +.Tn ZFS +file system mount point on the system. +.El +.It Xo +.Nm +.Cm share +.Fl a | Ar filesystem +.Xc +.Pp +Shares +.Tn ZFS +file systems that have the +.Sy sharenfs +property set. +.Bl -tag -width indent +.It Fl a +Share all +.Tn ZFS +file systems that have the +.Sy sharenfs +property set. +This command may be executed on +.Fx +system startup by +.Pa /etc/rc.d/zfs . +For more information, see variable +.Va zfs_enable +in +.Xr rc.conf 5 . +.It Ar filesystem +Share the specified filesystem according to the +.Tn sharenfs +property. File systems are shared when the +.Tn sharenfs +property is set. +.El +.It Xo +.Nm +.Cm unshare +.Fl a | Ar filesystem Ns | Ns Ar mountpoint +.Xc +.Pp +Unshares +.Tn ZFS +file systems that have the +.Tn sharenfs +property set. +.Bl -tag -width indent +.It Fl a +Unshares +.Tn ZFS +file systems that have the +.Sy sharenfs +property set. +This command may be executed on +.Fx +system shutdown by +.Pa /etc/rc.d/zfs . +For more information, see variable +.Va zfs_enable +in +.Xr rc.conf 5 . +.It Ar filesystem | mountpoint +Unshare the specified filesystem. The command can also be given a path to a +.Tn ZFS +file system shared on the system. +.El +.It Xo +.Nm +.Cm send +.Op Fl DvRp +.Op Fl i Ar snapshot | Fl I Ar snapshot +.Ar snapshot +.Xc +.Pp +Creates a stream representation of the last +.Ar snapshot +argument (not part of +.Fl i +or +.Fl I Ns ) +which is written to standard output. The output can be redirected to +a file or to a different system (for example, using +.Xr ssh 1 Ns ). +By default, a full stream is generated. +.Bl -tag -width indent +.It Fl i Ar snapshot +Generate an incremental stream from the +.Fl i Ar snapshot +to the last +.Ar snapshot . +The incremental source (the +.Fl i Ar snapshot Ns ) +can be specified as the last component of the snapshot name (for example, the +part after the +.Sy @ Ns ), +and it is assumed to be from the same file system as the last +.Ar snapshot . +.Pp +If the destination is a clone, the source may be the origin snapshot, which +must be fully specified (for example, +.Cm pool/fs@origin , +not just +.Cm @origin Ns ). +.It Fl I Ar snapshot +Generate a stream package that sends all intermediary snapshots from the +.Fl I Ar snapshot to the last +.Ar snapshot . For example, +.Ic -I @a fs@d +is similar to +.Ic -i @a fs@b; -i @b fs@c; -i @c fs@d Ns . +The incremental source snapshot may be specified as with the +.Fl i +option. +.It Fl R +Generate a replication stream package, which will replicate the specified +filesystem, and all descendent file systems, up to the named snapshot. 
When
+received, all properties, snapshots, descendent file systems, and clones are
+preserved.
+.Pp
+If the
+.Fl i
+or
+.Fl I
+flags are used in conjunction with the
+.Fl R
+flag, an incremental replication stream is generated. The current values of
+properties, and current snapshot and file system names are set when the stream
+is received. If the
+.Fl F
+flag is specified when this stream is received, snapshots and file systems that
+do not exist on the sending side are destroyed.
+.It Fl D
+Generate a deduplicated stream. Blocks which would have been sent multiple
+times in the send stream will only be sent once. The receiving system must
+also support this feature to receive a deduplicated stream. This flag can
+be used regardless of the dataset's
+.Sy dedup
+property, but performance will be much better if the filesystem uses a
+dedup-capable checksum (e.g.,
+.Sy sha256 Ns ).
+.It Fl p
+Include the dataset's properties in the stream. This flag is implicit when
+.Fl R
+is specified. The receiving system must also support this feature.
+.It Fl v
Print verbose information about the stream package generated.
-.RE
-
-The format of the stream is committed. You will be able to receive your streams on future versions of \fBZFS\fR.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBzfs receive\fR [\fB-vnFu\fR] \fIfilesystem\fR|\fIvolume\fR|\fIsnapshot\fR\fR
-.ad
-.br
-.na
-\fB\fBzfs receive\fR [\fB-vnFu\fR] \fB-d\fR \fIfilesystem\fR\fR
-.ad
-.sp .6
-.RS 4n
-Creates a snapshot whose contents are as specified in the stream provided on standard input. If a full stream is received, then a new file system is created as well. Streams are created using the \fBzfs send\fR subcommand, which by default creates a full stream. \fBzfs recv\fR can be used as an alias for \fBzfs receive\fR.
-.sp
-If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For \fBzvols\fR, the destination device link is destroyed and recreated, which means the \fBzvol\fR cannot be accessed during the \fBreceive\fR operation.
-.sp
-When a snapshot replication package stream that is generated by using the \fBzfs send\fR \fB-R\fR command is received, any snapshots that do not exist on the sending location are destroyed by using the \fBzfs destroy\fR \fB-d\fR command.
-.sp
-The name of the snapshot (and file system, if a full stream is received) that this subcommand creates depends on the argument type and the \fB-d\fR option.
-.sp
-If the argument is a snapshot name, the specified \fIsnapshot\fR is created. If the argument is a file system or volume name, a snapshot with the same name as the sent snapshot is created within the specified \fIfilesystem\fR or \fIvolume\fR. If the \fB-d\fR option is specified, the snapshot name is determined by appending the sent snapshot's name to the specified \fIfilesystem\fR. If the \fB-d\fR option is specified, any required file systems within the specified one are created.
-.sp
-.ne 2
-.mk
-.na
-\fB\fB-d\fR\fR
-.ad
-.sp .6
-.RS 4n
-Use the name of the sent snapshot to determine the name of the new snapshot as described in the paragraph above.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fB-u\fR\fR
-.ad
-.sp .6
-.RS 4n
+.El
+.Pp
+The format of the stream is committed. You will be able to receive your streams
+on future versions of
+.Tn ZFS .
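For illustration (the dataset, pool, and host names here are hypothetical), a full send, an incremental send, and a deduplicated replication stream written to a file might look like:

    # zfs send tank/data@weekly | ssh backuphost zfs receive backup/data
    # zfs send -i @weekly tank/data@daily | ssh backuphost zfs receive backup/data
    # zfs send -D -R tank/data@weekly > /backup/data-weekly.zsend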
+.It Xo +.Nm +.Cm receive +.Op Fl vnFu +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.It Xo +.Nm +.Cm receive +.Op Fl vnFu +.Op Fl d | e +.Ar filesystem +.Xc +.Pp +Creates a snapshot whose contents are as specified in the stream provided on +standard input. If a full stream is received, then a new file system is created +as well. Streams are created using the +.Qq Nm Cm send +subcommand, which by default creates a full stream. +.Qq Nm Cm recv +can be used as an alias for +.Qq Nm Cm receive . +.Pp +If an incremental stream is received, then the destination file system must +already exist, and its most recent snapshot must match the incremental stream's +source. For +.Sy zvol Ns s, +the destination device link is destroyed and recreated, which means the +.Sy zvol +cannot be accessed during the +.Sy receive +operation. +.Pp +When a snapshot replication package stream that is generated by using the +.Qq Nm Cm send Fl R +command is received, any snapshots that do not exist on the sending location +are destroyed by using the +.Qq Nm Cm destroy Fl d +command. +.Pp +The name of the snapshot (and file system, if a full stream is received) that +this subcommand creates depends on the argument type and the +.Fl d +or +.Fl e +option. +.Pp +If the argument is a snapshot name, the specified +.Ar snapshot +is created. If the argument is a file system or volume name, a snapshot with +the same name as the sent snapshot is created within the specified +.Ar filesystem +or +.Ar volume . +If the +.Fl d +or +.Fl e +option is specified, the snapshot name is determined by appending the sent +snapshot's name to the specified +.Ar filesystem . +If the +.Fl d +option is specified, all but the pool name of the sent snapshot path is +appended (for example, +.Sy b/c@1 +appended from sent snapshot +.Sy a/b/c@1 Ns ), +and if the +.Fl e +option is specified, only the tail of the sent snapshot path is appended (for +example, +.Sy c@1 +appended from sent snapshot +.Sy a/b/c@1 Ns ). +In the case of +.Fl d , +any file systems needed to replicate the path of the sent snapshot are created +within the specified file system. +.Bl -tag -width indent +.It Fl d +Use the full sent snapshot path without the first element (without pool name) +to determine the name of the new snapshot as described in the paragraph above. +.It Fl e +Use only the last element of the sent snapshot path to determine the name of +the new snapshot as described in the paragraph above. +.It Fl u File system that is associated with the received stream is not mounted. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-v\fR\fR -.ad -.sp .6 -.RS 4n -Print verbose information about the stream and the time required to perform the receive operation. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-n\fR\fR -.ad -.sp .6 -.RS 4n -Do not actually receive the stream. This can be useful in conjunction with the \fB-v\fR option to verify the name the receive operation would use. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-F\fR\fR -.ad -.sp .6 -.RS 4n -Force a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by \fBzfs send -R -[iI]\fR), destroy snapshots and file systems that do not exist on the sending side. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs allow\fR \fIfilesystem\fR | \fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Displays permissions that have been delegated on the specified filesystem or volume. See the other forms of \fBzfs allow\fR for more information. 
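To make the naming rules above concrete with a hypothetical example: receiving a stream of poolA/fsA/fsB@snap below poolB/received yields poolB/received/fsA/fsB@snap with the -d option and poolB/received/fsB@snap with the -e option.

    # zfs send poolA/fsA/fsB@snap | ssh host zfs receive -e poolB/received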
-.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs allow\fR [\fB-ldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] \fIperm\fR|@\fIsetname\fR[,...] \fIfilesystem\fR| \fIvolume\fR\fR -.ad -.br -.na -\fB\fBzfs allow\fR [\fB-ld\fR] \fB-e\fR \fIperm\fR|@\fIsetname\fR[,...] \fIfilesystem\fR | \fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Delegates \fBZFS\fR administration permission for the file systems to non-privileged users. -.sp -.ne 2 -.mk -.na -\fB[\fB-ug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...]\fR -.ad -.sp .6 -.RS 4n -Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the \fB-ug\fR options are specified, then the argument is interpreted preferentially as the keyword "everyone", then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the \fB-u\fR or \fB-g\fR options. To specify a group with the same name as a user, use the \fB-g\fR options. -.RE - -.sp -.ne 2 -.mk -.na -\fB[\fB-e\fR] \fIperm\fR|@\fIsetname\fR[,...]\fR -.ad -.sp .6 -.RS 4n -Specifies that the permissions be delegated to "everyone." Multiple permissions may be specified as a comma-separated list. Permission names are the same as \fBZFS\fR subcommand and property names. See the property list below. Property set names, which begin with an at sign (\fB@\fR) , may be specified. See the \fB-s\fR form below for details. -.RE - -.sp -.ne 2 -.mk -.na -\fB[\fB-ld\fR] \fIfilesystem\fR|\fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Specifies where the permissions are delegated. If neither of the \fB-ld\fR options are specified, or both are, then the permissions are allowed for the file system or volume, and all of its descendents. If only the \fB-l\fR option is used, then is allowed "locally" only for the specified file system. If only the \fB-d\fR option is used, then is allowed only for the descendent file systems. -.RE - -.RE - -.sp -.LP -Permissions are generally the ability to use a \fBZFS\fR subcommand or change a \fBZFS\fR property. The following permissions are available: -.sp -.in +2 -.nf -NAME TYPE NOTES -allow subcommand Must also have the permission that is being - allowed -clone subcommand Must also have the 'create' ability and 'mount' - ability in the origin file system -create subcommand Must also have the 'mount' ability -destroy subcommand Must also have the 'mount' ability -mount subcommand Allows mount/umount of ZFS datasets -promote subcommand Must also have the 'mount' - and 'promote' ability in the origin file system -receive subcommand Must also have the 'mount' and 'create' ability -rename subcommand Must also have the 'mount' and 'create' - ability in the new parent -rollback subcommand Must also have the 'mount' ability -send subcommand -share subcommand Allows sharing file systems over NFS or SMB - protocols -snapshot subcommand Must also have the 'mount' ability -groupquota other Allows accessing any groupquota@... property -groupused other Allows reading any groupused@... property -userprop other Allows changing any user property -userquota other Allows accessing any userquota@... property -userused other Allows reading any userused@... 
property
-
-aclinherit property
-aclmode property
-atime property
-canmount property
-casesensitivity property
-checksum property
-compression property
-copies property
-devices property
-exec property
-mountpoint property
-nbmand property
-normalization property
-primarycache property
-quota property
-readonly property
-recordsize property
-refquota property
-refreservation property
-reservation property
-secondarycache property
-setuid property
-shareiscsi property
-sharenfs property
-sharesmb property
-snapdir property
-utf8only property
-version property
-volblocksize property
-volsize property
-vscan property
-xattr property
-zoned property
-.fi
-.in -2
-.sp
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBzfs allow\fR \fB-c\fR \fIperm\fR|@\fIsetname\fR[,...] \fIfilesystem\fR|\fIvolume\fR\fR
-.ad
-.sp .6
-.RS 4n
-Sets "create time" permissions. These permissions are granted (locally) to the creator of any newly-created descendent file system.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBzfs allow\fR \fB-s\fR @\fIsetname\fR \fIperm\fR|@\fIsetname\fR[,...] \fIfilesystem\fR|\fIvolume\fR\fR
-.ad
-.sp .6
-.RS 4n
-Defines or adds permissions to a permission set. The set can be used by other \fBzfs allow\fR commands for the specified file system and its descendents. Sets are evaluated dynamically, so changes to a set are immediately reflected. Permission sets follow the same naming restrictions as ZFS file systems, but the name must begin with an "at sign" (\fB@\fR), and can be no more than 64 characters long.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBzfs unallow\fR [\fB-rldug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...] [\fIperm\fR|@\fIsetname\fR[, ...]] \fIfilesystem\fR|\fIvolume\fR\fR
-.ad
-.br
-.na
-\fB\fBzfs unallow\fR [\fB-rld\fR] \fB-e\fR [\fIperm\fR|@\fIsetname\fR [,...]] \fIfilesystem\fR|\fIvolume\fR\fR
-.ad
-.br
-.na
-\fB\fBzfs unallow\fR [\fB-r\fR] \fB-c\fR [\fIperm\fR|@\fIsetname\fR[,...]]\fR
-.ad
-.br
-.na
-\fB\fIfilesystem\fR|\fIvolume\fR\fR
-.ad
-.sp .6
-.RS 4n
-Removes permissions that were granted with the \fBzfs allow\fR command. No permissions are explicitly denied, so other permissions granted are still in effect. For example, if the permission is granted by an ancestor. If no permissions are specified, then all permissions for the specified \fIuser\fR, \fIgroup\fR, or \fIeveryone\fR are removed. Specifying "everyone" (or using the \fB-e\fR option) only removes the permissions that were granted to "everyone", not all permissions for every user and group. See the \fBzfs allow\fR command for a description of the \fB-ldugec\fR options.
-.sp
-.ne 2
-.mk
-.na
-\fB\fB-r\fR\fR
-.ad
-.sp .6
-.RS 4n
+.It Fl v
+Print verbose information about the stream and the time required to perform the
+receive operation.
+.It Fl n
+Do not actually receive the stream. This can be useful in conjunction with the
+.Fl v
+option to verify the name the receive operation would use.
+.It Fl F
+Force a rollback of the file system to the most recent snapshot before
+performing the receive operation. If receiving an incremental replication
+stream (for example, one generated by
+.Qq Nm Cm send Fl R Op Fl i | Fl I Ns ) ,
+destroy snapshots and file systems that do not exist on the sending side.
+.El
+.It Xo
+.Nm
+.Cm allow
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.Pp
+Displays permissions that have been delegated on the specified filesystem or
+volume. See the other forms of
+.Qq Nm Cm allow
+for more information.
+.It Xo
+.Nm
+.Cm allow
+.Op Fl ldug
+.Cm everyone Ns | Ns Ar user Ns | Ns Ar group Ns Op , Ns Ar ...
+.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.It Xo +.Nm +.Cm allow +.Op Fl ld +.Fl e +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.Pp +Delegates +.Tn ZFS +administration permission for the file systems to non-privileged users. +.Bl -tag -width indent +.It Xo +.Op Fl ug +.Cm everyone Ns | Ns Ar user Ns | Ns Ar group Ns Op , Ns Ar ... +.Xc +Specifies to whom the permissions are delegated. Multiple entities can be +specified as a comma-separated list. If neither of the +.Fl ug +options are specified, then the argument is interpreted preferentially as the +keyword "everyone", then as a user name, and lastly as a group name. To specify +a user or group named "everyone", use the +.Fl u +or +.Fl g +options. To specify a group with the same name as a user, use the +.Fl g +option. +.It Xo +.Op Fl e +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Xc +Specifies that the permissions be delegated to "everyone." Multiple permissions +may be specified as a comma-separated list. Permission names are the same as +.Tn ZFS +subcommand and property names. See the property list below. Property set names, +which begin with an at sign +.Pq Sy @ Ns , +may be specified. See the +.Fl s +form below for details. +.It Xo +.Op Fl ld +.Ar filesystem Ns | Ns Ar volume +.Xc +Specifies where the permissions are delegated. If neither of the +.Fl ld +options are specified, or both are, then the permissions are allowed for the +file system or volume, and all of its descendents. If only the +.Fl l +option is used, then is allowed "locally" only for the specified file system. +If only the +.Fl d +option is used, then is allowed only for the descendent file systems. +.El +.Pp +Permissions are generally the ability to use a +.Tn ZFS +subcommand or change a +.Tn ZFS +property. The following permissions are available: +.Bl -column -offset 4n "secondarycache" "subcommand" +.It NAME Ta TYPE Ta NOTES +.It Xo allow Ta subcommand Ta Must +also have the permission that is being allowed +.Xc +.It Xo clone Ta subcommand Ta Must +also have the 'create' ability and 'mount' ability in the origin file system +.Xc +.It create Ta subcommand Ta Must also have the 'mount' ability +.It destroy Ta subcommand Ta Must also have the 'mount' ability +.It hold Ta subcommand Ta Allows adding a user hold to a snapshot +.It mount Ta subcommand Ta Allows mount/umount of Tn ZFS No datasets +.It Xo promote Ta subcommand Ta Must +also have the 'mount' and 'promote' ability in the origin file system +.Xc +.It receive Ta subcommand Ta Must also have the 'mount' and 'create' ability +.It Xo release Ta subcommand Ta Allows +releasing a user hold which might destroy the snapshot +.Xc +.It Xo rename Ta subcommand Ta Must +also have the 'mount' and 'create' ability in the new parent +.Xc +.It rollback Ta subcommand Ta Must also have the 'mount' ability +.It send Ta subcommand +.It share Ta subcommand Ta Allows Xo +sharing file systems over the +.Tn NFS +protocol +.Xc +.It snapshot Ta subcommand Ta Must also have the 'mount' ability +.It groupquota Ta other Ta Allows accessing any groupquota@... property +.It groupused Ta other Ta Allows reading any groupused@... property +.It userprop Ta other Ta Allows changing any user property +.It userquota Ta other Ta Allows accessing any userquota@... property +.It userused Ta other Ta Allows reading any userused@... 
property +.It Ta +.It aclinherit Ta property +.It aclmode Ta property +.It atime Ta property +.It canmount Ta property +.It casesensitivity Ta property +.It checksum Ta property +.It compression Ta property +.It copies Ta property +.It dedup Ta property +.It devices Ta property +.It exec Ta property +.It logbias Ta property +.It jailed Ta property +.It mlslabel Ta property +.It mountpoint Ta property +.It nbmand Ta property +.It normalization Ta property +.It primarycache Ta property +.It quota Ta property +.It readonly Ta property +.It recordsize Ta property +.It refquota Ta property +.It refreservation Ta property +.It reservation Ta property +.It secondarycache Ta property +.It setuid Ta property +.It sharenfs Ta property +.It sharesmb Ta property +.It snapdir Ta property +.It sync Ta property +.It utf8only Ta property +.It version Ta property +.It volblocksize Ta property +.It volsize Ta property +.It vscan Ta property +.It xattr Ta property +.El +.It Xo +.Nm +.Cm allow +.Fl c +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.Pp +Sets "create time" permissions. These permissions are granted (locally) to the +creator of any newly-created descendent file system. +.It Xo +.Nm +.Cm allow +.Fl s +.Ar @setname +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.Pp +Defines or adds permissions to a permission set. The set can be used by other +.Qq Nm Cm allow +commands for the specified file system and its descendents. Sets are evaluated +dynamically, so changes to a set are immediately reflected. Permission sets +follow the same naming restrictions as ZFS file systems, but the name must +begin with an "at sign" +.Pq Sy @ Ns , +and can be no more than 64 characters long. +.It Xo +.Nm +.Cm unallow +.Op Fl rldug +.Cm everyone Ns | Ns Ar user Ns | Ns Ar group Ns Op , Ns Ar ... +.Op Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.It Xo +.Nm +.Cm unallow +.Op Fl rld +.Fl e +.Op Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.It Xo +.Nm +.Cm unallow +.Op Fl r +.Fl c +.Op Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.Pp +Removes permissions that were granted with the +.Qq Nm Cm allow +command. No permissions are explicitly denied, so other permissions granted are +still in effect. For example, if the permission is granted by an ancestor. If +no permissions are specified, then all permissions for the specified +.Ar user , group , No or Ar everyone +are removed. Specifying "everyone" (or using the +.Fl e +option) only removes the permissions that were granted to "everyone", +not all permissions for every user and group. See the +.Qq Nm Cm allow +command for a description of the +.Fl ldugec +options. +.Bl -tag -width indent +.It Fl r Recursively remove the permissions from this file system and all descendents. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs unallow\fR [\fB-r\fR] \fB-s\fR @\fIsetname\fR [\fIperm\fR|@\fIsetname\fR[,...]]\fR -.ad -.br -.na -\fB\fIfilesystem\fR|\fIvolume\fR\fR -.ad -.sp .6 -.RS 4n -Removes permissions from a permission set. If no permissions are specified, then all permissions are removed, thus removing the set entirely. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs hold\fR [\fB-r\fR] \fItag\fR \fIsnapshot\fR...\fR -.ad -.sp .6 -.RS 4n -Adds a single reference, named with the \fItag\fR argument, to the specified snapshot or snapshots. 
Each snapshot has its own tag namespace, and tags must be unique within that space. -.sp -If a hold exists on a snapshot, attempts to destroy that snapshot by using the \fBzfs destroy\fR command return \fBEBUSY\fR. -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Specifies that a hold with the given tag is applied recursively to the snapshots of all descendent file systems. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs holds\fR [\fB-r\fR] \fIsnapshot\fR...\fR -.ad -.sp .6 -.RS 4n +.El +.It Xo +.Nm +.Cm unallow +.Op Fl r +.Fl s +.Ar @setname +.Ar perm Ns | Ns Ar @setname Ns Op , Ns Ar ... +.Ar filesystem Ns | Ns Ar volume +.Xc +.Pp +Removes permissions from a permission set. If no permissions are specified, +then all permissions are removed, thus removing the set entirely. +.It Xo +.Nm +.Cm hold +.Op Fl r +.Ar tag snapshot ... +.Xc +.Pp +Adds a single reference, named with the +.Ar tag +argument, to the specified snapshot or snapshots. Each snapshot has its own tag +namespace, and tags must be unique within that space. +.Pp +If a hold exists on a snapshot, attempts to destroy that snapshot by using the +.Qq Nm Cm destroy +command returns +.Em EBUSY . +.Bl -tag -width indent +.It Fl r +Specifies that a hold with the given tag is applied recursively to the +snapshots of all descendent file systems. +.El +.It Xo +.Nm +.Cm holds +.Op Fl r +.Ar snapshot ... +.Xc +.Pp Lists all existing user references for the given snapshot or snapshots. -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Lists the holds that are set on the named descendent snapshots, in addition to listing the holds on the named snapshot. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs release\fR [\fB-r\fR] \fItag\fR \fIsnapshot\fR...\fR -.ad -.sp .6 -.RS 4n -Removes a single reference, named with the \fItag\fR argument, from the specified snapshot or snapshots. The tag must already exist for each snapshot. -.sp -If a hold exists on a snapshot, attempts to destroy that snapshot by using the \fBzfs destroy\fR command return \fBEBUSY\fR. -.sp -.ne 2 -.mk -.na -\fB\fB-r\fR\fR -.ad -.sp .6 -.RS 4n -Recursively releases a hold with the given tag on the snapshots of all descendent file systems. -.RE - -.RE - -\fB\fBzfs jail\fR \fIjailid\fR \fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Attaches the given file system to the given jail. From now on this file system tree can be managed from within a jail if the "\fBjailed\fR" property has been set. -To use this functionality, sysctl \fBsecurity.jail.enforce_statfs\fR should be set to 0 and sysctl \fBsecurity.jail.mount_allowed\fR should be set to 1. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzfs unjail\fR \fIjailid\fR \fIfilesystem\fR\fR -.ad -.sp .6 -.RS 4n -Detaches the given file system from the given jail. -.RE - -.SH EXAMPLES -.LP -\fBExample 1 \fRCreating a ZFS File System Hierarchy -.sp -.LP -The following commands create a file system named \fBpool/home\fR and a file system named \fBpool/home/bob\fR. The mount point \fB/export/home\fR is set for the parent file system, and is automatically inherited by the child file system. - -.sp -.in +2 -.nf -# \fBzfs create pool/home\fR -# \fBzfs set mountpoint=/export/home pool/home\fR -# \fBzfs create pool/home/bob\fR -.fi -.in -2 -.sp - -.LP -\fBExample 2 \fRCreating a ZFS Snapshot -.sp -.LP -The following command creates a snapshot named \fByesterday\fR. This snapshot is mounted on demand in the \fB\&.zfs/snapshot\fR directory at the root of the \fBpool/home/bob\fR file system. 
-
-.sp
-.in +2
-.nf
-# \fBzfs snapshot pool/home/bob@yesterday\fR
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 3 \fRCreating and Destroying Multiple Snapshots
-.sp
-.LP
-The following command creates snapshots named \fByesterday\fR of \fBpool/home\fR and all of its descendent file systems. Each snapshot is mounted on demand in the \fB\&.zfs/snapshot\fR directory at the root of its file system. The second command destroys the newly created snapshots.
-
-.sp
-.in +2
-.nf
-# \fBzfs snapshot -r pool/home@yesterday\fR
-# \fBzfs destroy -r pool/home@yesterday\fR
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 4 \fRDisabling and Enabling File System Compression
-.sp
-.LP
-The following command disables the \fBcompression\fR property for all file systems under \fBpool/home\fR. The next command explicitly enables \fBcompression\fR for \fBpool/home/anne\fR.
-
-.sp
-.in +2
-.nf
-# \fBzfs set compression=off pool/home\fR
-# \fBzfs set compression=on pool/home/anne\fR
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 5 \fRListing ZFS Datasets
-.sp
-.LP
-The following command lists all active file systems and volumes in the system. Snapshots are displayed if the \fBlistsnaps\fR property is \fBon\fR. The default is \fBoff\fR. See \fBzpool\fR(1M) for more information on pool properties.
-
-.sp
-.in +2
-.nf
-# \fBzfs list\fR
+.Bl -tag -width indent
+.It Fl r
+Lists the holds that are set on the named descendent snapshots, in addition to
+listing the holds on the named snapshot.
+.El
+.It Xo
+.Nm
+.Cm release
+.Op Fl r
+.Ar tag snapshot ...
+.Xc
+.Pp
+Removes a single reference, named with the
+.Ar tag
+argument, from the specified snapshot or snapshots. The tag must already exist
+for each snapshot.
+.Bl -tag -width indent
+.It Fl r
+Recursively releases a hold with the given tag on the snapshots of all
+descendent file systems.
+.El
+.It Xo
+.Nm
+.Cm diff
+.Op Fl FHt
+.Ar snapshot
+.Op Ar snapshot Ns | Ns Ar filesystem
+.Xc
+.Pp
+Describes differences between a snapshot and a successor dataset. The
+successor dataset can be a later snapshot or the current filesystem.
+.Pp
+The changed files are displayed including the change type. The change type
+is displayed using a single character. If a file or directory was renamed,
+the old and the new names are displayed.
+.Pp
+The following change types can be displayed:
+.Pp
+.Bl -column -offset indent "CHARACTER" "CHANGE TYPE"
+.It CHARACTER Ta CHANGE TYPE
+.It \&+ Ta file was added
+.It \&- Ta file was removed
+.It \&M Ta file was modified
+.It \&R Ta file was renamed
+.El
+.Bl -tag -width indent
+.It Fl F
+Display a single letter for the file type in the second-to-last column.
+.Pp
+The following file types can be displayed:
+.Pp
+.Bl -column -offset indent "CHARACTER" "FILE TYPE"
+.It CHARACTER Ta FILE TYPE
+.It \&F Ta file
+.It \&/ Ta directory
+.It \&B Ta block device
+.It \&@ Ta symbolic link
+.It \&= Ta socket
+.It \&> Ta door (not supported on Fx Ns )
+.It \&| Ta FIFO (not supported on Fx Ns )
+.It \&P Ta event portal (not supported on Fx Ns )
+.El
+.It Fl H
+Machine-parseable output; fields are separated by a tab character.
+.It Fl t
+Display a change timestamp in the first column.
+.El
+.It Xo
+.Nm
+.Cm jail
+.Ar jailid filesystem
+.Xc
+.Pp
+Attaches the specified
+.Ar filesystem
+to the jail identified by JID
+.Ar jailid .
+From now on this file system tree can be managed from within a jail if the
+.Sy jailed
+property has been set. To use this functionality, the jail needs the
+.Va enforce_statfs
+parameter set to
+.Sy 0
+and the
+.Va allow.mount
+parameter set to
+.Sy 1 .
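Hedged illustrations of the diff and jail subcommands documented above; the dataset names, snapshot names, and jail ID are hypothetical:

    # zfs diff -FHt tank/home/bob@monday tank/home/bob
    # zfs set jailed=on tank/jails/www
    # zfs jail 12 tank/jails/www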
+.Pp +See +.Xr jail 8 +for more information on managing jails and configuring the parameters above. +.It Xo +.Nm +.Cm unjail +.Ar jailid filesystem +.Xc +.Pp +Detaches the specified +.Ar filesystem +from the jail identified by JID +.Ar jailid . +.El +.Sh EXAMPLES +.Bl -tag -width 0n +.It Sy Example 1 No Creating a Tn ZFS No File System Hierarchy +.Pp +The following commands create a file system named +.Em pool/home +and a file system named +.Em pool/home/bob . +The mount point +.Pa /home +is set for the parent file system, and is automatically inherited by the child +file system. +.Bd -literal -offset 2n +.Li # Ic zfs create pool/home +.Li # Ic zfs set mountpoint=/home pool/home +.Li # Ic zfs create pool/home/bob +.Ed +.It Sy Example 2 No Creating a Tn ZFS No Snapshot +.Pp +The following command creates a snapshot named +.Sy yesterday . +This snapshot is mounted on demand in the +.Pa \&.zfs/snapshot +directory at the root of the +.Em pool/home/bob +file system. +.Bd -literal -offset 2n +.Li # Ic zfs snapshot pool/home/bob@yesterday +.Ed +.It Sy Example 3 No Creating and Destroying Multiple Snapshots +.Pp +The following command creates snapshots named +.Em yesterday +of +.Em pool/home +and all of its descendent file systems. Each snapshot is mounted on demand in +the +.Pa \&.zfs/snapshot +directory at the root of its file system. The second command destroys the newly +created snapshots. +.Bd -literal -offset 2n +.Li # Ic zfs snapshot -r pool/home@yesterday +.Li # Ic zfs destroy -r pool/home@yesterday +.Ed +.It Sy Example 4 No Disabling and Enabling File System Compression +.Pp +The following command disables the +.Sy compression +property for all file systems under +.Em pool/home . +The next command explicitly enables +.Sy compression +for +.Em pool/home/anne . +.Bd -literal -offset 2n +.Li # Ic zfs set compression=off pool/home +.Li # Ic zfs set compression=on pool/home/anne +.Ed +.It Sy Example 5 No Listing Tn ZFS No Datasets +.Pp +The following command lists all active file systems and volumes in the system. +Snapshots are displayed if the +.Sy listsnaps +property is +.Cm on . +The default is +.Cm off . +See +.Xr zpool 8 +for more information on pool properties. +.Bd -literal -offset 2n +.Li # Ic zfs list NAME USED AVAIL REFER MOUNTPOINT pool 450K 457G 18K /pool - pool/home 315K 457G 21K /export/home - pool/home/anne 18K 457G 18K /export/home/anne - pool/home/bob 276K 457G 276K /export/home/bob -.fi -.in -2 -.sp - -.LP -\fBExample 6 \fRSetting a Quota on a ZFS File System -.sp -.LP -The following command sets a quota of 50 Gbytes for \fBpool/home/bob\fR. - -.sp -.in +2 -.nf -# \fBzfs set quota=50G pool/home/bob\fR -.fi -.in -2 -.sp - -.LP -\fBExample 7 \fRListing ZFS Properties -.sp -.LP -The following command lists all properties for \fBpool/home/bob\fR. - -.sp -.in +2 -.nf -# \fBzfs get all pool/home/bob\fR + pool/home 315K 457G 21K /home + pool/home/anne 18K 457G 18K /home/anne + pool/home/bob 276K 457G 276K /home/bob +.Ed +.It Sy Example 6 No Setting a Quota on a Tn ZFS No File System +.Pp +The following command sets a quota of 50 Gbytes for +.Em pool/home/bob . +.Bd -literal -offset 2n +.Li # Ic zfs set quota=50G pool/home/bob +.Ed +.It Sy Example 7 No Listing Tn ZFS No Properties +.Pp +The following command lists all properties for +.Em pool/home/bob . 
+.Bd -literal -offset 2n +.Li # Ic zfs get all pool/home/bob NAME PROPERTY VALUE SOURCE pool/home/bob type filesystem - pool/home/bob creation Tue Jul 21 15:53 2009 - @@ -2707,7 +2732,7 @@ pool/home/bob mounted yes - pool/home/bob quota 20G local pool/home/bob reservation none default pool/home/bob recordsize 128K default -pool/home/bob mountpoint /pool/home/bob default +pool/home/bob mountpoint /home/bob default pool/home/bob sharenfs off default pool/home/bob checksum on default pool/home/bob compression on local @@ -2716,15 +2741,14 @@ pool/home/bob devices on default pool/home/bob exec on default pool/home/bob setuid on default pool/home/bob readonly off default -pool/home/bob zoned off default +pool/home/bob jailed off default pool/home/bob snapdir hidden default pool/home/bob aclmode discard default pool/home/bob aclinherit restricted default pool/home/bob canmount on default -pool/home/bob shareiscsi off default pool/home/bob xattr on default pool/home/bob copies 1 default -pool/home/bob version 4 - +pool/home/bob version 5 - pool/home/bob utf8only off - pool/home/bob normalization none - pool/home/bob casesensitivity sensitive - @@ -2739,276 +2763,238 @@ pool/home/bob usedbysnapshots 0 - pool/home/bob usedbydataset 21K - pool/home/bob usedbychildren 0 - pool/home/bob usedbyrefreservation 0 - -.fi -.in -2 -.sp - -.sp -.LP +pool/home/bob logbias latency default +pool/home/bob dedup off default +pool/home/bob mlslabel - +pool/home/bob sync standard default +pool/home/bob refcompressratio 1.00x - +.Ed +.Pp The following command gets a single property value. - -.sp -.in +2 -.nf -# \fBzfs get -H -o value compression pool/home/bob\fR +.Bd -literal -offset 2n +.Li # Ic zfs get -H -o value compression pool/home/bob on -.fi -.in -2 -.sp - -.sp -.LP -The following command lists all properties with local settings for \fBpool/home/bob\fR. - -.sp -.in +2 -.nf -# \fBzfs get -r -s local -o name,property,value all pool/home/bob\fR +.Ed +.Pp +The following command lists all properties with local settings for +.Em pool/home/bob . +.Bd -literal -offset 2n +.Li # Ic zfs get -s local -o name,property,value all pool/home/bob NAME PROPERTY VALUE pool/home/bob quota 20G pool/home/bob compression on -.fi -.in -2 -.sp - -.LP -\fBExample 8 \fRRolling Back a ZFS File System -.sp -.LP -The following command reverts the contents of \fBpool/home/anne\fR to the snapshot named \fByesterday\fR, deleting all intermediate snapshots. - -.sp -.in +2 -.nf -# \fBzfs rollback -r pool/home/anne@yesterday\fR -.fi -.in -2 -.sp - -.LP -\fBExample 9 \fRCreating a ZFS Clone -.sp -.LP -The following command creates a writable file system whose initial contents are the same as \fBpool/home/bob@yesterday\fR. 
- -.sp -.in +2 -.nf -# \fBzfs clone pool/home/bob@yesterday pool/clone\fR -.fi -.in -2 -.sp - -.LP -\fBExample 10 \fRPromoting a ZFS Clone -.sp -.LP -The following commands illustrate how to test out changes to a file system, and then replace the original file system with the changed one, using clones, clone promotion, and renaming: - -.sp -.in +2 -.nf -# \fBzfs create pool/project/production\fR - populate /pool/project/production with data -# \fBzfs snapshot pool/project/production@today\fR -# \fBzfs clone pool/project/production@today pool/project/beta\fR -make changes to /pool/project/beta and test them -# \fBzfs promote pool/project/beta\fR -# \fBzfs rename pool/project/production pool/project/legacy\fR -# \fBzfs rename pool/project/beta pool/project/production\fR -once the legacy version is no longer needed, it can be destroyed -# \fBzfs destroy pool/project/legacy\fR -.fi -.in -2 -.sp - -.LP -\fBExample 11 \fRInheriting ZFS Properties -.sp -.LP -The following command causes \fBpool/home/bob\fR and \fBpool/home/anne\fR to inherit the \fBchecksum\fR property from their parent. - -.sp -.in +2 -.nf -# \fBzfs inherit checksum pool/home/bob pool/home/anne\fR -.fi -.in -2 -.sp - -.LP -\fBExample 12 \fRRemotely Replicating ZFS Data -.sp -.LP -The following commands send a full stream and then an incremental stream to a remote machine, restoring them into \fBpoolB/received/fs@a\fRand \fBpoolB/received/fs@b\fR, respectively. \fBpoolB\fR must contain the file system \fBpoolB/received\fR, and must not initially contain \fBpoolB/received/fs\fR. - -.sp -.in +2 -.nf -# \fBzfs send pool/fs@a | \e\fR - \fBssh host zfs receive poolB/received/fs@a\fR -# \fBzfs send -i a pool/fs@b | ssh host \e\fR - \fBzfs receive poolB/received/fs\fR -.fi -.in -2 -.sp - -.LP -\fBExample 13 \fRUsing the \fBzfs receive\fR \fB-d\fR Option -.sp -.LP -The following command sends a full stream of \fBpoolA/fsA/fsB@snap\fR to a remote machine, receiving it into \fBpoolB/received/fsA/fsB@snap\fR. The \fBfsA/fsB@snap\fR portion of the received snapshot's name is determined from the name of the sent snapshot. \fBpoolB\fR must contain the file system \fBpoolB/received\fR. If \fBpoolB/received/fsA\fR does not exist, it is created as an empty file system. - -.sp -.in +2 -.nf -# \fBzfs send poolA/fsA/fsB@snap | \e - ssh host zfs receive -d poolB/received\fR -.fi -.in -2 -.sp - -.LP -\fBExample 14 \fRSetting User Properties -.sp -.LP -The following example sets the user-defined \fBcom.example:department\fR property for a dataset. - -.sp -.in +2 -.nf -# \fBzfs set com.example:department=12345 tank/accounting\fR -.fi -.in -2 -.sp - -.LP -\fBExample 15 \fRCreating a ZFS Volume as an iSCSI Target Device -.sp -.LP -The following example shows how to create a \fBZFS\fR volume as an \fBiSCSI\fR target. - -.sp -.in +2 -.nf -# \fBzfs create -V 2g pool/volumes/vol1\fR -# \fBzfs set shareiscsi=on pool/volumes/vol1\fR -# \fBiscsitadm list target\fR -Target: pool/volumes/vol1 - iSCSI Name: - iqn.1986-03.com.sun:02:7b4b02a6-3277-eb1b-e686-a24762c52a8c - Connections: 0 -.fi -.in -2 -.sp - -.sp -.LP -After the \fBiSCSI\fR target is created, set up the \fBiSCSI\fR initiator. For more information about the Solaris \fBiSCSI\fR initiator, see \fBiscsitadm\fR(1M). -.LP -\fBExample 16 \fRPerforming a Rolling Snapshot -.sp -.LP -The following example shows how to maintain a history of snapshots with a consistent naming scheme. 
To keep a week's worth of snapshots, the user destroys the oldest snapshot, renames the remaining snapshots, and then creates a new snapshot, as follows: - -.sp -.in +2 -.nf -# \fBzfs destroy -r pool/users@7daysago\fR -# \fBzfs rename -r pool/users@6daysago @7daysago\fR -# \fBzfs rename -r pool/users@5daysago @6daysago\fR -# \fBzfs rename -r pool/users@yesterday @5daysago\fR -# \fBzfs rename -r pool/users@yesterday @4daysago\fR -# \fBzfs rename -r pool/users@yesterday @3daysago\fR -# \fBzfs rename -r pool/users@yesterday @2daysago\fR -# \fBzfs rename -r pool/users@today @yesterday\fR -# \fBzfs snapshot -r pool/users@today\fR -.fi -.in -2 -.sp - -.LP -\fBExample 17 \fRSetting \fBsharenfs\fR Property Options on a ZFS File System -.sp -.LP -The following commands show how to set \fBsharenfs\fR property options to enable \fBrw\fR access for a set of \fBIP\fR addresses and to enable root access for system \fBneo\fR on the \fBtank/home\fR file system. - -.sp -.in +2 -.nf -# \fB# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home\fR -.fi -.in -2 -.sp - -.sp -.LP -If you are using \fBDNS\fR for host name resolution, specify the fully qualified hostname. - -.LP -\fBExample 18 \fRDelegating ZFS Administration Permissions on a ZFS Dataset -.sp -.LP -The following example shows how to set permissions so that user \fBcindys\fR can create, destroy, mount, and take snapshots on \fBtank/cindys\fR. The permissions on \fBtank/cindys\fR are also displayed. - -.sp -.in +2 -.nf -# \fBzfs allow cindys create,destroy,mount,snapshot tank/cindys\fR -# \fBzfs allow tank/cindys\fR +.Ed +.It Sy Example 8 No Rolling Back a Tn ZFS No File System +.Pp +The following command reverts the contents of +.Em pool/home/anne +to the snapshot named +.Em yesterday , +deleting all intermediate snapshots. +.Bd -literal -offset 2n +.Li # Ic zfs rollback -r pool/home/anne@yesterday +.Ed +.It Sy Example 9 No Creating a Tn ZFS No Clone +.Pp +The following command creates a writable file system whose initial contents are +the same as +.Em pool/home/bob@yesterday . +.Bd -literal -offset 2n +.Li # Ic zfs clone pool/home/bob@yesterday pool/clone +.Ed +.It Sy Example 10 No Promoting a Tn ZFS No Clone +.Pp +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -offset 2n +.Li # Ic zfs create pool/project/production +.Ed +.Pp +Populate +.Pa /pool/project/production +with data and continue with the following commands: +.Bd -literal -offset 2n +.Li # Ic zfs snapshot pool/project/production@today +.Li # Ic zfs clone pool/project/production@today pool/project/beta +.Ed +.Pp +Now make changes to +.Pa /pool/project/beta +and continue with the following commands: +.Bd -literal -offset 2n +.Li # Ic zfs promote pool/project/beta +.Li # Ic zfs rename pool/project/production pool/project/legacy +.Li # Ic zfs rename pool/project/beta pool/project/production +.Ed +.Pp +Once the legacy version is no longer needed, it can be destroyed. +.Bd -literal -offset 2n +.Li # Ic zfs destroy pool/project/legacy +.Ed +.It Sy Example 11 No Inheriting Tn ZFS No Properties +.Pp +The following command causes +.Em pool/home/bob +and +.Em pool/home/anne +to inherit the +.Sy checksum +property from their parent. 
+.Bd -literal -offset 2n
+.Li # Ic zfs inherit checksum pool/home/bob pool/home/anne
+.Ed
+.It Sy Example 12 No Remotely Replicating Tn ZFS No Data
+.Pp
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Sy poolB/received/fs@a
+and
+.Sy poolB/received/fs@b ,
+respectively.
+.Sy poolB
+must contain the file system
+.Sy poolB/received ,
+and must not initially contain
+.Sy poolB/received/fs .
+.Bd -literal -offset 2n
+.Li # Ic zfs send pool/fs@a | ssh host zfs receive poolB/received/fs@a
+.Li # Ic zfs send -i a pool/fs@b | ssh host zfs receive poolB/received/fs
+.Ed
+.It Xo
+.Sy Example 13
+Using the
+.Qq zfs receive -d
+Option
+.Xc
+.Pp
+The following command sends a full stream of
+.Sy poolA/fsA/fsB@snap
+to a remote machine, receiving it into
+.Sy poolB/received/fsA/fsB@snap .
+The
+.Sy fsA/fsB@snap
+portion of the received snapshot's name is determined from the name of the sent
+snapshot.
+.Sy poolB
+must contain the file system
+.Sy poolB/received .
+If
+.Sy poolB/received/fsA
+does not exist, it is created as an empty file system.
+.Bd -literal -offset 2n
+.Li # Ic zfs send poolA/fsA/fsB@snap | ssh host zfs receive -d poolB/received
+.Ed
+.It Sy Example 14 No Setting User Properties
+.Pp
+The following example sets the user-defined
+.Sy com.example:department
+property for a dataset.
+.Bd -literal -offset 2n
+.Li # Ic zfs set com.example:department=12345 tank/accounting
+.Ed
+.It Sy Example 15 No Performing a Rolling Snapshot
+.Pp
+The following example shows how to maintain a history of snapshots with a
+consistent naming scheme. To keep a week's worth of snapshots, the user
+destroys the oldest snapshot, renames the remaining snapshots, and then creates
+a new snapshot, as follows:
+.Bd -literal -offset 2n
+.Li # Ic zfs destroy -r pool/users@7daysago
+.Li # Ic zfs rename -r pool/users@6daysago @7daysago
+.Li # Ic zfs rename -r pool/users@5daysago @6daysago
+.Li # Ic zfs rename -r pool/users@4daysago @5daysago
+.Li # Ic zfs rename -r pool/users@3daysago @4daysago
+.Li # Ic zfs rename -r pool/users@2daysago @3daysago
+.Li # Ic zfs rename -r pool/users@yesterday @2daysago
+.Li # Ic zfs rename -r pool/users@today @yesterday
+.Li # Ic zfs snapshot -r pool/users@today
+.Ed
+.It Xo
+.Sy Example 16
+Setting
+.Qq sharenfs
+Property Options on a ZFS File System
+.Xc
+.Pp
+The following command shows how to set
+.Sy sharenfs
+property options to enable root access for a specific network on the
+.Em tank/home
+file system. The contents of the
+.Sy sharenfs
+property are valid
+.Xr exports 5
+options.
+.Bd -literal -offset 2n
+.Li # Ic zfs set sharenfs="maproot=root,network 192.168.0.0/24" tank/home
+.Ed
+.Pp
+Another way to write this command with the same result is:
+.Bd -literal -offset 2n
+.Li # Ic zfs set sharenfs="-maproot=root -network 192.168.0.0/24" tank/home
+.Ed
+.It Xo
+.Sy Example 17
+Delegating
+.Tn ZFS
+Administration Permissions on a
+.Tn ZFS
+Dataset
+.Xc
+.Pp
+The following example shows how to set permissions so that user
+.Em cindys
+can create, destroy, mount, and take snapshots on
+.Em tank/cindys .
+The permissions on
+.Em tank/cindys
+are also displayed.
+.Bd -literal -offset 2n +.Li # Ic zfs allow cindys create,destroy,mount,snapshot tank/cindys +.Li # Ic zfs allow tank/cindys ------------------------------------------------------------- Local+Descendent permissions on (tank/cindys) user cindys create,destroy,mount,snapshot ------------------------------------------------------------- -.fi -.in -2 -.sp - -.sp -.LP -Because the \fBtank/cindys\fR mount point permission is set to 755 by default, user \fBcindys\fR will be unable to mount file systems under \fBtank/cindys\fR. Set an \fBACL\fR similar to the following syntax to provide mount point access: -.sp -.in +2 -.nf -# \fBchmod A+user:cindys:add_subdirectory:allow /tank/cindys\fR -.fi -.in -2 -.sp - -.LP -\fBExample 19 \fRDelegating Create Time Permissions on a ZFS Dataset -.sp -.LP -The following example shows how to grant anyone in the group \fBstaff\fR to create file systems in \fBtank/users\fR. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on \fBtank/users\fR are also displayed. - -.sp -.in +2 -.nf -# \fB# zfs allow staff create,mount tank/users\fR -# \fBzfs allow -c destroy tank/users\fR -# \fBzfs allow tank/users\fR +.Ed +.It Sy Example 18 No Delegating Create Time Permissions on a Tn ZFS No Dataset +.Pp +The following example shows how to grant anyone in the group +.Em staff +to create file systems in +.Em tank/users . +This syntax also allows staff members to destroy their own file systems, but +not destroy anyone else's file system. The permissions on +.Em tank/users +are also displayed. +.Bd -literal -offset 2n +.Li # Ic zfs allow staff create,mount tank/users +.Li # Ic zfs allow -c destroy tank/users +.Li # Ic zfs allow tank/users ------------------------------------------------------------- Create time permissions on (tank/users) create,destroy Local+Descendent permissions on (tank/users) group staff create,mount -------------------------------------------------------------- -.fi -.in -2 -.sp - -.LP -\fBExample 20 \fRDefining and Granting a Permission Set on a ZFS Dataset -.sp -.LP -The following example shows how to define and grant a permission set on the \fBtank/users\fR file system. The permissions on \fBtank/users\fR are also displayed. - -.sp -.in +2 -.nf -# \fBzfs allow -s @pset create,destroy,snapshot,mount tank/users\fR -# \fBzfs allow staff @pset tank/users\fR -# \fBzfs allow tank/users\fR +------------------------------------------------------------- +.Ed +.It Xo +.Sy Example 19 +Defining and Granting a Permission Set on a +.Tn ZFS +Dataset +.Xc +.Pp +The following example shows how to define and grant a permission set on the +.Em tank/users +file system. The permissions on +.Em tank/users +are also displayed. +.Bd -literal -offset 2n +.Li # Ic zfs allow -s @pset create,destroy,snapshot,mount tank/users +.Li # Ic zfs allow staff @pset tank/users +.Li # Ic zfs allow tank/users ------------------------------------------------------------- Permission sets on (tank/users) @pset create,destroy,mount,snapshot @@ -3017,44 +3003,40 @@ Create time permissions on (tank/users) Local+Descendent permissions on (tank/users) group staff @pset,create,mount ------------------------------------------------------------- -.fi -.in -2 -.sp - -.LP -\fBExample 21 \fRDelegating Property Permissions on a ZFS Dataset -.sp -.LP -The following example shows to grant the ability to set quotas and reservations on the \fBusers/home\fR file system. The permissions on \fBusers/home\fR are also displayed. 
-
-.sp
-.in +2
-.nf
-# \fBzfs allow cindys quota,reservation users/home\fR
-# \fBzfs allow users/home\fR
+.Ed
+.It Sy Example 20 No Delegating Property Permissions on a Tn ZFS No Dataset
+.Pp
+The following example shows how to grant the ability to set quotas and reservations
+on the
+.Sy users/home
+file system. The permissions on
+.Sy users/home
+are also displayed.
+.Bd -literal -offset 2n
+.Li # Ic zfs allow cindys quota,reservation users/home
+.Li # Ic zfs allow users/home
-------------------------------------------------------------
Local+Descendent permissions on (users/home)
user cindys quota,reservation
-------------------------------------------------------------
-cindys% \fBzfs set quota=10G users/home/marks\fR
-cindys% \fBzfs get quota users/home/marks\fR
+.Li # Ic su - cindys
+.Li cindys% Ic zfs set quota=10G users/home/marks
+.Li cindys% Ic zfs get quota users/home/marks
NAME PROPERTY VALUE SOURCE
-users/home/marks quota 10G local
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 22 \fRRemoving ZFS Delegated Permissions on a ZFS Dataset
-.sp
-.LP
-The following example shows how to remove the snapshot permission from the \fBstaff\fR group on the \fBtank/users\fR file system. The permissions on \fBtank/users\fR are also displayed.
-
-.sp
-.in +2
-.nf
-# \fBzfs unallow staff snapshot tank/users\fR
-# \fBzfs allow tank/users\fR
+users/home/marks quota 10G local
+.Ed
+.It Sy Example 21 No Removing ZFS Delegated Permissions on a Tn ZFS No Dataset
+.Pp
+The following example shows how to remove the snapshot permission from the
+.Em staff
+group on the
+.Em tank/users
+file system. The permissions on
+.Em tank/users
+are also displayed.
+.Bd -literal -offset 2n
+.Li # Ic zfs unallow staff snapshot tank/users
+.Li # Ic zfs allow tank/users
-------------------------------------------------------------
Permission sets on (tank/users)
@pset create,destroy,mount,snapshot
@@ -3062,74 +3044,43 @@ Create time permissions on (tank/users)
create,destroy
Local+Descendent permissions on (tank/users)
group staff @pset,create,mount
---------------------------------------------------------------
-.fi
-.in -2
-.sp
-
-.SH EXIT STATUS
-.sp
-.LP
+-------------------------------------------------------------
+.Ed
+.El
+.Sh EXIT STATUS
The following exit values are returned:
-.sp
-.ne 2
-.mk
-.na
-\fB\fB0\fR\fR
-.ad
-.sp .6
-.RS 4n
-Successful completion.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fB1\fR\fR
-.ad
-.sp .6
-.RS 4n
+.Bl -tag -offset 2n -width 2n
+.It 0
+Successful completion.
+.It 1
An error occurred.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fB2\fR\fR
-.ad
-.sp .6
-.RS 4n
+.It 2
Invalid command line options were specified.
-.RE
-
-.SH ATTRIBUTES
-.sp
-.LP
-See \fBattributes\fR(5) for descriptions of the following attributes:
-.sp
-
-.sp
-.TS
-tab() box;
-cw(2.75i) |cw(2.75i)
-lw(2.75i) |lw(2.75i)
-.
-ATTRIBUTE TYPEATTRIBUTE VALUE
-_
-AvailabilitySUNWzfsu
-_
-Interface StabilityCommitted
-.TE
-
-.SH SEE ALSO
-.sp
-.LP
-\fBssh\fR(1), \fBiscsitadm\fR(1M), \fBmount\fR(1M), \fBshare\fR(1M), \fBsharemgr\fR(1M), \fBunshare\fR(1M), \fBzonecfg\fR(1M), \fBzpool\fR(1M), \fBchmod\fR(2), \fBstat\fR(2), \fBwrite\fR(2), \fBfsync\fR(3C), \fBdfstab\fR(4), \fBattributes\fR(5)
-.sp
-.LP
-See the \fBgzip\fR(1) man page, which is not part of the SunOS man page collection.
-.sp
-.LP
-For information about using the \fBZFS\fR web-based management tool and other \fBZFS\fR features, see the \fISolaris ZFS Administration Guide\fR.
+.El +.Sh SEE ALSO +.Xr chmod 2 , +.Xr fsync 2 , +.Xr exports 5 , +.Xr fstab 5 , +.Xr rc.conf 5 , +.Xr jail 8 , +.Xr mount 8 , +.Xr umount 8 , +.Xr zpool 8 +.Sh AUTHORS +This manual page is a +.Xr mdoc 7 +reimplementation of the +.Tn OpenSolaris +manual page +.Em zfs(1M) , +modified and customized for +.Fx +and licensed under the +Common Development and Distribution License +.Pq Tn CDDL . +.Pp +The +.Xr mdoc 7 +implementation of this manual page was initially written by +.An Martin Matuska Aq mm@FreeBSD.org . diff --git a/cddl/contrib/opensolaris/cmd/zpool/zpool.8 b/cddl/contrib/opensolaris/cmd/zpool/zpool.8 index c700b7f..40c09de 100644 --- a/cddl/contrib/opensolaris/cmd/zpool/zpool.8 +++ b/cddl/contrib/opensolaris/cmd/zpool/zpool.8 @@ -1,1613 +1,1708 @@ '\" te -.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. -.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. -.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the -.\" fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner] -.TH zpool 1M "21 Sep 2009" "SunOS 5.11" "System Administration Commands" -.SH NAME -zpool \- configures ZFS storage pools -.SH SYNOPSIS -.LP -.nf -\fBzpool\fR [\fB-?\fR] -.fi - -.LP -.nf -\fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ... -.fi - -.LP -.nf -\fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR -.fi - -.LP -.nf -\fBzpool clear\fR \fIpool\fR [\fIdevice\fR] -.fi - -.LP -.nf -\fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR] - ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR \fIvdev\fR ... -.fi - -.LP -.nf -\fBzpool destroy\fR [\fB-f\fR] \fIpool\fR -.fi - -.LP -.nf -\fBzpool detach\fR \fIpool\fR \fIdevice\fR -.fi - -.LP -.nf -\fBzpool export\fR [\fB-f\fR] \fIpool\fR ... -.fi - -.LP -.nf -\fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ... -.fi - -.LP -.nf -\fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ... -.fi - -.LP -.nf -\fBzpool import\fR [\fB-d\fR \fIdir\fR] [\fB-D\fR] -.fi - -.LP -.nf -\fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] - [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR -.fi - -.LP -.nf -\fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] - [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR |\fIid\fR [\fInewpool\fR] -.fi - -.LP -.nf -\fBzpool iostat\fR [\fB-T\fR u | d ] [\fB-v\fR] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]] -.fi - -.LP -.nf -\fBzpool labelclear\fR [\fB-f\fR] \fIdevice\fR -.fi - -.LP -.nf -\fBzpool list\fR [\fB-H\fR] [\fB-o\fR \fIproperty\fR[,...]] [\fIpool\fR] ... -.fi - -.LP -.nf -\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ... -.fi - -.LP -.nf -\fBzpool online\fR \fIpool\fR \fIdevice\fR ... -.fi - -.LP -.nf -\fBzpool remove\fR \fIpool\fR \fIdevice\fR ... 
-.fi - -.LP -.nf -\fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR [\fInew_device\fR] -.fi - -.LP -.nf -\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ... -.fi - -.LP -.nf -\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR -.fi - -.LP -.nf -\fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ... -.fi - -.LP -.nf -\fBzpool upgrade\fR -.fi - -.LP -.nf -\fBzpool upgrade\fR \fB-v\fR -.fi - -.LP -.nf -\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ... -.fi - -.SH DESCRIPTION -.sp -.LP -The \fBzpool\fR command configures \fBZFS\fR storage pools. A storage pool is a collection of devices that provides physical storage and data replication for \fBZFS\fR datasets. -.sp -.LP -All datasets within a storage pool share the same space. See \fBzfs\fR(1M) for information on managing datasets. -.SS "Virtual Devices (\fBvdev\fRs)" -.sp -.LP -A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported: -.sp -.ne 2 -.mk -.na -\fB\fBdisk\fR\fR -.ad -.RS 10n -.rt -A block device, typically located under \fB/dev/dsk\fR. \fBZFS\fR can use individual slices or partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under "/dev/dsk"). A whole disk can be specified by omitting the slice or partition designation. For example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole disk, \fBZFS\fR automatically labels the disk, if necessary. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBfile\fR\fR -.ad -.RS 10n -.rt -A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBmirror\fR\fR -.ad -.RS 10n -.rt -A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with \fIN\fR disks of size \fIX\fR can hold \fIX\fR bytes and can withstand (\fIN-1\fR) devices failing before data integrity is compromised. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBraidz\fR\fR -.ad -.br -.na -\fB\fBraidz1\fR\fR -.ad -.br -.na -\fB\fBraidz2\fR\fR -.ad -.br -.na -\fB\fBraidz3\fR\fR -.ad -.RS 10n -.rt -A variation on \fBRAID-5\fR that allows for better distribution of parity and eliminates the "\fBRAID-5\fR write hole" (in which data and parity become inconsistent after a power loss). Data and parity is striped across all disks within a \fBraidz\fR group. -.sp -A \fBraidz\fR group can have single-, double- , or triple parity, meaning that the \fBraidz\fR group can sustain one, two, or three failures, respectively, without losing any data. The \fBraidz1\fR \fBvdev\fR type specifies a single-parity \fBraidz\fR group; the \fBraidz2\fR \fBvdev\fR type specifies a double-parity \fBraidz\fR group; and the \fBraidz3\fR \fBvdev\fR type specifies a triple-parity \fBraidz\fR group. The \fBraidz\fR \fBvdev\fR type is an alias for \fBraidz1\fR. -.sp -A \fBraidz\fR group with \fIN\fR disks of size \fIX\fR with \fIP\fR parity disks can hold approximately (\fIN-P\fR)*\fIX\fR bytes and can withstand \fIP\fR device(s) failing before data integrity is compromised. The minimum number of devices in a \fBraidz\fR group is one more than the number of parity disks. 
The recommended number is between 3 and 9 to help increase performance. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBspare\fR\fR -.ad -.RS 10n -.rt -A special pseudo-\fBvdev\fR which keeps track of available hot spares for a pool. For more information, see the "Hot Spares" section. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBlog\fR\fR -.ad -.RS 10n -.rt -A separate-intent log device. If more than one log device is specified, then writes are load-balanced between devices. Log devices can be mirrored. However, \fBraidz\fR \fBvdev\fR types are not supported for the intent log. For more information, see the "Intent Log" section. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcache\fR\fR -.ad -.RS 10n -.rt -A device used to cache storage pool data. A cache device cannot be cannot be configured as a mirror or \fBraidz\fR group. For more information, see the "Cache Devices" section. -.RE - -.sp -.LP -Virtual devices cannot be nested, so a mirror or \fBraidz\fR virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed. -.sp -.LP -A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, \fBZFS\fR automatically places data on the newly available devices. -.sp -.LP -Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords "mirror" and "raidz" are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks: -.sp -.in +2 -.nf -# \fBzpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0\fR -.fi -.in -2 -.sp - -.SS "Device Failure and Recovery" -.sp -.LP -\fBZFS\fR supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and \fBZFS\fR automatically repairs bad data from a good copy when corruption is detected. -.sp -.LP -In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable. -.sp -.LP -A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning. -.sp -.LP -The health of the top-level vdev, such as mirror or \fBraidz\fR device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states: -.sp -.ne 2 -.mk -.na -\fB\fBDEGRADED\fR\fR -.ad -.RS 12n -.rt -One or more top-level vdevs is in the degraded state because one or more component devices are offline. Sufficient replicas exist to continue functioning. -.sp -One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning. 
The underlying conditions are as follows: -.RS +4 -.TP -.ie t \(bu -.el o -The number of checksum errors exceeds acceptable levels and the device is degraded as an indication that something may be wrong. \fBZFS\fR continues to use the device as necessary. -.RE -.RS +4 -.TP -.ie t \(bu -.el o -The number of I/O errors exceeds acceptable levels. The device could not be marked as faulted because there are insufficient replicas to continue functioning. -.RE -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBFAULTED\fR\fR -.ad -.RS 12n -.rt -One or more top-level vdevs is in the faulted state because one or more component devices are offline. Insufficient replicas exist to continue functioning. -.sp -One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning. The underlying conditions are as follows: -.RS +4 -.TP -.ie t \(bu -.el o -The device could be opened, but the contents did not match expected values. -.RE -.RS +4 -.TP -.ie t \(bu -.el o -The number of I/O errors exceeds acceptable levels and the device is faulted to prevent further use of the device. -.RE -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBOFFLINE\fR\fR -.ad -.RS 12n -.rt -The device was explicitly taken offline by the "\fBzpool offline\fR" command. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBONLINE\fR\fR -.ad -.RS 12n -.rt +.\" Copyright (c) 2011, Martin Matuska . +.\" All Rights Reserved. +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or http://www.opensolaris.org/os/licensing. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" Copyright (c) 2010, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011, Nexenta Systems, Inc. All Rights Reserved. +.\" Copyright (c) 2011, Justin T. Gibbs +.\" +.\" $FreeBSD$ +.\" +.Dd November 26, 2011 +.Dt ZPOOL 8 +.Os +.Sh NAME +.Nm zpool +.Nd configures ZFS storage pools +.Sh SYNOPSIS +.Nm +.Op Fl \&? +.Nm +.Cm add +.Op Fl fn +.Ar pool vdev ... +.Nm +.Cm attach +.Op Fl f +.Ar pool device new_device +.Nm +.Cm clear +.Op Fl F Op Fl n +.Ar pool +.Op Ar device +.Nm +.Cm create +.Op Fl fn +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Op Fl O Ar file-system-property Ns = Ns Ar value +.Ar ... +.Op Fl m Ar mountpoint +.Op Fl R Ar root +.Ar pool vdev ... +.Nm +.Cm destroy +.Op Fl f +.Ar pool +.Nm +.Cm detach +.Ar pool device +.Nm +.Cm export +.Op Fl f +.Ar pool ... +.Nm +.Cm get +.Ar all | property Ns Op , Ns Ar ... +.Ar pool ... +.Nm +.Cm history +.Op Fl il +.Op Ar pool +.Ar ... +.Nm +.Cm import +.Op Fl d Ar dir | Fl c Ar cachefile +.Op Fl D +.Nm +.Cm import +.Op Fl o Ar mntopts +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Op Fl d Ar dir | Fl c Ar cachefile +.Op Fl D +.Op Fl f +.Op Fl m +.Op Fl N +.Op Fl R Ar root +.Op Fl F Op Fl n +.Fl a +.Nm +.Cm import +.Op Fl o Ar mntopts +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... 
+.Op Fl d Ar dir | Fl c Ar cachefile
+.Op Fl D
+.Op Fl f
+.Op Fl m
+.Op Fl N
+.Op Fl R Ar root
+.Op Fl F Op Fl n
+.Ar pool | id
+.Op Ar newpool
+.Nm
+.Cm iostat
+.Op Fl T Cm d Ns | Ns Cm u
+.Op Fl v
+.Op Ar pool
+.Ar ...
+.Nm
+.Cm labelclear
+.Op Fl f
+.Ar device
+.Nm
+.Cm list
+.Op Fl H
+.Op Fl o Ar property Ns Op , Ns Ar ...
+.Op Fl T Cm d Ns | Ns Cm u
+.Op Ar pool
+.Ar ...
+.Op Ar interval Op Ar count
+.Nm
+.Cm offline
+.Op Fl t
+.Ar pool device ...
+.Nm
+.Cm online
+.Op Fl e
+.Ar pool device ...
+.Nm
+.Cm remove
+.Ar pool device ...
+.Nm
+.Cm replace
+.Op Fl f
+.Ar pool device
+.Op Ar new_device
+.Nm
+.Cm scrub
+.Op Fl s
+.Ar pool ...
+.Nm
+.Cm set
+.Ar property Ns = Ns Ar value pool
+.Nm
+.Cm split
+.Op Fl n
+.Op Fl R Ar altroot
+.Op Fl o Ar mntopts
+.Op Fl o Ar property Ns = Ns Ar value
+.Ar pool newpool
+.Op Ar device ...
+.Nm
+.Cm status
+.Op Fl vx
+.Op Fl T Cm d Ns | Ns Cm u
+.Op Ar pool
+.Ar ...
+.Op Ar interval Op Ar count
+.Nm
+.Cm upgrade
+.Op Fl v
+.Nm
+.Cm upgrade
+.Op Fl V Ar version
+.Fl a | Ar pool ...
+.Sh DESCRIPTION
+The
+.Nm
+command configures
+.Tn ZFS
+storage pools. A storage pool is a collection of devices that provides physical
+storage and data replication for
+.Tn ZFS
+datasets.
+.Pp
+All datasets within a storage pool share the same space. See
+.Xr zfs 8
+for information on managing datasets.
+.Ss Virtual Devices (vdevs)
+A
+.Qq virtual device
+.Pq No vdev
+describes a single device or a collection of devices organized according to
+certain performance and fault characteristics. The following virtual devices
+are supported:
+.Bl -tag
+.It Sy disk
+A block device, typically located under
+.Pa /dev Ns .
+.Tn ZFS
+can use individual slices or partitions, though the recommended mode of
+operation is to use whole disks. A disk can be specified by a full path to the
+device or the
+.Xr geom 4
+provider name. When given a whole disk,
+.Tn ZFS
+automatically labels the disk, if necessary.
+.It Sy file
+A regular file. The use of files as a backing store is strongly discouraged. It
+is designed primarily for experimental purposes, as the fault tolerance of a
+file is only as good as the file system of which it is a part. A file must be
+specified by a full path.
+.It Sy mirror
+A mirror of two or more devices. Data is replicated in an identical fashion
+across all components of a mirror. A mirror with
+.Em N
+disks of size
+.Em X
+can hold
+.Em X
+bytes and can withstand
+.Pq Em N-1
+devices failing before data integrity is compromised.
+.It Sy raidz
+.No ( or Sy raidz1 raidz2 raidz3 Ns ).
+A variation on
+.Sy RAID-5
+that allows for better distribution of parity and eliminates the
+.Qq Sy RAID-5 No write hole
+(in which data and parity become inconsistent after a power loss). Data and
+parity is striped across all disks within a
+.No raidz
+group.
+.Pp
+A
+.No raidz
+group can have single-, double-, or triple-parity, meaning that the
+.No raidz
+group can sustain one, two, or three failures, respectively, without
+losing any data. The
+.Sy raidz1 No vdev
+type specifies a single-parity
+.No raidz
+group; the
+.Sy raidz2 No vdev
+type specifies a double-parity
+.No raidz
+group; and the
+.Sy raidz3 No vdev
+type specifies a triple-parity
+.No raidz
+group. The
+.Sy raidz No vdev
+type is an alias for
+.Sy raidz1 Ns .
+.Pp
+A
+.No raidz
+group with
+.Em N
+disks of size
+.Em X
+with
+.Em P
+parity disks can hold approximately
+.Sm off
+.Pq Em N-P
+*X
+.Sm on
+bytes and can withstand
+.Em P
+device(s) failing before data integrity is compromised. 
The minimum number of
+devices in a
+.No raidz
+group is one more than the number of parity disks. The
+recommended number is between 3 and 9 to help increase performance.
+.It Sy spare
+A special
+.No pseudo- Ns No vdev
+which keeps track of available hot spares for a pool.
+For more information, see the
+.Qq Sx Hot Spares
+section.
+.It Sy log
+A separate-intent log device. If more than one log device is specified, then
+writes are load-balanced between devices. Log devices can be mirrored. However,
+.No raidz
+.No vdev
+types are not supported for the intent log. For more information,
+see the
+.Qq Sx Intent Log
+section.
+.It Sy cache
+A device used to cache storage pool data. A cache device cannot be configured
+as a mirror or
+.No raidz
+group. For more information, see the
+.Qq Sx Cache Devices
+section.
+.El
+.Pp
+Virtual devices cannot be nested, so a mirror or
+.No raidz
+virtual device can only
+contain files or disks. Mirrors of mirrors (or other combinations) are not
+allowed.
+.Pp
+A pool can have any number of virtual devices at the top of the configuration
+(known as
+.Qq root
+.No vdev Ns s).
+Data is dynamically distributed across all top-level devices to balance data
+among devices. As new virtual devices are added,
+.Tn ZFS
+automatically places data on the newly available devices.
+.Pp
+Virtual devices are specified one at a time on the command line, separated by
+whitespace. The keywords
+.Qq mirror
+and
+.Qq raidz
+are used to distinguish where a group ends and another begins. For example, the
+following creates two root
+.No vdev Ns s,
+each a mirror of two disks:
+.Bd -literal -offset 2n
+.Li # Ic zpool create mypool mirror da0 da1 mirror da2 da3
+.Ed
+.Ss Device Failure and Recovery
+.Tn ZFS
+supports a rich set of mechanisms for handling device failure and data
+corruption. All metadata and data is checksummed, and
+.Tn ZFS
+automatically repairs bad data from a good copy when corruption is detected.
+.Pp
+In order to take advantage of these features, a pool must make use of some form
+of redundancy, using either mirrored or
+.No raidz
+groups. While
+.Tn ZFS
+supports running in a non-redundant configuration, where each root
+.No vdev
+is simply a disk or file, this is strongly discouraged. A single case of bit
+corruption can render some or all of your data unavailable.
+.Pp
+A pool's health status is described by one of three states: online, degraded,
+or faulted. An online pool has all devices operating normally. A degraded pool
+is one in which one or more devices have failed, but the data is still
+available due to a redundant configuration. A faulted pool has corrupted
+metadata, or one or more faulted devices, and insufficient replicas to continue
+functioning.
+.Pp
+The health of the top-level
+.No vdev ,
+such as mirror or
+.No raidz
+device, is
+potentially impacted by the state of its associated
+.No vdev Ns s,
+or component devices. A top-level
+.No vdev
+or component device is in one of the following states:
+.Bl -tag -width "DEGRADED"
+.It Sy DEGRADED
+One or more top-level
+.No vdev Ns s
+is in the degraded state because one or more
+component devices are offline. Sufficient replicas exist to continue
+functioning.
+.Pp
+One or more component devices is in the degraded or faulted state, but
+sufficient replicas exist to continue functioning. The underlying conditions
+are as follows:
+.Bl -bullet -offset 2n
+.It
+The number of checksum errors exceeds acceptable levels and the device is
+degraded as an indication that something may be wrong.
+.Tn ZFS
+continues to use the device as necessary.
+.It +The number of +.Tn I/O +errors exceeds acceptable levels. The device could not be +marked as faulted because there are insufficient replicas to continue +functioning. +.El +.It Sy FAULTED +One or more top-level +.No vdev Ns s +is in the faulted state because one or more +component devices are offline. Insufficient replicas exist to continue +functioning. +.Pp +One or more component devices is in the faulted state, and insufficient +replicas exist to continue functioning. The underlying conditions are as +follows: +.Bl -bullet -offset 2n +.It +The device could be opened, but the contents did not match expected values. +.It +The number of +.Tn I/O +errors exceeds acceptable levels and the device is faulted to +prevent further use of the device. +.El +.It Sy OFFLINE +The device was explicitly taken offline by the +.Qq Nm Cm offline +command. +.It Sy ONLINE The device is online and functioning. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBREMOVED\fR\fR -.ad -.RS 12n -.rt -The device was physically removed while the system was running. Device removal detection is hardware-dependent and may not be supported on all platforms. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBUNAVAIL\fR\fR -.ad -.RS 12n -.rt -The device could not be opened. If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its path since the path was never correct in the first place. -.RE - -.sp -.LP -If a device is removed and later re-attached to the system, \fBZFS\fR attempts to put the device online automatically. Device attach detection is hardware-dependent and might not be supported on all platforms. -.SS "Hot Spares" -.sp -.LP -\fBZFS\fR allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" \fBvdev\fR with any number of devices. For example, -.sp -.in +2 -.nf -# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0 -.fi -.in -2 -.sp - -.sp -.LP -Spares can be shared across multiple pools, and can be added with the "\fBzpool add\fR" command and removed with the "\fBzpool remove\fR" command. Once a spare replacement is initiated, a new "spare" \fBvdev\fR is created within the configuration that will remain there until the original device is replaced. At this point, the hot spare becomes available again if another device fails. -.sp -.LP -If a pool has a shared spare that is currently being used, the pool can not be exported since other pools may use this shared spare, which may lead to potential data corruption. -.sp -.LP -An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools. -.sp -.LP +.It Sy REMOVED +The device was physically removed while the system was running. Device removal +detection is hardware-dependent and may not be supported on all platforms. +.It Sy UNAVAIL +The device could not be opened. If a pool is imported when a device was +unavailable, then the device will be identified by a unique identifier instead +of its path since the path was never correct in the first place. +.El +.Pp +If a device is removed and later reattached to the system, +.Tn ZFS +attempts to put the device online automatically. Device attach detection is +hardware-dependent and might not be supported on all platforms. 
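+.Pp
+For example, the health of all imported pools can be inspected with the
+.Qq Nm Cm status
+command described below; the device and pool states listed above appear in its
+output. The following invocation and its one-line output are illustrative only:
+.Bd -literal -offset 2n
+.Li # Ic zpool status -x
+all pools are healthy
+.Ed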
+.Ss Hot Spares +.Tn ZFS +allows devices to be associated with pools as +.Qq hot spares Ns . +These devices are not actively used in the pool, but when an active device +fails, it is automatically replaced by a hot spare. To create a pool with hot +spares, specify a +.Qq spare +.No vdev +with any number of devices. For example, +.Bd -literal -offset 2n +.Li # Ic zpool create pool mirror da0 da1 spare da2 da3 +.Ed +.Pp +Spares can be shared across multiple pools, and can be added with the +.Qq Nm Cm add +command and removed with the +.Qq Nm Cm remove +command. Once a spare replacement is initiated, a new "spare" +.No vdev +is created +within the configuration that will remain there until the original device is +replaced. At this point, the hot spare becomes available again if another +device fails. +.Pp +If a pool has a shared spare that is currently being used, the pool can not be +exported since other pools may use this shared spare, which may lead to +potential data corruption. +.Pp +An in-progress spare replacement can be cancelled by detaching the hot spare. +If the original faulted device is detached, then the hot spare assumes its +place in the configuration, and is removed from the spare list of all active +pools. +.Pp Spares cannot replace log devices. -.SS "Intent Log" -.sp -.LP -The \fBZFS\fR Intent Log (\fBZIL\fR) satisfies \fBPOSIX\fR requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. \fBNFS\fR and other applications can also use \fBfsync\fR() to ensure data stability. By default, the intent log is allocated from blocks within the main pool. However, it might be possible to get better performance using separate intent log devices such as \fBNVRAM\fR or a dedicated disk. For example: -.sp -.in +2 -.nf -\fB# zpool create pool c0d0 c1d0 log c2d0\fR -.fi -.in -2 -.sp - -.sp -.LP -Multiple log devices can also be specified, and they can be mirrored. See the EXAMPLES section for an example of mirroring multiple log devices. -.sp -.LP -Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool. Mirrored log devices can be removed by specifying the top-level mirror for the log. -.SS "Cache Devices" -.sp -.LP -Devices can be added to a storage pool as "cache devices." These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allow much more of this working set to be served from low latency media. Using cache devices provides the greatest performance improvement for random read-workloads of mostly static content. -.sp -.LP -To create a pool with cache devices, specify a "cache" \fBvdev\fR with any number of devices. For example: -.sp -.in +2 -.nf -\fB# zpool create pool c0d0 c1d0 cache c2d0 c3d0\fR -.fi -.in -2 -.sp - -.sp -.LP -Cache devices cannot be mirrored or part of a \fBraidz\fR configuration. If a read error is encountered on a cache device, that read \fBI/O\fR is reissued to the original storage pool device, which might be part of a mirrored or \fBraidz\fR configuration. -.sp -.LP -The content of the cache devices is considered volatile, as is the case with other system caches. -.SS "Properties" -.sp -.LP -Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. 
The following are read-only properties: -.sp -.ne 2 -.mk -.na -\fB\fBavailable\fR\fR -.ad -.RS 20n -.rt -Amount of storage available within the pool. This property can also be referred to by its shortened column name, "avail". -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcapacity\fR\fR -.ad -.RS 20n -.rt -Percentage of pool space used. This property can also be referred to by its shortened column name, "cap". -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBhealth\fR\fR -.ad -.RS 20n -.rt -The current health of the pool. Health can be "\fBONLINE\fR", "\fBDEGRADED\fR", "\fBFAULTED\fR", " \fBOFFLINE\fR", "\fBREMOVED\fR", or "\fBUNAVAIL\fR". -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBguid\fR\fR -.ad -.RS 20n -.rt +.Ss Intent Log +The +.Tn ZFS +Intent Log +.Pq Tn ZIL +satisfies +.Tn POSIX +requirements for synchronous transactions. For instance, databases often +require their transactions to be on stable storage devices when returning from +a system call. +.Tn NFS +and other applications can also use +.Xr fsync 2 +to ensure data stability. By default, the intent log is allocated from blocks +within the main pool. However, it might be possible to get better performance +using separate intent log devices such as +.Tn NVRAM +or a dedicated disk. For example: +.Bd -literal -offset 2n +.Li # Ic zpool create pool da0 da1 log da2 +.Ed +.Pp +Multiple log devices can also be specified, and they can be mirrored. See the +.Sx EXAMPLES +section for an example of mirroring multiple log devices. +.Pp +Log devices can be added, replaced, attached, detached, imported and exported +as part of the larger pool. Mirrored log devices can be removed by specifying +the top-level mirror for the log. +.Ss Cache devices +Devices can be added to a storage pool as "cache devices." These devices +provide an additional layer of caching between main memory and disk. For +read-heavy workloads, where the working set size is much larger than what can +be cached in main memory, using cache devices allow much more of this working +set to be served from low latency media. Using cache devices provides the +greatest performance improvement for random read-workloads of mostly static +content. +.Pp +To create a pool with cache devices, specify a "cache" +.No vdev +with any number of devices. For example: +.Bd -literal -offset 2n +.Li # Ic zpool create pool da0 da1 cache da2 da3 +.Ed +.Pp +Cache devices cannot be mirrored or part of a +.No raidz +configuration. If a read +error is encountered on a cache device, that read +.Tn I/O +is reissued to the original storage pool device, which might be part of a +mirrored or +.No raidz +configuration. +.Pp +The content of the cache devices is considered volatile, as is the case with +other system caches. +.Ss Properties +Each pool has several properties associated with it. Some properties are +read-only statistics while others are configurable and change the behavior of +the pool. The following are read-only properties: +.Bl -tag -width "dedupratio" +.It Sy alloc +Amount of storage space within the pool that has been physically allocated. +.It Sy capacity +Percentage of pool space used. This property can also be referred to by its +shortened column name, "cap". +.It Sy comment +A text string consisting of printable ASCII characters that will be stored +such that it is available even if the pool becomes faulted. An administrator +can provide additional information about a pool using this property. +.It Sy dedupratio +The deduplication ratio specified for a pool, expressed as a multiplier. 
+For example, a
+.Sy dedupratio
+value of 1.76 indicates that 1.76 units of data were stored but only 1 unit of
+disk space was actually consumed. See
+.Xr zfs 8
+for a description of the deduplication feature.
+.It Sy free
+Number of blocks within the pool that are not allocated.
+.It Sy guid
A unique identifier for the pool.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBsize\fR\fR
-.ad
-.RS 20n
-.rt
+.It Sy health
+The current health of the pool. Health can be
+.Qq Sy ONLINE Ns ,
+.Qq Sy DEGRADED Ns ,
+.Qq Sy FAULTED Ns ,
+.Qq Sy OFFLINE Ns ,
+.Qq Sy REMOVED Ns , or
+.Qq Sy UNAVAIL Ns .
+.It Sy size
Total size of the storage pool.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBused\fR\fR
-.ad
-.RS 20n
-.rt
+.It Sy used
Amount of storage space used within the pool.
-.RE
-
-.sp
-.LP
-These space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a \fBraidz\fR configuration depends on the characteristics of the data being written. In addition, \fBZFS\fR reserves some space for internal accounting that the \fBzfs\fR(1M) command takes into account, but the \fBzpool\fR command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.
-.sp
-.LP
+.El
+.Pp
+These space usage properties report actual physical space available to the
+storage pool. The physical space can be different from the total amount of
+space that any contained datasets can actually use. The amount of space used in
+a
+.No raidz
+configuration depends on the characteristics of the data being written.
+In addition,
+.Tn ZFS
+reserves some space for internal accounting that the
+.Xr zfs 8
+command takes into account, but the
+.Xr zpool 8
+command does not. For non-full pools of a reasonable size, these effects should
+be invisible. For small pools, or pools that are close to being completely
+full, these discrepancies may become more noticeable.
+.Pp
The following property can be set at creation time and import time:
-.sp
-.ne 2
-.mk
-.na
-\fB\fBaltroot\fR\fR
-.ad
-.sp .6
-.RS 4n
-Alternate root directory. If set, this directory is prepended to any mount points within the pool. This can be used when examining an unknown pool where the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid. \fBaltroot\fR is not a persistent property. It is valid only while the system is up. Setting \fBaltroot\fR defaults to using \fBcachefile\fR=none, though this may be overridden using an explicit setting.
-.RE
-
-.sp
-.LP
-The following properties can be set at creation time and import time, and later changed with the \fBzpool set\fR command:
-.sp
-.ne 2
-.mk
-.na
-\fB\fBautoexpand\fR=\fBon\fR | \fBoff\fR\fR
-.ad
-.sp .6
-.RS 4n
-Controls automatic pool expansion when the underlying LUN is grown. If set to \fBon\fR, the pool will be resized according to the size of the expanded device. If the device is part of a mirror or \fBraidz\fR then all devices within that mirror/\fBraidz\fR group must be expanded before the new space is made available to the pool. The default behavior is \fBoff\fR. This property can also be referred to by its shortened column name, \fBexpand\fR.
-.RE
-
-.sp
-.ne 2
-.mk
-.na
-\fB\fBautoreplace\fR=\fBon\fR | \fBoff\fR\fR
-.ad
-.sp .6
-.RS 4n
-Controls automatic device replacement. 
If set to "\fBoff\fR", device replacement must be initiated by the administrator by using the "\fBzpool replace\fR" command. If set to "\fBon\fR", any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is "\fBoff\fR". This property can also be referred to by its shortened column name, "replace". -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBbootfs\fR=\fIpool\fR/\fIdataset\fR\fR -.ad -.sp .6 -.RS 4n -Identifies the default bootable dataset for the root pool. This property is expected to be set mainly by the installation and upgrade programs. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcachefile\fR=\fIpath\fR | \fBnone\fR\fR -.ad -.sp .6 -.RS 4n -Controls the location of where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system. All pools in this cache are automatically imported when the system boots. Some environments, such as install and clustering, need to cache this information in a different location so that pools are not automatically imported. Setting this property caches the pool configuration in a different location that can later be imported with "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a temporary pool that is never cached, and the special value \fB\&''\fR (empty string) uses the default location. -.sp -Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a \fBcachefile\fR is exported or destroyed, the file is removed. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBdelegation\fR=\fBon\fR | \fBoff\fR\fR -.ad -.sp .6 -.RS 4n -Controls whether a non-privileged user is granted access based on the dataset permissions defined on the dataset. See \fBzfs\fR(1M) for more information on \fBZFS\fR delegated administration. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBfailmode\fR=\fBwait\fR | \fBcontinue\fR | \fBpanic\fR\fR -.ad -.sp .6 -.RS 4n -Controls the system behavior in the event of catastrophic pool failure. This condition is typically a result of a loss of connectivity to the underlying storage device(s) or a failure of all devices within the pool. The behavior of such an event is determined as follows: -.sp -.ne 2 -.mk -.na -\fB\fBwait\fR\fR -.ad -.RS 12n -.rt -Blocks all \fBI/O\fR access until the device connectivity is recovered and the errors are cleared. This is the default behavior. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBcontinue\fR\fR -.ad -.RS 12n -.rt -Returns \fBEIO\fR to any new write \fBI/O\fR requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBpanic\fR\fR -.ad -.RS 12n -.rt +.Bl -tag -width 2n +.It Sy altroot +Alternate root directory. If set, this directory is prepended to any mount +points within the pool. This can be used when examining an unknown pool where +the mount points cannot be trusted, or in an alternate boot environment, where +the typical paths are not valid. +.Sy altroot +is not a persistent property. It is valid only while the system is up. +Setting +.Sy altroot +defaults to using +.Cm cachefile=none Ns , +though this may be overridden using an explicit setting. 
+.El
+.Pp
+The following property can only be set at import time:
+.Bl -tag -width 2n
+.It Sy readonly Ns = Ns Cm on No | Cm off
+If set to
+.Cm on Ns ,
+the pool will be imported in read-only mode with the following restrictions:
+.Bl -bullet -offset 2n
+.It
+Synchronous data in the intent log will not be accessible
+.It
+Properties of the pool cannot be changed
+.It
+Datasets of this pool can only be mounted read-only
+.It
+To write to a read-only pool, an export and import of the pool is required.
+.El
+.El
+.Pp
+The following properties can be set at creation time and import time, and later
+changed with the
+.Ic zpool set
+command:
+.Bl -tag -width 2n
+.It Sy autoexpand Ns = Ns Cm on No | Cm off
+Controls automatic pool expansion when the underlying LUN is grown. If set to
+.Qq Cm on Ns ,
+the pool will be resized according to the size of the expanded
+device. If the device is part of a mirror or
+.No raidz
+then all devices within that
+.No mirror/ Ns No raidz
+group must be expanded before the new space is made available to
+the pool. The default behavior is
+.Qq off Ns .
+This property can also be referred to by its shortened column name,
+.Sy expand Ns .
+.It Sy autoreplace Ns = Ns Cm on No | Cm off
+Controls automatic device replacement. If set to
+.Qq Cm off Ns ,
+device replacement must be initiated by the administrator by using the
+.Qq Nm Cm replace
+command. If set to
+.Qq Cm on Ns ,
+any new device, found in the same
+physical location as a device that previously belonged to the pool, is
+automatically formatted and replaced. The default behavior is
+.Qq Cm off Ns .
+This property can also be referred to by its shortened column name, "replace".
+.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
+Identifies the default bootable dataset for the root pool. This property is
+expected to be set mainly by the installation and upgrade programs.
+.It Sy cachefile Ns = Ns Ar path No | Cm none
+Controls the location of where the pool configuration is cached. Discovering
+all pools on system startup requires a cached copy of the configuration data
+that is stored on the root file system. All pools in this cache are
+automatically imported when the system boots. Some environments, such as
+install and clustering, need to cache this information in a different location
+so that pools are not automatically imported. Setting this property caches the
+pool configuration in a different location that can later be imported with
+.Qq Nm Cm import Fl c .
+Setting it to the special value
+.Qq Cm none
+creates a temporary pool that is never cached, and the special value
+.Cm ''
+(empty string) uses the default location.
+.It Sy dedupditto Ns = Ns Ar number
+Threshold for the number of block ditto copies. If the reference count for a
+deduplicated block increases above this number, a new ditto copy of this block
+is automatically stored. Default setting is
+.Cm 0 Ns .
+.It Sy delegation Ns = Ns Cm on No | Cm off
+Controls whether a non-privileged user is granted access based on the dataset
+permissions defined on the dataset. See
+.Xr zfs 8
+for more information on
+.Tn ZFS
+delegated administration.
+.It Sy failmode Ns = Ns Cm wait No | Cm continue No | Cm panic
+Controls the system behavior in the event of catastrophic pool failure. This
+condition is typically a result of a loss of connectivity to the underlying
+storage device(s) or a failure of all devices within the pool.
The behavior of +such an event is determined as follows: +.Bl -tag -width indent +.It Sy wait +Blocks all +.Tn I/O +access until the device connectivity is recovered and the errors are cleared. +This is the default behavior. +.It Sy continue +Returns +.Em EIO +to any new write +.Tn I/O +requests but allows reads to any of the remaining healthy devices. Any write +requests that have yet to be committed to disk would be blocked. +.It Sy panic Prints out a message to the console and generates a system crash dump. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBlistsnaps\fR=on | off\fR -.ad -.sp .6 -.RS 4n -Controls whether information about snapshots associated with this pool is output when "\fBzfs list\fR" is run without the \fB-t\fR option. The default value is "off". -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBversion\fR=\fIversion\fR\fR -.ad -.sp .6 -.RS 4n -The current on-disk version of the pool. This can be increased, but never decreased. The preferred method of updating pools is with the "\fBzpool upgrade\fR" command, though this property can be used when a specific version is needed for backwards compatibility. This property can be any number between 1 and the current version reported by "\fBzpool upgrade -v\fR". -.RE - -.SS "Subcommands" -.sp -.LP -All subcommands that modify state are logged persistently to the pool in their original form. -.sp -.LP -The \fBzpool\fR command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported: -.sp -.ne 2 -.mk -.na -\fB\fBzpool\fR \fB-?\fR\fR -.ad -.sp .6 -.RS 4n +.El +.It Sy listsnaps Ns = Ns Cm on No | Cm off +Controls whether information about snapshots associated with this pool is +output when +.Qq Nm zfs Cm list +is run without the +.Fl t +option. The default value is +.Cm off . +.It Sy version Ns = Ns Ar version +The current on-disk version of the pool. This can be increased, but never +decreased. The preferred method of updating pools is with the +.Qq Nm Cm upgrade +command, though this property can be used when a specific version is needed +for backwards compatibility. This property can be any number between 1 and the +current version reported by +.Qo Ic zpool upgrade -v Qc Ns . +.El +.Sh SUBCOMMANDS +All subcommands that modify state are logged persistently to the pool in their +original form. +.Pp +The +.Nm +command provides subcommands to create and destroy storage pools, add capacity +to storage pools, and provide information about the storage pools. The following +subcommands are supported: +.Bl -tag -width 2n +.It Xo +.Nm +.Op Fl \&? +.Xc +.Pp Displays a help message. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...\fR -.ad -.sp .6 -.RS 4n -Adds the specified virtual devices to the given pool. The \fIvdev\fR specification is described in the "Virtual Devices" section. The behavior of the \fB-f\fR option, and the device checks performed are described in the "zpool create" subcommand. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 6n -.rt -Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-n\fR\fR -.ad -.RS 6n -.rt -Displays the configuration that would be used without actually adding the \fBvdev\fRs. The actual pool creation can still fail due to insufficient privileges or device sharing. 
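To make the dry-run behaviour of the -n flag concrete (pool and device names here are illustrative only), the layout that an add would produce can be previewed without changing the pool:

    # zpool add -n tank mirror da4 da5

If the printed configuration looks right, rerunning the command without -n performs the actual addition.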
-.RE - -Do not add a disk that is currently configured as a quorum device to a zpool. After a disk is in the pool, that disk can then be configured as a quorum device. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR\fR -.ad -.sp .6 -.RS 4n -Attaches \fInew_device\fR to an existing \fBzpool\fR device. The existing device cannot be part of a \fBraidz\fR configuration. If \fIdevice\fR is not currently part of a mirrored configuration, \fIdevice\fR automatically transforms into a two-way mirror of \fIdevice\fR and \fInew_device\fR. If \fIdevice\fR is part of a two-way mirror, attaching \fInew_device\fR creates a three-way mirror, and so on. In either case, \fInew_device\fR begins to resilver immediately. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 6n -.rt -Forces use of \fInew_device\fR, even if its appears to be in use. Not all devices can be overridden in this manner. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool clear\fR \fIpool\fR [\fIdevice\fR] ...\fR -.ad -.sp .6 -.RS 4n -Clears device errors in a pool. If no arguments are specified, all device errors within the pool are cleared. If one or more devices is specified, only those errors associated with the specified device or devices are cleared. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR] ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR \fIvdev\fR ...\fR -.ad -.sp .6 -.RS 4n -Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), and period ("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are names beginning with the pattern "c[0-9]". The \fBvdev\fR specification is described in the "Virtual Devices" section. -.sp -The command verifies that each device specified is accessible and not currently in use by another subsystem. There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevents a device from ever being used by \fBZFS\fR. Other uses, such as having a preexisting \fBUFS\fR file system, can be overridden with the \fB-f\fR option. -.sp -The command also checks that the replication strategy for the pool is consistent. An attempt to combine redundant and non-redundant storage in a single pool, or to mix disks and files, results in an error unless \fB-f\fR is specified. The use of differently sized devices within a single \fBraidz\fR or mirror group is also flagged as an error unless \fB-f\fR is specified. -.sp -Unless the \fB-R\fR option is specified, the default mount point is "/\fIpool\fR". The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the \fB-m\fR option. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.sp .6 -.RS 4n -Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-n\fR\fR -.ad -.sp .6 -.RS 4n -Displays the configuration that would be used without actually creating the pool. The actual pool creation can still fail due to insufficient privileges or device sharing. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty=value\fR [\fB-o\fR \fIproperty=value\fR] ...\fR -.ad -.sp .6 -.RS 4n -Sets the given pool properties. 
See the "Properties" section for a list of valid properties that can be set. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-O\fR \fIfile-system-property=value\fR\fR -.ad -.br -.na -\fB[\fB-O\fR \fIfile-system-property=value\fR] ...\fR -.ad -.sp .6 -.RS 4n -Sets the given file system properties in the root file system of the pool. See the "Properties" section of \fBzfs\fR(1M) for a list of valid properties that can be set. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR \fIroot\fR\fR -.ad -.sp .6 -.RS 4n -Equivalent to "-o cachefile=none,altroot=\fIroot\fR" -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-m\fR \fImountpoint\fR\fR -.ad -.sp .6 -.RS 4n -Sets the mount point for the root dataset. The default mount point is "/\fIpool\fR" or "\fBaltroot\fR/\fIpool\fR" if \fBaltroot\fR is specified. The mount point must be an absolute path, "\fBlegacy\fR", or "\fBnone\fR". For more information on dataset mount points, see \fBzfs\fR(1M). -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool destroy\fR [\fB-f\fR] \fIpool\fR\fR -.ad -.sp .6 -.RS 4n -Destroys the given pool, freeing up any devices for other use. This command tries to unmount any active datasets before destroying the pool. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 6n -.rt +.It Xo +.Nm +.Cm add +.Op Fl fn +.Ar pool vdev ... +.Xc +.Pp +Adds the specified virtual devices to the given pool. The +.No vdev +specification is described in the +.Qq Sx Virtual Devices +section. The behavior of the +.Fl f +option, and the device checks performed are described in the +.Qq Nm Cm create +subcommand. +.Bl -tag -width indent +.It Fl f +Forces use of +.Ar vdev Ns , +even if they appear in use or specify a conflicting replication level. +Not all devices can be overridden in this manner. +.It Fl n +Displays the configuration that would be used without actually adding the +.Ar vdev Ns s. +The actual pool creation can still fail due to insufficient privileges or device +sharing. +.Pp +Do not add a disk that is currently configured as a quorum device to a zpool. +After a disk is in the pool, that disk can then be configured as a quorum +device. +.El +.It Xo +.Nm +.Cm attach +.Op Fl f +.Ar pool device new_device +.Xc +.Pp +Attaches +.Ar new_device +to an existing +.Sy zpool +device. The existing device cannot be part of a +.No raidz +configuration. If +.Ar device +is not currently part of a mirrored configuration, +.Ar device +automatically transforms into a two-way mirror of +.Ar device No and Ar new_device Ns . If +.Ar device +is part of a two-way mirror, attaching +.Ar new_device +creates a three-way mirror, and so on. In either case, +.Ar new_device +begins to resilver immediately. +.Bl -tag -width indent +.It Fl f +Forces use of +.Ar new_device Ns , +even if its appears to be in use. Not all devices can be overridden in this +manner. +.El +.It Xo +.Nm +.Cm clear +.Op Fl F Op Fl n +.Ar pool +.Op Ar device +.Xc +.Pp +Clears device errors in a pool. If no arguments are specified, all device +errors within the pool are cleared. If one or more devices is specified, only +those errors associated with the specified device or devices are cleared. +.Bl -tag -width indent +.It Fl F +Initiates recovery mode for an unopenable pool. Attempts to discard the last +few transactions in the pool to return it to an openable state. Not all damaged +pools can be recovered by using this option. If successful, the data from the +discarded transactions is irretrievably lost. +.It Fl n +Used in combination with the +.Fl F +flag. 
Check whether discarding transactions would make the pool openable, but +do not actually discard any transactions. +.El +.It Xo +.Nm +.Cm create +.Op Fl fn +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Op Fl O Ar file-system-property Ns = Ns Ar value +.Ar ... +.Op Fl m Ar mountpoint +.Op Fl R Ar root +.Ar pool vdev ... +.Xc +.Pp +Creates a new storage pool containing the virtual devices specified on the +command line. The pool name must begin with a letter, and can only contain +alphanumeric characters as well as underscore ("_"), dash ("-"), and period +("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are +names beginning with the pattern "c[0-9]". The +.No vdev +specification is described in the +.Qq Sx Virtual Devices +section. +.Pp +The command verifies that each device specified is accessible and not currently +in use by another subsystem. There are some uses, such as being currently +mounted, or specified as the dedicated dump device, that prevents a device from +ever being used by +.Tn ZFS +Other uses, such as having a preexisting +.Sy UFS +file system, can be overridden with the +.Fl f +option. +.Pp +The command also checks that the replication strategy for the pool is +consistent. An attempt to combine redundant and non-redundant storage in a +single pool, or to mix disks and files, results in an error unless +.Fl f +is specified. The use of differently sized devices within a single +.No raidz +or mirror group is also flagged as an error unless +.Fl f +is specified. +.Pp +Unless the +.Fl R +option is specified, the default mount point is +.Qq Pa /pool Ns . +The mount point must not exist or must be empty, or else the +root dataset cannot be mounted. This can be overridden with the +.Fl m +option. +.Bl -tag -width indent +.It Fl f +Forces use of +.Ar vdev Ns s, +even if they appear in use or specify a conflicting replication level. +Not all devices can be overridden in this manner. +.It Fl n +Displays the configuration that would be used without actually creating the +pool. The actual pool creation can still fail due to insufficient privileges or +device sharing. +.It Xo +.Fl o Ar property Ns = Ns Ar value +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Xc +Sets the given pool properties. See the +.Qq Sx Properties +section for a list of valid properties that can be set. +.It Xo +.Fl O +.Ar file-system-property Ns = Ns Ar value +.Op Fl O Ar file-system-property Ns = Ns Ar value +.Ar ... +.Xc +Sets the given file system properties in the root file system of the pool. See +.Xr zfs 8 Properties +for a list of valid properties that +can be set. +.It Fl R Ar root +Equivalent to +.Qq Fl o Cm cachefile=none,altroot= Ns Pa root +.It Fl m Ar mountpoint +Sets the mount point for the root dataset. The default mount point is +.Qq Pa /pool +or +.Qq Cm altroot Ns Pa /pool +if +.Sy altroot +is specified. The mount point must be an absolute path, +.Qq Cm legacy Ns , or Qq Cm none Ns . +For more information on dataset mount points, see +.Xr zfs 8 Ns \&. +.El +.It Xo +.Nm +.Cm destroy +.Op Fl f +.Ar pool +.Xc +.Pp +Destroys the given pool, freeing up any devices for other use. This command +tries to unmount any active datasets before destroying the pool. +.Bl -tag -width indent +.It Fl f Forces any active datasets contained within the pool to be unmounted. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool detach\fR \fIpool\fR \fIdevice\fR\fR -.ad -.sp .6 -.RS 4n -Detaches \fIdevice\fR from a mirror. 
The operation is refused if there are no other valid replicas of the data. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool export\fR [\fB-f\fR] \fIpool\fR ...\fR -.ad -.sp .6 -.RS 4n -Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present. -.sp -Before exporting the pool, all datasets within the pool are unmounted. A pool can not be exported if it has a shared spare that is currently being used. -.sp -For pools to be portable, you must give the \fBzpool\fR command whole disks, not just slices, so that \fBZFS\fR can label the disks with portable \fBEFI\fR labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 6n -.rt -Forcefully unmount all datasets, using the "\fBunmount -f\fR" command. -.sp -This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...\fR -.ad -.sp .6 -.RS 4n -Retrieves the given list of properties (or all properties if "\fBall\fR" is used) for the specified storage pool(s). These properties are displayed with the following fields: -.sp -.in +2 -.nf - name Name of storage pool - property Property name - value Property value - source Property source, either 'default' or 'local'. -.fi -.in -2 -.sp - -See the "Properties" section for more information on the available pool properties. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...\fR -.ad -.sp .6 -.RS 4n -Displays the command history of the specified pools or all pools if no pool is specified. -.sp -.ne 2 -.mk -.na -\fB\fB-i\fR\fR -.ad -.RS 6n -.rt -Displays internally logged \fBZFS\fR events in addition to user initiated events. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-l\fR\fR -.ad -.RS 6n -.rt -Displays log records in long format, which in addition to standard format includes, the user name, the hostname, and the zone in which the operation was performed. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool import\fR [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR]\fR -.ad -.sp .6 -.RS 4n -Lists pools available to import. If the \fB-d\fR option is not specified, this command searches for devices in "/dev/dsk". The \fB-d\fR option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the \fIvdev\fR layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the "\fBzpool destroy\fR" command, are not listed unless the \fB-D\fR option is specified. -.sp -The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available. -.sp -.ne 2 -.mk -.na -\fB\fB-c\fR \fIcachefile\fR\fR -.ad -.RS 16n -.rt -Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-d\fR \fIdir\fR\fR -.ad -.RS 16n -.rt -Searches for devices or files in \fIdir\fR. 
The \fB-d\fR option can be specified multiple times. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-D\fR\fR -.ad -.RS 16n -.rt +.El +.It Xo +.Nm +.Cm detach +.Ar pool device +.Xc +.Pp +Detaches +.Ar device +from a mirror. The operation is refused if there are no other valid replicas +of the data. +.It Xo +.Nm +.Cm export +.Op Fl f +.Ar pool ... +.Xc +.Pp +Exports the given pools from the system. All devices are marked as exported, +but are still considered in use by other subsystems. The devices can be moved +between systems (even those of different endianness) and imported as long as a +sufficient number of devices are present. +.Pp +Before exporting the pool, all datasets within the pool are unmounted. A pool +can not be exported if it has a shared spare that is currently being used. +.Pp +For pools to be portable, you must give the +.Nm +command whole disks, not just slices, so that +.Tn ZFS +can label the disks with portable +.Sy EFI +labels. Otherwise, disk drivers on platforms of different endianness will not +recognize the disks. +.Bl -tag -width indent +.It Fl f +Forcefully unmount all datasets, using the +.Qq Nm unmount Fl f +command. +.Pp +This command will forcefully export the pool even if it has a shared spare that +is currently being used. This may lead to potential data corruption. +.El +.It Xo +.Nm +.Cm get +.Ar all | property Ns Op , Ns Ar ... +.Ar pool ... +.Xc +.Pp +Retrieves the given list of properties (or all properties if +.Qq Cm all +is used) for the specified storage pool(s). These properties are displayed with +the following fields: +.Bl -column -offset indent "property" +.It name Ta Name of storage pool +.It property Ta Property name +.It value Ta Property value +.It source Ta Property source, either 'default' or 'local'. +.El +.Pp +See the +.Qq Sx Properties +section for more information on the available pool properties. +.It Xo +.Nm +.Cm history +.Op Fl il +.Op Ar pool +.Ar ... +.Xc +.Pp +Displays the command history of the specified pools or all pools if no pool is +specified. +.Bl -tag -width indent +.It Fl i +Displays internally logged +.Tn ZFS +events in addition to user initiated events. +.It Fl l +Displays log records in long format, which in addition to standard format +includes, the user name, the hostname, and the zone in which the operation was +performed. +.El +.It Xo +.Nm +.Cm import +.Op Fl d Ar dir | Fl c Ar cachefile +.Op Fl D +.Xc +.Pp +Lists pools available to import. If the +.Fl d +option is not specified, this command searches for devices in +.Qq Pa /dev Ns . +The +.Fl d +option can be specified multiple times, and all directories are searched. If +the device appears to be part of an exported pool, this command displays a +summary of the pool with the name of the pool, a numeric identifier, as well as +the +.No vdev +layout and current health of the device for each device or file. +Destroyed pools, pools that were previously destroyed with the +.Qq Nm Cm destroy +command, are not listed unless the +.Fl D +option is specified. +.Pp +The numeric identifier is unique, and can be used instead of the pool name when +multiple exported pools of the same name are available. +.Bl -tag -width indent +.It Fl c Ar cachefile +Reads configuration from the given +.Ar cachefile +that was created with the +.Qq Sy cachefile +pool property. This +.Ar cachefile +is used instead of searching for devices. +.It Fl d Ar dir +Searches for devices or files in +.Ar dir Ns . +The +.Fl d +option can be specified multiple times. +.It Fl D Lists destroyed pools only. 
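For example (the search directory and pool names are assumptions, not taken from this patch), exported pools visible under /dev are listed with a bare import, and destroyed pools only appear when -D is given:

    # zpool import
    # zpool import -d /dev -D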
-.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR\fR -.ad -.sp .6 -.RS 4n -Imports all pools found in the search directories. Identical to the previous command, except that all pools with a sufficient number of devices available are imported. Destroyed pools, pools that were previously destroyed with the "\fBzpool destroy\fR" command, will not be imported unless the \fB-D\fR option is specified. -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fImntopts\fR\fR -.ad -.RS 21n -.rt -Comma-separated list of mount options to use when mounting datasets within the pool. See \fBzfs\fR(1M) for a description of dataset properties and mount options. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty=value\fR\fR -.ad -.RS 21n -.rt -Sets the specified property on the imported pool. See the "Properties" section for more information on the available pool properties. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-c\fR \fIcachefile\fR\fR -.ad -.RS 21n -.rt -Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-d\fR \fIdir\fR\fR -.ad -.RS 21n -.rt -Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times. This option is incompatible with the \fB-c\fR option. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-D\fR\fR -.ad -.RS 21n -.rt -Imports destroyed pools only. The \fB-f\fR option is also required. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 21n -.rt +.El +.It Xo +.Nm +.Cm import +.Op Fl o Ar mntopts +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Op Fl d Ar dir | Fl c Ar cachefile +.Op Fl D +.Op Fl f +.Op Fl m +.Op Fl N +.Op Fl R Ar root +.Op Fl F Op Fl n +.Fl a +.Xc +.Pp +Imports all pools found in the search directories. Identical to the previous +command, except that all pools with a sufficient number of devices available +are imported. Destroyed pools, pools that were previously destroyed with the +.Qq Nm Cm destroy +command, will not be imported unless the +.Fl D +option is specified. +.Bl -tag -width indent +.It Fl o Ar mntopts +Comma-separated list of mount options to use when mounting datasets within the +pool. See +.Xr zfs 8 +for a description of dataset properties and mount options. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property on the imported pool. See the +.Qq Sx Properties +section for more information on the available pool properties. +.It Fl c Ar cachefile +Reads configuration from the given +.Ar cachefile +that was created with the +.Qq Sy cachefile +pool property. This +.Ar cachefile +is used instead of searching for devices. +.It Fl d Ar dir +Searches for devices or files in +.Ar dir Ns . +The +.Fl d +option can be specified multiple times. This option is incompatible with the +.Fl c +option. +.It Fl D +Imports destroyed pools only. The +.Fl f +option is also required. +.It Fl f Forces import, even if the pool appears to be potentially active. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.RS 21n -.rt -Searches for and imports all pools found. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR \fIroot\fR\fR -.ad -.RS 21n -.rt -Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR" property to "\fIroot\fR". 
-.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR | \fIid\fR [\fInewpool\fR]\fR -.ad -.sp .6 -.RS 4n -Imports a specific pool. A pool can be identified by its name or the numeric identifier. If \fInewpool\fR is specified, the pool is imported using the name \fInewpool\fR. Otherwise, it is imported with the same name as its exported name. -.sp -If a device is removed from a system without running "\fBzpool export\fR" first, the device appears as potentially active. It cannot be determined if this was a failed export, or whether the device is really in use from another host. To import a pool in this state, the \fB-f\fR option is required. -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fImntopts\fR\fR -.ad -.sp .6 -.RS 4n -Comma-separated list of mount options to use when mounting datasets within the pool. See \fBzfs\fR(1M) for a description of dataset properties and mount options. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIproperty=value\fR\fR -.ad -.sp .6 -.RS 4n -Sets the specified property on the imported pool. See the "Properties" section for more information on the available pool properties. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-c\fR \fIcachefile\fR\fR -.ad -.sp .6 -.RS 4n -Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-d\fR \fIdir\fR\fR -.ad -.sp .6 -.RS 4n -Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times. This option is incompatible with the \fB-c\fR option. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-D\fR\fR -.ad -.sp .6 -.RS 4n -Imports destroyed pool. The \fB-f\fR option is also required. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl m +Enables import with missing log devices. +.It Fl N +Do not mount any filesystems from the imported pool. +.It Fl R Ar root +Sets the +.Qq Sy cachefile +property to +.Qq Cm none +and the +.Qq Sy altroot +property to +.Qq Ar root +.It Fl F +Recovery mode for a non-importable pool. Attempt to return the pool to an +importable state by discarding the last few transactions. Not all damaged pools +can be recovered by using this option. If successful, the data from the +discarded transactions is irretrievably lost. This option is ignored if the +pool is importable or already imported. +.It Fl n +Used with the +.Fl F +recovery option. Determines whether a non-importable pool can be made +importable again, but does not actually perform the pool recovery. For more +details about pool recovery mode, see the +.Fl F +option, above. +.It Fl a +Searches for and imports all pools found. +.El +.It Xo +.Nm +.Cm import +.Op Fl o Ar mntopts +.Op Fl o Ar property Ns = Ns Ar value +.Ar ... +.Op Fl d Ar dir | Fl c Ar cachefile +.Op Fl D +.Op Fl f +.Op Fl m +.Op Fl N +.Op Fl R Ar root +.Op Fl F Op Fl n +.Ar pool | id +.Op Ar newpool +.Xc +.Pp +Imports a specific pool. A pool can be identified by its name or the numeric +identifier. If +.Ar newpool +is specified, the pool is imported using the name +.Ar newpool Ns . +Otherwise, it is imported with the same name as its exported name. +.Pp +If a device is removed from a system without running +.Qq Nm Cm export +first, the device appears as potentially active. 
It cannot be determined if +this was a failed export, or whether the device is really in use from another +host. To import a pool in this state, the +.Fl f +option is required. +.Bl -tag -width indent +.It Fl o Ar mntopts +Comma-separated list of mount options to use when mounting datasets within the +pool. See +.Xr zfs 8 +for a description of dataset properties and mount options. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property on the imported pool. See the +.Qq Sx Properties +section for more information on the available pool properties. +.It Fl c Ar cachefile +Reads configuration from the given +.Ar cachefile +that was created with the +.Qq Sy cachefile +pool property. This +.Ar cachefile +is used instead of searching for devices. +.It Fl d Ar dir +Searches for devices or files in +.Ar dir Ns . +The +.Fl d +option can be specified multiple times. This option is incompatible with the +.Fl c +option. +.It Fl D +Imports destroyed pools only. The +.Fl f +option is also required. +.It Fl f Forces import, even if the pool appears to be potentially active. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-R\fR \fIroot\fR\fR -.ad -.sp .6 -.RS 4n -Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR" property to "\fIroot\fR". -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool iostat\fR [\fB-T\fR \fBu\fR | \fBd\fR] [\fB-v\fR] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]]\fR -.ad -.sp .6 -.RS 4n -Displays \fBI/O\fR statistics for the given pools. When given an interval, the statistics are printed every \fIinterval\fR seconds until \fBCtrl-C\fR is pressed. If no \fIpools\fR are specified, statistics for every pool in the system is shown. If \fIcount\fR is specified, the command exits after \fIcount\fR reports are printed. -.sp -.ne 2 -.mk -.na -\fB\fB-T\fR \fBu\fR | \fBd\fR\fR -.ad -.RS 12n -.rt -Display a time stamp. -.sp -Specify \fBu\fR for a printed representation of the internal representation of time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See \fBdate\fR(1). -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-v\fR\fR -.ad -.RS 12n -.rt -Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within the pool, in addition to the pool-wide statistics. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool labelclear\fR [\fB-f\fR] \fIdevice\fR -.ad -.sp .6 -.RS 4n -Removes ZFS label information from the specified device. The device must not be part of an active pool configuration. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 12n -.rt +.It Fl m +Enables import with missing log devices. +.It Fl N +Do not mount any filesystems from the imported pool. +.It Fl R Ar root +Equivalent to +.Qq Fl o Cm cachefile=none,altroot= Ns Pa root +.It Fl F +Recovery mode for a non-importable pool. Attempt to return the pool to an +importable state by discarding the last few transactions. Not all damaged pools +can be recovered by using this option. If successful, the data from the +discarded transactions is irretrievably lost. This option is ignored if the +pool is importable or already imported. +.It Fl n +Used with the +.Fl F +recovery option. Determines whether a non-importable pool can be made +importable again, but does not actually perform the pool recovery. For more +details about pool recovery mode, see the +.Fl F +option, above. +.El +.It Xo +.Nm +.Cm iostat +.Op Fl T Cm d Ns | Ns Cm u +.Op Fl v +.Op Ar pool +.Ar ... +.Op Ar interval Op Ar count +.Xc +.Pp +Displays +.Tn I/O +statistics for the given pools. 
When given an interval, the statistics are +printed every +.Ar interval +seconds until +.Sy Ctrl-C +is pressed. If no +.Ar pools +are specified, statistics for every pool in the system is shown. If +.Ar count +is specified, the command exits after +.Ar count +reports are printed. +.Bl -tag -width indent +.It Fl T Cm d Ns | Ns Cm u +Print a timestamp. +.Pp +Use modifier +.Cm d +for standard date format. See +.Xr date 1 Ns . +Use modifier +.Cm u +for unixtime +.Pq equals Qq Ic date +%s Ns . +.It Fl v +Verbose statistics. Reports usage statistics for individual +.No vdev Ns s +within the pool, in addition to the pool-wide statistics. +.El +.It Xo +.Nm +.Cm labelclear +.Op Fl f +.Ar device +.Xc +.Pp +Removes +.Tn ZFS +label information from the specified +.Ar device Ns . +The +.Ar device +must not be part of an active pool configuration. +.Bl -tag -width indent +.It Fl v Treat exported or foreign devices as inactive. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool list\fR [\fB-H\fR] [\fB-o\fR \fIprops\fR[,...]] [\fIpool\fR] ...\fR -.ad -.sp .6 -.RS 4n -Lists the given pools along with a health status and space usage. When given no arguments, all pools in the system are listed. -.sp -.ne 2 -.mk -.na -\fB\fB-H\fR\fR -.ad -.RS 12n -.rt -Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-o\fR \fIprops\fR\fR -.ad -.RS 12n -.rt -Comma-separated list of properties to display. See the "Properties" section for a list of valid properties. The default list is "name, size, used, available, capacity, health, altroot" -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...\fR -.ad -.sp .6 -.RS 4n -Takes the specified physical device offline. While the \fIdevice\fR is offline, no attempt is made to read or write to the device. -.sp -This command is not applicable to spares or cache devices. -.sp -.ne 2 -.mk -.na -\fB\fB-t\fR\fR -.ad -.RS 6n -.rt -Temporary. Upon reboot, the specified physical device reverts to its previous state. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR...\fR -.ad -.sp .6 -.RS 4n +.El +.It Xo +.Nm +.Cm list +.Op Fl H +.Op Fl o Ar property Ns Op , Ns Ar ... +.Op Fl T Cm d Ns | Ns Cm u +.Op Ar pool +.Ar ... +.Op Ar inverval Op Ar count +.Xc +.Pp +Lists the given pools along with a health status and space usage. When given no +arguments, all pools in the system are listed. +.Pp +When given an interval, the output is printed every +.Ar interval +seconds until +.Sy Ctrl-C +is pressed. If +.Ar count +is specified, the command exits after +.Ar count +reports are printed. +.Bl -tag -width indent +.It Fl H +Scripted mode. Do not display headers, and separate fields by a single tab +instead of arbitrary space. +.It Fl o Ar property Ns Op , Ns Ar ... +Comma-separated list of properties to display. See the +.Qq Sx Properties +section for a list of valid properties. The default list is +.Sy name Ns , +.Sy size Ns , +.Sy used Ns , +.Sy available Ns , +.Sy capacity Ns , +.Sy health Ns , +.Sy altroot Ns . +.It Fl T Cm d Ns | Ns Cm u +Print a timestamp. +.Pp +Use modifier +.Cm d +for standard date format. See +.Xr date 1 Ns . +Use modifier +.Cm u +for unixtime +.Pq equals Qq Ic date +%s Ns . +.El +.It Xo +.Nm +.Cm offline +.Op Fl t +.Ar pool device ... +.Xc +.Pp +Takes the specified physical device offline. While the +.Ar device +is offline, no attempt is made to read or write to the device. 
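For instance (pool and device names are placeholders), a disk can be taken offline before maintenance and returned to service afterwards:

    # zpool offline tank da1
    # zpool online tank da1

While da1 is offline, no reads or writes are issued to it.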
+.Bl -tag -width indent +.It Fl t +Temporary. Upon reboot, the specified physical device reverts to its previous +state. +.El +.It Xo +.Nm +.Cm online +.Op Fl e +.Ar pool device ... +.Xc +.Pp Brings the specified physical device online. -.sp +.Pp This command is not applicable to spares or cache devices. -.sp -.ne 2 -.mk -.na -\fB\fB-e\fR\fR -.ad -.RS 6n -.rt -Expand the device to use all available space. If the device is part of a mirror or \fBraidz\fR then all devices must be expanded before the new space will become available to the pool. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...\fR -.ad -.sp .6 -.RS 4n -Removes the specified device from the pool. This command currently only supports removing hot spares, cache, and log devices. A mirrored log device can be removed by specifying the top-level mirror for the log. Non-log devices that are part of a mirrored configuration can be removed using the \fBzpool detach\fR command. Non-redundant and \fBraidz\fR devices cannot be removed from a pool. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIold_device\fR [\fInew_device\fR]\fR -.ad -.sp .6 -.RS 4n -Replaces \fIold_device\fR with \fInew_device\fR. This is equivalent to attaching \fInew_device\fR, waiting for it to resilver, and then detaching \fIold_device\fR. -.sp -The size of \fInew_device\fR must be greater than or equal to the minimum size of all the devices in a mirror or \fBraidz\fR configuration. -.sp -\fInew_device\fR is required if the pool is not redundant. If \fInew_device\fR is not specified, it defaults to \fIold_device\fR. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same \fB/dev/dsk\fR path as the old device, even though it is actually a different disk. \fBZFS\fR recognizes this. -.sp -.ne 2 -.mk -.na -\fB\fB-f\fR\fR -.ad -.RS 6n -.rt -Forces use of \fInew_device\fR, even if its appears to be in use. Not all devices can be overridden in this manner. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...\fR -.ad -.sp .6 -.RS 4n -Begins a scrub. The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror or \fBraidz\fR) devices, \fBZFS\fR automatically repairs any damage discovered during the scrub. The "\fBzpool status\fR" command reports the progress of the scrub and summarizes the results of the scrub upon completion. -.sp -Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that \fBZFS\fR knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure. -.sp -Because scrubbing and resilvering are \fBI/O\fR-intensive operations, \fBZFS\fR only allows one at a time. If a scrub is already in progress, the "\fBzpool scrub\fR" command terminates it and starts a new scrub. If a resilver is in progress, \fBZFS\fR does not allow a scrub to be started until the resilver completes. -.sp -.ne 2 -.mk -.na -\fB\fB-s\fR\fR -.ad -.RS 6n -.rt +.Bl -tag -width indent +.It Fl e +Expand the device to use all available space. If the device is part of a mirror +or +.No raidz +then all devices must be expanded before the new space will become +available to the pool. +.El +.It Xo +.Nm +.Cm remove +.Ar pool device ... 
+.Xc +.Pp +Removes the specified device from the pool. This command currently only +supports removing hot spares, cache, and log devices. A mirrored log device can +be removed by specifying the top-level mirror for the log. Non-log devices that +are part of a mirrored configuration can be removed using the +.Qq Nm Cm detach +command. Non-redundant and +.No raidz +devices cannot be removed from a pool. +.It Xo +.Nm +.Cm replace +.Op Fl f +.Ar pool device +.Op Ar new_device +.Xc +.Pp +Replaces +.Ar old_device +with +.Ar new_device Ns . +This is equivalent to attaching +.Ar new_device Ns , +waiting for it to resilver, and then detaching +.Ar old_device Ns . +.Pp +The size of +.Ar new_device +must be greater than or equal to the minimum size +of all the devices in a mirror or +.No raidz +configuration. +.Pp +.Ar new_device +is required if the pool is not redundant. If +.Ar new_device +is not specified, it defaults to +.Ar old_device Ns . +This form of replacement is useful after an existing disk has failed and has +been physically replaced. In this case, the new disk may have the same +.Pa /dev +path as the old device, even though it is actually a different disk. +.Tn ZFS +recognizes this. +.Bl -tag -width indent +.It Fl f +Forces use of +.Ar new_device Ns , +even if its appears to be in use. Not all devices can be overridden in this +manner. +.El +.It Xo +.Nm +.Cm scrub +.Op Fl s +.Ar pool ... +.Xc +.Pp +Begins a scrub. The scrub examines all data in the specified pools to verify +that it checksums correctly. For replicated (mirror or +.No raidz Ns ) +devices, +.Tn ZFS +automatically repairs any damage discovered during the scrub. The +.Qq Nm Cm status +command reports the progress of the scrub and summarizes the results of the +scrub upon completion. +.Pp +Scrubbing and resilvering are very similar operations. The difference is that +resilvering only examines data that +.Tn ZFS +knows to be out of date (for example, when attaching a new device to a mirror +or replacing an existing device), whereas scrubbing examines all data to +discover silent errors due to hardware faults or disk failure. +.Pp +Because scrubbing and resilvering are +.Tn I/O Ns -intensive +operations, +.Tn ZFS +only allows one at a time. If a scrub is already in progress, the +.Qq Nm Cm scrub +command returns an error. To start a new scrub, you have to stop the old scrub +with the +.Qq Nm Cm scrub Fl s +command first. If a resilver is in progress, +.Tn ZFS +does not allow a scrub to be started until the resilver completes. +.Bl -tag -width indent +.It Fl s Stop scrubbing. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR\fR -.ad -.sp .6 -.RS 4n -Sets the given property on the specified pool. See the "Properties" section for more information on what properties can be set and acceptable values. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...\fR -.ad -.sp .6 -.RS 4n -Displays the detailed health status for the given pools. If no \fIpool\fR is specified, then the status of each pool in the system is displayed. For more information on pool and device health, see the "Device Failure and Recovery" section. -.sp -If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change. 
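As a minimal illustration of the scrub behaviour just described (the pool name is a placeholder), a scrub is started, and can be stopped again before it completes, with:

    # zpool scrub tank
    # zpool scrub -s tank

Progress of a running scrub is reported by the zpool status command, as noted above.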
-.sp -.ne 2 -.mk -.na -\fB\fB-x\fR\fR -.ad -.RS 6n -.rt -Only display status for pools that are exhibiting errors or are otherwise unavailable. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-v\fR\fR -.ad -.RS 6n -.rt -Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub. -.RE - -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool upgrade\fR\fR -.ad -.sp .6 -.RS 4n -Displays all pools formatted using a different \fBZFS\fR on-disk version. Older versions can continue to be used, but some features may not be available. These pools can be upgraded using "\fBzpool upgrade -a\fR". Pools that are formatted with a more recent version are also displayed, although these pools will be inaccessible on the system. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool upgrade\fR \fB-v\fR\fR -.ad -.sp .6 -.RS 4n -Displays \fBZFS\fR versions supported by the current software. The current \fBZFS\fR versions and all previous supported versions are displayed, along with an explanation of the features provided with each version. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...\fR -.ad -.sp .6 -.RS 4n -Upgrades the given pool to the latest on-disk version. Once this is done, the pool will no longer be accessible on systems running older versions of the software. -.sp -.ne 2 -.mk -.na -\fB\fB-a\fR\fR -.ad -.RS 14n -.rt +.El +.It Xo +.Nm +.Cm set +.Ar property Ns = Ns Ar value pool +.Xc +.Pp +Sets the given property on the specified pool. See the +.Qq Sx Properties +section for more information on what properties can be set and acceptable +values. +.It Xo +.Nm +.Cm split +.Op Fl n +.Op Fl R Ar altroot +.Op Fl o Ar mntopts +.Op Fl o Ar property Ns = Ns Ar value +.Ar pool newpool +.Op Ar device ... +.Xc +.Pp +Splits off one disk from each mirrored top-level +.No vdev +in a pool and creates a new pool from the split-off disks. The original pool +must be made up of one or more mirrors and must not be in the process of +resilvering. The +.Cm split +subcommand chooses the last device in each mirror +.No vdev +unless overridden by a device specification on the command line. +.Pp +When using a +.Ar device +argument, +.Cm split +includes the specified device(s) in a new pool and, should any devices remain +unspecified, assigns the last device in each mirror +.No vdev +to that pool, as it does normally. If you are uncertain about the outcome of a +.Cm split +command, use the +.Fl n +("dry-run") option to ensure your command will have the effect you intend. +.Bl -tag -width indent +.It Fl R Ar altroot +Automatically import the newly created pool after splitting, using the +specified +.Ar altroot +parameter for the new pool's alternate root. See the +.Sy altroot +description in the +.Qq Sx Properties +section, above. +.It Fl n +Displays the configuration that would be created without actually splitting the +pool. The actual pool split could still fail due to insufficient privileges or +device status. +.It Fl o Ar mntopts +Comma-separated list of mount options to use when mounting datasets within the +pool. See +.Xr zfs 8 +for a description of dataset properties and mount options. Valid only in +conjunction with the +.Fl R +option. +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property on the new pool. See the +.Qq Sx Properties +section, above, for more information on the available pool properties. +.El +.It Xo +.Nm +.Cm status +.Op Fl vx +.Op Fl T Cm d Ns | Ns Cm u +.Op Ar pool +.Ar ... 
+.Op Ar interval Op Ar count +.Xc +.Pp +Displays the detailed health status for the given pools. If no +.Ar pool +is specified, then the status of each pool in the system is displayed. For more +information on pool and device health, see the +.Qq Sx Device Failure and Recovery +section. +.Pp +When given an interval, the output is printed every +.Ar interval +seconds until +.Sy Ctrl-C +is pressed. If +.Ar count +is specified, the command exits after +.Ar count +reports are printed. +.Pp +If a scrub or resilver is in progress, this command reports the percentage done +and the estimated time to completion. Both of these are only approximate, +because the amount of data in the pool and the other workloads on the system +can change. +.Bl -tag -width indent +.It Fl x +Only display status for pools that are exhibiting errors or are otherwise +unavailable. +.It Fl v +Displays verbose data error information, printing out a complete list of all +data errors since the last complete pool scrub. +.It Fl T Cm d Ns | Ns Cm u +Print a timestamp. +.Pp +Use modifier +.Cm d +for standard date format. See +.Xr date 1 Ns . +Use modifier +.Cm u +for unixtime +.Pq equals Qq Ic date +%s Ns . +.El +.It Xo +.Nm +.Cm upgrade +.Op Fl v +.Xc +.Pp +Displays all pools formatted using a different +.Tn ZFS +pool on-disk version. Older versions can continue to be used, but some +features may not be available. These pools can be upgraded using +.Qq Nm Cm upgrade Fl a . +Pools that are formatted with a more recent version are also displayed, +although these pools will be inaccessible on the system. +.Bl -tag -width indent +.It Fl v +Displays +.Tn ZFS +pool versions supported by the current software. The current +.Tn ZFS +pool version and all previous supported versions are displayed, along +with an explanation of the features provided with each version. +.El +.It Xo +.Nm +.Cm upgrade +.Op Fl V Ar version +.Fl a | Ar pool ... +.Xc +.Pp +Upgrades the given pool to the latest on-disk pool version. Once this is done, +the pool will no longer be accessible on systems running older versions of the +software. +.Bl -tag -width indent +.It Fl a Upgrades all pools. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-V\fR \fIversion\fR\fR -.ad -.RS 14n -.rt -Upgrade to the specified version. If the \fB-V\fR flag is not specified, the pool is upgraded to the most recent version. This option can only be used to increase the version number, and only up to the most recent version supported by this software. -.RE - -.RE - -.SH EXAMPLES -.LP -\fBExample 1 \fRCreating a RAID-Z Storage Pool -.sp -.LP -The following command creates a pool with a single \fBraidz\fR root \fIvdev\fR that consists of six disks. - -.sp -.in +2 -.nf -# \fBzpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0\fR -.fi -.in -2 -.sp - -.LP -\fBExample 2 \fRCreating a Mirrored Storage Pool -.sp -.LP -The following command creates a pool with two mirrors, where each mirror contains two disks. - -.sp -.in +2 -.nf -# \fBzpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0\fR -.fi -.in -2 -.sp - -.LP -\fBExample 3 \fRCreating a ZFS Storage Pool by Using Slices -.sp -.LP -The following command creates an unmirrored pool using two disk slices. - -.sp -.in +2 -.nf -# \fBzpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4\fR -.fi -.in -2 -.sp - -.LP -\fBExample 4 \fRCreating a ZFS Storage Pool by Using Files -.sp -.LP -The following command creates an unmirrored pool using files. While not recommended, a pool based on files can be useful for experimental purposes. 
- -.sp -.in +2 -.nf -# \fBzpool create tank /path/to/file/a /path/to/file/b\fR -.fi -.in -2 -.sp - -.LP -\fBExample 5 \fRAdding a Mirror to a ZFS Storage Pool -.sp -.LP -The following command adds two mirrored disks to the pool "\fItank\fR", assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool. - -.sp -.in +2 -.nf -# \fBzpool add tank mirror c1t0d0 c1t1d0\fR -.fi -.in -2 -.sp - -.LP -\fBExample 6 \fRListing Available ZFS Storage Pools -.sp -.LP -The following command lists all available pools on the system. In this case, the pool \fIzion\fR is faulted due to a missing device. - -.sp -.LP -The results from this command are similar to the following: - -.sp -.in +2 -.nf -# \fBzpool list\fR - NAME SIZE USED AVAIL CAP HEALTH ALTROOT - pool 67.5G 2.92M 67.5G 0% ONLINE - - tank 67.5G 2.92M 67.5G 0% ONLINE - - zion - - - 0% FAULTED - -.fi -.in -2 -.sp - -.LP -\fBExample 7 \fRDestroying a ZFS Storage Pool -.sp -.LP -The following command destroys the pool "\fItank\fR" and any datasets contained within. - -.sp -.in +2 -.nf -# \fBzpool destroy -f tank\fR -.fi -.in -2 -.sp - -.LP -\fBExample 8 \fRExporting a ZFS Storage Pool -.sp -.LP -The following command exports the devices in pool \fItank\fR so that they can be relocated or later imported. - -.sp -.in +2 -.nf -# \fBzpool export tank\fR -.fi -.in -2 -.sp - -.LP -\fBExample 9 \fRImporting a ZFS Storage Pool -.sp -.LP -The following command displays available pools, and then imports the pool "tank" for use on the system. - -.sp -.LP +.It Fl V Ar version +Upgrade to the specified version. If the +.Fl V +flag is not specified, the pool is upgraded to the most recent version. This +option can only be used to increase the version number, and only up to the most +recent version supported by this software. +.El +.El +.Sh EXAMPLES +.Bl -tag -width 0n +.It Sy Example 1 No Creating a RAID-Z Storage Pool +.Pp +The following command creates a pool with a single +.No raidz +root +.No vdev +that consists of six disks. +.Bd -literal -offset 2n +.Li # Ic zpool create tank raidz da0 da1 da2 da3 da4 da5 +.Ed +.It Sy Example 2 No Creating a Mirrored Storage Pool +.Pp +The following command creates a pool with two mirrors, where each mirror +contains two disks. +.Bd -literal -offset 2n +.Li # Ic zpool create tank mirror da0 da1 mirror da2 da3 +.Ed +.It Sy Example 3 No Creating a Tn ZFS No Storage Pool by Using Partitions +.Pp +The following command creates an unmirrored pool using two GPT partitions. +.Bd -literal -offset 2n +.Li # Ic zpool create tank da0p3 da1p3 +.Ed +.It Sy Example 4 No Creating a Tn ZFS No Storage Pool by Using Files +.Pp +The following command creates an unmirrored pool using files. While not +recommended, a pool based on files can be useful for experimental purposes. +.Bd -literal -offset 2n +.Li # Ic zpool create tank /path/to/file/a /path/to/file/b +.Ed +.It Sy Example 5 No Adding a Mirror to a Tn ZFS No Storage Pool +.Pp +The following command adds two mirrored disks to the pool +.Em tank Ns , +assuming the pool is already made up of two-way mirrors. The additional space +is immediately available to any datasets within the pool. +.Bd -literal -offset 2n +.Li # Ic zpool add tank mirror da2 da3 +.Ed +.It Sy Example 6 No Listing Available Tn ZFS No Storage Pools +.Pp +The following command lists all available pools on the system. 
+.Bd -literal -offset 2n +.Li # Ic zpool list +NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT +pool 2.70T 473G 2.24T 17% 1.00x ONLINE - +test 1.98G 89.5K 1.98G 0% 1.00x ONLINE - +.Ed +.It Sy Example 7 No Listing All Properties for a Pool +.Pp +The following command lists all the properties for a pool. +.Bd -literal -offset 2n +.Li # Ic zpool get all pool +pool size 2.70T - +pool capacity 17% - +pool altroot - default +pool health ONLINE - +pool guid 2501120270416322443 default +pool version 28 default +pool bootfs pool/root local +pool delegation on default +pool autoreplace off default +pool cachefile - default +pool failmode wait default +pool listsnapshots off default +pool autoexpand off default +pool dedupditto 0 default +pool dedupratio 1.00x - +pool free 2.24T - +pool allocated 473G - +pool readonly off - +.Ed +.It Sy Example 8 No Destroying a Tn ZFS No Storage Pool +.Pp +The following command destroys the pool +.Qq Em tank +and any datasets contained within. +.Bd -literal -offset 2n +.Li # Ic zpool destroy -f tank +.Ed +.It Sy Example 9 No Exporting a Tn ZFS No Storage Pool +.Pp +The following command exports the devices in pool +.Em tank +so that they can be relocated or later imported. +.Bd -literal -offset 2n +.Li # Ic zpool export tank +.Ed +.It Sy Example 10 No Importing a Tn ZFS No Storage Pool +.Pp +The following command displays available pools, and then imports the pool +.Qq Em tank +for use on the system. +.Pp The results from this command are similar to the following: +.Bd -literal -offset 2n +.Li # Ic zpool import -.sp -.in +2 -.nf -# \fBzpool import\fR pool: tank id: 15451357997522795478 state: ONLINE @@ -1616,211 +1711,160 @@ config: tank ONLINE mirror ONLINE - c1t2d0 ONLINE - c1t3d0 ONLINE - -# \fBzpool import tank\fR -.fi -.in -2 -.sp - -.LP -\fBExample 10 \fRUpgrading All ZFS Storage Pools to the Current Version -.sp -.LP -The following command upgrades all ZFS Storage pools to the current version of the software. - -.sp -.in +2 -.nf -# \fBzpool upgrade -a\fR -This system is currently running ZFS version 2. -.fi -.in -2 -.sp - -.LP -\fBExample 11 \fRManaging Hot Spares -.sp -.LP + da0 ONLINE + da1 ONLINE +.Ed +.It Xo +.Sy Example 11 +Upgrading All +.Tn ZFS +Storage Pools to the Current Version +.Xc +.Pp +The following command upgrades all +.Tn ZFS +Storage pools to the current version of +the software. +.Bd -literal -offset 2n +.Li # Ic zpool upgrade -a +This system is currently running ZFS pool version 28. +.Ed +.It Sy Example 12 No Managing Hot Spares +.Pp The following command creates a new pool with an available hot spare: - -.sp -.in +2 -.nf -# \fBzpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0\fR -.fi -.in -2 -.sp - -.sp -.LP -If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command: - -.sp -.in +2 -.nf -# \fBzpool replace tank c0t0d0 c0t3d0\fR -.fi -.in -2 -.sp - -.sp -.LP -Once the data has been resilvered, the spare is automatically removed and is made available should another device fails. 
The hot spare can be permanently removed from the pool using the following command:
-
-.sp
-.in +2
-.nf
-# \fBzpool remove tank c0t2d0\fR
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 12 \fRCreating a ZFS Pool with Mirrored Separate Intent Logs
-.sp
-.LP
-The following command creates a ZFS storage pool consisting of two, two-way mirrors and mirrored log devices:
-
-.sp
-.in +2
-.nf
-# \fBzpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
-   c4d0 c5d0\fR
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 13 \fRAdding Cache Devices to a ZFS Pool
-.sp
-.LP
-The following command adds two disks for use as cache devices to a ZFS storage pool:
-
-.sp
-.in +2
-.nf
-# \fBzpool add pool cache c2d0 c3d0\fR
-.fi
-.in -2
-.sp
-
-.sp
-.LP
-Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the \fBiostat\fR option as follows:
-
-.sp
-.in +2
-.nf
-# \fBzpool iostat -v pool 5\fR
-.fi
-.in -2
-.sp
-
-.LP
-\fBExample 14 \fRRemoving a Mirrored Log Device
-.sp
-.LP
-The following command removes the mirrored log device \fBmirror-2\fR.
-
-.sp
-.LP
+.Bd -literal -offset 2n
+.Li # Ic zpool create tank mirror da0 da1 spare da2
+.Ed
+.Pp
+If one of the disks were to fail, the pool would be reduced to the degraded
+state. The failed device can be replaced using the following command:
+.Bd -literal -offset 2n
+.Li # Ic zpool replace tank da0 da2
+.Ed
+.Pp
+Once the data has been resilvered, the spare is automatically removed and is
+made available should another device fail. The hot spare can be permanently
+removed from the pool using the following command:
+.Bd -literal -offset 2n
+.Li # Ic zpool remove tank da2
+.Ed
+.It Xo
+.Sy Example 13
+Creating a
+.Tn ZFS
+Pool with Mirrored Separate Intent Logs
+.Xc
+.Pp
+The following command creates a
+.Tn ZFS
+storage pool consisting of two, two-way
+mirrors and mirrored log devices:
+.Bd -literal -offset 2n
+.Li # Ic zpool create pool mirror da0 da1 mirror da2 da3 log mirror da4 da5
+.Ed
+.It Sy Example 14 No Adding Cache Devices to a Tn ZFS No Pool
+.Pp
+The following command adds two disks for use as cache devices to a
+.Tn ZFS
+storage pool:
+.Bd -literal -offset 2n
+.Li # Ic zpool add pool cache da2 da3
+.Ed
+.Pp
+Once added, the cache devices gradually fill with content from main memory.
+Depending on the size of your cache devices, it could take over an hour for
+them to fill. Capacity and reads can be monitored using the
+.Cm iostat
+subcommand as follows:
+.Bd -literal -offset 2n
+.Li # Ic zpool iostat -v pool 5
+.Ed
+.It Sy Example 15 No Removing a Mirrored Log Device
+.Pp
+The following command removes the mirrored log device
+.Em mirror-2 Ns .
+.Pp Given this configuration: - -.sp -.in +2 -.nf +.Bd -literal -offset 2n pool: tank state: ONLINE scrub: none requested -config: + config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 - c6t0d0 ONLINE 0 0 0 - c6t1d0 ONLINE 0 0 0 + da0 ONLINE 0 0 0 + da1 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 - c6t2d0 ONLINE 0 0 0 - c6t3d0 ONLINE 0 0 0 + da2 ONLINE 0 0 0 + da3 ONLINE 0 0 0 logs mirror-2 ONLINE 0 0 0 - c4t0d0 ONLINE 0 0 0 - c4t1d0 ONLINE 0 0 0 -.fi -.in -2 -.sp - -.sp -.LP -The command to remove the mirrored log \fBmirror-2\fR is: - -.sp -.in +2 -.nf -# \fBzpool remove tank mirror-2\fR -.fi -.in -2 -.sp - -.SH EXIT STATUS -.sp -.LP + da4 ONLINE 0 0 0 + da5 ONLINE 0 0 0 +.Ed +.Pp +The command to remove the mirrored log +.Em mirror-2 +is: +.Bd -literal -offset 2n +.Li # Ic zpool remove tank mirror-2 +.Ed +.It Sy Example 16 No Recovering a Faulted Tn ZFS No Pool +.Pp +If a pool is faulted but recoverable, a message indicating this state is +provided by +.Qq Nm Cm status +if the pool was cached (see the +.Fl c Ar cachefile +argument above), or as part of the error output from a failed +.Qq Nm Cm import +of the pool. +.Pp +Recover a cached pool with the +.Qq Nm Cm clear +command: +.Bd -literal -offset 2n +.Li # Ic zpool clear -F data +Pool data returned to its state as of Tue Sep 08 13:23:35 2009. +Discarded approximately 29 seconds of transactions. +.Ed +.Pp +If the pool configuration was not cached, use +.Qq Nm Cm import +with the recovery mode flag: +.Bd -literal -offset 2n +.Li # Ic zpool import -F data +Pool data returned to its state as of Tue Sep 08 13:23:35 2009. +Discarded approximately 29 seconds of transactions. +.Ed +.El +.Sh EXIT STATUS The following exit values are returned: -.sp -.ne 2 -.mk -.na -\fB\fB0\fR\fR -.ad -.RS 5n -.rt -Successful completion. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB1\fR\fR -.ad -.RS 5n -.rt +.Bl -tag -offset 2n -width 2n +.It 0 +Successful completion. +.It 1 An error occurred. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB2\fR\fR -.ad -.RS 5n -.rt +.It 2 Invalid command line options were specified. -.RE - -.SH ATTRIBUTES -.sp -.LP -See \fBattributes\fR(5) for descriptions of the following attributes: -.sp - -.sp -.TS -tab() box; -cw(2.75i) |cw(2.75i) -lw(2.75i) |lw(2.75i) -. -ATTRIBUTE TYPEATTRIBUTE VALUE -_ -AvailabilitySUNWzfsu -_ -Interface StabilityEvolving -.TE - -.SH SEE ALSO -.sp -.LP -\fBzfs\fR(1M), \fBattributes\fR(5) +.El +.Sh SEE ALSO +.Xr zfs 8 +.Sh AUTHORS +This manual page is a +.Xr mdoc 7 +reimplementation of the +.Tn OpenSolaris +manual page +.Em zpool(1M) , +modified and customized for +.Fx +and licensed under the Common Development and Distribution License +.Pq Tn CDDL . +.Pp +The +.Xr mdoc 7 +implementation of this manual page was initially written by +.An Martin Matuska Aq mm@FreeBSD.org . diff --git a/cddl/contrib/opensolaris/cmd/zstreamdump/zstreamdump.1 b/cddl/contrib/opensolaris/cmd/zstreamdump/zstreamdump.1 index 9e11948..f800a05 100644 --- a/cddl/contrib/opensolaris/cmd/zstreamdump/zstreamdump.1 +++ b/cddl/contrib/opensolaris/cmd/zstreamdump/zstreamdump.1 @@ -1,67 +1,67 @@ '\" te -.\" Copyright (c) 2009, Sun Microsystems, Inc. All Rights Reserved -.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. 
-.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with -.\" the fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner] -.TH zstreamdump 1M "21 Sep 2009" "SunOS 5.11" "System Administration Commands" -.SH NAME -zstreamdump \- filter data in zfs send stream -.SH SYNOPSIS -.LP -.nf -\fBzstreamdump\fR [\fB-C\fR] [\fB-v\fR] -.fi - -.SH DESCRIPTION -.sp -.LP -The \fBzstreamdump\fR utility reads from the output of the \fBzfs send\fR command, then displays headers and some statistics from that output. See \fBzfs\fR(1M). -.SH OPTIONS -.sp -.LP +.\" Copyright (c) 2011, Martin Matuska . +.\" All Rights Reserved. +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or http://www.opensolaris.org/os/licensing. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" Copyright (c) 2009, Sun Microsystems, Inc. All Rights Reserved. +.\" +.\" $FreeBSD$ +.\" +.Dd November 26, 2011 +.Dt ZSTREAMDUMP 8 +.Os +.Sh NAME +.Nm zdb +.Nd filter data in zfs send stream +.Sh SYNOPSIS +.Nm +.Op Fl C +.Op Fl v +.Sh DESCRIPTION +The +.Nm +utility reads from the output of the +.Qq Nm zfs Cm send +command, then displays headers and some statistics from that output. See +.Xr zfs 8 . +.Pp The following options are supported: -.sp -.ne 2 -.mk -.na -\fB\fB-C\fR\fR -.ad -.sp .6 -.RS 4n +.Bl -tag -width indent +.It Fl C Suppress the validation of checksums. -.RE - -.sp -.ne 2 -.mk -.na -\fB\fB-v\fR\fR -.ad -.sp .6 -.RS 4n +.It Fl v Verbose. Dump all headers, not only begin and end headers. -.RE - -.SH ATTRIBUTES -.sp -.LP -See \fBattributes\fR(5) for descriptions of the following attributes: -.sp - -.sp -.TS -tab() box; -cw(2.75i) |cw(2.75i) -lw(2.75i) |lw(2.75i) -. -ATTRIBUTE TYPEATTRIBUTE VALUE -_ -AvailabilitySUNWzfsu -_ -Interface StabilityUncommitted -.TE - -.SH SEE ALSO -.sp -.LP -\fBzfs\fR(1M), \fBattributes\fR(5) +.El +.Sh SEE ALSO +.Xr zfs 8 +.Sh AUTHORS +This manual page is a +.Xr mdoc 7 +reimplementation of the +.Tn OpenSolaris +manual page +.Em zstreamdump(1M) , +modified and customized for +.Fx +and licensed under the +.Tn Common Development and Distribution License +.Pq Tn CDDL . +.Pp +The +.Xr mdoc 7 +implementation of this manual page was initially written by +.An Martin Matuska Aq mm@FreeBSD.org . -- 1.7.8.3