X-Git-Url: https://git.decadent.org.uk/gitweb/?p=nfs-utils.git;a=blobdiff_plain;f=utils%2Fmount%2Fnfs.man;h=ad2a14da875a95bf4a16b509d8384fec21d5d0d5;hp=8abdb8394ead612b2597fc3cf067d42a448df48d;hb=260eb781154f288055f42602aaa25b3f608404ea;hpb=f1e07c06652eb5cae1ce028cad8cd35e59f32f57 diff --git a/utils/mount/nfs.man b/utils/mount/nfs.man index 8abdb83..ad2a14d 100644 --- a/utils/mount/nfs.man +++ b/utils/mount/nfs.man @@ -1,459 +1,1407 @@ -.\" nfs.5 "Rick Sladkey" -.\" Wed Feb 8 12:52:42 1995, faith@cs.unc.edu: updates for Ross Biro's -.\" patches. " -.TH NFS 5 "20 November 1993" "Linux 0.99" "Linux Programmer's Manual" +.\"@(#)nfs.5" +.TH NFS 5 "2 November 2007" .SH NAME -nfs \- nfs and nfs4 fstab format and options +nfs \- fstab format and options for the +.B nfs +and +.B nfs4 +file systems .SH SYNOPSIS -.B /etc/fstab +.I /etc/fstab .SH DESCRIPTION +NFS is an Internet Standard protocol +created by Sun Microsystems in 1984. NFS was developed +to allow file sharing between systems residing +on a local area network. +The Linux NFS client supports three versions +of the NFS protocol: +NFS version 2 [RFC1094], +NFS version 3 [RFC1813], +and NFS version 4 [RFC3530]. +.P The -.I fstab -file contains information about which filesystems -to mount where and with what options. -For NFS mounts, it contains the server name and -exported server directory to mount from, -the local directory that is the mount point, -and the NFS specific options that control -the way the filesystem is mounted. -.P -Three different versions of the NFS protocol are -supported by the Linux NFS client: -NFS version 2, NFS version 3, and NFS version 4. -To mount via NFS version 2, use the -.BR nfs -file system type and specify -.BR nfsvers=2 . -To mount via NFS version 3, use the -.BR nfs -file system type and specify -.BR nfsvers=3 . -Version 3 is the default protocol version for the -.BR nfs -file system type when -.BR nfsvers= -is not specified on the mount command and both client and server -support it. -To mount via NFS version 4, use the -.BR nfs4 -file system type. +.BR mount (8) +command attaches a file system to the system's +name space hierarchy at a given mount point. The -.BR nfsvers= -keyword is not supported for the -.BR nfs4 -file system type. +.I /etc/fstab +file describes how +.BR mount (8) +should assemble a system's file name hierarchy +from various independent file systems +(including file systems exported by NFS servers). +Each line in the +.I /etc/fstab +file describes a single file system, its mount point, +and a set of default mount options for that mount point. .P -These file system types share similar mount options; -the differences are listed below. +For NFS file system mounts, a line in the +.I /etc/fstab +file specifies the server name, +the path name of the exported server directory to mount, +the local directory that is the mount point, +the type of file system that is being mounted, +and a list of mount options that control +the way the filesystem is mounted and +how the NFS client behaves when accessing +files on this mount point. +The fifth and sixth fields on each line are not used +by NFS, thus conventionally each contain the digit zero. For example: .P -Here is an example from an \fI/etc/fstab\fP file for an NFSv3 mount -over TCP. -.sp -.nf -.ta 2.5i +0.75i +0.75i +1.0i -server:/usr/local/pub /pub nfs rsize=32768,wsize=32768,timeo=14,intr -.fi +.SP +.NF +.TA 2.5i +0.75i +0.75i +1.0i + server:path /mountpoint fstype option,option,... 
0 0 +.FI .P -Here is an example for an NFSv4 mount over TCP using Kerberos -5 mutual authentication. -.sp -.nf -.ta 2.5i +0.75i +0.75i +1.0i -server:/usr/local/pub /pub nfs4 proto=tcp,sec=krb5,hard,intr -.fi +The server's hostname and export pathname +are separated by a colon, while +the mount options are separated by commas. The remaining fields +are separated by blanks or tabs. +The server's hostname can be an unqualified hostname, +a fully qualified domain name, +or a dotted quad IPv4 address. +The +.I fstype +field contains either "nfs" (for version 2 or version 3 NFS mounts) +or "nfs4" (for NFS version 4 mounts). +The +.B nfs +and +.B nfs4 +file system types share similar mount options, +which are described below. +.SH "MOUNT OPTIONS" +Refer to +.BR mount (8) +for a description of generic mount options +available for all file systems. If you do not need to +specify any mount options, use the generic option +.B defaults +in +.IR /etc/fstab . +. .DT -.SS Options for the nfs file system type -.TP 1.5i -.I rsize=n -The number of bytes NFS uses when reading files from an NFS server. -The rsize is negotiated between the server and client to determine -the largest block size that both can support. -The value specified by this option is the maximum size that could -be used; however, the actual size used may be smaller. -Note: Setting this size to a value less than the largest supported -block size will adversely affect performance. -.TP 1.5i -.I wsize=n -The number of bytes NFS uses when writing files to an NFS server. -The wsize is negotiated between the server and client to determine -the largest block size that both can support. -The value specified by this option is the maximum size that could -be used; however, the actual size used may be smaller. -Note: Setting this size to a value less than the largest supported -block size will adversely affect performance. -.TP 1.5i -.I timeo=n -The value in tenths of a second before sending the -first retransmission after an RPC timeout. -The default value is 7 tenths of a second. After the first timeout, -the timeout is doubled after each successive timeout until a maximum -timeout of 60 seconds is reached or the enough retransmissions -have occured to cause a major timeout. Then, if the filesystem -is hard mounted, each new timeout cascade restarts at twice the -initial value of the previous cascade, again doubling at each -retransmission. The maximum timeout is always 60 seconds. -Better overall performance may be achieved by increasing the -timeout when mounting on a busy network, to a slow server, or through -several routers or gateways. -.TP 1.5i -.I retrans=n -The number of minor timeouts and retransmissions that must occur before -a major timeout occurs. The default is 3 timeouts. When a major timeout -occurs, the file operation is either aborted or a "server not responding" -message is printed on the console. -.TP 1.5i -.I acregmin=n -The minimum time in seconds that attributes of a regular file should -be cached before requesting fresh information from a server. -The default is 3 seconds. -.TP 1.5i -.I acregmax=n -The maximum time in seconds that attributes of a regular file can -be cached before requesting fresh information from a server. -The default is 60 seconds. -.TP 1.5i -.I acdirmin=n -The minimum time in seconds that attributes of a directory should -be cached before requesting fresh information from a server. -The default is 30 seconds. 
-.TP 1.5i -.I acdirmax=n -The maximum time in seconds that attributes of a directory can -be cached before requesting fresh information from a server. -The default is 60 seconds. -.TP 1.5i -.I actimeo=n -Using actimeo sets all of -.I acregmin, -.I acregmax, -.I acdirmin, +.SS "Valid options for either the nfs or nfs4 file system type" +These options are valid to use when mounting either +.B nfs +or +.B nfs4 +file system types. +They imply the same behavior +and have the same default for both file system types. +.TP 1.5i +.BR soft " / " hard +Determines the recovery behavior of the NFS client +after an NFS request times out. +If neither option is specified (or if the +.B hard +option is specified), NFS requests are retried indefinitely. +If the +.B soft +option is specified, then the NFS client fails an NFS request +after +.B retrans +retransmissions have been sent, +causing the NFS client to return an error +to the calling application. +.IP +.I NB: +A so-called "soft" timeout can cause +silent data corruption in certain cases. As such, use the +.B soft +option only when client responsiveness +is more important than data integrity. +Using NFS over TCP or increasing the value of the +.B retrans +option may mitigate some of the risks of using the +.B soft +option. +.TP 1.5i +.BI timeo= n +The time (in tenths of a second) the NFS client waits for a +response before it retries an NFS request. If this +option is not specified, requests are retried every +60 seconds for NFS over TCP. +The NFS client does not perform any kind of timeout backoff +for NFS over TCP. +.IP +However, for NFS over UDP, the client uses an adaptive +algorithm to estimate an appropriate timeout value for frequently used +request types (such as READ and WRITE requests), but uses the +.B timeo +setting for infrequently used request types (such as FSINFO requests). +If the +.B timeo +option is not specified, +infrequently used request types are retried after 1.1 seconds. +After each retransmission, the NFS client doubles the timeout for +that request, +up to a maximum timeout length of 60 seconds. +.TP 1.5i +.BI retrans= n +The number of times the NFS client retries a request before +it attempts further recovery action. If the +.B retrans +option is not specified, the NFS client tries each request +three times. +.IP +The NFS client generates a "server not responding" message +after +.B retrans +retries, then attempts further recovery (depending on whether the +.B hard +mount option is in effect). +.TP 1.5i +.BI rsize= n +The maximum number of bytes in each network READ request +that the NFS client can receive when reading data from a file +on an NFS server. +The actual data payload size of each NFS READ request is equal to +or smaller than the +.B rsize +setting. The largest read payload supported by the Linux NFS client +is 1,048,576 bytes (one megabyte). +.IP +The +.B rsize +value is a positive integral multiple of 1024. +Specified +.B rsize +values lower than 1024 are replaced with 4096; values larger than +1048576 are replaced with 1048576. If a specified value is within the supported +range but not a multiple of 1024, it is rounded down to the nearest +multiple of 1024. +.IP +If an +.B rsize +value is not specified, or if the specified +.B rsize +value is larger than the maximum that either client or server can support, +the client and server negotiate the largest +.B rsize +value that they can both support. 
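+.IP
+For example, to request 32768-byte read and write transfers explicitly
+(the server name and export path below are only placeholders),
+a client administrator might use:
+.IP
+.nf
+	mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt
+.fi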
+.IP +The +.B rsize +mount option as specified on the +.BR mount (8) +command line appears in the +.I /etc/mtab +file. However, the effective +.B rsize +value negotiated by the client and server is reported in the +.I /proc/mounts +file. +.TP 1.5i +.BI wsize= n +The maximum number of bytes per network WRITE request +that the NFS client can send when writing data to a file +on an NFS server. The actual data payload size of each +NFS WRITE request is equal to +or smaller than the +.B wsize +setting. The largest write payload supported by the Linux NFS client +is 1,048,576 bytes (one megabyte). +.IP +Similar to +.B rsize +, the +.B wsize +value is a positive integral multiple of 1024. +Specified +.B wsize +values lower than 1024 are replaced with 4096; values larger than +1048576 are replaced with 1048576. If a specified value is within the supported +range but not a multiple of 1024, it is rounded down to the nearest +multiple of 1024. +.IP +If a +.B wsize +value is not specified, or if the specified +.B wsize +value is larger than the maximum that either client or server can support, +the client and server negotiate the largest +.B wsize +value that they can both support. +.IP +The +.B wsize +mount option as specified on the +.BR mount (8) +command line appears in the +.I /etc/mtab +file. However, the effective +.B wsize +value negotiated by the client and server is reported in the +.I /proc/mounts +file. +.TP 1.5i +.BR ac " / " noac +Selects whether the client may cache file attributes. If neither +option is specified (or if +.B ac +is specified), the client caches file +attributes. +.IP +To improve performance, NFS clients cache file +attributes. Every few seconds, an NFS client checks the server's version of each +file's attributes for updates. Changes that occur on the server in +those small intervals remain undetected until the client checks the +server again. The +.B noac +option prevents clients from caching file +attributes so that applications can more quickly detect file changes +on the server. +.IP +In addition to preventing the client from caching file attributes, +the +.B noac +option forces application writes to become synchronous so +that local changes to a file become visible on the server +immediately. That way, other clients can quickly detect recent +writes when they check the file's attributes. +.IP +Using the +.B noac +option provides greater cache coherence among NFS clients +accessing the same files, +but it extracts a significant performance penalty. +As such, judicious use of file locking is encouraged instead. +The DATA AND METADATA COHERENCE section contains a detailed discussion +of these trade-offs. +.TP 1.5i +.BI acregmin= n +The minimum time (in seconds) that the NFS client caches +attributes of a regular file before it requests +fresh attribute information from a server. +If this option is not specified, the NFS client uses +a 3-second minimum. +.TP 1.5i +.BI acregmax= n +The maximum time (in seconds) that the NFS client caches +attributes of a regular file before it requests +fresh attribute information from a server. +If this option is not specified, the NFS client uses +a 60-second maximum. +.TP 1.5i +.BI acdirmin= n +The minimum time (in seconds) that the NFS client caches +attributes of a directory before it requests +fresh attribute information from a server. +If this option is not specified, the NFS client uses +a 30-second minimum. 
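+.IP
+For example, on an export whose directories rarely change, an
+administrator might lengthen directory attribute caching (the
+values below are only illustrative; see also
+.B acdirmax
+below):
+.IP
+.nf
+	mount -t nfs -o acdirmin=60,acdirmax=120 server:/export /mnt
+.fi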
+.TP 1.5i +.BI acdirmax= n +The maximum time (in seconds) that the NFS client caches +attributes of a directory before it requests +fresh attribute information from a server. +If this option is not specified, the NFS client uses +a 60-second maximum. +.TP 1.5i +.BI actimeo= n +Using +.B actimeo +sets all of +.BR acregmin , +.BR acregmax , +.BR acdirmin , and -.I acdirmax +.B acdirmax to the same value. -There is no default value. +If this option is not specified, the NFS client uses +the defaults for each of these options listed above. .TP 1.5i -.I retry=n -The number of minutes to retry an NFS mount operation +.BR bg " / " fg +Determines how the +.BR mount (8) +command behaves if an attempt to mount an export fails. +The +.B fg +option causes +.BR mount (8) +to exit with an error status if any part of the mount request +times out or fails outright. +This is called a "foreground" mount, +and is the default behavior if neither the +.B fg +nor +.B bg +mount option is specified. +.IP +If the +.B bg +option is specified, a timeout or failure causes the +.BR mount (8) +command to fork a child which continues to attempt +to mount the export. +The parent immediately returns with a zero exit code. +This is known as a "background" mount. +.IP +If the local mount point directory is missing, the +.BR mount (8) +command acts as if the mount request timed out. +This permits nested NFS mounts specified in +.I /etc/fstab +to proceed in any order during system initialization, +even if some NFS servers are not yet available. +Alternatively these issues can be addressed +using an automounter (refer to +.BR automount (8) +for details). +.TP 1.5i +.BI retry= n +The number of minutes that the +.BR mount (8) +command retries an NFS mount operation in the foreground or background before giving up. -The default value for forground mounts is 2 minutes. -The default value for background mounts is 10000 minutes, -which is roughly one week. -.TP 1.5i -.I namlen=n -When an NFS server does not support version two of the -RPC mount protocol, this option can be used to specify -the maximum length of a filename that is supported on -the remote filesystem. This is used to support the -POSIX pathconf functions. The default is 255 characters. -.TP 1.5i -.I port=n -The numeric value of the port to connect to the NFS server on. -If the port number is 0 (the default) then query the -remote host's portmapper for the port number to use. -If the remote host's NFS daemon is not registered with -its portmapper, the standard NFS port number 2049 is -used instead. -.TP 1.5i -.I mountport=n -The numeric value of the -.B mountd -port. +If this option is not specified, the default value for foreground mounts +is 2 minutes, and the default value for background mounts is 10000 minutes (80 minutes shy of one week). .TP 1.5i -.I mounthost=name -The name of the host running -.B mountd . -.TP 1.5i -.I mountprog=n -Use an alternate RPC program number to contact the -mount daemon on the remote host. This option is useful -for hosts that can run multiple NFS servers. -The default value is 100005 which is the standard RPC -mount daemon program number. -.TP 1.5i -.I mountvers=n -Use an alternate RPC version number to contact the -mount daemon on the remote host. This option is useful -for hosts that can run multiple NFS servers. -The default value depends on which kernel you are using. -.TP 1.5i -.I nfsprog=n -Use an alternate RPC program number to contact the -NFS daemon on the remote host. 
This option is useful -for hosts that can run multiple NFS servers. -The default value is 100003 which is the standard RPC -NFS daemon program number. -.TP 1.5i -.I nfsvers=n -Use an alternate RPC version number to contact the -NFS daemon on the remote host. This option is useful -for hosts that can run multiple NFS servers. -The default value depends on which kernel you are using. -.TP 1.5i -.I vers=n -vers is an alternative to nfsvers and is compatible with -many other operating systems. -.TP 1.5i -.I nolock -Disable NFS locking. Do not start lockd. -This is appropriate for mounting the root filesystem or -.B /usr -or -.BR /var . -These filesystems are typically either read-only or not shared, and in -those cases, remote locking is not needed. -This also needs to be used with some old NFS servers -that don't support locking. -.br -Note that applications can still get locks on files, but the locks -only provide exclusion locally. Other clients mounting the same -filesystem will not be able to detect the locks. -.TP 1.5i -.I bg -If the first NFS mount attempt times out, retry the mount -in the background. -After a mount operation is backgrounded, all subsequent mounts -on the same NFS server will be backgrounded immediately, without -first attempting the mount. -A missing mount point is treated as a timeout, -to allow for nested NFS mounts. -.TP 1.5i -.I fg -If the first NFS mount attempt times out, retry the mount -in the foreground. -This is the complement of the -.I bg -option, and also the default behavior. -.TP 1.5i -.I soft -If an NFS file operation has a major timeout then report an I/O error to -the calling program. -The default is to continue retrying NFS file operations indefinitely. -.TP 1.5i -.I hard -If an NFS file operation has a major timeout then report -"server not responding" on the console and continue retrying indefinitely. -This is the default. -.TP 1.5i -.I intr -If an NFS file operation has a major timeout and it is hard mounted, -then allow signals to interupt the file operation and cause it to -return EINTR to the calling program. The default is to not -allow file operations to be interrupted. -.TP 1.5i -.I posix -Mount the NFS filesystem using POSIX semantics. This allows -an NFS filesystem to properly support the POSIX pathconf -command by querying the mount server for the maximum length -of a filename. To do this, the remote host must support version -two of the RPC mount protocol. Many NFS servers support only -version one. -.TP 1.5i -.I nocto -Suppress the retrieval of new attributes when creating a file. -.TP 1.5i -.I noac -Disable all forms of attribute caching entirely. This extracts a -significant performance penalty but it allows two different NFS clients -to get reasonable results when both clients are actively -writing to a common export on the server. -.TP 1.5i -.I noacl -Disables Access Control List (ACL) processing. -.TP 1.5i -.I sec=mode -Set the security flavor for this mount to "mode". -The default setting is \f3sec=sys\f1, which uses local -unix uids and gids to authenticate NFS operations (AUTH_SYS). 
-Other currently supported settings are: -\f3sec=krb5\f1, which uses Kerberos V5 instead of local unix uids -and gids to authenticate users; -\f3sec=krb5i\f1, which uses Kerberos V5 for user authentication -and performs integrity checking of NFS operations using secure -checksums to prevent data tampering; and -\f3sec=krb5p\f1, which uses Kerberos V5 for user authentication -and integrity checking, and encrypts NFS traffic to prevent -traffic sniffing (this is the most secure setting). -Note that there is a performance penalty when using integrity -or privacy. -.TP 1.5i -.I tcp -Mount the NFS filesystem using the TCP protocol. This is the default -if it is supported by both client and server. Many NFS servers only -support UDP. -.TP 1.5i -.I udp -Mount the NFS filesystem using the UDP protocol. -.P -All of the non-value options have corresponding nooption forms. -For example, nointr means don't allow file operations to be -interrupted. -.SS Options for the nfs4 file system type -.TP 1.5i -.I rsize=n -The number of bytes nfs4 uses when reading files from the server. -The rsize is negotiated between the server and client to determine -the largest block size that both can support. -The value specified by this option is the maximum size that could -be used; however, the actual size used may be smaller. -Note: Setting this size to a value less than the largest supported -block size will adversely affect performance. -.TP 1.5i -.I wsize=n -The number of bytes nfs4 uses when writing files to the server. -The wsize is negotiated between the server and client to determine -the largest block size that both can support. -The value specified by this option is the maximum size that could -be used; however, the actual size used may be smaller. -Note: Setting this size to a value less than the largest supported -block size will adversely affect performance. -.TP 1.5i -.I timeo=n -The value in tenths of a second before sending the -first retransmission after an RPC timeout. -The default value depends on whether -.IR proto=udp +.BI sec= mode +The RPCGSS security flavor to use for accessing files on this mount point. +If the +.B sec +option is not specified, or if +.B sec=sys +is specified, the NFS client uses the AUTH_SYS security flavor +for all NFS requests on this mount point. +Valid security flavors are +.BR none , +.BR sys , +.BR krb5 , +.BR krb5i , +.BR krb5p , +.BR lkey , +.BR lkeyi , +.BR lkeyp , +.BR spkm , +.BR spkmi , +and +.BR spkmp . +Refer to the SECURITY CONSIDERATIONS section for details. +.TP 1.5i +.BR sharecache " / " nosharecache +Determines how the client's data cache and attribute cache are shared +when mounting the same export more than once concurrently. Using the +same cache reduces memory requirements on the client and presents +identical file contents to applications when the same remote file is +accessed via different mount points. +.IP +If neither option is specified, or if the +.B sharecache +option is +specified, then a single cache is used for all mount points that +access the same export. If the +.B nosharecache +option is specified, +then that mount point gets a unique cache. Note that when data and +attribute caches are shared, the mount options from the first mount +point take effect for subsequent concurrent mounts of the same export. +.IP +As of kernel 2.6.18, the behavior specified by +.B nosharecache +is legacy caching behavior. 
This +is considered a data risk since multiple cached copies +of the same file on the same client can become out of sync +following a local update of one of the copies. +.TP 1.5i +.BR resvport " / " noresvport +Specifies whether the NFS client should use a privileged source port +when communicating with an NFS server for this mount point. +If this option is not specified, or the +.B resvport +option is specified, the NFS client uses a privileged source port. +If the +.B noresvport +option is specified, the NFS client uses a non-privileged source port. +This option is supported in kernels 2.6.28 and later. +.IP +Using non-privileged source ports helps increase the maximum number of +NFS mount points allowed on a client, but NFS servers must be configured +to allow clients to connect via non-privileged source ports. +.IP +Refer to the SECURITY CONSIDERATIONS section for important details. +.SS "Valid options for the nfs file system type" +Use these options, along with the options in the above subsection, +for mounting the +.B nfs +file system type. +.TP 1.5i +.BI proto= transport +The transport the NFS client uses +to transmit requests to the NFS server for this mount point. +.I transport +can be either +.B udp or -.IR proto=tcp -is in effect (see below). -The default value for UDP is 7 tenths of a second. -The default value for TCP is 60 seconds. -After the first timeout, -the timeout is doubled after each successive timeout until a maximum -timeout of 60 seconds is reached or the enough retransmissions -have occured to cause a major timeout. Then, if the filesystem -is hard mounted, each new timeout cascade restarts at twice the -initial value of the previous cascade, again doubling at each -retransmission. The maximum timeout is always 60 seconds. -.TP 1.5i -.I retrans=n -The number of minor timeouts and retransmissions that must occur before -a major timeout occurs. The default is 5 timeouts for -.IR proto=udp -and 2 timeouts for -.IR proto=tcp . -When a major timeout -occurs, the file operation is either aborted or a "server not responding" -message is printed on the console. -.TP 1.5i -.I acregmin=n -The minimum time in seconds that attributes of a regular file should -be cached before requesting fresh information from a server. -The default is 3 seconds. -.TP 1.5i -.I acregmax=n -The maximum time in seconds that attributes of a regular file can -be cached before requesting fresh information from a server. -The default is 60 seconds. -.TP 1.5i -.I acdirmin=n -The minimum time in seconds that attributes of a directory should -be cached before requesting fresh information from a server. -The default is 30 seconds. -.TP 1.5i -.I acdirmax=n -The maximum time in seconds that attributes of a directory can -be cached before requesting fresh information from a server. -The default is 60 seconds. -.TP 1.5i -.I actimeo=n -Using actimeo sets all of -.I acregmin, -.I acregmax, -.I acdirmin, +.BR tcp . +Each transport uses different default +.B retrans and -.I acdirmax -to the same value. -There is no default value. +.B timeo +settings; refer to the description of these two mount options for details. +.IP +In addition to controlling how the NFS client transmits requests to +the server, this mount option also controls how the +.BR mount (8) +command communicates with the server's rpcbind and mountd services. +Specifying +.B proto=tcp +forces all traffic from the +.BR mount (8) +command and the NFS client to use TCP. +Specifying +.B proto=udp +forces all traffic types to use UDP. 
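+.IP
+For example, to force the mount request and all NFS traffic for an
+export onto TCP (the server name and path below are placeholders):
+.IP
+.nf
+	mount -t nfs -o proto=tcp server:/export /mnt
+.fi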
+.IP +If the +.B proto +mount option is not specified, the +.BR mount (8) +command discovers which protocols the server supports +and chooses an appropriate transport for each service. +Refer to the TRANSPORT METHODS section for more details. .TP 1.5i -.I retry=n -The number of minutes to retry an NFS mount operation -in the foreground or background before giving up. -The default value for forground mounts is 2 minutes. -The default value for background mounts is 10000 minutes, -which is roughly one week. -.TP 1.5i -.I port=n -The numeric value of the port to connect to the NFS server on. -If the port number is 0 (the default) then query the -remote host's portmapper for the port number to use. -If the remote host's NFS daemon is not registered with -its portmapper, the standard NFS port number 2049 is -used instead. -.TP 1.5i -.I proto=n -Mount the NFS filesystem using a specific network protocol -instead of the default UDP protocol. -Many NFS version 4 servers only support TCP. -Valid protocol types are -.IR udp +.B udp +The +.B udp +option is an alternative to specifying +.BR proto=udp. +It is included for compatibility with other operating systems. +.TP 1.5i +.B tcp +The +.B tcp +option is an alternative to specifying +.BR proto=tcp. +It is included for compatibility with other operating systems. +.TP 1.5i +.BI port= n +The numeric value of the server's NFS service port. +If the server's NFS service is not available on the specified port, +the mount request fails. +.IP +If this option is not specified, or if the specified port value is 0, +then the NFS client uses the NFS service port number +advertised by the server's rpcbind service. +The mount request fails if the server's rpcbind service is not available, +the server's NFS service is not registered with its rpcbind service, +or the server's NFS service is not available on the advertised port. +.TP 1.5i +.BI mountport= n +The numeric value of the server's mountd port. +If the server's mountd service is not available on the specified port, +the mount request fails. +.IP +If this option is not specified, +or if the specified port value is 0, then the +.BR mount (8) +command uses the mountd service port number +advertised by the server's rpcbind service. +The mount request fails if the server's rpcbind service is not available, +the server's mountd service is not registered with its rpcbind service, +or the server's mountd service is not available on the advertised port. +.IP +This option can be used when mounting an NFS server +through a firewall that blocks the rpcbind protocol. +.TP 1.5i +.BI mountproto= transport +The transport the NFS client uses +to transmit requests to the NFS server's mountd service when performing +this mount request, and when later unmounting this mount point. +.I transport +can be either +.B udp +or +.BR tcp . +.IP +This option can be used when mounting an NFS server +through a firewall that blocks a particular transport. +When used in combination with the +.B proto +option, different transports for mountd requests and NFS requests +can be specified. +If the server's mountd service is not available via the specified +transport, the mount request fails. +Refer to the TRANSPORT METHODS section for more on how the +.B mountproto +mount option interacts with the +.B proto +mount option. +.TP 1.5i +.BI mounthost= name +The hostname of the host running mountd. +If this option is not specified, the +.BR mount (8) +command assumes that the mountd service runs +on the same host as the NFS service. 
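+.IP
+The preceding options can be combined when mounting through a firewall.
+For example, if a server's mountd service has been pinned to port 635
+(the port number, server name, and path here are only an illustration),
+a client might use:
+.IP
+.nf
+	mount -t nfs -o mountport=635,port=2049 server:/export /mnt
+.fi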
+.TP 1.5i +.BI mountvers= n +The RPC version number used to contact the server's mountd. +If this option is not specified, the client uses a version number +appropriate to the requested NFS version. +This option is useful when multiple NFS services +are running on the same remote server host. +.TP 1.5i +.BI namlen= n +The maximum length of a pathname component on this mount. +If this option is not specified, the maximum length is negotiated +with the server. In most cases, this maximum length is 255 characters. +.IP +Some early versions of NFS did not support this negotiation. +Using this option ensures that +.BR pathconf (3) +reports the proper maximum component length to applications +in such cases. +.TP 1.5i +.BI nfsvers= n +The NFS protocol version number used to contact the server's NFS service. +The Linux client supports version 2 and version 3 of the NFS protocol +when using the file system type +.BR nfs . +If the server does not support the requested version, +the mount request fails. +If this option is not specified, the client attempts to use version 3, +but negotiates the NFS version with the server if version 3 support +is not available. +.TP 1.5i +.BI vers= n +This option is an alternative to the +.B nfsvers +option. +It is included for compatibility with other operating systems. +.TP 1.5i +.BR lock " / " nolock +Selects whether to use the NLM sideband protocol to lock files on the server. +If neither option is specified (or if +.B lock +is specified), NLM locking is used for this mount point. +When using the +.B nolock +option, applications can lock files, +but such locks provide exclusion only against other applications +running on the same client. +Remote applications are not affected by these locks. +.IP +NLM locking must be disabled with the +.B nolock +option when using NFS to mount +.I /var +because +.I /var +contains files used by the NLM implementation on Linux. +Using the +.B nolock +option is also required when mounting exports on NFS servers +that do not support the NLM protocol. +.TP 1.5i +.BR intr " / " nointr +Selects whether to allow signals to interrupt file operations +on this mount point. If neither option +is specified (or if +.B nointr +is specified), +signals do not interrupt NFS file operations. If +.B intr +is specified, system calls return EINTR if an in-progress NFS operation is interrupted by +a signal. +.IP +Using the +.B intr +option is preferred to using the +.B soft +option because it is significantly less likely to result in data corruption. +.TP 1.5i +.BR cto " / " nocto +Selects whether to use close-to-open cache coherence semantics. +If neither option is specified (or if +.B cto +is specified), the client uses close-to-open +cache coherence semantics. If the +.B nocto +option is specified, the client uses a non-standard heuristic to determine when +files on the server have changed. +.IP +Using the +.B nocto +option may improve performance for read-only mounts, +but should be used only if the data on the server changes only occasionally. +The DATA AND METADATA COHERENCE section discusses the behavior +of this option in more detail. +.TP 1.5i +.BR acl " / " noacl +Selects whether to use the NFSACL sideband protocol on this mount point. +The NFSACL sideband protocol is a proprietary protocol +implemented in Solaris that manages Access Control Lists. NFSACL was never +made a standard part of the NFS protocol specification. 
+.IP +If neither +.B acl +nor +.B noacl +option is specified, +the NFS client negotiates with the server +to see if the NFSACL protocol is supported, +and uses it if the server supports it. +Disabling the NFSACL sideband protocol may be necessary +if the negotiation causes problems on the client or server. +Refer to the SECURITY CONSIDERATIONS section for more details. +.TP 1.5i +.BR rdirplus " / " nordirplus +Selects whether to use NFS version 3 READDIRPLUS requests. +If this option is not specified, the NFS client uses READDIRPLUS requests +on NFS version 3 mounts to read small directories. +Some applications perform better if the client uses only READDIR requests +for all directories. +.SS "Valid options for the nfs4 file system type" +Use these options, along with the options in the first subsection above, +for mounting the +.B nfs4 +file system type. +.TP 1.5i +.BI proto= transport +The transport the NFS client uses +to transmit requests to the NFS server for this mount point. +.I transport +can be either +.B udp +or +.BR tcp . +All NFS version 4 servers are required to support TCP, +so if this mount option is not specified, the NFS version 4 client +uses the TCP transport protocol. +Refer to the TRANSPORT METHODS section for more details. +.TP 1.5i +.BI port= n +The numeric value of the server's NFS service port. +If the server's NFS service is not available on the specified port, +the mount request fails. +.IP +If this mount option is not specified, +the NFS client uses the standard NFS port number of 2049 +without first checking the server's rpcbind service. +This allows an NFS version 4 client to contact an NFS version 4 +server through a firewall that may block rpcbind requests. +.IP +If the specified port value is 0, +then the NFS client uses the NFS service port number +advertised by the server's rpcbind service. +The mount request fails if the server's rpcbind service is not available, +the server's NFS service is not registered with its rpcbind service, +or the server's NFS service is not available on the advertised port. +.TP 1.5i +.BR intr " / " nointr +Selects whether to allow signals to interrupt file operations +on this mount point. If neither option is specified (or if +.B intr +is specified), system calls return EINTR if an in-progress NFS operation +is interrupted by a signal. If +.B nointr +is specified, signals do not +interrupt NFS operations. +.IP +Using the +.B intr +option is preferred to using the +.B soft +option because it is significantly less likely to result in data corruption. +.TP 1.5i +.BR cto " / " nocto +Selects whether to use close-to-open cache coherence semantics +for NFS directories on this mount point. +If neither +.B cto +nor +.B nocto +is specified, +the default is to use close-to-open cache coherence +semantics for directories. +.IP +File data caching behavior is not affected by this option. +The DATA AND METADATA COHERENCE section discusses +the behavior of this option in more detail. +.TP 1.5i +.BI clientaddr= n.n.n.n +Specifies a single IPv4 address (in dotted-quad form) +that the NFS client advertises to allow servers +to perform NFS version 4 callback requests against +files on this mount point. If the server is unable to +establish callback connections to clients, performance +may degrade, or accesses to files may temporarily hang. +.IP +If this option is not specified, the +.BR mount (8) +command attempts to discover an appropriate callback address automatically. +The automatic discovery process is not perfect, however. 
+In the presence of multiple client network interfaces, +special routing policies, +or atypical network topologies, +the exact address to use for callbacks may be nontrivial to determine. +.SH EXAMPLES +To mount an export using NFS version 2, +use the +.B nfs +file system type and specify the +.B nfsvers=2 +mount option. +To mount using NFS version 3, +use the +.B nfs +file system type and specify the +.B nfsvers=3 +mount option. +To mount using NFS version 4, +use the +.B nfs4 +file system type. +The +.B nfsvers +mount option is not supported for the +.B nfs4 +file system type. +.P +The following example from an +.I /etc/fstab +file causes the mount command to negotiate +reasonable defaults for NFS behavior. +.P +.NF +.TA 2.5i +0.7i +0.7i +.7i + server:/export /mnt nfs defaults 0 0 +.FI +.P +Here is an example from an /etc/fstab file for an NFS version 2 mount over UDP. +.P +.NF +.TA 2.5i +0.7i +0.7i +.7i + server:/export /mnt nfs nfsvers=2,proto=udp 0 0 +.FI +.P +Try this example to mount using NFS version 4 over TCP +with Kerberos 5 mutual authentication. +.P +.NF +.TA 2.5i +0.7i +0.7i +.7i + server:/export /mnt nfs4 sec=krb5 0 0 +.FI +.P +This example can be used to mount /usr over NFS. +.P +.NF +.TA 2.5i +0.7i +0.7i +.7i + server:/export /usr nfs ro,nolock,nocto,actimeo=3600 0 0 +.FI +.SH "TRANSPORT METHODS" +NFS clients send requests to NFS servers via +Remote Procedure Calls, or +.IR RPCs . +The RPC client discovers remote service endpoints automatically, +handles per-request authentication, +adjusts request parameters for different byte endianness on client and server, +and retransmits requests that may have been lost by the network or server. +RPC requests and replies flow over a network transport. +.P +In most cases, the +.BR mount (8) +command, NFS client, and NFS server +can automatically negotiate proper transport +and data transfer size settings for a mount point. +In some cases, however, it pays to specify +these settings explicitly using mount options. +.P +Traditionally, NFS clients used the UDP transport exclusively for +transmitting requests to servers. Though its implementation is +simple, NFS over UDP has many limitations that prevent smooth +operation and good performance in some common deployment +environments. Even an insignificant packet loss rate results in the +loss of whole NFS requests; as such, retransmit timeouts are usually +in the subsecond range to allow clients to recover quickly from +dropped requests, but this can result in extraneous network traffic +and server load. +.P +However, UDP can be quite effective in specialized settings where +the network’s MTU is large relative to NFS’s data transfer size (such +as network environments that enable jumbo Ethernet frames). In such +environments, trimming the +.B rsize +and +.B wsize +settings so that each +NFS read or write request fits in just a few network frames (or even +in a single frame) is advised. This reduces the probability that +the loss of a single MTU-sized network frame results in the loss of +an entire large read or write request. +.P +TCP is the default transport protocol used for all modern NFS +implementations. It performs well in almost every conceivable +network environment and provides excellent guarantees against data +corruption caused by network unreliability. TCP is often a +requirement for mounting a server through a network firewall. +.P +Under normal circumstances, networks drop packets much more +frequently than NFS servers drop requests. 
As such, an aggressive +retransmit timeout setting for NFS over TCP is unnecessary. Typical +timeout settings for NFS over TCP are between one and ten minutes. +After the client exhausts its retransmits (the value of the +.B retrans +mount option), it assumes a network partition has occurred, +and attempts to reconnect to the server on a fresh socket. Since +TCP itself makes network data transfer reliable, +.B rsize +and +.B wsize +can safely be allowed to default to the largest values supported by +both client and server, independent of the network's MTU size. +.SS "Using the mountproto mount option" +This section applies only to NFS version 2 and version 3 mounts +since NFS version 4 does not use a separate protocol for mount +requests. +.P +The Linux NFS client can use a different transport for +contacting an NFS server's rpcbind service, its mountd service, +its Network Lock Manager (NLM) service, and its NFS service. +The exact transports employed by the Linux NFS client for +each mount point depends on the settings of the transport +mount options, which include +.BR proto , +.BR mountproto , +.BR udp ", and " tcp . +.P +The client sends Network Status Manager (NSM) notifications +via UDP no matter what transport options are specified, but +listens for server NSM notifications on both UDP and TCP. +The NFS Access Control List (NFSACL) protocol shares the same +transport as the main NFS service. +.P +If no transport options are specified, the Linux NFS client +uses UDP to contact the server's mountd service, and TCP to +contact its NLM and NFS services by default. +.P +If the server does not support these transports for these services, the +.BR mount (8) +command attempts to discover what the server supports, and then retries +the mount request once using the discovered transports. +If the server does not advertise any transport supported by the client +or is misconfigured, the mount request fails. +If the +.B bg +option is in effect, the mount command backgrounds itself and continues +to attempt the specified mount request. +.P +When the +.B proto +option, the +.B udp +option, or the +.B tcp +option is specified but the +.B mountproto +option is not, the specified transport is used to contact +both the server's mountd service and for the NLM and NFS services. +.P +If the +.B mountproto +option is specified but none of the +.BR proto ", " udp " or " tcp +options are specified, then the specified transport is used for the +initial mountd request, but the mount command attempts to discover +what the server supports for the NFS protocol, preferring TCP if +both transports are supported. +.P +If both the +.BR mountproto " and " proto +(or +.BR udp " or " tcp ) +options are specified, then the transport specified by the +.B mountproto +option is used for the initial mountd request, and the transport +specified by the +.B proto +option (or the +.BR udp " or " tcp " options)" +is used for NFS, no matter what order these options appear. +No automatic service discovery is performed if these options are +specified. +.P +If any of the +.BR proto ", " udp ", " tcp ", " +or +.B mountproto +options are specified more than once on the same mount command line, +then the value of the rightmost instance of each of these options +takes effect. +.SH "DATA AND METADATA COHERENCE" +Some modern cluster file systems provide +perfect cache coherence among their clients. +Perfect cache coherence among disparate NFS clients +is expensive to achieve, especially on wide area networks. 
+As such, NFS settles for weaker cache coherence that +satisfies the requirements of most file sharing types. Normally, +file sharing is completely sequential: +first client A opens a file, writes something to it, then closes it; +then client B opens the same file, and reads the changes. +.DT +.SS "Close-to-open cache consistency" +When an application opens a file stored on an NFS server, +the NFS client checks that it still exists on the server +and is permitted to the opener by sending a GETATTR or ACCESS request. +When the application closes the file, +the NFS client writes back any pending changes +to the file so that the next opener can view the changes. +This also gives the NFS client an opportunity to report +any server write errors to the application +via the return code from +.BR close (2). +The behavior of checking at open time and flushing at close time +is referred to as close-to-open cache consistency. +.SS "Weak cache consistency" +There are still opportunities for a client's data cache +to contain stale data. +The NFS version 3 protocol introduced "weak cache consistency" +(also known as WCC) which provides a way of efficiently checking +a file's attributes before and after a single request. +This allows a client to help identify changes +that could have been made by other clients. +.P +When a client is using many concurrent operations +that update the same file at the same time +(for example, during asynchronous write behind), +it is still difficult to tell whether it was +that client's updates or some other client's updates +that altered the file. +.SS "Attribute caching" +Use the +.B noac +mount option to achieve attribute cache coherence +among multiple clients. +Almost every file system operation checks +file attribute information. +The client keeps this information cached +for a period of time to reduce network and server load. +When +.B noac +is in effect, a client's file attribute cache is disabled, +so each operation that needs to check a file's attributes +is forced to go back to the server. +This permits a client to see changes to a file very quickly, +at the cost of many extra network operations. +.P +Be careful not to confuse the +.B noac +option with "no data caching." +The +.B noac +mount option prevents the client from caching file metadata, +but there are still races that may result in data cache incoherence +between client and server. +.P +The NFS protocol is not designed to support +true cluster file system cache coherence +without some type of application serialization. +If absolute cache coherence among clients is required, +applications should use file locking. Alternatively, applications +can also open their files with the O_DIRECT flag +to disable data caching entirely. +.SS "The sync mount option" +The NFS client treats the +.B sync +mount option differently than some other file systems +(refer to +.BR mount (8) +for a description of the generic +.B sync and -.IR tcp . -.TP 1.5i -.I clientaddr=n -On a multi-homed client, this -causes the client to use a specific callback address when -communicating with an NFS version 4 server. -This option is currently ignored. -.TP 1.5i -.I sec=mode -Same as \f3sec=mode\f1 for the nfs filesystem type (see above). -.TP 1.5i -.I bg -If an NFS mount attempt times out, retry the mount -in the background. -After a mount operation is backgrounded, all subsequent mounts -on the same NFS server will be backgrounded immediately, without -first attempting the mount. 
-A missing mount point is treated as a timeout, -to allow for nested NFS mounts. -.TP 1.5i -.I fg -If the first NFS mount attempt times out, retry the mount -in the foreground. -This is the complement of the -.I bg -option, and also the default behavior. -.TP 1.5i -.I soft -If an NFS file operation has a major timeout then report an I/O error to -the calling program. -The default is to continue retrying NFS file operations indefinitely. -.TP 1.5i -.I hard -If an NFS file operation has a major timeout then report -"server not responding" on the console and continue retrying indefinitely. -This is the default. -.TP 1.5i -.I intr -If an NFS file operation has a major timeout and it is hard mounted, -then allow signals to interupt the file operation and cause it to -return EINTR to the calling program. The default is to not -allow file operations to be interrupted. -.TP 1.5i -.I nocto -Suppress the retrieval of new attributes when creating a file. -.TP 1.5i -.I noac -Disable attribute caching, and force synchronous writes. -This extracts a -server performance penalty but it allows two different NFS clients -to get reasonable good results when both clients are actively -writing to common filesystem on the server. -.P -All of the non-value options have corresponding nooption forms. -For example, nointr means don't allow file operations to be -interrupted. +.B async +mount options). +If neither +.B sync +nor +.B async +is specified (or if the +.B async +option is specified), +the NFS client delays sending application +writes to the server +until any of these events occur: +.IP +Memory pressure forces reclamation of system memory resources. +.IP +An application flushes file data explicitly with +.BR sync (2), +.BR msync (2), +or +.BR fsync (3). +.IP +An application closes a file with +.BR close (2). +.IP +The file is locked/unlocked via +.BR fcntl (2). +.P +In other words, under normal circumstances, +data written by an application may not immediately appear +on the server that hosts the file. +.P +If the +.B sync +option is specified on a mount point, +any system call that writes data to files on that mount point +causes that data to be flushed to the server +before the system call returns control to user space. +This provides greater data cache coherence among clients, +but at a significant performance cost. +.P +Applications can use the O_SYNC open flag to force application +writes to individual files to go to the server immediately without +the use of the +.B sync +mount option. +.SS "Using file locks with NFS" +The Network Lock Manager protocol is a separate sideband protocol +used to manage file locks in NFS version 2 and version 3. +To support lock recovery after a client or server reboot, +a second sideband protocol -- +known as the Network Status Manager protocol -- +is also required. +In NFS version 4, +file locking is supported directly in the main NFS protocol, +and the NLM and NSM sideband protocols are not used. +.P +In most cases, NLM and NSM services are started automatically, +and no extra configuration is required. +Configure all NFS clients with fully-qualified domain names +to ensure that NFS servers can find clients to notify them of server reboots. +.P +NLM supports advisory file locks only. +To lock NFS files, use +.BR fcntl (2) +with the F_GETLK and F_SETLK commands. +The NFS client converts file locks obtained via +.BR flock (2) +to advisory locks. 
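+.P
+For example, the
+.BR flock (1)
+utility, which acquires its lock via
+.BR flock (2),
+can serialize shell-script updates to a shared file on an NFS mount;
+the resulting lock is advisory, as described above
+(the path and command below are only placeholders):
+.P
+.nf
+	flock /mnt/export/data.lock -c 'update-data.sh'
+.fi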
+.P +When mounting servers that do not support the NLM protocol, +or when mounting an NFS server through a firewall +that blocks the NLM service port, +specify the +.B nolock +mount option. NLM locking must be disabled with the +.B nolock +option when using NFS to mount +.I /var +because +.I /var +contains files used by the NLM implementation on Linux. +.P +Specifying the +.B nolock +option may also be advised to improve the performance +of a proprietary application which runs on a single client +and uses file locks extensively. +.SS "NFS version 4 caching features" +The data and metadata caching behavior of NFS version 4 +clients is similar to that of earlier versions. +However, NFS version 4 adds two features that improve +cache behavior: +.I change attributes +and +.IR "file delegation" . +.P +The +.I change attribute +is a new part of NFS file and directory metadata +which tracks data changes. +It replaces the use of a file's modification +and change time stamps +as a way for clients to validate the content +of their caches. +Change attributes are independent of the time stamp +resolution on either the server or client, however. +.P +A +.I file delegation +is a contract between an NFS version 4 client +and server that allows the client to treat a file temporarily +as if no other client is accessing it. +The server promises to notify the client (via a callback request) if another client +attempts to access that file. +Once a file has been delegated to a client, the client can +cache that file's data and metadata aggressively without +contacting the server. +.P +File delegations come in two flavors: +.I read +and +.IR write . +A +.I read +delegation means that the server notifies the client +about any other clients that want to write to the file. +A +.I write +delegation means that the client gets notified about +either read or write accessors. +.P +Servers grant file delegations when a file is opened, +and can recall delegations at any time when another +client wants access to the file that conflicts with +any delegations already granted. +Delegations on directories are not supported. +.P +In order to support delegation callback, the server +checks the network return path to the client during +the client's initial contact with the server. +If contact with the client cannot be established, +the server simply does not grant any delegations to +that client. +.SH "SECURITY CONSIDERATIONS" +NFS servers control access to file data, +but they depend on their RPC implementation +to provide authentication of NFS requests. +Traditional NFS access control mimics +the standard mode bit access control provided in local file systems. +Traditional RPC authentication uses a number +to represent each user +(usually the user's own uid), +a number to represent the user's group (the user's gid), +and a set of up to 16 auxiliary group numbers +to represent other groups of which the user may be a member. +.P +Typically, file data and user ID values appear unencrypted +(i.e. "in the clear") on the network. +Moreover, NFS versions 2 and 3 use +separate sideband protocols for mounting, +locking and unlocking files, +and reporting system status of clients and servers. +These auxiliary protocols use no authentication. +.P +In addition to combining these sideband protocols with the main NFS protocol, +NFS version 4 introduces more advanced forms of access control, +authentication, and in-transit data protection. 
+The NFS version 4 specification mandates NFSv4 ACLs, +RPCGSS authentication, and RPCGSS security flavors +that provide per-RPC integrity checking and encryption. +Because NFS version 4 combines the +function of the sideband protocols into the main NFS protocol, +the new security features apply to all NFS version 4 operations +including mounting, file locking, and so on. +RPCGSS authentication can also be used with NFS versions 2 and 3, +but does not protect their sideband protocols. +.P +The +.B sec +mount option specifies the RPCGSS security mode +that is in effect on a given NFS mount point. +Specifying +.B sec=krb5 +provides cryptographic proof of a user's identity in each RPC request. +This provides strong verification of the identity of users +accessing data on the server. +Note that additional configuration besides adding this mount option +is required in order to enable Kerberos security. +Refer to the +.BR rpc.gssd (8) +man page for details. +.P +Two additional flavors of Kerberos security are supported: +.B krb5i +and +.BR krb5p . +The +.B krb5i +security flavor provides a cryptographically strong guarantee +that the data in each RPC request has not been tampered with. +The +.B krb5p +security flavor encrypts every RPC request +to prevent data exposure during network transit; however, +expect some performance impact +when using integrity checking or encryption. +Similar support for other forms of cryptographic security (such as lipkey and SPKM3) +is also available. +.P +The NFS version 4 protocol allows +clients and servers to negotiate among multiple security flavors +during mount processing. +However, Linux does not yet implement such negotiation. +The Linux client specifies a single security flavor at mount time +which remains in effect for the lifetime of the mount. +If the server does not support this flavor, +the initial mount request is rejected by the server. +.SS "Using non-privileged source ports" +NFS clients usually communicate with NFS servers via network sockets. +Each end of a socket is assigned a port value, which is simply a number +between 1 and 65535 that distinguishes socket endpoints at the same +IP address. +A socket is uniquely defined by a tuple that includes the transport +protocol (TCP or UDP) and the port values and IP addresses of both +endpoints. +.P +The NFS client can choose any source port value for its sockets, +but usually chooses a +.I privileged +port. +A privileged port is a port value less than 1024. +Only a process with root privileges may create a socket +with a privileged source port. +.P +The exact range of privileged source ports that can be chosen is +set by a pair of sysctls to avoid choosing a well-known port, such as +the port used by ssh. +This means the number of source ports available for the NFS client, +and therefore the number of socket connections that can be used +at the same time, +is practically limited to only a few hundred. +.P +As described above, the traditional default NFS authentication scheme, +known as AUTH_SYS, relies on sending local UID and GID numbers to identify +users making NFS requests. +An NFS server assumes that if a connection comes from a privileged port, +the UID and GID numbers in the NFS requests on this connection have been +verified by the client's kernel or some other local authority. +This is an easy system to spoof, but on a trusted physical network between +trusted hosts, it is entirely adequate. +.P +Roughly speaking, one socket is used for each NFS mount point. 
+If a client could use non-privileged source ports as well,
+the number of sockets allowed,
+and thus the maximum number of concurrent mount points,
+would be much larger.
+.P
+Using non-privileged source ports may compromise server security somewhat,
+since any user on AUTH_SYS mount points can then pretend to be any other user
+when making NFS requests.
+Thus NFS servers do not allow this by default;
+servers that do allow it usually require that it be enabled
+explicitly via an export option.
+.P
+To retain good security while allowing as many mount points as possible,
+it is best to allow non-privileged client connections only if the server
+and client both require strong authentication, such as Kerberos.
+.SS "Mounting through a firewall"
+A firewall may reside between an NFS client and server,
+or the client or server may block some of its own ports via IP
+filter rules.
+It is still possible to mount an NFS server through a firewall,
+though some of the
+.BR mount (8)
+command's automatic service endpoint discovery mechanisms may not work;
+in that case, specific endpoint details must be provided
+via NFS mount options.
+.P
+NFS servers normally run a portmapper or rpcbind daemon to advertise
+their service endpoints to clients. Clients use the rpcbind daemon to determine:
+.IP
+What network port each RPC-based service is using
+.IP
+What transport protocols each RPC-based service supports
+.P
+The rpcbind daemon uses a well-known port number (111) to help clients find a service endpoint.
+Although NFS often uses a standard port number (2049),
+auxiliary services such as the NLM service can choose
+any unused port number at random.
+.P
+Common firewall configurations block the well-known rpcbind port.
+In the absence of an rpcbind service,
+the server administrator fixes the port number
+of NFS-related services so that the firewall
+can allow access to specific NFS service ports.
+Client administrators then specify the port number
+for the mountd service via the
+.BR mount (8)
+command's
+.B mountport
+option.
+It may also be necessary to enforce the use of TCP or UDP
+if the firewall blocks one of those transports.
+.SS "NFS Access Control Lists"
+Solaris allows NFS version 3 clients direct access
+to POSIX Access Control Lists stored in its local file systems.
+This proprietary sideband protocol, known as NFSACL,
+provides richer access control than mode bits.
+Linux implements this protocol
+for compatibility with the Solaris NFS implementation.
+The NFSACL protocol never became a standard part
+of the NFS version 3 specification, however.
+.P
+The NFS version 4 specification mandates a new version
+of Access Control Lists that are semantically richer than POSIX ACLs.
+NFS version 4 ACLs are not fully compatible with POSIX ACLs; as such,
+some translation between the two is required
+in an environment that mixes POSIX ACLs and NFS version 4.
 .SH FILES
+.TP 1.5i
 .I /etc/fstab
-.SH "SEE ALSO"
-.BR fstab "(5), " mount "(8), " umount "(8), " exports (5)
-.SH AUTHOR
-"Rick Sladkey"
+file system table
 .SH BUGS
+The generic
+.B remount
+option is not fully supported.
+Generic options, such as
+.BR rw " and " ro ,
+can be modified using the
+.B remount
+option,
+but NFS-specific options are not all supported.
+The underlying transport or NFS version
+cannot be changed by a remount, for example.
+Performing a remount on an NFS file system mounted with the
+.B noac
+option may have unintended consequences.
+The
+.B noac
+option is a mixture of a generic option,
+.BR sync ,
+and an NFS-specific option,
+.BR actimeo=0 .
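+As an illustration (the server name, export path, and mount point below
+are placeholders), an
+.I /etc/fstab
+entry that specifies
+.B noac
+behaves roughly like one that combines
+.B sync
+with
+.BR actimeo=0 :
+.sp
+.nf
+    server:/export   /mnt   nfs   noac            0 0
+    server:/export   /mnt   nfs   sync,actimeo=0  0 0
+.fi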
 .P
-Checking files on NFS filesystem referenced by file descriptors (i.e. the
-.BR fcntl
-and
-.BR ioctl
-families of functions) may lead to inconsistent result due to the lack of
-consistency check in kernel even if noac is used.
+Before 2.4.7, the Linux NFS client did not support NFS over TCP.
+.P
+Before 2.4.20, the Linux NFS client used a heuristic
+to determine whether cached file data was still valid
+rather than using the standard close-to-open cache coherency method
+described above.
+.P
+Starting with 2.4.22, the Linux NFS client employs
+a Van Jacobson-based RTT estimator to determine retransmit
+timeout values when using NFS over UDP.
+.P
+Before 2.6.0, the Linux NFS client did not support NFS version 4.
+.P
+Before 2.6.8, the Linux NFS client used only synchronous reads and writes
+when the
+.BR rsize " and " wsize
+settings were smaller than the system's page size.
+.P
+The Linux NFS client does not yet support
+certain optional features of the NFS version 4 protocol,
+such as security negotiation, server referrals, and named attributes.
+.SH "SEE ALSO"
+.BR fstab (5),
+.BR mount (8),
+.BR umount (8),
+.BR mount.nfs (8),
+.BR umount.nfs (8),
+.BR exports (5),
+.BR nfsd (8),
+.BR sm-notify (8),
+.BR rpc.statd (8),
+.BR rpc.idmapd (8),
+.BR rpc.gssd (8),
+.BR rpc.svcgssd (8),
+.BR kerberos (1)
+.sp
+RFC 768 for the UDP specification.
+.br
+RFC 793 for the TCP specification.
+.br
+RFC 1094 for the NFS version 2 specification.
+.br
+RFC 1813 for the NFS version 3 specification.
+.br
+RFC 1832 for the XDR specification.
+.br
+RFC 1833 for the RPC bind specification.
+.br
+RFC 2203 for the RPCSEC GSS API protocol specification.
+.br
+RFC 3530 for the NFS version 4 specification.