RFC3010 - NFS version 4 Protocol


Network Working Group S. Shepler
Request for Comments: 3010 B. Callaghan
Obsoletes: 1813, 1094 D. Robinson
Category: Standards Track R. Thurlow
Sun Microsystems Inc.
C. Beame
Hummingbird Ltd.
M. Eisler
Zambeel, Inc.
D. Noveck
Network Appliance, Inc.
December 2000
NFS version 4 Protocol
Status of this Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2000). All Rights Reserved.
Abstract
NFS (Network File System) version 4 is a distributed file system
protocol which owes heritage to NFS protocol versions 2 [RFC1094] and
3 [RFC1813]. Unlike earlier versions, the NFS version 4 protocol
supports traditional file access while integrating support for file
locking and the mount protocol. In addition, support for strong
security (and its negotiation), compound operations, client caching,
and internationalization has been added. Of course, attention has
been applied to making NFS version 4 operate well in an Internet
environment.
Key Words
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC2119.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1. Overview of NFS Version 4 Features . . . . . . . . . . . . 6
1.1.1. RPC and Security . . . . . . . . . . . . . . . . . . . . 6
1.1.2. Procedure and Operation Structure . . . . . . . . . . . 7
1.1.3. File System Model . . . . . . . . . . . . . . . . . . . 8
1.1.3.1. Filehandle Types . . . . . . . . . . . . . . . . . . . 8
1.1.3.2. Attribute Types . . . . . . . . . . . . . . . . . . . 8
1.1.3.3. File System Replication and Migration . . . . . . . . 9
1.1.4. OPEN and CLOSE . . . . . . . . . . . . . . . . . . . . . 9
1.1.5. File locking . . . . . . . . . . . . . . . . . . . . . . 9
1.1.6. Client Caching and Delegation . . . . . . . . . . . . . 10
1.2. General Definitions . . . . . . . . . . . . . . . . . . . 11
2. Protocol Data Types . . . . . . . . . . . . . . . . . . . . 12
2.1. Basic Data Types . . . . . . . . . . . . . . . . . . . . . 12
2.2. Structured Data Types . . . . . . . . . . . . . . . . . . 14
3. RPC and Security Flavor . . . . . . . . . . . . . . . . . . 18
3.1. Ports and Transports . . . . . . . . . . . . . . . . . . . 18
3.2. Security Flavors . . . . . . . . . . . . . . . . . . . . . 18
3.2.1. Security mechanisms for NFS version 4 . . . . . . . . . 19
3.2.1.1. Kerberos V5 as security triple . . . . . . . . . . . . 19
3.2.1.2. LIPKEY as a security triple . . . . . . . . . . . . . 19
3.2.1.3. SPKM-3 as a security triple . . . . . . . . . . . . . 20
3.3. Security Negotiation . . . . . . . . . . . . . . . . . . . 21
3.3.1. Security Error . . . . . . . . . . . . . . . . . . . . . 21
3.3.2. SECINFO . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4. Callback RPC Authentication . . . . . . . . . . . . . . . 22
4. Filehandles . . . . . . . . . . . . . . . . . . . . . . . . 23
4.1. Obtaining the First Filehandle . . . . . . . . . . . . . . 24
4.1.1. Root Filehandle . . . . . . . . . . . . . . . . . . . . 24
4.1.2. Public Filehandle . . . . . . . . . . . . . . . . . . . 24
4.2. Filehandle Types . . . . . . . . . . . . . . . . . . . . . 25
4.2.1. General Properties of a Filehandle . . . . . . . . . . . 25
4.2.2. Persistent Filehandle . . . . . . . . . . . . . . . . . 26
4.2.3. Volatile Filehandle . . . . . . . . . . . . . . . . . . 26
4.2.4. One Method of Constructing a Volatile Filehandle . . . . 28
4.3. Client Recovery from Filehandle Expiration . . . . . . . . 28
5. File Attributes . . . . . . . . . . . . . . . . . . . . . . 29
5.1. Mandatory Attributes . . . . . . . . . . . . . . . . . . . 30
5.2. Recommended Attributes . . . . . . . . . . . . . . . . . . 30
5.3. Named Attributes . . . . . . . . . . . . . . . . . . . . . 31
5.4. Mandatory Attributes - Definitions . . . . . . . . . . . . 31
5.5. Recommended Attributes - Definitions . . . . . . . . . . . 33
5.6. Interpreting owner and owner_group . . . . . . . . . . . . 38
5.7. Character Case Attributes . . . . . . . . . . . . . . . . 39
5.8. Quota Attributes . . . . . . . . . . . . . . . . . . . . . 39
5.9. Access Control Lists . . . . . . . . . . . . . . . . . . . 40
5.9.1. ACE type . . . . . . . . . . . . . . . . . . . . . . . . 41
5.9.2. ACE flag . . . . . . . . . . . . . . . . . . . . . . . . 41
5.9.3. ACE Access Mask . . . . . . . . . . . . . . . . . . . . 43
5.9.4. ACE who . . . . . . . . . . . . . . . . . . . . . . . . 44
6. File System Migration and Replication . . . . . . . . . . . 44
6.1. Replication . . . . . . . . . . . . . . . . . . . . . . . 45
6.2. Migration . . . . . . . . . . . . . . . . . . . . . . . . 45
6.3. Interpretation of the fs_locations Attribute . . . . . . . 46
6.4. Filehandle Recovery for Migration or Replication . . . . . 47
7. NFS Server Name Space . . . . . . . . . . . . . . . . . . . 47
7.1. Server Exports . . . . . . . . . . . . . . . . . . . . . . 47
7.2. Browsing Exports . . . . . . . . . . . . . . . . . . . . . 48
7.3. Server Pseudo File System . . . . . . . . . . . . . . . . 48
7.4. Multiple Roots . . . . . . . . . . . . . . . . . . . . . . 49
7.5. Filehandle Volatility . . . . . . . . . . . . . . . . . . 49
7.6. Exported Root . . . . . . . . . . . . . . . . . . . . . . 49
7.7. Mount Point Crossing . . . . . . . . . . . . . . . . . . . 49
7.8. Security Policy and Name Space Presentation . . . . . . . 50
8. File Locking and Share Reservations . . . . . . . . . . . . 50
8.1. Locking . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.1.1. Client ID . . . . . . . . . . . . . . . . . . . . . . . 51
8.1.2. Server Release of Clientid . . . . . . . . . . . . . . . 53
8.1.3. nfs_lockowner and stateid Definition . . . . . . . . . . 54
8.1.4. Use of the stateid . . . . . . . . . . . . . . . . . . . 55
8.1.5. Sequencing of Lock Requests . . . . . . . . . . . . . . 56
8.1.6. Recovery from Replayed Requests . . . . . . . . . . . . 56
8.1.7. Releasing nfs_lockowner State . . . . . . . . . . . . . 57
8.2. Lock Ranges . . . . . . . . . . . . . . . . . . . . . . . 57
8.3. Blocking Locks . . . . . . . . . . . . . . . . . . . . . . 58
8.4. Lease Renewal . . . . . . . . . . . . . . . . . . . . . . 58
8.5. Crash Recovery . . . . . . . . . . . . . . . . . . . . . . 59
8.5.1. Client Failure and Recovery . . . . . . . . . . . . . . 59
8.5.2. Server Failure and Recovery . . . . . . . . . . . . . . 60
8.5.3. Network Partitions and Recovery . . . . . . . . . . . . 62
8.6. Recovery from a Lock Request Timeout or Abort . . . . . . 63
8.7. Server Revocation of Locks . . . . . . . . . . . . . . . . 63
8.8. Share Reservations . . . . . . . . . . . . . . . . . . . . 65
8.9. OPEN/CLOSE Operations . . . . . . . . . . . . . . . . . . 65
8.10. Open Upgrade and Downgrade . . . . . . . . . . . . . . . 66
8.11. Short and Long Leases . . . . . . . . . . . . . . . . . . 66
8.12. Clocks and Calculating Lease Expiration . . . . . . . . . 67
8.13. Migration, Replication and State . . . . . . . . . . . . 67
8.13.1. Migration and State . . . . . . . . . . . . . . . . . . 67
8.13.2. Replication and State . . . . . . . . . . . . . . . . . 68
8.13.3. Notification of Migrated Lease . . . . . . . . . . . . 69
9. Client-Side Caching . . . . . . . . . . . . . . . . . . . . 69
9.1. Performance Challenges for Client-Side Caching . . . . . . 70
9.2. Delegation and Callbacks . . . . . . . . . . . . . . . . . 71
9.2.1. Delegation Recovery . . . . . . . . . . . . . . . . . . 72
9.3. Data Caching . . . . . . . . . . . . . . . . . . . . . . . 74
9.3.1. Data Caching and OPENs . . . . . . . . . . . . . . . . . 74
9.3.2. Data Caching and File Locking . . . . . . . . . . . . . 75
9.3.3. Data Caching and Mandatory File Locking . . . . . . . . 77
9.3.4. Data Caching and File Identity . . . . . . . . . . . . . 77
9.4. Open Delegation . . . . . . . . . . . . . . . . . . . . . 78
9.4.1. Open Delegation and Data Caching . . . . . . . . . . . . 80
9.4.2. Open Delegation and File Locks . . . . . . . . . . . . . 82
9.4.3. Recall of Open Delegation . . . . . . . . . . . . . . . 82
9.4.4. Delegation Revocation . . . . . . . . . . . . . . . . . 84
9.5. Data Caching and Revocation . . . . . . . . . . . . . . . 84
9.5.1. Revocation Recovery for Write Open Delegation . . . . . 85
9.6. Attribute Caching . . . . . . . . . . . . . . . . . . . . 85
9.7. Name Caching . . . . . . . . . . . . . . . . . . . . . . . 86
9.8. Directory Caching . . . . . . . . . . . . . . . . . . . . 87
10. Minor Versioning . . . . . . . . . . . . . . . . . . . . . 88
11. Internationalization . . . . . . . . . . . . . . . . . . . 91
11.1. Universal Versus Local Character Sets . . . . . . . . . . 91
11.2. Overview of Universal Character Set Standards . . . . . . 92
11.3. Difficulties with UCS-4, UCS-2, Unicode . . . . . . . . . 93
11.4. UTF-8 and its solutions . . . . . . . . . . . . . . . . . 94
11.5. Normalization . . . . . . . . . . . . . . . . . . . . . . 94
12. Error Definitions . . . . . . . . . . . . . . . . . . . . . 95
13. NFS Version 4 Requests . . . . . . . . . . . . . . . . . . 99
13.1. Compound Procedure . . . . . . . . . . . . . . . . . . . 100
13.2. Evaluation of a Compound Request . . . . . . . . . . . . 100
13.3. Synchronous Modifying Operations . . . . . . . . . . . . 101
13.4. Operation Values . . . . . . . . . . . . . . . . . . . . 102
14. NFS Version 4 Procedures . . . . . . . . . . . . . . . . . 102
14.1. Procedure 0: NULL - No Operation . . . . . . . . . . . . 102
14.2. Procedure 1: COMPOUND - Compound Operations . . . . . . . 102
14.2.1. Operation 3: ACCESS - Check Access Rights . . . . . . . 105
14.2.2. Operation 4: CLOSE - Close File . . . . . . . . . . . . 108
14.2.3. Operation 5: COMMIT - Commit Cached Data . . . . . . . 109
14.2.4. Operation 6: CREATE - Create a Non-Regular File Object. 112
14.2.5. Operation 7: DELEGPURGE - Purge Delegations Awaiting
Recovery . . . . . . . . . . . . . . . . . . . . . . . 114
14.2.6. Operation 8: DELEGRETURN - Return Delegation . . . . . 115
14.2.7. Operation 9: GETATTR - Get Attributes . . . . . . . . . 115
14.2.8. Operation 10: GETFH - Get Current Filehandle . . . . . 117
14.2.9. Operation 11: LINK - Create Link to a File . . . . . . 118
14.2.10. Operation 12: LOCK - Create Lock . . . . . . . . . . . 119
14.2.11. Operation 13: LOCKT - Test For Lock . . . . . . . . . 121
14.2.12. Operation 14: LOCKU - Unlock File . . . . . . . . . . 122
14.2.13. Operation 15: LOOKUP - Lookup Filename . . . . . . . . 123
14.2.14. Operation 16: LOOKUPP - Lookup Parent Directory . . . 126
14.2.15. Operation 17: NVERIFY - Verify Difference in
Attributes . . . . . . . . . . . . . . . . . . . . . . 127
14.2.16. Operation 18: OPEN - Open a Regular File . . . . . . . 128
14.2.17. Operation 19: OPENATTR - Open Named Attribute
Directory . . . . . . . . . . . . . . . . . . . . . . 137
14.2.18. Operation 20: OPEN_CONFIRM - Confirm Open . . . . . . 138
14.2.19. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access 140
14.2.20. Operation 22: PUTFH - Set Current Filehandle . . . . . 141
14.2.21. Operation 23: PUTPUBFH - Set Public Filehandle . . . . 142
14.2.22. Operation 24: PUTROOTFH - Set Root Filehandle . . . . 143
14.2.23. Operation 25: READ - Read from File . . . . . . . . . 144
14.2.24. Operation 26: READDIR - Read Directory . . . . . . . . 146
14.2.25. Operation 27: READLINK - Read Symbolic Link . . . . . 150
14.2.26. Operation 28: REMOVE - Remove Filesystem Object . . . 151
14.2.27. Operation 29: RENAME - Rename Directory Entry . . . . 153
14.2.28. Operation 30: RENEW - Renew a Lease . . . . . . . . . 155
14.2.29. Operation 31: RESTOREFH - Restore Saved Filehandle . . 156
14.2.30. Operation 32: SAVEFH - Save Current Filehandle . . . . 157
14.2.31. Operation 33: SECINFO - Obtain Available Security . . 158
14.2.32. Operation 34: SETATTR - Set Attributes . . . . . . . . 160
14.2.33. Operation 35: SETCLIENTID - Negotiate Clientid . . . . 162
14.2.34. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid . 163
14.2.35. Operation 37: VERIFY - Verify Same Attributes . . . . 164
14.2.36. Operation 38: WRITE - Write to File . . . . . . . . . 166
15. NFS Version 4 Callback Procedures . . . . . . . . . . . . . 170
15.1. Procedure 0: CB_NULL - No Operation . . . . . . . . . . . 170
15.2. Procedure 1: CB_COMPOUND - Compound Operations . . . . . 171
15.2.1. Operation 3: CB_GETATTR - Get Attributes . . . . . . . 172
15.2.2. Operation 4: CB_RECALL - Recall an Open Delegation . . 173
16. Security Considerations . . . . . . . . . . . . . . . . . . 174
17. IANA Considerations . . . . . . . . . . . . . . . . . . . . 174
17.1. Named Attribute Definition . . . . . . . . . . . . . . . 174
18. RPC definition file . . . . . . . . . . . . . . . . . . . . 175
19. Bibliography . . . . . . . . . . . . . . . . . . . . . . . 206
20. Authors . . . . . . . . . . . . . . . . . . . . . . . . . . 210
20.1. Editor's Address . . . . . . . . . . . . . . . . . . . . 210
20.2. Authors' Addresses . . . . . . . . . . . . . . . . . . . 210
20.3. Acknowledgements . . . . . . . . . . . . . . . . . . . . 211
21. Full Copyright Statement . . . . . . . . . . . . . . . . . 212
1. Introduction
The NFS version 4 protocol is a further revision of the NFS protocol
defined already by versions 2 [RFC1094] and 3 [RFC1813]. It retains
the essential characteristics of previous versions: design for easy
recovery, independent of transport protocols, operating systems and
filesystems, simplicity, and good performance. The NFS version 4
revision has the following goals:
o Improved access and good performance on the Internet.
The protocol is designed to transit firewalls easily, perform well
where latency is high and bandwidth is low, and scale to very
large numbers of clients per server.
o Strong security with negotiation built into the protocol.
The protocol builds on the work of the ONCRPC working group in
supporting the RPCSEC_GSS protocol. Additionally, the NFS version
4 protocol provides a mechanism to allow clients and servers the
ability to negotiate security and require clients and servers to
support a minimal set of security schemes.
o Good cross-platform interoperability.
The protocol features a file system model that provides a useful,
common set of features that does not unduly favor one file system
or operating system over another.
o Designed for protocol extensions.
The protocol is designed to accept standard extensions that do not
compromise backward compatibility.
1.1. Overview of NFS Version 4 Features
To provide a reasonable context for the reader, the major features of
the NFS version 4 protocol are reviewed in brief, both for the reader
who is familiar with previous versions of the NFS protocol and for
the reader who is new to it. Even the new reader, however, is
expected to have some fundamental knowledge. The reader
should be familiar with the XDR and RPC protocols as described in
[RFC1831] and [RFC1832]. A basic knowledge of file systems and
distributed file systems is expected as well.
1.1.1. RPC and Security
As with previous versions of NFS, the External Data Representation
(XDR) and Remote Procedure Call (RPC) mechanisms used for the NFS
version 4 protocol are those defined in [RFC1831] and [RFC1832]. To
meet end to end security requirements, the RPCSEC_GSS framework
[RFC2203] will be used to extend the basic RPC security. With the
use of RPCSEC_GSS, various mechanisms can be provided to offer
authentication, integrity, and privacy to the NFS version 4 protocol.
Kerberos V5 will be used as described in [RFC1964] to provide one
security framework. The LIPKEY GSS-API mechanism described in
[RFC2847] will be used to provide for the use of user password and
server public key by the NFS version 4 protocol. With the use of
RPCSEC_GSS, other mechanisms may also be specified and used for NFS
version 4 security.
To enable in-band security negotiation, the NFS version 4 protocol
has added a new operation which provides the client a method of
querying the server about its policies regarding which security
mechanisms must be used for access to the server's file system
resources. With this, the client can securely match the security
mechanism that meets the policies specified at both the client and
server.
1.1.2. Procedure and Operation Structure
A significant departure from the previous versions of the NFS
protocol is the introduction of the COMPOUND procedure. For the NFS
version 4 protocol, there are two RPC procedures, NULL and COMPOUND.
The COMPOUND procedure is defined in terms of operations and these
operations correspond more closely to the traditional NFS procedures.
With the use of the COMPOUND procedure, the client is able to build
simple or complex requests. These COMPOUND requests allow for a
reduction in the number of RPCs needed for logical file system
operations. For example, without previous contact with a server a
client will be able to read data from a file in one request by
combining LOOKUP, OPEN, and READ operations in a single COMPOUND RPC.
With previous versions of the NFS protocol, this type of single
request was not possible.
The model used for COMPOUND is very simple. There is no logical OR
or ANDing of operations. The operations combined within a COMPOUND
request are evaluated in order by the server. Once an operation
returns a failing result, the evaluation ends and the results of all
evaluated operations are returned to the client.
The NFS version 4 protocol continues to have the client refer to a
file or directory at the server by a "filehandle". The COMPOUND
procedure has a method of passing a filehandle from one operation to
another within the sequence of operations. There is a concept of a
"current filehandle" and "saved filehandle". Most operations use the
"current filehandle" as the file system object to operate upon. The
"saved filehandle" is used as temporary filehandle storage within a
COMPOUND procedure as well as an additional operand for certain
operations.
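The in-order, stop-on-failure evaluation model and the filehandle
threading described above can be sketched as follows. This is an
illustrative sketch only: the handler signatures and data structures
are hypothetical and bear no relation to the protocol's XDR wire
format.

```python
# Illustrative sketch (not the XDR wire format): COMPOUND operations are
# evaluated in order, evaluation stops at the first failing status, and a
# "current filehandle" is threaded from one operation to the next.

NFS4_OK = 0
NFS4ERR_NOENT = 2

def eval_compound(operations, state):
    """Evaluate operations in order; stop once an operation fails."""
    current_fh = None    # current filehandle
    saved_fh = None      # saved filehandle (used by SAVEFH/RESTOREFH)
    results = []
    for op in operations:
        status, current_fh, saved_fh = op(state, current_fh, saved_fh)
        results.append(status)
        if status != NFS4_OK:
            break        # remaining operations are not evaluated
    return results

# Toy handlers standing in for PUTROOTFH and LOOKUP.
def putrootfh(state, cfh, sfh):
    return NFS4_OK, state["root"], sfh

def lookup(name):
    def op(state, cfh, sfh):
        child = state["tree"].get((cfh, name))
        if child is None:
            return NFS4ERR_NOENT, cfh, sfh
        return NFS4_OK, child, sfh
    return op

state = {"root": "fh-root", "tree": {("fh-root", "etc"): "fh-etc"}}
assert eval_compound([putrootfh, lookup("etc")], state) == [NFS4_OK, NFS4_OK]
# The failing LOOKUP ends evaluation; the trailing operation never runs.
assert eval_compound(
    [putrootfh, lookup("missing"), lookup("etc")], state
) == [NFS4_OK, NFS4ERR_NOENT]
```

Note how the results list returned for the failing request contains one
entry per evaluated operation, mirroring the rule that the results of
all evaluated operations are returned to the client.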
1.1.3. File System Model
The general file system model used for the NFS version 4 protocol is
the same as previous versions. The server file system is
hierarchical with the regular files contained within being treated as
opaque byte streams. In a slight departure, file and directory names
are encoded with UTF-8 to deal with the basics of
internationalization.
The NFS version 4 protocol does not require a separate protocol to
provide for the initial mapping between path name and filehandle.
Instead of using the older MOUNT protocol for this mapping, the
server provides a ROOT filehandle that represents the logical root or
top of the file system tree provided by the server. The server
provides multiple file systems by gluing them together with pseudo
file systems. These pseudo file systems provide for potential gaps
in the path names between real file systems.
1.1.3.1. Filehandle Types
In previous versions of the NFS protocol, the filehandle provided by
the server was guaranteed to be valid or persistent for the lifetime
of the file system object to which it referred. For some server
implementations, this persistence requirement has been difficult to
meet. For the NFS version 4 protocol, this requirement has been
relaxed by introducing another type of filehandle, volatile. With
persistent and volatile filehandle types, the server implementation
can match the abilities of the file system at the server along with
the operating environment. The client will have knowledge of the
type of filehandle being provided by the server and can be prepared
to deal with the semantics of each.
1.1.3.2. Attribute Types
The NFS version 4 protocol introduces three classes of file system or
file attributes. Like the additional filehandle type, the
classification of file attributes has been done to ease server
implementations along with extending the overall functionality of the
NFS protocol. This attribute model is structured to be extensible
such that new attributes can be introduced in minor revisions of the
protocol without requiring significant rework.
The three classifications are: mandatory, recommended and named
attributes. This is a significant departure from the previous
attribute model used in the NFS protocol. Previously, the attributes
for the file system and file objects were a fixed set of mainly Unix
attributes. If the server or client did not support a particular
attribute, it would have to simulate the attribute the best it could.
Mandatory attributes are the minimal set of file or file system
attributes that must be provided by the server and must be properly
represented by the server. Recommended attributes represent
different file system types and operating environments. The
recommended attributes will allow for better interoperability and the
inclusion of more operating environments. The mandatory and
recommended attribute sets are traditional file or file system
attributes. The third type of attribute is the named attribute. A
named attribute is an opaque byte stream that is associated with a
directory or file and referred to by a string name. Named attributes
are meant to be used by client applications as a method to associate
application specific data with a regular file or directory.
One significant addition to the recommended set of file attributes is
the Access Control List (ACL) attribute. This attribute provides for
directory and file access control beyond the model used in previous
versions of the NFS protocol. The ACL definition allows for
specification of user and group level access control.
1.1.3.3. File System Replication and Migration
With the use of a special file attribute, the ability to migrate or
replicate server file systems is enabled within the protocol. The
file system locations attribute provides a method for the client to
probe the server about the location of a file system. In the event
of a migration of a file system, the client will receive an error
when operating on the file system and it can then query as to the new
file system location. Similar steps are used for replication: the
client is able to query the server for the multiple available
locations of a particular file system. From this information, the
client can use its own policies to access the appropriate file system
location.
1.1.4. OPEN and CLOSE
The NFS version 4 protocol introduces OPEN and CLOSE operations. The
OPEN operation provides a single point where file lookup, creation,
and share semantics can be combined. The CLOSE operation also
provides for the release of state accumulated by OPEN.
1.1.5. File locking
With the NFS version 4 protocol, the support for byte range file
locking is part of the NFS protocol. The file locking support is
structured so that an RPC callback mechanism is not required. This
is a departure from the previous versions of the NFS file locking
protocol, Network Lock Manager (NLM). The state associated with file
locks is maintained at the server under a lease-based model. The
server defines a single lease period for all state held by an NFS
client. If the client does not renew its lease within the defined
period, all state associated with the client's lease may be released
by the server. The client may renew its lease with use of the RENEW
operation or implicitly by use of other operations (primarily READ).
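The lease-based model described above can be sketched as a toy
bookkeeping class. The single per-client lease period and implicit
renewal by other operations come from the text; the class and its
method names are hypothetical.

```python
# Toy sketch of the lease-based locking state model: one fixed lease
# period covers all of a client's state, and any renewal (explicit
# RENEW or implicit via another operation) restarts the clock.

class ClientLease:
    def __init__(self, lease_period, now):
        self.lease_period = lease_period  # one fixed period for all state
        self.last_renewal = now

    def renew(self, now):
        """Explicit RENEW, or implicit renewal by another operation."""
        self.last_renewal = now

    def expired(self, now):
        # Once expired, all state under this lease may be released.
        return (now - self.last_renewal) > self.lease_period

lease = ClientLease(lease_period=90, now=1000.0)
assert not lease.expired(now=1080.0)  # still within the lease period
lease.renew(now=1080.0)               # e.g. a READ implicitly renews
assert not lease.expired(now=1160.0)  # renewal extended the lease
assert lease.expired(now=1200.0)      # lapsed: server may release state
```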
1.1.6. Client Caching and Delegation
The file, attribute, and directory caching for the NFS version 4
protocol is similar to previous versions. Attributes and directory
information are cached for a duration determined by the client. At
the end of a predefined timeout, the client will query the server to
see if the related file system object has been updated.
For file data, the client checks its cache validity when the file is
opened. A query is sent to the server to determine if the file has
been changed. Based on this information, the client determines if
the data cache for the file should be kept or released. Also, when the
file is closed, any modified data is written to the server.
If an application wants to serialize access to file data, file
locking of the file data ranges in question should be used.
The major addition to NFS version 4 in the area of caching is the
ability of the server to delegate certain responsibilities to the
client. When the server grants a delegation for a file to a client,
the client is guaranteed certain semantics with respect to the
sharing of that file with other clients. At OPEN, the server may
provide the client either a read or write delegation for the file.
If the client is granted a read delegation, it is assured that no
other client has the ability to write to the file for the duration of
the delegation. If the client is granted a write delegation, the
client is assured that no other client has read or write access to
the file.
Delegations can be recalled by the server. If another client
requests access to the file in such a way that the access conflicts
with the granted delegation, the server is able to notify the initial
client and recall the delegation. This requires that a callback path
exist between the server and client. If this callback path does not
exist, then delegations can not be granted. The essence of a
delegation is that it allows the client to locally service operations
such as OPEN, CLOSE, LOCK, LOCKU, READ, WRITE without immediate
interaction with the server.
1.2. General Definitions
The following definitions are provided for the purpose of providing
an appropriate context for the reader.
Client The "client" is the entity that accesses the NFS server's
resources. The client may be an application which contains
the logic to access the NFS server directly. The client
may also be the traditional operating system client that
provides remote file system services for a set of
applications.
In the case of file locking the client is the entity that
maintains a set of locks on behalf of one or more
applications. This client is responsible for crash or
failure recovery for those locks it manages.
Note that multiple clients may share the same transport and
multiple clients may exist on the same network node.
Clientid A 64-bit quantity used as a unique, short-hand reference to
a client supplied Verifier and ID. The server is
responsible for supplying the Clientid.
Lease An interval of time defined by the server for which the
client is irrevocably granted a lock. At the end of a
lease period the lock may be revoked if the lease has not
been extended. The lock must be revoked if a conflicting
lock has been granted after the lease interval.
All leases granted by a server have the same fixed
interval. Note that the fixed interval was chosen to
alleviate the expense a server would have in maintaining
state about variable length leases across server failures.
Lock The term "lock" is used to refer to both record (byte-
range) locks as well as file (share) locks unless
specifically stated otherwise.
Server The "Server" is the entity responsible for coordinating
client access to a set of file systems.
Stable Storage
NFS version 4 servers must be able to recover without data
loss from multiple power failures (including cascading
power failures, that is, several power failures in quick
succession), operating system failures, and hardware
failure of components other than the storage medium itself
(for example, disk, nonvolatile RAM).
Some examples of stable storage that are allowable for an
NFS server include:
1. Media commit of data, that is, the modified data has
been successfully written to the disk media, for
example, the disk platter.
2. An immediate reply disk drive with battery-backed on-
drive intermediate storage or uninterruptible power
system (UPS).
3. Server commit of data with battery-backed intermediate
storage and recovery software.
4. Cache commit with uninterruptible power system (UPS) and
recovery software.
Stateid A 64-bit quantity returned by a server that uniquely
defines the locking state granted by the server for a
specific lock owner for a specific file.
Stateids composed of all bits 0 or all bits 1 have special
meaning and are reserved values.
Verifier A 64-bit quantity generated by the client that the server
can use to determine if the client has restarted and lost
all previous lock state.
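As a small worked example of the Stateid definition above: a 64-bit
stateid whose bits are all zero or all one is reserved. A hypothetical
helper for recognizing the reserved values:

```python
# Hypothetical helper (not part of the protocol): stateids of all-zero
# or all-one bits carry special meaning and are reserved.
STATEID_ALL_ZERO = 0x0000000000000000
STATEID_ALL_ONES = 0xFFFFFFFFFFFFFFFF  # sixty-four one bits

def is_reserved_stateid(stateid):
    return stateid in (STATEID_ALL_ZERO, STATEID_ALL_ONES)

assert is_reserved_stateid(0)
assert is_reserved_stateid(0xFFFFFFFFFFFFFFFF)
assert not is_reserved_stateid(0x1234)  # an ordinary server-issued value
```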
2. Protocol Data Types
The syntax and semantics to describe the data types of the NFS
version 4 protocol are defined in the XDR [RFC1832] and RPC [RFC1831]
documents. The next sections build upon the XDR data types to define
types and structures specific to this protocol.
2.1. Basic Data Types
Data Type Definition
_____________________________________________________________________
int32_t typedef int int32_t;
uint32_t typedef unsigned int uint32_t;
int64_t typedef hyper int64_t;
uint64_t typedef unsigned hyper uint64_t;
attrlist4 typedef opaque attrlist4<>;
Used for file/directory attributes
bitmap4 typedef uint32_t bitmap4<>;
Used in attribute array encoding.
changeid4 typedef uint64_t changeid4;
Used in definition of change_info
clientid4 typedef uint64_t clientid4;
Shorthand reference to client identification
component4 typedef utf8string component4;
Represents path name components
count4 typedef uint32_t count4;
Various count parameters (READ, WRITE, COMMIT)
length4 typedef uint64_t length4;
Describes LOCK lengths
linktext4 typedef utf8string linktext4;
Symbolic link contents
mode4 typedef uint32_t mode4;
Mode attribute data type
nfs_cookie4 typedef uint64_t nfs_cookie4;
Opaque cookie value for READDIR
nfs_fh4 typedef opaque nfs_fh4<NFS4_FHSIZE>;
Filehandle definition; NFS4_FHSIZE is defined as 128
nfs_ftype4 enum nfs_ftype4;
Various defined file types
nfsstat4 enum nfsstat4;
Return value for operations
offset4 typedef uint64_t offset4;
Various offset designations (READ, WRITE, LOCK, COMMIT)
pathname4 typedef component4 pathname4<>;
Represents path name for LOOKUP, OPEN and others
qop4 typedef uint32_t qop4;
Quality of protection designation in SECINFO
sec_oid4 typedef opaque sec_oid4<>;
Security Object Identifier
The sec_oid4 data type is not really opaque.
Instead, it contains an ASN.1 OBJECT IDENTIFIER as used
by GSS-API in the mech_type argument to
GSS_Init_sec_context. See [RFC2078] for details.
seqid4 typedef uint32_t seqid4;
Sequence identifier used for file locking
stateid4 typedef uint64_t stateid4;
State identifier used for file locking and delegation
utf8string typedef opaque utf8string<>;
UTF-8 encoding for strings
verifier4 typedef opaque verifier4[NFS4_VERIFIER_SIZE];
Verifier used for various operations (COMMIT, CREATE,
OPEN, READDIR, SETCLIENTID, WRITE)
NFS4_VERIFIER_SIZE is defined as 8
2.2. Structured Data Types
nfstime4
struct nfstime4 {
int64_t seconds;
uint32_t nseconds;
}
The nfstime4 structure gives the number of seconds and nanoseconds
since midnight or 0 hour January 1, 1970 Coordinated Universal
Time (UTC). Values greater than zero for the seconds field denote
dates after the 0 hour January 1, 1970. Values less than zero for
the seconds field denote dates before the 0 hour January 1, 1970.
In both cases, the nseconds field is to be added to the seconds
field for the final time representation. For example, if the time
to be represented is one-half second before 0 hour January 1,
1970, the seconds field would have a value of negative one (-1)
and the nseconds fields would have a value of one-half second
(500000000). Values greater than 999,999,999 for nseconds are
considered invalid.
This data type is used to pass time and date information. A
server converts to and from its local representation of time when
processing time values, preserving as much accuracy as possible.
If the precision of timestamps stored for a file system object is
less than defined, loss of precision can occur. An adjunct time
maintenance protocol is recommended to reduce client and server
time skew.
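As a non-normative illustration of the nfstime4 arithmetic above (the
nseconds field is always added to the seconds field, so one-half
second before the epoch is represented as (-1, 500000000)), the
following sketch converts between the pair and an exact time value:

```python
from fractions import Fraction

NSEC_PER_SEC = 1_000_000_000

def nfstime4_to_fraction(seconds: int, nseconds: int) -> Fraction:
    """Combine an nfstime4 (seconds, nseconds) pair into exact seconds
    since 0 hour January 1, 1970 UTC.  nseconds is added to seconds in
    both the positive and negative cases, per the text above."""
    if not 0 <= nseconds <= 999_999_999:
        raise ValueError("nseconds values greater than 999,999,999 are invalid")
    return seconds + Fraction(nseconds, NSEC_PER_SEC)

def fraction_to_nfstime4(t: Fraction) -> tuple[int, int]:
    """Split a time back into (seconds, nseconds) with 0 <= nseconds < 10**9."""
    seconds = t.numerator // t.denominator          # floor toward minus infinity
    nseconds = int((t - seconds) * NSEC_PER_SEC)
    return seconds, nseconds
```

Note that the one-half-second-before-the-epoch example round-trips as
(-1, 500000000), not (0, -500000000); nseconds is never negative.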
time_how4
enum time_how4 {
SET_TO_SERVER_TIME4 = 0,
SET_TO_CLIENT_TIME4 = 1
};
settime4
union settime4 switch (time_how4 set_it) {
case SET_TO_CLIENT_TIME4:
nfstime4 time;
default:
void;
};
The above definitions are used as the attribute definitions to
set time values. If set_it is SET_TO_SERVER_TIME4, then the
server uses its local representation of time for the time value.
specdata4
struct specdata4 {
uint32_t specdata1;
uint32_t specdata2;
};
This data type represents additional information for the device
file types NF4CHR and NF4BLK.
fsid4
struct fsid4 {
uint64_t major;
uint64_t minor;
};
This type is the file system identifier that is used as a
mandatory attribute.
fs_location4
struct fs_location4 {
utf8string server<>;
pathname4 rootpath;
};
fs_locations4
struct fs_locations4 {
pathname4 fs_root;
fs_location4 locations<>;
};
The fs_location4 and fs_locations4 data types are used for the
fs_locations recommended attribute which is used for migration
and replication support.
fattr4
struct fattr4 {
bitmap4 attrmask;
attrlist4 attr_vals;
};
The fattr4 structure is used to represent file and directory
attributes.
The bitmap is a counted array of 32 bit integers used to contain
bit values. The position of the integer in the array that
contains bit n can be computed from the expression (n / 32) and
its bit within that integer is (n mod 32).
                     0            1
   +-----------+-----------+-----------+--
   |  count    | 31  ..  0 | 63  .. 32 |
   +-----------+-----------+-----------+--
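The word/bit computation described above can be sketched as follows;
this is a non-normative illustration, not part of the protocol:

```python
def bitmap4_set(bitmap: list[int], n: int) -> None:
    """Set attribute bit n in a counted array of 32-bit words.

    The word index is (n / 32) and the bit within that word is
    (n mod 32), matching the layout pictured above."""
    word = n // 32
    while len(bitmap) <= word:        # grow the counted array as needed
        bitmap.append(0)
    bitmap[word] |= 1 << (n % 32)

def bitmap4_test(bitmap: list[int], n: int) -> bool:
    """Return True if attribute bit n is set in the counted array."""
    word = n // 32
    return word < len(bitmap) and bool(bitmap[word] & (1 << (n % 32)))
```

For example, attribute bit 33 lands in word 1 (33 / 32) at bit
position 1 (33 mod 32).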
change_info4
struct change_info4 {
bool atomic;
changeid4 before;
changeid4 after;
};
This structure is used with the CREATE, LINK, REMOVE, RENAME
operations to let the client know the value of the change
attribute for the directory in which the target file system
object resides.
clientaddr4
struct clientaddr4 {
/* see struct rpcb in RFC1833 */
string r_netid<>;    /* network id */
string r_addr<>;     /* universal address */
};
The clientaddr4 structure is used as part of the SETCLIENTID
operation, either to specify the address of the client that is
using a clientid or as part of the callback registration.
cb_client4
struct cb_client4 {
unsigned int cb_program;
clientaddr4 cb_location;
};
This structure is used by the client to inform the server of its
callback address; it includes the program number and client
address.
nfs_client_id4
struct nfs_client_id4 {
verifier4 verifier;
opaque id<>;
};
This structure is part of the arguments to the SETCLIENTID
operation.
nfs_lockowner4
struct nfs_lockowner4 {
clientid4 clientid;
opaque owner<>;
};
This structure is used to identify the owner of an OPEN share or
file lock.
3. RPC and Security Flavor
The NFS version 4 protocol is a Remote Procedure Call (RPC)
application that uses RPC version 2 and the corresponding eXternal
Data Representation (XDR) as defined in [RFC1831] and [RFC1832]. The
RPCSEC_GSS security flavor as defined in [RFC2203] MUST be used as
the mechanism to deliver stronger security for the NFS version 4
protocol.
3.1. Ports and Transports
Historically, NFS version 2 and version 3 servers have resided on
port 2049. The registered port 2049 [RFC1700] for the NFS protocol
should be the default configuration. Using the registered port for
NFS services means the NFS client will not need to use the RPC
binding protocols as described in [RFC1833]; this will allow NFS to
transit firewalls.
The transport used by the RPC service for the NFS version 4 protocol
MUST provide congestion control comparable to that defined for TCP in
[RFC2581]. If the operating environment implements TCP, the NFS
version 4 protocol SHOULD be supported over TCP. The NFS client and
server may use other transports if they support congestion control as
defined above and in those cases a mechanism may be provided to
override TCP usage in favor of another transport.
If TCP is used as the transport, the client and server SHOULD use
persistent connections. This will prevent the weakening of TCP's
congestion control via short lived connections and will improve
performance for the WAN environment by eliminating the need for SYN
handshakes.
Note that for various timers, the client and server should avoid
inadvertent synchronization of those timers. For further discussion
of the general issue refer to [Floyd].
3.2. Security Flavors
Traditional RPC implementations have included AUTH_NONE, AUTH_SYS,
AUTH_DH, and AUTH_KRB4 as security flavors. With [RFC2203] an
additional security flavor of RPCSEC_GSS has been introduced which
uses the functionality of GSS-API [RFC2078]. This allows for the use
of varying security mechanisms by the RPC layer without the
additional implementation overhead of adding RPC security flavors.
For NFS version 4, the RPCSEC_GSS security flavor MUST be used to
enable the mandatory security mechanism. Other flavors, such as
AUTH_NONE, AUTH_SYS, and AUTH_DH, MAY be implemented as well.
3.2.1. Security mechanisms for NFS version 4
The use of RPCSEC_GSS requires selection of: mechanism, quality of
protection, and service (authentication, integrity, privacy). The
remainder of this document will refer to these three parameters of
the RPCSEC_GSS security as the security triple.
3.2.1.1. Kerberos V5 as security triple
The Kerberos V5 GSS-API mechanism as described in [RFC1964] MUST be
implemented and provide the following security triples.
column descriptions:
1 == number of pseudo flavor
2 == name of pseudo flavor
3 == mechanism's OID
4 == mechanism's algorithm(s)
5 == RPCSEC_GSS service
1      2     3                    4              5
-----------------------------------------------------------------------
390003 krb5  1.2.840.113554.1.2.2 DES MAC MD5    rpc_gss_svc_none
390004 krb5i 1.2.840.113554.1.2.2 DES MAC MD5    rpc_gss_svc_integrity
390005 krb5p 1.2.840.113554.1.2.2 DES MAC MD5    rpc_gss_svc_privacy
                                  for integrity,
                                  and 56 bit DES
                                  for privacy.
Note that the pseudo flavor is presented here as a mapping aid to the
implementor. Because this NFS protocol includes a method to
negotiate security and it understands the GSS-API mechanism, the
pseudo flavor is not needed. The pseudo flavor is needed for NFS
version 3 since the security negotiation is done via the MOUNT
protocol.
For a discussion of NFS' use of RPCSEC_GSS and Kerberos V5, please
see [RFC2623].
3.2.1.2. LIPKEY as a security triple
The LIPKEY GSS-API mechanism as described in [RFC2847] MUST be
implemented and provide the following security triples. The
definition of the columns matches the previous subsection "Kerberos
V5 as security triple".
1 2 3 4 5
-----------------------------------------------------------------------
390006 lipkey 1.3.6.1.5.5.9 negotiated rpc_gss_svc_none
390007 lipkey-i 1.3.6.1.5.5.9 negotiated rpc_gss_svc_integrity
390008 lipkey-p 1.3.6.1.5.5.9 negotiated rpc_gss_svc_privacy
The mechanism algorithm is listed as "negotiated". This is because
LIPKEY is layered on SPKM-3 and in SPKM-3 [RFC2847] the
confidentiality and integrity algorithms are negotiated. Since
SPKM-3 specifies HMAC-MD5 for integrity as MANDATORY, 128 bit
cast5CBC for confidentiality for privacy as MANDATORY, and further
specifies that HMAC-MD5 and cast5CBC MUST be listed first before
weaker algorithms, specifying "negotiated" in column 4 does not
impair interoperability. In the event an SPKM-3 peer does not
support the mandatory algorithms, the other peer is free to accept or
reject the GSS-API context creation.
Because SPKM-3 negotiates the algorithms, subsequent calls to
LIPKEY's GSS_Wrap() and GSS_GetMIC() by RPCSEC_GSS will use a quality
of protection value of 0 (zero). See section 5.2 of [RFC2025] for an
explanation.
LIPKEY uses SPKM-3 to create a secure channel in which to pass a user
name and password from the client to the server. Once the user name
and password have been accepted by the server, calls to the LIPKEY
context are redirected to the SPKM-3 context. See [RFC2847] for more
details.
3.2.1.3. SPKM-3 as a security triple
The SPKM-3 GSS-API mechanism as described in [RFC2847] MUST be
implemented and provide the following security triples. The
definition of the columns matches the previous subsection "Kerberos
V5 as security triple".
1 2 3 4 5
-----------------------------------------------------------------------
390009 spkm3 1.3.6.1.5.5.1.3 negotiated rpc_gss_svc_none
390010 spkm3i 1.3.6.1.5.5.1.3 negotiated rpc_gss_svc_integrity
390011 spkm3p 1.3.6.1.5.5.1.3 negotiated rpc_gss_svc_privacy
For a discussion as to why the mechanism algorithm is listed as
"negotiated", see the previous section "LIPKEY as a security triple."
Because SPKM-3 negotiates the algorithms, subsequent calls to
SPKM-3's GSS_Wrap() and GSS_GetMIC() by RPCSEC_GSS will use a quality of
protection value of 0 (zero). See section 5.2 of [RFC2025] for an
explanation.
Even though LIPKEY is layered over SPKM-3, SPKM-3 is specified as a
mandatory set of triples to handle the situations where the initiator
(the client) is anonymous or where the initiator has its own
certificate. If the initiator is anonymous, there will not be a user
name and password to send to the target (the server). If the
initiator has its own certificate, then using passwords is
superfluous.
3.3. Security Negotiation
With the NFS version 4 server potentially offering multiple security
mechanisms, the client needs a method to determine or negotiate which
mechanism is to be used for its communication with the server. The
NFS server may have multiple points within its file system name space
that are available for use by NFS clients. In turn the NFS server
may be configured such that each of these entry points may have
different or multiple security mechanisms in use.
The security negotiation between client and server must be done with
a secure channel to eliminate the possibility of a third party
intercepting the negotiation sequence and forcing the client and
server to choose a lower level of security than required or desired.
3.3.1. Security Error
Based on the assumption that each NFS version 4 client and server
must support a minimum set of security (i.e. LIPKEY, SPKM-3, and
Kerberos-V5 all under RPCSEC_GSS), the NFS client will start its
communication with the server with one of the minimal security
triples. During communication with the server, the client may
receive an NFS error of NFS4ERR_WRONGSEC. This error allows the
server to notify the client that the security triple currently being
used is not appropriate for access to the server's file system
resources. The client is then responsible for determining what
security triples are available at the server and choosing one which is
appropriate for the client.
3.3.2. SECINFO
The new SECINFO operation will allow the client to determine, on a
per filehandle basis, what security triple is to be used for server
access. In general, the client will not have to use the SECINFO
procedure except during initial communication with the server or when
the client crosses policy boundaries at the server. It is possible
that the server's policies change during the client's interaction,
therefore forcing the client to negotiate a new security triple.
3.4. Callback RPC Authentication
The callback RPC (described later) must mutually authenticate the NFS
server to the principal that acquired the clientid (also described
later), using the same security flavor the original SETCLIENTID
operation used. Because LIPKEY is layered over SPKM-3, it is
permissible for the server to use SPKM-3 and not LIPKEY for the
callback even if the client used LIPKEY for SETCLIENTID.
For AUTH_NONE, there are no principals, so this is a non-issue.
For AUTH_SYS, the server simply uses the AUTH_SYS credential that the
user used when it set up the delegation.
For AUTH_DH, one commonly used convention is that the server uses the
credential corresponding to this AUTH_DH principal:
unix.host@domain
where host and domain are variables corresponding to the name of
server host and directory services domain in which it lives such as a
Network Information System domain or a DNS domain.
Regardless of what security mechanism under RPCSEC_GSS is being used,
the NFS server MUST identify itself in GSS-API via a
GSS_C_NT_HOSTBASED_SERVICE name type. GSS_C_NT_HOSTBASED_SERVICE
names are of the form:
service@hostname
For NFS, the "service" element is
nfs
Implementations of security mechanisms will convert nfs@hostname to
various different forms. For Kerberos V5 and LIPKEY, the following
form is RECOMMENDED:
nfs/hostname
For Kerberos V5, nfs/hostname would be a server principal in the
Kerberos Key Distribution Center database. For LIPKEY, this would be
the username passed to the target (the NFS version 4 client that
receives the callback).
It should be noted that LIPKEY may not work for callbacks, since the
LIPKEY client uses a user id/password. If the NFS client receiving
the callback can authenticate the NFS server's user name/password
pair, and if the user that the NFS server is authenticating to has a
public key certificate, then it works.
In situations where the NFS client uses LIPKEY with a per-host
principal for the SETCLIENTID operation, it is RECOMMENDED that
SPKM-3 with mutual authentication be used instead of LIPKEY.
This effectively means that the client will use a
certificate to authenticate and identify the initiator to the target
on the NFS server. Using SPKM-3 and not LIPKEY has the following
advantages:
o When the server does a callback, it must authenticate to the
principal used in the SETCLIENTID. Even if LIPKEY is used,
because LIPKEY is layered over SPKM-3, the NFS client will need to
have a certificate that corresponds to the principal used in the
SETCLIENTID operation. From an administrative perspective, having
a user name, password, and certificate for both the client and
server is redundant.
o LIPKEY was intended to minimize additional infrastructure
requirements beyond a certificate for the target, and the
expectation is that existing password infrastructure can be
leveraged for the initiator. In some environments, a per-host
password does not exist yet. If certificates are used for any
per-host principals, then additional password infrastructure is
not needed.
o In cases when a host is both an NFS client and server, it can
share the same per-host certificate.
4. Filehandles
The filehandle in the NFS protocol is a per server unique identifier
for a file system object. The contents of the filehandle are opaque
to the client. Therefore, the server is responsible for translating
the filehandle to an internal representation of the file system
object. Since the filehandle is the client's reference to an object
and the client may cache this reference, the server SHOULD NOT reuse
a filehandle for another file system object. If the server needs to
reuse a filehandle value, the time elapsed before reuse SHOULD be
large enough such that it is unlikely the client has a cached copy of
the reused filehandle value. Note that a client may cache a
filehandle for a very long time. For example, a client may cache NFS
data to local storage as a method to expand its effective cache size
and as a means to survive client restarts. Therefore, the lifetime
of a cached filehandle may be extended.
4.1. Obtaining the First Filehandle
The operations of the NFS protocol are defined in terms of one or
more filehandles. Therefore, the client needs a filehandle to
initiate communication with the server. With the NFS version 2
protocol [RFC1094] and the NFS version 3 protocol [RFC1813], there
exists an ancillary protocol to obtain this first filehandle. The
MOUNT protocol, RPC program number 100005, provides the mechanism of
translating a string based file system path name to a filehandle
which can then be used by the NFS protocols.
The MOUNT protocol has deficiencies in the area of security and use
via firewalls. This is one reason that the use of the public
filehandle was introduced in [RFC2054] and [RFC2055]. With the use
of the public filehandle in combination with the LOOKUP procedure in
the NFS version 2 and 3 protocols, it has been demonstrated that the
MOUNT protocol is unnecessary for viable interaction between NFS
client and server.
Therefore, the NFS version 4 protocol will not use an ancillary
protocol for translation from string based path names to a
filehandle. Two special filehandles will be used as starting points
for the NFS client.
4.1.1. Root Filehandle
The first of the special filehandles is the ROOT filehandle. The
ROOT filehandle is the "conceptual" root of the file system name
space at the NFS server. The client uses or starts with the ROOT
filehandle by employing the PUTROOTFH operation. The PUTROOTFH
operation instructs the server to set the "current" filehandle to the
ROOT of the server's file tree. Once this PUTROOTFH operation is
used, the client can then traverse the entirety of the server's file
tree with the LOOKUP procedure. A complete discussion of the server
name space is in the section "NFS Server Name Space".
4.1.2. Public Filehandle
The second special filehandle is the PUBLIC filehandle. Unlike the
ROOT filehandle, the PUBLIC filehandle may be bound to or represent
an arbitrary file system object at the server. The server is
responsible for this binding. It may be that the PUBLIC filehandle
and the ROOT filehandle refer to the same file system object.
However, it is up to the administrative software at the server and
the policies of the server administrator to define the binding of the
PUBLIC filehandle and server file system object. The client may not
make any assumptions about this binding.
4.2. Filehandle Types
In the NFS version 2 and 3 protocols, there was one type of
filehandle with a single set of semantics. The NFS version 4
protocol introduces a new type of filehandle in an attempt to
accommodate certain server environments. The first type of
filehandle is "persistent". The semantics of a persistent filehandle
are the same as the filehandles of the NFS version 2 and 3 protocols.
The second or new type of filehandle is the "volatile" filehandle.
The volatile filehandle type is being introduced to address server
functionality or implementation issues which make correct
implementation of a persistent filehandle infeasible. Some server
environments do not provide a file system level invariant that can be
used to construct a persistent filehandle. The underlying server
file system may not provide the invariant or the server"s file system
programming interfaces may not provide access to the needed
invariant. Volatile filehandles may ease the implementation of
server functionality such as hierarchical storage management or file
system reorganization or migration. However, the volatile filehandle
increases the implementation burden for the client. This increased
burden is deemed acceptable based on the overall gains achieved by
the protocol.
Since the client will need to handle persistent and volatile
filehandles differently, a file attribute is defined which may be used
by the client to determine the filehandle types being returned by the
server.
4.2.1. General Properties of a Filehandle
The filehandle contains all the information the server needs to
distinguish an individual file. To the client, the filehandle is
opaque. The client stores filehandles for use in a later request and
can compare two filehandles from the same server for equality by
doing a byte-by-byte comparison. However, the client MUST NOT
otherwise interpret the contents of filehandles. If two filehandles
from the same server are equal, they MUST refer to the same file. If
they are not equal, the client may use information provided by the
server, in the form of file attributes, to determine whether they
denote the same files or different files. The client would do this
as necessary for client side caching. Servers SHOULD try to maintain
a one-to-one correspondence between filehandles and files but this is
not required. Clients MUST use filehandle comparisons only to
improve performance, not for correct behavior. All clients need to
be prepared for situations in which it cannot be determined whether
two filehandles denote the same object and in such cases, avoid
making invalid assumptions which might cause incorrect behavior.
Further discussion of filehandle and attribute comparison in the
context of data caching is presented in the section "Data Caching and
File Identity".
As an example, in the case that two different path names when
traversed at the server terminate at the same file system object, the
server SHOULD return the same filehandle for each path. This can
occur if a hard link is used to create two file names which refer to
the same underlying file object and associated data. For example, if
paths /a/b/c and /a/d/c refer to the same file, the server SHOULD
return the same filehandle for both path names traversals.
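The comparison rules above can be sketched as a small, non-normative
client-side helper. The fileid attribute used here as a tiebreaker is
one possible choice of file attribute for resolving unequal handles;
the protocol leaves the choice of attributes to the client:

```python
from typing import Optional

def same_object(fh1: bytes, fh2: bytes,
                fileid1: Optional[int] = None,
                fileid2: Optional[int] = None) -> Optional[bool]:
    """Identity check for two filehandles from the same server.

    Equal byte strings MUST denote the same file, so a byte-by-byte
    comparison suffices.  Unequal handles are inconclusive on their
    own; file attributes (here a fileid pair, when available) may
    settle the question.  Otherwise return None ("cannot be
    determined") and the caller must avoid assuming either answer."""
    if fh1 == fh2:
        return True
    if fileid1 is not None and fileid2 is not None:
        return fileid1 == fileid2
    return None
```

A caller should treat the None result as "do not use this comparison
for correctness", consistent with the requirement that filehandle
comparisons only improve performance.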
4.2.2. Persistent Filehandle
A persistent filehandle is defined as having a fixed value for the
lifetime of the file system object to which it refers. Once the
server creates the filehandle for a file system object, the server
MUST accept the same filehandle for the object for the lifetime of
the object. If the server restarts or reboots, the NFS server must
honor the same filehandle value as it did in the server's previous
instantiation. Similarly, if the file system is migrated, the new
NFS server must honor the same file handle as the old NFS server.
The persistent filehandle will become stale or invalid when the
file system object is removed. When the server is presented with a
persistent filehandle that refers to a deleted object, it MUST return
an error of NFS4ERR_STALE. A filehandle may become stale when the
file system containing the object is no longer available. The file
system may become unavailable if it exists on removable media and the
media is no longer available at the server or the file system in
whole has been destroyed or the file system has simply been removed
from the server's name space (i.e. unmounted in a Unix environment).
4.2.3. Volatile Filehandle
A volatile filehandle does not share the same longevity
characteristics of a persistent filehandle. The server may determine
that a volatile filehandle is no longer valid at many different
points in time. If the server can definitively determine that a
volatile filehandle refers to an object that has been removed, the
server should return NFS4ERR_STALE to the client (as is the case for
persistent filehandles). In all other cases where the server
determines that a volatile filehandle can no longer be used, it
should return an error of NFS4ERR_FHEXPIRED.
The mandatory attribute "fh_expire_type" is used by the client to
determine what type of filehandle the server is providing for a
particular file system. This attribute is a bitmask with the
following values:
FH4_PERSISTENT
The value of FH4_PERSISTENT is used to indicate a persistent
filehandle, which is valid until the object is removed from the
file system. The server will not return NFS4ERR_FHEXPIRED for
this filehandle. FH4_PERSISTENT is defined as a value in which
none of the bits specified below are set.
FH4_NOEXPIRE_WITH_OPEN
The filehandle will not expire while client has the file open.
If this bit is set, then the values FH4_VOLATILE_ANY or
FH4_VOL_RENAME do not impact expiration while the file is open.
Once the file is closed or if the FH4_NOEXPIRE_WITH_OPEN bit is
false, the rest of the volatile related bits apply.
FH4_VOLATILE_ANY
The filehandle may expire at any time and will expire during
system migration and rename.
FH4_VOL_MIGRATION
The filehandle will expire during file system migration. May
only be set if FH4_VOLATILE_ANY is not set.
FH4_VOL_RENAME
The filehandle may expire due to a rename. This includes a
rename by the requesting client or a rename by another client.
May only be set if FH4_VOLATILE_ANY is not set.
Servers which provide volatile filehandles should deny a RENAME or
REMOVE that would affect an OPEN file or any of the components
leading to the OPEN file. In addition, the server should deny all
RENAME or REMOVE requests during the grace or lease period upon
server restart.
The reader may be wondering why there are three FH4_VOL* bits and why
FH4_VOLATILE_ANY is exclusive of FH4_VOL_MIGRATION and
FH4_VOL_RENAME. If a filehandle is normally persistent but
cannot persist across a file set migration, then the presence of the
FH4_VOL_MIGRATION or FH4_VOL_RENAME tells the client that it can
treat the file handle as persistent for purposes of maintaining a
file name to file handle cache, except for the specific event
described by the bit. However, FH4_VOLATILE_ANY tells the client
that it should not maintain such a cache for unopened files. A
server MUST NOT present FH4_VOLATILE_ANY with FH4_VOL_MIGRATION or
FH4_VOL_RENAME as this will lead to confusion. FH4_VOLATILE_ANY
implies that the file handle will expire upon migration or rename, in
addition to other events.
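The interaction of the fh_expire_type bits described above can be
illustrated with the following non-normative sketch; the numeric bit
values are taken from the protocol's constant definitions
(FH4_NOEXPIRE_WITH_OPEN = 0x1, FH4_VOLATILE_ANY = 0x2,
FH4_VOL_MIGRATION = 0x4, FH4_VOL_RENAME = 0x8):

```python
FH4_PERSISTENT         = 0x0   # no bits set: filehandle never expires
FH4_NOEXPIRE_WITH_OPEN = 0x1
FH4_VOLATILE_ANY       = 0x2
FH4_VOL_MIGRATION      = 0x4
FH4_VOL_RENAME         = 0x8

def may_expire(fh_expire_type: int, file_is_open: bool) -> bool:
    """Can a filehandle of this class return NFS4ERR_FHEXPIRED now?"""
    if fh_expire_type == FH4_PERSISTENT:
        return False
    if file_is_open and (fh_expire_type & FH4_NOEXPIRE_WITH_OPEN):
        return False          # volatility bits do not apply while open
    return bool(fh_expire_type &
                (FH4_VOLATILE_ANY | FH4_VOL_MIGRATION | FH4_VOL_RENAME))

def valid_expire_type(fh_expire_type: int) -> bool:
    """FH4_VOLATILE_ANY excludes FH4_VOL_MIGRATION and FH4_VOL_RENAME."""
    if fh_expire_type & FH4_VOLATILE_ANY:
        return not (fh_expire_type & (FH4_VOL_MIGRATION | FH4_VOL_RENAME))
    return True
```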
4.2.4. One Method of Constructing a Volatile Filehandle
As mentioned, in some instances a filehandle is stale (no longer
valid; perhaps because the file was removed from the server) or it is
expired (the underlying file is valid but since the filehandle is
volatile, it may have expired). Thus the server needs to be able to
return NFS4ERR_STALE in the former case and NFS4ERR_FHEXPIRED in the
latter case. This can be done by careful construction of the volatile
filehandle. One possible implementation follows.
A volatile filehandle, while opaque to the client could contain:
[volatile bit = 1 | server boot time | slot | generation number]
o slot is an index in the server volatile filehandle table
o generation number is the generation number for the table
entry/slot
If the server boot time is less than the current server boot time,
return NFS4ERR_FHEXPIRED. If slot is out of range, return
NFS4ERR_BADHANDLE. If the generation number does not match, return
NFS4ERR_FHEXPIRED.
When the server reboots, the table is gone (it is volatile).
If volatile bit is 0, then it is a persistent filehandle with a
different structure following it.
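The construction and validation steps above can be sketched as
follows. This is one hypothetical server-side layout (field widths
and byte order are the sketch's own assumptions, not part of the
protocol):

```python
import struct

# Assumed layout: 1-byte volatile flag, 8-byte server boot time,
# 4-byte slot index, 4-byte generation number, big-endian.
FH_FMT = ">BQII"

def make_volatile_fh(boot_time: int, slot: int, generation: int) -> bytes:
    return struct.pack(FH_FMT, 1, boot_time, slot, generation)

def check_volatile_fh(fh: bytes, current_boot_time: int,
                      table: list) -> str:
    """Validate a volatile filehandle against the in-memory slot table.

    Returns "OK" or the error the server would raise.  The table holds
    one generation number per slot; it is volatile, so after a reboot
    it is rebuilt empty and every old handle expires."""
    volatile, boot_time, slot, generation = struct.unpack(FH_FMT, fh)
    if not volatile:
        return "PERSISTENT"           # a different structure follows
    if boot_time < current_boot_time:
        return "NFS4ERR_FHEXPIRED"    # handle predates this server instance
    if slot >= len(table):
        return "NFS4ERR_BADHANDLE"    # slot out of range
    if table[slot] != generation:
        return "NFS4ERR_FHEXPIRED"    # slot was reused for another object
    return "OK"
```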
4.3. Client Recovery from Filehandle Expiration
If possible, the client SHOULD recover from the receipt of an
NFS4ERR_FHEXPIRED error. The client must take on additional
responsibility so that it may prepare itself to recover from the
expiration of a volatile filehandle. If the server returns
persistent filehandles, the client does not need these additional
steps.
For volatile filehandles, most commonly the client will need to store
the component names leading up to and including the file system
object in question. With these names, the client should be able to
recover by finding a filehandle in the name space that is still
available or by starting at the root of the server"s file system name
space.
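The recovery strategy just described, re-walking stored component
names from the root, can be sketched as below. The putrootfh and
lookup callables are hypothetical wrappers around the PUTROOTFH and
LOOKUP operations:

```python
from typing import Callable, Optional

def recover_filehandle(
        components: list[str],
        putrootfh: Callable[[], bytes],
        lookup: Callable[[bytes, str], Optional[bytes]]) -> Optional[bytes]:
    """Re-derive an expired volatile filehandle from stored path components.

    lookup returns None when a component no longer resolves, in which
    case recovery fails and the caller must report the loss."""
    fh = putrootfh()                  # restart at the root of the name space
    for name in components:
        fh = lookup(fh, name)
        if fh is None:
            return None               # path no longer resolves; give up
    return fh
```

In practice the client would restart the walk from the deepest
still-valid cached filehandle rather than always from the root.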
If the expired filehandle refers to an object that has been removed
from the file system, obviously the client will not be able to
recover from the expired filehandle.