The internal REST API used between the proxy server and the account, container, and object servers is almost identical to the public Swift REST API, but with a few internal extensions (for example, updating an account with a new container).
The pluggable back-end APIs for the three REST API servers (account, container, object) abstract the work of servicing the various REST APIs away from the details of how data is laid out and stored on disk.
The APIs are documented in the reference implementations for all three servers. For historical reasons, the object server backend reference implementation module is named diskfile, while the account and container server backend reference implementation modules are named appropriately.
This API is still under development and not yet finalized.
Pluggable Back-end for Account Server
swift.account.backend.AccountBroker(db_file, timeout=25, logger=None, account=None, container=None, pending_timeout=None, stale_reads_ok=False, skip_commits=False)
Encapsulates working with an account database.
create_account_stat_table(conn, put_timestamp)
Create the account_stat table, which is specific to the account DB. Not a part of Pluggable Back-ends; internal to the baseline code.
Parameters: conn – DB connection object; put_timestamp – put timestamp
create_container_table(conn)
Create the container table, which is specific to the account DB.
Parameters: conn – DB connection object
create_policy_stat_table(conn)
Create the policy_stat table, which is specific to the account DB. Not a part of Pluggable Back-ends; internal to the baseline code.
Parameters: conn – DB connection object
empty()
Check if the account DB is empty.
Returns: True if the database has no active containers.
get_info()
Get global data for the account.
Returns: dict with keys: account, created_at, put_timestamp, delete_timestamp, status_changed_at, container_count, object_count, bytes_used, hash, id
get_policy_stats(do_migrations=False)
Get global policy stats for the account.
Parameters: do_migrations – boolean; if True the policy stat dicts will always include the 'container_count' key, otherwise it may be omitted on legacy databases until they are migrated.
Returns: dict of policy stats where the key is the policy index and the value is a dictionary like {'object_count': M, 'bytes_used': N, 'container_count': L}
is_status_deleted()
Only returns True if the status field is set to DELETED.
list_containers_iter(limit, marker, end_marker, prefix, delimiter, reverse=False)
Get a list of containers sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix.
Returns: list of tuples of (name, object_count, bytes_used, put_timestamp, 0)
make_tuple_for_pickle(record)
Turn this db record dict into the format this service uses for pending pickles.
merge_items(item_list, source=None)
Merge items into the container table.
put_container(name, put_timestamp, delete_timestamp, object_count, bytes_used, storage_policy_index)
Create a container with the given attributes.
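Taken together, the methods above support a workflow like the following minimal sketch. It is illustrative only: the db path is arbitrary, and initialize() (inherited from the shared DatabaseBroker base class) and Timestamp.now() are assumed from the surrounding Swift codebase rather than documented in this section.

```python
from swift.account.backend import AccountBroker
from swift.common.utils import Timestamp

# Illustrative path; real brokers live under
# /srv/node/<device>/accounts/<part>/<suffix>/<hash>/<hash>.db
broker = AccountBroker('/tmp/example_account.db', account='AUTH_test')
broker.initialize(Timestamp.now().internal)  # create the tables

# The internal "update an account with a new container" extension:
broker.put_container('images', Timestamp.now().internal, 0, 0, 0,
                     storage_policy_index=0)

print(broker.get_info()['container_count'])
for name, object_count, bytes_used, put_timestamp, _ in \
        broker.list_containers_iter(100, '', None, None, None):
    print(name, object_count, bytes_used)
```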
Pluggable Back-ends for Container Server
swift.container.backend.ContainerBroker(db_file, timeout=25, logger=None, account=None, container=None, pending_timeout=None, stale_reads_ok=False, skip_commits=False, force_db_file=False)
Encapsulates working with a container database.
Note that this may involve multiple on-disk DB files if the container becomes sharded:

- _db_file is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded.
- db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state.
- db_file is the path to whichever db is currently authoritative for the container. Depending on the container's state, this may not be the same as the db_file argument given to __init__(), unless force_db_file is True, in which case db_file is always equal to the db_file argument given to __init__().
- pending_file is always equal to _db_file extended with .pending, i.e. <hash>.db.pending.
create_broker(device_path, part, account, container, logger=None, epoch=None, put_timestamp=None, storage_policy_index=None)
Create a ContainerBroker instance. If the db doesn't exist, initialize the db file.
Returns: a ContainerBroker instance
create_container_info_table(conn, put_timestamp, storage_policy_index)
Create the container_info table, which is specific to the container DB. Not a part of Pluggable Back-ends; internal to the baseline code. Also creates the container_stat view.
create_object_table(conn)
Create the object table, which is specific to the container DB. Not a part of Pluggable Back-ends; internal to the baseline code.
Parameters: conn – DB connection object
create_policy_stat_table(conn, storage_policy_index=0)
Create the policy_stat table.
create_shard_range_table(conn)
Create the shard_range table, which is specific to the container DB.
Parameters: conn – DB connection object
db_file
Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if force_db_file was True when the broker was constructed, then the primary db file is the file passed to the broker constructor.
Returns: a path to a db file; the file does not necessarily exist.
db_files
Gets the cached list of valid db files that exist on disk for this broker. The cached list may be refreshed by calling reload_db_files().
Returns: a list of paths to db files ordered by ascending epoch; the list may be empty.
delete_object(name, timestamp, storage_policy_index=0)
Mark an object deleted.
empty()
Check if the container DB is empty.
This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty.
Returns: True if the database has no active objects, False otherwise
enable_sharding(epoch)
Updates this broker's own shard range with the given epoch, sets its state to SHARDING and persists it in the DB.
Parameters: epoch – a Timestamp
Returns: the broker's updated own shard range.
find_shard_ranges(shard_size, limit=-1, existing_ranges=None)
Scans the container db for shard ranges. Scanning will start at the upper bound of any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace.
This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db.
Returns: a tuple; the first value in the tuple is a list of dicts each having keys {'index', 'lower', 'upper', 'object_count'} in order of ascending 'upper'; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise.
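As a rough illustration, given an initialised ContainerBroker named broker (constructed as in the usage sketch later in this section), a caller such as the sharder daemon might scan for candidate ranges like this; the shard_size and limit values are arbitrary:

```python
# Illustrative only: look for up to 4 shard ranges of ~1000 objects each.
# find_shard_ranges persists nothing; merging results is the caller's job.
shard_data, last_found = broker.find_shard_ranges(shard_size=1000, limit=4)
for rng in shard_data:
    print(rng['index'], rng['lower'], rng['upper'], rng['object_count'])
if shard_data and not last_found:
    print('scan incomplete; resume above', shard_data[-1]['upper'])
```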
get_all_shard_range_data()
Returns a list of all shard range data, including own shard range and deleted shard ranges.
Returns: a list of dict representations of a ShardRange.
get_brokers()
Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry.
Returns: a list of ContainerBroker
get_db_state()
Returns the current state of on disk db files.
get_info()
Get global data for the container.
Returns: dict with keys: account, container, created_at, put_timestamp, delete_timestamp, status_changed_at, object_count, bytes_used, reported_put_timestamp, reported_delete_timestamp, reported_object_count, reported_bytes_used, hash, id, x_container_sync_point1, x_container_sync_point2, storage_policy_index, and db_state.
get_info_is_deleted()
Get the is_deleted status and info for the container.
Returns: a tuple in the form (info, is_deleted), where info is a dict as returned by get_info and is_deleted is a boolean.
get_misplaced_since(start, count)
Get a list of objects which are in a storage policy different from the container's storage policy.
Returns: list of dicts with keys: name, created_at, size, content_type, etag, storage_policy_index
get_objects(limit=None, marker='', end_marker='', include_deleted=None, since_row=None)
Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {'name', 'created_at', 'size', 'content_type', 'etag', 'deleted', 'storage_policy_index'}.
Returns: a list of dicts, each describing an object.
get_own_shard_range(no_default=False)
Returns a shard range representing this broker's own shard range. If no such range has been persisted in the broker's shard ranges table then a default shard range representing the entire namespace will be returned.
The returned shard range will be updated with the current object stats for this broker and a meta timestamp set to the current time. For these values to be persisted the caller must merge the shard range.
Parameters: no_default – if True and the broker's own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned.
Returns: an instance of ShardRange
get_replication_info()
Get information about the DB required for replication.
Returns: dict containing keys from get_info plus max_row and metadata
get_shard_ranges(marker=None, end_marker=None, includes=None, reverse=False, include_deleted=False, states=None, include_own=False, exclude_others=False, fill_gaps=False)
Returns a list of persisted shard ranges.
Returns: a list of instances of ShardRange
get_shard_usage()
Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING.
Returns: a dict with keys {bytes_used, object_count}
get_sharding_sysmeta(key=None)
Returns sharding specific info from the broker's metadata.
Parameters: key – if given, the value stored under key in the sharding info will be returned.
Returns: either a dict of sharding info or the value stored under key in that dict.
is_reclaimable(now, reclaim_age)
Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age.
is_root_container()
Returns True if this container is a root container, False otherwise.
A root container is a container that is not a shard of another container.
list_objects_iter(limit, marker, end_marker, prefix, delimiter, path=None, storage_policy_index=0, reverse=False, include_deleted=False, since_row=None, transform_func=None, all_policies=False)
Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix.
Returns: list of tuples of (name, created_at, size, content_type, etag, deleted)
make_tuple_for_pickle(record)
Turn this db record dict into the format this service uses for pending pickles.
merge_items(item_list, source=None)
Merge items into the object table.
merge_shard_ranges(shard_ranges)
Merge shard ranges into the shard range table.
Parameters: shard_ranges – a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARD_RANGE_KEYS.
put_object(name, timestamp, size, content_type, etag, deleted=0, storage_policy_index=0, ctype_timestamp=None, meta_timestamp=None)
Creates an object in the DB with its metadata.
reload_db_files()
Reloads the cached list of valid on disk db files for this broker.
remove_objects(lower, upper, max_row=None)
Removes object records in the given namespace range from the object table.
Note that objects are removed regardless of their storage_policy_index.
reported(put_timestamp, delete_timestamp, object_count, bytes_used)
Update reported stats, available with container's get_info.
resolve_shard_range_states(states)
Given a list of values, each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list.
The following alias values are supported: 'listing' maps to all states that are considered valid when listing objects; 'updating' maps to all states that are considered valid for redirecting an object update.
Parameters: states – a list of values, each of which may be the name of a state, the number of a state, or an alias
Returns: a set of integer state numbers, or None if no states are given
Raises: ValueError – if any value in the given list is neither a valid state nor a valid alias
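A hedged example of alias resolution, assuming resolve_shard_range_states is invoked via ContainerBroker as described above:

```python
from swift.container.backend import ContainerBroker

# 'listing' is an alias covering all listing-visible states; 'sharding'
# names a single state. The result is a set of integer state numbers,
# or None if no states are given.
nums = ContainerBroker.resolve_shard_range_states(['listing', 'sharding'])
```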
set_sharded_state()
Unlinks the broker's retiring DB file.
Returns: True if the retiring DB was successfully unlinked, False otherwise.
set_sharding_state()
Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The broker's own shard range must have an epoch timestamp for this method to succeed.
Returns: True if the fresh DB was successfully created, False otherwise.
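The sharding-state methods above combine into the following illustrative lifecycle sketch. It assumes a broker and Timestamp as in the earlier examples, and cleave_objects_to_shards is a hypothetical placeholder; the real sharder daemon performs these steps with far more bookkeeping:

```python
from swift.common.utils import Timestamp

# Illustrative ordering only, per the method docs above.
broker.enable_sharding(Timestamp.now())  # own shard range: epoch + SHARDING
if broker.set_sharding_state():          # fresh DB created; two db_files now
    cleave_objects_to_shards(broker)     # hypothetical cleaving step
    broker.set_sharded_state()           # unlink the retiring DB when done
```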
set_sharding_sysmeta(key, value)
Updates the broker's metadata stored under the given key prefixed with a sharding specific namespace.
set_storage_policy_index(policy_index, timestamp=None)
Update the container_stat policy_index and status_changed_at.
sharding_initiated()
Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise.
sharding_required()
Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise.
swift.container.backend.merge_shards(shard_data, existing)
Compares shard_data with existing and updates shard_data with any items of existing that take precedence over the corresponding item in shard_data.
Returns: True if shard_data has been modified, False otherwise.
swift.container.backend.update_new_item_from_existing(new_item, existing)
Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer.
The multiple timestamps are encoded into a single string for storing in the 'created_at' column of the objects db table.
Returns: True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False, implying that the updated item is equal to the existing.
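A minimal ContainerBroker usage sketch tying the methods above together. As with the account example, it is illustrative only: the db path and etag are made up, and initialize() comes from the shared DatabaseBroker base class rather than this section:

```python
from swift.container.backend import ContainerBroker
from swift.common.utils import Timestamp

broker = ContainerBroker('/tmp/example_container.db',
                         account='AUTH_test', container='images')
broker.initialize(Timestamp.now().internal, storage_policy_index=0)

# Record an object row (etag value is illustrative only).
broker.put_object('photo.jpg', Timestamp.now().internal, 1024,
                  'image/jpeg', 'fe3ffa10c64f4df9d1b0b2bf36e10073',
                  storage_policy_index=0)

print(broker.get_info()['object_count'])
for item in broker.list_objects_iter(100, '', '', None, None):
    name, created_at, size, content_type, etag = item[:5]
    print(name, size, content_type)
```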
Disk File Interface for the Swift Object Server
The DiskFile, DiskFileWriter and DiskFileReader classes combined define the on-disk abstraction layer for supporting the object server REST API interfaces (excluding REPLICATE). Other implementations wishing to provide an alternative backend for the object server must implement the three classes. An example alternative implementation can be found in the mem_server.py and mem_diskfile.py modules alongside this one.
The DiskFileManager is a reference implementation specific class and is not part of the backend API.
The remaining methods in this module are considered implementation specific and are also not considered part of the backend API.
swift.obj.diskfile.AuditLocation(path, device, partition, policy)
Represents an object location to be audited.
Other than being a bucket of data, the only useful thing this does is stringify to a filesystem path so the auditor's logs look okay.
swift.obj.diskfile.BaseDiskFile(mgr, device_path, partition, account=None, container=None, obj=None, _datadir=None, policy=None, use_splice=False, pipe_size=None, use_linkat=False, open_expired=False, next_part_power=None, **kwargs)
Manage object files.
This specific implementation manages object files on a disk formatted with a POSIX-compliant file system that supports extended attributes as metadata on a file or directory.
Note
The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments.
The following path format is used for data file locations: <devices_path>/<device_dir>/<datadir>/<partdir>/<suffixdir>/<hashdir>/<datafile>.<ext>
create(size=None)
Context manager to create a file. We create a temporary file first, and then return a DiskFileWriter object to encapsulate the state.
Note
An implementation is not required to perform on-disk preallocations even if the parameter is specified. But if it does and it fails, it must raise a DiskFileNoSpace exception.
Parameters: size – optional initial size of file to explicitly allocate on disk
Raises: DiskFileNoSpace – if a size is specified and allocation fails
delete(timestamp)
Delete the object.
This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. Any file that has an older timestamp than timestamp will be deleted.
Note
An implementation is free to use or ignore the timestamp parameter.
Parameters: timestamp – timestamp to compare with each file
Raises: DiskFileError – this implementation will raise the same errors as the create() method.
durable_timestamp
Provides the timestamp of the newest data file found in the object directory.
Returns: a Timestamp instance, or None if no data file was found.
Raises: DiskFileNotOpen – if the open() method has not been previously called on this instance.
get_datafile_metadata()
Provide the datafile metadata for a previously opened object as a dictionary. This is metadata that was included when the object was first PUT, and does not include metadata set by any subsequent POST.
Returns: object's datafile metadata dictionary
Raises: DiskFileNotOpen – if the swift.obj.diskfile.DiskFile.open() method was not previously invoked
get_metadata()
Provide the metadata for a previously opened object as a dictionary.
Returns: object's metadata dictionary
Raises: DiskFileNotOpen – if the swift.obj.diskfile.DiskFile.open() method was not previously invoked
get_metafile_metadata()
Provide the metafile metadata for a previously opened object as a dictionary. This is metadata that was written by a POST and does not include any persistent metadata that was set by the original PUT.
Returns: object's .meta file metadata dictionary, or None if there is no .meta file
Raises: DiskFileNotOpen – if the swift.obj.diskfile.DiskFile.open() method was not previously invoked
open(modernize=False, current_time=None)
Open the object.
This implementation opens the data file representing the object, reads the associated metadata in the extended attributes, additionally combining metadata from fast-POST .meta files.
Parameters: modernize – if set, update this diskfile to the latest format; current_time – Unix time used in checking expiration. If not present, the current time will be used.
Note
An implementation is allowed to raise a number of exceptions here, but is only required to raise DiskFileNotExist when the object representation does not exist.
Returns: itself for use as a context manager
read_metadata(current_time=None)
Return the metadata for an object without requiring the caller to open the object first.
Parameters: current_time – Unix time used in checking expiration. If not present, the current time will be used.
Returns: metadata dictionary for an object
Raises: DiskFileError – this implementation will raise the same errors as the open() method.
reader(keep_cache=False, _quarantine_hook=<function BaseDiskFile.<lambda>>)
Return a swift.common.swob.Response class compatible "app_iter" object as defined by swift.obj.diskfile.DiskFileReader.
For this implementation, the responsibility of closing the open file is passed to the swift.obj.diskfile.DiskFileReader object.
Returns: a DiskFileReader object
write_metadata(metadata)
Write a block of metadata to an object without requiring the caller to create the object first. Supports fast-POST behavior semantics.
Parameters: metadata – dictionary of metadata to be associated with the object
Raises: DiskFileError – this implementation will raise the same errors as the create() method.
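The BaseDiskFile methods above compose into the canonical write-then-read cycle sketched below. This is a hedged example rather than the object server's actual code: the device layout under /srv/node, the use of DiskFileManager, POLICIES and get_logger, and the metadata keys are assumptions based on the reference implementation:

```python
from swift.common.storage_policy import POLICIES
from swift.common.utils import Timestamp, get_logger
from swift.obj.diskfile import DiskFileManager

# Assumed layout: a device "sdb1" under /srv/node, mount checking off.
conf = {'devices': '/srv/node', 'mount_check': 'false'}
mgr = DiskFileManager(conf, logger=get_logger(conf))

df = mgr.get_diskfile('sdb1', '0', 'AUTH_test', 'images', 'photo.jpg',
                      policy=POLICIES[0])

data = b'hello'
with df.create() as writer:          # temp file behind the scenes
    writer.write(data)
    writer.put({
        'X-Timestamp': Timestamp.now().internal,
        'Content-Length': str(len(data)),
        'Content-Type': 'application/octet-stream',
        # md5('hello'); a mismatched ETag would be quarantined on read
        'ETag': '5d41402abc4b2a76b9719d911017c592',
    })

with df.open():                      # reads xattr metadata, checks expiry
    metadata = df.get_metadata()
    body = b''.join(df.reader())     # reader owns and closes the file
```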
swift.obj.diskfile.BaseDiskFileManager(conf, logger)
Management class for devices, providing common place for shared parameters and methods not provided by the DiskFile class (which primarily services the object server REST API layer).
The get_diskfile() method is how this implementation creates a DiskFile object.
Note
This class is reference implementation specific and not part of the pluggable on-disk backend API.
Note
TODO(portante): Not sure what the right name to recommend here, as "manager" seemed generic enough, though suggestions are welcome.
Parameters: conf – caller provided configuration object; logger – caller provided logger
cleanup_ondisk_files(hsh_path, **kwargs)
Clean up on-disk files that are obsolete and gather the set of valid on-disk files for an object.
Parameters: hsh_path – object hash path
Returns: a dict that may contain: valid on disk files keyed by their filename extension; a list of obsolete files stored under the key 'obsolete'; a list of files remaining in the directory, reverse sorted, stored under the key 'files'.
construct_dev_path(device)
Construct the path to a device without checking if it is mounted.
Parameters: device – name of target device
Returns: full path to the device
get_dev_path(device, mount_check=None)
Return the path to a device, first checking to see if either it is a proper mount point, or at least a directory depending on the mount_check configuration option.
Returns: full path to the device; None if the path to the device is not a proper mount point or directory.
get_diskfile(device, partition, account, container, obj, policy, **kwargs)
Returns a BaseDiskFile instance for an object based on the object's partition, path parts and policy.
get_diskfile_from_audit_location(audit_location)
Returns a BaseDiskFile instance for an object at the given AuditLocation.
Parameters: audit_location – object location to be audited
get_diskfile_from_hash(device, partition, object_hash, policy, **kwargs)
Returns a DiskFile instance for an object at the given object_hash. Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead.
Raises: DiskFileNotExist – if the object does not exist
Returns: an instance of BaseDiskFile
get_hashes(device, partition, suffixes, policy)
Returns: a dictionary that maps suffix directories
get_ondisk_files(files, datadir, verify=True, **kwargs)
Given a simple list of file names, determine the files that constitute a valid fileset, i.e. a set of files that defines the state of an object, and determine the files that are obsolete and could be deleted. Note that some files may fall into neither category.
If a file is considered part of a valid fileset then its info dict will be added to the results dict, keyed by <extension>_info. Any files that are no longer required will have their info dicts added to a list stored under the key 'obsolete'.
The results dict will always contain entries with keys 'ts_file', 'data_file' and 'meta_file'. Their values will be the fully qualified path to a file of the corresponding type if there is such a file in the valid fileset, or None.
make_on_disk_filename(timestamp, ext=None, ctype_timestamp=None, *a, **kw)
Returns the filename for the given timestamp.
Returns: a file name
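For the replication-policy reference implementation, the generated name is simply the timestamp's internal form plus the extension. A small illustrative sketch, with mgr as constructed in the earlier example:

```python
from swift.common.utils import Timestamp

name = mgr.make_on_disk_filename(Timestamp('1401811134.873649'),
                                 ext='.data')
# name == '1401811134.873649.data'
```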
object_audit_location_generator(policy, device_dirs=None, auditor_type='ALL')
Yield an AuditLocation for all objects stored under device_dirs.
parse_on_disk_filename(filename)
Parse an on disk file name.
Parameters: filename – the file name including extension
Returns: a dict, with keys for timestamp, ext and ctype_timestamp. Subclasses may override this method to add further keys to the returned dict.
Raises: DiskFileError – if any part of the filename is not able to be validated.
pickle_async_update(device, account, container, obj, data, timestamp, policy)
Write data describing a container update notification to a pickle file in the async_pending directory.
replication_lock(device)
A context manager that will lock on the device given, if configured to do so.
Parameters: device – name of target device
Raises: ReplicationLockTimeout – if the lock on the device cannot be granted within the configured timeout.
yield_hashes(device, partition, policy, suffixes=None, **kwargs)
Yields tuples of (hash_only, timestamps) for object information stored for the given device, partition, and (optionally) suffixes. If suffixes is None, all stored suffixes will be searched for object hashes. Note that if suffixes is not None but empty, such as [], then nothing will be yielded.
timestamps is a dict which may contain items mapping:

- ts_data -> timestamp of data or tombstone file,
- ts_meta -> timestamp of meta file, if one exists,
- ts_ctype -> timestamp of meta file containing most recent content-type value, if one exists,

where timestamps are instances of Timestamp.
yield_suffixes(device, partition, policy)
Yields tuples of (full_path, suffix_only) for suffixes stored on the given device and partition.
swift.obj.diskfile.BaseDiskFileReader(fp, data_file, obj_size, etag, disk_chunk_size, keep_cache_size, device_path, logger, quarantine_hook, use_splice, pipe_size, diskfile, keep_cache=False)
Encapsulation of the WSGI read context for servicing GET REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile class's swift.obj.diskfile.DiskFile.reader() method.
Note
The quarantining behavior of this method is considered implementation specific, and is not required of the API.
Note
The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments.
app_iter_range(start, stop)
Returns an iterator over the data file for range (start, stop)
app_iter_ranges(ranges, content_type, boundary, size)
Returns an iterator over the data file for a set of ranges
close()
Close the open file handle if present.
For this specific implementation, this method will handle quarantining the file if necessary.
zero_copy_send(wsockfd)
Does some magic with splice() and tee() to move stuff from disk to network without ever touching userspace.
Parameters: wsockfd – file descriptor (integer) of the socket out which to send data
swift.obj.diskfile.BaseDiskFileWriter(name, datadir, fd, tmppath, bytes_per_sync, diskfile, next_part_power)
Encapsulation of the write context for servicing PUT REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile class's swift.obj.diskfile.DiskFile.create() method.
Note
It is the responsibility of the swift.obj.diskfile.DiskFile.create() method context manager to close the open file descriptor.
Note
The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments.
commit(timestamp)
Perform any operations necessary to mark the object as durable. For replication policy type this is a no-op.
Parameters: timestamp – object put timestamp, an instance of Timestamp
put(metadata)
Finalize writing the file on disk.
Parameters: metadata – dictionary of metadata to be associated with the object
write(chunk)
Write a chunk of data to disk. All invocations of this method must come before invoking the put() method.
For this implementation, the data is written into a temporary file.
Parameters: chunk – the chunk of data to write as a string object
Returns: the total number of bytes written to an object
swift.obj.diskfile.DiskFile(mgr, device_path, partition, account=None, container=None, obj=None, _datadir=None, policy=None, use_splice=False, pipe_size=None, use_linkat=False, open_expired=False, next_part_power=None, **kwargs)
reader_cls
alias of DiskFileReader
writer_cls
alias of DiskFileWriter
swift.obj.diskfile.DiskFileManager(conf, logger)
diskfile_cls
alias of DiskFile
swift.obj.diskfile.DiskFileReader(fp, data_file, obj_size, etag, disk_chunk_size, keep_cache_size, device_path, logger, quarantine_hook, use_splice, pipe_size, diskfile, keep_cache=False)

swift.obj.diskfile.DiskFileWriter(name, datadir, fd, tmppath, bytes_per_sync, diskfile, next_part_power)
put(metadata)
Finalize writing the file on disk.
Parameters: metadata – dictionary of metadata to be associated with the object
swift.obj.diskfile.ECDiskFile(*args, **kwargs)
durable_timestamp
Provides the timestamp of the newest durable file found in the object directory.
Returns: a Timestamp instance, or None if no durable file was found.
Raises: DiskFileNotOpen – if the open() method has not been previously called on this instance.
fragments
Provides information about all fragments that were found in the object directory, including fragments without a matching durable file, and including any fragment chosen to construct the opened diskfile.
Returns: a dict mapping <Timestamp instance> -> <list of frag indexes>, or None if the diskfile has not been opened or no fragments were found.
purge(timestamp, frag_index)
Remove a tombstone file matching the specified timestamp or datafile matching the specified timestamp and fragment index from the object directory.
This provides the EC reconstructor/ssync process with a way to remove a tombstone or fragment from a handoff node after reverting it to its primary node.
The hash will be invalidated, and if empty or invalid the hsh_path will be removed on next cleanup_ondisk_files.
reader_cls
alias of ECDiskFileReader
writer_cls
alias of ECDiskFileWriter
swift.obj.diskfile.ECDiskFileManager(conf, logger)
diskfile_cls
alias of ECDiskFile
make_on_disk_filename(timestamp, ext=None, frag_index=None, ctype_timestamp=None, durable=False, *a, **kw)
Returns the EC specific filename for the given timestamp.
Returns: a file name
Raises: DiskFileError – if ext=='.data' and the kwarg frag_index is not a whole number
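An illustrative sketch of the EC variant, assuming ecmgr is an ECDiskFileManager instance; the '#<frag_index>#d' infix shown in the comment reflects the durable-marker naming used by the reference implementation:

```python
from swift.common.utils import Timestamp

name = ecmgr.make_on_disk_filename(Timestamp('1401811134.873649'),
                                   ext='.data', frag_index=3, durable=True)
# name == '1401811134.873649#3#d.data'
```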
parse_on_disk_filename(filename)
Returns timestamp(s) and other info extracted from a policy specific file name. For EC policy the data file name includes a fragment index and possibly a durable marker, both of which must be stripped off to retrieve the timestamp.
Parameters: filename – the file name including extension
Raises: DiskFileError – if any part of the filename is not able to be validated.
validate_fragment_index(frag_index)
Return int representation of frag_index, or raise a DiskFileError if frag_index is not a whole number.
Parameters: frag_index – a fragment archive index
swift.obj.diskfile.ECDiskFileReader(fp, data_file, obj_size, etag, disk_chunk_size, keep_cache_size, device_path, logger, quarantine_hook, use_splice, pipe_size, diskfile, keep_cache=False)

swift.obj.diskfile.ECDiskFileWriter(name, datadir, fd, tmppath, bytes_per_sync, diskfile, next_part_power)
commit(timestamp)
Finalize put by renaming the object data file to include a durable marker. We do this for EC policy because it requires a 2-phase put commit confirmation.
Parameters: timestamp – object put timestamp, an instance of Timestamp
Raises: DiskFileError – if the diskfile frag_index has not been set (either during initialisation or a call to put())
put(metadata)
The only difference between this method and the replication policy DiskFileWriter method is adding the frag index to the metadata.
Parameters: metadata – dictionary of metadata to be associated with object
swift.obj.diskfile.consolidate_hashes(partition_dir)
Take what's in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid.
Parameters: partition_dir – absolute path to partition dir containing hashes.pkl and hashes.invalid
Returns: a dict, the suffix hashes (if any); the key 'valid' will be False if hashes.pkl is corrupt, cannot be read or does not exist
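The hashing helpers are typically used in tandem: a write invalidates its suffix, and a later consolidation (normally done by the replicator) folds hashes.invalid back into hashes.pkl. A hedged sketch with illustrative paths:

```python
from swift.obj.diskfile import consolidate_hashes, invalidate_hash

# Paths are assumptions for the example: a suffix dir and its partition dir.
invalidate_hash('/srv/node/sdb1/objects/0/abc')
hashes = consolidate_hashes('/srv/node/sdb1/objects/0')
if not hashes.get('valid'):
    print('hashes.pkl was missing, unreadable or corrupt')
```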
swift.obj.diskfile.extract_policy(obj_path)
Extracts the policy for an object (based on the name of the objects directory) given the device-relative path to the object. Returns None in the event that the path is malformed in some way.
The device-relative path is everything after the mount point; for example, the full path
/srv/node/d42/objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data
would have device-relative path:
objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data
Parameters: obj_path – device-relative path of an object, or the full path
Returns: a BaseStoragePolicy or None
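A short illustrative call using the device-relative path above; the comment's result assumes a storage policy with index 5 is actually configured:

```python
from swift.obj.diskfile import extract_policy

policy = extract_policy(
    'objects-5/30/179/485dc017205a81df3af616d917c90179/'
    '1401811134.873649.data')
# policy is the configured policy with index 5, or None if no such
# policy exists or the path is malformed.
```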
swift.obj.diskfile.get_part_path(dev_path, policy, partition)
Given the device path, policy, and partition, returns the full path to the partition.
swift.obj.diskfile.invalidate_hash(suffix_dir)
Invalidates the hash for a suffix_dir in the partition's hashes file.
Parameters: suffix_dir – absolute path to suffix dir whose hash needs invalidating
swift.obj.diskfile.object_audit_location_generator(devices, datadir, mount_check=True, logger=None, device_dirs=None, auditor_type='ALL')
Given a devices path (e.g. "/srv/node"), yield an AuditLocation for all objects stored under that directory for the given datadir (policy), if device_dirs isn't set. If device_dirs is set, only yield AuditLocation for the objects under the entries in device_dirs. The AuditLocation only knows the path to the hash directory, not to the .data file therein (if any). This is to avoid a double listdir(hash_dir); the DiskFile object will always do one, so we don't.
swift.obj.diskfile.quarantine_renamer(device_path, corrupted_file_path)
In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it.
Parameters: device_path – the path to the device the corrupted file is on; corrupted_file_path – the path to the file you want quarantined
Returns: path (str) of directory the file was moved to
Raises: OSError – re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename
swift.obj.diskfile.read_hashes(partition_dir)
Read the existing hashes.pkl.
Returns: a dict, the suffix hashes (if any); the key 'valid' will be False if hashes.pkl is corrupt, cannot be read or does not exist
swift.obj.diskfile.read_metadata(fd, add_missing_checksum=False)
Helper function to read the pickled metadata from an object file.
Returns: dictionary of metadata
swift.obj.diskfile.relink_paths(target_path, new_target_path, check_existing=False)
Hard-links a file located in target_path using the second path new_target_path. Creates intermediate directories if required.
swift.obj.diskfile.strip_self(f)
Wrapper to attach module level functions to base class.
swift.obj.diskfile.write_hashes(partition_dir, hashes)
Write hashes to hashes.pkl.
The updated key is added to hashes before it is written.
swift.obj.diskfile.write_metadata(fd, metadata, xattr_size=65536)
Helper function to write pickled metadata for an object file.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.