dfstore is a storage client for Dragonfly. It can rely on different types of object storage,
such as S3 or OSS, to provide stable object storage capabilities.
Dfstore uses the entire P2P network as a cache when storing objects, and relies on S3 or OSS as the backend to ensure storage reliability. During object storage operations, the P2P cache is used for fast reads and writes.
- Provides an object storage service.
- Provides two modes for writing to the backend: WriteBack and AsyncWriteBack.
- The backend is a third-party object storage service; AWS S3 and Aliyun OSS are supported.
- Multiple replicas can be cached within a P2P cluster, and the number of replicas is configurable.
Upload an object to the Dragonfly object storage service.
Put Object With AsyncWriteBack Mode
Dfstore writes the object to the peer synchronously, then to the backend and to other peers asynchronously.
Put Object With WriteBack Mode
Dfstore writes the object to the peer and to the backend synchronously, then to other peers asynchronously.
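The difference between the two modes is whether the backend write blocks the put request. A minimal Go sketch under that reading (not Dragonfly's actual implementation; `store`, `PutWriteBack`, and `PutAsyncWriteBack` are illustrative names):

```go
package main

import (
	"fmt"
	"sync"
)

// store is a hypothetical stand-in for the peer and backend storage;
// it only records which objects were written where.
type store struct {
	mu      sync.Mutex
	peer    []string
	backend []string
	wg      sync.WaitGroup
}

func (s *store) writeToPeer(obj string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.peer = append(s.peer, obj)
}

func (s *store) writeToBackend(obj string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.backend = append(s.backend, obj)
}

// PutWriteBack returns only after both the peer write and the
// backend write have finished.
func (s *store) PutWriteBack(obj string) {
	s.writeToPeer(obj)
	s.writeToBackend(obj)
}

// PutAsyncWriteBack returns as soon as the peer write finishes;
// the backend write completes in the background.
func (s *store) PutAsyncWriteBack(obj string) {
	s.writeToPeer(obj)
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		s.writeToBackend(obj)
	}()
}

func main() {
	s := &store{}
	s.PutWriteBack("a.jpg")
	fmt.Println(len(s.backend)) // 1: the backend write is done when the call returns

	s.PutAsyncWriteBack("b.jpg")
	s.wg.Wait() // wait for the background backend write to finish
	fmt.Println(len(s.backend)) // 2
}
```

WriteBack trades higher put latency for the guarantee that the backend holds the object once the call returns; AsyncWriteBack returns faster but the backend write may still be in flight.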
Download an object from the Dragonfly object storage service.
Hit Peer Cache
Dfstore's download hits the local peer's cache and returns immediately.
Hit Other Peers' Cache (including seed peers)
Dfstore's download hits the cache of other peers and fetches the object from them. Other peers include seed peers and any peers that have cached the object.
If no peer has cached the object, dfstore downloads it from the backend.
The Dragonfly object storage protocol name is
dfs, and the URL is defined as dfs://bucketName/objectKey, where:
- bucketName: the bucket name in the backend third-party object storage service.
- objectKey: the key of the stored object, that is, its storage path.
For example, if the backend is S3 and the uploaded object's URL is
dfs://dragonfly/bar/foo/baz.jpg, then the image is uploaded to
/bar/foo/baz.jpg in the S3 bucket dragonfly.
Step 1: Create the backend object storage service
Create the backend third-party object storage service and bucket, and obtain access credentials for it.
Currently, the backend supports AWS S3 and Aliyun OSS.
Step 2: Configure the third-party object storage service to the manager
Enable the object storage service in the manager's configuration file. The configuration covers:
- whether object storage is enabled
- the object storage type (s3 or oss)
- the storage region
- the datacenter endpoint
- the access key ID and access key secret
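A minimal sketch of the corresponding manager configuration; the exact field names may vary between Dragonfly versions, and the region, endpoint, and credential values below are placeholders:

```yaml
# Manager configuration (sketch): enable object storage and point it
# at the backend third-party service. All values are placeholders.
objectStorage:
  # Enable object storage
  enable: true
  # Object storage type, it can be s3 or oss
  name: s3
  # Storage region
  region: us-west-1
  # Datacenter endpoint
  endpoint: s3.us-west-1.amazonaws.com
  # Access key ID
  accessKey: your-access-key-id
  # Access key secret
  secretKey: your-access-key-secret
```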
Step 3: Enable object storage in the peer
Enable the object storage service, and enable fetching configuration from the manager service, in the dfdaemon's configuration file.
The scheduler-related configuration covers:
- getting the scheduler list dynamically from the manager
- the manager service address (a TCP address)
- the scheduler list refresh interval

The object-storage-related configuration covers:
- enabling the object storage service
- filter: generates a unique task ID by filtering unnecessary query parameters out of the URL; parameter names are separated by the & character. For example, with filter: "Expires&Signature&ns", the URLs http://localhost/xyz?Expires=111&Signature=222&ns=docker.io and http://localhost/xyz?Expires=333&Signature=999&ns=docker.io belong to the same task.
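The filter semantics can be illustrated with a short Go sketch (not Dragonfly's actual task-ID code; `taskURL` is an illustrative helper) that drops the filtered query parameters so both example URLs canonicalize to the same string:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// taskURL removes the query parameters named in filter ("&"-separated)
// so that URLs differing only in those parameters map to one task.
func taskURL(raw, filter string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	q := u.Query()
	for _, name := range strings.Split(filter, "&") {
		q.Del(name)
	}
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	const filter = "Expires&Signature&ns"
	a, _ := taskURL("http://localhost/xyz?Expires=111&Signature=222&ns=docker.io", filter)
	b, _ := taskURL("http://localhost/xyz?Expires=333&Signature=999&ns=docker.io", filter)
	fmt.Println(a == b) // true: both URLs canonicalize to the same task URL
	fmt.Println(a)      // http://localhost/xyz
}
```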
- maxReplicas: the maximum number of replicas of an object cache in seed peers
- the object storage service security option
- the listen address and listen port of the object storage service
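A minimal sketch of the corresponding dfdaemon configuration; the field names follow the descriptions above and may differ between Dragonfly versions, and the manager address is a placeholder:

```yaml
# dfdaemon configuration (sketch): all values are placeholders.
scheduler:
  manager:
    # Get scheduler list dynamically from manager
    enable: true
    # Manager service address
    netAddrs:
      - type: tcp
        addr: dragonfly-manager:65003
    # Scheduler list refresh interval
    refreshInterval: 10m
objectStorage:
  # Enable object storage service
  enable: true
  # Filter unnecessary query params in the URL to generate a unique task ID
  filter: "Expires&Signature&ns"
  # Maximum number of replicas of an object cache in seed peers
  maxReplicas: 3
  # Object storage service security option
  security:
    insecure: true
  tcpListen:
    # Listen address
    listen: 0.0.0.0
    # Listen port
    port: 65004
```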
Step 4: Install the Dragonfly system
Install Dragonfly with object storage enabled; refer to the Dragonfly installation documentation.
Step 5: Install the dfstore command-line tool
Install the latest version of dfstore. You can also install a specific version of Dragonfly from the GitHub releases page.
go install d7y.io/dragonfly/v2/cmd/dfstore@latest
dfstore and the peer need to run on the same instance;
the peer endpoint 127.0.0.1:65004 is called by default when uploading and downloading.
If dfstore and the peer are on different instances,
you can specify the endpoint of the peer's object storage service through the
--endpoint parameter of the
dfstore command-line tool.
For detailed parameters, refer to dfstore-cli.
Step 6: Upload an object to Dragonfly object storage
Upload the baz.jpg image to
/bar/foo/baz.jpg in S3:
dfstore cp ./baz.jpg dfs://dragonfly/bar/foo/baz.jpg
Step 7: Download an object from Dragonfly object storage
Download the baz.jpg image from
/bar/foo/baz.jpg in S3:
dfstore cp dfs://dragonfly/bar/foo/baz.jpg ./baz.jpg
Step 8: Delete an object from Dragonfly object storage
Delete the /bar/foo/baz.jpg image in S3:
dfstore rm dfs://dragonfly/bar/foo/baz.jpg