Server Backups with rclone and rsync

Connecting a server to Google Drive with rclone for backups.

rclone docker-compose:

services:
  rclone:
    container_name: rclone
    image: rclone/rclone
    command:
      - "--verbose"
      - "serve"
      - "webdav"
      - "/data"
      - "--user"
      - "admin"
      - "--pass"
      - "qwe467895596"
      - "--addr"
      - "0.0.0.0:8080"
    restart: unless-stopped
    ports:
      - "8085:8080"
    volumes:
      - ./config:/root/.config/rclone:ro
      - ./data:/data
  • docker-compose up -d
  • After the container starts, enter it with docker exec -it rclone /bin/sh, then run rclone config
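Once the container is up, it is worth sanity-checking the WebDAV endpoint from the host before going further. A minimal probe, assuming the published port 8085 and the credentials from the compose file above:

```shell
# PROPFIND asks the WebDAV server to list the root collection; a 207
# (Multi-Status) response code means the server is up and the login works.
curl -s -o /dev/null -w "%{http_code}\n" \
  -u admin:qwe467895596 \
  -X PROPFIND -H "Depth: 1" \
  http://127.0.0.1:8085/
```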
Current remotes:

Name                 Type
====                 ====

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n  # choose n


Enter name for new remote.
name> gdriver

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 5 / Backblaze B2
   \ (b2)
 6 / Better checksums for other remotes
   \ (hasher)
 7 / Box
   \ (box)
 8 / Cache a remote
   \ (cache)
 9 / Citrix Sharefile
   \ (sharefile)
10 / Combine several remotes into one
   \ (combine)
11 / Compress a remote
   \ (compress)
12 / Dropbox
   \ (dropbox)
13 / Encrypt/Decrypt a remote
   \ (crypt)
14 / Enterprise File Fabric
   \ (filefabric)
15 / FTP
   \ (ftp)
16 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
17 / Google Drive
   \ (drive)
18 / Google Photos
   \ (google photos)
19 / HTTP
   \ (http)
20 / Hadoop distributed file system
   \ (hdfs)
21 / HiDrive
   \ (hidrive)
22 / ImageKit.io
   \ (imagekit)
23 / In memory object storage system.
   \ (memory)
24 / Internet Archive
   \ (internetarchive)
25 / Jottacloud
   \ (jottacloud)
26 / Koofr, Digi Storage and other Koofr-compatible storage providers
   \ (koofr)
27 / Linkbox
   \ (linkbox)
28 / Local Disk
   \ (local)
29 / Mail.ru Cloud
   \ (mailru)
30 / Mega
   \ (mega)
31 / Microsoft Azure Blob Storage
   \ (azureblob)
32 / Microsoft Azure Files
   \ (azurefiles)
33 / Microsoft OneDrive
   \ (onedrive)
34 / OpenDrive
   \ (opendrive)
35 / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
   \ (swift)
36 / Oracle Cloud Infrastructure Object Storage
   \ (oracleobjectstorage)
37 / Pcloud
   \ (pcloud)
38 / PikPak
   \ (pikpak)
39 / Proton Drive
   \ (protondrive)
40 / Put.io
   \ (putio)
41 / QingCloud Object Storage
   \ (qingstor)
42 / Quatrix by Maytech
   \ (quatrix)
43 / SMB / CIFS
   \ (smb)
44 / SSH/SFTP
   \ (sftp)
45 / Sia Decentralized Cloud
   \ (sia)
46 / Storj Decentralized Cloud Storage
   \ (storj)
47 / Sugarsync
   \ (sugarsync)
48 / Transparently chunk/split large files
   \ (chunker)
49 / Uloz.to
   \ (ulozto)
50 / Union merges the contents of several upstream fs
   \ (union)
51 / Uptobox
   \ (uptobox)
52 / WebDAV
   \ (webdav)
53 / Yandex Disk
   \ (yandex)
54 / Zoho
   \ (zoho)
55 / premiumize.me
   \ (premiumizeme)
56 / seafile
   \ (seafile)
Storage> 17 

Option client_id.
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a value. Press Enter to leave empty.
client_id> 202264815644.apps.googleusercontent.com    // use rclone's official client_id; it does not expire

Option client_secret.
OAuth Client Secret.
Leave blank normally.
Enter a value. Press Enter to leave empty.
client_secret> X4Z3ca8xfWDb1Voo-F9a7ZxJ    // use rclone's official client_secret; it does not expire

Option scope.
Comma separated list of scopes that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Full access all files, excluding Application Data Folder.
   \ (drive)
 2 / Read-only access to file metadata and file contents.
   \ (drive.readonly)
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ (drive.file)
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ (drive.appfolder)
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ (drive.metadata.readonly)
scope> 1

Option service_account_file.
Service Account Credentials JSON file path.
Leave blank normally.
Needed only if you want use SA instead of interactive login.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a value. Press Enter to leave empty.
service_account_file> 

Edit advanced config?
y) Yes
n) No (default)
y/n>  // press Enter

Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.

y) Yes (default)
n) No
y/n> n  // no GUI here, so n is the only option; with a browser available you can answer yes directly

Option config_token.
For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
        rclone authorize "drive" "eyJjbGllbnRfaWQiOiIyMDIyNjQ4MTU2NDQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJjbGllbnRfc2VjcmV0IjoiWDRaM2NhOHhmV0RiMVZvby1GOWE3WnhKIi"
Then paste the result.
Enter a value.

Delegated authorization

  • Run this step on another system that can open a browser. Install rclone there; on Windows the scoop package manager installs it directly.
  • scoop install rclone
  • After installing, set a proxy in the console.

The following uses V2Ray's proxy ports.

  • CMD:
:: use either the SOCKS5 port or the HTTP port, not both
set http_proxy=socks5://127.0.0.1:10808
set https_proxy=socks5://127.0.0.1:10808
:: or
set http_proxy=http://127.0.0.1:10809
set https_proxy=http://127.0.0.1:10809
  • PowerShell:
# use either the SOCKS5 port or the HTTP port, not both
$env:http_proxy="socks5://127.0.0.1:10808"
$env:https_proxy="socks5://127.0.0.1:10808"
# or
$env:http_proxy="http://127.0.0.1:10809"
$env:https_proxy="http://127.0.0.1:10809"

With the proxy in place, run curl https://www.google.com; a StatusCode of 200 means the proxy works. Then run the command shown in your server console, i.e. rclone authorize "drive" "eyJjbGllbnRfaWQiOiIyMDIyNjQ4MTU2NDQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJjbGllbnRfc2VjcmV0IjoiWDRaM2NhOHhmV0RiMVZvby1GOWE3WnhKIiwic2NvcGUiOiJkcml2ZSJ9". A browser window opens automatically for authorization; if it hangs at "Get Code", the proxy has failed. Otherwise it prints a token: copy the token and paste it into the server console.

config_token> eyJ0b2tlbiI6IntcImFjY2Vzc190b2tlblwiOlwieWEyOS5hMEFYb29DZ3YxU1hlY050U3R1c3ZoUzJMekpQNnVtRzU0VXlZQzhoSWdrclJ4TUhKQzRGTUlBZ011RTlfVEtROXotaE44VnVHYWhPVklhYVVZb1hRUVFWWHVpSW5oSlROR0tEZnZXWlJTSXA5bVlDNk5ERW9xOHlhVDVqWHQwSXR6SWpORGlCOG9Zc3BWS2prZkhuSWU0cGZIX2NYMnVDLVFqbnhtYUNnWUtBVTRTQVJNU0ZRSEdYMk1pQk9wcTVabGpKZ3hRdVd2d3lKanFoQTAxNzFcIixcInRva2VuX3R5cGVcIjpcIkJlYXJlclwiLFwicmVmcmVzaF90b2tlblwiOlwiMS8vMGVnQ1AtZS1fLWI3b0NnWUlBUkFBR0E0U053Ri1MOUlyVm1nOEZILTlzRm5EVEpaMXZGODZ6WmdTX3hld2luX3FVamxSZjJJZS1MYU82Y05LVjhEMW83aTJiSWJqOHdiNGVnSVwiLFwiZXhwaXJ5XCI6


Configure this as a Shared Drive (Team Drive)?

y) Yes
n) No (default)
y/n> 

Configuration complete.
Options:
- type: drive
- client_id: 202264815644.apps.googleusercontent.com
- client_secret: X4Z3ca8xfWDb1Voo-F9a7ZxJ
- scope: drive
- token: {"access_token":"ya29.a0AXooCgv1SXecNtStusvhS2LzJP6umG54UyYC8hIgkrRxMHJC4FMIAgMuE9_TKQ9z-hN8VuGahOVIaaUYoXQQQVXuiInhJTNGKDfvWZRSIp9mYC6NDEoq8yaT5jXt0ItzIjNDiB8oYspVKjkfHnIe4pfHQjnxmaCgYKAU4SARMSFQHGX2MiBOpq5ZljJgxQuWvwyJjqhA0171","token_type":"Bearer","refresh_token":"1//0egCP-eb7oCgYIARAAGA4SNwF-L9IrVmg8FH-9sFnDTJZ1vF86zZgS_xewin_qUjlRf2Ie-LaO6cNKV8D1o7i2bIbj8wb4egI","expiry":"2024-07-22T21:12:00.0815445+08:00"}
- team_drive: 
Keep this "gdriver" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>   // press Enter

Current remotes:

Name                 Type
====                 ====
gdriver              drive

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q  // q to quit

/data # rclone listremotes
gdriver:

Copy the options shown above. Create rclone.conf in the config directory,
then add the following to rclone.conf:

[gdriver]
type = drive
client_id = 202264815644.apps.googleusercontent.com
client_secret = X4Z3ca8xfWDb1Voo-F9a7ZxJ
scope = drive
token = {"access_token":"ya29.a0AXooCgv1SXecNtStusvhS2LzJP6umG54UyYC8hIgkrRxMHJC4FMIAgMuE9_TKQ9z-hN8VuGahOVIaaUYoXQQQVXuiInhJTNGKDfvWZRSIp9mYC6NDEoq8yaT5jXt0ItzIjNDiB8oYspVKjkfHnIe4pfHQjnxmaCgYKAU4SARMSFQHGX2MiBOpq5ZljJgxQuWvwyJjqhA0171","token_type":"Bearer","refresh_token":"1//0egCP-eb7oCgYIARAAGA4SNwF-L9IrVmg8FH-9sFnDTJZ1vF86zZgS_xewin_qUjlRf2Ie-LaO6cNKV8D1o7i2bIbj8wb4egI","expiry":"2024-07-22T21:12:00.0815445+08:00"}

Restart with docker-compose restart and the remote is ready. Later, to migrate or set up high availability, copy the whole rclone directory to another server and run docker-compose up -d there. (You may need to remove the old docker network first.)

Server backup

Host backup

  • For the host (the machine where rclone runs), pack the directories you want to back up into an archive first; left unpacked, the many small files get rate-limited.
  • Host packaging script
    package.sh:
#!/bin/bash

# Directory to back up
SOURCE_DIR="/opt"
# Path of the resulting archive
FILE_PATH="/opt/rclone/data/127.0.0.1_opt_backup.tar.gz"
# Log file
LOG_FILE="/opt/package.log"

# Announce the backup
echo "Starting backup of $SOURCE_DIR to $FILE_PATH..." | tee -a "$LOG_FILE"

# Create the archive and log tar's verbose output. The archive and the log
# live inside SOURCE_DIR, so exclude them to avoid "file changed as we read it".
tar -czvf "$FILE_PATH" -C "$SOURCE_DIR" --exclude=./rclone/data --exclude=./package.log . 2>&1 | tee -a "$LOG_FILE"

# Check tar's exit status ($? alone would report tee's) and report the result
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
    echo "Backup completed successfully at $FILE_PATH" | tee -a "$LOG_FILE"
else
    echo "Backup failed" | tee -a "$LOG_FILE"
fi
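One pitfall when logging through a pipe: after `tar ... | tee`, plain `$?` reports the status of tee, the last command in the pipeline, so a failed tar can still be reported as a success. Bash keeps each stage's status in the PIPESTATUS array; a minimal demonstration:

```shell
#!/bin/bash
# A failing first command piped into a succeeding one: $? sees only tee's
# status, while PIPESTATUS[0] preserves the failure of the first command.
false | tee /dev/null > /dev/null
# Capture both in a single assignment so PIPESTATUS is not reset in between.
status="last=$? first=${PIPESTATUS[0]}"
echo "$status"   # prints: last=0 first=1
```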

Backing up other machines

  • Use rsync to fetch the backup files from the other machines.
    • Note: rsync must be installed on both the host and the other machines.
Install rsync

# Ubuntu/Debian
sudo apt update
sudo apt install rsync

# CentOS/RHEL
sudo yum install rsync
Set up passwordless SSH login

# press Enter twice at the prompts
ssh-keygen -t rsa

# you will be asked for the password once here, for verification
ssh-copy-id -i ~/.ssh/id_rsa.pub root@yourIP
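With the key copied over, it is worth confirming that login really is passwordless before the scripts depend on it. A quick check (yourIP is the placeholder from above; BatchMode turns any password prompt into an error instead of blocking):

```shell
# Exits 0 and prints the message only if key authentication succeeds;
# BatchMode=yes forbids interactive password prompts.
ssh -o BatchMode=yes root@yourIP true && echo "passwordless login OK"
```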
Server packaging script

With several servers to back up, whichever directories each one archives, it is best to place the resulting file at the same path on every server.
package.sh:

#!/bin/bash

# Directory to back up
SOURCE_DIR="/opt"
# Path of the resulting archive. Use the same path and the same file name on
# every server so the backup host can fetch them uniformly.
FILE_PATH="/opt/opt.tar.gz"
# Log file
LOG_FILE="/opt/package.log"

# Announce the backup
echo "Starting backup of $SOURCE_DIR to $FILE_PATH..." | tee -a "$LOG_FILE"

# Create the archive and log tar's verbose output. The archive and the log
# live inside SOURCE_DIR, so exclude them to avoid "file changed as we read it".
tar -czvf "$FILE_PATH" -C "$SOURCE_DIR" --exclude=./opt.tar.gz --exclude=./package.log . 2>&1 | tee -a "$LOG_FILE"

# Check tar's exit status ($? alone would report tee's) and report the result
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
    echo "Backup completed successfully at $FILE_PATH" | tee -a "$LOG_FILE"
else
    echo "Backup failed" | tee -a "$LOG_FILE"
fi
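A related caveat: when the archive path sits inside SOURCE_DIR (as /opt/opt.tar.gz sits inside /opt here), tar tries to read the archive it is currently writing and complains "file changed as we read it". Excluding the archive's own location avoids this, which can be tried safely in a scratch directory:

```shell
#!/bin/bash
set -e
# Build a miniature SOURCE_DIR whose archive is written inside itself.
TMP=$(mktemp -d)
mkdir -p "$TMP/src/data"
echo hello > "$TMP/src/file.txt"
# --exclude keeps ./data (and thus the growing archive) out of the archive.
tar -czf "$TMP/src/data/out.tar.gz" -C "$TMP/src" --exclude=./data .
# The archive contains the real payload but not the excluded directory.
contents=$(tar -tzf "$TMP/src/data/out.tar.gz")
echo "$contents" | grep -q 'file.txt' && echo "archived file.txt"
echo "$contents" | grep -q 'data/' && echo "data leaked" || echo "data excluded"
rm -rf "$TMP"
```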
Backup script

backup.sh:

#!/bin/bash

# Server list, each entry as user@host:port
REMOTE_SERVERS=(
    "root@ip:port"
    "root@ip:port"
)
LOG_FILE="/opt/backup.log"
# Local backup path: the directory already mounted in docker-compose
LOCAL_BACKUP_PATH="/opt/rclone/data"

# Truncate the log file
> "$LOG_FILE"

# Pull the backup from each server
for SERVER in "${REMOTE_SERVERS[@]}"; do
    # Split the entry into ssh target and port
    HOST=$(echo $SERVER | cut -d: -f1)
    PORT=$(echo $SERVER | cut -d: -f2)

    # Name the local copy after the host to keep the files apart
    BACKUP_FILE="$(basename $HOST)_opt_backup.tar.gz"

    echo "Fetching backup from $SERVER..." | tee -a "$LOG_FILE"
    # rsync fetches the archive from the unified path used on every server
    rsync -avz -e "ssh -p $PORT -i ~/.ssh/id_rsa" $HOST:/opt/opt.tar.gz $LOCAL_BACKUP_PATH/$BACKUP_FILE >> "$LOG_FILE" 2>&1
done

# Upload every backup file to Google Drive. The glob also matches the local
# host's own archive (127.0.0.1_opt_backup.tar.gz created by package.sh), so
# no separate upload step is needed for it.
for BACKUP_FILE in $LOCAL_BACKUP_PATH/*_opt_backup.tar.gz; do
    echo "Uploading $BACKUP_FILE to Google Drive..."  | tee -a "$LOG_FILE"
    # Inside the container the host directory $LOCAL_BACKUP_PATH is mounted
    # at /data, so address the file by its container path.
    docker exec rclone /bin/sh -c "rclone copy /data/$(basename $BACKUP_FILE) gdriver:/backups/ --progress" >> "$LOG_FILE" 2>&1
done

echo "Backup completed successfully"
  • After running bash /opt/backup.sh, check the log for any errors.
  • Once the script finishes, you can see the files successfully backed up to Google Drive.
  • To mirror instead of copy, just swap in rclone sync for rclone copy.
  • /backups/ is the destination root directory:
    docker exec rclone /bin/sh -c "rclone sync /data/$(basename $BACKUP_FILE) gdriver:/backups/ --progress" >> "$LOG_FILE" 2>&1
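The cut-based splitting of the REMOTE_SERVERS entries can be checked in isolation (192.0.2.10 is a documentation address, not a real server):

```shell
#!/bin/bash
# Each entry is user@host:port; the colon separates the ssh target from the port.
SERVER="root@192.0.2.10:2222"
HOST=$(echo "$SERVER" | cut -d: -f1)
PORT=$(echo "$SERVER" | cut -d: -f2)
echo "host=$HOST port=$PORT"   # prints: host=root@192.0.2.10 port=2222
```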
Scheduling with crontab

Finally, on the main server run crontab -e and add the packaging and backup scripts: press i to enter edit mode, add the lines below, press Esc to leave edit mode, then type :wq to save.

0 0 * * 1 /bin/bash /opt/package.sh
0 1 * * 1 /bin/bash /opt/backup.sh

Then on each of the other servers, run crontab -e and add just the packaging script:

0 0 * * 1 /bin/bash /opt/package.sh

The five fields * * * * * stand for minute, hour, day of month, month, and day of week; adjust the schedule to fit your own backup cycle.
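Other schedules follow the same five-field pattern. For example, to pack daily and upload an hour later instead of weekly (illustrative lines; keep the one-hour gap so packaging finishes before the upload starts):

```
# minute hour day-of-month month day-of-week
0 2 * * * /bin/bash /opt/package.sh
0 3 * * * /bin/bash /opt/backup.sh
```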
