Commit 0716b6d6 authored by lijingang

Update README.md: document new features, configuration overrides, environment variable support, etc.

parent 8cd1bad2
# @youdatasum/client-s3

S3 operations client for internal use. Supports file/folder upload, download, deletion, and listing from S3-compatible storage (RustFS).

## Installation

Or via Git:

```bash
npm install git+ssh://git@git.youdatasum.com:ljg/client-s3.git
```
## CLI Usage

After global installation, you can use the `ss3ops` command:
### Global Options
All commands support these global options to override S3 configuration:
```bash
# Override S3 endpoint
ss3ops --endpoint=http://192.168.2.254:9000 list zhipinku audit/
# Override credentials and endpoint
ss3ops --endpoint=http://192.168.2.254:9000 --access-key=xxx --secret-key=yyy list zhipinku audit/
# Override region
ss3ops --region=us-west-2 list zhipinku audit/
```
Global options:
- `--endpoint <url>` - S3 endpoint URL (default: `http://192.168.2.253:9000`)
- `--access-key <key>` - S3 access key ID
- `--secret-key <key>` - S3 secret access key
- `--region <region>` - S3 region (default: `us-east-1`)
### Commands
```bash
# Download a file
ss3ops download <bucket> <key> <destination>
# Upload a file
ss3ops upload <bucket> <key> <source>

# Upload a folder (recursively)
ss3ops upload-folder <bucket> <prefix> <folder>
# Example: ss3ops upload-folder zhipinku audit-4520a6ae9d60a234fe-zi ./audit-4520a6ae9d60a234fe-zi
# Download a folder
ss3ops download-folder <bucket> <prefix> <destination>
# Example: ss3ops download-folder zhipinku audit-4520a6ae9d60a234fe-zi ./downloads/

# Delete a folder and all its contents
ss3ops delete-folder <bucket> <prefix>
# Example: ss3ops delete-folder zhipinku audit-4520a6ae9d60a234fe-zi

# List objects in a bucket
ss3ops list <bucket> [prefix]
# Example: ss3ops list zhipinku audit/
```
## Node.js Library Usage

```javascript
const {
s3,
createS3Client,
reconfigureS3Client,
downloadFileFromS3,
uploadFileToS3,
uploadFolderToS3,
downloadS3Folder,
deleteS3Folder
} = require('@youdatasum/client-s3');
// Use the pre-configured S3 client (connected to internal RustFS endpoint)
await s3.send(...);

const fs = require('fs');
const body = fs.readFileSync('./local.txt');
await uploadFileToS3('my-bucket', 'path/to/uploaded.txt', body);
// Upload a folder (recursively uploads all subfolders)
await uploadFolderToS3('./local-folder', 'my-bucket', 'remote-prefix/');

// Download a folder
await downloadS3Folder('my-bucket', 'remote-prefix/', './downloads/');
// Delete a folder and all its contents
const deletedCount = await deleteS3Folder('my-bucket', 'remote-prefix/');
console.log(`Deleted ${deletedCount} objects`);
// Create a custom S3 client
const customS3 = createS3Client({
endpoint: 'http://192.168.2.254:9000',
region: 'us-west-2',
credentials: {
accessKeyId: 'your-access-key',
secretAccessKey: 'your-secret-key'
}
});
// Reconfigure the global S3 client
reconfigureS3Client({
endpoint: 'http://192.168.2.254:9000',
credentials: {
accessKeyId: 'your-access-key',
secretAccessKey: 'your-secret-key'
}
});
```

## Configuration

The S3 client is pre-configured to connect to an internal RustFS endpoint (`http://192.168.2.253:9000`). You can customize the configuration in multiple ways:
### 1. Environment Variables (Highest priority)
```bash
export S3_ENDPOINT=http://192.168.2.254:9000
export S3_ACCESS_KEY_ID=your-access-key
export S3_SECRET_ACCESS_KEY=your-secret-key
export S3_REGION=us-west-2
```
### 2. Command Line Options
```bash
ss3ops --endpoint=http://192.168.2.254:9000 --access-key=xxx --secret-key=yyy list zhipinku audit/
```
### 3. Programmatic Configuration
```javascript
const { createS3Client, reconfigureS3Client } = require('@youdatasum/client-s3');
// Create a new client instance
const customClient = createS3Client({
endpoint: 'http://192.168.2.254:9000',
region: 'us-west-2'
});
// Reconfigure the global client
reconfigureS3Client({
endpoint: 'http://192.168.2.254:9000'
});
```
### 4. Modify Source Code (Lowest priority)
Edit `lib/s3-client.js` directly to change default values.
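
To make the precedence concrete, here is a hedged sketch of how the layers above might resolve. Only the environment variable names and default values come from this README; `resolveS3Config` and `DEFAULTS` are illustrative names, not actual exports of `@youdatasum/client-s3`:

```javascript
// Illustrative sketch of configuration precedence: environment
// variables beat CLI/programmatic options, which beat the built-in
// defaults. resolveS3Config is a hypothetical helper, not an export
// of @youdatasum/client-s3.
const DEFAULTS = {
  endpoint: 'http://192.168.2.253:9000',
  region: 'us-east-1',
};

function resolveS3Config(env = process.env, options = {}) {
  return {
    endpoint: env.S3_ENDPOINT || options.endpoint || DEFAULTS.endpoint,
    region: env.S3_REGION || options.region || DEFAULTS.region,
    credentials: {
      accessKeyId: env.S3_ACCESS_KEY_ID || options.accessKeyId,
      secretAccessKey: env.S3_SECRET_ACCESS_KEY || options.secretAccessKey,
    },
  };
}
```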
## Features
- **File Operations**: Upload/download single files
- **Folder Operations**: Recursive upload/download of folders
- **Bulk Deletion**: Delete folders with thousands of files (batches of 1000)
- **Connection Resilience**: Increased timeouts (10s connection, 30s socket) and retry logic
- **Configurable**: Support for multiple S3 endpoints via environment variables, CLI options, or programmatic configuration
- **Progress Feedback**: Detailed logging for long-running operations
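
The 1000-object bulk deletion above can be sketched as follows. `chunkKeys` and `deleteKeysInBatches` are hypothetical helpers, not the package's actual exports; the 1000-key limit itself comes from the S3 `DeleteObjects` API, which rejects larger requests:

```javascript
// Sketch of batched deletion. S3's DeleteObjects API accepts at most
// 1000 keys per request, so the key list is split into chunks and one
// request is sent per chunk. These helpers are illustrative.
function chunkKeys(keys, size = 1000) {
  const batches = [];
  for (let i = 0; i < keys.length; i += size) {
    batches.push(keys.slice(i, i + size));
  }
  return batches;
}

// sendBatch is injected so the batching logic stays testable; in a
// real implementation it would wrap something like
// s3.send(new DeleteObjectsCommand({ Bucket, Delete: { Objects } }))
// from @aws-sdk/client-s3 v3.
async function deleteKeysInBatches(keys, sendBatch) {
  let deleted = 0;
  for (const batch of chunkKeys(keys)) {
    await sendBatch(batch.map((Key) => ({ Key })));
    deleted += batch.length;
  }
  return deleted;
}
```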
## Performance Optimizations
- **Batch Processing**: `deleteS3Folder` processes files in batches of 1000 to avoid S3 API limits
- **Streaming Downloads**: Large files are streamed directly to disk to save memory
- **Connection Pooling**: Reuses HTTP connections for better performance
- **Timeout Management**: Configurable timeouts with sensible defaults
## Development
3. Make changes.
4. Test with `node bin/ss3ops.js`.
### Local Development with npm link
For developing with other projects locally:
```bash
# In ss3ops directory
npm link
# In your project directory
npm link @youdatasum/client-s3
```
## Publishing

This package is published to the internal GitLab npm registry. Update the version in `package.json` and run:

```bash
npm publish
```

Ensure you have proper permissions and that `publishConfig` points to the correct registry.
## License

MIT