# Multipart Uploads
Upload large files in parts for reliability and resumability. Required for files over 5GB, recommended for files over 100MB.
## When to Use Multipart Uploads

- **100MB+**: recommended threshold for switching to multipart
- **5GB**: maximum size of a single (non-multipart) upload
- **5TB**: maximum object size
## Benefits

- **Resumable**: If a part fails, only retry that part instead of the entire file.
- **Parallel uploads**: Upload multiple parts simultaneously for faster throughput.
- **Integrity**: Each part has its own ETag for verification.
## Supported Operations

- `CreateMultipartUpload`: start a new multipart upload
- `UploadPart`: upload a part (5MB minimum, except the last part)
- `CompleteMultipartUpload`: finalize and create the object
- `AbortMultipartUpload`: cancel and clean up parts
- `ListParts`: list uploaded parts
- `ListMultipartUploads`: list in-progress uploads
## Using AWS CLI
The AWS CLI handles multipart uploads automatically for large files:
```shell
# AWS CLI automatically uses multipart for large files
$ aws --profile edge --endpoint-url https://storage.edge.network \
    s3 cp large-video.mp4 s3://my-bucket/videos/

# Configure part size (default is 8MB)
$ aws configure set s3.multipart_chunksize 64MB --profile edge
```

## JavaScript Example
Use the Upload utility for automatic multipart handling:
```javascript
import { S3Client } from '@aws-sdk/client-s3'
import { Upload } from '@aws-sdk/lib-storage'
import { createReadStream } from 'fs'

const client = new S3Client({
  endpoint: 'https://storage.edge.network',
  region: 'us-east-1',
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.EDGE_ACCESS_KEY,
    secretAccessKey: process.env.EDGE_SECRET_KEY
  }
})

const upload = new Upload({
  client,
  params: {
    Bucket: 'my-bucket',
    Key: 'large-video.mp4',
    Body: createReadStream('./large-video.mp4')
  },
  queueSize: 4, // Parallel uploads
  partSize: 1024 * 1024 * 64 // 64MB parts
})

// Track progress
upload.on('httpUploadProgress', (progress) => {
  console.log(`${progress.loaded} / ${progress.total}`)
})

await upload.done()
```

## Python Example
boto3 handles multipart automatically with `upload_file`:
```python
import os

import boto3
from boto3.s3.transfer import TransferConfig

client = boto3.client(
    's3',
    endpoint_url='https://storage.edge.network',
    aws_access_key_id=os.environ['EDGE_ACCESS_KEY'],
    aws_secret_access_key=os.environ['EDGE_SECRET_KEY']
)

# Configure multipart settings
config = TransferConfig(
    multipart_threshold=1024 * 1024 * 100,  # 100MB threshold
    multipart_chunksize=1024 * 1024 * 64,   # 64MB parts
    max_concurrency=4                       # Parallel uploads
)

# Upload with progress callback
def progress(bytes_transferred):
    print(f"Uploaded {bytes_transferred} bytes")

client.upload_file(
    'large-video.mp4',
    'my-bucket',
    'videos/large-video.mp4',
    Config=config,
    Callback=progress
)
```

## Part Size Recommendations
| File Size | Recommended Part Size | Approximate Parts |
|---|---|---|
| 100MB - 1GB | 16MB - 32MB | 6-32 parts |
| 1GB - 10GB | 64MB - 128MB | 16-80 parts |
| 10GB - 100GB | 128MB - 256MB | 80-400 parts |
| 100GB+ | 256MB - 512MB | Varies |
**Note:** Minimum part size is 5MB (except the last part). Maximum part size is 5GB, with at most 10,000 parts per upload.
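These limits reduce to a simple rule: pick the smallest part size that keeps the upload under 10,000 parts without going below the 5MB minimum. A sketch; the function name and the rounding-up-to-a-whole-MiB step are our own choices:

```python
MIB = 1024 * 1024

def choose_part_size(file_size, min_part=5 * MIB, max_parts=10000):
    """Smallest part size (rounded up to a whole MiB) that fits in max_parts."""
    needed = -(-file_size // max_parts)  # ceiling division
    part = max(min_part, needed)
    return -(-part // MIB) * MIB         # round up to the next MiB

# A 100MB file fits in 20 parts at the 5MB minimum
print(choose_part_size(100 * MIB) // MIB)  # 5
# A 1TB file needs ~105MB parts to stay under 10,000 parts
print(choose_part_size(1024 ** 4) // MIB)  # 105
```

Larger parts than the minimum are often still worth using: fewer parts means fewer requests and less per-part overhead, which is why the table above scales part size with file size.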
## Cleaning Up Incomplete Uploads
Incomplete multipart uploads consume storage. Clean them up with:
```shell
# List incomplete uploads
$ aws --profile edge --endpoint-url https://storage.edge.network \
    s3api list-multipart-uploads --bucket my-bucket

# Abort a specific upload
$ aws --profile edge --endpoint-url https://storage.edge.network \
    s3api abort-multipart-upload \
    --bucket my-bucket \
    --key videos/large-video.mp4 \
    --upload-id YOUR_UPLOAD_ID
```
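For buckets with many stale uploads, scripting the cleanup beats aborting them one at a time. A sketch with boto3, assuming `client` is configured as in the Python example above; the 24-hour cutoff is an arbitrary choice:

```python
from datetime import datetime, timedelta, timezone

def stale_uploads(uploads, max_age_hours=24):
    """Filter ListMultipartUploads entries older than the cutoff."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return [u for u in uploads if u['Initiated'] < cutoff]

def abort_stale_uploads(client, bucket, max_age_hours=24):
    """Abort every incomplete upload in the bucket older than max_age_hours."""
    resp = client.list_multipart_uploads(Bucket=bucket)
    for upload in stale_uploads(resp.get('Uploads', []), max_age_hours):
        client.abort_multipart_upload(
            Bucket=bucket, Key=upload['Key'], UploadId=upload['UploadId']
        )
```

Note that `list_multipart_uploads` returns at most 1,000 entries per call; for a larger backlog, follow the `NextKeyMarker`/`NextUploadIdMarker` pagination markers and repeat.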