Modern webpack plugin for uploading assets to AWS S3, with enhanced features including concurrency control, rate limiting, retry logic, timeout handling, and a progress bar. It uses manual multipart uploads to work around known AWS SDK v3 lib-storage bugs.
- Features
- Why This Plugin?
- Installation
- Usage
- Options
- How It Works
- Examples
- Plugin Compatibility
- Troubleshooting
- Development
- Changelog
- License
## Features

- Webpack 4/5 Compatible - Works with both webpack versions
- Modern AWS SDK v3 - Uses the latest AWS SDK for better performance
- Two Upload Strategies:
  - Small files (<5MB): Direct `PutObjectCommand` (fast & reliable)
  - Large files (≥5MB): Manual multipart upload with progress tracking
- Concurrent Uploads - Configurable concurrency with connection pooling
- Rate Limiting - Control upload bandwidth in KB/s
- Retry & Timeout - Automatic retry with exponential backoff
- Progress Tracking - Real-time progress for multipart uploads
- Progress Bar - Visual feedback during upload
- TypeScript Support - Built-in TypeScript definitions
- Error Resilience - Continue on errors or fail fast
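The concurrency control listed above can be pictured as a small promise pool that keeps at most `concurrency` uploads in flight at once. A minimal sketch of the pattern (hypothetical helper, not the plugin's internal scheduler):

```javascript
// Minimal promise-pool sketch: runs `tasks` (functions returning
// promises) with at most `concurrency` of them in flight at once.
// Hypothetical illustration only - not the plugin's actual scheduler.
async function runWithConcurrency(tasks, concurrency) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (synchronous, so no race)
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(concurrency, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```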
## Why This Plugin?

AWS SDK v3's `@aws-sdk/lib-storage` has known issues with large file uploads:

- Issue #7729: `Upload.done()` never resolves after a large streaming upload
- Issue #5561: Multipart upload requests suddenly get stuck without error
- Issue #7179: Memory leak with large files (>200MB)

This plugin bypasses lib-storage by using manual multipart upload for large files and `PutObjectCommand` for small files, avoiding these bugs entirely.
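To make the workaround concrete, here is a sketch of a manual multipart sequence of the kind described above. The `s3` client and the SDK command classes are injected as parameters so the flow can be read (and exercised) on its own; treat it as an illustration of the approach, not the plugin's actual source.

```javascript
// Hypothetical sketch of a manual multipart upload used instead of
// lib-storage. `s3` is assumed to expose the AWS SDK v3 client's
// `send(command)` method; `commands` holds the SDK command classes.
async function multipartUpload(s3, commands, { Bucket, Key, body, partSize }) {
  const {
    CreateMultipartUpload,
    UploadPart,
    CompleteMultipartUpload,
    AbortMultipartUpload
  } = commands;

  const { UploadId } = await s3.send(new CreateMultipartUpload({ Bucket, Key }));
  const parts = [];
  try {
    // Upload parts one by one (sequentially) to sidestep the stall and
    // memory issues seen with parallel lib-storage uploads.
    for (let i = 0; i * partSize < body.length; i++) {
      const chunk = body.subarray(i * partSize, (i + 1) * partSize);
      const { ETag } = await s3.send(
        new UploadPart({ Bucket, Key, UploadId, PartNumber: i + 1, Body: chunk })
      );
      parts.push({ PartNumber: i + 1, ETag });
    }
    await s3.send(
      new CompleteMultipartUpload({ Bucket, Key, UploadId, MultipartUpload: { Parts: parts } })
    );
    return parts.length;
  } catch (err) {
    // Abort on failure so incomplete parts don't linger (and keep billing)
    await s3.send(new AbortMultipartUpload({ Bucket, Key, UploadId }));
    throw err;
  }
}
```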
## Installation

```sh
npm install webpack-s3-assets-plugin --save-dev
# or
yarn add -D webpack-s3-assets-plugin
```

## Usage

### Basic Usage

```js
const WebpackS3AssetsPlugin = require('webpack-s3-assets-plugin');

module.exports = {
  plugins: [
    new WebpackS3AssetsPlugin({
      s3Options: {
        region: 'us-west-2',
        credentials: {
          accessKeyId: process.env.AWS_ACCESS_KEY_ID,
          secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
        }
      },
      s3UploadOptions: {
        Bucket: 'my-bucket'
      }
    })
  ]
};
```

### All Options

```js
const WebpackS3AssetsPlugin = require('webpack-s3-assets-plugin');

module.exports = {
  plugins: [
    new WebpackS3AssetsPlugin({
      // AWS S3 Client options
      s3Options: {
        region: 'us-west-2',
        credentials: {
          accessKeyId: process.env.AWS_ACCESS_KEY_ID,
          secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
        }
      },

      // S3 upload options
      s3UploadOptions: {
        Bucket: 'my-bucket',
        ACL: 'public-read',
        CacheControl: 'max-age=31536000'
      },

      // Base path in S3 bucket
      basePath: 'dist/v1.0.0',

      // Include/exclude filters (RegExp, function, or array)
      include: /\.(js|css|png|jpg|gif)$/,
      exclude: /\.map$/,

      // Concurrency control (default: 3)
      concurrency: 3,

      // Rate limit in KB/s (0 = no limit)
      rateLimitKBps: 1024,

      // Timeout settings (in milliseconds)
      timeout: 60000,       // Single file/part timeout (default: 60s)
      totalTimeout: 600000, // Total timeout for all files (default: 10m)

      // Retry settings
      retries: 3,       // Retry attempts (default: 3)
      retryDelay: 2000, // Initial retry delay (default: 2s)

      // Error handling
      continueOnError: true, // Continue if some files fail

      // Multipart upload settings
      multipartThreshold: 5 * 1024 * 1024, // Switch to multipart at 5MB (default)
      partSize: 5 * 1024 * 1024,           // 5MB parts for multipart upload
      maxFileSize: 5 * 1024 * 1024 * 1024, // 5GB max file size
      skipLargeFiles: false,    // Skip files exceeding maxFileSize
      skipExistingFiles: false, // Skip files already in S3 with same content

      // Progress & debug
      progress: true, // Show progress bar
      debug: false    // Enable debug logging
    })
  ]
};
```

## Options

| Option | Type | Default | Description |
|---|---|---|---|
| `s3Options` | `object` | `{}` | AWS S3 client configuration |
| `s3UploadOptions` | `object` | `{}` | S3 PutObject options, including `Bucket` (required) |
| `basePath` | `string` | `''` | Base path prefix for uploaded files in S3 |
| `include` | `RegExp\|Function\|Array` | `null` | Filter to include specific files |
| `exclude` | `RegExp\|Function\|Array` | `null` | Filter to exclude specific files |
| `progress` | `boolean` | `true` | Enable progress bar |
| `concurrency` | `number` | `3` | Maximum concurrent uploads (recommended: 2-5) |
| `rateLimitKBps` | `number` | `0` | Rate limit in KB/s (0 = unlimited) |
| `timeout` | `number` | `60000` | Single file/part upload timeout in ms |
| `totalTimeout` | `number` | `600000` | Total upload timeout for all files in ms |
| `retries` | `number` | `3` | Number of retry attempts for failed uploads |
| `retryDelay` | `number` | `2000` | Initial delay between retries in ms |
| `continueOnError` | `boolean` | `true` | Continue uploading other files when one fails |
| `debug` | `boolean` | `false` | Enable debug logging to console |
| `multipartThreshold` | `number` | `5242880` | File size threshold (5MB) to switch to multipart upload |
| `partSize` | `number` | `5242880` | Part size (5MB) for multipart uploads |
| `maxFileSize` | `number` | `5368709120` | Maximum file size (5GB) |
| `skipLargeFiles` | `boolean` | `false` | Skip files larger than `maxFileSize` instead of failing |
| `skipExistingFiles` | `boolean` | `false` | Skip files that already exist in S3 with the same content (MD5 hash comparison) |
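The `retries` and `retryDelay` options imply exponential backoff (an initial delay that doubles per attempt). A minimal sketch of those semantics (hypothetical helper; the plugin's internals may differ):

```javascript
// Retry with exponential backoff: waits retryDelay, 2*retryDelay,
// 4*retryDelay, ... between attempts. Hypothetical sketch of the
// `retries` / `retryDelay` semantics described above.
async function withRetry(fn, { retries = 3, retryDelay = 2000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err;
      if (attempt === retries) break; // out of attempts
      const delay = retryDelay * 2 ** attempt; // exponential backoff
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```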
## How It Works

### Small Files (<5MB)

Small files are uploaded with `PutObjectCommand` - simple, fast, and reliable.

```
📄 Small files (<5.00 MB): 45 (PutObjectCommand)
📄 PutObjectCommand: main.js completed
📄 PutObjectCommand: styles.css completed
```

### Large Files (≥5MB)

Large files use a manual multipart upload: the plugin creates the multipart upload, uploads the parts sequentially, then completes the upload.

```
🎬 Large files (≥5.00 MB): 3 (Multipart Upload)
🎬 Multipart Upload: video.mp4 (45.32 MB)
Uploading part 1/10 (5.00 MB)
📤 video.mp4: 10% (5.00 MB/45.32 MB)
Uploading part 2/10 (5.00 MB)
📤 video.mp4: 20% (10.00 MB/45.32 MB)
...
Completing multipart upload: abc123...
✅ Multipart upload completed: video.mp4
```
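The part count in the log above follows directly from `partSize`: a 45.32 MB file split into 5 MB parts needs ⌈45.32 / 5⌉ = 10 parts, with a smaller final part. A tiny hypothetical helper showing the arithmetic:

```javascript
// Computes how a file of `fileSize` bytes splits into parts of at
// most `partSize` bytes (hypothetical helper mirroring the log above).
function partPlan(fileSize, partSize) {
  const count = Math.ceil(fileSize / partSize);
  const lastPartSize = fileSize - (count - 1) * partSize;
  return { count, lastPartSize };
}
```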
## Troubleshooting

### Uploads Get Stuck

If uploads get stuck with many files:

- Reduce concurrency (recommended: 2-3):

  ```js
  new WebpackS3AssetsPlugin({
    concurrency: 2,
    timeout: 30000 // Shorter timeout per file
  })
  ```

- Enable debug mode to see detailed logs:

  ```js
  new WebpackS3AssetsPlugin({
    debug: true
  })
  ```

- Use rate limiting to prevent overwhelming the connection:

  ```js
  new WebpackS3AssetsPlugin({
    rateLimitKBps: 512, // Limit to 512 KB/s
    concurrency: 2
  })
  ```

### Large Files

For large files like videos:

```js
new WebpackS3AssetsPlugin({
  // Increase timeout for each part
  timeout: 120000,        // 2 minutes per part
  totalTimeout: 1800000,  // 30 minutes total

  // Lower threshold to use multipart earlier
  multipartThreshold: 1 * 1024 * 1024, // 1MB

  // Larger parts = fewer requests (better for slow connections)
  partSize: 10 * 1024 * 1024, // 10MB parts

  // Increase max file size limit (default 5GB)
  maxFileSize: 10 * 1024 * 1024 * 1024, // 10GB
  // Or skip files that are too large
  skipLargeFiles: true,

  // Reduce concurrency for large files
  concurrency: 1,
  debug: true // See per-part progress
})
```

### Timeout Errors

If you see timeout errors:

- Increase `timeout` for each part upload
- Increase `totalTimeout` if uploading many files
- Check your network connection
- For very large files, consider using `skipLargeFiles: true` and uploading them separately

### Failed Uploads

If some files consistently fail:

- Check S3 permissions and the bucket policy
- Verify file sizes are within S3 limits
- Enable `continueOnError: true` so the other files still upload
- Check debug logs for specific error messages
- For large files, ensure your network can handle the upload speed
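The `rateLimitKBps` throttling used above amounts to delaying the next chunk whenever bytes have gone out faster than the configured budget. A deterministic sketch of that calculation (hypothetical; the plugin's actual limiter may differ):

```javascript
// Given bytes already sent and elapsed time, returns how long to wait
// before sending more so the average rate stays at `rateLimitKBps`.
// 0 means unlimited, matching the option's documented behavior.
function throttleDelayMs(bytesSent, elapsedMs, rateLimitKBps) {
  if (rateLimitKBps <= 0) return 0; // 0 = no limit
  const minElapsedMs = (bytesSent / (rateLimitKBps * 1024)) * 1000;
  return Math.max(0, minElapsedMs - elapsedMs);
}
```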
### S3 Upload Options Reference

The `s3UploadOptions` object accepts all options from the AWS SDK `PutObjectCommand`:

```js
new WebpackS3AssetsPlugin({
  s3UploadOptions: {
    Bucket: 'my-bucket',                   // Required, no default
    ACL: 'public-read',                    // Default: undefined (uses bucket default)
    CacheControl: 'max-age=31536000',      // Default: undefined
    ContentDisposition: 'attachment',      // Default: undefined
    ContentEncoding: 'gzip',               // Default: undefined
    ContentType: 'application/javascript', // Default: auto-detected from file extension
    Expires: new Date('2025-12-31'),       // Default: undefined
    Metadata: {                            // Default: undefined
      'x-amz-meta-version': '1.0.0',
      'x-amz-meta-build': process.env.BUILD_ID
    },
    ServerSideEncryption: 'AES256',        // Default: undefined
    StorageClass: 'STANDARD_IA'            // Default: 'STANDARD'
  }
})
```

| Option | Default | Description |
|---|---|---|
| `Bucket` | (required) | S3 bucket name - must be provided |
| `ACL` | `undefined` | Uses the bucket's default ACL |
| `ContentType` | (auto-detected) | Detected from the file extension using the mime-types library; falls back to `application/octet-stream` |
| `CacheControl` | `undefined` | No caching headers set |
| `ContentDisposition` | `undefined` | No disposition set |
| `ContentEncoding` | `undefined` | No encoding set |
| `StorageClass` | `'STANDARD'` | Standard S3 storage |
| `ServerSideEncryption` | `undefined` | No server-side encryption |
| `Metadata` | `undefined` | No custom metadata |
| `Expires` | `undefined` | No expiration date |
### Content-Type Detection

Content types are automatically detected from file extensions:

```
// Examples of auto-detection:
'style.css'  → 'text/css'
'app.js'     → 'application/javascript'
'image.png'  → 'image/png'
'font.woff2' → 'font/woff2'
'video.mp4'  → 'video/mp4'
'data.bin'   → 'application/octet-stream' (fallback)
```

You can override auto-detection by explicitly setting `ContentType` in `s3UploadOptions`.
### Common ACL Values

- `private` - Owner-only access
- `public-read` - Anyone can read
- `public-read-write` - Anyone can read/write (not recommended)
- `authenticated-read` - Authenticated AWS users can read
### Storage Classes

- `STANDARD` - Default, for frequently accessed data
- `STANDARD_IA` - Infrequent Access, lower cost
- `ONEZONE_IA` - Single zone, lower cost
- `GLACIER` - Archive storage
- `INTELLIGENT_TIERING` - Automatic cost optimization
### Filtering Files

Filter with regular expressions:

```js
new WebpackS3AssetsPlugin({
  include: /\.(js|css)$/,
  exclude: /\.test\.js$/
});
```

Filter with functions:

```js
new WebpackS3AssetsPlugin({
  include: (filename) => filename.startsWith('main'),
  exclude: (filename) => filename.includes('test')
});
```

Combine multiple filters in an array:

```js
new WebpackS3AssetsPlugin({
  include: [
    /\.js$/,
    (filename) => filename.length > 10
  ]
});
```

### TypeScript

Type definitions are included:

```ts
import WebpackS3AssetsPlugin from 'webpack-s3-assets-plugin';
import { Configuration } from 'webpack';

const config: Configuration = {
  plugins: [
    new WebpackS3AssetsPlugin({
      s3Options: {
        region: 'us-west-2'
      },
      s3UploadOptions: {
        Bucket: 'my-bucket'
      },
      concurrency: 3,
      retries: 5,
      debug: process.env.NODE_ENV === 'development'
    })
  ]
};

export default config;
```

### AWS SDK Bugs Avoided

This plugin was created specifically to avoid these known bugs in AWS SDK v3's lib-storage:
- Issue #7729 (Feb 2026): `Upload.done()` never resolves after a large streaming upload
  - Solution: use manual multipart upload instead of lib-storage
- Issue #5561 (Dec 2023): multipart upload requests suddenly get stuck
  - Solution: sequential part uploads with a per-part timeout
- Issue #7179: memory leak with large files (>200MB)
  - Solution: process parts sequentially, not in parallel

The upload strategy is:

- Small files: `PutObjectCommand` (simple, no streaming issues)
- Large files: manual multipart upload with sequential part processing:
  1. Create the multipart upload
  2. Upload parts one by one (not in parallel, to avoid memory issues)
  3. Complete the multipart upload
  4. Auto-abort on error to clean up

This approach is slightly slower than parallel uploads but much more reliable, and it avoids all of the known AWS SDK bugs above.
## Examples

### Basic Setup

```js
const WebpackS3AssetsPlugin = require('webpack-s3-assets-plugin');

module.exports = {
  plugins: [
    new WebpackS3AssetsPlugin({
      s3Options: {
        region: 'us-west-2',
        credentials: {
          accessKeyId: process.env.AWS_ACCESS_KEY_ID,
          secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
        }
      },
      s3UploadOptions: {
        Bucket: 'my-bucket'
      }
    })
  ]
};
```

### Versioned Assets with Cache Headers

```js
new WebpackS3AssetsPlugin({
  s3Options: { region: 'us-west-2' },
  s3UploadOptions: {
    Bucket: 'my-bucket',
    ACL: 'public-read',
    CacheControl: 'max-age=31536000'
  },
  basePath: `dist/${process.env.npm_package_version}`,
  include: /\.(js|css|png|jpg|gif)$/,
  exclude: /\.map$/
})
```

### Large Video Files

```js
new WebpackS3AssetsPlugin({
  s3Options: { region: 'us-west-2' },
  s3UploadOptions: { Bucket: 'my-video-bucket' },
  include: /\.(mp4|webm|mov)$/i,
  concurrency: 1,                      // Single file at a time
  timeout: 120000,                     // 2 min per part
  totalTimeout: 1800000,               // 30 min total
  multipartThreshold: 1 * 1024 * 1024, // 1MB threshold
  partSize: 10 * 1024 * 1024,          // 10MB parts
  retries: 5
})
```

### Skip Unchanged Files

```js
new WebpackS3AssetsPlugin({
  s3Options: { region: 'us-west-2' },
  s3UploadOptions: { Bucket: 'my-bucket' },
  skipExistingFiles: true, // Skip files already in S3 with same content
  debug: true              // See which files are skipped
})
```

See the `examples/` directory for more complete configurations.
## Plugin Compatibility

This plugin is tested and confirmed compatible with:
| Plugin | Compatibility | Notes |
|---|---|---|
| clean-webpack-plugin | ✅ Full | Works correctly, cleanup happens before upload |
| html-webpack-plugin | ✅ Full | HTML files are uploaded correctly |
| mini-css-extract-plugin | ✅ Full | CSS files are extracted and uploaded |
| terser-webpack-plugin | ✅ Full | Minified files upload correctly |
| copy-webpack-plugin | ✅ Full | Copied static assets are uploaded |
| webpack-manifest-plugin | ✅ Full | Manifest file is uploaded |
| webpack.DefinePlugin | ✅ Full | Environment variables work correctly |
| webpack.ProgressPlugin | ✅ Full | Progress tracking works together |
When combining with other plugins, add `WebpackS3AssetsPlugin` after the plugins that generate assets:

```js
plugins: [
  new CleanWebpackPlugin(),
  new HtmlWebpackPlugin(),
  new MiniCssExtractPlugin(),
  // ... other asset-generating plugins
  new WebpackS3AssetsPlugin({
    // Upload happens after all assets are generated
  })
]
```

## Development

```sh
# Clone the repository
git clone https://github.com/yourusername/webpack-s3-assets-plugin.git
cd webpack-s3-assets-plugin

# Install dependencies
yarn install
```

### Testing

```sh
# Run all tests
yarn test

# Run tests in watch mode
yarn test:watch

# Run tests with coverage
yarn test:coverage

# Run specific test file
yarn test -- rate-limiter
```

### Linting

```sh
# Run ESLint
yarn lint

# Fix ESLint issues
yarn lint:fix
```

### E2E Tests Against a Real Bucket

Create a `__tests__/.env` file:

```sh
AWS_ACCESS_KEY=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_BUCKET=your_bucket
AWS_REGION=us-east-1
```

Then run:

```sh
yarn test -- e2e-real-s3
```

See `tests/README.md` for detailed testing documentation.
## Changelog

See `CHANGELOG.md` for version history and breaking changes.

## License

MIT © webpack-s3-assets-plugin