fix(s3-deployment): optimize memory usage for large files #34020
base: main
Conversation
The S3 deployment Lambda handler was reading entire files into memory to check if they're JSON, causing timeouts and memory issues with large files (10MB+). This change improves memory efficiency by:
1. Using the file extension to determine whether a file is JSON instead of reading its content
2. Only loading the entire file for JSON processing when necessary
3. Maintaining special handling for JSON files with double quotes in markers

Performance testing shows:
- Standard operations stay under 32MB memory usage
- Even complex JSON files with double quotes stay under 256MB
- Processing time is comparable to the previous implementation
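A minimal sketch of the idea, assuming a handler-side helper roughly along these lines (the function and variable names are illustrative, not the actual custom resource code):

```python
import json
import os

def replace_markers(filename, markers):
    # Classify by extension only; never read the whole file just to find
    # out whether it parses as JSON.
    is_json_file = filename.endswith(".json")

    # Full-file handling is only needed when a marker value contains
    # double quotes, because substituting it into JSON requires escaping.
    needs_full_load = is_json_file and any('"' in v for v in markers.values())

    if needs_full_load:
        # Rare case: load the whole document so replacements can be made
        # with proper JSON escaping of the marker values.
        with open(filename, encoding="utf-8") as f:
            content = f.read()
        for token, value in markers.items():
            content = content.replace(token, json.dumps(value)[1:-1])
        with open(filename, "w", encoding="utf-8") as f:
            f.write(content)
    else:
        # Common case: stream line by line so memory use stays flat no
        # matter how large the asset is.
        tmp = filename + ".tmp"
        with open(filename, encoding="utf-8") as src, open(tmp, "w", encoding="utf-8") as dst:
            for line in src:
                for token, value in markers.items():
                    line = line.replace(token, value)
                dst.write(line)
        os.replace(tmp, filename)
```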
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@           Coverage Diff           @@
##             main   #34020   +/-  ##
=======================================
  Coverage   83.98%   83.98%
=======================================
  Files         120      120
  Lines        6976     6976
  Branches     1178     1178
=======================================
  Hits         5859     5859
  Misses       1005     1005
  Partials      112      112
Flags with carried forward coverage won't be shown.
Exemption Request: The issue involves memory usage with large files (10MB+), and adding an integ test would require committing such files to the repository. Instead, I've implemented comprehensive local testing with Docker-based isolated test runs that measure memory usage across various file types and sizes. This approach provides more targeted performance metrics than a standard integration test could, as it includes specific memory instrumentation that captures the exact behavior being fixed. The detailed performance metrics included in this PR demonstrate the effectiveness of the solution without needing to commit large test files to the repository.
```sh
$DOCKER_CMD build .
$DOCKER_CMD build --no-cache -t s3-deployment-test .

$DOCKER_CMD run --rm s3-deployment-test
```
How and where/when does this test get run?
They are run manually by the developer when testing changes to the custom resource's code.
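For reference, the memory check inside such a Docker-based test run could look roughly like this; the use of Python's `resource` module and the helper names are assumptions about the harness, not code from this PR:

```python
import resource
import sys

def peak_memory_mb() -> float:
    # On Linux (the Docker test container), ru_maxrss is reported in KiB.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

def check_case(name: str, fn, limit_mb: float) -> None:
    # Run one isolated case and fail the container if the peak resident
    # set size exceeds the configured limit.
    fn()
    used = peak_memory_mb()
    print(f"{name}: peak RSS {used:.1f} MiB (limit {limit_mb} MiB)")
    if used > limit_mb:
        sys.exit(1)
```

Because each case runs in its own container, the peak RSS reflects only the file being processed rather than earlier test cases.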
I also ran into the memory issue and we don't use any markers. I almost did a PR until I saw this one. I would like to suggest moving the check for no markers into `extract_and_replace_markers`:

```python
# extract archive and replace markers in output files
def extract_and_replace_markers(archive, contents_dir, markers):
    with ZipFile(archive, "r") as zip:
        zip.extractall(contents_dir)

        # replace markers for this source if there are any markers
        if markers:
            for file in zip.namelist():
                file_path = os.path.join(contents_dir, file)
                if os.path.isdir(file_path): continue
                replace_markers(file_path, markers)
```
…memory usage

This change addresses issue aws#34002 where the S3 deployment Lambda function experiences memory issues with large JSON files. The fix:
- Adds a new `jsonAwareSourceProcessing` boolean property to `BucketDeploymentProps`
- Implements efficient marker detection using line-by-line processing
- Optimizes memory usage by only loading entire files when necessary
- Updates tests to verify both processing modes
- Updates documentation with usage examples and memory considerations

By default, the property is set to false for backward compatibility.
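If the proposed `jsonAwareSourceProcessing` property lands as described, opting in might look roughly like the sketch below. It uses the CDK Python binding, where the name would surface as `json_aware_source_processing`; the property is part of this proposal rather than a published API, so treat the exact shape as an assumption:

```python
from aws_cdk import Stack
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_s3_deployment as s3deploy
from constructs import Construct

class LargeAssetStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "DestinationBucket")

        s3deploy.BucketDeployment(
            self, "DeployLargeJsonAssets",
            sources=[s3deploy.Source.asset("./config-assets")],
            destination_bucket=bucket,
            # Proposed opt-in flag; defaults to False so existing
            # deployments keep their current behavior.
            json_aware_source_processing=True,
        )
```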
✅ Updated pull request passes all PRLinter validations. Dismissing previous PRLinter review.
That's a valid proposition. However, I want to be able to test the changes without having to create a zip file. So I will keep the test at the beginning of
```js
chunk += ` "name": "Item ${lineNum}",\n`;
chunk += ` "hash": "${crypto.createHash('sha256').update(lineNum.toString()).digest('hex')}",\n`;
chunk += ' "properties": {\n';
chunk += ` "description": "${lineContent.replace(/"/g, '\\"')}",\n`;
```
Check failure: Code scanning / CodeQL: Incomplete string escaping or encoding (High, test)
```js
chunk += ` }${bytesWritten + chunk.length + 6 < totalBytes ? ',\n' : '\n'}`;
} else {
  // Simple items for the rest
  chunk += ` { "id": "${lineNum}", "value": "${lineContent.replace(/"/g, '\\"')}" }${bytesWritten + chunk.length + 6 < totalBytes ? ',\n' : '\n'}`;
```
Check failure: Code scanning / CodeQL: Incomplete string escaping or encoding (High, test)
We are now using a safe JSON value for marker replacement and avoid loading the complete JSON file into memory. This change also adds a way to trigger the new processing when creating the source data, which limits the processing to the asset level instead of the deployment level.
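A minimal illustration of why a JSON-aware value is safer than the quote-only escaping flagged by CodeQL above; the snippet is illustrative, not code from this PR:

```python
import json

value = 'path\\to\\"file"\nwith a newline'

# Naive escaping only covers double quotes; backslashes and control
# characters in the value can still break the surrounding JSON document.
naive = value.replace('"', '\\"')

# json.dumps applies the full JSON string escaping rules; stripping the
# outer quotes leaves a value safe to embed in a quoted position.
safe = json.dumps(value)[1:-1]

assert json.loads('{"k": "%s"}' % safe)["k"] == value
# json.loads('{"k": "%s"}' % naive) would raise json.JSONDecodeError
```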
…roduction rollback
AWS CodeBuild CI Report
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
Issue # (if applicable)
Closes #34002.
Reason for this change
The original fix for issue #22661 (PR #33698) introduced a regression where the S3 deployment Lambda would read entire files into memory to check if they're JSON. This approach works fine for small files but causes Lambda timeouts and memory issues with large files (10MB+). This is particularly problematic for customers deploying large assets to S3 buckets.
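For context, the pattern described here amounts to something like the following simplified reconstruction (not the exact code from #33698):

```python
import json

def is_json_content(path):
    # Reading and parsing the entire file just to classify it pulls the
    # whole asset into Lambda memory, which is what caused timeouts and
    # memory pressure for large (10MB+) files.
    with open(path, encoding="utf-8") as f:
        try:
            json.loads(f.read())
            return True
        except ValueError:
            return False
```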
Description of changes
The S3 deployment Lambda handler was reading entire files into memory to check if they're JSON, causing timeouts and memory issues with large files (10MB+).
This change improves memory efficiency by:
- Using the file extension to determine whether a file is JSON instead of reading its content
- Only loading the entire file into memory when both of the following hold:
  a) At least one marker's value contains double quotes
  b) The file content is valid JSON (determined after reading the file)
Performance testing shows significant improvements:
- Standard operations stay under 32MB memory usage
- Even complex JSON files with double quotes stay under 256MB
- Processing time is comparable to the previous implementation

Additionally, this change adds a new `jsonAwareSourceProcessing` boolean property to `BucketDeploymentProps` (disabled by default for backward compatibility), updates tests to cover both processing modes, and updates the documentation with usage examples and memory considerations.
Describe any new or updated permissions being added
No new or updated IAM permissions are required for this change.
Description of how you validated changes
Performance testing was conducted with Docker-based isolated test runs to ensure accurate measurements across various file types and sizes.
All tests passed with memory limits of 32MB for standard operations and 256MB for JSON files whose markers contain double quotes.
The results confirm that our optimization only uses significant memory when processing JSON files with double quote markers, which is the specific case that requires special handling.
Checklist
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license