Lots of memory allocations become a bottleneck #120
Memory allocations are expected, especially during generation of synthetic datasets.
Is it possible to split the "generation of synthetic datasets" and the "actual benchmark execution" between two processes? My case is that I am trying to run these benchmarking algorithms in SGX using Gramine, where we have memory constraints. Hence I would like to know whether the synthetic datasets can be generated separately, so that only the benchmark execution happens inside SGX.
Sorry, closed it by mistake.
Would be addressed with the pre-fetch capability in this PR: #133
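A minimal sketch of the split asked about above, assuming a Python harness; the script, the `make_synthetic_dataset` and `run_benchmark` names, and the dataset shape are hypothetical placeholders, not this project's actual API or the pre-fetch mechanism from #133:

```python
# Hypothetical two-step driver: the "generate" step runs outside SGX,
# the "run" step runs inside the enclave (e.g. under Gramine).
import argparse
import numpy as np

def make_synthetic_dataset(n_rows=1_000_000, n_cols=128, seed=0):
    # Illustrative synthetic data; the real generator is project-specific.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_rows, n_cols), dtype=np.float32)

def run_benchmark(dataset):
    # Placeholder for the actual benchmark execution.
    print("benchmarking on dataset of shape", dataset.shape)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("step", choices=["generate", "run"])
    parser.add_argument("--path", default="synthetic_dataset.npy")
    args = parser.parse_args()

    if args.step == "generate":
        # Outside the enclave: allocate freely and persist the dataset once.
        np.save(args.path, make_synthetic_dataset())
    else:
        # Inside the enclave: memory-map the saved file instead of
        # re-generating it, so the large allocations never happen under SGX.
        run_benchmark(np.load(args.path, mmap_mode="r"))
```

The point of the split is that the allocation-heavy generation step is paid once, outside the enclave, and the SGX process only maps the result read-only.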
I captured perf data for most of the algorithms and see that a lot of memory allocations happen during the run, which becomes a bottleneck. Please refer to the attached screenshot.
Is there a way to fine-tune the memory allocations, such as an environment variable or command-line argument?
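The thread does not name a project-specific knob, but if the allocations reported by perf come from glibc malloc, its standard environment tunables can be set for the benchmark process. A hedged sketch, assuming glibc is the allocator and `./run_benchmark` stands in for the real benchmark command; whether these values help depends on the allocation pattern and should be re-checked with perf:

```python
# Launch the benchmark with glibc malloc tunables in its environment.
# These are standard glibc variables, not options of this project.
import os
import subprocess

env = dict(os.environ)
env.update({
    "MALLOC_ARENA_MAX": "1",             # fewer per-thread arenas: lower memory use, possible lock contention
    "MALLOC_TRIM_THRESHOLD_": "131072",  # return freed memory to the OS sooner
    "MALLOC_MMAP_THRESHOLD_": "131072",  # serve large requests via mmap instead of the heap
})

# Placeholder command line for the actual benchmark binary and its arguments.
subprocess.run(["./run_benchmark", "--algorithm", "example"], env=env, check=True)
```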