
[FX] Sync to OSS #1118


Merged 1 commit into master from fb-sync-wwei6 on Jun 15, 2022

Conversation

frank-wei
Contributor

364639a8ab2ee7531ce5259b8985a3c90bda4fdf Wei Wei <[email protected]> [fx2trt] target files added
07d8e842b54b9c727f4215239f6c007cc7a62c9f Wei Wei <[email protected]> Swap fx2trt_oss to torch_tensorrt
74731c90fd63e41ff5997887d8f72ca0b805cf8d Yinghai Lu <[email protected]> Fix uru_10x10 test
6c53d36a08a7d465a1108d7154ef29a373eb38cc Wei Wei <[email protected]> [fx2trt] Modify lower setting class to accommandate AIT lowering
6f873f4f3ece9d476479eb7c9633d38554dd8692 Oleg Khabinov <[email protected]> [fx2trt] Make sure acc_tracer belongs only to single target
529a5750ace2bede6e9b7a9922a0f75c459df16b Shirong Wu <[email protected]> Enable explicit batch dim for MTS gpu benchmark
2d284df94ddb530f3a8875fdc76796fad508ec29 Wei Wei <[email protected]> [fx2trt] remove wildcard for obj of torch_fx2trt in TARGETS
84b53b15427cc08fb1e36143b6bdec4557f50d7e Shirong Wu <[email protected]> Add var converter
17e309b17b3ba66cda0e7d5712089d860a5e125e Jordan Fix <[email protected]> [const_fold] Set requires_grad based on the folded tensor; add device_for_folding option
2c8f1b23be30ec968ad27215256d250c872616b0 Kefei Lu <[email protected]> lowering: support creating lowerer instance with "presets"
50fa26d1b56888ec25eb839d4813bc695be20da9 wwei6 <[email protected]> [fx2trt] target files added
6e7f9b6c4f8afa32383c457e8133674640348810 wwei6 <[email protected]> fx2trt_oss change set1
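The "Add var converter" commit above introduces a converter for the variance op. As a hedged illustration only (not the actual fx2trt/TensorRT converter code), such converters typically decompose variance into reduce-mean primitives via the identity Var(x) = E[x²] − (E[x])²; the function name below is illustrative:

```python
# Hedged sketch: the reduce-mean decomposition a variance converter
# could lower to. Not the real converter; names are illustrative.
def variance(xs, unbiased=False):
    n = len(xs)
    mean = sum(xs) / n                    # E[x]
    mean_sq = sum(x * x for x in xs) / n  # E[x^2]
    var = mean_sq - mean * mean           # biased (population) variance
    if unbiased:
        # Bessel's correction for the sample variance.
        var = var * n / (n - 1)
    return var

print(variance([1.0, 2.0, 3.0, 4.0]))  # 1.25 (biased)
```

This mirrors how reduction-based ops are often expressed on backends that expose only elementwise and reduce-mean layers.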

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

f3ee8a4b482a35edc2786cd97bc0d07e9af6a23e wwei6 <[email protected]> Automatic update of fbcode/deeplearning/trt/torch_tensorrt to 666a263
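Among the synced commits, "[const_fold] Set requires_grad based on the folded tensor" adjusts constant folding so a precomputed constant keeps its gradient flag. The sketch below illustrates that idea with a toy expression node in plain Python; it is not the torch.fx const_fold implementation, and all names (`Const`, `fold_add`) are illustrative:

```python
# Hedged sketch: constant folding that propagates a requires_grad-style
# flag from folded inputs to the folded result. Illustrative only.
from dataclasses import dataclass

@dataclass
class Const:
    value: float
    requires_grad: bool = False  # analogous to a tensor's requires_grad

def fold_add(a: Const, b: Const) -> Const:
    # Precompute the constant result, but keep requires_grad if any
    # input required grad, so downstream autograd-style logic still
    # treats the folded constant as differentiable.
    return Const(a.value + b.value, a.requires_grad or b.requires_grad)

grad_const = Const(2.0, requires_grad=True)
plain_const = Const(3.0)
folded = fold_add(grad_const, plain_const)
print(folded.value, folded.requires_grad)  # 5.0 True
```

The design point is that folding must preserve any metadata the rest of the pipeline reads off the original tensors, not just the numeric value.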

@github-actions github-actions bot left a comment


Code conforms to C++ style guidelines



@yinghai yinghai left a comment


Lgtm

@frank-wei frank-wei merged commit 42be623 into master Jun 15, 2022
@frank-wei frank-wei deleted the fb-sync-wwei6 branch June 15, 2022 05:45