[ET-VK][ez] Allow logit linear layer to be lowered to Vulkan
Pull Request resolved: #9918
## Context
Due to poor performance of Vulkan's int4 linear operator, the final logit layer of the transformer model was not delegated to Vulkan; instead it was quantized and executed with the XNNPACK delegate.
However, with D72412950 / #9883, decent performance can now be achieved with Vulkan's int4 linear op. Therefore, the final logit layer can be lowered to Vulkan.
## Changes
* Remove the restriction in `VkInt4WeightOnlyQuantizer` that caused it to skip the logit layer of the transformer
* Do not apply the XNNPACK partitioner and quantizer when lowering with Vulkan (see the sketch after this list)
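
For illustration, a minimal export sketch of what the second change implies: lowering the whole transformer, including the final logit linear layer, with only the Vulkan partitioner rather than also passing the XNNPACK partitioner for the un-delegated layer. The import paths, function names, and arguments below are assumptions based on common ExecuTorch export usage, not the actual export script touched by this diff.

```python
# Hypothetical sketch (assumed APIs): lower a model to Vulkan only,
# without the XNNPACK partitioner/quantizer fallback for the logit layer.
import torch
from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner
from executorch.exir import to_edge_transform_and_lower


def lower_to_vulkan(model: torch.nn.Module, example_inputs: tuple):
    exported = torch.export.export(model, example_inputs)
    # Previously the XNNPACK partitioner would also be passed here so the
    # non-delegated logit layer could run quantized on XNNPACK; with the
    # faster int4 linear op that fallback is no longer needed.
    edge = to_edge_transform_and_lower(
        exported,
        partitioner=[VulkanPartitioner()],
    )
    return edge.to_executorch()
```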
Differential Revision: [D72480177](https://our.internmc.facebook.com/intern/diff/D72480177/)
ghstack-source-id: 276235672