
Commit 600513b

Merged commit includes the following changes:
723477680 by A. Unique TensorFlower <[email protected]>:
[XLA] Tag timeout tests as `not_run:arm`. Similarly to cl/722883015, also tagging:
* //third_party/tensorflow/compiler/xla/python/transfer:socket_bulk_transport_test
* //third_party/tensorflow/compiler/xla/python/transfer:socket-server_test
* //third_party/tensorflow/compiler/xla/python/transfer:event_loop_test

--

723476384 by A. Unique TensorFlower <[email protected]>:
Parse the XLA_FLAGS environment variable every time, conditionally on the xla_flags_reset flag.

--

723471749 by A. Unique TensorFlower <[email protected]>:
[XLA:GPU] Rename `IsSyncCollective` and move it to a GPU-specific file. The implementation is specific to the GPU backend.

--

723470593 by A. Unique TensorFlower <[email protected]>:
[XLA:GPU] Move DotDecompose out of the simplification pipeline. This seems to be a better approach than moving TransposeFold to simplification-2 in 961e5c25fbd4082a1ac4f2e0865ad28163d12f7d:
1. There is a report that the previous change resulted in a performance degradation (openxla/xla#22081).
2. I found another case where DotDecompose competes with algsimp; added a test for that.
Overall, having a pass that expands an operation alongside passes that try to simplify it invites such infinite loops.
For archeologists: the DotDimensionSorter and DotDecomposer passes were added along with GpuAlgebraicSimplifier because it could previously add multiple contracting dimensions to a dot, and cuDNN does not support dots with 2+ contracting dimensions, forcing us to use a less efficient loop emitter. That is what the "// AlgebraicSimplifier may add contracting dimensions to a dot." comment was about. The simplifier later started using supports_non_canonical_dots to guard against this case, so it should be safe to remove the dot decomposer and friends.

--

723469960 by A. Unique TensorFlower <[email protected]>:
PR tensorflow#22334: [ROCm] Fix flaky GPU compiler test when building with ROCm. Imported from GitHub PR openxla/xla#22334.
This change fixes the flaky GPU compiler test that runs on the ROCm CI pipeline gate. The Triton pipeline was wrongly using the TritonGPUAccelerateMatmul pass, which supports CUDA only. ROCm has a different pass, which is now used in the ROCm pipeline: https://github.com/triton-lang/triton/blob/main/third_party/amd/lib/TritonAMDGPUTransforms/AccelerateAMDMatmul.cpp
Copybara import of the project:
* c5f600f03aa87d155bb624bedb0584e635af190e by Alexandros Theodoridis <[email protected]>: Fix flaky gpu compiler test when building with rocm
Merging this change closes tensorflow#22334.

--

723453199 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723445422 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723443292 by A. Unique TensorFlower <[email protected]>: [pjrt] Removed the deprecated `PjRtBuffer::CopyToDevice`.

--

723434255 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723430683 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723426786 by A. Unique TensorFlower <[email protected]>:
PR tensorflow#22258: [GPU][NFC] Avoid always printing complete PGLE profiles. Imported from GitHub PR openxla/xla#22258.
Copybara import of the project:
* 025352635a155e447559d83c471369559aad5981 by Ilia Sergachev <[email protected]>: [GPU][NFC] Avoid always printing complete PGLE profiles.
Merging this change closes tensorflow#22258.

--

723426773 by A. Unique TensorFlower <[email protected]>:
PR tensorflow#21375: [ds-fusion] Get while-loop analysis with copy fusion. Imported from GitHub PR openxla/xla#21375.
In later stages of optimization, there are instances of copy fusion on the parameter of the while body. We therefore need to allow inlining of fusions while getting the induction-variable index; otherwise we cannot deduce the tuple index.
Copybara import of the project:
* 3147ec926aa1c6fdfa2f4376668434c9a2fbeb87 by Shraiysh Vaishay <[email protected]>: [ds-fusion] Get while-loop analysis with copy fusion.
* a435fbd2eadc17269d7bccbe141dcf7a21cc20e8 by Shraiysh Vaishay <[email protected]>: Relay control dependencies while converting a fusion to a call (extractor).
Merging this change closes tensorflow#21375.

--

723425710 by A. Unique TensorFlower <[email protected]>:
[XLA] Add const-reference versions of `ForEachInstructionWithPred` and `ForEachInstructionWithOpcode`. These are more permissive and semantically equivalent.

--

723425622 by A. Unique TensorFlower <[email protected]>:
Remove dead code (NFC). We compute the total number of tiles in a variable `num_tiles` but never use it, so remove it.

--

723419822 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723402058 by A. Unique TensorFlower <[email protected]>: compat: Update forward compatibility horizon to 2025-02-05.

--

723401869 by A. Unique TensorFlower <[email protected]>: Update GraphDef version to 2129.

--

723396271 by A. Unique TensorFlower <[email protected]>:
[XLA] Support different operand and result types in AlgebraicSimplifierVisitor::HandlePad. I checked that none of the other cases in HandlePad require any adjustments.

--

723389764 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723370461 by A. Unique TensorFlower <[email protected]>: Use matchers_oss in vendor code.

--

723367856 by A. Unique TensorFlower <[email protected]>:
Update users of TSL headers and targets to their new location in XLA. Updating `env.h`, `env_time.h`, `errors.h`, `file_statistics.h`, `file_system.h`, `file_system_helper.h`, `logging.h`, `macros.h`, `status.h`, `status_matchers.h`, `status_to_from_proto.h`, `statusor.h`, `test.h`, `test_benchmark.h`, `threadpool.h`, `threadpool_async_executor.h`, `threadpool_interface.h`, `threadpool_options.h`, `types.h`, and the associated targets.

--

723349025 by A. Unique TensorFlower <[email protected]>:
Fix inference request analysis aggregated on batch size by aggregating only the requests included in a single batch; a large request split into multiple batches would otherwise produce confusing results (e.g. the device time would be the sum of the two batches' processing time).

--

723344172 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723340771 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723337100 by A. Unique TensorFlower <[email protected]>: Automated Code Change

--

723321370 by A. Unique TensorFlower <[email protected]>:
Stop modifying the TraceEventsContainer in DoStoreAsLevelDbTable. This behavior is not intuitive (it modifies a const value that was passed in) and is unnecessary.

--

723307829 by A. Unique TensorFlower <[email protected]>: Automated rollback of changelist 723246423.

--

723278167 by A. Unique TensorFlower <[email protected]>:
Update users of TSL headers and targets to their new location in XLA. Updating `env.h`, `env_time.h`, `errors.h`, `file_statistics.h`, `file_system.h`, `file_system_helper.h`, `logging.h`, `macros.h`, `status.h`, `status_matchers.h`, `status_to_from_proto.h`, `statusor.h`, `test.h`, `test_benchmark.h`, `threadpool.h`, `threadpool_async_executor.h`, `threadpool_interface.h`, `threadpool_options.h`, `types.h`, and the associated targets.

--

723265881 by A. Unique TensorFlower <[email protected]>: Add the list of Qualcomm SoCs supporting NPU.

--

723248792 by A. Unique TensorFlower <[email protected]>:
Add Q/DQ annotation lowering support. LowerQuantAnnotationsPass now supports lowering of the quant.quantize and quant.dequantize composites. These patterns adjust the function signatures if necessary.

--

723246423 by A. Unique TensorFlower <[email protected]>:
PR tensorflow#85476: Support QNN wrappers for LiteRT. Imported from GitHub PR tensorflow#85476.
What:
* Basic wrappers for QNN types that handle dynamic resources along with the wrapper instances.
* Make these wrappers independent of LiteRT/tflite; they depend only on QNN and the STL.
Wrapper classes:
* `ScalarParamWrapper`: wraps `Qnn_Param_t` with `QNN_PARAMTYPE_SCALAR` as `paramType` and chooses the correct `QNN_DATATYPE` based on the data type.
* `TensorParamWrapper`: wraps `Qnn_Param_t` with `QNN_PARAMTYPE_TENSOR` as `paramType`.
* `UndefinedQuantizeParamsWrapper`: wraps `Qnn_QuantizeParams_t`; the default for the quantization parameter.
* `ScaleOffsetQuantizeParamsWrapper`: wraps `Qnn_QuantizeParams_t` for per-tensor quantization.
* `AxisScaleOffsetQuantizeParamsWrapper`: wraps `Qnn_QuantizeParams_t` for per-axis quantization.
* `TensorWrapper`: wraps `Qnn_TensorType_t` and handles dynamic resources, e.g. name, dimensions, weight data.
* `OpWrapper`: wraps `Qnn_OpConfig_t` and handles dynamic resources, e.g. name, input/output tensors, params.
Copybara import of the project:
* 4833a20 by weilhuan-quic <[email protected]>: LiteRt Qualcomm wrappers
* 725f571 by weilhuan-quic <[email protected]>: TensorWrapper GetDataTypeSize() returns bytes instead of bits
* dd3f251 by weilhuan-quic <[email protected]>: Comment qnn_lib_headers
* 06e0616 by weilhuan-quic <[email protected]>: Change license
Merging this change closes tensorflow#85476.

--

PiperOrigin-RevId: 723477680
Parent: 91ae966 · Commit: 600513b

122 files changed: +800 -295 lines

tensorflow/compiler/jit/pjrt_device_context.cc

+2 -1

@@ -271,7 +271,8 @@ void PjRtDeviceToDeviceCopy(DeviceContext* send_dev_context,
           .value();

   absl::StatusOr<std::unique_ptr<xla::PjRtBuffer>> buffer_or =
-      src_device_buffer->CopyToDevice(pjrt_dst_device);
+      src_device_buffer->CopyToMemorySpace(
+          *pjrt_dst_device->default_memory_space());
   if (!buffer_or.ok()) {
     done(buffer_or.status());
     return;
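
For other callers hit by the removal of `PjRtBuffer::CopyToDevice` (see change 723443292 above), the hunk above illustrates the migration: ask the destination device for a memory space and copy to that. Below is a minimal sketch, not code from this commit; the function name, the pointer parameters, and the header paths are assumptions, and the destination's default memory space is used as in the hunk.

#include <memory>

#include "absl/status/statusor.h"
#include "xla/pjrt/pjrt_client.h"  // assumed header for PjRtBuffer/PjRtDevice

// Hypothetical helper: copy `src_buffer` to the default memory space of
// `dst_device`, replacing the removed CopyToDevice call.
absl::StatusOr<std::unique_ptr<xla::PjRtBuffer>> CopyToDefaultMemorySpace(
    xla::PjRtBuffer* src_buffer, xla::PjRtDevice* dst_device) {
  // default_memory_space() returns absl::StatusOr<PjRtMemorySpace*>;
  // propagate the failure instead of dereferencing it blindly.
  absl::StatusOr<xla::PjRtMemorySpace*> memory_space =
      dst_device->default_memory_space();
  if (!memory_space.ok()) {
    return memory_space.status();
  }
  // Was: src_buffer->CopyToDevice(dst_device);
  return src_buffer->CopyToMemorySpace(*memory_space);
}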

tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.h

+1

@@ -20,6 +20,7 @@ limitations under the License.
 #include <unordered_set>
 #include <vector>

+#include "absl/status/status.h"
 #include "mlir/IR/BuiltinOps.h"  // from @llvm-project
 #include "mlir/IR/MLIRContext.h"  // from @llvm-project
 #include "mlir/IR/OwningOpRef.h"  // from @llvm-project

tensorflow/compiler/mlir/lite/transforms/lower_quant_annotations_helper.cc

+12 -2

@@ -37,7 +37,8 @@ namespace mlir::TFL {
 LogicalResult FillCompositeParams(stablehlo::CompositeOp op,
                                   SmallVector<double, 4>& scales,
                                   SmallVector<int64_t, 4>& zero_points,
-                                  int& num_bits, bool& is_signed) {
+                                  int& num_bits, bool& is_signed,
+                                  bool& is_narrow_range) {
   auto scale_attr = llvm::dyn_cast_or_null<DenseFPElementsAttr>(
       op.getCompositeAttributes().get("scale"));
   if (scale_attr == nullptr) {
@@ -68,10 +69,19 @@ LogicalResult FillCompositeParams(stablehlo::CompositeOp op,
   if (dtype == "i8") {
     num_bits = 8;
     is_signed = true;
+  } else if (dtype == "i4") {
+    num_bits = 4;
+    is_signed = true;
   } else {
-    // TODO(majiddadashi) currently only tested with i8.
     return failure();
   }
+  auto narrow_range_attr = llvm::dyn_cast_or_null<BoolAttr>(
+      op.getCompositeAttributes().get("narrow_range"));
+  if (narrow_range_attr == nullptr) {
+    return failure();
+  }
+  is_narrow_range = narrow_range_attr.getValue();
+
   return success();
 }

tensorflow/compiler/mlir/lite/transforms/lower_quant_annotations_helper.h

+2 -1

@@ -31,7 +31,8 @@ namespace mlir::TFL {
 LogicalResult FillCompositeParams(stablehlo::CompositeOp op,
                                   SmallVector<double, 4>& scales,
                                   SmallVector<int64_t, 4>& zero_points,
-                                  int& num_bits, bool& is_signed);
+                                  int& num_bits, bool& is_signed,
+                                  bool& is_narrow_range);

 LogicalResult GetStorageParams(unsigned num_bits, bool narrow_range,
                                bool is_signed, MLIRContext* ctx,
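
A hedged sketch of how a caller might consume the widened signature above; this is not code from LowerQuantAnnotationsPass, the helper name and `composite_op` value are illustrative, and it assumes the declarations from lower_quant_annotations_helper.h are in scope inside namespace mlir::TFL.

// Hypothetical helper: pull quantization parameters out of a quant.quantize
// composite using the new is_narrow_range out-parameter.
LogicalResult ExtractQuantParams(stablehlo::CompositeOp composite_op) {
  SmallVector<double, 4> scales;
  SmallVector<int64_t, 4> zero_points;
  int num_bits = 0;
  bool is_signed = false;
  bool is_narrow_range = false;
  // Fails for unsupported dtypes (anything other than i8/i4) or when the
  // scale, zero_point, or narrow_range composite attributes are missing.
  if (failed(FillCompositeParams(composite_op, scales, zero_points, num_bits,
                                 is_signed, is_narrow_range))) {
    return failure();
  }
  // num_bits / is_signed / is_narrow_range would then feed GetStorageParams
  // to derive the quantized storage type.
  return success();
}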
