
partially codegen adaptive_avgpool3d and backward #3790



Merged

JackCaoG merged 2 commits into master from codegen_adaptive_avgpool_3d on Aug 8, 2022

Conversation

JackCaoG (Collaborator) commented:

Fix #3788 and #3787

We have some special fallback logic in adaptive_avgpool3d, so I can't fully codegen it. During development I still codegen'd the NativeFunction, though, and copied the relevant parts into aten_xla_type.cpp; this saved me from writing the shape function logic from scratch. I also got rid of the corresponding entry in tensor_method.cpp, since in the long run we don't want to keep those.
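
For context, the kind of hand-written entry kept in aten_xla_type.cpp looks roughly like the sketch below. This is an illustration, not code from the PR: IsSupportedAdaptivePool, xla_cpu_fallback, and the BuildAdaptiveAvgPool3dNode helper are assumptions modeled on neighboring torch_xla ops.

at::Tensor XLANativeFunctions::_adaptive_avg_pool3d(const at::Tensor& self,
                                                    at::IntArrayRef output_size) {
  std::vector<int64_t> out_size(output_size.begin(), output_size.end());
  // The special fallback that blocks full codegen: pooling geometries the
  // XLA lowering does not support are dispatched back to the CPU kernel.
  if (!IsSupportedAdaptivePool(self.sizes().vec(), out_size, /*pool_dim=*/3)) {
    return at::native::call_fallback_fn<
        &xla_cpu_fallback, ATEN_OP(_adaptive_avg_pool3d)>::call(self,
                                                                output_size);
  }
  // Supported case: build the codegen'd AdaptiveAvgPool3d IR node (shown
  // below); BuildAdaptiveAvgPool3dNode is a hypothetical stand-in for the
  // node-construction details.
  return bridge::AtenFromXlaTensor(
      BuildAdaptiveAvgPool3dNode(bridge::GetXlaTensor(self), out_size));
}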

Generated LazyIr:

class AdaptiveAvgPool3d : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::_adaptive_avg_pool3d);
  }

  AdaptiveAvgPool3d(const torch::lazy::Value& self,
                    const ::std::vector<int64_t>& output_size,
                    std::vector<torch::lazy::Shape>&& shapes)
      : XlaNode(
            torch::lazy::OpKind(at::aten::_adaptive_avg_pool3d), {self},
            std::move(shapes),
            [&]() { return AdaptiveAvgPool3dOutputShape(self, output_size); },
            /* num_outputs */ 1, torch::lazy::MHash(output_size)),
        output_size(output_size) {}

  std::string ToString() const override {
    std::stringstream ss;
    ss << XlaNode::ToString();
    ss << ", output_size=" << output_size;
    return ss.str();
  }

  bool CanBeReused(const torch::lazy::Value& self,
                   const ::std::vector<int64_t>& output_size) const {
    return false;
  }

  torch_xla::XlaOpVector Lower(LoweringContext* loctx) const override;

  ::std::vector<int64_t> output_size;
};
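
The shape function named in the constructor above, AdaptiveAvgPool3dOutputShape, is the piece that still has to be written by hand. A minimal sketch, assuming the InferOutputShape and BuildAdaptiveAvgPool3d helpers that torch_xla's other pooling ops use (assumptions, not code from this PR):

xla::Shape AdaptiveAvgPool3dOutputShape(
    const torch::lazy::Value& input, const std::vector<int64_t>& output_size) {
  // Run the lowering once on shape-only operands to infer the result shape.
  auto lower_for_shape_fn =
      [&](absl::Span<const xla::XlaOp> operands) -> xla::XlaOp {
    return BuildAdaptiveAvgPool3d(operands[0], output_size);
  };
  return InferOutputShape({GetXlaShape(input)}, lower_for_shape_fn);
}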

class AdaptiveAvgPool3dBackward : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::_adaptive_avg_pool3d_backward);
  }

  AdaptiveAvgPool3dBackward(const torch::lazy::Value& grad_output,
                            const torch::lazy::Value& self,
                            std::vector<torch::lazy::Shape>&& shapes)
      : XlaNode(torch::lazy::OpKind(at::aten::_adaptive_avg_pool3d_backward),
                {grad_output, self}, std::move(shapes),
                [&]() {
                  return AdaptiveAvgPool3dBackwardOutputShape(grad_output,
                                                              self);
                },
                /* num_outputs */ 1, torch::lazy::MHash()) {}

  std::string ToString() const override {
    std::stringstream ss;
    ss << XlaNode::ToString();

    return ss.str();
  }

  bool CanBeReused(const torch::lazy::Value& grad_output,
                   const torch::lazy::Value& self) const {
    return false;
  }

  torch_xla::XlaOpVector Lower(LoweringContext* loctx) const override;
};
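
The backward shape function is simpler: the gradient with respect to self always has the same shape as self, so no trial lowering is needed. A sketch under the same assumptions:

xla::Shape AdaptiveAvgPool3dBackwardOutputShape(
    const torch::lazy::Value& grad_output, const torch::lazy::Value& input) {
  // The input gradient mirrors the input's shape exactly, so return it
  // directly instead of lowering to infer it.
  return GetXlaShape(input);
}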

JackCaoG force-pushed the codegen_adaptive_avgpool_3d branch from 937c42e to de5170f on July 29, 2022 at 22:52.
wonjoolee95 (Collaborator) left a comment:

Nice! Thanks!

JackCaoG merged commit 1f154ce into master on Aug 8, 2022.