
Commit ed20255

fix(nodes): depth anything processor (#5956) (#5961)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

We were passing a PIL image when we needed to pass the np image. Closes #5956

## Related Tickets & Documents

- Related Issue #
- Closes #5956

## QA Instructions, Screenshots, Recordings

Depth anything processor should work.

## Merge Plan

This PR can be merged when approved
2 parents a386544 + fed1f98 commit ed20255

File tree

1 file changed: +2 −2 lines changed

invokeai/backend/image_util/depth_anything/__init__.py

```diff
@@ -90,8 +90,8 @@ def __call__(self, image: Image.Image, resolution: int = 512) -> Image.Image:
         np_image = np_image[:, :, ::-1] / 255.0

         image_height, image_width = np_image.shape[:2]
-        np_image = transform({"image": image})["image"]
-        tensor_image = torch.from_numpy(image).unsqueeze(0).to(choose_torch_device())
+        np_image = transform({"image": np_image})["image"]
+        tensor_image = torch.from_numpy(np_image).unsqueeze(0).to(choose_torch_device())

         with torch.no_grad():
             depth = self.model(tensor_image)
```
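The bug this diff fixes is a type mix-up: the transform and `torch.from_numpy` expect a NumPy array, but the code was handed the original PIL `Image`. A minimal sketch of the distinction (standalone, not the InvokeAI code; the image size and dtype here are arbitrary):

```python
import numpy as np
from PIL import Image

# A hypothetical input image, analogous to the `image` argument in __call__.
pil_image = Image.new("RGB", (8, 6))  # PIL size is (width, height)

# The np conversion used in the processor: RGB -> BGR, scaled to [0, 1].
np_image = np.asarray(pil_image, dtype=np.float32)[:, :, ::-1] / 255.0

# NumPy arrays are indexed (height, width, channels), so shape[:2] works here.
image_height, image_width = np_image.shape[:2]
print(image_height, image_width)  # 6 8

# A PIL Image has no .shape attribute and is not a NumPy array, which is why
# passing it where `np_image` was expected broke the downstream tensor code.
print(hasattr(pil_image, "shape"))  # False
```

Passing `pil_image` to array-consuming code fails or misbehaves at the first NumPy-specific operation, which is why the fix substitutes `np_image` in both places.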
