
gRPC instrumentation fails when server uses unix socket #3393

Closed
diurnalist opened this issue Mar 28, 2025 · 0 comments · Fixed by #3394
Labels
bug Something isn't working

Comments

@diurnalist
Contributor

diurnalist commented Mar 28, 2025

Describe your environment

OS: Ubuntu (did not test others)
Python version: Python 3.9+
Package version: 0.44b0+
grpc version: <=1.50.0

What happened?

When using the gRPC instrumentation with a server that is exposed over a unix socket, we get the following error:

ValueError: not enough values to unpack (expected 2, got 1)

This seems to be because context.peer() returns "unix://path/to/socket.sock" rather than simply "unix:". The instrumentation assumes it is a host:port address and fails to parse it.
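
For illustration, here is a minimal standalone sketch of that failure mode (the split expression is a paraphrase of the instrumentation's host:port parsing, not the exact source):

# Paraphrase of the instrumentation's peer parsing, for illustration only.
def parse_peer(peer: str):
    # Works for peers such as "ipv4:127.0.0.1:57284" or "ipv6:[::1]:57284",
    # which always contain a trailing ":port" segment.
    ip, port = peer.split(",")[0].split(":", 1)[1].rsplit(":", 1)
    return ip, port

parse_peer("ipv4:127.0.0.1:57284")  # -> ('127.0.0.1', '57284')
parse_peer("unix:/tmp/grpc.sock")   # ValueError: not enough values to unpack (expected 2, got 1)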

Note that the grpc project has at times tried to improve how the socket path is communicated: grpc/grpc#18556. This behavior changed after version 1.50.0 (versions 1.51 and 1.52 were yanked for other issues; 1.53.0 is the first version that does not have this problem). However, I think the instrumentation should handle the case where a path is defined, as that seems more correct.
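
A rough sketch of that idea (a hypothetical split_peer helper, not the exact change that landed in #3394): treat any peer that starts with "unix:" as a socket address, whether or not a path follows, and only attempt the host:port split for ip peers.

# Hypothetical helper illustrating the idea; not the exact patch from #3394.
def split_peer(peer: str):
    """Return (host, port) for ip peers, or (socket_path, None) for unix peers."""
    if peer.startswith("unix:"):
        # Depending on the grpc version the peer is "unix:" or
        # "unix:/path/to/socket.sock"; handle the path being present or absent.
        path = peer[len("unix:"):]
        return (path or None, None)
    # e.g. "ipv4:127.0.0.1:57284" or "ipv6:[::1]:57284"
    host, port = peer.split(",")[0].split(":", 1)[1].rsplit(":", 1)
    return host, port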

Steps to Reproduce

requirements.txt

grpcio==1.50.0
grpcio-tools
opentelemetry-instrumentation-grpc

proto/helloworld.proto

syntax = "proto3";

package helloworld;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

src/server.py

import logging
from concurrent import futures

import grpc
from opentelemetry.instrumentation.grpc import GrpcInstrumentorServer

import helloworld_pb2_grpc
import helloworld_pb2

logger = logging.getLogger(__name__)

class MyGreeter:
    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message=":wave: " + request.name)

def serve():
    grpc_server_instrumentor = GrpcInstrumentorServer()
    grpc_server_instrumentor.instrument()

    server = grpc.server(futures.ThreadPoolExecutor())
    helloworld_pb2_grpc.add_GreeterServicer_to_server(MyGreeter(), server)
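    # Bind to a unix domain socket instead of a TCP host:port; this is what
    # makes context.peer() report a "unix:..." address on the server side.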
    server.add_insecure_port('unix:///tmp/grpc.sock')
    server.start()
    server.wait_for_termination()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    serve()

src/client.py

import grpc

import helloworld_pb2_grpc
import helloworld_pb2


def run():
    with grpc.insecure_channel("unix:///tmp/grpc.sock") as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        stub.SayHello(helloworld_pb2.HelloRequest(name="world"))

if __name__ == "__main__":
    run()

How to reproduce:

  • uv venv --seed
  • uv run pip install -r requirements.txt
  • uv run python -m grpc_tools.protoc -I./proto --python_out=./src --grpc_python_out=./src ./proto/helloworld.proto
  • uv run python src/server.py
  • (in another terminal) uv run python src/client.py

Expected Result

gRPC call should return without error.

Actual Result

Traceback (most recent call last):
  File "/home/jason.anderson/workspaces/grpc-unix-socket-server/src/client.py", line 13, in <module>
    run()
  File "/home/jason.anderson/workspaces/grpc-unix-socket-server/src/client.py", line 10, in run
    stub.SayHello(helloworld_pb2.HelloRequest(name="world"))
  File "/home/jason.anderson/workspaces/grpc-unix-socket-server/.venv/lib/python3.11/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jason.anderson/workspaces/grpc-unix-socket-server/.venv/lib/python3.11/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNKNOWN
        details = "Exception calling application: not enough values to unpack (expected 2, got 1)"
        debug_error_string = "UNKNOWN:Error received from peer unix:/tmp/grpc.sock {grpc_message:"Exception calling application: not enough values to unpack (expected 2, got 1)", grpc_status:2, created_time:"2025-03-28T13:48:37.018497876-07:00"}"

Additional context

No response

Would you like to implement a fix?

Yes

diurnalist added the bug label Mar 28, 2025
diurnalist added a commit to diurnalist/opentelemetry-python-contrib that referenced this issue Mar 28, 2025
with some grpc implementations the full .peer address is available
for unix sockets, which includes the socket path. it seems that
in versions of grpc prior to 1.53.0, the full path is returned by
`context.peer()`. rather than change the dependency of the instrumentation,
this updates it to more gracefully handle the case of the socket path
being present or absent.

Fixes open-telemetry#3393
xrmx closed this as completed in #3394 Apr 2, 2025
xrmx added a commit that referenced this issue Apr 2, 2025
* fix: grpc server ValueError when using unix sockets

with some grpc implementations the full .peer address is available
for unix sockets, which includes the socket path. it seems that
in versions of grpc prior to 1.53.0, the full path is returned by
`context.peer()`. rather than change the dependency of the instrumentation,
this updates it to more gracefully handle the case of the socket path
being present or absent.

Fixes #3393

* add changelog entry

---------

Co-authored-by: Riccardo Magliocchetti <[email protected]>