Skip to content

Commit 5e7d5f7

Browse files
yunhaolingYijunXieMS
authored andcommitted
Eventhub track2 Live test update (#8)
* Set allowed sasl mechs * Remove client.py * Receiver update * Add dummy send api * logging updates * Error handling, reconnect and logging * Add app properties to event data * unbind transport on connection close * timestamp filter on py2 * module version * Reconnect once when link/session/connection close * Add SessionPolicy * Add client info * Updates - Cleaned wireframes to be PEP compliant - Implemented single partition pump and single event_hub partition pump scenario Todo - Add Unit Tests for partition pump and event hub partition pump - Implement Partition Manager - Implement Checkpointing and Lease Managment * Updates - Cleaned wireframes to be PEP compliant - Implemented single partition pump and single event_hub partition pump scenario Todo - Add Unit Tests for partition pump and event hub partition pump - Implement Partition Manager - Implement Checkpointing and Lease Managment * run client in non-blocking mode * Added unit testing * Implemented the following functionality - Azure_storage_checkpoint_manager - AzureBlobLease isExpired Todo Implement partition manager Implement partition context Test full implementation * Implemented Processing of First Epoh Todo - Fix lease bug that is breaking subsequent epochs * Changes - Completed End to End EPH Flow - Removed storage dependancy on downloading full blob to check lease state Todo - Add thread and queue for checking lease state and other storage operations - Ensure eventhub client shuts down properly - Find way to update partition pumps without restarting them - Other optimizations * Move examples out * Changes - Added thread pool executor to enable conncurent execution of partitions - Removed partition pump dependency on max_batch Todo - Ensure eventhub client shuts down properly (This is causing errors) - Add thread pool for making checkpoint code conccurent - Add thread and queue for checking lease state and other storage operations to enable async - Find way to reassign active partition pumps without restarting them - Other optimizations * Add async receive * Changes - Added logs - Fixed error causing client to prematurely shutdown * Manual link flow control for async receive * Workaround for stuck async receiver * Local variable names * Changes - Optimized logging and comments Todo - Add concurecny mechanim for azure storage - Depricate partition pump event queue and update to latest version of the client * Create Dockerfile * Stuck async receiver * credit keeps increasing in async receiver * Changes - Added asnyc event hub client support - Optimized logging and comments Todo - Add concurecny mechanim for azure storage * Updated docker file as requested * Added EPH example * Fix hardcoded HTTP header * Made suggested changes * Bug fix - Fixed event loop bugs. In windows eventloop is thread dependent but in ubuntu the eventloop is threadsafe so you need to differentiate the thread specific eventloop from the host one. * Updated loop naming convention to be consistent * Added option to pass asyncio event_loop to eph * Updated docker file * Fixed critical bug with partition manager and aquirec mechanisiims Todo : Identitfy and fix remaining bug that is causing all pumps to shut down when a second host starts * Bug fixes - Fixed bug where closing a pump closed a host - Fixed bug where error partitioned were not removed - Fixed bug where leases were renewed at an incorrect interval * Updated file headers Removed author reference * - Fixed bug in eph example that caused host to terminate prematurely - Made the lease renewal and checkpoint creation "multithreaded" * Increase the size of the connection pool The default connection pool size was too small for scenarios where multiple partitions were handled by one EventProcessorHost. If the amount of partitions handled is large, we might end up doing very many connections at the same time due to the multi-threaded blob-handling. For this reason, you might hit the OS limits that restrict the number of open files per process that in MacOS is not very big. This can be worked around with something like: `ulimit -n 2560` * Decrease info logging verbosity * added ability to toggle pump shutdown when all messages on a pump are processed. * Install also eventhubsprocessor * Default to keeping the pumps It is more optimal to keep the pumps alive even if there are no messages so that it is faster to pickup when messages start to arrive. * Pipe and event injector for Windows * Event injector updates * EHClient refactoring. EHClient leaks. Sender part 1. * Send support * ren eventhubsprocessor eventprocessorhost * Changes - Added event hub config to simplify installation story * Changes - Added optional eventprocessor_params for passing context to the event processor - Made the storage manager mandatatory * Fix memory leaks * logging * Fix: 1. process crash due to race in client stop and connection remote close. 2. handle client close in async receiver. 3. fail pending sends when sender is closed. 4. some debug logging. * tests * test: recv from multiple partitions * test utility * logging update * Support callback based send for high throughput * Workaroud memory issue in proton.reactor.ApplicationEvent * renamed eventprocessor to eventprocessorhost for consistency * updated docker file * fixed typo in url * Added amqp port to address * Updated sample documentation since url is auto encoded by config * Updated docs * Implement timeout for send * Async sender and example * Close injector pipe * Use send timer to also check queued messages * Add partition pump loop to partition_context This gives the EventProcessor access to the partition_pump loop object. This way if One desires to run synchronous code inside process_events_async one can utilize the loop object to run the synchronous code using await context.pump_loop.run_in_executor(None, bla) * Include details in send error * Release deliveries when sender is closed * added validation to unquoted sas key * added support for custom eventhub client prefetch size * Update README.md * Update README.md * Added Docker instructions and fixed Dockerfile (Azure#18) * Removed Dockerfile from the main folder and fixed Dockerfile example * Added build and run Dockerfile documentation * Update Readme * Removed rm qpid-proton folder * Removed /usr/share copy * Disallow a sender/receiver to be registered more than once * Make everything async in EPH I have removed all usage of threads thoroughout the code. Using threads to run pumps etc. Causes async code written into the event-processor to become caotic (you need to follow which loop is currently being used in the call to prevent loops not being found or using the wrong loop (There is the main loop and then loops that are created inside threads) Things become caotic when the event processor is being called by objects that run under different loops. So, no Threading except usage of asyncio run_in_executor. This is done mostly for azure blob api calls. Also changed the bla_async methods to not block. this way, when calling open_async for the the event-processor-host, the command will exit once the EPH is started. Due to the above, see the edited example/eph.py where I added a monitor that makes sure the EPH is still running (Could be replaced by loop.run_forever()) in the example file I have also incorporated a test class for gracefully killing the EPH after 30 seconds. this works, nevertheless takes a while to close as we are waiting for timeouts on the eventhubs connections. * Started removing proton code * Removed most of proton _impl * Removed more code * Working sender * Updates to sender * Added some tests/samples * Some progress on clients * Fixed samples * Added azure namespace * Azure#25 Partition key cannot be set for events * Updated version * Updated README * Renamed package to eventhub * Started EPH modifications * Updated imports * Fixed target urls * Updated logging * Updated async message receive * updated test imports * Added mgmt call to get eh info * Updated samples * Updated receive test * Added send and receive test clients * Updated uamqp dependency * Merged updates from dev * Fixed typos * Updated EPH sample * Started docstrings * Converted tests to pytest * Updates to batch receive * Started adding docstrings * More docstrings * bumped version * Started porting test suite * More tests and improvements * Moved eph tests * Some sample cleanup * Some test updates * Some test restructure * Docstring cleanup * Fixed some merge artifacts * Fixed formatting error * Removed delivery count * Nested package directory * Support custom URL suffix * Support custom URL suffix * Support for EventData device ID * Reverted nested directory * Updated release notes * Workaround for partitionkey * Finished partition key workaround * beta2 fixes * pylint fixes * Trigger CI * Test fixes * Added package manifest * Added warning for Python 2.7 support Support for issues Azure#36 and Azure#38 * Started adding scenario tests * More test scenarios * Better docstring formatting * Started iothub support * Fixed long running test * Fixed typo and memory leak * Restructure * IoThub support * Updates for RC1 release * Fix long running test * Docstring and sample cleanups * Working on error retry * Improved error processing * Fixed partition manager * Progress on IotHub error * Some test updates * Updated uamqp dependency * Restructure for independent connections * Added HTTP proxy support Fix for issue Azure#41 * Fixed some tests + samples * pylint fixes * bumped version * Added keepalive config and some eph fixes * Made reconnect configurable * Added more EPH options * Bumped version * Pylint fix * Pylint fix * Added send and auth timeouts * Changed log formatting. Retry on reconnect * Pylint fixes * Renamed internal async module * Updated send example to match recv Fix for issue Azure#56 * Added build badge to readme * Fix for repeat startup * Added more storage connect options to EPH * Bumped version * Handler blocked until client started * Added event data methods * Fix pylint * Fix 3.7 CI * Fix 3.7 CI * Updated pylint version * Pylint fixes * Updated README * Fixed readme badge refresh * Fixed bug in Azure namespace package * Updated manifest * Parse enqueued time as UTC Fixes Azure#72. * Updates for release 1.2.0 (Azure#81) * Made setup 2.7 compatible * Separated async tests * Support 2.7 types * Bumped version * Added non-ascii tests * Fix CI * Fix Py27 pylint * Added iot sample * Updated sender/receiver client opening * bumped version * Updated tests * Fixed test name * Fixed test env settings * Skip eph test * Updates for v1.3.0 (Azure#91) * Added support for storing the state of the Event Processor along the Checkpoint. Both Checkpoint and the EP state are stored as pickled objects. * Fixing pylint complaints. * Switched from pickle back to JSON for lease persistence. * Fixes bug when accessing leases that don't contain EP context. Also, minor renaming. * Better SAS token support * Fixed pylint * Improved auth error handling * Test stabilization * Improved stored EPH context * Updated EPH context storing * Skip test on OSX * Skip tests on OSX Fail due to large message body bug. * Some cleanup * Fixed error handling * Improved SAS token parsing * Fixed datetime offset (Azure#99) * Fixed datetime offset * Updated pylint * Removed 3.4 pylint pass * Fixed bug in error handling (Azure#100) * Migrate event hub sdk to central repo 1. add verifiable code snippets into docstring 2. update readme according to the template 3. add livetest mark and config 4. optimize code layout/structure * 1. document formatting 2. separate async/sync example tests * Fix build error: 1. uamqp dependency mismatch 2. rename test_examples in eventhub to avoid mismatch * This should fix build error * remove tests import and add sys path to solve build error * add live test for sending BatchEvent with application_properties, new live test passed with new uamqp wheel locally installed * Add get_partition_info in Event Hub * add get_partition_info * Add telemetry information to the connection properties * Disable smart split in batch message * 1. Add amqp over websocket test 2. Add proxy sample 3. Update some comment and code * update some test code * Add __str__ to EventData * Update test code * Update event position * Update live test * Update reconnect live test * Update too large data size
1 parent d415474 commit 5e7d5f7

File tree

8 files changed

+54
-53
lines changed

8 files changed

+54
-53
lines changed

sdk/eventhub/azure-eventhubs/tests/asynctests/test_negative_async.py

+1-1
Original file line numberDiff line numberDiff line change
@@ -141,7 +141,7 @@ async def test_send_too_large_message_async(connection_str):
141141
client = EventHubClient.from_connection_string(connection_str, debug=False)
142142
sender = client.create_sender()
143143
try:
144-
data = EventData(b"A" * 300000)
144+
data = EventData(b"A" * 1100000)
145145
with pytest.raises(EventHubError):
146146
await sender.send(data)
147147
finally:

sdk/eventhub/azure-eventhubs/tests/test_iothub_receive.py

+3-1
Original file line numberDiff line numberDiff line change
@@ -13,8 +13,10 @@
1313

1414
@pytest.mark.liveTest
1515
def test_iothub_receive_sync(iot_connection_str, device_id):
16-
client = EventHubClient.from_iothub_connection_string(iot_connection_str, debug=True)
16+
pytest.skip("current code will cause ErrorCodes.LinkRedirect")
17+
client = EventHubClient.from_iothub_connection_string(iot_connection_str, network_tracing=True)
1718
receiver = client.create_receiver(partition_id="0", operation='/messages/events')
19+
receiver._open()
1820
try:
1921
partitions = client.get_properties()
2022
assert partitions["partition_ids"] == ["0", "1", "2", "3"]

sdk/eventhub/azure-eventhubs/tests/test_iothub_send.py

+3-4
Original file line numberDiff line numberDiff line change
@@ -16,11 +16,10 @@
1616

1717
@pytest.mark.liveTest
1818
def test_iothub_send_single_event(iot_connection_str, device_id):
19-
client = EventHubClient.from_iothub_connection_string(iot_connection_str, debug=True)
20-
sender = client.add_sender(operation='/messages/devicebound')
19+
client = EventHubClient.from_iothub_connection_string(iot_connection_str, network_tracing=True)
20+
sender = client.create_sender(operation='/messages/devicebound')
2121
try:
22-
outcome = sender.send(EventData(b"A single event", to_device=device_id))
23-
assert outcome.value == 0
22+
sender.send(EventData(b"A single event", to_device=device_id))
2423
except:
2524
raise
2625
finally:

sdk/eventhub/azure-eventhubs/tests/test_longrunning_receive.py

+1-1
Original file line numberDiff line numberDiff line change
@@ -97,7 +97,7 @@ def test_long_running_receive(connection_str):
9797
if args.conn_str:
9898
client = EventHubClient.from_connection_string(
9999
args.conn_str,
100-
eventhub=args.eventhub, debug=False)
100+
eventhub=args.eventhub, network_tracing=False)
101101
elif args.address:
102102
client = EventHubClient(
103103
args.address,

sdk/eventhub/azure-eventhubs/tests/test_negative.py

+17-16
Original file line numberDiff line numberDiff line change
@@ -19,15 +19,15 @@
1919
@pytest.mark.liveTest
2020
def test_send_with_invalid_hostname(invalid_hostname, connstr_receivers):
2121
_, receivers = connstr_receivers
22-
client = EventHubClient.from_connection_string(invalid_hostname, debug=False)
22+
client = EventHubClient.from_connection_string(invalid_hostname, network_tracing=False)
2323
sender = client.create_sender()
2424
with pytest.raises(EventHubError):
2525
sender._open()
2626

2727

2828
@pytest.mark.liveTest
2929
def test_receive_with_invalid_hostname_sync(invalid_hostname):
30-
client = EventHubClient.from_connection_string(invalid_hostname, debug=True)
30+
client = EventHubClient.from_connection_string(invalid_hostname, network_tracing=True)
3131
receiver = client.create_receiver(partition_id="0")
3232
with pytest.raises(EventHubError):
3333
receiver._open()
@@ -36,15 +36,15 @@ def test_receive_with_invalid_hostname_sync(invalid_hostname):
3636
@pytest.mark.liveTest
3737
def test_send_with_invalid_key(invalid_key, connstr_receivers):
3838
_, receivers = connstr_receivers
39-
client = EventHubClient.from_connection_string(invalid_key, debug=False)
39+
client = EventHubClient.from_connection_string(invalid_key, network_tracing=False)
4040
sender = client.create_sender()
4141
with pytest.raises(EventHubError):
4242
sender._open()
4343

4444

4545
@pytest.mark.liveTest
4646
def test_receive_with_invalid_key_sync(invalid_key):
47-
client = EventHubClient.from_connection_string(invalid_key, debug=True)
47+
client = EventHubClient.from_connection_string(invalid_key, network_tracing=True)
4848
receiver = client.create_receiver(partition_id="0")
4949
with pytest.raises(EventHubError):
5050
receiver._open()
@@ -53,23 +53,24 @@ def test_receive_with_invalid_key_sync(invalid_key):
5353
@pytest.mark.liveTest
5454
def test_send_with_invalid_policy(invalid_policy, connstr_receivers):
5555
_, receivers = connstr_receivers
56-
client = EventHubClient.from_connection_string(invalid_policy, debug=False)
56+
client = EventHubClient.from_connection_string(invalid_policy, network_tracing=False)
5757
sender = client.create_sender()
5858
with pytest.raises(EventHubError):
5959
sender._open()
6060

6161

6262
@pytest.mark.liveTest
6363
def test_receive_with_invalid_policy_sync(invalid_policy):
64-
client = EventHubClient.from_connection_string(invalid_policy, debug=True)
64+
client = EventHubClient.from_connection_string(invalid_policy, network_tracing=True)
6565
receiver = client.create_receiver(partition_id="0")
6666
with pytest.raises(EventHubError):
6767
receiver._open()
6868

6969

7070
@pytest.mark.liveTest
7171
def test_send_partition_key_with_partition_sync(connection_str):
72-
client = EventHubClient.from_connection_string(connection_str, debug=True)
72+
pytest.skip("Skipped tentatively. Confirm whether to throw ValueError or just warn users")
73+
client = EventHubClient.from_connection_string(connection_str, network_tracing=True)
7374
sender = client.create_sender(partition_id="1")
7475
try:
7576
data = EventData(b"Data")
@@ -82,15 +83,15 @@ def test_send_partition_key_with_partition_sync(connection_str):
8283

8384
@pytest.mark.liveTest
8485
def test_non_existing_entity_sender(connection_str):
85-
client = EventHubClient.from_connection_string(connection_str, eventhub="nemo", debug=False)
86+
client = EventHubClient.from_connection_string(connection_str, eventhub="nemo", network_tracing=False)
8687
sender = client.create_sender(partition_id="1")
8788
with pytest.raises(EventHubError):
8889
sender._open()
8990

9091

9192
@pytest.mark.liveTest
9293
def test_non_existing_entity_receiver(connection_str):
93-
client = EventHubClient.from_connection_string(connection_str, eventhub="nemo", debug=False)
94+
client = EventHubClient.from_connection_string(connection_str, eventhub="nemo", network_tracing=False)
9495
receiver = client.create_receiver(partition_id="0")
9596
with pytest.raises(EventHubError):
9697
receiver._open()
@@ -100,7 +101,7 @@ def test_non_existing_entity_receiver(connection_str):
100101
def test_receive_from_invalid_partitions_sync(connection_str):
101102
partitions = ["XYZ", "-1", "1000", "-" ]
102103
for p in partitions:
103-
client = EventHubClient.from_connection_string(connection_str, debug=True)
104+
client = EventHubClient.from_connection_string(connection_str, network_tracing=True)
104105
receiver = client.create_receiver(partition_id=p)
105106
try:
106107
with pytest.raises(EventHubError):
@@ -113,7 +114,7 @@ def test_receive_from_invalid_partitions_sync(connection_str):
113114
def test_send_to_invalid_partitions(connection_str):
114115
partitions = ["XYZ", "-1", "1000", "-" ]
115116
for p in partitions:
116-
client = EventHubClient.from_connection_string(connection_str, debug=False)
117+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
117118
sender = client.create_sender(partition_id=p)
118119
try:
119120
with pytest.raises(EventHubError):
@@ -126,10 +127,10 @@ def test_send_to_invalid_partitions(connection_str):
126127
def test_send_too_large_message(connection_str):
127128
if sys.platform.startswith('darwin'):
128129
pytest.skip("Skipping on OSX - open issue regarding message size")
129-
client = EventHubClient.from_connection_string(connection_str, debug=True)
130+
client = EventHubClient.from_connection_string(connection_str, network_tracing=True)
130131
sender = client.create_sender()
131132
try:
132-
data = EventData(b"A" * 300000)
133+
data = EventData(b"A" * 1100000)
133134
with pytest.raises(EventHubError):
134135
sender.send(data)
135136
finally:
@@ -138,8 +139,8 @@ def test_send_too_large_message(connection_str):
138139

139140
@pytest.mark.liveTest
140141
def test_send_null_body(connection_str):
141-
partitions = ["XYZ", "-1", "1000", "-" ]
142-
client = EventHubClient.from_connection_string(connection_str, debug=False)
142+
partitions = ["XYZ", "-1", "1000", "-"]
143+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
143144
sender = client.create_sender()
144145
try:
145146
with pytest.raises(ValueError):
@@ -152,7 +153,7 @@ def test_send_null_body(connection_str):
152153
@pytest.mark.liveTest
153154
def test_message_body_types(connstr_senders):
154155
connection_str, senders = connstr_senders
155-
client = EventHubClient.from_connection_string(connection_str, debug=False)
156+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
156157
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
157158
try:
158159
received = receiver.receive(timeout=5)

sdk/eventhub/azure-eventhubs/tests/test_receive.py

+11-12
Original file line numberDiff line numberDiff line change
@@ -14,7 +14,7 @@
1414

1515
# def test_receive_without_events(connstr_senders):
1616
# connection_str, senders = connstr_senders
17-
# client = EventHubClient.from_connection_string(connection_str, debug=True)
17+
# client = EventHubClient.from_connection_string(connection_str, network_tracing=True)
1818
# receiver = client.create_receiver("$default", "0", event_position=EventPosition('@latest'))
1919
# finish = datetime.datetime.now() + datetime.timedelta(seconds=240)
2020
# count = 0
@@ -36,7 +36,7 @@
3636
@pytest.mark.liveTest
3737
def test_receive_end_of_stream(connstr_senders):
3838
connection_str, senders = connstr_senders
39-
client = EventHubClient.from_connection_string(connection_str, debug=False)
39+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
4040
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
4141
with receiver:
4242
received = receiver.receive(timeout=5)
@@ -52,7 +52,7 @@ def test_receive_end_of_stream(connstr_senders):
5252
@pytest.mark.liveTest
5353
def test_receive_with_offset_sync(connstr_senders):
5454
connection_str, senders = connstr_senders
55-
client = EventHubClient.from_connection_string(connection_str, debug=False)
55+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
5656
partitions = client.get_properties()
5757
assert partitions["partition_ids"] == ["0", "1"]
5858
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
@@ -82,7 +82,7 @@ def test_receive_with_offset_sync(connstr_senders):
8282
@pytest.mark.liveTest
8383
def test_receive_with_inclusive_offset(connstr_senders):
8484
connection_str, senders = connstr_senders
85-
client = EventHubClient.from_connection_string(connection_str, debug=False)
85+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
8686
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
8787

8888
with receiver:
@@ -106,7 +106,7 @@ def test_receive_with_inclusive_offset(connstr_senders):
106106
@pytest.mark.liveTest
107107
def test_receive_with_datetime_sync(connstr_senders):
108108
connection_str, senders = connstr_senders
109-
client = EventHubClient.from_connection_string(connection_str, debug=False)
109+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
110110
partitions = client.get_properties()
111111
assert partitions["partition_ids"] == ["0", "1"]
112112
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
@@ -135,7 +135,7 @@ def test_receive_with_datetime_sync(connstr_senders):
135135
@pytest.mark.liveTest
136136
def test_receive_with_custom_datetime_sync(connstr_senders):
137137
connection_str, senders = connstr_senders
138-
client = EventHubClient.from_connection_string(connection_str, debug=False)
138+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
139139
for i in range(5):
140140
senders[0].send(EventData(b"Message before timestamp"))
141141
time.sleep(60)
@@ -161,9 +161,8 @@ def test_receive_with_custom_datetime_sync(connstr_senders):
161161

162162
@pytest.mark.liveTest
163163
def test_receive_with_sequence_no(connstr_senders):
164-
# TODO: liveTest fail when just one event data is sent
165164
connection_str, senders = connstr_senders
166-
client = EventHubClient.from_connection_string(connection_str, debug=False)
165+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
167166
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
168167

169168
with receiver:
@@ -187,7 +186,7 @@ def test_receive_with_sequence_no(connstr_senders):
187186
@pytest.mark.liveTest
188187
def test_receive_with_inclusive_sequence_no(connstr_senders):
189188
connection_str, senders = connstr_senders
190-
client = EventHubClient.from_connection_string(connection_str, debug=False)
189+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
191190
receiver = client.create_receiver(partition_id="0", event_position=EventPosition('@latest'))
192191
with receiver:
193192
received = receiver.receive(timeout=5)
@@ -205,7 +204,7 @@ def test_receive_with_inclusive_sequence_no(connstr_senders):
205204
@pytest.mark.liveTest
206205
def test_receive_batch(connstr_senders):
207206
connection_str, senders = connstr_senders
208-
client = EventHubClient.from_connection_string(connection_str, debug=False)
207+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
209208
receiver = client.create_receiver(partition_id="0", prefetch=500, event_position=EventPosition('@latest'))
210209
with receiver:
211210
received = receiver.receive(timeout=5)
@@ -234,7 +233,7 @@ def batched():
234233
ed.application_properties = batch_app_prop
235234
yield ed
236235

237-
client = EventHubClient.from_connection_string(connection_str, debug=False)
236+
client = EventHubClient.from_connection_string(connection_str, network_tracing=False)
238237
receiver = client.create_receiver(partition_id="0", prefetch=500, event_position=EventPosition('@latest'))
239238
with receiver:
240239
received = receiver.receive(timeout=5)
@@ -256,7 +255,7 @@ def batched():
256255
@pytest.mark.liveTest
257256
def test_receive_over_websocket_sync(connstr_senders):
258257
connection_str, senders = connstr_senders
259-
client = EventHubClient.from_connection_string(connection_str, transport_type=TransportType.AmqpOverWebsocket, debug=False)
258+
client = EventHubClient.from_connection_string(connection_str, transport_type=TransportType.AmqpOverWebsocket, network_tracing=False)
260259
receiver = client.create_receiver(partition_id="0", prefetch=500, event_position=EventPosition('@latest'))
261260

262261
event_list = []

sdk/eventhub/azure-eventhubs/tests/test_reconnect.py

+6-6
Original file line numberDiff line numberDiff line change
@@ -18,34 +18,34 @@
1818
@pytest.mark.liveTest
1919
def test_send_with_long_interval_sync(connstr_receivers):
2020
connection_str, receivers = connstr_receivers
21-
client = EventHubClient.from_connection_string(connection_str, debug=True)
21+
client = EventHubClient.from_connection_string(connection_str, network_tracing=True)
2222
sender = client.create_sender()
2323
with sender:
2424
sender.send(EventData(b"A single event"))
25-
for _ in range(2):
25+
for _ in range(1):
2626
time.sleep(300)
2727
sender.send(EventData(b"A single event"))
2828

2929
received = []
3030
for r in receivers:
3131
received.extend(r.receive(timeout=1))
3232

33-
assert len(received) == 3
33+
assert len(received) == 2
3434
assert list(received[0].body)[0] == b"A single event"
3535

3636

3737
@pytest.mark.liveTest
3838
def test_send_with_forced_conn_close_sync(connstr_receivers):
3939
connection_str, receivers = connstr_receivers
40-
client = EventHubClient.from_connection_string(connection_str, debug=True)
40+
client = EventHubClient.from_connection_string(connection_str, network_tracing=True)
4141
sender = client.create_sender()
4242
with sender:
4343
sender.send(EventData(b"A single event"))
44-
sender._handler._message_sender.destroy()
44+
sender._handler._connection._conn.destroy()
4545
time.sleep(300)
4646
sender.send(EventData(b"A single event"))
4747
sender.send(EventData(b"A single event"))
48-
sender._handler._message_sender.destroy()
48+
sender._handler._connection._conn.destroy()
4949
time.sleep(300)
5050
sender.send(EventData(b"A single event"))
5151
sender.send(EventData(b"A single event"))

0 commit comments

Comments
 (0)