[Bug]: Deep crawling exceeds the max_pages parameter and continues beyond the set limit.
#927
Labels
💪 - Intermediate (Difficulty level - Intermediate)
🐞 Bug (Something isn't working)
⚙️ In-progress (Issues and feature requests that are in progress)
crawl4ai version
0.5.0.post4
Expected Behavior
The crawler should stop after crawling 10 pages, as specified by max_pages=10.
len(results) should be at most 10.
Current Behavior
When using AsyncWebCrawler with BestFirstCrawlingStrategy and setting max_pages=10, the crawler unexpectedly crawls more pages than specified. In my case, it crawled 17 pages instead of stopping at 10.
Is this reproducible?
Yes
Inputs Causing the Bug
Steps to Reproduce
Code snippets
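A minimal sketch of the kind of setup that triggers this for me. The target URL and scorer keywords below are placeholders, and the import paths reflect my reading of the crawl4ai 0.5.x deep-crawling API, so adjust as needed:

```python
import asyncio

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.deep_crawling import BestFirstCrawlingStrategy
from crawl4ai.deep_crawling.scorers import KeywordRelevanceScorer

async def main():
    # Deep crawl with best-first ordering, capped at 10 pages.
    config = CrawlerRunConfig(
        deep_crawl_strategy=BestFirstCrawlingStrategy(
            max_depth=2,
            include_external=False,
            max_pages=10,  # expected hard limit on pages crawled
            url_scorer=KeywordRelevanceScorer(keywords=["docs", "api"]),  # placeholder keywords
        ),
        verbose=True,
    )

    async with AsyncWebCrawler() as crawler:
        # Non-streaming mode: arun() returns the full list of results.
        results = await crawler.arun("https://docs.crawl4ai.com", config=config)
        # Expected: at most 10 results; observed: 17.
        print(f"Crawled {len(results)} pages")

asyncio.run(main())
```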
OS
Linux
Python version
3.9.7
Browser
Chrome
Browser version
131.0.6778.139
Error logs & Screenshots (if applicable)