Most Python scraping setups rely on Requests and BeautifulSoup, and they work fine for static pages. The problem starts when you point them at a JavaScript-rendered site: the HTML you get back has none of the content you need because the JavaScript that builds the page never ran.
In this article, we will explore why standard Python scrapers fail on JavaScript sites, how to fix that with Selenium, and how it compares to Playwright so you can pick the right tool for your use case.
Why Requests and BeautifulSoup Fail on JavaScript Sites

When you fetch a page with the Requests library, you get back the raw HTML exactly as the server sends it, before any JavaScript runs. On a static site, that raw HTML already contains the data you need. On a JavaScript-rendered site, it does not. Requests never triggers the JavaScript responsible for loading the content; it has no browser to execute it.
A simple way to check if a site relies on JavaScript is to disable it in your browser and reload the page. If the content disappears, Requests will not work on it.
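To see the failure concretely, here is a minimal sketch of the usual Requests-plus-BeautifulSoup approach; the URL and class name are placeholders standing in for a JavaScript-rendered page:

import requests
from bs4 import BeautifulSoup

# Requests returns the HTML as the server sends it -- no JavaScript runs
response = requests.get("https://example.com/js-rendered-page")
soup = BeautifulSoup(response.text, "html.parser")

# On a JavaScript-rendered site this usually comes back empty,
# because the elements are created in the browser after the initial load
items = soup.find_all("div", class_="product-card")
print(len(items))  # often 0, even though the live page shows results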
Also Read: How to Do Web Scraping Without Getting Blocked
How to Scrape JavaScript Sites With Selenium

Selenium controls a real browser, which means the page fully loads, JavaScript executes, and you get the actual rendered content before extracting anything.
Install Selenium and its browser driver first:
pip install selenium
pip install webdriver-manager
Then the basic setup looks like this:
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service

# webdriver-manager downloads a matching ChromeDriver automatically
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://example.com")

# page_source returns the HTML after JavaScript has run
print(driver.page_source)
driver.quit()
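One optional tweak: if you are running this on a server or don't want a browser window opening, Chrome can run headless. A minimal variant of the setup above (--headless=new is the flag for recent Chrome versions):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Run Chrome without a visible window
options = Options()
options.add_argument("--headless=new")

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)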
Once the page loads, use find_element or find_elements to target specific tags, classes, or IDs. The one thing to watch out for is timing. Some content takes a moment to appear after the initial load because it depends on additional API calls finishing. Use Selenium's built-in wait conditions instead of a hardcoded sleep:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Poll for up to 10 seconds until the element is actually visible
wait = WebDriverWait(driver, 10)
element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "target-class")))
print(element.text)
That alone handles most JavaScript sites without any extra configuration.
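Putting the pieces together, a typical extraction loop waits for the first result to render and then iterates over every match with find_elements. The URL and class name below are placeholders for whatever site you are targeting:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Selenium 4.6+ can locate a driver on its own; otherwise reuse the setup above
driver = webdriver.Chrome()
driver.get("https://example.com/products")

# Wait until at least one result card has rendered
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "product-card")))

# find_elements returns every match, so we can loop over the rendered cards
for card in driver.find_elements(By.CLASS_NAME, "product-card"):
    print(card.text)

driver.quit()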
Selenium vs Playwright

Selenium has been around longer, which means more community resources and broader language support. The downside is a more verbose API and a slower setup compared to Playwright.
Playwright is a newer framework built for browser automation from the ground up. It handles waiting for elements more cleanly, ships its own browser binaries so there are no separate drivers to manage, and is generally faster. The async API also makes it a better fit for concurrent scraping tasks.
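For comparison, here is the same fetch-and-wait pattern in Playwright's synchronous Python API; the URL and selector are again placeholders. (After pip install playwright, run playwright install once to download the browser binaries.)

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/products")

    # wait_for_selector blocks until the element has rendered -- no separate wait object needed
    page.wait_for_selector(".product-card")
    print(page.inner_text(".product-card"))

    browser.close()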
For straightforward scraping jobs, Selenium gets it done. If you are scraping at scale or need to pair your scraper with rotating residential proxies, Playwright is the better choice.
Also Read: How to Scrape JavaScript-Heavy Sites With Playwright
Final Thoughts
Requests and BeautifulSoup stop working the moment JavaScript is involved. Selenium fixes that by running a real browser and giving you the fully rendered page. For anything more demanding, Playwright picks up where Selenium leaves off.