Scraping Google search results is incredibly useful for SEO analysis and competitor tracking. However, Google actively blocks bots by detecting automated queries and triggering CAPTCHA challenges.
If you’ve tried scraping Google with Python requests or BeautifulSoup, you’ve probably encountered:
- CAPTCHA pages instead of search results.
- 403 Forbidden errors.
- Repeated empty results.
Google enforces strict anti-bot measures to prevent abuse: it detects headless browsers, unusual activity patterns, a lack of human interaction, and repeated searches from the same IP. To avoid detection, we need to mimic real user behaviour in Selenium, as in the sketch below.
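As a starting point, here is a minimal sketch of Chrome options that make Selenium harder to fingerprint as a bot. It assumes Chrome with webdriver-manager (installed in Step 2 below); the user-agent string is just an example of a realistic one.

```python
# Minimal sketch: launch Chrome via Selenium with options that
# make the browser look less like an automated client.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = webdriver.ChromeOptions()
# Present a realistic user agent (example string; substitute your own).
options.add_argument(
    "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
)
# Remove the navigator.webdriver flag and the automation infobar.
options.add_argument("--disable-blink-features=AutomationControlled")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option("useAutomationExtension", False)

driver = webdriver.Chrome(
    service=Service(ChromeDriverManager().install()), options=options
)
```

Avoiding headless mode altogether also helps, since headless Chrome is easier for Google to detect than a visible browser window.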
How to Scrape Google Search Results: A Step-by-Step Guide
What This Script Does:
- Reads keywords from an Excel file
- Scrapes Google UK search results
- Ensures unique URLs are collected
- Saves results in a Keyword → URL row format
This guide will teach you how to scrape Google search results using Selenium, bypass CAPTCHA, and save the top 10 URLs per keyword into an Excel file named serp_results.xlsx.
Step 1: Install Python
Make sure you have Python 3.8 or higher installed. You can download it from python.org.
Step 2: Install Required Python Dependencies
```bash
pip install selenium pandas openpyxl webdriver-manager pyautogui
```
Step 3: Prepare keywords.xlsx
Create keywords.xlsx with a single Keywords column:
| Keywords |
| --- |
| best running shoes |
| latest iPhone price |
| SEO tips |
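For reference, reading the keyword column with pandas looks like this (a minimal sketch, assuming the header is Keywords as above):

```python
import pandas as pd

# Load the keyword list from keywords.xlsx (openpyxl handles .xlsx files).
df = pd.read_excel("keywords.xlsx")
keywords = df["Keywords"].dropna().tolist()
print(keywords)  # ['best running shoes', 'latest iPhone price', 'SEO tips']
```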
Step 4: Download and Run the Python Script
If you’d like to get started quickly, you can download the complete Python script here. Click the link, save the file as scrape.py, and make sure the dependencies from Step 2 are installed.
To run the script, execute the command below from a command prompt:
```bash
python scrape.py
```
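If you would rather see the core logic before downloading, here is a condensed sketch of what the script does. The div.yuRUbf selector is an assumption about Google’s current organic-result markup and may need adjusting when Google changes its layout; you may also need to click through a cookie-consent dialog on the first run.

```python
import random
import time

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# In practice, configure `driver` with the anti-detection options
# from the sketch at the top of this guide.
driver = webdriver.Chrome()

rows = []
for keyword in pd.read_excel("keywords.xlsx")["Keywords"].dropna():
    driver.get("https://www.google.co.uk")        # Google UK results
    search_box = driver.find_element(By.NAME, "q")
    search_box.send_keys(keyword)
    search_box.send_keys(Keys.RETURN)
    time.sleep(random.uniform(3, 6))              # human-like pause

    seen = set()
    # Organic results currently sit in div.yuRUbf containers (assumption).
    for link in driver.find_elements(By.CSS_SELECTOR, "div.yuRUbf a"):
        url = link.get_attribute("href")
        if url and url not in seen:
            seen.add(url)
            rows.append({"Keyword": keyword, "URL": url})
        if len(seen) == 10:                       # top 10 per keyword
            break

driver.quit()
pd.DataFrame(rows).to_excel("serp_results.xlsx", index=False)
```

The randomised pause between searches matters as much as the browser options: firing queries back-to-back at machine speed is one of the clearest signals Google uses to flag automation.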

Step 5: Open the Output File
The output is saved to serp_results.xlsx. Each keyword appears once, with its URLs listed in the rows beneath it.
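The downloaded script produces this layout for you; if you are adapting the sketch from Step 4 instead, one way to blank out repeated keyword values with pandas is:

```python
import pandas as pd

df = pd.read_excel("serp_results.xlsx")
# Keep each keyword only on its first row; blank it on the rows beneath,
# so the sheet reads Keyword -> URL, URL, URL ...
df["Keyword"] = df["Keyword"].where(df["Keyword"] != df["Keyword"].shift(), "")
df.to_excel("serp_results.xlsx", index=False)
```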

To perform a competitor analysis, you can take the output sheet and crawl the URLs with Screaming Frog. Crawling the URLs lets you extract valuable metadata such as page titles, meta descriptions, and H1 tags, so you can compare your competitors’ optimisation strategies against how your own URLs rank for the same keyword set, providing actionable insights to refine and optimise your SEO approach.
Let me know if you have any questions or suggestions for improving this guide! Remember to always respect Google’s terms of service, use scraping responsibly for small-scale tasks or research, and consider APIs like SerpAPI for large-scale scraping of SERPs.