Scraping e-commerce websites and product pages is a common practice driven by several key needs. It enables businesses to conduct market research, monitor prices, enrich product catalogs, generate leads, and aggregate content. In this tutorial, you’ll learn how to scrape e-commerce websites using Python and Oxylabs’ E-Commerce Scraper API. The API will help you bypass anti-bot protection and CAPTCHAs without writing a complex script. Let’s get started.
First, you’ll need to install Python if you haven’t already. You can download it from the official Python website.
Next, install a couple of libraries so that you can interact with the E-Commerce Scraper API and parse the HTML content. Run the following command:
pip install beautifulsoup4 requests
This will install the Beautiful Soup and Requests libraries for you.
Now, you can import these libraries using the following code:
from bs4 import BeautifulSoup
import requests
Next, log in to your Oxylabs account to retrieve your API credentials. If you don’t have an account yet, you can sign up for free and go to the dashboard, where you’ll find the credentials for the API.
Once you've retrieved your credentials, you can add them to your code.
username, password = 'USERNAME', 'PASSWORD'
Don’t forget to replace USERNAME and PASSWORD with your username and password.
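Hardcoding credentials works for a quick test, but for any script you commit or share, reading them from environment variables is safer. Here’s a minimal sketch; the variable names `OXYLABS_USERNAME` and `OXYLABS_PASSWORD` are just illustrative:

```python
import os

# Read API credentials from environment variables instead of hardcoding them.
# The variable names are illustrative; use whatever names your setup defines.
username = os.environ.get("OXYLABS_USERNAME", "USERNAME")
password = os.environ.get("OXYLABS_PASSWORD", "PASSWORD")
```

If the variables aren’t set, the placeholders are used as a fallback, so the rest of the tutorial still runs unchanged.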
Oxylabs’ E-Commerce Scraper API expects a JSON payload in a POST request, so you’ll have to prepare the payload before sending it. The `source` parameter must be set to `universal_ecommerce`.
url = "https://sandbox.oxylabs.io/products"
payload = {
    'source': 'universal_ecommerce',
    'render': 'html',
    'url': url,
}
The `'render': 'html'` parameter tells the API to execute JavaScript when loading the website content.
Note: For demonstration, we’ll use the sandbox.oxylabs.io store page.
Let’s send the payload using the requests module’s `post()` method. You can pass the credentials using the `auth` parameter.
response = requests.post(
    'https://realtime.oxylabs.io/v1/queries',
    auth=(username, password),
    json=payload,
)
print(response.status_code)
Since the `payload` needs to be sent as JSON, you can use the `json` parameter of the `post()` method. If you run this code now, the `status_code` output should be `200`. Any other value indicates an error; if you get one, double-check your credentials, payload, and URL to make sure they are all correct.
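If you’d rather fail fast than parse an error body, you can wrap the request in a small helper that raises on any non-2xx status via the standard `raise_for_status()` method of Requests. This is just a sketch; the endpoint is the same as above, and the 60-second timeout is an assumed default:

```python
import requests

def fetch_results(payload, auth,
                  endpoint="https://realtime.oxylabs.io/v1/queries"):
    """POST the payload and raise requests.HTTPError on any 4xx/5xx status."""
    response = requests.post(endpoint, auth=auth, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()
```

You would then call `fetch_results(payload, (username, password))` and work with the returned dictionary directly.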
You can extract the HTML content from the `JSON` response of the API and create a Beautiful Soup object named `soup`.
content = response.json()["results"][0]["content"]
soup = BeautifulSoup(content, "html.parser")
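The chained indexing above assumes the response always contains at least one result. A small helper (not part of the original tutorial) makes the failure mode explicit; the envelope shape `{"results": [{"content": ...}]}` matches the example above:

```python
def extract_content(api_json):
    """Return the rendered HTML from the API's JSON envelope.

    Raises ValueError if the response contains no results.
    """
    results = api_json.get("results", [])
    if not results:
        raise ValueError("API response contained no results")
    return results[0]["content"]
```

This way an empty or malformed response produces a clear error instead of an `IndexError` deep inside the parsing code.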
Using your web browser’s developer tools, you can inspect the website’s elements to find the necessary CSS selectors. To open the developer tools, browse to the target website, right-click, and select Inspect. Once you’ve gathered the CSS selectors, you can use the `soup` object to extract those elements. Let’s parse the title, price, and availability of all products.
If you inspect the title, you’ll notice it’s inside an `<h4>` tag with the class `title`.
So, you can use the `soup` object to extract the title as below:
title = soup.find('h4', {"class": "title"}).get_text(strip=True)
Similarly, inspect the price element.
As you can see, it’s wrapped in a `<div>` with the class `price-wrapper`. So, use the `find()` method, as shown below, to extract the price text.
price = soup.find('div', {"class": "price-wrapper"}).get_text(strip=True)
There are two availability states on this website: In Stock and Out of Stock. If you inspect both elements, you’ll notice they have different classes.
Fortunately, Beautiful Soup’s `find()` method supports matching against multiple classes. You’ll have to pass the classes in a list object.
availability = soup.find('p', {"class": ["in-stock", "out-of-stock"]}).get_text(strip=True)
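Note that `find()` returns `None` when nothing matches, so chaining `.get_text()` directly raises an `AttributeError` on any product that lacks the element. A small None-safe helper (a sketch, not part of the original tutorial) avoids that:

```python
from bs4 import BeautifulSoup

def safe_text(parent, tag, classes):
    """Return the stripped text of the first match, or None if absent."""
    node = parent.find(tag, {"class": classes})
    return node.get_text(strip=True) if node else None

# Quick check against a tiny snippet of markup.
snippet = BeautifulSoup('<p class="in-stock">In Stock</p>', "html.parser")
print(safe_text(snippet, "p", ["in-stock", "out-of-stock"]))  # In Stock
print(safe_text(snippet, "div", "price-wrapper"))             # None
```

Swapping the direct `find(...).get_text(...)` calls for this helper keeps the scraper running even when a product card is missing a field.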
To extract all product data, you’ll have to inspect the product elements and find the appropriate CSS selectors.
Since each product element is wrapped in a `<div>` with the class `product-card`, you can loop through them with a `for` loop.
data = []

for elem in soup.find_all("div", {"class": "product-card"}):
    title = elem.find('h4', {"class": "title"}).get_text(strip=True)
    price = elem.find('div', {"class": "price-wrapper"}).get_text(strip=True)
    availability = elem.find('p', {"class": ["in-stock", "out-of-stock"]}).get_text(strip=True)
    data.append({
        "title": title,
        "price": price,
        "availability": availability,
    })

print(data)
The data list will contain all the product data.
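Once you have the list of dictionaries, persisting it is straightforward with the standard library’s `csv` module. Here’s a short sketch with a stand-in `data` list; the file name `products.csv` is arbitrary:

```python
import csv

# Stand-in for the list built in the loop above.
data = [
    {"title": "Sample product", "price": "$9.99", "availability": "In Stock"},
]

# Write one row per product, with a header row matching the dict keys.
with open("products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "price", "availability"])
    writer.writeheader()
    writer.writerows(data)
```

`DictWriter` maps each dictionary’s keys onto the declared columns, so the same snippet works unchanged on the real scraped list.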
The entire scraper is given below for your convenience. You can use it as a building block for your next scraper. You’ll only have to replace the URL and parsing logic with your own.
from bs4 import BeautifulSoup
import requests

username, password = 'USERNAME', 'PASSWORD'
url = "https://sandbox.oxylabs.io/products"

payload = {
    'source': 'universal_ecommerce',
    'render': 'html',
    'url': url,
}

response = requests.post(
    'https://realtime.oxylabs.io/v1/queries',
    auth=(username, password),
    json=payload,
)
print(response.status_code)

content = response.json()["results"][0]["content"]
soup = BeautifulSoup(content, "html.parser")

data = []

for elem in soup.find_all("div", {"class": "product-card"}):
    title = elem.find('h4', {"class": "title"}).get_text(strip=True)
    price = elem.find('div', {"class": "price-wrapper"}).get_text(strip=True)
    availability = elem.find('p', {"class": ["in-stock", "out-of-stock"]}).get_text(strip=True)
    data.append({
        "title": title,
        "price": price,
        "availability": availability,
    })

print(data)
Here’s the output:
So far, you’ve learned how to scrape e-commerce stores using Python. You also explored Oxylabs’ E-Commerce Scraper API and learned how to use it for scraping complex websites with ease. By using the techniques described in this article, you can perform large-scale web scraping on websites with bot protection and CAPTCHA.
For e-commerce and product details data scraping, you’ll first need to pick a programming language you are most comfortable with. Python, Go, JavaScript, Ruby, and Elixir are popular programming languages with excellent support for large-scale e-commerce data scraping. After that, you’ll have to find the necessary tools and libraries available to help you extract data from the target website. You can learn the web scraping best practices here.
Web scraping is ethical as long as the scrapers respect all the rules set by the target websites, don’t harm the website, don’t breach any laws, and use the scraped data with good intentions. It’s essential to respect the ToS of the website and obey the rules of the robots.txt file. Read this article to learn more about ethical web scraping.
About the author
Maryia Stsiopkina
Senior Content Manager
Maryia Stsiopkina is a Senior Content Manager at Oxylabs. As her passion for writing was developing, she was writing either creepy detective stories or fairy tales at different points in time. Eventually, she found herself in the tech wonderland with numerous hidden corners to explore. At leisure, she does birdwatching with binoculars (some people mistake it for stalking), makes flower jewelry, and eats pickles.
All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.