SERP Scraper API lets you extract public data from leading search engines – Google, Bing, Baidu, and Yandex – in real time and at scale.
You can use the API to access different types of SERPs, such as regular or image search, and extract parsed data in a convenient format. SERP Scraper API is part of maintenance-free Oxylabs infrastructure, allowing you to focus on data and leave technicalities to us.
Follow this guide for a quick setup and learn how to send your first search query.
1. Register, or if you already have an account, log in to the dashboard.
2. After selecting a free trial or subscription plan, a pop-up window will appear. Choose a username and password and create an API user.
3. In the dashboard, you’ll see a test query to scrape Google Search for the term "adidas". It includes parameters that connect through a New York geo-location and deliver parsed results ("parse": true). Copy the provided code to your terminal, insert your API user credentials, and run the query.
A test query from the dashboard
Here’s the query in code:
curl 'https://realtime.oxylabs.io/v1/queries' \
--user 'USERNAME:PASSWORD' \
-H 'Content-Type: application/json' \
-d '{"source": "google_search", "domain": "com", "query": "adidas", "geo_location": "New York,New York,United States", "parse": true}'
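If you prefer Python, the same Realtime request can be sketched with the third-party `requests` library. The network call is commented out because it needs valid credentials (USERNAME and PASSWORD are placeholders):

```python
import json

# Same payload as the curl example; note that "parse" is a JSON boolean.
payload = {
    "source": "google_search",
    "domain": "com",
    "query": "adidas",
    "geo_location": "New York,New York,United States",
    "parse": True,
}

# Uncomment to send (requires `pip install requests` and real credentials):
# import requests
# response = requests.post(
#     "https://realtime.oxylabs.io/v1/queries",
#     auth=("USERNAME", "PASSWORD"),
#     json=payload,
#     timeout=180,
# )
# print(response.json())

print(json.dumps(payload, indent=2))
```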
The following is an output example of this query. You can find the complete code here.
{
    "results": [
        {
            "content": {
                "url": "https://www.google.com/search?q=adidas&filter=1&safe=off&uule=w+CAIQICIWTmV3IFlvcmssVW5pdGVkIFN0YXRlcw&gl=us&hl=en",
                "page": 1,
                "results": {
                    "pla": {
                        "items": [
                            {
                                "pos": 1,
                                "url": "https://www.adidas.com/us/samba-og-shoes/ID2055.html?dfw_tracker=24819-ID2055-0016",
                                "price": "$100.00",
                                "title": "Samba OG Shoes Core White M 11.5 / W 12.5 - Mens Originals Shoes",
                                "seller": "adidas",
                                "url_image": "https://encrypted-tbn1.gstatic.com/shopping?q=tbn:ANd9GcRgP38gzea9q1Mt9PqvPozcgXzK6PFBEJ0MV5PkN501OFG6kxTtK5RXoJVCMHzO_vo6GdpipGPb77he4wDd-pljZTLq17RPmFm3PG9YzeO1tmVzFR2GkwHv2g&usqp=CAc",
                                "image_data": "UklGRvINAABXRUJQVlA4IOY..."
                            },
                            ...
                        ]
                    },
                    ...
                },
                "last_visible_page": -1,
                "parse_status_code": 12000
            },
            "created_at": "2023-09-14 06:27:53",
            "updated_at": "2023-09-14 06:28:12",
            "page": 1,
            "url": "https://www.google.com/search?q=adidas&filter=1&safe=off&uule=w+CAIQICIWTmV3IFlvcmssVW5pdGVkIFN0YXRlcw&gl=us&hl=en",
            "job_id": "7107973212826243073",
            "status_code": 200,
            "parser_type": ""
        }
    ]
}
For a visual representation of how to set up and manually test SERP Scraper API, check the video below.
You can also check how SERP Scraper API works in our Scraper APIs Playground, accessible via the dashboard.
The example above showcases the Realtime integration method. With Realtime, you can send your request and receive data back on the same open HTTPS connection straight away.
You can integrate SERP Scraper API using one of three methods:
Realtime
Push-Pull
Proxy Endpoint
Read more about integration methods and how to choose one here. In essence, here are the main differences.
| | Push-Pull | Realtime | Proxy Endpoint |
|---|---|---|---|
| Type | Asynchronous | Synchronous | Synchronous |
| Job query format | JSON | JSON | URL |
| Job status check | Yes | No | No |
| Batch query | Yes | No | No |
| Upload to storage | Yes | No | No |
For full examples of Push-Pull and Proxy Endpoint integration methods, please see our GitHub or documentation.
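As a rough sketch of the asynchronous Push-Pull flow (the endpoint and response field names are assumptions based on the documentation, not verified here): submit a job, poll its status, then fetch the results once the job is done.

```python
import json

SUBMIT_URL = "https://data.oxylabs.io/v1/queries"  # assumed Push-Pull endpoint

job_payload = {"source": "google_search", "query": "adidas", "parse": True}

# Sketch of the flow (requires `requests` and real credentials):
# import requests, time
# auth = ("USERNAME", "PASSWORD")
# job = requests.post(SUBMIT_URL, auth=auth, json=job_payload).json()
# while True:
#     status = requests.get(f"{SUBMIT_URL}/{job['id']}", auth=auth).json()["status"]
#     if status in ("done", "faulted"):
#         break
#     time.sleep(5)  # poll every few seconds until the job finishes
# results = requests.get(f"{SUBMIT_URL}/{job['id']}/results", auth=auth).json()

print(json.dumps(job_payload))
```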
To collect data from a specific search engine, adjust your query and set a dedicated scraper using the source parameters listed below.
| Search engine | Sources |
|---|---|
| Google | `google`, `google_search`, `google_ads`, `google_hotels`, `google_travel_hotels`, `google_images`, `google_suggest` |
| Yandex | `yandex`, `yandex_search` |
| Bing | `bing`, `bing_search` |
| Baidu | `baidu`, `baidu_search` |
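To illustrate switching between engines, here is a small hypothetical helper (not part of the API) that picks the generic `*_search` source per engine, following the table above:

```python
# Generic search sources per engine, from the table above.
SOURCES = {
    "google": "google_search",
    "yandex": "yandex_search",
    "bing": "bing_search",
    "baidu": "baidu_search",
}

def build_query(engine: str, term: str) -> dict:
    """Return a minimal payload targeting the given search engine."""
    return {"source": SOURCES[engine], "query": term, "parse": True}

print(build_query("bing", "adidas"))
```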
In our documentation, you can find additional parameters, such as handling specific context types and detailed explanations for individual targets.
| Parameter | Description |
|---|---|
| `source` | Sets the scraper that processes your request. |
| `url` or `query` | Direct URL (link) or keyword, depending on the source. |
| `user_agent_type` | Device type and browser. Default value: `desktop`. |
| `domain` | Domain localization for Google. |
| `geo_location` | Geo-location of the proxy used to retrieve the data. |
| `locale` | The `Accept-Language` header value; changes the language of the Google search page interface. |
| `render` | Enables JavaScript rendering when the target requires JavaScript to load content. Only works via the Push-Pull method. |
| `parse` | When `true`, returns parsed data from sources that support this parameter. |
| `start_page` | Starting page number. Default value: `1`. |
| `pages` | Number of pages to retrieve. Default value: `1`. |
| `limit` | Number of results to retrieve from each page. The API supports continuous scroll. |
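Putting several of these parameters together, a fuller payload might look like the sketch below; the values are purely illustrative:

```python
# Illustrative payload combining the parameters from the table above.
payload = {
    "source": "google_search",
    "query": "adidas",
    "domain": "com",
    "geo_location": "New York,New York,United States",
    "user_agent_type": "desktop",
    "locale": "en-us",
    "start_page": 1,
    "pages": 3,    # fetch three result pages
    "limit": 10,   # ten results per page
    "parse": True,
}
print(payload)
```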
Below are the most common response codes you can encounter using SERP Scraper API. Please contact technical support if you receive a code not found in our documentation.
| Response code | Error message | Description |
|---|---|---|
| 200 | OK | All went well. |
| 202 | Accepted | Your request was accepted. |
| 204 | No content | You are trying to retrieve a job that has not been completed yet. |
| 400 | Multiple error messages | Wrong request structure. Could be a misspelled parameter or an invalid value. The response body will have a more specific error message. |
| 401 | Authorization header not provided / Invalid authorization header / Client not found | Missing authorization header or incorrect login credentials. |
| 403 | Forbidden | Your account does not have access to this resource. |
| 404 | Not found | The job ID you are looking for is no longer available. |
| 422 | Unprocessable entity | There is something wrong with the payload. Make sure it's a valid JSON object. |
| 429 | Too many requests | Exceeded rate limit. Please contact your account manager to increase limits. |
| 500 | Internal server error | We're facing technical issues, please retry later. We may already be aware, but feel free to report it anyway. |
| 524 | Timeout | Service unavailable. |
| 612 | Undefined internal error | Job submission failed. Retry at no extra cost with faulted jobs, or reach out to us for assistance. |
| 613 | Faulted after too many retries | Job submission failed. Retry at no extra cost with faulted jobs, or reach out to us for assistance. |
Scheduler automates recurring web scraping and parsing jobs. You can run them at any interval – every minute, every five minutes, hourly, daily, every two days, and so on. With Scheduler, you don’t need to resubmit requests with the same parameters. Read more for tech details.
Dedicated Parser parses Google data automatically, while Custom Parser allows you to tailor the code to any target. With Custom Parser, you can parse data with the help of XPath and CSS expressions by taking the necessary information from the HTML and converting it into a readable format. Read more for tech details.
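As a generic illustration of the idea behind XPath-based parsing (using Python's standard library, not Custom Parser's own syntax), an XPath expression pulls a value out of markup like so:

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed snippet standing in for scraped HTML.
html = (
    "<div>"
    "<span class='title'>Samba OG Shoes</span>"
    "<span class='price'>$100.00</span>"
    "</div>"
)

root = ET.fromstring(html)
# ElementTree supports a limited XPath subset, including attribute predicates.
price = root.find(".//span[@class='price']").text
print(price)  # → $100.00
```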
Cloud integration allows you to get your data delivered to a preferred cloud storage bucket, whether it's AWS S3 or GCS. This eliminates the need for additional requests to fetch results – data goes directly to your cloud storage. Read more for tech details.
Headless Browser enables you to interact with a web page, imitate organic user behavior, and efficiently render JavaScript. You don't need to develop and maintain your own headless browser solution, so you can save time and resources on more critical tasks. Read more for tech details.
In the Oxylabs dashboard, you can follow your usage. Within the Statistics section, you’ll find a graph with scraped pages and a table with your API user's data. It includes average response time, daily request counts, and total requests. Additionally, you can filter the statistics to see your usage during specified intervals.
You can try SERP Scraper API for free for a week with 5K results. If you have any questions, please contact us via the live chat or email us at support@oxylabs.io.
For more tutorials and tips on all things web data extraction, stay engaged with our blog.
Every user account has a rate limit corresponding to its monthly subscription plan, sized to comfortably cover the volume of scraping jobs the plan is expected to handle.
You can download images either by saving the output to a file with an image extension when using the Proxy Endpoint integration method, or by passing the content_encoding parameter when using the Push-Pull or Realtime integration methods.
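For instance, if a job returns image content base64-encoded (as the content_encoding parameter implies), decoding it back to bytes in Python is straightforward. The encoded string below is a stand-in, not real API output:

```python
import base64

# Stand-in for the base64 string an encoded response would carry.
fake_image_bytes = b"\x89PNG\r\n\x1a\n...not a real image..."
encoded = base64.b64encode(fake_image_bytes).decode("ascii")

# Decode and (optionally) write to disk.
decoded = base64.b64decode(encoded)
# with open("result.png", "wb") as f:
#     f.write(decoded)
print(len(decoded))
```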
Yes, you can use SERP Scraper API free of charge for 1 week. Here’s what the free trial offers:
5000 results
5 requests/s rate limit
Access to all available scraping targets
You can choose a plan suited for small businesses or large enterprises, starting from $49/month.
Billing depends on the number of successful results. Failed attempts with an error from our side won’t affect your bills.
About the author
Maryia Stsiopkina
Senior Content Manager
Maryia Stsiopkina is a Senior Content Manager at Oxylabs. As her passion for writing was developing, she was writing either creepy detective stories or fairy tales at different points in time. Eventually, she found herself in the tech wonderland with numerous hidden corners to explore. At leisure, she does birdwatching with binoculars (some people mistake it for stalking), makes flower jewelry, and eats pickles.
All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.