Automated web scraping and crawling are crucial for gathering and analyzing data from websites at large scale. However, anti-bot technologies like CAPTCHA have made automated web access more challenging.
Many websites load CAPTCHAs or block screens as a security measure. If your automated scraper appears human to the target website, the site will likely not load a CAPTCHA or block screen at all. That way, your scraper can bypass CAPTCHA and reCAPTCHA challenges and carry on with its scraping activities.
But how can a scraper appear human to the websites? Let’s find out.
To access content on protected websites, you need to prevent the CAPTCHA from loading in the first place. Puppeteer can help here. It's a Node.js library that provides a high-level API for controlling Chrome and Chromium via the DevTools Protocol. You can configure Puppeteer to run a full Chrome/Chromium window instead of the default headless mode.
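For instance, here is a minimal sketch of launching a visible browser window; headless: false is a standard Puppeteer launch option, and the URL is just a placeholder:
const puppeteer = require('puppeteer');
(async () => {
  // Launch a visible browser window instead of the default headless mode
  const browserObj = await puppeteer.launch({ headless: false });
  const newpage = await browserObj.newPage();
  await newpage.goto('https://example.com');
  await browserObj.close();
})();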
What happens when you try automated access to a CAPTCHA-protected website using Puppeteer alone? The target website detects that the access is automated and shows you a block screen or a CAPTCHA test.
Let’s validate it using the following steps:
1. You must have Node.js installed on your system. Create a new Node.js project and install Puppeteer using the following npm command:
npm i puppeteer
2. Import the Puppeteer library in your Node.js file.
const puppeteer = require('puppeteer');
3. Create a new browser instance in headless mode and a new page using the following code:
const browserObj = await puppeteer.launch();
// Create a new page
const newpage = await browserObj.newPage();
4. Since we want the screenshot to reflect a desktop device, we can set the viewport size using the following code:
// Set the width and height of viewport
await newpage.setViewport({ width: 1280, height: 720 });
The setViewport() method sets the size of the page's viewport. You can change it according to your device requirements.
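For instance, to emulate a phone-sized screen instead, setViewport() also accepts mobile-related options; here's a sketch with illustrative values:
// Emulate a mobile-sized viewport (the exact values are illustrative)
await newpage.setViewport({
  width: 390,
  height: 844,
  deviceScaleFactor: 3,
  isMobile: true,
});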
5. After that, navigate to a page URL (one you suspect is CAPTCHA-protected) and take a screenshot. Remember to close the browser object at the end.
const url = 'https://yourexampletarget.com';
// Open the required URL in the newpage object
await newpage.goto(url);
// Capture screenshot
await newpage.screenshot({
  path: 'screenshot.png',
});
// Close the browser object
await browserObj.close();
This is what our complete code looks like:
const puppeteer = require('puppeteer');
(async () => {
  // Create a browser instance
  const browserObj = await puppeteer.launch();
  // Create a new page
  const newpage = await browserObj.newPage();
  // Set the width and height of the viewport
  await newpage.setViewport({ width: 1280, height: 720 });
  const url = 'https://yourexampletarget.com';
  // Open the required URL in the newpage object
  await newpage.goto(url);
  // Capture screenshot
  await newpage.screenshot({
    path: 'screenshot.png',
  });
  // Close the browser object
  await browserObj.close();
})();
If you see a block screen or a CAPTCHA, the website has detected the traffic coming from a programmatically controlled browser and has blocked access.
You can enhance Puppeteer's capabilities by installing the Stealth plugin alongside it. The Stealth plugin offers a range of evasions that tackle most of the methods protected websites use to detect automated access.
Stealth can make Puppeteer's automated headless sessions appear so “human” that many websites won't be able to tell the difference. As a result, the CAPTCHA never loads, and your automated Puppeteer script can access the content that would otherwise sit behind it.
Note: All the bypassing methods showcased in this tutorial are intended for educational purposes only.
Here is the step-by-step procedure to implement this CAPTCHA bypass:
1. To start, install the puppeteer-extra and puppeteer-extra-plugin-stealth packages:
npm install puppeteer-extra-plugin-stealth puppeteer-extra
2. After that, import the following required libraries in your Node.js file:
const puppeteerExtra = require('puppeteer-extra');
const Stealth = require('puppeteer-extra-plugin-stealth');
puppeteerExtra.use(Stealth());
3. The next step is to create the browser object in headless mode, navigate to the URL and take a screenshot.
(async () => {
  const browserObj = await puppeteerExtra.launch();
  const newpage = await browserObj.newPage();
  await newpage.setViewport({ width: 1280, height: 720 });
  await newpage.setUserAgent(
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36');
  await newpage.goto('https://yourexampletarget.com');
  // Wait for 20 seconds (waitForTimeout was removed in Puppeteer v22+;
  // use new Promise(r => setTimeout(r, 20000)) on newer versions)
  await newpage.waitForTimeout(20000);
  await newpage.screenshot({ path: 'screenshot.png' });
  await browserObj.close();
})();
The setUserAgent() method sets a real browser's User-Agent string on our requests, making the automated headless browser look more like a regular user. Setting one of the common User-Agent strings helps evade detection and bypass anti-bot mechanisms that analyze the User-Agent header.
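If you run the script repeatedly, you could also rotate the User-Agent across runs by picking from a small pool of common desktop strings. Here's a sketch, with illustrative entries:
// A small illustrative pool of common desktop User-Agent strings
const userAgents = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
];
// Pick one at random for this session
await newpage.setUserAgent(userAgents[Math.floor(Math.random() * userAgents.length)]);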
Here is what our complete script looks like:
const puppeteerExtra = require('puppeteer-extra');
const Stealth = require('puppeteer-extra-plugin-stealth');
puppeteerExtra.use(Stealth());
(async () => {
  const browserObj = await puppeteerExtra.launch();
  const newpage = await browserObj.newPage();
  await newpage.setViewport({ width: 1280, height: 720 });
  await newpage.setUserAgent(
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36');
  await newpage.goto('https://yourexampletarget.com');
  // Wait for 20 seconds (see the note above about newer Puppeteer versions)
  await newpage.waitForTimeout(20000);
  await newpage.screenshot({ path: 'screenshot.png' });
  await browserObj.close();
})();
Now, if your screenshot shows the actual content of the website, congratulations: your scraper prevented the CAPTCHA from loading. Unfortunately, Stealth can still fail on websites that use more sophisticated anti-bot systems.
Luckily, there is a simpler, scalable, and more robust alternative: Web Unblocker and its built-in Headless Browser feature.
Web Unblocker uses AI to help users prevent CAPTCHAs from loading and gain access to public data on websites with advanced anti-bot systems. It supports proxy management, automatic browser fingerprint generation, automatic retries, session maintenance, and JavaScript rendering to control various scraping processes. Also, check this integration tutorial to learn how to use a proxy in Puppeteer.
To begin, you can send a basic query without any special options. Web Unblocker will select the fastest proxy, add all the necessary headers, and return the response body.
Using Web Unblocker with Node.js is easy. Just follow these steps:
1. Install the node-fetch and https-proxy-agent packages using the following command:
npm install node-fetch https-proxy-agent
2. Sign up with Oxylabs and get your credentials for using the API.
3. Import the required modules in your JS file like this:
const fetch = require('node-fetch'); // use node-fetch v2; v3 is ESM-only and can't be require()'d
const { HttpsProxyAgent } = require('https-proxy-agent');
const fs = require('fs');
The fs module will be used to save the response to an HTML file.
4. Provide your user credentials and set up a proxy using HttpsProxyAgent.
const username = '<Your-username>';
const password = '<Your-password>';
(async () => {
  const agent = new HttpsProxyAgent(
    `http://${username}:${password}@unblock.oxylabs.io:60000`
  );
5. Next, set the URL and issue a fetch request.
  process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
  const response = await fetch('https://example.com/captcha', {
    method: 'get',
    agent: agent,
  });
The NODE_TLS_REJECT_UNAUTHORIZED environment variable is set to zero so that Node.js doesn't verify SSL/TLS certificates. This matters here because the request is routed through the proxy, which may present a certificate that Node.js would otherwise reject.
6. Finally, convert the response to text and save it in an HTML file.
  const resp = await response.text();
  fs.writeFile('result.html', resp, (err) => {
    if (err) throw err;
    console.log('Result saved to result.html');
  });
})();
Here is the complete script:
const fetch = require('node-fetch'); // use node-fetch v2; v3 is ESM-only and can't be require()'d
const { HttpsProxyAgent } = require('https-proxy-agent');
const fs = require('fs');
const username = '<Your-username>';
const password = '<Your-password>';
(async () => {
  const agent = new HttpsProxyAgent(
    `http://${username}:${password}@unblock.oxylabs.io:60000`
  );
  // Disable TLS certificate verification for requests routed through the proxy
  process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
  const response = await fetch('https://example.com/captcha', {
    method: 'get',
    agent: agent,
  });
  const resp = await response.text();
  fs.writeFile('result.html', resp, (err) => {
    if (err) throw err;
    console.log('Result saved to result.html');
  });
})();
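Web Unblocker's JavaScript rendering, mentioned earlier, can also be requested per call. Based on the Oxylabs documentation, rendering is toggled with an x-oxylabs-render request header; treat the exact header name as an assumption and verify it against the current docs:
// Hedged sketch: ask Web Unblocker to render JavaScript before returning the page
// (verify the x-oxylabs-render header name against the current Oxylabs docs)
const renderedResponse = await fetch('https://example.com/captcha', {
  method: 'get',
  agent: agent,
  headers: { 'x-oxylabs-render': 'html' },
});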
Thanks to Web Unblocker, you can prevent CAPTCHAs from loading and bypass advanced website security measures to get your scraping tasks done.
CAPTCHA challenges can impede web automation efforts, but with the help of Puppeteer Stealth and Oxylabs’ Web Unblocker, you can bypass CAPTCHAs and make your automation process smooth and obstacle-free. Remember to stay within legal boundaries and seek legal consultation before engaging in any kind of scraping activity.
We encourage you to secure a free one-week trial of Oxylabs’ Web Unblocker and read our detailed documentation to get the most out of it.
Can Puppeteer solve CAPTCHAs?
No, Puppeteer can’t solve a CAPTCHA by itself. However, Puppeteer can deal with CAPTCHA and reCAPTCHA challenges by making an automated script appear as a real human accessing the website. This way, the CAPTCHA doesn't get triggered in the first place.
Is it possible to bypass a CAPTCHA?
Yes, you can bypass a CAPTCHA by employing advanced AI-based tools like Web Unblocker.
About the author
Maryia Stsiopkina
Senior Content Manager
Maryia Stsiopkina is a Senior Content Manager at Oxylabs. As her passion for writing was developing, she was writing either creepy detective stories or fairy tales at different points in time. Eventually, she found herself in the tech wonderland with numerous hidden corners to explore. At leisure, she does birdwatching with binoculars (some people mistake it for stalking), makes flower jewelry, and eats pickles.
All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.