Web Scraping Real Python




Web scraping is the process of collecting and parsing raw data from the Web, and it can save a huge amount of time and effort when you need to access and extract large amounts of information from a website automatically. The Python community has come up with some pretty powerful tools for the job. In this article we will look at two approaches: driving a headless Chrome browser with pyppeteer to scrape Google Places, and building crawlers with Scrapy. Even though real-world situations are often more complicated, these examples give you a good foundation to explore on your own.

Introduction


We'll cover how to use Headless Chrome for web scraping Google Places. Google Places does not strictly require JavaScript, because Google serves a different response if you disable it, but for better user emulation when browsing and scraping Google Places a real browser is recommended.

Headless Chrome is essentially the Chrome browser running without a head (no graphical user interface). The benefit is that you can run a headless browser on a server environment that has no graphical interface attached to it and is normally accessed through shell access. Running headless can also be faster and lighter on system resources.

Controlling a browser

We need a way to control the browser with code. This can be done through the Chrome DevTools Protocol, or CDP, which is essentially a WebSocket server running in the browser, based on JSON-RPC. Instead of working with CDP directly we'll use a library called pyppeteer, a Python implementation of CDP that provides an easier-to-use abstraction. It's inspired by puppeteer, the Node.js library it is modeled on.

Setting up

As usual with any of my Python projects, I recommend working in a virtual environment, which lets us manage dependencies and versions separately for each application or project. Let's create a virtual environment in our home directory and install the dependencies we need.

Make sure you are running at least Python 3.6.1; Python 3.5 has reached end of support. The pyppeteer library will not work with Python 3.6.0, because the websockets library it depends on does not support that version.

Let's create the following folders and files.
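The paths used throughout this guide suggest a layout along these lines. The exact arrangement is up to you; keeping core inside google-places makes the imports shown later work when you run the folder directly, and the empty __init__.py makes core importable as a package.

    google-places/
        __main__.py
        core/
            __init__.py
            browser.py
            utils.py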

We created a __main__.py file; this lets us run the Google Places scraper by pointing Python at the google-places folder, for example with python google-places (nothing should happen right now).

Launching a headless browser

We need to launch a Chrome browser. By default, pyppeteer will download and use its own copy of Chromium. It's also possible to use Chrome, as long as it is installed on your system. The library makes use of async/await for concurrency, so we import the asyncio package from the standard library.

To launch Chrome instead of Chromium, pass the executablePath option to the launch function. Below, we launch the browser, navigate to Google and take a screenshot. The screenshot will be saved in the folder you are running the scraper from.
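A minimal sketch of this first script could look like the following; the file name for the screenshot is just an example.

    import asyncio
    from pyppeteer import launch

    async def main():
        # pass executablePath='/path/to/chrome' here to use Chrome instead of Chromium
        browser = await launch(headless=True)
        page = await browser.newPage()
        await page.goto('https://www.google.com')
        # saved in the directory you run the scraper from
        await page.screenshot({'path': 'google.png'})
        await browser.close()

    asyncio.get_event_loop().run_until_complete(main())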

Digging in

Let's create some functions in core/browser.py to simplify working with the browser and the page. We'll make use of what I believe is an awesome Python feature for simplifying resource management: context managers. Specifically, we will use an async context manager.

An asynchronous context manager is a context manager that is able to suspend execution in its enter and exit methods.

This feature lets us write code like the following, which handles opening and closing a browser in one line:
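For example, once the PageSession class defined below exists, the usage inside an async function looks like this:

    async with PageSession('https://www.google.com') as session:
        # session.page is a pyppeteer page with the URL fully loaded
        html = await session.page.content()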

Let's add the PageSession async context manager in the file core/browser.py.
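Here is a minimal sketch of what PageSession could look like; the waitUntil value is an assumption, tune it to your needs.

    from pyppeteer import launch

    class PageSession:
        """Async context manager: launch a browser on enter, close it on exit."""

        def __init__(self, url):
            self.url = url
            self.browser = None
            self.page = None

        async def __aenter__(self):
            self.browser = await launch(headless=True)
            self.page = await self.browser.newPage()
            # wait until network activity settles so JavaScript-rendered content is present
            await self.page.goto(self.url, waitUntil='networkidle2')
            return self

        async def __aexit__(self, exc_type, exc, tb):
            await self.browser.close()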

In our google-places/__main__.py file, let's make use of our new PageSession and print the HTML content of the final rendered page, after JavaScript has executed.
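A sketch of __main__.py, assuming the layout shown earlier:

    import asyncio

    from core.browser import PageSession

    GOOGLE_URL = 'https://www.google.com'

    async def main():
        async with PageSession(GOOGLE_URL) as session:
            # content() returns the HTML after JavaScript has run
            html = await session.page.content()
            print(html)

    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())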

Run the google-places module in your terminal with the same command we used earlier.

So with the above code we can launch a browser, open a page (a tab in Chrome), navigate to a website, wait for JavaScript to finish loading and executing, and then close the browser.

Next let's do the following:

  • We want to visit google.com
  • Enter a search query for pediatrician near 94118
  • Click on google places to see more results
  • Scrape results from the page
  • Save results to a CSV file

Navigating pages

We want to end up on the Google Places results page so we can pull the data we need.

Let's start by breaking up our code in google-places/__main__.py so we first search and then navigate to Google Places. We also want to clean up some of the string literals, like the Google URL.

We use XPath to find the search bar, the search button and the view all button that takes us to Google Places. The flow is as follows (a sketch of the code follows the list below):

  1. Type in the search bar
  2. Click the search button
  3. Wait for the view all button to appear
  4. Click the view all button to take us to Google Places
  5. Wait for an element on the new page to appear
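Here is a sketch of what the navigation code in google-places/__main__.py could look like. Every XPath expression and the "More places" text are assumptions; Google changes its markup regularly, so verify them in DevTools before relying on them.

    GOOGLE_URL = 'https://www.google.com'
    SEARCH_QUERY = 'pediatrician near 94118'

    async def search(session, query):
        page = session.page
        # 1. type the query into the search bar (XPath is an assumption)
        search_box = await page.waitForXPath('//input[@name="q"]')
        await search_box.type(query)
        # 2. click the search button
        search_button = await page.waitForXPath('//input[@name="btnK"]')
        await search_button.click()
        # 3. wait for the "view all" / "More places" link on the results page
        view_all = await page.waitForXPath('//a[.//span[contains(text(), "More places")]]')
        # 4. click it to get to Google Places
        await view_all.click()
        # 5. wait for an element that only exists on the places page (placeholder XPath)
        await page.waitForXPath('//div[@role="heading"]')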

Scraping the data with Pyppeteer

At this point we should be on the google places page and we can pull the data we want. The navigation flow we followed before is important for emulating a user.

Let's define the data we want to pull from the page.

  • Name
  • Location
  • Phone
  • Rating
  • Website Link

In core/browser.py let's add two methods to our PageSession to help us grab the text and an attribute (the website link for the doctor).

So we added get_text and get_link. These two methods evaluate JavaScript in the browser, the same way as if you typed it into the Chrome console. You can see that they just use the DOM to grab the text of the element or its href attribute.
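A sketch of the two helpers; they go inside the PageSession class in core/browser.py.

    # methods added to the PageSession class (indent them inside the class)
    async def get_text(self, element):
        # run JavaScript in the page to read the element's text, just like in the Chrome console
        return await self.page.evaluate('(el) => el.textContent', element)

    async def get_link(self, element):
        # read the href attribute of an anchor element
        return await self.page.evaluate('(el) => el.href', element)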

In google-places/__main__.py we will add a few functions that will grab the content that we care about from the page.

We make use of XPath to grab the elements. You can practice XPath in your Chrome browser by pressing F12, or by right-clicking and choosing Inspect, to open the console. Why do I use XPath? It's easier to specify complex selectors, because XPath has built-in functions for things like finding elements that contain some text or traversing the tree in various ways.

For the phone, rating and link fields we default to None and substitute with 'N/A' because not all doctors have a phone number listed, a rating or a link. All of them seem to have a location and a name.

Because there are many doctors listed on the page, we want to find the parent element and loop over each match, then evaluate the XPath we defined above. To do this, let's add two more functions to tie it all together.

The entry point here is scrape_doctors which evaluates get_doctor_details on each container element.

In the code below, we loop over each container element that matched our XPath and call get_doctor_details without the await keyword. Because we don't await it, each call gives us back an awaitable object, and the asyncio.gather call can then evaluate everything in the tasks list.
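Here is a sketch of these functions for google-places/__main__.py. Every XPath below is a placeholder and only a few fields are shown; extend get_doctor_details with the location, rating and anything else you need.

    async def first_match(element, xpath):
        """Return the first node matching xpath under element, or None."""
        matches = await element.xpath(xpath)
        return matches[0] if matches else None

    async def get_doctor_details(session, container):
        """Extract one doctor's details from a single result container."""
        name_el = await first_match(container, './/div[@role="heading"]//span')      # placeholder
        phone_el = await first_match(container, './/span[contains(@class, "phone")]')  # placeholder
        link_el = await first_match(container, './/a[contains(@href, "http")]')       # placeholder

        return {
            'name': await session.get_text(name_el) if name_el else 'N/A',
            'phone': await session.get_text(phone_el) if phone_el else 'N/A',
            'website': await session.get_link(link_el) if link_el else 'N/A',
        }

    async def scrape_doctors(session):
        """Find every result container and evaluate get_doctor_details on each one."""
        containers = await session.page.xpath('//div[@data-record-id]')  # placeholder parent XPath
        # no await here: we collect the awaitables and hand them to gather
        tasks = [get_doctor_details(session, c) for c in containers]
        return await asyncio.gather(*tasks)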

The await asyncio.gather(*tasks) line lets us wait for all of these async calls to finish concurrently.

Let's put this together in our main function. First we search and crawl to the right page, then we scrape with scrape_doctors.
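Putting it together, main could look like this:

    async def main():
        async with PageSession(GOOGLE_URL) as session:
            await search(session, SEARCH_QUERY)
            results = await scrape_doctors(session)
            print(results)

    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())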

Saving the output

In core/utils.py we'll add two functions to help us save our scraped output to a local CSV file.
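Here is a compact version built on csv.DictWriter from the standard library; I've collapsed it into a single helper, and the function name is my own.

    import csv

    def save_csv(rows, path):
        """Write a list of dicts to a CSV file with a header row."""
        if not rows:
            return
        with open(path, 'w', newline='') as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)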

Let's import it in google-places/__main__.py and save the output of scrape_doctors from our main function.
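The updated main function in google-places/__main__.py could then look like this:

    from core.utils import save_csv

    async def main():
        async with PageSession(GOOGLE_URL) as session:
            await search(session, SEARCH_QUERY)
            results = await scrape_doctors(session)
            # write the scraped rows to a local CSV file
            save_csv(results, 'pediatricians.csv')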

We should now have a file called pediatricians.csv which contains our output.

Wrapping up

From this guide we should have learned how to use a headless browser to crawl and scrape Google Places while emulating a real user. There's a lot more you can do with headless browsers, such as generating PDFs, taking screenshots and other automation tasks.

Hopefully this guide helped you get started executing javascript and scraping with a headless browser. Till next time!

So far we have been driving a browser by hand. In this part we are going to dig a little bit deeper into Scrapy.

Scrapy is a wonderful open source Python web scraping framework. It handles the most common use cases when doing web scraping at scale:

  • Multithreading
  • Crawling (going from link to link)
  • Extracting the data
  • Validating
  • Saving to different format / databases
  • Many more

The main difference between Scrapy and other commonly used libraries like Requests / BeautifulSoup is that it is opinionated. It allows you to solve the usual web scraping problems in an elegant way.

The downside of Scrapy is that the learning curve is steep; there is a lot to learn, but that is what we are here for :)

In this tutorial we will create two different web scrapers, a simple one that will extract data from an E-commerce product page, and a more “complex” one that will scrape an entire E-commerce catalog!

Basic overview

You can install Scrapy using pip. Be careful though: the Scrapy documentation strongly suggests installing it in a dedicated virtual environment in order to avoid conflicts with your system packages.

I'm using Virtualenv and Virtualenvwrapper to create and activate an isolated environment.

You can now create a new Scrapy project with the startproject command, for example scrapy startproject product_scraper (the project name is up to you).

This will create all the necessary boilerplate files for the project.

Here is a brief overview of these files and folders:

  • items.py is a model for the extracted data. You can define custom models (like a Product) that inherit from the Scrapy Item class.
  • middlewares.py holds middlewares used to hook into the request / response lifecycle. For example, you could create a middleware to rotate user-agents, or to use an API like ScrapingBee instead of doing the requests yourself.
  • pipelines.py is where Scrapy pipelines live; they are used to process the extracted data, clean the HTML, validate the data, and export it to a custom format or save it to a database.
  • /spiders is a folder containing Spider classes. With Scrapy, Spiders are classes that define how a website should be scraped, including which links to follow and how to extract the data from those links.
  • scrapy.cfg is a configuration file used to change some settings.

Scraping a single product

In this example we are going to scrape a single product from a dummy E-commerce website. Here is the first product we are going to scrape:


https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/

We are going to extract the product name, picture, price and description.

Scrapy Shell

Scrapy comes with a built-in shell that helps you try and debug your scraping code in real time. You can quickly test your XPath expressions / CSS selectors with it. It's a very cool tool to write your web scrapers and I always use it!

You can configure Scrapy Shell to use another console, such as IPython, instead of the default Python console. You will get autocompletion and other nice perks like colorized output.

In order to use it in your Scrapy shell, you need to add shell = ipython under the [settings] section of your scrapy.cfg file.

Once it's configured, you can start it by running the scrapy shell command.

We can start by fetching a URL with the fetch() helper:
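Using the product page from the demo site shown earlier:

    # inside the Scrapy shell
    fetch('https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/')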

This will start by fetching the /robots.txt file.

In this case there isn't any robots.txt, which is why we see a 404 HTTP code. If there were one, Scrapy would follow its rules by default.

You can disable this behavior by setting ROBOTSTXT_OBEY = False in your project's settings.py.

You should then see a log showing your request and the response it received.

You can now see your response object, response headers, and try different XPath expression / CSS selectors to extract the data you want.

You can see the response directly in your browser with view(response).

Note that the page will render badly inside your browser, for lots of different reasons. This can be CORS issues, Javascript code that didn't execute, or relative URLs for assets that won't work locally.

The scrapy shell is like a regular Python shell, so don't hesitate to load your favorite scripts/function in it.

Extracting Data

Scrapy doesn't execute any Javascript by default, so if the website you are trying to scrape is using a frontend framework like Angular / React.js, you could have trouble accessing the data you want.

Now let's try some XPath expressions to extract the product title and price.


In order to extract the price, we are going to use an XPath expression that selects the first span after the div with the class my-4.

I could also use a CSS selector:
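Inside the shell, the expressions could look like this; the my-4 class comes from the page described above, so adapt the selectors if the markup has changed.

    # product title
    response.xpath('//title/text()').get()
    # price: the span under the div with class my-4 (adjust to the actual markup)
    response.xpath('//div[@class="my-4"]/span/text()').get()
    # the same price with a CSS selector
    response.css('.my-4 span::text').get()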

Creating a Scrapy Spider

With Scrapy, Spiders are classes where you define your crawling (what links / URLs need to be scraped) and scraping (what to extract) behavior.

Here are the different steps used by a spider to scrape a website:

  • It starts by looking at the class attribute start_urls and calls these URLs with the start_requests() method. You can override this method if you need to change the HTTP verb or add parameters to the request (for example, sending a POST request instead of a GET).
  • It will then generate a Request object for each URL, and send the response to the callback function parse()
  • The parse() method will then extract the data (in our case, the product price, image, description and title) and return either a dictionary, an Item object, a Request or an iterable.

You may wonder why the parse method can return so many different objects. It's for flexibility. Let's say you want to scrape an E-commerce website that doesn't have any sitemap. You could start by scraping the product categories, so this would be a first parse method.

This method would then yield a Request object for each product category, pointing to a new callback method, say parse2(). For each category you would also need to handle pagination. Then, for each product, a third parse function would do the actual scraping and generate an Item. A sketch of this follows below.
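As a sketch, the three levels could look like this inside a spider; all selectors and method names here are placeholders of my own.

    def parse(self, response):
        # 1st level: follow each product category
        for href in response.xpath('//a[@class="category-link"]/@href').getall():
            yield response.follow(href, callback=self.parse_category)

    def parse_category(self, response):
        # 2nd level: follow each product on the category page and handle pagination
        for href in response.xpath('//a[@class="product-link"]/@href').getall():
            yield response.follow(href, callback=self.parse_product)
        next_page = response.xpath('//a[@rel="next"]/@href').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse_category)

    def parse_product(self, response):
        # 3rd level: the actual scraping that generates an item
        yield {'title': response.xpath('//title/text()').get(), 'url': response.url}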

With Scrapy you can return the scraped data as a simple Python dictionary, but it is a good idea to use the built-in Scrapy Item class. It's a simple container for our scraped data, and Scrapy will look at this item's fields for many things, like exporting the data to different formats (JSON / CSV…), the item pipeline, etc.

So here is a basic Product class:
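A minimal version for items.py, with one Field per attribute mentioned earlier; the field names are my own.

    import scrapy

    class Product(scrapy.Item):
        title = scrapy.Field()
        price = scrapy.Field()
        img_url = scrapy.Field()
        description = scrapy.Field()
        product_url = scrapy.Field()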

Now we can generate a spider with the command line helper, for example scrapy genspider ecom clever-lichterman-044f16.netlify.com.

Or you can do it manually and put your Spider's code inside the /spiders directory.


There are different types of Spiders in Scrapy to solve the most common web scraping use cases:

  • Spider is the one we will use here. It takes a start_urls list and scrapes each entry with a parse() method (a sketch of our EcomSpider follows this list).
  • CrawlSpider follows links defined by a set of rules
  • SitemapSpider extracts URLs defined in a sitemap
  • Many more
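Here is a sketch of the EcomSpider discussed below. The module path of the Product import depends on your project name (product_scraper is the example name used earlier), and the image and description selectors are placeholders.

    import scrapy
    from product_scraper.items import Product  # adjust to your actual project name

    class EcomSpider(scrapy.Spider):
        name = 'ecom_spider'
        allowed_domains = ['clever-lichterman-044f16.netlify.com']
        start_urls = ['https://clever-lichterman-044f16.netlify.com/products/taba-cream.1/']

        def parse(self, response):
            product = Product()
            product['product_url'] = response.url
            product['title'] = response.xpath('//title/text()').get()
            product['price'] = response.xpath('//div[@class="my-4"]/span/text()').get()
            product['img_url'] = response.xpath('//img/@src').get()        # placeholder selector
            product['description'] = response.xpath('//p/text()').get()    # placeholder selector
            return product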

In this EcomSpider class, there are two required attributes:

  • name, which is our Spider's name (you can run it with scrapy crawl ecom_spider from inside the project)
  • start_urls, which defines the list of starting URLs

The allowed_domains attribute is optional but important when you use a CrawlSpider that could follow links on different domains.


Then we just populate the Product fields using XPath expressions to extract the data we want, as we saw earlier, and we return the item.

You can run this spider as follows to export the result to JSON (you could also export to CSV): scrapy crawl ecom_spider -o products.json

You should then get a products.json file containing the scraped fields.

Item loaders


There are two common problems that you can face while extracting data from the Web:

  • For the same website, the page layout and underlying HTML can be different. If you scrape an E-commerce website, you will often have a regular price and a discounted price, with different XPath / CSS selectors.
  • The data can be dirty and need some kind of post-processing; for an E-commerce website, it could be the way the prices are displayed, for example ($1.00, $1, $1,00).

Scrapy comes with a built-in solution for this: ItemLoaders. It's an interesting way to populate our Product object.

You can add several XPath expressions to the same Item field, and it will try them sequentially. By default, if several matches are found, they will all be loaded into a list.

You can find many examples of input and output processors in the Scrapy documentation.

It's really useful when you need to transform or clean the data you extract, for example extracting the currency from a price, or converting one unit into another (centimeters to meters, Celsius to Fahrenheit).

In our webpage we can find the product title with different XPath expressions: //title and //section[1]//h2/text()

Here is how you could use an ItemLoader in this case:
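A sketch of a parse method using an ItemLoader with the two title XPaths from above, assuming the Product item defined earlier:

    from scrapy.loader import ItemLoader
    from product_scraper.items import Product  # adjust to your actual project name

    def parse(self, response):
        loader = ItemLoader(item=Product(), response=response)
        # both expressions are added to the same field; by default all matches end up in a list
        loader.add_xpath('title', '//title/text()')
        loader.add_xpath('title', '//section[1]//h2/text()')
        loader.add_value('product_url', response.url)
        return loader.load_item()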

Generally you only want the first matching XPath, so you will need to add output_processor=TakeFirst() to your item field's constructor.

In our case we only want the first matching XPath for each field, so a better approach would be to create our own ItemLoader and declare a default output_processor to take the first matching XPath:
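A sketch of such a loader; on recent Scrapy versions the processors live in the itemloaders package, on older versions they are in scrapy.loader.processors.

    from itemloaders.processors import MapCompose, TakeFirst  # scrapy.loader.processors on older versions
    from scrapy.loader import ItemLoader

    def remove_dollar_sign(value):
        return value.replace('$', '')

    class ProductLoader(ItemLoader):
        # only keep the first matching value for every field
        default_output_processor = TakeFirst()
        # input processor for the price field: strip the dollar sign before storing the value
        price_in = MapCompose(remove_dollar_sign)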

I also added a price_in, which is an input processor that deletes the dollar sign from the price. I'm using MapCompose, a built-in processor that takes one or several functions to be executed sequentially; you can add as many functions as you like. The convention is to add _in or _out to your Item field's name to attach an input or output processor to it.

There are many more processors; you can learn more about them in the documentation.

Scraping multiple pages

Now that we know how to scrape a single page, it's time to learn how to scrape multiple pages, like the entire product catalog. As we saw earlier, there are different kinds of Spiders.

When you want to scrape an entire product catalog, the first thing you should look at is a sitemap. Sitemaps are built exactly for this: to show web crawlers how the website is structured.


Most of the time you can find one at base_url/sitemap.xml. Parsing a sitemap can be tricky, and again, Scrapy is here to help you with this.

In our case, you can find the sitemap here: https://clever-lichterman-044f16.netlify.com/sitemap.xml

If we look inside the sitemap, there are many URLs that we are not interested in, like the home page and blog posts.

Fortunately, we can filter the URLs to parse only those that match some pattern; it's really easy, here we only want the URLs that have /products/ in them:
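A sketch of the sitemap spider, saved as sitemap_spider.py; the callback body is the same kind of extraction we did earlier.

    from scrapy.spiders import SitemapSpider

    class ProductSitemapSpider(SitemapSpider):
        name = 'sitemap_spider'
        sitemap_urls = ['https://clever-lichterman-044f16.netlify.com/sitemap.xml']
        # only URLs containing /products/ are sent to the callback; everything else is skipped
        sitemap_rules = [('/products/', 'parse_product')]

        def parse_product(self, response):
            yield {
                'title': response.xpath('//title/text()').get(),
                'url': response.url,
            }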

You can run this spider as follows to scrape all the products and export the result to a CSV file: scrapy runspider sitemap_spider.py -o output.csv

Now what if the website didn't have any sitemap? Once again, Scrapy has a solution for this!

Let me introduce you to the… CrawlSpider.

The CrawlSpider will crawl the target website starting from a start_urls list. Then, for each URL, it will extract all the links based on a list of Rule objects. In our case it's easy: products all share the same URL pattern, /products/product_title, so we only need to filter on these URLs.
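A sketch of the crawl spider; the start URL and the Rule simply reflect the /products/ pattern mentioned above.

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class EcomCrawlSpider(CrawlSpider):
        name = 'crawl_spider'
        allowed_domains = ['clever-lichterman-044f16.netlify.com']
        start_urls = ['https://clever-lichterman-044f16.netlify.com/']
        rules = (
            # follow every link, but only pass /products/ pages to the callback
            Rule(LinkExtractor(allow='/products/'), callback='parse_product', follow=True),
        )

        def parse_product(self, response):
            yield {
                'title': response.xpath('//title/text()').get(),
                'url': response.url,
            }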

As you can see, all these built-in Spiders are really easy to use. It would have been much more complex to do it from scratch.

With Scrapy you don't have to think about the crawling logic, like adding new URLs to a queue, keeping track of already parsed URLs, multi-threading…

Conclusion

In this post we saw a general overview of how to scrape the web with Scrapy and how it can solve your most common web scraping challenges. Of course we only touched the surface and there are many more interesting things to explore, like middlewares, exporters, extensions, pipelines!

If you've been doing web scraping more “manually” with tools like BeautifulSoup / Requests, it's easy to understand how Scrapy can help save time and build more maintainable scrapers.


I hope you liked this Scrapy tutorial and that it will motivate you to experiment with it.

For further reading don't hesitate to look at the great Scrapy documentation.

We have also published our custom integration with Scrapy; it allows you to execute JavaScript with Scrapy, so do not hesitate to check it out.


You can also check out our web scraping with Python tutorial to learn more about web scraping.


Happy Scraping!




