Web Scraping with Beautiful Soup




Introduction

Before reading this post, please read the warnings in my blog post Learning Python: Web Scraping.

A brief introduction to Beautiful Soup can be found in my blog post Learning Python: Web and Databases. Beautiful Soup creates a parse tree for parsed pages that can be used to extract data from HTML, which makes it useful for web scraping.

Create a BeautifulSoup object that represents the document as a nested data structure.
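For example, a minimal sketch of creating one (the URL is only a placeholder):

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page; https://example.com is only a placeholder URL.
response = requests.get('https://example.com')

# Parse the HTML with the parser from Python's standard library.
soup = BeautifulSoup(response.text, 'html.parser')

# The document is now a nested data structure.
print(soup.prettify())    # the whole tree, indented
print(soup.title)         # the first <title> tag
print(soup.title.string)  # the text inside it
```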


Beautiful Soup supports the HTML parser included in Python’s standard library, but it also supports a number of third-party Python parsers. One of them is the lxml parser, used as BeautifulSoup(markup, 'lxml'). It is very fast and lenient.
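A small sketch of choosing between the two (lxml has to be installed separately, e.g. with pip install lxml):

```python
from bs4 import BeautifulSoup

markup = '<p>Hello, <b>world</b>'  # deliberately unclosed <p>

# The built-in parser: no extra dependency.
soup_builtin = BeautifulSoup(markup, 'html.parser')

# The lxml parser: faster and more lenient, but needs the lxml package.
soup_lxml = BeautifulSoup(markup, 'lxml')

print(soup_builtin.b.string)  # world
print(soup_lxml.b.string)     # world
```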


Objects in Beautiful Soup

Beautiful Soup transforms a complex HTML document into a complex tree of Python objects. There are mainly four kinds of objects: Tag, NavigableString, BeautifulSoup, and Comment.

  1. BeautifulSoup: the BeautifulSoup object itself represents the document as a whole.
  2. Tag: a Tag object corresponds to an XML or HTML tag in the original document. Every tag has a name (accessible as .name) and any number of attributes (accessible by treating the tag like a dictionary).
  3. NavigableString: a string corresponds to a bit of text within a tag. You can’t edit a string in place, but you can replace one string with another, using replace_with().
  4. Comment: the Comment object is just a special type of NavigableString.

Beautiful Soup defines classes for anything else that might show up in an XML document: CData, ProcessingInstruction, Declaration, and Doctype.
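A small sketch illustrating the four kinds of objects:

```python
from bs4 import BeautifulSoup, Comment, NavigableString

soup = BeautifulSoup(
    '<p class="intro">Hello <b>world</b><!-- a comment --></p>',
    'html.parser')

print(type(soup))                # the BeautifulSoup object: the whole document

tag = soup.p                     # a Tag object
print(tag.name)                  # 'p'
print(tag['class'])              # ['intro'] -- attributes behave like a dict

text = soup.b.string             # a NavigableString
print(isinstance(text, NavigableString))  # True

comment = soup.p.contents[-1]    # the Comment node
print(isinstance(comment, Comment))       # True
```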

Navigating the Tree

In an HTML/XML document, a tag may contain text and other tags. Beautiful Soup provides many attributes for navigating and iterating over the tree (a small sketch follows the list).

  1. Directly use the name of the tag. Using a tag name as an attribute gives you only the first tag with that name.
  2. .contents gives a list of a tag’s direct children.
  3. The .children generator can be used to iterate over a tag’s direct children.
  4. .descendants lets you iterate over all of a tag’s descendants recursively: its direct children, the children of its direct children, and so on.
  5. .string gives the text in a tag if it has only one NavigableString child. It gives the text of a tag’s child if the tag has only one child tag and that child has a string. If a tag contains more than one thing, it is unclear what .string should refer to, so it is defined to be None.
  6. .parent gives access to an element’s parent.
  7. .parents can iterate over all of an element’s parents.
  8. .next_sibling and .previous_sibling can navigate between elements that are on the same level.
  9. .next_siblings and .previous_siblings can iterate over a tag’s siblings.
  10. .next_element and .previous_element navigate to whatever was parsed immediately after or before a tag, which is not necessarily the same as .next_sibling and .previous_sibling.
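For instance, a minimal sketch on a toy document:

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <div id="main">
    <p>first <b>bold</b></p>
    <p>second</p>
  </div>
</body></html>
"""
soup = BeautifulSoup(html, 'html.parser')

div = soup.div                         # first (and only) <div>
print(div.contents)                    # list of direct children
for child in div.children:             # iterate over direct children
    print(repr(child))
for node in div.descendants:           # all descendants, recursively
    print(repr(node))
print(soup.b.string)                   # 'bold'
print(soup.b.parent.name)              # 'p'
first_p = soup.p
print(repr(first_p.next_sibling))      # the whitespace between the two <p> tags
print(soup.b.next_element)             # the string 'bold', not a sibling
```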

Searching the Tree


Beautiful Soup also provides many methods for searching the tree. The two main methods are find() and find_all().

Both methods accept several kinds of filters (a short sketch follows the list):

  1. a string;
  2. a regular expression;
  3. a list;
  4. True;
  5. a self-defined function.
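A minimal sketch of the five kinds of filters:

```python
import re
from bs4 import BeautifulSoup

html = '<div><a href="#">link</a><b>bold</b><p class="body">text</p></div>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.find_all('b'))                     # a string: exact tag name
print(soup.find_all(re.compile('^b')))        # a regular expression on tag names
print(soup.find_all(['a', 'b']))              # a list of tag names
print(soup.find_all(True))                    # True matches every tag
print(soup.find_all(lambda tag: tag.has_attr('class')))  # a function
```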

Both methods also take several arguments.

Method: find_all()

find_all(name, attrs, recursive, string, limit, **kwargs) takes the following arguments (a combined sketch follows the list):

  1. name: it can be a string, a regular expression, a list, a function, or True.
  2. attrs: any argument that’s not recognized is turned into a filter on one of a tag’s attributes. Some attributes cannot be used as keyword arguments; in that case, use attrs to pass the attribute name and its value. Searching by CSS class is a little different: it uses the keyword argument class_.
  3. recursive: if you set this value to False like recursive=False, it will only search through the direct children instead of all the descendants.
  4. string: with it you can search for strings instead of tags. It also accepts a string, a regular expression, a list, a function or True.
  5. limit: it limits the number of results.
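A small sketch covering these arguments (the markup is made up for illustration):

```python
import re
from bs4 import BeautifulSoup

html = """
<div id="content">
  <p class="lead">Intro paragraph</p>
  <p>Body paragraph</p>
  <span data-role="note">A note</span>
</div>
"""
soup = BeautifulSoup(html, 'html.parser')

print(soup.find_all('div', id='content'))             # keyword filter on an attribute
print(soup.find_all('p', class_='lead'))               # class is reserved, so class_
print(soup.find_all(attrs={'data-role': 'note'}))      # attrs for names like data-*
print(soup.div.find_all('p', recursive=False))         # direct children only
print(soup.find_all(string=re.compile('paragraph')))   # match strings, not tags
print(soup.find_all('p', limit=1))                     # stop after one result
```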

Method: find()

find(name, attrs, recursive, string, **kwargs): it returns only the first result. If find() can’t find anything, it returns None instead of an empty list.
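For example:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>first</p><p>second</p>', 'html.parser')

print(soup.find('p'))          # only the first <p>
print(soup.find('table'))      # None
print(soup.find_all('table'))  # [] -- find_all returns an empty list instead
```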

Other Methods


There are some other methods: find_parents(), find_parent(), find_next_siblings(), find_next_sibling(), find_previous_siblings(), find_previous_sibling(), find_all_next(), find_next(), find_all_previous() and find_previous(). They are all similar to find() and find_all(), so I will not describe them in detail here.

As of version 4.7.0, Beautiful Soup supports most CSS4 selectors via the SoupSieve project.
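A small sketch of CSS selectors with select() and select_one():

```python
from bs4 import BeautifulSoup

html = '<div id="nav"><ul><li class="item">a</li><li class="item">b</li></ul></div>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.select('div#nav li.item'))    # all <li class="item"> under #nav
print(soup.select_one('#nav .item'))     # first match only
print(soup.select('li:nth-of-type(2)'))  # CSS pseudo-class handled by SoupSieve
```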

Example

KassiesA: UEFA European Cup Football contains a lot of soccer data for the matches of UEFA Champions League and Europa League.

I will give an example using Beautiful Soup to extract the results of all the matches in UEFA European Cup Matches 2017/2018.

The HTML content in the page looks like:

Based on the structure of the page, I developed a simple program to save all the match results to a file in CSV format.
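The program itself is not reproduced here; the following is only a rough sketch of the general approach, assuming the match results sit in HTML table rows with the two teams and the score in separate cells. The URL and the column layout are assumptions for illustration, not the actual structure of the KassiesA page.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Placeholder URL: point this at the actual 2017/2018 results page.
URL = 'https://example.com/uefa/2017-2018/matches.html'

response = requests.get(URL)
soup = BeautifulSoup(response.text, 'html.parser')

with open('matches_2017_2018.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['home', 'away', 'score'])
    # Assumption: each match is one <tr> whose <td> cells hold the teams and the score.
    for row in soup.find_all('tr'):
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        if len(cells) >= 3:
            writer.writerow(cells[:3])
```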

Some parts of the file:

Then we can use them for our own analysis.

Further


Besides parsing the tree, Beautiful Soup also allows you to modify the tree and write the changes out as a new HTML or XML document.
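A minimal sketch of modifying a tree and writing it back out:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="old">Hello <b>world</b></p>', 'html.parser')

# Rename a tag and change an attribute.
tag = soup.b
tag.name = 'i'
soup.p['class'] = 'new'

# Replace a string and append a newly created tag.
tag.string.replace_with('there')
new_tag = soup.new_tag('a', href='https://example.com')
new_tag.string = 'a link'
soup.p.append(new_tag)

# Write the modified tree out as a new HTML document.
with open('modified.html', 'w') as f:
    f.write(str(soup))
```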

More details can be found in the Beautiful Soup Official Documentation.


