Sometimes we need to extract information from websites. We can extract data from websites using their available APIs. But there are websites where APIs are not available.
Here, Web scraping comes into play!
Python is widely used for web scraping because of the ease it provides in writing the core logic. Whether you are a data scientist, developer, engineer, or someone who works with large amounts of data, web scraping with Python is of great help.
Without a direct way to download the data, you are left with web scraping in Python, as it can extract massive quantities of data without hassle and within a short period of time.
In this tutorial, we shall look at scraping with some very powerful Python-based libraries like BeautifulSoup and Selenium.
BeautifulSoup and urllib
BeautifulSoup is a Python library for pulling data out of HTML and XML files. But it does not fetch webpages itself, so we will use the urllib library to download the page.
First we need to install the Python web scraping BeautifulSoup4 package on our system using the following command:
$ sudo pip install beautifulsoup4
$ pip install lxml
OR
$ sudo apt-get install python3-bs4
$ sudo apt-get install python-lxml
So here I am going to extract a page from the website https://www.botreetechnologies.com.
from urllib.request import urlopen
from bs4 import BeautifulSoup
We import the packages that we are going to use in our program. Now we will fetch our webpage using the following:
response = urlopen('https://www.botreetechnologies.com/case-studies')
Beautiful Soup cannot work with the raw response directly, so we need to parse it into an HTML/XML tree first.
data = BeautifulSoup(response.read(),'lxml')
Here we parsed our webpage's HTML content using the lxml parser.
As you can see, there are many case studies available on this web page, and I want to read all of them.
Each case study has a title at the top, followed by some details related to that case. I want to extract all of that information.
We can extract an element based on its tag, class, id, XPath, etc.
You can get the class of an element by simply right-clicking on it and selecting Inspect Element.
case_studies = data.find('div', { 'class' : 'content-section' })
If there are multiple elements of this class on our page, it will return only the first one. If you want to get all the elements having this class, use the findAll() method.
case_studies = data.findAll('div', { 'class' : 'content-section' })
Now we have the divs with class 'content-section', each containing their child elements. We will get the <h2> tag to get our 'TITLE' and the <ul> tag to get all its children, the <li> elements.
for case_stud in case_studies:
    title = case_stud.find('h2').find('a').text
    case_stud_details = case_stud.find('ul').findAll('li')
Now we have the list of all children of the <ul> element.
To get the first element from the children list, simply write:
case_stud_details[0]
We can extract any attribute of an element, e.g. we can get the text of this element by using:
case_stud_details[2].text
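Similarly, you can read an attribute of an element instead of its text by indexing the tag; a minimal sketch (the 'href' attribute on the title link is an assumption):
case_stud.find('h2').find('a')['href']  # 'href' is assumed to be present on the link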
But here I want to click on the 'TITLE' of any case study and open the details page to get all the information.
Since we want to interact with the website to get the dynamic content, we need to imitate normal user interaction. Such behaviour cannot be achieved using BeautifulSoup or urllib; hence we need a webdriver to do this.
A webdriver basically creates a new browser window which we can control programmatically. It also lets us capture user events like clicks and scrolls.
Selenium is one such webdriver.
Selenium Webdriver
The Selenium webdriver accepts commands, sends them to a browser, and retrieves the results.
You can install Selenium on your system using the following simple command:
$ sudo pip install selenium
In order to use it, we need to import selenium into our Python script.
from selenium import webdriver
I am using the Firefox webdriver in this tutorial. Now we are ready to fetch our webpage, and we can do this by using the following:
self.url = 'https://www.botreetechnologies.com/'
self.browser = webdriver.Firefox()  # opens a new Firefox window
self.browser.get(self.url)          # loads the page
Now we need to click on ‘CASE-STUDIES’ to open that page.
We can click on a selenium element by using the following piece of code:
self.browser.find_element_by_xpath("//div[contains(@id,'navbar')]/ul[2]/li[1]").click()
Now we are taken to the case-studies page, where all the case studies are listed with some information.
Here, I want to click on each case study and open its details page to extract all the available information.
So, I created a list of links for all the case studies and loaded them one after the other, as sketched below.
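A minimal sketch of that idea (the XPath for the title links is illustrative, not taken from the original script):
links = [a.get_attribute('href')
         for a in self.browser.find_elements_by_xpath('//h2/a')]
for link in links:
    self.browser.get(link)  # open the details page and extract its content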
To go back to the previous page, you can use the following piece of code:
self.browser.execute_script('window.history.go(-1)')
Python Web Scraping Sample
The final script for using Selenium looks as follows:
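The original post shows the full script as an image; below is a minimal sketch reconstructed from the steps above (the class name, the link XPath, and the printed field are illustrative, and the Selenium 3-style locators match the earlier snippets):
from selenium import webdriver

class CaseStudyScraper:
    def __init__(self):
        self.url = 'https://www.botreetechnologies.com/'
        self.browser = webdriver.Firefox()

    def scrape(self):
        self.browser.get(self.url)
        # open the CASE-STUDIES page from the navbar
        self.browser.find_element_by_xpath("//div[contains(@id,'navbar')]/ul[2]/li[1]").click()
        # collect the links to all case studies (XPath is illustrative)
        links = [a.get_attribute('href')
                 for a in self.browser.find_elements_by_xpath('//h2/a')]
        for link in links:
            self.browser.get(link)     # open the details page
            print(self.browser.title)  # extract the information you need here
        self.browser.quit()

if __name__ == '__main__':
    CaseStudyScraper().scrape()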
And we are done. Now you can extract static webpages or interact with webpages using the above script.
Conclusion: Web Scraping with Python is an Essential Skill to Have
Today, more than ever, companies are working with huge amounts of data. Learning how to scrape data with Python will take you a long way. In this tutorial, you learned Python web scraping with BeautifulSoup.
Along with that, Python web scraping with Selenium is also a useful skill. Companies need data engineers who can extract data and deliver it to them for gathering useful insights. You have a high chance of success in data extraction if you work on Python web scraping projects.
If you want to hire Python developers for web scraping, then contact BoTree Technologies. We have a team of engineers who are experts in web scraping. Give us a call today.
Consulting is free – let us help you grow!
It is a well-known fact that Python is one of the most popular programming languages for data mining and Web Scraping. There are tons of libraries and niche scrapers around the community, but we’d like to share the 5 most popular of them.
Most of these libraries' advantages can be obtained by using our API, and some of these libraries can be used in a stack with it.
The Top 5 Python Web Scraping Libraries in 2020
1. Requests
A well-known library for most Python developers and a fundamental tool for getting raw HTML data from web resources.
To install the library just execute the following PyPI command in your command prompt or Terminal:
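$ pip install requests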
After this, you can check the installation using the REPL:
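$ python
>>> import requests
>>> requests.__version__  # prints the installed version string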
- Official docs URL: https://requests.readthedocs.io/en/latest/
- GitHub repository: https://github.com/psf/requests
2. LXML
When we’re talking about speed in HTML parsing, we should keep in mind this great library called LXML. It is a real champion in HTML and XML parsing for web scraping, so software based on LXML can be used for scraping frequently-changing pages, such as gambling sites that provide odds for live events.
To install the library just execute the following PyPI command in your command prompt or Terminal:
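$ pip install lxml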
The LXML Toolkit is a really powerful instrument and the whole functionality can’t be described in just a few words, so the following links might be very useful:
- Official docs URL: https://lxml.de/index.html#documentation
- GitHub repository: https://github.com/lxml/lxml/
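Even so, a tiny sketch shows the flavour of the API (the URL and XPath are illustrative):
import requests
from lxml import html

page = requests.get('https://example.com')  # fetch the raw HTML
tree = html.fromstring(page.content)        # parse it into an element tree
print(tree.xpath('//h1/text()'))            # extract text with an XPath query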
3. BeautifulSoup
Probably 80% of all Python web scraping tutorials on the Internet use the BeautifulSoup4 library as a simple tool for dealing with retrieved HTML in the most human-friendly way: selectors, attributes, the DOM tree, and much more. It is the perfect choice for porting code to or from JavaScript's Cheerio or jQuery.
To install this library just execute the following PyPI command in your command prompt or Terminal:
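$ pip install beautifulsoup4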
As it was mentioned before, there are a bunch of tutorials around the Internet about BeautifulSoup4 usage, so do not hesitate to Google it!
- Official docs URL: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
- Launchpad repository: https://code.launchpad.net/~leonardr/beautifulsoup/bs4
4. Selenium
Selenium is the most popular web driver, with wrappers suitable for most programming languages. Quality assurance engineers, automation specialists, developers, data scientists - all of them have used this excellent tool at least once. For web scraping it is like a Swiss Army knife - no additional libraries are needed, because any action can be performed with a browser just like a real user: opening pages, clicking buttons, filling in forms, resolving captchas, and much more.
To install this library just execute the following PyPI command in your command prompt or Terminal:
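$ pip install selenium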
The code below shows how easily web crawling can be started using Selenium:
Python Web Scraping Example
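The original article embeds this snippet as an image; here is a minimal sketch of such a starter (the URL is illustrative):
from selenium import webdriver

browser = webdriver.Firefox()       # start a new browser window
browser.get('https://example.com')  # navigate to the page
print(browser.title)                # read data from the loaded page
browser.quit()                      # close the browser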
As this example only illustrates 1% of the Selenium power, we'd like to offer the following useful links:
- Official docs URL: https://selenium-python.readthedocs.io/
- GitHub repository: https://github.com/SeleniumHQ/selenium
5. Scrapy
Scrapy is the greatest web scraping framework, developed by a team with a lot of enterprise scraping experience. Software created on top of this library can be a crawler, a scraper, a data extractor, or all of these together.
To install this library just execute the following PyPI command in your command prompt or Terminal:
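$ pip install scrapy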
We definitely suggest you start with a tutorial to know more about this piece of gold: https://docs.scrapy.org/en/latest/intro/tutorial.html
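As a taste of what that tutorial covers, here is a minimal spider sketch (the target site and field name follow the official tutorial; they are illustrative):
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['https://quotes.toscrape.com/']

    def parse(self, response):
        # yield one item per quote block on the page
        for quote in response.css('div.quote'):
            yield {'text': quote.css('span.text::text').get()}

Save it as quotes_spider.py and run it with: scrapy runspider quotes_spider.py -o quotes.json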
As usual, the useful links are below:
- Official docs URL: https://docs.scrapy.org/en/latest/index.html
- GitHub repository: https://github.com/scrapy/scrapy
What web scraping library to use?
So, it’s all up to you and the task you’re trying to solve, but always remember to read the Privacy Policy and Terms of the site you’re scraping 😉.