Web Scraping Java Selenium





You may have figured out that Selenium wasn’t really built for web scraping; it was built to automate website tests. There are other libraries that can be used to simply pull web content (the Jsoup parser, for example). However, there are situations in which nothing else will work (without you pulling out your hair).

By Rob Gravelle

In the Web Page Scraping with jsoup article I described how to extract data from a web page using the open-source jsoup Java library. As an HTML parser, jsoup only sees the raw page source and is completely unaware of any content that is added to the DOM via JavaScript after the initial page load. For that, you need to employ some kind of embedded browser engine, such as Oracle WebView.

Another approach is to use a headless browser, that is, a web browser without a graphical user interface. There are several good ones to choose from, including Chrome, Firefox, PhantomJS, Zombie JS, HtmlUnit, and Splash. In today's article, we'll be automating the headless Chrome browser from a Python script to fetch a web page and read the dynamically generated contents of an element.

Project Setup

Python is an ideal language for web page scraping because it's more lightweight than full-fledged languages like Java. There is also a Selenium WebDriver for Python. (Actually, there is one for Java as well!) The driver is basically the engine that controls the browser, much like a database driver.

I'll be developing the script within MS Visual Studio Code. It just makes everything much easier. Here's a great tutorial on getting started with Python in Visual Studio Code.

Once you've got the Python extension set up, you're ready to create the project.

  1. Select File > Add Folder to Workspace… from the main menu.
  2. Browse to the root folder of your project. Mine is simply called 'demo'.
  3. Now we'll need to download the ChromeDriver - WebDriver for Chrome. Place the chromedriver executable in your project root.
  4. Next, we'll install Selenium. In the Integrated Terminal (select View > Integrated Terminal from the main menu if it is not already open), type 'pip install selenium' at the prompt and hit the Enter key.

The Demo Page

We'll be testing our script on a very simple web page. It has one paragraph element whose text will be updated via JavaScript. You can either host the page on a server or, to keep things really simple, just save it locally to your project folder!
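A minimal stand-in for such a page, written from Python so it lands in the project folder, might look like this; the file name (index.html), the element ID, and the text are assumptions chosen only to match the selector and expected output used later in the script:

    # Hypothetical stand-in for the demo page (the article's original HTML is
    # not reproduced). It writes a page with one paragraph whose text is
    # replaced by JavaScript after the page loads; the 'dynamic-text' ID
    # matches the selector used by the scraping script further down.
    demo_html = """<!DOCTYPE html>
    <html>
      <head><title>Demo Page</title></head>
      <body>
        <p id="dynamic-text">Static placeholder text</p>
        <script>
          document.getElementById('dynamic-text').innerHTML =
            'JavaScript rendered content';
        </script>
      </body>
    </html>
    """

    with open('index.html', 'w') as f:
        f.write(demo_html)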

The Script

Our script will test the above page by loading it into the headless Chrome browser, fetching the '#dynamic-text' element, and printing its innerHTML to the console. If it says 'JavaScript rendered content', then we've got the JS-rendered text. Otherwise, it might be time to revisit this whole solution!

  1. Create a new file named 'page_scraping_demo.py' in your project root. Visual Studio Code will immediately recognize it as a Python script.
  2. Add the following code to the file and save your changes.
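A minimal sketch of such a script might look like this; it assumes the Selenium 3-era API, the chromedriver executable in the project root, and the demo page saved locally as index.html:

    # A minimal sketch of the scraping script described above. Assumes the
    # Selenium 3-era API, chromedriver in the project root, and the demo page
    # saved locally as index.html; adjust paths, the URL, and the element ID
    # to match your own setup.
    import os
    import time

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    options = Options()
    options.add_argument('--headless')  # run Chrome without opening a window

    driver = webdriver.Chrome(executable_path='./chromedriver', options=options)
    try:
        # Load the demo page from the local file system.
        driver.get('file://' + os.path.abspath('index.html'))
        time.sleep(1)  # give the page's JavaScript a moment to update the DOM

        element = driver.find_element(By.ID, 'dynamic-text')
        print(element.get_attribute('innerHTML'))
    finally:
        driver.quit()

If the dynamic update worked, running it should print 'JavaScript rendered content'.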

Running the Script

It's time to put our code to work.

Right-click anywhere in the script editor and select 'Run Python File in Terminal' from the popup menu. Unless Chrome is running in headless mode, a browser window will open and load the page.

The script's output (the innerHTML of the '#dynamic-text' element) will be displayed in the terminal.

Selenium Registry Issue in Windows

There is a known Selenium issue on Windows that causes an error to appear in the terminal when the driver launches Chrome.

To fix it, you'll have to go into regedit.exe and add the required keys, following the instructions provided by Jari Mäkeläinen (aka jarmake); a scripted version of the same fix is sketched after the list:

  1. Open the registry with regedit (just click on Windows start menu and start typing regedit, it should come up)
  2. From the registry explorer, expand HKEY_LOCAL_MACHINE, and from there expand SOFTWARE
  3. Expand Policies. (I was missing everything from this point.)
  4. Select Policies by left-clicking on it, then right-click and, from the context menu, select New > Key and name it Google.
  5. Once that is created, select it, right-click, and again select New > Key, naming this one Chrome.
  6. Select that key, right-click, and choose New > String Value. Name it 'MachineLevelUserCloudPolicyEnrollmentToken' and leave the value empty. (I set mine to '2'.)
    If you already have Google and Chrome under Policies, adding just the string value from step 6 should be enough.
    And just to clarify, this is under SOFTWARE/Policies, not directly under SOFTWARE.
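If you prefer to script the fix rather than click through regedit, a rough equivalent using Python's winreg module (run from an administrator prompt) would be:

    # Hypothetical scripted version of the manual regedit steps above.
    # Run from an elevated (administrator) Python prompt on Windows.
    import winreg

    # Create (or open) HKLM\SOFTWARE\Policies\Google\Chrome.
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Policies\Google\Chrome",
    )

    # Add the string value described in step 6, leaving it empty.
    winreg.SetValueEx(
        key, "MachineLevelUserCloudPolicyEnrollmentToken", 0, winreg.REG_SZ, ""
    )
    winreg.CloseKey(key)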

You can download the files that we worked on today from GitHub.

Going Forward…

Now that we've gotten our feet wet with Python, Selenium, and the headless Chrome browser, we'll tackle a more complex example next time that illustrates how to gather data from a dynamically generated page.



Rob Gravelle resides in Ottawa, Canada. His design company has built web applications for numerous businesses and government agencies. Email him.

Rob's alter-ego, 'Blackjacques', is an accomplished guitar player, who has released several CDs and cover songs. His band, Ivory Knight, was rated as one of Canada's top hard rock and metal groups by Brave Words magazine (issue #92).


NextStep 2019 was an exciting event that drew professionals from multiple countries and several sectors. One of our most popular technical sessions was on how to scrape website data. Presented by Miguel Antunes, an OutSystems MVP and Tech Lead at one of our partners, Do iT Lean, this session is available on-demand. But if you prefer to just quickly read through the highlights, keep reading; we’ve got you covered!

As developers, we all love APIs; they make our lives that much easier. However, there are times when APIs aren’t available, making it difficult for developers to access the data they need. Thankfully, there are still ways for us to access the data required to build great solutions.

What Is Web Scraping?

Web scraping is the act of pulling data directly from a website by parsing the HTML from the web page itself. It refers to retrieving or “scraping” data from a website. Instead of going through the difficult process of physically extracting data, web scraping employs cutting-edge automation to retrieve countless data points from any number of websites.

If a browser can render a page, and we can parse the HTML in a structured way, it’s safe to say we can perform web scraping to access all the data.

Benefits of Web Scraping and When to Use It

You don’t have to look far to come up with many benefits of web scraping.

  • No rate-limits: Unlike with APIs, there aren’t any rate limits to web scraping. With APIs, you need to register an account to receive an API key, limiting the amount of data you’re able to collect based on the limitations of the package you buy.
  • Anonymous access: Since there’s no API key, your information can’t be tracked. Only your IP address and cookies can be tracked, but that can easily be fixed through spoofing, allowing you to remain perfectly anonymous while accessing the data you need.
  • The data is already available: When you visit a website, the data is public and available. There are some legal concerns regarding this, but most of the time, you just need to understand the terms and conditions of the website you’re scraping, and then you can use the data from the site.

How to Web Scrape with OutSystems: Tutorial

Regardless of the language you use, there’s an excellent scraping library that’s perfectly suited to your project:


  • Python: BeautifulSoup or Scrapy
  • Ruby: Upton, Wombat or Nokogiri
  • Node: Scraperjs or X-ray
  • Go: Scrape
  • Java: Jaunt

OutSystems is no exception. Its Text and HTML Processing component is designed to interpret the text from the HTML file and convert it to an HTML Document (similar to a JSON object). This makes it possible to access all the nodes.

It also extracts information from plain text data with regular expressions, or from HTML with CSS selectors. You’ll be able to manipulate HTML documents with ease while sanitizing user input against HTML injection.

But what does web scraping look like in real life? Let’s take a look at scraping an actual website.

We start with a simple plan:

  • Pinpoint your target: a simple HTML website;
  • Design your scraping scheme;
  • Run and let the magic happen.

Scraping an Example Website

Our example website is www.bank-code.net, a site that lists all the SWIFT codes from the banking industry. There’s a ton of data here, so let’s get scraping.


If you want to collect these SWIFT codes for an internal project, copying them manually would take hours. With scraping, extracting the data takes a fraction of that time.

  • Navigate to your OutSystems personal environment and start a new app (if you don't have one yet, sign up for the OutSystems free edition);
  • Choose “Reactive App”;
  • Fill in your app’s basic information, including its name and a description of the app to continue;
  • Click on “Create Module”;
  • Reference the library you’re going to use from the Forge component, which in this case is the “Text and HTML Processing” library;
  • Go to the website and copy the URL, for example: https://bank-code.net/country/PORTUGAL-%28PT%29/100. We’re going to use Portugal as a baseline for this tutorial;
  • In the OutSystems app, create a REST API integration with the website. It’s basically just a GET request; use the copied URL as the endpoint;
  • You may have noticed that the pagination offset is already present in the URL: it’s the “/100” part. Change that to be a REST input parameter;
  • Out of our set of actions, we’ll use the ones designed to work with HTML, which in this case are Attributes or Elements. We can send the website’s HTML text to these actions, and they return our HTML document, the one mentioned before that looks like a JSON object where you can access all the nodes of the HTML.

Now we can create our action to scrape the website. Let’s call it “Scrape”, for example.

  • Use the endpoint previously created, which will gather the HTML. We’ll parse this HTML text into our document;
  • Going back to the website in Chrome, right-click on the part of the page you’d like to scrape. Click “Inspect” and, in the panel that opens, identify the table you’d like to scrape;
  • Since the table has its own ID, it will be unique across the HTML text, making it easy to identify in the text;
  • Since we now have the table, we really want to get all the rows in it. You can easily identify the selector for a row by expanding the HTML until you see the rows, right-clicking one of them, and choosing Copy > Copy Selector; this gives you “#tableID > tbody > tr:nth-child(1)” for the first row. Since we want all of them, we’ll use “#tableID > tbody > tr”;
  • You now have all the table-row elements. It’s time to iterate over the rows and select all the columns;
  • Finally, select each column’s text using the HTML document, the selector from the last action, and our column selector: “> td:nth-child(2)” is the selector for the second column, which contains the Bank Name. For the other columns, you just need to change the “child(n)” index. (A rough Python equivalent of this request-and-selector logic is sketched after this list.)
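For readers who aren’t following along in OutSystems, a rough Python equivalent of the same request-and-selector logic, using the requests and BeautifulSoup libraries, might look like this; the “#tableID” placeholder stands in for the real table ID you copied from the inspector:

    # Rough Python equivalent of the OutSystems flow above, using requests and
    # BeautifulSoup (not part of the original tutorial). '#tableID' is a
    # placeholder; use the real id of the table you found with Inspect.
    import requests
    from bs4 import BeautifulSoup

    offset = 100  # the pagination offset that was turned into an input parameter
    url = f"https://bank-code.net/country/PORTUGAL-%28PT%29/{offset}"

    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    # "#tableID > tbody > tr" selects every row of the table.
    for row in soup.select("#tableID > tbody > tr"):
        cell = row.select_one("td:nth-child(2)")  # second column: Bank Name
        if cell:
            print(cell.get_text(strip=True))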

Once you have scraped all the information, check whether the code already exists in your database. If it does, just update the data; if it doesn’t, create the record. This should give you all the records for the first page of the website when you hit 1-Click Publish.

The process above is basically our tool for parsing the data from the first page. We identify the site, identify the content we want, and identify how to get the data. The action iterates over all the rows of the table, parses the text from the columns, and stores it in our database.

For the full code used in this example, you can go to the OutSystems Forge and download it from there.

Web Scraping Enterprise Scale: Real-Life Scenario - Frankort & Koning

So, you may think that this was a nice and simple example of scraping a website, but how can you apply this at the enterprise level? To illustrate this tool’s effectiveness at an enterprise-level, we’ll use a case study of Frankort & Koning, a company we did this for.

Frankort & Koning is a Netherlands-based fresh fruit and vegetable company. They buy products from producers and sell them to the market. Because they trade in fresh produce, there are many regulations governing their industry, and Frankort & Koning needs to check each product that they buy to resell.

Imagine how taxing it would be to check each product coming into their warehouse to make sure that all the producers and their products are certified by the relevant industry watchdog. This needs to be done multiple times per day per product.

GlobalGap has a very basic database, which they use to give products a thirteen-digit GGN (GlobalGap Number). This number identifies the producer, allowing them to track all the products and determine if they're really fresh. This helps Frankort & Koning certify that the products are suitable to be sold to their customers. Since GlobalGap doesn't have any API to assist with this, this is where the scraping part comes in.


To work with the database as it is now, you need to enter the GGN number into the website manually. Once the information loads, there will be an expandable table at the bottom of the page. Clicking on the relevant column will provide you with the producer’s information and whether they’re certified to sell their products. Imagine doing this manually for each product that enters the Frankort & Koning warehouse. It would be totally impractical.

How Did We Perform Web Scraping for Frankort & Koning?


We identified the need for some automation here, and Selenium was a great tool for the job: it automates user interactions on a website. We created an OutSystems extension with Selenium and the Chrome driver.

This allowed Selenium to run Chrome instances on the server. We also needed to give Selenium some instructions on how to do the human interaction. After we took care of the human interaction aspect, we needed to parse the HTML to bring the data to our side.

The instructions Selenium needed to automate the human interaction included identifying our base URL and the 'Accept All Cookies' button, since this button popped up when opening the website; we needed to identify it so that we could program a click on it.

We also needed to provide instructions on how to interact with the collapse icon on the results table and with the input field where the GGN number would be entered. All of this ran on an OutSystems timer, with Chrome in headless mode.

We told Selenium to go to our target website and find the cookie button and input elements. We then sent the GGN number as keystrokes, just as a user would type it, and waited a moment for the page to render. After this, we iterated over the results and output the HTML back to the OutSystems app.
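The real implementation was an OutSystems extension, but the interaction sequence it performs looks roughly like the following Selenium sketch in Python; the URL, selectors, and GGN value here are purely illustrative:

    # Illustrative Python sketch of the interaction sequence described above.
    # The real implementation was an OutSystems extension; the URL, selectors,
    # and GGN number below are placeholders, not the actual ones used.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    options = Options()
    options.add_argument("--headless")  # the timer-driven job ran Chrome headless

    driver = webdriver.Chrome(options=options)  # assumes chromedriver is on PATH
    wait = WebDriverWait(driver, 10)

    try:
        driver.get("https://database.globalgap.org/")  # placeholder base URL

        # Dismiss the cookie banner that pops up when the site opens.
        wait.until(
            EC.element_to_be_clickable((By.ID, "accept-all-cookies"))  # placeholder id
        ).click()

        # Type the GGN number into the search input, as a user would.
        search_input = wait.until(
            EC.presence_of_element_located((By.ID, "ggn-input"))  # placeholder id
        )
        search_input.send_keys("4049929999999")  # placeholder GGN

        # Expand the results table and wait for it to render.
        wait.until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, ".collapse-icon"))
        ).click()
        results = wait.until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "table.results"))
        )

        # Hand the rendered HTML back for parsing on the OutSystems side.
        print(results.get_attribute("outerHTML"))
    finally:
        driver.quit()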

This is how we tie together automation and user interaction with web scraping.

These are the numbers we worked with for Frankort & Koning:

  • 700+ producers supplying products
  • 160+ products provided each day
  • 900+ certificates - the number of checks they needed to perform daily
  • It would’ve taken about 15 hours to process this information manually
  • Instead, it took only two hours to process this information automatically

This is just one example of how web scraping can contribute to bottom-line savings in an organization.


Still Got Questions?


Just drop me a line! And in the meantime, if you enjoyed my session, take a look at the NextStep 2020 conference, now available on-demand, with more than 50 sessions presented by thought leaders driving the next generation of innovation.

