How to Scrape Amazon and Other Large-Scale Ecommerce Websites


The e-commerce industry is increasingly data-driven, and extracting product data from Amazon and other major e-commerce websites is a crucial part of competitive intelligence. On Amazon alone there is a huge volume of data (more than 120 million products to date), and extracting it on a daily basis is a significant undertaking. At Ahmed Software Technologies, we work with many clients to help them access this data. But some companies, for various reasons, have to set up an internal team to extract the data themselves. This post is for people who need to understand how to build and grow such an internal team.

Assumptions

These assumptions will give you a rough idea of the scale, effort, and challenges involved:

- You are looking to get product information from the top 20 e-commerce websites, including Amazon.
- You need data for 20-25 subcategories of the electronics category of each website. The total number of categories and subcategories is around 450.
- The refresh rate differs between subcategories. Of a website's 20 subcategories, ten require daily updates, five require data every two days, three require data every three days, and two require data every four days.
- Four of the websites have anti-scraping technologies in place.
- The volume of data varies from 3 million to 7 million records per day, depending on the day of the week.
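To get a feel for what those numbers mean operationally, here is a quick back-of-envelope calculation. The 5 million figure is an illustrative midpoint of the 3-7 million range, and the 200 KB average page size is an assumption, not a measured value.

```python
# Rough sizing: sustained request rate and raw-HTML storage for a daily crawl.
daily_pages = 5_000_000          # illustrative midpoint of the 3-7 million range
seconds_per_day = 24 * 60 * 60

sustained_rps = daily_pages / seconds_per_day
print(f"~{sustained_rps:.0f} requests per second, sustained")   # ~58

avg_page_kb = 200                # assumed average raw HTML size per product page
daily_storage_gb = daily_pages * avg_page_kb / 1_000_000
print(f"~{daily_storage_gb:.0f} GB of raw HTML per day")        # ~1000
```

Even before retries, blocked requests, and peak-hour bursts, that is a steady double-digit request rate around the clock.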

Understanding e-commerce data

We need to understand the data we extract. For demonstration purposes, let's choose Amazon. For each product we want:

- Product URL
- Breadcrumbs
- Product name
- Product description
- Price
- Delivery details
- Stock details (in stock or not)
- Image URL
- Average star rating
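Because every website represents this data differently, it helps to normalize each scraped page into one common record structure. Below is a minimal sketch of such a record in Python; the field names simply mirror the list above and are illustrative rather than any standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProductRecord:
    """One extracted product page, normalized to a common format (illustrative)."""
    product_url: str
    breadcrumbs: list[str] = field(default_factory=list)
    name: str = ""
    description: str = ""
    price: Optional[float] = None        # in the site's listed currency
    delivery: Optional[str] = None       # e.g. the delivery estimate text
    in_stock: Optional[bool] = None      # stock details
    image_url: Optional[str] = None
    average_star_rating: Optional[float] = None
```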


The frequency

The refresh rate is different for different subcategories. Ten of a website's 20 subcategories require daily updates, five require data every two days, three require data every three days, and two require data every four days. The frequency may change later, depending on the changing priorities of the sales team.

Understand the specific requirements

When we work on large data mining projects with our corporate clients, they always have special requirements. These exist to ensure compliance with internal guidelines or to improve the efficiency of an internal process. Common special requests include:

- Keeping a copy of the raw extracted HTML (the unparsed data) on a storage system such as Dropbox or Amazon S3.
- Integrating with a tool to monitor data mining progress. This can be as simple as a notification when data delivery finishes, or as involved as a full pipeline into BI tools.

The data extraction process

A web scraper is built around the structure of a website. Simply put, you submit a request to the site, the website returns an HTML page, and you parse the information out of that HTML. This is what happens in a typical low-volume data extraction use case: you write a web scraper using Python or a framework like Scrapy, run it from your terminal, and convert the output to a CSV file. Simple.
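To make that low-volume flow concrete, here is a minimal sketch using the requests and BeautifulSoup libraries. The URL and CSS selectors are placeholders; a real product page needs site-specific selectors, realistic request headers, and respect for the site's terms of service and robots.txt.

```python
import csv
import requests
from bs4 import BeautifulSoup

# Placeholder URL -- every site (and page template) needs its own selectors.
URLS = ["https://www.example.com/product/12345"]

def scrape_product(url: str) -> dict:
    # 1. Submit a request to the site.
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    response.raise_for_status()

    # 2. The website returns an HTML page.
    soup = BeautifulSoup(response.text, "html.parser")

    # 3. Parse the information out of the HTML (selectors are illustrative).
    name = soup.select_one("h1.product-title")
    price = soup.select_one("span.price")
    return {
        "url": url,
        "name": name.get_text(strip=True) if name else "",
        "price": price.get_text(strip=True) if price else "",
    }

if __name__ == "__main__":
    rows = [scrape_product(url) for url in URLS]
    with open("products.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["url", "name", "price"])
        writer.writeheader()
        writer.writerows(rows)
```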


The challenges of data mining

1. Writing and maintaining scrapers

You can use Python to write scrapers that extract data from e-commerce websites. In our case, we need to extract data from 20 subcategories of a website, and depending on structural variations, your scraper will need more than one parser to obtain the data. Amazon and other large e-commerce sites change their category and subcategory page templates frequently, so the person responsible for maintaining the scrapers must make constant adjustments to the scraper code. As the sales team adds new categories and websites, one or two people on your team will be kept busy creating scrapers and parsers.

Scrapers generally require adjustments every few weeks. A small change in the page structure affects the fields being scraped, which can leave you with incomplete data or cause the scraper to fail, depending on the scraper logic. You will also eventually end up building a scraper management system. Web scrapers work according to how each website is built, and every website represents data differently. Managing all of this requires a common language: a unified data format. This format will change over time too, so try to get it as close to right as possible the first time.

Detecting template changes early enough is critical to ensure that a scheduled data delivery is not missed. You should build a tool that detects changes in the page templates and alerts the scraper maintenance team. Ideally, these tools should run every 15 minutes. At Ahmed Software Technologies, we have built such an early-warning system for website changes using Python (a minimal sketch of the idea appears at the end of this post). Do you want us to blog about how to build a simple website template change detection system? Let us know in the comments.

2. Managing scrapers and big data

Managing many scrapers through a terminal is not a good idea; you have to find more productive ways to manage them. At Ahmed Software Technologies, we have built a GUI that acts as an interface to the underlying platform, so scrapers can be deployed and managed without relying on the terminal all the time. Managing large volumes of data is a challenge of its own, and you will need to build internal data storage infrastructure or use a cloud-based warehouse such as Snowflake.

3. Automatic scraper generation

Once you have built a lot of scrapers, the next step is to build your own scraper framework. You can identify common structural patterns across websites and use them to build scrapers faster. You should think about an automatic scraper generator once you have a considerable number of scrapers.

4. Anti-scraping technologies

As stated in the introduction, these websites use anti-scraping technologies to prevent or hinder data extraction, either through their own IP-based blocking or through a third-party service. Bypassing anti-scraping measures at scale is not easy: you have to buy a large pool of IP addresses and rotate them efficiently.
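Rotating requests across that pool of IPs is the core mechanic. The sketch below is a minimal illustration assuming you already have a list of proxy endpoints from a provider (the endpoints shown are placeholders); a production setup would add per-domain rate limits, smarter retry and backoff logic, and health checks on the pool.

```python
import itertools
import requests

# Placeholder proxy endpoints -- substitute the pool you buy from a provider.
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]
proxy_cycle = itertools.cycle(PROXY_POOL)

def fetch_with_rotation(url: str, attempts: int = 3) -> requests.Response:
    """Try the URL through successive proxies until one succeeds."""
    last_error = None
    for _ in range(attempts):
        proxy = next(proxy_cycle)
        try:
            response = requests.get(
                url,
                proxies={"http": proxy, "https": proxy},
                headers={"User-Agent": "Mozilla/5.0"},
                timeout=30,
            )
            response.raise_for_status()
            return response
        except requests.RequestException as exc:
            last_error = exc  # blocked or failed; rotate to the next proxy
    raise RuntimeError(f"All {attempts} attempts failed for {url}") from last_error
```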

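And here is the minimal sketch of the template change detection idea promised above. It fingerprints the structural skeleton of a page (tag names and classes, ignoring text) and compares it against a previously stored fingerprint; the choice of sample URLs, fingerprint storage, and alerting channel is left out and would be your own.

```python
import hashlib
import requests
from bs4 import BeautifulSoup

def structure_fingerprint(html: str) -> str:
    """Hash the tag skeleton of a page, ignoring text and attribute values."""
    soup = BeautifulSoup(html, "html.parser")
    skeleton = "/".join(
        f"{tag.name}.{'.'.join(sorted(tag.get('class', [])))}"
        for tag in soup.find_all(True)
    )
    return hashlib.sha256(skeleton.encode("utf-8")).hexdigest()

def check_for_template_change(url: str, previous_fingerprint: str) -> bool:
    """Return True if the page structure changed since the last check."""
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    response.raise_for_status()
    return structure_fingerprint(response.text) != previous_fingerprint

# Run this from a scheduler (e.g. every 15 minutes) against one sample page per
# template, store the fingerprint, and alert the scraper team on a mismatch.
```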
