Processing Online Data
★★★ Expert Level
Online data is an extremely powerful and impactful resource. Do you want to learn how you can incorporate online data in your business? Take on our two-day Processing Online Data course and gain knowledge about the different types of data and the best ways to integrate them into your existing business processes.
Topics
Analytical Process
Data Exploration

Language
English
Duration
2 days
Time
9:00-17:00
Certification
Yes
Lunch
Included
Recommended Level
Expert
Upcoming courses
Currently there are no scheduled dates for this course. To be notified about upcoming dates, please choose "Reserve a seat".
*If you are a group of 5 or more, we are happy to arrange a training date that suits you best. In that case, please choose the "Reserve a seat" option.
About the course
The use of online data has revolutionized business: it allows for easier data collection, more representative data, and therefore more accurate insights. This two-day Processing Online Data course provides a thorough investigation of the topic, building on existing knowledge as well as introducing new concepts. The first day begins with an outline of the types of online data and the best practices and challenges in designing architecture and data pipeline processes. We then move to legal web scraping and crawling, covering the terminology, techniques, and tools, which you will apply in a practical case: scraping data from both server-side-rendered (SSR) and client-side-rendered (CSR) websites. You will then apply existing API skills to extract and import data from external APIs. The second day dives into accessing and assessing data from your digital channels in the context of end-to-end AI solutions. Finally, the training closes by applying what you have learned to integrate fast- and slow-moving data into one optimal model, challenging you to design and build a flexible, reliable data model and pipeline that generates added value.
Why this is for you
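The SSR/CSR distinction above can be sketched in a few lines. This is a minimal, hypothetical example (the page markup and class names are invented, not from the course material): an SSR page ships its content in the initial HTML, so a plain HTTP fetch plus an HTML parser is enough, whereas a CSR site builds the page with JavaScript and would need a headless browser such as Playwright or Selenium instead.

```python
# Sketch: extracting data from a server-side-rendered (SSR) page.
# The HTML below stands in for the body of an HTTP response; in practice
# you would fetch it first (e.g. with the requests library). CSR sites
# return a near-empty shell here, which is why they need a headless browser.
from html.parser import HTMLParser

SSR_HTML = """
<html><body>
  <h2 class="product">Widget A</h2>
  <h2 class="product">Widget B</h2>
</body></html>
"""

class ProductParser(HTMLParser):
    """Collects the text of every <h2 class="product"> element."""
    def __init__(self):
        super().__init__()
        self.products = []
        self._in_product = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "product") in attrs:
            self._in_product = True

    def handle_data(self, data):
        if self._in_product and data.strip():
            self.products.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_product = False

parser = ProductParser()
parser.feed(SSR_HTML)
print(parser.products)  # -> ['Widget A', 'Widget B']
```

In a real project you would typically reach for a dedicated parser such as BeautifulSoup rather than hand-rolling one; the stdlib version is used here only to keep the sketch self-contained.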
There is a wide variety of data sources at your disposal: offline data, external online data, and data on customer behavior from digital channels. Knowing how to process data and design the architecture for these sources is no mean feat. After this course, you will be able to use the most common types of online data skillfully, and to conduct thoughtful discussion on unlocking online data and making it available to models in end-to-end AI solutions as part of your daily automated processes.
For whom
This course is designed specifically for Data Scientists and Data Engineers. Many of the skills covered build on preexisting knowledge outlined in the web scraping pre-work accompanying this badge, along with the Data Models and Manipulation (4204) badge. Participants must have experience interacting with APIs and expert programming skills in SQL and Python to keep up with this course.
What you’ll learn
This training will dive into the concept of online data through the stages of extraction, processing, scraping and crawling, assessment, and integration. Specifically, this will include:
- The types of online data, ownership, and accessibility.
- The legality of scraping and crawling.
- A recap on APIs.
- How to set requirements for data collection.
- How to perform data quality checks.
- How to track customer behavior longitudinally.
- Determine the optimal architecture for processing online data – Explain the methods and challenges of extracting different types of online data and decide on the optimal architecture for your case.
- Apply web scraping and crawling – Scrape and crawl data from both SSR and CSR websites and explain the legality of scraping and crawling.
- Use external APIs for collecting data – Apply existing skills to gather external (website) data through APIs.
- Assess data from your digital channels – Identify the different types of data sources, set requirements, and assess their quality and usability in the context of end-to-end AI solutions.
- Integrate fast- and slow-moving data – Combine both in a single flexible and reliable data model to generate added value from this integration.
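The last outcome above can be illustrated with a toy sketch. All names and fields here are invented for illustration and are not from the course material: slow-moving reference data (say, a product catalog refreshed daily) is joined onto fast-moving event data (say, a clickstream) at load time, with a quality flag so unknown references stay visible instead of being silently dropped.

```python
# Hypothetical sketch: integrating fast- and slow-moving data in one model.
slow_products = {                      # slow-moving: changes rarely
    "p1": {"name": "Widget A", "price": 9.99},
    "p2": {"name": "Widget B", "price": 4.50},
}

fast_events = [                        # fast-moving: arrives continuously
    {"ts": "2024-01-01T10:00:00", "product_id": "p1", "action": "view"},
    {"ts": "2024-01-01T10:00:05", "product_id": "p2", "action": "buy"},
]

def enrich(events, products):
    """Join each event with its product attributes; flag unknown ids
    (a simple data-quality check) rather than dropping them."""
    out = []
    for ev in events:
        ref = products.get(ev["product_id"])
        out.append({
            **ev,
            "product_name": ref["name"] if ref else None,
            "price": ref["price"] if ref else None,
            "quality_ok": ref is not None,
        })
    return out

enriched = enrich(fast_events, slow_products)
print(enriched[1]["product_name"], enriched[1]["quality_ok"])  # Widget B True
```

Keeping the slow-moving data as a lookup and the fast-moving data as an append-only stream is one common way to keep such a model both flexible (the catalog can change without reprocessing events) and reliable (quality issues are flagged, not hidden).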