Python Big Data

Pandas is known to have scalability and efficiency issues once datasets grow large. You will learn how to use libraries such as Modin, Dask, Ray, and Vaex to overcome these limitations.


Don't forget to hit the :star: if you like this repo.

About Us

The information in this GitHub repository is part of the materials for the subject High Performance Data Processing (SECP3133). This folder contains general big data information as well as big data case studies using Malaysian datasets. The case study was created by a Bachelor of Computer Science (Data Engineering) student at Universiti Teknologi Malaysia.

📚 Big data processing

Big data processing involves the systematic handling and analysis of vast and complex datasets that exceed the capabilities of traditional data processing methods. It encompasses the storage, retrieval, and manipulation of massive volumes of information to extract valuable insights. Key steps include data ingestion, where large datasets are collected from various sources, and preprocessing, involving cleaning and transformation to ensure data quality. Advanced analytics, machine learning, and data mining techniques are then applied to uncover patterns, trends, and correlations within the data. Big data processing is integral to informed decision-making, enabling organizations to derive meaningful conclusions from their data, optimize operations, and gain a competitive edge in today's data-driven landscape.
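As a concrete illustration of that ingest → clean → analyze flow, here is a minimal Pandas sketch; the file name `sales.csv` and the `region`/`amount` columns are hypothetical placeholders, not part of this repository's datasets.

```python
import pandas as pd

# Ingest: load a (hypothetical) sales dataset from a CSV file
df = pd.read_csv("sales.csv")

# Preprocess: drop incomplete rows and normalize a text column
df = df.dropna(subset=["region", "amount"])
df["region"] = df["region"].str.strip().str.lower()

# Analyze: aggregate to surface patterns, e.g. revenue per region
summary = df.groupby("region")["amount"].agg(["count", "sum", "mean"])
print(summary.sort_values("sum", ascending=False))
```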

Notes

1. Big Data: Pandas

Big Data processing with Pandas, a powerful Python library for data manipulation and analysis, involves strategies for handling large datasets efficiently. Scaling to sizable datasets relies on techniques such as processing data in smaller chunks via the `chunksize` parameter of Pandas' `read_csv` function, which reads large files in manageable portions and prevents memory overload. To further reduce memory usage, leverage Pandas' support for more memory-efficient data types wherever possible. In addition, the `skiprows` parameter and filtering columns during import can significantly improve performance. By mastering these strategies, you can manage and analyze vast datasets in Python with Pandas, balancing computational efficiency and memory use in the face of Big Data challenges MORE 💡.
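A minimal sketch combining these techniques is shown below; `transactions.csv` and its columns are hypothetical placeholders, and the dtypes are assumptions about the data.

```python
import pandas as pd

# Read a large CSV in manageable chunks instead of all at once
chunks = pd.read_csv(
    "transactions.csv",
    usecols=["user_id", "category", "amount"],  # load only needed columns
    dtype={"user_id": "int32",                  # smaller ints save memory
           "category": "category",              # categorical for repeated strings
           "amount": "float32"},
    chunksize=100_000,                          # rows per chunk
)

# Aggregate each chunk, then combine the partial results
total = sum(chunk["amount"].sum() for chunk in chunks)
print(f"Total amount: {total:,.2f}")
```

Because each chunk is processed and discarded, peak memory stays bounded by the chunk size rather than the full file size.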

2. Big Data: Alternatives to Pandas for Processing Large Datasets

This topic delves into the challenges encountered when using Pandas, a popular Python library for data analysis, on large datasets. Recognizing these limitations, the article explores alternative libraries designed specifically for efficient processing of extensive data, examining Dask, Modin, Polars, Vaex, and others, and showcasing their distinctive features and advantages. From parallel and distributed computing to out-of-core processing and GPU acceleration, it explains how these alternatives address the scalability and performance issues of big datasets, offering readers a comprehensive guide to large-scale data processing beyond Pandas MORE 💡. A small sketch of one such alternative follows.
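As one illustration, the sketch below uses Dask's DataFrame API to process CSV files that may not fit in memory; the glob path and column names are hypothetical placeholders (requires `pip install "dask[dataframe]"`).

```python
import dask.dataframe as dd

# Lazily read a directory of CSV files; nothing is loaded yet
ddf = dd.read_csv("data/transactions-*.csv")

# Build the computation graph: filter, then group and aggregate
result = (
    ddf[ddf["amount"] > 0]
    .groupby("category")["amount"]
    .mean()
)

# .compute() executes the graph, processing partitions in parallel
print(result.compute())
```

Modin, by contrast, aims to be a drop-in replacement: changing `import pandas as pd` to `import modin.pandas as pd` is often the only code change needed.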

3. Comparison between libraries

4. Big Data: Case study

Lab

Comparison between libraries

Contribution 🛠️

Please create an Issue for any improvements, suggestions or errors in the content.

You can also contact me on LinkedIn for any other queries or feedback.
