Data Science Hacks is a collection of tips and tricks to help you become a better data scientist. The hacks are for everyone, from beginners to advanced practitioners, and cover Python, Jupyter Notebook, pandas, and more.
Data Science Hacks is created and maintained by Analytics Vidhya for the data science community.
It includes a variety of tips, tricks and hacks related to data science and machine learning.
These hacks are for all the data scientists out there. It doesn’t matter whether you are a beginner or an advanced professional; these hacks will definitely make you more efficient!
Feel free to contribute your own data science hacks here. Make sure that your hack follows the contribution guidelines.
How can you extract image data directly from Chrome in one click? Imagine that you want to build your own machine learning project but don't have enough data; collecting it becomes a daunting task. Worry not, you can use the ResourceSaver extension to download the data directly! Let's see how!
Pandas apply is one of the most commonly used functions for playing with data and creating new variables. It returns some value after passing each row/column of a dataframe through some function, which can be either built-in or user-defined.
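As a minimal sketch (the dataframe and column names here are made up for illustration), here is apply with both a built-in and a user-defined function:

```python
import pandas as pd

# Hypothetical marks data for illustration
df = pd.DataFrame({"maths": [40, 75, 90], "science": [60, 85, 55]})

# Built-in function: row-wise maximum (axis=1 passes each row to the function)
df["best_score"] = df.apply(max, axis=1)

# User-defined function: pass/fail flag based on the average of the two subjects
df["result"] = df.apply(
    lambda row: "pass" if row[["maths", "science"]].mean() >= 60 else "fail",
    axis=1,
)
```

With `axis=0` (the default) the function would receive each column instead of each row.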
It helps you select a subset of data based on the values in the dataframe.
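A short sketch of this kind of boolean indexing, using a made-up dataframe:

```python
import pandas as pd

# Hypothetical sales data for illustration
df = pd.DataFrame({"city": ["Delhi", "Mumbai", "Delhi", "Pune"],
                   "sales": [250, 400, 150, 300]})

# Combine conditions with & (and) or | (or); each condition needs parentheses
high_delhi = df[(df["city"] == "Delhi") & (df["sales"] > 200)]
```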
It is used to create MS Excel-style pivot tables. Levels in the pivot table will be stored in MultiIndex objects (hierarchical indexes) on the index and columns of the resulting DataFrame.
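A minimal pivot-table sketch (region/product/sales are invented example columns):

```python
import pandas as pd

# Hypothetical sales records for illustration
df = pd.DataFrame({"region": ["North", "North", "South", "South"],
                   "product": ["A", "B", "A", "B"],
                   "sales": [10, 20, 30, 40]})

# Rows come from `index`, columns from `columns`, cells from aggregating `values`
table = pd.pivot_table(df, values="sales", index="region",
                       columns="product", aggfunc="sum")
```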
The pd.crosstab() function is used to get an initial “feel” (view) of the data.
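For instance, a quick frequency table of two categorical variables (the data here is hypothetical):

```python
import pandas as pd

# Made-up categorical data for illustration
gender = pd.Series(["M", "F", "M", "F", "M"])
purchased = pd.Series(["yes", "no", "yes", "yes", "no"])

# Cross-tabulation: counts of each (gender, purchased) combination
freq = pd.crosstab(gender, purchased)
```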
It is used to apply vectorized string functions on a pandas dataframe column. Let’s say you want to split the names in a dataframe column into first name and last name. pandas.Series.str along with split( ) can be used to perform this task.
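A sketch of exactly that first/last name split (the names are made up):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ada Lovelace", "Alan Turing"]})

# expand=True returns the split parts as separate columns
df[["first_name", "last_name"]] = df["name"].str.split(" ", expand=True)
```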
Here is an interesting hack to extract email IDs present in long pieces of text with just 2 lines of Python, using regular expressions. Extracting information from social media posts and websites has become a common practice in data analytics, but sometimes we end up trying complicated methods to achieve things that can be solved easily by using the right technique.
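One possible two-line version (the pattern below is a simple sketch, not a full RFC-compliant email regex, and the text is invented):

```python
import re

text = "Reach us at support@example.com or sales@example.org for details."
# Matches word chars, dots, plus and hyphen around an @ sign
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
```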
One of the most important assumptions in linear and logistic regression is that our data must follow a normal distribution, but we all know that's usually not the case in real life. We often need to transform our data into a normal/Gaussian distribution.
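One common option is a log transform, sketched here on synthetic right-skewed data (other choices, such as Box-Cox or Yeo-Johnson power transforms, may fit better depending on the data):

```python
import numpy as np

np.random.seed(0)
# Hypothetical right-skewed data, e.g. incomes or transaction amounts
data = np.random.exponential(scale=1000, size=1000)

# log1p = log(1 + x); often pulls a right-skewed distribution closer to normal
transformed = np.log1p(data)
```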
Preprocessing is one of the key steps for improving the performance of a model. One of the main reasons for text preprocessing is to remove unwanted characters from text, like punctuation, emojis, and links, which are not required for our problem statement.
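A minimal cleaning function along these lines (the exact steps you keep depend on your problem statement; the sample text is invented):

```python
import re
import string

def clean_text(text):
    """Remove links, non-ASCII characters (e.g. emojis) and punctuation."""
    text = re.sub(r"https?://\S+", "", text)           # strip links
    text = text.encode("ascii", "ignore").decode()     # drop emojis / non-ASCII
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split()).lower()              # normalise whitespace/case

cleaned = clean_text("Check this out!! https://example.com 🎉")
```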
The Elbow Method is used for identifying the value of k in k-Nearest Neighbors. It's a plot of the error at different values of k, and we select the k value with the least error!
An important part of data analysis is preprocessing. Many times we need to scale our features; for example, in the case of k-NN we should always scale the data before building the model, or else it may give spurious results.
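A sketch of standardisation with scikit-learn (assuming scikit-learn is installed; the tiny array is made up, and the second feature deliberately has a much larger scale than the first):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical data: feature 2 is on a much larger scale than feature 1
X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

# Standardise each feature to zero mean and unit variance before k-NN
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
```

Without scaling, the distance computation in k-NN would be dominated by the larger-scale feature.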
Most of the data collected today holds date and time variables. There is a lot of information that you can extract from these features and utilize in your analysis!
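For example, pandas' `.dt` accessor pulls the components out of a datetime column (the timestamps below are invented):

```python
import pandas as pd

df = pd.DataFrame({"timestamp": pd.to_datetime(["2020-01-15 09:30:00",
                                                "2020-06-01 18:45:00"])})

# The .dt accessor exposes the date/time components as new features
df["year"] = df["timestamp"].dt.year
df["month"] = df["timestamp"].dt.month
df["day_of_week"] = df["timestamp"].dt.dayofweek   # Monday = 0
df["hour"] = df["timestamp"].dt.hour
```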
Deep learning models usually require a lot of data for training. But acquiring massive amounts of data comes with its own challenges. Instead of spending days manually collecting data, you can make use of Image Augmentation techniques. It is the process of generating new images. These new images are generated using the existing training images, and hence we don’t have to collect them manually.
Tokenization is the primary task while building the vocabulary. HuggingFace recently created a library for tokenization which provides implementations of today's most-used tokenizers, with a focus on performance and versatility. Key feature, ultra-fast: they can encode 1 GB of text in ~20 sec on a standard server's CPU.
You can extract categorical and numeric features into separate dataframes in just 1 line of code! This can be done using the select_dtypes function.
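A quick sketch with a made-up mixed-type dataframe:

```python
import pandas as pd

# Hypothetical dataframe mixing numeric and categorical columns
df = pd.DataFrame({"age": [25, 32], "salary": [50000.0, 64000.0],
                   "city": ["Delhi", "Mumbai"]})

numeric_df = df.select_dtypes(include="number")        # int and float columns
categorical_df = df.select_dtypes(include="object")    # string columns
```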
Do you want to perform quick data analysis on your dataframe? You can use pandas profiling to generate a profile report of your dataset in just 1 line of code!
Convert a wide-form dataframe into a long-form dataframe in just 1 line of code! In pd.melt(), one or more columns are used as identifiers. To "unmelt" the data, use the pivot() function.
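A round-trip sketch of melt and pivot (the marks data is invented):

```python
import pandas as pd

wide = pd.DataFrame({"name": ["A", "B"],
                     "maths": [90, 80], "science": [85, 70]})

# id_vars are the identifier columns; the remaining columns are unpivoted
long = pd.melt(wide, id_vars="name", var_name="subject", value_name="marks")

# "Unmelt" back to wide form
wide_again = long.pivot(index="name", columns="subject", values="marks")
```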
Do you know how you can get the history of all the commands run inside your Jupyter notebook? Use %history, Jupyter's built-in magic function! Note: even if you have cut cells in your notebook, %history will print those commands as well!
Create a heatmap on a pandas dataframe using seaborn! It helps you understand the complete range of values at a glance.
Scikit-learn has released its stable 0.22.1 version with new features and bug fixes. One new function is plot_confusion_matrix, which generates an extremely intuitive and customisable confusion matrix for your classifier. Bonus tip: you can specify the format of the numbers appearing in the boxes using the values_format parameter ('n' for whole numbers, '.2f' for floats, etc.)
What will be the output if you run the following commands in a single cell of your Jupyter notebook? df.shape df.head() Of course, it'll be the first five rows of your dataframe. Can we get the output of both commands run in the same cell? You can do it using InteractiveShell.
Most of you have heard about the library tqdm, and you might be using it to track the progress of long-running for loops. Often we write complex functions with nested for loops; tqdm allows tracking those too. Here is how you can track nested loops using tqdm in Python.
jupyter-themes provides an easy way to change theme, fonts and much more in your jupyter notebook.
conda install -c conda-forge jupyterthemes
pip install jupyterthemes
jt -l
jt -t chesterish
Change the theme, cell width, cell height
jt -t chesterish -cellw 100% -lineh 170
What do you do when you need to change the data type of a column to DateTime? We can do this directly at the time of reading the data using the parse_dates argument.
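A sketch using an in-memory CSV (the file contents are made up; in practice you would pass a file path to read_csv):

```python
import io
import pandas as pd

# Hypothetical CSV data; a real call would use a filename instead of StringIO
csv_data = io.StringIO("order_id,order_date\n1,2020-01-05\n2,2020-02-10\n")

# parse_dates converts the listed columns to datetime64 while reading
df = pd.read_csv(csv_data, parse_dates=["order_date"])
```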
You can share your Jupyter notebook with non-programmers very easily, and the best way to do it is by using Jupyter nbviewer. Pro tip: you can use Binder to execute the code from nbviewer on your machine!
Do you know how to plot a decision tree in just 1 line of code? Sklearn provides a simple function plot_tree() to do this task. You can tweak the hyperparameters as per your requirements.
Do you know how you can invert a dictionary in Python? A dictionary is a changeable collection of key-value pairs, indexed by keys (and insertion-ordered since Python 3.7). It is widely used in day-to-day programming and machine learning tasks.
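One common one-liner uses a dict comprehension; note this only works cleanly when the values are unique and hashable (the mapping below is a made-up example):

```python
# Hypothetical label mapping for illustration
label_to_id = {"cat": 0, "dog": 1, "bird": 2}

# Swap keys and values; duplicate values would silently overwrite each other
id_to_label = {v: k for k, v in label_to_id.items()}
```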
Cufflinks binds plotly directly to pandas dataframes! You can therefore make interactive charts without any hassle or lengthy code.
This hack is about saving contents of a cell to a .py file using the magic command %%writefile and then running the file in another jupyter notebook using the magic command %run
Are you getting confused while printing some of your data structures? Worry not, it is very common. The pprint (pretty-print) module provides an easy way to print data structures in a visually pleasing way!
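A small sketch; `pformat` returns the same pretty string that `pprint` would print (the nested config dict is invented):

```python
from pprint import pformat

# A nested structure that prints messily on one line with plain print()
config = {"model": {"layers": [64, 32], "activation": "relu"},
          "data": {"train_rows": 8000, "test_rows": 2000}}

# width=40 forces wrapping so the nesting is visible
pretty = pformat(config, width=40)
print(pretty)
```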
This code allows you to convert dates of any format into a specified format. Many times, we receive dates in various formats in our data. This hack will help you convert all those formats into a single specified format.
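A stdlib sketch with `strptime`/`strftime` (the dates and format strings are made-up examples; for messier inputs, `dateutil.parser.parse` can guess the format):

```python
from datetime import datetime

# Hypothetical dates arriving in two different formats
raw_dates = ["25/12/2020", "2020-12-26"]
formats = ["%d/%m/%Y", "%Y-%m-%d"]

# Parse each date with its own format, then emit a single target format
normalised = [datetime.strptime(d, f).strftime("%Y-%m-%d")
              for d, f in zip(raw_dates, formats)]
```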
One of the ways to perform feature selection is by using the feature_importances_ attribute of the base estimators. Using the SelectFromModel function, you can specify the estimator and the threshold for feature_importances_. This hack uses 'mean' as the threshold; you can tweak the threshold to get optimum results. To learn more, visit the documentation.
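A sketch with scikit-learn on synthetic data (assuming scikit-learn is installed; the dataset and estimator choice are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic data: 10 features, only 3 of which are informative
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Keep features whose importance is at least the mean importance
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=50, random_state=0),
    threshold="mean",
)
X_selected = selector.fit_transform(X, y)
```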
What could be the easiest way to convert a string to characters? Here is a simple hack which comes in handy while working with text data
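Probably the simplest version is the built-in `list()` constructor:

```python
word = "data"

# list() splits a string into its individual characters
chars = list(word)
```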
While building an image classification model using deep learning, all the images are required to be of the same size. However, as the data comes from different sources, images may have different shapes. To convert them to the same shape, we can use the resize function from OpenCV. This hack will help you convert images of any shape to a specified shape.
Does it take time to perform operations on your pandas dataframe? Pandarallel is a simple and efficient tool to parallelize Pandas operations on all your available CPUs!
A generator yields one item at a time and generates items only on demand, so generators are much more memory efficient. This hack compares generator expressions with list comprehensions.
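A quick comparison sketch; `sys.getsizeof` shows the list materialises everything up front while the generator stays tiny:

```python
import sys

# List comprehension materialises every element immediately
squares_list = [x * x for x in range(100000)]

# Generator expression yields items lazily, one at a time
squares_gen = (x * x for x in range(100000))

list_size = sys.getsizeof(squares_list)
gen_size = sys.getsizeof(squares_gen)
```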
Sometimes the data can be in the form of a nested list. For example, the data could be date-wise transaction records for a particular product. However, you might need the data in a single dimension only. This hack will help you flatten a list of lists into a single list.
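One common one-liner is a nested list comprehension (the nested list here is a made-up example):

```python
nested = [[1, 2], [3, 4], [5]]

# Read left to right: for each sublist, for each item in it, emit the item
flat = [item for sublist in nested for item in sublist]
```

`itertools.chain.from_iterable(nested)` is an equivalent lazy alternative.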
We often use print statements for debugging purposes. This hack will help you to turn off print statements in a particular section of the code so that it will make debugging easier.
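One stdlib way to do this is redirecting stdout to a throwaway buffer for just the noisy section (the `noisy_step` function is a hypothetical stand-in for your own code):

```python
import io
import contextlib

def noisy_step():
    print("debug: intermediate value = 42")
    return 42

# Prints inside this block go to the buffer instead of the console
with contextlib.redirect_stdout(io.StringIO()):
    result = noisy_step()
```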
This hack will help you to split a single PDF document into multiple pages.
This hack will help you combine multiple PDF documents into a single document. It is the inverse of Hack #42, Split PDF Document page-wise.
Sometimes you would need a functionality which is not directly provided by Keras's ImageDataGenerator. You can easily create a wrapper around it to suit your needs.
For example, if you are building a multi-input model (i.e. a neural network which takes input from multiple data sources and trains on them jointly) and you want the data generator to handle the data preparation on the fly, you can create a wrapper around the ImageDataGenerator class to produce the required output. This notebook explains a simple solution to this use case.