In this course, learners explore how to apply machine learning preprocessing techniques, such as standardizing and normalizing continuous data and label encoding the target, in order to get the best out of machine learning algorithms. They also examine dimensionality reduction using Principal Component Analysis (PCA). The 6-video course begins with using the Pandas library to load a CSV dataset into a dataframe and scaling continuous features with a standard scaler. You will then learn how to build and evaluate a support vector classifier in scikit-learn; use Pandas and Seaborn to generate a heatmap; and spot correlations between features in a dataset. Discover how to apply PCA to reduce the number of dimensions in your input data and obtain the explained variance of each principal component. In the course's final tutorial, you will apply normalization and PCA to datasets and build a classification model with the principal components of scaled data. The concluding exercise involves processing data for classification.
Building ML Training Sets: Preprocessing Datasets for Classification
Course Overview
use the Pandas library to load a CSV dataset into a dataframe and scale the continuous features using a standard scaler
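This objective can be sketched as follows. The CSV content below is a hypothetical stand-in for the course's dataset (the column names `age`, `income`, and `label` are invented for illustration):

```python
from io import StringIO

import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical CSV content standing in for the course's dataset.
csv_data = StringIO(
    "age,income,label\n"
    "25,40000,0\n"
    "32,60000,1\n"
    "47,80000,1\n"
    "51,52000,0\n"
)

df = pd.read_csv(csv_data)

# Scale only the continuous feature columns; leave the target untouched.
continuous = ["age", "income"]
scaler = StandardScaler()
df[continuous] = scaler.fit_transform(df[continuous])

# Each scaled column now has (approximately) zero mean and unit variance.
print(df[continuous].mean().round(6).tolist())
```

In practice you would pass a file path to `pd.read_csv` instead of an in-memory buffer.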
build and evaluate a support vector classifier in scikit-learn, use Pandas and Seaborn to generate a heatmap, and spot the correlations between features in a dataset
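A minimal sketch of this workflow, using scikit-learn's bundled iris data as a stand-in for the course's dataset (the kernel choice and split ratio are illustrative assumptions):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed

import seaborn as sns
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Iris as a stand-in dataset with a DataFrame of features.
iris = load_iris(as_frame=True)
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Build and evaluate a support vector classifier.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.3f}")

# Correlation heatmap of the features: strongly correlated pairs show up
# as saturated cells, a hint that dimensionality reduction may help.
corr = X.corr()
ax = sns.heatmap(corr, annot=True, cmap="coolwarm")
ax.figure.savefig("correlations.png")
```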
apply the technique of Principal Component Analysis to reduce the number of dimensions in your input data and obtain the explained variance of each principal component
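A short sketch of PCA with explained variance, again using iris as a stand-in dataset (the choice of two components is an illustrative assumption):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Scale first: PCA components are driven by variance, so unscaled
# features with large ranges would dominate the result.
X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

# Reduce four input dimensions to two principal components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)

# explained_variance_ratio_ gives each component's share of the
# total variance, in decreasing order.
print(X_reduced.shape)
print(pca.explained_variance_ratio_)
```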
apply normalization and PCA on a dataset and build a classification model with the principal components of scaled data
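One way to sketch this pipeline, assuming min-max normalization and scikit-learn's wine dataset as stand-ins for the course's choices (the classifier and component count are illustrative):

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Wine dataset as a stand-in; MinMaxScaler rescales each feature to [0, 1].
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Normalize, reduce 13 features to 5 principal components, then classify
# on those components only.
model = make_pipeline(
    MinMaxScaler(),
    PCA(n_components=5),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"accuracy on principal components: {accuracy:.3f}")
```

Putting the scaler and PCA inside a pipeline ensures both are fit on the training split only, avoiding leakage into the test set.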
encode the target column of a dataset containing certain values
identify the features of normalization
enumerate reasons for using PCA
split data into training and test sets using scikit-learn
identify one method of viewing correlations in a dataset using Pandas and Seaborn
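The encoding and splitting objectives above can be sketched together. The dataframe below is hypothetical (the columns `feature_a`, `feature_b`, and `species` are invented for illustration):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Hypothetical dataset with a string-valued target column.
df = pd.DataFrame({
    "feature_a": [1.0, 2.5, 3.1, 0.4, 2.2, 1.8],
    "feature_b": [10, 20, 15, 30, 25, 12],
    "species":   ["cat", "dog", "dog", "cat", "dog", "cat"],
})

# LabelEncoder maps each distinct class string to an integer
# 0 .. n_classes-1, assigned in sorted order of the class names.
encoder = LabelEncoder()
df["target"] = encoder.fit_transform(df["species"])
print(dict(zip(encoder.classes_, range(len(encoder.classes_)))))

# Split features and the encoded target into training and test sets.
X = df[["feature_a", "feature_b"]]
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, random_state=42
)
print(len(X_train), len(X_test))
```

For viewing correlations, `df.corr()` rendered with `seaborn.heatmap` (as in the earlier objective) is one common method.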