Computer Science, asked by afreenb9755, 1 year ago

Create a DataFrame named 'x' such that it includes all the feature columns and drops the target column.

Answers

Answered by riteish9797

Answer:

Feature selection is one of the first and most important steps when performing any machine learning task. A feature in a dataset simply means a column. When we get a dataset, not every column (feature) necessarily has an impact on the output variable. If we add these irrelevant features to the model, it will just make the model worse (garbage in, garbage out). This gives rise to the need for feature selection.

When it comes to implementing feature selection in pandas, numerical and categorical features have to be treated differently. Here we will first discuss numeric feature selection, so before applying the following methods we need to make sure that the DataFrame contains only numeric features. The methods below are also discussed for a regression problem, which means both the input and output variables are continuous in nature.
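For example, pandas can restrict a DataFrame to its numeric columns with select_dtypes. The tiny DataFrame below is a hypothetical illustration only, not part of the Boston data used later.

import pandas as pd

# Hypothetical mixed-type DataFrame, used only to illustrate the idea.
df_mixed = pd.DataFrame({"rooms": [6.5, 5.9], "age": [65.2, 78.9], "town": ["A", "B"]})

# Keep only the numeric columns before applying the selection methods below.
numeric_df = df_mixed.select_dtypes(include="number")
print(numeric_df.columns.tolist())   # ['rooms', 'age']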

Feature selection can be done in multiple ways, but broadly it falls into three categories:

1. Filter Method

2. Wrapper Method

3. Embedded Method

About the dataset:

We will be using the built-in Boston housing dataset, which can be loaded through sklearn. We will select features using the methods listed above for the regression problem of predicting the “MEDV” column. In the following code snippet, we import all the required libraries and load the dataset.

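A minimal sketch of that loading step is given below. Note that load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so this assumes an older scikit-learn version; it also builds the DataFrame 'x' that the question asks for by dropping the target column.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_boston  # removed in scikit-learn 1.2

# Load the Boston housing data into a DataFrame (requires scikit-learn < 1.2).
boston = load_boston()
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df["MEDV"] = boston.target

# The DataFrame asked for in the question: every feature column,
# with the target column "MEDV" dropped.
x = df.drop("MEDV", axis=1)
y = df["MEDV"]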

Explanation:

1. Filter Method:

As the name suggests, in this method you filter and take only the subset of the relevant features. The model is built after selecting the features. The filtering here is done using a correlation matrix, most commonly computed with the Pearson correlation.

Here we will first plot the Pearson correlation heatmap and see the correlation of the independent variables with the output variable MEDV. We will only select features whose absolute correlation with the output variable is above 0.5, as sketched after the list below.

The correlation coefficient takes values between -1 and 1:

 — A value closer to 0 implies a weaker correlation (exactly 0 implies no correlation)

 — A value closer to 1 implies a stronger positive correlation

 — A value closer to -1 implies a stronger negative correlation

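A sketch of this filter step, reusing the imports and the DataFrame df built in the earlier snippet:

# Plot the Pearson correlation heatmap for all columns.
plt.figure(figsize=(12, 10))
cor = df.corr()
sns.heatmap(cor, annot=True, cmap=plt.cm.Reds)
plt.show()

# Absolute correlation of every column with the target MEDV.
cor_target = abs(cor["MEDV"])

# Keep features whose absolute correlation is above 0.5
# (MEDV itself is removed since it is the target, not a feature).
relevant_features = cor_target[cor_target > 0.5].drop("MEDV")
print(relevant_features)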

Hope it will help you.
