
Train and Test Sets

Imbalanced data typically refers to classification problems where the classes are not represented equally. Most classification data sets do not have exactly equal numbers of instances in each class, but a small difference often does not matter. You therefore need to make sure that both classes of wine are present in the training data. What's more, the number of instances of the two wine types needs to be more or less equal, so that you do not favour one class over the other in your predictions.
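One way to guard against an imbalanced split is to count the instances per class and pass the labels to `train_test_split`'s `stratify` argument, which preserves the class proportions in both sets. A minimal sketch with made-up labels (the `y` below is illustrative, not the real wine data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative labels: 0 = red wine, 1 = white wine (toy data, not the real set)
y = np.array([0] * 30 + [1] * 70)
X = np.arange(100).reshape(100, 1)

# Check how many instances each class has
classes, counts = np.unique(y, return_counts=True)
print(dict(zip(classes, counts)))  # {0: 30, 1: 70}

# `stratify=y` keeps the class proportions the same in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y)

print(np.bincount(y_test))  # roughly the same 30/70 proportion as the full set
```

Without `stratify`, a random split on a small or skewed data set can leave one class badly under-represented in the test set.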


# Import `train_test_split` from `sklearn.model_selection`
from sklearn.model_selection import train_test_split

# Specify the data (assuming a `wines` DataFrame built earlier in the tutorial)
X = wines.iloc[:, 0:11]

# Specify the target labels and flatten the array
y = np.ravel(wines.type)

# Split the data up in train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

Standardization is a way to deal with values that lie so far apart. The scikit-learn package offers a great and quick way of getting your data standardized: import the `StandardScaler` module from `sklearn.preprocessing`.

# Import `StandardScaler` from `sklearn.preprocessing`
from sklearn.preprocessing import StandardScaler

# Define the scaler and fit it to the training data
scaler = StandardScaler().fit(X_train)

# Scale the train set
X_train = scaler.transform(X_train)

# Scale the test set
X_test = scaler.transform(X_test)
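After fitting, each feature in the scaled training set has (approximately) zero mean and unit variance; the test set is transformed with the statistics learned from the train set only. A quick self-contained check with toy data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrix with values on very different scales
X_train = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 500.0]])
X_test = np.array([[2.0, 200.0]])

scaler = StandardScaler().fit(X_train)    # learn mean and std from the train set only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # reuse the train-set statistics

print(X_train_scaled.mean(axis=0))  # ~[0. 0.]
print(X_train_scaled.std(axis=0))   # ~[1. 1.]
```

Fitting the scaler on the training data alone avoids leaking information from the test set into the model.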


Creating the Model

We start by using the Keras Sequential model: it's a linear stack of layers. You can easily create the model by passing a list of layer instances to the constructor, which you set up by running model = Sequential(). The model will be implemented as a multilayer perceptron network. The structure of a multilayer perceptron involves an input layer, one or more hidden layers and an output layer. We need to take into account that the first layer has to make the input shape clear: the model needs to know what input shape to expect, which is why the input_shape, input_dim, input_length, or batch_size arguments are used to pass the relevant information.

In this case, we are using a Dense layer, which is a fully connected layer. Dense layers implement the following operation: output = activation(dot(input, kernel) + bias). Note that without the activation function, the Dense layer would consist only of two linear operations: a dot product and an addition.
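The Dense operation can be written out directly in NumPy; a sketch with made-up weights, where `relu` stands in for the activation function:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: element-wise max(0, x)
    return np.maximum(0.0, x)

# Made-up input batch: 2 samples with 3 features each
inputs = np.array([[1.0, -2.0, 0.5],
                   [0.0,  1.0, -1.0]])

# Made-up kernel (3 inputs -> 2 units) and bias
kernel = np.array([[ 0.2, -0.5],
                   [ 0.1,  0.3],
                   [-0.4,  0.8]])
bias = np.array([0.1, -0.1])

# output = activation(dot(input, kernel) + bias)
output = relu(np.dot(inputs, kernel) + bias)
print(output.shape)  # (2, 2): one row per sample, one column per unit
```

Dropping `relu` from the last line leaves exactly the two linear operations mentioned above: a dot product and an addition.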

In the first layer, the activation argument takes the value relu. Next, the input_shape has been defined. This is the input to the operation shown above: the model takes as input arrays of shape (12,), or (*, 12). The first layer also has 12 as the first value for the units argument of Dense(), which is the dimensionality of the output space; these are in effect 12 hidden units. This means that the model will output arrays of shape (*, 12): this is