Algorithms: The ML Workflow

 Machine learning (ML) has become an integral part of our daily lives, from recommending movies and products to detecting spam emails and enabling self-driving cars. But how do these ML algorithms work? At a high level, the development of an ML algorithm can be distilled into five main steps. This guide walks you through these steps in a way that is accessible to beginners. In future articles, we will examine different approaches in depth and spell out how each of the five steps plays out for specific applications.

Step 1: Gathering Data and Preparing the Features

 The foundation of any machine learning algorithm is data. The first step involves collecting a dataset that your algorithm will learn from. This dataset is divided into two parts: a training set and a testing set. The training set is used to teach the algorithm to recognize patterns, while the testing set evaluates how well the algorithm has learned.

 Features are individual measurable properties or characteristics of the phenomena you’re observing. For example, if your task is to predict house prices, features might include the number of bedrooms, the size of the house, and its location. Finalizing the set of features means deciding which attributes of the data you’ll use to train your model. This step is crucial because the right features can significantly improve the algorithm’s performance.

Figure 1: Steps in Data Gathering and Preparation
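
 To make this step concrete, here is a minimal sketch (in Python with NumPy) of assembling a feature matrix and holding out part of it as a test set. The feature columns, the prices, and the 80/20 split are illustrative choices, not requirements.

```python
import numpy as np

# Toy housing dataset: each row is a house, each column a feature
# (bedrooms, size in square feet, distance to the city center in km).
X = np.array([
    [3, 1500,  5.0],
    [2,  900,  2.5],
    [4, 2100,  8.0],
    [3, 1700,  3.0],
    [5, 2500, 12.0],
], dtype=float)
y = np.array([300_000, 220_000, 410_000, 350_000, 480_000], dtype=float)  # prices

# Shuffle the rows, then hold out 20% as a test set the model never trains on.
rng = np.random.default_rng(seed=0)
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
X_train, y_train = X[idx[:split]], y[idx[:split]]
X_test, y_test = X[idx[split:]], y[idx[split:]]
```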

Step 2: Choosing the Model Representation/Hypothesis Function

 Once you have your data ready, the next step is to select a model. This involves choosing a representation or hypothesis, which is essentially the form or structure your model will take. Think of it as choosing the shape of the puzzle pieces you’ll be working with. Some common model families include:

  • Linear Models: These models predict the output as a linear combination of input features. They work well for problems where the relationship between the features and the target output is approximately linear.
  • Polynomial Models: These are an extension of linear models that allow for a nonlinear relationship between the features and the target.
  • Decision Trees: These models use a tree-like graph of decisions and their possible consequences. They’re useful for classification and regression tasks.
  • Layered Neural Networks: The basis of deep learning, these models are composed of layers of nodes (neurons) that can learn complex patterns from large datasets. (A small illustration of the first two families in code appears after Figure 2 below.)
Figure 2: Different types of Model Representation
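
 As a rough illustration, here is what the hypothesis function looks like for the first two families above; the weights and coefficients are made up purely for the example.

```python
import numpy as np

def linear_hypothesis(x, w, b):
    """Linear model: a weighted sum of the input features plus a bias term."""
    return np.dot(w, x) + b

def polynomial_hypothesis(x, coeffs):
    """Degree-2 polynomial in a single feature: c0 + c1*x + c2*x^2."""
    return coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2

# Score one house with illustrative (untrained) parameters.
x = np.array([3.0, 1500.0, 5.0])           # bedrooms, size, distance
w = np.array([10_000.0, 150.0, -5_000.0])  # made-up weights
print(linear_hypothesis(x, w, b=50_000.0))
print(polynomial_hypothesis(2.0, coeffs=[1.0, 0.5, 0.25]))
```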

Step 3: Selecting the Loss or Objective Function

 The loss function, also known as the cost function or objective function, is what the algorithm aims to minimize during the training process. It measures how well the model’s predictions match up with the actual data. Different tasks and models might require different loss functions. Some common ones include:

  • Square Loss: Used in regression tasks to measure the difference between the predicted and actual values.
  • Perceptron Loss: Used in classification tasks, focusing on minimizing misclassifications.
  • Cross-Entropy: Also used in classification, particularly when predicting probabilities.
  • Negative Log-Likelihood: Used in probabilistic models; minimizing it is equivalent to maximizing how likely the observed data is under the model. (Minimal implementations of three of these losses appear below.)
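
 Here is a small sketch of how three of these losses can be written in NumPy; the exact forms used in libraries differ in details such as averaging and numerical stability.

```python
import numpy as np

def squared_loss(y_true, y_pred):
    """Mean squared difference between targets and predictions (regression)."""
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy: y_true is 0/1, p_pred is the predicted probability of class 1."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def perceptron_loss(y_true, scores):
    """Perceptron loss: y_true is -1/+1, scores are raw model outputs.
    Only misclassified points (negative margin) contribute."""
    return np.mean(np.maximum(0.0, -y_true * scores))
```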

Step 4: Choosing the Optimization Algorithm

 The optimization algorithm is the method used to minimize the loss function. It adjusts the model’s parameters to find the best possible solution. There are several optimization algorithms, each with its strengths and weaknesses:

  • Gradient Descent: A popular method that adjusts parameters in the direction that most reduces the loss.
  • Stochastic Gradient Descent (SGD): A variation of gradient descent that updates the parameters using one example (or a small mini-batch) at a time, which makes each update cheaper and often speeds up convergence on large datasets.
  • Coordinate Descent: Optimizes the loss function with respect to one parameter at a time.
  • Greedy Algorithms and Beam Search: These are used in specific contexts, like decision trees, to make locally optimal choices at each step.
  • Alternating Minimization: If a loss function involves more than one set of parameters, we optimize one set at a time while holding the others fixed.

 In future articles, we will go over optimization algorithms like gradient descent and its variants, which are used throughout machine learning, in more detail; a minimal gradient-descent sketch appears after Figure 3 below.

Figure 3: Model Training Workflow
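
 To show what the training loop in Figure 3 amounts to, here is a minimal sketch of batch gradient descent fitting a linear model under squared loss; the learning rate and iteration count are illustrative.

```python
import numpy as np

def gradient_descent(X, y, lr=0.01, n_iters=1000):
    """Fit a linear model y ≈ Xw by repeatedly stepping against the gradient
    of the mean squared error."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        residual = X @ w - y               # prediction error for every example
        grad = (2.0 / n) * X.T @ residual  # gradient of the squared loss w.r.t. w
        w -= lr * grad                     # move the parameters downhill
    return w

# Toy usage: recover the true weights from noiseless data.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
y = X @ np.array([2.0, -1.0])
print(gradient_descent(X, y, lr=0.05, n_iters=2000))   # close to [2, -1]
```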

Step 3 + Step 4 = Model Training

 The combination of Step 3 (choosing the objective function) and Step 4 (optimizing the objective function) is called model training. The result of the optimization process is the optimal set of parameters, known as the model parameters. Once the trained model is obtained, we can move on to the next step (Step 5): inference and evaluation.

Step 5: Inference and Evaluation

 After training the model, the next step is to obtain predicted labels on unseen test examples; in machine learning this process is called inference. Once we have the predicted labels on the test data, we want to evaluate the model's performance: how well did it do? Evaluation is always done on an unseen test dataset to get an honest estimate of generalization. It is easy to fit the training set almost perfectly and drive the training error close to zero, but the real goal of machine learning is to perform well on data the model has never seen.

 The process of evaluating the model performance involves choosing an evaluation metric that quantifies how well the model meets the objectives of your task. Common metrics include:

  • Accuracy: The proportion of correct predictions in classification tasks.
  • F1 Score: A measure that balances precision and recall, particularly useful in imbalanced datasets.
  • RMSE (Root Mean Square Error): Measures the average magnitude of the errors between predicted and actual values in regression tasks.
  • R2 (R-squared): Indicates the proportion of the variance in the dependent variable that is predictable from the independent variables in the regression.
Figure 4: Different types of Evaluation
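
 These metrics are simple to compute once you have predictions; a minimal sketch:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the labels (classification)."""
    return np.mean(y_true == y_pred)

def rmse(y_true, y_pred):
    """Root mean squared error between targets and predictions (regression)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Proportion of the variance in y_true that the predictions explain."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```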

Examples of the 5 Steps for Different ML Algorithms

 Below, we provide a high-level overview of how the five main steps of developing a machine learning algorithm apply to some common models: Linear Regression, Perceptrons, Support Vector Machines (SVMs), Decision Trees, Neural Networks, Clustering, and Reinforcement Learning. This overview will set the stage for more detailed discussions in future articles.

Linear Regression

 Linear Regression is a model that predicts a continuous outcome variable as a linear combination of one or more input features.

  • Data Preparation: Features might include numerical data points like house size, number of bedrooms, and age of the house for predicting house prices.
  • Model Representation: A linear model that predicts the target variable (e.g., house price) as a linear combination of the input features.
  • Loss Function: Square loss is typically used, measuring the squared differences between the predicted and actual values.
  • Optimization Algorithm: Gradient descent is commonly used to find the model parameters that minimize the loss function.
  • Evaluation Metric: RMSE (Root Mean Square Error) or R2 (R-squared) to assess the model’s accuracy in predicting house prices.
Figure 5: Linear Regression
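
 Putting the five steps together for linear regression, here is a toy end-to-end sketch. It uses the closed-form least-squares solution, which minimizes the same squared loss that gradient descent would; the data and coefficients are invented for the example.

```python
import numpy as np

# Synthetic data: price ≈ 150*size (in hundreds of sqft) + 20*bedrooms + 50 + noise.
rng = np.random.default_rng(seed=1)
X = rng.uniform([5.0, 1.0], [30.0, 5.0], size=(50, 2))   # size, bedrooms
y = X @ np.array([150.0, 20.0]) + 50.0 + rng.normal(0.0, 10.0, size=50)

# Append a column of ones so the bias is learned like any other weight.
X_b = np.hstack([X, np.ones((len(X), 1))])

# Closed-form least-squares fit (an alternative to gradient descent for this loss).
w, *_ = np.linalg.lstsq(X_b, y, rcond=None)

y_pred = X_b @ w
print("learned weights and bias:", np.round(w, 1))
print("RMSE on the training data:", round(float(np.sqrt(np.mean((y - y_pred) ** 2))), 2))
```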

Perceptrons

 The perceptron is the simplest form of a neural network, used for binary classification: it assigns a weight to each input feature and applies a threshold to decide the output.

  • Data Preparation: Features could be binary or continuous attributes that describe the entities being classified.
  • Model Representation: A simple form of a neural network with a single layer of weights, used for binary classification tasks.
  • Loss Function: Perceptron loss, which focuses on minimizing the number of misclassifications.
  • Optimization Algorithm: A variant of stochastic gradient descent specific to the perceptron algorithm, updating weights based on misclassified examples.
  • Evaluation Metric: Accuracy, to measure the proportion of correctly classified instances.
Figure 6: Perceptrons
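
 A minimal sketch of the perceptron learning rule on a tiny, linearly separable dataset; the data and the epoch limit are illustrative.

```python
import numpy as np

def train_perceptron(X, y, max_epochs=200):
    """Perceptron learning rule: labels y are -1/+1; the weights change
    only when an example is misclassified."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # wrong side of (or on) the boundary
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:                       # converged: every point classified correctly
            break
    return w, b

# Toy data: the label is +1 when x0 + x1 is large enough.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([-1, -1, -1, 1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))   # matches y once the algorithm has converged
```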

Support Vector Machines (SVMs)

 SVMs are a robust classification technique that finds the optimal hyperplane to separate different classes in the feature space with the maximum margin.

  • Data Preparation: Features are attributes that characterize the data points. For image classification, these might be pixel values or derived features.
  • Model Representation: SVMs find the hyperplane that best separates different classes in the feature space. The representation includes linear models for linearly separable data and kernel functions for non-linearly separable data.
  • Loss Function: Hinge loss, which aims to maximize the margin between the classes while penalizing misclassifications.
  • Optimization Algorithm: Quadratic programming solvers are used to find the support vectors that define the separating hyperplane.
  • Evaluation Metric: Accuracy or F1 score, especially useful when dealing with imbalanced classes.
Figure 7: Support Vector Machines (SVMs)
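
 For intuition, the soft-margin objective that SVM solvers minimize can be written down in a few lines (the regularization constant C here is illustrative); in practice, dedicated quadratic programming solvers handle the optimization and the kernel trick.

```python
import numpy as np

def svm_objective(w, b, X, y, C=1.0):
    """Soft-margin linear SVM objective: a regularizer on w plus the hinge loss.
    Labels y are -1/+1; points inside the margin or misclassified are penalized."""
    margins = y * (X @ w + b)
    return 0.5 * np.dot(w, w) + C * np.sum(np.maximum(0.0, 1.0 - margins))
```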

Decision Trees

 A decision tree is a model that uses a tree-like graph of decisions and their possible consequences to perform classification or regression tasks.

  • Data Preparation: Features can be categorical or numerical, describing the instances to be classified or regressed.
  • Model Representation: A tree structure where each node represents a feature decision point, leading to class labels or values at the leaves.
  • Loss Function: For classification, it could be entropy or Gini impurity; for regression, it’s often the mean squared error.
  • Optimization Algorithm: Greedy algorithms that recursively split the data to maximize information gain or minimize impurity at each node.
  • Evaluation Metric: Accuracy for classification trees; RMSE for regression trees.
Figure 8: Decision Trees
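
 The greedy splitting criterion can be sketched in a few lines. Here is an illustrative Gini-impurity calculation and the score a tree builder would use to compare candidate splits.

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity of a set of class labels: 0 means the node is pure."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_score(feature_values, labels, threshold):
    """Weighted impurity of the two children created by splitting at threshold;
    a greedy tree builder picks the (feature, threshold) pair with the lowest score."""
    left = labels[feature_values <= threshold]
    right = labels[feature_values > threshold]
    n = len(labels)
    return len(left) / n * gini_impurity(left) + len(right) / n * gini_impurity(right)

# Splitting on x <= 2.5 perfectly separates the two classes.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0, 0, 1, 1])
print(split_score(x, y, threshold=2.5))   # 0.0: a perfect split
```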

Neural Networks

 A neural network is a versatile and powerful modeling approach, loosely inspired by the human brain, that learns complex patterns through layers of interconnected nodes, or neurons.

  • Data Preparation: Features might include raw input data or features engineered from data, such as pixel values from images or word embeddings from text.
  • Model Representation: Layered networks of neurons with adjustable weights. Architectures vary widely, from simple feedforward networks to complex structures like convolutional neural networks (CNNs) for image tasks or recurrent neural networks (RNNs) for sequential data.
  • Loss Function: Cross-entropy loss for classification tasks; mean squared error for regression.
  • Optimization Algorithm: Stochastic gradient descent (SGD) or its variants like Adam and RMSprop, which adjust the network weights based on the gradient of the loss function.
  • Evaluation Metric: Accuracy for classification neural networks; RMSE for regression neural networks.
Figure 9: Neural Networks
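
 To make the forward and backward passes tangible, here is a tiny hand-written network with one hidden layer trained by full-batch gradient descent on made-up data; real networks rely on automatic differentiation and the optimizers listed above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Tiny feedforward network: 2 inputs -> 4 hidden ReLU units -> 1 output.
W1 = rng.normal(0.0, 0.5, size=(4, 2)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, size=(1, 4)); b2 = np.zeros(1)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])   # toy inputs
y = np.array([[1.0], [1.0], [0.0]])                  # toy regression targets
lr = 0.1

for _ in range(500):
    # Forward pass.
    h_pre = X @ W1.T + b1
    h = np.maximum(0.0, h_pre)                 # ReLU activation
    y_hat = h @ W2.T + b2
    # Backward pass for mean squared error (backpropagation by hand).
    d_out = 2.0 * (y_hat - y) / len(X)
    dW2, db2 = d_out.T @ h, d_out.sum(axis=0)
    d_h = (d_out @ W2) * (h_pre > 0)           # gradient flows only through active ReLUs
    dW1, db1 = d_h.T @ X, d_h.sum(axis=0)
    # Gradient step on every parameter.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_pred = np.maximum(0.0, X @ W1.T + b1) @ W2.T + b2
print(float(np.mean((final_pred - y) ** 2)))   # training loss, small if the net fit the toy data
```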

Clustering (Unsupervised Learning)

 Clustering algorithms seek to group a set of objects in such a way that objects in the same group (called a cluster) are more similar to each other than to those in other groups.

  • Data Preparation: Features include numerical or categorical data points that describe the characteristics of the entities to be grouped. No labels are provided.
  • Model Representation: The model could be a set of centroids in K-means clustering, representing the center of each cluster. Other representations include the similarity matrix in spectral clustering.
  • Loss Function: In K-means, the loss function is often the sum of squared distances from each point to its cluster centroid, aiming to minimize intra-cluster variance.
  • Optimization Algorithm: For K-means, an iterative refinement technique is used, where each iteration consists of assigning points to the nearest centroid and then updating centroids based on the assignments.
  • Evaluation Metric: Silhouette score, Davies–Bouldin index, or within-cluster sum of squares (WCSS) can be used to assess the quality of the clustering, even though the true labels are not known.
Figure 10: K-Means Clustering
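
 Here is a compact sketch of Lloyd's algorithm for K-means on two synthetic blobs; the number of clusters, the data, and the stopping rule are all illustrative.

```python
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: alternate between assigning points to their nearest
    centroid and moving each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                     # assignment step
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):         # stop when nothing moves
            break
        centroids = new_centroids                         # update step
    return centroids, labels

# Two well-separated blobs of points.
rng = np.random.default_rng(seed=3)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)), rng.normal(5.0, 0.3, size=(20, 2))])
centroids, labels = k_means(X, k=2)
print(np.round(centroids, 2))   # one centroid near (0, 0) and one near (5, 5)
```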

Reinforcement Learning (Simple RL Algorithm)

 Reinforcement learning algorithms learn to make decisions by taking actions in an environment to maximize some notion of cumulative reward.

  • Data Preparation: In RL, the data comes from the environment as the algorithm interacts with it. Features include the state of the environment and the rewards associated with each state-action pair. There are no predefined labels.
  • Model Representation: The model could be a Q-table in Q-learning, representing the expected rewards for each state-action pair. For more complex problems, neural networks (as in Deep Q-Networks) can approximate the Q-value function.
  • Loss Function: The difference between the predicted Q-values and the target Q-values (obtained using the Bellman equation) is minimized. This loss is often referred to as the temporal difference error in Q-learning.
  • Optimization Algorithm: In Q-learning, the algorithm updates the Q-values based on the reward received after taking an action in a given state, using a learning rate. In Deep Q-Networks, stochastic gradient descent or its variants can be used to minimize the loss.
  • Evaluation Metric: Cumulative reward is the primary measure of success in RL. However, specific metrics may vary depending on the task, such as the number of steps to complete a task or the win rate in a game.
Figure 11: Reinforcement Learning
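
 As a small worked example, here is tabular Q-learning on a toy four-state chain where the agent is rewarded for reaching the rightmost state; the environment, learning rate, discount, and exploration rate are all invented for illustration.

```python
import numpy as np

# Tabular Q-learning on a tiny chain: states 0..3, actions 0 (left) / 1 (right).
# Reaching state 3 gives reward +1 and ends the episode.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
rng = np.random.default_rng(seed=0)

def step(state, action):
    """Environment dynamics: action 1 moves right, action 0 moves left (clipped at 0)."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

for _ in range(500):                    # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Q-learning update: move Q[s, a] toward the bootstrapped target.
        target = r + gamma * (0.0 if done else Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # learned greedy policy: action 1 (right) in every non-terminal state
```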

 Each machine learning model comes with its unique considerations for data preparation, model representation, loss function, optimization algorithm, and evaluation metrics. These examples provide a glimpse into the diverse world of machine learning, setting the stage for deeper exploration in future articles. By understanding these foundational elements, you’ll be better equipped to dive into the specifics of each model and apply them effectively to solve real-world problems.

Conclusion

 Building a machine learning model is a structured process that involves making informed choices at each step, from preparing the data to selecting the right model, optimizing the loss function, and evaluating the results. By understanding these fundamental steps, even beginners can start to appreciate the complexities and the art behind developing machine learning algorithms. This high-level overview serves as a starting point for diving deeper into each step and gradually mastering the craft of machine learning.