Supervised Machine Learning



Supervised machine learning provides you with a powerful tool to classify and process data. With supervised learning you use labeled data, a data set that has already been classified, to train a learning algorithm. The algorithm then uses that labeled data set as the basis for predicting the classification of other, unlabeled data. This section covers two broad families of supervised learning techniques:

▪  Linear Regression, and


▪ Classification Techniques.


Linear Regression

Linear regression is a supervised learning technique typically used for prediction, forecasting, and finding relationships between quantitative variables. It is one of the earliest learning techniques and is still widely used.


For example, this technique can be applied to examine whether there is a relationship between a company’s advertising budget and its sales. You could also use it to determine whether there is a linear relationship between a particular radiation therapy and tumor sizes.
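As a rough sketch of the advertising example (Python with scikit-learn; the budget and sales figures below are invented for illustration), fitting a simple linear regression could look like this:

```python
# Minimal sketch: simple linear regression on made-up advertising data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly advertising budgets (in $1,000s) and resulting sales (units).
ad_budget = np.array([[10], [15], [20], [25], [30], [35]])
sales = np.array([120, 150, 185, 210, 250, 270])

model = LinearRegression().fit(ad_budget, sales)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("predicted sales for a $40k budget:", model.predict([[40]])[0])
```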


Classification Techniques


The classification techniques that will be discussed in this section are those focused on predicting a qualitative response by analyzing data and recognizing patterns.

For example, this type of technique is used to classify whether or not a credit card transaction is fraudulent. There are many different classification techniques, or classifiers; some of the widely used ones are listed below, and a short code sketch after the list shows them side by side:


▪ Logistic regression,


▪ Linear discriminant analysis,


▪ K-nearest neighbors,


▪ Trees,


▪ Neural Networks, and


▪ Support Vector Machines (SVM).
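A rough sketch, assuming scikit-learn and a synthetic dataset, that trains each of the classifiers listed above through the same fit/predict interface:

```python
# Train each listed classifier on the same synthetic data and compare accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Linear discriminant analysis": LinearDiscriminantAnalysis(),
    "K-nearest neighbors": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Neural network": MLPClassifier(max_iter=2000, random_state=0),
    "Support vector machine": SVC(),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)  # learn from labeled training data
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.2f}")
```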


What is Supervised Machine Learning?


In supervised learning, you train the machine using data that is well "labeled," meaning each example is already tagged with the correct answer. The process can be compared to learning that takes place in the presence of a supervisor or a teacher.


A supervised learning algorithm learns from labeled training data and helps you predict outcomes for unseen data.


Successfully building, scaling, and deploying accurate supervised machine learning models takes time and technical expertise from a team of highly skilled data scientists. Moreover, data scientists must rebuild models as the underlying data changes to make sure the insights they provide remain accurate.


How Supervised Learning Works


For example, suppose you want to train a machine to help you predict how long it will take you to drive home from your workplace. You start by creating a set of labeled data. This data includes:


Weather conditions


Time of the day


Holidays


All these details are your inputs. The output is the amount of time it took to drive back home on that specific day.


You instinctively know that if it's raining outside, then it will take you longer to drive home. But the machine needs data and statistics.


Let's now see how you can develop a supervised learning model for this example that helps the user determine the commute time. The first thing you need to create is a training set. This training set contains the total commute time and corresponding factors like weather, time of day, and so on. Based on this training set, your machine might see that there is a direct relationship between the amount of rain and the time it takes you to get home.


So, it ascertains that the more it rains, the longer you will be driving to get back to your home. It might also see the connection between the time you leave work and the time you'll be on the road.


The closer you leave to 6 p.m., the longer it takes you to get home. Your machine may find several such relationships in your labeled data.


This is the start of your data model. It begins to capture how rain impacts the way people drive. It also starts to see that more people travel during a particular time of day.
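A minimal sketch of what such a data model could look like in code, assuming scikit-learn and a small, made-up training set of rain amounts, departure times, and holiday flags:

```python
# Made-up training set for the commute-time example.
# Each row: [rain in mm, minutes after 4 p.m. when leaving, holiday flag] -> commute minutes.
import numpy as np
from sklearn.linear_model import LinearRegression

X_train = np.array([
    [0,    0, 0],   # dry, left at 4:00 p.m., regular day
    [0,  120, 0],   # dry, left at 6:00 p.m.
    [5,   60, 0],   # light rain, 5:00 p.m.
    [20, 120, 0],   # heavy rain, 6:00 p.m.
    [0,   60, 1],   # dry, 5:00 p.m., holiday (less traffic)
])
y_train = np.array([25, 45, 40, 70, 20])  # observed commute times in minutes

model = LinearRegression().fit(X_train, y_train)

# Predict today's drive: 10 mm of rain, leaving at 5:30 p.m., not a holiday.
print("estimated commute (minutes):", model.predict([[10, 90, 0]])[0])
```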


Types of Supervised Machine Learning Algorithms

Regression:

The regression technique predicts a single output value using training data.


Example: You can use regression to predict a house's price from training data. The input variables would be locality, size of the house, etc.


Strengths: Logistic regression (described below) gives outputs with a probabilistic interpretation, and the algorithm can be regularized to avoid overfitting.


Weaknesses: Logistic regression may underperform when there are multiple or non-linear decision boundaries. The method is not very flexible, so it does not capture more complex relationships.
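As a hedged sketch of the house-price example above, assuming scikit-learn and invented features and prices, a regularized linear model (here Ridge, i.e. L2-regularized linear regression) can be fitted on several input variables:

```python
# Regularized linear regression (Ridge) on an invented house-price dataset.
import numpy as np
from sklearn.linear_model import Ridge

# Columns: [size in square metres, number of rooms, distance to city centre in km]
X = np.array([
    [50,  2, 10],
    [80,  3,  8],
    [120, 4,  5],
    [200, 6,  2],
    [65,  2, 12],
])
prices = np.array([150_000, 220_000, 330_000, 540_000, 160_000])

model = Ridge(alpha=1.0).fit(X, prices)  # alpha controls regularization strength
print("predicted price for a 100 m², 3-room house 6 km out:",
      model.predict([[100, 3, 6]])[0])
```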


Logistic Regression:


The logistic regression method is used to estimate discrete values (typically binary values such as 0/1) from a given set of independent variables. It predicts the probability that an event will occur by fitting the data to a logit function, which is why it is also known as logit regression. Because it predicts a probability, its output value always lies between 0 and 1.
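A minimal sketch of the logit idea, assuming NumPy and scikit-learn: the model's raw score is passed through the sigmoid function so the output always lies between 0 and 1, and predict_proba returns the fitted probabilities:

```python
# The sigmoid squashes any real-valued score into the (0, 1) interval.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(-3), sigmoid(0), sigmoid(3))  # roughly 0.05, 0.5, 0.95

# Tiny made-up dataset: one feature, binary outcome.
X = np.array([[1], [2], [3], [4], [5], [6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[3.5]]))  # probabilities for class 0 and class 1
```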


A few other common types of regression algorithms include linear regression, polynomial regression, regression trees, and Bayesian linear regression.


Classification:


Classification means grouping the output into a class. If the algorithm tries to label input into two distinct classes, it is called binary classification. Selecting between more than two classes is referred to as multiclass classification.


Example: Determining whether or not someone will default on a loan.


Strengths: Classification trees perform very well in practice.


Weaknesses: Unconstrained, individual trees are prone to overfitting.
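A small sketch of the loan-default example, assuming scikit-learn and made-up applicant data; limiting the tree's depth is one simple way to constrain an individual tree so it does not overfit:

```python
# Binary classification of loan default with a depth-limited classification tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: [annual income in $1,000s, loan amount in $1,000s, years employed]
X = np.array([
    [30, 20, 1],
    [45, 10, 3],
    [60, 15, 5],
    [25, 25, 0],
    [80, 30, 10],
    [35, 18, 2],
])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = defaulted, 0 = repaid

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[40, 22, 1]]))  # predicted class for a new applicant
```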


Here are a few types of Classification Algorithms


Naïve Bayes Classifiers (NBC)


The Naïve Bayes model is easy to build and particularly useful for very large datasets. The model can be viewed as a directed acyclic graph with one parent node and several child nodes, and it assumes conditional independence among the child nodes given their parent.
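A minimal Naïve Bayes sketch, assuming scikit-learn's GaussianNB and made-up numeric features:

```python
# Gaussian Naive Bayes on a tiny invented dataset with two numeric features.
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[1.0, 2.1], [1.2, 1.9], [3.8, 4.0], [4.1, 3.9], [1.1, 2.0], [4.0, 4.2]])
y = np.array([0, 0, 1, 1, 0, 1])

clf = GaussianNB().fit(X, y)
print(clf.predict([[1.0, 2.0], [4.0, 4.0]]))  # predicted classes
print(clf.predict_proba([[2.5, 3.0]]))        # class probabilities
```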


Decision Trees


A decision tree classifies instances by sorting them according to their feature values. Each node in the tree corresponds to a feature of the instance to be classified, and each branch represents a value that the node can take. It is a widely used technique for classification, and the resulting classifier takes the form of a tree known as a decision tree.


When used for regression, a tree can also estimate real values (the cost of purchasing a car, the number of calls, total monthly sales, etc.).
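A short sketch of a tree estimating real values, assuming scikit-learn's DecisionTreeRegressor and invented advertising-spend and sales-call figures:

```python
# Regression tree predicting a monthly sales figure from two invented features.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Columns: [advertising spend in $1,000s, number of sales calls]
X = np.array([[5, 100], [10, 150], [15, 200], [20, 250], [25, 300]])
y = np.array([40_000, 55_000, 72_000, 90_000, 110_000])  # monthly sales in $

reg = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(reg.predict([[12, 180]]))  # estimated monthly sales for a new month
```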


Support Vector Machine (SVM)


SVM is a type of learning algorithm developed in the 1990s. The method is based on results from statistical learning theory introduced by Vapnik.


SVMs are also closely connected to kernel functions, which are a central concept for many learning tasks. The kernel framework and SVMs are used in a variety of fields, including multimedia information retrieval, bioinformatics, and pattern recognition.
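A brief SVM sketch, assuming scikit-learn and its built-in two-moons toy dataset, where a non-linear (RBF) kernel separates classes that a straight line could not:

```python
# SVM with an RBF kernel on a toy non-linearly separable dataset.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```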



Challenges in Supervised machine learning


Here are some of the challenges faced in supervised machine learning:


Irrelevant input features present in the training data can produce inaccurate results.


Data preparation and pre-processing is always a challenge.


Accuracy suffers when impossible, unlikely, or incomplete values are used as training data.


If a domain expert is not available, the other approach is "brute-force": you have to guess which features (input variables) are the right ones to train the machine on, which can lead to inaccurate models.

Advantages of Supervised Learning:


It allows you to collect data or produce a data output from previous experience.

Helps you to optimize performance criteria using experience

It helps you to solve various types of real-world computation problems.


Disadvantages of Supervised Learning


The decision boundary might be overtrained if your training set doesn't have examples of every class you want to be able to recognize.


You need to select lots of good examples from each class while you are training the classifier.


Classifying big data can be a real challenge.


Training for supervised learning needs a lot of computation time.


Best practices for Supervised Learning


Before doing anything else, you need to decide what kind of data is to be used as a training set


You need to decide the structure of the learned function and learning algorithm.


Gather the corresponding outputs, either from human experts or from measurements.


Summary - Supervised learning


You train the machine using data that is well "labeled."


Training a machine to help you predict how long it will take to drive home from your workplace is an example of supervised learning.


Regression and Classification are two types of supervised machine learning techniques.


Supervised learning is a relatively simple method, while unsupervised learning is a more complex method.


The biggest challenge in supervised learning is that irrelevant input features in the training data can give inaccurate results.


The main advantage of supervised learning is that it allows you to collect data or produce a data output from previous experience.


The drawback of this model is that the decision boundary might be overtrained if your training set doesn't have examples of every class you want to include.


As a best practice of supervised learning, you first need to decide what kind of data should be used as a training set.

