Contents

Machine Learning Tutorial – What is Machine Learning?
Machine Learning Tutorial – Data Mining vs Machine Learning
Machine Learning Tutorial – Types of Machine Learning
  i. Supervised Learning
  ii. Unsupervised Learning
  iii. Reinforcement Learning
Machine Learning Tutorial – Machine Learning Approaches
  i. Decision Tree Learning
  ii. Support Vector Machines
  iii. Association Rule Learning
  iv. Artificial Neural Networks (ANN)
  v. Inductive Logic Programming
  vi. Reinforcement Learning
  vii. Clustering
  viii. Similarity and Metric Learning
  ix. Bayesian Networks
  x. Representation Learning
  xi. Sparse Dictionary Learning
Conclusion
Machine Learning Tutorial for Beginners – Learn Machine Learning
Machine Learning Tutorial – What is Machine Learning?

Machine learning is a technology designed to build intelligent systems. These systems have the ability to learn from past experience or by analyzing historical data, and they provide results according to that experience. Alpaydin defines machine learning as "optimizing a performance criterion using example data and past experience".
Data is the key concept of machine learning. We apply machine learning algorithms to data to identify hidden patterns and gain insights, and these patterns and this knowledge help systems learn and improve their performance. Machine learning draws on both statistics and computer science: statistics lets us draw inferences from the given data, while computer science provides efficient algorithms, represents the required model, and evaluates the model's performance. Machine learning also involves advanced statistical concepts such as modeling and optimization. Modeling refers to choosing the conditions or probability distribution for the given sample data, while optimization covers the techniques used to find the most appropriate parameters for that data. Machine learning technology is used in several areas such as artificial neural networks, data mining, and web ranking.
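To make the modeling-and-optimization idea concrete, here is a minimal sketch, assuming scikit-learn is available and using invented numbers: the model is a straight line, and fitting it means finding the parameters that best explain the sample data.

```python
# A minimal "modeling + optimization" sketch, assuming scikit-learn is installed.
# The model is a straight line; fitting it is an optimization over its parameters.
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic historical data: hours studied vs. exam score (illustrative only).
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
score = np.array([52.0, 57.0, 66.0, 70.0, 78.0])

model = LinearRegression()   # the chosen model
model.fit(hours, score)      # optimization: least squares finds the best parameters

print("learned slope:", model.coef_[0])
print("learned intercept:", model.intercept_)
print("prediction for 6 hours:", model.predict([[6.0]])[0])
```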
Machine Learning Tutorial – Data Mining vs Machine Learning

In Big Data analytics, data mining and machine learning are the two most commonly used techniques. Learners often confuse the two, but they are different approaches used for different purposes. Here, in this part of the Machine Learning Tutorial, we will see the difference between data mining and machine learning.
Data mining is the process of identifying patterns in large amounts of data in order to extract useful information from those patterns. It may use techniques from artificial intelligence, machine learning, neural networks, and statistics. The basis of data mining is real-world data. It borrows inspiration and techniques from machine learning and statistics, but puts them to different ends: a person carries out data mining in a specific situation, on a particular data set, with the goal of leveraging the power of the various pattern recognition techniques of machine learning.

Machine learning, on the other hand, is an approach to developing artificial intelligence. We use machine learning to develop new algorithms and techniques that allow the machine to learn from analyzed data or from experience. Most tasks that need intelligence require the ability to induce new knowledge from experience, so machine learning is a large area within AI. It involves the study of algorithms that can extract information without online human guidance. Machine learning relates to the study, design, and development of algorithms that give computers the capability to learn without being explicitly programmed. Data mining starts with unstructured data and tries to extract knowledge or interesting patterns; during this process, it often uses machine learning algorithms.
Machine Learning Tutorial – Types of Machine Learning

In this Machine Learning Tutorial, we organize machine learning algorithms into a taxonomy based on the desired outcome of the algorithm. Common algorithm types include:
i. Supervised Learning

Here, we present the computer with example inputs and their desired outputs, given by a "teacher". The goal is to learn a general rule that maps inputs to outputs. Spam filtering is an example of supervised learning, in particular of classification: the learning algorithm is presented with email messages labeled as "spam" or "not spam", and must produce a program that labels unseen messages as either spam or not. Classification is the standard formulation of the supervised learning task: the learner needs to learn a function that maps an input vector into one of several classes, which it can do by looking at several input-output examples of that function.
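A minimal sketch of the spam example above, assuming scikit-learn is available; the tiny labeled messages are made up purely for illustration.

```python
# Supervised learning: labeled examples in, a rule mapping inputs to labels out.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",                # spam
    "limited offer click here",            # spam
    "meeting rescheduled to monday",       # not spam
    "please review the attached report",   # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)   # turn each message into a word-count vector

classifier = MultinomialNB()
classifier.fit(X, labels)                # learn the mapping from inputs to labels

new_message = vectorizer.transform(["free prize meeting"])
print(classifier.predict(new_message))   # predicted label for an unseen message
```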
ii. Unsupervised Learning

Here, no labels are given to the learning algorithm; it is left on its own to find groups of similar inputs (clustering), density estimates, or projections of high-dimensional data that can be visualized effectively. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end. Topic modeling is an example of unsupervised learning: a program is given a list of human-language documents and is tasked with finding out which documents cover similar topics. Learning takes place by detecting regularities in the input data and developing patterns based on them; the more frequently a pattern occurs, the more it is used to make predictions. This approach is also called density estimation, and several methods, such as clustering, can be used for it.
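A minimal unsupervised-learning sketch, assuming scikit-learn is available: no labels are given, and k-means groups similar points on its own.

```python
# Unsupervised learning: clustering unlabeled points with k-means.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled 2-D points; two loose groups are hidden in the data (illustrative).
points = np.array([
    [1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(points)

print("cluster assignments:", kmeans.labels_)
print("cluster centres:", kmeans.cluster_centers_)
```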
iii. Reinforcement Learning
Here, a computer program interacts with a dynamic environment in which it must achieve a certain goal, without a teacher explicitly telling it whether it has come close to that goal. Consider robotic navigation: a robot can make very precise movements to perform a task, yet it has to learn those movements through repeated trials, applying the knowledge gained from each trial to improve its efficiency. This is the basis of reinforcement learning. In robotic navigation and similar systems, such as self-driving cars or sensor-operated doors, the output is not restricted to a single action; it may be a sequence of actions.
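A minimal sketch of the interaction loop described above: the agent acts in a toy environment, receives only a reward signal instead of a "correct answer", and its behaviour is a sequence of actions. The one-dimensional corridor below is a hypothetical environment, invented for illustration.

```python
# Reinforcement-learning interaction loop: state -> action -> reward -> next state.
import random

position, goal = 0, 4          # the agent starts at 0; the goal is at position 4
total_reward = 0

for step in range(20):
    action = random.choice([-1, +1])            # no teacher: the agent simply tries actions
    position = max(0, min(goal, position + action))
    reward = 1 if position == goal else 0       # feedback arrives only as a reward signal
    total_reward += reward
    if position == goal:
        break

print("steps taken:", step + 1, "total reward:", total_reward)
```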
Machine Learning Tutorial – Machine Learning Approaches

Let us look at some of the most common machine learning approaches:
i. Decision Tree Learning

In decision tree learning, the predictive model used is a decision tree, which maps observations about an item to conclusions about the item's target value. This type of learning is widely used in data mining and machine learning. Depending on the target, these trees are referred to as regression trees or classification trees. A decision tree can also provide a graphical representation of a decision-making problem.
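A minimal decision-tree sketch, assuming scikit-learn is available; the toy weather data is invented just to show observations mapping to a target value.

```python
# Decision tree learning: observations about an item -> conclusion about its target.
from sklearn.tree import DecisionTreeClassifier

# Features: [temperature in °C, humidity in %]; target: play outside (1) or not (0).
X = [[30, 85], [27, 90], [21, 70], [18, 65], [33, 40], [15, 95]]
y = [0, 0, 1, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

print(tree.predict([[22, 60]]))   # conclusion the tree draws for a new observation
```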
ii. Support Vector Machines
These are sets of related supervised learning methods that can be used for both classification and regression. An SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. Support vector machines (SVMs) cover both linear and nonlinear classifiers. To classify two-dimensional training data into two groups, you can picture a scatter plot of the two attributes; using an SVM, the plotted points carry two different labels or colors identifying the two classes, and the algorithm finds a boundary that separates them.
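A minimal SVM sketch for the two-dimensional, two-class case described above, assuming scikit-learn is available; the points are illustrative only.

```python
# Support vector machine: learn a boundary between two classes of 2-D points.
from sklearn.svm import SVC

# Two attributes per example, two classes (0 and 1).
X = [[1.0, 2.0], [1.5, 1.8], [2.0, 2.2],    # class 0
     [6.0, 6.5], [6.5, 7.0], [7.0, 6.8]]    # class 1
y = [0, 0, 0, 1, 1, 1]

svm = SVC(kernel="linear")   # a linear classifier; kernels such as "rbf" give nonlinear ones
svm.fit(X, y)

print(svm.predict([[2.2, 2.0], [6.8, 6.9]]))   # which side of the boundary new points fall on
```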
iii. Association Rule Learning

Association rule learning is used for discovering interesting relations between variables in large databases. It is commonly applied to sales data to find associations among the sales of different items: a rule predicts that if a customer buys an item X, there is a good chance the customer will also buy an item Y.
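A minimal sketch of the "if a customer buys X, they may also buy Y" idea, computed directly from toy transactions with plain Python; the transactions and item names are invented for illustration.

```python
# Association rules: estimate support and confidence for "bread -> butter".
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"butter", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the transactions containing the antecedent, how many also contain the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print("support(bread, butter):", support({"bread", "butter"}))
print("confidence(bread -> butter):", confidence({"bread"}, {"butter"}))
```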
iv. Artificial Neural Networks (ANN)

An artificial neural network (ANN) learning algorithm, usually called a "neural network" (NN), is inspired by the structure and functional aspects of biological neural networks. It structures computation as an interconnected group of artificial neurons and processes information using a connectionist approach. Modern neural networks are non-linear statistical data modeling tools: we use them to model complex relationships between inputs and outputs and to find patterns in data, and they can capture the statistical structure of an unknown joint probability distribution over the observed variables. ANNs are computation-intensive methods for finding patterns in data sets that are very large and contain so many explanatory variables that standard methods such as multiple regression become impractical. When the outputs are continuous variables, neural networks operate like multiple regression; when the outputs are categorical, they act as classifiers.
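A minimal neural-network sketch, assuming scikit-learn is available: a small multi-layer perceptron learns a nonlinear mapping from inputs to outputs. The XOR-style data below is a classic illustration, not real data.

```python
# Artificial neural network: a small MLP learning a nonlinear (XOR-like) relation.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]   # XOR: no single straight line separates the two classes

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=1000, random_state=1)
net.fit(X, y)

print(net.predict(X))   # should typically reproduce the XOR pattern [0, 1, 1, 0]
```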
v. Inductive Logic Programming

Inductive logic programming (ILP) is an approach to rule learning that uses logic programming as a uniform representation for input examples, background knowledge, and hypotheses. An ILP system derives a hypothesized logic program that entails all of the positive examples and none of the negative examples. Inductive programming also considers other kinds of programming languages for representing hypotheses, such as functional programs. In inductive techniques, we use a training phase to develop a model that summarizes the relations between the variables, and then apply this model to new data to deduce a classification or prediction.
vi. Reinforcement Learning

Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. It finds a policy that maps states of the world to the actions the agent ought to take in those states. Correct input/output pairs are never presented, nor are sub-optimal actions explicitly corrected; this is what distinguishes reinforcement learning from supervised learning. Reinforcement learning identifies the actions in a particular situation that maximize the performance of the system.
Reinforcement learning is based on a trial-and-error approach: it is defined by characterizing a learning problem rather than a fixed learning method. We can divide a complete reinforcement learning process into three parts: sensing the problem, taking appropriate actions, and measuring those actions against the goals. A minimal sketch of this trial-and-error loop is given below.
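The sketch below shows tabular Q-learning, a common way to realize the trial-and-error idea, on a hypothetical 5-state corridor where only reaching the last state gives a reward.

```python
# Tabular Q-learning: improve action values by trial and error, one step at a time.
import random

n_states, n_actions = 5, 2             # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount factor, exploration rate

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # sensing the situation and choosing an action (explore vs. exploit)
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # trial-and-error update: move Q towards reward plus discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("learned action values per state:", [[round(q, 2) for q in row] for row in Q])
```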
vii. Clustering

Cluster analysis is the assignment of a set of observations into subsets, called clusters, so that observations within the same cluster are similar according to some predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions about the structure of the data, often defined by a similarity metric and evaluated, for example, by internal compactness and by the separation between different clusters. Other methods rely on estimated density or graph connectivity. Clustering is a method of unsupervised learning and a common technique for statistical data analysis.
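A minimal sketch of the evaluation idea above, assuming scikit-learn is available: cluster the points, then measure compactness and separation with the silhouette score (values near 1 indicate tight, well-separated clusters).

```python
# Clustering plus an internal-quality measure (compactness and separation).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

points = np.array([
    [0.0, 0.2], [0.1, 0.0], [0.2, 0.1],
    [5.0, 5.1], [5.2, 4.9], [4.9, 5.2],
])

clustering = AgglomerativeClustering(n_clusters=2)
labels = clustering.fit_predict(points)

print("labels:", labels)
print("silhouette score:", silhouette_score(points, labels))
```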
viii. Similarity and Metric Learning

In this problem, the learning machine is given pairs of examples that are considered similar and pairs of less similar objects. It then learns a similarity function (or a distance metric) that can predict whether new objects are similar. Similarity and metric learning is sometimes used in recommendation systems.
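A hand-rolled, minimal sketch of the idea, not a production algorithm: from pairs labeled "similar", estimate per-feature weights so that features which vary little within similar pairs count more in the learned distance. The toy vectors are hypothetical; real systems use dedicated metric-learning methods.

```python
# Learn a simple weighted distance from pairs labeled as similar.
import numpy as np

similar_pairs = [
    (np.array([1.0, 5.0]), np.array([1.1, 2.0])),   # feature 0 barely differs, feature 1 differs a lot
    (np.array([3.0, 7.0]), np.array([2.9, 1.0])),
]

# Average squared difference per feature within the similar pairs.
diffs = np.array([(a - b) ** 2 for a, b in similar_pairs]).mean(axis=0)
weights = 1.0 / (diffs + 1e-6)   # features stable across similar pairs get high weight

def learned_distance(a, b):
    return float(np.sqrt(np.sum(weights * (a - b) ** 2)))

# Smaller distance means the learned metric considers the pair more similar.
print(learned_distance(np.array([1.0, 4.0]), np.array([1.05, 9.0])))
print(learned_distance(np.array([1.0, 4.0]), np.array([4.0, 4.0])))
```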
ix. Bayesian Networks

A Bayesian network (also called a belief network or directed acyclic graphical model) is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, it could represent the probabilistic relationships between diseases and symptoms: given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayes' theorem is one of the most important results in probability theory. It concerns the inversion of probabilities and relates, for two events A and B, the conditional probability of A given B to the conditional probability of B given A:

P(A|B) = P(B|A) · P(A) / P(B)

Using Bayes' theorem, a user can construct a DAG relating the dependent variable to the independent variables; the resulting graph is the Bayesian network.
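A worked example of Bayes' theorem as stated above, using invented illustrative numbers for a disease/symptom setting.

```python
# Bayes' theorem: P(disease | symptom) from P(symptom | disease), P(disease), P(symptom).
p_disease = 0.01                  # prior probability of the disease (assumed)
p_symptom_given_disease = 0.90    # the symptom is common when the disease is present (assumed)
p_symptom = 0.05                  # overall probability of showing the symptom (assumed)

p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print("P(disease | symptom) =", p_disease_given_symptom)   # 0.18
```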
x. Representation Learning

The representation of the data is one of the key factors affecting the performance of a machine learning method. We use representation learning algorithms to represent data in a better format: their aim is to preserve the input information while transforming it into a form that makes it more useful.
Representation learning algorithms often attempt to preserve the information in their input while transforming it into a useful form, typically as a pre-processing step before classification or prediction. This also allows inputs coming from the unknown data-generating distribution to be reconstructed, without necessarily being faithful to configurations that are implausible under that distribution.
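A minimal representation-learning sketch, assuming scikit-learn is available: PCA transforms the raw inputs into a lower-dimensional representation that preserves most of the information, often as a pre-processing step.

```python
# Representation learning via PCA: a more compact representation of the inputs.
import numpy as np
from sklearn.decomposition import PCA

# Three correlated features per sample (illustrative numbers only).
X = np.array([
    [2.0, 4.1, 1.0],
    [3.0, 6.2, 1.5],
    [4.0, 7.9, 2.1],
    [5.0, 10.1, 2.4],
])

pca = PCA(n_components=2)
X_new = pca.fit_transform(X)   # the learned, more compact representation

print("new representation:\n", X_new)
print("variance preserved:", pca.explained_variance_ratio_.sum())
```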
xi. Sparse Dictionary Learning

In this method, a datum is represented as a linear combination of basis functions, and the coefficients are assumed to be sparse. Let x be a d-dimensional datum and D a d × n matrix, where each column of D represents a basis function; r is the coefficient vector that represents x using D. In mathematical terms, sparse dictionary learning means finding D and r such that x ≈ Dr, where r is sparse. Generally speaking, n is assumed to be larger than d to allow the freedom for a sparse representation.

Sparse dictionary learning can be applied in several contexts. In classification, the problem is to determine which of the previously seen classes a new datum belongs to: suppose we develop a dictionary for each class; then we associate a new datum with the class whose dictionary best represents it. Sparse dictionary learning also finds application in image de-noising, the key idea being that a clean image patch can be represented by an image dictionary while the noise cannot.

So, this was all about the Machine Learning Tutorial. We hope you liked our explanation.
Conclusion

Hence, in this Machine Learning Tutorial, we saw that machine learning keeps changing with the evolution of new computing technologies. Earlier, machine learning was the theory that computers can learn without being programmed to perform specific tasks; now, researchers interested in artificial intelligence want to see whether computers can learn from data. Such systems learn from previous computations to produce reliable decisions and results. It's a science that's not new, but one that's gaining fresh momentum.