
Today we are going to ask ourselves the question: What is machine learning? Machine learning is the concept of training your computer to analyze data for you, without you having to give explicit instructions for every part of the task. Machine learning relies on datasets that you use to train your models.


In machine learning, a model is your trained machine. You feed data into your model to train it, then feed in additional data and evaluate what it outputs. For example, you might create a model for classifying cars. You feed in various attributes about each vehicle, such as weight, horsepower, and number of seats, and the model predicts what class of vehicle it is: a car, truck, or van. This model is really the whole point of machine learning.
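To make the "attributes in, class out" idea concrete, here is a toy sketch of a vehicle classifier written as a plain function. The thresholds are invented for illustration; a real model would learn them from data rather than having them hard-coded:

```python
# A minimal sketch of a model as a function: attributes in, class out.
# The cutoff values below are made up, not learned from any dataset.

def predict_vehicle_class(weight_kg, horsepower, num_seats):
    """Toy 'model' that maps car attributes to a vehicle class."""
    if num_seats >= 7:
        return "van"
    if weight_kg > 2500:
        return "truck"
    return "car"

print(predict_vehicle_class(1400, 120, 5))   # a light 5-seater -> "car"
print(predict_vehicle_class(3200, 300, 2))   # a heavy 2-seater -> "truck"
```

Training replaces these hand-written rules with rules the algorithm discovers on its own, but the interface stays the same: attributes go in, a class comes out.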

There are many different kinds of machine learning models, and the kind you use depends on what you are attempting to do. For example, if you are trying to predict a value such as a stock price, you will want some sort of regression model. If you are trying to identify objects, you will want a classification model.
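The stock-price case can be sketched with the classic least-squares formulas for a one-variable linear regression. The prices below are made up; the point is that a regression model outputs a continuous number, not a category:

```python
# Fit a one-variable linear regression with the least-squares formulas.
# The "prices" are invented data for illustration.

days  = [1, 2, 3, 4]
price = [102, 104, 106, 108]

n = len(days)
mean_x = sum(days) / n
mean_y = sum(price) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, price)) \
        / sum((x - mean_x) ** 2 for x in days)
intercept = mean_y - slope * mean_x

# Predict the price on day 5 -- a continuous value, unlike the
# discrete label a classification model would return.
print(intercept + slope * 5)  # -> 110.0
```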


The most important part of machine learning is having a good dataset to train your model with. The more high-quality data you have to train your algorithm, the more accurate the results. If you put junk into your model, you will get junk out of your model.

Your dataset is simply the data that you input into your model. Going back to the car-classifying example, you might have a table listing all of the attributes of each car. That table is your dataset.

One thing to keep in mind is that machine learning models usually take numeric vectors as input, not raw text. So when looking at your dataset, you will need to work out how to convert it into vectors (numbers) before you can feed it into your algorithm. For a good example of converting a dataset into vectors, check out the classifying cars article.
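One common way to do the conversion is to keep numeric columns as-is and one-hot encode categorical columns. This is a small sketch with made-up column names and values:

```python
# Turning a mixed dataset into numeric vectors: numbers stay as-is,
# and the categorical "fuel" column is one-hot encoded.
# Column names and values are invented for illustration.

rows = [
    {"weight": 1400, "horsepower": 120, "fuel": "gas"},
    {"weight": 2100, "horsepower": 180, "fuel": "diesel"},
    {"weight": 1600, "horsepower": 140, "fuel": "gas"},
]

# Collect the possible categories so every row maps to the same slots.
fuel_values = sorted({row["fuel"] for row in rows})  # ["diesel", "gas"]

def to_vector(row):
    one_hot = [1.0 if row["fuel"] == v else 0.0 for v in fuel_values]
    return [float(row["weight"]), float(row["horsepower"])] + one_hot

for row in rows:
    print(to_vector(row))
# First row -> [1400.0, 120.0, 0.0, 1.0]
```

Every row becomes a fixed-length list of numbers, which is exactly the kind of input a machine learning algorithm expects.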


Depending on the type of model you are trying to build, you will choose different methods for training your model. The main training types for models are:

  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning

In Supervised Learning, we take a dataset that is already structured and labeled, then pass it to our machine learning algorithm to train it. This is the kind of training we used in the cars example from earlier.
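Supervised learning can be shown in miniature with a 1-nearest-neighbor classifier: it "trains" by memorizing labeled examples, then labels a new point by finding the closest training example. The data below is made up:

```python
# Supervised learning sketch: a 1-nearest-neighbor classifier.
# Each training example is ([weight_kg, horsepower], label).
# The examples are invented for illustration.

training_data = [
    ([1400, 120], "car"),
    ([3200, 300], "truck"),
    ([2000, 150], "van"),
]

def predict(features):
    def distance(example):
        point, _label = example
        # Squared Euclidean distance to the new point.
        return sum((a - b) ** 2 for a, b in zip(point, features))
    _point, label = min(training_data, key=distance)
    return label

print(predict([1500, 125]))  # closest to the labeled "car" -> "car"
```

The key supervised-learning ingredient is the label attached to every training example; the algorithm learns the mapping from attributes to labels.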

In Unsupervised Learning, we take a bunch of unstructured data that is not yet labeled, pass it to our algorithm, and see if it can make sense of it. Donald Hebb's principle, often summarized as "neurons that fire together wire together," captures the intuition behind clustering: as you pass in various vectors over time, they begin to form groups based on how similar they are to one another.
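A tiny 1-D k-means run makes the clustering idea concrete: no labels are given, yet the algorithm still splits the points into two groups purely by their values. The points and starting centers are invented:

```python
# Unsupervised learning sketch: a tiny 1-D k-means with two clusters.
# No labels are provided; the groups emerge from the data alone.

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # made-up values
centers = [0.0, 10.0]                      # initial guesses

for _ in range(5):
    # Assign each point to its nearest center.
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    # Move each center to the mean of its cluster.
    centers = [sum(c) / len(c) for c in clusters]

print(centers)  # the two centers settle near 1.0 and 9.07
```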

Reinforcement Learning is when you take the output of your neural network, then feed back input to reinforce whether the previous action or output was correct. The neural network then makes adjustments and tries again. For example, you might write an algorithm for navigating a maze. After each step forward, you feed back whether or not you ran into a wall. When you run into a wall, you mark that as a bad path; when you do not, you mark it as a good path.
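The feedback loop can be sketched in a few lines: after each action we feed back a reward, and the agent nudges its preference for that action up or down. The maze setup, reward values, and learning rate are all invented for illustration:

```python
# Reinforcement-learning flavor in miniature: act, get a reward,
# adjust preferences, repeat. All values here are made up.

preferences = {"up": 0.0, "down": 0.0, "left": 0.0, "right": 0.0}

def update(action, hit_wall, learning_rate=0.5):
    # Hitting a wall is a bad path (-1); an open step is a good path (+1).
    reward = -1.0 if hit_wall else 1.0
    # Move the preference for this action toward the observed reward.
    preferences[action] += learning_rate * (reward - preferences[action])

# Suppose moving right is open and moving left always hits a wall:
for _ in range(10):
    update("right", hit_wall=False)
    update("left", hit_wall=True)

print(max(preferences, key=preferences.get))  # -> "right"
```

After a few rounds of feedback the agent prefers the direction that never hit a wall, which is exactly the mark-good-paths, mark-bad-paths idea from the maze example.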

What are Neurons?

In animals, neurons are small interconnected cells that together form your brain. As you learn things, some neuron connections get stronger, while others grow weaker.

In machine learning, neurons are more of a conceptual part of a complex statistical equation. Each input to the neural network is considered a neuron, and each output is also a neuron. You then have one or more "hidden" layers of neurons that interconnect the input and output neurons.

As you feed more input into your model, the relationships between some neurons strengthen while others weaken. Strengthening the link between two neurons is really just an adjustment to a statistical formula in the background. But because of the way the calculations are structured, you end up with something very powerful. We will go into this in more detail when we publish our article on deep neural networks.
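To show that an artificial neuron really is just a small formula, here is a single neuron computed by hand: a weighted sum of its inputs plus a bias, passed through a sigmoid activation. The input values and weights are arbitrary; "strengthening a link" simply means changing one of these weight numbers:

```python
# A single artificial neuron: weighted sum + bias, squashed by a
# sigmoid activation. The inputs and weights below are arbitrary.

import math

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid keeps the output in (0, 1)

output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
print(output)
```

Training a network is nothing more than repeatedly adjusting the `weights` and `bias` values of many such neurons until the outputs match the labels in the training data.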