
Neural Networks vs. Deep Learning

What’s the difference between deep learning and a regular neural network? The simple answer is that deep learning is larger in scale. Before we get into what that means, let’s talk about how a neural network functions.
To make sense of observational data (like photos or audio), neural networks pass data through interconnected layers of nodes. When information passes through a layer, each node in that layer performs simple operations on the data and selectively passes the results to other nodes. Each subsequent layer focuses on a higher-level feature than the last, until the network creates an output.
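To make that concrete, here is a minimal sketch (not from the original article) of what a single node does, written in Python with NumPy: it multiplies each input by a weight, adds them up along with a bias, and passes the sum through a simple activation function before the result moves on to the next layer. The numbers and sizes here are made up purely for illustration.

import numpy as np

def node(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    z = np.dot(inputs, weights) + bias
    # ReLU activation: keep positive signals, zero out negative ones
    return max(0.0, z)

# Three hypothetical input values and three hypothetical weights
print(node(np.array([0.2, 0.9, 0.4]), np.array([0.5, -0.1, 0.8]), bias=0.1))
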
In between the input layer and the output layer are hidden layers. And here’s where practitioners typically draw the line between neural nets and deep learning: a basic neural network might have one or two hidden layers, while a deep learning network might have dozens or even hundreds. For example, a simple neural network with a few hidden layers can solve a common classification problem. But to identify the objects in a photograph by name, Google’s image recognition model, GoogLeNet, uses a total of 22 layers.
Why so many layers? Increasing the number of layers and nodes can potentially increase the accuracy of your network. However, more layers mean more parameters to train, more computational resources, and a greater risk of overfitting.
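As a rough illustration of that trade-off, here is a sketch assuming PyTorch, with made-up layer sizes: a “basic” network with one hidden layer next to a deeper stack. Counting the parameters shows how quickly the cost of extra layers adds up.

import torch.nn as nn

shallow = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),   # one hidden layer
    nn.Linear(64, 10),
)

deep = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # several hidden layers
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def count_params(model):
    # Total number of trainable weights and biases in the model
    return sum(p.numel() for p in model.parameters())

print(count_params(shallow), count_params(deep))  # the deeper model has far more parameters
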

Training a Deep Learning Model

Training a deep learning model requires a lot of data. The more data you train it on, the more accurate your deep learning model will be. (In 2012, Google used 10 million digital images taken from YouTube videos to train a deep learning model to identify cats. Yes, you read that right.)

Simply put, training a deep learning model means that you’re feeding data to the model, getting an output, and then using that output to make adjustments. For example, if you train your model on a bunch of pictures of cats and then feed it new cat photos it’s never seen before, it should be able to pick out the cats in the new photos. If it doesn’t, you can change the way the network’s nodes are weighting certain characteristics of the images (the presence of whiskers and a tail, for instance). A weight, in this case, is a number that represents the importance of a characteristic. The higher the weight, the greater the influence that characteristic has on the node’s output.
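Here is a minimal training-loop sketch, assuming PyTorch and a stand-in batch of fake data; a real pipeline would of course feed in actual labeled photos. The point is the shape of the loop described above: feed data in, compare the output to the label, and nudge the weights so the next pass does a little better.

import torch
import torch.nn as nn

# Tiny stand-in model and fake data: 16 "images" flattened to 64 numbers,
# with labels 0 = no cat, 1 = cat
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(16, 64)
labels = torch.randint(0, 2, (16,))

for epoch in range(10):
    outputs = model(images)          # feed data to the model, get an output
    loss = loss_fn(outputs, labels)  # measure how wrong that output is
    optimizer.zero_grad()
    loss.backward()                  # work out how each weight contributed to the error
    optimizer.step()                 # adjust the weights to do better next time
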
But how did the model even know to look for whiskers? Typically, a data scientist will engineer features for the model to consider and feed it labeled data during the training process (e.g., a series of labeled photos of cats). But one of the amazing things about deep learning is that you don’t necessarily have to complete this step. To continue with our Google example, the company’s cat-identifying model learned to pick out 20,000 distinct object categories unsupervised. It “learned” what a cat looked like without explicitly being told. (The downside to this method is that the resulting models aren’t yet as accurate as models that receive supervised training.)
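To show the flavor of learning without labels, here is a toy sketch, again assuming PyTorch: a tiny autoencoder trained only to reconstruct its input, so it has to discover useful features on its own, with no one telling it what a cat is. This is only loosely in the spirit of Google’s large-scale experiment, not a reproduction of it.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())  # compress the input into 8 learned features
decoder = nn.Sequential(nn.Linear(8, 64))             # try to rebuild the input from those features
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
unlabeled = torch.randn(32, 64)  # stand-in for a batch of unlabeled image data

for epoch in range(20):
    reconstruction = model(unlabeled)
    loss = loss_fn(reconstruction, unlabeled)  # compare the output to the input itself; no labels needed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
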

The Future of Deep Learning

We’ve come a long way since Google built its deep learning model to identify cats. Now, we’re starting to use automated machine learning tools to create neural network layers in less time, employ deep learning in the medical field, and match related images with human-level accuracy.
As data scientists get closer to building highly accurate deep learning models that can learn without supervision, deep learning will become faster and less labor-intensive. That can only mean bigger and better things are yet to come.
