This coding bootcamp dives deep into Machine Learning and Deep Learning, using Python as the programming language.
Python is a general-purpose programming language designed to be approachable not only to professional programmers but to anyone who needs to write code.
Machine Learning and Deep Learning, in turn, evolved from artificial intelligence and the study of pattern recognition. Nowadays, massive amounts of data must be analyzed to produce useful information, recommendations, and predictions, such as market trends, cancer detection, and face recognition.
This bootcamp is for anyone interested in, planning to learn, or already learning data science. No prior coding knowledge is required, although some experience certainly helps.
If you don’t know what programming is or don’t know whether you should learn web development, data science, or mobile development, read What should I learn first in coding? and What is code?.
Learn how to create programs with Python, the definitive programming language for data science.
Extend your Python skills to build a solid foundation in data analysis & visualization.
Create Machine and Deep Learning models, the technologies that are shaping our everyday lives and the decisions we make.
Upgrade your career with 1-on-1 mentorship, CV & portfolio building, and interview preparation.
We enlist industry experts to plan, author, and review our syllabus. It will guide you from fundamental concepts all the way to full-scale implementations. It is constantly updated, and you get lifetime access.
Learn the absolute basics in Python: variables and assignments, using expressions, manipulating numbers and strings, how to indicate comments in code, and so forth.
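A minimal sketch of what these basics look like in practice (the variable names and values are illustrative):

```python
# Variables and assignment
count = 3                    # an integer
price = 9.99                 # a float
name = "Ada"                 # a string

# Expressions with numbers and strings
total = count * price        # arithmetic expression
greeting = "Hello, " + name  # string concatenation
shout = name.upper()         # calling a string method
```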
Python provides a complete set of control-flow elements. Learn everything about conditionals, loops and exceptions handling.
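The three control-flow elements in one small sketch — a conditional inside a function, a loop over a list, and a handled exception (the example data is made up):

```python
def classify(n):
    """Label a number, demonstrating if/elif/else conditionals."""
    if n > 0:
        return "positive"
    elif n < 0:
        return "negative"
    return "zero"

# A loop over a list, collecting one label per element
labels = [classify(n) for n in [-2, 0, 5]]

# Exception handling: the division error is caught, not fatal
try:
    result = 1 / 0
except ZeroDivisionError:
    result = float("inf")
```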
A function is a block of code that provides better modularity for your application and a high degree of code reuse. Learn about function definition, function calling with or without parameters, lambda expressions, and decorators.
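A compact sketch covering all three function features mentioned above — a decorator, a function with a default parameter, and a lambda expression (the names are illustrative):

```python
import functools

def shout(func):
    """A decorator that upper-cases the wrapped function's result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name, punctuation="!"):
    """A function with a required and an optional parameter."""
    return f"hello, {name}{punctuation}"

# A lambda expression: an anonymous one-line function
double = lambda x: x * 2
```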
Python supports the famous Object-Oriented Programming (OOP) paradigm. You will learn the constructs available in Python to use OOP in your programs.
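The core OOP constructs in Python — a class with a constructor, inheritance, and method overriding — in a minimal sketch with made-up classes:

```python
class Animal:
    def __init__(self, name):
        self.name = name          # instance attribute

    def speak(self):
        return f"{self.name} makes a sound"

class Dog(Animal):                # Dog inherits from Animal
    def speak(self):              # method overriding
        return f"{self.name} says woof"

rex = Dog("Rex")
```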
Learn how to efficiently use the four major built-in Python container types. With lists, tuples, sets, and dictionaries you will be able to store data for any real-world scenario.
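All four container types side by side, with the defining property of each noted in a comment (the example data is illustrative):

```python
# list: ordered and mutable
scores = [70, 85, 85, 90]

# tuple: ordered and immutable -- good for fixed records
point = (3, 4)

# set: unordered, unique elements -- duplicates collapse
unique_scores = set(scores)

# dict: key-value mapping
student = {"name": "Ada", "best": max(scores)}
```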
Modules are used to organize larger Python projects. The Python standard library is split into modules to make it more manageable. Programs are a collection of modules that are executable by the operating system.
Working with files involves two things: basic I/O, like opening, reading, or writing a file, and working with the filesystem, like creating or renaming a file or a directory.
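Both sides of file handling in one sketch — basic I/O first, then filesystem operations — run inside a temporary directory so it leaves nothing behind:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Basic I/O: open, write, then read a text file
    path = os.path.join(tmp, "notes.txt")
    with open(path, "w") as f:
        f.write("hello file\n")
    with open(path) as f:
        contents = f.read()

    # Filesystem work: create a directory, then rename the file into it
    os.mkdir(os.path.join(tmp, "archive"))
    renamed = os.path.join(tmp, "archive", "notes.txt")
    os.rename(path, renamed)
    exists_after_rename = os.path.exists(renamed)
```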
You will learn some more advanced features, which you may not use every day but which are handy when you need them, like regular expressions, package creation, and extending Python types.
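As a taste of regular expressions, a sketch that extracts structured pieces out of a made-up log line:

```python
import re

log_line = "2024-05-01 ERROR disk full on /dev/sda1"

# findall: extract every date-like token
dates = re.findall(r"\d{4}-\d{2}-\d{2}", log_line)

# search + groups: split the log level from the message
match = re.search(r"(ERROR|WARN|INFO)\s+(.*)", log_line)
level, message = match.group(1), match.group(2)
```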
NumPy is a fundamental Python package to efficiently practice data science. Expand your skillset by learning scientific computing.
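What "efficient" means here is vectorization: NumPy applies an operation to a whole array at once instead of looping element by element. A minimal sketch with toy temperature data:

```python
import numpy as np

temps_c = np.array([10.0, 20.0, 30.0])

# Vectorized arithmetic: no explicit Python loop needed
temps_f = temps_c * 9 / 5 + 32

# Aggregations are built in
mean_c = temps_c.mean()
```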
Learn to import data into Python from various sources, such as CSV, Excel, SQL and the web.
Learn how to use the industry-standard pandas library to import, build, and manipulate DataFrames.
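A minimal sketch of the DataFrame workflow — build one from a dictionary of columns, filter rows, and add a derived column (the cities and temperatures are made up):

```python
import pandas as pd

# Build a DataFrame from a dictionary of columns
df = pd.DataFrame({
    "city": ["Athens", "Berlin", "Cairo"],
    "temp": [28, 18, 35],
})

# Boolean filtering keeps only the rows matching a condition
hot = df[df["temp"] > 20].copy()

# Adding a derived column
hot["temp_f"] = hot["temp"] * 9 / 5 + 32
```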
Learn how to tidy, rearrange, and restructure your data using versatile pandas DataFrames.
Learn how to combine and merge DataFrames, an essential part of the data scientist's toolbox.
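A sketch of a typical merge: joining two small made-up tables on a shared key, then aggregating the result:

```python
import pandas as pd

customers = pd.DataFrame({"id": [1, 2, 3],
                          "name": ["Ann", "Bob", "Cat"]})
orders = pd.DataFrame({"id": [1, 1, 3],
                       "amount": [50, 20, 70]})

# Inner join on the shared "id" column; customers without
# orders (Bob) drop out of the result
merged = pd.merge(customers, orders, on="id", how="inner")

# Aggregate after the merge: total spent per customer
totals = merged.groupby("name")["amount"].sum()
```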
Learn complex visualization techniques using the Matplotlib library.
Learn how to analyze time series data and visualize seasonality, trends and other patterns.
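One core time-series operation is resampling: changing the frequency of the data to expose patterns at a coarser scale. A sketch with synthetic hourly data resampled to daily means:

```python
import pandas as pd

# Synthetic hourly observations over two days
idx = pd.date_range("2024-01-01", periods=48, freq="h")
series = pd.Series(range(48), index=idx)

# Resample to daily means to smooth out intra-day variation
daily = series.resample("D").mean()
```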
Learn how to interact with, manipulate, and augment real-world data using its geographic dimension in the most common formats (GeoJSON, shapefile, GeoPackage), and visualize it on maps.
Linear regression is used for predicting a real-valued output based on a series of input values. Linear regression can be applied to several applications such as housing price prediction or the temperature of an area. Here we will also present the notion of a cost function, and introduce the gradient descent method for learning.
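The idea above can be sketched end to end in a few lines of NumPy: fit a slope and intercept by gradient descent on the mean-squared-error cost. The data is a noise-free toy set (y = 2x + 1), so the fit should recover those exact parameters; the learning rate and iteration count are illustrative choices:

```python
import numpy as np

# Toy data: y = 2x + 1 with no noise
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * X + 1

w, b = 0.0, 0.0          # parameters: slope and intercept
lr = 0.05                # learning rate

for _ in range(2000):
    y_hat = w * X + b            # current predictions
    error = y_hat - y
    # Gradients of the mean-squared-error cost
    grad_w = (error * X).mean()
    grad_b = error.mean()
    # Gradient descent step: move against the gradient
    w -= lr * grad_w
    b -= lr * grad_b
```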
Naive Bayes is one of the fundamental machine learning algorithms, widely used for classification problems. It applies Bayes' theorem of probability to predict the class of unseen data. At the same time, Decision Trees are among the most widely and commonly used machine learning algorithms. They are used for solving both classification and regression problems and are robust to outliers.
The k-nearest neighbors (KNN) algorithm is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems. It assumes that similar things exist in close proximity and uses the k nearest neighbors of an object to classify it.
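Because KNN is so simple, it fits in a few lines. A from-scratch sketch (Euclidean distance, majority vote) on two made-up, well-separated clusters:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters
X_train = np.array([[0, 0], [0, 1], [1, 0],
                    [5, 5], [5, 6], [6, 5]])
y_train = ["blue", "blue", "blue", "red", "red", "red"]
```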
Support Vector Machines (SVM) are among the most powerful machine learning models around. During this week you will learn the theory behind the SVM, understanding hyperplanes and kernel tricks, leaving you with one of the most popular machine learning algorithms at your disposal.
The k-means clustering method is an unsupervised machine learning technique used to identify clusters of data objects in a dataset. It is considered one of the oldest and most approachable clustering methods. Depending on the size of your data, you will learn how to use both k-means and mini-batch k-means.
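The algorithm alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A minimal sketch (naive initialization from the first k points, chosen for simplicity) on two obvious made-up clusters:

```python
import numpy as np

def kmeans(X, k, n_iter=10):
    """A minimal k-means: alternate assignment and update steps."""
    centroids = X[:k].copy()              # naive init: first k points
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its members
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two obvious clusters
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [9.0, 9.0], [9.0, 10.0], [10.0, 9.0]])
labels, centroids = kmeans(X, k=2)
```

A production version would repeat this with several random initializations and stop when assignments no longer change, which is what library implementations do.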
The human brain consists of roughly 100 billion cells called neurons, connected together by synapses. If sufficient synaptic inputs fire into a neuron, that neuron will also fire. We call this process “thinking”. We can model this process by creating a neural network on a computer.
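A classic single-neuron sketch of that idea: three inputs, one weight per "synapse", a sigmoid activation as the "firing", and weight updates proportional to the error. The training pattern is a toy one (the output simply copies the first input bit), so after training the neuron should fire for a new input whose first bit is 1:

```python
import numpy as np

# Toy training set: the output is simply the first input bit
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

def sigmoid(z):
    """The neuron's activation: squashes any input into (0, 1)."""
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
weights = rng.uniform(-1, 1, (3, 1))    # one weight per "synapse"

for _ in range(10000):
    output = sigmoid(X @ weights)       # the neuron "fires"
    error = y - output
    # Strengthen or weaken each synapse in proportion to the error
    weights += X.T @ (error * output * (1 - output))

# A new situation the neuron has never seen: first bit is 1
prediction = sigmoid(np.array([1, 0, 0]) @ weights)
```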
Convolutional Neural Networks (CNN) allow computers to see. In other words, CNNs are used to recognize images by transforming the original image, layer by layer, into class scores. The CNN was inspired by the visual cortex: every time we see something, a series of layers of neurons gets activated, and each layer detects a set of features such as lines and edges. Higher layers detect more complex features in order to recognize what we saw.
In order to handle sequential data successfully, you need to use a Recurrent Neural Network (RNN). It is able to “memorize” parts of the input and use them to make accurate predictions. These networks are at the heart of speech recognition, translation, and more. Many sophisticated software products use RNNs, such as Google Translate and Apple's Siri.