Data Scientists enjoy one of the top-paying jobs, with an average salary of $120,000 according to Glassdoor and Indeed. That’s just the average! And it’s not just about money – it’s interesting work too!
If you’ve got some programming or scripting experience, this course will teach you the techniques used by real data scientists in the tech industry – and prepare you for a move into this hot career path. This comprehensive course includes 68 lectures spanning almost 9 hours of video, and most topics include hands-on Python code examples you can use for reference and for practice. I’ll draw on my 9 years of experience at Amazon and IMDb to guide you through what matters, and what doesn’t.
The topics in this course come from an analysis of real requirements in data scientist job listings from the biggest tech employers. We’ll cover the machine learning and data mining techniques real employers are looking for, including:
Principal Component Analysis
Train/Test and Cross-Validation
Decision Trees and Random Forests
Support Vector Machines
Term Frequency / Inverse Document Frequency
Experimental Design and A/B Tests
…and much more! There’s also an entire section on machine learning with Apache Spark, which lets you scale up these techniques to “big data” analyzed on a computing cluster.
If you’re new to Python, don’t worry – the course starts with a crash course. If you’ve done some programming before, you should pick it up quickly. This course shows you how to get set up on a Microsoft Windows-based PC; the sample code will also run on macOS or Linux desktop systems, but I can’t provide OS-specific support for them.
Each concept is introduced in plain English, avoiding confusing mathematical notation and jargon. It’s then demonstrated using Python code you can experiment with and build upon, along with notes you can keep for future reference.
If you’re a programmer looking to switch into an exciting new career track, or a data analyst looking to make the transition into the tech industry – this course will teach you the basic techniques used by real-world industry data scientists. I think you’ll enjoy it!
What are the requirements?
You’ll need a desktop computer (Windows, Mac, or Linux) capable of running Enthought Canopy 1.6.2 or newer. The course will walk you through installing the necessary free software.
Some prior coding or scripting experience is required.
At least high school level math skills will be required.
This course walks through getting set up on a Microsoft Windows based desktop PC. While the code in this course will run on other operating systems, we cannot provide OS-specific support for them.
What am I going to get from this course?
Extract meaning from large data sets using a wide variety of machine learning, data mining, and data science techniques with the Python programming language.
Perform machine learning on “big data” using Apache Spark and its MLlib package.
Design experiments and interpret the results of A/B tests
Visualize clustering and regression analysis in Python using matplotlib
Produce automated recommendations of products or content with collaborative filtering techniques
Apply best practices in cleaning and preparing your data prior to analysis
What is the target audience?
Software developers or programmers who want to transition into the lucrative data science career path will learn a lot from this course.
Data analysts in finance or other non-tech industries who want to transition into the tech industry can use this course to learn how to analyze data with code rather than with graphical tools. But you’ll need some prior experience in coding or scripting to be successful.
If you have no prior coding or scripting experience, you should NOT take this course – yet. Go take an introductory Python course first.
|Section 1: Getting Started|
[Activity] Getting What You Need
[Activity] Installing Enthought Canopy
Python Basics, Part 1
[Activity] Python Basics, Part 2
Running Python Scripts
|Section 2: Statistics and Probability Refresher, and Python Practice|
Types of Data
Mean, Median, Mode
[Activity] Using mean, median, and mode in Python
[Activity] Variation and Standard Deviation
Probability Density Function; Probability Mass Function
Common Data Distributions
[Activity] Percentiles and Moments
[Activity] A Crash Course in matplotlib
[Activity] Covariance and Correlation
[Exercise] Conditional Probability
Exercise Solution: Conditional Probability of Purchase by Age
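As a taste of the statistics refresher, here’s a minimal NumPy sketch of mean vs. median (the income figures are hypothetical, not the course’s data sets):

```python
import numpy as np

# A small, hypothetical income sample with one extreme outlier
incomes = np.array([25000, 26000, 27000, 27500, 28000, 29000, 1000000])

print(np.mean(incomes))    # ~166,071: the outlier drags the mean far upward
print(np.median(incomes))  # 27,500: the median stays near a "typical" income
print(np.std(incomes))     # standard deviation measures the spread
```

This is exactly why the course stresses knowing when to report the median instead of the mean.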
|Section 3: Predictive Models|
[Activity] Linear Regression
[Activity] Polynomial Regression
[Activity] Multivariate Regression, and Predicting Car Prices
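To give a flavor of the regression material, here’s a short sketch using NumPy’s `polyfit` on made-up page-speed vs. spending data (the relationship and numbers are invented for illustration):

```python
import numpy as np

# Hypothetical data: page load time (seconds) vs. amount spent (dollars),
# with spending dropping as pages get slower
rng = np.random.default_rng(42)
page_speeds = rng.normal(3.0, 1.0, 100)
purchase_amounts = 100 - page_speeds * 3 + rng.normal(0, 0.5, 100)

# np.polyfit with degree 1 is ordinary least-squares linear regression
slope, intercept = np.polyfit(page_speeds, purchase_amounts, 1)
print(slope, intercept)  # roughly -3 and 100, recovering the true relationship

# Use the fitted line to predict spending for a 2.5-second page load
print(slope * 2.5 + intercept)
```

Swapping the degree parameter from 1 to a higher number is all it takes to move from linear to polynomial regression.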
|Section 4: Machine Learning with Python|
Supervised vs. Unsupervised Learning, and Train/Test
[Activity] Using Train/Test to Prevent Overfitting a Polynomial Regression
Bayesian Methods: Concepts
[Activity] Implementing a Spam Classifier with Naive Bayes
[Activity] Clustering people based on income and age
[Activity] Install GraphViz
Decision Trees: Concepts
[Activity] Decision Trees: Predicting Hiring Decisions
Support Vector Machines (SVM) Overview
[Activity] Using SVM to classify people with scikit-learn
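The train/test idea at the heart of this section can be sketched in a few lines of NumPy (synthetic data, not the course’s notebooks): hold data back, fit on the rest, and compare errors.

```python
import numpy as np

# Synthetic data with a simple linear relationship plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 1, 100)

# Hold back 20% of the data as a test set the model never sees
train_x, test_x = x[:80], x[80:]
train_y, test_y = y[:80], y[80:]

def mse(coeffs, xs, ys):
    # Mean squared error of a fitted polynomial on a data set
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2)

# Fit polynomials of increasing complexity on the training set only;
# higher-degree fits chase noise, so compare train error vs. test error
for degree in (1, 8):
    coeffs = np.polyfit(train_x, train_y, degree)
    print(degree, mse(coeffs, train_x, train_y), mse(coeffs, test_x, test_y))
```

A gap between training error and test error is the signature of overfitting – the theme this whole section builds on.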
|Section 5: Recommender Systems|
User-Based Collaborative Filtering
Item-Based Collaborative Filtering
[Activity] Finding Movie Similarities
[Activity] Improving the Results of Movie Similarities
[Activity] Making Movie Recommendations to People
[Exercise] Improve the recommender’s results
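The core trick behind item-based collaborative filtering fits in a few lines of pandas: treat each movie as a column of user ratings and correlate the columns. (The tiny ratings table below is made up; the course works with a real movie ratings data set.)

```python
import pandas as pd

# A tiny, made-up user-by-movie ratings table (rows = users)
ratings = pd.DataFrame({
    'Star Wars':               [5, 4, 5, 2, 1],
    'The Empire Strikes Back': [5, 5, 4, 1, 2],
    'Gone with the Wind':      [1, 2, 1, 5, 5],
})

# Item-based collaborative filtering: movies whose rating columns
# correlate strongly are "similar", regardless of who rated them
similarities = ratings.corr(method='pearson')
print(similarities['Star Wars'].sort_values(ascending=False))
```

Recommending to a user then amounts to looking up the movies most similar to the ones they already rated highly.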
|Section 6: More Data Mining and Machine Learning Techniques|
[Activity] Using KNN to predict a rating for a movie
Dimensionality Reduction; Principal Component Analysis
[Activity] PCA Example with the Iris data set
Data Warehousing Overview: ETL and ELT
|Section 7: Dealing with Real-World Data|
[Activity] K-Fold Cross-Validation to avoid overfitting
Data Cleaning and Normalization
[Activity] Cleaning web log data
Normalizing numerical data
[Activity] Detecting outliers
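Outlier detection can be as simple as a median-and-standard-deviation filter; here’s a sketch on simulated incomes (the data and the `reject_outliers` helper are illustrative, not the course’s exact code):

```python
import numpy as np

# Simulated incomes, plus one billionaire who wrecks the statistics
rng = np.random.default_rng(3)
incomes = np.append(rng.normal(27000, 15000, 10000), [1_000_000_000])

def reject_outliers(data, num_std=2.0):
    # Drop points more than num_std standard deviations from the median
    median = np.median(data)
    std = np.std(data)
    return data[np.abs(data - median) < num_std * std]

filtered = reject_outliers(incomes)
print(np.mean(incomes))   # wildly inflated by the single outlier
print(np.mean(filtered))  # back near the true typical income
```

Whether dropping outliers is appropriate is a judgment call, of course – sometimes the outlier *is* the story.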
|Section 8: Apache Spark: Machine Learning on Big Data|
[Activity] Installing Spark – Part 1
[Activity] Installing Spark – Part 2
Spark and the Resilient Distributed Dataset (RDD)
[Activity] Decision Trees in Spark
[Activity] K-Means Clustering in Spark
TF / IDF
[Activity] Searching Wikipedia with Spark
[Activity] Using the Spark 2.0 DataFrame API for MLlib
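The course applies TF/IDF at scale with Spark, but the math itself is simple enough to sketch in plain Python (a toy three-document corpus below, not the Wikipedia data used in the lectures):

```python
import math
from collections import Counter

# A toy three-document corpus standing in for Wikipedia articles
docs = [
    "the quick brown fox".split(),
    "the lazy dog".split(),
    "the quick dog jumps high".split(),
]

def tf_idf(term, doc, corpus):
    # Term frequency: how prominent the term is within this document
    tf = Counter(doc)[term] / len(doc)
    # Inverse document frequency: a term found in every document scores
    # zero; rare, distinctive terms score high
    doc_freq = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / doc_freq)

print(tf_idf("the", docs[0], docs))  # 0.0 -- "the" appears everywhere
print(tf_idf("fox", docs[0], docs))  # positive -- "fox" is distinctive
```

Ranking documents by a query term’s TF/IDF score is the essence of the Wikipedia search activity in this section.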
|Section 9: Experimental Design|
A/B Testing Concepts
T-Tests and P-Values
[Activity] Hands-on With T-Tests
Determining How Long to Run an Experiment
A/B Test Gotchas
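A t-test on A/B results takes one line with SciPy; here’s a sketch on simulated revenue-per-user data (the $1 lift is invented for the example):

```python
import numpy as np
from scipy import stats

# Simulated revenue-per-user for control (A) and treatment (B) groups;
# the treatment genuinely lifts the mean by $1
rng = np.random.default_rng(7)
a = rng.normal(25.0, 5.0, 10000)
b = rng.normal(26.0, 5.0, 10000)

# An independent-samples t-test asks: how likely is a difference this
# large if both groups really came from the same distribution?
t_stat, p_value = stats.ttest_ind(b, a)
print(t_stat, p_value)  # large t-statistic, vanishingly small p-value
```

A tiny p-value means the lift is very unlikely to be a fluke – deciding how small is small enough, and how long to keep running, is what the rest of this section covers.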
|Section 10: You made it!|
More to Explore
Don’t Forget to Leave a Rating!
Bonus Lecture: Discounts on my Spark and MapReduce courses!
Frank Kane, Data Miner and Software Engineer
Frank Kane spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers around the clock. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and on teaching big data analysis.