List of Topics
1. What is machine learning, and how does it work?
What is machine learning?
What are the two main categories of machine learning?
What are some examples of machine learning?
How does machine learning "work"?
2. Setting up Python for machine learning: scikit-learn and Jupyter Notebook
What are the benefits and drawbacks of scikit-learn?
How do I install scikit-learn?
How do I use the Jupyter Notebook?
What are some good resources for learning Python?
3. Getting started in scikit-learn with the famous iris dataset
What is the famous iris dataset, and how does it relate to machine learning?
How do we load the iris dataset into scikit-learn?
How do we describe a dataset using machine learning terminology?
What are scikit-learn's four key requirements for working with data?
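As a minimal sketch of this topic (assuming scikit-learn's bundled datasets module), the iris dataset can be loaded and inspected like so:

```python
from sklearn.datasets import load_iris

# Load the iris dataset as a Bunch object (a dictionary-like container)
iris = load_iris()
X, y = iris.data, iris.target   # feature matrix and response vector

print(X.shape)           # 150 observations, 4 features
print(iris.target_names) # the three iris species
```

Note the terminology: each row of X is an observation, each column a feature, and y holds the response to be predicted.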
4. Training a machine learning model
What is the K-nearest neighbors classification model?
What are the four steps for model training and prediction in scikit-learn?
How can I apply this pattern to other machine learning models?
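The four-step pattern asked about above can be sketched as follows (a hedged example using K-nearest neighbors on iris; the same import/instantiate/fit/predict shape applies to other scikit-learn estimators):

```python
# Step 1: import the model class
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X, y = iris.data, iris.target

# Step 2: instantiate the estimator, setting its tuning parameters
knn = KNeighborsClassifier(n_neighbors=5)

# Step 3: fit the model with data (model "learns" from X and y)
knn.fit(X, y)

# Step 4: predict the response for a new, unseen observation
prediction = knn.predict([[3, 5, 4, 2]])
print(prediction)
```

Swapping in a different model class at step 1 (for example, LogisticRegression) leaves steps 2 through 4 unchanged.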
5. Comparing machine learning models in scikit-learn
How do I choose which model to use for my supervised learning task?
How do I choose the best tuning parameters for that model?
How do I estimate the likely performance of my model on out-of-sample data?
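One common way to estimate out-of-sample performance, sketched here under the assumption of a simple train/test split (a specific random_state is chosen only for reproducibility):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hold out a portion of the data to simulate "future" observations
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)

# Compare two candidate models on the same held-out test set
results = {}
for model in (LogisticRegression(max_iter=1000), KNeighborsClassifier(n_neighbors=5)):
    model.fit(X_train, y_train)
    results[type(model).__name__] = accuracy_score(y_test, model.predict(X_test))

print(results)
```

Whichever model scores higher on the test set is the more promising candidate, though a single split gives a high-variance estimate (a limitation topic 7 addresses).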
6. Data science pipeline: pandas, seaborn, scikit-learn
How do I use the pandas library to read data into Python?
How do I use the seaborn library to visualize data?
What is linear regression, and how does it work?
How do I train and interpret a linear regression model in scikit-learn?
What are some evaluation metrics for regression problems?
How do I choose which features to include in my model?
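A sketch of the pandas-to-regression workflow this topic describes. The inline DataFrame below is hypothetical stand-in data (in practice you would use pd.read_csv on a real file); the column names TV, radio, and sales are illustrative:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data standing in for pd.read_csv("some_file.csv")
data = pd.DataFrame({
    "TV":    [230.1, 44.5, 17.2, 151.5, 180.8],
    "radio": [37.8, 39.3, 45.9, 41.3, 10.8],
    "sales": [22.1, 10.4, 9.3, 18.5, 12.9],
})

# Choose feature columns and the response column
X = data[["TV", "radio"]]
y = data["sales"]

model = LinearRegression()
model.fit(X, y)

# Interpret: each coefficient is the predicted change in the response
# for a one-unit increase in that feature, holding the others fixed
print(model.intercept_, model.coef_)
```

Feature selection can then proceed by refitting with different column subsets and comparing an evaluation metric such as RMSE on held-out data.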
7. Cross-validation for parameter tuning, model selection, and feature selection
What is the drawback of using the train/test split procedure for model evaluation?
How does K-fold cross-validation overcome this limitation?
How can cross-validation be used for selecting tuning parameters, choosing between models, and selecting features?
What are some possible improvements to cross-validation?
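K-fold cross-validation, as asked about above, can be sketched with scikit-learn's cross_val_score helper (10 folds shown here; the fold count is a common choice, not a requirement):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5)

# 10-fold cross-validation: each fold serves once as the test set,
# producing 10 accuracy scores instead of one high-variance estimate
scores = cross_val_score(knn, X, y, cv=10, scoring="accuracy")
print(scores.mean())
```

Averaging the fold scores addresses the train/test split drawback: the estimate no longer depends on one particular random partition of the data.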
8. Efficiently searching for optimal tuning parameters
How can K-fold cross-validation be used to search for an optimal tuning parameter?
How can this process be made more efficient?
How do you search for multiple tuning parameters at once?
What do you do with those tuning parameters before making real predictions?
How can the computational expense of this process be reduced?
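Searching multiple tuning parameters at once, as this topic asks, can be sketched with GridSearchCV (the parameter ranges below are illustrative choices, not prescribed values):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Every combination of these parameter values will be cross-validated
param_grid = {
    "n_neighbors": list(range(1, 31)),
    "weights": ["uniform", "distance"],
}

grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=10, scoring="accuracy")
grid.fit(X, y)

print(grid.best_params_, grid.best_score_)
```

Before making real predictions, the winning parameters should be used to refit the model on all available data; when the grid is too expensive, RandomizedSearchCV samples a fixed number of combinations instead of trying them all.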
9. Evaluating a classification model
What is the purpose of model evaluation, and what are some common evaluation procedures?
What is the usage of classification accuracy, and what are its limitations?
How does a confusion matrix describe the performance of a classifier?
What metrics can be computed from a confusion matrix?
How can you adjust classifier performance by changing the classification threshold?
What is the purpose of an ROC curve?
How does Area Under the Curve (AUC) differ from classification accuracy?
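The confusion matrix and AUC questions above can be sketched on a binary problem (breast cancer is used here only because ROC/AUC requires two classes; the scaler-plus-logistic pipeline is one reasonable choice, not the only one):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

y_pred = model.predict(X_test)                  # class labels (threshold 0.5)
y_prob = model.predict_proba(X_test)[:, 1]      # predicted probability of class 1

cm = confusion_matrix(y_test, y_pred)           # rows: true class, columns: predicted
auc = roc_auc_score(y_test, y_prob)             # AUC is computed from probabilities
print(cm)
print(auc)
```

This illustrates the key contrast with accuracy: AUC is driven by the ranked probabilities, so it summarizes performance across all classification thresholds rather than at the single default of 0.5.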