# Binary Classification Metrics

01/15/2018

tags: machine learning accuracy error

Depending on the situation, the simple “number of correct classifications” error metric might not be the best metric to use in binary classification. Here, we explore several metrics and how they might be used for different problems.

# Probabilistic Classification

01/14/2018

tags: machine learning probability classification

In binary classification problems, we have an input feature vector and we’d like to classify it into one of two classes. Previously, we did this by minimizing loss functions built on activation functions. In this very long post, we’ll take a probabilistic approach to classification and detail the generative framework.

# Classification and Perceptron

01/09/2018

tags: machine learning classification perceptron

We now leave the land of predicting real-valued outputs to look at data classification. The discussion will conclude with one of the fundamental concepts behind classification, the Perceptron algorithm.

# Linear Regression: Bayesian Approach, Normal Conjugacy

01/08/2018

tags: machine learning bayesian regression

Understanding linear regression from a probabilistic perspective allows us to perform more advanced statistical inference. Today, we’ll be applying Bayesian inference concepts to linear regression. As a result, we’ll have a way to update our models’ beliefs as more data becomes available, and to account for prior knowledge when looking at data.

# Nonlinearity: Basis Functions

01/08/2018

tags: machine learning non-linearity basis functions

We often work in linear space, but you might ask how we can capture nonlinearity. The answer lies in basis functions.

# Model Selection

01/06/2018

tags: machine learning model selection overfitting

So far, we’ve looked at linear regression and K-Nearest Neighbors as potential models for estimating real-valued data. But how do we know which model is the best to use? In this post, we discuss overfitting, bias-variance decomposition, and regularization as factors when considering models.

# Linear Regression: A Probabilistic Approach

01/04/2018

tags: machine learning linear regression probability

Today, we look at regression in a probabilistic modeling context that helps us understand the reasoning behind the least squares loss function.

# Linear Regression: A Mathematical Approach

12/30/2017

tags: machine learning linear regression

In this post, we’ll take a look at linear regression from a mathematical lens, ignoring the statistical interpretation. Here, we provide the derivation and interpretation of the closed form solution for the weights.

# Introduction to Regression: K-Nearest Neighbors

12/19/2017

tags: machine learning k nearest neighbors

Here, we’ll use the K-Nearest Neighbors approach to introduce one of the core ideas of machine learning: regression.

# Introduction to Inference: Coins and Discrete Probability

09/10/2017

tags: discrete inference machine learning

In data science, it all starts with a coin. Today, we’ll talk about the fundamentals of statistical inference for discrete models: how to determine the optimal parameters given data, how to incorporate prior knowledge, and how to make predictions. This assumes familiarity with random variables and the basics of probability theory.

# Welcome to my Data Science Blog!

08/30/2017

tags: intro

Welcome to my data science blog! Here, you’ll find posts about the various things I’m working on, as well as tips and insights I’ve gained along the way.