datam@berkeley.edu, github.com/dantetam
The purpose of this document is to summarize my relevant machine learning experience, for my own reference and for employers.
I have taken CS 189, UC Berkeley's upper-division machine learning course. From the syllabus:
classification: perceptrons, support vector machines (SVMs), Gaussian discriminant analysis (LDA, QDA),
decision trees, neural networks, convolutional neural networks, nearest neighbor search;
regression: least-squares linear regression, logistic regression, polynomial regression, ridge regression, Lasso;
density estimation: Bayesian analysis, maximum likelihood estimation (MLE);
dimensionality reduction: principal components analysis (PCA), random projection, latent factor analysis; and
clustering: k-means clustering, hierarchical clustering, spectral graph clustering.
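To make the list above concrete, here is a minimal sketch of one technique from it, ridge regression via its closed-form normal-equations solution. This is an illustrative example written for this document, not code from the course.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy example: recover the true weights from noiseless data
# (a tiny lam makes the solution essentially ordinary least squares).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, -1.0])
w = ridge_fit(X, y, lam=1e-8)
```

The regularizer `lam` trades bias for variance: larger values shrink the weights toward zero, which is the defining difference from plain least squares.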
This offering of the course relied heavily on students' individual, from-scratch implementations of the topics above,
such as neural networks trained with hand-derived analytical gradients.
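In the spirit of those from-scratch assignments, the following is a small illustrative example (not coursework) of a one-hidden-layer network on XOR, with the backward pass written out by hand rather than via autograd:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

losses = []
for _ in range(2000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass: chain rule applied by hand, layer by layer.
    dp = 2 * (p - y) * p * (1 - p) / len(X)   # dLoss/d(pre-sigmoid)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)           # back through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Plain gradient descent step.
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
```

Deriving `dp`, `dh`, and the weight gradients on paper before coding them is exactly the exercise such assignments are built around.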
I have built on this foundation with several machine learning projects, described below.
Stella, a conversational agent for Android and the web, implemented on multiple platforms: in Java, in JavaScript, and in Python with TensorFlow;
A part-of-speech tagger: a multiclass logistic regression estimates per-tag probabilities from local features, and those probabilities drive a maximum-entropy Markov model (MEMM) decoded with the Viterbi algorithm.
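The tagger's feature set and trained model are not reproduced here, but the decoding step can be sketched. Below is a minimal Viterbi decoder over precomputed per-position log-probabilities (all names and the input format are my own for illustration; in an MEMM, each transition matrix would come from the logistic regression conditioned on that position's features):

```python
import numpy as np

def viterbi(log_probs):
    """Find the highest-scoring tag sequence.

    log_probs[0] is a length-K array of initial tag log-probabilities;
    log_probs[t] for t >= 1 is a (K, K) array where entry (i, j) is
    log P(tag_t = j | tag_{t-1} = i, features at position t).
    """
    init, trans = log_probs[0], log_probs[1:]
    score = np.array(init, dtype=float)   # best score ending in each tag
    back = []                             # backpointers per position
    for mat in trans:
        cand = score[:, None] + mat       # score of reaching j via i
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0)
    # Recover the best path by following backpointers from the end.
    path = [int(score.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

Working in log space avoids numerical underflow from multiplying many small probabilities, and the dynamic program runs in O(T * K^2) for T positions and K tags.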