# Articles from The Pleasure of Finding Things Out: A blog by James Triveri

### Gradient Descent for Logistic Regression

February 1, 2024 | The Pleasure of Finding Things Out: A blog by James Triveri

Within the GLM framework, model coefficients are estimated using iterative reweighted least squares (IRLS), sometimes referred to as Fisher Scoring. This works well, but becomes inefficient as the size of the dataset increases: IRLS relies on th...
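The post's own code isn't excerpted here, but the idea can be sketched as plain batch gradient descent on the logistic log-likelihood (a minimal NumPy sketch; `fit_logistic` and its defaults are illustrative, not the article's implementation):

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps the linear predictor z = X @ theta into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    # Batch gradient descent on the average negative log-likelihood.
    # X: (n, p) design matrix (include a column of ones for the intercept).
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
        theta -= lr * grad
    return theta
```

Unlike IRLS, each step costs only matrix-vector products, which is why gradient methods scale better as the dataset grows.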

### Bessel’s Correction

February 1, 2024

Bessel’s correction is the use of $n - 1$ instead of $n$ in the sample variance formula, where $n$ is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. Recall that bias is defined as: where r...
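A quick simulation makes the correction visible (a NumPy sketch; the sample size and true variance are arbitrary choices): averaging the $n$-divisor estimator over many samples undershoots the true variance by the factor $(n-1)/n$, while the $n-1$ divisor is unbiased.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
# 100,000 samples of size n from a normal with true variance 4.
samples = rng.normal(loc=0.0, scale=2.0, size=(100_000, n))

biased = samples.var(axis=1, ddof=0).mean()    # divide by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divide by n - 1 (Bessel)
# biased lands near 4 * (n - 1) / n = 3.2; unbiased lands near 4.
```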

### Denoising Signals using the FFT

February 1, 2024

The Discrete Fourier Transform (DFT) turns a data vector into a sum of sine/cosine components; it is, in effect, a Fourier series applied to sampled data rather than to analytic functions. Why do we perform the DFT? Because the features typically of interest aren’t always...
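The denoising recipe can be sketched in a few lines of NumPy (the signal, noise level, and power threshold below are illustrative, not the article's): transform, zero the coefficients whose power sits at the noise floor, and invert.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
# Two tones buried in Gaussian noise.
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + rng.normal(scale=0.8, size=n)

coeffs = np.fft.fft(noisy)            # O(n log n) via the FFT
power = np.abs(coeffs) ** 2 / n
coeffs[power < 10.0] = 0.0            # zero everything near the noise floor
denoised = np.fft.ifft(coeffs).real
```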

### GeoHashing from Scratch in Python

February 1, 2024

I recently became interested in GeoHashing, and wanted to develop an understanding of the algorithm with the goal of implementing it myself. I was surprised to find it to be quite simple and intuitive. In what follows, I’ll demonstrate how to ge...
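The article's walkthrough isn't reproduced here, but the core of the standard geohash encoding fits in one function: alternately bisect the longitude and latitude intervals, emit one bit per bisection, and pack every five bits into a base32 character.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def geohash_encode(lat, lon, precision=9):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, bit_count = [], 0, 0
    refine_lon = True  # even-indexed bits refine longitude, odd ones latitude
    while len(chars) < precision:
        if refine_lon:
            mid = (lon_lo + lon_hi) / 2
            bit = int(lon >= mid)
            lon_lo, lon_hi = (mid, lon_hi) if bit else (lon_lo, mid)
        else:
            mid = (lat_lo + lat_hi) / 2
            bit = int(lat >= mid)
            lat_lo, lat_hi = (mid, lat_hi) if bit else (lat_lo, mid)
        bits = (bits << 1) | bit
        bit_count += 1
        refine_lon = not refine_lon
        if bit_count == 5:              # five bits -> one base32 character
            chars.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(chars)
```

Each extra character narrows the bounding box by another factor of 32, so longer hashes mean finer locations.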

### Generating Correlated Random Samples in Python

February 1, 2024

An analysis may require the ability to generate correlated random samples. For example, imagine we have monthly returns for three financial indicators over a 20-year period. We are interested in modeling these returns using parametric distributi...
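One common route, sketched here with NumPy (the correlation matrix below is illustrative): factor the target correlation matrix with a Cholesky decomposition and multiply it into independent standard normal draws; non-normal marginals can then be obtained by transforming each row afterward.

```python
import numpy as np

# Illustrative target correlation among three monthly return series.
corr = np.array([
    [1.0, 0.6, 0.3],
    [0.6, 1.0, 0.5],
    [0.3, 0.5, 1.0],
])

rng = np.random.default_rng(0)
L = np.linalg.cholesky(corr)        # corr = L @ L.T
z = rng.standard_normal((3, 240))   # 20 years of monthly iid draws
x = L @ z                           # rows now carry the target correlation
```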

### A Matrix Factorization Approach to Linear Regression

February 1, 2024

This post is intended to shed light on why the closed-form solution to linear regression estimates is avoided in statistical software packages. We start by deriving the solution to the normal equations within the standard multivariat...
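A small NumPy comparison of the two routes (simulated data with illustrative dimensions, not the article's example): forming $X^{T}X$ for the normal equations squares the condition number of the problem, whereas a QR factorization solves the same least-squares system without ever forming $X^{T}X$, which is the approach statistical software favors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Closed form via the normal equations: solve (X'X) beta = X'y.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Factorization route: X = QR, then solve the triangular system R beta = Q'y.
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)
```

On well-conditioned data the two agree to many digits; the difference shows up when the columns of $X$ are nearly collinear.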