A new version of nnetsauce, and a new Techtonique website

As a reminder, nnetsauce, mlsauce, the querier and the teller are now hosted under the Techtonique organization – if you had starred the old repositories, you can star them again there! A new Techtonique website is also out now; it contains documentation and examples for nnetsauce, mlsauce, the querier and the teller, and is a work in progress.

Figure: New Techtonique Website

In addition, a new version of nnetsauce including nonlinear Generalized Linear Models (GLMs) has been released, both on PyPI and GitHub.

  • Installing from PyPI:
pip install nnetsauce
  • Installing from GitHub:
pip install git+https://github.com/thierrymoudiki/nnetsauce.git

I’ve been experiencing some issues when installing from PyPI lately. If you run into any, please feel free to report them to me, or use the GitHub version instead. You can still execute the following Jupyter notebook – the nonlinear GLM examples are at the bottom – to see what’s changed:

https://github.com/Techtonique/nnetsauce/blob/master/nnetsauce/demo/thierrymoudiki_040920_examples.ipynb

How do these nonlinear GLMs work? Let’s say (a highly simplified example) that we want to predict the final height of a basketball player, using his current height and his age. We’d use historical heights and ages (covariates) and final heights (response) of multiple players:

1 – The input data, the covariates, are transformed into new covariates, as we’ve already seen before for nnetsauce. You can visit this page for a refresher: References. A minimal sketch of the idea is shown after the figure below.

Figure: nnetsauce input transformation
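Conceptually, the transformation augments the original covariates with nonlinear features obtained from a hidden layer. Here is a minimal numpy sketch of the idea on the basketball example – the actual nnetsauce implementation does more (input scaling, clustering, dropout, etc.), so take the variable names and steps here as illustrative only:

import numpy as np

rng = np.random.default_rng(42)

# original covariates: current height (cm) and age (years) for five players
X = np.array([[180.0, 16.0],
              [175.0, 15.0],
              [190.0, 17.0],
              [185.0, 16.0],
              [178.0, 15.5]])

# random hidden-layer weights: 2 inputs -> 3 new features
# (nnetsauce also scales the inputs first; omitted here for brevity)
W = rng.normal(size=(2, 3))

# nonlinear activation (ReLU here) applied to the linear combination
H = np.maximum(X @ W, 0.0)

# new covariates = original covariates + their nonlinear transformations
X_new = np.column_stack((X, H))
print(X_new.shape)  # (5, 5): 2 original + 3 new covariates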

2 – A loss function – a function measuring how well the model fits the observed data – is optimized, as represented in the figure below. Currently, two optimization methods are available: stochastic gradient descent and stochastic coordinate descent. Both include, notably and amongst other hyperparameters, early stopping criteria and regularization to prevent overfitting. A usage sketch follows the figure.

Figure: GLM loss function (from the notebook)
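In practice, fitting one of these models follows the scikit-learn-like fit/predict pattern used throughout nnetsauce. Below is a minimal sketch on the basketball example, assuming the new model is exposed as GLMRegressor with the hyperparameter names shown – these names are assumptions, so refer to the notebook linked above for the actual API:

import numpy as np
import nnetsauce as ns

# toy training data: current height (cm) and age (years) -> final height (cm)
X = np.array([[180.0, 16.0],
              [175.0, 15.0],
              [190.0, 17.0],
              [185.0, 16.0]])
y = np.array([198.0, 192.0, 205.0, 200.0])

# hypothetical constructor: the class and parameter names below are
# assumptions, not the confirmed nnetsauce API – see the notebook above
model = ns.GLMRegressor(n_hidden_features=3,  # size of the hidden layer
                        optimizer="sgd")      # stochastic gradient descent

model.fit(X, y)

# predict the final height of a new player: 182 cm tall, 16 years old
print(model.predict(np.array([[182.0, 16.0]])))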

I’ll write down a more formal description of these algorithms in the future.
