
Quasi-randomized nnetworks in Julia, Python and R

This article was first published on T. Moudiki's Webpage - Python, and kindly contributed to python-bloggers. (You can report issues about the content on this page here.)
Want to share your content on python-bloggers? click here.

nnetsauce, a package for quasi-randomized supervised learning (classification and regression), is currently available for R and Python. For more details on nnetsauce, you can read these posts.
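As a rough illustration of the idea behind quasi-randomized networks (this is a sketch, not nnetsauce's actual implementation): the original features are augmented with a hidden layer whose weights are not trained but fixed in advance — nnetsauce uses quasi-random sequences for this, while the sketch below stands in a seeded random generator — and a simple linear model is then fitted on the augmented features. All names and parameter values here are illustrative.

```python
# Illustrative sketch of a quasi-randomized network (not nnetsauce's code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123
)

# Hidden layer with fixed (untrained) weights; a seeded RNG stands in
# for the quasi-random sequences used by nnetsauce.
rng = np.random.default_rng(123)
W = rng.uniform(-1.0, 1.0, size=(X.shape[1], 9))  # 9 hidden features

def augment(X, W):
    """Concatenate original features with nonlinear hidden features."""
    return np.hstack([X, np.tanh(X @ W)])

# A plain linear model is fitted on the augmented features.
clf = LogisticRegression(max_iter=10000)
clf.fit(augment(X_train, W), y_train)
score = clf.score(augment(X_test, W), y_test)
print(score)
```

The point of the construction is that only the linear readout is estimated, which keeps training fast while the fixed nonlinear features capture interactions.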

I’ve always wanted to port nnetsauce to the Julia language. However, in the past few years, there was a little timing overhead (more precisely, a lag) when I tried to do that with Julia’s PyCall, based on my Python source code. This overhead seems to have ‘disappeared’ now.

nnetsauce is not a Julia package yet, but you can already use nnetsauce in Julia.

Here’s how I did it on Ubuntu Linux:

Contents

1 – Install Julia
2 – Example using a nnetsauce classifier in Julia language

1 – Install Julia

See also: https://www.digitalocean.com/community/tutorials/how-to-install-julia-programming-language-on-ubuntu-22-04.

Run (terminal):

wget https://julialang-s3.julialang.org/bin/linux/x64/1.9/julia-1.9.4-linux-x86_64.tar.gz

Run (terminal):

tar zxvf julia-1.9.4-linux-x86_64.tar.gz

Run (terminal) (this opens the file in VSCode; use your favorite editor here):

code ~/.bashrc

Add to .bashrc (last line, adjusting the path to where you extracted Julia):

export PATH="$PATH:julia-1.9.4/bin"

Run (terminal):

source ~/.bashrc

Run (terminal):

julia nnetsauce_example.jl

2 – Example using a nnetsauce classifier in Julia language

For Python users: notice that this is basically Python ^^

using Pkg
ENV["PYTHON"] = ""  # replace with your Python path
Pkg.add("PyCall")
Pkg.build("PyCall")
Pkg.add("Conda")
Pkg.build("Conda")

using PyCall
using Conda

Conda.add("pip")  # Ensure pip is installed
Conda.pip_interop(true)  # Enable pip interop
Conda.pip("install", "scikit-learn")  # Install scikit-learn
Conda.pip("install", "jax")  # /!\ Only on Linux or macOS: Install jax
Conda.pip("install", "jaxlib")  # /!\ Only on Linux or macOS: Install jaxlib
Conda.pip("install", "nnetsauce")  # Install nnetsauce
Conda.add("numpy")

np = pyimport("numpy")
ns = pyimport("nnetsauce")
sklearn = pyimport("sklearn")


# 1 - breast cancer dataset

dataset = sklearn.datasets.load_breast_cancer()

X = dataset["data"]
y = dataset["target"]

X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, 
test_size=0.2, random_state=123)

clf = ns.Ridge2MultitaskClassifier(n_hidden_features=9, dropout=0.43, n_clusters=1, 
lambda1=1.24023438e+01, lambda2=7.30263672e+03)

@time clf.fit(X=X_train, y=y_train) # timing?

print("\n\n Model parameters: \n\n")
print(clf.get_params())

print("\n\n Testing score: \n\n") # Classifier's accuracy
print(clf.score(X_test, y_test)) # Must be: 0.9824561403508771
print("\n\n")
# 2 - wine dataset

dataset = sklearn.datasets.load_wine()

X = dataset["data"]
y = dataset["target"]

X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, 
test_size=0.2, random_state=123)

clf = ns.Ridge2MultitaskClassifier(n_hidden_features=15,
dropout=0.1, n_clusters=3, 
type_clust="gmm")

@time clf.fit(X=X_train, y=y_train) # timing?

print("\n\n Model parameters: \n\n")
print(clf.get_params())

print("\n\n Testing score: \n\n") # Classifier's accuracy
print(clf.score(X_test, y_test)) # Must be 1.0
print("\n\n")
To leave a comment for the author, please follow the link and comment on their blog: T. Moudiki's Webpage - Python.
