Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: melahi_ahmed@yahoo.fr (Ahmed)
Newsgroups: comp.lang.forth
Subject: Neural networks from scratch in forth
Date: Mon, 2 Dec 2024 20:12:56 +0000
Organization: novaBBS
Message-ID: <06eabe944364625b1eba7ea6e09791ad@www.novabbs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
	logging-data="973784"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="t+/9LUKLIiUqIe6reyFE7me/EcA/Gr17dRXgwnADesE";
User-Agent: Rocksolid Light
X-Rslight-Site: $2y$10$pSbrbIJ1y983tLodk9GdAOT5qbQi3DR70p8o1j2Y8XkYtCaANCAc.
X-Rslight-Posting-User: 5f6b2e70af503e44dad56966aa15d35bdef29623
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 3704
Lines: 106

Hi,
Here is a session (with Gforth) using the neural-network code
(neural_networks.fs), applied to the XOR operation.

----------------the session begins here---------
Gforth 0.7.9_20200709
Authors: Anton Ertl, Bernd Paysan, Jens Wilke et al., for more type
`authors'
Copyright © 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<https://gnu.org/licenses/gpl.html>
Gforth comes with ABSOLUTELY NO WARRANTY; for details type `license'
Type `help' for basic help

\ this is a session using neural networks for the XOR operation  ok
include neural_networks.fs
neural_networks.fs:134:1: warning: redefined b with B
locate1.fs:142:3: warning: original location ok

\ create data  ok
4 >n_samples  ok
create data1  ok
0e f, 0e f, 0e f,  ok
0e f, 1e f, 1e f,  ok
1e f, 0e f, 1e f,  ok
1e f, 1e f, 0e f,  ok
data1 >data  ok
\ this concerns the XOR operation  ok

\ create the neural network: 2 inputs, 1 output, and 2 hidden layers with 5 neurons each (see the forward-pass sketch after the session)  ok
1 5 5 2 2 neuralnet: net1  ok
' net1 is net  ok
net_layers  ok

\ activation functions   ok
' dlatan is act_func  ok
' dllinear is act_func_ol \ a linear activation function for the output layer  ok

\ setting learning rate   ok
1e-3 >eta  ok
0e >beta  ok

\ tolerance and relative tolerance  ok
1e-4 >tol  ok
0e >rtol  ok

\ epochs  ok
1000000 >epochs  ok
\ this is the maximum number of epochs; the algorithm terminates when the Cost is less than the tolerance tol  ok

\ setting display steps when learning  ok
1000 >display_step  ok

\ adaptation of eta to speed up learning if possible  ok
false >adapt_eta  ok

\ re-initialize the weights and biases each time the learning phase is redone  ok
true >init_net  ok

\ method to initialize weights and biases  ok
' init_weights_2 is init_weights  ok
' init_biases_2 is init_biases  ok

\ now we launch the learning (backpropagation algorithm; see the notes after the session)  ok
learn
Learning...
-----------
epochs| Cost
------+ ----
0    1.9799033462046
1000    0.478161583121087
2000    0.435711003426376
3000    0.376641058924564
4000    0.289059769511348
5000    0.175586135423502
6000    0.0717553727810072
7000    0.0181228454797771
8000    0.00315094688675379
9000    0.000449783250624701  ok

\ now we verify it  ok
test
inputs | outputs (desired outputs)
-------+--------------------------
0. 0.  |  0.006715207738167  (0. )
0. 1.  |  0.991841706392265  (1. )
1. 0.  |  0.993839285400743  (1. )
1. 1.  |  0.00680589396777978  (0. )   ok

\ we can also do predictions  ok
0e 1e to_inputs forward_pass .outputs
out_n°| value
------+------
0     | 0.991841706392265  ok
\ which is correct (it approximately equals 1)   ok

-----------the session finishes here----------------------
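
A few notes on what the session does, for readers who do not have
neural_networks.fs in front of them.

The forward pass through one fully connected layer is just a weighted
sum plus a bias, pushed through the activation function. Here is a
minimal, self-contained Gforth sketch of that idea only; it is not
code from neural_networks.fs, the names are invented for the example,
and a plain FATAN stands in for the hidden-layer activation.

2 constant n-in                     \ layer inputs
5 constant n-out                    \ layer outputs (neurons)
create weights  n-in n-out * floats allot   \ one row per neuron
create biases   n-out floats allot
create xs       n-in  floats allot  \ layer input vector
create ys       n-out floats allot  \ layer output vector

: weight ( o i -- addr )  swap n-in * + floats weights + ;
: neuron ( o -- ) ( F: -- y )       \ atan( b[o] + sum_i w[o][i]*x[i] )
   dup floats biases + f@           \ start from the bias
   n-in 0 do
      dup i weight f@  xs i floats + f@ f*  f+
   loop  drop  fatan ;
: layer-forward ( -- )
   n-out 0 do  i neuron  ys i floats + f!  loop ;

In the library, 0e 1e to_inputs forward_pass .outputs chains this kind
of computation through all the layers of net1 at once.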
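
For the learning phase, backpropagation also needs the derivative of
the activation function. For an arctangent activation the derivative
is 1/(1+x^2); for the linear output layer it is simply 1. As a sketch
only (again, this is not the library's dlatan):

: atan-deriv ( F: x -- dy/dx )  fdup f* 1e f+ 1e fswap f/ ;  \ 1/(1+x^2)

1.5e atan-deriv f.  \ prints about 0.307692

For a squared-error cost, for example, the error term at the linear
output is just (output - target); the hidden layers' error terms are
obtained by propagating it back through the weights and multiplying by
this derivative at each neuron.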
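
About >eta and >beta: eta is the gradient-descent step size, and beta
plays the role of a momentum coefficient in the sketch below (it is 0e
in this session, so the update reduces to plain gradient descent).
Schematically, a single weight would then be updated like this; the
names are invented and this is not the actual update code of
neural_networks.fs:

fvariable my-eta   1e-3 my-eta  f!  \ step size
fvariable my-beta  0e   my-beta f!  \ momentum coefficient
fvariable my-w     0e   my-w    f!  \ one weight
fvariable my-dw    0e   my-dw   f!  \ its previous change

: update-weight ( F: grad -- )      \ dw := -eta*grad + beta*dw
   my-eta f@ f* fnegate             \ -eta * dC/dw
   my-beta f@ my-dw f@ f* f+        \ ... + beta * previous change
   fdup my-dw f!                    \ remember it for the next step
   my-w f@ f+ my-w f! ;             \ w := w + dw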

The program works with Gforth, iForth and VFX Forth.

Ahmed

--