Predictive Analytics I SDK: Neural Network Engine and Solver (Complete Source Code and Project Code)

300371674

Analog / Digital Neural Network Teacher / Solver
Intellectual Property Licensing
Part I – Basic Information
Title: Analog / Digital Neural Network Teacher / Solver
Purpose: To teach and solve neural network setups based on an analog or digital perceptron approach.
Implementation: Implemented in Delphi, Turbo Pascal 5.5, Free Pascal
Benefit: This module can be used to develop applications that use neural networks in finance, planning, strategic development, education, and other fields. The neural network processing engine can be used with the application to provide additional functionality.
Part II – Licensing Summary
Type: Non-exclusive, royalty-based license
Fee: Flat fee of $300,000.00 USD per year, with auto renewal, as the maximum royalty payment.
Territory: Non-exclusive territory use.
Assignment of Agreement: Agreements cannot be re-assigned.
Part III – Terms
Purchase indicates acceptance, approval, and agreement of the non-disclosure terms (Part V) and the royalty license (Part VI) of this agreement.
Part IV – Sample Use of Program
<< Sample Research Report >>
THE LEARNING EFFECT OF NEURAL NETWORKS
BY USING DIFFERENT PRESENTATION TECHNIQUES AND BY
INCREASING COMPLEXITY.
(USING THE XOR/PARITY EXAMPLE)
(PART 1 of 2)
BY
DAVID H. KANECKI, A.C.S., BIO. SCI.
I have performed 3 experiments using the XOR neural network. The XOR neural network is used to simulate constraint-based problem solving in neural networks. XOR networks of 2 inputs to 1 output, 3 inputs to 1 output, and 4 inputs to 1 output were used. Next, XNOR (not XOR) networks of 2 inputs to 1 output, 3 inputs to 1 output, and 4 inputs to 1 output were used.
The goal of these tests was to see how many learning steps would be needed for the neural network to learn the various inputs. The type of learning performed is associative learning, one form of learning used by organisms in the biological world.
For the 2-input to 1-output XOR network, the input data was:
X Y H
‑ ‑ ‑
0 0 0
0 1 0
1 0 0
1 1 1
The output data for each response was:
R= ( 0, 1, 1, 0 )
The input data used for the 3 input to 1 output network was:
X Y Z H
‑ ‑ ‑ ‑
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 0

The output data for each response was:
R = ( 0, 1, 1, 0, 1, 0, 0, 1)
The input data used for the 4-input to 1-output network was:
W X Y Z H
‑ ‑ ‑ ‑ ‑
0 0 0 0 0
0 0 0 1 0
0 0 1 0 0
0 0 1 1 1
0 1 0 0 0
0 1 0 1 1
0 1 1 0 1
0 1 1 1 0
1 0 0 0 0
1 0 0 1 1
1 0 1 0 1
1 0 1 1 0
1 1 0 0 1
1 1 0 1 0
1 1 1 0 0
1 1 1 1 1
The output data for each response was:
R = ( 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0 )
In the first test, the neural network was based on a digital response: it would give either a yes (1) or no (0) response. Using the Widrow rule with the beta factor set to 2.0, the number of presentations needed to learn the XOR networks of 2, 3, and 4 elements was 16, 32, and 80 respectively. In terms of the number of times the whole set had to be presented, the 2-, 3-, and 4-element XOR networks required 4, 4, and 5 repetitions of a set.
Thus, the four-element XOR network shows the beginning of complexity added to the learning phase: it took 5 repetitions of the set to learn. If complexity were not present it should have taken 4 repetitions, as with the 2- and 3-element XOR networks.
Lemma #1:
In a digital response neural network, the effect of complexity on learning starts when the number of input elements >= 4.
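The report cites the Widrow rule with the beta factor as the learning rate but does not reproduce the update formula. Below is a minimal sketch in Free Pascal of one presentation pass over the 2-element XOR set, assuming the common delta-rule form w := w + beta * (target - output) * input and an assumed firing threshold of 0.5; the engine's exact rule may differ.

program WidrowSketch;
const
  Beta = 2.0;                 { beta factor used in the report }
  NIn  = 3;                   { X, Y, and the hidden unit H }
  NPat = 4;
  A: array[1..NPat, 1..NIn] of Real =
    ((0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1));  { input patterns with H }
  R: array[1..NPat] of Real = (0, 1, 1, 0);        { desired XOR responses }
var
  W: array[1..NIn] of Real;
  p, i: Integer;
  net, y: Real;
begin
  for i := 1 to NIn do
    W[i] := 0.0;
  for p := 1 to NPat do                            { one presentation pass }
  begin
    net := 0.0;
    for i := 1 to NIn do
      net := net + A[p, i] * W[i];
    if net >= 0.5 then y := 1.0 else y := 0.0;     { digital yes/no response }
    for i := 1 to NIn do
      W[i] := W[i] + Beta * (R[p] - y) * A[p, i];  { assumed Widrow-rule update }
  end;
  for i := 1 to NIn do
    WriteLn('W[', i, '] = ', W[i]:0:2);
end.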

The second test used the same neural networks of 2, 3, and 4 elements with an analog response. To use the above examples in an analog system one needs to 1) set all 0 values to 0.1 and 2) set all 1 values to 1.0. In this example, the beta factor was set to 2.0. The neural network is designed to give a response within a tolerance factor of 0.3. Thus, an answer between -0.2 and 0.3 is interpreted as a no (0) response, between 0.7 and 1.3 as a yes (1) response, and between 0.3 and 0.7 as an abstain response.
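A minimal sketch of interpreting a single analog response using the bands just described (the handling of values falling exactly on a band boundary is an assumption):

program AnalogBands;

function Interpret(r: Real): string;
begin
  if (r >= -0.2) and (r < 0.3) then
    Interpret := 'no (0)'
  else if (r >= 0.3) and (r < 0.7) then
    Interpret := 'abstain'
  else if (r >= 0.7) and (r <= 1.3) then
    Interpret := 'yes (1)'
  else
    Interpret := 'out of range';
end;

begin
  WriteLn(Interpret(0.95));  { yes (1) }
  WriteLn(Interpret(0.50));  { abstain }
  WriteLn(Interpret(0.05));  { no (0) }
end.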
The number of presentations needed to solve the 2-, 3-, and 4-element analog neural networks was 444, 536, and over 1000 (the four-element network was not solvable). The total number of set repetitions was 111, 67, and over 63 (not solvable for the four-element network).
Thus, an analog neural network has ideal states for learning, and here the ideal is the 3-element network. The least ideal, ordered from worst to better, are the 4-element and the 2-element analog neural networks.
Lemma #2:
When the beta factor used with an analog neural network is low (i.e. 2.0):
There exists a minimum of complexity as one increases the number of elements in an analog neural network. An analog network with too small or too large a number of elements will have the greatest complexity in learning. Between the networks that are too small and too large exists a network size with the greatest learning capacity, i.e. the least number of set repetitions.
When I repeated the same experiment on the analog networks with a beta factor of 3.0, the number of presentations was over 1000 (not solvable), 816, and 512 steps for the 2-, 3-, and 4-element neural networks. The number of set repetitions came to over 250 (not solvable), 102, and 32 respectively. This means that an increased beta rate makes it easier for larger networks to solve problems, but harder for smaller networks.
Lemma #3:
When using analog neural networks, an increased beta rate (i.e. from 2.0 to 3.0) increases the learning efficiency of large networks (3 and 4 elements), but decreases the learning efficiency of small networks (2 elements).
Corollary #1:
An increased beta rate can cause small analog networks (2 elements) to go from a solvable state to an unsolvable state.
Corollary #2:
An increased beta rate can cause a large analog neural network to go from an unsolvable state to a solvable state.
In the next discussion, I will present the results of the XNOR network experiments.
<< Sample Learning Log >>
Neural Network Programmer Matrix 2.2
By David Kanecki © 1988-2010
PARAMETERS
1 1 = RWM 2=PFM 3=STL 4=IRM
MAX Presentations: 200
A Len = 3
B Len = 1
BETA = 2.0000000000E+00
LIMIT = 7.0000000000E-01
OPTION = 1 Transfer Function 1 Selected
UNITY = 100
W Matrix
200
200
-400
PS #1 TS#1 BEC = 0 BER = 0.0 RESULT =TRUE
B CAL= 0 Result: TRUE A= 0 0 0 B= 0
PS #2 TS#2 BEC = 0 BER = 0.0 RESULT =TRUE
B CAL= 100 Result: TRUE A= 0 100 0 B= 100
PS #3 TS#3 BEC = 0 BER = 0.0 RESULT =TRUE
B CAL= 100 Result: TRUE A= 100 0 0 B= 100
PS #4 TS#4 BEC = 0 BER = 0.0 RESULT =TRUE
B CAL= 0 Result: TRUE A= 100 100 100 B= 0
Number of Presentations: 4
======= FINAL MATRIX =========
===== W Matrix ======
200
200
-400
===== W Matrix ======
<< User’s Manual and Learning Guide >>
K-Artificial Neural Network Solver 21 and 30
(An Innovative and Easy Method to Solve Complex Data Problems)
By Diana Kaneck, A.C.S., Bio. Sci.

Kanecki Artificial Neural Network 21 Users Manual
APPLICATIONS
1.0 INTRODUCTION
1.1 USE IN DECISION MAKING
1.2 USE IN NEURAL NETWORK RESEARCH
DATA FILE SET UP
2.0 INTRODUCTION
2.0.1 KANN21 Parameter File - Line 1
2.0.2 KANN21 Parameter File - Line 2
2.0.3 KANN21 Parameter File - Line 3
2.0.4/5 KANN21 Parameter File - Line 4 and Line 5
2.0.6 KANN21 Parameter File - Line 6
2.0.7 KANN21 Parameter File - Line 7
2.0.8 KANN21 Parameter File - Line 8
2.0.9 KANN21 Parameter File - Line 9
2.0.10 KANN21 Parameter File - Line 10
2.1 ADATA FILE
2.2 BDATA FILE
DATA FILE OUTPUT
3.0 INTRODUCTION
3.1 EXPLANATION FORMAT 1
3.2 EXPLANATION FORMAT 2
3.3 EXPLANATION FORMAT 3
3.4 EXPLANATION FORMAT 4
3.5 EXPLANATION FORMAT 2 - Simulation Mode
4.0 Learning Strategies for Neural Networks
4.1 Learning Strategies
4.1.1 Parity, 1 neuron
4.1.2 Power of 2, Log2(#A MAT Neurons)
4.1.3 Power of 1, #A MAT neurons
4.1.4 Symmetry, 1 bit
4.2 Comments on Learning Strategies

Kanecki Artificial Neural Network 30 Users Manual
APPLICATIONS
1.0 INTRODUCTION
1.1 USE IN DECISION MAKING
1.2 USE IN NEURAL NETWORK RESEARCH
DATA FILE SET UP
2.0 INTRODUCTION
2.0.1 KANN30 Parameter File - Line 1
2.0.2 KANN30 Parameter File - Line 2
2.0.3 KANN30 Parameter File - Line 3
2.0.4/5 KANN30 Parameter File - Line 4 and Line 5
2.0.6 KANN30 Parameter File - Line 6
2.0.7 KANN30 Parameter File - Line 7
2.0.8 KANN30 Parameter File - Line 8
2.0.9 KANN30 Parameter File - Line 9
2.0.10 KANN30 Parameter File - Line 10
2.0.11 KANN30 Parameter File - Line 11
2.1 ADATA FILE
2.2 BDATA FILE
DATA FILE OUTPUT
3.0 INTRODUCTION
3.1 EXPLANATION FORMAT 1
3.2 EXPLANATION FORMAT 2
3.3 EXPLANATION FORMAT 3
3.4 EXPLANATION FORMAT 4
3.5 EXPLANATION FORMAT 2 - Simulation Mode
4.0 Learning Strategies for Neural Networks
4.1 Learning Strategies
4.1.1 Parity, 1 neuron
4.1.2 Power of 2, Log2(#A MAT Neurons)
4.1.3 Power of 1, #A MAT neurons
4.1.4 Symmetry, 1 bit
4.2 Comments on Learning Strategies
5.0 Processing Time

Kanecki Artificial
Neural Network 21
Users Manual
By
Diana Kaneck,
ACS/Bio. Sci.

APPLICATIONS
1.0 INTRODUCTION
The KANN21 program can be used as an aid in decision making or as an aid in neural network research.
The way KANN21 can be used as an aid in decision making is by linking and organizing various information. To organize the information the user needs to specify an input and an output data set.
1.1 USE IN DECISION MAKING
Each neuron in the data set represents a concept. For example, the input neuron 1 may represent a concept of the color blue, neuron 2 may represent the concept of the color black, neuron 3 may represent the concept of 2 wheels, and neuron 4 may represent the concept of 4 wheels. Next, the output neuron 1 may represent the concept of car and neuron 2 may represent the concept of bicycle.
The neuron/concept is similar to a punched card methodology. A neuron/concept is punched if the concept is available and not punched if the concept is not available. A punch is indicated by the value 1 for the associated neuron, and a no punch is indicated by a value of zero for the associated neuron.
The KANN21 defines the input neuron set as the A MAT and the file ADATA.DAT. The output neuron set is defined as the B MAT and the file BDATA.DAT.
This is how the A MAT example would be coded:
0 1 1 0
1 0 1 0
1 0 0 1
9999 9999 9999 9999
The first line represents a set of concepts where something is black and has 2 wheels. The second line represents something that is blue and has 2 wheels. The third line represents something that is black and has 4 wheels. The fourth line is the terminating set. This tells the program that no more data is available. The above data is stored in file ‘ADATA.DAT’.
This is how the B MAT data would be coded:
0 1
0 1
1 0
9999 9999
The first line indicates that something is a bike. Also, the second line indicates that something is a bike. And, the third line indicates something is a car. The fourth line is the terminating data set. This means that no more B MAT data is available. This would be stored in file ‘BDATA.DAT’.
Finally, line 1 in A MAT represents the input neurons for data set 1 and line 1 in B MAT represents the output neurons for data set 1. Also, line 2 in A MAT represents the input neurons for data set 2 and line 2 in B MAT represents the output neurons in data set 2.
1.2 USE IN NEURAL NETWORK RESEARCH
One example of how KANN21 can be used in neural network research is the setup required to solve circuit problems such as the XOR problem.
In one solution to the XOR problem the A MAT or input neurons had 3 neurons representing 2 state bits and 1 hidden unit. A hidden unit is a neuron or set of neurons that aid a neural network in solving a problem. I will give a more detailed discussion of hidden units in another section.
The output neuron or B MAT consisted of 1 bit representing the ON state by the value of 1 or the OFF state by the value of 0.
The A MAT was coded as follows:
0 1 0
1 0 0
1 1 1
9999 9999 9999
The first line represents STATE BIT 1 as ON. The second line represents STATE BIT 2 as ON. The third line represents STATE BIT 1 and 2 as ON and the hidden unit as ON. The fourth line represents the terminating data set.
The B MAT or output neuron would be coded as:
1
1
0
9999
The first line indicates the result is ON. The second line represents the result is ON. The third line represents the result is OFF. Finally, the fourth line represents the terminating data set card.

In the example line 1 in A MAT are the input neurons and line 1 in B MAT is the output neuron for data set 1. And, line 2 in A MAT are the input neurons and line 2 in B MAT is the output neuron for data set 2. The same pattern follows for each line.
A historical sidelight of the XOR problem is that neural networks originated in 1943, yet it was not until 1969 that the XOR problem was solved. The reason for the delay was that without the hidden unit the XOR matrix became all zeros: one neuron canceled out another neuron. With the hidden unit the neuron cancellation does not occur.

DATA FILE SET UP
2.0 INTRODUCTION
The KANN21 program uses four input files. They are 'ADATA.DAT', 'BDATA.DAT', 'KANN21.DAT', and 'NUTEST.DAT'.
The file ‘KANN21.DAT’ includes 9 parts of information needed by the program. The 10th line onward contains the W matrix to be read if format option 1 is specified.
File KANN21.DAT resides on the logged drive. Thus, if the prompt before the KANN21 program was executed was:
A>
The program will look for the file on drive A. And, if the file was not found the following message would be printed:
FILE KANN21.DAT not available
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has finished its processing.
The parameter file: KANN21.DAT would look like this:
2 final results only
100 Maximum number of learning sessions
3 Number of A MAT neurons
1 Number of B MAT neurons
0 No Simulation file ‘NUTEST.DAT’
2.0 Update ratio
0.455 Threshold limit
1 Transfer function 1
10 unity value 10 = 1.0
Below is a line by line explanation of the parameters and their uses.

2.0.1 KANN21 Parameter File ‑ Line 1
The first line contains a number 1 thru 4. This is the format option number. A format option of 1 means that the program reads the W matrix from the data instead of generating it from the A MAT and B MAT data. This is useful if a training set has been learned and a simulation of new data is desired. Format option 2 means that the program will print out only the parameters, data input, final W matrix, and simulation information. No intermediate information will be printed. A format option of 3 means that intermediate information will be printed, such as the presentation number (PS#), test set number (TS#), neuron error count (BEC), total neuron error count (BER), and learning result. This information is useful to determine if a system is learning. For example, a system is learning if the total neuron error count between similar test sets is decreasing. A system is not learning if it remains the same or increases. Finally, format option 4 will display a neuron listing. This will indicate which neurons are on and which neurons are off. The listing is presented in hexadecimal format, which allows all 128 neurons to be displayed on 1 line. The table below indicates which group of 4 neurons is on for each hexadecimal digit:
HEX  Neurons ON
0    0 0 0 0
1    0 0 0 1
2    0 0 1 0
3    0 0 1 1
4    0 1 0 0
5    0 1 0 1
6    0 1 1 0
7    0 1 1 1
8    1 0 0 0
9    1 0 0 1
A    1 0 1 0
B    1 0 1 1
C    1 1 0 0
D    1 1 0 1
E    1 1 1 0
F    1 1 1 1
For example, if A MAT were "01EF", the neurons ON (indicated by a 1) and the neurons OFF (indicated by a 0) in the neuron list would be:
0 0 0 0 0 0 0 1 1 1 1 0 1 1 1 1
For example, if B MAT were "EF7", the neuron list would be:
1 1 1 0 1 1 1 1 0 1 1 1
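A minimal sketch of expanding such a hexadecimal neuron listing into individual ON/OFF values, 4 neurons per hex digit with the most significant neuron first, per the table above:

program HexNeurons;
var
  s: string;
  i, v, bit: Integer;
begin
  s := '01EF';                          { the A MAT listing from the example }
  for i := 1 to Length(s) do
  begin
    case s[i] of
      '0'..'9': v := Ord(s[i]) - Ord('0');
      'A'..'F': v := Ord(s[i]) - Ord('A') + 10;
    else
      v := 0;                           { ignore unexpected characters }
    end;
    for bit := 3 downto 0 do
      Write((v shr bit) and 1, ' ');    { most significant neuron first }
  end;
  WriteLn;                              { 0 0 0 0 0 0 0 1 1 1 1 0 1 1 1 1 }
end.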

2.0.2 KANN21 Parameter File ‑ Line 2
The second line contains the maximum number of learning sessions. This number can range from 0 to about 2000. If a number over 2000 is selected, a numeric overflow error could occur. The number of learning sessions is dependent upon the value of UNITY. UNITY is a value substituted for the number 1.0. For example, if UNITY were equal to 10, or 1 significant digit, the recommended number of learning sessions would be 2000. If UNITY were equal to 100, or 2 significant digits, the recommended number of learning sessions would be 200. If UNITY were 1000, or 3 significant digits, the recommended number of learning sessions would be 20. By varying the value of UNITY the resolution between differing neurons increases; thus it may be easier to solve certain types of problems.
Finally, if the value for UNITY is less than zero or greater than 1000 the program will print:
ERROR‑INVALID Unity Value Assuming Unity=10
and continue its processing.
2.0.3 KANN21 Parameter File ‑ Line 3
The third line contains the simulation status. If the value is 1, simulation data is in file 'NUTEST.DAT'. If the value is 0, no simulation data is available in file 'NUTEST.DAT' and the simulation option at the end of the program is not run. For example, if the simulation status was 1, at the end of a learning session the computer will look for the file 'NUTEST.DAT', retrieve the A MAT data, and print out the corresponding B MAT results. This is useful when the W matrix has already been generated and only results are needed, as in a decision making process. Please note that if the simulation status equals 1 and the file 'NUTEST.DAT' is not available, the program will halt and print the message:
No NUTEST.DAT Data File Available for Simulation
Terminating Program
And, the next message printed will be:
PROGRAM DONE
indicating the program has finished its processing.
2.0.4/5 KANN21 Parameter File ‑ Line 4 and line 5

The fourth line contains the number of A MAT, or input, neurons. The number of A MAT and B MAT neurons can each be from 1 to 128, but the number of A MAT neurons times the number of B MAT neurons must be less than or equal to 2700. For example, if the number of A MAT neurons were 10 and the number of B MAT neurons were 100, the total would be 1000; this is below 2700 and the program will run normally. But if 128 A MAT and 128 B MAT neurons were specified, the total would be 16384, above the 2700 level, and the program would print:
MATRIX TOO BIG
Terminating Program
And, the program will print:
PROGRAM DONE
indicating it has finished its processing.
Or, if the number of A MAT neurons or B MAT neurons were not within the range of 1 to 128, an error message would be printed. For example, if A MAT were equal to ‑1 and B MAT were equal to 128, the program would print:
A Len or B Len not within specified range
Terminating Program
And, the program will print:
PROGRAM DONE
indicating it has finished processing.
The fifth line contains the number of B MAT or output neurons. The number of B MAT can be from 1 to 128.
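A minimal sketch of the size checks described above, reproducing the documented messages:

program SizeCheck;

procedure CheckSizes(ALen, BLen: Integer);
begin
  if (ALen < 1) or (ALen > 128) or (BLen < 1) or (BLen > 128) then
  begin
    WriteLn('A Len or B Len not within specified range');
    WriteLn('Terminating Program');
    WriteLn('PROGRAM DONE');
    Halt;
  end;
  if ALen * BLen > 2700 then
  begin
    WriteLn('MATRIX TOO BIG');
    WriteLn('Terminating Program');
    WriteLn('PROGRAM DONE');
    Halt;
  end;
end;

begin
  CheckSizes(10, 100);   { 10 * 100 = 1000 <= 2700: runs normally }
  WriteLn('sizes OK');
end.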
2.0.6 KANN21 ‑ Parameter File ‑ Line 6
The sixth line contains the update ratio. This ratio must be greater than zero, and a value less than 9 is suggested. Most neural simulations use the value 0.5. The greater the update ratio, the faster the learning process. But in experiments I have performed I have found that 0.5 works best for most problems, and a value over 9 causes the system to learn and unlearn due to the high variability.

2.0.7 KANN21 ‑ Parameter File ‑ Line 7
The seventh line contains the threshold limit. The value must be greater than zero. Most neural simulations set the value between 0.45 and 0.90. The threshold is the value at or above which a neuron will be ON. For example, if a neuron has a value of 1.2 and the threshold is 0.45, the neuron will be ON because the neuron value is greater than or equal to 0.45. But if the neuron has a value of 0.2, the neuron will be OFF because 0.2 is not greater than or equal to 0.45.
2.0.8 KANN21 Parameter File ‑ Line 8
The eighth line specifies which transfer function to use. If a transfer function other than 1 or 2 is specified, the program automatically selects transfer function 1.
Transfer function 1 represents the MC‑PITTS model of B(y)=A(x)*W(x,y) where A(X) is the input neuron, W(x,y) is the neural matrix of A and B, and B(y) is the output neuron. The advantage of this function is that it generates a solution faster than transfer function 2 but the neuron can have periods of cycling between high and low values.
Transfer function 2 represents the exponential model of B(y)=1/(1+exp(‑A(x)*W(x,y))) where A(x) is the input neuron, B(y) is the output neuron, and W(x,y) is the neural matrix of A and B. The advantage of this function is that it approaches a solution with less variability than the MC‑PITTS model, but the solution time may be longer than the MC‑PITTS model.
If a transfer function other than 1 or 2 is specified transfer function 1 will be selected and the program would print:
ERROR‑INVALID Option Selected Assuming Option 1
And, the program will continue its processing.
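A sketch of the two transfer functions as given by the formulas above, computing one output neuron from real (unscaled) values; treating the matrix notation as a summation over the input neurons is an assumption:

program TransferFns;
const
  A1: array[1..3] of Real = (1.0, 1.0, 1.0);
  W1: array[1..3] of Real = (0.5, 0.5, -3.0);

{ Transfer function 1, MC-PITTS model: B(y) = sum over x of A(x)*W(x,y) }
function TF1(const A, W: array of Real): Real;
var
  x: Integer;
  s: Real;
begin
  s := 0.0;
  for x := 0 to High(A) do
    s := s + A[x] * W[x];
  TF1 := s;
end;

{ Transfer function 2, exponential model: B(y) = 1/(1+exp(-A(x)*W(x,y))) }
function TF2(const A, W: array of Real): Real;
begin
  TF2 := 1.0 / (1.0 + Exp(-TF1(A, W)));
end;

begin
  WriteLn('TF1 = ', TF1(A1, W1):0:3);   { -2.000 }
  WriteLn('TF2 = ', TF2(A1, W1):0:3);   { 0.119 }
end.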
2.0.9 KANN21 Parameter File ‑ Line 9

And, the ninth line is the value for unity, or 1. I have used the value 10 to represent the value of 1. The unity value is selected in order to optimize data storage space and calculations. The KANN21 system stores all data in 16-bit variables with a range of ‑32768 to 32767. Suggested unity values are 10, 100, and 1000. A majority of the neural network sessions I have done have used 10 as the value of unity. This allows 2000 training sets of data. The recommended number of training sets is equal to 20000 divided by the value of unity. If unity were equal to 1000, or 3 significant figures, the recommended number of training sets would be 20. If unity were equal to 100, then the recommended number of training sets would be 200.
If a unity value less than zero is specified or greater than 1000 is specified the error message printed will be:
ERROR‑INVALID Unity Value Assuming Unity=10
And, the program will continue its processing.
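A minimal sketch of the UNITY scaling described above, holding a real weight in a 16-bit variable as the value times UNITY:

program UnityScale;
const
  Unity = 10;            { 10 = 1.0, as in the parameter file }
var
  w: SmallInt;           { 16-bit variable, -32768..32767 }
begin
  w := Round(-1.5 * Unity);                    { scale a real weight }
  WriteLn('scaled = ', w);                     { -15 }
  WriteLn('actual = ', w / Unity :0:2);        { -1.50 }
  WriteLn('recommended training sets = ', 20000 div Unity);   { 2000 }
end.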
2.0.10 KANN21 Parameter File ‑ Line 10
If format option 1 was specified, line 10 onward of the parameter file is used to specify the W matrix. One way this is used is to compare new A MAT data to the neural matrix generated by a previous learning session. For example, if a neural matrix was generated to aid in decision making and new data was submitted, a result could be generated. In that case a result was wanted but the person did not wish to modify the neural matrix.
If format option 1 is specified and there isn't enough W matrix data in the parameter file, the program will print:
Unexpected End of File in KANN21.DAT
while reading W Matrix
Terminating Program
And, the program will print:
PROGRAM DONE
to indicate it has finished its processing.
2.1 ADATA FILE
The file ADATA.DAT specifies the data used for the A MAT neurons, or input neurons. The file may contain up to 128 neurons per training set, with 16 neurons per line. This means that if someone declared A Len = 128, the A MAT data would have 8 lines of 16 neurons, for 128 neurons. But if someone declared A Len = 3, the A MAT data would have 1 line of 3 neurons. Finally, if an A Len greater than 16 is specified, such as 20, there would be 2 lines of 16 neurons. This is because any specification over 16 has to be padded with neurons to the next multiple of 16. Thus, if 20 neurons were declared, the A MAT learning set would contain 16 neurons on line 1 and 16 neurons on line 2: 4 valid A MAT neurons and 12 padded neurons with the value of zero.

The last line in the ADATA.DAT file is the terminating line. Its first value is 9999, which indicates the end of the A MAT data. The same format rules apply to the terminating line as above. For example, if A Len were equal to 3, the terminating line would be:
9999 1 1
And if A Len were equal to 20, the terminating line would be:
9999 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
A sample ADATA.DAT data file for XOR problem discussed earlier looks like this:
0 1 0
1 0 0
1 1 1
9999 9999 9999
The file ADATA.DAT resides on the logged drive. Thus, if this prompt were seen:
A>
The program would look for ADATA.DAT on the A drive. And if it could not find it, an error message would be printed. The error message would say:
Files ADATA.DAT and BDATA.DAT not available
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has completed its processing.
If the file ADATA.DAT was found but not enough data was in the file, the program would print:
Unexpected End of File in ADATA.DAT
Terminating Program
And, the program would print:

PROGRAM DONE
to indicate it has finished its processing.
The ADATA file may contain up to 2000 learning sets if the value of UNITY is equal to 10 in the file KANN21.DAT.
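A minimal sketch of reading ADATA.DAT as described above, for a small A Len (the padding rules for A Len > 16 are omitted for brevity):

program ReadAData;
const
  ALen = 3;              { A Len from the parameter file }
var
  f: Text;
  a: array[1..128] of Real;
  i, nSets: Integer;
begin
  Assign(f, 'ADATA.DAT');
  {$I-} Reset(f); {$I+}
  if IOResult <> 0 then
  begin
    WriteLn('Files ADATA.DAT and BDATA.DAT not available');
    WriteLn('Terminating Program');
    Halt;
  end;
  nSets := 0;
  while not Eof(f) do
  begin
    for i := 1 to ALen do
      Read(f, a[i]);
    ReadLn(f);
    if a[1] = 9999 then
      Break;             { terminating line reached }
    Inc(nSets);
  end;
  Close(f);
  WriteLn('training sets read: ', nSets);
end.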
2.2 BDATA FILE
The file BDATA.DAT specifies the data used for the B MAT neurons, or output neurons. The file may contain up to 128 neurons per training set, with 16 neurons per line. This means that if someone declared B Len = 128, the B MAT data would have 8 lines of 16 neurons, for 128 neurons. But if someone declared B Len = 3, the B MAT data would have 1 line of 3 neurons. Finally, if a B Len greater than 16 is specified, such as 20, there would be 2 lines of 16 neurons. This is because any specification over 16 has to be padded with neurons to the next multiple of 16. Thus, if 20 neurons were declared, the B MAT learning set would contain 16 neurons on line 1 and 16 neurons on line 2: 4 valid B MAT neurons and 12 padded neurons with the value of zero.
The last line in the BDATA.DAT file is the terminating line. Its first value is 9999, which indicates the end of the B MAT data. The same format rules apply to the terminating line as above. For example, if B Len were equal to 1, the terminating line would be:
9999
And if B Len were equal to 20, the terminating line would be:
9999 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
A sample BDATA.DAT data file for XOR problem discussed earlier looks like this:
1
1
0
9999
The file BDATA.DAT resides on the logged drive. Thus, if this prompt were seen:

A>
The program would look for BDATA.DAT on the A drive. And if it could not find it, an error message would be printed. The error message would say:
Files ADATA.DAT and BDATA.DAT not available
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has completed its processing.
If the file BDATA.DAT was found but not enough data was in the file, the program would print:
Unexpected End of File in BDATA.DAT
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has finished its processing.
The BDATA file may contain up to 2000 learning sets if the value of UNITY is equal to 10 in the file KANN21.DAT.

DATA FILE OUTPUT
3.0 INTRODUCTION
The KANN21 neural network programmer/simulator allows five report options. The report option specified will give more or less detail on neural operations, modify the way the W (neural) matrix is made, or use the W matrix in a different way.
3.1 EXPLANATION FORMAT 1
With format option 1 the computer is instructed to read the W matrix. This is useful if a simulation or retraining session is desired.
The first piece of information displayed is the set of parameters specified in the file 'KANN21.DAT'.
After the 'UNITY =' line, the heading 'W Matrix' will be displayed. The values printed will be the same W matrix data that was specified in file 'KANN21.DAT'. Remember, when specifying W matrix data, always make sure the data matches the value of 'UNITY' it was generated with. For example, if the W matrix was 10, 20, 30 and the unity value for the data was 10, but a unity value of 100 was specified, the data would read as 0.10, 0.20, 0.30 as opposed to 1, 2, and 3.
After the W matrix is read, one will see something like this:
PS # 1 TS# 1 BEC = 0 BER = 0.0 RESULT =TRUE
This is called the status line. The fields in the status line are presentation number (PS#), test set number (TS#), bit error count (BEC), bit error sum (BER), and result.
The PS# indicates which learning session the program is in. When the presentation number is greater than the maximum number of learning sessions, the program will stop.
The TS# indicates what test data is being used to train the network. For example, TS# 1 means that the first A MAT and B MAT data are being used. The other reason this is posted is to aid the user in determining what data is difficult to learn.
The BEC indicates how many neurons were not learned in the B MAT data set. For example, if the BEC were equal to 1 and the B MAT length were equal to 3, then 2 B MAT neurons were learned and 1 B MAT neuron was not learned.

The BER is the cumulative sum of the past BEC values plus the current BEC divided by 2. This BER value will help one determine if a system is learning. For example, if one compares the BER values from test set 1 to test set 3, one would find the BER value between the test sets decreasing. A decreasing BER value from the past and current test sets indicates a system is learning. An increasing BER value indicates the system is not learning. A BER value that stays the same between test sets indicates the system has reached a point where no further learning can take place.
Next, the RESULT field indicates whether all the test data was learned. If the B MAT neurons produced from the A MAT neurons match the desired B MAT neurons exactly and completely, a result of TRUE is printed. But if an exact and complete match is not found, FALSE is printed.
And, when RESULT equals TRUE for all the test set data the program will stop before the maximum presentation limit. But, if the RESULT does not equal TRUE for all the test set data the program will not stop until the maximum presentation limit is reached.
Finally, a message 'Number of Presentations' will be seen. This indicates how many learning sessions were needed to learn the information, or how many attempts were made. If the RESULT field was TRUE for the last full test pass (i.e. TS# 1 to TS# 3), then the data was learned. If the RESULT field was not TRUE in the last test set, the message indicates how many attempts were made.
Below is a sample output using output format 1:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
1 1 = RWM 2=PFM 3=STL 4=IRM
MAX Presentations: 100
A Len = 3 B Len = 1
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
W Matrix
10
20
30
PS # 1 TS# 1 BEC = 0 BER = 0.0 RESULT =TRUE
PS # 2 TS# 2 BEC = 0 BER = 0.0 RESULT =TRUE
PS # 3 TS# 3 BEC = 1 BER = 0.0 RESULT =FALSE
PS # 4 TS# 1 BEC = 1 BER = 0.5 RESULT =FALSE
PS # 5 TS# 2 BEC = 1 BER = 1.5 RESULT =FALSE
PS # 6 TS# 3 BEC = 1 BER = 2.5 RESULT =FALSE
PS # 7 TS# 1 BEC = 1 BER = 3.5 RESULT =FALSE
PS # 8 TS# 2 BEC = 1 BER = 4.5 RESULT =FALSE
PS # 9 TS# 3 BEC = 1 BER = 5.5 RESULT =FALSE
PS # 10 TS# 1 BEC = 1 BER = 6.5 RESULT =FALSE
PS # 11 TS# 2 BEC = 1 BER = 7.5 RESULT =FALSE
PS # 12 TS# 3 BEC = 0 BER = 8.5 RESULT =TRUE
PS # 13 TS# 1 BEC = 0 BER = 9.0 RESULT =TRUE
PS # 14 TS# 2 BEC = 0 BER = 9.0 RESULT =TRUE
PS # 15 TS# 3 BEC = 0 BER = 9.0 RESULT =TRUE
Number of Presentations: 15
3.2 EXPLANATION FORMAT 2
Output format 2 will display the parameters, input neurons, output neurons, W matrix produced from the input and output data, number of presentations, and final matrix.
The message 'DATA INPUT' indicates where the program is reading the A MAT input neurons and B MAT output neurons to produce the W, or neural, matrix. The input and output neurons are printed in hexadecimal mode to allow 128 neurons to be printed on one line. A neuron is ON if it has the value of 1 and OFF if it has the value of zero. To identify specific neurons use the method discussed in section 2.0.1.

Note that during the training session the status line is not displayed. But after the training session the final W, or neural, matrix is displayed. This neural matrix can be used for further training or applications.
The values displayed in the W matrix are obtained by multiplying each value by UNITY. For example, 1.0 becomes 10, 0.5 becomes 5, and ‑1.5 becomes ‑15 if the unity value is equal to 10. These are called scaled numbers. The use of scaled numbers increases the number of neurons a system can have. The W matrix values printed can be substituted directly into the 'KANN21.DAT' data file if a simulation is needed. But remember to delete the blank line separating each row.
When displaying the W matrix results to other people it is suggested to display the actual numbers as opposed to the scaled numbers. To convert a scaled number to an actual number, just divide the scaled number by the value of UNITY as defined at the front of the report.
Below is a listing of the output with format option 2:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
2 1 = RWM 2=PFM 3=STL 4=IRM
MAX Presentations: 100
A Len = 3
B Len = 1
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT= 4
B MAT= 8
A MAT= 8
B MAT= 8
A MAT= E
B MAT= 0
===== W Matrix ======
‑10
‑10
‑30
Number of Presentations: 6
======= FINAL MATRIX =========
===== W Matrix ======
10
10
‑30
3.3 EXPLANATION FORMAT 3
Format option 1 and format option 3 both produce the same kind of output file. But format option 3 generates the W, or neural, matrix from the A MAT and B MAT neurons, as opposed to reading the W matrix from the file 'KANN21.DAT'.
Below is the report generated by format option 3:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
3 1 = RWM 2=PFM 3=STL 4=IRM
MAX Presentations: 100
A Len = 3
B Len = 1
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT= 4
B MAT= 8
A MAT= 8
B MAT= 8
A MAT= E
B MAT= 0

===== W Matrix ======
‑10
‑10
‑30
PS # 1 TS# 1 BEC = 1 BER = 0.0 RESULT =FALSE
PS # 2 TS# 2 BEC = 1 BER = 0.5 RESULT =FALSE
PS # 3 TS# 3 BEC = 0 BER = 1.5 RESULT =TRUE
PS # 4 TS# 1 BEC = 0 BER = 2.0 RESULT =TRUE
PS # 5 TS# 2 BEC = 0 BER = 2.0 RESULT =TRUE
PS # 6 TS# 3 BEC = 0 BER = 2.0 RESULT =TRUE
Number of Presentations: 6
======= FINAL MATRIX =========
===== W Matrix ======
10
10
‑30
3.4 EXPLANATION FORMAT 4
Format option 4 produces a report like format option 3, except that more information is included.
The information included is 'B CAL =', 'A=', and 'B='. The 'B CAL =' values are the output neurons generated from the A MAT input neurons. The 'A=' values are the A MAT input neurons from file 'ADATA.DAT' and the 'B=' values are the B MAT output neurons from the file 'BDATA.DAT'. The program compares the 'B CAL' and 'B' neurons. If there is an exact match the result is TRUE, else it is FALSE. Finally, the 'BEC' field reflects the number of incorrect responses in the 'B CAL' neurons.
Below is a report using format option 4:

Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
4 1 = RWM 2=PFM 3=STL 4=IRM
MAX Presentations: 100
A Len = 3
B Len = 1 BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT= 4
B MAT= 8
A MAT= 8
B MAT= 8
A MAT= E
B MAT= 0
===== W Matrix ======
‑10
‑10
‑30
PS # 1 TS# 1 BEC = 1 BER = 0.0 RESULT =FALSE
B CAL= 0
A= 4
B= 8
PS # 2 TS# 2 BEC = 1 BER = 0.5 RESULT =FALSE
B CAL= 0
A= 8
B= 8
PS # 3 TS# 3 BEC = 0 BER = 1.5 RESULT =TRUE
B CAL= 0
A= E
B= 0
PS # 4 TS# 1 BEC = 0 BER = 2.0 RESULT =TRUE
B CAL= 8
A= 4
B= 8
PS # 5 TS# 2 BEC = 0 BER = 2.0 RESULT =TRUE
B CAL= 8
A= 8
B= 8
PS # 6 TS# 3 BEC = 0 BER = 2.0 RESULT =TRUE
B CAL= 0
A= E
B= 0
Number of Presentations: 6
======= FINAL MATRIX =========
===== W Matrix ======
10
10
‑30
3.5 EXPLANATION FORMAT 2‑Simulation Mode
The simulation mode is set when a value of 1 is placed in line 3. In this mode, simulation data on the logged drive is read from file 'NUTEST.DAT'. This file contains the A MAT neurons. From the A MAT neurons, the B MAT neurons will be generated by the final W matrix.
The A MAT neurons are indicated by the word ‘INPUT’ in the simulation mode. And, the B MAT neurons are indicated by the word ‘RESULT’ in the simulation mode.
The file 'NUTEST.DAT' may contain multiple A MAT simulation data sets. And if the file 'NUTEST.DAT' is not available, the program will print an error message as specified in section 2 and halt.
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
2 1 = RWM 2=PFM 3=STL 4=IRM
MAX Presentations: 100
A Len= 3
B Len = 1
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT= 4
B MAT= 8
A MAT= 8
B MAT= 8
A MAT= E
B MAT= 0
===== W Matrix ======
‑10
‑10
‑30
Number of Presentations: 6
======= FINAL MATRIX =========
===== W Matrix ======
10
10
‑30
=== INPUT ===
INPUT 4
RESULT 8
INPUT 8
RESULT 8
INPUT E
RESULT 0
INPUT 0
RESULT 0

4.0 Learning Strategies for Neural Networks
Neural networks have the ability to learn various types of information. But sometimes the pieces of information presented conflict with each other, as in the XOR problem. When this occurs a learning strategy is needed.
A learning strategy is used when all the elements in the W matrix become zero or cycle through various values. For example, the XOR exercise shows this:
A MAT = 0 1   B MAT = 1
A MAT = 1 0   B MAT = 1
A MAT = 1 1   B MAT = 0
Initial W Matrix W[0] = [ 0 0 ]
A MAT = 0 1 B CAL = 0 W[1]= [ 0.0 0.5 ]
A MAT = 1 0 B CAL = 0 W[2]= [ 0.5 0.5 ]
A MAT = 1 1 B CAL = 1 W[3]= [ 0.0 0.0 ]
A MAT = 0 1 B CAL = 0 W[4]= [ 0.0 0.5 ]
A MAT = 1 0 B CAL = 0 W[5]= [ 0.5 0.5 ]
A MAT = 1 1 B CAL = 0 W[6] =[ 0.0 0.0 ]
… and so forth …
During the six learning sessions one can see that the W matrix becomes zero at a certain time and the values cycle from time to time. Thus, a learning strategy is needed to solve this.
Now, let's consider the same data set using the Parity learning strategy:
A MAT = 0 1 0 B MAT = 1
A MAT = 1 0 0 B MAT = 1
A MAT = 1 1 1 B MAT = 0
W[0] = [ ‑1 ‑1 ‑3]
A MAT = 0 1 0 B CAL = 0 W[1]=[‑1.0 ‑0.5 ‑3.0]
A MAT = 1 0 0 B CAL = 0 W[2]=[‑0.5 ‑0.5 ‑3.0]
A MAT = 1 1 1 B CAL = 0 W[2]=[‑0.5 ‑0.5 ‑3.0]
A MAT = 0 1 0 B CAL = 0 W[3]=[‑0.5 0.0 ‑3.0]
A MAT = 1 0 0 B CAL = 0 W[4]=[ 0.0 0.0 ‑3.0]

A MAT = 1 1 1 B CAL = 0 W[4]=[ 0.0 0.0 ‑3.0]
A MAT = 0 1 0 B CAL = 0 W[5]=[ 0.0 0.5 ‑3.0]
A MAT = 1 0 0 B CAL = 0 W[6]=[ 0.5 0.5 ‑3.0]
A MAT = 1 1 1 B CAL = 0 W[6]=[ 0.5 0.5 ‑3.0]
A MAT = 0 1 0 B CAL = 1 W[6]=[ 0.5 0.5 ‑3.0]
A MAT = 1 0 0 B CAL = 1 W[6]=[ 0.5 0.5 ‑3.0]
A MAT = 1 1 1 B CAL = 0 W[6]=[ 0.5 0.5 ‑3.0]
Thus, the problem is solved by using the parity learning strategy. In this section I will present various learning strategies. The common name for the extra neuron is a 'hidden unit'. A hidden unit is similar to a catalyst in chemistry: it aids a reaction but is not included in the product of the reaction.
Please note that the learning strategy is applied to A MAT neurons only. This is done because the A MAT neuron values are known while the B CAL neuron value can vary during the learning session.
4.1 Learning Strategies
4.1.1 Parity , 1 neuron
The extra parity bit is ON if the number of A MAT neurons ON is odd; or, alternatively, the extra parity bit is ON if the number of A MAT neurons ON is even. Only 1 of the 2 options can be used throughout a data set.
For example, how does one code the parity bit in the A MAT data? First, one needs to make a table. So, let's use the A MAT data from section 4.0 to set up the table:
A MAT  H  Comments
-----  -  --------
0 1    0  OFF because the number of A MAT neurons ON is odd
1 0    0  OFF for the same reason as above
1 1    1  ON because the number of A MAT neurons ON is even

B MAT
-----
1
1
0
Now, one needs to enter these into the ‘ADATA.DAT’ like so:
0 1 0
1 0 0
1 1 1
9999 1 1
Next, in file 'KANN21.DAT' one needs to specify that there are 3 A MAT neurons by putting a 3 in line 3.
Finally, the B MAT neurons must be entered in file 'BDATA.DAT' like so:
1
1
0
9999
And, a 1 must be placed in line 4 of file ‘KANN21.DAT’.
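A minimal sketch of computing the extra parity neuron, using the even convention from the table above (the odd convention is the other documented option):

program ParityStrategy;

{ Parity neuron, even convention: the bit is ON (1) when the count of
  ON A MAT neurons is even. }
function ParityBit(const A: array of Integer): Integer;
var
  i, count: Integer;
begin
  count := 0;
  for i := 0 to High(A) do
    if A[i] = 1 then Inc(count);
  if count mod 2 = 0 then ParityBit := 1 else ParityBit := 0;
end;

const
  P1: array[1..2] of Integer = (0, 1);
  P2: array[1..2] of Integer = (1, 1);
begin
  WriteLn(ParityBit(P1));   { 0: one neuron ON, odd count }
  WriteLn(ParityBit(P2));   { 1: two neurons ON, even count }
end.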
4.1.2 Power of 2, Log2(#A MAT Neurons)
In the power of 2 method the number of hidden units, or extra neurons, is equal to the whole number part of the log2 of the number of A MAT neurons. In other words, 2 to the power of N equals the number of A MAT neurons, where N is the number of extra neurons or hidden units.
In this method a neuron is ON if the number of A MAT neurons ON is equal to 2 to the power of 1, 2, 3, up to N. For example, the XOR problem has two A MAT neurons, so the number of extra or hidden units is equal to 1. Thus, the hidden unit is ON if 2 A MAT neurons are ON. And if we had 4 hidden units, they would be ON as follows (a coding sketch appears after the list):
Hidden Unit 1 = ON if 2 A MAT neurons ON
Hidden Unit 2 = ON if 4 A MAT neurons ON
Hidden Unit 3 = ON if 8 A MAT neurons ON
Hidden Unit 4 = ON if 16 A MAT neurons ON
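A minimal sketch of computing the power-of-2 hidden units from an A MAT pattern, per the list above:

program PowerOf2;

{ Hidden unit k (k = 1..N) is ON when the number of ON A MAT neurons
  equals 2 to the power k. }
procedure Power2Hidden(const A: array of Integer; var H: array of Integer);
var
  i, count, p: Integer;
begin
  count := 0;
  for i := 0 to High(A) do
    if A[i] = 1 then Inc(count);
  p := 2;                              { 2 to the power 1 }
  for i := 0 to High(H) do
  begin
    if count = p then H[i] := 1 else H[i] := 0;
    p := p * 2;
  end;
end;

const
  A1: array[1..2] of Integer = (1, 1);
var
  H1: array[1..1] of Integer;
begin
  Power2Hidden(A1, H1);
  WriteLn('hidden unit = ', H1[1]);    { 1: two A MAT neurons are ON }
end.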
To code the A MAT and B MAT data, set up a table similar to section 4.1.1:

A MAT  H  Comment
-----  -  -------
0 1    0  OFF because ON A MAT neurons = 1, not a power of 2
1 0    0  OFF for the same reason as above
1 1    1  ON because ON A MAT neurons = 2, and 2 to the power 1 = 2; they are equal

B MAT
-----
1
1
0
Now code the A MAT data in the data file ‘ADATA.DAT’ as follows:
0 1 0
1 0 0
1 1 1
9999 1 1
And, remember to indicate the number of A MAT neurons is equal to 3 in the file ‘KANN21.DAT’.
Next, code the B MAT data in the data file ‘BDATA.DAT’ as follows:
1
1
0
9999
And, remember to indicate the number of B MAT neurons is equal to 1 in the file ‘KANN21.DAT’.
4.1.3 Power of 1, #A MAT neurons

In the power of 1 learning strategy there are equal numbers of A MAT neurons and hidden unit neurons. Hidden unit k is ON if exactly k A MAT neurons are ON, for k = 1, 2, 3, up to the number of A MAT neurons. For example, in the sample presented in section 4.0 there would have been 2 hidden neurons. They would be ON as follows:
Hidden Unit 1 = ON if 1 A MAT neuron ON
Hidden Unit 2 = ON if 2 A MAT neurons ON
And, the coding table would have looked something like this:
A MAT  H1 H2  Comment
-----  -- --  -------
0 1    1  0   H1 ON because the number of A MAT neurons ON equals 1
1 0    1  0   H1 ON for the reason stated above
1 1    0  1   H2 ON because the number of A MAT neurons ON equals 2

B MAT
-----
1
1
0
And, the ‘ADATA.DAT’ file would be coded as follows:
0 1 1 0
1 0 1 0
1 1 0 1
9999 1 1 1
Remember to indicate in parameter file ‘KANN21.DAT’ that there are 4 A MAT neurons.
The ‘BDATA.DAT’ file would be coded as follows:
1
1
0
9999

Remember to indicate in parameter file ‘KANN21.DAT’ that there is 1 B MAT neuron.
4.1.4 Symmetry, 1 bit
This option uses the principle of symmetry to aid in learning. For example, if there are 10 neurons then neurons 1 thru 5 will be on one side of the symmetry line and neurons 6 thru 10 will be on the other side of the symmetry line. A symmetry hidden unit is ON when neurons in the same position on different sides of the symmetry line are in the same state (ON or OFF).
For example, with the A MAT neurons used in section 4.0 there was 1 neuron on each side of the symmetry line. The symmetry neuron would be ON as follows:
  Symmetry Line
       |
0 | 0   1   ON because both neurons are in the same state
0 | 1   0   OFF because the neurons are not in the same state
1 | 0   0   OFF because the neurons are not in the same state
1 | 1   1   ON because both neurons are in the same state
4.2 Comments on Learning Strategies
Each learning strategy offers advantages and disadvantages in learning rate and probability of learning neural information.
For example, each learning strategy is similar to a catalyst used in a chemical reaction. Some are faster than others, and some work only in limited situations. I have observed the same in various neural network experiments.
Thus, I have found it is very important to use a consistent method that presents the least number of contradictions to other neurons. For example, the XOR problem was based on the binary system of coding.

Some people have used a method called thermometer coding for A MAT data. In the thermometer method the A MAT neurons represent a scale, with one neuron for each step of the scale, as in a thermometer. For example, to code the numbers 0, 1, 5, 9, and 10 using the thermometer method, 11 A MAT neurons are needed to cover the range 0 to 10. The numbers would be coded as follows:
Number Coding
‑‑‑‑‑‑ ‑‑‑‑‑‑
0 1 0 0 0 0 0 0 0 0 0 0
1 1 1 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 0 0 0 0 0
9 1 1 1 1 1 1 1 1 1 1 0
10 1 1 1 1 1 1 1 1 1 1 1
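A minimal sketch of thermometer coding for the range 0 to 10, as in the table above:

program ThermometerDemo;

{ Value v in 0..10 turns ON the first v+1 of the 11 neurons. }
procedure Thermometer(v: Integer; var N: array of Integer);
var
  i: Integer;
begin
  for i := 0 to High(N) do
    if i <= v then N[i] := 1 else N[i] := 0;
end;

var
  N: array[0..10] of Integer;
  i: Integer;
begin
  Thermometer(5, N);
  for i := 0 to 10 do
    Write(N[i], ' ');          { 1 1 1 1 1 1 0 0 0 0 0 }
  WriteLn;
end.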
This thermometer coding was used in one users group I belong to, to solve the learning of values of the sine function.
5.0 Processing Time
The processing time depends on three variables: the number of A MAT neurons, the number of B MAT neurons, and the maximum number of presentations.
These factors multiplied together form a neural update unit. And, based on a test on a CP/M machine, one can use a ratio to estimate the run time.
For example, in one experiment I had 32 A MAT neurons, 64 B MAT neurons, 12 test sets, and 180 as the maximum number of presentations. The product of 32, 64, and 180 is 368,640, and the processing time was 40 minutes on a CP/M machine running at 4 MHz. Thus, the estimated processing time is:
est time = 40 min * (#A MAT * #B MAT * Max) / 368,640    (CP/M, 4 MHz)
where
#A MAT is the number of A MAT neurons,
#B MAT is the number of B MAT neurons, and
Max is the maximum number of learning sessions (presentations).
Note that the estimated time may vary due to disk access time and processor speed on different systems.
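A minimal sketch of the run-time estimate, scaled from the 40-minute reference run:

program EstTime;

{ Scaled from the 40-minute reference run of 368,640 neural update
  units on a 4 MHz CP/M machine. }
function EstMinutes(nA, nB, maxPres: LongInt): Real;
begin
  EstMinutes := 40.0 * (nA * nB * maxPres) / 368640.0;
end;

begin
  WriteLn(EstMinutes(32, 64, 180):0:1, ' minutes');   { 40.0 minutes }
end.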

Kanecki Artificial
Neural Network 30
Users Manual
By Diana Kaneck,
ACS/Bio. Sci.

APPLICATIONS
1.0 INTRODUCTION
The KANN30 program can be used as an aid in decision making or as an aid in neural network research.
The way KANN30 can be used as an aid in decision making is by linking and organizing various information. To organize the information the user needs to specify an input and an output data set.
The KANN30 system allows a neuron to contain values from 0.0 to 1.0 to indicate the probability, capacity, or quality of a concept. Thus, KANN30 can be used for greater analytical tasks.
1.1 USE IN DECISION MAKING
Each neuron in the data set represents a concept. For example, the input neuron 1 may represent a concept of the color blue, neuron 2 may represent the concept of the color black, neuron 3 may represent the concept of 2 wheels, and neuron 4 may represent the concept of 4 wheels. Next, the output neuron 1 may represent the concept of car and neuron 2 may represent the concept of bicycle.
And, the neuron can represent a percent of a certain quality. For example, 1.0 is used to represent 100% of a certain quality. Whereas, 0.1 is used to represent 10% of a certain quality.
The neuron/concept is similar to a punched card methodology. A neuron/concept is punched to a varying grade according to the relative quality of the concept (1.0 represents 100% of the quality of a concept and 0.1 represents 10%).
The KANN30 defines the input neuron set as the A MAT and the file ADATA.DAT. The output neuron set is defined as the B MAT and the file BDATA.DAT.
This is how the A MAT example would be coded:
0.0 1.0 1.0 0.0
1.0 0.0 1.0 0.0
1.0 0.0 0.0 1.0
9999 9999 9999 9999

The first line represents a set of concepts where something is black and has 2 wheels. The second line represents something that is blue and has 2 wheels. The third line represents something that is black and has 4 wheels. The fourth line is the terminating set; it tells the program that no more data is available. The above data is stored in file 'ADATA.DAT'. The value 1.0 is used to represent a 100 percent quality level of the concept, and 0.0 represents a 0 percent quality of a concept. This is useful in quantifying an observation or data in decision making.
This is how the B MAT data would be coded:
0.0 0.4
0.0 0.4
0.4 0.0
9999 9999
The first line indicates that something is a bike. Also, the second line indicates that something is a bike. And the third line indicates something is a car. The fourth line is the terminating data set; it means that no more B MAT data is available. This would be stored in file 'BDATA.DAT'. The 0.4 indicates that the object has a 40 percent concept quality of being a bike. Viewed another way, the 0.4 could indicate the probability of something being a bike based on certain observations.
Finally, line 1 in A MAT represents the input neurons for data set 1 and line 1 in B MAT represents the output neurons for data set 1. Also, line 2 in A MAT represents the input neurons for data set 2 and line 2 in B MAT represents the output neurons in data set 2.
1.2 USE IN NEURAL NETWORK RESEARCH
One example of how KANN30 can be used in neural network research is the setup required to solve circuit problems such as the XOR problem.
In one solution to the XOR problem the A MAT or input neurons had 3 neurons representing 2 state bits and 1 hidden unit. A hidden unit is a neuron or set of neurons that aid a neural network in solving a problem. I will give a more detailed discussion of hidden units in another section.
The output neuron, or B MAT, consisted of 1 bit representing the ON state by the value of 1 or the OFF state by the value of 0. The 1.0 would mean the neuron is operating at 100 percent capacity, and 0.0 would indicate the neuron is operating at 0 percent capacity. Thus, the KANN30 system can help make decisions based on capacity as well as probability.
The A MAT was coded as follows:
0.0 1.0 0.0
1.0 0.0 0.0
1.0 1.0 1.0
9999 9999 9999

The first line represents STATE BIT 1 as ON at 100% of capacity. The second line represents STATE BIT 2 as ON at 100% of capacity. The third line represents STATE BIT 1 and 2 as ON and the hidden unit as ON at 100% of capacity. The fourth line represents the terminating data set.
The B MAT or output neuron would be coded as:
1.0
1.0
0.0
9999
The first line indicates the result is ON. The second line represents the result is ON. The third line represents the result is OFF. Finally, the fourth line represents the terminating data set card. The 1.0 indicates the neuron is operating at 100% of capacity, while the 0.0 indicates the neuron is completely off.
In the example line 1 in A MAT are the input neurons and line 1 in B MAT is the output neuron for data set 1. And, line 2 in A MAT are the input neurons and line 2 in B MAT is the output neuron for data set 2. The same pattern follows for each line.
A historical sidelight of the XOR problem is that neural networks originated in 1943, yet it was not until 1969 that the XOR problem was solved. The reason for the delay was that without the hidden unit the XOR matrix became all zeros: one neuron canceled out another neuron. With the hidden unit the neuron cancellation does not occur.

DATA FILE SET UP
2.0 INTRODUCTION
The KANN30 program uses four input files. They are 'ADATA.DAT', 'BDATA.DAT', 'KANN30.DAT', and 'NUTEST.DAT'.
The file 'KANN30.DAT' includes 10 parts of information needed by the program. Line 11 onward contains the W matrix to be read if format option 1 is specified.
File KANN30.DAT resides on the logged drive. Thus, if the prompt before the KANN30 program was executed was:
A>
The program will look for the file on drive A. And, if the file was not found the following message would be printed:
FILE KANN30.DAT not available
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has finished its processing.
The parameter file: KANN30.DAT would look like this:
4 Format Option 4 ‑ Complete Results
100 Max Limit
3 A Vector
1 B vector
0 No simulation mode for NUTEST.DAT
2.0 Update ratio
0.455 Threshold limit
1 Transfer function 1
10 unity value, 10 = 1.0
0.1 tolerance of error

Below is a line by line explanation of the parameters and their uses.
2.0.1 KANN30 Parameter File ‑ Line 1
The first line contains a number 1 thru 4. This is the format option number. A format option of 1 means that the program reads the W matrix from the data instead of generating it from the A MAT and B MAT data. This is useful if a training set has been learned and a simulation of new data is desired. Format option 2 means that the program will print out only the parameters, data input, final W matrix, and simulation information. No intermediate information will be printed. A format option of 3 means that intermediate information will be printed, such as the presentation number (PS#), test set number (TS#), neuron error count (BEC), total neuron error count (BER), and learning result. This information is useful to determine if a system is learning. For example, a system is learning if the total neuron error count between similar test sets is decreasing. A system is not learning if it remains the same or increases. Finally, format option 4 will display a neuron listing. This will indicate at what capacity the neurons are operating. The neurons will be displayed something like:
B CAL= 1 10 3 10
And, if the UNITY value were 10 the above values would represent 0.1, 1.0, 0.3, and 1.0.
2.0.2 KANN30 Parameter File ‑ Line 2
The second line contains the maximum number of learning sessions. This number can range from 0 to about 2000. If a number over 2000 is selected, a numeric overflow error could occur. The number of learning sessions is dependent upon the value of UNITY. UNITY is a value substituted for the number 1.0. For example, if UNITY were equal to 10, or 1 significant digit, the recommended number of learning sessions would be 2000. If UNITY were equal to 100, or 2 significant digits, the recommended number of learning sessions would be 200. If UNITY were 1000, or 3 significant digits, the recommended number of learning sessions would be 20. By varying the value of UNITY the resolution between differing neurons increases; thus it may be easier to solve certain types of problems.
Finally, if the value for UNITY is less than zero or greater than 1000 the program will print:
ERROR‑INVALID Unity Value Assuming Unity=10
and continue its processing.
2.0.3 KANN30 Parameter File ‑ Line 3

The third line contains the simulation status. If the value is 1, simulation data is in file 'NUTEST.DAT'. If the value is 0, no simulation data is available in file 'NUTEST.DAT' and the simulation option at the end of the program is not run. For example, if the simulation status was 1, at the end of a learning session the computer will look for the file 'NUTEST.DAT', retrieve the A MAT data, and print out the corresponding B MAT results. This is useful when the W matrix has already been generated and only results are needed, as in a decision making process. Please note that if the simulation status equals 1 and the file 'NUTEST.DAT' is not available, the program will halt and print the message:
No NUTEST.DAT Data File Available for Simulation
Terminating Program
And, the next message printed will be:
PROGRAM DONE
indicating the program has finished its processing.
2.0.4/5 KANN30 Parameter File ‑ Line 4 and line 5
The fourth line contains the number of A MAT, or input, neurons. The number of A MAT and B MAT neurons can each be from 1 to 128, but the number of A MAT neurons times the number of B MAT neurons must be less than or equal to 2700. For example, if the number of A MAT neurons were 10 and the number of B MAT neurons were 100, the total would be 1000; this is below 2700 and the program will run normally. But if 128 A MAT and 128 B MAT neurons were specified, the total would be 16384, above the 2700 level, and the program would print:
MATRIX TOO BIG
Terminating Program
And, the program will print:
PROGRAM DONE
indicating it has finished its processing.
Or, if the number of A MAT neurons or B MAT neurons were not within the range of 1 to 128 an error message would be printed. For example, if A MAT were equal to ‑1 and B MAT were equal to 128 the program would print:
A Len or B Len not within specified range

Terminating Program
And, the program will print:
PROGRAM DONE
indicating it has finished processing.
The fifth line contains the number of B MAT or output neurons. The number of B MAT can be from 1 to 128.
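As an illustration of the size rules above, a Free Pascal sketch that validates a proposed A Len and B Len follows; the identifier names are mine, not taken from the KANN30 source.
program SizeCheck;

{ Each length must be 1..128 and the product A Len * B Len
  must not exceed 2700, per section 2.0.4/5. }
function SizesValid(aLen, bLen: Integer): Boolean;
begin
  SizesValid := (aLen >= 1) and (aLen <= 128) and
                (bLen >= 1) and (bLen <= 128) and
                (aLen * bLen <= 2700);
end;

begin
  WriteLn(SizesValid(10, 100));  { TRUE:  1000 is below 2700 }
  WriteLn(SizesValid(128, 128)); { FALSE: 16384 is above 2700 }
  WriteLn(SizesValid(-1, 128));  { FALSE: A Len out of range }
end.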
2.0.6 KANN30 Parameter File ‑ Line 6
The sixth line contains the update ratio. This ratio must be greater than zero and is suggested to be less than 9. Most neural simulations use the value “0.5”. The greater the update ratio the faster the learning process. But, in experiments I have performed I have found that 0.5 works best for most problems, and a value over 9 causes the system to learn and unlearn due to the high variability.
2.0.7 KANN30 Parameter File ‑ Line 7
In KANN30 this variable is not used but is required in the data file. The variable represented a threshold value for system KANN21.
2.0.8 KANN30 Parameter File ‑ Line 8
The eighth line specifies which transfer function to use. Transfer function 1 is the only function available at this time.
Transfer function 1 represents the exponential model B(y) = 1 / (1 + exp(‑SUM(x) A(x)*W(x,y))), where A(x) is an input neuron, B(y) is an output neuron, and W(x,y) is the neural matrix connecting A and B. The weighted inputs are summed over all A MAT neurons x before the exponential is applied; this is consistent with the worked trace in section 4.0.
If a transfer function other than 1 is specified transfer function 1 will be selected and the program would print:
ERROR‑INVALID Option Selected Assuming Option 1
And, the program will continue its processing.
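As an illustration, transfer function 1 can be sketched in Free Pascal as follows. The sketch works in actual (unscaled) values for a single output neuron, with w holding one column of the W matrix; the names are mine, and the real program performs the equivalent arithmetic on scaled 16 bit values.
program Transfer1;

{ B(y) = 1 / (1 + exp(-sum over x of A(x)*W(x,y))). }
function TransferFunction1(const a, w: array of Double): Double;
var
  x: Integer;
  s: Double;
begin
  s := 0.0;
  for x := 0 to High(a) do
    s := s + a[x] * w[x];         { weighted sum over the input neurons }
  TransferFunction1 := 1.0 / (1.0 + Exp(-s));
end;

var
  a: array[0..2] of Double = (0.0, 1.0, 0.0);
  w: array[0..2] of Double = (-1.0, -1.0, -3.0);
begin
  { prints 0.27, matching the B CAL of 0.3 in the section 4.0 trace }
  WriteLn(TransferFunction1(a, w):0:2);
end.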

2.0.9 KANN30 Parameter File ‑ Line 9
And, the ninth line is the value for unity, or 1. I have used the value 10 to represent the value of 1. The unity value is selected in order to optimize data storage space and calculations. The KANN30 system stores all data in 16 bit variables with a range of ‑32768 to 32767. Suggested unity values are 10, 100, and 1000. A majority of the neural network sessions I have done have used 10 as the value of unity. This allows 2000 training sets of data. The recommended number of training sets is equal to 20000 divided by the value of unity. If unity were equal to 1000, or 3 significant figures, the recommended number of training sets would be 20. If unity were equal to 100 then the recommended number of training sets would be 200.
If a unity value less than zero is specified or greater than 1000 is specified the error message printed will be:
ERROR‑INVALID Unity Value Assuming Unity=10
And, the program will continue its processing.
2.0.10 KANN30 Parameter File ‑ Line 10
The value in this line indicates the accepted tolerance between the actual B MAT neurons and the calculated B MAT neurons.
For example, if the actual or reference B MAT neuron had a value of 0.2 and the calculated B MAT neuron had a value of 0.3, the error would be 0.1 and within the accepted tolerance of 0.1. Thus, the corresponding A MAT and B MAT neuron pattern was learned.
But, if the actual or reference B MAT neuron had a value of 0.2 and the calculated B MAT neuron had a value of 0.4 and the tolerance was 0.1, the error would be 0.2 and not within the accepted tolerance. Thus, the corresponding A MAT and B MAT neural pattern was not learned.
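A minimal Free Pascal sketch of this tolerance test, counting the BEC for one set of B MAT neurons (the identifier names are mine):
program ToleranceCheck;

{ A B MAT neuron counts as learned when the error
  |reference - calculated| is within the tolerance. }
function CountBEC(const bRef, bCal: array of Double; tol: Double): Integer;
var
  i, misses: Integer;
begin
  misses := 0;
  for i := 0 to High(bRef) do
    if Abs(bRef[i] - bCal[i]) > tol then
      Inc(misses);                { neuron i was not learned }
  CountBEC := misses;
end;

var
  bRef: array[0..1] of Double = (0.2, 0.2);
  bCal: array[0..1] of Double = (0.3, 0.4);
begin
  { error 0.1 is within the 0.1 tolerance, error 0.2 is not: prints 1 }
  WriteLn(CountBEC(bRef, bCal, 0.1));
end.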
2.0.11 KANN30 Parameter File ‑ Line 11
If the simulation status equals 1, this line and the lines that follow are used to specify the W matrix. One way this is used is to compare new A MAT data to the neural matrix generated by a previous learning session. For example, if a neural matrix was generated to aid in decision making and new data was submitted, then a result could be generated. In the case above a result was wanted but the person did not wish to modify the neural matrix.
If the parameter file specifies a W matrix but there isn’t enough data, the program will print:
Unexpected End of File in KANN30.DAT

while reading W Matrix
Terminating Program
And, the program will print:
PROGRAM DONE
to indicate it has finished its processing.
2.1 ADATA FILE
The file ADATA.DAT specifies the data used for the A MAT neurons or input neurons. The file may contain up to 128 neurons per training set, with 16 neurons per line. This means that if someone declared A Len = 128 then the A MAT neurons would have 8 lines of 16 neurons for 128 neurons. But, if someone declared A Len = 3 then the A MAT neurons would have 1 line of 3 neurons. Finally, if an A Len greater than 16 is specified, like 20, there would be 2 lines of 16 neurons. This is because any specification over 16 has to be padded with neurons to the next multiple of 16. Thus, if 20 neurons were declared the A MAT learning set would contain 16 neurons on line 1 and 16 neurons on line 2: 4 valid A MAT neurons and 12 neurons with the value of zero (padded neurons).
The last line in the ADATA.DAT file is the terminating line. The first value is 9999 to indicate the end of the A MAT data. Also, the same format rules apply to the terminating line as above. For example, if A Len were equal to 3 the terminating line would be:
9999.0 1.0 1.0
Please note that all entries must have a decimal point. This is because the KANN30 system uses analog neurons (neurons with values that range from 0.0 to 1.0) as opposed to the digital neurons in KANN21 (neurons with a value of 1 or 0). And, if A Len were equal to 20 the terminating line would be:
9999.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
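The padded length follows directly from the rule above. Below is a Free Pascal sketch of the computation, as an illustration only (the function name is mine):
program PadDemo;

{ Lengths of 16 or less occupy one line with exactly that many values;
  larger lengths are padded with zero neurons to the next multiple of 16. }
function PaddedLength(len: Integer): Integer;
begin
  if len <= 16 then
    PaddedLength := len
  else
    PaddedLength := ((len + 15) div 16) * 16;
end;

begin
  WriteLn(PaddedLength(3));   { 3:   one line of 3 neurons }
  WriteLn(PaddedLength(20));  { 32:  20 valid neurons plus 12 padded zeros }
  WriteLn(PaddedLength(128)); { 128: 8 full lines of 16 neurons }
end.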
A sample ADATA.DAT data file for the XOR problem discussed earlier looks like this:
0.0 1.0 0.0
1.0 0.0 0.0
1.0 1.0 1.0
9999.0 9999.0 9999.0

The 1.0 can indicate a neuron is working at 100 percent of capacity or a neural concept is 100% probable. And, 0.0 can indicate a neuron is completely off or 0 percent probable.
The file ADATA.DAT resides on the logged drive. Thus, if this prompt were seen:
A>
The program would look for ADATA.DAT on the A drive. And, if it could not find it an error message would be printed. The error message would say:
Files ADATA.DAT and BDATA.DAT not available
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has completed its processing.
If the file ADATA.DAT was found and not enough data were in the file the program would print:
Unexpected End of File in ADATA.DAT
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has finished its processing.
The ADATA file may contain up to 2000 learning sets if the value of UNITY is equal to 10 in the file KANN30.DAT.
2.2 BDATA FILE

The file BDATA.DAT specifies the data used for the B MAT neurons or output neurons. The file may contain up to 128 neurons per training set, with 16 neurons per line. This means that if someone declared B Len = 128 then the B MAT neurons would have 8 lines of 16 neurons for 128 neurons. But, if someone declared B Len = 3 then the B MAT neurons would have 1 line of 3 neurons. Finally, if a B Len greater than 16 is specified, like 20, there would be 2 lines of 16 neurons. This is because any specification over 16 has to be padded with neurons to the next multiple of 16. Thus, if 20 neurons were declared the B MAT learning set would contain 16 neurons on line 1 and 16 neurons on line 2: 4 valid B MAT neurons and 12 neurons with the value of zero (padded neurons).
The last line in the BDATA.DAT file is the terminating line. The first value is 9999 to indicate the end of the B MAT data. Also, the same format rules apply to the terminating line as above. For example, if B Len were equal to 1 the terminating line would be:
9999.0
And, if B Len were equal to 20 the terminating line would be:
9999.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
A sample BDATA.DAT data file for the XOR problem discussed earlier looks like this:
1.0
1.0
0.0
9999.0
The file BDATA.DAT resides on the logged drive. Thus, if this prompt were seen:
A>
The program would look for BDATA.DAT on the A drive. And, if it could not find it an error message would be printed. The error message would say:
Files ADATA.DAT and BDATA.DAT not available
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has completed its processing.

If the file BDATA.DAT was found and not enough data were in the file the program would print:
Unexpected End of File in BDATA.DAT
Terminating Program
And, the program would print:
PROGRAM DONE
to indicate it has finished its processing.
The BDATA file may contain up to 2000 learning sets if the value of UNITY is equal to 10 in the file KANN30.DAT.

DATA FILE OUTPUT
3.0 INTRODUCTION
The KANN30 neural network programmer / simulator allows five report options. The report option specified will either give more or less detail on neural operations, modify the method by which the W or neural matrix is made, or use the W or neural matrix in a different way.
3.1 EXPLANATION FORMAT 1
With format option 1 the computer is instructed to read the W matrix. This is useful if a simulation or retraining session is desired.
The first bit of information displayed is the set of parameters specified in the file ‘KANN30.DAT’.
After the ‘UNITY =’ line is displayed, the heading ‘W Matrix’ will be displayed. The results printed will be the same W matrix data that was specified in file ‘KANN30.DAT’. Remember, when specifying W matrix data always make sure that the data matches the value of ‘UNITY’ it was generated with. For example, if the W matrix was 10, 20, 30 and the unity value for the data was 10, but a unity value of 100 was specified, the data would read as 0.10, 0.20, 0.30 as opposed to 1, 2, and 3.
After the W matrix is read, one will see something like this:
PS # 1 TS# 1 BEC = 0 BER = 0.0 RESULT = TRUE
This is called the status line. The fields in the status line are presentation number (PS#), test set number (TS#), bit error count (BEC), bit error sum (BER), and result.
The PS# indicates what learning session the program is in. When the presentation number is greater than the maximum number of learning sessions the program will stop.
The TS# indicates what test data is being used to train the network. For example, TS# 1 means that the first A MAT and B MAT data are being used. The other reason this is posted is to aid the user in determining what data is difficult to learn.
The BEC indicates how many neurons were not learned in the B MAT data set. For example, if the BEC were equal to 1 and the B MAT length was equal to 3, then 2 B MAT neurons were learned and 1 B MAT neuron was not learned.

The BER is the previous BER plus the current BEC divided by 2; that is, a running sum of BEC/2 (e.g., a BEC of 1 adds 0.5 to the BER). This BER value will help one determine if a system is learning. For example, if one compares the BER values from test set 1 to test set 3, one would find the BER value between the test sets decreasing. This indicates the system is learning. A decreasing BER value from the past and current test sets indicates a system is learning. An increasing BER value indicates the system is not learning. A BER value that is the same between test sets indicates the system has reached a point where no further learning can take place.
Next, the result field indicates if all the test data was learned. For example, given that the A MAT neurons are used to produce the B MAT neurons, if the B MAT neurons produced match exactly and completely the B MAT neurons desired, a result of TRUE will be printed. But, if an exact and complete match is not found a FALSE is printed.
And, when RESULT equals TRUE for all the test set data the program will stop before the maximum presentation limit. But, if the RESULT does not equal TRUE for all the test set data the program will not stop until the maximum presentation limit is reached.
Finally, a message ‘Number of Presentations’ will be seen. This indicates how many learning sessions were needed to learn the information, or how many attempts were made. If the RESULT field was true for the last test sets (i.e., TS# 1 to TS# 3) then the data was learned. If the RESULT field was not true for the last test sets the message indicates how many attempts were made.
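As an illustration of the status line arithmetic, here is a toy Free Pascal loop that reproduces the BER bookkeeping. In the real program the BEC comes from the tolerance test of section 2; here it is fixed at 1 so the running sum is easy to follow.
program StatusDemo;

{ BER is a running sum of BEC/2; RESULT is TRUE only when BEC = 0. }
var
  ps, ts, bec: Integer;
  ber: Double;
begin
  ber := 0.0;
  for ps := 1 to 3 do
  begin
    ts := ((ps - 1) mod 3) + 1;   { the test sets cycle 1..3 }
    bec := 1;                     { pretend one B MAT neuron missed }
    ber := ber + bec / 2.0;
    WriteLn('PS # ', ps, ' TS# ', ts, ' BEC = ', bec,
      ' BER = ', ber:0:1, ' RESULT = FALSE');
  end;
end.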
Below is a sample output using output format 1:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
1 1 = RWM 2=PFM 3=STL 4=IRN
MAX Presentations: 100
A Len = 3 B Len = 1
TOL = 1.0000000000E‑01
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
W Matrix
10
20
30
PS # 1 TS# 1 BEC = 0 BER = 0.0 RESULT = TRUE

PS # 2 TS# 2 BEC = 1 BER = 0.5 RESULT = FALSE
PS # 3 TS# 3 BEC = 1 BER = 1.0 RESULT = FALSE
PS # 4 TS# 1 BEC = 1 BER = 1.5 RESULT = FALSE
PS # 5 TS# 2 BEC = 1 BER = 2.0 RESULT = FALSE
PS # 6 TS# 3 BEC = 1 BER = 2.5 RESULT = FALSE
PS # 7 TS# 1 BEC = 1 BER = 3.0 RESULT = FALSE
PS # 8 TS# 2 BEC = 1 BER = 3.5 RESULT = FALSE
PS # 9 TS# 3 BEC = 1 BER = 4.0 RESULT = FALSE
PS # 10 TS# 1 BEC = 1 BER = 4.5 RESULT = FALSE
PS # 11 TS# 2 BEC = 1 BER = 5.0 RESULT = FALSE
PS # 12 TS# 3 BEC = 1 BER = 5.5 RESULT = FALSE
PS # 13 TS# 1 BEC = 1 BER = 6.0 RESULT = FALSE
PS # 14 TS# 2 BEC = 1 BER = 6.5 RESULT = FALSE
PS # 15 TS# 3 BEC = 1 BER = 7.0 RESULT = FALSE
PS # 16 TS# 1 BEC = 1 BER = 7.5 RESULT = FALSE
PS # 17 TS# 2 BEC = 1 BER = 8.0 RESULT = FALSE
PS # 18 TS# 3 BEC = 1 BER = 8.5 RESULT = FALSE
PS # 19 TS# 1 BEC = 1 BER = 9.0 RESULT = FALSE
PS # 20 TS# 2 BEC = 1 BER = 9.5 RESULT = FALSE
PS # 21 TS# 3 BEC = 1 BER = 10.0 RESULT = FALSE
PS # 22 TS# 1 BEC = 1 BER = 10.5 RESULT = FALSE
PS # 23 TS# 2 BEC = 1 BER = 11.0 RESULT = FALSE

PS # 24 TS# 3 BEC = 1 BER = 11.5 RESULT = FALSE
PS # 25 TS# 1 BEC = 1 BER = 12.0 RESULT = FALSE
PS # 26 TS# 2 BEC = 1 BER = 12.5 RESULT = FALSE
PS # 27 TS# 3 BEC = 1 BER = 13.0 RESULT = FALSE
PS # 28 TS# 1 BEC = 1 BER = 13.5 RESULT = FALSE
PS # 29 TS# 2 BEC = 1 BER = 14.0 RESULT = FALSE
PS # 30 TS# 3 BEC = 0 BER = 14.0 RESULT = TRUE
PS # 31 TS# 1 BEC = 1 BER = 14.5 RESULT = FALSE
PS # 32 TS# 2 BEC = 1 BER = 15.0 RESULT = FALSE
PS # 33 TS# 3 BEC = 1 BER = 15.5 RESULT = FALSE
PS # 34 TS# 1 BEC = 1 BER = 16.0 RESULT = FALSE
PS # 35 TS# 2 BEC = 1 BER = 16.5 RESULT = FALSE
PS # 36 TS# 3 BEC = 1 BER = 17.0 RESULT = FALSE
PS # 37 TS# 1 BEC = 1 BER = 17.5 RESULT = FALSE
PS # 38 TS# 2 BEC = 1 BER = 18.0 RESULT = FALSE
PS # 39 TS# 3 BEC = 0 BER = 18.0 RESULT = TRUE
PS # 40 TS# 1 BEC = 0 BER = 18.0 RESULT = TRUE
PS # 41 TS# 2 BEC = 0 BER = 18.0 RESULT = TRUE
PS # 42 TS# 3 BEC = 0 BER = 18.0 RESULT = TRUE
Number of Presenatations: 42
3.2 EXPLANATION FORMAT 2

Output format 2 will display the parameters, input neurons, output neurons, W matrix produced from the input and output data, number of presentations, and final matrix.
The message ‘DATA INPUT’ indicates where the program is reading the A MAT input neurons and B MAT output neurons to produce the W or neural matrix. The input and output neurons are printed as value times unity. Thus, a printed value of 10 with a specified unity value of 10 is equal to 1.0, and a printed value of ‑5 with a specified unity value of 10 is equal to ‑0.5. This representation is used to aid in the computer processing of neural network data. Another area where this is used is in the paragraph describing the W matrix.
Note that during the training session the status line is not displayed. But, after the training session the final W or neural matrix is displayed. This neural matrix can be used for further training or applications.
The values displayed in the W matrix are obtained by multiplying the value by UNITY. For example, 1.0 becomes 10, 0.5 becomes 5, and ‑1.5 becomes ‑15 if the unity value is equal to 10. These are called scaled numbers. The use of scaled numbers increases the number of neurons a system can have. The W matrix values printed can be substituted directly into the ‘KANN30.DAT’ data file if a simulation is needed. But, remember to delete the blank line separating each row.
When displaying the W matrix results to other people it is suggested to display the actual number as opposed to the scaled number. To convert a scaled number to an actual number just divide the scaled number by the value of UNITY as defined in the front of the report.
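The conversion in both directions is a single multiply or divide. A minimal Free Pascal sketch follows (the names are mine; note the 16 bit SmallInt, matching the storage described in section 2.0.9):
program ScaleDemo;

function ToScaled(actual: Double; unity: Integer): SmallInt;
begin
  ToScaled := Round(actual * unity);  { e.g. 1.8 * 10 = 18 }
end;

function ToActual(scaled, unity: Integer): Double;
begin
  ToActual := scaled / unity;         { e.g. -56 / 10 = -5.6 }
end;

begin
  WriteLn(ToScaled(1.8, 10));         { prints 18 }
  WriteLn(ToActual(-56, 10):0:1);     { prints -5.6 }
end.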
Below is a listing of the output with format option 2:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
2 1 = RWM 2=PFM 3=STL 4=IRN
MAX Presentations: 100
A Len = 3
B Len = 1
TOL = 1.0000000000E‑01
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT=

0 10 0
B MAT=
10
A MAT=
10 0 0
B MAT=
10
A MAT=
10 10 10
B MAT=
0
===== W Matrix ======
‑10
‑10
‑30
Number of Presenatations: 27
======= FINAL MATRIX =========
===== W Matrix ======
18
18
‑56
3.3 EXPLANATION FORMAT 3
Format option 1 and format option 3 both produce the same kind of output file. But, format option 3 generates the W or neural matrix from the A MAT and B MAT neurons as opposed to reading the W matrix from the file ‘KANN30.DAT’.
Below is the report generated by format option 3:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h

PARAMETERS
3 1 = RWM 2=PFM 3=STL 4=IRN
MAX Presentations: 100
A Len = 3
B Len = 1
TOL = 1.0000000000E‑01
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT=
0 10 0
B MAT=
10
A MAT=
10 0 0
B MAT=
10
A MAT=
10 10 10
B MAT=
0
===== W Matrix ======
‑10
‑10
‑30
PS # 1 TS# 1 BEC = 1 BER = 0.5 RESULT = FALSE
PS # 2 TS# 2 BEC = 1 BER = 1.0 RESULT = FALSE
PS # 3 TS# 3 BEC = 0 BER = 1.0 RESULT = TRUE
PS # 4 TS# 1 BEC = 1 BER = 1.5 RESULT = FALSE
PS # 5 TS# 2 BEC = 1 BER = 2.0 RESULT = FALSE

PS # 6 TS# 3 BEC = 1 BER = 2.5 RESULT = FALSE
PS # 7 TS# 1 BEC = 1 BER = 3.0 RESULT = FALSE
PS # 8 TS# 2 BEC = 1 BER = 3.5 RESULT = FALSE
PS # 9 TS# 3 BEC = 1 BER = 4.0 RESULT = FALSE
PS # 10 TS# 1 BEC = 1 BER = 4.5 RESULT = FALSE
PS # 11 TS# 2 BEC = 1 BER = 5.0 RESULT = FALSE
PS # 12 TS# 3 BEC = 1 BER = 5.5 RESULT = FALSE
PS # 13 TS# 1 BEC = 1 BER = 6.0 RESULT = FALSE
PS # 14 TS# 2 BEC = 1 BER = 6.5 RESULT = FALSE
PS # 15 TS# 3 BEC = 1 BER = 7.0 RESULT = FALSE
PS # 16 TS# 1 BEC = 1 BER = 7.5 RESULT = FALSE
PS # 17 TS# 2 BEC = 1 BER = 8.0 RESULT = FALSE
PS # 18 TS# 3 BEC = 0 BER = 8.0 RESULT = TRUE
PS # 19 TS# 1 BEC = 1 BER = 8.5 RESULT = FALSE
PS # 20 TS# 2 BEC = 1 BER = 9.0 RESULT = FALSE
PS # 21 TS# 3 BEC = 1 BER = 9.5 RESULT = FALSE
PS # 22 TS# 1 BEC = 1 BER = 10.0 RESULT = FALSE
PS # 23 TS# 2 BEC = 1 BER = 10.5 RESULT = FALSE
PS # 24 TS# 3 BEC = 0 BER = 10.5 RESULT = TRUE
PS # 25 TS# 1 BEC = 0 BER = 10.5 RESULT = TRUE
PS # 26 TS# 2 BEC = 0 BER = 10.5 RESULT = TRUE
PS # 27 TS# 3 BEC = 0 BER = 10.5 RESULT = TRUE

Number of Presenatations: 27
======= FINAL MATRIX =========
===== W Matrix ======
18
18
‑56
3.4 EXPLANATION FORMAT 4
Format option 4 produces a report like format option 3, except that more information is included.
The information included is ‘B CAL =’, ‘A=’, and ‘B=’. The ‘B CAL =’ values are the output neurons generated from the A MAT input neurons. The ‘A=’ values are the A MAT input neurons from file ‘ADATA.DAT’ and the ‘B=’ values are the B MAT output neurons from the file ‘BDATA.DAT’. The program compares the ‘B CAL’ and ‘B’ neurons. If there is an exact match the result is TRUE, else it is FALSE. Finally, the ‘BEC’ reflects the incorrect responses in the ‘B CAL’ neurons.
Note that ‘B CAL’, ‘A=’, and ‘B=’ use the scaled number notation described in section 3.2 for the data input and matrix output.
Below is a report using format option 4:
Neural Network Programmer Matrix 2.2
By David Kanecki ‑ Oct 12,1988‑1535h
Mar 16,1989‑1600h
PARAMETERS
4 1 = RWM 2=PFM 3=STL 4=IRN
MAX Presentations: 100
A Len = 3
B Len = 1
TOL = 1.0000000000E‑01
BETA = 2.0000000000E+00
LIMIT = 4.5500000000E‑01
OPTION = 1 Transfer Function 1 Selected
UNITY = 10
DATA INPUT
A MAT=
0 10 0

B MAT=
10
A MAT=
10 0 0
B MAT=
10
A MAT=
10 10 10
B MAT=
0
===== W Matrix ======
‑10
‑10
‑30
PS # 1 TS# 1 BEC = 1 BER = 0.5 RESULT = FALSE
B CAL=
3
A=
0 10 0
B=
10
PS # 2 TS# 2 BEC = 1 BER = 1.0 RESULT = FALSE
B CAL=
3
A=
10 0 0
B=
10
PS # 3 TS# 3 BEC = 0 BER = 1.0 RESULT = TRUE
B CAL=
1
A=
10 10 10
B=
0

PS # 4 TS# 1 BEC = 1 BER = 1.5 RESULT = FALSE
B CAL=
6
A=
0 10 0
B=
10
PS # 5 TS# 2 BEC = 1 BER = 2.0 RESULT = FALSE
B CAL=
6
A=
10 0 0
B=
10
PS # 6 TS# 3 BEC = 1 BER = 2.5 RESULT = FALSE
B CAL=
4
A=
10 10 10
B=
0
PS # 7 TS# 1 BEC = 1 BER = 3.0 RESULT = FALSE
B CAL=
6
A=
0 10 0
B=
10
PS # 8 TS# 2 BEC = 1 BER = 3.5 RESULT = FALSE
B CAL=
6
A=
10 0 0
B=
10
PS # 9 TS# 3 BEC = 1 BER = 4.0 RESULT = FALSE
B CAL=

2
A=
10 10 10
B=
0
PS # 10 TS# 1 BEC = 1 BER = 4.5 RESULT = FALSE
B CAL=
7
A=
0 10 0
B=
10
PS # 11 TS# 2 BEC = 1 BER = 5.0 RESULT = FALSE
B CAL=
7
A=
10 0 0
B=
10
PS # 12 TS# 3 BEC = 1 BER = 5.5 RESULT = FALSE
B CAL=
2
A=
10 10 10
B=
0
PS # 13 TS# 1 BEC = 1 BER = 6.0 RESULT = FALSE
B CAL=
7
A=
0 10 0
B=
10
PS # 14 TS# 2 BEC = 1 BER = 6.5 RESULT = FALSE
B CAL=
7
A=
10 0 0

B=
10
PS # 15 TS# 3 BEC = 1 BER = 7.0 RESULT = FALSE
B CAL=
2
A=
10 10 10
B=
0
PS # 16 TS# 1 BEC = 1 BER = 7.5 RESULT = FALSE
B CAL=
8
A=
0 10 0
B=
10
PS # 17 TS# 2 BEC = 1 BER = 8.0 RESULT = FALSE
B CAL=
8
A=
10 0 0
B=
10
PS # 18 TS# 3 BEC = 0 BER = 8.0 RESULT = TRUE
B CAL=
1
A=
10 10 10
B=
0
PS # 19 TS# 1 BEC = 1 BER = 8.5 RESULT = FALSE
B CAL=
8
A=
0 10 0
B=
10

PS # 20 TS# 2 BEC = 1 BER = 9.0 RESULT = FALSE
B CAL=
8
A=
10 0 0
B=
10
PS # 21 TS# 3 BEC = 1 BER = 9.5 RESULT = FALSE
B CAL=
3
A=
10 10 10
B=
0
PS # 22 TS# 1 BEC = 1 BER = 10.0 RESULT = FALSE
B CAL=
8
A=
0 10 0
B=
10
PS # 23 TS# 2 BEC = 1 BER = 10.5 RESULT = FALSE
B CAL=
8
A=
10 0 0
B=
10
PS # 24 TS# 3 BEC = 0 BER = 10.5 RESULT = TRUE
B CAL=
1
A=
10 10 10
B=
0
PS # 25 TS# 1 BEC = 0 BER = 10.5 RESULT = TRUE
B CAL=
9

A=
0 10 0
B=
10
PS # 26 TS# 2 BEC = 0 BER = 10.5 RESULT = TRUE
B CAL=
9
A=
10 0 0
B=
10
PS # 27 TS# 3 BEC = 0 BER = 10.5 RESULT = TRUE
B CAL=
1
A=
10 10 10
B=
0
Number of Presenatations: 27
======= FINAL MATRIX =========
===== W Matrix ======
18
18
‑56
3.5 EXPLANATION FORMAT 2‑Simulation Mode
The simulation mode is set when a value of 1 is placed in line 3. In this mode, the simulation data on the logged drive is in file ‘NUTEST.DAT’. This file contains the A MAT neurons. From the A MAT neurons, the B MAT neurons will be generated by the final W matrix.
The A MAT neurons are indicated by the word ‘INPUT’ in the simulation mode. And, the B MAT neurons are indicated by the word ‘RESULT’ in the simulation mode.

The ‘NUTEST.DAT’ file may contain many sets of A MAT neuron simulation data. And, if the file ‘NUTEST.DAT’ is not available the program will print an error message as specified in section 2 and halt.

4.0 Learning Strategies for Neural Networks
Neural networks have the ability to learn various types of information. But, sometimes the items of information presented are in conflict with each other, as in the XOR problem. When this occurs a learning strategy is needed.
A learning strategy is used when all the elements in the W matrix become zero or cycle through various values. For example, the XOR exercise indicates this:
A MAT = 0.0 1.0 B MAT = 1.0
1.0 0.0 1.0
1.0 1.0 0.0
Initial W Matrix W[0] = [ 0.0 0.0 ]
A MAT = 0 1 B CAL = 0.5 W[1]= [ 0.0 0.5 ]
A MAT = 1 0 B CAL = 0.5 W[2]= [ 0.5 0.5 ]
A MAT = 1 1 B CAL = 1.0 W[3]= [‑0.5 ‑0.5 ]
A MAT = 0 1 B CAL = 0.4 W[4]= [‑0.5 0.1 ]
A MAT = 1 0 B CAL = 0.4 W[5]= [ 0.1 0.1 ]
A MAT = 1 1 B CAL = 0.5 W[6]= [‑0.4 ‑0.4 ]
A MAT = 0 1 B CAL = 0.4 W[7]= [‑0.4 0.2 ]
A MAT = 1 0 B CAL = 0.4 W[8]= [ 0.2 0.2 ]
A MAT = 1 1 B CAL = 0.6 W[9]= [‑0.4 ‑0.4 ]
… and so forth …
The tolerance for this exercise was set to 0.1.
During the nine learning sessions one can see that the W matrix values cycle from time to time. Thus, a learning strategy is needed to solve this.
Now, let’s consider the same data set using the Parity learning strategy:
A MAT = 0.0 1.0 0.0 B MAT = 1.0
A MAT = 1.0 0.0 0.0 B MAT = 1.0
A MAT = 1.0 1.0 1.0 B MAT = 0.0
W[0] = [ ‑1.0 ‑1.0 ‑3.0]

A MAT = 0 1 0 B CAL = 0.3 W[1]=[‑1.0 ‑0.3 ‑3.0]
A MAT = 1 0 0 B CAL = 0.3 W[2]=[‑0.3 ‑0.3 ‑3.0]
A MAT = 1 1 1 B CAL = 0.0 W[2]=[‑0.3 ‑0.3 ‑3.0]
A MAT = 0 1 0 B CAL = 0.4 W[3]=[‑0.3 0.3 ‑3.0]
A MAT = 1 0 0 B CAL = 0.4 W[4]=[ 0.3 0.3 ‑3.0]
A MAT = 1 1 1 B CAL = 0.0 W[4]=[ 0.3 0.3 ‑3.0]
A MAT = 0 1 0 B CAL = 0.6 W[5]=[ 0.3 0.7 ‑3.0]
A MAT = 1 0 0 B CAL = 0.6 W[6]=[ 0.7 0.7 ‑3.0]
A MAT = 1 1 1 B CAL = 0.2 W[7]=[ 0.5 0.5 ‑3.2]
A MAT = 0 1 0 B CAL = 0.6 W[8]=[ 0.5 0.9 ‑3.2]
A MAT = 1 0 0 B CAL = 0.6 W[9]=[ 0.9 0.9 ‑3.2]
A MAT = 1 1 1 B CAL = 0.2 W[10]=[ 0.7 0.7 ‑3.4]
A MAT = 0 1 0 B CAL = 0.7 W[11]=[ 0.7 1.0 ‑3.4]
A MAT = 1 0 0 B CAL = 0.7 W[12]=[ 1.0 1.0 ‑3.4]
A MAT = 1 1 1 B CAL = 0.1 W[13]=[ 0.9 0.9 ‑3.5]
A MAT = 0 1 0 B CAL = 0.7 W[14]=[ 0.9 1.2 ‑3.5]
A MAT = 1 0 0 B CAL = 0.7 W[15]=[ 1.2 1.2 ‑3.5]
A MAT = 1 1 1 B CAL = 0.2 W[16]=[ 1.0 1.0 ‑3.7]
A MAT = 0 1 0 B CAL = 0.7 W[17]=[ 1.0 1.3 ‑3.7]
A MAT = 1 0 0 B CAL = 0.7 W[18]=[ 1.3 1.3 ‑3.7]
A MAT = 1 1 1 B CAL = 0.1 W[19]=[ 1.2 1.2 ‑3.8]
A MAT = 0 1 0 B CAL = 0.8 W[20]=[ 1.2 1.4 ‑3.8]
A MAT = 1 0 0 B CAL = 0.8 W[21]=[ 1.4 1.4 ‑3.8]
A MAT = 1 1 1 B CAL = 0.3 W[22]=[ 1.1 1.1 ‑4.1]
A MAT = 0 1 0 B CAL = 0.8 W[23]=[ 1.1 1.3 ‑4.1]
A MAT = 1 0 0 B CAL = 0.8 W[24]=[ 1.3 1.3 ‑4.1]
A MAT = 1 1 1 B CAL = 0.2 W[25]=[ 1.1 1.1 ‑4.3]
A MAT = 0 1 0 B CAL = 0.8 W[26]=[ 1.1 1.3 ‑4.3]
A MAT = 1 0 0 B CAL = 0.8 W[27]=[ 1.3 1.3 ‑4.3]
A MAT = 1 1 1 B CAL = 0.2 W[28]=[ 1.1 1.1 ‑4.5]
A MAT = 0 1 0 B CAL = 0.8 W[29]=[ 1.1 1.3 ‑4.5]
A MAT = 1 0 0 B CAL = 0.8 W[30]=[ 1.3 1.3 ‑4.5]
A MAT = 1 1 1 B CAL = 0.1 W[30]=[ 1.3 1.3 ‑4.5]
A MAT = 0 1 0 B CAL = 0.8 W[31]=[ 1.3 1.5 ‑4.5]
A MAT = 1 0 0 B CAL = 0.8 W[32]=[ 1.5 1.5 ‑4.5]
A MAT = 1 1 1 B CAL = 0.2 W[32]=[ 1.5 1.5 ‑4.5]
A MAT = 0 1 0 B CAL = 0.8 W[33]=[ 1.5 1.7 ‑4.5]
A MAT = 1 0 0 B CAL = 0.8 W[34]=[ 1.7 1.7 ‑4.5]
A MAT = 1 1 1 B CAL = 0.3 W[35]=[ 1.4 1.4 ‑4.8]
A MAT = 0 1 0 B CAL = 0.8 W[36]=[ 1.4 1.6 ‑4.8]
A MAT = 1 0 0 B CAL = 0.8 W[37]=[ 1.6 1.6 ‑4.8]
A MAT = 1 1 1 B CAL = 0.2 W[37]=[ 1.6 1.6 ‑4.8]

A MAT = 0 1 0 B CAL = 0.8 W[38]=[ 1.6 1.8 ‑4.8]
A MAT = 1 0 0 B CAL = 0.8 W[39]=[ 1.8 1.8 ‑4.8]
A MAT = 1 1 1 B CAL = 0.2 W[40]=[ 1.6 1.6 ‑5.0]
A MAT = 0 1 0 B CAL = 0.8 W[41]=[ 1.6 1.8 ‑5.0]
A MAT = 1 0 0 B CAL = 0.8 W[43]=[ 1.8 1.8 ‑5.0]
A MAT = 1 1 1 B CAL = 0.2 W[43]=[ 1.8 1.8 ‑5.0]
A MAT = 0 1 0 B CAL = 0.9 W[43]=[ 1.8 1.8 ‑5.0]
A MAT = 1 0 0 B CAL = 0.9 W[43]=[ 1.8 1.8 ‑5.0]
Thus, the problem is solved by using the parity learning strategy. In this section I will present various learning strategies. The common name for an extra neuron is a ‘Hidden Unit’. A hidden unit is similar to a catalyst in chemistry. It aids a reaction but is not included in the byproduct of the reaction.
Please note that the learning strategy is applied to A MAT neurons only. This is done because the A MAT neuron values are known while the B CAL neuron value can vary during the learning session.
4.1 Learning Strategies
4.1.1 Parity , 1 neuron
The extra parity bit is ON if the number of A MAT neurons ON is odd. Or, the extra parity bit is ON if the number of A MAT neurons ON is even. But, only 1 of the 2 options can be used throughout a data set.
For example, how does one code the parity bit in the A MAT data? First, one needs to make a table. So, let’s use the A MAT data from section 4.0 to set up the table:
A MAT H Comments
‑‑‑‑‑‑ ‑ ‑‑‑‑‑‑‑‑
0 1 0 OFF because number of A MAT neurons
ON is odd
1 0 0 OFF for same reason as above
1 1 1 ON because number of A MAT neurons
ON is even
B MAT
‑‑‑‑‑‑

1
1
0
Now, one needs to enter these into the ‘ADATA.DAT’ file like so:
0.0 1.0 0.0
1.0 0.0 0.0
1.0 1.0 1.0
9999.0 1.0 1.0
Next, in file ‘KANN30.DAT’ one needs to specify that there are 3 A MAT neurons by putting a 3 in line 4 of file ‘KANN30.DAT’.
Finally, the B MAT neurons must be entered in file ‘BDATA.DAT’ like so:
1.0
1.0
0.0
9999.0
And, a 1 must be placed in line 5 of file ‘KANN30.DAT’ to indicate 1 B MAT neuron.
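As an illustration, the parity hidden unit can be computed as in the following Free Pascal sketch. The names, and the choice to treat analog values of 0.5 or more as ON, are my assumptions; the sketch uses the ‘ON when the count of ON neurons is even’ convention from the table above.
program ParityDemo;

function ParityUnit(const a: array of Double; onIfEven: Boolean): Double;
var
  i, onCount: Integer;
begin
  onCount := 0;
  for i := 0 to High(a) do
    if a[i] >= 0.5 then             { treat 0.5 or more as ON }
      Inc(onCount);
  if ((onCount mod 2 = 0) = onIfEven) then
    ParityUnit := 1.0
  else
    ParityUnit := 0.0;
end;

var
  p01: array[0..1] of Double = (0.0, 1.0);
  p11: array[0..1] of Double = (1.0, 1.0);
begin
  WriteLn(ParityUnit(p01, True):0:1); { 0.0: one neuron ON, an odd count }
  WriteLn(ParityUnit(p11, True):0:1); { 1.0: two neurons ON, an even count }
end.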
4.1.2 Power of 2, Log2(#A MAT Neurons)
In the power of 2 method the number of hidden units or extra neurons is equal to the whole-number part of the LOG2 of the number of A MAT neurons. Or, 2 to the power of N equals the number of A MAT neurons, where N is the number of extra neurons or hidden units.
In this method a hidden neuron is ON if the number of A MAT neurons ON is equal to 2 to the power of 1, 2, 3, up to N. For example, the XOR problem has two A MAT neurons so the number of extra or hidden units equals 1. Thus, the hidden unit is ON if 2 A MAT neurons are ON. And, if we had 4 hidden units they would be ON if:
Hidden Unit 1 = ON if 2 A MAT neurons ON
Hidden Unit 2 = ON if 4 A MAT neurons ON
Hidden Unit 3 = ON if 8 A MAT neurons ON
Hidden Unit 4 = ON if 16 A MAT neurons ON
To code the A MAT and B MAT data set up a table similar to section 4.1.1:

A MAT H Comment
‑‑‑‑‑‑ ‑‑ ‑‑‑‑‑‑‑‑‑‑‑‑‑
0 1 0 OFF because ON A MAT = 1, not
a power of 2
1 0 0 OFF due to same reason above
1 1 1 ON because ON A MAT neurons = 2,
and 2 to the power 1 =2, they
are equal.
B MAT
‑‑‑‑‑
1
1
0
Now code the A MAT data in the data file ‘ADATA.DAT’ as follows:
0.0 1.0 0.0
1.0 0.0 0.0
1.0 1.0 1.0
9999.0 1.0 1.0
And, remember to indicate that the number of A MAT neurons is equal to 3 in the file ‘KANN30.DAT’.
Next, code the B MAT data in the data file ‘BDATA.DAT’ as follows:
1.0
1.0
0.0
9999.0
And, remember to indicate the number of B MAT neurons is equal to 1 in the file ‘KANN30.DAT’.
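As an illustration, the power of 2 strategy can be sketched in Free Pascal as follows (the function names are mine):
program Power2Demo;

{ N = whole-number part of LOG2 of the A MAT count; hidden unit k
  is ON when exactly 2 to the power k A MAT neurons are ON. }
function HiddenUnitCount(aNeurons: Integer): Integer;
begin
  HiddenUnitCount := Trunc(Ln(aNeurons) / Ln(2.0));
end;

function PowerOf2Unit(onCount, k: Integer): Double;
begin
  if onCount = 1 shl k then         { 1 shl k = 2 to the power k }
    PowerOf2Unit := 1.0
  else
    PowerOf2Unit := 0.0;
end;

begin
  WriteLn(HiddenUnitCount(2));      { 1 hidden unit for the XOR problem }
  WriteLn(PowerOf2Unit(2, 1):0:1);  { 1.0: the 1 1 pattern turns it ON }
  WriteLn(PowerOf2Unit(1, 1):0:1);  { 0.0: one neuron ON is not 2^1 }
end.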
4.1.3 Power of 1, #A MAT neurons

In the power of 1 learning strategy there are equal numbers of A MAT neurons and hidden unit neurons. And, each hidden unit neuron is ON if 1, 2, 3, up to the number of A MAT neurons are ON. For example, in the sample presented in section 4.0 there would have been 2 hidden neurons. They would be ON if:
Hidden Unit 1 = ON if 1 A MAT neuron ON
Hidden Unit 2 = ON if 2 A MAT neurons ON
And, the coding table would have looked something like this:
A MAT H1 H2 Comment
‑‑‑‑‑‑‑‑ ‑‑ ‑‑ ‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑
0 1 1 0 H1 ON because A MAT neurons
ON equal to 1
1 0 1 0 H1 ON for the reason stated above
1 1 0 1 H2 ON because A MAT neurons
ON equal to 2.
B MAT
‑‑‑‑‑
1
1
0
And, the ‘ADATA.DAT’ file would be coded as follows:
0.0 1.0 1.0 0.0
1.0 0.0 1.0 0.0
1.0 1.0 0.0 1.0
9999.0 1.0 1.0 1.0
Remember to indicate in parameter file ‘KANN30.DAT’ that there are 4 A MAT neurons.
The ‘BDATA.DAT’ file would be coded as follows:
1.0
1.0
0.0
9999.0

Remember to indicate in parameter file ‘KANN30.DAT’ that there is 1 B MAT neuron.
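A minimal Free Pascal sketch of the power of 1 rule (the names are mine): one hidden unit per A MAT neuron, where hidden unit k fires when exactly k A MAT neurons are ON.
program Power1Demo;

function PowerOf1Unit(onCount, k: Integer): Double;
begin
  if onCount = k then
    PowerOf1Unit := 1.0
  else
    PowerOf1Unit := 0.0;
end;

begin
  { the 0 1 pattern has one neuron ON: H1 fires, H2 does not }
  WriteLn(PowerOf1Unit(1, 1):0:1, ' ', PowerOf1Unit(1, 2):0:1);
  { the 1 1 pattern has two neurons ON: H2 fires }
  WriteLn(PowerOf1Unit(2, 1):0:1, ' ', PowerOf1Unit(2, 2):0:1);
end.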
4.1.4 Symmetry, 1 bit
This option uses the principle of symmetry to aid in learning. For example, if there are 10 neurons then neurons 1 thru 5 will be on one side of the symmetry line and neurons 6 thru 10 will be on the other side of the symmetry line. A symmetry hidden unit is ON when neurons in the same position on different sides of the symmetry line are in the same state (ON or OFF).
For example, with the A MAT neurons used in section 4.0 there was 1 neuron on each side of the symmetry line. And, the symmetry neuron would be ON if:
Symmetry Line
|
0 | 0 1 ON because both neurons are in
| same state.
0 | 1 0 OFF because the neurons are not
in the same state
1 | 0 0 OFF because the neurons are not
in the same state
1 | 1 1 ON because both neurons are in
same state
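As an illustration, a symmetry hidden unit can be sketched as follows. Reading ‘same position on different sides’ as comparing neuron i with neuron i + half is my interpretation; the names are mine.
program SymmetryDemo;

function SymmetryUnit(const a: array of Double): Double;
var
  i, half: Integer;
  same: Boolean;
begin
  half := Length(a) div 2;
  same := True;
  for i := 0 to half - 1 do
    if a[i] <> a[i + half] then     { same position, other side of the line }
      same := False;
  if same then
    SymmetryUnit := 1.0
  else
    SymmetryUnit := 0.0;
end;

var
  p00: array[0..1] of Double = (0.0, 0.0);
  p01: array[0..1] of Double = (0.0, 1.0);
  p11: array[0..1] of Double = (1.0, 1.0);
begin
  WriteLn(SymmetryUnit(p00):0:1);   { 1.0: both neurons in the same state }
  WriteLn(SymmetryUnit(p01):0:1);   { 0.0: the neurons differ }
  WriteLn(SymmetryUnit(p11):0:1);   { 1.0: both neurons in the same state }
end.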
4.2 Comments on Learning Strategies
Each learning strategy offers advantages and disadvantages in learning rate and probability of learning neural information.
Each learning strategy is similar to a catalyst used in a chemical reaction. Some are faster than others, and some work only in limited situations. I have observed the same result in various neural network experiments.
Thus, I have found it is very important to use a consistent method that presents the least number of contradictions to the other neurons. For example, the XOR problem was based on the binary system of coding.

Some people have used a method called thermometer coding for A MAT data. The thermometer method is based on the idea that the A MAT neurons represent a scale, with a neuron for each step of the scale, as in a thermometer. For example, to code the numbers 0, 1, 5, 9, and 10 using the thermometer method, 11 A MAT neurons are needed to cover the range 0 to 10. The numbers would be coded as follows:
Number Coding
‑‑‑‑‑‑ ‑‑‑‑‑‑
0 1 0 0 0 0 0 0 0 0 0 0
1 1 1 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 0 0 0 0 0
9 1 1 1 1 1 1 1 1 1 1 0
10 1 1 1 1 1 1 1 1 1 1 1
This was used in one users group I belong to, to solve the learning of values of the sine function.
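As an illustration, thermometer coding on a 0 to 10 scale reduces to: neuron i is ON when i <= n. A minimal Free Pascal sketch:
program ThermoDemo;

var
  a: array[0..10] of Double;
  i, n: Integer;
begin
  n := 5;                           { encode the number 5 }
  for i := 0 to 10 do
    if i <= n then a[i] := 1.0 else a[i] := 0.0;
  for i := 0 to 10 do
    Write(a[i]:0:0, ' ');           { prints 1 1 1 1 1 1 0 0 0 0 0 }
  WriteLn;
end.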

5.0 Processing Time
The processing time depends on three variables. They are the number of A MAT neurons, the number of B MAT neurons, and the maximum number of presentations.
All these factors multiplied together form the number of neural update units. And, based on tests on a CP/M machine, one can use a ratio to determine the run time.
For example, in one experiment I had 32 A MAT neurons, 64 B MAT neurons, 12 test sets, and 180 as the maximum number of presentations. The product of the three factors above (32 * 64 * 180) was 368,640 and the processing time was 40 minutes on a CP/M machine running at 4 MHz. Thus, the estimated processing time is equal to:
est time (CP/M, 4 MHz) = 40 min * (#A MAT * #B MAT * MAX) / 368,640
where
#A MAT is the number of A MAT neurons
#B MAT is the number of B MAT neurons
MAX is the maximum number of learning
sessions.
Note, the estimated time may vary due to disk access time and processor speed on varying systems.
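As an illustration, the ratio can be wrapped in a small Free Pascal function. The names are mine, and the 40 minute calibration is only for the 4 MHz CP/M reference machine above; modern machines will be far faster.
program TimeEstimate;

function EstimatedMinutes(aLen, bLen, maxPres: Integer): Double;
begin
  { 40 minutes per 368,640 neural update units on the reference machine }
  EstimatedMinutes := 40.0 * aLen * bLen * maxPres / 368640.0;
end;

begin
  WriteLn(EstimatedMinutes(32, 64, 180):0:1); { 40.0 minutes }
  WriteLn(EstimatedMinutes(3, 1, 100):0:3);   { 0.033: an XOR-sized run }
end.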

Part V – Non Disclosure Agreement (Purchase implies acceptance, approval, and agreement)
Nondisclosure Agreement
Kanecki Associates, Inc.
P.O. Box 866, Kenosha, WI 53141
Parties.
This Nondisclosure agreement (the “Agreement”) is entered into by and between _David Kanecki of Kanecki Associates, Inc., an S-Corp at P.O. Box 866, Kenosha, WI 53141 (“disclosing party”) and the original purchasing party from the Kanecki Associates, Inc. share-it.com portal with original receipt. The individual or company name listed on the receipt will serve as the receiving party of this agreement (“receiving party”) for the purpose of preventing the unauthorized disclosure of Confidential Information (as defined below).
Summary.
Disclosing party may disclose confidential and proprietary trade secret information to receiving party. The parties mutually agree to enter into a confidential relationship with respect to the disclosure of certain proprietary and confidential information (the “Confidential Information”).
Definition of Confidential Information (Written or Oral).
For purposes of this Agreement, “Confidential Information” shall include all information or material that has or could have commercial value or other utility in the business in which disclosing party is engaged. In the event that Confidential Information is in written form, the disclosing party shall label or stamp the materials with the word “Confidential” or some similar warning. In the event that Confidential Material is transmitted orally, the disclosing party shall promptly provide a writing indicating that such oral communication constituted Confidential Information.
Exclusions from Confidential Information.
Receiving party’s obligations under this Agreement shall not extend to information that is: (a) publicly known at the time of disclosure under this Agreement or subsequently becomes publicly known through no fault of the receiving party, (b) discovered or created by the receiving party prior to the time of disclosure by disclosing party or (c) otherwise learned by the receiving party through legitimate means other than from the disclosing party or anyone connected with the disclosing party.
Obligations of Receiving Party.
The receiving party shall hold and maintain the Confidential Information of the other party in strictest confidence for the sole and exclusive benefit of the disclosing party. The receiving party shall carefully restrict access to any such Confidential Information to persons bound by this Agreement, only on a need-to-know basis. The receiving party shall not, without prior written approval of the disclosing party, use for the receiving party’s own benefit, publish, copy or otherwise disclose to others, or permit the use by others for their benefit or to the detriment of the disclosing party, any of the Confidential Information. The receiving party shall return to disclosing party any and all records, notes, and other written, printed or tangible materials in its possession pertaining to the Confidential Information immediately on the written request of disclosing party.
Time Periods.
The nondisclosure and confidentiality provisions of this Agreement shall survive the termination of any relationship between the disclosing party and the receiving party.
Miscellaneous.
Nothing contained in this Agreement shall be deemed to constitute either party a partner, joint venturer or employee of the other party for any purpose. This Agreement may not be amended except in a writing signed by both parties. If a court finds any provision of this Agreement invalid or unenforceable as applied to any circumstance, the remainder of this Agreement shall be interpreted so as best to effect the intent of the parties. This Agreement shall be governed by and interpreted in accordance with the laws of the State of _Wisconsin_. Any controversy or claim arising out of or relating to this Agreement, or the breach of this Agreement, shall be settled by arbitration in accordance with the rules of the American Arbitration Association and judgment upon the award rendered by the arbitrator(s) may be entered in any court having jurisdiction. The prevailing party shall have the right to collect from the other party its reasonable costs and attorneys’ fees incurred in enforcing this agreement. Any such arbitration hearing shall include a written transcript of the proceedings and a written explanation for any final determination. This Agreement expresses the complete understanding of the parties with respect to the subject matter and supersedes all prior proposals, agreements, representations and understandings. This Agreement and each party’s obligations shall be binding on the representatives, assigns and successors of such party. Each party has signed this Agreement through its authorized representative.
DISCLOSING PARTY:
_________________________________________
Signature
_________________________________________
Disclosing Party’s Name
Date: __________________________________
RECEIVING PARTY:
___Individual, Company, Organization, Etc. Listed on original share-it.com purchase receipt from Kanecki Associates, Inc. share-it portal.______________________________________
Signature
_Purchase and receipt in lieu of signature________________________________________
Receiving Party’s Name/Title
Date: _The date listed on the original share-it.com purchase receipt._________________________________
Part VI – Royalty Agreement
Sample License Agreement
Kanecki Associates, Inc.
P.O. Box 866, Kenosha, WI 53141 UNITED STATES
Introduction.
This License Agreement (the “Agreement”) is made between __David Kanecki of Kanecki Associates, Inc., an S-Corporation at P.O. Box 866, Kenosha, WI 53141 _ (referred to as “Licensor”), and the receiving party name and address listed on the share-it.com purchase receipt_ (referred to as “Licensee”).
Licensor and Licensee shall be collectively referred to as “the parties.” Licensor is the owner of certain proprietary rights to an invention referred to as _Procedure Software Development Kit 1. Licensee desires to license certain rights in the invention. Therefore the parties agree as follows:
The Property [select one]
[Copyright, Trade Secrets and Trademarks: No Patents]
[ X] The Property refers to all proprietary rights, including but not limited to copyrights, trade secrets, formulas, research data, know–how and specifications related to the invention commonly known as the _Procedure Software Development Kit 1 as well as any trademark rights and associated good will. A more complete description is provided in the attached Exhibit A.
Licensed Products. [select one]
[Licensed Products specifically described]
[ X] Licensed Products are defined as the Licensee products incorporating the Property and specifically described in Exhibit A (the “Licensed Products”).
Grant of Rights.
Licensor grants to Licensee an exclusive license to make, use and sell the Property solely in association with the manufacture, sale, use, promotion or distribution of the Licensed Products.
Sublicense. [select one]
[Consent required]
[X] Licensee may sublicense the rights granted pursuant to this agreement provided: Licensee obtains Licensor’s prior written consent to such sublicense, and Licensor receives such revenue or royalty payment as provided in the Payment section below. Any sublicense granted in violation of this provision shall be void.
Reservation of Rights. [select one]
[All rights reserved]
[ X] Licensor expressly reserves all rights other than those being conveyed or granted in this Agreement.
Territory. [select one]
[Statement of territory]
[X] The rights granted to Licensee are limited to a non-exclusive territory (the “Territory”).
[Limiting cross–territory sales]
[ X ] The rights granted to Licensee are limited to _countries that are not on the Bureau of Industrial Security (Department of Commerce) watch or banned countries list_ (the “Territory”). Licensee shall not make, use or sell the Licensed Products or any products which are confusingly or substantially similar to the Licensed Products in any country outside the Territory and will not knowingly sell the Licensed Products to persons who intend to resell them in a country outside the Territory.
Term. [select one]
[Specified with renewal rights]
[ X] This Agreement shall commence upon the purchase date on the original share-it.com receipt (the “Effective Date”) and shall extend for a period of _five years (the “Initial Term”). Following the Initial Term, this agreement may be renewed by Licensee under the same terms and conditions for consecutive one-year periods (the “Renewal Terms”), provided that Licensee provides written notice of its intention to renew this agreement within thirty days before the expiration of the current term. In no event shall the Agreement extend longer than the date of expiration of the patent listed in the definition of the Property.
[Short term with renewal rights based upon sales]
[ ] This Agreement shall commence upon the Effective Date and shall extend for a period of _1 year (the “Initial Term”) and thereafter may be renewed by Licensee under the same terms and conditions for consecutive 1-year periods (the “Renewal Terms”), provided that:
(a) Licensee provides written notice of its intention to renew this agreement within thirty days before the expiration of the current term,
(b) Licensee has met the sales requirements as established in Exhibit A, and
(c) in no event shall the Agreement extend longer than the date of expiration of the longest–living patent (or patents) or last–remaining patent application as listed in the definition of the Property.
[No patents; indefinite term]
[X ] This Agreement shall commence upon the Effective Date and shall continue until terminated pursuant to a provision of this Agreement.
[Term for as long as licensee sells licensed products]
[ X ] This Agreement shall commence upon the Effective Date as specified in Exhibit A and shall continue for as long as Licensee continues to offer the Licensed Products in commercially reasonable quantities unless sooner terminated pursuant to a provision of this Agreement.
Royalties.
All royalties (“Royalties”) provided for under this Agreement shall accrue when the respective items are sold, shipped, distributed, billed or paid for, whichever occurs first. Royalties shall also be paid by the Licensee to Licensor on all items, even if not billed (including, but not limited to introductory offers, samples, promotions or distributions) to individuals or companies which are affiliated with, associated with or subsidiaries of Licensee.
Net Sales.
“Net Sales” are defined as Licensee’s gross sales (i.e., the gross invoice amount billed customers) less quantity discounts and returns actually credited. A quantity discount is a discount made at the time of shipment. No deductions shall be made for cash or other discounts, for commissions, for uncollectible accounts or for fees or expenses of any kind which may be incurred by the Licensee in connection with the Royalty payments.
[ ] Advance Against Royalties. [Optional]
As a nonrefundable advance against Royalties (the “Advance”), Licensee agrees to pay to Licensor upon execution of this Agreement the sum of 50,000.00 USD.
Licensed Product Royalty. [select one]
[All rights]
[ X] Licensee agrees to pay a Royalty of _five percent of all Net Sales revenue of the Licensed Products (“Licensed Product Royalty”).
[ X] Guaranteed Minimum Annual Royalty Payment. [Optional]
In addition to any other advances or fees, Licensee shall pay an annual guaranteed royalty (the “GMAR”) as follows: _$300,000.00 per year. The GMAR shall be paid to Licensor annually on _January 15th. The GMAR is an advance against royalties for the twelve–month period commencing upon payment. Royalty payments based on Net Sales made during any year of this Agreement shall be credited against the GMAR due for the year in which such Net Sales were made. In the event that annual royalties exceed the GMAR, Licensee shall pay the difference to Licensor. Any annual royalty payments in excess of the GMAR shall not be carried forward from previous years or applied against the GMAR.
[ X] License Fee. [Optional]
As a nonrefundable, nonrecoupable fee for executing this license, Licensee agrees to pay to Licensor upon execution of this Agreement the sum of $500,000.00_.
[ X] Royalties on Spin Offs. [Optional]
Licensee agrees to pay a Royalty (“Spin Off Product Royalty”) of _five percent for all Net Sales of “Spin Off Products.” A “Spin–Off Product” is any product that is derived from, based on or adapted from the Licensed Product, provided that if the product uses the Property it shall be considered to be a Licensed Product and not a Spin Off Product.
[ ] Adjustment of Royalties For Third Party Licenses. [Optional]
In the event that any Licensed Product (or other items for which Licensee pays Royalties to Licensor) incorporates third party character licenses, endorsements or other proprietary licenses, Licensor agrees to adjust the Royalty rate to _10 percent for such third party licenses. Licensee shall notify Licensor of any such third party licenses prior to manufacture. Third party licenses shall not include licenses accruing to an affiliate, associate or subsidiary of Licensee.
[X] F.O.B. Royalties. [Optional]
Licensee agrees to pay the Royalty (“F.O.B. Royalty”) of _eight percent_ [insert appropriate royalty percentage] for all F.O.B. sales of Licensed Products.
[ X] Sublicensing Revenues. [Optional]
In the event of any sublicense of the rights granted pursuant to this Agreement, Licensee shall pay to Licensor __five percent of all sublicensing revenues.
Payments and Statements to Licensor.
Within thirty days after the end of each calendar quarter (the “Royalty Period”), an accurate statement of Net Sales of Licensed Products along with any Royalty payments or sublicensing revenues due to Licensor shall be provided to Licensor, regardless of whether any Licensed Products were sold during the Royalty Period. All payments shall be paid in United States currency drawn on a United States bank. The acceptance by Licensor of any of the statements furnished or Royalties paid shall not preclude Licensor questioning the correctness at any time of any payments or statements.
Audit.
Licensee shall keep accurate books of account and records covering all transactions relating to the license granted in this Agreement, and Licensor or its duly authorized representatives shall have the right upon five days’ prior written notice, and during normal business hours, to inspect and audit Licensee’s records relating to the Property licensed under this Agreement. Licensor shall bear the cost of such inspection and audit, unless the results indicate an underpayment greater than $1000.00 for any six–month period. In that case, Licensee shall promptly reimburse Licensor for all costs of the audit along with the amount due with interest on such sums. Interest shall accrue from the date the payment was originally due and the interest rate shall be 1.5% per month, or the maximum rate permitted by law, whichever is less. All books of account and records shall be made available in the United States and kept available for at least two years after the termination of this Agreement.
Late Payment.
Time is of the essence with respect to all payments to be made by Licensee under this Agreement. If Licensee is late in any payment provided for in this Agreement, Licensee shall pay interest on the payment from the date due until paid at a rate of 1.5% per month, or the maximum rate permitted by law, whichever is less.
Licensor Warranties.
Licensor warrants that it has the power and authority to enter into this Agreement and has no knowledge as to any third party claims regarding the proprietary rights in the Property which would interfere with the rights granted under this Agreement.
Indemnification by Licensor. [select one]
[Statement of licensor indemnification]
[X ] Licensor shall indemnify Licensee and hold Licensee harmless from any damages and liabilities (including reasonable attorneys’ fees and costs) arising from any breach of Licensor’s warranties as defined in Licensor’s Warranties, above, provided: (a) such claim, if sustained, would prevent Licensee from marketing the Licensed Products or the Property; (b) such claim arises solely out of the Property as disclosed to the Licensee, and not out of any change in the Property made by Licensee or a vendor, or by reason of an off–the–shelf component or by reason of any claim for trademark infringement; (c) Licensee gives Licensor prompt written notice of any such claim; (d) such indemnity shall only be applicable in the event of a final decision by a court of competent jurisdiction from which no right to appeal exists; and (e) the maximum amount due from Licensor to Licensee under this paragraph shall not exceed the amounts due to Licensor under the Payment Section from the date that Licensor notifies Licensee of the existence of such a claim.
Licensee Warranties.
Licensee warrants that it will use its best commercial efforts to market the Licensed Products and that their sale and marketing shall be in conformance with all applicable laws and regulations, including but not limited to all intellectual property laws.
Indemnification by Licensee.
Licensee shall indemnify Licensor and hold Licensor harmless from any damages and liabilities (including reasonable attorneys’ fees and costs) (a) arising from any breach of Licensee’s warranties and representation as defined in the Licensee Warranties, above; (b) arising out of any alleged defects or failures to perform of the Licensed Products or any product liability claims or use of the Licensed Products; and (c) arising out of advertising, distribution or marketing of the Licensed Products.
[ X] Limitation of Licensor Liability. [Optional]
Licensor’s maximum liability to Licensee under this agreement, regardless on what basis liability is asserted, shall in no event exceed the total amount paid to Licensor under this Agreement. Licensor shall not be liable to Licensee for any incidental, consequential, punitive or special damages.
Intellectual Property Protection.
Licensor may, but is not obligated to seek, in its own name and at its own expense, appropriate patent, trademark or copyright protection for the Property. Licensor makes no warranty with respect to the validity of any patent, trademark or copyright which may be granted. Licensor grants to Licensee the right to apply for patents on the Property or Licensed Products provided that such patents shall be applied for in the name of Licensor and licensed to Licensee during the Term and according to the conditions of this Agreement. Licensee shall have the right to deduct its reasonable out of pocket expenses for the preparation, filing and prosecution of any such U.S. patent application (but in no event more than $5,000) from future royalties due to Licensor under this Agreement. Licensee shall obtain Licensor’s prior written consent before incurring expenses for any foreign patent application.
Compliance with Intellectual Property Laws.
The license granted in this Agreement is conditioned on Licensee’s compliance with the provisions of the intellectual property laws of the United States and any foreign country in the Territory. All copies of the Licensed Product as well as all promotional material shall bear appropriate proprietary notices.
Infringement Against Third Parties.
In the event that either party learns of imitations or infringements of the Property or Licensed Products, that party shall notify the other in writing of the infringements or imitations. Licensor shall have the right to commence lawsuits against third persons arising from infringement of the Property or Licensed Products. In the event that Licensor does not commence a lawsuit against an alleged infringer within sixty days of notification by Licensee, Licensee may commence a lawsuit against the third party. Before filing suit, Licensee shall obtain the written consent of Licensor to do so, and such consent shall not be unreasonably withheld. Licensor will cooperate fully and in good faith with Licensee for the purpose of securing and preserving Licensee’s rights to the Property. Any recovery (including, but not limited to, a judgment, settlement or licensing agreement included as resolution of an infringement dispute) shall be divided equally between the parties after deduction and payment of reasonable attorneys’ fees to the party bringing the lawsuit.
Exploitation.
Licensee agrees to manufacture, distribute and sell the Licensed Products in commercially reasonable quantities during the term of this Agreement and to commence such manufacture, distribution and sale within the following time period: _six months_. This is a material provision of this Agreement.
Samples & Quality Control.
Licensee shall submit a reasonable number of production samples of the Licensed Product to Licensor to assure that the product meets Licensor’s quality standards. In the event that Licensor fails to object in writing within 10 business days after the date of receipt, the Licensed Product shall be deemed to be acceptable. At least once during each calendar year, Licensee shall submit two production samples of each Licensed Product for review. The quality standards applied by Licensor shall be no more rigorous than the quality standards applied by Licensee to similar products.
Insurance.
Licensee shall, throughout the Term, obtain and maintain, at its own expense, standard product liability insurance coverage, naming Licensor as additional named insureds. Such policy shall: (a) be maintained with a carrier having a Moody’s rating of at least B; and (b) provide protection against any claims, demands and causes of action arising out of any alleged defects or failure to perform of the Licensed Products or any use of the Licensed Products. The amount of coverage shall be a minimum of _$2,000,000.00_ with no deductible amount for each single occurrence for bodily injury or property damage. The policy shall provide for notice to the Agent and Licensor from the insurer by Registered or Certified Mail in the event of any modification or termination of insurance. Licensee shall furnish Licensor and Agent a certificate from its product liability insurance carrier evidencing insurance coverage in favor of Licensor, and in no event shall Licensee distribute the Licensed Products before the receipt by the Licensor of evidence of insurance. The provisions of this section shall survive termination for three years.
Confidentiality.
The parties acknowledge that each may be furnished or have access to confidential information that relates to each other’s business (the “Confidential Information”). In the event that Confidential Information is in written form, the disclosing party shall label or stamp the materials with the word “Confidential” or some similar warning. In the event that Confidential Information is transmitted orally, the disclosing party shall promptly provide a writing indicating that such oral communication constituted Confidential Information. The parties agree to maintain the Confidential Information in strictest confidence for the sole and exclusive benefit of the other party and to restrict access to such Confidential Information to persons bound by this Agreement, only on a need–to–know basis. Neither party, without prior written approval of the other, shall use or otherwise disclose to others, or permit the use by others of the Confidential Information.
Termination. [select one]
[Initial term with renewals].
[ X] This Agreement terminates at the end of two years (the “Initial Term”) unless renewed by Licensee under the same terms and conditions for consecutive two year periods (the “Renewal Terms”) provided that Licensee provides written notice of its intention to renew this agreement within thirty days prior to expiration of the current term. In no event shall the Agreement extend longer than the date of expiration of the longest–living patent (or patents) or last–remaining patent application as listed in the definition of the Property.
[Termination at will: Licensee’s option]
[ X ] Upon 90 days’ notice, Licensee may, at its sole discretion, terminate this Agreement by providing written notice to the Licensor.
Licensor’s Right to Terminate.
Licensor shall have the right to terminate this Agreement for the following reasons:
(a) Licensee fails to pay Royalties when due or fails to accurately report Net Sales, as defined in the Payment Section of this Agreement, and such failure is not cured within thirty days after written notice from the Licensor;
(b) Licensee fails to introduce the product to market within six months after contract signing or to offer the Licensed Products in commercially reasonable quantities during any subsequent year;
(c) Licensee fails to maintain confidentiality regarding Licensor’s trade secrets and other Confidential Information;
(d) Licensee assigns or sublicenses in violation of the Agreement; or
(e) Licensee fails to maintain or obtain product liability insurance as required by the provisions of this Agreement.
Effect of Termination.
Upon termination of this Agreement, all Royalty obligations as established in the Payments Section shall immediately become due. After the termination of this license, all rights granted to Licensee under this Agreement shall terminate and revert to Licensor, and Licensee will refrain from further manufacturing, copying, marketing, distribution or use of any Licensed Product or other product which incorporates the Property. Within thirty days after termination, Licensee shall deliver to Licensor a statement indicating the number and description of the Licensed Products which it had on hand or was in the process of manufacturing as of the termination date. Licensee may dispose of the Licensed Products covered by this Agreement for a period of three months after termination or expiration, except that Licensee shall have no such right in the event this Agreement is terminated according to the Licensor’s Right to Terminate, above. At the end of the post-termination sale period, Licensee shall furnish a royalty payment and statement as required under the Payment Section. Upon termination, Licensee shall deliver to Licensor all tooling and molds used in the manufacture of the Licensed Products. Licensor shall bear the costs of shipping for the tooling and molds.
Survival.
The obligations of all Sections shall survive any termination of this Agreement.
Attorneys’ Fees and Expenses.
The prevailing party shall have the right to collect from the other party its reasonable costs and necessary disbursements and attorneys’ fees incurred in enforcing this Agreement.
Dispute Resolution. [select one]
[X] Mediation & Arbitration.
The Parties agree that every dispute or difference between them, arising under this Agreement, shall be settled first by a meeting of the Parties attempting to confer and resolve the dispute in a good faith manner. If the Parties cannot resolve their dispute after conferring, any Party may require the other Parties to submit the matter to non-binding mediation, utilizing the services of an impartial professional mediator approved by all Parties. If the Parties cannot come to an agreement following mediation, the Parties agree to submit the matter to binding arbitration at a location mutually agreeable to the Parties. The arbitration shall be conducted on a confidential basis pursuant to the Commercial Arbitration Rules of the American Arbitration Association. Any decision or award as a result of any such arbitration proceeding shall include the assessment of costs, expenses and reasonable attorney’s fees and shall include a written record of the proceedings and a written determination of the arbitrators. Absent an agreement to the contrary, any such arbitration shall be conducted by an arbitrator experienced in intellectual property law. The Parties reserve the right to object to any individual who shall be employed by or affiliated with a competing organization or entity. In the event of any such dispute or difference, either Party may give to the other notice requiring that the matter be settled by arbitration. An award of arbitration shall be final and binding on the Parties and may be confirmed in a court of competent jurisdiction.
Governing Law.
This Agreement shall be governed in accordance with the laws of the State of Wisconsin.
Jurisdiction.
The parties consent to the exclusive jurisdiction and venue of the federal and state courts located in Milwaukee, Wisconsin in any action arising out of or relating to this Agreement. The parties waive any other venue to which either party might be entitled by domicile or otherwise.
Waiver.
The failure to exercise any right provided in this Agreement shall not be a waiver of prior or subsequent rights.
Invalidity.
If any provision of this Agreement is invalid under applicable statute or rule of law, it is to be considered omitted and the remaining provisions of this Agreement shall in no way be affected.
Entire Understanding.
This Agreement expresses the complete understanding of the parties and supersedes all prior representations, agreements and understandings, whether written or oral. This Agreement may not be altered except by a written document signed by both parties.
Attachments & Exhibits.
The parties agree and acknowledge that all attachments, exhibits and schedules referred to in this Agreement are incorporated in this Agreement by reference.
Notices.
Any notice or communication required or permitted to be given under this Agreement shall be sufficiently given when received by certified mail, or sent by facsimile transmission or overnight courier.
No Joint Venture.
Nothing contained in this Agreement shall be construed to place the parties in the relationship of agent, employee, franchisee, officer, partner or joint venturer. Neither party may create or assume any obligation on behalf of the other.
Assignability. [select one]
[Statement of Assignability]
[X] Licensee may not assign or transfer its rights or obligations pursuant to this Agreement without the prior written consent of Licensor. Any assignment or transfer in violation of this section shall be void.
Each party has signed this Agreement through its authorized representative. The parties, having read this Agreement, indicate their consent to the terms and conditions by their signature below.
By: David Kanecki of Kanecki Associates, Inc.    Date: date listed on original share-it.com purchase receipt
Licensor Name: David Kanecki
By: Individual, Company, Party, etc. listed under name on original share-it.com receipt    Date: date listed on original share-it.com receipt
Licensee Name/Title: Individual, Company, Party, etc. listed under name on original share-it.com receipt
EXHIBIT A
The complete project, source, and data files are included for use with Delphi. The project files are in Delphi 6 format and are recognized by later versions of Delphi.

THE PROPERTY
One reason for considering the procedural software development tool kit is that, although Java exists and runs on many platforms, this kit offers the added benefit of a virtual development platform that can cross-link with Java. With a virtual development platform, it can be easier to update certain areas of an application.
The procedural software development kit contains tools, as executable programs and source code, that allow you to develop applications for a RISC- and stack-based processor. The modules include two compilers for languages similar to Pascal and FORTRAN, two RISC/stack-based assemblers for single-client and multiple-client programs, and three emulators: one for stand-alone single-client programs, one for multiple-client programs, and one virtual-memory stand-alone client emulator with a hit ratio of over 97% for large programs. Finally, an assembly converter program can translate the RISC/stack-based code into instructions for a specific processor.
The compilers accept source code that is similar to Pascal and FORTRAN and generate RISC/stack-based assembly code. The benefit of the RISC/stack-based approach is that it allows the processing of the application to be optimized. In addition, the use of RISC/stack-based compiler coding makes it easier to add features to the compiler and simplifies the compilation process, as the sketch below illustrates.
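While this Agreement does not reproduce the kit’s compiler sources, the simplification claimed here is easy to illustrate: with a stack-based target, code generation for an expression is a single post-order walk of the parse tree. The following Free Pascal sketch is illustrative only; the node layout and the mnemonics (PUSH, ADD, MUL, POP) are invented here and are not the kit’s actual formats.

program EmitDemo;

type
  PNode = ^TNode;
  TNode = record
    Op: Char;        { '+', '*', or 'v' for a variable leaf }
    Name: Char;      { variable name, used when Op = 'v' }
    L, R: PNode;
  end;

function Leaf(N: Char): PNode;
var
  P: PNode;
begin
  New(P);
  P^.Op := 'v'; P^.Name := N;
  P^.L := nil; P^.R := nil;
  Leaf := P;
end;

function Bin(Op: Char; L, R: PNode): PNode;
var
  P: PNode;
begin
  New(P);
  P^.Op := Op; P^.Name := ' ';
  P^.L := L; P^.R := R;
  Bin := P;
end;

{ Post-order traversal: emit code for both children, then the operator.
  This single pass is the entire code generator for a stack machine. }
procedure Emit(N: PNode);
begin
  if N^.Op = 'v' then
    WriteLn('  PUSH ', N^.Name)   { leaf: push the operand onto the stack }
  else
  begin
    Emit(N^.L);                   { code that leaves the left value on the stack }
    Emit(N^.R);                   { code that leaves the right value on the stack }
    if N^.Op = '+' then
      WriteLn('  ADD')            { pops two values, pushes their sum }
    else
      WriteLn('  MUL');           { pops two values, pushes their product }
  end;
end;

begin
  { x := a + b * c }
  Emit(Bin('+', Leaf('a'), Bin('*', Leaf('b'), Leaf('c'))));
  WriteLn('  POP x');
end.

Adding a new operator requires only a new case in Emit, which is one way a stack-based target keeps the compiler easy to extend.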
The assemblers take the RISC/stack-based assembly code and generate a universal object code that can be run on different computers. The only part of the tool kit that has to be machine-specific is the emulator. In addition, the procedural software tool kit contains a converter that generates the equivalent series of instructions for a specific processor; a rough sketch of that idea follows.
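As a rough illustration of the converter idea, each universal stack opcode can be expanded into an equivalent fixed instruction sequence for a concrete target. The Free Pascal sketch below assumes a hypothetical register machine that keeps the evaluation stack in memory behind an SP register; every target mnemonic here is invented, not taken from the kit.

program ConvDemo;

type
  TOpcode = (opPushVar, opAdd, opMul, opPopVar);

{ Expand one universal stack opcode into an equivalent sequence for the
  hypothetical target. }
procedure Convert(Op: TOpcode; const Arg: string);
begin
  case Op of
    opPushVar:
      begin
        WriteLn('  LOAD  R0, ', Arg);
        WriteLn('  STORE R0, [SP]');  { place the value on the memory stack }
        WriteLn('  INC   SP');
      end;
    opAdd, opMul:
      begin
        WriteLn('  DEC   SP');
        WriteLn('  LOAD  R0, [SP]');  { right operand }
        WriteLn('  DEC   SP');
        WriteLn('  LOAD  R1, [SP]');  { left operand }
        if Op = opAdd then
          WriteLn('  ADD   R1, R0')
        else
          WriteLn('  MUL   R1, R0');
        WriteLn('  STORE R1, [SP]');  { push the result back }
        WriteLn('  INC   SP');
      end;
    opPopVar:
      begin
        WriteLn('  DEC   SP');
        WriteLn('  LOAD  R0, [SP]');
        WriteLn('  STORE R0, ', Arg); { store the top of stack into a variable }
      end;
  end;
end;

begin
  { convert the universal program for:  x := a + b }
  Convert(opPushVar, 'a');
  Convert(opPushVar, 'b');
  Convert(opAdd, '');
  Convert(opPopVar, 'x');
end.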
The emulators allow the program to run on different computers. Because the compiler and assembler outputs are universal, the assembly code and object code are the same on all computers; the emulator is the only component that must be built for a specific machine. Due to its “plain vanilla” coding, an emulator can be made to operate on a new computer quickly. This has been verified by testing on three different operating systems, including a Java-based emulator.
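A minimal sketch of such an emulator follows, assuming a hypothetical five-instruction object code (the kit’s actual instruction set and object-code format are not reproduced in this Agreement). It is a plain fetch-decode-execute loop over an array, which is why porting it to a new machine is largely a matter of recompiling.

program MiniEmu;

type
  TOpcode = (opPushI, opAdd, opMul, opPrint, opHalt);
  TInstr = record
    Op: TOpcode;
    Arg: LongInt;    { operand, used only by opPushI }
  end;

const
  { universal "object code" for:  print(4 + 5 * 2) }
  Code: array[0..6] of TInstr = (
    (Op: opPushI; Arg: 4),
    (Op: opPushI; Arg: 5),
    (Op: opPushI; Arg: 2),
    (Op: opMul;   Arg: 0),
    (Op: opAdd;   Arg: 0),
    (Op: opPrint; Arg: 0),
    (Op: opHalt;  Arg: 0)
  );

var
  Stack: array[0..255] of LongInt;
  SP, PC: Integer;
  Running: Boolean;

begin
  SP := 0;
  PC := 0;
  Running := True;
  { fetch-decode-execute: the whole machine-specific surface of the kit }
  while Running do
  begin
    case Code[PC].Op of
      opPushI: begin Stack[SP] := Code[PC].Arg; Inc(SP); end;
      opMul:   begin Dec(SP); Stack[SP - 1] := Stack[SP - 1] * Stack[SP]; end;
      opAdd:   begin Dec(SP); Stack[SP - 1] := Stack[SP - 1] + Stack[SP]; end;
      opPrint: begin Dec(SP); WriteLn(Stack[SP]); end;  { prints 14 }
      opHalt:  Running := False;
    end;
    Inc(PC);
  end;
end.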

LICENSED PRODUCTS
Compilers, stand-alone applications, specialized applications, Java add-on, cloud computing