Two-Layer Perceptrons 
 
-  Three-layer directed, weighted graph (three layers of nodes, but only two layers of weighted edges, hence the name)
  
  -  First layer: input nodes
  
  -  Second layer: hidden nodes

  -  Third layer: output nodes
  
 
-  Alternatively, view as two connected perceptrons (a setup sketch in code follows this list)
  
  -  Output layer of the first perceptron is the input layer for the second
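
As a concrete illustration, the graph can be stored as one weight matrix per layer of edges. The following minimal NumPy sketch sets up such a network; the layer sizes, variable names, and random initialization are assumptions for illustration, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 3 input, 4 hidden, 2 output nodes.
n_in, n_hidden, n_out = 3, 4, 2

# One weight matrix per "perceptron"; rows index the receiving nodes.
W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden edge weights
b1 = np.zeros(n_hidden)                  # hidden-node bias terms
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output edge weights
b2 = np.zeros(n_out)                     # output-node bias terms
```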
  
 
 -  Computing an output
  
  -  Compute outputs for first perceptron
  
  -  Using these outputs, compute outputs for the second perceptron (see the forward-pass sketch below)
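
A forward pass under the setup above might look like this sketch; the sigmoid activation is an assumption, since the notes do not fix an activation function:

```python
def sigmoid(x):
    """Logistic activation; its derivative is sigmoid(x) * (1 - sigmoid(x))."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Compute the network output in two stages, one per perceptron."""
    hidden = sigmoid(W1 @ x + b1)        # outputs of the first perceptron
    output = sigmoid(W2 @ hidden + b2)   # those outputs feed the second
    return hidden, output
```

For example, `forward(np.array([1.0, 0.0, 1.0]))` returns the hidden-node and output-node activations for one input.
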
 -  Backpropagation of errors during training
  
  -  First, train the second perceptron
  
  -  Next, calculate errors for each hidden node (summarized in the formula below)
    
    -  For each output node
      
      -  Add to the total error the product of:
        
        -  the weight of the edge from the hidden node to the output node
        
        -  the gradient of the hidden node's output (the derivative of the input-to-hidden perceptron's activation)
        
        -  the error at that output node
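
In symbols (notation assumed here, since the notes name no symbols), this accumulation gives the standard backpropagation error for hidden node $j$:

$$\delta_j = g'(\mathrm{in}_j) \sum_{k} w_{jk} \, \delta_k$$

where $g'$ is the gradient of the hidden node's activation, $w_{jk}$ is the weight of the edge from hidden node $j$ to output node $k$, and $\delta_k$ is the error at output node $k$.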
        
  -  Modify each incoming weight as in single-perceptron training (a full training-step sketch follows)
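
Putting the steps together, a minimal training step under the same assumptions (sigmoid activation, squared-error loss, and a made-up learning rate `lr`) might be sketched as follows; both error vectors are computed before any weights change, as in standard backpropagation:

```python
lr = 0.5  # assumed learning rate, chosen arbitrarily

def train_step(x, target):
    """One backpropagation update for a single training example."""
    global W1, b1, W2, b2  # augmented assignment below needs the global declaration
    hidden, output = forward(x)

    # Train the second perceptron: output error scaled by the sigmoid
    # gradient at each output node.
    delta_out = (target - output) * output * (1.0 - output)

    # Error for each hidden node: for every output node, accumulate
    # edge weight * hidden-node gradient * output-node error.
    delta_hidden = (W2.T @ delta_out) * hidden * (1.0 - hidden)

    # Modify each incoming weight as with single perceptrons.
    W2 += lr * np.outer(delta_out, hidden)
    b2 += lr * delta_out
    W1 += lr * np.outer(delta_hidden, x)
    b1 += lr * delta_hidden
```

For example, `train_step(np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0]))` performs one weight update for a single input/target pair.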