Dashboard
① Patient Input
Tumour Size
centimetres (cm)
Cell Uniformity
score 1 – 10
Hidden Layers: 2
neurons per layer: 4
Learning Rate (η): 0.01
step size for weight updates
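The input panel above could be captured as a small config object; a minimal sketch (the example patient values and field names are assumptions, not from the dashboard):

```python
from dataclasses import dataclass

@dataclass
class Config:
    # Patient inputs (hypothetical example values)
    tumour_size_cm: float = 2.3       # Tumour Size, centimetres (cm)
    cell_uniformity: int = 7          # Cell Uniformity, score 1-10
    # Network hyperparameters shown in the panel
    hidden_layers: int = 2
    neurons_per_layer: int = 4
    learning_rate: float = 0.01       # eta: step size for weight updates

cfg = Config()
assert 1 <= cfg.cell_uniformity <= 10  # uniformity score stays on its 1-10 scale
```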
② Network Architecture
Layer computation:
z = Σ wᵢ·xᵢ + b
a = ReLU(z) = max(0, z)
Output: σ(z) = 1/(1+e⁻ᶻ)
Input: size, uniformity
Hidden: 2 × 4 neurons (ReLU)
Output: 1 neuron (Sigmoid)
Parameters: 37 (28 weights + 9 biases)
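The layer equations and the 2 → 4 → 4 → 1 architecture above can be sketched as a NumPy forward pass (a minimal sketch; the random initialization scale and example input are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)           # a = ReLU(z) = max(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))     # sigma(z) = 1/(1+e^-z)

# Sizes follow the panel: 2 inputs -> two hidden layers of 4 (ReLU) -> 1 output (Sigmoid)
sizes = [2, 4, 4, 1]
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b                   # z = sum_i w_i * x_i + b
        a = relu(z) if i < len(weights) - 1 else sigmoid(z)
    return a                            # malignancy probability in (0, 1)

x = np.array([2.3, 7.0])                # hypothetical size (cm), uniformity score
p = forward(x)
```

Counting the parameters confirms the panel total: (2×4 + 4) + (4×4 + 4) + (4×1 + 1) = 37.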
③ Prediction Output
Malignancy Probability
MALIGNANT
Output z (pre-sigmoid)
BCE Loss
Layer 1 Max Act
Layer 2 Max Act
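The metrics in panel ③ follow from the forward pass. A minimal sketch of the BCE loss and the malignancy label, assuming a 0.5 decision threshold (the panel does not state the threshold, and the probability value here is hypothetical):

```python
import math

def bce_loss(p, y, eps=1e-12):
    # Binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]
    p = min(max(p, eps), 1.0 - eps)     # clip to avoid log(0)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

p = 0.92                                # hypothetical malignancy probability
label = "MALIGNANT" if p >= 0.5 else "BENIGN"
loss = bce_loss(p, 1.0)                 # low loss when prediction matches the label
```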
④ Network Visualization — node brightness = activation level · edge thickness = weight magnitude · animated signal flow on Forward Pass
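One plausible way to derive the visual encodings described above (an assumption; the dashboard's actual mapping is not specified) is to normalize each layer's activations against its maximum for brightness, and scale edge thickness by weight magnitude:

```python
import numpy as np

def node_brightness(activations):
    # Brightness in [0, 1]: activation relative to the layer's peak ("Layer Max Act")
    a = np.asarray(activations, dtype=float)
    peak = a.max()
    return a / peak if peak > 0 else np.zeros_like(a)

def edge_thickness(weights, max_px=4.0):
    # Thickness proportional to |weight|, scaled to a maximum pixel width
    w = np.abs(np.asarray(weights, dtype=float))
    peak = w.max()
    return max_px * w / peak if peak > 0 else np.zeros_like(w)
```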