Intermediate Features via Guided Backpropagation
Goal
Answer “what are the intermediate features looking for?” by computing the gradient of a single neuron activation with respect to each input pixel.
We choose one specific neuron activation (a single scalar value), not an entire filter.
Steps
- Choose one neuron: Pick one activation value (e.g., a single scalar in a Conv5 feature map)
- Backpropagate: Calculate each input pixel’s contribution to that activation
- Visualize: Display gradient information as images
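The steps above can be sketched on a toy one-layer network. This is a minimal numpy illustration, not the full pipeline: the weights, input size, and neuron index are made up, and a real conv layer is replaced by a flattened matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" and one conv-like layer, flattened to a matrix multiply.
x = rng.normal(size=16)          # input "pixels"
W = rng.normal(size=(8, 16))     # layer weights

# Forward pass: act = ReLU(W @ x)
pre_act = W @ x
act = np.maximum(pre_act, 0.0)

# Step 1: choose ONE neuron activation (a single scalar), e.g. index 3.
k = 3

# Step 2: backpropagate that scalar to the input. For this single layer,
# d act[k] / d x = 1{pre_act[k] > 0} * W[k]
grad_x = (pre_act[k] > 0) * W[k]

# Step 3: grad_x has the same shape as the input, so it can be reshaped
# and normalized for display as a "what excites this neuron" image.
print(grad_x.shape)
```

In a deep network the same gradient is obtained by calling autograd on the chosen scalar (e.g. `act[k].backward()` in a framework like PyTorch) rather than by hand-deriving the chain rule.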
The “Guided” Modification
Problem: Standard backprop creates noisy visualizations, in part because negative gradients flow back through the network and mix with positive ones
Solution: During ReLU backpropagation:
- Set gradients to 0 wherever the forward-pass input to the ReLU was negative (this is standard ReLU backprop)
- Also set negative gradients to 0 (the “guided” addition)
This double filtering produces cleaner visualizations. Why it works better isn’t fully understood, but results are significantly clearer.
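The double filtering at a ReLU can be written in a few lines. A minimal numpy sketch, with made-up example values, comparing the standard ReLU backward pass to the guided one:

```python
import numpy as np

def relu_backward_standard(grad_out, fwd_input):
    # Standard ReLU backprop: pass gradient only where the
    # forward-pass input was positive.
    return grad_out * (fwd_input > 0)

def relu_backward_guided(grad_out, fwd_input):
    # Guided backprop: same mask, plus zero out negative gradients.
    return grad_out * (fwd_input > 0) * (grad_out > 0)

fwd_input = np.array([1.0, -2.0, 3.0, 0.5])
grad_out  = np.array([2.0,  2.0, -1.0, 0.5])

# Standard keeps the -1 (fwd_input was positive there);
# guided zeroes it because the gradient itself is negative.
print(relu_backward_standard(grad_out, fwd_input))
print(relu_backward_guided(grad_out, fwd_input))
```

In practice this is implemented by overriding the ReLU backward pass (e.g. with a backward hook in PyTorch) while leaving the forward pass untouched.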