# Learning From Data – A Short Course: Exercise 7.9

**Page 18**

What can go wrong if you just initialize all the weights to exactly zero?

For $\ell = 1, \dots, L$, if $W^{(\ell)}$ is zero then $s^{(\ell)} = (W^{(\ell)})^T x^{(\ell-1)}$ becomes zero. For $\theta(s) = \tanh(s)$ (or $\theta(s) = s$), if $s^{(\ell)}$ becomes zero then $x^{(\ell)} = \theta(s^{(\ell)})$ becomes zero.

The gradient $\frac{\partial e}{\partial W^{(\ell)}} = x^{(\ell-1)} (\delta^{(\ell)})^T$ will then become zero: for $\ell < L$, $\delta^{(\ell)}$ vanishes because $W^{(\ell+1)} = 0$, and for $\ell = L$, $x^{(L-1)} = 0$. So the algorithm will stop immediately and blindly return $h(x) = 0$ as the final hypothesis.
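The argument above can be checked numerically. Below is a minimal numpy sketch (my own, not from the book) of one forward and backward pass through a two-layer tanh network with all weights initialized to zero; biases are omitted to match the derivation above. The layer sizes and inputs are arbitrary choices for illustration.

```python
import numpy as np

# All-zero initialization: input -> 4 hidden tanh units -> 1 tanh output.
W1 = np.zeros((3, 4))              # W^(1)
W2 = np.zeros((4, 1))              # W^(2)
x0 = np.array([0.5, -1.0, 2.0])    # input x^(0), arbitrary nonzero values
y = 1.0                            # target

# Forward pass: theta = tanh, so tanh(0) = 0 propagates zeros forward.
s1 = W1.T @ x0
x1 = np.tanh(s1)                   # x^(1) = 0
s2 = W2.T @ x1
x2 = np.tanh(s2)                   # x^(2) = 0, so h(x) = 0

# Backward pass with squared error e = (x2 - y)^2 and tanh' = 1 - tanh^2.
d2 = 2 * (x2 - y) * (1 - x2**2)    # delta^(2) = -2y, nonzero
g2 = np.outer(x1, d2)              # zero because x^(1) = 0
d1 = (1 - x1**2) * (W2 @ d2)       # delta^(1) = 0 because W^(2) = 0
g1 = np.outer(x0, d1)              # zero because delta^(1) = 0

print(np.allclose(g1, 0), np.allclose(g2, 0))  # True True
```

Note that $\delta^{(2)}$ itself is nonzero; the gradient still vanishes because it is multiplied by the all-zero hidden output $x^{(1)}$, so a gradient-descent step changes nothing.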

**Note that this result may not hold for other kinds of $\theta$.** If $\theta$ is the standard logistic function (side note: I really hate the confusion around the term "sigmoid"), then $\theta(0) = \frac{1}{2}$, hence it's *likely* that $x^{(\ell)} \neq 0$, so it's *likely* that eventually $\delta^{(\ell)} \neq 0$, which together with other *likely* non-zero components leads to $\frac{\partial e}{\partial W^{(\ell)}} \neq 0$.

The real problem here, as suggested by Andrew Ng, is symmetry: after an update, all the weights directly connected to the same node still share the same value; $s^{(\ell)}$ then takes the same value for every unit in the same layer because their contributing weights are equal, which eventually leads to the same $x^{(\ell)}$ for every unit in the same layer. This redundant architecture arises not only when the weights are initialized to zero, but whenever all weights are initialized to the same value. So random initialization is needed, and it will likely achieve **symmetry breaking**.
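The symmetry problem can also be demonstrated numerically. Here is a hypothetical sketch (my own construction, not from the book or from Ng's course) using a logistic-sigmoid network with every weight initialized to the same constant $c$: the gradients are no longer zero, yet after several gradient-descent steps every hidden unit still computes the same function. The constant, learning rate, and layer sizes are arbitrary.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

c = 0.5                             # same constant for every weight
W1 = np.full((3, 4), c)             # input -> 4 hidden sigmoid units
W2 = np.full((4, 1), c)             # hidden -> 1 sigmoid output
x0 = np.array([0.2, -0.7, 1.5])
y = 1.0
eta = 0.1

for _ in range(5):                  # a few gradient-descent steps
    s1 = W1.T @ x0; x1 = sigmoid(s1)
    s2 = W2.T @ x1; x2 = sigmoid(s2)
    d2 = 2 * (x2 - y) * x2 * (1 - x2)      # sigmoid'(s) = x(1-x)
    d1 = x1 * (1 - x1) * (W2 @ d2)
    W2 -= eta * np.outer(x1, d2)
    W1 -= eta * np.outer(x0, d1)

# The weights moved away from c, but every column of W1 (one column per
# hidden unit) is still identical, so all 4 hidden units stay clones.
print(np.allclose(W1, W1[:, :1]))   # True
print(np.allclose(x1, x1[0]))       # True
```

Because identical units receive identical deltas, their weight updates are identical too, so gradient descent can never pull them apart; a random initialization breaks this tie from the first step.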