What is the float precision used during training: 16 or 32 bits?

The float precision is 32 bits (single precision, FP32).

Would there be a way to have the choice? I'm asking because I read that some GPUs have much better 16-bit performance than 32-bit (more than 50% better).
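For context on the trade-off being discussed: 16-bit floats are faster on many GPUs but have far fewer mantissa bits, so small updates can be lost to rounding. A minimal illustration of this (using NumPy here, not the library in question, which is an assumption for demonstration purposes):

```python
import numpy as np

# float32 keeps ~7 decimal digits of precision; float16 only ~3.
# Adding a small value like 1e-4 (e.g. a tiny gradient update) survives
# in float32 but rounds away entirely in float16, because float16's
# machine epsilon (~9.8e-4) is larger than the value being added.
x = np.float32(1.0) + np.float32(1e-4)  # stays distinguishable from 1.0
y = np.float16(1.0) + np.float16(1e-4)  # rounds back to exactly 1.0

print(x)
print(y)
```

This is why frameworks that do offer a choice typically use *mixed* precision (16-bit compute with 32-bit accumulation) rather than pure 16-bit training; for example, PyTorch exposes this via `torch.cuda.amp`.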