Rings. Zero-divisors.
Zero-divisor
An element \( a \) in a ring \( (R, +, \cdot) \) is a left zero-divisor iff there exists an element \( b \neq 0 \) in \( R \) such that \( a\cdot b = 0 \).
An element \( a \) in a ring \( (R, +, \cdot) \) is a right zero-divisor iff there exists an element \( b \neq 0 \) in \( R \) such that \( b \cdot a = 0 \).
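As a concrete illustration (not from the text), the definitions above can be checked by brute force in a small commutative ring such as \( \mathbb{Z}/6\mathbb{Z} \), where left and right zero-divisors coincide. The helper name `zero_divisors` is mine; note that \( 0 \) itself qualifies under the definition as stated, since \( 0 \cdot b = 0 \) for any \( b \neq 0 \):

```python
def zero_divisors(n):
    """Elements a of Z/nZ for which some b != 0 has a*b == 0 (mod n)."""
    return [a for a in range(n)
            if any(a * b % n == 0 for b in range(1, n))]

print(zero_divisors(6))  # -> [0, 2, 3, 4]
print(zero_divisors(7))  # -> [0]  (no non-zero zero-divisors: Z/7Z is a field)
```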
Lambda perspective
The left case describes the 2-input function \( \groupMul{R}: R \times R \to R \) with its first parameter fixed at \( a \). This forms a function \( m_a : R \to R \). In lambda calculus this is called partial application.
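The partial-application view can be sketched directly, again using \( \mathbb{Z}/6\mathbb{Z} \) as an assumed working example (the names `mul6` and `m2` are illustrative):

```python
from functools import partial

def mul6(a, b):
    """Ring multiplication in Z/6Z."""
    return a * b % 6

m2 = partial(mul6, 2)  # fix the first argument at a = 2: a function R -> R
print([m2(b) for b in range(6)])  # -> [0, 2, 4, 0, 2, 4]
```

The repeated outputs already hint at the next section: \( 2 \) is a zero-divisor in \( \mathbb{Z}/6\mathbb{Z} \), and its partial application is not injective.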
Injectivity and surjectivity
Not a left zero-divisor iff left-multiplication is injective
The partial application of \( \groupMul{R} : R \times R \to R \) by \( a \) as the first argument is an injective function iff \( a \) is not a left zero-divisor.
Can you recall the proof?
When a ring is finite and none of its non-zero elements is a zero-divisor, it is a field (a finite ring with no non-zero zero-divisors is a division ring; commutativity then follows from Wedderburn's little theorem).
Proof
The proposition is easier to grasp in the contrapositive: if \( a \) multiplies on the left with distinct elements \( b \) and \( c \) to produce the same element \( d \) (i.e. \( m_a \) is not injective), then \( a \) is a left zero-divisor. The reverse implication is also true.
Proof. Forward case. Assume \( a\cdot b = a\cdot c \) with \( b \neq c \). Then \( a\cdot b - a \cdot c = 0 \) and, by distributivity, \( a \cdot (b - c) = 0 \). As \( b \) and \( c \) are distinct, \( b - c \neq 0 \), and so we have found a non-zero element \( (b - c) \) which \( a \) multiplies to produce zero: \( a \) is a left zero-divisor.
Reverse case. If \( a \) is a left zero-divisor, then there is a non-zero element \( x \) such that \( a \cdot x = 0 \). Since \( a \cdot 0 = 0 \) as well, the two distinct elements \( 0 \) and \( x \) map to the same element, so \( m_a \) is not injective.
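Both directions of the proof can be sanity-checked by exhaustion. A minimal sketch, again assuming \( \mathbb{Z}/6\mathbb{Z} \) and with helper names of my choosing:

```python
def is_left_zero_divisor(a, n):
    """True if some b != 0 in Z/nZ has a*b == 0 (mod n)."""
    return any(a * b % n == 0 for b in range((1), n))

def is_injective(a, n):
    """True if left-multiplication by a is injective on Z/nZ."""
    return len({a * b % n for b in range(n)}) == n

# The equivalence holds for every element of Z/6Z:
for a in range(6):
    assert is_injective(a, 6) == (not is_left_zero_divisor(a, 6))
print("equivalence verified for Z/6Z")
```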
Intuition
Below, the function \( \groupMul{R} \) with lambda type \( R \to R \to R \) is partially applied, \( \groupMul{R}(a) \), and the resulting function of type \( R \to R \) has its mapping drawn out:
What is interesting to note is that the non-injectivity imposed by the double mapping to \( d \) (\( a\cdot b = a\cdot c = d \)) creates two "holes" in the co-domain. The second hole arises because the two elements \( e_+ \) and \( (b - c) \) both map to \( e_+ \). So the co-domain holes come in pairs. This is the essence of the proposition: for every non-zero co-domain position that the mapping doubles up on, there will be a matching doubling up at \( e_+ \).
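The paired doublings can be observed concretely. A sketch with the assumed example \( a = 2 \) in \( \mathbb{Z}/6\mathbb{Z} \), where \( b = 1 \) and \( c = 4 \) collide:

```python
n, a = 6, 2

def m(b):
    """Left-multiplication by a in Z/nZ."""
    return a * b % n

b, c = 1, 4
print(m(b), m(c))            # 2 2 : the doubling at d = 2
print(m(0), m((b - c) % n))  # 0 0 : the paired doubling at e_+ via (b - c) = 3
```

Here the image of \( m_a \) is \( \{0, 2, 4\} \), leaving the holes \( \{1, 3, 5\} \): each doubled output leaves one co-domain position unreached.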
Injectivity implies an inverse exists
If \( R \) has finitely many elements, then the injectivity of the partial application \( \groupMul{R}(a) \) implies that \( a \) has a right-inverse.
Rings introduce more jargon to describe this idea:
Not a left zero-divisor iff a two-sided unit (if \( R \) is finite)
Proof. Let \( a \) be one of the finitely many elements of \( R \). If \( a \) is not a left zero-divisor, then the partial application \( \groupMul{R}(a) \) is injective by the proposition above, and injectivity implies surjectivity when \( R \) is finite. Surjectivity provides an element \( b \) with \( a \cdot b = e_\cdot \), so \( a \) has a right-inverse. To see that \( b \) is also a left-inverse: \( b \) is itself not a left zero-divisor (if \( b \cdot x = 0 \) then \( x = (a \cdot b) \cdot x = a \cdot (b \cdot x) = 0 \)), so the same argument yields \( c \) with \( b \cdot c = e_\cdot \); then \( c = (a \cdot b) \cdot c = a \cdot (b \cdot c) = a \), hence \( b \cdot a = e_\cdot \). With both inverses, \( a \) is a two-sided unit.
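The surjectivity step (some \( b \) must hit \( e_\cdot \)) can be demonstrated by search in a small finite ring. A sketch assuming \( \mathbb{Z}/10\mathbb{Z} \), which is commutative, so the two-sidedness check is automatic there; the search itself is what mirrors the proof:

```python
n = 10  # Z/10Z: an illustrative finite ring

def is_left_zero_divisor(a):
    return any(a * b % n == 0 for b in range(1, n))

inverses = {}
for a in range(n):
    if not is_left_zero_divisor(a):
        # Injectivity plus finiteness makes m_a surjective,
        # so some b satisfies a*b = e_mul = 1.
        b = next(b for b in range(n) if a * b % n == 1)
        assert b * a % n == 1  # the inverse is two-sided
        inverses[a] = b

print(inverses)  # -> {1: 1, 3: 7, 7: 3, 9: 9}
```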
When \( R \) is not assumed to be finite, the discussion is more subtle. See p123 in Aluffi for more details.