Categories: Math

A More Intuitive Variant of a UFD 2

We continue the discussion started in this post.

We noted earlier that we wouldn’t expect fields to satisfy our definitions. It is clear that a field cannot be a free commutative monoid, since the words in such a monoid would not contain inverses of variables. Can we show that a field cannot be a UFA?

Sidenote: how does the standard concept of reducibility look in a field? We have that any nonzero element is reducible, since it is invertible. This seems to match our general intuition of reducibility, at least in the case of fields, since we’d expect that every element of a field can be “divided further” and hence is reducible.

But for a UFA, we deal with the concept of type II reducibility. Say that a nonzero element x is reducible. Then, it is either invertible or it can be written as x = ab where a,b are non-invertible. If x is invertible, then we can write x = \left( x^{- 1} \right)\left( x^{2} \right). If x^{- 1} = x, i.e. x^{2} = 1, then this doesn’t yield strong reducibility; this is the case for example if x = 1 or x = - 1. But are 1 and -1 strong reducible?

If there exists an a \notin \left\{ 1, - 1 \right\} that is invertible, then we’d have a^{- 1} \notin \left\{ 1, - 1 \right\} too since inverses are unique and 1^{- 1} = 1 (and ( - 1)^{- 1} = - 1), and then we could write 1 = (a)\left( a^{- 1} \right) and - 1 = ( - a)\left( a^{- 1} \right), showing that 1 and - 1 are strong reducible. In particular, in a field larger than \mathbb{Z}_{3}, 1 and -1 are always strong reducible. (For \mathbb{Z}_{2} and \mathbb{Z}_{3}, 1 and -1 are strong irreducible, since we’d need 1 = ab where a,b \neq 1 and a,b \neq - 1, and in both fields that’s not possible.)

Is \mathbb{Z}_{2} a strong UFD? Can every nonzero element be factorized? Well, the only such element is 1, and taking n = 0 we can write 1 = 1 as the factorization. But actually, if we allow repeats of p_{i} (which we generally do, as that is par for the course in prime factorization in \mathbb{Z}), then the factorization isn’t unique, since we can always let n be as large as we want and let p_{i} = 1 for all i. We can do the same with \mathbb{Z}_{3}, since 1 is strong irreducible there too. Thus, in a trivial way, \mathbb{Z}_{2} and \mathbb{Z}_{3} can’t be strong UFDs. (Which we’d expect from fields, though these particular cases are more trivial and thus subvert intuition.)

In a field larger than \mathbb{Z}_{3}, this non-unique factorization extension isn’t possible since 1 and -1 are strong reducible. In fact, in a field larger than size 5, we have that any nonzero element is strong reducible, by x = (xa)\left( a^{- 1} \right). (Discounting 0, there would always be more than four elements, so it is always possible to pick a such that a^{- 1} \notin \left\{ 1, - 1,x, - x \right\}. Then, it follows that xa \notin \{ 1, - 1,x, - x\}, so that this reduction is valid.) Thus, in such a field, we could never have x = ( \pm 1)p_{1}\ldots p_{n} with n \geq 1 — we must have n = 0, so x must be \pm 1, so such a field can never be a strong UFD since there would exist x \notin \left\{ 1, - 1 \right\}.
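As a quick sanity check, here is a short brute-force sketch, taking \mathbb{Z}_{7} as a field larger than size 5 and reading “x is strong reducible” as x = ab with a,b \notin \left\{ 1, - 1,x, - x \right\}, as in the arguments above:

```python
# Z_7: verify that every nonzero x factors as x = a*b (mod 7)
# with both a and b outside {1, -1, x, -x}.
def strong_reducible_witness(x, p):
    excluded = {1 % p, (-1) % p, x % p, (-x) % p}
    for a in range(1, p):
        for b in range(1, p):
            if a * b % p == x % p and a not in excluded and b not in excluded:
                return (a, b)
    return None  # no witness: x is strong irreducible in Z_p

witnesses = {x: strong_reducible_witness(x, 7) for x in range(1, 7)}
print(witnesses)
```

The same helper applies to other small fields; for instance, `strong_reducible_witness(2, 5)` returns `None`, since in \mathbb{Z}_{5} the set \left\{ 1, - 1,2, - 2 \right\} already exhausts all nonzero elements.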

In the case of a field with four elements (which we know exists since four is a prime power), we have a characterization by the Wikipedia article on finite fields, as \left\{ 0,1,b,b + 1 \right\} (using b for what the article denotes as \alpha.) The characteristic is 2, so we have 1 = - 1, which is strong reducible. Are b and b + 1 strong reducible? We can write 1 + b = b^{2}, and since b \notin \left\{ 1 + b,1, - 1, - (1 + b) \right\} = \left\{ 1 + b,1 \right\} this shows that 1 + b is strong reducible. We have b = (1 + b)^{2} from the multiplication table in the article, and since 1 + b \notin \left\{ b,1, - 1, - b \right\} = \left\{ b,1 \right\} this shows that b is strong reducible. Thus, by the same concluding argument as for fields larger than size 5 (every nonzero element is strong reducible, so there are no strong irreducibles), a field with four elements can’t be a strong UFD.
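These identities can be double-checked by direct computation; a minimal sketch, representing an element of GF(4) as a pair (c_{0},c_{1}) meaning c_{0} + c_{1}b, with b a root of x^{2} + x + 1 over GF(2), so b^{2} = b + 1:

```python
# GF(4) multiplication: (a0 + a1*b)(b0 + b1*b), coefficients mod 2.
def gf4_mul(u, v):
    a0, a1 = u
    b0, b1 = v
    # Expand to a0*b0 + (a0*b1 + a1*b0)*b + a1*b1*b^2, then substitute b^2 = b + 1.
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0 + a1 * b1) % 2)

ONE, B, B1 = (1, 0), (0, 1), (1, 1)  # the elements 1, b, 1 + b
print(gf4_mul(B, B), gf4_mul(B1, B1), gf4_mul(B, B1))
```

This confirms b^{2} = 1 + b and (1 + b)^{2} = b; it also shows b(1 + b) = 1, exhibiting the inverse pair behind the strong reducibility of 1 = - 1.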

In the case of a field with five elements, we have that 1 and - 1 = 4 are strong reducible. However, 2 and 3 are strong irreducible: the possible products are 2 = (1)(2),(2)(1),(3)( - 1),( - 1)(3) and 3 = (1)(3),(2)( - 1),(3)(1),( - 1)(2). Thus, it is possible to factorize 2 = (1)(2) and 3 = (1)(3). Are these factorizations unique? No, they’re not, since we can also write 2 = ( - 1)(3),3 = ( - 1)(2). Thus, \mathbb{Z}_{5} isn’t a strong UFD.
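A quick enumeration confirms these product lists (writing - 1 = 4 and - 2 = 3 in \mathbb{Z}_{5}):

```python
# All ways to write 2 and 3 as a product of two nonzero elements of Z_5.
products = {x: sorted((a, b) for a in range(1, 5) for b in range(1, 5)
                      if a * b % 5 == x)
            for x in (2, 3)}
print(products)
```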

Together, these show that a field is never a strong UFD — which aligns with our intuition.

Can we show something even stronger? Can we show that an integral domain whose nonzero elements are all strongly reducible must be a field? This would align with the idea that invertibility is intuitively “the same as dividing all the way down.”

It remains to investigate this.

We can actually generalize the definition of a UFA beyond ring-like structures by treating signs without addition, as follows:

Definition. Let M be a commutative monoid. A sign on M is a function s:M \rightarrow M (in other words, a unary operation) satisfying:

  1. s(a)s(b) = ab
  2. s(ab) = as(b)

Note that commutativity implies s(ab) = s(a)b too, since s(ab) = s(ba) = bs(a) = s(a)b.

Definition. A UFA is a commutative monoid M with a sign s such that (denoting the identity by 1) every element x can be written as

\displaystyle x = p_{1}\ldots p_{n}

where the p_{i} are type II irreducible or 1 and the factorization is unique up to order of the p_{i}, even number of sign changes, and inclusion of 1.
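For example, in the signed commutative monoid \mathbb{Z} - \left\{ 0 \right\} with s(a) = - a, the factorizations

\displaystyle 6 = (2)(3) = ( - 2)( - 3) = (2)(3)(1)

are all identified: the second differs from the first by an even number (two) of sign changes, and the third by inclusion of 1.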

This is exactly equivalent to our previous definition in the case of a commutative ring:

Theorem. Let R be a commutative ring. Then, R is a UFA by the previous definition if and only if R - \left\{ 0 \right\} is a UFA by the current definition.

This is immediate when we write out the definitions. Henceforth, we will just use our definition here, which can apply to a much greater variety of structures.

This new definition also answers a question about whether we can express UFAs as free objects — we now very much can! We have that:

Definition. We will call a commutative monoid with a sign on it a signed commutative monoid. If the sign is s(a) = a, then we call this a trivially signed commutative monoid, while if the sign satisfies s(a) \neq a for all a, then we call this a strongly signed commutative monoid.

Trivially signed commutative monoids correspond to rings of characteristic 2 in the earlier definition of a UFA, and strongly signed commutative monoids correspond to all other rings.

Theorem. A UFA by the current definition is equivalent to a free strongly signed commutative monoid. A UFA by the previous definition for a ring R of characteristic 2 is equivalent to a free commutative monoid on the nonzero elements of R. A UFA by the previous definition for a ring R of characteristic not 2 is equivalent to a free strongly signed commutative monoid on the nonzero elements of R.

Since characteristic 2 / trivial sign seems to be a trivial edge case here, let’s ignore it in future discussions. Thus, we’ll define a sign on a commutative monoid to automatically include the requirement that s(a) \neq a for all a. (Hence, we’ll drop the “strongly” terminology.) With this terminology, a UFA is a free signed commutative monoid. We hence will drop the “UFA” terminology and speak instead of signed commutative monoids.

The fact that our definitions are so easily encompassed by free objects can yield some intuition for why the standard definition of a UFD works up to invertible elements (although that definition, too, can very likely be expressed by a free object): that terminology can serve a concept well where other terms may not “reach as far” so easily.

Also, this seems to heavily clarify our idea earlier about generalizing the relationship between \mathbb{Z}_{+} and \mathbb{Z} to what we then called type I and type II UFAs. Stated in our current language, we would want to say that:

Conjecture. If M is a free signed commutative monoid, then a certain construction M_{+} on M produces a free commutative monoid.

Conjecture. If M is a free commutative monoid, we can extend M to a free signed commutative monoid.

The second of these looks particularly low-hanging; let’s try to show it. Let M be a free commutative monoid. We define

\displaystyle \overline{M} = \left\{ (p,m):p \in \left\{ + , - \right\},m \in M \right\}.

We also define a multiplication on \overline{M} by

\displaystyle \left( p_{1},m_{1} \right) \times \left( p_{2},m_{2} \right) = \left( p_{1}p_{2},m_{1}m_{2} \right),

where “multiplication of signs” is defined by usual convention. It is then straightforward to see that \overline{M} is a commutative monoid with ( + ,1) as the identity (where 1 is the identity of M), and it is also straightforward to see that

\displaystyle s(p,m) = \left( p^{'},m \right),

where p \rightarrow p^{'} “flips the sign” in the usual manner, turns \overline{M} into a free signed commutative monoid. (In general, even if M isn’t free, we can perform this construction to yield \overline{M} as a signed commutative monoid, where s(a) \neq a in particular is satisfied by construction.)
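To make the construction concrete, here is a sketch in Python (the representation is ours: an element of the free commutative monoid M is a multiset of generator symbols, and an element of \overline{M} is a pair of a sign and such a multiset):

```python
from collections import Counter

# An element of M-bar: (sign, word), where sign True means "+" and
# word is a frozenset of (generator, multiplicity) pairs, i.e. a multiset.
def elem(sign, gens):
    return (sign, frozenset(Counter(gens).items()))

def mul(u, v):
    (p1, w1), (p2, w2) = u, v
    # Signs multiply by the usual convention: like signs give "+",
    # unlike signs give "-"; the words are concatenated (multisets added).
    return (p1 == p2, frozenset((Counter(dict(w1)) + Counter(dict(w2))).items()))

def s(u):
    # The sign map flips the sign and keeps the word.
    p, w = u
    return (not p, w)

IDENTITY = elem(True, "")
```

On sample elements one can then check the expected properties: s(s(x)) = x, s(x) \neq x, s(xy) = xs(y), and s(x)s(y) = xy.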

Similarly, we can go the other way: if M is a signed commutative monoid, then it is very likely that we can construct a subset M^{'} \subseteq M that is a commutative monoid (so essentially closed with respect to multiplication and containing the identity) by taking each pair \left\{ a,s(a) \right\} and picking one element of it, although we need to be careful and deliberate about it. Then, if M is free, then M^{'} will be free too (but note that we need M to be free with respect to the sign, and M^{'} would only be free with respect to multiplication.)

It is pertinent to note that generalizations of the Fundamental Theorem of Arithmetic to settings other than UFDs, as we have done here, have been explored in the literature, for example notably in abstract analytic number theory with arithmetic semigroups (which in fact are commutative monoids, as we have been working with here.)

It would be interesting to me to see where else signs on commutative monoids come up and how we can better study their theory!

Actually, we didn’t write out all the properties of signs that we’d probably need to require. Let’s write this out now for the sake of completeness and reference:

Definition. Let M be a commutative monoid. A sign on M is a unary operation s:M \rightarrow M satisfying the following conditions:

  1. s(a) \neq a for all a
  2. s\left( s(a) \right) = a for all a
  3. s(ab) = as(b) for all a,b

Can we show s(a)s(b) = ab as a consequence of these fundamental properties? It seems like we can; let’s do this:

\displaystyle s(a)s(b) = s\left( s(a)b \right) = s\left( bs(a) \right) = bs\left( s(a) \right) = ba = ab.

Can we show that these properties are independent? One way to do this (that I’ve used before in other explorations) is to exhibit a new example for each truth-value assignment to the properties (so 2^{3} = 8 arrangements.)

Clearly, (2) doesn’t imply (1), as to satisfy (2) we could just have s(a) = a. In fact, similarly, (2) and (3) don’t imply (1). But actually, could (1) imply something about (2) or (3) especially for a small enough M? For example, if M has two elements, then (1) has to imply (2), since the sign would need to send each element to the other.

Would (1) imply (3) too in the case of M with two elements? Say M = \left\{ a,b \right\}. (1) implies s(a) = b,s(b) = a. Then, the multiplicative identity is what could imply stuff here: one of a,b must be the multiplicative identity. WLOG assume a is the identity, and let’s check this systematically.

We have s\left( a^{2} \right) = s(a) = b and as(a) = ab = b, so s\left( a^{2} \right) = as(a). We have s(ab) = s(b) = a and as(b) = a^{2} = a, so s(ab) = as(b). Since M is commutative, we must also have that s(ba) = s(ab) = a. But bs(a) = b^{2}; can b^{2} = b? In that case, we’d have s(ba) \neq bs(a). We have s\left( b^{2} \right) = s(b) = a if b^{2} = b, and s\left( b^{2} \right) = s(a) = b otherwise; we have bs(b) = ba = b. Thus, if b^{2} = a, then s satisfies (3), otherwise it does not.

Can we have b^{2} = b and M still be a commutative monoid? This is what the multiplication table would look like:

\displaystyle aa = a,

\displaystyle ab = b,

\displaystyle ba = b,

\displaystyle bb = b.

It’s certainly commutative and has identity a; does it satisfy associativity?

\displaystyle (aa)a = aa = a,a(aa) = aa = a

\displaystyle (aa)b = ab = b,a(ab) = ab = b

\displaystyle (ab)a = ba = b,a(ba) = ab = b

\displaystyle (ab)b = bb = b,a(bb) = ab = b

\displaystyle (ba)a = ba = b,b(aa) = ba = b

Actually, to summarize, every expression that involves b will evaluate to b, and every other expression (every expression involving only a) will evaluate to a. So that satisfies associativity, hence this is a valid commutative monoid.
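This two-element check is easy to mechanize; a quick sketch:

```python
from itertools import product

# The monoid {a, b} with identity a and b*b = b, plus the swap s.
mul = {("a", "a"): "a", ("a", "b"): "b", ("b", "a"): "b", ("b", "b"): "b"}
s = {"a": "b", "b": "a"}  # the only function satisfying (1) on two elements

# Associativity: (xy)z == x(yz) for all triples.
assoc = all(mul[mul[x, y], z] == mul[x, mul[y, z]]
            for x, y, z in product("ab", repeat=3))

# Property (3): s(xy) == x*s(y) for all pairs.
prop3 = all(s[mul[x, y]] == mul[x, s[y]] for x, y in product("ab", repeat=2))

print(assoc, prop3)
```

As argued above, associativity holds but (3) fails (already at s(ba) versus bs(a)).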

OK, so even for just 2 elements, (1) doesn’t imply (3). And clearly, because of (1), a sign can’t exist on any smaller monoid (a one-element monoid.) Do (1) and (2) together imply (3)? No: on a two-element monoid, (1) implies (2), and we just constructed a two-element case where (3) fails.

Thus, (2) and (3) don’t imply (1), and (1) and (2) don’t imply (3). What about (1) and (3)? Do they imply (2)?

If we want to construct a counterexample, we now have to look outside of two-element monoids, since there (1) automatically yields (2). What about three-element monoids? Can we construct an s on \left\{ 1,2,3 \right\} satisfying (1) and (3) but not (2)?

Incidentally, it seems like on a three-element monoid it’s impossible to have (1) and (2) both be satisfied. Is that true? Let’s try to prove it. In fact, I conjecture that any finite signed commutative monoid must have an even number of elements.

Say we have a finite signed commutative monoid M. The properties s(a) \neq a and s\left( s(a) \right) = a imply two-element cycles. Specifically, if we form

\displaystyle \left\{ \left\{ a,s(a) \right\}:a \in M \right\},

then this will be a partition of M. Then, the fact that s(a) \neq a will imply that each subset in the partition has two elements, so M must have even order.

To show this is a partition: first, it is clear that the union forms all of M. Now, let’s say that \left\{ a,s(a) \right\} and \left\{ b,s(b) \right\} have non-empty intersection. Then, we must have either a = b, in which case both sets are the same, or a = s(b), in which case \left\{ a,s(a) \right\} = \left\{ s(b),s\left( s(b) \right) \right\} = \left\{ b,s(b) \right\}, so both sets are again the same. So this is a partition, and we are done.

So actually, we’ve shown that no function satisfying (1) and (2) can exist on an odd-order set. If we can exhibit a function satisfying (1) and (3) on an odd-order set, then we automatically get that (1) and (3) can’t imply (2), as desired.

So let’s try achieving (1) and (3) on three elements. We need s(a) \neq a and s(ab) = as(b). By commutativity, this means as(b) = s(a)b = s(ab).

First, we note that (3) probably means that the multiplication table is entirely determined by s; the question is whether the commutative monoid properties are satisfied.

Let’s assume that 1 is the identity. Then, s(a \times 1) = as(1) = s(a). Thus, s(a) = s(1)a. We also have s\left( a^{2} \right) = as(a) = a\left( s(1)a \right) = s(1)a^{2} … but we already knew that.

We have 2 \neq s(2) = s(1) \times 2, and similarly 3 \neq s(1) \times 3. We have s(1) \neq 1 so either s(1) = 2 or s(1) = 3. If s(1) = 2 then we yield 2 \neq 2 \times 2 and 3 \neq 2 \times 3, otherwise we yield 2 \neq 2 \times 3 and 3 \neq 3 \times 3.

In general, ab \neq s(ab) = as(b). In fact, can we use this to show cancellativity? Let’s say ab = ac. We know ac \neq s(ab) = as(b), thus c \neq s(b) and similarly b \neq s(c). Assume b \neq c. Since s(b) \neq b and s(b) \neq c, we must have s(b) be the other element. Similarly, we must have s(c) be this other element. Thus, s(b) = s(c). To summarize: if ab = ac with b \neq c, then s(b) and s(c) are the element other than b or c.
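Rather than continuing only by hand, we can also brute-force the three-element case: enumerate every commutative monoid structure on \left\{ 0,1,2 \right\} (taking 0 as the identity) together with every unary operation s, and collect the pairs satisfying (1) and (3) but not (2). A sketch:

```python
from itertools import product

E = (0, 1, 2)

def monoids():
    # All commutative, associative tables on {0,1,2} with 0 as identity;
    # only the products 1*1, 1*2, 2*2 are free to choose.
    for v11, v12, v22 in product(E, repeat=3):
        mul = {(0, 0): 0, (0, 1): 1, (0, 2): 2, (1, 0): 1, (2, 0): 2,
               (1, 1): v11, (1, 2): v12, (2, 1): v12, (2, 2): v22}
        if all(mul[mul[x, y], z] == mul[x, mul[y, z]]
               for x, y, z in product(E, repeat=3)):
            yield mul

# Collect (table, s) pairs satisfying (1) and (3) but not (2).
examples = []
for mul in monoids():
    for s in product(E, repeat=3):  # s[x] is the image of x
        p1 = all(s[x] != x for x in E)
        p2 = all(s[s[x]] == x for x in E)
        p3 = all(s[mul[x, y]] == mul[x, s[y]] for x, y in product(E, repeat=2))
        if p1 and p3 and not p2:
            examples.append((mul, s))

print(len(examples), examples[:1])
```

If this list turns out non-empty, we immediately get an odd-order example showing that (1) and (3) can’t imply (2).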

It remains to study this further.

Let’s now deduce some properties of signs. (2) implies that s is injective: s(a) = s(b) \rightarrow s\left( s(a) \right) = s\left( s(b) \right) \rightarrow a = b. Is it surjective? Say b \in M; must there exist a such that s(a) = b? Clearly, if M is finite then this is the case (injections from a finite set to itself are bijections), but is that generally true? Well, consider s(M) = \left\{ s(m):m \in M \right\}. Now, assume b \notin s(M). Then, s(b) \notin s\left( s(M) \right) since s is injective, but s\left( s(M) \right) = M, so s(b) \notin M, which is impossible. (Even more directly: b = s\left( s(b) \right), so b \in s(M).) Thus, s must be surjective. Hence, property (2) by itself actually implies that s is a bijection, even when M is infinite.

Property (1) then shows that s is actually a derangement: a permutation that doesn’t fix anything. In fact, as discussed earlier, s can be decomposed into disjoint cycles.

Property (3) then places further constraints on s. We have

\displaystyle s\left( as(a) \right) = as\left( s(a) \right) = a^{2},

\displaystyle s\left( s(a)a \right) = s(a)s(a) = s(a)^{2},

\displaystyle s\left( a^{2} \right) = as(a),

\displaystyle s\left( s(a)^{2} \right) = s(a)s\left( s(a) \right) = as(a).

In particular, for any a,

\displaystyle a^{2} = s(a)^{2} = s\left( as(a) \right).
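As a sanity check, all of these identities hold in the familiar example \mathbb{Z} - \left\{ 0 \right\} with s(a) = - a:

```python
# Check the displayed identities for s(a) = -a on sample nonzero integers.
def s(a):
    return -a

for a in [2, -3, 7]:
    assert s(a * s(a)) == a * s(s(a)) == a * a        # s(a s(a)) = a^2
    assert s(s(a) * a) == s(a) * s(a)                 # s(s(a) a) = s(a)^2
    assert s(a * a) == a * s(a)                       # s(a^2) = a s(a)
    assert s(s(a) * s(a)) == a * s(a)                 # s(s(a)^2) = a s(a)
    assert a * a == s(a) * s(a) == s(a * s(a))        # a^2 = s(a)^2 = s(a s(a))
print("ok")
```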

I suspect that property (3) would have strong consequences for the form of s and the multiplication table of M. It remains to study this further.

Can we show: we have a^{2} = b^{2} with a \neq b if and only if a = s(b)?

We have one direction: a^{2} = s(a)^{2} as shown above. For the other, say that there was some c such that a^{2} = c^{2} and c \neq a, c \neq s(a). We’d also have a^{2} = s(c)^{2}, and s(c) \neq a,s(c) \neq s(a),s(c) \neq c. Now, we compute:

\displaystyle s\left( a^{2} \right) = as(a) = s\left( c^{2} \right) = cs(c)

\displaystyle s(ac) = as(c) = cs(a)

\displaystyle s\left( as(c) \right) = s(a)s(c) = s\left( s(c)a \right) = s\left( s(c) \right)a = ca

Now, a \neq c so a^{2} \neq ac (assuming for the moment that M is cancellative, which we have not yet established in general), hence cs(a) \neq as(a) = cs(c). But also a \neq s(c) so a^{2} \neq as(c), so as(a) = cs(c) \neq ca. (These two inequalities and the equality as(a) = cs(c) incorporate all the given information.) So we have three values that are proven different from each other: cs(a), ca, and as(a) = cs(c). Can we yield some kind of contradiction from these?

\displaystyle s(c)a \neq c^{2}\ldots

It remains to study this.