We continue the discussion started in this post.
We noted earlier that we wouldn’t expect fields to satisfy our definitions. It is clear that a field cannot be a free commutative monoid, since the words in such a monoid would not contain inverses of variables. Can we show that a field cannot be a UFA?
Sidenote: how does the standard concept of reducibility look in a field? We have that any nonzero element is reducible, since it is invertible. This seems to match our general intuition of reducibility, at least in the case of fields, since we'd expect that every element of a field can be "divided further" and is hence reducible.
But for a UFA, we deal with the concept of type II reducibility. We noted before that if there doesn't exist an invertible element other than 1 and -1, then 1 and -1 are type II irreducible. Now, say that a nonzero element $x$ is reducible. Then, it is either invertible or it can be written as $x = ab$ where $a, b$ are non-invertible. If $x$ is invertible, then we can write $x = a(a^{-1}x)$ for any nonzero $a$. If $a \in \{1, -1\}$, then this doesn't yield strong reducibility, so we need for example $a \neq 1$ and $a \neq -1$. But are 1 and -1 strong reducible?
If there exists an $x \notin \{1, -1\}$ that is invertible, then we'd have $x^{-1} \notin \{1, -1\}$ too since inverses are unique and $1^{-1} = 1$ (and $(-1)^{-1} = -1$), and then we could write $1 = x \cdot x^{-1}$ and $-1 = x \cdot (-x^{-1})$, showing that 1 and -1 are strong reducible. In particular, in a field larger than $\mathbb{F}_3$, 1 and -1 are always strong reducible. (For $\mathbb{F}_2$ and $\mathbb{F}_3$, 1 and -1 are strong irreducible, since we'd need $1 = ab$ or $-1 = ab$ where $a \notin \{1, -1\}$ and $b \notin \{1, -1\}$, and in both fields that's not possible.)
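As a quick check of this, here's a brute-force test of whether 1 and -1 are strong reducible in small prime fields (a minimal sketch; I'm reading "strong reducible" as "expressible as a product of two elements, neither of which is 1 or -1," and the helper name is mine):

```python
# Test whether x is "strong reducible" in F_p: writable as x = a*b (mod p)
# with neither a nor b in {1, -1}.
def strong_reducible(x, p):
    units = {1 % p, (p - 1) % p}  # the elements 1 and -1 of F_p
    return any(
        (a * b) % p == x % p
        for a in range(1, p)
        for b in range(1, p)
        if a not in units and b not in units
    )

for p in [2, 3, 5, 7, 11]:
    print(p, strong_reducible(1, p), strong_reducible(-1, p))
# Prints False, False for p = 2 and 3, and True, True for p = 5, 7, 11.
```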
Is $\mathbb{F}_2$ a strong UFD? Can every nonzero element have a factorization? Well, the only such element is 1, and since 1 is strong irreducible, we can write $1 = 1$ as the factorization. But actually, if we allow repeats of 1 (which we generally do, as that is par for the course in prime factorization in $\mathbb{Z}$), then the factorization isn't unique, since we can always let $n$ be as large as we want and let $p_i = 1$ for all $i$ in $1 = p_1 \cdots p_n$. We can do the same with $\mathbb{F}_3$, since 1 is strong irreducible there too. Thus, in a trivial way, $\mathbb{F}_2$ and $\mathbb{F}_3$ can't be strong UFDs. (Which we'd expect from fields, though these particular cases are more trivial and thus subvert intuition.)
In a field larger than $\mathbb{F}_3$, this non-unique factorization extension isn't possible since 1 and -1 are strong reducible. In fact, in a field larger than size 5, we have that any nonzero element $x$ is strong reducible, by $x = a(a^{-1}x)$. (Discounting 0, there would always be more than four elements, so it is always possible to pick $a \notin \{1, -1, x, -x\}$. Then, it follows that $a^{-1}x \notin \{1, -1\}$, so that this reduction is valid.) Thus, in such a field, we could never have $x = p_1 \cdots p_n$ with the $p_i$ strong irreducible — we must have each $p_i$ be 1, so $x$ must be 1, so such a field can never be a strong UFD since there would exist $x \neq 1$ with no factorization.
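We can check this constructive step directly as well: for each nonzero $x$ in $\mathbb{F}_p$ with $p > 5$, pick any $a \notin \{1, -1, x, -x\}$ and confirm that the factorization $x = a \cdot (a^{-1}x)$ avoids 1 and -1 (a sketch under the same reading of strong reducibility as before):

```python
# For each nonzero x in F_p (p > 5), pick a witness a outside {1, -1, x, -x}
# and verify that x = a * (a^{-1} x) with both factors outside {1, -1}.
def check_field(p):
    for x in range(1, p):
        forbidden = {1, p - 1, x, p - x}
        a = next(c for c in range(1, p) if c not in forbidden)
        b = (pow(a, -1, p) * x) % p  # a^{-1} x mod p
        assert (a * b) % p == x and b not in {1, p - 1}
    print(f"every nonzero element of F_{p} is strong reducible")

for p in [7, 11, 13]:
    check_field(p)
```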
In the case of a field with four elements (which we know exists since four is a prime power), we have a characterization by Finite field – Wikipedia, as $\{0, 1, a, b\}$ (using $b$ for what the article denotes as $1 + a$.) The characteristic is 2, so we have $-1 = 1 = ab$, which is strong reducible. Are $a$ and $b$ strong reducible? We can write $a = b \cdot b$, and since $b \notin \{1, -1\}$ this shows that $a$ is strong reducible. We have $b = a \cdot a$ from the multiplication table in the article, and since $a \notin \{1, -1\}$ this shows that $b$ is strong reducible. Thus, from the same argument earlier for fields larger than size 5, a field with four elements can't be a strong UFD.
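As a sanity check on these products, we can build the four-element field as polynomials over $\mathbb{F}_2$ modulo $x^2 + x + 1$ (a standard construction; here $a$ stands for the class of $x$ and $b$ for $x + 1$, which is my guess at matching the article's notation):

```python
# GF(4) as pairs (c0, c1) representing c0 + c1*x over F_2, multiplied
# modulo x^2 + x + 1, so that x^2 reduces to x + 1.
def mul(u, v):
    (a0, a1), (b0, b1) = u, v
    c0 = a0 * b0            # constant term
    c1 = a0 * b1 + a1 * b0  # x term
    c2 = a1 * b1            # x^2 term, which reduces to x + 1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

one, a, b = (1, 0), (0, 1), (1, 1)
print(mul(a, b) == one)  # 1 = ab
print(mul(b, b) == a)    # a = b^2
print(mul(a, a) == b)    # b = a^2
```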
In the case of a field with five elements, we have that 1 and $-1 = 4$ are strong reducible. However, 2 and 3 are strong irreducible: the possible products avoiding 1 and -1 are $2 \cdot 2 = 4$, $2 \cdot 3 = 1$, and $3 \cdot 3 = 4$, none of which equal 2 or 3. Thus, it is possible to factorize $1 = 2 \cdot 3$ and $4 = 2 \cdot 2$. Are these factorizations unique? No, they're not, since we can also write $4 = 3 \cdot 3$. Thus, $\mathbb{F}_5$ isn't a strong UFD.
Together, these show that a field is never a strong UFD — which aligns with our intuition.
Can we show something even stronger? Can we show that an integral domain whose nonzero elements are all strongly reducible must be a field? This would align with the idea that invertibility is intuitively “the same as dividing all the way down.”
It remains to investigate this.
—
We can actually generalize the definition of a UFA beyond ring-like structures by treating signs without addition, as follows:

Definition. Let $M$ be a commutative monoid. A sign on $M$ is a function $x \mapsto -x$ (in other words, a unary operation) satisfying $-(-x) = x$ and $(-x)y = -(xy)$ for all $x, y \in M$.

Note that commutativity implies $x(-y) = -(xy)$ too, since $x(-y) = (-y)x = -(yx) = -(xy)$.
Definition. A UFA is a commutative monoid $M$ with a sign such that (denoting the identity by 1) every element $x \in M$ can be written as $x = \pm p_1 \cdots p_n$ where the $p_i$ are type II irreducible or 1 and the factorization is unique up to order of the $p_i$, even number of sign changes, and inclusion of 1.
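As a concrete touchstone, the nonzero integers under multiplication should form a UFA in this sense, with the primes as the type II irreducibles (that is my reading of the definition); here is a minimal sketch computing the canonical form of the factorization:

```python
# Factor a nonzero integer as a sign together with a sorted list of primes,
# which is the canonical form promised by the UFA definition.
def factor(x):
    sign, n, primes = (1 if x > 0 else -1), abs(x), []
    d = 2
    while d * d <= n:
        while n % d == 0:
            primes.append(d)
            n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return sign, primes  # trial division already yields sorted factors

print(factor(-60))  # (-1, [2, 2, 3, 5])
print(factor(60))   # (1, [2, 2, 3, 5]): same irreducibles, opposite sign
```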
This is exactly equivalent to our previous definition in the case of a commutative ring:

Theorem. Let $R$ be a commutative ring. Then, $R$ is a UFA by the previous definition if and only if $R$ (viewed as a commutative monoid under multiplication, with the sign $x \mapsto -x$) is a UFA by the current definition.

This is immediate when we write out the definitions. Henceforth, we will just use our definition here, which can apply to a much greater variety of structures.
This new definition also answers a question about whether we can express UFAs as free objects — we now very much can! We have that:
Definition. We will call a commutative monoid with a sign on it a signed commutative monoid. If the sign is the identity map ($-x = x$ for all $x$), then we call this a trivially signed commutative monoid, while if the sign satisfies $-x \neq x$ for all $x$, then we call this a strongly signed commutative monoid.
Trivially signed commutative monoids correspond to rings of characteristic 2 in the earlier definition of a UFA, and strongly signed commutative monoids correspond to all other rings.
Theorem. A UFA by the current definition is equivalent to a free strongly signed commutative monoid. A UFA by the previous definition for a ring of characteristic 2 is equivalent to a free commutative monoid structure on the nonzero elements of $R$. A UFA by the previous definition for a ring $R$ of characteristic not 2 is equivalent to a free strongly signed commutative monoid structure on the nonzero elements of $R$.
Since characteristic 2 / trivial sign seems to be a trivial edge case here, let's ignore it in future discussions. Thus, we'll define a sign on a commutative monoid to automatically include the requirement that $-x \neq x$ for all $x$. (Hence, we'll drop the "strongly" terminology.) With this terminology, a UFA is a free signed commutative monoid. We hence will drop the "UFA" terminology and speak instead of signed commutative monoids.
The fact that our definitions are so easily encompassed by free objects can yield some intuition for why the standard definition of a UFD is stated up to invertible elements (although that too can very likely be expressed by a free object): the language of free objects can serve well for a concept where other formulations may not "reach as far" so easily.
Also, this seems to heavily clarify our idea earlier about generalizing the relationship between $\mathbb{N}$ and $\mathbb{Z}$ to what we then called type I and type II UFAs. Stated in our current language, we would want to say that:
Conjecture. If $M$ is a free signed commutative monoid, then a certain construction $M^+$ on $M$ produces a free commutative monoid.

Conjecture. If $M$ is a free commutative monoid, we can extend $M$ to a free signed commutative monoid.
The second of these looks particularly low-hanging; let's try to show it. Let $M$ be a free commutative monoid. We define

$\pm M = \{(s, x) : s \in \{+, -\},\ x \in M\}.$

We also define a multiplication on $\pm M$ by

$(s, x)(t, y) = (st, xy),$

where "multiplication of signs" is defined by usual convention ($++ = -- = +$ and $+- = -+ = -$). It is then straightforward to see that $\pm M$ is a commutative monoid with $(+, 1)$ as the identity, and it is also straightforward to see that the operation $-(s, x) = (-s, x)$, which "flips the sign" in the usual manner, turns $\pm M$ into a free signed commutative monoid. (In general, even if $M$ isn't free, we can perform this construction to yield $\pm M$ as a signed commutative monoid, where $-x \neq x$ in particular is satisfied by construction.)
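Here's a small sketch of the $\pm M$ construction in code, taking $M$ to be the positive integers under multiplication (a free commutative monoid on the primes), so that $\pm M$ behaves like the nonzero integers:

```python
# Elements of ±M are pairs (sign, m) with sign in {+1, -1}, multiplied
# componentwise; the sign operation flips the first coordinate.
def mul(u, v):
    (s, x), (t, y) = u, v
    return (s * t, x * y)

def neg(u):
    s, x = u
    return (-s, x)

identity = (1, 1)  # the pair (+, 1)

# Spot-check the monoid identity and the sign axioms on sample elements.
samples = [(s, m) for s in (1, -1) for m in (1, 2, 3, 6, 10)]
for u in samples:
    assert mul(identity, u) == u
    assert neg(u) != u        # (1): -u != u, by construction
    assert neg(neg(u)) == u   # (2): -(-u) = u
    for v in samples:
        assert mul(neg(u), v) == neg(mul(u, v))  # (3): (-u)v = -(uv)
print("±M satisfies the sign axioms on all samples")
```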
Similarly, we can go the other way: if $M$ is a signed commutative monoid, then it is very likely that we can construct a subset $M^+ \subseteq M$ that is a commutative monoid (so essentially closed with respect to multiplication and identity) by taking the pairs $\{x, -x\}$ and picking one element from each, although we need to be careful and deliberate about it. Then, if $M$ is free, then $M^+$ will be free too (but note that we need $M$ to be free with respect to the sign, and $M^+$ would only be free with respect to multiplication.)
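A sketch of this extraction on the same example: among the nonzero integers, picking the positive representative from each pair $\{x, -x\}$ gives a set closed under multiplication, while a careless pick generally doesn't — which is exactly the care the construction of $M^+$ requires:

```python
# Pick one representative from each pair {x, -x} of nonzero integers and
# check closure under multiplication (testing only products in range).
def closed(reps, bound):
    return all(
        x * y in reps
        for x in reps for y in reps
        if abs(x * y) <= bound
    )

bound = 100
positives = set(range(1, bound + 1))
print(closed(positives, bound))  # True: the positive choice is a submonoid

careless = set(positives)
careless.discard(6)
careless.add(-6)  # choose -6 instead of 6 from the pair {6, -6}
print(closed(careless, bound))  # False: 2 * 3 = 6 is no longer in the set
```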
—
It is pertinent to note that generalizations of the Fundamental Theorem of Arithmetic to settings other than UFDs, as we have done here, have been explored in the literature, for example notably in abstract analytic number theory with arithmetic semigroups (which in fact are commutative monoids, as we have been working with here.)
It would be interesting to me to see where else signs on commutative monoids come up and how we can better study their theory!
Actually, we didn’t write out all the properties of signs that we’d probably need to require. Let’s write this out now for the sake of completeness and reference:
Definition. Let be a commutative monoid. A sign on
is a unary operation
satisfying the following conditions:
for all
for all
for all
Can we show $x(-y) = -(xy)$ as a consequence of these fundamental properties? It seems like we can, let's do this: $x(-y) = (-y)x = -(yx) = -(xy)$, using commutativity, then (3), then commutativity again.
Can we show that these properties are independent? One way to do this (that I've used before in other explorations) is to exhibit a new example for each truth-value assignment to the properties (so $2^3 = 8$ arrangements.)
Clearly, (2) doesn’t imply (1), as to satisfy (2) we could just have . In fact, similarly, (2) and (3) don’t imply (1). But actually, could (1) imply something about (2) or (3) especially for a small enough
? For example, if
has two elements, then (1) has to imply (2), since the sign would need to send each element to the other. Would (1) imply (3) too in the case of
with two elements? Say
. (1) implies
. Then, the multiplicative identity is what could imply stuff here: one of
must be the multiplicative identity. WLOG assume
is the identity, and let’s check this systematically. We have
and
, so
. We have
and
, so
. Since
is commutative, we must also have that
. But
; can
? In that case, we’d have
. We have
if
, and
otherwise; we have
. Thus, if
, then
satisfies (3), otherwise it does not. Can we have
and
still be a commutative monoid? This is what the multiplication table would look like:
It’s certainly commutative and has identity ; does it satisfy associativity?
Actually, to summarize, every expression that involves will evaluate to
, and every other expression (every expression involving only
) will evaluate to
. So that satisfies associativity, hence this is a valid commutative monoid.
OK, so even for just 2 elements, (1) doesn’t imply (3). And clearly because of (1), a sign can’t exist on any smaller monoid (a one-element monoid.) Do (1) and (2) together imply (3)? Well, in this case no, since (1) implies (2), but we can construct a case without (3).
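A brute-force confirmation of the two-element analysis (a sketch; it encodes $a$ as 0 and $b$ as 1, fixes $a$ as the identity, tries both values of $bb$, and tests every candidate sign against properties (1)-(3)):

```python
# Enumerate the two commutative monoids on {a, b} with identity a (they
# differ in the value of b*b, and both are associative) and all four
# unary maps, reporting which of (1), (2), (3) hold.
from itertools import product

def test(bb, sign):
    def mul(x, y):
        if x == 0: return y
        if y == 0: return x
        return bb  # the only remaining product is b*b
    p1 = all(sign[x] != x for x in (0, 1))
    p2 = all(sign[sign[x]] == x for x in (0, 1))
    p3 = all(mul(sign[x], y) == sign[mul(x, y)]
             for x, y in product((0, 1), repeat=2))
    return p1, p2, p3

for bb in (0, 1):
    for sign in ({0: 1, 1: 0}, {0: 0, 1: 1}, {0: 1, 1: 1}, {0: 0, 1: 0}):
        print(f"b*b = {'ab'[bb]}, sign = {sign}: {test(bb, sign)}")
# With the swap sign, b*b = a gives (True, True, True) while b*b = b
# gives (True, True, False): properties (1) and (2) hold but (3) fails.
```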
Thus, (2) and (3) don’t imply (1), and (1) and (2) don’t imply (3). What about (1) and (3)? Do they imply (2)?
If we want to construct a counterexample, we now have to look outside of two-element monoids, since there (1) automatically yields (2). What about three-element monoids? Can we construct an operation $-$ on $M$ satisfying (1) and (3) but not (2)?
Incidentally, it seems like on a three-element monoid it’s impossible to have (1) and (2) both be satisfied. Is that true? Let’s try to prove it. In fact, I conjecture that any finite signed commutative monoid must have an even number of elements.
Say we have a signed commutative monoid $M$ of $n$ elements. The properties $-(-x) = x$ and $-x \neq x$ imply two-element cycles. Specifically, if we form $\{\{x, -x\} : x \in M\}$, then this will be a partition of $M$. Then, the fact that $-x \neq x$ will imply that each subset in the partition has two elements, so $M$ must have even order.

To show this is a partition: first, it is clear that the union forms all of $M$. Now, let's say that $\{x, -x\}$ and $\{y, -y\}$ have non-empty intersection. Then, we must have either $x = y$, in which case both sets are the same, or $x = -y$, in which case $-x = -(-y) = y$, so both sets are again the same. So this is a partition, and we are done.
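Since properties (1) and (2) only involve the map itself, we can confirm the parity claim by counting fixed-point-free involutions on small sets (a minimal sketch):

```python
# Count maps on an n-element set satisfying (1) (no fixed points) and
# (2) (involution); such maps exist exactly when n is even.
from itertools import product

for n in range(1, 7):
    count = sum(
        1
        for f in product(range(n), repeat=n)
        if all(f[x] != x for x in range(n))
        and all(f[f[x]] == x for x in range(n))
    )
    print(n, count)
# Prints 0 for odd n and 1, 3, 15 for n = 2, 4, 6.
```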
So actually, we’ve shown that no function satisfying (1) and (2) can exist on an even-order set. If we can exhibit a function satisfying (1) and (3) on an odd-order set, then we automatically get that (1) and (3) can’t imply (2), as desired.
So let’s try achieving (1) and (3) on three elements. We need and
. By commutativity, this means
.
First, we note that (3) probably means that the multiplication table is entirely determined by ; the question is whether the commutative monoid properties are satisfied.
Let’s assume that 1 is the identity. Then, . Thus,
. We also have
… but we already knew that.
We have $-1 \neq 1$, so WLOG $-1 = a$ (relabeling if necessary), and similarly $ay = (-1)y = -y$ for all $y$, so that $aa = -a$ and $ab = -b$. We have $-a \neq a$ so either $-a = 1$ or $-a = b$. If $-a = 1$ then we yield $aa = 1$ and $(-a)y = y = -(ay) = -(-y)$, which is exactly property (2), otherwise we yield $-a = b$, so that $aa = b$ and $by = (-a)y = -(-y)$.
In general, $x(-y) = (-x)y = -(xy)$. In fact, can we use this to show cancellativity? Let's say $xy = xz$. We know $ay = -y$, thus $x = a$ yields $-y = -z$, and similarly $x = b$ yields $-(-y) = -(-z)$. Assume $y \neq z$, so that $x \neq 1$; take $x = a$, say. Since $-y = -z$ and $-y \neq y$, we must have $-y$ be the other element (the one distinct from both $y$ and $z$). Similarly, we must have $-z$ be this other element. Thus, $-y = -z$ is the third element of $M$. To summarize: if $xy = xz$ with $y \neq z$, then $-y$ and $-z$ are the element other than $y$ or $z$.
It remains to study this further.
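We can also let a computer search all commutative monoid structures on three elements for sign candidates satisfying (1) and (3) but not (2) (a sketch that encodes the identity as 0):

```python
# Search three-element commutative monoids (0 = identity) and all unary
# maps for signs satisfying (1) and (3) but not (2).
from itertools import product

def monoids():
    # Only 1*1, 1*2, and 2*2 are free once 0 is the identity.
    for p, q, r in product(range(3), repeat=3):
        t = {(0, 0): 0, (0, 1): 1, (0, 2): 2, (1, 0): 1, (2, 0): 2,
             (1, 1): p, (1, 2): q, (2, 1): q, (2, 2): r}
        if all(t[t[x, y], z] == t[x, t[y, z]]
               for x, y, z in product(range(3), repeat=3)):
            yield t  # associative, commutative, with identity 0

for t in monoids():
    for sign in product(range(3), repeat=3):
        p1 = all(sign[x] != x for x in range(3))
        p2 = all(sign[sign[x]] == x for x in range(3))
        p3 = all(t[sign[x], y] == sign[t[x, y]]
                 for x, y in product(range(3), repeat=2))
        if p1 and p3 and not p2:
            print("products 1*1, 1*2, 2*2:",
                  t[1, 1], t[1, 2], t[2, 2], "sign:", sign)
```

Among the hits is the cyclic group of order 3 with the sign given by multiplication by a generator (matching the branch derived above), and also a non-cancellative example, which is relevant to the cancellativity question.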
—
Let’s now deduce some properties of signs. (2) implies that is injective:
. Is it surjective? Say
; must there exist
such that
? Clearly, if
is finite then this is the case (injections from a finite set to itself are bijections), but is that generally true? Well, consider
. Now, assume
. Then,
since
is injective, but
, which is impossible. Thus,
must be surjective. Hence, property (2) by itself actually implies that
is a bijection, even when
is infinite.
Property (1) then shows that $-$ is actually a derangement: a permutation that doesn't fix anything. In fact, as discussed earlier, $-$ can be decomposed into disjoint two-element cycles.

Property (3) then places further constraints on $-$. We have

$(-x)y = x(-y) = -(xy)$ and $(-x)(-y) = -(-(xy)) = xy.$

In particular, for any $x$,

$(-x)^2 = x^2.$

I suspect that property (3) would have strong consequences for the form of $-$ and the multiplication table of $M$. It remains to study this further.
—
Can we show: we have $x^2 = y^2$ with $x \neq y$ if and only if $y = -x$?

We have one direction: $(-x)^2 = x^2$ as shown above. For the other, say that there was some $x, y$ such that $x^2 = y^2$ and $x \neq y$, $y \neq -x$. We'd also have $-x \neq -y$ (since $-$ is injective), and $x \neq -y$ (applying $-$ to $y \neq -x$). Now, we compute:

$(-x)^2 = x^2 = y^2 = (-y)^2.$

Now, $x \neq y$ so $-x \neq -y$, hence the pair $(-x, -y)$ would be another counterexample. But also $y \neq -x$ so $x \neq -y$, so $(x, -y)$ and $(-x, y)$ would be counterexamples too. (These two inequalities and the equality $x^2 = y^2$ incorporate all the given information.) So we have three values that are proven different from each other: $x$, $y$, and $-x$. Can we yield some kind of contradiction from these?
It remains to study this.
