
Applications of Reflections: Taking a Group to its “Abelian” Form


In continuing the exploration of explicit applications and examples of category-theoretic concepts, we highlight the versatility of reflections and reflective subcategories. This concept can be used to perform all kinds of desired actions on a category to yield a subcategory that is “nicer” in some way. This article explores how we can use reflections to make groups abelian.

Please see the previous article by Traylor and Fadeev for the definition of a category and other properties.

Definition (Subcategory):

A category \mathbf{A} is a subcategory of a category \mathbf{B} if 
(1) \mathrm{Ob}(\mathbf{A}) \subseteq \mathrm{Ob}(\mathbf{B})
(2) for each A, A' \in \mathrm{Ob}(\mathbf{A}), \mathrm{hom}_{\mathbf{A}}(A,A') \subseteq \mathrm{hom}_{\mathbf{B}}(A,A')
(3) for each A \in \mathrm{Ob}(\mathbf{A}), the \mathbf{B}-identity on A is the same as the \mathbf{A}-identity on A.
(4) composition in \mathbf{A} is the restriction of composition in \mathbf{B} to the morphisms of \mathbf{A}

Adámek et al., Abstract and Concrete Categories (1990)

Point by point, we’ll pick the definition apart. The first part is pretty clear. The collection of objects in a subcategory is contained in the collection of objects in its “parent”. The second criterion says that the set of morphisms from one object A to another object A' inside the littler category \mathbf{A} should be a subset of all the morphisms from the same A to the same A', but inside \mathbf{B}. That is, there may be morphisms from A \to A' in \mathbf{B} that don’t live in the subcategory \mathbf{A}.

The third criterion just states that the identity morphisms on objects should match in both categories. The final criterion tells us that composition inside the subcategory \mathbf{A} only “works” on the morphisms inside \mathbf{A}, but is otherwise the same composition as in \mathbf{B}. We just only perform compositions on morphisms in \mathbf{A} when we’re in \mathbf{A}.

We now define a reflection.

Definition (\mathbf{A}-reflection)

Let \mathbf{A} be a subcategory of \mathbf{B}, and let B \in \mathrm{Ob}(\mathbf{B}).

(1) An \mathbf{A}-reflection for B is a morphism B \xrightarrow{r} A from B to A \in \mathrm{Ob}(\mathbf{A}) such that for any morphism B \xrightarrow{f} A' from B to A' \in \mathrm{Ob}(\mathbf{A}) there exists a unique f': A \to A' such that f = f' \circ r.
(2) \mathbf{A} is a reflective subcategory for \mathbf{B} if each \mathbf{B}-object has an \mathbf{A}-reflection.

Currently, this definition seems a bit abstract. The following sections will illustrate concrete examples of reflections to understand this definition better.

Taking a Group to Its Abelian Form

The Category \mathbf{Grp}

For this example, we’ll be working with the category of groups, \mathbf{Grp}. The objects in this category are groups, and the morphisms are group homomorphisms. Some elements of this category are:

  • (\mathbb{R}, +) \xrightarrow{\phi} (\mathbb{R}^{+}, \cdot), \phi(x) = e^{x}. The real numbers under addition and the positive real numbers under multiplication are both groups, so they’re objects in this category. \phi here is a group homomorphism, so it’s a morphism in \mathbf{Grp}.
  • The dihedral group D_{6} and the permutation group S_{3} are also objects in this category. Recall that the dihedral group D_{6} is commonly visualized using the symmetries of an equilateral triangle, and is commonly given using the following presentation: D_{6} = \langle r,s : r^{3} = s^{2} = 1, sr = r^{-1}s\rangle. Here, we see that D_{6} is generated by two elements r and s (r is the rotation by 2\pi/3, and s is one of the reflections). S_{3} is the set of permutations on 3 elements: all permutations of the integers \{1,2,3\}. Both of these are groups, and both have 6 elements. If we define \phi: \begin{cases} r \to (123) \\ s \to (12)\end{cases} then \phi is a group homomorphism that takes D_{6} to S_{3}, and is thus also a morphism in \mathbf{Grp}.
  • As one last example, let \mathrm{GL}_{2}(\mathbb{R}) be the general linear group of degree 2. This is the group of invertible 2\times 2 matrices with real entries. Then we can create a group homomorphism \phi: D_{6} \to \mathrm{GL}_{2}(\mathbb{R}) by letting \phi(r) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix} \qquad \phi(s) = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} where \theta = 2\pi/3, so this \phi is also a morphism in \mathbf{Grp}.
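As a quick sanity check, the two matrices above really do satisfy the defining relations of D_{6} when \theta = 2\pi/3. Here is a short numerical verification in Python (our own sketch, not part of the original presentation):

```python
import math

def matmul(X, Y):
    """Product of two 2 x 2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(X, Y, tol=1e-9):
    """Entrywise comparison up to floating-point error."""
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

theta = 2 * math.pi / 3
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # phi(r): rotation by 2*pi/3
S = [[0, 1],
     [1, 0]]                                # phi(s): a reflection
I = [[1, 0],
     [0, 1]]

R3 = matmul(R, matmul(R, R))     # r^3 = 1
S2 = matmul(S, S)                # s^2 = 1
lhs = matmul(S, R)               # s r
rhs = matmul(matmul(R, R), S)    # r^{-1} s = r^2 s
print(close(R3, I), close(S2, I), close(lhs, rhs))  # True True True
```

Since \phi respects the relations that present D_{6}, it extends to a homomorphism on the whole group.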

The Category \mathbf{Ab}

A subcategory of \mathbf{Grp} is \mathbf{Ab}, the category of abelian groups. Morphisms are again group homomorphisms, but this time we are only looking at morphisms between abelian groups. Some examples:

(1) (\mathbb{R}, +) \xrightarrow{\phi} (\mathbb{R}^{+}, \cdot), \phi(x) = e^{x}. The real numbers under addition, and the positive real numbers under multiplication are abelian groups, so they’re in the subcategory \mathbf{Ab}.

(2) (\mathbb{Z}_{2} \times \mathbb{Z}_{2}, +) and (\mathbb{Z}_{4}, +) are also abelian groups. We can define a group homomorphism \psi: \mathbb{Z}_{2} \times \mathbb{Z}_{2} \to \mathbb{Z}_{4} by \psi(a,b) = 2a \pmod 4, i.e. \psi: \begin{cases} (0,0) \to 0 \\ (1,0) \to 2\\ (0,1) \to 0 \\ (1,1) \to 2\end{cases}, so \psi is in the collection of morphisms in \mathbf{Ab}. (Note that no bijective homomorphism between these two groups exists: every element of \mathbb{Z}_{2} \times \mathbb{Z}_{2} has order at most 2, while \mathbb{Z}_{4} has an element of order 4.) Of course, \psi is also in the morphisms of \mathbf{Grp} as well.

(3) D_{6} is not commutative, so it’s not an object in \mathbf{Ab}.

(4) \mathrm{GL}_{2}(\mathbb{R}) under matrix multiplication is not abelian, because matrix multiplication is not a commutative operation. So neither this group nor the group homomorphism between D_{6} and \mathrm{GL}_{2}(\mathbb{R}) is in \mathbf{Ab}.
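Homomorphisms between small finite groups can be verified exhaustively. The sketch below (an illustration of ours; the map (a,b) \mapsto 2a is one valid choice of homomorphism) checks the homomorphism law for maps \mathbb{Z}_{2} \times \mathbb{Z}_{2} \to \mathbb{Z}_{4}, and shows that a “counting” bijection fails it:

```python
from itertools import product

def is_hom(f):
    """Exhaustively check f: Z2 x Z2 -> Z4 against f(u + v) = f(u) + f(v)."""
    elements = list(product(range(2), repeat=2))
    for (a1, b1), (a2, b2) in product(elements, repeat=2):
        s = ((a1 + a2) % 2, (b1 + b2) % 2)          # addition in Z2 x Z2
        if f(*s) != (f(a1, b1) + f(a2, b2)) % 4:    # addition in Z4
            return False
    return True

psi = lambda a, b: (2 * a) % 4
print(is_hom(psi))                        # True
print(is_hom(lambda a, b: 2 * a + b))     # a "counting" bijection fails: False
```

The bijection fails because \mathbb{Z}_{2} \times \mathbb{Z}_{2} has no element of order 4 to send to 1 \in \mathbb{Z}_{4}.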

Creating the Commutator Subgroup

We now discuss the commutator element. The commutator of two elements g,h in a group, denoted [g,h] is given by [g,h] = g^{-1}h^{-1}gh.

Remark: The order matters here.

Let’s work with a nice easy group of matrices. Let G =\left\{I_{2},\begin{bmatrix}0&1\\1&0\end{bmatrix},\begin{bmatrix}0&1\\-1&-1\end{bmatrix},\begin{bmatrix}-1&-1\\0&1\end{bmatrix},\begin{bmatrix}-1&-1\\1&0\end{bmatrix},\begin{bmatrix}1 & 0\\-1&-1\end{bmatrix}\right\} under matrix multiplication. We’ll name these matrices \{I,A,B,C,D,K\} respectively. This group is not commutative; for example, AB = C while BA = K.


Next, we’re going to form the commutator subgroup, which is the subgroup of G generated by all the commutators of G. Let’s call this subgroup \tilde{G}. The quotient G/\tilde{G} is always an abelian group, so “quotienting out by” \tilde{G} and working with the cosets gives us an “abelian version” of our previously non-abelian group.

We’ll calculate a few commutator elements to demonstrate how, but will not run through every combination. \begin{aligned}[I,X]&=[X,I]=I\\ [A,B]&=A^{-1}B^{-1}AB=ADAB=D\\ [B,A]&=B^{-1}A^{-1}BA=DABA=B\\ [C,D]&=C^{-1}D^{-1}CD=CBCD=B\\ \vdots\end{aligned}

Continuing with all combinations, we find that there are only three commutator elements: \{I, B, D\}. Thus \tilde{G} = \{I,B,D\}. Now, G/\tilde{G} gives us the left cosets of \tilde{G}: \begin{aligned}A\tilde{G}&= C\tilde{G}=K\tilde{G}=\{A,C,K\}\\ B\tilde{G}&= D\tilde{G}=I\tilde{G}=\{I,B,D\}\end{aligned} Thus, the quotient group is G/\tilde{G} = \{A\tilde{G}, \tilde{G}\}. This two-element group is a bit dull, admittedly, but it certainly is abelian, with the identity element as \tilde{G}.
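The hand computation above is easy to confirm by brute force. The following Python sketch (ours, not part of the original article) forms every commutator of G using exact integer matrices:

```python
from itertools import product

# The six matrices of G, as exact integer tuples (hashable, so they can sit in a set).
I = ((1, 0), (0, 1))
A = ((0, 1), (1, 0))
B = ((0, 1), (-1, -1))
C = ((-1, -1), (0, 1))
D = ((-1, -1), (1, 0))
K = ((1, 0), (-1, -1))
G = [I, A, B, C, D, K]

def mul(X, Y):
    """Product of two 2 x 2 integer matrices."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(X):
    # G is a group, so every element's inverse is again in G.
    return next(Y for Y in G if mul(X, Y) == I)

# [g, h] = g^{-1} h^{-1} g h for every pair of elements
commutators = {mul(mul(inv(g), inv(h)), mul(g, h)) for g, h in product(G, G)}
print(commutators == {I, B, D})  # True
```

Here the set of commutators happens to be closed under the group operation, so it is already the commutator subgroup \tilde{G}.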

Back to Reflections

Something else we can do with this little two-element group is map it to (\mathbb{Z}_{2}, +) via the homomorphism \phi: G/\tilde{G} \to \mathbb{Z}_{2}, where \phi(\tilde{G}) = 0 and \phi(A\tilde{G}) = 1.

What does any of this have to do with reflections? Recall that G \in \mathrm{Ob}(\mathbf{Grp}), but not in \mathrm{Ob}(\mathbf{Ab}). G/\tilde{G} is in \mathrm{Ob}(\mathbf{Ab}), and so is \mathbb{Z}_{2}.

A reflection for G into \mathbf{Ab} is a morphism r such that for any morphism \psi:G \to A', A' \in \mathrm{Ob}(\mathbf{Ab}), we can find a unique morphism \phi completely contained in \mathbf{Ab} such that \psi = \phi \circ r.

Let A'= \mathbb{Z}_{2}, and define \psi: G \to \mathbb{Z}_{2} by \psi: \begin{cases}A \to 1 \\ C \to 1 \\ K \to 1 \\ B \to 0\\D \to 0 \\I \to 0\end{cases} This is certainly a homomorphism (\psi(XY) = \psi(X) + \psi(Y), since the operation in \mathbb{Z}_{2} is addition).

What if we could “bounce” this group G off another group H in \mathbf{Ab}, then move from H to \mathbb{Z}_{2}? That might reveal a little more about \psi than just the seemingly contrived definition we put forth. 

Let r: G \to G/\tilde{G} be the canonical map sending each element of G to the appropriate coset. That is, r(A) = A\tilde{G}, r(B) = \tilde{G}, etc. Then the image of r is G/\tilde{G}, and \phi as defined above is the unique morphism that will take G/\tilde{G} to \mathbb{Z}_{2} such that \psi = \phi \circ r.
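The factorization \psi = \phi \circ r can be checked element by element. Here is a minimal sketch (the string labels for elements and cosets are our own bookkeeping):

```python
# The six elements of G, labeled by name.
G_elements = ["I", "A", "B", "C", "D", "K"]

def r(g):
    """The canonical projection G -> G/G~ sending g to its coset."""
    return "G~" if g in {"I", "B", "D"} else "AG~"

def phi(coset):
    """The unique morphism G/G~ -> Z2 with phi(G~) = 0, phi(AG~) = 1."""
    return 0 if coset == "G~" else 1

# psi as defined in the text
psi = {"I": 0, "B": 0, "D": 0, "A": 1, "C": 1, "K": 1}

# psi factors through the reflection: psi = phi . r on every element
print(all(phi(r(g)) == psi[g] for g in G_elements))  # True
```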

To summarize: one reflection r takes G to some object A in \mathbf{Ab}. Grab a morphism \psi from G to somewhere else in \mathbf{Ab} (we picked \mathbb{Z}_{2}). Then we must be able to find a unique \phi such that \psi decomposes into the composition of \phi with r. This same r (and thus the same A) should be able to perform this action for any A' and any \psi. In our case, the reflection is the canonical map from G onto the quotient by its commutator subgroup.


Reflections can perform many different actions to take objects in one category to objects in a subcategory. We focused on reflections making things “abelian” in a nice way, which helped reveal some structures that would have otherwise been hidden.


Equivalence v. Isomorphisms in Category Theory



Editor’s Note: The article is co-written by Rachel Traylor (The Math Citadel/Marquette University) and Valentin Fadeev (The Open University, UK). Substantial additional review, contributions, and discussions were provided by Matt Kukla and Jason Hathcock. A pdf is available for download at the end of this post.

The biggest challenge we have found in studying category theory is the lack of “concrete” examples to illustrate concepts that aren’t themselves highly abstract. This may be due to our more “vulgar” nature as primarily applied mathematicians (from the viewpoint of many academic mathematicians). However, we firmly believe that if category theory wishes to deliver on the promises it makes to help bind many branches of mathematics (and even the sciences) together, it needs a little boring vulgarity sometimes when illustrating concepts as opposed to a never-ending march towards greater abstraction and generality.

Perhaps we are better poised than many to do the dirty work of illustrating some of these concepts in a more boring way, on topics so familiar many mathematicians ignore them. For this particular article, we will elaborate on the difference between isomorphic categories and equivalent categories. The difference between the two has a degree of subtlety, but exploring a tangible example involving matrices and finite-dimensional vector spaces really clarified these concepts for us. 

Necessary Definitions

We will not assume a familiarity with all the terms we will be using, so some definitions for reference are provided here. It is perfectly acceptable to skim these until they are invoked later, then return for further examination. For reference, please see Abstract and Concrete Categories by Adámek, Herrlich, and Strecker (1990).

Definition (Category)

A category is a quadruple \mathbf{A} = (\mathscr{O}, \text{hom}, \text{id}, \circ) that consists of

  1. A class \mathscr{O} whose members are called \mathbf{A}-objects.
  2. For each pair (A,B) of \mathbf{A}-objects, a set \hom(A,B) whose elements are called morphisms from A to B. Denoted A \xrightarrow{f} B or f: A \to B. The collection of all morphisms in \mathbf{A} is denoted \text{Mor}(\mathbf{A}).
  3. For each \mathbf{A}-object A, there is an identity morphism A \xrightarrow{id_{A}} A.
  4. A composition law that associates with each \mathbf{A}-morphism A \xrightarrow{f} B and B \xrightarrow{g} C an \mathbf{A}-morphism A \xrightarrow{g \circ f} C we call the composite of f and g. This composite map g\circ f is subject to the following conditions:
    • Composition is associative. That is, for A \xrightarrow{f} B, B \xrightarrow{g} C, and C \xrightarrow{h} D

      h \circ (g\circ f) = (h \circ g) \circ f.
    • For any morphism A \xrightarrow{f} B, id_{B} \circ f = f and f \circ id_{A} = f.
    • The sets \hom(A,B) are pairwise-disjoint, meaning that a morphism f cannot belong to \hom(A,B) and \hom(A',B') at the same time. It can only be in one set of morphisms. This condition guarantees each morphism has a unique domain (where we’re mapping from) and codomain (where we map to).
Abstract and Concrete Categories, Adámek, Herrlich, Strecker (1990)

For those who have never seen any category theory at all, it is worth a pause to list a couple examples of categories. We shall only list the ones that are of interest to this article. There are many more examples, and many more “types” of categories beyond these.

  • \mathbf{Set} is the category of all sets. \mathscr{O} is the class of all sets. (Here a particular set is an object: \mathbb{N} is an object in this category, as is the set of all permutations of the numbers \{1,2,3\}. We don’t care about individual elements of these sets; we move from one whole set to another whole set.) \hom(A,B) is the set of all functions from set A to set B. id_{A} is just the identity function on a set A, and \circ is our standard composition of functions.
  • \mathbf{Vect} is the category whose objects are all real vector spaces. Morphisms in this category are linear transformations from one vector space to another, so \hom(A,B) is the set of all linear transformations from vector space A to vector space B. The identity morphisms are identity transformations, and composition in this category is the standard composition of linear maps.
  • \mathbf{Mat} is an interesting category worth spending some time on. Objects here are not matrices, as one might guess. Objects in \mathbf{Mat} are natural numbers. Morphisms are m \times n real-valued matrices. (We can generalize this concept to \mathbf{Mat}_{R}, where the entries of the matrices are the elements of a particular specified ring R, but let’s just stick with real-valued matrices here.) Put differently, a morphism takes a natural number m “to” a natural number n by assigning it a matrix of dimensions m \times n with real-valued entries. \hom(m,n) is the set of all m\times n real-valued matrices, so while there are many different morphisms from m to n, they all have the same shape: each is a matrix with m rows and n columns. In this category, the identity morphism id_{m} is the m \times m identity matrix. Composition in this category is defined by A \circ B = BA, where BA is just the usual matrix multiplication. It’s worth noting here that a category doesn’t require that all morphisms be composable; \mathbf{Mat} provides a nice example of this. We can only multiply matrices when the dimensions are appropriate to do so, which means only certain morphisms are composable. This doesn’t violate the definition of a category: composition must be associative when composition can be done.
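The reversed composition convention in \mathbf{Mat} is easy to see with explicit matrices. In this sketch (the particular matrices are our own examples), a morphism m \to n is stored as an m \times n list of rows:

```python
# A morphism m -> n in Mat is an m x n matrix, stored as a list of rows.
B = [[2, 0, 1],
     [1, 1, 0]]   # B: 2 -> 3
A = [[1, 1],
     [0, 2],
     [3, 0]]      # A: 3 -> 2

def matmul(X, Y):
    """Ordinary matrix product XY."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Composition in Mat runs "backwards": A o B is the usual product BA,
# a morphism 2 -> 2. The dimensions line up: (2 x 3)(3 x 2) = 2 x 2.
composite = matmul(B, A)
print(composite)  # [[5, 2], [1, 3]]
```

Trying to compose two morphisms whose dimensions don’t match simply fails, which is the point made above: not every pair of morphisms is composable.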

Morphisms allow us to move between objects in a category. If we want to move from one category to another, we’ll need the concept of a functor.

Definition (Functor)

If \mathbf{A} and \mathbf{B} are categories, then a functor F from \mathbf{A} to \mathbf{B} is a mapping that assigns to each \mathbf{A}-object A a \mathbf{B}-object FA. It also assigns to each \mathbf{A}-morphism A \xrightarrow{f} A' a \mathbf{B}-morphism FA \xrightarrow{Ff} FA' such that the following conditions hold:

  1. F(f\circ g) = Ff \circ Fg (Composition is preserved whenever f\circ g is defined.)
  2. For every \mathbf{A}-object A, F(id_{A}) = id_{FA}. This makes sure the functor maps the identity morphism on A to the identity morphism on the object that A maps to in the category \mathbf{B}.

Notationally, we’ll see functors written F: \mathbf{A} \to \mathbf{B}, or F(A\xrightarrow{f}B), which shows more clearly that functors move objects and morphisms. 

Functors themselves can have different properties that are a bit similar to concepts from studying ordinary functions (e.g. injectivity, surjectivity). We can also compose functors, and the composite functor is indeed a functor.

Definition (Isomorphism as a functor)

A functor F : \mathbf{A} \to \mathbf{B} is called an isomorphism if there is a functor G: \mathbf{B} \to \mathbf{A} such that G \circ F = id_{\mathbf{A}} and F \circ G = id_{\mathbf{B}}.

Note id_{\cdot} here is the identity functor for a category, not the identity morphism on an object. The identity functor will just take all objects to themselves, and all morphisms to themselves, but the identity functor acts on a category, whereas the identity morphism acts on an object within a category.

Definition (Faithful Functor)

A functor is faithful if all the hom-set restrictions F: \hom_{\mathbf{A}}(A,A') \to \hom_{\mathbf{B}}(FA, FA') are injective.

Here, this means that if two morphisms f, g \in \hom_{\mathbf{A}}(A,A') map to Ff, Fg \in \hom_{\mathbf{B}}(FA, FA') with Ff = Fg (the two “target” morphisms are the same), then f = g. This needs to hold for all sets of morphisms between any two objects.

Definition (Full Functor)

A functor F is full if all the hom-set restrictions F: \hom_{\mathbf{A}}(A,A') \to \hom_{\mathbf{B}}(FA, FA') are surjective.

In other words, for a functor to be full, every morphism g in \hom_{\mathbf{B}}(FA, FA') needs some morphism f in \hom_{\mathbf{A}}(A,A') such that Ff = g. That is, g should have a “preimage” under the functor.

Definition (Isomorphism Dense)

A functor is isomorphism-dense if for any \mathbf{B}-object B there exists some \mathbf{A}-object A such that FA \cong B.

Two objects FA and B in a category \mathbf{B} are isomorphic if there exists a morphism between the two that is an isomorphism. It is worth clearly noting the definition of an isomorphism as it pertains to morphisms in a category:

Definition (Isomorphism for Morphisms)

A morphism f: A \to B in a category \mathbf{A} is an isomorphism if there exists a morphism g: B \to A in \text{Mor}(\mathbf{A}) such that f\circ g = id_{B} and g\circ f = id_{A} where id_{A} and id_{B} are the identity morphisms for the objects A and B.

Finally, we can combine a few of these to get the notion of an equivalence:

Definition (Equivalence)

A functor is an equivalence if it is full, faithful, and isomorphism-dense.

If we re-write the definition of an isomorphism (functor), then an isomorphism is a functor that is full, faithful, and bijective on objects, whereas an equivalence is a functor that is full, faithful, and isomorphism-dense. The only difference between these two is the notion of being bijective on objects v. being isomorphism-dense. An isomorphism is more restrictive than an equivalence in the sense that all  isomorphisms are equivalences, but we can exhibit equivalences that are not isomorphisms. When we talk about categories being either equivalent or isomorphic, these two words are not interchangeable. In the next sections, we’ll discuss this difference, and illustrate with an explicit example. 

Linear Algebra and Matrices

At this point, we’ll revisit briefly some standard results from linear algebra, focusing on real vector spaces. In most undergraduate linear algebra courses, the notion that a linear transformation can be represented in matrix form is simply stated (sometimes proven, but not often). In particular, those linear algebra courses designed for engineers may only focus on linear systems of equations (which are linear transformations) and omit much talk of linear transformations at all. We then consider it reasonable to revisit some of these ideas. For more details, please consult your favorite linear algebra text.

Linear Spaces (Vector Spaces)

Remark: The terms linear space and vector space are interchangeable. Different texts will use different terms.

Definition (Linear Space)

A linear space over a field \mathbb{F} is a set of elements (vectors) V with two operations +, \cdot. The operation +, called vector addition, takes two elements v, w \in V and returns another element v+w \in V. The operation \cdot is scalar multiplication: it takes a scalar element \alpha from the field \mathbb{F}, multiplies it by an element v \in V, and returns a “scaled” element \alpha v \in V. We require closure under linear combinations. For any \alpha_{1},\alpha_{2},\ldots,\alpha_{n} \in \mathbb{F} and \mathbf{v}_{1}, \mathbf{v}_{2},\ldots,\mathbf{v}_{n} \in V,
\sum_{i=1}^{n}\alpha_{i}\mathbf{v}_{i} \in V. This is a consolidation of the ten axioms a linear space must satisfy. The expanded axioms are relegated to the appendix.

We’re going to focus on real vector spaces rather than this abstract definition. Real vector spaces have \mathbb{R} as our field (so we multiply vectors by real numbers). 

The vector spaces most often encountered in engineering problems are \mathbb{R}^{n}–the standard n-dimensional real vector spaces on real-valued vectors. (Also known as a standard Euclidean space.) 

Now suppose we wish to move from one vector space V to another W. There are many ways we can define such a move, but the ones we’re interested in are linear transformations.

Definition (Linear Transformation)

If V,W are linear spaces, then a function T: V \to W is a linear transformation if

  1. T(x+y) = T(x) + T(y) for all x,y \in V. If we add two vectors, and then transform the result into the space W, it should yield the same vector in W as if we transformed each of x and y into the space W, then added them together.
  2. T(\alpha x) = \alpha T(x) for all \alpha \in \mathbb{F}, x \in V. (We can “factor out” scalars.)

We can take linear transformations from \mathbb{R}^{m} \to \mathbb{R}^{n} by transforming coordinates as in the following example:

T:\mathbb{R}^{2} \to \mathbb{R}^{2} is given by T(x, y) = (x + 2y, 3y).

From linear algebra, we know we may rewrite the transformation in vector form, since we live in “vector space” (in 2 dimensions here):

T\left(\begin{bmatrix}x\\y\end{bmatrix}\right) = \begin{bmatrix}x+2y\\3y\end{bmatrix}

Typically the next step in linear algebra is to say that we may represent the transformation T in matrix form such that multiplication of our “source vector” by the matrix yields the result of the linear transformation. For our example, the matrix A corresponding to the linear transformation T is 

A = \begin{bmatrix}1 & 2 \\ 0 & 3\end{bmatrix}, so that for \mathbf{x} = \begin{bmatrix}x\\y\end{bmatrix} A\mathbf{x} = \begin{bmatrix}x+2y \\ 3y\end{bmatrix}
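We can confirm this correspondence directly: multiplication by A agrees with T on any vector we try (a small Python check of our own):

```python
def T(v):
    """The linear transformation T(x, y) = (x + 2y, 3y)."""
    x, y = v
    return [x + 2 * y, 3 * y]

A = [[1, 2],
     [0, 3]]

def apply(A, v):
    """Multiply the matrix A by the column vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# The matrix A and the transformation T agree on every input we try.
for v in ([1, 0], [0, 1], [2, -1]):
    assert apply(A, v) == T(v)
print("A represents T")
```

Checking agreement on the two standard basis vectors already suffices, since both maps are linear.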

At this point, this equivalence between linear transformations and matrix representation (with operation matrix/vector multiplication) is simply stated and expected to be accepted. However, we’re going to use the notion of equivalences in category theory to establish this formally.

Moving from Mat to Vect

As mentioned before, we’re going to focus only on finite dimensional vector spaces (\mathbb{R}^{n} for various values of n, or other spaces of finite dimension) and real-valued matrices to keep things simple. These concepts generalize much further, though. Since we’re operating on real-valued matrices, we’ll be using \mathbf{Mat}_{\mathbb{R}}.

To show the equivalence between \mathbf{Vect} and \mathbf{Mat}, we must establish an equivalence functor between the two that allows us to map \mathbf{Mat} to \mathbf{Vect}. We define the functor as follows:

F:\mathbf{Mat}\to\mathbf{Vect} assigns to each object n in \mathbf{Mat} the vector space \mathbb{R}^{n}. Thus, we simply assign a natural number n in \mathbf{Mat} to the corresponding n-dimensional real vector space in \mathbf{Vect}, which is an object in \mathbf{Vect}.

Functors also map morphisms in the source category (\mathbf{Mat} in our case) to morphisms in the destination category (\mathbf{Vect} here). Morphisms in \mathbf{Mat} are m \times n matrices with real values. Morphisms in \mathbf{Vect} are linear transformations. Thus, it’s natural to have our functor take an m \times n matrix A in \mathbf{Mat} and assign to it the linear map T in \text{Mor}(\textbf{Vect}) from \mathbb{R}^{m} \to \mathbb{R}^{n} that takes an m-dimensional vector \mathbf{x} \in \mathbb{R}^{m} to the n-dimensional vector \mathbf{y} \in \mathbb{R}^{n} given by \mathbf{y}^{T} = \mathbf{x}^{T}A.
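To make the action of F on morphisms concrete, here is a small Python sketch (our own illustration, with hypothetical example matrices) that also checks that F preserves composition on one pair of morphisms:

```python
def F(A):
    """Sketch of F on morphisms: an m x n matrix A becomes the linear map
    R^m -> R^n sending x to the row vector x^T A."""
    def T(x):
        m, n = len(A), len(A[0])
        return [sum(x[i] * A[i][j] for i in range(m)) for j in range(n)]
    return T

def matmul(X, Y):
    """Ordinary matrix product XY."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

B = [[1, 2, 0],
     [0, 1, 1]]   # morphism 2 -> 3
A = [[1, 0],
     [2, 1],
     [0, 3]]      # morphism 3 -> 2
x = [1, -1]

# Functoriality on this pair: F(A o B) = F(A) o F(B), where A o B is the product BA.
print(F(matmul(B, A))(x) == F(A)(F(B)(x)))  # True
```

This is exactly why \mathbf{Mat} uses the reversed composition A \circ B = BA: with the row-vector action \mathbf{x}^{T}A, applying B first and then A corresponds to multiplying by BA.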

We need to check that this functor satisfies the definition of both a functor and an equivalence. That F is a functor follows from matrix multiplication on one hand, and from composition of linear maps on the other. To ensure it satisfies the definition of an equivalence, we must check to see that the functor is full, faithful, and isomorphism-dense.

To show the functor is faithful, take two matrices A and A' in \text{Mor}(\mathbf{Mat}) whose images under F are the same linear transformation; in other words, the two transformations do the exact same thing to input vectors. If multiplication by A and by A' yields the exact same vector \mathbf{y} for every input \mathbf{x}, then A = A'. Thus the functor is indeed faithful. To show the functor is full, we need to show that for any linear transformation T \in \text{Mor}(\mathbf{Vect}), there is a matrix A \in \text{Mor}(\mathbf{Mat}) such that F(A) = T. A linear transformation, by definition (see above), takes a vector in \mathbb{R}^{m} to a vector in \mathbb{R}^{n} whose components are linear combinations of the source components. Refer back to the example above, where we mapped from \mathbb{R}^{2} to \mathbb{R}^{2}.

T\left(\begin{bmatrix}x\\y\end{bmatrix}\right) = \begin{bmatrix}x+2y\\3y\end{bmatrix}

Since multiplication of matrices results in components which are linear combinations of the source vector by definition of matrix multiplication, there is certainly a matrix A \in \text{Mor}(\mathbf{Mat}) that will map to T \in \text{Mor}(\mathbf{Vect}) for any linear transformation T. Thus, F is certainly a full functor. 

It remains to show that F is isomorphism-dense. To do so, we’re going to attempt to create its “inverse” functor G : \mathbf{Vect} \to \mathbf{Mat}. It is at this point we’ll really understand the difference between being isomorphism-dense and being a true isomorphism.

Creating the Inverse Functor

We’re going to create a functor G that maps \mathbf{Vect} to \mathbf{Mat}. For this, we’ll define it as the “natural” inverse. G will take vector spaces in \mathbf{Vect} to their dimension n \in \mathbb{N}, which is an object in \mathbf{Mat}. We’ll also take T \in \text{Mor}(\mathbf{Vect}) to its corresponding matrix A \in \text{Mor}(\mathbf{Mat}), simply “reversing” the direction. 

We can show in an identical manner to what we did for F that the functor G is full. Isomorphisms in \mathbf{Vect} are invertible linear maps between vector spaces. Since the objects of \mathbf{Mat} are natural numbers, we cannot have isomorphisms between different objects. That is, m cannot be isomorphic to n inside \mathbf{Mat} unless m=n.

We now illustrate the key difference between isomorphisms and isomorphism-dense functors. Take the space of polynomials of degree less than m with real coefficients. This is indeed a real vector space, and belongs to \mathbf{Vect}. This polynomial space is isomorphic to \mathbb{R}^{m} inside of \mathbf{Vect}, meaning that there is an invertible linear map K from the polynomial space to \mathbb{R}^{m}. Recall that functors not only map objects to objects, but they also map morphisms to morphisms. Since the functor F originating in \mathbf{Mat} only ever produces the spaces \mathbb{R}^{n}, the polynomial space itself has no preimage under F; \mathbf{Mat} sees the polynomial space and \mathbb{R}^{m} just as “some” m-dimensional spaces. Thus the morphism K \in \text{Mor}(\textbf{Vect}) has no morphism in \text{Mor}(\textbf{Mat}) that will map to it.

The functor G is not faithful: distinct linear transformations between distinct but isomorphic vector spaces, such as K and the identity map on \mathbb{R}^{m}, are assigned the same m \times m matrix in \text{Mor}(\textbf{Mat}).

Therefore, the two categories \mathbf{Vect} and \mathbf{Mat} are equivalent, but not isomorphic, because F is only isomorphism-dense, not bijective on objects.


We exhibited an explicit example involving the category \mathbf{Mat} of matrices over \mathbb{R} and the category \mathbf{Vect} of finite-dimensional real vector spaces to illustrate the difference between functors that are isomorphisms and functors that are equivalences. The key difference is the notion of isomorphism-dense, which means that an object in the target category of a functor may not have a “preimage” in the source category, but is isomorphic inside the target category to an object which does have a “preimage” in the source category.


For a linear space V over a field \mathbb{F}, the following axioms regarding the two operations must be satisfied:

  1. For x,y \in V, x+y \in V. That is, if we perform the addition operation on two elements of V, the result can’t be outside the set V.
  2. For each \alpha \in \mathbb{F} and v \in V, \alpha v \in V. Multiplying an element by a scalar shouldn’t “knock” us outside the set V.
  3. For x,y \in V, x+y = y+x. (Commutativity)
  4. For x,y,z \in V, x+(y+z) = (x+y)+z (Associativity)
  5. There must be some element in V, denoted 0_{V} that is an \textit{additive identity}. That is, for any x \in V, x + 0_{V} = 0_{V} + x = x.
  6. For every x \in V, the element (-1)x has the property that x + (-1)x = 0_{V}. (Existence of “negatives”, or additive inverses)
  7. For \alpha, \beta \in \mathbb{F}, v \in V, \alpha(\beta v) = (\alpha\beta)v. (Associativity of multiplication)
  8. For all x,y \in V, and for all \alpha \in \mathbb{F}, \alpha(x+y) = \alpha x + \alpha y (Distributivity of vector addition)
  9. For all \alpha, \beta \in \mathbb{F}, v \in V, (\alpha + \beta)v = \alpha v + \beta v.
  10. For every v \in V, we have that 1v = v, where 1 is the multiplicative identity in \mathbb{F}.


We’d like to gratefully acknowledge Matt Kukla and Jason Hathcock for their insights, review, and meaningful discussion.

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.