Sets and mappings

We begin our discussion with the concept of a set, a notion which we will assume is intuitively clear, although in actuality this is far from so: an unrestricted use of the set concept has led to contradictions in mathematics. It was for this reason that an axiomatic treatment became necessary.

In this discussion, we will usually designate sets by capital Roman letters such as $A$, $B$, $C$, etc., and elements of sets by small Roman letters. We will also indicate that an element $a$ belongs to a set $A$ by writing $a\in A$, while if this is not the case, we write $a\notin A$, read ``$a$ does not belong to the set $A$.''

If we have a collection of sets indexed by elements $\alpha$ belonging to a set $\Lambda$, then this collection will be denoted by $\{A_\alpha\}_{\alpha\in \Lambda}$. For example, if $\Lambda =\mathbb{N}$, the set of positive integers or natural numbers, we could write $\{A_i\}_{i\in \mathbb{N}}$, meaning that we have a countable number of sets which are being considered. (Note, in general, it is not necessary that $\Lambda$ be even countable. The set of all real numbers denoted by $\mathbb{R}$ is an example of an uncountable set as compared to $\mathbb{N}$, which is a countable set.)

Let $\{A_\alpha\}_{\alpha\in \Lambda}$ be a collection of sets. The union of these sets, denoted by $\cup_{\alpha\in \Lambda} A_\alpha$, is the set of all elements which belong to at least one of the $A_\alpha$. In case the index set $\Lambda$ is equal to $\mathbb{N}$ or is finite, say equal to $\{1, ..., n\}$, we use the notations $\cup_{i=1}^\infty A_i$ and $\cup_{i=1}^n A_i$, respectively, or we write

\begin{displaymath} \cup_{i=1}^\infty A_i =A_1\cup A_2 \cup ...\cup A_n\cup ..., \end{displaymath}

\begin{displaymath} \cup_{i=1}^n A_i =A_1\cup A_2 \cup ...\cup A_n. \end{displaymath}

Again let $\{A_\alpha\}_{\alpha\in \Lambda}$ be a collection of sets. The intersection of the sets, denoted by $\cap_{\alpha\in \Lambda} A_\alpha$, is the set of all elements which belong to all the $A_\alpha$. Similar notations as for unions are adopted in the case of intersections when the index set $\Lambda$ is countable or finite.
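As a small illustration (not part of the text), unions and intersections of a finite indexed collection can be computed with Python's built-in set type; here the index set $\Lambda = \{1,2,3\}$ and $A_i = \{i, i+1, i+2\}$ are arbitrary choices.

```python
# An indexed collection of sets {A_i}, i in Lambda = {1, 2, 3},
# with A_i = {i, i+1, i+2}.
family = {i: {i, i + 1, i + 2} for i in (1, 2, 3)}

# Union: elements belonging to at least one A_i.
union = set().union(*family.values())

# Intersection: elements belonging to every A_i.
intersection = set.intersection(*family.values())

print(union)         # {1, 2, 3, 4, 5}
print(intersection)  # {3}
```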

Next let $A$ and $B$ be two sets. If every element of $A$ is also an element of $B$, one says that $A$ is a subset of $B$, denoted $A\subset B$ (or $A\subseteq B$). If $A\subset B$ and $B\subset A$, then the sets $A$ and $B$ are said to be equal, denoted $A = B$. Finally, if $A\subset B$ but $A\not= B$, then $A$ is called a proper subset of $B$, denoted $A\subsetneq B$.

If $A$ and $B$ are two sets, the cartesian product, $A\times B$, is the set of all ordered pairs $(a,b)$ such that $a\in A$ and $b \in B$. Here it is to be emphasized that $(a_1,b_1) = (a_2,b_2)$ if and only if $a_1 = a_2$ and $b_1 = b_2$. Similarly we can define the cartesian product of any finite number of sets. For example, $\mathbb{R}^n = \mathbb{R}\times ... \times \mathbb{R}$ ($n$ times), consists of ordered $n$-tuples $(x_1, ..., x_n)$ where each $x_i\in \mathbb{R}$, for $1 \leq i \leq n$.
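The cartesian product of finite sets can be sketched with the standard library's `itertools.product`; the sets $A$ and $B$ below are illustrative.

```python
from itertools import product

A = {1, 2}
B = {'x', 'y'}

# A x B: all ordered pairs (a, b) with a in A and b in B.
AxB = set(product(A, B))
print(len(AxB))  # |A x B| = |A| * |B| = 4

# Ordered pairs are equal only when both coordinates agree:
assert (1, 'x') == (1, 'x')
assert (1, 'x') != ('x', 1)
```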

The set consisting of no elements at all is called the null set or empty set and is designated by $\emptyset$. If $A$ and $B$ are two sets such that $A \cap B = \emptyset$, then $A$ and $B$ are called disjoint sets.

We next introduce a particularly useful notation which will be used throughout: If $A$ is a set, we denote by

\begin{displaymath} \{x\in A\ \vert\ P(x)\} \end{displaymath}

the set of all elements $x$ belonging to $A$ for which the proposition $P(x)$ is true. For example, the set of even positive integers equals $\{x \in \mathbb{N} \ \vert\ x\ {\rm is\ even}\}$. By way of notation and abbreviation, we shall also, sometimes, adopt the following convention: If $A$ is a finite set consisting of the elements $x_1, ..., x_n$, one writes $A = \{x_1, ..., x_n\}$. In particular, if $A$ consists of just a single element $x$, we write $A = \{x\}$. A similar notation is frequently adopted in the case where $A$ consists of a denumerable or countable number of elements. Finally, we define the order of a finite set $A$, written $\vert A\vert$, to be the number of elements in the set $A$. If $A$ is not finite, we write $\vert A\vert = \infty$.
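The notation $\{x\in A\ \vert\ P(x)\}$ corresponds directly to a set comprehension in Python; the following sketch (the finite stand-in for a piece of $\mathbb{N}$ is an arbitrary choice) also computes the order $\vert A\vert$.

```python
# A finite stand-in for an initial segment of the positive integers.
A = set(range(1, 21))

# {x in A | x is even}: the set-builder notation as a comprehension.
evens = {x for x in A if x % 2 == 0}

print(len(A))      # |A| = 20
print(len(evens))  # 10
```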

We now turn to the next item of business involving sets, namely the notion of a mapping between two sets. Let $A$ and $B$ be two sets. If for every $a\in A$ there is associated a unique $b \in B$, we say that there is a mapping or function $f$ from $A$ into $B$ and we write $f(a) = b$. Here $A$ is called the domain of $f$, $B$ is the co-domain of $f$, and if $b = f(a)$, then $b$ is the image of $a$ under $f$. We denote this by

\begin{displaymath} f:A\rightarrow B,\ \ \ {\rm or}\ \ A \stackrel{f}{\rightarrow} B. \end{displaymath}

Let $f$ be a mapping from $A$ into $B$. If distinct elements of $A$ have distinct images in $B$ under $f$, i.e., if $a_1,a_2\in A$ and $a_1\not= a_2$ implies $f(a_1)\not= f(a_2)$, then $f$ is called a one-to-one (1-1) (or injective) mapping. Put another way, $f$ is 1-1 if and only if $f(a_1) = f(a_2)$ implies $a_1 = a_2$. If for every $b \in B$, there is an element $a\in A$ such that $f(a) = b$, then the mapping $f$ is said to be onto $B$.
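For a finite mapping represented as a Python dict (domain elements as keys, images as values), the 1-1 and onto conditions can be tested directly; the helper names and the sample maps below are illustrative, not from the text.

```python
def is_one_to_one(f):
    # 1-1: distinct elements of the domain have distinct images,
    # i.e., no image value is repeated.
    return len(set(f.values())) == len(f)

def is_onto(f, B):
    # Onto B: every b in B is the image f(a) of some a in the domain.
    return set(f.values()) == set(B)

f = {1: 'a', 2: 'b', 3: 'c'}   # 1-1 and onto {'a','b','c'}
g = {1: 'a', 2: 'a', 3: 'b'}   # neither: g(1) = g(2), and 'c' is missed

print(is_one_to_one(f), is_onto(f, {'a', 'b', 'c'}))  # True True
print(is_one_to_one(g), is_onto(g, {'a', 'b', 'c'}))  # False False
```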

Again, let $f : A\rightarrow B$, and suppose that $E$ is a subset of $A$, $E\subset A$. The image set $f[E]$ is defined by

\begin{displaymath} f[E] = \{f(x) \in B \ \vert\ x \in E\}. \end{displaymath}

If $F\subset B$ then the pre-image $f^{- 1}[F]$ is defined by

\begin{displaymath} f^{- 1}[F] = \{x \in A \ \vert\ f(x)\in F\}. \end{displaymath}

Notice that an image set $f[E]$ can be empty only if $E$ is empty, but that a pre-image set $f^{- 1}[F]$ can be empty even if $F$ is nonempty.
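A sketch of image and pre-image for a finite domain (the helper names and the sample map $f(x) = x^2$ are illustrative choices) also exhibits the observation just made: a pre-image can be empty even when $F$ is nonempty.

```python
def image(f, E):
    # f[E] = { f(x) | x in E }
    return {f(x) for x in E}

def preimage(f, A, F):
    # f^{-1}[F] = { x in A | f(x) in F }
    return {x for x in A if f(x) in F}

A = {-2, -1, 0, 1, 2}
f = lambda x: x * x           # f : A -> {0, 1, 4}

print(image(f, {-1, 1}))      # {1}
print(preimage(f, A, {4}))    # {-2, 2}
print(preimage(f, A, {3}))    # set(): empty although {3} is nonempty
```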

Suppose now that $f$ is a 1-1 mapping of $A$ onto $B$. Then for each $b \in B$, there exists a unique $a\in A$ such that $f(a) = b$. This allows us to define a mapping called the inverse mapping of $f$, which we denote by $f^{- 1}$, where

\begin{displaymath} f^{-1}: B\rightarrow A \end{displaymath}

is defined by $f^{-1}(b) = a$, where $a$ is the unique element of $A$ such that $f(a) = b$. It is easy to show that $f^{- 1}$ is also one-to-one and onto. This usage of the notation $f^{- 1}$ should not be confused with its usage for the pre-image of a set, which is defined even if $f$ is NOT one-to-one and onto.
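When a finite $f$ is 1-1 and onto, reversing its (input, output) pairs produces the inverse mapping; a sketch with a dict-valued map (illustrative values).

```python
f = {1: 'a', 2: 'b', 3: 'c'}             # 1-1 and onto {'a','b','c'}

# f_inv(b) = a, where a is the unique element with f(a) = b.
f_inv = {b: a for a, b in f.items()}

print(f_inv['b'])                         # 2

# f_inv undoes f in both directions:
assert all(f_inv[f[a]] == a for a in f)       # f_inv(f(a)) = a
assert all(f[f_inv[b]] == b for b in f_inv)   # f(f_inv(b)) = b
```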

For any set $A$, we define the identity map on $A$, denoted by $1_A$ or simply $1$ if the set involved is clear, as that mapping which leaves every element of $A$ fixed, i.e., $1_A(x) = x$ for all $x\in A$.

Next let $f : A\rightarrow B$, and let $E\subset A$. The restriction of $f$ to $E$ is the mapping denoted by $f\vert _E$, and defined by $f\vert _E (x) = f(x)$ for all $x\in E$. (Here it is assumed that $E$ is non-empty.)

Now let $A$, $B$, and $C$ be sets and suppose that $f$ is a mapping from $A$ into $B$ and $g$ is a mapping from $B$ into $C$. Thus we have

\begin{displaymath} A \stackrel{f}{\rightarrow} B \stackrel{g}{\rightarrow} C. \end{displaymath}

Then the composite map (or product map or composition) $gf$ is defined as

\begin{displaymath} (gf)(x) = g(f(x)) \end{displaymath}

for all $x\in A$; thus $gf : A \rightarrow C$. Since $g(f(x))$ is uniquely determined by $x$, $gf$ is indeed a mapping. In the special case of $f : A \rightarrow A$, we speak of powers of $f$ : $f^2$, $f^3$, etc., and mean $f^2(x) = f(f(x))$, $f^3(x) = f(f^2(x))$, etc., for all $x\in A$.
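Composition and iterated powers translate naturally into Python functions; the maps $f$ and $g$ below are arbitrary illustrations.

```python
def compose(g, f):
    # The composite map gf: (gf)(x) = g(f(x)).
    return lambda x: g(f(x))

def power(f, n):
    # f^n: f composed with itself n times (f^0 is the identity map).
    result = lambda x: x
    for _ in range(n):
        result = compose(f, result)
    return result

f = lambda x: x + 1
g = lambda x: 2 * x

print(compose(g, f)(3))   # g(f(3)) = 2 * (3 + 1) = 8
print(power(f, 5)(0))     # applying x -> x + 1 five times gives 5
```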

Using the notations introduced above and assuming $f$ is a 1-1, onto mapping from $A$ to $B$, it is easy to see that $ff^{-1} = 1_B$ and $f^{-1}f = 1_A$.

Let $S$ be a set and furthermore let $\sim$ denote a relation defined between ordered pairs of elements of $S$ such that given any two elements $a,b \in S$ either $a \sim b$ (read ``$a$ is equivalent to $b$'') is true or it is false. In other words, using the previously introduced terminology and notation, we assume we are given a mapping:

\begin{displaymath} S \times S \rightarrow \{T,F\}, \end{displaymath}

i.e., a mapping of the cartesian product of $S$ with itself into a two-element set consisting of $T$ (``true'') and $F$ (``false''). If $(a,b)$ is mapped into $T$, we write $a \sim b$ and say that ``$a$ is equivalent to $b$''. If $(a,b)$ is mapped into $F$, we say that ``$a$ is not equivalent to $b$''. The relation, $\sim$, is called an equivalence relation on $S$ if it satisfies the following three conditions:
  1. (Reflexivity) $a \sim a$ for all $a \in S$.
  2. (Symmetry) If $a \sim b$, then $b \sim a$.
  3. (Transitivity) If $a \sim b$, and $b \sim c$, then $a \sim c$.

The following are examples of equivalence relations:

Example 1.1.1   Take $S$ to be the set of all triangles in the plane and take $\sim$ to be the relation of congruence, $\equiv$, or equally well, take $\sim$ to be the relation of similarity.

Example 1.1.2   Take $S$ to be the set of all lines in the plane and $\sim$ to be the relation of being parallel. (By convention, a line is parallel to itself.)

Example 1.1.3   Take $S = \mathbb{Z}$, the set of all integers, and let $m$ be a fixed positive integer. Define $a \sim b$ if and only if $m$ divides $a - b$. This special equivalence relation is denoted by $a \equiv b$ (mod $m$), read $a$ is congruent to $b$ modulo (or just mod) $m$. It will be treated in more detail in the next section.
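The relation of Example 1.1.3 can be checked mechanically: the sketch below (with the arbitrary choices $m = 5$ and a small sample of integers) verifies reflexivity, symmetry, and transitivity on the sample.

```python
m = 5
# a ~ b if and only if m divides a - b.
cong = lambda a, b: (a - b) % m == 0

sample = range(-10, 11)

# Reflexivity: a ~ a.
assert all(cong(a, a) for a in sample)
# Symmetry: a ~ b implies b ~ a.
assert all(cong(b, a) for a in sample for b in sample if cong(a, b))
# Transitivity: a ~ b and b ~ c imply a ~ c.
assert all(cong(a, c) for a in sample for b in sample for c in sample
           if cong(a, b) and cong(b, c))

print(cong(17, 2), cong(17, 3))   # True False: 5 | 15 but 5 does not divide 14
```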

Now suppose that we have a set $S$ and an equivalence relation defined on $S$. We denote by $[a]$, the set of all elements of S which are equivalent to $a$, i.e.,

\begin{displaymath}[a]= \{x \in S \ \vert\ a \sim x\}. \end{displaymath}

Such a set is called an equivalence class. Clearly $a \in [a]$ by the first condition for an equivalence relation, hence $[a] \not= \emptyset$, for all $a \in S$. We claim, next, that for $a,b \in S$ either $[a] = [b]$ or $[a]$ and $[b]$ are disjoint sets, i.e., $[a] \cap [b] = \emptyset$. To this end, suppose that $[a] \cap [b] \not= \emptyset$. Then there is a $d$ such that $d \in [a]$ and $d \in [b]$. Let $c$ be any element in $[a]$. Then $c\sim a$ and $a \sim d$. Hence $c \sim d$. Since also $d \sim b$, it follows that $c \sim b$, which implies $c \in [b]$. Thus $[a] \subset [b]$. Similarly, $[b] \subset [a]$. We have, therefore, shown that either $[a]$ and $[b]$ are disjoint or are equal.

Summarizing, we have the following result.

Theorem 1.1.4   If $S$ is a set with an equivalence relation defined on it, then $S$ is decomposed into disjoint, nonempty equivalence classes. (We say $S$ is partitioned.) This is denoted by $S = \cup [a]$, where it is understood that the union is taken over only certain $a \in S$ so that the classes are disjoint.
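A sketch of Theorem 1.1.4 for congruence mod 3 on a small set (both arbitrary choices): grouping $S$ by class label produces disjoint, nonempty classes whose union recovers $S$.

```python
S = set(range(10))
m = 3

# Group S into equivalence classes under a ~ b iff m | a - b;
# the residue a % m labels the class [a].
classes = {}
for a in S:
    classes.setdefault(a % m, set()).add(a)

print(sorted(classes.values(), key=min))
# [{0, 3, 6, 9}, {1, 4, 7}, {2, 5, 8}]

# The classes are nonempty and their union is all of S.
assert all(c for c in classes.values())
assert set().union(*classes.values()) == S
```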

Next, we want to consider one further equivalence relation of great importance. It will occur in more specialized settings later, and we consider the general case now. For this purpose let $E$ and $F$ be sets, and let $f : E \rightarrow F$. For $a,b \in E$ define $a \sim b$ if and only if $f(a) = f(b)$. It is readily seen that this is an equivalence relation on $E$. Let $\overline{E}$ denote the set of all equivalence classes and consider the following mappings:

\begin{displaymath} E \stackrel{\kappa}{\rightarrow} \overline{E} \stackrel{g}{\rightarrow} f[E]\stackrel{i}{\rightarrow} F, \end{displaymath}

where $\kappa (a) = [a]$, $g ([a]) = f(a)$, and $i(f(a)) = f(a)$. Clearly $\kappa$ is an onto mapping. We claim that $g$ is, first of all, well-defined: $[a] = [b]$ implies $g([a]) = g([b])$. Indeed, if $[a] = [b]$, then $a \in [b]$ or $a \sim b$, because $a \in [a]$, so $f(a) = f(b)$; hence $g([a]) = g([b])$, proving the claim. Moreover, $g$ is 1-1. Namely, if $g([a]) = g([b])$, then $f(a) = f(b)$, so $a \sim b$ and, therefore, $[a] = [b]$. Also $g$ is onto; for if $c \in f[E]$, then $c = f(a)$ for some $a \in E$, so $c = g([a])$. Finally, $i$ is clearly a 1-1 mapping, and it is also clear that $f = ig$.

Thus it is seen that an arbitrary mapping $f$ of one set into another can be written (factored) as a product of three mappings each of which has certain nice features which $f$ in general need not possess.
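This factorization can be sketched for a finite map: below, a class $[a]$ is represented by the frozenset of elements equivalent to $a$, $\kappa$ sends $a$ to $[a]$, $g$ sends $[a]$ to $f(a)$, and $i$ is the inclusion of $f[E]$ into $F$ (the map $f(x) = x^2$ and the set $E$ are arbitrary illustrations).

```python
E = {-2, -1, 0, 1, 2}
f = lambda x: x * x                      # f : E -> F, with F the integers, say

# kappa(a) = [a], the class of all b in E with f(b) = f(a).
kappa = {a: frozenset(b for b in E if f(b) == f(a)) for a in E}

# g([a]) = f(a); well-defined since all elements of a class share one image.
g = {cls: f(min(cls)) for cls in set(kappa.values())}

# i: inclusion of the image set f[E] into F.
i = lambda y: y

# f = i g kappa, verified pointwise on E:
assert all(i(g[kappa[a]]) == f(a) for a in E)

print(sorted(sorted(c) for c in set(kappa.values())))
# [[-2, 2], [-1, 1], [0]]
```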


David Joyner 2007-08-06