The area of Mathematics called Logic has two dominating themes: Proving and Computing.
A mathematician often works with axioms, from which they prove theorems. This process
is analyzed (in an abstract way) in Logic. One starts by introducing a formal language, in which
mathematical statements can be written down: with variables, logical symbols for "and", "or", "not",
quantifiers "there is" and "for all", and special symbols for specific elements, operations or
relations that may differ from situation to situation.
The "statements" of this formal language can be interpreted in "real" mathematical structures, and
are then "true" or "false"; for example, the statement "2 is a square" is true in R, but false in Q.
A "formal proof" is a structured collection of formal statements, in such a way that it is
immediately clear what the conclusion is, what the premises are, and how each intermediate statement
follows from previously reached conclusions.
The most important theorem (Gödel's "Completeness Theorem", not to be confused with his
"Incompleteness Theorems") says that a formal statement occurs as the conclusion of a proof precisely
when this statement is true in every possible interpretation.
This point of view sometimes enables us to prove that for certain mathematical statements no proof can exist.
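In standard notation (not introduced in the text above, but conventional), writing Γ ⊢ φ for "φ has a formal proof from the premises Γ" and Γ ⊨ φ for "φ is true in every interpretation in which all statements of Γ are true", the theorem can be stated as:

```latex
\Gamma \vdash \varphi \quad\Longleftrightarrow\quad \Gamma \models \varphi
```

The direction from left to right (soundness) says that proofs never lead from true premises to a false conclusion; the direction from right to left (completeness proper) is the deep part of Gödel's theorem.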
A famous instance is Cantor's so-called "Continuum Hypothesis", which says that every subset
of R is either countable, or in 1-1 correspondence with R itself. It has been proved that this
statement cannot be proved from the usual set of axioms for mathematics (see the paragraph on
Set Theory below). That doesn't make it false, because its negation cannot be proved
either!
During the Bachelor course "Foundations of Mathematics" you have been made familiar with the concepts of formal language, interpretation and formal proof.
The second theme, computing, is about the question: for which sorts of decision problems in
mathematics can one devise an algorithm by which a computer can always give the right answer? To give
an example: suppose a convergent sequence of rational numbers is given to us, together with information
about how fast the sequence converges. We want to determine whether the limit of this sequence is a
real number less than zero, or greater than or equal to zero. The computer is allowed to successively
read the next element of the sequence, and to compute with all elements obtained so far (as well as with the
rate of convergence). There is no program which always, in finite time, gives the right answer!
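A small sketch may make the obstacle concrete. Assume (an assumption not fixed in the text, but a common convention) that the rate of convergence is given as |q_n − limit| ≤ 2⁻ⁿ. Then a finite prefix of the sequence *can* settle the question whenever the limit is nonzero, but when the limit is exactly 0 no finite prefix ever rules out a slightly negative limit, so the search below runs forever; the names and cutoff are purely illustrative:

```python
from fractions import Fraction

def sign_of_limit(seq, max_steps=50):
    """Semi-decision sketch: seq(n) is a rational with
    |seq(n) - limit| <= 2**-n (the assumed rate of convergence).
    Returns '< 0' or '>= 0' once a finite prefix settles it, or
    None if max_steps terms were inconclusive -- illustrating why
    no algorithm can *always* answer in finite time."""
    for n in range(max_steps):
        q = seq(n)
        eps = Fraction(1, 2**n)
        if q < -eps:
            return '< 0'    # limit <= q + eps < 0
        if q > eps:
            return '>= 0'   # limit >= q - eps > 0
    return None  # inconclusive; happens e.g. when the limit is exactly 0

# A sequence converging to -1/3: decided after finitely many terms.
print(sign_of_limit(lambda n: Fraction(-1, 3) + Fraction(1, 3 * 2**n)))  # '< 0'
# A sequence converging to exactly 0: never decided.
print(sign_of_limit(lambda n: Fraction(1, 2**(n + 1))))  # None
```

The cutoff `max_steps` only truncates the demonstration; removing it would make the second call loop forever, which is exactly the point.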
Starting from a formal definition of "algorithm", we get to the notion of "computable function": a
computable function from N to N (or from Z to Z) is a function for which there exists such an
algorithm.
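As a minimal illustration (the example is ours, not the text's): the function sending n to the n-th prime number is computable, because there is an algorithm that, for every input, halts with the right answer:

```python
def nth_prime(n):
    """A computable function from N to N: this very program is an
    algorithm that halts with the correct value for every input n,
    counting primes from nth_prime(0) == 2."""
    count, m = 0, 1
    while count <= n:
        m += 1
        # trial division up to sqrt(m) suffices for primality
        if all(m % d for d in range(2, int(m**0.5) + 1)):
            count += 1
    return m

print([nth_prime(i) for i in range(5)])  # [2, 3, 5, 7, 11]
```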
An important theorem concerns the images of such computable functions. We call a set of
k-tuples of elements of Z algebraic if it is the set of zeroes of a polynomial in k variables
with coefficients from Z. Matiyasevich's theorem says: a subset of Z is the image of a computable
function precisely when it is the projection of an algebraic set of k-tuples (for some k). This
theorem allows one to show that there can be no algorithm for deciding whether a given polynomial with
integer coefficients has integer zeroes.
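A toy example of the correspondence (chosen by us for illustration; the function names and search bounds are ours): the set of perfect squares is the image of the computable function i ↦ i², and it is also the projection onto the first coordinate of the algebraic set of pairs (n, k) with p(n, k) = n − k² = 0:

```python
def square(i):
    # a computable function whose image is the set of perfect squares
    return i * i

def in_projection(n, bound=100):
    # n lies in the projection of the zero set of p(x, k) = x - k**2
    # iff some witness k exists; here we search a finite range only
    return any(n - k * k == 0 for k in range(bound + 1))

image = {square(i) for i in range(11)}                  # squares up to 100
projection = {n for n in range(101) if in_projection(n)}
print(image == projection)  # True
```

For this simple set a witness k is easy to search for; Matiyasevich's theorem says the correspondence holds for *every* image of a computable function, which is why no general algorithm can decide whether a witness (an integer zero of the polynomial) exists.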