The Nordic Online Logic Seminar (NOL Seminar) is organised monthly over Zoom, with expository talks on topics of interest to the broader logic community. The seminar is open to professional and aspiring logicians and logic aficionados worldwide.

The tentative time slot is Monday, 16.00–17.30 (Swedish time). To receive the Zoom ID and password for the meetings, as well as further announcements, please subscribe to the NOL Seminar mailing list.

NOL seminar organisers

Valentin Goranko and Graham Leigh

Øystein Linnebo
(University of Oslo) – Potentialism in the philosophy and foundations of mathematics

Aristotle famously claimed that the only coherent form of infinity is potential, not actual. However many objects there are, it is possible for there to be yet more; but it is impossible for there in fact to be infinitely many objects. Although this view was superseded by Cantor’s transfinite set theory, even Cantor regarded the collection of all sets as “unfinished” or incapable of “being together”. In recent years, there has been a revival of interest in potentialist approaches to the philosophy and foundations of mathematics. The lecture provides a survey of such approaches, covering both technical results and associated philosophical views, as these emerge both in published work and in work in progress.

Michael Rathjen
(University of Leeds) – Completeness: Turing, Schütte, Feferman (and Löb)

Progressions of theories along paths through Kleene’s \(\mathcal{O}\), adding the consistency of the previous theory at every successor step, can deduce every true \(\Pi^0_1\)-statement. This was shown by Turing in his 1938 thesis, where he called these progressions “ordinal logics”. In 1962 Feferman proved the amazing theorem that progressions based on the “uniform reflection principle” can deduce every true arithmetic statement. In contrast to Turing’s, Feferman’s proof is very complicated, involving several cunning applications of self-reference via the recursion theorem. Using Schütte’s method of search trees (or decomposition trees) for \(\omega\)-logic and reflexive induction, however, one can give a rather transparent proof.
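In symbols, the progressions in question can be sketched as follows (a standard formulation; the notation is assumed here, not taken from the abstract):

\[
T_0 = \mathsf{PA}, \qquad
T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad
T_\lambda = \bigcup_{\beta < \lambda} T_\beta \ \text{ for limit } \lambda.
\]

For Feferman’s result, \(\mathrm{Con}(T_\alpha)\) is replaced at successor steps by the uniform reflection principle \(\mathrm{RFN}(T_\alpha)\), i.e. all instances of \(\forall x\,\bigl(\mathrm{Pr}_{T_\alpha}(\ulcorner \varphi(\dot{x}) \urcorner) \rightarrow \varphi(x)\bigr)\).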

Juliette Kennedy
(University of Helsinki) – Reading syntax off semantics

The practice of foundations of mathematics is built around a firm distinction between syntax and semantics. But how stable is this distinction, and do semantically presented mathematical objects, e.g. in the form of a model class, always give rise to a “natural logic”? In this talk I will investigate different scenarios from set and model theory in which an investigation of the notion of an implicit or internal logic or syntax becomes possible. Time permitting, we will also discuss the question of whether logics without a syntax can be considered logics at all.

Thierry Coquand
(Gothenburg University) – Formalization of Mathematics and Dependent Type Theory

The first part will be about the representation of mathematics on a computer. Questions that arise there are naturally reminiscent of issues that arise when teaching formal proofs in a basic logic course, e.g. how to deal with free and bound variables and with instantiation rules. As discussed in a 1962 paper of Tarski, these issues are “clearly experienced both in teaching an elementary course in mathematical logic and in formalizing the syntax of predicate logic for some theoretical purposes.” I will present two quite different approaches to this problem: one inspired by Tarski’s paper (N. Megill, system Metamath) and one using dependent type theory (N.G. de Bruijn).

The second part will then try to explain how notations introduced by dependent type theory suggest new insights for old questions coming from Principia Mathematica (extensionality, reducibility axiom) through the notion of universe, introduced by Grothendieck for representing category theory in set theory, and introduced in dependent type theory by P. Martin-Löf.

Johan van Benthem
(University of Amsterdam, Stanford University, and Tsinghua University) – Interleaving Logic and Counting

Reasoning with generalized quantifiers in natural language combines logical and arithmetical features, transcending divides between qualitative and quantitative. This practice blends with inference patterns in ‘grassroots mathematics’ such as pigeon-hole principles. Our topic is this cooperation of logic and counting on a par, studied with small systems and gradually moving upward. We start with monadic first-order logic with counting. We provide normal forms that allow for axiomatization, determine which arithmetical notions are definable, and conversely, discuss which logical notions and reasoning principles can be defined out of arithmetical ones. Next we study a series of strengthenings in the same style, including second-order versions, systems with multiple counting, and a new modal logic with counting. As a complement to our fragment approach, we also discuss another way of controlling complexity: changing the semantics of counting to reason about ‘mass’ or other aggregating notions than cardinalities. Finally, we return to the basic reasoning practices that lie embedded in natural language, confronting our formal systems with linguistic quantifier vocabulary, monotonicity reasoning, and procedural semantics via semantic automata. We conclude with some pointers to further entanglements of logic and counting in the metamathematics of formal systems, the philosophy of logic, and cognitive psychology. (Joint work with Thomas Icard)

Paper available at: https://eprints.illc.uva.nl/id/eprint/1813/1/Logic.Counting.pdf

Lars Birkedal
(Aarhus University) – Iris: A Higher-Order Concurrent Separation Logic Framework

I will introduce some of our research on Iris, a higher-order concurrent separation logic framework, implemented and verified in the Coq proof assistant, which can be used for mathematical reasoning about safety and correctness of concurrent higher-order imperative programs. Iris has been used for many different applications; see iris-project.org for a list of research papers. However, in this talk I will focus on the Iris base logic (and its semantics) and sketch how one may define useful program logics on top of the base logic. The base logic is a higher-order intuitionistic modal logic, which, in particular, supports the definition of recursive predicates and whose type of propositions is itself recursively defined.

Sara L. Uckelman
(Durham University) – John Eliot’s Logick Primer: A bilingual English-Wôpanâak logic textbook

In 1672 John Eliot, English Puritan educator and missionary, published *The Logick Primer: Some Logical Notions to initiate the INDIANS in the knowledge of the Rule of Reason; and to know how to make use thereof* [1]. This roughly 80-page pamphlet focuses on introducing basic syllogistic vocabulary and reasoning so that syllogisms can be created from texts in the Psalms, the gospels, and other New Testament books. The use of logic for proselytizing purposes is not itself distinctive: what is distinctive about Eliot’s book is that it is bilingual, written in both English and Wôpanâak (Massachusett), an Algonquian language spoken in eastern coastal and southeastern Massachusetts. It is one of the earliest bilingual logic textbooks, the only logic textbook that I know of in an indigenous American language, and one of the earliest printed attestations of the Massachusett language.

In this talk, I will:

- Introduce John Eliot and the linguistic context he was working in.
- Introduce the contents of the *Logick Primer*—vocabulary, inference patterns, and applications.
- Discuss notions of “Puritan” logic that inform this primer.
- Talk about the importance of his work in documenting and expanding the Massachusett language and the problems that accompany his colonial approach to this work.

References

[1] J.[ohn] E.[liot]. *The Logick Primer: Some Logical Notions to initiate the INDIANS in the knowledge of the Rule of Reason; and to know how to make use thereof*. Cambridge, MA: Printed by M.[armaduke] J.[ohnson], 1672.

Erich Grädel
(RWTH Aachen University) – Semiring semantics for logical statements with applications to the strategy analysis of games

Semiring semantics of logical formulae generalises the classical Boolean semantics by permitting multiple truth values from certain semirings. In classical Boolean semantics, a model of a formula assigns to each (instantiated) literal a Boolean value. K-interpretations, for a semiring K, generalise this by assigning to each such literal a value from K. We then interpret 0 as false and all other semiring values as nuances of true, which provide additional information, depending on the semiring: for example, the Boolean semiring over {0,1} corresponds to classical semantics, the Viterbi semiring can model confidence scores, the tropical semiring is used for cost analysis, and min-max semirings (A, max, min, a, b) for a totally ordered set (A, <) can model different access levels. Most importantly, semirings of polynomials, such as N[X], allow us to track certain literals by mapping them to different indeterminates. The overall value of the formula is then a polynomial that describes precisely which combinations of literals prove the truth of the formula.
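As a minimal illustration of the polynomial-semiring idea (my own sketch, not code from the talk; the literal names are invented), one can evaluate a positive formula under a K-interpretation into N[X], with disjunction as polynomial addition and conjunction as polynomial multiplication:

```python
from collections import Counter
from itertools import product

# A polynomial in N[X] as a Counter: monomial (a sorted tuple of
# indeterminate names, repetition encoding exponents) -> coefficient in N.
def poly(*indets):
    """The polynomial consisting of the single monomial over `indets`."""
    return Counter({tuple(sorted(indets)): 1})

ZERO, ONE = Counter(), poly()      # 0 interprets "false", 1 interprets "true"

def padd(p, q):                    # disjunction: sum of polynomials
    return p + q

def pmul(p, q):                    # conjunction: product of polynomials
    r = Counter()
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        r[tuple(sorted(m1 + m2))] += c1 * c2
    return r

# A K-interpretation mapping each tracked literal to its own indeterminate:
pi = {"E(a,b)": poly("x"), "E(b,c)": poly("y"), "E(a,c)": poly("z")}

# Value of (E(a,b) AND E(b,c)) OR E(a,c) under pi:
value = padd(pmul(pi["E(a,b)"], pi["E(b,c)"]), pi["E(a,c)"])
# value is the polynomial x*y + z: the formula is proved either by the
# pair E(a,b), E(b,c) together, or by E(a,c) alone.
```

The resulting polynomial records exactly which combinations of literals establish the formula, and with what multiplicity, which is the tracking role of N[X] described above.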

This can also be used for strategy analysis in games. Evaluating formulae that define winning regions in a given game in an appropriate semiring of polynomials provides not only the Boolean information on who wins, but also tells us how they win and which strategies they might use. For this approach, the case of Büchi games is of special interest, not only due to their practical importance, but also because it is the simplest case where the logical definition of the winning region involves a genuine alternation of a greatest and a least fixed point. We show that, in a precise sense, semiring semantics provide information about all absorption-dominant strategies – strategies that win with minimal effort, and we discuss how these relate to positional and the more general persistent strategies. This information enables further applications such as game synthesis or determining minimal modifications to the game needed to change its outcome.

Anupam Das
(University of Birmingham) – On the proof theoretic strength of cyclic reasoning

Cyclic (or circular) proofs are now a common technique for demonstrating metalogical properties of systems incorporating (co)induction, including modal logics, predicate logics, type systems and algebras. Inspired by automata theory, cyclic proofs encode a form of self-dependency of which induction/recursion comprise special cases. An overarching question of the area, the so-called ‘Brotherston-Simpson conjecture’, asks to what extent the converse holds.

In this talk I will discuss a line of work that attempts to understand the expressivity of circular reasoning via forms of proof theoretic strength. Namely, I address predicate logic in the guise of first-order arithmetic, and type systems in the guise of higher-order primitive recursion, and establish a recurring theme: circular reasoning buys precisely one level of ‘abstraction’ over inductive reasoning.

This talk will be based on the following works:

Dag Normann
(Oslo) – An alternative perspective on Reverse Mathematics

In his address to the International Congress of Mathematicians in Vancouver, 1974, Harvey Friedman launched a program whose aim was to find the minimal set of axioms needed to prove theorems of ordinary mathematics. More often than not, it turned out that the axioms would in turn be provable from the theorems, and the subject was named Reverse Mathematics. In this talk we will survey some of the philosophy behind, and results of, early reverse mathematics, based on the formalisation of mathematics within second-order number theory.

In 2005, Ulrich Kohlenbach introduced higher-order reverse mathematics, and we give a brief explanation of the what and why of Kohlenbach’s approach. In an ongoing project with Sam Sanders we have studied the strength of classical theorems of late 19th/early 20th century mathematics, partly within Kohlenbach’s formal typed theory and partly in terms of their constructive content, in a generalised sense. In the final part of the talk I will give some examples of results from this project, mainly from the perspective of higher-order computability theory. No prior knowledge of higher-order computability theory is needed.

Wilfrid Hodges
(Fellow of the British Academy) – How the teenage Avicenna planned out several new logics

Almost exactly a thousand years ago a teenager known today as Avicenna lived in what is now Uzbekistan. He made a resolution to teach himself Aristotelian logic, armed with an Arabic translation of Aristotle and a century-old Arabic textbook of logic. A couple of years later, around his eighteenth birthday, he wrote a brief report of what he had learned. Six months ago I started to examine this report - I suspect I am the first logician to do that. It contains many surprising things. Besides introducing some new ideas that readers of Avicenna know from his later works, it also identifies some specific points of modal logic where Avicenna was sure that Aristotle had made a mistake. People had criticised Aristotle’s logic before, but not at these points. At first Avicenna had no clear understanding of how to do modal logic, and it took him another thirty years to justify all the criticisms of Aristotle in his report. But meanwhile he discovered for himself how to defend a new logic by building new foundations. I think the logic itself is interesting, but the talk will concentrate on another aspect. These recent discoveries mean that Avicenna is the earliest known logician who creates new logics and tells us what he is doing, and why, at each stage along the way.

Jouko Väänänen
(Helsinki) – Dependence logic: Some recent developments

In the traditional, so-called Tarskian truth definition, the semantics of first-order logic is defined with respect to an assignment of values to the free variables. A richer family of semantic concepts can be modelled if semantics is defined with respect to a set (a “team”) of such assignments. This is called team semantics. Examples of semantic concepts available in team semantics but not in traditional Tarskian semantics are the concepts of dependence and independence. Dependence logic is an extension of first-order logic based on team semantics. It has emerged that teams appear naturally in several areas of the sciences and humanities, which has made it possible to apply dependence logic and its variants to these areas. In my talk I will give a quick introduction to the basic ideas of team semantics and dependence logic, as well as an overview of some new developments, such as quantitative analysis of team properties, a framework for a multiverse approach to set theory, and probabilistic independence logic inspired by the foundations of quantum mechanics.
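As a minimal illustration of team semantics (my own sketch; the variables and values are invented, not taken from the talk), the dependence atom =(x, y) — “within the team, the value of y is functionally determined by the value of x” — can be checked over a team of assignments:

```python
def satisfies_dependence(team, xs, y):
    """Check the dependence atom =(xs, y) over a team: in every pair of
    assignments agreeing on all variables in xs, the values of y agree."""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if seen.setdefault(key, s[y]) != s[y]:
            return False
    return True

# A team is a set of assignments; each assignment maps variables to values.
team = [
    {"x": 0, "y": 0, "z": 1},
    {"x": 0, "y": 0, "z": 2},
    {"x": 1, "y": 1, "z": 1},
]

print(satisfies_dependence(team, ["x"], "y"))  # True: y is determined by x
print(satisfies_dependence(team, ["x"], "z"))  # False: x=0 allows both z=1 and z=2
```

Note that the atom is a property of the team as a whole, not of any single assignment — which is exactly what a single-assignment Tarskian semantics cannot express.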

Dag Prawitz
(Stockholm) – Validity of inference and argument

An account of inferences should take into account not only inferences from established premisses but also inferences made under assumptions. This makes it necessary to consider arguments, chains of inferences in which assumptions and variables may become bound. An argument is valid when all its inferences are valid, and it then amounts to a proof in case it has no unbound assumptions or variables. The validity of an inference – not to be confused with the conclusion being a logical consequence of the premisses – seems in turn best explained in terms of proofs. This means that the concepts of valid inference and valid argument depend on each other and cannot be defined independently but have to be described by principles that state how they are related. A number of such principles will be proposed. It is conjectured that inferences that can be expressed in the language of first-order intuitionistic predicate logic and are implied to be valid by these principles are all provable in that logic.