1. Thinking LaTeX, thinking with LaTeX: the history of writing, concepts, and contributions of TeX and LaTeX (Éric Guichard, 20 min + 10 min discussion).
2. LaTeX: creating a first document in LaTeX; a presentation illustrated with simple examples (Éric Guichard, 15 to 20 min + 10 min discussion); a minimal example of such a document is sketched just after this list.
3. Points of typography: general principles, hyphenation, fonts, languages (Jean-Michel Hufflen, 15 to 20 min + 10 min discussion).
4. LaTeX in a literary setting: standards, reading comfort, design, dialogue with publishers, communication with other editorial systems (Éric Guichard, 15 to 20 min + 10 min discussion).
5. Bibliographies: bibliography processors, basic styles, examples (Jean-Michel Hufflen, 15 to 20 min + 10 min discussion).
6. Additional topics: handling images, creative uses of LaTeX (Éric Guichard, 15 to 20 min + 10 min discussion).
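A minimal sketch of the kind of first document item 2 refers to (illustrative only, not the tutorial’s actual material):

  \documentclass{article}
  \title{A First Document}
  \author{Éric Guichard}
  \date{2022}
  \begin{document}
  \maketitle
  \section{Introduction}
  Hello, \LaTeX! A first paragraph, with an inline formula: $E = mc^2$.
  \end{document}

Compiling this with pdflatex (or lualatex) produces a one-page PDF with a title block, a numbered section, and typeset mathematics.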
On the first day of the Conference, Sue and Cheryl will conduct a webinar with their classic introduction to LaTeX, in English.
They will start with the basic principles in the morning and then proceed to more detailed topics in the afternoon, covering indexes, tables, figures, bibliographies, entering mathematics, and many other topics.
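As a hedged taste of a few of those topics (a toy example, not the presenters’ material), a single document can carry an equation, a table, and index entries:

  \documentclass{article}
  \usepackage{makeidx}   % index support; figures and bibliographies work similarly
  \makeindex
  \begin{document}
  Euler's identity\index{Euler's identity} is
  \begin{equation}
    e^{i\pi} + 1 = 0.
  \end{equation}

  \begin{table}[ht]
    \centering
    \begin{tabular}{lr}
      Topic       & Session   \\
      Mathematics & morning   \\
      Indexes     & afternoon \\
    \end{tabular}
    \caption{A small table.}
  \end{table}

  \printindex
  \end{document}

Running makeindex between LaTeX runs turns the \index entries into the printed index.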
Alexander is a professor in the Escuela de Matemática of the Instituto Tecnológico de Costa Rica, content editor of the Revista Digital Matemática Educación e Internet, and a student of Computer Engineering; together with Walter Mora he is an author of the book Edición de Textos Científicos LaTeX. A workshop on LaTeX, in Spanish.
Opening welcome from Boris Veytsman, the President of TUG.
Tectonic is a software project built around an alternative TeX engine forked from XƎTeX. It was created to explore the answers to two questions. The first question relates to documents: in a world of 21st-century technologies, where interactive displays, computation, and internet connectivity are generally cheap and ubiquitous, what new forms of technical document have become possible? The second question relates to tools: how can we use those same technologies to do a better job of empowering people to create excellent technical documents? The answers are, of course, intertwined: without a system of great tools, it’s hard (or perhaps impossible?) to create great documents.
The premises of the Tectonic project are that the world needs and deserves a “21st-century” document authoring system, that such a system should have TeX at its heart, and that in order to create a successful system, parts of the classic TeX experience will need to be rethought or jettisoned completely.
This is why Tectonic forks XƎTeX and is branded independently: while it aspires to maintain compatibility with classic TeX workflows as far as can be managed, in a certain sense the whole point of the effort is to break compatibility and ignore tradition, to experiment with new ideas that can’t be tried in mainline TeX. Thus far, these “new ideas” have focused on experience design, seeking to deliver a system that is convenient, empowering, and even delightful for users and developers. Tectonic is therefore compiled using standard Rust tools, installs as a single executable file, and downloads support files from a prebuilt TeX Live distribution on demand.
In the past year, long-threatened work on native HTML output has finally started landing, including a possibly novel Unicode math rendering scheme based on font subsetting. The goal for upcoming work is to flesh out this HTML support so that Tectonic can create the world’s best web-native technical documents, and to use that support to document the Tectonic system itself.
This keynote presentation will address how recent trends to align technical documentation practices with “developer-friendly” workflows may be detrimental to documentation authors and their users. A proposed solution lies in the recent past of technical documentation as a discipline, where tools and ideas rooted in structured authoring and markup, reuse, and personalization can still provide solutions to present (and future) needs related to technical content.
Since it was first released in 2008, siunitx has become established as the major package for typesetting physical quantities in LaTeX. Following up on my TUG 2018 talk, I will look at how the update to version 3 has gone now that it has been released. I’ll briefly look at the background, then consider some of the user and developer efforts that have made the launch a success.
In this talk, Paulo recollects the untold story of two friends writing a silly package just for the fun of it. The story, however, takes a turn when the TeX community decides to embrace silliness. Gather around to learn about TeX, friendship, community, silly walks, and the air speed velocity of an unladen swallow.
Playing chess can range from a casual pastime to a highly competitive event. Several local organizations offer chess as enrichment programs in K–12 schools, often having their own workbooks to supplement their instruction. One drawback is that these workbooks are often created using screen captures of online sources, hence resulting in low-quality outputs when used for print. This exploration tours a few packages used for typesetting diagrams for chess problems and puzzles and presents comparisons of one enrichment program’s original workbook to equivalent pages produced using LaTeX.
In this talk, Paulo recollects 2021 as a challenging year for the Island of TeX: roadmap changes, lack of resources, server limitations. Yet, resilience, persistence and a bit of good humour made the island even stronger, with new joiners, community support, bold plans and an even brighter future for the TeX ecosystem. And all just in time for celebrating 10 years of arara, our beloved bird!
This presentation touches on:
It will present examples of converting Markdown to LaTeX-styled PDF.
It will also announce two initiatives: a TeX Live book publishing scheme; and a website where self-publishers can find TeX Live installation instructions plus book publishing how-tos, tutorials, and resources.
Lloyd is a self-publisher with experience in magazine publishing, corporate communication, academia, and software development.
Matching cancer patients with clinical trials is a complex process. One of the outputs of that process is the production of a PDF report containing relevant information about a set of trials. In this paper we present strategies, challenges, and conclusions regarding our use of LaTeX deployed in AWS to generate PDF reports.
Who wins? The base or the superstructure? I’m not a Marxist per se, but I’ve lived this struggle for some time as a writer and publisher. In this keynote presentation, I describe my efforts to change or adapt the democratized tools of production to produce new forms of writing, which ultimately led to an ongoing battle with the dominant cultures of production in the world of publishing. I’ll narrate two case studies. One focuses on the writing and production of an innovative, if not disruptive, textbook in the ultra-conservative textbook industry. The second tells the ongoing story of an interloping publishing company (Parlor Press) that reveals the central challenge of distribution for both writers and publishers, from typesetting (print) to transformation (digital). LaTeX developers and users, take note! The return of the nonbreaking space and soft return is nigh!
If you take a quick glance at an airport and its signage, you’ll see many different situations where text is used to enhance and streamline processes for both pilot and ground crew alike. Thus, this exploration will take a closer look at such variations along the taxiway and apron at major airports, also discussing how the onset of autonomous aircraft can factor into it.
Looking at constitutions from different countries (France, Canada, the United States, Mexico, and Argentina), it is clear that the fonts used range from cursive to typewriter-like. The fonts and format of a country’s constitution reflect the time period in which it was written and the influence of other countries. Each country has developed different iterations so that its constitution best represents its values.
One of Knuth’s important insights was the concept of literate programming, where the prose is as important as the code. Now many scientists in different fields are having similar insights about their work. While published papers have always been recognized as works of literature, we are now starting to understand this with respect to lab notes, the lowly reports of our daily activity. This explains the new interest in notebook interfaces: from commercial programs like Matlab and Mathematica to free systems like wxMaxima and Jupyter.
In this talk I discuss an approach that uses LaTeX and knitr for creating lab notes. I compare it with the available notebook interfaces and with solutions based on Markdown.
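As a minimal hedged sketch of that approach (hypothetical file and chunk names), a knitr lab-note source interleaves LaTeX prose with executable chunks:

  % notes.Rnw -- processed by knitr, then compiled with LaTeX
  \documentclass{article}
  \begin{document}
  \section*{Lab notes, 22 July}

  Today's measurements are summarized below.

  <<summary, echo=TRUE>>=
  x <- c(4.1, 3.9, 4.3)   # hypothetical measurements
  mean(x)
  @

  The mean value is \Sexpr{mean(x)}, computed when the notes are woven.
  \end{document}

knitr runs the chunk, inserts its output and the inline \Sexpr value into the generated .tex file, and LaTeX then typesets code, results, and prose together.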
This talk reports on changes within the TeX Live project and distribution over the last year, as well as looking at further development directions and challenges we are facing.
I will discuss the recent changes to the bidi package that allow users to produce right-to-left beamer documents, describing the challenges and what still needs to be done. I will also discuss other recent changes to the bidi package.
TeX (and therefore LaTeX) has enjoyed great popularity over the years as an extremely flexible, versatile, and robust text typesetting system. The flexibility comes not least from the ability to modify the behavior of TeX through programming and from Knuth’s foresight in recognizing the individual elements on the page as small, rectangular building blocks (boxes) that can be combined into larger units and also manipulated.
The development of LuaTeX made modern applications possible for the first time in the long history of TeX via some extensions:
I use these extensions for the speedata Publisher program, which is mainly made for the fully automatic creation of product catalogs and data sheets from XML.
Despite all the achievements of TeX and LuaTeX, there are still serious disadvantages:
The restrictions mentioned have disturbed me considerably. Regarding output quality, there are hardly any comparable alternatives to TeX, especially in the open-source area. Therefore, there seemed to be no alternative left but to re-implement TeX in a “modern” programming language. Some years ago there was already such an attempt (NTS), but it failed. After long pondering over how to meet my requirements for a text typesetting system for catalogs and data sheets, I came to the conclusion that I would “only” take over the algorithms and the logic of TeX, but not the input language.
Boxesandglue is a library written in the Go programming language. The name is based on the model of TeX, with stretchable spaces (glue) between the rectangular units (boxes). The development of boxesandglue is quite advanced and includes, among other things:
Besides these basic parts, there is yet another library that builds on boxesandglue. It offers:
The application programming interface (API) is not yet fixed. The development of boxesandglue is being carried out in parallel with the further development of the speedata Publisher, and the requirements there largely determine the programming interface of boxesandglue. Since it is a library, there is no fixed input language as with TeX. In this respect, too, boxesandglue is not yet suitable for an (end) user.
This paper describes the development and usage of the luatruthtable package in LaTeX. It is developed to generate truth tables of boolean values in a LaTeX document. The package provides an easy way of generating truth tables in LaTeX; no special environment is needed for their generation. It is written in Lua, and the TeX file is to be compiled with the LuaLaTeX engine.
The Lua programming language is a scripting language which can be embedded across platforms. With LuaTeX and the luacode package, it is possible to use Lua in LaTeX. (La)TeX has some scope for programming, but with the internals of TeX there are several limitations, especially for performing calculations. Packages like pgf and xparse in LaTeX provide some programming capabilities inside LaTeX documents, but such packages are not meant to provide the complete programming structure that general programming languages, like Lua, provide.
Generating truth tables with such packages in LaTeX becomes complex, and probably cannot be done more easily in LaTeX without using Lua. The programming capabilities of Lua are used effectively in the development of the luatruthtable package. The xkeyval package is used in its development, in addition to the luacode package. The time needed to generate truth tables with the package and to compile a TeX document with LuaLaTeX is not an issue.
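As a rough illustration of the underlying technique only (not the luatruthtable API itself, which is documented in the package manual), a Lua chunk can compute boolean values and print table rows back to TeX:

  \documentclass{article}
  \usepackage{luacode}  % compile with LuaLaTeX
  \begin{document}
  \begin{luacode*}
  -- Emit a complete tabular holding the truth table of "p and q".
  tex.print("\\begin{tabular}{ccc}")
  tex.print("$p$ & $q$ & $p \\wedge q$ \\\\ \\hline")
  for _, p in ipairs({true, false}) do
    for _, q in ipairs({true, false}) do
      tex.print(string.format("%s & %s & %s \\\\",
        tostring(p), tostring(q), tostring(p and q)))
    end
  end
  tex.print("\\end{tabular}")
  \end{luacode*}
  \end{document}

The point is that the looping and the boolean evaluation happen in Lua, while TeX only typesets the rows it receives.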
TeX is great for producing beautiful documents, but not the easiest to read and write. At this workshop, you will learn about Markdown and how you can use it to produce different types of beautiful documents from beautiful source texts that don’t distract you from your writing.
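One small hedged sketch of what this looks like in practice, using the markdown package for LaTeX (assuming LuaLaTeX, which lets the package run without shell escape):

  \documentclass{article}
  \usepackage{markdown}   % compile with LuaLaTeX
  \begin{document}
  \begin{markdown}
  Markdown keeps the *source* readable:

  * short, plain syntax,
  * **bold** and *emphasis* without macros,
  * rendered through LaTeX, so the output is still beautiful.
  \end{markdown}
  \end{document}

A standalone .md file converted with a tool such as pandoc is another common route to the same end.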
UK TUG was established in the early 1990s. I’ve been a member of UK TUG almost from its start through to its dissolution earlier this year. Much has changed both in the TeX community and in the wider world over that time.
UK TUG was a significant part of the TeX community. Besides myself (Jonathan Fine), former members of UK TUG include Peter Abbott, Kaveh Bazargan, David Carlisle, Paulo Cereda, Malcolm Clark, David Crossland, Robin Fairbairns, Alan Jeffrey, Sebastian Rahtz, Arthur Rosendahl, Chris Rowley, Philip Taylor and Joseph Wright.
This list includes two past Presidents of TUG, the current Vice President and a past Secretary. Ten people on the list served on the TUG Board, for a total of over 30 years.
Five are or were members of the LaTeX3 project. One was the founder and for 8 years the editor of TeX Live, and another the technical coordinator of the NTS project. One is a Lead Program Manager for Google Fonts.
This talk provides a personal history from \begin{uktug} to \end{uktug}, with a short ‘\aftergroup’ appendix.
Real-world bricks and jigsaw puzzles are a fun pastime for many people. The tikzbricks and jigsaw packages bring them to the LaTeX world. This short talk will give an overview of both packages and show examples of how they can be used.
In this talk I present a selection of improvements we made in recent LaTeX releases. The changes are not discussed in depth; the goal is to give some interesting examples and make you curious enough to explore the documentation and learn more.
In 2015, I talked about my work exploring Unicode-land, particularly how to carry out case changing in XƎTeX and LuaTeX properly. Since then, expl3 has become a part of the LaTeX kernel, and LaTeX has adopted UTF-8 as the standard input encoding. The time has therefore become ripe to “open up” Unicode-land to allow \MakeUppercase and \MakeLowercase to roam free.
In this talk, I’ll remind us of what Unicode tells us about case changing, where the challenges are and how we’ve approached them in expl3. I’ll then show how this has combined with some TeX features to enable us to make the switch, incorporate ideas from the textcase package, and upgrade \MakeUppercase and \MakeLowercase for the 21st century.
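A tiny hedged example of what Unicode-aware case changing means in practice (expected output noted in comments, following the standard Unicode mappings):

  \documentclass{article}
  \begin{document}
  % The German eszett uppercases to "SS"; accented letters keep their accents.
  \MakeUppercase{Straße in Köln}   % expected: STRASSE IN KÖLN

  \MakeLowercase{ÉTÉ À PARIS}      % expected: été à paris
  \end{document}

With current LaTeX the source is UTF-8 by default, so no inputenc declaration is needed.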
Klaus Höppner, Secretary of the TUG Board of Directors, will present a report on the Board’s actions over the last year, and the general meeting will be under way.
In this talk we explore the history of LaTeX and PDFs in scientific communication, the roles these tools play, and how those roles may evolve over time. We discuss the rise of Markdown for web publishing, its limitations, and opportunities. We also touch on some recent developments by Mathpix to facilitate document interoperability and accessibility for researchers and the broader STEM community.
Having Vietnamese as my first language and English as my dominant language has inspired exploration of the history and applications of the former. Considering how Vietnamese and English both use the Latin alphabet, this presentation will explore the similarities and differences between the two, using a collection of instances in which Vietnamese text is displayed in our world.
Initially, TeX was a single engine and a single format. However, over the past 40 years, the number of engines and formats has significantly grown, meaning that there are multiple ways of implementing similar solutions depending on the TeX variant used. In this talk, I’ll introduce and compare each engine and format, focusing on both history and practical tips.
I will discuss how mathematics is typeset in Persian and what is required. I will also talk about how the xepersian package implements these features and show some examples. I will then discuss recent changes to the xepersian package allowing users to change between English and Persian digits mid-math mode.
Boris Veytsman, the president of TUG, will be interviewed live by Paulo Ney de Souza, and you will be able to join the conversation.
Some basic requirements for accessibility of tabular material are:
Header cells themselves may have other row or column headers; e.g., as a common header for a block of rows or columns.
Tagged PDF has the tagging and mechanisms to provide such attributes. When the PDF is translated into HTML (using the ngPDF online converter, say) this information is recorded in the web-pages, to be available to Assistive Technologies. In this talk we show several examples of tables specified using various packages, as in the LaTeX Companion, both in PDF and HTML web pages. A novel coding idea that allows this to be achieved will be presented.
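As a minimal sketch (ordinary LaTeX, not the tagged markup from the talk), consider a table in which a header spans a block of columns, so that each data cell needs to be associated with both its column header and the block header above it:

  \documentclass{article}
  \begin{document}
  \begin{tabular}{lrrrr}
         & \multicolumn{2}{c}{2021} & \multicolumn{2}{c}{2022} \\ % block headers
  Region & Units & Revenue & Units & Revenue \\                   % column headers
  North  & 120   & 4300    & 140   & 5100 \\
  South  &  80   & 2900    &  95   & 3400 \\
  \end{tabular}
  \end{document}

The visual layout carries this association implicitly; a tagged PDF has to record it explicitly so that Assistive Technologies can announce the right headers for each cell.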
Appendix D (Dirty Tricks) of The TeXBook describes algorithms for multi-column typesetting and paragraph footnotes, among much more. The described algorithms are used in various TeX packages such as footmisc, fnpara, manyfoot, and many others.
When the multicol package is used, things get more complicated. Another level of complication arises when you want to mix these with both right-to-left and left-to-right typesetting.
The bidi package provides both right-to-left and left-to-right multi-columns and paragraph footnotes.
This talk will describe my own experience learning how other packages implement multi-columns and paragraph footnotes, and also the approach I took in the bidi package for typesetting right-to-left and left-to-right multi-columns and paragraph footnotes.
Due to the permissive nature of LaTeX, authors who prepare their manuscripts in LaTeX for publishing their research articles in academic journals often, knowingly or unknowingly, indulge in non-standard markup practices that cause avoidable delays and hardships in processing their submissions. A simple pre-submission check, followed by a request to fix as much as possible at their end before submission (with the benefit of earlier publication), can reduce turnaround time (TAT) considerably.
In the talk, I introduce vakthesis, a bundle of LaTeX classes for typesetting doctoral theses according to official requirements in Ukraine, and discuss the current status of the project and future development plans. Some LaTeX programming tricks that I have studied are also considered.
We report on sTeX3, a complete redesign and reimplementation (using LaTeX3), from the ground up, of the sTeX ecosystem for semantic markup of mathematical documents. Specifically, we present:
Generally, sTeX3 documents can be made not only interactive (by embedding semantic services), but also “active” in that they actively adapt to reader preferences and pre-knowledge (if known).
We present some tools that allow us to parse all or part of (La)TeX source files and process suitable information. For example, we can use them to extract some metadata of a document. These tools have been developed in the Scheme functional programming language. Using them only requires basic knowledge of functional programming and Scheme. Moreover, these tools could easily be implemented using a strongly typed functional programming language, such as Standard ML or Haskell.
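To make “metadata” concrete (an illustrative sketch, not the authors’ test files), these are the kinds of fields such a parser can pull out of a source document:

  \documentclass{article}
  % Fields like the following are natural targets for metadata extraction.
  \title{Parsing (La)TeX Sources with Scheme}
  \author{A. Author \and B. Author}
  \date{July 2022}
  \begin{document}
  \maketitle
  \begin{abstract}
  A short abstract, also harvestable as metadata.
  \end{abstract}
  \end{document}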
I will present an ongoing project with Hans Hagen with the challenging goal of improving the quality of mathematical typesetting, and of making both the input and output of math cleaner and more structured. Among the many enhancements, we mention here the introduction of new atom classes, which has given better control over the details, and the unboxing of fenced material, which, together with improved line-breaking and more flexible multiline display math, has created a coherent way to produce formulas that split over lines.
In this talk I recount some practical experiences with spot colors I gained while working on the third edition of The LaTeX Companion.
I describe what spot colors are, how to use them for text and (TikZ) graphics, how to mix them properly, and some of the pitfalls we found and how we worked around them.
LaTeX 2ε introduced class and package settings in the optional arguments to \documentclass and \usepackage. To date, these were designed to handle simple keyword-based options. Over time, packages have extended the mechanism to accept key-value (keyval) arguments. Recent work by the team brings keyval handling into the kernel, with the added benefit of allowing repeated package loading without clashes. Here, I will look briefly at the background, then explore how to use the new mechanism in package development.
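As a minimal hedged sketch of the idea (the exact property names should be checked against the current LaTeX News; the package name here is hypothetical), a package can declare and process key-value options roughly like this:

  % mydemo.sty -- hypothetical package using the kernel's key-value options
  \ProvidesPackage{mydemo}[2022/07/22 v0.1 keyval options demo]
  \DeclareKeys{
    scale.store = \mydemo@scale ,
    mode.store  = \mydemo@mode
  }
  \SetKeys{scale = 1.0, mode = final}  % defaults, applied before user options
  \ProcessKeyOptions
  % A document can then load it as: \usepackage[scale=0.8, mode=draft]{mydemo}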
yex is an implementation of the core TeX system in pure Python. In this talk I shall give an overview of its development, the challenges faced, and possible future directions for the project.
We present a machine translation system, the PolyMath Translator, for LaTeX documents containing mathematical text. The system combines a LaTeX parser, tokenization of math and labels, a deep learning Transformer model trained on mathematical and other text, and the Google Translate API with a custom glossary. Ablation testing shows that math tokenization and the Transformer model each significantly improve translation quality, while Google Translate is used as a backup when the Transformer does not have confidence in its translation. For LaTeX parsing, we have used the pandoc document converter, while our latest development version instead uses the TexSoup package. We will describe the system, show examples, and discuss future directions.
The Chafee Amendment to US copyright law “allows authorized entities to reproduce or distribute copies or phonorecords of previously published literary or musical works in accessible formats exclusively for use by print-disabled persons.”
This wonderful legal exemption to copyright nicely illustrates the relation between access (here to print works) and accessibility (here production of phonorecords, i.e., audiobooks). Here’s another illustration.
Jonathan Godfrey, a blind Senior Lecturer in Statistics in New Zealand, wrote to the BlindMath list: “I used to use TeX4ht as my main tool for getting HTML from LaTeX source. This was, and probably still is, an excellent tool. How much traction does it get though? Not much. Why? I don’t know, but my current theory is that tools that aren’t right under people’s noses or automatically applied in the background just don’t get as much traction.” (Reference)
Jonathan Godfrey also wrote to the BlindMath list: “Something has to change in the very way people use LaTeX if we are ever to get truly accessible pdf documents. I’ve laboured the point that we need access to information much more than we need access to a specific file format, and I’ll keep doing so. [...] I do think a fundamental shift in thinking about how we get access to information is required across most STEM disciplines.” (Reference)
This talk looks at the experience of visually impaired STEM students and professionals, from both the point of view of easy access to suitable inputs and tools and also the generation of accessible outputs, as pioneered and enabled by the Chafee Amendment.
TeX and LaTeX have been used for offline documentation of software packages and are supported by several auto-documenting systems including doxygen, sphinx and f2py. Often, documentation markup languages like ReST or Markdown will support a subset of TeX commands for various output formats (e.g., MathJax/KaTeX for HTML).
With the rise of virtual machines for continuous integration, along with a renewed focus on documenting code, the time taken for compiling offline documentation (typically PDF files) from TeX sources has become a bottleneck, and some projects (e.g., SciPy) have discontinued the generation of PDF files altogether. Alternatives have been suggested, e.g., offline HTML, web PDFs, etc., and will be covered briefly.
In this talk, the main challenges and their mitigation strategies will be discussed, including Sphinx LaTeX generation, styling, methods to reduce documentation size, and automated file splitting, with the aim of preventing more projects from moving away from TeX-based PDFs. The focus will be on the NumPy TeX CI documentation workflow, but the discussion will be generally applicable to most Python projects.
John Lees-Miller, the CTO of Overleaf, will be interviewed live by Paulo Ney de Souza, and you will be able to join the conversation.
Computer History Museum senior curator Dag Spicer takes us on a walk through computing history, from the Antikythera Mechanism to the first Google server.
Dag Spicer is an electrical engineer and historian of science and technology. He began working at the Museum in 1996 and has built the Museum’s permanent collection into the largest archive of computers, software, media, oral histories, and ephemera in the world. Dag has given hundreds of interviews on computer history and related topics to major news outlets such as The Economist, The New York Times, NPR, CBS, VOA, and has appeared on numerous television programs including Mysteries at the Museum and CBS Sunday Morning.
We will explain the typesetting of a musical composition using LaTeX markup.
The typographer’s goal is to provide the best possible reading experience for the reader. Thirty years of disruptive technologies have made this a greater challenge despite the overwhelming number of type designs available to us. Steve Matteson will give several historical and contemporary examples where fonts have been adapted or designed to meet constantly changing technological demands.
Closing by the president. See you next year!