TUG 2021 online — Program



Sue and Cheryl will be giving their classic introduction to LaTeX workshop, in English.

The workshop briefly presents the features of PGFPlots for LaTeX users. The topics covered start from a description of the environment, present the types of graphics and their components, and some of the possible customizations. The presentation ends with the use of the animate package to produce animations from a set of inline graphics. The code for the examples presented is available so that you can try it out and evaluate the capabilities of the graphics environment in LaTeX.
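For readers who want to try things in advance, here is a minimal sketch of a PGFPlots figure (assuming only that the pgfplots package is installed; the plotted function is purely illustrative):
  \documentclass{article}
  \usepackage{pgfplots}
  \pgfplotsset{compat=1.17}
  \begin{document}
  \begin{tikzpicture}
    \begin{axis}[xlabel={$x$}, ylabel={$\sin x$}, domain=0:2*pi, samples=100]
      \addplot[blue, thick] {sin(deg(x))};
    \end{axis}
  \end{tikzpicture}
  \end{document}
The animate package mentioned above can then turn a sequence of such pictures into an animation.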

We will show how Digital Science combines Jupyter Notebooks and Overleaf projects for the automated creation of professional-looking documents and for team collaboration.

Visit and work with a letterpress printer.

Learn about the details of producing a book in LaTeX for Amazon Kindle Direct Publishing.

Making the web more beautiful, fast, and open through great typography.

Learn how to create a new symbol and make an OpenType font for your logo to be used in TeX and elsewhere.

Air travel provided the zoom for society before we had to ‘Zoom’. However, at the most critical stage of flight, when the pilot and plane are coming in to land, the readability of a runway marker is of utmost importance. While runway markings have traditionally been made with white paint on blacktop, written in the official font used by the International Civil Aviation Organization (ICAO), the industry's turn towards commercial autonomous drones raises the question of whether the current font is suitable for vision-based navigational systems. This project consequently examines the ability of machine learning software to read, learn, and recognize the digits 0–9 and the letters L, C, and R across a variety of fonts.

This talk is based primarily on my last 15 years of TeX development in the area of Persian typesetting. I will look at the current state of Persian typesetting in TeX, discuss the issues I have faced, the current challenges, and what needs to be done. I will also discuss how the xepersian package is used for typesetting mainly Persian documents and show a few sample documents (books, theses, and other types of documents) produced by the xepersian package. Some capabilities of the xepersian package will be demonstrated live.

Norbert will be interviewed by Paulo Ney de Souza.

Now we all know what a pandemic is and how dreadful it is for all mankind. Most generations alive today would not have known one, never seen one, never felt one. Surviving these trying times has made us all adapt to change, and we are no strangers to it. As a typesetter for the leading scientific, technical and medical publishers around the world, we have helped typeset thousands of pages of research articles on Covid-19. Somewhere through our business of ‘typesetting’, directly or indirectly, we feel we have helped. Our main objective during this challenging period was to keep the business rolling safely, making sure our employees and customers were not let down. Through this journey we have helped our staff work safely, ensuring there were no job losses. And for the scientific community we have worked tirelessly, helping publishers continue publishing research articles quickly without missing a single committed due date. This is how we assured Business Continuity and Certainty for our employees, customers and business.
Looking back with gratitude, we can now confidently shout out: YES, we have achieved what we set out to do. We made LaTeX work for a business, ensured no job losses, standardised our workflows, made data-driven analytical decisions within LaTeX workflows and, to sum it all up, kept delivering aesthetically pleasing documents to our customers, delighting them always. It was indeed not a cakewalk; times were superbly challenging and we have persevered. Finding the Goldilocks zone between aesthetics and efficiency has always been a challenge for large production houses. But in this pandemic time we have achieved just that: the right balance, “our Goldilocks”, for LaTeX-based typesetting. This is our journey, this is our story, and the story still continues...

This paper describes several aspects of the conversion of TeX’s source code from WEB, based on Pascal, to cweb, based on C, using web2w. It emphasizes those aspects that are relevant for obtaining a translation that can truly be regarded as source code and that lends itself to modification.

Celebrating its fifth birthday, the Markdown package has received five new features: user-defined LaTeX themes & setup snippets, two syntax extensions for comments, and support for the LuaMetaTeX engine. In this talk, I will introduce each of these features and show how they can be used in practice.
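As a point of reference, here is a minimal sketch of the package's basic usage (compiled with LuaLaTeX, or with pdfLaTeX and shell escape enabled; theme and setup-snippet options are omitted, since their names depend on the theme):
  \documentclass{article}
  \usepackage{markdown}
  \begin{document}
  \begin{markdown}
  # A heading
  Some *emphasized* text and a list:
  - first item
  - second item
  \end{markdown}
  \end{document}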

In this talk I would like to introduce the use of TeX and templates, along with generating ad-hoc class and style files, for working with orgmode. In particular, I will also highlight the process by which literate programming practices can be implemented with babel. This provides a more native and flexible alternative to vendor-locked-in systems like JupyterLab, which rely on inelegant JS-based approaches to TeX typesetting. Finally, I would like to go into how to leverage CI methods for TeX workflows augmented with git.

The Braille system allows the tactile representation of characters in various alphabets, giving visually impaired persons access to reading texts. The Nemeth code for mathematics allows the representation of mathematical symbols and expressions in the Braille system. We have developed a tool, named latex2nemeth, for the reliable transcription of LaTeX documents to Nemeth Braille, thus facilitating the access of visually impaired students to studying science. In order to support the extensive set of mathematical symbols covered by TeX, we have proposed some new symbols based on the extension mechanisms of the Nemeth code. With the aid of latex2nemeth, we have created a repository of learning material in Braille/Nemeth code aiming to support studies in mathematics for visually impaired students. While most of the material available in the repository is in the Greek language, the tool supports other languages as well. latex2nemeth is currently available in both TeX Live and MiKTeX distributions.

In this talk I will present two packages as part of the LaTeX Project “Tagged PDF”:


I will show how to use these packages and what benefits they bring for the “normal” user, but also speak about incompatibilities and required changes in documents.

In this talk I will demonstrate and describe our solution for automatically tagging paragraphs when using engines such as pdfTeX or XeTeX. The situation with LuaTeX is different and simpler and therefore not the subject of this talk. I briefly touch on the problems one encounters and explain the approaches we used to overcome them. This will be done with a number of demonstrations intermixed with theoretical explanations. This work is part of our multi-year journey to gradually modernize LaTeX so that it can automatically produce high-quality tagged and “accessible” PDF without the need to post-process the result of the LaTeX run.
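As a rough illustration only (not the exact interface shown in the talk): in recent LaTeX releases the tagging test phase is switched on via \DocumentMetadata, whose key values evolve from release to release and are documented with the latex-lab code.
  \DocumentMetadata{testphase=latest} % must come before \documentclass
  \documentclass{article}
  \begin{document}
  With a sufficiently new LaTeX format, the paragraphs of this document
  are tagged automatically in the resulting PDF.
  \end{document}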

It’s said: Easy reading is hard writing. Certainly both reader and writer need to make extra effort, when the reader is visually impaired, and the material is technical. This talk is about improving the accessibility of TeX and its outputs. This is not a typesetting talk. It is a user experience and social interaction talk.
For sighted readers the printed page assists short-term memory, as does typography. They reduce the cognitive load. The eye can pick up subtle hints. Clarity of organisation and writing will reduce the cognitive load for both visually impaired and sighted readers, provided they have sufficient verbal skills.
This year I’ve had regular online discussions about accessibility with blind and visually impaired persons, and listened in on their forum conversations with each other. I’ve learnt a lot from this.

The introduction of computers and networks has been, with some exceptions, an enabling technology for the visually impaired. A screen reader allows the user to hear what is written, without needing a sighted assistant. And video calls by mobile phone mean that the sighted assistant need not be physically present.
Louis Braille, who became blind as a young child, developed the tactile code for reading and writing that we now know simply as Braille. Screen readers allow the visually impaired to write computer software. The major screen readers are JAWS, Orca and NVDA. It should be no surprise that their leading developers Glen Gordon, Mark Mulcahy, Michael Curran and James Teh are all blind.
To summarize, my talk will share what I’ve learned from my interactions with blind and visually impaired users, and how it relates to the accessibility of TeX and its outputs.

In this talk, Paulo recalls 2020 at the Island of TeX: an eventful year with a new backend for the online TeX and LaTeX documentation lookup system, the release of a tool for finding fonts that contain a given Unicode glyph, a major update for arara and other actions and initiatives as a means to enrich the TeX ecosystem. Yet, a new adventure is about to unfold, for the Island has bold and exciting plans for the future.

TeX itself has no built-in support for colour, which is therefore handled by specials or engine-specific extensions. For LaTeX2e, the different interfaces are abstracted out by the color package. However, there is a lot that the color package does not do, for example handling colour model interconversion, mixing colours or device-specific colour spaces. Packages such as xcolor and colorspace fill that gap, whilst the luacolor package addresses a separate issue: avoiding the need to use whatsits for colour at all.
As part of wider efforts to enhance the LaTeX kernel via expl3 additions, recent work on the l3color package has brought many of these concepts into a single set of interfaces. That means not only copying existing ideas but also ensuring maximal functionality. In my talk, I will explore the work on l3color, highlighting where it can go beyond the predecessor packages in ease of use and functionality.
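For illustration, here is a sketch of the xcolor-style mixing syntax that these interfaces support (not an excerpt from the talk):
  \documentclass{article}
  \usepackage{xcolor}
  \begin{document}
  % 30% red mixed with 70% blue, the result then mixed with 20% white
  \textcolor{red!30!blue!80!white}{Mixed colour text}
  % a named colour defined in an explicit model
  \definecolor{myorange}{rgb}{0.9, 0.45, 0.1}
  \textcolor{myorange}{Named colour}
  \end{document}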

The TikZlings package provides a selection of cute little animals (and other beings) which can be used in TikZ. Cats, teddy bears, penguins, snowmen and many more are included in the package.
After a short introduction on how to use the package, I will give an overview of the available options and show some examples of how one can customise TikZlings.
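A minimal sketch of typical usage (assuming the tikzlings package from CTAN; the customisation keys themselves are left out here):
  \documentclass{article}
  \usepackage{tikzlings}
  \begin{document}
  \begin{tikzpicture}
    \penguin                 % a penguin at the origin
    \begin{scope}[xshift=3cm]
      \koala                 % a koala, shifted to the right
    \end{scope}
  \end{tikzpicture}
  \end{document}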

For a long time, Type 3 fonts in LaTeX-generated PDF files were known for (undesirable) bitmap fonts, but that's only a small aspect of what this font format can do. With OpenType color fonts the idea behind Type 3 fonts has seen a revival, and LuaTeX recently added support for such fonts for non-bitmap use cases too.
In this talk I want to look at how this format can be used to create smaller and simpler PDF files involving color fonts and user-generated glyphs, and consider advantages and disadvantages in contrast to traditional alternatives like virtual fonts or macro-based solutions.

Frank will be interviewed by Paulo Ney de Souza.

Annual General Meeting of the TeX Users Group

Governments around the world are enforcing accessibility standards. Vendors of software used by government agencies are required to file formal statements of accessibility for their products. This presents a special challenge for open source products if they are not sponsored by a corporation.
In this talk we discuss our experience in creating such a formal statement for TeX Live. While command line tools are usually more accessible than GUI interfaces, the work turned out to be more difficult than we thought at the beginning.

High quality automated transcription of mathematical texts, including graphics, into tactile form is an open problem. In this talk, we describe the reasons for producing tactile forms of mathematical texts. We will describe common challenges involved in transcription, and progress made to date. We make the case that semantically rich source files are needed to produce adequate tactile and audio-tactile forms of scientific materials.

A screen reader is a vital tool that helps individuals who are blind or low-vision read digital text. Unfortunately, not all file formats receive the same level of support from screen readers. For example, while PDF files have accessibility features that can be used, they are often not the preferred file format for screen reader users. Between line breaks, multiple columns, symbols, and images, screen readers often struggle with academic journal articles in certain file formats. We will discuss the collaboration of the Open at RIT project with an open access journal and their combined goal of improving accessibility and readership for all. We will explore the difficulties that journals face on their journey towards accessibility, why this journey is worth making, and show how using LaTeX to publish to both our traditional PDF format and a more accessible HTML format allowed us to make a big leap towards becoming a more accessible journal.

US government agencies need properly Accessible PDFs. The practice of ‘remediation’ (adjusting and augmenting the PDF after the typesetting phase) is expensive and generally produces poor results.
In this talk we show how a much better product can be created directly using LaTeX, adapted for constructing documents that fully conform to PDF/UA-1 and PDF/A-3a. LaTeX sources are handled at 3 levels: (i) initial data capture by research scientists, (ii) heavy editorial work to enable accessibility aspects, (iii) production-level processing to produce feature-rich tagging and full Accessibility.
The two speakers will discuss different aspects of these 3 levels, according to their own involvement in this generalised workflow.
Of particular interest is the use of acronyms and glossaries to enrich the PDF with features that associate technical terms and abbreviations with a fully expanded description of the meanings of those terms, accessible both visually and to Assistive Technology for non-visual readers.
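As a sketch of the acronym mechanism mentioned above, using the standard glossaries package rather than the speakers' actual production setup (the glossary itself is produced by the usual makeglossaries run):
  \documentclass{article}
  \usepackage[acronym]{glossaries}
  \makeglossaries
  \newacronym{at}{AT}{Assistive Technology}
  \begin{document}
  The first use of \gls{at} prints the full expansion with the abbreviation;
  later uses of \gls{at} print only the short form.
  \printglossary[type=\acronymtype]
  \end{document}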

Word search puzzles are a fun pastime and can be a helpful learning tool for spelling and letter recognition. I present an exploration of the babel language package for LaTeX with the production of puzzles in Cyrillic, Arabic, and English.
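A rough sketch of the kind of multilingual setup involved, assuming LuaLaTeX, babel's \babelprovide interface, and a font such as FreeSerif that covers Latin, Cyrillic and Arabic (the exact options needed can vary):
  \documentclass{article}
  \usepackage[english, bidi=basic]{babel}
  \babelprovide[import]{russian}
  \babelprovide[import]{arabic}
  \babelfont{rm}{FreeSerif}  % one font covering all three scripts
  \begin{document}
  WORD \foreignlanguage{russian}{СЛОВО} \foreignlanguage{arabic}{كلمة}
  \end{document}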

Improving the English skills of non-native undergraduate students has important implications and can often be directly linked to students’ futures, particularly in the IT field. Producing a bilingual lecture textbook is thus arguably meaningful, notably by treating English as the “major” book language and the students’ mother tongue as the “minor”, supporting one. Yet, from a TeXnical point of view, this is far from trivial. Hereinafter, LaTeX methods are given together with guidelines to support the realisation of a bilingual textbook. The especially technically demanding English–Japanese scenario is considered.
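For the Japanese side, one common approach (not necessarily the one described in the talk) is a sketch like the following, assuming XeLaTeX and an installed Japanese font; the font name is only an example:
  \documentclass{article}
  \usepackage{xeCJK}
  \setCJKmainfont{Noto Serif CJK JP}  % any installed Japanese font works
  \begin{document}
  The major language of the textbook is English; the supporting minor
  language is Japanese: 日本語の補助的な説明。
  \end{document}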

This talk is based primarily on my last 15 years of TeX development in the area of bidirectional typesetting. I will look at the current state of bidirectional typesetting in TeX, discuss the issues I have faced, the current challenges, and what needs to be done. I will also discuss how the bidi package is used for typesetting bidirectional documents and show a few sample documents (books, theses, and other types of documents) produced by the bidi package. Some capabilities of the bidi package will be demonstrated live.

In order to dissuade our students from bulimic learning and to motivate them to engage with electrical engineering already during the semester, we have developed a concept of personalized tasks with anonymous peer review. All students receive their own assignment by e-mail, can solve it, and submit their solution as an explanatory video via a learning management system for correction. Video submission was chosen because not only the result but also the process of solving the problem can be documented much better, and can be corrected and evaluated. In order to reduce the correction effort for the teachers, the students assess each other using a sample solution that is also personalized. The process runs automatically and is therefore easily scalable. Compared to simple multiple-choice or numerical value-and-unit tasks, the calculation method and approach, as well as sketches, circuit diagrams and charts, can also be evaluated well. This contribution describes how the tasks and sample solutions can be automatically generated in LaTeX with the help of the packages PGFPlots and Circuitikz.
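A sketch of how a personalized task could embed a generated circuit; the macro \Rone and its value are hypothetical placeholders for the per-student data inserted by the generator:
  \documentclass{article}
  \usepackage{circuitikz}
  \newcommand{\Rone}{470}  % hypothetical per-student value
  \begin{document}
  \begin{circuitikz}
    \draw (0,0) to[V, l=$U_0$] (0,3)               % voltage source
          to[R, l=$R_1 = \Rone\,\Omega$] (3,3)     % personalized resistor
          to[lamp] (3,0) -- (0,0);                 % load
  \end{circuitikz}
  \end{document}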

Knitr ties LaTeX and R together in a very powerful combination. TikZ typesets visually appealing graphs from R code. Data is processed when the report is typeset. All calculations can be made available to the reader as R code. This simplifies reproducible research. R offers a whole ecosystem of statistical procedures, graphics packages, and even connections to other systems such as Python and MATLAB. In this talk I will show the applications of R & LaTeX that I have come across. My aim is to typeset beautiful graphs in a widely accessible manner.
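A minimal sketch of such a document, as a knitr .Rnw file (the dev='tikz' chunk option assumes the tikzDevice R package is installed; the file is knitted with knitr and then compiled with LaTeX):
  \documentclass{article}
  \begin{document}
  The analysis below is executed when the report is built.
  <<scatter, echo=TRUE, dev='tikz'>>=
  x <- rnorm(100)                       # simulated data
  plot(x, main = "Reproducible figure")
  @
  \end{document}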

This abstract is a short essay giving the framework for my talk. I take a long view. In my talk I’ll provide some details and examples. My talk is about digital typography in 2050 and 2070, and the conditions for its emergence that are already present.
A few billion years ago life in the oceans began oxygenating the atmosphere. By 350 million years ago life on land was creating what we now call fossil fuel (coal, crude oil and natural gas). A few million years ago the genus homo (man) emerged.
Birds have song and dance. The tool-making Neanderthals (250,000 to 40,000 years ago) probably had language. Human art and music arose at least 40,000 years ago. Around 14,000 years ago agriculture and settlement started to replace nomadic hunting and gathering. Writing (on tablets) followed about 5,000 years ago.
Ancient history (3000 BC to AD 500) includes about 80 civilizations worldwide with written records. This is a very rich period which still influences contemporary thought in art, religion, society, culture and politics.
Along with the rise of the European Renaissance in the 1400s, printing with moveable type emerged, to replace hand copying of books. This is typography, born out of calligraphy (writing with pen or brush).
By the 20th century there were massive printing presses, producing a million copies or more of each issue of a newspaper, which were then distributed on a national basis. (In 1950 the News of the World sold over 8 million copies each week.)
Also in the 20th century there was electrification, wireless stations and receivers, and studios. This distributed spoken voice news, and music, to millions. Cinema and then television provided moving images to accompany the sound.
By 2020 vast torrents of information were being created and transmitted using computers and networks (mobile phones, wi-fi and 4G). Computers are everywhere, even in electric light bulbs. The present context is very different from the 1970s, when Don Knuth started his foundational work on digital typography, and the creation of TeX and Metafont.
Gutenberg and others replaced hand copying of books by the printing press. Knuth and others replaced mechanical typography by computer (or digital) typography. Both produce only static visual images.
If humanity avoids destroying its culture and civilization, then the digital typography of 2050 will be different again. It is already emerging. One major component is the (world wide) web and its servers and browsers. This was pioneered by Tim Berners-Lee. Another is the smart mobile phone (now dominated by Apple and Android). A third is the large high-resolution flat screen television. A fourth is the ubiquity of computers.
I am now in my late 60s. I hope to be alive to see the digital typography of 2050, and if so I expect some surprises. Maxwell’s unification of electricity and magnetism (1865) lives on as the theoretical basis for electrification, wireless and much more. I hope the work of Knuth and others in digital typography can similarly be transmitted as useful living tools and skills to those who follow us.
I do not expect to be alive in 2070, let alone the 100th birthday of TeX (2078 to 2082). I hope my contribution adds to the cause for celebration.

Space- and time-effective segmentation and hyphenation of natural languages stay at the core of every document preparation system, web browser, or mobile rendering system.
Recently, the unreasonable effectiveness of pattern generation has been shown – it is possible to use hyphenation patterns to solve the dictionary problem for a single language without compromise.
In this article, we will show how we applied the marvelous effectiveness of patgen for the generation of the new Czechoslovak hyphenation patterns that cover two languages.
We show that the development of more universal hyphenation patterns is feasible and allows for significant quality improvements and space savings. We evaluate the new approach and the new Czechoslovak hyphenation patterns.
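A quick way to check the result once patterns are installed is LaTeX's \showhyphens command, which writes the permitted break points to the log (a sketch; the words are ordinary Czech examples):
  \documentclass{article}
  \usepackage[T1]{fontenc}
  \usepackage[czech]{babel}
  \begin{document}
  \showhyphens{typografie sazba slovo}
  \end{document}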

We are interested in situations in which the full expressive power of a programming language is needed when References sections are generated for a source text suitable for LaTeX. The data model used by BibTeX is inadequate from this point of view; the “biblatex” package is based on a more efficient data model, but workarounds may be needed in some circumstances.
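For orientation, here is the standard biblatex setup that such generated References sections start from (a sketch; refs.bib and the citation key are placeholders):
  \documentclass{article}
  \usepackage[backend=biber, style=numeric]{biblatex}
  \addbibresource{refs.bib}  % placeholder bibliography file
  \begin{document}
  A citation \autocite{knuth1984} and the generated list:
  \printbibliography
  \end{document}
The document is then processed with latex, biber, and latex again.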

Symbols (which may be letter-like mathematical constants or functions or may be pictographs with no intuitive ordering) can be problematic to sort. With MakeIndex and Xindy, it's necessary to explicitly set the sort key to the most appropriate alphanumeric value. With bib2gls, it's better not to explicitly set the sort field but instead use bib2gls's field fallback system to select the most appropriate field for the given entry type.
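For example, a symbol entry in a bib2gls .bib file and the corresponding document setup might look like the following sketch (symbols.bib is a placeholder name; with no sort field given, bib2gls falls back on another field, by default the label for @symbol entries):
  % symbols.bib
  @symbol{pi,
    name        = {\ensuremath{\pi}},
    description = {ratio of a circle's circumference to its diameter}
  }
  % in the document preamble
  \usepackage[record]{glossaries-extra}
  \GlsXtrLoadResources[src={symbols}]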

Here I present an automatic open source build system that supports the conversion process of a collection of documents written in LaTeX or other TeX formats. With texmlbus, the TeX to XML BUild System, documents can not only be converted to PDF, but also to other output formats, such as markup languages like HTML. In particular, conversion to XML, HTML and MathML is supported via latexml. Texmlbus can schedule jobs among several workers (possibly on different hosts), extracts and analyzes the outcome of the conversion process of each document, and stores results in its own database. Result documents as well as statistics about the results of the build process can easily be retrieved using a web browser.

TeX4ht is a converter from LaTeX to HTML and several other output formats. Recent work focuses on keeping current with package updates and supporting new packages. In this talk, I will discuss its current status and recent development. I will show how to change the look of the generated document, how to select the right way to produce math (including MathJax and MathML), and how to fix some common issues caused by clashes with unsupported packages or commands.
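For instance, the choice of math output can be made in a small configuration file (a sketch; myconfig.cfg is a hypothetical name, used as make4ht -c myconfig.cfg file.tex):
  % myconfig.cfg
  \Preamble{xhtml,mathjax}  % replace mathjax with mathml if preferred
  \begin{document}
  \EndPreamble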

XLingPaper is a plugin for XMLMind, an XML editor designed for publishers. XLingPaper does three things: 1) it controls the user interface of a powerful tool, allowing only valid document sections to be inserted into a document, which reduces user friction in the document production process; 2) it provides a constrained number of document sections which are relevant in the production of linguistically oriented publications, e.g., grammars, dissertations, theses, journal articles, edited volumes; 3) it exports documents to a variety of formats, e.g., PDF, ePub, Open Office Writer, HTML. We describe XLingPaper’s development history and its dependencies on TeX packages for PDF creation.

In this presentation I discuss some of the issues surrounding the workflow used in the production of the annotated Spanish translation of the medieval work, Salomon et Marcolfus. I explain the decisions taken regarding the XSLT transformation of the TEI-XML document, in order to produce a final LuaLaTeX text.

John Hammersley, co-founder and CEO of Overleaf, will be interviewed by Paulo Ney de Souza.

The End.