texlive[59756] Master/texmf-dist: csvsimple (29jun21)
commits+karl at tug.org
Tue Jun 29 21:53:40 CEST 2021
Revision: 59756
http://tug.org/svn/texlive?view=revision&revision=59756
Author: karl
Date: 2021-06-29 21:53:39 +0200 (Tue, 29 Jun 2021)
Log Message:
-----------
csvsimple (29jun21)
Modified Paths:
--------------
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-example.pdf
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-example.tex
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple.pdf
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple.tex
trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple.sty
Added Paths:
-----------
trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES.md
trunk/Master/texmf-dist/doc/latex/csvsimple/README.md
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-doc.sty
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.pdf
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.tex
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.pdf
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.tex
trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-title.png
trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-l3.sty
trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-legacy.sty
Removed Paths:
-------------
trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES
trunk/Master/texmf-dist/doc/latex/csvsimple/README
Deleted: trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES 2021-06-29 19:53:39 UTC (rev 59756)
@@ -1,115 +0,0 @@
-%% The LaTeX package csvsimple - version 1.22 (2021/06/07)
-%%
-%% -------------------------------------------------------------------------------------------
-%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
-%% -------------------------------------------------------------------------------------------
-%%
-%% This work may be distributed and/or modified under the
-%% conditions of the LaTeX Project Public License, either version 1.3
-%% of this license or (at your option) any later version.
-%% The latest version of this license is in
-%% http://www.latex-project.org/lppl.txt
-%% and version 1.3 or later is part of all distributions of LaTeX
-%% version 2005/12/01 or later.
-%%
-%% This work has the LPPL maintenance status `author-maintained'.
-%%
-%% This work consists of all files listed in README
-%%
-
-version 1.00 (2010/07/28): initial public release
-
-version 1.01 (2010/11/10):
-- documentation of some keys clarified
-- new key: after first line
-- new key: late after first line
-- new example for key evaluation in the documentation
-
-version 1.02 (2011/04/04):
-- error in the documentation for longtable und tabbing corrected
-- new macros: \csvfilteraccept, \csvfilterreject
-- new keys: filter accept all, filter reject all
-
-version 1.03 (2011/11/04):
-- processing error for lines starting with '00' corrected
-
-version 1.04 (2011/11/11):
-- new key: head to column names (automatic column names)
-- new key: no table
-- column numbers can now be used for column macro definitions
-- documentation update and correction
-- internal behaviour of 'before reading' and 'after reading' changed for tables
-
-version 1.05 (2012/03/12):
-- documentation language changed from German to English
-- source code of the documentation added
-- provision of the csvsimple.tds.zip file for easier installation
-- key @table removed from the documentation
-- new keys: preprocessed file, preprocessor, no preprocessing
- for preprocessing support (e.g. sorting)
-- error in 'nocheckcolumncount' corrected and key renamed to 'no check column count'
-- key nofilter renamed to 'no filter' and 'nohead' to 'no head' (the old names
- are kept as deprecated key names)
-
-version 1.06 (2012/11/08):
-- implementation for line breaking changed from full macro expansion to
- token expansion. This allows quite arbitrary macro code inside the data.
- Note that this may be a breaking change if your application expects
- expanded column values.
-- option values added for \csvautotabular and \csvautolongtable
-
-version 1.07 (2013/09/25):
-- internal macro '\TrimSpaces' renamed to avoid name clashed with 'xparse'
-- new option 'separator' to set the data value separator to
- 'comma', 'semicolon', or 'pipe'
-
-version 1.10 (2014/07/07):
-- bug fix: table head names in curly brackets were not recognized for some cases
-- changed: if a CSV file is not found, csvsimple stops with an error message instead of a warning
-- external sorting specifically supported for the CSV-Sorter tool with the new options
- 'csvsorter command', 'csvsorter configpath', 'csvsorter log',
- 'sort by', 'new sorting rule'
-- new automatic tabular settings with booktabs:
- '\csvautotabular' and '\csvautolongtable'
-- new keys for respecting special characters:
- 'respect tab', 'respect percent', 'respect sharp', 'respect dollar',
- 'respect and', 'respect backslash', 'respect underscore', 'respect tilde',
- 'respect circumflex', 'respect leftbrace', 'respect rightbrace',
- 'respect all', 'respect none'
-- new value 'tab' for the 'separator' option to use a tabulator signs
- as separator.
-
-version 1.11 (2014/07/08):
-- bug fix (serious!): sorting preprocessor overwrites the input data in some combinations
-- changed: if a CSV file with an empty first line is found, csvsimple stops with an error message
-
-version 1.12 (2014/07/14):
-- fixed: CSV-Sorter call incompatibilities with the ngerman package (not babel)
-- changed: success of CSV-Sorter call is checked (Note: Update to CSV-Sorter v0.94 or newer!)
- new key 'csvsorter token'
-- changed: encircling column entry braces removed for all entries for better siunitx compatibility
-- documentation revised and extended with siunitx examples
-
-version 1.20 (2016/07/01):
-- implementation changed from \roman to \romannumeral
-- write18 replace by \ShellEscape from the shellesc package
-- '\csvlinetotablerow' implemented more efficiently
-- '\csvloop' made long
-- new string comparison macros:
- '\ifcsvstrequal', '\ifcsvprostrequal', '\ifcsvstrcmp', '\ifcsvnotstrcmp'
-- new filter options:
- 'filter ifthen', 'filter test', 'filter expr', 'full filter',
- 'filter strcmp', 'filter not strcmp'
-- code optimizations
-- documentation revised
-
-version 1.21 (2019/04/09):
-- spurious blank in sorting code removed
-- package 'pgfrcs' added as required package
-- (#3): introduction augmented with additional hints for first time users
-
-version 1.22 (2021/06/07):
-- (#7) new option 'head to column names prefix'
-- (#11) Due to changes in the LaTeX kernel 2021-06-01, the empty line
- detection of csvsimple had to be adapted. Updating csvsimple is
- essential to avoid problems with kernel 2021-06-01.
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES.md
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES.md (rev 0)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES.md 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,293 @@
+# Changelog
+All notable changes to this project will be documented in this file.
+
+The format is based on
+[Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to
+[Semantic Versioning](http://semver.org/spec/v2.0.0.html).
+
+## [Unreleased]
+
+
+
+## [2.0.0] - 2021-06-29
+
+### Added
+- New documentation `csvsimple-l3.pdf` for the new LaTeX3 version
+ (revised, adapted and extended from the old documentation)
+- `\thecsvcolumncount`
+- Option `autotabular*`
+- Option `autobooktabular*`
+- Option `autolongtable*`
+- Option `autobooklongtable*`
+- Option `filter bool`
+- Option `filter fp`
+- Option `range`
+- `\csvautotabular*`
+- `\csvautobooktabular*`
+- `\csvautolongtable*`
+- `\csvautobooklongtable*`
+- `\csvfilterbool`
+- `\ifcsvfirstrow`
+- `\ifcsvoddrow`
+- `\ifcsvfpcmp`
+- `\ifcsvintcmp`
+- `\csvsortingrule`
+
+### Changed
+- Complete re-implementation of the hitherto existing LaTeX package
+  as a LaTeX3 package using the expl3 interface. From now on, three package
+  files are provided:
+  - `csvsimple-legacy.sty`: identical to csvsimple up to version 1.22
+  - `csvsimple-l3.sty`: the LaTeX3 package of csvsimple
+  - `csvsimple.sty`: stub to select `l3` or `legacy` (default)
+- The LaTeX2e version (`csvsimple-legacy`) will be maintained in its
+  current state with no intended changes except for bug fixes.
+- The LaTeX3 version (`csvsimple-l3`) is regarded as the main package
+  and may receive feature upgrades in the future.
+- Existing documents using csvsimple v1.22 need no change since loading
+ `csvsimple` will load `csvsimple-legacy`.
+- `csvsimple-l3` is a *nearly* drop-in replacement for `csvsimple-legacy`.
+  Only very few things have been phased out and the user interface is almost
+  identical. The most significant difference is that `l3keys` is used instead
+  of `pgfkeys`, which may require adaptations on the user side (for example,
+  if `.style`s are used)
+- New documents are encouraged to use `csvsimple-l3` instead of
+  `csvsimple-legacy` (see the loading example after this list).
+- For the package as a whole: do not upgrade from version 1.22 if your
+  TeX installation has no current LaTeX3/expl3 support, i.e. *is too old*
+- `csvinputline` and `csvrow` are no longer LaTeX2e counters
+- The hitherto existing documentation `csvsimple.pdf` is now `csvsimple-legacy.pdf`
+- `csvsimple.pdf` documents the stub package and differences
+ between `csvsimple-l3.sty` and `csvsimple-legacy.sty`
+- `column count = 0` means automatic column number detection for CSV files without head
+- Option `head` does not change option `check column count` anymore
+- Changelog moved from CHANGES to CHANGES.md and adapted to
+ [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
+- From now on version numbers adhere to
+ [Semantic Versioning](http://semver.org/spec/v2.0.0.html)
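+
+A minimal loading sketch for the new stub and the two package variants
+(use exactly one of these lines in the preamble):
+
+```latex
+\usepackage{csvsimple}       % stub: loads csvsimple-legacy (default)
+\usepackage[l3]{csvsimple}   % stub: loads csvsimple-l3
+\usepackage{csvsimple-l3}    % loads the LaTeX3 version directly
+```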
+
+### Deprecated
+- `\csviffirstrow`
+- `\csvifoddrow`
+
+### Removed
+- `\csvheadset`
+- Option `filter`
+- Option `nofilter`
+- Option `nohead`
+
+
+
+
+## [1.22] - 2021-06-07
+
+### Added
+- Option `head to column names prefix` (issue #7)
+
+### Changed
+- Due to changes in the LaTeX kernel 2021-06-01, the empty line
+ detection of csvsimple had to be adapted. Updating csvsimple is
+ essential to avoid problems with kernel 2021-06-01. (issue #11)
+
+
+
+## [1.21] - 2019-04-09
+
+### Changed
+- Package `pgfrcs` added as required package
+- Introduction augmented with additional hints for first time users (issue #3)
+
+### Fixed
+- Spurious blank in sorting code removed
+
+
+
+## [1.20] - 2016-07-01
+
+### Added
+- New string comparison macros:
+  - `\ifcsvstrequal`
+  - `\ifcsvprostrequal`
+  - `\ifcsvstrcmp`
+  - `\ifcsvnotstrcmp`
+- New filter options:
+  - Option `filter ifthen`
+  - Option `filter test`
+  - Option `filter expr`
+  - Option `full filter`
+  - Option `filter strcmp`
+  - Option `filter not strcmp`
+
+### Changed
+- Implementation changed from `\roman` to `\romannumeral`
+- `\write18` replaced by `\ShellEscape` from the shellesc package
+- `\csvlinetotablerow` implemented more efficiently
+- `\csvloop` made long
+- Code optimizations
+- Documentation revised
+
+
+
+## [1.12] - 2014-07-14
+
+### Added
+- Option `csvsorter token`
+- Documentation extended with siunitx examples
+
+### Changed
+- Success of CSV-Sorter call is checked (Note: Update to CSV-Sorter v0.94 or newer!)
+- Encircling column entry braces removed for all entries for better siunitx compatibility
+- Documentation revised
+
+### Fixed
+- CSV-Sorter call incompatibilities with the ngerman package (not babel)
+
+
+
+## [1.11] - 2014-07-08
+
+### Changed
+- If a CSV file with an empty first line is found, csvsimple
+ stops with an error message
+
+### Fixed
+- Sorting preprocessor overwrites the input data in some combinations
+
+
+
+## [1.10] - 2014-07-07
+
+### Added
+- `\csvautobooktabular`
+- `\csvautobooklongtable`
+- External sorting specifically supported for the CSV-Sorter tool with the new options:
+  - Option `csvsorter command`
+  - Option `csvsorter configpath`
+  - Option `csvsorter log`
+  - Option `sort by`
+  - Option `new sorting rule`
+- New keys for respecting special characters:
+  - Option `respect tab`
+  - Option `respect percent`
+  - Option `respect sharp`
+  - Option `respect dollar`
+  - Option `respect and`
+  - Option `respect backslash`
+  - Option `respect underscore`
+  - Option `respect tilde`
+  - Option `respect circumflex`
+  - Option `respect leftbrace`
+  - Option `respect rightbrace`
+  - Option `respect all`
+  - Option `respect none`
+- Option setting `separator = tab`
+
+### Changed
+- If a CSV file is not found, csvsimple stops with an error message instead of a warning
+
+### Fixed
+- Table head names in curly brackets were not recognized for some cases
+
+
+
+## [1.07] - 2013-09-25
+
+### Added
+- Option `separator` to set the data value separator to
+ `comma`, `semicolon`, or `pipe`
+
+### Changed
+- Internal macro `\TrimSpaces` renamed to avoid name clashes with `xparse`
+
+
+
+## [1.06] - 2012-11-08
+
+### Changed
+- Implementation for line breaking changed from full macro expansion to
+ token expansion. This allows quite arbitrary macro code inside the data.
+ Note that this may be a breaking change if your application expects
+ expanded column values.
+- Option values added for `\csvautotabular` and `\csvautolongtable`
+
+
+
+## [1.05] - 2012-03-12
+
+### Added
+- Source code of the documentation added
+- Provision of the csvsimple.tds.zip file for easier installation
+- Option `preprocessed file`
+- Option `preprocessor`
+- Option `no preprocessing`
+
+### Changed
+- Documentation language changed from German to English
+- Option `nocheckcolumncount` renamed to `no check column count`
+- Option `nofilter` renamed to `no filter`
+- Option `nohead` renamed to `no head`
+
+### Deprecated
+- Option `nofilter`
+- Option `nohead`
+
+### Removed
+- Option `@table` removed from the documentation
+
+### Fixed
+- Error in `nocheckcolumncount` corrected and key renamed to `no check column count`
+
+
+
+## [1.04] - 2011-11-11
+
+### Added
+- Option `head to column names` (automatic column names)
+- Option `no table`
+- Column numbers can now be used for column macro definitions
+
+### Changed
+- Internal behaviour of `before reading` and `after reading`
+ changed for tables
+
+### Fixed
+- Documentation updated and corrected
+
+
+
+## [1.03] - 2011-11-04
+
+### Fixed
+- Processing error for lines starting with '00' corrected
+
+
+
+## [1.02] - 2011-04-04
+
+### Added
+- `\csvfilteraccept`
+- `\csvfilterreject`
+- Option `filter accept all`
+- Option `filter reject all`
+
+### Fixed
+- Error in the documentation for `longtable` and `tabbing` corrected
+
+
+
+## [1.01] - 2010-11-10
+
+### Added
+- Option `after first line`
+- Option `late after first line`
+- New example for key evaluation in the documentation
+
+### Changed
+- Documentation of some options clarified
+
+
+
+## [1.00] - 2010-07-28
+
+### Added
+- Initial public release
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/CHANGES.md
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Deleted: trunk/Master/texmf-dist/doc/latex/csvsimple/README
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/README 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/README 2021-06-29 19:53:39 UTC (rev 59756)
@@ -1,50 +0,0 @@
-%% The LaTeX package csvsimple - version 1.22 (2021/06/07)
-%%
-%% -------------------------------------------------------------------------------------------
-%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
-%% -------------------------------------------------------------------------------------------
-%%
-%% This work may be distributed and/or modified under the
-%% conditions of the LaTeX Project Public License, either version 1.3
-%% of this license or (at your option) any later version.
-%% The latest version of this license is in
-%% http://www.latex-project.org/lppl.txt
-%% and version 1.3 or later is part of all distributions of LaTeX
-%% version 2005/12/01 or later.
-%%
-%% This work has the LPPL maintenance status `author-maintained'.
-%%
-%% This work consists of all files listed in README
-%%
-
-csvsimple provides a simple LaTeX interface for the processing of files with
-comma separated values (CSV). csvsimple relies heavily on the key value syntax
-from pgfkeys which results (hopefully) in an easy way of usage. Filtering and
-table generation is especially supported. Since the package is considered as a
-lightweight tool, there is no support for data sorting or data base storage.
-
-Contents of the package
-=======================
- 'README' this file
- 'CHANGES' log of changes (history)
- 'csvsimple.sty' LaTeX package file (style file)
- 'csvsimple.pdf' Documentation for csvsimple
- 'csvsimple.tex' Source code of the documentation
- 'csvsimple-example.tex' Example file for package usage
- 'csvsimple-example.csv' CSV file as part of the example
- 'csvsimple-example.pdf' Compiled example
- 'amountsort.xml' csvsorter configuration file (example)
- 'catsort.xml' csvsorter configuration file (example)
- 'encoding.xml' csvsorter configuration file (example)
- 'gradesort.xml' csvsorter configuration file (example)
- 'matriculationsort.xml' csvsorter configuration file (example)
- 'namesort.xml' csvsorter configuration file (example)
- 'transform.xml' csvsorter configuration file (example)
-
-Installation
-============
-Copy the contents of the 'csvsimple.tds.zip' from CTAN to your local TeX file tree.
-
-Alternatively, put the files to their respective locations within the TeX installation:
- 'csvsimple.sty' -> /tex/latex/csvsimple
- all other files -> /doc/latex/csvsimple
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/README.md
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/README.md (rev 0)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/README.md 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,62 @@
+# The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
+
+>
+> Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
+>
+> This work may be distributed and/or modified under the
+> conditions of the LaTeX Project Public License, either version 1.3
+> of this license or (at your option) any later version.
+> The latest version of this license is in
+> http://www.latex-project.org/lppl.txt
+> and version 1.3 or later is part of all distributions of LaTeX
+> version 2005/12/01 or later.
+>
+> This work has the LPPL maintenance status `author-maintained`.
+>
+> This work consists of all files listed in README.md
+>
+
+`csvsimple` provides a simple *LaTeX* interface for the processing of files
+with comma separated values (CSV). `csvsimple` relies heavily on a key value
+syntax, which makes it easy to use. Filtering and table generation
+are especially well supported. Since the package is considered a lightweight
+tool, there is no support for data sorting or database storage.
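+
+A minimal usage sketch (the file name `data.csv` is only a placeholder for
+your own CSV file with a header line):
+
+```latex
+% preamble
+\usepackage{csvsimple-l3}
+
+% document body: print the whole CSV file as a table
+\csvautotabular{data.csv}
+```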
+
+
+## Contents of the package
+
+- `README.md` this file
+- `CHANGES.md` log of changes (history)
+- `csvsimple.sty`         LaTeX package file (stub selecting `csvsimple-l3` or `csvsimple-legacy`)
+- `csvsimple-l3.sty`      LaTeX package file (LaTeX3 version)
+- `csvsimple-legacy.sty`  LaTeX package file (legacy LaTeX2e version)
+- `csvsimple.pdf` Documentation for csvsimple
+- `csvsimple-l3.pdf` Documentation for csvsimple (LaTeX3)
+- `csvsimple-legacy.pdf` Documentation for csvsimple (Legacy)
+- `csvsimple.tex` Source code of the documentation
+- `csvsimple-l3.tex`      Source code of the documentation (LaTeX3)
+- `csvsimple-legacy.tex`  Source code of the documentation (Legacy)
+- `csvsimple-doc.sty`     Style file for the documentation
+- `csvsimple-title.png` Picture for the documentation
+- `csvsimple-example.tex` Example file for package usage
+- `csvsimple-example.csv` CSV file as part of the example
+- `csvsimple-example.pdf` Compiled example
+- `amountsort.xml` csvsorter configuration file (example)
+- `catsort.xml` csvsorter configuration file (example)
+- `encoding.xml` csvsorter configuration file (example)
+- `gradesort.xml` csvsorter configuration file (example)
+- `matriculationsort.xml` csvsorter configuration file (example)
+- `namesort.xml` csvsorter configuration file (example)
+- `transform.xml` csvsorter configuration file (example)
+
+
+## Installation
+
+Copy the contents of the `csvsimple.tds.zip` from CTAN to your local TeX file tree.
+
+Alternatively, put the files into their respective locations within the TeX installation:
+
+- `csvsimple.sty` -> /tex/latex/csvsimple
+- `csvsimple-l3.sty` -> /tex/latex/csvsimple
+- `csvsimple-legacy.sty` -> /tex/latex/csvsimple
+- all other files -> /doc/latex/csvsimple
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/README.md
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-doc.sty
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-doc.sty (rev 0)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-doc.sty 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,106 @@
+% !TeX encoding=UTF-8
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
+%% csvsimple-doc.sty: style file for the documentation
+%%
+%% -------------------------------------------------------------------------------------------
+%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
+%% -------------------------------------------------------------------------------------------
+%%
+%% This work may be distributed and/or modified under the
+%% conditions of the LaTeX Project Public License, either version 1.3
+%% of this license or (at your option) any later version.
+%% The latest version of this license is in
+%% http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% This work has the LPPL maintenance status `author-maintained'.
+%%
+%% This work consists of all files listed in README.md
+%%
+\def\version{2.0.0}%
+\def\datum{2021/06/29}%
+
+\IfFileExists{csvsimple-doc.cfg}{\input{csvsimple-doc.cfg}}{}\providecommand\csvpkgprefix{}
+
+\RequirePackage[T1]{fontenc}
+\RequirePackage[utf8]{inputenc}
+\RequirePackage[english]{babel}
+\RequirePackage{lmodern,parskip,array,ifthen,calc,makeidx}
+\RequirePackage{amsmath,amssymb}
+\RequirePackage[svgnames,table,hyperref]{xcolor}
+\RequirePackage{tikz,siunitx,xfp}
+\RequirePackage{varioref}
+\RequirePackage[pdftex,bookmarks,raiselinks,pageanchor,hyperindex,colorlinks]{hyperref}
+\urlstyle{sf}
+\RequirePackage{cleveref}
+
+\RequirePackage[a4paper,left=2.5cm,right=2.5cm,top=1.5cm,bottom=1.5cm,
+ marginparsep=3mm,marginparwidth=18mm,
+ headheight=0mm,headsep=0cm,
+ footskip=1.5cm,includeheadfoot]{geometry}
+\RequirePackage{fancyhdr}
+\fancyhf{}
+\fancyfoot[C]{\thepage}%
+\renewcommand{\headrulewidth}{0pt}
+\renewcommand{\footrulewidth}{0pt}
+\pagestyle{fancy}
+\tolerance=2000%
+\setlength{\emergencystretch}{20pt}%
+
+\RequirePackage{longtable,booktabs,ifthen,etoolbox}
+
+\RequirePackage{tcolorbox}
+\tcbuselibrary{skins,xparse,minted,breakable,documentation,raster}
+
+\definecolor{Green_Dark}{rgb}{0.078431,0.407843,0.176471}
+\definecolor{Blue_Dark}{rgb}{0.090196,0.211765,0.364706}
+\definecolor{Blue_Bright}{rgb}{0.858824,0.898039,0.945098}
+
+\tcbset{skin=enhanced,
+ minted options={fontsize=\footnotesize},
+ doc head={colback=yellow!10!white,interior style=fill},
+ doc head key={colback=magenta!5!white,interior style=fill},
+ color key=DarkViolet,
+ color value=Teal,
+ color color=Teal,
+ color counter=Orange!85!black,
+ color length=Orange!85!black,
+ index colorize,
+ index annotate,
+ beforeafter example/.style={
+ before skip=4pt plus 2pt minus 1pt,
+ after skip=8pt plus 4pt minus 2pt
+ },
+ docexample/.style={bicolor,
+ beforeafter example,
+ arc is angular,fonttitle=\bfseries,
+ fontlower=\footnotesize,
+ colframe=green!25!yellow!50!black,
+ colback=green!25!yellow!7,
+ colbacklower=white,
+ drop fuzzy shadow=green!25!yellow!50!black,
+ listing engine=minted,
+ documentation minted style=colorful,
+ documentation minted options={fontsize=\footnotesize},
+ },
+}
+
+\renewcommand*{\tcbdocnew}[1]{\textcolor{green!50!black}{\sffamily\bfseries N} #1}
+\renewcommand*{\tcbdocupdated}[1]{\textcolor{blue!75!black}{\sffamily\bfseries U} #1}
+
+\NewDocumentCommand{\csvsorter}{}{\textsf{\bfseries\color{red!20!black}CSV-Sorter}}
+
+\newtcbinputlisting{\csvlisting}[1]{docexample,minted options={fontsize=\footnotesize},minted language=latex,
+ fonttitle=\bfseries,listing only,title={CSV file \flqq\texttt{\detokenize{#1.csv}}\frqq},listing file=#1.csv}
+
+\newtcbinputlisting{\xmllisting}[1]{docexample,minted options={fontsize=\footnotesize},minted language=xml,
+ fonttitle=\bfseries,listing only,title={Configuration file \flqq\texttt{\detokenize{#1.xml}}\frqq},listing file=#1.xml}
+
+\NewTotalTCBox{\verbbox}{m}{enhanced,on line,size=fbox,frame empty,colback=red!5!white,
+ colupper=red!85!black,fontupper=\bfseries\ttfamily}{\detokenize{"}#1\detokenize{"}}
+
+
+\makeindex
+
+\pdfsuppresswarningpagegroup=1
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-doc.sty
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Modified: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-example.pdf
===================================================================
(Binary files differ)
Modified: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-example.tex
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-example.tex 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-example.tex 2021-06-29 19:53:39 UTC (rev 59756)
@@ -1,4 +1,4 @@
-%% The LaTeX package csvsimple - version 1.22 (2021/06/07)
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
%% csvsimple-example.tex: an example for csvsimple
%%
%% -------------------------------------------------------------------------------------------
@@ -15,12 +15,16 @@
%%
%% This work has the LPPL maintenance status `author-maintained'.
%%
-%% This work consists of all files listed in README
+%% This work consists of all files listed in README.md
%%
\documentclass{article}
-\usepackage{array,booktabs}
-\usepackage{csvsimple}
+\usepackage{ifthen,array,booktabs}
+\IfFileExists{csvsimple-doc.cfg}{\input{csvsimple-doc.cfg}}{}% ignore this line
+\providecommand\csvpkgprefix{} % ignore this line
+
+\usepackage{\csvpkgprefix csvsimple-l3}% \usepackage{csvsimple-l3}
+
\begin{document}
%----------------------------------------------------------
@@ -65,7 +69,7 @@
%----------------------------------------------------------
\section{More filter fun}
-\csvreader[my names, filter=\birthyear<1980, centered tabular=rllr,
+\csvreader[my names, filter ifthen=\birthyear<1980, centered tabular=rllr,
table head=\multicolumn{4}{c}{\bfseries People born before 1980}\\\toprule
\# & Name & Postal address & input line no.\\\midrule,
late after line=\\, late after last line=\\\bottomrule]%
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.pdf
===================================================================
(Binary files differ)
Index: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.pdf
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.pdf 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.pdf 2021-06-29 19:53:39 UTC (rev 59756)
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.pdf
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/pdf
\ No newline at end of property
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.tex
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.tex (rev 0)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.tex 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,2504 @@
+% \LaTeX-Main\
+% !TeX encoding=UTF-8
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
+%% csvsimple.tex: Manual
+%%
+%% -------------------------------------------------------------------------------------------
+%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
+%% -------------------------------------------------------------------------------------------
+%%
+%% This work may be distributed and/or modified under the
+%% conditions of the LaTeX Project Public License, either version 1.3
+%% of this license or (at your option) any later version.
+%% The latest version of this license is in
+%% http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% This work has the LPPL maintenance status `author-maintained'.
+%%
+%% This work consists of all files listed in README.md
+%%
+\documentclass[a4paper,11pt]{ltxdoc}
+\usepackage{csvsimple-doc}
+
+\usepackage{\csvpkgprefix csvsimple-l3}
+
+\tcbmakedocSubKey{docCsvKey}{csvsim}
+\tcbmakedocSubKeys{docCsvKeys}{csvsim}
+
+\hypersetup{
+ pdftitle={Manual for the csvsimple-l3 package},
+ pdfauthor={Thomas F. Sturm},
+ pdfsubject={csv file processing with LaTeX3},
+ pdfkeywords={csv file, comma separated values, key value syntax}
+}
+
+\usepackage{incgraph}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{document}
+
+
+\begin{center}
+\begin{tcolorbox}[enhanced,hbox,tikznode,left=8mm,right=8mm,boxrule=0.4pt,
+ colback=white,colframe=black!50!yellow,
+ drop lifted shadow=black!50!yellow,arc is angular,
+ before=\par\vspace*{5mm},after=\par\bigskip]
+{\bfseries\LARGE The \texttt{csvsimple-l3} package}\\[3mm]
+{\large Manual for version \version\ (\datum)}
+\end{tcolorbox}
+{\large Thomas F.~Sturm%
+ \footnote{Prof.~Dr.~Dr.~Thomas F.~Sturm, Institut f\"{u}r Mathematik und Informatik,
+ Universit\"{a}t der Bundeswehr M\"{u}nchen, D-85577 Neubiberg, Germany;
+ email: \href{mailto:thomas.sturm at unibw.de}{thomas.sturm at unibw.de}}\par\medskip
+\normalsize\url{https://www.ctan.org/pkg/csvsimple}\par
+\url{https://github.com/T-F-S/csvsimple}
+}
+\end{center}
+\bigskip
+\begin{absquote}
+ \begin{center}\bfseries Abstract\end{center}
+ |csvsimple(-l3)| provides a simple \LaTeX\ interface for the processing of files with
+ comma separated values (CSV). |csvsimple-l3| relies heavily on the key value
+ syntax from |l3keys|, which makes it easy to use.
+ Filtering and table generation are especially well supported. Since the package
+ is considered a lightweight tool, there is no support for data sorting
+ or database storage.
+\end{absquote}
+
+\vspace{1cm}
+
+\includegraphics[width=\linewidth]{csvsimple-title.png}
+% Source code for the title picture - omitted for PDF viewer compatibility
+\begin{tcolorbox}[void]
+\begin{NoHyper}
+\begin{inctext}[]
+\begin{tikzpicture}
+\fill[top color=blue!50!gray!50,bottom color=red!50!gray!50] (-8,-5) rectangle (8,5);
+\node at (0,2.5) {\tcbinputlisting{listing file=csvsimple-example.csv,listing only,width=11cm,blankest,colupper=blue!50!black}};
+\node[red!50!black] at (0,-2.5) {\csvautotabular{csvsimple-example.csv}};
+\begin{scope}[transparency group=knockout]
+\fill [top color=blue!50!gray!10,bottom color=red!50!gray!10] (-7.7,-4.7) rectangle (7.7,4.7);
+\node at (0,2.5) {\tcbinputlisting{listing file=csvsimple-example.csv,listing only,width=11cm,blankest,colupper=blue!20}};
+\node[red!20] at (0,-2.5) {\csvautotabular{csvsimple-example.csv}};
+\node at (0,2.5) [opacity=0,font=\fontencoding{T1}\fontfamily{lmr}\fontsize{7cm}{7cm}\bfseries] {csv};
+\node at (0,-2.5) [opacity=0,font=\fontencoding{T1}\fontfamily{lmr}\fontsize{4.8cm}{4.8cm}\bfseries] {simple};
+\end{scope}
+\end{tikzpicture}
+\end{inctext}
+\end{NoHyper}
+\end{tcolorbox}
+
+
+\clearpage
+\tableofcontents
+
+\clearpage
+\section{Introduction}%
+The |csvsimple-l3| package is applied to the processing of
+CSV\footnote{CSV file: file with comma separated values.} files.
+This processing is controlled by key value assignments according to the
+syntax of |l3keys|. Sample applications of the package
+are tabular lists, form letters (mail merge), and charts.
+
+An alternative to |csvsimple-l3| is the |datatool| package
+which provides considerably more functions and allows sorting of data by \LaTeX.
+|csvsimple-l3| takes a different approach to the user interface and
+is deliberately restricted to some basic functions with fast
+processing speed.
+
+Mind the following restrictions:
+\begin{itemize}
+\item Sorting is not supported directly but can be done
+ with external tools, see \Fullref{sec:Sorting}.
+\item Values are expected to be comma separated, but the package
+ provides support for other separators, see \Fullref{sec:separators}.
+\item Values are expected to be either not quoted or quoted with
+ curly braces |{}| of \TeX\ groups. Other quotes like doublequotes
+ are not supported directly, but can be achieved
+ with external tools, see \Fullref{sec:importeddata}.
+\item Every data line is expected to contain the same amount of values.
+ Unfeasible data lines are silently ignored by default, but this can
+ be configured, see \Fullref{sec:consistency}.
+\end{itemize}
+
+
+\subsection{Loading the Package}
+|csvsimple-l3| is loaded with \emph{one} of the following
+alternatives inside the preamble:
+\begin{dispListing}
+\usepackage[l3]{csvsimple}
+ % or alternatively (not simultaneously!)
+\usepackage{csvsimple-l3}
+\end{dispListing}
+
+Not automatically loaded, but used for many examples are the packages
+|longtable|, |booktabs|, and |ifthen|.
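+For instance, a minimal preamble for the examples below could look as follows
+(a sketch; load only the packages your own examples actually need):
+\begin{dispListing}
+\usepackage{longtable,booktabs,ifthen}
+\usepackage{csvsimple-l3}
+\end{dispListing}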
+
+
+\clearpage
+\subsection{First Steps}
+Every line of a processable CSV file has to contain the same number of
+comma\footnote{See \refKey{/csvsim/separator} for separators other than comma.} separated values. The curly braces |{}| of \TeX\ groups can be used
+to mask a block which may contain commas that should not be treated as separators.
+
+The first line of such a CSV file is usually but not necessarily a header line
+which contains the identifiers for each column.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{grade.csv}
+name,givenname,matriculation,gender,grade
+Maier,Hans,12345,m,1.0
+Huber,Anna,23456,f,2.3
+Weißbäck,Werner,34567,m,5.0
+Bauer,Maria,19202,f,3.3
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{grade}
+
+\smallskip
+The simplest way to display a CSV file in tabular form is to process it
+with the \refCom{csvautotabular} command.
+
+\begin{dispExample}
+\csvautotabular{grade.csv}
+\end{dispExample}
+
+
+Typically, one would use \refCom{csvreader} instead of |\csvautotabular| to
+gain full control over the interpretation of the included data.
+
+In the following example, the entries of the header line are automatically
+assigned to \TeX\ macros which may then be used freely.
+
+
+\begin{dispExample}
+\begin{tabular}{|l|c|}\hline%
+\bfseries Person & \bfseries Matr.~No.
+\csvreader[
+ head to column names
+ ]{grade.csv}{}{%
+ \\\givenname\ \name & \matriculation
+ }%
+\\\hline
+\end{tabular}
+\end{dispExample}
+
+
+\clearpage
+|\csvreader| is controlled by a wealth of options. For example, for table
+applications line breaks are easily inserted by
+\refKey{/csvsim/late after line}. This defines code to be executed just before
+the following line.
+Additionally, the assignment of columns to \TeX\ macros is shown here in a
+non-automated way.
+
+\begin{dispExample}
+\begin{tabular}{|r|l|c|}\hline%
+& Person & Matr.~No.\\\hline\hline
+\csvreader[
+ late after line = \\\hline
+ ]{grade.csv}%
+ {name=\name, givenname=\firstname, matriculation=\matnumber}{%
+ \thecsvrow & \firstname~\name & \matnumber
+ }%
+\end{tabular}
+\end{dispExample}
+
+\smallskip
+An even more comfortable and preferable way to create a table is to set
+appropriate option keys. Note that this gives you the possibility to create a
+meta key (called a style here) which contains the whole table creation,
+using \refCom{csvstyle} or |keys_define:nn| from |l3keys|.
+
+\begin{dispExample}
+\csvreader[
+ tabular = |r|l|c|,
+ table head = \hline & Person & Matr.~No.\\\hline\hline,
+ late after line = \\\hline
+ ]{grade.csv}
+ {name=\name, givenname=\firstname, matriculation=\matnumber}{%
+ \thecsvrow & \firstname~\name & \matnumber
+ }%
+\end{dispExample}
+
+
+\clearpage
+The next example shows such a style definition with the convenience macro
+\refCom{csvstyle}. Here, we again see the automated assignment of header
+entries to column names by \refKey{/csvsim/head to column names}.
+For this, the header entries must not contain spaces or special characters.
+But you can always assign entries to canonical macro names manually, as in the
+examples above. Here, we also add a \refKey{/csvsim/head to column names prefix}
+to avoid macro name clashes.
+
+\begin{dispExample}
+\csvstyle{myTableStyle}{
+ tabular = |r|l|c|,
+ table head = \hline & Person & Matr.~No.\\\hline\hline,
+ late after line = \\\hline,
+ head to column names,
+ head to column names prefix = MY,
+ }
+
+\csvreader[myTableStyle]
+ {grade.csv}{}{%
+ \thecsvrow & \MYgivenname~\MYname & \MYmatriculation
+ }
+\end{dispExample}
+
+
+\smallskip
+Another way to address columns is to use their Roman numerals.
+The direct addressing is done by |\csvcoli|, |\csvcolii|, |\csvcoliii|, \ldots:
+
+\begin{dispExample}
+\csvreader[
+ tabular = |r|l|c|,
+ table head = \hline & Person & Matr.~No.\\\hline\hline,
+ late after line = \\\hline
+ ]{grade.csv}{}{%
+ \thecsvrow & \csvcolii~\csvcoli & \csvcoliii
+ }
+\end{dispExample}
+
+\smallskip
+And yet another method of assigning macros to columns is to use Arabic column
+numbers for the assignment:
+
+\begin{dispExample}
+\csvreader[
+ tabular = |r|l|c|,
+ table head = \hline & Person & Matr.~No.\\\hline\hline,
+ late after line = \\\hline]%
+ {grade.csv}
+ {1=\name, 2=\firstname, 3=\matnumber}{%
+ \thecsvrow & \firstname~\name & \matnumber
+ }
+\end{dispExample}
+
+\smallskip
+For recurring applications, the |l3keys| syntax allows you to create your own
+meta options (styles) for a consistent and centralized design. The following
+example is easily modified to include more or fewer option settings.
+
+\begin{dispExample}
+\csvstyle{myStudentList}{%
+ tabular = |r|l|c|,
+ table head = \hline & Person & #1\\\hline\hline,
+ late after line = \\\hline,
+ column names = {name=\name, givenname=\firstname}
+ }
+
+\csvreader[ myStudentList={Matr.~No.} ]
+ {grade.csv}
+ {matriculation=\matnumber}{%
+ \thecsvrow & \firstname~\name & \matnumber
+ }%
+\hfill%
+\csvreader[ myStudentList={Grade} ]
+ {grade.csv}
+ {grade=\grade}{%
+ \thecsvrow & \firstname~\name & \grade
+ }
+\end{dispExample}
+
+
+\clearpage
+Alternatively, column names can be set by \refCom{csvnames}
+and style definitions by \refCom{csvstyle}.
+With this, the last example is rewritten as follows:
+
+\begin{dispExample}
+\csvnames{myNames}{1=\name,2=\firstname,3=\matnumber,5=\grade}
+\csvstyle{myStudentList}{
+ tabular = |r|l|c|,
+ table head = \hline & Person & #1\\\hline\hline,
+ late after line = \\\hline,
+ myNames
+ }
+
+\csvreader[ myStudentList={Matr.~No.} ]
+ {grade.csv}{}{%
+ \thecsvrow & \firstname~\name & \matnumber
+ }%
+\hfill%
+\csvreader[ myStudentList={Grade} ]
+ {grade.csv}{}{%
+ \thecsvrow & \firstname~\name & \grade
+ }
+\end{dispExample}
+
+\smallskip
+The data lines of a CSV file can also be filtered. In the following example,
+a certificate is printed only for students with a grade other than 5.0.
+
+\begin{dispExample}
+\csvreader[
+ filter not strcmp={\grade}{5.0}
+ ]{grade.csv}
+ {1=\name,2=\firstname,3=\matnumber,4=\gender,5=\grade}{%
+ \begin{center}\Large\bfseries Certificate in Mathematics\end{center}
+ \large\ifcsvstrcmp{\gender}{f}{Ms.}{Mr.}
+ \firstname~\name, matriculation number \matnumber, has passed the test
+ in mathematics with grade \grade.\par\ldots\par
+ }%
+\end{dispExample}
+
+
+\clearpage
+\section{Macros for the Processing of CSV Files}\label{sec:makros}%
+
+\begin{docCommand}{csvreader}{\oarg{options}\marg{file name}\marg{assignments}\marg{command list}}
+ \refCom{csvreader} reads the file denoted by \meta{file name} line by line.
+ Every line of the file has to contain the same number of
+ comma separated values. The curly braces |{}| of \TeX\ groups can be used
+ to mask a block which may contain commas that should not be treated as separators.\smallskip
+
+ The first line of such a CSV file is by default but not necessarily
+ processed as a header line which contains the identifiers for each column.
+ The entries of this line can be used to give \meta{assignments} to \TeX\ macros
+ to address the columns. The number of entries of this first line
+ determines the accepted number of entries for all following lines.
+ Every line which contains a higher or lower number of entries is ignored
+ during standard processing.\smallskip
+
+ The \meta{assignments} are given as a comma separated list of key value pairs
+ \mbox{\meta{name}|=|\meta{macro}}. Here, \meta{name} is an entry from the
+ header line \emph{or} the Arabic number of the addressed column.
+ \meta{macro} is some \TeX\ macro which gets the content of the addressed column.\smallskip
+
+ The \meta{command list} is executed for every accepted data line. Inside the
+ \meta{command list}, the following are applicable:
+ \begin{itemize}
+ \item \docAuxCommand{thecsvrow} or the counter |csvrow| which contains the number of the
+ current data line (starting with 1).
+ \item \docAuxCommand{csvcoli}, \docAuxCommand{csvcolii}, \docAuxCommand{csvcoliii}, \ldots,
+ which contain the contents of the column entries of the current data line.
+ Alternatively, one can use:
+ \item \meta{macro} from the \meta{assignments} to have a logical
+ addressing of a column entry.
+ \end{itemize}
+ Note that the \meta{command list} is allowed to contain |\par| and
+ that \textbf{all macro definitions are made global} to be used for table applications.\smallskip
+
+ The processing of the given CSV file can be controlled by various
+ \meta{options} given as a key value list. The feasible option keys
+ are described in section \ref{sec:schluessel} starting on page \pageref{sec:schluessel}.
+
+\begin{dispExample}
+\csvreader[
+ tabular = |r|l|l|,
+ table head = \hline,
+ table foot = \hline
+ ]{grade.csv}%
+ {name=\name, givenname=\firstname, grade=\grade}{%
+ \grade & \firstname~\name & \csvcoliii
+ }
+\end{dispExample}
+
+Essentially, the |\csvreader| command consists of a \refCom{csvloop} macro with
+the following parameters:\par
+|\csvloop{|\meta{options}|, file=|\meta{file name}|, column names=|\meta{assignments}|,|\\
+ \hspace*{2cm} |command=|\meta{command list}|}|\par
+ Therefore, the application of the keys \refKey{/csvsim/file} and \refKey{/csvsim/command}
+is useless for |\csvreader|.
+\end{docCommand}
+
+
+\clearpage
+\begin{docCommand}{csvloop}{\marg{options}}
+ Usually, \refCom{csvreader} may be preferred instead of |\csvloop|.
+ \refCom{csvreader} is based on |\csvloop| which takes a mandatory list of
+ \meta{options} in key value syntax.
+ This list of \meta{options} controls the entire processing. In particular,
+ it has to contain the CSV file name.
+\begin{dispExample}
+\csvloop{
+ file = {grade.csv},
+ head to column names,
+ command = \name,
+ before reading = {List of students:\ },
+ late after line = {{,}\ },
+ late after last line = .
+ }
+\end{dispExample}
+\end{docCommand}
+
+\bigskip
+
+The following |\csvauto...| commands are intended for a quick data overview
+with limited formatting potential.
+See Subsection~\ref{subsec:tabsupport} on page \pageref{subsec:tabsupport}
+for the general table options in combination with \refCom{csvreader} and
+\refCom{csvloop}.
+
+\begin{docCommands}[
+ doc parameter = \oarg{options}\marg{file name}
+ ]
+ {
+ { doc name = csvautotabular },
+ { doc name = csvautotabular*, doc new = 2021-06-25 }
+ }
+ |\csvautotabular| or |\csvautotabular*|
+ is an abbreviation for the application of the option key
+ \refKey{/csvsim/autotabular} or \refKey{/csvsim/autotabular*}
+ together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ The star variant treats the first line as a data line and not as a header line.
+\begin{dispExample}
+\csvautotabular*{grade.csv}
+\end{dispExample}
+\begin{dispExample}
+\csvautotabular[filter equal={\csvcoliv}{f}]{grade.csv}
+\end{dispExample}
+\end{docCommands}
+
+\clearpage
+
+\begin{docCommands}[
+ doc parameter = \oarg{options}\marg{file name}
+ ]
+ {
+ { doc name = csvautolongtable },
+ { doc name = csvautolongtable*, doc new = 2021-06-25 }
+ }
+ |\csvautolongtable| or |\csvautolongtable*|
+ is an abbreviation for the application of the option key
+ \refKey{/csvsim/autolongtable} or \refKey{/csvsim/autolongtable*}
+ together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ For application, the package |longtable| is required which has to be
+ loaded in the preamble.
+ The star variant treats the first line as a data line and not as a header line.
+\begin{dispListing}
+\csvautolongtable{grade.csv}
+\end{dispListing}
+\csvautolongtable{grade.csv}
+\end{docCommands}
+
+
+
+\begin{docCommands}[
+ doc parameter = \oarg{options}\marg{file name}
+ ]
+ {
+ { doc name = csvautobooktabular },
+ { doc name = csvautobooktabular*, doc new = 2021-06-25 }
+ }
+ |\csvautobooktabular| or |\csvautobooktabular*|
+ is an abbreviation for the application of the option key
+ \refKey{/csvsim/autobooktabular} or \refKey{/csvsim/autobooktabular*}
+ together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ For application, the package |booktabs| is required which has to be
+ loaded in the preamble.
+ The star variant treats the first line as a data line and not as a header line.
+\begin{dispExample}
+\csvautobooktabular{grade.csv}
+\end{dispExample}
+\end{docCommands}
+
+
+\begin{docCommands}[
+ doc parameter = \oarg{options}\marg{file name}
+ ]
+ {
+ { doc name = csvautobooklongtable },
+ { doc name = csvautobooklongtable*, doc new = 2021-06-25 }
+ }
+ |\csvautobooklongtable| or |\csvautobooklongtable*|
+ is an abbreviation for the application of the option key
+ \refKey{/csvsim/autobooklongtable} or \refKey{/csvsim/autobooklongtable*}
+ together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ For application, the packages |booktabs| and |longtable| are required which have to be
+ loaded in the preamble.
+ The star variant treats the first line as a data line and not as a header line.
+\begin{dispListing}
+\csvautobooklongtable{grade.csv}
+\end{dispListing}
+\csvautobooklongtable{grade.csv}
+\end{docCommands}
+
+
+
+\clearpage
+
+\begin{docCommand}[doc updated = 2021-06-25]{csvset}{\marg{options}}
+ Sets \meta{options} for every following
+ \refCom{csvreader} and \refCom{csvloop}.
+ Note that most options are set to default values at the beginning of these
+ commands and therefore cannot reasonably be defined by \refCom{csvset}.
+ But it may be used for options like \refKey{/csvsim/csvsorter command}
+ to give global settings. Also see \refKey{/csvsim/every csv}.
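+
+ For illustration, a minimal sketch of such a global setting in the preamble;
+ the value |tools/csvsorter| is only a hypothetical placeholder for a local
+ CSV-Sorter installation:
+\begin{dispListing}
+\csvset{csvsorter command=tools/csvsorter}
+\end{dispListing}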
+\end{docCommand}
+
+
+\begin{docCommand}{csvstyle}{\marg{key}\marg{options}}
+ Defines a new |l3keys| meta key to call other keys. It is used to
+ make abbreviations for convenient key set applications.
+ The new \meta{key} can take one parameter. The name \refCom{csvstyle}
+ originates from an old version of |csvsimple| which used |pgfkeys|
+ instead of |l3keys|.
+
+\begin{dispExample}
+\csvstyle{grade list}{
+ column names = {name=\name, givenname=\firstname, grade=\grade}
+ }
+\csvstyle{passed}{
+ filter not strcmp = {\grade}{5.0}
+ }
+The following students passed the test in mathematics:\\
+\csvreader[grade list,passed]{grade.csv}{}{
+ \firstname\ \name\ (\grade);
+ }
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvnames}{\marg{key}\marg{assignments}}
+ Abbreviation for |\csvstyle{|\meta{key}|}{column names=|\marg{assignments}|}|
+ to define additional \meta{assignments} of macros to columns.
+\begin{dispExample}
+\csvnames{grade list}{
+ name=\name, givenname=\firstname, grade=\grade
+ }
+\csvstyle{passed}{
+ filter not strcmp = {\grade}{5.0}
+ }
+The following students passed the test in mathematics:\\
+\csvreader[grade list,passed]{grade.csv}{}{
+ \firstname\ \name\ (\grade);
+ }
+\end{dispExample}
+\end{docCommand}
+
+
+%\begin{docCommand}{csvheadset}{\marg{assignments}}
+% For some special cases, this command can be used to change the
+% \meta{assignments} of macros to columns during execution of
+% \refCom{csvreader} and \refCom{csvloop}.
+%\begin{dispExample}
+%\csvreader{grade.csv}{}%
+% { \csvheadset{name=\n} \fbox{\n}
+% \csvheadset{givenname=\n} \ldots\ \fbox{\n} }%
+%\end{dispExample}
+%\end{docCommand}
+
+\clearpage
+
+
+\begin{docCommand}[doc updated=2021-06-28]{ifcsvoddrow}{\marg{then macros}\marg{else macros}}
+ Inside the command list of \refCom{csvreader}, the \meta{then macros}
+ are executed for odd-numbered data lines, and the \meta{else macros}
+ are executed for even-numbered lines.
+ \refCom{ifcsvoddrow} is expandable.
+\begin{dispExample}
+\csvreader[
+ head to column names,
+ tabular = |l|l|l|l|,
+ table head = \hline\bfseries \# & \bfseries Name & \bfseries Grade\\\hline,
+ table foot = \hline
+ ]{grade.csv}{}{%
+ \ifcsvoddrow{\slshape\thecsvrow & \slshape\name, \givenname & \slshape\grade}%
+ {\bfseries\thecsvrow & \bfseries\name, \givenname & \bfseries\grade}
+ }
+\end{dispExample}
+
+The |\ifcsvoddrow| macro may be used for striped tables:
+
+\begin{dispExample}
+% This example needs the xcolor package
+\csvreader[
+ head to column names,
+ tabular = rlcc,
+ table head = \hline\rowcolor{red!50!black}\color{white}\# & \color{white}Person
+ & \color{white}Matr.~No. & \color{white}Grade,
+ late after head = \\\hline\rowcolor{yellow!50},
+ late after line = \ifcsvoddrow{\\\rowcolor{yellow!50}}{\\\rowcolor{red!25}}
+ ]{grade.csv}{}{%
+ \thecsvrow & \givenname~\name & \matriculation & \grade
+ }
+\end{dispExample}
+
+Alternatively, |\rowcolors| from the |xcolor| package can be used for this
+purpose:
+
+\begin{dispExample}
+% This example needs the xcolor package
+\csvreader[
+ head to column names,
+ tabular = rlcc,
+ before table = \rowcolors{2}{red!25}{yellow!50},
+ table head = \hline\rowcolor{red!50!black}\color{white}\# & \color{white}Person
+ & \color{white}Matr.~No. & \color{white}Grade\\\hline
+ ]{grade.csv}{}{%
+ \thecsvrow & \givenname~\name & \matriculation & \grade
+ }
+\end{dispExample}
+
+ The deprecated, but still available alias for this command is
+ \docAuxCommand{csvifoddrow}.
+\end{docCommand}
+
+\clearpage
+
+\begin{docCommand}[doc updated=2021-06-28]{ifcsvfirstrow}{\marg{then macros}\marg{else macros}}
+ Inside the command list of \refCom{csvreader}, the \meta{then macros}
+ are executed for the first data line, and the \meta{else macros}
+ are executed for all following lines.
+ \refCom{ifcsvfirstrow} is expandable.
+\begin{dispExample}
+\csvreader[
+ tabbing,
+ head to column names,
+ table head = {\hspace*{3cm}\=\kill}
+ ]{grade.csv}{}{%
+ \givenname~\name \> (\ifcsvfirstrow{first entry!!}{following entry})
+ }
+\end{dispExample}
+ The deprecated, but still available alias for this command is
+ \docAuxCommand{csviffirstrow}.
+\end{docCommand}
+
+\medskip
+
+
+\begin{docCommand}{csvfilteraccept}{}
+ All following consistent data lines will be accepted and processed.
+ This command overwrites all previous filter settings and may be used
+ inside \refKey{/csvsim/full filter} to implement
+ your own filtering rule together with |\csvfilterreject|.
+\begin{dispExample}
+\csvreader[
+ autotabular,
+ full filter = \ifcsvstrcmp{\csvcoliv}{m}{\csvfilteraccept}{\csvfilterreject}
+ ]{grade.csv}{}{%
+ \csvlinetotablerow
+ }
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvfilterreject}{}
+ All following data lines will be ignored.
+ This command overwrites all previous filter settings.
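+
+ For illustration, a minimal sketch which accepts only the first two data
+ lines of |grade.csv| and rejects all remaining ones; |\ifnumless| is
+ provided by the |etoolbox| package:
+\begin{dispExample}
+\csvreader[
+ autotabular,
+ full filter = \ifnumless{\thecsvinputline}{4}{\csvfilteraccept}{\csvfilterreject}
+ ]{grade.csv}{}{%
+ \csvlinetotablerow
+ }
+\end{dispExample}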
+\end{docCommand}
+
+
+\begin{docCommand}{csvline}{}
+ This macro contains the current and unprocessed data line.
+\begin{dispExample}
+\csvreader[
+ no head,
+ tabbing,
+ table head = {\textit{line XX:}\=\kill}
+ ]{grade.csv}{}{%
+ \textit{line \thecsvrow:} \> \csvline
+ }
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}[doc updated=2016-07-01]{csvlinetotablerow}{}
+ Typesets the currently processed data line with |&| between the entries.
+ %Most users will never apply this command.
+\end{docCommand}
+
+\clearpage
+\begin{docCommands}{
+ { doc name = thecsvrow , doc updated = 2021-06-25 },
+ { doc name = g_csvsim_row_int, doc new = 2021-06-25 }
+ }
+ Typesets the current data line number. This is the
+ current number of accepted data lines, not counting the header line.
+ Despite the name, there is no associated \LaTeX\ counter |csvrow|,
+ but \refCom{thecsvrow} is an accessor to the \LaTeX3 integer
+ \refCom{g_csvsim_row_int}.
+\end{docCommands}
+
+
+\begin{docCommands}[doc new=2021-06-25]{
+ { doc name = thecsvcolumncount },
+ { doc name = g_csvsim_columncount_int }
+ }
+ Typesets the number of columns of the current CSV file. This number
+ is either computed from the first valid line (header or data) or
+ given by \refKey{/csvsim/column count}.
+ Despite the name, there is no associated \LaTeX\ counter |csvcolumncount|,
+ but \refCom{thecsvcolumncount} is an accessor to the \LaTeX3 integer
+ \refCom{g_csvsim_columncount_int}.
+\begin{dispExample}
+\csvreader{grade.csv}{}{}%
+The last file consists of \thecsvcolumncount{} columns and
+\thecsvrow{} accepted data lines. The total number of lines
+is \thecsvinputline{}.
+\end{dispExample}
+\end{docCommands}
+
+
+\begin{docCommands}{
+ { doc name = thecsvinputline , doc updated = 2021-06-25 },
+ { doc name = g_csvsim_inputline_int, doc new = 2021-06-25 }
+ }
+ Typesets the current file line number. This is the
+ current number of all data lines including the header line and all
+ lines filtered out.
+ Despite the name, there is no associated \LaTeX\ counter |csvinputline|,
+ but \refCom{thecsvinputline} is an accessor to the \LaTeX3 integer
+ \refCom{g_csvsim_inputline_int}.
+\begin{dispExample}
+\csvreader[
+ no head,
+ filter test = \ifnumequal{\thecsvinputline}{3}
+ ]{grade.csv}{}{%
+ The line with number \thecsvinputline\ contains: \csvline
+ }
+\end{dispExample}
+\end{docCommands}
+
+
+
+
+
+\clearpage
+\section{Option Keys}\label{sec:schluessel}%
+For the \meta{options} in \refCom{csvreader} and \refCom{csvloop},
+the following |l3keys| keys can be applied. The \meta{module} name |/csvsim/| is not
+to be used inside these macros.
+
+
+\subsection{Command Definition}%--------%[[
+
+\begin{docCsvKey}{before reading}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed before the CSV file is opened.
+\end{docCsvKey}
+
+\begin{docCsvKey}{after head}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after the header line is read.
+ \refCom{thecsvcolumncount} and header entries are available.
+\end{docCsvKey}
+
+\begin{docCsvKey}{before filter}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and consistency checking
+ of a data line. It is executed before any filter condition is checked,
+ see e.g. \refKey{/csvsim/filter ifthen}.
+ Also see \refKey{/csvsim/full filter}.
+ All line entries are available.
+\end{docCsvKey}
+
+\begin{docCsvKey}{late after head}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and disassembling
+ of the first accepted data line.
+ These operations are executed before further processing of this line.
+ \meta{code} should not refer to any data content, but may be something
+ like |\\|.
+\end{docCsvKey}
+
+\begin{docCsvKey}{late after line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and disassembling
+ of the next accepted data line (after \refKey{/csvsim/before filter}).
+ These operations are executed before further processing of this line.
+ \meta{code} should not refer to any data content, but may be something
+ like |\\|.
+ \refKey{/csvsim/late after line} overwrites
+ \refKey{/csvsim/late after first line} and
+ \refKey{/csvsim/late after last line}.
+ Note that table options like \refKey{/csvsim/tabular} set this key to |\\|
+ automatically.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{late after first line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and disassembling
+ of the second accepted data line instead of \refKey{/csvsim/late after line}.
+ \meta{code} should not refer to any data content.
+ This key has to be set after \refKey{/csvsim/late after line}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{late after last line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after processing of the last
+ accepted data line instead of \refKey{/csvsim/late after line}.
+ \meta{code} should not refer to any data content.
+ This key has to be set after \refKey{/csvsim/late after line}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{before line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after \refKey{/csvsim/late after line}
+ and before \refKey{/csvsim/command}.
+ All line entries are available.
+ \refKey{/csvsim/before line} overwrites
+ \refKey{/csvsim/before first line}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{before first line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed instead of \refKey{/csvsim/before line}
+ for the first accepted data line.
+ All line entries are available.
+ This key has to be set after \refKey{/csvsim/before line}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{command}{=\meta{code}}{no default, initially \cs{csvline}}
+ Sets the \meta{code} to be executed for every accepted data line.
+ It is executed between \refKey{/csvsim/before line} and \refKey{/csvsim/after line}.
+ \refKey{/csvsim/command} describes the main processing of the line
+ entries. \refCom{csvreader} sets \refKey{/csvsim/command} as mandatory
+ parameter.
+\end{docCsvKey}
+
+\pagebreak
+
+\begin{docCsvKey}{after line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed for every accepted data line
+ after \refKey{/csvsim/command}.
+ All line entries are still available.
+ \refKey{/csvsim/after line} overwrites \refKey{/csvsim/after first line}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{after first line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed instead of \refKey{/csvsim/after line}
+ for the first accepted data line.
+ All line entries are still available.
+ This key has to be set after \refKey{/csvsim/after line}.
+\end{docCsvKey}
+
+\begin{docCsvKey}{after reading}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after the CSV file is closed.
+\end{docCsvKey}
+
+\bigskip
+
+The following example illustrates the sequence of command execution.
+Note that \refKey{/csvsim/command} is set by the mandatory last
+parameter of \refCom{csvreader}.
+
+\begin{dispExample}
+\csvreader[
+ before reading = \meta{before reading}\\,
+ after head = \meta{after head},
+ before filter = \\\meta{before filter},
+ late after head = \meta{late after head},
+ late after line = \meta{late after line},
+ late after first line = \meta{late after first line},
+ late after last line = \\\meta{late after last line},
+ before line = \meta{before line},
+ before first line = \meta{before first line},
+ after line = \meta{after line},
+ after first line = \meta{after first line},
+ after reading = \\\meta{after reading}
+ ]{grade.csv}{name=\name}{\textbf{\name}}%
+\end{dispExample}
+
+Additional command definition keys are provided for the supported tables,
+see Section~\ref{subsec:tabsupport} from page~\pageref{subsec:tabsupport}.
+
+\clearpage
+\subsection{Header Processing and Column Name Assignment}%
+
+\begin{docCsvKey}{head}{\colOpt{=true\textbar false}}{default |true|, initially |true|}
+ If this key is set, the first line of the CSV file is treated as a header
+ line which can be used for column name assignments.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no head}{}{no value}
+ Abbreviation for |head=false|, i.\,e. the first line of the CSV file is
+  treated as a data line.
+ Note that this option cannot be used in combination with
+ the |\csvauto...| commands like \refCom{csvautotabular}, etc.
+ Instead, there are \emph{star} variants like \refCom{csvautotabular*} to
+ process files without header line.
+ See Section~\ref{noheader} on page~\pageref{noheader} for examples.
+\end{docCsvKey}
+
+\begin{docCsvKey}{column names}{=\marg{assignments}}{no default, initially empty}
+ Adds some new \meta{assignments} of macros to columns in key value syntax.
+ Existing assignments are kept.\par
+ The \meta{assignments} are given as comma separated list of key value pairs
+ \mbox{\meta{name}|=|\meta{macro}}. Here, \meta{name} is an entry from the
+ header line \emph{or} the arabic number of the addressed column.
+ \meta{macro} is some \TeX\ macro which gets the content of the addressed column.
+\begin{dispListing}
+ column names = {name=\surname, givenname=\firstname, grade=\grade}
+\end{dispListing}
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{column names reset}{}{no value}
+ Clears all assignments of macros to columns.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{head to column names}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, the entries of the header line are used automatically
+  as macro names for the columns. This option can only be used if
+  the header entries do not contain spaces or special characters, so that
+  they yield feasible \LaTeX\ macro names.
+ Note that the macro definition is \emph{global} and may therefore override
+ existing macros for the rest of the document. Adding
+ \refKey{/csvsim/head to column names prefix} may help to avoid unwanted
+ overrides.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}[][doc new=2019-07-16]{head to column names prefix}{=\meta{text}}{no default, initially empty}
+ The given \meta{text} is prefixed to the name of all macros generated by
+ \refKey{/csvsim/head to column names}. For example, if you use the settings
+\begin{dispListing}
+ head to column names,
+ head to column names prefix=MY,
+\end{dispListing}
+ a header entry |section| will generate the corresponding macro
+ |\MYsection| instead of destroying the standard \LaTeX\ |\section| macro.
+\end{docCsvKey}
+
+
+\clearpage
+\subsection{Consistency Check}\label{sec:consistency}%
+
+\begin{docCsvKey}{check column count}{\colOpt{=true\textbar false}}{default |true|, initially |true|}
+  This key defines whether the number of entries in a data line is checked against
+  an expected value or not.\\
+  If |true|, every inconsistent line is ignored without announcement.\\
+ If |false|, every line is accepted and may produce an error during
+ further processing.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no check column count}{}{no value}
+ Abbreviation for |check column count=false|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}[][doc updated=2021-06-24]{column count}{=\meta{number}}{no default, initially |0|}
+ Sets the \meta{number} of feasible entries per data line.
+ If \refKey{/csvsim/column count} is set to |0|, the number of entries of
+ the first non-empty line determines the column count (automatic detection).
+
+ This setting is only useful in connection with \refKey{/csvsim/no head},
+ since \meta{number} would be replaced by the number of entries in the
+ header line otherwise.
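+
+  For instance (a minimal sketch), a headless file with three entries per
+  data line could be processed with
+\begin{dispListing}
+  no head,
+  column count = 3,
+\end{dispListing}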
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{on column count error}{=\meta{code}}{no default, initially empty}
+  Sets the \meta{code} to be executed for unfeasible data lines.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{warn on column count error}{}{style, no value}
+  Displays a warning for unfeasible data lines.
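+  For instance (a minimal sketch), a warning for every skipped line can be
+  requested directly in the option list:
+\begin{dispListing}
+\csvreader[head to column names, warn on column count error]{grade.csv}{}{%
+  \name: \grade\par}
+\end{dispListing}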
+\end{docCsvKey}
+
+
+\clearpage
+\subsection{Filtering}\label{subsec:filtering}%
+
+Applying a \emph{filter} means that data lines are only processed / displayed
+if they fulfill a given \emph{condition}.
+
+The following string compare filters \refKey{/csvsim/filter strcmp} and
+\refKey{/csvsim/filter equal} are identical by logic, but differ in implementation.
+
+\begin{docCsvKey}{filter strcmp}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are equal after expansion
+ are accepted.
+ The implementation is done with \refCom{ifcsvstrcmp}.
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ filter strcmp = {\gender}{f}, %>> list only female persons <<
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter not strcmp}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are not equal after expansion
+ are accepted.
+ The implementation is done with \refCom{ifcsvnotstrcmp}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter equal}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are equal after expansion
+ are accepted.
+ The implementation is done with the |ifthen| package (loading required!).
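+  A sketch analogous to the \refKey{/csvsim/filter strcmp} example above,
+  but implemented with the |ifthen| package:
+\begin{dispListing}
+  % \usepackage{ifthen}
+  filter equal = {\gender}{f},  %>> list only female persons <<
+\end{dispListing}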
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter not equal}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are not equal after expansion
+ are accepted.
+ The implementation is done with the |ifthen| package (loading required!).
+\end{docCsvKey}
+
+
+\begin{docCsvKey}[][doc new=2021-06-25]{filter fp}{=\meta{floating point expression}}{no default}
+ Only data lines which fulfill a \LaTeX3 \meta{floating point expression}
+ (|l3fp|, |xfp|) are accepted.
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ %>> list only matriculation numbers greater than 20000
+ % and grade less than 4.0 <<
+ filter fp = { \matriculation > 20000 && \grade < 4.0 },
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+\clearpage
+
+\begin{docCsvKey}[][doc new=2021-06-25]{filter bool}{=\meta{boolean expression}}{no default}
+ Only data lines which fulfill a \LaTeX3 \meta{boolean expression} are accepted.
+  Note that such a \meta{boolean expression} needs expl3 code.
+ To preprocess the data line before testing the \meta{condition},
+ the option key \refKey{/csvsim/before filter} can be used.
+\begin{dispExample}
+% For convenience, we save the filter
+\ExplSyntaxOn
+%>> list only matriculation numbers greater than 20000, list only men <<
+\csvstyle{myfilter}
+ {
+ filter~bool =
+ {
+ \int_compare_p:n { \matriculation > 20000 } &&
+ \str_compare_p:eNe { \gender } = { m }
+ }
+ }
+\ExplSyntaxOff
+
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ myfilter
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+\medskip
+\begin{docCommand}[doc new=2021-06-25]{csvfilterbool}{\marg{key}\marg{boolean expression}}
+ Defines a new |l3keys| meta key which applies \refKey{/csvsim/filter bool}
+ with the given \meta{boolean expression}.
+\begin{dispExample}
+% For convenience, we save the filter
+\ExplSyntaxOn
+%>> list only matriculation numbers greater than 20000, list only men <<
+\csvfilterbool{myfilter}
+ {
+ \int_compare_p:n { \matriculation > 20000 } &&
+ \str_compare_p:eNe { \gender } = { m }
+ }
+\ExplSyntaxOff
+
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ myfilter
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCommand}
+
+
+
+\clearpage
+
+\begin{docCsvKey}[][doc new=2016-07-01]{filter test}{=\meta{condition}}{no default}
+ Only data lines which fulfill a logical \meta{condition} are accepted.
+  For the \meta{condition}, every single test which is normally employed like
+\begin{dispListing}
+\iftest{some testing}{true}{false}
+\end{dispListing}
+ can be used as
+\begin{dispListing}
+filter test=\iftest{some testing},
+\end{dispListing}
+ For |\iftest|, tests from the |etoolbox| package like
+ |\ifnumcomp|, |\ifdimgreater|, etc. and from \Fullref{sec:stringtests} can be used.
+  Also, arbitrary macros of your own which fulfill this signature can be applied.
+\begin{dispExample}
+% \usepackage{etoolbox,booktabs}
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ %>> list only matriculation numbers greater than 20000 <<
+ filter test = \ifnumgreater{\matriculation}{20000},
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+\medskip
+\begin{docCsvKey}[][doc new=2016-07-01]{filter expr}{=\meta{boolean expression}}{no default}
+ Only data lines which fulfill a \meta{boolean expression} are accepted.
+ Every \meta{boolean expression}
+ from the |etoolbox| package is feasible (package loading required!).
+ To preprocess the data line before testing the \meta{condition},
+ the option key \refKey{/csvsim/before filter} can be used.
+\begin{dispExample}
+% \usepackage{etoolbox,booktabs}
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ %>> list only matriculation numbers greater than 20000
+ % and grade less than 4.0 <<
+ filter expr = { test{\ifnumgreater{\matriculation}{20000}}
+ and test{\ifdimless{\grade pt}{4.0pt}} },
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+\clearpage
+\begin{docCsvKey}[][doc new=2016-07-01]{filter ifthen}{=\meta{boolean expression}}{no default}
+ Only data lines which fulfill a \meta{boolean expression} are accepted.
+ For the \meta{boolean expression}, every term from the |ifthen| package
+ is feasible (package loading required!).
+ To preprocess the data line before testing the \meta{condition},
+ the option key \refKey{/csvsim/before filter} can be used.
+
+\begin{dispExample}
+% \usepackage{ifthen,booktabs}
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ %>> list only female persons <<
+ filter ifthen=\equal{\gender}{f},
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no filter}{}{no value, initially set}
+ Clears a set filter.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter accept all}{}{no value, initially set}
+ Alias for |no filter|. All consistent data lines are accepted.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter reject all}{}{no value}
+  All data lines are ignored.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}[][doc new=2016-07-01]{full filter}{=\meta{code}}{no default}
+ Technically, this key is an alias for \refKey{/csvsim/before filter}.
+ Philosophically, \refKey{/csvsim/before filter} computes something before
+ a filter condition is set, but \refKey{/csvsim/full filter} should implement
+  the full filtering. In particular, \refCom{csvfilteraccept} or
+  \refCom{csvfilterreject} \emph{should} be called inside the \meta{code}.
+\begin{dispExample}
+% \usepackage{etoolbox,booktabs}
+\csvreader[
+ head to column names,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ %>> list only matriculation numbers greater than 20000
+ % and grade less than 4.0 <<
+ full filter = \ifnumgreater{\matriculation}{20000}
+ {\ifdimless{\grade pt}{4.0pt}{\csvfilteraccept}{\csvfilterreject}}
+ {\csvfilterreject},
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+
+%]]
+
+\clearpage
+\subsection{Line Range}\label{subsec:linerange}
+
+Applying a \emph{line range} means selecting certain line numbers to be
+displayed. These line numbers are not necessarily line numbers of
+the input file, see \refCom{thecsvinputline}, but line numbers of
+type \refCom{thecsvrow}.
+
+For example, if a \emph{filter} was applied, see \Fullref{subsec:filtering},
+and 42 lines are accepted, a \emph{range} could select the first 20 of them or
+line 10 to 30 of the accepted lines.
+
+
+\begin{docCsvKey}[][doc new=2021-06-29]{range}{=\brackets{\meta{range1},\meta{range2},\meta{range3},... }}{no default, initially empty}
+ Defines a comma separated list of line ranges. If a line number \refCom{thecsvrow}
+ satisfies one or more of the given \meta{range1}, \meta{range2}, \ldots,
+ the corresponding line is processed and displayed.
+ If \refKey{/csvsim/range} is set to empty, all lines are accepted.
+
+  Every \meta{range}
+  corresponds to one of the following variants:
+ \begin{tabbing}
+ \hspace*{2cm}\=\kill
+ \texttt{\meta{a}-\meta{b}} \> meaning line numbers \meta{a} to \meta{b}.\\
+ \texttt{\meta{a}-} \> meaning line numbers \meta{a} to |\c_max_int|=2 147 483 647.\\
+ \texttt{-\meta{b}} \> meaning line numbers 1 to \meta{b}.\\
+ \texttt{-} \> meaning line numbers 1 to 2 147 483 647 (inefficient; don't use).\\
+ \texttt{\meta{a}} \> meaning line numbers \meta{a} to \meta{a} (i.e. only \meta{a}).\\
+ \texttt{\meta{a}+\meta{d}} \> meaning line numbers \meta{a} to \meta{a}$+$\meta{d}$-1$.\\
+ \texttt{\meta{a}+} \> meaning line numbers \meta{a} to \meta{a} (i.e. only \meta{a}).\\
+ \texttt{+\meta{d}} \> meaning line numbers 1 to \meta{d}.\\
+ \texttt{+} \> meaning line numbers 1 to 1 (i.e. only 1; weird).\\
+ \end{tabbing}
+
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ range = 2-3,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+
+
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ range = 3-,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+
+
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ range = 2+2,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ range = {2,4},
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+
+To select the last $n$ lines, you have to know or count the total number of lines first.
+The following example displays the last three data lines:
+
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader{grade.csv}{}{}% count line numbers
+\csvreader[
+ head to column names,
+ range = {\thecsvrow-2}-,
+ tabular = llll,
+ table head = \toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot = \bottomrule,
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade
+ }
+\end{dispExample}
+
+\end{docCsvKey}
+
+
+
+\clearpage
+\subsection{Table Support}\label{subsec:tabsupport}%--------%[[
+
+\begin{docCsvKey}{tabular}{=\meta{table format}}{style, no default}
+  Surrounds the CSV processing with |\begin{tabular}|\marg{table format}
+  at the beginning and with |\end{tabular}| at the end.
+Additionally, the commands defined by the key values of
+ \refKey{/csvsim/before table}, \refKey{/csvsim/table head}, \refKey{/csvsim/table foot},
+ and \refKey{/csvsim/after table} are executed at the appropriate places.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{centered tabular}{=\meta{table format}}{style, no default}
+ Like \refKey{/csvsim/tabular} but inside an additional |center| environment.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{longtable}{=\meta{table format}}{style, no default}
+ Like \refKey{/csvsim/tabular} but for the |longtable| environment.
+ This requires the package |longtable| (not loaded automatically).
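+  A minimal sketch (not taken from the package examples):
+\begin{dispListing}
+% \usepackage{longtable}
+\csvreader[
+  head to column names,
+  longtable = lr,
+  table head = \hline\bfseries Name & \bfseries Grade\\\hline\endhead
+               \hline\endfoot,
+  ]{grade.csv}{}{%
+  \name & \grade
+  }
+\end{dispListing}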
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{tabbing}{}{style, no value}
+ Like \refKey{/csvsim/tabular} but for the |tabbing| environment.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{centered tabbing}{}{style, no value}
+ Like \refKey{/csvsim/tabbing} but inside an additional |center| environment.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no table}{}{style, no value}
+ Deactivates |tabular|, |longtable|, and |tabbing|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{before table}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed before |\begin{tabular}| or before |\begin{longtable}|
+ or before |\begin{tabbing}|, respectively.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{table head}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after |\begin{tabular}| or after |\begin{longtable}|
+ or after |\begin{tabbing}|, respectively.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{table foot}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed before |\end{tabular}| or before |\end{longtable}|
+ or before |\end{tabbing}|, respectively.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{after table}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after |\end{tabular}| or after |\end{longtable}|
+ or after |\end{tabbing}|, respectively.
+\end{docCsvKey}
+
+\clearpage
+
+The following |auto| options are the counterparts for the respective quick
+overview commands like \refCom{csvautotabular}. They are listed for
+completeness, but are unlikely to be used directly.
+
+\begin{docCsvKeys}[
+ doc parameter = {=\meta{file name}},
+ doc description = no default,
+ ]
+ {
+ { doc name = autotabular },
+ { doc name = autotabular* },
+ }
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting.
+ The star variant treats the first line as data line and not as header line.
+\end{docCsvKeys}
+
+
+\begin{docCsvKeys}[
+ doc parameter = {=\meta{file name}},
+ doc description = no default,
+ ]
+ {
+ { doc name = autolongtable },
+ { doc name = autolongtable* },
+ }
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
+ using the required |longtable| package.
+ The star variant treats the first line as data line and not as header line.
+\end{docCsvKeys}
+
+
+\begin{docCsvKeys}[
+ doc parameter = {=\meta{file name}},
+ doc description = no default,
+ ]
+ {
+ { doc name = autobooktabular },
+ { doc name = autobooktabular* },
+ }
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
+ using the required |booktabs| package.
+ The star variant treats the first line as data line and not as header line.
+\end{docCsvKeys}
+
+
+\begin{docCsvKeys}[
+ doc parameter = {=\meta{file name}},
+ doc description = no default,
+ ]
+ {
+ { doc name = autobooklongtable },
+ { doc name = autobooklongtable* },
+ }
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
+ using the required |booktabs| and |longtable| packages.
+ The star variant treats the first line as data line and not as header line.
+\end{docCsvKeys}
+
+
+\clearpage
+\subsection{Special Characters}\label{subsec:specchar}
+By default, the CSV content is treated like normal \LaTeX\ text, see
+Subsection~\ref{macrocodexample} on page~\pageref{macrocodexample}.
+However, \TeX\ special characters of the CSV content may also be interpreted
+as normal characters (|\catcode| 12, other), if one or more of the following options are used.
+
+\begin{docCsvKey}{respect tab}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ tabulator sign
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect percent}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ percent sign \verbbox{\%}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect sharp}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ sharp sign \verbbox{\#}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect dollar}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ dollar sign \verbbox{\$}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect and}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ and sign \verbbox{\&}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect backslash}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ backslash sign \verbbox{\textbackslash}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect underscore}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ underscore sign \verbbox{\_}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect tilde}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ tilde sign \verbbox{\textasciitilde}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect circumflex}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ circumflex sign \verbbox{\textasciicircum}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect leftbrace}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ left brace sign \verbbox{\textbraceleft}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect rightbrace}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ right brace sign \verbbox{\textbraceright}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect all}{}{style, no value, initially unset}
+  Sets all special characters from above to normal characters. This results in
+  a quite verbatim interpretation of the CSV content.
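+  For instance (a sketch with a hypothetical file |special.csv| whose entries
+  contain characters like |_|, |%|, or |#|):
+\begin{dispListing}
+% hypothetical file whose entries contain literal _ , % and # characters
+\csvautobooktabular[respect all]{special.csv}
+\end{dispListing}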
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect none}{}{style, no value, initially set}
+  Does not change any of the special characters from above to normal characters.
+\end{docCsvKey}
+
+\clearpage
+\subsection{Separators}\label{sec:separators}%
+\begin{docCsvKey}{separator}{=\meta{sign}}{no default, initially |comma|}
+ \catcode `|=12
+  Sets the \meta{sign} which is treated as the separator between the data values
+ of a data line. Feasible values are:
+ \begin{itemize}
+ \item\docValue{comma}: This is the initial value with '\texttt{,}' as separator.
+ \medskip
+
+ \item\docValue{semicolon}: Sets the separator to '\texttt{;}'.
+\begin{dispExample}
+% \usepackage{tcolorbox} for tcbverbatimwrite
+\begin{tcbverbatimwrite}{testsemi.csv}
+ name;givenname;matriculation;gender;grade
+ Maier;Hans;12345;m;1.0
+ Huber;Anna;23456;f;2.3
+ Weißbäck;Werner;34567;m;5.0
+\end{tcbverbatimwrite}
+
+\csvautobooktabular[separator=semicolon]{testsemi.csv}
+\end{dispExample}
+\medskip
+
+\item\docValue{pipe}: Sets the separator to '\texttt{|}'.
+\begin{dispExample}
+% \usepackage{tcolorbox} for tcbverbatimwrite
+\begin{tcbverbatimwrite}{pipe.csv}
+ name|givenname|matriculation|gender|grade
+ Maier|Hans|12345|m|1.0
+ Huber|Anna|23456|f|2.3
+ Weißbäck|Werner|34567|m|5.0
+\end{tcbverbatimwrite}
+
+\csvautobooktabular[separator=pipe]{pipe.csv}
+\end{dispExample}
+\medskip
+
+\item\docValue{tab}: Sets the separator to the tabulator sign.
+ Automatically, \refKey{/csvsim/respect tab} is set also.
+ \end{itemize}
+\end{docCsvKey}
+
+\clearpage
+\subsection{Miscellaneous}%
+
+\begin{docCsvKey}{every csv}{}{style, initially empty}
+ A meta key (style) definition which is used for every following CSV file.
+ This definition can be overwritten with user code.
+\begin{dispListing}
+% Sets a warning message for unfeasible data lines.
+\csvstyle{every csv}{warn on column count error}
+\end{dispListing}
+\end{docCsvKey}
+
+\begin{docCsvKey}{default}{}{style}
+  A style definition which is applied to every following CSV file and which
+  resets all settings to their default values\footnote{\texttt{default} is used
+    because of the global nature of most settings.}.
+  This key should not be used or changed by the user unless there is a
+  really good reason (and you know what you are doing).
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{file}{=\meta{file name}}{no default, initially |unknown.csv|}
+ Sets the \meta{file name} of the CSV file to be processed.
+ \refCom{csvreader} sets this option by a mandatory parameter.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{preprocessed file}{=\meta{file name}}{no default, initially \texttt{\textbackslash\detokenize{jobname_sorted.csv}}}
+ Sets the \meta{file name} of the CSV file which is the output of a
+ preprocessor.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{preprocessor}{=\meta{macro}}{no default}
+ Defines a preprocessor for the given CSV file.
+ The \meta{macro} has to have two mandatory arguments. The first argument
+ is the original CSV file which is set by \refKey{/csvsim/file}.
+ The second argument is the preprocessed CSV file
+ which is set by \refKey{/csvsim/preprocessed file}.\par\smallskip
+ Typically, the \meta{macro} may call an external program which preprocesses
+ the original CSV file (e.\,g. sorting the file) and creates the
+  preprocessed CSV file. The latter file is used by \refCom{csvreader}
+ or \refCom{csvloop}.
+\begin{dispListing}
+\newcommand{\mySortTool}[2]{%
+ % call to an external program to sort file #1 with resulting file #2
+}
+
+\csvreader[%
+ preprocessed file = \jobname_sorted.csv,
+ preprocessor = \mySortTool,
+ ]{some.csv}{}{%
+ % do something
+}
+\end{dispListing}
+See Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting} for a
+concrete sorting preprocessing implemented with an external tool.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no preprocessing}{}{style, no value, initially set}
+  Clears any preprocessing, i.\,e. preprocessing is switched off.
+\end{docCsvKey}
+
+
+
+\clearpage
+\subsection{Sorting}\label{sec:Sorting}%
+\TeX/\LaTeX\ was not born under a sorting planet. |csvsimple-l3| provides no
+sorting of data lines by \LaTeX-methods since sorting can be done much faster
+and much better by external tools.
+
+First, one should consider the appropriate \emph{place} for sorting:
+\begin{itemize}
+\item CSV files may be sorted by a tool \emph{before} the \LaTeX\ document is processed
+ at all. If the CSV data is not likely to change, this is the most efficient method.
+\item CSV files may be sorted by a tool every time before the \LaTeX\ document is compiled.
+ This could be automated by a shell script or some processing tool like |arara|.
+\item CSV files may be sorted on-the-fly by a tool during compilation of
+ a \LaTeX\ document. This is the most elegant but not the most efficient way.
+\end{itemize}
+
+The first two methods are decoupled from anything concerning |csvsimple-l3|.
+The \refKey{/csvsim/preprocessor} option is made for the third method.
+It allows accessing an external tool for sorting.
+\emph{Which tool} is your choice.
+
+\csvsorter\ was written as a companion tool for |csvsimple|.
+It is an open source Java command-line tool for sorting CSV files, available at\\
+\url{https://T-F-S.github.io/csvsorter/}\quad or\quad
+\url{https://github.com/T-F-S/csvsorter}
+
+It can be
+used for all three sorting approaches described above.
+There is special support for on-the-fly sorting with \csvsorter\ using the
+following options.
+
+\begin{enumerate}\bfseries
+\item To use the sorting options, you have to install \csvsorter\ beforehand!
+\item You have to give permission to call external tools during
+ compilation, i.\,e.\ the command-line options for |latex| have to include
+ |-shell-escape|.
+\end{enumerate}
+
+\bigskip
+
+\begin{docCsvKey}{csvsorter command}{=\meta{system command}}{no default, initially |csvsorter|}
+ The \meta{system command} specifies the system call for \csvsorter\ (without the options).
+ If \csvsorter\ was completely installed following its documentation, there is
+ nothing to change here. If the |csvsorter.jar| file is inside the same
+  directory as the \LaTeX\ source file, you may configure:% preferably inside the preamble:
+\begin{dispListing}
+\csvset{csvsorter command=java -jar csvsorter.jar}
+\end{dispListing}
+\end{docCsvKey}
+
+\begin{docCsvKey}{csvsorter configpath}{=\meta{path}}{no default, initially |.|}
+ Sorting with \csvsorter\ is done using XML configuration files. If these files
+ are not stored inside the same directory as the \LaTeX\ source file, a
+ \meta{path} to access them can be configured:
+\begin{dispListing}
+\csvset{csvsorter configpath=xmlfiles}
+\end{dispListing}
+ Here, the configuration files would be stored in a subdirectory named |xmlfiles|.
+\end{docCsvKey}
+
+\begin{docCsvKey}{csvsorter log}{=\meta{file name}}{no default, initially |csvsorter.log|}
+ Sets the log file of \csvsorter\ to the given \meta{file name}.
+\begin{dispListing}
+\csvset{csvsorter log=outdir/csvsorter.log}
+\end{dispListing}
+ Here, the log file is written to a subdirectory named |outdir|.
+\end{docCsvKey}
+
+\clearpage
+\begin{docCsvKey}{csvsorter token}{=\meta{file name}}{no default, initially |\textbackslash jobname.csvtoken|}
+ Sets \meta{file name} as token file. This is an auxiliary file which
+ communicates the success of \csvsorter\ to |csvsimple|.
+\begin{dispListing}
+\csvset{csvsorter token=outdir/\jobname.csvtoken}
+\end{dispListing}
+ Here, the token file is written to a subdirectory named |outdir|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{sort by}{=\meta{file name}}{style, initially unset}
+ The \meta{file name} denotes an XML configuration file for \csvsorter.
+ Setting this option inside \refCom{csvreader} or
+ \refCom{csvloop} will issue a system call to \csvsorter.
+ \begin{itemize}
+ \item \csvsorter\ uses the given CSV file as input file.
+ \item \csvsorter\ uses \meta{file name} as configuration file.
+ \item The output CSV file is denoted by \refKey{/csvsim/preprocessed file}
+ which is by default \texttt{\textbackslash\detokenize{jobname_sorted.csv}}.
+    This output file is the actual file processed by \refCom{csvreader} or \refCom{csvloop}.
+ \item \csvsorter\ also generates a log file denoted by \refKey{/csvsim/csvsorter log} which is by default |csvsorter.log|.
+ \end{itemize}
+
+\par\medskip\textbf{First example:}
+ To sort our example |grade.csv| file according to |name| and |givenname|, we
+ use the following XML configuration file. Since \csvsorter\ uses double quotes
+ as default brackets for column values, we remove bracket recognition to avoid
+ a clash with the escaped umlauts of the example CSV file.\par\smallskip
+
+\xmllisting{namesort}
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ sort by = namesort.xml,
+ tabular = >{\color{red}}lllll,
+ table head = \toprule Name & Given Name & Matriculation & Gender & Grade\\\midrule,
+ table foot = \bottomrule
+ ]{grade.csv}{}{%
+ \csvlinetotablerow
+ }
+\end{dispExample}
+
+\clearpage\textbf{Second example:}
+ To sort our example |grade.csv| file according to |grade|, we
+ use the following XML configuration file. Further, persons with the same |grade|
+ are sorted by |name| and |givenname|. Since \csvsorter\ uses double quotes
+ as default brackets for column values, we remove bracket recognition to avoid
+ a clash with the escaped umlauts of the example CSV file.\par\smallskip
+
+\xmllisting{gradesort}
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ sort by = gradesort.xml,
+ tabular = llll>{\color{red}}l,
+ table head = \toprule Name & Given Name & Matriculation & Gender & Grade\\\midrule,
+ table foot = \bottomrule
+ ]{grade.csv}{}{%
+ \csvlinetotablerow
+ }
+\end{dispExample}
+
+\clearpage\textbf{Third example:}
+ To generate a matriculation/grade list, we sort our example |grade.csv| file
+ using the following XML configuration file.
+ Again, since \csvsorter\ uses double quotes
+ as default brackets for column values, we remove bracket recognition to avoid
+ a clash with the escaped umlauts of the example CSV file.\par\smallskip
+
+\xmllisting{matriculationsort}
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[
+ head to column names,
+ sort by = matriculationsort.xml,
+ tabular = >{\color{red}}ll,
+ table head = \toprule Matriculation & Grade\\\midrule,
+ table foot = \bottomrule
+ ]{grade.csv}{}{%
+ \matriculation & \grade
+ }
+\end{dispExample}
+\end{docCsvKey}
+
+
+\clearpage
+\begin{docCsvKey}{new sorting rule}{=\marg{name}\marg{file name}}{style, initially unset}
+This is a convenience option to generate a new shortcut for often used
+\refKey{/csvsim/sort by} applications. It also adds a more semantic touch.
+The new shortcut option is
+\tcbox[on line,size=small,colback=white,colframe=red]{|sort by| \meta{name}} which expands to
+\tcbox[on line,size=small,colback=white,colframe=red]{|sort by=|\marg{file name}}.\par\medskip
+
+Consider the following example:
+\begin{dispExample}
+\csvautotabular[sort by=namesort.xml]{grade.csv}
+\end{dispExample}
+A good place for setting up a new sorting rule would be inside the preamble:
+
+\csvset{new sorting rule={name}{namesort.xml}}
+\begin{dispListing}
+\csvset{new sorting rule={name}{namesort.xml}}
+\end{dispListing}
+
+Now, we can use the new rule:
+\begin{dispExample}
+\csvautotabular[sort by name]{grade.csv}
+\end{dispExample}
+\end{docCsvKey}
+
+
+\begin{docCommand}[doc new=2021-06-28]{csvsortingrule}{\marg{name}\marg{file name}}
+ Identical in function to \refKey{/csvsim/new sorting rule}, see above.
+A good place for setting up a new sorting rule would be inside the preamble:
+
+\csvsortingrule{name}{namesort.xml}
+\begin{dispListing}
+\csvsortingrule{name}{namesort.xml}
+\end{dispListing}
+
+Now, we can use the new rule:
+\begin{dispExample}
+\csvautotabular[sort by name]{grade.csv}
+\end{dispExample}
+\end{docCommand}
+
+
+
+\clearpage
+\section{String and Number Tests}\label{sec:stringtests}%
+
+The following string tests complement the string tests
+from packages like |etoolbox|. They all do the same thing, i.e.,
+compare expanded strings for equality. To some extent, they are
+provided for backward compatibility.
+\begin{itemize}
+\item\refCom{ifcsvstrcmp} may be the most efficient method, because it uses
+ the native compiler string comparison (if available).
+\item\refCom{ifcsvstrequal} does not rely on a compiler. It also is the
+ fallback implementation for \refCom{ifcsvstrcmp}, if there is no
+ native comparison method.
+\item\refCom{ifcsvprostrequal} is possibly more failsafe than the other two
+ string tests. It may be used, if strings contain dirty things like |\textbf{A}|.
+\end{itemize}
+\medskip
+
+\begin{docCommand}[doc new and updated={2016-07-01}{2021-06-28}]{ifcsvstrcmp}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
+ The comparison is done using |\str_compare:eNeTF|.
+ \refCom{ifcsvstrcmp} is expandable.
+\end{docCommand}
+
+
+\begin{docCommand}[doc new and updated={2016-07-01}{2021-06-28}]{ifcsvnotstrcmp}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are \emph{not} equal, and \meta{false} otherwise.
+ The implementation uses \refCom{ifcsvstrcmp}.
+  \refCom{ifcsvnotstrcmp} is expandable.
+\end{docCommand}
+
+
+\begin{docCommand}[doc new and updated={2016-07-01}{2021-06-28}]{ifcsvstrequal}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
+ The strings are expanded
+ and the comparison is done using |\tl_if_eq:NNTF|.
+ \refCom{ifcsvstrequal} is not expandable.
+\end{docCommand}
+
+
+\begin{docCommand}[doc new and updated={2016-07-01}{2021-06-28}]{ifcsvprostrequal}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
+  The strings are expanded with |\protected@edef|
+ in the test, i.e. parts of the
+ strings which are protected stay unexpanded.
+ The comparison is done using |\tl_if_eq:NNTF|.
+ \refCom{ifcsvprostrequal} is not expandable.
+\end{docCommand}
+
+
+The following number tests are wrappers for corresponding \LaTeX3 conditionals.
+
+\begin{docCommand}[doc new={2021-06-28}]{ifcsvfpcmp}{\marg{floating point expression}\marg{true}\marg{false}}
+ Evaluates the given \meta{floating point expression}
+ and executes \meta{true} or \meta{false} appropriately.
+ The evaluation is done using |\fp_compare:nTF|.
+ \refCom{ifcsvfpcmp} is expandable.
+\end{docCommand}
+
+\begin{docCommand}[doc new={2021-06-28}]{ifcsvintcmp}{\marg{integer expression}\marg{true}\marg{false}}
+ Evaluates the given \meta{integer expression}
+ and executes \meta{true} or \meta{false} appropriately.
+ The evaluation is done using |\int_compare:nTF|.
+ \refCom{ifcsvintcmp} is expandable.
+\end{docCommand}
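+
+For instance (a sketch, not one of the package examples), such a test can be
+used inside the line processing code:
+\begin{dispListing}
+\csvreader[head to column names]{grade.csv}{}{%
+  \name: \ifcsvfpcmp{\grade < 2.0}{very good}{not so good}\par}
+\end{dispListing}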
+
+
+\clearpage
+\section{Examples}%
+
+\subsection{A Serial Letter}%
+In this example, a serial letter is to be written to all persons with
+addresses from the following CSV file. Deliberately, the file content is
+not given in a very pretty format.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{address.csv}
+name,givenname,gender,degree,street,zip,location,bonus
+Maier,Hans,m,,Am Bachweg 17,10010,Hopfingen,20
+ % next line with a comma in curly braces
+Huber,Erna,f,Dr.,{Moosstraße 32, Hinterschlag},10020,Örtingstetten,30
+Weißbäck,Werner,m,Prof. Dr.,Brauallee 10,10030,Klingenbach,40
+ % this line is ignored %
+ Siebener , Franz,m, , Blaumeisenweg 12 , 10040 , Pardauz , 50
+ % preceding and trailing spaces in entries are removed %
+Schmitt,Anton,m,,{\AE{}lfred-Esplanade, T\ae{}g 37}, 10050,\OE{}resung,60
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{address}
+
+Firstly, we survey the file content quickly using
+|\csvautotabular|.
+As can be seen, unfeasible lines are ignored automatically.
+
+\begin{dispExample}
+\tiny\csvautotabular{address.csv}
+\end{dispExample}
+
+Now, we create the serial letter where every feasible data line produces
+its own page. Here, we simulate the page by a |tcolorbox| (from the package
+|tcolorbox|).
+For the gender specific salutations, an auxiliary macro |\ifmale| is
+introduced.
+
+\begin{dispExample}
+% this example requires the tcolorbox package
+\newcommand{\ifmale}[2]{\ifcsvstrcmp{\gender}{m}{#1}{#2}}
+
+\csvreader[head to column names]{address.csv}{}{%
+\begin{tcolorbox}[colframe=DarkGray,colback=White,arc=0mm,width=(\linewidth-2pt)/2,
+ equal height group=letter,before=,after=\hfill,fonttitle=\bfseries,
+ adjusted title={Letter to \name}]
+ \ifcsvstrcmp{\degree}{}{\ifmale{Mr.}{Ms.}}{\degree}~\givenname~\name\\
+ \street\\\zip~\location
+ \tcblower
+ {\itshape Dear \ifmale{Sir}{Madam},}\\
+ we are pleased to announce you a bonus value of \bonus\%{}
+ which will be delivered to \location\ soon.\\\ldots
+\end{tcolorbox}}
+\end{dispExample}
+
+
+
+\clearpage
+\subsection{A Graphical Presentation}%
+For this example, we use some artificial statistical data given by a CSV file.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{data.csv}
+land,group,amount
+Bayern,A,1700
+Baden-Württemberg,A,2300
+Sachsen,B,1520
+Thüringen,A,1900
+Hessen,B,2100
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{data}
+
+Firstly, we survey the file content using
+|\csvautobooktabular|.
+
+\begin{dispExample}
+% needs the booktabs package
+\csvautobooktabular{data.csv}
+\end{dispExample}
+
+The amount values are presented in the following diagram by bars where
+the group classification is given using different colors.
+
+\begin{dispExample}
+% This example requires the package tikz
+\begin{tikzpicture}[Group/A/.style={left color=red!10,right color=red!20},
+ Group/B/.style={left color=blue!10,right color=blue!20}]
+\csvreader[head to column names]{data.csv}{}{%
+ \begin{scope}[yshift=-\thecsvrow cm]
+ \path [draw,Group/\group] (0,-0.45)
+ rectangle node[font=\bfseries] {\amount} (\amount/1000,0.45);
+ \node[left] at (0,0) {\land};
+ \end{scope} }
+\end{tikzpicture}
+\end{dispExample}
+
+
+\clearpage
+It would be nice to sort the bars by length, i.\,e.\ to sort the CSV file
+by the |amount| column. If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+this can be done with the following configuration file for \csvsorter:
+
+\xmllisting{amountsort}
+
+Now, we just have to add an option |sort by=amountsort.xml|:
+\begin{dispExample}
+% This example requires the package tikz
+% Also, the CSV-Sorter tool has to be installed
+\begin{tikzpicture}[Group/A/.style={left color=red!10,right color=red!20},
+ Group/B/.style={left color=blue!10,right color=blue!20}]
+\csvreader[head to column names,sort by=amountsort.xml]{data.csv}{}{%
+ \begin{scope}[yshift=-\thecsvrow cm]
+ \path [draw,Group/\group] (0,-0.45)
+ rectangle node[font=\bfseries] {\amount} (\amount/1000,0.45);
+ \node[left] at (0,0) {\land};
+ \end{scope} }
+\end{tikzpicture}
+\end{dispExample}
+
+
+
+
+\clearpage
+Next, we create a pie chart by calling |\csvreader| twice.
+In the first step, the total sum of amounts is computed, and in the second
+step the slices are drawn.
+
+\begin{dispExample}
+% Modified example from www.texample.net for pie charts
+% This example needs the packages tikz, xcolor, calc
+\definecolorseries{myseries}{rgb}{step}[rgb]{.95,.85,.55}{.17,.47,.37}
+\resetcolorseries{myseries}%
+
+% a pie slice
+\newcommand{\slice}[4]{
+ \pgfmathsetmacro{\midangle}{0.5*#1+0.5*#2}
+ \begin{scope}
+ \clip (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;
+ \colorlet{SliceColor}{myseries!!+}%
+ \fill[inner color=SliceColor!30,outer color=SliceColor!60] (0,0) circle (1cm);
+ \end{scope}
+ \draw[thick] (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;
+ \node[label=\midangle:#4] at (\midangle:1) {};
+ \pgfmathsetmacro{\temp}{min((#2-#1-10)/110*(-0.3),0)}
+ \pgfmathsetmacro{\innerpos}{max(\temp,-0.5) + 0.8}
+ \node at (\midangle:\innerpos) {#3};
+}
+
+% sum of amounts
+\csvreader[before reading=\def\mysum{0}]{data.csv}{amount=\amount}{%
+ \pgfmathsetmacro{\mysum}{\mysum+\amount}%
+}
+
+% drawing of the pie chart
+\begin{tikzpicture}[scale=3]%
+\def\mya{0}\def\myb{0}
+\csvreader[head to column names]{data.csv}{}{%
+ \let\mya\myb
+ \pgfmathsetmacro{\myb}{\myb+\amount}
+ \slice{\mya/\mysum*360}{\myb/\mysum*360}{\amount}{\land}
+}
+\end{tikzpicture}%
+\end{dispExample}
+
+
+\clearpage
+Finally, the filter option is demonstrated by separating the groups A and B.
+Every item is piled upon the appropriate stack.
+
+\begin{dispExample}
+\newcommand{\drawGroup}[2]{%
+ \def\mya{0}\def\myb{0}
+ \node[below=3mm] at (2.5,0) {\bfseries Group #1};
+ \csvreader[head to column names,filter equal={\group}{#1}]{data.csv}{}{%
+ \let\mya\myb
+ \pgfmathsetmacro{\myb}{\myb+\amount}
+ \path[draw,top color=#2!25,bottom color=#2!50]
+ (0,\mya/1000) rectangle node{\land\ (\amount)} (5,\myb/1000);
+}}
+
+\begin{tikzpicture}
+ \fill[gray!75] (-1,0) rectangle (13,-0.1);
+ \drawGroup{A}{red}
+ \begin{scope}[xshift=7cm]
+ \drawGroup{B}{blue}
+ \end{scope}
+\end{tikzpicture}
+
+\end{dispExample}
+
+
+\clearpage
+\subsection{Macro code inside the data}\label{macrocodexample}%
+
+If needed, the data file may contain macro code.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{macrodata.csv}
+type,description,content
+M,A nice \textbf{formula}, $\displaystyle \int\frac{1}{x} = \ln|x|+c$
+G,A \textcolor{red}{colored} ball, {\tikz \shadedraw [shading=ball] (0,0) circle (.5cm);}
+M,\textbf{Another} formula, $\displaystyle \lim\limits_{n\to\infty} \frac{1}{n}=0$
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{macrodata}
+
+Firstly, we survey the file content using
+|\csvautobooktabular|.
+
+\begin{dispExample}
+\csvautobooktabular{macrodata.csv}
+\end{dispExample}
+
+
+\begin{dispExample}
+\csvstyle{my enumerate}{head to column names,
+ before reading=\begin{enumerate},after reading=\end{enumerate}}
+
+\csvreader[my enumerate]{macrodata.csv}{}{%
+ \item \description:\par\content}
+
+\bigskip
+Now, formulas only:
+\csvreader[my enumerate,filter strcmp={\type}{M}]{macrodata.csv}{}{%
+ \item \description:\qquad\content}
+\end{dispExample}
+
+\clearpage
+\subsection{Tables with Number Formatting}\label{numberformatting}%
+
+We consider a file with numerical data which should be pretty-printed.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{data_numbers.csv}
+month, dogs, cats
+January, 12.50,12.3e5
+February, 3.32, 8.7e3
+March, 43, 3.1e6
+April, 0.33, 21.2e4
+May, 5.12, 3.45e6
+June, 6.44, 6.66e6
+July, 123.2,7.3e7
+August, 12.3, 5.3e4
+September,2.3, 4.4e4
+October, 6.5, 6.5e6
+November, 0.55, 5.5e5
+December, 2.2, 3.3e3
+\end{tcbverbatimwrite}
+
+\csvlisting{data_numbers}
+
+\medskip
+
+The |siunitx| package provides a huge amount of formatting options for
+numbers. A good and robust way to apply formatting by |siunitx| inside
+tables generated by |csvsimple-l3| is the |\tablenum| macro from
+|siunitx|.
+
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs}
+\csvreader[
+ head to column names,
+ before reading = \begin{center}\sisetup{table-number-alignment=center},
+ tabular = cc,
+ table head = \toprule \textbf{Cats} & \textbf{Dogs} \\\midrule,
+ table foot = \bottomrule,
+ after reading = \end{center}
+ ]{data_numbers.csv}{}{%
+ \tablenum[table-format=2.2e1]{\cats} & \tablenum{\dogs}
+ }
+\end{dispExample}
+
+\clearpage
+
+It is also possible to create on-the-fly tables using calculations on
+the given data. The following example shows the cat values halved and
+the dog values doubled.
+
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs,xfp}
+\csvreader[
+ head to column names,
+ before reading = \begin{center}\sisetup{table-number-alignment=center},
+ tabular = cccc,
+ table head = \toprule \textbf{Cats} & \textbf{Dogs}
+ & \textbf{Halfcats} & \textbf{Doubledogs} \\\midrule,
+ table foot = \bottomrule,
+ after reading = \end{center}
+ ]{data_numbers.csv}{}{%
+ \tablenum[table-format=2.2e1]{\cats} & \tablenum{\dogs}
+ & \tablenum[exponent-mode=scientific, round-precision=3,
+ round-mode=places, table-format=1.3e1]{\fpeval{\cats/2}}
+ & \tablenum{\fpeval{\dogs*2}}
+ }
+\end{dispExample}
+
+
+\clearpage
+
+The |siunitx| package also provides a new column type |S|
+which can align material using a number of different strategies.
+Special care is needed, if the \emph{first} or the \emph{last} column is to be formatted with
+the column type |S|. The number detection of |siunitx| is disturbed by
+the line reading code of |csvsimple-l3| which is actually present in the
+first and last column. To avoid this problem, the utilization of
+|\tablenum| is appropriate, see above.
+Alternatively, a very nifty workaround suggested by Enrico Gregorio is to
+add an invisible dummy column with |c@{}| as first column
+and |@{}c| as last column:
+
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs}
+\csvreader[
+ head to column names,
+ before reading = \begin{center}\sisetup{table-number-alignment=center},
+ tabular = {c@{}S[table-format=2.2e1]S@{}c},
+ table head = \toprule & \textbf{Cats} & \textbf{Dogs} & \\\midrule,
+ table foot = \bottomrule,
+ after reading = \end{center}
+ ]{data_numbers.csv}{}{%
+ & \cats & \dogs &
+ }
+\end{dispExample}
+
+
+
+
+\clearpage
+Now, the preceding table shall be sorted by the \emph{cats} values.
+If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+this can be done with the following configuration file for \csvsorter:
+
+\xmllisting{catsort}
+
+Now, we just have to add an option |sort by=catsort.xml|:
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs}
+% Also, the CSV-Sorter tool has to be installed
+\csvreader[
+ head to column names,
+ sort by = catsort.xml,
+ before reading = \begin{center}\sisetup{table-number-alignment=center},
+ tabular = lcc,
+ table head = \toprule \textbf{Month} & \textbf{Dogs} & \textbf{Cats} \\\midrule,
+ table foot = \bottomrule,
+ after reading = \end{center}
+ ]{data_numbers.csv}{}{%
+ \month & \tablenum{\dogs} & \tablenum[table-format=2.2e1]{\cats}
+ }
+\end{dispExample}
+
+
+\clearpage
+\subsection{CSV data without header line}\label{noheader}%
+CSV files with a header line are more semantic than files without a header,
+but it is no problem to work with headless files.
+
+For this example, we again use some artificial statistical data given by a CSV file,
+but this time without a header.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{data_headless.csv}
+Bayern,A,1700
+Baden-Württemberg,A,2300
+Sachsen,B,1520
+Thüringen,A,1900
+Hessen,B,2100
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{data_headless}
+
+Note that you cannot use the \refKey{/csvsim/no head} option for the auto tabular
+commands.
+If no options are given, the first line is interpreted as header line
+which gives an unpleasant result:
+
+\begin{dispExample}
+\csvautobooktabular{data_headless.csv}
+\end{dispExample}
+
+To get the expected result, the \emph{star} versions of the auto tabular
+commands can be used.
+
+\begin{dispExample}
+\csvautobooktabular*{data_headless.csv}
+\end{dispExample}
+
+This example can be extended to insert a table head for this headless data:
+
+\begin{dispExample}
+\csvautobooktabular*[
+ table head=\toprule\bfseries Land & \bfseries Group
+ & \bfseries Amount\\\midrule
+ ]{data_headless.csv}
+\end{dispExample}
+
+
+\clearpage
+
+For the normal \refCom{csvreader} command, the \refKey{/csvsim/no head} option
+should be applied. Of course, we cannot use \refKey{/csvsim/head to column names}
+because there is no head, but the columns can be addressed by their numbers:
+
+\begin{dispExample}
+\csvreader[
+ no head,
+ tabular = lr,
+ table head = \toprule\bfseries Land & \bfseries Amount\\\midrule,
+ table foot = \bottomrule]
+ {data_headless.csv}
+ { 1=\land, 3=\amount }
+ {\land & \amount}
+\end{dispExample}
+
+
+\clearpage
+\subsection{Imported CSV data}\label{sec:importeddata}%
+If data is imported from other applications, there is not always a choice
+to format in comma separated values with curly brackets.
+
+Consider the following example data file:
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{imported.csv}
+"name";"address";"email"
+"Frank Smith";"Yellow Road 123, Brimblsby";"frank.smith at organization.org"
+"Mary May";"Blue Alley 2a, London";"mmay at maybe.uk"
+"Hans Meier";"Hauptstraße 32, Berlin";"hans.meier at corporation.de"
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{imported}
+
+If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+this can be transformed on-the-fly
+with the following configuration file for \csvsorter:
+
+\xmllisting{transform}
+
+Now, we just have to add an option |sort by=transform.xml| to transform
+the input data. Here, we actually do not sort.
+
+\begin{dispExample}
+% \usepackage{booktabs,array}
+% Also, the CSV-Sorter tool has to be installed
+\newcommand{\Header}[1]{\normalfont\bfseries #1}
+
+\csvreader[
+ sort by = transform.xml,
+ tabular = >{\itshape}ll>{\ttfamily}l,
+ table head = \toprule\Header{Name} & \Header{Address} & \Header{email}\\\midrule,
+ table foot = \bottomrule
+ ]
+ {imported.csv}{}
+ {\csvlinetotablerow}
+\end{dispExample}
+
+The file which is generated on-the-fly and which is actually read by
+|csvsimple-l3| is the following:
+
+\tcbinputlisting{docexample,listing style=tcbdocumentation,fonttitle=\bfseries,
+ listing only,listing file=\jobname_sorted._csv}
+
+
+\clearpage
+\subsection{Encoding}\label{encoding}%
+If the CSV file has a different encoding than the \LaTeX\ source file,
+then special care is needed.
+
+\begin{itemize}
+\item The most obvious treatment is to change the encoding of the CSV file
+ or the \LaTeX\ source file to match the other one (every good editor
+  supports such a conversion). This is the easiest choice, if there are no
+ good reasons against such a step. E.g., unfortunately, several tools
+ under Windows need the CSV file to be |cp1252| encoded while
+ the \LaTeX\ source file may need to be |utf8| encoded.
+
+\item The |inputenc| package allows switching the encoding inside the
+  document, say from |utf8| to |cp1252|. Just be aware that you should only
+  use pure ASCII for additional text inside the switched region.
+\begin{dispListing}
+% !TeX encoding=UTF-8
+% ....
+\usepackage[utf8]{inputenc}
+% ....
+\begin{document}
+% ....
+\inputencoding{latin1}% only use ASCII from here, e.g. "Uberschrift
+\csvreader[%...
+ ]{data_cp1252.csv}{%...
+ }{% ....
+ }
+\inputencoding{utf8}
+% ....
+\end{document}
+\end{dispListing}
+
+\item As a variant to the last method, the encoding switch can be done
+ using options from |csvsimple-l3|:
+\begin{dispListing}
+% !TeX encoding=UTF-8
+% ....
+\usepackage[utf8]{inputenc}
+% ....
+\begin{document}
+% ....
+% only use ASCII from here, e.g. "Uberschrift
+\csvreader[%...
+ before reading=\inputencoding{latin1},
+ after reading=\inputencoding{utf8},
+ ]{data_cp1252.csv}{%...
+ }{% ....
+ }
+% ....
+\end{document}
+\end{dispListing}
+
+\pagebreak\item
+If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+the CSV file can be re-encoded on-the-fly
+with the following configuration file for \csvsorter:
+
+\xmllisting{encoding}
+
+\begin{dispListing}
+% !TeX encoding=UTF-8
+% ....
+\usepackage[utf8]{inputenc}
+% ....
+\begin{document}
+% ....
+\csvreader[%...
+ sort by=encoding.xml,
+ ]{data_cp1252.csv}{%...
+ }{% ....
+ }
+% ....
+\end{document}
+\end{dispListing}
+
+
+\end{itemize}
+
+
+\clearpage
+
+\printindex
+
+\end{document}
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-l3.tex
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.pdf
===================================================================
(Binary files differ)
Index: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.pdf
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.pdf 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.pdf 2021-06-29 19:53:39 UTC (rev 59756)
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.pdf
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/pdf
\ No newline at end of property
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.tex
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.tex (rev 0)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.tex 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,1970 @@
+% \LaTeX-Main\
+% !TeX encoding=UTF-8
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
+%% csvsimple-legacy.tex: Manual
+%%
+%% -------------------------------------------------------------------------------------------
+%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
+%% -------------------------------------------------------------------------------------------
+%%
+%% This work may be distributed and/or modified under the
+%% conditions of the LaTeX Project Public License, either version 1.3
+%% of this license or (at your option) any later version.
+%% The latest version of this license is in
+%% http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% This work has the LPPL maintenance status `author-maintained'.
+%%
+%% This work consists of all files listed in README.md
+%%
+\documentclass[a4paper,11pt]{ltxdoc}
+\usepackage{csvsimple-doc}
+
+\usepackage{\csvpkgprefix csvsimple-legacy}
+
+\tcbmakedocSubKey{docCsvKey}{csv}
+
+\hypersetup{
+ pdftitle={Manual for the csvsimple-legacy package},
+ pdfauthor={Thomas F. Sturm},
+ pdfsubject={csv file processing with LaTeX2e},
+ pdfkeywords={csv file, comma separated values, key value syntax}
+}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{document}
+\begin{center}
+\begin{tcolorbox}[enhanced,hbox,tikznode,left=8mm,right=8mm,boxrule=0.4pt,
+ colback=white,colframe=black!50!yellow,
+ drop lifted shadow=black!50!yellow,arc is angular,
+ before=\par\vspace*{5mm},after=\par\bigskip]
+{\bfseries\LARGE The \texttt{csvsimple-legacy} package}\\[3mm]
+{\large Manual for version \version\ (\datum)}
+\end{tcolorbox}
+{\large Thomas F.~Sturm%
+ \footnote{Prof.~Dr.~Dr.~Thomas F.~Sturm, Institut f\"{u}r Mathematik und Informatik,
+ Universit\"{a}t der Bundeswehr M\"{u}nchen, D-85577 Neubiberg, Germany;
+     email: \href{mailto:thomas.sturm@unibw.de}{thomas.sturm@unibw.de}}\par\medskip
+\normalsize\url{https://www.ctan.org/pkg/csvsimple}\par
+\url{https://github.com/T-F-S/csvsimple}
+}
+\end{center}
+\bigskip
+\begin{absquote}
+ \begin{center}\bfseries Abstract\end{center}
+  |csvsimple(-legacy)| provides a simple \LaTeX\ interface for the processing of files with
+  comma separated values (CSV). |csvsimple-legacy| relies heavily on the key value
+  syntax from |pgfkeys|, which makes it easy to use.
+  Filtering and table generation are especially supported. Since the package
+  is considered a lightweight tool, there is no support for data sorting
+  or database storage.
+\end{absquote}
+
+
+\begin{tcolorbox}[enhanced,left=8mm,right=8mm,boxrule=2pt,boxsep=3mm,
+ colback=red!85!gray!5!white,colframe=red!85!gray,
+ arc is angular,arc=5mm,
+ before skip=1cm]
+Actually, |csvsimple-legacy| is identical to the old version 1.22 (2021/06/07)
+of |csvsimple|. It is superseded by |csvsimple-l3|, a \LaTeX3 implementation
+of |csvsimple| which is a \emph{nearly} drop-in replacement for the previous implementation.
+\begin{itemize}
+\item If you are a new user or an experienced user of |csvsimple| creating a
+ new document, you are encouraged to turn to |csvsimple-l3|, see\\
+ \href{csvsimple-l3.pdf}{\flqq The |csvsimple-l3| package\frqq}
+\item If you used |csvsimple| before version 2.00 in one or many documents,
+ there is \emph{no need} to change anything. Loading |csvsimple|
+ without options loads |csvsimple-legacy|.
+  |csvsimple-legacy| will be maintained to stay functional as it is for the
+  sake of compatibility with old documents.
+\item Differences between |csvsimple-legacy| and |csvsimple-l3| are
+ discussed in \href{csvsimple.pdf}{\flqq The |csvsimple| package\frqq}.
+\end{itemize}
+\end{tcolorbox}
+
+
+\clearpage
+\tableofcontents
+
+\clearpage
+\section{Introduction}%
+The |csvsimple-legacy| package is used for the processing of
+CSV\footnote{CSV file: file with comma separated values.} files.
+This processing is controlled by key value assignments according to the
+syntax of |pgfkeys|. Sample applications of the package
+are tabular lists, serial letters, and charts.
+
+An alternative to |csvsimple-legacy| is the |datatool| package,
+which provides considerably more functions and allows data to be sorted by \LaTeX.
+|csvsimple-legacy| takes a different approach to the user interface and
+is deliberately restricted to some basic functions with fast
+processing speed.
+
+Mind the following restrictions:
+\begin{itemize}
+\item Sorting is not supported directly but can be done
+ with external tools, see \Fullref{sec:Sorting}.
+\item Values are expected to be comma separated, but the package
+ provides support for other separators, see \Fullref{sec:separators}.
+\item Values are expected to be either not quoted or quoted with
+  curly braces |{}| of \TeX\ groups. Other quotes like double quotes
+  are not supported directly, but can be handled
+ with external tools, see \Fullref{sec:importeddata}.
+\item Every data line is expected to contain the same number of values.
+ Unfeasible data lines are silently ignored by default, but this can
+ be configured, see \Fullref{sec:consistency}.
+\end{itemize}
+
+
+\subsection{Loading the Package}
+The package |csvsimple-legacy| loads the packages
+|pgfkeys|,
+|etoolbox|,
+and |ifthen|.
+|csvsimple-legacy| itself is loaded with \emph{one} of the following
+alternatives inside the preamble:
+\begin{dispListing}
+\usepackage{csvsimple}
+ % or alternatively (not simultaneously!)
+\usepackage[legacy]{csvsimple}
+ % or alternatively (not simultaneously!)
+\usepackage{csvsimple-legacy}
+\end{dispListing}
+
+
+Not automatically loaded, but used for many examples are the packages
+|longtable|
+and
+|booktabs|.
+
+
+\clearpage
+\subsection{First Steps}
+Every line of a processable CSV file has to contain an identical number of
+comma\footnote{See \refKey{/csv/separator} for other separators than comma.} separated values. The curly braces |{}| of \TeX\ groups can be used
+to mask a block which may contain commas not to be processed as separators.
+
+The first line of such a CSV file is usually but not necessarily a header line
+which contains the identifiers for each column.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{grade.csv}
+name,givenname,matriculation,gender,grade
+Maier,Hans,12345,m,1.0
+Huber,Anna,23456,f,2.3
+Weißbäck,Werner,34567,m,5.0
+Bauer,Maria,19202,f,3.3
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{grade}
+
+\smallskip
+The simplest way to display a CSV file in tabular form is to process it
+with the \refCom{csvautotabular} command.
+
+\begin{dispExample}
+\csvautotabular{grade.csv}
+\end{dispExample}
+
+
+Typically, one would use \refCom{csvreader} instead of |\csvautotabular| to
+gain full control over the interpretation of the included data.
+
+In the following example, the entries of the header line are automatically
+assigned to \TeX\ macros which may then be used as desired.
+
+
+\begin{dispExample}
+\begin{tabular}{|l|c|}\hline%
+\bfseries Person & \bfseries Matr.~No.
+\csvreader[head to column names]{grade.csv}{}%
+{\\\givenname\ \name & \matriculation}%
+ \\\hline
+\end{tabular}
+\end{dispExample}
+
+
+\clearpage
+|\csvreader| is controlled by plenty of options. For example, for table
+applications line breaks are easily inserted by
+\refKey{/csv/late after line}. This defines code to be executed just before
+the following line.
+Additionally, the assignment of columns to \TeX\ macros is shown in a non-automated
+way.
+
+\begin{dispExample}
+\begin{tabular}{|r|l|c|}\hline%
+& Person & Matr.~No.\\\hline\hline
+\csvreader[late after line=\\\hline]%
+ {grade.csv}{name=\name,givenname=\firstname,matriculation=\matnumber}%
+ {\thecsvrow & \firstname~\name & \matnumber}%
+\end{tabular}
+\end{dispExample}
+
+\smallskip
+An even more comfortable and preferable way to create a table is setting
+appropriate option keys. Note that this allows you to create a
+|pgfkeys| style which contains the whole table creation.
+
+\begin{dispExample}
+\csvreader[tabular=|r|l|c|,
+ table head=\hline & Person & Matr.~No.\\\hline\hline,
+ late after line=\\\hline]%
+ {grade.csv}{name=\name,givenname=\firstname,matriculation=\matnumber}%
+ {\thecsvrow & \firstname~\name & \matnumber}%
+\end{dispExample}
+
+\smallskip
+The next example shows such a style definition with the convenience macro
+\refCom{csvstyle}. Here, we again see the automated assignment of header
+entries to column names by \refKey{/csv/head to column names}.
+For this, the header entries must not contain spaces or special characters.
+But you can always assign entries to canonical macro names by hand like in the examples
+above. Here, we also add a \refKey{/csv/head to column names prefix} to avoid
+macro name clashes.
+
+\begin{dispExample}
+\csvstyle{myTableStyle}{tabular=|r|l|c|,
+ table head=\hline & Person & Matr.~No.\\\hline\hline,
+ late after line=\\\hline,
+ head to column names,
+ head to column names prefix=MY,
+ }
+
+\csvreader[myTableStyle]{grade.csv}{}%
+ {\thecsvrow & \MYgivenname~\MYname & \MYmatriculation}%
+\end{dispExample}
+
+
+\clearpage
+Another way to address columns is to use their Roman numerals.
+The direct addressing is done by |\csvcoli|, |\csvcolii|, |\csvcoliii|, \ldots:
+
+\begin{dispExample}
+\csvreader[tabular=|r|l|c|,
+ table head=\hline & Person & Matr.~No.\\\hline\hline,
+ late after line=\\\hline]%
+ {grade.csv}{}%
+ {\thecsvrow & \csvcolii~\csvcoli & \csvcoliii}%
+\end{dispExample}
+
+\smallskip
+Yet another method to assign macros to columns is to use Arabic numbers
+for the assignment:
+
+\begin{dispExample}
+\csvreader[tabular=|r|l|c|,
+ table head=\hline & Person & Matr.~No.\\\hline\hline,
+ late after line=\\\hline]%
+ {grade.csv}{1=\name,2=\firstname,3=\matnumber}%
+ {\thecsvrow & \firstname~\name & \matnumber}%
+\end{dispExample}
+
+\smallskip
+For recurring applications, the |pgfkeys| syntax allows you to create your own styles
+for a consistent and centralized design. The following example is easily
+modified to include more or fewer option settings.
+
+\begin{dispExample}
+\csvset{myStudentList/.style={%
+ tabular=|r|l|c|,
+ table head=\hline & Person & #1\\\hline\hline,
+ late after line=\\\hline,
+ column names={name=\name,givenname=\firstname}
+ }}
+
+\csvreader[myStudentList={Matr.~No.}]{grade.csv}{matriculation=\matnumber}%
+{\thecsvrow & \firstname~\name & \matnumber}%
+\hfill%
+\csvreader[myStudentList={Grade}]{grade.csv}{grade=\grade}%
+{\thecsvrow & \firstname~\name & \grade}%
+\end{dispExample}
+
+
+\clearpage
+Alternatively, column names can be set by \refCom{csvnames}
+and style definitions by \refCom{csvstyle}.
+With this, the last example is rewritten as follows:
+
+\begin{dispExample}
+\csvnames{myNames}{1=\name,2=\firstname,3=\matnumber,5=\grade}
+\csvstyle{myStudentList}{tabular=|r|l|c|,
+ table head=\hline & Person & #1\\\hline\hline,
+ late after line=\\\hline, myNames}
+
+\csvreader[myStudentList={Matr.~No.}]{grade.csv}{}%
+{\thecsvrow & \firstname~\name & \matnumber}%
+\hfill%
+\csvreader[myStudentList={Grade}]{grade.csv}{}%
+{\thecsvrow & \firstname~\name & \grade}%
+\end{dispExample}
+
+\smallskip
+The data lines of a CSV file can also be filtered. In the following example,
+a certificate is printed only for students with a grade other than 5.0.
+
+\begin{dispExample}
+\csvreader[filter not strcmp={\grade}{5.0}]%
+ {grade.csv}{1=\name,2=\firstname,3=\matnumber,4=\gender,5=\grade}%
+ {\begin{center}\Large\bfseries Certificate in Mathematics\end{center}
+ \large\ifcsvstrcmp{\gender}{f}{Ms.}{Mr.}
+ \firstname~\name, matriculation number \matnumber, has passed the test
+ in mathematics with grade \grade.\par\ldots\par
+ }%
+\end{dispExample}
+
+
+\clearpage
+\section{Macros for the Processing of CSV Files}\label{sec:makros}%
+
+\begin{docCommand}{csvreader}{\oarg{options}\marg{file name}\marg{assignments}\marg{command list}}
+ |\csvreader| reads the file denoted by \meta{file name} line by line.
+  Every line of the file has to contain an identical number of
+ comma separated values. The curly braces |{}| of \TeX\ groups can be used
+ to mask a block which may contain commas not to be processed as separators.\smallskip
+
+ The first line of such a CSV file is by default but not necessarily
+ processed as a header line which contains the identifiers for each column.
+ The entries of this line can be used to give \meta{assignments} to \TeX\ macros
+ to address the columns. The number of entries of this first line
+ determines the accepted number of entries for all following lines.
+ Every line which contains a higher or lower number of entries is ignored
+ during standard processing.\smallskip
+
+ The \meta{assignments} are given by key value pairs
+ \mbox{\meta{name}|=|\meta{macro}}. Here, \meta{name} is an entry from the
+ header line \emph{or} the arabic number of the addressed column.
+ \meta{macro} is some \TeX\ macro which gets the content of the addressed column.\smallskip
+
+ The \meta{command list} is executed for every accepted data line. Inside the
+  \meta{command list}, the following are applicable:
+ \begin{itemize}
+ \item \docAuxCommand{thecsvrow} or the counter |csvrow| which contains the number of the
+ current data line (starting with 1).
+ \item \docAuxCommand{csvcoli}, \docAuxCommand{csvcolii}, \docAuxCommand{csvcoliii}, \ldots,
+ which contain the contents of the column entries of the current data line.
+    Alternatively, one can use:
+ \item \meta{macro} from the \meta{assignments} to have a logical
+ addressing of a column entry.
+ \end{itemize}
+  Note that the \meta{command list} is allowed to contain |\par| and
+ that all macro definitions are made global to be used for table applications.\smallskip
+
+ The processing of the given CSV file can be controlled by various
+  \meta{options} given as a key value list. The feasible option keys
+  are described in Section~\ref{sec:schluessel} starting on page~\pageref{sec:schluessel}.
+
+\begin{dispExample}
+\csvreader[tabular=|r|l|l|, table head=\hline, table foot=\hline]{grade.csv}%
+ {name=\name,givenname=\firstname,grade=\grade}%
+ {\grade & \firstname~\name & \csvcoliii}
+\end{dispExample}
+
+Mainly, the |\csvreader| command consists of a \refCom{csvloop} macro with
+the following parameters:\par
+|\csvloop{|\meta{options}|, file=|\meta{file name}|, column names=|\meta{assignments}|,|\\
+ \hspace*{2cm} |command=|\meta{command list}|}|\par
+ Therefore, the application of the keys \refKey{/csv/file} and \refKey{/csv/command}
+is useless for |\csvreader|.
+\end{docCommand}
+
+\begin{docCommand}{csvloop}{\marg{options}}
+ Usually, \refCom{csvreader} may be preferred instead of |\csvloop|.
+ \refCom{csvreader} is based on |\csvloop| which takes a mandatory list of
+ \meta{options} in key value syntax.
+  This list of \meta{options} controls the entire processing. In particular,
+ it has to contain the CSV file name.
+\begin{dispExample}
+\csvloop{file={grade.csv}, head to column names, command=\name,
+ before reading={List of students:\ },
+ late after line={{,}\ }, late after last line=.}
+\end{dispExample}
+\end{docCommand}
+
+\clearpage
+The following |\csvauto...| commands are intended for a quick data overview
+with limited formatting potential.
+See Subsection~\ref{subsec:tabsupport} on page \pageref{subsec:tabsupport}
+for the general table options in combination with \refCom{csvreader} and
+\refCom{csvloop}.
+
+\begin{docCommand}{csvautotabular}{\oarg{options}\marg{file name}}
+ |\csvautotabular| is an abbreviation for the application of the option key
+ \refKey{/csv/autotabular} together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+\begin{dispExample}
+\csvautotabular{grade.csv}
+\end{dispExample}
+\begin{dispExample}
+\csvautotabular[filter equal={\csvcoliv}{f}]{grade.csv}
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvautolongtable}{\oarg{options}\marg{file name}}
+  |\csvautolongtable| is an abbreviation for the application of the option key
+ \refKey{/csv/autolongtable} together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ For application, the package |longtable| is required which has to be
+ loaded in the preamble.
+\begin{dispListing}
+\csvautolongtable{grade.csv}
+\end{dispListing}
+\csvautolongtable{grade.csv}
+\end{docCommand}
+
+\clearpage
+
+\begin{docCommand}{csvautobooktabular}{\oarg{options}\marg{file name}}
+ |\csvautobooktabular| is an abbreviation for the application of the option key
+ \refKey{/csv/autobooktabular} together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ For application, the package |booktabs| is required which has to be
+ loaded in the preamble.
+\begin{dispExample}
+\csvautobooktabular{grade.csv}
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvautobooklongtable}{\oarg{options}\marg{file name}}
+  |\csvautobooklongtable| is an abbreviation for the application of the option key
+ \refKey{/csv/autobooklongtable} together with other \meta{options} to \refCom{csvloop}.
+ This macro reads the whole CSV file denoted by \meta{file name}
+ with an automated formatting.
+ For application, the packages |booktabs| and |longtable| are required which have to be
+ loaded in the preamble.
+\begin{dispListing}
+\csvautobooklongtable{grade.csv}
+\end{dispListing}
+\csvautobooklongtable{grade.csv}
+\end{docCommand}
+
+
+
+\clearpage
+
+\begin{docCommand}{csvset}{\marg{options}}
+ Sets \meta{options} for every following
+ \refCom{csvreader} and \refCom{csvloop}. For example, this command may
+ be used for style definitions.
+\begin{dispExample}
+\csvset{grade list/.style=
+ {column names={name=\name,givenname=\firstname,grade=\grade}},
+ passed/.style={filter not strcmp={\grade}{5.0}} }
+
+The following students passed the test in mathematics:
+\csvreader[grade list,passed]{grade.csv}{}{\firstname\ \name\ (\grade); }%
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvstyle}{\marg{key}\marg{options}}
+ Abbreviation for |\csvset{|\meta{key}|/.style=|\marg{options}|}|
+ to define a new style.
+\end{docCommand}
+
+\begin{docCommand}{csvnames}{\marg{key}\marg{assignments}}
+ Abbreviation for |\csvset{|\meta{key}|/.style={column names=|\marg{assignments}|}}|
+ to define additional \meta{assignments} of macros to columns.
+\begin{dispExample}
+\csvnames{grade list}{name=\name,givenname=\firstname,grade=\grade}
+\csvstyle{passed}{filter not strcmp={\grade}{5.0}}
+
+The following students passed the test in mathematics:
+\csvreader[grade list,passed]{grade.csv}{}{\firstname\ \name\ (\grade); }%
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvheadset}{\marg{assignments}}
+ For some special cases, this command can be used to change the
+ \meta{assignments} of macros to columns during execution of
+ \refCom{csvreader} and \refCom{csvloop}.
+\begin{dispExample}
+\csvreader{grade.csv}{}%
+ { \csvheadset{name=\n} \fbox{\n}
+ \csvheadset{givenname=\n} \ldots\ \fbox{\n} }%
+\end{dispExample}
+\end{docCommand}
+
+\clearpage
+
+\begin{docCommand}{csviffirstrow}{\marg{then macros}\marg{else macros}}
+ Inside the command list of \refCom{csvreader}, the \meta{then macros}
+ are executed for the first data line, and the \meta{else macros}
+ are executed for all following lines.
+\begin{dispExample}
+\csvreader[tabbing, head to column names, table head=\hspace*{3cm}\=\kill]%
+ {grade.csv}{}%
+ {\givenname~\name \> (\csviffirstrow{first entry!!}{following entry})}
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvifoddrow}{\marg{then macros}\marg{else macros}}
+ Inside the command list of \refCom{csvreader}, the \meta{then macros}
+ are executed for odd-numbered data lines, and the \meta{else macros}
+ are executed for even-numbered lines.
+\begin{dispExample}
+\csvreader[head to column names,tabular=|l|l|l|l|,
+ table head=\hline\bfseries \# & \bfseries Name & \bfseries Grade\\\hline,
+ table foot=\hline]{grade.csv}{}{%
+ \csvifoddrow{\slshape\thecsvrow & \slshape\name, \givenname & \slshape\grade}%
+ {\bfseries\thecsvrow & \bfseries\name, \givenname & \bfseries\grade}}
+\end{dispExample}
+
+The |\csvifoddrow| macro may be used for striped tables:
+
+\begin{dispExample}
+% This example needs the xcolor package
+\csvreader[head to column names,tabular=rlcc,
+ table head=\hline\rowcolor{red!50!black}\color{white}\# & \color{white}Person
+ & \color{white}Matr.~No. & \color{white}Grade,
+ late after head=\\\hline\rowcolor{yellow!50},
+ late after line=\csvifoddrow{\\\rowcolor{yellow!50}}{\\\rowcolor{red!25}}]%
+ {grade.csv}{}%
+ {\thecsvrow & \givenname~\name & \matriculation & \grade}%
+\end{dispExample}
+
+\enlargethispage*{1cm}
+Alternatively, |\rowcolors| from the |xcolor| package can be used for this
+purpose:
+
+\begin{dispExample}
+% This example needs the xcolor package
+\csvreader[tabular=rlcc, before table=\rowcolors{2}{red!25}{yellow!50},
+ table head=\hline\rowcolor{red!50!black}\color{white}\# & \color{white}Person
+ & \color{white}Matr.~No. & \color{white}Grade\\\hline,
+ head to column names]{grade.csv}{}%
+ {\thecsvrow & \givenname~\name & \matriculation & \grade}%
+\end{dispExample}
+\end{docCommand}
+
+\clearpage
+
+\begin{docCommand}{csvfilteraccept}{}
+ All following consistent data lines will be accepted and processed.
+ This command overwrites all previous filter settings and may be used
+ inside \refKey{/csv/full filter} to implement
+  your own filtering rule together with |\csvfilterreject|.
+\begin{dispExample}
+\csvreader[autotabular,
+ full filter=\ifcsvstrcmp{\csvcoliv}{m}{\csvfilteraccept}{\csvfilterreject}
+ ]{grade.csv}{}{\csvlinetotablerow}%
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{csvfilterreject}{}
+ All following data lines will be ignored.
+ This command overwrites all previous filter settings.
+\end{docCommand}
+
+
+\begin{docCommand}{csvline}{}
+ This macro contains the current and unprocessed data line.
+\begin{dispExample}
+\csvreader[no head, tabbing, table head=\textit{line XX:}\=\kill]%
+ {grade.csv}{}{\textit{line \thecsvrow:} \> \csvline}%
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}{thecsvrow}{}
+ Typesets the current data line number. This is the
+ current number of accepted data lines without the header line.
+ The \LaTeX\ counter |csvrow| can be addressed directly in the usual way,
+ e.\,g. by |\roman{csvrow}|.
+\end{docCommand}
+
+
+\begin{docCommand}{thecsvinputline}{}
+ Typesets the current file line number. This is the
+ current number of all data lines including the header line.
+ The \LaTeX\ counter |csvinputline| can be addressed directly in the usual way,
+ e.\,g. by |\roman{csvinputline}|.
+\begin{dispExample}
+\csvreader[no head, filter test=\ifnumequal{\thecsvinputline}{3}]%
+ {grade.csv}{}%
+ {The line with number \thecsvinputline\ contains: \csvline}%
+\end{dispExample}
+\end{docCommand}
+
+
+\begin{docCommand}[doc updated=2016-07-01]{csvlinetotablerow}{}
+  Typesets the currently processed data line with |&| between the entries.
+ %Most users will never apply this command.
+\end{docCommand}
+
+
+
+\clearpage
+\section{Option Keys}\label{sec:schluessel}%
+For the \meta{options} in \refCom{csvreader} and \refCom{csvloop},
+the following |pgf| keys can be applied. The key tree path |/csv/| is not
+to be used inside these macros.
+
+
+\subsection{Command Definition}%--------%[[
+
+\begin{docCsvKey}{before reading}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed before the CSV file is processed.
+\end{docCsvKey}
+
+\begin{docCsvKey}{after head}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after the header line is read.
+\end{docCsvKey}
+
+\begin{docCsvKey}{before filter}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and consistency checking
+  of a data line. It is executed before any filter condition is checked,
+ see \refKey{/csv/filter}.
+ Also see \refKey{/csv/full filter}.
+\end{docCsvKey}
+
+\begin{docCsvKey}{late after head}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and disassembling
+  of the first accepted data line. It is executed before further processing
+ of this line.
+\end{docCsvKey}
+
+\begin{docCsvKey}{late after line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and disassembling
+ of the next accepted data line (after \refKey{/csv/before filter}).
+  It is executed before further processing of this next line.
+ |late after line| overwrites |late after first line| and |late after last line|.
+ Note that table options like \refKey{/csv/tabular} set this key to |\\|
+ automatically.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{late after first line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after reading and disassembling
+ of the second accepted data line instead of \refKey{/csv/late after line}.
+ This key has to be set after |late after line|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{late after last line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after processing of the last
+ accepted data line instead of \refKey{/csv/late after line}.
+ This key has to be set after |late after line|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{before line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after \refKey{/csv/late after line}
+ and before \refKey{/csv/command}.
+ |before line| overwrites |before first line|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{before first line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed instead of \refKey{/csv/before line}
+ for the first accepted data line.
+ This key has to be set after |before line|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{command}{=\meta{code}}{no default, initially \cs{csvline}}
+ Sets the \meta{code} to be executed for every accepted data line.
+  It is executed between \refKey{/csv/before line} and \refKey{/csv/after line}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{after line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed for every accepted data line
+ after \refKey{/csv/command}.
+ |after line| overwrites |after first line|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{after first line}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed instead of \refKey{/csv/after line}
+ for the first accepted data line.
+ This key has to be set after |after line|.
+\end{docCsvKey}
+
+\begin{docCsvKey}{after reading}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after the CSV file is processed.
+\end{docCsvKey}
+
+
+\begin{dispExample}
+\csvreader[
+ before reading = \meta{before reading}\\,
+ after head = \meta{after head},
+ before filter = \\\meta{before filter},
+ late after head = \meta{late after head},
+ late after line = \meta{late after line},
+ late after first line = \meta{late after first line},
+ late after last line = \\\meta{late after last line},
+ before line = \meta{before line},
+ before first line = \meta{before first line},
+ after line = \meta{after line},
+ after first line = \meta{after first line},
+ after reading = \\\meta{after reading}
+ ]{grade.csv}{name=\name}{\textbf{\name}}%
+\end{dispExample}
+
+Additional command definition keys are provided for the supported tables,
+see Section~\ref{subsec:tabsupport} from page~\pageref{subsec:tabsupport}.
+
+\clearpage
+\subsection{Header Processing and Column Name Assignment}%
+
+\begin{docCsvKey}{head}{\colOpt{=true\textbar false}}{default |true|, initially |true|}
+ If this key is set, the first line of the CSV file is treated as a header
+ line which can be used for column name assignments.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no head}{}{no value}
+ Abbreviation for |head=false|, i.\,e. the first line of the CSV file is
+ treated as data line.
+ Note that this option cannot be used in combination with
+ \refCom{csvautotabular}, \refKey{/csv/autotabular}, and similar automated commands/options.
+ See Section~\ref{noheader} on page~\pageref{noheader} for assistance.
+\end{docCsvKey}
+
+\begin{docCsvKey}{column names}{=\meta{assignments}}{no default, initially empty}
+ Adds some new \meta{assignments} of macros to columns in key value syntax.
+ Existing assignments are kept.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{column names reset}{}{no value}
+ Clears all assignments of macros to columns.
+\end{docCsvKey}
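+
+For illustration, assignments given by several \refKey{/csv/column names}
+options are merged, since existing assignments are kept; a short sketch using
+the |grade.csv| example file from above:
+\begin{dispListing}
+\csvreader[
+    column names={name=\name},
+    column names={grade=\grade},% the \name assignment is kept
+  ]{grade.csv}{}{\name\ (\grade); }
+\end{dispListing}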
+
+
+\begin{docCsvKey}{head to column names}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, the entries of the header line are used automatically
+  as macro names for the columns. This option can only be used if
+  the header entries do not contain spaces or special characters, so that
+  they form feasible \LaTeX\ macro names.
+ Note that the macro definition is \emph{global} and may therefore override
+ existing macros for the rest of the document. Adding
+ \refKey{/csv/head to column names prefix} may help to avoid unwanted
+ overrides.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}[][doc new=2019-07-16]{head to column names prefix}{=\meta{text}}{no default, initially empty}
+ The given \meta{text} is prefixed to the name of all macros generated by
+ \refKey{/csv/head to column names}. For example, if you use the settings
+\begin{dispListing}
+ head to column names,
+ head to column names prefix=MY,
+\end{dispListing}
+ a header entry |section| will generate the corresponding macro
+ |\MYsection| instead of destroying the standard \LaTeX\ |\section| macro.
+\end{docCsvKey}
+
+
+\clearpage
+\subsection{Consistency Check}\label{sec:consistency}%
+
+\begin{docCsvKey}{check column count}{\colOpt{=true\textbar false}}{default |true|, initially |true|}
+  This key defines whether the number of entries in a data line is checked against
+  an expected value or not.\\
+  If |true|, every inconsistent line is ignored without announcement.\\
+ If |false|, every line is accepted and may produce an error during
+ further processing.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no check column count}{}{no value}
+ Abbreviation for |check column count=false|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{column count}{=\meta{number}}{no default}
+ Sets the \meta{number} of feasible entries per data line.
+ This setting is only useful in connection with \refKey{/csv/no head},
+ since \meta{number} would be replaced by the number of entries in the
+ header line otherwise.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{on column count error}{=\meta{code}}{no default, initially empty}
+ \meta{code} to be executed for unfeasible data lines.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{warn on column count error}{}{style, no value}
+  Displays a warning for unfeasible data lines.
+\end{docCsvKey}
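+
+A minimal sketch combining these keys: the |grade.csv| example file is read
+without a header line, an expected entry count is declared, and a warning
+(instead of silent ignoring) is requested for unfeasible lines:
+\begin{dispListing}
+\csvreader[
+    no head,
+    column count=5,
+    warn on column count error,
+  ]{grade.csv}{1=\name}{\name; }
+\end{dispListing}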
+
+
+\clearpage
+\subsection{Filtering}%
+
+\begin{docCsvKey}[][doc new=2016-07-01]{filter test}{=\meta{condition}}{no default}
+ Only data lines which fulfill a logical \meta{condition} are accepted.
+ For the \meta{condition}, every single test normally employed like
+\begin{dispListing}
+\iftest{some testing}{true}{false}
+\end{dispListing}
+ can be used as
+\begin{dispListing}
+filter test=\iftest{some testing},
+\end{dispListing}
+ For |\iftest|, tests from the |etoolbox| package like
+ |\ifnumcomp|, |\ifdimgreater|, etc. and from \Fullref{sec:stringtests} can be used.
+
+\begin{dispExample}
+\csvreader[head to column names,tabular=llll,
+ table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot=\bottomrule,
+ %>> list only matriculation numbers greater than 20000 <<
+ filter test=\ifnumgreater{\matriculation}{20000},
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
+\end{dispExample}
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter strcmp}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are equal after expansion
+ are accepted.
+ The implementation is done with \refCom{ifcsvstrcmp}.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter not strcmp}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are not equal after expansion
+ are accepted.
+ The implementation is done with \refCom{ifcsvnotstrcmp}.
+\end{docCsvKey}
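+
+For example, a short sketch which lists only the male students of the
+|grade.csv| example file (the gender is stored in the fourth column):
+\begin{dispListing}
+\csvreader[filter strcmp={\csvcoliv}{m}]
+  {grade.csv}{1=\name,2=\firstname}
+  {\firstname~\name; }
+\end{dispListing}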
+
+
+\begin{docCsvKey}[][doc new=2016-07-01]{filter expr}{=\meta{condition}}{no default}
+ Only data lines which fulfill a logical \meta{condition} are accepted.
+ For the \meta{condition}, every boolean expression
+ from the |etoolbox| package is feasible.
+ To preprocess the data line before testing the \meta{condition},
+ the option key \refKey{/csv/before filter} can be used.
+\begin{dispExample}
+\csvreader[head to column names,tabular=llll,
+ table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot=\bottomrule,
+ %>> list only matriculation numbers greater than 20000
+ % and grade less than 4.0 <<
+ filter expr={ test{\ifnumgreater{\matriculation}{20000}}
+ and test{\ifdimless{\grade pt}{4.0pt}} },
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
+\end{dispExample}
+\end{docCsvKey}
+
+\clearpage
+\begin{docCsvKey}[][doc new=2016-07-01]{filter ifthen}{=\meta{condition}}{no default}
+ Only data lines which fulfill a logical \meta{condition} are accepted.
+ For the \meta{condition}, every term from the |ifthen| package
+ is feasible.
+ To preprocess the data line before testing the \meta{condition},
+ the option key \refKey{/csv/before filter} can be used.
+
+\begin{dispExample}
+\csvreader[head to column names,tabular=llll,
+ table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot=\bottomrule,
+ %>> list only female persons <<
+ filter ifthen=\equal{\gender}{f},
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
+\end{dispExample}
+
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter}{=\meta{condition}}{no default}
+ Alias for \refKey{/csv/filter ifthen}.
+\end{docCsvKey}
+
+\begin{docCsvKey}{filter equal}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are equal after expansion
+ are accepted.
+ The implementation is done with the |ifthen| package.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter not equal}{=\marg{stringA}\marg{stringB}}{style, no default}
+ Only lines where \meta{stringA} and \meta{stringB} are not equal after expansion
+ are accepted.
+ The implementation is done with the |ifthen| package.
+\end{docCsvKey}
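+
+As a sketch, |filter not equal| can be used to list all students of the
+|grade.csv| example file except the female ones:
+\begin{dispListing}
+\csvautotabular[filter not equal={\csvcoliv}{f}]{grade.csv}
+\end{dispListing}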
+
+
+
+\begin{docCsvKey}{no filter}{}{no value, initially set}
+ Clears a set filter.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter accept all}{}{no value, initially set}
+ Alias for |no filter|. All consistent data lines are accepted.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{filter reject all}{}{no value}
+  All data lines are ignored.
+\end{docCsvKey}
+
+
+
+\enlargethispage*{2cm}
+\begin{docCsvKey}[][doc new=2016-07-01]{full filter}{=\meta{code}}{no default}
+ Technically, this key is an alias for \refKey{/csv/before filter}.
+ Philosophically, \refKey{/csv/before filter} computes something before
+ a filter condition is set, but \refKey{/csv/full filter} should implement
+  the full filtering. In particular, \refCom{csvfilteraccept} or
+  \refCom{csvfilterreject} \emph{should} be called inside the \meta{code}.
+\begin{dispExample}
+\csvreader[head to column names,tabular=llll,
+ table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
+ table foot=\bottomrule,
+ %>> list only matriculation numbers greater than 20000
+ % and grade less than 4.0 <<
+ full filter=\ifnumgreater{\matriculation}{20000}
+ {\ifdimless{\grade pt}{4.0pt}{\csvfilteraccept}{\csvfilterreject}}
+ {\csvfilterreject},
+ ]{grade.csv}{}{%
+ \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
+\end{dispExample}
+\end{docCsvKey}
+
+
+
+%]]
+
+
+\clearpage
+\subsection{Table Support}\label{subsec:tabsupport}%--------%[[
+
+\begin{docCsvKey}{tabular}{=\meta{table format}}{style, no default}
+ Surrounds the CSV processing with |\begin{tabular}|\marg{table format}
+  at the beginning and with |\end{tabular}| at the end.
+Additionally, the commands defined by the key values of
+ \refKey{/csv/before table}, \refKey{/csv/table head}, \refKey{/csv/table foot},
+ and \refKey{/csv/after table} are executed at the appropriate places.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{centered tabular}{=\meta{table format}}{style, no default}
+ Like \refKey{/csv/tabular} but inside an additional |center| environment.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{longtable}{=\meta{table format}}{style, no default}
+ Like \refKey{/csv/tabular} but for the |longtable| environment.
+ This requires the package |longtable| (not loaded automatically).
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{tabbing}{}{style, no value}
+ Like \refKey{/csv/tabular} but for the |tabbing| environment.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{centered tabbing}{}{style, no value}
+ Like \refKey{/csv/tabbing} but inside an additional |center| environment.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no table}{}{style, no value}
+ Deactivates |tabular|, |longtable|, and |tabbing|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{before table}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed before |\begin{tabular}| or before |\begin{longtable}|
+ or before |\begin{tabbing}|, respectively.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{table head}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after |\begin{tabular}| or after |\begin{longtable}|
+ or after |\begin{tabbing}|, respectively.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{table foot}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed before |\end{tabular}| or before |\end{longtable}|
+ or before |\end{tabbing}|, respectively.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{after table}{=\meta{code}}{no default, initially empty}
+ Sets the \meta{code} to be executed after |\end{tabular}| or after |\end{longtable}|
+ or after |\end{tabbing}|, respectively.
+\end{docCsvKey}
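+
+As a sketch, several of these keys can be combined; for instance, for the
+|grade.csv| example file (|booktabs| is assumed to be loaded for the rules):
+\begin{dispListing}
+% \usepackage{booktabs}
+\csvreader[
+    tabular=llc,
+    before table=\begin{center},
+    table head=\toprule Name & Given Name & Grade\\\midrule,
+    table foot=\bottomrule,
+    after table=\end{center},
+  ]{grade.csv}{name=\name,givenname=\firstname,grade=\grade}
+  {\name & \firstname & \grade}
+\end{dispListing}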
+
+\bigskip
+
+The following |auto| options are the counterparts for the respective quick
+overview commands like \refCom{csvautotabular}. They are listed for
+completeness, but are unlikely to be used directly.
+
+\begin{docCsvKey}{autotabular}{=\meta{file name}}{no default}
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{autolongtable}{=\meta{file name}}{no default}
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
+ using the required |longtable| package.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{autobooktabular}{=\meta{file name}}{no default}
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
+ using the required |booktabs| package.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{autobooklongtable}{=\meta{file name}}{no default}
+  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
+ using the required |booktabs| and |longtable| packages.
+\end{docCsvKey}
+
+
+\clearpage
+\subsection{Special Characters}\label{subsec:specchar}
+By default, the CSV content is treated like normal \LaTeX\ text, see
+Subsection~\ref{macrocodexample} on page~\pageref{macrocodexample}.
+However, \TeX\ special characters of the CSV content may also be interpreted
+as normal characters, if one or more of the following options are used.
+
+\begin{docCsvKey}{respect tab}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ tabulator sign
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect percent}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ percent sign \verbbox{\%}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect sharp}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ sharp sign \verbbox{\#}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect dollar}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ dollar sign \verbbox{\$}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect and}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+  ampersand sign \verbbox{\&}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect backslash}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ backslash sign \verbbox{\textbackslash}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect underscore}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ underscore sign \verbbox{\_}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect tilde}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ tilde sign \verbbox{\textasciitilde}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect circumflex}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ circumflex sign \verbbox{\textasciicircum}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect leftbrace}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ left brace sign \verbbox{\textbraceleft}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect rightbrace}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
+ If this key is set, every
+ right brace sign \verbbox{\textbraceright}
+ inside the CSV content is a normal character.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect all}{}{style, no value, initially unset}
+  Sets all special characters from above to normal characters. This means
+ a quite verbatim interpretation of the CSV content.
+\end{docCsvKey}
+
+\begin{docCsvKey}{respect none}{}{style, no value, initially set}
+  Does not change any special character from above to a normal character.
+\end{docCsvKey}
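+
+As a sketch (the file name |labels.csv| and its content are merely assumed for
+illustration), CSV data containing underscores could be printed literally like this:
+\begin{dispListing}
+% entries such as 'temp_sensor_1' are typeset as normal text
+\csvautobooktabular[respect underscore]{labels.csv}
+\end{dispListing}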
+
+\clearpage
+\subsection{Separators}\label{sec:separators}%
+\begin{docCsvKey}{separator}{=\meta{sign}}{no default, initially |comma|}
+ \catcode `|=12
+  Sets the \meta{sign} which is treated as the separator between the data values
+ of a data line. Feasible values are:
+ \begin{itemize}
+ \item\docValue{comma}: This is the initial value with '\texttt{,}' as separator.
+ \medskip
+
+ \item\docValue{semicolon}: Sets the separator to '\texttt{;}'.
+\begin{dispExample}
+% \usepackage{tcolorbox} for tcbverbatimwrite
+\begin{tcbverbatimwrite}{testsemi.csv}
+ name;givenname;matriculation;gender;grade
+ Maier;Hans;12345;m;1.0
+ Huber;Anna;23456;f;2.3
+ Weißbäck;Werner;34567;m;5.0
+\end{tcbverbatimwrite}
+
+\csvautobooktabular[separator=semicolon]{testsemi.csv}
+\end{dispExample}
+\medskip
+
+\item\docValue{pipe}: Sets the separator to '\texttt{|}'.
+\begin{dispExample}
+% \usepackage{tcolorbox} for tcbverbatimwrite
+\begin{tcbverbatimwrite}{pipe.csv}
+ name|givenname|matriculation|gender|grade
+ Maier|Hans|12345|m|1.0
+ Huber|Anna|23456|f|2.3
+ Weißbäck|Werner|34567|m|5.0
+\end{tcbverbatimwrite}
+
+\csvautobooktabular[separator=pipe]{pipe.csv}
+\end{dispExample}
+\medskip
+
+\item\docValue{tab}: Sets the separator to the tabulator sign.
+  Automatically, \refKey{/csv/respect tab} is also set.
+ \end{itemize}
+\end{docCsvKey}
+
+\clearpage
+\subsection{Miscellaneous}%
+
+\begin{docCsvKey}{every csv}{}{style, initially empty}
+ A style definition which is used for every following CSV file.
+ This definition can be overwritten with user code.
+\begin{dispListing}
+% Sets a warning message for unfeasible data lines.
+\csvset{every csv/.style={warn on column count error}}
+% Alternatively:
+\csvstyle{every csv}{warn on column count error}
+\end{dispListing}
+\end{docCsvKey}
+
+\begin{docCsvKey}{default}{}{style}
+ A style definition which is used for every following CSV file which
+ resets all settings to default values\footnote{\texttt{default} is used
+ because of the global nature of most settings.}.
+  This key should not be used or changed by the user unless there is a
+  really good reason (and you know what you are doing).
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{file}{=\meta{file name}}{no default, initially |unknown.csv|}
+ Sets the \meta{file name} of the CSV file to be processed.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{preprocessed file}{=\meta{file name}}{no default, initially \texttt{\textbackslash\detokenize{jobname_sorted.csv}}}
+ Sets the \meta{file name} of the CSV file which is the output of a
+ preprocessor.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{preprocessor}{=\meta{macro}}{no default}
+ Defines a preprocessor for the given CSV file.
+ The \meta{macro} has to have two mandatory arguments. The first argument
+ is the original CSV file which is set by \refKey{/csv/file}.
+ The second argument is the preprocessed CSV file
+ which is set by \refKey{/csv/preprocessed file}.\par\smallskip
+ Typically, the \meta{macro} may call an external program which preprocesses
+ the original CSV file (e.\,g. sorting the file) and creates the
+  preprocessed CSV file. The latter file is used by \refCom{csvreader}
+ or \refCom{csvloop}.
+\begin{dispListing}
+\newcommand{\mySortTool}[2]{%
+ % call to an external program to sort file #1 with resulting file #2
+}
+
+\csvreader[%
+ preprocessed file=\jobname_sorted.csv,
+ preprocessor=\mySortTool,
+ ]{some.csv}{}{%
+ % do something
+}
+\end{dispListing}
+See Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting} for a
+concrete sorting preprocessing implemented with an external tool.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{no preprocessing}{}{style, no value, initially set}
+  Clears any preprocessing, i.\,e. preprocessing is switched off.
+\end{docCsvKey}
+
+
+
+\clearpage
+\subsection{Sorting}\label{sec:Sorting}%
+\TeX/\LaTeX\ was not born under a sorting planet. |csvsimple-legacy| provides no
+sorting of data lines by \LaTeX-methods since sorting can be done much faster
+and much better by external tools.
+
+First, one should consider the appropriate \emph{place} for sorting:
+\begin{itemize}
+\item CSV files may be sorted by a tool \emph{before} the \LaTeX\ document is processed
+ at all. If the CSV data is not likely to change, this is the most efficient method.
+\item CSV files may be sorted by a tool every time before the \LaTeX\ document is compiled.
+ This could be automated by a shell script or some processing tool like |arara|.
+\item CSV files may be sorted on-the-fly by a tool during compilation of
+ a \LaTeX\ document. This is the most elegant but not the most efficient way.
+\end{itemize}
+
+The first two methods are decoupled from anything concerning |csvsimple-legacy|.
+The third method is what the \refKey{/csv/preprocessor} option is made for.
+It allows an external tool to be accessed for sorting.
+\emph{Which tool} is your choice.
+
+\csvsorter\ was written as a companion tool for |csvsimple|.
+It is an open source Java command-line tool for sorting CSV files, available at\\
+\url{http://T-F-S.github.io/csvsorter/}\quad or\quad
+\url{https://github.com/T-F-S/csvsorter}
+
+It can be
+used for all three sorting approaches described above.
+There is special support for on-the-fly sorting with \csvsorter\ using the
+following options.
+
+\begin{enumerate}\bfseries
+\item To use the sorting options, you have to install \csvsorter\ first!\\
+  |csvsimple| v1.12 or newer needs \csvsorter\ v0.94 or newer!
+\item You have to give permission to call external tools during
+ compilation, i.\,e.\ the command-line options for |latex| have to include
+ |-shell-escape|.
+\end{enumerate}
+
+\bigskip
+
+\begin{docCsvKey}{csvsorter command}{=\meta{system command}}{no default, initially |csvsorter|}
+ The \meta{system command} specifies the system call for \csvsorter\ (without the options).
+ If \csvsorter\ was completely installed following its documentation, there is
+ nothing to change here. If the |csvsorter.jar| file is inside the same
+  directory as the \LaTeX\ source file, you may configure:% preferably inside the preamble:
+\begin{dispListing}
+\csvset{csvsorter command=java -jar csvsorter.jar}
+\end{dispListing}
+\end{docCsvKey}
+
+\begin{docCsvKey}{csvsorter configpath}{=\meta{path}}{no default, initially |.|}
+ Sorting with \csvsorter\ is done using XML configuration files. If these files
+ are not stored inside the same directory as the \LaTeX\ source file, a
+ \meta{path} to access them can be configured:
+\begin{dispListing}
+\csvset{csvsorter configpath=xmlfiles}
+\end{dispListing}
+ Here, the configuration files would be stored in a subdirectory named |xmlfiles|.
+\end{docCsvKey}
+
+\begin{docCsvKey}{csvsorter log}{=\meta{file name}}{no default, initially |csvsorter.log|}
+ Sets the log file of \csvsorter\ to the given \meta{file name}.
+\begin{dispListing}
+\csvset{csvsorter log=outdir/csvsorter.log}
+\end{dispListing}
+ Here, the log file is written to a subdirectory named |outdir|.
+\end{docCsvKey}
+
+\clearpage
+\begin{docCsvKey}{csvsorter token}{=\meta{file name}}{no default, initially |\textbackslash jobname.csvtoken|}
+ Sets \meta{file name} as token file. This is an auxiliary file which
+ communicates the success of \csvsorter\ to |csvsimple|.
+\begin{dispListing}
+\csvset{csvsorter token=outdir/\jobname.csvtoken}
+\end{dispListing}
+ Here, the token file is written to a subdirectory named |outdir|.
+\end{docCsvKey}
+
+
+\begin{docCsvKey}{sort by}{=\meta{file name}}{style, initially unset}
+ The \meta{file name} denotes an XML configuration file for \csvsorter.
+ Setting this option inside \refCom{csvreader} or
+ \refCom{csvloop} will issue a system call to \csvsorter.
+ \begin{itemize}
+ \item \csvsorter\ uses the given CSV file as input file.
+ \item \csvsorter\ uses \meta{file name} as configuration file.
+ \item The output CSV file is denoted by \refKey{/csv/preprocessed file}
+ which is by default \texttt{\textbackslash\detokenize{jobname_sorted.csv}}.
+        This output file is the actual file processed by \refCom{csvreader} or \refCom{csvloop}.
+ \item \csvsorter\ also generates a log file denoted by \refKey{/csv/csvsorter log} which is by default |csvsorter.log|.
+ \end{itemize}
+
+\par\medskip\textbf{First example:}
+ To sort our example |grade.csv| file according to |name| and |givenname|, we
+ use the following XML configuration file. Since \csvsorter\ uses double quotes
+ as default brackets for column values, we remove bracket recognition to avoid
+ a clash with the escaped umlauts of the example CSV file.\par\smallskip
+
+\xmllisting{namesort}
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[sort by=namesort.xml,
+ head to column names,
+ tabular=>{\color{red}}lllll,
+ table head=\toprule Name & Given Name & Matriculation & Gender & Grade\\\midrule,
+ table foot=\bottomrule]
+ {grade.csv}{}{\csvlinetotablerow}
+\end{dispExample}
+
+\clearpage\textbf{Second example:}
+ To sort our example |grade.csv| file according to |grade|, we
+ use the following XML configuration file. Further, persons with the same |grade|
+ are sorted by |name| and |givenname|. Since \csvsorter\ uses double quotes
+ as default brackets for column values, we remove bracket recognition to avoid
+ a clash with the escaped umlauts of the example CSV file.\par\smallskip
+
+\xmllisting{gradesort}
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[sort by=gradesort.xml,
+ head to column names,
+ tabular=llll>{\color{red}}l,
+ table head=\toprule Name & Given Name & Matriculation & Gender & Grade\\\midrule,
+ table foot=\bottomrule]
+ {grade.csv}{}{\csvlinetotablerow}
+\end{dispExample}
+
+\clearpage\textbf{Third example:}
+ To generate a matriculation/grade list, we sort our example |grade.csv| file
+ using the following XML configuration file.
+ Again, since \csvsorter\ uses double quotes
+ as default brackets for column values, we remove bracket recognition to avoid
+ a clash with the escaped umlauts of the example CSV file.\par\smallskip
+
+\xmllisting{matriculationsort}
+\begin{dispExample}
+% \usepackage{booktabs}
+\csvreader[sort by=matriculationsort.xml,
+ head to column names,
+ tabular=>{\color{red}}ll,
+ table head=\toprule Matriculation & Grade\\\midrule,
+ table foot=\bottomrule]
+ {grade.csv}{}{\matriculation & \grade}
+\end{dispExample}
+\end{docCsvKey}
+
+
+\clearpage
+\begin{docCsvKey}{new sorting rule}{=\marg{name}\marg{file name}}{style, initially unset}
+This is a convenience option to generate a new shortcut for frequently used
+\refKey{/csv/sort by} applications. It also adds a more semantic touch.
+The new shortcut option is
+\tcbox[on line,size=small,colback=white,colframe=red]{|sort by| \meta{name}} which expands to
+\tcbox[on line,size=small,colback=white,colframe=red]{|sort by=|\marg{file name}}.\par\medskip
+
+Consider the following example:
+\begin{dispExample}
+\csvautotabular[sort by=namesort.xml]{grade.csv}
+\end{dispExample}
+A good place for setting up a new sorting rule would be inside the preamble:
+
+\csvset{new sorting rule={name}{namesort.xml}}
+\begin{dispListing}
+\csvset{new sorting rule={name}{namesort.xml}}
+\end{dispListing}
+
+Now, we can use the new rule:
+\begin{dispExample}
+\csvautotabular[sort by name]{grade.csv}
+\end{dispExample}
+
+\end{docCsvKey}
+
+
+\clearpage
+\section{String Tests}\label{sec:stringtests}%
+
+The following string tests complement the string tests
+from the |etoolbox| package. They all do the same, i.e.,
+they compare expanded strings for equality.
+\begin{itemize}
+\item\refCom{ifcsvstrcmp} is the most efficient method, because it uses
+ native compiler string comparison (if available).
+\item\refCom{ifcsvstrequal} does not rely on compiler support. It is also the
+  fallback implementation for \refCom{ifcsvstrcmp} if there is no
+  native comparison method.
+\item\refCom{ifcsvprostrequal} is possibly more failsafe than the other two
+  string tests. It may be used if strings contain fragile content like |\textbf{A}|.
+\end{itemize}
+\medskip
+
+\begin{docCommand}[doc new=2016-07-01]{ifcsvstrcmp}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
+  The comparison is done using |\pdfstrcmp| if compilation is done with pdf\LaTeX.
+  The comparison is done using |\pdf@strcmp| if the package |pdftexcmds| is
+  loaded and compilation is done with Lua\LaTeX\ or Xe\LaTeX.
+ Otherwise, \refCom{ifcsvstrcmp} is identical to \refCom{ifcsvstrequal}.
+ This command cannot be used inside the preamble.
+\end{docCommand}
+
+
+\begin{docCommand}[doc new=2016-07-01]{ifcsvnotstrcmp}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are \emph{not} equal, and \meta{false} otherwise.
+ The implementation uses \refCom{ifcsvstrcmp}.
+\end{docCommand}
+
+
+\begin{docCommand}[doc new=2016-07-01]{ifcsvstrequal}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
+ The strings are expanded with |\edef| in the test.
+\end{docCommand}
+
+\begin{docCommand}[doc new=2016-07-01]{ifcsvprostrequal}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
+ Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
+  The strings are expanded with |\protected@edef| in the test, i.e.\ protected
+  parts of the strings stay unexpanded.
+\end{docCommand}
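+
+As a small illustration (a sketch with made-up strings, not taken from the
+package examples), the following listing shows how these tests may be applied:
+\begin{dispListing}
+% minimal usage sketch with made-up strings
+\def\mystring{abc}
+\ifcsvstrcmp{abc}{\mystring}{equal}{not equal}                % -> equal
+\ifcsvnotstrcmp{abc}{xyz}{different}{same}                    % -> different
+\ifcsvprostrequal{\textbf{A}}{\textbf{A}}{equal}{not equal}   % -> equal
+\end{dispListing}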
+
+
+
+\clearpage
+\section{Examples}%
+
+\subsection{A Serial Letter}%
+In this example, a serial letter is to be written to all persons with
+addresses from the following CSV file. Deliberately, the file content is
+not given in a very pretty format.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{address.csv}
+name,givenname,gender,degree,street,zip,location,bonus
+Maier,Hans,m,,Am Bachweg 17,10010,Hopfingen,20
+ % next line with a comma in curly braces
+Huber,Erna,f,Dr.,{Moosstraße 32, Hinterschlag},10020,Örtingstetten,30
+Weißbäck,Werner,m,Prof. Dr.,Brauallee 10,10030,Klingenbach,40
+ % this line is ignored %
+ Siebener , Franz,m, , Blaumeisenweg 12 , 10040 , Pardauz , 50
+ % preceding and trailing spaces in entries are removed %
+Schmitt,Anton,m,,{\AE{}lfred-Esplanade, T\ae{}g 37}, 10050,\OE{}resung,60
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{address}
+
+Firstly, we survey the file content quickly using
+|\csvautotabular|.
+As can be seen, unfeasible lines are ignored automatically.
+
+\begin{dispExample}
+\tiny\csvautotabular{address.csv}
+\end{dispExample}
+
+Now, we create the serial letter where every feasible data line produces
+its own page. Here, we simulate the page with a |tcolorbox| (from the package
+|tcolorbox|).
+For the gender-specific salutations, an auxiliary macro |\ifmale| is
+introduced.
+
+\begin{dispExample}
+% this example requires the tcolorbox package
+\newcommand{\ifmale}[2]{\ifcsvstrcmp{\gender}{m}{#1}{#2}}
+
+\csvreader[head to column names]{address.csv}{}{%
+\begin{tcolorbox}[colframe=DarkGray,colback=White,arc=0mm,width=(\linewidth-2pt)/2,
+ equal height group=letter,before=,after=\hfill,fonttitle=\bfseries,
+ adjusted title={Letter to \name}]
+ \ifcsvstrcmp{\degree}{}{\ifmale{Mr.}{Ms.}}{\degree}~\givenname~\name\\
+ \street\\\zip~\location
+ \tcblower
+ {\itshape Dear \ifmale{Sir}{Madam},}\\
+ we are pleased to announce you a bonus value of \bonus\%{}
+ which will be delivered to \location\ soon.\\\ldots
+\end{tcolorbox}}
+\end{dispExample}
+
+
+
+\clearpage
+\subsection{A Graphical Presentation}%
+For this example, we use some artificial statistical data given by a CSV file.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{data.csv}
+land,group,amount
+Bayern,A,1700
+Baden-Württemberg,A,2300
+Sachsen,B,1520
+Thüringen,A,1900
+Hessen,B,2100
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{data}
+
+Firstly, we survey the file content using
+|\csvautobooktabular|.
+
+\begin{dispExample}
+% needs the booktabs package
+\csvautobooktabular{data.csv}
+\end{dispExample}
+
+The amount values are presented in the following diagram by bars where
+the group classification is given using different colors.
+
+\begin{dispExample}
+% This example requires the package tikz
+\begin{tikzpicture}[Group/A/.style={left color=red!10,right color=red!20},
+ Group/B/.style={left color=blue!10,right color=blue!20}]
+\csvreader[head to column names]{data.csv}{}{%
+ \begin{scope}[yshift=-\thecsvrow cm]
+ \path [draw,Group/\group] (0,-0.45)
+ rectangle node[font=\bfseries] {\amount} (\amount/1000,0.45);
+ \node[left] at (0,0) {\land};
+ \end{scope} }
+\end{tikzpicture}
+\end{dispExample}
+
+
+\clearpage
+It would be nice to sort the bars by length, i.\,e.\ to sort the CSV file
+by the |amount| column. If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+this can be done with the following configuration file for \csvsorter:
+
+\xmllisting{amountsort}
+
+Now, we just have to add an option |sort by=amountsort.xml|:
+\begin{dispExample}
+% This example requires the package tikz
+% Also, the CSV-Sorter tool has to be installed
+\begin{tikzpicture}[Group/A/.style={left color=red!10,right color=red!20},
+ Group/B/.style={left color=blue!10,right color=blue!20}]
+\csvreader[head to column names,sort by=amountsort.xml]{data.csv}{}{%
+ \begin{scope}[yshift=-\thecsvrow cm]
+ \path [draw,Group/\group] (0,-0.45)
+ rectangle node[font=\bfseries] {\amount} (\amount/1000,0.45);
+ \node[left] at (0,0) {\land};
+ \end{scope} }
+\end{tikzpicture}
+\end{dispExample}
+
+
+
+
+\clearpage
+Next, we create a pie chart by calling |\csvreader| twice.
+In the first step, the total sum of amounts is computed, and in the second
+step the slices are drawn.
+
+\begin{dispExample}
+% Modified example from www.texample.net for pie charts
+% This example needs the packages tikz, xcolor, calc
+\definecolorseries{myseries}{rgb}{step}[rgb]{.95,.85,.55}{.17,.47,.37}
+\resetcolorseries{myseries}%
+
+% a pie slice
+\newcommand{\slice}[4]{
+ \pgfmathsetmacro{\midangle}{0.5*#1+0.5*#2}
+ \begin{scope}
+ \clip (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;
+ \colorlet{SliceColor}{myseries!!+}%
+ \fill[inner color=SliceColor!30,outer color=SliceColor!60] (0,0) circle (1cm);
+ \end{scope}
+ \draw[thick] (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;
+ \node[label=\midangle:#4] at (\midangle:1) {};
+ \pgfmathsetmacro{\temp}{min((#2-#1-10)/110*(-0.3),0)}
+ \pgfmathsetmacro{\innerpos}{max(\temp,-0.5) + 0.8}
+ \node at (\midangle:\innerpos) {#3};
+}
+
+% sum of amounts
+\csvreader[before reading=\def\mysum{0}]{data.csv}{amount=\amount}{%
+ \pgfmathsetmacro{\mysum}{\mysum+\amount}%
+}
+
+% drawing of the pie chart
+\begin{tikzpicture}[scale=3]%
+\def\mya{0}\def\myb{0}
+\csvreader[head to column names]{data.csv}{}{%
+ \let\mya\myb
+ \pgfmathsetmacro{\myb}{\myb+\amount}
+ \slice{\mya/\mysum*360}{\myb/\mysum*360}{\amount}{\land}
+}
+\end{tikzpicture}%
+\end{dispExample}
+
+
+\clearpage
+Finally, the filter option is demonstrated by separating the groups A and B.
+Every item is piled upon the appropriate stack.
+
+\begin{dispExample}
+\newcommand{\drawGroup}[2]{%
+ \def\mya{0}\def\myb{0}
+ \node[below=3mm] at (2.5,0) {\bfseries Group #1};
+ \csvreader[head to column names,filter equal={\group}{#1}]{data.csv}{}{%
+ \let\mya\myb
+ \pgfmathsetmacro{\myb}{\myb+\amount}
+ \path[draw,top color=#2!25,bottom color=#2!50]
+ (0,\mya/1000) rectangle node{\land\ (\amount)} (5,\myb/1000);
+}}
+
+\begin{tikzpicture}
+ \fill[gray!75] (-1,0) rectangle (13,-0.1);
+ \drawGroup{A}{red}
+ \begin{scope}[xshift=7cm]
+ \drawGroup{B}{blue}
+ \end{scope}
+\end{tikzpicture}
+
+\end{dispExample}
+
+
+\clearpage
+\subsection{Macro code inside the data}\label{macrocodexample}%
+
+If needed, the data file may contain macro code. Note that the first character
+of a data line is not allowed to be the backslash '|\|'.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{macrodata.csv}
+type,description,content
+M,A nice \textbf{formula}, $\displaystyle \int\frac{1}{x} = \ln|x|+c$
+G,A \textcolor{red}{colored} ball, {\tikz \shadedraw [shading=ball] (0,0) circle (.5cm);}
+M,\textbf{Another} formula, $\displaystyle \lim\limits_{n\to\infty} \frac{1}{n}=0$
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{macrodata}
+
+Firstly, we survey the file content using
+|\csvautobooktabular|.
+
+\begin{dispExample}
+\csvautobooktabular{macrodata.csv}
+\end{dispExample}
+
+
+\begin{dispExample}
+\csvstyle{my enumerate}{head to column names,
+ before reading=\begin{enumerate},after reading=\end{enumerate}}
+
+\csvreader[my enumerate]{macrodata.csv}{}{%
+ \item \description:\par\content}
+
+\bigskip
+Now, formulas only:
+\csvreader[my enumerate,filter equal={\type}{M}]{macrodata.csv}{}{%
+ \item \description:\qquad\content}
+\end{dispExample}
+
+\clearpage
+\subsection{Tables with Number Formatting}\label{numberformatting}%
+
+We consider a file with numerical data which should be pretty-printed.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{data_numbers.csv}
+month, dogs, cats
+January, 12.50,12.3e5
+February, 3.32, 8.7e3
+March, 43, 3.1e6
+April, 0.33, 21.2e4
+May, 5.12, 3.45e6
+June, 6.44, 6.66e6
+July, 123.2,7.3e7
+August, 12.3, 5.3e4
+September,2.3, 4.4e4
+October, 6.5, 6.5e6
+November, 0.55, 5.5e5
+December, 2.2, 3.3e3
+\end{tcbverbatimwrite}
+
+\csvlisting{data_numbers}
+
+The |siunitx| package provides a new column type |S|
+which can align material using a number of different strategies.
+The following example demonstrates the application with CSV reading.
+The package documentation of |siunitx| contains a huge amount
+of formatting options.
+
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs}
+\csvloop{
+ file=data_numbers.csv,
+ head to column names,
+ before reading=\centering\sisetup{table-number-alignment=center},
+ tabular={lSS[table-format=2.2e1]@{}c},
+ table head=\toprule\textbf{Month} & \textbf{Dogs} & \textbf{Cats} &\\\midrule,
+ command=\month & \dogs & \cats &,
+ table foot=\bottomrule}
+\end{dispExample}
+
+\clearpage
+Special care is needed if the \emph{first} or the \emph{last} column is to be formatted with
+the column type |S|. The number detection of |siunitx| is disturbed by
+the line reading code of |csvsimple-legacy|, which is actually present in the
+first and last column. To avoid this problem, the content of the first and last column
+can be formatted not by the table format definition, but directly with a
+suitable |\tablenum| formatting, see |siunitx|, as sketched below.
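+
+The following listing is only a rough sketch of this idea; the chosen column
+layout and |\tablenum| options are made up for illustration and not taken from
+the package examples:
+\begin{dispListing}
+% sketch: keep the last column a plain c column and format its content
+% directly with \tablenum instead of using an S column type
+\csvloop{
+  file=data_numbers.csv,
+  head to column names,
+  tabular={lSc},
+  table head=\toprule\textbf{Month} & \textbf{Dogs} & \textbf{Cats}\\\midrule,
+  command=\month & \dogs & \tablenum[table-format=2.2e1]{\cats},
+  table foot=\bottomrule}
+\end{dispListing}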
+
+Another, very nifty workaround suggested by Enrico Gregorio is to
+add an invisible dummy column with |c@{}| as the first column
+and |@{}c| as the last column:
+
+
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs}
+\csvloop{
+ file=data_numbers.csv,
+ head to column names,
+ before reading=\centering\sisetup{table-number-alignment=center},
+ tabular={c@{}S[table-format=2.2e1]S@{}c},
+ table head= & \textbf{Cats} & \textbf{Dogs} & \\\midrule,
+ command= & \cats & \dogs &,
+ table foot=\bottomrule}
+\end{dispExample}
+
+
+\clearpage
+Now, the preceding table shall be sorted by the \emph{cats} values.
+If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+this can be done with the following configuration file for \csvsorter:
+
+\xmllisting{catsort}
+
+Now, we just have to add an option |sort by=catsort.xml|:
+\begin{dispExample}
+% \usepackage{siunitx,array,booktabs}
+% Also, the CSV-Sorter tool has to be installed
+\csvloop{
+ file=data_numbers.csv,
+ sort by=catsort.xml,
+ head to column names,
+ before reading=\centering\sisetup{table-number-alignment=center},
+ tabular={lSS[table-format=2.2e1]@{}c},
+ table head=\toprule\textbf{Month} & \textbf{Dogs} & \textbf{Cats} & \\\midrule,
+ command=\month & \dogs & \cats &,
+ table foot=\bottomrule}
+\end{dispExample}
+
+
+\clearpage
+\subsection{CSV data without header line}\label{noheader}%
+CSV files with a header line are more self-descriptive than files without a header,
+but it is no problem to work with headless files.
+
+For this example, we again use some artificial statistical data given by a CSV file,
+but this time without a header.
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{data_headless.csv}
+Bayern,A,1700
+Baden-Württemberg,A,2300
+Sachsen,B,1520
+Thüringen,A,1900
+Hessen,B,2100
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{data_headless}
+
+Note that you cannot use the \refKey{/csv/no head} option for the auto tabular
+commands. If no options are given, the first line is interpreted as a header line,
+which gives an unpleasant result:
+
+\begin{dispExample}
+\csvautobooktabular{data_headless.csv}
+\end{dispExample}
+
+To get the expected result, one can redefine \refKey{/csv/table head}
+using \refCom{csvlinetotablerow} which holds the first line data for the
+|\csvauto...| commands:
+
+\begin{dispExample}
+\csvautobooktabular[table head=\toprule\csvlinetotablerow\\]{data_headless.csv}
+\end{dispExample}
+
+This example can be extended to insert a table head for this headless data:
+
+\begin{dispExample}
+\csvautobooktabular[table head=\toprule\bfseries Land & \bfseries Group
+ & \bfseries Amount\\\midrule\csvlinetotablerow\\]{data_headless.csv}
+\end{dispExample}
+
+\clearpage
+
+For the normal \refCom{csvreader} command, the \refKey{/csv/no head} option
+should be applied. Of course, we cannot use \refKey{/csv/head to column names}
+because there is no head, but the columns can be addressed by their numbers:
+
+\begin{dispExample}
+\csvreader[no head,
+ tabular=lr,
+ table head=\toprule\bfseries Land & \bfseries Amount\\\midrule,
+ table foot=\bottomrule]
+ {data_headless.csv}
+ {1=\land,3=\amount}
+ {\land & \amount}
+\end{dispExample}
+
+
+\clearpage
+\subsection{Imported CSV data}\label{sec:importeddata}%
+If data is imported from other applications, there is not always a choice
+to format it as comma separated values with curly braces.
+
+Consider the following example data file:
+
+%-- file embedded for simplicity --
+\begin{tcbverbatimwrite}{imported.csv}
+"name";"address";"email"
+"Frank Smith";"Yellow Road 123, Brimblsby";"frank.smith@organization.org"
+"Mary May";"Blue Alley 2a, London";"mmay@maybe.uk"
+"Hans Meier";"Hauptstraße 32, Berlin";"hans.meier@corporation.de"
+\end{tcbverbatimwrite}
+%-- end embedded file --
+
+\csvlisting{imported}
+
+If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+this can be transformed on-the-fly
+with the following configuration file for \csvsorter:
+
+\xmllisting{transform}
+
+Now, we just have to add an option |sort by=transform.xml| to transform
+the input data. Here, we actually do not sort.
+
+\begin{dispExample}
+% \usepackage{booktabs,array}
+% Also, the CSV-Sorter tool has to be installed
+\newcommand{\Header}[1]{\normalfont\bfseries #1}
+
+\csvreader[
+ sort by=transform.xml,
+ tabular=>{\itshape}ll>{\ttfamily}l,
+ table head=\toprule\Header{Name} & \Header{Address} & \Header{email}\\\midrule,
+ table foot=\bottomrule]
+ {imported.csv}{}{\csvlinetotablerow}
+\end{dispExample}
+
+The file which is generated on-the-fly and which is actually read by
+|csvsimple-legacy| is the following:
+
+\tcbinputlisting{docexample,listing style=tcbdocumentation,fonttitle=\bfseries,
+ listing only,listing file=\jobname_sorted._csv}
+
+
+\clearpage
+\subsection{Encoding}\label{encoding}%
+If the CSV file has a different encoding than the \LaTeX\ source file,
+then special care is needed.
+
+\begin{itemize}
+\item The most obvious treatment is to change the encoding of the CSV file
+  or the \LaTeX\ source file to match the other one (every good editor
+  supports such a conversion). This is the easiest choice if there are no
+  good reasons against such a step. E.g., unfortunately, several tools
+  under Windows need the CSV file to be |cp1252| encoded, while
+  the \LaTeX\ source file may need to be |utf8| encoded.
+
+\item The |inputenc| package allows switching the encoding inside the
+  document, say from |utf8| to |cp1252|. Just be aware that you should only
+  use pure ASCII for additional text inside the switched region.
+\begin{dispListing}
+% !TeX encoding=UTF-8
+% ....
+\usepackage[utf8]{inputenc}
+% ....
+\begin{document}
+% ....
+\inputencoding{latin1}% only use ASCII from here, e.g. "Uberschrift
+\csvreader[%...
+ ]{data_cp1252.csv}{%...
+ }{% ....
+ }
+\inputencoding{utf8}
+% ....
+\end{document}
+\end{dispListing}
+
+\item As a variant to the last method, the encoding switch can be done
+ using options from |csvsimple-legacy|:
+\begin{dispListing}
+% !TeX encoding=UTF-8
+% ....
+\usepackage[utf8]{inputenc}
+% ....
+\begin{document}
+% ....
+% only use ASCII from here, e.g. "Uberschrift
+\csvreader[%...
+ before reading=\inputencoding{latin1},
+ after reading=\inputencoding{utf8},
+ ]{data_cp1252.csv}{%...
+ }{% ....
+ }
+% ....
+\end{document}
+\end{dispListing}
+
+\pagebreak\item
+If the \csvsorter\ program is properly installed,
+see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
+the CSV file can be re-encoded on-the-fly
+with the following configuration file for \csvsorter:
+
+\xmllisting{encoding}
+
+\begin{dispListing}
+% !TeX encoding=UTF-8
+% ....
+\usepackage[utf8]{inputenc}
+% ....
+\begin{document}
+% ....
+\csvreader[%...
+ sort by=encoding.xml,
+ ]{data_cp1252.csv}{%...
+ }{% ....
+ }
+% ....
+\end{document}
+\end{dispListing}
+
+
+\end{itemize}
+
+
+\clearpage
+
+\printindex
+
+\end{document}
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-legacy.tex
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-title.png
===================================================================
(Binary files differ)
Index: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-title.png
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-title.png 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-title.png 2021-06-29 19:53:39 UTC (rev 59756)
Property changes on: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple-title.png
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Modified: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple.pdf
===================================================================
(Binary files differ)
Modified: trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple.tex
===================================================================
--- trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple.tex 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/doc/latex/csvsimple/csvsimple.tex 2021-06-29 19:53:39 UTC (rev 59756)
@@ -1,6 +1,6 @@
% \LaTeX-Main\
% !TeX encoding=UTF-8
-%% The LaTeX package csvsimple - version 1.22 (2021/06/07)
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
%% csvsimple.tex: Manual
%%
%% -------------------------------------------------------------------------------------------
@@ -17,106 +17,13 @@
%%
%% This work has the LPPL maintenance status `author-maintained'.
%%
-%% This work consists of all files listed in README
+%% This work consists of all files listed in README.md
%%
\documentclass[a4paper,11pt]{ltxdoc}
+\usepackage{csvsimple-doc}
-\usepackage[T1]{fontenc}
-\usepackage[utf8]{inputenc}
-\usepackage[english]{babel}
-\usepackage{lmodern,parskip,array,ifthen,calc,makeidx}
-\usepackage{amsmath,amssymb}
-\usepackage[svgnames,table,hyperref]{xcolor}
-\usepackage{tikz,siunitx}
-\usepackage{varioref}
-\usepackage[pdftex,bookmarks,raiselinks,pageanchor,hyperindex,colorlinks]{hyperref}
-\urlstyle{sf}
-\usepackage{cleveref}
+\usepackage{\csvpkgprefix csvsimple-legacy}
-\usepackage[a4paper,left=2.5cm,right=2.5cm,top=1.5cm,bottom=1.5cm,
- marginparsep=3mm,marginparwidth=18mm,
- headheight=0mm,headsep=0cm,
- footskip=1.5cm,includeheadfoot]{geometry}
-\usepackage{fancyhdr}
-\fancyhf{}
-\fancyfoot[C]{\thepage}%
-\renewcommand{\headrulewidth}{0pt}
-\renewcommand{\footrulewidth}{0pt}
-\pagestyle{fancy}
-\tolerance=2000%
-\setlength{\emergencystretch}{20pt}%
-
-\RequirePackage{csquotes}
-\RequirePackage[style=numeric-comp,sorting=nyt,
- maxnames=8,minnames=8,abbreviate=false,backend=biber]{biblatex}
-\DeclareFieldFormat{url}{\newline\url{#1}}%
-\DeclareListFormat{language}{}%
-\setlength{\bibitemsep}{\smallskipamount}
-\addbibresource{\jobname.bib}
-
-\usepackage{longtable,booktabs}
-\usepackage{csvsimple}
-
-\usepackage{tcolorbox}
-\tcbuselibrary{skins,xparse,minted,breakable,documentation,raster}
-
-\definecolor{Green_Dark}{rgb}{0.078431,0.407843,0.176471}
-\definecolor{Blue_Dark}{rgb}{0.090196,0.211765,0.364706}
-\definecolor{Blue_Bright}{rgb}{0.858824,0.898039,0.945098}
-
-\tcbset{skin=enhanced,
- minted options={fontsize=\footnotesize},
- doc head={colback=yellow!10!white,interior style=fill},
- doc head key={colback=magenta!5!white,interior style=fill},
- color key=DarkViolet,
- color value=Teal,
- color color=Teal,
- color counter=Orange!85!black,
- color length=Orange!85!black,
- index colorize,
- index annotate,
- beforeafter example/.style={
- before skip=4pt plus 2pt minus 1pt,
- after skip=8pt plus 4pt minus 2pt
- },
- docexample/.style={bicolor,
- beforeafter example,
- arc is angular,fonttitle=\bfseries,
- %fontupper=\tiny\itshape,
- fontlower=\footnotesize,
- %colframe=Blue_Dark,
- %colback=Blue_Bright!75,
- colframe=green!25!yellow!50!black,
- colback=green!25!yellow!7,
- colbacklower=white,
-% drop fuzzy shadow,
- drop fuzzy shadow=green!25!yellow!50!black,
- listing engine=minted,
- documentation minted style=colorful,
- documentation minted options={fontsize=\footnotesize},
- },
-}
-
-\renewcommand*{\tcbdocnew}[1]{\textcolor{green!50!black}{\sffamily\bfseries N} #1}
-\renewcommand*{\tcbdocupdated}[1]{\textcolor{blue!75!black}{\sffamily\bfseries U} #1}
-
-\tcbmakedocSubKey{docCsvKey}{csv}
-
-\NewDocumentCommand{\csvsorter}{}{\textsf{\bfseries\color{red!20!black}CSV-Sorter}}
-
-%\newtcbinputlisting{\csvlisting}[1]{docexample,listing style=tcbdocumentation,fonttitle=\bfseries,
-% listing only,title={CSV file \flqq\texttt{\detokenize{#1.csv}}\frqq},listing file=#1.csv}
-\newtcbinputlisting{\csvlisting}[1]{docexample,minted options={fontsize=\footnotesize},minted language=latex,
- fonttitle=\bfseries,listing only,title={CSV file \flqq\texttt{\detokenize{#1.csv}}\frqq},listing file=#1.csv}
-
-%\newtcbinputlisting{\xmllisting}[1]{docexample,listing options={style=tcbdocumentation,language=XML},
-% fonttitle=\bfseries,listing only,title={Configuration file \flqq\texttt{\detokenize{#1.xml}}\frqq},listing file=#1.xml}
-\newtcbinputlisting{\xmllisting}[1]{docexample,minted options={fontsize=\footnotesize},minted language=xml,
- fonttitle=\bfseries,listing only,title={Configuration file \flqq\texttt{\detokenize{#1.xml}}\frqq},listing file=#1.xml}
-
-\NewTotalTCBox{\verbbox}{m}{enhanced,on line,size=fbox,frame empty,colback=red!5!white,
- colupper=red!85!black,fontupper=\bfseries\ttfamily}{\detokenize{"}#1\detokenize{"}}
-
\hypersetup{
pdftitle={Manual for the csvsimple package},
pdfauthor={Thomas F. Sturm},
@@ -124,11 +31,6 @@
pdfkeywords={csv file, comma separated values, key value syntax}
}
-\def\version{1.22}%
-\def\datum{2021/06/07}%
-\makeindex
-
-
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{center}
@@ -151,1963 +53,123 @@
\begin{absquote}
\begin{center}\bfseries Abstract\end{center}
|csvsimple| provides a simple \LaTeX\ interface for the processing of files with
- comma separated values (CSV). |csvsimple| relies heavily on the key value
- syntax from |pgfkeys| which results (hopefully) in an easy way of usage.
+ comma separated values (CSV). |csvsimple| relies heavily on a key value
+  syntax which makes it easy to use.
Filtering and table generation is especially supported. Since the package
is considered as a lightweight tool, there is no support for data sorting
or data base storage.
\end{absquote}
-\clearpage
-\tableofcontents
+\section{Package Options}%
-\clearpage
-\section{Introduction}%
-The |csvsimple| package is applied to the processing of
-CSV\footnote{CSV file: file with comma separated values.} files.
-This processing is controlled by key value assignments according to the
-syntax of |pgfkeys| \cite{tantau:tikz}. Sample applications of the package
-are tabular lists, serial letters, and charts.
-
-An alternative to |csvsimple| is the |datatool| package \cite{talbot:datatool}
-which provides considerably more functions and allows sorting of data by \LaTeX.
-|csvsimple| has a different approach for the user interface and
-is deliberately restricted to some basic functions with fast
-processing speed.
-
-Mind the following restrictions:
+|csvsimple| is a stub which merely loads exactly one of the
+following packages:
\begin{itemize}
-\item Sorting is not supported directly but can be done
- with external tools, see \Fullref{sec:Sorting}.
-\item Values are expected to be comma separated, but the package
- provides support for other separators, see \Fullref{sec:separators}.
-\item Values are expected to be either not quoted or quoted with
- curly braces |{}| of \TeX\ groups. Other quotes like doublequotes
- are not supported directly, but can be achieved
- with external tools, see \Fullref{sec:importeddata}.
-\item Every data line is expected to contain the same amount of values.
- Unfeasible data lines are silently ignored by default, but this can
- be configured, see \Fullref{sec:consistency}.
-\end{itemize}
-
-
-\subsection{Loading the Package}
-The package |csvsimple| loads the packages
-|pgfkeys| \cite{tantau:tikz},
-|etoolbox| \cite{lehmannwright:etoolbox},
-and |ifthen| \cite{carlisle:2014c}.
-|csvsimple| itself is loaded in the usual manner in the preamble:
+\item \href{csvsimple-l3.pdf}{\flqq The |csvsimple-l3| package\frqq}:\\
+ This is the pure \LaTeX3 version of |csvsimple|. It is considered
+ to be the \emph{current} version.
+ New documents are encouraged to use this package.\par
+ |csvsimple-l3| is loaded with \emph{one} of the following
+ alternatives inside the preamble:
\begin{dispListing}
-\usepackage{csvsimple}
+\usepackage[l3]{csvsimple}
+ % or alternatively (not simultaneously!)
+\usepackage{csvsimple-l3}
\end{dispListing}
-
-Not automatically loaded, but used for many examples are the packages
-|longtable| \cite{carlisle:2014d}
-and
-|booktabs| \cite{fear:2016a}.
-
-
-\clearpage
-\subsection{First Steps}
-Every line of a processable CSV file has to contain an identical amount of
-comma\footnote{See \refKey{/csv/separator} for other separators than comma.} separated values. The curly braces |{}| of \TeX\ groups can be used
-to mask a block which may contain commas not to be processed as separators.
-
-The first line of such a CSV file is usually but not necessarily a header line
-which contains the identifiers for each column.
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{grade.csv}
-name,givenname,matriculation,gender,grade
-Maier,Hans,12345,m,1.0
-Huber,Anna,23456,f,2.3
-Weißbäck,Werner,34567,m,5.0
-Bauer,Maria,19202,f,3.3
-\end{tcbverbatimwrite}
-%-- end embedded file --
-
-\csvlisting{grade}
-
-\smallskip
-The most simple way to display a CSV file in tabular form is the processing
-with the \refCom{csvautotabular} command.
-
-\begin{dispExample}
-\csvautotabular{grade.csv}
-\end{dispExample}
-
-
-Typically, one would use \refCom{csvreader} instead of |\csvautotabular| to
-gain full control over the interpretation of the included data.
-
-In the following example, the entries of the header line are automatically
-assigned to \TeX\ macros which may be used deliberately.
-
-
-\begin{dispExample}
-\begin{tabular}{|l|c|}\hline%
-\bfseries Person & \bfseries Matr.~No.
-\csvreader[head to column names]{grade.csv}{}%
-{\\\givenname\ \name & \matriculation}%
- \\\hline
-\end{tabular}
-\end{dispExample}
-
-
-\clearpage
-|\csvreader| is controlled by a plenty of options. For example, for table
-applications line breaks are easily inserted by
-\refKey{/csv/late after line}. This defines a macro execution just before
-the following line.
-Additionally, the assignment of columns to \TeX\ macros is shown in a non automated
-way.
-
-\begin{dispExample}
-\begin{tabular}{|r|l|c|}\hline%
-& Person & Matr.~No.\\\hline\hline
-\csvreader[late after line=\\\hline]%
- {grade.csv}{name=\name,givenname=\firstname,matriculation=\matnumber}%
- {\thecsvrow & \firstname~\name & \matnumber}%
-\end{tabular}
-\end{dispExample}
-
-\smallskip
-An even more comfortable and preferrable way to create a table is setting
-appropriate option keys. Note, that this gives you the possibility to create a
-|pgfkeys| style which contains the whole table creation.
-
-\begin{dispExample}
-\csvreader[tabular=|r|l|c|,
- table head=\hline & Person & Matr.~No.\\\hline\hline,
- late after line=\\\hline]%
- {grade.csv}{name=\name,givenname=\firstname,matriculation=\matnumber}%
- {\thecsvrow & \firstname~\name & \matnumber}%
-\end{dispExample}
-
-\smallskip
-The next example shows such a style definition with the convenience macro
-\refCom{csvstyle}. Here, we see again the automated assignment of header
-entries to column names by \refKey{/csv/head to column names}.
-For this, the header entries have to be without spaces and special characters.
-But you can always assign entries to canonical macro names by hand like in the examples
-above. Here, we also add a \refKey{/csv/head to column names prefix} to avoid
-macro name clashes.
-
-\begin{dispExample}
-\csvstyle{myTableStyle}{tabular=|r|l|c|,
- table head=\hline & Person & Matr.~No.\\\hline\hline,
- late after line=\\\hline,
- head to column names,
- head to column names prefix=MY,
- }
-
-\csvreader[myTableStyle]{grade.csv}{}%
- {\thecsvrow & \MYgivenname~\MYname & \MYmatriculation}%
-\end{dispExample}
-
-
-\clearpage
-Another way to address columns is to use their roman numbers.
-The direct addressing is done by |\csvcoli|, |\csvcolii|, |\csvcoliii|, \ldots:
-
-\begin{dispExample}
-\csvreader[tabular=|r|l|c|,
- table head=\hline & Person & Matr.~No.\\\hline\hline,
- late after line=\\\hline]%
- {grade.csv}{}%
- {\thecsvrow & \csvcolii~\csvcoli & \csvcoliii}%
-\end{dispExample}
-
-\smallskip
-And yet another method to assign macros to columns is to use arabic numbers
-for the assignment:
-
-\begin{dispExample}
-\csvreader[tabular=|r|l|c|,
- table head=\hline & Person & Matr.~No.\\\hline\hline,
- late after line=\\\hline]%
- {grade.csv}{1=\name,2=\firstname,3=\matnumber}%
- {\thecsvrow & \firstname~\name & \matnumber}%
-\end{dispExample}
-
-\smallskip
-For recurring applications, the |pgfkeys| syntax allows to create own styles
-for a consistent and centralized design. The following example is easily
-modified to obtain more or less option settings.
-
-\begin{dispExample}
-\csvset{myStudentList/.style={%
- tabular=|r|l|c|,
- table head=\hline & Person & #1\\\hline\hline,
- late after line=\\\hline,
- column names={name=\name,givenname=\firstname}
- }}
-
-\csvreader[myStudentList={Matr.~No.}]{grade.csv}{matriculation=\matnumber}%
-{\thecsvrow & \firstname~\name & \matnumber}%
-\hfill%
-\csvreader[myStudentList={Grade}]{grade.csv}{grade=\grade}%
-{\thecsvrow & \firstname~\name & \grade}%
-\end{dispExample}
-
-
-\clearpage
-Alternatively, column names can be set by \refCom{csvnames}
-and style definitions by \refCom{csvstyle}.
-With this, the last example is rewritten as follows:
-
-\begin{dispExample}
-\csvnames{myNames}{1=\name,2=\firstname,3=\matnumber,5=\grade}
-\csvstyle{myStudentList}{tabular=|r|l|c|,
- table head=\hline & Person & #1\\\hline\hline,
- late after line=\\\hline, myNames}
-
-\csvreader[myStudentList={Matr.~No.}]{grade.csv}{}%
-{\thecsvrow & \firstname~\name & \matnumber}%
-\hfill%
-\csvreader[myStudentList={Grade}]{grade.csv}{}%
-{\thecsvrow & \firstname~\name & \grade}%
-\end{dispExample}
-
-\smallskip
-The data lines of a CSV file can also be filtered. In the following example,
-a certificate is printed only for students with grade unequal to 5.0.
-
-\begin{dispExample}
-\csvreader[filter not strcmp={\grade}{5.0}]%
- {grade.csv}{1=\name,2=\firstname,3=\matnumber,4=\gender,5=\grade}%
- {\begin{center}\Large\bfseries Certificate in Mathematics\end{center}
- \large\ifcsvstrcmp{\gender}{f}{Ms.}{Mr.}
- \firstname~\name, matriculation number \matnumber, has passed the test
- in mathematics with grade \grade.\par\ldots\par
- }%
-\end{dispExample}
-
-
-\clearpage
-\section{Macros for the Processing of CSV Files}\label{sec:makros}%
-
-\begin{docCommand}{csvreader}{\oarg{options}\marg{file name}\marg{assignments}\marg{command list}}
- |\csvreader| reads the file denoted by \meta{file name} line by line.
- Every line of the file has to contain an identical amount of
- comma separated values. The curly braces |{}| of \TeX\ groups can be used
- to mask a block which may contain commas not to be processed as separators.\smallskip
-
- The first line of such a CSV file is by default but not necessarily
- processed as a header line which contains the identifiers for each column.
- The entries of this line can be used to give \meta{assignments} to \TeX\ macros
- to address the columns. The number of entries of this first line
- determines the accepted number of entries for all following lines.
- Every line which contains a higher or lower number of entries is ignored
- during standard processing.\smallskip
-
- The \meta{assignments} are given by key value pairs
- \mbox{\meta{name}|=|\meta{macro}}. Here, \meta{name} is an entry from the
- header line \emph{or} the arabic number of the addressed column.
- \meta{macro} is some \TeX\ macro which gets the content of the addressed column.\smallskip
-
- The \meta{command list} is executed for every accepted data line. Inside the
- \meta{command list} is applicable:
- \begin{itemize}
- \item \docAuxCommand{thecsvrow} or the counter |csvrow| which contains the number of the
- current data line (starting with 1).
- \item \docAuxCommand{csvcoli}, \docAuxCommand{csvcolii}, \docAuxCommand{csvcoliii}, \ldots,
- which contain the contents of the column entries of the current data line.
- Alternatively can be used:
- \item \meta{macro} from the \meta{assignments} to have a logical
- addressing of a column entry.
- \end{itemize}
- Note, that the \meta{command list} is allowed to contain |\par| and
- that all macro definitions are made global to be used for table applications.\smallskip
-
- The processing of the given CSV file can be controlled by various
- \meta{options} given as key value list. The feasible option keys
- are described in section \ref{sec:schluessel} from page \pageref{sec:schluessel}.
-
-\begin{dispExample}
-\csvreader[tabular=|r|l|l|, table head=\hline, table foot=\hline]{grade.csv}%
- {name=\name,givenname=\firstname,grade=\grade}%
- {\grade & \firstname~\name & \csvcoliii}
-\end{dispExample}
-
-Mainly, the |\csvreader| command consists of a \refCom{csvloop} macro with
-following parameters:\par
-|\csvloop{|\meta{options}|, file=|\meta{file name}|, column names=|\meta{assignments}|,|\\
- \hspace*{2cm} |command=|\meta{command list}|}|\par
- Therefore, the application of the keys \refKey{/csv/file} and \refKey{/csv/command}
-is useless for |\csvreader|.
-\end{docCommand}
-
-\begin{docCommand}{csvloop}{\marg{options}}
- Usually, \refCom{csvreader} may be preferred instead of |\csvloop|.
- \refCom{csvreader} is based on |\csvloop| which takes a mandatory list of
- \meta{options} in key value syntax.
- This list of \meta{options} controls the total processing. Especially,
- it has to contain the CSV file name.
-\begin{dispExample}
-\csvloop{file={grade.csv}, head to column names, command=\name,
- before reading={List of students:\ },
- late after line={{,}\ }, late after last line=.}
-\end{dispExample}
-\end{docCommand}
-
-\clearpage
-The following |\csvauto...| commands are intended for quick data overview
-with limited formatting potential.
-See Subsection~\ref{subsec:tabsupport} on page \pageref{subsec:tabsupport}
-for the general table options in combination with \refCom{csvreader} and
-\refCom{csvloop}.
-
-\begin{docCommand}{csvautotabular}{\oarg{options}\marg{file name}}
- |\csvautotabular| is an abbreviation for the application of the option key
- \refKey{/csv/autotabular} together with other \meta{options} to \refCom{csvloop}.
- This macro reads the whole CSV file denoted by \meta{file name}
- with an automated formatting.
-\begin{dispExample}
-\csvautotabular{grade.csv}
-\end{dispExample}
-\begin{dispExample}
-\csvautotabular[filter equal={\csvcoliv}{f}]{grade.csv}
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{csvautolongtable}{\oarg{options}\marg{file name}}
- |csvautolongtable| is an abbreviation for the application of the option key
- \refKey{/csv/autolongtable} together with other \meta{options} to \refCom{csvloop}.
- This macro reads the whole CSV file denoted by \meta{file name}
- with an automated formatting.
- For application, the package |longtable| is required which has to be
- loaded in the preamble.
-\begin{dispListing}
-\csvautolongtable{grade.csv}
-\end{dispListing}
-\csvautolongtable{grade.csv}
-\end{docCommand}
-
-\clearpage
-
-\begin{docCommand}{csvautobooktabular}{\oarg{options}\marg{file name}}
- |\csvautotabular| is an abbreviation for the application of the option key
- \refKey{/csv/autobooktabular} together with other \meta{options} to \refCom{csvloop}.
- This macro reads the whole CSV file denoted by \meta{file name}
- with an automated formatting.
- For application, the package |booktabs| is required which has to be
- loaded in the preamble.
-\begin{dispExample}
-\csvautobooktabular{grade.csv}
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{csvautobooklongtable}{\oarg{options}\marg{file name}}
- |csvautobooklongtable| is an abbreviation for the application of the option key
- \refKey{/csv/autobooklongtable} together with other \meta{options} to \refCom{csvloop}.
- This macro reads the whole CSV file denoted by \meta{file name}
- with an automated formatting.
- For application, the packages |booktabs| and |longtable| are required which have to be
- loaded in the preamble.
-\begin{dispListing}
-\csvautobooklongtable{grade.csv}
-\end{dispListing}
-\csvautobooklongtable{grade.csv}
-\end{docCommand}
-
-
-
-\clearpage
-
-\begin{docCommand}{csvset}{\marg{options}}
- Sets \meta{options} for every following
- \refCom{csvreader} and \refCom{csvloop}. For example, this command may
- be used for style definitions.
-\begin{dispExample}
-\csvset{grade list/.style=
- {column names={name=\name,givenname=\firstname,grade=\grade}},
- passed/.style={filter not strcmp={\grade}{5.0}} }
-
-The following students passed the test in mathematics:
-\csvreader[grade list,passed]{grade.csv}{}{\firstname\ \name\ (\grade); }%
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{csvstyle}{\marg{Stilname}\marg{options}}
- Abbreviation for |\csvset{|\meta{style name}|/.style=|\marg{options}|}|
- to define a new style.
-\end{docCommand}
-
-\begin{docCommand}{csvnames}{\marg{Stilname}\marg{Zuweisungsliste}}
- Abbreviation for |\csvset{|\meta{style name}|/.style={column names=|\marg{assignments}|}}|
- to define additional \meta{assignments} of macros to columns.
-\begin{dispExample}
-\csvnames{grade list}{name=\name,givenname=\firstname,grade=\grade}
-\csvstyle{passed}{filter not strcmp={\grade}{5.0}}
-
-The following students passed the test in mathematics:
-\csvreader[grade list,passed]{grade.csv}{}{\firstname\ \name\ (\grade); }%
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{csvheadset}{\marg{assignments}}
- For some special cases, this command can be used to change the
- \meta{assignments} of macros to columns during execution of
- \refCom{csvreader} and \refCom{csvloop}.
-\begin{dispExample}
-\csvreader{grade.csv}{}%
- { \csvheadset{name=\n} \fbox{\n}
- \csvheadset{givenname=\n} \ldots\ \fbox{\n} }%
-\end{dispExample}
-\end{docCommand}
-
-\clearpage
-
-\begin{docCommand}{csviffirstrow}{\marg{then macros}\marg{else macros}}
- Inside the command list of \refCom{csvreader}, the \meta{then macros}
- are executed for the first data line, and the \meta{else macros}
- are executed for all following lines.
-\begin{dispExample}
-\csvreader[tabbing, head to column names, table head=\hspace*{3cm}\=\kill]%
- {grade.csv}{}%
- {\givenname~\name \> (\csviffirstrow{first entry!!}{following entry})}
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{csvifoddrow}{\marg{then macros}\marg{else macros}}
- Inside the command list of \refCom{csvreader}, the \meta{then macros}
- are executed for odd-numbered data lines, and the \meta{else macros}
- are executed for even-numbered lines.
-\begin{dispExample}
-\csvreader[head to column names,tabular=|l|l|l|l|,
- table head=\hline\bfseries \# & \bfseries Name & \bfseries Grade\\\hline,
- table foot=\hline]{grade.csv}{}{%
- \csvifoddrow{\slshape\thecsvrow & \slshape\name, \givenname & \slshape\grade}%
- {\bfseries\thecsvrow & \bfseries\name, \givenname & \bfseries\grade}}
-\end{dispExample}
-
-The |\csvifoddrow| macro may be used for striped tables:
-
-\begin{dispExample}
-% This example needs the xcolor package
-\csvreader[head to column names,tabular=rlcc,
- table head=\hline\rowcolor{red!50!black}\color{white}\# & \color{white}Person
- & \color{white}Matr.~No. & \color{white}Grade,
- late after head=\\\hline\rowcolor{yellow!50},
- late after line=\csvifoddrow{\\\rowcolor{yellow!50}}{\\\rowcolor{red!25}}]%
- {grade.csv}{}%
- {\thecsvrow & \givenname~\name & \matriculation & \grade}%
-\end{dispExample}
-
-\enlargethispage*{1cm}
-Alternatively, |\rowcolors| from the |xcolor| package can be used for this
-purpose:
-
-\begin{dispExample}
-% This example needs the xcolor package
-\csvreader[tabular=rlcc, before table=\rowcolors{2}{red!25}{yellow!50},
- table head=\hline\rowcolor{red!50!black}\color{white}\# & \color{white}Person
- & \color{white}Matr.~No. & \color{white}Grade\\\hline,
- head to column names]{grade.csv}{}%
- {\thecsvrow & \givenname~\name & \matriculation & \grade}%
-\end{dispExample}
-\end{docCommand}
-
-\clearpage
-
-\begin{docCommand}{csvfilteraccept}{}
- All following consistent data lines will be accepted and processed.
- This command overwrites all previous filter settings and may be used
- inside \refKey{/csv/full filter} to implement
- an own filtering rule together with |\csvfilterreject|.
-\begin{dispExample}
-\csvreader[autotabular,
- full filter=\ifcsvstrcmp{\csvcoliv}{m}{\csvfilteraccept}{\csvfilterreject}
- ]{grade.csv}{}{\csvlinetotablerow}%
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{csvfilterreject}{}
- All following data lines will be ignored.
- This command overwrites all previous filter settings.
-\end{docCommand}
-
-
-\begin{docCommand}{csvline}{}
- This macro contains the current and unprocessed data line.
-\begin{dispExample}
-\csvreader[no head, tabbing, table head=\textit{line XX:}\=\kill]%
- {grade.csv}{}{\textit{line \thecsvrow:} \> \csvline}%
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}{thecsvrow}{}
- Typesets the current data line number. This is the
- current number of accepted data lines without the header line.
- The \LaTeX\ counter |csvrow| can be addressed directly in the usual way,
- e.\,g. by |\roman{csvrow}|.
-\end{docCommand}
-
-
-\begin{docCommand}{thecsvinputline}{}
- Typesets the current file line number. This is the
- current number of all data lines including the header line.
- The \LaTeX\ counter |csvinputline| can be addressed directly in the usual way,
- e.\,g. by |\roman{csvinputline}|.
-\begin{dispExample}
-\csvreader[no head, filter test=\ifnumequal{\thecsvinputline}{3}]%
- {grade.csv}{}%
- {The line with number \thecsvinputline\ contains: \csvline}%
-\end{dispExample}
-\end{docCommand}
-
-
-\begin{docCommand}[doc updated=2016-07-01]{csvlinetotablerow}{}
- Typesets the current processed data line with |&| between the entries.
- %Most users will never apply this command.
-\end{docCommand}
-
-
-
-\clearpage
-\section{Option Keys}\label{sec:schluessel}%
-For the \meta{options} in \refCom{csvreader} respectively \refCom{csvloop}
-the following |pgf| keys can be applied. The key tree path |/csv/| is not
-to be used inside these macros.
-
-
-\subsection{Command Definition}%--------%[[
-
-\begin{docCsvKey}{before reading}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed before the CSV file is processed.
-\end{docCsvKey}
-
-\begin{docCsvKey}{after head}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after the header line is read.
-\end{docCsvKey}
-
-\begin{docCsvKey}{before filter}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after reading and consistency checking
- of a data line. They are executed before any filter condition is checked,
- see \refKey{/csv/filter}.
- Also see \refKey{/csv/full filter}.
-\end{docCsvKey}
-
-\begin{docCsvKey}{late after head}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after reading and disassembling
- of the first accepted data line. They are executed before further processing
- of this line.
-\end{docCsvKey}
-
-\begin{docCsvKey}{late after line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after reading and disassembling
- of the next accepted data line (after \refKey{/csv/before filter}).
- They are executed before further processing of this next line.
- |late after line| overwrites |late after first line| and |late after last line|.
- Note that table options like \refKey{/csv/tabular} set this key to |\\|
- automatically.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{late after first line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after reading and disassembling
- of the second accepted data line instead of \refKey{/csv/late after line}.
- This key has to be set after |late after line|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{late after last line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after processing of the last
- accepted data line instead of \refKey{/csv/late after line}.
- This key has to be set after |late after line|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{before line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after \refKey{/csv/late after line}
- and before \refKey{/csv/command}.
- |before line| overwrites |before first line|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{before first line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed instead of \refKey{/csv/before line}
- for the first accepted data line.
- This key has to be set after |before line|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{command}{=\meta{code}}{no default, initially \cs{csvline}}
- Sets the \meta{code} to be executed for every accepted data line.
- They are executed between \refKey{/csv/before line} and \refKey{/csv/after line}.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{after line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed for every accepted data line
- after \refKey{/csv/command}.
- |after line| overwrites |after first line|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{after first line}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed instead of \refKey{/csv/after line}
- for the first accepted data line.
- This key has to be set after |after line|.
-\end{docCsvKey}
-
-\begin{docCsvKey}{after reading}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after the CSV file is processed.
-\end{docCsvKey}
-
-
-\begin{dispExample}
-\csvreader[
- before reading = \meta{before reading}\\,
- after head = \meta{after head},
- before filter = \\\meta{before filter},
- late after head = \meta{late after head},
- late after line = \meta{late after line},
- late after first line = \meta{late after first line},
- late after last line = \\\meta{late after last line},
- before line = \meta{before line},
- before first line = \meta{before first line},
- after line = \meta{after line},
- after first line = \meta{after first line},
- after reading = \\\meta{after reading}
- ]{grade.csv}{name=\name}{\textbf{\name}}%
-\end{dispExample}
-
-Additional command definition keys are provided for the supported tables,
-see Section~\ref{subsec:tabsupport} from page~\pageref{subsec:tabsupport}.
-
-\clearpage
-\subsection{Header Processing and Column Name Assignment}%
-
-\begin{docCsvKey}{head}{\colOpt{=true\textbar false}}{default |true|, initially |true|}
- If this key is set, the first line of the CSV file is treated as a header
- line which can be used for column name assignments.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{no head}{}{no value}
- Abbreviation for |head=false|, i.\,e. the first line of the CSV file is
- treated as data line.
- Note that this option cannot be used in combination with
- \refCom{csvautotabular}, \refKey{/csv/autotabular}, and similar automated commands/options.
- See Section~\ref{noheader} on page~\pageref{noheader} for assistance.
-\end{docCsvKey}
-
-\begin{docCsvKey}{column names}{=\meta{assignments}}{no default, initially empty}
- Adds some new \meta{assignments} of macros to columns in key value syntax.
- Existing assignments are kept.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{column names reset}{}{no value}
- Clears all assignments of macros to columns.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{head to column names}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, the entries of the header line are used automatically
- as macro names for the columns. This option can be used only, if
- the header entries do not contain spaces and special characters to be
- used as feasible \LaTeX\ macro names.
- Note that the macro definition is \emph{global} and may therefore override
- existing macros for the rest of the document. Adding
- \refKey{/csv/head to column names prefix} may help to avoid unwanted
- overrides.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}[][doc new=2019-07-16]{head to column names prefix}{=\meta{text}}{no default, initially empty}
- The given \meta{text} is prefixed to the name of all macros generated by
- \refKey{/csv/head to column names}. For example, if you use the settings
-\begin{dispListing}
- head to column names,
- head to column names prefix=MY,
-\end{dispListing}
- a header entry |section| will generate the corresponding macro
- |\MYsection| instead of destroying the standard \LaTeX\ |\section| macro.
-\end{docCsvKey}
-
-
-\clearpage
-\subsection{Consistency Check}\label{sec:consistency}%
-
-\begin{docCsvKey}{check column count}{\colOpt{=true\textbar false}}{default |true|, initially |true|}
- This key defines, wether the number of entries in a data line is checked against
- an expected value or not.\\
- If |true|, every non consistent line is ignored without announcement.\\
- If |false|, every line is accepted and may produce an error during
- further processing.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{no check column count}{}{no value}
- Abbreviation for |check column count=false|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{column count}{=\meta{number}}{no default}
- Sets the \meta{number} of feasible entries per data line.
- This setting is only useful in connection with \refKey{/csv/no head},
- since \meta{number} would be replaced by the number of entries in the
- header line otherwise.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{on column count error}{=\meta{code}}{no default, initially empty}
- \meta{code} to be executed for unfeasible data lines.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{warn on column count error}{}{style, no value}
- Display of a warning for unfeasible data lines.
-\end{docCsvKey}
-
-
-\clearpage
-\subsection{Filtering}%
-
-\begin{docCsvKey}[][doc new=2016-07-01]{filter test}{=\meta{condition}}{no default}
- Only data lines which fulfill a logical \meta{condition} are accepted.
- For the \meta{condition}, every single test normally employed like
-\begin{dispListing}
-\iftest{some testing}{true}{false}
-\end{dispListing}
- can be used as
-\begin{dispListing}
-filter test=\iftest{some testing},
-\end{dispListing}
- For |\iftest|, tests from the |etoolbox| package \cite{lehmannwright:etoolbox} like
- |\ifnumcomp|, |\ifdimgreater|, etc. and from \Fullref{sec:stringtests} can be used.
-
-\begin{dispExample}
-\csvreader[head to column names,tabular=llll,
- table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
- table foot=\bottomrule,
- %>> list only matriculation numbers greater than 20000 <<
- filter test=\ifnumgreater{\matriculation}{20000},
- ]{grade.csv}{}{%
- \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
-\end{dispExample}
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{filter strcmp}{=\marg{stringA}\marg{stringB}}{style, no default}
- Only lines where \meta{stringA} and \meta{stringB} are equal after expansion
- are accepted.
- The implementation is done with \refCom{ifcsvstrcmp}.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{filter not strcmp}{=\marg{stringA}\marg{stringB}}{style, no default}
- Only lines where \meta{stringA} and \meta{stringB} are not equal after expansion
- are accepted.
- The implementation is done with \refCom{ifcsvnotstrcmp}.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}[][doc new=2016-07-01]{filter expr}{=\meta{condition}}{no default}
- Only data lines which fulfill a logical \meta{condition} are accepted.
- For the \meta{condition}, every boolean expression
- from the |etoolbox| package \cite{lehmannwright:etoolbox} is feasible.
- To preprocess the data line before testing the \meta{condition},
- the option key \refKey{/csv/before filter} can be used.
-\begin{dispExample}
-\csvreader[head to column names,tabular=llll,
- table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
- table foot=\bottomrule,
- %>> list only matriculation numbers greater than 20000
- % and grade less than 4.0 <<
- filter expr={ test{\ifnumgreater{\matriculation}{20000}}
- and test{\ifdimless{\grade pt}{4.0pt}} },
- ]{grade.csv}{}{%
- \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
-\end{dispExample}
-\end{docCsvKey}
-
-\clearpage
-\begin{docCsvKey}[][doc new=2016-07-01]{filter ifthen}{=\meta{condition}}{no default}
- Only data lines which fulfill a logical \meta{condition} are accepted.
- For the \meta{condition}, every term from the |ifthen| \cite{carlisle:2014c} package
- is feasible.
- To preprocess the data line before testing the \meta{condition},
- the option key \refKey{/csv/before filter} can be used.
-
-\begin{dispExample}
-\csvreader[head to column names,tabular=llll,
- table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
- table foot=\bottomrule,
- %>> list only female persons <<
- filter ifthen=\equal{\gender}{f},
- ]{grade.csv}{}{%
- \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
-\end{dispExample}
-
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{filter}{=\meta{condition}}{no default}
- Alias for \refKey{/csv/filter ifthen}.
-\end{docCsvKey}
-
-\begin{docCsvKey}{filter equal}{=\marg{stringA}\marg{stringB}}{style, no default}
- Only lines where \meta{stringA} and \meta{stringB} are equal after expansion
- are accepted.
- The implementation is done with the |ifthen| \cite{carlisle:2014c} package.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{filter not equal}{=\marg{stringA}\marg{stringB}}{style, no default}
- Only lines where \meta{stringA} and \meta{stringB} are not equal after expansion
- are accepted.
- The implementation is done with the |ifthen| \cite{carlisle:2014c} package.
-\end{docCsvKey}
-
-
-
-\begin{docCsvKey}{no filter}{}{no value, initially set}
- Clears a set filter.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{filter accept all}{}{no value, initially set}
- Alias for |no filter|. All consistent data lines are accepted.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{filter reject all}{}{no value}
- All data lines are ignored.
-\end{docCsvKey}
-
-
-
-\enlargethispage*{2cm}
-\begin{docCsvKey}[][doc new=2016-07-01]{full filter}{=\meta{code}}{no default}
- Technically, this key is an alias for \refKey{/csv/before filter}.
- Philosophically, \refKey{/csv/before filter} computes something before
- a filter condition is set, but \refKey{/csv/full filter} should implement
- the full filtering. Especially, \refCom{csvfilteraccept} or
- \refCom{csvfilterreject} \emph{should} be set inside the \meta{code}.
-\begin{dispExample}
-\csvreader[head to column names,tabular=llll,
- table head=\toprule & \bfseries Name & \bfseries Matr & \bfseries Grade\\\midrule,
- table foot=\bottomrule,
- %>> list only matriculation numbers greater than 20000
- % and grade less than 4.0 <<
- full filter=\ifnumgreater{\matriculation}{20000}
- {\ifdimless{\grade pt}{4.0pt}{\csvfilteraccept}{\csvfilterreject}}
- {\csvfilterreject},
- ]{grade.csv}{}{%
- \thecsvrow & \slshape\name, \givenname & \matriculation & \grade}
-\end{dispExample}
-\end{docCsvKey}
-
-
-
-%]]
-
-
-\clearpage
-\subsection{Table Support}\label{subsec:tabsupport}%--------%[[
-
-\begin{docCsvKey}{tabular}{=\meta{table format}}{style, no default}
- Surrounds the CSV processing with |\begin{tabular}|\marg{table format}
- at begin and with |\end{tabular}| at end.
-Additionally, the commands defined by the key values of
- \refKey{/csv/before table}, \refKey{/csv/table head}, \refKey{/csv/table foot},
- and \refKey{/csv/after table} are executed at the appropriate places.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{centered tabular}{=\meta{table format}}{style, no default}
- Like \refKey{/csv/tabular} but inside an additional |center| environment.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{longtable}{=\meta{table format}}{style, no default}
- Like \refKey{/csv/tabular} but for the |longtable| environment.
- This requires the package |longtable| (not loaded automatically).
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{tabbing}{}{style, no value}
- Like \refKey{/csv/tabular} but for the |tabbing| environment.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{centered tabbing}{}{style, no value}
- Like \refKey{/csv/tabbing} but inside an additional |center| environment.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{no table}{}{style, no value}
- Deactivates |tabular|, |longtable|, and |tabbing|.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{before table}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed before |\begin{tabular}| or before |\begin{longtable}|
- or before |\begin{tabbing}|, respectively.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{table head}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after |\begin{tabular}| or after |\begin{longtable}|
- or after |\begin{tabbing}|, respectively.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{table foot}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed before |\end{tabular}| or before |\end{longtable}|
- or before |\end{tabbing}|, respectively.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{after table}{=\meta{code}}{no default, initially empty}
- Sets the \meta{code} to be executed after |\end{tabular}| or after |\end{longtable}|
- or after |\end{tabbing}|, respectively.
-\end{docCsvKey}
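-
-\medskip
-A sketch (not part of the original key descriptions) of how these four hooks
-interact; it roughly mirrors what the \refKey{/csv/centered tabular} style does:
-\begin{dispListing}
-% needs the booktabs package for \toprule etc.
-\csvreader[head to column names,
-    before table=\begin{center},
-    tabular=llll,
-    table head=\toprule Name & Given Name & Matr & Grade\\\midrule,
-    table foot=\bottomrule,
-    after table=\end{center},
-  ]{grade.csv}{}{\name & \givenname & \matriculation & \grade}
-\end{dispListing}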
-
-\bigskip
-
-The following |auto| options are the counterparts for the respective quick
-overview commands like \refCom{csvautotabular}. They are listed for
-completeness, but are unlikely to be used directly.
-
-\begin{docCsvKey}{autotabular}{=\meta{file name}}{no default}
-  Reads the whole CSV file denoted by \meta{file name} with an automated formatting.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{autolongtable}{=\meta{file name}}{no default}
-  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
- using the required |longtable| package.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{autobooktabular}{=\meta{file name}}{no default}
-  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
- using the required |booktabs| package.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{autobooklongtable}{=\meta{file name}}{no default}
-  Reads the whole CSV file denoted by \meta{file name} with an automated formatting
- using the required |booktabs| and |longtable| packages.
-\end{docCsvKey}
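-
-For example, |\csvautobooktabular{grade.csv}| is (roughly) equivalent to the
-following direct use of \refCom{csvloop}:
-\begin{dispListing}
-\csvloop{autobooktabular={grade.csv}}
-\end{dispListing}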
-
-
-\clearpage
-\subsection{Special Characters}\label{subsec:specchar}
-By default, the CSV content is treated like normal \LaTeX\ text, see
-Subsection~\ref{macrocodexample} on page~\pageref{macrocodexample}.
-However, \TeX\ special characters of the CSV content may also be interpreted
-as normal characters, if one or more of the following options are used.
-
-\begin{docCsvKey}{respect tab}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- tabulator sign
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect percent}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- percent sign \verbbox{\%}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect sharp}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- sharp sign \verbbox{\#}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect dollar}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- dollar sign \verbbox{\$}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect and}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- and sign \verbbox{\&}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect backslash}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- backslash sign \verbbox{\textbackslash}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect underscore}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- underscore sign \verbbox{\_}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect tilde}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- tilde sign \verbbox{\textasciitilde}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect circumflex}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- circumflex sign \verbbox{\textasciicircum}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect leftbrace}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- left brace sign \verbbox{\textbraceleft}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect rightbrace}{\colOpt{=true\textbar false}}{default |true|, initially |false|}
- If this key is set, every
- right brace sign \verbbox{\textbraceright}
- inside the CSV content is a normal character.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect all}{}{style, no value, initially unset}
-  Sets all special characters from above to normal characters. This results in
-  a mostly verbatim interpretation of the CSV content.
-\end{docCsvKey}
-
-\begin{docCsvKey}{respect none}{}{style, no value, initially set}
-  Does not change any of the special characters from above to normal characters.
-\end{docCsvKey}
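-
-A small sketch: for a file whose entries contain raw |_| and |%| characters
-(the file name |accounts.csv| is only illustrative), one may write
-\begin{dispListing}
-\csvautobooktabular[respect underscore,respect percent]{accounts.csv}
-% or, for a mostly verbatim interpretation of all entries:
-\csvautobooktabular[respect all]{accounts.csv}
-\end{dispListing}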
-
-\clearpage
-\subsection{Separators}\label{sec:separators}%
-\begin{docCsvKey}{separator}{=\meta{sign}}{no default, initially |comma|}
- \catcode `|=12
-  Sets the \meta{sign} which is treated as separator between the data values
- of a data line. Feasible values are:
- \begin{itemize}
- \item\docValue{comma}: This is the initial value with '\texttt{,}' as separator.
- \medskip
-
- \item\docValue{semicolon}: Sets the separator to '\texttt{;}'.
-\begin{dispExample}
-% \usepackage{tcolorbox} for tcbverbatimwrite
-\begin{tcbverbatimwrite}{testsemi.csv}
- name;givenname;matriculation;gender;grade
- Maier;Hans;12345;m;1.0
- Huber;Anna;23456;f;2.3
- Weißbäck;Werner;34567;m;5.0
-\end{tcbverbatimwrite}
-
-\csvautobooktabular[separator=semicolon]{testsemi.csv}
-\end{dispExample}
\medskip
-\item\docValue{pipe}: Sets the separator to '\texttt{|}'.
-\begin{dispExample}
-% \usepackage{tcolorbox} for tcbverbatimwrite
-\begin{tcbverbatimwrite}{pipe.csv}
- name|givenname|matriculation|gender|grade
- Maier|Hans|12345|m|1.0
- Huber|Anna|23456|f|2.3
- Weißbäck|Werner|34567|m|5.0
-\end{tcbverbatimwrite}
-
-\csvautobooktabular[separator=pipe]{pipe.csv}
-\end{dispExample}
-\medskip
-
-\item\docValue{tab}: Sets the separator to the tabulator sign.
-  In this case, \refKey{/csv/respect tab} is set automatically;
-  see the sketch below.
- \end{itemize}
-\end{docCsvKey}
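-
-A sketch for reading a tab-separated file (the file name |tabbed.csv| is only
-illustrative):
-\begin{dispListing}
-\csvautobooktabular[separator=tab]{tabbed.csv}
-\end{dispListing}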
-
-\clearpage
-\subsection{Miscellaneous}%
-
-\begin{docCsvKey}{every csv}{}{style, initially empty}
- A style definition which is used for every following CSV file.
- This definition can be overwritten with user code.
+\item \href{csvsimple-legacy.pdf}{\flqq The |csvsimple-legacy| package\frqq}:\\
+  This is the \LaTeXe{} version of |csvsimple|. It is considered
+  to be the \emph{superseded} version and is identical to version 1.22 of |csvsimple|.
+  Documents based on that former version do \emph{not have to be changed}
+  and stay compilable in the future.\par
+ |csvsimple-legacy| is loaded with \emph{one} of the following
+ alternatives inside the preamble:
\begin{dispListing}
-% Sets a warning message for unfeasible data lines.
-\csvset{every csv/.style={warn on column count error}}
-% Alternatively:
-\csvstyle{every csv}{warn on column count error}
+\usepackage{csvsimple}
+ % or alternatively (not simultaneously!)
+\usepackage[legacy]{csvsimple}
+ % or alternatively (not simultaneously!)
+\usepackage{csvsimple-legacy}
\end{dispListing}
-\end{docCsvKey}
-
-\begin{docCsvKey}{default}{}{style}
-  A style definition which is used for every following CSV file and which
-  resets all settings to their default values\footnote{\texttt{default} is used
-  because of the global nature of most settings.}.
-  This key should not be used or changed by the user unless there is a
-  really good reason (and you know what you are doing).
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{file}{=\meta{file name}}{no default, initially |unknown.csv|}
- Sets the \meta{file name} of the CSV file to be processed.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{preprocessed file}{=\meta{file name}}{no default, initially \texttt{\textbackslash\detokenize{jobname_sorted.csv}}}
- Sets the \meta{file name} of the CSV file which is the output of a
- preprocessor.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{preprocessor}{=\meta{macro}}{no default}
- Defines a preprocessor for the given CSV file.
- The \meta{macro} has to have two mandatory arguments. The first argument
- is the original CSV file which is set by \refKey{/csv/file}.
- The second argument is the preprocessed CSV file
- which is set by \refKey{/csv/preprocessed file}.\par\smallskip
- Typically, the \meta{macro} may call an external program which preprocesses
- the original CSV file (e.\,g. sorting the file) and creates the
-  preprocessed CSV file. The latter file is used by \refCom{csvreader}
- or \refCom{csvloop}.
-\begin{dispListing}
-\newcommand{\mySortTool}[2]{%
- % call to an external program to sort file #1 with resulting file #2
-}
-
-\csvreader[%
- preprocessed file=\jobname_sorted.csv,
- preprocessor=\mySortTool,
- ]{some.csv}{}{%
- % do something
-}
-\end{dispListing}
-See Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting} for a
-concrete example of sorting by preprocessing with an external tool.
-\end{docCsvKey}
-
-
-\begin{docCsvKey}{no preprocessing}{}{style, no value, initially set}
-  Clears any preprocessing, i.\,e. preprocessing is switched off.
-\end{docCsvKey}
-
-
-
-\clearpage
-\subsection{Sorting}\label{sec:Sorting}%
-\TeX/\LaTeX\ was not born under a sorting planet. |csvsimple| provides no
-sorting of data lines by \LaTeX-methods since sorting can be done much faster
-and much better by external tools.
-
-First, one should consider the appropriate \emph{place} for sorting:
-\begin{itemize}
-\item CSV files may be sorted by a tool \emph{before} the \LaTeX\ document is processed
- at all. If the CSV data is not likely to change, this is the most efficient method.
-\item CSV files may be sorted by a tool every time before the \LaTeX\ document is compiled.
- This could be automated by a shell script or some processing tool like |arara|.
-\item CSV files may be sorted on-the-fly by a tool during compilation of
- a \LaTeX\ document. This is the most elegant but not the most efficient way.
\end{itemize}
-The first two methods are decoupled from anything concerning |csvsimple|.
-The third method is what the \refKey{/csv/preprocessor} option is made for.
-It allows an external tool to be accessed for sorting.
-\emph{Which tool} is your choice.
-\csvsorter\ \cite{sturm:csvsorter}
-was written as a companion tool for |csvsimple|.
-It is an open source Java command-line tool for sorting CSV files, available at\\
-\url{http://T-F-S.github.io/csvsorter/}\quad or\quad
-\url{https://github.com/T-F-S/csvsorter}
-
-It can be
-used for all three sorting approaches described above.
-There is special support for on-the-fly sorting with \csvsorter\ using the
-following options.
-
-\begin{enumerate}\bfseries
-\item To use the sorting options, you have to install \csvsorter\ \cite{sturm:csvsorter} beforehand!\\
-  |csvsimple| v1.12 or newer needs \csvsorter\ v0.94 or newer!
-\item You have to give permission to call external tools during
- compilation, i.\,e.\ the command-line options for |latex| have to include
- |-shell-escape|.
-\end{enumerate}
-
-\bigskip
-
-\begin{docCsvKey}{csvsorter command}{=\meta{system command}}{no default, initially |csvsorter|}
- The \meta{system command} specifies the system call for \csvsorter\ (without the options).
- If \csvsorter\ was completely installed following its documentation, there is
- nothing to change here. If the |csvsorter.jar| file is inside the same
-  directory as the \LaTeX\ source file, you may configure:% preferably inside the preamble:
-\begin{dispListing}
-\csvset{csvsorter command=java -jar csvsorter.jar}
-\end{dispListing}
-\end{docCsvKey}
-
-\begin{docCsvKey}{csvsorter configpath}{=\meta{path}}{no default, initially |.|}
- Sorting with \csvsorter\ is done using XML configuration files. If these files
- are not stored inside the same directory as the \LaTeX\ source file, a
- \meta{path} to access them can be configured:
-\begin{dispListing}
-\csvset{csvsorter configpath=xmlfiles}
-\end{dispListing}
- Here, the configuration files would be stored in a subdirectory named |xmlfiles|.
-\end{docCsvKey}
-
-\begin{docCsvKey}{csvsorter log}{=\meta{file name}}{no default, initially |csvsorter.log|}
- Sets the log file of \csvsorter\ to the given \meta{file name}.
-\begin{dispListing}
-\csvset{csvsorter log=outdir/csvsorter.log}
-\end{dispListing}
- Here, the log file is written to a subdirectory named |outdir|.
-\end{docCsvKey}
-
\clearpage
-\begin{docCsvKey}{csvsorter token}{=\meta{file name}}{no default, initially |\textbackslash jobname.csvtoken|}
- Sets \meta{file name} as token file. This is an auxiliary file which
- communicates the success of \csvsorter\ to |csvsimple|.
-\begin{dispListing}
-\csvset{csvsorter token=outdir/\jobname.csvtoken}
-\end{dispListing}
- Here, the token file is written to a subdirectory named |outdir|.
-\end{docCsvKey}
+\section{Differences between \texttt{csvsimple-l3} and \texttt{csvsimple-legacy}}
+This section is intended for users who know |csvsimple| before version~2.00.
+|csvsimple-l3| is a \emph{nearly} drop-in replacement for
+|csvsimple-legacy|. Although old documents have no \emph{need} to be changed,
+adopting the new \LaTeX3 version for existing documents should not require
+too much effort. Actually, it depends on how intensively |pgfkeys|-specific
+styles were used.
-\begin{docCsvKey}{sort by}{=\meta{file name}}{style, initially unset}
- The \meta{file name} denotes an XML configuration file for \csvsorter.
- Setting this option inside \refCom{csvreader} or
- \refCom{csvloop} will issue a system call to \csvsorter.
- \begin{itemize}
- \item \csvsorter\ uses the given CSV file as input file.
- \item \csvsorter\ uses \meta{file name} as configuration file.
- \item The output CSV file is denoted by \refKey{/csv/preprocessed file}
- which is by default \texttt{\textbackslash\detokenize{jobname_sorted.csv}}.
-        This output file is the actual file processed by \refCom{csvreader} or \refCom{csvloop}.
- \item \csvsorter\ also generates a log file denoted by \refKey{/csv/csvsorter log} which is by default |csvsorter.log|.
- \end{itemize}
+That brings us to the differences between the two packages and a more precise
+understanding of what \emph{nearly} drop-in replacement means. The following enumeration
+does not list new features of \texttt{csvsimple-l3} (if any), but takes an
+upgrade point of view.
-\par\medskip\textbf{First example:}
- To sort our example |grade.csv| file according to |name| and |givenname|, we
- use the following XML configuration file. Since \csvsorter\ uses double quotes
- as default brackets for column values, we remove bracket recognition to avoid
- a clash with the escaped umlauts of the example CSV file.\par\smallskip
-
-\xmllisting{namesort}
-\begin{dispExample}
-% \usepackage{booktabs}
-\csvreader[sort by=namesort.xml,
- head to column names,
- tabular=>{\color{red}}lllll,
- table head=\toprule Name & Given Name & Matriculation & Gender & Grade\\\midrule,
- table foot=\bottomrule]
- {grade.csv}{}{\csvlinetotablerow}
-\end{dispExample}
-
-\clearpage\textbf{Second example:}
- To sort our example |grade.csv| file according to |grade|, we
- use the following XML configuration file. Further, persons with the same |grade|
- are sorted by |name| and |givenname|. Since \csvsorter\ uses double quotes
- as default brackets for column values, we remove bracket recognition to avoid
- a clash with the escaped umlauts of the example CSV file.\par\smallskip
-
-\xmllisting{gradesort}
-\begin{dispExample}
-% \usepackage{booktabs}
-\csvreader[sort by=gradesort.xml,
- head to column names,
- tabular=llll>{\color{red}}l,
- table head=\toprule Name & Given Name & Matriculation & Gender & Grade\\\midrule,
- table foot=\bottomrule]
- {grade.csv}{}{\csvlinetotablerow}
-\end{dispExample}
-
-\clearpage\textbf{Third example:}
- To generate a matriculation/grade list, we sort our example |grade.csv| file
- using the following XML configuration file.
- Again, since \csvsorter\ uses double quotes
- as default brackets for column values, we remove bracket recognition to avoid
- a clash with the escaped umlauts of the example CSV file.\par\smallskip
-
-\xmllisting{matriculationsort}
-\begin{dispExample}
-% \usepackage{booktabs}
-\csvreader[sort by=matriculationsort.xml,
- head to column names,
- tabular=>{\color{red}}ll,
- table head=\toprule Matriculation & Grade\\\midrule,
- table foot=\bottomrule]
- {grade.csv}{}{\matriculation & \grade}
-\end{dispExample}
-\end{docCsvKey}
-
-
-\clearpage
-\begin{docCsvKey}{new sorting rule}{=\marg{name}\marg{file name}}{style, initially unset}
-This is a convenience option to generate a new shortcut for often used
-\refKey{/csv/sort by} applications. It also adds a more semantic touch.
-The new shortcut option is
-\tcbox[on line,size=small,colback=white,colframe=red]{|sort by| \meta{name}} which expands to
-\tcbox[on line,size=small,colback=white,colframe=red]{|sort by=|\marg{file name}}.\par\medskip
-
-Consider the following example:
-\begin{dispExample}
-\csvautotabular[sort by=namesort.xml]{grade.csv}
-\end{dispExample}
-A good place for setting up a new sorting rule would be inside the preamble:
-
-\csvset{new sorting rule={name}{namesort.xml}}
-\begin{dispListing}
-\csvset{new sorting rule={name}{namesort.xml}}
-\end{dispListing}
-
-Now, we can use the new rule:
-\begin{dispExample}
-\csvautotabular[sort by name]{grade.csv}
-\end{dispExample}
-
-\end{docCsvKey}
-
-
-\clearpage
-\section{String Tests}\label{sec:stringtests}%
-
-The following string tests complement the string tests
-from the |etoolbox| \cite{lehmannwright:etoolbox} package. They all do the same, i.e.,
-compare expanded strings for equality.
\begin{itemize}
-\item\refCom{ifcsvstrcmp} is the most efficient method, because it uses
- native compiler string comparison (if available).
-\item\refCom{ifcsvstrequal} does not rely on a compiler. It also is the
- fallback implementation for \refCom{ifcsvstrcmp}, if there is no
- native comparison method.
-\item\refCom{ifcsvprostrequal} is possibly more failsafe than the other two
- string tests. It may be used, if strings contain dirty things like |\textbf{A}|.
+\item Any patches or additions using undocumented internals of |csvsimple-legacy|
+  will stop functioning, because |csvsimple-l3| has a completely new implementation.
+\item |csvsimple-l3| is programmed in |expl3| code using the \LaTeX3 interfaces.
+  No additional packages are loaded or needed, with the exception of several options
+  which allow access to methods from |ifthen|, |etoolbox|, |longtable|, etc.
+  On the other hand, |csvsimple-legacy| is programmed in \LaTeXe{} with
+  dirty tricks from here and there.
+\item The most significant change of the user interface is that the key value
+  engine of |csvsimple-legacy| is |pgfkeys| (root \docAuxKey*[csv]{}) while |csvsimple-l3| uses
+  |l3keys| (root \docAuxKey*[csvsim]{}).
+  Names and usage of the keys are \emph{unchanged}.
+  But, if you made your own |pgfkeys| \emph{styles} using the |pgfkeys| style handler,
+  these \emph{styles} have to be adapted to |.meta| keys of |l3keys|;
+  see the sketch following this list.
+  The good news is that styles
+  made with \docAuxCommand*{csvstyle} become |.meta| keys automatically.
+\item The macro \docAuxCommand*{csvheadset} is removed. It is not supportable
+  by the new implementation. I never used it and I forgot why I ever wrote it
+  -- I hope the same is true for you. If not, |csvsimple-legacy| can be
+  used for documents which need it.
+\item Option \docAuxKey*[csv]{filter} is removed. Instead, \docAuxKey*[csvsim]{filter ifthen}
+  can be used (\docAuxKey*[csv]{filter ifthen} is also available in the old version).
+\item The deprecated options
+  \docAuxKey*[csv]{nofilter} and \docAuxKey*[csv]{nohead} are removed.
+  They have not been documented for years. Use
+  \docAuxKey*[csvsim]{no filter} and \docAuxKey*[csvsim]{no head} instead.
+\item Compilation problems are to be expected if an |S| column of the |siunitx| package
+  is used as the first or last column. Documents which neglected this rule successfully
+  with |csvsimple-legacy| may fail to compile with |csvsimple-l3|.
+\item The \LaTeX{} counters \docCounter*{csvinputline}
+ and \docCounter*{csvrow}
+ are replaced by \LaTeX3 integers
+ \docCounter*{g_csvsim_inputline_int}
+ and \docCounter*{g_csvsim_row_int}, but accessors
+ \docAuxCommand*{thecsvinputline} and
+ \docAuxCommand*{thecsvrow} are still valid.
+\item The packages |pgfrcs|, |pgfkeys|, |ifthen|, |etoolbox|, and |shellesc|
+  are not loaded anymore (load them manually, if needed).
+\item
+  \docAuxCommand*{csviffirstrow} and
+  \docAuxCommand*{csvifoddrow} are deprecated and replaced by
+  \docAuxCommand*{ifcsvfirstrow} and
+  \docAuxCommand*{ifcsvoddrow},
+  which are more consistent in nomenclature.
+\item For |csvsimple-l3|, data lines are allowed to begin with a backslash.
+\item Assigned macros like |\myname| for, e.g., the third column no longer contain
+  |\csvcoliii|, but are now equal to the content of |\csvcoliii|.
+\item Character code changes with \docAuxKey*[csvsim]{respect percent} etc.
+  and the tabulator as separator should work for |csvsimple-l3| as expected in every
+  situation (which did not always work for |csvsimple-legacy|).
+\item A drawback of |csvsimple-l3| compared to |csvsimple-legacy| is
+  a higher compilation time. This may vary with the compiler used.
+ An example document of 5061 pages using a CSV file with 166 992 lines
+ took about 28 seconds with |csvsimple-legacy| and
+ about 51 seconds with |csvsimple-l3| on my machine
+ (just a singular observation, no scientific analysis at all).
\end{itemize}
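+
+As a minimal sketch of such a style adaptation (the style name |my table| and
+the shown keys are only illustrative, not part of either package):
+\begin{dispListing}
+% csvsimple-legacy (pgfkeys): user style defined via the '/.style' handler
+\csvset{my table/.style={head to column names,tabular=llll}}
+% csvsimple-l3 (l3keys): \csvstyle defines the corresponding '.meta' key
+\csvstyle{my table}{head to column names,tabular=llll}
+% usage is identical in both versions:
+%   \csvreader[my table]{grade.csv}{}{...}
+\end{dispListing}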
-\medskip
-\begin{docCommand}[doc new=2016-07-01]{ifcsvstrcmp}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
- Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
- The comparison is done using |\pdfstrcmp|, if compilation is done with pdf\LaTeX.
-  The comparison is done using |\pdf@strcmp|, if the package |pdftexcmds| is
- loaded and compilation is done with lua\LaTeX\ or Xe\LaTeX.
- Otherwise, \refCom{ifcsvstrcmp} is identical to \refCom{ifcsvstrequal}.
- This command cannot be used inside the preamble.
-\end{docCommand}
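-
-For example (a sketch, not from the original description), \refCom{ifcsvstrcmp}
-can be used directly inside \refKey{/csv/filter test}:
-\begin{dispListing}
-filter test=\ifcsvstrcmp{\gender}{f},
-\end{dispListing}
-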
-\begin{docCommand}[doc new=2016-07-01]{ifcsvnotstrcmp}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
- Compares two strings and executes \meta{true} if they are \emph{not} equal, and \meta{false} otherwise.
- The implementation uses \refCom{ifcsvstrcmp}.
-\end{docCommand}
-
-\begin{docCommand}[doc new=2016-07-01]{ifcsvstrequal}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
- Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
- The strings are expanded with |\edef| in the test.
-\end{docCommand}
-
-\begin{docCommand}[doc new=2016-07-01]{ifcsvprostrequal}{\marg{stringA}\marg{stringB}\marg{true}\marg{false}}
- Compares two strings and executes \meta{true} if they are equal, and \meta{false} otherwise.
- The strings are expanded with |\protected at edef| in the test, i.e. parts of the
- strings which are protected stay unexpanded.
-\end{docCommand}
-
-
-
-\clearpage
-\section{Examples}%
-
-\subsection{A Serial Letter}%
-In this example, a serial letter is to be written to all persons with
-addresses from the following CSV file. Deliberately, the file content is
-not given in very pretty format.
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{address.csv}
-name,givenname,gender,degree,street,zip,location,bonus
-Maier,Hans,m,,Am Bachweg 17,10010,Hopfingen,20
- % next line with a comma in curly braces
-Huber,Erna,f,Dr.,{Moosstraße 32, Hinterschlag},10020,Örtingstetten,30
-Weißbäck,Werner,m,Prof. Dr.,Brauallee 10,10030,Klingenbach,40
- % this line is ignored %
- Siebener , Franz,m, , Blaumeisenweg 12 , 10040 , Pardauz , 50
- % preceding and trailing spaces in entries are removed %
-Schmitt,Anton,m,,{\AE{}lfred-Esplanade, T\ae{}g 37}, 10050,\OE{}resung,60
-\end{tcbverbatimwrite}
-%-- end embedded file --
-
-\csvlisting{address}
-
-Firstly, we survey the file content quickly using
-|\csvautotabular|.
-As can be seen, unfeasible lines are ignored automatically.
-
-\begin{dispExample}
-\tiny\csvautotabular{address.csv}
-\end{dispExample}
-
-Now, we create the serial letter where every feasible data line produces
-a page of its own. Here, we simulate the page by a |tcolorbox| (from the package
-|tcolorbox|).
-For the gender specific salutations, an auxiliary macro |\ifmale| is
-introduced.
-
-\begin{dispExample}
-% this example requires the tcolorbox package
-\newcommand{\ifmale}[2]{\ifcsvstrcmp{\gender}{m}{#1}{#2}}
-
-\csvreader[head to column names]{address.csv}{}{%
-\begin{tcolorbox}[colframe=DarkGray,colback=White,arc=0mm,width=(\linewidth-2pt)/2,
- equal height group=letter,before=,after=\hfill,fonttitle=\bfseries,
- adjusted title={Letter to \name}]
- \ifcsvstrcmp{\degree}{}{\ifmale{Mr.}{Ms.}}{\degree}~\givenname~\name\\
- \street\\\zip~\location
- \tcblower
- {\itshape Dear \ifmale{Sir}{Madam},}\\
- we are pleased to announce you a bonus value of \bonus\%{}
- which will be delivered to \location\ soon.\\\ldots
-\end{tcolorbox}}
-\end{dispExample}
-
-
-
-\clearpage
-\subsection{A Graphical Presentation}%
-For this example, we use some artificial statistical data given by a CSV file.
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{data.csv}
-land,group,amount
-Bayern,A,1700
-Baden-Württemberg,A,2300
-Sachsen,B,1520
-Thüringen,A,1900
-Hessen,B,2100
-\end{tcbverbatimwrite}
-%-- end embedded file --
-
-\csvlisting{data}
-
-Firstly, we survey the file content using
-|\csvautobooktabular|.
-
-\begin{dispExample}
-% needs the booktabs package
-\csvautobooktabular{data.csv}
-\end{dispExample}
-
-The amount values are presented in the following diagram by bars where
-the group classification is given using different colors.
-
-\begin{dispExample}
-% This example requires the package tikz
-\begin{tikzpicture}[Group/A/.style={left color=red!10,right color=red!20},
- Group/B/.style={left color=blue!10,right color=blue!20}]
-\csvreader[head to column names]{data.csv}{}{%
- \begin{scope}[yshift=-\thecsvrow cm]
- \path [draw,Group/\group] (0,-0.45)
- rectangle node[font=\bfseries] {\amount} (\amount/1000,0.45);
- \node[left] at (0,0) {\land};
- \end{scope} }
-\end{tikzpicture}
-\end{dispExample}
-
-
-\clearpage
-It would be nice to sort the bars by length, i.\,e.\ to sort the CSV file
-by the |amount| column. If the \csvsorter\ program is properly installed,
-see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
-this can be done with the following configuration file for \csvsorter:
-
-\xmllisting{amountsort}
-
-Now, we just have to add an option |sort by=amountsort.xml|:
-\begin{dispExample}
-% This example requires the package tikz
-% Also, the CSV-Sorter tool has to be installed
-\begin{tikzpicture}[Group/A/.style={left color=red!10,right color=red!20},
- Group/B/.style={left color=blue!10,right color=blue!20}]
-\csvreader[head to column names,sort by=amountsort.xml]{data.csv}{}{%
- \begin{scope}[yshift=-\thecsvrow cm]
- \path [draw,Group/\group] (0,-0.45)
- rectangle node[font=\bfseries] {\amount} (\amount/1000,0.45);
- \node[left] at (0,0) {\land};
- \end{scope} }
-\end{tikzpicture}
-\end{dispExample}
-
-
-
-
-\clearpage
-Next, we create a pie chart by calling |\csvreader| twice.
-In the first step, the total sum of amounts is computed, and in the second
-step the slices are drawn.
-
-\begin{dispExample}
-% Modified example from www.texample.net for pie charts
-% This example needs the packages tikz, xcolor, calc
-\definecolorseries{myseries}{rgb}{step}[rgb]{.95,.85,.55}{.17,.47,.37}
-\resetcolorseries{myseries}%
-
-% a pie slice
-\newcommand{\slice}[4]{
- \pgfmathsetmacro{\midangle}{0.5*#1+0.5*#2}
- \begin{scope}
- \clip (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;
- \colorlet{SliceColor}{myseries!!+}%
- \fill[inner color=SliceColor!30,outer color=SliceColor!60] (0,0) circle (1cm);
- \end{scope}
- \draw[thick] (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;
- \node[label=\midangle:#4] at (\midangle:1) {};
- \pgfmathsetmacro{\temp}{min((#2-#1-10)/110*(-0.3),0)}
- \pgfmathsetmacro{\innerpos}{max(\temp,-0.5) + 0.8}
- \node at (\midangle:\innerpos) {#3};
-}
-
-% sum of amounts
-\csvreader[before reading=\def\mysum{0}]{data.csv}{amount=\amount}{%
- \pgfmathsetmacro{\mysum}{\mysum+\amount}%
-}
-
-% drawing of the pie chart
-\begin{tikzpicture}[scale=3]%
-\def\mya{0}\def\myb{0}
-\csvreader[head to column names]{data.csv}{}{%
- \let\mya\myb
- \pgfmathsetmacro{\myb}{\myb+\amount}
- \slice{\mya/\mysum*360}{\myb/\mysum*360}{\amount}{\land}
-}
-\end{tikzpicture}%
-\end{dispExample}
-
-
-\clearpage
-Finally, the filter option is demonstrated by separating the groups A and B.
-Every item is piled upon the appropriate stack.
-
-\begin{dispExample}
-\newcommand{\drawGroup}[2]{%
- \def\mya{0}\def\myb{0}
- \node[below=3mm] at (2.5,0) {\bfseries Group #1};
- \csvreader[head to column names,filter equal={\group}{#1}]{data.csv}{}{%
- \let\mya\myb
- \pgfmathsetmacro{\myb}{\myb+\amount}
- \path[draw,top color=#2!25,bottom color=#2!50]
- (0,\mya/1000) rectangle node{\land\ (\amount)} (5,\myb/1000);
-}}
-
-\begin{tikzpicture}
- \fill[gray!75] (-1,0) rectangle (13,-0.1);
- \drawGroup{A}{red}
- \begin{scope}[xshift=7cm]
- \drawGroup{B}{blue}
- \end{scope}
-\end{tikzpicture}
-
-\end{dispExample}
-
-
-\clearpage
-\subsection{Macro code inside the data}\label{macrocodexample}%
-
-If needed, the data file may contain macro code. Note that the first character
-of a data line is not allowed to be the backslash '|\|'.
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{macrodata.csv}
-type,description,content
-M,A nice \textbf{formula}, $\displaystyle \int\frac{1}{x} = \ln|x|+c$
-G,A \textcolor{red}{colored} ball, {\tikz \shadedraw [shading=ball] (0,0) circle (.5cm);}
-M,\textbf{Another} formula, $\displaystyle \lim\limits_{n\to\infty} \frac{1}{n}=0$
-\end{tcbverbatimwrite}
-%-- end embedded file --
-
-\csvlisting{macrodata}
-
-Firstly, we survey the file content using
-|\csvautobooktabular|.
-
-\begin{dispExample}
-\csvautobooktabular{macrodata.csv}
-\end{dispExample}
-
-
-\begin{dispExample}
-\csvstyle{my enumerate}{head to column names,
- before reading=\begin{enumerate},after reading=\end{enumerate}}
-
-\csvreader[my enumerate]{macrodata.csv}{}{%
- \item \description:\par\content}
-
-\bigskip
-Now, formulas only:
-\csvreader[my enumerate,filter equal={\type}{M}]{macrodata.csv}{}{%
- \item \description:\qquad\content}
-\end{dispExample}
-
-\clearpage
-\subsection{Tables with Number Formatting}\label{numberformatting}%
-
-We consider a file with numerical data which should be pretty-printed.
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{data_numbers.csv}
-month, dogs, cats
-January, 12.50,12.3e5
-February, 3.32, 8.7e3
-March, 43, 3.1e6
-April, 0.33, 21.2e4
-May, 5.12, 3.45e6
-June, 6.44, 6.66e6
-July, 123.2,7.3e7
-August, 12.3, 5.3e4
-September,2.3, 4.4e4
-October, 6.5, 6.5e6
-November, 0.55, 5.5e5
-December, 2.2, 3.3e3
-\end{tcbverbatimwrite}
-
-\csvlisting{data_numbers}
-
-The |siunitx| \cite{wright:siuntix} package provides a new column type |S|
-which can align material using a number of different strategies.
-The following example demonstrates the application with CSV reading.
-The package documentation \cite{wright:siuntix} contains a huge amount
-of formatting options.
-
-\begin{dispExample}
-% \usepackage{siunitx,array,booktabs}
-\csvloop{
- file=data_numbers.csv,
- head to column names,
- before reading=\centering\sisetup{table-number-alignment=center},
- tabular={lSS[table-format=2.2e1]},
- table head=\toprule\textbf{Month} & \textbf{Dogs} & \textbf{Cats}\\\midrule,
- command=\month & \dogs & \cats,
- table foot=\bottomrule}
-\end{dispExample}
-
-\clearpage
-Special care is needed, if the \emph{first} column is to be formatted with
-the column type |S|. The number detection of |siunitx| is disturbed by
-the line reading code of |csvsimple| which is actually present at the
-first column. To avoid this problem, the content of the first column
-could be formatted not by the table format definition, but by using a
-suitable |\tablenum| formatting directly, see |siunitx| \cite{wright:siuntix}.
-
-Another and very nifty workaround suggested by Enrico Gregorio is to
-add an invisible dummy column with |c@{}| as first column:
-
-
-\begin{dispExample}
-% \usepackage{siunitx,array,booktabs}
-\csvloop{
- file=data_numbers.csv,
- head to column names,
- before reading=\centering\sisetup{table-number-alignment=center},
- tabular={c@{}S[table-format=2.2e1]S},
- table head= & \textbf{Cats} & \textbf{Dogs}\\\midrule,
- command= & \cats & \dogs,
- table foot=\bottomrule}
-\end{dispExample}
-
-
-\clearpage
-Now, the preceding table shall be sorted by the \emph{cats} values.
-If the \csvsorter\ program is properly installed,
-see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
-this can be done with the following configuration file for \csvsorter:
-
-\xmllisting{catsort}
-
-Now, we just have to add an option |sort by=catsort.xml|:
-\begin{dispExample}
-% \usepackage{siunitx,array,booktabs}
-% Also, the CSV-Sorter tool has to be installed
-\csvloop{
- file=data_numbers.csv,
- sort by=catsort.xml,
- head to column names,
- before reading=\centering\sisetup{table-number-alignment=center},
- tabular={lSS[table-format=2.2e1]},
- table head=\toprule\textbf{Month} & \textbf{Dogs} & \textbf{Cats}\\\midrule,
- command=\month & \dogs & \cats,
- table foot=\bottomrule}
-\end{dispExample}
-
-
-\clearpage
-\subsection{CSV data without header line}\label{noheader}%
-CSV files with a header line are more semantic than files without header,
-but it's no problem to work with headless files.
-
-For this example, we use again some artificial statistical data given by a CSV file
-but this time without header.
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{data_headless.csv}
-Bayern,A,1700
-Baden-Württemberg,A,2300
-Sachsen,B,1520
-Thüringen,A,1900
-Hessen,B,2100
-\end{tcbverbatimwrite}
-%-- end embedded file --
-
-\csvlisting{data_headless}
-
-Note that you cannot use the \refKey{/csv/no head} option for the auto tabular
-commands. If no options are given, the first line is interpreted as header line
-which gives an unpleasant result:
-
-\begin{dispExample}
-\csvautobooktabular{data_headless.csv}
-\end{dispExample}
-
-To get the expected result, one can redefine \refKey{/csv/table head}
-using \refCom{csvlinetotablerow} which holds the first line data for the
-|\csvauto...| commands:
-
-\begin{dispExample}
-\csvautobooktabular[table head=\toprule\csvlinetotablerow\\]{data_headless.csv}
-\end{dispExample}
-
-This example can be extended to insert a table head for this headless data:
-
-\begin{dispExample}
-\csvautobooktabular[table head=\toprule\bfseries Land & \bfseries Group
- & \bfseries Amount\\\midrule\csvlinetotablerow\\]{data_headless.csv}
-\end{dispExample}
-
-\clearpage
-
-For the normal \refCom{csvreader} command, the \refKey{/csv/no head} option
-should be applied. Of course, we cannot use \refKey{/csv/head to column names}
-because there is no head, but the columns can be addressed by their numbers:
-
-\begin{dispExample}
-\csvreader[no head,
- tabular=lr,
- table head=\toprule\bfseries Land & \bfseries Amount\\\midrule,
- table foot=\bottomrule]
- {data_headless.csv}
- {1=\land,3=\amount}
- {\land & \amount}
-\end{dispExample}
-
-
-\clearpage
-\subsection{Imported CSV data}\label{sec:importeddata}%
-If data is imported from other applications, there is not always a choice
-to format it as comma separated values with curly brackets.
-
-Consider the following example data file:
-
-%-- file embedded for simplicity --
-\begin{tcbverbatimwrite}{imported.csv}
-"name";"address";"email"
-"Frank Smith";"Yellow Road 123, Brimblsby";"frank.smith at organization.org"
-"Mary May";"Blue Alley 2a, London";"mmay at maybe.uk"
-"Hans Meier";"Hauptstraße 32, Berlin";"hans.meier at corporation.de"
-\end{tcbverbatimwrite}
-%-- end embedded file --
-
-\csvlisting{imported}
-
-If the \csvsorter\ program is properly installed,
-see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
-this can be transformed on-the-fly
-with the following configuration file for \csvsorter:
-
-\xmllisting{transform}
-
-Now, we just have to add an option |sort by=transform.xml| to transform
-the input data. Here, we actually do not sort.
-
-\begin{dispExample}
-% \usepackage{booktabs,array}
-% Also, the CSV-Sorter tool has to be installed
-\newcommand{\Header}[1]{\normalfont\bfseries #1}
-
-\csvreader[
- sort by=transform.xml,
- tabular=>{\itshape}ll>{\ttfamily}l,
- table head=\toprule\Header{Name} & \Header{Address} & \Header{email}\\\midrule,
- table foot=\bottomrule]
- {imported.csv}{}{\csvlinetotablerow}
-\end{dispExample}
-
-The file which is generated on-the-fly and which is actually read by
-|csvsimple| is the following:
-
-\tcbinputlisting{docexample,listing style=tcbdocumentation,fonttitle=\bfseries,
- listing only,listing file=\jobname_sorted._csv}
-
-
-\clearpage
-\subsection{Encoding}\label{encoding}%
-If the CSV file has a different encoding than the \LaTeX\ source file,
-then special care is needed.
-
-\begin{itemize}
-\item The most obvious treatment is to change the encoding of the CSV file
- or the \LaTeX\ source file to match the other one (every good editor
-  supports such a conversion). This is the easiest choice, if there are no
- good reasons against such a step. E.g., unfortunately, several tools
- under Windows need the CSV file to be |cp1252| encoded while
- the \LaTeX\ source file may need to be |utf8| encoded.
-
-\item The |inputenc| package allows switching the encoding inside the
- document, say from |utf8| to |cp1252|. Just be aware that you should only
- use pure ASCII for additional texts inside the switched region.
-\begin{dispListing}
-% !TeX encoding=UTF-8
-% ....
-\usepackage[utf8]{inputenc}
-% ....
-\begin{document}
-% ....
-\inputencoding{latin1}% only use ASCII from here, e.g. "Uberschrift
-\csvreader[%...
- ]{data_cp1252.csv}{%...
- }{% ....
- }
-\inputencoding{utf8}
-% ....
\end{document}
-\end{dispListing}
-
-\item As a variant to the last method, the encoding switch can be done
- using options from |csvsimple|:
-\begin{dispListing}
-% !TeX encoding=UTF-8
-% ....
-\usepackage[utf8]{inputenc}
-% ....
-\begin{document}
-% ....
-% only use ASCII from here, e.g. "Uberschrift
-\csvreader[%...
- before reading=\inputencoding{latin1},
- after reading=\inputencoding{utf8},
- ]{data_cp1252.csv}{%...
- }{% ....
- }
-% ....
-\end{document}
-\end{dispListing}
-
-\pagebreak\item
-If the \csvsorter\ program is properly installed,
-see Subsection~\ref{sec:Sorting} on page~\pageref{sec:Sorting},
-the CSV file can be re-encoded on-the-fly
-with the following configuration file for \csvsorter:
-
-\xmllisting{encoding}
-
-\begin{dispListing}
-% !TeX encoding=UTF-8
-% ....
-\usepackage[utf8]{inputenc}
-% ....
-\begin{document}
-% ....
-\csvreader[%...
- sort by=encoding.xml,
- ]{data_cp1252.csv}{%...
- }{% ....
- }
-% ....
-\end{document}
-\end{dispListing}
-
-
-\end{itemize}
-
-
-
-
-\clearpage
-
-% Actually, it is not a good idea to include the references like this!
-% Do not follow this bad example ...
-\begin{tcbverbatimwrite}{\jobname.bib}
- at manual{tantau:tikz,
- author = {Till Tantau},
- title = {The TikZ and PGF Packages},
- subtitle = {Manual for version 3.1.2},
- url = {http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf},
- date = {2019-04-04},
-}
-
- at manual{carlisle:2014c,
- author = {David Carlisle},
- title = {The ifthen package},
- url = {http://mirror.ctan.org/macros/latex/base/ifthen.pdf},
- date = {2014-09-29},
- language = {english}
-}
-
-
- at manual{talbot:datatool,
- author = {Nicola L. C. Talbot},
- title = {User Manual for the datatool bundle version 2.31},
- url = {http://mirrors.ctan.org/macros/latex/contrib/datatool/datatool-user.pdf},
- date = {2018-12-07},
- language = {english}
-}
-
- at manual{sturm:csvsorter,
- author = {Thomas F. Sturm},
- title = {The CSV-Sorter program},
- subtitle = {Manual for version 0.95 beta},
- url = {http://T-F-S.github.io/csvsorter/csvsorter.pdf},
- date = {2018-01-11},
- language = {english}
-}
-
- at manual{carlisle:2014d,
- author = {David Carlisle},
- title = {The longtable package},
- url = {http://mirror.ctan.org/macros/latex/required/tools/longtable.pdf},
- date = {2014-10-28},
- language = {english}
-}
-
-
- at manual{fear:2016a,
- author = {Simon Fear},
- title = {Publication quality tables in \LaTeX},
- url = {http://mirror.ctan.org/macros/latex/contrib/booktabs/booktabs.pdf},
- date = {2016-04-29},
- language = {english}
-}
-
- at manual{wright:siuntix,
- author = {Joseph Wright},
- title = {siunitx --- A comprehensive (SI) units package},
- url = {http://mirror.ctan.org/macros/latex/contrib/siunitx/siunitx.pdf},
- date = {2018-05-17},
- language = {english}
-}
-
- at manual{lehmannwright:etoolbox,
- author = {Philipp Lehman and Joseph Wright},
- title = {The etoolbox Package},
- url = {http://mirror.ctan.org/macros/latex/contrib/etoolbox/etoolbox.pdf},
- date = {2018-08-19},
-}
-
-\end{tcbverbatimwrite}
-
-
-\printbibliography[heading=bibintoc]
-
-\printindex
-
-\end{document}
Added: trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-l3.sty
===================================================================
--- trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-l3.sty (rev 0)
+++ trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-l3.sty 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,1260 @@
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
+%% csvsimple-l3.sty: Simple LaTeX CSV file processing (LaTeX3)
+%%
+%% -------------------------------------------------------------------------------------------
+%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
+%% -------------------------------------------------------------------------------------------
+%%
+%% This work may be distributed and/or modified under the
+%% conditions of the LaTeX Project Public License, either version 1.3
+%% of this license or (at your option) any later version.
+%% The latest version of this license is in
+%% http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% This work has the LPPL maintenance status `author-maintained'.
+%%
+%% This work consists of all files listed in README.md
+%%
+\ProvidesExplPackage{csvsimple-l3}{2021/06/29}{2.0.0}
+ {LaTeX3 CSV file processing}
+
+
+
+%---- check package
+
+\cs_if_exist:NT \c__csvsim_package_expl_bool
+ {
+ \msg_new:nnn { csvsimple }{ l3 / package-loaded }
+    { Package~'csvsimple-legacy'~seems~already~to~be~loaded!~
+ 'csvsimple-l3'~cannot~be~loaded~simultaneously.~
+ Therefore,~loading~of~'csvsimple-l3'~stops~now.}
+ \msg_warning:nn { csvsimple }{ l3 / package-loaded }
+ \tex_endinput:D
+ }
+\bool_const:Nn \c__csvsim_package_expl_bool { 1 }
+
+
+
+%---- declarations and expl3 variants
+
+\bool_new:N \g__csvsim_check_column_count_bool
+\bool_new:N \g__csvsim_firstline_bool
+\bool_new:N \g__csvsim_head_bool
+\bool_new:N \g__csvsim_head_to_colnames_bool
+\bool_new:N \g__csvsim_line_accepted_bool
+\bool_new:N \l__csvsim_respect_and_bool
+\bool_new:N \l__csvsim_respect_backslash_bool
+\bool_new:N \l__csvsim_respect_circumflex_bool
+\bool_new:N \l__csvsim_respect_dollar_bool
+\bool_new:N \l__csvsim_respect_leftbrace_bool
+\bool_new:N \l__csvsim_respect_percent_bool
+\bool_new:N \l__csvsim_respect_rightbrace_bool
+\bool_new:N \l__csvsim_respect_sharp_bool
+\bool_new:N \l__csvsim_respect_tab_bool
+\bool_new:N \l__csvsim_respect_tilde_bool
+\bool_new:N \l__csvsim_respect_underscore_bool
+
+\int_new:N \g__csvsim_col_int
+\int_new:N \g__csvsim_colmax_int
+\int_new:N \g_csvsim_inputline_int
+\int_new:N \g_csvsim_row_int
+\int_new:N \g_csvsim_columncount_int
+
+\seq_new:N \g__csvsim_colname_seq
+\seq_new:N \g__csvsim_line_seq
+\seq_new:N \g__csvsim_range_seq
+
+\str_new:N \g__csvsim_curfilename_str
+\str_new:N \g__csvsim_filename_str
+\str_new:N \l__csvsim_csvsorter_command_str
+\str_new:N \l__csvsim_csvsorter_configpath_str
+\str_new:N \l__csvsim_csvsorter_log_str
+\str_new:N \l__csvsim_csvsorter_token_str
+\str_new:N \l__csvsim_ppfilename_str
+\str_new:N \l__csvsim_temp_filename_str
+
+\tl_const:Nn \c__csvsim_par_tl { \par }
+
+\tl_new:N \g__csvsim_after_table_tl
+\tl_new:N \g__csvsim_before_table_tl
+\tl_new:N \g__csvsim_body_tl
+\tl_new:N \g__csvsim_catcode_tl
+
+\tl_new:N \g__csvsim_columnnames_tl
+\tl_new:N \g__csvsim_filter_tl
+\tl_new:N \g__csvsim_headname_prefix_tl
+\tl_new:N \g__csvsim_hook_after_first_line_tl
+\tl_new:N \g__csvsim_hook_after_head_tl
+\tl_new:N \g__csvsim_hook_after_line_tl
+\tl_new:N \g__csvsim_hook_after_reading_tl
+\tl_new:N \g__csvsim_hook_before_filter_tl
+\tl_new:N \g__csvsim_hook_before_first_line_tl
+\tl_new:N \g__csvsim_hook_before_line_tl
+\tl_new:N \g__csvsim_hook_before_reading_tl
+\tl_new:N \g__csvsim_hook_columncounterror_tl
+\tl_new:N \g__csvsim_hook_late_after_first_line_tl
+\tl_new:N \g__csvsim_hook_late_after_head_tl
+\tl_new:N \g__csvsim_hook_late_after_last_line_tl
+\tl_new:N \g__csvsim_hook_late_after_line_tl
+\tl_new:N \g__csvsim_hook_table_begin_tl
+\tl_new:N \g__csvsim_hook_table_end_tl
+\tl_new:N \g__csvsim_preprocessor_tl
+\tl_new:N \g__csvsim_separator_tl
+\tl_new:N \g__csvsim_table_foot_tl
+\tl_new:N \g__csvsim_table_head_tl
+
+\group_begin:
+ \char_set_catcode_other:n { 9 }
+ \str_const:Nn \c__csvsim_tab_str { ^^I }
+\group_end:
+
+\regex_const:Nn \c__csvsim_integer_regex {\A\d+\Z}
+
+\cs_generate_variant:Nn \seq_gset_split:Nnn { NVV }
+
+
+%---- messages
+
+\msg_new:nnnn { csvsimple }{ column-name }
+ { Unknown~column~key~'#1'. }
+ { The~key~'#1'~you~used~in~'column~names'~is~unknown.\\
+ Therefore,~the~macro~#2 is~not~defined.
+ }
+
+\msg_new:nnn { csvsimple }{ empty-head }
+  { File~'#1'~starts~with~an~empty~line~(empty~head)!}
+
+\msg_new:nnn { csvsimple }{ file-error }
+ { File~'#1'~not~existent,~not~readable,~or~empty!}
+
+\msg_new:nnn { csvsimple }{ column-wrong-count }
+ { #1~instead~of~#2~columns~for~input~line~#3~of~file~'#4'}
+
+\msg_new:nnn { csvsimple }{ sort-info }
+ { Sort~'#1'~by~'#2' }
+
+\msg_new:nnn { csvsimple }{ sort-shell-escape }
+ { You~need~to~use~'-shell-escape'~to~run~CSV-Sorter }
+
+\msg_new:nnnn { csvsimple }{ sort-error }
+ { Call~of~CSV-Sorter~failed! }
+ { See~log~file~'\l__csvsim_csvsorter_log_str'. }
+
+
+
+%---- core loop processing
+
+\cs_new_protected_nopar:Npn \__csvsim_read_line:
+ {
+ \group_begin:
+ \g__csvsim_catcode_tl
+ \ior_get:NNTF \g__csvsim_ior \l_tmpa_tl
+ {
+ \tl_gset_eq:NN \csvline \l_tmpa_tl
+ \int_gincr:N \g_csvsim_inputline_int
+ }
+ {
+ \msg_error:nnx { csvsimple }{ file-error }{ \g__csvsim_curfilename_str }
+ }
+ \group_end:
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_scan_line:
+ {
+ \int_gzero:N \g__csvsim_col_int
+ \seq_gset_split:NVV \g__csvsim_line_seq \g__csvsim_separator_tl \csvline
+ \seq_map_inline:Nn \g__csvsim_line_seq
+ {
+ \int_gincr:N \g__csvsim_col_int
+ \tl_gset:cn {csvcol \int_to_roman:n \g__csvsim_col_int}{##1}
+ }
+ \int_compare:nNnT \g__csvsim_colmax_int < \g__csvsim_col_int
+ {
+ \int_gset_eq:NN \g__csvsim_colmax_int \g__csvsim_col_int
+ }
+ \int_compare:nNnT \g_csvsim_columncount_int < \c_one_int
+ {
+ \int_gset_eq:NN \g_csvsim_columncount_int \g__csvsim_col_int
+ }
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_process_head_name:n #1
+ {
+ \tl_set:No \l_tmpa_tl {\cs:w csvcol\int_to_roman:n{#1} \cs_end:}
+ \exp_args:NnV \cs_set_nopar:cpn {__csvsim__/\l_tmpa_tl} \l_tmpa_tl
+
+ \bool_if:NT \g__csvsim_head_to_colnames_bool
+ {
+ \tl_set:No \l_tmpb_tl {\cs:w \g__csvsim_headname_prefix_tl \l_tmpa_tl \cs_end:}
+ \tl_put_right:NV \l_tmpb_tl \l_tmpa_tl
+ \exp_args:NNV \seq_gput_right:Nn \g__csvsim_colname_seq \l_tmpb_tl
+ }
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_read_head:
+ {
+ \__csvsim_read_line:
+ \tl_if_eq:NNTF \csvline \c__csvsim_par_tl
+ {
+ \msg_error:nnx { csvsimple }{ empty-head }{ \g__csvsim_filename_str }
+ }
+ {
+ \int_zero:N \g_csvsim_columncount_int
+ \__csvsim_scan_line:
+ \int_step_function:nN \g_csvsim_columncount_int \__csvsim_process_head_name:n
+ }
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_process_colname:nn #1#2
+ {
+ \cs_if_exist:cTF {__csvsim__/#1}
+ {
+ \tl_set:Nv \l_tmpa_tl {__csvsim__/#1}
+ \tl_put_left:Nn \l_tmpa_tl {#2}
+ \exp_args:NNV \seq_gput_right:Nn \g__csvsim_colname_seq \l_tmpa_tl
+ }
+ {
+ \regex_match:NnTF \c__csvsim_integer_regex {#1}
+ {
+ \tl_set:No \l_tmpa_tl {\cs:w csvcol\int_to_roman:n{#1} \cs_end:}
+ \tl_put_left:Nn \l_tmpa_tl {#2}
+ \exp_args:NNV \seq_gput_right:Nn \g__csvsim_colname_seq \l_tmpa_tl
+ }
+ {
+ \str_set:Nn \l_tmpb_str {#2}
+ \msg_error:nnxx { csvsimple }{ column-name }{ #1 }{ \l_tmpb_str }
+ }
+ }
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_set_colnames:
+ {
+ \seq_map_inline:Nn \g__csvsim_colname_seq
+ {
+ \tl_gset_eq:NN ##1
+ }
+ }
+
+
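+% Overview of \__csvsim_loop: (descriptive comment):
+%   1. run the preprocessor (if any) and open the resulting file
+%   2. if 'head' is set, read the head line and map its entries to the
+%      \csvcol... macros and to the column name macros
+%   3. for every body line: split it at the separator, check the column
+%      count, run the filter and range checks, and execute the user body
+%      between the before/after line hooks
+%   4. close the file, clear the column macros, and run the table/reading hooks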
+\cs_new_protected_nopar:Npn \__csvsim_loop:
+ {
+ % preprocess
+ \tl_if_empty:NTF \l__csvsim_preprocessor_tl
+ {
+ \str_gset_eq:NN \g__csvsim_curfilename_str \g__csvsim_filename_str
+ }
+ {
+ \l__csvsim_preprocessor_tl \g__csvsim_filename_str \l__csvsim_ppfilename_str
+ \str_gset_eq:NN \g__csvsim_curfilename_str \l__csvsim_ppfilename_str
+ }
+
+ % initialize
+ \cs_if_exist:NF \g__csvsim_ior
+ {
+ \ior_new:N \g__csvsim_ior
+ }
+ \__csvsim_setup_catcode_list:
+ \seq_gclear:N \g__csvsim_colname_seq
+ \int_gzero:N \g_csvsim_inputline_int
+ \int_gzero:N \g_csvsim_row_int
+ \int_gset_eq:NN \g__csvsim_colmax_int \c_one_int
+
+ % open file
+ \g__csvsim_hook_before_reading_tl
+ \g__csvsim_hook_table_begin_tl
+ \ior_open:Nn \g__csvsim_ior { \g__csvsim_curfilename_str }
+
+ % read head line
+ \bool_if:NT \g__csvsim_head_bool
+ {
+ \__csvsim_read_head:
+ }
+ \exp_args:NNNV \keyval_parse:NNn \use_none:n
+ \__csvsim_process_colname:nn \g__csvsim_columnnames_tl
+ \bool_if:NT \g__csvsim_head_bool
+ {
+ \g__csvsim_hook_after_head_tl
+ }
+
+ % read body lines
+    \bool_gset_true:N \g__csvsim_firstline_bool
+ \bool_until_do:nn {\ior_if_eof_p:N \g__csvsim_ior}
+ {
+ \__csvsim_read_line:
+ \tl_if_eq:NNF \csvline \c__csvsim_par_tl
+ {
+ \bool_gset_true:N \g__csvsim_line_accepted_bool
+ \__csvsim_scan_line:
+ \__csvsim_set_colnames:
+ \bool_if:NT \g__csvsim_check_column_count_bool
+ {
+ \int_compare:nNnF \g__csvsim_col_int = \g_csvsim_columncount_int
+ {
+ \bool_gset_false:N \g__csvsim_line_accepted_bool
+ \g__csvsim_hook_columncounterror_tl
+ }
+ }
+ \bool_if:NT \g__csvsim_line_accepted_bool
+ {
+ \g__csvsim_hook_before_filter_tl
+ \g__csvsim_filter_tl
+ \bool_if:NT \g__csvsim_line_accepted_bool
+ {
+ \int_gincr:N \g_csvsim_row_int
+ \__csvsim_check_range:
+ \bool_if:NT \g__csvsim_line_accepted_bool
+ {
+                  \bool_if:NTF \g__csvsim_firstline_bool
+ {
+ \bool_if:NT \g__csvsim_head_bool
+ {
+ \g__csvsim_hook_late_after_head_tl
+ }
+ \g__csvsim_hook_before_first_line_tl
+ \g__csvsim_body_tl
+ \g__csvsim_hook_after_first_line_tl
+                    \bool_gset_false:N \g__csvsim_firstline_bool
+ }
+ {
+ \g__csvsim_hook_late_after_line_tl
+ \g__csvsim_hook_before_line_tl
+ \g__csvsim_body_tl
+ \g__csvsim_hook_after_line_tl
+ }
+ }
+ }
+ }
+ }
+ }
+
+ % close file
+ \ior_close:N \g__csvsim_ior
+
+ % clear macros
+ \int_step_inline:nn \g__csvsim_colmax_int
+ {
+ \tl_set:No \l_tmpa_tl {\cs:w csvcol\int_to_roman:n{##1} \cs_end:}
+ \use:x
+ {
+ \exp_not:N\tl_gclear:N \exp_not:V\l_tmpa_tl
+ }
+ }
+ \__csvsim_set_colnames:
+ \seq_gclear:N \g__csvsim_colname_seq
+  \bool_if:NF \g__csvsim_firstline_bool
+ {
+ \g__csvsim_hook_late_after_last_line_tl
+ }
+ \g__csvsim_hook_table_end_tl
+ \g__csvsim_hook_after_reading_tl
+ }
+
+
+\NewDocumentCommand \csvloop { +m }
+ {
+ \keys_set:nn { csvsim } { default, every~csv, #1}
+ \__csvsim_loop:
+ }
+
+
+\NewDocumentCommand \csvreader { +O{} m m +m }
+ {
+ \keys_set:nn { csvsim } { default, every~csv, #1, file={#2}, column~names={#3} }
+ \tl_gset:Nn \g__csvsim_body_tl {#4}
+ \__csvsim_loop:
+ }
+
+
+
+%---- auxiliary user macros
+
+\NewDocumentCommand \csvlinetotablerow { }
+ {
+ \tl_clear:N \l_tmpa_tl
+ \bool_set_false:N \l_tmpa_bool
+ \seq_map_inline:Nn \g__csvsim_line_seq
+ {
+ \bool_if:NTF \l_tmpa_bool
+ {
+ \tl_put_right:Nn \l_tmpa_tl { & ##1 }
+ }
+ {
+ \tl_put_right:Nn \l_tmpa_tl { ##1 }
+ \bool_set_true:N \l_tmpa_bool
+ }
+ }
+ \l_tmpa_tl
+ }
+
+
+\NewExpandableDocumentCommand \thecsvrow { }
+ {
+ \int_use:N \g_csvsim_row_int
+ }
+
+
+\NewExpandableDocumentCommand \thecsvcolumncount { }
+ {
+ \int_use:N \g_csvsim_columncount_int
+ }
+
+
+\NewExpandableDocumentCommand \thecsvinputline { }
+ {
+ \int_use:N \g_csvsim_inputline_int
+ }
+
+
+\NewExpandableDocumentCommand \ifcsvfirstrow { }
+ {
+ \bool_if:NTF \g__csvsim_firstline_bool
+ }
+
+% deprecated
+\NewExpandableDocumentCommand \csviffirstrow { }
+ {
+ \bool_if:NTF \g__csvsim_firstline_bool
+ }
+
+
+\NewExpandableDocumentCommand \ifcsvoddrow { }
+ {
+ \int_if_odd:nTF {\g_csvsim_row_int}
+ }
+
+% deprecated
+\NewExpandableDocumentCommand \csvifoddrow { }
+ {
+ \int_if_odd:nTF {\g_csvsim_row_int}
+ }
+
+
+\NewExpandableDocumentCommand \ifcsvstrcmp { m m }
+ {
+ \str_compare:eNeTF {#1} = {#2}
+ }
+
+
+\NewExpandableDocumentCommand \ifcsvnotstrcmp { m m +m +m }
+ {
+ \ifcsvstrcmp{#1}{#2}{#4}{#3}
+ }
+
+
+\NewDocumentCommand \ifcsvstrequal { m m }
+ {
+ \tl_set:Nx \l_tmpa_tl {#1}
+ \tl_set:Nx \l_tmpb_tl {#2}
+ \tl_if_eq:NNTF \l_tmpa_tl \l_tmpb_tl
+ }
+
+
+\NewDocumentCommand \ifcsvprostrequal { m m }
+ {
+ \protected at edef \l_tmpa_tl {#1}
+ \protected at edef \l_tmpb_tl {#2}
+ \tl_if_eq:NNTF \l_tmpa_tl \l_tmpb_tl
+ }
+
+
+\NewExpandableDocumentCommand \ifcsvfpcmp { m }
+ {
+ \fp_compare:nTF {#1}
+ }
+
+
+\NewExpandableDocumentCommand \ifcsvintcmp { m }
+ {
+ \int_compare:nTF {#1}
+ }
+
+
+%---- filename functions
+
+\cs_new_protected_nopar:Npn \__csvsim_set_temp_filename:nnn #1#2#3
+ {
+ \str_set:Nn \l__csvsim_temp_filename_str {#2#3}
+ \str_if_empty:NTF \l__csvsim_temp_filename_str
+ {
+ \str_set:Nn \l__csvsim_temp_filename_str {#1}
+ }
+ {
+ \str_set:Nn \l_tmpa_str {#1}
+ \str_if_empty:NF \l_tmpa_str
+ {
+ \str_compare:eNeF { \str_item:Nn \l_tmpa_str {-1} } = { / }
+ {
+ \str_put_right:Nn \l_tmpa_str {/}
+ }
+ \str_concat:NNN \l__csvsim_temp_filename_str
+ \l_tmpa_str \l__csvsim_temp_filename_str
+ }
+ }
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_set_temp_filename:n #1
+ {
+ \file_parse_full_name_apply:nN { #1 } \__csvsim_set_temp_filename:nnn
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_set_filename:Nn #1#2
+ {
+ \__csvsim_set_temp_filename:n { #2 }
+ \str_set_eq:NN #1 \l__csvsim_temp_filename_str
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_gset_filename:Nn #1#2
+ {
+ \__csvsim_set_temp_filename:n { #2 }
+ \str_gset_eq:NN #1 \l__csvsim_temp_filename_str
+ }
+
+
+
+%---- keys
+
+\NewDocumentCommand \csvset { +m }
+ {
+ \keys_set:nn { csvsim } { #1 }
+ }
+
+
+\NewDocumentCommand \csvstyle { m +m }
+ {
+ \keys_define:nn { csvsim }
+ {
+ #1 .meta:n = { #2 }
+ }
+ }
+
+
+\NewDocumentCommand \csvnames { m m }
+ {
+ \keys_define:nn { csvsim }
+ {
+ #1 .meta:n = { column~names={#2} }
+ }
+ }
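
\csvstyle and \csvnames merely wrap key definitions; a sketch of how they could be combined (all names below are made up):

    \csvstyle{gradetable}{tabular=lr,
      table head=Name & Grade\\\hline,
      late after line=\\}
    \csvnames{gradenames}{name=\name,grade=\grade}
    \csvreader[gradetable,gradenames]{grades.csv}{}{\name & \grade}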
+
+
+\keys_define:nn { csvsim }
+ {
+ file .code:n = \__csvsim_gset_filename:Nn \g__csvsim_filename_str {#1},
+ column~names~reset .code:n = \tl_gclear:N \g__csvsim_columnnames_tl,
+ column~names .code:n =
+ {
+ \tl_if_empty:NTF \g__csvsim_columnnames_tl
+ {
+ \tl_gset:Nn \g__csvsim_columnnames_tl {#1}
+ }
+ {
+ \tl_gput_right:Nn \g__csvsim_columnnames_tl {,#1}
+ }
+ },
+ command .tl_gset:N = \g__csvsim_body_tl,
+ check~column~count .bool_gset:N = \g__csvsim_check_column_count_bool,
+ on~column~count~error .tl_gset:N = \g__csvsim_hook_columncounterror_tl,
+ head .bool_gset:N = \g__csvsim_head_bool,
+ head~to~column~names~prefix .tl_gset:N = \g__csvsim_headname_prefix_tl,
+ head~to~column~names .bool_gset:N = \g__csvsim_head_to_colnames_bool,
+ column~count .int_gset:N = \g_csvsim_columncount_int,
+ separator .choice:,
+ separator/comma .code:n =
+ {
+ \tl_gset:Nn \g__csvsim_separator_tl {,}
+ },
+ separator/semicolon .code:n =
+ {
+ \tl_gset:Nn \g__csvsim_separator_tl {;}
+ },
+ separator/pipe .code:n =
+ {
+ \tl_gset:Nn \g__csvsim_separator_tl {|}
+ },
+ separator/tab .code:n =
+ {
+ \tl_gset:NV \g__csvsim_separator_tl \c__csvsim_tab_str
+ \csvset{respect~tab}
+ },
+ every~csv .meta:n = {},
+ no~head .meta:n = { head=false },
+ no~check~column~count .meta:n = { check~column~count=false },
+ warn~on~column~count~error .meta:n = { on~column~count~error=
+ {
+ \msg_warning:nnxxxx { csvsimple }{ column-wrong-count }
+ { \int_use:N\g__csvsim_col_int }
+ { \int_use:N\g_csvsim_columncount_int }
+ { \int_use:N\g_csvsim_inputline_int }
+ { \g__csvsim_filename_str }
+ }},
+ }
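
These keys go into the option list of the reader commands; for example, to read a semicolon-separated file and be warned about lines whose column count deviates from the head line (file name hypothetical):

    \csvreader[separator=semicolon, warn on column count error,
               head to column names]
      {data.csv}{}{\csvcoli: \csvcolii\par}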
+
+
+%---- hooks
+
+\keys_define:nn { csvsim }
+ {
+ before~reading .tl_gset:N = \g__csvsim_hook_before_reading_tl,
+ after~head .tl_gset:N = \g__csvsim_hook_after_head_tl,
+ before~filter .tl_gset:N = \g__csvsim_hook_before_filter_tl,
+ late~after~head .tl_gset:N = \g__csvsim_hook_late_after_head_tl,
+ late~after~first~line .tl_gset:N = \g__csvsim_hook_late_after_first_line_tl,
+ late~after~last~line .tl_gset:N = \g__csvsim_hook_late_after_last_line_tl,
+ before~first~line .tl_gset:N = \g__csvsim_hook_before_first_line_tl,
+ after~first~line .tl_gset:N = \g__csvsim_hook_after_first_line_tl,
+ after~reading .tl_gset:N = \g__csvsim_hook_after_reading_tl,
+ late~after~line .code:n =
+ {
+ \tl_gset:Nn \g__csvsim_hook_late_after_line_tl {#1}
+ \tl_gset_eq:NN \g__csvsim_hook_late_after_first_line_tl \g__csvsim_hook_late_after_line_tl
+ \tl_gset_eq:NN \g__csvsim_hook_late_after_last_line_tl \g__csvsim_hook_late_after_line_tl
+ },
+ before~line .code:n =
+ {
+ \tl_gset:Nn \g__csvsim_hook_before_line_tl {#1}
+ \tl_gset_eq:NN \g__csvsim_hook_before_first_line_tl \g__csvsim_hook_before_line_tl
+ },
+ after~line .code:n =
+ {
+ \tl_gset:Nn \g__csvsim_hook_after_line_tl {#1}
+ \tl_gset_eq:NN \g__csvsim_hook_after_first_line_tl \g__csvsim_hook_after_line_tl
+ },
+ }
+
+
+%---- filter
+
+\cs_new_protected_nopar:Npn \__csvsim_set_filter:n #1
+ {
+ \tl_gset:Nn \g__csvsim_filter_tl
+ {
+ #1
+ }
+ }
+
+
+\NewDocumentCommand \csvfilteraccept { }
+ {
+ \__csvsim_set_filter:n
+ {
+ \bool_gset_true:N \g__csvsim_line_accepted_bool
+ }
+ }
+
+
+\NewDocumentCommand \csvfilterreject { }
+ {
+ \__csvsim_set_filter:n
+ {
+ \bool_gset_false:N \g__csvsim_line_accepted_bool
+ }
+ }
+
+
+\keys_define:nn { csvsim }
+ {
+ no~filter .code:n =
+ {
+ \csvfilteraccept
+ },
+ filter~reject~all .code:n =
+ {
+ \csvfilterreject
+ },
+ filter~accept~all .code:n =
+ {
+ \csvfilteraccept
+ },
+    full~filter .tl_gset:N = \g__csvsim_hook_before_filter_tl,
+ filter~test .code:n =
+ {
+ \__csvsim_set_filter:n
+ {
+ #1
+ { \bool_gset_true:N \g__csvsim_line_accepted_bool }
+ { \bool_gset_false:N \g__csvsim_line_accepted_bool }
+ }
+ },
+ filter~bool .code:n =
+ {
+ \__csvsim_set_filter:n
+ {
+ \bool_gset:Nn \g__csvsim_line_accepted_bool { #1 }
+ }
+ },
+ filter~fp .code:n =
+ {
+ \__csvsim_set_filter:n
+ {
+ \bool_gset:Nn \g__csvsim_line_accepted_bool { \fp_compare_p:n{#1} }
+ }
+ },
+ filter~strcmp .meta:n = { filter~test=\ifcsvstrcmp #1 },
+ filter~not~strcmp .meta:n = { filter~test=\ifcsvnotstrcmp #1 },
+ }
+
+
+\NewDocumentCommand \csvfilterbool { m m }
+ {
+ \keys_define:nn { csvsim }
+ {
+ #1 .meta:n = { filter~bool={#2} }
+ }
+ }
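
A filter sketch built on the keys above; it would typeset only the rows whose (hypothetical) grade column equals the string A:

    \csvreader[head to column names,
               filter test=\ifcsvstrcmp{\grade}{A}]
      {grades.csv}{}{\name\par}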
+
+
+% ifthen
+\keys_define:nn { csvsim }
+ {
+ filter~ifthen .code:n =
+ {
+ \__csvsim_set_filter:n
+ {
+ \ifthenelse{#1}
+ { \bool_gset_true:N \g__csvsim_line_accepted_bool }
+ { \bool_gset_false:N \g__csvsim_line_accepted_bool }
+ }
+ },
+ filter~equal .meta:n = { filter~ifthen=\equal #1 },
+ filter~not~equal .meta:n = { filter~ifthen=\not\equal #1 },
+ }
+
+
+% etoolbox
+\keys_define:nn { csvsim }
+ {
+ filter~expr .code:n =
+ {
+ \__csvsim_set_filter:n
+ {
+ \ifboolexpr{#1}
+ { \bool_gset_true:N \g__csvsim_line_accepted_bool }
+ { \bool_gset_false:N \g__csvsim_line_accepted_bool }
+ }
+ },
+ }
+
+
+
+%---- range
+
+
+\cs_new_protected_nopar:Npn \__csvsim_add_range:n #1
+ {
+ \tl_if_in:nnTF {#1}{-}
+ {
+ \seq_set_split:Nnn \l_tmpa_seq {-} {#1}
+ \seq_pop_left:NN \l_tmpa_seq \l_tmpa_tl
+ \seq_pop_left:NN \l_tmpa_seq \l_tmpb_tl
+ \tl_if_empty:NTF \l_tmpa_tl
+ {
+ \int_set_eq:NN \l_tmpa_int \c_one_int
+ }
+ {
+ \int_set:Nn \l_tmpa_int { \l_tmpa_tl }
+ }
+ \tl_if_empty:NTF \l_tmpb_tl
+ {
+ \int_set_eq:NN \l_tmpb_int \c_max_int
+ }
+ {
+ \int_set:Nn \l_tmpb_int { \l_tmpb_tl }
+ }
+ }
+ {
+ \tl_if_in:nnTF {#1}{+}
+ {
+ \seq_set_split:Nnn \l_tmpa_seq {+} {#1}
+ \seq_pop_left:NN \l_tmpa_seq \l_tmpa_tl
+ \seq_pop_left:NN \l_tmpa_seq \l_tmpb_tl
+ \tl_if_empty:NTF \l_tmpa_tl
+ {
+ \int_set:Nn \l_tmpa_int { 1 }
+ }
+ {
+ \int_set:Nn \l_tmpa_int { \l_tmpa_tl }
+ }
+ \tl_if_empty:NTF \l_tmpb_tl
+ {
+ \int_set_eq:NN \l_tmpb_int \l_tmpa_int
+ }
+ {
+ \int_set:Nn \l_tmpb_int { \l_tmpa_int + \l_tmpb_tl - 1 }
+ }
+ }
+ {
+ \int_set:Nn \l_tmpa_int {#1}
+ \int_set_eq:NN \l_tmpb_int \l_tmpa_int
+ }
+ }
+ \seq_gput_right:Nx \g__csvsim_range_seq {{\int_use:N \l_tmpa_int}{\int_use:N \l_tmpb_int}}
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_set_range:n #1
+ {
+ \seq_gclear:N \g__csvsim_range_seq
+ \keyval_parse:NNn
+ \__csvsim_add_range:n
+ \use_none:nn
+ { #1 }
+ }
+
+
+\keys_define:nn { csvsim }
+ {
+ range .code:n =
+ {
+ \__csvsim_set_range:n {#1}
+ },
+ }
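
The range value is a comma-separated list in which each item is a single row number n, a span n-m, an open-ended span n- or -m, or n+k for k consecutive rows starting at row n; for instance (file name hypothetical):

    \csvreader[head to column names, range={2-4,10+3,50-}]
      {grades.csv}{}{\name & \grade\\}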
+
+
+\prg_new_conditional:Npnn \__csvsim_if_in_range:nn #1#2 { p , T }
+ {
+ \if_int_compare:w #1 > \g_csvsim_row_int
+ \prg_return_false:
+ \else:
+ \if_int_compare:w #2 < \g_csvsim_row_int
+ \prg_return_false:
+ \else:
+ \prg_return_true:
+ \fi:
+ \fi:
+ }
+
+
+\cs_new_protected_nopar:Npn \__csvsim_check_range:
+ {
+ \seq_if_empty:NF \g__csvsim_range_seq
+ {
+ \bool_gset_false:N \g__csvsim_line_accepted_bool
+ \seq_map_inline:Nn \g__csvsim_range_seq
+ {
+ \__csvsim_if_in_range:nnT ##1
+ {
+ \bool_gset_true:N \g__csvsim_line_accepted_bool
+ \seq_map_break:
+ }
+ }
+ }
+ }
+
+
+
+%---- catcodes
+
+\cs_new_protected_nopar:Npn \__csvsim_setup_catcode_list:
+ {
+ \tl_gclear:N \g__csvsim_catcode_tl
+ \bool_if:NT \l__csvsim_respect_tab_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 9 } }
+ }
+ \bool_if:NT \l__csvsim_respect_percent_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 37 } }
+ }
+ \bool_if:NT \l__csvsim_respect_sharp_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 35 } }
+ }
+ \bool_if:NT \l__csvsim_respect_dollar_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 36 } }
+ }
+ \bool_if:NT \l__csvsim_respect_and_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 38 } }
+ }
+ \bool_if:NT \l__csvsim_respect_backslash_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 92 } }
+ }
+ \bool_if:NT \l__csvsim_respect_underscore_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 95 } }
+ }
+ \bool_if:NT \l__csvsim_respect_tilde_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 126 } }
+ }
+ \bool_if:NT \l__csvsim_respect_circumflex_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 94 } }
+ }
+ \bool_if:NT \l__csvsim_respect_leftbrace_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 123 } }
+ }
+ \bool_if:NT \l__csvsim_respect_rightbrace_bool
+ {
+ \tl_gput_right:Nn \g__csvsim_catcode_tl { \char_set_catcode_other:n { 125 } }
+ }
+ }
+
+
+\keys_define:nn { csvsim }
+ {
+ respect~tab .bool_set:N = \l__csvsim_respect_tab_bool,
+ respect~percent .bool_set:N = \l__csvsim_respect_percent_bool,
+ respect~sharp .bool_set:N = \l__csvsim_respect_sharp_bool,
+ respect~dollar .bool_set:N = \l__csvsim_respect_dollar_bool,
+ respect~and .bool_set:N = \l__csvsim_respect_and_bool,
+ respect~backslash .bool_set:N = \l__csvsim_respect_backslash_bool,
+ respect~underscore .bool_set:N = \l__csvsim_respect_underscore_bool,
+ respect~tilde .bool_set:N = \l__csvsim_respect_tilde_bool,
+ respect~circumflex .bool_set:N = \l__csvsim_respect_circumflex_bool,
+ respect~leftbrace .bool_set:N = \l__csvsim_respect_leftbrace_bool,
+ respect~rightbrace .bool_set:N = \l__csvsim_respect_rightbrace_bool,
+ respect~all .meta:n =
+ {
+ respect~tab, respect~percent, respect~sharp, respect~dollar,
+ respect~and, respect~backslash, respect~underscore, respect~tilde,
+ respect~circumflex, respect~leftbrace, respect~rightbrace
+ },
+ respect~none .meta:n =
+ {
+ respect~tab=false, respect~percent=false, respect~sharp=false,
+ respect~dollar=false, respect~and=false, respect~backslash=false,
+ respect~underscore=false, respect~tilde=false, respect~circumflex=false,
+ respect~leftbrace=false, respect~rightbrace=false
+ },
+ }
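
These catcode switches matter when the CSV fields contain characters that are otherwise special to TeX; a sketch for data containing underscores and percent signs (file name hypothetical):

    \csvreader[head to column names, respect underscore, respect percent]
      {symbols.csv}{}{\csvcoli\par}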
+
+
+
+%---- tables
+
+\cs_new_protected_nopar:Npn \__csvsim_key_table:nn #1#2
+ {
+ \tl_gset:Nn \g__csvsim_hook_table_begin_tl {#1}
+ \tl_gset:Nn \g__csvsim_hook_table_end_tl {#2}
+ }
+
+
+\keys_define:nn { csvsim }
+ {
+ before~table .tl_gset:N = \g__csvsim_before_table_tl,
+ after~table .tl_gset:N = \g__csvsim_after_table_tl,
+ table~head .tl_gset:N = \g__csvsim_table_head_tl,
+ table~foot .tl_gset:N = \g__csvsim_table_foot_tl,
+ _table_ .code:n = \__csvsim_key_table:nn #1,
+ no~table .meta:n = { _table_={}{} },
+ tabular .meta:n =
+ {
+ _table_ = { \g__csvsim_before_table_tl\begin{tabular}{#1}\g__csvsim_table_head_tl }
+ { \g__csvsim_table_foot_tl\end{tabular}\g__csvsim_after_table_tl },
+ late~after~line = \\
+ },
+ centered~tabular .meta:n =
+ {
+ _table_ = { \begin{center}\g__csvsim_before_table_tl\begin{tabular}{#1}\g__csvsim_table_head_tl }
+ { \g__csvsim_table_foot_tl\end{tabular}\g__csvsim_after_table_tl\end{center} },
+ late~after~line = \\
+ },
+ longtable .meta:n =
+ {
+ _table_ = { \g__csvsim_before_table_tl\begin{longtable}{#1}\g__csvsim_table_head_tl }
+ { \g__csvsim_table_foot_tl\end{longtable}\g__csvsim_after_table_tl },
+ late~after~line = \\
+ },
+ tabbing .meta:n =
+ {
+ _table_ = { \g__csvsim_before_table_tl\begin{tabbing}\g__csvsim_table_head_tl }
+ { \g__csvsim_table_foot_tl\end{tabbing}\g__csvsim_after_table_tl },
+ late~after~line = \\,
+ late~after~last~line =
+ },
+ centered~tabbing .meta:n =
+ {
+ _table_ = { \begin{center}\g__csvsim_before_table_tl\begin{tabbing}\g__csvsim_table_head_tl }
+ { \g__csvsim_table_foot_tl\end{tabbing}\g__csvsim_after_table_tl\end{center} },
+ late~after~line = \\,
+ late~after~last~line =
+ },
+ _autotab_ .meta:n =
+ {
+ file = #1,
+ late~after~line = \\,
+ command = \csvlinetotablerow
+ },
+ _autotabular_ .meta:n =
+ {
+ _autotab_ = #1,
+ late~after~last~line = \g__csvsim_table_foot_tl
+ \end{tabular}
+ \g__csvsim_after_table_tl,
+ },
+ autotabular .meta:n =
+ {
+ _autotabular_ = #1,
+ head,
+ after~head = \g__csvsim_before_table_tl
+ \begin{tabular}{|*{\int_use:N\g__csvsim_col_int}{l|}}
+ \g__csvsim_table_head_tl,
+ table~head = \hline\csvlinetotablerow\\\hline,
+ table~foot = \\\hline,
+ },
+ autotabular* .meta:n =
+ {
+ _autotabular_ = #1,
+ no~head,
+ before~first~line = \g__csvsim_before_table_tl
+ \begin{tabular}{|*{\int_use:N\g__csvsim_col_int}{l|}}
+ \g__csvsim_table_head_tl,
+ table~head = \hline,
+ table~foot = \\\hline,
+ },
+ autobooktabular .meta:n =
+ {
+ _autotabular_ = #1,
+ head,
+ after~head = \g__csvsim_before_table_tl
+ \begin{tabular}{*{\int_use:N\g__csvsim_col_int}{l}}
+ \g__csvsim_table_head_tl,
+ table~head = \toprule\csvlinetotablerow\\\midrule,
+ table~foot = \\\bottomrule,
+ },
+ autobooktabular* .meta:n =
+ {
+ _autotabular_ = #1,
+ no~head,
+ before~first~line = \g__csvsim_before_table_tl
+ \begin{tabular}{*{\int_use:N\g__csvsim_col_int}{l}}
+ \g__csvsim_table_head_tl,
+ table~head = \toprule,
+ table~foot = \\\bottomrule,
+ },
+ _autolongtable_ .meta:n =
+ {
+ _autotab_ = #1,
+ late~after~last~line = \end{longtable}
+ \g__csvsim_after_table_tl,
+ },
+ autolongtable .meta:n =
+ {
+ _autolongtable_ = #1,
+ head,
+ after~head = \g__csvsim_before_table_tl
+ \begin{longtable}{|*{\int_use:N\g__csvsim_col_int}{l|}}
+ \g__csvsim_table_head_tl,
+ table~head = \hline\csvlinetotablerow\\\hline\endhead
+ \hline\endfoot,
+ },
+ autolongtable* .meta:n =
+ {
+ _autolongtable_ = #1,
+ no~head,
+ before~first~line = \g__csvsim_before_table_tl
+ \begin{longtable}{|*{\int_use:N\g__csvsim_col_int}{l|}}
+ \g__csvsim_table_head_tl,
+ table~head = \hline\endhead
+ \hline\endfoot,
+ },
+ autobooklongtable .meta:n =
+ {
+ _autolongtable_ = #1,
+ head,
+ after~head = \g__csvsim_before_table_tl
+ \begin{longtable}{*{\int_use:N\g__csvsim_col_int}{l}}
+ \g__csvsim_table_head_tl,
+ table~head = \toprule\csvlinetotablerow\\\midrule\endhead
+ \bottomrule\endfoot,
+ },
+ autobooklongtable* .meta:n =
+ {
+ _autolongtable_ = #1,
+ no~head,
+ before~first~line = \g__csvsim_before_table_tl
+ \begin{longtable}{*{\int_use:N\g__csvsim_col_int}{l}}
+ \g__csvsim_table_head_tl,
+ table~head = \toprule\endhead
+ \bottomrule\endfoot,
+ },
+ }
+
+
+\NewDocumentCommand \csvautotabular { s +O{} m }
+ {
+ \IfBooleanTF {#1}
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autotabular*={#3}, #2}
+ }
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autotabular={#3}, #2}
+ }
+ \__csvsim_loop:
+ }
+
+
+\NewDocumentCommand \csvautolongtable { s +O{} m }
+ {
+ \IfBooleanTF {#1}
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autolongtable*={#3}, #2}
+ }
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autolongtable={#3}, #2}
+ }
+ \__csvsim_loop:
+ }
+
+
+\NewDocumentCommand \csvautobooktabular { s +O{} m }
+ {
+ \IfBooleanTF {#1}
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autobooktabular*={#3}, #2}
+ }
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autobooktabular={#3}, #2}
+ }
+ \__csvsim_loop:
+ }
+
+
+\NewDocumentCommand \csvautobooklongtable { s +O{} m }
+ {
+ \IfBooleanTF {#1}
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autobooklongtable*={#3}, #2}
+ }
+ {
+ \keys_set:nn { csvsim } { default, every~csv, autobooklongtable={#3}, #2}
+ }
+ \__csvsim_loop:
+ }
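
A sketch of the auto-table front ends defined above; the starred variants skip the header treatment, i.e. the first line is typeset as data (file names hypothetical):

    \csvautobooktabular{grades.csv}
    \csvautolongtable*[separator=semicolon]{data.csv}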
+
+
+
+%---- sorting
+
+\cs_new_protected_nopar:Npn \__csvsim_key_new_sorting_rule:nn #1#2
+ {
+ \keys_define:nn { csvsim }
+ {
+ sort~by~#1 .meta:n = { sort~by={#2} },
+ }
+ }
+
+
+\NewDocumentCommand \csvsortingrule { }
+ {
+ \__csvsim_key_new_sorting_rule:nn
+ }
+
+
+\keys_define:nn { csvsim }
+ {
+ preprocessor .tl_gset:N = \l__csvsim_preprocessor_tl,
+ preprocessed~file .code:n = \__csvsim_set_filename:Nn \l__csvsim_ppfilename_str {#1},
+ csvsorter~command .code:n = \__csvsim_set_filename:Nn \l__csvsim_csvsorter_command_str {#1},
+ csvsorter~configpath .code:n = \__csvsim_set_filename:Nn\l__csvsim_csvsorter_configpath_str {#1},
+ csvsorter~log .code:n = \__csvsim_set_filename:Nn \l__csvsim_csvsorter_log_str {#1},
+ csvsorter~token .code:n = \__csvsim_set_filename:Nn \l__csvsim_csvsorter_token_str {#1},
+ no~preprocessing .meta:n = { preprocessor= },
+ sort~by .meta:n =
+ {
+ preprocessor=
+ {
+ \__csvsim_processor_csvsorter:nnn {#1}
+ }
+ },
+ new~sorting~rule .code:n = \__csvsim_key_new_sorting_rule:nn #1,
+ new~sorting~rule .value_required:n = true ,
+}
+
+
+\keys_set:nn { csvsim }
+ {
+ preprocessed~file = \c_sys_jobname_str _sorted._csv,
+ csvsorter~command = csvsorter,
+ csvsorter~configpath = .,
+ csvsorter~log = csvsorter.log,
+ csvsorter~token = \c_sys_jobname_str.csvtoken,
+ }
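
Sorting is delegated to the external CSV-Sorter tool and therefore needs unrestricted shell escape; a sketch, where namesort.xml stands for a hypothetical CSV-Sorter configuration file:

    \csvsortingrule{name}{namesort.xml}
    \csvreader[sort by name, head to column names]
      {grades.csv}{}{\name & \grade\\}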
+
+
+\cs_new_protected_nopar:Npn \__csvsim_processor_csvsorter:nnn #1#2#3
+ {
+ \sys_if_shell_unrestricted:TF
+ {
+ \__csvsim_set_temp_filename:n { #1 }
+ \msg_note:nnxx { csvsimple }{ sort-info }{ #2 }{ \l__csvsim_temp_filename_str }
+ \cs_if_exist:NF \g__csvsim_iow
+ {
+ \iow_new:N \g__csvsim_iow
+ }
+ \iow_open:Nn \g__csvsim_iow { \l__csvsim_csvsorter_token_str }
+ \iow_now:Nn \g__csvsim_iow { \ExplSyntaxOn \msg_error:nn { csvsimple }{ sort-error } \ExplSyntaxOff }
+ \iow_close:N \g__csvsim_iow
+ \sys_shell_now:x
+ {
+ "\l__csvsim_csvsorter_command_str" \c_space_tl
+ -c~ "\l__csvsim_csvsorter_configpath_str/\l__csvsim_temp_filename_str" \c_space_tl
+ -l~ "\l__csvsim_csvsorter_log_str" \c_space_tl
+ -t~ "\l__csvsim_csvsorter_token_str" \c_space_tl
+ -i~ "#2" \c_space_tl
+ -o~ "#3" \c_space_tl
+ -q~1
+ }
+ \file_input:n { \l__csvsim_csvsorter_token_str }
+ }
+ {
+ \msg_error:nn { csvsimple }{ sort-shell-escape }
+ }
+ }
+
+
+
+%---- default
+
+\keys_define:nn { csvsim }
+ {
+ % default for reset
+ default .meta:n =
+ {
+ file = unknown.csv,
+ no~preprocessing,
+ command = \csvline,
+ column~names~reset,
+ head,
+ check~column~count,
+ head~to~column~names~prefix = ,
+ head~to~column~names = false,
+ column~count = 0,
+ on~column~count~error =,
+ no~filter,
+ before~filter =,
+ before~line =,
+ after~line =,
+ late~after~line =,
+ after~head =,
+ late~after~head =,
+ before~reading =,
+ after~reading =,
+ before~table =,
+ after~table =,
+ table~head =,
+ table~foot =,
+ no~table,
+ separator = comma,
+ respect~none,
+ },
+ }
+
+\keys_set:nn { csvsim }
+ {
+ default
+ }
Property changes on: trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-l3.sty
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-legacy.sty
===================================================================
--- trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-legacy.sty (rev 0)
+++ trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-legacy.sty 2021-06-29 19:53:39 UTC (rev 59756)
@@ -0,0 +1,795 @@
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
+%% csvsimple-legacy.sty: Simple LaTeX CSV file processing (LaTeX2e)
+%%
+%% -------------------------------------------------------------------------------------------
+%% Copyright (c) 2008-2021 by Prof. Dr. Dr. Thomas F. Sturm <thomas dot sturm at unibw dot de>
+%% -------------------------------------------------------------------------------------------
+%%
+%% This work may be distributed and/or modified under the
+%% conditions of the LaTeX Project Public License, either version 1.3
+%% of this license or (at your option) any later version.
+%% The latest version of this license is in
+%% http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% This work has the LPPL maintenance status `author-maintained'.
+%%
+%% This work consists of all files listed in README.md
+%%
+\NeedsTeXFormat{LaTeX2e}
+\ProvidesPackage{csvsimple-legacy}[2021/06/29 version 2.0.0 LaTeX2e CSV file processing]
+
+
+%---- check package
+\ExplSyntaxOn
+\cs_if_exist:NT \c__csvsim_package_expl_bool
+ {
+ \msg_new:nnn { csvsimple }{ legacy / package-loaded }
+      { Package~'csvsimple-l3'~seems~already~to~be~loaded!~
+ 'csvsimple-legacy'~cannot~be~loaded~simultaneously.~
+ Therefore,~loading~of~'csvsimple-legacy'~stops~now.}
+ \msg_warning:nn { csvsimple }{ legacy / package-loaded }
+ \tex_endinput:D
+ }
+\bool_const:Nn \c__csvsim_package_expl_bool { 0 }
+\ExplSyntaxOff
+
+
+
+\RequirePackage{pgfrcs,pgfkeys,ifthen,etoolbox,shellesc}
+
+
+%---- general
+
+\def\csv@warning#1{\PackageWarning{csvsimple}{#1}}
+\def\csv@error#1#2{\PackageError{csvsimple}{#1}{#2}}
+
+\newread\csv@file
+\newcounter{csvinputline}
+\newcounter{csvrow}
+\newcounter{csvcol}
+
+\def\csv@empty{}
+
+\long\def\csviffirstrow#1#2{%
+  \ifnum\c@csvrow=1%
+    \long\def\csviffirstrow@doit{#1}%
+  \else%
+    \long\def\csviffirstrow@doit{#2}%
+  \fi%
+  \csviffirstrow@doit%
+}
+
+\long\def\csvifoddrow#1#2{%
+  \ifodd\c@csvrow%
+    \long\def\csvifoddrow@doit{#1}%
+  \else%
+    \long\def\csvifoddrow@doit{#2}%
+  \fi%
+  \csvifoddrow@doit%
+}
+
+\def\csv@assemble@csvlinetotablerow{%
+  \global\c@csvcol 1\relax%
+  \xdef\csvlinetotablerow{\expandonce{\csname csvcol\romannumeral\c@csvcol\endcsname}}%
+  \ifnum\c@csvcol<\csv@columncount\relax%
+    \loop%
+      \global\advance\c@csvcol 1\relax%
+      \xappto\csvlinetotablerow{\noexpand&\expandonce{\csname csvcol\romannumeral\c@csvcol\endcsname}}%
+    \ifnum\c@csvcol<\csv@columncount\relax\repeat%
+  \fi%
+  \csvlinetotablerow%
+}
+
+
+%---- breaking lines
+
+% This command removes leading and trailing spaces from <Token>. I found
+% the original code on the web. The original author was Michael Downes, who
+% provided the code as an answer to 'around the bend' question #15.
+\catcode`\Q=3
+\def\csv at TrimSpaces#1{%
+ \begingroup%
+ \aftergroup\toks\aftergroup0\aftergroup{%
+ \expandafter\csv at trimb\expandafter\noexpand#1Q Q}%
+ \global\edef#1{\the\toks0}%
+}
+\def\csv at trimb#1 Q{\csv at trimc#1Q}
+\def\csv at trimc#1Q#2{\afterassignment\endgroup \vfuzz\the\vfuzz#1}
+\catcode`\Q=11
+
+\def\csv at TrimBraces#1{\expandafter\csv at TrimBraces@#1\@nil{#1}}
+\def\csv at TrimBraces@#1\@nil#2{\def#2{#1}}
+
+\def\csv at breakline@kernel#1{%
+ \ifx\csv at termination#1\let\nextcol=\relax\else%
+ \let\nextcol=\csv at breakline%
+ \global\advance\c at csvcol 1\relax%
+ \def\csv at col@body{#1}%
+ \csv at TrimSpaces\csv at col@body%
+ \csv at TrimBraces\csv at col@body%
+ \toks@\expandafter{\csv at col@body}%
+ \expandafter\xdef\csname csvcol\romannumeral\c at csvcol\endcsname{\the\toks@}%
+ \fi%
+ \nextcol%
+}
+
+% comma
+\def\csv at breakline@A#1,{\csv at breakline@kernel{#1}}
+
+\def\csv at scanline@A#1{%
+ \global\c at csvcol 0\relax%
+ \csv at breakline#1,\csv at termination,%
+}
+
+% semi colon
+\def\csv at breakline@B#1;{\csv at breakline@kernel{#1}}
+
+\def\csv at scanline@B#1{%
+ \global\c at csvcol 0\relax%
+ \csv at breakline#1;\csv at termination;%
+}
+
+% pipe
+\def\csv at breakline@C#1|{\csv at breakline@kernel{#1}}
+
+\def\csv at scanline@C#1{%
+ \global\c at csvcol 0\relax%
+ \csv at breakline#1|\csv at termination|%
+}
+
+% tab
+\catcode`\^^I=12
+\def\csv at breakline@D#1^^I{\csv at breakline@kernel{#1}}
+
+\def\csv at scanline@D#1{%
+ \global\c at csvcol 0\relax%
+ \csv at breakline#1^^I\csv at termination^^I%
+}
+\catcode`\^^I=10
+
+% expands a CSV line and scans content
+\def\csv at escanline#1{%
+ \toks@\expandafter{#1}%
+ \edef\@csv at scanline{\noexpand\csv at scanline{\the\toks@}}%
+ \@csv at scanline%
+}
+
+{
+ \catcode`\"=12%
+ \gdef\csv at passivquotes{"}
+}
+
+\newwrite\csv@out
+
+\def\csv@preprocessor@csvsorter#1#2#3{%
+  \begingroup%
+  \typeout{<sort \csv@passivquotes#2\csv@passivquotes\space by \csv@passivquotes#1\csv@passivquotes>}%
+  \immediate\openout\csv@out=\csv@csvsorter@token%
+  \immediate\write\csv@out{\string\makeatletter\string\csv@error{Call of CSV-Sorter failed! Use '-shell-escape' option or check log file '\csv@csvsorter@log'.}{}}%
+  \immediate\closeout\csv@out%
+  \ShellEscape{\csv@csvsorter@command\space
+    -c \csv@passivquotes#1\csv@passivquotes\space
+    -l \csv@passivquotes\csv@csvsorter@log\csv@passivquotes\space
+    -t \csv@passivquotes\csv@csvsorter@token\csv@passivquotes\space
+    -i \csv@passivquotes#2\csv@passivquotes\space
+    -o \csv@passivquotes#3\csv@passivquotes\space -q 1}%
+  \input{\csv@csvsorter@token}%
+  \endgroup%
+}
+
+
+\def\csv@preprocss@none{%
+  \let\csv@input@filename=\csv@filename%
+}
+
+\def\csv@preprocss@procedure{%
+  \csv@preprocessor{\csv@filename}{\csv@ppfilename}%
+  \let\csv@input@filename=\csv@ppfilename%
+}
+
+
+%---- the loop
+
+\def\csv at AtEndLoop{\gappto\@endloophook}
+\let\@endloophook\csv at empty
+
+\def\csv at current@col{\csname csvcol\romannumeral\c at csvcol\endcsname}
+
+% auto head names
+\def\set at csv@autohead{%
+ \toks0=\expandafter{\csname\csv at headnameprefix\csv at current@col\endcsname}%
+ \toks1=\expandafter{\csname csvcol\romannumeral\c at csvcol\endcsname}%
+ \begingroup\edef\csv at temp{\endgroup\noexpand\gdef\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\noexpand\gdef\the\toks0{}}}%
+ \csv at temp%
+}
+
+% head names and numbers
+\def\set at csv@head{%
+ \toks0={\gdef##1}%
+ \toks1=\expandafter{\csname csvcol\romannumeral\c at csvcol\endcsname}%
+ \begingroup\edef\csv at temp{\endgroup\noexpand\pgfkeysdef{/csv head/\csv at current@col}{\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\the\toks0{}}}}%
+ \csv at temp%
+ \begingroup\edef\csv at temp{\endgroup\noexpand\pgfkeysdef{/csv head/\thecsvcol}{\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\the\toks0{}}}}%
+ \csv at temp%
+}
+
+% head line
+\def\csv at processheadline{%
+ \csvreadnext%
+ \ifx\csv at par\csvline\relax%
+ \csv at error{File '\csv at input@filename' starts with an empty line!}{}%
+ \else\csv at escanline{\csvline}%
+ \fi%
+ \xdef\csv at columncount{\thecsvcol}%
+ \global\c at csvcol 0\relax%
+ \loop%
+ \global\advance\c at csvcol 1\relax%
+ \csv at opt@headtocolumnames%
+ \set at csv@head%
+ \ifnum\c at csvcol<\csv at columncount\repeat%
+ \toks@=\expandafter{\csv at columnnames}%
+ \edef\csv at processkeys{\noexpand\pgfkeys{/csv head/.cd,\the\toks@}}%
+ \csv at processkeys%
+ \csv at posthead%
+}
+
+% head numbers for no head
+\def\set at csv@nohead{%
+ \toks0={\gdef##1}%
+ \toks1=\expandafter{\csname csvcol\romannumeral\c at csvcol\endcsname}%
+ \begingroup\edef\csv at temp{\endgroup\noexpand\pgfkeysdef{/csv head/\thecsvcol}{\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\the\toks0{}}}}%
+ \csv at temp%
+}
+
+% no head line
+\def\csv at noheadline{%
+ \global\c at csvcol 0\relax%
+ \loop%
+ \global\advance\c at csvcol 1\relax%
+ \set at csv@nohead%
+ \ifnum\c at csvcol<\csv at columncount\repeat%
+ \toks@=\expandafter{\csv at columnnames}%
+ \edef\csv at processkeys{\noexpand\pgfkeys{/csv head/.cd,\the\toks@}}%
+ \csv at processkeys%
+}
+
+% check filter
+\def\csv at checkfilter{%
+ \csv at prefiltercommand%
+ \csv at iffilter{%
+ \stepcounter{csvrow}%
+ \let\csv at usage=\csv at do@linecommand%
+ }{}%
+}
+
+\def\csv at truefilter#1#2{#1}
+
+\def\csv at falsefilter#1#2{#2}
+
+\def\csvfilteraccept{\global\let\csv at iffilter=\csv at truefilter}
+
+\def\csvfilterreject{\global\let\csv at iffilter=\csv at falsefilter}
+
+% check columns
+\def\csv at checkcolumncount{%
+ \ifnum\c at csvcol=\csv at columncount\relax%
+ \csv at checkfilter%
+ \else%
+ \csv at columncounterror%
+ \fi%
+}
+
+\def\csv at nocheckcolumncount{%
+ \csv at checkfilter%
+}
+
+% normal line
+\def\csv at do@linecommand{%
+ \csv at do@latepostline%
+ \csv at do@preline%
+ \csv at body\relax%
+ \csv at do@postline%
+}
+
+\gdef\csvreadnext{%
+ \global\read\csv at file to\csvline%
+ \stepcounter{csvinputline}%
+}
+
+\def\csv at par{\par}
+
+% reads and processes a CSV file
+\long\def\csvloop#1{%
+ % reset
+ \global\let\@endloophook\csv at empty%
+ \global\let\csvlinetotablerow\csv at assemble@csvlinetotablerow%
+ % options
+ \csvset{default,every csv,#1}%
+ \csv at preprocss%
+ \csv at set@catcodes%
+ \csv at prereading%
+ \csv at table@begin%
+ \setcounter{csvinputline}{0}%
+ % start reading
+ \openin\csv at file=\csv at input@filename\relax%
+ \ifeof\csv at file%
+ \csv at error{File '\csv at input@filename' not existent, not readable, or empty!}{}%
+ \else%
+ % the head line
+ \csv at opt@processheadline%
+ \fi%
+ %
+ \setcounter{csvrow}{0}%
+ \gdef\csv at do@preline{%
+ \csv at prefirstline%
+ \global\let\csv at do@preline=\csv at preline%
+ }%
+ \gdef\csv at do@postline{%
+ \csv at postfirstline%
+ \global\let\csv at do@postline=\csv at postline%
+ }%
+ \gdef\csv at do@@latepostline{%
+ \csv at latepostfirstline%
+ \global\let\csv at do@latepostline=\csv at latepostline%
+ }%
+ \gdef\csv at do@latepostline{%
+ \csv at lateposthead%
+ \global\let\csv at do@latepostline=\csv at do@@latepostline%
+ }%
+ % command for the reading loop
+ \gdef\csv at iterate{%
+ \let\csv at usage=\csv at empty%
+ \csvreadnext%
+ \ifeof\csv at file%
+ \global\let\csv at next=\csv at empty%
+ \else%
+ \global\let\csv at next=\csv at iterate%
+ \ifx\csv at par\csvline\relax%
+ \else%
+ \csv at escanline{\csvline}%
+ % check and decide
+ \csv at opt@checkcolumncount%
+ \fi%
+ \fi%
+ % do or do not
+ \csv at usage%
+ \csv at next}%
+ \ifeof\csv at file%
+ \global\let\csv at next=\csv at empty%
+ \else%
+ \global\let\csv at next=\csv at iterate%
+ \fi%
+ \csv at next%
+ \closein\csv at file%
+ \@endloophook%
+ \csv at latepostlastline%
+ \csv at table@end%
+ \csv at postreading%
+ \csv at reset@catcodes%
+}
+
+% user command
+\long\def\csv@reader[#1]#2#3#4{%
+  \global\long\def\csv@@body{#4}%
+  \csvloop{#1,file={#2},column names={#3},command=\csv@@body}%
+}
+
+\def\csvreader{%
+  \@ifnextchar[{\csv@reader}{\csv@reader[]}}
+
+
+%---- keys
+
+\pgfkeys{/handlers/.gstore in/.code=\pgfkeysalso{\pgfkeyscurrentpath/.code=\gdef#1{##1}}}
+\pgfkeys{/csv/.is family}
+\pgfkeys{/csv head/.is family}
+
+\def\csvset{\pgfqkeys{/csv}}
+\def\csvheadset{\pgfqkeys{/csv head}}
+
+\csvset{%
+ file/.gstore in=\csv at filename,%
+ preprocessed file/.gstore in=\csv at ppfilename,%
+ preprocessor/.code={\gdef\csv at preprocessor{#1}\let\csv at preprocss=\csv at preprocss@procedure},%
+ no preprocessing/.code={\let\csv at preprocss=\csv at preprocss@none},
+ column names reset/.code={\gdef\csv at columnnames{}},%
+ column names/.code={%
+ \toks0=\expandafter{\csv at columnnames}%
+ \def\temp{#1}\toks1=\expandafter{\temp}%
+ \xdef\csv at columnnames{\the\toks0,\the\toks1}%
+ },
+ command/.gstore in=\csv at body,%
+ check column count/.is choice,%
+ check column count/.default=true,%
+ check column count/true/.code={\global\let\csv at opt@checkcolumncount=\csv at checkcolumncount},%
+ check column count/false/.code={\global\let\csv at opt@checkcolumncount=\csv at nocheckcolumncount},%
+ on column count error/.gstore in=\csv at columncounterror,
+ head/.is choice,%
+ head/.default=true,%
+ head/true/.code={\global\let\csv at opt@processheadline=\csv at processheadline%
+ \pgfkeysalso{check column count}},%
+ head/false/.code={\global\let\csv at opt@processheadline=\csv at noheadline%
+ \pgfkeysalso{check column count=false,late after head=}},%
+ head to column names prefix/.store in=\csv at headnameprefix,%
+ head to column names/.is choice,%
+ head to column names/.default=true,%
+ head to column names/true/.code={\global\let\csv at opt@headtocolumnames=\set at csv@autohead},%
+ head to column names/false/.code={\global\let\csv at opt@headtocolumnames=\csv at empty},%
+ column count/.gstore in=\csv at columncount,%
+ filter/.code={\gdef\csv at iffilter{\ifthenelse{#1}}},
+ filter ifthen/.code={\gdef\csv at iffilter{\ifthenelse{#1}}},
+ filter test/.code={\gdef\csv at iffilter{#1}},
+ filter expr/.code={\gdef\csv at iffilter{\ifboolexpr{#1}}},
+ no filter/.code={\csvfilteraccept},
+ filter reject all/.code={\csvfilterreject},
+ filter accept all/.code={\csvfilteraccept},
+ before filter/.gstore in=\csv at prefiltercommand,
+ full filter/.gstore in=\csv at prefiltercommand,
+ before first line/.gstore in=\csv at prefirstline,
+ before line/.code={\gdef\csv at preline{#1}\pgfkeysalso{before first line=#1}},
+ after first line/.gstore in=\csv at postfirstline,
+ after line/.code={\gdef\csv at postline{#1}\pgfkeysalso{after first line=#1}},
+ late after first line/.gstore in=\csv at latepostfirstline,
+ late after last line/.gstore in=\csv at latepostlastline,
+ late after line/.code={\gdef\csv at latepostline{#1}\pgfkeysalso{late after first line=#1,late after last line=#1}},
+ after head/.gstore in=\csv at posthead,
+ late after head/.gstore in=\csv at lateposthead,
+ before reading/.gstore in=\csv at prereading,
+ after reading/.gstore in=\csv at postreading,
+ before table/.gstore in=\csv at pretable,
+ after table/.gstore in=\csv at posttable,
+ table head/.gstore in=\csv at tablehead,
+ table foot/.gstore in=\csv at tablefoot,
+ @table/.code 2 args={\gdef\csv at table@begin{#1}\gdef\csv at table@end{#2}},
+ no table/.style={@table={}{}},
+ separator/.is choice,
+ separator/comma/.code={\global\let\csv at scanline=\csv at scanline@A%
+ \global\let\csv at breakline\csv at breakline@A},
+ separator/semicolon/.code={\global\let\csv at scanline=\csv at scanline@B%
+ \global\let\csv at breakline\csv at breakline@B},
+ separator/pipe/.code={\global\let\csv at scanline=\csv at scanline@C%
+ \global\let\csv at breakline\csv at breakline@C},
+ separator/tab/.code={\global\let\csv at scanline=\csv at scanline@D%
+ \global\let\csv at breakline\csv at breakline@D%
+ \csvset{respect tab}},
+ %
+ csvsorter command/.store in=\csv at csvsorter@command,
+ csvsorter configpath/.store in=\csv at csvsorter@configpath,
+ sort by/.style={preprocessor={\csv at preprocessor@csvsorter{\csv at csvsorter@configpath/#1}}},
+ new sorting rule/.style 2 args={sort by #1/.style={sort by={#2}}},
+ csvsorter log/.store in=\csv at csvsorter@log,
+ csvsorter token/.store in=\csv at csvsorter@token,
+ csvsorter command=csvsorter,
+ csvsorter configpath=.,
+ preprocessed file={\jobname_sorted._csv},
+ csvsorter log={csvsorter.log},
+ csvsorter token={\jobname.csvtoken},
+ %
+ % default for reset
+ default/.style={
+ file=unknown.csv,
+ no preprocessing,
+ command=\csvline,
+ column names reset,
+ head,
+ head to column names prefix=,
+ head to column names=false,
+ column count=10,
+ on column count error=,
+ no filter,
+ before filter=,
+ before line=,
+ after line=,
+ late after line=,
+ after head=,
+ late after head=,
+ before reading=,
+ after reading=,
+ before table=,
+ after table=,
+ table head=,
+ table foot=,
+ no table,
+ separator=comma,
+ },
+ default,
+ %
+ % styles
+ every csv/.style={},
+ no head/.style={head=false},
+ no check column count/.style={check column count=false},
+ warn on column count error/.style={on column count error={\csv at warning{>\thecsvcol< instead of >\csv at columncount< columns for input line >\thecsvinputline< of file >\csv at ppfilename<}}},
+ filter equal/.style 2 args={filter ifthen=\equal{#1}{#2}},
+ filter not equal/.style 2 args={filter ifthen=\not\equal{#1}{#2}},
+ filter strcmp/.style 2 args={filter test=\ifcsvstrcmp{#1}{#2}},
+ filter not strcmp/.style 2 args={filter test=\ifcsvnotstrcmp{#1}{#2}},
+ tabular/.style={
+ @table={\csv at pretable\begin{tabular}{#1}\csv at tablehead}{\csv at tablefoot\end{tabular}\csv at posttable},
+ late after line=\\},
+ centered tabular/.style={
+ @table={\begin{center}\csv at pretable\begin{tabular}{#1}\csv at tablehead}{\csv at tablefoot\end{tabular}\csv at posttable\end{center}},
+ late after line=\\},
+ longtable/.style={
+ @table={\csv at pretable\begin{longtable}{#1}\csv at tablehead}{\csv at tablefoot\end{longtable}\csv at posttable},
+ late after line=\\},
+ tabbing/.style={
+ @table={\csv at pretable\begin{tabbing}\csv at tablehead}{\csv at tablefoot\end{tabbing}\csv at posttable},
+ late after line=\\,
+ late after last line=},
+ centered tabbing/.style={
+ @table={\begin{center}\csv at pretable\begin{tabbing}\csv at tablehead}{\csv at tablefoot\end{tabbing}\csv at posttable\end{center}},
+ late after line=\\,
+ late after last line=},
+ autotabular/.style={
+ file=#1,
+ after head=\csv at pretable\begin{tabular}{|*{\csv at columncount}{l|}}\csv at tablehead,
+ table head=\hline\csvlinetotablerow\\\hline,
+ late after line=\\,
+ table foot=\\\hline,
+ late after last line=\csv at tablefoot\end{tabular}\csv at posttable,
+ command=\csvlinetotablerow},
+ autolongtable/.style={
+ file=#1,
+ after head=\csv at pretable\begin{longtable}{|*{\csv at columncount}{l|}}\csv at tablehead,
+ table head=\hline\csvlinetotablerow\\\hline\endhead\hline\endfoot,
+ late after line=\\,
+ late after last line=\csv at tablefoot\end{longtable}\csv at posttable,
+ command=\csvlinetotablerow},
+ autobooktabular/.style={
+ file=#1,
+ after head=\csv at pretable\begin{tabular}{*{\csv at columncount}{l}}\csv at tablehead,
+ table head=\toprule\csvlinetotablerow\\\midrule,
+ late after line=\\,
+ table foot=\\\bottomrule,
+ late after last line=\csv at tablefoot\end{tabular}\csv at posttable,
+ command=\csvlinetotablerow},
+ autobooklongtable/.style={
+ file=#1,
+ after head=\csv at pretable\begin{longtable}{*{\csv at columncount}{l}}\csv at tablehead,
+ table head=\toprule\csvlinetotablerow\\\midrule\endhead\bottomrule\endfoot,
+ late after line=\\,
+ late after last line=\csv at tablefoot\end{longtable}\csv at posttable,
+ command=\csvlinetotablerow},
+}
+
+% deprecated keys
+\csvset{
+ nofilter/.style=no filter,
+ nohead/.style=no head,
+}
+
+% catcodes
+\def\csv at set@catcodes{%
+ \csv at catcode@tab at set%
+ \csv at catcode@tilde at set%
+ \csv at catcode@circumflex at set%
+ \csv at catcode@underscore at set%
+ \csv at catcode@and at set%
+ \csv at catcode@sharp at set%
+ \csv at catcode@dollar at set%
+ \csv at catcode@backslash at set%
+ \csv at catcode@leftbrace at set%
+ \csv at catcode@rightbrace at set%
+ \csv at catcode@percent at set}
+
+\def\csv at reset@catcodes{\csv at catcode@percent at reset%
+ \csv at catcode@rightbrace at reset%
+ \csv at catcode@leftbrace at reset%
+ \csv at catcode@backslash at reset%
+ \csv at catcode@dollar at reset%
+ \csv at catcode@sharp at reset%
+ \csv at catcode@and at reset%
+ \csv at catcode@underscore at reset%
+ \csv at catcode@circumflex at reset%
+ \csv at catcode@tilde at reset%
+ \csv at catcode@tab at reset%
+}
+
+
+\csvset{
+ respect tab/.is choice,
+ respect tab/true/.code={%
+ \gdef\csv at catcode@tab at set{%
+ \xdef\csv at catcode@tab at value{\the\catcode`\^^I}%
+ \catcode`\^^I=12}%
+ \gdef\csv at catcode@tab at reset{\catcode`\^^I=\csv at catcode@tab at value}},
+ respect tab/false/.code={%
+ \global\let\csv at catcode@tab at set\csv at empty%
+ \global\let\csv at catcode@tab at reset\csv at empty},
+ respect tab/.default=true,
+ %
+ respect percent/.is choice,
+ respect percent/true/.code={%
+ \gdef\csv at catcode@percent at set{%
+ \xdef\csv at catcode@percent at value{\the\catcode`\%}%
+ \catcode`\%=12}%
+ \gdef\csv at catcode@percent at reset{\catcode`\%=\csv at catcode@percent at value}},
+ respect percent/false/.code={%
+ \global\let\csv at catcode@percent at set\csv at empty%
+ \global\let\csv at catcode@percent at reset\csv at empty},
+ respect percent/.default=true,
+ %
+ respect sharp/.is choice,
+ respect sharp/true/.code={%
+ \gdef\csv at catcode@sharp at set{%
+ \xdef\csv at catcode@sharp at value{\the\catcode`\#}%
+ \catcode`\#=12}%
+ \gdef\csv at catcode@sharp at reset{\catcode`\#=\csv at catcode@sharp at value}},
+ respect sharp/false/.code={%
+ \global\let\csv at catcode@sharp at set\csv at empty%
+ \global\let\csv at catcode@sharp at reset\csv at empty},
+ respect sharp/.default=true,
+ %
+ respect dollar/.is choice,
+ respect dollar/true/.code={%
+ \gdef\csv at catcode@dollar at set{%
+ \xdef\csv at catcode@dollar at value{\the\catcode`\$}%
+ \catcode`\$=12}%
+ \gdef\csv at catcode@dollar at reset{\catcode`\$=\csv at catcode@dollar at value}},
+ respect dollar/false/.code={%
+ \global\let\csv at catcode@dollar at set\csv at empty%
+ \global\let\csv at catcode@dollar at reset\csv at empty},
+ respect dollar/.default=true,
+ %
+ respect and/.is choice,
+ respect and/true/.code={%
+ \gdef\csv at catcode@and at set{%
+ \xdef\csv at catcode@and at value{\the\catcode`\&}%
+ \catcode`\&=12}%
+ \gdef\csv at catcode@and at reset{\catcode`\&=\csv at catcode@and at value}},
+ respect and/false/.code={%
+ \global\let\csv at catcode@and at set\csv at empty%
+ \global\let\csv at catcode@and at reset\csv at empty},
+ respect and/.default=true,
+ %
+ respect backslash/.is choice,
+ respect backslash/true/.code={%
+ \gdef\csv at catcode@backslash at set{%
+ \xdef\csv at catcode@backslash at value{\the\catcode`\\}%
+ \catcode`\\=12}%
+ \gdef\csv at catcode@backslash at reset{\catcode`\\=\csv at catcode@backslash at value}},
+ respect backslash/false/.code={%
+ \global\let\csv at catcode@backslash at set\csv at empty%
+ \global\let\csv at catcode@backslash at reset\csv at empty},
+ respect backslash/.default=true,
+ %
+ respect underscore/.is choice,
+ respect underscore/true/.code={%
+ \gdef\csv at catcode@underscore at set{%
+ \xdef\csv at catcode@underscore at value{\the\catcode`\_}%
+ \catcode`\_=12}%
+ \gdef\csv at catcode@underscore at reset{\catcode`\_=\csv at catcode@underscore at value}},
+ respect underscore/false/.code={%
+ \global\let\csv at catcode@underscore at set\csv at empty%
+ \global\let\csv at catcode@underscore at reset\csv at empty},
+ respect underscore/.default=true,
+ %
+ respect tilde/.is choice,
+ respect tilde/true/.code={%
+ \gdef\csv at catcode@tilde at set{%
+ \xdef\csv at catcode@tilde at value{\the\catcode`\~}%
+ \catcode`\~=12}%
+ \gdef\csv at catcode@tilde at reset{\catcode`\~=\csv at catcode@tilde at value}},
+ respect tilde/false/.code={%
+ \global\let\csv at catcode@tilde at set\csv at empty%
+ \global\let\csv at catcode@tilde at reset\csv at empty},
+ respect tilde/.default=true,
+ %
+ respect circumflex/.is choice,
+ respect circumflex/true/.code={%
+ \gdef\csv at catcode@circumflex at set{%
+ \xdef\csv at catcode@circumflex at value{\the\catcode`\^}%
+ \catcode`\^=12}%
+ \gdef\csv at catcode@circumflex at reset{\catcode`\^=\csv at catcode@circumflex at value}},
+ respect circumflex/false/.code={%
+ \global\let\csv at catcode@circumflex at set\csv at empty%
+ \global\let\csv at catcode@circumflex at reset\csv at empty},
+ respect circumflex/.default=true,
+ %
+ respect leftbrace/.is choice,
+ respect leftbrace/true/.code={%
+ \gdef\csv at catcode@leftbrace at set{%
+ \xdef\csv at catcode@leftbrace at value{\the\catcode`\{}%
+ \catcode`\{=12}%
+ \gdef\csv at catcode@leftbrace at reset{\catcode`\{=\csv at catcode@leftbrace at value}},
+ respect leftbrace/false/.code={%
+ \global\let\csv at catcode@leftbrace at set\csv at empty%
+ \global\let\csv at catcode@leftbrace at reset\csv at empty},
+ respect leftbrace/.default=true,
+ %
+ respect rightbrace/.is choice,
+ respect rightbrace/true/.code={%
+ \gdef\csv at catcode@rightbrace at set{%
+ \xdef\csv at catcode@rightbrace at value{\the\catcode`\}}%
+ \catcode`\}=12}%
+ \gdef\csv at catcode@rightbrace at reset{\catcode`\}=\csv at catcode@rightbrace at value}},
+ respect rightbrace/false/.code={%
+ \global\let\csv at catcode@rightbrace at set\csv at empty%
+ \global\let\csv at catcode@rightbrace at reset\csv at empty},
+ respect rightbrace/.default=true,
+ %
+ respect all/.style={respect tab,respect percent,respect sharp,respect dollar,
+ respect and,respect backslash,respect underscore,respect tilde,respect circumflex,
+ respect leftbrace,respect rightbrace},
+ respect none/.style={respect tab=false,respect percent=false,respect sharp=false,
+ respect dollar=false,respect and=false,respect backslash=false,
+ respect underscore=false,respect tilde=false,respect circumflex=false,
+ respect leftbrace=false,respect rightbrace=false},
+ respect none
+}
+
+
+\long\def\csv at autotabular[#1]#2{\csvloop{autotabular={#2},#1}}
+
+\def\csvautotabular{%
+ \@ifnextchar[{\csv at autotabular}{\csv at autotabular[]}}
+
+\long\def\csv at autolongtable[#1]#2{\csvloop{autolongtable={#2},#1}}
+
+\def\csvautolongtable{%
+ \@ifnextchar[{\csv at autolongtable}{\csv at autolongtable[]}}
+
+\long\def\csv at autobooktabular[#1]#2{\csvloop{autobooktabular={#2},#1}}
+
+\def\csvautobooktabular{%
+ \@ifnextchar[{\csv at autobooktabular}{\csv at autobooktabular[]}}
+
+
+\long\def\csv at autobooklongtable[#1]#2{\csvloop{autobooklongtable={#2},#1}}
+
+\def\csvautobooklongtable{%
+ \@ifnextchar[{\csv at autobooklongtable}{\csv at autobooklongtable[]}}
+
+
+\def\csvstyle#1#2{\csvset{#1/.style={#2}}}
+
+\def\csvnames#1#2{\csvset{#1/.style={column names={#2}}}}
+
+% string comparison
+
+\newrobustcmd{\ifcsvstrequal}[2]{%
+  \begingroup%
+  \edef\csv@tempa{#1}%
+  \edef\csv@tempb{#2}%
+  \ifx\csv@tempa\csv@tempb%
+    \aftergroup\@firstoftwo%
+  \else%
+    \aftergroup\@secondoftwo%
+  \fi%
+  \endgroup%
+}%
+
+\newrobustcmd{\ifcsvprostrequal}[2]{%
+  \begingroup%
+  \protected@edef\csv@tempa{#1}%
+  \protected@edef\csv@tempb{#2}%
+  \ifx\csv@tempa\csv@tempb%
+    \aftergroup\@firstoftwo%
+  \else%
+    \aftergroup\@secondoftwo%
+  \fi%
+  \endgroup%
+}%
+
+\AtBeginDocument{%
+  \ifdefined\pdfstrcmp%
+    \let\csv@strcmp\pdfstrcmp%
+  \else\ifdefined\pdf@strcmp%
+    \let\csv@strcmp\pdf@strcmp%
+  \fi\fi%
+  \ifdefined\csv@strcmp%
+    \newrobustcmd{\ifcsvstrcmp}[2]{%
+      \ifnum\csv@strcmp{#1}{#2}=\z@\relax%
+        \expandafter\@firstoftwo%
+      \else%
+        \expandafter\@secondoftwo%
+      \fi%
+    }%
+  \else%
+    \let\ifcsvstrcmp\ifcsvstrequal%
+  \fi%
+}
+
+\newrobustcmd{\ifcsvnotstrcmp}[4]{\ifcsvstrcmp{#1}{#2}{#4}{#3}}
Property changes on: trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple-legacy.sty
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Modified: trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple.sty
===================================================================
--- trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple.sty 2021-06-29 17:19:30 UTC (rev 59755)
+++ trunk/Master/texmf-dist/tex/latex/csvsimple/csvsimple.sty 2021-06-29 19:53:39 UTC (rev 59756)
@@ -1,4 +1,4 @@
-%% The LaTeX package csvsimple - version 1.22 (2021/06/07)
+%% The LaTeX package csvsimple - version 2.0.0 (2021/06/29)
%% csvsimple.sty: Simple LaTeX CSV file processing
%%
%% -------------------------------------------------------------------------------------------
@@ -15,766 +15,35 @@
%%
%% This work has the LPPL maintenance status `author-maintained'.
%%
-%% This work consists of all files listed in README
+%% This work consists of all files listed in README.md
%%
-\NeedsTeXFormat{LaTeX2e}
-\ProvidesPackage{csvsimple}[2021/06/07 version 1.22 LaTeX CSV file processing]
+\RequirePackage{l3keys2e}
-\RequirePackage{pgfrcs,pgfkeys,ifthen,etoolbox,shellesc}
+\ProvidesExplPackage{csvsimple}{2021/06/29}{2.0.0}
+ {LaTeX CSV file processing}
+\cs_if_exist:NT \c__csvsim_package_expl_bool
+ {
+ \msg_new:nnn { csvsimple }{ package-loaded }
+    { Package~'#1'~seems~already~to~be~loaded! }
+ \bool_if:NTF \c__csvsim_package_expl_bool
+ {
+        \msg_warning:nnn { csvsimple }{ package-loaded }{ csvsimple-l3 }
+ }
+ {
+        \msg_warning:nnn { csvsimple }{ package-loaded }{ csvsimple-legacy }
+ }
+ \tex_endinput:D
+ }
-%---- general
+\keys_define:nn { csvsimple }
+ {
+ l3 .code:n = \tl_set:Nn \l__csvsim_package_expl_tl { l3 },
+ legacy .code:n = \tl_set:Nn \l__csvsim_package_expl_tl { legacy },
+ }
-\def\csv at warning#1{\PackageWarning{csvsimple}{#1}}
-\def\csv at error#1#2{\PackageError{csvsimple}{#1}{#2}}
+\keys_set:nn { csvsimple } { legacy }
-\newread\csv at file
-\newcounter{csvinputline}
-\newcounter{csvrow}
-\newcounter{csvcol}
+\ProcessKeysPackageOptions { csvsimple }
-\def\csv at empty{}
-
-\long\def\csviffirstrow#1#2{%
- \ifnum\c at csvrow=1%
- \long\def\csviffirstrow at doit{#1}%
- \else%
- \long\def\csviffirstrow at doit{#2}%
- \fi%
- \csviffirstrow at doit%
-}
-
-\long\def\csvifoddrow#1#2{%
- \ifodd\c at csvrow%
- \long\def\csvifoddrow at doit{#1}%
- \else%
- \long\def\csvifoddrow at doit{#2}%
- \fi%
- \csvifoddrow at doit%
-}
-
-\def\csv at assemble@csvlinetotablerow{%
- \global\c at csvcol 1\relax%
- \xdef\csvlinetotablerow{\expandonce{\csname csvcol\romannumeral\c at csvcol\endcsname}}%
- \ifnum\c at csvcol<\csv at columncount\relax%
- \loop%
- \global\advance\c at csvcol 1\relax%
- \xappto\csvlinetotablerow{\noexpand&\expandonce{\csname csvcol\romannumeral\c at csvcol\endcsname}}%
- \ifnum\c at csvcol<\csv at columncount\relax\repeat%
- \fi%
- \csvlinetotablerow%
-}
-
-
-%---- breaking lines
-
-% This command removes leading and trailing spaces from <Token>. I found
-% the original code on the web. The original author was Michael Downes, who
-% provided the code as an answer to 'around the bend' question #15.
-\catcode`\Q=3
-\def\csv at TrimSpaces#1{%
- \begingroup%
- \aftergroup\toks\aftergroup0\aftergroup{%
- \expandafter\csv at trimb\expandafter\noexpand#1Q Q}%
- \global\edef#1{\the\toks0}%
-}
-\def\csv at trimb#1 Q{\csv at trimc#1Q}
-\def\csv at trimc#1Q#2{\afterassignment\endgroup \vfuzz\the\vfuzz#1}
-\catcode`\Q=11
-
-\def\csv at TrimBraces#1{\expandafter\csv at TrimBraces@#1\@nil{#1}}
-\def\csv at TrimBraces@#1\@nil#2{\def#2{#1}}
-
-\def\csv at breakline@kernel#1{%
- \ifx\csv at termination#1\let\nextcol=\relax\else%
- \let\nextcol=\csv at breakline%
- \global\advance\c at csvcol 1\relax%
- \def\csv at col@body{#1}%
- \csv at TrimSpaces\csv at col@body%
- \csv at TrimBraces\csv at col@body%
- \toks@\expandafter{\csv at col@body}%
- \expandafter\xdef\csname csvcol\romannumeral\c at csvcol\endcsname{\the\toks@}%
- \fi%
- \nextcol%
-}
-
-% comma
-\def\csv at breakline@A#1,{\csv at breakline@kernel{#1}}
-
-\def\csv at scanline@A#1{%
- \global\c at csvcol 0\relax%
- \csv at breakline#1,\csv at termination,%
-}
-
-% semi colon
-\def\csv at breakline@B#1;{\csv at breakline@kernel{#1}}
-
-\def\csv at scanline@B#1{%
- \global\c at csvcol 0\relax%
- \csv at breakline#1;\csv at termination;%
-}
-
-% pipe
-\def\csv at breakline@C#1|{\csv at breakline@kernel{#1}}
-
-\def\csv at scanline@C#1{%
- \global\c at csvcol 0\relax%
- \csv at breakline#1|\csv at termination|%
-}
-
-% tab
-\catcode`\^^I=12
-\def\csv at breakline@D#1^^I{\csv at breakline@kernel{#1}}
-
-\def\csv at scanline@D#1{%
- \global\c at csvcol 0\relax%
- \csv at breakline#1^^I\csv at termination^^I%
-}
-\catcode`\^^I=10
-
-% expands a CSV line and scans content
-\def\csv at escanline#1{%
- \toks@\expandafter{#1}%
- \edef\@csv at scanline{\noexpand\csv at scanline{\the\toks@}}%
- \@csv at scanline%
-}
-
-{
- \catcode`\"=12%
- \gdef\csv at passivquotes{"}
-}
-
-\newwrite\csv at out
-
-\def\csv at preprocessor@csvsorter#1#2#3{%
- \begingroup%
- \typeout{<sort \csv at passivquotes#2\csv at passivquotes\space by \csv at passivquotes#1\csv at passivquotes>}%
- \immediate\openout\csv at out=\csv at csvsorter@token%
- \immediate\write\csv at out{\string\makeatletter\string\csv at error{Call of CSV-Sorter failed! Use '-shell-escape' option or check log file '\csv at csvsorter@log'.}{}}%
- \immediate\closeout\csv at out%
- \ShellEscape{\csv at csvsorter@command\space
- -c \csv at passivquotes#1\csv at passivquotes\space
- -l \csv at passivquotes\csv at csvsorter@log\csv at passivquotes\space
- -t \csv at passivquotes\csv at csvsorter@token\csv at passivquotes\space
- -i \csv at passivquotes#2\csv at passivquotes\space
- -o \csv at passivquotes#3\csv at passivquotes\space -q 1}%
- \input{\csv at csvsorter@token}%
- \endgroup%
-}
-
-
-\def\csv at preprocss@none{%
- \let\csv at input@filename=\csv at filename%
-}
-
-\def\csv at preprocss@procedure{%
- \csv at preprocessor{\csv at filename}{\csv at ppfilename}%
- \let\csv at input@filename=\csv at ppfilename%
-}
-
-
-%---- the loop
-
-\def\csv at AtEndLoop{\gappto\@endloophook}
-\let\@endloophook\csv at empty
-
-\def\csv at current@col{\csname csvcol\romannumeral\c at csvcol\endcsname}
-
-% auto head names
-\def\set at csv@autohead{%
- \toks0=\expandafter{\csname\csv at headnameprefix\csv at current@col\endcsname}%
- \toks1=\expandafter{\csname csvcol\romannumeral\c at csvcol\endcsname}%
- \begingroup\edef\csv at temp{\endgroup\noexpand\gdef\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\noexpand\gdef\the\toks0{}}}%
- \csv at temp%
-}
-
-% head names and numbers
-\def\set at csv@head{%
- \toks0={\gdef##1}%
- \toks1=\expandafter{\csname csvcol\romannumeral\c at csvcol\endcsname}%
- \begingroup\edef\csv at temp{\endgroup\noexpand\pgfkeysdef{/csv head/\csv at current@col}{\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\the\toks0{}}}}%
- \csv at temp%
- \begingroup\edef\csv at temp{\endgroup\noexpand\pgfkeysdef{/csv head/\thecsvcol}{\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\the\toks0{}}}}%
- \csv at temp%
-}
-
-% head line
-\def\csv at processheadline{%
- \csvreadnext%
- \ifx\csv at par\csvline\relax%
- \csv at error{File '\csv at input@filename' starts with an empty line!}{}%
- \else\csv at escanline{\csvline}%
- \fi%
- \xdef\csv at columncount{\thecsvcol}%
- \global\c at csvcol 0\relax%
- \loop%
- \global\advance\c at csvcol 1\relax%
- \csv at opt@headtocolumnames%
- \set at csv@head%
- \ifnum\c at csvcol<\csv at columncount\repeat%
- \toks@=\expandafter{\csv at columnnames}%
- \edef\csv at processkeys{\noexpand\pgfkeys{/csv head/.cd,\the\toks@}}%
- \csv at processkeys%
- \csv at posthead%
-}
-
-% head numbers for no head
-\def\set at csv@nohead{%
- \toks0={\gdef##1}%
- \toks1=\expandafter{\csname csvcol\romannumeral\c at csvcol\endcsname}%
- \begingroup\edef\csv at temp{\endgroup\noexpand\pgfkeysdef{/csv head/\thecsvcol}{\the\toks0{\the\toks1}\noexpand\csv at AtEndLoop{\the\toks0{}}}}%
- \csv at temp%
-}
-
-% no head line
-\def\csv at noheadline{%
- \global\c at csvcol 0\relax%
- \loop%
- \global\advance\c at csvcol 1\relax%
- \set at csv@nohead%
- \ifnum\c at csvcol<\csv at columncount\repeat%
- \toks@=\expandafter{\csv at columnnames}%
- \edef\csv at processkeys{\noexpand\pgfkeys{/csv head/.cd,\the\toks@}}%
- \csv at processkeys%
-}
-
-% check filter
-\def\csv at checkfilter{%
- \csv at prefiltercommand%
- \csv at iffilter{%
- \stepcounter{csvrow}%
- \let\csv at usage=\csv at do@linecommand%
- }{}%
-}
-
-\def\csv at truefilter#1#2{#1}
-
-\def\csv at falsefilter#1#2{#2}
-
-\def\csvfilteraccept{\global\let\csv at iffilter=\csv at truefilter}
-
-\def\csvfilterreject{\global\let\csv at iffilter=\csv at falsefilter}
-
-% check columns
-\def\csv at checkcolumncount{%
- \ifnum\c at csvcol=\csv at columncount\relax%
- \csv at checkfilter%
- \else%
- \csv at columncounterror%
- \fi%
-}
-
-\def\csv at nocheckcolumncount{%
- \csv at checkfilter%
-}
-
-% normal line
-\def\csv@do@linecommand{%
- \csv@do@latepostline%
- \csv@do@preline%
- \csv@body\relax%
- \csv@do@postline%
-}
-
-\gdef\csvreadnext{%
- \global\read\csv@file to\csvline%
- \stepcounter{csvinputline}%
-}
-
-\def\csv@par{\par}
-
-% reads and processes a CSV file
-\long\def\csvloop#1{%
- % reset
- \global\let\@endloophook\csv@empty%
- \global\let\csvlinetotablerow\csv@assemble@csvlinetotablerow%
- % options
- \csvset{default,every csv,#1}%
- \csv@preprocss%
- \csv@set@catcodes%
- \csv@prereading%
- \csv@table@begin%
- \setcounter{csvinputline}{0}%
- % start reading
- \openin\csv@file=\csv@input@filename\relax%
- \ifeof\csv@file%
- \csv@error{File '\csv@input@filename' not existent, not readable, or empty!}{}%
- \else%
- % the head line
- \csv@opt@processheadline%
- \fi%
- %
- \setcounter{csvrow}{0}%
- \gdef\csv@do@preline{%
- \csv@prefirstline%
- \global\let\csv@do@preline=\csv@preline%
- }%
- \gdef\csv@do@postline{%
- \csv@postfirstline%
- \global\let\csv@do@postline=\csv@postline%
- }%
- \gdef\csv@do@@latepostline{%
- \csv@latepostfirstline%
- \global\let\csv@do@latepostline=\csv@latepostline%
- }%
- \gdef\csv@do@latepostline{%
- \csv@lateposthead%
- \global\let\csv@do@latepostline=\csv@do@@latepostline%
- }%
- % command for the reading loop
- \gdef\csv@iterate{%
- \let\csv@usage=\csv@empty%
- \csvreadnext%
- \ifeof\csv@file%
- \global\let\csv@next=\csv@empty%
- \else%
- \global\let\csv@next=\csv@iterate%
- \ifx\csv@par\csvline\relax%
- \else%
- \csv@escanline{\csvline}%
- % check and decide
- \csv@opt@checkcolumncount%
- \fi%
- \fi%
- % do or do not
- \csv@usage%
- \csv@next}%
- \ifeof\csv@file%
- \global\let\csv@next=\csv@empty%
- \else%
- \global\let\csv@next=\csv@iterate%
- \fi%
- \csv@next%
- \closein\csv@file%
- \@endloophook%
- \csv@latepostlastline%
- \csv@table@end%
- \csv@postreading%
- \csv@reset@catcodes%
-}
-
-% user command
-\long\def\csv@reader[#1]#2#3#4{%
- \global\long\def\csv@@body{#4}%
- \csvloop{#1,file={#2},column names={#3},command=\csv@@body}%
-}
-
-\def\csvreader{%
- \@ifnextchar[{\csv@reader}{\csv@reader[]}}
-
-
-%---- keys
-
-\pgfkeys{/handlers/.gstore in/.code=\pgfkeysalso{\pgfkeyscurrentpath/.code=\gdef#1{##1}}}
-\pgfkeys{/csv/.is family}
-\pgfkeys{/csv head/.is family}
-
-\def\csvset{\pgfqkeys{/csv}}
-\def\csvheadset{\pgfqkeys{/csv head}}
-
-\csvset{%
- file/.gstore in=\csv@filename,%
- preprocessed file/.gstore in=\csv@ppfilename,%
- preprocessor/.code={\gdef\csv@preprocessor{#1}\let\csv@preprocss=\csv@preprocss@procedure},%
- no preprocessing/.code={\let\csv@preprocss=\csv@preprocss@none},
- column names reset/.code={\gdef\csv@columnnames{}},%
- column names/.code={%
- \toks0=\expandafter{\csv@columnnames}%
- \def\temp{#1}\toks1=\expandafter{\temp}%
- \xdef\csv@columnnames{\the\toks0,\the\toks1}%
- },
- command/.gstore in=\csv at body,%
- check column count/.is choice,%
- check column count/.default=true,%
- check column count/true/.code={\global\let\csv@opt@checkcolumncount=\csv@checkcolumncount},%
- check column count/false/.code={\global\let\csv@opt@checkcolumncount=\csv@nocheckcolumncount},%
- on column count error/.gstore in=\csv@columncounterror,
- head/.is choice,%
- head/.default=true,%
- head/true/.code={\global\let\csv@opt@processheadline=\csv@processheadline%
- \pgfkeysalso{check column count}},%
- head/false/.code={\global\let\csv@opt@processheadline=\csv@noheadline%
- \pgfkeysalso{check column count=false,late after head=}},%
- head to column names prefix/.store in=\csv@headnameprefix,%
- head to column names/.is choice,%
- head to column names/.default=true,%
- head to column names/true/.code={\global\let\csv@opt@headtocolumnames=\set@csv@autohead},%
- head to column names/false/.code={\global\let\csv@opt@headtocolumnames=\csv@empty},%
- column count/.gstore in=\csv@columncount,%
- filter/.code={\gdef\csv@iffilter{\ifthenelse{#1}}},
- filter ifthen/.code={\gdef\csv@iffilter{\ifthenelse{#1}}},
- filter test/.code={\gdef\csv@iffilter{#1}},
- filter expr/.code={\gdef\csv@iffilter{\ifboolexpr{#1}}},
- no filter/.code={\csvfilteraccept},
- filter reject all/.code={\csvfilterreject},
- filter accept all/.code={\csvfilteraccept},
- before filter/.gstore in=\csv@prefiltercommand,
- full filter/.gstore in=\csv@prefiltercommand,
- before first line/.gstore in=\csv@prefirstline,
- before line/.code={\gdef\csv@preline{#1}\pgfkeysalso{before first line=#1}},
- after first line/.gstore in=\csv@postfirstline,
- after line/.code={\gdef\csv@postline{#1}\pgfkeysalso{after first line=#1}},
- late after first line/.gstore in=\csv@latepostfirstline,
- late after last line/.gstore in=\csv@latepostlastline,
- late after line/.code={\gdef\csv@latepostline{#1}\pgfkeysalso{late after first line=#1,late after last line=#1}},
- after head/.gstore in=\csv@posthead,
- late after head/.gstore in=\csv@lateposthead,
- before reading/.gstore in=\csv@prereading,
- after reading/.gstore in=\csv@postreading,
- before table/.gstore in=\csv@pretable,
- after table/.gstore in=\csv@posttable,
- table head/.gstore in=\csv@tablehead,
- table foot/.gstore in=\csv@tablefoot,
- @table/.code 2 args={\gdef\csv@table@begin{#1}\gdef\csv@table@end{#2}},
- no table/.style={@table={}{}},
- separator/.is choice,
- separator/comma/.code={\global\let\csv@scanline=\csv@scanline@A%
- \global\let\csv@breakline\csv@breakline@A},
- separator/semicolon/.code={\global\let\csv@scanline=\csv@scanline@B%
- \global\let\csv@breakline\csv@breakline@B},
- separator/pipe/.code={\global\let\csv@scanline=\csv@scanline@C%
- \global\let\csv@breakline\csv@breakline@C},
- separator/tab/.code={\global\let\csv@scanline=\csv@scanline@D%
- \global\let\csv@breakline\csv@breakline@D%
- \csvset{respect tab}},
- %
- csvsorter command/.store in=\csv@csvsorter@command,
- csvsorter configpath/.store in=\csv@csvsorter@configpath,
- sort by/.style={preprocessor={\csv@preprocessor@csvsorter{\csv@csvsorter@configpath/#1}}},
- new sorting rule/.style 2 args={sort by #1/.style={sort by={#2}}},
- csvsorter log/.store in=\csv@csvsorter@log,
- csvsorter token/.store in=\csv@csvsorter@token,
- csvsorter command=csvsorter,
- csvsorter configpath=.,
- preprocessed file={\jobname_sorted._csv},
- csvsorter log={csvsorter.log},
- csvsorter token={\jobname.csvtoken},
- %
- % default for reset
- default/.style={
- file=unknown.csv,
- no preprocessing,
- command=\csvline,
- column names reset,
- head,
- head to column names prefix=,
- head to column names=false,
- column count=10,
- on column count error=,
- no filter,
- before filter=,
- before line=,
- after line=,
- late after line=,
- after head=,
- late after head=,
- before reading=,
- after reading=,
- before table=,
- after table=,
- table head=,
- table foot=,
- no table,
- separator=comma,
- },
- default,
- %
- % styles
- every csv/.style={},
- no head/.style={head=false},
- no check column count/.style={check column count=false},
- warn on column count error/.style={on column count error={\csv@warning{>\thecsvcol< instead of >\csv@columncount< columns for input line >\thecsvinputline< of file >\csv@ppfilename<}}},
- filter equal/.style 2 args={filter ifthen=\equal{#1}{#2}},
- filter not equal/.style 2 args={filter ifthen=\not\equal{#1}{#2}},
- filter strcmp/.style 2 args={filter test=\ifcsvstrcmp{#1}{#2}},
- filter not strcmp/.style 2 args={filter test=\ifcsvnotstrcmp{#1}{#2}},
- tabular/.style={
- @table={\csv@pretable\begin{tabular}{#1}\csv@tablehead}{\csv@tablefoot\end{tabular}\csv@posttable},
- late after line=\\},
- centered tabular/.style={
- @table={\begin{center}\csv@pretable\begin{tabular}{#1}\csv@tablehead}{\csv@tablefoot\end{tabular}\csv@posttable\end{center}},
- late after line=\\},
- longtable/.style={
- @table={\csv@pretable\begin{longtable}{#1}\csv@tablehead}{\csv@tablefoot\end{longtable}\csv@posttable},
- late after line=\\},
- tabbing/.style={
- @table={\csv@pretable\begin{tabbing}\csv@tablehead}{\csv@tablefoot\end{tabbing}\csv@posttable},
- late after line=\\,
- late after last line=},
- centered tabbing/.style={
- @table={\begin{center}\csv@pretable\begin{tabbing}\csv@tablehead}{\csv@tablefoot\end{tabbing}\csv@posttable\end{center}},
- late after line=\\,
- late after last line=},
- autotabular/.style={
- file=#1,
- after head=\csv@pretable\begin{tabular}{|*{\csv@columncount}{l|}}\csv@tablehead,
- table head=\hline\csvlinetotablerow\\\hline,
- late after line=\\,
- table foot=\\\hline,
- late after last line=\csv@tablefoot\end{tabular}\csv@posttable,
- command=\csvlinetotablerow},
- autolongtable/.style={
- file=#1,
- after head=\csv@pretable\begin{longtable}{|*{\csv@columncount}{l|}}\csv@tablehead,
- table head=\hline\csvlinetotablerow\\\hline\endhead\hline\endfoot,
- late after line=\\,
- late after last line=\csv@tablefoot\end{longtable}\csv@posttable,
- command=\csvlinetotablerow},
- autobooktabular/.style={
- file=#1,
- after head=\csv@pretable\begin{tabular}{*{\csv@columncount}{l}}\csv@tablehead,
- table head=\toprule\csvlinetotablerow\\\midrule,
- late after line=\\,
- table foot=\\\bottomrule,
- late after last line=\csv@tablefoot\end{tabular}\csv@posttable,
- command=\csvlinetotablerow},
- autobooklongtable/.style={
- file=#1,
- after head=\csv@pretable\begin{longtable}{*{\csv@columncount}{l}}\csv@tablehead,
- table head=\toprule\csvlinetotablerow\\\midrule\endhead\bottomrule\endfoot,
- late after line=\\,
- late after last line=\csv@tablefoot\end{longtable}\csv@posttable,
- command=\csvlinetotablerow},
-}
-
-% deprecated keys
-\csvset{
- nofilter/.style=no filter,
- nohead/.style=no head,
-}
-
-% catcodes
-\def\csv@set@catcodes{%
- \csv@catcode@tab@set%
- \csv@catcode@tilde@set%
- \csv@catcode@circumflex@set%
- \csv@catcode@underscore@set%
- \csv@catcode@and@set%
- \csv@catcode@sharp@set%
- \csv@catcode@dollar@set%
- \csv@catcode@backslash@set%
- \csv@catcode@leftbrace@set%
- \csv@catcode@rightbrace@set%
- \csv@catcode@percent@set}
-
-\def\csv@reset@catcodes{\csv@catcode@percent@reset%
- \csv@catcode@rightbrace@reset%
- \csv@catcode@leftbrace@reset%
- \csv@catcode@backslash@reset%
- \csv@catcode@dollar@reset%
- \csv@catcode@sharp@reset%
- \csv@catcode@and@reset%
- \csv@catcode@underscore@reset%
- \csv@catcode@circumflex@reset%
- \csv@catcode@tilde@reset%
- \csv@catcode@tab@reset%
-}
-
-
-\csvset{
- respect tab/.is choice,
- respect tab/true/.code={%
- \gdef\csv@catcode@tab@set{%
- \xdef\csv@catcode@tab@value{\the\catcode`\^^I}%
- \catcode`\^^I=12}%
- \gdef\csv@catcode@tab@reset{\catcode`\^^I=\csv@catcode@tab@value}},
- respect tab/false/.code={%
- \global\let\csv@catcode@tab@set\csv@empty%
- \global\let\csv@catcode@tab@reset\csv@empty},
- respect tab/.default=true,
- %
- respect percent/.is choice,
- respect percent/true/.code={%
- \gdef\csv@catcode@percent@set{%
- \xdef\csv@catcode@percent@value{\the\catcode`\%}%
- \catcode`\%=12}%
- \gdef\csv@catcode@percent@reset{\catcode`\%=\csv@catcode@percent@value}},
- respect percent/false/.code={%
- \global\let\csv@catcode@percent@set\csv@empty%
- \global\let\csv@catcode@percent@reset\csv@empty},
- respect percent/.default=true,
- %
- respect sharp/.is choice,
- respect sharp/true/.code={%
- \gdef\csv@catcode@sharp@set{%
- \xdef\csv@catcode@sharp@value{\the\catcode`\#}%
- \catcode`\#=12}%
- \gdef\csv@catcode@sharp@reset{\catcode`\#=\csv@catcode@sharp@value}},
- respect sharp/false/.code={%
- \global\let\csv@catcode@sharp@set\csv@empty%
- \global\let\csv@catcode@sharp@reset\csv@empty},
- respect sharp/.default=true,
- %
- respect dollar/.is choice,
- respect dollar/true/.code={%
- \gdef\csv@catcode@dollar@set{%
- \xdef\csv@catcode@dollar@value{\the\catcode`\$}%
- \catcode`\$=12}%
- \gdef\csv@catcode@dollar@reset{\catcode`\$=\csv@catcode@dollar@value}},
- respect dollar/false/.code={%
- \global\let\csv@catcode@dollar@set\csv@empty%
- \global\let\csv@catcode@dollar@reset\csv@empty},
- respect dollar/.default=true,
- %
- respect and/.is choice,
- respect and/true/.code={%
- \gdef\csv@catcode@and@set{%
- \xdef\csv@catcode@and@value{\the\catcode`\&}%
- \catcode`\&=12}%
- \gdef\csv@catcode@and@reset{\catcode`\&=\csv@catcode@and@value}},
- respect and/false/.code={%
- \global\let\csv@catcode@and@set\csv@empty%
- \global\let\csv@catcode@and@reset\csv@empty},
- respect and/.default=true,
- %
- respect backslash/.is choice,
- respect backslash/true/.code={%
- \gdef\csv@catcode@backslash@set{%
- \xdef\csv@catcode@backslash@value{\the\catcode`\\}%
- \catcode`\\=12}%
- \gdef\csv@catcode@backslash@reset{\catcode`\\=\csv@catcode@backslash@value}},
- respect backslash/false/.code={%
- \global\let\csv@catcode@backslash@set\csv@empty%
- \global\let\csv@catcode@backslash@reset\csv@empty},
- respect backslash/.default=true,
- %
- respect underscore/.is choice,
- respect underscore/true/.code={%
- \gdef\csv@catcode@underscore@set{%
- \xdef\csv@catcode@underscore@value{\the\catcode`\_}%
- \catcode`\_=12}%
- \gdef\csv@catcode@underscore@reset{\catcode`\_=\csv@catcode@underscore@value}},
- respect underscore/false/.code={%
- \global\let\csv@catcode@underscore@set\csv@empty%
- \global\let\csv@catcode@underscore@reset\csv@empty},
- respect underscore/.default=true,
- %
- respect tilde/.is choice,
- respect tilde/true/.code={%
- \gdef\csv@catcode@tilde@set{%
- \xdef\csv@catcode@tilde@value{\the\catcode`\~}%
- \catcode`\~=12}%
- \gdef\csv@catcode@tilde@reset{\catcode`\~=\csv@catcode@tilde@value}},
- respect tilde/false/.code={%
- \global\let\csv@catcode@tilde@set\csv@empty%
- \global\let\csv@catcode@tilde@reset\csv@empty},
- respect tilde/.default=true,
- %
- respect circumflex/.is choice,
- respect circumflex/true/.code={%
- \gdef\csv@catcode@circumflex@set{%
- \xdef\csv@catcode@circumflex@value{\the\catcode`\^}%
- \catcode`\^=12}%
- \gdef\csv@catcode@circumflex@reset{\catcode`\^=\csv@catcode@circumflex@value}},
- respect circumflex/false/.code={%
- \global\let\csv@catcode@circumflex@set\csv@empty%
- \global\let\csv@catcode@circumflex@reset\csv@empty},
- respect circumflex/.default=true,
- %
- respect leftbrace/.is choice,
- respect leftbrace/true/.code={%
- \gdef\csv@catcode@leftbrace@set{%
- \xdef\csv@catcode@leftbrace@value{\the\catcode`\{}%
- \catcode`\{=12}%
- \gdef\csv@catcode@leftbrace@reset{\catcode`\{=\csv@catcode@leftbrace@value}},
- respect leftbrace/false/.code={%
- \global\let\csv@catcode@leftbrace@set\csv@empty%
- \global\let\csv@catcode@leftbrace@reset\csv@empty},
- respect leftbrace/.default=true,
- %
- respect rightbrace/.is choice,
- respect rightbrace/true/.code={%
- \gdef\csv@catcode@rightbrace@set{%
- \xdef\csv@catcode@rightbrace@value{\the\catcode`\}}%
- \catcode`\}=12}%
- \gdef\csv@catcode@rightbrace@reset{\catcode`\}=\csv@catcode@rightbrace@value}},
- respect rightbrace/false/.code={%
- \global\let\csv@catcode@rightbrace@set\csv@empty%
- \global\let\csv@catcode@rightbrace@reset\csv@empty},
- respect rightbrace/.default=true,
- %
- respect all/.style={respect tab,respect percent,respect sharp,respect dollar,
- respect and,respect backslash,respect underscore,respect tilde,respect circumflex,
- respect leftbrace,respect rightbrace},
- respect none/.style={respect tab=false,respect percent=false,respect sharp=false,
- respect dollar=false,respect and=false,respect backslash=false,
- respect underscore=false,respect tilde=false,respect circumflex=false,
- respect leftbrace=false,respect rightbrace=false},
- respect none
-}
-
-
-\long\def\csv@autotabular[#1]#2{\csvloop{autotabular={#2},#1}}
-
-\def\csvautotabular{%
- \@ifnextchar[{\csv@autotabular}{\csv@autotabular[]}}
-
-\long\def\csv@autolongtable[#1]#2{\csvloop{autolongtable={#2},#1}}
-
-\def\csvautolongtable{%
- \@ifnextchar[{\csv@autolongtable}{\csv@autolongtable[]}}
-
-\long\def\csv@autobooktabular[#1]#2{\csvloop{autobooktabular={#2},#1}}
-
-\def\csvautobooktabular{%
- \@ifnextchar[{\csv@autobooktabular}{\csv@autobooktabular[]}}
-
-
-\long\def\csv@autobooklongtable[#1]#2{\csvloop{autobooklongtable={#2},#1}}
-
-\def\csvautobooklongtable{%
- \@ifnextchar[{\csv@autobooklongtable}{\csv@autobooklongtable[]}}
-
-
-\def\csvstyle#1#2{\csvset{#1/.style={#2}}}
-
-\def\csvnames#1#2{\csvset{#1/.style={column names={#2}}}}
-
-% string comparison
-
-\newrobustcmd{\ifcsvstrequal}[2]{%
- \begingroup%
- \protected@edef\csv@tempa{#1}%
- \protected@edef\csv@tempb{#2}%
- \edef\csv@tempa{#1}%
- \edef\csv@tempb{#2}%
- \ifx\csv@tempa\csv@tempb%
- \aftergroup\@firstoftwo%
- \else%
- \aftergroup\@secondoftwo%
- \fi%
- \endgroup%
-}%
-
-\newrobustcmd{\ifcsvprostrequal}[2]{%
- \begingroup%
- \protected@edef\csv@tempa{#1}%
- \protected@edef\csv@tempb{#2}%
- \ifx\csv@tempa\csv@tempb%
- \aftergroup\@firstoftwo%
- \else%
- \aftergroup\@secondoftwo%
- \fi%
- \endgroup%
-}%
-
-\AtBeginDocument{%
- \ifdefined\pdfstrcmp%
- \let\csv@strcmp\pdfstrcmp%
- \else\ifdefined\pdf@strcmp%
- \let\csv@strcmp\pdf@strcmp%
- \fi\fi%
- \ifdefined\csv@strcmp%
- \newrobustcmd{\ifcsvstrcmp}[2]{%
- \ifnum\csv@strcmp{#1}{#2}=\z@\relax%
- \expandafter\@firstoftwo%
- \else%
- \expandafter\@secondoftwo%
- \fi%
- }%
- \else%
- \let\ifcsvstrcmp\ifcsvstrequal%
- \fi%
-}
-
-\newrobustcmd{\ifcsvnotstrcmp}[4]{\ifcsvstrcmp{#1}{#2}{#4}{#3}}
+\RequirePackage{csvsimple-\l__csvsim_package_expl_tl}
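The added \RequirePackage line above shows that csvsimple.sty now only dispatches to a backend package whose name suffix is held in the token list \l__csvsim_package_expl_tl. A minimal preamble sketch for selecting the backend explicitly; the option names `legacy' and `l3' are taken from the csvsimple v2.00 documentation, not from this diff, and should be verified against the shipped csvsimple.pdf:

  % sketch: choosing a csvsimple backend explicitly (option names assumed
  % from the v2.00 documentation, not shown in this diff)
  \usepackage[legacy]{csvsimple}   % keep the pgfkeys-based code removed above
  %\usepackage[l3]{csvsimple}      % or switch to the new LaTeX3 backend
  %\usepackage{csvsimple}          % no option: the wrapper picks its default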
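The keys removed above (file, column names, command, tabular, table head, late after line, filter equal, and so on) remain available as the user interface of the legacy backend. A minimal, self-contained sketch of that interface, assuming a recent LaTeX kernel (so filecontents* can inline the data) and invented file and column names:

  \begin{filecontents*}{grades.csv}
  name,matriculation,grade
  Ann,12345,1.7
  Ben,23456,2.3
  \end{filecontents*}

  \csvreader[tabular=|l|l|l|,
      table head=\hline Name & Matriculation & Grade\\\hline,
      late after line=\\,
      late after last line=\\\hline,
      filter equal={\grade}{1.7}]% keep only rows whose grade field is 1.7
    {grades.csv}%
    {name=\name, matriculation=\matnr, grade=\grade}%
    {\name & \matnr & \grade}

The same option set can be stored once with \csvstyle (also defined in the removed code) and reused across several \csvreader calls.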