?)\n\n- FAQ: Structured \"question & answer(s)\" constructs.\n\n- Compound document: Merge chapters into a book. Master manifest file?\n\nParsers\n\nParsers analyze their input and produce a Docutils document tree. They\ndon't know or care anything about the source or destination of the data.\n\nEach input parser is a module or package exporting a \"Parser\" class with\na \"parse\" method. The base \"Parser\" class can be found in the\ndocutils/parsers/__init__.py module.\n\nResponsibilities: Given raw input text and a doctree root node, populate\nthe doctree by parsing the input text.\n\nExample: The only parser implemented so far is for the reStructuredText\nmarkup. It is implemented in the docutils/parsers/rst/ package.\n\nThe development and integration of other parsers is possible and\nencouraged.\n\nTransformer\n\nThe Transformer class, in docutils/transforms/__init__.py, stores\ntransforms and applies them to documents. A transformer object is\nattached to every new document tree. The Publisher calls\nTransformer.apply_transforms() to apply all stored transforms to the\ndocument tree. Transforms change the document tree from one form to\nanother, add to the tree, or prune it. Transforms resolve references and\nfootnote numbers, process interpreted text, and do other\ncontext-sensitive processing.\n\nSome transforms are specific to components (Readers, Parser, Writers,\nInput, Output). Standard component-specific transforms are specified in\nthe default_transforms attribute of component classes. After the Reader\nhas finished processing, the Publisher calls\nTransformer.populate_from_components() with a list of components and all\ndefault transforms are stored.\n\nEach transform is a class in a module in the docutils/transforms/\npackage, a subclass of docutils.transforms.Transform. Transform classes\neach have a default_priority attribute which is used by the Transformer\nto apply transforms in order (low to high). The default priority can be\noverridden when adding transforms to the Transformer object.\n\nTransformer responsibilities:\n\n- Apply transforms to the document tree, in priority order.\n- Store a mapping of component type name ('reader', 'writer', etc.) to\n component objects. These are used by certain transforms (such as\n \"components.Filter\") to determine suitability.\n\nTransform responsibilities:\n\n- Modify a doctree in-place, either purely transforming one structure\n into another, or adding new structures based on the doctree and/or\n external data.\n\nExamples of transforms (in the docutils/transforms/ package):\n\n- frontmatter.DocInfo: Conversion of document metadata (bibliographic\n information).\n- references.AnonymousHyperlinks: Resolution of anonymous references\n to corresponding targets.\n- parts.Contents: Generates a table of contents for a document.\n- document.Merger: Combining multiple populated doctrees into one.\n (Not yet implemented or fully understood.)\n- document.Splitter: Splits a document into a tree-structure of\n subdocuments, perhaps by section. It will have to transform\n references appropriately. (Neither implemented not remotely\n understood.)\n- components.Filter: Includes or excludes elements which depend on a\n specific Docutils component.\n\nWriters\n\nWriters produce the final output (HTML, XML, TeX, etc.). 
Writers\ntranslate the internal document tree structure into the final data\nformat, possibly running Writer-specific transforms first.\n\nBy the time the document gets to the Writer, it should be in final form.\nThe Writer's job is simply (and only) to translate from the Docutils\ndoctree structure to the target format. Some small transforms may be\nrequired, but they should be local and format-specific.\n\nEach writer is a module or package exporting a \"Writer\" class with a\n\"write\" method. The base \"Writer\" class can be found in the\ndocutils/writers/__init__.py module.\n\nResponsibilities:\n\n- Translate doctree(s) into specific output formats.\n - Transform references into format-native forms.\n- Write the translated output to the destination I/O.\n\nExamples:\n\n- XML: Various forms, such as:\n - Docutils XML (an expression of the internal document tree,\n implemented as docutils.writers.docutils_xml).\n - DocBook (being implemented in the Docutils sandbox).\n- HTML (XHTML implemented as docutils.writers.html4css1).\n- PDF (a ReportLabs interface is being developed in the Docutils\n sandbox).\n- TeX (a LaTeX Writer is being implemented in the sandbox).\n- Docutils-native pseudo-XML (implemented as\n docutils.writers.pseudoxml, used for testing).\n- Plain text\n- reStructuredText?\n\nInput/Output\n\nI/O classes provide a uniform API for low-level input and output.\nSubclasses will exist for a variety of input/output mechanisms. However,\nthey can be considered an implementation detail. Most applications\nshould be satisfied using one of the convenience functions associated\nwith the Publisher.\n\nI/O classes are currently in the preliminary stages; there's a lot of\nwork yet to be done. Issues:\n\n- How to represent multi-file input (files & directories) in the API?\n- How to represent multi-file output? Perhaps \"Writer\" variants, one\n for each output distribution type? Or Output objects with associated\n transforms?\n\nResponsibilities:\n\n- Read data from the input source (Input objects) or write data to the\n output destination (Output objects).\n\nExamples of input sources:\n\n- A single file on disk or a stream (implemented as\n docutils.io.FileInput).\n- Multiple files on disk (MultiFileInput?).\n- Python source files: modules and packages.\n- Python strings, as received from a client application (implemented\n as docutils.io.StringInput).\n\nExamples of output destinations:\n\n- A single file on disk or a stream (implemented as\n docutils.io.FileOutput).\n- A tree of directories and files on disk.\n- A Python string, returned to a client application (implemented as\n docutils.io.StringOutput).\n- No output; useful for programmatic applications where only a portion\n of the normal output is to be used (implemented as\n docutils.io.NullOutput).\n- A single tree-shaped data structure in memory.\n- Some other set of data structures in memory.\n\nDocutils Package Structure\n\n- Package \"docutils\".\n - Module \"__init__.py\" contains: class \"Component\", a base class\n for Docutils components; class \"SettingsSpec\", a base class for\n specifying runtime settings (used by docutils.frontend); and\n class \"TransformSpec\", a base class for specifying transforms.\n\n - Module \"docutils.core\" contains facade class \"Publisher\" and\n convenience functions. 
See Publisher above.\n\n - Module \"docutils.frontend\" provides runtime settings support,\n for programmatic use and front-end tools (including\n configuration file support, and command-line argument and option\n processing).\n\n - Module \"docutils.io\" provides a uniform API for low-level input\n and output. See Input/Output above.\n\n - Module \"docutils.nodes\" contains the Docutils document tree\n element class library plus tree-traversal Visitor pattern base\n classes. See Document Tree below.\n\n - Module \"docutils.statemachine\" contains a finite state machine\n specialized for regular-expression-based text filters and\n parsers. The reStructuredText parser implementation is based on\n this module.\n\n - Module \"docutils.urischemes\" contains a mapping of known URI\n schemes (\"http\", \"ftp\", \"mail\", etc.).\n\n - Module \"docutils.utils\" contains utility functions and classes,\n including a logger class (\"Reporter\"; see Error Handling below).\n\n - Package \"docutils.parsers\": markup parsers.\n\n - Function \"get_parser_class(parser_name)\" returns a parser\n module by name. Class \"Parser\" is the base class of specific\n parsers. (docutils/parsers/__init__.py)\n - Package \"docutils.parsers.rst\": the reStructuredText parser.\n - Alternate markup parsers may be added.\n\n See Parsers above.\n\n - Package \"docutils.readers\": context-aware input readers.\n\n - Function \"get_reader_class(reader_name)\" returns a reader\n module by name or alias. Class \"Reader\" is the base class of\n specific readers. (docutils/readers/__init__.py)\n - Module \"docutils.readers.standalone\" reads independent\n document files.\n - Module \"docutils.readers.pep\" reads PEPs (Python Enhancement\n Proposals).\n - Readers to be added for: Python source code (structure &\n docstrings), email, FAQ, and perhaps Wiki and others.\n\n See Readers above.\n\n - Package \"docutils.writers\": output format writers.\n\n - Function \"get_writer_class(writer_name)\" returns a writer\n module by name. Class \"Writer\" is the base class of specific\n writers. (docutils/writers/__init__.py)\n - Module \"docutils.writers.html4css1\" is a simple HyperText\n Markup Language document tree writer for HTML 4.01 and CSS1.\n - Module \"docutils.writers.docutils_xml\" writes the internal\n document tree in XML form.\n - Module \"docutils.writers.pseudoxml\" is a simple internal\n document tree writer; it writes indented pseudo-XML.\n - Writers to be added: HTML 3.2 or 4.01-loose, XML (various\n forms, such as DocBook), PDF, TeX, plaintext,\n reStructuredText, and perhaps others.\n\n See Writers above.\n\n - Package \"docutils.transforms\": tree transform classes.\n\n - Class \"Transformer\" stores transforms and applies them to\n document trees. (docutils/transforms/__init__.py)\n - Class \"Transform\" is the base class of specific transforms.\n (docutils/transforms/__init__.py)\n - Each module contains related transform classes.\n\n See Transforms above.\n\n - Package \"docutils.languages\": Language modules contain\n language-dependent strings and mappings. They are named for\n their language identifier (as defined in Choice of Docstring\n Format below), converting dashes to underscores.\n\n - Function \"get_language(language_code)\", returns matching\n language module. (docutils/languages/__init__.py)\n - Modules: en.py (English), de.py (German), fr.py (French),\n it.py (Italian), sk.py (Slovak), sv.py (Swedish).\n - Other languages to be added.\n- Third-party modules: \"extras\" directory. 
These modules are installed\n only if they're not already present in the Python installation.\n - extras/optparse.py and extras/textwrap.py provide option parsing\n and command-line help; from Greg Ward's http://optik.sf.net/\n project, included for convenience.\n - extras/roman.py contains Roman numeral conversion routines.\n\nFront-End Tools\n\nThe tools/ directory contains several front ends for common Docutils\nprocessing. See Docutils Front-End Tools for details.\n\nDocument Tree\n\nA single intermediate data structure is used internally by Docutils, in\nthe interfaces between components; it is defined in the docutils.nodes\nmodule. It is not required that this data structure be used internally\nby any of the components, just between components as outlined in the\ndiagram in the Docutils Project Model above.\n\nCustom node types are allowed, provided that either (a) a transform\nconverts them to standard Docutils nodes before they reach the Writer\nproper, or (b) the custom node is explicitly supported by certain\nWriters, and is wrapped in a filtered \"pending\" node. An example of\ncondition (a) is the Python Source Reader (see below), where a \"stylist\"\ntransform converts custom nodes. The HTML tag is an example of\ncondition (b); it is supported by the HTML Writer but not by others. The\nreStructuredText \"meta\" directive creates a \"pending\" node, which\ncontains knowledge that the embedded \"meta\" node can only be handled by\nHTML-compatible writers. The \"pending\" node is resolved by the\ndocutils.transforms.components.Filter transform, which checks that the\ncalling writer supports HTML; if it doesn't, the \"pending\" node (and\nenclosed \"meta\" node) is removed from the document.\n\nThe document tree data structure is similar to a DOM tree, but with\nspecific node names (classes) instead of DOM's generic nodes. The schema\nis documented in an XML DTD (eXtensible Markup Language Document Type\nDefinition), which comes in two parts:\n\n- the Docutils Generic DTD, docutils.dtd, and\n- the OASIS Exchange Table Model, soextbl.dtd.\n\nThe DTD defines a rich set of elements, suitable for many input and\noutput formats. The DTD retains all information necessary to reconstruct\nthe original input text, or a reasonable facsimile thereof.\n\nSee The Docutils Document Tree for details (incomplete).\n\nError Handling\n\nWhen the parser encounters an error in markup, it inserts a system\nmessage (DTD element \"system_message\"). There are five levels of system\nmessages:\n\n- Level-0, \"DEBUG\": an internal reporting issue. There is no effect on\n the processing. Level-0 system messages are handled separately from\n the others.\n- Level-1, \"INFO\": a minor issue that can be ignored. There is little\n or no effect on the processing. Typically level-1 system messages\n are not reported.\n- Level-2, \"WARNING\": an issue that should be addressed. If ignored,\n there may be minor problems with the output. Typically level-2\n system messages are reported but do not halt processing\n- Level-3, \"ERROR\": a major issue that should be addressed. If\n ignored, the output will contain unpredictable errors. Typically\n level-3 system messages are reported but do not halt processing\n- Level-4, \"SEVERE\": a critical error that must be addressed.\n Typically level-4 system messages are turned into exceptions which\n halt processing. 
If ignored, the output will contain severe errors.\n\nAlthough the initial message levels were devised independently, they\nhave a strong correspondence to VMS error condition severity levels; the\nnames in quotes for levels 1 through 4 were borrowed from VMS. Error\nhandling has since been influenced by the log4j project.\n\nPython Source Reader\n\nThe Python Source Reader (\"PySource\") is the Docutils component that\nreads Python source files, extracts docstrings in context, then parses,\nlinks, and assembles the docstrings into a cohesive whole. It is a major\nand non-trivial component, currently under experimental development in\nthe Docutils sandbox. High-level design issues are presented here.\n\nProcessing Model\n\nThis model will evolve over time, incorporating experience and\ndiscoveries.\n\n1. The PySource Reader uses an Input class to read in Python packages\n and modules, into a tree of strings.\n2. The Python modules are parsed, converting the tree of strings into a\n tree of abstract syntax trees with docstring nodes.\n3. The abstract syntax trees are converted into an internal\n representation of the packages/modules. Docstrings are extracted, as\n well as code structure details. See AST Mining below. Namespaces are\n constructed for lookup in step 6.\n4. One at a time, the docstrings are parsed, producing standard\n Docutils doctrees.\n5. PySource assembles all the individual docstrings' doctrees into a\n Python-specific custom Docutils tree paralleling the\n package/module/class structure; this is a custom Reader-specific\n internal representation (see the Docutils Python Source DTD).\n Namespaces must be merged: Python identifiers, hyperlink targets.\n6. Cross-references from docstrings (interpreted text) to Python\n identifiers are resolved according to the Python namespace lookup\n rules. See Identifier Cross-References below.\n7. A \"Stylist\" transform is applied to the custom doctree (by the\n Transformer), custom nodes are rendered using standard nodes as\n primitives, and a standard document tree is emitted. See Stylist\n Transforms below.\n8. Other transforms are applied to the standard doctree by the\n Transformer.\n9. The standard doctree is sent to a Writer, which translates the\n document into a concrete format (HTML, PDF, etc.).\n10. The Writer uses an Output class to write the resulting data to its\n destination (disk file, directories and files, etc.).\n\nAST Mining\n\nAbstract Syntax Tree mining code will be written (or adapted) that scans\na parsed Python module, and returns an ordered tree containing the\nnames, docstrings (including attribute and additional docstrings; see\nbelow), and additional info (in parentheses below) of all of the\nfollowing objects:\n\n- packages\n- modules\n- module attributes (+ initial values)\n- classes (+ inheritance)\n- class attributes (+ initial values)\n- instance attributes (+ initial values)\n- methods (+ parameters & defaults)\n- functions (+ parameters & defaults)\n\n(Extract comments too? For example, comments at the start of a module\nwould be a good place for bibliographic field lists.)\n\nIn order to evaluate interpreted text cross-references, namespaces for\neach of the above will also be required.\n\nSee the python-dev/docstring-develop thread \"AST mining\", started on\n2001-08-14.\n\nDocstring Extraction Rules\n\n1. 
What to examine:\n\n a) If the \"__all__\" variable is present in the module being\n documented, only identifiers listed in \"__all__\" are examined\n for docstrings.\n b) In the absence of \"__all__\", all identifiers are examined,\n except those whose names are private (names begin with \"_\" but\n don't begin and end with \"__\").\n c) 1a and 1b can be overridden by runtime settings.\n\n2. Where:\n\n Docstrings are string literal expressions, and are recognized in the\n following places within Python modules:\n\n a) At the beginning of a module, function definition, class\n definition, or method definition, after any comments. This is\n the standard for Python __doc__ attributes.\n b) Immediately following a simple assignment at the top level of a\n module, class definition, or __init__ method definition, after\n any comments. See Attribute Docstrings below.\n c) Additional string literals found immediately after the\n docstrings in (a) and (b) will be recognized, extracted, and\n concatenated. See Additional Docstrings below.\n d) @@@ 2.2-style \"properties\" with attribute docstrings? Wait for\n syntax?\n\n3. How:\n\n Whenever possible, Python modules should be parsed by Docutils, not\n imported. There are several reasons:\n\n - Importing untrusted code is inherently insecure.\n - Information from the source is lost when using introspection to\n examine an imported module, such as comments and the order of\n definitions.\n - Docstrings are to be recognized in places where the byte-code\n compiler ignores string literal expressions (2b and 2c above),\n meaning importing the module will lose these docstrings.\n\n Of course, standard Python parsing tools such as the \"parser\"\n library module should be used.\n\n When the Python source code for a module is not available (i.e. only\n the .pyc file exists) or for C extension modules, to access\n docstrings the module can only be imported, and any limitations must\n be lived with.\n\nSince attribute docstrings and additional docstrings are ignored by the\nPython byte-code compiler, no namespace pollution or runtime bloat will\nresult from their use. They are not assigned to __doc__ or to any other\nattribute. The initial parsing of a module may take a slight performance\nhit.\n\nAttribute Docstrings\n\n(This is a simplified version of PEP 224.)\n\nA string literal immediately following an assignment statement is\ninterpreted by the docstring extraction machinery as the docstring of\nthe target of the assignment statement, under the following conditions:\n\n1. The assignment must be in one of the following contexts:\n\n a) At the top level of a module (i.e., not nested inside a compound\n statement such as a loop or conditional): a module attribute.\n b) At the top level of a class definition: a class attribute.\n c) At the top level of the \"__init__\" method definition of a class:\n an instance attribute. Instance attributes assigned in other\n methods are assumed to be implementation details. (@@@ __new__\n methods?)\n d) A function attribute assignment at the top level of a module or\n class definition.\n\n Since each of the above contexts are at the top level (i.e., in the\n outermost suite of a definition), it may be necessary to place dummy\n assignments for attributes assigned conditionally or in a loop.\n\n2. The assignment must be to a single target, not to a list or a tuple\n of targets.\n\n3. 
The form of the target:\n\n a) For contexts 1a and 1b above, the target must be a simple\n identifier (not a dotted identifier, a subscripted expression,\n or a sliced expression).\n b) For context 1c above, the target must be of the form\n \"self.attrib\", where \"self\" matches the \"__init__\" method's\n first parameter (the instance parameter) and \"attrib\" is a\n simple identifier as in 3a.\n c) For context 1d above, the target must be of the form\n \"name.attrib\", where \"name\" matches an already-defined function\n or method name and \"attrib\" is a simple identifier as in 3a.\n\nBlank lines may be used after attribute docstrings to emphasize the\nconnection between the assignment and the docstring.\n\nExamples:\n\n g = 'module attribute (module-global variable)'\n \"\"\"This is g's docstring.\"\"\"\n\n class AClass:\n\n c = 'class attribute'\n \"\"\"This is AClass.c's docstring.\"\"\"\n\n def __init__(self):\n \"\"\"Method __init__'s docstring.\"\"\"\n\n self.i = 'instance attribute'\n \"\"\"This is self.i's docstring.\"\"\"\n\n def f(x):\n \"\"\"Function f's docstring.\"\"\"\n return x**2\n\n f.a = 1\n \"\"\"Function attribute f.a's docstring.\"\"\"\n\nAdditional Docstrings\n\n(This idea was adapted from PEP 216.)\n\nMany programmers would like to make extensive use of docstrings for API\ndocumentation. However, docstrings do take up space in the running\nprogram, so some programmers are reluctant to \"bloat up\" their code.\nAlso, not all API documentation is applicable to interactive\nenvironments, where __doc__ would be displayed.\n\nDocutils' docstring extraction tools will concatenate all string literal\nexpressions which appear at the beginning of a definition or after a\nsimple assignment. Only the first strings in definitions will be\navailable as __doc__, and can be used for brief usage text suitable for\ninteractive sessions; subsequent string literals and all attribute\ndocstrings are ignored by the Python byte-code compiler and may contain\nmore extensive API information.\n\nExample:\n\n def function(arg):\n \"\"\"This is __doc__, function's docstring.\"\"\"\n \"\"\"\n This is an additional docstring, ignored by the byte-code\n compiler, but extracted by Docutils.\n \"\"\"\n pass\n\nIssue: from __future__ import\n\nThis would break \"from __future__ import\" statements introduced in\nPython 2.1 for multiple module docstrings (main docstring plus\nadditional docstring(s)). The Python Reference Manual specifies:\n\n A future statement must appear near the top of the module. The only\n lines that can appear before a future statement are:\n\n - the module docstring (if any),\n - comments,\n - blank lines, and\n - other future statements.\n\nResolution?\n\n1. Should we search for docstrings after a __future__ statement? Very\n ugly.\n2. Redefine __future__ statements to allow multiple preceding string\n literals?\n3. Or should we not even worry about this? There probably shouldn't be\n __future__ statements in production code, after all. Perhaps modules\n with __future__ statements will simply have to put up with the\n single-docstring limitation.\n\nChoice of Docstring Format\n\nRather than force everyone to use a single docstring format, multiple\ninput formats are allowed by the processing system. A special variable,\n__docformat__, may appear at the top level of a module before any\nfunction or class definitions. 
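As a small, hypothetical illustration (the module contents below are invented, and "restructuredtext" is only one possible format name or alias), such a declaration might look like this:

    """Frob widgets.  This docstring is written in the format named below."""

    __docformat__ = 'restructuredtext'   # example value; any registered parser name or alias

    def frob(widget):
        """Frob `widget` and return it."""
        return widget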
Over time or through decree, a standard\nformat or set of formats should emerge.\n\nA module's __docformat__ variable only applies to the objects defined in\nthe module's file. In particular, the __docformat__ variable in a\npackage's __init__.py file does not apply to objects defined in\nsubpackages and submodules.\n\nThe __docformat__ variable is a string containing the name of the format\nbeing used, a case-insensitive string matching the input parser's module\nor package name (i.e., the same name as required to \"import\" the module\nor package), or a registered alias. If no __docformat__ is specified,\nthe default format is \"plaintext\" for now; this may be changed to the\nstandard format if one is ever established.\n\nThe __docformat__ string may contain an optional second field, separated\nfrom the format name (first field) by a single space: a case-insensitive\nlanguage identifier as defined in 1766. A typical language identifier\nconsists of a 2-letter language code from ISO 639 (3-letter codes used\nonly if no 2-letter code exists; 1766 is currently being revised to\nallow 3-letter codes). If no language identifier is specified, the\ndefault is \"en\" for English. The language identifier is passed to the\nparser and can be used for language-dependent markup features.\n\nIdentifier Cross-References\n\nIn Python docstrings, interpreted text is used to classify and mark up\nprogram identifiers, such as the names of variables, functions, classes,\nand modules. If the identifier alone is given, its role is inferred\nimplicitly according to the Python namespace lookup rules. For functions\nand methods (even when dynamically assigned), parentheses ('()') may be\nincluded:\n\n This function uses `another()` to do its work.\n\nFor class, instance and module attributes, dotted identifiers are used\nwhen necessary. For example (using reStructuredText markup):\n\n class Keeper(Storer):\n\n \"\"\"\n Extend `Storer`. Class attribute `instances` keeps track\n of the number of `Keeper` objects instantiated.\n \"\"\"\n\n instances = 0\n \"\"\"How many `Keeper` objects are there?\"\"\"\n\n def __init__(self):\n \"\"\"\n Extend `Storer.__init__()` to keep track of instances.\n\n Keep count in `Keeper.instances`, data in `self.data`.\n \"\"\"\n Storer.__init__(self)\n Keeper.instances += 1\n\n self.data = []\n \"\"\"Store data in a list, most recent last.\"\"\"\n\n def store_data(self, data):\n \"\"\"\n Extend `Storer.store_data()`; append new `data` to a\n list (in `self.data`).\n \"\"\"\n self.data = data\n\nEach of the identifiers quoted with backquotes (\"`\") will become\nreferences to the definitions of the identifiers themselves.\n\nStylist Transforms\n\nStylist transforms are specialized transforms specific to the PySource\nReader. The PySource Reader doesn't have to make any decisions as to\nstyle; it just produces a logically constructed document tree, parsed\nand linked, including custom node types. Stylist transforms understand\nthe custom nodes created by the Reader and convert them into standard\nDocutils nodes.\n\nMultiple Stylist transforms may be implemented and one can be chosen at\nruntime (through a \"--style\" or \"--stylist\" command-line option). Each\nStylist transform implements a different layout or style; thus the name.\nThey decouple the context-understanding part of the Reader from the\nlayout-generating part of processing, resulting in a more flexible and\nrobust system. 
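A stylist transform would use the ordinary Transform interface described earlier (a default_priority attribute plus an apply() method). The sketch below is illustrative only: the custom node class, its attributes, and the priority value are invented here and are not part of Docutils:

    from docutils import nodes
    from docutils.transforms import Transform

    class package_section(nodes.Element):
        """Hypothetical custom node emitted by the PySource Reader."""

    class PlainStylist(Transform):
        """Render each custom node using standard Docutils nodes:
        here, a section containing a title plus the node's children."""

        default_priority = 800  # arbitrary example value; lower runs earlier

        def apply(self):
            for node in self.document.traverse(package_section):
                section = nodes.section()
                section += nodes.title(text=node.get('name', ''))
                section.extend(node.children)
                node.replace_self(section)

Swapping in a different stylist class would change only the generated layout, not the analysis already performed by the Reader.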
This also serves to "separate style from content", the
SGML/XML ideal.

By keeping the piece of code that does the styling small and modular, it
becomes much easier for people to roll their own styles. The "barrier to
entry" is too high with existing tools; extracting the stylist code will
lower the barrier considerably.

References and Footnotes

Project Web Site

A SourceForge project has been set up for this work at
http://docutils.sourceforge.net/.

Copyright

This document has been placed in the public domain.

Acknowledgements

This document borrows ideas from the archives of the Python Doc-SIG.
Thanks to all members past & present.

PEP: 2 Title: Procedure for Adding New Modules Version: $Revision$
Last-Modified: $Date$ Author: Brett Cannon, Martijn Faassen
Status: Active Type: Process Content-Type: text/x-rst
Created: 07-Jul-2001 Post-History: 07-Jul-2001, 09-Mar-2002

Introduction

The Python Standard Library contributes significantly to Python's
success. The language comes with "batteries included", so it is easy for
people to become productive with just the standard library alone. It is
therefore important that the usefulness of the standard library be
maintained.

Due to the visibility and importance of the standard library, it must be
maintained thoughtfully. As such, any code within it must be maintained
by Python's development team which leads to a perpetual cost to each
addition made. There is also added cognitive load for users in
familiarizing themselves with what is in the standard library to be
considered.

New functionality is commonly added to the library in the form of new
modules. This PEP will describe the procedure for the addition of new
modules. PEP 4 deals with procedures for deprecation of modules; the
removal of old and unused modules from the standard library.

Acceptance Procedure

For top-level modules/packages, a PEP is required. The procedure for
writing a PEP is covered in PEP 1.

For submodules of a preexisting package in the standard library,
additions are at the discretion of the general Python development team
and its members.

General guidance on what modules typically are accepted into the
standard library, the overall process, etc. are covered in the
developer's guide.

Maintenance Procedure

Anything accepted into the standard library is expected to be primarily
maintained there, within Python's development infrastructure.
While some members of the development team may choose to maintain a
backport of a module outside of the standard library, it is up to them
to keep their external code in sync as appropriate.

Copyright

This document has been placed in the public domain.

PEP: 222 Title: Web Library Enhancements Version: $Revision$
Last-Modified: $Date$ Author: A.M. Kuchling Status: Deferred
Type: Standards Track Content-Type: text/x-rst
Created: 18-Aug-2000 Python-Version: 2.1 Post-History: 22-Dec-2000

Abstract

This PEP proposes a set of enhancements to the CGI development
facilities in the Python standard library. Enhancements might be new
features, new modules for tasks such as cookie support, or removal of
obsolete code.

The original intent was to make improvements to Python 2.1. However,
there seemed little interest from the Python community, and time was
lacking, so this PEP has been deferred to some future Python release.

Open Issues

This section lists changes that have been suggested, but about which no
firm decision has yet been made. In the final version of this PEP, this
section should be empty, as all the changes should be classified as
accepted or rejected.

cgi.py: We should not be told to create our own subclass just so we can
handle file uploads. As a practical matter, I have yet to find the time
to do this right, so I end up reading cgi.py's temp file into, at best,
another file. Some of our legacy code actually reads it into a second
temp file, then into a final destination! And even if we did, that would
mean creating yet another object with its __init__ call and associated
overhead.

cgi.py: Currently, query data with no = are ignored. Even if
keep_blank_values is set, queries like ...?value=&... are returned with
blank values but queries like ...?value&... are completely lost. It
would be great if such data were made available through the FieldStorage
interface, either as entries with None as values, or in a separate list.

Utility function: build a query string from a list of 2-tuples

Dictionary-related utility classes: NoKeyErrors (returns an empty
string, never a KeyError), PartialStringSubstitution (returns the
original key string, never a KeyError)

New Modules

This section lists details about entire new packages or modules that
should be added to the Python standard library.

- fcgi.py : A new module adding support for the FastCGI protocol.
  Robin Dunn's code needs to be ported to Windows, though.

Major Changes to Existing Modules

This section lists details of major changes to existing modules, whether
in implementation or in interface. The changes in this section therefore
carry greater degrees of risk, either in introducing bugs or a backward
incompatibility.

The cgi.py module would be deprecated. (XXX A new module or package name
hasn't been chosen yet: 'web'? 'cgilib'?)

Minor Changes to Existing Modules

This section lists details of minor changes to existing modules.
These\nchanges should have relatively small implementations, and have little\nrisk of introducing incompatibilities with previous versions.\n\nRejected Changes\n\nThe changes listed in this section were proposed for Python 2.1, but\nwere rejected as unsuitable. For each rejected change, a rationale is\ngiven describing why the change was deemed inappropriate.\n\n- An HTML generation module is not part of this PEP. Several such\n modules exist, ranging from HTMLgen's purely programming interface\n to ASP-inspired simple templating to DTML's complex templating.\n There's no indication of which templating module to enshrine in the\n standard library, and that probably means that no module should be\n so chosen.\n- cgi.py: Allowing a combination of query data and POST data. This\n doesn't seem to be standard at all, and therefore is dubious\n practice.\n\nProposed Interface\n\nXXX open issues: naming convention (studlycaps or underline-separated?);\nneed to look at the cgi.parse*() functions and see if they can be\nsimplified, too.\n\nParsing functions: carry over most of the parse* functions from cgi.py\n\n # The Response class borrows most of its methods from Zope's\n # HTTPResponse class.\n\n class Response:\n \"\"\"\n Attributes:\n status: HTTP status code to return\n headers: dictionary of response headers\n body: string containing the body of the HTTP response\n \"\"\"\n\n def __init__(self, status=200, headers={}, body=\"\"):\n pass\n\n def setStatus(self, status, reason=None):\n \"Set the numeric HTTP response code\"\n pass\n\n def setHeader(self, name, value):\n \"Set an HTTP header\"\n pass\n\n def setBody(self, body):\n \"Set the body of the response\"\n pass\n\n def setCookie(self, name, value,\n path = '/',\n comment = None,\n domain = None,\n max-age = None,\n expires = None,\n secure = 0\n ):\n \"Set a cookie\"\n pass\n\n def expireCookie(self, name):\n \"Remove a cookie from the user\"\n pass\n\n def redirect(self, url):\n \"Redirect the browser to another URL\"\n pass\n\n def __str__(self):\n \"Convert entire response to a string\"\n pass\n\n def dump(self):\n \"Return a string representation useful for debugging\"\n pass\n\n # XXX methods for specific classes of error:serverError,\n # badRequest, etc.?\n\n\n class Request:\n\n \"\"\"\n Attributes:\n\n XXX should these be dictionaries, or dictionary-like objects?\n .headers : dictionary containing HTTP headers\n .cookies : dictionary of cookies\n .fields : data from the form\n .env : environment dictionary\n \"\"\"\n\n def __init__(self, environ=os.environ, stdin=sys.stdin,\n keep_blank_values=1, strict_parsing=0):\n \"\"\"Initialize the request object, using the provided environment\n and standard input.\"\"\"\n pass\n\n # Should people just use the dictionaries directly?\n def getHeader(self, name, default=None):\n pass\n\n def getCookie(self, name, default=None):\n pass\n\n def getField(self, name, default=None):\n \"Return field's value as a string (even if it's an uploaded file)\"\n pass\n\n def getUploadedFile(self, name):\n \"\"\"Returns a file object that can be read to obtain the contents\n of an uploaded file. XXX should this report an error if the\n field isn't actually an uploaded file? Or should it wrap\n a StringIO around simple fields for consistency?\n \"\"\"\n\n def getURL(self, n=0, query_string=0):\n \"\"\"Return the URL of the current request, chopping off 'n' path\n components from the right. Eg. if the URL is\n \"http://foo.com/bar/baz/quux\", n=2 would return\n \"http://foo.com/bar\". 
Does not include the query string (if\n any)\n \"\"\"\n\n def getBaseURL(self, n=0):\n \"\"\"Return the base URL of the current request, adding 'n' path\n components to the end to recreate more of the whole URL.\n\n Eg. if the request URL is\n \"http://foo.com/q/bar/baz/qux\", n=0 would return\n \"http://foo.com/\", and n=2 \"http://foo.com/q/bar\".\n\n Returned URL does not include the query string, if any.\n \"\"\"\n\n def dump(self):\n \"String representation suitable for debugging output\"\n pass\n\n # Possibilities? I don't know if these are worth doing in the\n # basic objects.\n def getBrowser(self):\n \"Returns Mozilla/IE/Lynx/Opera/whatever\"\n\n def isSecure(self):\n \"Return true if this is an SSLified request\"\n\n\n # Module-level function\n def wrapper(func, logfile=sys.stderr):\n \"\"\"\n Calls the function 'func', passing it the arguments\n (request, response, logfile). Exceptions are trapped and\n sent to the file 'logfile'.\n \"\"\"\n # This wrapper will detect if it's being called from the command-line,\n # and if so, it will run in a debugging mode; name=value pairs\n # can be entered on standard input to set field values.\n # (XXX how to do file uploads in this syntax?)\n\nCopyright\n\nThis document has been placed in the public domain."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.196838"},"created":{"kind":"timestamp","value":"2000-08-18T00:00:00","string":"2000-08-18T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0222/\",\n \"authors\": [\n \"A.M. Kuchling\"\n ],\n \"pep_number\": \"0222\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":10,"cells":{"id":{"kind":"string","value":"0502"},"text":{"kind":"string","value":"PEP: 502 Title: String Interpolation - Extended Discussion Author: Mike\nG. Miller Status: Rejected Type: Informational Content-Type: text/x-rst\nCreated: 10-Aug-2015 Python-Version: 3.6\n\nAbstract\n\nPEP 498: Literal String Interpolation, which proposed \"formatted\nstrings\" was accepted September 9th, 2015. Additional background and\nrationale given during its design phase is detailed below.\n\nTo recap that PEP, a string prefix was introduced that marks the string\nas a template to be rendered. These formatted strings may contain one or\nmore expressions built on the existing syntax of str.format().[1][2] The\nformatted string expands at compile-time into a conventional string\nformat operation, with the given expressions from its text extracted and\npassed instead as positional arguments.\n\nAt runtime, the resulting expressions are evaluated to render a string\nto given specifications:\n\n >>> location = 'World'\n >>> f'Hello, {location} !' # new prefix: f''\n 'Hello, World !' # interpolated result\n\nFormat-strings may be thought of as merely syntactic sugar to simplify\ntraditional calls to str.format().\n\nPEP Status\n\nThis PEP was rejected based on its using an opinion-based tone rather\nthan a factual one. This PEP was also deemed not critical as PEP 498 was\nalready written and should be the place to house design decision\ndetails.\n\nMotivation\n\nThough string formatting and manipulation features are plentiful in\nPython, one area where it falls short is the lack of a convenient string\ninterpolation syntax. 
In comparison to other dynamic scripting languages\nwith similar use cases, the amount of code necessary to build similar\nstrings is substantially higher, while at times offering lower\nreadability due to verbosity, dense syntax, or identifier duplication.\n\nThese difficulties are described at moderate length in the original post\nto python-ideas that started the snowball (that became PEP 498)\nrolling.[3]\n\nFurthermore, replacement of the print statement with the more consistent\nprint function of Python 3 (PEP 3105) has added one additional minor\nburden, an additional set of parentheses to type and read. Combined with\nthe verbosity of current string formatting solutions, this puts an\notherwise simple language at an unfortunate disadvantage to its peers:\n\n echo \"Hello, user: $user, id: $id, on host: $hostname\" # bash\n say \"Hello, user: $user, id: $id, on host: $hostname\"; # perl\n puts \"Hello, user: #{user}, id: #{id}, on host: #{hostname}\\n\" # ruby\n # 80 ch -->|\n # Python 3, str.format with named parameters\n print('Hello, user: {user}, id: {id}, on host: {hostname}'.format(**locals()))\n\n # Python 3, worst case\n print('Hello, user: {user}, id: {id}, on host: {hostname}'.format(user=user,\n id=id,\n hostname=\n hostname))\n\nIn Python, the formatting and printing of a string with multiple\nvariables in a single line of code of standard width is noticeably\nharder and more verbose, with indentation exacerbating the issue.\n\nFor use cases such as smaller projects, systems programming, shell\nscript replacements, and even one-liners, where message formatting\ncomplexity has yet to be encapsulated, this verbosity has likely lead a\nsignificant number of developers and administrators to choose other\nlanguages over the years.\n\nRationale\n\nGoals\n\nThe design goals of format strings are as follows:\n\n1. Eliminate need to pass variables manually.\n2. Eliminate repetition of identifiers and redundant parentheses.\n3. Reduce awkward syntax, punctuation characters, and visual noise.\n4. Improve readability and eliminate mismatch errors, by preferring\n named parameters to positional arguments.\n5. Avoid need for locals() and globals() usage, instead parsing the\n given string for named parameters, then passing them\n automatically.[4][5]\n\nLimitations\n\nIn contrast to other languages that take design cues from Unix and its\nshells, and in common with Javascript, Python specified both single (')\nand double (\") ASCII quote characters to enclose strings. It is not\nreasonable to choose one of them now to enable interpolation, while\nleaving the other for uninterpolated strings. Other characters, such as\nthe \"Backtick\" (or grave accent ) are also `constrained by history_ as a\nshortcut for repr().\n\nThis leaves a few remaining options for the design of such a feature:\n\n- An operator, as in printf-style string formatting via %.\n- A class, such as string.Template().\n- A method or function, such as str.format().\n- New syntax, or\n- A new string prefix marker, such as the well-known r'' or u''.\n\nThe first three options above are mature. Each has specific use cases\nand drawbacks, yet also suffer from the verbosity and visual noise\nmentioned previously. All options are discussed in the next sections.\n\nBackground\n\nFormatted strings build on several existing techniques and proposals and\nwhat we've collectively learned from them. 
In keeping with the design\ngoals of readability and error-prevention, the following examples\ntherefore use named, not positional arguments.\n\nLet's assume we have the following dictionary, and would like to print\nout its items as an informative string for end users:\n\n >>> params = {'user': 'nobody', 'id': 9, 'hostname': 'darkstar'}\n\nPrintf-style formatting, via operator\n\nThis venerable technique continues to have its uses, such as with\nbyte-based protocols, simplicity in simple cases, and familiarity to\nmany programmers:\n\n >>> 'Hello, user: %(user)s, id: %(id)s, on host: %(hostname)s' % params\n 'Hello, user: nobody, id: 9, on host: darkstar'\n\nIn this form, considering the prerequisite dictionary creation, the\ntechnique is verbose, a tad noisy, yet relatively readable. Additional\nissues are that an operator can only take one argument besides the\noriginal string, meaning multiple parameters must be passed in a tuple\nor dictionary. Also, it is relatively easy to make an error in the\nnumber of arguments passed, the expected type, have a missing key, or\nforget the trailing type, e.g. (s or d).\n\nstring.Template Class\n\nThe string.Template class from PEP 292 (Simpler String Substitutions) is\na purposely simplified design, using familiar shell interpolation\nsyntax, with safe-substitution feature, that finds its main use cases in\nshell and internationalization tools:\n\n Template('Hello, user: $user, id: ${id}, on host: $hostname').substitute(params)\n\nWhile also verbose, the string itself is readable. Though functionality\nis limited, it meets its requirements well. It isn't powerful enough for\nmany cases, and that helps keep inexperienced users out of trouble, as\nwell as avoiding issues with moderately-trusted input (i18n) from\nthird-parties. It unfortunately takes enough code to discourage its use\nfor ad-hoc string interpolation, unless encapsulated in a convenience\nlibrary such as flufl.i18n.\n\nPEP 215 - String Interpolation\n\nPEP 215 was a former proposal of which this one shares a lot in common.\nApparently, the world was not ready for it at the time, but considering\nrecent support in a number of other languages, its day may have come.\n\nThe large number of dollar sign ($) characters it included may have led\nit to resemble Python's arch-nemesis Perl, and likely contributed to the\nPEP's lack of acceptance. It was superseded by the following proposal.\n\nstr.format() Method\n\nThe str.format() syntax of PEP 3101 is the most recent and modern of the\nexisting options. It is also more powerful and usually easier to read\nthan the others. 
It avoids many of the drawbacks and limits of the\nprevious techniques.\n\nHowever, due to its necessary function call and parameter passing, it\nruns from verbose to very verbose in various situations with string\nliterals:\n\n >>> 'Hello, user: {user}, id: {id}, on host: {hostname}'.format(**params)\n 'Hello, user: nobody, id: 9, on host: darkstar'\n\n # when using keyword args, var name shortening sometimes needed to fit :/\n >>> 'Hello, user: {user}, id: {id}, on host: {host}'.format(user=user,\n id=id,\n host=hostname)\n 'Hello, user: nobody, id: 9, on host: darkstar'\n\nThe verbosity of the method-based approach is illustrated here.\n\nPEP 498 -- Literal String Formatting\n\nPEP 498 defines and discusses format strings, as also described in the\nAbstract above.\n\nIt also, somewhat controversially to those first exposed, introduces the\nidea that format-strings shall be augmented with support for arbitrary\nexpressions. This is discussed further in the Restricting Syntax section\nunder Rejected Ideas.\n\nPEP 501 -- Translation ready string interpolation\n\nThe complimentary PEP 501 brings internationalization into the\ndiscussion as a first-class concern, with its proposal of the i-prefix,\nstring.Template syntax integration compatible with ES6 (Javascript),\ndeferred rendering, and an object return value.\n\nImplementations in Other Languages\n\nString interpolation is now well supported by various programming\nlanguages used in multiple industries, and is converging into a standard\nof sorts. It is centered around str.format() style syntax in minor\nvariations, with the addition of arbitrary expressions to expand\nutility.\n\nIn the Motivation section it was shown how convenient interpolation\nsyntax existed in Bash, Perl, and Ruby. Let's take a look at their\nexpression support.\n\nBash\n\nBash supports a number of arbitrary, even recursive constructs inside\nstrings:\n\n > echo \"user: $USER, id: $((id + 6)) on host: $(echo is $(hostname))\"\n user: nobody, id: 15 on host: is darkstar\n\n- Explicit interpolation within double quotes.\n- Direct environment variable access supported.\n- Arbitrary expressions are supported.[6]\n- External process execution and output capture supported.[7]\n- Recursive expressions are supported.\n\nPerl\n\nPerl also has arbitrary expression constructs, perhaps not as well\nknown:\n\n say \"I have @{[$id + 6]} guanacos.\"; # lists\n say \"I have ${\\($id + 6)} guanacos.\"; # scalars\n say \"Hello { @names.join(', ') } how are you?\"; # Perl 6 version\n\n- Explicit interpolation within double quotes.\n- Arbitrary expressions are supported.[8][9]\n\nRuby\n\nRuby allows arbitrary expressions in its interpolated strings:\n\n puts \"One plus one is two: #{1 + 1}\\n\"\n\n- Explicit interpolation within double quotes.\n- Arbitrary expressions are supported.[10][11]\n- Possible to change delimiter chars with %.\n- See the Reference Implementation(s) section for an implementation in\n Python.\n\nOthers\n\nLet's look at some less-similar modern languages recently implementing\nstring interpolation.\n\nScala\n\nScala interpolation is directed through string prefixes. 
Each prefix has\na different result:\n\n s\"Hello, $name ${1 + 1}\" # arbitrary\n f\"$name%s is $height%2.2f meters tall\" # printf-style\n raw\"a\\nb\" # raw, like r''\n\nThese prefixes may also be implemented by the user, by extending Scala's\nStringContext class.\n\n- Explicit interpolation within double quotes with literal prefix.\n- User implemented prefixes supported.\n- Arbitrary expressions are supported.\n\nES6 (Javascript)\n\nDesigners of Template strings faced the same issue as Python where\nsingle and double quotes were taken. Unlike Python however, \"backticks\"\nwere not. Despite their issues, they were chosen as part of the\nECMAScript 2015 (ES6) standard:\n\n console.log(`Fifteen is ${a + b} and\\nnot ${2 * a + b}.`);\n\nCustom prefixes are also supported by implementing a function the same\nname as the tag:\n\n function tag(strings, ...values) {\n console.log(strings.raw[0]); // raw string is also available\n return \"Bazinga!\";\n }\n tag`Hello ${ a + b } world ${ a * b}`;\n\n- Explicit interpolation within backticks.\n- User implemented prefixes supported.\n- Arbitrary expressions are supported.\n\nC#, Version 6\n\nC# has a useful new interpolation feature as well, with some ability to\ncustomize interpolation via the IFormattable interface:\n\n $\"{person.Name, 20} is {person.Age:D3} year{(p.Age == 1 ? \"\" : \"s\")} old.\";\n\n- Explicit interpolation with double quotes and $ prefix.\n- Custom interpolations are available.\n- Arbitrary expressions are supported.\n\nApple's Swift\n\nArbitrary interpolation under Swift is available on all strings:\n\n let multiplier = 3\n let message = \"\\(multiplier) times 2.5 is \\(Double(multiplier) * 2.5)\"\n // message is \"3 times 2.5 is 7.5\"\n\n- Implicit interpolation with double quotes.\n- Arbitrary expressions are supported.\n- Cannot contain CR/LF.\n\nAdditional examples\n\nA number of additional examples of string interpolation may be found at\nWikipedia.\n\nNow that background and history have been covered, let's continue on for\na solution.\n\nNew Syntax\n\nThis should be an option of last resort, as every new syntax feature has\na cost in terms of real-estate in a brain it inhabits. There is however\none alternative left on our list of possibilities, which follows.\n\nNew String Prefix\n\nGiven the history of string formatting in Python and\nbackwards-compatibility, implementations in other languages, avoidance\nof new syntax unless necessary, an acceptable design is reached through\nelimination rather than unique insight. Therefore, marking interpolated\nstring literals with a string prefix is chosen.\n\nWe also choose an expression syntax that reuses and builds on the\nstrongest of the existing choices, str.format() to avoid further\nduplication of functionality:\n\n >>> location = 'World'\n >>> f'Hello, {location} !' # new prefix: f''\n 'Hello, World !' # interpolated result\n\nPEP 498 -- Literal String Formatting, delves into the mechanics and\nimplementation of this design.\n\nAdditional Topics\n\nSafety\n\nIn this section we will describe the safety situation and precautions\ntaken in support of format-strings.\n\n1. Only string literals have been considered for format-strings, not\n variables to be taken as input or passed around, making external\n attacks difficult to accomplish.\n\n str.format() and alternatives already handle this use-case.\n\n2. Neither locals() nor globals() are necessary nor used during the\n transformation, avoiding leakage of information.\n\n3. 
To eliminate complexity as well as RuntimeError (s) due to recursion\n depth, recursive interpolation is not supported.\n\nHowever, mistakes or malicious code could be missed inside string\nliterals. Though that can be said of code in general, that these\nexpressions are inside strings means they are a bit more likely to be\nobscured.\n\nMitigation via Tools\n\nThe idea is that tools or linters such as pyflakes, pylint, or Pycharm,\nmay check inside strings with expressions and mark them up\nappropriately. As this is a common task with programming languages\ntoday, multi-language tools won't have to implement this feature solely\nfor Python, significantly shortening time to implementation.\n\nFarther in the future, strings might also be checked for constructs that\nexceed the safety policy of a project.\n\nStyle Guide/Precautions\n\nAs arbitrary expressions may accomplish anything a Python expression is\nable to, it is highly recommended to avoid constructs inside\nformat-strings that could cause side effects.\n\nFurther guidelines may be written once usage patterns and true problems\nare known.\n\nReference Implementation(s)\n\nThe say module on PyPI implements string interpolation as described here\nwith the small burden of a callable interface:\n\n > pip install say\n\n from say import say\n nums = list(range(4))\n say(\"Nums has {len(nums)} items: {nums}\")\n\nA Python implementation of Ruby interpolation is also available. It uses\nthe codecs module to do its work:\n\n > pip install interpy\n\n # coding: interpy\n location = 'World'\n print(\"Hello #{location}.\")\n\nBackwards Compatibility\n\nBy using existing syntax and avoiding current or historical features,\nformat strings were designed so as to not interfere with existing code\nand are not expected to cause any issues.\n\nPostponed Ideas\n\nInternationalization\n\nThough it was highly desired to integrate internationalization support,\n(see PEP 501), the finer details diverge at almost every point, making a\ncommon solution unlikely:[12]\n\n- Use-cases differ\n- Compile vs. run-time tasks\n- Interpolation syntax needs\n- Intended audience\n- Security policy\n\nRejected Ideas\n\nRestricting Syntax to str.format() Only\n\nThe common arguments against support of arbitrary expressions were:\n\n1. YAGNI, \"You aren't gonna need it.\"\n2. The feature is not congruent with historical Python conservatism.\n3. Postpone - can implement in a future version if need is\n demonstrated.\n\nSupport of only str.format() syntax however, was deemed not enough of a\nsolution to the problem. Often a simple length or increment of an\nobject, for example, is desired before printing.\n\nIt can be seen in the Implementations in Other Languages section that\nthe developer community at large tends to agree. String interpolation\nwith arbitrary expressions is becoming an industry standard in modern\nlanguages due to its utility.\n\nAdditional/Custom String-Prefixes\n\nAs seen in the Implementations in Other Languages section, many modern\nlanguages have extensible string prefixes with a common interface. This\ncould be a way to generalize and reduce lines of code in common\nsituations. Examples are found in ES6 (Javascript), Scala, Nim, and C#\n(to a lesser extent). This was rejected by the BDFL.[13]\n\nAutomated Escaping of Input Variables\n\nWhile helpful in some cases, this was thought to create too much\nuncertainty of when and where string expressions could be used safely or\nnot. 
The concept was also difficult to describe to others.[14]

Always consider format string variables to be unescaped, unless the developer has explicitly escaped them.

Environment Access and Command Substitution

For systems programming and shell-script replacements, it would be useful to handle environment variables and capture output of commands directly in an expression string. This was rejected as not important enough, and looking too much like bash/perl, which could encourage bad habits.[15]

Acknowledgements

- Eric V. Smith for the authoring and implementation of PEP 498.
- Everyone on the python-ideas mailing list for rejecting the various crazy ideas that came up, helping to keep the final design in focus.

References

[1] Python Str.Format Syntax (https://docs.python.org/3.6/library/string.html#format-string-syntax)

[2] Python Format-Spec Mini Language (https://docs.python.org/3.6/library/string.html#format-specification-mini-language)

[3] Briefer String Format (https://mail.python.org/pipermail/python-ideas/2015-July/034659.html)

[4] Briefer String Format (https://mail.python.org/pipermail/python-ideas/2015-July/034669.html)

[5] Briefer String Format (https://mail.python.org/pipermail/python-ideas/2015-July/034701.html)

[6] Bash Docs (https://tldp.org/LDP/abs/html/arithexp.html)

[7] Bash Docs (https://tldp.org/LDP/abs/html/commandsub.html)

[8] Perl Cookbook (https://docstore.mik.ua/orelly/perl/cookbook/ch01_11.htm)

[9] Perl Docs (https://web.archive.org/web/20121025185907/https://perl6maven.com/perl6-scalar-array-and-hash-interpolation)

[10] Ruby Docs (http://ruby-doc.org/core-2.1.1/doc/syntax/literals_rdoc.html#label-Strings)

[11] Ruby Docs (https://en.wikibooks.org/wiki/Ruby_Programming/Syntax/Literals#Interpolation)

[12] Literal String Formatting (https://mail.python.org/pipermail/python-dev/2015-August/141289.html)

[13] Extensible String Prefixes (https://mail.python.org/pipermail/python-ideas/2015-August/035336.html)

[14] Escaping of Input Variables (https://mail.python.org/pipermail/python-ideas/2015-August/035532.html)

[15] Environment Access and Command Substitution (https://mail.python.org/pipermail/python-ideas/2015-August/035554.html)

Copyright

This document has been placed in the public domain.

PEP: 208
Title: Reworking the Coercion Model
Author: Neil Schemenauer, Marc-André Lemburg
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 04-Dec-2000
Python-Version: 2.1
Post-History:

Abstract

Many Python types implement numeric operations. When the arguments of a numeric operation are of different types, the interpreter tries to coerce the arguments into a common type. The numeric operation is then performed using this common type. This PEP proposes a new type flag to indicate that arguments to a type's numeric operations should not be coerced.
Operations that do not support the supplied types indicate it by returning a new singleton object. Types which do not set the type flag are handled in a backwards compatible manner. Allowing operations to handle different types is often simpler, more flexible, and faster than having the interpreter do coercion.

Rationale

When implementing numeric or other related operations, it is often desirable to provide not only operations between operands of one type only, e.g. integer + integer, but to generalize the idea behind the operation to other type combinations as well, e.g. integer + float.

A common approach to this mixed type situation is to provide a method of "lifting" the operands to a common type (coercion) and then use that type's operand method as execution mechanism. Yet, this strategy has a few drawbacks:

- the "lifting" process creates at least one new (temporary) operand object,
- since the coercion method is not being told about the operation that is to follow, it is not possible to implement operation specific coercion of types,
- there is no elegant way to solve situations where a common type is not at hand, and
- the coercion method will always have to be called prior to the operation's method itself.

A fix for this situation is obviously needed, since these drawbacks make implementations of types needing these features very cumbersome, if not impossible. As an example, have a look at the DateTime and DateTimeDelta[1] types, the first being absolute, the second relative. You can always add a relative value to an absolute one, giving a new absolute value. Yet, there is no common type which the existing coercion mechanism could use to implement that operation.

Currently, PyInstance types are treated specially by the interpreter in that their numeric methods are passed arguments of different types. Removing this special case simplifies the interpreter and allows other types to implement numeric methods that behave like instance types. This is especially useful for extension types like ExtensionClass.

Specification

Instead of using a central coercion method, the process of handling different operand types is simply left to the operation. If the operation finds that it cannot handle the given operand type combination, it may return a special singleton as indicator.

Note that "numbers" (anything that implements the number protocol, or part of it) written in Python already use the first part of this strategy - it is the C level API that we focus on here.

To maintain nearly 100% backward compatibility we have to be very careful to make numbers that don't know anything about the new strategy (old style numbers) work just as well as those that expect the new scheme (new style numbers). Furthermore, binary compatibility is a must, meaning that the interpreter may only access and use new style operations if the number indicates the availability of these.

A new style number is considered by the interpreter as such if and only if it sets the type flag Py_TPFLAGS_CHECKTYPES. The main difference between an old style number and a new style one is that the numeric slot functions can no longer assume to be passed arguments of identical type. New style slots must check all arguments for proper type and implement the necessary conversions themselves.
This may seem to cause more work\non the behalf of the type implementor, but is in fact no more difficult\nthan writing the same kind of routines for an old style coercion slot.\n\nIf a new style slot finds that it cannot handle the passed argument type\ncombination, it may return a new reference of the special singleton\nPy_NotImplemented to the caller. This will cause the caller to try the\nother operands operation slots until it finds a slot that does implement\nthe operation for the specific type combination. If none of the possible\nslots succeed, it raises a TypeError.\n\nTo make the implementation easy to understand (the whole topic is\nesoteric enough), a new layer in the handling of numeric operations is\nintroduced. This layer takes care of all the different cases that need\nto be taken into account when dealing with all the possible combinations\nof old and new style numbers. It is implemented by the two static\nfunctions binary_op() and ternary_op(), which are both internal\nfunctions that only the functions in Objects/abstract.c have access to.\nThe numeric API (PyNumber_*) is easy to adapt to this new layer.\n\nAs a side-effect all numeric slots can be NULL-checked (this has to be\ndone anyway, so the added feature comes at no extra cost).\n\nThe scheme used by the layer to execute a binary operation is as\nfollows:\n\n v w Action taken\n ----- ----- -----------------------------------\n new new v.op(v,w), w.op(v,w)\n new old v.op(v,w), coerce(v,w), v.op(v,w)\n old new w.op(v,w), coerce(v,w), v.op(v,w)\n old old coerce(v,w), v.op(v,w)\n\nThe indicated action sequence is executed from left to right until\neither the operation succeeds and a valid result (!= Py_NotImplemented)\nis returned or an exception is raised. Exceptions are returned to the\ncalling function as-is. If a slot returns Py_NotImplemented, the next\nitem in the sequence is executed.\n\nNote that coerce(v,w) will use the old style nb_coerce slot methods via\na call to PyNumber_Coerce().\n\nTernary operations have a few more cases to handle:\n\n v w z Action taken\n ----- ----- ----- ------------------------------------------------------\n new new new v.op(v,w,z), w.op(v,w,z), z.op(v,w,z)\n new old new v.op(v,w,z), z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)\n old new new w.op(v,w,z), z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)\n old old new z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)\n new new old v.op(v,w,z), w.op(v,w,z), coerce(v,w,z), v.op(v,w,z)\n new old old v.op(v,w,z), coerce(v,w,z), v.op(v,w,z)\n old new old w.op(v,w,z), coerce(v,w,z), v.op(v,w,z)\n old old old coerce(v,w,z), v.op(v,w,z)\n\nThe same notes as above, except that coerce(v,w,z) actually does:\n\n if z != Py_None:\n coerce(v,w), coerce(v,z), coerce(w,z)\n else:\n # treat z as absent variable\n coerce(v,w)\n\nThe current implementation uses this scheme already (there's only one\nternary slot: nb_pow(a,b,c)).\n\nNote that the numeric protocol is also used for some other related\ntasks, e.g. sequence concatenation. These can also benefit from the new\nmechanism by implementing right-hand operations for type combinations\nthat would otherwise fail to work. As an example, take string\nconcatenation: currently you can only do string + string. 
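A Python-level sketch of how such a cooperative type could behave once coercion is out of the picture (the Markup class here is purely illustrative, not part of the proposed patch):

    class Markup:
        """Illustrative string-like type that cooperates with str."""

        def __init__(self, text):
            self.text = text

        def __repr__(self):
            return "Markup(%r)" % self.text

        def __add__(self, other):
            # Markup + Markup and Markup + str are handled here...
            if isinstance(other, Markup):
                return Markup(self.text + other.text)
            if isinstance(other, str):
                return Markup(self.text + other)
            # ...anything else is left for the other operand's slot to try
            # before the interpreter raises TypeError.
            return NotImplemented

        def __radd__(self, other):
            # str + Markup lands here, because str itself signals
            # "unsupported operand" for types it knows nothing about.
            if isinstance(other, str):
                return Markup(other + self.text)
            return NotImplemented

    print(Markup("Hello, ") + "World")   # Markup('Hello, World')
    print("Hello, " + Markup("World"))   # Markup('Hello, World')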
With the new mechanism, a string-like type such as the one sketched above could implement new_type + string and string + new_type, even though strings don't know anything about new_type.

Since comparisons also rely on coercion (every time you compare an integer to a float, the integer is first converted to float and then compared...), a new slot to handle numeric comparisons is needed:

    PyObject *nb_cmp(PyObject *v, PyObject *w)

This slot should compare the two objects and return an integer object stating the result. Currently, this result integer may only be -1, 0, 1. If the slot cannot handle the type combination, it may return a reference to Py_NotImplemented. [XXX Note that this slot is still in flux since it should take into account rich comparisons (i.e. PEP 207).]

Numeric comparisons are handled by a new numeric protocol API:

    PyObject *PyNumber_Compare(PyObject *v, PyObject *w)

This function compares the two objects as "numbers" and returns an integer object stating the result. Currently, this result integer may only be -1, 0, 1. In case the operation cannot be handled by the given objects, a TypeError is raised.

The PyObject_Compare() API needs to be adjusted accordingly to make use of this new API.

Other changes include adapting some of the built-in functions (e.g. cmp()) to use this API as well. Also, PyNumber_CoerceEx() will need to check for new style numbers before calling the nb_coerce slot. New style numbers don't provide a coercion slot and thus cannot be explicitly coerced.

Reference Implementation

A preliminary patch for the CVS version of Python is available through the Source Forge patch manager[2].

Credits

This PEP and the patch are heavily based on work done by Marc-André Lemburg[3].

Copyright

This document has been placed in the public domain.

References

[1] http://www.lemburg.com/files/python/mxDateTime.html

[2] http://sourceforge.net/patch/?func=detailpatch&patch_id=102652&group_id=5470

[3] http://www.lemburg.com/files/python/CoercionProposal.html

PEP: 649
Title: Deferred Evaluation Of Annotations Using Descriptors
Author: Larry Hastings
Discussions-To: https://discuss.python.org/t/pep-649-deferred-evaluation-of-annotations-tentatively-accepted/21331/
Status: Accepted
Type: Standards Track
Topic: Typing
Content-Type: text/x-rst
Created: 11-Jan-2021
Python-Version: 3.14
Post-History: 11-Jan-2021, 12-Apr-2021, 18-Apr-2021, 09-Aug-2021, 20-Oct-2021, 20-Oct-2021, 17-Nov-2021, 15-Mar-2022, 23-Nov-2022, 07-Feb-2023, 11-Apr-2023
Replaces: 563
Resolution: 08-May-2023

Abstract

Annotations are a Python technology that allows expressing type information and other metadata about Python functions, classes, and modules.
But Python's original semantics for annotations required them\nto be eagerly evaluated, at the time the annotated object was bound.\nThis caused chronic problems for static type analysis users using \"type\nhints\", due to forward-reference and circular-reference problems.\n\nPython solved this by accepting PEP 563, incorporating a new approach\ncalled \"stringized annotations\" in which annotations were automatically\nconverted into strings by Python. This solved the forward-reference and\ncircular-reference problems, and also fostered intriguing new uses for\nannotation metadata. But stringized annotations in turn caused chronic\nproblems for runtime users of annotations.\n\nThis PEP proposes a new and comprehensive third approach for\nrepresenting and computing annotations. It adds a new internal mechanism\nfor lazily computing annotations on demand, via a new object method\ncalled __annotate__. This approach, when combined with a novel technique\nfor coercing annotation values into alternative formats, solves all the\nabove problems, supports all existing use cases, and should foster\nfuture innovations in annotations.\n\nOverview\n\nThis PEP adds a new dunder attribute to the objects that support\nannotations--functions, classes, and modules. The new attribute is\ncalled __annotate__, and is a reference to a function which computes and\nreturns that object's annotations dict.\n\nAt compile time, if the definition of an object includes annotations,\nthe Python compiler will write the expressions computing the annotations\ninto its own function. When run, the function will return the\nannotations dict. The Python compiler then stores a reference to this\nfunction in __annotate__ on the object.\n\nFurthermore, __annotations__ is redefined to be a \"data descriptor\"\nwhich calls this annotation function once and caches the result.\n\nThis mechanism delays the evaluation of annotations expressions until\nthe annotations are examined, which solves many circular reference\nproblems.\n\nThis PEP also defines new functionality for two functions in the Python\nstandard library: inspect.get_annotations and typing.get_type_hints. The\nfunctionality is accessed via a new keyword-only parameter, format.\nformat allows the user to request the annotations from these functions\nin a specific format. Format identifiers are always predefined integer\nvalues. The formats defined by this PEP are:\n\n- inspect.VALUE = 1\n\n The default value. The function will return the conventional Python\n values for the annotations. This format is identical to the return\n value for these functions under Python 3.11.\n\n- inspect.FORWARDREF = 2\n\n The function will attempt to return the conventional Python values\n for the annotations. However, if it encounters an undefined name, or\n a free variable that has not yet been associated with a value, it\n dynamically creates a proxy object (a ForwardRef) that substitutes\n for that value in the expression, then continues evaluation. The\n resulting dict may contain a mixture of proxies and real values. If\n all real values are defined at the time the function is called,\n inspect.FORWARDREF and inspect.VALUE produce identical results.\n\n- inspect.SOURCE = 3\n\n The function will produce an annotation dictionary where the values\n have been replaced by strings containing the original source code\n for the annotation expressions. 
These strings may only be\n approximate, as they may be reverse-engineered from another format,\n rather than preserving the original source code, but the differences\n will be minor.\n\nIf accepted, this PEP would supersede PEP 563, and PEP 563's behavior\nwould be deprecated and eventually removed.\n\nComparison Of Annotation Semantics\n\nNote\n\nThe code presented in this section is simplified for clarity, and is\nintentionally inaccurate in some critical aspects. This example is\nintended merely to communicate the high-level concepts involved without\ngetting lost in the details. But readers should note that the actual\nimplementation is quite different in several important ways. See the\nImplementation section later in this PEP for a far more accurate\ndescription of what this PEP proposes from a technical level.\n\nConsider this example code:\n\n def foo(x: int = 3, y: MyType = None) -> float:\n ...\n class MyType:\n ...\n foo_y_annotation = foo.__annotations__['y']\n\nAs we see here, annotations are available at runtime through an\n__annotations__ attribute on functions, classes, and modules. When\nannotations are specified on one of these objects, __annotations__ is a\ndictionary mapping the names of the fields to the value specified as\nthat field's annotation.\n\nThe default behavior in Python is to evaluate the expressions for the\nannotations, and build the annotations dict, at the time the function,\nclass, or module is bound. At runtime the above code actually works\nsomething like this:\n\n annotations = {'x': int, 'y': MyType, 'return': float}\n def foo(x = 3, y = \"abc\"):\n ...\n foo.__annotations__ = annotations\n class MyType:\n ...\n foo_y_annotation = foo.__annotations__['y']\n\nThe crucial detail here is that the values int, MyType, and float are\nlooked up at the time the function object is bound, and these values are\nstored in the annotations dict. But this code doesn't run—it throws a\nNameError on the first line, because MyType hasn't been defined yet.\n\nPEP 563's solution is to decompile the expressions back into strings\nduring compilation and store those strings as the values in the\nannotations dict. The equivalent runtime code would look something like\nthis:\n\n annotations = {'x': 'int', 'y': 'MyType', 'return': 'float'}\n def foo(x = 3, y = \"abc\"):\n ...\n foo.__annotations__ = annotations\n class MyType:\n ...\n foo_y_annotation = foo.__annotations__['y']\n\nThis code now runs successfully. However, foo_y_annotation is no longer\na reference to MyType, it is the string 'MyType'. To turn the string\ninto the real value MyType, the user would need to evaluate the string\nusing eval, inspect.get_annotations, or typing.get_type_hints.\n\nThis PEP proposes a third approach, delaying the evaluation of the\nannotations by computing them in their own function. If this PEP was\nactive, the generated code would work something like this:\n\n class function:\n # __annotations__ on a function object is already a\n # \"data descriptor\" in Python, we're just changing\n # what it does\n @property\n def __annotations__(self):\n return self.__annotate__()\n\n # ...\n\n def annotate_foo():\n return {'x': int, 'y': MyType, 'return': float}\n def foo(x = 3, y = \"abc\"):\n ...\n foo.__annotate__ = annotate_foo\n class MyType:\n ...\n foo_y_annotation = foo.__annotations__['y']\n\nThe important change is that the code constructing the annotations dict\nnow lives in a function—here, called annotate_foo(). 
But this function\nisn't called until we ask for the value of foo.__annotations__, and we\ndon't do that until after the definition of MyType. So this code also\nruns successfully, and foo_y_annotation now has the correct value--the\nclass MyType--even though MyType wasn't defined until after the\nannotation was defined.\n\nMistaken Rejection Of This Approach In November 2017\n\nDuring the early days of discussion around PEP 563, in a November 2017\nthread in comp.lang.python-dev, the idea of using code to delay the\nevaluation of annotations was briefly discussed. At the time the\ntechnique was termed an \"implicit lambda expression\".\n\nGuido van Rossum—Python's BDFL at the time—replied, asserting that these\n\"implicit lambda expression\" wouldn't work, because they'd only be able\nto resolve symbols at module-level scope:\n\n IMO the inability of referencing class-level definitions from\n annotations on methods pretty much kills this idea.\n\nhttps://mail.python.org/pipermail/python-dev/2017-November/150109.html\n\nThis led to a short discussion about extending lambda-ized annotations\nfor methods to be able to refer to class-level definitions, by\nmaintaining a reference to the class-level scope. This idea, too, was\nquickly rejected.\n\nPEP 563 summarizes the above discussion\n<563#keeping-the-ability-to-use-function-local-state-when-defining-annotations>\n\nThe approach taken by this PEP doesn't suffer from these restrictions.\nAnnotations can access module-level definitions, class-level\ndefinitions, and even local and free variables.\n\nMotivation\n\nA History Of Annotations\n\nPython 3.0 shipped with a new syntax feature, \"annotations\", defined in\nPEP 3107. This allowed specifying a Python value that would be\nassociated with a parameter of a Python function, or with the value that\nfunction returns. Said another way, annotations gave Python users an\ninterface to provide rich metadata about a function parameter or return\nvalue, for example type information. All the annotations for a function\nwere stored together in a new attribute __annotations__, in an\n\"annotation dict\" that mapped parameter names (or, in the case of the\nreturn annotation, using the name 'return') to their Python value.\n\nIn an effort to foster experimentation, Python intentionally didn't\ndefine what form this metadata should take, or what values should be\nused. User code began experimenting with this new facility almost\nimmediately. But popular libraries that make use of this functionality\nwere slow to emerge.\n\nAfter years of little progress, the BDFL chose a particular approach for\nexpressing static type information, called type hints, as defined in PEP\n484. Python 3.5 shipped with a new typing module which quickly became\nvery popular.\n\nPython 3.6 added syntax to annotate local variables, class attributes,\nand module attributes, using the approach proposed in PEP 526. Static\ntype analysis continued to grow in popularity.\n\nHowever, static type analysis users were increasingly frustrated by an\ninconvenient problem: forward references. In classic Python, if a class\nC depends on a later-defined class D, it's normally not a problem,\nbecause user code will usually wait until both are defined before trying\nto use either. But annotations added a new complication, because they\nwere computed at the time the annotated object (function, class, or\nmodule) was bound. 
If methods on class C are annotated with type D, and\nthese annotation expressions are computed at the time that the method is\nbound, D may not be defined yet. And if methods in D are also annotated\nwith type C, you now have an unresolvable circular reference problem.\n\nInitially, static type users worked around this problem by defining\ntheir problematic annotations as strings. This worked because a string\ncontaining the type hint was just as usable for the static type analysis\ntool. And users of static type analysis tools rarely examine the\nannotations at runtime, so this representation wasn't itself an\ninconvenience. But manually stringizing type hints was clumsy and\nerror-prone. Also, code bases were adding more and more annotations,\nwhich consumed more and more CPU time to create and bind.\n\nTo solve these problems, the BDFL accepted PEP 563, which added a new\nfeature to Python 3.7: \"stringized annotations\". It was activated with a\nfuture import:\n\n from __future__ import annotations\n\nNormally, annotation expressions were evaluated at the time the object\nwas bound, with their values being stored in the annotations dict. When\nstringized annotations were active, these semantics changed: instead, at\ncompile time, the compiler converted all annotations in that module into\nstring representations of their source code--thus, automatically turning\nthe users's annotations into strings, obviating the need to manually\nstringize them as before. PEP 563 suggested users could evaluate this\nstring with eval if the actual value was needed at runtime.\n\n(From here on out, this PEP will refer to the classic semantics of PEP\n3107 and PEP 526, where the values of annotation expressions are\ncomputed at the time the object is bound, as \"stock\" semantics, to\ndifferentiate them from the new PEP 563 \"stringized\" annotation\nsemantics.)\n\nThe Current State Of Annotation Use Cases\n\nAlthough there are many specific use cases for annotations, annotation\nusers in the discussion around this PEP tended to fall into one of these\nfour categories.\n\nStatic typing users\n\nStatic typing users use annotations to add type information to their\ncode. But they largely don't examine the annotations at runtime.\nInstead, they use static type analysis tools (mypy, pytype) to examine\ntheir source tree and determine whether or not their code is using types\nconsistently. This is almost certainly the most popular use case for\nannotations today.\n\nMany of the annotations use type hints, a la PEP 484 (and many\nsubsequent PEPs). Type hints are passive objects, mere representation of\ntype information; they don't do any actual work. Type hints are often\nparameterized with other types or other type hints. Since they're\nagnostic about what these actual values are, type hints work fine with\nForwardRef proxy objects. Users of static type hints discovered that\nextensive type hinting under stock semantics often created large-scale\ncircular reference and circular import problems that could be difficult\nto solve. PEP 563 was designed specifically to solve this problem, and\nthe solution worked great for these users. The difficulty of rendering\nstringized annotations into real values largely didn't inconvenience\nthese users because of how infrequently they examine annotations at\nruntime.\n\nStatic typing users often combine PEP 563 with the\nif typing.TYPE_CHECKING idiom to prevent their type hints from being\nloaded at runtime. 
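The idiom usually looks something like the following sketch (expensive_module and Widget are placeholder names):

    from __future__ import annotations   # PEP 563: stringized annotations

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # Seen by static type checkers only; never imported at runtime.
        from expensive_module import Widget

    def refresh(target: Widget) -> None:
        ...

    print(refresh.__annotations__)   # {'target': 'Widget', 'return': 'None'}
    # The import above never ran, so the string 'Widget' cannot be
    # evaluated back into a real object from here.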
This means they often aren't able to evaluate their\nstringized annotations and produce real values at runtime. On the rare\noccasion that they do examine annotations at runtime, they often forgo\neval, instead using lexical analysis directly on the stringized\nannotations.\n\nUnder this PEP, static typing users will probably prefer FORWARDREF or\nSOURCE format.\n\nRuntime annotation users\n\nRuntime annotation users use annotations as a means of expressing rich\nmetadata about their functions and classes, which they use as input to\nruntime behavior. Specific use cases include runtime type verification\n(Pydantic) and glue logic to expose Python APIs in another domain\n(FastAPI, Typer). The annotations may or may not be type hints.\n\nAs runtime annotation users examine annotations at runtime, they were\ntraditionally better served with stock semantics. This use case is\nlargely incompatible with PEP 563, particularly with the\nif typing.TYPE_CHECKING idiom.\n\nUnder this PEP, runtime annotation users will most likely prefer VALUE\nformat, though some (e.g. if they evaluate annotations eagerly in a\ndecorator and want to support forward references) may also use\nFORWARDREF format.\n\nWrappers\n\nWrappers are functions or classes that wrap user functions or classes\nand add functionality. Examples of this would be ~dataclasses.dataclass,\nfunctools.partial, attrs, and wrapt.\n\nWrappers are a distinct subcategory of runtime annotation users.\nAlthough they do use annotations at runtime, they may or may not\nactually examine the annotations of the objects they wrap--it depends on\nthe functionality the wrapper provides. As a rule they should propagate\nthe annotations of the wrapped object to the wrapper they create,\nalthough it's possible they may modify those annotations.\n\nWrappers were generally designed to work well under stock semantics.\nWhether or not they work well under PEP 563 semantics depends on the\ndegree to which they examine the wrapped object's annotations. Often\nwrappers don't care about the value per se, only needing specific\ninformation about the annotations. Even so, PEP 563 and the\nif typing.TYPE_CHECKING idiom can make it difficult for wrappers to\nreliably determine the information they need at runtime. This is an\nongoing, chronic problem. Under this PEP, wrappers will probably prefer\nFORWARDREF format for their internal logic. But the wrapped objects need\nto support all formats for their users.\n\nDocumentation\n\nPEP 563 stringized annotations were a boon for tools that mechanically\nconstruct documentation.\n\nStringized type hints make for excellent documentation; type hints as\nexpressed in source code are often succinct and readable. However, at\nruntime these same type hints can produce value at runtime whose repr is\na sprawling, nested, unreadable mess. Thus documentation users were\nwell-served by PEP 563 but poorly served with stock semantics.\n\nUnder this PEP, documentation users are expected to use SOURCE format.\n\nMotivation For This PEP\n\nPython's original semantics for annotations made its use for static type\nanalysis painful due to forward reference problems. PEP 563 solved the\nforward reference problem, and many static type analysis users became\nhappy early adopters of it. 
But its unconventional solution created new\nproblems for two of the above cited use cases: runtime annotation users,\nand wrappers.\n\nFirst, stringized annotations didn't permit referencing local or free\nvariables, which meant many useful, reasonable approaches to creating\nannotations were no longer viable. This was particularly inconvenient\nfor decorators that wrap existing functions and classes, as these\ndecorators often use closures.\n\nSecond, in order for eval to correctly look up globals in a stringized\nannotation, you must first obtain a reference to the correct module. But\nclass objects don't retain a reference to their globals. PEP 563\nsuggests looking up a class's module by name in sys.modules—a surprising\nrequirement for a language-level feature.\n\nAdditionally, complex but legitimate constructions can make it difficult\nto determine the correct globals and locals dicts to give to eval to\nproperly evaluate a stringized annotation. Even worse, in some\nsituations it may simply be infeasible.\n\nFor example, some libraries (e.g. typing.TypedDict, dataclasses) wrap a\nuser class, then merge all the annotations from all that class's base\nclasses together into one cumulative annotations dict. If those\nannotations were stringized, calling eval on them later may not work\nproperly, because the globals dictionary used for the eval will be the\nmodule where the user class was defined, which may not be the same\nmodule where the annotation was defined. However, if the annotations\nwere stringized because of forward-reference problems, calling eval on\nthem early may not work either, due to the forward reference not being\nresolvable yet. This has proved to be difficult to reconcile; of the\nthree bug reports linked to below, only one has been marked as fixed.\n\n- https://github.com/python/cpython/issues/89687\n- https://github.com/python/cpython/issues/85421\n- https://github.com/python/cpython/issues/90531\n\nEven with proper globals and locals, eval can be unreliable on\nstringized annotations. eval can only succeed if all the symbols\nreferenced in an annotations are defined. If a stringized annotation\nrefers to a mixture of defined and undefined symbols, a simple eval of\nthat string will fail. This is a problem for libraries with that need to\nexamine the annotation, because they can't reliably convert these\nstringized annotations into real values.\n\n- Some libraries (e.g. dataclasses) solved this by foregoing real\n values and performing lexical analysis of the stringized annotation,\n which requires a lot of work to get right.\n- Other libraries still suffer with this problem, which can produce\n surprising runtime behavior.\n https://github.com/python/cpython/issues/97727\n\nAlso, eval() is slow, and it isn't always available; it's sometimes\nremoved for space reasons on certain platforms. eval() on MicroPython\ndoesn't support the locals argument, which makes converting stringized\nannotations into real values at runtime even harder.\n\nFinally, PEP 563 requires Python implementations to stringize their\nannotations. 
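A short sketch of what that compile-time stringization looks like in practice (Connection and Row are deliberately left undefined):

    from __future__ import annotations   # PEP 563 semantics

    def fetch(conn: Connection, timeout: float = 1.0) -> list[Row]:
        ...

    print(fetch.__annotations__)
    # {'conn': 'Connection', 'timeout': 'float', 'return': 'list[Row]'}

    # 'float' would evaluate fine, but eval() on the other strings fails
    # with NameError until Connection and Row are actually defined.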
This is surprising behavior—unprecedented for a\nlanguage-level feature, with a complicated implementation, that must be\nupdated whenever a new operator is added to the language.\n\nThese problems motivated the research into finding a new approach to\nsolve the problems facing annotations users, resulting in this PEP.\n\nImplementation\n\nObserved semantics for annotations expressions\n\nFor any object o that supports annotations, provided that all names\nevaluated in the annotations expressions are bound before o is defined\nand never subsequently rebound, o.__annotations__ will produce an\nidentical annotations dict both when \"stock\" semantics are active and\nwhen this PEP is active. In particular, name resolution will be\nperformed identically in both scenarios.\n\nWhen this PEP is active, the value of o.__annotations__ won't be\ncalculated until the first time o.__annotations__ itself is evaluated.\nAll evaluation of the annotation expressions is delayed until this\nmoment, which also means that\n\n- names referenced in the annotations expressions will use their\n current value at this moment, and\n- if evaluating the annotations expressions raises an exception, that\n exception will be raised at this moment.\n\nOnce o.__annotations__ is successfully calculated for the first time,\nthis value is cached and will be returned by future requests for\no.__annotations__.\n\n------------------------------------------------------------------------\n\nPython supports annotations on three different types: functions,\nclasses, and modules. This PEP modifies the semantics on all three of\nthese types in a similar way.\n\nFirst, this PEP adds a new \"dunder\" attribute, __annotate__.\n__annotate__ must be a \"data descriptor\", implementing all three\nactions: get, set, and delete. The __annotate__ attribute is always\ndefined, and may only be set to either None or to a callable.\n(__annotate__ cannot be deleted.) If an object has no annotations,\n__annotate__ should be initialized to None, rather than to a function\nthat returns an empty dict.\n\nThe __annotate__ data descriptor must have dedicated storage inside the\nobject to store the reference to its value. The location of this storage\nat runtime is an implementation detail. Even if it's visible to Python\ncode, it should still be considered an internal implementation detail,\nand Python code should prefer to interact with it only via the\n__annotate__ attribute.\n\nThe callable stored in __annotate__ must accept a single required\npositional argument called format, which will always be an int (or a\nsubclass of int). It must either return a dict (or subclass of dict) or\nraise NotImplementedError().\n\nHere's a formal definition of __annotate__, as it will appear in the\n\"Magic methods\" section of the Python Language Reference:\n\n __annotate__(format: int) -> dict\n\n Returns a new dictionary object mapping attribute/parameter names to\n their annotation values.\n\n Takes a format parameter specifying the format in which annotations\n values should be provided. Must be one of the following:\n\n inspect.VALUE (equivalent to the int constant 1)\n\n Values are the result of evaluating the annotation expressions.\n\n inspect.FORWARDREF (equivalent to the int constant 2)\n\n Values are real annotation values (as per inspect.VALUE format) for\n defined values, and ForwardRef proxies for undefined values. 
Real\n objects may be exposed to, or contain references to, ForwardRef\n proxy objects.\n\n inspect.SOURCE (equivalent to the int constant 3)\n\n Values are the text string of the annotation as it appears in the\n source code. May only be approximate; whitespace may be normalized,\n and constant values may be optimized. It's possible the exact values\n of these strings could change in future version of Python.\n\n If an __annotate__ function doesn't support the requested format, it\n must raise NotImplementedError(). __annotate__ functions must always\n support 1 (inspect.VALUE) format; they must not raise\n NotImplementedError() when called with format=1.\n\n When called with format=1, an __annotate__ function may raise\n NameError; it must not raise NameError when called requesting any\n other format.\n\n If an object doesn't have any annotations, __annotate__ should\n preferably be set to None (it can't be deleted), rather than set to a\n function that returns an empty dict.\n\nWhen the Python compiler compiles an object with annotations, it\nsimultaneously compiles the appropriate annotate function. This\nfunction, called with the single positional argument inspect.VALUE,\ncomputes and returns the annotations dict as defined on that object. The\nPython compiler and runtime work in concert to ensure that the function\nis bound to the appropriate namespaces:\n\n- For functions and classes, the globals dictionary will be the module\n where the object was defined. If the object is itself a module, its\n globals dictionary will be its own dict.\n- For methods on classes, and for classes, the locals dictionary will\n be the class dictionary.\n- If the annotations refer to free variables, the closure will be the\n appropriate closure tuple containing cells for free variables.\n\nSecond, this PEP requires that the existing __annotations__ must be a\n\"data descriptor\", implementing all three actions: get, set, and delete.\n__annotations__ must also have its own internal storage it uses to cache\na reference to the annotations dict:\n\n- Class and module objects must cache the annotations dict in their\n __dict__, using the key __annotations__. This is required for\n backwards compatibility reasons.\n- For function objects, storage for the annotations dict cache is an\n implementation detail. It's preferably internal to the function\n object and not visible in Python.\n\nThis PEP defines semantics on how __annotations__ and __annotate__\ninteract, for all three types that implement them. In the following\nexamples, fn represents a function, cls represents a class, mod\nrepresents a module, and o represents an object of any of these three\ntypes:\n\n- When o.__annotations__ is evaluated, and the internal storage for\n o.__annotations__ is unset, and o.__annotate__ is set to a callable,\n the getter for o.__annotations__ calls o.__annotate__(1), then\n caches the result in its internal storage and returns the result.\n - To explicitly clarify one question that has come up multiple\n times: this o.__annotations__ cache is the only caching\n mechanism defined in this PEP. There are no other caching\n mechanisms defined in this PEP. The __annotate__ functions\n generated by the Python compiler explicitly don't cache any of\n the values they compute.\n- Setting o.__annotate__ to a callable invalidates the cached\n annotations dict.\n- Setting o.__annotate__ to None has no effect on the cached\n annotations dict.\n- Deleting o.__annotate__ raises TypeError. 
__annotate__ must always\n be set; this prevents unannotated subclasses from inheriting the\n __annotate__ method of one of their base classes.\n- Setting o.__annotations__ to a legal value automatically sets\n o.__annotate__ to None.\n - Setting cls.__annotations__ or mod.__annotations__ to None\n otherwise works like any other attribute; the attribute is set\n to None.\n - Setting fn.__annotations__ to None invalidates the cached\n annotations dict. If fn.__annotations__ doesn't have a cached\n annotations value, and fn.__annotate__ is None, the\n fn.__annotations__ data descriptor creates, caches, and returns\n a new empty dict. (This is for backwards compatibility with PEP\n 3107 semantics.)\n\nChanges to allowable annotations syntax\n\n__annotate__ now delays the evaluation of annotations until\n__annotations__ is referenced in the future. It also means annotations\nare evaluated in a new function, rather than in the original context\nwhere the object they were defined on was bound. There are four\noperators with significant runtime side-effects that were permitted in\nstock semantics, but are disallowed when\nfrom __future__ import annotations is active, and will have to be\ndisallowed when this PEP is active:\n\n- :=\n- yield\n- yield from\n- await\n\nChanges to inspect.get_annotations and typing.get_type_hints\n\n(This PEP makes frequent reference to these two functions. In the future\nit will refer to them collectively as \"the helper functions\", as they\nhelp user code work with annotations.)\n\nThese two functions extract and return the annotations from an object.\ninspect.get_annotations returns the annotations unchanged; for the\nconvenience of static typing users, typing.get_type_hints makes some\nmodifications to the annotations before it returns them.\n\nThis PEP adds a new keyword-only parameter to these two functions,\nformat. format specifies what format the values in the annotations dict\nshould be returned in. The format parameter on these two functions\naccepts the same values as the format parameter on the __annotate__\nmagic method defined above; however, these format parameters also have a\ndefault value of inspect.VALUE.\n\nWhen either __annotations__ or __annotate__ is updated on an object, the\nother of those two attributes is now out-of-date and should also either\nbe updated or deleted (set to None, in the case of __annotate__ which\ncannot be deleted). In general, the semantics established in the\nprevious section ensure that this happens automatically. However,\nthere's one case which for all practical purposes can't be handled\nautomatically: when the dict cached by o.__annotations__ is itself\nmodified, or when mutable values inside that dict are modified.\n\nSince this can't be handled in code, it must be handled in\ndocumentation. This PEP proposes amending the documentation for\ninspect.get_annotations (and similarly for typing.get_type_hints) as\nfollows:\n\n If you directly modify the __annotations__ dict on an object, by\n default these changes may not be reflected in the dictionary returned\n by inspect.get_annotations when requesting either SOURCE or FORWARDREF\n format on that object. 
    Rather than modifying the __annotations__ dict directly, consider replacing that object's __annotate__ method with a function computing the annotations dict with your desired values. Failing that, it's best to overwrite the object's __annotate__ method with None to prevent inspect.get_annotations from generating stale results for SOURCE and FORWARDREF formats.

The stringizer and the fake globals environment

As originally proposed, this PEP supported many runtime annotation user use cases, and many static type user use cases. But this was insufficient--this PEP could not be accepted until it satisfied all extant use cases. This became a longtime blocker of this PEP until Carl Meyer proposed the "stringizer" and the "fake globals" environment as described below. These techniques allow this PEP to support both the FORWARDREF and SOURCE formats, ably satisfying all remaining use cases.

In a nutshell, this technique involves running a Python-compiler-generated __annotate__ function in an exotic runtime environment. Its normal globals dict is replaced with what's called a "fake globals" dict. A "fake globals" dict is a dict with one important difference: every time you "get" a key from it that isn't mapped, it creates, caches, and returns a new value for that key (as per the __missing__ callback for a dictionary). That value is an instance of a novel type referred to as a "stringizer".

A "stringizer" is a Python class with highly unusual behavior. Every stringizer is initialized with its "value", initially the name of the missing key in the "fake globals" dict. The stringizer then implements every Python "dunder" method used to implement operators, and the value returned by that method is a new stringizer whose value is a text representation of that operation.

When these stringizers are used in expressions, the result of the expression is a new stringizer whose name textually represents that expression. For example, let's say you have a variable f, which is a reference to a stringizer initialized with the value 'f'. Here are some examples of operations you could perform on f and the values they would return:

    >>> f
    Stringizer('f')
    >>> f + 3
    Stringizer('f + 3')
    >>> f["key"]
    Stringizer('f["key"]')

Bringing it all together: if we run a Python-generated __annotate__ function, but we replace its globals with a "fake globals" dict, all undefined symbols it references will be replaced with stringizer proxy objects representing those symbols, and any operations performed on those proxies will in turn result in proxies representing that expression. This allows __annotate__ to complete, and to return an annotations dict, with stringizer instances standing in for names and entire expressions that could not have otherwise been evaluated.

In practice, the "stringizer" functionality will be implemented in the ForwardRef object currently defined in the typing module. ForwardRef will be extended to implement all stringizer functionality; it will also be extended to support evaluating the string it contains, to produce the real value (assuming all symbols referenced are defined). This means the ForwardRef object will retain references to the appropriate "globals", "locals", and even "closure" information needed to evaluate the expression.

This technique is the core of how inspect.get_annotations supports FORWARDREF and SOURCE formats.
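A deliberately tiny sketch of the two pieces described above (the real implementation lives on typing.ForwardRef and covers far more dunder methods):

    class Stringizer:
        """Proxy whose every operation returns a new textual proxy."""

        def __init__(self, value):
            self.value = value

        def __repr__(self):
            return "Stringizer(%r)" % self.value

        def __add__(self, other):
            return Stringizer("%s + %r" % (self.value, other))

        def __getitem__(self, key):
            return Stringizer("%s[%r]" % (self.value, key))

        # ...plus the other operator dunders, __getattr__, __call__, etc.

    class FakeGlobals(dict):
        """Dict that invents a Stringizer for every unknown name."""

        def __missing__(self, key):
            proxy = Stringizer(key)
            self[key] = proxy   # cache, so one name maps to one proxy
            return proxy

    ns = FakeGlobals()
    print(ns["f"] + 3)        # Stringizer('f + 3')
    print(ns["List"]["int"])  # Stringizer("List['int']")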
Initially, inspect.get_annotations will\ncall the object's __annotate__ method requesting the desired format. If\nthat raises NotImplementedError, inspect.get_annotations will construct\na \"fake globals\" environment, then call the object's __annotate__\nmethod.\n\n- inspect.get_annotations produces SOURCE format by creating a new\n empty \"fake globals\" dict, binding it to the object's __annotate__\n method, calling that requesting VALUE format, and then extracting\n the string \"value\" from each ForwardRef object in the resulting\n dict.\n- inspect.get_annotations produces FORWARDREF format by creating a new\n empty \"fake globals\" dict, pre-populating it with the current\n contents of the __annotate__ method's globals dict, binding the\n \"fake globals\" dict to the object's __annotate__ method, calling\n that requesting VALUE format, and returning the result.\n\nThis entire technique works because the __annotate__ functions generated\nby the compiler are controlled by Python itself, and are simple and\npredictable. They're effectively a single return statement, computing\nand returning the annotations dict. Since most operations needed to\ncompute an annotation are implemented in Python using dunder methods,\nand the stringizer supports all the relevant dunder methods, this\napproach is a reliable, practical solution.\n\nHowever, it's not reasonable to attempt this technique with just any\n__annotate__ method. This PEP assumes that third-party libraries may\nimplement their own __annotate__ methods, and those functions would\nalmost certainly work incorrectly when run in this \"fake globals\"\nenvironment. For that reason, this PEP allocates a flag on code objects,\none of the unused bits in co_flags, to mean \"This code object can be run\nin a 'fake globals' environment.\" This makes the \"fake globals\"\nenvironment strictly opt-in, and it's expected that only __annotate__\nmethods generated by the Python compiler will set it.\n\nThe weakness in this technique is in handling operators which don't\ndirectly map to dunder methods on an object. These are all operators\nthat implement some manner of flow control, either branching or\niteration:\n\n- Short-circuiting or\n- Short-circuiting and\n- Ternary operator (the if / then operator)\n- Generator expressions\n- List / dict / set comprehensions\n- Iterable unpacking\n\nAs a rule these techniques aren't used in annotations, so it doesn't\npose a problem in practice. However, the recent addition of TypeVarTuple\nto Python does use iterable unpacking. The dunder methods involved\n(__iter__ and __next__) don't permit distinguishing between iteration\nuse cases; in order to correctly detect which use case was involved,\nmere \"fake globals\" and a \"stringizer\" wouldn't be sufficient; this\nwould require a custom bytecode interpreter designed specifically around\nproducing SOURCE and FORWARDREF formats.\n\nThankfully there's a shortcut that will work fine: the stringizer will\nsimply assume that when its iteration dunder methods are called, it's in\nservice of iterator unpacking being performed by TypeVarTuple. It will\nhard-code this behavior. This means no other technique using iteration\nwill work, but in practice this won't inconvenience real-world use\ncases.\n\nFinally, note that the \"fake globals\" environment will also require\nconstructing a matching \"fake locals\" dictionary, which for FORWARDREF\nformat will be pre-populated with the relevant locals dict. 
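Putting the pieces together, this is roughly how the three formats would behave under the proposed semantics for a function whose annotations mention a not-yet-defined name (Widget); the reprs shown are illustrative:

    import inspect

    def refresh(target: Widget, timeout: float = 1.0) -> None:
        ...   # Widget is not defined yet; under this PEP that's fine

    try:
        inspect.get_annotations(refresh, format=inspect.VALUE)
    except NameError:
        pass   # Widget is still undefined, so VALUE format fails

    inspect.get_annotations(refresh, format=inspect.FORWARDREF)
    # {'target': ForwardRef('Widget'), 'timeout': <class 'float'>, 'return': None}

    inspect.get_annotations(refresh, format=inspect.SOURCE)
    # {'target': 'Widget', 'timeout': 'float', 'return': 'None'}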
The \"fake\nglobals\" environment will also have to create a fake \"closure\", a tuple\nof ForwardRef objects pre-created with the names of the free variables\nreferenced by the __annotate__ method.\n\nForwardRef proxies created from __annotate__ methods that reference free\nvariables will map the names and closure values of those free variables\ninto the locals dictionary, to ensure that eval uses the correct values\nfor those names.\n\nCompiler-generated __annotate__ functions\n\nAs mentioned in the previous section, the __annotate__ functions\ngenerated by the compiler are simple. They're mainly a single return\nstatement, computing and returning the annotations dict.\n\nHowever, the protocol for inspect.get_annotations to request either\nFORWARDREF or SOURCE format requires first asking the __annotate__\nmethod to produce it. __annotate__ methods generated by the Python\ncompiler won't support either of these formats and will raise\nNotImplementedError().\n\nThird-party __annotate__ functions\n\nThird-party classes and functions will likely need to implement their\nown __annotate__ methods, so that downstream users of those objects can\ntake full advantage of annotations. In particular, wrappers will likely\nneed to transform the annotation dicts produced by the wrapped object:\nadding, removing, or modifying the dictionary in some way.\n\nMost of the time, third-party code will implement their __annotate__\nmethods by calling inspect.get_annotations on some existing upstream\nobject. For example, wrappers will likely request the annotations dict\nfor their wrapped object, in the format that was requested from them,\nthen modify the returned annotations dict as appropriate and return\nthat. This allows third-party code to leverage the \"fake globals\"\ntechnique without having to understand or participate in it.\n\nThird-party libraries that support both pre- and post-PEP-649 versions\nof Python will have to innovate their own best practices on how to\nsupport both. One sensible approach would be for their wrapper to always\nsupport __annotate__, then call it requesting VALUE format and store the\nresult as the __annotations__ on their wrapper object. This would\nsupport pre-649 Python semantics, and be forward-compatible with\npost-649 semantics.\n\nPseudocode\n\nHere's high-level pseudocode for inspect.get_annotations:\n\n def get_annotations(o, format):\n if format == VALUE:\n return dict(o.__annotations__)\n\n if format == FORWARDREF:\n try:\n return dict(o.__annotations__)\n except NameError:\n pass\n\n if not hasattr(o.__annotate__):\n return {}\n\n c_a = o.__annotate__\n try:\n return c_a(format)\n except NotImplementedError:\n if not can_be_called_with_fake_globals(c_a):\n return {}\n c_a_with_fake_globals = make_fake_globals_version(c_a, format)\n return c_a_with_fake_globals(VALUE)\n\nHere's what a Python compiler-generated __annotate__ method might look\nlike if it was written in Python:\n\n def __annotate__(self, format):\n if format != 1:\n raise NotImplementedError()\n return { ... }\n\nHere's how a third-party wrapper class might implement __annotate__. 
In\nthis example, the wrapper works like functools.partial, pre-binding one\nparameter of the wrapped callable, which for simplicity must be named\narg:\n\n def __annotate__(self, format):\n ann = inspect.get_annotations(self.wrapped_fn, format)\n if 'arg' in ann:\n del ann['arg']\n return ann\n\nOther modifications to the Python runtime\n\nThis PEP does not dictate exactly how it should be implemented; that is\nleft up to the language implementation maintainers. However, the best\nimplementation of this PEP may require adding additional information to\nexisting Python objects, which is implicitly condoned by the acceptance\nof this PEP.\n\nFor example, it may be necessary to add a __globals__ attribute to class\nobjects, so that the __annotate__ function for that class can be lazily\nbound, only on demand. Also, __annotate__ functions defined on methods\ndefined in a class may need to retain a reference to the class's\n__dict__, in order to correctly evaluate names bound in that class. It's\nexpected that the CPython implementation of this PEP will include both\nthose new attributes.\n\nAll such new information added to existing Python objects should be done\nwith \"dunder\" attributes, as they will of course be implementation\ndetails.\n\nInteractive REPL Shell\n\nThe semantics established in this PEP also hold true when executing code\nin Python's interactive REPL shell, except for module annotations in the\ninteractive module (__main__) itself. Since that module is never\n\"finished\", there's no specific point where we can compile the\n__annotate__ function.\n\nFor the sake of simplicity, in this case we forego delayed evaluation.\nModule-level annotations in the REPL shell will continue to work exactly\nas they do with \"stock semantics\", evaluating immediately and setting\nthe result directly inside the __annotations__ dict.\n\nAnnotations On Local Variables Inside Functions\n\nPython supports syntax for local variable annotations inside functions.\nHowever, these annotations have no runtime effect--they're discarded at\ncompile-time. Therefore, this PEP doesn't need to do anything to support\nthem, the same as stock semantics and PEP 563.\n\nPrototype\n\nThe original prototype implementation of this PEP can be found here:\n\nhttps://github.com/larryhastings/co_annotations/\n\nAs of this writing, the implementation is severely out of date; it's\nbased on Python 3.10 and implements the semantics of the first draft of\nthis PEP, from early 2021. It will be updated shortly.\n\nPerformance Comparison\n\nPerformance with this PEP is generally favorable. There are four\nscenarios to consider:\n\n- the runtime cost when annotations aren't defined,\n- the runtime cost when annotations are defined but not referenced,\n and\n- the runtime cost when annotations are defined and referenced as\n objects.\n- the runtime cost when annotations are defined and referenced as\n strings.\n\nWe'll examine each of these scenarios in the context of all three\nsemantics for annotations: stock, PEP 563, and this PEP.\n\nWhen there are no annotations, all three semantics have the same runtime\ncost: zero. No annotations dict is created and no code is generated for\nit. This requires no runtime processor time and consumes no memory.\n\nWhen annotations are defined but not referenced, the runtime cost of\nPython with this PEP is roughly the same as PEP 563, and improved over\nstock. 
The specifics depend on the object being annotated:\n\n- With stock semantics, the annotations dict is always built, and set\n as an attribute of the object being annotated.\n- In PEP 563 semantics, for function objects, a precompiled constant\n (a specially constructed tuple) is set as an attribute of the\n function. For class and module objects, the annotations dict is\n always built and set as an attribute of the class or module.\n- With this PEP, a single object is set as an attribute of the object\n being annotated. Most of the time, this object is a constant (a code\n object), but when the annotations require a class namespace or\n closure, this object will be a tuple constructed at binding time.\n\nWhen annotations are both defined and referenced as objects, code using\nthis PEP should be much faster than PEP 563, and be as fast or faster\nthan stock. PEP 563 semantics requires invoking eval() for every value\ninside an annotations dict which is enormously slow. And the\nimplementation of this PEP generates measurably more efficient bytecode\nfor class and module annotations than stock semantics; for function\nannotations, this PEP and stock semantics should be about the same\nspeed.\n\nThe one case where this PEP will be noticeably slower than PEP 563 is\nwhen annotations are requested as strings; it's hard to beat \"they are\nalready strings.\" But stringified annotations are intended for online\ndocumentation use cases, where performance is less likely to be a key\nfactor.\n\nMemory use should also be comparable in all three scenarios across all\nthree semantic contexts. In the first and third scenarios, memory usage\nshould be roughly equivalent in all cases. In the second scenario, when\nannotations are defined but not referenced, using this PEP's semantics\nwill mean the function/class/module will store one unused code object\n(possibly bound to an unused function object); with the other two\nsemantics, they'll store one unused dictionary or constant tuple.\n\nBackwards Compatibility\n\nBackwards Compatibility With Stock Semantics\n\nThis PEP preserves nearly all existing behavior of annotations from\nstock semantics:\n\n- The format of the annotations dict stored in the __annotations__\n attribute is unchanged. Annotations dicts contain real values, not\n strings as per PEP 563.\n- Annotations dicts are mutable, and any changes to them are\n preserved.\n- The __annotations__ attribute can be explicitly set, and any legal\n value set this way will be preserved.\n- The __annotations__ attribute can be deleted using the del\n statement.\n\nMost code that works with stock semantics should continue to work when\nthis PEP is active without any modification necessary. But there are\nexceptions, as follows.\n\nFirst, there's a well-known idiom for accessing class annotations which\nmay not work correctly when this PEP is active. The original\nimplementation of class annotations had what can only be called a bug:\nif a class didn't define any annotations of its own, but one of its base\nclasses did define annotations, the class would \"inherit\" those\nannotations. This behavior was never desirable, so user code found a\nworkaround: instead of accessing the annotations on the class directly\nvia cls.__annotations__, code would access the class's annotations via\nits dict as in cls.__dict__.get(\"__annotations__\", {}). This idiom\nworked because classes stored their annotations in their __dict__, and\naccessing them this way avoided the lookups in the base classes. 
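To make the idiom concrete, here is a hedged sketch of the old (pre-3.10) behavior being described; it is an illustration, not output captured from any particular interpreter:\n\n    class Base:\n        a: int\n\n    class Derived(Base):\n        pass\n\n    # Attribute lookup falls through to the base class under the old behavior:\n    print(Derived.__annotations__)                      # {'a': <class 'int'>}\n    # The workaround idiom reads only the class's own dict:\n    print(Derived.__dict__.get(\"__annotations__\", {}))  # {}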
The\ntechnique relied on implementation details of CPython, so it was never\nsupported behavior--though it was necessary. However, when this PEP is\nactive, a class may have annotations defined but hasn't yet called\n__annotate__ and cached the result, in which case this approach would\nlead to mistakenly assuming the class didn't have annotations. In any\ncase, the bug was fixed as of Python 3.10, and the idiom should no\nlonger be used. Also as of Python 3.10, there's an Annotations HOWTO\nthat defines best practices for working with annotations; code that\nfollows these guidelines will work correctly even when this PEP is\nactive, because it suggests using different approaches to get\nannotations from class objects based on the Python version the code runs\nunder.\n\nSince delaying the evaluation of annotations until they are introspected\nchanges the semantics of the language, it's observable from within the\nlanguage. Therefore it's possible to write code that behaves differently\nbased on whether annotations are evaluated at binding time or at access\ntime, e.g.\n\n mytype = str\n def foo(a:mytype): pass\n mytype = int\n print(foo.__annotations__['a'])\n\nThis will print with stock semantics and \nwhen this PEP is active. This is therefore a backwards-incompatible\nchange. However, this example is poor programming style, so this change\nseems acceptable.\n\nThere are two uncommon interactions possible with class and module\nannotations that work with stock semantics that would no longer work\nwhen this PEP was active. These two interactions would have to be\nprohibited. The good news is, neither is common, and neither is\nconsidered good practice. In fact, they're rarely seen outside of\nPython's own regression test suite. They are:\n\n- Code that sets annotations on module or class attributes from inside\n any kind of flow control statement. It's currently possible to set\n module and class attributes with annotations inside an if or try\n statement, and it works as one would expect. It's untenable to\n support this behavior when this PEP is active.\n- Code in module or class scope that references or modifies the local\n __annotations__ dict directly. Currently, when setting annotations\n on module or class attributes, the generated code simply creates a\n local __annotations__ dict, then adds mappings to it as needed. It's\n possible for user code to directly modify this dict, though this\n doesn't seem to be an intentional feature. Although it would be\n possible to support this after a fashion once this PEP was active,\n the semantics would likely be surprising and wouldn't make anyone\n happy.\n\nNote that these are both also pain points for static type checkers, and\nare unsupported by those tools. It seems reasonable to declare that both\nare at the very least unsupported, and their use results in undefined\nbehavior. It might be worth making a small effort to explicitly prohibit\nthem with compile-time checks.\n\nFinally, if this PEP is active, annotation values shouldn't use the\nif / else ternary operator. Although this will work correctly when\naccessing o.__annotations__ or requesting inspect.VALUE from a helper\nfunction, the boolean expression may not compute correctly with\ninspect.FORWARDREF when some names are defined, and would be far less\ncorrect with inspect.SOURCE.\n\nBackwards Compatibility With PEP 563 Semantics\n\nPEP 563 changed the semantics of annotations. 
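As a brief refresher, here is a minimal sketch of those stringizing semantics, assuming only a module that opts in via the __future__ import:\n\n    from __future__ import annotations  # activates PEP 563 in this module\n\n    def greet(name: str) -> str:\n        return \"hello \" + name\n\n    # The annotation values are stored as strings, not evaluated objects:\n    print(greet.__annotations__)   # {'name': 'str', 'return': 'str'}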
When its semantics are\nactive, annotations must assume they will be evaluated in module-level\nor class-level scope. They may no longer refer directly to local\nvariables in the current function or an enclosing function. This PEP\nremoves that restriction, and annotations may refer any local variable.\n\nPEP 563 requires using eval (or a helper function like\ntyping.get_type_hints or inspect.get_annotations that uses eval for you)\nto convert stringized annotations into their \"real\" values. Existing\ncode that activates stringized annotations, and calls eval() directly to\nconvert the strings back into real values, can simply remove the eval()\ncall. Existing code using a helper function would continue to work\nunchanged, though use of those functions may become optional.\n\nStatic typing users often have modules that only contain inert type hint\ndefinitions--but no live code. These modules are only needed when\nrunning static type checking; they aren't used at runtime. But under\nstock semantics, these modules have to be imported in order for the\nruntime to evaluate and compute the annotations. Meanwhile, these\nmodules often caused circular import problems that could be difficult or\neven impossible to solve. PEP 563 allowed users to solve these circular\nimport problems by doing two things. First, they activated PEP 563 in\ntheir modules, which meant annotations were constant strings, and didn't\nrequire the real symbols to be defined in order for the annotations to\nbe computable. Second, this permitted users to only import the\nproblematic modules in an if typing.TYPE_CHECKING block. This allowed\nthe static type checkers to import the modules and the type definitions\ninside, but they wouldn't be imported at runtime. So far, this approach\nwill work unchanged when this PEP is active; if typing.TYPE_CHECKING is\nsupported behavior.\n\nHowever, some codebases actually did examine their annotations at\nruntime, even when using the if typing.TYPE_CHECKING technique and not\nimporting definitions used in their annotations. These codebases\nexamined the annotation strings without evaluating them, instead relying\non identity checks or simple lexical analysis on the strings.\n\nThis PEP supports these techniques too. But users will need to port\ntheir code to it. First, user code will need to use\ninspect.get_annotations or typing.get_type_hints to access the\nannotations; they won't be able to simply get the __annotations__\nattribute from their object. Second, they will need to specify either\ninspect.FORWARDREF or inspect.SOURCE for the format when calling that\nfunction. This means the helper function can succeed in producing the\nannotations dict, even when not all the symbols are defined. Code\nexpecting stringized annotations should work unmodified with\ninspect.SOURCE formatted annotations dicts; however, users should\nconsider switching to inspect.FORWARDREF, as it may make their analysis\neasier.\n\nSimilarly, PEP 563 permitted use of class decorators on annotated\nclasses in a way that hadn't previously been possible. Some class\ndecorators (e.g. dataclasses) examine the annotations on the class.\nBecause class decorators using the @ decorator syntax are run before the\nclass name is bound, they can cause unsolvable circular-definition\nproblems. 
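For example, a hypothetical decorator that evaluates annotations at decoration time (validate_fields below is invented purely for illustration) runs into exactly this problem when it meets a self-referential annotation:\n\n    import typing\n\n    def validate_fields(cls):\n        # Evaluate the class's annotations while the class is being decorated.\n        typing.get_type_hints(cls)\n        return cls\n\n    @validate_fields\n    class Node:\n        # \"Node\" is not bound yet when validate_fields runs, so evaluating\n        # this annotation raises NameError.\n        next: \"Node | None\" = None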
If you annotate attributes of a class with references to the\nclass itself, or annotate attributes in multiple classes with circular\nreferences to each other, you can't decorate those classes with the @\ndecorator syntax using decorators that examine the annotations. PEP 563\nallowed this to work, as long as the decorators examined the strings\nlexically and didn't use eval to evaluate them (or handled the NameError\nwith further workarounds). When this PEP is active, decorators will be\nable to compute the annotations dict in inspect.SOURCE or\ninspect.FORWARDREF format using the helper functions. This will permit\nthem to analyze annotations containing undefined symbols, in the format\nthey prefer.\n\nEarly adopters of PEP 563 discovered that \"stringized\" annotations were\nuseful for automatically-generated documentation. Users experimented\nwith this use case, and Python's pydoc has expressed some interest in\nthis technique. This PEP supports this use case; the code generating the\ndocumentation will have to be updated to use a helper function to access\nthe annotations in inspect.SOURCE format.\n\nFinally, the warnings about using the if / else ternary operator in\nannotations apply equally to users of PEP 563. It currently works for\nthem, but could produce incorrect results when requesting some formats\nfrom the helper functions.\n\nIf this PEP is accepted, PEP 563 will be deprecated and eventually\nremoved. To facilitate this transition for early adopters of PEP 563,\nwho now depend on its semantics, inspect.get_annotations and\ntyping.get_type_hints will implement a special affordance.\n\nThe Python compiler won't generate annotation code objects for objects\ndefined in a module where PEP 563 semantics are active, even if this PEP\nis accepted. So, under normal circumstances, requesting inspect.SOURCE\nformat from a helper function would return an empty dict. As an\naffordance, to facilitate the transition, if the helper functions detect\nthat an object was defined in a module with PEP 563 active, and the user\nrequests inspect.SOURCE format, they'll return the current value of the\n__annotations__ dict, which in this case will be the stringized\nannotations. This will allow PEP 563 users who lexically analyze\nstringized annotations to immediately change over to requesting\ninspect.SOURCE format from the helper functions, which will hopefully\nsmooth their transition away from PEP 563.\n\nRejected Ideas\n\n\"Just store the strings\"\n\nOne proposed idea for supporting SOURCE format was for the Python\ncompiler to emit the actual source code for the annotation values\nsomewhere, and to furnish that when the user requested SOURCE format.\n\nThis idea wasn't rejected so much as categorized as \"not yet\". We\nalready know we need to support FORWARDREF format, and that technique\ncan be adapted to support SOURCE format in just a few lines. There are\nmany unanswered questions about this approach:\n\n- Where would we store the strings? Would they always be loaded when\n the annotated object was created, or would they be lazy-loaded on\n demand? If so, how would the lazy-loading work?\n- Would the \"source code\" include the newlines and comments of the\n original? 
Would it preserve all whitespace, including indents and\n extra spaces used purely for formatting?\n\nIt's possible we'll revisit this topic in the future, if improving the\nfidelity of SOURCE values to the original source code is judged\nsufficiently important.\n\nAcknowledgements\n\nThanks to Carl Meyer, Barry Warsaw, Eric V. Smith, Mark Shannon, Jelle\nZiljstra, and Guido van Rossum for ongoing feedback and encouragement.\n\nParticular thanks to several individuals who contributed key ideas that\nbecame some of the best aspects of this proposal:\n\n- Carl Meyer suggested the \"stringizer\" technique that made FORWARDREF\n and SOURCE formats possible, which allowed making forward progress\n on this PEP possible after a year of languishing due to\n seemingly-unfixable problems. He also suggested the affordance for\n PEP 563 users where inspect.SOURCE will return the stringized\n annotations, and many more suggestions besides. Carl was also the\n primary correspondent in private email threads discussing this PEP,\n and was a tireless resource and voice of sanity. This PEP would\n almost certainly not have been accepted it were it not for Carl's\n contributions.\n- Mark Shannon suggested building the entire annotations dict inside a\n single code object, and only binding it to a function on demand.\n- Guido van Rossum suggested that __annotate__ functions should\n duplicate the name visibility rules of annotations under \"stock\"\n semantics.\n- Jelle Zijlstra contributed not only feedback--but code!\n\nReferences\n\n- https://github.com/larryhastings/co_annotations/issues\n- https://discuss.python.org/t/two-polls-on-how-to-revise-pep-649/23628\n- https://discuss.python.org/t/a-massive-pep-649-update-with-some-major-course-corrections/25672\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.314245"},"created":{"kind":"timestamp","value":"2021-01-11T00:00:00","string":"2021-01-11T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0649/\",\n \"authors\": [\n \"Larry Hastings\"\n ],\n \"pep_number\": \"0649\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":13,"cells":{"id":{"kind":"string","value":"0539"},"text":{"kind":"string","value":"PEP: 539 Title: A New C-API for Thread-Local Storage in CPython Version:\n$Revision$ Last-Modified: $Date$ Author: Erik M. Bray, Masayuki Yamamoto\nBDFL-Delegate: Alyssa Coghlan Status: Final Type: Standards Track\nContent-Type: text/x-rst Created: 20-Dec-2016 Python-Version: 3.7\nPost-History: 16-Dec-2016, 31-Aug-2017, 08-Sep-2017 Resolution:\nhttps://mail.python.org/pipermail/python-dev/2017-September/149358.html\n\nAbstract\n\nThe proposal is to add a new Thread Local Storage (TLS) API to CPython\nwhich would supersede use of the existing TLS API within the CPython\ninterpreter, while deprecating the existing API. The new API is named\nthe \"Thread Specific Storage (TSS) API\" (see Rationale for Proposed\nSolution for the origin of the name).\n\nBecause the existing TLS API is only used internally (it is not\nmentioned in the documentation, and the header that defines it,\npythread.h, is not included in Python.h either directly or indirectly),\nthis proposal probably only affects CPython, but might also affect other\ninterpreter implementations (PyPy?) 
that implement parts of the CPython\nAPI.\n\nThis is motivated primarily by the fact that the old API uses int to\nrepresent TLS keys across all platforms, which is neither\nPOSIX-compliant, nor portable in any practical sense[1].\n\nNote\n\nThroughout this document the acronym \"TLS\" refers to Thread Local\nStorage and should not be confused with \"Transportation Layer Security\"\nprotocols.\n\nSpecification\n\nThe current API for TLS used inside the CPython interpreter consists of\n6 functions:\n\n PyAPI_FUNC(int) PyThread_create_key(void)\n PyAPI_FUNC(void) PyThread_delete_key(int key)\n PyAPI_FUNC(int) PyThread_set_key_value(int key, void *value)\n PyAPI_FUNC(void *) PyThread_get_key_value(int key)\n PyAPI_FUNC(void) PyThread_delete_key_value(int key)\n PyAPI_FUNC(void) PyThread_ReInitTLS(void)\n\nThese would be superseded by a new set of analogous functions:\n\n PyAPI_FUNC(int) PyThread_tss_create(Py_tss_t *key)\n PyAPI_FUNC(void) PyThread_tss_delete(Py_tss_t *key)\n PyAPI_FUNC(int) PyThread_tss_set(Py_tss_t *key, void *value)\n PyAPI_FUNC(void *) PyThread_tss_get(Py_tss_t *key)\n\nThe specification also adds a few new features:\n\n- A new type Py_tss_t--an opaque type the definition of which may\n depend on the underlying TLS implementation. It is defined:\n\n typedef struct {\n int _is_initialized;\n NATIVE_TSS_KEY_T _key;\n } Py_tss_t;\n\n where NATIVE_TSS_KEY_T is a macro whose value depends on the\n underlying native TLS implementation (e.g. pthread_key_t).\n\n- An initializer for Py_tss_t variables, Py_tss_NEEDS_INIT.\n\n- Three new functions:\n\n PyAPI_FUNC(Py_tss_t *) PyThread_tss_alloc(void)\n PyAPI_FUNC(void) PyThread_tss_free(Py_tss_t *key)\n PyAPI_FUNC(int) PyThread_tss_is_created(Py_tss_t *key)\n\n The first two are needed for dynamic (de-)allocation of a Py_tss_t,\n particularly in extension modules built with Py_LIMITED_API, where\n static allocation of this type is not possible due to its\n implementation being opaque at build time. A value returned by\n PyThread_tss_alloc is in the same state as a value initialized with\n Py_tss_NEEDS_INIT, or NULL in the case of dynamic allocation\n failure. The behavior of PyThread_tss_free involves calling\n PyThread_tss_delete preventively, or is a no-op if the value pointed\n to by the key argument is NULL. PyThread_tss_is_created returns\n non-zero if the given Py_tss_t has been initialized (i.e. by\n PyThread_tss_create).\n\nThe new TSS API does not provide functions which correspond to\nPyThread_delete_key_value and PyThread_ReInitTLS, because these\nfunctions were needed only for CPython's now defunct built-in TLS\nimplementation; that is the existing behavior of these functions is\ntreated as follows: PyThread_delete_key_value(key) is equivalent to\nPyThread_set_key_value(key, NULL), and PyThread_ReInitTLS() is a\nno-op[2].\n\nThe new PyThread_tss_ functions are almost exactly analogous to their\noriginal counterparts with a few minor differences: Whereas\nPyThread_create_key takes no arguments and returns a TLS key as an int,\nPyThread_tss_create takes a Py_tss_t* as an argument and returns an int\nstatus code. The behavior of PyThread_tss_create is undefined if the\nvalue pointed to by the key argument is not initialized by\nPy_tss_NEEDS_INIT. The returned status code is zero on success and\nnon-zero on failure. The meanings of non-zero status codes are not\notherwise defined by this specification.\n\nSimilarly the other PyThread_tss_ functions are passed a Py_tss_t*\nwhereas previously the key was passed by value. 
This change is\nnecessary, as being an opaque type, the Py_tss_t type could\nhypothetically be almost any size. This is especially necessary for\nextension modules built with Py_LIMITED_API, where the size of the type\nis not known. Except for PyThread_tss_free, the behaviors of\nPyThread_tss_ are undefined if the value pointed to by the key argument\nis NULL.\n\nMoreover, because of the use of Py_tss_t instead of int, there are\nbehaviors in the new API which differ from the existing API with regard\nto key creation and deletion. PyThread_tss_create can be called\nrepeatedly on the same key--calling it on an already initialized key is\na no-op and immediately returns success. Similarly for calling\nPyThread_tss_delete with an uninitialized key.\n\nThe behavior of PyThread_tss_delete is defined to change the key's\ninitialization state to \"uninitialized\"--this allows, for example,\nstatically allocated keys to be reset to a sensible state when\nrestarting the CPython interpreter without terminating the process (e.g.\nembedding Python in an application)[3].\n\nThe old PyThread_*_key* functions will be marked as deprecated in the\ndocumentation, but will not generate runtime deprecation warnings.\n\nAdditionally, on platforms where sizeof(pthread_key_t) != sizeof(int),\nPyThread_create_key will return immediately with a failure status, and\nthe other TLS functions will all be no-ops on such platforms.\n\nComparison of API Specification\n\n+-------------------+-----------------------+-----------------------+\n| API | Thread Local Storage | Thread Specific |\n| | (TLS) | Storage (TSS) |\n+===================+=======================+=======================+\n| Version | Existing | New |\n+-------------------+-----------------------+-----------------------+\n| Key Type | int | Py_tss_t (opaque |\n| | | type) |\n+-------------------+-----------------------+-----------------------+\n| Handle Native Key | cast to int | conceal into internal |\n| | | field |\n+-------------------+-----------------------+-----------------------+\n| Function Argument | int | Py_tss_t * |\n+-------------------+-----------------------+-----------------------+\n| Features | - create key | - create key |\n| | - delete key | - delete key |\n| | - set value | - set value |\n| | - get value | - get value |\n| | - delete value | - (set NULL |\n| | - reinitialize keys | instead)[7] |\n| | (after fork) | - (unnecessary)[8] |\n| | | - dynamically |\n| | | (de-)allocate key |\n| | | - check key's |\n| | | initialization |\n| | | state |\n+-------------------+-----------------------+-----------------------+\n| Key Initializer | (-1 as key creation | Py_tss_NEEDS_INIT |\n| | failure) | |\n+-------------------+-----------------------+-----------------------+\n| Requirement | native threads (since | native threads |\n| | CPython 3.7[9]) | |\n+-------------------+-----------------------+-----------------------+\n| Restriction | No support for | Unable to statically |\n| | platforms where | allocate keys when |\n| | native TLS key is | Py_LIMITED_API is |\n| | defined in a way that | defined. |\n| | cannot be safely cast | |\n| | to int. | |\n+-------------------+-----------------------+-----------------------+\n\nExample\n\nWith the proposed changes, a TSS key is initialized like:\n\n static Py_tss_t tss_key = Py_tss_NEEDS_INIT;\n if (PyThread_tss_create(&tss_key)) {\n /* ... handle key creation failure ... 
*/\n }\n\nThe initialization state of the key can then be checked like:\n\n assert(PyThread_tss_is_created(&tss_key));\n\nThe rest of the API is used analogously to the old API:\n\n int the_value = 1;\n if (PyThread_tss_get(&tss_key) == NULL) {\n PyThread_tss_set(&tss_key, (void *)&the_value);\n assert(PyThread_tss_get(&tss_key) != NULL);\n }\n /* ... once done with the key ... */\n PyThread_tss_delete(&tss_key);\n assert(!PyThread_tss_is_created(&tss_key));\n\nWhen Py_LIMITED_API is defined, a TSS key must be dynamically allocated:\n\n static Py_tss_t *ptr_key = PyThread_tss_alloc();\n if (ptr_key == NULL) {\n /* ... handle key allocation failure ... */\n }\n assert(!PyThread_tss_is_created(ptr_key));\n /* ... once done with the key ... */\n PyThread_tss_free(ptr_key);\n ptr_key = NULL;\n\nPlatform Support Changes\n\nA new \"Native Thread Implementation\" section will be added to PEP 11\nthat states:\n\n- As of CPython 3.7, all platforms are required to provide a native\n thread implementation (such as pthreads or Windows) to implement the\n TSS API. Any TSS API problems that occur in an implementation\n without native threads will be closed as \"won't fix\".\n\nMotivation\n\nThe primary problem at issue here is the type of the keys (int) used for\nTLS values, as defined by the original PyThread TLS API.\n\nThe original TLS API was added to Python by GvR back in 1997, and at the\ntime the key used to represent a TLS value was an int, and so it has\nbeen to the time of writing. This used CPython's own TLS implementation\nwhich long remained unused, largely unchanged, in Python/thread.c.\nSupport for implementation of the API on top of native thread\nimplementations (pthreads and Windows) was added much later, and the\nbuilt-in implementation has been deemed no longer necessary and has\nsince been removed[10].\n\nThe problem with the choice of int to represent a TLS key, is that while\nit was fine for CPython's own TLS implementation, and happens to be\ncompatible with Windows (which uses DWORD for the analogous data), it is\nnot compatible with the POSIX standard for the pthreads API, which\ndefines pthread_key_t as an opaque type not further defined by the\nstandard (as with Py_tss_t described above)[11]. This leaves it up to\nthe underlying implementation how a pthread_key_t value is used to look\nup thread-specific data.\n\nThis has not generally been a problem for Python's API, as it just\nhappens that on Linux pthread_key_t is defined as an unsigned int, and\nso is fully compatible with Python's TLS API--pthread_key_t's created by\npthread_create_key can be freely cast to int and back (well, not\nexactly, even this has some limitations as pointed out by issue #22206).\n\nHowever, as issue #25658 points out, there are at least some platforms\n(namely Cygwin, CloudABI, but likely others as well) which have\notherwise modern and POSIX-compliant pthreads implementations, but are\nnot compatible with Python's API because their pthread_key_t is defined\nin a way that cannot be safely cast to int. 
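As a hedged illustration of the mismatch--not drawn from any particular platform's headers--consider a platform whose native key type is a pointer:\n\n    #include <stdint.h>\n\n    /* A hypothetical platform that defines its native TLS key as a pointer. */\n    typedef struct example_key *example_key_t;\n\n    /* The old API's round trip through int discards the upper bits on LP64\n       systems, where pointers are 64 bits wide but int is only 32. */\n    static int\n    as_old_style_key(example_key_t key)\n    {\n        return (int)(intptr_t)key;\n    }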
In fact, the possibility of\nrunning into this problem was raised by MvL at the time pthreads TLS was\nadded[12].\n\nIt could be argued that PEP 11 makes specific requirements for\nsupporting a new, not otherwise officially-support platform (such as\nCloudABI), and that the status of Cygwin support is currently dubious.\nHowever, this creates a very high barrier to supporting platforms that\nare otherwise Linux- and/or POSIX-compatible and where CPython might\notherwise \"just work\" except for this one hurdle. CPython itself imposes\nthis implementation barrier by way of an API that is not compatible with\nPOSIX (and in fact makes invalid assumptions about pthreads).\n\nRationale for Proposed Solution\n\nThe use of an opaque type (Py_tss_t) to key TLS values allows the API to\nbe compatible with all present (POSIX and Windows) and future (C11?)\nnative TLS implementations supported by CPython, as it allows the\ndefinition of Py_tss_t to depend on the underlying implementation.\n\nSince the existing TLS API has been available in the limited API[13] for\nsome platforms (e.g. Linux), CPython makes an effort to provide the new\nTSS API at that level likewise. Note, however, that the Py_tss_t\ndefinition becomes to be an opaque struct when Py_LIMITED_API is\ndefined, because exposing NATIVE_TSS_KEY_T as part of the limited API\nwould prevent us from switching native thread implementation without\nrebuilding extension modules.\n\nA new API must be introduced, rather than changing the function\nsignatures of the current API, in order to maintain backwards\ncompatibility. The new API also more clearly groups together these\nrelated functions under a single name prefix, PyThread_tss_. The \"tss\"\nin the name stands for \"thread-specific storage\", and was influenced by\nthe naming and design of the \"tss\" API that is part of the C11 threads\nAPI[14]. However, this is in no way meant to imply compatibility with or\nsupport for the C11 threads API, or signal any future intention of\nsupporting C11--it's just the influence for the naming and design.\n\nThe inclusion of the special initializer Py_tss_NEEDS_INIT is required\nby the fact that not all native TLS implementations define a sentinel\nvalue for uninitialized TLS keys. For example, on Windows a TLS key is\nrepresented by a DWORD (unsigned int) and its value must be treated as\nopaque[15]. So there is no unsigned integer value that can be safely\nused to represent an uninitialized TLS key on Windows. Likewise, POSIX\ndoes not specify a sentinel for an uninitialized pthread_key_t, instead\nrelying on the pthread_once interface to ensure that a given TLS key is\ninitialized only once per-process. Therefore, the Py_tss_t type contains\nan explicit ._is_initialized that can indicate the key's initialization\nstate independent of the underlying implementation.\n\nChanging PyThread_create_key to immediately return a failure status on\nsystems using pthreads where sizeof(int) != sizeof(pthread_key_t) is\nintended as a sanity check: Currently, PyThread_create_key may report\ninitial success on such systems, but attempts to use the returned key\nare likely to fail. Although in practice this failure occurs earlier in\nthe interpreter initialization, it's better to fail immediately at the\nsource of problem (PyThread_create_key) rather than sometime later when\nuse of an invalid key is attempted. 
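A minimal sketch of what such a guard could look like on a pthreads build follows; the actual CPython change may differ in detail:\n\n    #include <pthread.h>\n\n    int\n    PyThread_create_key(void)\n    {\n        pthread_key_t key;\n\n        /* Refuse to hand out keys that cannot survive the round trip through\n           int; -1 already signals key-creation failure in the old API. */\n        if (sizeof(pthread_key_t) != sizeof(int)) {\n            return -1;\n        }\n        if (pthread_key_create(&key, NULL) != 0) {\n            return -1;\n        }\n        /* The cast assumes an integer-like key type, as on Linux. */\n        return (int)key;\n    }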
In other words, this indicates\nclearly that the old API is not supported on platforms where it cannot\nbe used reliably, and that no effort will be made to add such support.\n\nRejected Ideas\n\n- Do nothing: The status quo is fine because it works on Linux, and\n platforms wishing to be supported by CPython should follow the\n requirements of PEP 11. As explained above, while this would be a\n fair argument if CPython were being to asked to make changes to\n support particular quirks or features of a specific platform, in\n this case it is a quirk of CPython that prevents it from being used\n to its full potential on otherwise POSIX-compliant platforms. The\n fact that the current implementation happens to work on Linux is a\n happy accident, and there's no guarantee that this will never\n change.\n- Affected platforms should just configure Python --without-threads:\n this is no longer an option as the --without-threads option has been\n removed for Python 3.7[16].\n- Affected platforms should use CPython's built-in TLS implementation\n instead of a native TLS implementation: This is a more acceptable\n alternative to the previous idea, and in fact there had been a patch\n to do just that[17]. However, the built-in implementation being\n \"slower and clunkier\" in general than native implementations still\n needlessly hobbles performance on affected platforms. At least one\n other module (tracemalloc) is also broken if Python is built without\n a native TLS implementation. This idea also cannot be adopted\n because the built-in implementation has since been removed.\n- Keep the existing API, but work around the issue by providing a\n mapping from pthread_key_t values to int values. A couple attempts\n were made at this ([18],[19]), but this injects needless complexity\n and overhead into performance-critical code on platforms that are\n not currently affected by this issue (such as Linux). Even if use of\n this workaround were made conditional on platform compatibility, it\n introduces platform-specific code to maintain, and still has the\n problem of the previous rejected ideas of needlessly hobbling\n performance on affected platforms.\n\nImplementation\n\nAn initial version of a patch[20] is available on the bug tracker for\nthis issue. Since the migration to GitHub, its development has continued\nin the pep539-tss-api feature branch[21] in Masayuki Yamamoto's fork of\nthe CPython repository on GitHub. 
A work-in-progress PR is available\nat[22].\n\nThis reference implementation covers not only the new API implementation\nfeatures, but also the client code updates needed to replace the\nexisting TLS API with the new TSS API.\n\nCopyright\n\nThis document has been placed in the public domain.\n\nReferences and Footnotes\n\n[1] http://bugs.python.org/issue25658\n\n[2] https://bugs.python.org/msg298342\n\n[3] https://docs.python.org/3/c-api/init.html#c.Py_FinalizeEx\n\n[4] https://bugs.python.org/msg298342\n\n[5] https://bugs.python.org/msg298342\n\n[6] http://bugs.python.org/issue30832\n\n[7] https://bugs.python.org/msg298342\n\n[8] https://bugs.python.org/msg298342\n\n[9] http://bugs.python.org/issue30832\n\n[10] http://bugs.python.org/issue30832\n\n[11] http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_key_create.html\n\n[12] https://bugs.python.org/msg116292\n\n[13] It is also called as \"stable ABI\" (PEP 384)\n\n[14] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf#page=404\n\n[15] https://msdn.microsoft.com/en-us/library/windows/desktop/ms686801(v=vs.85).aspx\n\n[16] https://bugs.python.org/issue31370\n\n[17] http://bugs.python.org/file45548/configure-pthread_key_t.patch\n\n[18] http://bugs.python.org/file44269/issue25658-1.patch\n\n[19] http://bugs.python.org/file44303/key-constant-time.diff\n\n[20] http://bugs.python.org/file46379/pythread-tss-3.patch\n\n[21] https://github.com/python/cpython/compare/master...ma8ma:pep539-tss-api\n\n[22] https://github.com/python/cpython/pull/1362"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.343703"},"created":{"kind":"timestamp","value":"2016-12-20T00:00:00","string":"2016-12-20T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0539/\",\n \"authors\": [\n \"Erik M. Bray\",\n \"Masayuki Yamamoto\"\n ],\n \"pep_number\": \"0539\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":14,"cells":{"id":{"kind":"string","value":"0265"},"text":{"kind":"string","value":"PEP: 265 Title: Sorting Dictionaries by Value Author: Grant Griffin\n Status: Rejected Type: Standards Track Content-Type:\ntext/x-rst Created: 08-Aug-2001 Python-Version: 2.2 Post-History:\n\nAbstract\n\nThis PEP suggests a \"sort by value\" operation for dictionaries. The\nprimary benefit would be in terms of \"batteries included\" support for a\ncommon Python idiom which, in its current form, is both difficult for\nbeginners to understand and cumbersome for all to implement.\n\nBDFL Pronouncement\n\nThis PEP is rejected because the need for it has been largely fulfilled\nby Py2.4's sorted() builtin function:\n\n >>> sorted(d.iteritems(), key=itemgetter(1), reverse=True)\n [('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]\n\nor for just the keys:\n\n sorted(d, key=d.__getitem__, reverse=True)\n ['b', 'd', 'c', 'a', 'e']\n\nAlso, Python 2.5's heapq.nlargest() function addresses the common use\ncase of finding only a few of the highest valued items:\n\n >>> nlargest(2, d.iteritems(), itemgetter(1))\n [('b', 23), ('d', 17)]\n\nMotivation\n\nA common use of dictionaries is to count occurrences by setting the\nvalue of d[key] to 1 on its first occurrence, then increment the value\non each subsequent occurrence. 
This can be done several different ways,\nbut the get() method is the most succinct:\n\n d[key] = d.get(key, 0) + 1\n\nOnce all occurrences have been counted, a common use of the resulting\ndictionary is to print the occurrences in occurrence-sorted order, often\nwith the largest value first.\n\nThis leads to a need to sort a dictionary's items by value. The\ncanonical method of doing so in Python is to first use d.items() to get\na list of the dictionary's items, then invert the ordering of each\nitem's tuple from (key, value) into (value, key), then sort the list;\nsince Python sorts the list based on the first item of the tuple, the\nlist of (inverted) items is therefore sorted by value. If desired, the\nlist can then be reversed, and the tuples can be re-inverted back to\n(key, value). (However, in my experience, the inverted tuple ordering is\nfine for most purposes, e.g. printing out the list.)\n\nFor example, given an occurrence count of:\n\n >>> d = {'a':2, 'b':23, 'c':5, 'd':17, 'e':1}\n\nwe might do:\n\n >>> items = [(v, k) for k, v in d.items()]\n >>> items.sort()\n >>> items.reverse() # so largest is first\n >>> items = [(k, v) for v, k in items]\n\nresulting in:\n\n >>> items\n [('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]\n\nwhich shows the list in by-value order, largest first. (In this case,\n'b' was found to have the most occurrences.)\n\nThis works fine, but is \"hard to use\" in two aspects. First, although\nthis idiom is known to veteran Pythoneers, it is not at all obvious to\nnewbies -- either in terms of its algorithm (inverting the ordering of\nitem tuples) or its implementation (using list comprehensions -- which\nare an advanced Python feature.) Second, it requires having to\nrepeatedly type a lot of \"grunge\", resulting in both tedium and\nmistakes.\n\nWe therefore would rather Python provide a method of sorting\ndictionaries by value which would be both easy for newbies to understand\n(or, better yet, not to have to understand) and easier for all to use.\n\nRationale\n\nAs Tim Peters has pointed out, this sort of thing brings on the problem\nof trying to be all things to all people. Therefore, we will limit its\nscope to try to hit \"the sweet spot\". Unusual cases (e.g. sorting via a\ncustom comparison function) can, of course, be handled \"manually\" using\npresent methods.\n\nHere are some simple possibilities:\n\nThe items() method of dictionaries can be augmented with new parameters\nhaving default values that provide for full backwards-compatibility:\n\n (1) items(sort_by_values=0, reversed=0)\n\nor maybe just:\n\n (2) items(sort_by_values=0)\n\nsince reversing a list is easy enough.\n\nAlternatively, items() could simply let us control the (key, value)\norder:\n\n (3) items(values_first=0)\n\nAgain, this is fully backwards-compatible. It does less work than the\nothers, but it at least eases the most complicated/tricky part of the\nsort-by-value problem: inverting the order of item tuples. Using this is\nvery simple:\n\n items = d.items(1)\n items.sort()\n items.reverse() # (if desired)\n\nThe primary drawback of the preceding three approaches is the additional\noverhead for the parameter-less items() case, due to having to process\ndefault parameters. (However, if one assumes that items() gets used\nprimarily for creating sort-by-value lists, this is not really a\ndrawback in practice.)\n\nAlternatively, we might add a new dictionary method which somehow\nembodies \"sorting\". This approach offers two advantages. 
First, it\navoids adding overhead to the items() method. Second, it is perhaps more\naccessible to newbies: when they go looking for a method for sorting\ndictionaries, they hopefully run into this one, and they will not have\nto understand the finer points of tuple inversion and list sorting to\nachieve sort-by-value.\n\nTo allow the four basic possibilities of sorting by key/value and in\nforward/reverse order, we could add this method:\n\n (4) sorted_items(by_value=0, reversed=0)\n\nI believe the most common case would actually be by_value=1, reversed=1,\nbut the defaults values given here might lead to fewer surprises by\nusers: sorted_items() would be the same as items() followed by sort().\n\nFinally (as a last resort), we could use:\n\n (5) items_sorted_by_value(reversed=0)\n\nImplementation\n\nThe proposed dictionary methods would necessarily be implemented in C.\nPresumably, the implementation would be fairly simple since it involves\njust adding a few calls to Python's existing machinery.\n\nConcerns\n\nAside from the run-time overhead already addressed in possibilities 1\nthrough 3, concerns with this proposal probably will fall into the\ncategories of \"feature bloat\" and/or \"code bloat\". However, I believe\nthat several of the suggestions made here will result in quite minimal\nbloat, resulting in a good tradeoff between bloat and \"value added\".\n\nTim Peters has noted that implementing this in C might not be\nsignificantly faster than implementing it in Python today. However, the\nmajor benefits intended here are \"accessibility\" and \"ease of use\", not\n\"speed\". Therefore, as long as it is not noticeably slower (in the case\nof plain items(), speed need not be a consideration.\n\nReferences\n\nA related thread called \"counting occurrences\" appeared on\ncomp.lang.python in August, 2001. This included examples of approaches\nto systematizing the sort-by-value problem by implementing it as\nreusable Python functions and classes.\n\nCopyright\n\nThis document has been placed in the public domain."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.353018"},"created":{"kind":"timestamp","value":"2001-08-08T00:00:00","string":"2001-08-08T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0265/\",\n \"authors\": [\n \"Grant Griffin\"\n ],\n \"pep_number\": \"0265\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":15,"cells":{"id":{"kind":"string","value":"0746"},"text":{"kind":"string","value":"PEP: 746 Title: Type checking Annotated metadata Author: Adrian Garcia\nBadaracco Sponsor: Jelle Zijlstra\n Discussions-To:\nhttps://discuss.python.org/t/pep-746-typedmetadata-for-type-checking-of-pep-593-annotated/53834\nStatus: Draft Type: Standards Track Topic: Typing Created: 20-May-2024\nPython-Version: 3.14 Post-History: 20-May-2024\n\nAbstract\n\nThis PEP proposes a mechanism for type checking metadata that uses the\ntyping.Annotated type. Metadata objects that implement the new\n__supports_annotated_base__ protocol will be type checked by static type\ncheckers to ensure that the metadata is valid for the given type.\n\nMotivation\n\nPEP 593 introduced Annotated as a way to attach runtime metadata to\ntypes. 
In general, the metadata is not meant for static type checkers,\nbut even so, it is often useful to be able to check that the metadata\nmakes sense for the given type.\n\nTake the first example in PEP 593, which uses Annotated to attach\nserialization information to a field:\n\n class Student(struct2.Packed):\n name: Annotated[str, struct2.ctype(\"<10s\")]\n\nHere, the struct2.ctype(\"<10s\") metadata is meant to be used by a\nserialization library to serialize the field. Such libraries can only\nserialize a subset of types: it would not make sense to write, for\nexample, Annotated[list[str], struct2.ctype(\"<10s\")]. Yet the type\nsystem provides no way to enforce this. The metadata are completely\nignored by type checkers.\n\nThis use case comes up in libraries like pydantic and msgspec, which use\nAnnotated to attach validation and conversion information to fields or\nfastapi, which uses Annotated to mark parameters as extracted from\nheaders, query strings or dependency injection.\n\nSpecification\n\nThis PEP introduces a protocol that can be used by static and runtime\ntype checkers to validate the consistency between Annotated metadata and\na given type. Objects that implement this protocol have an attribute\ncalled __supports_annotated_base__ that specifies whether the metadata\nis valid for a given type:\n\n class Int64:\n __supports_annotated_base__: int\n\nThe attribute may also be marked as a ClassVar to avoid interaction with\ndataclasses:\n\n from dataclasses import dataclass\n from typing import ClassVar\n\n @dataclass\n class Gt:\n value: int\n __supports_annotated_base__: ClassVar[int]\n\nWhen a static type checker encounters a type expression of the form\nAnnotated[T, M1, M2, ...], it should enforce that for each metadata\nelement in M1, M2, ..., one of the following is true:\n\n- The metadata element evaluates to an object that does not have a\n __supports_annotated_base__ attribute; or\n- The metadata element evaluates to an object M that has a\n __supports_annotated_base__ attribute; and T is assignable to the\n type of M.__supports_annotated_base__.\n\nTo support generic Gt metadata, one might write:\n\n from typing import Protocol\n\n class SupportsGt[T](Protocol):\n def __gt__(self, __other: T) -> bool:\n ...\n\n class Gt[T]:\n __supports_annotated_base__: ClassVar[SupportsGt[T]]\n\n def __init__(self, value: T) -> None:\n self.value = value\n\n x1: Annotated[int, Gt(0)] = 1 # OK\n x2: Annotated[str, Gt(0)] = 0 # type checker error: str is not assignable to SupportsGt[int]\n x3: Annotated[int, Gt(1)] = 0 # OK for static type checkers; runtime type checkers may flag this\n\nBackwards Compatibility\n\nMetadata that does not implement the protocol will be considered valid\nfor all types, so no breaking changes are introduced for existing code.\nThe new checks only apply to metadata objects that explicitly implement\nthe protocol specified by this PEP.\n\nSecurity Implications\n\nNone.\n\nHow to Teach This\n\nThis protocol is intended mostly for libraries that provide Annotated\nmetadata; end users of those libraries are unlikely to need to implement\nthe protocol themselves. 
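For example, a library author might declare the supported base type once, and end users then get the checking for free. The names below (SupportsLen, MaxLen, Username, Port) are hypothetical and shown only as a sketch:\n\n    from typing import Annotated, ClassVar, Protocol\n\n    class SupportsLen(Protocol):\n        def __len__(self) -> int: ...\n\n    # Library side: declare which annotated types this metadata is valid for.\n    class MaxLen:\n        __supports_annotated_base__: ClassVar[SupportsLen]\n\n        def __init__(self, limit: int) -> None:\n            self.limit = limit\n\n    # User side: no knowledge of the protocol is needed.\n    Username = Annotated[str, MaxLen(32)]  # OK: str supports __len__\n    Port = Annotated[int, MaxLen(4)]       # type checker error: int does not support __len__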
The protocol should be mentioned in the\ndocumentation for typing.Annotated and in the typing specification.\n\nReference Implementation\n\nNone yet.\n\nRejected ideas\n\nIntroducing a type variable instead of a generic class\n\nWe considered using a special type variable,\nAnnotatedT = TypeVar(\"AnnotatedT\"), to represent the type T of the inner\ntype in Annotated; metadata would be type checked against this type\nvariable. However, this would require using the old type variable syntax\n(before PEP 695), which is now a discouraged feature. In addition, this\nwould use type variables in an unusual way that does not fit well with\nthe rest of the type system.\n\nIntroducing a new type to typing.py that all metadata objects should subclass\n\nA previous version of this PEP suggested adding a new generic base\nclass, TypedMetadata[U], that metadata objects would subclass. If a\nmetadata object is a subclass of TypedMetadata[U], then type checkers\nwould check that the annotation's base type is assignable to U. However,\nthis mechanism does not integrate as well with the rest of the language;\nPython does not generally use marker base classes. In addition, it\nprovides less flexibility than the current proposal: it would not allow\noverloads, and it would require metadata objects to add a new base\nclass, which may make their runtime implementation more complex.\n\nUsing a method instead of an attribute for __supports_annotated_base__\n\nWe considered using a method instead of an attribute for the protocol,\nso that this method can be used at runtime to check the validity of the\nmetadata and to support overloads or returning boolean literals.\nHowever, using a method adds boilerplate to the implementation and the\nvalue of the runtime use cases or more complex scenarios involving\noverloads and returning boolean literals was not clear.\n\nAcknowledgments\n\nWe thank Eric Traut for suggesting the idea of using a protocol and\nimplementing provisional support in Pyright. 
Thank you to Jelle Zijlstra\nfor sponsoring this PEP.\n\nCopyright\n\nThis document has been placed in the public domain."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.361990"},"created":{"kind":"timestamp","value":"2024-05-20T00:00:00","string":"2024-05-20T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0746/\",\n \"authors\": [\n \"Adrian Garcia Badaracco\"\n ],\n \"pep_number\": \"0746\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":16,"cells":{"id":{"kind":"string","value":"0727"},"text":{"kind":"string","value":"PEP: 727 Title: Documentation in Annotated Metadata Author: Sebastián\nRamírez Sponsor: Jelle Zijlstra\n Discussions-To:\nhttps://discuss.python.org/t/32566 Status: Draft Type: Standards Track\nTopic: Typing Content-Type: text/x-rst Created: 28-Aug-2023\nPython-Version: 3.13 Post-History: 30-Aug-2023\n\nAbstract\n\nThis PEP proposes a standardized way to provide documentation strings\nfor Python symbols defined with ~typing.Annotated using a new class\ntyping.Doc.\n\nMotivation\n\nThere's already a well-defined way to provide documentation for classes,\nfunctions, class methods, and modules: using docstrings.\n\nCurrently there is no formalized standard to provide documentation\nstrings for other types of symbols: parameters, return values,\nclass-scoped variables (class variables and instance variables), local\nvariables, and type aliases.\n\nNevertheless, to allow documenting most of these additional symbols,\nseveral conventions have been created as microsyntaxes inside of\ndocstrings, and are currently commonly used: Sphinx, numpydoc, Google,\nKeras, etc.\n\nThere are two scenarios in which these conventions would be supported by\ntools: for authors, while editing the contents of the documentation\nstrings and for users, while rendering that content in some way (in\ndocumentation sites, tooltips in editors, etc).\n\nBecause each of these conventions uses a microsyntax inside a string,\nwhen editing those docstrings, editors can't easily provide support for\nautocompletion, inline errors for broken syntax, etc. Any type of\nediting support for these conventions would be on top of the support for\nediting standard Python syntax.\n\nWhen documenting parameters with current conventions, because the\ndocstring is in a different place in the code than the actual parameters\nand it requires duplication of information (the parameter name) the\ninformation about a parameter is easily in a place in the code quite far\naway from the declaration of the actual parameter and it is disconnected\nfrom it. This means it's easy to refactor a function, remove a\nparameter, and forget to remove its docs. 
The same happens when adding a\nnew parameter: it's easy to forget to add the docstring for it.\n\nAnd because of this same duplication of information (the parameter name)\neditors and other tools need complex custom logic to check or ensure the\nconsistency of the parameters in the signature and in their docstring,\nor they simply don't fully support that.\n\nAs these existing conventions are different types of microsyntaxes\ninside of strings, robustly parsing them for rendering requires complex\nlogic that needs to be implemented by the tools supporting them.\nAdditionally, libraries and tools don't have a straightforward way to\nobtain the documentation for each individual parameter or variable at\nruntime, without depending on a specific docstring convention parser.\nAccessing the parameter documentation strings at runtime would be\nuseful, for example, for testing the contents of each parameter's\ndocumentation, to ensure consistency across several similar functions,\nor to extract and expose that same parameter documentation in some other\nway (e.g. an API with FastAPI, a CLI with Typer, etc).\n\nSome of these previous formats tried to account for the lack of type\nannotations in older Python versions by including typing information in\nthe docstrings (e.g. Sphinx, numpydoc) but now that information doesn't\nneed to be in docstrings as there is now an official\nsyntax for type annotations <484>.\n\nRationale\n\nThis proposal intends to address these shortcomings by extending and\ncomplementing the information in docstrings, keeping backwards\ncompatibility with existing docstrings (it doesn't deprecate them), and\ndoing it in a way that leverages the Python language and structure, via\ntype annotations with ~typing.Annotated, and a new class Doc in typing.\n\nThe reason why this would belong in the standard Python library instead\nof an external package is because although the implementation would be\nquite trivial, the actual power and benefit from it would come from\nbeing a standard, to facilitate its usage from library authors and to\nprovide a default way to document Python symbols using\n~typing.Annotated. Some tool providers (at least VS Code and PyCharm)\nhave shown they would consider implementing support for this only if it\nwas a standard.\n\nThis doesn't deprecate current usage of docstrings, docstrings should be\nconsidered the preferred documentation method when available (not\navailable in type aliases, parameters, etc). 
And docstrings would be\ncomplemented by this proposal for documentation specific to the symbols\nthat can be declared with ~typing.Annotated (currently only covered by\nthe several available microsyntax conventions).\n\nThis should be relatively transparent to common developers (library\nusers) unless they manually open the source files from libraries\nadopting it.\n\nIt should be considered opt-in for library authors who would like to\nadopt it and they should be free to decide to use it or not.\n\nIt would be only useful for libraries that are willing to use optional\ntype hints.\n\nSummary\n\nHere's a short summary of the features of this proposal in contrast to\ncurrent conventions:\n\n- Editing would be already fully supported by default by any editor\n (current or future) supporting Python syntax, including syntax\n errors, syntax highlighting, etc.\n- Rendering would be relatively straightforward to implement by static\n tools (tools that don't need runtime execution), as the information\n can be extracted from the AST they normally already create.\n- Deduplication of information: the name of a parameter would be\n defined in a single place, not duplicated inside of a docstring.\n- Elimination of the possibility of having inconsistencies when\n removing a parameter or class variable and forgetting to remove its\n documentation.\n- Minimization of the probability of adding a new parameter or class\n variable and forgetting to add its documentation.\n- Elimination of the possibility of having inconsistencies between the\n name of a parameter in the signature and the name in the docstring\n when it is renamed.\n- Access to the documentation string for each symbol at runtime,\n including existing (older) Python versions.\n- A more formalized way to document other symbols, like type aliases,\n that could use ~typing.Annotated.\n- No microsyntax to learn for newcomers, it's just Python syntax.\n- Parameter documentation inheritance for functions captured by\n ~typing.ParamSpec.\n\nSpecification\n\nThe main proposal is to introduce a new class, typing.Doc. This class\nshould only be used within ~typing.Annotated annotations. It takes a\nsingle positional-only string argument. 
It should be used to document\nthe intended meaning and use of the symbol declared using\n~typing.Annotated.\n\nFor example:\n\n from typing import Annotated, Doc\n\n class User:\n name: Annotated[str, Doc(\"The user's name\")]\n age: Annotated[int, Doc(\"The user's age\")]\n\n ...\n\n~typing.Annotated is normally used as a type annotation, in those cases,\nany typing.Doc inside of it would document the symbol being annotated.\n\nWhen ~typing.Annotated is used to declare a type alias, typing.Doc would\nthen document the type alias symbol.\n\nFor example:\n\n from typing import Annotated, Doc, TypeAlias\n\n from external_library import UserResolver\n\n CurrentUser: TypeAlias = Annotated[str, Doc(\"The current system user\"), UserResolver()]\n\n def create_user(name: Annotated[str, Doc(\"The user's name\")]): ...\n\n def delete_user(name: Annotated[str, Doc(\"The user to delete\")]): ...\n\nIn this case, if a user imported CurrentUser, tools like editors could\nprovide a tooltip with the documentation string when a user hovers over\nthat symbol, or documentation tools could include the type alias with\nits documentation in their generated output.\n\nFor tools extracting the information at runtime, they would normally use\n~typing.get_type_hints with the parameter include_extras=True, and as\n~typing.Annotated is normalized (even with type aliases), this would\nmean they should use the last typing.Doc available, if more than one is\nused, as that is the last one used.\n\nAt runtime, typing.Doc instances have an attribute documentation with\nthe string passed to it.\n\nWhen a function's signature is captured by a ~typing.ParamSpec, any\ndocumentation strings associated with the parameters should be retained.\n\nAny tool processing typing.Doc objects should interpret the string as a\ndocstring, and therefore should normalize whitespace as if\ninspect.cleandoc() were used.\n\nThe string passed to typing.Doc should be of the form that would be a\nvalid docstring. This means that f-strings and string operations should\nnot be used. As this cannot be enforced by the Python runtime, tools\nshould not rely on this behavior.\n\nWhen tools providing rendering show the raw signature, they could allow\nconfiguring if the whole raw ~typing.Annotated code should be displayed,\nbut they should default to not include ~typing.Annotated and its\ninternal code metadata, only the type of the symbols annotated. 
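The runtime-extraction behavior described above might look roughly like the following sketch, which assumes the typing_extensions backport of Doc:\n\n    from typing import Annotated, get_type_hints\n    from typing_extensions import Doc\n\n    def create_user(name: Annotated[str, Doc(\"The user's name\")]) -> None: ...\n\n    hints = get_type_hints(create_user, include_extras=True)\n    docs = [m for m in hints[\"name\"].__metadata__ if isinstance(m, Doc)]\n    print(docs[-1].documentation)   # The user's name  (the last Doc wins)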
When\nthose tools support typing.Doc and rendering in other ways than just a\nraw signature, they should show the string value passed to typing.Doc in\na convenient way that shows the relation between the documented symbol\nand the documentation string.\n\nTools providing rendering could allow ways to configure where to show\nthe parameter documentation and the prose docstring in different ways.\nOtherwise, they could simply show the prose docstring first and then the\nparameter documentation second.\n\nExamples\n\nClass attributes may be documented:\n\n from typing import Annotated, Doc\n\n class User:\n name: Annotated[str, Doc(\"The user's name\")]\n age: Annotated[int, Doc(\"The user's age\")]\n\n ...\n\nAs can function or method parameters and return values:\n\n from typing import Annotated, Doc\n\n def create_user(\n name: Annotated[str, Doc(\"The user's name\")],\n age: Annotated[int, Doc(\"The user's age\")],\n cursor: DatabaseConnection | None = None,\n ) -> Annotated[User, Doc(\"The created user after saving in the database\")]:\n \"\"\"Create a new user in the system.\n\n It needs the database connection to be already initialized.\n \"\"\"\n pass\n\nBackwards Compatibility\n\nThis proposal is fully backwards compatible with existing code and it\ndoesn't deprecate existing usage of docstring conventions.\n\nFor developers that wish to adopt it before it is available in the\nstandard library, or to support older versions of Python, they can use\ntyping_extensions and import and use Doc from there.\n\nFor example:\n\n from typing import Annotated\n from typing_extensions import Doc\n\n class User:\n name: Annotated[str, Doc(\"The user's name\")]\n age: Annotated[int, Doc(\"The user's age\")]\n\n ...\n\nSecurity Implications\n\nThere are no known security implications.\n\nHow to Teach This\n\nThe main mechanism of documentation should continue to be standard\ndocstrings for prose information, this applies to modules, classes,\nfunctions and methods.\n\nFor authors that want to adopt this proposal to add more granularity,\nthey can use typing.Doc inside of ~typing.Annotated annotations for the\nsymbols that support it.\n\nLibrary authors that wish to adopt this proposal while keeping backwards\ncompatibility with older versions of Python should use\ntyping_extensions.Doc instead of typing.Doc.\n\nReference Implementation\n\ntyping.Doc is implemented equivalently to:\n\n class Doc:\n def __init__(self, documentation: str, /):\n self.documentation = documentation\n\nIt has been implemented in the typing_extensions package.\n\nSurvey of Other languages\n\nHere's a short survey of how other languages document their symbols.\n\nJava\n\nJava functions and their parameters are documented with Javadoc, a\nspecial format for comments put on top of the function definition. This\nwould be similar to Python current docstring microsyntax conventions\n(but only one).\n\nFor example:\n\n /**\n * Returns an Image object that can then be painted on the screen. \n * The url argument must specify an absolute {@link URL}. The name\n * argument is a specifier that is relative to the url argument. \n *
\n * This method always returns immediately, whether or not the \n * image exists. When this applet attempts to draw the image on\n * the screen, the data will be loaded. The graphics primitives \n * that draw the image will incrementally paint on the screen. \n *\n * @param url an absolute URL giving the base location of the image\n * @param name the location of the image, relative to the url argument\n * @return the image at the specified URL\n * @see Image\n */\n public Image getImage(URL url, String name) {\n try {\n return getImage(new URL(url, name));\n } catch (MalformedURLException e) {\n return null;\n }\n }\n\nJavaScript\n\nBoth JavaScript and TypeScript use a similar system to Javadoc.\n\nJavaScript uses JSDoc.\n\nFor example:\n\n /**\n * Represents a book.\n * @constructor\n * @param {string} title - The title of the book.\n * @param {string} author - The author of the book.\n */\n function Book(title, author) {\n }\n\nTypeScript\n\nTypeScript has its own JSDoc reference with some variations.\n\nFor example:\n\n // Parameters may be declared in a variety of syntactic forms\n /**\n * @param {string} p1 - A string param.\n * @param {string=} p2 - An optional param (Google Closure syntax)\n * @param {string} [p3] - Another optional param (JSDoc syntax).\n * @param {string} [p4=\"test\"] - An optional param with a default value\n * @returns {string} This is the result\n */\n function stringsStringStrings(p1, p2, p3, p4) {\n // TODO\n }\n\nRust\n\nRust uses another similar variation of a microsyntax in Doc comments.\n\nBut it doesn't have a particular well defined microsyntax structure to\ndenote what documentation refers to what symbol/parameter other than\nwhat can be inferred from the pure Markdown.\n\nFor example:\n\n #![crate_name = \"doc\"]\n\n /// A human being is represented here\n pub struct Person {\n /// A person must have a name, no matter how much Juliet may hate it\n name: String,\n }\n\n impl Person {\n /// Returns a person with the name given them\n ///\n /// # Arguments\n ///\n /// * `name` - A string slice that holds the name of the person\n ///\n /// # Examples\n ///\n /// ```\n /// // You can have rust code between fences inside the comments\n /// // If you pass --test to `rustdoc`, it will even test it for you!\n /// use doc::Person;\n /// let person = Person::new(\"name\");\n /// ```\n pub fn new(name: &str) -> Person {\n Person {\n name: name.to_string(),\n }\n }\n\n /// Gives a friendly hello!\n ///\n /// Says \"Hello, [name](Person::name)\" to the `Person` it is called on.\n pub fn hello(& self) {\n println!(\"Hello, {}!\", self.name);\n }\n }\n\n fn main() {\n let john = Person::new(\"John\");\n\n john.hello();\n }\n\nGo Lang\n\nGo also uses a form of Doc Comments.\n\nIt doesn't have a well defined microsyntax structure to denote what\ndocumentation refers to which symbol/parameter, but parameters can be\nreferenced by name without any special syntax or marker, this also means\nthat ordinary words that could appear in the documentation text should\nbe avoided as parameter names.\n\n package strconv\n\n // Quote returns a double-quoted Go string literal representing s.\n // The returned string uses Go escape sequences (\\t, \\n, \\xFF, \\u0100)\n // for control characters and non-printable characters as defined by IsPrint.\n func Quote(s string) string {\n ...\n }\n\nRejected Ideas\n\nStandardize Current Docstrings\n\nA possible alternative would be to support and try to push as a standard\none of the existing docstring formats. 
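For contrast, a function documented under one of those existing conventions
might look roughly like this (Sphinx-style field lists are shown only as an
illustration; the particular convention is not the point):

    def create_user(name: str, age: int) -> "User":
        """Create a new user in the system.

        It needs the database connection to be already initialized.

        :param name: The user's name.
        :param age: The user's age.
        :return: The created user after saving in the database.
        """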
But that would only solve the\nstandardization.\n\nIt wouldn't solve any of the other problems derived from using a\nmicrosyntax inside of a docstring instead of pure Python syntax, the\nsame as described above in the Rationale - Summary.\n\nExtra Metadata and Decorator\n\nSome ideas before this proposal included having a function doc() instead\nof the single class Doc with several parameters to indicate whether an\nobject is discouraged from use, what exceptions it may raise, etc. To\nallow also deprecating functions and classes, it was also expected that\ndoc() could be used as a decorator. But this functionality is covered by\ntyping.deprecated() in PEP 702, so it was dropped from this proposal.\n\nA way to declare additional information could still be useful in the\nfuture, but taking early feedback on this idea, all that was postponed\nto future proposals.\n\nThis also shifted the focus from an all-encompassing function doc() with\nmultiple parameters to a single Doc class to be used in\n~typing.Annotated in a way that could be composed with other future\nproposals.\n\nThis design change also allows better interoperability with other\nproposals like typing.deprecated(), as in the future it could be\nconsidered to allow having typing.deprecated() also in ~typing.Annotated\nto deprecate individual parameters, coexisting with Doc.\n\nString Under Definition\n\nA proposed alternative in the discussion is declaring a string under the\ndefinition of a symbol and providing runtime access to those values:\n\n class User:\n name: str\n \"The user's name\"\n age: int\n \"The user's age\"\n\n ...\n\nThis was already proposed and rejected in PEP 224, mainly due to the\nambiguity of how is the string connected with the symbol it's\ndocumenting.\n\nAdditionally, there would be no way to provide runtime access to this\nvalue in previous versions of Python.\n\nPlain String in Annotated\n\nIn the discussion, it was also suggested to use a plain string inside of\n~typing.Annotated:\n\n from typing import Annotated\n\n class User:\n name: Annotated[str, \"The user's name\"]\n age: Annotated[int, \"The user's age\"]\n\n ...\n\nBut this would create a predefined meaning for any plain string inside\nof ~typing.Annotated, and any tool that was using plain strings in them\nfor any other purpose, which is currently allowed, would now be invalid.\n\nHaving an explicit typing.Doc makes it compatible with current valid\nuses of ~typing.Annotated.\n\nAnother Annotated-Like Type\n\nIn the discussion it was suggested to define a new type similar to\n~typing.Annotated, it would take the type and a parameter with the\ndocumentation string:\n\n from typing import Doc\n\n class User:\n name: Doc[str, \"The user's name\"]\n age: Doc[int, \"The user's age\"]\n\n ...\n\nThis idea was rejected as it would only support that use case and would\nmake it more difficult to combine it with ~typing.Annotated for other\npurposes ( e.g. with FastAPI metadata, Pydantic fields, etc.) 
or adding\nadditional metadata apart from the documentation string (e.g.\ndeprecation).\n\nTransferring Documentation from Type aliases\n\nA previous version of this proposal specified that when type aliases\ndeclared with ~typing.Annotated were used, and these type aliases were\nused in annotations, the documentation string would be transferred to\nthe annotated symbol.\n\nFor example:\n\n from typing import Annotated, Doc, TypeAlias\n\n\n UserName: TypeAlias = Annotated[str, Doc(\"The user's name\")]\n\n\n def create_user(name: UserName): ...\n\n def delete_user(name: UserName): ...\n\nThis was rejected after receiving feedback from the maintainer of one of\nthe main components used to provide editor support.\n\nShorthand with Slices\n\nIn the discussion, it was suggested to use a shorthand with slices:\n\n is_approved: Annotated[str: \"The status of a PEP.\"]\n\nAlthough this is a very clever idea and would remove the need for a new\nDoc class, runtime executing of current versions of Python don't allow\nit.\n\nAt runtime, ~typing.Annotated requires at least two arguments, and it\nrequires the first argument to be type, it crashes if it is a slice.\n\nOpen Issues\n\nVerbosity\n\nThe main argument against this would be the increased verbosity.\n\nIf the signature was not viewed independently of the documentation and\nthe body of the function with the docstring was also measured, the total\nverbosity would be somewhat similar, as what this proposal does is to\nmove some of the contents from the docstring in the body to the\nsignature.\n\nConsidering the signature alone, without the body, they could be much\nlonger than they currently are, they could end up being more than one\npage long. In exchange, the equivalent docstrings that currently are\nmore than one page long would be much shorter.\n\nWhen comparing the total verbosity, including the signature and the\ndocstring, the main additional verbosity added by this would be from\nusing ~typing.Annotated and typing.Doc. If ~typing.Annotated had more\nusage, it could make sense to have an improved shorter syntax for it and\nfor the type of metadata it would carry. But that would only make sense\nonce ~typing.Annotated is more widely used.\n\nOn the other hand, this verbosity would not affect end users as they\nwould not see the internal code using typing.Doc. The majority of users\nwould interact with libraries through editors without looking at the\ninternals, and if anything, they would have tooltips from editors\nsupporting this proposal.\n\nThe cost of dealing with the additional verbosity would mainly be\ncarried by those library maintainers that use this feature.\n\nThis argument could be analogous to the argument against type\nannotations in general, as they do indeed increase verbosity, in\nexchange for their features. 
But again, as with type annotations, this\nwould be optional and only to be used by those that are willing to take\nthe extra verbosity in exchange for the benefits.\n\nOf course, more advanced users might want to look at the source code of\nthe libraries and if the authors of those libraries adopted this, those\nadvanced users would end up having to look at that code with additional\nsignature verbosity instead of docstring verbosity.\n\nAny authors that decide not to adopt it should be free to continue using\ndocstrings with any particular format they decide, no docstrings at all,\netc.\n\nStill, there's a high chance that library authors could receive pressure\nto adopt this if it became the blessed solution.\n\nDocumentation is not Typing\n\nIt could also be argued that documentation is not really part of typing,\nor that it should live in a different module. Or that this information\nshould not be part of the signature but live in another place (like the\ndocstring).\n\nNevertheless, type annotations in Python could already be considered, by\ndefault, additional metadata: they carry additional information about\nvariables, parameters, return types, and by default they don't have any\nruntime behavior. And this proposal would add one more type of metadata\nto them.\n\nIt could be argued that this proposal extends the type of information\nthat type annotations carry, the same way as PEP 702 extends them to\ninclude deprecation information.\n\n~typing.Annotated was added to the standard library precisely to support\nadding additional metadata to the annotations, and as the new proposed\nDoc class is tightly coupled to ~typing.Annotated, it makes sense for it\nto live in the same module. If ~typing.Annotated was moved to another\nmodule, it would make sense to move Doc with it.\n\nMultiple Standards\n\nAnother argument against this would be that it would create another\nstandard, and that there are already several conventions for docstrings.\nIt could seem better to formalize one of the currently existing\nstandards.\n\nNevertheless, as stated above, none of those conventions cover the\ngeneral drawbacks of a doctsring-based approach that this proposal\nsolves naturally.\n\nTo see a list of the drawbacks of a docstring-based approach, see the\nsection above in the Rationale - Summary.\n\nIn the same way, it can be seen that, in many cases, a new standard that\ntakes advantage of new features and solves several problems from\nprevious methods can be worth having. 
As is the case with the new pyproject.toml, dataclass_transform, the new
typing pipe/union (|) operator, and other cases.

Adoption

As this is a new standard proposal, it would only make sense if it had
interest from the community.

Fortunately there's already interest from several mainstream libraries
from several developers and teams, including FastAPI, Typer, SQLModel,
Asyncer (from the author of this proposal), Pydantic, Strawberry
(GraphQL), and others.

There's also interest and support from documentation tools, like
mkdocstrings, which added support even for an earlier version of this
proposal.

All the CPython core developers contacted for early feedback (at least
4) have shown interest and support for this proposal.

Editor developers (VS Code and PyCharm) have shown some interest, while
showing concerns about the signature verbosity of the proposal, although
not about the implementation (which is what would affect them the most).
And they have shown they would consider adding support for this if it
were to become an official standard. In that case, they would only need
to add support for rendering, as support for editing, which is normally
non-existent for other standards, is already there, as they already
support editing standard Python syntax.

Copyright

This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

PEP: 218
Title: Adding a Built-In Set Object Type
Author: Greg Wilson, Raymond Hettinger
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 31-Jul-2000
Python-Version: 2.2
Post-History:

Introduction

This PEP proposes adding a Set module to the standard Python library,
and to then make sets a built-in Python type if that module is widely
used. After explaining why sets are desirable, and why the common idiom
of using dictionaries in their place is inadequate, we describe how we
intend built-in sets to work, and then how the preliminary Set module
will behave. The last section discusses the mutability (or otherwise) of
sets and set elements, and the solution which the Set module will
implement.

Rationale

Sets are a fundamental mathematical structure, and are very commonly
used in algorithm specifications. They are much less frequently used in
implementations, even when they are the "right" structure. Programmers
frequently use lists instead, even when the ordering information in
lists is irrelevant, and by-value lookups are frequent. (Most
medium-sized C programs contain a depressing number of start-to-end
searches through malloc'd vectors to determine whether particular items
are present or not...)

Programmers are often told that they can implement sets as dictionaries
with "don't care" values. Items can be added to these "sets" by
assigning the "don't care" value to them; membership can be tested using
dict.has_key; and items can be deleted using del.
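A small illustration of that dictionary-as-set idiom (written with modern
membership syntax; dict.has_key dates from the Python 2 era):

    # Emulating a set with a dictionary of "don't care" values.
    colors = {}
    for word in ["red", "green", "red", "blue"]:
        colors[word] = None       # "add" an element: assign the don't-care value

    print("green" in colors)      # membership test -> True
    del colors["red"]             # removal
    print(list(colors))           # remaining "elements" (order is incidental)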
However, the other\nmain operations on sets (union, intersection, and difference) are not\ndirectly supported by this representation, since their meaning is\nambiguous for dictionaries containing key/value pairs.\n\nProposal\n\nThe long-term goal of this PEP is to add a built-in set type to Python.\nThis type will be an unordered collection of unique values, just as a\ndictionary is an unordered collection of key/value pairs.\n\nIteration and comprehension will be implemented in the obvious ways, so\nthat:\n\n for x in S:\n\nwill step through the elements of S in arbitrary order, while:\n\n set(x**2 for x in S)\n\nwill produce a set containing the squares of all elements in S,\nMembership will be tested using in and not in, and basic set operations\nwill be implemented by a mixture of overloaded operators:\n\n ----------- -------------------------------\n | union\n & intersection\n ^ symmetric difference\n - asymmetric difference\n == != equality and inequality tests\n < <= >= > subset and superset tests\n ----------- -------------------------------\n\nand methods:\n\n ---------------- ----------------------------------------------------------------------------------------------\n S.add(x) Add \"x\" to the set.\n S.update(s) Add all elements of sequence \"s\" to the set.\n S.remove(x) Remove \"x\" from the set. If \"x\" is not present, this method raises a LookupError exception.\n S.discard(x) Remove \"x\" from the set if it is present, or do nothing if it is not.\n S.pop() Remove and return an arbitrary element, raising a LookupError if the element is not present.\n S.clear() Remove all elements from this set.\n S.copy() Make a new set.\n s.issuperset() Check for a superset relationship.\n s.issubset() Check for a subset relationship.\n ---------------- ----------------------------------------------------------------------------------------------\n\nand two new built-in conversion functions:\n\n -------------- ------------------------------------------------------------------------\n set(x) Create a set containing the elements of the collection \"x\".\n frozenset(x) Create an immutable set containing the elements of the collection \"x\".\n -------------- ------------------------------------------------------------------------\n\nNotes:\n\n1. We propose using the bitwise operators \"|&\" for intersection and\n union. While \"+\" for union would be intuitive, \"*\" for intersection\n is not (very few of the people asked guessed what it did correctly).\n2. We considered using \"+\" to add elements to a set, rather than \"add\".\n However, Guido van Rossum pointed out that \"+\" is symmetric for\n other built-in types (although \"*\" is not). Use of \"add\" will also\n avoid confusion between that operation and set union.\n\nSet Notation\n\nThe PEP originally proposed {1,2,3} as the set notation and {-} for the\nempty set. Experience with Python 2.3's sets.py showed that the notation\nwas not necessary. Also, there was some risk of making dictionaries less\ninstantly recognizable.\n\nIt was also contemplated that the braced notation would support set\ncomprehensions; however, Python 2.4 provided generator expressions which\nfully met that need and did so it a more general way. (See PEP 289 for\ndetails on generator expressions).\n\nSo, Guido ruled that there would not be a set syntax; however, the issue\ncould be revisited for Python 3000 (see PEP 3000).\n\nHistory\n\nTo gain experience with sets, a pure python module was introduced in\nPython 2.3. 
Based on that implementation, the set and frozenset types\nwere introduced in Python 2.4. The improvements are:\n\n- Better hash algorithm for frozensets\n- More compact pickle format (storing only an element list instead of\n a dictionary of key:value pairs where the value is always True).\n- Use a __reduce__ function so that deep copying is automatic.\n- The BaseSet concept was eliminated.\n- The union_update() method became just update().\n- Auto-conversion between mutable and immutable sets was dropped.\n- The _repr method was dropped (the need is met by the new sorted()\n built-in function).\n\nTim Peters believes that the class's constructor should take a single\nsequence as an argument, and populate the set with that sequence's\nelements. His argument is that in most cases, programmers will be\ncreating sets from pre-existing sequences, so that this case should be\nthe common one. However, this would require users to remember an extra\nset of parentheses when initializing a set with known values:\n\n >>> Set((1, 2, 3, 4)) # case 1\n\nOn the other hand, feedback from a small number of novice Python users\n(all of whom were very experienced with other languages) indicates that\npeople will find a \"parenthesis-free\" syntax more natural:\n\n >>> Set(1, 2, 3, 4) # case 2\n\nUltimately, we adopted the first strategy in which the initializer takes\na single iterable argument.\n\nMutability\n\nThe most difficult question to resolve in this proposal was whether sets\nought to be able to contain mutable elements. A dictionary's keys must\nbe immutable in order to support fast, reliable lookup. While it would\nbe easy to require set elements to be immutable, this would preclude\nsets of sets (which are widely used in graph algorithms and other\napplications).\n\nEarlier drafts of PEP 218 had only a single set type, but the sets.py\nimplementation in Python 2.3 has two, Set and ImmutableSet. For Python\n2.4, the new built-in types were named set and frozenset which are\nslightly less cumbersome.\n\nThere are two classes implemented in the \"sets\" module. Instances of the\nSet class can be modified by the addition or removal of elements, and\nthe ImmutableSet class is \"frozen\", with an unchangeable collection of\nelements. Therefore, an ImmutableSet may be used as a dictionary key or\nas a set element, but cannot be updated. Both types of set require that\ntheir elements are immutable, hashable objects. 
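Put concretely, here is a brief sketch using the built-in spellings that
followed, mirroring the Set/ImmutableSet behavior described above:

    a, b = set([1, 2, 3]), set([2, 3, 4])
    print(a | b, a & b, a ^ b, a - b)    # union, intersection, sym. diff., diff.

    inner = frozenset([1, 2])
    nested = set([inner, frozenset([3])])   # sets of sets need immutable elements
    table = {inner: "edge 1-2"}             # a frozenset can also be a dict key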
Parallel comments apply to the "set" and "frozenset" built-in types.

Copyright

This document has been placed in the Public Domain.

PEP: 3149
Title: ABI version tagged .so files
Version: $Revision$
Last-Modified: $Date$
Author: Barry Warsaw
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Jul-2010
Python-Version: 3.2
Post-History: 14-Jul-2010, 22-Jul-2010
Resolution: https://mail.python.org/pipermail/python-dev/2010-September/103408.html

Abstract

PEP 3147 described an extension to Python's import machinery that
improved the sharing of Python source code, by allowing more than one
byte compilation file (.pyc) to be co-located with each source file.

This PEP defines an adjunct feature which allows the co-location of
extension module files (.so) in a similar manner. This optional,
build-time feature will enable downstream distributions of Python to
more easily provide more than one Python major version at a time.

Background

PEP 3147 defined the file system layout for a pure-Python package, where
multiple versions of Python are available on the system. For example,
where the alpha package containing source modules one.py and two.py
exists on a system with Python 3.2 and 3.3, the post-byte compilation
file system layout would be:

    alpha/
        __pycache__/
            __init__.cpython-32.pyc
            __init__.cpython-33.pyc
            one.cpython-32.pyc
            one.cpython-33.pyc
            two.cpython-32.pyc
            two.cpython-33.pyc
        __init__.py
        one.py
        two.py

For packages with extension modules, a similar differentiation is needed
for the module's .so files. Extension modules compiled for different
Python major versions are incompatible with each other due to changes in
the ABI. Different configuration/compilation options for the same Python
version can result in different ABIs (e.g. --with-wide-unicode).

While PEP 384 defines a stable ABI, it will minimize, but not eliminate
extension module incompatibilities between Python builds or major
versions. Thus a mechanism for discriminating extension module file
names is proposed.

Rationale

Linux distributions such as Ubuntu[1] and Debian[2] provide more than
one Python version at the same time to their users. For example, Ubuntu
9.10 Karmic Koala users can install Python 2.5, 2.6, and 3.1, with
Python 2.6 being the default.

In order to share as much as possible between the available Python
versions, these distributions install third party package modules (.pyc
and .so files) into /usr/share/pyshared and symlink to them from
/usr/lib/pythonX.Y/dist-packages. The symlinks exist because in a
pre-PEP 3147 world (i.e., < Python 3.2), the .pyc files resulting from
byte compilation by the various installed Pythons will name collide with
each other. For Python versions >= 3.2, all pure-Python packages can be
shared, because the .pyc files will no longer cause file system naming
conflicts.
Eliminating these symlinks makes for a simpler, more robust\nPython distribution.\n\nA similar situation arises with shared library extensions. Because\nextension modules are typically named foo.so for a foo extension module,\nthese would also name collide if foo was provided for more than one\nPython version.\n\nIn addition, because different configuration/compilation options for the\nsame Python version can cause different ABIs to be presented to\nextension modules. On POSIX systems for example, the configure options\n--with-pydebug, --with-pymalloc, and --with-wide-unicode all change the\nABI. This PEP proposes to encode build-time options in the file name of\nthe .so extension module files.\n\nPyPy[3] can also benefit from this PEP, allowing it to avoid name\ncollisions in extension modules built for its API, but with a different\n.so tag.\n\nProposal\n\nThe configure/compilation options chosen at Python interpreter\nbuild-time will be encoded in the shared library file name for extension\nmodules. This \"tag\" will appear between the module base name and the\noperation file system extension for shared libraries.\n\nThe following information MUST be included in the shared library file\nname:\n\n- The Python implementation (e.g. cpython, pypy, jython, etc.)\n- The interpreter's major and minor version numbers\n\nThese two fields are separated by a hyphen and no dots are to appear\nbetween the major and minor version numbers. E.g. cpython-32.\n\nPython implementations MAY include additional flags in the file name tag\nas appropriate. For example, on POSIX systems these flags will also\ncontribute to the file name:\n\n- --with-pydebug (flag: d)\n- --with-pymalloc (flag: m)\n- --with-wide-unicode (flag: u)\n\nBy default in Python 3.2, configure enables --with-pymalloc so shared\nlibrary file names would appear as foo.cpython-32m.so. When the other\ntwo flags are also enabled, the file names would be\nfoo.cpython-32dmu.so.\n\nThe shared library file name tag is used unconditionally; it cannot be\nchanged. The tag and extension module suffix are available through the\nsysconfig modules via the following variables:\n\n >>> sysconfig.get_config_var('EXT_SUFFIX')\n '.cpython-32mu.so'\n >>> sysconfig.get_config_var('SOABI')\n 'cpython-32mu'\n\nNote that $SOABI contains just the tag, while $EXT_SUFFIX includes the\nplatform extension for shared library files, and is the exact suffix\nadded to the extension module name.\n\nFor an arbitrary package foo, you might see these files when the\ndistribution package was installed:\n\n /usr/lib/python/foo.cpython-32m.so\n /usr/lib/python/foo.cpython-33m.so\n\n(These paths are for example purposes only. Distributions are free to\nuse whatever filesystem layout they choose, and nothing in this PEP\nchanges the locations where from-source builds of Python are installed.)\n\nPython's dynamic module loader will recognize and import shared library\nextension modules with a tag that matches its build-time options. For\nbackward compatibility, Python will also continue to import untagged\nextension modules, e.g. foo.so.\n\nThis shared library tag would be used globally for all distutils-based\nextension modules, regardless of where on the file system they are\nbuilt. 
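As a rough sketch, a tool could compose the tagged file name from those
sysconfig variables like this (values in the comments are only examples from
one possible build):

    import sysconfig


    def tagged_filename(module_name: str) -> str:
        """Append the ABI-tagged suffix this interpreter expects to import."""
        ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")  # e.g. '.cpython-32mu.so'
        return module_name + ext_suffix


    print(sysconfig.get_config_var("SOABI"))  # e.g. 'cpython-32mu'
    print(tagged_filename("foo"))             # e.g. 'foo.cpython-32mu.so'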
Extension modules built by means other than distutils would\neither have to calculate the tag manually, or fallback to the non-tagged\n.so file name.\n\nProven approach\n\nThe approach described here is already proven, in a sense, on Debian and\nUbuntu system where different extensions are used for debug builds of\nPython and extension modules. Debug builds on Windows also already use a\ndifferent file extension for dynamic libraries, and in fact encoded (in\na different way than proposed in this PEP) the Python major and minor\nversion in the .dll file name.\n\nWindows\n\nThis PEP only addresses build issues on POSIX systems that use the\nconfigure script. While Windows or other platform support is not\nexplicitly disallowed under this PEP, platform expertise is needed in\norder to evaluate, describe, and implement support on such platforms. It\nis not currently clear that the facilities in this PEP are even useful\nfor Windows.\n\nPEP 384\n\nPEP 384 defines a stable ABI for extension modules. In theory, universal\nadoption of PEP 384 would eliminate the need for this PEP because all\nextension modules could be compatible with any Python version. In\npractice of course, it will be impossible to achieve universal adoption,\nand as described above, different build-time flags still affect the ABI.\nThus even with a stable ABI, this PEP may still be necessary. While a\ncomplete specification is reserved for PEP 384, here is a discussion of\nthe relevant issues.\n\nPEP 384 describes a change to PyModule_Create() where 3 is passed as the\nAPI version if the extension was compiled with Py_LIMITED_API. This\nshould be formalized into an official macro called PYTHON_ABI_VERSION to\nmirror PYTHON_API_VERSION. If and when the ABI changes in an\nincompatible way, this version number would be bumped. To facilitate\nsharing, Python would be extended to search for extension modules with\nthe PYTHON_ABI_VERSION number in its name. The prefix abi is reserved\nfor Python's use.\n\nThus, an initial implementation of PEP 384, when Python is configured\nwith the default set of flags, would search for the following file names\nwhen extension module foo is imported (in this order):\n\n foo.cpython-XYm.so\n foo.abi3.so\n foo.so\n\nThe distutils[4] build_ext command would also have to be extended to\ncompile to shared library files with the abi3 tag, when the module\nauthor indicates that their extension supports that version of the ABI.\nThis could be done in a backward compatible way by adding a keyword\nargument to the Extension class, such as:\n\n Extension('foo', ['foo.c'], abi=3)\n\nMartin v. Löwis describes his thoughts[5] about the applicability of\nthis PEP to PEP 384. In summary:\n\n- --with-pydebug would not be supported by the stable ABI because this\n changes the layout of PyObject, which is an exposed structure.\n- --with-pymalloc has no bearing on the issue.\n- --with-wide-unicode is trickier, though Martin's inclination is to\n force the stable ABI to use a Py_UNICODE that matches the platform's\n wchar_t.\n\nAlternatives\n\nIn the initial python-dev thread[6] where this idea was first\nintroduced, several alternatives were suggested. For completeness they\nare listed here, along with the reasons for not adopting them.\n\nIndependent directories or symlinks\n\nDebian and Ubuntu could simply add a version-specific directory to\nsys.path that would contain just the extension modules for that version\nof Python. Or the symlink trick eliminated in PEP 3147 could be retained\nfor just shared libraries. 
This approach is rejected because it
propagates the essential complexity that PEP 3147 tries to avoid, and
adds potentially several additional directories to search for all
modules, even when the number of extension modules is much fewer than
the total number of Python packages. For example, if builds were made
available both with and without wide unicode, with and without pydebug,
and with and without pymalloc, the total number of directories searched
would increase substantially.

Don't share packages with extension modules

It has been suggested that Python packages with extension modules not be
shared among all supported Python versions on a distribution. Even with
adoption of PEP 3149, extension modules will have to be compiled for
every supported Python version, so perhaps sharing of such packages
isn't useful anyway. Not sharing packages with extensions though is
infeasible for several reasons.

If a pure-Python package is shared in one version, should it suddenly be
not-shared if the next release adds an extension module for speed? Also,
even though all extension shared libraries will be compiled and
distributed once for every supported Python, there's a big difference
between duplicating the .so files and duplicating all .py files. The
extra size increases the download time for such packages, and more
immediately, increases the space pressures on already constrained
distribution CD-ROMs.

Reference implementation

Work on this code is tracked in a Bazaar branch on Launchpad[7] until
it's ready for merge into Python 3.2. The work-in-progress diff can also
be viewed[8] and is updated automatically as new changes are uploaded.

References

[1] Ubuntu:

[2] Debian:

[3] http://codespeak.net/pypy/dist/pypy/doc/

[4] http://docs.python.org/py3k/distutils/index.html

[5] https://mail.python.org/pipermail/python-dev/2010-August/103330.html

[6] https://mail.python.org/pipermail/python-dev/2010-June/100998.html

[7] https://code.edge.launchpad.net/~barry/python/sovers

[8] https://code.edge.launchpad.net/~barry/python/sovers/+merge/29411

Copyright

This document has been placed in the public domain.

PEP: 8010
Title: The Technical Leader Governance Model
Author: Barry Warsaw
Status: Rejected
Type: Informational
Topic: Governance
Content-Type: text/x-rst
Created: 24-Aug-2018

Abstract

This PEP proposes a continuation of the singular technical project
leader model, euphemistically called the Benevolent Dictator For Life
(BDFL) model of Python governance, to be henceforth called in this PEP
the Gracious Umpire Influencing Decisions Officer (GUIDO).
This change\nin name reflects both the expanded view of the GUIDO as final arbiter\nfor the Python language decision making process in consultation with the\nwider development community, and the recognition that \"for life\" while\nperhaps aspirational, is not necessarily in the best interest of the\nwell-being of either the language or the GUIDO themselves.\n\nThis PEP describes:\n\n- The rationale for maintaining the singular technical leader model\n- The process for how the GUIDO will be selected, elected, retained,\n recalled, and succeeded;\n- The roles of the GUIDO in the Python language evolution process;\n- The term length of service;\n- The relationship of the GUIDO with a Council of Pythonistas (CoP)\n that advise the GUIDO on technical matters;\n- The size, election, and roles of the CoP;\n- The decision delegation process;\n- Any changes to the PEP process to fit the new governance model;\n\nThis PEP does not name a new BDFL. Should this model be adopted, it will\nbe codified in PEP 13 along with the names of all officeholders\ndescribed in this PEP.\n\nPEP Rejection\n\nPEP 8010 was rejected by a core developer vote described in PEP 8001 on\nMonday, December 17, 2018.\n\nPEP 8016 and the governance model it describes were chosen instead.\n\nOpen discussion points\n\nVarious tweaks to the parameters of this PEP are allowed during the\ngovernance discussion process, such as the exact size of the CoP, term\nlengths of service, and voting procedures. These will be codified by the\ntime the PEP is ready to be voted on.\n\nThe voting procedures and events described in this PEP will default to\nthe voting method specified in PEP 8001, although as that PEP is still\nin discussion at the time of this writing, this is subject to change.\n\nIt is allowed, and perhaps even expected, that as experience is gained\nwith this model, these parameters may be tweaked as future GUIDOs are\nnamed, in order to provide for a smoother governing process.\n\nWhy a singular technical leader?\n\nWhy this model rather than any other? It comes down to \"vision\". Design\nby committee has many known downsides, leading to a language that\naccretes new features based on the varied interests of the contributors\nat the time. A famous aphorism is \"a camel is a horse designed by\ncommittee\". Can a language that is designed by committee \"hang\ntogether\"? Does it feel like a coherent, self-consistent language where\nthe rules make sense and are easily remembered?\n\nA singular technical leader can promote that vision more than a\ncommittee can, whether that committee is small (e.g. 3 or 5 persons) or\nspans the entire Python community. Every participant will have their own\nvision of what \"Python\" is, and this can lead to indecision or illogical\nchoices when those individual visions are in conflict. Should CPython be\n3x faster or should we preserve the C API? That's a very difficult\nquestion to get consensus on, since neither choice is right or wrong.\nBut worse than making the wrong decision might be accepting the status\nquo because no consensus could be found.\n\nFlexibility\n\nDegrees of flexibility are given to both the GUIDO and CoP by way of\nunderspecification. This PEP describes how conflicts will be resolved,\nbut expects all participants, including core developers, community\nmembers, and office holders, to always have the best interest of Python\nand its users at heart. 
The PEP assumes that mutual respect and the best\nintentions will always lead to consensus, and that the Code of Conduct\ngoverns all interactions and discussions.\n\nThe role of the GUIDO\n\nOne of the most important roles of the GUIDO is to provide an\noverarching, broad, coherent vision for the evolution of the Python\nlanguage, spanning multiple releases. This is especially important when\ndecision have lasting impact and competing benefits. For example, if\nbackward incompatible changes to the C API leads to a 2x improvement in\nPython performance, different community members will likely advocate\nconvincingly on both sides of the debate, and a clear consensus may not\nemerge. Either choice is equally valid. In consultation with the CoP, it\nwill be the GUIDO's vision that guides the ultimate decision.\n\nThe GUIDO is the ultimate authority for decisions on PEPs and other\nissues, including whether any particular change is PEP-worthy. As is the\ncase today, many --in fact perhaps most-- decisions are handled by\ndiscussion and resolution on the issue tracker, merge requests, and\ndiscussion forums, usually with input or lead by experts in the\nparticular field. Where this operating procedure works perfectly well,\nit can continue unchanged. This also helps reduce the workload on the\nCoP and GUIDO, leaving only the most important decisions and broadest\nview of the landscape to the central authority.\n\nSimilarly, should a particular change be deemed to require a PEP, but\nthe GUIDO, in consultation with the CoP, identifies experts that have\nthe full confidence to make the final decision, the GUIDO can name a\nDelegate for the PEP. While the GUIDO remains the ultimate authority, it\nis expected that the GUIDO will not undermine, and in fact will support\nthe authority of the Delegate as the final arbiter of the PEP.\n\nThe GUIDO has full authority to shut down unproductive discussions,\nideas, and proposals, when it is clear that the proposal runs counter to\nthe long-term vision for Python. This is done with compassion for the\nadvocates of the change, but with the health and well-being of all\ncommunity members in mind. A toxic discussion on a dead-end proposal\ndoes no one any good, and they can be terminated by fiat.\n\nTo sum up: the GUIDO has the authority to make a final pronouncement on\nany topic, technical or non-technical, except for changing to the\ngovernance PEP itself.\n\nAuthority comes from the community\n\nThe GUIDO's authority ultimately resides with the community. A rogue\nGUIDO that loses the confidence of the majority of the community can be\nrecalled and a new vote conducted. This is an exceedingly rare and\nunlikely event. This is a sufficient stopgap for the worst-case\nscenario, so it should not be undertaken lightly. The GUIDO should not\nfear being deposed because of one decision, even if that decision isn't\nfavored by the majority of Python developers. Recall should be reserved\nfor actions severely detrimental to the Python language or community.\n\nThe Council of Pythonistas (see below) has the responsibility to\ninitiate a vote of no-confidence.\n\nLength of service and term limits\n\nThe GUIDO shall serve for three Python releases, approximately 4.5 years\ngiven the current release cadence. If Python’s release cadence changes,\nthe length of GUIDO’s term should change to 4.5 years rounded to whole\nreleases. How the rounding is done is left to the potential release\ncadence PEP. 
After this time, a new election is held according to the\nprocedures outlined below. There are no term limits, so the GUIDO may\nrun for re-election for as long as they like.\n\nWe expect GUIDOs to serve out their entire term of office, but of\ncourse, Life Happens. Should the GUIDO need to step down before their\nterm ends, the vacancy will be filled by the process outlined below as\nper choosing a new GUIDO. However, the new GUIDO will only serve for the\nremainder of the original GUIDO's term, at which time a new election is\nconducted. The GUIDO stepping down may continue to serve until their\nreplacement is selected.\n\nDuring the transition period, the CoP (see below) may carry out the\nGUIDO's duties, however they may also prefer to leave substantive\ndecisions (such as technical PEP approvals) to the incoming GUIDO.\n\nChoosing a GUIDO\n\nThe selection process is triggered whenever a vacancy exists for a new\nGUIDO, or when the GUIDO is up for re-election in the normal course of\nevents. When the selection process is triggered, either by the GUIDO\nstepping down, or two months before the end of the GUIDO's regular term,\na new election process begins.\n\nFor three weeks prior to the vote, nominations are open. Candidates must\nbe chosen from the current list of core Python developers. Non-core\ndevelopers are ineligible to serve as the GUIDO. Candidates may\nself-nominate, but all nominations must be seconded. Nominations and\nseconds are conducted as merge requests on a private repository.\n\nOnce they accept their nomination, nominees may post short position\nstatements using the same private repository, and may also post them to\nthe committers discussion forum. Maybe we'll even have debates! This\nphase of the election runs for two weeks.\n\nCore developers then have three weeks to vote, using the process\ndescribed in PEP 8001.\n\nThe Council of Pythonistas (CoP)\n\nAssisting the GUIDO is a small team of elected Python experts. They\nserve on a team of technical committee members. They provide insight and\noffer discussion of the choices before the GUIDO. Consultation can be\ntriggered from either side. For example, if the GUIDO is still undecided\nabout any particular choice, discussions with the CoP can help clarify\nthe remaining issues, identify the right questions to ask, and provide\ninsight into the impact on other users of Python that the GUIDO may not\nbe as familiar with. The CoP are the GUIDO's trusted advisers, and a\nclose working relationship is expected.\n\nThe CoP shall consist of 3 members, elected from among the core\ndevelopers. Their term runs for 3 years and members may run for\nre-election as many times as they want. To ensure continuity, CoP\nmembers are elected on a rotating basis; every year, one CoP member is\nup for re-election.\n\nIn order to bootstrap the stagger for the initial election, the CoP\nmember with the most votes shall serve for 3 years, the second most\npopular vote getter shall serve for 2 years, and CoP member with the\nleast number of votes shall serve initially for 1 year.\n\nAll ties in voting will be broken with a procedure to be determined in\nPEP 8001.\n\nThe nomination and voting process is similar as with the GUIDO. 
There is\na three-week nomination period, where self-nominations are allowed and\nmust be seconded, followed by a period of time for posting position\nstatements, followed by a vote.\n\nBy unanimous decision, the CoP may begin a no-confidence vote on the\nGUIDO, triggering the procedure in that section.\n\nNo confidence votes\n\nAs mentioned above, the CoP may, by unanimous decision, initiate a vote\nof no-confidence in the GUIDO. This process should not be undertaken\nlightly, but once begun, it triggers up to two votes. In both cases,\nvoting is done by the same procedure as in PEP 8001, and all core\ndevelopers may participate in no confidence votes.\n\nThe first vote is whether to recall the current GUIDO or not. Should a\nsuper majority of Python developers vote \"no confidence\", the GUIDO is\nrecalled. A second vote is then conducted to select the new GUIDO, in\naccordance with the procedures for initial section of this office\nholder. During the time in which there is no GUIDO, major decisions are\nput on hold, but normal Python operations may of course continue.\n\nDay-to-day operations\n\nThe GUIDO is not needed for all -- or even most -- decisions. Python\ndevelopers already have plenty of opportunity for delegation,\nresponsibility, and self-direction. The issue tracker and pull requests\nserve exactly the same function as they did before this governance model\nwas chosen. Most discussions of bug fixes and minor improvements can\njust happen on these forums, as they always have.\n\nPEP considerations\n\nThe GUIDO, members of the CoP, and anyone else in the Python community\nmay propose a PEP. Treatment of the prospective PEP is handled the same\nregardless of the author of the PEP.\n\nHowever, in the case of the GUIDO authoring a PEP, an impartial PEP\nDelegate should be selected, and given the authority to accept or reject\nthe PEP. The GUIDO should recuse themselves from the decision making\nprocess. In the case of controversial PEPs where a clear consensus does\nnot arrive, ultimate authority on PEPs authored by the GUIDO rests with\nthe CoP.\n\nThe PEP propose is further enhanced such that a core developer must\nalways be chose as the PEP Shepherd. This person ensure that proper\nprocedure is maintained. The Shepherd must be chosen from among the core\ndevelopers. 
This means that while anyone can author a PEP, all PEPs must
have some level of sponsorship from at least one core developer.

Version History

Version 2

- Renamed to "The Technical Leader Governance Model"
- "singular leader" -> "singular technical leader"
- The adoption of PEP 8001 voting procedures is tentative until that
  PEP is approved
- Describe what happens if the GUIDO steps down
- Recall votes require a super majority of core devs to succeed

Copyright

This document has been placed in the public domain.

PEP: 11
Title: CPython platform support
Author: Martin von Löwis, Brett Cannon
Status: Active
Type: Process
Content-Type: text/x-rst
Created: 07-Jul-2002
Post-History: 18-Aug-2007, 14-May-2014, 20-Feb-2015, 10-Mar-2022

Abstract

This PEP documents how an operating system (platform) becomes supported
in CPython, what platforms are currently supported, and documents past
support.

Rationale

Over time, the CPython source code has collected various pieces of
platform-specific code, which, at some point in time, was considered
necessary to use CPython on a specific platform. Without access to this
platform, it is not possible to determine whether this code is still
needed. As a result, this code may either break during CPython's
evolution, or it may become unnecessary as the platforms evolve as well.

Allowing these fragments to grow poses the risk of unmaintainability:
without having experts for a large number of platforms, it is not
possible to determine whether a certain change to the CPython source
code will work on all supported platforms.

To reduce this risk, this PEP specifies what is required for a platform
to be considered supported by CPython as well as providing a procedure
to remove code for platforms with few or no CPython users.

This PEP also lists what platforms are supported by the CPython
interpreter. This lets people know what platforms are directly supported
by the CPython development team.

Support tiers

Platform support is broken down into tiers. Each tier comes with
different requirements which lead to different promises being made about
support.

To be promoted to a tier, steering council support is required and is
expected to be driven by team consensus. Demotion to a lower tier occurs
when the requirements of the current tier are no longer met for a
platform for an extended period of time based on the judgment of the
release manager or steering council. For platforms which no longer meet
the requirements of any tier by b1 of a new feature release, an
announcement will be made to warn the community of the pending removal
of support for the platform (e.g. in the b1 announcement).
If the\nplatform is not brought into line for at least one of the tiers by the\nfirst release candidate, it will be listed as unsupported in this PEP.\n\nTier 1\n\n- STATUS\n- CI failures block releases.\n- Changes which would break the main branch are not allowed to be\n merged; any breakage should be fixed or reverted immediately.\n- All core developers are responsible to keep main, and thus these\n platforms, working.\n- Failures on these platforms block a release.\n\n Target Triple Notes\n -------------------------- -----------------\n aarch64-apple-darwin clang\n i686-pc-windows-msvc \n x86_64-pc-windows-msvc \n x86_64-apple-darwin BSD libc, clang\n x86_64-unknown-linux-gnu glibc, gcc\n\nTier 2\n\n- STATUS\n- Must have a reliable buildbot.\n- At least two core developers are signed up to support the platform.\n- Changes which break any of these platforms are to be fixed or\n reverted within 24 hours.\n- Failures on these platforms block a release.\n\n+-----------------------+--------------------+-----------------------+\n| Target Triple | Notes | Contacts |\n+=======================+====================+=======================+\n| aarc | glibc, gcc | Petr Viktorin, Victor |\n| h64-unknown-linux-gnu | | Stinner |\n| | glibc, clang | |\n| | | Victor Stinner, |\n| | | Gregory P. Smith |\n+-----------------------+--------------------+-----------------------+\n| wasm32-unknown-wasi | WASI SDK, Wasmtime | Brett Cannon, Eric |\n| | | Snow |\n+-----------------------+--------------------+-----------------------+\n| x86 | glibc, clang | Victor Stinner, |\n| _64-unknown-linux-gnu | | Gregory P. Smith |\n+-----------------------+--------------------+-----------------------+\n\nTier 3\n\n- STATUS\n- Must have a reliable buildbot.\n- At least one core developer is signed up to support the platform.\n- No response SLA to failures.\n- Failures on these platforms do not block a release.\n\n+----------------------+----------------------+----------------------+\n| Target Triple | Notes | Contacts |\n+======================+======================+======================+\n| a | | Russell Keith-Magee, |\n| arch64-linux-android | | Petr Viktorin |\n+----------------------+----------------------+----------------------+\n| aar | | Steve Dower |\n| ch64-pc-windows-msvc | | |\n+----------------------+----------------------+----------------------+\n| arm64-apple-ios | iOS on device | Russell Keith-Magee, |\n| | | Ned Deily |\n+----------------------+----------------------+----------------------+\n| arm64 | iOS on M1 macOS | Russell Keith-Magee, |\n| -apple-ios-simulator | simulator | Ned Deily |\n+----------------------+----------------------+----------------------+\n| armv7l-unk | Raspberry Pi OS, | Gregory P. 
Smith |\n| nown-linux-gnueabihf | glibc, gcc | |\n+----------------------+----------------------+----------------------+\n| powerpc64 | glibc, clang | Victor Stinner |\n| le-unknown-linux-gnu | | |\n| | glibc, gcc | Victor Stinner |\n+----------------------+----------------------+----------------------+\n| s39 | glibc, gcc | Victor Stinner |\n| 0x-unknown-linux-gnu | | |\n+----------------------+----------------------+----------------------+\n| x86_64-linux-android | | Russell Keith-Magee, |\n| | | Petr Viktorin |\n+----------------------+----------------------+----------------------+\n| x8 | BSD libc, clang | Victor Stinner |\n| 6_64-unknown-freebsd | | |\n+----------------------+----------------------+----------------------+\n\nAll other platforms\n\nSupport for a platform may be partial within the code base, such as from\nactive development around platform support or accidentally. Code changes\nto platforms not listed in the above tiers may be rejected or removed\nfrom the code base without a deprecation process if they cause a\nmaintenance burden or obstruct general improvements.\n\nPlatforms not listed here may be supported by the wider Python community\nin some way. If your desired platform is not listed above, please\nperform a search online to see if someone is already providing support\nin some form.\n\nNotes\n\nMicrosoft Windows\n\nWindows versions prior to Windows 10 follow Microsoft's Fixed Lifecycle\nPolicy, with a mainstream support phase for 5 years after release, where\nthe product is generally commercially available, and an additional 5\nyear extended support phase, where paid support is still available and\ncertain bug fixes are released. Extended Security Updates (ESU) is a\npaid program available to high-volume enterprise customers as a \"last\nresort\" option to receive certain security updates after extended\nsupport ends. ESU is considered a distinct phase that follows the\nexpiration of extended support.\n\nWindows 10 and later follow Microsoft's Modern Lifecycle Policy, which\nvaries per-product, per-version, per-edition and per-channel. Generally,\nfeature updates (1709, 22H2) occur every 6-12 months and are supported\nfor 18-36 months; Server and IoT editions, and LTSC channel releases are\nsupported for 5-10 years, and the latest feature release of a major\nversion (Windows 10, Windows 11) generally receives new updates for at\nleast 10 years following release. Microsoft's Windows Lifecycle FAQ has\nmore specific and up-to-date guidance.\n\nCPython's Windows support currently follows Microsoft's lifecycles. A\nnew feature release X.Y.0 will support all Windows versions whose\nextended support phase has not yet expired. Subsequent bug fix releases\nwill support the same Windows versions as the original feature release,\neven if no longer supported by Microsoft. New versions of Windows\nreleased while CPython is in maintenance mode may be supported at the\ndiscretion of the core team and release manager.\n\nAs of 2024, our current interpretation of Microsoft's lifecycles is that\nWindows for IoT and embedded systems is out of scope for new CPython\nreleases, as the intent of those is to avoid feature updates. Windows\nServer will usually be the oldest version still receiving free security\nfixes, and that will determine the earliest supported client release\nwith equivalent API version (which will usually be past its\nend-of-life).\n\nEach feature release is built by a specific version of Microsoft Visual\nStudio. 
That version should have mainstream support when the release is made.
Developers of extension modules will generally need to use the same
Visual Studio release; they are concerned both with the availability
of the versions they need to use, and with keeping the zoo of versions
small. The CPython source tree will keep unmaintained build files for
older Visual Studio releases, for which patches will be accepted. Such
build files will be removed from the source tree 3 years after the
extended support for the compiler has ended (but continue to remain
available in revision control).

Legacy C Locale

Starting with CPython 3.7.0, *nix platforms are expected to provide at
least one of C.UTF-8 (full locale), C.utf8 (full locale) or UTF-8
(LC_CTYPE-only locale) as an alternative to the legacy C locale.

Any Unicode-related integration problems that occur only in the legacy
C locale and cannot be reproduced in an appropriately configured
non-ASCII locale will be closed as "won't fix".

Unsupporting platforms

If a platform drops out of tiered support, a note must be posted in
this PEP that the platform is no longer actively supported. This note
must include:

- The name of the system,
- The first release number that does not support this platform
  anymore, and
- The first release where the historical support code is actively
  removed.

In some cases, it is not possible to identify the specific list of
systems for which some code is used (e.g. when autoconf tests for
absence of some feature which is considered present on all supported
systems). In this case, the name will give the precise condition
(usually a preprocessor symbol) that will become unsupported.

At the same time, the CPython build must be changed to produce a
warning if somebody tries to install CPython on this platform. On
platforms using autoconf, configure should also be made to emit a
warning about the unsupported platform.

This gives potential users of the platform a chance to step forward
and offer maintenance.
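The build-time warning described above could be approximated by a
small guard along the following lines. This is only an illustrative
sketch: the platform prefix is a placeholder, and the real check would
live in CPython's configure/build machinery rather than in Python
code:

    import sys
    import warnings

    # Placeholder list of platform prefixes that have dropped out of
    # tiered support; the authoritative list is this PEP, not code.
    _UNSUPPORTED_PLATFORM_PREFIXES = ("exampleos",)

    def warn_if_unsupported():
        """Warn when installing CPython on an unsupported platform."""
        for prefix in _UNSUPPORTED_PLATFORM_PREFIXES:
            if sys.platform.startswith(prefix):
                warnings.warn(
                    "Platform %r is no longer supported by CPython; "
                    "see PEP 11 for details." % (sys.platform,),
                    RuntimeWarning)
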
We do not treat a platform that loses Tier 3 support\nany worse than a platform that was never supported.\n\nNo-longer-supported platforms\n\n- Name: MS-DOS, MS-Windows 3.x\n Unsupported in: Python 2.0\n Code removed in: Python 2.1\n\n- Name: SunOS 4\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: DYNIX\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: dgux\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Minix\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Irix 4 and --with-sgi-dl\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Linux 1\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Systems defining __d6_pthread_create (configure.in)\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Systems defining PY_PTHREAD_D4, PY_PTHREAD_D6, or\n PY_PTHREAD_D7 in thread_pthread.h\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Systems using --with-dl-dld\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: Systems using --without-universal-newlines,\n Unsupported in: Python 2.3\n Code removed in: Python 2.4\n\n- Name: MacOS 9\n Unsupported in: Python 2.4\n Code removed in: Python 2.4\n\n- Name: Systems using --with-wctype-functions\n Unsupported in: Python 2.6\n Code removed in: Python 2.6\n\n- Name: Win9x, WinME, NT4\n Unsupported in: Python 2.6 (warning in 2.5 installer)\n Code removed in: Python 2.6\n\n- Name: AtheOS\n Unsupported in: Python 2.6 (with \"AtheOS\" changed to \"Syllable\")\n Build broken in: Python 2.7 (edit configure to re-enable)\n Code removed in: Python 3.0\n Details: http://www.syllable.org/discussion.php?id=2320\n\n- Name: BeOS\n Unsupported in: Python 2.6 (warning in configure)\n Build broken in: Python 2.7 (edit configure to re-enable)\n Code removed in: Python 3.0\n\n- Name: Systems using Mach C Threads\n Unsupported in: Python 3.2\n Code removed in: Python 3.3\n\n- Name: SunOS lightweight processes (LWP)\n Unsupported in: Python 3.2\n Code removed in: Python 3.3\n\n- Name: Systems using --with-pth (GNU pth threads)\n Unsupported in: Python 3.2\n Code removed in: Python 3.3\n\n- Name: Systems using Irix threads\n Unsupported in: Python 3.2\n Code removed in: Python 3.3\n\n- Name: OSF* systems (issue 8606)\n Unsupported in: Python 3.2\n Code removed in: Python 3.3\n\n- Name: OS/2 (issue 16135)\n Unsupported in: Python 3.3\n Code removed in: Python 3.4\n\n- Name: VMS (issue 16136)\n Unsupported in: Python 3.3\n Code removed in: Python 3.4\n\n- Name: Windows 2000\n Unsupported in: Python 3.3\n Code removed in: Python 3.4\n\n- Name: Windows systems where COMSPEC points to command.com\n Unsupported in: Python 3.3\n Code removed in: Python 3.4\n\n- Name: RISC OS\n Unsupported in: Python 3.0 (some code actually removed)\n Code removed in: Python 3.4\n\n- Name: IRIX\n Unsupported in: Python 3.7\n Code removed in: Python 3.7\n\n- Name: Systems without multithreading support\n Unsupported in: Python 3.7\n Code removed in: Python 3.7\n\n- Name: wasm32-unknown-emscripten\n Unsupported in: Python 3.13\n Code removed in: Unknown\n\nDiscussions\n\n- April 2022: Consider adding a Tier 3 to tiered platform support\n (Victor Stinner)\n- March 2022: Proposed tiered platform support (Brett Cannon)\n- February 2015: Update to PEP 11 to clarify garnering platform\n support (Brett Cannon)\n- May 2014: Where is our official policy of what platforms we do\n support? 
(Brett Cannon)\n- August 2007: PEP 11 update - Call for port maintainers to step\n forward (Skip Montanaro)\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.634821"},"created":{"kind":"timestamp","value":"2002-07-07T00:00:00","string":"2002-07-07T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0011/\",\n \"authors\": [\n \"Martin von Löwis\"\n ],\n \"pep_number\": \"0011\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":21,"cells":{"id":{"kind":"string","value":"0289"},"text":{"kind":"string","value":"PEP: 289 Title: Generator Expressions Author: Raymond Hettinger\n Status: Final Type: Standards Track Content-Type:\ntext/x-rst Created: 30-Jan-2002 Python-Version: 2.4 Post-History:\n22-Oct-2003\n\nAbstract\n\nThis PEP introduces generator expressions as a high performance, memory\nefficient generalization of list comprehensions PEP 202 and generators\nPEP 255.\n\nRationale\n\nExperience with list comprehensions has shown their widespread utility\nthroughout Python. However, many of the use cases do not need to have a\nfull list created in memory. Instead, they only need to iterate over the\nelements one at a time.\n\nFor instance, the following summation code will build a full list of\nsquares in memory, iterate over those values, and, when the reference is\nno longer needed, delete the list:\n\n sum([x*x for x in range(10)])\n\nMemory is conserved by using a generator expression instead:\n\n sum(x*x for x in range(10))\n\nSimilar benefits are conferred on constructors for container objects:\n\n s = set(word for line in page for word in line.split())\n d = dict( (k, func(k)) for k in keylist)\n\nGenerator expressions are especially useful with functions like sum(),\nmin(), and max() that reduce an iterable input to a single value:\n\n max(len(line) for line in file if line.strip())\n\nGenerator expressions also address some examples of functionals coded\nwith lambda:\n\n reduce(lambda s, a: s + a.myattr, data, 0)\n reduce(lambda s, a: s + a[3], data, 0)\n\nThese simplify to:\n\n sum(a.myattr for a in data)\n sum(a[3] for a in data)\n\nList comprehensions greatly reduced the need for filter() and map().\nLikewise, generator expressions are expected to minimize the need for\nitertools.ifilter() and itertools.imap(). In contrast, the utility of\nother itertools will be enhanced by generator expressions:\n\n dotproduct = sum(x*y for x,y in itertools.izip(x_vector, y_vector))\n\nHaving a syntax similar to list comprehensions also makes it easy to\nconvert existing code into a generator expression when scaling up\napplication.\n\nEarly timings showed that generators had a significant performance\nadvantage over list comprehensions. However, the latter were highly\noptimized for Py2.4 and now the performance is roughly comparable for\nsmall to mid-sized data sets. As the data volumes grow larger, generator\nexpressions tend to perform better because they do not exhaust cache\nmemory and they allow Python to re-use objects between iterations.\n\nBDFL Pronouncements\n\nThis PEP is ACCEPTED for Py2.4.\n\nThe Details\n\n(None of this is exact enough in the eye of a reader from Mars, but I\nhope the examples convey the intention well enough for a discussion in\nc.l.py. 
The Python Reference Manual should contain a 100% exact semantic\nand syntactic specification.)\n\n1. The semantics of a generator expression are equivalent to creating\n an anonymous generator function and calling it. For example:\n\n g = (x**2 for x in range(10))\n print g.next()\n\n is equivalent to:\n\n def __gen(exp):\n for x in exp:\n yield x**2\n g = __gen(iter(range(10)))\n print g.next()\n\n Only the outermost for-expression is evaluated immediately, the\n other expressions are deferred until the generator is run:\n\n g = (tgtexp for var1 in exp1 if exp2 for var2 in exp3 if exp4)\n\n is equivalent to:\n\n def __gen(bound_exp):\n for var1 in bound_exp:\n if exp2:\n for var2 in exp3:\n if exp4:\n yield tgtexp\n g = __gen(iter(exp1))\n del __gen\n\n2. The syntax requires that a generator expression always needs to be\n directly inside a set of parentheses and cannot have a comma on\n either side. With reference to the file Grammar/Grammar in CVS, two\n rules change:\n\n a) The rule:\n\n atom: '(' [testlist] ')'\n\n changes to:\n\n atom: '(' [testlist_gexp] ')'\n\n where testlist_gexp is almost the same as listmaker, but only\n allows a single test after 'for' ... 'in':\n\n testlist_gexp: test ( gen_for | (',' test)* [','] )\n\n b) The rule for arglist needs similar changes.\n\n This means that you can write:\n\n sum(x**2 for x in range(10))\n\n but you would have to write:\n\n reduce(operator.add, (x**2 for x in range(10)))\n\n and also:\n\n g = (x**2 for x in range(10))\n\n i.e. if a function call has a single positional argument, it can be\n a generator expression without extra parentheses, but in all other\n cases you have to parenthesize it.\n\n The exact details were checked in to Grammar/Grammar version 1.49.\n\n3. The loop variable (if it is a simple variable or a tuple of simple\n variables) is not exposed to the surrounding function. This\n facilitates the implementation and makes typical use cases more\n reliable. In some future version of Python, list comprehensions will\n also hide the induction variable from the surrounding code (and, in\n Py2.4, warnings will be issued for code accessing the induction\n variable).\n\n For example:\n\n x = \"hello\"\n y = list(x for x in \"abc\")\n print x # prints \"hello\", not \"c\"\n\n4. List comprehensions will remain unchanged. For example:\n\n [x for x in S] # This is a list comprehension.\n [(x for x in S)] # This is a list containing one generator\n # expression.\n\n Unfortunately, there is currently a slight syntactic difference. The\n expression:\n\n [x for x in 1, 2, 3]\n\n is legal, meaning:\n\n [x for x in (1, 2, 3)]\n\n But generator expressions will not allow the former version:\n\n (x for x in 1, 2, 3)\n\n is illegal.\n\n The former list comprehension syntax will become illegal in Python\n 3.0, and should be deprecated in Python 2.4 and beyond.\n\n List comprehensions also \"leak\" their loop variable into the\n surrounding scope. This will also change in Python 3.0, so that the\n semantic definition of a list comprehension in Python 3.0 will be\n equivalent to list(). 
Python 2.4 and beyond\n should issue a deprecation warning if a list comprehension's loop\n variable has the same name as a variable used in the immediately\n surrounding scope.\n\nEarly Binding versus Late Binding\n\nAfter much discussion, it was decided that the first (outermost)\nfor-expression should be evaluated immediately and that the remaining\nexpressions be evaluated when the generator is executed.\n\nAsked to summarize the reasoning for binding the first expression, Guido\noffered[1]:\n\n Consider sum(x for x in foo()). Now suppose there's a bug in foo()\n that raises an exception, and a bug in sum() that raises an\n exception before it starts iterating over its argument. Which\n exception would you expect to see? I'd be surprised if the one in\n sum() was raised rather the one in foo(), since the call to foo()\n is part of the argument to sum(), and I expect arguments to be\n processed before the function is called.\n\n OTOH, in sum(bar(x) for x in foo()), where sum() and foo()\n are bugfree, but bar() raises an exception, we have no choice but\n to delay the call to bar() until sum() starts iterating -- that's\n part of the contract of generators. (They do nothing until their\n next() method is first called.)\n\nVarious use cases were proposed for binding all free variables when the\ngenerator is defined. And some proponents felt that the resulting\nexpressions would be easier to understand and debug if bound\nimmediately.\n\nHowever, Python takes a late binding approach to lambda expressions and\nhas no precedent for automatic, early binding. It was felt that\nintroducing a new paradigm would unnecessarily introduce complexity.\n\nAfter exploring many possibilities, a consensus emerged that binding\nissues were hard to understand and that users should be strongly\nencouraged to use generator expressions inside functions that consume\ntheir arguments immediately. For more complex applications, full\ngenerator definitions are always superior in terms of being obvious\nabout scope, lifetime, and binding[2].\n\nReduction Functions\n\nThe utility of generator expressions is greatly enhanced when combined\nwith reduction functions like sum(), min(), and max(). The heapq module\nin Python 2.4 includes two new reduction functions: nlargest() and\nnsmallest(). Both work well with generator expressions and keep no more\nthan n items in memory at one time.\n\nAcknowledgements\n\n- Raymond Hettinger first proposed the idea of \"generator\n comprehensions\" in January 2002.\n- Peter Norvig resurrected the discussion in his proposal for\n Accumulation Displays.\n- Alex Martelli provided critical measurements that proved the\n performance benefits of generator expressions. He also provided\n strong arguments that they were a desirable thing to have.\n- Phillip Eby suggested \"iterator expressions\" as the name.\n- Subsequently, Tim Peters suggested the name \"generator expressions\".\n- Armin Rigo, Tim Peters, Guido van Rossum, Samuele Pedroni, Hye-Shik\n Chang and Raymond Hettinger teased out the issues surrounding early\n versus late binding[3].\n- Jiwon Seo single-handedly implemented various versions of the\n proposal including the final version loaded into CVS. Along the way,\n there were periodic code reviews by Hye-Shik Chang and Raymond\n Hettinger. Guido van Rossum made the key design decisions after\n comments from Armin Rigo and newsgroup discussions. 
Raymond\n Hettinger provided the test suite, documentation, tutorial, and\n examples[4].\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] Discussion over the relative merits of early versus late binding\nhttps://mail.python.org/pipermail/python-dev/2004-April/044555.html\n\n[2] Patch discussion and alternative patches on Source Forge\nhttps://bugs.python.org/issue872326\n\n[3] Discussion over the relative merits of early versus late binding\nhttps://mail.python.org/pipermail/python-dev/2004-April/044555.html\n\n[4] Patch discussion and alternative patches on Source Forge\nhttps://bugs.python.org/issue872326"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.647173"},"created":{"kind":"timestamp","value":"2002-01-30T00:00:00","string":"2002-01-30T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0289/\",\n \"authors\": [\n \"Raymond Hettinger\"\n ],\n \"pep_number\": \"0289\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":22,"cells":{"id":{"kind":"string","value":"0504"},"text":{"kind":"string","value":"PEP: 504 Title: Using the System RNG by default Version: $Revision$\nLast-Modified: $Date$ Author: Alyssa Coghlan \nStatus: Withdrawn Type: Standards Track Content-Type: text/x-rst\nCreated: 15-Sep-2015 Python-Version: 3.6 Post-History: 15-Sep-2015\n\nAbstract\n\nPython currently defaults to using the deterministic Mersenne Twister\nrandom number generator for the module level APIs in the random module,\nrequiring users to know that when they're performing \"security\nsensitive\" work, they should instead switch to using the\ncryptographically secure os.urandom or random.SystemRandom interfaces or\na third party library like cryptography.\n\nUnfortunately, this approach has resulted in a situation where\ndevelopers that aren't aware that they're doing security sensitive work\nuse the default module level APIs, and thus expose their users to\nunnecessary risks.\n\nThis isn't an acute problem, but it is a chronic one, and the often long\ndelays between the introduction of security flaws and their exploitation\nmeans that it is difficult for developers to naturally learn from\nexperience.\n\nIn order to provide an eventually pervasive solution to the problem,\nthis PEP proposes that Python switch to using the system random number\ngenerator by default in Python 3.6, and require developers to opt-in to\nusing the deterministic random number generator process wide either by\nusing a new random.ensure_repeatable() API, or by explicitly creating\ntheir own random.Random() instance.\n\nTo minimise the impact on existing code, module level APIs that require\ndeterminism will implicitly switch to the deterministic PRNG.\n\nPEP Withdrawal\n\nDuring discussion of this PEP, Steven D'Aprano proposed the simpler\nalternative of offering a standardised secrets module that provides \"one\nobvious way\" to handle security sensitive tasks like generating default\npasswords and other tokens.\n\nSteven's proposal has the desired effect of aligning the easy way to\ngenerate such tokens and the right way to generate them, without\nintroducing any compatibility risks for the existing random module API,\nso this PEP has been withdrawn in favour of further work on refining\nSteven's proposal as PEP 506.\n\nProposal\n\nCurrently, it is never correct to use the module level functions in the\nrandom module for security sensitive applications. 
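As an illustration of that existing admonition (this example is not
taken from the PEP itself), code that needs a security sensitive token
today is expected to reach for the system RNG explicitly rather than
the module level functions:

    import random
    import string

    _sysrand = random.SystemRandom()   # backed by os.urandom()

    def make_reset_token(length=32):
        """Generate a password reset token using the system RNG."""
        alphabet = string.ascii_letters + string.digits
        return "".join(_sysrand.choice(alphabet) for _ in range(length))
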
This PEP proposes to change that admonition in Python 3.6+ to instead
be that it is not correct to use the module level functions in the
random module for security sensitive applications if
random.ensure_repeatable() is ever called (directly or indirectly) in
that process.

To achieve this, rather than being bound methods of a random.Random
instance as they are today, the module level callables in random would
change to be functions that delegate to the corresponding method of
the existing random._inst module attribute.

By default, this attribute will be bound to a random.SystemRandom
instance.

A new random.ensure_repeatable() API will then rebind the random._inst
attribute to a random.Random instance, restoring the same module level
API behaviour as existed in previous Python versions (aside from the
additional level of indirection):

    def ensure_repeatable():
        """Switch to using random.Random() for the module level APIs

        This switches the default RNG instance from the cryptographically
        secure random.SystemRandom() to the deterministic random.Random(),
        enabling the seed(), getstate() and setstate() operations. This means
        a particular random scenario can be replayed later by providing the
        same seed value or restoring a previously saved state.

        NOTE: Libraries implementing security sensitive operations should
        always explicitly use random.SystemRandom() or os.urandom in order to
        correctly handle applications that call this function.
        """
        global _inst  # rebind the module level instance, not a local name
        if not isinstance(_inst, Random):
            _inst = random.Random()

To minimise the impact on existing code, calling any of the following
module level functions will implicitly call random.ensure_repeatable():

- random.seed
- random.getstate
- random.setstate

There are no changes proposed to the random.Random or
random.SystemRandom class APIs - applications that explicitly
instantiate their own random number generators will be entirely
unaffected by this proposal.

Warning on implicit opt-in

In Python 3.6, implicitly opting in to the use of the deterministic
PRNG will emit a deprecation warning using the following check:

    if not isinstance(_inst, Random):
        warnings.warn("Implicitly ensuring repeatability. "
                      "See help(random.ensure_repeatable) for details",
                      DeprecationWarning)
        ensure_repeatable()

The specific wording of the warning should have a suitable answer
added to Stack Overflow as was done for the custom error message that
was added for missing parentheses in a call to print[1].

In the first Python 3 release after Python 2.7 switches to security
fix only mode, the deprecation warning will be upgraded to a
RuntimeWarning so it is visible by default.

This PEP does not propose ever removing the ability to ensure the
default RNG used process wide is a deterministic PRNG that will
produce the same series of outputs given a specific seed.
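A minimal sketch of the repeatable behaviour being preserved, assuming
the ensure_repeatable() API proposed above (it does not exist in any
released Python):

    import random

    # Hypothetical API from this PEP: switch the module level functions
    # back to the deterministic Mersenne Twister PRNG.
    random.ensure_repeatable()

    random.seed(12345)
    first_run = [random.random() for _ in range(3)]

    random.seed(12345)
    second_run = [random.random() for _ in range(3)]

    assert first_run == second_run  # same seed, same sequence
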
That capability is\nwidely used in modelling and simulation scenarios, and requiring that\nensure_repeatable() be called either directly or indirectly is a\nsufficient enhancement to address the cases where the module level\nrandom API is used for security sensitive tasks in web applications\nwithout due consideration for the potential security implications of\nusing a deterministic PRNG.\n\nPerformance impact\n\nDue to the large performance difference between random.Random and\nrandom.SystemRandom, applications ported to Python 3.6 will encounter a\nsignificant performance regression in cases where:\n\n- the application is using the module level random API\n- cryptographic quality randomness isn't needed\n- the application doesn't already implicitly opt back in to the\n deterministic PRNG by calling random.seed, random.getstate, or\n random.setstate\n- the application isn't updated to explicitly call\n random.ensure_repeatable\n\nThis would be noted in the Porting section of the Python 3.6 What's New\nguide, with the recommendation to include the following code in the\n__main__ module of affected applications:\n\n if hasattr(random, \"ensure_repeatable\"):\n random.ensure_repeatable()\n\nApplications that do need cryptographic quality randomness should be\nusing the system random number generator regardless of speed\nconsiderations, so in those cases the change proposed in this PEP will\nfix a previously latent security defect.\n\nDocumentation changes\n\nThe random module documentation would be updated to move the\ndocumentation of the seed, getstate and setstate interfaces later in the\nmodule, along with the documentation of the new ensure_repeatable\nfunction and the associated security warning.\n\nThat section of the module documentation would also gain a discussion of\nthe respective use cases for the deterministic PRNG enabled by\nensure_repeatable (games, modelling & simulation, software testing) and\nthe system RNG that is used by default (cryptography, security token\ngeneration). This discussion will also recommend the use of third party\nsecurity libraries for the latter task.\n\nRationale\n\nWriting secure software under deadline and budget pressures is a hard\nproblem. This is reflected in regular notifications of data breaches\ninvolving personally identifiable information[2], as well as with\nfailures to take security considerations into account when new systems,\nlike motor vehicles [3], are connected to the internet. It's also the\ncase that a lot of the programming advice readily available on the\ninternet [#search] simply doesn't take the mathematical arcana of\ncomputer security into account. 
Compounding these issues is the fact\nthat defenders have to cover all of their potential vulnerabilities, as\na single mistake can make it possible to subvert other defences[4].\n\nOne of the factors that contributes to making this last aspect\nparticularly difficult is APIs where using them inappropriately creates\na silent security failure - one where the only way to find out that what\nyou're doing is incorrect is for someone reviewing your code to say\n\"that's a potential security problem\", or for a system you're\nresponsible for to be compromised through such an oversight (and you're\nnot only still responsible for that system when it is compromised, but\nyour intrusion detection and auditing mechanisms are good enough for you\nto be able to figure out after the event how the compromise took place).\n\nThis kind of situation is a significant contributor to \"security\nfatigue\", where developers (often rightly[5]) feel that security\nengineers spend all their time saying \"don't do that the easy way, it\ncreates a security vulnerability\".\n\nAs the designers of one of the world's most popular languages[6], we can\nhelp reduce that problem by making the easy way the right way (or at\nleast the \"not wrong\" way) in more circumstances, so developers and\nsecurity engineers can spend more time worrying about mitigating\nactually interesting threats, and less time fighting with default\nlanguage behaviours.\n\nDiscussion\n\nWhy \"ensure_repeatable\" over \"ensure_deterministic\"?\n\nThis is a case where the meaning of a word as specialist jargon\nconflicts with the typical meaning of the word, even though it's\ntechnically the same.\n\nFrom a technical perspective, a \"deterministic RNG\" means that given\nknowledge of the algorithm and the current state, you can reliably\ncompute arbitrary future states.\n\nThe problem is that \"deterministic\" on its own doesn't convey those\nqualifiers, so it's likely to instead be interpreted as \"predictable\" or\n\"not random\" by folks that are familiar with the conventional meaning,\nbut aren't familiar with the additional qualifiers on the technical\nmeaning.\n\nA second problem with \"deterministic\" as a description for the\ntraditional RNG is that it doesn't really tell you what you can do with\nthe traditional RNG that you can't do with the system one.\n\n\"ensure_repeatable\" aims to address both of those problems, as its\ncommon meaning accurately describes the main reason for preferring the\ndeterministic PRNG over the system RNG: ensuring you can repeat the same\nseries of outputs by providing the same seed value, or by restoring a\npreviously saved PRNG state.\n\nOnly changing the default for Python 3.6+\n\nSome other recent security changes, such as upgrading the capabilities\nof the ssl module and switching to properly verifying HTTPS certificates\nby default, have been considered critical enough to justify backporting\nthe change to all currently supported versions of Python.\n\nThe difference in this case is one of degree - the additional benefits\nfrom rolling out this particular change a couple of years earlier than\nwill otherwise be the case aren't sufficient to justify either the\nadditional effort or the stability risks involved in making such an\nintrusive change in a maintenance release.\n\nKeeping the module level functions\n\nIn additional to general backwards compatibility considerations, Python\nis widely used for educational purposes, and we specifically don't want\nto invalidate the wide array of educational 
material that assumes the\navailability of the current random module API. Accordingly, this\nproposal ensures that most of the public API can continue to be used not\nonly without modification, but without generating any new warnings.\n\nWarning when implicitly opting in to the deterministic RNG\n\nIt's necessary to implicitly opt in to the deterministic PRNG as Python\nis widely used for modelling and simulation purposes where this is the\nright thing to do, and in many cases, these software models won't have a\ndedicated maintenance team tasked with ensuring they keep working on the\nlatest versions of Python.\n\nUnfortunately, explicitly calling random.seed with data from os.urandom\nis also a mistake that appears in a number of the flawed \"how to\ngenerate a security token in Python\" guides readily available online.\n\nUsing first DeprecationWarning, and then eventually a RuntimeWarning, to\nadvise against implicitly switching to the deterministic PRNG aims to\nnudge future users that need a cryptographically secure RNG away from\ncalling random.seed() and those that genuinely need a deterministic\ngenerator towards explicitly calling random.ensure_repeatable().\n\nAvoiding the introduction of a userspace CSPRNG\n\nThe original discussion of this proposal on python-ideas[7] suggested\nintroducing a cryptographically secure pseudo-random number generator\nand using that by default, rather than defaulting to the relatively slow\nsystem random number generator.\n\nThe problem[8] with this approach is that it introduces an additional\npoint of failure in security sensitive situations, for the sake of\napplications where the random number generation may not even be on a\ncritical performance path.\n\nApplications that do need cryptographic quality randomness should be\nusing the system random number generator regardless of speed\nconsiderations, so in those cases.\n\nIsn't the deterministic PRNG \"secure enough\"?\n\nIn a word, \"No\" - that's why there's a warning in the module\ndocumentation that says not to use it for security sensitive purposes.\nWhile we're not currently aware of any studies of Python's random number\ngenerator specifically, studies of PHP's random number generator[9] have\ndemonstrated the ability to use weaknesses in that subsystem to\nfacilitate a practical attack on password recovery tokens in popular PHP\nweb applications.\n\nHowever, one of the rules of secure software development is that\n\"attacks only get better, never worse\", so it may be that by the time\nPython 3.6 is released we will actually see a practical attack on\nPython's deterministic PRNG publicly documented.\n\nSecurity fatigue in the Python ecosystem\n\nOver the past few years, the computing industry as a whole has been\nmaking a concerted effort to upgrade the shared network infrastructure\nwe all depend on to a \"secure by default\" stance. 
As one of the most\nwidely used programming languages for network service development\n(including the OpenStack Infrastructure-as-a-Service platform) and for\nsystems administration on Linux systems in general, a fair share of that\nburden has fallen on the Python ecosystem, which is understandably\nfrustrating for Pythonistas using Python in other contexts where these\nissues aren't of as great a concern.\n\nThis consideration is one of the primary factors driving the substantial\nbackwards compatibility improvements in this proposal relative to the\ninitial draft concept posted to python-ideas[10].\n\nAcknowledgements\n\n- Theo de Raadt, for making the suggestion to Guido van Rossum that we\n seriously consider defaulting to a cryptographically secure random\n number generator\n- Serhiy Storchaka, Terry Reedy, Petr Viktorin, and anyone else in the\n python-ideas threads that suggested the approach of transparently\n switching to the random.Random implementation when any of the\n functions that only make sense for a deterministic RNG are called\n- Nathaniel Smith for providing the reference on practical attacks\n against PHP's random number generator when used to generate password\n reset tokens\n- Donald Stufft for pursuing additional discussions with network\n security experts that suggested the introduction of a userspace\n CSPRNG would mean additional complexity for insufficient gain\n relative to just using the system RNG directly\n- Paul Moore for eloquently making the case for the current level of\n security fatigue in the Python ecosystem\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:\n\n[1] Stack Overflow answer for missing parentheses in call to print\n(http://stackoverflow.com/questions/25445439/what-does-syntaxerror-missing-parentheses-in-call-to-print-mean-in-python/25445440#25445440)\n\n[2] Visualization of data breaches involving more than 30k records\n(each)\n(http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/)\n\n[3] Remote UConnect hack for Jeep Cherokee\n(http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/)\n\n[4] Bypassing bcrypt through an insecure data cache\n(http://arstechnica.com/security/2015/09/once-seen-as-bulletproof-11-million-ashley-madison-passwords-already-cracked/)\n\n[5] OWASP Top Ten Web Security Issues for 2013\n(https://www.owasp.org/index.php/OWASP_Top_Ten_Project#tab=OWASP_Top_10_for_2013)\n\n[6] IEEE Spectrum 2015 Top Ten Programming Languages\n(http://spectrum.ieee.org/computing/software/the-2015-top-ten-programming-languages)\n\n[7] python-ideas thread discussing using a userspace CSPRNG\n(https://mail.python.org/pipermail/python-ideas/2015-September/035886.html)\n\n[8] Safely generating random numbers\n(http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/)\n\n[9] PRNG based attack against password reset tokens in PHP applications\n(https://media.blackhat.com/bh-us-12/Briefings/Argyros/BH_US_12_Argyros_PRNG_WP.pdf)\n\n[10] Initial draft concept that eventually became this PEP\n(https://mail.python.org/pipermail/python-ideas/2015-September/036095.html)"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.666233"},"created":{"kind":"timestamp","value":"2015-09-15T00:00:00","string":"2015-09-15T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": 
\"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0504/\",\n \"authors\": [\n \"Alyssa Coghlan\"\n ],\n \"pep_number\": \"0504\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":23,"cells":{"id":{"kind":"string","value":"0527"},"text":{"kind":"string","value":"PEP: 527 Title: Removing Un(der)used file types/extensions on PyPI\nVersion: $Revision$ Last-Modified: $Date$ Author: Donald Stufft\n BDFL-Delegate: Alyssa Coghlan \nDiscussions-To: distutils-sig@python.org Status: Final Type: Standards\nTrack Topic: Packaging Content-Type: text/x-rst Created: 23-Aug-2016\nPost-History: 23-Aug-2016 Resolution:\nhttps://mail.python.org/pipermail/distutils-sig/2016-September/029624.html\n\nAbstract\n\nThis PEP recommends deprecating, and ultimately removing, support for\nuploading certain unused or under used file types and extensions to\nPyPI. In particular it recommends disallowing further uploads of any\nfiles of the types bdist_dumb, bdist_rpm, bdist_dmg, bdist_msi, and\nbdist_wininst, leaving PyPI to only accept new uploads of the sdist,\nbdist_wheel, and bdist_egg file types.\n\nIn addition, this PEP proposes removing support for new uploads of\nsdists using the .tar, .tar.bz2, .tar.xz, .tar.Z, .tgz, .tbz, and any\nother extension besides .tar.gz and .zip.\n\nFinally, this PEP also proposes limiting the number of allowed sdist\nuploads for each individual release of a project on PyPI to one instead\nof one for each allowed extension.\n\nRationale\n\nFile Formats\n\nCurrently PyPI supports the following file types:\n\n- sdist\n- bdist_wheel\n- bdist_egg\n- bdist_wininst\n- bdist_msi\n- bdist_dmg\n- bdist_rpm\n- bdist_dumb\n\nHowever, these different types of files have varying amounts of\nusefulness or general use in the ecosystem. Continuing to support them\nadds a maintenance burden on PyPI as well as tool authors and incurs a\ncost in both bandwidth and disk space not only on PyPI itself, but also\non any mirrors of PyPI.\n\nPython packaging is a multi-level ecosystem where PyPI is primarily\nsuited and used to distribute virtual environment compatible packages\ndirectly from their respective project owners. These packages are then\nconsumed either directly by end-users, or by downstream distributors\nthat take these packages and turn them into their respective system\nlevel packages (such as RPM, deb, MSI, etc).\n\nWhile PyPI itself only directly works with these Python specific but\nplatform agnostic packages, we encourage community-driven and commercial\nconversions of these packages to downstream formats for particular\ntarget environments, like:\n\n- The conda cross-platform data analysis ecosystem (conda-forge)\n- The deb based Linux ecosystem (Debian, Ubuntu, etc)\n- The RPM based Linux ecosystem (Fedora, openSuSE, Mageia, etc)\n- The homebrew, MacPorts and fink ecosystems for Mac OS X\n- The Windows Package Management ecosystem (NuGet, Chocolatey, etc)\n- 3rd party creation of Windows MSIs and installers (e.g. Christoph\n Gohlke's work at http://www.lfd.uci.edu/~gohlke/pythonlibs/ )\n- other commercial redistribution formats (ActiveState's PyPM,\n Enthought Canopy, etc)\n- other open source community redistribution formats (Nix, Gentoo,\n Arch, *BSD, etc)\n\nIt is the belief of this PEP that the entire ecosystem is best supported\nby keeping PyPI focused on the platform agnostic formats, where the\nlimited amount of time by volunteers can be best used instead of\nspreading the available time out amongst several platforms. 
Further\nmore, this PEP believes that the people best positioned to provide well\nintegrated packages for a particular platform are people focused on that\nplatform, and not across all possible platforms.\n\nbdist_dumb\n\nAs it's name implies, bdist_dumb is not a very complex format, however\nit is so simple as to be worthless for actual usage.\n\nFor instance, if you're using something like pyenv on macOS and you're\nbuilding a library using Python 3.5, then bdist_dumb will produce a\n.tar.gz file named something like\nexampleproject-1.0.macosx-10.11-x86_64.tar.gz. Right off the bat this\nfile name is somewhat difficult to differentiate from an sdist since\nthey both use the same file extension (and with the legacy pre PEP 440\nversions, 1.0-macosx-10.11-x86_64 is a valid, although quite silly,\nversion number). However, once you open up the created .tar.gz, you'd\nfind that there is no metadata inside that could be used for things like\ndependency discovery and in fact, it is quite simply a tarball\ncontaining hardcoded paths to wherever files would have been installed\non the computer creating the bdist_dumb. Going back to our pyenv on\nmacOS example, this means that if I created it, it would contain files\nlike:\n\nUsers/dstufft/.pyenv/versions/3.5.2/lib/python3.5/site-packages/example.py\n\nbdist_rpm\n\nThe bdist_rpm format on PyPI allows people to upload .rpm files for end\nusers to manually download by hand and then manually install by hand.\nHowever, the common usage of rpm is with a specially designed repository\nthat allows automatic installation of dependencies, upgrades, etc which\nPyPI does not provide. Thus, it is a type of file that is barely being\nused on PyPI with only ~460 files of this type having been uploaded to\nPyPI (out a total of 662,544).\n\nIn addition, services like COPR provide a better supported mechanism for\npublishing and using RPM files than we're ever likely to get on PyPI.\n\nbdist_dmg, bdist_msi, and bdist_wininst\n\nThe bdist_dmg, bdist_msi, and bdist_winist formats are similar in that\nthey are an OS specific installer that will only install a library into\nan environment and are not designed for real user facing installs of\napplications (which would require things like bundling a Python\ninterpreter and the like).\n\nOut of these three, the usage for bdist_dmg and bdist_msi is very low,\nwith only ~500 bdist_msi files and ~50 bdist_dmg files having been\nuploaded to PyPI. The bdist_wininst format has more use, with ~14,000\nfiles having ever been uploaded to PyPI.\n\nIt's quite easy to look at the low usage of bdist_dmg and bdist_msi and\nconclude that removing them will be fairly low impact, however\nbdist_wininst has several orders of magnitude more usage. This is\nsomewhat misleading though, because although it has more people\nuploading those files the actual usage of those uploaded files is fairly\nlow. 
Taking a look at the previous 30 days, we can see that 90% of all\ndownloads of bdist_winist files from PyPI were generated by the\nmirroring infrastructure and 7% of them were generated by setuptools\n(which can currently be better covered by bdist_egg files).\n\nGiven the small number of files uploaded for bdist_dmg and bdist_msi and\nthat bdist_wininst is largely existing to either consume bandwidth and\ndisk space via the mirroring infrastructure or could be trivially\nreplaced with bdist_egg, this PEP proposes to include these three\nformats in the list of those to be disallowed.\n\nFile Extensions\n\nCurrently sdist supports a wide variety of file extensions like .tar.gz,\n.tar, .tar.bz2, .tar.xz, .zip, .tar.Z, .tgz, and .tbz. However, of those\nthe only extensions which get anything more than negligible usage is\n.tar.gz with 444,338 sdists currently, .zip with 58,774 sdists\ncurrently, and .tar.bz2 with 3,265 sdists currently.\n\nHaving multiple formats accepted requires tooling both within PyPI and\noutside of PyPI to handle all of the various extensions that might be\nused (even if nobody is currently using them). This doesn't only affect\nPyPI, but ripples out throughout the ecosystem. In addition, the\ndifferent formats all have different requirements for what optional C\nlibraries Python was linked against and different requirements for what\nversions of Python they support. In addition, multiple formats also\ncreate a weird situation where there may be two sdist files for a\nparticular project/release with subtly different content.\n\nIt's easy to advocate that anything outside of .tar.gz, .zip, and\n.tar.bz2 should be disallowed. Outside of a tiny handful, nobody has\nactively been uploading these other types of files in the ~15 years of\nPyPI's existence so they've obviously not been particularly useful. In\naddition, while .tar.xz is theoretically a nicer format than the other\n.tar.* formats due to the better compression ratio achieved by LZMA, it\nis only available in Python 3.3+ and has an optional dependency on the\nlzma C library.\n\nLooking at the three extensions we do have in current use, it's also\nfairly easy to conclude that .tar.bz2 can be disallowed as well. It has\na fairly small number of files ever uploaded with it and it requires an\nadditional optional C library to handle the bzip2 compression.\n\nFinally we get down to .tar.gz and .zip. Looking at the pure numbers for\nthese two, we can see that .tar.gz is by far the most uploaded format,\nwith 444,338 total uploaded compared to .zip's 58,774 and on POSIX\noperating systems .tar.gz is also the default produced by all currently\nreleased versions of Python and setuptools. In addition, these two file\ntypes both use the same C library (zlib) which is also required for\nbdist_wheel and bdist_egg. The two wrinkles with deciding between\n.tar.gz and .zip is that while on POSIX operating systems .tar.gz is the\ndefault, on Windows .zip is the default and the bdist_wheel format also\nuses zip.\n\nInstead of trying to standardize on either .tar.gz or .zip, this PEP\nproposes that we allow either .tar.gz or .zip for sdists.\n\nLimiting number of sdists per release\n\nA sdist on PyPI should be a single source of truth for a particular\nrelease of software. However, currently PyPI allows you to upload one\nsdist for each of the sdist file extensions it allows. 
Currently this\nallows something like 10 different sdists for a project, but even with\nthis PEP it allows two different sources of truth for a single version.\nHaving multiple sdists oftentimes can account for strange bugs that only\nexpose themselves based on which sdist that the person used.\n\nTo resolve this, this PEP proposes to allow one, and only one, sdist per\nrelease of a project.\n\nRemoval Process\n\nThis PEP does NOT propose removing any existing files from PyPI, only\ndisallowing new ones from being uploaded. This restriction will be\nphased in on a per-project basis to allow projects to adjust to the new\nrestrictions where applicable.\n\nFirst, any existing projects will be flagged to allow legacy file types\nto be uploaded, and any project without that flag (i.e. new projects)\nwill not be able to upload anything but sdist with a .tar.gz or .zip\nextension, bdist_wheel, and bdist_egg. Then, any existing projects that\nhave never uploaded a file that requires the legacy file type flag will\nhave that flag removed, also making them fall under the new\nrestrictions. Finally, an email will be generated to the maintainers of\nall projects still given the legacy flag, which will inform them of the\nupcoming new restrictions on uploads and tell them that these\nrestrictions will be applied to future uploads to their projects\nstarting in 1 month. Finally, after 1 month all projects will have the\nlegacy file type flag removed, and support for uploading these types of\nfiles will cease to exist on PyPI.\n\nThis plan should provide minimal disruption since it does not remove any\nexisting files, and the types of files it does prevent from being\nuploaded are either not particularly useful (or used) types of files or\nthey can continue to upload a similar type of file with a slight change\nto their process.\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.683696"},"created":{"kind":"timestamp","value":"2016-08-23T00:00:00","string":"2016-08-23T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0527/\",\n \"authors\": [\n \"Donald Stufft\"\n ],\n \"pep_number\": \"0527\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":24,"cells":{"id":{"kind":"string","value":"0325"},"text":{"kind":"string","value":"PEP: 325 Title: Resource-Release Support for Generators Version:\n$Revision$ Last-Modified: $Date$ Author: Samuele Pedroni\n Status: Rejected Type: Standards Track\nContent-Type: text/x-rst Created: 25-Aug-2003 Python-Version: 2.4\nPost-History:\n\nAbstract\n\nGenerators allow for natural coding and abstraction of traversal over\ndata. Currently if external resources needing proper timely release are\ninvolved, generators are unfortunately not adequate. The typical idiom\nfor timely release is not supported, a yield statement is not allowed in\nthe try clause of a try-finally statement inside a generator. 
The\nfinally clause execution can be neither guaranteed nor enforced.\n\nThis PEP proposes that the built-in generator type implement a close\nmethod and destruction semantics, such that the restriction on yield\nplacement can be lifted, expanding the applicability of generators.\n\nPronouncement\n\nRejected in favor of PEP 342 which includes substantially all of the\nrequested behavior in a more refined form.\n\nRationale\n\nPython generators allow for natural coding of many data traversal\nscenarios. Their instantiation produces iterators, i.e. first-class\nobjects abstracting traversal (with all the advantages of first-\nclassness). In this respect they match in power and offer some\nadvantages over the approach using iterator methods taking a\n(smalltalkish) block. On the other hand, given current limitations (no\nyield allowed in a try clause of a try-finally inside a generator) the\nlatter approach seems better suited to encapsulating not only traversal\nbut also exception handling and proper resource acquisition and release.\n\nLet's consider an example (for simplicity, files in read-mode are used):\n\n def all_lines(index_path):\n for path in file(index_path, \"r\"):\n for line in file(path.strip(), \"r\"):\n yield line\n\nthis is short and to the point, but the try-finally for timely closing\nof the files cannot be added. (While instead of a path, a file, whose\nclosing then would be responsibility of the caller, could be passed in\nas argument, the same is not applicable for the files opened depending\non the contents of the index).\n\nIf we want timely release, we have to sacrifice the simplicity and\ndirectness of the generator-only approach: (e.g.) :\n\n class AllLines:\n\n def __init__(self, index_path):\n self.index_path = index_path\n self.index = None\n self.document = None\n\n def __iter__(self):\n self.index = file(self.index_path, \"r\")\n for path in self.index:\n self.document = file(path.strip(), \"r\")\n for line in self.document:\n yield line\n self.document.close()\n self.document = None\n\n def close(self):\n if self.index:\n self.index.close()\n if self.document:\n self.document.close()\n\nto be used as:\n\n all_lines = AllLines(\"index.txt\")\n try:\n for line in all_lines:\n ...\n finally:\n all_lines.close()\n\nThe more convoluted solution implementing timely release, seems to offer\na precious hint. What we have done is encapsulate our traversal in an\nobject (iterator) with a close method.\n\nThis PEP proposes that generators should grow such a close method with\nsuch semantics that the example could be rewritten as:\n\n # Today this is not valid Python: yield is not allowed between\n # try and finally, and generator type instances support no\n # close method.\n\n def all_lines(index_path):\n index = file(index_path, \"r\")\n try:\n for path in index:\n document = file(path.strip(), \"r\")\n try:\n for line in document:\n yield line\n finally:\n document.close()\n finally:\n index.close()\n\n all = all_lines(\"index.txt\")\n try:\n for line in all:\n ...\n finally:\n all.close() # close on generator\n\nCurrently PEP 255 disallows yield inside a try clause of a try-finally\nstatement, because the execution of the finally clause cannot be\nguaranteed as required by try-finally semantics.\n\nThe semantics of the proposed close method should be such that while the\nfinally clause execution still cannot be guaranteed, it can be enforced\nwhen required. 
Specifically, the close method behavior should trigger\nthe execution of the finally clauses inside the generator, either by\nforcing a return in the generator frame or by throwing an exception in\nit. In situations requiring timely resource release, close could then be\nexplicitly invoked.\n\nThe semantics of generator destruction on the other hand should be\nextended in order to implement a best-effort policy for the general\ncase. Specifically, destruction should invoke close(). The best-effort\nlimitation comes from the fact that the destructor's execution is not\nguaranteed in the first place.\n\nThis seems to be a reasonable compromise, the resulting global behavior\nbeing similar to that of files and closing.\n\nPossible Semantics\n\nThe built-in generator type should have a close method implemented,\nwhich can then be invoked as:\n\n gen.close()\n\nwhere gen is an instance of the built-in generator type. Generator\ndestruction should also invoke close method behavior.\n\nIf a generator is already terminated, close should be a no-op.\n\nOtherwise, there are two alternative solutions, Return or Exception\nSemantics:\n\nA - Return Semantics: The generator should be resumed, generator\nexecution should continue as if the instruction at the re-entry point is\na return. Consequently, finally clauses surrounding the re-entry point\nwould be executed, in the case of a then allowed try-yield-finally\npattern.\n\nIssues: is it important to be able to distinguish forced termination by\nclose, normal termination, exception propagation from generator or\ngenerator-called code? In the normal case it seems not, finally clauses\nshould be there to work the same in all these cases, still this\nsemantics could make such a distinction hard.\n\nExcept-clauses, like by a normal return, are not executed, such clauses\nin legacy generators expect to be executed for exceptions raised by the\ngenerator or by code called from it. Not executing them in the close\ncase seems correct.\n\nB - Exception Semantics: The generator should be resumed and execution\nshould continue as if a special-purpose exception (e.g. CloseGenerator)\nhas been raised at re-entry point. Close implementation should consume\nand not propagate further this exception.\n\nIssues: should StopIteration be reused for this purpose? Probably not.\nWe would like close to be a harmless operation for legacy generators,\nwhich could contain code catching StopIteration to deal with other\ngenerators/iterators.\n\nIn general, with exception semantics, it is unclear what to do if the\ngenerator does not terminate or we do not receive the special exception\npropagated back. Other different exceptions should probably be\npropagated, but consider this possible legacy generator code:\n\n try:\n ...\n yield ...\n ...\n except: # or except Exception:, etc\n raise Exception(\"boom\")\n\nIf close is invoked with the generator suspended after the yield, the\nexcept clause would catch our special purpose exception, so we would get\na different exception propagated back, which in this case ought to be\nreasonably consumed and ignored but in general should be propagated, but\nseparating these scenarios seems hard.\n\nThe exception approach has the advantage to let the generator\ndistinguish between termination cases and have more control. 
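As a rough sketch of the exception-based variant, the intended
behaviour can be approximated from outside the generator.
CloseGenerator is the hypothetical special-purpose exception discussed
above, and the generator throw() method used here only became
available later (via PEP 342), so this is purely illustrative rather
than part of the proposal:

    class CloseGenerator(Exception):
        """Hypothetical special-purpose exception from this proposal."""

    def close_generator(gen):
        """Approximate the proposed close() semantics for a generator."""
        try:
            # Resume the generator with the special exception so that
            # any pending finally clauses are executed.
            gen.throw(CloseGenerator)
        except (CloseGenerator, StopIteration):
            pass   # expected: the generator terminated
        else:
            # The generator yielded a value instead of terminating.
            raise RuntimeError("generator ignored CloseGenerator")

For a generator suspended inside a try-finally, this would run the
pending finally clauses; in the actual proposal the equivalent logic
would live inside the built-in generator type's own close method.
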
On the\nother hand, clear-cut semantics seem harder to define.\n\nRemarks\n\nIf this proposal is accepted, it should become common practice to\ndocument whether a generator acquires resources, so that its close\nmethod ought to be called. If a generator is no longer used, calling\nclose should be harmless.\n\nOn the other hand, in the typical scenario the code that instantiated\nthe generator should call close if required by it. Generic code dealing\nwith iterators/generators instantiated elsewhere should typically not be\nlittered with close calls.\n\nThe rare case of code that has acquired ownership of and need to\nproperly deal with all of iterators, generators and generators acquiring\nresources that need timely release, is easily solved:\n\n if hasattr(iterator, 'close'):\n iterator.close()\n\nOpen Issues\n\nDefinitive semantics ought to be chosen. Currently Guido favors\nException Semantics. If the generator yields a value instead of\nterminating, or propagating back the special exception, a special\nexception should be raised again on the generator side.\n\nIt is still unclear whether spuriously converted special exceptions (as\ndiscussed in Possible Semantics) are a problem and what to do about\nthem.\n\nImplementation issues should be explored.\n\nAlternative Ideas\n\nThe idea that the yield placement limitation should be removed and that\ngenerator destruction should trigger execution of finally clauses has\nbeen proposed more than once. Alone it cannot guarantee that timely\nrelease of resources acquired by a generator can be enforced.\n\nPEP 288 proposes a more general solution, allowing custom exception\npassing to generators. The proposal in this PEP addresses more directly\nthe problem of resource release. Were PEP 288 implemented, Exceptions\nSemantics for close could be layered on top of it, on the other hand PEP\n288 should make a separate case for the more general functionality.\n\nCopyright\n\nThis document has been placed in the public domain."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.692626"},"created":{"kind":"timestamp","value":"2003-08-25T00:00:00","string":"2003-08-25T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0325/\",\n \"authors\": [\n \"Samuele Pedroni\"\n ],\n \"pep_number\": \"0325\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":25,"cells":{"id":{"kind":"string","value":"0360"},"text":{"kind":"string","value":"PEP: 360 Title: Externally Maintained Packages Version: $Revision$\nLast-Modified: $Date$ Author: Brett Cannon Status:\nFinal Type: Process Content-Type: text/x-rst Created: 30-May-2006\nPost-History:\n\nWarning\n\nNo new modules are to be added to this PEP. It has been deemed dangerous\nto codify external maintenance of any code checked into Python's code\nrepository. Code contributors should expect Python's development\nmethodology to be used for any and all code checked into Python's code\nrepository.\n\nAbstract\n\nThere are many great pieces of Python software developed outside of the\nPython standard library (a.k.a., the \"stdlib\"). Sometimes it makes sense\nto incorporate these externally maintained packages into the stdlib in\norder to fill a gap in the tools provided by Python.\n\nBut by having the packages maintained externally it means Python's\ndevelopers do not have direct control over the packages' evolution and\nmaintenance. 
Some package developers prefer to have bug reports and\npatches go through them first instead of being directly applied to\nPython's repository.\n\nThis PEP is meant to record details of packages in the stdlib that are\nmaintained outside of Python's repository. Specifically, it is meant to\nkeep track of any specific maintenance needs for each package. It should\nbe mentioned that changes needed in order to fix bugs and keep the code\nrunning on all of Python's supported platforms will be done directly in\nPython's repository without worrying about going through the contact\ndeveloper. This is so that Python itself is not held up by a single bug\nand allows the whole process to scale as needed.\n\nIt also is meant to allow people to know which version of a package is\nreleased with which version of Python.\n\nExternally Maintained Packages\n\nThe section title is the name of the package as it is known outside of\nthe Python standard library. The \"standard library name\" is what the\npackage is named within Python. The \"contact person\" is the Python\ndeveloper in charge of maintaining the package. The \"synchronisation\nhistory\" lists what external version of the package was included in each\nversion of Python (if different from the previous Python release).\n\nElementTree\n\nWeb site\n\n http://effbot.org/zone/element-index.htm\n\nStandard library name\n\n xml.etree\n\nContact person\n\n Fredrik Lundh\n\nFredrik has ceded ElementTree maintenance to the core Python development\nteam[1].\n\nExpat XML parser\n\nWeb site\n\n http://www.libexpat.org/\n\nStandard library name\n\n N/A (this refers to the parser itself, and not the Python bindings)\n\nContact person\n\n None\n\nOptik\n\nWeb site\n\n http://optik.sourceforge.net/\n\nStandard library name\n\n optparse\n\nContact person\n\n Greg Ward\n\nExternal development seems to have ceased. For new applications,\noptparse itself has been largely superseded by argparse.\n\nwsgiref\n\nWeb site\n\n None\n\nStandard library name\n\n wsgiref\n\nContact Person\n\n Phillip J. 
Eby

This module is maintained in the standard library, but significant bug
reports and patches should pass through the Web-SIG mailing list [2] for
discussion.

References

Copyright

This document has been placed in the public domain.

[1] Fredrik's handing over of ElementTree
(https://mail.python.org/pipermail/python-dev/2012-February/116389.html)

[2] Web-SIG mailing list
(https://mail.python.org/mailman/listinfo/web-sig)

PEP: 451 Title: A ModuleSpec Type for the Import System Version:
$Revision$ Last-Modified: $Date$ Author: Eric Snow
BDFL-Delegate: Brett Cannon, Alyssa Coghlan Discussions-To:
import-sig@python.org Status: Final Type: Standards Track Content-Type:
text/x-rst Created: 08-Aug-2013 Python-Version: 3.4 Post-History:
08-Aug-2013, 28-Aug-2013, 18-Sep-2013, 24-Sep-2013, 04-Oct-2013
Resolution:
https://mail.python.org/pipermail/python-dev/2013-November/130104.html

Abstract

This PEP proposes to add a new class to importlib.machinery called
"ModuleSpec". It will provide all the import-related information used to
load a module and will be available without needing to load the module
first. Finders will directly provide a module's spec instead of a loader
(which they will continue to provide indirectly). The import machinery
will be adjusted to take advantage of module specs, including using them
to load modules.

Terms and Concepts

The changes in this proposal are an opportunity to make several existing
terms and concepts more clear, whereas currently they are
(unfortunately) ambiguous. New concepts are also introduced in this
proposal. Finally, it's worth explaining a few other existing terms with
which people may not be so familiar. For the sake of context, here is a
brief summary of all three groups of terms and concepts. A more detailed
explanation of the import system is found at [1].

name

In this proposal, a module's "name" refers to its fully-qualified name,
meaning the fully-qualified name of the module's parent (if any) joined
to the simple name of the module by a period.

finder

A "finder" is an object that identifies the loader that the import
system should use to load a module. Currently this is accomplished by
calling the finder's find_module() method, which returns the loader.

Finders are strictly responsible for providing the loader, which they do
through their find_module() method. The import system then uses that
loader to load the module.

loader

A "loader" is an object that is used to load a module during import.
Currently this is done by calling the loader's load_module() method.
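As a minimal illustration of this current protocol, a hypothetical finder/loader pair might look roughly like the following, with the boilerplate obligations enumerated below reduced to a bare minimum:

    import sys
    import types

    class NoisyFinder:
        """Hypothetical finder using the pre-spec protocol."""

        def find_module(self, fullname, path=None):
            if fullname == "noisy":
                return NoisyLoader()      # the finder's whole job: supply a loader
            return None

    class NoisyLoader:
        """Hypothetical loader: load_module() does all the work itself."""

        def load_module(self, fullname):
            # Reuse an existing module (reload case) or create and register one.
            module = sys.modules.setdefault(fullname, types.ModuleType(fullname))
            module.__loader__ = self                      # import-related attribute
            exec("GREETING = 'hello'", module.__dict__)   # execute the module body
            return module

    loader = NoisyFinder().find_module("noisy")
    module = loader.load_module("noisy")
    print(module.GREETING)                # hello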
A\nloader may also provide APIs for getting information about the modules\nit can load, as well as about data from sources associated with such a\nmodule.\n\nRight now loaders (via load_module()) are responsible for certain\nboilerplate, import-related operations. These are:\n\n1. Perform some (module-related) validation\n2. Create the module object\n3. Set import-related attributes on the module\n4. \"Register\" the module to sys.modules\n5. Exec the module\n6. Clean up in the event of failure while loading the module\n\nThis all takes place during the import system's call to\nLoader.load_module().\n\norigin\n\nThis is a new term and concept. The idea of it exists subtly in the\nimport system already, but this proposal makes the concept explicit.\n\n\"origin\" in an import context means the system (or resource within a\nsystem) from which a module originates. For the purposes of this\nproposal, \"origin\" is also a string which identifies such a resource or\nsystem. \"origin\" is applicable to all modules.\n\nFor example, the origin for built-in and frozen modules is the\ninterpreter itself. The import system already identifies this origin as\n\"built-in\" and \"frozen\", respectively. This is demonstrated in the\nfollowing module repr: \"\".\n\nIn fact, the module repr is already a relatively reliable, though\nimplicit, indicator of a module's origin. Other modules also indicate\ntheir origin through other means, as described in the entry for\n\"location\".\n\nIt is up to the loader to decide on how to interpret and use a module's\norigin, if at all.\n\nlocation\n\nThis is a new term. However the concept already exists clearly in the\nimport system, as associated with the __file__ and __path__ attributes\nof modules, as well as the name/term \"path\" elsewhere.\n\nA \"location\" is a resource or \"place\", rather than a system at large,\nfrom which a module is loaded. It qualifies as an \"origin\". Examples of\nlocations include filesystem paths and URLs. A location is identified by\nthe name of the resource, but may not necessarily identify the system to\nwhich the resource pertains. In such cases the loader would have to\nidentify the system itself.\n\nIn contrast to other kinds of module origin, a location cannot be\ninferred by the loader just by the module name. Instead, the loader must\nbe provided with a string to identify the location, usually by the\nfinder that generates the loader. The loader then uses this information\nto locate the resource from which it will load the module. In theory you\ncould load the module at a given location under various names.\n\nThe most common example of locations in the import system are the files\nfrom which source and extension modules are loaded. For these modules\nthe location is identified by the string in the __file__ attribute.\nAlthough __file__ isn't particularly accurate for some modules (e.g.\nzipped), it is currently the only way that the import system indicates\nthat a module has a location.\n\nA module that has a location may be called \"locatable\".\n\ncache\n\nThe import system stores compiled modules in the __pycache__ directory\nas an optimization. This module cache that we use today was provided by\nPEP 3147. For this proposal, the relevant API for module caching is the\n__cache__ attribute of modules and the cache_from_source() function in\nimportlib.util. Loaders are responsible for putting modules into the\ncache (and loading out of the cache). Currently the cache is only used\nfor compiled source modules. 
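For instance, the mapping between a source file and its cached, compiled form can be inspected with the importlib.util function named above (the path here is hypothetical, and the cache tag in the result depends on the interpreter; source_from_cache() is the companion function for the reverse direction):

    import importlib.util

    # Map a source path to its __pycache__ location (PEP 3147 layout).
    src = "example/widgets.py"
    cached = importlib.util.cache_from_source(src)
    print(cached)   # e.g. example/__pycache__/widgets.cpython-312.pyc

    # The reverse mapping is also available:
    print(importlib.util.source_from_cache(cached))   # example/widgets.py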
However, loaders may take advantage of the\nmodule cache for other kinds of modules.\n\npackage\n\nThe concept does not change, nor does the term. However, the distinction\nbetween modules and packages is mostly superficial. Packages are\nmodules. They simply have a __path__ attribute and import may add\nattributes bound to submodules. The typically perceived difference is a\nsource of confusion. This proposal explicitly de-emphasizes the\ndistinction between packages and modules where it makes sense to do so.\n\nMotivation\n\nThe import system has evolved over the lifetime of Python. In late 2002\nPEP 302 introduced standardized import hooks via finders and loaders and\nsys.meta_path. The importlib module, introduced with Python 3.1, now\nexposes a pure Python implementation of the APIs described by PEP 302,\nas well as of the full import system. It is now much easier to\nunderstand and extend the import system. While a benefit to the Python\ncommunity, this greater accessibility also presents a challenge.\n\nAs more developers come to understand and customize the import system,\nany weaknesses in the finder and loader APIs will be more impactful. So\nthe sooner we can address any such weaknesses the import system, the\nbetter...and there are a couple we hope to take care of with this\nproposal.\n\nFirstly, any time the import system needs to save information about a\nmodule we end up with more attributes on module objects that are\ngenerally only meaningful to the import system. It would be nice to have\na per-module namespace in which to put future import-related information\nand to pass around within the import system. Secondly, there's an API\nvoid between finders and loaders that causes undue complexity when\nencountered. The PEP 420 (namespace packages) implementation had to work\naround this. The complexity surfaced again during recent efforts on a\nseparate proposal.[2]\n\nThe finder and loader sections above detail current responsibility of\nboth. Notably, loaders are not required to provide any of the\nfunctionality of their load_module() method through other methods. Thus,\nthough the import-related information about a module is likely available\nwithout loading the module, it is not otherwise exposed.\n\nFurthermore, the requirements associated with load_module() are common\nto all loaders and mostly are implemented in exactly the same way. This\nmeans every loader has to duplicate the same boilerplate code.\nimportlib.util provides some tools that help with this, but it would be\nmore helpful if the import system simply took charge of these\nresponsibilities. The trouble is that this would limit the degree of\ncustomization that load_module() could easily continue to facilitate.\n\nMore importantly, While a finder could provide the information that the\nloader's load_module() would need, it currently has no consistent way to\nget it to the loader. This is a gap between finders and loaders which\nthis proposal aims to fill.\n\nFinally, when the import system calls a finder's find_module(), the\nfinder makes use of a variety of information about the module that is\nuseful outside the context of the method. Currently the options are\nlimited for persisting that per-module information past the method call,\nsince it only returns the loader. Popular options for this limitation\nare to store the information in a module-to-info mapping somewhere on\nthe finder itself, or store it on the loader.\n\nUnfortunately, loaders are not required to be module-specific. 
On top of\nthat, some of the useful information finders could provide is common to\nall finders, so ideally the import system could take care of those\ndetails. This is the same gap as before between finders and loaders.\n\nAs an example of complexity attributable to this flaw, the\nimplementation of namespace packages in Python 3.3 (see PEP 420) added\nFileFinder.find_loader() because there was no good way for find_module()\nto provide the namespace search locations.\n\nThe answer to this gap is a ModuleSpec object that contains the\nper-module information and takes care of the boilerplate functionality\ninvolved with loading the module.\n\nSpecification\n\nThe goal is to address the gap between finders and loaders while\nchanging as little of their semantics as possible. Though some\nfunctionality and information is moved to the new ModuleSpec type, their\nbehavior should remain the same. However, for the sake of clarity the\nfinder and loader semantics will be explicitly identified.\n\nHere is a high-level summary of the changes described by this PEP. More\ndetail is available in later sections.\n\nimportlib.machinery.ModuleSpec (new)\n\nAn encapsulation of a module's import-system-related state during\nimport. See the ModuleSpec section below for a more detailed\ndescription.\n\n- ModuleSpec(name, loader, *, origin=None, loader_state=None,\n is_package=None)\n\nAttributes:\n\n- name - a string for the fully-qualified name of the module.\n- loader - the loader to use for loading.\n- origin - the name of the place from which the module is loaded, e.g.\n \"builtin\" for built-in modules and the filename for modules loaded\n from source.\n- submodule_search_locations - list of strings for where to find\n submodules, if a package (None otherwise).\n- loader_state - a container of extra module-specific data for use\n during loading.\n- cached (property) - a string for where the compiled module should be\n stored.\n- parent (RO-property) - the fully-qualified name of the package to\n which the module belongs as a submodule (or None).\n- has_location (RO-property) - a flag indicating whether or not the\n module's \"origin\" attribute refers to a location.\n\nimportlib.util Additions\n\nThese are ModuleSpec factory functions, meant as a convenience for\nfinders. See the Factory Functions section below for more detail.\n\n- spec_from_file_location(name, location, *, loader=None,\n submodule_search_locations=None)\n - build a spec from file-oriented information and loader APIs.\n- spec_from_loader(name, loader, *, origin=None, is_package=None)\n - build a spec with missing information filled in by using loader\n APIs.\n\nOther API Additions\n\n- importlib.find_spec(name, path=None, target=None) will work exactly\n the same as importlib.find_loader() (which it replaces), but return\n a spec instead of a loader.\n\nFor finders:\n\n- importlib.abc.MetaPathFinder.find_spec(name, path, target) and\n importlib.abc.PathEntryFinder.find_spec(name, target) will return a\n module spec to use during import.\n\nFor loaders:\n\n- importlib.abc.Loader.exec_module(module) will execute a module in\n its own namespace. 
It replaces importlib.abc.Loader.load_module(),\n taking over its module execution functionality.\n- importlib.abc.Loader.create_module(spec) (optional) will return the\n module to use for loading.\n\nFor modules:\n\n- Module objects will have a new attribute: __spec__.\n\nAPI Changes\n\n- InspectLoader.is_package() will become optional.\n\nDeprecations\n\n- importlib.abc.MetaPathFinder.find_module()\n- importlib.abc.PathEntryFinder.find_module()\n- importlib.abc.PathEntryFinder.find_loader()\n- importlib.abc.Loader.load_module()\n- importlib.abc.Loader.module_repr()\n- importlib.util.set_package()\n- importlib.util.set_loader()\n- importlib.find_loader()\n\nRemovals\n\nThese were introduced prior to Python 3.4's release, so they can simply\nbe removed.\n\n- importlib.abc.Loader.init_module_attrs()\n- importlib.util.module_to_load()\n\nOther Changes\n\n- The import system implementation in importlib will be changed to\n make use of ModuleSpec.\n- importlib.reload() will make use of ModuleSpec.\n- A module's import-related attributes (other than __spec__) will no\n longer be used directly by the import system during that module's\n import. However, this does not impact use of those attributes (e.g.\n __path__) when loading other modules (e.g. submodules).\n- Import-related attributes should no longer be added to modules\n directly, except by the import system.\n- The module type's __repr__() will be a thin wrapper around a pure\n Python implementation which will leverage ModuleSpec.\n- The spec for the __main__ module will reflect the appropriate name\n and origin.\n\nBackward-Compatibility\n\n- If a finder does not define find_spec(), a spec is derived from the\n loader returned by find_module().\n- PathEntryFinder.find_loader() still takes priority over\n find_module().\n- Loader.load_module() is used if exec_module() is not defined.\n\nWhat Will not Change?\n\n- The syntax and semantics of the import statement.\n- Existing finders and loaders will continue to work normally.\n- The import-related module attributes will still be initialized with\n the same information.\n- Finders will still create loaders (now storing them in specs).\n- Loader.load_module(), if a module defines it, will have all the same\n requirements and may still be called directly.\n- Loaders will still be responsible for module data APIs.\n- importlib.reload() will still overwrite the import-related\n attributes.\n\nResponsibilities\n\nHere's a quick breakdown of where responsibilities lie after this PEP.\n\nfinders:\n\n- create/identify a loader that can load the module.\n- create the spec for the module.\n\nloaders:\n\n- create the module (optional).\n- execute the module.\n\nModuleSpec:\n\n- orchestrate module loading\n- boilerplate for module loading, including managing sys.modules and\n setting import-related attributes\n- create module if loader doesn't\n- call loader.exec_module(), passing in the module in which to exec\n- contain all the information the loader needs to exec the module\n- provide the repr for modules\n\nWhat Will Existing Finders and Loaders Have to Do Differently?\n\nImmediately? Nothing. The status quo will be deprecated, but will\ncontinue working. However, here are the things that the authors of\nfinders and loaders should change relative to this PEP:\n\n- Implement find_spec() on finders.\n- Implement exec_module() on loaders, if possible.\n\nThe ModuleSpec factory functions in importlib.util are intended to be\nhelpful for converting existing finders. 
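For instance, a finder that searches a single directory for .py files could implement find_spec() in just a few lines by delegating the rest to a factory function; the class and directory below are hypothetical stand-ins, and error handling is omitted:

    import importlib.util
    import os.path

    class DirectoryFinder:
        """Hypothetical finder for modules stored as .py files in one directory."""

        def __init__(self, directory):
            self.directory = directory

        def find_spec(self, name, path=None, target=None):
            filename = name.rpartition('.')[2] + '.py'
            location = os.path.join(self.directory, filename)
            if not os.path.exists(location):
                return None                       # not ours; let other finders try
            # The factory fills in loader, origin, cached, has_location, etc.
            return importlib.util.spec_from_file_location(name, location)

The corresponding loader-side conversion is usually just moving the body of load_module() into exec_module(module), since the surrounding boilerplate is now handled by the import machinery.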
spec_from_loader() and\nspec_from_file_location() are both straightforward utilities in this\nregard.\n\nFor existing loaders, exec_module() should be a relatively direct\nconversion from the non-boilerplate portion of load_module(). In some\nuncommon cases the loader should also implement create_module().\n\nModuleSpec Users\n\nModuleSpec objects have 3 distinct target audiences: Python itself,\nimport hooks, and normal Python users.\n\nPython will use specs in the import machinery, in interpreter startup,\nand in various standard library modules. Some modules are\nimport-oriented, like pkgutil, and others are not, like pickle and\npydoc. In all cases, the full ModuleSpec API will get used.\n\nImport hooks (finders and loaders) will make use of the spec in specific\nways. First of all, finders may use the spec factory functions in\nimportlib.util to create spec objects. They may also directly adjust the\nspec attributes after the spec is created. Secondly, the finder may bind\nadditional information to the spec (in finder_extras) for the loader to\nconsume during module creation/execution. Finally, loaders will make use\nof the attributes on a spec when creating and/or executing a module.\n\nPython users will be able to inspect a module's __spec__ to get\nimport-related information about the object. Generally, Python\napplications and interactive users will not be using the ModuleSpec\nfactory functions nor any the instance methods.\n\nHow Loading Will Work\n\nHere is an outline of what the import machinery does during loading,\nadjusted to take advantage of the module's spec and the new loader API:\n\n module = None\n if spec.loader is not None and hasattr(spec.loader, 'create_module'):\n module = spec.loader.create_module(spec)\n if module is None:\n module = ModuleType(spec.name)\n # The import-related module attributes get set here:\n _init_module_attrs(spec, module)\n\n if spec.loader is None and spec.submodule_search_locations is not None:\n # Namespace package\n sys.modules[spec.name] = module\n elif not hasattr(spec.loader, 'exec_module'):\n spec.loader.load_module(spec.name)\n # __loader__ and __package__ would be explicitly set here for\n # backwards-compatibility.\n else:\n sys.modules[spec.name] = module\n try:\n spec.loader.exec_module(module)\n except BaseException:\n try:\n del sys.modules[spec.name]\n except KeyError:\n pass\n raise\n module_to_return = sys.modules[spec.name]\n\nThese steps are exactly what Loader.load_module() is already expected to\ndo. Loaders will thus be simplified since they will only need to\nimplement exec_module().\n\nNote that we must return the module from sys.modules. During loading the\nmodule may have replaced itself in sys.modules. Since we don't have a\npost-import hook API to accommodate the use case, we have to deal with\nit. However, in the replacement case we do not worry about setting the\nimport-related module attributes on the object. 
The module writer is on\ntheir own if they are doing this.\n\nHow Reloading Will Work\n\nHere is the corresponding outline for reload():\n\n _RELOADING = {}\n\n def reload(module):\n try:\n name = module.__spec__.name\n except AttributeError:\n name = module.__name__\n spec = find_spec(name, target=module)\n\n if sys.modules.get(name) is not module:\n raise ImportError\n if spec in _RELOADING:\n return _RELOADING[name]\n _RELOADING[name] = module\n try:\n if spec.loader is None:\n # Namespace loader\n _init_module_attrs(spec, module)\n return module\n if spec.parent and spec.parent not in sys.modules:\n raise ImportError\n\n _init_module_attrs(spec, module)\n # Ignoring backwards-compatibility call to load_module()\n # for simplicity.\n spec.loader.exec_module(module)\n return sys.modules[name]\n finally:\n del _RELOADING[name]\n\nA key point here is the switch to Loader.exec_module() means that\nloaders will no longer have an easy way to know at execution time if it\nis a reload or not. Before this proposal, they could simply check to see\nif the module was already in sys.modules. Now, by the time exec_module()\nis called during load (not reload) the import machinery would already\nhave placed the module in sys.modules. This is part of the reason why\nfind_spec() has the \"target\" parameter.\n\nThe semantics of reload will remain essentially the same as they exist\nalready[3]. The impact of this PEP on some kinds of lazy loading modules\nwas a point of discussion.[4]\n\nModuleSpec\n\nAttributes\n\nEach of the following names is an attribute on ModuleSpec objects. A\nvalue of None indicates \"not set\". This contrasts with module objects\nwhere the attribute simply doesn't exist. Most of the attributes\ncorrespond to the import-related attributes of modules. Here is the\nmapping. The reverse of this mapping describes how the import machinery\nsets the module attributes right before calling exec_module().\n\n+----------------------------+----------------+\n| On ModuleSpec | On Modules |\n+============================+================+\n| name | __name__ |\n+----------------------------+----------------+\n| loader | __loader__ |\n+----------------------------+----------------+\n| parent | __package__ |\n+----------------------------+----------------+\n| origin | __file__* |\n+----------------------------+----------------+\n| cached | __cached__*,** |\n+----------------------------+----------------+\n| submodule_search_locations | __path__** |\n+----------------------------+----------------+\n| loader_state | - |\n+----------------------------+----------------+\n| has_location | - |\n+----------------------------+----------------+\n\n* Set on the module only if spec.has_location is true.\n** Set on the module only if the spec attribute is not None.\n\nWhile parent and has_location are read-only properties, the remaining\nattributes can be replaced after the module spec is created and even\nafter import is complete. This allows for unusual cases where directly\nmodifying the spec is the best option. However, typical use should not\ninvolve changing the state of a module's spec.\n\norigin\n\n\"origin\" is a string for the name of the place from which the module\noriginates. See origin above. Aside from the informational value, it is\nalso used in the module's repr. In the case of a spec where\n\"has_location\" is true, __file__ is set to the value of \"origin\". 
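To make the mapping concrete, inspecting __spec__ on a source-based stdlib module and on a built-in module shows the correspondence; exact paths and loader details will differ between installations:

    import email.message
    import sys

    spec = email.message.__spec__
    print(spec.name)                        # 'email.message'
    print(spec.parent)                      # 'email'
    print(spec.has_location, spec.origin)   # True, '/.../email/message.py'

    builtin_spec = sys.__spec__
    print(builtin_spec.origin)              # 'built-in'
    print(builtin_spec.has_location)        # False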
For\nbuilt-in modules \"origin\" would be set to \"built-in\".\n\nhas_location\n\nAs explained in the location section above, many modules are\n\"locatable\", meaning there is a corresponding resource from which the\nmodule will be loaded and that resource can be described by a string. In\ncontrast, non-locatable modules can't be loaded in this fashion, e.g.\nbuiltin modules and modules dynamically created in code. For these, the\nname is the only way to access them, so they have an \"origin\" but not a\n\"location\".\n\n\"has_location\" is true if the module is locatable. In that case the\nspec's origin is used as the location and __file__ is set to\nspec.origin. If additional location information is required (e.g.\nzipimport), that information may be stored in spec.loader_state.\n\n\"has_location\" may be implied from the existence of a load_data() method\non the loader.\n\nIncidentally, not all locatable modules will be cache-able, but most\nwill.\n\nsubmodule_search_locations\n\nThe list of location strings, typically directory paths, in which to\nsearch for submodules. If the module is a package this will be set to a\nlist (even an empty one). Otherwise it is None.\n\nThe name of the corresponding module attribute, __path__, is relatively\nambiguous. Instead of mirroring it, we use a more explicit attribute\nname that makes the purpose clear.\n\nloader_state\n\nA finder may set loader_state to any value to provide additional data\nfor the loader to use during loading. A value of None is the default and\nindicates that there is no additional data. Otherwise it can be set to\nany object, such as a dict, list, or types.SimpleNamespace, containing\nthe relevant extra information.\n\nFor example, zipimporter could use it to pass the zip archive name to\nthe loader directly, rather than needing to derive it from origin or\ncreate a custom loader for each find operation.\n\nloader_state is meant for use by the finder and corresponding loader. It\nis not guaranteed to be a stable resource for any other use.\n\nFactory Functions\n\nspec_from_file_location(name, location, *, loader=None,\nsubmodule_search_locations=None)\n\nBuild a spec from file-oriented information and loader APIs.\n\n- \"origin\" will be set to the location.\n- \"has_location\" will be set to True.\n- \"cached\" will be set to the result of calling cache_from_source().\n- \"origin\" can be deduced from loader.get_filename() (if \"location\" is\n not passed in.\n- \"loader\" can be deduced from suffix if the location is a filename.\n- \"submodule_search_locations\" can be deduced from loader.is_package()\n and from os.path.dirname(location) if location is a filename.\n\nspec_from_loader(name, loader, *, origin=None, is_package=None)\n\nBuild a spec with missing information filled in by using loader APIs.\n\n- \"has_location\" can be deduced from loader.get_data.\n- \"origin\" can be deduced from loader.get_filename().\n- \"submodule_search_locations\" can be deduced from loader.is_package()\n and from os.path.dirname(location) if location is a filename.\n\nBackward Compatibility\n\nModuleSpec doesn't have any. This would be a different story if\nFinder.find_module() were to return a module spec instead of loader. In\nthat case, specs would have to act like the loader that would have been\nreturned instead. Doing so would be relatively simple, but is an\nunnecessary complication. 
It was part of earlier versions of this PEP.\n\nSubclassing\n\nSubclasses of ModuleSpec are allowed, but should not be necessary.\nSimply setting loader_state or adding functionality to a custom finder\nor loader will likely be a better fit and should be tried first.\nHowever, as long as a subclass still fulfills the requirements of the\nimport system, objects of that type are completely fine as the return\nvalue of Finder.find_spec(). The same points apply to duck-typing.\n\nExisting Types\n\nModule Objects\n\nOther than adding __spec__, none of the import-related module attributes\nwill be changed or deprecated, though some of them could be; any such\ndeprecation can wait until Python 4.\n\nA module's spec will not be kept in sync with the corresponding\nimport-related attributes. Though they may differ, in practice they will\ntypically be the same.\n\nOne notable exception is that case where a module is run as a script by\nusing the -m flag. In that case module.__spec__.name will reflect the\nactual module name while module.__name__ will be __main__.\n\nA module's spec is not guaranteed to be identical between two modules\nwith the same name. Likewise there is no guarantee that successive calls\nto importlib.find_spec() will return the same object or even an\nequivalent object, though at least the latter is likely.\n\nFinders\n\nFinders are still responsible for identifying, and typically creating,\nthe loader that should be used to load a module. That loader will now be\nstored in the module spec returned by find_spec() rather than returned\ndirectly. As is currently the case without the PEP, if a loader would be\ncostly to create, that loader can be designed to defer the cost until\nlater.\n\nMetaPathFinder.find_spec(name, path=None, target=None)\n\nPathEntryFinder.find_spec(name, target=None)\n\nFinders must return ModuleSpec objects when find_spec() is called. This\nnew method replaces find_module() and find_loader() (in the\nPathEntryFinder case). If a loader does not have find_spec(),\nfind_module() and find_loader() are used instead, for\nbackward-compatibility.\n\nAdding yet another similar method to loaders is a case of practicality.\nfind_module() could be changed to return specs instead of loaders. This\nis tempting because the import APIs have suffered enough, especially\nconsidering PathEntryFinder.find_loader() was just added in Python 3.3.\nHowever, the extra complexity and a less-than-explicit method name\naren't worth it.\n\nThe \"target\" parameter of find_spec()\n\nA call to find_spec() may optionally include a \"target\" argument. This\nis the module object that will be used subsequently as the target of\nloading. During normal import (and by default) \"target\" is None, meaning\nthe target module has yet to be created. During reloading the module\npassed in to reload() is passed through to find_spec() as the target.\nThis argument allows the finder to build the module spec with more\ninformation than is otherwise available. Doing so is particularly\nrelevant in identifying the loader to use.\n\nThrough find_spec() the finder will always identify the loader it will\nreturn in the spec (or return None). At the point the loader is\nidentified, the finder should also decide whether or not the loader\nsupports loading into the target module, in the case that \"target\" is\npassed in. 
This decision may entail consulting with the loader.\n\nIf the finder determines that the loader does not support loading into\nthe target module, it should either find another loader or raise\nImportError (completely stopping import of the module). This\ndetermination is especially important during reload since, as noted in\nHow Reloading Will Work, loaders will no longer be able to trivially\nidentify a reload situation on their own.\n\nTwo alternatives were presented to the \"target\" parameter:\nLoader.supports_reload() and adding \"target\" to Loader.exec_module()\ninstead of find_spec(). supports_reload() was the initial approach to\nthe reload situation.[5] However, there was some opposition to the\nloader-specific, reload-centric approach. [6]\n\nAs to \"target\" on exec_module(), the loader may need other information\nfrom the target module (or spec) during reload, more than just \"does\nthis loader support reloading this module\", that is no longer available\nwith the move away from load_module(). A proposal on the table was to\nadd something like \"target\" to exec_module().[7] However, putting\n\"target\" on find_spec() instead is more in line with the goals of this\nPEP. Furthermore, it obviates the need for supports_reload().\n\nNamespace Packages\n\nCurrently a path entry finder may return (None, portions) from\nfind_loader() to indicate it found part of a possible namespace package.\nTo achieve the same effect, find_spec() must return a spec with \"loader\"\nset to None (a.k.a. not set) and with submodule_search_locations set to\nthe same portions as would have been provided by find_loader(). It's up\nto PathFinder how to handle such specs.\n\nLoaders\n\nLoader.exec_module(module)\n\nLoaders will have a new method, exec_module(). Its only job is to \"exec\"\nthe module and consequently populate the module's namespace. It is not\nresponsible for creating or preparing the module object, nor for any\ncleanup afterward. It has no return value. exec_module() will be used\nduring both loading and reloading.\n\nexec_module() should properly handle the case where it is called more\nthan once. For some kinds of modules this may mean raising ImportError\nevery time after the first time the method is called. This is\nparticularly relevant for reloading, where some kinds of modules do not\nsupport in-place reloading.\n\nLoader.create_module(spec)\n\nLoaders may also implement create_module() that will return a new module\nto exec. It may return None to indicate that the default module creation\ncode should be used. One use case, though atypical, for create_module()\nis to provide a module that is a subclass of the builtin module type.\nMost loaders will not need to implement create_module(),\n\ncreate_module() should properly handle the case where it is called more\nthan once for the same spec/module. This may include returning None or\nraising ImportError.\n\nNote\n\nexec_module() and create_module() should not set any import-related\nmodule attributes. The fact that load_module() does is a design flaw\nthat this proposal aims to correct.\n\nOther changes:\n\nPEP 420 introduced the optional module_repr() loader method to limit the\namount of special-casing in the module type's __repr__(). Since this\nmethod is part of ModuleSpec, it will be deprecated on loaders. 
However,\nif it exists on a loader it will be used exclusively.\n\nLoader.init_module_attr() method, added prior to Python 3.4's release,\nwill be removed in favor of the same method on ModuleSpec.\n\nHowever, InspectLoader.is_package() will not be deprecated even though\nthe same information is found on ModuleSpec. ModuleSpec can use it to\npopulate its own is_package if that information is not otherwise\navailable. Still, it will be made optional.\n\nIn addition to executing a module during loading, loaders will still be\ndirectly responsible for providing APIs concerning module-related data.\n\nOther Changes\n\n- The various finders and loaders provided by importlib will be\n updated to comply with this proposal.\n- Any other implementations of or dependencies on the import-related\n APIs (particularly finders and loaders) in the stdlib will be\n likewise adjusted to this PEP. While they should continue to work,\n any such changes that get missed should be considered bugs for the\n Python 3.4.x series.\n- The spec for the __main__ module will reflect how the interpreter\n was started. For instance, with -m the spec's name will be that of\n the module used, while __main__.__name__ will still be \"__main__\".\n- We will add importlib.find_spec() to mirror importlib.find_loader()\n (which becomes deprecated).\n- importlib.reload() is changed to use ModuleSpec.\n- importlib.reload() will now make use of the per-module import lock.\n\nReference Implementation\n\nA reference implementation is available at\nhttp://bugs.python.org/issue18864.\n\nImplementation Notes\n\n* The implementation of this PEP needs to be cognizant of its impact on\npkgutil (and setuptools). pkgutil has some generic function-based\nextensions to PEP 302 which may break if importlib starts wrapping\nloaders without the tools' knowledge.\n\n* Other modules to look at: runpy (and pythonrun.c), pickle, pydoc,\ninspect.\n\nFor instance, pickle should be updated in the __main__ case to look at\nmodule.__spec__.name.\n\nRejected Additions to the PEP\n\nThere were a few proposed additions to this proposal that did not fit\nwell enough into its scope.\n\nThere is no \"PathModuleSpec\" subclass of ModuleSpec that separates out\nhas_location, cached, and submodule_search_locations. While that might\nmake the separation cleaner, module objects don't have that distinction.\nModuleSpec will support both cases equally well.\n\nWhile \"ModuleSpec.is_package\" would be a simple additional attribute\n(aliasing self.submodule_search_locations is not None), it perpetuates\nthe artificial (and mostly erroneous) distinction between modules and\npackages.\n\nThe module spec Factory Functions could be classmethods on ModuleSpec.\nHowever that would expose them on all modules via __spec__, which has\nthe potential to unnecessarily confuse non-advanced Python users. The\nfactory functions have a specific use case, to support finder authors.\nSee ModuleSpec Users.\n\nLikewise, several other methods could be added to ModuleSpec that expose\nthe specific uses of module specs by the import machinery:\n\n- create() - a wrapper around Loader.create_module().\n- exec(module) - a wrapper around Loader.exec_module().\n- load() - an analogue to the deprecated Loader.load_module().\n\nAs with the factory functions, exposing these methods via\nmodule.__spec__ is less than desirable. They would end up being an\nattractive nuisance, even if only exposed as \"private\" attributes (as\nthey were in previous versions of this PEP). 
If someone finds a need for
these methods later, we can expose them via an appropriate API (separate
from ModuleSpec) at that point, perhaps relative to PEP 406 (import
engine).

Conceivably, the load() method could optionally take a list of modules
with which to interact instead of sys.modules. Also, load() could be
leveraged to implement multi-version imports. Both are interesting
ideas, but definitely outside the scope of this proposal.

Others left out:

- Add ModuleSpec.submodules (RO-property) - returns possible
  submodules relative to the spec.
- Add ModuleSpec.loaded (RO-property) - the module in sys.modules, if
  any.
- Add ModuleSpec.data - a descriptor that wraps the data API of the
  spec's loader.
- Also see [8].

References

Copyright

This document has been placed in the public domain.

[1] http://docs.python.org/3/reference/import.html

[2] https://mail.python.org/pipermail/import-sig/2013-August/000658.html

[3] http://bugs.python.org/issue19413

[4] https://mail.python.org/pipermail/python-dev/2013-August/128129.html

[5] https://mail.python.org/pipermail/python-dev/2013-October/129913.html

[6] https://mail.python.org/pipermail/python-dev/2013-October/129971.html

[7] https://mail.python.org/pipermail/python-dev/2013-October/129933.html

[8] https://mail.python.org/pipermail/import-sig/2013-September/000735.html

PEP: 282 Title: A Logging System Author: Vinay Sajip, Trent Mick
Status: Final Type: Standards Track Content-Type: text/x-rst
Created: 04-Feb-2002 Python-Version: 2.3 Post-History:

Abstract

This PEP describes a proposed logging package for Python's standard
library.

Basically the system involves the user creating one or more logger
objects on which methods are called to log debugging notes, general
information, warnings, errors etc. Different logging 'levels' can be
used to distinguish important messages from less important ones.

A registry of named singleton logger objects is maintained so that

1) different logical logging streams (or 'channels') exist (say, one
   for 'zope.zodb' stuff and another for 'mywebsite'-specific stuff)
2) one does not have to pass logger object references around.

The system is configurable at runtime. This configuration mechanism
allows one to tune the level and type of logging done while not touching
the application itself.

Motivation

If a single logging mechanism is enshrined in the standard library, 1)
logging is more likely to be done 'well', and 2) multiple libraries will
be able to be integrated into larger applications which can be logged
reasonably coherently.

Influences

This proposal was put together after having studied the following
logging packages:

- java.util.logging in JDK 1.4 (a.k.a.
JSR047)[1]\n- log4j[2]\n- the Syslog package from the Protomatter project[3]\n- MAL's mx.Log package[4]\n\nSimple Example\n\nThis shows a very simple example of how the logging package can be used\nto generate simple logging output on stderr.\n\n --------- mymodule.py -------------------------------\n import logging\n log = logging.getLogger(\"MyModule\")\n\n def doIt():\n log.debug(\"Doin' stuff...\")\n #do stuff...\n raise TypeError, \"Bogus type error for testing\"\n -----------------------------------------------------\n\n --------- myapp.py ----------------------------------\n import mymodule, logging\n\n logging.basicConfig()\n\n log = logging.getLogger(\"MyApp\")\n\n log.info(\"Starting my app\")\n try:\n mymodule.doIt()\n except Exception, e:\n log.exception(\"There was a problem.\")\n log.info(\"Ending my app\")\n -----------------------------------------------------\n\n $ python myapp.py\n\n INFO:MyApp: Starting my app\n DEBUG:MyModule: Doin' stuff...\n ERROR:MyApp: There was a problem.\n Traceback (most recent call last):\n File \"myapp.py\", line 9, in ?\n mymodule.doIt()\n File \"mymodule.py\", line 7, in doIt\n raise TypeError, \"Bogus type error for testing\"\n TypeError: Bogus type error for testing\n\n INFO:MyApp: Ending my app\n\nThe above example shows the default output format. All aspects of the\noutput format should be configurable, so that you could have output\nformatted like this:\n\n 2002-04-19 07:56:58,174 MyModule DEBUG - Doin' stuff...\n\n or just\n\n Doin' stuff...\n\nControl Flow\n\nApplications make logging calls on Logger objects. Loggers are organized\nin a hierarchical namespace and child Loggers inherit some logging\nproperties from their parents in the namespace.\n\nLogger names fit into a \"dotted name\" namespace, with dots (periods)\nindicating sub-namespaces. The namespace of logger objects therefore\ncorresponds to a single tree data structure.\n\n- \"\" is the root of the namespace\n- \"Zope\" would be a child node of the root\n- \"Zope.ZODB\" would be a child node of \"Zope\"\n\nThese Logger objects create LogRecord objects which are passed to\nHandler objects for output. Both Loggers and Handlers may use logging\nlevels and (optionally) Filters to decide if they are interested in a\nparticular LogRecord. When it is necessary to output a LogRecord\nexternally, a Handler can (optionally) use a Formatter to localize and\nformat the message before sending it to an I/O stream.\n\nEach Logger keeps track of a set of output Handlers. By default all\nLoggers also send their output to all Handlers of their ancestor\nLoggers. Loggers may, however, also be configured to ignore Handlers\nhigher up the tree.\n\nThe APIs are structured so that calls on the Logger APIs can be cheap\nwhen logging is disabled. If logging is disabled for a given log level,\nthen the Logger can make a cheap comparison test and return. If logging\nis enabled for a given log level, the Logger is still careful to\nminimize costs before passing the LogRecord into the Handlers. In\nparticular, localization and formatting (which are relatively expensive)\nare deferred until the Handler requests them.\n\nThe overall Logger hierarchy can also have a level associated with it,\nwhich takes precedence over the levels of individual Loggers. 
This is\ndone through a module-level function:\n\n def disable(lvl):\n \"\"\"\n Do not generate any LogRecords for requests with a severity less\n than 'lvl'.\n \"\"\"\n ...\n\nLevels\n\nThe logging levels, in increasing order of importance, are:\n\n- DEBUG\n- INFO\n- WARN\n- ERROR\n- CRITICAL\n\nThe term CRITICAL is used in preference to FATAL, which is used by\nlog4j. The levels are conceptually the same - that of a serious, or very\nserious, error. However, FATAL implies death, which in Python implies a\nraised and uncaught exception, traceback, and exit. Since the logging\nmodule does not enforce such an outcome from a FATAL-level log entry, it\nmakes sense to use CRITICAL in preference to FATAL.\n\nThese are just integer constants, to allow simple comparison of\nimportance. Experience has shown that too many levels can be confusing,\nas they lead to subjective interpretation of which level should be\napplied to any particular log request.\n\nAlthough the above levels are strongly recommended, the logging system\nshould not be prescriptive. Users may define their own levels, as well\nas the textual representation of any levels. User defined levels must,\nhowever, obey the constraints that they are all positive integers and\nthat they increase in order of increasing severity.\n\nUser-defined logging levels are supported through two module-level\nfunctions:\n\n def getLevelName(lvl):\n \"\"\"Return the text for level 'lvl'.\"\"\"\n ...\n\n def addLevelName(lvl, lvlName):\n \"\"\"\n Add the level 'lvl' with associated text 'levelName', or\n set the textual representation of existing level 'lvl' to be\n 'lvlName'.\"\"\"\n ...\n\nLoggers\n\nEach Logger object keeps track of a log level (or threshold) that it is\ninterested in, and discards log requests below that level.\n\nA Manager class instance maintains the hierarchical namespace of named\nLogger objects. Generations are denoted with dot-separated names: Logger\n\"foo\" is the parent of Loggers \"foo.bar\" and \"foo.baz\".\n\nThe Manager class instance is a singleton and is not directly exposed to\nusers, who interact with it using various module-level functions.\n\nThe general logging method is:\n\n class Logger:\n def log(self, lvl, msg, *args, **kwargs):\n \"\"\"Log 'str(msg) % args' at logging level 'lvl'.\"\"\"\n ...\n\nHowever, convenience functions are defined for each logging level:\n\n class Logger:\n def debug(self, msg, *args, **kwargs): ...\n def info(self, msg, *args, **kwargs): ...\n def warn(self, msg, *args, **kwargs): ...\n def error(self, msg, *args, **kwargs): ...\n def critical(self, msg, *args, **kwargs): ...\n\nOnly one keyword argument is recognized at present - \"exc_info\". If\ntrue, the caller wants exception information to be provided in the\nlogging output. This mechanism is only needed if exception information\nneeds to be provided at any logging level. In the more common case,\nwhere exception information needs to be added to the log only when\nerrors occur, i.e. at the ERROR level, then another convenience method\nis provided:\n\n class Logger:\n def exception(self, msg, *args): ...\n\nThis should only be called in the context of an exception handler, and\nis the preferred way of indicating a desire for exception information in\nthe log. 
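For example, using the interfaces described here (the logger name and failing function are arbitrary):

    import logging

    logging.basicConfig()
    log = logging.getLogger("MyApp")

    def average(values):
        try:
            return sum(values) / len(values)
        except Exception:
            # Logged at ERROR level, with the current traceback appended.
            log.exception("Could not compute average of %r", values)
            return None

    average([])   # logs ERROR with the ZeroDivisionError traceback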
The other convenience methods are intended to be called with\nexc_info only in the unusual situation where you might want to provide\nexception information in the context of an INFO message, for example.\n\nThe \"msg\" argument shown above will normally be a format string;\nhowever, it can be any object x for which str(x) returns the format\nstring. This facilitates, for example, the use of an object which\nfetches a locale- specific message for an internationalized/localized\napplication, perhaps using the standard gettext module. An outline\nexample:\n\n class Message:\n \"\"\"Represents a message\"\"\"\n def __init__(self, id):\n \"\"\"Initialize with the message ID\"\"\"\n\n def __str__(self):\n \"\"\"Return an appropriate localized message text\"\"\"\n\n ...\n\n logger.info(Message(\"abc\"), ...)\n\nGathering and formatting data for a log message may be expensive, and a\nwaste if the logger was going to discard the message anyway. To see if a\nrequest will be honoured by the logger, the isEnabledFor() method can be\nused:\n\n class Logger:\n def isEnabledFor(self, lvl):\n \"\"\"\n Return true if requests at level 'lvl' will NOT be\n discarded.\n \"\"\"\n ...\n\nso instead of this expensive and possibly wasteful DOM to XML\nconversion:\n\n ...\n hamletStr = hamletDom.toxml()\n log.info(hamletStr)\n ...\n\none can do this:\n\n if log.isEnabledFor(logging.INFO):\n hamletStr = hamletDom.toxml()\n log.info(hamletStr)\n\nWhen new loggers are created, they are initialized with a level which\nsignifies \"no level\". A level can be set explicitly using the setLevel()\nmethod:\n\n class Logger:\n def setLevel(self, lvl): ...\n\nIf a logger's level is not set, the system consults all its ancestors,\nwalking up the hierarchy until an explicitly set level is found. That is\nregarded as the \"effective level\" of the logger, and can be queried via\nthe getEffectiveLevel() method:\n\n def getEffectiveLevel(self): ...\n\nLoggers are never instantiated directly. Instead, a module-level\nfunction is used:\n\n def getLogger(name=None): ...\n\nIf no name is specified, the root logger is returned. Otherwise, if a\nlogger with that name exists, it is returned. If not, a new logger is\ninitialized and returned. Here, \"name\" is synonymous with \"channel\nname\".\n\nUsers can specify a custom subclass of Logger to be used by the system\nwhen instantiating new loggers:\n\n def setLoggerClass(klass): ...\n\nThe passed class should be a subclass of Logger, and its __init__ method\nshould call Logger.__init__.\n\nHandlers\n\nHandlers are responsible for doing something useful with a given\nLogRecord. The following core Handlers will be implemented:\n\n- StreamHandler: A handler for writing to a file-like object.\n- FileHandler: A handler for writing to a single file or set of\n rotating files.\n- SocketHandler: A handler for writing to remote TCP ports.\n- DatagramHandler: A handler for writing to UDP sockets, for low-cost\n logging. 
Jeff Bauer already had such a system[5].\n- MemoryHandler: A handler that buffers log records in memory until\n the buffer is full or a particular condition occurs [6].\n- SMTPHandler: A handler for sending to email addresses via SMTP.\n- SysLogHandler: A handler for writing to Unix syslog via UDP.\n- NTEventLogHandler: A handler for writing to event logs on Windows\n NT, 2000 and XP.\n- HTTPHandler: A handler for writing to a Web server with either GET\n or POST semantics.\n\nHandlers can also have levels set for them using the setLevel() method:\n\n def setLevel(self, lvl): ...\n\nThe FileHandler can be set up to create a rotating set of log files. In\nthis case, the file name passed to the constructor is taken as a \"base\"\nfile name. Additional file names for the rotation are created by\nappending .1, .2, etc. to the base file name, up to a maximum as\nspecified when rollover is requested. The setRollover method is used to\nspecify a maximum size for a log file and a maximum number of backup\nfiles in the rotation.\n\n def setRollover(maxBytes, backupCount): ...\n\nIf maxBytes is specified as zero, no rollover ever occurs and the log\nfile grows indefinitely. If a non-zero size is specified, when that size\nis about to be exceeded, rollover occurs. The rollover method ensures\nthat the base file name is always the most recent, .1 is the next most\nrecent, .2 the next most recent after that, and so on.\n\nThere are many additional handlers implemented in the test/example\nscripts provided with[7] - for example, XMLHandler and SOAPHandler.\n\nLogRecords\n\nA LogRecord acts as a receptacle for information about a logging event.\nIt is little more than a dictionary, though it does define a getMessage\nmethod which merges a message with optional runarguments.\n\nFormatters\n\nA Formatter is responsible for converting a LogRecord to a string\nrepresentation. A Handler may call its Formatter before writing a\nrecord. The following core Formatters will be implemented:\n\n- Formatter: Provide printf-like formatting, using the % operator.\n- BufferingFormatter: Provide formatting for multiple messages, with\n header and trailer formatting support.\n\nFormatters are associated with Handlers by calling setFormatter() on a\nhandler:\n\n def setFormatter(self, form): ...\n\nFormatters use the % operator to format the logging message. The format\nstring should contain %(name)x and the attribute dictionary of the\nLogRecord is used to obtain message-specific data. 
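As an illustration, the date-stamped output style shown in the Simple Example earlier could be produced by configuring a handler along these lines; the timestamp in the comment is only indicative:

    import logging

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s - %(message)s",
        "%Y-%m-%d %H:%M:%S"))       # date/time portion uses time.strftime codes

    log = logging.getLogger("MyModule")
    log.addHandler(handler)
    log.setLevel(logging.DEBUG)
    log.debug("Doin' stuff...")
    # 2002-04-19 07:56:58 MyModule DEBUG - Doin' stuff...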
The following\nattributes are provided:\n\n --------------------- -------------------------------------------------------------------------------------------------------------------------------------------------\n %(name)s Name of the logger (logging channel)\n %(levelno)s Numeric logging level for the message (DEBUG, INFO, WARN, ERROR, CRITICAL)\n %(levelname)s Text logging level for the message (\"DEBUG\", \"INFO\", \"WARN\", \"ERROR\", \"CRITICAL\")\n %(pathname)s Full pathname of the source file where the logging call was issued (if available)\n %(filename)s Filename portion of pathname\n %(module)s Module from which logging call was made\n %(lineno)d Source line number where the logging call was issued (if available)\n %(created)f Time when the LogRecord was created (time.time() return value)\n %(asctime)s Textual time when the LogRecord was created\n %(msecs)d Millisecond portion of the creation time\n %(relativeCreated)d Time in milliseconds when the LogRecord was created, relative to the time the logging module was loaded (typically at application startup time)\n %(thread)d Thread ID (if available)\n %(message)s The result of record.getMessage(), computed just as the record is emitted\n --------------------- -------------------------------------------------------------------------------------------------------------------------------------------------\n\nIf a formatter sees that the format string includes \"(asctime)s\", the\ncreation time is formatted into the LogRecord's asctime attribute. To\nallow flexibility in formatting dates, Formatters are initialized with a\nformat string for the message as a whole, and a separate format string\nfor date/time. The date/time format string should be in time.strftime\nformat. The default value for the message format is \"%(message)s\". The\ndefault date/time format is ISO8601.\n\nThe formatter uses a class attribute, \"converter\", to indicate how to\nconvert a time from seconds to a tuple. By default, the value of\n\"converter\" is \"time.localtime\". If needed, a different converter (e.g.\n\"time.gmtime\") can be set on an individual formatter instance, or the\nclass attribute changed to affect all formatter instances.\n\nFilters\n\nWhen level-based filtering is insufficient, a Filter can be called by a\nLogger or Handler to decide if a LogRecord should be output. Loggers and\nHandlers can have multiple filters installed, and any one of them can\nveto a LogRecord being output.\n\n class Filter:\n def filter(self, record):\n \"\"\"\n Return a value indicating true if the record is to be\n processed. Possibly modify the record, if deemed\n appropriate by the filter.\n \"\"\"\n\nThe default behaviour allows a Filter to be initialized with a Logger\nname. This will only allow through events which are generated using the\nnamed logger or any of its children. For example, a filter initialized\nwith \"A.B\" will allow events logged by loggers \"A.B\", \"A.B.C\",\n\"A.B.C.D\", \"A.B.D\" etc. but not \"A.BB\", \"B.A.B\" etc. If initialized with\nthe empty string, all events are passed by the Filter. This filter\nbehaviour is useful when it is desired to focus attention on one\nparticular area of an application; the focus can be changed simply by\nchanging a filter attached to the root logger.\n\nThere are many examples of Filters provided in[8].\n\nConfiguration\n\nThe main benefit of a logging system like this is that one can control\nhow much and what logging output one gets from an application without\nchanging that application's source code. 
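For example, an application can tune the output of a library it uses entirely from the outside; the channel names below reuse the hypothetical Zope examples from earlier:

    import logging

    # Assume some third-party code logs to the "Zope.ZODB" channel.
    logging.basicConfig()                                   # one StreamHandler on the root
    logging.getLogger("Zope").setLevel(logging.ERROR)       # quieten the whole subtree
    logging.getLogger("Zope.ZODB").setLevel(logging.DEBUG)  # ...but keep this area verbose

    # Focus the output on one area by filtering at the root handler:
    logging.getLogger().handlers[0].addFilter(logging.Filter("Zope.ZODB"))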
Therefore, although\nconfiguration can be performed through the logging API, it must also be\npossible to change the logging configuration without changing an\napplication at all. For long-running programs like Zope, it should be\npossible to change the logging configuration while the program is\nrunning.\n\nConfiguration includes the following:\n\n- What logging level a logger or handler should be interested in.\n- What handlers should be attached to which loggers.\n- What filters should be attached to which handlers and loggers.\n- Specifying attributes specific to certain handlers and filters.\n\nIn general each application will have its own requirements for how a\nuser may configure logging output. However, each application will\nspecify the required configuration to the logging system through a\nstandard mechanism.\n\nThe most simple configuration is that of a single handler, writing to\nstderr, attached to the root logger. This configuration is set up by\ncalling the basicConfig() function once the logging module has been\nimported.\n\n def basicConfig(): ...\n\nFor more sophisticated configurations, this PEP makes no specific\nproposals, for the following reasons:\n\n- A specific proposal may be seen as prescriptive.\n- Without the benefit of wide practical experience in the Python\n community, there is no way to know whether any given configuration\n approach is a good one. That practice can't really come until the\n logging module is used, and that means until after Python 2.3 has\n shipped.\n- There is a likelihood that different types of applications may\n require different configuration approaches, so that no \"one size\n fits all\".\n\nThe reference implementation[9] has a working configuration file format,\nimplemented for the purpose of proving the concept and suggesting one\npossible alternative. It may be that separate extension modules, not\npart of the core Python distribution, are created for logging\nconfiguration and log viewing, supplemental handlers and other features\nwhich are not of interest to the bulk of the community.\n\nThread Safety\n\nThe logging system should support thread-safe operation without any\nspecial action needing to be taken by its users.\n\nModule-Level Functions\n\nTo support use of the logging mechanism in short scripts and small\napplications, module-level functions debug(), info(), warn(), error(),\ncritical() and exception() are provided. These work in the same way as\nthe correspondingly named methods of Logger - in fact they delegate to\nthe corresponding methods on the root logger. A further convenience\nprovided by these functions is that if no configuration has been done,\nbasicConfig() is automatically called.\n\nAt application exit, all handlers can be flushed by calling the\nfunction:\n\n def shutdown(): ...\n\nThis will flush and close all handlers.\n\nImplementation\n\nThe reference implementation is Vinay Sajip's logging module[10].\n\nPackaging\n\nThe reference implementation is implemented as a single module. 
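A small script therefore needs nothing beyond importing that module; a
sketch relying on the module-level convenience functions and the
implicit call to basicConfig() described above:

    import logging

    # No configuration needed: basicConfig() runs on first use, so the
    # message goes to stderr via the root logger's default handler.
    logging.warn("disk usage at %d%%", 92)
    logging.shutdown()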
This single-module packaging offers the simplest interface - all users
have to do is "import logging" and they are in a position to use all
the functionality available.

References

Copyright

This document has been placed in the public domain.

[1] java.util.logging
http://java.sun.com/j2se/1.4/docs/guide/util/logging/

[2] log4j: a Java logging package https://logging.apache.org/log4j/

[3] Protomatter's Syslog
http://protomatter.sourceforge.net/1.1.6/index.html
http://protomatter.sourceforge.net/1.1.6/javadoc/com/protomatter/syslog/syslog-whitepaper.html

[4] MAL mentions his mx.Log logging module:
https://mail.python.org/pipermail/python-dev/2002-February/019767.html

[5] Jeff Bauer's Mr. Creosote
http://starship.python.net/crew/jbauer/creosote/

[6] java.util.logging
http://java.sun.com/j2se/1.4/docs/guide/util/logging/

[7] Vinay Sajip's logging module.
https://old.red-dove.com/python_logging.html

[8] Vinay Sajip's logging module.
https://old.red-dove.com/python_logging.html

[9] Vinay Sajip's logging module.
https://old.red-dove.com/python_logging.html

[10] Vinay Sajip's logging module.
https://old.red-dove.com/python_logging.html

PEP: 606 Title: Python Compatibility Version Author: Victor Stinner
<vstinner@python.org> Status: Rejected Type: Standards Track
Content-Type: text/x-rst Created: 18-Oct-2019 Python-Version: 3.9

Abstract

Add sys.set_python_compat_version(version) to enable partial
compatibility with a requested Python version. Add
sys.get_python_compat_version().

Modify a few functions in the standard library to implement partial
compatibility with Python 3.8.

Add sys.set_python_min_compat_version(version) to deny backward
compatibility with Python versions older than version.

Add -X compat_version=VERSION and -X min_compat_version=VERSION command
line options. Add PYTHONCOMPATVERSION and PYTHONCOMPATMINVERSION
environment variables.

Rationale

The need to evolve frequently

To remain relevant and useful, Python has to evolve frequently; some
enhancements require incompatible changes. Any incompatible change can
break an unknown number of Python projects. Developers can decide not
to implement a feature because of that.

Users want to get the latest Python version to obtain new features and
better performance.
A few incompatible changes can prevent them from\nusing their applications on the latest Python version.\n\nThis PEP proposes to add a partial compatibility with old Python\nversions as a tradeoff to fit both use cases.\n\nThe main issue with the migration from Python 2 to Python 3 is not that\nPython 3 is backward incompatible, but how incompatible changes were\nintroduced.\n\nPartial compatibility to minimize the Python maintenance burden\n\nWhile technically it would be possible to provide full compatibility\nwith old Python versions, this PEP proposes to minimize the number of\nfunctions handling backward compatibility to reduce the maintenance\nburden of the Python project (CPython).\n\nEach change introducing backport compatibility to a function should be\nproperly discussed to estimate the maintenance cost in the long-term.\n\nBackward compatibility code will be dropped on each Python release, on a\ncase-by-case basis. Each compatibility function can be supported for a\ndifferent number of Python releases depending on its maintenance cost\nand the estimated risk (number of broken projects) if it's removed.\n\nThe maintenance cost does not only come from the code implementing the\nbackward compatibility, but also comes from the additional tests.\n\nCases excluded from backward compatibility\n\nThe performance overhead of any compatibility code must be low when\nsys.set_python_compat_version() is not called.\n\nThe C API is out of the scope of this PEP: Py_LIMITED_API macro and the\nstable ABI are solving this problem differently, see the PEP 384:\nDefining a Stable ABI <384>.\n\nSecurity fixes which break backward compatibility on purpose will not\nget a compatibility layer; security matters more than compatibility. For\nexample, http.client.HTTPSConnection was modified in Python 3.4.3 to\nperforms all the necessary certificate and hostname checks by default.\nIt was a deliberate change motivated by PEP 476: Enabling\ncertificate verification by default for stdlib http clients\n<476> (bpo-22417).\n\nThe Python language does not provide backward compatibility.\n\nChanges which are not clearly incompatible are not covered by this PEP.\nFor example, Python 3.9 changed the default protocol in the pickle\nmodule to Protocol 4 which was first introduced in Python 3.4. This\nchange is backward compatible up to Python 3.4. There is no need to use\nthe Protocol 3 by default when compatibility with Python 3.8 is\nrequested.\n\nThe new DeprecationWarning and PendingDeprecatingWarning warnings in\nPython 3.9 will not be disabled in Python 3.8 compatibility mode. If a\nproject runs its test suite using -Werror (treat any warning as an\nerror), these warnings must be fixed, or specific deprecation warnings\nmust be ignored on a case-by-case basis.\n\nUpgrading a project to a newer Python\n\nWithout backward compatibility, all incompatible changes must be fixed\nat once, which can be a blocker issue. It is even worse when a project\nis upgraded to a newer Python which is separated by multiple releases\nfrom the old Python.\n\nPostponing an upgrade only makes things worse: each skipped release adds\nmore incompatible changes. The technical debt only steadily increases\nover time.\n\nWith backward compatibility, it becomes possible to upgrade Python\nincrementally in a project, without having to fix all of the issues at\nonce.\n\nThe \"all-or-nothing\" is a showstopper to port large Python 2 code bases\nto Python 3. 
The list of incompatible changes between Python 2 and\nPython 3 is long, and it's getting longer with each Python 3.x release.\n\nCleaning up Python and DeprecationWarning\n\nOne of the Zen of Python (PEP 20)\n<20> motto is:\n\n There should be one-- and preferably only one --obvious way to do it.\n\nWhen Python evolves, new ways inevitably emerge. DeprecationWarnings are\nemitted to suggest using the new way, but many developers ignore these\nwarnings, which are silent by default (except in the __main__ module:\nsee the PEP 565). Some developers simply ignore all warnings when there\nare too many warnings, thus only bother with exceptions when the\ndeprecated code is removed.\n\nSometimes, supporting both ways has a minor maintenance cost, but\ndevelopers prefer to drop the old way to clean up their code. These\nkinds of changes are backward incompatible.\n\nSome developers can take the end of the Python 2 support as an\nopportunity to push even more incompatible changes than usual.\n\nAdding an opt-in backward compatibility prevents the breaking of\napplications and allows developers to continue doing these cleanups.\n\nRedistribute the maintenance burden\n\nThe backward compatibility involves authors of incompatible changes more\nin the upgrade path.\n\nExamples of backward compatibility\n\ncollections ABC aliases\n\ncollections.abc aliases to ABC classes have been removed from the\ncollections module in Python 3.9, after being deprecated since Python\n3.3. For example, collections.Mapping no longer exists.\n\nIn Python 3.6, aliases were created in collections/__init__.py by\nfrom _collections_abc import *.\n\nIn Python 3.7, a __getattr__() has been added to the collections module\nto emit a DeprecationWarning upon first access to an attribute:\n\n def __getattr__(name):\n # For backwards compatibility, continue to make the collections ABCs\n # through Python 3.6 available through the collections module.\n # Note: no new collections ABCs were added in Python 3.7\n if name in _collections_abc.__all__:\n obj = getattr(_collections_abc, name)\n import warnings\n warnings.warn(\"Using or importing the ABCs from 'collections' instead \"\n \"of from 'collections.abc' is deprecated since Python 3.3, \"\n \"and in 3.9 it will be removed.\",\n DeprecationWarning, stacklevel=2)\n globals()[name] = obj\n return obj\n raise AttributeError(f'module {__name__!r} has no attribute {name!r}')\n\nCompatibility with Python 3.8 can be restored in Python 3.9 by adding\nback the __getattr__() function, but only when backward compatibility is\nrequested:\n\n def __getattr__(name):\n if (sys.get_python_compat_version() < (3, 9)\n and name in _collections_abc.__all__):\n ...\n raise AttributeError(f'module {__name__!r} has no attribute {name!r}')\n\nDeprecated open() \"U\" mode\n\nThe \"U\" mode of open() is deprecated since Python 3.4 and emits a\nDeprecationWarning. bpo-37330 proposes to drop this mode:\nopen(filename, \"rU\") would raise an exception.\n\nThis change falls into the \"cleanup\" category: it is not required to\nimplement a feature.\n\nA backward compatibility mode would be trivial to implement and would be\nwelcomed by users.\n\nSpecification\n\nsys functions\n\nAdd 3 functions to the sys module:\n\n- sys.set_python_compat_version(version): set the Python compatibility\n version. If it has been called previously, use the minimum of\n requested versions. Raise an exception if\n sys.set_python_min_compat_version(min_version) has been called and\n version < min_version. 
version must be greater than or equal to\n (3, 0).\n- sys.set_python_min_compat_version(min_version): set the minimum\n compatibility version. Raise an exception if\n sys.set_python_compat_version(old_version) has been called\n previously and old_version < min_version. min_version must be\n greater than or equal to (3, 0).\n- sys.get_python_compat_version(): get the Python compatibility\n version. Return a tuple of 3 integers.\n\nA version must a tuple of 2 or 3 integers. (major, minor) version is\nequivalent to (major, minor, 0).\n\nBy default, sys.get_python_compat_version() returns the current Python\nversion.\n\nFor example, to request compatibility with Python 3.8.0:\n\n import collections\n\n sys.set_python_compat_version((3, 8))\n\n # collections.Mapping alias, removed from Python 3.9, is available\n # again, even if collections has been imported before calling\n # set_python_compat_version().\n parent = collections.Mapping\n\nObviously, calling sys.set_python_compat_version(version) has no effect\non code executed before the call. Use -X compat_version=VERSION command\nline option or PYTHONCOMPATVERSIONVERSION=VERSION environment variable\nto set the compatibility version at Python startup.\n\nCommand line\n\nAdd -X compat_version=VERSION and -X min_compat_version=VERSION command\nline options: call respectively sys.set_python_compat_version() and\nsys.set_python_min_compat_version(). VERSION is a version string with 2\nor 3 numbers (major.minor.micro or major.minor). For example,\n-X compat_version=3.8 calls sys.set_python_compat_version((3, 8)).\n\nAdd PYTHONCOMPATVERSIONVERSION=VERSION and\nPYTHONCOMPATMINVERSION=VERSION=VERSION environment variables: call\nrespectively sys.set_python_compat_version() and\nsys.set_python_min_compat_version(). VERSION is a version string with\nthe same format as the command line options.\n\nBackwards Compatibility\n\nIntroducing the sys.set_python_compat_version() function means that an\napplication will behave differently depending on the compatibility\nversion. Moreover, since the version can be decreased multiple times,\nthe application can behave differently depending on the import order.\n\nPython 3.9 with sys.set_python_compat_version((3, 8)) is not fully\ncompatible with Python 3.8: the compatibility is only partial.\n\nSecurity Implications\n\nsys.set_python_compat_version() must not disable security fixes.\n\nAlternatives\n\nProvide a workaround for each incompatible change\n\nAn application can work around most incompatible changes which impacts\nit.\n\nFor example, collections aliases can be added back using:\n\n import collections.abc\n collections.Mapping = collections.abc.Mapping\n collections.Sequence = collections.abc.Sequence\n\nHandle backward compatibility in the parser\n\nThe parser is modified to support multiple versions of the Python\nlanguage (grammar).\n\nThe current Python parser cannot be easily modified for that. AST and\ngrammar are hardcoded to a single Python version.\n\nIn Python 3.8, compile() has an undocumented _feature_version to not\nconsider async and await as keywords.\n\nThe latest major language backward incompatible change was Python 3.7\nwhich made async and await real keywords. 
It seems like Twisted was the\nonly affected project, and Twisted had a single affected function (it\nused a parameter called async).\n\nHandling backward compatibility in the parser seems quite complex, not\nonly to modify the parser, but also for developers who have to check\nwhich version of the Python language is used.\n\nfrom __future__ import python38_syntax\n\nAdd pythonXY_syntax to the __future__ module. It would enable backward\ncompatibility with Python X.Y syntax, but only for the current file.\n\nWith this option, there is no need to change\nsys.implementation.cache_tag to use a different .pyc filename, since the\nparser will always produce the same output for the same input (except\nfor the optimization level).\n\nFor example:\n\n from __future__ import python35_syntax\n\n async = 1\n await = 2\n\nUpdate cache_tag\n\nModify the parser to use sys.get_python_compat_version() to choose the\nversion of the Python language.\n\nsys.set_python_compat_version() updates sys.implementation.cache_tag to\ninclude the compatibility version without the micro version as a suffix.\nFor example, Python 3.9 uses 'cpython-39' by default, but\nsys.set_python_compat_version((3, 7, 2)) sets cache_tag to\n'cpython-39-37'. Changes to the Python language are now allowed in micro\nreleases.\n\nOne problem is that import asyncio is likely to fail if\nsys.set_python_compat_version((3, 6)) has been called previously. The\ncode of the asyncio module requires async and await to be real keywords\n(change done in Python 3.7).\n\nAnother problem is that regular users cannot write .pyc files into\nsystem directories, and so cannot create them on demand. It means that\n.pyc optimization cannot be used in the backward compatibility mode.\n\nOne solution for that is to modify the Python installer and Python\npackage installers to precompile .pyc files not only for the current\nPython version, but also for multiple older Python versions (up to\nPython 3.0?).\n\nEach .py file would have 3n .pyc files (3 optimization levels), where n\nis the number of supported Python versions. For example, it means 6 .pyc\nfiles, instead of 3, to support Python 3.8 and Python 3.9.\n\nTemporary moratorium on incompatible changes\n\nIn 2009, PEP 3003 \"Python Language Moratorium\" proposed a temporary\nmoratorium (suspension) of all changes to the Python language syntax,\nsemantics, and built-ins for Python 3.1 and Python 3.2.\n\nIn May 2018, during the PEP 572 discussions, it was also proposed to\nslow down Python changes: see the python-dev thread Slow down...\n\nBarry Warsaw's call on this:\n\n I don’t believe that the way for Python to remain relevant and useful\n for the next 10 years is to cease all language evolution. Who knows\n what the computing landscape will look like in 5 years, let alone 10?\n Something as arbitrary as a 10-year moratorium is (again, IMHO) a\n death sentence for the language.\n\nPEP 387\n\nPEP 387 -- Backwards Compatibility Policy\n<387> proposes a process to make incompatible changes. The main point is\nthe 4th step of the process:\n\n See if there's any feedback. Users not involved in the original\n discussions may comment now after seeing the warning. 
Perhaps\n reconsider.\n\nPEP 497\n\nPEP 497 -- A standard mechanism for backward compatibility\n<497> proposes different solutions to provide backward compatibility.\n\nExcept for the __past__ mechanism idea, PEP 497 does not propose\nconcrete solutions:\n\n When an incompatible change to core language syntax or semantics is\n being made, Python-dev's policy is to prefer and expect that, wherever\n possible, a mechanism for backward compatibility be considered and\n provided for future Python versions after the breaking change is\n adopted by default, in addition to any mechanisms proposed for forward\n compatibility such as new future_statements.\n\nExamples of incompatible changes\n\nPython 3.8\n\nExamples of Python 3.8 incompatible changes:\n\n- (During beta phase) PyCode_New() required a new parameter: it broke\n all Cython extensions (all projects distributing precompiled Cython\n code). This change has been reverted during the 3.8 beta phase and a\n new PyCode_NewWithPosOnlyArgs() function was added instead.\n- types.CodeType requires an additional mandatory parameter. The\n CodeType.replace() function was added to help projects to no longer\n depend on the exact signature of the CodeType constructor.\n- C extensions are no longer linked to libpython.\n- sys.abiflags changed from 'm' to an empty string. For example,\n python3.8m program is gone.\n- The C structure PyInterpreterState was made opaque.\n - Blender:\n - https://bugzilla.redhat.com/show_bug.cgi?id=1734980#c6\n - https://developer.blender.org/D6038\n- XML attribute order: bpo-34160. Broken projects:\n - coverage\n - docutils\n - pcs\n - python-glyphsLib\n\nBackward compatibility cannot be added for all these changes. For\nexample, changes in the C API and in the build system are out of the\nscope of this PEP.\n\nSee What’s New In Python 3.8: API and Feature Removals for all changes.\n\nSee also the Porting to Python 3.8 section of What’s New In Python 3.8.\n\nPython 3.7\n\nExamples of Python 3.7 incompatible changes:\n\n- async and await are now reserved keywords.\n- Several undocumented internal imports were removed. One example is\n that os.errno is no longer available; use import errno directly\n instead. Note that such undocumented internal imports may be removed\n any time without notice, even in micro version releases.\n- Unknown escapes consisting of '\\' and an ASCII letter in replacement\n templates for re.sub() were deprecated in Python 3.5, and will now\n cause an error.\n- The asyncio.windows_utils.socketpair() function has been removed: it\n was an alias to socket.socketpair().\n- asyncio no longer exports the selectors and _overlapped modules as\n asyncio.selectors and asyncio._overlapped. Replace\n from asyncio import selectors with import selectors.\n- PEP 479 is enabled for all code in Python 3.7, meaning that\n StopIteration exceptions raised directly or indirectly in coroutines\n and generators are transformed into RuntimeError exceptions.\n- socketserver.ThreadingMixIn.server_close() now waits until all\n non-daemon threads complete. 
Set the new block_on_close class\n attribute to False to get the pre-3.7 behaviour.\n- The struct.Struct.format type is now str instead of bytes.\n- repr for datetime.timedelta has changed to include the keyword\n arguments in the output.\n- tracemalloc.Traceback frames are now sorted from oldest to most\n recent to be more consistent with traceback.\n\nAdding backward compatibility for most of these changes would be easy.\n\nSee also the Porting to Python 3.7 section of What’s New In Python 3.7.\n\nMicro releases\n\nSometimes, incompatible changes are introduced in micro releases (micro\nin major.minor.micro) to fix bugs or security vulnerabilities. Examples\ninclude:\n\n- Python 3.7.2, compileall and py_compile module: the\n invalidation_mode parameter's default value is updated to None; the\n SOURCE_DATE_EPOCH environment variable no longer overrides the value\n of the invalidation_mode argument, and determines its default value\n instead.\n- Python 3.7.1, xml modules: the SAX parser no longer processes\n general external entities by default to increase security by\n default.\n- Python 3.5.2, os.urandom(): on Linux, if the getrandom() syscall\n blocks (the urandom entropy pool is not initialized yet), fall back\n on reading /dev/urandom.\n- Python 3.5.1, sys.setrecursionlimit(): a RecursionError exception is\n now raised if the new limit is too low at the current recursion\n depth.\n- Python 3.4.4, ssl.create_default_context(): RC4 was dropped from the\n default cipher string.\n- Python 3.4.3, http.client: HTTPSConnection now performs all the\n necessary certificate and hostname checks by default.\n- Python 3.4.2, email.message: EmailMessage.is_attachment() is now a\n method instead of a property, for consistency with\n Message.is_multipart().\n- Python 3.4.1, os.makedirs(name, mode=0o777, exist_ok=False): Before\n Python 3.4.1, if exist_ok was True and the directory existed,\n makedirs() would still raise an error if mode did not match the mode\n of the existing directory. 
Since this behavior was impossible to implement safely, it was removed
in Python 3.4.1 (bpo-21082).

Examples of changes made in micro releases which are not backward
incompatible:

- ssl.OP_NO_TLSv1_3 constant was added to 2.7.15, 3.6.3 and 3.7.0 for
  backwards compatibility with OpenSSL 1.0.2.
- typing.AsyncContextManager was added to Python 3.6.2.
- The zipfile module accepts a path-like object since Python 3.6.2.
- loop.create_future() was added to Python 3.5.2 in the asyncio module.

No backward compatibility code is needed for these kinds of changes.

References

Accepted PEPs:

- PEP 5 -- Guidelines for Language Evolution <5>
- PEP 236 -- Back to the __future__ <236>
- PEP 411 -- Provisional packages in the Python standard library <411>
- PEP 3002 -- Procedure for Backwards-Incompatible Changes <3002>

Draft PEPs:

- PEP 602 -- Annual Release Cycle for Python <602>
- PEP 605 -- A rolling feature release stream for CPython <605>
- See also withdrawn PEP 598 -- Introducing incremental feature
  releases <598>

Copyright

This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

PEP: 204 Title: Range Literals Author: Thomas Wouters
<thomas@python.org> Status: Rejected Type: Standards Track Created:
14-Jul-2000 Python-Version: 2.0 Post-History:

After careful consideration, and a period of meditation, this proposal
has been rejected. The open issues, as well as some confusion between
ranges and slice syntax, raised enough questions for Guido not to accept
it for Python 2.0, and later to reject the proposal altogether. The new
syntax and its intentions were deemed not obvious enough.

[ TBD: Guido, amend/confirm this, please. Preferably both; this is a
PEP, it should contain all the reasons for rejection and/or
reconsideration, for future reference. ]

Introduction

This PEP describes the "range literal" proposal for Python 2.0. This PEP
tracks the status and ownership of this feature, slated for introduction
in Python 2.0. It contains a description of the feature and outlines
changes necessary to support the feature. This PEP summarizes
discussions held in mailing list forums, and provides URLs for further
information, where appropriate. The CVS revision history of this file
contains the definitive historical record.

List ranges

Ranges are sequences of numbers of a fixed stepping, often used in
for-loops. The Python for-loop is designed to iterate over a sequence
directly:

    >>> l = ['a', 'b', 'c', 'd']
    >>> for item in l:
    ...     print item
    a
    b
    c
    d

However, this solution is not always prudent. Firstly, problems arise
when altering the sequence in the body of the for-loop, resulting in the
for-loop skipping items. Secondly, it is not possible to iterate over,
say, every second element of the sequence.
And thirdly, it is sometimes\nnecessary to process an element based on its index, which is not readily\navailable in the above construct.\n\nFor these instances, and others where a range of numbers is desired,\nPython provides the range builtin function, which creates a list of\nnumbers. The range function takes three arguments, start, end and step.\nstart and step are optional, and default to 0 and 1, respectively.\n\nThe range function creates a list of numbers, starting at start, with a\nstep of step, up to, but not including end, so that range(10) produces a\nlist that has exactly 10 items, the numbers 0 through 9.\n\nUsing the range function, the above example would look like this:\n\n >>> for i in range(len(l)):\n ... print l[i]\n a\n b\n c\n d\n\nOr, to start at the second element of l and processing only every second\nelement from then on:\n\n >>> for i in range(1, len(l), 2):\n ... print l[i]\n b\n d\n\nThere are several disadvantages with this approach:\n\n- Clarity of purpose: Adding another function call, possibly with\n extra arithmetic to determine the desired length and step of the\n list, does not improve readability of the code. Also, it is possible\n to \"shadow\" the builtin range function by supplying a local or\n global variable with the same name, effectively replacing it. This\n may or may not be a desired effect.\n- Efficiency: because the range function can be overridden, the Python\n compiler cannot make assumptions about the for-loop, and has to\n maintain a separate loop counter.\n- Consistency: There already is a syntax that is used to denote\n ranges, as shown below. This syntax uses the exact same arguments,\n though all optional, in the exact same way. It seems logical to\n extend this syntax to ranges, to form \"range literals\".\n\nSlice Indices\n\nIn Python, a sequence can be indexed in one of two ways: retrieving a\nsingle item, or retrieving a range of items. Retrieving a range of items\nresults in a new object of the same type as the original sequence,\ncontaining zero or more items from the original sequence. This is done\nusing a \"range notation\":\n\n >>> l[2:4]\n ['c', 'd']\n\nThis range notation consists of zero, one or two indices separated by a\ncolon. The first index is the start index, the second the end. When\neither is left out, they default to respectively the start and the end\nof the sequence.\n\nThere is also an extended range notation, which incorporates step as\nwell. Though this notation is not currently supported by most builtin\ntypes, if it were, it would work as follows:\n\n >>> l[1:4:2]\n ['b', 'd']\n\nThe third \"argument\" to the slice syntax is exactly the same as the step\nargument to range(). The underlying mechanisms of the standard, and\nthese extended slices, are sufficiently different and inconsistent that\nmany classes and extensions outside of mathematical packages do not\nimplement support for the extended variant. While this should be\nresolved, it is beyond the scope of this PEP.\n\nExtended slices do show, however, that there is already a perfectly\nvalid and applicable syntax to denote ranges in a way that solve all of\nthe earlier stated disadvantages of the use of the range() function:\n\n- It is clearer, more concise syntax, which has already proven to be\n both intuitive and easy to learn.\n- It is consistent with the other use of ranges in Python (e.g.\n slices).\n- Because it is built-in syntax, instead of a builtin function, it\n cannot be overridden. 
This means both that a viewer can be certain\n about what the code does, and that an optimizer will not have to\n worry about range() being \"shadowed\".\n\nThe Proposed Solution\n\nThe proposed implementation of range-literals combines the syntax for\nlist literals with the syntax for (extended) slices, to form range\nliterals:\n\n >>> [1:10]\n [1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> [:5]\n [0, 1, 2, 3, 4]\n >>> [5:1:-1]\n [5, 4, 3, 2]\n\nThere is one minor difference between range literals and the slice\nsyntax: though it is possible to omit all of start, end and step in\nslices, it does not make sense to omit end in range literals. In slices,\nend would default to the end of the list, but this has no meaning in\nrange literals.\n\nReference Implementation\n\nThe proposed implementation can be found on SourceForge[1]. It adds a\nnew bytecode, BUILD_RANGE, that takes three arguments from the stack and\nbuilds a list on the bases of those. The list is pushed back on the\nstack.\n\nThe use of a new bytecode is necessary to be able to build ranges based\non other calculations, whose outcome is not known at compile time.\n\nThe code introduces two new functions to listobject.c, which are\ncurrently hovering between private functions and full-fledged API calls.\n\nPyList_FromRange() builds a list from start, end and step, returning\nNULL if an error occurs. Its prototype is:\n\n PyObject * PyList_FromRange(long start, long end, long step)\n\nPyList_GetLenOfRange() is a helper function used to determine the length\nof a range. Previously, it was a static function in bltinmodule.c, but\nis now necessary in both listobject.c and bltinmodule.c (for xrange). It\nis made non-static solely to avoid code duplication. Its prototype is:\n\n long PyList_GetLenOfRange(long start, long end, long step)\n\nOpen issues\n\n- One possible solution to the discrepancy of requiring the end\n argument in range literals is to allow the range syntax to create a\n \"generator\", rather than a list, such as the xrange builtin function\n does. However, a generator would not be a list, and it would be\n impossible, for instance, to assign to items in the generator, or\n append to it.\n\n The range syntax could conceivably be extended to include tuples\n (i.e. immutable lists), which could then be safely implemented as\n generators. This may be a desirable solution, especially for large\n number arrays: generators require very little in the way of storage\n and initialization, and there is only a small performance impact in\n calculating and creating the appropriate number on request. (TBD: is\n there any at all? Cursory testing suggests equal performance even in\n the case of ranges of length 1)\n\n However, even if idea was adopted, would it be wise to \"special\n case\" the second argument, making it optional in one instance of the\n syntax, and non-optional in other cases ?\n\n- Should it be possible to mix range syntax with normal list literals,\n creating a single list? E.g.:\n\n >>> [5, 6, 1:6, 7, 9]\n\n to create:\n\n [5, 6, 1, 2, 3, 4, 5, 7, 9]\n\n- How should range literals interact with another proposed new\n feature, \"list comprehensions\" <202>? Specifically, should it be\n possible to create lists in list comprehensions? 
E.g.:

      >>> [x:y for x in (1, 2) y in (3, 4)]

  Should this example return a single list with multiple ranges:

      [1, 2, 1, 2, 3, 2, 2, 3]

  Or a list of lists, like so:

      [[1, 2], [1, 2, 3], [2], [2, 3]]

  However, as the syntax and semantics of list comprehensions are
  still subject of hot debate, these issues are probably best
  addressed by the "list comprehensions" PEP.

- Range literals accept objects other than integers: it performs
  PyInt_AsLong() on the objects passed in, so as long as the objects
  can be coerced into integers, they will be accepted. The resulting
  list, however, is always composed of standard integers.

  Should range literals create a list of the passed-in type? It might
  be desirable in the cases of other builtin types, such as longs and
  strings:

      >>> [ 1L : 2L<<64 : 2<<32L ]
      >>> ["a":"z":"b"]
      >>> ["a":"z":2]

  However, this might be too much "magic" to be obvious. It might also
  present problems with user-defined classes: even if the base class
  can be found and a new instance created, the instance may require
  additional arguments to __init__, causing the creation to fail.

- The PyList_FromRange() and PyList_GetLenOfRange() functions need to
  be classified: are they part of the API, or should they be made
  private functions?

Copyright

This document has been placed in the Public Domain.

References

[1] http://sourceforge.net/patch/?func=detailpatch&patch_id=100902&group_id=5470

PEP: 733 Title: An Evaluation of Python's Public C API Author: Erlend
Egeberg Aasland <erlend@python.org>, Domenico Andreoli
<domenico.andreoli@linux.com>, Stefan Behnel <stefan_ml@behnel.de>, Carl
Friedrich Bolz-Tereick <cfbolz@gmx.de>, Simon Cross
<hodgestar@gmail.com>, Steve Dower <steve.dower@python.org>, Tim
Felgentreff <tim.felgentreff@oracle.com>, David Hewitt
<1939362+davidhewitt@users.noreply.github.com>, Shantanu Jain
<hauntsaninja@gmail.com>, Wenzel Jakob <wenzel.jakob@epfl.ch>, Irit
Katriel <irit@python.org>, Marc-Andre Lemburg <mal@lemburg.com>, Donghee
Na <donghee.na@python.org>, Karl Nelson <nelson85@llnl.gov>, Ronald
Oussoren <ronaldoussoren@mac.com>, Antoine Pitrou <solipsis@pitrou.net>,
Neil Schemenauer <nas@arctrix.com>, Mark Shannon <mark@hotpy.org>,
Stepan Sindelar <stepan.sindelar@oracle.com>, Gregory P. Smith
<greg@krypto.org>, Eric Snow <ericsnowcurrently@gmail.com>, Victor
Stinner <vstinner@python.org>, Guido van Rossum <guido@python.org>, Petr
Viktorin <encukou@gmail.com>, Carol Willing <willingc@gmail.com>,
William Woodruff <william@yossarian.net>, David Woods
<dw-git@d-woods.co.uk>, Jelle Zijlstra <jelle.zijlstra@gmail.com>
Status: Draft Type: Informational Created: 16-Oct-2023 Post-History:
01-Nov-2023

Abstract

This informational PEP describes our shared view of the public C API.
The document defines:

- purposes of the C API
- stakeholders and their particular use cases and requirements
- strengths of the C API
- problems of the C API categorized into nine areas of weakness

This document does not propose solutions to any of the identified
problems. By creating a shared list of C API issues, this document will
help to guide continuing discussion about change proposals and to
identify evaluation criteria.

Introduction

Python's C API was not designed for the different purposes it currently
fulfills. It evolved from what was initially the internal API between
the C code of the interpreter and the Python language and libraries.
In\nits first incarnation, it was exposed to make it possible to embed\nPython into C/C++ applications and to write extension modules in C/C++.\nThese capabilities were instrumental to the growth of Python's\necosystem. Over the decades, the C API grew to provide different tiers\nof stability, conventions changed, and new usage patterns have emerged,\nsuch as bindings to languages other than C/C++. In the next few years,\nnew developments are expected to further test the C API, such as the\nremoval of the GIL and the development of a JIT compiler. However, this\ngrowth was not supported by clearly documented guidelines, resulting in\ninconsistent approaches to API design in different subsystems of\nCPython. In addition, CPython is no longer the only implementation of\nPython, and some of the design decisions made when it was, are difficult\nfor alternative implementations to work with [Issue 64]. In the\nmeantime, lessons were learned and mistakes in both the design and the\nimplementation of the C API were identified.\n\nEvolving the C API is hard due to the combination of backwards\ncompatibility constraints and its inherent complexity, both technical\nand social. Different types of users bring different, sometimes\nconflicting, requirements. The tradeoff between stability and progress\nis an ongoing, highly contentious topic of discussion when suggestions\nare made for incremental improvements. Several proposals have been put\nforward for improvement, redesign or replacement of the C API, each\nrepresenting a deep analysis of the problems. At the 2023 Language\nSummit, three back-to-back sessions were devoted to different aspects of\nthe C API. There is general agreement that a new design can remedy the\nproblems that the C API has accumulated over the last 30 years, while at\nthe same time updating it for use cases that it was not originally\ndesigned for.\n\nHowever, there was also a sense at the Language Summit that we are\ntrying to discuss solutions without a clear common understanding of the\nproblems that we are trying to solve. We decided that we need to agree\non the current problems with the C API, before we are able to evaluate\nany of the proposed solutions. We therefore created the capi-workgroup\nrepository on GitHub in order to collect everyone's ideas on that\nquestion.\n\nOver 60 different issues were created on that repository, each\ndescribing a problem with the C API. We categorized them and identified\na number of recurring themes. The sections below mostly correspond to\nthese themes, and each contains a combined description of the issues\nraised in that category, along with links to the individual issues. In\naddition, we included a section that aims to identify the different\nstakeholders of the C API, and the particular requirements that each of\nthem has.\n\nC API Stakeholders\n\nAs mentioned in the introduction, the C API was originally created as\nthe internal interface between CPython's interpreter and the Python\nlayer. It was later exposed as a way for third-party developers to\nextend and embed Python programs. 
Over the years, new types of\nstakeholders emerged, with different requirements and areas of focus.\nThis section describes this complex state of affairs in terms of the\nactions that different stakeholders need to perform through the C API.\n\nCommon Actions for All Stakeholders\n\nThere are actions which are generic, and required by all types of API\nusers:\n\n- Define functions and call them\n- Define new types\n- Create instances of builtin and user-defined types\n- Perform operations on object instances\n- Introspect objects, including types, instances, and functions\n- Raise and handle exceptions\n- Import modules\n- Access to Python's OS interface\n\nThe following sections look at the unique requirements of various\nstakeholders.\n\nExtension Writers\n\nExtension writers are the traditional users of the C API. Their\nrequirements are the common actions listed above. They also commonly\nneed to:\n\n- Create new modules\n- Efficiently interface between modules at the C level\n\nAuthors of Embedded Python Applications\n\nApplications with an embedded Python interpreter. Examples are Blender\nand OBS.\n\nThey need to be able to:\n\n- Configure the interpreter (import paths, inittab, sys.argv, memory\n allocator, etc.).\n- Interact with the execution model and program lifetime, including\n clean interpreter shutdown and restart.\n- Represent complex data models in a way Python can use without having\n to create deep copies.\n- Provide and import frozen modules.\n- Run and manage multiple independent interpreters (in particular,\n when embedded in a library that wants to avoid global effects).\n\nPython Implementations\n\nPython implementations such as CPython, PyPy, GraalPy, IronPython,\nRustPython, MicroPython, and Jython), may take very different approaches\nfor the implementation of different subsystems. They need:\n\n- The API to be abstract and hide implementation details.\n- A specification of the API, ideally with a test suite that ensures\n compatibility.\n- It would be nice to have an ABI that can be shared across Python\n implementations.\n\nAlternative APIs and Binding Generators\n\nThere are several projects that implement alternatives to the C API,\nwhich offer extension users advantanges over programming directly with\nthe C API. These APIs are implemented with the C API, and in some cases\nby using CPython internals.\n\nThere are also libraries that create bindings between Python and other\nobject models, paradigms or languages.\n\nThere is overlap between these categories: binding generators usually\nprovide alternative APIs, and vice versa.\n\nExamples are Cython, cffi, pybind11 and nanobind for C++, PyO3 for Rust,\nShiboken used by PySide for Qt, PyGObject for GTK, Pygolo for Go, JPype\nfor Java, PyJNIus for Android, PyObjC for Objective-C, SWIG for C/C++,\nPython.NET for .NET (C#), HPy, Mypyc, Pythran and pythoncapi-compat.\nCPython's DSL for parsing function arguments, the Argument Clinic, can\nalso be seen as belonging to this category of stakeholders.\n\nAlternative APIs need minimal building blocks for accessing CPython\nefficiently. They don't necessarily need an ergonomic API, because they\ntypically generate code that is not intended to be read by humans. But\nthey do need it to be comprehensive enough so that they can avoid\naccessing internals, without sacrificing performance.\n\nBinding generators often need to:\n\n- Create custom objects (e.g. 
function/module objects and traceback\n entries) that match the behavior of equivalent Python code as\n closely as possible.\n- Dynamically create objects which are static in traditional C\n extensions (e.g. classes/modules), and need CPython to manage their\n state and lifetime.\n- Dynamically adapt foreign objects (strings, GC'd containers), with\n low overhead.\n- Adapt external mechanisms, execution models and guarantees to the\n Python way (stackful coroutines, continuations,\n one-writer-or-multiple-readers semantics, virtual multiple\n inheritance, 1-based indexing, super-long inheritance chains,\n goroutines, channels, etc.).\n\nThese tools might also benefit from a choice between a more stable and a\nfaster (possibly lower-level) API. Their users could then decide whether\nthey can afford to regenerate the code often or trade some performance\nfor more stability and less maintenance work.\n\nStrengths of the C API\n\nWhile the bulk of this document is devoted to problems with the C API\nthat we would like to see fixed in any new design, it is also important\nto point out the strengths of the C API, and to make sure that they are\npreserved.\n\nAs mentioned in the introduction, the C API enabled the development and\ngrowth of the Python ecosystem over the last three decades, while\nevolving to support use cases that it was not originally designed for.\nThis track record in itself is indication of how effective and valuable\nit has been.\n\nA number of specific strengths were mentioned in the capi-workgroup\ndiscussions. Heap types were identified as much safer and easier to use\nthan static types [Issue 4].\n\nAPI functions that take a C string literal for lookups based on a Python\nstring are very convenient [Issue 30].\n\nThe limited API demonstrates that an API which hides implementation\ndetails makes it easier to evolve Python [Issue 30].\n\nC API problems\n\nThe remainder of this document summarizes and categorizes the problems\nthat were reported on the capi-workgroup repository. The issues are\ngrouped into several categories.\n\nAPI Evolution and Maintenance\n\nThe difficulty of making changes in the C API is central to this report.\nIt is implicit in many of the issues we discuss here, particularly when\nwe need to decide whether an incremental bugfix can resolve the issue,\nor whether it can only be addressed as part of an API redesign [Issue\n44]. The benefit of each incremental change is often viewed as too small\nto justify the disruption. Over time, this implies that every mistake we\nmake in an API's design or implementation remains with us indefinitely.\n\nWe can take two views on this issue. One is that this is a problem and\nthe solution needs to be baked into any new C API we design, in the form\nof a process for incremental API evolution, which includes deprecation\nand removal of API elements. The other possible approach is that this is\nnot a problem to be solved, but rather a feature of any API. In this\nview, API evolution should not be incremental, but rather through large\nredesigns, each of which learns from the mistakes of the past and is not\nshackled by backwards compatibility requirements (in the meantime, new\nAPI elements may be added, but nothing can ever be removed). A\ncompromise approach is somewhere between these two extremes, fixing\nissues which are easy or important enough to tackle incrementally, and\nleaving others alone.\n\nThe problem we have in CPython is that we don't have an agreed, official\napproach to API evolution. 
Different members of the core team are\npulling in different directions and this is an ongoing source of\ndisagreements. Any new C API needs to come with a clear decision about\nthe model that its maintenance will follow, as well as the technical and\norganizational processes by which this will work.\n\nIf the model does include provisions for incremental evolution of the\nAPI, it will include processes for managing the impact of the change on\nusers [Issue 60], perhaps through introducing an external backwards\ncompatibility module [Issue 62], or a new API tier of \"blessed\"\nfunctions [Issue 55].\n\nAPI Specification and Abstraction\n\nThe C API does not have a formal specification, it is currently defined\nas whatever the reference implementation (CPython) contains in a\nparticular version. The documentation acts as an incomplete description,\nwhich is not sufficient for verifying the correctness of either the full\nAPI, the limited API, or the stable ABI. As a result, the C API may\nchange significantly between releases without needing a more visible\nspecification update, and this leads to a number of problems.\n\nBindings for languages other than C/C++ must parse C code [Issue 7].\nSome C language features are hard to handle in this way, because they\nproduce compiler-dependent output (such as enums) or require a C\npreprocessor/compiler rather than just a parser (such as macros) [Issue\n35].\n\nFurthermore, C header files tend to expose more than what is intended to\nbe part of the public API [Issue 34]. In particular, implementation\ndetails such as the precise memory layouts of internal data structures\ncan be exposed [Issue 22 and PEP 620]. This can make API evolution very\ndifficult, in particular when it occurs in the stable ABI as in the case\nof ob_refcnt and ob_type, which are accessed via the reference counting\nmacros [Issue 45].\n\nWe identified a deeper issue in relation to the way that reference\ncounting is exposed. The way that C extensions are required to manage\nreferences with calls to Py_INCREF and Py_DECREF is specific to\nCPython's memory model, and is hard for alternative Python\nimplementations to emulate. [Issue 12].\n\nAnother set of problems arises from the fact that a PyObject* is exposed\nin the C API as an actual pointer rather than a handle. The address of\nan object serves as its ID and is used for comparison, and this\ncomplicates matters for alternative Python implementations that move\nobjects during GC [Issue 37].\n\nA separate issue is that object references are opaque to the runtime,\ndiscoverable only through calls to tp_traverse/tp_clear, which have\ntheir own purposes. If there was a way for the runtime to know the\nstructure of the object graph, and keep up with changes in it, this\nwould make it possible for alternative implementations to implement\ndifferent memory management schemes [Issue 33].\n\nObject Reference Management\n\nThere does not exist a consistent naming convention for functions which\nmakes their reference semantics obvious, and this leads to error prone C\nAPI functions, where they do not follow the typical behaviour. When a C\nAPI function returns a PyObject*, the caller typically gains ownership\nof a reference to the object. However, there are exceptions where a\nfunction returns a \"borrowed\" reference, which the caller can access but\ndoes not own a reference to. 
Similarly, functions typically do not\nchange the ownership of references to their arguments, but there are\nexceptions where a function \"steals\" a reference, i.e., the ownership of\nthe reference is permanently transferred from the caller to the callee\nby the call [Issue 8 and Issue 52]. The terminology used to describe\nthese situations in the documentation can also be improved [Issue 11].\n\nA more radical change is necessary in the case of functions that return\n\"borrowed\" references (such as PyList_GetItem) [Issue 5 and Issue 21] or\npointers to parts of the internal structure of an object (such as\nPyBytes_AsString) [Issue 57]. In both cases, the reference/pointer is\nvalid for as long as the owning object holds the reference, but this\ntime is hard to reason about. Such functions should not exist in the API\nwithout a mechanism that can make them safe.\n\nFor containers, the API is currently missing bulk operations on the\nreferences of contained objects. This is particularly important for a\nstable ABI where INCREF and DECREF cannot be macros, making bulk\noperations expensive when implemented as a sequence of function calls\n[Issue 15].\n\nType Definition and Object Creation\n\nThe C API has functions that make it possible to create incomplete or\ninconsistent Python objects, such as PyTuple_New and PyUnicode_New. This\ncauses problems when the object is tracked by GC or its\ntp_traverse/tp_clear functions are called. A related issue is with\nfunctions such as PyTuple_SetItem which is used to modify a partially\ninitialized tuple (tuples are immutable once fully initialized) [Issue\n56].\n\nWe identified a few issues with type definition APIs. For legacy\nreasons, there is often a significant amount of code duplication between\ntp_new and tp_vectorcall [Issue 24]. The type slot function should be\ncalled indirectly, so that their signatures can change to include\ncontext information [Issue 13]. Several aspects of the type definition\nand creation process are not well defined, such as which stage of the\nprocess is responsible for initializing and clearing different fields of\nthe type object [Issue 49].\n\nError Handling\n\nError handling in the C API is based on the error indicator which is\nstored on the thread state (in global scope). The design intention was\nthat each API function returns a value indicating whether an error has\noccurred (by convention, -1 or NULL). When the program knows that an\nerror occurred, it can fetch the exception object which is stored in the\nerror indicator. We identified a number of problems which are related to\nerror handling, pointing at APIs which are too easy to use incorrectly.\n\nThere are functions that do not report all errors that occur while they\nexecute. For example, PyDict_GetItem clears any errors that occur when\nit calls the key's hash function, or while performing a lookup in the\ndictionary [Issue 51].\n\nPython code never executes with an in-flight exception (by definition),\nand typically code using native functions should also be interrupted by\nan error being raised. This is not checked in most C API functions, and\nthere are places in the interpreter where error handling code calls a C\nAPI function while an exception is set. For example, see the call to\nPyUnicode_FromString in the error handler of _PyErr_WriteUnraisableMsg\n[Issue 2].\n\nThere are functions that do not return a value, so a caller is forced to\nquery the error indicator in order to identify whether an error has\noccurred. 
An example is PyBuffer_Release [Issue 20]. There are other\nfunctions which do have a return value, but this return value does not\nunambiguously indicate whether an error has occurred. For example,\nPyLong_AsLong returns -1 in case of error, or when the value of the\nargument is indeed -1 [Issue 1]. In both cases, the API is error prone\nbecause it is possible that the error indicator was already set before\nthe function was called, and the error is incorrectly attributed. The\nfact that the error was not detected before the call is a bug in the\ncalling code, but the behaviour of the program in this case doesn't make\nit easy to identify and debug the problem.\n\nThere are functions that take a PyObject* argument, with special meaning\nwhen it is NULL. For example, if PyObject_SetAttr receives NULL as the\nvalue to set, this means that the attribute should be cleared. This is\nerror prone because it could be that NULL indicates an error in the\nconstruction of the value, and the program failed to check for this\nerror. The program will misinterpret the NULL to mean something\ndifferent than error [Issue 47].\n\nAPI Tiers and Stability Guarantees\n\nThe different API tiers provide different tradeoffs of stability vs API\nevolution, and sometimes performance.\n\nThe stable ABI was identified as an area that needs to be looked into.\nAt the moment it is incomplete and not widely adopted. At the same time,\nits existence is making it hard to make changes to some implementation\ndetails, because it exposes struct fields such as ob_refcnt, ob_type and\nob_size. There was some discussion about whether the stable ABI is worth\nkeeping. Arguments on both sides can be found in [Issue 4] and [Issue\n9].\n\nAlternatively, it was suggested that in order to be able to evolve the\nstable ABI, we need a mechanism to support multiple versions of it in\nthe same Python binary. It was pointed out that versioning individual\nfunctions within a single ABI version is not enough because it may be\nnecessary to evolve, together, a group of functions that interoperate\nwith each other [Issue 39].\n\nThe limited API was introduced in 3.2 as a blessed subset of the C API\nwhich is recommended for users who would like to restrict themselves to\nhigh quality APIs which are not likely to change often. The\nPy_LIMITED_API flag allows users to restrict their program to older\nversions of the limited API, but we now need the opposite option, to\nexclude older versions. This would make it possible to evolve the\nlimited API by replacing flawed elements in it [Issue 54]. More\ngenerally, in a redesign we should revisit the way that API tiers are\nspecified and consider designing a method that will unify the way we\ncurrently select between the different tiers [Issue 59].\n\nAPI elements whose names begin with an underscore are considered\nprivate, essentially an API tier with no stability guarantees. However,\nthis was only clarified recently, in PEP 689. It is not clear what the\nchange policy should be with respect to such API elements that predate\nPEP 689 [Issue 58].\n\nThere are API functions which have an unsafe (but fast) version as well\nas a safe version which performs error checking (for example,\nPyTuple_GET_ITEM vs PyTuple_GetItem). It may help to be able to group\nthem into their own tiers - the \"unsafe API\" tier and the \"safe API\"\ntier [Issue 61].\n\nUse of the C Language\n\nA number of issues were raised with respect to the way that CPython uses\nthe C language. 
First there is the issue of which C dialect we use, and\nhow we test our compatibility with it, as well as API header\ncompatibility with C++ dialects [Issue 42].\n\nUsage of const in the API is currently sparse, but it is not clear\nwhether this is something that we should consider changing [Issue 38].\n\nWe currently use the C types long and int, where fixed-width integers\nsuch as int32_t and int64_t may now be better choices [Issue 27].\n\nWe are using C language features which are hard for other languages to\ninteract with, such as macros, variadic arguments, enums, bitfields, and\nnon-function symbols [Issue 35].\n\nThere are API functions that take a PyObject* arg which must be of a\nmore specific type (such as PyTuple_Size, which fails if its arg is not\na PyTupleObject*). It is an open question whether this is a good pattern\nto have, or whether the API should expect the more specific type [Issue\n31].\n\nThere are functions in the API that take concrete types, such as\nPyDict_GetItemString which performs a dictionary lookup for a key\nspecified as a C string rather than PyObject*. At the same time, for\nPyDict_ContainsString it is not considered appropriate to add a concrete\ntype alternative. The principle around this should be documented in the\nguidelines [Issue 23].\n\nImplementation Flaws\n\nBelow is a list of localized implementation flaws. Most of these can\nprobably be fixed incrementally, if we choose to do so. They should, in\nany case, be avoided in any new API design.\n\nThere are functions that don't follow the convention of returning 0 for\nsuccess and -1 for failure. For example, PyArg_ParseTuple returns 0 for\nsuccess and non-zero for failure [Issue 25].\n\nThe macros Py_CLEAR and Py_SETREF access their arg more than once, so if\nthe arg is an expression with side effects, they are duplicated [Issue\n3].\n\nThe meaning of Py_SIZE depends on the type and is not always reliable\n[Issue 10].\n\nSome API function do not have the same behaviour as their Python\nequivalents. The behaviour of PyIter_Next is different from tp_iternext.\n[Issue 29]. The behaviour of PySet_Contains is different from\nset.__contains__ [Issue 6].\n\nThe fact that PyArg_ParseTupleAndKeywords takes a non-const char* array\nas argument makes it more difficult to use [Issue 28].\n\nPython.h does not expose the whole API. Some headers (like marshal.h)\nare not included from Python.h. [Issue 43].\n\nNaming\n\nPyLong and PyUnicode use names which no longer match the Python types\nthey represent (int/str). This could be fixed in a new API [Issue 14].\n\nThere are identifiers in the API which are lacking a Py/_Py prefix\n[Issue 46].\n\nMissing Functionality\n\nThis section consists of a list of feature requests, i.e., functionality\nthat was identified as missing in the current C API.\n\nDebug Mode\n\nA debug mode that can be activated without recompilation and which\nactivates various checks that can help detect various types of errors\n[Issue 36].\n\nIntrospection\n\nThere aren't currently reliable introspection capabilities for objects\ndefined in C in the same way as there are for Python objects [Issue 32].\n\nEfficient type checking for heap types [Issue 17].\n\nImproved Interaction with Other Languages\n\nInterfacing with other GC based languages, and integrating their GC with\nPython's GC [Issue 19].\n\nInject foreign stack frames to the traceback [Issue 18].\n\nConcrete strings that can be used in other languages [Issue 16].\n\nReferences\n\n1. Python/C API Reference Manual\n2. 
2023 Language Summit Blog Post: Three Talks on the C API\n3. capi-workgroup on GitHub\n4. Irit's Core Sprint 2023 slides about C API workgroup\n5. Petr's Core Sprint 2023 slides\n6. HPy team's Core Sprint 2023 slides for Things to Learn from HPy\n7. Victor's slides of Core Sprint 2023 Python C API talk\n8. The Python's stability promise — Cristián Maureira-Fredes, PySide\n maintainer\n9. Report on the issues PySide had 5 years ago when switching to the\n stable ABI\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.868050"},"created":{"kind":"timestamp","value":"2023-10-16T00:00:00","string":"2023-10-16T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0733/\",\n \"authors\": [\n \"Erlend Egeberg Aasland\"\n ],\n \"pep_number\": \"0733\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":31,"cells":{"id":{"kind":"string","value":"0725"},"text":{"kind":"string","value":"PEP: 725 Title: Specifying external dependencies in pyproject.toml\nAuthor: Pradyun Gedam , Ralf Gommers\n Discussions-To:\nhttps://discuss.python.org/t/31888 Status: Draft Type: Standards Track\nTopic: Packaging Content-Type: text/x-rst Created: 17-Aug-2023\nPost-History: 18-Aug-2023\n\nAbstract\n\nThis PEP specifies how to write a project's external, or non-PyPI, build\nand runtime dependencies in a pyproject.toml file for packaging-related\ntools to consume.\n\nThis PEP proposes to add an [external] table to pyproject.toml with\nthree keys: \"build-requires\", \"host-requires\" and \"dependencies\". These\nare for specifying three types of dependencies:\n\n1. build-requires, build tools to run on the build machine\n2. host-requires, build dependencies needed for host machine but also\n needed at build time.\n3. dependencies, needed at runtime on the host machine but not needed\n at build time.\n\nCross compilation is taken into account by distinguishing build and host\ndependencies. Optional build-time and runtime dependencies are supported\ntoo, in a manner analogies to how that is supported in the [project]\ntable.\n\nMotivation\n\nPython packages may have dependencies on build tools, libraries,\ncommand-line tools, or other software that is not present on PyPI.\nCurrently there is no way to express those dependencies in standardized\nmetadata [1],[2]. Key motivators for this PEP are to:\n\n- Enable tools to automatically map external dependencies to packages\n in other packaging repositories,\n- Make it possible to include needed dependencies in error messages\n emitting by Python package installers and build frontends,\n- Provide a canonical place for package authors to record this\n dependency information.\n\nPackaging ecosystems like Linux distros, Conda, Homebrew, Spack, and Nix\nneed full sets of dependencies for Python packages, and have tools like\npyp2spec (Fedora), Grayskull (Conda), and dh_python (Debian) which\nattempt to automatically generate dependency metadata for their own\npackage managers from the metadata in upstream Python packages. External\ndependencies are currently handled manually, because there is no\nmetadata for this in pyproject.toml or any other standard location.\nEnabling automating this conversion is a key benefit of this PEP, making\npackaging Python packages for distros easier and more reliable. 
In\naddition, the authors envision other types of tools making use of this\ninformation, e.g., dependency analysis tools like Repology, Dependabot\nand libraries.io. Software bill of materials (SBOM) generation tools may\nalso be able to use this information, e.g. for flagging that external\ndependencies listed in pyproject.toml but not contained in wheel\nmetadata are likely vendored within the wheel.\n\nPackages with external dependencies are typically hard to build from\nsource, and error messages from build failures tend to be hard to\ndecipher for end users. Missing external dependencies on the end user's\nsystem are the most likely cause of build failures. If installers can\nshow the required external dependencies as part of their error message,\nthis may save users a lot of time.\n\nAt the moment, information on external dependencies is only captured in\ninstallation documentation of individual packages. It is hard to\nmaintain for package authors and tends to go out of date. It's also hard\nfor users and distro packagers to find it. Having a canonical place to\nrecord this dependency information will improve this situation.\n\nThis PEP is not trying to specify how the external dependencies should\nbe used, nor a mechanism to implement a name mapping from names of\nindividual packages that are canonical for Python projects published on\nPyPI to those of other packaging ecosystems. Those topics should be\naddressed in separate PEPs.\n\nRationale\n\nTypes of external dependencies\n\nMultiple types of external dependencies can be distinguished:\n\n- Concrete packages that can be identified by name and have a\n canonical location in another language-specific package repository.\n E.g., Rust packages on crates.io, R packages on CRAN, JavaScript\n packages on the npm registry.\n- Concrete packages that can be identified by name but do not have a\n clear canonical location. This is typically the case for libraries\n and tools written in C, C++, Fortran, CUDA and other low-level\n languages. E.g., Boost, OpenSSL, Protobuf, Intel MKL, GCC.\n- \"Virtual\" packages, which are names for concepts, types of tools or\n interfaces. These typically have multiple implementations, which are\n concrete packages. E.g., a C++ compiler, BLAS, LAPACK, OpenMP, MPI.\n\nConcrete packages are straightforward to understand, and are a concept\npresent in virtually every package management system. Virtual packages\nare a concept also present in a number of packaging systems -- but not\nalways, and the details of their implementation varies.\n\nCross compilation\n\nCross compilation is not yet (as of August 2023) well-supported by\nstdlib modules and pyproject.toml metadata. It is however important when\ntranslating external dependencies to those of other packaging systems\n(with tools like pyp2spec). 
Introducing support for cross compilation\nimmediately in this PEP is much easier than extending [external] in the\nfuture, hence the authors choose to include this now.\n\nTerminology\n\nThis PEP uses the following terminology:\n\n- build machine: the machine on which the package build process is\n being executed\n- host machine: the machine on which the produced artifact will be\n installed and run\n- build dependency: dependency for building the package that needs to\n be present at build time and itself was built for the build\n machine's OS and architecture\n- host dependency: dependency for building the package that needs to\n be present at build time and itself was built for the host machine's\n OS and architecture\n\nNote that this terminology is not consistent across build and packaging\ntools, so care must be taken when comparing build/host dependencies in\npyproject.toml to dependencies from other package managers.\n\nNote that \"target machine\" or \"target dependency\" is not used in this\nPEP. That is typically only relevant for cross-compiling compilers or\nother such advanced scenarios[3],[4] - this is out of scope for this\nPEP.\n\nFinally, note that while \"dependency\" is the term most widely used for\npackages needed at build time, the existing key in pyproject.toml for\nPyPI build-time dependencies is build-requires. Hence this PEP uses the\nkeys build-requires and host-requires under [external] for consistency.\n\nBuild and host dependencies\n\nClear separation of metadata associated with the definition of build and\ntarget platforms, rather than assuming that build and target platform\nwill always be the same, is important[5].\n\nBuild dependencies are typically run during the build process - they may\nbe compilers, code generators, or other such tools. In case the use of a\nbuild dependency implies a runtime dependency, that runtime dependency\ndoes not have to be declared explicitly. For example, when compiling\nFortran code with gfortran into a Python extension module, the package\nlikely incurs a dependency on the libgfortran runtime library. The\nrationale for not explicitly listing such runtime dependencies is\ntwo-fold: (1) it may depend on compiler/linker flags or details of the\nbuild environment whether the dependency is present, and (2) these\nruntime dependencies can be detected and handled automatically by tools\nlike auditwheel.\n\nHost dependencies are typically not run during the build process, but\nonly used for linking against. This is not a rule though -- it may be\npossible or necessary to run a host dependency under an emulator, or\nthrough a custom tool like crossenv. When host dependencies imply a\nruntime dependency, that runtime dependency also does not have to be\ndeclared, just like for build dependencies.\n\nWhen host dependencies are declared and a tool is not cross-compilation\naware and has to do something with external dependencies, the tool MAY\nmerge the host-requires list into build-requires. This may for example\nhappen if an installer like pip starts reporting external dependencies\nas a likely cause of a build failure when a package fails to build from\nan sdist.\n\nSpecifying external dependencies\n\nConcrete package specification through PURL\n\nThe two types of concrete packages are supported by PURL (Package URL),\nwhich implements a scheme for identifying packages that is meant to be\nportable across packaging ecosystems. 
Its design is:\n\n scheme:type/namespace/name@version?qualifiers#subpath \n\nThe scheme component is a fixed string, pkg, and of the other components\nonly type and name are required. As an example, a package URL for the\nrequests package on PyPI would be:\n\n pkg:pypi/requests\n\nAdopting PURL to specify external dependencies in pyproject.toml solves\na number of problems at once - and there are already implementations of\nthe specification in Python and multiple languages. PURL is also already\nsupported by dependency-related tooling like SPDX (see External\nRepository Identifiers in the SPDX 2.3 spec), the Open Source\nVulnerability format, and the Sonatype OSS Index; not having to wait\nyears before support in such tooling arrives is valuable.\n\nFor concrete packages without a canonical package manager to refer to,\neither pkg:generic/pkg-name can be used, or a direct reference to the\nVCS system that the package is maintained in (e.g.,\npkg:github/user-or-org-name/pkg-name). Which of these is more\nappropriate is situation-dependent. This PEP recommends using\npkg:generic when the package name is unambiguous and well-known (e.g.,\npkg:generic/git or pkg:generic/openblas), and using the VCS as the PURL\ntype otherwise.\n\nVirtual package specification\n\nThere is no ready-made support for virtual packages in PURL or another\nstandard. There are a relatively limited number of such dependencies\nthough, and adopting a scheme similar to PURL but with the virtual:\nrather than pkg: scheme seems like it will be understandable and map\nwell to Linux distros with virtual packages and to the likes of Conda\nand Spack.\n\nThe two known virtual package types are compiler and interface.\n\nVersioning\n\nSupport in PURL for version expressions and ranges beyond a fixed\nversion is still pending, see the Open Issues section.\n\nDependency specifiers\n\nRegular Python dependency specifiers (as originally defined in PEP 508)\nmay be used behind PURLs. PURL qualifiers, which use ? followed by a\npackage type-specific dependency specifier component, must not be used.\nThe reason for this is pragmatic: dependency specifiers are already used\nfor other metadata in pyproject.toml, any tooling that is used with\npyproject.toml is likely to already have a robust implementation to\nparse it. And we do not expect to need the extra possibilities that PURL\nqualifiers provide (e.g. to specify a Conan or Conda channel, or a\nRubyGems platform).\n\nUsage of core metadata fields\n\nThe core metadata specification contains one relevant field, namely\nRequires-External. This has no well-defined semantics in core metadata\n2.1; this PEP chooses to reuse the field for external runtime\ndependencies. The core metadata specification does not contain fields\nfor any metadata in pyproject.toml's [build-system] table. Therefore the\nbuild-requires and host-requires content also does not need to be\nreflected in core metadata fields. The optional-dependencies content\nfrom [external] would need to either reuse Provides-Extra or require a\nnew Provides-External-Extra field. Neither seems desirable.\n\nDifferences between sdist and wheel metadata\n\nA wheel may vendor its external dependencies. This happens in particular\nwhen distributing wheels on PyPI or other Python package indexes - and\ntools like auditwheel, delvewheel and delocate automate this process. As\na result, a Requires-External entry in an sdist may disappear from a\nwheel built from that sdist. 
It is also possible that a\nRequires-External entry remains in a wheel, either unchanged or with\nnarrower constraints. auditwheel does not vendor certain allow-listed\ndependencies, such as OpenGL, by default. In addition, auditwheel and\ndelvewheel allow a user to manually exclude dependencies via a --exclude\nor --no-dll command-line flag. This is used to avoid vendoring large\nshared libraries, for example those from CUDA.\n\nRequires-External entries generated from external dependencies in\npyproject.toml in a wheel are therefore allowed to be narrower than\nthose for the corresponding sdist. They must not be wider, i.e.\nconstraints must not allow a version of a dependency for a wheel that\nisn't allowed for an sdist, nor contain new dependencies that are not\nlisted in the sdist's metadata at all.\n\nCanonical names of dependencies and -dev(el) split packages\n\nIt is fairly common for distros to split a package into two or more\npackages. In particular, runtime components are often separately\ninstallable from development components (headers, pkg-config and CMake\nfiles, etc.). The latter then typically has a name with -dev or -devel\nappended to the project/library name. This split is the responsibility\nof each distro to maintain, and should not be reflected in the\n[external] table. It is not possible to specify this in a reasonable way\nthat works across distros, hence only the canonical name should be used\nin [external].\n\nThe intended meaning of using a PURL or virtual dependency is \"the full\npackage with the name specified\". It will depend on the context in which\nthe metadata is used whether the split is relevant. For example, if\nlibffi is a host dependency and a tool wants to prepare an environment\nfor building a wheel, then if a distro has split off the headers for\nlibffi into a libffi-devel package then the tool has to install both\nlibffi and libffi-devel.\n\nPython development headers\n\nPython headers and other build support files may also be split. This is\nthe same situation as in the section above (because Python is simply a\nregular package in distros). However, a python-dev|devel dependency is\nspecial because in pyproject.toml Python itself is an implicit rather\nthan an explicit dependency. Hence a choice needs to be made here - add\npython-dev implicitly, or make each package author add it explicitly\nunder [external]. For consistency between Python dependencies and\nexternal dependencies, we choose to add it implicitly. Python\ndevelopment headers must be assumed to be necessary when an [external]\ntable contains one or more compiler packages.\n\nSpecification\n\nIf metadata is improperly specified then tools MUST raise an error to\nnotify the user about their mistake.\n\nDetails\n\nNote that pyproject.toml content is in the same format as in PEP 621.\n\nTable name\n\nTools MUST specify fields defined by this PEP in a table named\n[external]. No tools may add fields to this table which are not defined\nby this PEP or subsequent PEPs. The lack of an [external] table means\nthe package either does not have any external dependencies, or the ones\nit does have are assumed to be present on the system already.\n\nbuild-requires/optional-build-requires\n\n- Format: Array of PURL strings (build-requires) and a table with\n values of arrays of PURL strings (optional-build-requires)\n- Core metadata: N/A\n\nThe (optional) external build requirements needed to build the project.\n\nFor build-requires, it is a key whose value is an array of strings. 
Each\nstring represents a build requirement of the project and MUST be\nformatted as either a valid PURL string or a virtual: string.\n\nFor optional-build-requires, it is a table where each key specifies an\nextra set of build requirements and whose value is an array of strings.\nThe strings of the arrays MUST be valid PURL strings.\n\nhost-requires/optional-host-requires\n\n- Format: Array of PURL strings (host-requires) and a table with\n values of arrays of PURL strings (optional-host-requires)\n- Core metadata: N/A\n\nThe (optional) external host requirements needed to build the project.\n\nFor host-requires, it is a key whose value is an array of strings. Each\nstring represents a host requirement of the project and MUST be\nformatted as either a valid PURL string or a virtual: string.\n\nFor optional-host-requires, it is a table where each key specifies an\nextra set of host requirements and whose value is an array of strings.\nThe strings of the arrays MUST be valid PURL strings.\n\ndependencies/optional-dependencies\n\n- Format: Array of PURL strings (dependencies) and a table with values\n of arrays of PURL strings (optional-dependencies)\n- Core metadata: Requires-External, N/A\n\nThe (optional) runtime dependencies of the project.\n\nFor dependencies, it is a key whose value is an array of strings. Each\nstring represents a dependency of the project and MUST be formatted as\neither a valid PURL string or a virtual: string. Each string maps\ndirectly to a Requires-External entry in the core metadata.\n\nFor optional-dependencies, it is a table where each key specifies an\nextra and whose value is an array of strings. The strings of the arrays\nMUST be valid PURL strings. Optional dependencies do not map to a core\nmetadata field.\n\nExamples\n\nThese examples show what the [external] content for a number of packages\nis expected to be.\n\ncryptography 39.0:\n\n [external]\n build-requires = [\n \"virtual:compiler/c\",\n \"virtual:compiler/rust\",\n \"pkg:generic/pkg-config\",\n ]\n host-requires = [\n \"pkg:generic/openssl\",\n \"pkg:generic/libffi\",\n ]\n\nSciPy 1.10:\n\n [external]\n build-requires = [\n \"virtual:compiler/c\",\n \"virtual:compiler/cpp\",\n \"virtual:compiler/fortran\",\n \"pkg:generic/ninja\",\n \"pkg:generic/pkg-config\",\n ]\n host-requires = [\n \"virtual:interface/blas\",\n \"virtual:interface/lapack\", # >=3.7.1 (can't express version ranges with PURL yet)\n ]\n\nPillow 10.1.0:\n\n [external]\n build-requires = [\n \"virtual:compiler/c\",\n ]\n host-requires = [\n \"pkg:generic/libjpeg\",\n \"pkg:generic/zlib\",\n ]\n\n [external.optional-host-requires]\n extra = [\n \"pkg:generic/lcms2\",\n \"pkg:generic/freetype\",\n \"pkg:generic/libimagequant\",\n \"pkg:generic/libraqm\",\n \"pkg:generic/libtiff\",\n \"pkg:generic/libxcb\",\n \"pkg:generic/libwebp\",\n \"pkg:generic/openjpeg\", # add >=2.0 once we have version specifiers\n \"pkg:generic/tk\",\n ]\n\nNAVis 1.4.0:\n\n [project.optional-dependencies]\n r = [\"rpy2\"]\n\n [external]\n build-requires = [\n \"pkg:generic/XCB; platform_system=='Linux'\",\n ]\n\n [external.optional-dependencies]\n nat = [\n \"pkg:cran/nat\",\n \"pkg:cran/nat.nblast\",\n ]\n\nSpyder 6.0:\n\n [external]\n dependencies = [\n \"pkg:cargo/ripgrep\",\n \"pkg:cargo/tree-sitter-cli\",\n \"pkg:golang/github.com/junegunn/fzf\",\n ]\n\njupyterlab-git 0.41.0:\n\n [external]\n dependencies = [\n \"pkg:generic/git\",\n ]\n\n [external.optional-build-requires]\n dev = [\n \"pkg:generic/nodejs\",\n ]\n\nPyEnchant 3.2.2:\n\n [external]\n 
dependencies = [\n # libenchant is needed on all platforms but only vendored into wheels on\n # Windows, so on Windows the build backend should remove this external\n # dependency from wheel metadata.\n \"pkg:github/AbiWord/enchant\",\n ]\n\nBackwards Compatibility\n\nThere is no impact on backwards compatibility, as this PEP only adds\nnew, optional metadata. In the absence of such metadata, nothing changes\nfor package authors or packaging tooling.\n\nSecurity Implications\n\nThere are no direct security concerns as this PEP covers how to\nstatically define metadata for external dependencies. Any security\nissues would stem from how tools consume the metadata and choose to act\nupon it.\n\nHow to Teach This\n\nExternal dependencies and if and how those external dependencies are\nvendored are topics that are typically not understood in detail by\nPython package authors. We intend to start from how an external\ndependency is defined, the different ways it can be depended on---from\nruntime-only with ctypes or a subprocess call to it being a build\ndependency that's linked against---before going into how to declare\nexternal dependencies in metadata. The documentation should make\nexplicit what is relevant for package authors, and what for distro\npackagers.\n\nMaterial on this topic will be added to the most relevant packaging\ntutorials, primarily the Python Packaging User Guide. In addition, we\nexpect that any build backend that adds support for external\ndependencies metadata will include information about that in its\ndocumentation, as will tools like auditwheel.\n\nReference Implementation\n\nThis PEP contains a metadata specification, rather that a code feature -\nhence there will not be code implementing the metadata spec as a whole.\nHowever, there are parts that do have a reference implementation:\n\n1. The [external] table has to be valid TOML and therefore can be\n loaded with tomllib.\n2. The PURL specification, as a key part of this spec, has a Python\n package with a reference implementation for constructing and parsing\n PURLs: packageurl-python.\n\nThere are multiple possible consumers and use cases of this metadata,\nonce that metadata gets added to Python packages. Tested metadata for\nall of the top 150 most-downloaded packages from PyPI with published\nplatform-specific wheels can be found in rgommers/external-deps-build.\nThis metadata has been validated by using it to build wheels from sdists\npatched with that metadata in clean Docker containers.\n\nRejected Ideas\n\nSpecific syntax for external dependencies which are also packaged on PyPI\n\nThere are non-Python packages which are packaged on PyPI, such as Ninja,\npatchelf and CMake. What is typically desired is to use the system\nversion of those, and if it's not present on the system then install the\nPyPI package for it. The authors believe that specific support for this\nscenario is not necessary (or too complex to justify such support); a\ndependency provider for external dependencies can treat PyPI as one\npossible source for obtaining the package.\n\nUsing library and header names as external dependencies\n\nA previous draft PEP (\"External dependencies\" (2015)) proposed using\nspecific library and header names as external dependencies. This is too\ngranular; using package names is a well-established pattern across\npackaging ecosystems and should be preferred.\n\nOpen Issues\n\nVersion specifiers for PURLs\n\nSupport in PURL for version expressions and ranges is still pending. 
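(Aside, tying back to the Reference Implementation section above: the following is a consumer-side sketch of how the two pieces named there could be combined today -- tomllib to load the [external] table and simple string handling to separate PURL entries from virtual: entries before handing the former to packageurl-python. The function name and the marker-splitting logic are illustrative assumptions, not part of this PEP.)

    # Hypothetical consumer of the [external] table; illustrative only.
    import tomllib

    def read_external_dependencies(pyproject_path):
        """Return (purls, virtuals) listed in the [external] table, if any."""
        with open(pyproject_path, "rb") as f:   # tomllib requires binary mode
            external = tomllib.load(f).get("external", {})
        purls, virtuals = [], []
        for key in ("build-requires", "host-requires", "dependencies"):
            for entry in external.get(key, []):
                # An optional PEP 508 environment marker may follow a ';'.
                requirement = entry.partition(";")[0].strip()
                if requirement.startswith("virtual:"):
                    virtuals.append(requirement)
                else:
                    purls.append(requirement)   # e.g. parsed with packageurl-python
        return purls, virtuals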
The\npull request at vers implementation for PURL seems close to being\nmerged, at which point this PEP could adopt it.\n\nVersioning of virtual dependencies\n\nOnce PURL supports version expressions, virtual dependencies can be\nversioned with the same syntax. It must be better specified however what\nthe version scheme is, because this is not as clear for virtual\ndependencies as it is for PURLs (e.g., there can be multiple\nimplementations, and abstract interfaces may not be unambiguously\nversioned). E.g.:\n\n- OpenMP: has regular MAJOR.MINOR versions of its standard, so would\n look like >=4.5.\n- BLAS/LAPACK: should use the versioning used by Reference LAPACK,\n which defines what the standard APIs are. Uses MAJOR.MINOR.MICRO, so\n would look like >=3.10.0.\n- Compilers: these implement language standards. For C, C++ and\n Fortran these are versioned by year. In order for versions to sort\n correctly, we choose to use the full year (four digits). So \"at\n least C99\" would be >=1999, and selecting C++14 or Fortran 77 would\n be ==2014 or ==1977 respectively. Other languages may use different\n versioning schemes. These should be described somewhere before they\n are used in pyproject.toml.\n\nA logistical challenge is where to describe the versioning - given that\nthis will evolve over time, this PEP itself is not the right location\nfor it. Instead, this PEP should point at that (to be created) location.\n\nWho defines canonical names and canonical package structure?\n\nSimilarly to the logistics around versioning is the question about what\nnames are allowed and where they are described. And then who is in\ncontrol of that description and responsible for maintaining it. Our\ntentative answer is: there should be a central list for virtual\ndependencies and pkg:generic PURLs, maintained as a PyPA project. See\nhttps://discuss.python.org/t/pep-725-specifying-external-dependencies-in-pyproject-toml/31888/62.\nTODO: once that list/project is prototyped, include it in the PEP and\nclose this open issue.\n\nSyntax for virtual dependencies\n\nThe current syntax this PEP uses for virtual dependencies is\nvirtual:type/name, which is analogous to but not part of the PURL spec.\nThis open issue discusses supporting virtual dependencies within PURL:\npurl-spec#222.\n\nShould a host-requires key be added under [build-system]?\n\nAdding host-requires for host dependencies that are on PyPI in order to\nbetter support name mapping to other packaging systems with support for\ncross-compiling may make sense. 
This issue tracks this topic and has\narguments in favor and against adding host-requires under [build-system]\nas part of this PEP.\n\nReferences\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\n[1] The \"define native requirements metadata\" part of the \"Wanting a\nsingular packaging vision\" thread (2022, Discourse):\nhttps://discuss.python.org/t/wanting-a-singular-packaging-tool-vision/21141/92\n\n[2] pypackaging-native: \"Native dependencies\"\nhttps://pypackaging-native.github.io/key-issues/native-dependencies/\n\n[3] GCC documentation - Configure Terms and History,\nhttps://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html\n\n[4] Meson documentation - Cross compilation\nhttps://mesonbuild.com/Cross-compilation.html\n\n[5] pypackaging-native: \"Cross compilation\"\nhttps://pypackaging-native.github.io/key-issues/cross_compilation/"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.982646"},"created":{"kind":"timestamp","value":"2023-08-17T00:00:00","string":"2023-08-17T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0725/\",\n \"authors\": [\n \"Pradyun Gedam\"\n ],\n \"pep_number\": \"0725\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":32,"cells":{"id":{"kind":"string","value":"3129"},"text":{"kind":"string","value":"PEP: 3129 Title: Class Decorators Version: $Revision$ Last-Modified:\n$Date$ Author: Collin Winter Status: Final\nType: Standards Track Content-Type: text/x-rst Created: 01-May-2007\nPython-Version: 3.0 Post-History: 07-May-2007\n\nAbstract\n\nThis PEP proposes class decorators, an extension to the function and\nmethod decorators introduced in PEP 318.\n\nRationale\n\nWhen function decorators were originally debated for inclusion in Python\n2.4, class decorators were seen as\nobscure and unnecessary <318#motivation> thanks to metaclasses. After\nseveral years' experience with the Python 2.4.x series of releases and\nan increasing familiarity with function decorators and their uses, the\nBDFL and the community re-evaluated class decorators and recommended\ntheir inclusion in Python 3.0[1].\n\nThe motivating use-case was to make certain constructs more easily\nexpressed and less reliant on implementation details of the CPython\ninterpreter. While it is possible to express class decorator-like\nfunctionality using metaclasses, the results are generally unpleasant\nand the implementation highly fragile[2]. In addition, metaclasses are\ninherited, whereas class decorators are not, making metaclasses\nunsuitable for some, single class-specific uses of class decorators. The\nfact that large-scale Python projects like Zope were going through these\nwild contortions to achieve something like class decorators won over the\nBDFL.\n\nSemantics\n\nThe semantics and design goals of class decorators are the same as for\nfunction decorators (318#current-syntax, 318#design-goals); the only\ndifference is that you're decorating a class instead of a function. 
The\nfollowing two snippets are semantically identical:\n\n class A:\n pass\n A = foo(bar(A))\n\n\n @foo\n @bar\n class A:\n pass\n\nFor a detailed examination of decorators, please refer to PEP 318.\n\nImplementation\n\nAdapting Python's grammar to support class decorators requires modifying\ntwo rules and adding a new rule:\n\n funcdef: [decorators] 'def' NAME parameters ['->' test] ':' suite\n\n compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt |\n with_stmt | funcdef | classdef\n\nneed to be changed to :\n\n decorated: decorators (classdef | funcdef)\n\n funcdef: 'def' NAME parameters ['->' test] ':' suite\n\n compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt |\n with_stmt | funcdef | classdef | decorated\n\nAdding decorated is necessary to avoid an ambiguity in the grammar.\n\nThe Python AST and bytecode must be modified accordingly.\n\nA reference implementation[3] has been provided by Jack Diederich.\n\nAcceptance\n\nThere was virtually no discussion following the posting of this PEP,\nmeaning that everyone agreed it should be accepted.\n\nThe patch was committed to Subversion as revision 55430.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:\n\n[1] https://mail.python.org/pipermail/python-dev/2006-March/062942.html\n\n[2] https://mail.python.org/pipermail/python-dev/2006-March/062888.html\n\n[3] https://bugs.python.org/issue1671208"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.991906"},"created":{"kind":"timestamp","value":"2007-05-01T00:00:00","string":"2007-05-01T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-3129/\",\n \"authors\": [\n \"Collin Winter\"\n ],\n \"pep_number\": \"3129\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":33,"cells":{"id":{"kind":"string","value":"0562"},"text":{"kind":"string","value":"PEP: 562 Title: Module __getattr__ and __dir__ Author: Ivan Levkivskyi\n Status: Final Type: Standards Track Content-Type:\ntext/x-rst Created: 09-Sep-2017 Python-Version: 3.7 Post-History:\n09-Sep-2017 Resolution:\nhttps://mail.python.org/pipermail/python-dev/2017-December/151033.html\n\nAbstract\n\nIt is proposed to support __getattr__ and __dir__ function defined on\nmodules to provide basic customization of module attribute access.\n\nRationale\n\nIt is sometimes convenient to customize or otherwise have control over\naccess to module attributes. A typical example is managing deprecation\nwarnings. Typical workarounds are assigning __class__ of a module object\nto a custom subclass of types.ModuleType or replacing the sys.modules\nitem with a custom wrapper instance. It would be convenient to simplify\nthis procedure by recognizing __getattr__ defined directly in a module\nthat would act like a normal __getattr__ method, except that it will be\ndefined on module instances. 
For example:\n\n # lib.py\n\n from warnings import warn\n\n deprecated_names = [\"old_function\", ...]\n\n def _deprecated_old_function(arg, other):\n ...\n\n def __getattr__(name):\n if name in deprecated_names:\n warn(f\"{name} is deprecated\", DeprecationWarning)\n return globals()[f\"_deprecated_{name}\"]\n raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n\n # main.py\n\n from lib import old_function # Works, but emits the warning\n\nAnother widespread use case for __getattr__ would be lazy submodule\nimports. Consider a simple example:\n\n # lib/__init__.py\n\n import importlib\n\n __all__ = ['submod', ...]\n\n def __getattr__(name):\n if name in __all__:\n return importlib.import_module(\".\" + name, __name__)\n raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n\n # lib/submod.py\n\n print(\"Submodule loaded\")\n class HeavyClass:\n ...\n\n # main.py\n\n import lib\n lib.submod.HeavyClass # prints \"Submodule loaded\"\n\nThere is a related proposal PEP 549 that proposes to support instance\nproperties for a similar functionality. The difference is this PEP\nproposes a faster and simpler mechanism, but provides more basic\ncustomization. An additional motivation for this proposal is that PEP\n484 already defines the use of module __getattr__ for this purpose in\nPython stub files, see 484#stub-files.\n\nIn addition, to allow modifying result of a dir() call on a module to\nshow deprecated and other dynamically generated attributes, it is\nproposed to support module level __dir__ function. For example:\n\n # lib.py\n\n deprecated_names = [\"old_function\", ...]\n __all__ = [\"new_function_one\", \"new_function_two\", ...]\n\n def new_function_one(arg, other):\n ...\n def new_function_two(arg, other):\n ...\n\n def __dir__():\n return sorted(__all__ + deprecated_names)\n\n # main.py\n\n import lib\n\n dir(lib) # prints [\"new_function_one\", \"new_function_two\", \"old_function\", ...]\n\nSpecification\n\nThe __getattr__ function at the module level should accept one argument\nwhich is the name of an attribute and return the computed value or raise\nan AttributeError:\n\n def __getattr__(name: str) -> Any: ...\n\nIf an attribute is not found on a module object through the normal\nlookup (i.e. object.__getattribute__), then __getattr__ is searched in\nthe module __dict__ before raising an AttributeError. If found, it is\ncalled with the attribute name and the result is returned. Looking up a\nname as a module global will bypass module __getattr__. This is\nintentional, otherwise calling __getattr__ for builtins will\nsignificantly harm performance.\n\nThe __dir__ function should accept no arguments, and return a list of\nstrings that represents the names accessible on module:\n\n def __dir__() -> List[str]: ...\n\nIf present, this function overrides the standard dir() search on a\nmodule.\n\nThe reference implementation for this PEP can be found in[1].\n\nBackwards compatibility and impact on performance\n\nThis PEP may break code that uses module level (global) names\n__getattr__ and __dir__. (But the language reference explicitly reserves\nall undocumented dunder names, and allows \"breakage without warning\";\nsee[2].) The performance implications of this PEP are minimal, since\n__getattr__ is called only for missing attributes.\n\nSome tools that perform module attributes discovery might not expect\n__getattr__. 
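For instance, a tool that inspects only the module __dict__, or the default dir() output, will not see attributes that are served exclusively through __getattr__. A small illustration, reusing the lib.py deprecation example from the Rationale above (where lib.py defines __getattr__ but no __dir__):

    # main.py

    import lib

    lib.old_function                  # works, but emits the DeprecationWarning
    hasattr(lib, "old_function")      # True -- resolved through __getattr__
    "old_function" in vars(lib)       # False -- not in the module __dict__
    "old_function" in dir(lib)        # False -- unless lib also defines __dir__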
This problem is not new however, since it is already\npossible to replace a module with a module subclass with overridden\n__getattr__ and __dir__, but with this PEP such problems can occur more\noften.\n\nDiscussion\n\nNote that the use of module __getattr__ requires care to keep the\nreferred objects pickleable. For example, the __name__ attribute of a\nfunction should correspond to the name with which it is accessible via\n__getattr__:\n\n def keep_pickleable(func):\n func.__name__ = func.__name__.replace('_deprecated_', '')\n func.__qualname__ = func.__qualname__.replace('_deprecated_', '')\n return func\n\n @keep_pickleable\n def _deprecated_old_function(arg, other):\n ...\n\nOne should be also careful to avoid recursion as one would do with a\nclass level __getattr__.\n\nTo use a module global with triggering __getattr__ (for example if one\nwants to use a lazy loaded submodule) one can access it as:\n\n sys.modules[__name__].some_global\n\nor as:\n\n from . import some_global\n\nNote that the latter sets the module attribute, thus __getattr__ will be\ncalled only once.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:\n\n[1] The reference implementation\n(https://github.com/ilevkivskyi/cpython/pull/3/files)\n\n[2] Reserved classes of identifiers\n(https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers)"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:15.999816"},"created":{"kind":"timestamp","value":"2017-09-09T00:00:00","string":"2017-09-09T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0562/\",\n \"authors\": [\n \"Ivan Levkivskyi\"\n ],\n \"pep_number\": \"0562\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":34,"cells":{"id":{"kind":"string","value":"0707"},"text":{"kind":"string","value":"PEP: 707 Title: A simplified signature for __exit__ and __aexit__\nAuthor: Irit Katriel Discussions-To:\nhttps://discuss.python.org/t/24402 Status: Rejected Type: Standards\nTrack Content-Type: text/x-rst Created: 18-Feb-2023 Python-Version: 3.12\nPost-History: 02-Mar-2023, Resolution:\nhttps://discuss.python.org/t/pep-707-a-simplified-signature-for-exit-and-aexit/24402/46\n\nRejection Notice\n\nPer the SC:\n\n We discussed the PEP and have decided to reject it. Our thinking was\n the magic and risk of potential breakage didn’t warrant the benefits.\n We are totally supportive, though, of exploring a potential context\n manager v2 API or __leave__.\n\nAbstract\n\nThis PEP proposes to make the interpreter accept context managers whose\n~py3.11:object.__exit__ / ~py3.11:object.__aexit__ method takes only a\nsingle exception instance, while continuing to also support the current\n(typ, exc, tb) signature for backwards compatibility.\n\nThis proposal is part of an ongoing effort to remove the redundancy of\nthe 3-item exception representation from the language, a relic of\nearlier Python versions which now confuses language users while adding\ncomplexity and overhead to the interpreter.\n\nThe proposed implementation uses introspection, which is tailored to the\nrequirements of this use case. The solution ensures the safety of the\nnew feature by supporting it only in non-ambiguous cases. 
In particular,\nany signature that could accept three arguments is assumed to expect\nthem.\n\nBecause reliable introspection of callables is not currently possible in\nPython, the solution proposed here is limited in that only the common\ntypes of single-arg callables will be identified as such, while some of\nthe more esoteric ones will continue to be called with three arguments.\nThis imperfect solution was chosen among several imperfect alternatives\nin the spirit of practicality. It is my hope that the discussion about\nthis PEP will explore the other options and lead us to the best way\nforward, which may well be to remain with our imperfect status quo.\n\nMotivation\n\nIn the past, an exception was represented in many parts of Python by a\ntuple of three elements: the type of the exception, its value, and its\ntraceback. While there were good reasons for this design at the time,\nthey no longer hold because the type and traceback can now be reliably\ndeduced from the exception instance. Over the last few years we saw\nseveral efforts to simplify the representation of exceptions.\n\nSince 3.10 in CPython PR #70577, the py3.11:traceback module's functions\naccept either a 3-tuple as described above, or just an exception\ninstance as a single argument.\n\nInternally, the interpreter no longer represents exceptions as a\ntriplet. This was removed for the handled exception in 3.11 and for the\nraised exception in 3.12. As a consequence, several APIs that expose the\ntriplet can now be replaced by simpler alternatives:\n\n Legacy API Alternative\n ------------------------------------------------------ -------------------------- ---------------------------\n Get handled exception (Python) py3.12:sys.exc_info py3.12:sys.exception\n Get handled exception (C) PyErr_GetExcInfo PyErr_GetHandledException\n Set handled exception (C) PyErr_SetExcInfo PyErr_SetHandledException\n Get raised exception (C) PyErr_Fetch PyErr_GetRaisedException\n Set raised exception (C) PyErr_Restore PyErr_SetRaisedException\n Construct an exception instance from the 3-tuple (C) PyErr_NormalizeException N/A\n\nThe current proposal is a step in this process, and considers the way\nforward for one more case in which the 3-tuple representation has leaked\nto the language. The motivation for all this work is twofold.\n\nSimplify the implementation of the language\n\nThe simplification gained by reducing the interpreter's internal\nrepresentation of the handled exception to a single object was\nsignificant. Previously, the interpreter needed to push onto/pop from\nthe stack three items whenever it did anything with exceptions. This\nincreased stack depth (adding pressure on caches and registers) and\ncomplicated some of the bytecodes. Reducing this to one item removed\nabout 100 lines of code from ceval.c (the interpreter's eval loop\nimplementation), and it was later followed by the removal of the\nPOP_EXCEPT_AND_RERAISE opcode which has become simple enough to be\nreplaced by generic stack manipulation instructions. Micro-benchmarks\nshowed a speedup of about 10% for catching and raising an exception, as\nwell as for creating generators. To summarize, removing this redundancy\nin Python's internals simplified the interpreter and made it faster.\n\nThe performance of invoking __exit__/__aexit__ when leaving a context\nmanager can be also improved by replacing a multi-arg function call with\na single-arg one. 
Micro-benchmarks showed that entering and exiting a\ncontext manager with single-arg __exit__ is about 13% faster.\n\nSimplify the language itself\n\nOne of the reasons for the popularity of Python is its simplicity. The\npy3.11:sys.exc_info triplet is cryptic for new learners, and the\nredundancy in it is confusing for those who do understand it.\n\nIt will take multiple releases to get to a point where we can think of\ndeprecating sys.exc_info(). However, we can relatively quickly reach a\nstage where new learners do not need to know about it, or about the\n3-tuple representation, at least until they are maintaining legacy code.\n\nRationale\n\nThe only reason to object today to the removal of the last remaining\nappearances of the 3-tuple from the language is the concerns about\ndisruption that such changes can bring. The goal of this PEP is to\npropose a safe, gradual and minimally disruptive way to make this change\nin the case of __exit__, and with this to initiate a discussion of our\noptions for evolving its method signature.\n\nIn the case of the py3.11:traceback module's API, evolving the functions\nto have a hybrid signature is relatively straightforward and safe. The\nfunctions take one positional and two optional arguments, and interpret\nthem according to their types. This is safe when sentinels are used for\ndefault values. The signatures of callbacks, which are defined by the\nuser's program, are harder to evolve.\n\nThe safest option is to make the user explicitly indicate which\nsignature the callback is expecting, by marking it with an additional\nattribute or giving it a different name. For example, we could make the\ninterpreter look for a __leave__ method on the context manager, and call\nit with a single arg if it exists (otherwise, it looks for __exit__ and\ncontinues as it does now). The introspection-based alternative proposed\nhere intends to make it more convenient for users to write new code,\nbecause they can just use the single-arg version and remain unaware of\nthe legacy API. However, if the limitations of introspection are found\nto be too severe, we should consider an explicit option. Having both\n__exit__ and __leave__ around for 5-10 years with similar functionality\nis not ideal, but it is an option.\n\nLet us now examine the limitations of the current proposal. It\nidentifies 2-arg python functions and METH_O C functions as having a\nsingle-arg signature, and assumes that anything else is expecting 3\nargs. Obviously it is possible to create false negatives for this\nheuristic (single-arg callables that it will not identify). Context\nmanagers written in this way won't work, they will continue to fail as\nthey do now when their __exit__ function will be called with three\narguments.\n\nI believe that it will not be a problem in practice. First, all working\ncode will continue to work, so this is a limitation on new code rather\nthan a problem impacting existing code. Second, exotic callable types\nare rarely used for __exit__ and if one is needed, it can always be\nwrapped by a plain vanilla method that delegates to the callable. For\nexample, we can write this:\n\n class C:\n __enter__ = lambda self: self\n __exit__ = ExoticCallable()\n\nas follows:\n\n class CM:\n __enter__ = lambda self: self\n _exit = ExoticCallable()\n __exit__ = lambda self, exc: CM._exit(exc)\n\nWhile discussing the real-world impact of the problem in this PEP, it is\nworth noting that most __exit__ functions don't do anything with their\narguments. 
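For instance, a cleanup-only context manager of the following shape never looks at the exception information it is handed (an illustrative sketch; the class and attribute names are made up for this example):

    import tempfile

    class TempWorkdir:
        def __enter__(self):
            self._tmp = tempfile.TemporaryDirectory()
            return self._tmp.name

        def __exit__(self, *exc_info):
            # The exception details are ignored; the only job here is cleanup.
            # A star signature keeps the legacy form (the interpreter will pass
            # the (typ, exc, tb) triple), while also accepting a single
            # exception argument if other code calls this method directly.
            self._tmp.cleanup()
            # Returning None (a false value) lets any exception propagate.

    with TempWorkdir() as path:
        ...  # work inside the temporary directory

For context managers of this shape, the calling convention used by the interpreter is irrelevant.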
Typically, a context manager is implemented to ensure that\nsome cleanup actions take place upon exit. It is rarely appropriate for\nthe __exit__ function to handle exceptions raised within the context,\nand they are typically allowed to propagate out of __exit__ to the\ncalling function. This means that most __exit__ functions do not access\ntheir arguments at all, and we should take this into account when trying\nto assess the impact of different solutions on Python's userbase.\n\nSpecification\n\nA context manager's __exit__/__aexit__ method can have a single-arg\nsignature, in which case it is invoked by the interpreter with the\nargument equal to an exception instance or None:\n\n >>> class C:\n ... def __enter__(self):\n ... return self\n ... def __exit__(self, exc):\n ... print(f'__exit__ called with: {exc!r}')\n ...\n >>> with C():\n ... pass\n ...\n __exit__ called with: None\n >>> with C():\n ... 1/0\n ...\n __exit__ called with: ZeroDivisionError('division by zero')\n Traceback (most recent call last):\n File \"\", line 2, in \n ZeroDivisionError: division by zero\n\nIf __exit__/__aexit__ has any other signature, it is invoked with the\n3-tuple (typ, exc, tb) as happens now:\n\n >>> class C:\n ... def __enter__(self):\n ... return self\n ... def __exit__(self, *exc):\n ... print(f'__exit__ called with: {exc!r}')\n ...\n >>> with C():\n ... pass\n ...\n __exit__ called with: (None, None, None)\n >>> with C():\n ... 1/0\n ...\n __exit__ called with: (, ZeroDivisionError('division by zero'), )\n Traceback (most recent call last):\n File \"\", line 2, in \n ZeroDivisionError: division by zero\n\nThese __exit__ methods will also be called with a 3-tuple:\n\n def __exit__(self, typ, *exc):\n pass\n\n def __exit__(self, typ, exc, tb):\n pass\n\nA reference implementation is provided in CPython PR #101995.\n\nWhen the interpreter reaches the end of the scope of a context manager,\nand it is about to call the relevant __exit__ or __aexit__ function, it\ninstrospects this function to determine whether it is the single-arg or\nthe legacy 3-arg version. In the draft PR, this introspection is\nperformed by the is_legacy___exit__ function:\n\n static int is_legacy___exit__(PyObject *exit_func) {\n if (PyMethod_Check(exit_func)) {\n PyObject *func = PyMethod_GET_FUNCTION(exit_func);\n if (PyFunction_Check(func)) {\n PyCodeObject *code = (PyCodeObject*)PyFunction_GetCode(func);\n if (code->co_argcount == 2 && !(code->co_flags & CO_VARARGS)) {\n /* Python method that expects self + one more arg */\n return false;\n }\n }\n }\n else if (PyCFunction_Check(exit_func)) {\n if (PyCFunction_GET_FLAGS(exit_func) == METH_O) {\n /* C function declared as single-arg */\n return false;\n }\n }\n return true;\n }\n\nIt is important to note that this is not a generic introspection\nfunction, but rather one which is specifically designed for our use\ncase. We know that exit_func is an attribute of the context manager\nclass (taken from the type of the object that provided __enter__), and\nit is typically a function. Furthermore, for this to be useful we need\nto identify enough single-arg forms, but not necessarily all of them.\nWhat is critical for backwards compatibility is that we will never\nmisidentify a legacy exit_func as a single-arg one. 
So, for example,\n__exit__(self, *args) and __exit__(self, exc_type, *args) both have the\nlegacy form, even though they could be invoked with one arg.\n\nIn summary, an exit_func will be invoke with a single arg if:\n\n- It is a PyMethod with argcount 2 (to count self) and no vararg, or\n- it is a PyCFunction with the METH_O flag.\n\nNote that any performance cost of the introspection can be mitigated via\nspecialization <659>, so it won't be a problem if we need to make it\nmore sophisticated than this for some reason.\n\nBackwards Compatibility\n\nAll context managers that previously worked will continue to work in the\nsame way because the interpreter will call them with three args whenever\nthey can accept three args. There may be context managers that\npreviously did not work because their exit_func expected one argument,\nso the call to __exit__ would have caused a TypeError exception to be\nraised, and now the call would succeed. This could theoretically change\nthe behaviour of existing code, but it is unlikely to be a problem in\npractice.\n\nThe backwards compatibility concerns will show up in some cases when\nlibraries try to migrate their context managers from the multi-arg to\nthe single-arg signature. If __exit__ or __aexit__ is called by any code\nother than the interpreter's eval loop, the introspection does not\nautomatically happen. For example, this will occur where a context\nmanager is subclassed and its __exit__ method is called directly from\nthe derived __exit__. Such context managers will need to migrate to the\nsingle-arg version with their users, and may choose to offer a parallel\nAPI rather than breaking the existing one. Alternatively, a superclass\ncan stay with the signature __exit__(self, *args), and support both one\nand three args. Since most context managers do not use the value of the\narguments to __exit__, and simply allow the exception to propagate\nonward, this is likely to be the common approach.\n\nSecurity Implications\n\nI am not aware of any.\n\nHow to Teach This\n\nThe language tutorial will present the single-arg version, and the\ndocumentation for context managers will include a section on the legacy\nsignatures of __exit__ and __aexit__.\n\nReference Implementation\n\nCPython PR #101995 implements the proposal of this PEP.\n\nRejected Ideas\n\nSupport __leave__(self, exc)\n\nIt was considered to support a method by a new name, such as __leave__,\nwith the new signature. This basically makes the programmer explicitly\ndeclare which signature they are intending to use, and avoid the need\nfor introspection.\n\nDifferent variations of this idea include different amounts of magic\nthat can help automate the equivalence between __leave__ and __exit__.\nFor example, Mark Shannon suggested that the type constructor would add\na default implementation for each of __exit__ and __leave__ whenever one\nof them is defined on a class. This default implementation acts as a\ntrampoline that calls the user's function. This would make inheritance\nwork seamlessly, as well as the migration from __exit__ to __leave__ for\nparticular classes. The interpreter would just need to call __leave__,\nand that would call __exit__ whenever necessary.\n\nWhile this suggestion has several advantages over the current proposal,\nit has two drawbacks. The first is that it adds a new dunder name to the\ndata model, and we would end up with two dunders that mean the same\nthing, and only slightly differ in their signatures. 
The second is that\nit would require the migration of every __exit__ to __leave__, while\nwith introspection it would not be necessary to change the many\n__exit__(*arg) methods that do not access their args. While it is not as\nsimple as a grep for __exit__, it is possible to write an AST visitor\nthat detects __exit__ methods that can accept multiple arguments, and\nwhich do access them.\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.035440"},"created":{"kind":"timestamp","value":"2023-02-18T00:00:00","string":"2023-02-18T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0707/\",\n \"authors\": [\n \"Irit Katriel\"\n ],\n \"pep_number\": \"0707\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":35,"cells":{"id":{"kind":"string","value":"0720"},"text":{"kind":"string","value":"PEP: 720 Title: Cross-compiling Python packages Author: Filipe Laíns\n PEP-Delegate: Status: Draft Type: Informational\nContent-Type: text/x-rst Created: 01-Jul-2023 Python-Version: 3.12\n\nAbstract\n\nThis PEP attempts to document the status of cross-compilation of\ndownstream projects.\n\nIt should give an overview of the approaches currently used by\ndistributors (Linux distros, WASM environment providers, etc.) to\ncross-compile downstream projects (3rd party extensions, etc.).\n\nMotivation\n\nWe write this PEP to express the challenges in cross-compilation and act\nas a supporting document in future improvement proposals.\n\nAnalysis\n\nIntroduction\n\nThere are a couple different approaches being used to tackle this, with\ndifferent levels of interaction required from the user, but they all\nrequire a significant amount of effort. This is due to the lack of\nstandardized cross-compilation infrastructure on the Python packaging\necosystem, which itself stems from the complexity of cross-builds,\nmaking it a huge undertaking.\n\nUpstream support\n\nSome major projects like CPython, setuptools, etc. provide some support\nto help with cross-compilation, but it's unofficial and at a best-effort\nbasis. For example, the sysconfig module allows overwriting the data\nmodule name via the _PYTHON_SYSCONFIGDATA_NAME environment variable,\nsomething that is required for cross-builds, and setuptools accepts\npatches[1] to tweak/fix its logic to be compatible with popular\n\"environment faking\" workflows[2].\n\nThe lack of first-party support in upstream projects leads to\ncross-compilation being fragile and requiring a significant effort from\nusers, but at the same time, the lack of standardization makes it harder\nfor upstreams to improve support as there's no clarity on how this\nfeature should be provided.\n\nProjects with decent cross-build support\n\nIt seems relevant to point out that there are a few modern Python\npackage build-backends with, at least, decent cross-compilation support,\nthose being scikit-build and meson-python. 
Both these projects integrate\nexternal mature build-systems into Python packaging — CMake and Meson,\nrespectively — so cross-build support is inherited from them.\n\nDownstream approaches\n\nCross-compilation approaches fall in a spectrum that goes from, by\ndesign, requiring extensive user interaction to (ideally) almost none.\nUsually, they'll be based on one of two main strategies, using a\ncross-build environment, or faking the target environment.\n\nCross-build environment\n\nThis consists of running the Python interpreter normally and utilizing\nthe cross-build provided by the projects' build-system. However, as we\nsaw above, upstream support is lacking, so this approach only works for\na small-ish set of projects. When this fails, the usual strategy is to\npatch the build-system code to build use the correct toolchain, system\ndetails, etc.[3].\n\nSince this approach often requires package-specific patching, it\nrequires a lot of user interaction.\n\nExamples\n\npython-for-android, kivy-ios, etc.\n\nFaking the target environment\n\nAiming to drop the requirement for user input, a popular approach is\ntrying to fake the target environment. It generally consists of\nmonkeypatching the Python interpreter to get it to mimic the interpreter\non the target system, which constitutes of changing many of the sys\nmodule attributes, the sysconfig data, etc. Using this strategy,\nbuild-backends do not need to have any cross-build support, and should\njust work without any code changes.\n\nUnfortunately, though, it isn't possible to truly fake the target\nenvironment. There are many reasons for this, one of the main ones being\nthat it breaks code that actually needs to introspect the running\ninterpreter. As a result, monkeypatching Python to look like target is\nvery tricky — to achieve the less amount of breakage, we can only patch\ncertain aspects of the interpreter. Consequently, build-backends may\nneed some code changes, but these are generally much smaller than the\nprevious approach. This is an inherent limitation of the technique,\nmeaning this strategy still requires some user interaction.\n\nNonetheless, this strategy still works out-of-the-box with significantly\nmore projects than the approach above, and requires much less effort in\nthese cases. It is successful in decreasing the amount of user\ninteraction needed, even though it doesn't succeed in being generic.\n\nExamples\n\ncrossenv, conda-forge, etc.\n\nEnvironment introspection\n\nAs explained above, most build system code is written with the\nassumption that the target system is the same as where the build is\noccurring, so introspection is usually used to guide the build.\n\nIn this section, we try to document most of the ways this is\naccomplished. It should give a decent overview of of environment details\nthat are required by build systems.\n\n+----------------------+----------------------+----------------------+\n| Snippet | Description | Variance |\n+======================+======================+======================+\n| >> | Extension (native | This is |\n| > importlib.machiner | module) suffixes | imp |\n| y.EXTENSION_SUFFIXES | supported by this | lementation-defined, |\n| [ | interpreter. | but it usually |\n| | | differs based on the |\n| '.cpython-311-x | | implementation, |\n| 86_64-linux-gnu.so', | | system architecture, |\n| '.abi3.so', | | build configuration, |\n| '.so', | | Python language |\n| ] | | version, and |\n| | | implementation |\n| | | version — if one |\n| | | exists. 
|\n+----------------------+----------------------+----------------------+\n| | Source (pure-Python) | This is |\n| >>> importlib.machi | suffixes supported | imp |\n| nery.SOURCE_SUFFIXES | by this interpreter. | lementation-defined, |\n| ['.py'] | | but it usually |\n| | | doesn't differ |\n| | | (outside exotic |\n| | | implementations or |\n| | | systems). |\n+----------------------+----------------------+----------------------+\n| | All module file | This is |\n| >>> importlib.mach | suffixes supported | imp |\n| inery.all_suffixes() | by this interpreter. | lementation-defined, |\n| [ | It should be the | but it usually |\n| '.py', | union of all | differs based on the |\n| '.pyc', | importlib. | implementation, |\n| | machinery.*_SUFFIXES | system architecture, |\n| '.cpython-311-x | attributes. | build configuration, |\n| 86_64-linux-gnu.so', | | Python language |\n| '.abi3.so', | | version, and |\n| '.so', | | implementation |\n| ] | | version — if one |\n| | | exists. See the |\n| | | entries above for |\n| | | more information. |\n+----------------------+----------------------+----------------------+\n| >>> sys.abiflags | ABI flags, as | Differs based on the |\n| '' | specified in PEP | build configuration. |\n| | 3149. | |\n+----------------------+----------------------+----------------------+\n| | C API version. | Differs based on the |\n| >>> sys.api_version | | Python installation. |\n| 1013 | | |\n+----------------------+----------------------+----------------------+\n| | Prefix of the | Differs based on the |\n| >>> sys.base_prefix | installation-wide | platform, and |\n| /usr | directories where | installation. |\n| | platform independent | |\n| | files are installed. | |\n+----------------------+----------------------+----------------------+\n| >>> | Prefix of the | Differs based on the |\n| sys.base_exec_prefix | installation-wide | platform, and |\n| /usr | directories where | installation. |\n| | platform dependent | |\n| | files are installed. | |\n+----------------------+----------------------+----------------------+\n| | Native byte order. | Differs based on the |\n| >>> sys.byteorder | | platform. |\n| 'little' | | |\n+----------------------+----------------------+----------------------+\n| >>> sys. | Names of all modules | Differs based on the |\n| builtin_module_names | that are compiled | platform, system |\n| ('_abc', '_a | into the Python | architecture, and |\n| st', '_codecs', ...) | interpreter. | build configuration. |\n+----------------------+----------------------+----------------------+\n| | Prefix of the | Differs based on the |\n| >>> sys.exec_prefix | site-specific | platform, |\n| /usr | directories where | installation, and |\n| | platform independent | environment. |\n| | files are installed. | |\n| | Because it concerns | |\n| | the site-specific | |\n| | directories, in | |\n| | standard virtual | |\n| | environment | |\n| | implementation, it | |\n| | will be a | |\n| | virtual- | |\n| | environment-specific | |\n| | path. | |\n+----------------------+----------------------+----------------------+\n| | Path of the Python | Differs based on the |\n| >>> sys.executable | interpreter being | installation. |\n| | used. | |\n| '/usr/bin/python' | | |\n+----------------------+----------------------+----------------------+\n| > | Whether the Python | Differs based on the |\n| >> with open(sys.exe | interpreter is an | installation. |\n| cutable, 'rb') as f: | ELF file, and the | |\n| ... | ELF header. 
This | |\n| header = f.read(4) | approach is | |\n| .. | something used to | |\n| . if is_elf := (he | identify the target | |\n| ader == b'\\x7fELF'): | architecture of the | |\n| ... elf_cl | installation | |\n| ass = int(f.read(1)) | (example). | |\n| ... | | |\n| size = {1: 52, 2 | | |\n| : 64}.get(elf_class) | | |\n| | | |\n| ... elf_heade | | |\n| r = f.read(size - 5) | | |\n+----------------------+----------------------+----------------------+\n| | Low level | Differs based on the |\n| >>> sys.float_info | information about | architecture, and |\n| sys.float_info( | the float type, as | platform. |\n| max=1.79 | defined by float.h. | |\n| 76931348623157e+308, | | |\n| max_exp=1024, | | |\n| | | |\n| max_10_exp=308, | | |\n| min=2.22 | | |\n| 50738585072014e-308, | | |\n| | | |\n| min_exp=-1021, | | |\n| | | |\n| min_10_exp=-307, | | |\n| dig=15, | | |\n| mant_dig=53, | | |\n| epsilon=2. | | |\n| 220446049250313e-16, | | |\n| radix=2, | | |\n| rounds=1, | | |\n| ) | | |\n+----------------------+----------------------+----------------------+\n| >>> sys. | Integer representing | Differs based on the |\n| getandroidapilevel() | the Android API | platform. |\n| 21 | level. | |\n+----------------------+----------------------+----------------------+\n| >>> sys | Windows version of | Differs based on the |\n| .getwindowsversion() | the system. | platform. |\n| sy | | |\n| s.getwindowsversion( | | |\n| major=10, | | |\n| minor=0, | | |\n| build=19045, | | |\n| platform=2, | | |\n| | | |\n| service_pack='', | | |\n| ) | | |\n+----------------------+----------------------+----------------------+\n| | Python version | Differs based on the |\n| >>> sys.hexversion | encoded as an | Python language |\n| 0x30b03f0 | integer. | version. |\n+----------------------+----------------------+----------------------+\n| >> | Interpreter | Differs based on the |\n| > sys.implementation | implementation | interpreter |\n| namespace( | details. | implementation, |\n| | | Python language |\n| name='cpython', | | version, and |\n| cach | | implementation |\n| e_tag='cpython-311', | | version — if one |\n| versi | | exists. It may also |\n| on=sys.version_info( | | include |\n| major=3, | | ar |\n| minor=11, | | chitecture-dependent |\n| micro=3, | | information, so it |\n| r | | may also differ |\n| eleaselevel='final', | | based on the system |\n| serial=0, | | architecture. |\n| ), | | |\n| h | | |\n| exversion=0x30b03f0, | | |\n| _multiarch | | |\n| ='x86_64-linux-gnu', | | |\n| ) | | |\n+----------------------+----------------------+----------------------+\n| >>> sys.int_info | Low level | Differs based on the |\n| sys.int_info( | information about | architecture, |\n| | Python's internal | platform, |\n| bits_per_digit=30, | integer | implementation, |\n| | representation. | build, and runtime |\n| sizeof_digit=4, | | flags. |\n| default_ | | |\n| max_str_digits=4300, | | |\n| str_digits_ | | |\n| check_threshold=640, | | |\n| ) | | |\n+----------------------+----------------------+----------------------+\n| >>> sys.maxsize | Maximum value a | Differs based on the |\n| | variable of type | architecture, |\n| 0x7fffffffffffffff | Py_ssize_t can take. | platform, and |\n| | | implementation. |\n+----------------------+----------------------+----------------------+\n| | Value of the largest | Differs based on the |\n| >>> sys.maxunicode | Unicode code point. | implementation, and |\n| 0x10ffff | | on Python versions |\n| | | older than 3.3, the |\n| | | build. 
|\n+----------------------+----------------------+----------------------+\n| >>> sys.platform | Platform identifier. | Differs based on the |\n| linux | | platform. |\n+----------------------+----------------------+----------------------+\n| >>> sys.prefix | Prefix of the | Differs based on the |\n| /usr | site-specific | platform, |\n| | directories where | installation, and |\n| | platform dependent | environment. |\n| | files are installed. | |\n| | Because it concerns | |\n| | the site-specific | |\n| | directories, in | |\n| | standard virtual | |\n| | environment | |\n| | implementation, it | |\n| | will be a | |\n| | virtual- | |\n| | environment-specific | |\n| | path. | |\n+----------------------+----------------------+----------------------+\n| | Platform-specific | Differs based on the |\n| >>> sys.platlibdir | library directory. | platform, and |\n| lib | | vendor. |\n+----------------------+----------------------+----------------------+\n| | Python language | Differs if the |\n| >>> sys.version_info | version implemented | target Python |\n| | by the interpreter. | version is not the |\n| sys.version_info( | | same[8]. |\n| major=3, | | |\n| minor=11, | | |\n| micro=3, | | |\n| r | | |\n| eleaselevel='final', | | |\n| serial=0, | | |\n| ) | | |\n+----------------------+----------------------+----------------------+\n| | Information about | Differs based on the |\n| >>> sys.thread_info | the thread | platform, and |\n| sys.thread_info( | implementation. | implementation. |\n| | | |\n| name='pthread', | | |\n| | | |\n| lock='semaphore', | | |\n| | | |\n| version='NPTL 2.37', | | |\n| ) | | |\n+----------------------+----------------------+----------------------+\n| >>> sys.winver | Version number used | Differs based on the |\n| 3.8-32 | to form Windows | platform, and |\n| | registry keys. | implementation. |\n+----------------------+----------------------+----------------------+\n| >>> sysconf | Python distribution | This is |\n| ig.get_config_vars() | configuration | imp |\n| { ... } | variables. It | lementation-defined, |\n| >>> sysconfig | includes a set of | but it usually |\n| .get_config_var(...) | variables[9] — like | differs between |\n| ... | prefix, exec_prefix, | non-identical |\n| | etc. — based on the | builds. Please refer |\n| | running context[10], | to the sysconfig |\n| | and may include some | configuration |\n| | extra variables | variables table for |\n| | based on the Python | a overview of the |\n| | implementation and | different |\n| | system. | configuration |\n| | | variable that are |\n| | In CPython and most | usually present. |\n| | other | |\n| | implementations that | |\n| | use the same | |\n| | build-system, the | |\n| | \"extra\" variables | |\n| | mention above are: | |\n| | on POSIX, all | |\n| | variables from the | |\n| | Makefile used to | |\n| | build the | |\n| | interpreter, and on | |\n| | Windows, it usually | |\n| | only includes a | |\n| | small subset of the | |\n| | those[11] — like | |\n| | EXT_SUFFIX, BINDIR, | |\n| | etc. | |\n+----------------------+----------------------+----------------------+\n\nCPython (and similar)\n\n --------------------------------------------------------------------------------------------------------\n Name Example Value Description Variance\n -------------------- ---------------------------------- -------------------------- ---------------------\n SOABI cpython-311-x86_64-linux-gnu ABI string — defined by Differs based on the\n PEP 3149. 
implementation,\n system architecture,\n Python language\n version, and\n implementation\n version — if one\n exists.\n\n SHLIB_SUFFIX .so Shared library suffix. Differs based on the\n platform.\n\n EXT_SUFFIX .cpython-311-x86_64-linux-gnu.so Interpreter-specific Differs based on the\n Python extension (native implementation,\n module) suffix — generally system architecture,\n defined as Python language\n .{SOABI}.{SHLIB_SUFFIX}. version, and\n implementation\n version — if one\n exists.\n\n LDLIBRARY libpython3.11.so Shared libpython library Differs based on the\n name — if available. If implementation,\n unavailable[12], the system architecture,\n variable will be empty, if build configuration,\n available, the library Python language\n should be located in version, and\n LIBDIR. implementation\n version — if one\n exists.\n\n PY3LIBRARY libpython3.so Shared Python 3 only Differs based on the\n (major version bound implementation,\n only)[13] libpython system architecture,\n library name — if build configuration,\n available. If Python language\n unavailable[14], the version, and\n variable will be empty, if implementation\n available, the library version — if one\n should be located in exists.\n LIBDIR. \n\n LIBRARY libpython3.11.a Static libpython library Differs based on the\n name — if available. If implementation,\n unavailable[15], the system architecture,\n variable will be empty, if build configuration,\n available, the library Python language\n should be located in version, and\n LIBDIR. implementation\n version — if one\n exists.\n\n Py_DEBUG 0 Whether this is a debug Differs based on the\n build. build configuration.\n\n WITH_PYMALLOC 1 Whether this build has Differs based on the\n pymalloc support. build configuration.\n\n Py_TRACE_REFS 0 Whether reference tracing Differs based on the\n (debug build only) is build configuration.\n enabled. \n\n Py_UNICODE_SIZE Size of the Py_UNICODE Differs based on the\n object, in bytes. This build configuration.\n variable is only present \n in CPython versions older \n than 3.3, and was commonly \n used to detect if the \n build uses UCS2 or UCS4 \n for unicode objects — \n before PEP 393. \n\n Py_ENABLE_SHARED 1 Whether a shared libpython Differs based on the\n is available. build configuration.\n\n PY_ENABLE_SHARED 1 Whether a shared libpython Differs based on the\n is available. build configuration.\n\n CC gcc The C compiler used to Differs based on the\n build the Python build configuration.\n distribution. \n\n CXX g++ The C compiler used to Differs based on the\n build the Python build configuration.\n distribution. \n\n CFLAGS -DNDEBUG -g -fwrapv ... The C compiler flags used Differs based on the\n to build the Python build configuration.\n distribution. \n\n py_version 3.11.3 Full form of the Python Differs based on the\n version. Python language\n version.\n\n py_version_short 3.11 Custom form of the Python Differs based on the\n version, containing only Python language\n the major and minor version.\n numbers. \n\n py_version_nodot 311 Custom form of the Python Differs based on the\n version, containing only Python language\n the major and minor version.\n numbers, and no dots. \n\n prefix /usr Same as sys.prefix, please Differs based on the\n refer to the entry in platform,\n table above. installation, and\n environment.\n\n base /usr Same as sys.prefix, please Differs based on the\n refer to the entry in platform,\n table above. 
installation, and\n environment.\n\n exec_prefix /usr Same as sys.exec_prefix, Differs based on the\n please refer to the entry platform,\n in table above. installation, and\n environment.\n\n platbase /usr Same as sys.exec_prefix, Differs based on the\n please refer to the entry platform,\n in table above. installation, and\n environment.\n\n installed_base /usr Same as sys.base_prefix, Differs based on the\n please refer to the entry platform, and\n in table above. installation.\n\n installed_platbase /usr Same as Differs based on the\n sys.base_exec_prefix, platform, and\n please refer to the entry installation.\n in table above. \n\n platlibdir lib Same as sys.platlibdir, Differs based on the\n please refer to the entry platform, and vendor.\n in table above. \n\n SIZEOF_* 4 Size of a certain C type Differs based on the\n (double, float, etc.). system architecture,\n and build details.\n --------------------------------------------------------------------------------------------------------\n\n : sysconfig configuration variables\n\nRelevant Information\n\nThere are some bits of information required by build systems — eg.\nplatform particularities — scattered across many places, and it often is\ndifficult to identify code with assumptions based on them. In this\nsection, we try to document the most relevant cases.\n\nWhen should extensions be linked against libpython?\n\nShort answer\n\n Yes, on Windows. No on POSIX platforms, except Android, Cygwin, and\n other Windows-based POSIX-like platforms.\n\nWhen building extensions for dynamic loading, depending on the target\nplatform, they may need to be linked against libpython.\n\nOn Windows, extensions need to link against libpython, because all\nsymbols must be resolvable at link time. POSIX-like platforms based on\nWindows — like Cygwin, MinGW, or MSYS — will also require linking\nagainst libpython.\n\nOn most POSIX platforms, it is not necessary to link against libpython,\nas the symbols will already be available in the due to the interpreter —\nor, when embedding, the executable/library in question — already linking\nto libpython. Not linking an extension module against libpython will\nallow it to be loaded by static Python builds, so when possible, it is\ndesirable to do so (see GH-65735).\n\nThis might not be the case on all POSIX platforms, so make sure you\ncheck. One example is Android, where only the main executable and\nLD_PRELOAD entries are considered to be RTLD_GLOBAL (meaning\ndependencies are RTLD_LOCAL) [16], which causes the libpython symbols be\nunavailable when loading the extension.\n\nWhat are prefix, exec_prefix, base_prefix, and base_exec_prefix?\n\nThese are sys attributes set in the Python initialization that describe\nthe running environment. They refer to the prefix of directories where\ninstallation/environment files are installed, according to the table\nbelow.\n\n Name Target files Environment Scope\n ------------------ ---------------------------------------- -------------------\n prefix platform independent (eg. pure Python) site-specific\n exec_prefix platform dependent (eg. native code) site-specific\n base_prefix platform independent (eg. pure Python) installation-wide\n base_exec_prefix platform dependent (eg. 
native code) installation-wide

Because the site-specific prefixes will be different inside virtual
environments, checking sys.prefix != sys.base_prefix is commonly used to
check if we are in a virtual environment.

Case studies

crossenv

Description

    Virtual Environments for Cross-Compiling Python Extension Modules.

URL

    https://github.com/benfogle/crossenv

crossenv is a tool to create a virtual environment with a monkeypatched
Python installation that tries to emulate the target machine in certain
scenarios. More about this approach can be found in the Faking the
target environment section.

conda-forge

Description

    A community-led collection of recipes, build infrastructure and
    distributions for the conda package manager.

URL

    https://conda-forge.org/

XXX: Jaime will write a quick summary once the PEP draft is public.

XXX Uses a modified crossenv.

Yocto Project

Description

    The Yocto Project is an open source collaboration project that
    helps developers create custom Linux-based systems regardless of
    the hardware architecture.

URL

    https://www.yoctoproject.org/

XXX: Sent email to the mailing list.

TODO

Buildroot

Description

    Buildroot is a simple, efficient and easy-to-use tool to generate
    embedded Linux systems through cross-compilation.

URL

    https://buildroot.org/

TODO

Pyodide

Description

    Pyodide is a Python distribution for the browser and Node.js based
    on WebAssembly.

URL

    https://pyodide.org/en/stable/

XXX: Hood should review/expand this section.

Pyodide provides a Python distribution compiled to WebAssembly using the
Emscripten toolchain.

It patches several aspects of the CPython installation and some external
components. A custom package manager — micropip — supporting both pure
and wasm32/Emscripten wheels, is also provided as part of the
distribution. On top of this, a repo with a selected set of 3rd party
packages is also provided and enabled by default.

Beeware

Description

    BeeWare allows you to write your app in Python and release it on
    multiple platforms.

URL

    https://beeware.org/

TODO

python-for-android

Description

    Turn your Python application into an Android APK.

URL

    https://github.com/kivy/python-for-android

resource https://github.com/Android-for-Python/Android-for-Python-Users

python-for-android is a tool to package Python apps on Android. It
creates a Python distribution with your app and its dependencies.

Pure-Python dependencies are handled automatically and in a generic way,
but native dependencies need recipes. A set of recipes for popular
dependencies is provided, but users need to provide their own recipes
for any other native dependencies.

kivy-ios

Description

    Toolchain for compiling Python / Kivy / other libraries for iOS.

URL

    https://github.com/kivy/kivy-ios

kivy-ios is a tool to package Python apps on iOS.
It provides a\ntoolchain to build a Python distribution with your app and its\ndependencies, as well as a CLI to create and manage Xcode projects that\nintegrate with the toolchain.\n\nIt uses the same approach as python-for-android (also maintained by the\nKivy project) for app dependencies — pure-Python dependencies are\nhandled automatically, but native dependencies need recipes, and the\nproject provides recipes for popular dependencies.\n\nAidLearning\n\nDescription\n\n AI, Android, Linux, ARM: AI application development platform based\n on Android+Linux integrated ecology.\n\nURL\n\n https://github.com/aidlearning/AidLearning-FrameWork\n\nTODO\n\nQPython\n\nDescription\n\n QPython is the Python engine for android.\n\nURL\n\n https://github.com/qpython-android/qpython\n\nTODO\n\npyqtdeploy\n\nDescription\n\n pyqtdeploy is a tool for deploying PyQt applications.\n\nURL\n\n https://www.riverbankcomputing.com/software/pyqtdeploy/\n\ncontact\nhttps://www.riverbankcomputing.com/pipermail/pyqt/2023-May/thread.html\ncontacted Phil, the maintainer\n\nTODO\n\nChaquopy\n\nDescription\n\n Chaquopy provides everything you need to include Python components\n in an Android app.\n\nURL\n\n https://chaquo.com/chaquopy/\n\nTODO\n\nEDK II\n\nDescription\n\n EDK II is a modern, feature-rich, cross-platform firmware\n development environment for the UEFI and PI specifications.\n\nURL\n\n https://github.com/tianocore/edk2-libc/tree/master/AppPkg/Applications/Python\n\nTODO\n\nActivePython\n\nDescription\n\n Commercial-grade, quality-assured Python distribution focusing on\n easy installation and cross-platform compatibility on Windows,\n Linux, Mac OS X, Solaris, HP-UX and AIX.\n\nURL\n\n https://www.activestate.com/products/python/\n\nTODO\n\nTermux\n\nDescription\n\n Termux is an Android terminal emulator and Linux environment app\n that works directly with no rooting or setup required.\n\nURL\n\n https://termux.dev/en/\n\nTODO\n\n[1] At the time of writing (Jun 2023), setuptools' compiler interface\ncode, the component that most of affects cross-compilation, is developed\non the pypa/distutils repository, which gets periodically synced to the\nsetuptools repository.\n\n[2] We specifically mention popular workflows, because this is not\nstandardized. Though, many of the most popular implementations\n(crossenv, conda-forge's build system, etc.) work similarly, and this is\nwhat we are referring to here. For clarity, the implementations we are\nreferring to here could be described as crossenv-style.\n\n[3] The scope of the build-system patching varies between users and\nusually depends on the their goal — some (eg. Linux distributions) may\npatch the build-system to support cross-builds, while others might\nhardcode compiler paths and system information in the build-system, to\nsimply make the build work.\n\n[4] Ideally, you want to perform cross-builds with the same Python\nversion and implementation, however, this is often not the case. It\nshould not be very problematic as long as the major and minor versions\ndon't change.\n\n[5] The set of config variables that will always be present mostly\nconsists of variables needed to calculate the installation scheme paths.\n\n[6] The context we refer here consists of the \"path initialization\",\nwhich is a process that happens in the interpreter startup and is\nresponsible for figuring out which environment it is being run — eg.\nglobal environment, virtual environment, etc. 
— and setting sys.prefix\nand other attributes accordingly.\n\n[7] This is because Windows builds may not use the Makefile, and instead\nuse the Visual Studio build system. A subset of the most relevant\nMakefile variables is provided to make user code that uses them simpler.\n\n[8] Ideally, you want to perform cross-builds with the same Python\nversion and implementation, however, this is often not the case. It\nshould not be very problematic as long as the major and minor versions\ndon't change.\n\n[9] The set of config variables that will always be present mostly\nconsists of variables needed to calculate the installation scheme paths.\n\n[10] The context we refer here consists of the \"path initialization\",\nwhich is a process that happens in the interpreter startup and is\nresponsible for figuring out which environment it is being run — eg.\nglobal environment, virtual environment, etc. — and setting sys.prefix\nand other attributes accordingly.\n\n[11] This is because Windows builds may not use the Makefile, and\ninstead use the Visual Studio build system. A subset of the most\nrelevant Makefile variables is provided to make user code that uses them\nsimpler.\n\n[12] Due to Python bring compiled without shared or static libpython\nsupport, respectively.\n\n[13] This is the libpython library that users of the stable ABI should\nlink against, if they need to link against libpython.\n\n[14] Due to Python bring compiled without shared or static libpython\nsupport, respectively.\n\n[15] Due to Python bring compiled without shared or static libpython\nsupport, respectively.\n\n[16] Refer to dlopen's man page for more information."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.139736"},"created":{"kind":"timestamp","value":"2023-07-01T00:00:00","string":"2023-07-01T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0720/\",\n \"authors\": [\n \"Filipe Laíns\"\n ],\n \"pep_number\": \"0720\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":36,"cells":{"id":{"kind":"string","value":"0754"},"text":{"kind":"string","value":"PEP: 754 Title: IEEE 754 Floating Point Special Values Version:\n$Revision$ Last-Modified: $Date$ Author: Gregory R. Warnes\n Status: Rejected Type: Standards\nTrack Content-Type: text/x-rst Created: 28-Mar-2003 Python-Version: 2.3\nPost-History:\n\nRejection Notice\n\nThis PEP has been rejected. After sitting open for four years, it has\nfailed to generate sufficient community interest.\n\nSeveral ideas of this PEP were implemented for Python 2.6. float('inf')\nand repr(float('inf')) are now guaranteed to work on every supported\nplatform with IEEE 754 semantics. However the eval(repr(float('inf')))\nroundtrip is still not supported unless you define inf and nan yourself:\n\n >>> inf = float('inf')\n >>> inf, 1E400\n (inf, inf)\n >>> neginf = float('-inf')\n >>> neginf, -1E400\n (-inf, -inf)\n >>> nan = float('nan')\n >>> nan, inf * 0.\n (nan, nan)\n\nThe math and the sys module also have gained additional features,\nsys.float_info, math.isinf, math.isnan, math.copysign.\n\nAbstract\n\nThis PEP proposes an API and a provides a reference module that\ngenerates and tests for IEEE 754 double-precision special values:\npositive infinity, negative infinity, and not-a-number (NaN).\n\nRationale\n\nThe IEEE 754 standard defines a set of binary representations and\nalgorithmic rules for floating point arithmetic. 
Included in the standard is a set of constants for representing special
values, including positive infinity, negative infinity, and
indeterminate or non-numeric results (NaN). Most modern CPUs implement
the IEEE 754 standard, including the (Ultra)SPARC, PowerPC, and x86
processor series.

Currently, the handling of IEEE 754 special values in Python depends on
the underlying C library. Unfortunately, there is little consistency
between C libraries in how or whether these values are handled. For
instance, on some systems "float('Inf')" will properly return the IEEE
754 constant for positive infinity. On many systems, however, this
expression will instead generate an error message.

The output string representation for an IEEE 754 special value also
varies by platform. For example, the expression "float(1e3000)", which
is large enough to generate an overflow, should return a string
representation corresponding to IEEE 754 positive infinity. Python 2.1.3
on x86 Debian Linux returns "inf". On Sparc Solaris 8 with Python 2.2.1,
this same expression returns "Infinity", and on MS-Windows 2000 with
Active Python 2.2.1, it returns "1.#INF".

Adding to the confusion, some platforms generate one string on
conversion from floating point and accept a different string for
conversion to floating point. On these systems:

    float(str(x))

will generate an error when "x" is an IEEE special value.

In the past, some have recommended that programmers use expressions
like:

    PosInf = 1e300**2
    NaN = PosInf/PosInf

to obtain positive infinity and not-a-number constants. However, the
first expression generates an error on current Python interpreters. A
possible alternative is to use:

    PosInf = 1e300000
    NaN = PosInf/PosInf

While this does not generate an error with current Python interpreters,
it is still an ugly and potentially non-portable hack. In addition,
defining NaN in this way does not solve the problem of detecting such
values. First, the IEEE 754 standard provides for an entire set of
constant values for Not-a-Number. Second, the standard requires that:

    NaN != X

for all possible values of X, including NaN. As a consequence:

    NaN == NaN

should always evaluate to false. However, this behavior is also not
consistently implemented. [e.g.
Cygwin Python 2.2.2]\n\nDue to the many platform and library inconsistencies in handling IEEE\nspecial values, it is impossible to consistently set or detect IEEE 754\nfloating point values in normal Python code without resorting to\ndirectly manipulating bit-patterns.\n\nThis PEP proposes a standard Python API and provides a reference module\nimplementation which allows for consistent handling of IEEE 754 special\nvalues on all supported platforms.\n\nAPI Definition\n\nConstants\n\nNaN\n\n Non-signalling IEEE 754 \"Not a Number\" value\n\nPosInf\n\n IEEE 754 Positive Infinity value\n\nNegInf\n\n IEEE 754 Negative Infinity value\n\nFunctions\n\nisNaN(value)\n\n Determine if the argument is an IEEE 754 NaN (Not a Number) value.\n\nisPosInf(value)\n\n Determine if the argument is an IEEE 754 positive infinity value.\n\nisNegInf(value)\n\n Determine if the argument is an IEEE 754 negative infinity value.\n\nisFinite(value)\n\n Determine if the argument is a finite IEEE 754 value (i.e., is not\n NaN, positive, or negative infinity).\n\nisInf(value)\n\n Determine if the argument is an infinite IEEE 754 value (positive or\n negative infinity)\n\nExample\n\n(Run under Python 2.2.1 on Solaris 8.)\n\n>>> import fpconst >>> val = 1e30000 # should be cause overflow and\nresult in \"Inf\" >>> val Infinity >>> fpconst.isInf(val) 1 >>>\nfpconst.PosInf Infinity >>> nval = val/val # should result in NaN >>>\nnval NaN >>> fpconst.isNaN(nval) 1 >>> fpconst.isNaN(val) 0\n\nImplementation\n\nThe reference implementation is provided in the module \"fpconst\"[1],\nwhich is written in pure Python by taking advantage of the \"struct\"\nstandard module to directly set or test for the bit patterns that define\nIEEE 754 special values. Care has been taken to generate proper results\non both big-endian and little-endian machines. The current\nimplementation is pure Python, but some efficiency could be gained by\ntranslating the core routines into C.\n\nPatch 1151323 \"New fpconst module\"[2] on SourceForge adds the fpconst\nmodule to the Python standard library.\n\nReferences\n\nSee http://babbage.cs.qc.edu/courses/cs341/IEEE-754references.html for\nreference material on the IEEE 754 floating point standard.\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 End:\n\n[1] Further information on the reference package is available at\nhttp://research.warnes.net/projects/rzope/fpconst/\n\n[2] http://sourceforge.net/tracker/?func=detail&aid=1151323&group_id=5470&atid=305470"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.152606"},"created":{"kind":"timestamp","value":"2003-03-28T00:00:00","string":"2003-03-28T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0754/\",\n \"authors\": [\n \"Gregory R. Warnes\"\n ],\n \"pep_number\": \"0754\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":37,"cells":{"id":{"kind":"string","value":"0375"},"text":{"kind":"string","value":"PEP: 375 Title: Python 3.1 Release Schedule Version: $Revision$\nLast-Modified: $Date$ Author: Benjamin Peterson \nStatus: Final Type: Informational Topic: Release Content-Type:\ntext/x-rst Created: 08-Feb-2009 Python-Version: 3.1\n\nAbstract\n\nThis document describes the development and release schedule for Python\n3.1. The schedule primarily concerns itself with PEP-sized items. 
Small\nfeatures may be added up to and including the first beta release. Bugs\nmay be fixed until the final release.\n\nRelease Manager and Crew\n\n Position Name\n --------------------- -------------------\n 3.1 Release Manager Benjamin Peterson\n Windows installers Martin v. Loewis\n Mac installers Ronald Oussoren\n\nRelease Schedule\n\n- 3.1a1 March 7, 2009\n- 3.1a2 April 4, 2009\n- 3.1b1 May 6, 2009\n- 3.1rc1 May 30, 2009\n- 3.1rc2 June 13, 2009\n- 3.1 final June 27, 2009\n\nMaintenance Releases\n\n3.1 is no longer maintained. 3.1 received security fixes until June\n2012.\n\nPrevious maintenance releases are:\n\n- v3.1.1rc1 2009-08-13\n- v3.1.1 2009-08-16\n- v3.1.2rc1 2010-03-06\n- v3.1.2 2010-03-20\n- v3.1.3rc1 2010-11-13\n- v3.1.3 2010-11-27\n- v3.1.4rc1 2011-05-29\n- v3.1.4 2011-06-11\n- v3.1.5rc1 2012-02-23\n- v3.1.5rc2 2012-03-15\n- v3.1.5 2012-04-06\n\nFeatures for 3.1\n\n- importlib\n- io in C\n- Update simplejson to the latest external version[1].\n- Ordered dictionary for collections (PEP 372).\n- auto-numbered replacement fields in str.format() strings[2]\n- Nested with-statements in one with statement\n\nFootnotes\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:\n\n[1] http://bugs.python.org/issue4136\n\n[2] http://bugs.python.org/issue5237"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.161815"},"created":{"kind":"timestamp","value":"2009-02-08T00:00:00","string":"2009-02-08T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0375/\",\n \"authors\": [\n \"Benjamin Peterson\"\n ],\n \"pep_number\": \"0375\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":38,"cells":{"id":{"kind":"string","value":"0395"},"text":{"kind":"string","value":"PEP: 395 Title: Qualified Names for Modules Version: $Revision$\nLast-Modified: $Date$ Author: Alyssa Coghlan \nStatus: Withdrawn Type: Standards Track Content-Type: text/x-rst\nCreated: 04-Mar-2011 Python-Version: 3.4 Post-History: 05-Mar-2011,\n19-Nov-2011\n\nPEP Withdrawal\n\nThis PEP was withdrawn by the author in December 2013, as other\nsignificant changes in the time since it was written have rendered\nseveral aspects obsolete. 
Most notably PEP 420 namespace packages\nrendered some of the proposals related to package detection unworkable\nand PEP 451 module specifications resolved the multiprocessing issues\nand provide a possible means to tackle the pickle compatibility issues.\n\nA future PEP to resolve the remaining issues would still be appropriate,\nbut it's worth starting any such effort as a fresh PEP restating the\nremaining problems in an updated context rather than trying to build on\nthis one directly.\n\nAbstract\n\nThis PEP proposes new mechanisms that eliminate some longstanding traps\nfor the unwary when dealing with Python's import system, as well as\nserialisation and introspection of functions and classes.\n\nIt builds on the \"Qualified Name\" concept defined in PEP 3155.\n\nRelationship with Other PEPs\n\nMost significantly, this PEP is currently deferred as it requires\nsignificant changes in order to be made compatible with the removal of\nmandatory __init__.py files in PEP 420 (which has been implemented and\nreleased in Python 3.3).\n\nThis PEP builds on the \"qualified name\" concept introduced by PEP 3155,\nand also shares in that PEP's aim of fixing some ugly corner cases when\ndealing with serialisation of arbitrary functions and classes.\n\nIt also builds on PEP 366, which took initial tentative steps towards\nmaking explicit relative imports from the main module work correctly in\nat least some circumstances.\n\nFinally, PEP 328 eliminated implicit relative imports from imported\nmodules. This PEP proposes that the de facto implicit relative imports\nfrom main modules that are provided by the current initialisation\nbehaviour for sys.path[0] also be eliminated.\n\nWhat's in a __name__?\n\nOver time, a module's __name__ attribute has come to be used to handle a\nnumber of different tasks.\n\nThe key use cases identified for this module attribute are:\n\n1. Flagging the main module in a program, using the\n if __name__ == \"__main__\": convention.\n2. As the starting point for relative imports\n3. To identify the location of function and class definitions within\n the running application\n4. To identify the location of classes for serialisation into pickle\n objects which may be shared with other interpreter instances\n\nTraps for the Unwary\n\nThe overloading of the semantics of __name__, along with some\nhistorically associated behaviour in the initialisation of sys.path[0],\nhas resulted in several traps for the unwary. These traps can be quite\nannoying in practice, as they are highly unobvious (especially to\nbeginners) and can cause quite confusing behaviour.\n\nWhy are my imports broken?\n\nThere's a general principle that applies when modifying sys.path: never\nput a package directory directly on sys.path. The reason this is\nproblematic is that every module in that directory is now potentially\naccessible under two different names: as a top level module (since the\npackage directory is on sys.path) and as a submodule of the package (if\nthe higher level directory containing the package itself is also on\nsys.path).\n\nAs an example, Django (up to and including version 1.3) is guilty of\nsetting up exactly this situation for site-specific applications - the\napplication ends up being accessible as both app and site.app in the\nmodule namespace, and these are actually two different copies of the\nmodule. 
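
A minimal demonstration of the effect (the layout and names below are
hypothetical, chosen to mirror the Django situation just described, and
the snippet assumes it is run from the directory containing project/):

    # Assumed layout:
    #   project/
    #       app/
    #           __init__.py
    #           models.py      # contains:  registry = []
    import sys
    sys.path.insert(0, "project/app")   # the package directory itself (the mistake)
    sys.path.insert(0, "project")       # the package's parent directory

    import models          # loaded as the top level module 'models'
    import app.models      # loaded again, this time as 'app.models'

    print(models is app.models)    # False: two distinct module objects
    models.registry.append("x")
    print(app.models.registry)     # []  - the other copy is unaffected
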
This is a recipe for confusion if there is any meaningful\nmutable module level state, so this behaviour is being eliminated from\nthe default site set up in version 1.4 (site-specific apps will always\nbe fully qualified with the site name).\n\nHowever, it's hard to blame Django for this, when the same part of\nPython responsible for setting __name__ = \"__main__\" in the main module\ncommits the exact same error when determining the value for sys.path[0].\n\nThe impact of this can be seen relatively frequently if you follow the\n\"python\" and \"import\" tags on Stack Overflow. When I had the time to\nfollow it myself, I regularly encountered people struggling to\nunderstand the behaviour of straightforward package layouts like the\nfollowing (I actually use package layouts along these lines in my own\nprojects):\n\n project/\n setup.py\n example/\n __init__.py\n foo.py\n tests/\n __init__.py\n test_foo.py\n\nWhile I would often see it without the __init__.py files first, that's a\ntrivial fix to explain. What's hard to explain is that all of the\nfollowing ways to invoke test_foo.py probably won't work due to broken\nimports (either failing to find example for absolute imports,\ncomplaining about relative imports in a non-package or beyond the\ntoplevel package for explicit relative imports, or issuing even more\nobscure errors if some other submodule happens to shadow the name of a\ntop-level module, such as an example.json module that handled\nserialisation or an example.tests.unittest test runner):\n\n # These commands will most likely *FAIL*, even if the code is correct\n\n # working directory: project/example/tests\n ./test_foo.py\n python test_foo.py\n python -m package.tests.test_foo\n python -c \"from package.tests.test_foo import main; main()\"\n\n # working directory: project/package\n tests/test_foo.py\n python tests/test_foo.py\n python -m package.tests.test_foo\n python -c \"from package.tests.test_foo import main; main()\"\n\n # working directory: project\n example/tests/test_foo.py\n python example/tests/test_foo.py\n\n # working directory: project/..\n project/example/tests/test_foo.py\n python project/example/tests/test_foo.py\n # The -m and -c approaches don't work from here either, but the failure\n # to find 'package' correctly is easier to explain in this case\n\nThat's right, that long list is of all the methods of invocation that\nwill almost certainly break if you try them, and the error messages\nwon't make any sense if you're not already intimately familiar not only\nwith the way Python's import system works, but also with how it gets\ninitialised.\n\nFor a long time, the only way to get sys.path right with that kind of\nsetup was to either set it manually in test_foo.py itself (hardly\nsomething a novice, or even many veteran, Python programmers are going\nto know how to do) or else to make sure to import the module instead of\nexecuting it directly:\n\n # working directory: project\n python -c \"from package.tests.test_foo import main; main()\"\n\nSince the implementation of PEP 366 (which defined a mechanism that\nallows relative imports to work correctly when a module inside a package\nis executed via the -m switch), the following also works properly:\n\n # working directory: project\n python -m package.tests.test_foo\n\nThe fact that most methods of invoking Python code from the command line\nbreak when that code is inside a package, and the two that do work are\nhighly sensitive to the current working directory is all thoroughly\nconfusing for a beginner. 
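
The "set sys.path manually" workaround mentioned above typically ends up
looking something like the following sketch (it assumes the
project/example/tests/ layout shown earlier):

    # At the top of test_foo.py, before any project imports
    import os
    import sys

    if __name__ == "__main__":
        _project_dir = os.path.abspath(
            os.path.join(os.path.dirname(__file__), os.pardir, os.pardir))
        sys.path.insert(0, _project_dir)   # make the 'example' package importable

    from example import foo

Needing boilerplate like this just to make a file inside a package
directly executable is exactly the kind of stumbling block described
above.
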
I personally believe it is one of the key\nfactors leading to the perception that Python packages are complicated\nand hard to get right.\n\nThis problem isn't even limited to the command line - if test_foo.py is\nopen in Idle and you attempt to run it by pressing F5, or if you try to\nrun it by clicking on it in a graphical filebrowser, then it will fail\nin just the same way it would if run directly from the command line.\n\nThere's a reason the general \"no package directories on sys.path\"\nguideline exists, and the fact that the interpreter itself doesn't\nfollow it when determining sys.path[0] is the root cause of all sorts of\ngrief.\n\nIn the past, this couldn't be fixed due to backwards compatibility\nconcerns. However, scripts potentially affected by this problem will\nalready require fixes when porting to the Python 3.x (due to the\nelimination of implicit relative imports when importing modules\nnormally). This provides a convenient opportunity to implement a\ncorresponding change in the initialisation semantics for sys.path[0].\n\nImporting the main module twice\n\nAnother venerable trap is the issue of importing __main__ twice. This\noccurs when the main module is also imported under its real name,\neffectively creating two instances of the same module under different\nnames.\n\nIf the state stored in __main__ is significant to the correct operation\nof the program, or if there is top-level code in the main module that\nhas non-idempotent side effects, then this duplication can cause obscure\nand surprising errors.\n\nIn a bit of a pickle\n\nSomething many users may not realise is that the pickle module sometimes\nrelies on the __module__ attribute when serialising instances of\narbitrary classes. So instances of classes defined in __main__ are\npickled that way, and won't be unpickled correctly by another python\ninstance that only imported that module instead of running it directly.\nThis behaviour is the underlying reason for the advice from many Python\nveterans to do as little as possible in the __main__ module in any\napplication that involves any form of object serialisation and\npersistence.\n\nSimilarly, when creating a pseudo-module (see next paragraph), pickles\nrely on the name of the module where a class is actually defined, rather\nthan the officially documented location for that class in the module\nhierarchy.\n\nFor the purposes of this PEP, a \"pseudo-module\" is a package designed\nlike the Python 3.2 unittest and concurrent.futures packages. These\npackages are documented as if they were single modules, but are in fact\ninternally implemented as a package. 
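
The way the submodule structure shows through can be seen directly on a
current CPython (the internal module names below are implementation
details and may differ between versions):

    >>> import pickle
    >>> from concurrent.futures import Future
    >>> Future.__module__    # documented as concurrent.futures.Future
    'concurrent.futures._base'
    >>> b'concurrent.futures._base' in pickle.dumps(Future)
    True

Any pickle that references such a class therefore records the private
submodule name rather than the documented location.
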
This is supposed to be an\nimplementation detail that users and other implementations don't need to\nworry about, but, thanks to pickle (and serialisation in general), the\ndetails are often exposed and can effectively become part of the public\nAPI.\n\nWhile this PEP focuses specifically on pickle as the principal\nserialisation scheme in the standard library, this issue may also affect\nother mechanisms that support serialisation of arbitrary class instances\nand rely on __module__ attributes to determine how to handle\ndeserialisation.\n\nWhere's the source?\n\nSome sophisticated users of the pseudo-module technique described above\nrecognise the problem with implementation details leaking out via the\npickle module, and choose to address it by altering __name__ to refer to\nthe public location for the module before defining any functions or\nclasses (or else by modifying the __module__ attributes of those objects\nafter they have been defined).\n\nThis approach is effective at eliminating the leakage of information via\npickling, but comes at the cost of breaking introspection for functions\nand classes (as their __module__ attribute now points to the wrong\nplace).\n\nForkless Windows\n\nTo get around the lack of os.fork on Windows, the multiprocessing module\nattempts to re-execute Python with the same main module, but skipping\nover any code guarded by if __name__ == \"__main__\": checks. It does the\nbest it can with the information it has, but is forced to make\nassumptions that simply aren't valid whenever the main module isn't an\nordinary directly executed script or top-level module. Packages and\nnon-top-level modules executed via the -m switch, as well as directly\nexecuted zipfiles or directories, are likely to make multiprocessing on\nWindows do the wrong thing (either quietly or noisily, depending on\napplication details) when spawning a new process.\n\nWhile this issue currently only affects Windows directly, it also\nimpacts any proposals to provide Windows-style \"clean process\"\ninvocation via the multiprocessing module on other platforms.\n\nQualified Names for Modules\n\nTo make it feasible to fix these problems once and for all, it is\nproposed to add a new module level attribute: __qualname__. This\nabbreviation of \"qualified name\" is taken from PEP 3155, where it is\nused to store the naming path to a nested class or function definition\nrelative to the top level module.\n\nFor modules, __qualname__ will normally be the same as __name__, just as\nit is for top-level functions and classes in PEP 3155. 
However, it will\ndiffer in some situations so that the above problems can be addressed.\n\nSpecifically, whenever __name__ is modified for some other purpose (such\nas to denote the main module), then __qualname__ will remain unchanged,\nallowing code that needs it to access the original unmodified value.\n\nIf a module loader does not initialise __qualname__ itself, then the\nimport system will add it automatically (setting it to the same value as\n__name__).\n\nAlternative Names\n\nTwo alternative names were also considered for the new attribute: \"full\nname\" (__fullname__) and \"implementation name\" (__implname__).\n\nEither of those would actually be valid for the use case in this PEP.\nHowever, as a meta-issue, PEP 3155 is also adding a new attribute (for\nfunctions and classes) that is \"like __name__, but different in some\ncases where __name__ is missing necessary information\" and those terms\naren't accurate for the PEP 3155 function and class use case.\n\nPEP 3155 deliberately omits the module information, so the term \"full\nname\" is simply untrue, and \"implementation name\" implies that it may\nspecify an object other than that specified by __name__, and that is\nnever the case for PEP 3155 (in that PEP, __name__ and __qualname__\nalways refer to the same function or class, it's just that __name__ is\ninsufficient to accurately identify nested functions and classes).\n\nSince it seems needlessly inconsistent to add two new terms for\nattributes that only exist because backwards compatibility concerns keep\nus from changing the behaviour of __name__ itself, this PEP instead\nchose to adopt the PEP 3155 terminology.\n\nIf the relative inscrutability of \"qualified name\" and __qualname__\nencourages interested developers to look them up at least once rather\nthan assuming they know what they mean just from the name and guessing\nwrong, that's not necessarily a bad outcome.\n\nBesides, 99% of Python developers should never need to even care these\nextra attributes exist - they're really an implementation detail to let\nus fix a few problematic behaviours exhibited by imports, pickling and\nintrospection, not something people are going to be dealing with on a\nregular basis.\n\nEliminating the Traps\n\nThe following changes are interrelated and make the most sense when\nconsidered together. They collectively either completely eliminate the\ntraps for the unwary noted above, or else provide straightforward\nmechanisms for dealing with them.\n\nA rough draft of some of the concepts presented here was first posted on\nthe python-ideas list ([1]), but they have evolved considerably since\nfirst being discussed in that thread. Further discussion has\nsubsequently taken place on the import-sig mailing list ([2].[3]).\n\nFixing main module imports inside packages\n\nTo eliminate this trap, it is proposed that an additional filesystem\ncheck be performed when determining a suitable value for sys.path[0].\nThis check will look for Python's explicit package directory markers and\nuse them to find the appropriate directory to add to sys.path.\n\nThe current algorithm for setting sys.path[0] in relevant cases is\nroughly as follows:\n\n # Interactive prompt, -m switch, -c switch\n sys.path.insert(0, '')\n\n # Valid sys.path entry execution (i.e. 
directory and zip execution)\n sys.path.insert(0, sys.argv[0])\n\n # Direct script execution\n sys.path.insert(0, os.path.dirname(sys.argv[0]))\n\nIt is proposed that this initialisation process be modified to take\npackage details stored on the filesystem into account:\n\n # Interactive prompt, -m switch, -c switch\n in_package, path_entry, _ignored = split_path_module(os.getcwd(), '')\n if in_package:\n sys.path.insert(0, path_entry)\n else:\n sys.path.insert(0, '')\n\n # Start interactive prompt or run -c command as usual\n # __main__.__qualname__ is set to \"__main__\"\n\n # The -m switches uses the same sys.path[0] calculation, but:\n # modname is the argument to the -m switch\n # modname is passed to ``runpy._run_module_as_main()`` as usual\n # __main__.__qualname__ is set to modname\n\n # Valid sys.path entry execution (i.e. directory and zip execution)\n modname = \"__main__\"\n path_entry, modname = split_path_module(sys.argv[0], modname)\n sys.path.insert(0, path_entry)\n\n # modname (possibly adjusted) is passed to ``runpy._run_module_as_main()``\n # __main__.__qualname__ is set to modname\n\n # Direct script execution\n in_package, path_entry, modname = split_path_module(sys.argv[0])\n sys.path.insert(0, path_entry)\n if in_package:\n # Pass modname to ``runpy._run_module_as_main()``\n else:\n # Run script directly\n # __main__.__qualname__ is set to modname\n\nThe split_path_module() supporting function used in the above\npseudo-code would have the following semantics:\n\n def _splitmodname(fspath):\n path_entry, fname = os.path.split(fspath)\n modname = os.path.splitext(fname)[0]\n return path_entry, modname\n\n def _is_package_dir(fspath):\n return any(os.exists(\"__init__\" + info[0]) for info\n in imp.get_suffixes())\n\n def split_path_module(fspath, modname=None):\n \"\"\"Given a filesystem path and a relative module name, determine an\n appropriate sys.path entry and a fully qualified module name.\n\n Returns a 3-tuple of (package_depth, fspath, modname). A reported\n package depth of 0 indicates that this would be a top level import.\n\n If no relative module name is given, it is derived from the final\n component in the supplied path with the extension stripped.\n \"\"\"\n if modname is None:\n fspath, modname = _splitmodname(fspath)\n package_depth = 0\n while _is_package_dir(fspath):\n fspath, pkg = _splitmodname(fspath)\n modname = pkg + '.' 
+ modname\n return package_depth, fspath, modname\n\nThis PEP also proposes that the split_path_module() functionality be\nexposed directly to Python users via the runpy module.\n\nWith this fix in place, and the same simple package layout described\nearlier, all of the following commands would invoke the test suite\ncorrectly:\n\n # working directory: project/example/tests\n ./test_foo.py\n python test_foo.py\n python -m package.tests.test_foo\n python -c \"from .test_foo import main; main()\"\n python -c \"from ..tests.test_foo import main; main()\"\n python -c \"from package.tests.test_foo import main; main()\"\n\n # working directory: project/package\n tests/test_foo.py\n python tests/test_foo.py\n python -m package.tests.test_foo\n python -c \"from .tests.test_foo import main; main()\"\n python -c \"from package.tests.test_foo import main; main()\"\n\n # working directory: project\n example/tests/test_foo.py\n python example/tests/test_foo.py\n python -m package.tests.test_foo\n python -c \"from package.tests.test_foo import main; main()\"\n\n # working directory: project/..\n project/example/tests/test_foo.py\n python project/example/tests/test_foo.py\n # The -m and -c approaches still don't work from here, but the failure\n # to find 'package' correctly is pretty easy to explain in this case\n\nWith these changes, clicking Python modules in a graphical file browser\nshould always execute them correctly, even if they live inside a\npackage. Depending on the details of how it invokes the script, Idle\nwould likely also be able to run test_foo.py correctly with F5, without\nneeding any Idle specific fixes.\n\nOptional addition: command line relative imports\n\nWith the above changes in place, it would be a fairly minor addition to\nallow explicit relative imports as arguments to the -m switch:\n\n # working directory: project/example/tests\n python -m .test_foo\n python -m ..tests.test_foo\n\n # working directory: project/example/\n python -m .tests.test_foo\n\nWith this addition, system initialisation for the -m switch would change\nas follows:\n\n # -m switch (permitting explicit relative imports)\n in_package, path_entry, pkg_name = split_path_module(os.getcwd(), '')\n qualname= <>\n if qualname.startswith('.'):\n modname = qualname\n while modname.startswith('.'):\n modname = modname[1:]\n pkg_name, sep, _ignored = pkg_name.rpartition('.')\n if not sep:\n raise ImportError(\"Attempted relative import beyond top level package\")\n qualname = pkg_name + '.' modname\n if in_package:\n sys.path.insert(0, path_entry)\n else:\n sys.path.insert(0, '')\n\n # qualname is passed to ``runpy._run_module_as_main()``\n # _main__.__qualname__ is set to qualname\n\nCompatibility with PEP 382\n\nMaking this proposal compatible with the PEP 382 namespace packaging PEP\nis trivial. The semantics of _is_package_dir() are merely changed to be:\n\n def _is_package_dir(fspath):\n return (fspath.endswith(\".pyp\") or\n any(os.exists(\"__init__\" + info[0]) for info\n in imp.get_suffixes()))\n\nIncompatibility with PEP 402\n\nPEP 402 proposes the elimination of explicit markers in the file system\nfor Python packages. This fundamentally breaks the proposed concept of\nbeing able to take a filesystem path and a Python module name and work\nout an unambiguous mapping to the Python module namespace. 
Instead, the\nappropriate mapping would depend on the current values in sys.path,\nrendering it impossible to ever fix the problems described above with\nthe calculation of sys.path[0] when the interpreter is initialised.\n\nWhile some aspects of this PEP could probably be salvaged if PEP 402\nwere adopted, the core concept of making import semantics from main and\nother modules more consistent would no longer be feasible.\n\nThis incompatibility is discussed in more detail in the relevant\nimport-sig threads ([4],[5]).\n\nPotential incompatibilities with scripts stored in packages\n\nThe proposed change to sys.path[0] initialisation may break some\nexisting code. Specifically, it will break scripts stored in package\ndirectories that rely on the implicit relative imports from __main__ in\norder to run correctly under Python 3.\n\nWhile such scripts could be imported in Python 2 (due to implicit\nrelative imports) it is already the case that they cannot be imported in\nPython 3, as implicit relative imports are no longer permitted when a\nmodule is imported.\n\nBy disallowing implicit relatives imports from the main module as well,\nsuch modules won't even work as scripts with this PEP. Switching them\nover to explicit relative imports will then get them working again as\nboth executable scripts and as importable modules.\n\nTo support earlier versions of Python, a script could be written to use\ndifferent forms of import based on the Python version:\n\n if __name__ == \"__main__\" and sys.version_info < (3, 3):\n import peer # Implicit relative import\n else:\n from . import peer # explicit relative import\n\nFixing dual imports of the main module\n\nGiven the above proposal to get __qualname__ consistently set correctly\nin the main module, one simple change is proposed to eliminate the\nproblem of dual imports of the main module: the addition of a\nsys.metapath hook that detects attempts to import __main__ under its\nreal name and returns the original main module instead:\n\n class AliasImporter:\n def __init__(self, module, alias):\n self.module = module\n self.alias = alias\n\n def __repr__(self):\n fmt = \"{0.__class__.__name__}({0.module.__name__}, {0.alias})\"\n return fmt.format(self)\n\n def find_module(self, fullname, path=None):\n if path is None and fullname == self.alias:\n return self\n return None\n\n def load_module(self, fullname):\n if fullname != self.alias:\n raise ImportError(\"{!r} cannot load {!r}\".format(self, fullname))\n return self.main_module\n\nThis metapath hook would be added automatically during import system\ninitialisation based on the following logic:\n\n main = sys.modules[\"__main__\"]\n if main.__name__ != main.__qualname__:\n sys.metapath.append(AliasImporter(main, main.__qualname__))\n\nThis is probably the least important proposal in the PEP - it just\ncloses off the last mechanism that is likely to lead to module\nduplication after the configuration of sys.path[0] at interpreter\nstartup is addressed.\n\nFixing pickling without breaking introspection\n\nTo fix this problem, it is proposed to make use of the new module level\n__qualname__ attributes to determine the real module location when\n__name__ has been modified for any reason.\n\nIn the main module, __qualname__ will automatically be set to the main\nmodule's \"real\" name (as described above) by the interpreter.\n\nPseudo-modules that adjust __name__ to point to the public namespace\nwill leave __qualname__ untouched, so the implementation location\nremains readily accessible for 
introspection.\n\nIf __name__ is adjusted at the top of a module, then this will\nautomatically adjust the __module__ attribute for all functions and\nclasses subsequently defined in that module.\n\nSince multiple submodules may be set to use the same \"public\" namespace,\nfunctions and classes will be given a new __qualmodule__ attribute that\nrefers to the __qualname__ of their module.\n\nThis isn't strictly necessary for functions (you could find out their\nmodule's qualified name by looking in their globals dictionary), but it\nis needed for classes, since they don't hold a reference to the globals\nof their defining module. Once a new attribute is added to classes, it\nis more convenient to keep the API consistent and add a new attribute to\nfunctions as well.\n\nThese changes mean that adjusting __name__ (and, either directly or\nindirectly, the corresponding function and class __module__ attributes)\nbecomes the officially sanctioned way to implement a namespace as a\npackage, while exposing the API as if it were still a single module.\n\nAll serialisation code that currently uses __name__ and __module__\nattributes will then avoid exposing implementation details by default.\n\nTo correctly handle serialisation of items from the main module, the\nclass and function definition logic will be updated to also use\n__qualname__ for the __module__ attribute in the case where\n__name__ == \"__main__\".\n\nWith __name__ and __module__ being officially blessed as being used for\nthe public names of things, the introspection tools in the standard\nlibrary will be updated to use __qualname__ and __qualmodule__ where\nappropriate. For example:\n\n- pydoc will report both public and qualified names for modules\n- inspect.getsource() (and similar tools) will use the qualified names\n that point to the implementation of the code\n- additional pydoc and/or inspect APIs may be provided that report all\n modules with a given public __name__.\n\nFixing multiprocessing on Windows\n\nWith __qualname__ now available to tell multiprocessing the real name of\nthe main module, it will be able to simply include it in the serialised\ninformation passed to the child process, eliminating the need for the\ncurrent dubious introspection of the __file__ attribute.\n\nFor older Python versions, multiprocessing could be improved by applying\nthe split_path_module() algorithm described above when attempting to\nwork out how to execute the main module based on its __file__ attribute.\n\nExplicit relative imports\n\nThis PEP proposes that __package__ be unconditionally defined in the\nmain module as __qualname__.rpartition('.')[0]. Aside from that, it\nproposes that the behaviour of explicit relative imports be left alone.\n\nIn particular, if __package__ is not set in a module when an explicit\nrelative import occurs, the automatically cached value will continue to\nbe derived from __name__ rather than __qualname__. This minimises any\nbackwards incompatibilities with existing code that deliberately\nmanipulates relative imports by adjusting __name__ rather than setting\n__package__ directly.\n\nThis PEP does not propose that __package__ be deprecated. 
While it is\ntechnically redundant following the introduction of __qualname__, it\njust isn't worth the hassle of deprecating it within the lifetime of\nPython 3.x.\n\nReference Implementation\n\nNone as yet.\n\nReferences\n\n- Elaboration of compatibility problems between this PEP and PEP 402\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] Module aliases and/or \"real names\"\n\n[2] PEP 395 (Module aliasing) and the namespace PEPs\n\n[3] Updated PEP 395 (aka \"Implicit Relative Imports Must Die!\")\n\n[4] PEP 395 (Module aliasing) and the namespace PEPs\n\n[5] Updated PEP 395 (aka \"Implicit Relative Imports Must Die!\")"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.205364"},"created":{"kind":"timestamp","value":"2011-03-04T00:00:00","string":"2011-03-04T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0395/\",\n \"authors\": [\n \"Alyssa Coghlan\"\n ],\n \"pep_number\": \"0395\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":39,"cells":{"id":{"kind":"string","value":"0609"},"text":{"kind":"string","value":"PEP: 609 Title: Python Packaging Authority (PyPA) Governance Author:\nDustin Ingram , Pradyun Gedam ,\nSumana Harihareswara Sponsor: Paul Ganssle\n Discussions-To:\nhttps://discuss.python.org/t/pep-609-pypa-governance/2619 Status: Active\nType: Process Topic: Governance, Packaging Content-Type: text/x-rst\nCreated: 05-Nov-2019 Post-History: 05-Nov-2019\n\nAbstract\n\nThis document describes a governance model for the Python Packaging\nAuthority (PyPA). The model is closely based on existing informal\npractices, with the intent of providing clarity into the functioning of\nthe PyPA and formalizing transparent processes for the PyPA.\n\nRationale\n\nThe Python Packaging Authority (PyPA) is a collaborative community that\nmaintains and advances many of the relevant projects in Python\npackaging. The software and standards developed through the PyPA are\nused to package, share, and install Python software and to interact with\nindexes of downloadable Python software such as PyPI, the Python Package\nIndex.\n\nCurrently, the PyPA is an informal and loosely defined organization that\nlacks true authority, and the inclusion of a given project under the\nPyPA umbrella or the creation of new projects has been done in an\nad-hoc, one-off manner. Similarly, individual membership in the PyPA is\nnot well-defined.\n\nWhile this model has more or less worked for the PyPA in the past, it\nresults in an organization which is missing certain features of a stable\necosystem, namely a clear and transparent decision-making process. This\nPEP seeks to rectify this by defining a governance model for the PyPA.\n\nTerminology\n\nRelevant terms for groups of individual contributors used in this PEP:\n\nPyPA members:\n\n Anyone with the triage bit or commit bit, on at least one project in\n the PyPA organization.\n\nPyPA committers:\n\n Anyone with the commit bit on at least one project in the PyPA\n organization, which should correspond to everyone on the\n PyPA-Committers mailing list.\n\nPyPA community:\n\n Anyone who is interested in PyPA activity and wants to follow along,\n contribute or make proposals.\n\nPackaging-WG members:\n\n As described in the Packaging-WG Wiki page. For clarity: there is no\n formal relationship between the Packaging-WG and PyPA. 
This group is\n only included in this list to disambiguate it from PyPA.\n\nGoals\n\nThe following section formalizes the goals (and non-goals) of the PyPA\nand this governance model.\n\nGoals of the PyPA\n\nThese goals are the primary motivation for the existence of the PyPA.\nThese goals are largely already being carried out, even though most have\nnot been explicitly defined.\n\nProvide support for existing projects under the PyPA\n\nIn the event that a given project needs additional support, or no longer\nhas active maintainers, the PyPA will ensure that the given project will\ncontinue to be supported for users to the extent necessary.\n\nFoster the creation and acceptance of standards for PyPA projects\n\nThe PyPA should, as much as possible, strive for standardization and\ncoordination across PyPA projects, primarily though the governance\nprocess outlined below. PyPA projects are expected to abide by\napplicable specifications maintained by the PyPA.\n\nGuide decisions which affect multiple PyPA projects\n\nThe PyPA community (especially PyPA members) should be expected to\nprovide opinions, insight and experience when ecosystem-wide changes are\nbeing proposed.\n\nDetermine which projects should be under the guidance of the PyPA\n\nFor example: accepting new projects from the community, organically\ncreating projects within the PyPA, etc.\n\nEnforce adherence to a Code of Conduct across all projects\n\nGenerally this means leading by example, but occasionally it may mean\nmore explicit moderation.\n\nNon-goals of the PyPA\n\nThese are specific items that are explicitly _not goals of the PyPA.\n\nDetermine who is and isn't a PyPA member\n\nThis is for members of individual projects to decide, as they add new\nmembers to their projects. Maintainership of a project that is under the\nPyPA organization automatically transfers membership in the PyPA.\n\nMicromanage individual projects\n\nAs long as the project is adhering to the Code of Conduct and following\nspecifications supported by the PyPA, the PyPA should only concerned\nwith large, ecosystem-wide changes.\n\nDevelop and maintain standalone Code of Conduct\n\nPyPA projects follow the PSF Code of Conduct.\n\nGoals of the PyPA's Governance Model\n\nThese are new goals which the governance model seeks to make possible.\n\nTransparency in PyPA membership\n\nProvide a transparent process for decisions taken, regarding project\nmembership in the PyPA.\n\nDocument PyPA's use of PEPs\n\nFormally document how the PyPA uses Python Enhancement Proposals (PEPs),\nfor maintaining interoperability specifications defined by the PyPA.\n\nProcesses\n\nThe processes for the PyPA's activities are outlined below:\n\nSpecifications\n\nThe PyPA will use PEPs for defining, and making changes to, the\ninteroperability specifications maintained by the PyPA. Thus, the Python\nSteering Council has the final say in the acceptance of these\ninteroperability specifications.\n\nIt is expected (but not required) that the Python Steering Council would\ndelegate authority to sponsor and/or approve/reject PEPs related to\npackaging interoperability specifications, to individuals within the\nPyPA community. 
At the time of writing (June 2020), the Python Steering\nCouncil has standing delegations for currently active packaging\ninteroperability specifications.\n\nThe details of the process of proposing and updating the\ninteroperability specifications are described in the PyPA Specifications\ndocument.\n\nGovernance\n\nPyPA Committer Votes\n\nA PyPA member can put forward a proposal and call for a vote on a public\nPyPA communication channel. A PyPA committer vote is triggered when a\nPyPA committer (not the proposer) seconds the proposal.\n\nThe proposal will be put to a vote on the PyPA-Committers mailing list,\nover a 7-day period. Each PyPA committer can vote once, and can choose\none of +1 and -1. If at least two thirds of recorded votes are +1, then\nthe vote succeeds.\n\nPyPA committer votes are required for, and limited to, the following\nkinds of proposals:\n\nAddition of a project to the PyPA\n\nProposing the acceptance of a project into the PyPA organization. This\nproposal must not be opposed by the existing maintainers of the project.\n\nCreation of a new project in the PyPA\n\nProposing the creation of a new tools / project in the PyPA\norganization.\n\nRemoval of a project from PyPA\n\nProposing the removal of a project in the PyPA organization.\n\nUpdates to the Governance/Specification Processes\n\nProposing changes to how the PyPA operates, including but not limited to\nchanges to its specification and governance processes, and this PEP.\n\nLeaving PyPA\n\nA project that is a part of the PyPA organization, can request to leave\nPyPA.\n\nSuch requests can made by a committer of the project, on the\nPyPA-Committers mailing list and must clearly state the GitHub\nuser/organization to transfer the repository to.\n\nIf the request is not opposed by another committer of the same project\nover a 7-day period, the project would leave the PyPA and be transferred\nout of the PyPA organization as per the request.\n\nCode of Conduct enforcement\n\nEach project that is a part of the PyPA organization follows the PSF\nCode of Conduct, including its incident reporting guidelines and\nenforcement procedures.\n\nPyPA members are responsible for leading by example. PyPA members\noccasionally may need to more explicitly moderate behavior in their\nprojects, and each project that is a part of the PyPA organization must\ndesignate at least one PyPA member as available to contact in case of a\nCode of Conduct incident. 
If told of any Code of Conduct incidents\ninvolving their projects, PyPA members are expected to report those\nincidents up to the PSF Conduct WG, for recording purposes and for\npotential assistance.\n\nReferences\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.221216"},"created":{"kind":"timestamp","value":"2019-11-05T00:00:00","string":"2019-11-05T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0609/\",\n \"authors\": [\n \"Dustin Ingram\"\n ],\n \"pep_number\": \"0609\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":40,"cells":{"id":{"kind":"string","value":"0644"},"text":{"kind":"string","value":"PEP: 644 Title: Require OpenSSL 1.1.1 or newer Author: Christian Heimes\n Discussions-To:\nhttps://discuss.python.org/t/pep-644-require-openssl-1-1-or-newer/5584\nStatus: Final Type: Standards Track Content-Type: text/x-rst Created:\n27-Oct-2020 Python-Version: 3.10 Post-History: 27-Oct-2020, 03-Mar-2021,\n17-Mar-2021, 17-Apr-2021 Resolution:\nhttps://mail.python.org/archives/list/python-dev@python.org/message/INLCO2EZVQW7R7J2OL6HWVLVU3TQRAZV/\n\nAbstract\n\nThis PEP proposes for CPython’s standard library to support only OpenSSL\n1.1.1 LTS or newer. Support for OpenSSL versions past end-of-lifetime,\nincompatible forks, and other TLS libraries are dropped.\n\nMotivation\n\nPython makes use of OpenSSL in hashlib, hmac, and ssl modules. OpenSSL\nprovides fast implementations of cryptographic primitives and a full TLS\nstack including handling of X.509 certificates. The ssl module is used\nby standard library modules like urllib and 3rd party modules like\nurllib3 to implement secure variants of internet protocols. pip uses the\nssl module to securely download packages from PyPI. Any bug in the ssl\nmodule's bindings to OpenSSL can lead to a severe security issue.\n\nOver time OpenSSL's public API has evolved and changed. Version 1.0.2\nintroduced new APIs to verify and match hostnames. OpenSSL 1.1.0 made\ninternal structs opaque and introduced new APIs that replace direct\naccess of struct members. Version 3.0.0 will deprecate more APIs due to\ninternal reorganization that moves cryptographic algorithms out of the\ncore and into providers. Forks like LibreSSL and BoringSSL have diverged\nin different directions.\n\nCurrently Python versions 3.6 to 3.9 are compatible with OpenSSL 1.0.2,\n1.1.0, and 1.1.1. For the most part Python also works with LibreSSL >=\n2.7.1 with some missing features and broken tests.\n\nDue to limited resources and time it becomes increasingly hard to\nsupport multiple versions and forks as well as test and verify\ncorrectness. Besides multiple incompatible APIs there are build time\nflags, distribution-specific patches, and local crypto-policy settings\nthat add to plethora of combinations. On the other hand, the Python core\nteam has only a couple of domain experts who are familiar with TLS and\nOpenSSL internals and even fewer who are active maintainers.\n\nRequiring OpenSSL 1.1.1 would allow us to give the vast majority of\nusers a better experience, reduce our maintenance overhead and thus free\nresources to implement new features. 
Users would be able to rely on the\npresence of new features and consistent behavior, ultimately resulting\nin a more robust experience.\n\nImpact\n\nOpenSSL 1.1.1 is the default variant and version of OpenSSL on almost\nall supported platforms and distributions. It’s also the only version\nthat still receives security support from upstream[1].\n\nNo macOS and Windows user will be affected by the deprecation. The\npython.org installer and alternative distributions like Conda ship with\nmost recent OpenSSL version.\n\nAs of October 2020 and according to DistroWatch[2] most current BSD and\nLinux distributions ship with OpenSSL 1.1.1 as well. Some older releases\nof long-term support (LTS) and enterprise distributions have older\nversions of OpenSSL or LibreSSL. By the time Python 3.10 will be\ngenerally available, several of these distributions will have reached\nend of lifetime, end of general support, or moved from LibreSSL to\nOpenSSL.\n\nOther software has dropped support for OpenSSL 1.0.2 as well. For\nexample, PyCA cryptography 3.2 (2020-10-25) removed compatibility with\nOpenSSL 1.0.2.\n\nOpenSSL 1.0.2 LTS\n\nreleased: 2015-02 end of lifetime: 2019-12\n\nOpenSSL 1.0.2 added hostname verification, ALPN support, and elliptic\ncurves.\n\n- CentOS 7 (EOL 2024-06)\n- Debian 8 Jessie (EOL 2020-07)\n- Linux Mint 18.3 (EOL 2021-04)\n- RHEL 7 (full support ends 2019-08, maintenance 2 support ends\n 2024-06)\n- SUSE Enterprise Linux 12-SP5 (general supports ends 2024-10)\n- Ubuntu 16.04 LTS / Xenial (general support ends 2021-04)\n\nOpenSSL 1.1.0\n\nreleased: 2016-08 end of lifetime: 2019-09\n\nOpenSSL 1.1.0 removed or disabled insecure ciphers by default and added\nsupport for ChaCha20-Poly1305, BLAKE2 (basic features), X25519 and CT.\nThe majority of structs were made opaque and new APIs were introduced.\nOpenSSL 1.1.0 is not API compatible with 1.0.2.\n\n- Debian 9 Stretch (security support ended 2020-07, LTS until 2022-06)\n- Ubuntu 18.04 LTS / Bionic (general support ends 2023-04)\n\nOpenSSL 1.1.1 LTS\n\nreleased: 2018-08 end of lifetime: 2023-09 (planned)\n\nOpenSSL 1.1.1 added TLS 1.3, SHA-3, X448 and Ed448.\n\n- Alpine (switched back to OpenSSL in 2018[3])\n- Arch Linux current\n- CentOS 8.0+\n- Debian 10 Buster\n- Debian 11 Bullseye (ETA 2021-06)\n- Fedora 29+\n- FreeBSD 11.3+\n- Gentoo Linux stable (dropped LibreSSL as alternative in January\n 2021[4])\n- HardenedBSD (switched back to OpenSSL in 2018[5])\n- Linux Mint 19.3+\n- macOS (python.org installer)\n- NetBSD 8.2+\n- openSUSE 15.2+\n- RHEL 8.0+\n- Slackware current\n- SUSE Enterprise Linux 15-SP2\n- Ubuntu 18.10+\n- Ubuntu 20.04 LTS / Focal\n- VoidLinux (switched back to OpenSSL in March 2021[6])\n- Windows (python.org installer, Conda)\n\nMajor CI providers provide images with OpenSSL 1.1.1.\n\n- AppVeyor (with image Ubuntu2004)\n- CircleCI (with recent cimg/base:stable or cimg/base:stable-20.04)\n- GitHub Actions (with runs-on: ubuntu-20.04)\n- Giblab CI (with Debian Stretch, Ubuntu Focal, CentOS 8, RHEL 8, or\n Fedora runner)\n- Packit\n- TravisCI (with dist: focal)\n- Zuul\n\nOpenSSL 3.0.0\n\nreleased: n/a (planned for mid/late 2021)\n\nOpenSSL 3.0.0 is currently under development. Major changes include\nrelicensing to Apache License 2.0 and a new API for cryptographic\nalgorithms providers. 
Most changes are internal refactorings and don’t\naffect public APIs.[7]\n\nLibreSSL\n\ncreated: 2014-04 (forked from OpenSSL 1.0.1g)\n\n- DragonFly BSD\n- Hyperbola GNU/Linux-libre\n- OpenBSD\n- OpenELEC (discontinued)\n- TrueOS (discontinued)\n\nSome distributions like FreeBSD and OPNsense also feature LibreSSL\ninstead of OpenSSL as non-standard TLS libraries. Gentoo discontinued\nLibreSSL as an alternative to OpenSSL in January 2021[8] due to\ncompatibility issues and little testing.\n\nOpenBSD ports has a port security/openssl/1.1 which is documented as\n\"[...] is present to provide support for applications which cannot be\nmade compatible with LibReSSL\"[9]. The package could be used by OpenBSD\nto provide a working ssl module.\n\nBoringSSL\n\ncreated: 2014-06\n\nBoringSSL is Google’s fork of OpenSSL. It’s not intended for general use\nand therefore not supported by Python. There are no guarantees of API or\nABI stability. Vendored copies of BoringSSL are used in Chrome/Chromium\nbrowser, Android, and on Apple platforms[10].\n\nBenefits\n\nTLS 1.3\n\nOpenSSL 1.1.1 introduced support for the new TLS 1.3 version. The latest\nversion of the TLS protocol has a faster handshake and is more secure\nthan the previous versions.\n\nThread and fork safety\n\nStarting with release 1.1.0c, OpenSSL is fully fork and thread safe.\nBindings no longer need any workarounds or additional callbacks to\nsupport multithreading.\n\nSHA-3\n\nSince 1.1.0, OpenSSL ships with SHA-3 and SHAKE implementations.\nPython's builtin SHA-3 support is based on the reference implementation.\nThe internal _sha3 code is fairly large and the resulting shared library\nclose to 0.5 MB. Python could drop the builtin implementation and rely\non OpenSSL's libcrypto instead.\n\nSo far LibreSSL upstream development has refused to add SHA-3\nsupport.[11]\n\nCompatibility\n\nOpenSSL downstream patches and options\n\nOpenSSL features more than 70 configure and build time options in the\nform of OPENSSL_NO_* macros. Around 60 options affect the presence of\nfeatures like cryptographic algorithms and TLS versions. Some\ndistributions apply patches to alter settings. Furthermore, default\nvalues for settings like security level, ciphers, TLS version range, and\nsignature algorithms can be set in OpenSSL config file.\n\nThe Python core team lacks resources to test all possible combinations.\nThis PEP proposes that Python only supports OpenSSL builds that have\nstandard features enabled. Vendors shall disable deprecated or insecure\nalgorithms and TLS versions with build time options like\nOPENSSL_NO_TLS1_1_METHOD or OpenSSL config options like\nMinProtocol = TLSv1.2.\n\nPython assumes that OpenSSL is built with\n\n- hashlib’s default algorithms such as MD5, SHA-1, SHA-2 family,\n SHA-3/SHAKE family, BLAKE2\n- TLS 1.2 and TLS 1.3 protocols\n- current key agreement, signature, and encryption algorithms for TLS\n 1.2 and 1.3 (ECDH, RSA, ECDSA, Curve25519, AES, Poly1309-ChaCha20,\n ...)\n- threading, file I/O, socket I/O, and error messages\n\nWeak algorithms (MD5, SHA-1 signatures) and short keys (RSA < 2024 bits)\nmay be disabled at runtime. Algorithms may also be blocked when they are\ndisabled by a crypto policy such as FIPS. The PEP is not more specific\non purpose to give room for new features as well as countermeasures\nagainst vulnerabilities. As a rule of thumb, Python should be able to\nconnect to PyPI and the test suite should pass.\n\nLibreSSL support\n\nLibreSSL is a fork of OpenSSL. 
The fork was created off OpenSSL 1.0.1g\nby members of the OpenBSD team in 2014 in light of the heartbleed\nvulnerability. Since its inception several features deemed problematic\nor insecure were removed or replaced (SSL 2.0, SSL 3.0, improved CPRNG)\nor backported from OpenSSL and BoringSSL.\n\nAt the moment LibreSSL is not fully API compatible with OpenSSL 1.1.1.\nThe latest release LibreSSL 3.3.2 is missing features and behaves\ndifferently in some cases. Mentionable missing or incompatible features\ninclude\n\n- SHA-3, SHAKE, BLAKE2\n- SSL_CERT_* environment variables\n- security level APIs\n- session handling APIs\n- key logging API\n- verified cert chain APIs\n- OPENSSL_VERSION macro\n\nThis PEP proposed to remove any and all LibreSSL related workarounds\nfrom Python. In the future Python will not actively prohibit LibreSSL\nsupport with configure and compile time checks. But Python will not\naccept patches that add non-trivial workarounds or disable tests either.\n\nBoringSSL\n\nThere are currently no plans to support BoringSSL.\n\nRejected Ideas\n\nFormalize supported OpenSSL versions\n\nThis PEP does not provide a set of formal rules and conditions under\nwhich an OpenSSL version is supported.\n\nIn general Python aims to be compatible with commonly used and\nofficially supported OpenSSL versions. Patch releases of Python may not\nbe compatible with new major releases of OpenSSL. Users should not\nexpect that a new major or minor release of Python works with an OpenSSL\nversion that is past its end-of-lifetime. Python core development may\nbackport fixes for new releases or extend compatibility with EOLed\nreleases as we see fit.\n\nThe new ABI stability and LTS policies of OpenSSL[12] should help, too.\n\nKeep support for OpenSSL 1.1.0\n\nIt was suggested to keep support for OpenSSL 1.1.0 for compatibility\nwith Debian 9 (Stretch). The proposal was rejected since it would\ncomplicated code cleanup and testing. Stretch is already out of regular\nsecurity support and close to end of long-term support. By the time of\nPython 3.10 final release, Debian Buster and Debian Bullseye will be\navailable.\n\nInstead Python 3.10 will gain additional documentation and a new\nconfigure option --with-openssl-rpath=auto to simplify use of custom\nOpenSSL builds[13].\n\nBackwards Compatibility\n\nPython 3.10 will no longer support TLS/SSL and fast hashing on platforms\nwith OpenSSL 1.0.2 or LibreSSL. The first draft of this PEP was\npublished at the beginning of the 3.10 release cycles to give vendors\nlike Linux distributors or CI providers sufficient time to plan.\n\nPython's internal copy of the Keccak Code Package and the internal _sha3\nmodule will be removed. This will reduce source code size by about 280kB\nand code size by roughly 0.5MB. The hashlib will solely rely on\nOpenSSL's SHA-3 implementation. SHA-3 and SHAKE will no longer be\navailable without OpenSSL.\n\nDisclaimer and special thanks\n\nThe author of this PEP is a contributor to OpenSSL project and employed\nby a major Linux distributor that uses OpenSSL.\n\nThanks to Alex Gaynor, Gregory P. Smith, Nathaniel J. 
Smith, Paul\nKehrer, and Seth Larson for their review and feedback on the initial\ndraft.\n\nReferences\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\n[1] https://www.openssl.org/policies/releasestrat.html\n\n[2] https://distrowatch.com/\n\n[3] https://lists.alpinelinux.org/~alpine/devel/%3CCA%2BT2pCGFeh30aEi43hAvJ3yoHBijABy_U62wfjhVmf3FmbNUUg%40mail.gmail.com%3E\n\n[4] https://www.gentoo.org/support/news-items/2021-01-05-libressl-support-discontinued.html\n\n[5] https://hardenedbsd.org/article/shawn-webb/2018-04-30/hardenedbsd-switching-back-openssl\n\n[6] https://voidlinux.org/news/2021/02/OpenSSL.html\n\n[7] https://www.openssl.org/docs/OpenSSL300Design.html\n\n[8] https://www.gentoo.org/support/news-items/2021-01-05-libressl-support-discontinued.html\n\n[9] https://openports.se/security/openssl/1.1\n\n[10] https://forums.swift.org/t/rfc-moving-swiftnio-ssl-to-boringssl/18280\n\n[11] https://github.com/libressl-portable/portable/issues/455\n\n[12] https://www.openssl.org/policies/releasestrat.html\n\n[13] https://bugs.python.org/issue43466"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.242372"},"created":{"kind":"timestamp","value":"2020-10-27T00:00:00","string":"2020-10-27T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0644/\",\n \"authors\": [\n \"Christian Heimes\"\n ],\n \"pep_number\": \"0644\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":41,"cells":{"id":{"kind":"string","value":"0574"},"text":{"kind":"string","value":"PEP: 574 Title: Pickle protocol 5 with out-of-band data Version:\n$Revision$ Last-Modified: $Date$ Author: Antoine Pitrou\n BDFL-Delegate: Alyssa Coghlan Status: Final Type:\nStandards Track Content-Type: text/x-rst Created: 23-Mar-2018\nPython-Version: 3.8 Post-History: 28-Mar-2018, 30-Apr-2019 Resolution:\nhttps://mail.python.org/pipermail/python-dev/2019-May/157284.html\n\nAbstract\n\nThis PEP proposes to standardize a new pickle protocol version, and\naccompanying APIs to take full advantage of it:\n\n1. A new pickle protocol version (5) to cover the extra metadata needed\n for out-of-band data buffers.\n2. A new PickleBuffer type for __reduce_ex__ implementations to return\n out-of-band data buffers.\n3. A new buffer_callback parameter when pickling, to handle out-of-band\n data buffers.\n4. A new buffers parameter when unpickling to provide out-of-band data\n buffers.\n\nThe PEP guarantees unchanged behaviour for anyone not using the new\nAPIs.\n\nRationale\n\nThe pickle protocol was originally designed in 1995 for on-disk\npersistency of arbitrary Python objects. The performance of a 1995-era\nstorage medium probably made it irrelevant to focus on performance\nmetrics such as use of RAM bandwidth when copying temporary data before\nwriting it to disk.\n\nNowadays the pickle protocol sees a growing use in applications where\nmost of the data isn't ever persisted to disk (or, when it is, it uses a\nportable format instead of Python-specific). Instead, pickle is being\nused to transmit data and commands from one process to another, either\non the same machine or on multiple machines. Those applications will\nsometimes deal with very large data (such as Numpy arrays or Pandas\ndataframes) that need to be transferred around. 
For those applications,\npickle is currently wasteful as it imposes spurious memory copies of the\ndata being serialized.\n\nAs a matter of fact, the standard multiprocessing module uses pickle for\nserialization, and therefore also suffers from this problem when sending\nlarge data to another process.\n\nThird-party Python libraries, such as Dask[1], PyArrow[2] and\nIPyParallel[3], have started implementing alternative serialization\nschemes with the explicit goal of avoiding copies on large data.\nImplementing a new serialization scheme is difficult and often leads to\nreduced generality (since many Python objects support pickle but not the\nnew serialization scheme). Falling back on pickle for unsupported types\nis an option, but then you get back the spurious memory copies you\nwanted to avoid in the first place. For example, dask is able to avoid\nmemory copies for Numpy arrays and built-in containers thereof (such as\nlists or dicts containing Numpy arrays), but if a large Numpy array is\nan attribute of a user-defined object, dask will serialize the\nuser-defined object as a pickle stream, leading to memory copies.\n\nThe common theme of these third-party serialization efforts is to\ngenerate a stream of object metadata (which contains pickle-like\ninformation about the objects being serialized) and a separate stream of\nzero-copy buffer objects for the payloads of large objects. Note that,\nin this scheme, small objects such as ints, etc. can be dumped together\nwith the metadata stream. Refinements can include opportunistic\ncompression of large data depending on its type and layout, like dask\ndoes.\n\nThis PEP aims to make pickle usable in a way where large data is handled\nas a separate stream of zero-copy buffers, letting the application\nhandle those buffers optimally.\n\nExample\n\nTo keep the example simple and avoid requiring knowledge of third-party\nlibraries, we will focus here on a bytearray object (but the issue is\nconceptually the same with more sophisticated objects such as Numpy\narrays). Like most objects, the bytearray object isn't immediately\nunderstood by the pickle module and must therefore specify its\ndecomposition scheme.\n\nHere is how a bytearray object currently decomposes for pickling:\n\n >>> b.__reduce_ex__(4)\n (, (b'abc',), None)\n\nThis is because the bytearray.__reduce_ex__ implementation reads morally\nas follows:\n\n class bytearray:\n\n def __reduce_ex__(self, protocol):\n if protocol == 4:\n return type(self), bytes(self), None\n # Legacy code for earlier protocols omitted\n\nIn turn it produces the following pickle code:\n\n >>> pickletools.dis(pickletools.optimize(pickle.dumps(b, protocol=4)))\n 0: \\x80 PROTO 4\n 2: \\x95 FRAME 30\n 11: \\x8c SHORT_BINUNICODE 'builtins'\n 21: \\x8c SHORT_BINUNICODE 'bytearray'\n 32: \\x93 STACK_GLOBAL\n 33: C SHORT_BINBYTES b'abc'\n 38: \\x85 TUPLE1\n 39: R REDUCE\n 40: . 
STOP\n\n(the call to pickletools.optimize above is only meant to make the pickle\nstream more readable by removing the MEMOIZE opcodes)\n\nWe can notice several things about the bytearray's payload (the sequence\nof bytes b'abc'):\n\n- bytearray.__reduce_ex__ produces a first copy by instantiating a new\n bytes object from the bytearray's data.\n- pickle.dumps produces a second copy when inserting the contents of\n that bytes object into the pickle stream, after the SHORT_BINBYTES\n opcode.\n- Furthermore, when deserializing the pickle stream, a temporary bytes\n object is created when the SHORT_BINBYTES opcode is encountered\n (inducing a data copy).\n\nWhat we really want is something like the following:\n\n- bytearray.__reduce_ex__ produces a view of the bytearray's data.\n- pickle.dumps doesn't try to copy that data into the pickle stream\n but instead passes the buffer view to its caller (which can decide\n on the most efficient handling of that buffer).\n- When deserializing, pickle.loads takes the pickle stream and the\n buffer view separately, and passes the buffer view directly to the\n bytearray constructor.\n\nWe see that several conditions are required for the above to work:\n\n- __reduce__ or __reduce_ex__ must be able to return something that\n indicates a serializable no-copy buffer view.\n- The pickle protocol must be able to represent references to such\n buffer views, instructing the unpickler that it may have to get the\n actual buffer out of band.\n- The pickle.Pickler API must provide its caller with a way to receive\n such buffer views while serializing.\n- The pickle.Unpickler API must similarly allow its caller to provide\n the buffer views required for deserialization.\n- For compatibility, the pickle protocol must also be able to contain\n direct serializations of such buffer views, such that current uses\n of the pickle API don't have to be modified if they are not\n concerned with memory copies.\n\nProducer API\n\nWe are introducing a new type pickle.PickleBuffer which can be\ninstantiated from any buffer-supporting object, and is specifically\nmeant to be returned from __reduce__ implementations:\n\n class bytearray:\n\n def __reduce_ex__(self, protocol):\n if protocol >= 5:\n return type(self), (PickleBuffer(self),), None\n # Legacy code for earlier protocols omitted\n\nPickleBuffer is a simple wrapper that doesn't have all the memoryview\nsemantics and functionality, but is specifically recognized by the\npickle module if protocol 5 or higher is enabled. It is an error to try\nto serialize a PickleBuffer with pickle protocol version 4 or earlier.\n\nOnly the raw data of the PickleBuffer will be considered by the pickle\nmodule. Any type-specific metadata (such as shapes or datatype) must be\nreturned separately by the type's __reduce__ implementation, as is\nalready the case.\n\nPickleBuffer objects\n\nThe PickleBuffer class supports a very simple Python API. Its\nconstructor takes a single PEP 3118-compatible object. PickleBuffer\nobjects themselves support the buffer protocol, so consumers can call\nmemoryview(...) on them to get additional information about the\nunderlying buffer (such as the original type, shape, etc.). In addition,\nPickleBuffer objects have the following methods:\n\nraw()\n\n Return a memoryview of the raw memory bytes underlying the\n PickleBuffer, erasing any shape, strides and format information. 
This\n is required to handle Fortran-contiguous buffers correctly in the pure\n Python pickle implementation.\n\nrelease()\n\n Release the PickleBuffer's underlying buffer, making it unusable.\n\nOn the C side, a simple API will be provided to create and inspect\nPickleBuffer objects:\n\nPyObject *PyPickleBuffer_FromObject(PyObject *obj)\n\n Create a PickleBuffer object holding a view over the PEP\n 3118-compatible obj.\n\nPyPickleBuffer_Check(PyObject *obj)\n\n Return whether obj is a PickleBuffer instance.\n\nconst Py_buffer *PyPickleBuffer_GetBuffer(PyObject *picklebuf)\n\n Return a pointer to the internal Py_buffer owned by the PickleBuffer\n instance. An exception is raised if the buffer is released.\n\nint PyPickleBuffer_Release(PyObject *picklebuf)\n\n Release the PickleBuffer instance's underlying buffer.\n\nBuffer requirements\n\nPickleBuffer can wrap any kind of buffer, including non-contiguous\nbuffers. However, it is required that __reduce__ only returns a\ncontiguous PickleBuffer (contiguity here is meant in the PEP 3118 sense:\neither C-ordered or Fortran-ordered). Non-contiguous buffers will raise\nan error when pickled.\n\nThis restriction is primarily an ease-of-implementation issue for the\npickle module but also other consumers of out-of-band buffers. The\nsimplest solution on the provider side is to return a contiguous copy of\na non-contiguous buffer; a sophisticated provider, though, may decide\ninstead to return a sequence of contiguous sub-buffers.\n\nConsumer API\n\npickle.Pickler.__init__ and pickle.dumps are augmented with an\nadditional buffer_callback parameter:\n\n class Pickler:\n def __init__(self, file, protocol=None, ..., buffer_callback=None):\n \"\"\"\n If *buffer_callback* is None (the default), buffer views are\n serialized into *file* as part of the pickle stream.\n\n If *buffer_callback* is not None, then it can be called any number\n of times with a buffer view. If the callback returns a false value\n (such as None), the given buffer is out-of-band; otherwise the\n buffer is serialized in-band, i.e. inside the pickle stream.\n\n The callback should arrange to store or transmit out-of-band buffers\n without changing their order.\n\n It is an error if *buffer_callback* is not None and *protocol* is\n None or smaller than 5.\n \"\"\"\n\n def pickle.dumps(obj, protocol=None, *, ..., buffer_callback=None):\n \"\"\"\n See above for *buffer_callback*.\n \"\"\"\n\npickle.Unpickler.__init__ and pickle.loads are augmented with an\nadditional buffers parameter:\n\n class Unpickler:\n def __init__(file, *, ..., buffers=None):\n \"\"\"\n If *buffers* is not None, it should be an iterable of buffer-enabled\n objects that is consumed each time the pickle stream references\n an out-of-band buffer view. 
Such buffers have been given in order\n to the *buffer_callback* of a Pickler object.\n\n If *buffers* is None (the default), then the buffers are taken\n from the pickle stream, assuming they are serialized there.\n It is an error for *buffers* to be None if the pickle stream\n was produced with a non-None *buffer_callback*.\n \"\"\"\n\n def pickle.loads(data, *, ..., buffers=None):\n \"\"\"\n See above for *buffers*.\n \"\"\"\n\nProtocol changes\n\nThree new opcodes are introduced:\n\n- BYTEARRAY8 creates a bytearray from the data following it in the\n pickle stream and pushes it on the stack (just like BINBYTES8 does\n for bytes objects);\n- NEXT_BUFFER fetches a buffer from the buffers iterable and pushes it\n on the stack.\n- READONLY_BUFFER makes a readonly view of the top of the stack.\n\nWhen pickling encounters a PickleBuffer, that buffer can be considered\nin-band or out-of-band depending on the following conditions:\n\n- if no buffer_callback is given, the buffer is in-band;\n- if a buffer_callback is given, it is called with the buffer. If the\n callback returns a true value, the buffer is in-band; if the\n callback returns a false value, the buffer is out-of-band.\n\nAn in-band buffer is serialized as follows:\n\n- If the buffer is writable, it is serialized into the pickle stream\n as if it were a bytearray object.\n- If the buffer is readonly, it is serialized into the pickle stream\n as if it were a bytes object.\n\nAn out-of-band buffer is serialized as follows:\n\n- If the buffer is writable, a NEXT_BUFFER opcode is appended to the\n pickle stream.\n- If the buffer is readonly, a NEXT_BUFFER opcode is appended to the\n pickle stream, followed by a READONLY_BUFFER opcode.\n\nThe distinction between readonly and writable buffers is motivated below\n(see \"Mutability\").\n\nSide effects\n\nImproved in-band performance\n\nEven in-band pickling can be improved by returning a PickleBuffer\ninstance from __reduce_ex__, as one copy is avoided on the serialization\npath[4][5].\n\nCaveats\n\nMutability\n\nPEP 3118 buffers can be readonly or writable. Some objects, such as\nNumpy arrays, need to be backed by a mutable buffer for full operation.\nPickle consumers that use the buffer_callback and buffers arguments will\nhave to be careful to recreate mutable buffers. When doing I/O, this\nimplies using buffer-passing API variants such as readinto (which are\nalso often preferable for performance).\n\nData sharing\n\nIf you pickle and then unpickle an object in the same process, passing\nout-of-band buffer views, then the unpickled object may be backed by the\nsame buffer as the original pickled object.\n\nFor example, it might be reasonable to implement reduction of a Numpy\narray as follows (crucial metadata such as shapes is omitted for\nsimplicity):\n\n class ndarray:\n\n def __reduce_ex__(self, protocol):\n if protocol == 5:\n return numpy.frombuffer, (PickleBuffer(self), self.dtype)\n # Legacy code for earlier protocols omitted\n\nThen simply passing the PickleBuffer around from dumps to loads will\nproduce a new Numpy array sharing the same underlying memory as the\noriginal Numpy object (and, incidentally, keeping it alive):\n\n >>> import numpy as np\n >>> a = np.zeros(10)\n >>> a[0]\n 0.0\n >>> buffers = []\n >>> data = pickle.dumps(a, protocol=5, buffer_callback=buffers.append)\n >>> b = pickle.loads(data, buffers=buffers)\n >>> b[0] = 42\n >>> a[0]\n 42.0\n\nThis won't happen with the traditional pickle API (i.e. 
without passing\nbuffers and buffer_callback parameters), because then the buffer view is\nserialized inside the pickle stream with a copy.\n\nRejected alternatives\n\nUsing the existing persistent load interface\n\nThe pickle persistence interface is a way of storing references to\ndesignated objects in the pickle stream while handling their actual\nserialization out of band. For example, one might consider the following\nfor zero-copy serialization of bytearrays:\n\n class MyPickle(pickle.Pickler):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.buffers = []\n\n def persistent_id(self, obj):\n if type(obj) is not bytearray:\n return None\n else:\n index = len(self.buffers)\n self.buffers.append(obj)\n return ('bytearray', index)\n\n\n class MyUnpickle(pickle.Unpickler):\n\n def __init__(self, *args, buffers, **kwargs):\n super().__init__(*args, **kwargs)\n self.buffers = buffers\n\n def persistent_load(self, pid):\n type_tag, index = pid\n if type_tag == 'bytearray':\n return self.buffers[index]\n else:\n assert 0 # unexpected type\n\nThis mechanism has two drawbacks:\n\n- Each pickle consumer must reimplement Pickler and Unpickler\n subclasses, with custom code for each type of interest. Essentially,\n N pickle consumers end up each implementing custom code for M\n producers. This is difficult (especially for sophisticated types\n such as Numpy arrays) and poorly scalable.\n\n- Each object encountered by the pickle module (even simple built-in\n objects such as ints and strings) triggers a call to the user's\n persistent_id() method, leading to a possible performance drop\n compared to nominal.\n\n (the Python 2 cPickle module supported an undocumented\n inst_persistent_id() hook that was only called on non-built-in\n types; it was added in 1997 in order to alleviate the performance\n issue of calling persistent_id, presumably at ZODB's request)\n\nPassing a sequence of buffers in buffer_callback\n\nBy passing a sequence of buffers, rather than a single buffer, we would\npotentially save on function call overhead in case a large number of\nbuffers are produced during serialization. This would need additional\nsupport in the Pickler to save buffers before calling the callback.\nHowever, it would also prevent the buffer callback from returning a\nboolean to indicate whether a buffer is to be serialized in-band or\nout-of-band.\n\nWe consider that having a large number of buffers to serialize is an\nunlikely case, and decided to pass a single buffer to the buffer\ncallback.\n\nAllow serializing a PickleBuffer in protocol 4 and earlier\n\nIf we were to allow serializing a PickleBuffer in protocols 4 and\nearlier, it would actually make a supplementary memory copy when the\nbuffer is mutable. Indeed, a mutable PickleBuffer would serialize as a\nbytearray object in those protocols (that is a first copy), and\nserializing the bytearray object would call bytearray.__reduce_ex__\nwhich returns a bytes object (that is a second copy).\n\nTo prevent __reduce__ implementors from introducing involuntary\nperformance regressions, we decided to reject PickleBuffer when the\nprotocol is smaller than 5. This forces implementors to switch to\n__reduce_ex__ and implement protocol-dependent serialization, taking\nadvantage of the best path for each protocol (or at least treat protocol\n5 and upwards separately from protocols 4 and downwards).\n\nImplementation\n\nThe PEP was initially implemented in the author's GitHub fork[6]. 
It was\nlater merged into Python 3.8[7].\n\nA backport for Python 3.6 and 3.7 is downloadable from PyPI [8].\n\nSupport for pickle protocol 5 and out-of-band buffers was added to Numpy\n[9].\n\nSupport for pickle protocol 5 and out-of-band buffers was added to the\nApache Arrow Python bindings[10].\n\nRelated work\n\nDask.distributed implements a custom zero-copy serialization with\nfallback to pickle[11].\n\nPyArrow implements zero-copy component-based serialization for a few\nselected types[12].\n\nPEP 554 proposes hosting multiple interpreters in a single process, with\nprovisions for transferring buffers between interpreters as a\ncommunication scheme.\n\nAcknowledgements\n\nThanks to the following people for early feedback: Alyssa Coghlan,\nOlivier Grisel, Stefan Krah, MinRK, Matt Rocklin, Eric Snow.\n\nThanks to Pierre Glaser and Olivier Grisel for experimenting with the\nimplementation.\n\nReferences\n\nCopyright\n\nThis document has been placed into the public domain.\n\n[1] Dask.distributed -- A lightweight library for distributed computing\nin Python https://distributed.readthedocs.io/\n\n[2] PyArrow -- A cross-language development platform for in-memory data\nhttps://arrow.apache.org/docs/python/\n\n[3] IPyParallel -- Using IPython for parallel computing\nhttps://ipyparallel.readthedocs.io/\n\n[4] Benchmark zero-copy pickling in Apache Arrow\nhttps://github.com/apache/arrow/pull/2161#issuecomment-407859213\n\n[5] Benchmark pickling Numpy arrays with different pickle protocols\nhttps://github.com/numpy/numpy/issues/11161#issuecomment-424035962\n\n[6] pickle5 branch on GitHub\nhttps://github.com/pitrou/cpython/tree/pickle5\n\n[7] PEP 574 Pull Request on GitHub\nhttps://github.com/python/cpython/pull/7076\n\n[8] pickle5 project on PyPI https://pypi.org/project/pickle5/\n\n[9] Pull request: Support pickle protocol 5 in Numpy\nhttps://github.com/numpy/numpy/pull/12011\n\n[10] Pull request: Experimental zero-copy pickling in Apache Arrow\nhttps://github.com/apache/arrow/pull/2161\n\n[11] Dask.distributed custom serialization\nhttps://distributed.readthedocs.io/en/latest/serialization.html\n\n[12] PyArrow IPC and component-based serialization\nhttps://arrow.apache.org/docs/python/ipc.html#component-based-serialization"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.270914"},"created":{"kind":"timestamp","value":"2018-03-23T00:00:00","string":"2018-03-23T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0574/\",\n \"authors\": [\n \"Antoine Pitrou\"\n ],\n \"pep_number\": \"0574\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":42,"cells":{"id":{"kind":"string","value":"0538"},"text":{"kind":"string","value":"PEP: 538 Title: Coercing the legacy C locale to a UTF-8 based locale\nVersion: $Revision$ Last-Modified: $Date$ Author: Alyssa Coghlan\n BDFL-Delegate: INADA Naoki Status: Final Type:\nStandards Track Content-Type: text/x-rst Created: 28-Dec-2016\nPython-Version: 3.7 Post-History: 03-Jan-2017, 07-Jan-2017, 05-Mar-2017,\n09-May-2017 Resolution:\nhttps://mail.python.org/pipermail/python-dev/2017-May/148035.html\n\nAbstract\n\nAn ongoing challenge with Python 3 on *nix systems is the conflict\nbetween needing to use the configured locale encoding by default for\nconsistency with other locale-aware components in the same process or\nsubprocesses, and the fact that the standard C locale (as defined in\nPOSIX:2001) typically implies a default text encoding 
of ASCII, which is\nentirely inadequate for the development of networked services and client\napplications in a multilingual world.\n\nPEP 540 proposes a change to CPython's handling of the legacy C locale\nsuch that CPython will assume the use of UTF-8 in such environments,\nrather than persisting with the demonstrably problematic assumption of\nASCII as an appropriate encoding for communicating with operating system\ninterfaces. This is a good approach for cases where network encoding\ninteroperability is a more important concern than local encoding\ninteroperability.\n\nHowever, it comes at the cost of making CPython's encoding assumptions\ndiverge from those of other locale-aware components in the same process,\nas well as those of components running in subprocesses that share the\nsame environment.\n\nThis can cause interoperability problems with some extension modules\n(such as GNU readline's command line history editing), as well as with\ncomponents running in subprocesses (such as older Python runtimes).\n\nIt also requires non-trivial changes to the internals of how CPython\nitself works, rather than relying primarily on existing configuration\nsettings that are supported by Python versions prior to Python 3.7.\n\nAccordingly, this PEP proposes that independently of the UTF-8 mode\nproposed in PEP 540, the way the CPython implementation handles the\ndefault C locale be changed to be roughly equivalent to the following\nexisting configuration settings (supported since Python 3.1):\n\n LC_CTYPE=C.UTF-8\n PYTHONIOENCODING=utf-8:surrogateescape\n\nThe exact target locale for coercion will be chosen from a predefined\nlist at runtime based on the actually available locales.\n\nThe reinterpreted locale settings will be written back to the\nenvironment so they're visible to other components in the same process\nand in subprocesses, but the changed PYTHONIOENCODING default will be\nmade implicit in order to avoid causing compatibility problems with\nPython 2 subprocesses that don't provide the surrogateescape error\nhandler.\n\nThe new legacy locale coercion behavior can be disabled either by\nsetting LC_ALL (which may still lead to a Unicode compatibility warning)\nor by setting the new PYTHONCOERCECLOCALE environment variable to 0.\n\nWith this change, any *nix platform that does not offer at least one of\nthe C.UTF-8, C.utf8 or UTF-8 locales as part of its standard\nconfiguration would only be considered a fully supported platform for\nCPython 3.7+ deployments when a suitable locale other than the default C\nlocale is configured explicitly (e.g. en_AU.UTF-8, zh_CN.gb18030). 
If\nPEP 540 is accepted in addition to this PEP, then pure Python modules\nwould also be supported when using the proposed PYTHONUTF8 mode, but\nexpectations for full Unicode compatibility in extension modules would\ncontinue to be limited to the platforms covered by this PEP.\n\nAs it only reflects a change in default settings rather than a\nfundamentally new capability, redistributors (such as Linux\ndistributions) with a narrower target audience than the upstream CPython\ndevelopment team may also choose to opt in to this locale coercion\nbehaviour for the Python 3.6.x series by applying the necessary changes\nas a downstream patch.\n\nImplementation Notes\n\nAttempting to implement the PEP as originally accepted showed that the\nproposal to emit locale coercion and compatibility warnings by default\nsimply wasn't practical (there were too many cases where previously\nworking code failed because of the warnings, rather than because of\nlatent locale handling defects in the affected code).\n\nAs a result, the PY_WARN_ON_C_LOCALE config flag was removed, and\nreplaced with a runtime PYTHONCOERCECLOCALE=warn environment variable\nsetting that allows developers and system integrators to opt-in to\nreceiving locale coercion and compatibility warnings, without emitting\nthem by default.\n\nThe output examples in the PEP itself have also been updated to remove\nthe warnings and make them easier to read.\n\nBackground\n\nWhile the CPython interpreter is starting up, it may need to convert\nfrom the char * format to the wchar_t * format, or from one of those\nformats to PyUnicodeObject *, in a way that's consistent with the locale\nsettings of the overall system. It handles these cases by relying on the\noperating system to do the conversion and then ensuring that the text\nencoding name reported by sys.getfilesystemencoding() matches the\nencoding used during this early bootstrapping process.\n\nOn Windows, the limitations of the mbcs format used by default in these\nconversions proved sufficiently problematic that PEP 528 and PEP 529\nwere implemented to bypass the operating system supplied interfaces for\nbinary data handling and force the use of UTF-8 instead.\n\nOn Mac OS X, iOS, and Android, many components, including CPython,\nalready assume the use of UTF-8 as the system encoding, regardless of\nthe locale setting. However, this isn't the case for all components, and\nthe discrepancy can cause problems in some situations (for example, when\nusing the GNU readline module [16]).\n\nOn non-Apple and non-Android *nix systems, these operations are handled\nusing the C locale system in glibc, which has the following\ncharacteristics[1]:\n\n- by default, all processes start in the C locale, which uses ASCII\n for these conversions. This is almost never what anyone doing\n multilingual text processing actually wants (including CPython and\n C/C++ GUI frameworks).\n- calling setlocale(LC_ALL, \"\") reconfigures the active locale based\n on the locale categories configured in the current process\n environment\n- if the locale requested by the current environment is unknown, or no\n specific locale is configured, then the default C locale will remain\n active\n\nThe specific locale category that covers the APIs that CPython depends\non is LC_CTYPE, which applies to \"classification and conversion of\ncharacters, and to multibyte and wide characters\"[2]. 
Accordingly,\nCPython includes the following key calls to setlocale:\n\n- in the main python binary, CPython calls setlocale(LC_ALL, \"\") to\n configure the entire C locale subsystem according to the process\n environment. It does this prior to making any calls into the shared\n CPython library\n- in Py_Initialize, CPython calls setlocale(LC_CTYPE, \"\"), such that\n the configured locale settings for that category always match those\n set in the environment. It does this unconditionally, and it doesn't\n revert the process state change in Py_Finalize\n\n(This summary of the locale handling omits several technical details\nrelated to exactly where and when the text encoding declared as part of\nthe locale settings is used - see PEP 540 for further discussion, as\nthese particular details matter more when decoupling CPython from the\ndeclared C locale than they do when overriding the locale with one based\non UTF-8)\n\nThese calls are usually sufficient to provide sensible behaviour, but\nthey can still fail in the following cases:\n\n- SSH environment forwarding means that SSH clients may sometimes\n forward client locale settings to servers that don't have that\n locale installed. This leads to CPython running in the default\n ASCII-based C locale\n- some process environments (such as Linux containers) may not have\n any explicit locale configured at all. As with unknown locales, this\n leads to CPython running in the default ASCII-based C locale\n- on Android, rather than configuring the locale based on environment\n variables, the empty locale \"\" is treated as specifically requesting\n the \"C\" locale\n\nThe simplest way to deal with this problem for currently released\nversions of CPython is to explicitly set a more sensible locale when\nlaunching the application. For example:\n\n LC_CTYPE=C.UTF-8 python3 ...\n\nThe C.UTF-8 locale is a full locale definition that uses UTF-8 for the\nLC_CTYPE category, and the same settings as the C locale for all other\ncategories (including LC_COLLATE). It is offered by a number of Linux\ndistributions (including Debian, Ubuntu, Fedora, Alpine and Android) as\nan alternative to the ASCII-based C locale. Some other platforms (such\nas HP-UX) offer an equivalent locale definition under the name C.utf8.\n\nMac OS X and other *BSD systems have taken a different approach: instead\nof offering a C.UTF-8 locale, they offer a partial UTF-8 locale that\nonly defines the LC_CTYPE category. On such systems, the preferred\nenvironmental locale adjustment is to set LC_CTYPE=UTF-8 rather than to\nset LC_ALL or LANG.[3]\n\nIn the specific case of Docker containers and similar technologies, the\nappropriate locale setting can be specified directly in the container\nimage definition.\n\nAnother common failure case is developers specifying LANG=C in order to\nsee otherwise translated user interface messages in English, rather than\nthe more narrowly scoped LC_MESSAGES=C or LANGUAGE=en.\n\nRelationship with other PEPs\n\nThis PEP shares a common problem statement with PEP 540 (improving\nPython 3's behaviour in the default C locale), but diverges markedly in\nthe proposed solution:\n\n- PEP 540 proposes to entirely decouple CPython's default text\n encoding from the C locale system in that case, allowing text\n handling inconsistencies to arise between CPython and other\n locale-aware components running in the same process and in\n subprocesses. 
This approach aims to make CPython behave less like a\n locale-aware application, and more like locale-independent language\n runtimes like those for Go, Node.js (V8), and Rust\n- this PEP proposes to override the legacy C locale with a more\n recently defined locale that uses UTF-8 as its default text\n encoding. This means that the text encoding override will apply not\n only to CPython, but also to any locale-aware extension modules\n loaded into the current process, as well as to locale-aware\n applications invoked in subprocesses that inherit their environment\n from the parent process. This approach aims to retain CPython's\n traditional strong support for integration with other locale-aware\n components while also actively helping to push forward the adoption\n and standardisation of the C.UTF-8 locale as a Unicode-aware\n replacement for the legacy C locale in the wider C/C++ ecosystem\n\nAfter reviewing both PEPs, it became clear that they didn't actually\nconflict at a technical level, and the proposal in PEP 540 offered a\nsuperior option in cases where no suitable locale was available, as well\nas offering a better reference behaviour for platforms where the notion\nof a \"locale encoding\" doesn't make sense (for example, embedded systems\nrunning MicroPython rather than the CPython reference interpreter).\n\nMeanwhile, this PEP offered improved compatibility with other\nlocale-aware components, and an approach more amenable to being\nbackported to Python 3.6 by downstream redistributors.\n\nAs a result, this PEP was amended to refer to PEP 540 as a complementary\nsolution that offered improved behaviour when none of the standard UTF-8\nbased locales were available, as well as extending the changes in the\ndefault settings to APIs that aren't currently independently\nconfigurable (such as the default encoding and error handler for\nopen()).\n\nThe availability of PEP 540 also meant that the LC_CTYPE=en_US.UTF-8\nlegacy fallback was removed from the list of UTF-8 locales tried as a\ncoercion target, with the expectation being that CPython will instead\nrely solely on the proposed PYTHONUTF8 mode in such cases.\n\nMotivation\n\nWhile Linux container technologies like Docker, Kubernetes, and\nOpenShift are best known for their use in web service development, the\nrelated container formats and execution models are also being adopted\nfor Linux command line application development. 
Technologies like Gnome\nFlatpak[4] and Ubuntu Snappy[5] further aim to bring these same\ntechniques to Linux GUI application development.\n\nWhen using Python 3 for application development in these contexts, it\nisn't uncommon to see text encoding related errors akin to the\nfollowing:\n\n $ docker run --rm fedora:25 python3 -c 'print(\"ℙƴ☂ℌøἤ\")'\n Unable to decode the command from the command line:\n UnicodeEncodeError: 'utf-8' codec can't encode character '\\udce2' in position 7: surrogates not allowed\n $ docker run --rm ncoghlan/debian-python python3 -c 'print(\"ℙƴ☂ℌøἤ\")'\n Unable to decode the command from the command line:\n UnicodeEncodeError: 'utf-8' codec can't encode character '\\udce2' in position 7: surrogates not allowed\n\nEven though the same command is likely to work fine when run locally:\n\n $ python3 -c 'print(\"ℙƴ☂ℌøἤ\")'\n ℙƴ☂ℌøἤ\n\nThe source of the problem can be seen by instead running the locale\ncommand in the three environments:\n\n $ locale | grep -E 'LC_ALL|LC_CTYPE|LANG'\n LANG=en_AU.UTF-8\n LC_CTYPE=\"en_AU.UTF-8\"\n LC_ALL=\n $ docker run --rm fedora:25 locale | grep -E 'LC_ALL|LC_CTYPE|LANG'\n LANG=\n LC_CTYPE=\"POSIX\"\n LC_ALL=\n $ docker run --rm ncoghlan/debian-python locale | grep -E 'LC_ALL|LC_CTYPE|LANG'\n LANG=\n LANGUAGE=\n LC_CTYPE=\"POSIX\"\n LC_ALL=\n\nIn this particular example, we can see that the host system locale is\nset to \"en_AU.UTF-8\", so CPython uses UTF-8 as the default text\nencoding. By contrast, the base Docker images for Fedora and Debian\ndon't have any specific locale set, so they use the POSIX locale by\ndefault, which is an alias for the ASCII-based default C locale.\n\nThe simplest way to get Python 3 (regardless of the exact version) to\nbehave sensibly in Fedora and Debian based containers is to run it in\nthe C.UTF-8 locale that both distros provide:\n\n $ docker run --rm -e LC_CTYPE=C.UTF-8 fedora:25 python3 -c 'print(\"ℙƴ☂ℌøἤ\")'\n ℙƴ☂ℌøἤ\n $ docker run --rm -e LC_CTYPE=C.UTF-8 ncoghlan/debian-python python3 -c 'print(\"ℙƴ☂ℌøἤ\")'\n ℙƴ☂ℌøἤ\n\n $ docker run --rm -e LC_CTYPE=C.UTF-8 fedora:25 locale | grep -E 'LC_ALL|LC_CTYPE|LANG'\n LANG=\n LC_CTYPE=C.UTF-8\n LC_ALL=\n $ docker run --rm -e LC_CTYPE=C.UTF-8 ncoghlan/debian-python locale | grep -E 'LC_ALL|LC_CTYPE|LANG'\n LANG=\n LANGUAGE=\n LC_CTYPE=C.UTF-8\n LC_ALL=\n\nThe Alpine Linux based Python images provided by Docker, Inc. already\nuse the C.UTF-8 locale by default:\n\n $ docker run --rm python:3 python3 -c 'print(\"ℙƴ☂ℌøἤ\")'\n ℙƴ☂ℌøἤ\n $ docker run --rm python:3 locale | grep -E 'LC_ALL|LC_CTYPE|LANG'\n LANG=C.UTF-8\n LANGUAGE=\n LC_CTYPE=\"C.UTF-8\"\n LC_ALL=\n\nSimilarly, for custom container images (i.e. 
those adding additional\ncontent on top of a base distro image), a more suitable locale can be\nset in the image definition so everything just works by default.\nHowever, it would provide a much nicer and more consistent user\nexperience if CPython were able to just deal with this problem\nautomatically rather than relying on redistributors or end users to\nhandle it through system configuration changes.\n\nWhile the glibc developers are working towards making the C.UTF-8 locale\nuniversally available for use by glibc based applications like\nCPython[6], this unfortunately doesn't help on platforms that ship older\nversions of glibc without that feature, and also don't provide C.UTF-8\n(or an equivalent) as an on-disk locale the way Debian and Fedora do.\nThese platforms are considered out of scope for this PEP - see PEP 540\nfor further discussion of possible options for improving CPython's\ndefault behaviour in such environments.\n\nDesign Principles\n\nThe above motivation leads to the following core design principles for\nthe proposed solution:\n\n- if a locale other than the default C locale is explicitly\n configured, we'll continue to respect it\n- as far as is feasible, any changes made will use existing\n configuration options\n- Python's runtime behaviour in potential coercion target locales\n should be identical regardless of whether the locale was set\n explicitly in the environment or implicitly as a locale coercion\n target\n- for Python 3.7, if we're changing the locale setting without an\n explicit config option, we'll emit a warning on stderr that we're\n doing so rather than silently changing the process configuration.\n This will alert application and system integrators to the change,\n even if they don't closely follow the PEP process or Python release\n announcements. However, to minimize the chance of introducing new\n problems for end users, we'll do this without using the warnings\n system, so even running with -Werror won't turn it into a runtime\n exception. 
(Note: these warnings ended up being silenced by default.\n See the Implementation Note above for more details)\n- for Python 3.7, any changed defaults will offer some form of\n explicit \"off\" switch at build time, runtime, or both\n\nMinimizing the negative impact on systems currently correctly configured\nto use GB-18030 or another partially ASCII compatible universal encoding\nleads to the following design principle:\n\n- if a UTF-8 based Linux container is run on a host that is explicitly\n configured to use a non-UTF-8 encoding, and tries to exchange\n locally encoded data with that host rather than exchanging\n explicitly UTF-8 encoded data, CPython will endeavour to correctly\n round-trip host provided data that is concatenated or split solely\n at common ASCII compatible code points, but may otherwise emit\n nonsensical results.\n\nMinimizing the negative impact on systems and programs correctly\nconfigured to use an explicit locale category like LC_TIME, LC_MONETARY\nor LC_NUMERIC while otherwise running in the legacy C locale gives the\nfollowing design principles:\n\n- don't make any environmental changes that would alter any existing\n settings for locale categories other than LC_CTYPE (most notably:\n don't set LC_ALL or LANG)\n\nFinally, maintaining compatibility with running arbitrary subprocesses\nin orchestration use cases leads to the following design principle:\n\n- don't make any Python-specific environmental changes that might be\n incompatible with any still supported version of CPython (including\n CPython 2.7)\n\nSpecification\n\nTo better handle the cases where CPython would otherwise end up\nattempting to operate in the C locale, this PEP proposes that CPython\nautomatically attempt to coerce the legacy C locale to a UTF-8 based\nlocale for the LC_CTYPE category when it is run as a standalone command\nline application.\n\nIt further proposes to emit a warning on stderr if the legacy C locale\nis in effect for the LC_CTYPE category at the point where the language\nruntime itself is initialized, and the explicit environmental flag to\ndisable locale coercion is not set, in order to warn system and\napplication integrators that they're running CPython in an unsupported\nconfiguration.\n\nIn addition to these general changes, some additional Android-specific\nchanges are proposed to handle the differences in the behaviour of\nsetlocale on that platform.\n\nLegacy C locale coercion in the standalone Python interpreter binary\n\nWhen run as a standalone application, CPython has the opportunity to\nreconfigure the C locale before any locale dependent operations are\nexecuted in the process.\n\nThis means that it can change the locale settings not only for the\nCPython runtime, but also for any other locale-aware components running\nin the current process (e.g. as part of extension modules), as well as\nin subprocesses that inherit their environment from the current process.\n\nAfter calling setlocale(LC_ALL, \"\") to initialize the locale settings in\nthe current process, the main interpreter binary will be updated to\ninclude the following call:\n\n const char *ctype_loc = setlocale(LC_CTYPE, NULL);\n\nThis cryptic invocation is the API that C provides to query the current\nlocale setting without changing it. 
Given that query, it is possible to
check for exactly the C locale with strcmp:

    ctype_loc != NULL && strcmp(ctype_loc, "C") == 0 # true only in the C locale

This call also returns "C" when either no particular locale is set, or
the nominal locale is set to an alias for the C locale (such as POSIX).

Given this information, CPython can then attempt to coerce the locale to
one that uses UTF-8 rather than ASCII as the default encoding.

Three such locales will be tried:

- C.UTF-8 (available at least in Debian, Ubuntu, Alpine, and Fedora
  25+, and expected to be available by default in a future version of
  glibc)
- C.utf8 (available at least in HP-UX)
- UTF-8 (available in at least some *BSD variants, including Mac OS X)

The coercion will be implemented by setting the LC_CTYPE environment
variable to the candidate locale name, such that future calls to
setlocale() will see it, as will other components looking for those
settings (such as GUI development frameworks and Python's own locale
module).

To allow for better cross-platform binary portability and to adjust
automatically to future changes in locale availability, these checks
will be implemented at runtime on all platforms other than Windows,
rather than attempting to determine which locales to try at compile
time.

When this locale coercion is activated, the following warning will be
printed on stderr, with the warning containing whichever locale was
successfully configured:

    Python detected LC_CTYPE=C: LC_CTYPE coerced to C.UTF-8 (set another
    locale or PYTHONCOERCECLOCALE=0 to disable this locale coercion behaviour).

(Note: this warning ended up being silenced by default. See the
Implementation Note above for more details)

As long as the current platform provides at least one of the candidate
UTF-8 based environments, this locale coercion will mean that the
standard Python binary and locale-aware extensions should once again
"just work" in the three main failure cases we're aware of (missing
locale settings, SSH forwarding of unknown locales via LANG or LC_CTYPE,
and developers explicitly requesting LANG=C).

The one case where failures may still occur is when stderr is
specifically being checked for no output, which can be resolved either
by configuring a locale other than the C locale, or else by using a
mechanism other than "there was no output on stderr" to check for
subprocess errors (e.g. checking process return codes).

If none of the candidate locales are successfully configured, or the
LC_ALL locale override is defined in the current process environment,
then initialization will continue in the C locale and the Unicode
compatibility warning described in the next section will be emitted just
as it would for any other application.

If PYTHONCOERCECLOCALE=0 is explicitly set, initialization will continue
in the C locale and the Unicode compatibility warning described in the
next section will be automatically suppressed.

The interpreter will always check for the PYTHONCOERCECLOCALE
environment variable at startup (even when running under the -E or -I
switches), as the locale coercion check necessarily takes place before
any command line argument processing.
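The coercion logic described above can be summarised with the following
illustrative Python model (the actual implementation is written in C and
runs during interpreter startup, before the locale module is even
available; the helper name here is purely illustrative):

    import locale
    import os

    _CANDIDATE_LOCALES = ("C.UTF-8", "C.utf8", "UTF-8")

    def coerce_c_locale():
        # Try each candidate coercion target in order, exporting the
        # first one that the platform actually provides.
        for name in _CANDIDATE_LOCALES:
            try:
                locale.setlocale(locale.LC_CTYPE, name)
            except locale.Error:
                continue  # candidate not available on this platform
            os.environ["LC_CTYPE"] = name  # visible to later setlocale() calls
            return name                    # and to subprocesses
        return None  # no candidate available: remain in the legacy C locale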
For consistency, the runtime check
to determine whether or not to suppress the locale compatibility warning
will be similarly independent of the -E and -I command line switches.

Legacy C locale warning during runtime initialization

By the time that Py_Initialize is called, arbitrary locale-dependent
operations may have taken place in the current process. This means that
by the time it is called, it is too late to reliably switch to a
different locale - doing so would introduce inconsistencies in decoded
text, even in the context of the standalone Python interpreter binary.

Accordingly, when Py_Initialize is called and CPython detects that the
configured locale is still the default C locale and
PYTHONCOERCECLOCALE=0 is not set, the following warning will be issued:

    Python runtime initialized with LC_CTYPE=C (a locale with default ASCII
    encoding), which may cause Unicode compatibility problems. Using C.UTF-8,
    C.utf8, or UTF-8 (if available) as alternative Unicode-compatible
    locales is recommended.

(Note: this warning ended up being silenced by default. See the
Implementation Note above for more details)

In this case, no actual change will be made to the locale settings.

Instead, the warning informs both system and application integrators
that they're running Python 3 in a configuration that we don't expect to
work properly.

The second sentence providing recommendations may eventually be
conditionally compiled based on the operating system (e.g. recommending
LC_CTYPE=UTF-8 on *BSD systems), but the initial implementation will
just use the common generic message shown above.

New build-time configuration options

While both of the above behaviours would be enabled by default, they
would also have new associated configuration options and preprocessor
definitions for the benefit of redistributors that want to override
those default settings.

The locale coercion behaviour would be controlled by the flag
--with[out]-c-locale-coercion, which would set the PY_COERCE_C_LOCALE
preprocessor definition.

The locale warning behaviour would be controlled by the flag
--with[out]-c-locale-warning, which would set the PY_WARN_ON_C_LOCALE
preprocessor definition.

(Note: this compile time warning option ended up being replaced by a
runtime PYTHONCOERCECLOCALE=warn option. See the Implementation Note
above for more details)

On platforms which don't use the autotools based build system (i.e.
Windows) these preprocessor variables would always be undefined.

Changes to the default error handling on the standard streams

Since Python 3.5, CPython has defaulted to using surrogateescape on the
standard streams (sys.stdin, sys.stdout) when it detects that the
current locale is C and no specific error handler has been set using
either the PYTHONIOENCODING environment variable or the
Py_setStandardStreamEncoding API.
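For reference, the error handlers actually in effect for a given
configuration can be inspected from Python itself (a quick check rather
than part of the proposal; the commented results are what Python 3.5+
reports under the legacy C locale):

    import sys
    # Under LANG=C on Python 3.5+: stdin and stdout use 'surrogateescape',
    # while stderr uses 'backslashreplace'.
    for stream in (sys.stdin, sys.stdout, sys.stderr):
        print(stream.name, stream.encoding, stream.errors)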
For other locales, the default error\nhandler for the standard streams is strict.\n\nIn order to preserve this behaviour without introducing any behavioural\ndiscrepancies between locale coercion and explicitly configuring a\nlocale, the coercion target locales (C.UTF-8, C.utf8, and UTF-8) will be\nadded to the list of locales that use surrogateescape as their default\nerror handler for the standard streams.\n\nNo changes are proposed to the default error handler for sys.stderr:\nthat will continue to be backslashreplace.\n\nChanges to locale settings on Android\n\nIndependently of the other changes in this PEP, CPython on Android\nsystems will be updated to call setlocale(LC_ALL, \"C.UTF-8\") where it\ncurrently calls setlocale(LC_ALL, \"\") and setlocale(LC_CTYPE, \"C.UTF-8\")\nwhere it currently calls setlocale(LC_CTYPE, \"\").\n\nThis Android-specific behaviour is being introduced due to the following\nAndroid-specific details:\n\n- on Android, passing \"\" to setlocale is equivalent to passing \"C\"\n- the C.UTF-8 locale is always available\n\nPlatform Support Changes\n\nA new \"Legacy C Locale\" section will be added to PEP 11 that states:\n\n- as of CPython 3.7, *nix platforms are expected to provide at least\n one of C.UTF-8 (full locale), C.utf8 (full locale) or UTF-8 (\n LC_CTYPE-only locale) as an alternative to the legacy C locale. Any\n Unicode related integration problems that occur only in the legacy C\n locale and cannot be reproduced in an appropriately configured\n non-ASCII locale will be closed as \"won't fix\".\n\nRationale\n\nImproving the handling of the C locale\n\nIt has been clear for some time that the C locale's default encoding of\nASCII is entirely the wrong choice for development of modern networked\nservices. Newer languages like Rust and Go have eschewed that default\nentirely, and instead made it a deployment requirement that systems be\nconfigured to use UTF-8 as the text encoding for operating system\ninterfaces. Similarly, Node.js assumes UTF-8 by default (a behaviour\ninherited from the V8 JavaScript engine) and requires custom build\nsettings to indicate it should use the system locale settings for\nlocale-aware operations. Both the JVM and the .NET CLR use UTF-16-LE as\ntheir primary encoding for passing text between applications and the\napplication runtime (i.e. 
the JVM/CLR, not the host operating system).\n\nThe challenge for CPython has been the fact that in addition to being\nused for network service development, it is also extensively used as an\nembedded scripting language in larger applications, and as a desktop\napplication development language, where it is more important to be\nconsistent with other locale-aware components sharing the same process,\nas well as with the user's desktop locale settings, than it is with the\nemergent conventions of modern network service development.\n\nThe core premise of this PEP is that for all of these use cases, the\nassumption of ASCII implied by the default \"C\" locale is the wrong\nchoice, and furthermore that the following assumptions are valid:\n\n- in desktop application use cases, the process locale will already be\n configured appropriately, and if it isn't, then that is an operating\n system or embedding application level problem that needs to be\n reported to and resolved by the operating system provider or\n application developer\n- in network service development use cases (especially those based on\n Linux containers), the process locale may not be configured at all,\n and if it isn't, then the expectation is that components will impose\n their own default encoding the way Rust, Go and Node.js do, rather\n than trusting the legacy C default encoding of ASCII the way CPython\n currently does\n\nDefaulting to \"surrogateescape\" error handling on the standard IO streams\n\nBy coercing the locale away from the legacy C default and its assumption\nof ASCII as the preferred text encoding, this PEP also disables the\nimplicit use of the \"surrogateescape\" error handler on the standard IO\nstreams that was introduced in Python 3.5 ([7]), as well as the\nautomatic use of surrogateescape when operating in PEP 540's proposed\nUTF-8 mode.\n\nRather than introducing yet another configuration option to adjust that\nbehaviour, this PEP instead proposes to extend the \"surrogateescape\"\ndefault for stdin and stderr error handling to also apply to the three\npotential coercion target locales.\n\nThe aim of this behaviour is to attempt to ensure that operating system\nprovided text values are typically able to be transparently passed\nthrough a Python 3 application even if it is incorrect in assuming that\nthat text has been encoded as UTF-8.\n\nIn particular, GB 18030[8] is a Chinese national text encoding standard\nthat handles all Unicode code points, that is formally incompatible with\nboth ASCII and UTF-8, but will nevertheless often tolerate processing as\nsurrogate escaped data - the points where GB 18030 reuses ASCII byte\nvalues in an incompatible way are likely to be invalid in UTF-8, and\nwill therefore be escaped and opaque to string processing operations\nthat split on or search for the relevant ASCII code points. 
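The escaping behaviour described above can be illustrated with a short
round-tripping sketch (the sample string matches the one used elsewhere
in this PEP; the variable names are illustrative):

    # GB-18030 encoded data passed through a nominally UTF-8 text layer
    # with the surrogateescape error handler.
    raw = "ℙƴ☂ℌøἤ\n".encode("gb18030")
    text = raw.decode("utf-8", errors="surrogateescape")  # undecodable bytes become
                                                          # lone surrogates
    assert text.encode("utf-8", errors="surrogateescape") == raw  # exact round-trip
    lines = text.split("\n")  # splitting on an ASCII code point is still safe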
Operations\nthat don't involve splitting on or searching for particular ASCII or\nUnicode code point values are almost certain to work correctly.\n\nSimilarly, Shift-JIS[9] and ISO-2022-JP[10] remain in widespread use in\nJapan, and are incompatible with both ASCII and UTF-8, but will tolerate\ntext processing operations that don't involve splitting on or searching\nfor particular ASCII or Unicode code point values.\n\nAs an example, consider two files, one encoded with UTF-8 (the default\nencoding for en_AU.UTF-8), and one encoded with GB-18030 (the default\nencoding for zh_CN.gb18030):\n\n $ python3 -c 'open(\"utf8.txt\", \"wb\").write(\"ℙƴ☂ℌøἤ\\n\".encode(\"utf-8\"))'\n $ python3 -c 'open(\"gb18030.txt\", \"wb\").write(\"ℙƴ☂ℌøἤ\\n\".encode(\"gb18030\"))'\n\nOn disk, we can see that these are two very different files:\n\n $ python3 -c 'print(\"UTF-8: \", open(\"utf8.txt\", \"rb\").read().strip()); \\\n print(\"GB18030:\", open(\"gb18030.txt\", \"rb\").read().strip())'\n UTF-8: b'\\xe2\\x84\\x99\\xc6\\xb4\\xe2\\x98\\x82\\xe2\\x84\\x8c\\xc3\\xb8\\xe1\\xbc\\xa4\\n'\n GB18030: b'\\x816\\xbd6\\x810\\x9d0\\x817\\xa29\\x816\\xbc4\\x810\\x8b3\\x816\\x8d6\\n'\n\nThat nevertheless can both be rendered correctly to the terminal as long\nas they're decoded prior to printing:\n\n $ python3 -c 'print(\"UTF-8: \", open(\"utf8.txt\", \"r\", encoding=\"utf-8\").read().strip()); \\\n print(\"GB18030:\", open(\"gb18030.txt\", \"r\", encoding=\"gb18030\").read().strip())'\n UTF-8: ℙƴ☂ℌøἤ\n GB18030: ℙƴ☂ℌøἤ\n\nBy contrast, if we just pass along the raw bytes, as cat and similar\nC/C++ utilities will tend to do:\n\n $ LANG=en_AU.UTF-8 cat utf8.txt gb18030.txt\n ℙƴ☂ℌøἤ\n �6�6�0�0�7�9�6�4�0�3�6�6\n\nEven setting a specifically Chinese locale won't help in getting the\nGB-18030 encoded file rendered correctly:\n\n $ LANG=zh_CN.gb18030 cat utf8.txt gb18030.txt\n ℙƴ☂ℌøἤ\n �6�6�0�0�7�9�6�4�0�3�6�6\n\nThe problem is that the terminal encoding setting remains UTF-8,\nregardless of the nominal locale. 
A GB18030 terminal can be emulated\nusing the iconv utility:\n\n $ cat utf8.txt gb18030.txt | iconv -f GB18030 -t UTF-8\n 鈩櫰粹槀鈩屆羔激\n ℙƴ☂ℌøἤ\n\nThis reverses the problem, such that the GB18030 file is rendered\ncorrectly, but the UTF-8 file has been converted to unrelated hanzi\ncharacters, rather than the expected rendering of \"Python\" as non-ASCII\ncharacters.\n\nWith the emulated GB18030 terminal encoding, assuming UTF-8 in Python\nresults in both files being displayed incorrectly:\n\n $ python3 -c 'print(\"UTF-8: \", open(\"utf8.txt\", \"r\", encoding=\"utf-8\").read().strip()); \\\n print(\"GB18030:\", open(\"gb18030.txt\", \"r\", encoding=\"gb18030\").read().strip())' \\\n | iconv -f GB18030 -t UTF-8\n UTF-8: 鈩櫰粹槀鈩屆羔激\n GB18030: 鈩櫰粹槀鈩屆羔激\n\nHowever, setting the locale correctly means that the emulated GB18030\nterminal now displays both files as originally intended:\n\n $ LANG=zh_CN.gb18030 \\\n python3 -c 'print(\"UTF-8: \", open(\"utf8.txt\", \"r\", encoding=\"utf-8\").read().strip()); \\\n print(\"GB18030:\", open(\"gb18030.txt\", \"r\", encoding=\"gb18030\").read().strip())' \\\n | iconv -f GB18030 -t UTF-8\n UTF-8: ℙƴ☂ℌøἤ\n GB18030: ℙƴ☂ℌøἤ\n\nThe rationale for retaining surrogateescape as the default IO encoding\nis that it will preserve the following helpful behaviour in the C\nlocale:\n\n $ cat gb18030.txt \\\n | LANG=C python3 -c \"import sys; print(sys.stdin.read())\" \\\n | iconv -f GB18030 -t UTF-8\n ℙƴ☂ℌøἤ\n\nRather than reverting to the exception currently seen when a UTF-8 based\nlocale is explicitly configured:\n\n $ cat gb18030.txt \\\n | python3 -c \"import sys; print(sys.stdin.read())\" \\\n | iconv -f GB18030 -t UTF-8\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"/usr/lib64/python3.5/codecs.py\", line 321, in decode\n (result, consumed) = self._buffer_decode(data, self.errors, final)\n UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 0: invalid start byte\n\nAs an added benefit, environments explicitly configured to use one of\nthe coercion target locales will implicitly gain the encoding\ntransparency behaviour currently enabled by default in the C locale.\n\nAvoiding setting PYTHONIOENCODING during UTF-8 locale coercion\n\nRather than changing the default handling of the standard streams during\ninterpreter initialization, earlier versions of this PEP proposed\nsetting PYTHONIOENCODING to utf-8:surrogateescape. This turned out to\ncreate a significant compatibility problem: since the surrogateescape\nhandler only exists in Python 3.1+, running Python 2.7 processes in\nsubprocesses could potentially break in a confusing way with that\nconfiguration.\n\nThe current design means that earlier Python versions will instead\nretain their default strict error handling on the standard streams,\nwhile Python 3.7+ will consistently use the more permissive\nsurrogateescape handler even when these locales are explicitly\nconfigured (rather than being reached through locale coercion).\n\nDropping official support for ASCII based text handling in the legacy C locale\n\nWe've been trying to get strict bytes/text separation to work reliably\nin the legacy C locale for over a decade at this point. 
Not only haven't\nwe been able to get it to work, neither has anyone else - the only\nviable alternatives identified have been to pass the bytes along\nverbatim without eagerly decoding them to text (C/C++, Python 2.x, Ruby,\netc), or else to largely ignore the nominal C/C++ locale encoding and\nassume the use of either UTF-8 (PEP 540, Rust, Go, Node.js, etc) or\nUTF-16-LE (JVM, .NET CLR).\n\nWhile this PEP ensures that developers that genuinely need to do so can\nstill opt-in to running their Python code in the legacy C locale (by\nsetting LC_ALL=C, PYTHONCOERCECLOCALE=0, or running a custom build that\nsets --without-c-locale-coercion), it also makes it clear that we don't\nexpect Python 3's Unicode handling to be completely reliable in that\nconfiguration, and the recommended alternative is to use a more\nappropriate locale setting (potentially in combination with PEP 540's\nUTF-8 mode, if that is available).\n\nProviding implicit locale coercion only when running standalone\n\nThe major downside of the proposed design in this PEP is that it\nintroduces a potential discrepancy between the behaviour of the CPython\nruntime when it is run as a standalone application and when it is run as\nan embedded component inside a larger system (e.g. mod_wsgi running\ninside Apache httpd).\n\nOver the course of Python 3.x development, multiple attempts have been\nmade to improve the handling of incorrect locale settings at the point\nwhere the Python interpreter is initialised. The problem that emerged is\nthat this is ultimately too late in the interpreter startup process -\ndata such as command line arguments and the contents of environment\nvariables may have already been retrieved from the operating system and\nprocessed under the incorrect ASCII text encoding assumption well before\nPy_Initialize is called.\n\nThe problems created by those inconsistencies were then even harder to\ndiagnose and debug than those created by believing the operating\nsystem's claim that ASCII was a suitable encoding to use for operating\nsystem interfaces. 
This was the case even for the default CPython\nbinary, let alone larger C/C++ applications that embed CPython as a\nscripting engine.\n\nThe approach proposed in this PEP handles that problem by moving the\nlocale coercion as early as possible in the interpreter startup sequence\nwhen running standalone: it takes place directly in the C-level main()\nfunction, even before calling in to the Py_Main() library function that\nimplements the features of the CPython interpreter CLI.\n\nThe Py_Initialize API then only gains an explicit warning (emitted on\nstderr) when it detects use of the C locale, and relies on the embedding\napplication to specify something more reasonable.\n\nThat said, the reference implementation for this PEP adds most of the\nfunctionality to the shared library, with the CLI being updated to\nunconditionally call two new private APIs:\n\n if (_Py_LegacyLocaleDetected()) {\n _Py_CoerceLegacyLocale();\n }\n\nThese are similar to other \"pre-configuration\" APIs intended for\nembedding applications: they're designed to be called before\nPy_Initialize, and hence change the way the interpreter gets\ninitialized.\n\nIf these were made public (either as part of this PEP or in a subsequent\nRFE), then it would be straightforward for other embedding applications\nto recreate the same behaviour as is proposed for the CPython CLI.\n\nAllowing restoration of the legacy behaviour\n\nThe CPython command line interpreter is often used to investigate faults\nthat occur in other applications that embed CPython, and those\napplications may still be using the C locale even after this PEP is\nimplemented.\n\nProviding a simple on/off switch for the locale coercion behaviour makes\nit much easier to reproduce the behaviour of such applications for\ndebugging purposes, as well as making it easier to reproduce the\nbehaviour of older 3.x runtimes even when running a version with this\nchange applied.\n\nQuerying LC_CTYPE for C locale detection\n\nLC_CTYPE is the actual locale category that CPython relies on to drive\nthe implicit decoding of environment variables, command line arguments,\nand other text values received from the operating system.\n\nAs such, it makes sense to check it specifically when attempting to\ndetermine whether or not the current locale configuration is likely to\ncause Unicode handling problems.\n\nExplicitly setting LC_CTYPE for UTF-8 locale coercion\n\nPython is often used as a glue language, integrating other C/C++ ABI\ncompatible components in the current process, and components written in\narbitrary languages in subprocesses.\n\nSetting LC_CTYPE to C.UTF-8 is important to handle cases where the\nproblem has arisen from a setting like LC_CTYPE=UTF-8 being provided on\na system where no UTF-8 locale is defined (e.g. 
when a Mac OS X ssh\nclient is configured to forward locale settings, and the user logs into\na Linux server).\n\nThis should be sufficient to ensure that when the locale coercion is\nactivated, the switch to the UTF-8 based locale will be applied\nconsistently across the current process and any subprocesses that\ninherit the current environment.\n\nAvoiding setting LANG for UTF-8 locale coercion\n\nEarlier versions of this PEP proposed setting the LANG category\nindependent default locale, in addition to setting LC_CTYPE.\n\nThis was later removed on the grounds that setting only LC_CTYPE is\nsufficient to handle all of the problematic scenarios that the PEP aimed\nto resolve, while setting LANG as well would break cases where LANG was\nset correctly, and the locale problems were solely due to an incorrect\nLC_CTYPE setting ([11]).\n\nFor example, consider a Python application that called the Linux date\nutility in a subprocess rather than doing its own date formatting:\n\n $ LANG=ja_JP.UTF-8 LC_CTYPE=C date\n 2017年 5月 23日 火曜日 17:31:03 JST\n\n $ LANG=ja_JP.UTF-8 LC_CTYPE=C.UTF-8 date # Coercing only LC_CTYPE\n 2017年 5月 23日 火曜日 17:32:58 JST\n\n $ LANG=C.UTF-8 LC_CTYPE=C.UTF-8 date # Coercing both of LC_CTYPE and LANG\n Tue May 23 17:31:10 JST 2017\n\nWith only LC_CTYPE updated in the Python process, the subprocess would\ncontinue to behave as expected. However, if LANG was updated as well,\nthat would effectively override the LC_TIME setting and use the wrong\ndate formatting conventions.\n\nAvoiding setting LC_ALL for UTF-8 locale coercion\n\nEarlier versions of this PEP proposed setting the LC_ALL locale\noverride, in addition to setting LC_CTYPE.\n\nThis was changed after it was determined that just setting LC_CTYPE and\nLANG should be sufficient to handle all the scenarios the PEP aims to\ncover, as it avoids causing any problems in cases like the following:\n\n $ LANG=C LC_MONETARY=ja_JP.utf8 ./python -c \\\n \"from locale import setlocale, LC_ALL, currency; setlocale(LC_ALL, ''); print(currency(1e6))\"\n ¥1000000\n\nSkipping locale coercion if LC_ALL is set in the current environment\n\nWith locale coercion now only setting LC_CTYPE and LANG, it will have no\neffect if LC_ALL is also set. To avoid emitting a spurious locale\ncoercion notice in that case, coercion is instead skipped entirely.\n\nConsidering locale coercion independently of \"UTF-8 mode\"\n\nWith both this PEP's locale coercion and PEP 540's UTF-8 mode under\nconsideration for Python 3.7, it makes sense to ask whether or not we\ncan limit ourselves to only doing one or the other, rather than making\nboth changes.\n\nThe UTF-8 mode proposed in PEP 540 has two major limitations that make\nit a potential complement to this PEP rather than a potential\nreplacement.\n\nFirst, unlike this PEP, PEP 540's UTF-8 mode makes it possible to change\ndefault behaviours that are not currently configurable at all. While\nthat's exactly what makes the proposal interesting, it's also what makes\nit an entirely unproven approach. 
By contrast, the approach proposed in\nthis PEP builds directly atop existing configuration settings for the C\nlocale system ( LC_CTYPE, LANG) and Python's standard streams\n(PYTHONIOENCODING) that have already been in use for years to handle the\nkinds of compatibility problems discussed in this PEP.\n\nSecondly, one of the things we know based on that experience is that the\nproposed locale coercion can resolve problems not only in CPython\nitself, but also in extension modules that interact with the standard\nstreams, like GNU readline. As an example, consider the following\ninteractive session from a PEP 538 enabled CPython build, where each\nline after the first is executed by doing \"up-arrow, left-arrow x4,\ndelete, enter\":\n\n $ LANG=C ./python\n Python 3.7.0a0 (heads/pep538-coerce-c-locale:188e780, May 7 2017, 00:21:13)\n [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] on linux\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> print(\"ℙƴ☂ℌøἤ\")\n ℙƴ☂ℌøἤ\n >>> print(\"ℙƴ☂ℌἤ\")\n ℙƴ☂ℌἤ\n >>> print(\"ℙƴ☂ἤ\")\n ℙƴ☂ἤ\n >>> print(\"ℙƴἤ\")\n ℙƴἤ\n >>> print(\"ℙἤ\")\n ℙἤ\n >>> print(\"ἤ\")\n ἤ\n >>>\n\nThis is exactly what we'd expect from a well-behaved command history\neditor.\n\nBy contrast, the following is what currently happens on an older release\nif you only change the Python level stream encoding settings without\nupdating the locale settings:\n\n $ LANG=C PYTHONIOENCODING=utf-8:surrogateescape python3\n Python 3.5.3 (default, Apr 24 2017, 13:32:13)\n [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] on linux\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> print(\"ℙƴ☂ℌøἤ\")\n ℙƴ☂ℌøἤ\n >>> print(\"ℙƴ☂ℌ�\")\n File \"\", line 0\n\n ^\n SyntaxError: 'utf-8' codec can't decode bytes in position 20-21:\n invalid continuation byte\n\nThat particular misbehaviour is coming from GNU readline, not CPython\n-because the command history editing wasn't UTF-8 aware, it corrupted\nthe history buffer and fed such nonsense to stdin that even the\nsurrogateescape error handler was bypassed. 
While PEP 540's UTF-8 mode\ncould technically be updated to also reconfigure readline, that's just\none extension module that might be interacting with the standard streams\nwithout going through the CPython C API, and any change made by CPython\nwould only apply when readline is running directly as part of Python 3.7\nrather than in a separate subprocess.\n\nHowever, if we actually change the configured locale, GNU readline\nstarts behaving itself, without requiring any changes to the embedding\napplication:\n\n $ LANG=C.UTF-8 python3\n Python 3.5.3 (default, Apr 24 2017, 13:32:13)\n [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] on linux\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> print(\"ℙƴ☂ℌøἤ\")\n ℙƴ☂ℌøἤ\n >>> print(\"ℙƴ☂ℌἤ\")\n ℙƴ☂ℌἤ\n >>> print(\"ℙƴ☂ἤ\")\n ℙƴ☂ἤ\n >>> print(\"ℙƴἤ\")\n ℙƴἤ\n >>> print(\"ℙἤ\")\n ℙἤ\n >>> print(\"ἤ\")\n ἤ\n >>>\n $ LC_CTYPE=C.UTF-8 python3\n Python 3.5.3 (default, Apr 24 2017, 13:32:13)\n [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] on linux\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> print(\"ℙƴ☂ℌøἤ\")\n ℙƴ☂ℌøἤ\n >>> print(\"ℙƴ☂ℌἤ\")\n ℙƴ☂ℌἤ\n >>> print(\"ℙƴ☂ἤ\")\n ℙƴ☂ἤ\n >>> print(\"ℙƴἤ\")\n ℙƴἤ\n >>> print(\"ℙἤ\")\n ℙἤ\n >>> print(\"ἤ\")\n ἤ\n >>>\n\nEnabling C locale coercion and warnings on Mac OS X, iOS and Android\n\nOn Mac OS X, iOS, and Android, CPython already assumes the use of UTF-8\nfor system interfaces, and we expect most other locale-aware components\nto do the same.\n\nAccordingly, this PEP originally proposed to disable locale coercion and\nwarnings at build time for these platforms, on the assumption that it\nwould be entirely redundant.\n\nHowever, that assumption turned out to be incorrect, as subsequent\ninvestigations showed that if you explicitly configure LANG=C on these\nplatforms, extension modules like GNU readline will misbehave in much\nthe same way as they do on other *nix systems.[12]\n\nIn addition, Mac OS X is also frequently used as a development and\ntesting platform for Python software intended for deployment to other\n*nix environments (such as Linux or Android), and Linux is similarly\noften used as a development and testing platform for mobile and Mac OS X\napplications.\n\nAccordingly, this PEP enables the locale coercion and warning features\nby default on all platforms that use CPython's autotools based build\ntoolchain (i.e. everywhere other than Windows).\n\nImplementation\n\nThe reference implementation is being developed in the\npep538-coerce-c-locale feature branch[13] in Alyssa Coghlan's fork of\nthe CPython repository on GitHub. A work-in-progress PR is available\nat[14].\n\nThis reference implementation covers not only the enhancement request in\nissue 28180[15], but also the Android compatibility fixes needed to\nresolve issue 28997[16].\n\nBackporting to earlier Python 3 releases\n\nBackporting to Python 3.6.x\n\nIf this PEP is accepted for Python 3.7, redistributors backporting the\nchange specifically to their initial Python 3.6.x release will be both\nallowed and encouraged. 
However, such backports should only be\nundertaken either in conjunction with the changes needed to also provide\na suitable locale by default, or else specifically for platforms where\nsuch a locale is already consistently available.\n\nAt least the Fedora project is planning to pursue this approach for the\nupcoming Fedora 26 release[17].\n\nBackporting to other 3.x releases\n\nWhile the proposed behavioural change is seen primarily as a bug fix\naddressing Python 3's current misbehaviour in the default ASCII-based C\nlocale, it still represents a reasonably significant change in the way\nCPython interacts with the C locale system. As such, while some\nredistributors may still choose to backport it to even earlier Python\n3.x releases based on the needs and interests of their particular user\nbase, this wouldn't be encouraged as a general practice.\n\nHowever, configuring Python 3 environments (such as base container\nimages) to use these configuration settings by default is both allowed\nand recommended.\n\nAcknowledgements\n\nThe locale coercion approach proposed in this PEP is inspired directly\nby Armin Ronacher's handling of this problem in the click command line\nutility development framework[18]:\n\n $ LANG=C python3 -c 'import click; cli = click.command()(lambda:None); cli()'\n Traceback (most recent call last):\n ...\n RuntimeError: Click will abort further execution because Python 3 was\n configured to use ASCII as encoding for the environment. Either run this\n under Python 2 or consult http://click.pocoo.org/python3/ for mitigation\n steps.\n\n This system supports the C.UTF-8 locale which is recommended.\n You might be able to resolve your issue by exporting the\n following environment variables:\n\n export LC_ALL=C.UTF-8\n export LANG=C.UTF-8\n\nThe change was originally proposed as a downstream patch for Fedora's\nsystem Python 3.6 package[19], and then reformulated as a PEP for Python\n3.7 with a section allowing for backports to earlier versions by\nredistributors. 
In parallel with the development of the upstream patch,
Charalampos Stratakis has been working on the Fedora 26 backport and
providing feedback on the practical viability of the proposed changes.

The initial draft was posted to the Python Linux SIG for discussion[20]
and then amended based on both that discussion and Victor Stinner's work
in PEP 540[21].

The "ℙƴ☂ℌøἤ" string used in the Unicode handling examples throughout
this PEP is taken from Ned Batchelder's excellent "Pragmatic Unicode"
presentation[22].

Stephen Turnbull has long provided valuable insight into the text
encoding handling challenges he regularly encounters at the University
of Tsukuba (筑波大学).

References

Copyright

This document has been placed in the public domain under the terms of
the CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/

[1] GNU C: How Programs Set the Locale
(https://www.gnu.org/software/libc/manual/html_node/Setting-the-Locale.html)

[2] GNU C: Locale Categories
(https://www.gnu.org/software/libc/manual/html_node/Locale-Categories.html)

[3] UTF-8 locale discussion on "locale.getdefaultlocale() fails on Mac
OS X with default language set to English"
(https://bugs.python.org/issue18378#msg215215)

[4] GNOME Flatpak (https://flatpak.org/)

[5] Ubuntu Snappy (https://www.ubuntu.com/desktop/snappy)

[6] glibc C.UTF-8 locale proposal
(https://sourceware.org/glibc/wiki/Proposals/C.UTF-8)

[7] Use "surrogateescape" error handler for sys.stdin and sys.stdout on
UNIX for the C locale (https://bugs.python.org/issue19977)

[8] GB 18030 (https://en.wikipedia.org/wiki/GB_18030)

[9] Shift-JIS (https://en.wikipedia.org/wiki/Shift_JIS)

[10] ISO-2022 (https://en.wikipedia.org/wiki/ISO/IEC_2022)

[11] Potential problems when setting LANG in addition to setting
LC_CTYPE
(https://mail.python.org/pipermail/python-dev/2017-May/147968.html)

[12] GNU readline misbehaviour on Mac OS X with LANG=C
(https://mail.python.org/pipermail/python-dev/2017-May/147897.html)

[13] GitHub branch diff for ncoghlan:pep538-coerce-c-locale
(https://github.com/python/cpython/compare/master...ncoghlan:pep538-coerce-c-locale)

[14] GitHub pull request for the reference implementation
(https://github.com/python/cpython/pull/659)

[15] CPython: sys.getfilesystemencoding() should default to utf-8
(https://bugs.python.org/issue28180)

[16] test_readline.test_nonascii fails on Android
(https://bugs.python.org/issue28997)

[17] Fedora 26 change proposal for locale coercion backport
(https://fedoraproject.org/wiki/Changes/python3_c.utf-8_locale)

[18] Locale configuration required for click applications under Python 3
(https://click.palletsprojects.com/en/5.x/python3/#python-3-surrogate-handling)

[19] Fedora: force C.UTF-8 when Python 3 is run under the C locale
(https://bugzilla.redhat.com/show_bug.cgi?id=1404918)

[20] linux-sig discussion of initial PEP draft
(https://mail.python.org/pipermail/linux-sig/2017-January/000014.html)

[21] Feedback notes from linux-sig discussion and PEP 540
(https://github.com/python/peps/issues/171)

[22] Pragmatic Unicode (https://nedbatchelder.com/text/unipain.html)
The two key causes are dependencies\non shared libraries which are not present on users' systems, and\ndependencies on particular versions of certain core libraries like\nglibc.\n\nExternal Shared Libraries\n\nMost desktop and server linux distributions come with a system package\nmanager (examples include APT on Debian-based systems, yum on RPM-based\nsystems, and pacman on Arch linux) that manages, among other\nresponsibilities, the installation of shared libraries installed to\nsystem directories such as /usr/lib. Most non-trivial Python extensions\nwill depend on one or more of these shared libraries, and thus function\nproperly only on systems where the user has the proper libraries (and\nthe proper versions thereof), either installed using their package\nmanager, or installed manually by setting certain environment variables\nsuch as LD_LIBRARY_PATH to notify the runtime linker of the location of\nthe depended-upon shared libraries.\n\nVersioning of Core Shared Libraries\n\nEven if the developers a Python extension module wish to use no external\nshared libraries, the modules will generally have a dynamic runtime\ndependency on the GNU C library, glibc. While it is possible, statically\nlinking glibc is usually a bad idea because certain important C\nfunctions like dlopen() cannot be called from code that statically links\nglibc. A runtime shared library dependency on a system-provided glibc is\nunavoidable in practice.\n\nThe maintainers of the GNU C library follow a strict symbol versioning\nscheme for backward compatibility. This ensures that binaries compiled\nagainst an older version of glibc can run on systems that have a newer\nglibc. The opposite is generally not true -- binaries compiled on newer\nLinux distributions tend to rely upon versioned functions in glibc that\nare not available on older systems.\n\nThis generally prevents wheels compiled on the latest Linux\ndistributions from being portable.\n\nThe manylinux1 policy\n\nFor these reasons, to achieve broad portability, Python wheels\n\n- should depend only on an extremely limited set of external shared\n libraries; and\n- should depend only on \"old\" symbol versions in those external shared\n libraries; and\n- should depend only on a widely-compatible kernel ABI.\n\nTo be eligible for the manylinux1 platform tag, a Python wheel must\ntherefore both (a) contain binary executables and compiled code that\nlinks only to libraries with SONAMEs included in the following list: :\n\n libpanelw.so.5\n libncursesw.so.5\n libgcc_s.so.1\n libstdc++.so.6\n libm.so.6\n libdl.so.2\n librt.so.1\n libc.so.6\n libnsl.so.1\n libutil.so.1\n libpthread.so.0\n libresolv.so.2\n libX11.so.6\n libXext.so.6\n libXrender.so.1\n libICE.so.6\n libSM.so.6\n libGL.so.1\n libgobject-2.0.so.0\n libgthread-2.0.so.0\n libglib-2.0.so.0\n\nand, (b) work on a stock CentOS 5.11[3] system that contains the system\npackage manager's provided versions of these libraries.\n\nlibcrypt.so.1 was retrospectively removed from the whitelist after\nFedora 30 was released with libcrypt.so.2 instead.\n\nBecause CentOS 5 is only available for x86_64 and i686 architectures,\nthese are the only architectures currently supported by the manylinux1\npolicy.\n\nOn Debian-based systems, these libraries are provided by the packages :\n\n libncurses5 libgcc1 libstdc++6 libc6 libx11-6 libxext6\n libxrender1 libice6 libsm6 libgl1-mesa-glx libglib2.0-0\n\nOn RPM-based systems, these libraries are provided by the packages :\n\n ncurses libgcc libstdc++ glibc libXext libXrender\n 
libICE libSM mesa-libGL glib2\n\nThis list was compiled by checking the external shared library\ndependencies of the Canopy[4] and Anaconda[5] distributions, which both\ninclude a wide array of the most popular Python modules and have been\nconfirmed in practice to work across a wide swath of Linux systems in\nthe wild.\n\nMany of the permitted system libraries listed above use symbol\nversioning schemes for backward compatibility. The latest symbol\nversions provided with the CentOS 5.11 versions of these libraries are:\n:\n\n GLIBC_2.5\n CXXABI_3.4.8\n GLIBCXX_3.4.9\n GCC_4.2.0\n\nTherefore, as a consequence of requirement (b), any wheel that depends\non versioned symbols from the above shared libraries may depend only on\nsymbols with the following versions: :\n\n GLIBC <= 2.5\n CXXABI <= 3.4.8\n GLIBCXX <= 3.4.9\n GCC <= 4.2.0\n\nThese recommendations are the outcome of the relevant discussions in\nJanuary 2016[6],[7].\n\nNote that in our recommendations below, we do not suggest that pip or\nPyPI should attempt to check for and enforce the details of this policy\n(just as they don't check for and enforce the details of existing\nplatform tags like win32). The text above is provided (a) as advice to\npackage builders, and (b) as a method for allocating blame if a given\nwheel doesn't work on some system: if it satisfies the policy above,\nthen this is a bug in the spec or the installation tool; if it does not\nsatisfy the policy above, then it's a bug in the wheel. One useful\nconsequence of this approach is that it leaves open the possibility of\nfurther updates and tweaks as we gain more experience, e.g., we could\nhave a \"manylinux 1.1\" policy which targets the same systems and uses\nthe same manylinux1 platform tag (and thus requires no further changes\nto pip or PyPI), but that adjusts the list above to remove libraries\nthat have turned out to be problematic or add libraries that have turned\nout to be safe.\n\nlibpythonX.Y.so.1\n\nNote that libpythonX.Y.so.1 is not on the list of libraries that a\nmanylinux1 extension is allowed to link to. Explicitly linking to\nlibpythonX.Y.so.1 is unnecessary in almost all cases: the way ELF\nlinking works, extension modules that are loaded into the interpreter\nautomatically get access to all of the interpreter's symbols, regardless\nof whether or not the extension itself is explicitly linked against\nlibpython. Furthermore, explicit linking to libpython creates problems\nin the common configuration where Python is not built with\n--enable-shared. In particular, on Debian and Ubuntu systems,\napt install pythonX.Y does not even install libpythonX.Y.so.1, meaning\nthat any wheel that did depend on libpythonX.Y.so.1 could fail to\nimport.\n\nThere is one situation where extensions that are linked in this way can\nfail to work: if a host program (e.g., apache2) uses dlopen() to load a\nmodule (e.g., mod_wsgi) that embeds the CPython interpreter, and the\nhost program does not pass the RTLD_GLOBAL flag to dlopen(), then the\nembedded CPython will be unable to load any extension modules that do\nnot themselves link explicitly to libpythonX.Y.so.1. Fortunately,\napache2 does set the RTLD_GLOBAL flag, as do all the other programs that\nembed-CPython-via-a-dlopened-plugin that we could locate, so this does\nnot seem to be a serious problem in practice. 
The incompatibility with\nDebian/Ubuntu is more of an issue than the theoretical incompatibility\nwith a rather obscure corner case.\n\nThis is a rather complex and subtle issue that extends beyond the scope\nof manylinux1; for more discussion see:[8],[9], [10].\n\nUCS-2 vs UCS-4 builds\n\nAll versions of CPython 2.x, plus CPython 3.0-3.2 inclusive, can be\nbuilt in two ABI-incompatible modes: builds using the\n--enable-unicode=ucs2 configure flag store Unicode data in UCS-2 (or\nreally UTF-16) format, while builds using the --enable-unicode=ucs4\nconfigure flag store Unicode data in UCS-4. (CPython 3.3 and greater use\na different storage method that always supports UCS-4.) If we want to\nmake sure ucs2 wheels don't get installed into ucs4 CPythons and\nvice-versa, then something must be done.\n\nAn earlier version of this PEP included a requirement that manylinux1\nwheels targeting these older CPython versions should always use the ucs4\nABI. But then, in between the PEP's initial acceptance and its\nimplementation, pip and wheel gained first-class support for tracking\nand checking this aspect of ABI compatibility for the relevant CPython\nversions, which is a better solution. So we now allow the manylinux1\nplatform tags to be used in combination with any ABI tag. However, to\nmaintain compatibility it is crucial to ensure that all manylinux1\nwheels include a non-trivial abi tag. For example, a wheel built against\na ucs4 CPython might have a name like:\n\n PKG-VERSION-cp27-cp27mu-manylinux1_x86_64.whl\n ^^^^^^ Good!\n\nWhile a wheel built against the ucs2 ABI might have a name like:\n\n PKG-VERSION-cp27-cp27m-manylinux1_x86_64.whl\n ^^^^^ Okay!\n\nBut you should never have a wheel with a name like:\n\n PKG-VERSION-cp27-none-manylinux1_x86_64.whl\n ^^^^ BAD! Don't do this!\n\nThis wheel claims to be simultaneously compatible with both ucs2 and\nucs4 builds, which is bad.\n\nWe note for information that the ucs4 ABI appears to be much more\nwidespread among Linux CPython distributors.\n\nfpectl builds vs. no fpectl builds\n\nAll extant versions of CPython can be built either with or without the\n--with-fpectl flag to configure. It turns out that this changes the\nCPython ABI: extensions that are built against a no-fpectl CPython are\nalways compatible with yes-fpectl CPython, but the reverse is not\nnecessarily true. (Symptom: errors at import time complaining about\nundefined symbol: PyFPE_jbuf.) See: [11].\n\nFor maximum compatibility, therefore, the CPython used to build\nmanylinux1 wheels must be compiled without the --with-fpectl flag, and\nmanylinux1 extensions must not reference the symbol PyFPE_jbuf.\n\nCompilation of Compliant Wheels\n\nThe way glibc, libgcc, and libstdc++ manage their symbol versioning\nmeans that in practice, the compiler toolchains that most developers use\nto do their daily work are incapable of building manylinux1-compliant\nwheels. Therefore, we do not attempt to change the default behavior of\npip wheel / bdist_wheel: they will continue to generate regular linux_*\nplatform tags, and developers who wish to use them to generate\nmanylinux1-tagged wheels will have to change the tag as a second\npost-processing step.\n\nTo support the compilation of wheels meeting the manylinux1 standard, we\nprovide initial drafts of two tools.\n\nDocker Image\n\nThe first tool is a Docker image based on CentOS 5.11, which is\nrecommended as an easy to use self-contained build box for compiling\nmanylinux1 wheels [12]. 
Compiling on a more recently-released linux
distribution will generally introduce dependencies on too-new versioned
symbols. The image comes with a full compiler suite installed (gcc, g++,
and gfortran 4.8.2) as well as the latest releases of Python and pip.

Auditwheel

The second tool is a command line executable called auditwheel[13] that
may aid package maintainers in dealing with third-party external
dependencies.

There are at least three methods for building wheels that use
third-party external libraries in a way that meets the above policy.

1. The third-party libraries can be statically linked.
2. The third-party shared libraries can be distributed in separate
   packages on PyPI which are depended upon by the wheel.
3. The third-party shared libraries can be bundled inside the wheel
   libraries, linked with a relative path.

All of these are valid options which may be effectively used by
different packages and communities. Statically linking generally
requires package-specific modifications to the build system, and
distributing third-party dependencies on PyPI may require some
coordination of the community of users of the package.

As an often-automatic alternative to these options, we introduce
auditwheel. The tool inspects all of the ELF files inside a wheel to
check for dependencies on versioned symbols or external shared
libraries, and verifies conformance with the manylinux1 policy. This
includes the ability to add the new platform tag to conforming wheels.
More importantly, auditwheel has the ability to automatically modify
wheels that depend on external shared libraries by copying those shared
libraries from the system into the wheel itself, and modifying the
appropriate RPATH entries such that these libraries will be picked up at
runtime. This accomplishes a similar result as if the libraries had been
statically linked without requiring changes to the build system.
Packagers are advised that bundling, like static linking, may implicate
copyright concerns.

Bundled Wheels on Linux

While we acknowledge many approaches for dealing with third-party
library dependencies within manylinux1 wheels, we recognize that the
manylinux1 policy encourages bundling external dependencies, a practice
which runs counter to the package management policies of many linux
distributions' system package managers[14],[15]. The primary purpose of
this is cross-distro compatibility. Furthermore, manylinux1 wheels on
PyPI occupy a different niche than the Python packages available through
the system package manager.

The decision in this PEP to encourage departure from general Linux
distribution unbundling policies is informed by the following concerns:

1. In these days of automated continuous integration and deployment
   pipelines, publishing new versions and updating dependencies is
   easier than it was when those policies were defined.
2. pip users remain free to use the "--no-binary" option if they want
   to force local builds rather than using pre-built wheel files.
3. The popularity of modern container based deployment and "immutable
   infrastructure" models involves substantial bundling at the
   application layer anyway.
4. Distribution of bundled wheels through PyPI is currently the norm
   for Windows and OS X.
5. This PEP doesn't rule out the idea of offering more targeted
   binaries for particular Linux distributions in the future.

The model described in this PEP is most ideally suited for
cross-platform Python packages, because it means they can reuse much of
the work that they're already doing to make static Windows and OS X
wheels. We recognize that it is less optimal for Linux-specific packages
that might prefer to interact more closely with Linux's unique package
management functionality and only care about targeting a small set of
particular distros.

Security Implications

One of the advantages of dependencies on centralized libraries in Linux
is that bugfixes and security updates can be deployed system-wide, and
applications which depend on these libraries will automatically feel the
effects of these patches when the underlying libraries are updated. This
can be particularly important for security updates in packages engaged
in communication across the network or cryptography.

manylinux1 wheels distributed through PyPI that bundle security-critical
libraries like OpenSSL will thus assume responsibility for prompt
updates in response to disclosed vulnerabilities and patches. This
closely parallels the security implications of the distribution of
binary wheels on Windows that, because the platform lacks a system
package manager, generally bundle their dependencies. In particular,
because it lacks a stable ABI, OpenSSL cannot be included in the
manylinux1 profile.

Platform Detection for Installers

Above, we defined what it means for a wheel to be manylinux1-compatible.
Here we discuss what it means for a Python installation to be
manylinux1-compatible. In particular, this is important for tools like
pip to know when deciding whether or not they should consider
manylinux1-tagged wheels for installation.

Because the manylinux1 profile is already known to work for the many
thousands of users of popular commercial Python distributions, we
suggest that installation tools should error on the side of assuming
that a system is compatible, unless there is specific reason to think
otherwise.

We know of three main sources of potential incompatibility that are
likely to arise in practice:

- Eventually, in the future, there may exist distributions that break
  compatibility with this profile (e.g., if one of the libraries in
  the profile changes its ABI in a backwards-incompatible way)
- A linux distribution that is too old (e.g. RHEL 4)
- A linux distribution that does not use glibc (e.g. Alpine Linux,
  which is based on musl libc, or Android)

To address these we propose a two-pronged approach. To handle potential
future incompatibilities, we standardize a mechanism for a Python
distributor to signal that a particular Python install definitely is or
is not compatible with manylinux1: this is done by installing a module
named _manylinux, and setting its manylinux1_compatible attribute. We do
not propose adding any such module to the standard library -- this is
merely a well-known name by which distributors and installation tools
can rendezvous.
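For instance, a distributor that has checked (or deliberately broken)
compatibility could ship something as small as the following -- a
minimal sketch of the rendezvous module; only the module name and the
attribute name are prescribed by this proposal:

    # _manylinux.py -- installed by the Python distributor, not by pip or PyPI.
    # True: this interpreter may use manylinux1 wheels; False: it may not.
    manylinux1_compatible = True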
However, if a distributor does add this module, they
should add it to the standard library rather than to a site-packages/
directory, because the standard library is inherited by virtualenvs
(which we want), and site-packages/ in general is not.

Then, to handle the last two cases for existing Python distributions, we
suggest a simple and reliable method to check for the presence and
version of glibc (basically using it as a "clock" for the overall age of
the distribution).

Specifically, the algorithm we propose is:

    def is_manylinux1_compatible():
        # Only Linux, and only x86-64 / i686
        from distutils.util import get_platform
        if get_platform() not in ["linux-x86_64", "linux-i686"]:
            return False

        # Check for presence of _manylinux module
        try:
            import _manylinux
            return bool(_manylinux.manylinux1_compatible)
        except (ImportError, AttributeError):
            # Fall through to heuristic check below
            pass

        # Check glibc version. CentOS 5 uses glibc 2.5.
        return have_compatible_glibc(2, 5)

    def have_compatible_glibc(major, minimum_minor):
        import ctypes

        process_namespace = ctypes.CDLL(None)
        try:
            gnu_get_libc_version = process_namespace.gnu_get_libc_version
        except AttributeError:
            # Symbol doesn't exist -> therefore, we are not linked to
            # glibc.
            return False

        # Call gnu_get_libc_version, which returns a string like "2.5".
        gnu_get_libc_version.restype = ctypes.c_char_p
        version_str = gnu_get_libc_version()
        # py2 / py3 compatibility:
        if not isinstance(version_str, str):
            version_str = version_str.decode("ascii")

        # Parse string and check against requested version.
        version = [int(piece) for piece in version_str.split(".")]
        assert len(version) == 2
        if major != version[0]:
            return False
        if minimum_minor > version[1]:
            return False
        return True

Rejected alternatives: We also considered using a configuration file,
e.g. /etc/python/compatibility.cfg. The problem with this is that a
single filesystem might contain many different interpreter environments,
each with their own ABI profile -- the manylinux1 compatibility of a
system-installed x86_64 CPython might not tell us much about the
manylinux1 compatibility of a user-installed i686 PyPy. Locating this
configuration information within the Python environment itself ensures
that it remains attached to the correct binary, and dramatically
simplifies lookup code.

We also considered using a more elaborate structure, like a list of all
platform tags that should be considered compatible, together with their
preference ordering, for example:
_binary_compat.compatible = ["manylinux1_x86_64", "centos5_x86_64", "linux_x86_64"].
However, this introduces several complications. For example, we want to
be able to distinguish between the state of "doesn't support manylinux1"
(or eventually manylinux2, etc.) versus "doesn't specify either way
whether it supports manylinux1", which is not entirely obvious in the
above representation; and, it's not at all clear what features are
really needed vis a vis preference ordering given that right now the
only possible platform tags are manylinux1 and linux. So we're deferring
a more complete solution here for a separate PEP, when / if Linux gets
more platform tags.

For the library compatibility check, we also considered much more
elaborate checks (e.g.
checking the kernel version, searching for and\nchecking the versions of all the individual libraries listed in the\nmanylinux1 profile, etc.), but ultimately decided that this would be\nmore likely to introduce confusing bugs than actually help the user.\n(For example: different distributions vary in where they actually put\nthese libraries, and if our checking code failed to use the correct path\nsearch then it could easily return incorrect answers.)\n\nPyPI Support\n\nPyPI should permit wheels containing the manylinux1 platform tag to be\nuploaded. PyPI should not attempt to formally verify that wheels\ncontaining the manylinux1 platform tag adhere to the manylinux1 policy\ndescribed in this document. This verification tasks should be left to\nother tools, like auditwheel, that are developed separately.\n\nRejected Alternatives\n\nOne alternative would be to provide separate platform tags for each\nLinux distribution (and each version thereof), e.g. RHEL6, ubuntu14_10,\ndebian_jessie, etc. Nothing in this proposal rules out the possibility\nof adding such platform tags in the future, or of further extensions to\nwheel metadata that would allow wheels to declare dependencies on\nexternal system-installed packages. However, such extensions would\nrequire substantially more work than this proposal, and still might not\nbe appreciated by package developers who would prefer not to have to\nmaintain multiple build environments and build multiple wheels in order\nto cover all the common Linux distributions. Therefore, we consider such\nproposals to be out-of-scope for this PEP.\n\nFuture updates\n\nWe anticipate that at some point in the future there will be a\nmanylinux2 specifying a more modern baseline environment (perhaps based\non CentOS 6), and someday a manylinux3 and so forth, but we defer\nspecifying these until we have more experience with the initial\nmanylinux1 proposal.\n\nReferences\n\nCopyright\n\nThis document has been placed into the public domain.\n\n[1] Enthought Canopy Python Distribution\n(https://store.enthought.com/downloads/)\n\n[2] Continuum Analytics Anaconda Python Distribution\n(https://www.continuum.io/downloads)\n\n[3] CentOS 5.11 Release Notes\n(https://wiki.centos.org/Manuals/ReleaseNotes/CentOS5.11)\n\n[4] Enthought Canopy Python Distribution\n(https://store.enthought.com/downloads/)\n\n[5] Continuum Analytics Anaconda Python Distribution\n(https://www.continuum.io/downloads)\n\n[6] manylinux-discuss mailing list discussion\n(https://groups.google.com/forum/#!topic/manylinux-discuss/-4l3rrjfr9U)\n\n[7] distutils-sig discussion\n(https://mail.python.org/pipermail/distutils-sig/2016-January/027997.html)\n\n[8] distutils-sig discussion\n(https://mail.python.org/pipermail/distutils-sig/2016-February/028275.html)\n\n[9] github issue discussion\n(https://github.com/pypa/manylinux/issues/30)\n\n[10] python bug tracker discussion (https://bugs.python.org/issue21536)\n\n[11] numpy bug report:\nhttps://github.com/numpy/numpy/issues/8415#issuecomment-269095235\n\n[12] manylinux1 docker images (Source:\nhttps://github.com/pypa/manylinux; x86-64:\nhttps://quay.io/repository/pypa/manylinux1_x86_64; x86-32:\nhttps://quay.io/repository/pypa/manylinux1_i686)\n\n[13] auditwheel tool (https://pypi.python.org/pypi/auditwheel)\n\n[14] Fedora Bundled Software Policy\n(https://fedoraproject.org/wiki/Bundled_Software_policy)\n\n[15] Debian Policy Manual -- 4.13: Convenience copies of 
code
(https://www.debian.org/doc/debian-policy/ch-source.html#s-embeddedfiles)

PEP: 706
Title: Filter for tarfile.extractall
Author: Petr Viktorin
Discussions-To: https://discuss.python.org/t/23903
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Feb-2023
Python-Version: 3.12
Post-History: 25-Jan-2023, 15-Feb-2023
Resolution: https://discuss.python.org/t/23903/10

tarfile documentation

Abstract

The extraction methods in tarfile gain a filter argument, which allows
rejecting files or modifying metadata as the archive is extracted. Three
built-in named filters are provided, aimed at limiting features that
might be surprising or dangerous. These can be used as-is, or serve as a
base for custom filters.

After a deprecation period, a strict (but safer) filter will become the
default.

Motivation

The tar format is used for several use cases, many of which have
different needs. For example:

- A backup of a UNIX workstation should faithfully preserve all kinds
  of details like file permissions, symlinks to system configuration,
  and various kinds of special files.
- When unpacking a data bundle, it’s much more important that the
  unpacking will not have unintended consequences – like exposing a
  password file by symlinking it to a public place.

To support all its use cases, the tar format has many features. In many
cases, it's best to ignore or disallow some of them when extracting an
archive.

Python allows extracting tar archives using tarfile.TarFile.extractall,
whose docs warn to never extract archives from untrusted sources without
prior inspection. However, it’s not clear what kind of inspection should
be done. Indeed, it’s quite tricky to do such an inspection correctly.
As a result, many people don’t bother, or do the check incorrectly,
resulting in security issues such as CVE-2007-4559.

Since tarfile was first written, it's become more accepted that warnings
in documentation are not enough. Whenever possible, an unsafe operation
should be explicitly requested; potentially dangerous operations should
look dangerous. However, TarFile.extractall looks benign in a code
review.

Tarfile extraction is also exposed via shutil.unpack_archive, which
allows the user to not care about the kind of archive they're dealing
with. The API is very inviting for extracting archives without prior
inspection, even though the docs again warn against it.

It has been argued that Python is not wrong -- it behaves exactly as
documented -- but that's beside the point. Let's improve the situation
rather than assign/avoid blame. Python and its docs are the best place
to improve things.

Rationale

How do we improve things? Unfortunately, we will need to change the
defaults, which implies breaking backwards compatibility.
TarFile.extractall is what people reach for
when they need to extract a tarball.
Its default behaviour needs to\nchange.\n\nWhat would be the best behaviour? That depends on the use case. So,\nwe'll add several general “policies” to control extraction. They are\nbased on use cases, and ideally they should have straightforward\nsecurity implications:\n\n- Current behavior: trusting the archive. Suitable e.g. as a building\n block for libraries that do the check themselves, or extracting an\n archive you just made yourself.\n- Unpacking a UNIX archive: roughly following GNU tar, e.g. stripping\n leading / from filenames.\n- Unpacking a general data archive: the shutil.unpack_archive use\n case, where it's not important to preserve details specific to tar\n or Unix-like filesystems.\n\nAfter a deprecation period, the last option -- the most limited but most\nsecure one -- will become the default.\n\nEven with better general defaults, users should still verify the\narchives they extract, and perhaps modify some of the metadata.\nSuperficially, the following looks like a reasonable way to do this\ntoday:\n\n- Call TarFile.getmembers \n- Verify or modify each member's ~tarfile.TarInfo\n- Pass the result to extractall's members\n\nHowever, there are some issues with this approach:\n\n- It's possible to modify TarInfo objects, but the changes to them\n affect all subsequent operations on the same TarFile object. This\n behavior is fine for most uses, but despite that, it would be very\n surprising if TarFile.extractall did this by default.\n- Calling getmembers can be expensive and it requires a seekable\n archive.\n- When verifying members in advance, it may be necessary to track how\n each member would have changed the filesystem, e.g. how symlinks are\n being set up. This is hard. We can't expect users to do it.\n\nTo solve these issues we'll:\n\n- Provide a supported way to “clone” and modify TarInfo objects. A\n replace method, similar to dataclasses.replace or\n namedtuple._replace should do\n the trick.\n- Provide a “filter” hook in extractall's loop that can modify or\n discard members before they are processed.\n- Require that this hook is called just before extracting each member,\n so it can scan the current state of the disk. This will greatly\n simplify the implementation of policies (both in stdlib and user\n code), at the cost of not being able to do a precise “dry run”.\n\nThe hook API will be very similar to the existing filter argument for\nTarFile.add . We'll also name it filter. (In some\ncases “policy” would be a more fitting name, but the API can be used for\nmore than security policies.)\n\nThe built-in policies/filters described above will be implemented using\nthe public filter API, so they can be used as building blocks or\nexamples.\n\nSetting a precedent\n\nIf and when other libraries for archive extraction, such as zipfile,\ngain similar functionality, they should mimic this API as much as it's\nreasonable.\n\nTo enable this for simple cases, the built-in filters will have string\nnames; e.g. 
users can pass filter='data' instead of a specific function\nthat deals with ~tarfile.TarInfo objects.\n\nThe shutil.unpack_archive function will get a filter argument, which it\nwill pass to extractall.\n\nAdding function-based API that would work across archive formats is out\nof scope of this PEP.\n\nFull disclosure & redistributor info\n\nThe PEP author works for Red Hat, a redistributor of Python with\ndifferent security needs and support periods than CPython in general.\nSuch redistributors may want to carry vendor patches to:\n\n- Allow configuring the defaults system-wide, and\n- Change the default as soon as possible, even in older Python\n versions.\n\nThe proposal makes this easy to do, and it allows users to query the\nsettings.\n\nSpecification\n\nModifying and forgetting member metadata\n\nThe ~tarfile.TarInfo class will gain a new method, replace(), which will\nwork similarly to dataclasses.replace. It will return a copy of the\nTarInfo object with attributes replaced as specified by keyword-only\narguments:\n\n- name\n- mtime\n- mode\n- linkname\n- uid\n- gid\n- uname\n- gname\n\nAny of these, except name and linkname, will be allowed to be set to\nNone. When extract or extractall encounters such a None, it will not set\nthat piece of metadata. (If uname or gname is None, it will fall back to\nuid or gid as if the name wasn't found.) When addfile or tobuf\nencounters such a None, it will raise a ValueError. When list encounters\nsuch a None, it will print a placeholder string.\n\nThe documentation will mention why the method is there: TarInfo objects\nretrieved from TarFile.getmembers are\n“live”; modifying them directly will affect subsequent unrelated\noperations.\n\nFilters\n\nTarFile.extract and\nTarFile.extractall methods will grow a\nfilter keyword-only parameter, which takes a callable that can be called\nas:\n\n filter(/, member: TarInfo, path: str) -> TarInfo|None\n\nwhere member is the member to be extracted, and path is the path to\nwhere the archive is extracted (i.e., it'll be the same for every\nmember).\n\nWhen used it will be called on each member as it is extracted, and\nextraction will work with the result. If it returns None, the member\nwill be skipped.\n\nThe function can also raise an exception. This can, depending on\nTarFile.errorlevel, abort the extraction or cause the member to be\nskipped.\n\nNote\n\nIf extraction is aborted, the archive may be left partially extracted.\nIt is the user’s responsibility to clean up.\n\nWe will also provide a set of defaults for common use cases. In addition\nto a function, the filter argument can be one of the following strings:\n\n- 'fully_trusted': Current behavior: honor the metadata as is. Should\n be used if the user trusts the archive completely, or implements\n their own complex verification.\n- 'tar': Roughly follow defaults of the GNU tar command (when run as a\n normal user):\n - Strip leading '/' and os.sep from filenames\n - Refuse to extract files with absolute paths (after the /\n stripping above, e.g. C:/foo on Windows).\n - Refuse to extract files whose absolute path (after following\n symlinks) would end up outside the destination. (Note that GNU\n tar instead delays creating some links.)\n - Clear high mode bits (setuid, setgid, sticky) and group/other\n write bits (S_IWGRP|S_IWOTH ). (This is an\n approximation of GNU tar's default, which limits the mode by the\n current umask setting.)\n- 'data': Extract a \"data\" archive, disallowing common attack vectors\n but limiting functionality. 
In particular, many features specific to\n UNIX-style filesystems (or equivalently, to the tar archive format)\n are ignored, making this a good filter for cross-platform archives.\n In addition to tar:\n - Refuse to extract links (hard or soft) that link to absolute\n paths.\n - Refuse to extract links (hard or soft) which end up linking to a\n path outside of the destination. (On systems that don't support\n links, tarfile will, in most cases, fall back to creating\n regular files. This proposal doesn't change that behaviour.)\n - Refuse to extract device files (including pipes).\n - For regular files and hard links:\n - Set the owner read and write permissions\n (S_IRUSR|S_IWUSR ).\n - Remove the group & other executable permission\n (S_IXGRP|S_IXOTH ) if the owner doesn't have\n it (~stat.S_IXUSR).\n - For other files (directories), ignore mode entirely (set it to\n None).\n - Ignore user and group info (set uid, gid, uname, gname to None).\n\nAny other string will cause a ValueError.\n\nThe corresponding filter functions will be available as\ntarfile.fully_trusted_filter(), tarfile.tar_filter(), etc., so they can\nbe easily used in custom policies.\n\nNote that these filters never return None. Skipping members this way is\na feature for user-defined filters.\n\nDefaults and their configuration\n\n~tarfile.TarFile will gain a new attribute, extraction_filter, to allow\nconfiguring the default filter. By default it will be None, but users\ncan set it to a callable that will be used if the filter argument is\nmissing or None.\n\nNote\n\nString names won't be accepted here. That would encourage code like\nmy_tarfile.extraction_filter = 'data'. On Python versions without this\nfeature, this would do nothing, silently ignoring a security-related\nrequest.\n\nIf both the argument and attribute are None:\n\n- In Python 3.12-3.13, a DeprecationWarning will be emitted and\n extraction will use the 'fully_trusted' filter.\n- In Python 3.14+, it will use the 'data' filter.\n\nApplications and system integrators may wish to change extraction_filter\nof the TarFile class itself to set a global default. When using a\nfunction, they will generally want to wrap it in staticmethod() to\nprevent injection of a self argument.\n\nSubclasses of TarFile can also override extraction_filter.\n\nFilterError\n\nA new exception, FilterError, will be added to the tarfile module. It'll\nhave several new subclasses, one for each of the refusal reasons above.\nFilterError's member attribute will contain the relevant TarInfo.\n\nIn the lists above, “refusing\" to extract a file means that a\nFilterError will be raised. As with other extraction errors, if the\nTarFile.errorlevel is 1 or more, this will abort the extraction; with\nerrorlevel=0 the error will be logged and the member will be ignored,\nbut extraction will continue. Note that extractall() may leave the\narchive partially extracted; it is the user's responsibility to clean\nup.\n\nErrorlevel, and fatal/non-fatal errors\n\nCurrently, ~tarfile.TarFile has an errorlevel argument/attribute, which\nspecifies how errors are handled:\n\n- With errorlevel=0, documentation says that “all errors are ignored\n when using ~tarfile.TarFile.extract and\n ~tarfile.TarFile.extractall”. 
The code only ignores non-fatal and\n fatal errors (see below), so, for example, you still get TypeError\n if you pass None as the destination path.\n\n- With errorlevel=1 (the default), all non-fatal errors are ignored.\n (They may be logged to sys.stderr by setting the debug\n argument/attribute.) Which errors are non-fatal is not defined in\n documentation, but code treats ExtractionError as such.\n Specifically, it's these issues:\n\n - “unable to resolve link inside archive” (raised on systems that\n do not support symlinks)\n - “fifo/special devices not supported by system” (not used for\n failures if the system supports these, e.g. for a\n PermissionError)\n - “could not change owner/mode/modification time”\n\n Note that, for example, file name too long or out of disk space\n don't qualify. The non-fatal errors are not very likely to appear on\n a Unix-like system.\n\n- With errorlevel=2, all errors are raised, including fatal ones.\n Which errors are fatal is, again, not defined; in practice it's\n OSError.\n\nA filter refusing to extract a member does not fit neatly into the\nfatal/*non-fatal* categories.\n\n- This PEP does not change existing behavior. (Ideas for improvements\n are welcome in Discourse topic 25970.)\n- When a filter refuses to extract a member, the error should not pass\n silently by default.\n\nTo satisfy this, FilterError will be considered a fatal error, that is,\nit'll be ignored only with errorlevel=0.\n\nUsers that want to ignore FilterError but not other fatal errors should\ncreate a custom filter function, and call another filter in a try block.\n\nHints for further verification\n\nEven with the proposed changes, tarfile will not be suited for\nextracting untrusted files without prior inspection. Among other issues,\nthe proposed policies don't prevent denial-of-service attacks. Users\nshould do additional checks.\n\nNew docs will tell users to consider:\n\n- extracting to a new empty directory,\n- using external (e.g. OS-level) limits on disk, memory and CPU usage,\n- checking filenames against an allow-list of characters (to filter\n out control characters, confusables, etc.),\n- checking that filenames have expected extensions (discouraging files\n that execute when you “click on them”, or extension-less files like\n Windows special device names),\n- limiting the number of extracted files, total size of extracted\n data, and size of individual files,\n- checking for files that would be shadowed on case-insensitive\n filesystems.\n\nAlso, the docs will note that:\n\n- tar files commonly contain multiple versions of the same file: later\n ones are expected to overwrite earlier ones on extraction,\n- tarfile does not protect against issues with “live” data, e.g. an\n attacker tinkering with the destination directory while extracting\n (or adding) is going on (see the GNU tar manual for more info).\n\nThis list is not comprehensive, but the documentation is a good place to\ncollect such general tips. It can be moved into a separate document if\ngrows too long or if it needs to be consolidated with zipfile or shutil\n(which is out of scope for this proposal).\n\nTarInfo identity, and offset\n\nWith filters that use replace(), the TarInfo objects handled by the\nextraction machinery will not necessarily be the same objects as those\npresent in members. 
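For example, a filter as small as the following (a hypothetical
user-defined filter, using only the replace() method and the filter
signature specified above) already hands the extraction machinery a copy
rather than the original object:

    def strip_ownership(member, path):
        # replace() returns a modified copy, so the TarInfo that gets
        # extracted is no longer the object found in TarFile.getmembers().
        return member.replace(uid=0, gid=0, uname=None, gname=None)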
This may affect TarInfo subclasses that override\nmethods like makelink and rely on object identity.\n\nSuch code can switch to comparing offset, the position of the member\nheader inside the file.\n\nNote that both the overridable methods and offset are only documented in\nsource comments.\n\ntarfile CLI\n\nThe CLI (python -m tarfile) will gain a --filter option that will take\nthe name of one of the provided default filters. It won't be possible to\nspecify a custom filter function.\n\nIf --filter is not given, the CLI will use the default filter\n('fully_trusted' with a deprecation warning now, and 'data' from Python\n3.14 on).\n\nThere will be no short option. (-f would be confusingly similar to the\nfilename option of GNU tar.)\n\nOther archive libraries\n\nIf and when other archive libraries, such as zipfile, grow similar\nfunctionality, their extraction functions should use a filter argument\nthat takes, at least, the strings 'fully_trusted' (which should disable\nany security precautions) and 'data' (which should avoid features that\nmight surprise users).\n\nStandardizing a function-based filter API is out of scope of this PEP.\n\nShutil\n\nshutil.unpack_archive will gain a filter argument. If it's given, it\nwill be passed to the underlying extraction function. Passing it for a\nzip archive will fail for now (until zipfile gains a filter argument, if\nit ever does).\n\nIf filter is not specified (or left as None), it won't be passed on, so\nextracting a tarball will use the default filter ('fully_trusted' with a\ndeprecation warning now, and 'data' from Python 3.14 on).\n\nComplex filters\n\nNote that some user-defined filters need, for example, to count\nextracted members of do post-processing. This requires a more complex\nAPI than a filter callable. However, that complex API need not be\nexposed to tarfile. For example, with a hypothetical StatefulFilter\nusers would write:\n\n with StatefulFilter() as filter_func:\n my_tar.extract(path, filter=filter_func)\n\nA simple StatefulFilter example will be added to the docs.\n\nNote\n\nThe need for stateful filters is a reason against allowing registration\nof custom filter names in addition to 'fully_trusted', 'tar' and 'data'.\nWith such a mechanism, API for (at least) set-up and tear-down would\nneed to be set in stone.\n\nBackwards Compatibility\n\nThe default behavior of TarFile.extract and\nTarFile.extractall will change, after\nraising DeprecationWarning for 2 releases (shortest deprecation period\nallowed in Python's backwards compatibility policy <387>).\n\nAdditionally, code that relies on tarfile.TarInfo object identity may\nbreak, see 706-offset.\n\nBackporting & Forward Compatibility\n\nThis feature may be backported to older versions of Python.\n\nIn CPython, we don't add warnings to patch releases, so the default\nfilter should be changed to 'fully_trusted' in backports.\n\nOther than that, all of the changes to tarfile should be backported, so\nhasattr(tarfile, 'data_filter') becomes a reliable check for all of the\nnew functionality.\n\nNote that CPython's usual policy is to avoid adding new APIs in security\nbackports. This feature does not make sense without a new API\n(TarFile.extraction_filter and the filter argument), so we'll make an\nexception. 
(See Discourse comment 23149/16 for details.)\n\nHere are examples of code that takes into account that tarfile may or\nmay not have the proposed feature.\n\nWhen copying these snippets, note that setting extraction_filter will\naffect subsequent operations.\n\n- Fully trusted archive:\n\n my_tarfile.extraction_filter = (lambda member, path: member)\n my_tarfile.extractall()\n\n- Use the 'data' filter if available, but revert to Python 3.11\n behavior ('fully_trusted') if this feature is not available:\n\n my_tarfile.extraction_filter = getattr(tarfile, 'data_filter',\n (lambda member, path: member))\n my_tarfile.extractall()\n\n (This is an unsafe operation, so it should be spelled out\n explicitly, ideally with a comment.)\n\n- Use the 'data' filter; fail if it is not available:\n\n my_tarfile.extractall(filter=tarfile.data_filter)\n\n or:\n\n my_tarfile.extraction_filter = tarfile.data_filter\n my_tarfile.extractall()\n\n- Use the 'data' filter; warn if it is not available:\n\n if hasattr(tarfile, 'data_filter'):\n my_tarfile.extractall(filter='data')\n else:\n # remove this when no longer needed\n warn_the_user('Extracting may be unsafe; consider updating Python')\n my_tarfile.extractall()\n\nSecurity Implications\n\nThis proposal improves security, at the expense of backwards\ncompatibility. In particular, it will help users avoid CVE-2007-4559.\n\nHow to Teach This\n\nThe API, usage notes and tips for further verification will be added to\nthe documentation. These should be usable for users who are familiar\nwith archives in general, but not with the specifics of UNIX filesystems\nnor the related security issues.\n\nReference Implementation\n\nSee pull request #102953 on GitHub.\n\nRejected Ideas\n\nSafeTarFile\n\nAn initial idea from Lars Gustäbel was to provide a separate class that\nimplements security checks (see gh-65308). There are two major issues\nwith this approach:\n\n- The name is misleading. General archive operations can never be made\n “safe” from all kinds of unwanted behavior, without impacting\n legitimate use cases.\n- It does not solve the problem of unsafe defaults.\n\nHowever, many of the ideas behind SafeTarFile were reused in this PEP.\n\nAdd absolute_path option to tarfile\n\nIssue gh-73974 asks for adding an absolute_path option to extraction\nmethods. This would be a minimal change to formally resolve\nCVE-2007-4559. It doesn't go far enough to protect the unaware, nor to\nempower the diligent and curious.\n\nOther names for the 'tar' filter\n\nThe 'tar' filter exposes features specific to UNIX-like filesystems, so\nit could be named 'unix'. Or 'unix-like', 'nix', '*nix', 'posix'?\n\nFeature-wise, tar format and UNIX-like filesystem are essentially\nequivalent, so tar is a good name.\n\nPossible Further Work\n\nAdding filters to zipfile and shutil.unpack_archive\n\nFor consistency, zipfile and shutil.unpack_archive could gain support\nfor a filter argument. However, this would require research that this\nPEP's author can't promise for Python 3.12.\n\nFilters for zipfile would probably not help security. Zip is used\nprimarily for cross-platform data bundles, and correspondingly,\nZipFile.extract 's defaults are already similar\nto what a 'data' filter would do. A 'fully_trusted' filter, which would\nnewly allow absolute paths and .. 
path components, might not be useful
for much except a unified unpack_archive API.

Filters should be useful for use cases other than security, but those
would usually need custom filter functions, and those would need API
that works with both ~tarfile.TarInfo and ~zipfile.ZipInfo. That is
definitely out of scope of this PEP.

If only this PEP is implemented and nothing changes for zipfile, the
effect for callers of unpack_archive is that the default for tar files
is changing from 'fully_trusted' to the more appropriate 'data'. In the
interim period, Python 3.12-3.13 will emit DeprecationWarning. That's
annoying, but there are several ways to handle it: e.g. add a filter
argument conditionally, set TarFile.extraction_filter globally, or
ignore/suppress the warning until Python 3.14.

Also, since many calls to unpack_archive are likely to be unsafe,
there's hope that the DeprecationWarning will often turn out to be a
helpful hint to review affected code.

Thanks

This proposal is based on prior work and discussions by many people, in
particular Lars Gustäbel, Gregory P. Smith, Larry Hastings, Joachim
Wagner, Jan Matejek, Jakub Wilk, Daniel Garcia, Lumír Balhar, Miro
Hrončok, and many others.

References

Copyright

This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

PEP: 391
Title: Dictionary-Based Configuration For Logging
Version: $Revision$
Last-Modified: $Date$
Author: Vinay Sajip <vinay_sajip at red-dove.com>
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 15-Oct-2009
Python-Version: 2.7, 3.2
Post-History:

Abstract

This PEP describes a new way of configuring logging using a dictionary
to hold configuration information.

Rationale

The present means for configuring Python's logging package is either by
using the logging API to configure logging programmatically, or else by
means of ConfigParser-based configuration files.

Programmatic configuration, while offering maximal control, fixes the
configuration in Python code. This does not facilitate changing it
easily at runtime, and, as a result, the ability to flexibly turn the
verbosity of logging up and down for different parts of a using
application is lost. This limits the usability of logging as an aid to
diagnosing problems - and sometimes, logging is the only diagnostic aid
available in production environments.

The ConfigParser-based configuration system is usable, but does not
allow its users to configure all aspects of the logging package. For
example, Filters cannot be configured using this system. Furthermore,
the ConfigParser format appears to engender dislike (sometimes strong
dislike) in some quarters.
Though it was chosen because it was the only\nconfiguration format supported in the Python standard at that time, many\npeople regard it (or perhaps just the particular schema chosen for\nlogging's configuration) as 'crufty' or 'ugly', in some cases apparently\non purely aesthetic grounds.\n\nRecent versions of Python include JSON support in the standard library,\nand this is also usable as a configuration format. In other\nenvironments, such as Google App Engine, YAML is used to configure\napplications, and usually the configuration of logging would be\nconsidered an integral part of the application configuration. Although\nthe standard library does not contain YAML support at present, support\nfor both JSON and YAML can be provided in a common way because both of\nthese serialization formats allow deserialization to Python\ndictionaries.\n\nBy providing a way to configure logging by passing the configuration in\na dictionary, logging will be easier to configure not only for users of\nJSON and/or YAML, but also for users of custom configuration methods, by\nproviding a common format in which to describe the desired\nconfiguration.\n\nAnother drawback of the current ConfigParser-based configuration system\nis that it does not support incremental configuration: a new\nconfiguration completely replaces the existing configuration. Although\nfull flexibility for incremental configuration is difficult to provide\nin a multi-threaded environment, the new configuration mechanism will\nallow the provision of limited support for incremental configuration.\n\nSpecification\n\nThe specification consists of two parts: the API and the format of the\ndictionary used to convey configuration information (i.e. the schema to\nwhich it must conform).\n\nNaming\n\nHistorically, the logging package has not been PEP 8 conformant. At some\nfuture time, this will be corrected by changing method and function\nnames in the package in order to conform with PEP 8. However, in the\ninterests of uniformity, the proposed additions to the API use a naming\nscheme which is consistent with the present scheme used by logging.\n\nAPI\n\nThe logging.config module will have the following addition:\n\n- A function, called dictConfig(), which takes a single argument\n - the dictionary holding the configuration. Exceptions will be\n raised if there are errors while processing the dictionary.\n\nIt will be possible to customize this API - see the section on API\nCustomization. Incremental configuration is covered in its own section.\n\nDictionary Schema - Overview\n\nBefore describing the schema in detail, it is worth saying a few words\nabout object connections, support for user-defined objects and access to\nexternal and internal objects.\n\nObject connections\n\nThe schema is intended to describe a set of logging objects - loggers,\nhandlers, formatters, filters - which are connected to each other in an\nobject graph. Thus, the schema needs to represent connections between\nthe objects. For example, say that, once configured, a particular logger\nhas attached to it a particular handler. For the purposes of this\ndiscussion, we can say that the logger represents the source, and the\nhandler the destination, of a connection between the two. Of course in\nthe configured objects this is represented by the logger holding a\nreference to the handler. 
In the configuration dict, this is done by\ngiving each destination object an id which identifies it unambiguously,\nand then using the id in the source object's configuration to indicate\nthat a connection exists between the source and the destination object\nwith that id.\n\nSo, for example, consider the following YAML snippet:\n\n formatters:\n brief:\n # configuration for formatter with id 'brief' goes here\n precise:\n # configuration for formatter with id 'precise' goes here\n handlers:\n h1: #This is an id\n # configuration of handler with id 'h1' goes here\n formatter: brief\n h2: #This is another id\n # configuration of handler with id 'h2' goes here\n formatter: precise\n loggers:\n foo.bar.baz:\n # other configuration for logger 'foo.bar.baz'\n handlers: [h1, h2]\n\n(Note: YAML will be used in this document as it is a little more\nreadable than the equivalent Python source form for the dictionary.)\n\nThe ids for loggers are the logger names which would be used\nprogrammatically to obtain a reference to those loggers, e.g.\nfoo.bar.baz. The ids for Formatters and Filters can be any string value\n(such as brief, precise above) and they are transient, in that they are\nonly meaningful for processing the configuration dictionary and used to\ndetermine connections between objects, and are not persisted anywhere\nwhen the configuration call is complete.\n\nHandler ids are treated specially, see the section on Handler Ids,\nbelow.\n\nThe above snippet indicates that logger named foo.bar.baz should have\ntwo handlers attached to it, which are described by the handler ids h1\nand h2. The formatter for h1 is that described by id brief, and the\nformatter for h2 is that described by id precise.\n\nUser-defined objects\n\nThe schema should support user-defined objects for handlers, filters and\nformatters. (Loggers do not need to have different types for different\ninstances, so there is no support - in the configuration -for\nuser-defined logger classes.)\n\nObjects to be configured will typically be described by dictionaries\nwhich detail their configuration. In some places, the logging system\nwill be able to infer from the context how an object is to be\ninstantiated, but when a user-defined object is to be instantiated, the\nsystem will not know how to do this. In order to provide complete\nflexibility for user-defined object instantiation, the user will need to\nprovide a 'factory' - a callable which is called with a configuration\ndictionary and which returns the instantiated object. This will be\nsignalled by an absolute import path to the factory being made available\nunder the special key '()'. Here's a concrete example:\n\n formatters:\n brief:\n format: '%(message)s'\n default:\n format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'\n datefmt: '%Y-%m-%d %H:%M:%S'\n custom:\n (): my.package.customFormatterFactory\n bar: baz\n spam: 99.9\n answer: 42\n\nThe above YAML snippet defines three formatters. The first, with id\nbrief, is a standard logging.Formatter instance with the specified\nformat string. The second, with id default, has a longer format and also\ndefines the time format explicitly, and will result in a\nlogging.Formatter initialized with those two format strings. 
Shown in\nPython source form, the brief and default formatters have configuration\nsub-dictionaries:\n\n {\n 'format' : '%(message)s'\n }\n\nand:\n\n {\n 'format' : '%(asctime)s %(levelname)-8s %(name)-15s %(message)s',\n 'datefmt' : '%Y-%m-%d %H:%M:%S'\n }\n\nrespectively, and as these dictionaries do not contain the special key\n'()', the instantiation is inferred from the context: as a result,\nstandard logging.Formatter instances are created. The configuration\nsub-dictionary for the third formatter, with id custom, is:\n\n {\n '()' : 'my.package.customFormatterFactory',\n 'bar' : 'baz',\n 'spam' : 99.9,\n 'answer' : 42\n }\n\nand this contains the special key '()', which means that user-defined\ninstantiation is wanted. In this case, the specified factory callable\nwill be used. If it is an actual callable it will be used directly -\notherwise, if you specify a string (as in the example) the actual\ncallable will be located using normal import mechanisms. The callable\nwill be called with the remaining items in the configuration\nsub-dictionary as keyword arguments. In the above example, the formatter\nwith id custom will be assumed to be returned by the call:\n\n my.package.customFormatterFactory(bar='baz', spam=99.9, answer=42)\n\nThe key '()' has been used as the special key because it is not a valid\nkeyword parameter name, and so will not clash with the names of the\nkeyword arguments used in the call. The '()' also serves as a mnemonic\nthat the corresponding value is a callable.\n\nAccess to external objects\n\nThere are times where a configuration will need to refer to objects\nexternal to the configuration, for example sys.stderr. If the\nconfiguration dict is constructed using Python code then this is\nstraightforward, but a problem arises when the configuration is provided\nvia a text file (e.g. JSON, YAML). In a text file, there is no standard\nway to distinguish sys.stderr from the literal string 'sys.stderr'. To\nfacilitate this distinction, the configuration system will look for\ncertain special prefixes in string values and treat them specially. For\nexample, if the literal string 'ext://sys.stderr' is provided as a value\nin the configuration, then the ext:// will be stripped off and the\nremainder of the value processed using normal import mechanisms.\n\nThe handling of such prefixes will be done in a way analogous to\nprotocol handling: there will be a generic mechanism to look for\nprefixes which match the regular expression\n^(?P[a-z]+)://(?P.*)$ whereby, if the prefix is\nrecognised, the suffix is processed in a prefix-dependent manner and the\nresult of the processing replaces the string value. If the prefix is not\nrecognised, then the string value will be left as-is.\n\nThe implementation will provide for a set of standard prefixes such as\next:// but it will be possible to disable the mechanism completely or\nprovide additional or different prefixes for special handling.\n\nAccess to internal objects\n\nAs well as external objects, there is sometimes also a need to refer to\nobjects in the configuration. This will be done implicitly by the\nconfiguration system for things that it knows about. 
For example, the\nstring value 'DEBUG' for a level in a logger or handler will\nautomatically be converted to the value logging.DEBUG, and the handlers,\nfilters and formatter entries will take an object id and resolve to the\nappropriate destination object.\n\nHowever, a more generic mechanism needs to be provided for the case of\nuser-defined objects which are not known to logging. For example, take\nthe instance of logging.handlers.MemoryHandler, which takes a target\nwhich is another handler to delegate to. Since the system already knows\nabout this class, then in the configuration, the given target just needs\nto be the object id of the relevant target handler, and the system will\nresolve to the handler from the id. If, however, a user defines a\nmy.package.MyHandler which has a alternate handler, the configuration\nsystem would not know that the alternate referred to a handler. To cater\nfor this, a generic resolution system will be provided which allows the\nuser to specify:\n\n handlers:\n file:\n # configuration of file handler goes here\n\n custom:\n (): my.package.MyHandler\n alternate: cfg://handlers.file\n\nThe literal string 'cfg://handlers.file' will be resolved in an\nanalogous way to the strings with the ext:// prefix, but looking in the\nconfiguration itself rather than the import namespace. The mechanism\nwill allow access by dot or by index, in a similar way to that provided\nby str.format. Thus, given the following snippet:\n\n handlers:\n email:\n class: logging.handlers.SMTPHandler\n mailhost: localhost\n fromaddr: my_app@domain.tld\n toaddrs:\n - support_team@domain.tld\n - dev_team@domain.tld\n subject: Houston, we have a problem.\n\nin the configuration, the string 'cfg://handlers' would resolve to the\ndict with key handlers, the string 'cfg://handlers.email would resolve\nto the dict with key email in the handlers dict, and so on. The string\n'cfg://handlers.email.toaddrs[1] would resolve to 'dev_team.domain.tld'\nand the string 'cfg://handlers.email.toaddrs[0]' would resolve to the\nvalue 'support_team@domain.tld'. The subject value could be accessed\nusing either 'cfg://handlers.email.subject' or, equivalently,\n'cfg://handlers.email[subject]'. The latter form only needs to be used\nif the key contains spaces or non-alphanumeric characters. If an index\nvalue consists only of decimal digits, access will be attempted using\nthe corresponding integer value, falling back to the string value if\nneeded.\n\nGiven a string cfg://handlers.myhandler.mykey.123, this will resolve to\nconfig_dict['handlers']['myhandler']['mykey']['123']. If the string is\nspecified as cfg://handlers.myhandler.mykey[123], the system will\nattempt to retrieve the value from\nconfig_dict['handlers']['myhandler']['mykey'][123], and fall back to\nconfig_dict['handlers']['myhandler']['mykey']['123'] if that fails.\n\nHandler Ids\n\nSome specific logging configurations require the use of handler levels\nto achieve the desired effect. However, unlike loggers which can always\nbe identified by their names, handlers have no persistent handles\nwhereby levels can be changed via an incremental configuration call.\n\nTherefore, this PEP proposes to add an optional name property to\nhandlers. If used, this will add an entry in a dictionary which maps the\nname to the handler. (The entry will be removed when the handler is\nclosed.) When an incremental configuration call is made, handlers will\nbe looked up in this dictionary to set the handler level according to\nthe value in the configuration. 
See the section on incremental\nconfiguration for more details.\n\nIn theory, such a \"persistent name\" facility could also be provided for\nFilters and Formatters. However, there is not a strong case to be made\nfor being able to configure these incrementally. On the basis that\npracticality beats purity, only Handlers will be given this new name\nproperty. The id of a handler in the configuration will become its name.\n\nThe handler name lookup dictionary is for configuration use only and\nwill not become part of the public API for the package.\n\nDictionary Schema - Detail\n\nThe dictionary passed to dictConfig() must contain the following keys:\n\n- version - to be set to an integer value representing the schema\n version. The only valid value at present is 1, but having this key\n allows the schema to evolve while still preserving backwards\n compatibility.\n\nAll other keys are optional, but if present they will be interpreted as\ndescribed below. In all cases below where a 'configuring dict' is\nmentioned, it will be checked for the special '()' key to see if a\ncustom instantiation is required. If so, the mechanism described above\nis used to instantiate; otherwise, the context is used to determine how\nto instantiate.\n\n- formatters - the corresponding value will be a dict in which each\n key is a formatter id and each value is a dict describing how to\n configure the corresponding Formatter instance.\n\n The configuring dict is searched for keys format and datefmt (with\n defaults of None) and these are used to construct a\n logging.Formatter instance.\n\n- filters - the corresponding value will be a dict in which each key\n is a filter id and each value is a dict describing how to configure\n the corresponding Filter instance.\n\n The configuring dict is searched for key name (defaulting to the\n empty string) and this is used to construct a logging.Filter\n instance.\n\n- handlers - the corresponding value will be a dict in which each key\n is a handler id and each value is a dict describing how to configure\n the corresponding Handler instance.\n\n The configuring dict is searched for the following keys:\n\n - class (mandatory). This is the fully qualified name of the\n handler class.\n - level (optional). The level of the handler.\n - formatter (optional). The id of the formatter for this handler.\n - filters (optional). A list of ids of the filters for this\n handler.\n\n All other keys are passed through as keyword arguments to the\n handler's constructor. For example, given the snippet:\n\n handlers:\n console:\n class : logging.StreamHandler\n formatter: brief\n level : INFO\n filters: [allow_foo]\n stream : ext://sys.stdout\n file:\n class : logging.handlers.RotatingFileHandler\n formatter: precise\n filename: logconfig.log\n maxBytes: 1024\n backupCount: 3\n\n the handler with id console is instantiated as a\n logging.StreamHandler, using sys.stdout as the underlying stream.\n The handler with id file is instantiated as a\n logging.handlers.RotatingFileHandler with the keyword arguments\n filename='logconfig.log', maxBytes=1024, backupCount=3.\n\n- loggers - the corresponding value will be a dict in which each key\n is a logger name and each value is a dict describing how to\n configure the corresponding Logger instance.\n\n The configuring dict is searched for the following keys:\n\n - level (optional). The level of the logger.\n - propagate (optional). The propagation setting of the logger.\n - filters (optional). 
A list of ids of the filters for this\n logger.\n - handlers (optional). A list of ids of the handlers for this\n logger.\n\n The specified loggers will be configured according to the level,\n propagation, filters and handlers specified.\n\n- root - this will be the configuration for the root logger.\n Processing of the configuration will be as for any logger, except\n that the propagate setting will not be applicable.\n\n- incremental - whether the configuration is to be interpreted as\n incremental to the existing configuration. This value defaults to\n False, which means that the specified configuration replaces the\n existing configuration with the same semantics as used by the\n existing fileConfig() API.\n\n If the specified value is True, the configuration is processed as\n described in the section on Incremental Configuration, below.\n\n- disable_existing_loggers - whether any existing loggers are to be\n disabled. This setting mirrors the parameter of the same name in\n fileConfig(). If absent, this parameter defaults to True. This value\n is ignored if incremental is True.\n\nA Working Example\n\nThe following is an actual working configuration in YAML format (except\nthat the email addresses are bogus):\n\n formatters:\n brief:\n format: '%(levelname)-8s: %(name)-15s: %(message)s'\n precise:\n format: '%(asctime)s %(name)-15s %(levelname)-8s %(message)s'\n filters:\n allow_foo:\n name: foo\n handlers:\n console:\n class : logging.StreamHandler\n formatter: brief\n level : INFO\n stream : ext://sys.stdout\n filters: [allow_foo]\n file:\n class : logging.handlers.RotatingFileHandler\n formatter: precise\n filename: logconfig.log\n maxBytes: 1024\n backupCount: 3\n debugfile:\n class : logging.FileHandler\n formatter: precise\n filename: logconfig-detail.log\n mode: a\n email:\n class: logging.handlers.SMTPHandler\n mailhost: localhost\n fromaddr: my_app@domain.tld\n toaddrs:\n - support_team@domain.tld\n - dev_team@domain.tld\n subject: Houston, we have a problem.\n loggers:\n foo:\n level : ERROR\n handlers: [debugfile]\n spam:\n level : CRITICAL\n handlers: [debugfile]\n propagate: no\n bar.baz:\n level: WARNING\n root:\n level : DEBUG\n handlers : [console, file]\n\nIncremental Configuration\n\nIt is difficult to provide complete flexibility for incremental\nconfiguration. For example, because objects such as filters and\nformatters are anonymous, once a configuration is set up, it is not\npossible to refer to such anonymous objects when augmenting a\nconfiguration.\n\nFurthermore, there is not a compelling case for arbitrarily altering the\nobject graph of loggers, handlers, filters, formatters at run-time, once\na configuration is set up; the verbosity of loggers and handlers can be\ncontrolled just by setting levels (and, in the case of loggers,\npropagation flags). Changing the object graph arbitrarily in a safe way\nis problematic in a multi-threaded environment; while not impossible,\nthe benefits are not worth the complexity it adds to the implementation.\n\nThus, when the incremental key of a configuration dict is present and is\nTrue, the system will ignore any formatters and filters entries\ncompletely, and process only the level settings in the handlers entries,\nand the level and propagate settings in the loggers and root entries.\n\nIt's certainly possible to provide incremental configuration by other\nmeans, for example making dictConfig() take an incremental keyword\nargument which defaults to False. 
The reason for suggesting that a value\nin the configuration dict be used is that it allows for configurations\nto be sent over the wire as pickled dicts to a socket listener. Thus,\nthe logging verbosity of a long-running application can be altered over\ntime with no need to stop and restart the application.\n\nNote: Feedback on incremental configuration needs based on your\npractical experience will be particularly welcome.\n\nAPI Customization\n\nThe bare-bones dictConfig() API will not be sufficient for all use\ncases. Provision for customization of the API will be made by providing\nthe following:\n\n- A class, called DictConfigurator, whose constructor is passed the\n dictionary used for configuration, and which has a configure()\n method.\n- A callable, called dictConfigClass, which will (by default) be set\n to DictConfigurator. This is provided so that if desired,\n DictConfigurator can be replaced with a suitable user-defined\n implementation.\n\nThe dictConfig() function will call dictConfigClass passing the\nspecified dictionary, and then call the configure() method on the\nreturned object to actually put the configuration into effect:\n\n def dictConfig(config):\n dictConfigClass(config).configure()\n\nThis should cater to all customization needs. For example, a subclass of\nDictConfigurator could call DictConfigurator.__init__() in its own\n__init__(), then set up custom prefixes which would be usable in the\nsubsequent configure() call. The dictConfigClass would be bound to the\nsubclass, and then dictConfig() could be called exactly as in the\ndefault, uncustomized state.\n\nChange to Socket Listener Implementation\n\nThe existing socket listener implementation will be modified as follows:\nwhen a configuration message is received, an attempt will be made to\ndeserialize to a dictionary using the json module. If this step fails,\nthe message will be assumed to be in the fileConfig format and processed\nas before. If deserialization is successful, then dictConfig() will be\ncalled to process the resulting dictionary.\n\nConfiguration Errors\n\nIf an error is encountered during configuration, the system will raise a\nValueError, TypeError, AttributeError or ImportError with a suitably\ndescriptive message. The following is a (possibly incomplete) list of\nconditions which will raise an error:\n\n- A level which is not a string or which is a string not corresponding\n to an actual logging level\n- A propagate value which is not a boolean\n- An id which does not have a corresponding destination\n- A non-existent handler id found during an incremental call\n- An invalid logger name\n- Inability to resolve to an internal or external object\n\nDiscussion in the community\n\nThe PEP has been announced on python-dev and python-list. While there\nhasn't been a huge amount of discussion, this is perhaps to be expected\nfor a niche topic.\n\nDiscussion threads on python-dev:\n\nhttps://mail.python.org/pipermail/python-dev/2009-October/092695.html\nhttps://mail.python.org/pipermail/python-dev/2009-October/092782.html\nhttps://mail.python.org/pipermail/python-dev/2009-October/093062.html\n\nAnd on python-list:\n\nhttps://mail.python.org/pipermail/python-list/2009-October/1223658.html\nhttps://mail.python.org/pipermail/python-list/2009-October/1224228.html\n\nThere have been some comments in favour of the proposal, no objections\nto the proposal as a whole, and some questions and objections about\nspecific details. 
These are believed by the author to have been
addressed by making changes to the PEP.

Reference implementation

A reference implementation of the changes is available as a module
dictconfig.py with accompanying unit tests in test_dictconfig.py, at:

http://bitbucket.org/vinay.sajip/dictconfig

This incorporates all features other than the socket listener change.

Copyright

This document has been placed in the public domain.

PEP: 3121 Title: Extension Module Initialization and Finalization
Version: $Revision$ Last-Modified: $Date$ Author: Martin von Löwis
<martin@v.loewis.de> Status: Final Type: Standards Track Content-Type:
text/x-rst Created: 27-Apr-2007 Python-Version: 3.0 Post-History:

PyInit_modulename and PyModuleDef

Abstract

Extension module initialization currently has a few deficiencies. There
is no cleanup for modules, the entry point name might give naming
conflicts, the entry functions don't follow the usual calling
convention, and multiple interpreters are not supported well. This PEP
addresses these issues.

Problems

Module Finalization

Currently, extension modules are initialized usually once and then
"live" forever. The only exception is when Py_Finalize() is called: then
the initialization routine is invoked a second time. This is bad from a
resource management point of view: memory and other resources might get
allocated each time initialization is called, but there is no way to
reclaim them. As a result, there is currently no way to completely
release all resources Python has allocated.

Entry point name conflicts

The entry point is currently called init<module>. This might conflict
with other symbols also called init<something>. In particular,
initsocket is known to have conflicted in the past (this specific
problem got resolved as a side effect of renaming the module to
_socket).

Entry point signature

The entry point is currently a procedure (returning void). This deviates
from the usual calling conventions; callers can find out whether there
was an error during initialization only by checking PyErr_Occurred. The
entry point should return a PyObject*, which will be the module created,
or NULL in case of an exception.

Multiple Interpreters

Currently, extension modules share their state across all interpreters.
This allows for undesirable information leakage across interpreters: one
script could permanently corrupt objects in an extension module,
possibly breaking all scripts in other interpreters.

Specification

The module initialization routines change their signature to:

    PyObject *PyInit_<modulename>()

The initialization routine will be invoked once per interpreter, when
the module is imported.
It should return a new module object each time.\n\nIn order to store per-module state in C variables, each module object\nwill contain a block of memory that is interpreted only by the module.\nThe amount of memory used for the module is specified at the point of\ncreation of the module.\n\nIn addition to the initialization function, a module may implement a\nnumber of additional callback functions, which are invoked when the\nmodule's tp_traverse, tp_clear, and tp_free functions are invoked, and\nwhen the module is reloaded.\n\nThe entire module definition is combined in a struct PyModuleDef:\n\n struct PyModuleDef{\n PyModuleDef_Base m_base; /* To be filled out by the interpreter */\n Py_ssize_t m_size; /* Size of per-module data */\n PyMethodDef *m_methods;\n inquiry m_reload;\n traverseproc m_traverse;\n inquiry m_clear;\n freefunc m_free;\n };\n\nCreation of a module is changed to expect an optional PyModuleDef*. The\nmodule state will be null-initialized.\n\nEach module method will be passed the module object as the first\nparameter. To access the module data, a function:\n\n void* PyModule_GetState(PyObject*);\n\nwill be provided. In addition, to lookup a module more efficiently than\ngoing through sys.modules, a function:\n\n PyObject* PyState_FindModule(struct PyModuleDef*);\n\nwill be provided. This lookup function will use an index located in the\nm_base field, to find the module by index, not by name.\n\nAs all Python objects should be controlled through the Python memory\nmanagement, usage of \"static\" type objects is discouraged, unless the\ntype object itself has no memory-managed state. To simplify definition\nof heap types, a new method:\n\n PyTypeObject* PyType_Copy(PyTypeObject*);\n\nis added.\n\nExample\n\nxxmodule.c would be changed to remove the initxx function, and add the\nfollowing code instead:\n\n struct xxstate{\n PyObject *ErrorObject;\n PyObject *Xxo_Type;\n };\n\n #define xxstate(o) ((struct xxstate*)PyModule_GetState(o))\n\n static int xx_traverse(PyObject *m, visitproc v,\n void *arg)\n {\n Py_VISIT(xxstate(m)->ErrorObject);\n Py_VISIT(xxstate(m)->Xxo_Type);\n return 0;\n }\n\n static int xx_clear(PyObject *m)\n {\n Py_CLEAR(xxstate(m)->ErrorObject);\n Py_CLEAR(xxstate(m)->Xxo_Type);\n return 0;\n }\n\n static struct PyModuleDef xxmodule = {\n {}, /* m_base */\n sizeof(struct xxstate),\n &xx_methods,\n 0, /* m_reload */\n xx_traverse,\n xx_clear,\n 0, /* m_free - not needed, since all is done in m_clear */\n }\n\n PyObject*\n PyInit_xx()\n {\n PyObject *res = PyModule_New(\"xx\", &xxmodule);\n if (!res) return NULL;\n xxstate(res)->ErrorObject = PyErr_NewException(\"xx.error\", NULL, NULL);\n if (!xxstate(res)->ErrorObject) {\n Py_DECREF(res);\n return NULL;\n }\n xxstate(res)->XxoType = PyType_Copy(&Xxo_Type);\n if (!xxstate(res)->Xxo_Type) {\n Py_DECREF(res);\n return NULL;\n }\n return res;\n }\n\nDiscussion\n\nTim Peters reports in[1] that PythonLabs considered such a feature at\none point, and lists the following additional hooks which aren't\ncurrently supported in this PEP:\n\n- when the module object is deleted from sys.modules\n- when Py_Finalize is called\n- when Python exits\n- when the Python DLL is unloaded (Windows only)\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:\n\n[1] Tim Peters, reporting earlier conversation about such a 
feature
https://mail.python.org/pipermail/python-3000/2006-April/000726.html

PEP: 676 Title: PEP Infrastructure Process Author: Adam Turner
<python@quite.org.uk> Sponsor: Mariatta <mariatta@python.org>
PEP-Delegate: Barry Warsaw <barry@python.org> Discussions-To:
https://discuss.python.org/t/10774 Status: Active Type: Process
Content-Type: text/x-rst Created: 01-Nov-2021 Post-History: 23-Sep-2021,
30-Nov-2021 Resolution: https://discuss.python.org/t/10774/99

Abstract

This PEP addresses the infrastructure around rendering PEP files from
reStructuredText files to HTML webpages. We aim to specify a
self-contained and maintainable solution for PEP readers, authors, and
editors.

Motivation

As of November 2021, Python Enhancement Proposals (PEPs) are rendered in
a multi-system, multi-stage process. A continuous integration (CI) task
runs a docutils script to render all PEP files individually. The CI task
then uploads a tar archive to a server, where it is retrieved and
rendered into the python.org website periodically.

This places a constraint on the python.org website to handle raw HTML
uploads and handle PEP rendering, and makes the appropriate place to
raise issues unclear in some cases[1].

This PEP provides a specification for self-contained rendering of PEPs.
This would:

- reduce the amount of distributed configuration for supporting PEPs
- enable quality-of-life improvements for those who read, write, and
  review PEPs
- solve a number of outstanding issues, and lay the path for
  improvements
- save volunteer maintainers' time

We propose that PEPs are accessed through peps.python.org at the
top-level (for example peps.python.org/pep-0008), and that all custom
tooling to support rendering PEPs is hosted in the python/peps
repository.

Rationale

Simplifying and Centralising Infrastructure

As of November 2021, to locally render a PEP file, a PEP author or
editor needs to create a full local instance of the python.org website
and run a number of disparate scripts, following documentation that
lives outside of the python/peps repository.

By contrast, the proposed implementation provides a single Makefile and
a Python script to render all PEP files, with options to target a
web-server or the local filesystem.

Using a single repository to host all tooling will clarify where to
raise issues, reducing volunteer time spent in triage.

Simplified and centralised tooling may also reduce the barrier to entry
to further improvements, as the scope of the PEP rendering
infrastructure is well defined.

Quality-of-Life Improvements and Resolving Issues

There are several requests for additional features in reading PEPs, such
as:

- syntax highlighting[2]
- use of ..
code-block:: directives[3]\n- support for SVG images[4]\n- typographic quotation marks[5]\n- additional footer information[6]\n- intersphinx functionality[7]\n- dark mode theme[8]\n\nThese are \"easy wins\" from this proposal, and would serve to improve the\nquality-of-life for consumers of PEPs (including reviewers and writers).\n\nFor example, the current (as of November 2021) system runs periodically\non a schedule. This means that updates to PEPs cannot be circulated\nimmediately, reducing productivity. The reference implementation renders\nand publishes all PEPs on every commit to the repository, solving the\nissue by design.\n\nThe reference implementation fixes several issues[9]. For example:\n\n- list styles are currently not respected by python.org's stylesheets\n- support for updating images in PEPs is challenging in python.org\n\nThird-party providers such as Read the Docs or Netlify can enhance this\nexperience with features such as automatic rendering of pull requests.\n\nSpecification\n\nThe proposed specification for rendering the PEP files to HTML is as per\nthe reference implementation.\n\nThe rendered PEPs MUST be available at peps.python.org. These SHOULD be\nhosted as static files, and MAY be behind a content delivery network\n(CDN).\n\nA service to render previews of pull requests SHOULD be provided. This\nservice MAY be integrated with the hosting and deployment solution.\n\nThe following redirect rules MUST be created for the python.org domain:\n\n- /peps/ -> https://peps.python.org/\n- /dev/peps/ -> https://peps.python.org/\n- /peps/(.*)\\.html -> https://peps.python.org/$1\n- /dev/peps/(.*) -> https://peps.python.org/$1\n\nThe following nginx configuration would achieve this:\n\n location ~ ^/dev/peps/?(.*)$ {\n return 308 https://peps.python.org/$1/;\n }\n\n location ~ ^/peps/(.*)\\.html$ {\n return 308 https://peps.python.org/$1/;\n }\n\n location ^/(dev/)?peps(/.*)?$ {\n return 308 https://peps.python.org/;\n }\n\nRedirects MUST be implemented to preserve URL fragments for backward\ncompatibility purposes.\n\nBackwards Compatibility\n\nDue to server-side redirects to the new canonical URLs, links in\npreviously published materials referring to the old URL schemes will be\nguaranteed to work. All PEPs will continue to render correctly, and a\ncustom stylesheet in the reference implementation improves presentation\nfor some elements (most notably code blocks and block quotes).\nTherefore, this PEP presents no backwards compatibility issues.\n\nSecurity Implications\n\nThe main python.org website will no longer process raw HTML uploads,\nclosing a potential threat vector. PEP rendering and deployment\nprocesses will use modern, well-maintained code and secure automated\nplatforms, further reducing the potential attack surface. Therefore, we\nsee no negative security impact.\n\nHow to Teach This\n\nThe new canonical URLs will be publicised in the documentation. However,\nthis is mainly a backend infrastructure change, and there should be\nminimal end-user impact. PEP 1 and PEP 12 will be updated as needed.\n\nReference Implementation\n\nThe proposed implementation has been merged into the python/peps\nrepository in a series of pull requests[10]. It uses the Sphinx\ndocumentation system with a custom theme (supporting light and dark\ncolour schemes) and extensions.\n\nThis already automatically renders all PEPs on every commit, and\npublishes them to python.github.io/peps. 
The high level documentation
for the system covers how to render PEPs locally and the implementation
of the system.

Rejected Ideas

It would likely be possible to amend the current (as of November 2021)
rendering process to include a subset of the quality-of-life
improvements and issue mitigations mentioned above. However, we do not
believe that this would solve the distributed tooling issue.

It would be possible to use the output from the proposed rendering
system and import it into python.org. We would argue that this would be
the worst of both worlds, as a great deal of complexity is added whilst
none is removed.

Acknowledgements

- Hugo van Kemenade
- Pablo Galindo Salgado
- Éric Araujo
- Mariatta
- C.A.M. Gerlach

Footnotes

Copyright

This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

[1] For example, pythondotorg#1024, pythondotorg#1038,
pythondotorg#1387, pythondotorg#1388, pythondotorg#1393,
pythondotorg#1564, pythondotorg#1913,

[2] Requested: pythondotorg#1063, pythondotorg#1206, pythondotorg#1638,
peps#159, comment in peps#1571, peps#1577,

[3] Requested: pythondotorg#1063, pythondotorg#1206, pythondotorg#1638,
peps#159, comment in peps#1571, peps#1577,

[4] Requested: peps#701

[5] Requested: peps#165

[6] Requested: pythondotorg#1564

[7] Requested: comment in peps#2

[8] Requested: in python-dev

[9] As of November 2021, see peps#1387, pythondotorg#824,
pythondotorg#1556,

[10] Implementation PRs: peps#1930, peps#1931, peps#1932, peps#1933,
peps#1934

PEP: 634 Title: Structural Pattern Matching: Specification Author:
Brandt Bucher <brandt@python.org>, Guido van Rossum <guido@python.org>
BDFL-Delegate: Discussions-To: python-dev@python.org Status: Final Type:
Standards Track Created: 12-Sep-2020 Python-Version: 3.10 Post-History:
22-Oct-2020, 08-Feb-2021 Replaces: 622 Resolution:
https://mail.python.org/archives/list/python-committers@python.org/message/SQC2FTLFV5A7DV7RCEAR2I2IKJKGK7W3

match

Abstract

This PEP provides the technical specification for the match statement.
It replaces PEP 622, which is hereby split in three parts:

- PEP 634: Specification
- PEP 635: Motivation and Rationale
- PEP 636: Tutorial

This PEP is intentionally devoid of commentary; the motivation and all
explanations of our design choices are in PEP 635. First-time readers
are encouraged to start with PEP 636, which provides a gentler
introduction to the concepts, syntax and semantics of patterns.

Syntax and Semantics

See Appendix A for the complete grammar.

Overview and Terminology

The pattern matching process takes as input a pattern (following case)
and a subject value (following match).
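Purely as an illustration of this terminology (not part of the
specification, and requiring Python 3.10 or later; the function and
variable names are assumptions made for the example):

    def describe(point):
        match point:                      # 'point' is the subject
            case (0, 0):
                return "origin"
            case (0, y):
                return f"on the y-axis at {y}"
            case (x, 0):
                return f"on the x-axis at {x}"
            case _:
                return "somewhere else"

    # The subject (3, 0) fails the first two patterns, then matches the
    # third, binding x to 3.
    print(describe((3, 0)))   # -> "on the x-axis at 3"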
Phrases to describe the process\ninclude \"the pattern is matched with (or against) the subject value\" and\n\"we match the pattern against (or with) the subject value\".\n\nThe primary outcome of pattern matching is success or failure. In case\nof success we may say \"the pattern succeeds\", \"the match succeeds\", or\n\"the pattern matches the subject value\".\n\nIn many cases a pattern contains subpatterns, and success or failure is\ndetermined by the success or failure of matching those subpatterns\nagainst the value (e.g., for OR patterns) or against parts of the value\n(e.g., for sequence patterns). This process typically processes the\nsubpatterns from left to right until the overall outcome is determined.\nE.g., an OR pattern succeeds at the first succeeding subpattern, while a\nsequence patterns fails at the first failing subpattern.\n\nA secondary outcome of pattern matching may be one or more name\nbindings. We may say \"the pattern binds a value to a name\". When\nsubpatterns tried until the first success, only the bindings due to the\nsuccessful subpattern are valid; when trying until the first failure,\nthe bindings are merged. Several more rules, explained below, apply to\nthese cases.\n\nThe Match Statement\n\nSyntax:\n\n match_stmt: \"match\" subject_expr ':' NEWLINE INDENT case_block+ DEDENT\n subject_expr:\n | star_named_expression ',' star_named_expressions?\n | named_expression\n case_block: \"case\" patterns [guard] ':' block\n guard: 'if' named_expression\n\nThe rules star_named_expression, star_named_expressions,\nnamed_expression and block are part of the standard Python grammar.\n\nThe rule patterns is specified below.\n\nFor context, match_stmt is a new alternative for compound_statement:\n\n compound_statement:\n | if_stmt\n ...\n | match_stmt\n\nThe match and case keywords are soft keywords, i.e. they are not\nreserved words in other grammatical contexts (including at the start of\na line if there is no colon where expected). This implies that they are\nrecognized as keywords when part of a match statement or case block\nonly, and are allowed to be used in all other contexts as variable or\nargument names.\n\nMatch Semantics\n\nThe match statement first evaluates the subject expression. If a comma\nis present a tuple is constructed using the standard rules.\n\nThe resulting subject value is then used to select the first case block\nwhose patterns succeeds matching it and whose guard condition (if\npresent) is \"truthy\". If no case blocks qualify the match statement is\ncomplete; otherwise, the block of the selected case block is executed.\nThe usual rules for executing a block nested inside a compound statement\napply (e.g. an if statement).\n\nName bindings made during a successful pattern match outlive the\nexecuted block and can be used after the match statement.\n\nDuring failed pattern matches, some subpatterns may succeed. For\nexample, while matching the pattern (0, x, 1) with the value [0, 1, 2],\nthe subpattern x may succeed if the list elements are matched from left\nto right. The implementation may choose to either make persistent\nbindings for those partial matches or not. User code including a match\nstatement should not rely on the bindings being made for a failed match,\nbut also shouldn't assume that variables are unchanged by a failed\nmatch. 
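To make this concrete, a small sketch using the same pattern and value
as above (illustrative only):

    x = "unchanged"
    match [0, 1, 2]:
        case (0, x, 1):      # fails on the final element
            pass
    # x may still be "unchanged" here, or it may have been rebound to 1;
    # portable code should not rely on either outcome.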
This part of the behavior is left intentionally unspecified so\ndifferent implementations can add optimizations, and to prevent\nintroducing semantic restrictions that could limit the extensibility of\nthis feature.\n\nThe precise pattern binding rules vary per pattern type and are\nspecified below.\n\nGuards\n\nIf a guard is present on a case block, once the pattern or patterns in\nthe case block succeed, the expression in the guard is evaluated. If\nthis raises an exception, the exception bubbles up. Otherwise, if the\ncondition is \"truthy\" the case block is selected; if it is \"falsy\" the\ncase block is not selected.\n\nSince guards are expressions they are allowed to have side effects.\nGuard evaluation must proceed from the first to the last case block, one\nat a time, skipping case blocks whose pattern(s) don't all succeed.\n(I.e., even if determining whether those patterns succeed may happen out\nof order, guard evaluation must happen in order.) Guard evaluation must\nstop once a case block is selected.\n\nIrrefutable case blocks\n\nA pattern is considered irrefutable if we can prove from its syntax\nalone that it will always succeed. In particular, capture patterns and\nwildcard patterns are irrefutable, and so are AS patterns whose\nleft-hand side is irrefutable, OR patterns containing at least one\nirrefutable pattern, and parenthesized irrefutable patterns.\n\nA case block is considered irrefutable if it has no guard and its\npattern is irrefutable.\n\nA match statement may have at most one irrefutable case block, and it\nmust be last.\n\nPatterns\n\nThe top-level syntax for patterns is as follows:\n\n patterns: open_sequence_pattern | pattern\n pattern: as_pattern | or_pattern\n as_pattern: or_pattern 'as' capture_pattern\n or_pattern: '|'.closed_pattern+\n closed_pattern:\n | literal_pattern\n | capture_pattern\n | wildcard_pattern\n | value_pattern\n | group_pattern\n | sequence_pattern\n | mapping_pattern\n | class_pattern\n\nAS Patterns\n\nSyntax:\n\n as_pattern: or_pattern 'as' capture_pattern\n\n(Note: the name on the right may not be _.)\n\nAn AS pattern matches the OR pattern on the left of the as keyword\nagainst the subject. If this fails, the AS pattern fails. Otherwise, the\nAS pattern binds the subject to the name on the right of the as keyword\nand succeeds.\n\nOR Patterns\n\nSyntax:\n\n or_pattern: '|'.closed_pattern+\n\nWhen two or more patterns are separated by vertical bars (|), this is\ncalled an OR pattern. (A single closed pattern is just that.)\n\nOnly the final subpattern may be irrefutable.\n\nEach subpattern must bind the same set of names.\n\nAn OR pattern matches each of its subpatterns in turn to the subject,\nuntil one succeeds. The OR pattern is then deemed to succeed. If none of\nthe subpatterns succeed the OR pattern fails.\n\nLiteral Patterns\n\nSyntax:\n\n literal_pattern:\n | signed_number\n | signed_number '+' NUMBER\n | signed_number '-' NUMBER\n | strings\n | 'None'\n | 'True'\n | 'False'\n signed_number: NUMBER | '-' NUMBER\n\nThe rule strings and the token NUMBER are defined in the standard Python\ngrammar.\n\nTriple-quoted strings are supported. Raw strings and byte strings are\nsupported. 
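As a short, non-normative sketch of literal patterns (including their
use inside OR patterns), with a function name and values assumed for the
example:

    def classify(status):
        match status:
            case 200 | 201 | 204:          # OR pattern of number literals
                return "success"
            case 404:                      # single number literal
                return "not found"
            case "timeout" | b"timeout":   # string and bytes literals
                return "timed out"
            case None:                     # singleton, compared with 'is'
                return "no response"
            case _:
                return "something else"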
F-strings are not supported.\n\nThe forms signed_number '+' NUMBER and signed_number '-' NUMBER are only\npermitted to express complex numbers; they require a real number on the\nleft and an imaginary number on the right.\n\nA literal pattern succeeds if the subject value compares equal to the\nvalue expressed by the literal, using the following comparisons rules:\n\n- Numbers and strings are compared using the == operator.\n- The singleton literals None, True and False are compared using the\n is operator.\n\nCapture Patterns\n\nSyntax:\n\n capture_pattern: !\"_\" NAME\n\nThe single underscore (_) is not a capture pattern (this is what !\"_\"\nexpresses). It is treated as a wildcard pattern.\n\nA capture pattern always succeeds. It binds the subject value to the\nname using the scoping rules for name binding established for the walrus\noperator in PEP 572. (Summary: the name becomes a local variable in the\nclosest containing function scope unless there's an applicable nonlocal\nor global statement.)\n\nIn a given pattern, a given name may be bound only once. This disallows\nfor example case x, x: ... but allows case [x] | x: ....\n\nWildcard Pattern\n\nSyntax:\n\n wildcard_pattern: \"_\"\n\nA wildcard pattern always succeeds. It binds no name.\n\nValue Patterns\n\nSyntax:\n\n value_pattern: attr\n attr: name_or_attr '.' NAME\n name_or_attr: attr | NAME\n\nThe dotted name in the pattern is looked up using the standard Python\nname resolution rules. However, when the same value pattern occurs\nmultiple times in the same match statement, the interpreter may cache\nthe first value found and reuse it, rather than repeat the same lookup.\n(To clarify, this cache is strictly tied to a given execution of a given\nmatch statement.)\n\nThe pattern succeeds if the value found thus compares equal to the\nsubject value (using the == operator).\n\nGroup Patterns\n\nSyntax:\n\n group_pattern: '(' pattern ')'\n\n(For the syntax of pattern, see Patterns above. Note that it contains no\ncomma -- a parenthesized series of items with at least one comma is a\nsequence pattern, as is ().)\n\nA parenthesized pattern has no additional syntax. It allows users to add\nparentheses around patterns to emphasize the intended grouping.\n\nSequence Patterns\n\nSyntax:\n\n sequence_pattern:\n | '[' [maybe_sequence_pattern] ']'\n | '(' [open_sequence_pattern] ')'\n open_sequence_pattern: maybe_star_pattern ',' [maybe_sequence_pattern]\n maybe_sequence_pattern: ','.maybe_star_pattern+ ','?\n maybe_star_pattern: star_pattern | pattern\n star_pattern: '*' (capture_pattern | wildcard_pattern)\n\n(Note that a single parenthesized pattern without a trailing comma is a\ngroup pattern, not a sequence pattern. However a single pattern enclosed\nin [...] is still a sequence pattern.)\n\nThere is no semantic difference between a sequence pattern using [...],\na sequence pattern using (...), and an open sequence pattern.\n\nA sequence pattern may contain at most one star subpattern. The star\nsubpattern may occur in any position. 
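For example (an illustrative sketch, not part of the specification), a
star subpattern can capture an arbitrary-length middle portion of the
subject:

    match [1, 2, 3, 4, 5]:
        case [first, *middle, last]:
            # first == 1, middle == [2, 3, 4], last == 5
            print(first, middle, last)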
If no star subpattern is present,\nthe sequence pattern is a fixed-length sequence pattern; otherwise it is\na variable-length sequence pattern.\n\nFor a sequence pattern to succeed the subject must be a sequence, where\nbeing a sequence is defined as its class being one of the following:\n\n- a class that inherits from collections.abc.Sequence\n- a Python class that has been registered as a\n collections.abc.Sequence\n- a builtin class that has its Py_TPFLAGS_SEQUENCE bit set\n- a class that inherits from any of the above (including classes\n defined before a parent's Sequence registration)\n\nThe following standard library classes will have their\nPy_TPFLAGS_SEQUENCE bit set:\n\n- array.array\n- collections.deque\n- list\n- memoryview\n- range\n- tuple\n\nNote\n\nAlthough str, bytes, and bytearray are usually considered sequences,\nthey are not included in the above list and do not match sequence\npatterns.\n\nA fixed-length sequence pattern fails if the length of the subject\nsequence is not equal to the number of subpatterns.\n\nA variable-length sequence pattern fails if the length of the subject\nsequence is less than the number of non-star subpatterns.\n\nThe length of the subject sequence is obtained using the builtin len()\nfunction (i.e., via the __len__ protocol). However, the interpreter may\ncache this value in a similar manner as described for value patterns.\n\nA fixed-length sequence pattern matches the subpatterns to corresponding\nitems of the subject sequence, from left to right. Matching stops (with\na failure) as soon as a subpattern fails. If all subpatterns succeed in\nmatching their corresponding item, the sequence pattern succeeds.\n\nA variable-length sequence pattern first matches the leading non-star\nsubpatterns to the corresponding items of the subject sequence, as for a\nfixed-length sequence. If this succeeds, the star subpattern matches a\nlist formed of the remaining subject items, with items removed from the\nend corresponding to the non-star subpatterns following the star\nsubpattern. The remaining non-star subpatterns are then matched to the\ncorresponding subject items, as for a fixed-length sequence.\n\nMapping Patterns\n\nSyntax:\n\n mapping_pattern: '{' [items_pattern] '}'\n items_pattern: ','.key_value_pattern+ ','?\n key_value_pattern:\n | (literal_pattern | value_pattern) ':' pattern\n | double_star_pattern\n double_star_pattern: '**' capture_pattern\n\n(Note that **_ is disallowed by this syntax.)\n\nA mapping pattern may contain at most one double star pattern, and it\nmust be last.\n\nA mapping pattern may not contain duplicate key values. (If all key\npatterns are literal patterns this is considered a syntax error;\notherwise this is a runtime error and will raise ValueError.)\n\nFor a mapping pattern to succeed the subject must be a mapping, where\nbeing a mapping is defined as its class being one of the following:\n\n- a class that inherits from collections.abc.Mapping\n- a Python class that has been registered as a collections.abc.Mapping\n- a builtin class that has its Py_TPFLAGS_MAPPING bit set\n- a class that inherits from any of the above (including classes\n defined before a parent's Mapping registration)\n\nThe standard library classes dict and mappingproxy will have their\nPy_TPFLAGS_MAPPING bit set.\n\nA mapping pattern succeeds if every key given in the mapping pattern is\npresent in the subject mapping, and the pattern for each key matches the\ncorresponding item of the subject mapping. 
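As an illustrative sketch (not part of the specification) of a mapping
pattern, including the double-star form described just below:

    match {"action": "move", "x": 3, "y": 7, "speed": 1.5}:
        case {"action": "move", "x": x, "y": y, **rest}:
            # x == 3, y == 7, rest == {"speed": 1.5}
            print(x, y, rest)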
Keys are always compared with\nthe == operator. If a '**' NAME form is present, that name is bound to a\ndict containing remaining key-value pairs from the subject mapping.\n\nIf duplicate keys are detected in the mapping pattern, the pattern is\nconsidered invalid, and a ValueError is raised.\n\nKey-value pairs are matched using the two-argument form of the subject's\nget() method. As a consequence, matched key-value pairs must already be\npresent in the mapping, and not created on-the-fly by __missing__ or\n__getitem__. For example, collections.defaultdict instances will only be\nmatched by patterns with keys that were already present when the match\nstatement was entered.\n\nClass Patterns\n\nSyntax:\n\n class_pattern:\n | name_or_attr '(' [pattern_arguments ','?] ')'\n pattern_arguments:\n | positional_patterns [',' keyword_patterns]\n | keyword_patterns\n positional_patterns: ','.pattern+\n keyword_patterns: ','.keyword_pattern+\n keyword_pattern: NAME '=' pattern\n\nA class pattern may not repeat the same keyword multiple times.\n\nIf name_or_attr is not an instance of the builtin type, TypeError is\nraised.\n\nA class pattern fails if the subject is not an instance of name_or_attr.\nThis is tested using isinstance().\n\nIf no arguments are present, the pattern succeeds if the isinstance()\ncheck succeeds. Otherwise:\n\n- If only keyword patterns are present, they are processed as follows,\n one by one:\n - The keyword is looked up as an attribute on the subject.\n - If this raises an exception other than AttributeError, the\n exception bubbles up.\n - If this raises AttributeError the class pattern fails.\n - Otherwise, the subpattern associated with the keyword is\n matched against the attribute value. If this fails, the\n class pattern fails. If it succeeds, the match proceeds to\n the next keyword.\n - If all keyword patterns succeed, the class pattern as a whole\n succeeds.\n- If any positional patterns are present, they are converted to\n keyword patterns (see below) and treated as additional keyword\n patterns, preceding the syntactic keyword patterns (if any).\n\nPositional patterns are converted to keyword patterns using the\n__match_args__ attribute on the class designated by name_or_attr, as\nfollows:\n\n- For a number of built-in types (specified below), a single\n positional subpattern is accepted which will match the entire\n subject. 
(Keyword patterns work as for other types here.)\n- The equivalent of getattr(cls, \"__match_args__\", ())) is called.\n- If this raises an exception the exception bubbles up.\n- If the returned value is not a tuple, the conversion fails and\n TypeError is raised.\n- If there are more positional patterns than the length of\n __match_args__ (as obtained using len()), TypeError is raised.\n- Otherwise, positional pattern i is converted to a keyword pattern\n using __match_args__[i] as the keyword, provided it the latter is a\n string; if it is not, TypeError is raised.\n- For duplicate keywords, TypeError is raised.\n\nOnce the positional patterns have been converted to keyword patterns,\nthe match proceeds as if there were only keyword patterns.\n\nAs mentioned above, for the following built-in types the handling of\npositional subpatterns is different: bool, bytearray, bytes, dict,\nfloat, frozenset, int, list, set, str, and tuple.\n\nThis behavior is roughly equivalent to the following:\n\n class C:\n __match_args__ = (\"__match_self_prop__\",)\n @property\n def __match_self_prop__(self):\n return self\n\nSide Effects and Undefined Behavior\n\nThe only side-effect produced explicitly by the matching process is the\nbinding of names. However, the process relies on attribute access,\ninstance checks, len(), equality and item access on the subject and some\nof its components. It also evaluates value patterns and the class name\nof class patterns. While none of those typically create any\nside-effects, in theory they could. This proposal intentionally leaves\nout any specification of what methods are called or how many times. This\nbehavior is therefore undefined and user code should not rely on it.\n\nAnother undefined behavior is the binding of variables by capture\npatterns that are followed (in the same case block) by another pattern\nthat fails. These may happen earlier or later depending on the\nimplementation strategy, the only constraint being that capture\nvariables must be set before guards that use them explicitly are\nevaluated. If a guard consists of an and clause, evaluation of the\noperands may even be interspersed with pattern matching, as long as\nleft-to-right evaluation order is maintained.\n\nThe Standard Library\n\nTo facilitate the use of pattern matching, several changes will be made\nto the standard library:\n\n- Namedtuples and dataclasses will have auto-generated __match_args__.\n- For dataclasses the order of attributes in the generated\n __match_args__ will be the same as the order of corresponding\n arguments in the generated __init__() method. This includes the\n situations where attributes are inherited from a superclass. Fields\n with init=False are excluded from __match_args__.\n\nIn addition, a systematic effort will be put into going through existing\nstandard library classes and adding __match_args__ where it looks\nbeneficial.\n\nAppendix A -- Full Grammar\n\nHere is the full grammar for match_stmt. This is an additional\nalternative for compound_stmt. Remember that match and case are soft\nkeywords, i.e. 
they are not reserved words in other grammatical contexts\n(including at the start of a line if there is no colon where expected).\nBy convention, hard keywords use single quotes while soft keywords use\ndouble quotes.\n\nOther notation used beyond standard EBNF:\n\n- SEP.RULE+ is shorthand for RULE (SEP RULE)*\n- !RULE is a negative lookahead assertion\n\n match_stmt: \"match\" subject_expr ':' NEWLINE INDENT case_block+ DEDENT\n subject_expr:\n | star_named_expression ',' [star_named_expressions]\n | named_expression\n case_block: \"case\" patterns [guard] ':' block\n guard: 'if' named_expression\n\n patterns: open_sequence_pattern | pattern\n pattern: as_pattern | or_pattern\n as_pattern: or_pattern 'as' capture_pattern\n or_pattern: '|'.closed_pattern+\n closed_pattern:\n | literal_pattern\n | capture_pattern\n | wildcard_pattern\n | value_pattern\n | group_pattern\n | sequence_pattern\n | mapping_pattern\n | class_pattern\n\n literal_pattern:\n | signed_number !('+' | '-')\n | signed_number '+' NUMBER\n | signed_number '-' NUMBER\n | strings\n | 'None'\n | 'True'\n | 'False'\n signed_number: NUMBER | '-' NUMBER\n\n capture_pattern: !\"_\" NAME !('.' | '(' | '=')\n\n wildcard_pattern: \"_\"\n\n value_pattern: attr !('.' | '(' | '=')\n attr: name_or_attr '.' NAME\n name_or_attr: attr | NAME\n\n group_pattern: '(' pattern ')'\n\n sequence_pattern:\n | '[' [maybe_sequence_pattern] ']'\n | '(' [open_sequence_pattern] ')'\n open_sequence_pattern: maybe_star_pattern ',' [maybe_sequence_pattern]\n maybe_sequence_pattern: ','.maybe_star_pattern+ ','?\n maybe_star_pattern: star_pattern | pattern\n star_pattern: '*' (capture_pattern | wildcard_pattern)\n\n mapping_pattern: '{' [items_pattern] '}'\n items_pattern: ','.key_value_pattern+ ','?\n key_value_pattern:\n | (literal_pattern | value_pattern) ':' pattern\n | double_star_pattern\n double_star_pattern: '**' capture_pattern\n\n class_pattern:\n | name_or_attr '(' [pattern_arguments ','?] 
')'
 pattern_arguments:
 | positional_patterns [',' keyword_patterns]
 | keyword_patterns
 positional_patterns: ','.pattern+
 keyword_patterns: ','.keyword_pattern+
 keyword_pattern: NAME '=' pattern

Copyright

This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.

PEP: 535 Title: Rich comparison chaining Version: $Revision$
Last-Modified: $Date$ Author: Alyssa Coghlan <ncoghlan@gmail.com>
Status: Deferred Type: Standards Track Content-Type: text/x-rst
Requires: 532 Created: 12-Nov-2016 Python-Version: 3.8

PEP Deferral

Further consideration of this PEP has been deferred until Python 3.8 at
the earliest.

Abstract

Inspired by PEP 335, and building on the circuit breaking protocol
described in PEP 532, this PEP proposes a change to the definition of
chained comparisons, where the comparison chaining will be updated to
use the left-associative circuit breaking operator (else) rather than
the logical disjunction operator (and) if the left hand comparison
returns a circuit breaker as its result.

While there are some practical complexities arising from the current
handling of single-valued arrays in NumPy, this change should be
sufficient to allow elementwise chained comparison operations for
matrices, where the result is a matrix of boolean values, rather than
raising ValueError or tautologically returning True (indicating a
non-empty matrix).

Relationship with other PEPs

This PEP has been extracted from earlier iterations of PEP 532, as a
follow-on use case for the circuit breaking protocol, rather than an
essential part of its introduction.

The specific proposal in this PEP to handle the element-wise comparison
use case by changing the semantic definition of comparison chaining is
drawn directly from Guido's rejection of PEP 335.

Specification

A chained comparison like 0 < x < 10 written as:

    LEFT_BOUND LEFT_OP EXPR RIGHT_OP RIGHT_BOUND

is currently roughly semantically equivalent to:

    _expr = EXPR
    _lhs_result = LEFT_BOUND LEFT_OP _expr
    _expr_result = _lhs_result and (_expr RIGHT_OP RIGHT_BOUND)

Using the circuit breaking concepts introduced in PEP 532, this PEP
proposes that comparison chaining be changed to explicitly check if the
left comparison returns a circuit breaker, and if so, use else rather
than and to implement the comparison chaining:

    _expr = EXPR
    _lhs_result = LEFT_BOUND LEFT_OP _expr
    if hasattr(type(_lhs_result), "__else__"):
        _expr_result = _lhs_result else (_expr RIGHT_OP RIGHT_BOUND)
    else:
        _expr_result = _lhs_result and (_expr RIGHT_OP RIGHT_BOUND)

This allows types like NumPy arrays to control the behaviour of chained
comparisons by returning suitably defined circuit breakers from
comparison operations.

The expansion of this logic to an arbitrary number of chained comparison
operations would be the same as the existing expansion for and.

Rationale

In ultimately rejecting PEP 335, Guido van Rossum noted[1]:
The NumPy folks brought up a somewhat separate issue: for them, the\n most common use case is chained comparisons (e.g. A < B < C).\n\nTo understand this observation, we first need to look at how comparisons\nwork with NumPy arrays:\n\n >>> import numpy as np\n >>> increasing = np.arange(5)\n >>> increasing\n array([0, 1, 2, 3, 4])\n >>> decreasing = np.arange(4, -1, -1)\n >>> decreasing\n array([4, 3, 2, 1, 0])\n >>> increasing < decreasing\n array([ True, True, False, False, False], dtype=bool)\n\nHere we see that NumPy array comparisons are element-wise by default,\ncomparing each element in the left hand array to the corresponding\nelement in the right hand array, and producing a matrix of boolean\nresults.\n\nIf either side of the comparison is a scalar value, then it is broadcast\nacross the array and compared to each individual element:\n\n >>> 0 < increasing\n array([False, True, True, True, True], dtype=bool)\n >>> increasing < 4\n array([ True, True, True, True, False], dtype=bool)\n\nHowever, this broadcasting idiom breaks down if we attempt to use\nchained comparisons:\n\n >>> 0 < increasing < 4\n Traceback (most recent call last):\n File \"\", line 1, in \n ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\n\nThe problem is that internally, Python implicitly expands this chained\ncomparison into the form:\n\n >>> 0 < increasing and increasing < 4\n Traceback (most recent call last):\n File \"\", line 1, in \n ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\n\nAnd NumPy only permits implicit coercion to a boolean value for\nsingle-element arrays where a.any() and a.all() can be assured of having\nthe same result:\n\n >>> np.array([False]) and np.array([False])\n array([False], dtype=bool)\n >>> np.array([False]) and np.array([True])\n array([False], dtype=bool)\n >>> np.array([True]) and np.array([False])\n array([False], dtype=bool)\n >>> np.array([True]) and np.array([True])\n array([ True], dtype=bool)\n\nThe proposal in this PEP would allow this situation to be changed by\nupdating the definition of element-wise comparison operations in NumPy\nto return a dedicated subclass that implements the new circuit breaking\nprotocol and also changes the result array's interpretation in a boolean\ncontext to always return False and hence never trigger the\nshort-circuiting behaviour:\n\n class ComparisonResultArray(np.ndarray):\n def __bool__(self):\n # Element-wise comparison chaining never short-circuits\n return False\n def _raise_NotImplementedError(self):\n msg = (\"Comparison array truth values are ambiguous outside \"\n \"chained comparisons. 
Use a.any() or a.all()")
 raise NotImplementedError(msg)
 def __not__(self):
 self._raise_NotImplementedError()
 def __then__(self, result):
 self._raise_NotImplementedError()
 def __else__(self, result):
 return np.logical_and(self, result.view(ComparisonResultArray))

With this change, the chained comparison example above would be able to
return:

 >>> 0 < increasing < 4
 ComparisonResultArray([ False, True, True, True, False], dtype=bool)

Implementation

Actual implementation has been deferred pending in-principle interest in
the idea of making the changes proposed in PEP 532.

...TBD...

References

Copyright

This document has been placed in the public domain under the terms of
the CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/

[1] PEP 335 rejection notification
(https://mail.python.org/pipermail/python-dev/2012-March/117510.html)

PEP: 296 Title: Adding a bytes Object Type Author: Scott Gilbert
 Status: Withdrawn Type: Standards Track
Content-Type: text/x-rst Created: 12-Jul-2002 Python-Version: 2.3
Post-History:

Notice

This PEP is withdrawn by the author (in favor of PEP 358).

Abstract

This PEP proposes the creation of a new standard type and builtin
constructor called 'bytes'. The bytes object is an efficiently stored
array of bytes with some additional characteristics that set it apart
from several implementations that are similar.

Rationale

Python currently has many objects that implement something akin to the
bytes object of this proposal. For instance the standard string, buffer,
array, and mmap objects are all very similar in some regards to the
bytes object. Additionally, several significant third party extensions
have created similar objects to try and fill similar needs.
Frustratingly, each of these objects is too narrow in scope and is
missing critical features to make it applicable to a wider category of
problems.

Specification

The bytes object has the following important characteristics:

1. Efficient underlying array storage via the standard C type "unsigned
 char". This allows fine grain control over how much memory is
 allocated. With the alignment restrictions designated in the next
 item, it is trivial for low level extensions to cast the pointer to
 a different type as needed.

 Also, since the object is implemented as an array of bytes, it is
 possible to pass the bytes object to the extensive library of
 routines already in the standard library that presently work with
 strings. For instance, the bytes object in conjunction with the
 struct module could be used to provide a complete replacement for
 the array module using only Python script.

 If an unusual platform comes to light, one where there isn't a
 native unsigned 8 bit type, the object will do its best to represent
 itself at the Python script level as though it were an array of 8
 bit unsigned values. It is doubtful whether many extensions would
 handle this correctly, but Python script could be portable in these
 cases.
It is doubtful whether many extensions would\n handle this correctly, but Python script could be portable in these\n cases.\n\n2. Alignment of the allocated byte array is whatever is promised by the\n platform implementation of malloc. A bytes object created from an\n extension can be supplied that provides any arbitrary alignment as\n the extension author sees fit.\n\n This alignment restriction should allow the bytes object to be used\n as storage for all standard C types - including PyComplex objects or\n other structs of standard C type types. Further alignment\n restrictions can be provided by extensions as necessary.\n\n3. The bytes object implements a subset of the sequence operations\n provided by string/array objects, but with slightly different\n semantics in some cases. In particular, a slice always returns a new\n bytes object, but the underlying memory is shared between the two\n objects. This type of slice behavior has been called creating a\n \"view\". Additionally, repetition and concatenation are undefined for\n bytes objects and will raise an exception.\n\n As these objects are likely to find use in high performance\n applications, one motivation for the decision to use view slicing is\n that copying between bytes objects should be very efficient and not\n require the creation of temporary objects. The following code\n illustrates this:\n\n # create two 10 Meg bytes objects\n b1 = bytes(10000000)\n b2 = bytes(10000000)\n\n # copy from part of one to another with out creating a 1 Meg temporary\n b1[2000000:3000000] = b2[4000000:5000000]\n\n Slice assignment where the rvalue is not the same length as the\n lvalue will raise an exception. However, slice assignment will work\n correctly with overlapping slices (typically implemented with\n memmove).\n\n4. The bytes object will be recognized as a native type by the pickle\n and cPickle modules for efficient serialization. (In truth, this is\n the only requirement that can't be implemented via a third party\n extension.)\n\n Partial solutions to address the need to serialize the data stored\n in a bytes-like object without creating a temporary copy of the data\n into a string have been implemented in the past. The tofile and\n fromfile methods of the array object are good examples of this. The\n bytes object will support these methods too. However, pickling is\n useful in other situations - such as in the shelve module, or\n implementing RPC of Python objects, and requiring the end user to\n use two different serialization mechanisms to get an efficient\n transfer of data is undesirable.\n\n XXX: Will try to implement pickling of the new bytes object in such\n a way that previous versions of Python will unpickle it as a string\n object.\n\n When unpickling, the bytes object will be created from memory\n allocated from Python (via malloc). As such, it will lose any\n additional properties that an extension supplied pointer might have\n provided (special alignment, or special types of memory).\n\n XXX: Will try to make it so that C subclasses of bytes type can\n supply the memory that will be unpickled into. For instance, a\n derived class called PageAlignedBytes would unpickle to memory that\n is also page aligned.\n\n On any platform where an int is 32 bits (most of them), it is\n currently impossible to create a string with a length larger than\n can be represented in 31 bits. 
As such, pickling to a string will\n raise an exception when the operation is not possible.\n\n At least on platforms supporting large files (many of them),\n pickling large bytes objects to files should be possible via\n repeated calls to the file.write() method.\n\n5. The bytes type supports the PyBufferProcs interface, but a bytes\n object provides the additional guarantee that the pointer will not\n be deallocated or reallocated as long as a reference to the bytes\n object is held. This implies that a bytes object is not resizable\n once it is created, but allows the global interpreter lock (GIL) to\n be released while a separate thread manipulates the memory pointed\n to if the PyBytes_Check(...) test passes.\n\n This characteristic of the bytes object allows it to be used in\n situations such as asynchronous file I/O or on multiprocessor\n machines where the pointer obtained by PyBufferProcs will be used\n independently of the global interpreter lock.\n\n Knowing that the pointer can not be reallocated or freed after the\n GIL is released gives extension authors the capability to get true\n concurrency and make use of additional processors for long running\n computations on the pointer.\n\n6. In C/C++ extensions, the bytes object can be created from a supplied\n pointer and destructor function to free the memory when the\n reference count goes to zero.\n\n The special implementation of slicing for the bytes object allows\n multiple bytes objects to refer to the same pointer/destructor. As\n such, a refcount will be kept on the actual pointer/destructor. This\n refcount is separate from the refcount typically associated with\n Python objects.\n\n XXX: It may be desirable to expose the inner refcounted object as an\n actual Python object. If a good use case arises, it should be\n possible for this to be implemented later with no loss to backwards\n compatibility.\n\n7. It is also possible to signify the bytes object as readonly, in this\n case it isn't actually mutable, but does provide the other features\n of a bytes object.\n\n8. The bytes object keeps track of the length of its data with a Python\n LONG_LONG type. Even though the current definition for PyBufferProcs\n restricts the length to be the size of an int, this PEP does not\n propose to make any changes there. Instead, extensions can work\n around this limit by making an explicit PyBytes_Check(...) call, and\n if that succeeds they can make a PyBytes_GetReadBuffer(...) or\n PyBytes_GetWriteBuffer call to get the pointer and full length of\n the object as a LONG_LONG.\n\n The bytes object will raise an exception if the standard\n PyBufferProcs mechanism is used and the size of the bytes object is\n greater than can be represented by an integer.\n\n From Python scripting, the bytes object will be subscriptable with\n longs so the 32 bit int limit can be avoided.\n\n There is still a problem with the len() function as it is\n PyObject_Size() and this returns an int as well. As a workaround,\n the bytes object will provide a .length() method that will return a\n long.\n\n9. The bytes object can be constructed at the Python scripting level by\n passing an int/long to the bytes constructor with the number of\n bytes to allocate. For example:\n\n b = bytes(100000) # alloc 100K bytes\n\n The constructor can also take another bytes object. This will be\n useful for the implementation of unpickling, and in converting a\n read-write bytes object into a read-only one. 
An optional second\n argument will be used to designate creation of a readonly bytes\n object.\n\n10. From the C API, the bytes object can be allocated using any of the\n following signatures:\n\n PyObject* PyBytes_FromLength(LONG_LONG len, int readonly);\n PyObject* PyBytes_FromPointer(void* ptr, LONG_LONG len, int readonly\n void (*dest)(void *ptr, void *user), void* user);\n\n In the PyBytes_FromPointer(...) function, if the dest function\n pointer is passed in as NULL, it will not be called. This should\n only be used for creating bytes objects from statically allocated\n space.\n\n The user pointer has been called a closure in other places. It is a\n pointer that the user can use for whatever purposes. It will be\n passed to the destructor function on cleanup and can be useful for a\n number of things. If the user pointer is not needed, NULL should be\n passed instead.\n\n11. The bytes type will be a new style class as that seems to be where\n all standard Python types are headed.\n\nContrast to existing types\n\nThe most common way to work around the lack of a bytes object has been\nto simply use a string object in its place. Binary files, the\nstruct/array modules, and several other examples exist of this. Putting\naside the style issue that these uses typically have nothing to do with\ntext strings, there is the real problem that strings are not mutable, so\ndirect manipulation of the data returned in these cases is not possible.\nAlso, numerous optimizations in the string module (such as caching the\nhash value or interning the pointers) mean that extension authors are on\nvery thin ice if they try to break the rules with the string object.\n\nThe buffer object seems like it was intended to address the purpose that\nthe bytes object is trying fulfill, but several shortcomings in its\nimplementation[1] have made it less useful in many common cases. The\nbuffer object made a different choice for its slicing behavior (it\nreturns new strings instead of buffers for slicing and other\noperations), and it doesn't make many of the promises on alignment or\nbeing able to release the GIL that the bytes object does.\n\nAlso in regards to the buffer object, it is not possible to simply\nreplace the buffer object with the bytes object and maintain backwards\ncompatibility. The buffer object provides a mechanism to take the\nPyBufferProcs supplied pointer of another object and present it as its\nown. Since the behavior of the other object can not be guaranteed to\nfollow the same set of strict rules that a bytes object does, it can't\nbe used in places that a bytes object could.\n\nThe array module supports the creation of an array of bytes, but it does\nnot provide a C API for supplying pointers and destructors to extension\nsupplied memory. This makes it unusable for constructing objects out of\nshared memory, or memory that has special alignment or locking for\nthings like DMA transfers. Also, the array object does not currently\npickle. Finally since the array object allows its contents to grow, via\nthe extend method, the pointer can be changed if the GIL is not held\nwhile using it.\n\nCreating a buffer object from an array object has the same problem of\nleaving an invalid pointer when the array object is resized.\n\nThe mmap object caters to its particular niche, but does not attempt to\nsolve a wider class of problems.\n\nFinally, any third party extension can not implement pickling without\ncreating a temporary object of a standard Python type. 
For example, in\nthe Numeric community, it is unpleasant that a large array can't pickle\nwithout creating a large binary string to duplicate the array data.\n\nBackward Compatibility\n\nThe only possibility for backwards compatibility problems that the\nauthor is aware of are in previous versions of Python that try to\nunpickle data containing the new bytes type.\n\nReference Implementation\n\nXXX: Actual implementation is in progress, but changes are still\npossible as this PEP gets further review.\n\nThe following new files will be added to the Python baseline:\n\n Include/bytesobject.h # C interface\n Objects/bytesobject.c # C implementation\n Lib/test/test_bytes.py # unit testing\n Doc/lib/libbytes.tex # documentation\n\nThe following files will also be modified:\n\n Include/Python.h # adding bytesmodule.h include file\n Python/bltinmodule.c # adding the bytes type object\n Modules/cPickle.c # adding bytes to the standard types\n Lib/pickle.py # adding bytes to the standard types\n\nIt is possible that several other modules could be cleaned up and\nimplemented in terms of the bytes object. The mmap module comes to mind\nfirst, but as noted above it would be possible to reimplement the array\nmodule as a pure Python module. While it is attractive that this PEP\ncould actually reduce the amount of source code by some amount, the\nauthor feels that this could cause unnecessary risk for breaking\nexisting applications and should be avoided at this time.\n\nAdditional Notes/Comments\n\n- Guido van Rossum wondered whether it would make sense to be able to\n create a bytes object from a mmap object. The mmap object appears to\n support the requirements necessary to provide memory for a bytes\n object. (It doesn't resize, and the pointer is valid for the\n lifetime of the object.) As such, a method could be added to the\n mmap module such that a bytes object could be created directly from\n a mmap object. An initial stab at how this would be implemented\n would be to use the PyBytes_FromPointer() function described above\n and pass the mmap_object as the user pointer. The destructor\n function would decref the mmap_object for cleanup.\n\n- Todd Miller notes that it may be useful to have two new functions:\n PyObject_AsLargeReadBuffer() and PyObject_AsLargeWriteBuffer that\n are similar to PyObject_AsReadBuffer() and PyObject_AsWriteBuffer(),\n but support getting a LONG_LONG length in addition to the void*\n pointer. These functions would allow extension authors to work\n transparently with bytes object (that support LONG_LONG lengths) and\n most other buffer like objects (which only support int lengths).\n These functions could be in lieu of, or in addition to, creating a\n specific PyByte_GetReadBuffer() and PyBytes_GetWriteBuffer()\n functions.\n\n XXX: The author thinks this is very a good idea as it paves the way\n for other objects to eventually support large (64 bit) pointers, and\n it should only affect abstract.c and abstract.h. Should this be\n added above?\n\n- It was generally agreed that abusing the segment count of the\n PyBufferProcs interface is not a good hack to work around the 31 bit\n limitation of the length. If you don't know what this means, then\n you're in good company. 
Most code in the Python baseline, and\n presumably in many third party extensions, punt when the segment\n count is not 1.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] The buffer interface\nhttps://mail.python.org/pipermail/python-dev/2000-October/009974.html"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.583207"},"created":{"kind":"timestamp","value":"2002-07-12T00:00:00","string":"2002-07-12T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0296/\",\n \"authors\": [\n \"Scott Gilbert\"\n ],\n \"pep_number\": \"0296\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":51,"cells":{"id":{"kind":"string","value":"3135"},"text":{"kind":"string","value":"PEP: 3135 Title: New Super Author: Calvin Spealman\n, Tim Delaney , Lie\nRyan Status: Final Type: Standards Track\nContent-Type: text/x-rst Created: 28-Apr-2007 Python-Version: 3.0\nPost-History: 28-Apr-2007, 29-Apr-2007, 29-Apr-2007, 14-May-2007,\n12-Mar-2009\n\nNumbering Note\n\nThis PEP started its life as PEP 367. Since it is now targeted for\nPython 3000, it has been moved into the 3xxx space.\n\nAbstract\n\nThis PEP proposes syntactic sugar for use of the super type to\nautomatically construct instances of the super type binding to the class\nthat a method was defined in, and the instance (or class object for\nclassmethods) that the method is currently acting upon.\n\nThe premise of the new super usage suggested is as follows:\n\n super().foo(1, 2)\n\nto replace the old:\n\n super(Foo, self).foo(1, 2)\n\nRationale\n\nThe current usage of super requires an explicit passing of both the\nclass and instance it must operate from, requiring a breaking of the DRY\n(Don't Repeat Yourself) rule. This hinders any change in class name, and\nis often considered a wart by many.\n\nSpecification\n\nWithin the specification section, some special terminology will be used\nto distinguish similar and closely related concepts. \"super class\" will\nrefer to the actual builtin class named \"super\". A \"super instance\" is\nsimply an instance of the super class, which is associated with another\nclass and possibly with an instance of that class.\n\nThe new super semantics are only available in Python 3.0.\n\nReplacing the old usage of super, calls to the next class in the MRO\n(method resolution order) can be made without explicitly passing the\nclass object (although doing so will still be supported). Every function\nwill have a cell named __class__ that contains the class object that the\nfunction is defined in.\n\nThe new syntax:\n\n super()\n\nis equivalent to:\n\n super(__class__, )\n\nwhere __class__ is the class that the method was defined in, and\n is the first parameter of the method (normally self for\ninstance methods, and cls for class methods). For functions defined\noutside a class body, __class__ is not defined, and will result in\nruntime SystemError.\n\nWhile super is not a reserved word, the parser recognizes the use of\nsuper in a method definition and only passes in the __class__ cell when\nthis is found. Thus, calling a global alias of super without arguments\nwill not necessarily work.\n\nClosed Issues\n\nDetermining the class object to use\n\nThe class object is taken from a cell named __class__.\n\nShould super actually become a keyword?\n\nNo. 
It is not necessary for super to become a keyword.\n\nsuper used with __call__ attributes\n\nIt was considered that it might be a problem that instantiating super\ninstances the classic way, because calling it would lookup the __call__\nattribute and thus try to perform an automatic super lookup to the next\nclass in the MRO. However, this was found to be false, because calling\nan object only looks up the __call__ method directly on the object's\ntype. The following example shows this in action.\n\n class A(object):\n def __call__(self):\n return '__call__'\n def __getattribute__(self, attr):\n if attr == '__call__':\n return lambda: '__getattribute__'\n a = A()\n assert a() == '__call__'\n assert a.__call__() == '__getattribute__'\n\nIn any case, this issue goes away entirely because classic calls to\nsuper(, ) are still supported with the same meaning.\n\nAlternative Proposals\n\nNo Changes\n\nAlthough its always attractive to just keep things how they are, people\nhave sought a change in the usage of super calling for some time, and\nfor good reason, all mentioned previously.\n\n- Decoupling from the class name (which might not even be bound to the\n right class anymore!)\n- Simpler looking, cleaner super calls would be better\n\nDynamic attribute on super type\n\nThe proposal adds a dynamic attribute lookup to the super type, which\nwill automatically determine the proper class and instance parameters.\nEach super attribute lookup identifies these parameters and performs the\nsuper lookup on the instance, as the current super implementation does\nwith the explicit invocation of a super instance upon a class and\ninstance.\n\nThis proposal relies on sys._getframe(), which is not appropriate for\nanything except a prototype implementation.\n\nself.__super__.foo(*args)\n\nThe __super__ attribute is mentioned in this PEP in several places, and\ncould be a candidate for the complete solution, actually using it\nexplicitly instead of any super usage directly. However,\ndouble-underscore names are usually an internal detail, and attempted to\nbe kept out of everyday code.\n\nsuper(self, *args) or __super__(self, *args)\n\nThis solution only solves the problem of the type indication, does not\nhandle differently named super methods, and is explicit about the name\nof the instance. It is less flexible without being able to enacted on\nother method names, in cases where that is needed. One use case this\nfails is where a base-class has a factory classmethod and a subclass has\ntwo factory classmethods,both of which needing to properly make super\ncalls to the one in the base-class.\n\nsuper.foo(self, *args)\n\nThis variation actually eliminates the problems with locating the proper\ninstance, and if any of the alternatives were pushed into the spotlight,\nI would want it to be this one.\n\nsuper(*p, **kw)\n\nThere has been the proposal that directly calling super(*p, **kw) would\nbe equivalent to calling the method on the super object with the same\nname as the method currently being executed i.e. the following two\nmethods would be equivalent:\n\n def f(self, *p, **kw):\n super.f(*p, **kw)\n\n def f(self, *p, **kw):\n super(*p, **kw)\n\nThere is strong sentiment for and against this, but implementation and\nstyle concerns are obvious. 
Guido has suggested that this should be\nexcluded from this PEP on the principle of KISS (Keep It Simple Stupid).\n\nHistory\n\n29-Apr-2007\n\n - Changed title from \"Super As A Keyword\" to \"New Super\"\n - Updated much of the language and added a terminology section for\n clarification in confusing places.\n - Added reference implementation and history sections.\n\n06-May-2007\n\n - Updated by Tim Delaney to reflect discussions on the python-3000\n and python-dev mailing lists.\n\n12-Mar-2009\n\n - Updated to reflect the current state of implementation.\n\nReferences\n\n[1] Fixing super anyone?\n(https://mail.python.org/pipermail/python-3000/2007-April/006667.html)\n\n[2] PEP 3130: Access to Module/Class/Function Currently Being Defined\n(this)\n(https://mail.python.org/pipermail/python-ideas/2007-April/000542.html)\n\nCopyright\n\nThis document has been placed in the public domain."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.601410"},"created":{"kind":"timestamp","value":"2007-04-28T00:00:00","string":"2007-04-28T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-3135/\",\n \"authors\": [\n \"Calvin Spealman\"\n ],\n \"pep_number\": \"3135\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":52,"cells":{"id":{"kind":"string","value":"0475"},"text":{"kind":"string","value":"PEP: 475 Title: Retry system calls failing with EINTR Version:\n$Revision$ Last-Modified: $Date$ Author: Charles-François Natali\n, Victor Stinner \nBDFL-Delegate: Antoine Pitrou Status: Final Type:\nStandards Track Content-Type: text/x-rst Created: 29-Jul-2014\nPython-Version: 3.5 Resolution:\nhttps://mail.python.org/pipermail/python-dev/2015-February/138018.html\n\nAbstract\n\nSystem call wrappers provided in the standard library should be retried\nautomatically when they fail with EINTR, to relieve application code\nfrom the burden of doing so.\n\nBy system calls, we mean the functions exposed by the standard C library\npertaining to I/O or handling of other system resources.\n\nRationale\n\nInterrupted system calls\n\nOn POSIX systems, signals are common. Code calling system calls must be\nprepared to handle them. Examples of signals:\n\n- The most common signal is SIGINT, the signal sent when CTRL+c is\n pressed. By default, Python raises a KeyboardInterrupt exception\n when this signal is received.\n- When running subprocesses, the SIGCHLD signal is sent when a child\n process exits.\n- Resizing the terminal sends the SIGWINCH signal to the applications\n running in the terminal.\n- Putting the application in background (ex: press CTRL-z and then\n type the bg command) sends the SIGCONT signal.\n\nWriting a C signal handler is difficult: only \"async-signal-safe\"\nfunctions can be called (for example, printf() and malloc() are not\nasync-signal safe), and there are issues with reentrancy. Therefore,\nwhen a signal is received by a process during the execution of a system\ncall, the system call can fail with the EINTR error to give the program\nan opportunity to handle the signal without the restriction on\nsignal-safe functions.\n\nThis behaviour is system-dependent: on certain systems, using the\nSA_RESTART flag, some system calls are retried automatically instead of\nfailing with EINTR. 
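A minimal way to reproduce the situation described above, assuming a POSIX
platform: install a Python handler for SIGALRM and let the alarm fire while
a blocking call is in progress.

    import os, signal

    signal.signal(signal.SIGALRM, lambda signum, frame: None)
    signal.alarm(1)
    r, w = os.pipe()
    # Nothing is ever written to the pipe, so this read blocks; without
    # automatic retry it can fail with InterruptedError when the alarm fires.
    os.read(r, 1)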
Regardless, Python's signal.signal() function clears\nthe SA_RESTART flag when setting the signal handler: all system calls\nwill probably fail with EINTR in Python.\n\nSince receiving a signal is a non-exceptional occurrence, robust POSIX\ncode must be prepared to handle EINTR (which, in most cases, means retry\nin a loop in the hope that the call eventually succeeds). Without\nspecial support from Python, this can make application code much more\nverbose than it needs to be.\n\nStatus in Python 3.4\n\nIn Python 3.4, handling the InterruptedError exception (EINTR's\ndedicated exception class) is duplicated at every call site on a\ncase-by-case basis. Only a few Python modules actually handle this\nexception, and fixes usually took several years to cover a whole module.\nExample of code retrying file.read() on InterruptedError:\n\n while True:\n try:\n data = file.read(size)\n break\n except InterruptedError:\n continue\n\nList of Python modules in the standard library which handle\nInterruptedError:\n\n- asyncio\n- asyncore\n- io, _pyio\n- multiprocessing\n- selectors\n- socket\n- socketserver\n- subprocess\n\nOther programming languages like Perl, Java and Go retry system calls\nfailing with EINTR at a lower level, so that libraries and applications\nneedn't bother.\n\nUse Case 1: Don't Bother With Signals\n\nIn most cases, you don't want to be interrupted by signals and you don't\nexpect to get InterruptedError exceptions. For example, do you really\nwant to write such complex code for a \"Hello World\" example?\n\n while True:\n try:\n print(\"Hello World\")\n break\n except InterruptedError:\n continue\n\nInterruptedError can happen in unexpected places. For example,\nos.close() and FileIO.close() may raise InterruptedError: see the\narticle close() and EINTR.\n\nThe Python issues related to EINTR section below gives examples of bugs\ncaused by EINTR.\n\nThe expectation in this use case is that Python hides the\nInterruptedError and retries system calls automatically.\n\nUse Case 2: Be notified of signals as soon as possible\n\nSometimes yet, you expect some signals and you want to handle them as\nsoon as possible. For example, you may want to immediately quit a\nprogram using the CTRL+c keyboard shortcut.\n\nBesides, some signals are not interesting and should not disrupt the\napplication. There are two options to interrupt an application on only\nsome signals:\n\n- Set up a custom signal handler which raises an exception, such as\n KeyboardInterrupt for SIGINT.\n- Use a I/O multiplexing function like select() together with Python's\n signal wakeup file descriptor: see the function\n signal.set_wakeup_fd().\n\nThe expectation in this use case is for the Python signal handler to be\nexecuted timely, and the system call to fail if the handler raised an\nexception -- otherwise restart.\n\nProposal\n\nThis PEP proposes to handle EINTR and retries at the lowest level, i.e.\nin the wrappers provided by the stdlib (as opposed to higher-level\nlibraries and applications).\n\nSpecifically, when a system call fails with EINTR, its Python wrapper\nmust call the given signal handler (using PyErr_CheckSignals()). If the\nsignal handler raises an exception, the Python wrapper bails out and\nfails with the exception.\n\nIf the signal handler returns successfully, the Python wrapper retries\nthe system call automatically. 
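Expressed as a rough Python-level sketch of the control flow -- the real
change is made in the C wrappers, and check_signals() below merely stands
in for the C-level PyErr_CheckSignals() call:

    def call_with_retry(syscall):
        while True:
            try:
                return syscall()
            except InterruptedError:
                # Run any pending Python signal handlers; if a handler
                # raises (e.g. KeyboardInterrupt), that exception propagates
                # and the call is not retried.
                check_signals()
                # Otherwise, loop and retry the system call.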
If the system call involves a timeout\nparameter, the timeout is recomputed.\n\nModified functions\n\nExample of standard library functions that need to be modified to comply\nwith this PEP:\n\n- open(), os.open(), io.open()\n- functions of the faulthandler module\n- os functions:\n - os.fchdir()\n - os.fchmod()\n - os.fchown()\n - os.fdatasync()\n - os.fstat()\n - os.fstatvfs()\n - os.fsync()\n - os.ftruncate()\n - os.mkfifo()\n - os.mknod()\n - os.posix_fadvise()\n - os.posix_fallocate()\n - os.pread()\n - os.pwrite()\n - os.read()\n - os.readv()\n - os.sendfile()\n - os.wait3()\n - os.wait4()\n - os.wait()\n - os.waitid()\n - os.waitpid()\n - os.write()\n - os.writev()\n - special cases: os.close() and os.dup2() now ignore EINTR error,\n the syscall is not retried\n- select.select(), select.poll.poll(), select.epoll.poll(),\n select.kqueue.control(), select.devpoll.poll()\n- socket.socket() methods:\n - accept()\n - connect() (except for non-blocking sockets)\n - recv()\n - recvfrom()\n - recvmsg()\n - send()\n - sendall()\n - sendmsg()\n - sendto()\n- signal.sigtimedwait(), signal.sigwaitinfo()\n- time.sleep()\n\n(Note: the selector module already retries on InterruptedError, but it\ndoesn't recompute the timeout yet)\n\nos.close, close() methods and os.dup2() are a special case: they will\nignore EINTR instead of retrying. The reason is complex but involves\nbehaviour under Linux and the fact that the file descriptor may really\nbe closed even if EINTR is returned. See articles:\n\n- Returning EINTR from close()\n- (LKML) Re: [patch 7/7] uml: retry host close() on EINTR\n- close() and EINTR\n\nThe socket.socket.connect() method does not retry connect() for\nnon-blocking sockets if it is interrupted by a signal (fails with\nEINTR). The connection runs asynchronously in background. The caller is\nresponsible to wait until the socket becomes writable (ex: using\nselect.select()) and then call\nsocket.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) to check if\nthe connection succeeded (getsockopt() returns 0) or failed.\n\nInterruptedError handling\n\nSince interrupted system calls are automatically retried, the\nInterruptedError exception should not occur anymore when calling those\nsystem calls. Therefore, manual handling of InterruptedError as\ndescribed in Status in Python 3.4 can be removed, which will simplify\nstandard library code.\n\nBackward compatibility\n\nApplications relying on the fact that system calls are interrupted with\nInterruptedError will hang. The authors of this PEP don't think that\nsuch applications exist, since they would be exposed to other issues\nsuch as race conditions (there is an opportunity for deadlock if the\nsignal comes before the system call). Besides, such code would be\nnon-portable.\n\nIn any case, those applications must be fixed to handle signals\ndifferently, to have a reliable behaviour on all platforms and all\nPython versions. A possible strategy is to set up a signal handler\nraising a well-defined exception, or use a wakeup file descriptor.\n\nFor applications using event loops, signal.set_wakeup_fd() is the\nrecommended option to handle signals. Python's low-level signal handler\nwill write signal numbers into the file descriptor and the event loop\nwill be awaken to read them. 
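A sketch of the recommended wakeup file descriptor pattern; the exact
integration depends on the event loop being used:

    import signal, socket

    rsock, wsock = socket.socketpair()
    wsock.setblocking(False)
    signal.set_wakeup_fd(wsock.fileno())
    # The event loop watches rsock; each received signal appears there as
    # one byte containing the signal number, to be read and dispatched by
    # ordinary loop code rather than by a C signal handler.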
The event loop can handle those signals\nwithout the restriction of signal handlers (for example, the loop can be\nwoken up in any thread, not just the main thread).\n\nAppendix\n\nWakeup file descriptor\n\nSince Python 3.3, signal.set_wakeup_fd() writes the signal number into\nthe file descriptor, whereas it only wrote a null byte before. It\nbecomes possible to distinguish between signals using the wakeup file\ndescriptor.\n\nLinux has a signalfd() system call which provides more information on\neach signal. For example, it's possible to know the pid and uid who sent\nthe signal. This function is not exposed in Python yet (see issue\n12304).\n\nOn Unix, the asyncio module uses the wakeup file descriptor to wake up\nits event loop.\n\nMultithreading\n\nA C signal handler can be called from any thread, but Python signal\nhandlers will always be called in the main Python thread.\n\nPython's C API provides the PyErr_SetInterrupt() function which calls\nthe SIGINT signal handler in order to interrupt the main Python thread.\n\nSignals on Windows\n\nControl events\n\nWindows uses \"control events\":\n\n- CTRL_BREAK_EVENT: Break (SIGBREAK)\n- CTRL_CLOSE_EVENT: Close event\n- CTRL_C_EVENT: CTRL+C (SIGINT)\n- CTRL_LOGOFF_EVENT: Logoff\n- CTRL_SHUTDOWN_EVENT: Shutdown\n\nThe SetConsoleCtrlHandler() function can be used to install a control\nhandler.\n\nThe CTRL_C_EVENT and CTRL_BREAK_EVENT events can be sent to a process\nusing the GenerateConsoleCtrlEvent() function. This function is exposed\nin Python as os.kill().\n\nSignals\n\nThe following signals are supported on Windows:\n\n- SIGABRT\n- SIGBREAK (CTRL_BREAK_EVENT): signal only available on Windows\n- SIGFPE\n- SIGILL\n- SIGINT (CTRL_C_EVENT)\n- SIGSEGV\n- SIGTERM\n\nSIGINT\n\nThe default Python signal handler for SIGINT sets a Windows event\nobject: sigint_event.\n\ntime.sleep() is implemented with WaitForSingleObjectEx(), it waits for\nthe sigint_event object using time.sleep() parameter as the timeout. 
So\nthe sleep can be interrupted by SIGINT.\n\n_winapi.WaitForMultipleObjects() automatically adds sigint_event to the\nlist of watched handles, so it can also be interrupted.\n\nPyOS_StdioReadline() also used sigint_event when fgets() failed to check\nif Ctrl-C or Ctrl-Z was pressed.\n\nLinks\n\nMisc\n\n- glibc manual: Primitives Interrupted by Signals\n- Bug #119097 for perl5: print returning EINTR in 5.14.\n\nPython issues related to EINTR\n\nThe main issue is: handle EINTR in the stdlib.\n\nOpen issues:\n\n- Add a new signal.set_wakeup_socket() function\n- signal.set_wakeup_fd(fd): set the fd to non-blocking mode\n- Use a monotonic clock to compute timeouts\n- sys.stdout.write on OS X is not EINTR safe\n- platform.uname() not EINTR safe\n- asyncore does not handle EINTR in recv, send, connect, accept,\n- socket.create_connection() doesn't handle EINTR properly\n\nClosed issues:\n\n- Interrupted system calls are not retried\n- Solaris: EINTR exception in select/socket calls in telnetlib\n- subprocess: Popen.communicate() doesn't handle EINTR in some cases\n- multiprocessing.util._eintr_retry doesn't recalculate timeouts\n- file readline, readlines & readall methods can lose data on EINTR\n- multiprocessing BaseManager serve_client() does not check EINTR on\n recv\n- selectors behaviour on EINTR undocumented\n- asyncio: limit EINTR occurrences with SA_RESTART\n- smtplib.py socket.create_connection() also doesn't handle EINTR\n properly\n- Faulty RESTART/EINTR handling in Parser/myreadline.c\n- test_httpservers intermittent failure, test_post and EINTR\n- os.spawnv(P_WAIT, ...) on Linux doesn't handle EINTR\n- asyncore fails when EINTR happens in pol\n- file.write and file.read don't handle EINTR\n- socket.readline() interface doesn't handle EINTR properly\n- subprocess is not EINTR-safe\n- SocketServer doesn't handle syscall interruption\n- subprocess deadlock when read() is interrupted\n- time.sleep(1): call PyErr_CheckSignals() if the sleep was\n interrupted\n- siginterrupt with flag=False is reset when signal received\n- need siginterrupt() on Linux - impossible to do timeouts\n- [Windows] Can not interrupt time.sleep()\n\nPython issues related to signals\n\nOpen issues:\n\n- signal.default_int_handler should set signal number on the raised\n exception\n- expose signalfd(2) in the signal module\n- missing return in win32_kill?\n- Interrupts are lost during readline PyOS_InputHook processing\n- cannot catch KeyboardInterrupt when using curses getkey()\n- Deferred KeyboardInterrupt in interactive mode\n\nClosed issues:\n\n- sys.interrupt_main()\n\nImplementation\n\nThe implementation is tracked in issue 23285. 
It was committed on\nFebruary 07, 2015.\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.633361"},"created":{"kind":"timestamp","value":"2014-07-29T00:00:00","string":"2014-07-29T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0475/\",\n \"authors\": [\n \"Charles-François Natali\",\n \"Victor Stinner\"\n ],\n \"pep_number\": \"0475\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":53,"cells":{"id":{"kind":"string","value":"0209"},"text":{"kind":"string","value":"PEP: 209 Title: Multi-dimensional Arrays Author: Paul Barrett\n, Travis Oliphant Status:\nWithdrawn Type: Standards Track Created: 03-Jan-2001 Python-Version: 2.2\nPost-History:\n\nAbstract\n\nThis PEP proposes a redesign and re-implementation of the\nmulti-dimensional array module, Numeric, to make it easier to add new\nfeatures and functionality to the module. Aspects of Numeric 2 that will\nreceive special attention are efficient access to arrays exceeding a\ngigabyte in size and composed of inhomogeneous data structures or\nrecords. The proposed design uses four Python classes: ArrayType, UFunc,\nArray, and ArrayView; and a low-level C-extension module, _ufunc, to\nhandle the array operations efficiently. In addition, each array type\nhas its own C-extension module which defines the coercion rules,\noperations, and methods for that type. This design enables new types,\nfeatures, and functionality to be added in a modular fashion. The new\nversion will introduce some incompatibilities with the current Numeric.\n\nMotivation\n\nMulti-dimensional arrays are commonly used to store and manipulate data\nin science, engineering, and computing. Python currently has an\nextension module, named Numeric (henceforth called Numeric 1), which\nprovides a satisfactory set of functionality for users manipulating\nhomogeneous arrays of data of moderate size (of order 10 MB). For access\nto larger arrays (of order 100 MB or more) of possibly inhomogeneous\ndata, the implementation of Numeric 1 is inefficient and cumbersome. In\nthe future, requests by the Numerical Python community for additional\nfunctionality is also likely as PEPs 211: Adding New Linear Operators to\nPython, and 225: Elementwise/Objectwise Operators illustrate.\n\nProposal\n\nThis proposal recommends a re-design and re-implementation of Numeric 1,\nhenceforth called Numeric 2, which will enable new types, features, and\nfunctionality to be added in an easy and modular manner. The initial\ndesign of Numeric 2 should focus on providing a generic framework for\nmanipulating arrays of various types and should enable a straightforward\nmechanism for adding new array types and UFuncs. Functional methods that\nare more specific to various disciplines can then be layered on top of\nthis core. This new module will still be called Numeric and most of the\nbehavior found in Numeric 1 will be preserved.\n\nThe proposed design uses four Python classes: ArrayType, UFunc, Array,\nand ArrayView; and a low-level C-extension module to handle the array\noperations efficiently. In addition, each array type has its own\nC-extension module which defines the coercion rules, operations, and\nmethods for that type. 
At a later date, when core functionality is\nstable, some Python classes can be converted to C-extension types.\n\nSome planned features are:\n\n1. Improved memory usage\n\n This feature is particularly important when handling large arrays\n and can produce significant improvements in performance as well as\n memory usage. We have identified several areas where memory usage\n can be improved:\n\n a. Use a local coercion model\n\n Instead of using Python's global coercion model which creates\n temporary arrays, Numeric 2, like Numeric 1, will implement a\n local coercion model as described in PEP 208 which defers the\n responsibility of coercion to the operator. By using internal\n buffers, a coercion operation can be done for each array\n (including output arrays), if necessary, at the time of the\n operation. Benchmarks[1] have shown that performance is at most\n degraded only slightly and is improved in cases where the\n internal buffers are less than the L2 cache size and the\n processor is under load. To avoid array coercion altogether, C\n functions having arguments of mixed type are allowed in Numeric\n 2.\n\n b. Avoid creation of temporary arrays\n\n In complex array expressions (i.e. having more than one\n operation), each operation will create a temporary array which\n will be used and then deleted by the succeeding operation. A\n better approach would be to identify these temporary arrays and\n reuse their data buffers when possible, namely when the array\n shape and type are the same as the temporary array being\n created. This can be done by checking the temporary array's\n reference count. If it is 1, then it will be deleted once the\n operation is done and is a candidate for reuse.\n\n c. Optional use of memory-mapped files\n\n Numeric users sometimes need to access data from very large\n files or to handle data that is greater than the available\n memory. Memory-mapped arrays provide a mechanism to do this by\n storing the data on disk while making it appear to be in memory.\n Memory- mapped arrays should improve access to all files by\n eliminating one of two copy steps during a file access. Numeric\n should be able to access in-memory and memory-mapped arrays\n transparently.\n\n d. Record access\n\n In some fields of science, data is stored in files as binary\n records. For example, in astronomy, photon data is stored as a 1\n dimensional list of photons in order of arrival time. These\n records or C-like structures contain information about the\n detected photon, such as its arrival time, its position on the\n detector, and its energy. Each field may be of a different type,\n such as char, int, or float. Such arrays introduce new issues\n that must be dealt with, in particular byte alignment or byte\n swapping may need to be performed for the numeric values to be\n properly accessed (though byte swapping is also an issue for\n memory mapped data). Numeric 2 is designed to automatically\n handle alignment and representational issues when data is\n accessed or operated on. There are two approaches to\n implementing records; as either a derived array class or a\n special array type, depending on your point-of-view. We defer\n this discussion to the Open Issues section.\n\n2. Additional array types\n\n Numeric 1 has 11 defined types: char, ubyte, sbyte, short, int,\n long, float, double, cfloat, cdouble, and object. 
There are no\n ushort, uint, or ulong types, nor are there more complex types such\n as a bit type which is of use to some fields of science and possibly\n for implementing masked-arrays. The design of Numeric 1 makes the\n addition of these and other types a difficult and error-prone\n process. To enable the easy addition (and deletion) of new array\n types such as a bit type described below, a re-design of Numeric is\n necessary.\n\n a. Bit type\n\n The result of a rich comparison between arrays is an array of\n boolean values. The result can be stored in an array of type\n char, but this is an unnecessary waste of memory. A better\n implementation would use a bit or boolean type, compressing the\n array size by a factor of eight. This is currently being\n implemented for Numeric 1 (by Travis Oliphant) and should be\n included in Numeric 2.\n\n3. Enhanced array indexing syntax\n\n The extended slicing syntax was added to Python to provide greater\n flexibility when manipulating Numeric arrays by allowing step-sizes\n greater than 1. This syntax works well as a shorthand for a list of\n regularly spaced indices. For those situations where a list of\n irregularly spaced indices are needed, an enhanced array indexing\n syntax would allow 1-D arrays to be arguments.\n\n4. Rich comparisons\n\n The implementation of PEP 207: Rich Comparisons in Python 2.1\n provides additional flexibility when manipulating arrays. We intend\n to implement this feature in Numeric 2.\n\n5. Array broadcasting rules\n\n When an operation between a scalar and an array is done, the implied\n behavior is to create a new array having the same shape as the array\n operand containing the scalar value. This is called array\n broadcasting. It also works with arrays of lesser rank, such as\n vectors. This implicit behavior is implemented in Numeric 1 and will\n also be implemented in Numeric 2.\n\nDesign and Implementation\n\nThe design of Numeric 2 has four primary classes:\n\n1. ArrayType:\n\n This is a simple class that describes the fundamental properties of\n an array-type, e.g. its name, its size in bytes, its coercion\n relations with respect to other types, etc., e.g.\n\n Int32 = ArrayType('Int32', 4, 'doc-string')\n\n Its relation to the other types is defined when the C-extension\n module for that type is imported. The corresponding Python code is:\n\n Int32.astype[Real64] = Real64\n\n This says that the Real64 array-type has higher priority than the\n Int32 array-type.\n\n The following attributes and methods are proposed for the core\n implementation. Additional attributes can be added on an individual\n basis, e.g. .bitsize or .bitstrides for the bit type.\n\n Attributes:\n\n .name: e.g. \"Int32\", \"Float64\", etc.\n .typecode: e.g. 'i', 'f', etc.\n (for backward compatibility)\n .size (in bytes): e.g. 4, 8, etc.\n .array_rules (mapping): rules between array types\n .pyobj_rules (mapping): rules between array and python types\n .doc: documentation string\n\n Methods:\n\n __init__(): initialization\n __del__(): destruction\n __repr__(): representation\n\n C-API: This still needs to be fleshed-out.\n\n2. UFunc:\n\n This class is the heart of Numeric 2. 
Its design is similar to that\n of ArrayType in that the UFunc creates a singleton callable object\n whose attributes are name, total and input number of arguments, a\n document string, and an empty CFunc dictionary; e.g.\n\n add = UFunc('add', 3, 2, 'doc-string')\n\n When defined the add instance has no C functions associated with it\n and therefore can do no work. The CFunc dictionary is populated or\n registered later when the C-extension module for an array-type is\n imported. The arguments of the register method are: function name,\n function descriptor, and the CUFunc object. The corresponding Python\n code is\n\n add.register('add', (Int32, Int32, Int32), cfunc-add)\n\n In the initialization function of an array type module, e.g. Int32,\n there are two C API functions: one to initialize the coercion rules\n and the other to register the CFunc objects.\n\n When an operation is applied to some arrays, the __call__ method is\n invoked. It gets the type of each array (if the output array is not\n given, it is created from the coercion rules) and checks the CFunc\n dictionary for a key that matches the argument types. If it exists\n the operation is performed immediately, otherwise the coercion rules\n are used to search for a related operation and set of conversion\n functions. The __call__ method then invokes a compute method written\n in C to iterate over slices of each array, namely:\n\n _ufunc.compute(slice, data, func, swap, conv)\n\n The 'func' argument is a CFuncObject, while the 'swap' and 'conv'\n arguments are lists of CFuncObjects for those arrays needing pre- or\n post-processing, otherwise None is used. The data argument is a list\n of buffer objects, and the slice argument gives the number of\n iterations for each dimension along with the buffer offset and step\n size for each array and each dimension.\n\n We have predefined several UFuncs for use by the __call__ method:\n cast, swap, getobj, and setobj. The cast and swap functions do\n coercion and byte-swapping, respectively and the getobj and setobj\n functions do coercion between Numeric arrays and Python sequences.\n\n The following attributes and methods are proposed for the core\n implementation.\n\n Attributes:\n\n .name: e.g. \"add\", \"subtract\", etc.\n .nargs: number of total arguments\n .iargs: number of input arguments\n .cfuncs (mapping): the set C functions\n .doc: documentation string\n\n Methods:\n\n __init__(): initialization\n __del__(): destruction\n __repr__(): representation\n __call__(): look-up and dispatch method\n initrule(): initialize coercion rule\n uninitrule(): uninitialize coercion rule\n register(): register a CUFunc\n unregister(): unregister a CUFunc\n\n C-API: This still needs to be fleshed-out.\n\n3. Array:\n\n This class contains information about the array, such as shape,\n type, endian-ness of the data, etc.. 
Its operators, '+', '-', etc.\n just invoke the corresponding UFunc function, e.g.\n\n def __add__(self, other):\n return ufunc.add(self, other)\n\n The following attributes, methods, and functions are proposed for\n the core implementation.\n\n Attributes:\n\n .shape: shape of the array\n .format: type of the array\n .real (only complex): real part of a complex array\n .imag (only complex): imaginary part of a complex array\n\n Methods:\n\n __init__(): initialization\n __del__(): destruction\n __repr_(): representation\n __str__(): pretty representation\n __cmp__(): rich comparison\n __len__():\n __getitem__():\n __setitem__():\n __getslice__():\n __setslice__():\n numeric methods:\n copy(): copy of array\n aslist(): create list from array\n asstring(): create string from array\n\n Functions:\n\n fromlist(): create array from sequence\n fromstring(): create array from string\n array(): create array with shape and value\n concat(): concatenate two arrays\n resize(): resize array\n\n C-API: This still needs to be fleshed-out.\n\n4. ArrayView\n\n This class is similar to the Array class except that the reshape and\n flat methods will raise exceptions, since non-contiguous arrays\n cannot be reshaped or flattened using just pointer and step-size\n information.\n\n C-API: This still needs to be fleshed-out.\n\n5. C-extension modules:\n\n Numeric2 will have several C-extension modules.\n\n a. _ufunc:\n\n The primary module of this set is the _ufuncmodule.c. The\n intention of this module is to do the bare minimum, i.e. iterate\n over arrays using a specified C function. The interface of these\n functions is the same as Numeric 1, i.e.\n\n int (*CFunc)(char *data, int *steps, int repeat, void *func);\n\n and their functionality is expected to be the same, i.e. they\n iterate over the inner-most dimension.\n\n The following attributes and methods are proposed for the core\n implementation.\n\n Attributes:\n\n Methods:\n\n compute():\n\n C-API: This still needs to be fleshed-out.\n\n b. _int32, _real64, etc.:\n\n There will also be C-extension modules for each array type, e.g.\n _int32module.c, _real64module.c, etc. As mentioned previously,\n when these modules are imported by the UFunc module, they will\n automatically register their functions and coercion rules. New\n or improved versions of these modules can be easily implemented\n and used without affecting the rest of Numeric 2.\n\nOpen Issues\n\n1. Does slicing syntax default to copy or view behavior?\n\n The default behavior of Python is to return a copy of a sub-list or\n tuple when slicing syntax is used, whereas Numeric 1 returns a view\n into the array. The choice made for Numeric 1 is apparently for\n reasons of performance: the developers wish to avoid the penalty of\n allocating and copying the data buffer during each array operation\n and feel that the need for a deep copy of an array to be rare. Yet,\n some have argued that Numeric's slice notation should also have copy\n behavior to be consistent with Python lists. In this case the\n performance penalty associated with copy behavior can be minimized\n by implementing copy-on-write. This scheme has both arrays sharing\n one data buffer (as in view behavior) until either array is assigned\n new data at which point a copy of the data buffer is made. View\n behavior would then be implemented by an ArrayView class, whose\n behavior be similar to Numeric 1 arrays, i.e. .shape is not settable\n for non-contiguous arrays. 
The use of an ArrayView class also makes\n explicit what type of data the array contains.\n\n2. Does item syntax default to copy or view behavior?\n\n A similar question arises with the item syntax. For example, if\n a = [[0,1,2], [3,4,5]] and b = a[0], then changing b[0] also changes\n a[0][0], because a[0] is a reference or view of the first row of a.\n Therefore, if c is a 2-d array, it would appear that c[i] should\n return a 1-d array which is a view into, instead of a copy of, c for\n consistency. Yet, c[i] can be considered just a shorthand for c[i,:]\n which would imply copy behavior assuming slicing syntax returns a\n copy. Should Numeric 2 behave the same way as lists and return a\n view or should it return a copy.\n\n3. How is scalar coercion implemented?\n\n Python has fewer numeric types than Numeric which can cause coercion\n problems. For example, when multiplying a Python scalar of type\n float and a Numeric array of type float, the Numeric array is\n converted to a double, since the Python float type is actually a\n double. This is often not the desired behavior, since the Numeric\n array will be doubled in size which is likely to be annoying,\n particularly for very large arrays. We prefer that the array type\n trumps the python type for the same type class, namely integer,\n float, and complex. Therefore, an operation between a Python integer\n and an Int16 (short) array will return an Int16 array. Whereas an\n operation between a Python float and an Int16 array would return a\n Float64 (double) array. Operations between two arrays use normal\n coercion rules.\n\n4. How is integer division handled?\n\n In a future version of Python, the behavior of integer division will\n change. The operands will be converted to floats, so the result will\n be a float. If we implement the proposed scalar coercion rules where\n arrays have precedence over Python scalars, then dividing an array\n by an integer will return an integer array and will not be\n consistent with a future version of Python which would return an\n array of type double. Scientific programmers are familiar with the\n distinction between integer and float-point division, so should\n Numeric 2 continue with this behavior?\n\n5. How should records be implemented?\n\n There are two approaches to implementing records depending on your\n point-of-view. The first is two divide arrays into separate classes\n depending on the behavior of their types. For example, numeric\n arrays are one class, strings a second, and records a third, because\n the range and type of operations of each class differ. As such, a\n record array is not a new type, but a mechanism for a more flexible\n form of array. To easily access and manipulate such complex data,\n the class is comprised of numeric arrays having different byte\n offsets into the data buffer. For example, one might have a table\n consisting of an array of Int16, Real32 values. Two numeric arrays,\n one with an offset of 0 bytes and a stride of 6 bytes to be\n interpreted as Int16, and one with an offset of 2 bytes and a stride\n of 6 bytes to be interpreted as Real32 would represent the record\n array. Both numeric arrays would refer to the same data buffer, but\n have different offset and stride attributes, and a different numeric\n type.\n\n The second approach is to consider a record as one of many array\n types, albeit with fewer, and possibly different, array operations\n than for numeric arrays. 
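   Whichever approach is taken, the Int16/Real32 record described above is
   a fixed 6-byte layout; today's struct module can describe the same
   layout, for example:

       import struct

       rec = struct.Struct("<hf")           # Int16 then Real32: 6 bytes per record
       table = bytearray(rec.size * 100)    # room for 100 records
       rec.pack_into(table, 0, 7, 3.5)      # write the first record
       rec.unpack_from(table, 0)            # -> (7, 3.5)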
This approach considers an array type to be\n a mapping of a fixed-length string. The mapping can either be\n simple, like integer and floating-point numbers, or complex, like a\n complex number, a byte string, and a C-structure. The record type\n effectively merges the struct and Numeric modules into a\n multi-dimensional struct array. This approach implies certain\n changes to the array interface. For example, the 'typecode' keyword\n argument should probably be changed to the more descriptive 'format'\n keyword.\n\n a. How are record semantics defined and implemented?\n\n Which ever implementation approach is taken for records, the\n syntax and semantics of how they are to be accessed and\n manipulated must be decided, if one wishes to have access to\n sub-fields of records. In this case, the record type can\n essentially be considered an inhomogeneous list, like a tuple\n returned by the unpack method of the struct module; and a 1-d\n array of records may be interpreted as a 2-d array with the\n second dimension being the index into the list of fields. This\n enhanced array semantics makes access to an array of one or more\n of the fields easy and straightforward. It also allows a user to\n do array operations on a field in a natural and intuitive way.\n If we assume that records are implemented as an array type, then\n last dimension defaults to 0 and can therefore be neglected for\n arrays comprised of simple types, like numeric.\n\n6. How are masked-arrays implemented?\n\n Masked-arrays in Numeric 1 are implemented as a separate array\n class. With the ability to add new array types to Numeric 2, it is\n possible that masked-arrays in Numeric 2 could be implemented as a\n new array type instead of an array class.\n\n7. How are numerical errors handled (IEEE floating-point errors in\n particular)?\n\n It is not clear to the proposers (Paul Barrett and Travis Oliphant)\n what is the best or preferred way of handling errors. Since most of\n the C functions that do the operation, iterate over the inner-most\n (last) dimension of the array. This dimension could contain a\n thousand or more items having one or more errors of differing type,\n such as divide-by-zero, underflow, and overflow. Additionally,\n keeping track of these errors may come at the expense of\n performance. Therefore, we suggest several options:\n\n a. Print a message of the most severe error, leaving it to the user\n to locate the errors.\n b. Print a message of all errors that occurred and the number of\n occurrences, leaving it to the user to locate the errors.\n c. Print a message of all errors that occurred and a list of where\n they occurred.\n d. Or use a hybrid approach, printing only the most severe error,\n yet keeping track of what and where the errors occurred. This\n would allow the user to locate the errors while keeping the\n error message brief.\n\n8. What features are needed to ease the integration of FORTRAN\n libraries and code?\n\nIt would be a good idea at this stage to consider how to ease the\nintegration of FORTRAN libraries and user code in Numeric 2.\n\nImplementation Steps\n\n1. Implement basic UFunc capability\n a. Minimal Array class:\n\n Necessary class attributes and methods, e.g. .shape, .data,\n .type, etc.\n\n b. Minimal ArrayType class:\n\n Int32, Real64, Complex64, Char, Object\n\n c. Minimal UFunc class:\n\n UFunc instantiation, CFunction registration, UFunc call for 1-D\n arrays including the rules for doing alignment, byte-swapping,\n and coercion.\n\n d. 
Minimal C-extension module:\n\n _UFunc, which does the innermost array loop in C.\n\n This step implements whatever is needed to do: 'c = add(a, b)'\n where a, b, and c are 1-D arrays. It teaches us how to add new\n UFuncs, to coerce the arrays, to pass the necessary information\n to a C iterator method and to do the actually computation.\n2. Continue enhancing the UFunc iterator and Array class\n a. Implement some access methods for the Array class: print, repr,\n getitem, setitem, etc.\n b. Implement multidimensional arrays\n c. Implement some of basic Array methods using UFuncs: +, -, *, /,\n etc.\n d. Enable UFuncs to use Python sequences.\n3. Complete the standard UFunc and Array class behavior\n a. Implement getslice and setslice behavior\n b. Work on Array broadcasting rules\n c. Implement Record type\n4. Add additional functionality\n a. Add more UFuncs\n b. Implement buffer or mmap access\n\nIncompatibilities\n\nThe following is a list of incompatibilities in behavior between Numeric\n1 and Numeric 2.\n\n1. Scalar coercion rules\n\n Numeric 1 has single set of coercion rules for array and Python\n numeric types. This can cause unexpected and annoying problems\n during the calculation of an array expression. Numeric 2 intends to\n overcome these problems by having two sets of coercion rules: one\n for arrays and Python numeric types, and another just for arrays.\n\n2. No savespace attribute\n\n The savespace attribute in Numeric 1 makes arrays with this\n attribute set take precedence over those that do not have it set.\n Numeric 2 will not have such an attribute and therefore normal array\n coercion rules will be in effect.\n\n3. Slicing syntax returns a copy\n\n The slicing syntax in Numeric 1 returns a view into the original\n array. The slicing behavior for Numeric 2 will be a copy. You should\n use the ArrayView class to get a view into an array.\n\n4. Boolean comparisons return a boolean array\n\n A comparison between arrays in Numeric 1 results in a Boolean\n scalar, because of current limitations in Python. The advent of Rich\n Comparisons in Python 2.1 will allow an array of Booleans to be\n returned.\n\n5. Type characters are deprecated\n\n Numeric 2 will have an ArrayType class composed of Type instances,\n for example Int8, Int16, Int32, and Int for signed integers. The\n typecode scheme in Numeric 1 will be available for backward\n compatibility, but will be deprecated.\n\nAppendices\n\nA. Implicit sub-arrays iteration\n\n A computer animation is composed of a number of 2-D images or frames\n of identical shape. By stacking these images into a single block of\n memory, a 3-D array is created. Yet the operations to be performed\n are not meant for the entire 3-D array, but on the set of 2-D\n sub-arrays. In most array languages, each frame has to be extracted,\n operated on, and then reinserted into the output array using a\n for-like loop. The J language allows the programmer to perform such\n operations implicitly by having a rank for the frame and array. By\n default these ranks will be the same during the creation of the\n array. It was the intention of the Numeric 1 developers to implement\n this feature, since it is based on the language J. The Numeric 1\n code has the required variables for implementing this behavior, but\n was never implemented. 
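   A sketch of the explicit loop that implicit sub-array iteration is meant
   to remove, where frames is a 3-D stack of 2-D images and brighten is
   some 2-D operation (both names are hypothetical):

       for i in range(len(frames)):
           output[i] = brighten(frames[i])
       # With a frame rank of 2 declared for the operation, the same work
       # could be requested simply as: output = brighten(frames)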
We intend to implement implicit sub-array\n iteration in Numeric 2, if the array broadcasting rules found in\n Numeric 1 do not fully support this behavior.\n\nCopyright\n\nThis document is placed in the public domain.\n\nRelated PEPs\n\n- PEP 207: Rich Comparisons by Guido van Rossum and David Ascher\n- PEP 208: Reworking the Coercion Model by Neil Schemenauer and\n Marc-Andre' Lemburg\n- PEP 211: Adding New Linear Algebra Operators to Python by Greg\n Wilson\n- PEP 225: Elementwise/Objectwise Operators by Huaiyu Zhu\n- PEP 228: Reworking Python's Numeric Model by Moshe Zadka\n\nReferences\n\n[1]\nP. Greenfield 2000. private communication.\n\nPEP: 202 Title: List Comprehensions Author: Barry Warsaw\n Status: Final Type: Standards Track Content-Type:\ntext/x-rst Created: 13-Jul-2000 Python-Version: 2.0 Post-History:\n\nIntroduction\n\nThis PEP describes a proposed syntactical extension to Python, list\ncomprehensions.\n\nThe Proposed Solution\n\nIt is proposed to allow conditional construction of list literals using\nfor and if clauses. They would nest in the same way for loops and if\nstatements nest now.\n\nRationale\n\nList comprehensions provide a more concise way to create lists in\nsituations where map() and filter() and/or nested loops would currently\nbe used.\n\nExamples\n\n >>> print [i for i in range(10)]\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n >>> print [i for i in range(20) if i%2 == 0]\n [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\n\n >>> nums = [1, 2, 3, 4]\n >>> fruit = [\"Apples\", \"Peaches\", \"Pears\", \"Bananas\"]\n >>> print [(i, f) for i in nums for f in fruit]\n [(1, 'Apples'), (1, 'Peaches'), (1, 'Pears'), (1, 'Bananas'),\n (2, 'Apples'), (2, 'Peaches'), (2, 'Pears'), (2, 'Bananas'),\n (3, 'Apples'), (3, 'Peaches'), (3, 'Pears'), (3, 'Bananas'),\n (4, 'Apples'), (4, 'Peaches'), (4, 'Pears'), (4, 'Bananas')]\n >>> print [(i, f) for i in nums for f in fruit if f[0] == \"P\"]\n [(1, 'Peaches'), (1, 'Pears'),\n (2, 'Peaches'), (2, 'Pears'),\n (3, 'Peaches'), (3, 'Pears'),\n (4, 'Peaches'), (4, 'Pears')]\n >>> print [(i, f) for i in nums for f in fruit if f[0] == \"P\" if i%2 == 1]\n [(1, 'Peaches'), (1, 'Pears'), (3, 'Peaches'), (3, 'Pears')]\n >>> print [i for i in zip(nums, fruit) if i[0]%2==0]\n [(2, 'Peaches'), (4, 'Bananas')]\n\nReference Implementation\n\nList comprehensions become part of the Python language with release 2.0,\ndocumented in[1].\n\nBDFL Pronouncements\n\n- The syntax proposed above is the Right One.\n- The form [x, y for ...] is disallowed; one is required to write\n [(x, y) for ...].\n- The form [... for x... for y...] 
nests, with the last index varying\n fastest, just like nested for loops.\n\nReferences\n\n[1] http://docs.python.org/reference/expressions.html#list-displays\n\nPEP: 675 Title: Arbitrary Literal String Type Author: Pradeep Kumar\nSrinivasan , Graham Bleaney \nSponsor: Jelle Zijlstra Discussions-To:\nhttps://mail.python.org/archives/list/typing-sig@python.org/thread/VB74EHNM4RODDFM64NEEEBJQVAUAWIAW/\nStatus: Final Type: Standards Track Topic: Typing Created: 30-Nov-2021\nPython-Version: 3.11 Post-History: 07-Feb-2022 Resolution:\nhttps://mail.python.org/archives/list/python-dev@python.org/message/XEOOSSPNYPGZ5NXOJFPLXG2BTN7EVRT5/\n\ntyping:literalstring and typing.LiteralString\n\nAbstract\n\nThere is currently no way to specify, using type annotations, that a\nfunction parameter can be of any literal string type. We have to specify\na precise literal string type, such as Literal[\"foo\"]. This PEP\nintroduces a supertype of literal string types: LiteralString. This\nallows a function to accept arbitrary literal string types, such as\nLiteral[\"foo\"] or Literal[\"bar\"].\n\nMotivation\n\nPowerful APIs that execute SQL or shell commands often recommend that\nthey be invoked with literal strings, rather than arbitrary user\ncontrolled strings. There is no way to express this recommendation in\nthe type system, however, meaning security vulnerabilities sometimes\noccur when developers fail to follow it. For example, a naive way to\nlook up a user record from a database is to accept a user id and insert\nit into a predefined SQL query:\n\n def query_user(conn: Connection, user_id: str) -> User:\n query = f\"SELECT * FROM data WHERE user_id = {user_id}\"\n conn.execute(query)\n ... # Transform data to a User object and return it\n\n query_user(conn, \"user123\") # OK.\n\nHowever, the user-controlled data user_id is being mixed with the SQL\ncommand string, which means a malicious user could run arbitrary SQL\ncommands:\n\n # Delete the table.\n query_user(conn, \"user123; DROP TABLE data;\")\n\n # Fetch all users (since 1 = 1 is always true).\n query_user(conn, \"user123 OR 1 = 1\")\n\nTo prevent such SQL injection attacks, SQL APIs offer parameterized\nqueries, which separate the executed query from user-controlled data and\nmake it impossible to run arbitrary queries. For example, with sqlite3,\nour original function would be written safely as a query with\nparameters:\n\n def query_user(conn: Connection, user_id: str) -> User:\n query = \"SELECT * FROM data WHERE user_id = ?\"\n conn.execute(query, (user_id,))\n ...\n\nThe problem is that there is no way to enforce this discipline.\nsqlite3's own documentation can only admonish the reader to not\ndynamically build the sql argument from external input; the API's\nauthors cannot express that through the type system. 
Users can (and\noften do) still use a convenient f-string as before and leave their code\nvulnerable to SQL injection.\n\nExisting tools, such as the popular security linter Bandit, attempt to\ndetect unsafe external data used in SQL APIs, by inspecting the AST or\nby other semantic pattern-matching. These tools, however, preclude\ncommon idioms like storing a large multi-line query in a variable before\nexecuting it, adding literal string modifiers to the query based on some\nconditions, or transforming the query string using a function. (We\nsurvey existing tools in the Rejected Alternatives section.) For\nexample, many tools will detect a false positive issue in this benign\nsnippet:\n\n def query_data(conn: Connection, user_id: str, limit: bool) -> None:\n query = \"\"\"\n SELECT\n user.name,\n user.age\n FROM data\n WHERE user_id = ?\n \"\"\"\n if limit:\n query += \" LIMIT 1\"\n\n conn.execute(query, (user_id,))\n\nWe want to forbid harmful execution of user-controlled data while still\nallowing benign idioms like the above and not requiring extra user work.\n\nTo meet this goal, we introduce the LiteralString type, which only\naccepts string values that are known to be made of literals. This is a\ngeneralization of the Literal[\"foo\"] type from PEP 586. A string of type\nLiteralString cannot contain user-controlled data. Thus, any API that\nonly accepts LiteralString will be immune to injection vulnerabilities\n(with pragmatic limitations ).\n\nSince we want the sqlite3 execute method to disallow strings built with\nuser input, we would make its typeshed stub accept a sql query that is\nof type LiteralString:\n\n from typing import LiteralString\n\n def execute(self, sql: LiteralString, parameters: Iterable[str] = ...) -> Cursor: ...\n\nThis successfully forbids our unsafe SQL example. The variable query\nbelow is inferred to have type str, since it is created from a format\nstring using user_id, and cannot be passed to execute:\n\n def query_user(conn: Connection, user_id: str) -> User:\n query = f\"SELECT * FROM data WHERE user_id = {user_id}\"\n conn.execute(query) # Error: Expected LiteralString, got str.\n ...\n\nThe method remains flexible enough to allow our more complicated\nexample:\n\n def query_data(conn: Connection, user_id: str, limit: bool) -> None:\n # This is a literal string.\n query = \"\"\"\n SELECT\n user.name,\n user.age\n FROM data\n WHERE user_id = ?\n \"\"\"\n\n if limit:\n # Still has type LiteralString because we added a literal string.\n query += \" LIMIT 1\"\n\n conn.execute(query, (user_id,)) # OK\n\nNotice that the user did not have to change their SQL code at all. The\ntype checker was able to infer the literal string type and complain only\nin case of violations.\n\nLiteralString is also useful in other cases where we want strict\ncommand-data separation, such as when building shell commands or when\nrendering a string into an HTML response without escaping (see Appendix\nA: Other Uses). Overall, this combination of strictness and flexibility\nmakes it easy to enforce safer API usage in sensitive code without\nburdening users.\n\nUsage statistics\n\nIn a sample of open-source projects using sqlite3, we found that\nconn.execute was called ~67% of the time with a safe string literal and\n~33% of the time with a potentially unsafe, local string variable. Using\nthis PEP's literal string type along with a type checker would prevent\nthe unsafe portion of that 33% of cases (ie. 
the ones where user\ncontrolled data is incorporated into the query), while seamlessly\nallowing the safe ones to remain.\n\nRationale\n\nFirstly, why use types to prevent security vulnerabilities?\n\nWarning users in documentation is insufficient - most users either never\nsee these warnings or ignore them. Using an existing dynamic or static\nanalysis approach is too restrictive - these prevent natural idioms, as\nwe saw in the Motivation section (and will discuss more extensively in\nthe Rejected Alternatives section). The typing-based approach in this\nPEP strikes a user-friendly balance between strictness and flexibility.\n\nRuntime approaches do not work because, at runtime, the query string is\na plain str. While we could prevent some exploits using heuristics, such\nas regex-filtering for obviously malicious payloads, there will always\nbe a way to work around them (perfectly distinguishing good and bad\nqueries reduces to the halting problem).\n\nStatic approaches, such as checking the AST to see if the query string\nis a literal string expression, cannot tell when a string is assigned to\nan intermediate variable or when it is transformed by a benign function.\nThis makes them overly restrictive.\n\nThe type checker, surprisingly, does better than both because it has\naccess to information not available in the runtime or static analysis\napproaches. Specifically, the type checker can tell us whether an\nexpression has a literal string type, say Literal[\"foo\"]. The type\nchecker already propagates types across variable assignments or function\ncalls.\n\nIn the current type system itself, if the SQL or shell command execution\nfunction only accepted three possible input strings, our job would be\ndone. We would just say:\n\n def execute(query: Literal[\"foo\", \"bar\", \"baz\"]) -> None: ...\n\nBut, of course, execute can accept any possible query. How do we ensure\nthat the query does not contain an arbitrary, user-controlled string?\n\nWe want to specify that the value must be of some type Literal[<...>]\nwhere <...> is some string. This is what LiteralString represents.\nLiteralString is the \"supertype\" of all literal string types. In effect,\nthis PEP just introduces a type in the type hierarchy between\nLiteral[\"foo\"] and str. Any particular literal string, such as\nLiteral[\"foo\"] or Literal[\"bar\"], is compatible with LiteralString, but\nnot the other way around. The \"supertype\" of LiteralString itself is\nstr. So, LiteralString is compatible with str, but not the other way\naround.\n\nNote that a Union of literal types is naturally compatible with\nLiteralString because each element of the Union is individually\ncompatible with LiteralString. So, Literal[\"foo\", \"bar\"] is compatible\nwith LiteralString.\n\nHowever, recall that we don't just want to represent exact literal\nqueries. We also want to support composition of two literal strings,\nsuch as query + \" LIMIT 1\". This too is possible with the above concept.\nIf x and y are two values of type LiteralString, then x + y will also be\nof type compatible with LiteralString. We can reason about this by\nlooking at specific instances such as Literal[\"foo\"] and Literal[\"bar\"];\nthe value of the added string x + y can only be \"foobar\", which has type\nLiteral[\"foobar\"] and is thus compatible with LiteralString. 
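As a small illustration of the addition rule sketched here, the following snippet shows what a checker implementing this rule would accept and reject; expect_literal_string is the same kind of helper used in the examples later in this PEP, and the comments describe the intended rule rather than the behavior of any particular tool today:

    from typing import Literal, LiteralString  # typing_extensions on Python < 3.11

    def expect_literal_string(s: LiteralString) -> None: ...

    x: Literal["foo"] = "foo"
    y: Literal["bar"] = "bar"

    # The only value x + y can have at runtime is "foobar", so the sum is
    # still compatible with LiteralString.
    expect_literal_string(x + y)            # OK

    user_input: str = input()
    expect_literal_string(x + user_input)   # Error: the result may contain user data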
The same\nreasoning applies when x and y are unions of literal types; the result\nof pairwise adding any two literal types from x and y respectively is a\nliteral type, which means that the overall result is a Union of literal\ntypes and is thus compatible with LiteralString.\n\nIn this way, we are able to leverage Python's concept of a Literal\nstring type to specify that our API can only accept strings that are\nknown to be constructed from literals. More specific details follow in\nthe remaining sections.\n\nSpecification\n\nRuntime Behavior\n\nWe propose adding LiteralString to typing.py, with an implementation\nsimilar to typing.NoReturn.\n\nNote that LiteralString is a special form used solely for type checking.\nThere is no expression for which type() will produce LiteralString\nat runtime. So, we do not specify in the implementation that it is a\nsubclass of str.\n\nValid Locations for LiteralString\n\nLiteralString can be used where any other type can be used:\n\n variable_annotation: LiteralString\n\n def my_function(literal_string: LiteralString) -> LiteralString: ...\n\n class Foo:\n my_attribute: LiteralString\n\n type_argument: List[LiteralString]\n\n T = TypeVar(\"T\", bound=LiteralString)\n\nIt cannot be nested within unions of Literal types:\n\n bad_union: Literal[\"hello\", LiteralString] # Not OK\n bad_nesting: Literal[LiteralString] # Not OK\n\nType Inference\n\nInferring LiteralString\n\nAny literal string type is compatible with LiteralString. For example,\nx: LiteralString = \"foo\" is valid because \"foo\" is inferred to be of\ntype Literal[\"foo\"].\n\nAs per the Rationale, we also infer LiteralString in the following\ncases:\n\n- Addition: x + y is of type LiteralString if both x and y are\n compatible with LiteralString.\n- Joining: sep.join(xs) is of type LiteralString if sep's type is\n compatible with LiteralString and xs's type is compatible with\n Iterable[LiteralString].\n- In-place addition: If s has type LiteralString and x has type\n compatible with LiteralString, then s += x preserves s's type as\n LiteralString.\n- String formatting: An f-string has type LiteralString if and only if\n its constituent expressions are literal strings. s.format(...) has\n type LiteralString if and only if s and the arguments have types\n compatible with LiteralString.\n- Literal-preserving methods: In Appendix C, we have provided an\n exhaustive list of str methods that preserve the LiteralString type.\n\nIn all other cases, if one or more of the composed values has a\nnon-literal type str, the composition of types will have type str. For\nexample, if s has type str, then \"hello\" + s has type str. This matches\nthe pre-existing behavior of type checkers.\n\nLiteralString is compatible with the type str. It inherits all methods\nfrom str. 
So, if we have a variable s of type LiteralString, it is safe\nto write s.startswith(\"hello\").\n\nSome type checkers refine the type of a string when doing an equality\ncheck:\n\n def foo(s: str) -> None:\n if s == \"bar\":\n reveal_type(s) # => Literal[\"bar\"]\n\nSuch a refined type in the if-block is also compatible with\nLiteralString because its type is Literal[\"bar\"].\n\nExamples\n\nSee the examples below to help clarify the above rules:\n\n:\n\n literal_string: LiteralString\n s: str = literal_string # OK\n\n literal_string: LiteralString = s # Error: Expected LiteralString, got str.\n literal_string: LiteralString = \"hello\" # OK\n\nAddition of literal strings:\n\n def expect_literal_string(s: LiteralString) -> None: ...\n\n expect_literal_string(\"foo\" + \"bar\") # OK\n expect_literal_string(literal_string + \"bar\") # OK\n\n literal_string2: LiteralString\n expect_literal_string(literal_string + literal_string2) # OK\n\n plain_string: str\n expect_literal_string(literal_string + plain_string) # Not OK.\n\nJoin using literal strings:\n\n expect_literal_string(\",\".join([\"foo\", \"bar\"])) # OK\n expect_literal_string(literal_string.join([\"foo\", \"bar\"])) # OK\n expect_literal_string(literal_string.join([literal_string, literal_string2])) # OK\n\n xs: List[LiteralString]\n expect_literal_string(literal_string.join(xs)) # OK\n expect_literal_string(plain_string.join([literal_string, literal_string2]))\n # Not OK because the separator has type 'str'.\n\nIn-place addition using literal strings:\n\n literal_string += \"foo\" # OK\n literal_string += literal_string2 # OK\n literal_string += plain_string # Not OK\n\nFormat strings using literal strings:\n\n literal_name: LiteralString\n expect_literal_string(f\"hello {literal_name}\")\n # OK because it is composed from literal strings.\n\n expect_literal_string(\"hello {}\".format(literal_name)) # OK\n\n expect_literal_string(f\"hello\") # OK\n\n username: str\n expect_literal_string(f\"hello {username}\")\n # NOT OK. 
The format-string is constructed from 'username',\n # which has type 'str'.\n\n expect_literal_string(\"hello {}\".format(username)) # Not OK\n\nOther literal types, such as literal integers, are not compatible with\nLiteralString:\n\n some_int: int\n expect_literal_string(some_int) # Error: Expected LiteralString, got int.\n\n literal_one: Literal[1] = 1\n expect_literal_string(literal_one) # Error: Expected LiteralString, got Literal[1].\n\nWe can call functions on literal strings:\n\n def add_limit(query: LiteralString) -> LiteralString:\n return query + \" LIMIT = 1\"\n\n def my_query(query: LiteralString, user_id: str) -> None:\n sql_connection().execute(add_limit(query), (user_id,)) # OK\n\nConditional statements and expressions work as expected:\n\n def return_literal_string() -> LiteralString:\n return \"foo\" if condition1() else \"bar\" # OK\n\n def return_literal_str2(literal_string: LiteralString) -> LiteralString:\n return \"foo\" if condition1() else literal_string # OK\n\n def return_literal_str3() -> LiteralString:\n if condition1():\n result: Literal[\"foo\"] = \"foo\"\n else:\n result: LiteralString = \"bar\"\n\n return result # OK\n\nInteraction with TypeVars and Generics\n\nTypeVars can be bound to LiteralString:\n\n from typing import Literal, LiteralString, TypeVar\n\n TLiteral = TypeVar(\"TLiteral\", bound=LiteralString)\n\n def literal_identity(s: TLiteral) -> TLiteral:\n return s\n\n hello: Literal[\"hello\"] = \"hello\"\n y = literal_identity(hello)\n reveal_type(y) # => Literal[\"hello\"]\n\n s: LiteralString\n y2 = literal_identity(s)\n reveal_type(y2) # => LiteralString\n\n s_error: str\n literal_identity(s_error)\n # Error: Expected TLiteral (bound to LiteralString), got str.\n\nLiteralString can be used as a type argument for generic classes:\n\n class Container(Generic[T]):\n def __init__(self, value: T) -> None:\n self.value = value\n\n literal_string: LiteralString = \"hello\"\n x: Container[LiteralString] = Container(literal_string) # OK\n\n s: str\n x_error: Container[LiteralString] = Container(s) # Not OK\n\nStandard containers like List work as expected:\n\n xs: List[LiteralString] = [\"foo\", \"bar\", \"baz\"]\n\nInteractions with Overloads\n\nLiteral strings and overloads do not need to interact in a special way:\nthe existing rules work fine. LiteralString can be used as a fallback\noverload where a specific Literal[\"foo\"] type does not match:\n\n @overload\n def foo(x: Literal[\"foo\"]) -> int: ...\n @overload\n def foo(x: LiteralString) -> bool: ...\n @overload\n def foo(x: str) -> str: ...\n\n x1: int = foo(\"foo\") # First overload.\n x2: bool = foo(\"bar\") # Second overload.\n s: str\n x3: str = foo(s) # Third overload.\n\nBackwards Compatibility\n\nWe propose adding typing_extensions.LiteralString for use in earlier\nPython versions.\n\nAs PEP 586 mentions\n<586#backwards-compatibility>, type checkers \"should feel free to\nexperiment with more sophisticated inference techniques\". So, if the\ntype checker infers a literal string type for an unannotated variable\nthat is initialized with a literal string, the following example should\nbe OK:\n\n x = \"hello\"\n expect_literal_string(x)\n # OK, because x is inferred to have type 'Literal[\"hello\"]'.\n\nThis enables precise type checking of idiomatic SQL query code without\nannotating the code at all (as seen in the Motivation section example).\n\nHowever, like PEP 586, this PEP does not mandate the above inference\nstrategy. 
In case the type checker doesn't infer x to have type\nLiteral[\"hello\"], users can aid the type checker by explicitly\nannotating it as x: LiteralString:\n\n x: LiteralString = \"hello\"\n expect_literal_string(x)\n\nRejected Alternatives\n\nWhy not use tool X?\n\nTools to catch issues such as SQL injection seem to come in three\nflavors: AST based, function level analysis, and taint flow analysis.\n\nAST-based tools: Bandit has a plugin to warn when SQL queries are not\nliteral strings. The problem is that many perfectly safe SQL queries are\ndynamically built out of string literals, as shown in the Motivation\nsection. At the AST level, the resultant SQL query is not going to\nappear as a string literal anymore and is thus indistinguishable from a\npotentially malicious string. To use these tools would require\nsignificantly restricting developers' ability to build SQL queries.\nLiteralString can provide similar safety guarantees with fewer\nrestrictions.\n\nSemgrep and pyanalyze: Semgrep supports a more sophisticated function\nlevel analysis, including constant propagation within a function. This\nallows us to prevent injection attacks while permitting some forms of\nsafe dynamic SQL queries within a function. pyanalyze has a similar\nextension. But neither handles function calls that construct and return\nsafe SQL queries. For example, in the code sample below,\nbuild_insert_query is a helper function to create a query that inserts\nmultiple values into the corresponding columns. Semgrep and pyanalyze\nforbid this natural usage whereas LiteralString handles it with no\nburden on the programmer:\n\n def build_insert_query(\n table: LiteralString\n insert_columns: Iterable[LiteralString],\n ) -> LiteralString:\n sql = \"INSERT INTO \" + table\n\n column_clause = \", \".join(insert_columns)\n value_clause = \", \".join([\"?\"] * len(insert_columns))\n\n sql += f\" ({column_clause}) VALUES ({value_clause})\"\n return sql\n\n def insert_data(\n conn: Connection,\n kvs_to_insert: Dict[LiteralString, str]\n ) -> None:\n query = build_insert_query(\"data\", kvs_to_insert.keys())\n conn.execute(query, kvs_to_insert.values())\n\n # Example usage\n data_to_insert = {\n \"column_1\": value_1, # Note: values are not literals\n \"column_2\": value_2,\n \"column_3\": value_3,\n }\n insert_data(conn, data_to_insert)\n\nTaint flow analysis: Tools such as Pysa or CodeQL are capable of\ntracking data flowing from a user controlled input into a SQL query.\nThese tools are powerful but involve considerable overhead in setting up\nthe tool in CI, defining \"taint\" sinks and sources, and teaching\ndevelopers how to use them. They also usually take longer to run than a\ntype checker (minutes instead of seconds), which means feedback is not\nimmediate. Finally, they move the burden of preventing vulnerabilities\non to library users instead of allowing the libraries themselves to\nspecify precisely how their APIs must be called (as is possible with\nLiteralString).\n\nOne final reason to prefer using a new type over a dedicated tool is\nthat type checkers are more widely used than dedicated security tooling;\nfor example, MyPy was downloaded over 7 million times in Jan 2022 vs\nless than 2 million times for Bandit. 
Having security protections built\nright into type checkers will mean that more developers benefit from\nthem.\n\nWhy not use a NewType for str?\n\nAny API for which LiteralString would be suitable could instead be\nupdated to accept a different type created within the Python type\nsystem, such as NewType(\"SafeSQL\", str):\n\n SafeSQL = NewType(\"SafeSQL\", str)\n\n def execute(self, sql: SafeSQL, parameters: Iterable[str] = ...) -> Cursor: ...\n\n execute(SafeSQL(\"SELECT * FROM data WHERE user_id = ?\"), user_id) # OK\n\n user_query: str\n execute(user_query) # Error: Expected SafeSQL, got str.\n\nHaving to create a new type to call an API might give some developers\npause and encourage more caution, but it doesn't guarantee that\ndevelopers won't just turn a user controlled string into the new type,\nand pass it into the modified API anyway:\n\n query = f\"SELECT * FROM data WHERE user_id = f{user_id}\"\n execute(SafeSQL(query)) # No error!\n\nWe are back to square one with the problem of preventing arbitrary\ninputs to SafeSQL. This is not a theoretical concern either. Django uses\nthe above approach with SafeString and mark_safe. Issues such as\nCVE-2020-13596 show how this technique can fail.\n\nAlso note that this requires invasive changes to the source code\n(wrapping the query with SafeSQL) whereas LiteralString requires no such\nchanges. Users can remain oblivious to it as long as they pass in\nliteral strings to sensitive APIs.\n\nWhy not try to emulate Trusted Types?\n\nTrusted Types is a W3C specification for preventing DOM-based Cross Site\nScripting (XSS). XSS occurs when dangerous browser APIs accept raw\nuser-controlled strings. The specification modifies these APIs to accept\nonly the \"Trusted Types\" returned by designated sanitizing functions.\nThese sanitizing functions must take in a potentially malicious string\nand validate it or render it benign somehow, for example by verifying\nthat it is a valid URL or HTML-encoding it.\n\nIt can be tempting to assume porting the concept of Trusted Types to\nPython could solve the problem. The fundamental difference, however, is\nthat the output of a Trusted Types sanitizer is usually intended to not\nbe executable code. Thus it's easy to HTML encode the input, strip out\ndangerous tags, or otherwise render it inert. With a SQL query or shell\ncommand, the end result still needs to be executable code. There is no\nway to write a sanitizer that can reliably figure out which parts of an\ninput string are benign and which ones are potentially malicious.\n\nRuntime Checkable LiteralString\n\nThe LiteralString concept could be extended beyond static type checking\nto be a runtime checkable property of str objects. This would provide\nsome benefits, such as allowing frameworks to raise errors on dynamic\nstrings. Such runtime errors would be a more robust defense mechanism\nthan type errors, which can potentially be suppressed, ignored, or never\neven seen if the author does not use a type checker.\n\nThis extension to the LiteralString concept would dramatically increase\nthe scope of the proposal by requiring changes to one of the most\nfundamental types in Python. While runtime taint checking on strings,\nsimilar to Perl's taint, has been considered and attempted in the past,\nand others may consider it in the future, such extensions are out of\nscope for this PEP.\n\nRejected Names\n\nWe considered a variety of names for the literal string type and\nsolicited ideas on typing-sig. 
Some notable alternatives were:\n\n- Literal[str]: This is a natural extension of the Literal[\"foo\"] type\n name, but typing-sig objected that users could mistake this for the\n literal type of the str class.\n- LiteralStr: This is shorter than LiteralString but looks weird to\n the PEP authors.\n- LiteralDerivedString: This (along with MadeFromLiteralString) best\n captures the technical meaning of the type. It represents not just\n the type of literal expressions, such as \"foo\", but also that of\n expressions composed from literals, such as \"foo\" + \"bar\". However,\n both names seem wordy.\n- StringLiteral: Users might confuse this with the existing concept of\n \"string literals\" where the string exists as a syntactic token in\n the source code, whereas our concept is more general.\n- SafeString: While this comes close to our intended meaning, it may\n mislead users into thinking that the string has been sanitized in\n some way, perhaps by escaping HTML tags or shell-related special\n characters.\n- ConstantStr: This does not capture the idea of composing literal\n strings.\n- StaticStr: This suggests that the string is statically computable,\n i.e., computable without running the program, which is not true. The\n literal string may vary based on runtime flags, as seen in the\n Motivation examples.\n- LiteralOnly[str]: This has the advantage of being extensible to\n other literal types, such as bytes or int. However, we did not find\n the extensibility worth the loss of readability.\n\nOverall, there was no clear winner on typing-sig over a long period, so\nwe decided to tip the scales in favor of LiteralString.\n\nLiteralBytes\n\nWe could generalize literal byte types, such as Literal[b\"foo\"], to\nLiteralBytes. However, literal byte types are used much less frequently\nthan literal string types and we did not find much user demand for\nLiteralBytes, so we decided not to include it in this PEP. Others may,\nhowever, consider it in future PEPs.\n\nReference Implementation\n\nThis is implemented in Pyre v0.9.8 and is actively being used.\n\nThe implementation simply extends the type checker with LiteralString as\na supertype of literal string types.\n\nTo support composition via addition, join, etc., it was sufficient to\noverload the stubs for str in Pyre's copy of typeshed.\n\nAppendix A: Other Uses\n\nTo simplify the discussion and require minimal security knowledge, we\nfocused on SQL injections throughout the PEP. LiteralString, however,\ncan also be used to prevent many other kinds of injection\nvulnerabilities.\n\nCommand Injection\n\nAPIs such as subprocess.run accept a string which can be run as a shell\ncommand:\n\n subprocess.run(f\"echo 'Hello {name}'\", shell=True)\n\nIf user-controlled data is included in the command string, the code is\nvulnerable to \"command injection\"; i.e., an attacker can run malicious\ncommands. For example, a value of ' && rm -rf / # would result in the\nfollowing destructive command being run:\n\n echo 'Hello ' && rm -rf / #'\n\nThis vulnerability could be prevented by updating run to only accept\nLiteralString when used in shell=True mode. Here is one simplified stub:\n\n def run(command: LiteralString, *args: str, shell: bool=...): ...\n\nCross Site Scripting (XSS)\n\nMost popular Python web frameworks, such as Django, use a templating\nengine to produce HTML from user data. 
These templating languages\nauto-escape user data before inserting it into the HTML template and\nthus prevent cross site scripting (XSS) vulnerabilities.\n\nBut a common way to bypass auto-escaping and render HTML as-is is to use\nfunctions like mark_safe in Django or do_mark_safe in Jinja2, which\ncause XSS vulnerabilities:\n\n dangerous_string = django.utils.safestring.mark_safe(f\"\")\n return(dangerous_string)\n\nThis vulnerability could be prevented by updating mark_safe to only\naccept LiteralString:\n\n def mark_safe(s: LiteralString) -> str: ...\n\nServer Side Template Injection (SSTI)\n\nTemplating frameworks, such as Jinja, allow Python expressions which\nwill be evaluated and substituted into the rendered result:\n\n template_str = \"There are {{ len(values) }} values: {{ values }}\"\n template = jinja2.Template(template_str)\n template.render(values=[1, 2])\n # Result: \"There are 2 values: [1, 2]\"\n\nIf an attacker controls all or part of the template string, they can\ninsert expressions which execute arbitrary code and compromise the\napplication:\n\n malicious_str = \"{{''.__class__.__base__.__subclasses__()[408]('rm - rf /',shell=True)}}\"\n template = jinja2.Template(malicious_str)\n template.render()\n # Result: The shell command 'rm - rf /' is run\n\nTemplate injection exploits like this could be prevented by updating the\nTemplate API to only accept LiteralString:\n\n class Template:\n def __init__(self, source: LiteralString): ...\n\nLogging Format String Injection\n\nLogging frameworks often allow their input strings to contain formatting\ndirectives. At its worst, allowing users to control the logged string\nhas led to CVE-2021-44228 (colloquially known as log4shell), which has\nbeen described as the \"most critical vulnerability of the last decade\".\nWhile no Python frameworks are currently known to be vulnerable to a\nsimilar attack, the built-in logging framework does provide formatting\noptions which are vulnerable to Denial of Service attacks from\nexternally controlled logging strings. The following example illustrates\na simple denial of service scenario:\n\n external_string = \"%(foo)999999999s\"\n ...\n # Tries to add > 1GB of whitespace to the logged string:\n logger.info(f'Received: {external_string}', some_dict)\n\nThis kind of attack could be prevented by requiring that the format\nstring passed to the logger be a LiteralString and that all externally\ncontrolled data be passed separately as arguments (as proposed in Issue\n46200):\n\n def info(msg: LiteralString, *args: object) -> None:\n ...\n\nAppendix B: Limitations\n\nThere are a number of ways LiteralString could still fail to prevent\nusers from passing strings built from non-literal data to an API:\n\n1. If the developer does not use a type checker or does not add type\nannotations, then violations will go uncaught.\n\n2. cast(LiteralString, non_literal_string) could be used to lie to the\ntype checker and allow a dynamic string value to masquerade as a\nLiteralString. The same goes for a variable that has type Any.\n\n3. Comments such as # type: ignore could be used to ignore warnings\nabout non-literal strings.\n\n4. 
Trivial functions could be constructed to convert a str to a\nLiteralString:\n\n def make_literal(s: str) -> LiteralString:\n letters: Dict[str, LiteralString] = {\n \"A\": \"A\",\n \"B\": \"B\",\n ...\n }\n output: List[LiteralString] = [letters[c] for c in s]\n return \"\".join(output)\n\nWe could mitigate the above using linting, code review, etc., but\nultimately a clever, malicious developer attempting to circumvent the\nprotections offered by LiteralString will always succeed. The important\nthing to remember is that LiteralString is not intended to protect\nagainst malicious developers; it is meant to protect against benign\ndevelopers accidentally using sensitive APIs in a dangerous way (without\ngetting in their way otherwise).\n\nWithout LiteralString, the best enforcement tool API authors have is\ndocumentation, which is easily ignored and often not seen. With\nLiteralString, API misuse requires conscious thought and artifacts in\nthe code that reviewers and future developers can notice.\n\nAppendix C: str methods that preserve LiteralString\n\nThe str class has several methods that would benefit from LiteralString.\nFor example, users might expect \"hello\".capitalize() to have the type\nLiteralString similar to the other examples we have seen in the\nInferring LiteralString section. Inferring the type LiteralString is\ncorrect because the string is not an arbitrary user-supplied string - we\nknow that it has the type Literal[\"HELLO\"], which is compatible with\nLiteralString. In other words, the capitalize method preserves the\nLiteralString type. There are several other str methods that preserve\nLiteralString.\n\nWe propose updating the stub for str in typeshed so that the methods are\noverloaded with the LiteralString-preserving versions. This means type\ncheckers do not have to hardcode LiteralString behavior for each method.\nIt also lets us easily support new methods in the future by updating the\ntypeshed stub.\n\nFor example, to preserve literal types for the capitalize method, we\nwould change the stub as below:\n\n # before\n def capitalize(self) -> str: ...\n\n # after\n @overload\n def capitalize(self: LiteralString) -> LiteralString: ...\n @overload\n def capitalize(self) -> str: ...\n\nThe downside of changing the str stub is that the stub becomes more\ncomplicated and can make error messages harder to understand. Type\ncheckers may need to special-case str to make error messages\nunderstandable for users.\n\nBelow is an exhaustive list of str methods which, when called with\narguments of type LiteralString, must be treated as returning a\nLiteralString. If this PEP is accepted, we will update these method\nsignatures in typeshed:\n\n @overload\n def capitalize(self: LiteralString) -> LiteralString: ...\n @overload\n def capitalize(self) -> str: ...\n\n @overload\n def casefold(self: LiteralString) -> LiteralString: ...\n @overload\n def casefold(self) -> str: ...\n\n @overload\n def center(self: LiteralString, __width: SupportsIndex, __fillchar: LiteralString = ...) -> LiteralString: ...\n @overload\n def center(self, __width: SupportsIndex, __fillchar: str = ...) -> str: ...\n\n if sys.version_info >= (3, 8):\n @overload\n def expandtabs(self: LiteralString, tabsize: SupportsIndex = ...) -> LiteralString: ...\n @overload\n def expandtabs(self, tabsize: SupportsIndex = ...) -> str: ...\n\n else:\n @overload\n def expandtabs(self: LiteralString, tabsize: int = ...) -> LiteralString: ...\n @overload\n def expandtabs(self, tabsize: int = ...) 
-> str: ...\n\n @overload\n def format(self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString: ...\n @overload\n def format(self, *args: str, **kwargs: str) -> str: ...\n\n @overload\n def join(self: LiteralString, __iterable: Iterable[LiteralString]) -> LiteralString: ...\n @overload\n def join(self, __iterable: Iterable[str]) -> str: ...\n\n @overload\n def ljust(self: LiteralString, __width: SupportsIndex, __fillchar: LiteralString = ...) -> LiteralString: ...\n @overload\n def ljust(self, __width: SupportsIndex, __fillchar: str = ...) -> str: ...\n\n @overload\n def lower(self: LiteralString) -> LiteralString: ...\n @overload\n def lower(self) -> LiteralString: ...\n\n @overload\n def lstrip(self: LiteralString, __chars: LiteralString | None = ...) -> LiteralString: ...\n @overload\n def lstrip(self, __chars: str | None = ...) -> str: ...\n\n @overload\n def partition(self: LiteralString, __sep: LiteralString) -> tuple[LiteralString, LiteralString, LiteralString]: ...\n @overload\n def partition(self, __sep: str) -> tuple[str, str, str]: ...\n\n @overload\n def replace(self: LiteralString, __old: LiteralString, __new: LiteralString, __count: SupportsIndex = ...) -> LiteralString: ...\n @overload\n def replace(self, __old: str, __new: str, __count: SupportsIndex = ...) -> str: ...\n\n if sys.version_info >= (3, 9):\n @overload\n def removeprefix(self: LiteralString, __prefix: LiteralString) -> LiteralString: ...\n @overload\n def removeprefix(self, __prefix: str) -> str: ...\n\n @overload\n def removesuffix(self: LiteralString, __suffix: LiteralString) -> LiteralString: ...\n @overload\n def removesuffix(self, __suffix: str) -> str: ...\n\n @overload\n def rjust(self: LiteralString, __width: SupportsIndex, __fillchar: LiteralString = ...) -> LiteralString: ...\n @overload\n def rjust(self, __width: SupportsIndex, __fillchar: str = ...) -> str: ...\n\n @overload\n def rpartition(self: LiteralString, __sep: LiteralString) -> tuple[LiteralString, LiteralString, LiteralString]: ...\n @overload\n def rpartition(self, __sep: str) -> tuple[str, str, str]: ...\n\n @overload\n def rsplit(self: LiteralString, sep: LiteralString | None = ..., maxsplit: SupportsIndex = ...) -> list[LiteralString]: ...\n @overload\n def rsplit(self, sep: str | None = ..., maxsplit: SupportsIndex = ...) -> list[str]: ...\n\n @overload\n def rstrip(self: LiteralString, __chars: LiteralString | None = ...) -> LiteralString: ...\n @overload\n def rstrip(self, __chars: str | None = ...) -> str: ...\n\n @overload\n def split(self: LiteralString, sep: LiteralString | None = ..., maxsplit: SupportsIndex = ...) -> list[LiteralString]: ...\n @overload\n def split(self, sep: str | None = ..., maxsplit: SupportsIndex = ...) -> list[str]: ...\n\n @overload\n def splitlines(self: LiteralString, keepends: bool = ...) -> list[LiteralString]: ...\n @overload\n def splitlines(self, keepends: bool = ...) -> list[str]: ...\n\n @overload\n def strip(self: LiteralString, __chars: LiteralString | None = ...) -> LiteralString: ...\n @overload\n def strip(self, __chars: str | None = ...) 
-> str: ...\n\n @overload\n def swapcase(self: LiteralString) -> LiteralString: ...\n @overload\n def swapcase(self) -> str: ...\n\n @overload\n def title(self: LiteralString) -> LiteralString: ...\n @overload\n def title(self) -> str: ...\n\n @overload\n def upper(self: LiteralString) -> LiteralString: ...\n @overload\n def upper(self) -> str: ...\n\n @overload\n def zfill(self: LiteralString, __width: SupportsIndex) -> LiteralString: ...\n @overload\n def zfill(self, __width: SupportsIndex) -> str: ...\n\n @overload\n def __add__(self: LiteralString, __s: LiteralString) -> LiteralString: ...\n @overload\n def __add__(self, __s: str) -> str: ...\n\n @overload\n def __iter__(self: LiteralString) -> Iterator[str]: ...\n @overload\n def __iter__(self) -> Iterator[str]: ...\n\n @overload\n def __mod__(self: LiteralString, __x: Union[LiteralString, Tuple[LiteralString, ...]]) -> str: ...\n @overload\n def __mod__(self, __x: Union[str, Tuple[str, ...]]) -> str: ...\n\n @overload\n def __mul__(self: LiteralString, __n: SupportsIndex) -> LiteralString: ...\n @overload\n def __mul__(self, __n: SupportsIndex) -> str: ...\n\n @overload\n def __repr__(self: LiteralString) -> LiteralString: ...\n @overload\n def __repr__(self) -> str: ...\n\n @overload\n def __rmul__(self: LiteralString, n: SupportsIndex) -> LiteralString: ...\n @overload\n def __rmul__(self, n: SupportsIndex) -> str: ...\n\n @overload\n def __str__(self: LiteralString) -> LiteralString: ...\n @overload\n def __str__(self) -> str: ...\n\nAppendix D: Guidelines for using LiteralString in Stubs\n\nLibraries that do not contain type annotations within their source may\nspecify type stubs in Typeshed. Libraries written in other languages,\nsuch as those for machine learning, may also provide Python type stubs.\nThis means the type checker cannot verify that the type annotations\nmatch the source code and must trust the type stub. Thus, authors of\ntype stubs need to be careful when using LiteralString, since a function\nmay falsely appear to be safe when it is not.\n\nWe recommend the following guidelines for using LiteralString in stubs:\n\n- If the stub is for a pure function, we recommend using LiteralString\n in the return type of the function or of its overloads only if all\n the corresponding parameters have literal types (i.e., LiteralString\n or Literal[\"a\", \"b\"]).\n\n # OK\n @overload\n def my_transform(x: LiteralString, y: Literal[\"a\", \"b\"]) -> LiteralString: ...\n @overload\n def my_transform(x: str, y: str) -> str: ...\n\n # Not OK\n @overload\n def my_transform(x: LiteralString, y: str) -> LiteralString: ...\n @overload\n def my_transform(x: str, y: str) -> str: ...\n\n- If the stub is for a staticmethod, we recommend the same guideline\n as above.\n\n- If the stub is for any other kind of method, we recommend against\n using LiteralString in the return type of the method or any of its\n overloads. This is because, even if all the explicit parameters have\n type LiteralString, the object itself may be created using user data\n and thus the return type may be user-controlled.\n\n- If the stub is for a class attribute or global variable, we also\n recommend against using LiteralString because the untyped code may\n write arbitrary values to the attribute.\n\nHowever, we leave the final call to the library author. 
They may use\nLiteralString if they feel confident that the string returned by the\nmethod or function or the string stored in the attribute is guaranteed\nto have a literal type - i.e., the string is created by applying only\nliteral-preserving str operations to a string literal.\n\nNote that these guidelines do not apply to inline type annotations since\nthe type checker can verify that, say, a method returning LiteralString\ndoes in fact return an expression of that type.\n\nResources\n\nLiteral String Types in Scala\n\nScala uses Singleton as the supertype for singleton types, which\nincludes literal string types, such as \"foo\". Singleton is Scala's\ngeneralized analogue of this PEP's LiteralString.\n\nTamer Abdulradi showed how Scala's literal string types can be used for\n\"Preventing SQL injection at compile time\", Scala Days talk Literal\ntypes: What are they good for? (slides 52 to 68).\n\nThanks\n\nThanks to the following people for their feedback on the PEP:\n\nEdward Qiu, Jia Chen, Shannon Zhu, Gregory P. Smith, Никита Соболев, CAM\nGerlach, Arie Bovenberg, David Foster, and Shengye Wan\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\nPEP: 557 Title: Data Classes Author: Eric V. Smith \nStatus: Final Type: Standards Track Content-Type: text/x-rst Created:\n02-Jun-2017 Python-Version: 3.7 Post-History: 08-Sep-2017, 25-Nov-2017,\n30-Nov-2017, 01-Dec-2017, 02-Dec-2017, 06-Jan-2018, 04-Mar-2018\nResolution:\nhttps://mail.python.org/pipermail/python-dev/2017-December/151034.html\n\nNotice for Reviewers\n\nThis PEP and the initial implementation were drafted in a separate repo:\nhttps://github.com/ericvsmith/dataclasses. Before commenting in a public\nforum please at least read the discussion listed at the end of this PEP.\n\nAbstract\n\nThis PEP describes an addition to the standard library called Data\nClasses. Although they use a very different mechanism, Data Classes can\nbe thought of as \"mutable namedtuples with defaults\". Because Data\nClasses use normal class definition syntax, you are free to use\ninheritance, metaclasses, docstrings, user-defined methods, class\nfactories, and other Python class features.\n\nA class decorator is provided which inspects a class definition for\nvariables with type annotations as defined in PEP 526, \"Syntax for\nVariable Annotations\". In this document, such variables are called\nfields. Using these fields, the decorator adds generated method\ndefinitions to the class to support instance initialization, a repr,\ncomparison methods, and optionally other methods as described in the\nSpecification section. 
Such a class is called a Data Class, but there's\nreally nothing special about the class: the decorator adds generated\nmethods to the class and returns the same class it was given.\n\nAs an example:\n\n @dataclass\n class InventoryItem:\n '''Class for keeping track of an item in inventory.'''\n name: str\n unit_price: float\n quantity_on_hand: int = 0\n\n def total_cost(self) -> float:\n return self.unit_price * self.quantity_on_hand\n\nThe @dataclass decorator will add the equivalent of these methods to the\nInventoryItem class:\n\n def __init__(self, name: str, unit_price: float, quantity_on_hand: int = 0) -> None:\n self.name = name\n self.unit_price = unit_price\n self.quantity_on_hand = quantity_on_hand\n def __repr__(self):\n return f'InventoryItem(name={self.name!r}, unit_price={self.unit_price!r}, quantity_on_hand={self.quantity_on_hand!r})'\n def __eq__(self, other):\n if other.__class__ is self.__class__:\n return (self.name, self.unit_price, self.quantity_on_hand) == (other.name, other.unit_price, other.quantity_on_hand)\n return NotImplemented\n def __ne__(self, other):\n if other.__class__ is self.__class__:\n return (self.name, self.unit_price, self.quantity_on_hand) != (other.name, other.unit_price, other.quantity_on_hand)\n return NotImplemented\n def __lt__(self, other):\n if other.__class__ is self.__class__:\n return (self.name, self.unit_price, self.quantity_on_hand) < (other.name, other.unit_price, other.quantity_on_hand)\n return NotImplemented\n def __le__(self, other):\n if other.__class__ is self.__class__:\n return (self.name, self.unit_price, self.quantity_on_hand) <= (other.name, other.unit_price, other.quantity_on_hand)\n return NotImplemented\n def __gt__(self, other):\n if other.__class__ is self.__class__:\n return (self.name, self.unit_price, self.quantity_on_hand) > (other.name, other.unit_price, other.quantity_on_hand)\n return NotImplemented\n def __ge__(self, other):\n if other.__class__ is self.__class__:\n return (self.name, self.unit_price, self.quantity_on_hand) >= (other.name, other.unit_price, other.quantity_on_hand)\n return NotImplemented\n\nData Classes save you from writing and maintaining these methods.\n\nRationale\n\nThere have been numerous attempts to define classes which exist\nprimarily to store values which are accessible by attribute lookup. Some\nexamples include:\n\n- collections.namedtuple in the standard library.\n- typing.NamedTuple in the standard library.\n- The popular attrs[1] project.\n- George Sakkis' recordType recipe[2], a mutable data type inspired by\n collections.namedtuple.\n- Many example online recipes[3], packages[4], and questions[5]. David\n Beazley used a form of data classes as the motivating example in a\n PyCon 2013 metaclass talk[6].\n\nSo, why is this PEP needed?\n\nWith the addition of PEP 526, Python has a concise way to specify the\ntype of class members. This PEP leverages that syntax to provide a\nsimple, unobtrusive way to describe Data Classes. With two exceptions,\nthe specified attribute type annotation is completely ignored by Data\nClasses.\n\nNo base classes or metaclasses are used by Data Classes. Users of these\nclasses are free to use inheritance and metaclasses without any\ninterference from Data Classes. The decorated classes are truly \"normal\"\nPython classes. 
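A short sketch of what "normal" means in practice; the Point and NamedPoint names are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: int
        y: int

        def dist2(self) -> int:          # user-defined methods work as usual
            return self.x ** 2 + self.y ** 2

    class NamedPoint(Point):             # ordinary subclassing still applies
        pass

    p = NamedPoint(3, 4)
    assert isinstance(p, Point) and p.dist2() == 25
    assert type(Point) is type           # no custom metaclass is involved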
The Data Class decorator should not interfere with any\nusage of the class.\n\nOne main design goal of Data Classes is to support static type checkers.\nThe use of PEP 526 syntax is one example of this, but so is the design\nof the fields() function and the @dataclass decorator. Due to their very\ndynamic nature, some of the libraries mentioned above are difficult to\nuse with static type checkers.\n\nData Classes are not, and are not intended to be, a replacement\nmechanism for all of the above libraries. But being in the standard\nlibrary will allow many of the simpler use cases to instead leverage\nData Classes. Many of the libraries listed have different feature sets,\nand will of course continue to exist and prosper.\n\nWhere is it not appropriate to use Data Classes?\n\n- API compatibility with tuples or dicts is required.\n- Type validation beyond that provided by PEPs 484 and 526 is\n required, or value validation or conversion is required.\n\nSpecification\n\nAll of the functions described in this PEP will live in a module named\ndataclasses.\n\nA function dataclass which is typically used as a class decorator is\nprovided to post-process classes and add generated methods, described\nbelow.\n\nThe dataclass decorator examines the class to find fields. A field is\ndefined as any variable identified in __annotations__. That is, a\nvariable that has a type annotation. With two exceptions described\nbelow, none of the Data Class machinery examines the type specified in\nthe annotation.\n\nNote that __annotations__ is guaranteed to be an ordered mapping, in\nclass declaration order. The order of the fields in all of the generated\nmethods is the order in which they appear in the class.\n\nThe dataclass decorator will add various \"dunder\" methods to the class,\ndescribed below. If any of the added methods already exist on the class,\na TypeError will be raised. The decorator returns the same class that is\ncalled on: no new class is created.\n\nThe dataclass decorator is typically used with no parameters and no\nparentheses. However, it also supports the following logical signature:\n\n def dataclass(*, init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False)\n\nIf dataclass is used just as a simple decorator with no parameters, it\nacts as if it has the default values documented in this signature. That\nis, these three uses of @dataclass are equivalent:\n\n @dataclass\n class C:\n ...\n\n @dataclass()\n class C:\n ...\n\n @dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False)\n class C:\n ...\n\nThe parameters to dataclass are:\n\n- init: If true (the default), a __init__ method will be generated.\n\n- repr: If true (the default), a __repr__ method will be generated.\n The generated repr string will have the class name and the name and\n repr of each field, in the order they are defined in the class.\n Fields that are marked as being excluded from the repr are not\n included. For example:\n InventoryItem(name='widget', unit_price=3.0, quantity_on_hand=10).\n\n If the class already defines __repr__, this parameter is ignored.\n\n- eq: If true (the default), an __eq__ method will be generated. This\n method compares the class as if it were a tuple of its fields, in\n order. Both instances in the comparison must be of the identical\n type.\n\n If the class already defines __eq__, this parameter is ignored.\n\n- order: If true (the default is False), __lt__, __le__, __gt__, and\n __ge__ methods will be generated. 
These compare the class as if it\n were a tuple of its fields, in order. Both instances in the\n comparison must be of the identical type. If order is true and eq is\n false, a ValueError is raised.\n\n If the class already defines any of __lt__, __le__, __gt__, or\n __ge__, then ValueError is raised.\n\n- unsafe_hash: If False (the default), the __hash__ method is\n generated according to how eq and frozen are set.\n\n If eq and frozen are both true, Data Classes will generate a\n __hash__ method for you. If eq is true and frozen is false, __hash__\n will be set to None, marking it unhashable (which it is). If eq is\n false, __hash__ will be left untouched meaning the __hash__ method\n of the superclass will be used (if the superclass is object, this\n means it will fall back to id-based hashing).\n\n Although not recommended, you can force Data Classes to create a\n __hash__ method with unsafe_hash=True. This might be the case if\n your class is logically immutable but can nonetheless be mutated.\n This is a specialized use case and should be considered carefully.\n\n If a class already has an explicitly defined __hash__ the behavior\n when adding __hash__ is modified. An explicitly defined __hash__ is\n defined when:\n\n - __eq__ is defined in the class and __hash__ is defined with\n any value other than None.\n - __eq__ is defined in the class and any non-None __hash__ is\n defined.\n - __eq__ is not defined on the class, and any __hash__ is\n defined.\n\n If unsafe_hash is true and an explicitly defined __hash__ is\n present, then ValueError is raised.\n\n If unsafe_hash is false and an explicitly defined __hash__ is\n present, then no __hash__ is added.\n\n See the Python documentation[7] for more information.\n\n- frozen: If true (the default is False), assigning to fields will\n generate an exception. This emulates read-only frozen instances. If\n either __getattr__ or __setattr__ is defined in the class, then\n ValueError is raised. See the discussion below.\n\nfields may optionally specify a default value, using normal Python\nsyntax:\n\n @dataclass\n class C:\n a: int # 'a' has no default value\n b: int = 0 # assign a default value for 'b'\n\nIn this example, both a and b will be included in the added __init__\nmethod, which will be defined as:\n\n def __init__(self, a: int, b: int = 0):\n\nTypeError will be raised if a field without a default value follows a\nfield with a default value. This is true either when this occurs in a\nsingle class, or as a result of class inheritance.\n\nFor common and simple use cases, no other functionality is required.\nThere are, however, some Data Class features that require additional\nper-field information. To satisfy this need for additional information,\nyou can replace the default field value with a call to the provided\nfield() function. The signature of field() is:\n\n def field(*, default=MISSING, default_factory=MISSING, repr=True,\n hash=None, init=True, compare=True, metadata=None)\n\nThe MISSING value is a sentinel object used to detect if the default and\ndefault_factory parameters are provided. 
This sentinel is used because\nNone is a valid value for default.\n\nThe parameters to field() are:\n\n- default: If provided, this will be the default value for this field.\n This is needed because the field call itself replaces the normal\n position of the default value.\n\n- default_factory: If provided, it must be a zero-argument callable\n that will be called when a default value is needed for this field.\n Among other purposes, this can be used to specify fields with\n mutable default values, as discussed below. It is an error to\n specify both default and default_factory.\n\n- init: If true (the default), this field is included as a parameter\n to the generated __init__ method.\n\n- repr: If true (the default), this field is included in the string\n returned by the generated __repr__ method.\n\n- compare: If True (the default), this field is included in the\n generated equality and comparison methods (__eq__, __gt__, et al.).\n\n- hash: This can be a bool or None. If True, this field is included in\n the generated __hash__ method. If None (the default), use the value\n of compare: this would normally be the expected behavior. A field\n should be considered in the hash if it's used for comparisons.\n Setting this value to anything other than None is discouraged.\n\n One possible reason to set hash=False but compare=True would be if a\n field is expensive to compute a hash value for, that field is needed\n for equality testing, and there are other fields that contribute to\n the type's hash value. Even if a field is excluded from the hash, it\n will still be used for comparisons.\n\n- metadata: This can be a mapping or None. None is treated as an empty\n dict. This value is wrapped in types.MappingProxyType to make it\n read-only, and exposed on the Field object. It is not used at all by\n Data Classes, and is provided as a third-party extension mechanism.\n Multiple third-parties can each have their own key, to use as a\n namespace in the metadata.\n\nIf the default value of a field is specified by a call to field(), then\nthe class attribute for this field will be replaced by the specified\ndefault value. If no default is provided, then the class attribute will\nbe deleted. The intent is that after the dataclass decorator runs, the\nclass attributes will all contain the default values for the fields,\njust as if the default value itself were specified. For example, after:\n\n @dataclass\n class C:\n x: int\n y: int = field(repr=False)\n z: int = field(repr=False, default=10)\n t: int = 20\n\nThe class attribute C.z will be 10, the class attribute C.t will be 20,\nand the class attributes C.x and C.y will not be set.\n\nField objects\n\nField objects describe each defined field. These objects are created\ninternally, and are returned by the fields() module-level method (see\nbelow). Users should never instantiate a Field object directly. Its\ndocumented attributes are:\n\n- name: The name of the field.\n- type: The type of the field.\n- default, default_factory, init, repr, hash, compare, and metadata\n have the identical meaning and values as they do in the field()\n declaration.\n\nOther attributes may exist, but they are private and must not be\ninspected or relied on.\n\npost-init processing\n\nThe generated __init__ code will call a method named __post_init__, if\nit is defined on the class. 
It will be called as self.__post_init__().\nIf no __init__ method is generated, then __post_init__ will not\nautomatically be called.\n\nAmong other uses, this allows for initializing field values that depend\non one or more other fields. For example:\n\n @dataclass\n class C:\n a: float\n b: float\n c: float = field(init=False)\n\n def __post_init__(self):\n self.c = self.a + self.b\n\nSee the section below on init-only variables for ways to pass parameters\nto __post_init__(). Also see the warning about how replace() handles\ninit=False fields.\n\nClass variables\n\nOne place where dataclass actually inspects the type of a field is to\ndetermine if a field is a class variable as defined in PEP 526. It does\nthis by checking if the type of the field is typing.ClassVar. If a field\nis a ClassVar, it is excluded from consideration as a field and is\nignored by the Data Class mechanisms. For more discussion, see[8]. Such\nClassVar pseudo-fields are not returned by the module-level fields()\nfunction.\n\nInit-only variables\n\nThe other place where dataclass inspects a type annotation is to\ndetermine if a field is an init-only variable. It does this by seeing if\nthe type of a field is of type dataclasses.InitVar. If a field is an\nInitVar, it is considered a pseudo-field called an init-only field. As\nit is not a true field, it is not returned by the module-level fields()\nfunction. Init-only fields are added as parameters to the generated\n__init__ method, and are passed to the optional __post_init__ method.\nThey are not otherwise used by Data Classes.\n\nFor example, suppose a field will be initialized from a database, if a\nvalue is not provided when creating the class:\n\n @dataclass\n class C:\n i: int\n j: int = None\n database: InitVar[DatabaseType] = None\n\n def __post_init__(self, database):\n if self.j is None and database is not None:\n self.j = database.lookup('j')\n\n c = C(10, database=my_database)\n\nIn this case, fields() will return Field objects for i and j, but not\nfor database.\n\nFrozen instances\n\nIt is not possible to create truly immutable Python objects. However, by\npassing frozen=True to the @dataclass decorator you can emulate\nimmutability. In that case, Data Classes will add __setattr__ and\n__delattr__ methods to the class. These methods will raise a\nFrozenInstanceError when invoked.\n\nThere is a tiny performance penalty when using frozen=True: __init__\ncannot use simple assignment to initialize fields, and must use\nobject.__setattr__.\n\nInheritance\n\nWhen the Data Class is being created by the @dataclass decorator, it\nlooks through all of the class's base classes in reverse MRO (that is,\nstarting at object) and, for each Data Class that it finds, adds the\nfields from that base class to an ordered mapping of fields. After all\nof the base class fields are added, it adds its own fields to the\nordered mapping. All of the generated methods will use this combined,\ncalculated ordered mapping of fields. Because the fields are in\ninsertion order, derived classes override base classes. An example:\n\n @dataclass\n class Base:\n x: Any = 15.0\n y: int = 0\n\n @dataclass\n class C(Base):\n z: int = 10\n x: int = 15\n\nThe final list of fields is, in order, x, y, z. 
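For illustration (this check is not part of the original PEP text), the merged field order and the overridden type of x can be confirmed with the module-level fields() helper described below; the sketch re-declares the example classes so it runs on its own:

    from dataclasses import dataclass, fields
    from typing import Any

    @dataclass
    class Base:
        x: Any = 15.0
        y: int = 0

    @dataclass
    class C(Base):
        z: int = 10
        x: int = 15

    # The re-declaration of x in C replaces the inherited type and default
    # value but keeps x's original position in the field order.
    print([(f.name, f.type) for f in fields(C)])
    # [('x', <class 'int'>), ('y', <class 'int'>), ('z', <class 'int'>)]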
The final type of x is\nint, as specified in class C.\n\nThe generated __init__ method for C will look like:\n\n def __init__(self, x: int = 15, y: int = 0, z: int = 10):\n\nDefault factory functions\n\nIf a field specifies a default_factory, it is called with zero arguments\nwhen a default value for the field is needed. For example, to create a\nnew instance of a list, use:\n\n l: list = field(default_factory=list)\n\nIf a field is excluded from __init__ (using init=False) and the field\nalso specifies default_factory, then the default factory function will\nalways be called from the generated __init__ function. This happens\nbecause there is no other way to give the field an initial value.\n\nMutable default values\n\nPython stores default member variable values in class attributes.\nConsider this example, not using Data Classes:\n\n class C:\n x = []\n def add(self, element):\n self.x += element\n\n o1 = C()\n o2 = C()\n o1.add(1)\n o2.add(2)\n assert o1.x == [1, 2]\n assert o1.x is o2.x\n\nNote that the two instances of class C share the same class variable x,\nas expected.\n\nUsing Data Classes, if this code was valid:\n\n @dataclass\n class D:\n x: List = []\n def add(self, element):\n self.x += element\n\nit would generate code similar to:\n\n class D:\n x = []\n def __init__(self, x=x):\n self.x = x\n def add(self, element):\n self.x += element\n\n assert D().x is D().x\n\nThis has the same issue as the original example using class C. That is,\ntwo instances of class D that do not specify a value for x when creating\na class instance will share the same copy of x. Because Data Classes\njust use normal Python class creation they also share this problem.\nThere is no general way for Data Classes to detect this condition.\nInstead, Data Classes will raise a TypeError if it detects a default\nparameter of type list, dict, or set. This is a partial solution, but it\ndoes protect against many common errors. See Automatically support\nmutable default values in the Rejected Ideas section for more details.\n\nUsing default factory functions is a way to create new instances of\nmutable types as default values for fields:\n\n @dataclass\n class D:\n x: list = field(default_factory=list)\n\n assert D().x is not D().x\n\nModule level helper functions\n\n- fields(class_or_instance): Returns a tuple of Field objects that\n define the fields for this Data Class. Accepts either a Data Class,\n or an instance of a Data Class. Raises ValueError if not passed a\n Data Class or instance of one. Does not return pseudo-fields which\n are ClassVar or InitVar.\n\n- asdict(instance, *, dict_factory=dict): Converts the Data Class\n instance to a dict (by using the factory function dict_factory).\n Each Data Class is converted to a dict of its fields, as name:value\n pairs. Data Classes, dicts, lists, and tuples are recursed into. For\n example:\n\n @dataclass\n class Point:\n x: int\n y: int\n\n @dataclass\n class C:\n l: List[Point]\n\n p = Point(10, 20)\n assert asdict(p) == {'x': 10, 'y': 20}\n\n c = C([Point(0, 0), Point(10, 4)])\n assert asdict(c) == {'l': [{'x': 0, 'y': 0}, {'x': 10, 'y': 4}]}\n\n Raises TypeError if instance is not a Data Class instance.\n\n- astuple(*, tuple_factory=tuple): Converts the Data Class instance to\n a tuple (by using the factory function tuple_factory). Each Data\n Class is converted to a tuple of its field values. 
Data Classes,\n dicts, lists, and tuples are recursed into.\n\n Continuing from the previous example:\n\n assert astuple(p) == (10, 20)\n assert astuple(c) == ([(0, 0), (10, 4)],)\n\n Raises TypeError if instance is not a Data Class instance.\n\n- make_dataclass(cls_name, fields, *, bases=(), namespace=None):\n Creates a new Data Class with name cls_name, fields as defined in\n fields, base classes as given in bases, and initialized with a\n namespace as given in namespace. fields is an iterable whose\n elements are either name, (name, type), or (name, type, Field). If\n just name is supplied, typing.Any is used for type. This function is\n not strictly required, because any Python mechanism for creating a\n new class with __annotations__ can then apply the dataclass function\n to convert that class to a Data Class. This function is provided as\n a convenience. For example:\n\n C = make_dataclass('C',\n [('x', int),\n 'y',\n ('z', int, field(default=5))],\n namespace={'add_one': lambda self: self.x + 1})\n\n Is equivalent to:\n\n @dataclass\n class C:\n x: int\n y: 'typing.Any'\n z: int = 5\n\n def add_one(self):\n return self.x + 1\n\n- replace(instance, **changes): Creates a new object of the same type\n of instance, replacing fields with values from changes. If instance\n is not a Data Class, raises TypeError. If values in changes do not\n specify fields, raises TypeError.\n\n The newly returned object is created by calling the __init__ method\n of the Data Class. This ensures that __post_init__, if present, is\n also called.\n\n Init-only variables without default values, if any exist, must be\n specified on the call to replace so that they can be passed to\n __init__ and __post_init__.\n\n It is an error for changes to contain any fields that are defined as\n having init=False. A ValueError will be raised in this case.\n\n Be forewarned about how init=False fields work during a call to\n replace(). They are not copied from the source object, but rather\n are initialized in __post_init__(), if they're initialized at all.\n It is expected that init=False fields will be rarely and judiciously\n used. If they are used, it might be wise to have alternate class\n constructors, or perhaps a custom replace() (or similarly named)\n method which handles instance copying.\n\n- is_dataclass(class_or_instance): Returns True if its parameter is a\n dataclass or an instance of one, otherwise returns False.\n\n If you need to know if a class is an instance of a dataclass (and\n not a dataclass itself), then add a further check for\n not isinstance(obj, type):\n\n def is_dataclass_instance(obj):\n return is_dataclass(obj) and not isinstance(obj, type)\n\nDiscussion\n\npython-ideas discussion\n\nThis discussion started on python-ideas[9] and was moved to a GitHub\nrepo[10] for further discussion. As part of this discussion, we made the\ndecision to use PEP 526 syntax to drive the discovery of fields.\n\nSupport for automatically setting __slots__?\n\nAt least for the initial release, __slots__ will not be supported.\n__slots__ needs to be added at class creation time. The Data Class\ndecorator is called after the class is created, so in order to add\n__slots__ the decorator would have to create a new class, set __slots__,\nand return it. Because this behavior is somewhat surprising, the initial\nversion of Data Classes will not support automatically setting\n__slots__. 
There are a number of workarounds:\n\n- Manually add __slots__ in the class definition.\n- Write a function (which could be used as a decorator) that inspects\n the class using fields() and creates a new class with __slots__ set.\n\nFor more discussion, see[11].\n\nWhy not just use namedtuple?\n\n- Any namedtuple can be accidentally compared to any other with the\n same number of fields. For example:\n Point3D(2017, 6, 2) == Date(2017, 6, 2). With Data Classes, this\n would return False.\n\n- A namedtuple can be accidentally compared to a tuple. For example,\n Point2D(1, 10) == (1, 10). With Data Classes, this would return\n False.\n\n- Instances are always iterable, which can make it difficult to add\n fields. If a library defines:\n\n Time = namedtuple('Time', ['hour', 'minute'])\n def get_time():\n return Time(12, 0)\n\n Then if a user uses this code as:\n\n hour, minute = get_time()\n\n then it would not be possible to add a second field to Time without\n breaking the user's code.\n\n- No option for mutable instances.\n\n- Cannot specify default values.\n\n- Cannot control which fields are used for __init__, __repr__, etc.\n\n- Cannot support combining fields by inheritance.\n\nWhy not just use typing.NamedTuple?\n\nFor classes with statically defined fields, it does support similar\nsyntax to Data Classes, using type annotations. This produces a\nnamedtuple, so it shares namedtuples benefits and some of its downsides.\nData Classes, unlike typing.NamedTuple, support combining fields via\ninheritance.\n\nWhy not just use attrs?\n\n- attrs moves faster than could be accommodated if it were moved in to\n the standard library.\n- attrs supports additional features not being proposed here:\n validators, converters, metadata, etc. Data Classes makes a tradeoff\n to achieve simplicity by not implementing these features.\n\nFor more discussion, see[12].\n\npost-init parameters\n\nIn an earlier version of this PEP before InitVar was added, the\npost-init function __post_init__ never took any parameters.\n\nThe normal way of doing parameterized initialization (and not just with\nData Classes) is to provide an alternate classmethod constructor. For\nexample:\n\n @dataclass\n class C:\n x: int\n\n @classmethod\n def from_file(cls, filename):\n with open(filename) as fl:\n file_value = int(fl.read())\n return C(file_value)\n\n c = C.from_file('file.txt')\n\nBecause the __post_init__ function is the last thing called in the\ngenerated __init__, having a classmethod constructor (which can also\nexecute code immediately after constructing the object) is functionally\nequivalent to being able to pass parameters to a __post_init__ function.\n\nWith InitVars, __post_init__ functions can now take parameters. They are\npassed first to __init__ which passes them to __post_init__ where user\ncode can use them as needed.\n\nThe only real difference between alternate classmethod constructors and\nInitVar pseudo-fields is in regards to required non-field parameters\nduring object creation. With InitVars, using __init__ and the\nmodule-level replace() function InitVars must always be specified.\nConsider the case where a context object is needed to create an\ninstance, but isn't stored as a field. With alternate classmethod\nconstructors the context parameter is always optional, because you could\nstill create the object by going through __init__ (unless you suppress\nits creation). 
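To make the contrast concrete, here is a sketch that is not part of the PEP; the Config type and all field names are invented for illustration:

    from dataclasses import dataclass, field, replace, InitVar

    class Config:
        # Hypothetical context object, needed only while building an instance.
        def lookup(self, key):
            return 42

    # Style 1: an InitVar pseudo-field.  The context must be supplied on
    # every path that goes through __init__, including replace().
    @dataclass
    class WithInitVar:
        x: int = field(init=False)
        config: InitVar[Config]

        def __post_init__(self, config):
            self.x = config.lookup('x')

    # Style 2: an alternate classmethod constructor.  Plain __init__ still
    # works, and the context is needed only when the classmethod is used.
    @dataclass
    class WithClassmethod:
        x: int = 0

        @classmethod
        def from_config(cls, config):
            return cls(config.lookup('x'))

    a = WithInitVar(Config())            # context required here ...
    a2 = replace(a, config=Config())     # ... and again when using replace()
    b = WithClassmethod.from_config(Config())
    c = WithClassmethod()                # no context object needed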
Which approach is more appropriate will be\napplication-specific, but both approaches are supported.\n\nAnother reason for using InitVar fields is that the class author can\ncontrol the order of __init__ parameters. This is especially important\nwith regular fields and InitVar fields that have default values, as all\nfields with defaults must come after all fields without defaults. A\nprevious design had all init-only fields coming after regular fields.\nThis meant that if any field had a default value, then all init-only\nfields would have to have default values, too.\n\nasdict and astuple function names\n\nThe names of the module-level helper functions asdict() and astuple()\nare arguably not PEP 8 compliant, and should be as_dict() and\nas_tuple(), respectively. However, after discussion[13] it was decided\nto keep consistency with namedtuple._asdict() and attr.asdict().\n\nRejected ideas\n\nCopying init=False fields after new object creation in replace()\n\nFields that are init=False are by definition not passed to __init__, but\ninstead are initialized with a default value, or by calling a default\nfactory function in __init__, or by code in __post_init__.\n\nA previous version of this PEP specified that init=False fields would be\ncopied from the source object to the newly created object after __init__\nreturned, but that was deemed to be inconsistent with using __init__ and\n__post_init__ to initialize the new object. For example, consider this\ncase:\n\n @dataclass\n class Square:\n length: float\n area: float = field(init=False, default=0.0)\n\n def __post_init__(self):\n self.area = self.length * self.length\n\n s1 = Square(1.0)\n s2 = replace(s1, length=2.0)\n\nIf init=False fields were copied from the source to the destination\nobject after __post_init__ is run, then s2 would end up being\nSquare(length=2.0, area=1.0), instead of the correct\nSquare(length=2.0, area=4.0).\n\nAutomatically support mutable default values\n\nOne proposal was to automatically copy defaults, so that if a literal\nlist [] was a default value, each instance would get a new list. There\nwere undesirable side effects of this decision, so the final decision is\nto disallow the 3 known built-in mutable types: list, dict, and set. For\na complete discussion of this and other options, see[14].\n\nExamples\n\nCustom __init__ method\n\nSometimes the generated __init__ method does not suffice.
For example,\nsuppose you wanted to have an object to store *args and **kwargs:\n\n @dataclass(init=False)\n class ArgHolder:\n args: List[Any]\n kwargs: Mapping[Any, Any]\n\n def __init__(self, *args, **kwargs):\n self.args = args\n self.kwargs = kwargs\n\n a = ArgHolder(1, 2, three=3)\n\nA complicated example\n\nThis code exists in a closed source project:\n\n class Application:\n def __init__(self, name, requirements, constraints=None, path='', executable_links=None, executables_dir=()):\n self.name = name\n self.requirements = requirements\n self.constraints = {} if constraints is None else constraints\n self.path = path\n self.executable_links = [] if executable_links is None else executable_links\n self.executables_dir = executables_dir\n self.additional_items = []\n\n def __repr__(self):\n return f'Application({self.name!r},{self.requirements!r},{self.constraints!r},{self.path!r},{self.executable_links!r},{self.executables_dir!r},{self.additional_items!r})'\n\nThis can be replaced by:\n\n @dataclass\n class Application:\n name: str\n requirements: List[Requirement]\n constraints: Dict[str, str] = field(default_factory=dict)\n path: str = ''\n executable_links: List[str] = field(default_factory=list)\n executable_dir: Tuple[str] = ()\n additional_items: List[str] = field(init=False, default_factory=list)\n\nThe Data Class version is more declarative, has less code, supports\ntyping, and includes the other generated functions.\n\nAcknowledgements\n\nThe following people provided invaluable input during the development of\nthis PEP and code: Ivan Levkivskyi, Guido van Rossum, Hynek Schlawack,\nRaymond Hettinger, and Lisa Roach. I thank them for their time and\nexpertise.\n\nA special mention must be made about the attrs project. It was a true\ninspiration for this PEP, and I respect the design decisions they made.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] attrs project on github (https://github.com/python-attrs/attrs)\n\n[2] George Sakkis' recordType recipe\n(http://code.activestate.com/recipes/576555-records/)\n\n[3] DictDotLookup recipe\n(http://code.activestate.com/recipes/576586-dot-style-nested-lookups-over-dictionary-based-dat/)\n\n[4] attrdict package (https://pypi.python.org/pypi/attrdict)\n\n[5] StackOverflow question about data container classes\n(https://stackoverflow.com/questions/3357581/using-python-class-as-a-data-container)\n\n[6] David Beazley metaclass talk featuring data classes\n(https://www.youtube.com/watch?v=sPiWg5jSoZI)\n\n[7] Python documentation for __hash__\n(https://docs.python.org/3/reference/datamodel.html#object.__hash__)\n\n[8] ClassVar discussion in PEP 526 <526#class-and-instance-variable-annotations>\n\n[9] Start of python-ideas discussion\n(https://mail.python.org/pipermail/python-ideas/2017-May/045618.html)\n\n[10] GitHub repo where discussions and initial development took place\n(https://github.com/ericvsmith/dataclasses)\n\n[11] Support __slots__?\n(https://github.com/ericvsmith/dataclasses/issues/28)\n\n[12] why not just attrs?\n(https://github.com/ericvsmith/dataclasses/issues/19)\n\n[13] PEP 8 names for asdict and astuple\n(https://github.com/ericvsmith/dataclasses/issues/110)\n\n[14] Copying mutable 
defaults\n(https://github.com/ericvsmith/dataclasses/issues/3)\n\nPEP: 269 Title: Pgen Module for Python Author: Jonathan Riehl\n Status: Deferred Type: Standards Track\nContent-Type: text/x-rst Created: 24-Aug-2001 Python-Version: 2.2\nPost-History:\n\nAbstract\n\nMuch like the parser module exposes the Python parser, this PEP proposes\nthat the parser generator used to create the Python parser, pgen, be\nexposed as a module in Python.\n\nRationale\n\nThrough the course of Pythonic history, there have been numerous\ndiscussions about the creation of a Python compiler[1]. These have\nresulted in several implementations of Python parsers, most notably the\nparser module currently provided in the Python standard library[2] and\nJeremy Hylton's compiler module[3]. However, while multiple language\nchanges have been proposed [4][5], experimentation with the Python\nsyntax has lacked the benefit of a Python binding to the actual parser\ngenerator used to build Python.\n\nBy providing a Python wrapper analogous to Fred Drake Jr.'s parser\nwrapper, but targeted at the pgen library, the following assertions are\nmade:\n\n1. Reference implementations of syntax changes will be easier to\n develop. Currently, a reference implementation of a syntax change\n would require the developer to use the pgen tool from the command\n line. The resulting parser data structure would then either have to\n be reworked to interface with a custom CPython implementation, or\n wrapped as a C extension module.\n2. Reference implementations of syntax changes will be easier to\n distribute. Since the parser generator will be available in Python,\n it should follow that the resulting parser will be accessible from\n Python. Therefore, reference implementations should be available as\n pure Python code, versus using custom versions of the existing\n CPython distribution, or as compilable extension modules.\n3. Reference implementations of syntax changes will be easier to\n discuss with a larger audience. This somewhat falls out of the\n second assertion, since the community of Python users is most likely\n larger than the community of CPython developers.\n4. Development of small languages in Python will be further enhanced,\n since the additional module will be a fully functional LL(1) parser\n generator.\n\nSpecification\n\nThe proposed module will be called pgen. The pgen module will contain\nthe following functions:\n\nparseGrammarFile (fileName) -> AST\n\nThe parseGrammarFile() function will read the file pointed to by\nfileName and create an AST object. The AST nodes will contain the\nnonterminal, numeric values of the parser generator meta-grammar. The\noutput AST will be an instance of the AST extension class as provided by\nthe parser module.
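For illustration only: PEP 269 was deferred and the module was never added to Python, but under the API proposed in this section (including the functions described just below), a grammar experiment might have been driven roughly like this, with the grammar file name and start-symbol name invented for the example:

    import pgen  # hypothetical module proposed by this PEP

    # Parse an experimental copy of Python's Grammar file into an AST of
    # meta-grammar nonterminals.
    grammar_ast = pgen.parseGrammarFile('Grammar.experimental')

    # Build the LL(1) parser tables (a DFA object) from that AST.
    dfa = pgen.buildParser(grammar_ast)

    # Map nonterminal names to their numeric codes, then parse source text
    # with the generated parser, starting from a chosen symbol.
    symbols = pgen.stringToSymbolMap(dfa)
    tree = pgen.parseString('x = 1\n', dfa, symbols['file_input'])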
Syntax errors in the input file will cause the\nSyntaxError exception to be raised.\n\nparseGrammarString (text) -> AST\n\nThe parseGrammarString() function will follow the semantics of the\nparseGrammarFile(), but accept the grammar text as a string for input,\nas opposed to the file name.\n\nbuildParser (grammarAst) -> DFA\n\nThe buildParser() function will accept an AST object for input and\nreturn a DFA (deterministic finite automaton) data structure. The DFA\ndata structure will be a C extension class, much like the AST structure\nis provided in the parser module. If the input AST does not conform to\nthe nonterminal codes defined for the pgen meta-grammar, buildParser()\nwill throw a ValueError exception.\n\nparseFile (fileName, dfa, start) -> AST\n\nThe parseFile() function will essentially be a wrapper for the\nPyParser_ParseFile() C API function. The wrapper code will accept the\nDFA C extension class, and the file name. An AST instance that conforms\nto the lexical values in the token module and the nonterminal values\ncontained in the DFA will be output.\n\nparseString (text, dfa, start) -> AST\n\nThe parseString() function will operate in a similar fashion to the\nparseFile() function, but accept the parse text as an argument. Much\nlike parseFile() will wrap the PyParser_ParseFile() C API function,\nparseString() will wrap the PyParser_ParseString() function.\n\nsymbolToStringMap (dfa) -> dict\n\nThe symbolToStringMap() function will accept a DFA instance and return a\ndictionary object that maps from the DFA's numeric values for its\nnonterminals to the string names of the nonterminals as found in the\noriginal grammar specification for the DFA.\n\nstringToSymbolMap (dfa) -> dict\n\nThe stringToSymbolMap() function output a dictionary mapping the\nnonterminal names of the input DFA to their corresponding numeric\nvalues.\n\nExtra credit will be awarded if the map generation functions and parsing\nfunctions are also methods of the DFA extension class.\n\nImplementation Plan\n\nA cunning plan has been devised to accomplish this enhancement:\n\n1. Rename the pgen functions to conform to the CPython naming\n standards. This action may involve adding some header files to the\n Include subdirectory.\n2. Move the pgen C modules in the Makefile.pre.in from unique pgen\n elements to the Python C library.\n3. Make any needed changes to the parser module so the AST extension\n class understands that there are AST types it may not understand.\n Cursory examination of the AST extension class shows that it keeps\n track of whether the tree is a suite or an expression.\n4. Code an additional C module in the Modules directory. The C\n extension module will implement the DFA extension class and the\n functions outlined in the previous section.\n5. Add the new module to the build process. Black magic, indeed.\n\nLimitations\n\nUnder this proposal, would be designers of Python 3000 will still be\nconstrained to Python's lexical conventions. The addition, subtraction\nor modification of the Python lexer is outside the scope of this PEP.\n\nReference Implementation\n\nNo reference implementation is currently provided. 
A patch was provided\nat some point in\nhttp://sourceforge.net/tracker/index.php?func=detail&aid=599331&group_id=5470&atid=305470\nbut that patch is no longer maintained.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] The (defunct) Python Compiler-SIG\nhttp://www.python.org/sigs/compiler-sig/\n\n[2] Parser Module Documentation\nhttp://docs.python.org/library/parser.html\n\n[3] Hylton, Jeremy. http://docs.python.org/library/compiler.html\n\n[4] Pelletier, Michel. \"Python Interface Syntax\", PEP 245\n\n[5] The Python Types-SIG http://www.python.org/sigs/types-sig/"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.775991"},"created":{"kind":"timestamp","value":"2001-08-24T00:00:00","string":"2001-08-24T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0269/\",\n \"authors\": [\n \"Jonathan Riehl\"\n ],\n \"pep_number\": \"0269\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":58,"cells":{"id":{"kind":"string","value":"0271"},"text":{"kind":"string","value":"PEP: 271 Title: Prefixing sys.path by command line option Author:\nFrédéric B. Giacometti Status: Rejected Type:\nStandards Track Content-Type: text/x-rst Created: 15-Aug-2001\nPython-Version: 2.2 Post-History:\n\nAbstract\n\nAt present, setting the PYTHONPATH environment variable is the only\nmethod for defining additional Python module search directories.\n\nThis PEP introduces the '-P' valued option to the python command as an\nalternative to PYTHONPATH.\n\nRationale\n\nOn Unix:\n\n python -P $SOMEVALUE\n\nwill be equivalent to:\n\n env PYTHONPATH=$SOMEVALUE python\n\nOn Windows 2K:\n\n python -P %SOMEVALUE%\n\nwill (almost) be equivalent to:\n\n set __PYTHONPATH=%PYTHONPATH% && set PYTHONPATH=%SOMEVALUE%\\\n && python && set PYTHONPATH=%__PYTHONPATH%\n\nOther Information\n\nThis option is equivalent to the 'java -classpath' option.\n\nWhen to use this option\n\nThis option is intended to ease and make more robust the use of Python\nin test or build scripts, for instance.\n\nReference Implementation\n\nA patch implementing this is available from SourceForge:\n\n http://sourceforge.net/tracker/download.php?group_id=5470&atid=305470&file_id=6916&aid=429614\n\nwith the patch discussion at:\n\n http://sourceforge.net/tracker/?func=detail&atid=305470&aid=429614&group_id=5470\n\nCopyright\n\nThis document has been placed in the public domain."},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.780171"},"created":{"kind":"timestamp","value":"2001-08-15T00:00:00","string":"2001-08-15T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0271/\",\n \"authors\": [\n \"Frédéric B. 
Giacometti\"\n ],\n \"pep_number\": \"0271\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":59,"cells":{"id":{"kind":"string","value":"0682"},"text":{"kind":"string","value":"PEP: 682 Title: Format Specifier for Signed Zero Author: John Belmonte\n Sponsor: Mark Dickinson \nPEP-Delegate: Mark Dickinson Discussions-To:\nhttps://discuss.python.org/t/pep-682-format-specifier-for-signed-zero/13596\nStatus: Final Type: Standards Track Content-Type: text/x-rst Created:\n29-Jan-2022 Python-Version: 3.11 Post-History: 08-Feb-2022 Resolution:\nhttps://discuss.python.org/t/accepting-pep-682-format-specifier-for-signed-zero/14088\n\nAbstract\n\nThough float and Decimal types can represent signed zero, in many fields\nof mathematics negative zero is surprising or unwanted -- especially in\nthe context of displaying an (often rounded) numerical result. This PEP\nproposes an extension to the string format specification allowing\nnegative zero to be normalized to positive zero.\n\nMotivation\n\nHere is negative zero:\n\n >>> x = -0.\n >>> x\n -0.0\n\nWhen formatting a number, negative zero can result from rounding.\nAssuming the user's intention is truly to discard precision, the\ndistinction between negative and positive zero of the rounded result\nmight be considered an unwanted artifact:\n\n >>> for x in (.002, -.001, .060):\n ... print(f'{x: .1f}')\n 0.0\n -0.0\n 0.1\n\nThere are various approaches to clearing the sign of a negative zero. It\ncan be achieved without a conditional by adding positive zero:\n\n >>> x = -0.\n >>> x + 0.\n 0.0\n\nTo normalize negative zero when formatting, it is necessary to perform a\nredundant (and error-prone) pre-rounding of the input:\n\n >>> for x in (.002, -.001, .060):\n ... print(f'{round(x, 1) + 0.: .1f}')\n 0.0\n 0.0\n 0.1\n\nThere is ample evidence that, regardless of the language, programmers\nare often looking for a way to suppress negative zero, and landing on a\nvariety of workarounds (pre-round, post-regex, etc.). A sampling:\n\n- How to have negative zero always formatted as positive zero in a\n python string? (Python, post-regex)\n- (Iron)Python formatting issue with modulo operator & \"negative zero\"\n (Python, pre-round)\n- Negative sign in case of zero in java (Java, post-regex)\n- Prevent small negative numbers printing as \"-0\" (Objective-C, custom\n number formatter)\n\nWhat we would like instead is a first-class option to normalize negative\nzero, on top of everything else that numerical string formatting already\noffers.\n\nRationale\n\nThere are use cases where negative zero is unwanted in formatted number\noutput -- arguably, not wanting it is more common. Expanding the format\nspecification is the best way to support this because number formatting\nalready incorporates rounding, and the normalization of negative zero\nmust happen after rounding.\n\nWhile it is possible to pre-round and normalize a number before\nformatting, it's tedious and prone to error if the rounding doesn't\nprecisely match that of the format spec. Furthermore, functions that\nwrap formatting would find themselves having to parse format specs to\nextract the precision information. 
For example, consider how this\nutility for formatting one-dimensional numerical arrays would be\ncomplicated by such pre-rounding:\n\n def format_vector(v, format_spec='8.2f'):\n \"\"\"Format a vector (any iterable) using given per-term format string.\"\"\"\n return f\"[{','.join(f'{term:{format_spec}}' for term in v)}]\"\n\nTo date, there doesn't appear to be any other widely-used language or\nlibrary providing a formatting option for negative zero. However, the\nsame z option syntax and semantics specified below have been proposed\nfor C++ std::format(). While the proposal was withdrawn for C++20, a\nconsensus proposal is promised for C++23. (The original feature request\nprompting this PEP was argued without knowledge of the C++ proposal.)\n\nWhen Rust developers debated whether to suppress negative zero in print\noutput, they took a small survey of other languages. Notably, it didn't\nmention any language providing an option for negative zero handling.\n\nSpecification\n\nAn optional, literal z is added to the Format Specification\nMini-Language following sign:\n\n [[fill]align][sign][z][#][0][width][grouping_option][.precision][type]\n\nwhere z is allowed for floating-point presentation types (f, g, etc., as\ndefined by the format specification documentation). Support for z is\nprovided by the .__format__() method of each numeric type, allowing the\nspecifier to be used in f-strings, built-in format(), and str.format().\n\nWhen z is present, negative zero (whether the original value or the\nresult of rounding) will be normalized to positive zero.\n\nSynopsis:\n\n >>> x = -.00001\n >>> f'{x:z.1f}'\n '0.0'\n\n >>> x = decimal.Decimal('-.00001')\n >>> '{:+z.1f}'.format(x)\n '+0.0'\n\nDesign Notes\n\nThe solution must be opt-in, because we can't change the behavior of\nprograms that may be expecting or relying on negative zero when\nformatting numbers.\n\nThe proposed extension is intentionally [sign][z] rather than [sign[z]].\nThe default for sign (-) is not widely known or explicitly written, so\nthis avoids everyone having to learn it just to use the z option.\n\nWhile f-strings, built-in format(), and str.format() can access the new\noption, %-formatting cannot. There is already precedent for not\nextending %-formatting with new options, as was the case for the ,\noption (PEP 378).\n\nC99 printf already uses the z option character for another purpose:\nqualifying the unsigned type (u) to match the length of size_t. However,\nsince the signed zero option specifically disallows z for integer\npresentation types, it's possible to disambiguate the two uses, should C\nwant to adopt this new option.\n\nBackwards Compatibility\n\nThe new formatting behavior is opt-in, so numerical formatting of\nexisting programs will not be affected.\n\nHow to Teach This\n\nA typical introductory Python course will not cover string formatting in\nfull detail. For such a course, no adjustments would need to be made.\nFor a course that does go into details of the string format\nspecification, a single example demonstrating the effect of the z option\non a negative value that's rounded to zero by the formatting should be\nenough. 
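Such an example might look like the following (added here for illustration; it assumes an interpreter in which this PEP is implemented, i.e. Python 3.11 or later):

    x = -0.001

    # Without the z option, rounding to one decimal place leaves a
    # negative zero in the output.
    print(f'{x:.1f}')    # -0.0

    # With the z option, the rounded negative zero is normalized.
    print(f'{x:z.1f}')   # 0.0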
For an independent developer encountering the feature in someone\nelse's code, reference to the Format Specification Mini-Language section\nof the library reference manual should suffice.\n\nReference Implementation\n\nA reference implementation exists at pull request #30049.\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 End:"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.793196"},"created":{"kind":"timestamp","value":"2022-01-29T00:00:00","string":"2022-01-29T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0682/\",\n \"authors\": [\n \"John Belmonte\"\n ],\n \"pep_number\": \"0682\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":60,"cells":{"id":{"kind":"string","value":"0398"},"text":{"kind":"string","value":"PEP: 398 Title: Python 3.3 Release Schedule Version: $Revision$\nLast-Modified: $Date$ Author: Georg Brandl Status:\nFinal Type: Informational Topic: Release Content-Type: text/x-rst\nCreated: 23-Mar-2011 Python-Version: 3.3\n\nAbstract\n\nThis document describes the development and release schedule for Python\n3.3. The schedule primarily concerns itself with PEP-sized items.\n\nRelease Manager and Crew\n\n- 3.3 Release Managers: Georg Brandl, Ned Deily (3.3.7+)\n- Windows installers: Martin v. Löwis\n- Mac installers: Ronald Oussoren/Ned Deily\n- Documentation: Georg Brandl\n\n3.3 Lifespan\n\n3.3 will receive bugfix updates approximately every 4-6 months for\napproximately 18 months. After the release of 3.4.0 final, a final 3.3\nbugfix update will be released. 
After that, security updates (source\nonly) will be released until 5 years after the release of 3.3 final,\nwhich will be September 2017.\n\nAs of 2017-09-29, Python 3.3.x reached end-of-life status.\n\nRelease Schedule\n\n3.3.0 schedule\n\n- 3.3.0 alpha 1: March 5, 2012\n- 3.3.0 alpha 2: April 2, 2012\n- 3.3.0 alpha 3: May 1, 2012\n- 3.3.0 alpha 4: May 31, 2012\n- 3.3.0 beta 1: June 27, 2012\n\n(No new features beyond this point.)\n\n- 3.3.0 beta 2: August 12, 2012\n- 3.3.0 candidate 1: August 24, 2012\n- 3.3.0 candidate 2: September 9, 2012\n- 3.3.0 candidate 3: September 24, 2012\n- 3.3.0 final: September 29, 2012\n\n3.3.1 schedule\n\n- 3.3.1 candidate 1: March 23, 2013\n- 3.3.1 final: April 6, 2013\n\n3.3.2 schedule\n\n- 3.3.2 final: May 13, 2013\n\n3.3.3 schedule\n\n- 3.3.3 candidate 1: October 27, 2013\n- 3.3.3 candidate 2: November 9, 2013\n- 3.3.3 final: November 16, 2013\n\n3.3.4 schedule\n\n- 3.3.4 candidate 1: January 26, 2014\n- 3.3.4 final: February 9, 2014\n\n3.3.5 schedule\n\nPython 3.3.5 was the last regular maintenance release before 3.3 entered\nsecurity-fix only mode.\n\n- 3.3.5 candidate 1: February 22, 2014\n- 3.3.5 candidate 2: March 1, 2014\n- 3.3.5 final: March 8, 2014\n\n3.3.6 schedule\n\nSecurity fixes only\n\n- 3.3.6 candidate 1 (source-only release): October 4, 2014\n- 3.3.6 final (source-only release): October 11, 2014\n\n3.3.7 schedule\n\nSecurity fixes only\n\n- 3.3.7 candidate 1 (source-only release): September 6, 2017\n- 3.3.7 final (source-only release): September 19, 2017\n\n3.3.x end-of-life\n\n- September 29, 2017\n\nFeatures for 3.3\n\nImplemented / Final PEPs:\n\n- PEP 362: Function Signature Object\n- PEP 380: Syntax for Delegating to a Subgenerator\n- PEP 393: Flexible String Representation\n- PEP 397: Python launcher for Windows\n- PEP 399: Pure Python/C Accelerator Module Compatibility Requirements\n- PEP 405: Python Virtual Environments\n- PEP 409: Suppressing exception context\n- PEP 412: Key-Sharing Dictionary\n- PEP 414: Explicit Unicode Literal for Python 3.3\n- PEP 415: Implement context suppression with exception attributes\n- PEP 417: Including mock in the Standard Library\n- PEP 418: Add monotonic time, performance counter, and process time\n functions\n- PEP 420: Implicit Namespace Packages\n- PEP 421: Adding sys.implementation\n- PEP 3118: Revising the buffer protocol (protocol semantics\n finalised)\n- PEP 3144: IP Address manipulation library\n- PEP 3151: Reworking the OS and IO exception hierarchy\n- PEP 3155: Qualified name for classes and functions\n\nOther final large-scale changes:\n\n- Addition of the \"faulthandler\" module\n- Addition of the \"lzma\" module, and lzma/xz support in tarfile\n- Implementing __import__ using importlib\n- Addition of the C decimal implementation\n- Switch of Windows build toolchain to VS 2010\n\nCandidate PEPs:\n\n- None\n\nOther planned large-scale changes:\n\n- None\n\nDeferred to post-3.3:\n\n- PEP 395: Qualified Names for Modules\n- PEP 3143: Standard daemon process library\n- PEP 3154: Pickle protocol version 4\n- Breaking out standard library and docs in separate repos\n- Addition of the \"packaging\" module, deprecating \"distutils\"\n- Addition of the \"regex\" module\n- Email version 6\n- A standard event-loop interface (PEP by Jim Fulton pending)\n\nCopyright\n\nThis document has been placed in the public domain.\n\n\f\n\n Local Variables: mode: indented-text indent-tabs-mode: nil\n sentence-end-double-space: t fill-column: 70 coding: utf-8 
End:"},"source":{"kind":"string","value":"python-peps"},"added":{"kind":"string","value":"2024-10-18T13:23:16.811756"},"created":{"kind":"timestamp","value":"2011-03-23T00:00:00","string":"2011-03-23T00:00:00"},"metadata":{"kind":"string","value":"{\n \"license\": \"Public Domain\",\n \"url\": \"https://peps.python.org/pep-0398/\",\n \"authors\": [\n \"Georg Brandl\"\n ],\n \"pep_number\": \"0398\",\n \"pandoc_version\": \"3.5\"\n}"}}},{"rowIdx":61,"cells":{"id":{"kind":"string","value":"0340"},"text":{"kind":"string","value":"PEP: 340 Title: Anonymous Block Statements Version: $Revision$\nLast-Modified: $Date$ Author: Guido van Rossum Status: Rejected Type:\nStandards Track Content-Type: text/x-rst Created: 27-Apr-2005\nPost-History:\n\nIntroduction\n\nThis PEP proposes a new type of compound statement which can be used for\nresource management purposes. The new statement type is provisionally\ncalled the block-statement because the keyword to be used has not yet\nbeen chosen.\n\nThis PEP competes with several other PEPs: PEP 288 (Generators\nAttributes and Exceptions; only the second part), PEP 310 (Reliable\nAcquisition/Release Pairs), and PEP 325 (Resource-Release Support for\nGenerators).\n\nI should clarify that using a generator to \"drive\" a block statement is\nreally a separable proposal; with just the definition of the block\nstatement from the PEP you could implement all the examples using a\nclass (similar to example 6, which is easily turned into a template).\nBut the key idea is using a generator to drive a block statement; the\nrest is elaboration, so I'd like to keep these two parts together.\n\n(PEP 342, Enhanced Iterators, was originally a part of this PEP; but the\ntwo proposals are really independent and with Steven Bethard's help I\nhave moved it to a separate PEP.)\n\nRejection Notice\n\nI am rejecting this PEP in favor of PEP 343. See the motivational\nsection in that PEP for the reasoning behind this rejection. GvR.\n\nMotivation and Summary\n\n(Thanks to Shane Hathaway -- Hi Shane!)\n\nGood programmers move commonly used code into reusable functions.\nSometimes, however, patterns arise in the structure of the functions\nrather than the actual sequence of statements. For example, many\nfunctions acquire a lock, execute some code specific to that function,\nand unconditionally release the lock. Repeating the locking code in\nevery function that uses it is error prone and makes refactoring\ndifficult.\n\nBlock statements provide a mechanism for encapsulating patterns of\nstructure. Code inside the block statement runs under the control of an\nobject called a block iterator. Simple block iterators execute code\nbefore and after the code inside the block statement. Block iterators\nalso have the opportunity to execute the controlled code more than once\n(or not at all), catch exceptions, or receive data from the body of the\nblock statement.\n\nA convenient way to write block iterators is to write a generator (PEP\n255). A generator looks a lot like a Python function, but instead of\nreturning a value immediately, generators pause their execution at\n\"yield\" statements. When a generator is used as a block iterator, the\nyield statement tells the Python interpreter to suspend the block\niterator, execute the block statement body, and resume the block\niterator when the body has executed.\n\nThe Python interpreter behaves as follows when it encounters a block\nstatement based on a generator. First, the interpreter instantiates the\ngenerator and begins executing it. 
The generator does setup work\nappropriate to the pattern it encapsulates, such as acquiring a lock,\nopening a file, starting a database transaction, or starting a loop.\nThen the generator yields execution to the body of the block statement\nusing a yield statement. When the block statement body completes, raises\nan uncaught exception, or sends data back to the generator using a\ncontinue statement, the generator resumes. At this point, the generator\ncan either clean up and stop or yield again, causing the block statement\nbody to execute again. When the generator finishes, the interpreter\nleaves the block statement.\n\nUse Cases\n\nSee the Examples section near the end.\n\nSpecification: the __exit__() Method\n\nAn optional new method for iterators is proposed, called __exit__(). It\ntakes up to three arguments which correspond to the three \"arguments\" to\nthe raise-statement: type, value, and traceback. If all three arguments\nare None, sys.exc_info() may be consulted to provide suitable default\nvalues.\n\nSpecification: the Anonymous Block Statement\n\nA new statement is proposed with the syntax:\n\n block EXPR1 as VAR1:\n BLOCK1\n\nHere, 'block' and 'as' are new keywords; EXPR1 is an arbitrary\nexpression (but not an expression-list) and VAR1 is an arbitrary\nassignment target (which may be a comma-separated list).\n\nThe \"as VAR1\" part is optional; if omitted, the assignments to VAR1 in\nthe translation below are omitted (but the expressions assigned are\nstill evaluated!).\n\nThe choice of the 'block' keyword is contentious; many alternatives have\nbeen proposed, including not to use a keyword at all (which I actually\nlike). PEP 310 uses 'with' for similar semantics, but I would like to\nreserve that for a with-statement similar to the one found in Pascal and\nVB. (Though I just found that the C# designers don't like 'with'[1], and\nI have to agree with their reasoning.) To sidestep this issue\nmomentarily I'm using 'block' until we can agree on the right keyword,\nif any.\n\nNote that the 'as' keyword is not contentious (it will finally be\nelevated to proper keyword status).\n\nNote that it is up to the iterator to decide whether a block-statement\nrepresents a loop with multiple iterations; in the most common use case\nBLOCK1 is executed exactly once. To the parser, however, it is always a\nloop; break and continue return transfer to the block's iterator (see\nbelow for details).\n\nThe translation is subtly different from a for-loop: iter() is not\ncalled, so EXPR1 should already be an iterator (not just an iterable);\nand the iterator is guaranteed to be notified when the block-statement\nis left, regardless if this is due to a break, return or exception:\n\n itr = EXPR1 # The iterator\n ret = False # True if a return statement is active\n val = None # Return value, if ret == True\n exc = None # sys.exc_info() tuple if an exception is active\n while True:\n try:\n if exc:\n ext = getattr(itr, \"__exit__\", None)\n if ext is not None:\n VAR1 = ext(*exc) # May re-raise *exc\n else:\n raise exc[0], exc[1], exc[2]\n else:\n VAR1 = itr.next() # May raise StopIteration\n except StopIteration:\n if ret:\n return val\n break\n try:\n ret = False\n val = exc = None\n BLOCK1\n except:\n exc = sys.exc_info()\n\n(However, the variables 'itr' etc. 
are not user-visible and the built-in\nnames used cannot be overridden by the user.)\n\nInside BLOCK1, the following special translations apply:\n\n- \"break\" is always legal; it is translated into:\n\n exc = (StopIteration, None, None)\n continue\n\n- \"return EXPR3\" is only legal when the block-statement is contained\n in a function definition; it is translated into:\n\n exc = (StopIteration, None, None)\n ret = True\n val = EXPR3\n continue\n\nThe net effect is that break and return behave much the same as if the\nblock-statement were a for-loop, except that the iterator gets a chance\nat resource cleanup before the block-statement is left, through the\noptional __exit__() method. The iterator also gets a chance if the\nblock-statement is left through raising an exception. If the iterator\ndoesn't have an __exit__() method, there is no difference with a\nfor-loop (except that a for-loop calls iter() on EXPR1).\n\nNote that a yield-statement in a block-statement is not treated\ndifferently. It suspends the function containing the block without\nnotifying the block's iterator. The block's iterator is entirely unaware\nof this yield, since the local control flow doesn't actually leave the\nblock. In other words, it is not like a break or return statement. When\nthe loop that was resumed by the yield calls next(), the block is\nresumed right after the yield. (See example 7 below.) The generator\nfinalization semantics described below guarantee (within the limitations\nof all finalization semantics) that the block will be resumed\neventually.\n\nUnlike the for-loop, the block-statement does not have an else-clause. I\nthink it would be confusing, and emphasize the \"loopiness\" of the\nblock-statement, while I want to emphasize its difference from a\nfor-loop. In addition, there are several possible semantics for an\nelse-clause, and only a very weak use case.\n\nSpecification: Generator Exit Handling\n\nGenerators will implement the new __exit__() method API.\n\nGenerators will be allowed to have a yield statement inside a\ntry-finally statement.\n\nThe expression argument to the yield-statement will become optional\n(defaulting to None).\n\nWhen __exit__() is called, the generator is resumed but at the point of\nthe yield-statement the exception represented by the __exit__\nargument(s) is raised. The generator may re-raise this exception, raise\nanother exception, or yield another value, except that if the exception\npassed in to __exit__() was StopIteration, it ought to raise\nStopIteration (otherwise the effect would be that a break is turned into\ncontinue, which is unexpected at least). When the initial call resuming\nthe generator is an __exit__() call instead of a next() call, the\ngenerator's execution is aborted and the exception is re-raised without\npassing control to the generator's body.\n\nWhen a generator that has not yet terminated is garbage-collected\n(either through reference counting or by the cyclical garbage\ncollector), its __exit__() method is called once with StopIteration as\nits first argument. Together with the requirement that a generator ought\nto raise StopIteration when __exit__() is called with StopIteration,\nthis guarantees the eventual activation of any finally-clauses that were\nactive when the generator was last suspended. Of course, under certain\ncircumstances the generator may never be garbage-collected. 
This is no\ndifferent than the guarantees that are made about finalizers (__del__()\nmethods) of other objects.\n\nAlternatives Considered and Rejected\n\n- Many alternatives have been proposed for 'block'. I haven't seen a\n proposal for another keyword that I like better than 'block' yet.\n Alas, 'block' is also not a good choice; it is a rather popular name\n for variables, arguments and methods. Perhaps 'with' is the best\n choice after all?\n\n- Instead of trying to pick the ideal keyword, the block-statement\n could simply have the form:\n\n EXPR1 as VAR1:\n BLOCK1\n\n This is at first attractive because, together with a good choice of\n function names (like those in the Examples section below) used in\n EXPR1, it reads well, and feels like a \"user-defined statement\". And\n yet, it makes me (and many others) uncomfortable; without a keyword\n the syntax is very \"bland\", difficult to look up in a manual\n (remember that 'as' is optional), and it makes the meaning of break\n and continue in the block-statement even more confusing.\n\n- Phillip Eby has proposed to have the block-statement use an entirely\n different API than the for-loop, to differentiate between the two. A\n generator would have to be wrapped in a decorator to make it support\n the block API. IMO this adds more complexity with very little\n benefit; and we can't really deny that the block-statement is\n conceptually a loop -- it supports break and continue, after all.\n\n- This keeps getting proposed: \"block VAR1 = EXPR1\" instead of \"block\n EXPR1 as VAR1\". That would be very misleading, since VAR1 does not\n get assigned the value of EXPR1; EXPR1 results in a generator which\n is assigned to an internal variable, and VAR1 is the value returned\n by successive calls to the __next__() method of that iterator.\n\n- Why not change the translation to apply iter(EXPR1)? All the\n examples would continue to work. But this makes the block-statement\n more like a for-loop, while the emphasis ought to be on the\n difference between the two. Not calling iter() catches a bunch of\n misunderstandings, like using a sequence as EXPR1.\n\nComparison to Thunks\n\nAlternative semantics proposed for the block-statement turn the block\ninto a thunk (an anonymous function that blends into the containing\nscope).\n\nThe main advantage of thunks that I can see is that you can save the\nthunk for later, like a callback for a button widget (the thunk then\nbecomes a closure). You can't use a yield-based block for that (except\nin Ruby, which uses yield syntax with a thunk-based implementation). But\nI have to say that I almost see this as an advantage: I think I'd be\nslightly uncomfortable seeing a block and not knowing whether it will be\nexecuted in the normal control flow or later. Defining an explicit\nnested function for that purpose doesn't have this problem for me,\nbecause I already know that the 'def' keyword means its body is executed\nlater.\n\nThe other problem with thunks is that once we think of them as the\nanonymous functions they are, we're pretty much forced to say that a\nreturn statement in a thunk returns from the thunk rather than from the\ncontaining function. Doing it any other way would cause major weirdness\nwhen the thunk were to survive its containing function as a closure\n(perhaps continuations would help, but I'm not about to go there :-).\n\nBut then an IMO important use case for the resource cleanup template\npattern is lost. 
I routinely write code like this:\n\n def findSomething(self, key, default=None):\n self.lock.acquire()\n try:\n for item in self.elements:\n if item.matches(key):\n return item\n return default\n finally:\n self.lock.release()\n\nand I'd be bummed if I couldn't write this as:\n\n def findSomething(self, key, default=None):\n block locking(self.lock):\n for item in self.elements:\n if item.matches(key):\n return item\n return default\n\nThis particular example can be rewritten using a break:\n\n def findSomething(self, key, default=None):\n block locking(self.lock):\n for item in self.elements:\n if item.matches(key):\n break\n else:\n item = default\n return item\n\nbut it looks forced and the transformation isn't always that easy; you'd\nbe forced to rewrite your code in a single-return style which feels too\nrestrictive.\n\nAlso note the semantic conundrum of a yield in a thunk -- the only\nreasonable interpretation is that this turns the thunk into a generator!\n\nGreg Ewing believes that thunks \"would be a lot simpler, doing just what\nis required without any jiggery pokery with exceptions and\nbreak/continue/return statements. It would be easy to explain what it\ndoes and why it's useful.\"\n\nBut in order to obtain the required local variable sharing between the\nthunk and the containing function, every local variable used or set in\nthe thunk would have to become a 'cell' (our mechanism for sharing\nvariables between nested scopes). Cells slow down access compared to\nregular local variables: access involves an extra C function call\n(PyCell_Get() or PyCell_Set()).\n\nPerhaps not entirely coincidentally, the last example above\n(findSomething() rewritten to avoid a return inside the block) shows\nthat, unlike for regular nested functions, we'll want variables assigned\nto by the thunk also to be shared with the containing function, even if\nthey are not assigned to outside the thunk.\n\nGreg Ewing again: \"generators have turned out to be more powerful,\nbecause you can have more than one of them on the go at once. Is there a\nuse for that capability here?\"\n\nI believe there are definitely uses for this; several people have\nalready shown how to do asynchronous light-weight threads using\ngenerators (e.g. David Mertz quoted in PEP 288, and Fredrik Lundh[2]).\n\nAnd finally, Greg says: \"a thunk implementation has the potential to\neasily handle multiple block arguments, if a suitable syntax could ever\nbe devised. It's hard to see how that could be done in a general way\nwith the generator implementation.\"\n\nHowever, the use cases for multiple blocks seem elusive.\n\n(Proposals have since been made to change the implementation of thunks\nto remove most of these objections, but the resulting semantics are\nfairly complex to explain and to implement, so IMO that defeats the\npurpose of using thunks in the first place.)\n\nExamples\n\n(Several of these examples contain \"yield None\". If PEP 342 is accepted,\nthese can be changed to just \"yield\" of course.)\n\n1. A template for ensuring that a lock, acquired at the start of a\n block, is released when the block is left:\n\n def locking(lock):\n lock.acquire()\n try:\n yield None\n finally:\n lock.release()\n\n Used as follows:\n\n block locking(myLock):\n # Code here executes with myLock held. The lock is\n # guaranteed to be released when the block is left (even\n # if via return or by an uncaught exception).\n\n2. 
A template for opening a file that ensures the file is closed when\n the block is left:\n\n def opening(filename, mode=\"r\"):\n f = open(filename, mode)\n try:\n yield f\n finally:\n f.close()\n\n Used as follows:\n\n block opening(\"/etc/passwd\") as f:\n for line in f:\n print line.rstrip()\n\n3. A template for committing or rolling back a database transaction:\n\n def transactional(db):\n try:\n yield None\n except:\n db.rollback()\n raise\n else:\n db.commit()\n\n4. A template that tries something up to n times:\n\n def auto_retry(n=3, exc=Exception):\n for i in range(n):\n try:\n yield None\n return\n except exc, err:\n # perhaps log exception here\n continue\n raise # re-raise the exception we caught earlier\n\n Used as follows:\n\n block auto_retry(3, IOError):\n f = urllib.urlopen(\"https://www.example.com/\")\n print f.read()\n\n5. It is possible to nest blocks and combine templates:\n\n def locking_opening(lock, filename, mode=\"r\"):\n block locking(lock):\n block opening(filename) as f:\n yield f\n\n Used as follows:\n\n block locking_opening(myLock, \"/etc/passwd\") as f:\n for line in f:\n print line.rstrip()\n\n (If this example confuses you, consider that it is equivalent to\n using a for-loop with a yield in its body in a regular generator\n which is invoking another iterator or generator recursively; see for\n example the source code for os.walk().)\n\n6. It is possible to write a regular iterator with the semantics of\n example 1:\n\n class locking:\n def __init__(self, lock):\n self.lock = lock\n self.state = 0\n def __next__(self, arg=None):\n # ignores arg\n if self.state:\n assert self.state == 1\n self.lock.release()\n self.state += 1\n raise StopIteration\n else:\n self.lock.acquire()\n self.state += 1\n return None\n def __exit__(self, type, value=None, traceback=None):\n assert self.state in (0, 1, 2)\n if self.state == 1:\n self.lock.release()\n raise type, value, traceback\n\n (This example is easily modified to implement the other examples; it\n shows how much simpler generators are for the same purpose.)\n\n7. Redirect stdout temporarily:\n\n def redirecting_stdout(new_stdout):\n save_stdout = sys.stdout\n try:\n sys.stdout = new_stdout\n yield None\n finally:\n sys.stdout = save_stdout\n\n Used as follows:\n\n block opening(filename, \"w\") as f:\n block redirecting_stdout(f):\n print \"Hello world\"\n\n8. A variant on opening() that also returns an error condition:\n\n def opening_w_error(filename, mode=\"r\"):\n try:\n f = open(filename, mode)\n except IOError, err:\n yield None, err\n else:\n try:\n yield f, None\n finally:\n f.close()\n\n Used as follows:\n\n block opening_w_error(\"/etc/passwd\", \"a\") as f, err:\n if err:\n print \"IOError:\", err\n else:\n f.write(\"guido::0:0::/:/bin/sh\\n\")\n\nAcknowledgements\n\nIn no useful order: Alex Martelli, Barry Warsaw, Bob Ippolito, Brett\nCannon, Brian Sabbey, Chris Ryland, Doug Landauer, Duncan Booth, Fredrik\nLundh, Greg Ewing, Holger Krekel, Jason Diamond, Jim Jewett, Josiah\nCarlson, Ka-Ping Yee, Michael Chermside, Michael Hudson, Neil\nSchemenauer, Alyssa Coghlan, Paul Moore, Phillip Eby, Raymond Hettinger,\nGeorg Brandl, Samuele Pedroni, Shannon Behrens, Skip Montanaro, Steven\nBethard, Terry Reedy, Tim Delaney, Aahz, and others. 
Thanks all for the\nvaluable contributions!\n\nReferences\n\n[1] https://mail.python.org/pipermail/python-dev/2005-April/052821.html\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] https://web.archive.org/web/20060719195933/http://msdn.microsoft.com/vcsharp/programming/language/ask/withstatement/\n\n[2] https://web.archive.org/web/20050204062901/http://effbot.org/zone/asyncore-generators.htm\n\nPEP: 303 Title: Extend divmod() for Multiple Divisors Version:\n$Revision$ Last-Modified: $Date$ Author: Thomas Bellman\n Status: Rejected Type: Standards\nTrack Content-Type: text/x-rst Created: 31-Dec-2002 Python-Version: 2.3\nPost-History:\n\nAbstract\n\nThis PEP describes an extension to the built-in divmod() function,\nallowing it to take multiple divisors, chaining several calls to\ndivmod() into one.\n\nPronouncement\n\nThis PEP is rejected. Most uses for chained divmod() involve a constant\nmodulus (in radix conversions for example) and are more properly coded\nas a loop. The example of splitting seconds into\ndays/hours/minutes/seconds does not generalize to months and years;\nrather, the whole use case is handled more flexibly and robustly by date\nand time modules. The other use cases mentioned in the PEP are somewhat\nrare in real code. The proposal is also problematic in terms of clarity\nand obviousness. In the examples, it is not immediately clear that the\nargument order is correct or that the target tuple is of the right\nlength. Users from other languages are more likely to understand the\nstandard two argument form without having to re-read the documentation.\nSee python-dev discussion on 17 June 2005[1].\n\nSpecification\n\nThe built-in divmod() function would be changed to accept multiple\ndivisors, changing its signature from divmod(dividend, divisor) to\ndivmod(dividend, *divisors). The dividend is divided by the last\ndivisor, giving a quotient and a remainder. The quotient is then divided\nby the second to last divisor, giving a new quotient and remainder. This\nis repeated until all divisors have been used, and divmod() then returns\na tuple consisting of the quotient from the last step, and the\nremainders from all the steps.\n\nA Python implementation of the new divmod() behaviour could look like:\n\n def divmod(dividend, *divisors):\n modulos = ()\n q = dividend\n while divisors:\n q, r = q.__divmod__(divisors[-1])\n modulos = (r,) + modulos\n divisors = divisors[:-1]\n return (q,) + modulos\n\nMotivation\n\nOccasionally one wants to perform a chain of divmod() operations,\ncalling divmod() on the quotient from the previous step, with varying\ndivisors. The most common case is probably converting a number of\nseconds into weeks, days, hours, minutes and seconds.
This would today\nbe written as:\n\n def secs_to_wdhms(seconds):\n m, s = divmod(seconds, 60)\n h, m = divmod(m, 60)\n d, h = divmod(h, 24)\n w, d = divmod(d, 7)\n return (w, d, h, m, s)\n\nThis is tedious and easy to get wrong each time you need it.\n\nIf instead the divmod() built-in is changed according the proposal, the\ncode for converting seconds to weeks, days, hours, minutes and seconds\nthen become :\n\n def secs_to_wdhms(seconds):\n w, d, h, m, s = divmod(seconds, 7, 24, 60, 60)\n return (w, d, h, m, s)\n\nwhich is easier to type, easier to type correctly, and easier to read.\n\nOther applications are:\n\n- Astronomical angles (declination is measured in degrees, minutes and\n seconds, right ascension is measured in hours, minutes and seconds).\n- Old British currency (1 pound = 20 shilling, 1 shilling = 12 pence).\n- Anglo-Saxon length units: 1 mile = 1760 yards, 1 yard = 3 feet, 1\n foot = 12 inches.\n- Anglo-Saxon weight units: 1 long ton = 160 stone, 1 stone = 14\n pounds, 1 pound = 16 ounce, 1 ounce = 16 dram.\n- British volumes: 1 gallon = 4 quart, 1 quart = 2 pint, 1 pint = 20\n fluid ounces.\n\nRationale\n\nThe idea comes from APL, which has an operator that does this. (I don't\nremember what the operator looks like, and it would probably be\nimpossible to render in ASCII anyway.)\n\nThe APL operator takes a list as its second operand, while this PEP\nproposes that each divisor should be a separate argument to the divmod()\nfunction. This is mainly because it is expected that the most common\nuses will have the divisors as constants right in the call (as the 7,\n24, 60, 60 above), and adding a set of parentheses or brackets would\njust clutter the call.\n\nRequiring an explicit sequence as the second argument to divmod() would\nseriously break backwards compatibility. Making divmod() check its\nsecond argument for being a sequence is deemed to be too ugly to\ncontemplate. And in the case where one does have a sequence that is\ncomputed other-where, it is easy enough to write divmod(x, *divs)\ninstead.\n\nRequiring at least one divisor, i.e rejecting divmod(x), has been\nconsidered, but no good reason to do so has come to mind, and is thus\nallowed in the name of generality.\n\nCalling divmod() with no divisors should still return a tuple (of one\nelement). Code that calls divmod() with a varying number of divisors,\nand thus gets a return value with an \"unknown\" number of elements, would\notherwise have to special case that case. Code that knows it is calling\ndivmod() with no divisors is considered to be too silly to warrant a\nspecial case.\n\nProcessing the divisors in the other direction, i.e dividing with the\nfirst divisor first, instead of dividing with the last divisor first,\nhas been considered. However, the result comes with the most significant\npart first and the least significant part last (think of the chained\ndivmod as a way of splitting a number into \"digits\", with varying\nweights), and it is reasonable to specify the divisors (weights) in the\nsame order as the result.\n\nThe inverse operation:\n\n def inverse_divmod(seq, *factors):\n product = seq[0]\n for x, y in zip(factors, seq[1:]):\n product = product * x + y\n return product\n\ncould also be useful. However, writing :\n\n seconds = (((((w * 7) + d) * 24 + h) * 60 + m) * 60 + s)\n\nis less cumbersome both to write and to read than the chained divmods.\nIt is therefore deemed to be less important, and its introduction can be\ndeferred to its own PEP. 
Also, such a function needs a good name, and\nthe PEP author has not managed to come up with one yet.\n\nCalling divmod(\"spam\") does not raise an error, despite strings\nsupporting neither division nor modulo. However, unless we know the\nother object too, we can't determine whether divmod() would work or not,\nand thus it seems silly to forbid it.\n\nBackwards Compatibility\n\nAny module that replaces the divmod() function in the __builtin__\nmodule may cause other modules using the new syntax to break. It is\nexpected that this is very uncommon.\n\nCode that expects a TypeError exception when calling divmod() with\nanything but two arguments will break. This is also expected to be very\nuncommon.\n\nNo other issues regarding backwards compatibility are known.\n\nReference Implementation\n\nNot finished yet, but it seems a rather straightforward new\nimplementation of the function builtin_divmod() in Python/bltinmodule.c.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] Raymond Hettinger, \"Propose rejection of PEP 303 -- Extend divmod()\nfor Multiple Divisors\"\nhttps://mail.python.org/pipermail/python-dev/2005-June/054283.html\n\nPEP: 718 Title: Subscriptable functions Author: James Hilton-Balfe\n Sponsor: Guido van Rossum \nDiscussions-To: https://discuss.python.org/t/28457/ Status: Draft Type:\nStandards Track Topic: Typing Content-Type: text/x-rst Created:\n23-Jun-2023 Python-Version: 3.13 Post-History: 24-Jun-2023\n\nAbstract\n\nThis PEP proposes making function objects subscriptable for typing\npurposes. Doing so gives developers explicit control over the types\nproduced by the type checker where bi-directional inference (which\nallows for the types of parameters of anonymous functions to be\ninferred) and other methods than specialisation are insufficient.
It\nalso brings functions in line with regular classes in their ability to\nbe subscriptable.\n\nMotivation\n\nUnknown Types\n\nCurrently, it is not possible to infer the type parameters to generic\nfunctions in certain situations:\n\n def make_list[T](*args: T) -> list[T]: ...\n reveal_type(make_list()) # type checker cannot infer a meaningful type for T\n\nMaking instances of FunctionType subscriptable would allow for this\nconstructor to be typed:\n\n reveal_type(make_list[int]()) # type is list[int]\n\nCurrently you have to use an assignment to provide a precise type:\n\n x: list[int] = make_list()\n reveal_type(x) # type is list[int]\n\nbut this code is unnecessarily verbose taking up multiple lines for a\nsimple function call.\n\nSimilarly, T in this example cannot currently be meaningfully inferred,\nso x is untyped without an extra assignment:\n\n def factory[T](func: Callable[[T], Any]) -> Foo[T]: ...\n\n reveal_type(factory(lambda x: \"Hello World\" * x))\n\nIf function objects were subscriptable, however, a more specific type\ncould be given:\n\n reveal_type(factory[int](lambda x: \"Hello World\" * x)) # type is Foo[int]\n\nUndecidable Inference\n\nThere are even cases where subclass relations make type inference\nimpossible. However, if you can specialise the function type checkers\ncan infer a meaningful type.\n\n def foo[T](x: Sequence[T] | T) -> list[T]: ...\n\n reveal_type(foo[bytes](b\"hello\"))\n\nCurrently, type checkers do not consistently synthesise a type here.\n\nUnsolvable Type Parameters\n\nCurrently, with unspecialised literals, it is not possible to determine\na type for situations similar to:\n\n def foo[T](x: list[T]) -> T: ...\n reveal_type(foo([])) # type checker cannot infer T (yet again)\n\n reveal_type(foo[int]([])) # type is int\n\nIt is also useful to be able to specify in cases in which a certain type\nmust be passed to a function beforehand:\n\n words = [\"hello\", \"world\"]\n foo[int](words) # Invalid: list[str] is incompatible with list[int]\n\nAllowing subscription makes functions and methods consistent with\ngeneric classes where they weren't already. Whilst all of the proposed\nchanges can be implemented using callable generic classes, syntactic\nsugar would be highly welcome.\n\nDue to this, specialising the function and using it as a new factory is\nfine\n\n make_int_list = make_list[int]\n reveal_type(make_int_list()) # type is list[int]\n\nMonomorphisation and Reification\n\nThis proposal also opens the door to monomorphisation and reified types.\n\nThis would allow for a functionality which anecdotally has been\nrequested many times.\n\nPlease note this feature is not being proposed by the PEP, but may be\nimplemented in the future.\n\nThe syntax for such a feature may look something like:\n\n def foo[T]():\n return T.__value__\n\n assert foo[int]() is int\n\nRationale\n\nFunction objects in this PEP is used to refer to FunctionType,\nMethodType, BuiltinFunctionType, BuiltinMethodType and\nMethodWrapperType.\n\nFor MethodType you should be able to write:\n\n class Foo:\n def make_list[T](self, *args: T) -> list[T]: ...\n\n Foo().make_list[int]()\n\nand have it work similarly to a FunctionType.\n\nFor BuiltinFunctionType, so builtin generic functions (e.g. max and min)\nwork like ones defined in Python. Built-in functions should behave as\nmuch like functions implemented in Python as possible.\n\nBuiltinMethodType is the same type as BuiltinFunctionType.\n\nMethodWrapperType (e.g. 
the type of object().__str__) is useful for\ngeneric magic methods.\n\nSpecification\n\nFunction objects should implement __getitem__ to allow for subscription\nat runtime and return an instance of types.GenericAlias with __origin__\nset as the callable and __args__ as the types passed.\n\nType checkers should support subscripting functions and understand that\nthe parameters passed to the function subscription should follow the\nsame rules as a generic callable class.\n\nSetting __orig_class__\n\nCurrently, __orig_class__ is an attribute set in GenericAlias.__call__\nto the instance of the GenericAlias that created the called class e.g.\n\n class Foo[T]: ...\n\n assert Foo[int]().__orig_class__ == Foo[int]\n\nCurrently, __orig_class__ is unconditionally set; however, to avoid\npotential erasure on any created instances, this attribute should not be\nset if __origin__ is an instance of any function object.\n\nThe following code snippet would fail at runtime without this change as\n__orig_class__ would be bar[str] and not Foo[int].\n\n def bar[U]():\n return Foo[int]()\n\n assert bar[str]().__orig_class__ == Foo[int]\n\nInteractions with @typing.overload\n\nOverloaded functions should work much the same as already, since they\nhave no effect on the runtime type. The only change is that more\nsituations will be decidable and the behaviour/overload can be specified\nby the developer rather than leaving it to ordering of overloads/unions.\n\nBackwards Compatibility\n\nCurrently these classes are not subclassable and so there are no\nbackwards compatibility concerns with regards to classes already\nimplementing __getitem__.\n\nReference Implementation\n\nThe runtime changes proposed can be found here\nhttps://github.com/Gobot1234/cpython/tree/function-subscript\n\nAcknowledgements\n\nThank you to Alex Waygood and Jelle Zijlstra for their feedback on this\nPEP and Guido for some motivating examples.\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\nPEP: 599 Title: The manylinux2014 Platform Tag Version: $Revision$\nLast-Modified: $Date$ Author: Dustin Ingram Sponsor:\nPaul Moore BDFL-Delegate: Paul Moore\n Discussions-To:\nhttps://discuss.python.org/t/the-next-manylinux-specification/1043\nStatus: Superseded Type: Informational Topic: Packaging Content-Type:\ntext/x-rst Created: 29-Apr-2019 Post-History: 29-Apr-2019 Superseded-By:\n600 Resolution:\nhttps://discuss.python.org/t/the-next-manylinux-specification/1043/199\n\nAbstract\n\nThis PEP proposes the creation of a manylinux2014 platform tag to\nsucceed the manylinux2010 tag introduced by PEP 513.
It also proposes\nthat PyPI and pip both be updated to support uploading, downloading, and\ninstalling manylinux2014 distributions on compatible platforms.\n\nRationale\n\nCentOS 6 is now the oldest supported CentOS release, and will receive\nmaintenance updates through November 30th, 2020,[1] at which point it\nwill reach end-of-life, and no further updates such as security patches\nwill be made available. All wheels built under the manylinux2010 images\nwill remain at obsolete versions after that point.\n\nTherefore, we propose the continuation of the existing manylinux\nstandard, and that a new PEP 425-style platform tag called manylinux2014\nbe derived from CentOS 7 and that the manylinux toolchain, PyPI, and pip\nbe updated to support it.\n\nSimilar to how PEP 571 and PEP 513 drew allowed shared libraries and\ntheir symbol versions from CentOS 5.11 and CentOS 6, respectively, a\nmanylinux2014 platform tag will draw its libraries and symbol versions\nfrom CentOS 7, which will reach end-of-life on June 30th, 2024.[2]\n\nThe manylinuxYYYY pattern has a number of advantages that motivate\ncontinuing with the current status quo:\n\n- Well-defined Docker images with clearly specified compatible\n libraries;\n- No need to survey for compatibility issues across multiple releases;\n- A single build image and auditwheel profile per architecture.\n\nThere are also some disadvantages:\n\n- Requires drafting a new PEP for every new standard;\n- Requires adding the new platform tag to installers (e.g., pip);\n- Installers are unable to install a platform tag which predates a\n given release.\n\nThere are also challenges which would exist for any proposal, including\nthe time and effort it takes to define, prepare and release the Docker\nimages and corresponding auditwheel profiles. These challenges were\nexperienced in the long rollout period for manylinux2010, which took\napproximately 1 year from PEP acceptance to compatible build environment\npublished.[3]\n\nHowever, if this PEP can be an indicator, the process is now\nwell-defined and easily repeatable, which should increase the timeline\nfor rollout of a newer, updated platform tag.\n\nThe manylinux2014 policy\n\nThe following criteria determine a linux wheel's eligibility for the\nmanylinux2014 tag:\n\n1. The wheel may only contain binary executables and shared objects\n compiled for one of the following architectures supported by CentOS\n 7, or a CentOS 7 compatible base image (such as ubi7):[4] :\n\n x86_64\n i686\n aarch64\n armv7l\n ppc64\n ppc64le\n s390x\n\n This list adds support for ARMv7 (armv7l), ARMv8 (aarch64) and\n PowerPC (ppc64, ppc64le) architectures supported by the CentOS\n Alternative Architecture Special Interest Group, as well as the IBM\n Z (s390x) architecture.[5]\n\n2. 
The wheel's binary executables or shared objects may not link\n against externally-provided libraries except those in the following\n list: :\n\n libgcc_s.so.1\n libstdc++.so.6\n libm.so.6\n libdl.so.2\n librt.so.1\n libc.so.6\n libnsl.so.1\n libutil.so.1\n libpthread.so.0\n libresolv.so.2\n libX11.so.6\n libXext.so.6\n libXrender.so.1\n libICE.so.6\n libSM.so.6\n libGL.so.1\n libgobject-2.0.so.0\n libgthread-2.0.so.0\n libglib-2.0.so.0\n\n This list is identical to the externally-provided libraries\n originally allowed for manylinux2010, with one exception:\n libcrypt.so.1 was removed due to being deprecated in Fedora 30.\n libpythonX.Y remains ineligible for inclusion for the same reasons\n outlined in PEP 513.\n\n On Debian-based systems, these libraries are provided by the\n packages:\n\n Package Libraries\n -------------- ----------------------------------------------------------------------------------------------------------\n libc6 libdl.so.2, libresolv.so.2, librt.so.1, libc.so.6, libpthread.so.0, libm.so.6, libutil.so.1, libnsl.so.1\n libgcc1 libgcc_s.so.1\n libgl1 libGL.so.1\n libglib2.0-0 libgobject-2.0.so.0, libgthread-2.0.so.0, libglib-2.0.so.0\n libice6 libICE.so.6\n libsm6 libSM.so.6\n libstdc++6 libstdc++.so.6\n libx11-6 libX11.so.6\n libxext6 libXext.so.6\n libxrender1 libXrender.so.1\n\n On RPM-based systems, they are provided by these packages:\n\n Package Libraries\n ------------ ----------------------------------------------------------------------------------------------------------\n glib2 libglib-2.0.so.0, libgthread-2.0.so.0, libgobject-2.0.so.0\n glibc libresolv.so.2, libutil.so.1, libnsl.so.1, librt.so.1, libpthread.so.0, libdl.so.2, libm.so.6, libc.so.6\n libICE libICE.so.6\n libX11 libX11.so.6\n libXext: libXext.so.6\n libXrender libXrender.so.1\n libgcc: libgcc_s.so.1\n libstdc++ libstdc++.so.6\n mesa libGL.so.1\n\n3. If the wheel contains binary executables or shared objects linked\n against any allowed libraries that also export versioned symbols,\n they may only depend on the following maximum versions:\n\n GLIBC_2.17\n CXXABI_1.3.7, CXXABI_TM_1 is also allowed\n GLIBCXX_3.4.19\n GCC_4.8.0\n\n As an example, manylinux2014 wheels may include binary artifacts\n that require glibc symbols at version GLIBC_2.12, because this an\n earlier version than the maximum of GLIBC_2.17.\n\n4. If a wheel is built for any version of CPython 2 or CPython versions\n 3.0 up to and including 3.2, it must include a CPython ABI tag\n indicating its Unicode ABI. A manylinux2014 wheel built against\n Python 2, then, must include either the cpy27mu tag indicating it\n was built against an interpreter with the UCS-4 ABI or the cpy27m\n tag indicating an interpreter with the UCS-2 ABI. (PEP 3149[6])\n\n5. A wheel must not require the PyFPE_jbuf symbol. This is achieved by\n building it against a Python compiled without the --with-fpectl\n configure flag.\n\nCompilation of Compliant Wheels\n\nLike manylinux1, the auditwheel tool adds manylinux2014 platform tags to\nlinux wheels built by pip wheel or bdist_wheel in a manylinux2014 Docker\ncontainer.\n\nDocker Images\n\nA manylinux2014 Docker image based on CentOS 7 x86_64 should be provided\nfor building binary linux wheels that can reliably be converted to\nmanylinux2014 wheels. 
This image will come with a full compiler suite\ninstalled (gcc, g++, and gfortran 4.8.5) as well as the latest releases\nof Python and pip.\n\nAuditwheel\n\nThe auditwheel tool will also be updated to produce manylinux2014\nwheels.[7] Its behavior and purpose will be otherwise unchanged from PEP\n513.\n\nPlatform Detection for Installers\n\nPlatforms may define a manylinux2014_compatible boolean attribute on the\n_manylinux module described in PEP 513. A platform is considered\nincompatible with manylinux2014 if the attribute is False.\n\nIf the _manylinux module is not found, or it does not have the attribute\nmanylinux2014_compatible, tools may fall back to checking for glibc. If\nthe platform has glibc 2.17 or newer, it is assumed to be compatible\nunless the _manylinux module says otherwise.\n\nSpecifically, the algorithm we propose is:\n\n def is_manylinux2014_compatible():\n # Only Linux, and only supported architectures\n from distutils.util import get_platform\n\n if get_platform() not in [\n \"linux-x86_64\",\n \"linux-i686\",\n \"linux-aarch64\",\n \"linux-armv7l\",\n \"linux-ppc64\",\n \"linux-ppc64le\",\n \"linux-s390x\",\n ]:\n return False\n\n # Check for presence of _manylinux module\n try:\n import _manylinux\n\n return bool(_manylinux.manylinux2014_compatible)\n except (ImportError, AttributeError):\n # Fall through to heuristic check below\n pass\n\n # Check glibc version. CentOS 7 uses glibc 2.17.\n # PEP 513 contains an implementation of this function.\n return have_compatible_glibc(2, 17)\n\nBackwards compatibility with manylinux2010 wheels\n\nAs explained in PEP 513, the specified symbol versions for manylinux1\nallowed libraries constitute an upper bound. The same is true for the\nsymbol versions defined for manylinux2014 in this PEP. 
As a result,\nmanylinux1 and manylinux2010 wheels are considered manylinux2014 wheels.\nA pip that recognizes the manylinux2014 platform tag will thus install\nmanylinux2010 wheels for manylinux2014 platforms -- even when explicitly\nset -- when no manylinux2014 wheels are available.\n\nPyPI Support\n\nPyPI should permit wheels containing the manylinux2014 platform tag to\nbe uploaded in the same way that it permits manylinux2010.\n\nIf technically feasible, PyPI should attempt to verify the compatibility\nof manylinux2014 wheels, but that capability is not a requirement for\nadoption of this PEP.\n\nPackage authors should not upload non-compliant manylinux2014 wheels to\nPyPI, and should be aware that PyPI may begin blocking non-compliant\nwheels from being uploaded.\n\nReferences\n\nAcceptance\n\nPEP 599 was accepted by Paul Moore on July 31, 2019.\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\n[1] CentOS Product Specifications\n(https://wiki.centos.org/About/Product)\n\n[2] CentOS Product Specifications\n(https://wiki.centos.org/About/Product)\n\n[3] Tracking issue for manylinux2010 rollout\n(https://github.com/pypa/manylinux/issues/179)\n\n[4] Red Hat Universal Base Image 7\n(https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/ubi7)\n\n[5] The CentOS Alternative Architecture Special Interest Group\n(https://wiki.centos.org/SpecialInterestGroup/AltArch)\n\n[6] SOABI support for Python 2.X and PyPy\n(https://github.com/pypa/pip/pull/3075)\n\n[7] auditwheel (https://github.com/pypa/auditwheel/)\n\nPEP: 674 Title: Disallow using macros as l-values Author: Victor Stinner\n Status: Deferred Type: Standards Track\nContent-Type: text/x-rst Created: 30-Nov-2021 Python-Version: 3.12\n\nAbstract\n\nDisallow using macros as l-values. For example, Py_TYPE(obj) = new_type\nnow fails with a compiler error.\n\nIn practice, the majority of affected projects only have to make two\nchanges:\n\n- Replace Py_TYPE(obj) = new_type with Py_SET_TYPE(obj, new_type).\n- Replace Py_SIZE(obj) = new_size with Py_SET_SIZE(obj, new_size).\n\nPEP Deferral\n\nSee SC reply to PEP 674 -- Disallow using macros as l-values (February\n2022).\n\nRationale\n\nUsing a macro as an l-value\n\nIn the Python C API, some functions are implemented as macros because\nwriting a macro is simpler than writing a regular function.
If a macro\nexposes directly a structure member, it is technically possible to use\nthis macro to not only get the structure member but also set it.\n\nExample with the Python 3.10 Py_TYPE() macro:\n\n #define Py_TYPE(ob) (((PyObject *)(ob))->ob_type)\n\nThis macro can be used as a r-value to get an object type:\n\n type = Py_TYPE(object);\n\nIt can also be used as an l-value to set an object type:\n\n Py_TYPE(object) = new_type;\n\nIt is also possible to set an object reference count and an object size\nusing Py_REFCNT() and Py_SIZE() macros.\n\nSetting directly an object attribute relies on the current exact CPython\nimplementation. Implementing this feature in other Python\nimplementations can make their C API implementation less efficient.\n\nCPython nogil fork\n\nSam Gross forked Python 3.9 to remove the GIL: the nogil branch. This\nfork has no PyObject.ob_refcnt member, but a more elaborated\nimplementation for reference counting, and so the\nPy_REFCNT(obj) = new_refcnt; code fails with a compiler error.\n\nMerging the nogil fork into the upstream CPython main branch requires\nfirst to fix this C API compatibility issue. It is a concrete example of\na Python optimization blocked indirectly by the C API.\n\nThis issue was already fixed in Python 3.10: the Py_REFCNT() macro has\nbeen already modified to disallow using it as an l-value.\n\nThese statements are endorsed by Sam Gross (nogil developer).\n\nHPy project\n\nThe HPy project is a brand new C API for Python using only handles and\nfunction calls: handles are opaque, structure members cannot be accessed\ndirectly, and pointers cannot be dereferenced.\n\nSearching and replacing Py_SET_SIZE() is easier and safer than searching\nand replacing some strange macro uses of Py_SIZE(). Py_SIZE() can be\nsemi-mechanically replaced by HPy_Length(), whereas seeing Py_SET_SIZE()\nwould immediately make clear that the code needs bigger changes in order\nto be ported to HPy (for example by using HPyTupleBuilder or\nHPyListBuilder).\n\nThe fewer internal details exposed via macros, the easier it will be for\nHPy to provide direct equivalents. Any macro that references\n\"non-public\" interfaces effectively exposes those interfaces publicly.\n\nThese statements are endorsed by Antonio Cuni (HPy developer).\n\nGraalVM Python\n\nIn GraalVM, when a Python object is accessed by the Python C API, the C\nAPI emulation layer has to wrap the GraalVM objects into wrappers that\nexpose the internal structure of the CPython structures (PyObject,\nPyLongObject, PyTypeObject, etc). This is because when the C code\naccesses it directly or via macros, all GraalVM can intercept is a read\nat the struct offset, which has to be mapped back to the representation\nin GraalVM. The smaller the \"effective\" number of exposed struct members\n(by replacing macros with functions), the simpler GraalVM wrappers can\nbe.\n\nThis PEP alone is not enough to get rid of the wrappers in GraalVM, but\nit is a step towards this long term goal. 
GraalVM already supports HPy\nwhich is a better solution in the long term.\n\nThese statements are endorsed by Tim Felgentreff (GraalVM Python\ndeveloper).\n\nSpecification\n\nDisallow using macros as l-values\n\nThe following 65 macros are modified to disallow using them as l-values.\n\nPyObject and PyVarObject macros\n\n- Py_TYPE(): Py_SET_TYPE() must be used instead\n- Py_SIZE(): Py_SET_SIZE() must be used instead\n\nGET macros\n\n- PyByteArray_GET_SIZE()\n- PyBytes_GET_SIZE()\n- PyCFunction_GET_CLASS()\n- PyCFunction_GET_FLAGS()\n- PyCFunction_GET_FUNCTION()\n- PyCFunction_GET_SELF()\n- PyCell_GET()\n- PyCode_GetNumFree()\n- PyDict_GET_SIZE()\n- PyFunction_GET_ANNOTATIONS()\n- PyFunction_GET_CLOSURE()\n- PyFunction_GET_CODE()\n- PyFunction_GET_DEFAULTS()\n- PyFunction_GET_GLOBALS()\n- PyFunction_GET_KW_DEFAULTS()\n- PyFunction_GET_MODULE()\n- PyHeapType_GET_MEMBERS()\n- PyInstanceMethod_GET_FUNCTION()\n- PyList_GET_SIZE()\n- PyMemoryView_GET_BASE()\n- PyMemoryView_GET_BUFFER()\n- PyMethod_GET_FUNCTION()\n- PyMethod_GET_SELF()\n- PySet_GET_SIZE()\n- PyTuple_GET_SIZE()\n- PyUnicode_GET_DATA_SIZE()\n- PyUnicode_GET_LENGTH()\n- PyUnicode_GET_LENGTH()\n- PyUnicode_GET_SIZE()\n- PyWeakref_GET_OBJECT()\n\nAS macros\n\n- PyByteArray_AS_STRING()\n- PyBytes_AS_STRING()\n- PyFloat_AS_DOUBLE()\n- PyUnicode_AS_DATA()\n- PyUnicode_AS_UNICODE()\n\nPyUnicode macros\n\n- PyUnicode_1BYTE_DATA()\n- PyUnicode_2BYTE_DATA()\n- PyUnicode_4BYTE_DATA()\n- PyUnicode_DATA()\n- PyUnicode_IS_ASCII()\n- PyUnicode_IS_COMPACT()\n- PyUnicode_IS_READY()\n- PyUnicode_KIND()\n- PyUnicode_READ()\n- PyUnicode_READ_CHAR()\n\nPyDateTime GET macros\n\n- PyDateTime_DATE_GET_FOLD()\n- PyDateTime_DATE_GET_HOUR()\n- PyDateTime_DATE_GET_MICROSECOND()\n- PyDateTime_DATE_GET_MINUTE()\n- PyDateTime_DATE_GET_SECOND()\n- PyDateTime_DATE_GET_TZINFO()\n- PyDateTime_DELTA_GET_DAYS()\n- PyDateTime_DELTA_GET_MICROSECONDS()\n- PyDateTime_DELTA_GET_SECONDS()\n- PyDateTime_GET_DAY()\n- PyDateTime_GET_MONTH()\n- PyDateTime_GET_YEAR()\n- PyDateTime_TIME_GET_FOLD()\n- PyDateTime_TIME_GET_HOUR()\n- PyDateTime_TIME_GET_MICROSECOND()\n- PyDateTime_TIME_GET_MINUTE()\n- PyDateTime_TIME_GET_SECOND()\n- PyDateTime_TIME_GET_TZINFO()\n\nPort C extensions to Python 3.11\n\nIn practice, the majority of projects affected by these PEP only have to\nmake two changes:\n\n- Replace Py_TYPE(obj) = new_type with Py_SET_TYPE(obj, new_type).\n- Replace Py_SIZE(obj) = new_size with Py_SET_SIZE(obj, new_size).\n\nThe pythoncapi_compat project can be used to update automatically C\nextensions: add Python 3.11 support without losing support with older\nPython versions. The project provides a header file which provides\nPy_SET_REFCNT(), Py_SET_TYPE() and Py_SET_SIZE() functions to Python 3.8\nand older.\n\nPyTuple_GET_ITEM() and PyList_GET_ITEM() are left unchanged\n\nThe PyTuple_GET_ITEM() and PyList_GET_ITEM() macros are left unchanged.\n\nThe code patterns &PyTuple_GET_ITEM(tuple, 0) and\n&PyList_GET_ITEM(list, 0) are still commonly used to get access to the\ninner PyObject** array.\n\nChanging these macros is out of the scope of this PEP.\n\nPyDescr_NAME() and PyDescr_TYPE() are left unchanged\n\nThe PyDescr_NAME() and PyDescr_TYPE() macros are left unchanged.\n\nThese macros give access to PyDescrObject.d_name and\nPyDescrObject.d_type members. They can be used as l-values to set these\nmembers.\n\nThe SWIG project uses these macros as l-values to set these members. 
It\nwould be possible to modify SWIG to prevent setting PyDescrObject\nstructure members directly, but it is not really worth it since the\nPyDescrObject structure is not performance critical and is unlikely to\nchange soon.\n\nSee the bpo-46538 \"[C API] Make the PyDescrObject structure opaque:\nPyDescr_NAME() and PyDescr_TYPE()\" issue for more details.\n\nImplementation\n\nThe implementation is tracked by bpo-45476: [C API] PEP 674: Disallow\nusing macros as l-values.\n\nPy_TYPE() and Py_SIZE() macros\n\nIn May 2020, the Py_TYPE() and Py_SIZE() macros have been modified to\ndisallow using them as l-values (Py_TYPE, Py_SIZE).\n\nIn November 2020, the change was reverted, since it broke too many third\nparty projects.\n\nIn June 2021, once most third party projects were updated, a second\nattempt was done, but had to be reverted again , since it broke\ntest_exceptions on Windows.\n\nIn September 2021, once test_exceptions has been fixed, Py_TYPE() and\nPy_SIZE() were finally changed.\n\nIn November 2021, this backward incompatible change got a Steering\nCouncil exception.\n\nIn October 2022, Python 3.11 got released with Py_TYPE() and Py_SIZE()\nincompatible changes.\n\nBackwards Compatibility\n\nThe proposed C API changes are backward incompatible on purpose.\n\nIn practice, only Py_TYPE() and Py_SIZE() macros are used as l-values.\n\nThis change does not follow the PEP 387 deprecation process. There is no\nknown way to emit a deprecation warning only when a macro is used as an\nl-value, but not when it's used differently (ex: as a r-value).\n\nThe following 4 macros are left unchanged to reduce the number of\naffected projects: PyDescr_NAME(), PyDescr_TYPE(), PyList_GET_ITEM() and\nPyTuple_GET_ITEM().\n\nStatistics\n\nIn total (projects on PyPI and not on PyPI), 34 projects are known to be\naffected by this PEP:\n\n- 16 projects (47%) are already fixed\n- 18 projects (53%) are not fixed yet (pending fix or have to\n regenerate their Cython code)\n\nOn September 1, 2022, the PEP affects 18 projects (0.4%) of the top 5000\nPyPI projects:\n\n- 15 projects (0.3%) have to regenerate their Cython code\n- 3 projects (0.1%) have a pending fix\n\nTop 5000 PyPI\n\nProjects with a pending fix (3):\n\n- datatable (1.0.0): fixed\n- guppy3 (3.1.2): fixed\n- scipy (1.9.3): need to update boost python\n\nMoreover, 15 projects have to regenerate their Cython code.\n\nProjects released with a fix (12):\n\n- bitarray (1.6.2): commit\n- Cython (0.29.20): commit\n- immutables (0.15): commit\n- mercurial (5.7): commit, bug report\n- mypy (v0.930): commit\n- numpy (1.22.1): commit, commit 2\n- pycurl (7.44.1): commit\n- PyGObject (3.42.0)\n- pyside2 (5.15.1): bug report\n- python-snappy (0.6.1): fixed\n- recordclass (0.17.2): fixed\n- zstd (1.5.0.3): commit\n\nThere are also two backport projects which are affected by this PEP:\n\n- pickle5 (0.0.12): backport for Python <= 3.7\n- pysha3 (1.0.2): backport for Python <= 3.5\n\nThey must not be used and cannot be used on Python 3.11.\n\nOther affected projects\n\nOther projects released with a fix (4):\n\n- boost (1.78.0): commit\n- breezy (3.2.1): bug report\n- duplicity (0.8.18): commit\n- gobject-introspection (1.70.0): MR\n\nRelationship with the HPy project\n\nThe HPy project\n\nThe hope with the HPy project is to provide a C API that is close to the\noriginal API—to make porting easy—and have it perform as close to the\nexisting API as possible. 
At the same time, HPy is sufficiently removed\nto be a good \"C extension API\" (as opposed to a stable subset of the\nCPython implementation API) that does not leak implementation details.\nTo ensure this latter property, the HPy project tries to develop\neverything in parallel for CPython, PyPy, and GraalVM Python.\n\nHPy is still evolving very fast. Issues are still being solved while\nmigrating NumPy, and work has begun on adding support for HPy to Cython.\nWork on pybind11 is starting soon. Tim Felgentreff believes by the time\nHPy has these users of the existing C API working, HPy should be in a\nstate where it is generally useful and can be deemed stable enough that\nfurther development can follow a more stable process.\n\nIn the long run the HPy project would like to become a promoted API to\nwrite Python C extensions.\n\nThe HPy project is a good solution for the long term. It has the\nadvantage of being developed outside Python and it doesn't require any C\nAPI change.\n\nThe C API is here is stay for a few more years\n\nThe first concern about HPy is that right now, HPy is not mature nor\nwidely used, and CPython still has to continue supporting a large amount\nof C extensions which are not likely to be ported to HPy soon.\n\nThe second concern is the inability to evolve CPython internals to\nimplement new optimizations, and the inefficient implementation of the\ncurrent C API in PyPy, GraalPython, etc. Sadly, HPy will only solve\nthese problems when most C extensions will be fully ported to HPy: when\nit will become reasonable to consider dropping the \"legacy\" Python C\nAPI.\n\nWhile porting a C extension to HPy can be done incrementally on CPython,\nit requires to modify a lot of code and takes time. Porting most C\nextensions to HPy is expected to take a few years.\n\nThis PEP proposes to make the C API \"less bad\" by fixing one problem\nwhich is clearily identified as causing practical issues: macros used as\nl-values. This PEP only requires updating a minority of C extensions,\nand usually only a few lines need to be changed in impacted extensions.\n\nFor example, NumPy 1.22 is made of 307,300 lines of C code, and adapting\nNumPy to the this PEP only modified 11 lines (use Py_SET_TYPE and\nPy_SET_SIZE) and adding 4 lines (to define Py_SET_TYPE and Py_SET_SIZE\nfor Python 3.8 and older). The beginnings of the NumPy port to HPy\nalready required modifying more lines than that.\n\nRight now, it's hard to bet which approach is the best: fixing the\ncurrent C API, or focusing on HPy. It would be risky to only focus on\nHPy.\n\nRejected Idea: Leave the macros as they are\n\nThe documentation of each function can discourage developers to use\nmacros to modify Python objects.\n\nIf these is a need to make an assignment, a setter function can be added\nand the macro documentation can require to use the setter function. For\nexample, a Py_SET_TYPE() function has been added to Python 3.9 and the\nPy_TYPE() documentation now requires to use the Py_SET_TYPE() function\nto set an object type.\n\nIf developers use macros as an l-value, it's their responsibility when\ntheir code breaks, not Python's responsibility. We are operating under\nthe consenting adults principle: we expect users of the Python C API to\nuse it as documented and expect them to take care of the fallout, if\nthings break when they don't.\n\nThis idea was rejected because only few developers read the\ndocumentation, and only a minority is tracking changes of the Python C\nAPI documentation. 
The majority of developers are only using CPython and\nso are not aware of compatibility issues with other Python\nimplementations.\n\nMoreover, continuing to allow using macros as an l-value does not help\nthe HPy project, and leaves the burden of emulating them on GraalVM's\nPython implementation.\n\nMacros already modified\n\nThe following C API macros have already been modified to disallow using\nthem as l-value:\n\n- PyCell_SET()\n- PyList_SET_ITEM()\n- PyTuple_SET_ITEM()\n- Py_REFCNT() (Python 3.10): Py_SET_REFCNT() must be used\n- _PyGCHead_SET_FINALIZED()\n- _PyGCHead_SET_NEXT()\n- asdl_seq_GET()\n- asdl_seq_GET_UNTYPED()\n- asdl_seq_LEN()\n- asdl_seq_SET()\n- asdl_seq_SET_UNTYPED()\n\nFor example, PyList_SET_ITEM(list, 0, item) < 0 now fails with a\ncompiler error as expected.\n\nPost History\n\n- PEP 674 \"Disallow using macros as l-values\" and Python 3.11 (August\n 18, 2022)\n- SC reply to PEP 674 -- Disallow using macros as l-values (February\n 22, 2022)\n- PEP 674: Disallow using macros as l-value (version 2) (Jan 18, 2022)\n- PEP 674: Disallow using macros as l-value (Nov 30, 2021)\n\nReferences\n\n- Python C API: Add functions to access PyObject (October\n 2021) article by Victor Stinner\n- [capi-sig] Py_TYPE() and Py_SIZE() become static inline functions\n (September 2021)\n- [C API] Avoid accessing PyObject and PyVarObject members directly:\n add Py_SET_TYPE() and Py_IS_TYPE(), disallow Py_TYPE(obj)=type\n (February 2020)\n- bpo-30459: PyList_SET_ITEM could be safer (May 2017)\n\nVersion History\n\n- Version 3: No longer change PyDescr_TYPE() and PyDescr_NAME() macros\n- Version 2: Add \"Relationship with the HPy project\" section, remove\n the PyPy section\n- Version 1: First public version\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\nPEP: 563 Title: Postponed Evaluation of Annotations Version: $Revision$\nLast-Modified: $Date$ Author: Łukasz Langa \nDiscussions-To: python-dev@python.org Status: Accepted Type: Standards\nTrack Topic: Typing Content-Type: text/x-rst Created: 08-Sep-2017\nPython-Version: 3.7 Post-History: 01-Nov-2017, 21-Nov-2017\nSuperseded-By: 649 Resolution:\nhttps://mail.python.org/pipermail/python-dev/2017-December/151042.html\n\nAbstract\n\nPEP 3107 introduced syntax for function annotations, but the semantics\nwere deliberately left undefined. PEP 484 introduced a standard meaning\nto annotations: type hints. PEP 526 defined variable annotations,\nexplicitly tying them with the type hinting use case.\n\nThis PEP proposes changing function annotations and variable annotations\nso that they are no longer evaluated at function definition time.\nInstead, they are preserved in __annotations__ in string form.\n\nThis change is being introduced gradually, starting with a __future__\nimport in Python 3.7.\n\nRationale and Goals\n\nPEP 3107 added support for arbitrary annotations on parts of a function\ndefinition.
Just like default values, annotations are evaluated at\nfunction definition time. This creates a number of issues for the type\nhinting use case:\n\n- forward references: when a type hint contains names that have not\n been defined yet, that definition needs to be expressed as a string\n literal;\n- type hints are executed at module import time, which is not\n computationally free.\n\nPostponing the evaluation of annotations solves both problems. NOTE: PEP\n649 proposes an alternative solution to the above issues, putting this\nPEP in danger of being superseded.\n\nNon-goals\n\nJust like in PEP 484 and PEP 526, it should be emphasized that Python\nwill remain a dynamically typed language, and the authors have no desire\nto ever make type hints mandatory, even by convention.\n\nThis PEP is meant to solve the problem of forward references in type\nannotations. There are still cases outside of annotations where forward\nreferences will require usage of string literals. Those are listed in a\nlater section of this document.\n\nAnnotations without forced evaluation enable opportunities to improve\nthe syntax of type hints. This idea will require its own separate PEP\nand is not discussed further in this document.\n\nNon-typing usage of annotations\n\nWhile annotations are still available for arbitrary use besides type\nchecking, it is worth mentioning that the design of this PEP, as well as\nits precursors (PEP 484 and PEP 526), is predominantly motivated by the\ntype hinting use case.\n\nIn Python 3.8 PEP 484 will graduate from provisional status. Other\nenhancements to the Python programming language like PEP 544, PEP 557,\nor PEP 560, are already being built on this basis as they depend on type\nannotations and the typing module as defined by PEP 484. In fact, the\nreason PEP 484 is staying provisional in Python 3.7 is to enable rapid\nevolution for another release cycle that some of the aforementioned\nenhancements require.\n\nWith this in mind, uses for annotations incompatible with the\naforementioned PEPs should be considered deprecated.\n\nImplementation\n\nWith this PEP, function and variable annotations will no longer be\nevaluated at definition time. Instead, a string form will be preserved\nin the respective __annotations__ dictionary. Static type checkers will\nsee no difference in behavior, whereas tools using annotations at\nruntime will have to perform postponed evaluation.\n\nThe string form is obtained from the AST during the compilation step,\nwhich means that the string form might not preserve the exact formatting\nof the source. Note: if an annotation was a string literal already, it\nwill still be wrapped in a string.\n\nAnnotations need to be syntactically valid Python expressions, also when\npassed as literal strings (i.e. 
compile(literal, '', 'eval')).\nAnnotations can only use names present in the module scope as postponed\nevaluation using local names is not reliable (with the sole exception of\nclass-level names resolved by typing.get_type_hints()).\n\nNote that as per PEP 526, local variable annotations are not evaluated\nat all since they are not accessible outside of the function's closure.\n\nEnabling the future behavior in Python 3.7\n\nThe functionality described above can be enabled starting from Python\n3.7 using the following special import:\n\n from __future__ import annotations\n\nA reference implementation of this functionality is available on GitHub.\n\nResolving Type Hints at Runtime\n\nTo resolve an annotation at runtime from its string form to the result\nof the enclosed expression, user code needs to evaluate the string.\n\nFor code that uses type hints, the\ntyping.get_type_hints(obj, globalns=None, localns=None) function\ncorrectly evaluates expressions back from its string form. Note that all\nvalid code currently using __annotations__ should already be doing that\nsince a type annotation can be expressed as a string literal.\n\nFor code which uses annotations for other purposes, a regular\neval(ann, globals, locals) call is enough to resolve the annotation.\n\nIn both cases it's important to consider how globals and locals affect\nthe postponed evaluation. An annotation is no longer evaluated at the\ntime of definition and, more importantly, in the same scope where it was\ndefined. Consequently, using local state in annotations is no longer\npossible in general. As for globals, the module where the annotation was\ndefined is the correct context for postponed evaluation.\n\nThe get_type_hints() function automatically resolves the correct value\nof globalns for functions and classes. It also automatically provides\nthe correct localns for classes.\n\nWhen running eval(), the value of globals can be gathered in the\nfollowing way:\n\n- function objects hold a reference to their respective globals in an\n attribute called __globals__;\n\n- classes hold the name of the module they were defined in, this can\n be used to retrieve the respective globals:\n\n cls_globals = vars(sys.modules[SomeClass.__module__])\n\n Note that this needs to be repeated for base classes to evaluate all\n __annotations__.\n\n- modules should use their own __dict__.\n\nThe value of localns cannot be reliably retrieved for functions because\nin all likelihood the stack frame at the time of the call no longer\nexists.\n\nFor classes, localns can be composed by chaining vars of the given class\nand its base classes (in the method resolution order). Since slots can\nonly be filled after the class was defined, we don't need to consult\nthem for this purpose.\n\nRuntime annotation resolution and class decorators\n\nMetaclasses and class decorators that need to resolve annotations for\nthe current class will fail for annotations that use the name of the\ncurrent class. Example:\n\n def class_decorator(cls):\n annotations = get_type_hints(cls) # raises NameError on 'C'\n print(f'Annotations for {cls}: {annotations}')\n return cls\n\n @class_decorator\n class C:\n singleton: 'C' = None\n\nThis was already true before this PEP. The class decorator acts on the\nclass before it's assigned a name in the current definition scope.\n\nRuntime annotation resolution and TYPE_CHECKING\n\nSometimes there's code that must be seen by a type checker but should\nnot be executed. 
For such situations the typing module defines a\nconstant, TYPE_CHECKING, that is considered True during type checking\nbut False at runtime. Example:\n\n import typing\n\n if typing.TYPE_CHECKING:\n import expensive_mod\n\n def a_func(arg: expensive_mod.SomeClass) -> None:\n a_var: expensive_mod.SomeClass = arg\n ...\n\nThis approach is also useful when handling import cycles.\n\nTrying to resolve annotations of a_func at runtime using\ntyping.get_type_hints() will fail since the name expensive_mod is not\ndefined (TYPE_CHECKING variable being False at runtime). This was\nalready true before this PEP.\n\nBackwards Compatibility\n\nThis is a backwards incompatible change. Applications depending on\narbitrary objects to be directly present in annotations will break if\nthey are not using typing.get_type_hints() or eval().\n\nAnnotations that depend on locals at the time of the function definition\nwill not be resolvable later. Example:\n\n def generate():\n A = Optional[int]\n class C:\n field: A = 1\n def method(self, arg: A) -> None: ...\n return C\n X = generate()\n\nTrying to resolve annotations of X later by using get_type_hints(X) will\nfail because A and its enclosing scope no longer exists. Python will\nmake no attempt to disallow such annotations since they can often still\nbe successfully statically analyzed, which is the predominant use case\nfor annotations.\n\nAnnotations using nested classes and their respective state are still\nvalid. They can use local names or the fully qualified name. Example:\n\n class C:\n field = 'c_field'\n def method(self) -> C.field: # this is OK\n ...\n\n def method(self) -> field: # this is OK\n ...\n\n def method(self) -> C.D: # this is OK\n ...\n\n def method(self) -> D: # this is OK\n ...\n\n class D:\n field2 = 'd_field'\n def method(self) -> C.D.field2: # this is OK\n ...\n\n def method(self) -> D.field2: # this FAILS, class D is local to C \n ... # and is therefore only available \n # as C.D. This was already true\n # before the PEP.\n\n def method(self) -> field2: # this is OK\n ...\n\n def method(self) -> field: # this FAILS, field is local to C and\n # is therefore not visible to D unless\n # accessed as C.field. This was already \n # true before the PEP.\n\nIn the presence of an annotation that isn't a syntactically valid\nexpression, SyntaxError is raised at compile time. However, since names\naren't resolved at that time, no attempt is made to validate whether\nused names are correct or not.\n\nDeprecation policy\n\nStarting with Python 3.7, a __future__ import is required to use the\ndescribed functionality. No warnings are raised.\n\nNOTE: Whether this will eventually become the default behavior is\ncurrently unclear pending decision on PEP 649. In any case, use of\nannotations that depend upon their eager evaluation is incompatible with\nboth proposals and is no longer supported.\n\nForward References\n\nDeliberately using a name before it was defined in the module is called\na forward reference. For the purpose of this section, we'll call any\nname imported or defined within a if TYPE_CHECKING: block a forward\nreference, too.\n\nThis PEP addresses the issue of forward references in type annotations.\nThe use of string literals will no longer be required in this case.\nHowever, there are APIs in the typing module that use other syntactic\nconstructs of the language, and those will still require working around\nforward references with string literals. 
The list includes:\n\n- type definitions:\n\n T = TypeVar('T', bound='')\n UserId = NewType('UserId', '')\n Employee = NamedTuple('Employee', [('name', ''), ('id', '')])\n\n- aliases:\n\n Alias = Optional['']\n AnotherAlias = Union['', '']\n YetAnotherAlias = ''\n\n- casting:\n\n cast('', value)\n\n- base classes:\n\n class C(Tuple['', '']): ...\n\nDepending on the specific case, some of the cases listed above might be\nworked around by placing the usage in a if TYPE_CHECKING: block. This\nwill not work for any code that needs to be available at runtime,\nnotably for base classes and casting. For named tuples, using the new\nclass definition syntax introduced in Python 3.6 solves the issue.\n\nIn general, fixing the issue for all forward references requires\nchanging how module instantiation is performed in Python, from the\ncurrent single-pass top-down model. This would be a major change in the\nlanguage and is out of scope for this PEP.\n\nRejected Ideas\n\nKeeping the ability to use function local state when defining annotations\n\nWith postponed evaluation, this would require keeping a reference to the\nframe in which an annotation got created. This could be achieved for\nexample by storing all annotations as lambdas instead of strings.\n\nThis would be prohibitively expensive for highly annotated code as the\nframes would keep all their objects alive. That includes predominantly\nobjects that won't ever be accessed again.\n\nTo be able to address class-level scope, the lambda approach would\nrequire a new kind of cell in the interpreter. This would proliferate\nthe number of types that can appear in __annotations__, as well as\nwouldn't be as introspectable as strings.\n\nNote that in the case of nested classes, the functionality to get the\neffective \"globals\" and \"locals\" at definition time is provided by\ntyping.get_type_hints().\n\nIf a function generates a class or a function with annotations that have\nto use local variables, it can populate the given generated object's\n__annotations__ dictionary directly, without relying on the compiler.\n\nDisallowing local state usage for classes, too\n\nThis PEP originally proposed limiting names within annotations to only\nallow names from the model-level scope, including for classes. The\nauthor argued this makes name resolution unambiguous, including in cases\nof conflicts between local names and module-level names.\n\nThis idea was ultimately rejected in case of classes. Instead,\ntyping.get_type_hints() got modified to populate the local namespace\ncorrectly if class-level annotations are needed.\n\nThe reasons for rejecting the idea were that it goes against the\nintuition of how scoping works in Python, and would break enough\nexisting type annotations to make the transition cumbersome. Finally,\nlocal scope access is required for class decorators to be able to\nevaluate type annotations. This is because class decorators are applied\nbefore the class receives its name in the outer scope.\n\nIntroducing a new dictionary for the string literal form instead\n\nYury Selivanov shared the following idea:\n\n1. Add a new special attribute to functions: __annotations_text__.\n2. Make __annotations__ a lazy dynamic mapping, evaluating expressions\n from the corresponding key in __annotations_text__ just-in-time.\n\nThis idea is supposed to solve the backwards compatibility issue,\nremoving the need for a new __future__ import. Sadly, this is not\nenough. Postponed evaluation changes which state the annotation has\naccess to. 
While postponed evaluation fixes the forward reference\nproblem, it also makes it impossible to access function-level locals\nanymore. This alone is a source of backwards incompatibility which\njustifies a deprecation period.\n\nA __future__ import is an obvious and explicit indicator of opting in\nfor the new functionality. It also makes it trivial for external tools\nto recognize the difference between a Python files using the old or the\nnew approach. In the former case, that tool would recognize that local\nstate access is allowed, whereas in the latter case it would recognize\nthat forward references are allowed.\n\nFinally, just-in-time evaluation in __annotations__ is an unnecessary\nstep if get_type_hints() is used later.\n\nDropping annotations with -O\n\nThere are two reasons this is not satisfying for the purpose of this\nPEP.\n\nFirst, this only addresses runtime cost, not forward references, those\nstill cannot be safely used in source code. A library maintainer would\nnever be able to use forward references since that would force the\nlibrary users to use this new hypothetical -O switch.\n\nSecond, this throws the baby out with the bath water. Now no runtime\nannotation use can be performed. PEP 557 is one example of a recent\ndevelopment where evaluating type annotations at runtime is useful.\n\nAll that being said, a granular -O option to drop annotations is a\npossibility in the future, as it's conceptually compatible with existing\n-O behavior (dropping docstrings and assert statements). This PEP does\nnot invalidate the idea.\n\nPassing string literals in annotations verbatim to __annotations__\n\nThis PEP originally suggested directly storing the contents of a string\nliteral under its respective key in __annotations__. This was meant to\nsimplify support for runtime type checkers.\n\nMark Shannon pointed out this idea was flawed since it wasn't handling\nsituations where strings are only part of a type annotation.\n\nThe inconsistency of it was always apparent but given that it doesn't\nfully prevent cases of double-wrapping strings anyway, it is not worth\nit.\n\nMaking the name of the future import more verbose\n\nInstead of requiring the following import:\n\n from __future__ import annotations\n\nthe PEP could call the feature more explicitly, for example\nstring_annotations, stringify_annotations, annotation_strings,\nannotations_as_strings, lazy_annotations, static_annotations, etc.\n\nThe problem with those names is that they are very verbose. Each of them\nbesides lazy_annotations would constitute the longest future feature\nname in Python. They are long to type and harder to remember than the\nsingle-word form.\n\nThere is precedence of a future import name that sounds overly generic\nbut in practice was obvious to users as to what it does:\n\n from __future__ import division\n\nPrior discussion\n\nIn PEP 484\n\nThe forward reference problem was discussed when PEP 484 was originally\ndrafted, leading to the following statement in the document:\n\n A compromise is possible where a __future__ import could enable\n turning all annotations in a given module into string literals, as\n follows:\n\n from __future__ import annotations\n\n class ImSet:\n def add(self, a: ImSet) -> List[ImSet]: ...\n\n assert ImSet.add.__annotations__ == {\n 'a': 'ImSet', 'return': 'List[ImSet]'\n }\n\n Such a __future__ import statement may be proposed in a separate PEP.\n\npython/typing#400\n\nThe problem was discussed at length on the typing module's GitHub\nproject, under Issue 400. 
The problem statement there includes critique\nof generic types requiring imports from typing. This tends to be\nconfusing to beginners:\n\n Why this:\n\n from typing import List, Set\n def dir(o: object = ...) -> List[str]: ...\n def add_friends(friends: Set[Friend]) -> None: ...\n\n But not this:\n\n def dir(o: object = ...) -> list[str]: ...\n def add_friends(friends: set[Friend]) -> None ...\n\n Why this:\n\n up_to_ten = list(range(10))\n friends = set()\n\n But not this:\n\n from typing import List, Set\n up_to_ten = List[int](range(10))\n friends = Set[Friend]()\n\nWhile typing usability is an interesting problem, it is out of scope of\nthis PEP. Specifically, any extensions of the typing syntax standardized\nin PEP 484 will require their own respective PEPs and approval.\n\nIssue 400 ultimately suggests postponing evaluation of annotations and\nkeeping them as strings in __annotations__, just like this PEP\nspecifies. This idea was received well. Ivan Levkivskyi supported using\nthe __future__ import and suggested unparsing the AST in compile.c.\nJukka Lehtosalo pointed out that there are some cases of forward\nreferences where types are used outside of annotations and postponed\nevaluation will not help those. For those cases using the string literal\nnotation would still be required. Those cases are discussed briefly in\nthe \"Forward References\" section of this PEP.\n\nThe biggest controversy on the issue was Guido van Rossum's concern that\nuntokenizing annotation expressions back to their string form has no\nprecedent in the Python programming language and feels like a hacky\nworkaround. He said:\n\n One thing that comes to mind is that it's a very random change to the\n language. It might be useful to have a more compact way to indicate\n deferred execution of expressions (using less syntax than lambda:).\n But why would the use case of type annotations be so all-important to\n change the language to do it there first (rather than proposing a more\n general solution), given that there's already a solution for this\n particular use case that requires very minimal syntax?\n\nEventually, Ethan Smith and schollii voiced that feedback gathered\nduring PyCon US suggests that the state of forward references needs\nfixing. Guido van Rossum suggested coming back to the __future__ idea,\npointing out that to prevent abuse, it's important for the annotations\nto be kept both syntactically valid and evaluating correctly at runtime.\n\nFirst draft discussion on python-ideas\n\nDiscussion happened largely in two threads, the original announcement\nand a follow-up called PEP 563 and expensive backwards compatibility.\n\nThe PEP received rather warm feedback (4 strongly in favor, 2 in favor\nwith concerns, 2 against). The biggest voice of concern on the former\nthread being Steven D'Aprano's review stating that the problem\ndefinition of the PEP doesn't justify breaking backwards compatibility.\nIn this response Steven seemed mostly concerned about Python no longer\nsupporting evaluation of annotations that depended on local\nfunction/class state.\n\nA few people voiced concerns that there are libraries using annotations\nfor non-typing purposes. However, none of the named libraries would be\ninvalidated by this PEP. They do require adapting to the new requirement\nto call eval() on the annotation with the correct globals and locals\nset.\n\nThis detail about globals and locals having to be correct was picked up\nby a number of commenters. 
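A minimal sketch of that adaptation (the helper and function names below are
invented for illustration, and the module is assumed to use the __future__
import so that __annotations__ holds strings):

    from __future__ import annotations

    def resolved_annotations(func):
        # Evaluate each stringified annotation in the module that defined func.
        return {name: eval(ann, func.__globals__)
                for name, ann in func.__annotations__.items()}

    def greet(name: str, times: int) -> list:
        return ['Hello, ' + name] * times

    assert greet.__annotations__['times'] == 'int'       # stored as a string
    assert resolved_annotations(greet)['times'] is int   # evaluated back to the type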
Alyssa (Nick) Coghlan benchmarked turning\nannotations into lambdas instead of strings, sadly this proved to be\nmuch slower at runtime than the current situation.\n\nThe latter thread was started by Jim J. Jewett who stressed that the\nability to properly evaluate annotations is an important requirement and\nbackwards compatibility in that regard is valuable. After some\ndiscussion he admitted that side effects in annotations are a code smell\nand modal support to either perform or not perform evaluation is a messy\nsolution. His biggest concern remained loss of functionality stemming\nfrom the evaluation restrictions on global and local scope.\n\nAlyssa Coghlan pointed out that some of those evaluation restrictions\nfrom the PEP could be lifted by a clever implementation of an evaluation\nhelper, which could solve self-referencing classes even in the form of a\nclass decorator. She suggested the PEP should provide this helper\nfunction in the standard library.\n\nSecond draft discussion on python-dev\n\nDiscussion happened mainly in the announcement thread, followed by a\nbrief discussion under Mark Shannon's post.\n\nSteven D'Aprano was concerned whether it's acceptable for typos to be\nallowed in annotations after the change proposed by the PEP. Brett\nCannon responded that type checkers and other static analyzers (like\nlinters or programming text editors) will catch this type of error.\nJukka Lehtosalo added that this situation is analogous to how names in\nfunction bodies are not resolved until the function is called.\n\nA major topic of discussion was Alyssa Coghlan's suggestion to store\nannotations in \"thunk form\", in other words as a specialized lambda\nwhich would be able to access class-level scope (and allow for scope\ncustomization at call time). He presented a possible design for it\n(indirect attribute cells). This was later seen as equivalent to\n\"special forms\" in Lisp. Guido van Rossum expressed worry that this sort\nof feature cannot be safely implemented in twelve weeks (i.e. in time\nbefore the Python 3.7 beta freeze).\n\nAfter a while it became clear that the point of division between\nsupporters of the string form vs. supporters of the thunk form is\nactually about whether annotations should be perceived as a general\nsyntactic element vs. something tied to the type checking use case.\n\nFinally, Guido van Rossum declared he's rejecting the thunk idea based\non the fact that it would require a new building block in the\ninterpreter. This block would be exposed in annotations, multiplying\npossible types of values stored in __annotations__ (arbitrary objects,\nstrings, and now thunks). Moreover, thunks aren't as introspectable as\nstrings. Most importantly, Guido van Rossum explicitly stated interest\nin gradually restricting the use of annotations to static typing (with\nan optional runtime component).\n\nAlyssa Coghlan got convinced to PEP 563, too, promptly beginning the\nmandatory bike shedding session on the name of the __future__ import.\nMany debaters agreed that annotations seems like an overly broad name\nfor the feature name. 
Guido van Rossum briefly decided to call it
string_annotations but then changed his mind, arguing that division is a
precedent of a broad name with a clear meaning.

The final improvement to the PEP suggested in the discussion by Mark
Shannon was the rejection of the temptation to pass string literals
through to __annotations__ verbatim.

A side-thread of discussion started around the runtime penalty of static
typing, with topics like the import time of the typing module (which is
comparable to re without dependencies, and three times as heavy as re
when counting dependencies).

Acknowledgements

This document could not be completed without valuable input,
encouragement and advice from Guido van Rossum, Jukka Lehtosalo, and
Ivan Levkivskyi.

The implementation was thoroughly reviewed by Serhiy Storchaka who found
all sorts of issues, including bugs, bad readability, and performance
problems.

Copyright

This document has been placed in the public domain.

PEP: 428
Title: The pathlib module -- object-oriented filesystem paths
Version: $Revision$
Last-Modified: $Date$
Author: Antoine Pitrou
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 30-Jul-2012
Python-Version: 3.4
Post-History: 05-Oct-2012
Resolution: https://mail.python.org/pipermail/python-dev/2013-November/130424.html

Abstract

This PEP proposes the inclusion of a third-party module, pathlib, in the
standard library. The inclusion is proposed under the provisional label,
as described in PEP 411. Therefore, API changes can be done, either as
part of the PEP process, or after acceptance in the standard library
(and until the provisional label is removed).

The aim of this library is to provide a simple hierarchy of classes to
handle filesystem paths and the common operations users do over them.

Related work

An object-oriented API for filesystem paths has already been proposed
and rejected in PEP 355. Several third-party implementations of the idea
of object-oriented filesystem paths exist in the wild:

- The historical path.py module by Jason Orendorff, Jason R. Coombs
  and others, which provides a str-subclassing Path class;
- Twisted's slightly specialized FilePath class;
- An AlternativePathClass proposal, subclassing tuple rather than str;
- Unipath, a variation on the str-subclassing approach with two public
  classes, an AbstractPath class for operations which don't do I/O and
  a Path class for all common operations.

This proposal attempts to learn from these previous attempts and the
rejection of PEP 355.

Implementation

The implementation of this proposal is tracked in the pep428 branch of
pathlib's Mercurial repository.

Why an object-oriented API

The rationale to represent filesystem paths using dedicated classes is
the same as for other kinds of stateless objects, such as dates, times
or IP addresses. 
Python has been slowly moving away from strictly\nreplicating the C language's APIs to providing better, more helpful\nabstractions around all kinds of common functionality. Even if this PEP\nisn't accepted, it is likely that another form of filesystem handling\nabstraction will be adopted one day into the standard library.\n\nIndeed, many people will prefer handling dates and times using the\nhigh-level objects provided by the datetime module, rather than using\nnumeric timestamps and the time module API. Moreover, using a dedicated\nclass allows to enable desirable behaviours by default, for example the\ncase insensitivity of Windows paths.\n\nProposal\n\nClass hierarchy\n\nThe pathlib module implements a simple hierarchy of classes:\n\n +----------+\n | |\n ---------| PurePath |--------\n | | | |\n | +----------+ |\n | | |\n | | |\n v | v\n +---------------+ | +-----------------+\n | | | | |\n | PurePosixPath | | | PureWindowsPath |\n | | | | |\n +---------------+ | +-----------------+\n | v |\n | +------+ |\n | | | |\n | -------| Path |------ |\n | | | | | |\n | | +------+ | |\n | | | |\n | | | |\n v v v v\n +-----------+ +-------------+\n | | | |\n | PosixPath | | WindowsPath |\n | | | |\n +-----------+ +-------------+\n\nThis hierarchy divides path classes along two dimensions:\n\n- a path class can be either pure or concrete: pure classes support\n only operations that don't need to do any actual I/O, which are most\n path manipulation operations; concrete classes support all the\n operations of pure classes, plus operations that do I/O.\n- a path class is of a given flavour according to the kind of\n operating system paths it represents. pathlib implements two\n flavours: Windows paths for the filesystem semantics embodied in\n Windows systems, POSIX paths for other systems.\n\nAny pure class can be instantiated on any system: for example, you can\nmanipulate PurePosixPath objects under Windows, PureWindowsPath objects\nunder Unix, and so on. However, concrete classes can only be\ninstantiated on a matching system: indeed, it would be error-prone to\nstart doing I/O with WindowsPath objects under Unix, or vice-versa.\n\nFurthermore, there are two base classes which also act as\nsystem-dependent factories: PurePath will instantiate either a\nPurePosixPath or a PureWindowsPath depending on the operating system.\nSimilarly, Path will instantiate either a PosixPath or a WindowsPath.\n\nIt is expected that, in most uses, using the Path class is adequate,\nwhich is why it has the shortest name of all.\n\nNo confusion with builtins\n\nIn this proposal, the path classes do not derive from a builtin type.\nThis contrasts with some other Path class proposals which were derived\nfrom str. They also do not pretend to implement the sequence protocol:\nif you want a path to act as a sequence, you have to lookup a dedicated\nattribute (the parts attribute).\n\nThe key reasoning behind not inheriting from str is to prevent\naccidentally performing operations with a string representing a path and\na string that doesn't, e.g. path + an_accident. Since operations with a\nstring will not necessarily lead to a valid or expected file system\npath, \"explicit is better than implicit\" by avoiding accidental\noperations with strings by not subclassing it. 
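For instance, the accidental operation fails loudly instead of silently
producing a bogus path, while the explicit spelling keeps working (an
illustrative session; the exact error message may differ between Python
versions):

    >>> p = PurePosixPath('/usr/lib')
    >>> p + '64'
    Traceback (most recent call last):
      ...
    TypeError: unsupported operand type(s) for +: 'PurePosixPath' and 'str'
    >>> str(p) + '64'
    '/usr/lib64'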
A blog post by a Python\ncore developer goes into more detail on the reasons behind this specific\ndesign decision.\n\nImmutability\n\nPath objects are immutable, which makes them hashable and also prevents\na class of programming errors.\n\nSane behaviour\n\nLittle of the functionality from os.path is reused. Many os.path\nfunctions are tied by backwards compatibility to confusing or plain\nwrong behaviour (for example, the fact that os.path.abspath() simplifies\n\"..\" path components without resolving symlinks first).\n\nComparisons\n\nPaths of the same flavour are comparable and orderable, whether pure or\nnot:\n\n >>> PurePosixPath('a') == PurePosixPath('b')\n False\n >>> PurePosixPath('a') < PurePosixPath('b')\n True\n >>> PurePosixPath('a') == PosixPath('a')\n True\n\nComparing and ordering Windows path objects is case-insensitive:\n\n >>> PureWindowsPath('a') == PureWindowsPath('A')\n True\n\nPaths of different flavours always compare unequal, and cannot be\nordered:\n\n >>> PurePosixPath('a') == PureWindowsPath('a')\n False\n >>> PurePosixPath('a') < PureWindowsPath('a')\n Traceback (most recent call last):\n File \"\", line 1, in \n TypeError: unorderable types: PurePosixPath() < PureWindowsPath()\n\nPaths compare unequal to, and are not orderable with instances of\nbuiltin types (such as str) and any other types.\n\nUseful notations\n\nThe API tries to provide useful notations all the while avoiding magic.\nSome examples:\n\n >>> p = Path('/home/antoine/pathlib/setup.py')\n >>> p.name\n 'setup.py'\n >>> p.suffix\n '.py'\n >>> p.root\n '/'\n >>> p.parts\n ('/', 'home', 'antoine', 'pathlib', 'setup.py')\n >>> p.relative_to('/home/antoine')\n PosixPath('pathlib/setup.py')\n >>> p.exists()\n True\n\nPure paths API\n\nThe philosophy of the PurePath API is to provide a consistent array of\nuseful path manipulation operations, without exposing a hodge-podge of\nfunctions like os.path does.\n\nDefinitions\n\nFirst a couple of conventions:\n\n- All paths can have a drive and a root. For POSIX paths, the drive is\n always empty.\n- A relative path has neither drive nor root.\n- A POSIX path is absolute if it has a root. A Windows path is\n absolute if it has both a drive and a root. A Windows UNC path (e.g.\n \\\\host\\share\\myfile.txt) always has a drive and a root (here,\n \\\\host\\share and \\, respectively).\n- A path which has either a drive or a root is said to be anchored.\n Its anchor is the concatenation of the drive and root. 
Under POSIX,\n \"anchored\" is the same as \"absolute\".\n\nConstruction\n\nWe will present construction and joining together since they expose\nsimilar semantics.\n\nThe simplest way to construct a path is to pass it its string\nrepresentation:\n\n >>> PurePath('setup.py')\n PurePosixPath('setup.py')\n\nExtraneous path separators and \".\" components are eliminated:\n\n >>> PurePath('a///b/c/./d/')\n PurePosixPath('a/b/c/d')\n\nIf you pass several arguments, they will be automatically joined:\n\n >>> PurePath('docs', 'Makefile')\n PurePosixPath('docs/Makefile')\n\nJoining semantics are similar to os.path.join, in that anchored paths\nignore the information from the previously joined components:\n\n >>> PurePath('/etc', '/usr', 'bin')\n PurePosixPath('/usr/bin')\n\nHowever, with Windows paths, the drive is retained as necessary:\n\n >>> PureWindowsPath('c:/foo', '/Windows')\n PureWindowsPath('c:/Windows')\n >>> PureWindowsPath('c:/foo', 'd:')\n PureWindowsPath('d:')\n\nAlso, path separators are normalized to the platform default:\n\n >>> PureWindowsPath('a/b') == PureWindowsPath('a\\\\b')\n True\n\nExtraneous path separators and \".\" components are eliminated, but not\n\"..\" components:\n\n >>> PurePosixPath('a//b/./c/')\n PurePosixPath('a/b/c')\n >>> PurePosixPath('a/../b')\n PurePosixPath('a/../b')\n\nMultiple leading slashes are treated differently depending on the path\nflavour. They are always retained on Windows paths (because of the UNC\nnotation):\n\n >>> PureWindowsPath('//some/path')\n PureWindowsPath('//some/path/')\n\nOn POSIX, they are collapsed except if there are exactly two leading\nslashes, which is a special case in the POSIX specification on pathname\nresolution (this is also necessary for Cygwin compatibility):\n\n >>> PurePosixPath('///some/path')\n PurePosixPath('/some/path')\n >>> PurePosixPath('//some/path')\n PurePosixPath('//some/path')\n\nCalling the constructor without any argument creates a path object\npointing to the logical \"current directory\" (without looking up its\nabsolute path, which is the job of the cwd() classmethod on concrete\npaths):\n\n >>> PurePosixPath()\n PurePosixPath('.')\n\nRepresenting\n\nTo represent a path (e.g. 
to pass it to third-party libraries), just\ncall str() on it:\n\n >>> p = PurePath('/home/antoine/pathlib/setup.py')\n >>> str(p)\n '/home/antoine/pathlib/setup.py'\n >>> p = PureWindowsPath('c:/windows')\n >>> str(p)\n 'c:\\\\windows'\n\nTo force the string representation with forward slashes, use the\nas_posix() method:\n\n >>> p.as_posix()\n 'c:/windows'\n\nTo get the bytes representation (which might be useful under Unix\nsystems), call bytes() on it, which internally uses os.fsencode():\n\n >>> bytes(p)\n b'/home/antoine/pathlib/setup.py'\n\nTo represent the path as a file: URI, call the as_uri() method:\n\n >>> p = PurePosixPath('/etc/passwd')\n >>> p.as_uri()\n 'file:///etc/passwd'\n >>> p = PureWindowsPath('c:/Windows')\n >>> p.as_uri()\n 'file:///c:/Windows'\n\nThe repr() of a path always uses forward slashes, even under Windows,\nfor readability and to remind users that forward slashes are ok:\n\n >>> p = PureWindowsPath('c:/Windows')\n >>> p\n PureWindowsPath('c:/Windows')\n\nProperties\n\nSeveral simple properties are provided on every path (each can be\nempty):\n\n >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')\n >>> p.drive\n 'c:'\n >>> p.root\n '\\\\'\n >>> p.anchor\n 'c:\\\\'\n >>> p.name\n 'pathlib.tar.gz'\n >>> p.stem\n 'pathlib.tar'\n >>> p.suffix\n '.gz'\n >>> p.suffixes\n ['.tar', '.gz']\n\nDeriving new paths\n\nJoining\n\nA path can be joined with another using the / operator:\n\n >>> p = PurePosixPath('foo')\n >>> p / 'bar'\n PurePosixPath('foo/bar')\n >>> p / PurePosixPath('bar')\n PurePosixPath('foo/bar')\n >>> 'bar' / p\n PurePosixPath('bar/foo')\n\nAs with the constructor, multiple path components can be specified,\neither collapsed or separately:\n\n >>> p / 'bar/xyzzy'\n PurePosixPath('foo/bar/xyzzy')\n >>> p / 'bar' / 'xyzzy'\n PurePosixPath('foo/bar/xyzzy')\n\nA joinpath() method is also provided, with the same behaviour:\n\n >>> p.joinpath('Python')\n PurePosixPath('foo/Python')\n\nChanging the path's final component\n\nThe with_name() method returns a new path, with the name changed:\n\n >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')\n >>> p.with_name('setup.py')\n PureWindowsPath('c:/Downloads/setup.py')\n\nIt fails with a ValueError if the path doesn't have an actual name:\n\n >>> p = PureWindowsPath('c:/')\n >>> p.with_name('setup.py')\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"pathlib.py\", line 875, in with_name\n raise ValueError(\"%r has an empty name\" % (self,))\n ValueError: PureWindowsPath('c:/') has an empty name\n >>> p.name\n ''\n\nThe with_suffix() method returns a new path with the suffix changed.\nHowever, if the path has no suffix, the new suffix is added:\n\n >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz')\n >>> p.with_suffix('.bz2')\n PureWindowsPath('c:/Downloads/pathlib.tar.bz2')\n >>> p = PureWindowsPath('README')\n >>> p.with_suffix('.bz2')\n PureWindowsPath('README.bz2')\n\nMaking the path relative\n\nThe relative_to() method computes the relative difference of a path to\nanother:\n\n >>> PurePosixPath('/usr/bin/python').relative_to('/usr')\n PurePosixPath('bin/python')\n\nValueError is raised if the method cannot return a meaningful value:\n\n >>> PurePosixPath('/usr/bin/python').relative_to('/etc')\n Traceback (most recent call last):\n File \"\", line 1, in \n File \"pathlib.py\", line 926, in relative_to\n .format(str(self), str(formatted)))\n ValueError: '/usr/bin/python' does not start with '/etc'\n\nSequence-like access\n\nThe parts property returns a tuple providing 
read-only sequence access\nto a path's components:\n\n >>> p = PurePosixPath('/etc/init.d')\n >>> p.parts\n ('/', 'etc', 'init.d')\n\nWindows paths handle the drive and the root as a single path component:\n\n >>> p = PureWindowsPath('c:/setup.py')\n >>> p.parts\n ('c:\\\\', 'setup.py')\n\n(separating them would be wrong, since C: is not the parent of C:\\\\).\n\nThe parent property returns the logical parent of the path:\n\n >>> p = PureWindowsPath('c:/python33/bin/python.exe')\n >>> p.parent\n PureWindowsPath('c:/python33/bin')\n\nThe parents property returns an immutable sequence of the path's logical\nancestors:\n\n >>> p = PureWindowsPath('c:/python33/bin/python.exe')\n >>> len(p.parents)\n 3\n >>> p.parents[0]\n PureWindowsPath('c:/python33/bin')\n >>> p.parents[1]\n PureWindowsPath('c:/python33')\n >>> p.parents[2]\n PureWindowsPath('c:/')\n\nQuerying\n\nis_relative() returns True if the path is relative (see definition\nabove), False otherwise.\n\nis_reserved() returns True if a Windows path is a reserved path such as\nCON or NUL. It always returns False for POSIX paths.\n\nmatch() matches the path against a glob pattern. It operates on\nindividual parts and matches from the right:\n\n >>> p = PurePosixPath('/usr/bin') >>> p.match('/usr/b*') True >>>\n p.match('usr/b*') True >>> p.match('b*') True >>> p.match('/u*') False\n\nThis behaviour respects the following expectations:\n\n- A simple pattern such as \"*.py\" matches arbitrarily long paths as\n long as the last part matches, e.g. \"/usr/foo/bar.py\".\n- Longer patterns can be used as well for more complex matching, e.g.\n \"/usr/foo/*.py\" matches \"/usr/foo/bar.py\".\n\nConcrete paths API\n\nIn addition to the operations of the pure API, concrete paths provide\nadditional methods which actually access the filesystem to query or\nmutate information.\n\nConstructing\n\nThe classmethod cwd() creates a path object pointing to the current\nworking directory in absolute form:\n\n >>> Path.cwd()\n PosixPath('/home/antoine/pathlib')\n\nFile metadata\n\nThe stat() returns the file's stat() result; similarly, lstat() returns\nthe file's lstat() result (which is different iff the file is a symbolic\nlink):\n\n >>> p.stat()\n posix.stat_result(st_mode=33277, st_ino=7483155, st_dev=2053, st_nlink=1, st_uid=500, st_gid=500, st_size=928, st_atime=1343597970, st_mtime=1328287308, st_ctime=1343597964)\n\nHigher-level methods help examine the kind of the file:\n\n >>> p.exists()\n True\n >>> p.is_file()\n True\n >>> p.is_dir()\n False\n >>> p.is_symlink()\n False\n >>> p.is_socket()\n False\n >>> p.is_fifo()\n False\n >>> p.is_block_device()\n False\n >>> p.is_char_device()\n False\n\nThe file owner and group names (rather than numeric ids) are queried\nthrough corresponding methods:\n\n >>> p = Path('/etc/shadow')\n >>> p.owner()\n 'root'\n >>> p.group()\n 'shadow'\n\nPath resolution\n\nThe resolve() method makes a path absolute, resolving any symlink on the\nway (like the POSIX realpath() call). It is the only operation which\nwill remove \"..\" path components. 
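A short illustration of the difference from the pure-path behaviour shown
earlier (the filesystem layout is assumed to be the one used in the other
examples of this PEP):

    >>> PurePosixPath('docs/../setup.py')    # pure paths never touch ".."
    PurePosixPath('docs/../setup.py')
    >>> Path('docs/../setup.py').resolve()   # resolve() consults the filesystem
    PosixPath('/home/antoine/pathlib/setup.py')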
On Windows, this method will also take
care to return the canonical path (with the right casing).

Directory walking

Simple (non-recursive) directory access is done by calling the iterdir()
method, which returns an iterator over the child paths:

 >>> p = Path('docs')
 >>> for child in p.iterdir(): child
 ...
 PosixPath('docs/conf.py')
 PosixPath('docs/_templates')
 PosixPath('docs/make.bat')
 PosixPath('docs/index.rst')
 PosixPath('docs/_build')
 PosixPath('docs/_static')
 PosixPath('docs/Makefile')

This allows simple filtering through list comprehensions:

 >>> p = Path('.')
 >>> [child for child in p.iterdir() if child.is_dir()]
 [PosixPath('.hg'), PosixPath('docs'), PosixPath('dist'), PosixPath('__pycache__'), PosixPath('build')]

Simple and recursive globbing is also provided:

 >>> for child in p.glob('**/*.py'): child
 ...
 PosixPath('test_pathlib.py')
 PosixPath('setup.py')
 PosixPath('pathlib.py')
 PosixPath('docs/conf.py')
 PosixPath('build/lib/pathlib.py')

File opening

The open() method provides a file opening API similar to the builtin
open() method:

 >>> p = Path('setup.py')
 >>> with p.open() as f: f.readline()
 ...
 '#!/usr/bin/env python3\n'

Filesystem modification

Several common filesystem operations are provided as methods: touch(),
mkdir(), rename(), replace(), unlink(), rmdir(), chmod(), lchmod(),
symlink_to(). More operations could be provided, for example some of the
functionality of the shutil module.

Detailed documentation of the proposed API can be found at the pathlib
docs.

Discussion

Division operator

The division operator came out first in a poll about the path joining
operator. Initial versions of pathlib used square brackets (i.e.
__getitem__) instead.

joinpath()

The joinpath() method was initially called join(), but several people
objected that it could be confused with str.join() which has different
semantics. Therefore, it was renamed to joinpath().

Case-sensitivity

Windows users consider filesystem paths to be case-insensitive and
expect path objects to observe that characteristic, even though in some
rare situations some foreign filesystem mounts may be case-sensitive
under Windows.

In the words of one commenter,

 "If glob("*.py") failed to find SETUP.PY on Windows, that would be a
 usability disaster".

 -- Paul Moore in
 https://mail.python.org/pipermail/python-dev/2013-April/125254.html

Copyright

This document has been placed into the public domain.

PEP: 521 Title: Managing global context via 'with' blocks in generators
and coroutines Version: $Revision$ Last-Modified: $Date$ Author:
Nathaniel J. 
Smith Status: Withdrawn Type: Standards\nTrack Content-Type: text/x-rst Created: 27-Apr-2015 Python-Version: 3.6\nPost-History: 29-Apr-2015\n\nPEP Withdrawal\n\nWithdrawn in favor of PEP 567.\n\nAbstract\n\nWhile we generally try to avoid global state when possible, there\nnonetheless exist a number of situations where it is agreed to be the\nbest approach. In Python, a standard pattern for handling such cases is\nto store the global state in global or thread-local storage, and then\nuse with blocks to limit modifications of this global state to a single\ndynamic scope. Examples where this pattern is used include the standard\nlibrary's warnings.catch_warnings and decimal.localcontext, NumPy's\nnumpy.errstate (which exposes the error-handling settings provided by\nthe IEEE 754 floating point standard), and the handling of logging\ncontext or HTTP request context in many server application frameworks.\n\nHowever, there is currently no ergonomic way to manage such local\nchanges to global state when writing a generator or coroutine. For\nexample, this code:\n\n def f():\n with warnings.catch_warnings():\n for x in g():\n yield x\n\nmay or may not successfully catch warnings raised by g(), and may or may\nnot inadvertently swallow warnings triggered elsewhere in the code. The\ncontext manager, which was intended to apply only to f and its callees,\nends up having a dynamic scope that encompasses arbitrary and\nunpredictable parts of its call**ers**. This problem becomes\nparticularly acute when writing asynchronous code, where essentially all\nfunctions become coroutines.\n\nHere, we propose to solve this problem by notifying context managers\nwhenever execution is suspended or resumed within their scope, allowing\nthem to restrict their effects appropriately.\n\nSpecification\n\nTwo new, optional, methods are added to the context manager protocol:\n__suspend__ and __resume__. 
If present, these methods will be called\nwhenever a frame's execution is suspended or resumed from within the\ncontext of the with block.\n\nMore formally, consider the following code:\n\n with EXPR as VAR:\n PARTIAL-BLOCK-1\n f((yield foo))\n PARTIAL-BLOCK-2\n\nCurrently this is equivalent to the following code (copied from PEP\n343):\n\n mgr = (EXPR)\n exit = type(mgr).__exit__ # Not calling it yet\n value = type(mgr).__enter__(mgr)\n exc = True\n try:\n try:\n VAR = value # Only if \"as VAR\" is present\n PARTIAL-BLOCK-1\n f((yield foo))\n PARTIAL-BLOCK-2\n except:\n exc = False\n if not exit(mgr, *sys.exc_info()):\n raise\n finally:\n if exc:\n exit(mgr, None, None, None)\n\nThis PEP proposes to modify with block handling to instead become:\n\n mgr = (EXPR)\n exit = type(mgr).__exit__ # Not calling it yet\n ### --- NEW STUFF ---\n if the_block_contains_yield_points: # known statically at compile time\n suspend = getattr(type(mgr), \"__suspend__\", lambda: None)\n resume = getattr(type(mgr), \"__resume__\", lambda: None)\n ### --- END OF NEW STUFF ---\n value = type(mgr).__enter__(mgr)\n exc = True\n try:\n try:\n VAR = value # Only if \"as VAR\" is present\n PARTIAL-BLOCK-1\n ### --- NEW STUFF ---\n suspend(mgr)\n tmp = yield foo\n resume(mgr)\n f(tmp)\n ### --- END OF NEW STUFF ---\n PARTIAL-BLOCK-2\n except:\n exc = False\n if not exit(mgr, *sys.exc_info()):\n raise\n finally:\n if exc:\n exit(mgr, None, None, None)\n\nAnalogous suspend/resume calls are also wrapped around the yield points\nembedded inside the yield from, await, async with, and async for\nconstructs.\n\nNested blocks\n\nGiven this code:\n\n def f():\n with OUTER:\n with INNER:\n yield VALUE\n\nthen we perform the following operations in the following sequence:\n\n INNER.__suspend__()\n OUTER.__suspend__()\n yield VALUE\n OUTER.__resume__()\n INNER.__resume__()\n\nNote that this ensures that the following is a valid refactoring:\n\n def f():\n with OUTER:\n yield from g()\n\n def g():\n with INNER\n yield VALUE\n\nSimilarly, with statements with multiple context managers suspend from\nright to left, and resume from left to right.\n\nOther changes\n\nAppropriate __suspend__ and __resume__ methods are added to\nwarnings.catch_warnings and decimal.localcontext.\n\nRationale\n\nIn the abstract, we gave an example of plausible but incorrect code:\n\n def f():\n with warnings.catch_warnings():\n for x in g():\n yield x\n\nTo make this correct in current Python, we need to instead write\nsomething like:\n\n def f():\n with warnings.catch_warnings():\n it = iter(g())\n while True:\n with warnings.catch_warnings():\n try:\n x = next(it)\n except StopIteration:\n break\n yield x\n\nOTOH, if this PEP is accepted then the original code will become correct\nas-is. Or if this isn't convincing, then here's another example of\nbroken code; fixing it requires even greater gyrations, and these are\nleft as an exercise for the reader:\n\n async def test_foo_emits_warning():\n with warnings.catch_warnings(record=True) as w:\n await foo()\n assert len(w) == 1\n assert \"xyzzy\" in w[0].message\n\nAnd notice that this last example isn't artificial at all -- this is\nexactly how you write a test that an async/await-using coroutine\ncorrectly raises a warning. Similar issues arise for pretty much any use\nof warnings.catch_warnings, decimal.localcontext, or numpy.errstate in\nasync/await-using code. 
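For concreteness, here is a sketch of what an opted-in context manager could
look like under the proposed protocol. The __suspend__ and __resume__ hooks
below exist only under this PEP's proposal, and the global variable is an
invented stand-in for state such as a decimal context or the warnings
filters:

    _current_precision = 28          # stand-in for some piece of global state

    class local_precision:
        def __init__(self, prec):
            self._prec = prec

        def __enter__(self):
            global _current_precision
            self._saved, _current_precision = _current_precision, self._prec
            return self

        def __suspend__(self):
            # The enclosing frame is about to yield: put the caller's value back.
            global _current_precision
            _current_precision, self._saved = self._saved, _current_precision

        def __resume__(self):
            # The enclosing frame is running again: reinstall our value.
            global _current_precision
            _current_precision, self._saved = self._saved, _current_precision

        def __exit__(self, *exc_info):
            global _current_precision
            _current_precision = self._saved
            return False

On an interpreter without the proposal, the two extra methods are simply
never called and the manager degrades to the current, frame-leaking
behaviour.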
So there's clearly a real problem to solve here,\nand the growing prominence of async code makes it increasingly urgent.\n\nAlternative approaches\n\nThe main alternative that has been proposed is to create some kind of\n\"task-local storage\", analogous to \"thread-local storage\" [1]. In\nessence, the idea would be that the event loop would take care to\nallocate a new \"task namespace\" for each task it schedules, and provide\nan API to at any given time fetch the namespace corresponding to the\ncurrently executing task. While there are many details to be worked\nout[2], the basic idea seems doable, and it is an especially natural way\nto handle the kind of global context that arises at the top-level of\nasync application frameworks (e.g., setting up context objects in a web\nframework). But it also has a number of flaws:\n\n- It only solves the problem of managing global state for coroutines\n that yield back to an asynchronous event loop. But there actually\n isn't anything about this problem that's specific to asyncio -- as\n shown in the examples above, simple generators run into exactly the\n same issue.\n\n- It creates an unnecessary coupling between event loops and code that\n needs to manage global state. Obviously an async web framework needs\n to interact with some event loop API anyway, so it's not a big deal\n in that case. But it's weird that warnings or decimal or NumPy\n should have to call into an async library's API to access their\n internal state when they themselves involve no async code. Worse,\n since there are multiple event loop APIs in common use, it isn't\n clear how to choose which to integrate with. (This could be somewhat\n mitigated by CPython providing a standard API for creating and\n switching \"task-local domains\" that asyncio, Twisted, tornado, etc.\n could then work with.)\n\n- It's not at all clear that this can be made acceptably fast. NumPy\n has to check the floating point error settings on every single\n arithmetic operation. Checking a piece of data in thread-local\n storage is absurdly quick, because modern platforms have put massive\n resources into optimizing this case (e.g. dedicating a CPU register\n for this purpose); calling a method on an event loop to fetch a\n handle to a namespace and then doing lookup in that namespace is\n much slower.\n\n More importantly, this extra cost would be paid on every access to\n the global data, even for programs which are not otherwise using an\n event loop at all. This PEP's proposal, by contrast, only affects\n code that actually mixes with blocks and yield statements, meaning\n that the users who experience the costs are the same users who also\n reap the benefits.\n\nOn the other hand, such tight integration between task context and the\nevent loop does potentially allow other features that are beyond the\nscope of the current proposal. For example, an event loop could note\nwhich task namespace was in effect when a task called call_soon, and\narrange that the callback when run would have access to the same task\nnamespace. Whether this is useful, or even well-defined in the case of\ncross-thread calls (what does it mean to have task-local storage\naccessed from two threads simultaneously?), is left as a puzzle for\nevent loop implementors to ponder -- nothing in this proposal rules out\nsuch enhancements as well. 
It does seem though that such features would\nbe useful primarily for state that already has a tight integration with\nthe event loop -- while we might want a request id to be preserved\nacross call_soon, most people would not expect:\n\n with warnings.catch_warnings():\n loop.call_soon(f)\n\nto result in f being run with warnings disabled, which would be the\nresult if call_soon preserved global context in general. It's also\nunclear how this would even work given that the warnings context manager\n__exit__ would be called before f.\n\nSo this PEP takes the position that __suspend__/__resume__ and\n\"task-local storage\" are two complementary tools that are both useful in\ndifferent circumstances.\n\nBackwards compatibility\n\nBecause __suspend__ and __resume__ are optional and default to no-ops,\nall existing context managers continue to work exactly as before.\n\nSpeed-wise, this proposal adds additional overhead when entering a with\nblock (where we must now check for the additional methods; failed\nattribute lookup in CPython is rather slow, since it involves allocating\nan AttributeError), and additional overhead at suspension points. Since\nthe position of with blocks and suspension points is known statically,\nthe compiler can straightforwardly optimize away this overhead in all\ncases except where one actually has a yield inside a with. Furthermore,\nbecause we only do attribute checks for __suspend__ and __resume__ once\nat the start of a with block, when these attributes are undefined then\nthe per-yield overhead can be optimized down to a single C-level\nif (frame->needs_suspend_resume_calls) { ... }. Therefore, we expect the\noverall overhead to be negligible.\n\nInteraction with PEP 492\n\nPEP 492 added new asynchronous context managers, which are like regular\ncontext managers, but instead of having regular methods __enter__ and\n__exit__ they have coroutine methods __aenter__ and __aexit__.\n\nFollowing this pattern, one might expect this proposal to add\n__asuspend__ and __aresume__ coroutine methods. But this doesn't make\nmuch sense, since the whole point is that __suspend__ should be called\nbefore yielding our thread of execution and allowing other code to run.\nThe only thing we accomplish by making __asuspend__ a coroutine is to\nmake it possible for __asuspend__ itself to yield. So either we need to\nrecursively call __asuspend__ from inside __asuspend__, or else we need\nto give up and allow these yields to happen without calling the suspend\ncallback; either way it defeats the whole point.\n\nWell, with one exception: one possible pattern for coroutine code is to\ncall yield in order to communicate with the coroutine runner, but\nwithout actually suspending their execution (i.e., the coroutine might\nknow that the coroutine runner will resume them immediately after\nprocessing the yielded message). An example of this is the\ncurio.timeout_after async context manager, which yields a special\nset_timeout message to the curio kernel, and then the kernel immediately\n(synchronously) resumes the coroutine which sent the message. And from\nthe user point of view, this timeout value acts just like the kinds of\nglobal variables that motivated this PEP. But, there is a crucal\ndifference: this kind of async context manager is, by definition,\ntightly integrated with the coroutine runner. 
So, the coroutine runner
can take over responsibility for keeping track of which timeouts apply
to which coroutines without any need for this PEP at all (and this is
indeed how curio.timeout_after works).

That leaves two reasonable approaches to handling async context
managers:

1) Add plain __suspend__ and __resume__ methods.
2) Leave async context managers alone for now until we have more
 experience with them.

Either seems plausible, so out of laziness / YAGNI this PEP tentatively
proposes to stick with option (2).

References

[1] https://groups.google.com/forum/#!topic/python-tulip/zix5HQxtElg
https://github.com/python/asyncio/issues/165

[2] For example, we would have to decide whether there is a single
task-local namespace shared by all users (in which case we need a way
for multiple third-party libraries to adjudicate access to this
namespace), or else if there are multiple task-local namespaces, then we
need some mechanism for each library to arrange for their task-local
namespaces to be created and destroyed at appropriate moments. The
preliminary patch linked from the github issue above doesn't seem to
provide any mechanism for such lifecycle management.

Copyright

This document has been placed in the public domain.

PEP: 443
Title: Single-dispatch generic functions
Version: $Revision$
Last-Modified: $Date$
Author: Łukasz Langa
Discussions-To: python-dev@python.org
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 22-May-2013
Python-Version: 3.4
Post-History: 22-May-2013, 25-May-2013, 31-May-2013
Replaces: 245, 246, 3124

Abstract

This PEP proposes a new mechanism in the functools standard library
module that provides a simple form of generic programming known as
single-dispatch generic functions.

A generic function is composed of multiple functions implementing the
same operation for different types. Which implementation should be used
during a call is determined by the dispatch algorithm. When the
implementation is chosen based on the type of a single argument, this is
known as single dispatch.

Rationale and Goals

Python has always provided a variety of built-in and standard-library
generic functions, such as len(), iter(), pprint.pprint(), copy.copy(),
and most of the functions in the operator module. However, it currently:

1. does not have a simple or straightforward way for developers to
 create new generic functions,
2. 
does not have a standard way for methods to be added to existing\n generic functions (i.e., some are added using registration\n functions, others require defining __special__ methods, possibly by\n monkeypatching).\n\nIn addition, it is currently a common anti-pattern for Python code to\ninspect the types of received arguments, in order to decide what to do\nwith the objects.\n\nFor example, code may wish to accept either an object of some type, or a\nsequence of objects of that type. Currently, the \"obvious way\" to do\nthis is by type inspection, but this is brittle and closed to extension.\n\nAbstract Base Classes make it easier to discover present behaviour, but\ndon't help adding new behaviour. A developer using an already-written\nlibrary may be unable to change how their objects are treated by such\ncode, especially if the objects they are using were created by a third\nparty.\n\nTherefore, this PEP proposes a uniform API to address dynamic\noverloading using decorators.\n\nUser API\n\nTo define a generic function, decorate it with the @singledispatch\ndecorator. Note that the dispatch happens on the type of the first\nargument. Create your function accordingly:\n\n >>> from functools import singledispatch\n >>> @singledispatch\n ... def fun(arg, verbose=False):\n ... if verbose:\n ... print(\"Let me just say,\", end=\" \")\n ... print(arg)\n\nTo add overloaded implementations to the function, use the register()\nattribute of the generic function. This is a decorator, taking a type\nparameter and decorating a function implementing the operation for that\ntype:\n\n >>> @fun.register(int)\n ... def _(arg, verbose=False):\n ... if verbose:\n ... print(\"Strength in numbers, eh?\", end=\" \")\n ... print(arg)\n ...\n >>> @fun.register(list)\n ... def _(arg, verbose=False):\n ... if verbose:\n ... print(\"Enumerate this:\")\n ... for i, elem in enumerate(arg):\n ... print(i, elem)\n\nTo enable registering lambdas and pre-existing functions, the register()\nattribute can be used in a functional form:\n\n >>> def nothing(arg, verbose=False):\n ... print(\"Nothing.\")\n ...\n >>> fun.register(type(None), nothing)\n\nThe register() attribute returns the undecorated function. This enables\ndecorator stacking, pickling, as well as creating unit tests for each\nvariant independently:\n\n >>> @fun.register(float)\n ... @fun.register(Decimal)\n ... def fun_num(arg, verbose=False):\n ... if verbose:\n ... print(\"Half of your number:\", end=\" \")\n ... print(arg / 2)\n ...\n >>> fun_num is fun\n False\n\nWhen called, the generic function dispatches on the type of the first\nargument:\n\n >>> fun(\"Hello, world.\")\n Hello, world.\n >>> fun(\"test.\", verbose=True)\n Let me just say, test.\n >>> fun(42, verbose=True)\n Strength in numbers, eh? 
42\n >>> fun(['spam', 'spam', 'eggs', 'spam'], verbose=True)\n Enumerate this:\n 0 spam\n 1 spam\n 2 eggs\n 3 spam\n >>> fun(None)\n Nothing.\n >>> fun(1.23)\n 0.615\n\nWhere there is no registered implementation for a specific type, its\nmethod resolution order is used to find a more generic implementation.\nThe original function decorated with @singledispatch is registered for\nthe base object type, which means it is used if no better implementation\nis found.\n\nTo check which implementation will the generic function choose for a\ngiven type, use the dispatch() attribute:\n\n >>> fun.dispatch(float)\n \n >>> fun.dispatch(dict) # note: default implementation\n \n\nTo access all registered implementations, use the read-only registry\nattribute:\n\n >>> fun.registry.keys()\n dict_keys([, , ,\n , ,\n ])\n >>> fun.registry[float]\n \n >>> fun.registry[object]\n \n\nThe proposed API is intentionally limited and opinionated, as to ensure\nit is easy to explain and use, as well as to maintain consistency with\nexisting members in the functools module.\n\nImplementation Notes\n\nThe functionality described in this PEP is already implemented in the\npkgutil standard library module as simplegeneric. Because this\nimplementation is mature, the goal is to move it largely as-is. The\nreference implementation is available on hg.python.org[1].\n\nThe dispatch type is specified as a decorator argument. An alternative\nform using function annotations was considered but its inclusion has\nbeen rejected. As of May 2013, this usage pattern is out of scope for\nthe standard library[2], and the best practices for annotation usage are\nstill debated.\n\nBased on the current pkgutil.simplegeneric implementation, and following\nthe convention on registering virtual subclasses on Abstract Base\nClasses, the dispatch registry will not be thread-safe.\n\nAbstract Base Classes\n\nThe pkgutil.simplegeneric implementation relied on several forms of\nmethod resolution order (MRO). @singledispatch removes special handling\nof old-style classes and Zope's ExtensionClasses. More importantly, it\nintroduces support for Abstract Base Classes (ABC).\n\nWhen a generic function implementation is registered for an ABC, the\ndispatch algorithm switches to an extended form of C3 linearization,\nwhich includes the relevant ABCs in the MRO of the provided argument.\nThe algorithm inserts ABCs where their functionality is introduced, i.e.\nissubclass(cls, abc) returns True for the class itself but returns False\nfor all its direct base classes. Implicit ABCs for a given class (either\nregistered or inferred from the presence of a special method like\n__len__()) are inserted directly after the last ABC explicitly listed in\nthe MRO of said class.\n\nIn its most basic form, this linearization returns the MRO for the given\ntype:\n\n >>> _compose_mro(dict, [])\n [, ]\n\nWhen the second argument contains ABCs that the specified type is a\nsubclass of, they are inserted in a predictable order:\n\n >>> _compose_mro(dict, [Sized, MutableMapping, str,\n ... Sequence, Iterable])\n [, ,\n , ,\n , ,\n ]\n\nWhile this mode of operation is significantly slower, all dispatch\ndecisions are cached. The cache is invalidated on registering new\nimplementations on the generic function or when user code calls\nregister() on an ABC to implicitly subclass it. In the latter case, it\nis possible to create a situation with ambiguous dispatch, for instance:\n\n >>> from collections.abc import Iterable, Container\n >>> class P:\n ... 
pass\n >>> Iterable.register(P)\n \n >>> Container.register(P)\n \n\nFaced with ambiguity, @singledispatch refuses the temptation to guess:\n\n >>> @singledispatch\n ... def g(arg):\n ... return \"base\"\n ...\n >>> g.register(Iterable, lambda arg: \"iterable\")\n at 0x108b49110>\n >>> g.register(Container, lambda arg: \"container\")\n at 0x108b491c8>\n >>> g(P())\n Traceback (most recent call last):\n ...\n RuntimeError: Ambiguous dispatch: \n or \n\nNote that this exception would not be raised if one or more ABCs had\nbeen provided explicitly as base classes during class definition. In\nthis case dispatch happens in the MRO order:\n\n >>> class Ten(Iterable, Container):\n ... def __iter__(self):\n ... for i in range(10):\n ... yield i\n ... def __contains__(self, value):\n ... return value in range(10)\n ...\n >>> g(Ten())\n 'iterable'\n\nA similar conflict arises when subclassing an ABC is inferred from the\npresence of a special method like __len__() or __contains__():\n\n >>> class Q:\n ... def __contains__(self, value):\n ... return False\n ...\n >>> issubclass(Q, Container)\n True\n >>> Iterable.register(Q)\n >>> g(Q())\n Traceback (most recent call last):\n ...\n RuntimeError: Ambiguous dispatch: \n or \n\nAn early version of the PEP contained a custom approach that was simpler\nbut created a number of edge cases with surprising results[3].\n\nUsage Patterns\n\nThis PEP proposes extending behaviour only of functions specifically\nmarked as generic. Just as a base class method may be overridden by a\nsubclass, so too a function may be overloaded to provide custom\nfunctionality for a given type.\n\nUniversal overloading does not equal arbitrary overloading, in the sense\nthat we need not expect people to randomly redefine the behavior of\nexisting functions in unpredictable ways. To the contrary, generic\nfunction usage in actual programs tends to follow very predictable\npatterns and registered implementations are highly-discoverable in the\ncommon case.\n\nIf a module is defining a new generic operation, it will usually also\ndefine any required implementations for existing types in the same\nplace. Likewise, if a module is defining a new type, then it will\nusually define implementations there for any generic functions that it\nknows or cares about. As a result, the vast majority of registered\nimplementations can be found adjacent to either the function being\noverloaded, or to a newly-defined type for which the implementation is\nadding support.\n\nIt is only in rather infrequent cases that one will have implementations\nregistered in a module that contains neither the function nor the\ntype(s) for which the implementation is added. In the absence of\nincompetence or deliberate intention to be obscure, the few\nimplementations that are not registered adjacent to the relevant type(s)\nor function(s), will generally not need to be understood or known about\noutside the scope where those implementations are defined. (Except in\nthe \"support modules\" case, where best practice suggests naming them\naccordingly.)\n\nAs mentioned earlier, single-dispatch generics are already prolific\nthroughout the standard library. A clean, standard way of doing them\nprovides a way forward to refactor those custom implementations to use a\ncommon one, opening them up for user extensibility at the same time.\n\nAlternative approaches\n\nIn PEP 3124 Phillip J. 
Eby proposes a full-grown solution with\noverloading based on arbitrary rule sets (with the default\nimplementation dispatching on argument types), as well as interfaces,\nadaptation and method combining. PEAK-Rules[4] is a reference\nimplementation of the concepts described in PJE's PEP.\n\nSuch a broad approach is inherently complex, which makes reaching a\nconsensus hard. In contrast, this PEP focuses on a single piece of\nfunctionality that is simple to reason about. It's important to note\nthis does not preclude the use of other approaches now or in the future.\n\nIn a 2005 article on Artima[5] Guido van Rossum presents a generic\nfunction implementation that dispatches on types of all arguments on a\nfunction. The same approach was chosen in Andrey Popp's generic package\navailable on PyPI[6], as well as David Mertz's\ngnosis.magic.multimethods[7].\n\nWhile this seems desirable at first, I agree with Fredrik Lundh's\ncomment that \"if you design APIs with pages of logic just to sort out\nwhat code a function should execute, you should probably hand over the\nAPI design to someone else\". In other words, the single argument\napproach proposed in this PEP is not only easier to implement but also\nclearly communicates that dispatching on a more complex state is an\nanti-pattern. It also has the virtue of corresponding directly with the\nfamiliar method dispatch mechanism in object oriented programming. The\nonly difference is whether the custom implementation is associated more\nclosely with the data (object-oriented methods) or the algorithm\n(single-dispatch overloading).\n\nPyPy's RPython offers extendabletype[8], a metaclass which enables\nclasses to be externally extended. In combination with pairtype() and\npair() factories, this offers a form of single-dispatch generics.\n\nAcknowledgements\n\nApart from Phillip J. Eby's work on PEP 3124 and PEAK-Rules, influences\ninclude Paul Moore's original issue [9] that proposed exposing\npkgutil.simplegeneric as part of the functools API, Guido van Rossum's\narticle on multimethods [10], and discussions with Raymond Hettinger on\na general pprint rewrite. 
Huge thanks to Alyssa Coghlan for encouraging\nme to create this PEP and providing initial feedback.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] http://hg.python.org/features/pep-443/file/tip/Lib/functools.py#l359\n\n[2] PEP 8 states in the \"Programming Recommendations\" section that \"the\nPython standard library will not use function annotations as that would\nresult in a premature commitment to a particular annotation style\".\n\n[3] http://bugs.python.org/issue18244\n\n[4] http://peak.telecommunity.com/DevCenter/PEAK_2dRules\n\n[5] http://www.artima.com/weblogs/viewpost.jsp?thread=101605\n\n[6] http://pypi.python.org/pypi/generic\n\n[7] http://gnosis.cx/publish/programming/charming_python_b12.html\n\n[8] https://bitbucket.org/pypy/pypy/raw/default/rpython/tool/pairtype.py\n\n[9] http://bugs.python.org/issue5135\n\n[10] http://www.artima.com/weblogs/viewpost.jsp?thread=101605\n\n
PEP: 251 Title: Python 2.2 Release Schedule Author: Barry Warsaw, Guido van Rossum\nStatus: Final Type: Informational Topic: Release Content-Type: text/x-rst Created:\n17-Apr-2001 Python-Version: 2.2 Post-History: 14-Aug-2001\n\nAbstract\n\nThis document describes the Python 2.2 development and release schedule.\nThe schedule primarily concerns itself with PEP-sized items. Small bug\nfixes and changes will occur up until the first beta release.\n\nThe schedule below represents the actual release dates of Python 2.2.\nNote that any subsequent maintenance releases of Python 2.2 should be\ncovered by separate PEPs.\n\nRelease Schedule\n\nTentative future release dates. Note that we've slipped this compared to\nthe schedule posted around the release of 2.2a1.\n\n- 21-Dec-2001: 2.2 [Released] (final release)\n- 14-Dec-2001: 2.2c1 [Released]\n- 14-Nov-2001: 2.2b2 [Released]\n- 19-Oct-2001: 2.2b1 [Released]\n- 28-Sep-2001: 2.2a4 [Released]\n- 7-Sep-2001: 2.2a3 [Released]\n- 22-Aug-2001: 2.2a2 [Released]\n- 18-Jul-2001: 2.2a1 [Released]\n\nRelease Manager\n\nBarry Warsaw was the Python 2.2 release manager.\n\nRelease Mechanics\n\nWe experimented with a new mechanism for releases: a week before every\nalpha, beta or other release, we forked off a branch which became the\nrelease. Changes to the branch are limited to the release manager and\nhis designated 'bots. This experiment was deemed a success and should be\nobserved for future releases. See PEP 101 for the actual release\nmechanics.\n\nNew features for Python 2.2\n\nThe following new features are introduced in Python 2.2.\n
For a more\ndetailed account, see Misc/NEWS[1] in the Python distribution, or Andrew\nKuchling's \"What's New in Python 2.2\" document[2].\n\n- iterators (PEP 234)\n- generators (PEP 255)\n- unifying long ints and plain ints (PEP 237)\n- division (PEP 238)\n- unification of types and classes (PEP 252, PEP 253)\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] Misc/NEWS file from CVS\nhttp://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Misc/NEWS?rev=1.337.2.4&content-type=text/vnd.viewcvs-markup\n\n[2] Andrew Kuchling, What's New in Python 2.2\nhttp://www.python.org/doc/2.2.1/whatsnew/whatsnew22.html\n\n
PEP: 3107 Title: Function Annotations Version: $Revision$ Last-Modified: $Date$\nAuthor: Collin Winter, Tony Lownds Status: Final Type: Standards Track Content-Type:\ntext/x-rst Created: 02-Dec-2006 Python-Version: 3.0 Post-History:\n\nAbstract\n\nThis PEP introduces a syntax for adding arbitrary metadata annotations\nto Python functions[1].\n\nRationale\n\nBecause Python's 2.x series lacks a standard way of annotating a\nfunction's parameters and return values, a variety of tools and\nlibraries have appeared to fill this gap. Some utilise the decorators\nintroduced in PEP 318, while others parse a function's docstring,\nlooking for annotations there.\n\nThis PEP aims to provide a single, standard way of specifying this\ninformation, reducing the confusion caused by the wide variation in\nmechanism and syntax that has existed until this point.\n\nFundamentals of Function Annotations\n\nBefore launching into a discussion of the precise ins and outs of Python\n3.0's function annotations, let's first talk broadly about what\nannotations are and are not:\n\n1. Function annotations, both for parameters and return values, are\n completely optional.\n\n2. Function annotations are nothing more than a way of associating\n arbitrary Python expressions with various parts of a function at\n compile-time.\n\n By itself, Python does not attach any particular meaning or\n significance to annotations. Left to its own, Python simply makes\n these expressions available as described in Accessing Function\n Annotations below.\n\n The only way that annotations take on meaning is when they are\n interpreted by third-party libraries. These annotation consumers can\n do anything they want with a function's annotations. For example,\n one library might use string-based annotations to provide improved\n help messages, like so:\n\n def compile(source: \"something compilable\",\n filename: \"where the compilable thing comes from\",\n mode: \"is this a single statement or a suite?\"):\n ...\n\n Another library might be used to provide typechecking for Python\n functions and methods.\n
This library could use annotations to\n indicate the function's expected input and return types, possibly\n something like:\n\n def haul(item: Haulable, *vargs: PackAnimal) -> Distance:\n ...\n\n However, neither the strings in the first example nor the type\n information in the second example have any meaning on their own;\n meaning comes from third-party libraries alone.\n\n3. Following from point 2, this PEP makes no attempt to introduce any\n kind of standard semantics, even for the built-in types. This work\n will be left to third-party libraries.\n\nSyntax\n\nParameters\n\nAnnotations for parameters take the form of optional expressions that\nfollow the parameter name:\n\n def foo(a: expression, b: expression = 5):\n ...\n\nIn pseudo-grammar, parameters now look like\nidentifier [: expression] [= expression]. That is, annotations always\nprecede a parameter's default value and both annotations and default\nvalues are optional. Just like how equal signs are used to indicate a\ndefault value, colons are used to mark annotations. All annotation\nexpressions are evaluated when the function definition is executed, just\nlike default values.\n\nAnnotations for excess parameters (i.e., *args and **kwargs) are\nindicated similarly:\n\n def foo(*args: expression, **kwargs: expression):\n ...\n\nAnnotations for nested parameters always follow the name of the\nparameter, not the last parenthesis. Annotating all parameters of a\nnested parameter is not required:\n\n def foo((x1, y1: expression),\n (x2: expression, y2: expression)=(None, None)):\n ...\n\nReturn Values\n\nThe examples thus far have omitted examples of how to annotate the type\nof a function's return value. This is done like so:\n\n def sum() -> expression:\n ...\n\nThat is, the parameter list can now be followed by a literal -> and a\nPython expression. Like the annotations for parameters, this expression\nwill be evaluated when the function definition is executed.\n\nThe grammar for function definitions[2] is now:\n\n decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE\n decorators: decorator+\n funcdef: [decorators] 'def' NAME parameters ['->' test] ':' suite\n parameters: '(' [typedargslist] ')'\n typedargslist: ((tfpdef ['=' test] ',')*\n ('*' [tname] (',' tname ['=' test])* [',' '**' tname]\n | '**' tname)\n | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])\n tname: NAME [':' test]\n tfpdef: tname | '(' tfplist ')'\n tfplist: tfpdef (',' tfpdef)* [',']\n\nLambda\n\nlambda's syntax does not support annotations. The syntax of lambda could\nbe changed to support annotations, by requiring parentheses around the\nparameter list. However it was decided [3] not to make this change\nbecause:\n\n1. It would be an incompatible change.\n2. Lambdas are neutered anyway.\n3. The lambda can always be changed to a function.\n\nAccessing Function Annotations\n\nOnce compiled, a function's annotations are available via the function's\n__annotations__ attribute. This attribute is a mutable dictionary,\nmapping parameter names to an object representing the evaluated\nannotation expression\n\nThere is a special key in the __annotations__ mapping, \"return\". 
This\nkey is present only if an annotation was supplied for the function's\nreturn value.\n\nFor example, the following annotation:\n\n def foo(a: 'x', b: 5 + 6, c: list) -> max(2, 9):\n ...\n\nwould result in an __annotations__ mapping of :\n\n {'a': 'x',\n 'b': 11,\n 'c': list,\n 'return': 9}\n\nThe return key was chosen because it cannot conflict with the name of a\nparameter; any attempt to use return as a parameter name would result in\na SyntaxError.\n\n__annotations__ is an empty, mutable dictionary if there are no\nannotations on the function or if the functions was created from a\nlambda expression.\n\nUse Cases\n\nIn the course of discussing annotations, a number of use-cases have been\nraised. Some of these are presented here, grouped by what kind of\ninformation they convey. Also included are examples of existing products\nand packages that could make use of annotations.\n\n- Providing typing information\n - Type checking ([4],[5])\n - Let IDEs show what types a function expects and returns ([6])\n - Function overloading / generic functions ([7])\n - Foreign-language bridges ([8],[9])\n - Adaptation ([10],[11])\n - Predicate logic functions\n - Database query mapping\n - RPC parameter marshaling ([12])\n- Other information\n - Documentation for parameters and return values ([13])\n\nStandard Library\n\npydoc and inspect\n\nThe pydoc module should display the function annotations when displaying\nhelp for a function. The inspect module should change to support\nannotations.\n\nRelation to Other PEPs\n\nFunction Signature Objects (PEP 362)\n\nFunction Signature Objects should expose the function's annotations. The\nParameter object may change or other changes may be warranted.\n\nImplementation\n\nA reference implementation has been checked into the py3k (formerly\n\"p3yk\") branch as revision 53170[14].\n\nRejected Proposals\n\n- The BDFL rejected the author's idea for a special syntax for adding\n annotations to generators as being \"too ugly\"[15].\n- Though discussed early on ([16],[17]), including special objects in\n the stdlib for annotating generator functions and higher-order\n functions was ultimately rejected as being more appropriate for\n third-party libraries; including them in the standard library raised\n too many thorny issues.\n- Despite considerable discussion about a standard type\n parameterisation syntax, it was decided that this should also be\n left to third-party libraries. ([18], [19],[20]).\n- Despite yet more discussion, it was decided not to standardize a\n mechanism for annotation interoperability. Standardizing\n interoperability conventions at this point would be premature. We\n would rather let these conventions develop organically, based on\n real-world usage and necessity, than try to force all users into\n some contrived scheme. 
([21],[22], [23]).\n\nReferences and Footnotes\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] Unless specifically stated, \"function\" is generally used as a\nsynonym for \"callable\" throughout this document.\n\n[2] http://docs.python.org/reference/compound_stmts.html#function-definitions\n\n[3] https://mail.python.org/pipermail/python-3000/2006-May/001613.html\n\n[4] http://web.archive.org/web/20070730120117/http://oakwinter.com/code/typecheck/\n\n[5] http://web.archive.org/web/20070603221429/http://maxrepo.info/\n\n[6] http://www.python.org/idle/doc/idle2.html#Tips\n\n[7] http://www-128.ibm.com/developerworks/library/l-cppeak2/\n\n[8] http://www.jython.org/Project/index.html\n\n[9] http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython\n\n[10] http://www.artima.com/weblogs/viewpost.jsp?thread=155123\n\n[11] http://peak.telecommunity.com/PyProtocols.html\n\n[12] http://rpyc.wikispaces.com/\n\n[13] http://docs.python.org/library/pydoc.html\n\n[14] http://svn.python.org/view?rev=53170&view=rev\n\n[15] https://mail.python.org/pipermail/python-3000/2006-May/002103.html\n\n[16] https://mail.python.org/pipermail/python-3000/2006-May/002091.html\n\n[17] https://mail.python.org/pipermail/python-3000/2006-May/001972.html\n\n[18] https://mail.python.org/pipermail/python-3000/2006-May/002105.html\n\n[19] https://mail.python.org/pipermail/python-3000/2006-May/002209.html\n\n[20] https://mail.python.org/pipermail/python-3000/2006-June/002438.html\n\n[21] https://mail.python.org/pipermail/python-3000/2006-August/002895.html\n\n[22] https://mail.python.org/pipermail/python-ideas/2007-January/000032.html\n\n[23] https://mail.python.org/pipermail/python-list/2006-December/420645.html\n\n
PEP: 587 Title: Python Initialization Configuration Author: Victor Stinner, Alyssa Coghlan\nBDFL-Delegate: Thomas Wouters Discussions-To:\npython-dev@python.org Status: Final Type: Standards Track Content-Type:\ntext/x-rst Created: 27-Mar-2019 Python-Version: 3.8\n\nAbstract\n\nAdd a new C API to configure the Python Initialization, providing finer\ncontrol over the whole configuration and better error reporting.\n\nIt becomes possible to read the configuration and then override some\ncomputed parameters before it is applied. It also becomes possible to\ncompletely override how Python computes the module search paths\n(sys.path).\n\nThe new Isolated Configuration provides sane default values to isolate\nPython from the system, for example to embed Python into an\napplication. Using the environment is now an opt-in option, rather than\nan opt-out one: environment variables, command line arguments and global\nconfiguration variables are ignored by default.\n\nBuilding a customized Python which behaves as the regular Python becomes\neasier using the new Py_RunMain() function.\n
Moreover, using the Python\nConfiguration, PyConfig.argv arguments are now parsed the same way the\nregular Python parses command line arguments, and PyConfig.xoptions are\nhandled as -X opt command line options.\n\nThis extracts a subset of the API design from the PEP 432 development\nand refactoring work that is now considered sufficiently stable to make\npublic (allowing 3rd party embedding applications access to the same\nconfiguration APIs that the native CPython CLI is now using).\n\nRationale\n\nPython is highly configurable but its configuration evolved organically.\nThe initialization configuration is scattered all around the code using\ndifferent ways to set them: global configuration variables (ex:\nPy_IsolatedFlag), environment variables (ex: PYTHONPATH), command line\narguments (ex: -b), configuration files (ex: pyvenv.cfg), function calls\n(ex: Py_SetProgramName()). A straightforward and reliable way to\nconfigure Python is needed.\n\nSome configuration parameters are not accessible from the C API, or not\neasily. For example, there is no API to override the default values of\nsys.executable.\n\nSome options like PYTHONPATH can only be set using an environment\nvariable which has a side effect on Python child processes if not unset\nproperly.\n\nSome options also depends on other options: see Priority and Rules.\nPython 3.7 API does not provide a consistent view of the overall\nconfiguration.\n\nThe C API of Python 3.7 Initialization takes wchar_t* strings as input\nwhereas the Python filesystem encoding is set during the initialization\nwhich can lead to mojibake.\n\nPython 3.7 APIs like Py_Initialize() aborts the process on memory\nallocation failure which is not convenient when Python is embedded.\nMoreover, Py_Main() could exit directly the process rather than\nreturning an exit code. Proposed new API reports the error or exit code\nto the caller which can decide how to handle it.\n\nImplementing the PEP 540 (UTF-8 Mode) and the new -X dev correctly was\nalmost impossible in Python 3.6. The code base has been deeply reworked\nin Python 3.7 and then in Python 3.8 to read the configuration into a\nstructure with no side effect. It becomes possible to clear the\nconfiguration (release memory) and read again the configuration if the\nencoding changed . It is required to implement properly the UTF-8 which\nchanges the encoding using -X utf8 command line option. Internally,\nbytes argv strings are decoded from the filesystem encoding. The -X dev\nchanges the memory allocator (behaves as PYTHONMALLOC=debug), whereas it\nwas not possible to change the memory allocation while parsing the\ncommand line arguments. The new design of the internal implementation\nnot only allowed to implement properly -X utf8 and -X dev, it also\nallows to change the Python behavior way more easily, especially for\ncorner cases like that, and ensure that the configuration remains\nconsistent: see Priority and Rules.\n\nThis PEP is a partial implementation of PEP 432 which is the overall\ndesign. New fields can be added later to PyConfig structure to finish\nthe implementation of the PEP 432 (e.g. by adding a new partial\ninitialization API which allows to configure Python using Python objects\nto finish the full initialization). 
However, those features are omitted\nfrom this PEP as even the native CPython CLI doesn't work that way - the\npublic API proposal in this PEP is limited to features which have\nalready been implemented and adopted as private APIs for us in the\nnative CPython CLI.\n\nPython Initialization C API\n\nThis PEP proposes to add the following new structures and functions.\n\nNew structures:\n\n- PyConfig\n- PyPreConfig\n- PyStatus\n- PyWideStringList\n\nNew functions:\n\n- PyConfig_Clear(config)\n- PyConfig_InitIsolatedConfig()\n- PyConfig_InitPythonConfig()\n- PyConfig_Read(config)\n- PyConfig_SetArgv(config, argc, argv)\n- PyConfig_SetBytesArgv(config, argc, argv)\n- PyConfig_SetBytesString(config, config_str, str)\n- PyConfig_SetString(config, config_str, str)\n- PyPreConfig_InitIsolatedConfig(preconfig)\n- PyPreConfig_InitPythonConfig(preconfig)\n- PyStatus_Error(err_msg)\n- PyStatus_Exception(status)\n- PyStatus_Exit(exitcode)\n- PyStatus_IsError(status)\n- PyStatus_IsExit(status)\n- PyStatus_NoMemory()\n- PyStatus_Ok()\n- PyWideStringList_Append(list, item)\n- PyWideStringList_Insert(list, index, item)\n- Py_BytesMain(argc, argv)\n- Py_ExitStatusException(status)\n- Py_InitializeFromConfig(config)\n- Py_PreInitialize(preconfig)\n- Py_PreInitializeFromArgs(preconfig, argc, argv)\n- Py_PreInitializeFromBytesArgs(preconfig, argc, argv)\n- Py_RunMain()\n\nThis PEP also adds _PyRuntimeState.preconfig (PyPreConfig type) and\nPyInterpreterState.config (PyConfig type) fields to these internal\nstructures. PyInterpreterState.config becomes the new reference\nconfiguration, replacing global configuration variables and other\nprivate variables.\n\nPyWideStringList\n\nPyWideStringList is a list of wchar_t* strings.\n\nPyWideStringList structure fields:\n\n- length (Py_ssize_t)\n- items (wchar_t**)\n\nMethods:\n\n- PyStatus PyWideStringList_Append(PyWideStringList *list, const wchar_t *item):\n Append item to list.\n- PyStatus PyWideStringList_Insert(PyWideStringList *list, Py_ssize_t index, const wchar_t *item):\n Insert item into list at index. If index is greater than list\n length, just append item to list.\n\nIf length is non-zero, items must be non-NULL and all strings must be\nnon-NULL.\n\nPyStatus\n\nPyStatus is a structure to store the status of an initialization\nfunction: success, error or exit. For an error, it can store the C\nfunction name which created the error.\n\nExample:\n\n PyStatus alloc(void **ptr, size_t size)\n {\n *ptr = PyMem_RawMalloc(size);\n if (*ptr == NULL) {\n return PyStatus_NoMemory();\n }\n return PyStatus_Ok();\n }\n\n int main(int argc, char **argv)\n {\n void *ptr;\n PyStatus status = alloc(&ptr, 16);\n if (PyStatus_Exception(status)) {\n Py_ExitStatusException(status);\n }\n PyMem_Free(ptr);\n return 0;\n }\n\nPyStatus fields:\n\n- exitcode (int): Argument passed to exit().\n- err_msg (const char*): Error message.\n- func (const char *): Name of the function which created an error,\n can be NULL.\n- private _type field: for internal usage only.\n\nFunctions to create a status:\n\n- PyStatus_Ok(): Success.\n- PyStatus_Error(err_msg): Initialization error with a message.\n- PyStatus_NoMemory(): Memory allocation failure (out of memory).\n- PyStatus_Exit(exitcode): Exit Python with the specified exit code.\n\nFunctions to handle a status:\n\n- PyStatus_Exception(status): Is the result an error or an exit? 
If\n true, the exception must be handled; by calling\n Py_ExitStatusException(status) for example.\n- PyStatus_IsError(status): Is the result an error?\n- PyStatus_IsExit(status): Is the result an exit?\n- Py_ExitStatusException(status): Call exit(exitcode) if status is an\n exit. Print the error messageand exit with a non-zero exit code if\n status is an error. Must only be called if\n PyStatus_Exception(status) is true.\n\nNote\n\nInternally, Python uses macros which set PyStatus.func, whereas\nfunctions to create a status set func to NULL.\n\nPreinitialization with PyPreConfig\n\nThe PyPreConfig structure is used to preinitialize Python:\n\n- Set the Python memory allocator\n- Configure the LC_CTYPE locale\n- Set the UTF-8 mode\n\nExample using the preinitialization to enable the UTF-8 Mode:\n\n PyStatus status;\n PyPreConfig preconfig;\n\n PyPreConfig_InitPythonConfig(&preconfig);\n\n preconfig.utf8_mode = 1;\n\n status = Py_PreInitialize(&preconfig);\n if (PyStatus_Exception(status)) {\n Py_ExitStatusException(status);\n }\n\n /* at this point, Python will speak UTF-8 */\n\n Py_Initialize();\n /* ... use Python API here ... */\n Py_Finalize();\n\nFunction to initialize a preconfiguration:\n\n- PyStatus PyPreConfig_InitIsolatedConfig(PyPreConfig *preconfig)\n- PyStatus PyPreConfig_InitPythonConfig(PyPreConfig *preconfig)\n\nFunctions to preinitialize Python:\n\n- PyStatus Py_PreInitialize(const PyPreConfig *preconfig)\n- PyStatus Py_PreInitializeFromBytesArgs(const PyPreConfig *preconfig, int argc, char * const *argv)\n- PyStatus Py_PreInitializeFromArgs(const PyPreConfig *preconfig, int argc, wchar_t * const * argv)\n\nThe caller is responsible to handle exceptions (error or exit) using\nPyStatus_Exception() and Py_ExitStatusException().\n\nFor Python Configuration (PyPreConfig_InitPythonConfig()), if Python is\ninitialized with command line arguments, the command line arguments must\nalso be passed to preinitialize Python, since they have an effect on the\npre-configuration like encodings. For example, the -X utf8 command line\noption enables the UTF-8 Mode.\n\nPyPreConfig fields:\n\n- allocator (int): Name of the memory allocator (ex:\n PYMEM_ALLOCATOR_MALLOC). Valid values:\n - PYMEM_ALLOCATOR_NOT_SET (0): don't change memory allocators (use\n defaults)\n - PYMEM_ALLOCATOR_DEFAULT (1): default memory allocators\n - PYMEM_ALLOCATOR_DEBUG (2): default memory allocators with debug\n hooks\n - PYMEM_ALLOCATOR_MALLOC (3): force usage of malloc()\n - PYMEM_ALLOCATOR_MALLOC_DEBUG (4): force usage of malloc() with\n debug hooks\n - PYMEM_ALLOCATOR_PYMALLOC (5): Python \"pymalloc\" allocator\n - PYMEM_ALLOCATOR_PYMALLOC_DEBUG (6): Python \"pymalloc\" allocator\n with debug hooks\n - Note: PYMEM_ALLOCATOR_PYMALLOC and\n PYMEM_ALLOCATOR_PYMALLOC_DEBUG are not supported if Python is\n configured using --without-pymalloc\n- configure_locale (int): Set the LC_CTYPE locale to the user\n preferred locale? 
If equals to 0, set coerce_c_locale and\n coerce_c_locale_warn to 0.\n- coerce_c_locale (int): If equals to 2, coerce the C locale; if\n equals to 1, read the LC_CTYPE locale to decide if it should be\n coerced.\n- coerce_c_locale_warn (int): If non-zero, emit a warning if the C\n locale is coerced.\n- dev_mode (int): See PyConfig.dev_mode.\n- isolated (int): See PyConfig.isolated.\n- legacy_windows_fs_encoding (int, Windows only): If non-zero, disable\n UTF-8 Mode, set the Python filesystem encoding to mbcs, set the\n filesystem error handler to replace.\n- parse_argv (int): If non-zero, Py_PreInitializeFromArgs() and\n Py_PreInitializeFromBytesArgs() parse their argv argument the same\n way the regular Python parses command line arguments: see Command\n Line Arguments.\n- use_environment (int): See PyConfig.use_environment.\n- utf8_mode (int): If non-zero, enable the UTF-8 mode.\n\nThe legacy_windows_fs_encoding field is only available on Windows.\n#ifdef MS_WINDOWS macro can be used for Windows specific code.\n\nPyPreConfig private fields, for internal use only:\n\n- _config_init (int): Function used to initialize PyConfig, used for\n preinitialization.\n\nPyMem_SetAllocator() can be called after Py_PreInitialize() and before\nPy_InitializeFromConfig() to install a custom memory allocator. It can\nbe called before Py_PreInitialize() if allocator is set to\nPYMEM_ALLOCATOR_NOT_SET (default value).\n\nPython memory allocation functions like PyMem_RawMalloc() must not be\nused before Python preinitialization, whereas calling directly malloc()\nand free() is always safe. Py_DecodeLocale() must not be called before\nthe preinitialization.\n\nInitialization with PyConfig\n\nThe PyConfig structure contains most parameters to configure Python.\n\nExample setting the program name:\n\n void init_python(void)\n {\n PyStatus status;\n\n PyConfig config;\n PyConfig_InitPythonConfig(&config);\n\n /* Set the program name. Implicitly preinitialize Python. */\n status = PyConfig_SetString(&config, &config.program_name,\n L\"/path/to/my_program\");\n if (PyStatus_Exception(status)) {\n goto fail;\n }\n\n status = Py_InitializeFromConfig(&config);\n if (PyStatus_Exception(status)) {\n goto fail;\n }\n PyConfig_Clear(&config);\n return;\n\n fail:\n PyConfig_Clear(&config);\n Py_ExitStatusException(status);\n }\n\nPyConfig methods:\n\n- void PyConfig_InitPythonConfig(PyConfig *config) Initialize\n configuration with Python Configuration.\n- void PyConfig_InitIsolatedConfig(PyConfig *config): Initialize\n configuration with Isolated Configuration.\n- PyStatus PyConfig_SetString(PyConfig *config, wchar_t * const *config_str, const wchar_t *str):\n Copy the wide character string str into *config_str. Preinitialize\n Python if needed.\n- PyStatus PyConfig_SetBytesString(PyConfig *config, wchar_t * const *config_str, const char *str):\n Decode str using Py_DecodeLocale() and set the result into\n *config_str. Preinitialize Python if needed.\n- PyStatus PyConfig_SetArgv(PyConfig *config, int argc, wchar_t * const *argv):\n Set command line arguments from wide character strings.\n Preinitialize Python if needed.\n- PyStatus PyConfig_SetBytesArgv(PyConfig *config, int argc, char * const *argv):\n Set command line arguments: decode bytes using Py_DecodeLocale().\n Preinitialize Python if needed.\n- PyStatus PyConfig_Read(PyConfig *config): Read all Python\n configuration. Fields which are already initialized are left\n unchanged. 
Preinitialize Python if needed.\n- void PyConfig_Clear(PyConfig *config): Release configuration memory.\n\nMost PyConfig methods preinitialize Python if needed. In that case, the\nPython preinitialization configuration in based on the PyConfig. If\nconfiguration fields which are in common with PyPreConfig are tuned,\nthey must be set before calling a PyConfig method:\n\n- dev_mode\n- isolated\n- parse_argv\n- use_environment\n\nMoreover, if PyConfig_SetArgv() or PyConfig_SetBytesArgv() is used, this\nmethod must be called first, before other methods, since the\npreinitialization configuration depends on command line arguments (if\nparse_argv is non-zero).\n\nFunctions to initialize Python:\n\n- PyStatus Py_InitializeFromConfig(const PyConfig *config): Initialize\n Python from config configuration.\n\nThe caller of these methods and functions is responsible to handle\nexceptions (error or exit) using PyStatus_Exception() and\nPy_ExitStatusException().\n\nPyConfig fields:\n\n- argv (PyWideStringList): Command line arguments, sys.argv. See\n parse_argv to parse argv the same way the regular Python parses\n Python command line arguments. If argv is empty, an empty string is\n added to ensure that sys.argv always exists and is never empty.\n- base_exec_prefix (wchar_t*): sys.base_exec_prefix.\n- base_prefix (wchar_t*): sys.base_prefix.\n- buffered_stdio (int): If equals to 0, enable unbuffered mode, making\n the stdout and stderr streams unbuffered.\n- bytes_warning (int): If equals to 1, issue a warning when comparing\n bytes or bytearray with str, or comparing bytes with int. If equal\n or greater to 2, raise a BytesWarning exception.\n- check_hash_pycs_mode (wchar_t*): --check-hash-based-pycs command\n line option value (see PEP 552). Valid values: always, never and\n default. The default value is default.\n- configure_c_stdio (int): If non-zero, configure C standard streams\n (stdio, stdout, stdout). For example, set their mode to O_BINARY on\n Windows.\n- dev_mode (int): Development mode\n- dump_refs (int): If non-zero, dump all objects which are still alive\n at exit. Require a special Python build with Py_REF_DEBUG macro\n defined.\n- exec_prefix (wchar_t*): sys.exec_prefix.\n- executable (wchar_t*): sys.executable.\n- faulthandler (int): If non-zero, call faulthandler.enable().\n- filesystem_encoding (wchar_t*): Filesystem encoding,\n sys.getfilesystemencoding().\n- filesystem_errors (wchar_t*): Filesystem encoding errors,\n sys.getfilesystemencodeerrors().\n- use_hash_seed (int), hash_seed (unsigned long): Randomized hash\n function seed.\n- home (wchar_t*): Python home directory.\n- import_time (int): If non-zero, profile import time.\n- inspect (int): Enter interactive mode after executing a script or a\n command.\n- install_signal_handlers (int): Install signal handlers?\n- interactive (int): Interactive mode.\n- isolated (int): If greater than 0, enable isolated mode:\n - sys.path contains neither the script's directory (computed from\n argv[0] or the current directory) nor the user's site-packages\n directory.\n - Python REPL doesn't import readline nor enable default readline\n configuration on interactive prompts.\n - Set use_environment and user_site_directory to 0.\n- legacy_windows_stdio (int, Windows only): If non-zero, use io.FileIO\n instead of WindowsConsoleIO for sys.stdin, sys.stdout and\n sys.stderr.\n- malloc_stats (int): If non-zero, dump statistics on pymalloc memory\n allocator at exit. 
The option is ignored if Python is built using\n --without-pymalloc.\n- pythonpath_env (wchar_t*): Module search paths as a string separated\n by DELIM (usually : character). Initialized from PYTHONPATH\n environment variable value by default.\n- module_search_paths_set (int), module_search_paths\n (PyWideStringList): sys.path. If module_search_paths_set is equal to\n 0, the module_search_paths is overridden by the function computing\n the Path Configuration.\n- optimization_level (int): Compilation optimization level:\n - 0: Peephole optimizer (and __debug__ is set to True)\n - 1: Remove assertions, set __debug__ to False\n - 2: Strip docstrings\n- parse_argv (int): If non-zero, parse argv the same way the regular\n Python command line arguments, and strip Python arguments from argv:\n see Command Line Arguments.\n- parser_debug (int): If non-zero, turn on parser debugging output\n (for expert only, depending on compilation options).\n- pathconfig_warnings (int): If equal to 0, suppress warnings when\n computing the path configuration (Unix only, Windows does not log\n any warning). Otherwise, warnings are written into stderr.\n- prefix (wchar_t*): sys.prefix.\n- program_name (wchar_t*): Program name.\n- pycache_prefix (wchar_t*): .pyc cache prefix.\n- quiet (int): Quiet mode. For example, don't display the copyright\n and version messages even in interactive mode.\n- run_command (wchar_t*): python3 -c COMMAND argument.\n- run_filename (wchar_t*): python3 FILENAME argument.\n- run_module (wchar_t*): python3 -m MODULE argument.\n- show_alloc_count (int): Show allocation counts at exit? Need a\n special Python build with COUNT_ALLOCS macro defined.\n- show_ref_count (int): Show total reference count at exit? Need a\n debug build of Python (Py_REF_DEBUG macro should be defined).\n- site_import (int): Import the site module at startup?\n- skip_source_first_line (int): Skip the first line of the source?\n- stdio_encoding (wchar_t*), stdio_errors (wchar_t*): Encoding and\n encoding errors of sys.stdin, sys.stdout and sys.stderr.\n- tracemalloc (int): If non-zero, call tracemalloc.start(value).\n- user_site_directory (int): If non-zero, add user site directory to\n sys.path.\n- verbose (int): If non-zero, enable verbose mode.\n- warnoptions (PyWideStringList): Options of the warnings module to\n build warnings filters.\n- write_bytecode (int): If non-zero, write .pyc files.\n- xoptions (PyWideStringList): sys._xoptions.\n\nThe legacy_windows_stdio field is only available on Windows.\n#ifdef MS_WINDOWS macro can be used for Windows specific code.\n\nIf parse_argv is non-zero, argv arguments are parsed the same way the\nregular Python parses command line arguments, and Python arguments are\nstripped from argv: see Command Line Arguments.\n\nThe xoptions options are parsed to set other options: see -X Options.\n\nPyConfig private fields, for internal use only:\n\n- _config_init (int): Function used to initialize PyConfig, used for\n preinitialization.\n- _install_importlib (int): Install importlib?\n- _init_main (int): If equal to 0, stop Python initialization before\n the \"main\" phase (see PEP 432).\n\nMore complete example modifying the default configuration, read the\nconfiguration, and then override some parameters:\n\n PyStatus init_python(const char *program_name)\n {\n PyStatus status;\n\n PyConfig config;\n PyConfig_InitPythonConfig(&config);\n\n /* Set the program name before reading the configuration\n (decode byte string from the locale encoding).\n\n Implicitly preinitialize Python. 
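\n   (PyConfig_SetBytesString() decodes the byte string with\n   Py_DecodeLocale(); calling it here is safe only because PyConfig\n   methods implicitly preinitialize Python when required.)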
*/\n status = PyConfig_SetBytesString(&config, &config.program_name,\n program_name);\n if (PyStatus_Exception(status)) {\n goto done;\n }\n\n /* Read all configuration at once */\n status = PyConfig_Read(&config);\n if (PyStatus_Exception(status)) {\n goto done;\n }\n\n /* Append our custom search path to sys.path */\n status = PyWideStringList_Append(&config.module_search_paths,\n L\"/path/to/more/modules\");\n if (PyStatus_Exception(status)) {\n goto done;\n }\n\n /* Override executable computed by PyConfig_Read() */\n status = PyConfig_SetString(&config, &config.executable,\n L\"/path/to/my_executable\");\n if (PyStatus_Exception(status)) {\n goto done;\n }\n\n status = Py_InitializeFromConfig(&config);\n\n done:\n PyConfig_Clear(&config);\n return status;\n }\n\nNote\n\nPyImport_FrozenModules, PyImport_AppendInittab() and\nPyImport_ExtendInittab() functions are still relevant and continue to\nwork as previously. They should be set or called after Python\npreinitialization and before the Python initialization.\n\nIsolated Configuration\n\nPyPreConfig_InitIsolatedConfig() and PyConfig_InitIsolatedConfig()\nfunctions create a configuration to isolate Python from the system. For\nexample, to embed Python into an application.\n\nThis configuration ignores global configuration variables, environments\nvariables and command line arguments (argv is not parsed). The C\nstandard streams (ex: stdout) and the LC_CTYPE locale are left unchanged\nby default.\n\nConfiguration files are still used with this configuration. Set the Path\nConfiguration (\"output fields\") to ignore these configuration files and\navoid the function computing the default path configuration.\n\nPython Configuration\n\nPyPreConfig_InitPythonConfig() and PyConfig_InitPythonConfig() functions\ncreate a configuration to build a customized Python which behaves as the\nregular Python.\n\nEnvironments variables and command line arguments are used to configure\nPython, whereas global configuration variables are ignored.\n\nThis function enables C locale coercion (PEP 538) and UTF-8 Mode (PEP\n540) depending on the LC_CTYPE locale, PYTHONUTF8 and\nPYTHONCOERCECLOCALE environment variables.\n\nExample of customized Python always running in isolated mode:\n\n int main(int argc, char **argv)\n {\n PyStatus status;\n\n PyConfig config;\n PyConfig_InitPythonConfig(&config);\n\n config.isolated = 1;\n\n /* Decode command line arguments.\n Implicitly preinitialize Python (in isolated mode). */\n status = PyConfig_SetBytesArgv(&config, argc, argv);\n if (PyStatus_Exception(status)) {\n goto fail;\n }\n\n status = Py_InitializeFromConfig(&config);\n if (PyStatus_Exception(status)) {\n goto fail;\n }\n PyConfig_Clear(&config);\n\n return Py_RunMain();\n\n fail:\n PyConfig_Clear(&config);\n if (PyStatus_IsExit(status)) {\n return status.exitcode;\n }\n /* Display the error message and exit the process with\n non-zero exit code */\n Py_ExitStatusException(status);\n }\n\nThis example is a basic implementation of the \"System Python Executable\"\ndiscussed in PEP 432.\n\nPath Configuration\n\nPyConfig contains multiple fields for the path configuration:\n\n- Path configuration input fields:\n - home\n - pythonpath_env\n - pathconfig_warnings\n- Path configuration output fields:\n - exec_prefix\n - executable\n - prefix\n - module_search_paths_set, module_search_paths\n\nIf at least one \"output field\" is not set, Python computes the path\nconfiguration to fill unset fields. 
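\n\nFor illustration, here is a minimal, hypothetical sketch (the function\nname and the paths are placeholders, not values Python supplies) in which\nan embedder sets every output field listed above so that, as explained in\nthe following paragraphs, the default path computation is skipped\nentirely:\n\n    static PyStatus init_fixed_path_config(void)\n    {\n        PyStatus status;\n        PyConfig config;\n        PyConfig_InitIsolatedConfig(&config);\n\n        /* Set all path configuration \"output fields\" explicitly\n           (PyStatus checks on the setters are omitted for brevity). */\n        PyConfig_SetString(&config, &config.prefix, L\"/opt/myapp\");\n        PyConfig_SetString(&config, &config.exec_prefix, L\"/opt/myapp\");\n        PyConfig_SetString(&config, &config.executable,\n                           L\"/opt/myapp/bin/myapp\");\n\n        /* base_prefix and base_exec_prefix inherit from prefix and\n           exec_prefix when left unset. */\n\n        /* Provide sys.path ourselves and mark it as set. */\n        PyWideStringList_Append(&config.module_search_paths,\n                                L\"/opt/myapp/lib/python\");\n        config.module_search_paths_set = 1;\n\n        status = Py_InitializeFromConfig(&config);\n        PyConfig_Clear(&config);\n        return status;\n    }\n\n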
If module_search_paths_set is equal\nto 0, module_search_paths is overridden and module_search_paths_set is\nset to 1.\n\nIt is possible to completely ignore the function computing the default\npath configuration by setting explicitly all path configuration output\nfields listed above. A string is considered as set even if it is\nnon-empty. module_search_paths is considered as set if\nmodule_search_paths_set is set to 1. In this case, path configuration\ninput fields are ignored as well.\n\nSet pathconfig_warnings to 0 to suppress warnings when computing the\npath configuration (Unix only, Windows does not log any warning).\n\nIf base_prefix or base_exec_prefix fields are not set, they inherit\ntheir value from prefix and exec_prefix respectively.\n\nPy_RunMain() and Py_Main() modify sys.path:\n\n- If run_filename is set and is a directory which contains a\n __main__.py script, prepend run_filename to sys.path.\n- If isolated is zero:\n - If run_module is set, prepend the current directory to sys.path.\n Do nothing if the current directory cannot be read.\n - If run_filename is set, prepends the directory of the filename\n to sys.path.\n - Otherwise, prepends an empty string to sys.path.\n\nIf site_import is non-zero, sys.path can be modified by the site module.\nIf user_site_directory is non-zero and the user's site-package directory\nexists, the site module appends the user's site-package directory to\nsys.path.\n\nSee also Configuration Files used by the path configuration.\n\nPy_BytesMain()\n\nPython 3.7 provides a high-level Py_Main() function which requires to\npass command line arguments as wchar_t* strings. It is non-trivial to\nuse the correct encoding to decode bytes. Python has its own set of\nissues with C locale coercion and UTF-8 Mode.\n\nThis PEP adds a new Py_BytesMain() function which takes command line\narguments as bytes:\n\n int Py_BytesMain(int argc, char **argv)\n\nPy_RunMain()\n\nThe new Py_RunMain() function executes the command\n(PyConfig.run_command), the script (PyConfig.run_filename) or the module\n(PyConfig.run_module) specified on the command line or in the\nconfiguration, and then finalizes Python. It returns an exit status that\ncan be passed to the exit() function. :\n\n int Py_RunMain(void);\n\nSee Python Configuration for an example of customized Python always\nrunning in isolated mode using Py_RunMain().\n\nMulti-Phase Initialization Private Provisional API\n\nThis section is a private provisional API introducing multi-phase\ninitialization, the core feature of the PEP 432:\n\n- \"Core\" initialization phase, \"bare minimum Python\":\n - Builtin types;\n - Builtin exceptions;\n - Builtin and frozen modules;\n - The sys module is only partially initialized (ex: sys.path\n doesn't exist yet);\n- \"Main\" initialization phase, Python is fully initialized:\n - Install and configure importlib;\n - Apply the Path Configuration;\n - Install signal handlers;\n - Finish sys module initialization (ex: create sys.stdout and\n sys.path);\n - Enable optional features like faulthandler and tracemalloc;\n - Import the site module;\n - etc.\n\nPrivate provisional API:\n\n- PyConfig._init_main: if set to 0, Py_InitializeFromConfig() stops at\n the \"Core\" initialization phase.\n- PyStatus _Py_InitializeMain(void): move to the \"Main\" initialization\n phase, finish the Python initialization.\n\nNo module is imported during the \"Core\" phase and the importlib module\nis not configured: the Path Configuration is only applied during the\n\"Main\" phase. 
It may allow to customize Python in Python to override or\ntune the Path Configuration, maybe install a custom sys.meta_path\nimporter or an import hook, etc.\n\nIt may become possible to compute the Path Configuration in Python,\nafter the Core phase and before the Main phase, which is one of the PEP\n432 motivation.\n\nThe \"Core\" phase is not properly defined: what should be and what should\nnot be available at this phase is not specified yet. The API is marked\nas private and provisional: the API can be modified or even be removed\nanytime until a proper public API is designed.\n\nExample running Python code between \"Core\" and \"Main\" initialization\nphases:\n\n void init_python(void)\n {\n PyStatus status;\n\n PyConfig config;\n PyConfig_InitPythonConfig(&config);\n\n config._init_main = 0;\n\n /* ... customize 'config' configuration ... */\n\n status = Py_InitializeFromConfig(&config);\n PyConfig_Clear(&config);\n if (PyStatus_Exception(status)) {\n Py_ExitStatusException(status);\n }\n\n /* Use sys.stderr because sys.stdout is only created\n by _Py_InitializeMain() */\n int res = PyRun_SimpleString(\n \"import sys; \"\n \"print('Run Python code before _Py_InitializeMain', \"\n \"file=sys.stderr)\");\n if (res < 0) {\n exit(1);\n }\n\n /* ... put more configuration code here ... */\n\n status = _Py_InitializeMain();\n if (PyStatus_Exception(status)) {\n Py_ExitStatusException(status);\n }\n }\n\nBackwards Compatibility\n\nThis PEP only adds a new API: it leaves the existing API unchanged and\nhas no impact on the backwards compatibility.\n\nThe Python 3.7 Py_Initialize() function now disable the C locale\ncoercion (PEP 538) and the UTF-8 Mode (PEP 540) by default to prevent\nmojibake. The new API using the Python Configuration is needed to enable\nthem automatically.\n\nAnnexes\n\nComparison of Python and Isolated Configurations\n\nDifferences between PyPreConfig_InitPythonConfig() and\nPyPreConfig_InitIsolatedConfig():\n\n+----------------------------+--------+----------+\n| PyPreConfig | Python | Isolated |\n+============================+========+==========+\n| coerce_c_locale_warn | -1 | 0 |\n+----------------------------+--------+----------+\n| coerce_c_locale | -1 | 0 |\n+----------------------------+--------+----------+\n| configure_locale | 1 | 0 |\n+----------------------------+--------+----------+\n| dev_mode | -1 | 0 |\n+----------------------------+--------+----------+\n| isolated | 0 | 1 |\n+----------------------------+--------+----------+\n| legacy_windows_fs_encoding | -1 | 0 |\n+----------------------------+--------+----------+\n| use_environment | 0 | 0 |\n+----------------------------+--------+----------+\n| parse_argv | 1 | 0 |\n+----------------------------+--------+----------+\n| utf8_mode | -1 | 0 |\n+----------------------------+--------+----------+\n\nDifferences between PyConfig_InitPythonConfig() and\nPyConfig_InitIsolatedConfig():\n\n+-------------------------+--------+----------+\n| PyConfig | Python | Isolated |\n+=========================+========+==========+\n| configure_c_stdio | 1 | 0 |\n+-------------------------+--------+----------+\n| install_signal_handlers | 1 | 0 |\n+-------------------------+--------+----------+\n| isolated | 0 | 1 |\n+-------------------------+--------+----------+\n| parse_argv | 1 | 0 |\n+-------------------------+--------+----------+\n| pathconfig_warnings | 1 | 0 |\n+-------------------------+--------+----------+\n| use_environment | 1 | 0 |\n+-------------------------+--------+----------+\n| user_site_directory | 1 | 0 
|\n+-------------------------+--------+----------+\n\nPriority and Rules\n\nPriority of configuration parameters, highest to lowest:\n\n- PyConfig\n- PyPreConfig\n- Configuration files\n- Command line options\n- Environment variables\n- Global configuration variables\n\nPriority of warning options, highest to lowest:\n\n- PyConfig.warnoptions\n- PySys_AddWarnOption()\n- PyConfig.bytes_warning (add \"error::BytesWarning\" filter if greater\n than 1, add \"default::BytesWarning filter if equals to 1)\n- -W opt command line argument\n- PYTHONWARNINGS environment variable\n- PyConfig.dev_mode (add \"default\" filter)\n\nRules on PyConfig parameters:\n\n- If isolated is non-zero, use_environment and user_site_directory are\n set to 0.\n- If dev_mode is non-zero, allocator is set to \"debug\", faulthandler\n is set to 1, and \"default\" filter is added to warnoptions. But the\n PYTHONMALLOC environment variable has the priority over dev_mode to\n set the memory allocator.\n- If base_prefix is not set, it inherits prefix value.\n- If base_exec_prefix is not set, it inherits exec_prefix value.\n- If the python._pth configuration file is present, isolated is set to\n 1 and site_import is set to 0; but site_import is set to 1 if\n python._pth contains import site.\n\nRules on PyConfig and PyPreConfig parameters:\n\n- If PyPreConfig.legacy_windows_fs_encoding is non-zero, set\n PyPreConfig.utf8_mode to 0, set PyConfig.filesystem_encoding to\n mbcs, and set PyConfig.filesystem_errors to replace.\n\nConfiguration Files\n\nPython configuration files used by the Path Configuration:\n\n- pyvenv.cfg\n- python._pth (Windows only)\n- pybuilddir.txt (Unix only)\n\nGlobal Configuration Variables\n\nGlobal configuration variables mapped to PyPreConfig fields:\n\n Variable Field\n -------------------------------- ----------------------------\n Py_IgnoreEnvironmentFlag use_environment (NOT)\n Py_IsolatedFlag isolated\n Py_LegacyWindowsFSEncodingFlag legacy_windows_fs_encoding\n Py_UTF8Mode utf8_mode\n\n(NOT) means that the PyPreConfig value is the opposite of the global\nconfiguration variable value. Py_LegacyWindowsFSEncodingFlag is only\navailable on Windows.\n\nGlobal configuration variables mapped to PyConfig fields:\n\n Variable Field\n -------------------------------------- ---------------------------\n Py_BytesWarningFlag bytes_warning\n Py_DebugFlag parser_debug\n Py_DontWriteBytecodeFlag write_bytecode (NOT)\n Py_FileSystemDefaultEncodeErrors filesystem_errors\n Py_FileSystemDefaultEncoding filesystem_encoding\n Py_FrozenFlag pathconfig_warnings (NOT)\n Py_HasFileSystemDefaultEncoding filesystem_encoding\n Py_HashRandomizationFlag use_hash_seed, hash_seed\n Py_IgnoreEnvironmentFlag use_environment (NOT)\n Py_InspectFlag inspect\n Py_InteractiveFlag interactive\n Py_IsolatedFlag isolated\n Py_LegacyWindowsStdioFlag legacy_windows_stdio\n Py_NoSiteFlag site_import (NOT)\n Py_NoUserSiteDirectory user_site_directory (NOT)\n Py_OptimizeFlag optimization_level\n Py_QuietFlag quiet\n Py_UnbufferedStdioFlag buffered_stdio (NOT)\n Py_VerboseFlag verbose\n _Py_HasFileSystemDefaultEncodeErrors filesystem_errors\n\n(NOT) means that the PyConfig value is the opposite of the global\nconfiguration variable value. 
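\n\nFor example (an illustrative sketch only; new code should prefer the\nPyConfig field over the legacy global), the \"(NOT)\" marker means that the\ntwo spellings below request the same behaviour:\n\n    /* Legacy global configuration variable: do not write .pyc files. */\n    Py_DontWriteBytecodeFlag = 1;\n\n    /* PyConfig equivalent; the value is inverted, as marked \"(NOT)\". */\n    config.write_bytecode = 0;\n\n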
Py_LegacyWindowsStdioFlag is only\navailable on Windows.\n\nCommand Line Arguments\n\nUsage:\n\n python3 [options]\n python3 [options] -c COMMAND\n python3 [options] -m MODULE\n python3 [options] SCRIPT\n\nCommand line options mapped to pseudo-action on PyPreConfig fields:\n\n Option PyConfig field\n --------------- ---------------------\n -E use_environment = 0\n -I isolated = 1\n -X dev dev_mode = 1\n -X utf8 utf8_mode = 1\n -X utf8=VALUE utf8_mode = VALUE\n\nCommand line options mapped to pseudo-action on PyConfig fields:\n\n Option PyConfig field\n ------------------------------ --------------------------------------------\n -b bytes_warning++\n -B write_bytecode = 0\n -c COMMAND run_command = COMMAND\n --check-hash-based-pycs=MODE check_hash_pycs_mode = MODE\n -d parser_debug++\n -E use_environment = 0\n -i inspect++ and interactive++\n -I isolated = 1\n -m MODULE run_module = MODULE\n -O optimization_level++\n -q quiet++\n -R use_hash_seed = 0\n -s user_site_directory = 0\n -S site_import\n -t ignored (kept for backwards compatibility)\n -u buffered_stdio = 0\n -v verbose++\n -W WARNING add WARNING to warnoptions\n -x skip_source_first_line = 1\n -X OPTION add OPTION to xoptions\n\n-h, -? and -V options are handled without PyConfig.\n\n-X Options\n\n-X options mapped to pseudo-action on PyConfig fields:\n\n Option PyConfig field\n -------------------------- -------------------------\n -X dev dev_mode = 1\n -X faulthandler faulthandler = 1\n -X importtime import_time = 1\n -X pycache_prefix=PREFIX pycache_prefix = PREFIX\n -X showalloccount show_alloc_count = 1\n -X showrefcount show_ref_count = 1\n -X tracemalloc=N tracemalloc = N\n\nEnvironment Variables\n\nEnvironment variables mapped to PyPreConfig fields:\n\n Variable PyPreConfig field\n ------------------------------- ---------------------------------------\n PYTHONCOERCECLOCALE coerce_c_locale, coerce_c_locale_warn\n PYTHONDEVMODE dev_mode\n PYTHONLEGACYWINDOWSFSENCODING legacy_windows_fs_encoding\n PYTHONMALLOC allocator\n PYTHONUTF8 utf8_mode\n\nEnvironment variables mapped to PyConfig fields:\n\n Variable PyConfig field\n -------------------------- ------------------------------\n PYTHONDEBUG parser_debug\n PYTHONDEVMODE dev_mode\n PYTHONDONTWRITEBYTECODE write_bytecode\n PYTHONDUMPREFS dump_refs\n PYTHONEXECUTABLE program_name\n PYTHONFAULTHANDLER faulthandler\n PYTHONHASHSEED use_hash_seed, hash_seed\n PYTHONHOME home\n PYTHONINSPECT inspect\n PYTHONIOENCODING stdio_encoding, stdio_errors\n PYTHONLEGACYWINDOWSSTDIO legacy_windows_stdio\n PYTHONMALLOCSTATS malloc_stats\n PYTHONNOUSERSITE user_site_directory\n PYTHONOPTIMIZE optimization_level\n PYTHONPATH pythonpath_env\n PYTHONPROFILEIMPORTTIME import_time\n PYTHONPYCACHEPREFIX, pycache_prefix\n PYTHONTRACEMALLOC tracemalloc\n PYTHONUNBUFFERED buffered_stdio\n PYTHONVERBOSE verbose\n PYTHONWARNINGS warnoptions\n\nPYTHONLEGACYWINDOWSFSENCODING and PYTHONLEGACYWINDOWSSTDIO are specific\nto Windows.\n\nDefault Python Configuration\n\nPyPreConfig_InitPythonConfig():\n\n- allocator = PYMEM_ALLOCATOR_NOT_SET\n- coerce_c_locale_warn = -1\n- coerce_c_locale = -1\n- configure_locale = 1\n- dev_mode = -1\n- isolated = 0\n- legacy_windows_fs_encoding = -1\n- use_environment = 1\n- utf8_mode = -1\n\nPyConfig_InitPythonConfig():\n\n- argv = []\n- base_exec_prefix = NULL\n- base_prefix = NULL\n- buffered_stdio = 1\n- bytes_warning = 0\n- check_hash_pycs_mode = NULL\n- configure_c_stdio = 1\n- dev_mode = 0\n- dump_refs = 0\n- exec_prefix = NULL\n- executable = NULL\n- faulthandler = 0\n- 
filesystem_encoding = NULL\n- filesystem_errors = NULL\n- hash_seed = 0\n- home = NULL\n- import_time = 0\n- inspect = 0\n- install_signal_handlers = 1\n- interactive = 0\n- isolated = 0\n- malloc_stats = 0\n- module_search_path_env = NULL\n- module_search_paths = []\n- optimization_level = 0\n- parse_argv = 1\n- parser_debug = 0\n- pathconfig_warnings = 1\n- prefix = NULL\n- program_name = NULL\n- pycache_prefix = NULL\n- quiet = 0\n- run_command = NULL\n- run_filename = NULL\n- run_module = NULL\n- show_alloc_count = 0\n- show_ref_count = 0\n- site_import = 1\n- skip_source_first_line = 0\n- stdio_encoding = NULL\n- stdio_errors = NULL\n- tracemalloc = 0\n- use_environment = 1\n- use_hash_seed = 0\n- user_site_directory = 1\n- verbose = 0\n- warnoptions = []\n- write_bytecode = 1\n- xoptions = []\n- _init_main = 1\n- _install_importlib = 1\n\nDefault Isolated Configuration\n\nPyPreConfig_InitIsolatedConfig():\n\n- allocator = PYMEM_ALLOCATOR_NOT_SET\n- coerce_c_locale_warn = 0\n- coerce_c_locale = 0\n- configure_locale = 0\n- dev_mode = 0\n- isolated = 1\n- legacy_windows_fs_encoding = 0\n- use_environment = 0\n- utf8_mode = 0\n\nPyConfig_InitIsolatedConfig():\n\n- argv = []\n- base_exec_prefix = NULL\n- base_prefix = NULL\n- buffered_stdio = 1\n- bytes_warning = 0\n- check_hash_pycs_mode = NULL\n- configure_c_stdio = 0\n- dev_mode = 0\n- dump_refs = 0\n- exec_prefix = NULL\n- executable = NULL\n- faulthandler = 0\n- filesystem_encoding = NULL\n- filesystem_errors = NULL\n- hash_seed = 0\n- home = NULL\n- import_time = 0\n- inspect = 0\n- install_signal_handlers = 0\n- interactive = 0\n- isolated = 1\n- malloc_stats = 0\n- module_search_path_env = NULL\n- module_search_paths = []\n- optimization_level = 0\n- parse_argv = 0\n- parser_debug = 0\n- pathconfig_warnings = 0\n- prefix = NULL\n- program_name = NULL\n- pycache_prefix = NULL\n- quiet = 0\n- run_command = NULL\n- run_filename = NULL\n- run_module = NULL\n- show_alloc_count = 0\n- show_ref_count = 0\n- site_import = 1\n- skip_source_first_line = 0\n- stdio_encoding = NULL\n- stdio_errors = NULL\n- tracemalloc = 0\n- use_environment = 0\n- use_hash_seed = 0\n- user_site_directory = 0\n- verbose = 0\n- warnoptions = []\n- write_bytecode = 1\n- xoptions = []\n- _init_main = 1\n- _install_importlib = 1\n\nPython 3.7 API\n\nPython 3.7 has 4 functions in its C API to initialize and finalize\nPython:\n\n- Py_Initialize(), Py_InitializeEx(): initialize Python\n- Py_Finalize(), Py_FinalizeEx(): finalize Python\n\nPython 3.7 can be configured using Global Configuration Variables,\nEnvironment Variables, and the following functions:\n\n- PyImport_AppendInittab()\n- PyImport_ExtendInittab()\n- PyMem_SetAllocator()\n- PyMem_SetupDebugHooks()\n- PyObject_SetArenaAllocator()\n- Py_SetPath()\n- Py_SetProgramName()\n- Py_SetPythonHome()\n- Py_SetStandardStreamEncoding()\n- PySys_AddWarnOption()\n- PySys_AddXOption()\n- PySys_ResetWarnOptions()\n\nThere is also a high-level Py_Main() function and PyImport_FrozenModules\nvariable which can be overridden.\n\nSee Initialization, Finalization, and Threads documentation.\n\nPython Issues\n\nIssues that will be fixed by this PEP, directly or indirectly:\n\n- bpo-1195571: \"simple callback system for Py_FatalError\"\n- bpo-11320: \"Usage of API method Py_SetPath causes errors in\n Py_Initialize() (Posix only)\"\n- bpo-13533: \"Would like Py_Initialize to play friendly with host app\"\n- bpo-14956: \"custom PYTHONPATH may break apps embedding Python\"\n- bpo-19983: \"When interrupted during startup, Python 
should not call\n abort() but exit()\"\n- bpo-22213: \"Make pyvenv style virtual environments easier to\n configure when embedding Python\".\n- bpo-29778: \"_Py_CheckPython3 uses uninitialized dllpath when\n embedder sets module path with Py_SetPath\"\n- bpo-30560: \"Add Py_SetFatalErrorAbortFunc: Allow embedding program\n to handle fatal errors\".\n- bpo-31745: \"Overloading \"Py_GetPath\" does not work\"\n- bpo-32573: \"All sys attributes (.argv, ...) should exist in embedded\n environments\".\n- bpo-33135: \"Define field prefixes for the various config structs\".\n The PEP now defines well how warnings options are handled.\n- bpo-34725: \"Py_GetProgramFullPath() odd behaviour in Windows\"\n- bpo-36204: \"Deprecate calling Py_Main() after Py_Initialize()? Add\n Py_InitializeFromArgv()?\"\n\nIssues of the PEP implementation:\n\n- bpo-16961: \"No regression tests for -E and individual environment\n vars\"\n- bpo-20361: \"-W command line options and PYTHONWARNINGS environmental\n variable should not override -b / -bb command line options\"\n- bpo-26122: \"Isolated mode doesn't ignore PYTHONHASHSEED\"\n- bpo-29818: \"Py_SetStandardStreamEncoding leads to a memory error in\n debug mode\"\n- bpo-31845: \"PYTHONDONTWRITEBYTECODE and PYTHONOPTIMIZE have no\n effect\"\n- bpo-32030: \"PEP 432: Rewrite Py_Main()\"\n- bpo-32124: \"Document functions safe to be called before\n Py_Initialize()\"\n- bpo-33042: \"New 3.7 startup sequence crashes PyInstaller\"\n- bpo-33932: \"Calling Py_Initialize() twice now triggers a fatal error\n (Python 3.7)\"\n- bpo-34008: \"Do we support calling Py_Main() after Py_Initialize()?\"\n- bpo-34170: \"Py_Initialize(): computing path configuration must not\n have side effect (PEP 432)\"\n- bpo-34589: \"Py_Initialize() and Py_Main() should not enable C locale\n coercion\"\n- bpo-34639: \"PYTHONCOERCECLOCALE is ignored when using -E or -I\n option\"\n- bpo-36142: \"Add a new _PyPreConfig step to Python initialization to\n setup memory allocator and encodings\"\n- bpo-36202: \"Calling Py_DecodeLocale() before _PyPreConfig_Write()\n can produce mojibake\"\n- bpo-36301: \"Add _Py_PreInitialize() function\"\n- bpo-36443: \"Disable coerce_c_locale and utf8_mode by default in\n _PyPreConfig?\"\n- bpo-36444: \"Python initialization: remove _PyMainInterpreterConfig\"\n- bpo-36471: \"PEP 432, PEP 587: Add _Py_RunMain()\"\n- bpo-36763: \"PEP 587: Rework initialization API to prepare second\n version of the PEP\"\n- bpo-36775: \"Rework filesystem codec implementation\"\n- bpo-36900: \"Use _PyCoreConfig rather than global configuration\n variables\"\n\nIssues related to this PEP:\n\n- bpo-12598: \"Move sys variable initialization from import.c to\n sysmodule.c\"\n- bpo-15577: \"Real argc and argv in embedded interpreter\"\n- bpo-16202: \"sys.path[0] security issues\"\n- bpo-18309: \"Make python slightly more relocatable\"\n- bpo-22257: \"PEP 432: Redesign the interpreter startup sequence\"\n- bpo-25631: \"Segmentation fault with invalid Unicode command-line\n arguments in embedded Python\"\n- bpo-26007: \"Support embedding the standard library in an executable\"\n- bpo-31210: \"Can not import modules if sys.prefix contains DELIM\".\n- bpo-31349: \"Embedded initialization ignores Py_SetProgramName()\"\n- bpo-33919: \"Expose _PyCoreConfig structure to Python\"\n- bpo-35173: \"Re-use already existing functionality to allow Python\n 2.7.x (both embedded and standalone) to locate the module path\n according to the shared library\"\n\nDiscussions\n\n- May 2019:\n - [Python-Dev] PEP 587 
\"Python Initialization Configuration\"\n version 4\n - [Python-Dev] RFC: PEP 587 \"Python Initialization Configuration\":\n 3rd version\n - Study on applications embedding Python\n - [Python-Dev] RFC: PEP 587 \"Python Initialization Configuration\":\n 2nd version\n- March 2019:\n - [Python-Dev] PEP 587: Python Initialization Configuration\n - [Python-Dev] New Python Initialization API\n- February 2019:\n - Adding char* based APIs for Unix\n- July-August 2018:\n - July: [Python-Dev] New _Py_InitializeFromConfig() function (PEP\n 432)\n - August: [Python-Dev] New _Py_InitializeFromConfig() function\n (PEP 432)\n\nVersion History\n\n- Version 5:\n - Rename PyInitError to PyStatus\n - Rename PyInitError_Failed() to PyStatus_Exception()\n - Rename Py_ExitInitError() to Py_ExitStatusException()\n - Add PyPreConfig._config_init private field.\n - Fix Python Configuration default values: isolated=0 and\n use_environment=1, instead of -1.\n - Add \"Multi-Phase Initialization Private Provisional API\" and\n \"Discussions\" sections\n- Version 4:\n - Introduce \"Python Configuration\" and \"Isolated Configuration\"\n which are well better defined. Replace all macros with\n functions.\n - Replace PyPreConfig_INIT and PyConfig_INIT macros with\n functions:\n - PyPreConfig_InitIsolatedConfig(),\n PyConfig_InitIsolatedConfig()\n - PyPreConfig_InitPythonConfig(), PyConfig_InitPythonConfig()\n - PyPreConfig no longer uses dynamic memory, the allocator field\n type becomes an int, add configure_locale and parse_argv field.\n - PyConfig: rename module_search_path_env to pythonpath_env,\n rename use_module_search_paths to module_search_paths_set,\n remove program and dll_path.\n - Replace Py_INIT_xxx() macros with PyInitError_xxx() functions.\n - Remove the \"Constant PyConfig\" section. 
Remove\n Py_InitializeFromArgs() and Py_InitializeFromBytesArgs()\n functions.\n- Version 3:\n - PyConfig: Add configure_c_stdio and parse_argv; rename _frozen\n to pathconfig_warnings.\n - Rename functions using bytes strings and wide character strings.\n For example, Py_PreInitializeFromWideArgs() becomes\n Py_PreInitializeFromArgs(), and PyConfig_SetArgv() becomes\n PyConfig_SetBytesArgv().\n - Add PyWideStringList_Insert() function.\n - New \"Path configuration\", \"Isolate Python\", \"Python Issues\" and\n \"Version History\" sections.\n - PyConfig_SetString() and PyConfig_SetBytesString() now requires\n the configuration as the first argument.\n - Rename Py_UnixMain() to Py_BytesMain()\n- Version 2: Add PyConfig methods (ex: PyConfig_Read()), add\n PyWideStringList_Append(), rename PyWideCharList to\n PyWideStringList.\n- Version 1: Initial version.\n\nAcceptance\n\nPEP 587 was accepted by Thomas Wouters on May 26, 2019.\n\nCopyright\n\nThis document has been placed in the public domain.\n\nPEP: 671 Title: Syntax for late-bound function argument defaults Author:\nChris Angelico Discussions-To:\nhttps://mail.python.org/archives/list/python-ideas@python.org/thread/UVOQEK7IRFSCBOH734T5GFJOEJXFCR6A/\nStatus: Draft Type: Standards Track Content-Type: text/x-rst Created:\n24-Oct-2021 Python-Version: 3.12 Post-History: 24-Oct-2021, 01-Dec-2021\n\nAbstract\n\nFunction parameters can have default values which are calculated during\nfunction definition and saved. This proposal introduces a new form of\nargument default, defined by an expression to be evaluated at function\ncall time.\n\nMotivation\n\nOptional function arguments, if omitted, often have some sort of logical\ndefault value. When this value depends on other arguments, or needs to\nbe reevaluated each function call, there is currently no clean way to\nstate this in the function header.\n\nCurrently-legal idioms for this include:\n\n # Very common: Use None and replace it in the function\n def bisect_right(a, x, lo=0, hi=None, *, key=None):\n if hi is None:\n hi = len(a)\n\n # Also well known: Use a unique custom sentinel object\n _USE_GLOBAL_DEFAULT = object()\n def connect(timeout=_USE_GLOBAL_DEFAULT):\n if timeout is _USE_GLOBAL_DEFAULT:\n timeout = default_timeout\n\n # Unusual: Accept star-args and then validate\n def add_item(item, *optional_target):\n if not optional_target:\n target = []\n else:\n target = optional_target[0]\n\nIn each form, help(function) fails to show the true default value.
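For example, inspect.signature -- which help() draws on -- reports only the
sentinel for the first idiom above, never the effective default:

    import inspect

    def bisect_right(a, x, lo=0, hi=None, *, key=None):
        if hi is None:
            hi = len(a)  # the real default, invisible to introspection
        ...

    print(inspect.signature(bisect_right))
    # -> (a, x, lo=0, hi=None, *, key=None)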
Each of these idioms has additional problems, too; using None is only valid if None is\nnot itself a plausible function parameter; the custom sentinel requires\na global constant; and use of star-args implies that more than one\nargument could be given.\n\nSpecification\n\nFunction default arguments can be defined using the new => notation:\n\n def bisect_right(a, x, lo=0, hi=>len(a), *, key=None):\n def connect(timeout=>default_timeout):\n def add_item(item, target=>[]):\n def format_time(fmt, time_t=>time.time()):\n\nThe expression is saved in its source code form for the purpose of\ninspection, and bytecode to evaluate it is prepended to the function's\nbody.\n\nNotably, the expression is evaluated in the function's run-time scope,\nNOT the scope in which the function was defined (as are early-bound\ndefaults). This allows the expression to refer to other arguments.\n\nMultiple late-bound arguments are evaluated from left to right, and can\nrefer to previously-defined values. Order is defined by the function,\nregardless of the order in which keyword arguments may be passed.\n\n def prevref(word=\"foo\", a=>len(word), b=>a//2): # Valid\n def selfref(spam=>spam): # UnboundLocalError\n def spaminate(sausage=>eggs + 1, eggs=>sausage - 1): # Confusing, don't do this\n def frob(n=>len(items), items=[]): # See below\n\nEvaluation order is left-to-right; however, implementations MAY choose\nto do so in two separate passes, first for all passed arguments and\nearly-bound defaults, and then a second pass for late-bound defaults.\nOtherwise, all arguments will be assigned strictly left-to-right.\n\nRejected choices of spelling\n\nWhile this document specifies a single syntax name=>expression,\nalternate spellings are similarly plausible. The following spellings\nwere considered:\n\n def bisect(a, hi=>len(a)):\n def bisect(a, hi:=len(a)):\n def bisect(a, hi?=len(a)):\n def bisect(a, @hi=len(a)):\n\nSince default arguments behave largely the same whether they're early or\nlate bound, the chosen syntax hi=>len(a) is deliberately similar to the\nexisting early-bind syntax.\n\nOne reason for rejection of the := syntax is its behaviour with\nannotations. Annotations go before the default, so in all syntax\noptions, it must be unambiguous (both to the human and the parser)\nwhether this is an annotation, a default, or both. The alternate syntax\ntarget:=expr runs the risk of being misinterpreted as target:int=expr\nwith the annotation omitted in error, and may thus mask bugs. The chosen\nsyntax target=>expr does not have this problem.\n\nHow to Teach This\n\nEarly-bound default arguments should always be taught first, as they are\nthe simpler and more efficient way to evaluate arguments. Building on\nthem, late-bound arguments are broadly equivalent to code at the top of\nthe function:\n\n def add_item(item, target=>[]):\n\n # Equivalent pseudocode:\n def add_item(item, target=<OPTIONAL>):\n if target was omitted: target = []\n\nA simple rule of thumb is: \"target=expression\" is evaluated when the\nfunction is defined, and \"target=>expression\" is evaluated when the\nfunction is called. Either way, if the argument is provided at call\ntime, the default is ignored. While this does not completely explain all\nthe subtleties, it is sufficient to cover the important distinction here\n(and the fact that they are similar).\n\nInteraction with other proposals\n\nPEP 661 attempts to solve one of the same problems as this does.
It\nseeks to improve the documentation of sentinel values in default\narguments, where this proposal seeks to remove the need for sentinels in\nmany common cases. PEP 661 is able to improve documentation in\narbitrarily complicated functions (it cites traceback.print_exception as\nits primary motivation, which has two arguments which must\nboth-or-neither be specified); on the other hand, many of the common\ncases would no longer need sentinels if the true default could be\ndefined by the function. Additionally, dedicated sentinel objects can be\nused as dictionary lookup keys, where PEP 671 does not apply.\n\nA generic system for deferred evaluation has been proposed at times (not\nto be confused with PEP 563 and PEP 649 which are specific to\nannotations). While it may seem, on the surface, that late-bound\nargument defaults are of a similar nature, they are in fact unrelated\nand orthogonal ideas, and both could be of value to the language. The\nacceptance or rejection of this proposal would not affect the viability\nof a deferred evaluation proposal, and vice versa. (A key difference\nbetween generalized deferred evaluation and argument defaults is that\nargument defaults will always and only be evaluated as the function\nbegins executing, whereas deferred expressions would only be realized\nupon reference.)\n\nImplementation details\n\nThe following relates to the reference implementation, and is not\nnecessarily part of the specification.\n\nArgument defaults (positional or keyword) have both their values, as\nalready retained, and an extra piece of information. For positional\narguments, the extras are stored in a tuple in __defaults_extra__, and\nfor keyword-only, a dict in __kwdefaults_extra__. If this attribute is\nNone, it is equivalent to having None for every argument default.\n\nFor each parameter with a late-bound default, the special value Ellipsis\nis stored as the value placeholder, and the corresponding extra\ninformation needs to be queried. If it is None, then the default is\nindeed the value Ellipsis; otherwise, it is a descriptive string and the\ntrue value is calculated as the function begins.\n\nWhen a parameter with a late-bound default is omitted, the function will\nbegin with the parameter unbound. The function begins by testing for\neach parameter with a late-bound default using a new opcode\nQUERY_FAST/QUERY_DEREF, and if unbound, evaluates the original\nexpression. 
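In terms of observable behaviour, this start-of-function check is roughly
what one would write by hand today with a sentinel (a sketch only; the
_UNSET object below is purely illustrative):

    _UNSET = object()  # illustrative stand-in for "parameter still unbound"

    # Hand-written equivalent of: def prevref(word="foo", a=>len(word), b=>a//2)
    def prevref(word="foo", a=_UNSET, b=_UNSET):
        if a is _UNSET:
            a = len(word)   # may refer to earlier parameters
        if b is _UNSET:
            b = a // 2      # sees a, whether passed or just defaulted
        return word, a, b

    print(prevref())        # ('foo', 3, 1)
    print(prevref(b=10))    # ('foo', 3, 10)

The real implementation needs no sentinel: the QUERY_FAST/QUERY_DEREF
opcode tests directly whether the parameter is still unbound.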
This opcode (available only for fast locals and closure\nvariables) pushes True onto the stack if the given local has a value,\nand False if not - meaning that it pushes False if LOAD_FAST or\nLOAD_DEREF would raise UnboundLocalError, and True if it would succeed.\n\nOut-of-order variable references are permitted as long as the referent\nhas a value from an argument or early-bound default.\n\nCosts\n\nWhen no late-bound argument defaults are used, the following costs\nshould be all that are incurred:\n\n- Function objects require two additional pointers, which will be NULL\n- Compiling code and constructing functions have additional flag\n checks\n- Using Ellipsis as a default value will require run-time verification\n to see if late-bound defaults exist.\n\nThese costs are expected to be minimal (on 64-bit Linux, this increases\nall function objects from 152 bytes to 168), with virtually no run-time\ncost when late-bound defaults are not used.\n\nBackward incompatibility\n\nWhere late-bound defaults are not used, behaviour should be identical.\nCare should be taken if Ellipsis is found, as it may not represent\nitself, but beyond that, tools should see existing code unchanged.\n\nReferences\n\nhttps://github.com/rosuav/cpython/tree/pep-671\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\nPEP: 6 Title: Bug Fix Releases Author: Aahz ,\nAnthony Baxter Status: Superseded Type:\nProcess Content-Type: text/x-rst Created: 15-Mar-2001 Post-History:\n15-Mar-2001, 18-Apr-2001, 19-Aug-2004\n\nNote\n\nThis PEP is obsolete. The current release policy is documented in the\ndevguide. See also PEP 101 for mechanics of the release process.\n\nAbstract\n\nPython has historically had only a single fork of development, with\nreleases having the combined purpose of adding new features and\ndelivering bug fixes (these kinds of releases will be referred to as\n\"major releases\"). This PEP describes how to fork off maintenance, or\nbug fix, releases of old versions for the primary purpose of fixing\nbugs.\n\nThis PEP is not, repeat NOT, a guarantee of the existence of bug fix\nreleases; it only specifies a procedure to be followed if bug fix\nreleases are desired by enough of the Python community willing to do the\nwork.\n\nMotivation\n\nWith the move to SourceForge, Python development has accelerated. There\nis a sentiment among part of the community that there was too much\nacceleration, and many people are uncomfortable with upgrading to new\nversions to get bug fixes when so many features have been added,\nsometimes late in the development cycle.\n\nOne solution for this issue is to maintain the previous major release,\nproviding bug fixes until the next major release. This should make\nPython more attractive for enterprise development, where Python may need\nto be installed on hundreds or thousands of machines.\n\nProhibitions\n\nBug fix releases are required to adhere to the following restrictions:\n\n1. 
There must be zero syntax changes. All .pyc and .pyo files must work\n (no regeneration needed) with all bugfix releases forked off from a\n major release.\n2. There must be zero pickle changes.\n3. There must be no incompatible C API changes. All extensions must\n continue to work without recompiling in all bugfix releases in the\n same fork as a major release.\n\nBreaking any of these prohibitions requires a BDFL proclamation (and a\nprominent warning in the release notes).\n\nNot-Quite-Prohibitions\n\nWhere possible, bug fix releases should also:\n\n1. Have no new features. The purpose of a bug fix release is to fix\n bugs, not add the latest and greatest whizzo feature from the HEAD\n of the CVS root.\n2. Be a painless upgrade. Users should feel confident that an upgrade\n from 2.x.y to 2.x.(y+1) will not break their running systems. This\n means that, unless it is necessary to fix a bug, the standard\n library should not change behavior, or worse yet, APIs.\n\nApplicability of Prohibitions\n\nThe above prohibitions and not-quite-prohibitions apply both for a final\nrelease to a bugfix release (for instance, 2.4 to 2.4.1) and for one\nbugfix release to the next in a series (for instance 2.4.1 to 2.4.2).\n\nFollowing the prohibitions listed in this PEP should help keep the\ncommunity happy that a bug fix release is a painless and safe upgrade.\n\nHelping the Bug Fix Releases Happen\n\nHere's a few pointers on helping the bug fix release process along.\n\n1. Backport bug fixes. If you fix a bug, and it seems appropriate, port\n it to the CVS branch for the current bug fix release. If you're\n unwilling or unable to backport it yourself, make a note in the\n commit message, with words like 'Bugfix candidate' or 'Backport\n candidate'.\n2. If you're not sure, ask. Ask the person managing the current bug fix\n releases if they think a particular fix is appropriate.\n3. If there's a particular bug you'd particularly like fixed in a bug\n fix release, jump up and down and try to get it done. Do not wait\n until 48 hours before a bug fix release is due, and then start\n asking for bug fixes to be included.\n\nVersion Numbers\n\nStarting with Python 2.0, all major releases are required to have a\nversion number of the form X.Y; bugfix releases will always be of the\nform X.Y.Z.\n\nThe current major release under development is referred to as release N;\nthe just-released major version is referred to as N-1.\n\nIn CVS, the bug fix releases happen on a branch. For release 2.x, the\nbranch is named 'release2x-maint'. For example, the branch for the 2.3\nmaintenance releases is release23-maint\n\nProcedure\n\nThe process for managing bugfix releases is modeled in part on the Tcl\nsystem[1].\n\nThe Patch Czar is the counterpart to the BDFL for bugfix releases.\nHowever, the BDFL and designated appointees retain veto power over\nindividual patches. A Patch Czar might only be looking after a single\nbranch of development - it's quite possible that a different person\nmight be maintaining the 2.3.x and the 2.4.x releases.\n\nAs individual patches get contributed to the current trunk of CVS, each\npatch committer is requested to consider whether the patch is a bug fix\nsuitable for inclusion in a bugfix release. If the patch is considered\nsuitable, the committer can either commit the release to the maintenance\nbranch, or else mark the patch in the commit message.\n\nIn addition, anyone from the Python community is free to suggest patches\nfor inclusion. 
Patches may be submitted specifically for bugfix\nreleases; they should follow the guidelines in PEP 3. In general,\nthough, it's probably better that a bug in a specific release also be\nfixed on the HEAD as well as the branch.\n\nThe Patch Czar decides when there are a sufficient number of patches to\nwarrant a release. The release gets packaged up, including a Windows\ninstaller, and made public. If any new bugs are found, they must be\nfixed immediately and a new bugfix release publicized (with an\nincremented version number). For the 2.3.x cycle, the Patch Czar\n(Anthony) has been trying for a release approximately every six months,\nbut this should not be considered binding in any way on any future\nreleases.\n\nBug fix releases are expected to occur at an interval of roughly six\nmonths. This is only a guideline, however - obviously, if a major bug is\nfound, a bugfix release may be appropriate sooner. In general, only the\nN-1 release will be under active maintenance at any time. That is,\nduring Python 2.4's development, Python 2.3 gets bugfix releases. If,\nhowever, someone qualified wishes to continue the work to maintain an\nolder release, they should be encouraged.\n\nPatch Czar History\n\nAnthony Baxter is the Patch Czar for 2.3.1 through 2.3.4.\n\nBarry Warsaw is the Patch Czar for 2.2.3.\n\nGuido van Rossum is the Patch Czar for 2.2.2.\n\nMichael Hudson is the Patch Czar for 2.2.1.\n\nAnthony Baxter is the Patch Czar for 2.1.2 and 2.1.3.\n\nThomas Wouters is the Patch Czar for 2.1.1.\n\nMoshe Zadka is the Patch Czar for 2.0.1.\n\nHistory\n\nThis PEP started life as a proposal on comp.lang.python. The original\nversion suggested a single patch for the N-1 release to be released\nconcurrently with the N release. The original version also argued for\nsticking with a strict bug fix policy.\n\nFollowing feedback from the BDFL and others, the draft PEP was written\ncontaining an expanded bugfix release cycle that permitted any previous\nmajor release to obtain patches and also relaxed the strict bug fix\nrequirement (mainly due to the example of PEP 235, which could be argued\nas either a bug fix or a feature).\n\nDiscussion then mostly moved to python-dev, where BDFL finally issued a\nproclamation basing the Python bugfix release process on Tcl's, which\nessentially returned to the original proposal in terms of being only the\nN-1 release and only bug fixes, but allowing multiple bugfix releases\nuntil release N is published.\n\nAnthony Baxter then took this PEP and revised it, based on lessons from\nthe 2.3 release cycle.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] http://www.tcl.tk/cgi-bin/tct/tip/28.html\n\nPEP: 347 Title: Migrating the Python CVS to Subversion Version:\n$Revision$ Last-Modified: $Date$ Author: Martin von Löwis\n Discussions-To: python-dev@python.org Status: Final\nType: Process Content-Type: text/x-rst Created: 14-Jul-2004\nPost-History: 14-Jul-2004\n\nAbstract\n\nThe Python source 
code is currently managed in a CVS repository on\nsourceforge.net. This PEP proposes to move it to a Subversion repository\non svn.python.org.\n\nRationale\n\nThis change has two aspects: moving from CVS to Subversion, and moving\nfrom SourceForge to python.org. For each, a rationale will be given.\n\nMoving to Subversion\n\nCVS has a number of limitations that have been eliminated by Subversion.\nFor the development of Python, the most notable improvements are:\n\n- the ability to rename files and directories, and to remove\n directories, while keeping the history of these files.\n- support for change sets (sets of correlated changes to multiple\n files) through global revision numbers. Change sets are\n transactional.\n- atomic, fast tagging: a cvs tag might take many minutes; a\n Subversion tag (svn cp) will complete quickly, and atomically.\n Likewise, branches are very efficient.\n- support for offline diffs, which is useful when creating patches.\n\nMoving to python.org\n\nSourceForge has kindly provided an important infrastructure for the past\nyears. Unfortunately, the attention that SF received has also caused\nrepeated overload situations in the past, to which the SF operators\ncould not always respond in a timely manner. In particular, for CVS,\nthey had to reduce the load on the primary CVS server by introducing a\nsecond, read-only CVS server for anonymous access. This server is\nregularly synchronized, but lags behind the read-write CVS repository\nbetween synchronizations. As a result, users without commit access can\nsee recent changes to the repository only after a delay.\n\nOn python.org, it would be possible to make the repository accessible\nfor anonymous access.\n\nMigration Procedure\n\nTo move the Python CVS repository, the following steps need to be\nexecuted. The steps are elaborated upon in the following sections.\n\n1. Collect SSH keys for all current committers, along with usernames to\n appear in commit messages.\n2. At the beginning of the migration, announce that the repository on\n SourceForge closed.\n3. 24 hours after the last commit, download the CVS repository.\n4. Convert the CVS repository into a Subversion repository.\n5. Publish the repository with write access for committers, and\n read-only anonymous access.\n6. Disable CVS access on SF.\n\nCollect SSH keys\n\nAfter some discussion, svn+ssh was selected as the best method for write\naccess to the repository. Developers can continue to use their SSH keys,\nbut they must be installed on python.org.\n\nIn order to avoid having to create a new Unix user for each developer, a\nsingle account should be used, with command= attributes in the\nauthorized_keys files.\n\nThe lines in the authorized_keys file should read like this (wrapped for\nbetter readability):\n\n command=\"/usr/bin/svnserve --root=/svnroot -t\n --tunnel-user=''\",no-port-forwarding,\n no-X11-forwarding,no-agent-forwarding,no-pty\n ssh-dss \n\nAs the usernames, the real names should be used instead of the SF\naccount names, so that people can be better identified in log messages.\n\nAdministrator Access\n\nAdministrator access to the pythondev account should be granted to all\ncurrent admins of the Python SF project. To distinguish between shell\nlogin and svnserve login, admins need to maintain two keys. 
Using\nOpenSSH, the following procedure can be used to create a second key:\n\n cd .ssh\n ssh-keygen -t DSA -f pythondev -C @pythondev\n vi config\n\nIn the config file, the following lines need to be added:\n\n Host pythondev\n Hostname dinsdale.python.org\n User pythondev\n IdentityFile ~/.ssh/pythondev\n\nThen, shell login becomes possible through \"ssh pythondev\".\n\nDownloading the CVS Repository\n\nThe CVS repository can be downloaded from\n\n http://cvs.sourceforge.net/cvstarballs/python-cvsroot.tar.bz2\n\nSince this tarball is generated only once a day, some time must pass\nafter the repository freeze before the tarball can be picked up. It\nshould be verified that the last commit, as recorded on the\npython-commits mailing list, is indeed included in the tarball.\n\nAfter the conversion, the converted CVS tarball should be kept forever\non www.python.org/archive/python-cvsroot-.tar.bz2\n\nConverting the CVS Repository\n\nThe Python CVS repository contains two modules: distutils and python.\nThe python module is further structured into dist and nondist, where\ndist only contains src (the python code proper). nondist contains\nvarious subdirectories.\n\nThese should be reorganized in the Subversion repository to get shorter\nURLs, following the /{trunk,tags,branches} structure. A project\nwill be created for each nondist directory, plus for src (called\npython), plus distutils. Reorganizing the repository is best done in the\nCVS tree, as shown below.\n\nThe fsfs backend should be used as the repository format (which requires\nSubversion 1.1). The fsfs backend has the advantage of being more\nbackup-friendly, as it allows incremental repository backups, without\nrequiring any dump commands to be run.\n\nThe conversion should be done using the cvs2svn utility, available e.g.\nin the cvs2svn Debian package. As cvs2svn does not currently support the\nproject/trunk structure, each project needs to be converted separately.\nTo get each conversion result into a separate directory in the target\nrepository, svnadmin load must be used.\n\nSubversion has a different view on binary-vs-text files than CVS. To\ncorrectly carry the CVS semantics forward, svn:eol-style should be set\nto native on all files that are not marked binary in the CVS.\n\nIn summary, the conversion script is:\n\n #!/bin/sh\n rm cvs2svn-*\n rm -rf python py.new\n tar xjf python-cvsroot.tar.bz2\n rm -rf python/CVSROOT\n svnadmin create --fs-type fsfs py.new\n mv python/python python/orig\n mv python/orig/dist/src python/python\n mv python/orig/nondist/* python\n # nondist/nondist is empty\n rmdir python/nondist\n rm -rf python/orig\n for a in python/*\n do\n b=`basename $a`\n cvs2svn -q --dump-only --encoding=latin1 --force-branch=cnri-16-start \\\n --force-branch=descr-branch --force-branch=release152p1-patches \\\n --force-tag=r16b1 $a\n svn mkdir -m\"Conversion to SVN\" file:///`pwd`/py.new/$b\n svnadmin load -q --parent-dir $b py.new < cvs2svn-dump\n rm cvs2svn-dump\n done\n\nSample results of this conversion are available at\n\n http://www.dcl.hpi.uni-potsdam.de/pysvn/\n\nPublish the Repository\n\nThe repository should be published at http://svn.python.org/projects.\nRead-write access should be granted to all current SF committers through\nsvn+ssh://pythondev@svn.python.org/; read-only anonymous access through\nWebDAV should also be granted.\n\nAs an option, websvn (available e.g. from the Debian websvn package)\ncould be provided. 
Unfortunately, in the test installation, websvn\nbreaks because it runs out of memory.\n\nThe current SF project admins should get write access to the\nauthorized_keys2 file of the pythondev account.\n\nDisable CVS\n\nIt appears that CVS cannot be disabled entirely. Only the user interface\ncan be removed from the project page; the repository itself remains\navailable. If desired, write access to the python and distutils modules\ncan be disabled through a CVS commitinfo entry.\n\nDiscussion\n\nSeveral alternatives had been suggested to the procedure above. The\nrejected alternatives are shortly discussed here:\n\n- create multiple repositories, one for python and one for distutils.\n This would have allowed even shorter URLs, but was rejected because\n a single repository supports moving code across projects.\n- Several people suggested to create the project/trunk structure\n through standard cvs2svn, followed by renames. This would have the\n disadvantage that old revisions use different path names than recent\n revisions; the suggested approach through dump files works without\n renames.\n- Several people also expressed concern about the administrative\n overhead that hosting the repository on python.org would cause to\n pydotorg admins. As a specific alternative, BerliOS has been\n suggested. The pydotorg admins themselves haven't objected to the\n additional workload; migrating the repository again if they get\n overworked is an option.\n- Different authentication strategies were discussed. As alternatives\n to svn+ssh were suggested\n - Subversion over WebDAV, using SSL and basic authentication, with\n pydotorg-generated passwords mailed to the user. People did not\n like that approach, since they would need to store the password\n on disk (because they can't remember it); this is a security\n risk.\n - Subversion over WebDAV, using SSL client certificates. This\n would work, but would require us to administer a certificate\n authority.\n- Instead of hosting this on python.org, people suggested hosting it\n elsewhere. One issue is whether this alternative should be free or\n commercial; several people suggested it should better be commercial,\n to reduce the load on the volunteers. In particular:\n - Greg Stein suggested http://www.wush.net/subversion.php. They\n offer 5 GB for $90/month, with 200 GB download/month. The data\n is on a RAID drive and fully backed up. Anonymous access and\n email commit notifications are supported. wush.net elaborated\n the following details:\n\n - The machine would be a Virtuozzo Virtual Private Server\n (VPS), hosted at PowerVPS.\n - The default repository URL would be\n http://python.wush.net/svn/projectname/, but anything else\n could be arranged\n - we would get SSH login to the machine, with sudo\n capabilities.\n - They have a Web interface for management of the various SVN\n repositories that we want to host, and to manage user\n accounts. 
While svn+ssh would be supported, the user\n interface does not yet support it.\n - For offsite mirroring/backup, they suggest to use rsync\n instead of download of repository tarballs.\n\n Bob Ippolito reported that they had used wush.net for a\n commercial project for about 6 months, after which time they\n left wush.net, because the service was down for three days, with\n nobody reachable, and no explanation when it came back.\n\nCopyright\n\nThis document has been placed in the public domain.\n\nPEP: 471 Title: os.scandir() function -- a better and faster directory\niterator Version: $Revision$ Last-Modified: $Date$ Author: Ben Hoyt\n BDFL-Delegate: Victor Stinner \nStatus: Final Type: Standards Track Content-Type: text/x-rst Created:\n30-May-2014 Python-Version: 3.5 Post-History: 27-Jun-2014, 08-Jul-2014,\n14-Jul-2014\n\nAbstract\n\nThis PEP proposes including a new directory iteration function,\nos.scandir(), in the standard library. This new function adds useful\nfunctionality and increases the speed of os.walk() by 2-20 times\n(depending on the platform and file system) by avoiding calls to\nos.stat() in most cases.\n\nRationale\n\nPython's built-in os.walk() is significantly slower than it needs to be,\nbecause -- in addition to calling os.listdir() on each directory -- it\nexecutes the stat() system call or GetFileAttributes() on each file to\ndetermine whether the entry is a directory or not.\n\nBut the underlying system calls -- FindFirstFile / FindNextFile on\nWindows and readdir on POSIX systems --already tell you whether the\nfiles returned are directories or not, so no further system calls are\nneeded. Further, the Windows system calls return all the information for\na stat_result object on the directory entry, such as file size and last\nmodification time.\n\nIn short, you can reduce the number of system calls required for a tree\nfunction like os.walk() from approximately 2N to N, where N is the total\nnumber of files and directories in the tree. (And because directory\ntrees are usually wider than they are deep, it's often much better than\nthis.)\n\nIn practice, removing all those extra system calls makes os.walk() about\n8-9 times as fast on Windows, and about 2-3 times as fast on POSIX\nsystems. So we're not talking about micro-optimizations. See more\nbenchmarks here.\n\nSomewhat relatedly, many people (see Python Issue 11406) are also keen\non a version of os.listdir() that yields filenames as it iterates\ninstead of returning them as one big list. This improves memory\nefficiency for iterating very large directories.\n\nSo, as well as providing a scandir() iterator function for calling\ndirectly, Python's existing os.walk() function can be sped up a huge\namount.\n\nImplementation\n\nThe implementation of this proposal was written by Ben Hoyt (initial\nversion) and Tim Golden (who helped a lot with the C extension module).\nIt lives on GitHub at benhoyt/scandir. 
(The implementation may lag\nbehind the updates to this PEP a little.)\n\nNote that this module has been used and tested (see \"Use in the wild\"\nsection in this PEP), so it's more than a proof-of-concept. However, it\nis marked as beta software and is not extensively battle-tested. It will\nneed some cleanup and more thorough testing before going into the\nstandard library, as well as integration into posixmodule.c.\n\nSpecifics of proposal\n\nos.scandir()\n\nSpecifically, this PEP proposes adding a single function to the os\nmodule in the standard library, scandir, that takes a single, optional\nstring as its argument:\n\n scandir(path='.') -> generator of DirEntry objects\n\nLike listdir, scandir calls the operating system's directory iteration\nsystem calls to get the names of the files in the given path, but it's\ndifferent from listdir in two ways:\n\n- Instead of returning bare filename strings, it returns lightweight\n DirEntry objects that hold the filename string and provide simple\n methods that allow access to the additional data the operating\n system may have returned.\n- It returns a generator instead of a list, so that scandir acts as a\n true iterator instead of returning the full list immediately.\n\nscandir() yields a DirEntry object for each file and sub-directory in\npath. Just like listdir, the '.' and '..' pseudo-directories are\nskipped, and the entries are yielded in system-dependent order. Each\nDirEntry object has the following attributes and methods:\n\n- name: the entry's filename, relative to the scandir path argument\n (corresponds to the return values of os.listdir)\n- path: the entry's full path name (not necessarily an absolute path)\n -- the equivalent of os.path.join(scandir_path, entry.name)\n- inode(): return the inode number of the entry. The result is cached\n on the DirEntry object, use\n os.stat(entry.path, follow_symlinks=False).st_ino to fetch\n up-to-date information. On Unix, no system call is required.\n- is_dir(*, follow_symlinks=True): similar to pathlib.Path.is_dir(),\n but the return value is cached on the DirEntry object; doesn't\n require a system call in most cases; don't follow symbolic links if\n follow_symlinks is False\n- is_file(*, follow_symlinks=True): similar to pathlib.Path.is_file(),\n but the return value is cached on the DirEntry object; doesn't\n require a system call in most cases; don't follow symbolic links if\n follow_symlinks is False\n- is_symlink(): similar to pathlib.Path.is_symlink(), but the return\n value is cached on the DirEntry object; doesn't require a system\n call in most cases\n- stat(*, follow_symlinks=True): like os.stat(), but the return value\n is cached on the DirEntry object; does not require a system call on\n Windows (except for symlinks); don't follow symbolic links (like\n os.lstat()) if follow_symlinks is False\n\nAll methods may perform system calls in some cases and therefore\npossibly raise OSError -- see the \"Notes on exception handling\" section\nfor more details.\n\nThe DirEntry attribute and method names were chosen to be the same as\nthose in the new pathlib module where possible, for consistency. 
The\nonly difference in functionality is that the DirEntry methods cache\ntheir values on the entry object after the first call.\n\nLike the other functions in the os module, scandir() accepts either a\nbytes or str object for the path parameter, and returns the\nDirEntry.name and DirEntry.path attributes with the same type as path.\nHowever, it is strongly recommended to use the str type, as this ensures\ncross-platform support for Unicode filenames. (On Windows, bytes\nfilenames have been deprecated since Python 3.3).\n\nos.walk()\n\nAs part of this proposal, os.walk() will also be modified to use\nscandir() rather than listdir() and os.path.isdir(). This will increase\nthe speed of os.walk() very significantly (as mentioned above, by 2-20\ntimes, depending on the system).\n\nExamples\n\nFirst, a very simple example of scandir() showing use of the\nDirEntry.name attribute and the DirEntry.is_dir() method:\n\n def subdirs(path):\n \"\"\"Yield directory names not starting with '.' under given path.\"\"\"\n for entry in os.scandir(path):\n if not entry.name.startswith('.') and entry.is_dir():\n yield entry.name\n\nThis subdirs() function will be significantly faster with scandir than\nos.listdir() and os.path.isdir() on both Windows and POSIX systems,\nespecially on medium-sized or large directories.\n\nOr, for getting the total size of files in a directory tree, showing use\nof the DirEntry.stat() method and DirEntry.path attribute:\n\n def get_tree_size(path):\n \"\"\"Return total size of files in given path and subdirs.\"\"\"\n total = 0\n for entry in os.scandir(path):\n if entry.is_dir(follow_symlinks=False):\n total += get_tree_size(entry.path)\n else:\n total += entry.stat(follow_symlinks=False).st_size\n return total\n\nThis also shows the use of the follow_symlinks parameter to is_dir() --\nin a recursive function like this, we probably don't want to follow\nlinks. (To properly follow links in a recursive function like this we'd\nwant special handling for the case where following a symlink leads to a\nrecursive loop.)\n\nNote that get_tree_size() will get a huge speed boost on Windows,\nbecause no extra stat call are needed, but on POSIX systems the size\ninformation is not returned by the directory iteration functions, so\nthis function won't gain anything there.\n\nNotes on caching\n\nThe DirEntry objects are relatively dumb -- the name and path attributes\nare obviously always cached, and the is_X and stat methods cache their\nvalues (immediately on Windows via FindNextFile, and on first use on\nPOSIX systems via a stat system call) and never refetch from the system.\n\nFor this reason, DirEntry objects are intended to be used and thrown\naway after iteration, not stored in long-lived data structured and the\nmethods called again and again.\n\nIf developers want \"refresh\" behaviour (for example, for watching a\nfile's size change), they can simply use pathlib.Path objects, or call\nthe regular os.stat() or os.path.getsize() functions which get fresh\ndata from the operating system every call.\n\nNotes on exception handling\n\nDirEntry.is_X() and DirEntry.stat() are explicitly methods rather than\nattributes or properties, to make it clear that they may not be cheap\noperations (although they often are), and they may do a system call. 
As\na result, these methods may raise OSError.\n\nFor example, DirEntry.stat() will always make a system call on\nPOSIX-based systems, and the DirEntry.is_X() methods will make a stat()\nsystem call on such systems if readdir() does not support d_type or\nreturns a d_type with a value of DT_UNKNOWN, which can occur under\ncertain conditions or on certain file systems.\n\nOften this does not matter -- for example, os.walk() as defined in the\nstandard library only catches errors around the listdir() calls.\n\nAlso, because the exception-raising behaviour of the DirEntry.is_X\nmethods matches that of pathlib -- which only raises OSError in the case\nof permissions or other fatal errors, but returns False if the path\ndoesn't exist or is a broken symlink -- it's often not necessary to\ncatch errors around the is_X() calls.\n\nHowever, when a user requires fine-grained error handling, it may be\ndesirable to catch OSError around all method calls and handle as\nappropriate.\n\nFor example, below is a version of the get_tree_size() example shown\nabove, but with fine-grained error handling added:\n\n def get_tree_size(path):\n \"\"\"Return total size of files in path and subdirs. If\n is_dir() or stat() fails, print an error message to stderr\n and assume zero size (for example, file has been deleted).\n \"\"\"\n total = 0\n for entry in os.scandir(path):\n try:\n is_dir = entry.is_dir(follow_symlinks=False)\n except OSError as error:\n print('Error calling is_dir():', error, file=sys.stderr)\n continue\n if is_dir:\n total += get_tree_size(entry.path)\n else:\n try:\n total += entry.stat(follow_symlinks=False).st_size\n except OSError as error:\n print('Error calling stat():', error, file=sys.stderr)\n return total\n\nSupport\n\nThe scandir module on GitHub has been forked and used quite a bit (see\n\"Use in the wild\" in this PEP), but there's also been a fair bit of\ndirect support for a scandir-like function from core developers and\nothers on the python-dev and python-ideas mailing lists. A sampling:\n\n- python-dev: a good number of +1's and very few negatives for scandir\n and PEP 471 on this June 2014 python-dev thread\n- Alyssa Coghlan, a core Python developer: \"I've had the local Red Hat\n release engineering team express their displeasure at having to stat\n every file in a network mounted directory tree for info that is\n present in the dirent structure, so a definite +1 to os.scandir from\n me, so long as it makes that info available.\" [source1]\n- Tim Golden, a core Python developer, supports scandir enough to have\n spent time refactoring and significantly improving scandir's C\n extension module. [source2]\n- Christian Heimes, a core Python developer: \"+1 for something like\n yielddir()\" [source3] and \"Indeed! I'd like to see the feature in\n 3.4 so I can remove my own hack from our code base.\" [source4]\n- Gregory P. Smith, a core Python developer: \"As 3.4beta1 happens\n tonight, this isn't going to make 3.4 so i'm bumping this to 3.5. I\n really like the proposed design outlined above.\" [source5]\n- Guido van Rossum on the possibility of adding scandir to Python 3.5\n (as it was too late for 3.4): \"The ship has likewise sailed for\n adding scandir() (whether to os or pathlib). By all means experiment\n and get it ready for consideration for 3.5, but I don't want to add\n it to 3.4.\" [source6]\n\nSupport for this PEP itself (meta-support?) 
was given by Alyssa (Nick)\nCoghlan on python-dev: \"A PEP reviewing all this for 3.5 and proposing a\nspecific os.scandir API would be a good thing.\" [source7]\n\nUse in the wild\n\nTo date, the scandir implementation is definitely useful, but has been\nclearly marked \"beta\", so it's uncertain how much use of it there is in\nthe wild. Ben Hoyt has had several reports from people using it. For\nexample:\n\n- Chris F: \"I am processing some pretty large directories and was half\n expecting to have to modify getdents. So thanks for saving me the\n effort.\" [via personal email]\n- bschollnick: \"I wanted to let you know about this, since I am using\n Scandir as a building block for this code. Here's a good example of\n scandir making a radical performance improvement over os.listdir.\"\n [source8]\n- Avram L: \"I'm testing our scandir for a project I'm working on.\n Seems pretty solid, so first thing, just want to say nice work!\"\n [via personal email]\n- Matt Z: \"I used scandir to dump the contents of a network dir in\n under 15 seconds. 13 root dirs, 60,000 files in the structure. This\n will replace some old VBA code embedded in a spreadsheet that was\n taking 15-20 minutes to do the exact same thing.\" [via personal\n email]\n\nOthers have requested a PyPI package for it, which has been created. See\nPyPI package.\n\nGitHub stats don't mean too much, but scandir does have several\nwatchers, issues, forks, etc. Here's the run-down as of the stats as of\nJuly 7, 2014:\n\n- Watchers: 17\n- Stars: 57\n- Forks: 20\n- Issues: 4 open, 26 closed\n\nAlso, because this PEP will increase the speed of os.walk()\nsignificantly, there are thousands of developers and scripts, and a lot\nof production code, that would benefit from it. For example, on GitHub,\nthere are almost as many uses of os.walk (194,000) as there are of\nos.mkdir (230,000).\n\nRejected ideas\n\nNaming\n\nThe only other real contender for this function's name was iterdir().\nHowever, iterX() functions in Python (mostly found in Python 2) tend to\nbe simple iterator equivalents of their non-iterator counterparts. For\nexample, dict.iterkeys() is just an iterator version of dict.keys(), but\nthe objects returned are identical. In scandir()'s case, however, the\nreturn values are quite different objects (DirEntry objects vs filename\nstrings), so this should probably be reflected by a difference in name\n-- hence scandir().\n\nSee some relevant discussion on python-dev.\n\nWildcard support\n\nFindFirstFile/FindNextFile on Windows support passing a \"wildcard\" like\n*.jpg, so at first folks (this PEP's author included) felt it would be a\ngood idea to include a windows_wildcard keyword argument to the scandir\nfunction so users could pass this in.\n\nHowever, on further thought and discussion it was decided that this\nwould be bad idea, unless it could be made cross-platform (a pattern\nkeyword argument or similar). This seems easy enough at first -- just\nuse the OS wildcard support on Windows, and something like fnmatch or re\nafterwards on POSIX-based systems.\n\nUnfortunately the exact Windows wildcard matching rules aren't really\ndocumented anywhere by Microsoft, and they're quite quirky (see this\nblog post), meaning it's very problematic to emulate using fnmatch or\nregexes.\n\nSo the consensus was that Windows wildcard support was a bad idea. 
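Callers who need name filtering can layer it on top of scandir()
themselves; for example, a rough sketch (the scandir_filtered helper is
purely illustrative) using fnmatch, whose matching rules differ from the
quirky Windows-native ones discussed above:

    import fnmatch
    import os

    def scandir_filtered(path, pattern):
        """Yield DirEntry objects whose names match an fnmatch-style pattern."""
        for entry in os.scandir(path):
            if fnmatch.fnmatch(entry.name, pattern):
                yield entry

    jpegs = [entry.path for entry in scandir_filtered('.', '*.jpg')]

Built-in pattern support remains a separate question.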
It\nwould be possible to add at a later date if there's a cross-platform way\nto achieve it, but not for the initial version.\n\nRead more on the this Nov 2012 python-ideas thread and this June 2014\npython-dev thread on PEP 471.\n\nMethods not following symlinks by default\n\nThere was much debate on python-dev (see messages in this thread) over\nwhether the DirEntry methods should follow symbolic links or not (when\nthe is_X() methods had no follow_symlinks parameter).\n\nInitially they did not (see previous versions of this PEP and the\nscandir.py module), but Victor Stinner made a pretty compelling case on\npython-dev that following symlinks by default is a better idea, because:\n\n- following links is usually what you want (in 92% of cases in the\n standard library, functions using os.listdir() and os.path.isdir()\n do follow symlinks)\n- that's the precedent set by the similar functions os.path.isdir()\n and pathlib.Path.is_dir(), so to do otherwise would be confusing\n- with the non-link-following approach, if you wanted to follow links\n you'd have to say something like\n if (entry.is_symlink() and os.path.isdir(entry.path)) or entry.is_dir(),\n which is clumsy\n\nAs a case in point that shows the non-symlink-following version is error\nprone, this PEP's author had a bug caused by getting this exact test\nwrong in his initial implementation of scandir.walk() in scandir.py (see\nIssue #4 here).\n\nIn the end there was not total agreement that the methods should follow\nsymlinks, but there was basic consensus among the most involved\nparticipants, and this PEP's author believes that the above case is\nstrong enough to warrant following symlinks by default.\n\nIn addition, it's straightforward to call the relevant methods with\nfollow_symlinks=False if the other behaviour is desired.\n\nDirEntry attributes being properties\n\nIn some ways it would be nicer for the DirEntry is_X() and stat() to be\nproperties instead of methods, to indicate they're very cheap or free.\nHowever, this isn't quite the case, as stat() will require an OS call on\nPOSIX-based systems but not on Windows. Even is_dir() and friends may\nperform an OS call on POSIX-based systems if the dirent.d_type value is\nDT_UNKNOWN (on certain file systems).\n\nAlso, people would expect the attribute access entry.is_dir to only ever\nraise AttributeError, not OSError in the case it makes a system call\nunder the covers. Calling code would have to have a try/except around\nwhat looks like a simple attribute access, and so it's much better to\nmake them methods.\n\nSee this May 2013 python-dev thread where this PEP author makes this\ncase and there's agreement from a core developers.\n\nDirEntry fields being \"static\" attribute-only objects\n\nIn this July 2014 python-dev message, Paul Moore suggested a solution\nthat was a \"thin wrapper round the OS feature\", where the DirEntry\nobject had only static attributes: name, path, and is_X, with the st_X\nattributes only present on Windows. The idea was to use this simpler,\nlower-level function as a building block for higher-level functions.\n\nAt first there was general agreement that simplifying in this way was a\ngood thing. However, there were two problems with this approach. First,\nthe assumption is the is_dir and similar attributes are always present\non POSIX, which isn't the case (if d_type is not present or is\nDT_UNKNOWN). 
Second, it's a much harder-to-use API in practice, as even\nthe is_dir attributes aren't always present on POSIX, and would need to\nbe tested with hasattr() and then os.stat() called if they weren't\npresent.\n\nSee this July 2014 python-dev response from this PEP's author detailing\nwhy this option is a non-ideal solution, and the subsequent reply from\nPaul Moore voicing agreement.\n\nDirEntry fields being static with an ensure_lstat option\n\nAnother seemingly simpler and attractive option was suggested by Alyssa\nCoghlan in this June 2014 python-dev message: make DirEntry.is_X and\nDirEntry.lstat_result properties, and populate DirEntry.lstat_result at\niteration time, but only if the new argument ensure_lstat=True was\nspecified on the scandir() call.\n\nThis does have the advantage over the above in that you can easily get\nthe stat result from scandir() if you need it. However, it has the\nserious disadvantage that fine-grained error handling is messy, because\nstat() will be called (and hence potentially raise OSError) during\niteration, leading to a rather ugly, hand-made iteration loop:\n\n it = os.scandir(path)\n while True:\n try:\n entry = next(it)\n except OSError as error:\n handle_error(path, error)\n except StopIteration:\n break\n\nOr it means that scandir() would have to accept an onerror argument -- a\nfunction to call when stat() errors occur during iteration. This seems\nto this PEP's author neither as direct nor as Pythonic as try/except\naround a DirEntry.stat() call.\n\nAnother drawback is that os.scandir() is written to make code faster.\nAlways calling os.lstat() on POSIX would not bring any speedup. In most\ncases, you don't need the full stat_result object -- the is_X() methods\nare enough and this information is already known.\n\nSee Ben Hoyt's July 2014 reply to the discussion summarizing this and\ndetailing why he thinks the original PEP 471 proposal is \"the right one\"\nafter all.\n\nReturn values being (name, stat_result) two-tuples\n\nInitially this PEP's author proposed this concept as a function called\niterdir_stat() which yielded two-tuples of (name, stat_result). This\ndoes have the advantage that there are no new types introduced. However,\nthe stat_result is only partially filled on POSIX-based systems (most\nfields set to None and other quirks), so they're not really stat_result\nobjects at all, and this would have to be thoroughly documented as\ndifferent from os.stat().\n\nAlso, Python has good support for proper objects with attributes and\nmethods, which makes for a saner and simpler API than two-tuples. It\nalso makes the DirEntry objects more extensible and future-proof as\noperating systems add functionality and we want to include this in\nDirEntry.\n\nSee also some previous discussion:\n\n- May 2013 python-dev thread where Alyssa Coghlan makes the original\n case for a DirEntry-style object.\n- June 2014 python-dev thread where Alyssa Coghlan makes (another)\n good case against the two-tuple approach.\n\nReturn values being overloaded stat_result objects\n\nAnother alternative discussed was making the return values to be\noverloaded stat_result objects with name and path attributes. However,\napart from this being a strange (and strained!) 
kind of overloading,\nthis has the same problems mentioned above -- most of the stat_result\ninformation is not fetched by readdir() on POSIX systems, only (part of)\nthe st_mode value.\n\nReturn values being pathlib.Path objects\n\nWith Antoine Pitrou's new standard library pathlib module, it at first\nseems like a great idea for scandir() to return instances of\npathlib.Path. However, pathlib.Path's is_X() and stat() functions are\nexplicitly not cached, whereas scandir has to cache them by design,\nbecause it's (often) returning values from the original directory\niteration system call.\n\nAnd if the pathlib.Path instances returned by scandir cached stat\nvalues, but the ordinary pathlib.Path objects explicitly don't, that\nwould be more than a little confusing.\n\nGuido van Rossum explicitly rejected pathlib.Path caching stat in the\ncontext of scandir here, making pathlib.Path objects a bad choice for\nscandir return values.\n\nPossible improvements\n\nThere are many possible improvements one could make to scandir, but here\nis a short list of some this PEP's author has in mind:\n\n- scandir could potentially be further sped up by calling readdir /\n FindNextFile say 50 times per Py_BEGIN_ALLOW_THREADS block so that\n it stays in the C extension module for longer, and may be somewhat\n faster as a result. This approach hasn't been tested, but was\n suggested on Issue 11406 by Antoine Pitrou. [source9]\n- scandir could use a free list to avoid the cost of memory allocation\n for each iteration -- a short free list of 10 or maybe even 1 may\n help. Suggested by Victor Stinner on a python-dev thread on June 27.\n\nPrevious discussion\n\n- Original November 2012 thread Ben Hoyt started on python-ideas about\n speeding up os.walk()\n- Python Issue 11406, which includes the original proposal for a\n scandir-like function\n- Further May 2013 thread Ben Hoyt started on python-dev that refined\n the scandir() API, including Alyssa Coghlan's suggestion of scandir\n yielding DirEntry-like objects\n- November 2013 thread Ben Hoyt started on python-dev to discuss the\n interaction between scandir and the new pathlib module\n- June 2014 thread Ben Hoyt started on python-dev to discuss the first\n version of this PEP, with extensive discussion about the API\n- First July 2014 thread Ben Hoyt started on python-dev to discuss his\n updates to PEP 471\n- Second July 2014 thread Ben Hoyt started on python-dev to discuss\n the remaining decisions needed to finalize PEP 471, specifically\n whether the DirEntry methods should follow symlinks by default\n- Question on StackOverflow about why os.walk() is slow and pointers\n on how to fix it (this inspired the author of this PEP early on)\n- BetterWalk, this PEP's author's previous attempt at this, on which\n the scandir code is based\n\nCopyright\n\nThis document has been placed in the public domain.\n\nPEP: 
3114 Title: Renaming iterator.next() to iterator.__next__() Author:\nKa-Ping Yee Status: Final Type: Standards Track\nContent-Type: text/x-rst Created: 04-Mar-2007 Python-Version: 3.0\nPost-History:\n\nAbstract\n\nThe iterator protocol in Python 2.x consists of two methods: __iter__()\ncalled on an iterable object to yield an iterator, and next() called on\nan iterator object to yield the next item in the sequence. Using a for\nloop to iterate over an iterable object implicitly calls both of these\nmethods. This PEP proposes that the next method be renamed to __next__,\nconsistent with all the other protocols in Python in which a method is\nimplicitly called as part of a language-level protocol, and that a\nbuilt-in function named next be introduced to invoke __next__ method,\nconsistent with the manner in which other protocols are explicitly\ninvoked.\n\nNames With Double Underscores\n\nIn Python, double underscores before and after a name are used to\ndistinguish names that belong to the language itself. Attributes and\nmethods that are implicitly used or created by the interpreter employ\nthis naming convention; some examples are:\n\n- __file__ - an attribute automatically created by the interpreter\n- __dict__ - an attribute with special meaning to the interpreter\n- __init__ - a method implicitly called by the interpreter\n\nNote that this convention applies to methods such as __init__ that are\nexplicitly defined by the programmer, as well as attributes such as\n__file__ that can only be accessed by naming them explicitly, so it\nincludes names that are used or created by the interpreter.\n\n(Not all things that are called \"protocols\" are made of methods with\ndouble-underscore names. For example, the __contains__ method has double\nunderscores because the language construct x in y implicitly calls\n__contains__. But even though the read method is part of the file\nprotocol, it does not have double underscores because there is no\nlanguage construct that implicitly invokes x.read().)\n\nThe use of double underscores creates a separate namespace for names\nthat are part of the Python language definition, so that programmers are\nfree to create variables, attributes, and methods that start with\nletters, without fear of silently colliding with names that have a\nlanguage-defined purpose. (Colliding with reserved keywords is still a\nconcern, but at least this will immediately yield a syntax error.)\n\nThe naming of the next method on iterators is an exception to this\nconvention. Code that nowhere contains an explicit call to a next method\ncan nonetheless be silently affected by the presence of such a method.\nTherefore, this PEP proposes that iterators should have a __next__\nmethod instead of a next method (with no change in semantics).\n\nDouble-Underscore Methods and Built-In Functions\n\nThe Python language defines several protocols that are implemented or\ncustomized by defining methods with double-underscore names. In each\ncase, the protocol is provided by an internal method implemented as a C\nfunction in the interpreter. 
For objects defined in Python, this C\nfunction supports customization by implicitly invoking a Python method\nwith a double-underscore name (it often does a little bit of additional\nwork beyond just calling the Python method.)\n\nSometimes the protocol is invoked by a syntactic construct:\n\n- x[y] --> internal tp_getitem --> x.__getitem__(y)\n- x + y --> internal nb_add --> x.__add__(y)\n- -x --> internal nb_negative --> x.__neg__()\n\nSometimes there is no syntactic construct, but it is still useful to be\nable to explicitly invoke the protocol. For such cases Python offers a\nbuilt-in function of the same name but without the double underscores.\n\n- len(x) --> internal sq_length --> x.__len__()\n- hash(x) --> internal tp_hash --> x.__hash__()\n- iter(x) --> internal tp_iter --> x.__iter__()\n\nFollowing this pattern, the natural way to handle next is to add a next\nbuilt-in function that behaves in exactly the same fashion.\n\n- next(x) --> internal tp_iternext --> x.__next__()\n\nFurther, it is proposed that the next built-in function accept a\nsentinel value as an optional second argument, following the style of\nthe getattr and iter built-in functions. When called with two arguments,\nnext catches the StopIteration exception and returns the sentinel value\ninstead of propagating the exception. This creates a nice duality\nbetween iter and next:\n\n iter(function, sentinel) <--> next(iterator, sentinel)\n\nPrevious Proposals\n\nThis proposal is not a new idea. The idea proposed here was supported by\nthe BDFL on python-dev[1] and is even mentioned in the original iterator\nPEP, PEP 234:\n\n (In retrospect, it might have been better to go for __next__()\n and have a new built-in, next(it), which calls it.__next__().\n But alas, it's too late; this has been deployed in Python 2.2\n since December 2001.)\n\nObjections\n\nThere have been a few objections to the addition of more built-ins. In\nparticular, Martin von Loewis writes[2]:\n\n I dislike the introduction of more builtins unless they have a true\n generality (i.e. are likely to be needed in many programs). For this\n one, I think the normal usage of __next__ will be with a for loop, so\n I don't think one would often need an explicit next() invocation.\n\n It is also not true that most protocols are explicitly invoked through\n builtin functions. Instead, most protocols are can be explicitly invoked\n through methods in the operator module. So following tradition, it\n should be operator.next.\n\n ...\n\n As an alternative, I propose that object grows a .next() method,\n which calls __next__ by default.\n\nTransition Plan\n\nTwo additional transformations will be added to the 2to3 translation\ntool[3]:\n\n- Method definitions named next will be renamed to __next__.\n- Explicit calls to the next method will be replaced with calls to the\n built-in next function. For example, x.next() will become next(x).\n\nCollin Winter looked into the possibility of automatically deciding\nwhether to perform the second transformation depending on the presence\nof a module-level binding to next[4] and found that it would be \"ugly\nand slow\". Instead, the translation tool will emit warnings upon\ndetecting such a binding. 
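(Purely illustrative sketch, not part of the proposal's text: the class and
names below are invented for the example. Under the proposal, the protocol
method is spelled __next__, and an explicit call such as it.next() is
rewritten by the tool as next(it).)

    class Countdown:
        # A small iterator written against the proposed protocol.
        def __init__(self, start):
            self.current = start

        def __iter__(self):
            # Iterators return themselves from __iter__().
            return self

        def __next__(self):  # spelled plain ``next`` before the rename
            if self.current <= 0:
                raise StopIteration
            self.current -= 1
            return self.current + 1

    it = Countdown(3)
    assert next(it) == 3  # the built-in invokes it.__next__()
    assert next(iter([]), 'done') == 'done'  # optional sentinel form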
Collin has proposed warnings for the following\nconditions[5]:\n\n- Module-level assignments to next.\n- Module-level definitions of a function named next.\n- Module-level imports of the name next.\n- Assignments to __builtin__.next.\n\nApproval\n\nThis PEP was accepted by Guido on March 6, 2007[6].\n\nImplementation\n\nA patch with the necessary changes (except the 2to3 tool) was written by\nGeorg Brandl and committed as revision 54910.\n\nReferences\n\nCopyright\n\nThis document has been placed in the public domain.\n\n[1] Single- vs. Multi-pass iterability (Guido van Rossum)\nhttps://mail.python.org/pipermail/python-dev/2002-July/026814.html\n\n[2] PEP: rename it.next() to it.__next__()... (Martin von Loewis)\nhttps://mail.python.org/pipermail/python-3000/2007-March/005965.html\n\n[3] 2to3 refactoring tool\nhttps://github.com/python/cpython/tree/ef04c44e29a8276a484f58d03a75a2dec516302d/Lib/lib2to3\n\n[4] PEP: rename it.next() to it.__next__()... (Collin Winter)\nhttps://mail.python.org/pipermail/python-3000/2007-March/006020.html\n\n[5] PEP 3113 transition plan\nhttps://mail.python.org/pipermail/python-3000/2007-March/006044.html\n\n[6] PEP: rename it.next() to it.__next__()... (Guido van Rossum)\nhttps://mail.python.org/pipermail/python-3000/2007-March/006027.html\n\nPEP: 740 Title: Index support for digital attestations Author: William\nWoodruff , Facundo Tuesca\n, Dustin Ingram Sponsor:\nDonald Stufft PEP-Delegate: Donald Stufft\n Discussions-To:\nhttps://discuss.python.org/t/pep-740-index-support-for-digital-attestations/44498\nStatus: Provisional Type: Standards Track Topic: Packaging Created:\n08-Jan-2024 Post-History: 02-Jan-2024, 29-Jan-2024 Resolution:\nhttps://discuss.python.org/t/pep-740-index-support-for-digital-attestations/44498/26\n\nAbstract\n\nThis PEP proposes a collection of changes related to the upload and\ndistribution of digitally signed attestations and metadata used to\nverify them on a Python package repository, such as PyPI.\n\nThese changes have two subcomponents:\n\n- Changes to the currently unstandardized PyPI upload API, allowing\n clients to upload digital attestations as\n attestation objects ;\n- Changes to the\n HTML and JSON \"simple\" APIs ,\n allowing clients to retrieve both digital attestations and Trusted\n Publishing metadata for individual release files as\n provenance objects .\n\nThis PEP does not make a policy recommendation around mandatory digital\nattestations on release uploads or their subsequent verification by\ninstalling clients like pip.\n\nRationale and Motivation\n\nDesire for digital signatures on Python packages has been repeatedly\nexpressed by both package maintainers and downstream users:\n\n- Maintainers wish to demonstrate the integrity and authenticity of\n their package uploads;\n- Individual downstream users wish to verify package integrity and\n authenticity without placing additional trust in their index's\n honesty;\n- \"Bulk\" downstream users (such as Operating System distributions)\n wish to perform 
similar verifications and potentially re-expose or\n countersign for their own downstream packaging ecosystems.\n\nThis proposal seeks to accommodate each of the above use cases.\n\nAdditionally, this proposal identifies the following motivations:\n\n- Verifiable provenance for Python package distributions: many Python\n packages currently contain unauthenticated provenance metadata, such\n as URLs for source hosts. A cryptographic attestation format could\n enable strong authenticated links between these packages and their\n source hosts, allowing both the index and downstream users to\n cryptographically verify that a package originates from its claimed\n source repository.\n\n- Raising attacker requirements: an attacker who seeks to take over a\n Python package can be described along sophistication\n (unsophisticated to sophisticated) and targeting dimensions\n (opportunistic to targeted).\n\n Digital attestations impose additional sophistication requirements:\n the attacker must be sufficiently sophisticated to access private\n signing material (or signing identities).\n\n- Index verifiability: in the status quo, the only attestation\n provided by the index is an optional PGP signature per release file\n (see PGP signatures ). These signatures are not (and\n cannot be) checked by the index either for well-formedness or for\n validity, since the index has no mechanism for identifying the right\n public key for the signature. This PEP overcomes this limitation by\n ensuring that provenance objects contain all of\n the metadata needed by the index to verify an attestation's\n validity.\n\nThis PEP proposes a generic attestation format, containing an\nattestation statement for signature generation ,\nwith the expectation that index providers adopt the format with a\nsuitable source of identity for signature verification, such as Trusted\nPublishing.\n\nDesign Considerations\n\nThis PEP identifies the following design considerations when evaluating\nboth its own proposed changes and previous work in the same or adjacent\nareas of Python packaging:\n\n1. Index accessibility: digital attestations for Python packages are\n ideally retrievable directly from the index itself, as \"detached\"\n resources.\n\n This both simplifies some compatibility concerns (by avoiding the\n need to modify the distribution formats themselves) and also\n simplifies the behavior of potential installing clients (by allowing\n them to retrieve each attestation before its corresponding package\n without needing to do streaming decompression).\n\n2. Verification by the index itself: in addition to enabling\n verification by installing clients, each digital attestation is\n ideally verifiable in some form by the index itself.\n\n This both increases the overall quality of attestations uploaded to\n the index (preventing, for example, users from accidentally\n uploading incorrect or invalid attestations) and also enables UI and\n UX refinements on the index itself (such as a \"provenance\" view for\n each uploaded package).\n\n3. General applicability: digital attestations should be applicable to\n any and every package uploaded to the index, regardless of its\n format (sdist or wheel) or interior contents.\n\n4. 
Metadata support: this PEP refers to \"digital attestations\" rather\n than just \"digital signatures\" to emphasize the ideal presence of\n additional metadata within the cryptographic envelope.\n\n For example, to prevent domain separation between a distribution's\n name and its contents, this PEP uses 'Statements' from the in-toto\n project to bind the distribution's contents (via SHA-256 digest) to\n its filename.\n\nPrevious Work\n\nPGP signatures\n\nPyPI and other indices have historically supported PGP signatures on\nuploaded distributions. These could be supplied during upload, and could\nbe retrieved by installing clients via the data-gpg-sig attribute in the\nPEP 503 API, the gpg-sig key on the PEP 691 API, or via an adjacent\n.asc-suffixed URL.\n\nPGP signature uploads have been disabled on PyPI since May 2023, after\nan investigation determined that the majority of signatures (which,\nthemselves, constituted a tiny percentage of overall uploads) could not\nbe associated with a public key or otherwise meaningfully verified.\n\nIn their previously supported form on PyPI, PGP signatures satisfied\nconsiderations (1) and (3) above but not (2) (owing to the need for\nexternal keyservers and key distribution) or (4) (due to PGP signatures\ntypically being constructed over just an input file, without any\nassociated signed metadata).\n\nWheel signatures\n\nPEP 427 (and its\nliving PyPA counterpart ) specify\nthe wheel format .\n\nThis format includes accommodations for digital signatures embedded\ndirectly into the wheel, in either JWS or S/MIME format. These\nsignatures are specified over a PEP 376 RECORD, which is modified to\ninclude a cryptographic digest for each recorded file in the wheel.\n\nWhile wheel signatures are fully specified, they do not appear to be\nbroadly used; the official wheel tooling deprecated signature generation\nand verification support in 0.32.0, which was released in 2018.\n\nAdditionally, wheel signatures do not satisfy any of the above\nconsiderations (due to the \"attached\" nature of the signatures,\nnon-verifiability on the index itself, and support for wheels only).\n\nSpecification\n\nUpload endpoint changes\n\nThe current upload API is not standardized. However, we propose the\nfollowing changes to it:\n\n- In addition to the current top-level content and gpg_signature\n fields, the index SHALL accept attestations as an additional\n multipart form field.\n- The new attestations field SHALL be a JSON array.\n- The attestations array SHALL have one or more items, each a JSON\n object representing an individual attestation.\n- Each attestation object MUST be verifiable by the index. If the\n index fails to verify any attestation in attestations, it MUST\n reject the upload. The format of attestation objects is defined\n under attestation-object and the process for verifying attestations\n is defined under attestation-verification.\n\nIndex changes\n\nSimple Index\n\nThe following changes are made to the\nsimple repository API :\n\n- When an uploaded file has one or more attestations, the index MAY\n provide a provenance file containing attestations associated with a\n given distribution. The format of the provenance file SHALL be a\n JSON-encoded provenance object , which SHALL\n contain the file's attestations.\n\n The location of the provenance file is signaled by the index via the\n data-provenance attribute.\n\n- When a provenance file is present, the index MAY include a\n data-provenance attribute on its file link. 
The value of the\n data-provenance attribute SHALL be a fully qualified URL, signaling\n the the file's provenance can be found at that URL. This URL MUST\n represent a secure origin.\n\n The following table provides examples of release file URLs,\n data-provenance values, and their resulting provenance file URLs.\n\n File URL data-provenance Provenance URL\n ------------------------------------------------ ----------------------------------------------------------------- -----------------------------------------------------------------\n https://example.com/sampleproject-1.2.3.tar.gz https://example.com/sampleproject-1.2.3.tar.gz.provenance https://example.com/sampleproject-1.2.3.tar.gz.provenance\n https://example.com/sampleproject-1.2.3.tar.gz https://other.example.com/sampleproject-1.2.3.tar.gz/provenance https://other.example.com/sampleproject-1.2.3.tar.gz/provenance\n https://example.com/sampleproject-1.2.3.tar.gz ../relative (invalid: not a fully qualified URL)\n https://example.com/sampleproject-1.2.3.tar.gz http://unencrypted.example.com/provenance (invalid: not a secure origin)\n\n- The index MAY choose to modify the provenance file. For example, the\n index MAY permit adding additional attestations and verification\n materials, such as attestations from third-party auditors or other\n services.\n\n See changes-to-provenance-objects for an additional discussion of\n reasons why a file's provenance may change.\n\nJSON-based Simple API\n\nThe following changes are made to the\nJSON simple API :\n\n- When an uploaded file has one or more attestations, the index MAY\n include a provenance key in the file dictionary for that file.\n\n The value of the provenance key SHALL be either a JSON string or\n null. If provenance is not null, it SHALL be a URL to the associated\n provenance file.\n\n See appendix-3 for an explanation of the technical decision to embed\n the SHA-256 digest in the JSON API, rather than the full\n provenance object .\n\nThese changes require a version change to the JSON API:\n\n- The api-version SHALL specify version 1.3 or later.\n\nAttestation objects\n\nAn attestation object is a JSON object with several required keys;\napplications or signers may include additional keys so long as all\nexplicitly listed keys are provided. The required layout of an\nattestation object is provided as pseudocode below.\n\n @dataclass\n class Attestation:\n version: Literal[1]\n \"\"\"\n The attestation object's version, which is always 1.\n \"\"\"\n\n verification_material: VerificationMaterial\n \"\"\"\n Cryptographic materials used to verify `envelope`.\n \"\"\"\n\n envelope: Envelope\n \"\"\"\n The enveloped attestation statement and signature.\n \"\"\"\n\n\n @dataclass\n class Envelope:\n statement: bytes\n \"\"\"\n The attestation statement.\n\n This is represented as opaque bytes on the wire (encoded as base64),\n but it MUST be an JSON in-toto v1 Statement.\n \"\"\"\n\n signature: bytes\n \"\"\"\n A signature for the above statement, encoded as base64.\n \"\"\"\n\n @dataclass\n class VerificationMaterial:\n certificate: str\n \"\"\"\n The signing certificate, as `base64(DER(cert))`.\n \"\"\"\n\n transparency_entries: list[object]\n \"\"\"\n One or more transparency log entries for this attestation's signature\n and certificate.\n \"\"\"\n\nA full data model for each object in transparency_entries is provided in\nappendix-2. 
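As a purely illustrative sketch (every value below is a placeholder rather
than real cryptographic material, and no particular signing implementation is
implied), an attestation object matching the layout above could be assembled
and serialized as follows:

    import base64
    import json

    # Placeholder inputs; a real signer would produce these with its signing
    # key, certificate, and transparency log client.
    statement_bytes = json.dumps({'_type': 'https://in-toto.io/Statement/v1'}).encode()
    signature_bytes = b'\x00' * 64
    certificate_der = b'0\x82...'  # truncated stand-in for DER(cert)

    attestation = {
        'version': 1,
        'verification_material': {
            'certificate': base64.b64encode(certificate_der).decode(),
            'transparency_entries': [],  # filled with log entries in practice
        },
        'envelope': {
            # The in-toto Statement travels as opaque, base64-encoded bytes.
            'statement': base64.b64encode(statement_bytes).decode(),
            'signature': base64.b64encode(signature_bytes).decode(),
        },
    }

    print(json.dumps(attestation, indent=2))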
Attestation objects SHOULD include one or more transparency\nlog entries, and MAY include additional keys for other sources of signed\ntime (such as an 3161 Time Stamping Authority or a Roughtime server).\n\nAttestation objects are versioned; this PEP specifies version 1. Each\nversion is tied to a single cryptographic suite to minimize unnecessary\ncryptographic agility. In version 1, the suite is as follows:\n\n- Certificates are specified as X.509 certificates, and comply with\n the profile in 5280.\n- The message signature algorithm is ECDSA, with the P-256 curve for\n public keys and SHA-256 as the cryptographic digest function.\n\nFuture PEPs may change this suite (and the overall shape of the\nattestation object) by selecting a new version number.\n\nAttestation statement and signature generation\n\nThe attestation statement is the actual claim that is cryptographically\nsigned over within the attestation object (i.e., the\nenvelope.statement).\n\nThe attestation statement is encoded as a v1 in-toto Statement object,\nin JSON form. When serialized the statement is treated as an opaque\nbinary blob, avoiding the need for canonicalization. An example\nJSON-encoded statement is provided in appendix-4.\n\nIn addition to being a v1 in-toto Statement, the attestation statement\nis constrained in the following ways:\n\n- The in-toto subject MUST contain only a single subject.\n- subject[0].name is the distribution's filename, which MUST be a\n valid source distribution or\n wheel distribution filename.\n- subject[0].digest MUST contain a SHA-256 digest. Other digests MAY\n be present. The digests MUST be represented as hexadecimal strings.\n- The following predicateType values are supported:\n - SLSA Provenance: https://slsa.dev/provenance/v1\n - PyPI Publish Attestation:\n https://docs.pypi.org/attestations/publish/v1\n\nThe signature over this statement is constructed using the v1 DSSE\nsignature protocol, with a PAYLOAD_TYPE of application/vnd.in-toto+json\nand a PAYLOAD_BODY of the JSON-encoded statement above. No other\nPAYLOAD_TYPE is permitted.\n\nProvenance objects\n\nThe index will serve uploaded attestations along with metadata that can\nassist in verifying them in the form of JSON serialized objects.\n\nThese provenance objects will be available via both the Simple Index and\nJSON-based Simple API as described above, and will have the following\nlayout:\n\n {\n \"version\": 1,\n \"attestation_bundles\": [\n {\n \"publisher\": {\n \"kind\": \"important-ci-service\",\n \"claims\": {},\n \"vendor-property\": \"foo\",\n \"another-property\": 123\n },\n \"attestations\": [\n { /* attestation 1 ... */ },\n { /* attestation 2 ... */ }\n ]\n }\n ]\n }\n\nor, as pseudocode:\n\n @dataclass\n class Publisher:\n kind: string\n \"\"\"\n The kind of Trusted Publisher.\n \"\"\"\n\n claims: object | None\n \"\"\"\n Any context-specific claims retained by the index during Trusted Publisher\n authentication.\n \"\"\"\n\n _rest: object\n \"\"\"\n Each publisher object is open-ended, meaning that it MAY contain additional\n fields beyond the ones specified explicitly above. 
This field signals that,\n but is not itself present.\n \"\"\"\n\n @dataclass\n class AttestationBundle:\n publisher: Publisher\n \"\"\"\n The publisher associated with this set of attestations.\n \"\"\"\n\n attestations: list[Attestation]\n \"\"\"\n The set of attestations included in this bundle.\n \"\"\"\n\n @dataclass\n class Provenance:\n version: Literal[1]\n \"\"\"\n The provenance object's version, which is always 1.\n \"\"\"\n\n attestation_bundles: list[AttestationBundle]\n \"\"\"\n One or more attestation \"bundles\".\n \"\"\"\n\n- version is 1. Like attestation objects, provenance objects are\n versioned, and this PEP only defines version 1.\n\n- attestation_bundles is a required JSON array, containing one or more\n \"bundles\" of attestations. Each bundle corresponds to a signing\n identity (such as a Trusted Publishing identity), and contains one\n or more attestation objects.\n\n As noted in the Publisher model, each AttestationBundle.publisher\n object is specific to its Trusted Publisher but must include at\n minimum:\n\n - A kind key, which MUST be a JSON string that uniquely identifies\n the kind of Trusted Publisher.\n - A claims key, which MUST be a JSON object containing any\n context-specific claims retained by the index during Trusted\n Publisher authentication.\n\n All other keys in the publisher object are publisher-specific. A\n full illustrative example of a publisher object is provided in\n appendix-1.\n\n Each array of attestation objects is a superset of the attestations\n array supplied by the uploaded through the attestations field at\n upload time, as described in upload-endpoint and\n changes-to-provenance-objects.\n\nChanges to provenance objects\n\nProvenance objects are not immutable, and may change over time. Reasons\nfor changes to the provenance object include but are not limited to:\n\n- Addition of new attestations for a pre-existing signing identity:\n the index MAY choose to allow additional attestations by\n pre-existing signing identities, such as newer attestation versions\n for already uploaded files.\n- Addition of new signing identities and associated attestations: the\n index MAY choose to support attestations from sources other than the\n file's uploader, such as third-party auditors or the index itself.\n These attestations may be performed asynchronously, requiring the\n index to insert them into the provenance object post facto.\n\nAttestation verification\n\nVerifying an attestation object against a distribution file requires\nverification of each of the following:\n\n- version is 1. 
The verifier MUST reject any other version.\n- verification_material.certificate is a valid signing certificate, as\n issued by an a priori trusted authority (such as a root of trust\n already present within the verifying client).\n- verification_material.certificate identifies an appropriate signing\n subject, such as the machine identity of the Trusted Publisher that\n published the package.\n- envelope.statement is a valid in-toto v1 Statement, with a subject\n and digest that MUST match the distribution's filename and contents.\n For the distribution's filename, matching MUST be performed by\n parsing using the appropriate source distribution or wheel filename\n format, as the statement's subject may be equivalent but normalized.\n- envelope.signature is a valid signature for envelope.statement\n corresponding to verification_material.certificate, as reconstituted\n via the v1 DSSE signature protocol.\n\nIn addition to the above required steps, a verifier MAY additionally\nverify verification_material.transparency_entries on a policy basis,\ne.g. requiring at least one transparency log entry or a threshold of\nentries. When verifying transparency entries, the verifier MUST confirm\nthat the inclusion time for each entry lies within the signing\ncertificate's validity period.\n\nSecurity Implications\n\nThis PEP is primarily \"mechanical\" in nature; it provides layouts for\nstructuring and serving verifiable digital attestations without\nspecifying higher level security \"policies\" around attestation validity,\nthresholds between attestations, and so forth.\n\nCryptographic agility in attestations\n\nAlgorithmic agility is a common source of exploitable vulnerabilities in\ncryptographic schemes. This PEP limits algorithmic agility in two ways:\n\n- All algorithms are specified in a single suite, rather than a\n geometric collection of parameters. This makes it impossible (for\n example) for an attacker to select a strong signature algorithm with\n a weak hash function, compromising the scheme as a whole.\n- Attestation objects are versioned, and may only contain the\n algorithmic suite specified for their version. If a specific suite\n is considered insecure in the future, clients may choose to blanket\n reject or qualify verifications of attestations that contain that\n suite.\n\nIndex trust\n\nThis PEP does not increase (or decrease) trust in the index itself: the\nindex is still effectively trusted to honestly deliver unmodified\npackage distributions, since a dishonest index capable of modifying\npackage contents could also dishonestly modify or omit package\nattestations. As a result, this PEP's presumption of index trust is\nequivalent to the unstated presumption with earlier mechanisms, like PGP\nand wheel signatures.\n\nThis PEP does not preclude or exclude future index trust mechanisms,\nsuch as PEP 458 and/or PEP 480.\n\nRecommendations\n\nThis PEP recommends, but does not mandate, that attestation objects\ncontain one or more verifiable sources of signed time that corroborate\nthe signing certificate's claimed validity period. 
Indices that\nimplement this PEP may choose to strictly enforce this requirement.\n\nAppendix 1: Example Trusted Publisher Representation\n\nThis appendix provides a fictional example of a publisher key within a\nsimple JSON API project.files[].provenance listing:\n\n \"publisher\": {\n \"kind\": \"GitHub\",\n \"claims\": {\n \"ref\": \"refs/tags/v1.0.0\",\n \"sha\": \"da39a3ee5e6b4b0d3255bfef95601890afd80709\"\n },\n \"repository_name\": \"HolyGrail\",\n \"repository_owner\": \"octocat\",\n \"repository_owner_id\": \"1\",\n \"workflow_filename\": \"publish.yml\",\n \"environment\": null\n }\n\nAppendix 2: Data models for Transparency Log Entries\n\nThis appendix contains pseudocoded data models for transparency log\nentries in attestation objects. Each transparency log entry serves as a\nsource of signed inclusion time, and can be verified either online or\noffline.\n\n @dataclass\n class TransparencyLogEntry:\n log_index: int\n \"\"\"\n The global index of the log entry, used when querying the log.\n \"\"\"\n\n log_id: str\n \"\"\"\n An opaque, unique identifier for the log.\n \"\"\"\n\n entry_kind: str\n \"\"\"\n The kind (type) of log entry.\n \"\"\"\n\n entry_version: str\n \"\"\"\n The version of the log entry's submitted format.\n \"\"\"\n\n integrated_time: int\n \"\"\"\n The UNIX timestamp from the log from when the entry was persisted.\n \"\"\"\n\n inclusion_proof: InclusionProof\n \"\"\"\n The actual inclusion proof of the log entry.\n \"\"\"\n\n\n @dataclass\n class InclusionProof:\n log_index: int\n \"\"\"\n The index of the entry in the tree it was written to.\n \"\"\"\n\n root_hash: str\n \"\"\"\n The digest stored at the root of the Merkle tree at the time of proof\n generation.\n \"\"\"\n\n tree_size: int\n \"\"\"\n The size of the Merkle tree at the time of proof generation.\n \"\"\"\n\n hashes: list[str]\n \"\"\"\n A list of hashes required to complete the inclusion proof, sorted\n in order from leaf to root. The leaf and root hashes are not themselves\n included in this list; the root is supplied via `root_hash` and the client\n must calculate the leaf hash.\n \"\"\"\n\n checkpoint: str\n \"\"\"\n The signed tree head's signature, at the time of proof generation.\n \"\"\"\n\n cosigned_checkpoints: list[str]\n \"\"\"\n Cosigned checkpoints from zero or more log witnesses.\n \"\"\"\n\nAppendix 3: Simple JSON API size considerations\n\nA previous draft of this PEP required embedding each\nprovenance object directly into its appropriate part\nof the JSON Simple API.\n\nThe current version of this PEP embeds the SHA-256 digest of the\nprovenance object instead. This is done for size and network bandwidth\nconsideration reasons:\n\n1. We estimate the typical size of an attestation object to be\n approximately 5.3 KB of JSON.\n2. We conservatively estimate that indices eventually host around 3\n attestations per release file, or approximately 15.9 KB of JSON per\n combined provenance object.\n3. As of May 2024, the average project on PyPI has approximately 21\n release files. We conservatively expect this average to increase\n over time.\n4. 
Combined, these numbers imply that a typical project might expect to\n host between 60 and 70 attestations, or approximately 339 KB of\n additional JSON in its \"project detail\" endpoint.\n\nThese numbers are significantly worse in \"pathological\" cases, where\nprojects have hundreds or thousands of releases and/or dozens of files\nper release.\n\nAppendix 4: Example attestation statement\n\nGiven a source distribution sampleproject-1.2.3.tar.gz with a SHA-256\ndigest of\ne3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, the\nfollowing is an appropriate in-toto Statement, as a JSON object:\n\n {\n \"_type\": \"https://in-toto.io/Statement/v1\",\n \"subject\": [\n {\n \"name\": \"sampleproject-1.2.3.tar.gz\",\n \"digest\": {\"sha256\": \"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\"}\n }\n ],\n \"predicateType\": \"https://some-arbitrary-predicate.example.com/v1\",\n \"predicate\": {\n \"something-else\": \"foo\"\n }\n }\n\nCopyright\n\nThis document is placed in the public domain or under the\nCC0-1.0-Universal license, whichever is more permissive.\n\nPEP: 238 Title: Changing the Division Operator Author: Moshe Zadka\n, Guido van Rossum Status:\nFinal Type: Standards Track Content-Type: text/x-rst Created:\n11-Mar-2001 Python-Version: 2.2 Post-History: 16-Mar-2001, 26-Jul-2001,\n27-Jul-2001\n\nAbstract\n\nThe current division (/) operator has an ambiguous meaning for numerical\narguments: it returns the floor of the mathematical result of division\nif the arguments are ints or longs, but it returns a reasonable\napproximation of the division result if the arguments are floats or\ncomplex. This makes expressions expecting float or complex results\nerror-prone when integers are not expected but possible as inputs.\n\nWe propose to fix this by introducing different operators for different\noperations: x/y to return a reasonable approximation of the mathematical\nresult of the division (\"true division\"), x//y to return the floor\n(\"floor division\"). 
We call the current, mixed meaning of x/y \"classic\ndivision\".\n\nBecause of severe backwards compatibility issues, not to mention a major\nflamewar on c.l.py, we propose the following transitional measures\n(starting with Python 2.2):\n\n- Classic division will remain the default in the Python 2.x series;\n true division will be standard in Python 3.0.\n- The // operator will be available to request floor division\n unambiguously.\n- The future division statement, spelled\n from __future__ import division, will change the / operator to mean\n true division throughout the module.\n- A command line option will enable run-time warnings for classic\n division applied to int or long arguments; another command line\n option will make true division the default.\n- The standard library will use the future division statement and the\n // operator when appropriate, so as to completely avoid classic\n division.\n\nMotivation\n\nThe classic division operator makes it hard to write numerical\nexpressions that are supposed to give correct results from arbitrary\nnumerical inputs. For all other operators, one can write down a formula\nsuch as x*y**2 + z, and the calculated result will be close to the\nmathematical result (within the limits of numerical accuracy, of course)\nfor any numerical input type (int, long, float, or complex). But\ndivision poses a problem: if the expressions for both arguments happen\nto have an integral type, it implements floor division rather than true\ndivision.\n\nThe problem is unique to dynamically typed languages: in a statically\ntyped language like C, the inputs, typically function arguments, would\nbe declared as double or float, and when a call passes an integer\nargument, it is converted to double or float at the time of the call.\nPython doesn't have argument type declarations, so integer arguments can\neasily find their way into an expression.\n\nThe problem is particularly pernicious since ints are perfect\nsubstitutes for floats in all other circumstances: math.sqrt(2) returns\nthe same value as math.sqrt(2.0), 3.14*100 and 3.14*100.0 return the\nsame value, and so on. Thus, the author of a numerical routine may only\nuse floating point numbers to test his code, and believe that it works\ncorrectly, and a user may accidentally pass in an integer input value\nand get incorrect results.\n\nAnother way to look at this is that classic division makes it difficult\nto write polymorphic functions that work well with either float or int\narguments; all other operators already do the right thing. No algorithm\nthat works for both ints and floats has a need for truncating division\nin one case and true division in the other.\n\nThe correct work-around is subtle: casting an argument to float() is\nwrong if it could be a complex number; adding 0.0 to an argument doesn't\npreserve the sign of the argument if it was minus zero. The only\nsolution without either downside is multiplying an argument (typically\nthe first) by 1.0. This leaves the value and sign unchanged for float\nand complex, and turns int and long into a float with the corresponding\nvalue.\n\nIt is the opinion of the authors that this is a real design bug in\nPython, and that it should be fixed sooner rather than later. 
Assuming\nPython usage will continue to grow, the cost of leaving this bug in the\nlanguage will eventually outweigh the cost of fixing old code -- there\nis an upper bound to the amount of code to be fixed, but the amount of\ncode that might be affected by the bug in the future is unbounded.\n\nAnother reason for this change is the desire to ultimately unify\nPython's numeric model. This is the subject of PEP 228 (which is\ncurrently incomplete). A unified numeric model removes most of the\nuser's need to be aware of different numerical types. This is good for\nbeginners, but also takes away concerns about different numeric behavior\nfor advanced programmers. (Of course, it won't remove concerns about\nnumerical stability and accuracy.)\n\nIn a unified numeric model, the different types (int, long, float,\ncomplex, and possibly others, such as a new rational type) serve mostly\nas storage optimizations, and to some extent to indicate orthogonal\nproperties such as inexactness or complexity. In a unified model, the\ninteger 1 should be indistinguishable from the floating point number 1.0\n(except for its inexactness), and both should behave the same in all\nnumeric contexts. Clearly, in a unified numeric model, if a==b and c==d,\na/c should equal b/d (taking some liberties due to rounding for inexact\nnumbers), and since everybody agrees that 1.0/2.0 equals 0.5, 1/2 should\nalso equal 0.5. Likewise, since 1//2 equals zero, 1.0//2.0 should also\nequal zero.\n\nVariations\n\nAesthetically, x//y doesn't please everyone, and hence several\nvariations have been proposed. They are addressed here:\n\n- x div y. This would introduce a new keyword. Since div is a popular\n identifier, this would break a fair amount of existing code, unless\n the new keyword was only recognized under a future division\n statement. Since it is expected that the majority of code that needs\n to be converted is dividing integers, this would greatly increase\n the need for the future division statement. Even with a future\n statement, the general sentiment against adding new keywords unless\n absolutely necessary argues against this.\n- div(x, y). This makes the conversion of old code much harder.\n Replacing x/y with x//y or x div y can be done with a simple query\n replace; in most cases the programmer can easily verify that a\n particular module only works with integers so all occurrences of x/y\n can be replaced. (The query replace is still needed to weed out\n slashes occurring in comments or string literals.) Replacing x/y\n with div(x, y) would require a much more intelligent tool, since the\n extent of the expressions to the left and right of the / must be\n analyzed before the placement of the div( and ) part can be decided.\n- x \\ y. The backslash is already a token, meaning line continuation,\n and in general it suggests an escape to Unix eyes. In addition (this\n due to Terry Reedy) this would make things like eval(\"x\\y\") harder\n to get right.\n\nAlternatives\n\nIn order to reduce the amount of old code that needs to be converted,\nseveral alternative proposals have been put forth. Here is a brief\ndiscussion of each proposal (or category of proposals). If you know of\nan alternative that was discussed on c.l.py that isn't mentioned here,\nplease mail the second author.\n\n- Let / keep its classic semantics; introduce // for true division.\n This still leaves a broken operator in the language, and invites to\n use the broken behavior. 
It also shuts off the road to a unified\n numeric model a la PEP 228.\n- Let int division return a special \"portmanteau\" type that behaves as\n an integer in integer context, but like a float in a float context.\n The problem with this is that after a few operations, the int and\n the float value could be miles apart, it's unclear which value\n should be used in comparisons, and of course many contexts (like\n conversion to string) don't have a clear integer or float\n preference.\n- Use a directive to use specific division semantics in a module,\n rather than a future statement. This retains classic division as a\n permanent wart in the language, requiring future generations of\n Python programmers to be aware of the problem and the remedies.\n- Use from __past__ import division to use classic division semantics\n in a module. This also retains the classic division as a permanent\n wart, or at least for a long time (eventually the past division\n statement could raise an ImportError).\n- Use a directive (or some other way) to specify the Python version\n for which a specific piece of code was developed. This requires\n future Python interpreters to be able to emulate exactly several\n previous versions of Python, and moreover to do so for multiple\n versions within the same interpreter. This is way too much work. A\n much simpler solution is to keep multiple interpreters installed.\n Another argument against this is that the version directive is\n almost always overspecified: most code written for Python X.Y, works\n for Python X.(Y-1) and X.(Y+1) as well, so specifying X.Y as a\n version is more constraining than it needs to be. At the same time,\n there's no way to know at which future or past version the code will\n break.\n\nAPI Changes\n\nDuring the transitional phase, we have to support three division\noperators within the same program: classic division (for / in modules\nwithout a future division statement), true division (for / in modules\nwith a future division statement), and floor division (for //). Each\noperator comes in two flavors: regular, and as an augmented assignment\noperator (/= or //=).\n\nThe names associated with these variations are:\n\n- Overloaded operator methods:\n\n __div__(), __floordiv__(), __truediv__();\n __idiv__(), __ifloordiv__(), __itruediv__().\n\n- Abstract API C functions:\n\n PyNumber_Divide(), PyNumber_FloorDivide(),\n PyNumber_TrueDivide();\n\n PyNumber_InPlaceDivide(), PyNumber_InPlaceFloorDivide(),\n PyNumber_InPlaceTrueDivide().\n\n- Byte code opcodes:\n\n BINARY_DIVIDE, BINARY_FLOOR_DIVIDE, BINARY_TRUE_DIVIDE;\n INPLACE_DIVIDE, INPLACE_FLOOR_DIVIDE, INPLACE_TRUE_DIVIDE.\n\n- PyNumberMethod slots:\n\n nb_divide, nb_floor_divide, nb_true_divide,\n nb_inplace_divide, nb_inplace_floor_divide,\n nb_inplace_true_divide.\n\nThe added PyNumberMethod slots require an additional flag in tp_flags;\nthis flag will be named Py_TPFLAGS_HAVE_NEWDIVIDE and will be included\nin Py_TPFLAGS_DEFAULT.\n\nThe true and floor division APIs will look for the corresponding slots\nand call that; when that slot is NULL, they will raise an exception.\nThere is no fallback to the classic divide slot.\n\nIn Python 3.0, the classic division semantics will be removed; the\nclassic division APIs will become synonymous with true division.\n\nCommand Line Option\n\nThe -Q command line option takes a string argument that can take four\nvalues: old, warn, warnall, or new. The default is old in Python 2.2 but\nwill change to warn in later 2.x versions. 
The old value means the\nclassic division operator acts as described. The warn value means the\nclassic division operator issues a warning (a DeprecationWarning using\nthe standard warning framework) when applied to ints or longs. The\nwarnall value also issues warnings for classic division when applied to\nfloats or complex; this is for use by the fixdiv.py conversion script\nmentioned below. The new value changes the default globally so that the\n/ operator is always interpreted as true division. The new option is\nonly intended for use in certain educational environments, where true\ndivision is required, but asking the students to include the future\ndivision statement in all their code would be a problem.\n\nThis option will not be supported in Python 3.0; Python 3.0 will always\ninterpret / as true division.\n\n(This option was originally proposed as -D, but that turned out to be an\nexisting option for Jython, hence the Q -- mnemonic for Quotient. Other\nnames have been proposed, like -Qclassic, -Qclassic-warn, -Qtrue, or\n-Qold_division etc.; these seem more verbose to me without much\nadvantage. After all the term classic division is not used in the\nlanguage at all (only in the PEP), and the term true division is rarely\nused in the language -- only in __truediv__.)\n\nSemantics of Floor Division\n\nFloor division will be implemented in all the Python numeric types, and\nwill have the semantics of:\n\n a // b == floor(a/b)\n\nexcept that the result type will be the common type into which a and b\nare coerced before the operation.\n\nSpecifically, if a and b are of the same type, a//b will be of that type\ntoo. If the inputs are of different types, they are first coerced to a\ncommon type using the same rules used for all other arithmetic\noperators.\n\nIn particular, if a and b are both ints or longs, the result has the\nsame type and value as for classic division on these types (including\nthe case of mixed input types; int//long and long//int will both return\na long).\n\nFor floating point inputs, the result is a float. For example:\n\n 3.5//2.0 == 1.0\n\nFor complex numbers, // raises an exception, since floor() of a complex\nnumber is not allowed.\n\nFor user-defined classes and extension types, all semantics are up to\nthe implementation of the class or type.\n\nSemantics of True Division\n\nTrue division for ints and longs will convert the arguments to float and\nthen apply a float division. That is, even 2/1 will return a\nfloat (2.0), not an int. For floats and complex, it will be the same as\nclassic division.\n\nThe 2.2 implementation of true division acts as if the float type had\nunbounded range, so that overflow doesn't occur unless the magnitude of\nthe mathematical result is too large to represent as a float. For\nexample, after x = 1L << 40000, float(x) raises OverflowError (note that\nthis is also new in 2.2: previously the outcome was platform-dependent,\nmost commonly a float infinity). But x/x returns 1.0 without exception,\nwhile x/1 raises OverflowError.\n\nNote that for int and long arguments, true division may lose\ninformation; this is in the nature of true division (as long as\nrationals are not in the language). Algorithms that consciously use\nlongs should consider using //, as true division of longs retains no\nmore than 53 bits of precision (on most platforms).\n\nIf and when a rational type is added to Python (see PEP 239), true\ndivision for ints and longs should probably return a rational. 
This\navoids the problem with true division of ints and longs losing\ninformation. But until then, for consistency, float is the only choice\nfor true division.\n\nThe Future Division Statement\n\nIf from __future__ import division is present in a module, or if -Qnew\nis used, the / and /= operators are translated to true division opcodes;\notherwise they are translated to classic division (until Python 3.0\ncomes along, where they are always translated to true division).\n\nThe future division statement has no effect on the recognition or\ntranslation of // and //=.\n\nSee PEP 236 for the general rules for future statements.\n\n(It has been proposed to use a longer phrase, like true_division or\nmodern_division. These don't seem to add much information.)\n\nOpen Issues\n\nWe expect that these issues will be resolved over time, as more feedback\nis received or we gather more experience with the initial\nimplementation.\n\n- It has been proposed to call // the quotient operator, and the /\n operator the ratio operator. I'm not sure about this -- for some\n people quotient is just a synonym for division, and ratio suggests\n rational numbers, which is wrong. I prefer the terminology to be\n slightly awkward if that avoids unambiguity. Also, for some folks\n quotient suggests truncation towards zero, not towards infinity as\n floor division says explicitly.\n- It has been argued that a command line option to change the default\n is evil. It can certainly be dangerous in the wrong hands: for\n example, it would be impossible to combine a 3rd party library\n package that requires -Qnew with another one that requires -Qold.\n But I believe that the VPython folks need a way to enable true\n division by default, and other educators might need the same. These\n usually have enough control over the library packages available in\n their environment.\n- For classes to have to support all three of __div__(),\n __floordiv__() and __truediv__() seems painful; and what to do in\n 3.0? Maybe we only need __div__() and __floordiv__(), or maybe at\n least true division should try __truediv__() first and __div__()\n second.\n\nResolved Issues\n\n- Issue: For very large long integers, the definition of true division\n as returning a float causes problems, since the range of Python\n longs is much larger than that of Python floats. This problem will\n disappear if and when rational numbers are supported.\n\n Resolution: For long true division, Python uses an internal float\n type with native double precision but unbounded range, so that\n OverflowError doesn't occur unless the quotient is too large to\n represent as a native double.\n\n- Issue: In the interim, maybe the long-to-float conversion could be\n made to raise OverflowError if the long is out of range.\n\n Resolution: This has been implemented, but, as above, the magnitude\n of the inputs to long true division doesn't matter; only the\n magnitude of the quotient matters.\n\n- Issue: Tim Peters will make sure that whenever an in-range float is\n returned, decent precision is guaranteed.\n\n Resolution: Provided the quotient of long true division is\n representable as a float, it suffers no more than 3 rounding errors:\n one each for converting the inputs to an internal float type with\n native double precision but unbounded range, and one more for the\n division. 
However, note that if the magnitude of the quotient is too\n small to represent as a native double, 0.0 is returned without\n exception (\"silent underflow\").\n\nFAQ\n\nWhen will Python 3.0 be released?\n\n We don't plan that long ahead, so we can't say for sure. We want to\n allow at least two years for the transition. If Python 3.0 comes out\n sooner, we'll keep the 2.x line alive for backwards compatibility\n until at least two years from the release of Python 2.2. In practice,\n you will be able to continue to use the Python 2.x line for several\n years after Python 3.0 is released, so you can take your time with the\n transition. Sites are expected to have both Python 2.x and Python 3.x\n installed simultaneously.\n\nWhy isn't true division called float division?\n\n Because I want to keep the door open to possibly introducing rationals\n and making 1/2 return a rational rather than a float. See PEP 239.\n\nWhy is there a need for __truediv__ and __itruediv__?\n\n We don't want to make user-defined classes second-class citizens.\n Certainly not with the type/class unification going on.\n\nHow do I write code that works under the classic rules as well as under the new rules without using // or a future division statement?\n\n Use x*1.0/y for true division, divmod(x, y) (PEP 228) for int\n division. Especially the latter is best hidden inside a function. You\n may also write float(x)/y for true division if you are sure that you\n don't expect complex numbers. If you know your integers are never\n negative, you can use int(x/y) -- while the documentation of int()\n says that int() can round or truncate depending on the C\n implementation, we know of no C implementation that doesn't truncate,\n and we're going to change the spec for int() to promise truncation.\n Note that classic division (and floor division) round towards negative\n infinity, while int() rounds towards zero, giving different answers\n for negative numbers.\n\nHow do I specify the division semantics for input(), compile(), execfile(), eval() and exec?\n\n They inherit the choice from the invoking module. PEP 236 now lists\n this as a resolved problem, referring to PEP 264.\n\nWhat about code compiled by the codeop module?\n\n This is dealt with properly; see PEP 264.\n\nWill there be conversion tools or aids?\n\n Certainly. While these are outside the scope of the PEP, I should\n point out two simple tools that will be released with Python 2.2a3:\n Tools/scripts/finddiv.py finds division operators (slightly smarter\n than grep /) and Tools/scripts/fixdiv.py can produce patches based on\n run-time analysis.\n\nWhy is my question not answered here?\n\n Because we weren't aware of it. If it's been discussed on c.l.py and\n you believe the answer is of general interest, please notify the\n second author. 
Implementation

Essentially everything mentioned here is implemented in CVS and will be
released with Python 2.2a3; most of it was already released with Python
2.2a2.

Copyright

This document has been placed in the public domain.

PEP: 234
Title: Iterators
Author: Ka-Ping Yee <ping@zesty.ca>, Guido van Rossum <guido@python.org>
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 30-Jan-2001
Python-Version: 2.1
Post-History: 30-Apr-2001

Abstract

This document proposes an iteration interface that objects can provide
to control the behaviour of for loops. Looping is customized by
providing a method that produces an iterator object. The iterator
provides a get next value operation that produces the next item in the
sequence each time it is called, raising an exception when no more items
are available.

In addition, specific iterators over the keys of a dictionary and over
the lines of a file are proposed, and a proposal is made to allow
spelling dict.has_key(key) as key in dict.

Note: this is an almost complete rewrite of this PEP by the second
author, describing the actual implementation checked into the trunk of
the Python 2.2 CVS tree. It is still open for discussion. Some of the
more esoteric proposals in the original version of this PEP have been
withdrawn for now; these may be the subject of a separate PEP in the
future.

C API Specification

A new exception is defined, StopIteration, which can be used to signal
the end of an iteration.

A new slot named tp_iter for requesting an iterator is added to the type
object structure. This should be a function of one PyObject * argument
returning a PyObject *, or NULL. To use this slot, a new C API function
PyObject_GetIter() is added, with the same signature as the tp_iter slot
function.

Another new slot, named tp_iternext, is added to the type structure, for
obtaining the next value in the iteration. To use this slot, a new C API
function PyIter_Next() is added. The signature for both the slot and the
API function is as follows, although the NULL return conditions differ:
the argument is a PyObject * and so is the return value. When the return
value is non-NULL, it is the next value in the iteration.
When it is NULL, then for the tp_iternext slot there are three
possibilities:

- No exception is set; this implies the end of the iteration.
- The StopIteration exception (or a derived exception class) is set;
  this implies the end of the iteration.
- Some other exception is set; this means that an error occurred that
  should be propagated normally.

The higher-level PyIter_Next() function clears the StopIteration
exception (or derived exception) when it occurs, so its NULL return
conditions are simpler:

- No exception is set; this means iteration has ended.
- Some exception is set; this means an error occurred, and should be
  propagated normally.

Iterators implemented in C should not implement a next() method with
semantics similar to those of the tp_iternext slot! When the type's
dictionary is initialized (by PyType_Ready()), the presence of a
tp_iternext slot causes a method next() wrapping that slot to be added
to the type's tp_dict. (Exception: if the type doesn't use
PyObject_GenericGetAttr() to access instance attributes, the next()
method in the type's tp_dict may not be seen.) (Due to a
misunderstanding in the original text of this PEP, in Python 2.2, all
iterator types implemented a next() method that was overridden by the
wrapper; this has been fixed in Python 2.3.)

To ensure binary backwards compatibility, a new flag
Py_TPFLAGS_HAVE_ITER is added to the set of flags in the tp_flags field,
and to the default flags macro. This flag must be tested before
accessing the tp_iter or tp_iternext slots. The macro PyIter_Check()
tests whether an object has the appropriate flag set and has a non-NULL
tp_iternext slot. There is no such macro for the tp_iter slot (since the
only place where this slot is referenced should be PyObject_GetIter(),
and this can check for the Py_TPFLAGS_HAVE_ITER flag directly).

(Note: the tp_iter slot can be present on any object; the tp_iternext
slot should only be present on objects that act as iterators.)

For backwards compatibility, the PyObject_GetIter() function implements
fallback semantics when its argument is a sequence that does not
implement a tp_iter function: a lightweight sequence iterator object is
constructed in that case which iterates over the items of the sequence
in the natural order.

The Python bytecode generated for for loops is changed to use new
opcodes, GET_ITER and FOR_ITER, that use the iterator protocol rather
than the sequence protocol to get the next value for the loop variable.
This makes it possible to use a for loop to loop over non-sequence
objects that support the tp_iter slot. Other places where the
interpreter loops over the values of a sequence should also be changed
to use iterators.

Iterators ought to implement the tp_iter slot as returning a reference
to themselves; this is needed to make it possible to use an iterator (as
opposed to a sequence) in a for loop.

Iterator implementations (in C or in Python) should guarantee that once
the iterator has signalled its exhaustion, subsequent calls to
tp_iternext or to the next() method will continue to do so. It is not
specified whether an iterator should enter the exhausted state when an
exception (other than StopIteration) is raised. Note that Python cannot
guarantee that user-defined or 3rd party iterators implement this
requirement correctly.
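In Python terms, the machinery described in this section -- the new
for-loop opcodes and the sequence fallback -- behaves roughly like the
hand-written loop below. This is only an illustrative sketch, not how
the interpreter is implemented, and the function and class names are
invented for the example:

    def for_loop(obj, body):
        # Roughly what "for x in obj: body(x)" does.
        it = iter(obj)          # GET_ITER; falls back to a sequence
                                # iterator if obj only has __getitem__
        while 1:
            try:
                x = it.next()   # FOR_ITER; 2.2-era "get next value" call
            except StopIteration:
                break
            body(x)

    class Squares:
        "A pre-iterator sequence: only __getitem__, yet still loopable."
        def __getitem__(self, i):
            if i >= 5:
                raise IndexError
            return i * i

    result = []
    for_loop(Squares(), result.append)
    assert result == [0, 1, 4, 9, 16]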
Python API Specification

The StopIteration exception is made visible as one of the standard
exceptions. It is derived from Exception.

A new built-in function is defined, iter(), which can be called in two
ways:

- iter(obj) calls PyObject_GetIter(obj).
- iter(callable, sentinel) returns a special kind of iterator that
  calls the callable to produce a new value, and compares the return
  value to the sentinel value. If the return value equals the
  sentinel, this signals the end of the iteration and StopIteration is
  raised rather than returning normally; if the return value does not
  equal the sentinel, it is returned as the next value from the
  iterator. If the callable raises an exception, this is propagated
  normally; in particular, the function is allowed to raise
  StopIteration as an alternative way to end the iteration. (This
  functionality is available from the C API as
  PyCallIter_New(callable, sentinel).)

Iterator objects returned by either form of iter() have a next() method.
This method either returns the next value in the iteration, or raises
StopIteration (or a derived exception class) to signal the end of the
iteration. Any other exception should be considered to signify an error
and should be propagated normally, not taken to mean the end of the
iteration.

Classes can define how they are iterated over by defining an __iter__()
method; this should take no additional arguments and return a valid
iterator object. A class that wants to be an iterator should implement
two methods: a next() method that behaves as described above, and an
__iter__() method that returns self.

The two methods correspond to two distinct protocols:

1. An object can be iterated over with for if it implements __iter__()
   or __getitem__().
2. An object can function as an iterator if it implements next().

Container-like objects usually support protocol 1. Iterators are
currently required to support both protocols. The semantics of iteration
come only from protocol 2; protocol 1 is present to make iterators
behave like sequences; in particular so that code receiving an iterator
can use a for-loop over the iterator.

Dictionary Iterators

- Dictionaries implement a sq_contains slot that implements the same
  test as the has_key() method. This means that we can write

      if k in dict: ...

  which is equivalent to

      if dict.has_key(k): ...

- Dictionaries implement a tp_iter slot that returns an efficient
  iterator that iterates over the keys of the dictionary. During such
  an iteration, the dictionary should not be modified, except that
  setting the value for an existing key is allowed (deletions or
  additions are not, nor is the update() method). This means that we
  can write

      for k in dict: ...

  which is equivalent to, but much faster than

      for k in dict.keys(): ...

  as long as the restriction on modifications to the dictionary
  (either by the loop or by another thread) is not violated.

- Add methods to dictionaries that return different kinds of iterators
  explicitly:

      for key in dict.iterkeys(): ...

      for value in dict.itervalues(): ...

      for key, value in dict.iteritems(): ...

  This means that for x in dict is shorthand for
  for x in dict.iterkeys().

Other mappings, if they support iterators at all, should also iterate
over the keys. However, this should not be taken as an absolute rule;
specific applications may have different requirements.
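Before moving on to file objects, here is a small self-contained sketch
of the two protocols specified above, written in the 2.2-era style this
PEP describes (a next() method rather than a __next__() slot). The class
names are invented for illustration:

    class CountDown:
        "An iterator (protocol 2): next(), plus __iter__() returning self."
        def __init__(self, start):
            self.remaining = start
        def __iter__(self):
            return self
        def next(self):
            if self.remaining <= 0:
                raise StopIteration
            value = self.remaining
            self.remaining = value - 1
            return value

    class LaunchPad:
        "A container (protocol 1): __iter__() hands out a fresh iterator."
        def __init__(self, start):
            self.start = start
        def __iter__(self):
            return CountDown(self.start)

    result = []
    for x in LaunchPad(3):      # the for loop calls iter() and next()
        result.append(x)
    assert result == [3, 2, 1]

    it = iter(LaunchPad(3))     # the same protocol, driven by hand
    assert it.next() == 3
    assert it.next() == 2
    assert it.next() == 1       # one more next() raises StopIteration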
File Iterators

The following proposal is useful because it provides us with a good
answer to the complaint that the common idiom to iterate over the lines
of a file is ugly and slow.

- Files implement a tp_iter slot that is equivalent to
  iter(f.readline, ""). This means that we can write

      for line in file:
          ...

  as a shorthand for

      for line in iter(file.readline, ""):
          ...

  which is equivalent to, but faster than

      while 1:
          line = file.readline()
          if not line:
              break
          ...

This also shows that some iterators are destructive: they consume all
the values and a second iterator cannot easily be created that iterates
independently over the same values. You could open the file for a second
time, or seek() to the beginning, but these solutions don't work for all
file types, e.g. they don't work when the open file object really
represents a pipe or a stream socket.

Because the file iterator uses an internal buffer, mixing this with
other file operations (e.g. file.readline()) doesn't work right. Also,
the following code:

    for line in file:
        if line == "\n":
            break
    for line in file:
        print line,

doesn't work as you might expect, because the iterator created by the
second for-loop doesn't take the buffer read ahead by the first for-loop
into account. A correct way to write this is:

    it = iter(file)
    for line in it:
        if line == "\n":
            break
    for line in it:
        print line,

(The rationale for these restrictions is that for line in file ought to
become the recommended, standard way to iterate over the lines of a
file, and this should be as fast as can be. The iterator version is
considerably faster than calling readline(), due to the internal buffer
in the iterator.)
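The sentinel form of iter() underlying the file iterator is not limited
to readline(). As an aside (the filename and block size below are
arbitrary placeholders for this sketch), the same idiom can drive
iteration over fixed-size blocks of a binary file:

    # Sum the size of a file by reading it in 8 KB blocks.
    f = open("data.bin", "rb")
    total = 0
    for block in iter(lambda: f.read(8192), ""):
        total = total + len(block)
    f.close()
    print total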
Rationale

If all the parts of the proposal are included, this addresses many
concerns in a consistent and flexible fashion. Among its chief virtues
are the following four -- no, five -- no, six -- points:

1. It provides an extensible iterator interface.
2. It allows performance enhancements to list iteration.
3. It allows big performance enhancements to dictionary iteration.
4. It allows one to provide an interface for just iteration without
   pretending to provide random access to elements.
5. It is backward-compatible with all existing user-defined classes and
   extension objects that emulate sequences and mappings, even mappings
   that only implement a subset of {__getitem__, keys, values, items}.
6. It makes code iterating over non-sequence collections more concise
   and readable.

Resolved Issues

The following topics have been decided by consensus or BDFL
pronouncement.

- Two alternative spellings for next() have been proposed but
  rejected: __next__(), because it corresponds to a type object slot
  (tp_iternext); and __call__(), because this is the only operation.

  Arguments against __next__(): while many iterators are used in for
  loops, it is expected that user code will also call next() directly,
  so having to write __next__() is ugly; also, a possible extension of
  the protocol would be to allow for prev(), current() and reset()
  operations; surely we don't want to use __prev__(), __current__(),
  __reset__().

  Arguments against __call__() (the original proposal): taken out of
  context, x() is not very readable, while x.next() is clear; there's
  a danger that every special-purpose object wants to use __call__()
  for its most common operation, causing more confusion than clarity.

  (In retrospect, it might have been better to go for __next__() and
  have a new built-in, next(it), which calls it.__next__(). But alas,
  it's too late; this has been deployed in Python 2.2 since December
  2001.)

- Some folks have requested the ability to restart an iterator. This
  should be dealt with by calling iter() on a sequence repeatedly, not
  by the iterator protocol itself. (See also requested extensions
  below.)

- It has been questioned whether an exception to signal the end of the
  iteration isn't too expensive. Several alternatives for the
  StopIteration exception have been proposed: a special value End to
  signal the end, a function end() to test whether the iterator is
  finished, even reusing the IndexError exception.

  - A special value has the problem that if a sequence ever contains
    that special value, a loop over that sequence will end
    prematurely without any warning. If the experience with
    null-terminated C strings hasn't taught us the problems this can
    cause, imagine the trouble a Python introspection tool would
    have iterating over a list of all built-in names, assuming that
    the special End value was a built-in name!
  - Calling an end() function would require two calls per iteration.
    Two calls is much more expensive than one call plus a test for
    an exception. Especially the time-critical for loop can test
    very cheaply for an exception.
  - Reusing IndexError can cause confusion because it can be a
    genuine error, which would be masked by ending the loop
    prematurely.

- Some have asked for a standard iterator type. Presumably all
  iterators would have to be derived from this type. But this is not
  the Python way: dictionaries are mappings because they support
  __getitem__() and a handful of other operations, not because they
  are derived from an abstract mapping type.

- Regarding if key in dict: there is no doubt that the dict.has_key(x)
  interpretation of x in dict is by far the most useful
  interpretation, probably the only useful one. There has been
  resistance against this because x in list checks whether x is
  present among the values, while the proposal makes x in dict check
  whether x is present among the keys. Given that the symmetry between
  lists and dictionaries is very weak, this argument does not have
  much weight.

- The name iter() is an abbreviation. Alternatives proposed include
  iterate() and traverse(), but these appear too long.
  Python has a history of using abbreviations for common builtins, e.g.
  repr(), str(), len().

  Resolution: iter() it is.

- Using the same name for two different operations (getting an
  iterator from an object and making an iterator for a function with a
  sentinel value) is somewhat ugly. I haven't seen a better name for
  the second operation though, and since they both return an iterator,
  it's easy to remember.

  Resolution: the builtin iter() takes an optional argument, which is
  the sentinel to look for.

- Once a particular iterator object has raised StopIteration, will it
  also raise StopIteration on all subsequent next() calls? Some say
  that it would be useful to require this, others say that it is
  useful to leave this open to individual iterators. Note that this
  may require an additional state bit for some iterator
  implementations (e.g. function-wrapping iterators).

  Resolution: once StopIteration is raised, calling it.next()
  continues to raise StopIteration.

  Note: this was in fact not implemented in Python 2.2; there are many
  cases where an iterator's next() method can raise StopIteration on
  one call but not on the next. This has been remedied in Python 2.3.

- It has been proposed that a file object should be its own iterator,
  with a next() method returning the next line. This has certain
  advantages, and makes it even clearer that this iterator is
  destructive. The disadvantage is that this would make it even more
  painful to implement the "sticky StopIteration" feature proposed in
  the previous bullet.

  Resolution: tentatively rejected (though there are still people
  arguing for this).

- Some folks have requested extensions of the iterator protocol, e.g.
  prev() to get the previous item, current() to get the current item
  again, finished() to test whether the iterator is finished, and
  maybe even others, like rewind(), __len__(), position().

  While some of these are useful, many of these cannot easily be
  implemented for all iterator types without adding arbitrary
  buffering, and sometimes they can't be implemented at all (or not
  reasonably). E.g. anything to do with reversing directions can't be
  done when iterating over a file or function. Maybe a separate PEP
  can be drafted to standardize the names for such operations when
  they are implementable.

  Resolution: rejected.

- There has been a long discussion about whether

      for x in dict: ...

  should assign x the successive keys, values, or items of the
  dictionary. The symmetry between if x in y and for x in y suggests
  that it should iterate over keys. This symmetry has been observed by
  many independently and has even been used to "explain" one using the
  other. This is because for sequences, if x in y iterates over y
  comparing the iterated values to x. If we adopt both of the above
  proposals, this will also hold for dictionaries.

  The argument against making for x in dict iterate over the keys
  comes mostly from a practicality point of view: scans of the
  standard library show that there are about as many uses of
  for x in dict.items() as there are of for x in dict.keys(), with the
  items() version having a small majority. Presumably many of the
  loops using keys() use the corresponding value anyway, by writing
  dict[x], so (the argument goes) by making both the key and value
  available, we could support the largest number of cases.
  While this is true, I (Guido) find the correspondence between
  for x in dict and if x in dict too compelling to break, and there's
  not much overhead in having to write dict[x] to explicitly get the
  value.

  For fast iteration over items, use
  for key, value in dict.iteritems(). I've timed the difference between

      for key in dict: dict[key]

  and

      for key, value in dict.iteritems(): pass

  and found that the latter is only about 7% faster.

  Resolution: By BDFL pronouncement, for x in dict iterates over the
  keys, and dictionaries have iteritems(), iterkeys(), and
  itervalues() to return the different flavors of dictionary
  iterators.

Mailing Lists

The iterator protocol has been discussed extensively in a mailing list
on SourceForge:

    http://lists.sourceforge.net/lists/listinfo/python-iterators

Initially, some of the discussion was carried out at Yahoo; archives are
still accessible:

    http://groups.yahoo.com/group/python-iter

Copyright

This document is in the public domain.

PEP: 752
Title: Implicit namespaces for package repositories
Author: Ofek Lev <ofekmeister@gmail.com>
Sponsor: Barry Warsaw <barry@python.org>
PEP-Delegate: Dustin Ingram <di@python.org>
Discussions-To: https://discuss.python.org/t/63192
Status: Draft
Type: Standards Track
Topic: Packaging
Created: 13-Aug-2024
Post-History: 18-Aug-2024, 07-Sep-2024,

Abstract

This PEP specifies a way for organizations to reserve package name
prefixes for future uploads.

    "Namespaces are one honking great idea -- let's do more of those!"
    - PEP 20

Motivation

The current ecosystem lacks a way for projects with many packages to
signal a verified pattern of ownership. Such projects fall into two
categories.

The first category is projects[1] that want complete control over their
namespace. A few examples:

- Major cloud providers like Amazon, Google and Microsoft have a
  common prefix for each feature's corresponding package[2]. For
  example, most of Google's packages are prefixed by google-cloud-
  e.g. google-cloud-compute for using virtual machines.
- OpenTelemetry is an open standard for observability with official
  packages for the core APIs and SDK with contrib packages to collect
  data from various sources. All packages are prefixed by
  opentelemetry- with child prefixes in the form
  opentelemetry-<name>-