Unicode Objects and Codecs
Since the implementation of PEP 393 in Python 3.3, Unicode objects internally use a variety of representations, in order to allow handling the complete range of Unicode characters while staying memory efficient. There are special cases for strings where all code points are below 128, 256, or 65536; otherwise, code points must be below 1114112 (which is the full Unicode range).
:c:type:`Py_UNICODE*` and UTF-8 representations are created on demand and cached in the Unicode object. The :c:type:`Py_UNICODE*` representation is deprecated and inefficient; it should be avoided in performance- or memory-sensitive situations.
Due to the transition from the old APIs to the new ones, Unicode objects can internally be in two states, depending on how they were created:
- "canonical" unicode objects are all objects created by a non-deprecated unicode API. They use the most efficient representation allowed by the implementation.
- "legacy" unicode objects have been created through one of the deprecated APIs (typically :c:func:`PyUnicode_FromUnicode`) and only bear the :c:type:`Py_UNICODE*` representation; you will have to call :c:func:`PyUnicode_READY` on them before calling any other API.
These are the basic Unicode object types used for the Unicode implementation in Python:
The following APIs are really C macros and can be used to do fast checks and to access internal read-only data of Unicode objects:
Unicode Character Properties
Unicode provides many different character properties. The most often needed ones are available through these macros which are mapped to C functions depending on the Python configuration.
These APIs can be used for fast direct character conversions:
These APIs can be used to work with surrogates:
Creating and accessing Unicode strings
To create Unicode objects and access their basic sequence properties, use these APIs:
Deprecated Py_UNICODE APIs
These API functions are deprecated with the implementation of PEP 393. Extension modules can continue using them, as they will not be removed in Python 3.x, but their use now incurs performance and memory penalties.
The current locale encoding can be used to decode text from the operating system.
File System Encoding
To encode and decode file names and other environment strings, :c:data:`Py_FileSystemDefaultEncoding` should be used as the encoding, and "surrogateescape" should be used as the error handler (PEP 383). To encode file names during argument parsing, the "O&" converter should be used, passing :c:func:`PyUnicode_FSConverter` as the conversion function:
To decode file names during argument parsing, the "O&" converter should be used, passing :c:func:`PyUnicode_FSDecoder` as the conversion function:
:c:type:`wchar_t` support for platforms which support it:
Python provides a set of built-in codecs which are written in C for speed. All of these codecs are directly usable via the following functions.
Many of the following APIs take two arguments, encoding and errors, which have the same semantics as in the built-in :func:`str` string object constructor.
Setting encoding to NULL causes the default encoding to be used, which is ASCII. The file system calls should use :c:func:`PyUnicode_FSConverter` for encoding file names. This uses the variable :c:data:`Py_FileSystemDefaultEncoding` internally. This variable should be treated as read-only: on some systems, it will be a pointer to a static string, on others, it will change at run-time (such as when the application invokes setlocale).
Error handling is set by errors which may also be set to NULL meaning to use the default handling defined for the codec. Default error handling for all built-in codecs is "strict" (:exc:`ValueError` is raised).
The codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.
These are the generic codec APIs:
These are the UTF-8 codec APIs:
These are the UTF-32 codec APIs:
These are the UTF-16 codec APIs:
These are the UTF-7 codec APIs:
These are the "Unicode Escape" codec APIs:
These are the "Raw Unicode Escape" codec APIs:
These are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals and only these are accepted by the codecs during encoding.
These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.
Character Map Codecs
This codec is special in that it can be used to implement many different codecs (and this is in fact what was done to obtain most of the standard codecs included in the :mod:`encodings` package). The codec uses mapping to encode and decode characters.
Decoding mappings must map single string characters to single Unicode characters, integers (which are then interpreted as Unicode ordinals) or None (meaning "undefined mapping" and causing an error).
Encoding mappings must map single Unicode characters to single string characters, integers (which are then interpreted as Latin-1 ordinals) or None (meaning "undefined mapping" and causing an error).
The mapping objects provided need only support the __getitem__ mapping interface.
If a character lookup fails with a LookupError, the character is copied as is, meaning that its ordinal value will be interpreted as a Unicode or Latin-1 ordinal, respectively. Because of this, mappings only need to contain those mappings which map characters to different code points.
These are the mapping codec APIs:
The following codec API is special in that it maps Unicode to Unicode.
MBCS codecs for Windows
These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.
Methods & Slots
Methods and Slot Functions
The following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the descriptions) and return Unicode objects or integers as appropriate.
They all return NULL or -1 if an exception occurs.