Commits

Vinay Sajip committed 68ed919 Merge

Merged default.

  • Parent commits e54cbba, 507d66e
  • Branches base_prefix

Files changed (137)

 Modules/config.c
 Modules/ld_so_aix$
 Parser/pgen$
-PCbuild/amd64/
 ^core
 ^python-gdb.py
 ^python.exe-gdb.py
 PC/*/*.user
 PC/*/*.ncb
 PC/*/*.suo
+PC/*/Win32-temp-*
+PC/*/x64-temp-*
+PC/*/amd64
 PCbuild/*.exe
 PCbuild/*.dll
 PCbuild/*.pdb
 PCbuild/*.*sdf
 PCbuild/Win32-temp-*
 PCbuild/x64-temp-*
+PCbuild/amd64
 BuildLog.htm
 __pycache__
 Modules/_testembed

File Doc/c-api/import.rst

 
 .. c:function:: PyObject* PyImport_ImportModuleNoBlock(const char *name)
 
-   This version of :c:func:`PyImport_ImportModule` does not block. It's intended
-   to be used in C functions that import other modules to execute a function.
-   The import may block if another thread holds the import lock. The function
-   :c:func:`PyImport_ImportModuleNoBlock` never blocks. It first tries to fetch
-   the module from sys.modules and falls back to :c:func:`PyImport_ImportModule`
-   unless the lock is held, in which case the function will raise an
-   :exc:`ImportError`.
+   This function is a deprecated alias of :c:func:`PyImport_ImportModule`.
+
+   .. versionchanged:: 3.3
+      This function used to fail immediately when the import lock was held
+      by another thread.  In Python 3.3 though, the locking scheme switched
+      to per-module locks for most purposes, so this function's special
+      behaviour isn't needed anymore.
 
 
 .. c:function:: PyObject* PyImport_ImportModuleEx(char *name, PyObject *globals, PyObject *locals, PyObject *fromlist)

File Doc/library/__future__.rst

 | division         | 2.2.0a2     | 3.0          | :pep:`238`:                                 |
 |                  |             |              | *Changing the Division Operator*            |
 +------------------+-------------+--------------+---------------------------------------------+
-| absolute_import  | 2.5.0a1     | 2.7          | :pep:`328`:                                 |
+| absolute_import  | 2.5.0a1     | 3.0          | :pep:`328`:                                 |
 |                  |             |              | *Imports: Multi-Line and Absolute/Relative* |
 +------------------+-------------+--------------+---------------------------------------------+
 | with_statement   | 2.5.0a1     | 2.6          | :pep:`343`:                                 |

File Doc/library/functions.rst

    Accordingly, :func:`super` is undefined for implicit lookups using statements or
    operators such as ``super()[name]``.
 
-   Also note that :func:`super` is not limited to use inside methods.  The two
-   argument form specifies the arguments exactly and makes the appropriate
-   references.  The zero argument form automatically searches the stack frame
-   for the class (``__class__``) and the first argument.
+   Also note that, aside from the zero argument form, :func:`super` is not
+   limited to use inside methods.  The two argument form specifies the
+   arguments exactly and makes the appropriate references.  The zero
+   argument form only works inside a class definition, as the compiler fills
+   in the necessary details to correctly retrieve the class being defined,
+   as well as accessing the current instance for ordinary methods.
 
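A quick illustration of the two forms described above (a minimal sketch; the class names are made up):

    class Base:
        def greet(self):
            return "hello from Base"

    class Child(Base):
        def greet(self):
            # Zero argument form: only valid inside a class definition, where
            # the compiler supplies __class__ and the first argument.
            return super().greet() + ", extended by Child"

        def greet_explicit(self):
            # Two argument form: the class and instance are given explicitly,
            # so this works anywhere they are available.
            return super(Child, self).greet() + ", extended explicitly"

    print(Child().greet())
    print(Child().greet_explicit())
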
    For practical suggestions on how to design cooperative classes using
    :func:`super`, see `guide to using super()

File Doc/library/functools.rst

    .. versionadded:: 3.2
 
 
-.. decorator:: lru_cache(maxsize=100, typed=False)
+.. decorator:: lru_cache(maxsize=128, typed=False)
 
    Decorator to wrap a function with a memoizing callable that saves up to the
    *maxsize* most recent calls.  It can save time when an expensive or I/O bound

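For context, a small usage sketch of the decorator touched in this hunk; the cached function is chosen purely for illustration:

    from functools import lru_cache

    @lru_cache(maxsize=128)        # explicit maxsize matching the new default
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(30))                 # 832040, computed with cached subcalls
    print(fib.cache_info())        # hits/misses/maxsize=128/currsize
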
File Doc/library/imp.rst

    in ``sys.modules``.
 
 
-.. function:: lock_held()
-
-   Return ``True`` if the import lock is currently held, else ``False``. On
-   platforms without threads, always return ``False``.
-
-   On platforms with threads, a thread executing an import holds an internal lock
-   until the import is complete. This lock blocks other threads from doing an
-   import until the original import completes, which in turn prevents other threads
-   from seeing incomplete module objects constructed by the original thread while
-   in the process of completing its import (and the imports, if any, triggered by
-   that).
-
-
-.. function:: acquire_lock()
-
-   Acquire the interpreter's import lock for the current thread.  This lock should
-   be used by import hooks to ensure thread-safety when importing modules.
-
-   Once a thread has acquired the import lock, the same thread may acquire it
-   again without blocking; the thread must release it once for each time it has
-   acquired it.
-
-   On platforms without threads, this function does nothing.
-
-
-.. function:: release_lock()
-
-   Release the interpreter's import lock. On platforms without threads, this
-   function does nothing.
-
-
 .. function:: reload(module)
 
    Reload a previously imported *module*.  The argument must be a module object, so
    magic number, as returned by :func:`get_magic`.
 
 
+The following functions help interact with the import system's internal
+locking mechanism.  Locking semantics of imports are an implementation
+detail which may vary from release to release.  However, Python ensures
+that circular imports work without any deadlocks.
+
+.. versionchanged:: 3.3
+   In Python 3.3, the locking scheme has changed to per-module locks for
+   the most part.  A global import lock is kept for some critical tasks,
+   such as initializing the per-module locks.
+
+
+.. function:: lock_held()
+
+   Return ``True`` if the global import lock is currently held, else
+   ``False``. On platforms without threads, always return ``False``.
+
+   On platforms with threads, a thread executing an import first holds a
+   global import lock, then sets up a per-module lock for the rest of the
+   import.  This blocks other threads from importing the same module until
+   the original import completes, preventing other threads from seeing
+   incomplete module objects constructed by the original thread.  An
+   exception is made for circular imports, which by construction have to
+   expose an incomplete module object at some point.
+
+.. function:: acquire_lock()
+
+   Acquire the interpreter's global import lock for the current thread.
+   This lock should be used by import hooks to ensure thread-safety when
+   importing modules.
+
+   Once a thread has acquired the import lock, the same thread may acquire it
+   again without blocking; the thread must release it once for each time it has
+   acquired it.
+
+   On platforms without threads, this function does nothing.
+
+
+.. function:: release_lock()
+
+   Release the interpreter's global import lock. On platforms without
+   threads, this function does nothing.
+
+
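A minimal sketch of how an import hook's bookkeeping might be guarded with the lock functions documented above; the helper and its state dict are hypothetical:

    import imp

    def _register_hook_state(state):
        # Hypothetical shared bookkeeping protected by the global import lock.
        imp.acquire_lock()
        try:
            state['registered'] = True
            print("global import lock held:", imp.lock_held())
        finally:
            # Release once per acquire; the lock is reentrant per thread.
            imp.release_lock()

    _register_hook_state({})
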
 The following constants with integer values, defined in this module, are used
 to indicate the search result of :func:`find_module`.
 

File Doc/library/json.rst

    (e.g. :class:`float`).
 
    *parse_constant*, if specified, will be called with one of the following
-   strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``, ``'null'``, ``'true'``,
-   ``'false'``.  This can be used to raise an exception if invalid JSON numbers
+   strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``.
+   This can be used to raise an exception if invalid JSON numbers
    are encountered.
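 
   A short sketch of the behaviour described above; the callback name is
   illustrative:

       import json

       def reject_constant(name):
           # Only ever called with '-Infinity', 'Infinity' or 'NaN'.
           raise ValueError("invalid JSON number: " + name)

       print(json.loads('[1.5, 2]', parse_constant=reject_constant))  # [1.5, 2]

       try:
           json.loads('[NaN]', parse_constant=reject_constant)
       except ValueError as exc:
           print(exc)                          # invalid JSON number: NaN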
 
    To use a custom :class:`JSONDecoder` subclass, specify it with the ``cls``

File Doc/library/multiprocessing.rst

           print(q.get())    # prints "[42, None, 'hello']"
           p.join()
 
-   Queues are thread and process safe, but note that they must never
-   be instantiated as a side effect of importing a module: this can lead
-   to a deadlock!  (see :ref:`threaded-imports`)
+   Queues are thread and process safe.
 
 **Pipes**
 

File Doc/library/textwrap.rst

       expanded to spaces using the :meth:`expandtabs` method of *text*.
 
 
+   .. attribute:: tabsize
+
+      (default: ``8``) If :attr:`expand_tabs` is true, then all tab characters
+      in *text* will be expanded to zero or more spaces, depending on the
+      current column and the given tab size.
+
+      .. versionadded:: 3.3
+
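A minimal sketch of the new attribute in use, assuming the default expand_tabs=True:

    import textwrap

    # tabsize is honoured because expand_tabs defaults to True.
    wrapper = textwrap.TextWrapper(width=40, tabsize=4)
    print(wrapper.fill("\tcol1\tcol2\tend"))
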
+
    .. attribute:: replace_whitespace
 
       (default: ``True``) If true, each whitespace character (as defined by

File Doc/library/threading.rst

 
    Acquire a lock, blocking or non-blocking.
 
-   When invoked without arguments, block until the lock is unlocked, then set it to
-   locked, and return true.
+   When invoked with the *blocking* argument set to ``True`` (the default),
+   block until the lock is unlocked, then set it to locked and return ``True``.
 
-   When invoked with the *blocking* argument set to true, do the same thing as when
-   called without arguments, and return true.
-
-   When invoked with the *blocking* argument set to false, do not block.  If a call
-   without an argument would block, return false immediately; otherwise, do the
-   same thing as when called without arguments, and return true.
+   When invoked with the *blocking* argument set to ``False``, do not block.
+   If a call with *blocking* set to ``True`` would block, return ``False``
+   immediately; otherwise, set the lock to locked and return ``True``.
 
    When invoked with the floating-point *timeout* argument set to a positive
    value, block for at most the number of seconds specified by *timeout*
 Currently, :class:`Lock`, :class:`RLock`, :class:`Condition`,
 :class:`Semaphore`, and :class:`BoundedSemaphore` objects may be used as
 :keyword:`with` statement context managers.
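 
A small sketch of the acquire() semantics described in this hunk, using a Lock as a context manager:

    import threading

    lock = threading.Lock()

    with lock:                            # acquire() on entry, release() on exit
        # A second acquire with blocking=False cannot succeed while the lock
        # is held, so it returns False immediately instead of blocking.
        print(lock.acquire(blocking=False))    # False

    # The lock was released when the with block exited.
    print(lock.acquire(timeout=0.5))           # True
    lock.release()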
-
-
-.. _threaded-imports:
-
-Importing in threaded code
---------------------------
-
-While the import machinery is thread-safe, there are two key restrictions on
-threaded imports due to inherent limitations in the way that thread-safety is
-provided:
-
-* Firstly, other than in the main module, an import should not have the
-  side effect of spawning a new thread and then waiting for that thread in
-  any way. Failing to abide by this restriction can lead to a deadlock if
-  the spawned thread directly or indirectly attempts to import a module.
-* Secondly, all import attempts must be completed before the interpreter
-  starts shutting itself down. This can be most easily achieved by only
-  performing imports from non-daemon threads created through the threading
-  module. Daemon threads and threads created directly with the thread
-  module will require some other form of synchronization to ensure they do
-  not attempt imports after system shutdown has commenced. Failure to
-  abide by this restriction will lead to intermittent exceptions and
-  crashes during interpreter shutdown (as the late imports attempt to
-  access machinery which is no longer in a valid state).

File Doc/library/time.rst

   the units in which their value or argument is expressed. E.g. on most Unix
   systems, the clock "ticks" only 50 or 100 times a second.
 
-* On the other hand, the precision of :func:`time` and :func:`sleep` is better
+* On the other hand, the precision of :func:`.time` and :func:`sleep` is better
   than their Unix equivalents: times are expressed as floating point numbers,
-  :func:`time` returns the most accurate time available (using Unix
+  :func:`.time` returns the most accurate time available (using Unix
   :c:func:`gettimeofday` where available), and :func:`sleep` will accept a time
   with a nonzero fraction (Unix :c:func:`select` is used to implement this, where
   available).
 
    Convert a time expressed in seconds since the epoch to a string representing
    local time. If *secs* is not provided or :const:`None`, the current time as
-   returned by :func:`time` is used.  ``ctime(secs)`` is equivalent to
+   returned by :func:`.time` is used.  ``ctime(secs)`` is equivalent to
    ``asctime(localtime(secs))``. Locale information is not used by :func:`ctime`.
 
 
 
    Convert a time expressed in seconds since the epoch to a :class:`struct_time` in
    UTC in which the dst flag is always zero.  If *secs* is not provided or
-   :const:`None`, the current time as returned by :func:`time` is used.  Fractions
+   :const:`None`, the current time as returned by :func:`.time` is used.  Fractions
    of a second are ignored.  See above for a description of the
    :class:`struct_time` object. See :func:`calendar.timegm` for the inverse of this
    function.
 .. function:: localtime([secs])
 
    Like :func:`gmtime` but converts to local time.  If *secs* is not provided or
-   :const:`None`, the current time as returned by :func:`time` is used.  The dst
+   :const:`None`, the current time as returned by :func:`.time` is used.  The dst
    flag is set to ``1`` when DST applies to the given time.
 
 
    This is the inverse function of :func:`localtime`.  Its argument is the
    :class:`struct_time` or full 9-tuple (since the dst flag is needed; use ``-1``
    as the dst flag if it is unknown) which expresses the time in *local* time, not
-   UTC.  It returns a floating point number, for compatibility with :func:`time`.
+   UTC.  It returns a floating point number, for compatibility with :func:`.time`.
    If the input value cannot be represented as a valid time, either
    :exc:`OverflowError` or :exc:`ValueError` will be raised (which depends on
    whether the invalid value is caught by Python or the underlying C libraries).

File Doc/library/types.rst

-:mod:`types` --- Names for built-in types
-=========================================
+:mod:`types` --- Dynamic type creation and names for built-in types
+===================================================================
 
 .. module:: types
    :synopsis: Names for built-in types.
 
 --------------
 
-This module defines names for some object types that are used by the standard
+This module defines utility functions to assist in the dynamic creation of
+new types.
+
+It also defines names for some object types that are used by the standard
 Python interpreter, but not exposed as builtins like :class:`int` or
-:class:`str` are.  Also, it does not include some of the types that arise
-transparently during processing such as the ``listiterator`` type.
+:class:`str` are.
 
-Typical use is for :func:`isinstance` or :func:`issubclass` checks.
 
-The module defines the following names:
+Dynamic Type Creation
+---------------------
+
+.. function:: new_class(name, bases=(), kwds=None, exec_body=None)
+
+   Creates a class object dynamically using the appropriate metaclass.
+
+   The arguments are the components that make up a class definition: the
+   class name, the base classes (in order), the keyword arguments (such as
+   ``metaclass``) and the callback function to populate the class namespace.
+
+   The *exec_body* callback should accept the class namespace as its sole
+   argument and update the namespace directly with the class contents.
+
+.. function:: prepare_class(name, bases=(), kwds=None)
+
+   Calculates the appropriate metaclass and creates the class namespace.
+
+   The arguments are the components that make up a class definition: the
+   class name, the base classes (in order) and the keyword arguments (such as
+   ``metaclass``).
+
+   The return value is a 3-tuple: ``metaclass, namespace, kwds``
+
+   *metaclass* is the appropriate metaclass, *namespace* is the prepared
+   class namespace, and *kwds* is an updated copy of the passed in *kwds*
+   argument with any ``'metaclass'`` entry removed.  If no *kwds* argument is
+   passed in, this will be an empty dict.
+
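A minimal usage sketch of the two functions documented above (the class names and the body callback are illustrative):

    import types

    def body(ns):
        # exec_body callback: populate the class namespace directly.
        ns['answer'] = 42

    C = types.new_class("C", (object,), exec_body=body)
    print(C.__name__, C.answer)            # C 42

    meta, ns, kwds = types.prepare_class("D", (C,))
    print(meta is type, ns, kwds)          # True {} {}
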
+
+.. seealso::
+
+   :pep:`3115` - Metaclasses in Python 3000
+      Introduced the ``__prepare__`` namespace hook
+
+
+Standard Interpreter Types
+--------------------------
+
+This module provides names for many of the types that are required to
+implement a Python interpreter. It deliberately avoids including some of
+the types that arise only incidentally during processing such as the
+``listiterator`` type.
+
+Typical use of these names is for :func:`isinstance` or
+:func:`issubclass` checks.
+
+Standard names are defined for the following types:
 
 .. data:: FunctionType
           LambdaType
 
-   The type of user-defined functions and functions created by :keyword:`lambda`
-   expressions.
+   The type of user-defined functions and functions created by
+   :keyword:`lambda` expressions.
 
 
 .. data:: GeneratorType

File Doc/library/urllib.request.rst

 The :mod:`urllib.request` module defines the following functions:
 
 
-.. function:: urlopen(url, data=None[, timeout], *, cafile=None, capath=None)
+.. function:: urlopen(url, data=None[, timeout], *, cafile=None, capath=None, cadefault=False)
 
    Open the URL *url*, which can be either a string or a
    :class:`Request` object.
    point to a directory of hashed certificate files.  More information can
    be found in :meth:`ssl.SSLContext.load_verify_locations`.
 
+   The *cadefault* parameter specifies whether to fall back to loading a
+   default certificate store defined by the underlying OpenSSL library if the
+   *cafile* and *capath* parameters are omitted.  This will only work on
+   some non-Windows platforms.
+
    .. warning::
-      If neither *cafile* nor *capath* is specified, an HTTPS request
-      will not do any verification of the server's certificate.
+      If neither *cafile* nor *capath* is specified, and *cadefault* is False,
+      an HTTPS request will not do any verification of the server's
+      certificate.
 
    This function returns a file-like object that works as a :term:`context manager`,
    with two additional methods from the :mod:`urllib.response` module
    .. versionadded:: 3.2
       *data* can be an iterable object.
 
+   .. versionchanged:: 3.3
+      *cadefault* was added.
+
 .. function:: install_opener(opener)
 
    Install an :class:`OpenerDirector` instance as the default global opener.

File Doc/reference/datamodel.rst

 Customizing class creation
 --------------------------
 
-By default, classes are constructed using :func:`type`. A class definition is
-read into a separate namespace and the value of class name is bound to the
-result of ``type(name, bases, dict)``.
-
-When the class definition is read, if a callable ``metaclass`` keyword argument
-is passed after the bases in the class definition, the callable given will be
-called instead of :func:`type`.  If other keyword arguments are passed, they
-will also be passed to the metaclass.  This allows classes or functions to be
-written which monitor or alter the class creation process:
-
-* Modifying the class dictionary prior to the class being created.
-
-* Returning an instance of another class -- essentially performing the role of a
-  factory function.
-
-These steps will have to be performed in the metaclass's :meth:`__new__` method
--- :meth:`type.__new__` can then be called from this method to create a class
-with different properties.  This example adds a new element to the class
-dictionary before creating the class::
-
-  class metacls(type):
-      def __new__(mcs, name, bases, dict):
-          dict['foo'] = 'metacls was here'
-          return type.__new__(mcs, name, bases, dict)
-
-You can of course also override other class methods (or add new methods); for
-example defining a custom :meth:`__call__` method in the metaclass allows custom
-behavior when the class is called, e.g. not always creating a new instance.
-
-If the metaclass has a :meth:`__prepare__` attribute (usually implemented as a
-class or static method), it is called before the class body is evaluated with
-the name of the class and a tuple of its bases for arguments.  It should return
-an object that supports the mapping interface that will be used to store the
-namespace of the class.  The default is a plain dictionary.  This could be used,
-for example, to keep track of the order that class attributes are declared in by
-returning an ordered dictionary.
-
-The appropriate metaclass is determined by the following precedence rules:
-
-* If the ``metaclass`` keyword argument is passed with the bases, it is used.
-
-* Otherwise, if there is at least one base class, its metaclass is used.
-
-* Otherwise, the default metaclass (:class:`type`) is used.
+By default, classes are constructed using :func:`type`. The class body is
+executed in a new namespace and the class name is bound locally to the
+result of ``type(name, bases, namespace)``.
+
+The class creation process can be customised by passing the ``metaclass``
+keyword argument in the class definition line, or by inheriting from an
+existing class that included such an argument. In the following example,
+both ``MyClass`` and ``MySubclass`` are instances of ``Meta``::
+
+   class Meta(type):
+       pass
+
+   class MyClass(metaclass=Meta):
+       pass
+
+   class MySubclass(MyClass):
+       pass
+
+Any other keyword arguments that are specified in the class definition are
+passed through to all metaclass operations described below.
+
+When a class definition is executed, the following steps occur:
+
+* the appropriate metaclass is determined
+* the class namespace is prepared
+* the class body is executed
+* the class object is created
+
+Determining the appropriate metaclass
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The appropriate metaclass for a class definition is determined as follows:
+
+* if no bases and no explicit metaclass are given, then :func:`type` is used
+* if an explicit metaclass is given and it is *not* an instance of
+  :func:`type`, then it is used directly as the metaclass
+* if an instance of :func:`type` is given as the explicit metaclass, or
+  bases are defined, then the most derived metaclass is used
+
+The most derived metaclass is selected from the explicitly specified
+metaclass (if any) and the metaclasses (i.e. ``type(cls)``) of all specified
+base classes. The most derived metaclass is one which is a subtype of *all*
+of these candidate metaclasses. If none of the candidate metaclasses meets
+that criterion, then the class definition will fail with ``TypeError``.
+
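A small illustration of these rules (all class names are made up): compatible metaclasses resolve to the most derived candidate, while unrelated candidates make the definition fail with TypeError:

    class MetaA(type):
        pass

    class MetaB(MetaA):          # MetaB derives from MetaA
        pass

    class MetaC(type):           # unrelated to MetaA/MetaB
        pass

    class A(metaclass=MetaA):
        pass

    class B(metaclass=MetaB):
        pass

    class AB(A, B):              # most derived candidate is MetaB
        pass

    print(type(AB))              # <class '__main__.MetaB'>

    try:
        class Broken(A, metaclass=MetaC):
            pass
    except TypeError as exc:
        print("class definition failed:", exc)
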
+
+Preparing the class namespace
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Once the appropriate metaclass has been identified, then the class namespace
+is prepared. If the metaclass has a ``__prepare__`` attribute, it is called
+as ``namespace = metaclass.__prepare__(name, bases, **kwds)`` (where the
+additional keyword arguments, if any, come from the class definition).
+
+If the metaclass has no ``__prepare__`` attribute, then the class namespace
+is initialised as an empty :func:`dict` instance.
+
+.. seealso::
+
+   :pep:`3115` - Metaclasses in Python 3000
+      Introduced the ``__prepare__`` namespace hook
+
+
+Executing the class body
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The class body is executed (approximately) as
+``exec(body, globals(), namespace)``. The key difference from a normal
+call to :func:`exec` is that lexical scoping allows the class body (including
+any methods) to reference names from the current and outer scopes when the
+class definition occurs inside a function.
+
+However, even when the class definition occurs inside the function, methods
+defined inside the class still cannot see names defined at the class scope.
+Class variables must be accessed through the first parameter of instance or
+class methods, and cannot be accessed at all from static methods.
+
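A minimal sketch of the scoping behaviour described above (class and attribute names are illustrative):

    class Demo:
        scale = 10                        # lives in the class namespace

        def scaled(self, value):
            # 'scale' is not visible as a bare name here; reach it through
            # the instance (self.scale) or the class (Demo.scale).
            return value * self.scale

    print(Demo().scaled(3))               # 30
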
+
+Creating the class object
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Once the class namespace has been populated by executing the class body,
+the class object is created by calling
+``metaclass(name, bases, namespace, **kwds)`` (the additional keywords
+passed here are the same as those passed to ``__prepare__``).
+
+This class object is the one that will be referenced by the zero-argument
+form of :func:`super`. ``__class__`` is an implicit closure reference
+created by the compiler if any methods in a class body refer to either
+``__class__`` or ``super``. This allows the zero argument form of
+:func:`super` to correctly identify the class being defined based on
+lexical scoping, while the class or instance that was used to make the
+current call is identified based on the first argument passed to the method.
+
+After the class object is created, any class decorators included in the
+class definition are invoked and the resulting object is bound in the
+local namespace to the name of the class.
+
+.. seealso::
+
+   :pep:`3135` - New super
+      Describes the implicit ``__class__`` closure reference
+
+
+Metaclass example
+^^^^^^^^^^^^^^^^^
 
 The potential uses for metaclasses are boundless. Some ideas that have been
-explored including logging, interface checking, automatic delegation, automatic
+explored include logging, interface checking, automatic delegation, automatic
 property creation, proxies, frameworks, and automatic resource
 locking/synchronization.
 
          def __prepare__(metacls, name, bases, **kwds):
             return collections.OrderedDict()
 
-         def __new__(cls, name, bases, classdict):
-            result = type.__new__(cls, name, bases, dict(classdict))
-            result.members = tuple(classdict)
+         def __new__(cls, name, bases, namespace, **kwds):
+            result = type.__new__(cls, name, bases, dict(namespace))
+            result.members = tuple(namespace)
             return result
 
     class A(metaclass=OrderedClass):

File Doc/whatsnew/3.3.rst

 .. XXX mention new error messages for passing wrong number of arguments to functions
 
 
+A Finer-Grained Import Lock
+===========================
+
+Previous versions of CPython have always relied on a global import lock.
+This led to unexpected annoyances, such as deadlocks when importing a module
+would trigger code execution in a different thread as a side-effect.
+Clumsy workarounds were sometimes employed, such as the
+:c:func:`PyImport_ImportModuleNoBlock` C API function.
+
+In Python 3.3, importing a module takes a per-module lock.  This correctly
+serializes importation of a given module from multiple threads (preventing
+the exposure of incompletely initialized modules), while eliminating the
+aforementioned annoyances.
+
+(Contributed by Antoine Pitrou in :issue:`9260`.)
+
+
 New and Improved Modules
 ========================
 
 (:issue:`14386`)
 
 
+The new functions :func:`types.new_class` and :func:`types.prepare_class`
+provide support for :pep:`3115` compliant dynamic type creation.
+(:issue:`14588`)
+
+
 urllib
 ------
 
   * repeating a single ASCII letter and getting a substring of a ASCII strings
     is 4 times faster
 
+* UTF-8 and UTF-16 decoding is now 2x to 4x faster.
+
+  (Contributed by Serhiy Storchaka, :issue:`14624` and :issue:`14738`.)
+
 
 Build and C API Changes
 =======================

File Lib/__future__.py

                     CO_FUTURE_DIVISION)
 
 absolute_import = _Feature((2, 5, 0, "alpha", 1),
-                           (2, 7, 0, "alpha", 0),
+                           (3, 0, 0, "alpha", 0),
                            CO_FUTURE_ABSOLUTE_IMPORT)
 
 with_statement = _Feature((2, 5, 0, "alpha", 1),

File Lib/functools.py

     def __hash__(self):
         return self.hashvalue
 
-def lru_cache(maxsize=100, typed=False):
+def lru_cache(maxsize=128, typed=False):
     """Least-recently-used cache decorator.
 
     If *maxsize* is set to None, the LRU features are disabled and the cache

File Lib/http/client.py

 
         self.putrequest(method, url, **skips)
 
-        if body and ('content-length' not in header_names):
+        if body is not None and ('content-length' not in header_names):
             self._set_content_length(body)
         for hdr, value in headers.items():
             self.putheader(hdr, value)

File Lib/http/cookies.py

     from time import gmtime, time
     now = time()
     year, month, day, hh, mm, ss, wd, y, z = gmtime(now + future)
-    return "%s, %02d-%3s-%4d %02d:%02d:%02d GMT" % \
+    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % \
            (weekdayname[wd], day, monthname[month], year, hh, mm, ss)
 
 

File Lib/importlib/_bootstrap.py

     return type(_io)(name)
 
 
+# Module-level locking ########################################################
+
+# A dict mapping module names to weakrefs of _ModuleLock instances
+_module_locks = {}
+# A dict mapping thread ids to _ModuleLock instances
+_blocking_on = {}
+
+
+class _DeadlockError(RuntimeError):
+    pass
+
+
+class _ModuleLock:
+    """A recursive lock implementation which is able to detect deadlocks
+    (e.g. thread 1 trying to take locks A then B, and thread 2 trying to
+    take locks B then A).
+    """
+
+    def __init__(self, name):
+        self.lock = _thread.allocate_lock()
+        self.wakeup = _thread.allocate_lock()
+        self.name = name
+        self.owner = None
+        self.count = 0
+        self.waiters = 0
+
+    def has_deadlock(self):
+        # Deadlock avoidance for concurrent circular imports.
+        me = _thread.get_ident()
+        tid = self.owner
+        while True:
+            lock = _blocking_on.get(tid)
+            if lock is None:
+                return False
+            tid = lock.owner
+            if tid == me:
+                return True
+
+    def acquire(self):
+        """
+        Acquire the module lock.  If a potential deadlock is detected,
+        a _DeadlockError is raised.
+        Otherwise, the lock is always acquired and True is returned.
+        """
+        tid = _thread.get_ident()
+        _blocking_on[tid] = self
+        try:
+            while True:
+                with self.lock:
+                    if self.count == 0 or self.owner == tid:
+                        self.owner = tid
+                        self.count += 1
+                        return True
+                    if self.has_deadlock():
+                        raise _DeadlockError("deadlock detected by %r" % self)
+                    if self.wakeup.acquire(False):
+                        self.waiters += 1
+                # Wait for a release() call
+                self.wakeup.acquire()
+                self.wakeup.release()
+        finally:
+            del _blocking_on[tid]
+
+    def release(self):
+        tid = _thread.get_ident()
+        with self.lock:
+            if self.owner != tid:
+                raise RuntimeError("cannot release un-acquired lock")
+            assert self.count > 0
+            self.count -= 1
+            if self.count == 0:
+                self.owner = None
+                if self.waiters:
+                    self.waiters -= 1
+                    self.wakeup.release()
+
+    def __repr__(self):
+        return "_ModuleLock(%r) at %d" % (self.name, id(self))
+
+
+class _DummyModuleLock:
+    """A simple _ModuleLock equivalent for Python builds without
+    multi-threading support."""
+
+    def __init__(self, name):
+        self.name = name
+        self.count = 0
+
+    def acquire(self):
+        self.count += 1
+        return True
+
+    def release(self):
+        if self.count == 0:
+            raise RuntimeError("cannot release un-acquired lock")
+        self.count -= 1
+
+    def __repr__(self):
+        return "_DummyModuleLock(%r) at %d" % (self.name, id(self))
+
+
+# The following two functions are for consumption by Python/import.c.
+
+def _get_module_lock(name):
+    """Get or create the module lock for a given module name.
+
+    Should only be called with the import lock taken."""
+    lock = None
+    if name in _module_locks:
+        lock = _module_locks[name]()
+    if lock is None:
+        if _thread is None:
+            lock = _DummyModuleLock(name)
+        else:
+            lock = _ModuleLock(name)
+        def cb(_):
+            del _module_locks[name]
+        _module_locks[name] = _weakref.ref(lock, cb)
+    return lock
+
+def _lock_unlock_module(name):
+    """Release the global import lock, and acquires then release the
+    module lock for a given module name.
+    This is used to ensure a module is completely initialized, in the
+    event it is being imported by another thread.
+
+    Should only be called with the import lock taken."""
+    lock = _get_module_lock(name)
+    _imp.release_lock()
+    try:
+        lock.acquire()
+    except _DeadlockError:
+        # Concurrent circular import, we'll accept a partially initialized
+        # module object.
+        pass
+    else:
+        lock.release()
+
+
 # Finder/loader utility code ##################################################
 
 _PYCACHE = '__pycache__'
                 else:
                     module.__package__ = fullname.rpartition('.')[0]
         try:
+            module.__initializing__ = True
             # If __package__ was not set above, __import__() will do it later.
             return fxn(self, module, *args, **kwargs)
         except:
             if not is_reload:
                 del sys.modules[fullname]
             raise
+        finally:
+            module.__initializing__ = False
     _wrap(module_for_loader_wrapper, fxn)
     return module_for_loader_wrapper
 
     if not sys.meta_path:
         _warnings.warn('sys.meta_path is empty', ImportWarning)
     for finder in sys.meta_path:
-        loader = finder.find_module(name, path)
+        with _ImportLockContext():
+            loader = finder.find_module(name, path)
         if loader is not None:
             # The parent import may have already imported this module.
             if name not in sys.modules:
 
 _ERR_MSG = 'No module named {!r}'
 
-def _find_and_load(name, import_):
-    """Find and load the module."""
+def _find_and_load_unlocked(name, import_):
     path = None
     parent = name.rpartition('.')[0]
     if parent:
     return module
 
 
+def _find_and_load(name, import_):
+    """Find and load the module, and release the import lock."""
+    try:
+        lock = _get_module_lock(name)
+    finally:
+        _imp.release_lock()
+    lock.acquire()
+    try:
+        return _find_and_load_unlocked(name, import_)
+    finally:
+        lock.release()
+
+
 def _gcd_import(name, package=None, level=0):
     """Import and return the module based on its name, the package the call is
     being made from, and the level adjustment.
     _sanity_check(name, package, level)
     if level > 0:
         name = _resolve_name(name, package, level)
-    with _ImportLockContext():
-        try:
-            module = sys.modules[name]
-            if module is None:
-                message = ("import of {} halted; "
-                            "None in sys.modules".format(name))
-                raise ImportError(message, name=name)
-            return module
-        except KeyError:
-            pass  # Don't want to chain the exception
+    _imp.acquire_lock()
+    if name not in sys.modules:
         return _find_and_load(name, _gcd_import)
+    module = sys.modules[name]
+    if module is None:
+        _imp.release_lock()
+        message = ("import of {} halted; "
+                    "None in sys.modules".format(name))
+        raise ImportError(message, name=name)
+    _lock_unlock_module(name)
+    return module
 
 
 def _handle_fromlist(module, fromlist, import_):
                 continue
     else:
         raise ImportError('importlib requires posix or nt')
+
+    try:
+        thread_module = BuiltinImporter.load_module('_thread')
+    except ImportError:
+        # Python was built without threads
+        thread_module = None
+    weakref_module = BuiltinImporter.load_module('_weakref')
+
     setattr(self_module, '_os', os_module)
+    setattr(self_module, '_thread', thread_module)
+    setattr(self_module, '_weakref', weakref_module)
     setattr(self_module, 'path_sep', path_sep)
     setattr(self_module, 'path_separators', set(path_separators))
     # Constants

File Lib/importlib/test/test_locks.py

+from importlib import _bootstrap
+import time
+import unittest
+import weakref
+
+from test import support
+
+try:
+    import threading
+except ImportError:
+    threading = None
+else:
+    from test import lock_tests
+
+
+LockType = _bootstrap._ModuleLock
+DeadlockError = _bootstrap._DeadlockError
+
+
+if threading is not None:
+    class ModuleLockAsRLockTests(lock_tests.RLockTests):
+        locktype = staticmethod(lambda: LockType("some_lock"))
+
+        # _is_owned() unsupported
+        test__is_owned = None
+        # acquire(blocking=False) unsupported
+        test_try_acquire = None
+        test_try_acquire_contended = None
+        # `with` unsupported
+        test_with = None
+        # acquire(timeout=...) unsupported
+        test_timeout = None
+        # _release_save() unsupported
+        test_release_save_unacquired = None
+
+else:
+    class ModuleLockAsRLockTests(unittest.TestCase):
+        pass
+
+
+@unittest.skipUnless(threading, "threads needed for this test")
+class DeadlockAvoidanceTests(unittest.TestCase):
+
+    def run_deadlock_avoidance_test(self, create_deadlock):
+        NLOCKS = 10
+        locks = [LockType(str(i)) for i in range(NLOCKS)]
+        pairs = [(locks[i], locks[(i+1)%NLOCKS]) for i in range(NLOCKS)]
+        if create_deadlock:
+            NTHREADS = NLOCKS
+        else:
+            NTHREADS = NLOCKS - 1
+        barrier = threading.Barrier(NTHREADS)
+        results = []
+        def _acquire(lock):
+            """Try to acquire the lock. Return True on success, False on deadlock."""
+            try:
+                lock.acquire()
+            except DeadlockError:
+                return False
+            else:
+                return True
+        def f():
+            a, b = pairs.pop()
+            ra = _acquire(a)
+            barrier.wait()
+            rb = _acquire(b)
+            results.append((ra, rb))
+            if rb:
+                b.release()
+            if ra:
+                a.release()
+        lock_tests.Bunch(f, NTHREADS).wait_for_finished()
+        self.assertEqual(len(results), NTHREADS)
+        return results
+
+    def test_deadlock(self):
+        results = self.run_deadlock_avoidance_test(True)
+        # One of the threads detected a potential deadlock on its second
+        # acquire() call.
+        self.assertEqual(results.count((True, False)), 1)
+        self.assertEqual(results.count((True, True)), len(results) - 1)
+
+    def test_no_deadlock(self):
+        results = self.run_deadlock_avoidance_test(False)
+        self.assertEqual(results.count((True, False)), 0)
+        self.assertEqual(results.count((True, True)), len(results))
+
+
+class LifetimeTests(unittest.TestCase):
+
+    def test_lock_lifetime(self):
+        name = "xyzzy"
+        self.assertNotIn(name, _bootstrap._module_locks)
+        lock = _bootstrap._get_module_lock(name)
+        self.assertIn(name, _bootstrap._module_locks)
+        wr = weakref.ref(lock)
+        del lock
+        support.gc_collect()
+        self.assertNotIn(name, _bootstrap._module_locks)
+        self.assertIs(wr(), None)
+
+    def test_all_locks(self):
+        support.gc_collect()
+        self.assertEqual(0, len(_bootstrap._module_locks))
+
+
+@support.reap_threads
+def test_main():
+    support.run_unittest(ModuleLockAsRLockTests,
+                         DeadlockAvoidanceTests,
+                         LifetimeTests)
+
+
+if __name__ == '__main__':
+    test_main()

File Lib/multiprocessing/forking.py

             return [sys.executable, '--multiprocessing-fork']
         else:
             prog = 'from multiprocessing.forking import main; main()'
-            return [_python_exe, '-c', prog, '--multiprocessing-fork']
+            opts = util._args_from_interpreter_flags()
+            return [_python_exe] + opts + ['-c', prog, '--multiprocessing-fork']
 
 
     def main():

File Lib/multiprocessing/util.py

 # Licensed to PSF under a Contributor Agreement.
 #
 
+import sys
 import functools
 import itertools
 import weakref
 import atexit
 import threading        # we want threading to install its
                         # cleanup function before multiprocessing does
+from subprocess import _args_from_interpreter_flags
 
 from multiprocessing.process import current_process, active_children
 

File Lib/pyclbr.py

         parent = _readmodule(package, path, inpackage)
         if inpackage is not None:
             package = "%s.%s" % (inpackage, package)
+        if '__path__' not in parent:
+            raise ImportError('No package named {}'.format(package))
         return _readmodule(submodule, parent['__path__'], package)
 
     # Search the path for the module

File Lib/pydoc.py

     if name in {'__builtins__', '__doc__', '__file__', '__path__',
                      '__module__', '__name__', '__slots__', '__package__',
                      '__cached__', '__author__', '__credits__', '__date__',
-                     '__version__', '__qualname__'}:
+                     '__version__', '__qualname__', '__initializing__'}:
         return 0
     # Private names are hidden, but special names are displayed.
     if name.startswith('__') and name.endswith('__'): return 1

File Lib/subprocess.py

             continue
 
 
+# XXX This function is only used by multiprocessing and the test suite,
+# but it's here so that it can be imported when Python is compiled without
+# threads.
+
+def _args_from_interpreter_flags():
+    """Return a list of command-line arguments reproducing the current
+    settings in sys.flags and sys.warnoptions."""
+    flag_opt_map = {
+        'debug': 'd',
+        # 'inspect': 'i',
+        # 'interactive': 'i',
+        'optimize': 'O',
+        'dont_write_bytecode': 'B',
+        'no_user_site': 's',
+        'no_site': 'S',
+        'ignore_environment': 'E',
+        'verbose': 'v',
+        'bytes_warning': 'b',
+        'quiet': 'q',
+        'hash_randomization': 'R',
+    }
+    args = []
+    for flag, opt in flag_opt_map.items():
+        v = getattr(sys.flags, flag)
+        if v > 0:
+            args.append('-' + opt * v)
+    for opt in sys.warnoptions:
+        args.append('-W' + opt)
+    return args
+
+
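For context, a quick sketch of what the helper added above returns; the exact output depends on how the interpreter was started:

    import subprocess
    import sys

    # Under default settings this is just []; for an interpreter started with
    # e.g. `python -O -B -Wd` it would be something like ['-O', '-B', '-Wd'].
    opts = subprocess._args_from_interpreter_flags()
    print(opts)
    print([sys.executable] + opts + ['-c', 'pass'])
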
 def call(*popenargs, timeout=None, **kwargs):
     """Run command with arguments.  Wait for command to complete or
     timeout, then return the returncode attribute.

File Lib/tarfile.py

        the high bit set. So we calculate two checksums, unsigned and
        signed.
     """
-    unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512]))
-    signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512]))
+    unsigned_chksum = 256 + sum(struct.unpack_from("148B8x356B", buf))
+    signed_chksum = 256 + sum(struct.unpack_from("148b8x356b", buf))
     return unsigned_chksum, signed_chksum
 
 def copyfileobj(src, dst, length=None):

File Lib/test/lock_tests.py

         # Cannot release an unacquired lock
         lock = self.locktype()
         self.assertRaises(RuntimeError, lock.release)
-        self.assertRaises(RuntimeError, lock._release_save)
         lock.acquire()
         lock.acquire()
         lock.release()
         lock.release()
         lock.release()
         self.assertRaises(RuntimeError, lock.release)
+
+    def test_release_save_unacquired(self):
+        # Cannot _release_save an unacquired lock
+        lock = self.locktype()
+        self.assertRaises(RuntimeError, lock._release_save)
+        lock.acquire()
+        lock.acquire()
+        lock.release()
+        lock.acquire()
+        lock.release()
+        lock.release()
         self.assertRaises(RuntimeError, lock._release_save)
 
     def test_different_thread(self):

File Lib/test/support.py

 def args_from_interpreter_flags():
     """Return a list of command-line arguments reproducing the current
     settings in sys.flags and sys.warnoptions."""
-    flag_opt_map = {
-        'bytes_warning': 'b',
-        'dont_write_bytecode': 'B',
-        'hash_randomization': 'R',
-        'ignore_environment': 'E',
-        'no_user_site': 's',
-        'no_site': 'S',
-        'optimize': 'O',
-        'verbose': 'v',
-    }
-    args = []
-    for flag, opt in flag_opt_map.items():
-        v = getattr(sys.flags, flag)
-        if v > 0:
-            args.append('-' + opt * v)
-    for opt in sys.warnoptions:
-        args.append('-W' + opt)
-    return args
+    return subprocess._args_from_interpreter_flags()
 
 #============================================================
 # Support for assertions about logging.

File Lib/test/test_bisect.py

 import bisect as c_bisect
 
 
+class Range(object):
+    """A trivial range()-like object without any integer width limitations."""
+    def __init__(self, start, stop):
+        self.start = start
+        self.stop = stop
+        self.last_insert = None
+
+    def __len__(self):
+        return self.stop - self.start
+
+    def __getitem__(self, idx):
+        n = self.stop - self.start
+        if idx < 0:
+            idx += n
+        if idx >= n:
+            raise IndexError(idx)
+        return self.start + idx
+
+    def insert(self, idx, item):
+        self.last_insert = idx, item
+
+
 class TestBisect(unittest.TestCase):
     module = None
 
     def test_large_range(self):
         # Issue 13496
         mod = self.module
-        data = range(sys.maxsize-1)
-        self.assertEqual(mod.bisect_left(data, sys.maxsize-3), sys.maxsize-3)
-        self.assertEqual(mod.bisect_right(data, sys.maxsize-3), sys.maxsize-2)
+        n = sys.maxsize
+        data = range(n-1)
+        self.assertEqual(mod.bisect_left(data, n-3), n-3)
+        self.assertEqual(mod.bisect_right(data, n-3), n-2)
+        self.assertEqual(mod.bisect_left(data, n-3, n-10, n), n-3)
+        self.assertEqual(mod.bisect_right(data, n-3, n-10, n), n-2)
+
+    def test_large_pyrange(self):
+        # Same as above, but without C-imposed limits on range() parameters
+        mod = self.module
+        n = sys.maxsize
+        data = Range(0, n-1)
+        self.assertEqual(mod.bisect_left(data, n-3), n-3)
+        self.assertEqual(mod.bisect_right(data, n-3), n-2)
+        self.assertEqual(mod.bisect_left(data, n-3, n-10, n), n-3)
+        self.assertEqual(mod.bisect_right(data, n-3, n-10, n), n-2)
+        x = n - 100
+        mod.insort_left(data, x, x - 50, x + 50)
+        self.assertEqual(data.last_insert, (x, x))
+        x = n - 200
+        mod.insort_right(data, x, x - 50, x + 50)
+        self.assertEqual(data.last_insert, (x + 1, x))
 
     def test_random(self, n=25):
         from random import randrange

File Lib/test/test_buffer.py

 class TestBufferProtocol(unittest.TestCase):
 
     def setUp(self):
-        self.sizeof_void_p = get_config_var('SIZEOF_VOID_P') \
-                                if sys.platform != 'darwin' else None
-        if not self.sizeof_void_p:
-            self.sizeof_void_p = 8 if sys.maxsize > 2**32 else 4
+        # The suboffsets tests need sizeof(void *).
+        self.sizeof_void_p = get_sizeof_void_p()
 
     def verify(self, result, obj=-1,
                      itemsize={1}, fmt=-1, readonly={1},

File Lib/test/test_hashlib.py

 import array
 import hashlib
 import itertools
+import os
 import sys
 try:
     import threading
                              'sha224', 'SHA224', 'sha256', 'SHA256',
                              'sha384', 'SHA384', 'sha512', 'SHA512' )
 
-    _warn_on_extension_import = COMPILED_WITH_PYDEBUG
+    # Issue #14693: fallback modules are always compiled under POSIX
+    _warn_on_extension_import = os.name == 'posix' or COMPILED_WITH_PYDEBUG
 
     def _conditional_import_module(self, module_name):
         """Import a module and return a reference to it or None on failure."""

File Lib/test/test_http_cookies.py

 
         # loading 'expires'
         C = cookies.SimpleCookie()
-        C.load('Customer="W"; expires=Wed, 01-Jan-2010 00:00:00 GMT')
+        C.load('Customer="W"; expires=Wed, 01 Jan 2010 00:00:00 GMT')
         self.assertEqual(C['Customer']['expires'],
-                         'Wed, 01-Jan-2010 00:00:00 GMT')
+                         'Wed, 01 Jan 2010 00:00:00 GMT')
         C = cookies.SimpleCookie()
-        C.load('Customer="W"; expires=Wed, 01-Jan-98 00:00:00 GMT')
+        C.load('Customer="W"; expires=Wed, 01 Jan 98 00:00:00 GMT')
         self.assertEqual(C['Customer']['expires'],
-                         'Wed, 01-Jan-98 00:00:00 GMT')
+                         'Wed, 01 Jan 98 00:00:00 GMT')
 
         # 'max-age'
         C = cookies.SimpleCookie('Customer="WILE_E_COYOTE"')

File Lib/test/test_httplib.py

                 conn.request('POST', '/', body, headers)
                 self.assertEqual(conn._buffer.count[header.lower()], 1)
 
+    def test_content_length_0(self):
+
+        class ContentLengthChecker(list):
+            def __init__(self):
+                list.__init__(self)
+                self.content_length = None
+            def append(self, item):
+                kv = item.split(b':', 1)
+                if len(kv) > 1 and kv[0].lower() == b'content-length':
+                    self.content_length = kv[1].strip()
+                list.append(self, item)
+
+        # POST with empty body
+        conn = client.HTTPConnection('example.com')
+        conn.sock = FakeSocket(None)
+        conn._buffer = ContentLengthChecker()
+        conn.request('POST', '/', '')
+        self.assertEqual(conn._buffer.content_length, b'0',
+                        'Header Content-Length not set')
+
+        # PUT request with empty body
+        conn = client.HTTPConnection('example.com')
+        conn.sock = FakeSocket(None)
+        conn._buffer = ContentLengthChecker()
+        conn.request('PUT', '/', '')
+        self.assertEqual(conn._buffer.content_length, b'0',
+                        'Header Content-Length not set')
+
     def test_putheader(self):
         conn = client.HTTPConnection('example.com')
         conn.sock = FakeSocket(None)

File Lib/test/test_multiprocessing.py

         with self.assertRaises(ValueError):
             multiprocessing.connection.Listener('/var/test.pipe')
 
+#
+# Issue 12098: check sys.flags of child matches that for parent
+#
+
+class TestFlags(unittest.TestCase):
+    @classmethod
+    def run_in_grandchild(cls, conn):
+        conn.send(tuple(sys.flags))
+
+    @classmethod
+    def run_in_child(cls):
+        import json
+        r, w = multiprocessing.Pipe(duplex=False)
+        p = multiprocessing.Process(target=cls.run_in_grandchild, args=(w,))
+        p.start()
+        grandchild_flags = r.recv()
+        p.join()
+        r.close()
+        w.close()
+        flags = (tuple(sys.flags), grandchild_flags)
+        print(json.dumps(flags))
+
+    def test_flags(self):
+        import json, subprocess
+        # start child process using unusual flags
+        prog = ('from test.test_multiprocessing import TestFlags; ' +
+                'TestFlags.run_in_child()')
+        data = subprocess.check_output(
+            [sys.executable, '-E', '-S', '-O', '-c', prog])
+        child_flags, grandchild_flags = json.loads(data.decode('ascii'))
+        self.assertEqual(child_flags, grandchild_flags)
+
 testcases_other = [OtherTest, TestInvalidHandle, TestInitializers,
-                   TestStdinBadfiledescriptor, TestWait, TestInvalidFamily]
+                   TestStdinBadfiledescriptor, TestWait, TestInvalidFamily,
+                   TestFlags]
 
 #
 #

File Lib/test/test_pkg.py

 def fixdir(lst):
     if "__builtins__" in lst:
         lst.remove("__builtins__")
+    if "__initializing__" in lst:
+        lst.remove("__initializing__")
     return lst
 
 

File Lib/test/test_pyclbr.py

         cm('email.parser')
         cm('test.test_pyclbr')
 
+    def test_issue_14798(self):
+        # test ImportError is raised when the first part of a dotted name is
+        # not a package
+        self.assertRaises(ImportError, pyclbr.readmodule_ex, 'asyncore.foo')
+
 
 def test_main():
     run_unittest(PyclbrTest)

File Lib/test/test_stat.py

     def test_mode(self):
         with open(TESTFN, 'w'):
             pass
-        os.chmod(TESTFN, 0o700)
-        self.assertEqual(get_mode(), '-rwx------')
-        os.chmod(TESTFN, 0o070)
-        self.assertEqual(get_mode(), '----rwx---')
-        os.chmod(TESTFN, 0o007)
-        self.assertEqual(get_mode(), '-------rwx')
-        os.chmod(TESTFN, 0o444)
-        self.assertEqual(get_mode(), '-r--r--r--')
+        if os.name == 'posix':
+            os.chmod(TESTFN, 0o700)
+            self.assertEqual(get_mode(), '-rwx------')
+            os.chmod(TESTFN, 0o070)
+            self.assertEqual(get_mode(), '----rwx---')
+            os.chmod(TESTFN, 0o007)
+            self.assertEqual(get_mode(), '-------rwx')
+            os.chmod(TESTFN, 0o444)
+            self.assertEqual(get_mode(), '-r--r--r--')
+        else:
+            os.chmod(TESTFN, 0o700)
+            self.assertEqual(get_mode()[:3], '-rw')
 
     def test_directory(self):
         os.mkdir(TESTFN)
         os.chmod(TESTFN, 0o700)
-        self.assertEqual(get_mode(), 'drwx------')
+        if os.name == 'posix':
+            self.assertEqual(get_mode(), 'drwx------')
+        else:
+            self.assertEqual(get_mode()[0], 'd')
 
     @unittest.skipUnless(hasattr(os, 'symlink'), 'os.symlink not available')
     def test_link(self):
-        os.symlink(os.getcwd(), TESTFN)
-        self.assertEqual(get_mode()[0], 'l')
+        try:
+            os.symlink(os.getcwd(), TESTFN)
+        except (OSError, NotImplementedError) as err:
+            raise unittest.SkipTest(str(err))
+        else:
+            self.assertEqual(get_mode()[0], 'l')
 
     @unittest.skipUnless(hasattr(os, 'mkfifo'), 'os.mkfifo not available')
     def test_fifo(self):

File Lib/test/test_textwrap.py

         result = wrapper.fill(text)
         self.check(result, '\n'.join(expect))
 
+        text = "\tTest\tdefault\t\ttabsize."
+        expect = ["        Test    default         tabsize."]
+        self.check_wrap(text, 80, expect)
+
+        text = "\tTest\tcustom\t\ttabsize."
+        expect = ["    Test    custom      tabsize."]
+        self.check_wrap(text, 80, expect, tabsize=4)
+
     def test_fix_sentence_endings(self):
         wrapper = TextWrapper(60, fix_sentence_endings=True)
 

File Lib/test/test_threaded_import.py

 import shutil
 import unittest
 from test.support import (
-    verbose, import_module, run_unittest, TESTFN, reap_threads)
+    verbose, import_module, run_unittest, TESTFN, reap_threads, forget, unlink)
 threading = import_module('threading')
 
 def task(N, done, done_tasks, errors):
             contents = contents % {'delay': delay}
             with open(os.path.join(TESTFN, name + ".py"), "wb") as f:
                 f.write(contents.encode('utf-8'))
-            self.addCleanup(sys.modules.pop, name, None)
+            self.addCleanup(forget, name)
 
         results = []
         def import_ab():
         t2.join()
         self.assertEqual(set(results), {'a', 'b'})
 
+    def test_side_effect_import(self):
+        code = """if 1:
+            import threading
+            def target():
+                import random
+            t = threading.Thread(target=target)
+            t.start()
+            t.join()"""
+        sys.path.insert(0, os.curdir)
+        self.addCleanup(sys.path.remove, os.curdir)
+        filename = TESTFN + ".py"
+        with open(filename, "wb") as f:
+            f.write(code.encode('utf-8'))
+        self.addCleanup(unlink, filename)
+        self.addCleanup(forget, TESTFN)
+        __import__(TESTFN)
+
 
 @reap_threads
 def test_main():

File Lib/test/test_types.py

         self.assertEqual(copy['key1'], 27)
 
 
+class ClassCreationTests(unittest.TestCase):
+
+    class Meta(type):
+        def __init__(cls, name, bases, ns, **kw):
+            super().__init__(name, bases, ns)
+        @staticmethod
+        def __new__(mcls, name, bases, ns, **kw):
+            return super().__new__(mcls, name, bases, ns)
+        @classmethod
+        def __prepare__(mcls, name, bases, **kw):
+            ns = super().__prepare__(name, bases)
+            ns["y"] = 1
+            ns.update(kw)
+            return ns
+
+    def test_new_class_basics(self):
+        C = types.new_class("C")
+        self.assertEqual(C.__name__, "C")
+        self.assertEqual(C.__bases__, (object,))
+
+    def test_new_class_subclass(self):
+        C = types.new_class("C", (int,))
+        self.assertTrue(issubclass(C, int))
+
+    def test_new_class_meta(self):
+        Meta = self.Meta
+        settings = {"metaclass": Meta, "z": 2}
+        # We do this twice to make sure the passed in dict isn't mutated
+        for i in range(2):
+            C = types.new_class("C" + str(i), (), settings)
+            self.assertIsInstance(C, Meta)
+            self.assertEqual(C.y, 1)
+            self.assertEqual(C.z, 2)
+
+    def test_new_class_exec_body(self):
+        Meta = self.Meta
+        def func(ns):
+            ns["x"] = 0
+        C = types.new_class("C", (), {"metaclass": Meta, "z": 2}, func)
+        self.assertIsInstance(C, Meta)
+        self.assertEqual(C.x, 0)
+        self.assertEqual(C.y, 1)
+        self.assertEqual(C.z, 2)
+
+    def test_new_class_metaclass_keywords(self):
+        # Test that keywords are passed to the metaclass:
+        def meta_func(name, bases, ns, **kw):
+            return name, bases, ns, kw
+        res = types.new_class("X",
+                              (int, object),
+                              dict(metaclass=meta_func, x=0))
+        self.assertEqual(res, ("X", (int, object), {}, {"x": 0}))
+
+    def test_new_class_defaults(self):
+        # Test defaults/keywords:
+        C = types.new_class("C", (), {}, None)
+        self.assertEqual(C.__name__, "C")
+        self.assertEqual(C.__bases__, (object,))
+
+    def test_new_class_meta_with_base(self):
+        Meta = self.Meta
+        def func(ns):
+            ns["x"] = 0
+        C = types.new_class(name="C",
+                            bases=(int,),
+                            kwds=dict(metaclass=Meta, z=2),
+                            exec_body=func)
+        self.assertTrue(issubclass(C, int))
+        self.assertIsInstance(C, Meta)
+        self.assertEqual(C.x, 0)
+        self.assertEqual(C.y, 1)
+        self.assertEqual(C.z, 2)
+
+    # Many of the following tests are derived from test_descr.py
+    def test_prepare_class(self):
+        # Basic test of metaclass derivation
+        expected_ns = {}
+        class A(type):
+            def __new__(*args, **kwargs):
+                return type.__new__(*args, **kwargs)
+
+            def __prepare__(*args):
+                return expected_ns
+
+        B = types.new_class("B", (object,))
+        C = types.new_class("C", (object,), {"metaclass": A})
+
+        # The most derived metaclass of D is A rather than type.
+        meta, ns, kwds = types.prepare_class("D", (B, C), {"metaclass": type})
+        self.assertIs(meta, A)
+        self.assertIs(ns, expected_ns)
+        self.assertEqual(len(kwds), 0)
+
+    def test_metaclass_derivation(self):
+        # issue1294232: correct metaclass calculation
+        new_calls = []  # to check the order of __new__ calls
+        class AMeta(type):
+            def __new__(mcls, name, bases, ns):
+                new_calls.append('AMeta')
+                return super().__new__(mcls, name, bases, ns)
+            @classmethod
+            def __prepare__(mcls, name, bases):
+                return {}
+
+        class BMeta(AMeta):
+            def __new__(mcls, name, bases, ns):
+                new_calls.append('BMeta')
+                return super().__new__(mcls, name, bases, ns)
+            @classmethod
+            def __prepare__(mcls, name, bases):
+                ns = super().__prepare__(name, bases)
+                ns['BMeta_was_here'] = True
+                return ns
+
+        A = types.new_class("A", (), {"metaclass": AMeta})
+        self.assertEqual(new_calls, ['AMeta'])
+        new_calls.clear()
+
+        B = types.new_class("B", (), {"metaclass": BMeta})
+        # BMeta.__new__ calls AMeta.__new__ with super:
+        self.assertEqual(new_calls, ['BMeta', 'AMeta'])
+        new_calls.clear()
+
+        C = types.new_class("C", (A, B))
+        # The most derived metaclass is BMeta:
+        self.assertEqual(new_calls, ['BMeta', 'AMeta'])
+        new_calls.clear()
+        # BMeta.__prepare__ should've been called:
+        self.assertIn('BMeta_was_here', C.__dict__)
+
+        # The order of the bases shouldn't matter:
+        C2 = types.new_class("C2", (B, A))
+        self.assertEqual(new_calls, ['BMeta', 'AMeta'])
+        new_calls.clear()
+        self.assertIn('BMeta_was_here', C2.__dict__)
+
+        # Check correct metaclass calculation when a metaclass is declared:
+        D = types.new_class("D", (C,), {"metaclass": type})
+        self.assertEqual(new_calls, ['BMeta', 'AMeta'])
+        new_calls.clear()
+        self.assertIn('BMeta_was_here', D.__dict__)
+
+        E = types.new_class("E", (C,), {"metaclass": AMeta})
+        self.assertEqual(new_calls, ['BMeta', 'AMeta'])
+        new_calls.clear()
+        self.assertIn('BMeta_was_here', E.__dict__)
+
+    def test_metaclass_override_function(self):
+        # Special case: the given metaclass isn't a class,
+        # so there is no metaclass calculation.
+        class A(metaclass=self.Meta):
+            pass
+
+        marker = object()
+        def func(*args, **kwargs):
+            return marker
+
+        X = types.new_class("X", (), {"metaclass": func})
+        Y = types.new_class("Y", (object,), {"metaclass": func})
+        Z = types.new_class("Z", (A,), {"metaclass": func})
+        self.assertIs(marker, X)
+        self.assertIs(marker, Y)
+        self.assertIs(marker, Z)
+
+    def test_metaclass_override_callable(self):
+        # The given metaclass is a class,
+        # but not a descendant of type.
+        new_calls = []  # to check the order of __new__ calls
+        prepare_calls = []  # to track __prepare__ calls
+        class ANotMeta:
+            def __new__(mcls, *args, **kwargs):
+                new_calls.append('ANotMeta')
+                return super().__new__(mcls)
+            @classmethod
+            def __prepare__(mcls, name, bases):
+                prepare_calls.append('ANotMeta')
+                return {}
+
+        class BNotMeta(ANotMeta):
+            def __new__(mcls, *args, **kwargs):
+                new_calls.append('BNotMeta')
+                return super().__new__(mcls)
+            @classmethod
+            def __prepare__(mcls, name, bases):
+                prepare_calls.append('BNotMeta')
+                return super().__prepare__(name, bases)
+
+        A = types.new_class("A", (), {"metaclass": ANotMeta})
+        self.assertIs(ANotMeta, type(A))
+        self.assertEqual(prepare_calls, ['ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['ANotMeta'])
+        new_calls.clear()
+
+        B = types.new_class("B", (), {"metaclass": BNotMeta})
+        self.assertIs(BNotMeta, type(B))
+        self.assertEqual(prepare_calls, ['BNotMeta', 'ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['BNotMeta', 'ANotMeta'])
+        new_calls.clear()
+
+        C = types.new_class("C", (A, B))
+        self.assertIs(BNotMeta, type(C))
+        self.assertEqual(prepare_calls, ['BNotMeta', 'ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['BNotMeta', 'ANotMeta'])
+        new_calls.clear()
+
+        C2 = types.new_class("C2", (B, A))
+        self.assertIs(BNotMeta, type(C2))
+        self.assertEqual(prepare_calls, ['BNotMeta', 'ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['BNotMeta', 'ANotMeta'])
+        new_calls.clear()
+
+        # This is a TypeError, because of a metaclass conflict:
+        # BNotMeta is neither a subclass, nor a superclass of type
+        with self.assertRaises(TypeError):
+            D = types.new_class("D", (C,), {"metaclass": type})
+
+        E = types.new_class("E", (C,), {"metaclass": ANotMeta})
+        self.assertIs(BNotMeta, type(E))
+        self.assertEqual(prepare_calls, ['BNotMeta', 'ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['BNotMeta', 'ANotMeta'])
+        new_calls.clear()
+
+        F = types.new_class("F", (object(), C))
+        self.assertIs(BNotMeta, type(F))
+        self.assertEqual(prepare_calls, ['BNotMeta', 'ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['BNotMeta', 'ANotMeta'])
+        new_calls.clear()
+
+        F2 = types.new_class("F2", (C, object()))
+        self.assertIs(BNotMeta, type(F2))
+        self.assertEqual(prepare_calls, ['BNotMeta', 'ANotMeta'])
+        prepare_calls.clear()
+        self.assertEqual(new_calls, ['BNotMeta', 'ANotMeta'])
+        new_calls.clear()
+
+        # TypeError: BNotMeta is neither a
+        # subclass, nor a superclass of int
+        with self.assertRaises(TypeError):
+            X = types.new_class("X", (C, int()))
+        with self.assertRaises(TypeError):
+            X = types.new_class("X", (int(), C))
+
+
 def test_main():
-    run_unittest(TypesTests, MappingProxyTests)
+    run_unittest(TypesTests, MappingProxyTests, ClassCreationTests)
 
 if __name__ == '__main__':
     test_main()
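
The ClassCreationTests added here cover types.new_class() and types.prepare_class(), added in Python 3.3 as functional counterparts of the class statement. A minimal sketch of the API the tests exercise; DemoMeta, body, C and D are illustrative names, not taken from the patch:

    import types

    class DemoMeta(type):
        @classmethod
        def __prepare__(mcls, name, bases, **kw):
            ns = super().__prepare__(name, bases)
            ns["y"] = 1
            ns.update(kw)          # extra class keywords land in the namespace
            return ns

        def __new__(mcls, name, bases, ns, **kw):
            return super().__new__(mcls, name, bases, ns)

        def __init__(cls, name, bases, ns, **kw):
            super().__init__(name, bases, ns)

    def body(ns):
        ns["x"] = 0                # exec_body callback plays the class body

    C = types.new_class("C", (int,), {"metaclass": DemoMeta, "z": 2}, body)
    assert issubclass(C, int) and isinstance(C, DemoMeta)
    assert (C.x, C.y, C.z) == (0, 1, 2)

    # prepare_class() exposes only the metaclass calculation and the
    # __prepare__ call; the most derived metaclass of the bases wins.
    meta, ns, kwds = types.prepare_class("D", (C,), {"metaclass": type})
    assert meta is DemoMeta and kwds == {}

new_class() performs the same "most derived metaclass" calculation as the compiler does for a class statement, which is what test_metaclass_derivation and the two override tests pin down.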