
multiprocessing.Pool starts to fail if command has a long list of arguments #144503

@sjoerdjob

Description


Bug report

Bug description:

A multiprocessing pool can fail to start when the command that creates the pool was run with a large number of arguments.

Minimal working example:

# mwe.py
import multiprocessing
if __name__ == "__main__":
    with multiprocessing.Pool():
        pass

Run this using the following script:

#!/bin/bash

PAGE_SIZE=$(getconf PAGE_SIZE)
MAX_ARG_STRLEN=$((32 * PAGE_SIZE))
COUNT=$((MAX_ARG_STRLEN / 100))
echo $COUNT
yes "$(printf 'a%.0s' {1..100})" | head -n $COUNT | xargs python3 mwe.py

What this will do is effectively run the following:

python3 mwe.py aaa..a aaa..a aaa..a ... aaa..a

The Python process itself launches correctly, as each argument is only 100 bytes long (well under the MAX_ARG_STRLEN limit on a single argument).
The creation of the multiprocessing pool, however, fails, because sys.argv is read and used to build a single command-line string in multiprocessing.forkserver.ForkServer.ensure_running, in the branch guarded by:

if 'sys_argv' in data:
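The arithmetic behind the failure can be sketched as follows (assuming a typical 4 KiB page size, as on x86-64 Linux): each argument individually passes the kernel's per-argument check, but once forkserver joins argv into one string, that single "argument" plus separators exceeds the same limit.

```python
PAGE_SIZE = 4096                 # typical x86-64 Linux page size (assumption)
MAX_ARG_STRLEN = 32 * PAGE_SIZE  # kernel limit on a single argument: 131072 bytes
COUNT = MAX_ARG_STRLEN // 100    # 1310 arguments, as in the script above

args = ["a" * 100] * COUNT

# Each argument is well under the per-argument limit on its own...
assert all(len(a) < MAX_ARG_STRLEN for a in args)

# ...but joined into a single string (with separators), the result is
# larger than the kernel accepts for one argument.
assert len(" ".join(args)) > MAX_ARG_STRLEN
```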

This was caused by a recent change, #143717, which was also backported to 3.13 and 3.14 (it works on 3.14.2 and fails on 3.14.3).
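As a possible interim workaround (my suggestion, not something stated in the issue), explicitly selecting the "fork" start method sidesteps the forkserver entirely, so sys.argv is never re-serialized onto a command line. Note that "fork" is only available on POSIX systems:

```python
import multiprocessing

if __name__ == "__main__":
    # "fork" clones the current process directly instead of launching a
    # forkserver, so no command line is rebuilt from sys.argv
    # (hypothetical workaround; POSIX-only).
    ctx = multiprocessing.get_context("fork")
    with ctx.Pool() as pool:
        print(pool.map(abs, [-3, 4]))  # prints [3, 4]
```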

CPython versions tested on:

3.15, 3.14

Operating systems tested on:

Linux
