Pools

note

Pools are an early preview feature. The concept is functional but will become more useful as additional launcher types are added.

A pool defines a configuration for automatically launching workers in a workspace. Instead of manually starting workers, you can configure a pool and the server will launch and manage workers on your behalf.

Configuring a pool

Pools are configured using the CLI:

coflux pools update mypool \
  --module myapp.workflows \
  --module myapp.tasks \
  --docker-image myorg/myapp:latest

Launcher types

Each pool has a launcher that determines how workers are started. The server must be configured to allow the relevant launcher type (via the COFLUX_LAUNCHER_TYPES environment variable).
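For example, the server's environment might enable both launcher types like this (the exact value format is an assumption; check the server configuration reference):

```shell
# Assumed format: comma-separated launcher types
export COFLUX_LAUNCHER_TYPES=docker,process
```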

Docker launcher

Launches workers as Docker containers:

coflux pools update mypool \
  --docker-image myorg/myapp:latest \
  --docker-host tcp://docker:2375

Option            Description
--docker-image    Docker image to run
--docker-host     Docker host (default: local socket)

Process launcher

Launches workers as local processes:

coflux pools update mypool \
  --process-dir /path/to/project

Option            Description
--process-dir     Working directory for the worker process

Common options

These options apply to all launcher types:

Option            Description
--module, -m      Modules to host (can be specified multiple times)
--provides        Features that workers provide (e.g., gpu:A100)
--server-host     Server host override for launched workers
--adapter         Adapter command
--concurrency     Maximum concurrent executions per worker
--env             Environment variables (e.g., --env KEY=VALUE)

Managing pools

# List pools in a workspace
coflux pools list

# Get pool configuration
coflux pools get mypool

# View launched workers
coflux pools launches mypool

# Watch launches in real-time
coflux pools launches mypool --watch

# Delete a pool
coflux pools delete mypool

Provides and requires

Workers can declare features they provide, and targets can require specific features. This allows routing executions to appropriate workers — for example, GPU-intensive tasks to GPU-equipped workers.

On the worker side, configure provides on the pool:

coflux pools update gpu-pool \
  --docker-image myorg/gpu-worker:latest \
  --provides "gpu:A100"

On the task side, specify requires in the decorator:

@cf.task(requires={"gpu": "A100"})
def train_model(data):
    ...

The requires parameter accepts a dictionary where keys are feature names and values can be a specific value ("A100"), a list of acceptable values (["A100", "H100"]), or True to require the feature with any value.
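To make those matching rules concrete, here is an illustrative sketch (not Coflux internals) of how a requires specification could be checked against a worker's provided features; the matches function and the feature dictionaries are hypothetical:

```python
def matches(provides: dict, requires: dict) -> bool:
    """Return True if a worker's features satisfy every requirement.

    Each requirement value is a specific value, a list of acceptable
    values, or True (feature must be present with any value).
    """
    for name, wanted in requires.items():
        if name not in provides:
            return False
        if wanted is True:
            continue  # any value is acceptable
        accepted = wanted if isinstance(wanted, list) else [wanted]
        if provides[name] not in accepted:
            return False
    return True

# A worker launched with --provides "gpu:A100"
worker = {"gpu": "A100"}

matches(worker, {"gpu": "A100"})            # specific value: match
matches(worker, {"gpu": ["A100", "H100"]})  # list of values: match
matches(worker, {"gpu": True})              # any value: match
matches(worker, {"gpu": "H100"})            # wrong value: no match
```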