# conc: better structured concurrency for go

`conc` is your toolbelt for structured concurrency in go, making common tasks easier and safer.
```sh
go get github.com/sourcegraph/conc
```
## At a glance
- Use `conc.WaitGroup` if you just want a safer version of `sync.WaitGroup`
- Use `pool.Pool` if you want a concurrency-limited task runner
- Use `pool.ResultPool` if you want a concurrent task runner that collects task results
- Use `pool.(Result)?ErrorPool` if your tasks are fallible
- Use `pool.(Result)?ContextPool` if your tasks should be canceled on failure
- Use `stream.Stream` if you want to process an ordered stream of tasks in parallel with serial callbacks
- Use `iter.Map` if you want to concurrently map a slice
- Use `iter.ForEach` if you want to concurrently iterate over a slice
- Use `panics.Catcher` if you want to catch panics in your own goroutines
All pools are created with `pool.New()` or `pool.NewWithResults[T]()`, then configured with methods (a composed sketch follows the list):

- `p.WithMaxGoroutines()` configures the maximum number of goroutines in the pool
- `p.WithErrors()` configures the pool to run tasks that return errors
- `p.WithContext(ctx)` configures the pool to run tasks that should be canceled on first error
- `p.WithFirstError()` configures error pools to only keep the first returned error rather than an aggregated error
- `p.WithCollectErrored()` configures result pools to collect results even when the task errored
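For a sense of how these calls compose, here is a minimal sketch, assuming the configuration methods chain as listed above; `fetchAll` and the HTTP work inside it are illustrative, not part of `conc`.

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/sourcegraph/conc/pool"
)

// fetchAll fetches every URL with at most 10 concurrent requests and cancels
// the remaining work as soon as any request returns an error.
func fetchAll(ctx context.Context, urls []string) error {
	p := pool.New().
		WithMaxGoroutines(10).
		WithContext(ctx)
	for _, url := range urls {
		url := url // loop-variable capture; unnecessary on Go 1.22+
		p.Go(func(ctx context.Context) error {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return err
			}
			return resp.Body.Close()
		})
	}
	return p.Wait() // aggregated error, or only the first with WithFirstError()
}

func main() {
	fmt.Println(fetchAll(context.Background(), []string{"https://example.com"}))
}
```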
## Goals

The main goals of the package are:

1) Make it harder to leak goroutines
2) Handle panics gracefully
3) Make concurrent code easier to read
### Goal #1: Make it harder to leak goroutines
A common pain point when working with goroutines is cleaning them up. It's really easy to fire off a `go` statement and fail to properly wait for it to complete.

`conc` takes the opinionated stance that all concurrency should be scoped. That is, goroutines should have an owner, and that owner should always ensure that its owned goroutines exit properly.

In `conc`, the owner of a goroutine is always a `conc.WaitGroup`. Goroutines are spawned in a `WaitGroup` with `(*WaitGroup).Go()`, and `(*WaitGroup).Wait()` should always be called before the `WaitGroup` goes out of scope.

In some cases, you might want a spawned goroutine to outlast the scope of the caller. In that case, you could pass a `WaitGroup` into the spawning function.
```go
func main() {
	var wg conc.WaitGroup
	defer wg.Wait()
	startTheThing(&wg)
}

func startTheThing(wg *conc.WaitGroup) {
	wg.Go(func() { ... })
}
```
For some more discussion on why scoped concurrency is nice, check out this blog post.
### Goal #2: Handle panics gracefully
A frequent problem with goroutines in long-running applications is handling panics. A goroutine spawned without a panic handler will crash the whole process on panic. This is usually undesirable.
However, if you do add a panic handler to a goroutine, what do you do with the panic once you catch it? Some options:

1) Ignore it
2) Log it
3) Turn it into an error and return that to the goroutine spawner
4) Propagate the panic to the goroutine spawner
Ignoring panics is a bad idea since panics usually mean there is actually something wrong and someone should fix it.
Just logging panics isn’t great either because then there is no indication to the spawner that something bad happened, and it might just continue on as normal even though your program is in a really bad state.
Both (3) and (4) are reasonable options, but both require the goroutine to have an owner that can actually receive the message that something went wrong. This is generally not true with a goroutine spawned with `go`, but in the `conc` package, all goroutines have an owner that must collect the spawned goroutine. In the `conc` package, any call to `Wait()` will panic if any of the spawned goroutines panicked. Additionally, it decorates the panic value with a stacktrace from the child goroutine so that you don't lose information about what caused the panic.

Doing this all correctly every time you spawn something with `go` is not trivial, and it requires a lot of boilerplate that makes the important parts of the code more difficult to read, so `conc` does this for you.
*(side-by-side comparison: stdlib vs. `conc`)*
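As a rough sketch of the `conc` side of that comparison (the panicking task is illustrative): spawn through a `conc.WaitGroup`, and the child's panic resurfaces from `Wait()` in the parent, decorated with the child's stacktrace.

```go
package main

import (
	"fmt"

	"github.com/sourcegraph/conc"
)

func mightPanic() {
	panic("boom") // stands in for any task that can panic
}

func main() {
	defer func() {
		// The panic from the child goroutine resurfaces here, carrying the
		// child's stacktrace in the panic value.
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()

	var wg conc.WaitGroup
	wg.Go(mightPanic)
	wg.Wait() // re-panics in the caller because the spawned goroutine panicked
}
```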
### Goal #3: Make concurrent code easier to read
Doing concurrency correctly is difficult. Doing it in a way that doesn't obfuscate what the code is actually doing is more difficult. The `conc` package attempts to make common operations easier by abstracting as much boilerplate complexity as possible.

Want to run a set of concurrent tasks with a bounded set of goroutines? Use `pool.New()`. Want to process an ordered stream of results concurrently, but still maintain order? Try `stream.New()`. What about a concurrent map over a slice? Take a peek at `iter.Map()`.
Browse the examples below for comparisons with doing these by hand.
## Examples
Each of these examples forgoes propagating panics for simplicity. To see what kind of complexity that would add, check out the “Goal #2” header above.
Spawn a set of goroutines and wait for them to finish:
*(side-by-side comparison: stdlib vs. `conc`)*
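A minimal sketch of the `conc` side of that comparison; the task body is a stand-in for real work.

```go
package main

import (
	"fmt"

	"github.com/sourcegraph/conc"
)

func main() {
	var wg conc.WaitGroup
	for i := 0; i < 10; i++ {
		i := i // loop-variable capture; unnecessary on Go 1.22+
		wg.Go(func() {
			fmt.Println("task", i) // stand-in for real work
		})
	}
	wg.Wait() // blocks until all 10 goroutines finish; re-panics if any panicked
}
```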
Process each element of a stream in a static pool of goroutines:
*(side-by-side comparison: stdlib vs. `conc`)*
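A minimal sketch of the `conc` side, assuming the work arrives on a channel; `handle` and the channel setup are illustrative.

```go
package main

import (
	"fmt"

	"github.com/sourcegraph/conc/pool"
)

// handle stands in for whatever per-element work the real program does.
func handle(elem int) {
	fmt.Println("handled", elem)
}

// process drains the channel using a fixed-size pool of 10 goroutines.
func process(values chan int) {
	p := pool.New().WithMaxGoroutines(10)
	for elem := range values {
		elem := elem // loop-variable capture; unnecessary on Go 1.22+
		p.Go(func() {
			handle(elem)
		})
	}
	p.Wait()
}

func main() {
	values := make(chan int)
	go func() {
		for i := 0; i < 5; i++ {
			values <- i
		}
		close(values)
	}()
	process(values)
}
```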
Process each element of a slice in a static pool of goroutines:
*(side-by-side comparison: stdlib vs. `conc`)*
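A minimal sketch of the `conc` side using `iter.ForEach`; the slice and the per-element work are illustrative.

```go
package main

import (
	"fmt"

	"github.com/sourcegraph/conc/iter"
)

func main() {
	values := []int{1, 2, 3, 4, 5}
	// ForEach runs the callback concurrently over the slice, passing a pointer
	// to each element, and returns once every element has been processed.
	iter.ForEach(values, func(v *int) {
		fmt.Println("handled", *v)
	})
}
```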
Concurrently map a slice:
*(side-by-side comparison: stdlib vs. `conc`)*
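A minimal sketch of the `conc` side using `iter.Map`; doubling each element is just a placeholder transform.

```go
package main

import (
	"fmt"

	"github.com/sourcegraph/conc/iter"
)

func main() {
	input := []int{1, 2, 3, 4}
	// Map applies the callback to every element concurrently and returns the
	// results in the same order as the input.
	doubled := iter.Map(input, func(v *int) int {
		return *v * 2
	})
	fmt.Println(doubled) // [2 4 6 8]
}
```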
Process an ordered stream concurrently:
*(side-by-side comparison: stdlib vs. `conc`)*
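A minimal sketch of the `conc` side using `stream.Stream`: tasks run concurrently, but their callbacks run serially and in submission order, which preserves output order. The channels and the transform are illustrative.

```go
package main

import (
	"fmt"

	"github.com/sourcegraph/conc/stream"
)

// mapStream reads from in, processes elements concurrently, and writes the
// results to out in the original order.
func mapStream(in <-chan int, out chan<- int, f func(int) int) {
	s := stream.New().WithMaxGoroutines(10)
	for elem := range in {
		elem := elem // loop-variable capture; unnecessary on Go 1.22+
		s.Go(func() stream.Callback {
			res := f(elem) // runs concurrently
			// The returned callback runs serially, in submission order.
			return func() { out <- res }
		})
	}
	s.Wait()
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go func() {
		for i := 0; i < 5; i++ {
			in <- i
		}
		close(in)
	}()
	go mapStream(in, out, func(v int) int { return v * v })
	for res := range out {
		fmt.Println(res)
	}
}
```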
## Status
This package is currently pre-1.0. There are likely to be minor breaking changes before a 1.0 release as we stabilize the APIs and tweak defaults. Please open an issue if you have questions, concerns, or requests that you’d like addressed before the 1.0 release. Currently, a 1.0 is targeted for March 2023.