I'm not entirely sure why, but completely separate copies of `KLUFactorization` cannot be used in separate threads without causing a segfault.
I've made a minimal example at https://github.com/josephmckinsey/klu-threading, since I've only been able to produce this with a specific matrix so far.
https://github.com/josephmckinsey/klu-threading/actions/runs/17870311904/job/50822461621
```
Solving Ax = b in parallel with 6 threads...

[2188] signal 11 (1): Segmentation fault
in expression starting at none:0
/home/runner/work/_temp/9db51d3c-398d-426e-ace8-493b5356e3a5.sh: line 1: 2188 Segmentation fault (core dumped) julia --project threads.jl
```
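For reference, the failing pattern looks roughly like the following. This is only a sketch, not the script from the repo: the matrix here is a random placeholder, while the actual crash has so far only reproduced with the specific matrix checked into the repo.

```julia
using KLU, SparseArrays, LinearAlgebra

# Placeholder matrix; the real reproduction needs a specific matrix.
A = sprand(2000, 2000, 0.005) + I
b = rand(2000)

for _ in 1:6
    Threads.@spawn begin
        F = klu(A)   # a fresh, completely separate KLUFactorization per task
        F \ b        # solve on this thread
    end
    # No @sync / fetch here: each F becomes unreachable while other tasks
    # are still running, so its finalizer can free the numeric object
    # concurrently with work on another thread.
end
```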
This might be an upstream bug. However, removing `_free_numeric` from the finalizer in a local copy of KLU.jl stopped the segfault. It also disappeared if I used `@sync` instead of just `@spawn`. Is this known behavior, and is all that's needed a mutex around the free? The SuiteSparse memory management did not look particularly inviting.
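If a mutex around the free is indeed sufficient, the change I have in mind would look something like this. This is only a sketch of the idea, not a tested patch; `safe_free` and `KLU_FREE_LOCK` are names I made up, and `_free_numeric` is the internal KLU.jl function mentioned above.

```julia
# Hypothetical: serialize concurrent frees from finalizers running
# on different threads behind one global lock.
const KLU_FREE_LOCK = ReentrantLock()

function safe_free(F)
    lock(KLU_FREE_LOCK) do
        _free_numeric(F)   # KLU.jl's existing free of the numeric object
    end
end

# and then register finalizer(safe_free, F) instead of
# finalizer(_free_numeric, F) when the factorization is constructed.
```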