FNO invokes scalar indexing #125

@ayushinav

Description

Describe the bug 🐞

Calling an FNO on the GPU raises a `Scalar indexing is disallowed` error.

Minimal Reproducible Example 👇

using CUDA, Lux, NeuralOperators, Random

rng = Random.default_rng()

n_batch = 10
nz = 64
m_sample = cu(2 .+ randn(nz, 1, n_batch))

fno2 = FourierNeuralOperator(gelu; chs=(1, 64, 64, 128, 1), modes=(16,))
ps, st = Lux.setup(rng, fno2) |> cu
val, st_ = Lux.apply(fno2, m_sample, ps, st)

Error & Stacktrace ⚠️

ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base .\error.jl:35
  [2] errorscalar(op::String)
    @ GPUArraysCore C:\Users\ayush\.julia\packages\GPUArraysCore\aNaXo\src\GPUArraysCore.jl:151
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore C:\Users\ayush\.julia\packages\GPUArraysCore\aNaXo\src\GPUArraysCore.jl:124
  [4] assertscalar(op::String)
    @ GPUArraysCore C:\Users\ayush\.julia\packages\GPUArraysCore\aNaXo\src\GPUArraysCore.jl:112
  [5] getindex
    @ C:\Users\ayush\.julia\packages\GPUArrays\3a5jB\src\host\indexing.jl:50 [inlined]
  [6] scalar_getindex
    @ C:\Users\ayush\.julia\packages\GPUArrays\3a5jB\src\host\indexing.jl:36 [inlined]
  [7] _getindex
    @ C:\Users\ayush\.julia\packages\GPUArrays\3a5jB\src\host\indexing.jl:19 [inlined]
  [8] getindex
    @ C:\Users\ayush\.julia\packages\GPUArrays\3a5jB\src\host\indexing.jl:17 [inlined]
  [9] getindex
    @ .\subarray.jl:320 [inlined]
 [10] im2col!(col::CuArray{…}, x::SubArray{…}, cdims::DenseConvDims{…})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\impl\conv_im2col.jl:253
 [11] (::NNlib.var"#conv_part#640"{})(task_n::Int64, part::UnitRange{…})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\impl\conv_im2col.jl:53
 [12] conv_im2col!(y::SubArray{…}, x::SubArray{…}, w::CuArray{…}, cdims::DenseConvDims{…}; col::CuArray{…}, alpha::Float32, beta::Float32, ntasks::Int64)
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\impl\conv_im2col.jl:69
 [13] conv_im2col!(y::SubArray{…}, x::SubArray{…}, w::CuArray{…}, cdims::DenseConvDims{…})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\impl\conv_im2col.jl:23
 [14] (::NNlib.var"#conv_group#301"{})(xc::UnitRange{…}, wc::UnitRange{…})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv.jl:209
 [15] conv!(out::CuArray{…}, in1::CuArray{…}, in2::CuArray{…}, cdims::DenseConvDims{…}; kwargs::@Kwargs{})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv.jl:218
 [16] conv!
    @ C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv.jl:185 [inlined]
 [17] conv_bias_act!(y::CuArray{…}, x::CuArray{…}, w::CuArray{…}, cdims::DenseConvDims{…}, b::CuArray{…}, σ::Function; kwargs::@Kwargs{})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv_bias_act.jl:10
 [18] conv_bias_act!(y::CuArray{…}, x::CuArray{…}, w::CuArray{…}, cdims::DenseConvDims{…}, b::CuArray{…}, σ::Function)
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv_bias_act.jl:8
 [19] conv_bias_act!(y::CuArray{…}, x::CuArray{…}, w::CuArray{…}, cdims::DenseConvDims{…}, b::CuArray{…}, σ::Function; kwargs::@Kwargs{})
    @ NNlib C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv_bias_act.jl:22
 [20] conv_bias_act!
    @ C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv_bias_act.jl:17 [inlined]
 [21] #conv_bias_act#404
    @ C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv_bias_act.jl:4 [inlined]
 [22] conv_bias_act
    @ C:\Users\ayush\.julia\packages\NNlib\srXYX\src\conv_bias_act.jl:1 [inlined]
 [23] conv_bias_act(::Type{…}, x::CuArray{…}, weight::CuArray{…}, cdims::DenseConvDims{…}, bias′::CuArray{…}, act::typeof(identity))
    @ LuxLib.Impl C:\Users\ayush\.julia\packages\LuxLib\ZJ3gh\src\impl\conv.jl:163
 [24] conv_bias_act
    @ C:\Users\ayush\.julia\packages\LuxLib\ZJ3gh\src\impl\conv.jl:141 [inlined]
 [25] fused_conv
    @ C:\Users\ayush\.julia\packages\LuxLib\ZJ3gh\src\impl\conv.jl:201 [inlined]
 [26] fused_conv
    @ C:\Users\ayush\.julia\packages\LuxLib\ZJ3gh\src\impl\conv.jl:177 [inlined]
 [27] fused_conv_bias_activation
    @ C:\Users\ayush\.julia\packages\LuxLib\ZJ3gh\src\api\conv.jl:37 [inlined]
 [28] ##c::Convinternal#337
    @ C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\conv.jl:259 [inlined]
 [29] Conv
    @ C:\Users\ayush\.julia\packages\ReactantCore\SEqVX\src\ReactantCore.jl:286 [inlined]
 [30] apply
    @ C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:155 [inlined]
 [31] macro expansion
    @ C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\containers.jl:0 [inlined]
 [32] applychain(layers::@NamedTuple{}, x::CuArray{…}, ps::@NamedTuple{}, st::@NamedTuple{})
    @ Lux C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\containers.jl:577
 [33] Chain
    @ C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\containers.jl:575 [inlined]
 [34] apply
    @ C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:155 [inlined]
 [35] AbstractLuxWrapperLayer
    @ C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:269 [inlined]
 [36] apply(model::FourierNeuralOperator{…}, x::CuArray{…}, ps::@NamedTuple{}, st::@NamedTuple{})
    @ LuxCore C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:155
 [37] top-level scope
    @ c:\Users\ayush\Desktop\UQ_MT\wip\surrogate_v1.jl:1
Some type information was truncated. Use `show(err)` to see complete types.
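One observation on the trace above: frames [10]–[13] show NNlib's generic `conv_im2col!` implementation being invoked on `CuArray`s. That CPU-oriented path (and its scalar indexing) is what NNlib falls back to when no GPU conv kernels are registered, which usually means the cuDNN-backed extension was never loaded. A hypothetical check, assuming this is the cause (note the MRE loads only `CUDA` and `Lux`, not `LuxCUDA`):

```julia
# Sketch, not a confirmed fix: loading LuxCUDA pulls in the cuDNN-backed
# conv kernels, so NNlib can dispatch conv on CuArrays to the GPU instead
# of falling back to the CPU im2col path seen in frames [10]-[13].
using CUDA, Lux, LuxCUDA, NeuralOperators, Random

rng = Random.default_rng()
m_sample = cu(2 .+ randn(64, 1, 10))

fno2 = FourierNeuralOperator(gelu; chs=(1, 64, 64, 128, 1), modes=(16,))
ps, st = Lux.setup(rng, fno2) |> cu
val, st_ = Lux.apply(fno2, m_sample, ps, st)
```

If the error persists with `LuxCUDA` loaded, the im2col fallback is a red herring and the problem lies elsewhere.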

Constructing the FNO via the other API at least points to the offending line in NeuralOperators (`layers.jl:240`, inside `GridEmbedding`), which is otherwise missing from the stack trace above.

fno3 = FourierNeuralOperator((16,), 1, 4, 64);
ps, st = Lux.setup(rng, fno3) |> cu;
val, st_ = Lux.apply(fno3, m_sample, ps, st);

Error stack

ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base .\error.jl:35
  [2] errorscalar(op::String)
    @ GPUArraysCore C:\Users\ayush\.julia\packages\GPUArraysCore\aNaXo\src\GPUArraysCore.jl:151
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore C:\Users\ayush\.julia\packages\GPUArraysCore\aNaXo\src\GPUArraysCore.jl:124
  [4] assertscalar(op::String)
    @ GPUArraysCore C:\Users\ayush\.julia\packages\GPUArraysCore\aNaXo\src\GPUArraysCore.jl:112
  [5] getindex
    @ C:\Users\ayush\.julia\packages\GPUArrays\3a5jB\src\host\indexing.jl:50 [inlined]
  [6] macro expansion
    @ .\multidimensional.jl:981 [inlined]
  [7] macro expansion
    @ .\cartesian.jl:64 [inlined]
  [8] macro expansion
    @ .\multidimensional.jl:979 [inlined]
  [9] _unsafe_setindex!(::IndexLinear, ::Array{…}, ::CuArray{…}, ::UnitRange{…}, ::UnitRange{…}, ::UnitRange{…})
    @ Base .\multidimensional.jl:988
 [10] _setindex!
    @ .\multidimensional.jl:967 [inlined]
 [11] setindex!
    @ .\abstractarray.jl:1413 [inlined]
 [12] _copy_or_fill!
    @ .\abstractarray.jl:1857 [inlined]
 [13] __cat_offset1!
    @ .\abstractarray.jl:1849 [inlined]
 [14] __cat_offset!
    @ .\abstractarray.jl:1840 [inlined]
 [15] __cat_offset!(A::Array{…}, shape::Tuple{…}, catdims::Tuple{…}, offsets::Tuple{…}, x::Array{…}, X::CuArray{…})
    @ Base .\abstractarray.jl:1841
 [16] __cat(::Array{Float32, 3}, ::Tuple{Int64, Int64, Int64}, ::Tuple{Bool, Bool}, ::Array{Float32, 3}, ::Vararg{Any})
    @ Base .\abstractarray.jl:1836
 [17] _cat_t
    @ .\abstractarray.jl:1829 [inlined]
 [18] _cat(::Int64, ::Array{Float32, 3}, ::CuArray{Float32, 3, CUDA.DeviceMemory})
    @ Base .\abstractarray.jl:2086
 [19] cat
    @ .\abstractarray.jl:2084 [inlined]
 [20] (::GridEmbedding{Vector{Tuple{…}}})(x::CuArray{Float32, 3, CUDA.DeviceMemory}, ps::@NamedTuple{}, st::@NamedTuple{})
    @ NeuralOperators C:\Users\ayush\.julia\packages\NeuralOperators\IE6mQ\src\layers.jl:240
 [21] apply
    @ C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:155 [inlined]
 [22] macro expansion
    @ C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\containers.jl:0 [inlined]
 [23] applychain(layers::@NamedTuple{}, x::CuArray{…}, ps::@NamedTuple{}, st::@NamedTuple{})
    @ Lux C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\containers.jl:577
 [24] Chain
    @ C:\Users\ayush\.julia\packages\Lux\GCC0y\src\layers\containers.jl:575 [inlined]
 [25] apply
    @ C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:155 [inlined]
 [26] AbstractLuxWrapperLayer
    @ C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:269 [inlined]
 [27] apply(model::FourierNeuralOperator{…}, x::CuArray{…}, ps::@NamedTuple{}, st::@NamedTuple{})
    @ LuxCore C:\Users\ayush\.julia\packages\LuxCore\kQC9S\src\LuxCore.jl:155
 [28] top-level scope
    @ c:\Users\ayush\Desktop\UQ_MT\wip\surrogate_v1.jl:48
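This second trace is more informative: frames [16]–[20] show `GridEmbedding` calling `cat` on a host `Array` (its grid) together with the `CuArray` input, so Base's generic `__cat` runs and indexes the `CuArray` elementwise. A minimal sketch of just that failure mode, assuming the grid is constructed on the host as the trace suggests (the array shapes here are illustrative):

```julia
using CUDA
CUDA.allowscalar(false)

x    = CUDA.rand(Float32, 64, 1, 10)  # device array, standing in for the FNO input
grid = rand(Float32, 64, 1, 10)       # host array, standing in for GridEmbedding's grid

# Mixed host/device cat falls back to Base's generic __cat, which copies the
# CuArray element by element -> "Scalar indexing is disallowed".
cat(grid, x; dims = 2)
```

If this is indeed the failure mode, building the grid on the same device as `x` (e.g. `cat(cu(grid), x; dims = 2)`) would keep the concatenation on the GPU.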

Environment (please complete the following information):

  • Output of `using Pkg; Pkg.status()`
Status `C:\Users\ayush\Desktop\UQ_MT\Project.toml`
  [052768ef] CUDA v5.10.1
  [b2108857] Lux v1.31.3
  [872c559c] NNlib v0.9.33
  [ea5c82af] NeuralOperators v0.6.2
  [3c362404] Reactant v0.2.231
  • Output of `versioninfo()`
Julia Version 1.11.4
Commit 8561cc3d68 (2025-03-10 11:36 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 16 × 13th Gen Intel(R) Core(TM) i7-13620H
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, goldmont)
Threads: 1 default, 0 interactive, 1 GC (on 16 virtual cores)
Environment:
  JULIA_EDITOR = code
  JULIA_VSCODE_REPL = 1

