Description
I'm encountering an issue while performing object detection using a deep learning model in ArcGIS Pro 3.3. When I run the DetectObjectsUsingDeepLearning tool with GPU enabled, I receive the following error:
ExecuteError: Raster error encountered. For more details, see the message displayed afterward.
Parallel processing job timed out [Failed to generate table]
Failed to execute (DetectObjectsUsingDeepLearning).
However, when I run the same tool using CPU (without GPU acceleration), it works fine, and I can also train models on GPU without any issues.
Here is the key part of my Python script:
import arcpy

with arcpy.EnvManager(gpuId=0, cellSize="csl_20210529_processed.tif", processorType="GPU", scratchWorkspace=r""):
    arcpy.ia.DetectObjectsUsingDeepLearning(
        in_raster="csl_20210529_processed.tif",
        out_detected_objects=r"E:\Arcgis_Pro_work_place\MyProject1\MyProject1.gdb\csl_20210529_p_DetectObjects111",
        in_model_definition=r"E:\test\model_500_ResNet50_64stride_50p\model_500_ResNet50_64stride_50p.emd",
        arguments="padding 64;batch_size 4;threshold 0.8;return_bboxes False;test_time_augmentation False;merge_policy mean;tile_size 256",
        run_nms="NMS",
        confidence_score_field="Confidence",
        class_value_field="Class",
        max_overlap_ratio=0.01,
        processing_mode="PROCESS_AS_MOSAICKED_IMAGE",
        use_pixelspace="NO_PIXELSPACE"
    )
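Before the tool even runs, it is worth confirming that the deep-learning backend can see the GPU at all. A minimal diagnostic sketch, assuming PyTorch is the framework bundled with the ArcGIS Pro deep learning libraries (run it inside the ArcGIS Pro Python environment; `gpu_status` is a hypothetical helper name):

```python
# Hypothetical diagnostic sketch: check whether the PyTorch backend shipped
# with the ArcGIS deep learning libraries can see the GPU and how much VRAM
# is free, before attempting GPU inference.
def gpu_status():
    try:
        import torch  # bundled with the ArcGIS Pro deep learning libraries
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "CUDA not available"
    # Report the device name and free/total memory for GPU 0 (gpuId=0 above).
    name = torch.cuda.get_device_name(0)
    free, total = torch.cuda.mem_get_info(0)
    return f"{name}: {free // 2**20} MiB free of {total // 2**20} MiB"

print(gpu_status())
```

If this reports little free VRAM on the 8 GB RTX 4060, an out-of-memory stall during tiled inference is a plausible cause of the parallel-processing timeout.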
ExecuteError                              Traceback (most recent call last)
In [1]:
Line 2:    arcpy.ia.DetectObjectsUsingDeepLearning(
File E:\Software\Arcgis_Pro\Resources\ArcPy\arcpy\ia\Functions.py, in DetectObjectsUsingDeepLearning:
Line 4127: return Wrapper(
File E:\Software\Arcgis_Pro\Resources\ArcPy\arcpy\sa\Utils.py, in swapper:
Line 45:   result = wrapper(*args, **kwargs)
File E:\Software\Arcgis_Pro\Resources\ArcPy\arcpy\ia\Functions.py, in Wrapper:
Line 4115: result = arcpy.gp.DetectObjectsUsingDeepLearning_ia(
File E:\Software\Arcgis_Pro\Resources\ArcPy\arcpy\geoprocessing_base.py, in <lambda>:
Line 512:  return lambda *args: val(*gp_fixargs(args, True))
Environment:
ArcGIS Pro version: 3.3
GPU: NVIDIA GeForce RTX 4060 Laptop GPU (8GB VRAM)
OS: Windows 11
CUDA toolkit: (default with ArcGIS Pro installation)
Deep learning framework: Installed via ArcGIS Pro Python environment
What I’ve tried:
The model is correctly trained, and inference works with processorType="CPU".
Inference fails only on the GPU; training on the GPU works fine.
I also tried adjusting tile_size, batch_size, and max_overlap_ratio, but the issue persists.
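To make those tile_size/batch_size sweeps repeatable, the semicolon-separated arguments string from the script can be generated by a small helper (a sketch; `make_arguments` is a hypothetical name, but the key/value pairs match the tool's arguments format used above):

```python
# Hypothetical helper: build the semicolon-separated "arguments" string for
# DetectObjectsUsingDeepLearning, so batch_size/tile_size sweeps are one-liners.
def make_arguments(batch_size=4, tile_size=256, padding=64, threshold=0.8):
    params = {
        "padding": padding,
        "batch_size": batch_size,
        "threshold": threshold,
        "return_bboxes": False,
        "test_time_augmentation": False,
        "merge_policy": "mean",
        "tile_size": tile_size,
    }
    return ";".join(f"{key} {value}" for key, value in params.items())

# Progressively smaller batch/tile combinations to try when GPU inference
# times out (GPU memory pressure can surface as a parallel-processing timeout).
for bs, ts in [(4, 256), (2, 256), (1, 128)]:
    print(make_arguments(batch_size=bs, tile_size=ts))
```

The defaults reproduce the exact arguments string from the failing run, so any difference in behavior can be attributed to the swept parameters alone.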
If anyone has encountered a similar problem or has suggestions (e.g., settings to tweak, known issues with RTX 4060 Laptop GPU), I’d greatly appreciate your help. Thank you in advance!