Core v9.2.0 + Server v1.0.0 #2531
Open
johngrimes wants to merge 612 commits into main from release/9.2.0
+113,951
−17,275
Conversation
Split the combined update.md into separate pages for better navigation:
- crud.md: Create, read, update, and delete operations
- batch.md: Batch operations for multiple resources

Added documentation for the FHIR read operation (GET by ID).
Adds an Export button alongside view execution results that kicks off the $viewdefinition-export async operation. Users can select output format (NDJSON, CSV, Parquet) and download exported files when complete. Works for both stored ViewDefinitions and inline/custom ViewDefinitions. Job status and download links are displayed inline on the page.
When async operations executed, they tried to recreate a RequestTag from ServletRequestDetails to look up their job. However, by the time the async task ran, Tomcat had recycled the original servlet request, causing an IllegalStateException. This fix introduces AsyncJobContext, a ThreadLocal holder that allows AsyncAspect to pass the current job directly to async operations. The job is set before joinPoint.proceed() and cleared in the finally block. All async operations now check AsyncJobContext first before falling back to the original RequestTag lookup.
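The server-side fix is in Java, but the set-before/clear-in-finally pattern it describes can be sketched in TypeScript. The names below (Job, runWithJob) are hypothetical, not the actual Pathling classes:

```typescript
// Hypothetical sketch of the AsyncJobContext pattern: the job is set before
// the async operation runs and cleared in a finally block, so the operation
// can read it without reconstructing state from the recycled request.
interface Job {
  id: string;
}

class AsyncJobContext {
  private static current: Job | null = null;

  static set(job: Job): void {
    AsyncJobContext.current = job;
  }

  static get(): Job | null {
    return AsyncJobContext.current;
  }

  static clear(): void {
    AsyncJobContext.current = null;
  }
}

// Mirrors what AsyncAspect does around joinPoint.proceed(): set the job,
// run the operation, and always clear the context afterwards.
function runWithJob<T>(job: Job, operation: () => T): T {
  AsyncJobContext.set(job);
  try {
    return operation();
  } finally {
    AsyncJobContext.clear();
  }
}
```

The finally block is the important part: without it, a failing operation would leak the job into whatever runs next in the same context.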
Updates the export UI to display and download files using the actual filename from the server response URL, rather than constructing a synthetic name. Adds support for the ViewDefinition name field in view exports. When a view export is performed without an explicit name, the view's own name is now used as a fallback before generating a name from the resource type.
Documents the $viewdefinition-export operation for exporting ViewDefinition query results to files using the async pattern.
Add the viewdefinition-export operation to the system-level operations list so it appears in the server's CapabilityStatement.
…hook Moves the useMutation for saving ViewDefinitions from SqlOnFhir.tsx into a dedicated hook, following the same pattern as useViewDefinitions.
Replace three nearly-identical polling hooks (useImportJobPolling, useExportJobPolling, useViewExportJobPolling) with a single generic useJobPolling<TManifest> hook. Components now pass queryKey and pollFn as parameters. Also extracts common PollResult<T> type to reduce duplication in service files.
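The shape of such a generic polling helper can be sketched as follows; this is an illustrative standalone function, not the actual React hook, and the PollResult shape is assumed from the commit description:

```typescript
// Hypothetical sketch of a generic job-polling helper: callers supply a
// pollFn that reports either a pending status or a completed manifest.
type PollResult<TManifest> =
  | { status: "pending" }
  | { status: "complete"; manifest: TManifest };

async function pollUntilComplete<TManifest>(
  pollFn: () => Promise<PollResult<TManifest>>,
  intervalMs: number,
  maxAttempts: number,
): Promise<TManifest> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await pollFn();
    if (result.status === "complete") {
      return result.manifest;
    }
    // Wait before the next poll.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Polling timed out");
}
```

Because the manifest type and pollFn are parameters, one implementation serves import, export, and view-export jobs alike.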
Defines types for all server API operations including REST API, bulk export, import, import ping-and-pull, bulk submit, view run, view export, and async job handling. Includes a generic async job executor pattern for handling polling-based operations.
Introduces a dedicated API layer with comprehensive test coverage (151 tests) separating HTTP concerns from React hooks. Creates reusable hooks for async operations (bulk export, import, view export) with consistent start/cancel/status patterns. Adds useUnauthorizedHandler hook to deduplicate 401 error handling across pages. Removes unused job card/list components and legacy polling hooks.
Eliminates duplicate useMemo blocks in Export, Import, and Resources pages by computing sorted resource type names once in the hook.
Pages now use the numeric progress value directly from their async hooks (useBulkExport, useImport, useImportPnp, useViewExport) instead of maintaining separate local state and converting between string/number.
Add request state management to useAsyncJob, eliminating the need for page components to maintain separate request state and coordinate execution via useEffect. Introduces startWith(request) to trigger jobs and reset() to clear state. Updates all derived hooks (useBulkExport, useImport, useImportPnp, useBulkSubmit, useViewExport) and simplifies page components (Export, Import, SqlOnFhir) by removing coordination boilerplate.
ConformanceProvider.buildResources() was adding OperationDefinition twice: once in the general resource type loop with full CRUD interactions, and once explicitly with a read-only interaction. This caused the UI to send duplicate types in the _type parameter, triggering an IllegalStateException in ExportExecutor.applyResourceTypeFiltering().

Changes:
- Skip OperationDefinition in the general loop (it's handled separately)
- Add validation to reject duplicate _type values with a 400 error
- Add defensive deduplication in the UI's useServerCapabilities hook
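The two defensive layers above (server-side rejection, client-side deduplication) can be sketched like this; both function names are illustrative:

```typescript
// Server-side sketch: reject duplicate _type values (the real server
// responds with HTTP 400 in this case).
function validateResourceTypes(types: string[]): void {
  const seen = new Set<string>();
  for (const type of types) {
    if (seen.has(type)) {
      throw new Error(`Duplicate _type value: ${type}`);
    }
    seen.add(type);
  }
}

// Client-side sketch: deduplicate before sending, preserving first-seen order.
function deduplicateResourceTypes(types: string[]): string[] {
  return [...new Set(types)];
}
```

Doing both means neither layer has to trust the other: the UI never sends duplicates, and the server still fails loudly if another client does.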
Removed 31 useCallback wrappers that provided no benefit: simple functions that just wrapped other functions or set state without additional logic. Retained useCallback only where needed for hook dependencies or complex async functions.
Consolidates duplicate authenticated download code from Export.tsx and SqlOnFhir.tsx into a reusable hook that handles Bearer token injection and 401 error handling.
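The consolidated download logic can be sketched as a plain function with an injectable fetch, which also makes it testable without a network. The signature and FetchLike type are hypothetical simplifications, not the actual hook's API:

```typescript
// Minimal stand-in for the parts of fetch this sketch needs.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ ok: boolean; status: number }>;

// Hypothetical sketch of the shared download logic: inject the Bearer token
// and route 401 responses to a dedicated unauthorized handler.
async function downloadWithAuth(
  url: string,
  token: string,
  onUnauthorized: () => void,
  fetchFn: FetchLike,
): Promise<{ ok: boolean; status: number }> {
  const response = await fetchFn(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (response.status === 401) {
    onUnauthorized();
  }
  return { ok: response.ok, status: response.status };
}
```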
Move manual polling logic from BulkSubmit.tsx into a dedicated hook following the same pattern as useBulkExport. This simplifies the page component and improves consistency across async job operations.
Eliminates the mapExportLevel function by using identical string values for both types. Changes 'patient-type' to 'all-patients' and 'patient-instance' to 'patient'.
The inputSource field was collected by the ImportPnpForm but never passed through the hook and API layers, causing "Missing required parameter: inputSource" errors when submitting FHIR server imports.
The inputSource parameter is not part of the SMART Bulk Data Import PnP specification and was incorrectly required by the server. This change removes the parameter from both import and import-pnp operations while maintaining backwards compatibility by accepting but ignoring the parameter in incoming requests.

Server changes:
- Remove inputSource from ImportRequest and ImportPnpRequest records
- Update validators to accept but ignore inputSource parameter
- Make inputSource nullable in ImportManifest for JSON format
- Update all tests to reflect the optional nature of inputSource

UI changes:
- Remove inputSource field from import forms
- Remove inputSource from API types and service calls
- Update tests to not include inputSource
Replace array index with output.url as the React key for list items to avoid potential rendering issues. Also reformat imports.
The backend does not support static export type, so the UI option was misleading. The form now always uses dynamic export type.
- Remove inputSource parameter (accepted but ignored by server)
- Add missing save modes: append, ignore, error
- Fix import-pnp parameter name from saveMode to mode
- Add inputFormat parameter to import-pnp documentation
- Use valueCoding instead of valueCode for Coding parameters
- Update Python examples to reflect correct parameter usage
Consolidates the save mode selection into a shared SaveModeField component used by both ImportForm and ImportPnpForm. Both forms now support all five save modes (overwrite, merge, append, ignore, error). Also fixes a bug where the save mode and input format selections were being ignored - the values are now passed through to the API calls instead of being hardcoded.
Moves save mode field to end of forms for consistent layout. Removes http:// and https:// from supported URL schemes in ImportForm as these are not typically used for bulk data imports.
Refactored useViewRun from a declarative useQuery hook to an imperative useMutation-based hook that handles stream consumption and NDJSON parsing internally. SqlOnFhir.tsx now uses the hook instead of manual state management.

- Created ui/src/utils/ndjson.ts with shared stream processing helpers
- Updated useViewRun to return ViewDefinitionResult with execute/reset API
- Removed manual execution state from SqlOnFhir.tsx
- Deleted unused services/sqlOnFhir.ts
- Cleaned up duplicate type definitions
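The core of an NDJSON helper like the one described can be sketched in a few lines; this is an illustrative standalone function, not the actual contents of ui/src/utils/ndjson.ts:

```typescript
// Hypothetical NDJSON parsing sketch: split on newlines and parse each
// non-empty line as a standalone JSON document.
function parseNdjson<T>(text: string): T[] {
  return text
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as T);
}
```

A real streaming version would additionally buffer the trailing partial line between chunks, since a chunk boundary can fall mid-record.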
- Make disabled prop optional with default value
- Fix unhandled promise warning on cancel
- Remove redundant disabled={false} prop
Adds 32 unit tests covering the main code paths in BulkSubmitExecutor to improve new code coverage from 78.7% towards the 80% quality gate threshold.

Tests cover:
- downloadManifestJob: job registration, state transitions, OAuth token handling, error scenarios, and file downloads
- abortSubmission: async job cancellation and state updates
- abortManifestJob: individual job abortion
- importSubmission: import execution, completion marking, and error handling

Uses WireMock for HTTP endpoint stubbing and Mockito for dependency mocking. BulkSubmitExecutor coverage improved from 63% to 90%.
Adds a collapsible "Export options" section to the import from FHIR server form, allowing users to configure bulk export parameters that are passed through to the remote server. Supported options include resource types, since/until timestamps, elements, output format, type filters, and include associated data. Extracts common export options into a reusable ExportOptions component shared between ExportForm and ImportPnpForm.
Remove validation that required authentication credentials to be configured, allowing imports from publicly accessible FHIR endpoints that don't require authentication.
Remove the _outputFormat parameter from the $import-pnp operation and instead automatically use the inputFormat value when invoking the remote bulk export operation. This eliminates redundant configuration and prevents mismatches between the requested export format and the format Pathling expects to read.
Remove SpotBugs Maven plugin and FindSecBugs security plugin from the build process. Deletes the exclusion filter configuration file.
Aligns with the server change that now derives outputFormat from inputFormat in the $import-pnp operation.
Moves type definitions from the central hooks.d.ts file to their corresponding hook implementation files. This enables ESLint's no-unused-vars rule to detect unused type exports in .ts files, which doesn't work for .d.ts declaration files. Also adds explicit @typescript-eslint/no-unused-vars rule to the ESLint config and fixes a pre-existing bug in ImportCard.tsx where an invalid outputFormat property was being passed.
Co-locate type definitions with their implementations for better maintainability. Types are now defined in the same files as the functions that use them, with shared types re-exported from index.ts.
Adds NDJSON, Parquet, and Delta output format selection to the bulk export UI. The output format dropdown is now always visible in the export form rather than being hidden behind extended options.
Extends the bulk export operation to support Parquet and Delta Lake output formats in addition to NDJSON. The format can be specified via the _outputFormat parameter using MIME types or shorthand names:
- NDJSON: application/fhir+ndjson, application/ndjson, ndjson (default)
- Parquet: application/x-pathling-parquet, parquet
- Delta: application/x-pathling-delta+parquet, delta

The DataSinkBuilder already supported all three formats, so changes were confined to the server module validation and execution layers.
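The _outputFormat mapping this commit describes can be sketched as a lookup table. The accepted values are taken from the commit message; the function itself is illustrative (the real validation lives in the Java server module, and a later commit in this PR removes Delta from bulk export again):

```typescript
type ExportFormat = "ndjson" | "parquet" | "delta";

// MIME types and shorthand names accepted for _outputFormat, per the
// commit message above.
const OUTPUT_FORMATS: Record<string, ExportFormat> = {
  "application/fhir+ndjson": "ndjson",
  "application/ndjson": "ndjson",
  "ndjson": "ndjson",
  "application/x-pathling-parquet": "parquet",
  "parquet": "parquet",
  "application/x-pathling-delta+parquet": "delta",
  "delta": "delta",
};

function resolveOutputFormat(outputFormat?: string): ExportFormat {
  if (outputFormat === undefined) {
    return "ndjson"; // default when _outputFormat is omitted
  }
  const format = OUTPUT_FORMATS[outputFormat];
  if (format === undefined) {
    throw new Error(`Unsupported _outputFormat: ${outputFormat}`);
  }
  return format;
}
```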
Extract shared helpers to eliminate duplicated patterns:
- Add e2e/helpers/mockHelpers.ts with mockMetadata and createOperationOutcome utilities for E2E tests
- Add src/hooks/useAsyncJobCallbacks.ts for memoising async job callback options

Update E2E tests (export, import, resources) and hooks (useImport, useImportPnp, useBulkExport, useViewExport, useBulkSubmit) to use shared utilities. Set jscpd threshold to 5% to enforce the duplication limit. Duplication reduced from 5.06% to 4.48%.
NotModifiedException errors from BaseInterceptorService are normal 304 responses when client ETags match, not actual errors.
Delta Lake format is not suitable for FHIR Bulk Data import because it requires a directory structure with a _delta_log/ transaction log. FHIR Bulk Data manifests list individual file URLs, not directories, so without the transaction log Delta provides no benefit over plain Parquet. DeltaSource remains available in the library API for direct use cases where callers control the directory structure.
Parquet exports were failing to download with a 500 error because Spark writes to directories containing multiple part files, but the download endpoint expected individual files.

Changes:
- ParquetSink now flattens partitioned directories into individual files named {resourceType}.{partId}.parquet (e.g. Patient.00000.parquet)
- Removed Delta format from bulk export since it requires directory structure (_delta_log/) that cannot be flattened for download
- Updated UI to remove Delta from output format options
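The flattened-file naming scheme above is simple enough to sketch directly; the zero-padding width is inferred from the Patient.00000.parquet example in the commit message:

```typescript
// Illustrative sketch of the {resourceType}.{partId}.parquet naming scheme,
// assuming five-digit zero-padded part numbers as in the example above.
function flattenedParquetName(resourceType: string, partId: number): string {
  return `${resourceType}.${String(partId).padStart(5, "0")}.parquet`;
}
```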
Remove AlertDialog.Action wrapper from delete button to prevent automatic dialog closure. The dialog now remains open to show the loading spinner during the delete operation and only closes on success. On error, the dialog stays open so the user can retry. Update e2e test to use direct locator since the modal dialog marks content outside it as aria-hidden, preventing accessibility queries.
Updates test to match commit 8bba194 which removed Delta format from the import operation.
Implements a limited version of the FHIRPath resolve() function that extracts and returns type information from Reference elements, supporting type checking with the 'is' operator without performing actual resource resolution.

Type extraction:
- Prioritizes Reference.type field when present
- Falls back to parsing type from Reference.reference string
- Supports relative, absolute, and canonical reference formats
- Works with contained and logical references when type field is provided
- Validates extracted types against FHIR resource type pattern

Key features:
- Type checking: resolve() is Patient, resolve() is Organization
- Array handling: preserves array alignment using zip_with()
- Filtering: automatically removes unresolvable references (nulls)
- No traversal: prevents field access on resolved references

Implementation:
- ReferenceValue: utility class for type extraction from Reference columns
  - Static method extractTypeFromColumns() for singular extraction logic
  - Vectorized extraction using zip_with() for arrays
  - Both referenceColumn and typeColumn required (@nonnull)
- ResolvedReferenceCollection: represents resolved references with type info
  - Dynamic type information for 'is' operator support
  - Prevents traversal to child fields
  - Supports ofType() filtering
- ReferenceCollection.resolve(): main entry point for resolution
  - Uses getField() to extract reference and type columns
  - Preserves array alignment (no empty value removal)
  - Filters nulls after type extraction

Testing:
- 86 comprehensive DSL test cases in ResolveFunctionDslTest
- Tests cover: basic extraction, type priority, edge cases, HAPI resources, collections, filtering, and integration scenarios
- Added fhirReference() helper to FhirPathModelBuilder for consistent Reference creation in tests (ensures both fields always present)

Addresses #2522

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
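The type-extraction rules above (prefer Reference.type, otherwise parse the reference string, validate against a resource type pattern) can be sketched in TypeScript. The real implementation is vectorized over Spark columns; this scalar sketch with an assumed resource-type regex only illustrates the precedence logic:

```typescript
// Simplified FHIR Reference shape for this sketch.
interface FhirReference {
  reference?: string;
  type?: string;
}

// Assumed approximation of the FHIR resource type pattern.
const RESOURCE_TYPE_PATTERN = /^[A-Z][A-Za-z]+$/;

function extractType(ref: FhirReference): string | null {
  // Reference.type takes priority when present and valid.
  if (ref.type !== undefined && RESOURCE_TYPE_PATTERN.test(ref.type)) {
    return ref.type;
  }
  if (ref.reference === undefined) {
    return null;
  }
  // Handles relative ("Patient/123"), absolute
  // ("https://example.com/fhir/Patient/123") and canonical
  // ("...|version") forms by taking the second-to-last path segment.
  const segments = ref.reference.split("|")[0].split("/");
  if (segments.length < 2) {
    return null;
  }
  const candidate = segments[segments.length - 2];
  return RESOURCE_TYPE_PATTERN.test(candidate) ? candidate : null;
}
```

References that yield null here correspond to the unresolvable references that resolve() filters out.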
Allow $import-pnp to work without explicit PnP configuration by using sensible defaults. This enables the operation to be used against unauthenticated FHIR servers without requiring any configuration. When no PnP configuration is provided, the executor now creates a default PnpConfiguration with null authentication credentials, allowing unauthenticated access to public FHIR servers.
Async operations (e.g. $export, $import-pnp) return 202 Accepted responses that can be cached by reverse proxies like Varnish. After a server restart, the in-memory job registry is cleared, but clients may still have cached 202 responses with stale job IDs, resulting in 404 "Job ID not found" errors. This fix introduces instance-specific ETags for async responses. Each server instance generates a unique ID at startup, which is included in the ETag for 202 responses. When a client validates a cached 202 response against a restarted server, the mismatched instance ID causes the server to return a fresh response with a new job ID instead of a 304 Not Modified.
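The instance-scoped ETag scheme can be sketched as a pair of small functions; the actual Java implementation and its ETag layout are not shown in this PR description, so the format below is an assumption:

```typescript
import { randomUUID } from "node:crypto";

// Regenerated on every process start, so tags from a previous instance
// can never validate against this one.
const instanceId = randomUUID();

// Hypothetical weak ETag embedding the instance ID alongside the job ID.
function asyncJobEtag(jobId: string): string {
  return `W/"${instanceId}:${jobId}"`;
}

// Only a tag minted by this instance may produce a 304 Not Modified;
// anything else forces a fresh 202 response with a new job ID.
function matchesCurrentInstance(ifNoneMatch: string): boolean {
  return ifNoneMatch.startsWith(`W/"${instanceId}:`);
}
```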
No description provided.