⚠️ Review failed: the pull request was closed or merged during review.

📝 Walkthrough

Adds a complete cloud backup subsystem: new Rust UniFFI cloud backup manager and persistence, CSPP encryption modules, keychain/bootstrap and migration changes, iOS/Android FFI and platform implementations (iCloud + Passkeys), extensive SwiftUI screens and helpers, and related docs/tests.
Sequence diagram:

```mermaid
sequenceDiagram
    participant UI as Platform UI
    participant SwiftMgr as CloudBackupManager (Swift)
    participant RustMgr as RustCloudBackupManager (Rust)
    participant Passkey as PasskeyProvider (Platform)
    participant Storage as CloudStorage (iCloud)
    participant DB as Database (Encrypted)
    rect rgba(200,150,255,0.5)
        UI->>SwiftMgr: user triggers Enable / Restore / Verify
        SwiftMgr->>RustMgr: dispatch(action)
        RustMgr->>Passkey: authenticate/discover PRF
        Passkey-->>RustMgr: prf key / credential id (or failure)
        RustMgr->>Storage: list/download/upload/delete backups
        RustMgr->>DB: persist state / enqueue pending uploads
        RustMgr-->>SwiftMgr: reconcile(messages)
        SwiftMgr->>UI: update UI
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Force-pushed from 19603ae to e797c50.
Use an independent random key stored in keychain for local DB encryption instead of deriving it from the CSPP master key. This prevents cloud backup restore from invalidating the local encryption key and wiping all local databases.

- Add get/create_local_encryption_key() to Keychain (write-once)
- Rename encrypted DB files: cove.encrypted.db, wallet_data.encrypted.json.redb
- Rewrite redb migration to source≠dest flow (no two-phase-swap)
- Simplify restore: save master key without replacing local encryption
- Remove replace_encryption_key(), delete_master_key(), Database::reinitialize()
- Fix .expect() panic in wallet_data database_location()
- Wipe BDK store files in wipe_local_data (B2)
- Load persisted cloud backup state on startup via syncPersistedState (B3)
- Add cloud restore option to CatastrophicErrorView with CloudKit probe (B1)
- Add DeviceRestoreView orchestrating wipe → bootstrap → restore (B1)
- Guard against concurrent enable/restore operations (I5)
- Don't mark Enabled when all wallets fail to restore (C2)
- Propagate collect_existing_fingerprints error instead of swallowing (C3)
- Error on partial keychain state in get_local_encryption_key (C4)
- Write-side hardening: cleanup orphaned cryptor on second save failure (C4)
- Bootstrap handles partial keychain state: routes to DatabaseKeyMismatch or purges and recreates when no DB exists (C4)
- Zeroize plaintext JSON in wallet_crypto encrypt/decrypt (I1)
- Use .allKeys save policy in CloudKit uploads for idempotent retry (I2)
- Add 120s timeout to passkey semaphore (I3)
- Log encryption key mismatch in release builds (I4)
- Log is_wallet_duplicate errors (I6)
- Distinguish "record exists but data nil" from "not found" in CloudKit (I7)
On startup, if cloud backup state is disabled (fresh install or upgrade), probe CloudKit for an existing backup. If found, show a simple CloudRestoreOfferView with restore/skip options before entering the normal app flow.
Increment Android versionCode from 17 to 18 and update iOS CURRENT_PROJECT_VERSION from 64 to 67 in the Xcode project. These changes align build numbers for a new release while leaving Android versionName (1.3.0) unchanged.
Add request.prf = .checkForSupport during createPasskey so the authenticator knows PRF will be needed for assertions. Also log PRF support status after registration and log when assertion.prf is nil to help diagnose PRF failures.
When authenticateWithPrf fails (e.g. passkey was deleted externally), clear the cached credential_id and prf_salt from keychain so the next retry creates a fresh passkey instead of trying to sign in with a credential that no longer exists.
Runs enable/restore on tokio's blocking thread pool so the tokio reactor is available if any code inside needs it.
Replace verbose .map_err(|e| Error(e.to_string())) with map_err_str across cloud_backup_manager, derive strum::EnumIter on WalletMode for cleaner iteration, add iCloud/CloudKit entitlements, and document map_err_str preference in CLAUDE.md
Replace verbose .map_err(|e| ...) patterns with cove_util::ResultExt helpers (map_err_prefix / map_err_str) for cleaner, consistent error mapping across modules. Add missing use statements where needed, centralize and move the CloudBackupError enum earlier in cloud_backup_manager.rs (removing the duplicate at the bottom), and make small related fixes (manifest/version handling, whitespace). Also update CLAUDE.md to document the preferred ResultExt helpers. Affected files: CLAUDE.md, rust/crates/cove-device/src/keychain.rs, rust/src/auth.rs, rust/src/backup/crypto.rs, rust/src/backup/verify.rs, rust/src/manager/cloud_backup_manager.rs.
Clear per-wallet DATABASE_CONNECTIONS cache in reinit_database() so restored wallets don't get stale handles pointing at deleted files. Add 120s timeout to observeRestoreCompletion() polling loop to prevent infinite spinner if the Rust side gets stuck.
Rust: Improve legacy DB migration recovery and detection — handle interrupted two-phase swaps (.enc.tmp + .bak), restore from .bak or finish the .enc.tmp rename as appropriate, and avoid re-migrating databases that are already encrypted by the old migration (rename them instead). Added needs_legacy_rename helper, updated counting and migration flows to consider both migration and simple rename cases, and added safety logging and error handling.

iOS: Surface iCloud backup check failures — add cloudCheckError state, present an alert when the cloud backup check throws, and switch the cloud check to a try/catch flow that logs failures and sets the error message while still finishing bootstrap.
Check all networks and modes when deciding whether to offer cloud restore on startup, preventing incorrect restore offers when wallets exist on a non-selected network or in decoy mode. Verify main DB health before deleting legacy .bak files, restoring from backup if the main DB is corrupt instead of silently dropping the last recoverable copy.
Auto-backup new wallets as they're created via backup_new_wallet() called from save_new_wallet_metadata. Add sync button on the cloud backup detail screen for manually uploading unsynced wallets. Use the cloud manifest as source of truth: refresh_cloud_backup_detail downloads the manifest from CloudKit to determine real backup status, and do_sync_unsynced_wallets compares against the manifest rather than the local cache. Extract all_local_wallets() helper to flatten the repeated network×mode double-loop pattern across the file.
Switch hasCloudBackup() and downloadRecord() from the convenience db.fetch(withRecordID:) API to CKFetchRecordsOperation to match how uploads already work. Add mapFetchError helper that logs raw CKError code, domain, and userInfo for diagnosing container errors. Handle operation-level failures in fetchRecordsResultBlock.
The local cache (cloud_backup_wallets in global_config) was falsely reporting wallets as backed up without verifying against the cloud. Remove the cache entirely and use the cloud manifest to determine real backup status. Add cloud_only_count to CloudBackupDetail and fetch_cloud_only_wallets() to download and decrypt orphan entries. Replace cloud_backup_detail() with is_cloud_backup_enabled() since the detail screen now loads everything from the cloud.
Show a progress indicator while downloading the cloud manifest instead of falsely displaying cached backup status. Show error state if the cloud read fails. Add cloud-only section showing count of wallets in the cloud but not on this device, with a Get More Info button that downloads and decrypts their metadata. Add SyncFailed reconcile message to separate sync errors from global backup state.
When the manifest is not found (e.g. switching from Development to Production CloudKit), automatically re-upload all local wallets instead of showing a generic error. For other cloud errors (access denied, network unavailable), show the specific error message with a retry button. Extract upload_all_wallets_and_manifest helper shared by enable and re-upload flows.
CloudKit required manual schema deployment and had bundle ID / container mismatch issues on TestFlight. iCloud Drive (ubiquity container) eliminates all of this with zero CloudKit Dashboard setup needed.

- Add ICloudDriveHelper with NSFileCoordinator, NSMetadataQuery wrappers
- Files stored in Data/ (hidden from user) with SHA256-hashed filenames
- Uploads block until confirmed uploaded (waitForUpload polling)
- downloadManifest uses NSMetadataQuery for authoritative NotFound
- hasCloudBackup checks both manifest AND master key existence
- Update entitlements: CloudDocuments + ubiquity container
- Replace CKContainer.accountStatus with ubiquityIdentityToken
- Add sync health indicator to cloud backup detail header
…agement

- Add "Create New Passkey" recovery action on cancelled/failed verification for when the original passkey was deleted
- Persist verification state as CloudBackup::Unverified in redb so the main settings screen shows "Cloud Backup Unverified" across app restarts
- Remove passkey existence check from startup integrity check to prevent brief Face ID flash on launch
- Auto-sync unsynced wallets during deep verify and startup integrity check
- Add retry fallback to backup_new_wallet via full sync on failure
- Deduplicate manifest record_ids to prevent duplicate wallet entries
- Add restore and delete actions for cloud-only wallets with confirmation
- Increase iCloud Drive upload timeout from 10s to 60s
- Refresh detail after cloud wallet restore/delete operations
- Add namespace ID derivation (HKDF-SHA256 from master key) for directory isolation
- Replace flat Data/*.json layout with Data/cspp-namespaces/{namespace_id}/ structure
- Remove BackupManifest, use filesystem as source of truth via NSMetadataQuery
- Fix double-hash bug in wallet filenames (recordId is already SHA256)
- Fix /var vs /private/var symlink mismatch in NSMetadataQuery path comparisons
- Add legacy flat-file detection in hasAnyCloudBackup for old-format discovery
- Simplify DeviceRestoreView to additive restore (no wipe/bootstrap)
- Export NAMESPACES_SUBDIRECTORY constant from Rust for cross-platform consistency
@greptile-apps review
Change DeepVerificationFailure return types to Box<DeepVerificationFailure> in cloud backup verification code to simplify ownership/size handling. Wrap error constructions with Box::new(...) and adjust the caller in finalize flow to dereference the boxed failure when passing to apply_failed_verification. Updates localized to rust/src/manager/cloud_backup_manager/verify.rs (error mapping from master key loading, missing master key, and wallet deserialization).
Actionable comments posted: 12
♻️ Duplicate comments (3)
rust/src/database/migration/bdk/recovery.rs (1)

107-113: ⚠️ Potential issue | 🔴 Critical

Restore `*.db.bak` together with its auxiliary files. Both backup-restore branches still rename only `*.db.bak` back to `*.db`. If the backup has pending pages in `*.db.bak-wal`, the recovered wallet silently rolls back to an older state. Please reuse the same SQLite-bundle rename/checkpoint logic here as well. As per coding guidelines, "Verify correct functionality and logic consistency."

Also applies to: 135-143
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/database/migration/bdk/recovery.rs` around lines 107 - 113, The restore branch that currently only renames bak_path -> db_path must also restore auxiliary SQLite bundle files and run the same checkpoint/rename logic used elsewhere: when moving bak_path to db_path also move/rename the corresponding wal and shm files (e.g., bak_path.with_extension("db-wal") -> db_path.with_extension("db-wal") and bak_path.with_extension("db-shm") -> db_path.with_extension("db-shm")) and perform the same checkpoint/sync sequence used by the existing tmp-path recovery logic instead of a lone std::fs::rename; apply the identical fix to the other restore branch that mirrors this behavior so both branches consistently restore the full SQLite bundle (bak, -wal, -shm) and ensure no silent rollback.

ios/Cove/Flows/SettingsFlow/CloudBackupDetailScreen.swift (1)
138-143: ⚠️ Potential issue | 🟠 Major

Keep detail content visible after verification is cancelled. If `manager.detail` is already loaded, Line 138 still hides it when verification becomes `.cancelled`, leaving only the verification section. Cancel should stop the verification flow, not blank the sync/cloud-only detail state.

🪄 Minimal fix

```diff
- } else if let detail = manager.detail, !isCancelled {
+ } else if let detail = manager.detail {
```

As per coding guidelines, ios/Cove/**/*.swift: "Review SwiftUI view code for proper layout, best practices."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/Cove/Flows/SettingsFlow/CloudBackupDetailScreen.swift` around lines 138 - 143, The current conditional hides the DetailFormContent when verification becomes .cancelled because it checks both manager.detail and !isCancelled; change that so loaded details remain visible after cancellation: update the branch that constructs DetailFormContent (the clause using manager.detail and isCancelled) to only require manager.detail (remove the !isCancelled check) so DetailFormContent renders whenever manager.detail is present, while leaving the separate verification UI/flow handling for cancelled state intact.

rust/src/manager/cloud_backup_manager.rs (1)
564-624: ⚠️ Potential issue | 🟠 Major

Make restore-operation checks and mutations atomic. These helpers still validate `restore_operation_id` and mutate state/DB in separate steps. A cancel or newer restore can still land between them, so stale work can persist `PersistedCloudBackupState` or push restore UI updates after cancellation.
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 564 - 624, The three non-persist helpers validate the restore_operation_id and then mutate runtime state separately, allowing a race; update set_status_for_restore_operation, set_restore_progress_for_restore_operation, and set_restore_report_for_restore_operation to call self.ensure_current_restore_operation(operation_id)? again immediately after performing the mutation (set_status, set_restore_progress, set_restore_report) and return an error if the second check fails so you don't apply stale updates; keep persist_cloud_backup_state_for_restore_operation as-is (it already re-checks after the DB write).
🧹 Nitpick comments (1)
ios/Cove/SidebarContainer.swift (1)
35-46: Freeze the gesture-start state before calling `updateSidebarState()`.

`updateSidebarState()` rewrites `dragStartedWithSidebarOpen`, so the second branch in `onDragEnded` is reading post-snap state, not the state this gesture started with. On the open→close path that means the "started closed" branch runs a second time; capture `startedOpen` first and make this an if/else.

♻️ Proposed refactor
```diff
 let threshold = sideBarWidth * 0.3
 let predictedEnd = value.predictedEndTranslation.width
 let currentOffset = totalOffset
+let startedOpen = dragStartedWithSidebarOpen
@@
 withAnimation(.spring(response: 0.3, dampingFraction: 0.8)) {
     // started open
-    if dragStartedWithSidebarOpen {
+    if startedOpen {
         // started open - closing requires dragging below 70% (196px for 280px width)
         // this means we dragged 30% towards closed
         let closeThreshold = sideBarWidth - threshold
@@
     } else {
         // snap back to open
         updateSidebarState(isVisible: true, animated: false)
     }
-}
-
-// started closed
-if !dragStartedWithSidebarOpen {
+} else {
+    // started closed - opening requires dragging past 30% (84px for 280px width)
     if offset > threshold || predictedEnd > threshold {
         // snap to open
         updateSidebarState(isVisible: true, animated: false)
     } else {
         // snap back to closed
```

Also applies to: 58-96
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/Cove/SidebarContainer.swift` around lines 35 - 46, The gesture handler is reading dragStartedWithSidebarOpen after updateSidebarState mutates it, so freeze the gesture-start state into a local (e.g., let startedOpen = dragStartedWithSidebarOpen) at the top of onDragEnded, then use that local for the conditional branches instead of reading dragStartedWithSidebarOpen; ensure you call updateSidebarState(isVisible:..., animated:...) only once per gesture flow and convert the two branches into an if/else based on startedOpen to avoid re-running the same branch after updateSidebarState sets app.isSidebarVisible or dragStartedWithSidebarOpen.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/ICLOUD_DRIVE.md`:
- Around line 48-52: The fenced log block containing the lines starting with
"[ERROR] couldn't fetch remote operation IDs: NSError: Cocoa 257" and the
subsequent daemon error should include a language identifier to satisfy
markdownlint; update the triple-backtick fence that wraps that log sample to use
"text" (i.e., replace ``` with ```text) so the block is properly annotated.
In `@ios/Cove/CloudBackupManager.swift`:
- Around line 155-164: The delayed dialog scheduling in CloudBackupManager
(using DispatchQueue.main.asyncAfter with Self.passkeySheetDismissDelay to set
showExistingBackupWarning and showPasskeyChoiceDialog) should be converted to
cancellable DispatchWorkItem instances: create stored properties (e.g.,
existingBackupWorkItem and passkeyDiscoveryWorkItem) and when scheduling, assign
a DispatchWorkItem that sets the boolean (capturing [weak self]) and submit it
via DispatchQueue.main.async(execute:). On any new operation/state change (the
method that handles state transitions or starts a new enable/repair attempt),
cancel and nil the previously stored work items before scheduling new ones so a
late callback from a previous attempt cannot flip the UI flags.
In `@ios/Cove/CoveApp.swift`:
- Around line 245-250: The current guard in
shouldRunCloudRestoreCheck(appManager:) only allows
CloudBackupManager.shared.status == .disabled to proceed, which wrongly excludes
.passkeyMissing, .unsupportedPasskeyProvider, and .error; update the guard to
accept all statuses that indicate a potentially restorable state (e.g., change
the pattern to guard case .disabled, .passkeyMissing,
.unsupportedPasskeyProvider, .error = CloudBackupManager.shared.status else {
return false } or implement a CloudBackupManager.status.isPotentiallyRestorable
computed property and use that) so those states don't bypass the startup
restore/recovery flow.
In `@ios/Cove/CoveMainView.swift`:
- Around line 254-264: The computed gates canPresentMissingPasskeyAlert and
canPresentCloudBackupVerificationPrompt currently only check phase, showCover,
app.alertState, app.sheetState and showMissingPasskeyAlert; update them to also
block when any local/settings-level modals or the CloudBackupDetailScreen
verification UI are active (i.e., include checks against the SettingsFlow local
modal booleans and the CloudBackupDetailScreen verification flag or, better,
add/consume a single aggregated isAnyModalPresented boolean on the app/state and
use that in both canPresentMissingPasskeyAlert and
canPresentCloudBackupVerificationPrompt) so the root missing-passkey
alert/verification cover will not stack on top of those flows.
In `@ios/Cove/FFI/CloudStorageAccessImpl.swift`:
- Around line 23-25: Replace the hardcoded record id in downloadMasterKeyBackup
with the exported constant by calling csppMasterKeyRecordId() instead of
constructing "masterkey-\(namespace)"; specifically update the function
downloadMasterKeyBackup to pass csppMasterKeyRecordId() into helper.downloadFile
so the recordId matches the Rust constant used by
ICloudDriveHelper.backupFileURL and restores the master-key special-case logic.
In `@ios/Cove/Flows/Onboarding/OnboardingContainer.swift`:
- Line 17: Change the direct reconciler capture to a weak reconciler to avoid a
retain cycle by calling rust.listenForUpdates(reconciler: WeakReconciler(self));
to do that, make OnboardingManager conform to AnyReconciler by adding typealias
Message = OnboardingReconcileMessage and implementing func
reconcileMany(messages: [OnboardingReconcileMessage]) { messages.forEach {
reconcile(message: $0) } }, and add the compatibility extension extension
WeakReconciler: OnboardingManagerReconciler where Reconciler ==
OnboardingManager {} so WeakReconciler can be passed into listenForUpdates.
In `@ios/Cove/Flows/SettingsFlow/MainSettingsScreen.swift`:
- Around line 923-927: The onboarding handler currently dispatches
CloudBackupManager.shared.dispatch(action: .enableCloudBackupNoDiscovery) which
bypasses the discovery/passkey path; update the CloudBackupEnableOnboardingView
onEnable closure to dispatch the discovery-enabled action (e.g.,
.enableCloudBackup) or invoke the same enable flow used by the "Passkey Options"
alert so the existing-passkey discovery runs; keep the sheetState change
(sheetState = .none) but replace the .enableCloudBackupNoDiscovery call with the
discovery-enabled action or shared enable method used elsewhere (refer to
CloudBackupEnableOnboardingView and CloudBackupManager.shared.dispatch).
In `@rust/src/database/migration/bdk/recovery.rs`:
- Around line 116-126: The rename of tmp_path to db_path can leave behind or
orphan WAL/SHM files (e.g., files named by tmp_path with "-wal" / "-shm"), so
before performing std::fs::rename(&tmp_path, db_path) in this recovery block,
open the temp SQLite bundle referred to by tmp_path and run a checkpoint (PRAGMA
wal_checkpoint(TRUNCATE)) to flush and remove WAL/SHM, or atomically move the
companion files by renaming tmp_path-wal → db_path-wal and tmp_path-shm →
db_path-shm (or remove them if checkpointed); ensure you update/remove the
auxiliary files consistently and use the existing super::super::log_remove_file
helper when deleting, keeping tmp_path, bak_path, and db_path logic intact.
In `@rust/src/manager/cloud_backup_detail_manager.rs`:
- Around line 153-187: The NotEnabled branch in handle_start_verification leaves
the object stuck in Verifying; update the DeepVerificationResult::NotEnabled arm
to restore state (e.g., call self.set_verification(VerificationState::Idle) and
also self.set_recovery(RecoveryState::Idle) like in PasskeyMissing) so
verification/recovery are reset when NotEnabled is returned.
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 887-889: The discard_pending_enable_cloud_backup routine currently
only calls clear_pending_enable_session(), but when
do_enable_cloud_backup_no_discovery previously created a local master key via
get_or_create_master_key() the key must be removed as well; update
discard_pending_enable_cloud_backup to also roll back/delete the newly created
local master key (use the existing master-key removal API or implement a
remove_local_master_key() call), so that has_local_master_key becomes false and
subsequent enable logic will perform namespace discovery; ensure you reference
and reverse whatever persistent state get_or_create_master_key() modified in the
no-local-key code path while still clearing PendingEnableSession.
In `@rust/src/manager/cloud_backup_manager/ops.rs`:
- Around line 677-684: The match in try_match_namespace_with_passkey currently
maps NamespaceMatchOutcome::UserDeclined to CloudBackupError::PasskeyMismatch
which incorrectly drives do_restore_from_cloud_backup down the local-master-key
fallback path; change the mapping so UserDeclined returns a distinct
cancellation error (e.g. CloudBackupError::PasskeyCancelled or
CloudBackupError::UserDeclined) instead of PasskeyMismatch, and then update
do_restore_from_cloud_backup to check for that cancellation-specific error
(CloudBackupError::PasskeyCancelled/CloudBackupError::UserDeclined) before
attempting the local-key fallback so a user-cancelled picker short-circuits the
restore flow.
In `@rust/src/manager/cloud_backup_manager/pending.rs`:
- Around line 67-89: The current dedupe builds known_record_ids from all
existing (namespace_id, record_id) pairs, which prevents re-enqueuing a record
when an earlier confirmed PendingCloudUploadItem exists; change the logic in the
block that constructs known_record_ids (and the loop that adds items) to only
treat items with non-confirmed verification as existing (i.e., filter out items
whose verification state is Confirmed) so new uploads are allowed, and
additionally when encountering an existing item that is Confirmed (match via
PendingCloudUploadItem, CloudUploadKind::BackupBlob and namespace_id), update
that existing item's verification to CloudUploadVerificationState::Pending and
refresh enqueued_at (or replace the item) instead of skipping; reference
PendingCloudUploadItem, CloudUploadVerificationState,
CloudUploadKind::BackupBlob, and has_pending_cloud_upload_verification() to
ensure consistency with how pending/confirmed items are inspected.
---
Duplicate comments:
In `@ios/Cove/Flows/SettingsFlow/CloudBackupDetailScreen.swift`:
- Around line 138-143: The current conditional hides the DetailFormContent when
verification becomes .cancelled because it checks both manager.detail and
!isCancelled; change that so loaded details remain visible after cancellation:
update the branch that constructs DetailFormContent (the clause using
manager.detail and isCancelled) to only require manager.detail (remove the
!isCancelled check) so DetailFormContent renders whenever manager.detail is
present, while leaving the separate verification UI/flow handling for cancelled
state intact.
In `@rust/src/database/migration/bdk/recovery.rs`:
- Around line 107-113: The restore branch that currently only renames bak_path
-> db_path must also restore auxiliary SQLite bundle files and run the same
checkpoint/rename logic used elsewhere: when moving bak_path to db_path also
move/rename the corresponding wal and shm files (e.g.,
bak_path.with_extension("db-wal") -> db_path.with_extension("db-wal") and
bak_path.with_extension("db-shm") -> db_path.with_extension("db-shm")) and
perform the same checkpoint/sync sequence used by the existing tmp-path recovery
logic instead of a lone std::fs::rename; apply the identical fix to the other
restore branch that mirrors this behavior so both branches consistently restore
the full SQLite bundle (bak, -wal, -shm) and ensure no silent rollback.
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 564-624: The three non-persist helpers validate the
restore_operation_id and then mutate runtime state separately, allowing a race;
update set_status_for_restore_operation,
set_restore_progress_for_restore_operation, and
set_restore_report_for_restore_operation to call
self.ensure_current_restore_operation(operation_id)? again immediately after
performing the mutation (set_status, set_restore_progress, set_restore_report)
and return an error if the second check fails so you don't apply stale updates;
keep persist_cloud_backup_state_for_restore_operation as-is (it already
re-checks after the DB write).
---
Nitpick comments:
In `@ios/Cove/SidebarContainer.swift`:
- Around line 35-46: The gesture handler is reading dragStartedWithSidebarOpen
after updateSidebarState mutates it, so freeze the gesture-start state into a
local (e.g., let startedOpen = dragStartedWithSidebarOpen) at the top of
onDragEnded, then use that local for the conditional branches instead of reading
dragStartedWithSidebarOpen; ensure you call updateSidebarState(isVisible:...,
animated:...) only once per gesture flow and convert the two branches into an
if/else based on startedOpen to avoid re-running the same branch after
updateSidebarState sets app.isSidebarVisible or dragStartedWithSidebarOpen.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: aa726b6a-478c-4138-8a9a-46ddd8da8a8a
⛔ Files ignored due to path filters (1)
`ios/CoveCore/Sources/CoveCore/generated/cove.swift` is excluded by `!**/generated/**`
📒 Files selected for processing (35)
- android/app/src/main/java/org/bitcoinppl/cove_core/cove.kt
- android/app/src/main/java/org/bitcoinppl/cove_core/device/cove_device.kt
- docs/ICLOUD_DRIVE.md
- ios/Cove/AppManager.swift
- ios/Cove/CloudBackupManager.swift
- ios/Cove/CoveApp.swift
- ios/Cove/CoveMainView.swift
- ios/Cove/FFI/CloudStorageAccessImpl.swift
- ios/Cove/FFI/ICloudDriveHelper+Metadata.swift
- ios/Cove/FFI/ICloudDriveHelper.swift
- ios/Cove/FFI/PasskeyProviderImpl.swift
- ios/Cove/Flows/NewWalletFlow/HotWallet/VerifyWords/VerifyWordsScreen.swift
- ios/Cove/Flows/Onboarding/OnboardingContainer.swift
- ios/Cove/Flows/SettingsFlow/AboutScreen.swift
- ios/Cove/Flows/SettingsFlow/CloudBackupDetailScreen.swift
- ios/Cove/Flows/SettingsFlow/CloudBackupEnableOnboardingView.swift
- ios/Cove/Flows/SettingsFlow/MainSettingsScreen.swift
- ios/Cove/SidebarContainer.swift
- rust/crates/cove-cspp/src/backup_data.rs
- rust/crates/cove-cspp/src/wallet_crypto.rs
- rust/crates/cove-device/src/passkey.rs
- rust/src/backup/import.rs
- rust/src/database/migration/bdk/recovery.rs
- rust/src/database/wallet.rs
- rust/src/manager/cloud_backup_detail_manager.rs
- rust/src/manager/cloud_backup_manager.rs
- rust/src/manager/cloud_backup_manager/cloud_inventory.rs
- rust/src/manager/cloud_backup_manager/ops.rs
- rust/src/manager/cloud_backup_manager/pending.rs
- rust/src/manager/cloud_backup_manager/pending/queue_processor.rs
- rust/src/manager/cloud_backup_manager/verify.rs
- rust/src/manager/cloud_backup_manager/verify/passkey_auth.rs
- rust/src/manager/cloud_backup_manager/verify/session.rs
- rust/src/manager/cloud_backup_manager/verify/wrapper_repair.rs
- rust/src/manager/cloud_backup_manager/wallets.rs
✅ Files skipped from review due to trivial changes (2)
- ios/Cove/Flows/NewWalletFlow/HotWallet/VerifyWords/VerifyWordsScreen.swift
- rust/crates/cove-device/src/passkey.rs
🚧 Files skipped from review as they are similar to previous changes (9)
- ios/Cove/AppManager.swift
- rust/src/manager/cloud_backup_manager/pending/queue_processor.rs
- ios/Cove/Flows/SettingsFlow/CloudBackupEnableOnboardingView.swift
- ios/Cove/Flows/SettingsFlow/AboutScreen.swift
- rust/crates/cove-cspp/src/wallet_crypto.rs
- rust/src/database/wallet.rs
- ios/Cove/FFI/PasskeyProviderImpl.swift
- rust/crates/cove-cspp/src/backup_data.rs
- rust/src/manager/cloud_backup_manager/cloud_inventory.rs
@greptile-apps re-review
Actionable comments posted: 9
♻️ Duplicate comments (1)
rust/src/manager/cloud_backup_manager.rs (1)
557-602: ⚠️ Potential issue | 🟠 Major

Restore cancellation still races with side effects. `ensure_current_restore_operation()` is only a load, and `persist_cloud_backup_state_for_restore_operation()` still performs `Database::set()` between separate checks. Callers can also use the naked check before keychain writes, so a cancelled/stale restore can still persist recovered credentials or DB state after the operation has been invalidated. The restore-id check and the side effect need to share the same gate.

Also applies to: 620-634
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 557 - 602, The race exists because ensure_current_restore_operation() only does an atomic load and doesn't hold the restore_operation_gate, so side-effecting callers (e.g., persist_cloud_backup_state_for_restore_operation and other callers around lines ~620-634) can pass the check and then perform writes after a cancel; fix by making the check hold the same gate: change ensure_current_restore_operation() to acquire self.restore_operation_gate.lock() (same as with_current_restore_operation) before reading/validating restore_operation_id and returning Ok/Err, or alternatively update callers to use with_current_restore_operation() for their side-effect closures; update the duplicate callers mentioned to use the locked-check variant so the check and the side effect are executed under the same gate.
🧹 Nitpick comments (1)
ios/Cove/Flows/Onboarding/OnboardingContainer.swift (1)
48-48: Clarify ownership of the onboarding manager.
`@State` is view-owned storage, so it shouldn't be exposed as a mutable property. If this view owns the manager, make it `private` and initialize `_manager` explicitly; if the parent owns it, this should be a plain stored property instead of `@State`.
♻️ Suggested fix
```diff
 struct OnboardingContainer: View {
-    @State var manager: OnboardingManager
+    @State private var manager: OnboardingManager
     let onComplete: () -> Void
+
+    init(manager: OnboardingManager, onComplete: @escaping () -> Void) {
+        _manager = State(initialValue: manager)
+        self.onComplete = onComplete
+    }
```

As per coding guidelines, "Review SwiftUI view code for proper layout, best practices".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/Cove/Flows/Onboarding/OnboardingContainer.swift` at line 48, The OnboardingContainer's manager is incorrectly declared as a non-private `@State`; decide ownership and fix accordingly: if the view truly owns the manager make it private and initialize the backing State (for example set `@State` private var manager with an explicit initial value or initialize _manager = State(initialValue: ...) inside init so the view owns the state), but if the parent owns it remove `@State` and expose a plain stored property (e.g., let or var manager: OnboardingManager) so the parent controls it; update OnboardingContainer, the manager property, and any initializer to match the chosen ownership model.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ios/Cove/CoveMainView.swift`:
- Around line 892-974: The onChange hooks miss reacting to flips of
app.isCloudBackupRootPromptBlocked so pendingMissingPasskeyAlert can stay
forever; add an .onChange(of: app.isCloudBackupRootPromptBlocked) that mirrors
the other gates: when it becomes false (unblocked) if pendingMissingPasskeyAlert
call scheduleMissingPasskeyAlert(), otherwise call
scheduleCloudBackupVerificationPrompt(); when it becomes true
dismissMissingPasskeyAlert() and set showCloudBackupVerificationPrompt = false
(and clear keepShowingCloudBackupVerificationPrompt if appropriate) so the root
cloud-backup UI is rescheduled exactly like route/sheet/alert/cover changes.
In `@ios/Cove/Flows/Onboarding/OnboardingContainer.swift`:
- Around line 110-115: Replace the detached background probe with a cancellable
child task tied to the view's lifecycle: inside the .task modifier use Task {
let hasBackup = await Self.checkForCloudBackup { _ in } ;
onCloudCheckComplete(hasBackup) } (stop using Task.detached) so the probe is
cancelled when the view disappears; also update the retry/sleep logic (the
Task.sleep call) to not swallow CancellationError—handle errors from try await
Task.sleep by rethrowing or returning immediately when CancellationError is
caught (check Task.isCancelled or catch `is CancellationError` and propagate) so
retries stop after cancellation.
In `@rust/src/database/migration/bdk/recovery.rs`:
- Around line 132-145: The restore assumes bak_path contains a complete DB but
WAL/SHM may be left behind; update the backup creation or restore logic so the
backup includes committed WAL pages: either checkpoint the original plaintext DB
before renaming (call checkpoint on db_path in migrate_single_bdk_database prior
to moving path -> bak_path) or move/rename the associated WAL/SHM files (rename
db_path-wal -> bak_path-wal and db_path-shm -> bak_path-shm) when creating
bak_path, and likewise ensure finalize_sqlite_bundle_move and the restore branch
that checks bak_path/db_path/tmp_path handle those wal/shm files consistently so
restoring from bak_path recovers committed pages.
- Around line 157-159: The current cleanup only removes the main temp DB
(tmp_path) but leaves SQLite sidecars (*.db.enc.tmp-wal, *.db.enc.tmp-shm)
behind; update the removal logic so after detecting tmp_path.exists() you also
construct and remove the two sidecar paths (append "-wal" and "-shm" to
tmp_path.file_name()) and call super::super::log_remove_file on each if they
exist, keeping the use of tmp_path and super::super::log_remove_file to locate
and log removals.
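The sidecar construction the prompt describes is mechanical: SQLite names the write-ahead log and shared-memory files by appending `-wal` and `-shm` to the database filename. A small sketch under assumed names (`sidecar_paths`, `remove_tmp_bundle` are illustrative, not the real helpers; logging is elided):

```rust
use std::path::{Path, PathBuf};

/// Derive the WAL and SHM sidecar paths for a SQLite temp DB by
/// appending the suffixes SQLite itself uses.
fn sidecar_paths(tmp_path: &Path) -> [PathBuf; 2] {
    let base = tmp_path.as_os_str().to_owned();
    let mut wal = base.clone();
    wal.push("-wal");
    let mut shm = base;
    shm.push("-shm");
    [PathBuf::from(wal), PathBuf::from(shm)]
}

/// Remove the temp DB and any sidecars that exist, so no
/// `*.db.enc.tmp-wal` / `*.db.enc.tmp-shm` remnants are left behind.
fn remove_tmp_bundle(tmp_path: &Path) -> std::io::Result<()> {
    for p in std::iter::once(tmp_path.to_path_buf()).chain(sidecar_paths(tmp_path)) {
        if p.exists() {
            std::fs::remove_file(&p)?;
        }
    }
    Ok(())
}
```

Note that the suffix is appended to the full filename (`x.db.enc.tmp-wal`), not substituted for the extension, matching SQLite's own naming.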
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 1014-1023: The loop in
wipe_wallet_keychain_items_for_catastrophic_recovery() ignores the Result from
Keychain::delete_wallet_items(), allowing key deletion failures to be hidden;
change the function to return Result<(), E> (or appropriate crate error type),
check each delete_wallet_items(&wallet_id) call, and on Err either log with
context (including wallet_id) and return Err (or collect and return an
aggregated error) so callers can detect failures; update the function signature
and any callers (and keep references to catastrophic_wipe_wallet_ids and
persisted_wallet_ids_for_catastrophic_wipe) accordingly.
- Around line 951-955: The exported function
reset_local_data_for_catastrophic_recovery currently calls
wipe_local_data_for_catastrophic_recovery() then
reinit_database_after_catastrophic_recovery(), which routes into
Database::reinit() and may panic — change the export to return a Result (e.g.,
Result<(), String>) instead of void, catch and propagate any error from
reinit_database_after_catastrophic_recovery() so failures are returned to Swift
rather than panicking, and keep wipe_local_data_for_catastrophic_recovery()
behavior the same; apply the same change to the other exported
catastrophic-recovery path referenced around the 984-987 region so both exports
return Result and surface reinit errors instead of crashing.
In `@rust/src/manager/cloud_backup_manager/ops.rs`:
- Around line 342-360: The code calls Cspp::new(...).get_or_create_master_key()
before running discover_or_create_prf_key_without_persisting, so if passkey
setup is cancelled or errors we must roll back any master key that was newly
created; modify the error and cancellation branches around
discover_or_create_prf_key_without_persisting to detect that the returned
master_key was created by get_or_create_master_key and, before returning, call
the Cspp API to remove/rollback that key (e.g., cspp.delete_master_key or
cspp.remove_master_key using master_key.namespace_id()), logging any deletion
errors but ensuring the function returns with no leftover local master key;
apply the same rollback logic to the similar block at 405-422 and keep existing
Message::PasskeyDiscoveryCancelled, set_progress, set_restore_progress, and
set_status(CloudBackupStatus::Disabled) behavior.
---
Duplicate comments:
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 557-602: The race exists because
ensure_current_restore_operation() only does an atomic load and doesn't hold the
restore_operation_gate, so side-effecting callers (e.g.,
persist_cloud_backup_state_for_restore_operation and other callers around lines
~620-634) can pass the check and then perform writes after a cancel; fix by
making the check hold the same gate: change ensure_current_restore_operation()
to acquire self.restore_operation_gate.lock() (same as
with_current_restore_operation) before reading/validating restore_operation_id
and returning Ok/Err, or alternatively update callers to use
with_current_restore_operation() for their side-effect closures; update the
duplicate callers mentioned to use the locked-check variant so the check and the
side effect are executed under the same gate.
---
Nitpick comments:
In `@ios/Cove/Flows/Onboarding/OnboardingContainer.swift`:
- Line 48: The OnboardingContainer's manager is incorrectly declared as a
non-private `@State`; decide ownership and fix accordingly: if the view truly owns
the manager make it private and initialize the backing State (for example set
`@State` private var manager with an explicit initial value or initialize _manager
= State(initialValue: ...) inside init so the view owns the state), but if the
parent owns it remove `@State` and expose a plain stored property (e.g., let or
var manager: OnboardingManager) so the parent controls it; update
OnboardingContainer, the manager property, and any initializer to match the
chosen ownership model.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b3cdf3b5-cdd3-4720-9090-a18ecf52e429
📒 Files selected for processing (16)
- docs/ICLOUD_DRIVE.md
- ios/Cove/AppManager.swift
- ios/Cove/CloudBackupManager.swift
- ios/Cove/CoveMainView.swift
- ios/Cove/FFI/CloudStorageAccessImpl.swift
- ios/Cove/Flows/Onboarding/OnboardingContainer.swift
- ios/Cove/Flows/SettingsFlow/CloudBackupDetailScreen.swift
- ios/Cove/Flows/SettingsFlow/MainSettingsScreen.swift
- ios/Cove/SidebarContainer.swift
- rust/src/database/migration/bdk.rs
- rust/src/database/migration/bdk/recovery.rs
- rust/src/manager/cloud_backup_detail_manager.rs
- rust/src/manager/cloud_backup_manager.rs
- rust/src/manager/cloud_backup_manager/ops.rs
- rust/src/manager/cloud_backup_manager/pending.rs
- rust/src/manager/cloud_backup_manager/verify.rs
✅ Files skipped from review due to trivial changes (2)
- ios/Cove/Flows/SettingsFlow/CloudBackupDetailScreen.swift
- docs/ICLOUD_DRIVE.md
🚧 Files skipped from review as they are similar to previous changes (3)
- ios/Cove/SidebarContainer.swift
- ios/Cove/AppManager.swift
- rust/src/database/migration/bdk.rs
46dbdce to 02ec208 (Compare)
02ec208 to 3903b93 (Compare)
Actionable comments posted: 5
♻️ Duplicate comments (2)
rust/src/manager/cloud_backup_manager.rs (1)
707-741: ⚠️ Potential issue | 🟠 Major: Claim the busy slot before the worker starts.
Lines 716-720 still do the busy check before `spawn_blocking`, and Lines 725-731 only publish `Enabling`/`Restoring` inside the worker. Two quick taps can still launch overlapping enable flows before either one marks the manager busy.
As per coding guidelines, `rust/**/*.rs`: "Verify correct functionality and logic consistency."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 707 - 741, start_background_operation currently checks busy state before spawn but only sets entering_status (e.g., Enabling/Restoring) inside the spawn_blocking closure, allowing a race where multiple callers can pass the busy check; fix by claiming the busy slot before spawning: if entering_status.is_some() call set_progress(None), set_restore_progress(None), conditionally set_restore_report(None) and set_status(entering_status) synchronously (outside and before cove_tokio::task::spawn_blocking) so the manager becomes busy immediately, then spawn the worker which will perform work(self.clone()) and keep the existing error handling that resets progress and sets status via status_for_operation_error.

ios/Cove/CoveApp.swift (1)
252-262: ⚠️ Potential issue | 🟠 Major: Startup restore check still excludes recoverable cloud states.
Line 254 still requires `.disabled`. That skips onboarding when the persisted cloud state is `.passkeyMissing`, `.unsupportedPasskeyProvider`, or `.error`, even though those states can still require restore/recovery handling on an empty device.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/Cove/CoveApp.swift` around lines 252 - 262, The startup check in shouldRunCloudRestoreCheck currently only continues when CloudBackupManager.shared.status is .disabled, which skips recoverable states; update the status check so it proceeds when the cloud status is .disabled OR .passkeyMissing OR .unsupportedPasskeyProvider OR .error (i.e. any state that may require restore/recovery on an empty device). Replace the single pattern match guard case .disabled = CloudBackupManager.shared.status with a switch or combined pattern/if-case that allows those additional statuses before falling through to the wallets/file checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ios/Cove/CoveApp.swift`:
- Around line 120-127: Wipe Only currently calls the same reset helper as
Restore which causes the app to follow the restore-onboarding path; change the
onWipeOnly handler in CatastrophicErrorView so it invokes a distinct
wipe-specific reset (e.g. resetCatastrophicRecoveryStateForWipeAndRebootstrap or
resetCatastrophicRecoveryStateAndRebootstrap(wipeOnly: true)) instead of
resetCatastrophicRecoveryStateAndRebootstrap(), and ensure that helper clears
any “hasCloudRestoreAvailable”/restore flags before rebootstrap so the app
starts fresh rather than launching restore onboarding.
In `@ios/Cove/CoveMainView.swift`:
- Around line 254-269: Both computed properties canPresentMissingPasskeyAlert
and canPresentCloudBackupVerificationPrompt currently gate on showCover (which
no longer implies the app is authenticated) so the alerts can appear while auth
is still locked; update their predicates to also require the authentication
state to be unlocked (e.g., check auth.lockState == .unlocked or
auth.isUnlocked) instead of or in addition to showCover, and apply the same
change anywhere showCover was used to gate root cloud-backup UI (including the
LockView-related checks) so the missing-passkey alert and cloud-backup
verification prompt only present when auth.lockState indicates the app is
unlocked.
- Around line 1052-1119: The view’s vertical layout can grow past the screen
because the body’s fixed VStacks and hard Spacers don’t allow scrolling when the
failure message expands; wrap the main content (currently the outer VStack in
var body) in a ScrollView(.vertical) with a VStack inside so content becomes
scrollable, remove or reduce hard Spacer().frame(height: ...) usages that push
the actions off-screen (or convert them to flexible Spacer()s), and keep the
interactive elements (primaryButtonTitle/onVerify and "Later"/onDismiss buttons)
inside that scrollable VStack; keep modifiers like .padding,
.onboardingRecoveryBackground(), and .animation(.easeInOut(duration: 0.25),
value: isVerifying) applied to the ScrollView/VStack as appropriate so
isVerifying state still animates.
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 1043-1046: The current wipe uses only
persisted_wallet_ids_for_catastrophic_wipe(), skipping any orphaned on-disk
wallet directories and thus leaving key material; update the flow to compute the
union of persisted IDs and on-disk IDs (enumerate subdirectories under
cove_common::consts::WALLET_DATA_DIR), pass that union into
catastrophic_wipe_wallet_ids (or otherwise ensure catastrophic_wipe_wallet_ids
is given the combined set), and then invoke delete_wallet_items() for each ID in
that union so every wallet (persisted or orphaned) has its keychain material
removed; refer to persisted_wallet_ids_for_catastrophic_wipe,
catastrophic_wipe_wallet_ids, delete_wallet_items, and WALLET_DATA_DIR when
making the change.
- Around line 1000-1005: The current code treats a failed wipe as success by
swallowing the error from std::fs::remove_dir_all(wallet_dir); change the logic
in the catastrophic-recovery path so that if remove_dir_all(wallet_dir) returns
Err you propagate/return that error instead of logging and continuing: locate
the wallet_dir usage and the remove_dir_all call in cloud_backup_manager (the
block referencing cove_common::consts::WALLET_DATA_DIR and remove_dir_all),
replace the if-let swallow with explicit error handling (e.g., match or use ? /
map_err with context) so the surrounding function returns Err(...) on failure
and prevents reinitializing the DB when the directory was not actually removed.
---
Duplicate comments:
In `@ios/Cove/CoveApp.swift`:
- Around line 252-262: The startup check in shouldRunCloudRestoreCheck currently
only continues when CloudBackupManager.shared.status is .disabled, which skips
recoverable states; update the status check so it proceeds when the cloud status
is .disabled OR .passkeyMissing OR .unsupportedPasskeyProvider OR .error (i.e.
any state that may require restore/recovery on an empty device). Replace the
single pattern match guard case .disabled = CloudBackupManager.shared.status
with a switch or combined pattern/if-case that allows those additional statuses
before falling through to the wallets/file checks.
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 707-741: start_background_operation currently checks busy state
before spawn but only sets entering_status (e.g., Enabling/Restoring) inside the
spawn_blocking closure, allowing a race where multiple callers can pass the busy
check; fix by claiming the busy slot before spawning: if
entering_status.is_some() call set_progress(None), set_restore_progress(None),
conditionally set_restore_report(None) and set_status(entering_status)
synchronously (outside and before cove_tokio::task::spawn_blocking) so the
manager becomes busy immediately, then spawn the worker which will perform
work(self.clone()) and keep the existing error handling that resets progress and
sets status via status_for_operation_error.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 77c0ca98-8436-4c70-98f4-b39b7b7b6807
⛔ Files ignored due to path filters (1)
ios/CoveCore/Sources/CoveCore/generated/cove.swift is excluded by `!**/generated/**`
📒 Files selected for processing (11)
- android/app/src/main/java/org/bitcoinppl/cove_core/cove.kt
- ios/Cove/CoveApp.swift
- ios/Cove/CoveMainView.swift
- ios/Cove/Flows/Onboarding/OnboardingContainer.swift
- ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift
- ios/Cove/Flows/Onboarding/StartupRecovery/CloudRestoreOfferView.swift
- rust/src/database.rs
- rust/src/database/migration/bdk.rs
- rust/src/database/migration/bdk/recovery.rs
- rust/src/manager/cloud_backup_manager.rs
- rust/src/manager/cloud_backup_manager/ops.rs
🚧 Files skipped from review as they are similar to previous changes (5)
- ios/Cove/Flows/Onboarding/StartupRecovery/CloudRestoreOfferView.swift
- ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift
- rust/src/database/migration/bdk/recovery.rs
- rust/src/database.rs
- rust/src/database/migration/bdk.rs
@greptile-apps re-review
Actionable comments posted: 2
♻️ Duplicate comments (5)
ios/Cove/CoveMainView.swift (1)
254-269: ⚠️ Potential issue | 🟠 Major: Gate cloud-backup prompts on authentication state, not just `showCover`.
`canPresentMissingPasskeyAlert` and `canPresentCloudBackupVerificationPrompt` check `!showCover`, but `showCover` is set to `false` at line 656 when the app becomes active, before the user has authenticated. This allows the prompts to appear while `auth.lockState` is still `.locked`, potentially overlaying the lock screen.
Add `auth.lockState == .unlocked` (or equivalent) to the predicates:
🛡️ Suggested fix
```diff
 private var canPresentMissingPasskeyAlert: Bool {
     phase == .active &&
         !showCover &&
+        auth.lockState == .unlocked &&
         app.alertState == nil &&
         app.sheetState == nil &&
         !app.isCloudBackupRootPromptBlocked
 }

 private var canPresentCloudBackupVerificationPrompt: Bool {
     phase == .active &&
         !showCover &&
+        auth.lockState == .unlocked &&
         app.alertState == nil &&
         app.sheetState == nil &&
         !showMissingPasskeyAlert &&
         !app.isCloudBackupRootPromptBlocked
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/Cove/CoveMainView.swift` around lines 254 - 269, The presentation predicates allow backup/passkey prompts to appear before the user authenticates because they only check !showCover; update both canPresentMissingPasskeyAlert and canPresentCloudBackupVerificationPrompt to also require auth.lockState == .unlocked (or equivalent unlocked check) so prompts are gated on authentication state in addition to the existing conditions (phase, alertState, sheetState, showCover, isCloudBackupRootPromptBlocked, and !showMissingPasskeyAlert).

rust/src/manager/cloud_backup_manager.rs (3)
710-744: ⚠️ Potential issue | 🟠 Major: Claim the enable slot before spawning the worker.
Lines 718-744 still only read `state.status` before `spawn_blocking`, so two quick enable taps can both pass the guard. The plain `enable_cloud_backup()` path at Line 900 never even supplies an `entering_status`, which leaves the window open until worker code eventually updates state.
As per coding guidelines, `rust/**/*.rs`: "Verify correct functionality and logic consistency."
Also applies to: 900-922
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 710 - 744, The start_background_operation currently only reads state.status before spawn_blocking allowing races; before calling cove_tokio::task::spawn_blocking claim the slot by acquiring a write access (or call self.set_status/clear progress synchronously) when entering_status.is_some() so the status is updated atomically prior to spawning the worker; keep the same progress/reset logic (set_progress(None), set_restore_progress(None), set_restore_report(None) when appropriate) but perform those calls before spawn_blocking so two concurrent enable_cloud_backup() calls cannot both pass the initial guard; ensure the worker still sets failure status via Self::status_for_operation_error(&error) on error as before.
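The fix being requested boils down to "check and claim under the same lock, before the spawn". A minimal sketch with hypothetical names (`Status`, `start_enable`) and a plain thread standing in for `spawn_blocking`:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Hypothetical status enum mirroring the manager's busy states.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Status {
    Idle,
    Enabling,
}

/// Claim the busy slot synchronously, *before* spawning the worker,
/// so a second caller sees `Enabling` and bails out immediately.
/// Returns true if this caller won the slot.
fn start_enable(
    status: Arc<Mutex<Status>>,
    work: impl FnOnce() + Send + 'static,
) -> bool {
    {
        let mut s = status.lock().unwrap();
        if *s != Status::Idle {
            return false; // someone else already claimed the slot
        }
        *s = Status::Enabling; // claimed before the spawn, not inside it
    }
    let status2 = Arc::clone(&status);
    thread::spawn(move || {
        work();
        *status2.lock().unwrap() = Status::Idle; // release on completion
    });
    true
}
```

Because the status flips to `Enabling` while the lock is held and before the worker exists, there is no window in which two callers can both pass the guard.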
1044-1049: ⚠️ Potential issue | 🟠 Major: Delete keychain items for the union of persisted and on-disk wallets.
`catastrophic_wipe_wallet_ids()` still returns persisted IDs as soon as the DB lookup succeeds, so `wipe_wallet_keychain_items_for_catastrophic_recovery()` skips orphaned wallet directories. Their files get removed later, but their keychain material survives the "wipe".
As per coding guidelines, `rust/**/*.rs`: "Check for potential security issues, make sure Bitcoin wallets are handled securely."
Also applies to: 1087-1096
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 1044 - 1049, The current wipe_wallet_keychain_items_for_catastrophic_recovery() only uses IDs returned by catastrophic_wipe_wallet_ids(persisted_wallet_ids_for_catastrophic_wipe(), &WALLET_DATA_DIR) which can omit orphaned on-disk wallet directories; update the logic to compute the union of persisted wallet IDs (persisted_wallet_ids_for_catastrophic_wipe()) and wallet IDs discovered on disk under cove_common::consts::WALLET_DATA_DIR before removing keychain items so orphaned dirs' key material is also wiped; modify wipe_wallet_keychain_items_for_catastrophic_recovery() (and the similar block around lines 1087-1096) to gather on-disk IDs, merge with persisted IDs (deduplicate), then call the existing catastrophic wipe routine or a new helper with that union to delete keychain entries for all wallets.
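The union the prompt describes can be computed in one small helper. This is an illustrative sketch (the function name and signature are assumptions, not the real Cove API): merge the DB-persisted wallet ids with ids inferred from subdirectory names under the wallet data dir, so orphaned on-disk wallets also get their keychain items wiped.

```rust
use std::collections::BTreeSet;
use std::path::Path;

/// Merge persisted wallet ids with ids discovered on disk. Any
/// subdirectory of `wallet_data_dir` is treated as a wallet id.
fn wallet_ids_for_wipe(persisted: &[String], wallet_data_dir: &Path) -> BTreeSet<String> {
    let mut ids: BTreeSet<String> = persisted.iter().cloned().collect();
    if let Ok(entries) = std::fs::read_dir(wallet_data_dir) {
        for entry in entries.flatten() {
            if entry.path().is_dir() {
                if let Some(name) = entry.file_name().to_str() {
                    ids.insert(name.to_string());
                }
            }
        }
    }
    ids
}
```

Using a set both deduplicates ids present in both sources and gives a stable iteration order for the subsequent `delete_wallet_items()` loop.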
1003-1008: ⚠️ Potential issue | 🟠 Major: Propagate wallet-directory wipe failures.
If `remove_dir_all(wallet_dir)` fails here, catastrophic recovery still reports success and proceeds to reinitialize the database. That can leave wallet data on disk after a destructive recovery flow.
Suggested fix
```diff
 let wallet_dir = &*cove_common::consts::WALLET_DATA_DIR;
-if wallet_dir.exists()
-    && let Err(error) = std::fs::remove_dir_all(wallet_dir)
-{
-    error!("Failed to remove wallet data dir: {error}");
+if wallet_dir.exists() {
+    std::fs::remove_dir_all(wallet_dir).map_err(|error| {
+        error!("Failed to remove wallet data dir: {error}");
+        CatastrophicRecoveryError::Failure(format!(
+            "failed to remove wallet data dir: {error}"
+        ))
+    })?;
 }
```

As per coding guidelines, `rust/**/*.rs`: "Suggest improvements to error handling, unwraps, and Result handling."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 1003 - 1008, The wallet-directory wipe failure is currently only logged and not propagated, allowing recovery to continue erroneously; change the removal so errors are returned to the caller instead of swallowed—replace the if-let block around std::fs::remove_dir_all(wallet_dir) with code that either uses the ? operator (std::fs::remove_dir_all(wallet_dir)? ) or explicitly maps and returns an error (e.g., if let Err(e) = std::fs::remove_dir_all(wallet_dir) { error!("Failed to remove wallet data dir: {e}"); return Err(anyhow::anyhow!("Failed to remove wallet data dir: {e})); } ), and ensure the enclosing function signature returns a Result so the failure is propagated up from wallet_dir removal.rust/src/database/migration/bdk/recovery.rs (1)
162-176: ⚠️ Potential issue | 🔴 Critical: Don't roll back on inconclusive verification failures.
`rust/src/database/migration/bdk.rs` Lines 118-149 still map connection-open, `PRAGMA key`, schema-query, and `integrity_check` execution failures to `Ok(false)`. This branch treats `Ok(false)` as proven corruption and deletes `db_path`, so a transient I/O or SQLCipher problem can replace a healthy encrypted wallet with the older backup. Reserve `Ok(false)` for confirmed corruption and keep both files on operational errors.
As per coding guidelines, "Verify correct functionality and logic consistency."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/database/migration/bdk/recovery.rs` around lines 162 - 176, The current match on super::verify_encrypted_bdk_db(db_path) treats Ok(false) as definitive corruption and removes db_path; change this so only a confirmed corruption path causes deletion/restore. Specifically, update the match in the block that calls verify_encrypted_bdk_db to: 1) keep db_path and bak_path untouched on Ok(false) (log a warning/error but do not call super::super::log_remove_file or clean_auxiliary_files), 2) only call super::super::log_remove_file, clean_auxiliary_files, and finalize_sqlite_bundle_move when the verification explicitly confirms corruption (Ok(true) or an explicit Corrupt enum if you refactor verify_encrypted_bdk_db to return a clearer result), and 3) treat verification errors (Err(...)) as operational/inconclusive and likewise avoid deleting the live DB; adjust verify_encrypted_bdk_db signature if needed to return a more granular result to distinguish confirmed corruption from transient errors so the match can make this decision.
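The refactor the prompt suggests is essentially widening a boolean into a three-way result so the caller can tell proven corruption apart from a transient failure. A hedged sketch (the enum and function names are illustrative, not the actual `verify_encrypted_bdk_db` signature):

```rust
/// Three-way verification result: only `Corrupt` justifies destroying
/// the live DB and restoring from backup. `Inconclusive` covers
/// operational failures (couldn't open or key the DB, I/O errors)
/// that prove nothing about the file's contents.
#[derive(Debug, PartialEq)]
enum VerifyOutcome {
    Healthy,
    Corrupt,      // integrity_check ran and reported damage
    Inconclusive, // open/key/query failed: operational, not proof
}

/// What the recovery branch should do for each outcome.
fn recovery_action(outcome: VerifyOutcome) -> &'static str {
    match outcome {
        VerifyOutcome::Healthy => "keep db, drop backup",
        VerifyOutcome::Corrupt => "replace db from backup",
        VerifyOutcome::Inconclusive => "keep both files, retry later",
    }
}
```

With a bool, `Corrupt` and `Inconclusive` collapse into `false` and the match above cannot be written, which is exactly the bug the review flags.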
🧹 Nitpick comments (2)
ios/Cove/CoveMainView.swift (1)
1052-1121: Wrap the verification prompt body in a `ScrollView` for accessibility.
The `CloudBackupVerificationPromptView` uses a fixed `VStack` with multiple hard `Spacer().frame(height:)` calls. On smaller devices or with larger Dynamic Type sizes, the failure message and action buttons can be pushed off-screen with no way to scroll.
♻️ Suggested refactor
CloudBackupVerificationPromptViewuses a fixedVStackwith multiple hardSpacer().frame(height:)calls. On smaller devices or with larger Dynamic Type sizes, the failure message and action buttons can be pushed off-screen with no way to scroll.♻️ Suggested refactor
var body: some View { - VStack(spacing: 0) { + ScrollView(.vertical, showsIndicators: false) { + VStack(spacing: 0) { HStack { Spacer() if !isVerifying { Button(action: onDismiss) { Image(systemName: "xmark") .font(.headline) .foregroundStyle(.white.opacity(0.85)) .frame(width: 44, height: 44) } } } .padding(.top, 4) - Spacer() - .frame(height: 20) + Spacer().frame(height: 20) heroView - Spacer() - .frame(height: 36) + Spacer().frame(height: 36) VStack(spacing: 12) { // title + message } - Spacer() - .frame(height: 28) + Spacer().frame(height: 28) Divider() .overlay(Color.coveLightGray.opacity(0.14)) - Spacer(minLength: 24) + Spacer().frame(minHeight: 24) if !isVerifying { VStack(spacing: 14) { Button(primaryButtonTitle, action: onVerify) .buttonStyle(OnboardingPrimaryButtonStyle()) Button("Later", action: onDismiss) .buttonStyle(OnboardingSecondaryButtonStyle()) } } - Spacer(minLength: 0) + Spacer().frame(minHeight: 24) + } } .padding(.horizontal, 28) .padding(.bottom, 28) .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .top) .onboardingRecoveryBackground() .animation(.easeInOut(duration: 0.25), value: isVerifying) .animation(.easeInOut(duration: 0.25), value: failure != nil) }As per coding guidelines,
ios/Cove/**/*.swift: "Review SwiftUI view code for proper layout, best practices."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/Cove/CoveMainView.swift` around lines 1052 - 1121, The view's fixed VStack layout in the CloudBackupVerificationPromptView (the var body) can push content off-screen on small devices or large Dynamic Type; wrap the main content in a vertical ScrollView so the failure message and action buttons can be scrolled into view. Replace the top-level VStack with ScrollView(.vertical) { VStack(...) } (keeping the inner VStack, spacers, heroView, conditional buttons, padding, .frame(maxWidth: .infinity, alignment: .top), background modifier, and the two .animation modifiers) and ensure the ScrollView preserves the same horizontal padding and top alignment so visual appearance and behaviors (isVerifying/failure-driven animations and onDismiss/onVerify actions) remain unchanged.rust/src/manager/cloud_backup_manager.rs (1)
348-349: Please make `listen_for_updates()` single-registration or broadcast-safe.
`reconcile_receiver` is shared behind `Arc`, so every thread spawned here consumes from the same flume queue. If this gets registered more than once, updates will be split across listeners instead of replacing the old one, and the previous callback/thread stays alive.
Also applies to: 754-761
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rust/src/manager/cloud_backup_manager.rs` around lines 348 - 349, The current listen_for_updates() spawns threads that all consume from the shared Arc<Receiver<Message>> (reconcile_receiver), causing messages to be split across listeners; make registration single-registration by adding a flag (e.g., AtomicBool updates_listener_registered) to the struct and in listen_for_updates() do an atomic compare-and-swap to ensure only the first caller proceeds to spawn the thread(s) and subsequent calls return early; alternatively, if you need multiple listeners, replace the flume Receiver with a broadcast-capable channel and update code paths that use reconciler and reconcile_receiver accordingly so each listener receives a copy of each Message. Ensure the change touches functions/fields reconcile_receiver, reconciler and listen_for_updates (also apply the same fix in the analogous block around lines 754-761).
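The single-registration option from the prompt is a one-field pattern: a compare-and-swap flag makes registration idempotent, so only the first caller may spawn the consumer thread. A minimal sketch with a hypothetical `ListenerGuard` type (the real manager would embed the flag next to `reconcile_receiver`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Guard ensuring the update-listener thread is spawned at most once.
struct ListenerGuard {
    registered: AtomicBool,
}

impl ListenerGuard {
    fn new() -> Self {
        Self { registered: AtomicBool::new(false) }
    }

    /// Returns true exactly once; every later call returns false, so
    /// callers after the first skip spawning a competing consumer.
    fn try_register(&self) -> bool {
        self.registered
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
    }
}
```

`compare_exchange` (rather than a load followed by a store) is what makes this safe under concurrent calls; two racing registrations can never both observe `false`.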
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift`:
- Around line 28-30: The cloud probe logic currently spawns multiple detached
tasks (via Task.detached in probeCloud()/retryProbe()) which can outlive the
view and allow stale results to overwrite newer ones; change probe management to
keep a single Task? property (e.g., probeTask) owned by the view/model and:
before starting a new probe cancel the previous task, start the new probe on the
MainActor or use Task { `@MainActor` in ... } rather than Task.detached, and
ensure you only apply results if the task was not cancelled (or use a
monotonically increasing probeVersion token checked on completion) so stale
completions are ignored and no MainActor updates occur after dismissal.
In `@rust/src/database/migration/bdk/recovery.rs`:
- Around line 75-81: The current sequence renames the main DB file before its
auxiliary -wal/-shm sidecars (clean_auxiliary_files; std::fs::rename;
rename_auxiliary_files), which can leave sidecar-only remnants if the process
crashes; to make the bundle move crash-safe, checkpoint the source bundle
(flush/commit and ensure WAL is checkpointed into the main DB or atomically
persist sidecars) before calling std::fs::rename, or implement a resumable
recovery path in recovery_target_path that detects and reconciles sidecar-only
remnants; specifically, update the code around clean_auxiliary_files /
std::fs::rename / rename_auxiliary_files to call a checkpoint operation on the
source (or sync and flush the WAL) prior to renaming, and/or modify
recovery_target_path to scan and recover when only -wal/-shm exist so the newest
committed pages aren’t lost.
---
Duplicate comments:
In `@ios/Cove/CoveMainView.swift`:
- Around line 254-269: The presentation predicates allow backup/passkey prompts
to appear before the user authenticates because they only check !showCover;
update both canPresentMissingPasskeyAlert and
canPresentCloudBackupVerificationPrompt to also require auth.lockState ==
.unlocked (or equivalent unlocked check) so prompts are gated on authentication
state in addition to the existing conditions (phase, alertState, sheetState,
showCover, isCloudBackupRootPromptBlocked, and !showMissingPasskeyAlert).
In `@rust/src/database/migration/bdk/recovery.rs`:
- Around line 162-176: The current match on
super::verify_encrypted_bdk_db(db_path) treats Ok(false) as definitive
corruption and removes db_path; change this so only a confirmed corruption path
causes deletion/restore. Specifically, update the match in the block that calls
verify_encrypted_bdk_db to: 1) keep db_path and bak_path untouched on Ok(false)
(log a warning/error but do not call super::super::log_remove_file or
clean_auxiliary_files), 2) only call super::super::log_remove_file,
clean_auxiliary_files, and finalize_sqlite_bundle_move when the verification
explicitly confirms corruption (Ok(true) or an explicit Corrupt enum if you
refactor verify_encrypted_bdk_db to return a clearer result), and 3) treat
verification errors (Err(...)) as operational/inconclusive and likewise avoid
deleting the live DB; adjust verify_encrypted_bdk_db signature if needed to
return a more granular result to distinguish confirmed corruption from transient
errors so the match can make this decision.
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 710-744: The start_background_operation currently only reads
state.status before spawn_blocking allowing races; before calling
cove_tokio::task::spawn_blocking claim the slot by acquiring a write access (or
call self.set_status/clear progress synchronously) when
entering_status.is_some() so the status is updated atomically prior to spawning
the worker; keep the same progress/reset logic (set_progress(None),
set_restore_progress(None), set_restore_report(None) when appropriate) but
perform those calls before spawn_blocking so two concurrent
enable_cloud_backup() calls cannot both pass the initial guard; ensure the
worker still sets failure status via Self::status_for_operation_error(&error) on
error as before.
- Around line 1044-1049: The current
wipe_wallet_keychain_items_for_catastrophic_recovery() only uses IDs returned by
catastrophic_wipe_wallet_ids(persisted_wallet_ids_for_catastrophic_wipe(),
&WALLET_DATA_DIR) which can omit orphaned on-disk wallet directories; update the
logic to compute the union of persisted wallet IDs
(persisted_wallet_ids_for_catastrophic_wipe()) and wallet IDs discovered on disk
under cove_common::consts::WALLET_DATA_DIR before removing keychain items so
orphaned dirs' key material is also wiped; modify
wipe_wallet_keychain_items_for_catastrophic_recovery() (and the similar block
around lines 1087-1096) to gather on-disk IDs, merge with persisted IDs
(deduplicate), then call the existing catastrophic wipe routine or a new helper
with that union to delete keychain entries for all wallets.
- Around line 1003-1008: The wallet-directory wipe failure is currently only
logged and not propagated, allowing recovery to continue erroneously; change the
removal so errors are returned to the caller instead of swallowed—replace the
if-let block around std::fs::remove_dir_all(wallet_dir) with code that either
uses the ? operator (std::fs::remove_dir_all(wallet_dir)? ) or explicitly maps
and returns an error (e.g., if let Err(e) = std::fs::remove_dir_all(wallet_dir)
{ error!("Failed to remove wallet data dir: {e}"); return
Err(anyhow::anyhow!("Failed to remove wallet data dir: {e}")); } ), and ensure
the enclosing function signature returns a Result so the failure is propagated
up from wallet_dir removal.
---
Nitpick comments:
In `@ios/Cove/CoveMainView.swift`:
- Around line 1052-1121: The view's fixed VStack layout in the
CloudBackupVerificationPromptView (the var body) can push content off-screen on
small devices or large Dynamic Type; wrap the main content in a vertical
ScrollView so the failure message and action buttons can be scrolled into view.
Replace the top-level VStack with ScrollView(.vertical) { VStack(...) } (keeping
the inner VStack, spacers, heroView, conditional buttons, padding,
.frame(maxWidth: .infinity, alignment: .top), background modifier, and the two
.animation modifiers) and ensure the ScrollView preserves the same horizontal
padding and top alignment so visual appearance and behaviors
(isVerifying/failure-driven animations and onDismiss/onVerify actions) remain
unchanged.
In `@rust/src/manager/cloud_backup_manager.rs`:
- Around line 348-349: The current listen_for_updates() spawns threads that all
consume from the shared Arc<Receiver<Message>> (reconcile_receiver), causing
messages to be split across listeners; make registration single-registration by
adding a flag (e.g., AtomicBool updates_listener_registered) to the struct and
in listen_for_updates() do an atomic compare-and-swap to ensure only the first
caller proceeds to spawn the thread(s) and subsequent calls return early;
alternatively, if you need multiple listeners, replace the flume Receiver with a
broadcast-capable channel and update code paths that use reconciler and
reconcile_receiver accordingly so each listener receives a copy of each Message.
Ensure the change touches functions/fields reconcile_receiver, reconciler and
listen_for_updates (also apply the same fix in the analogous block around lines
754-761).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 03f9a42a-3446-4fc8-a58f-dcbeb54cc10b
⛔ Files ignored due to path filters (1)
`ios/CoveCore/Sources/CoveCore/generated/cove.swift` is excluded by `!**/generated/**`
📒 Files selected for processing (11)
- android/app/src/main/java/org/bitcoinppl/cove_core/cove.kt
- ios/Cove/CoveApp.swift
- ios/Cove/CoveMainView.swift
- ios/Cove/Flows/Onboarding/OnboardingContainer.swift
- ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift
- ios/Cove/Flows/Onboarding/StartupRecovery/CloudRestoreOfferView.swift
- rust/src/database.rs
- rust/src/database/migration/bdk.rs
- rust/src/database/migration/bdk/recovery.rs
- rust/src/manager/cloud_backup_manager.rs
- rust/src/manager/cloud_backup_manager/ops.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- rust/src/database.rs
- ios/Cove/Flows/Onboarding/StartupRecovery/CloudRestoreOfferView.swift
- ios/Cove/CoveApp.swift
```swift
.task {
    probeCloud()
}
```
🧩 Analysis chain
🏁 Script executed:
# Read the CatastrophicErrorView file to examine the async probe code
cat -n ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift | head -100
Repository: bitcoinppl/cove
Length of output: 3755
🏁 Script executed:
# Check if there are any Task properties or cancellation mechanisms
rg -A5 -B5 "Task.detached|probeCloud|retryProbe" ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift
Repository: bitcoinppl/cove
Length of output: 1229
🏁 Script executed:
# Check the CoveApp.swift wiring mentioned in the scratchpad
sed -n '119,127p' ios/Cove/CoveApp.swift
Repository: bitcoinppl/cove
Length of output: 379
Cancel or version cloud probes so stale results can't win.
retryProbe() starts another detached task without cancelling the first one. If the earlier probe finishes last, it can overwrite the newer result and show the wrong recovery action. Additionally, Task.detached lets the work outlive the view, risking MainActor updates after view dismissal. Store a single probe task reference and cancel it on retry, or use a versioned token and ignore stale completions.
Also applies to: 43–70
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ios/Cove/Flows/Onboarding/StartupRecovery/CatastrophicErrorView.swift` around
lines 28 - 30, The cloud probe logic currently spawns multiple detached tasks
(via Task.detached in probeCloud()/retryProbe()) which can outlive the view and
allow stale results to overwrite newer ones; change probe management to keep a
single Task? property (e.g., probeTask) owned by the view/model and: before
starting a new probe cancel the previous task, start the new probe on the
MainActor or use Task { `@MainActor` in ... } rather than Task.detached, and
ensure you only apply results if the task was not cancelled (or use a
monotonically increasing probeVersion token checked on completion) so stale
completions are ignored and no MainActor updates occur after dismissal.
```rust
clean_auxiliary_files(destination_path);
std::fs::rename(source_path, destination_path).context(format!(
    "failed to rename {} to {}",
    source_path.display(),
    destination_path.display()
))?;
rename_auxiliary_files(source_path, destination_path)
```
Make the bundle move crash-safe.
Line 76 renames the main DB before the -wal/-shm files. If the process dies before rename_auxiliary_files() finishes, recovery_target_path() will ignore the leftover *.db.bak-wal / *.db.enc.tmp-wal sidecars on the next launch because only main artifacts are scanned. For the legacy backup states this helper is meant to recover, that can silently drop the newest committed pages. Please checkpoint the source bundle before the main rename, or add resumable recovery for sidecar-only remnants.
As per coding guidelines, "Verify correct functionality and logic consistency."
Also applies to: 109-128
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rust/src/database/migration/bdk/recovery.rs` around lines 75 - 81, The
current sequence renames the main DB file before its auxiliary -wal/-shm
sidecars (clean_auxiliary_files; std::fs::rename; rename_auxiliary_files), which
can leave sidecar-only remnants if the process crashes; to make the bundle move
crash-safe, checkpoint the source bundle (flush/commit and ensure WAL is
checkpointed into the main DB or atomically persist sidecars) before calling
std::fs::rename, or implement a resumable recovery path in recovery_target_path
that detects and reconciles sidecar-only remnants; specifically, update the code
around clean_auxiliary_files / std::fs::rename / rename_auxiliary_files to call
a checkpoint operation on the source (or sync and flush the WAL) prior to
renaming, and/or modify recovery_target_path to scan and recover when only
-wal/-shm exist so the newest committed pages aren’t lost.
Summary
This branch brings the iOS CSPP cloud backup flow to review-ready shape.
On the Rust/core side, it adds the CSPP backup data model, passkey and cloud storage callback interfaces, the new cloud backup manager, encrypted DB change detection, debounced upload handling, startup sync/reconciliation, restore support, and persisted backup verification state.
On iOS, it wires that stack into iCloud Drive and passkeys, adds the iOS 18.4 floor needed for safe PRF behavior, and rounds out the user-facing flows for backup setup, restore discovery, restore progress, backup verification, passkey repair, and cloud-backup detail/status screens. It also hardens recovery behavior around missing passkeys, unsupported backup formats, duplicate-wallet restore cases, and backup integrity verification after wallet changes.
What To Review
QA Notes
Closes: #562 #566 #567 #569 #570 #571 #575 #577 #578 #579 #582 #585 #600 #601 #603 #604
Also addresses: #580 #583 #584
Summary by CodeRabbit
New Features
New UI
Improvements
Documentation