diff --git a/.assets/FAQs.md b/.assets/FAQs.md
index b865de4..fcf6076 100644
--- a/.assets/FAQs.md
+++ b/.assets/FAQs.md
@@ -27,4 +27,10 @@ Table fields that are obsoleted already cannot be configured to be exported but
 ### I need help because my export job is timing out!
 Timeout issues have been seen to occur at two possible places in the solution, both of them typically during the initial export of records,
 1. The query to fetch the records before the export to the lake may time out if it takes longer than the defined [operation limits](/business-central/dev-itpro/administration/operational-limits-online). This may happen when bc2adls attempts to sort a large set of records by their row version. You may _suspend_ the sorting temporarily using the field `Skip row version sorting` on the setup page.
-1. Chunks of data (or [blocks](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction#:~:text=Block%20blobs), in the data lake parlance) are added to a lake file during the export. Adding too many such large chunks may cause timeout issues in the form of an error message like `Could not commit blocks to . OperationTimedOutOperation could not be completed within the specified time.` We are using default timeouts in the bc2adls app, but you may add [additional timeout URL parameter](https://learn.microsoft.com/en-us/rest/api/storageservices/put-block-list?tabs=azure-ad#:~:text=timeout) if you want by suffixing the URL call in the procedure [`CommitAllBlocksOnDataBlob`](https://github.com/microsoft/bc2adls/blob/main/businessCentral/src/ADLSEGen2Util.Codeunit.al#:~:text=CommitAllBlocksOnDataBlob) with `?timeout=XX`, XX being the number of seconds for timeout to expire. This issue could typically happen when you are pushing a large payload to the server.
-Also consider reducing the number at the field [Max payload size (MiBs)](https://github.com/microsoft/bc2adls/blob/main/.assets/Setup.md#:~:text=Max%20payload%20size%20(MiBs)).
\ No newline at end of file
+1. Chunks of data (or [blocks](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction#:~:text=Block%20blobs), in data lake parlance) are added to a lake file during the export. Adding too many large chunks may cause timeout issues, surfacing as an error message like `Could not commit blocks to . OperationTimedOutOperation could not be completed within the specified time.` The bc2adls app uses the default timeouts, but you may add an [additional timeout URL parameter](https://learn.microsoft.com/en-us/rest/api/storageservices/put-block-list?tabs=azure-ad#:~:text=timeout) by suffixing the URL call in the procedure [`CommitAllBlocksOnDataBlob`](https://github.com/microsoft/bc2adls/blob/main/businessCentral/src/ADLSEGen2Util.Codeunit.al#:~:text=CommitAllBlocksOnDataBlob) with `?timeout=XX`, where XX is the number of seconds after which the operation times out. This issue typically occurs when you are pushing a large payload to the server. Also consider reducing the number in the field [Max payload size (MiBs)](https://github.com/microsoft/bc2adls/blob/main/.assets/Setup.md#:~:text=Max%20payload%20size%20(MiBs)).
+
+### How do I secure the data on the lake from being publicly viewable?
+Business Central is hosted as a SaaS service, but its calls do not originate from a pre-determined IP address. You may, however, explore the following options (or a combination of them) to secure your data on the lake,
+- Use [Azure security service tags](https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/security/security-service-tags) on the ADLS resource to ensure that incoming traffic is allowed only from genuine Business Central servers. Of course, you will have to include your own IP addresses to be able to read the data from the lake.
Beware that this still does not create a private network: in theory, anyone using an extension in Business Central may call the ADLS endpoint, provided they have the right keys/secrets.
+- Keep the data on the lake encrypted. The data may be encrypted as it is being written to the lake, at the `CreateCsvPayload` procedure in the [Codeunit ADLSE Util](https://github.com/microsoft/bc2adls/blob/main/businessCentral/src/ADLSEUtil.Codeunit.al). You will also have to write custom code to decrypt the data when reading it from, say, Power BI. Also take a look at the out-of-the-box feature for [At-rest encryption in Data Lake](https://learn.microsoft.com/en-us/azure/security/fundamentals/encryption-overview#at-rest-encryption-in-data-lake).
+- ADLS does support being placed behind a private network, but Business Central may then be unable to reach it to write incremental updates. To allow BC to write to it, you may add an intermediate Azure Function with a public endpoint that is itself allowed to write to the lake, for example by whitelisting its public IP address. The Azure Function then acts as a delegate and strengthens your security posture by adding another gate in front of the data.
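
To make the `?timeout` tweak from the timeout FAQ above concrete, here is a minimal, hypothetical AL sketch. The helper name, variable names, and the 300-second value are all illustrative; the only real identifier is the `CommitAllBlocksOnDataBlob` procedure mentioned above, inside which the request URL is built.

```al
// Hypothetical helper: append the Azure Storage "timeout" URL parameter to the
// Put Block List request URL used inside CommitAllBlocksOnDataBlob.
// TimeoutSecs is the number of seconds the service waits before failing the call.
local procedure AppendTimeout(BlobUrl: Text; TimeoutSecs: Integer): Text
begin
    // The URL built at this point carries no query string yet, so '?' is safe here;
    // if a query string were already present, '&' would be needed instead.
    exit(StrSubstNo('%1?timeout=%2', BlobUrl, TimeoutSecs));
end;
```

You would then pass the result of, e.g., `AppendTimeout(BlobUrl, 300)` to the HTTP call instead of the bare URL.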