Release of Daml Connect 1.17.0

Daml Connect 1.17.0 has been released and you can install it using:

daml install 1.17.0

Summary

Daml Connect 1.17 brings a number of important new features and improvements:

  1. Oracle DB support is now available throughout the Daml Connect stack as part of the Enterprise Edition. 
  2. Pruning of divulged contracts allows running Daml Ledgers in constant disk space even if divulgence is used to share contracts.
  3. Improvements to Command Deduplication make it easier to build resilient client applications.
  4. Significant performance improvements for Driver integration components are available in the Daml Driver for PostgreSQL 1.17 and will filter through to other Drivers as they upgrade.

Impact and Migration

Almost all changes in this release are additive. The exception is a small API change in the low-level akka-bindings component of the Scala Ledger API Bindings. 

  • LedgerClientBinding.commands now returns a flow of Either[CompletionFailure, CompletionSuccess] instead of Completion. For backwards compatibility, the new return type can be turned back into a Completion using CompletionResponse.toCompletion.

In addition, we recommend taking advantage of the improved Pruning and Command Deduplication features; they are fully backward compatible, but opt-in.

What’s New

Daml Connect Oracle DB Support

Background

Several runtime components in Daml Connect require an RDBMS persistence backend for production use. Until now, PostgreSQL was the only supported option. With Daml Connect 1.17, Oracle Database versions 19.11 and upward are also supported by the Enterprise Edition of Daml Connect.

Specific Changes

  • In the Enterprise Edition the trigger service and JSON API server now support JDBC URLs pointing to an Oracle Database using the thin driver type from oracle.jdbc.OracleDriver. For example:

    driver=oracle.jdbc.OracleDriver,url=jdbc:oracle:thin:@//localhost:1521/ORCLPDB1,user=system,password=hunter2
    • Note that by default, the JSON API Server sets up a JSON search index to speed up the query endpoints. Oracle Database versions prior to 19.11 have a known bug on such indices and will serve incorrect results.
      Furthermore, even with newer versions, queries with search strings longer than 256 bytes will fail. If you need queries with long search tokens, or want to use an older Oracle DB version at your own risk, you can disable the JSON search index by passing the option `disableContractPayloadIndexing=true` as part of --query-store-jdbc-config.

Impact and Migration

This is an additive change. If you would like to migrate Daml Connect to Oracle DB, simply restart the JSON API and Trigger Service components against an Oracle Database using the create-and-start start mode to create and populate a new schema.

Pruning of Divulged Contracts

Background

In Daml Connect release 1.14 we announced the deprecation of the use of divulged contracts during interpretation, together with a four-step plan to move away from using divulgence as part of business processes in Daml:

  1. Deprecate the behavior
  2. Emit warnings when the behavior is used
  3. Change the ledger pruning feature to prune divulged contracts
  4. Turn the feature off by default together with the introduction of a better feature to share contracts without modifying them.

In release 1.16, warnings were introduced that flag any use of this behaviour, in line with step 2. Step 3 is taken in this release (1.17). Until the transition is complete, this is an optional feature of the pruning service, but not pruning divulged contracts may lead to a steady increase in disk space use over time.

Specific Changes

  • The pruning service has a new flag prune_all_divulged_contracts. When set to true, all divulgence events up to the pruning offset are removed.
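
To make the semantics concrete, here is a small illustrative sketch (plain Python, not the actual ledger implementation): pruning removes events up to the pruning offset as before, while divulgence events at or before that offset are additionally removed only when the new flag is set. The `Event` model and `prune` function are simplified stand-ins invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    offset: int   # position in the ledger event stream
    kind: str     # simplified: "create", "archive", or "divulgence"

def prune(events, prune_up_to, prune_all_divulged_contracts=False):
    """Illustrative sketch: events after the pruning offset are always kept;
    divulgence events at or before the offset survive unless the flag is set."""
    kept = []
    for e in events:
        if e.offset > prune_up_to:
            kept.append(e)   # after the pruning offset: always kept
        elif e.kind == "divulgence" and not prune_all_divulged_contracts:
            kept.append(e)   # divulgence survives unless the flag is set
        # everything else at or before the offset is pruned
    return kept

events = [
    Event(1, "create"),
    Event(2, "divulgence"),
    Event(3, "archive"),
    Event(4, "create"),
]

# Without the flag, the divulgence event at offset 2 survives pruning to offset 3.
print([e.offset for e in prune(events, prune_up_to=3)])  # [2, 4]
# With the flag, it is removed as well, keeping disk usage bounded.
print([e.offset for e in prune(events, prune_up_to=3,
                               prune_all_divulged_contracts=True)])  # [4]
```

This is why the flag matters for disk space: without it, divulgence events accumulate indefinitely even as the rest of the ledger history is pruned.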

Impact and Migration

Unless you use the pruning features, you don’t need to worry about this change. If you do, the documentation on pruning gives some guidance on how to choose good pruning offsets and/or determine how to set the new flag. The general rule of thumb is:

  • If you know your application does not make use of divulged contracts during command interpretation, set the flag to true.
  • If you know your application does not make use of old (e.g., older than 30 days) contracts during interpretation, set the flag to true, but choose an appropriately old pruning offset.
  • If you are unsure whether your application makes use of divulgence, keep the flag false, and start monitoring for the warnings introduced in Daml Connect 1.16.

If you are a Digital Asset customer, you can also work with your Relationship Manager to get support in determining whether your application is impacted.

Command Deduplication Improvements

Background

There are circumstances where an integration or client application needs at-most-once or exactly-once delivery of a Command. In other words, when it does not receive a response for a requested transaction, it needs a safe way to try again without risking duplicate processing of the command. A good example of a case where this is important is the issuance of an asset.

Command Deduplication is a Ledger API feature that offers precisely this safety, and the guarantees offered are getting even stronger with this release. This allows application developers to build more resilient client applications more easily.

In previous SDK releases, you already set the following fields when submitting commands: command_id, application_id, act_as. This triplet is now referred to as a Change ID, and represents a certain change that an application wants to effect on the ledger. The command deduplication feature, as before, ensures that at most one command submission with a given Change ID can be committed within an application-specified deduplication period.

What’s new is that this deduplication also works if two submissions come from different Participant Nodes, and that applications can correlate individual submission attempts, identified by a `submission_id`, with their corresponding completions.
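
As a rough illustration of the guarantee, consider the following simplified Python sketch (not the actual participant implementation; the `Deduplicator` class and its API are invented for this example): submissions are deduplicated by their Change ID within the deduplication period, while submission_id lets the application tell individual attempts apart.

```python
from dataclasses import dataclass, field

def change_id(application_id, command_id, act_as):
    # The Change ID triplet; act_as is a set of parties, so order is irrelevant.
    return (application_id, command_id, frozenset(act_as))

@dataclass
class Deduplicator:
    """Illustrative sketch of Change-ID-based command deduplication."""
    deduplication_duration: int
    committed: dict = field(default_factory=dict)  # Change ID -> commit time

    def submit(self, application_id, command_id, act_as, submission_id, now):
        cid = change_id(application_id, command_id, act_as)
        last = self.committed.get(cid)
        if last is not None and now - last <= self.deduplication_duration:
            # Duplicate within the deduplication period: rejected, but the
            # completion still carries submission_id for correlation.
            return {"submission_id": submission_id, "status": "DUPLICATE"}
        self.committed[cid] = now
        return {"submission_id": submission_id, "status": "COMMITTED"}

dedup = Deduplicator(deduplication_duration=30)
# First attempt commits; a retry within the period is rejected as a duplicate;
# after the period has elapsed, the same Change ID can commit again.
print(dedup.submit("app", "cmd-1", ["Alice"], "attempt-1", now=0))   # COMMITTED
print(dedup.submit("app", "cmd-1", ["Alice"], "attempt-2", now=10))  # DUPLICATE
print(dedup.submit("app", "cmd-1", ["Alice"], "attempt-3", now=60))  # COMMITTED
```

The point of the sketch is the retry pattern it enables: a client that never saw a completion can safely resubmit with the same Change ID and a fresh submission_id, knowing the change will be committed at most once within the deduplication period.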

Specific Changes

  • The following changes were made to protobuf to support the new command deduplication mechanism:
    • com.daml.ledger.api.v1.Commands:
      • The new field submission_id identifies the individual submissions belonging to the same Change ID.
      • deduplication_time is deprecated in favour of the more aptly named deduplication_duration, which specifies the length of the deduplication period.
    • com.daml.ledger.api.v1.Completion:
      • Completions now contain application_id and act_as, the application ID and act_as set that was used for the submission. Together with the existing command_id, this now provides the Change ID on the completion.
      • The completion also contains the submission_id that was set as part of the submission and identifies the individual submissions belonging to the same Change ID.
      • Completions now contain either a deduplication_period or deduplication_time, indicating the actual starting point of the deduplication that was used in practice.

Impact and Migration

These improvements are all opt-in. We recommend setting submission_id to be able to correlate completions with submission attempts, but the field is optional. Similarly, we recommend using the better-named deduplication_duration over the deprecated deduplication_time, but the two fields do the same thing. 

Integration Component and Daml Driver for PostgreSQL Performance

Background

Driver performance has been a major theme over the last months, and one of the biggest changes happening under the hood - in the integration components for Daml Drivers - is now surfacing in the Daml Driver for PostgreSQL.

Starting with release 1.17, the driver supports a new underlying database schema which offers a 3x-4x improvement in transaction throughput. Like all Daml Driver for PostgreSQL schema migrations, this is a one-way operation, and the new schema requires PostgreSQL 10.x or newer instead of the previously supported version 9.6. Using the new schema is therefore opt-in, activated via a command line flag.

Specific Changes

  • The Daml Driver for PostgreSQL has a new flag --enable-append-only-schema to enable a new database schema that improves performance and enables the pruning feature.

Impact and Migration

The transition from the old schema to the new one is opt-in, and transparent and seamless if performed. There is no need to take any action, but if you would like to start testing or taking advantage of the new high-performance persistence schema, you can do so by setting the above flag.

The new schema makes use of certain PostgreSQL features that are only considered stable from PostgreSQL 10.x onward.

Minor Improvements

  • The JSON API’s CLI option --access-token-file is now deprecated. It was used to pass in a file containing a claim that would give the API server access to the package service, but that access is no longer needed, so the flag no longer serves a purpose.
  • The JSON API now logs errors in the ledger connection at every attempt.
  • The JSON API query performance on contract keys has improved. If run with persistence, this needs a schema migration, so you must run the JSON API server once with start-mode=create-only.
  • The daml ledger commands now accept --tls in combination with --json-api to access a JSON API behind a TLS reverse proxy.
  • A table prefix can now be specified in the JDBC config via tablePrefix=<TablePrefix>. This was added to allow running multiple instances of the HTTP-JSON API service against a single database instance while cleanly separating each instance’s query store without extra configuration.
  • The Java Ledger API Bindings now have an optional client-side timeout for commands. This is set using the withTimeout method on DamlLedgerClient.Builder.
  • To aid clearer error handling, the low-level akka-bindings component of the Scala Ledger API Bindings has had a small type change. LedgerClientBinding.commands now returns a flow of Either[CompletionFailure, CompletionSuccess] instead of Completion. For backwards compatibility, the new return type can be turned back into a Completion using CompletionResponse.toCompletion.
  • The DA.List and DA.List.Total modules now export minimumBy, maximumBy, minimumOn and maximumOn, behaving similarly to sortBy and sortOn.
  • Navigator will now start with an empty config if no config file exists and it is run outside of a project.
  • The Navigator now highlights the currently selected custom view in the sidebar.
  • Daml Studio now remembers the script view configuration like the checkboxes for detailed disclosure information for each script in a workspace and does not need to be reconfigured upon closing/reopening or restarting of Daml Studio.
  • The Daml Script Export tool now supports an --all-parties option to generate a ledger export as seen by all known parties.
  • Daml Script Export now handles templates in packages using LF versions 1.7 or older. These package versions don't include type class instances, so Daml Script Export generates replacement instances in the generated script. The generated script ends up using less type-safe versions of Daml Script ledger commands.

Bug Fixes

  • A bug in the Daml Trigger Service was fixed which could allow underprivileged users to access a list of all running triggers.
  • A bug in the Daml REPL was fixed that could cause an error when out of scope types were encountered. As an example, the following resulted in an error in SDK 1.16 and earlier because DA.Map is not in scope.
    ```
    daml> import qualified DA.Set as Set
    daml> let m = Set.toMap (Set.fromList [1,2,3])
    daml> debug m
    File: Line1.daml
    Hidden:   no
    Range: 6:9-6:27
    Source:   typecheck
    Severity: DsError
    Message:
      Line1.daml:6:9: error:
      Not in scope: type constructor or class ‘DA.Internal.LF.Map’
      No module named ‘DA.Internal.LF’ is imported.
    ```
  • Fixed a bug that would cause Daml Script and Scenarios to display poorly in case of unhandled exceptions.
  • Fixed a bug in the JSON API that prevented it from being aware of packages uploaded directly via the Ledger API.