Daml Connect 1.17.0 has been released and you can install it using:
daml install 1.17.0
Daml Connect 1.17 brings a number of important new features and improvements.
Impact and Migration
Almost all changes in this release are additive. The exception is a small API change in the low-level `akka-bindings` component of the Scala Ledger API Bindings: `LedgerClientBinding.commands` now returns a flow of `Either[CompletionFailure, CompletionSuccess]` instead of `Completion`. For backwards compatibility, the new return type can be turned back into a `Completion` using `CompletionResponse.toCompletion`.
In addition, we recommend taking advantage of the improved Pruning and Command Deduplication features, even though they are fully backwards compatible and opt-in.
Background
Several runtime components in Daml Connect require an RDBMS persistence backend in production use. Until now, PostgreSQL was the only supported option for this. With Daml Connect 1.17, OracleDB version 19.11 and upward is also supported by the Enterprise Edition of Daml Connect.
Specific Changes
The JSON API and Trigger Service components can now be run against Oracle DB, configured via a JDBC URL using the `thin` driver type from `oracle.jdbc.OracleDriver` and passed in via the `--query-store-jdbc-config` flag.
Impact and Migration
This is an additive change. If you would like to migrate Daml Connect to Oracle DB, simply restart the JSON API and Trigger Service components against an Oracle Database using the `create-and-start` start mode to create and populate a new schema.
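As an illustrative sketch of such a restart for the JSON API (the host, ports, credentials, and Oracle service name below are placeholders, not defaults):

```shell
# Hedged sketch: restart the HTTP JSON API against an Oracle query store.
# All connection details below are illustrative placeholders.
daml json-api \
  --ledger-host localhost \
  --ledger-port 6865 \
  --http-port 7575 \
  --query-store-jdbc-config "driver=oracle.jdbc.OracleDriver,url=jdbc:oracle:thin:@localhost:1521/ORCLPDB1,user=jsonapi,password=secret,start-mode=create-and-start"
```

The `create-and-start` start mode creates and populates the schema on startup, so no separate migration step is needed.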
Background
In Daml Connect release 1.14 we announced the deprecation of the use of divulged contracts during interpretation, together with a four-step plan to move business processes in Daml away from relying on divulgence.
In release 1.16, warnings were introduced that flag any use of this behaviour, in line with step 2. Step 3 is taken in this release (1.17). Until the transition is complete, this is an optional feature of the pruning service, but not pruning divulged contracts may lead to a steady increase in disk space use over time.
Specific Changes
The pruning service has a new flag, `prune_all_divulged_contracts`. When set to true, all divulgence events up to the pruning offset are removed.
Impact and Migration
Unless you use the pruning features, you don’t need to worry about this change. If you do, the documentation on pruning gives guidance on how to choose good pruning offsets and on whether to set the new flag.
If you are a Digital Asset customer, you can also work through your Relationship Manager to get support in determining whether your application is impacted.
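As a sketch of how the new flag might be passed when invoking the pruning endpoint directly, here is a `grpcurl` call against the participant pruning service. The offset, address, and JSON field spellings (the standard JSON mapping of the gRPC request) are illustrative assumptions:

```shell
# Hedged sketch: request pruning up to an offset, also removing divulgence
# events up to that offset. Offset and address are illustrative placeholders.
grpcurl -plaintext -d '{
  "pruneUpTo": "00000000000000001234",
  "pruneAllDivulgedContracts": true
}' localhost:6865 com.daml.ledger.api.v1.admin.ParticipantPruningService/Prune
```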
Background
There are circumstances where an integration or client application needs at most once or exactly once delivery of a Command. In other words, when it does not receive a response for a requested transaction, it needs a safe way to try again without risking duplicate processing of the command. A good example of a case where this is important is during issuance of an asset.
Command Deduplication is a Ledger API feature that offers precisely this safety, and the guarantees offered are getting even stronger with this release. This allows application developers to build more resilient client applications more easily.
In previous SDK releases, you already set the following fields when submitting commands: `command_id`, `application_id`, `act_as`. This triplet is now referred to as a Change ID, and represents a certain change that an application wants to effect on the ledger. The command deduplication feature, as before, ensures that at most one command submission with a given Change ID can be committed within an application-specified deduplication period.
What’s new is that this deduplication also works if two submissions come from different Participant Nodes, and that applications can correlate specific submission attempts, identified by a `submission_id`, with their corresponding completions.
Specific Changes
A new field, `submission_id`, identifies the individual submissions belonging to the same Change ID.
`deduplication_time` is deprecated in favour of the more aptly named `deduplication_duration`, which specifies the length of the deduplication period.
Impact and Migration
These improvements are all opt-in. We recommend setting `submission_id` to be able to correlate completions with submission attempts, but the field is optional. Similarly, we recommend using the better-named `deduplication_duration` over the deprecated `deduplication_time`, but the two fields do the same thing.
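As an illustrative sketch of a submission using the new fields, here is a `grpcurl` call to the command submission service. The party, identifiers, and JSON field spellings (the standard JSON mapping of the gRPC `Commands` message) are placeholder assumptions, and the actual Daml commands are omitted:

```shell
# Hedged sketch: submit with an explicit submission_id and the new
# deduplication_duration field. All identifiers are illustrative placeholders,
# and the "commands" list is left empty for brevity.
grpcurl -plaintext -d '{
  "commands": {
    "ledgerId": "my-ledger",
    "applicationId": "asset-issuer",
    "commandId": "issue-asset-42",
    "actAs": ["Alice"],
    "submissionId": "attempt-1",
    "deduplicationDuration": "30s",
    "commands": []
  }
}' localhost:6865 com.daml.ledger.api.v1.CommandSubmissionService/Submit
```

Resubmitting the same Change ID (`applicationId`, `commandId`, `actAs`) with a fresh `submissionId` within the 30-second deduplication period would then be rejected as a duplicate rather than committed twice.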
Background
Driver performance has been a major theme over the last months, and one of the biggest changes that has been happening under the hood, in the integration components for Daml Drivers, is now surfacing in the Daml Driver for PostgreSQL.
Starting with release 1.17, the driver supports a new underlying database schema, which offers 3x-4x transaction throughput improvements when used. Like all Daml Driver for PostgreSQL schema migrations, this is a one-way operation, and the new schema requires PostgreSQL 10.x or newer instead of the previously supported version 9.6. Using the new schema is therefore opt-in, activated via a command line flag.
Specific Changes
Impact and Migration
The transition from the old schema to the new one is opt-in, and transparent and seamless when performed. There is no need to take any action, but if you would like to start testing or taking advantage of the new high-performance persistence schema, you can do so by setting the corresponding command line flag.
The new schema makes use of certain PostgreSQL features that are only considered stable from PostgreSQL 10.x onwards.
- `--access-token-file` is now deprecated. It used to serve for passing in a file containing a claim that would give the API server access to the package service, but that is no longer needed, so the flag no longer serves a purpose.
- A new start mode, `start-mode=create-only`.
- `daml ledger` commands now accept `--tls` in combination with `--json-api` to access a JSON API behind a TLS reverse proxy.
- The query store JDBC configuration now accepts `tablePrefix=<TablePrefix>`. This was added to allow running multiple instances of the HTTP-JSON API service against a single database instance while cleanly separating each HTTP-JSON API query store without extra configuration.
- A new `withTimeout` method on `DamlLedgerClient.Builder`.
- The `akka-bindings` component of the Scala Ledger API bindings has had a small type change: `LedgerClientBinding.commands` now returns a flow of `Either[CompletionFailure, CompletionSuccess]` instead of `Completion`. For backwards compatibility, the new return type can be turned back into a `Completion` using `CompletionResponse.toCompletion`.
- A new `--all-parties` option to generate a ledger export as seen by all known parties.
daml> import qualified DA.Set as Set
daml> let m = Set.toMap (Set.fromList [1,2,3])
daml> debug m
File: Line1.daml
Hidden: no
Range: 6:9-6:27
Source: typecheck
Severity: DsError
Message:
Line1.daml:6:9: error:
Not in scope: type constructor or class ‘DA.Internal.LF.Map’
No module named ‘DA.Internal.LF’ is imported.
Fixed a bug that would cause Daml Script and Scenarios to display poorly in case of unhandled exceptions.