Release 2.8.0 brings a wide range of improvements to Daml Enterprise, including observability enhancements, JSON API enhancements, new financial instruments for Daml Finance, a new feature enabling high-throughput and complex queries, and many operational, quality, security, and performance improvements.
The term Operational Data Store (ODS) usually refers to a database that mirrors the ledger and allows for efficient querying. The Participant Query Store (PQS) feature acts as an ODS for a participant node. It stores contract creation, contract archival, and exercise information in a PostgreSQL database using a JSONB column format. The PostgreSQL database is queried over JDBC.
The PQS allows for complex or high-throughput queries related to contracts. It can be used by:
Java and TypeScript codegen support initializing objects from a PQS query, which ensures consistency between a template, its generated Java or TypeScript class, and the data payload in PQS. See the PQS documentation for details.
PQS has graduated to General Availability.
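As a sketch of the kind of query PQS enables, the following assumes the `active` table function and JSONB `payload` column described in the PQS documentation; the template and field names are illustrative, so verify the schema against your PQS version:

```sql
-- Count active Token contracts per owner (illustrative names; verify the
-- function and column names against the PQS documentation for your version).
SELECT payload->>'owner' AS owner, count(*) AS holdings
FROM active('Main:Token')
GROUP BY payload->>'owner'
ORDER BY holdings DESC;
```

Because PQS is a regular PostgreSQL database queried over JDBC, such queries can be issued from any standard SQL tooling.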
We have enhanced the core asset model (Account, Holding, and Instrument interfaces) to streamline upgrade processes, enhance extensibility, and improve interoperability:
To ease future transitions to Daml 3.0 and the Canton Network, we have shifted to single-maintainer contract keys.
Previously, our factory contracts featured a Remove choice for archiving interface instances. With Daml now supporting direct archival of interface instances, these choices have been removed.
The locking mechanism has been separated from the Holding interface (renamed from Base following customer feedback) into a new Lockable interface, making it available for broader use. For example, the Account interface also implements Lockable, allowing accounts to be frozen, but implementing it is not mandatory.
Furthermore, we have broadened the library's functionality by introducing new financial instruments, such as structured products and multi-underlying asset swap instruments (both early access).
Finally, we’ve made a large number (around 50 tickets) of smaller improvements addressing customer feedback, ranging from more consistent naming conventions in the library to didactic improvements in our documentation and tutorials.
You can find the list of stable packages and the major updates since the previous release here. The technical changelog for each package can be found as a sub-page here.
Distributed tracing is a technique for troubleshooting performance issues in a microservices environment like Daml Enterprise. In this release, client applications gain the ability to extract trace and span IDs from past transactions and completions so that distributed traces can continue in follow-up commands. The trace context enables client applications that did not submit the original request to pick up the initial spans and continue them in follow-up requests, allowing multi-step workflows to be adorned with contiguous chains of related spans.
To learn how to extend your application to support distributed tracing, see the documentation.
Trace contexts are now included in the gRPC messages returned in Ledger API streams and point-wise queries. This change affects the following transaction and command completion service calls:
This is a purely additive change.
For each user, the JSON API server retains a high watermark (offset) that tracks the last offset seen for each party and template combination. The Participant Node (PN) can prune its data such that a JSON API offset becomes earlier than the PN’s pruning offset (i.e., the oldest known offset after pruning). When the JSON API server then queries the Ledger API with such an invalid, earlier offset, an error is returned to the client. Two enhancements have been made to avoid this error.
The first enhancement is a cache refresh endpoint that internally iterates through the existing [party, template] pairs in the cache. The cache refresh is optimized to refresh only those templates whose cache is staler than a given offset. A limitation of the current implementation is that it only has the authorization of the JWT used in the HTTP request that triggers the refresh, yet it attempts to request updates for all parties with stale caches. There are two options to refresh the entire cache: (1) provide a JWT authorized for all the relevant parties, or (2) issue a separate JWT and HTTP request for each party that needs a refresh. Which approach to select depends on the customer’s requirements and integration tooling.
The second enhancement is called prune safety. If the JSON API server makes an out-of-bounds offset request, the error is detected, the [party, template] cache is cleared, and the cache is recreated by making Ledger API requests to fetch a fresh copy of the ACS for that [party, template] pair.
Prune safety does not extend to an application providing an invalid offset. For example, a websocket query from a client can specify an offset. If the provided offset is out of bounds for the PN, then the client application can detect an error from the JSON API and proceed by making a request without a specified offset.
As an optimization, a customer can create a script to refresh the cache by issuing a query for each [party, template] pair prior to pruning. This script can be run as part of the pruning business process, which recreates the cache before pruning to avoid the latency of updating the cache when query requests are made.
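Such a warm-up script could be sketched as follows. All names here are illustrative and not part of the product: the host, the [party, template] pairs, and the mint-token.sh JWT helper are assumptions to adapt to your environment; the script simply issues one JSON API /v1/query request per pair.

```shell
# Hypothetical cache warm-up before pruning: one /v1/query per [party, template]
# pair. Host, pairs, and mint-token.sh are illustrative placeholders.
HOST="http://localhost:7575"
for entry in "alice::Main:Asset" "bob::Main:Asset"; do
  party="${entry%%::*}"       # party name before the "::" separator
  template="${entry##*::}"    # template ID after the "::" separator
  token=$(./mint-token.sh "$party" 2>/dev/null || true)  # hypothetical JWT helper
  curl -s -X POST "$HOST/v1/query" \
    -H "Authorization: Bearer $token" \
    -H "Content-Type: application/json" \
    -d "{\"templateIds\": [\"$template\"]}" >/dev/null || true
done
```

Running this as the first step of the pruning business process refreshes each cache entry so that subsequent client queries do not pay the cache-rebuild latency.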
This feature is purely additive.
The new multi-package build feature improves the workflow for developers working across multiple packages at the same time by managing dependencies across packages and only (re)building packages when needed. Using multi-package builds, developers can:
For a summary and detailed changes, see the documentation.
The multi-package build feature is quite flexible so that it can be adapted to the development and CI/CD environment. An overview of the changes:
If a complete rebuild is needed, it can be accomplished by running daml clean --all followed by daml build.
This feature is purely additive, backwards compatible, and enabled by default. It should not change existing behavior, but if any regressions occur, you can use --enable-multi-package=no to deactivate the feature altogether.
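For instance, a repository containing two packages can be wired together with a multi-package.yaml file at its root. The paths here are illustrative; see the documentation for the exact format:

```yaml
# multi-package.yaml -- lists the packages that daml build should manage together
packages:
  - ./main-app
  - ./shared-lib
```

With this file in place, building one package rebuilds its local dependencies only when they have changed.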
New certification paths aligned with our latest release:
Many additional enhancements are available in this release. They are listed below.
The JSON API server can terminate TLS-secured connections from clients.
Pruning presents a trade-off between limiting ledger storage space and being able to query ledger history far into the past. Pruning participant-internal state strikes a balance by deleting only internal state. Previously, internal pruning was available only via the "manual" participant.pruning.prune_internally command. With this release, pruning participant internal-only state also becomes available through automatic pruning. The specific changes are:
Daml Enterprise now supports HLF 2.5. These changes are backwards compatible with any running 2.2 solution, and no changes are required on the Canton side to perform these upgrades.
The Daml Language updates are listed below.
Restricted name warnings: Attempting to use the names this, self, or arg in template, interface, or exception fields will often result in confusing errors and mismatches with the underlying desugared code. We now emit an error (or warning) early in those cases, on the field name itself, to make the problem clearer.
daml script --ide-ledger: To unify Daml tools, the --ide-ledger option is now supported in Daml Script. It allows a user to directly invoke scripts within a .dar file on their local machine, without a separate ledger running. Note that, unlike daml test, daml script will not attempt to recompile or read the source code directly. This option cannot be used with --ledger-host, --participant-config, or --json-api.
Daml-script JSON support for --all: The daml-script binary runner has been refactored to behave consistently between --script-name and --all. As a result, --all now works when using --json-api.
Daml-script --upload-dar flag: The daml-script binary now allows you to specify if you want the DAR containing your scripts to be uploaded to the ledger before execution. The previously implicit uploading behavior of automatically uploading when using --all is now deprecated with a warning. To avoid the warning, add --upload-dar=yes.
Deprecation of template-local definitions: The syntax for let bindings in template definitions will be deprecated in favor of plain top-level bindings. If the deprecated syntax is used, the following warning will be shown during compilation or in the IDE: Template-local binding syntax ("template-let") is deprecated, it will be removed in a future version of Daml. Instead, use plain top level definitions, taking parameters for the contract fields or body ("this") if necessary. For more information, see Reference: Templates: Template-local Definitions (Deprecated)
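For illustration, a template-local binding and its top-level replacement might look like this (the template and field names are ours, not from the library):

```daml
template Token
  with
    owner : Party
    amount : Decimal
  where
    signatory owner
    -- Deprecated template-local binding ("template-let"):
    --   let doubled = amount * 2.0

-- Preferred replacement: a plain top-level definition taking the contract
-- body ("this") as a parameter.
doubled : Token -> Decimal
doubled this = this.amount * 2.0
```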
Removal of deprecated controller..can syntax: The controller..can syntax for defining template choices has been deprecated since Daml 2.0 and is now removed. Projects that use this syntax are no longer accepted; those choices should instead be defined using choice-first syntax. Note that, as a consequence, the warning flags -Wcontroller-can and -Wnocontroller-can are no longer accepted. See Deprecation of controller-first syntax: Migrating for more information on how to adapt existing projects.
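As a migration sketch (the template and choice names are ours), a removed controller-first choice and its choice-first equivalent:

```daml
template Token
  with
    owner : Party
  where
    signatory owner

    -- Removed controller-first syntax (no longer accepted):
    --   controller owner can
    --     Transfer : ContractId Token
    --       with newOwner : Party
    --       do create this with owner = newOwner

    -- Choice-first equivalent:
    choice Transfer : ContractId Token
      with
        newOwner : Party
      controller owner
      do create this with owner = newOwner
```

Note that controller-first syntax implicitly made the controlling parties observers of the contract; with choice-first syntax, add them as observers explicitly if they still need to see the contract.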
CommandService gRPC deadline logic: Commands submitted to the Ledger API now respect gRPC deadlines: if a request reaches the command processing layer with an already-expired gRPC deadline, the command is not sent for submission. Instead, the request is rejected with a new self-service error code, REQUEST_DEADLINE_EXCEEDED, which informs the client that the command is guaranteed not to have been sent to the ledger for execution.
Command Interpretation Timeouts: If you submit a command that runs for a very long time, the Ledger API will now reject the command with the new self-service error code INTERPRETATION_TIME_EXCEEDED when the transaction reaches the ledger time tolerance limit based on the submission time.
Protocol versions 3 and 4 are deprecated: Protocol versions 3 and 4 are now marked as deprecated. Protocol version 5 should be used for any new deployment.
The expected KMS wrapper-key configuration value has changed from:
crypto.private-key-store.encryption.wrapper-key-id = { str = "..."}
to a simple string:
crypto.private-key-store.encryption.wrapper-key-id = "..."
The configuration fields schema-migration-attempt-backoff and schema-migration-attempts for the indexer were removed. The following config lines will have to be removed, if they exist:
The configuration fields max-event-cache-weight and max-contract-cache-weight for the Ledger API server were removed. The following config lines will have to be removed, if they exist:
By default, Canton will log the config values on startup, as this has turned out to be useful for troubleshooting. This feature can be turned off by setting canton.monitoring.logging.log-config-on-startup = false but this is not recommended. Logging the configuration, including all default values, can be turned on using canton.monitoring.logging.log-config-with-defaults = true. Note that this will log all available settings, including parameters that are not expected to be changed. Confidential data will not be logged but replaced by xxxx.
The default sizes of the contract state and contract key state caches have been decreased by one order of magnitude from 100,000 to 10,000.
The size of these caches determines the likelihood that a transaction using a contract/contract-key that was recently created or read will still find it in memory rather than need to query it from the database. Larger caches might be of interest in use cases where there is a big pool of ambient contracts that are consistently being fetched or used for non-consuming exercises. It may also benefit those use cases where a big pool of contracts is being rotated through a create -> archive -> create-successor cycle. Consider adjusting these parameters explicitly if the performance of your specific workflow depends on large caches, and you were relying on the defaults thus far.
The default scope (the scope field in the scope-based token) for authenticating on the Ledger API using JWT is daml_ledger_api. Other scopes can be configured explicitly using the custom target scope configuration option:
canton.participants.participant.ledger-api.auth-services.0.target-scope="custom/Scope-5:with_special_characters"
Target scope can be any case-sensitive string containing alphanumeric characters, hyphens, slashes, colons and underscores. Either the target-scope or target-audience parameter can be configured, but not both.
The sizes of the connection pools used for interactions with database storage inside Canton nodes are determined using a dedicated formula described in the documentation article on max connection settings. The values obtained from that formula can now be overridden using explicit configuration settings for the read, write and ledger-api connection pool sizes:
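For example, a participant's pool sizes might be overridden as follows. This is a sketch only: the exact configuration key names (here assumed to live under storage.parameters.connection-allocation) should be verified against the documentation article mentioned above.

```
// Illustrative only -- verify the exact key names against the documentation.
canton.participants.participant1.storage.parameters.connection-allocation {
  num-reads = 5        // read connection pool size
  num-writes = 4       // write connection pool size
  num-ledger-api = 2   // ledger-api connection pool size
}
```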
Similar parameters are available for other Canton node types:
The effective connection pool sizes are reported by the Canton nodes at start-up:
INFO c.d.c.r.DbStorageMulti$:participant=participant_b - Creating storage, num-reads: 5, num-writes: 4
Improved observability of ledger streams
The observability of the streams served over the Ledger API has been improved in both logging and metrics.
The Canton console has also had several enhancements. They are listed below.
Usage of the applicationId in command submissions and completion subscriptions: Previously, the Canton console used a hard-coded "CantonConsole" as an applicationId in the command submissions and the completion subscriptions performed against the Ledger API. Now, if an access token is provided to the console, it will extract the userId from that token and use it instead. A local console will use the adminToken provided in canton.participants.<participant>.ledger-api.admin-token, whereas a remote console will use the token from canton.remote-participants.<remoteParticipant>.token. This affects the following console commands:
You can also override the applicationId by supplying it explicitly to these commands.
Introduction of Java Bindings Compatible Console Commands: The following console commands were added to support actions with Java codegen compatible data:
The following commands were replaced by their Java bindings compatible equivalent (in parentheses):
New Functions to Specify a Full-blown Transaction Filter for Flat Transactions: ledger_api.transactions.flat_with_tx_filter and ledger_api.javaapi.transactions.flat_with_tx_filter are more sophisticated alternatives to ledger_api.transactions.flat and ledger_api.javaapi.transactions.flat respectively that allow you to specify a full transaction filter instead of a set of parties. Consider using this if you need to specify more fine-grained filters that include template IDs, interface IDs, and/or whether you want to retrieve and create event blobs for explicit disclosure.
Commands around ACS migration: Console commands for ACS migration (ACS export/import) can now be used with remote nodes. This change applies to the commands in the repair namespace.
New ACS export / import repair commands: The new ACS export / import commands, repair.export_acs and repair.import_acs, provide similar functionality as the existing repair.download and repair.upload commands. However, their implementation allows them to evolve better over time. Consequently, the existing download / upload functionality is deprecated.
Transactions generated by importing an ACS have a configurable workflow ID to track ongoing imports: Contracts added via the repair.party_migration.step2_import_acs and repair.import_acs commands now include a workflow ID. The ID is in the form prefix-${n}-${m}, where m is the number of transactions generated as part of the import process and n is a sequential number from 1 to m inclusive. Each transaction contains 1 or more contracts that share the ledger time of their creation. The two numbers allow you to track whether an import is being processed. You can specify a prefix with the workflow_id_prefix string parameter defined on both commands. If not specified, the prefix defaults to import-${randomly-generated-unique-identifier}.
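The numbering scheme can be sketched as follows (the prefix value is illustrative):

```shell
# Sketch of the workflow ID scheme: m transactions get IDs "<prefix>-<n>-<m>".
prefix="import-7f3a"   # default prefix is "import-<randomly-generated-unique-identifier>"
m=3                    # total number of transactions generated by the import
ids=$(for n in $(seq 1 "$m"); do printf '%s-%s-%s\n' "$prefix" "$n" "$m"; done)
echo "$ids"            # import-7f3a-1-3, import-7f3a-2-3, import-7f3a-3-3
```

Seeing the ID import-7f3a-2-3, for example, tells you that the second of three import transactions has been processed.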
keys.secret.rotate_node_key() console command: The console command keys.secret.rotate_node_key can now accept a name for the newly generated key.
owner_to_key_mappings.rotate_key command expects a node reference: The previous owner_to_key_mappings.rotate_key is deprecated and now expects a node reference (InstanceReferenceCommon) to avoid any dangerous and/or unwanted key rotations.
DAR vetting and unvetting commands: DAR vetting and unvetting convenience commands have been added to:
Additionally, two error codes have been introduced to allow better error reporting to the client when working with DAR vetting or unvetting: DAR_NOT_FOUND and PACKAGE_MISSING_DEPENDENCIES. Please note that these commands are alpha only and subject to change.
SequencerConnection.addConnection: SequencerConnection.addConnection is deprecated. Use SequencerConnection.addEndpoints instead.
Metrics Changes: There are two metric changes:
Submission service error code change: The error code SEQUENCER_DELIVER_ERROR that could be received when submitting a transaction has been superseded by two new error codes: SEQUENCER_SUBMISSION_REQUEST_MALFORMED and SEQUENCER_SUBMISSION_REQUEST_REFUSED. Please migrate client application code if it relies on the old error code.
The following bugs were fixed in 2.7 patch releases and are mentioned here for completeness. Any additional bugs that are fixed in the 2.8.0 release are also included.
The Daml 2.8.0 SDK has been released. You can install it using the command: daml install 2.8.0.
The table below lists how you can download Daml Enterprise or individual components.
Daml Enterprise v2.8.0

| Component | File download | Container Image |
|---|---|---|
| SDK | | digitalasset/daml-sdk:2.8.0 |
| Canton for Daml Enterprise | | digitalasset-docker.jfrog.io/canton-enterprise:2.8.0 |
| Daml Finance | | NA |
| HTTP JSON API Service | | digitalasset-docker.jfrog.io/http-json:2.8.0 |
| Trigger Service | | digitalasset-docker.jfrog.io/trigger-service:2.8.0 |
| OAuth 2.0 middleware (Open-Source) | | digitalasset-docker.jfrog.io/oauth2-middleware:2.8.0 |
| Participant Query Store | | digitalasset-docker.jfrog.io/participant-query-store:0.1.0 |
| Trigger Runner | | digitalasset-docker.jfrog.io/trigger-runner:2.8.0 |
| Daml Script | | digitalasset-docker.jfrog.io/daml-script:2.8.0 |
If you are using Oracle JVM and testing security provider signatures, note that the Canton JAR file embeds the Bouncy Castle provider as a dependency. To enable the JVM to verify the signature, put the bcprov JAR on the classpath before the Canton standalone JAR. For example:
java -cp bcprov-jdk15on-1.70.jar:canton-with-drivers-2.8.0-all.jar com.digitalasset.canton.CantonEnterpriseApp
Note: These Docker images are designed to be suitable for production use, with minimal size and attack surface. Minimal images can sometimes make debugging difficult (e.g. no shell in the containers). For convenience, we provide “debug” versions of each of the above images, which you can access by appending “-debug” to the image tag (e.g. digitalasset-docker.jfrog.io/http-json:2.8.0-debug).