Release Notes

Release of Daml 2.8.0

By Curtis Hrischuk, April 15, 2024

Summary

Release 2.8.0 brings a wide range of improvements to Daml Enterprise. These include observability enhancements, JSON API enhancements, new financial instruments for Daml Finance, a new feature that enables high-throughput or complex queries, and many operational, quality, security, and performance improvements.

Key Enhancements

 

  • Participant Query Store is a new feature that allows client applications to leverage JSONB SQL for complex or high-performance queries regarding contract creation, contract archival, and exercise actions.
  • Daml Finance Enhancements add new financial instruments and an enhanced core asset model (Account, Holding, and Instrument interfaces) to streamline upgrade processes, enhance extensibility, and improve interoperability.
  • Distributed Tracing Enhancements allow you to extract trace and span IDs from past transactions and completions, so that distributed traces can continue in follow-up commands; this will speed up application problem diagnosis.
  • Pruning Enhancements for the JSON API Server refresh the JSON API server’s cache when it queries the Ledger API with an offset that is earlier than what remains on the participant node’s ledger, avoiding an error being returned.
  • Better Multi-Package Build Support increases developer productivity for projects that involve multiple packages by tracking dependencies and only building those packages that have changes. 

What’s New

Participant Query Store

Background

The term Operational Data Store (ODS) usually refers to a database that mirrors the ledger and allows for efficient querying. The Participant Query Store (PQS) feature acts as an ODS for a participant node. It stores contract creation, contract archival, and exercise information in a PostgreSQL database using a JSONB column format. The PostgreSQL database is queried over JDBC.

The PQS allows for complex or high-throughput queries related to contracts. It can be used by:

  • Application developers to access data on the ledger, observe the evolution of data, and debug their applications;
  • Business analysts to analyze ledger data and create reports;
  • Support teams to debug any problems that happen in production.

The Java and TypeScript codegens support initializing objects from a PQS query, which ensures consistency between a template, a Java class, a TypeScript class, and the data payload in PQS. Please explore the documentation for PQS.
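
To give a flavor of what querying PQS can look like, below is a minimal JDBC sketch. It assumes a PostgreSQL connection to the PQS database and uses hypothetical relation and column names (a creates relation with a JSONB payload column and a template_fqn column); consult the PQS documentation for the actual schema and query functions.

    // Minimal sketch of querying PQS over JDBC. The relation and column names below
    // (creates, payload, template_fqn) are illustrative assumptions, not the actual PQS schema.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PqsQueryExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/pqs", "pqs_user", "secret")) {
                // JSONB operators allow filtering and projecting contract payload fields.
                String sql = "SELECT payload ->> 'issuer' AS issuer, payload ->> 'amount' AS amount"
                        + " FROM creates WHERE template_fqn LIKE ? AND payload ->> 'owner' = ?";
                try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                    stmt.setString(1, "%:Asset:Asset");
                    stmt.setString(2, "Alice");
                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("issuer") + " " + rs.getString("amount"));
                        }
                    }
                }
            }
        }
    }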

Impact and Migration

PQS has graduated to General Availability.

Daml Finance Enhancements

Enhanced Upgradeability, Extensibility, and Interoperability

We have enhanced the core asset model (Account, Holding, and Instrument interfaces) to streamline upgrade processes, enhance extensibility, and improve interoperability:

  • The Account now links to its HoldingFactory via a key. This facilitates upgrading the HoldingFactory without the need to modify existing Account contract instances. It also enables a “lazy” upgrade approach for holdings, as detailed in our new Holding Upgrade Tutorial.
  • In anticipation of the need for standardization when implementing composed workflows across applications, we have introduced the notion of a HoldingStandard. It categorizes holdings into distinct classes, each defined by the combination of holding interfaces (Transferable, Fungible, and Holding) they implement. This new standard has guided the renaming and reorganization of our Holding implementations.  Moreover, the settlement process has been refined to require only a matching HoldingStandard, allowing for implementation variations. 
  • A unified HoldingFactory capable of creating holdings for any specified HoldingStandard has been adopted. In particular, this enables multiple holdings (of various HoldingStandards) to be credited to the same account. 

Daml 3.0 and the Canton Network

In order to ease future transitions to Daml 3.0 and the Canton Network, we have shifted to single-maintainer contract keys.

Streamlining Interface Archival

Previously, our factory contracts featured a Remove choice for archiving interface instances. With Daml now supporting direct archival of interface instances, these choices have been removed.

New Interface Lockable

The locking mechanism has been separated from the Holding interface (renamed from Base following customer feedback) into a new Lockable interface. This makes locking available for broader use: the Account also implements Lockable, allowing accounts to be frozen, but it is not a mandatory feature.

New Instruments

Furthermore, we have broadened the library's functionality by introducing new financial instruments, such as structured products and multi-underlying asset swap instruments (both early access).

Usability Improvements

Finally, we’ve made a large number (around 50 tickets) of smaller improvements addressing customer feedback. These improvements range from the consistency of naming conventions in the library to didactic improvements in our docs and tutorials.

You can find the list of stable packages and the major updates since the previous release here. The technical changelog for each package can be found as a sub-page here.

Distributed Tracing Enhancements

Background

Distributed tracing is a technique for troubleshooting performance issues in a microservices environment like Daml Enterprise. In this release, the client applications gain the ability to extract trace and span IDs from past transactions and completions, so that distributed traces can continue in follow-up commands. Trace context enables client applications that were not the submitters of the original request to pick up the initial spans and continue them in follow-up requests. This allows multi-step workflows to be adorned with contiguous chains of related spans.

To learn how to extend your application to support distributed tracing, see the documentation.
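
As an illustration, the sketch below shows one way a Java client could forward a previously extracted trace context on a follow-up submission. It assumes the trace context obtained from a transaction or completion can be rendered as W3C traceparent/tracestate values and that the participant picks these up as standard gRPC metadata; the exact message fields and recommended propagation mechanism are described in the documentation.

    // Minimal sketch: forward an extracted W3C trace context (traceparent/tracestate)
    // as gRPC metadata so a follow-up command continues the original trace.
    import io.grpc.Channel;
    import io.grpc.ClientInterceptor;
    import io.grpc.ClientInterceptors;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import io.grpc.Metadata;
    import io.grpc.stub.MetadataUtils;

    public class TraceContextPropagation {
        static Channel withTraceContext(ManagedChannel channel, String traceparent, String tracestate) {
            Metadata headers = new Metadata();
            headers.put(Metadata.Key.of("traceparent", Metadata.ASCII_STRING_MARSHALLER), traceparent);
            if (tracestate != null && !tracestate.isEmpty()) {
                headers.put(Metadata.Key.of("tracestate", Metadata.ASCII_STRING_MARSHALLER), tracestate);
            }
            ClientInterceptor attachHeaders = MetadataUtils.newAttachHeadersInterceptor(headers);
            // Any Ledger API stub built on the returned channel carries the trace context.
            return ClientInterceptors.intercept(channel, attachHeaders);
        }

        public static void main(String[] args) {
            ManagedChannel channel =
                    ManagedChannelBuilder.forAddress("localhost", 6865).usePlaintext().build();
            // Values extracted from a prior transaction or completion (placeholder here).
            Channel traced = withTraceContext(channel,
                    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01", "");
            // Build command submission stubs on `traced` and submit the follow-up command.
            channel.shutdown();
        }
    }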

Specific Changes

Trace contexts are now included in the gRPC messages returned in Ledger API streams and point-wise queries. This change affects the following transaction and command completion service calls:

  • TransactionService.GetTransactions
  • TransactionService.GetTransactionTrees
  • TransactionService.GetTransactionByEventId
  • TransactionService.GetTransactionById
  • TransactionService.GetFlatTransactionByEventId
  • TransactionService.GetFlatTransactionById
  • CompletionService.CompletionStream

Impact and Migration

This is a purely additive change.

Pruning Enhancements for the JSON API Server

Background

For each user, the JSON API server retains a high watermark (offset) that keeps track of the last offset seen for each party and template combination. The participant node (PN) can prune its data so that the JSON API server’s stored offset ends up earlier than the PN’s pruned offset (i.e., the oldest offset still known after pruning). When the JSON API server then queries the Ledger API with such an earlier, invalid offset, an error is returned to the client. Two enhancements have been made to avoid this error.

Specific Changes

The first enhancement is a cache refresh endpoint that internally iterates through the existing [party, template] pairs in the cache. The cache refresh has an optimization that only refreshes templates whose cache is staler than a given offset. A limitation, as currently implemented, is that it only has the authorization of the JWT used in the HTTP request that triggers the cache refresh, yet it attempts to request updates for all parties with stale caches. There are two options to refresh the entire cache: (1) provide a JWT that is authorized for all the relevant parties, or (2) issue separate JWT and HTTP requests for each party that needs a refresh. The approach selected depends on the customer’s requirements and integration tooling.

The second feature is called prune safety. If an out-of-bounds offset request is made by the JSON API server, the error is detected, the [party, template] cache is cleared, and the cache is recreated by making Ledger API requests to get a fresh copy of the ACS for that [party, template] pair.

Prune safety does not extend to an application providing an invalid offset.  For example, a websocket query from a client can specify an offset. If the provided offset is out of bounds for the PN, then the client application can detect an error from the JSON API and proceed by making a request without a specified offset. 

As an optimization, a customer can create a script to refresh the cache by issuing a query for each [party, template] pair prior to pruning. This script can be run as part of the pruning business process, which recreates the cache before pruning to avoid the latency of updating the cache when query requests are made.
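
A minimal sketch of such a warm-up script is shown below, using the JSON API’s /v1/query endpoint. The port, template ID, and JWTs are placeholders; one request is issued per [party, template] pair that should be refreshed.

    // Minimal sketch: warm the JSON API cache before pruning by issuing one /v1/query
    // request per [party, template] pair. Tokens and template IDs are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.Map;

    public class JsonApiCacheWarmup {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // One JWT per party whose cache should be refreshed.
            Map<String, List<String>> jwtToTemplates = Map.of(
                    "<alice-jwt>", List.of("Main:Asset"),
                    "<bob-jwt>", List.of("Main:Asset"));
            for (Map.Entry<String, List<String>> entry : jwtToTemplates.entrySet()) {
                for (String templateId : entry.getValue()) {
                    String body = "{\"templateIds\": [\"" + templateId + "\"], \"query\": {}}";
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(URI.create("http://localhost:7575/v1/query"))
                            .header("Content-Type", "application/json")
                            .header("Authorization", "Bearer " + entry.getKey())
                            .POST(HttpRequest.BodyPublishers.ofString(body))
                            .build();
                    // The query forces the [party, template] cache to be brought up to date.
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println(templateId + " -> HTTP " + response.statusCode());
                }
            }
        }
    }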

Impact and Migration

This feature is purely additive.

Better Multi-Package Build Support

Background

The new multi-package build feature improves the workflow for developers working across multiple packages at the same time by managing dependencies across packages and only (re)building packages when needed. Using multi-package builds, developers can:

  • Configure daml build to automatically rebuild DARs used in data-dependencies when the source code corresponding to those DARs changes. This is useful in projects with multiple packages.
  • Build all of the packages in a project simultaneously using the new daml build --all command.
  • Clean all build artifacts in a project using daml clean --all.
  • Better organize their multi-package dependencies in a single, new multi-package.yaml file that can reference daml.yaml or other multi-package.yaml files.

For a summary and detailed changes, see the documentation.

Specific Changes

The multi-package build feature is quite flexible so that it can be adapted to the development and CI/CD environment. An overview of the changes:

  • Adds a multi-package.yaml file at the root of the project to list packages that should be automatically rebuilt and directories containing other multi-package.yaml or daml.yaml files to also be considered.
  • Changes the behavior of daml build to use the multi-package.yaml file if it can be found (by searching up the directory tree) to perform a multi-package build.
  • Adds the --enable-multi-package flag, defaulting to yes, which can be used to disable the new build behavior.
  • Adds --all to both daml build (for building all packages in a project) and daml clean (for cleaning all packages in a project).
  • Adds  --no-cache to turn off multi-package build caching behavior, and --multi-package-path to specify the location of a multi-package.yaml file.

A complete rebuild may sometimes be needed; this can be accomplished by running daml clean --all followed by daml build.
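
For orientation, a minimal project layout might use a multi-package.yaml like the sketch below. The package paths are placeholders, and the exact set of supported fields is described in the documentation.

    multi-package.yaml (at the repository root):

        packages:
        - ./package-a
        - ./package-b

With this in place, daml build run from within either package rebuilds stale dependencies automatically, and daml build --all / daml clean --all operate on every listed package.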

Impact and Migration

This feature is purely additive, backwards compatible, and enabled by default. It should not change existing behavior, but if any regressions occur, you can use --enable-multi-package=no to deactivate the feature altogether.

Training and Certification

New certification paths aligned with our latest release:

  • Daml Philosophy: teaches the core tenets for leveraging the unique capabilities of Daml applications and the Canton Network. This path is for all decision-makers, leads, and contributors on a Daml implementation team. 
  • Daml Fundamentals: leads to a foundational-level certification exam and capstone project. The certification path prepares a developer to build a simple Daml application through learning the basics of Daml programming and testing.
  • Daml Contract Developer: prepares the experienced developer to build Daml-based applications. Trainees will learn best practices for Daml programming and design, enabling them to translate processes and requirements into Daml code. They will learn how to think about Daml applications with respect to scalability, performance, and maintainability.

Additional Enhancements

Many additional enhancements are available in this release. They are listed below.

The JSON API server supports TLS termination

The JSON API server can terminate TLS secured connections from clients. 

GCP KMS General Availability

The GCP KMS feature has graduated to General Availability.

Automatic participant pruning support for pruning internal-only state

Pruning presents a trade-off between limiting ledger storage space and being able to query ledger history far into the past. Pruning participant-internal state strikes a balance by deleting only internal state. Previously, internal pruning was available only via the "manual" participant.pruning.prune_internally command. With this release, pruning participant internal-only state also becomes available through automatic pruning. The specific changes are:

  • Configure automatic, internal-only pruning using the new participant.pruning.set_participant_schedule command's prune_internally_only parameter.
  • Retrieve the currently active participant schedule including the prune_internally_only setting via the newly introduced participant.pruning.get_participant_schedule command.

Support for HLF 2.5

Daml Enterprise now supports HLF 2.5. These changes are backwards compatible with any running 2.2 solution, and no changes are required on the Canton side to perform these upgrades.

Daml Language Changes

The Daml Language updates are listed below.

Restricted name warnings: Attempting to use the names this, self, or arg in template, interface, or exception fields will often result in confusing errors and mismatches with the underlying desugared code. We now emit an error (or warning) early in those cases, on the field name itself, to make this clearer.

daml script --ide-ledger: To unify Daml tools, the --ide-ledger option is supported in Daml Script. This allows a user to directly invoke scripts within a .dar file on their local machine, without a separate ledger running. Note that the difference from daml test is that daml script will not attempt to recompile or read the source code directly. This option cannot be used with --ledger-host, --participant-config, or --json-api.
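
For example, an invocation might look like this (the DAR path and script name are placeholders):

    daml script --dar .daml/dist/my-app-1.0.0.dar --script-name Main:setup --ide-ledger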

Daml-script JSON API support for --all: The daml-script binary runner has been refactored to be more consistent across --script-name and --all. As such, --all now works when using --json-api.

Daml-script --upload-dar flag: The daml-script binary now allows you to specify whether the DAR containing your scripts should be uploaded to the ledger before execution. The previously implicit behavior of automatically uploading when using --all is now deprecated with a warning. To avoid the warning, add --upload-dar=yes.

Deprecation of template-local definitions: The syntax for let bindings in template definitions will be deprecated in favor of plain top-level bindings. If the deprecated syntax is used, the following warning is shown during compilation or in the IDE: Template-local binding syntax ("template-let") is deprecated, it will be removed in a future version of Daml. Instead, use plain top level definitions, taking parameters for the contract fields or body ("this") if necessary. For more information, see Reference: Templates: Template-local Definitions (Deprecated).

Removal of deprecated controller..can syntax: The controller..can syntax for defining template choices has been deprecated since Daml 2.0 and is now removed. Projects that use this syntax are no longer accepted; those choices should instead be defined using choice-first syntax. Note that, as a consequence, the warning flags -Wcontroller-can and -Wnocontroller-can are no longer accepted. See Deprecation of controller-first syntax: Migrating for more information on how to adapt existing projects.

Ledger API Command Submission Changes

CommandService gRPC deadline logic: Commands submitted to the Ledger API now respect gRPC deadlines: if a request reaches the command processing layer with an already-expired gRPC deadline, the command is not sent for submission. Instead, the request is rejected with a new self-service error code, REQUEST_DEADLINE_EXCEEDED, which informs the client that the command is guaranteed not to have been sent for execution to the ledger.

Command Interpretation Timeouts:  If you submit a command that runs for a very long time, the Ledger API will now reject the command with the new self-service error code INTERPRETATION_TIME_EXCEEDED when the transaction reaches the ledger time tolerance limit based on the submission time.

Protocol versions 3 and 4 are deprecated:  Protocol versions 3 and 4 are now marked as deprecated. Protocol version 5 should be used for any new deployment.  

KMS wrapper-key configuration value now accepts a simple string

The expected KMS wrapper-key configuration value has changed from:
   crypto.private-key-store.encryption.wrapper-key-id = { str = "..."}

to a simple string:
    crypto.private-key-store.encryption.wrapper-key-id = "..."

Schema migration attempts configuration for the indexer

The configuration fields schema-migration-attempt-backoff and schema-migration-attempts for the indexer were removed. The following config lines will have to be removed, if they exist:

  • participants.participant.parameters.ledger-api-server-parameters.indexer.schema-migration-attempt-backoff
  • participants.participant.parameters.ledger-api-server-parameters.indexer.schema-migration-attempts

Cache weight configuration for the Ledger API server

The configuration fields max-event-cache-weight and max-contract-cache-weight for the Ledger API server were removed. The following config lines will have to be removed, if they exist:

  • participants.participant.ledger-api.max-event-cache-weight
  • participants.participant.ledger-api.max-contract-cache-weight

Config Logging On Startup

By default, Canton logs the config values on startup, as this has turned out to be useful for troubleshooting. This feature can be turned off by setting canton.monitoring.logging.log-config-on-startup = false, but this is not recommended. Logging the configuration including all default values can be turned on using canton.monitoring.logging.log-config-with-defaults = true. Note that this will log all available settings, including parameters that are not expected to be changed. Confidential data is not logged; it is replaced by xxxx.

Default Size of Ledger API Caches

The default sizes of the contract state and contract key state caches have been decreased by one order of magnitude, from 100,000 to 10,000. The affected configuration settings are:

  • canton.participants.participant.ledger-api.index-service.max-contract-state-cache-size
  • canton.participants.participant.ledger-api.index-service.max-contract-key-state-cache-size

The size of these caches determines the likelihood that a transaction using a contract or contract key that was recently created or read will still find it in memory, rather than needing to query the database. Larger caches might be of interest in use cases where there is a big pool of ambient contracts that are consistently being fetched or used for non-consuming exercises. They may also benefit use cases where a big pool of contracts is rotated through a create -> archive -> create-successor cycle. Consider adjusting these parameters explicitly if the performance of your specific workflow depends on large caches and you have been relying on the defaults thus far.
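
For example, to restore the previous defaults explicitly, you could set (illustrative values):

    canton.participants.participant.ledger-api.index-service.max-contract-state-cache-size = 100000
    canton.participants.participant.ledger-api.index-service.max-contract-key-state-cache-size = 100000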

Target Scope for JWT Authorization

The default scope (the scope field in a scope-based token) for authenticating on the Ledger API using JWT is daml_ledger_api. Other scopes can be configured explicitly using the custom target scope configuration option:
canton.participants.participant.ledger-api.auth-services.0.target-scope="custom/Scope-5:with_special_characters"

The target scope can be any case-sensitive string containing alphanumeric characters, hyphens, slashes, colons, and underscores. Either the target-scope or the target-audience parameter can be configured, but not both.
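
For illustration, a scope-based token authorized against the custom scope above would carry that value in its scope claim; a hypothetical (abbreviated) payload could look like this:

    {
      "sub": "some-user-id",
      "scope": "custom/Scope-5:with_special_characters",
      "exp": 1735689600
    }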

Explicit Settings for Database Connection Pool Sizes

The sizes of the connection pools used for interactions with database storage inside Canton nodes are determined using a dedicated formula described in the documentation article on max connection settings. The values obtained from that formula can now be overridden using explicit configuration settings for the read, write, and ledger-api connection pool sizes (an example follows the lists below):

  • canton.participants.participant.storage.parameters.connection-allocation.num-reads
  • canton.participants.participant.storage.parameters.connection-allocation.num-writes
  • canton.participants.participant.storage.parameters.connection-allocation.num-ledger-api

Similar parameters are available for other Canton node types:

  • canton.sequencers.sequencer.storage.parameters.connection-allocation...
  • canton.mediators.mediator.storage.parameters.connection-allocation...
  • canton.domain-managers.domain_manager.storage.parameters.connection-allocation
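
For example, a participant’s pools could be overridden explicitly as follows (illustrative values):

    canton.participants.participant.storage.parameters.connection-allocation.num-reads = 8
    canton.participants.participant.storage.parameters.connection-allocation.num-writes = 6
    canton.participants.participant.storage.parameters.connection-allocation.num-ledger-api = 4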

The effective connection pool sizes are reported by the Canton nodes at start-up, for example:

    INFO  c.d.c.r.DbStorageMulti$:participant=participant_b - Creating storage, num-reads: 5, num-writes: 4

Improved observability of ledger streams

The observability of the streams served over the Ledger API has been improved in both logging and metrics.

  • Most operations related to streams lifetime are now logged at debug level rather than trace.
  • The metric reporting the number of active streams, metrics.daml.lapi.streams.active, is now always collected, independently of the rate-limiting settings.

Canton Console Changes

The Canton console has also had several enhancements.  They are listed below.

Usage of the applicationId in command submissions and completion subscriptions: Previously, the Canton console used a hard-coded "CantonConsole" as the applicationId in command submissions and completion subscriptions performed against the Ledger API. Now, if an access token is provided to the console, it extracts the userId from that token and uses it instead. A local console uses the adminToken provided in canton.participants.<participant>.ledger-api.admin-token, whereas a remote console uses the token from canton.remote-participants.<remoteParticipant>.token. This affects the following console commands:

  • ledger_api.commands.submit
  • ledger_api.commands.submit_flat
  • ledger_api.commands.submit_async
  • ledger_api.completions.list
  • ledger_api.completions.list_with_checkpoint
  • ledger_api.completions.subscribe

You can also override the applicationId by supplying it explicitly to these commands.

Introduction of Java Bindings Compatible Console Commands:  The following console commands were added to support actions with Java codegen compatible data:

  • participant.ledger_api.javaapi.commands.submit
  • participant.ledger_api.javaapi.commands.submit_flat
  • participant.ledger_api.javaapi.commands.submit_async
  • participant.ledger_api.javaapi.transactions.trees
  • participant.ledger_api.javaapi.transactions.flat
  • participant.ledger_api.javaapi.acs.await
  • participant.ledger_api.javaapi.acs.filter
  • participant.ledger_api.javaapi.event_query.by_contract_id
  • participant.ledger_api.javaapi.event_query.by_contract_key

The following commands were replaced by their Java bindings compatible equivalent (in parentheses):

  • participant.ledger_api.acs.await (participant.ledger_api.javaapi.acs.await)
  • participant.ledger_api.acs.filter (participant.ledger_api.javaapi.acs.filter)

New Functions to Specify a Full-blown Transaction Filter for Flat Transactions: ledger_api.transactions.flat_with_tx_filter and ledger_api.javaapi.transactions.flat_with_tx_filter are more sophisticated alternatives to ledger_api.transactions.flat and ledger_api.javaapi.transactions.flat, respectively, that allow you to specify a full transaction filter instead of a set of parties. Consider using these if you need to specify more fine-grained filters that include template IDs, interface IDs, and/or whether you want to retrieve created event blobs for explicit disclosure.

Commands around ACS migration:  Console commands for ACS migration (ACS export/import) can now be used with remote nodes. This change applies to the commands in the repair namespace.

New ACS export / import repair commands: The new ACS export / import commands, repair.export_acs and repair.import_acs, provide similar functionality to the existing repair.download and repair.upload commands. However, their implementation allows them to evolve better over time. Consequently, the existing download / upload functionality is deprecated.

Transactions generated by importing an ACS have a configurable workflow ID to track ongoing imports:  Contracts added via the repair.party_migration.step2_import_acs and repair.import_acs commands now include a workflow ID. The ID is in the form prefix-${n}-${m}, where m is the number of transactions generated as part of the import process and n is a sequential number from 1 to m inclusive. Each transaction contains 1 or more contracts that share the ledger time of their creation. The two numbers allow you to track whether an import is being processed. You can specify a prefix with the workflow_id_prefix string parameter defined on both commands. If not specified, the prefix defaults to import-${randomly-generated-unique-identifier}.

keys.secret.rotate_node_key() console command: The console command keys.secret.rotate_node_key can now accept a name for the newly generated key.

owner_to_key_mappings.rotate_key command expects a node reference: The previous owner_to_key_mappings.rotate_key is deprecated and now expects a node reference (InstanceReferenceCommon) to avoid any dangerous and/or unwanted key rotations.

DAR vetting and unvetting commands:  DAR vetting and unvetting convenience commands have been added to:

  • Canton admin API as PackageService.VetDar and PackageService.UnvetDar
  • Canton console as participant.dars.vetting.enable and participant.dars.vetting.disable

Additionally, two error codes have been introduced to allow better error reporting to the client when working with DAR vetting or unvetting:  DAR_NOT_FOUND and PACKAGE_MISSING_DEPENDENCIES. Please note that these commands are alpha only and subject to change.

SequencerConnection.addConnection: SequencerConnection.addConnection is deprecated. Use SequencerConnection.addEndpoints instead.

Metrics Changes: There are two metric changes:

  • The DB metric lookup_active_contracts is removed in favor of lookup_created_contracts and lookup_archived_contracts. This reflects a change in how active contracts are looked up from the DB: switching from a single batched query for active contracts to two batched queries executed in parallel, targeting created and archived events.
  • The sequencer’s client metric load is removed without replacement.

Submission service error code change: The error code SEQUENCER_DELIVER_ERROR, which could be received when submitting a transaction, has been superseded by two new error codes: SEQUENCER_SUBMISSION_REQUEST_MALFORMED and SEQUENCER_SUBMISSION_REQUEST_REFUSED. Please migrate client application code if you rely on the older error code.

Security and Bug Fixes

The following bugs were fixed in 2.7 patch releases and are mentioned here for completeness. Any additional bugs fixed in the 2.8.0 release are also included.

  • In a rare situation, when a mediator node went through an HA failover transition, it could get into a stuck state if it also encountered a benign transient DB error at the same time. This no longer occurs.
  • Canton periodically checks the validity and health of its connection to the database. Those checks were previously competing with other database queries, sometimes leading to contention which would generate a warning but would not have any impact because the queries were retried.  This contention no longer exists.
  • On restart of a sync domain, the participant replays pending transactions, updating the stores in case some writes were not persisted. Within the command deduplication store, existing records are compared with to-be-written records for internal consistency checking. This comparison included the trace context, which differs on a restart and hence could cause the check to fail, aborting the startup with an IllegalArgumentException. This has been fixed.
  • The Besu Solidity driver code had a race condition which, in rare circumstances, could cause the same transaction to be sequenced twice (with two different nonces). This race condition has been closed.
  • Since the v2.7.0 release, the participant node gracefully handles repeatedly sequenced transactions: such transactions are committed only once. However, due to other race conditions in the implementation, the data stored on the participant node could become inconsistent for such a transaction. This inconsistency is detected and alerted upon the next attempt to reconnect to the domain. The race conditions are now closed.
  • Fixed handling of expired gRPC deadlines on the CommandService: if the gRPC request deadline was too low, requests could lead to errors (logged on the participant as INTERNAL) in the CommandService endpoints (e.g. submitAndWait) and never appear as completed for the client. Previously, the workaround was to restart the participant and use a higher gRPC request deadline.
  • A race condition was eliminated: when a PN is being shut down, the Ledger API could experience a race condition on the CommandService and log internal errors such as IllegalStateException("Promise already completed.").
  • Reduced memory consumption due to instrumentation that could negatively impact performance.  
  • Default trace sampling ratio was reduced to 1% (from 100%) to avoid performance penalties under high throughput scenarios.
  • Several optimizations improve pruning performance on PostgreSQL.
  • Fixed a communication corner case that caused the following log message to be issued every second, adding noise to the logging: logs:['imenoj6hcxkdra2n::1220580feafb268aa001244c6c5013010d0ecaf0437bda7909f8383096d61eedbd7a'], offset -> '0000000000000d6b9c' PARTICIPANT_PRUNED_DATA_ACCESSED(9,325d4e2c): Command completions request from 0000000000000d6b9c to 000000000000114de5 overlaps with pruned offset 0000000000000f6dd5
  • The Canton console now reads the applicationId/userId from the token when one is supplied. This allows the remote console to work better with the user-management feature, which otherwise forces the creation of a CantonConsole user.
  • Some KMS requests were passed an empty TraceContext, resulting in no trace ID in the audit log. This has been fixed.
  • The ACS migration now works on a remote node.
  • It was possible for implicitly added Archive choices to be included in the coverage report, which reduced the coverage percentage. The flag --coverage-ignore-choice PATTERN was added to selectively exclude choices from the coverage report, for example to ignore implicit Archive choices. Any choice whose fully qualified name matches the regular expression in PATTERN is removed from the coverage report. The choice is not included in counts of defined choices or in counts of exercised choices; it is treated as if it does not exist.
  • The previous owner_to_key_mappings.rotate_key is deprecated and now expects a node reference (InstanceReferenceCommon) as a parameter to avoid any dangerous and/or unwanted key rotations.
  • Fixed the daml-script binary TLS and access token settings when using the --all flag.

Download and Installation

The Daml 2.8.0 SDK has been released. You can install it using the command:  daml install 2.8.0.

The table below lists how you can download Daml Enterprise or individual components.

Daml Enterprise v2.8.0

Component | File download | Container Image
SDK | Linux / macOS / Windows | digitalasset/daml-sdk:2.8.0
Canton for Daml Enterprise | Standalone JAR file | digitalasset-docker.jfrog.io/canton-enterprise:2.8.0
Daml Finance | GitHub Page | NA
HTTP JSON API Service | Standalone JAR file | digitalasset-docker.jfrog.io/http-json:2.8.0
Trigger Service | Standalone JAR file | digitalasset-docker.jfrog.io/trigger-service:2.8.0
OAuth 2.0 middleware (Open-Source) | GitHub Page | digitalasset-docker.jfrog.io/oauth2-middleware:2.8.0
Participant Query Store | Standalone JAR file | digitalasset-docker.jfrog.io/participant-query-store:0.1.0
Trigger Runner | Standalone JAR file | digitalasset-docker.jfrog.io/trigger-runner:2.8.0
Daml Script | Standalone JAR file | digitalasset-docker.jfrog.io/daml-script:2.8.0

If you are using Oracle JVM and testing security provider signatures, note that the Canton JAR file embeds the Bouncy Castle provider as a dependency. To enable the JVM to verify the signature, put the bcprov JAR on the classpath before the Canton standalone JAR.  For example:

java -cp bcprov-jdk15on-1.70.jar:canton-with-drivers-2.8.0-all.jar com.digitalasset.canton.CantonEnterpriseApp

Note: These Docker images are designed to be suitable for production use, with minimal size and attack surface. Minimal images can sometimes make debugging difficult (e.g. no shell in the containers). For convenience, we provide “debug” versions of each of the above images, which you can access by appending “-debug” to the image tag (e.g. digitalasset-docker.jfrog.io/http-json:2.8.0-debug).