The Canton 3.3 release notes for Splice 0.4.0 are provided below. They highlight the new features in Canton 3.3 that focus on easing application development, testing, and upgrading, and on supporting the long-term stability of Canton APIs. The high-level structure is:
The new release of Canton 3.3 has several goals:
To promote application development for Canton Network's Global Synchronizer by providing stable public APIs with minimal future changes.
To support Splice features.
To make application upgrades easier.
To enhance operational capabilities.
Additional small improvements.
If you are interested in migrating your Global Synchronizer application to Splice 0.4.0, please review the blog post "Canton / Daml 3.3 Release Notes Preview," which describes this in detail. This blog post expands on that one with coverage of new features that applications can take advantage of or that operators can use in managing their validator.
Although much of the discussion here is about the Canton Network or Splice, it is still relevant to application developers for private synchronizer or multi-synchronizer applications. For example, Canton 2 private synchronizer application providers can begin to explore Canton 3 with this release. Please note that deploying private synchronizer or multi-synchronizer applications to production is recommended for a future Canton release.
A new technical documentation site for Daml and Canton 3.x is available. This new site contains our latest technical references, explanations, tutorials, and how-to guides. It also offers improved design, navigation, search, and clearer documentation structure. The new site organizes documentation by Canton Network use case (Overview, Connect, Build, Operate, Subnet) to make relevant information easier to find and consume.
Our Technical Solution Architect Certification course, released earlier this year, provides in-depth coverage of best practice architectural considerations when developing Daml applications for Canton, equipping users with the knowledge needed to design scalable, secure, and high-performance solutions. Sign up for a free account on our LMS platform to get started.
Explore the new docs and consider enrolling to deepen your expertise with Daml and Canton.
There are two types of application updates discussed in this blog post. The first are updates needed to run on a network that has migrated to Splice 0.4.0. The second covers features that are deprecated but still backwards compatible in Splice 0.4.0; that backwards compatibility will be removed in Splice 0.5.0.
For reference, the sections with update or migration information for application, development process, or operations are listed below:
Smart Contract Upgrade (SCU) allows Daml models (packages in DAR files) to be updated transparently. This makes it possible to fix application bugs or extend Daml models without downtime or breaking Daml clients. This feature also eliminates the need to hardcode package IDs, which increases developer efficiency. For example, you can fix a Daml model's bug by uploading the DAR that has the fix in a new version of the package. SCU was introduced in Canton v2.9.1 and is now also available in Canton 3.3.
This feature is well-suited for developing and rolling out incremental template updates. There are guidelines to ensure upgrade compatibility between DAR files. The compatibility is checked at compile time, DAR upload time, and runtime. This is to ensure data backwards compatibility and forward compatibility (subject to the guidelines being followed) so that DARs can be safely upgraded to new versions. It also prevents unexpected data loss if a runtime downgrade occurs (e.g., a ledger client is using template version 1.0.0 while the participant node has the newer version 1.1.0).
You may need to adjust your development practice to ensure package versions follow a semantic versioning approach. To prevent unexpected behavior and ensure compatibility, this feature enforces unique package names and versions for each DAR being uploaded to a participant node, and that packages with the same name are upgrade-compatible. It is no longer possible to upload multiple DARs with the same package name and version. Please ensure you are setting the package version in the daml.yaml files and increasing the version number as new versions are developed.
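For illustration, a minimal daml.yaml sketch that follows this versioning practice; the package name, SDK version, and dependency list shown here are placeholders for your own project, not prescribed values:

```yaml
# Illustrative daml.yaml: `name` and `version` identify the package for SCU.
sdk-version: 3.3.0      # your installed SDK version
name: my-asset-app      # package name: keep stable across upgrade versions
version: 1.1.0          # bump this for each new upgrade-compatible DAR
source: daml
dependencies:
  - daml-prim
  - daml-stdlib
```

Uploading a second DAR with the same name and version is rejected, and a DAR with the same name and a higher version must pass the upgrade-compatibility checks described above.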
The 3.x documentation for SCU is in preparation but not yet available. However, the documentation from 2.x is largely applicable and available here with the reference documentation available here; please ignore the protocol version and the Daml-LF version details.
Daml Script provides a way of testing Daml code during development. The following changes have been made to Daml Script:
The Daml standard library is a set of Daml functions, classes and more that make developing with Daml easier. The following updates have been made to it:
There is a new alternative syntax for implementing interfaces: one can now write implements I where ... instead of interface instance I for T where …
The Daml Assistant is a command-line tool that does a lot of useful things to aid development. It has the following updates:
The Daml compiler has the following updates:
Some diagnostics during compilation can be upgraded to errors, downgraded to warnings, or ignored entirely. The --warn flag can be used to trigger this behaviour for different classes of diagnostics.
Code generation provides a representation of Daml templates and data types in either Java or TypeScript. The code generation has the following updates:
Canton has a standard model for describing errors which is based on the standard gRPC description. The use of standardized LAPI error responses "enable API clients to construct centralized common error handling logic. This common logic simplifies API client applications and eliminates the need for cumbersome custom error handling code."
In Canton 3.2 there are two error handling systems:
Daml exceptions are deprecated in Canton 3.3; please consider migrating away from them.
Canton 3.3 introduces the failWithStatus Daml method so that user-defined Daml errors can be created directly and passed to the ledger client. The Ledger API client can then inspect and handle errors raised by Daml code and by Canton in the same fashion. This approach has several benefits for applications:
An example will help to make this concrete. Consider the following Daml exception:
exception InsufficientFunds with
    required : Int
    provided : Int
  where
    message "Insufficient funds! Needed " <> show required <> " but only got " <> show provided
This would have been received by a ledger client as:
Status(
  code = FAILED_PRECONDITION, // The Grpc Status code
  message = "UNHANDLED_EXCEPTION(9,...): Interpretation error: Error: Unhandled Daml exception: App.Exceptions:InsufficientFunds@...{ required = 10000, provided = 7000 }",
  details = List(
    ErrorInfoDetail(
      // The canton error ID
      errorCodeId = "UNHANDLED_EXCEPTION",
      metadata = Map(
        "participant" -> "...",
        // The canton error category
        "category" -> "InvalidGivenCurrentSystemStateOther",
        "tid" -> "...",
        "definite_answer" -> false,
        "commands" -> ...
      )
    ),
    RequestInfo(
      correlationId = "..."
    ),
    ResourceInfo(
      typ = "EXCEPTION_TYPE",
      // The daml exception type name
      name = "<pkgId>:App.Exceptions:InsufficientFunds"
    ),
    ResourceInfo(
      typ = "EXCEPTION_VALUE",
      // The InsufficientFunds record, with "required" and "provided" fields
      name = "<LF-Value>"
    )
  )
)
Now it will be received by the client as:
Status(
  code = 9, // FAILED_PRECONDITION
  message = "DAML_FAILURE(9, ...): UNHANDLED_EXCEPTION/App.Exceptions:InsufficientFunds: Insufficient funds! Needed 10000 but only got 7000",
  details = List(
    ErrorInfoDetail(
      errorCodeId = "DAML_FAILURE",
      metadata = Map(
        "error_id" -> "UNHANDLED_EXCEPTION/App.Exceptions:InsufficientFunds",
        "category" -> "InvalidGivenCurrentSystemStateOther",
        ... other canton error metadata ...
      )
    ),
    ...
  )
)
It is also possible to raise errors directly from Daml code by calling the failWithStatus method. The error details are encoded as the ErrorInfoDetail metadata, which includes an error_id of the form UNHANDLED_EXCEPTION/Module.Name:ExceptionName for legacy exceptions; for errors raised via failWithStatus, the error_id is fully user defined.
The Ledger API now returns a DAML_FAILURE error instead of an UNHANDLED_EXCEPTION error when exceptions are thrown and not caught in Daml code.
For these reasons, Daml exceptions are deprecated in this release, and the failWithStatus Daml method with its FailureStatus error message is recommended going forward. Please migrate your code from Daml exceptions to failWithStatus before 3.4.
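As a sketch of what the new error shape enables client-side, the snippet below extracts the user-relevant fields from a status payload laid out like the example above. The dict-based representation and the helper name are illustrative only, not part of any official Canton binding:

```python
# Sketch: classify a Ledger API error in the Canton 3.3 format.
# The dict mirrors the Status shape shown above; the exact field names on the
# wire depend on your gRPC/JSON binding, so treat this as illustrative.

def classify_daml_failure(status):
    """Return {'error_id', 'category'} if this is a DAML_FAILURE, else None."""
    for detail in status.get("details", []):
        if detail.get("errorCodeId") == "DAML_FAILURE":
            meta = detail.get("metadata", {})
            return {
                "error_id": meta.get("error_id"),
                "category": meta.get("category"),
            }
    return None

status = {
    "code": 9,  # FAILED_PRECONDITION
    "message": "DAML_FAILURE(9, ...): UNHANDLED_EXCEPTION/App.Exceptions:InsufficientFunds: ...",
    "details": [
        {
            "errorCodeId": "DAML_FAILURE",
            "metadata": {
                "error_id": "UNHANDLED_EXCEPTION/App.Exceptions:InsufficientFunds",
                "category": "InvalidGivenCurrentSystemStateOther",
            },
        }
    ],
}

info = classify_daml_failure(status)
print(info["error_id"])  # UNHANDLED_EXCEPTION/App.Exceptions:InsufficientFunds
```

Because failWithStatus errors and legacy exceptions now arrive in the same shape, a single handler like this covers both.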
External signing introduces the need to monitor time bounds (often to implement expiry) or to measure durations (for example, between when a contract was created and when it expires). These durations can be quite long, perhaps a day. This is accommodated by the introduction of time boundary functions in Daml.
This is used by the Canton Network Token Standard. Many of the choices in the Canton Network Token Standard are dependent on ledger time. Expressing these constraints using getTime limits the maximal delay between preparing a transaction and submitting it for execution to one minute. Canton 3.3 introduces new primitives for asserting constraints on ledger time, which remove that artificially tight one minute bound, and instead capture the actual dependency of the submission time and ledger time. These new primitives are used in the Amulet implementation.
The new primitives are:
The Canton 3.3 release also introduces Daml support for verifying cryptographic signatures, making it easier to build Daml workflows that bridge Canton Network and other chains. These cryptographic primitives are useful for building bridges or wrapping tokens. This feature is in Alpha status, so it may be updated based on user feedback.
This section provides additional context to supplement the Migration guide from version 3.2 to version 3.3.
For clarity, we need to distinguish between two JSON API versions:
This section is about JSON Ledger API v2.
Application developers need stable, easily accessible APIs to build on. Canton 3.3 takes a major step forward in stabilizing the developer-facing APIs of the Canton Participant Node by making all calls that are available in the gRPC Ledger API accessible via HTTP JSON v2 as well. It follows industry standards such as AsyncAPI (websocket) and OpenAPI (synchronous, request-response), which enables the use of the associated tooling (e.g., generate language bindings at the discretion of the developer). This allows developers to freely pick between gRPC or HTTP JSON, depending on what suits them best.
Please note that JSON API v2 does not support the query-by-attribute capabilities offered by JSON API v1. These queries have proven problematic and are no longer supported. The Ledger API pointwise query endpoints can be used instead; for more general querying capabilities, we recommend using the Participant Query Store (PQS).
Some further details about JSON API v2 are:
As mentioned, JSON Ledger API v1 is deprecated in this release and will be removed in Splice 0.5.0/Canton 3.4. So, applications need to migrate to JSON Ledger API v2 which is available in Splice 0.4.0/Canton 3.3. The migration details from Canton 3.2 to Canton 3.3 are available in the "HTTP JSON API Migration to V2 guide". Please note that including @daml/ledger will not work for V2 because it is for Canton JSON API V1.
In Canton 3.3, Smart Contract Upgrade supports two formats for specifying interface and template identifiers to the Ledger API. They are:
The package-id reference format will not be supported in Splice 0.5.0. Applications must switch to using the package-name reference format for all requests submitted to the Ledger API (commands and queries).
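As an illustration, the two reference formats differ in how the first segment is spelled: a package-name reference is prefixed with `#`, while a package-id reference uses the raw package ID. The helper functions below are hypothetical, not part of any Canton API:

```python
# Sketch of the two template-identifier reference formats.
#   package-id format:   "<package-id>:Module:Template"   (dropped in Splice 0.5.0)
#   package-name format: "#<package-name>:Module:Template" (use this going forward)

def by_package_name(package_name, module, template):
    # The leading '#' marks a package-name reference.
    return f"#{package_name}:{module}:{template}"

def is_package_name_ref(identifier):
    return identifier.startswith("#")

tid = by_package_name("my-app", "App.Token", "Holding")
print(tid)  # #my-app:App.Token:Holding
```

Auditing your submitted commands and queries for identifiers that fail such a check is one way to find remaining package-id references before the 0.5.0 cutoff.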
There are some cosmetic API changes that were deferred during the jump from Canton 2 to Canton 3. These naming changes are aggregated into this release.
The first is renaming the application_id in the Ledger API to user_id. The migration changes are described in Application ID to User ID rename. Any JSON API v1 calls will also have to make this change.
The second is in anticipation of multi-synchronizer applications where the term domain has changed to synchronizer. This occurs in several places: console commands, gRPC messages, error codes, etc. The migration changes are described in Domain to Synchronizer rename. Any JSON API v1 calls will also have to make this change.
In Canton 3.2, the ledger offset is a string value that is usually converted to a numeric value. In Canton 3.3, the offset is an int64, which allows trivial and direct comparisons. Negative values are considered invalid; a zero value denotes the participant's begin offset, and the first valid offset is 1. Logged offset values will no longer be in hexadecimal format but in decimal. Any LAPI or JSON API v1 callers will have to make this change. For the Java bindings, the String representation is replaced by Long.
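A minimal migration sketch, assuming your stored legacy offsets are the hexadecimal rendering of the numeric offset (as in the 3.2 logs); the helper name is illustrative:

```python
# Sketch: converting stored Canton 3.2 offsets (hex strings) to 3.3 int64 offsets.
# In 3.3, 0 denotes the participant-begin marker and the first valid offset is 1;
# negative values are invalid.

def migrate_offset(legacy_hex):
    offset = int(legacy_hex, 16)
    if offset < 0:
        raise ValueError("negative offsets are invalid")
    return offset

assert migrate_offset("0a") == 10
# int64 offsets now compare directly, without string padding tricks:
assert migrate_offset("0a") < migrate_offset("10")
```

Once migrated, offsets can be compared and ordered as plain integers.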
The events that are published by the Participant Node's ledger API have changed. In Canton 3.2, event IDs are strings constructed through concatenation of a transaction ID and node ID and they look something like:
#122051327f59fd759c0b16a07f4cd7146960fb7ada6bfcd56e3144f30a503f5e0010:0
The node-ids are participant node specific and are not interchangeable.
In Canton 3.3, the event_id is replaced with a pair (offset, node_id) of integers for all events, recording the origin and position of the event respectively. The current event-id is replaced with the node-id for event-bearing messages such as CreatedEvent, ArchivedEvent, and ExercisedEvent. This approach reduces internal and client storage use without any loss in functionality. Lookups by event ID need to be replaced by lookups by offset. Node-ids within a transaction carry the same information as the old event-ids, so the transaction tree structure remains recoverable from them (discussed in Universal Event Streams below). This is accomplished by:
The migration changes are described in Event ID to offset and node_id. Any JSON API v1 calls will also have to make this change.
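A sketch of the key change for client-side storage, based on the formats shown above; both helper names are illustrative:

```python
# Sketch: replacing a legacy event-id key with the new (offset, node_id) pair.
# Legacy 3.2 event IDs look like "#<transaction-id>:<node-id>"; in 3.3 each
# event instead carries an int64 offset and an integer node_id.

def legacy_event_key(event_id):
    """Split a 3.2 event ID into its (transaction-id, node-id) parts."""
    tx_id, node_id = event_id.lstrip("#").rsplit(":", 1)
    return tx_id, int(node_id)

def new_event_key(offset, node_id):
    """Recommended display form for 3.3 events: "<offset>:<node_id>"."""
    return f"{offset}:{node_id}"

legacy = "#122051327f59fd759c0b16a07f4cd7146960fb7ada6bfcd56e3144f30a503f5e0010:0"
tx_id, node_id = legacy_event_key(legacy)
print(new_event_key(42, node_id))  # 42:0
```

Note that the legacy node-ids were participant specific, so keys derived this way should not be compared across participants.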
The interactive submission service and external signing authorization logic are now always enabled. The following configuration fields must be removed from the Canton Participant Node's configuration:
- ledger-api.interactive-submission-service.enabled
- parameters.enable-external-authorization
The requirement for external signing to pass in all referenced contracts as explicitly disclosed contracts has been removed.
The hashing algorithm for external signing has been updated from V1 to V2. Canton 3.3 will only support hashing algorithm V2, which is not backwards compatible with V1 for several reasons:
Support for V1 has been dropped and will not be supported in Canton 3.3 onward. Refer to the hashing algorithm documentation for the updated version.
This is important for applications that re-compute the V1 hash client-side. Such applications must update their implementation to V2 in order to use the interactive submission service on Canton 3.3.
Also, the following renamings have happened to better represent their contents:
Currently, a Ledger API (LAPI) client can subscribe to ledger events and receive either a flat transaction stream or a transaction tree stream where neither provides a complete view. Subscribing to topology events is not available either. Universal Event Streams is a new feature that overcomes these challenges while providing additional filtering and formatting capabilities.
The Universal Event Streams feature has transaction filters and streams with the following capabilities:
It combines the topology and package information into a single continuous stream of updates ordered by their offsets. Future event types will be added in a backwards compatible manner.
The structural representation of Daml transaction trees no longer exposes the root event IDs and the children for exercised events. It now exposes the last descendant node ID for each node. This new representation changes transaction trees to allow:
The representation can be considered a variant of the DFUDS (Depth-First Unary Degree Sequence) or a Nested Set model representation.
Furthermore, the event nodes are guaranteed to be output in execution order to simplify processing them in that order. If you do need to traverse the actual tree, encoding that traversal as a recursive function with an additional lastDescendantNodeId argument (indicating when to stop the traversal of the current node) will work well. The figures below illustrate the difference.
Canton 3.2: store all children of a node and root nodes like this:
Canton 3.3: store the highest node ID of a node's descendants:
The Java bindings include an example helper class that can be leveraged to reconstruct the transaction tree, as well as a helper function getRootNodeIds(). The node IDs of root nodes (i.e., nodes that do not have any ancestors) are important for this computation. There is no guarantee that a root node was also a root in the original transaction (i.e., before events were filtered out of the original transaction). If the transaction is returned in AcsDelta shape, all the returned events will trivially be root nodes.
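A minimal sketch of this reconstruction, assuming events arrive in execution order as `(node_id, last_descendant_node_id)` pairs as described above; the names and the stack-based approach are illustrative, not the official Java helper:

```python
# Sketch: rebuilding parent/child structure from (node_id, last_descendant)
# pairs. A node's descendants are exactly the nodes with IDs in the range
# (node_id, last_descendant], which lets a stack of open ancestors do the work.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    last_descendant: int
    children: list = field(default_factory=list)

def build_forest(events):
    roots = []
    stack = []  # chain of ancestors whose descendant range is still open
    for node_id, last in events:
        node = Node(node_id, last)
        # Close ancestors whose descendant range has ended.
        while stack and node_id > stack[-1].last_descendant:
            stack.pop()
        if stack:
            stack[-1].children.append(node)
        else:
            roots.append(node)  # no open ancestor in the stream => root node
        stack.append(node)
    return roots

# Exercise node 0 spans descendants up to node 2; node 3 is a second root.
forest = build_forest([(0, 2), (1, 1), (2, 2), (3, 3)])
print([r.node_id for r in forest])               # [0, 3]
print([c.node_id for c in forest[0].children])   # [1, 2]
```

The same range check also answers pointwise queries ("is node X a descendant of node Y?") without building the tree at all.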
For the changes required for Splice 0.4.0, see the heading “Required Changes in 3.3” in the section Universal Event Streams, which explains how to recover the original behavior. Following is a summary of the changes that were made:
The Java bindings and the JSON API data structures have changed accordingly to include the changes described above.
For details on how to migrate to the new Ledger API, please see the Ledger API migration guide.
To work in Splice 0.5.0, further application updates are needed because the deprecated APIs will be removed. The heading “Changes Required before the Next Major Release” in the section Universal Event Streams has the migration details.
Streaming and point-wise query support for smart contract upgrading:
The Quick Start is designed to help teams become familiar with Canton Network Global Synchronizer (CN GS) application development by providing scaffolding to kickstart development. It accelerates application development by:
The intent is that you clone the repository and incrementally update the application to match your business operations.
To run the Quick Start you need some binaries from Artifactory. Request Artifactory access by clicking here and we will get right back to you.
The terms and conditions for the binaries can be found here. The Quick Start itself is licensed under the BSD Zero Clause License.
A node can be initialized with an external, pre-generated root namespace key while all other keys are automatically created.
If you have been using manual identity initialization of a node, i.e., using auto-init = false, you will be impacted by the following change in automatic node initialization.
The node initialization has been modified to better support root namespace keys and the use of static identities in our documentation. Previously there was a single init.auto-init flag; the configuration now supports more versatile setups.
The config structure looks like this now:
canton.participants.participant.init = {
  identity = {
    type = auto
    identifier = {
      type = config // random // explicit(name)
    }
  }
  generate-intermediate-key = false
  generate-topology-transactions-and-keys = true
}
A manual identity can be specified via the gRPC API if the configuration is set to manual.
identity = {
  type = manual
}
Alternatively, the identity can be defined in the configuration file, which is equivalent to an API based initialization using the external config:
identity = {
  type = external
  identifier = name
  namespace = "optional namespace"
  delegations = ["namespace delegation files"]
}
The old behaviour of auto-init = false (or init.identity = null) can be recovered using:
canton.participants.participant1.init = {
  generate-topology-transactions-and-keys = false
  identity.type = manual
}
This means that auto-init is now split into two parts: generating the identity and generating the subsequent topology transactions.
The console command node.topology.init_id has also changed slightly: it now supports the additional parameters delegations and delegationFiles, which specify the delegations necessary to control the identity of the node. As a result, an init_id call combined with identity.type = manual is equivalent to identity.type = external in the config, except that one is declarative via the config while the other is interactive via the console. In addition, at the API level, the InitId request now expects the unique_identifier as its components, identifier and namespace.
The init_id repair macro has been renamed to init_id_from_uid. init_id still exists but now takes the identifier as a string and, optionally, the namespace.
Topology transaction messages had the following changes:
The JSON API and Java bindings have changed accordingly.
In Canton 3.2, only absolute offsets could be used to define deduplication periods by offset. Now, participant-begin offsets are also supported. The participant-begin deduplication period (expressed as a zero value in the API) can only be used if the participant has not yet been pruned. Otherwise, as in other cases where the deduplication offset is earlier than the last pruned offset, an error is returned informing you that the deduplication period starts too early.
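The validation rule above can be sketched as follows; the function name is illustrative and the check runs server-side in Canton, so this is only a model of the behaviour:

```python
# Sketch of the deduplication-offset rule: 0 means participant-begin and is
# only usable while the participant has never been pruned; any offset earlier
# than the last pruned offset is rejected.

def validate_dedup_offset(dedup_offset, last_pruned_offset=None):
    if dedup_offset < 0:
        raise ValueError("negative offsets are invalid")
    if last_pruned_offset is not None and dedup_offset < last_pruned_offset:
        raise ValueError("deduplication period starts too early")
    return dedup_offset

validate_dedup_offset(0)          # participant-begin, never pruned: accepted
validate_dedup_offset(150, 100)   # after the pruning point: accepted
```

Note that once the participant has been pruned, the participant-begin period (0) fails the same "too early" check as any other pre-pruning offset.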
Canton 3.3 introduces a new approach to export and import of ACS snapshots, which is an improvement for any future Synchronizer Upgrades with Downtime procedure. It also prepares Canton for online protocol upgrades.
The ACS export and import now use an ACS snapshot containing Ledger API active contracts, as opposed to the Canton internal active contracts. Further, the ACS export now requires a ledger offset for taking the ACS snapshot, instead of an optional timestamp. The new ACS export does not feature an offboarding flag anymore; offboarding is not ready for production use and will be addressed in a future release.
For party replication, we want to take (export) the ACS snapshot at the ledger offset when the topology transaction results in a (to be replicated) party being added (onboarded) on a participant. The new command find_party_max_activation_offset allows you to find such an offset. Analogously, the new find_party_max_deactivation_offset command allows you to find the ledger offset when a party is removed (offboarded) from a participant.
The Canton 3.3 release contains both variants: export_acs_old/import_acs_old and export_acs/import_acs. A subsequent release is only going to contain the Ledger API active contract export_acs/import_acs commands (and their protobuf implementation).
KMS drivers are now supported in the Canton community edition to allow custom integrations.
Composing Canton Network applications is best done via interfaces to minimize the required coordination for rolling out smart contract upgrades of dependencies. In Canton 3.2, this did not work well, as the version of Daml packages was chosen using purely local information about the submitting validator node instead of considering the packages vetted by the counter-participants involved in the transaction. Canton 3.3 introduces support for vetting-state-aware package selection to lower the required coordination for rolling out smart contract upgrades (in many cases to zero). This new feature is called Topology-Aware Package Selection. It uses the topology state of connected synchronizers to optimally select packages for transactions, ensuring they pass vetting checks on counter-participants.
Topology-aware package selection in command submission is enabled by default. To disable it, set participant.ledger-api.topology-aware-package-selection.enabled = false. A new query endpoint supporting topology-aware package selection during command construction has been added to the Ledger API:
Canton console commands updates reflect the prior mentioned changes, such as:
The console commands to generate (generate_signing_key) and register signing keys (register_kms_signing_key) now require a signing key usage parameter usage: SigningKeyUsage to specify the intended context in which a signing key is used. This parameter is enforced and ensures that signing keys are only employed for their designated purposes. The supported values are:
The IdentifierDelegation topology request type and its associated signing key usage, IdentityDelegation, have been removed because they are no longer used. This usage was previously reserved for delegating identity-related capabilities but is no longer supported. Any existing keys with the IdentityDelegation usage will have it ignored during deserialization.
All console commands and data types on the admin API related to identifier delegations have been removed.
It is now possible to have multiple intermediate namespace keys with each one restricted to only authorize a specific set of topology transactions.
NamespaceDelegation.is_root_delegation is deprecated and replaced with the oneof NamespaceDelegation.restriction. See the protobuf documentation for more details. Existing NamespaceDelegation protobuf values can still be read, and the hash of existing topology transactions is also preserved. New NamespaceDelegations will only make use of the restriction oneof.
The existing error category InvalidGivenCurrentSystemStateSeekAfterEnd has been generalized. This error category now describes a failure due to requesting a resource using a parameter value that falls beyond the current upper bound (or end) defined by the system's state, for example, a request that asks for data at a ledger offset which is past the current ledger's end.
With this change, the error category InvalidGivenCurrentSystemStateSeekAfterEnd has also been marked as retryable. It makes sense to retry a failed request assuming the system has progressed in the meantime, and thus a previously requested ledger offset has become valid.
To avoid confusion some error codes and commands have been renamed or removed:
DarService and Package service on the admin-api have been cleaned up:
Being able to fail fast can lead to faster recovery. Following this principle, a new storage parameter has been introduced: storage.parameters.failed-to-fatal-delay. This parameter, which defaults to 5 minutes, defines the delay after which a database storage that is continuously in a Failed state escalates to Fatal. The sequencer liveness health now uses its storage as a fatal dependency, meaning that if the storage transitions to Fatal, the sequencer liveness health transitions irrevocably to NOT_SERVING. This allows a monitoring system to detect the situation and restart the node. Note: currently, this parameter is only used by the DbStorageSingle component, which in turn is only used by the sequencer.
Session signing keys for protocol message signing and verification were added. These are software-based, temporary keys authorized by a long-term key via an additional signature and are valid for a short period. Session keys are designed to be used with a KMS/HSM-based provider to reduce the number of signing operations and, consequently, lower the latency and cost associated with external key management services.
Session signing keys can be enabled and their validity period configured through the Canton configuration using <node>.parameters.session_signing_keys. By default they are currently disabled.
A memory check has been introduced when starting the node. This check compares the memory allocated to the container with the -Xmx JVM option. The goal is to ensure that the container has sufficient memory to run the application. To configure the memory check behavior, add one of the following to your configuration:
canton.parameters.startup-memory-check-config.reporting-level = warn // Default behavior: Logs a warning.
canton.parameters.startup-memory-check-config.reporting-level = crash // Terminates the node if the check fails.
canton.parameters.startup-memory-check-config.reporting-level = ignore // Skips the memory check entirely.
Previously, not all sequenced messages had an associated traffic cost, which could allow a denial-of-service attack using "free" traffic on a synchronizer. Furthermore, sequencer acknowledgements do not incur a traffic cost, so they are now also rate limited to avoid denial of service.
A base event cost can now be added to every sequenced submission. The amount is controlled by a new optional field in the TrafficControlParameters called base_event_cost. If not set, the base event cost is 0.
Sequencer acknowledgements do not incur a traffic fee; to rate limit them, sequencers will now conflate acknowledgements coming from a participant within a time window. This means that if two or more acknowledgements from a given member are submitted during the window, only the first will be sequenced and the others will be discarded until the window has elapsed. The conflation time window can be configured with the acknowledgements-conflate-window key in the sequencer configuration; it defaults to 45 seconds.
Example: sequencers.sequencer1.acknowledgements-conflate-window = "1 minute"
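The conflation rule above can be modelled as follows; this is a client-side sketch of the behaviour (assuming the window restarts from the first sequenced acknowledgement), not sequencer code:

```python
# Sketch of acknowledgement conflation: within each window, only the first
# acknowledgement from a member is sequenced; later ones are discarded until
# the window has elapsed.

def conflate_acks(ack_times, window):
    """Return the subset of ack timestamps (seconds) that get sequenced."""
    sequenced = []
    window_start = None
    for t in sorted(ack_times):
        if window_start is None or t - window_start >= window:
            sequenced.append(t)   # first ack in this window is sequenced
            window_start = t
        # later acks in the same window are discarded
    return sequenced

# With a 45s window, the acks at t=10 and t=30 conflate into the one at t=0.
print(conflate_acks([0.0, 10.0, 30.0, 50.0], 45.0))  # [0.0, 50.0]
```

For monitoring, this means a member's acknowledgement frequency on the synchronizer is bounded by roughly one per window, regardless of how often it acknowledges.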
The synchronizer connectivity service was refactored to have endpoints with limited responsibilities:
The offline root namespace key procedure has been improved with scripts that initialize a participant node's identity using an offline root namespace key. They have been added to the release artifact under scripts/offline-root-key. An example usage with locally generated keys is available at examples/10-offline-root-namespace-init.
The Participant Query Store (PQS) is compatible with both Canton 2 and Canton 3. The only changes needed are those due to type changes in the Ledger API.
In Canton 3, the Ledger API user can be configured for Universal Reader access (on the participant, via authorizations), so that a PQS request for * parties will include all parties on that participant. This simplifies access for new parties as they emerge, and can be useful when the PQS belongs to the participant's organization and is intended to have full read access to all parties located there. Configuration argument: `--pipeline-filter-parties=*`
The detailed LAPI changes are as follows:
We recommend displaying the event identifying information as <offset>:<node_id>.
This simple Java client is an example of the usage.