Daml 2.5.0 release candidate has been released. You can install it using:
daml install 2.5.0
Developer productivity is a core value driver of Daml. To further support users building applications on their journey to production, we are making Daml Finance generally available. Daml Finance is a comprehensive library that supports modeling financial use cases in Daml.
Being able to take ready-made building blocks for your application allows you to significantly reduce time-to-market and get into production more quickly.
We are continuing to improve and expand the library and appreciate feedback from early adopters.
`daml new test --template=quickstart-finance`
ContingentClaims.Core
ContingentClaims.Lifecycle
Daml.Finance.Account
Daml.Finance.Claims
Daml.Finance.Data
Daml.Finance.Holding
Daml.Finance.Instrument.Generic
Daml.Finance.Instrument.Token
Daml.Finance.Interface.Account
Daml.Finance.Interface.Claims
Daml.Finance.Interface.Data
Daml.Finance.Interface.Holding
Daml.Finance.Interface.Instrument.Base
Daml.Finance.Interface.Instrument.Generic
Daml.Finance.Interface.Instrument.Token
Daml.Finance.Interface.Lifecycle
Daml.Finance.Interface.Settlement
Daml.Finance.Interface.Util
Daml.Finance.Lifecycle
Daml.Finance.Settlement
Daml.Finance.Util
To use Daml Finance, you need to activate Daml-LF 1.15 by adding this stanza to your daml.yaml project file:
build-options:
  - --target=1.15
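For context, a minimal daml.yaml with the stanza in place might look as follows (project name and version are placeholders; the Daml Finance packages themselves are added via data-dependencies, omitted here):

```yaml
sdk-version: 2.5.0
name: my-finance-app   # placeholder project name
version: 0.0.1
source: daml
dependencies:
  - daml-prim
  - daml-stdlib
  - daml-script
build-options:
  - --target=1.15
```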
Daml has predictability and safety as two of its core design philosophies in the sense that we want to ensure that both during development and at the point a party signs a contract, all possible consequences of that contract can be predicted. This philosophy could be at odds with a typical business requirement to be able to easily extend functionality in the future. For example, I want to be able to agree on the rules for a marketplace today by signing the marketplace smart contracts. But I want to be able to later define entirely new asset classes that I cannot yet predict. Indeed, building Daml applications that were extensible in such a way has been tricky until now.
The above philosophy also led to a number of design decisions that made Daml code dependencies fairly static and closely coupled on-ledger types with the types downstream components like client applications consumed and processed. This close coupling between different Daml packages and on- and off-ledger types made it more difficult to evolve applications over time.
Daml Interfaces are our answer to extensibility and modularization of applications without breaking with predictability and safety.
The Canton protocol determines the core capabilities and properties of a Canton network. Version 4 brings support for Daml-LF 1.15, and with that Daml Interfaces and Daml Finance. In addition, it moves several static configuration parameters to dynamic configuration so that they can be changed at runtime, and adds an optimization that significantly reduces transaction sizes in some situations.
Canton still supports protocol version 3, but it has to be set as a static domain parameter before starting the new binary:
canton.domains.mydomain.init.domain-parameters.protocol-version = 3
If you started the domain node accidentally before changing your configuration, your participants won’t be able to reconnect to the domain, as they will fail with a message like:
DOMAIN_PARAMETERS_CHANGED(9,d5dfa5ce): The domain parameters have changed
To recover from this, you need to force a reset of the stored static domain parameters using:
canton.domains.mydomain.init.domain-parameters.protocol-version = 3
canton.domains.mydomain.init.domain-parameters.reset-stored-static-config = yes
New domains deployed using Canton version 2.5.0 will use protocol version 4. Existing users that want the features of protocol version 4 need to perform a protocol version upgrade.
The configuration needs to be adapted as follows:
canton {
  domains {
    mydomain {
      domain-parameters {
        max-rate-per-participant = 10000
        reconciliation-interval = 30
        max-inbound-message-size = 10485700
      }
    }
  }
}
mydomain.service.set_max_rate_per_participant(10000)
mydomain.service.set_max_request_size(10485700)
// equivalently, one can use `mydomain.service.set_max_inbound_message_size(10485700)`
participants.all.domains.disconnect(mydomain.name)
nodes.local.stop()
nodes.local.start()
participants.all.domains.reconnect_all()
participants.all.domains.disconnect(sequencer1.name)
nodes.local.stop()
nodes.local.start()
participants.all.domains.reconnect_all()
mydomain.service.update_dynamic_parameters(
  _.copy(
    participantResponseTimeout = TimeoutDuration.ofMinutes(3),
    ledgerTimeRecordTimeTolerance = TimeoutDuration.ofMinutes(1),
    mediatorReactionTimeout = TimeoutDuration.ofMinutes(1),
    transferExclusivityTimeout = TimeoutDuration.ofSeconds(2),
    topologyChangeDelay = TimeoutDuration.ofSeconds(1),
  )
)
mydomain.service.update_dynamic_domain_parameters(
  _.update(
    participantResponseTimeout = 3.minutes,
    ledgerTimeRecordTimeTolerance = 1.minute,
    mediatorReactionTimeout = 1.minute,
    transferExclusivityTimeout = 2.seconds,
    topologyChangeDelay = 1.second,
  )
)
The Java Bindings are the main integration component we offer for use cases where flexibility and performance are of the greatest importance. They provide a low-overhead, non-blocking and rich interface for interacting with the Ledger API services. The code generator makes it easy for your application to interact with your on-ledger Daml models without much boilerplate code. Aside from offering full support for interfaces, we added a series of improvements that we expect will improve the overall developer experience when using the bindings, as well as make them safer.
In addition to the `fromValue` methods, which decode a Daml value in its wire encoding into its Java codegen equivalent, we introduced `ValueDecoder`s that return the specific type without casting. The `fromValue` methods still exist but are marked as deprecated; see the section below about the impact. This new approach works with interfaces too. See https://github.com/digital-asset/daml/issues/14313.

Alongside `ValueDecoder`s, we also added new methods to the `ActiveContractsClient` and `TransactionClient` that allow you to read the active contract set and keep it up to date in a type-safe manner. See https://github.com/digital-asset/daml/issues/14969.

`getContractTypeId`. See https://github.com/digital-asset/daml/issues/15350.

The generated classes don't return `Command` directly anymore; instead, they return an `Update<R>`, which is the first step towards a more type-safe interaction with the Ledger API. These types are now also accepted by the clients in our bindings, making the transition to this new approach likely seamless. See the section below about the impact of this change. See https://github.com/digital-asset/daml/issues/14312.

On the `CommandService`, multiple overloaded methods have been deprecated in favor of an equally safe but narrower interface, whereby you build your submission using a builder and submit that. This reduces the surface of the `CommandClient` to one method for each Ledger API endpoint. See the section below for the impact. See https://github.com/digital-asset/daml/issues/15424.
Generated command-producing methods now return `Update<R>` instead of `Command`. Since these types are accepted by the clients, the vast majority of use cases in which these values were passed directly to the client need effectively no change to your code. The cases in which you might need to change the way you are using the code fall into two categories; in those cases you can obtain the underlying commands from the `Update<R>` by calling the `commands()` method and interacting with the command as you used to.

The `fromValue(x, f…)` API has been deprecated and we recommend you change it to `valueDecoder(f…).decode(x)`; if you write lambdas over `fromValue`, you can simply call nested `valueDecoder` functions. Moreover, we now provide such functions for primitive types in the `PrimitiveValueDecoders` utility class. Existing calls will continue to work following our deprecation policy, but you are strongly advised to use the new approach as soon as possible to avoid future breakage.

The overloaded methods of `CommandClient` have been deprecated in favor of a new builder-based approach and will follow the usual deprecation policy. You are advised to use the new interface, whereby instead of calling a specific overload you prepare a submission using `com.daml.ledger.javaapi.data.CommandsSubmission` and issue it. The builders take the mandatory fields (application ID, command ID and the actual commands) and have a fluent interface that allows you to progressively add optional fields (like the workflow ID). The names of the fields are unchanged.

The Ledger API server now enforces rate limits; when a limit is exceeded, requests are rejected with the `THREADPOOL_OVERLOADED`, `MAXIMUM_NUMBER_OF_STREAMS` or `HEAP_MEMORY_OVER_LIMIT` error code. The limits can be configured per participant:

canton.participants.<participant-id>.ledger-api.rate-limit = {
  max-api-services-queue-size = 20000
  max-streams = 1000
  max-used-heap-space-percentage = 100
  min-free-heap-space-bytes = 0
}
If these limits are too low for your environment, you may run into the new errors. In that case you have to set the limits to levels appropriate to your use case.
The user management service allows you to create users on the participant that are associated with a list of actAs and readAs claims granting that user access rights to parties. An external IAM system can then be used to issue access tokens for a specific user and have the participant resolve the corresponding party claims, which can then be used to authorize the calls.
We have made a number of improvements to the user management and party management services that simplify the integration of these services into a larger landscape of access and identity management.
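To illustrate the mechanism (with hypothetical, simplified types; this is not the Ledger API's actual user management service), resolving a user's claims to authorize a submission can be sketched as:

```java
import java.util.Map;
import java.util.Set;

public class UserClaimsSketch {
    // Hypothetical, simplified stand-in for a participant user and its party claims.
    record User(String id, Set<String> actAs, Set<String> readAs, boolean isDeactivated) {}

    // The participant's user store; in reality managed via the user management service.
    static final Map<String, User> users = Map.of(
        "alice-app", new User("alice-app", Set.of("Alice"), Set.of("Bank"), false));

    // Resolve the claims of the user named in a validated access token and
    // decide whether a submission on behalf of `party` is authorized.
    static boolean mayActAs(String userId, String party) {
        User u = users.get(userId);
        return u != null && !u.isDeactivated() && u.actAs().contains(party);
    }

    public static void main(String[] args) {
        System.out.println(mayActAs("alice-app", "Alice")); // actAs claim present
        System.out.println(mayActAs("alice-app", "Bank"));  // read-only claim: denied
    }
}
```

The IAM token only names the user; the party claims stay on the participant, which is what makes the integration with an external IAM straightforward.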
We added an `is_deactivated` attribute to participant users. Mutating this attribute provides a superior mechanism for temporarily disabling a user than the earlier delete-and-create call pair.

User attributes can now be updated: `primary_party`, `is_deactivated`, `metadata.annotations`.

Party details can now be updated: `local_metadata.annotations`.

We deprecated `party_details.display_name` in favor of using the party's metadata annotations.

This is a purely additive change.
Canton nodes use a variety of cryptographic keys that they need constant access to during operation. To add a layer of security for node operators, it is now possible to configure Canton nodes such that the keys are always encrypted at rest. This is done by employing a technique called envelope encryption where an external KMS system encrypts the private keys that the node keeps in its database. When the node starts, it communicates with the KMS system to decrypt the keys and keeps the decrypted keys in memory only.
Currently only AWS KMS is supported.
Please refer to the feature’s documentation.
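The envelope-encryption technique itself can be sketched in self-contained Java; the locally generated "KMS key" below stands in for the external KMS (which in a real deployment never releases that key), and this is not Canton's implementation:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

public class EnvelopeEncryptionSketch {

    static byte[] aesGcm(int mode, SecretKey key, byte[] iv, byte[] input) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(mode, key, new GCMParameterSpec(128, iv));
        return c.doFinal(input);
    }

    public static boolean roundTrip() {
        try {
            SecureRandom rng = new SecureRandom();
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(256);

            // Stand-in for the KMS key: in a real deployment it never leaves the KMS.
            SecretKey kmsKey = gen.generateKey();

            // The node's private key material that must be encrypted at rest.
            byte[] nodeKey = "node-signing-key-material".getBytes(StandardCharsets.UTF_8);

            // 1. Generate a fresh data key and wrap (encrypt) it with the KMS key.
            SecretKey dataKey = gen.generateKey();
            byte[] wrapIv = new byte[12]; rng.nextBytes(wrapIv);
            byte[] wrappedDataKey =
                aesGcm(Cipher.ENCRYPT_MODE, kmsKey, wrapIv, dataKey.getEncoded());

            // 2. Encrypt the private key with the data key; only ciphertexts reach the database.
            byte[] dataIv = new byte[12]; rng.nextBytes(dataIv);
            byte[] stored = aesGcm(Cipher.ENCRYPT_MODE, dataKey, dataIv, nodeKey);

            // 3. On startup: unwrap the data key via the KMS, decrypt the key in memory only.
            SecretKey unwrapped = new SecretKeySpec(
                aesGcm(Cipher.DECRYPT_MODE, kmsKey, wrapIv, wrappedDataKey), "AES");
            byte[] recovered = aesGcm(Cipher.DECRYPT_MODE, unwrapped, dataIv, stored);

            return Arrays.equals(recovered, nodeKey);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints "true"
    }
}
```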
To accommodate the KMS features, the signatures of console commands related to uploading and downloading crypto key pairs and public keys have changed.
`keys.public.upload` and `keys.secret.upload` now expect a `ByteString` as input and not a reference to an object.

`keys.public.download` and `keys.secret.download` have been split into two different commands each:

`download` expects a key fingerprint and returns the serialized public key or key pair (`ByteString`);

`download_to` additionally expects a filename and stores the `ByteString` directly into a file.

Both methods take an optional protocol version to use for serialization; by default it is the latest protocol version supported by the release.
The return type of `keys.secret.list` has been renamed from `PublicKeyWithName` to `PrivateKeyMetadata`, which returns additional information related to the stored private keys, such as which wrapper key was used to encrypt the stored private key. The fields of the previous result type are also present on the new result type for backwards compatibility.
Until now, the domain (and/or topology) manager was the one type of Canton node that did not support high availability. An intermittent outage of that node was not a total system outage, as it is only responsible for handling identity transactions, but it would cause an outage for any operations involving allocating parties or adding packages.
As of 2.5.0, it now also supports high availability using an active-passive setup against a shared (HA) database.
Please refer to the documentation for high availability usage.
This is a purely additive change.
The metrics subsystem has undergone a major revision and is now documented much more comprehensively. As a result, a number of unnecessary or confusing metrics have been removed or rationalized. At the same time all metrics produced have been documented.
daml.execution.cache.contract_state.load_successes
daml.execution.cache.contract_state.load_failures
daml.execution.cache.contract_state.load_total_time.*
daml.execution.cache.key_state.load_successes
daml.execution.cache.key_state.load_failures
daml.execution.cache.key_state.load_total_time.*
daml.user_management.cache.load_successes
daml.user_management.cache.load_failures
daml.user_management.cache.load_total_time.*
daml.execution.cache.register_update.*
daml.execution.cache.contract_state.register_update.*
daml.execution.cache.key_state.register_update.*
The `daml.services.index.<operation>.*` metrics have been removed in favor of the corresponding `daml.index.db.*` metrics, which should be used instead:
daml.index.db.get_lf_archive.*
daml.index.db.get_parties.*
daml.index.db.list_known_parties.*
daml.index.db.list_lf_packages.*
daml.index.db.lookup_ledger_configuration.*
daml.index.db.lookup_ledger_end.*
daml.index.db.lookup_ledger_id.*
daml.index.db.lookup_participant_id.*
daml.index.db.prune.*
daml.index.db.store_configuration_entry.*
daml.index.db.store_ledger_entry_combined.*
daml.index.db.store_package_entry.*
daml.index.db.store_party_entry.*
daml.index.db.store_rejection.*
daml.services.index.lookup_contract_after_interpretation.*
daml.services.index.lookup_contract_state_without_divulgence.*
Update your metric names according to the above.
A best practice for monitoring the services of a distributed system is to use an approach known as the “Golden Signals” or “REDS”. Some helpful references for this are Monitoring Distributed Systems, RED Method, and 4 SRE Golden Signals (What they are and why they matter). Using this methodology, monitored metrics are used to determine if the application is healthy and, if unhealthy, what service or endpoint (i.e., API) is the root cause of the issue.
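As an illustration of the methodology (hypothetical request records; not Canton's metrics pipeline), the RED signals can be folded from raw per-endpoint requests:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RedMetricsSketch {
    // Hypothetical request record: endpoint name, latency in ms, success flag.
    record Request(String endpoint, long durationMs, boolean ok) {}
    // Rate (count), Errors (failed count), Duration (average latency).
    record Red(long rate, long errors, double avgDurationMs) {}

    // Summarize raw requests into per-endpoint Rate / Errors / Duration.
    static Map<String, Red> summarize(List<Request> requests) {
        return requests.stream().collect(Collectors.groupingBy(Request::endpoint,
            Collectors.collectingAndThen(Collectors.toList(), rs -> new Red(
                rs.size(),
                rs.stream().filter(r -> !r.ok()).count(),
                rs.stream().mapToLong(Request::durationMs).average().orElse(0)))));
    }

    public static void main(String[] args) {
        Map<String, Red> red = summarize(List.of(
            new Request("SubmitAndWait", 120, true),
            new Request("SubmitAndWait", 80, false),
            new Request("GetTransactions", 20, true)));
        // A per-endpoint breakdown like this points at the service or
        // endpoint that is the root cause when the application is unhealthy.
        System.out.println(red.get("SubmitAndWait"));
    }
}
```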
In an effort to further improve the operational aspects of the various components that make up the Daml ecosystem, we introduced “Golden Signal” metrics for all services that offer a gRPC or an HTTP-JSON API. These metrics are available when using the Prometheus exporter.
For the specific metrics, see the documentation.
This is a purely additive change.
While the Ledger API gives full access to any data stored on the ledger and allows users to interact with it, it doesn't provide the full range of querying capabilities that you might expect from an application-specific data store. As such, when developing applications against the Ledger API, the same patterns often appear over and over again in many client applications. Even though these patterns are well known and established internally, a newcomer might not see them emerging. To better empower developers to read data from the ledger and build applications around it, we started an initiative to package these patterns and offer them to Daml off-ledger integrators. Our first release is a Java library that lets you project ledger data onto a traditional relational database, around which an application that needs rich querying can be built.
The new Custom Views Java library is now available in Labs status. You are welcome to play around with the library and provide feedback as to where you would like to see this going in the future.
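To illustrate the underlying pattern (with hypothetical, simplified event and row types; this is not the Custom Views API, and the in-memory map stands in for a relational table), projecting a stream of create/archive events onto a queryable view looks roughly like:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProjectionSketch {
    // Hypothetical, simplified event shapes; the real library consumes Ledger API events.
    record CreatedEvent(String contractId, String owner, long amount) {}
    record ArchivedEvent(String contractId) {}

    // The "projection": the current active contract set, keyed by contract id,
    // as you would materialize it into a relational table.
    static Map<String, CreatedEvent> project(List<Object> events) {
        Map<String, CreatedEvent> table = new LinkedHashMap<>();
        for (Object e : events) {
            if (e instanceof CreatedEvent c) table.put(c.contractId(), c);
            else if (e instanceof ArchivedEvent a) table.remove(a.contractId());
        }
        return table;
    }

    public static void main(String[] args) {
        Map<String, CreatedEvent> table = project(List.of(
            new CreatedEvent("#1", "alice", 100),
            new CreatedEvent("#2", "bob", 250),
            new ArchivedEvent("#1"),
            new CreatedEvent("#3", "alice", 50)));
        // Rich querying happens over the projected state, e.g. alice's total holdings:
        long aliceTotal = table.values().stream()
            .filter(c -> c.owner().equals("alice"))
            .mapToLong(CreatedEvent::amount).sum();
        System.out.println(aliceTotal);
    }
}
```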
This is a purely additive change.
`GetTransactions`, `GetTransactionTrees` and `GetActiveContracts` streams. See https://github.com/digital-asset/daml/pull/15408.

A new participant parameter (`canton.<participant>.parameters.exclude-infrastructure-transactions`) that can be used to meter infrastructure transactions. For more information on participant metering see the Participant Metering documentation.

The error category of `INVALID_TOPOLOGY_TX_SIGNATURE_ERROR` and `UNAUTHORIZED_TOPOLOGY_TRANSACTION` has changed to `SecurityAlert`.
`coerceInterfaceContractId` (#15405)

`show @Text` now escapes special characters, producing syntactically correct expressions (#15177)

`(<?>)` operator to support a generic type for Validation (#15244)

`TemplateOrInterface` type class (#15347)

`Numeric` types with precision > 37 and mistyped/missing field access (#15626)

`-disable-warn-large-tuples=yes`

`view` variable in `view` method definition (https://github.com/digital-asset/daml/issues/15595)