Release of Daml Connect 1.15.0

Daml Connect 1.15.0 has been released as stable on Wednesday, July 14th. You can install it using:

daml install 1.15.0


  • Daml Exceptions are now stable as part of Daml-LF 1.14 and Ledger API 1.12.0. Daml-LF 1.13 is the new default in the SDK.
  • Observability of the Ledger API Server and JSON API Server has been improved through better logging and metrics.

Impact and Migration

  • If you would like to use Daml Exceptions, you need to target Daml-LF 1.14 explicitly using the build option --target=1.14.
  • Some log outputs and levels have changed. If you are relying on log messages in your client applications, you may need to adjust the level at which you log, and how you parse log messages.
  • There is a minor improvement in the Java Ledger API Bindings and Codegen to improve compatibility with Java 14. In some cases this could lead to compile-time errors for downstream consumers, which can be fixed easily by changing from Record to DamlRecord everywhere.
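If you build with a daml.yaml project file, the target can be set via build-options. A minimal sketch (the project name and version are placeholders):

```yaml
# daml.yaml (fragment): compile to Daml-LF 1.14 to enable Daml Exceptions
sdk-version: 1.15.0
name: my-project
version: 1.0.0
source: daml
dependencies:
  - daml-prim
  - daml-stdlib
build-options:
  - --target=1.14
```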

What’s New

Daml Exceptions, Daml-LF 1.14 and Ledger API 1.12.0

Daml Exceptions are now stable and are introduced with new Ledger API and Daml-LF versions:

  • Daml Connect 1.15 introduces Ledger API version 1.12.0, which supports Daml-LF 1.14 as stable.
  • Daml-LF 1.13 is the new default in the SDK.
  • Daml-LF 1.14 introduces Daml Exceptions.

Daml Exceptions


Daml, like most smart contract or transactional languages, has all-or-nothing semantics by default. A transaction either executes atomically as a whole, or not at all. This is key for security in a multi-party context, but requires careful handling of expected business exceptions. The new try/catch exception handling feature in Daml makes this a whole lot easier without compromising safety.

Developers can now wrap entire subtransactions in a try/catch block. Should a handleable exception be encountered in the try block, the partial transaction from the start of try to the exception is rolled back and the exception can be processed in the catch block.

These operations are fully validated by Daml ledgers, retaining the security and determinism guarantees of Daml transactions.

Specific Changes

  • The Daml language gains new keywords, types, and standard library functions:
    • Keyword exception to define a new exception type
    • Keyword try to start a subtransaction that may be rolled back
    • Keyword catch to handle an exception from a try block
    • Function throw in module DA.Exception to throw an exception.
    • Predefined exception types GeneralError, ArithmeticError, PreconditionFailed, and AssertionFailed.

Details on their use and an example can be found on the reference documentation page and a new chapter in the Introduction to Daml.
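As an illustrative sketch of these pieces working together (the exception type, template, and field names below are invented for the example):

```daml
module ExceptionExample where

import DA.Exception (throw)

-- A user-defined exception type, declared with the new `exception` keyword.
exception LimitExceeded
  with
    limit : Decimal
    attempted : Decimal
  where
    message "Attempted " <> show attempted <> " but limit is " <> show limit

template Account
  with
    owner : Party
    balance : Decimal
  where
    signatory owner

    choice SafeWithdraw : ContractId Account
      with
        amount : Decimal
      controller owner
      do
        try do
          -- Everything in the try block up to the throw is rolled back
          -- if the exception is raised.
          when (amount > 100.0) $
            throw LimitExceeded with limit = 100.0, attempted = amount
          create this with balance = balance - amount
        catch
          (LimitExceeded _ _) ->
            -- Handle the business exception: recreate the account unchanged.
            create this
```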

Impact and Migration

This is an additive change.

Improved observability of JSON API and Ledger API Servers


It is important for the operators of Daml components to be able to observe and diagnose problems downstream consumers are experiencing, ranging from component outages to rejected transactions.

Work is ongoing to improve logging and monitoring of the JSON API Server and Ledger API Servers, and this Daml Connect release includes a number of changes.

Specific Changes

  • Ledger API Server
    • The amount of data logged in the transaction service at INFO level has been reduced. Please enable TRACE logging for the corresponding logger to log the request data structures.
    • Ledger API Validation failures are now logged at INFO level.
    • The log output of Daml components has changed so that the structured part is closer to JSON. This allows us to distinguish and parse numbers and lists. If you are parsing this log output, you may need to change your parser. The log output has changed from:
context: {a=b, x=1, foo=bar, parties=[alice, bob]}

to:

context: {a: "b", x: 1, foo: "bar", parties: ["alice", "bob"]}
    • The state of the participant indexer can now be checked via the gRPC health endpoint.
    • For every update in the index database, the full context is logged at the INFO level.
    • Metrics for multi-party commands are now tracked only by the lexicographically first party.
  • JSON API Server
    • The healthcheck endpoint on the JSON API now proxies the health check endpoint on the underlying ledger. Previously it only queried for ledger end to determine ledger health.
    • Logging improvements
      • Log statements now include the date next to the time
      • The http response status for a request is now logged
      • The source and the path for incoming http requests are now logged
      • Calls which trigger a command submission are logged with the command id provided in the log context which allows correlating them to log statements in the ledger
      • Command submissions include the template id, choice name, and the contract id in the logging context where applicable
      • Failed command submissions are no longer logged twice (thus reducing noise)
      • For applicable requests, actAs, readAs, applicationId, and ledgerId are included in the log context
    • Metrics support

Similar to the Daml Driver for PostgreSQL, the JSON API now exposes metrics which can be used for monitoring. To enable these metrics, there are two new CLI flags:

--metrics-reporter <value>

Start a metrics reporter. Must be one of "console", "csv:///PATH", "graphite://HOST[:PORT][/METRIC_PREFIX]", or "prometheus://HOST[:PORT]".

--metrics-reporting-interval <value>

Set the metric reporting interval (defaults to 10s).
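For example, to expose the JSON API's metrics to Prometheus (the ledger host, ports, and reporting interval below are illustrative placeholders):

```shell
daml json-api \
  --ledger-host localhost \
  --ledger-port 6865 \
  --http-port 7575 \
  --metrics-reporter "prometheus://localhost:9090" \
  --metrics-reporting-interval 30s
```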

You can use the console reporter to see a list of available metrics and refer to Metrics.scala for brief descriptions. In summary, these are the available metrics:

  • Timing metrics for a number of operations are now available:
    • Party management, package management, command submission and query endpoints.
    • Parsing and decoding of incoming json payloads.
    • Processing of command submission requests on the ledger side.
    • Database operations (fetching contracts by id or key).
    • Response payload construction of a request.
  • Concurrency metrics are now available for the following event types:
    • Running http requests
    • Command submissions
    • Package allocations
    • Party allocations

Impact and Migration

The majority of these changes are additive, but there are a few changes to log levels and payloads. If you rely on certain log messages and/or parse them, you'll need to adjust your client code accordingly.

Minor Improvements

  • The Java codegen will now pick up the module-prefixes field from daml.yaml which can be used to handle module name collisions between different DALFs.
  • In order to avoid clashing with the Java 14 type java.lang.Record, the Java bindings type Record has been renamed to DamlRecord. The old name is retained as a sub-type of the newly renamed one, so it can still be used, but it has been marked as deprecated. If you want to use DamlRecord objects with existing code, note that since Record is now a sub-type of DamlRecord, methods that expect a Record as a parameter or that use them as part of standard Java collections will need to be explicitly adapted to use DamlRecord.
  • The Java codegen now uses the DamlRecord type wherever Record was used before. Java code generated by earlier versions of Daml Connect will continue to work against newer bindings, but you should expect deprecation warnings. Conversely, code generated from this version on will not work with earlier versions of the bindings out of the box.
  • The streaming query endpoints of the JSON API now accept offsets per query rather than one offset for all queries as before. Please refer to the documentation for more details.
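As a sketch of the module-prefixes field mentioned above (the package names, versions, and prefixes are illustrative):

```yaml
# daml.yaml (fragment): disambiguate colliding module names from two
# versions of the same dependency by prefixing them
module-prefixes:
  my-dep-1.0.0: V1
  my-dep-2.0.0: V2
```

With this in place, a module Main from my-dep-1.0.0 can be imported as V1.Main, and the one from my-dep-2.0.0 as V2.Main.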

Security and Bug fixes

  • Fixed a bug in the integration kit which could lead to errors in the participant node in cases where duplicate contract keys are shared only through the witnessing of create events. This bug manifested itself only in some Daml Drivers and was already backported to the 1.11.X release line in the 1.11.2 release.
  • Fixed an issue where passing --log-level=json was ignored when running the JSON API via daml json-api instead of the standalone JAR.

What’s Next

A lot of work is happening under the hood at the moment, improving the production-readiness and ease of operation of the entire Daml stack, and reducing the complexity of building robust client applications. As a result, the coming releases will bring improvements in:

  • Observability across the stack
  • Deployment and operations of Daml Connect (Enterprise Edition only)
  • Improvements to errors and error handling recommendations

In parallel, we are completing work on a few features:

  • The Daml Profiler (Enterprise Edition only)
  • Oracle DB support (Enterprise Edition only)