
Developing Conseil

Ivano Pagano edited this page Jul 6, 2020 · 13 revisions

This page describes how you can, as a developer, add features to Conseil or debug issues with it.

Branches

For development, it's best to start with the master branch as new pull requests are merged directly to it.

To debug an existing release, it's best to use the corresponding release tag from the releases page.

Development Process

You will need to set up a Postgres database and a Tezos node to run Conseil in order to make code changes or run a debugger. See Running Conseil for hints on how you can set them up.

Once you make code changes, you can use SBT to compile the code:

sbt -J-Xss32m clean compile

To start Lorre, run this command:

env SBT_OPTS="-Dconfig.file={path to custom lorre config file}" sbt "runLorre <platform> <network>"

Replace <platform> with the blockchain platform you want to run against, e.g. tezos, and <network> with the corresponding network for the blockchain you chose, e.g. mainnet, carthagenet or babylonnet.

Instructions for putting together a config file can be found at Configuring Conseil.
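Conseil configuration files are HOCON, read through Typesafe Config and Pureconfig. A common pattern, sketched here with no concrete keys (the actual key names are documented on the Configuring Conseil page), is to start from the shipped defaults and override only what your local setup needs:

```hocon
# Sketch of a custom config file passed via -Dconfig.file.
# `include "application"` pulls in the classpath defaults (standard
# Typesafe Config convention); add below it only the overrides your
# setup needs (database credentials, node endpoints, ...).
# The real key names are defined in Configuring Conseil.
include "application"
```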

To start Conseil, run this command:

env SBT_OPTS="-Dconfig.file={path to custom conseil config file}" sbt "runApi"

In IntelliJ IDEA, these can be set up easily as run configurations.
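Note that SBT_OPTS must be set in the environment of the sbt process itself; a form like `env SBT_OPTS=... && sbt ...` would end the `env` command at the `&&` and run sbt without the variable. A minimal check of the pattern, with `printenv` standing in for sbt:

```shell
# The variable must appear on the same command line as the process that
# needs it. Here printenv stands in for sbt to show it is visible:
SBT_OPTS="-Dconfig.file=conseil.conf" printenv SBT_OPTS
# prints: -Dconfig.file=conseil.conf
```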

Running tests

Of course, you might want to test the code from time to time ;). From the project directory, run:

sbt test

There's currently no need to have a database running to run the tests. An embedded Postgres database instance will automatically be downloaded and used during testing. Please make sure you have the libpq5 package installed, otherwise some of the unit tests using the embedded database may fail. For Ubuntu the command is:

apt install libpq5

Smoke tests

We have a rough version of automated smoke tests that can be run to check that a refactor hasn't broken any part of the system.

Note: we currently make these assumptions:

  • the tests should be run against a tezos carthage test network
  • the node used for indexing should have the first few thousand blocks of the chain (currently 5000, subject to change)

A full regression run first downloads data into a local database using the indexer, against a conventionally chosen network and node. Indexing only fetches a limited number of blocks, to keep the operation reasonably fast; using a local node instance can obviously speed up the process. After indexing finishes, a Conseil API server starts and a few chosen endpoints are tested with specific inputs and checked for expected outputs.

If you are working only on the API part, you might want to keep the data persisted in the database instead of re-indexing it each time; this can be done based on the arguments passed on the command line.

To run the regression:

sbt 'runSmokeTests [platform] [network] [path-to-conf-file]'

Required arguments:

  • platform should be one of the platforms supported by Conseil. Based on this, a specific smoke-test implementation will be chosen.

Optional arguments:

  • path-to-conf-file should point to the application configuration file to use, which defines how to connect to the database, how to reach a blockchain node as needed, and which apiKeys the API server should accept.
    If not provided, a default will be used, as printed on the console output.
  • network should be one of the networks defined in the configuration file above (so you necessarily need to pass the previous argument to use this); it identifies where to fetch the blockchain data to index.
    If not provided, the indexing step is skipped and the Conseil API is checked against data already available in the database.

To summarize, you can call it with:

  • platform only: no indexing will be done and a default name for the configuration file will be used
  • platform and config path: no indexing will be done and the specified configuration file will be used
  • all three arguments: the node will be indexed and the specified configuration file will be used

Changing the database schema

If you want to make any database schema changes, you will first have to make them manually in your Postgres database. After that, you can regenerate the table definitions in the code using this SBT command:

sbt "runSchema [configuration file]"

If no configuration file path is passed, defaults will be used to identify the running database instance, as defined in the docker-compose.yml file provided in the root of the project.

You can customize any database configuration parameter used for generation by providing your file and passing the path in the command.

A helpful message should be printed if you run the tool with no parameters, to guide you through the proper configuration options.

Running the sbt task will generate one or more Tables.scala source files, in a directory reported by the output of the command.

You can replace the corresponding files in Conseil-common/src/main/scala/tech/cryptonomic/conseil/<chain> (where <chain> is the name of the specific database model for the blockchain at hand, e.g. tezos, bitcoin). The chain is selected by the naming of the database schemas in the definition file (conseil.sql): entities from different namespaces end up in different source files.

Don't forget to save your schema changes in sql/conseil.sql before making any pull requests!

After you've built the changes, you need to format the project once again using sbt scalafmt and you're done!

Please refrain from manually changing the generated Slick model files. Manual changes are error-prone and, more importantly, hard to evolve: the next time a schema change is needed, the generation tool would overwrite any custom work done to the files.

Technology Stack

You will need to be familiar with the Scala language and functional programming techniques in general in order to work with the Conseil code. These are the main libraries used throughout the project:

  • Cats
    • fundamental support for functional programming development, providing all the basic blocks
  • Typelevel Cats-Effect
    • functional handling of "effects" based on type classes and the included IO type
  • Akka-Http
    • used to provide client and server implementations for connecting to blockchain Rest APIs and to expose Conseil's own API
  • Fs2 (functional streaming for scala)
    • used to concurrently collect multiple data elements and progressively process them
  • Circe and Jackson
    • we mostly employ Circe for the encoding/decoding of json, but some older parts still rely on Jackson
  • Endpoints
    • used to describe our Rest Api and implement the OpenApi spec and Akka-http server from the same blueprint
  • Lightbend Slick
    • we use this to define the database model, create and compose database queries, compose the asynchronous calls as needed
  • Pureconfig
    • reads configuration with early detection of errors and type-safe modeling of the configured values
  • Scopt
    • command line parsing of options/arguments
  • Scalatest and Scalamock
    • unit testing and mock/stub/dummies
