There are a few goals to the proposed configuration system, which I'll generally refer to as "settings". All of the following is a rough sketch. None of it has been implemented or compiled.


  1. Allow more parameters to be configured from the command line. For example, a common parameter I want to configure is compilation options:

       > set options -Xprint:typer

  2. Allow per-task configuration. For example:

       > set[update] log-level debug
       > set[test-compile] options -Xlog-implicits -explaintypes

  3. These are transient settings. That is, they apply only to the current session. This is similar to setting a variable in bash: if you want it set every time, you put it in your .bashrc (.sbtrc for sbt). A side effect of being able to set parameters from the command line is being able to set them from initialization scripts, as shown in the first preview. Alternatively, you use the existing mechanism of properties, described next.

  4. Integrate settings with properties. Properties would be settings backed by a file that uses properties syntax and applied to a project as if by calling set <key> <value>. Modifying those values would write back to the file.
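One possible shape for the dynamic side is a typed key that knows how to parse its value from a command-line string, with a settings map keyed by the key's label. This is only a sketch under assumed names (SettingKey, Settings, LogLevel are illustrative, not the actual API):

```scala
// Hypothetical sketch: a typed key carries a parser from the string form
// used on the command line to its typed value.
final case class SettingKey[T](label: String, parse: String => T)

class Settings {
  private var map = Map.empty[String, Any]

  // `set <key> <value>` from the command line would end up here
  def set[T](key: SettingKey[T], raw: String): Unit =
    map = map.updated(key.label, key.parse(raw))

  def get[T](key: SettingKey[T]): Option[T] =
    map.get(key.label).map(_.asInstanceOf[T])
}

val LogLevel = SettingKey[String]("log-level", identity)
val settings = new Settings
settings.set(LogLevel, "debug")
// settings.get(LogLevel) == Some("debug")
```

The same parse function could be reused to load values from a properties file, since both sources provide strings.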

Static/full configuration

The previous section was mostly dynamic in the sense that settings are converted to and from string representations outside of our project definitions. We still want to be able to configure these settings within our project definition with all of the benefits of type checking. In addition, we'd like to make setting the basic parameters a bit more uniform to make it easier to discover settings. Lastly, it would be nice if configuration were more composable. Here is a sketch of how this might look in a project definition:

  override def settings = super.settings ++
    projectSettings(
      MainClass := Some("Test"),
      CompileOptions += ("-no-specialize" :: "blah" :: _),
      Organization :=
        for(name <- Name; o <- Organization) yield
          o orElse name
    ) ++
    taskSettings(TestCompile)(
      CompileOptions := Nil
    ) ++
    taskSettings(Package)(
      MainClass := Some("UseThisForPackaging")
    )

We'll look at the part that sets the main class first.

  MainClass := Some("Test"),

MainClass is a key parameterized by the type of the value it can be associated with. Its definition might look like:

  val MainClass = SettingKey[Option[String]]("main-class", "Defines the main class, such as used for running or packaging.")

The := method sets the setting on the left to the value on the right hand side. It constrains the type to be Option[String], as specified by the definition of MainClass. Moving along to CompileOptions:

  CompileOptions += ("-no-specialize" :: "blah" :: _),

The += method modifies the existing value by applying the function on the right hand side to the current value. The above would prepend the given options to the compile options. The use of := for Organization is the generalization of the previous two: it allows defining a setting in terms of other settings. For the initial implementation of settings, these would be evaluated in the order provided, but the idea is to reorder them according to the dependency graph for correct execution and to detect incorrect initialization, more like restricted lazy vals than vals.
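The reordering idea amounts to a topological sort over the settings' declared dependencies. A minimal sketch, with assumed names (Init and order are illustrative only):

```scala
// Hypothetical sketch: each initialization declares which settings it
// reads; evaluation order is derived so that a setting runs only after
// its dependencies, and cycles are reported as errors.
final case class Init(label: String, dependsOn: List[String])

def order(inits: List[Init]): List[String] = {
  val byLabel  = inits.map(i => i.label -> i).toMap
  var done     = List.empty[String]
  var visiting = Set.empty[String]

  def visit(label: String): Unit =
    if (!done.contains(label)) {
      if (visiting(label)) sys.error("cyclic setting dependency at " + label)
      visiting += label
      byLabel.get(label).foreach(_.dependsOn.foreach(visit))
      visiting -= label
      done = done :+ label // append once all dependencies are done
    }

  inits.foreach(i => visit(i.label))
  done
}

// Organization reads Name, so name must be initialized first:
val ordered = order(List(Init("organization", List("name")), Init("name", Nil)))
// ordered == List("name", "organization")
```

This also shows where "detect incorrect initialization" would live: a cycle or a reference to an undefined setting surfaces during ordering, before any value is computed.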

The final pieces are projectSettings and taskSettings. These separate settings for the project and settings for tasks. If a setting has not been explicitly defined for a task, it is inherited from the project. Here, we set different options for test-compile than the defaults that we set for the project. Also, we set the main class specifically for the package task. We don't have to lookup what the name of the setting is since we already know that MainClass defines the main class and we want to set it for the package task.

The proposed way that tasks would pull settings is:

  lazy val packageInputs =
    (setting(`package`, MainClass), ...) map { (mainClass: Option[String], ...) =>
      new PackageInputs(mainClass, ...)
    }
  lazy val `package` = packageInputs map packageImpl
  def packageImpl(inputs: PackageInputs): PackageOutputs = ...

where setting pulls the value of MainClass for the package task.
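The lookup behind setting is the inheritance rule described above: check for a task-specific value first, then fall back to the project-level value. A sketch under assumed names (Key and the map-based storage are illustrative):

```scala
// Hypothetical sketch of per-task lookup with project-level fallback.
final case class Key[T](label: String)

def setting[T](task: String,
               key: Key[T],
               project: Map[String, Any],
               perTask: Map[(String, String), Any]): Option[T] =
  perTask.get((task, key.label))       // explicit per-task value wins
    .orElse(project.get(key.label))    // otherwise inherit from the project
    .map(_.asInstanceOf[T])

val MainClass = Key[Option[String]]("main-class")
val project   = Map("main-class" -> Some("Test"))
val perTask   = Map(("package", "main-class") -> Some("UseThisForPackaging"))

// The package task sees its own value; other tasks inherit the project's:
// setting("package", MainClass, project, perTask) == Some(Some("UseThisForPackaging"))
// setting("run", MainClass, project, perTask)     == Some(Some("Test"))
```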

This proposal avoids the need to use the current pattern that combines lazy val/def for overriding and referencing the super value:

  lazy val mySetting = mySetting0
  def mySetting0 = <default value>

  // in subclass
  override def mySetting0 = super.mySetting0 ++ ...

Old Style

For comparison, here is how the example from the beginning might look in a 0.7.4 project definition:

  override def mainClass = Some("Test")
  override def compileOptions = "-no-specialize" :: "blah" :: super.compileOptions
  override def testCompileOptions = Nil
  override def organization = super.organization orElse name
  override def packageMainClass = Some("UseThisForPackaging")

Open Issues

  1. A disadvantage of this approach is that there is limited static checking that a parameter is valid for a given task. You are limited to the predefined keys, you reference the keys as identifiers checked by the compiler, and the types of values must match up, but you could still set MainClass on the clean task, for example. (It would be ignored.) Or, a setting might only be valid at the project level and be inappropriate to set on a task, such as Organization. Perhaps we could mitigate this by printing the settings accessed by a task when it runs. This would not be sufficient evidence that a setting is never used by the task, but it would show the ones that are definitely used. As an aside: I know of at least one type-system trick that is technically a solution, but I don't think such techniques are appropriate here.
  2. A potential advantage related to better composition is easier sharing of settings between multiple projects. A subproject could inherit the properties of its defining module. There are some problems to solve related to this, such as handling multiple defining parents, which can happen when using external projects with project("path").
  3. How should properties modification and persistence be modeled?
    1. Should there be property change hooks so that when certain properties are modified, the updated value is written back? Should there be hooks that run after each command, inspect the current settings, and persist changes to disk?
    2. Related, settings are intended to be immutable during task execution. However, properties in 0.7.x are modifiable during task execution, so tasks might see different versions of a property within a single execution. I think that is bad unless these dependencies are explicitly represented in the task/setting system (and hence are self-documenting and enforced by that system). When and why do people mutate properties during task execution? The main reason I can think of is to change the version during a release. It would be good to know other use cases before designing this part.
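For open issue 3.1, one possible shape for a hook that runs after each command is to diff the current settings against a snapshot taken before the command and write back only the keys that changed. A sketch of the diff step (changedKeys is an illustrative name; settings are shown in their string-backed properties form):

```scala
// Hypothetical sketch: given the settings before and after a command,
// return the keys whose values changed and therefore need to be
// persisted back to the properties file.
def changedKeys(before: Map[String, String],
                after: Map[String, String]): Map[String, String] =
  after.filter { case (k, v) => !before.get(k).contains(v) }

// Only "version" changed, so only it would be written back:
// changedKeys(Map("version" -> "1.0", "name" -> "demo"),
//             Map("version" -> "1.1", "name" -> "demo"))
//   == Map("version" -> "1.1")
```

Keeping the hook at the command boundary also sidesteps the immutability concern in 3.2: tasks never observe a property changing mid-execution, because writes are only reconciled between commands.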