# @qoopido/lerna.run

(`lerna run`) - Run command [optional] 🏃

Optional package extracted from the Lerna `run` command that gives you the ability to run an npm script in each package of the workspace that contains that script.

This package was added mainly because NPM Workspaces don't yet support running npm scripts in parallel and in topological order (they do have an RFC for this, so perhaps someday this package will become irrelevant :)).
## Installation

```sh
npm install @qoopido/lerna.run -D -W

# then use it (see usage below)
lerna run <script>
```
## Usage

```sh
$ lerna run <script> -- [..args] # runs npm run my-script in all packages that have it
$ lerna run test
$ lerna run build

# watch all packages and transpile on change, streaming prefixed output
$ lerna run --parallel watch
```

Run an npm script in each package of the workspace that contains that script. A double-dash (`--`) is necessary to pass dashed arguments to the script execution.

The name of the current package is available through the environment variable `LERNA_PACKAGE_NAME`:

```sh
$ lerna run build \$LERNA_PACKAGE_NAME
```
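As a minimal sketch of how that variable expands (the package name `@my-scope/my-package` and the manual assignment are stand-ins; in a real run, lerna sets the variable itself for each package):

```sh
#!/bin/sh
# lerna exports LERNA_PACKAGE_NAME for every package it runs a script in.
# It is assigned manually here purely to illustrate the expansion.
LERNA_PACKAGE_NAME="@my-scope/my-package"
echo "building $LERNA_PACKAGE_NAME"
```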
> **Note** for when using Yarn:
>
> ```sh
> $ yarn lerna <script> -- [..args]
> ```
>
> The double dash (`--`) will be stripped by `yarn`. This means Lerna cannot pass additional args to child scripts through the command line alone. To get around this, either globally install Lerna and run it directly, or create a script in `package.json` with your `lerna run` command and use `yarn` to run that script instead.
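As a sketch of that `package.json` workaround (the script name `test:all` and the forwarded `--coverage` arg are illustrative):

```json
{
  "scripts": {
    "test:all": "lerna run test -- --coverage"
  }
}
```

Running `yarn test:all` then executes the wrapped command; the double dash is baked into the script itself, so yarn never gets a chance to strip it.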
## Options

`lerna run` accepts all filter flags.

```sh
$ lerna run --scope my-component test
```
### `--npm-client <client>`

Must be an executable that knows how to run npm lifecycle scripts. The default `--npm-client` is `npm`.

```sh
$ lerna run build --npm-client=yarn
```

May also be configured in `lerna.json`:

```json
{
  "command": {
    "run": {
      "npmClient": "yarn"
    }
  }
}
```
### `--dry-run`

Displays the command that would be executed without actually running it. This can be helpful for troubleshooting.

```sh
$ lerna run test:coverage --dry-run
```
### `--stream`

Stream output from child processes immediately, prefixed with the originating package name. This allows output from different packages to be interleaved.

```sh
$ lerna run watch --stream
```
### `--parallel`

Similar to `--stream`, but completely disregards concurrency and topological sorting, running a given command or script immediately in all matching packages with prefixed streaming output. This is the preferred flag for long-running processes such as `npm run watch` run over many packages.

```sh
$ lerna run watch --parallel
```

> **Note:** It is advised to constrain the scope of this command when using the `--parallel` flag, as spawning dozens of subprocesses may be harmful to your shell's equanimity (or maximum file descriptor limit, for example). YMMV
### `--no-bail`

```sh
# Run an npm script in all packages that contain it, ignoring non-zero (error) exit codes
$ lerna run --no-bail test
```

By default, `lerna run` will exit with an error if any script run returns a non-zero exit code. Pass `--no-bail` to disable this behavior, running the script in all packages that contain it regardless of exit code.
### `--no-prefix`

Disable package name prefixing when output is streaming (`--stream` or `--parallel`). This option can be useful when piping results to other processes, such as editor plugins.
### `--profile`

Profiles the script executions and produces a performance profile which can be analyzed using DevTools in a Chromium-based browser (direct url: `devtools://devtools/bundled/devtools_app.html`). The profile shows a timeline of the script executions where each execution is assigned to an open slot. The number of slots is determined by the `--concurrency` option, and the number of open slots is determined by `--concurrency` minus the number of ongoing operations. The end result is a visualization of the parallel execution of your scripts.

The default location of the performance profile output is at the root of your project.

```sh
$ lerna run build --profile
```

> **Note:** Lerna-Lite will only profile when topological sorting is enabled (i.e. without `--parallel` and `--no-sort`).
### `--profile-location <location>`

You can provide a custom location for the performance profile output. The path provided will be resolved relative to the current working directory.

```sh
$ lerna run build --profile --profile-location=logs/profile/
```
### `--load-env-files`

When the task runner is powered by Nx (via `--use-nx`), it will automatically load `.env` files for you. You can set `--load-env-files` to `false` if you want to disable this behavior for any reason.

For more details about which `.env` files are loaded by default, please see: https://nx.dev/recipes/environment-variables/define-environment-variables
### `--use-nx`

Enables integration with Nx. Enabling this option tells Lerna to delegate running tasks to Nx instead of using `p-map` and `p-queue`. This only works if Nx is installed and `nx.json` is present. You can also skip the Nx cache by providing `--skip-nx-cache`.

Example of `nx.json`:

```json
{
  "extends": "nx/presets/npm.json",
  "tasksRunnerOptions": {
    "default": {
      "runner": "nx/tasks-runners/default",
      "options": {
        "cacheableOperations": ["build"]
      }
    }
  }
}
```
When Nx is installed and `nx.json` is detected in the current workspace with `useNx` set to `true` in `lerna.json`, Lerna will respect the `nx.json` configuration during `lerna run` and delegate to the Nx task runner.

Nx will run tasks in an order and with a concurrency that it determines appropriate based on the task graph that it creates. For more information, see Nx Mental Model: The Task Graph.

This behavior allows Nx to run tasks in the most efficient way possible, but it also means that some existing options for `lerna run` become obsolete, as explained below.

> **Note:** when Lerna is set to use Nx and detects `nx.json` with `targetDefaults` in the workspace, it will defer to Nx to detect task dependencies. Some options for `lerna run` will behave differently. See Using Lerna (Powered by Nx) to Run Tasks for more details.
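For reference, a minimal `lerna.json` enabling this delegation might look like the following sketch (the `version` value shown is illustrative; `useNx` is the relevant flag here):

```json
{
  "version": "independent",
  "useNx": true
}
```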
#### Obsolete Options when `useNx` is enabled

##### `--sort` and `--no-sort`

Nx will always run tasks in the order it deems correct based on its knowledge of project and task dependencies, so `--sort` and `--no-sort` have no effect.
##### `--parallel`

Nx will use the task graph to determine which tasks can be run in parallel and do so automatically, so `--parallel` has no effect.

> **Note:** if you want to limit the concurrency of tasks, you can still use the `--concurrency` global option to accomplish this.
##### `--include-dependencies`

Lerna by itself does not know which tasks depend on others, so when using filter options it defaults to excluding tasks on dependent projects and relies on `--include-dependencies` to manually specify that dependent projects' tasks should be included.

This is no longer a problem when Lerna uses Nx to run tasks. Nx, utilizing its task graph, will automatically run dependent tasks first when necessary, so `--include-dependencies` is obsolete. However, it can still be used to include project dependencies that Lerna detects but Nx does not deem necessary and would otherwise exclude.
##### `--ignore`

When used with Nx, `--ignore` will never cause `lerna run` to exclude any tasks that are deemed to be required by the Nx task graph.

> **Tip:** the effects on the options above will only apply if `nx.json` exists in the root with the `targetDefaults` property defined. Otherwise, they will behave just as they would with Lerna's base task runner (if `useNx` is `false`).