@opstrace/opstrace

This package has been deprecated. Author message: "this package is no longer supported."


How to run the local test runner against a remote cluster

The local test runner runs in a container. It requires kubectl to be configured against a specific remote opstrace cluster. It uses kubectl port-forward ... to connect to services in the remote cluster to test them.

  1. Create an opstrace cluster. Wait for it to be operational.

  2. Make your local ~/.kube/config point to the running cluster (e.g. via gcloud container clusters ...).

  3. Run make rebuild-testrunner-container-images (this can often be omitted, e.g. when the test runner images have not changed).

  4. Run make test-remote.
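
Putting steps 2 through 4 together, a minimal sketch (the cluster name, zone, and project are placeholders; substitute your own):

# point ~/.kube/config at the remote opstrace cluster
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project>

# rebuild the test runner images (often unnecessary)
make rebuild-testrunner-container-images

# run the containerized test runner against the remote cluster
make test-remote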

How to create a cluster: Makefile workflow

  1. make dependencies: this sets up local dependencies (it does not allocate cloud resources or cost money; in the worst case it can only do damage locally).

  2. make install-gcp: this deploys cloud infrastructure for an opstrace cluster and requires setting STACK_NAME (note that the current GCP OIDC provider setup is not prepared for arbitrary cluster base URLs). It writes a stack description file used for subsequent interaction with the infrastructure via the opstrace CLI.
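
A sketch of this workflow, assuming the Makefile reads STACK_NAME from the environment (the stack name is an example):

# local setup only; allocates no cloud resources
make dependencies

# deploy GCP infrastructure for the stack named below
export STACK_NAME=mystack
make install-gcp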

How to create a cluster without deploying the controller, so the controller can be run locally: Makefile workflow

  1. make dependencies: this sets up local dependencies (it does not allocate cloud resources or cost money; in the worst case it can only do damage locally).

  2. make cloudinfra-gcp or make cloudinfra-aws: this deploys cloud infrastructure for an opstrace cluster and requires setting STACK_NAME (note that the current GCP OIDC provider setup is not prepared for arbitrary cluster base URLs). It writes a stack description file used for subsequent interaction with the infrastructure via the opstrace CLI.

  3. make controller-local: this runs the controller logic locally (the generic redux-saga controller and the Kubernetes controller), operating the opstrace cluster components on the cloud infrastructure created in the previous step.
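
A sketch of the controller-local workflow under the same assumptions (the stack name is an example):

export STACK_NAME=mystack

# local setup only
make dependencies

# create the cloud infrastructure without deploying the controller
make cloudinfra-gcp    # or: make cloudinfra-aws

# run the controller locally against that infrastructure
make controller-local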

opstrace CLI

Two environment variables are read by the CLI:

  • STACK_NAME: (required) the name of the stack to interact with.

  • STACK_FILE: (optional) set this to avoid the otherwise required -f ./path/to/spec.json flag on CLI commands.

The stack file is created automatically when opstrace spec set [key] [value] runs and the file at the -f path (or the path in STACK_FILE) does not exist; if the file exists, the value is set in it. opstrace spec set|get [key] [value] only interacts with this local file - it does not fetch the remote stack spec, since the remote stack might not exist yet (install may not have been run). A flag to fetch from the remote stack may be added later.
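
For example (a sketch; the key, value, and file path are hypothetical):

export STACK_NAME=mystack
export STACK_FILE=./mystack-spec.json

# creates ./mystack-spec.json if it does not exist, then sets the key in it
opstrace spec set some_key some_value

# reads the value back from the local file (never from the remote stack)
opstrace spec get some_key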

CLI usage:


➜  opstrace git:(master) ✗ opstrace --help
opstrace <command>

Commands:
  opstrace install                   Install and run opstrace in a cloud account
  opstrace spec <cmd> <key> [value]  Set or get a spec option. Will accept stdin
                                     if [value] is omitted
  opstrace status get [key]          Get status. Can optionally specify [key] as
                                     dot separated path to value e.g.
                                     path.to.value
  opstrace run controller            Run the controller

Options:
  --version  Show version number                                       [boolean]
  --help     Show help                                                 [boolean]
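
A hedged usage example based on the help text above (the spec file path and status key are placeholders):

# install and run opstrace in the cloud account described by the spec file
opstrace install -f ./mystack-spec.json

# fetch a nested status value by dot-separated path
opstrace status get path.to.value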

Viewing Grafana dashboards

By default we deploy two tenants:

  • system: this is for all opstrace system logs and system metrics. Grafana UI for this tenant will be available at https://system.<STACK_NAME>.gcp.opstrace.io

  • default: this is the default user space (more tenants can be created). Grafana UI for this tenant will be available at https://default.<STACK_NAME>.gcp.opstrace.io

The system tenant is special in that we ship dashboards and system logs with the installation. All user tenants are clean spaces for users to configure their logging and metrics.

System logs (last hour)

https://system.<STACK_NAME>.gcp.opstrace.io/grafana/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22logs%22,%7B%7D,%7B%22mode%22:%22Logs%22%7D,%7B%22ui%22:%5Btrue,true,true,%22none%22%5D%7D%5D

System dashboard home

https://system.<STACK_NAME>.gcp.opstrace.io/grafana/?orgId=1

Default streams

By default we deploy one stream in the default tenant.

kube-logs.default.<STACK_NAME>.gcp.opstrace.io

This endpoint receives logs from a remote fluentd/fluentbit instance running on a different Kubernetes cluster. Configuration for the remote fluentbit DaemonSet can be found in the test folder: grep for kube-logs.default.mat.gcp.opstrace.io in the ConfigMap within /test/fluentbit/configmap.yaml and replace mat with your STACK_NAME, then run kubectl apply -f . in that folder to deploy the fluentbit resources on the remote cluster. Both clusters must be in the same region and on the same network for this to work out of the box.
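
A sketch of that procedure (assuming your STACK_NAME is mystack and you start from the repository root):

cd test/fluentbit

# point the fluentbit ConfigMap at your own stack's log endpoint
sed -i 's/kube-logs.default.mat.gcp.opstrace.io/kube-logs.default.mystack.gcp.opstrace.io/' configmap.yaml

# deploy the fluentbit resources on the remote (log-shipping) cluster
kubectl apply -f .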

View logs received in default tenant

https://default.<STACK_NAME>.gcp.opstrace.io/grafana/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22logs%22,%7B%7D,%7B%22mode%22:%22Logs%22%7D,%7B%22ui%22:%5Btrue,true,true,%22none%22%5D%7D%5D
