Extension chaosgcp¶
Version | 0.2.1 |
Repository | https://github.com/chaostoolkit-incubator/chaostoolkit-google-cloud-platform |
This project is a collection of actions and probes, gathered as an extension to the Chaos Toolkit. It targets the Google Cloud Platform.
Install¶
This package requires Python 3.5+
To be used from your experiment, this package must be installed in the Python environment where chaostoolkit already lives.
$ pip install -U chaostoolkit-google-cloud-platform
Usage¶
To use the probes and actions from this package, add the following to your experiment file:
{
    "type": "action",
    "name": "swap-nodepool-for-a-new-one",
    "provider": {
        "type": "python",
        "module": "chaosgcp.gke.nodepool.actions",
        "func": "swap_nodepool",
        "secrets": ["gcp", "k8s"],
        "arguments": {
            "old_node_pool_id": "...",
            "new_nodepool_body": {
                "nodePool": {
                    "config": {
                        "oauthScopes": [
                            "gke-version-default",
                            "https://www.googleapis.com/auth/devstorage.read_only",
                            "https://www.googleapis.com/auth/logging.write",
                            "https://www.googleapis.com/auth/monitoring",
                            "https://www.googleapis.com/auth/service.management.readonly",
                            "https://www.googleapis.com/auth/servicecontrol",
                            "https://www.googleapis.com/auth/trace.append"
                        ]
                    },
                    "initialNodeCount": 3,
                    "name": "new-default-pool"
                }
            }
        }
    }
}
That’s it!
Please explore the code to see existing probes and actions.
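Once the action is part of your experiment file, you run it with the Chaos Toolkit CLI as usual (assuming your file is named experiment.json):
$ chaos run experiment.json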
Configuration¶
Project and Cluster Information¶
You can pass the context via the configuration section of your experiment:
{
    "configuration": {
        "gcp_project_id": "...",
        "gcp_gke_cluster_name": "...",
        "gcp_region": "...",
        "gcp_zone": "..."
    }
}
Note that most functions exposed in this package also take those values directly as arguments, for when you want to override the global configuration for a specific activity.
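Chaos Toolkit configuration values can also be sourced from environment variables instead of being hardcoded; for instance (the variable name is just an example):
{
    "configuration": {
        "gcp_project_id": {
            "type": "env",
            "key": "GCP_PROJECT_ID"
        }
    }
}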
Credentials¶
This extension expects a service account with enough permissions to perform its operations. Please create such a service account manually (do not use your cluster's default one if you can help it, so that you are able to delete that service account if need be).
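As a sketch, such a service account can be created and given a key file with the gcloud CLI. The account name, project, and role below are placeholders; the roles you actually need depend on the activities you intend to run:
$ gcloud iam service-accounts create chaostoolkit --display-name "chaostoolkit"
$ gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:chaostoolkit@my-project.iam.gserviceaccount.com" \
    --role "roles/container.admin"
$ gcloud iam service-accounts keys create sa.json \
    --iam-account "chaostoolkit@my-project.iam.gserviceaccount.com"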
Once you have created your service account, either keep its key file on the machine you will run the experiment from, or pass its content in the secrets section of the experiment. The latter is not recommended, as your sensitive data will be plainly visible in the experiment file.
Here is the first way:
{
    "secrets": {
        "gcp": {
            "service_account_file": "/path/to/sa.json"
        }
    }
}
While the embedded way looks like this:
{
    "secrets": {
        "k8s": {
            "KUBERNETES_CONTEXT": "gke_project_name-g70e8ya0_us-central1_cluster-hello-world"
        },
        "gcp": {
            "service_account_info": {
                "type": "service_account",
                "project_id": "...",
                "private_key_id": "...",
                "private_key": "...",
                "client_email": "...",
                "client_id": "...",
                "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                "token_uri": "https://accounts.google.com/o/oauth2/token",
                "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/...."
            }
        }
    }
}
Notice how we also provided the k8s entry here. This is only because our example uses the swap_nodepool action, which drains the Kubernetes nodes and therefore requires the Kubernetes cluster credentials to work. These are documented in the Kubernetes extension for Chaos Toolkit. This is the only action that requires such a secret payload; the others only talk to the GCP API.
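If you would rather not inline sensitive values, Chaos Toolkit secrets can also be sourced from environment variables; for example (the variable name is illustrative):
{
    "secrets": {
        "gcp": {
            "service_account_file": {
                "type": "env",
                "key": "GCP_SERVICE_ACCOUNT_FILE"
            }
        }
    }
}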
Putting it all together¶
Here is a full example:
{
    "version": "1.0.0",
    "title": "...",
    "description": "...",
    "configuration": {
        "gcp_project_id": "...",
        "gcp_gke_cluster_name": "...",
        "gcp_region": "...",
        "gcp_zone": "..."
    },
    "secrets": {
        "gcp": {
            "service_account_file": "/path/to/sa.json"
        },
        "k8s": {
            "KUBERNETES_CONTEXT": "gke_project_name-g70e8ya0_us-central1_cluster-hello-world"
        }
    },
    "method": [
        {
            "type": "action",
            "name": "swap-nodepool-for-a-new-one",
            "provider": {
                "type": "python",
                "module": "chaosgcp.gke.nodepool.actions",
                "func": "swap_nodepool",
                "secrets": ["gcp", "k8s"],
                "arguments": {
                    "old_node_pool_id": "...",
                    "new_nodepool_body": {
                        "nodePool": {
                            "config": {
                                "oauthScopes": [
                                    "gke-version-default",
                                    "https://www.googleapis.com/auth/devstorage.read_only",
                                    "https://www.googleapis.com/auth/logging.write",
                                    "https://www.googleapis.com/auth/monitoring",
                                    "https://www.googleapis.com/auth/service.management.readonly",
                                    "https://www.googleapis.com/auth/servicecontrol",
                                    "https://www.googleapis.com/auth/trace.append"
                                ]
                            },
                            "initialNodeCount": 3,
                            "name": "new-default-pool"
                        }
                    }
                }
            }
        }
    ]
}
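Since these activities are plain Python functions, you can also call them directly from Python when debugging, passing the configuration and secrets mappings yourself. A minimal sketch, with placeholder values throughout:
# Sketch: invoking the swap_nodepool action directly, outside of a
# full Chaos Toolkit experiment. All values below are placeholders.
from chaosgcp.gke.nodepool.actions import swap_nodepool

configuration = {
    "gcp_project_id": "my-project",
    "gcp_gke_cluster_name": "cluster-hello-world",
    "gcp_zone": "us-central1-a",
}

secrets = {
    "gcp": {"service_account_file": "/path/to/sa.json"},
    "k8s": {"KUBERNETES_CONTEXT": "gke_my-project_us-central1_cluster-hello-world"},
}

# Swap the default pool for a new one; by default the old pool is
# left cordoned rather than deleted, so it can be uncordoned again.
response = swap_nodepool(
    old_node_pool_id="default-pool",
    new_nodepool_body={
        "nodePool": {
            "initialNodeCount": 3,
            "name": "new-default-pool",
        }
    },
    configuration=configuration,
    secrets=secrets,
)
print(response)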
Migrate from GCE extension¶
If you previously used the deprecated GCE extension, here is a quick recap of changes you’ll need to go through to update your experiments.
- The module chaosgce.nodepool.actions has been replaced by chaosgcp.gke.nodepool.actions. You will need to update the module key of your Python providers accordingly.
- The keys in the configuration section have been renamed, as shown in the example below:
  - "gce_project_id" -> "gcp_project_id"
  - "gce_region" -> "gcp_region"
  - "gce_zone" -> "gcp_zone"
  - "gce_cluster_name" -> "gcp_gke_cluster_name"
Contribute¶
If you wish to contribute more functions to this package, you are more than welcome to do so. Please fork this project, make your changes following the usual PEP 8 code style, sprinkle them with tests, and submit a PR for review.
The Chaos Toolkit projects require that all contributors sign a Developer Certificate of Origin on each commit they would like to merge into the master branch of the repository. Please make sure you can abide by the rules of the DCO before submitting a PR.
If you wish to add a new function to this extension that relates to a Google Cloud product not yet covered by this package, please use the product's short name or acronym as a first-level subpackage (e.g. iam, gke, sql, storage, ...). See the list of [GCP products and services](https://cloud.google.com/products/).
Develop¶
If you wish to develop on this project, make sure to install the development dependencies. First create a virtual environment, then install the dependencies:
$ pip install -r requirements-dev.txt -r requirements.txt
Then, point your environment to this directory:
$ python setup.py develop
Now you can edit the files and the changes will automatically be seen by your environment, even when running from the chaos command locally.
Test¶
To run the tests for the project, execute the following:
$ pytest
Exported Activities¶
cloudbuild¶
get_trigger¶
Type | probe |
Module | chaosgcp.cloudbuild.probes |
Name | get_trigger |
Return | None |
Returns information about a BuildTrigger.
See: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/get
- name: name of the trigger
Signature:
def get_trigger(name: str,
                configuration: Dict[str, Dict[str, str]] = None,
                secrets: Dict[str, Dict[str, str]] = None):
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
name | string | | Yes |
Usage:
{
    "name": "get-trigger",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosgcp.cloudbuild.probes",
        "func": "get_trigger",
        "arguments": {
            "name": ""
        }
    }
}

name: get-trigger
provider:
  arguments:
    name: ''
  func: get_trigger
  module: chaosgcp.cloudbuild.probes
  type: python
type: probe
list_trigger_names¶
Type | probe |
Module | chaosgcp.cloudbuild.probes |
Name | list_trigger_names |
Return | None |
Lists only the trigger names of a project.
Signature:
def list_trigger_names(configuration: Dict[str, Dict[str, str]] = None,
                       secrets: Dict[str, Dict[str, str]] = None):
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
Usage:
{
    "name": "list-trigger-names",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosgcp.cloudbuild.probes",
        "func": "list_trigger_names"
    }
}

name: list-trigger-names
provider:
  func: list_trigger_names
  module: chaosgcp.cloudbuild.probes
  type: python
type: probe
list_triggers¶
Type | probe |
Module | chaosgcp.cloudbuild.probes |
Name | list_triggers |
Return | None |
Lists existing BuildTriggers.
See: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/list
Signature:
def list_triggers(configuration: Dict[str, Dict[str, str]] = None,
                  secrets: Dict[str, Dict[str, str]] = None):
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
Usage:
{
    "name": "list-triggers",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosgcp.cloudbuild.probes",
        "func": "list_triggers"
    }
}

name: list-triggers
provider:
  func: list_triggers
  module: chaosgcp.cloudbuild.probes
  type: python
type: probe
run_trigger¶
Type | action |
Module | chaosgcp.cloudbuild.actions |
Name | run_trigger |
Return | None |
Runs a BuildTrigger at a particular source revision.
NB: The trigger must exist in the targeted project.
See: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
- name: name of the trigger
- source: location of the source in a Google Cloud Source Repository
Signature:
def run_trigger(name: str,
                source: Dict[Any, Any],
                configuration: Dict[str, Dict[str, str]] = None,
                secrets: Dict[str, Dict[str, str]] = None):
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
name | string | | Yes |
source | mapping | | Yes |
Usage:
{
    "name": "run-trigger",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.cloudbuild.actions",
        "func": "run_trigger",
        "arguments": {
            "name": "",
            "source": {}
        }
    }
}

name: run-trigger
provider:
  arguments:
    name: ''
    source: {}
  func: run_trigger
  module: chaosgcp.cloudbuild.actions
  type: python
type: action
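The source mapping follows the RepoSource resource of the Cloud Build API linked above; a hypothetical example, with placeholder project, repository, and trigger names:
{
    "name": "run-trigger",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.cloudbuild.actions",
        "func": "run_trigger",
        "arguments": {
            "name": "my-trigger",
            "source": {
                "projectId": "my-project",
                "repoName": "my-repo",
                "branchName": "master"
            }
        }
    }
}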
nodepool¶
create_new_nodepool¶
Type | action |
Module | chaosgcp.gke.nodepool.actions |
Name | create_new_nodepool |
Return | mapping |
Create a new node pool in the given cluster/zone of the provided project.
The node pool config must be passed as a mapping to the body parameter and respect the REST API.
If wait_until_complete is set to True (the default), the function will block until the node pool is ready. Otherwise, it will return immediately with the operation information.
Signature:
def create_new_nodepool(
        body: Dict[str, Any],
        wait_until_complete: bool = True,
        configuration: Dict[str, Dict[str, str]] = None,
        secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
body | mapping | | Yes |
wait_until_complete | boolean | true | No |
Usage:
{
    "name": "create-new-nodepool",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.gke.nodepool.actions",
        "func": "create_new_nodepool",
        "arguments": {
            "body": {}
        }
    }
}

name: create-new-nodepool
provider:
  arguments:
    body: {}
  func: create_new_nodepool
  module: chaosgcp.gke.nodepool.actions
  type: python
type: action
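The body mapping mirrors the request body shown in the introduction; for instance (all values illustrative):
name: create-new-nodepool
provider:
  arguments:
    body:
      nodePool:
        config:
          oauthScopes:
            - https://www.googleapis.com/auth/devstorage.read_only
            - https://www.googleapis.com/auth/logging.write
        initialNodeCount: 3
        name: new-default-pool
  func: create_new_nodepool
  module: chaosgcp.gke.nodepool.actions
  type: python
type: action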
delete_nodepool¶
Type | action |
Module | chaosgcp.gke.nodepool.actions |
Name | delete_nodepool |
Return | mapping |
Delete a node pool from the given cluster/zone of the provided project.
If wait_until_complete is set to True (the default), the function will block until the node pool is deleted. Otherwise, it will return immediately with the operation information.
Signature:
def delete_nodepool(
        node_pool_id: str,
        wait_until_complete: bool = True,
        configuration: Dict[str, Dict[str, str]] = None,
        secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
node_pool_id | string | | Yes |
wait_until_complete | boolean | true | No |
Usage:
{
    "name": "delete-nodepool",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.gke.nodepool.actions",
        "func": "delete_nodepool",
        "arguments": {
            "node_pool_id": ""
        }
    }
}

name: delete-nodepool
provider:
  arguments:
    node_pool_id: ''
  func: delete_nodepool
  module: chaosgcp.gke.nodepool.actions
  type: python
type: action
swap_nodepool¶
Type | action |
Module | chaosgcp.gke.nodepool.actions |
Name | swap_nodepool |
Return | mapping |
Create a new node pool and drain the old one so pods can be rescheduled on the new pool. The old node pool is deleted only if delete_old_node_pool is set to True, which is not the default. Otherwise, the old node pool is left cordoned so it can no longer be scheduled on.
Please ensure you provide the Kubernetes secrets as well when calling this action. See https://github.com/chaostoolkit/chaostoolkit-kubernetes#configuration
Signature:
def swap_nodepool(old_node_pool_id: str,
                  new_nodepool_body: Dict[str, Any],
                  wait_until_complete: bool = True,
                  delete_old_node_pool: bool = False,
                  drain_timeout: int = 120,
                  configuration: Dict[str, Dict[str, str]] = None,
                  secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
old_node_pool_id | string | | Yes |
new_nodepool_body | mapping | | Yes |
wait_until_complete | boolean | true | No |
delete_old_node_pool | boolean | false | No |
drain_timeout | integer | 120 | No |
Usage:
{
    "name": "swap-nodepool",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.gke.nodepool.actions",
        "func": "swap_nodepool",
        "arguments": {
            "old_node_pool_id": "",
            "new_nodepool_body": {}
        }
    }
}

name: swap-nodepool
provider:
  arguments:
    new_nodepool_body: {}
    old_node_pool_id: ''
  func: swap_nodepool
  module: chaosgcp.gke.nodepool.actions
  type: python
type: action
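A fuller call that also deletes the old pool once drained and extends the drain timeout might look like this (all values illustrative; remember to reference the gcp and k8s secrets):
name: swap-nodepool
provider:
  arguments:
    old_node_pool_id: default-pool
    new_nodepool_body:
      nodePool:
        initialNodeCount: 3
        name: new-default-pool
    delete_old_node_pool: true
    drain_timeout: 300
  func: swap_nodepool
  module: chaosgcp.gke.nodepool.actions
  secrets:
  - gcp
  - k8s
  type: python
type: action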
sql¶
describe_instance¶
Type | probe |
Module | chaosgcp.sql.probes |
Name | describe_instance |
Return | mapping |
Displays configuration and metadata about a Cloud SQL instance.
Information such as instance name, IP address, region, the CA certificate and configuration settings will be displayed.
See: https://cloud.google.com/sql/docs/postgres/admin-api/v1beta4/instances/get
- instance_id: Cloud SQL instance ID
Signature:
def describe_instance(
        instance_id: str,
        configuration: Dict[str, Dict[str, str]] = None,
        secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
instance_id | string | | Yes |
Usage:
{
    "name": "describe-instance",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosgcp.sql.probes",
        "func": "describe_instance",
        "arguments": {
            "instance_id": ""
        }
    }
}

name: describe-instance
provider:
  arguments:
    instance_id: ''
  func: describe_instance
  module: chaosgcp.sql.probes
  type: python
type: probe
export_data¶
Type | action |
Module | chaosgcp.sql.actions |
Name | export_data |
Return | mapping |
Exports data from a Cloud SQL instance to a Cloud Storage bucket as a SQL dump or CSV file.
See: https://cloud.google.com/sql/docs/postgres/admin-api/v1beta4/instances/export
If project_id is given, it will take precedence over the global project ID defined at the configuration level.
Signature:
def export_data(instance_id: str,
                storage_uri: str,
                project_id: str = None,
                file_type: str = 'sql',
                databases: List[str] = None,
                tables: List[str] = None,
                export_schema_only: bool = False,
                wait_until_complete: bool = True,
                configuration: Dict[str, Dict[str, str]] = None,
                secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
instance_id | string | | Yes |
storage_uri | string | | Yes |
project_id | string | null | No |
file_type | string | "sql" | No |
databases | list | null | No |
tables | list | null | No |
export_schema_only | boolean | false | No |
wait_until_complete | boolean | true | No |
Usage:
{
    "name": "export-data",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.sql.actions",
        "func": "export_data",
        "arguments": {
            "instance_id": "",
            "storage_uri": ""
        }
    }
}

name: export-data
provider:
  arguments:
    instance_id: ''
    storage_uri: ''
  func: export_data
  module: chaosgcp.sql.actions
  type: python
type: action
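For instance, to export only the schema of a single database as a SQL dump without blocking until the operation completes (instance, bucket, and database names are placeholders):
name: export-data
provider:
  arguments:
    instance_id: my-instance
    storage_uri: gs://my-bucket/schema.sql
    databases:
    - mydb
    export_schema_only: true
    wait_until_complete: false
  func: export_data
  module: chaosgcp.sql.actions
  type: python
type: action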
import_data¶
Type | action |
Module | chaosgcp.sql.actions |
Name | import_data |
Return | mapping |
Imports data into a Cloud SQL instance from a SQL dump or CSV file in Cloud Storage.
See: https://cloud.google.com/sql/docs/postgres/admin-api/v1beta4/instances/import
If project_id is given, it will take precedence over the global project ID defined at the configuration level.
Signature:
def import_data(instance_id: str,
                storage_uri: str,
                database: str,
                project_id: str = None,
                file_type: str = 'sql',
                import_user: str = None,
                table: str = None,
                columns: List[str] = None,
                wait_until_complete: bool = True,
                configuration: Dict[str, Dict[str, str]] = None,
                secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
instance_id | string | | Yes |
storage_uri | string | | Yes |
database | string | | Yes |
project_id | string | null | No |
file_type | string | "sql" | No |
import_user | string | null | No |
table | string | null | No |
columns | list | null | No |
wait_until_complete | boolean | true | No |
Usage:
{
    "name": "import-data",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.sql.actions",
        "func": "import_data",
        "arguments": {
            "instance_id": "",
            "storage_uri": "",
            "database": ""
        }
    }
}

name: import-data
provider:
  arguments:
    database: ''
    instance_id: ''
    storage_uri: ''
  func: import_data
  module: chaosgcp.sql.actions
  type: python
type: action
list_instances¶
Type | probe |
Module | chaosgcp.sql.probes |
Name | list_instances |
Return | mapping |
Lists Cloud SQL instances in a given project in the alphabetical order of the instance name.
See: https://cloud.google.com/sql/docs/postgres/admin-api/v1beta4/instances/list
Signature:
def list_instances(
        configuration: Dict[str, Dict[str, str]] = None,
        secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
Usage:
{
    "name": "list-instances",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosgcp.sql.probes",
        "func": "list_instances"
    }
}

name: list-instances
provider:
  func: list_instances
  module: chaosgcp.sql.probes
  type: python
type: probe
trigger_failover¶
Type | action |
Module | chaosgcp.sql.actions |
Name | trigger_failover |
Return | mapping |
Causes a high-availability Cloud SQL instance to fail over.
See: https://cloud.google.com/sql/docs/postgres/admin-api/v1beta4/instances/failover
- instance_id: Cloud SQL instance ID
- wait_until_complete: wait for the operation in progress to complete
- settings_version: the current settings version of this instance
Signature:
def trigger_failover(
        instance_id: str,
        wait_until_complete: bool = True,
        settings_version: int = None,
        configuration: Dict[str, Dict[str, str]] = None,
        secrets: Dict[str, Dict[str, str]] = None) -> Dict[str, Any]:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
instance_id | string | | Yes |
wait_until_complete | boolean | true | No |
settings_version | integer | null | No |
Usage:
{
    "name": "trigger-failover",
    "type": "action",
    "provider": {
        "type": "python",
        "module": "chaosgcp.sql.actions",
        "func": "trigger_failover",
        "arguments": {
            "instance_id": ""
        }
    }
}

name: trigger-failover
provider:
  arguments:
    instance_id: ''
  func: trigger_failover
  module: chaosgcp.sql.actions
  type: python
type: action
storage¶
object_exists¶
Type | probe |
Module | chaosgcp.storage.probes |
Name | object_exists |
Return | boolean |
Indicates whether an object exists in the given Cloud Storage bucket.
- bucket_name: name of the bucket
- object_name: name of the object within the bucket, as a path
Signature:
def object_exists(bucket_name: str,
                  object_name: str,
                  configuration: Dict[str, Dict[str, str]] = None,
                  secrets: Dict[str, Dict[str, str]] = None) -> bool:
    pass
Arguments:
Name | Type | Default | Required |
---|---|---|---|
bucket_name | string | | Yes |
object_name | string | | Yes |
Usage:
{
    "name": "object-exists",
    "type": "probe",
    "provider": {
        "type": "python",
        "module": "chaosgcp.storage.probes",
        "func": "object_exists",
        "arguments": {
            "bucket_name": "",
            "object_name": ""
        }
    }
}

name: object-exists
provider:
  arguments:
    bucket_name: ''
    object_name: ''
  func: object_exists
  module: chaosgcp.storage.probes
  type: python
type: probe
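Because this probe returns a boolean, it slots naturally into a steady-state hypothesis with a true tolerance; a hypothetical example, with placeholder bucket and object names:
{
    "steady-state-hypothesis": {
        "title": "Our exported file is present",
        "probes": [
            {
                "name": "object-exists",
                "type": "probe",
                "tolerance": true,
                "provider": {
                    "type": "python",
                    "module": "chaosgcp.storage.probes",
                    "func": "object_exists",
                    "secrets": ["gcp"],
                    "arguments": {
                        "bucket_name": "my-bucket",
                        "object_name": "backups/schema.sql"
                    }
                }
            }
        ]
    }
}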