30 Commits

Author SHA1 Message Date
b810e5e362 Merge branch 'better-tests' of https://github.com/charlesreid1/2019-snakemake-byok8s into better-tests 2019-01-28 16:22:03 -08:00
01a7647aae more readme typo fixes 2019-01-26 01:39:15 -08:00
6515a67225 add instructions for kubernetes using minikube alone, since it is a little bit complicated on aws nodes. 2019-01-26 01:33:06 -08:00
2d54c50e32 Merge branch 'better-tests'
* better-tests: (22 commits)
  update readme instructions
  add a slightly harder test
  switch to python build env. no material change yet.
  commit to lilic/travis-minikube kube-1.12 branch
  switch to an easier task. if this build fails it is the version numbers causing problems.
  implement coredns fixes, kubectl apply, and byok8s into travis tests
  fix flag
  fix flags and remote provider approach: force s3
  updating tests with yml files to fix k8s dns container
  add kube-system DNS fix
  moving snakefile
  fix how snakefile is being dealt with.
  update requirements for kubernetes, snakemake, s3
  bump k8s version, 1.9->1.10
  update byok8s test - test all workflows
  update snakefile zeta workflow
  add byok8s tests - WIP
  make travis use python
  call byok8s from .travis.yml
  fix workflow labels in readme
  ...
2019-01-26 01:30:24 -08:00
abd0fc8a6a update readme instructions 2019-01-26 01:28:36 -08:00
beecef4b41 Better tests (#3)
* update travis.yml to reset to master LiliC/travis-minikube

* start k8s from test/

* fix workflow labels in readme

* call byok8s from .travis.yml

* make travis use python

* add byok8s tests - WIP

* update snakefile zeta workflow

* update byok8s test - test all workflows

* bump k8s version, 1.9->1.10

* update requirements for kubernetes, snakemake, s3

* fix how snakefile is being dealt with.

* moving snakefile

* add kube-system DNS fix

* updating tests with yml files to fix k8s dns container

* fix flags and remote provider approach: force s3

* fix flag

* implement coredns fixes, kubectl apply, and byok8s into travis tests

* switch to an easier task. if this build fails it is the version numbers causing problems.

* commit to lilic/travis-minikube kube-1.12 branch

* switch to python build env. no material change yet.

* add a slightly harder test
2019-01-26 01:03:02 -08:00
23769105d0 Merge branch 'master' into better-tests 2019-01-26 01:02:52 -08:00
104d0fe868 add a slightly harder test 2019-01-26 00:57:11 -08:00
9bbc217d32 switch to python build env. no material change yet. 2019-01-26 00:51:51 -08:00
86751c36a0 commit to lilic/travis-minikube kube-1.12 branch 2019-01-26 00:47:16 -08:00
cd86326b8b switch to an easier task. if this build fails it is the version numbers causing problems. 2019-01-26 00:25:17 -08:00
7793f9cc32 implement coredns fixes, kubectl apply, and byok8s into travis tests 2019-01-26 00:19:58 -08:00
6bb04d44d7 fix flag 2019-01-26 00:11:33 -08:00
5b3c10d2dd fix flags and remote provider approach: force s3 2019-01-26 00:04:23 -08:00
47849200d0 updating tests with yml files to fix k8s dns container 2019-01-25 23:03:56 -08:00
c2de3e8567 add kube-system DNS fix 2019-01-25 17:17:14 -08:00
3b86a9ebe2 moving snakefile 2019-01-25 17:11:00 -08:00
f669ff6951 fix how snakefile is being dealt with. 2019-01-25 17:07:45 -08:00
3c93c1b236 update requirements for kubernetes, snakemake, s3 2019-01-25 13:44:53 -08:00
1095efc568 bump k8s version, 1.9->1.10 2019-01-25 13:44:11 -08:00
46108c379e update byok8s test - test all workflows 2019-01-22 15:17:01 -08:00
e47cfe66c0 update snakefile zeta workflow 2019-01-22 15:14:11 -08:00
d3f295d7da add byok8s tests - WIP 2019-01-22 15:00:08 -08:00
e99286a4e0 update travis.yml to reset to master LiliC/travis-minikube (#2)
* update travis.yml to reset to master LiliC/travis-minikube

* start k8s from test/

* fix workflow labels in readme

* call byok8s from .travis.yml

* make travis use python
2019-01-22 00:35:37 -08:00
d18d7416a3 make travis use python 2019-01-22 00:32:15 -08:00
04dce75b10 call byok8s from .travis.yml 2019-01-22 00:28:46 -08:00
675023537c fix workflow labels in readme 2019-01-22 00:28:20 -08:00
7f537c1f8e start k8s from test/ 2019-01-22 00:22:18 -08:00
c2b7d2c1f6 Merge branch 'master' into reset-travis 2019-01-21 21:50:47 -08:00
43e9832f99 update travis.yml to reset to master LiliC/travis-minikube 2019-01-21 21:46:25 -08:00
16 changed files with 472 additions and 188 deletions

.travis.yml

@@ -1,28 +1,53 @@
# https://docs.travis-ci.com/user/languages/python/
# https://raw.githubusercontent.com/LiliC/travis-minikube/minikube-30-kube-1.12/.travis.yml
language: python
python:
- "3.5"
# https://github.com/LiliC/travis-minikube/blob/master/.travis.yml
- "3.6"
sudo: required
# We need systemd for kubeadm, and it is the default from 16.04+
dist: xenial
# This moves Kubernetes specific config files.
env:
- CHANGE_MINIKUBE_NONE_USER=true
# --bootstrapper=localkube comes from
# https://github.com/kubevirt/containerized-data-importer/issues/93
# and
# https://github.com/kubernetes/minikube/issues/2704
install:
# Install byok8s requirements (snakemake, python-kubernetes)
- pip install -r requirements.txt
# Install byok8s cli tool
- python setup.py build install
before_script:
- sudo apt-get update
- sudo apt-get install -y coreutils
- curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
- curl -Lo minikube https://storage.googleapis.com/minikube/releases/0.28.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
- sudo minikube start --kubernetes-version=1.11.0 --vm-driver=none --bootstrapper=localkube
# Do everything from test/
- cd test
# Make root mounted as rshared to fix kube-dns issues.
- sudo mount --make-rshared /
# Download kubectl, which is a requirement for using minikube.
- curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# Download minikube.
- curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
- sudo minikube start --vm-driver=none --bootstrapper=kubeadm --kubernetes-version=v1.12.0
# Fix the kubectl context, as it's often stale.
- minikube update-context
# Wait for Kubernetes to be up and ready.
- JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1; done
################
## easy test
script:
- kubectl cluster-info
# Verify kube-addon-manager.
# kube-addon-manager is responsible for managing other kubernetes components, such as kube-dns, dashboard, storage-provisioner..
- JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n kube-system get pods -lcomponent=kube-addon-manager -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1;echo "waiting for kube-addon-manager to be available"; kubectl get pods --all-namespaces; done
# Wait for kube-dns to be ready.
- JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n kube-system get pods -lk8s-app=kube-dns -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1;echo "waiting for kube-dns to be available"; kubectl get pods --all-namespaces; done
# Create example Redis deployment on Kubernetes.
- kubectl run travis-example --image=redis --labels="app=travis-example"
# Make sure created pod is scheduled and running.
- JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n default get pods -lapp=travis-example -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1;echo "waiting for travis-example deployment to be available"; kubectl get pods -n default; done
#
################
## harder
- byok8s --s3-bucket=cmr-0123 -f workflow-alpha params-blue
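The repeated JSONPATH one-liners in the diff above all implement the same poll-until-ready pattern: query pod or node status, grep for `Ready=True`, and sleep until it appears (note the Travis loops poll forever, with no timeout). A minimal Python sketch of the same idea, with a hypothetical `fake_ready` predicate standing in for the `kubectl ... | grep -q "Ready=True"` check and a timeout added as an assumption:

```python
import time

def wait_until(is_ready, timeout=300, interval=1):
    """Poll is_ready() until it returns True, or give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    return False

# Hypothetical stand-in for the kubectl/grep readiness check:
# a predicate that reports ready on the third poll.
polls = {"count": 0}
def fake_ready():
    polls["count"] += 1
    return polls["count"] >= 3

print(wait_until(fake_ready, timeout=5, interval=0))  # True once the predicate flips
```

The timeout is the one thing the shell loops lack: a hung kube-dns pod leaves the Travis build spinning until the CI job itself is killed.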

README.md

@@ -3,36 +3,32 @@
[![travis](https://img.shields.io/travis/charlesreid1/2019-snakemake-byok8s.svg)](https://travis-ci.org/charlesreid1/2019-snakemake-byok8s)
[![license](https://img.shields.io/github/license/charlesreid1/2019-snakemake-byok8s.svg)](https://github.com/charlesreid1/2019-snakemake-byok8s/blob/master/LICENSE)
# Overview
This is an example of a Snakemake workflow that:
- is a command line utility
- is bundled as a Python package
- is designed to run on a Kubernetes cluster
- can be tested locally or with Travis CI using minikube
Snakemake functionality is provided through
a command line tool called `byok8s`, so that
it allows you to do this:
it allows you to do this (abbreviated for clarity):
```
# install minikube so you can
# create a (virtual) k8s cluster
scripts/install_minikube.sh
# move to working directory
cd test
# deploy (virtual) k8s cluster
# Create virtual k8s cluster
minikube start
# run the workflow
byok8s -w my-workflowfile -p my-paramsfile
# Run the workflow
byok8s --s3-bucket=mah-s3-bukkit my-workflowfile my-paramsfile
# clean up (virtual) k8s cluster
# Clean up the virtual k8s cluster
minikube stop
```
Snakemake workflows are run on a Kubernetes (k8s)
Snakemake workflows are provided via a Snakefile by
the user. Snakemake runs tasks on the Kubernetes (k8s)
cluster. The approach is for the user to provide
their own Kubernetes cluster (byok8s = Bring Your
Own Kubernetes).
@@ -40,7 +36,7 @@ Own Kubernetes).
The example above uses [`minikube`](https://github.com/kubernetes/minikube)
to make a virtual k8s cluster, useful for testing.
For real workflow,s your options for
For real workflows, your options for
kubernetes clusters are cloud providers:
- AWS EKS (Elastic Container Service)
@@ -48,8 +44,8 @@ kubernetes clusters are cloud providers:
- Digital Ocean Kubernetes service
- etc...
Travis CI tests utilize minikube.
The Travis CI tests utilize minikube to run
test workflows.
# Quickstart
@@ -65,9 +61,7 @@ Step 3: Run the `byok8s` workflow using the Kubernetes cluster.
Step 4: Tear down Kubernetes cluster with `minikube`.
## Step 1: Set Up VirtualKubernetes Cluster
### Installing Minikube
## Step 1: Set Up Virtual Kubernetes Cluster
For the purposes of the quickstart, we will walk
through how to set up a local, virtual Kubernetes
@@ -76,17 +70,18 @@ cluster using `minikube`.
Start by installing minikube:
```
scripts/install_minicube.sh
scripts/install_minikube.sh
```
Once it is installed, you can start up a kubernetes cluster
with minikube using the following command:
with minikube using the following commands:
```
cd test
minikube start
```
NOTE: If you are running on AWS,
NOTE: If you are running on AWS, run this command first:
```
minikube config set vm-driver none
@@ -94,6 +89,17 @@ minikube config set vm-driver none
to set the vm driver to none and use native Docker to run stuff.
If you are running on AWS, the DNS in the minikube
kubernetes cluster will not work, so run this command
to fix the DNS settings (should be run from the
`test/` directory):
```
kubectl apply -f fixcoredns.yml
kubectl delete --all pods --namespace kube-system
```
## Step 2: Install byok8s
Start by setting up a python virtual environment,
@@ -138,43 +144,50 @@ Now you can run the workflow with the `byok8s` command.
This submits the Snakemake workflow jobs to the Kubernetes
cluster that minikube created.
(NOTE: the command line utility must be run
from the same directory as the kubernetes
cluster was created from, otherwise Snakemake
won't be able to find the kubernetes cluster.)
You should have your workflow in a `Snakefile` in the
current directory. Use the `--snakefile` flag if it is
named something other than `Snakefile`.
(Would be a good idea to instead specify paths
for workflow config and param files,
or have a built-in set of params and configs.)
You will also need to specify your AWS credentials
via the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
environment variables. These are used to access
S3 buckets for file I/O.
Run the blue workflow with alpha params:
Finally, you will need to create an S3 bucket for
Snakemake to use for file I/O. Pass the name of the
bucket using the `--s3-bucket` flag.
Start by exporting these two vars (careful to
scrub them from bash history):
```
byok8s -w workflow-blue -p params-alpha
export AWS_ACCESS_KEY_ID=XXXXX
export AWS_SECRET_ACCESS_KEY=XXXXX
```
Run the blue workflow with gamma params, and
kubernetes configuration details in kube-deets
(all json files):
Run the alpha workflow with blue params:
```
byok8s -w workflow-blue -p params-gamma
byok8s --s3-bucket=mah-bukkit workflow-alpha params-blue
```
Run the red workflow with gamma params, &c:
Run the alpha workflow with red params:
```
byok8s -w workflow-red -p params-gamma
byok8s --s3-bucket=mah-bukkit workflow-alpha params-red
```
Run the gamma workflow with red params, &c:
```
byok8s --s3-bucket=mah-bukkit workflow-gamma params-red
```
(NOTE: May want to let the user specify
input and output directories with flags.)
Make reasonable assumptions:
- if no input dir specified, use cwd
- if no output dir specified, make one w timestamp and workflow params
- don't rely on positional args, makes it harder to translate python code/command line calls
All input files are searched for relative to the working
directory.
## Step 4: Tear Down Kubernetes Cluster
@@ -188,3 +201,12 @@ down with the following command:
minikube stop
```
# Using Kubernetes with Cloud Providers
| Cloud Provider | Kubernetes Service | Guide |
|-----------------------------|---------------------------------|----------------------------------------------|
| Minikube (on AWS EC2) | Minikube | [Minikube AWS Guide](kubernetes_minikube.md) |
| Google Cloud Platform (GCP) | Google Container Engine (GKE) | [GCP GKE Guide](kubernetes_gcp.md) |
| Amazon Web Services (AWS) | Elastic Container Service (EKS) | [AWS EKS Guide](kubernetes_aws.md) |
| Digital Ocean (DO) | DO Kubernetes (DOK) | [DO DOK Guide](kubernetes_dok.md) |


@@ -1,17 +0,0 @@
name = config['name']
rule rulename1:
input:
"alpha.txt"
rule target1:
output:
"alpha.txt"
shell:
"echo alpha {name} > {output}"
rule target2:
output:
"gamma.txt"
shell:
"echo gamma {name} > {output}"


@@ -7,82 +7,117 @@ import snakemake
import sys
import pprint
import json
import subprocess
from . import _program
thisdir = os.path.abspath(os.path.dirname(__file__))
parentdir = os.path.join(thisdir,'..')
cwd = os.getcwd()
def main(sysargs = sys.argv[1:]):
parser = argparse.ArgumentParser(prog = _program, description='byok8s: run snakemake workflows on your own kubernetes cluster', usage='''byok8s -w <workflow> -p <parameters> [<target>]
descr = ''
usg = '''byok8s [--FLAGS] <workflowfile> <paramsfile> [<target>]
byok8s: run snakemake workflows on your own kubernetes cluster, using the given workflow name & parameters file.
byok8s: run snakemake workflows on your own kubernetes
cluster, using the given workflow name & parameters file.
''')
byok8s requires an S3 bucket be used for file I/O. Set
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars.
'''
parser = argparse.ArgumentParser(
prog = _program,
description=descr,
usage = usg
)
parser.add_argument('workflowfile')
parser.add_argument('paramsfile')
parser.add_argument('-k', '--kubernetes-namespace')
parser.add_argument('-n', '--dry-run', action='store_true')
parser.add_argument('-f', '--force', action='store_true')
parser.add_argument('-k', '--k8s-namespace',default='default', help='Namespace of Kubernetes cluster, if not "default"')
parser.add_argument('-s', '--snakefile', default='Snakefile', help='Relative path to Snakemake Snakefile, if not "Snakefile"')
parser.add_argument('-b', '--s3-bucket', help='Name of S3 bucket to use for Snakemake file I/O (REQUIRED)')
parser.add_argument('-n', '--dry-run', action='store_true', help='Do a dry run of the workflow commands (no commands executed)')
parser.add_argument('-f', '--force', action='store_true', help='Force Snakemake rules to be re-run')
# NOTE: You MUST use S3 buckets, GCS buckets are not supported.
# That's because GCP requires credentials to be stored in a file,
# and we can only pass environment variables into k8s containers.
args = parser.parse_args(sysargs)
# first, find the Snakefile
snakefile_this = os.path.join(thisdir,"Snakefile")
if os.path.exists(snakefile_this):
snakefile = snakefile_this
# find the Snakefile
s1 = os.path.join(cwd,args.snakefile)
if os.path.isfile(s1):
# user has provided a relative path
# to a Snakefile. top priority.
snakefile = os.path.join(cwd,args.snakefile)
else:
msg = 'Error: cannot find Snakefile at any of the following locations:\n'
msg += '{}\n'.format(snakefile_this)
msg = 'Error: cannot find Snakefile at {}\n'.format(s1)
sys.stderr.write(msg)
sys.exit(-1)
# next, find the workflow config file
workflowfile = None
# find the workflow config file
w1 = os.path.join(cwd,args.workflowfile)
w2 = os.path.join(cwd,args.workflowfile+'.json')
# NOTE:
# handling yaml would be nice
if os.path.exists(w1) and not os.path.isdir(w1):
# TODO: yaml
if os.path.isfile(w1):
# user has provided the full filename
workflowfile = w1
elif os.path.exists(w2) and not os.path.isdir(w2):
elif os.path.isfile(w2):
# user has provided the prefix of the
# json filename
workflowfile = w2
if not workflowfile:
msg = 'Error: cannot find workflowfile {} or {} '.format(w1,w2)
msg += 'in directory {}\n'.format(cwd)
else:
msg = 'Error: cannot find workflowfile (workflow configuration file) at any of the following locations:\n'
msg += ''.join('{}\n'.format(j) for j in [w1,w2])
sys.stderr.write(msg)
sys.exit(-1)
# next, find the workflow params file
paramsfile = None
# find the workflow params file
p1 = os.path.join(cwd,args.paramsfile)
p2 = os.path.join(cwd,args.paramsfile+'.json')
if os.path.exists(p1) and not os.path.isdir(p1):
# TODO: yaml
if os.path.isfile(p1):
paramsfile = p1
elif os.path.exists(p2) and not os.path.isdir(p2):
elif os.path.isfile(p2):
paramsfile = p2
if not paramsfile:
msg = 'Error: cannot find paramsfile {} or {} '.format(p1,p2)
msg += 'in directory {}\n'.format(cwd)
else:
msg = 'Error: cannot find paramsfile (workflow parameters file) at any of the following locations:\n'
msg += ''.join('{}\n'.format(j) for j in [p1,p2])
sys.stderr.write(msg)
sys.exit(-1)
with open(workflowfile, 'rt') as fp:
with open(paramsfile,'r') as f:
config = json.load(f)
with open(workflowfile, 'r') as fp:
workflow_info = json.load(fp)
# get the kubernetes namespace
kube_ns = 'default'
if args.kubernetes_namespace is not None and len(args.kubernetes_namespace)>0:
kube_ns = args.kubernetes_namespace
if args.k8s_namespace is not None and len(args.k8s_namespace)>0:
kube_ns = args.k8s_namespace
# verify the user has set the AWS env variables
if not (os.environ.get('AWS_ACCESS_KEY_ID') and os.environ.get('AWS_SECRET_ACCESS_KEY')):
msg = 'Error: the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be set to allow the k8s cluster to access an S3 bucket for i/o.'
sys.stderr.write(msg)
sys.exit(-1)
# verify the user has provided a bucket name
if not args.s3_bucket:
msg = 'Error: no S3 bucket specified with --s3-bucket. This must be set to allow the k8s cluster to access an S3 bucket for i/o.'
sys.stderr.write(msg)
sys.exit(-1)
else:
mah_bukkit = args.s3_bucket
target = workflow_info['workflow_target']
config = dict()
print('--------')
print('details!')
@@ -93,14 +128,22 @@ byok8s: run snakemake workflows on your own kubernetes cluster, using the given
print('\tk8s namespace: {}'.format(kube_ns))
print('--------')
# Note: we comment out configfile=paramsfile below,
# because we have problems passing files into k8s clusters.
# run byok8s!!
status = snakemake.snakemake(snakefile, configfile=paramsfile,
status = snakemake.snakemake(snakefile,
#configfile=paramsfile,
assume_shared_fs=False,
default_remote_provider='S3',
default_remote_prefix=mah_bukkit,
kubernetes_envvars=['AWS_ACCESS_KEY_ID','AWS_SECRET_ACCESS_KEY'],
targets=[target],
printshellcmds=True,
verbose = True,
dryrun=args.dry_run,
forceall=args.force,
#kubernetes=kube_ns,
kubernetes=kube_ns,
config=config)
if status: # translate "success" into shell exit code of 0
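The workflowfile/paramsfile lookup in the diff above applies one rule twice: accept either the full filename or its prefix with the `.json` extension left off, resolved relative to the working directory. A simplified sketch of that rule as a standalone helper (the name `resolve_config` is illustrative, not from the source; error reporting is reduced to returning `None`):

```python
import os

def resolve_config(name, cwd):
    """Resolve a workflow/params config file: accept the full filename,
    or its prefix with the .json extension left off."""
    full = os.path.join(cwd, name)
    with_ext = os.path.join(cwd, name + '.json')
    if os.path.isfile(full):
        return full       # user provided the full filename
    if os.path.isfile(with_ext):
        return with_ext   # user provided the prefix of a json filename
    return None           # caller reports the error and exits
```

Using `os.path.isfile` covers both checks in the original (`os.path.exists` plus `not os.path.isdir`) in one call.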

kubernetes_aws.md Normal file

@@ -0,0 +1,6 @@
# Kubernetes on AWS
## Elastic Container Service
## Quickstart

kubernetes_dok.md Normal file

@@ -0,0 +1,11 @@
# Kubernetes on Digital Ocean
## Digital Ocean Kubernetes
(Use web interface to set up a Kubernetes cluster,
then use `kubectl` to connect with Digital Ocean
via Digital Ocean credentials.)
## Quickstart
[link](https://www.digitalocean.com/docs/kubernetes/how-to/connect-with-kubectl/)

kubernetes_gcp.md Normal file

@@ -0,0 +1,7 @@
# Kubernetes on Google Cloud Platform
## Google Container Engine
## Quickstart

kubernetes_minikube.md Normal file

@@ -0,0 +1,6 @@
# Minikube on AWS EC2 Nodes
## Quickstart

requirements.txt

@@ -1,2 +1,4 @@
snakemake>=5.4.0
python-kubernetes
kubernetes
moto
boto3

test/Readme.md Normal file

@@ -0,0 +1,60 @@
# 2019-snakemake-byok8s tests
This guide assumes you have minikube installed. (See `../scripts/` directory...)
We will need to fix a problem with a DNS setting in Kubernetes if we are on
an AWS EC2 node, so we'll walk through how to do that first.
Then we'll cover how to start a Kubernetes cluster and run a simple test.
## Fix k8s DNS problem
If you are running on EC2, you will have
to fix the DNS settings inside the container
by patching the `kube-dns` container that
runs as part of Kubernetes.
Apply the DNS fix to the container,
```
kubectl apply -f fixcoredns.yml
```
(If you are using an older version of minikube + kubernetes
that uses kube-dns, use `fixkubedns.yml` instead.)
## Start (restart) cluster
If you don't already have a Kubernetes cluster running,
start one with minikube:
```
minikube start
# or, if on ec2,
sudo minikube start
```
If you have a Kubernetes pod currently running,
you can delete all of the kube-system pods, and
they will automatically respawn, including the
(now-fixed) kube-dns container:
```
kubectl delete --all pods --namespace kube-system
```
## Running tests
Now that DNS is fixed, the host and container can
properly communicate, which is required for Kubernetes
to return files it has created.

test/Snakefile Normal file

@@ -0,0 +1,93 @@
name = config['name']
rule rulename1:
input:
"alpha.txt"
rule target1:
output:
"alpha.txt"
shell:
"echo alpha {name} > {output}"
rule target2:
output:
"gamma.txt"
shell:
"echo gamma {name} > {output}"
# A somewhat contrived workflow:
#
# zetaA workflow
#
# +---- (sleepy process) -- (sleepy process) -- (sleepy process) --+
# | |
# target3 <---+ +---<----
# | |
# +-----------( sleepy process ) ------ ( sleepy process ) --------+
#
# zetaB workflow
rule target3:
input:
"zetaA.txt", "zetaB.txt"
output:
"zeta.txt"
shell:
"cat {input[0]} {input[1]} > {output}"
rule target3sleepyA1:
output:
touch(".zetaA1")
shell:
"""
sleep 3s
echo zeta_A1 {name} > zetaA.txt
"""
rule target3sleepyA2:
input:
".zetaA1"
output:
touch(".zetaA2")
shell:
"""
sleep 3s
echo zeta_A2 {name} >> zetaA.txt
rm -f .zetaA1
"""
rule target3sleepyA3:
input:
".zetaA2"
output:
"zetaA.txt"
shell:
"""
sleep 3s
echo zeta_A3 {name} >> {output}
rm -f .zetaA2
"""
rule target3sleepyB1:
output:
touch(".zetaB1")
shell:
"""
sleep 4s
echo zeta_B1 {name} > zetaB.txt
"""
rule target3sleepyB2:
input:
".zetaB1"
output:
"zetaB.txt"
shell:
"""
sleep 4s
echo zeta_B2 {name} >> {output}
rm -f .zetaB1
"""

test/fixcoredns.yml Normal file

@@ -0,0 +1,22 @@
kind: ConfigMap
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
upstream 8.8.8.8 8.8.4.4
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
proxy . 8.8.8.8 8.8.4.4
cache 30
reload
}
metadata:
creationTimestamp: 2019-01-25T22:55:15Z
name: coredns
namespace: kube-system
#resourceVersion: "198"
#selfLink: /api/v1/namespaces/kube-system/configmaps/coredns

test/fixkubedns.yml Normal file

@@ -0,0 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
upstreamNameservers: |-
["8.8.8.8", "8.8.4.4"]


@@ -1,76 +0,0 @@
from unittest import TestCase
from subprocess import call, Popen, PIPE
import os
import shutil, tempfile
from os.path import isdir, join
"""
test banana
this test will run bananas with the test
config and params provided in the test dir.
this test will also show how to run tests where
failure is expected (i.e., checking that we handle
invalid parameters).
each test has a unittest TestCase defined.
pytest will automatically find these tests.
"""
class TestBananas(TestCase):
"""
simple bananas test class
This uses the subprocess PIPE var
to capture system input and output,
since we are running bananas from the
command line directly using subprocess.
"""
@classmethod
def setUpClass(self):
"""
set up a bananas workflow test.
we are using the existing test/ dir
as our working dir, so no setup to do.
if we were expecting the user to provide
a Snakefile, this is where we would set
up a test Snakefile.
"""
pass
def test_hello(self):
"""
test hello workflow
"""
command_prefix = ['bananas','workflow-hello']
params = ['params-amy','params-beth']
pwd = os.path.abspath(os.path.dirname(__file__))
for param in params:
command = command_prefix + [param]
p = Popen(command, cwd=pwd, stdout=PIPE, stderr=PIPE).communicate()
p_out = p[0].decode('utf-8').strip()
p_err = p[1].decode('utf-8').strip()
self.assertIn('details',p_out)
# clean up
call(['rm','-f','hello.txt'])
@classmethod
def tearDownClass(self):
"""
clean up after the tests
"""
pass

test/test_byok8s.py Normal file

@@ -0,0 +1,66 @@
from unittest import TestCase
from subprocess import call, Popen, PIPE
import os
import shutil, tempfile
from os.path import isdir, join
"""
test byok8s
This tests the byok8s command line utility,
and assumes you have already set up your
k8s cluster using e.g. minikube.
"""
class TestByok8s(TestCase):
"""
simple byok8s test class
This uses the subprocess PIPE var
to capture system input and output,
since we are running byok8s from the
command line directly using subprocess.
"""
@classmethod
def setUpClass(self):
"""
set up a byok8s workflow test.
"""
# verify that a kubernetes cluster is running
pass
def test_alpha(self):
"""
test alpha workflow
"""
workflows = ['workflow-alpha','workflow-gamma','workflow-zeta']
params = ['params-red','params-blue']
pwd = os.path.abspath(os.path.dirname(__file__))
for workflow in workflows:
for param in params:
command = ['byok8s',workflow,param]
p = Popen(command, cwd=pwd, stdout=PIPE, stderr=PIPE).communicate()
p_out = p[0].decode('utf-8').strip()
p_err = p[1].decode('utf-8').strip()
self.assertIn('details',p_out)
# clean up
call(['rm','-f','*.txt'])
@classmethod
def tearDownClass(self):
"""
clean up after the tests
"""
pass
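The test loop above shells out to the CLI with `Popen` and decodes the pipes by hand; the same capture-and-assert pattern is shorter with `subprocess.run`. A sketch, using `echo` as a stand-in for `byok8s` since the real invocation needs a live k8s cluster:

```python
import subprocess

def run_cli(command, cwd=None):
    """Run a command line tool, returning (stdout, stderr) as stripped text."""
    result = subprocess.run(command, cwd=cwd, capture_output=True, text=True)
    return result.stdout.strip(), result.stderr.strip()

# Stand-in invocation; the real tests would pass ['byok8s', workflow, param].
out, err = run_cli(['echo', 'details'])
assert 'details' in out
```

`capture_output=True, text=True` (Python 3.7+) replaces the `stdout=PIPE, stderr=PIPE` plus `.decode('utf-8')` boilerplate in the original test class.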

test/workflow-zeta.json Normal file

@@ -0,0 +1,3 @@
{
"workflow_target": "target3"
}