Compare commits
23 commits: old_master ... master
Commits (SHA1; author and date columns were empty):

1cec6bff4c
d011abda06
0a136a0532
a467bb1009
1d96211bda
684bdac1b6
8f15227287
038b3adde8
79a766aa51
f0ce583548
994eab3bd3
a12993b135
ec4140cb22
dd3f177982
dc97be0765
ca3a72e49d
d7503c7cdd
85b4bcb924
0b6524db77
213408e524
fb26c58efc
ea4711b3ba
dac2ecce7e
.gitignore (vendored, new file, +2)
@@ -0,0 +1,2 @@
.terraform
terraform.tfstate*
.gitmodules (vendored, new file, +0)
About.md (deleted, -31)
@@ -1,31 +0,0 @@
# More About dahak-boto

## What is it?

The intention behind dahak-boto is to provide a push-button
solution to running workflows. Automating the workflows and
removing the user and SSH from the process of running workflows
makes it possible to automate testing and allows analysts to
focus on work that matters - high-level monitoring and parameter
studies - instead of low-level details like maintaining a spreadsheet
of which instances are running which cases.

## How boto3 works

To interface with the AWS API, you use the boto library.
The boto library provides various objects and methods.
Many of the methods correspond to a general class of requests,
e.g., you have an object to represent EC2 and methods to
represent actions like getting a list of all instances,
or getting a network given a VPC ID.

Most of the requests are highly customizable and accept
complicated JSON inputs. This can make boto challenging to use.

## What dahak-boto does

dahak-boto is intended to automate dahak workflows,
which can be run using a single subnet architecture.
A virtual private cloud (VPC) network is set up to
allow AWS nodes to talk to one another.
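The "complicated JSON inputs" point can be sketched with a small example. The filter structure below follows the shape EC2's `describe_instances` accepts; the VPC ID is made up for illustration, and no real AWS call is made here.

```python
import json

# The Filters argument to EC2's describe_instances is a list of
# {"Name": ..., "Values": [...]} dicts; the VPC ID here is made up.
filters = [
    {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},
    {"Name": "instance-state-name", "Values": ["running"]},
]

# With a real boto3 session this would be passed along as, e.g.:
#   ec2 = boto3.Session(region_name="us-west-1").client("ec2")
#   ec2.describe_instances(Filters=filters)
print(json.dumps(filters, indent=2))
```

Even this tiny request already nests lists inside dicts inside lists, which is what makes larger boto requests hard to write by hand.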
@@ -1,26 +0,0 @@
# How Bespin Works

Bespin is a command line utility
built around Python's argparse module.

Argparse provides a nice suite
of Python tools for extracting
command line arguments and
processing them.

Dahak has four principal tasks:
* Build a virtual private cloud (VPC) network
* Create a security group to control access to the network
* Create a spy node and add it to the VPC
* Create one or more yeti nodes and add them to the VPC

bespin provides subcommands for each task.
The vpc, spy, and yeti subcommand options
all look pretty similar.
The security subcommand options are
different, as the security group is
created or destroyed with the VPC
(not by the user), but the user
must still modify the security group
to whitelist IPs and open/close ports.
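The subcommand structure described above can be sketched with argparse's two-stage parse (a minimal sketch; the real bespin script does this inside a dispatch class, and the command names here are just illustrative):

```python
import argparse

# Minimal sketch of two-level subcommand parsing: parse only the
# top-level command first, then hand the remaining args to a second
# parser dedicated to that command.
def dispatch(argv):
    parser = argparse.ArgumentParser(usage="bespin <command> [<args>]")
    parser.add_argument("command", help="Subcommand to run")
    args = parser.parse_args(argv[:1])      # only the first token

    sub = argparse.ArgumentParser(usage="bespin %s <subcommand>" % args.command)
    sub.add_argument("subcommand")
    subargs = sub.parse_args(argv[1:])      # the rest of the arguments
    return args.command, subargs.subcommand

print(dispatch(["vpc", "build"]))
```

Parsing in two stages is what lets each subcommand define its own flags without the top-level parser rejecting them.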
LICENSE (deleted, -29)
@@ -1,29 +0,0 @@
BSD 3-Clause License

Copyright (c) 2018, Chaz Reid
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Networking.md (deleted, -120)
@@ -1,120 +0,0 @@
# The Network

We use a virtual private cloud (VPC) to allow AWS compute nodes
to talk to one another.

All VPCs define a network, but that is still too wide,
so we have to narrow down the definition and define a subnet.
We can add as many subnets as IPv4 will allow us, but here
we just use one subnet. All our AWS nodes will live on the
same virtual private cloud subnet.

We add one monitoring node (dahak spy) and one or more
worker nodes (dahak yeti) to the VPC subnet. All nodes
run netdata and the monitoring node uses Prometheus to
collect data across the VPC subnet.

Machines on the VPC have the ability to reach the internet,
but are not accessible to the public unless given a public
IP address. Various services can be configured to listen for
traffic to a particular IP address (by binding the service
to a particular IP address), thus securing databases
or web servers from access by any traffic not originating
from the private network.
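Binding a service to a particular IP address, as described above, looks like this in a minimal sketch (using the loopback address as a stand-in for a node's private 10.x address):

```python
import socket

# Bind a listening socket to one specific address; only traffic
# addressed to 127.0.0.1 can reach it, just as a service bound to a
# node's private address only sees traffic from the private network.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()
print("listening on %s:%d" % (host, port))
server.close()
```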
# Creating the Network

## Diagram

The following diagram shows a schematic of the network architecture:

```
+--------------------------------------------------------------------------+
| Whole Internet                                                           |
|                                                                          |
|                                                                          |
| +--------------------------------------------------------------------+  |
| | Amazon                                                             |  |
| |                                                                    |  |
| |                                                                    |  |
| |                                                                    |  |
| | +-----------------------------------------------------+           |  |
| | | Virtual Private Cloud: Dahak WAN                    |           |  |
| | |                                       +-----+-----+ |           |  |
| | | Network IP Block: 10.117.0.0/16       | Internet  | |           |  |
| | | 10.117.*.*                            | Gateway   | |           |  |
| | |                                       +-----+-----+ |           |  |
| | | +----------------------------------+        |       |           |  |
| | | | VPC Subnet: Dahak LAN            |        |       |           |  |
| | | |                                  |  +-----+-----+ |           |  |
| | | | Subnet IP Block: 10.117.0.0/24   |  | Routing   | |           |  |
| | | | 10.117.0.*                       |  | Table     | |           |  |
| | | |                                  |  +-----+-----+ |           |  |
| | | |                                  |        |       |           |  |
| | | +----------------------------------+        |       |           |  |
| | |                                       +-----+-----+ |           |  |
| | |                                       |   DHCP    | |           |  |
| | |                                       +-----+-----+ |           |  |
| | |                                             |       |           |  |
| | +-----------------------------------------------------+           |  |
| |                                                                    |  |
| +--------------------------------------------------------------------+  |
|                                                                          |
+--------------------------------------------------------------------------+
```
## IP Address Blocks

The IP address schema for the network is `10.X.0.0/16`, indicating
any IP address of the form `10.X.*.*` (where X is a number between 2 and 253).
For example, `10.117.0.0/16` would cover `10.117.*.*`.

The subnet IP address schema is `10.X.Y.0/24`, indicating
an IP address of the form `10.X.Y.*`. X and Y are any numbers
between 2 and 253.
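These blocks can be checked with Python's standard `ipaddress` module, using the `10.117.0.0/16` example above:

```python
import ipaddress

# The /16 network covers 10.117.*.*; the /24 subnet covers 10.117.0.*
network = ipaddress.ip_network("10.117.0.0/16")
subnet = ipaddress.ip_network("10.117.0.0/24")

assert subnet.subnet_of(network)                        # /24 lies inside the /16
assert ipaddress.ip_address("10.117.0.5") in subnet     # 10.117.0.* is in both
assert ipaddress.ip_address("10.117.4.5") in network    # 10.117.4.* is only in /16
assert ipaddress.ip_address("10.117.4.5") not in subnet
print(network.num_addresses, subnet.num_addresses)      # 65536 256
```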
## Internet Gateway

For nodes on the network to be able to reach the internet,
an internet gateway must be added to the VPC. This gives
requests that go out to the internet and come back to the
VPC a way to be rerouted internally to the originating node.

This is required for the network we are setting up.

## Routing Table

The routing table defines how computers on the VPC can find
one another and the internet gateway.

This is required for the network we are setting up.

## DHCP (and DNS)

DHCP and DNS have to do with getting directions and finding things
in IP space. DHCP controls how IP addresses are handed out on a
network and how to route traffic to nodes on the network.
DNS has to do with how to turn a web address into an IP address
and get directions to that IP address. Amazon offers a
DHCP+DNS service (or you can roll your own, if you're into
that kind of thing).

This is required for the network we are setting up.
## Adding Nodes

Now, to add nodes, we just add them to the subnet.
They will all be assigned IP addresses of `10.X.0.*`.
For example,

```
10.117.0.1      gateway
10.117.0.100    spy
10.117.0.101    yeti #1
10.117.0.102    yeti #2
10.117.0.103    yeti #3
10.117.0.104    yeti #4
```
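The example addresses above are just hosts on the /24 subnet, and they can be enumerated with the `ipaddress` module:

```python
import ipaddress

# hosts() yields the usable addresses 10.117.0.1 .. 10.117.0.254,
# skipping the network (.0) and broadcast (.255) addresses.
subnet = ipaddress.ip_network("10.117.0.0/24")
hosts = list(subnet.hosts())
print(hosts[0], hosts[99])   # 10.117.0.1 (the gateway), 10.117.0.100 (spy)
```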
README.md (modified, -166 +17)
@@ -1,166 +1,17 @@

New contents:

# bespin

bespin is a repository with
scripts for allocating cloud resources
for automated testing of dahak workflows.

See [charlesreid1.github.io/dahak-bespin](https://charlesreid1.github.io/dahak-bespin).

Inspiration: [terraform-aws-consul](https://github.com/hashicorp/terraform-aws-consul)

Terraform module organization:

* root: This folder shows an example of Terraform code that uses a terraform module to deploy a cluster in AWS.
* module: This folder contains the reusable code for this Module
* examples: This folder contains examples of how to use the module.
* test: Automated tests for the module and examples.

Old contents (removed):

# dahak-bespin

This repo contains scripts that use boto3, the Python API
provided by AWS, to request AWS resources for running
dahak workflows.

## About dahak-bespin

See [About.md](/About.md) for more about dahak-bespin.
The short version: dahak-bespin automates allocating
the infrastructure needed to run (and test) dahak workflows.

## Networking Infrastructure

See [Networking.md](/Networking.md) for more about the networking
details. Short version: one network with one subnet.

## Example Usage
A typical session using bespin
might start with the user asking
for some help. Pass the `--help` flag
or the `help` subcommand to bespin:

```
$ ./bespin --help

 ___  ____  __  ___  _  _
| |_) | |_ ( (` | |_) | | | |\ |
|_|_) |_|__ _)_) |_|  |_| |_| \|

cloud infrastructure tool for dahak

usage: bespin <command> [<args>]

The most commonly used commands are:
   vpc        Make a VPC for all the dahak nodes
   security   Make a security group for nodes on the VPC
   spy        Make a spy monitoring node
   yeti       Make a yeti worker node

dahak-bespin uses boto3 to wrangle nodes in the cloud and run dahak workflows

positional arguments:
  command     Subcommand to run

optional arguments:
  -h, --help  show this help message and exit
```

This will print out usage information.
The user should start with a VPC:

```
bespin vpc          # get help
bespin vpc build    # build vpc
bespin vpc info     # print info
```

Here is the output of the first command:

```
$ ./bespin vpc

 ___  ____  __  ___  _  _
| |_) | |_ ( (` | |_) | | | |\ |
|_|_) |_|__ _)_) |_|  |_| |_| \|

cloud infrastructure tool for dahak

usage: bespin vpc <vpc_subcommand>

The vpc subcommands available are:
   vpc build      Build the VPC
   vpc destroy    Tear down the VPC
   vpc info       Print info about the VPC
   vpc stash      Print location of VPC stash files

bespin: error: the following arguments are required: vpc_command
```
Next, the user should modify the security group
that was created for the VPC.

The user must whitelist the IP addresses from which
the network will be accessed.
IP addresses should be specified in CIDR notation.

The user may also whitelist ports.
All nodes on the network have whitelisted ports open.
All ports are open only to the VPC and to whitelisted
IP addresses.

```
bespin security
bespin security port add 9999          # add a port to open
bespin security ip add "8.8.8.8/32"    # add an IP to whitelist

bespin security port rm 9090           # close a port
bespin security ip rm "8.8.8.8/32"     # removes an IP from whitelist (if present)
```

(NOTE: security subcommand not yet implemented.)
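Whitelist entries in CIDR notation can be validated with the standard `ipaddress` module (a sketch; bespin's own handling of these strings is not shown in this diff):

```python
import ipaddress

def valid_cidr(entry):
    """Return True if entry is a valid CIDR block like "8.8.8.8/32"."""
    try:
        ipaddress.ip_network(entry)
        return True
    except ValueError:
        return False

print(valid_cidr("8.8.8.8/32"), valid_cidr("8.8.8.8/33"))   # True False
```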
Now we are ready to add nodes to the VPC.
Start by deploying a spy node:

```
bespin spy          # get help
bespin spy build    # build spy node
bespin spy info     # get info about spy node
```

example output:

```
$ ./bespin spy

 ___  ____  __  ___  _  _
| |_) | |_ ( (` | |_) | | | |\ |
|_|_) |_|__ _)_) |_|  |_| |_| \|

cloud infrastructure tool for dahak

usage: bespin spy <spy_subcommand>

The spy subcommands available are:
   spy build      Build the spy node
   spy destroy    Tear down the spy node
   spy info       Print info about the spy node
   spy stash      Print location of spy stash files

bespin: error: the following arguments are required: spy_command
```

Finally, we can deploy a yeti:

```
bespin yeti          # get help
bespin yeti build    # build yeti node
bespin yeti info     # get info about (all) yeti nodes
```

Output from the yeti command:

```
$ ./bespin yeti

 ___  ____  __  ___  _  _
| |_) | |_ ( (` | |_) | | | |\ |
|_|_) |_|__ _)_) |_|  |_| |_| \|

cloud infrastructure tool for dahak

usage: bespin vpc <vpc_subcommand>

The vpc subcommands available are:
   vpc build      Build the VPC
   vpc destroy    Tear down the VPC
   vpc info       Print info about the VPC
   vpc stash      Print location of VPC stash files

bespin: error: the following arguments are required: yeti_command
```

## How Bespin Works

See [HowItWorks.md](/HowItWorks.md)
for a deeper dive into how bespin works.
bespin (deleted, -406)
@@ -1,406 +0,0 @@
#!/usr/bin/env python
import argparse
import subprocess
import os, re, sys
from dahak_vpc import DahakVPC
from dahak_spy import DahakSpy
from dahak_yeti import DahakYeti
import long_strings
from random_labels import random_ip, random_label


"""
 ___  ____  __  ___  _  _
| |_) | |_ ( (` | |_) | | | |\ |
|_|_) |_|__ _)_) |_|  |_| |_| \|

cloud infrastructure tool for dahak
"""


class Bespin(object):
    """
    Hat tip:
    https://chase-seibert.github.io/blog/2014/03/21/python-multilevel-argparse.html
    """
    # Stash files are where bespin stores information
    # about resources it is supposed to be managing
    vpc_stashfile = ".vpc"
    security_stashfile = ".security"
    spy_stashfile = ".spy"
    yeti_stashfile = ".yeti"

    def __init__(self):
        self.logo = long_strings.logo
        print(self.logo)

        self.has_vpc = self.check_for_vpc()
        self.has_security_group = self.check_for_security()
        self.has_spy = self.check_for_spy()
        self.has_yeti = self.check_for_yeti()

        parser = argparse.ArgumentParser(
                description = long_strings.bespin_description,
                usage = long_strings.bespin_usage)

        parser.add_argument('command', help='Subcommand to run')

        # parse_args defaults to [1:] for args, but you need to
        # exclude the rest of the args too, or validation will fail
        args = parser.parse_args(sys.argv[1:2])
        if not hasattr(self, args.command):
            print('Unrecognized command: %s\n'%(args.command))
            parser.print_help()
            exit(1)

        # use dispatch pattern to invoke method with same name
        getattr(self, args.command)()
    ####################################################
    # Utilities

    def confirm(self, msg):
        """
        Confirm with the user that the action
        we are about to take (described in msg)
        is okay to carry out.
        """
        print(msg)
        ui = input("Okay to proceed? (y/n): ")
        if(ui.lower()!='y' and ui.lower()!='yes'):
            print("Script will not proceed.")
            exit()


    def check_for_vpc(self):
        """
        Check for a vpc stash file.
        If it is present, bespin is already managing a vpc.
        """
        if os.path.isfile(self.vpc_stashfile):
            return True
        else:
            return False


    def check_for_security(self):
        """
        Check for a security group stash file.
        If it is present, bespin is already managing a security group.
        """
        if os.path.isfile(self.security_stashfile):
            return True
        else:
            return False


    def check_for_spy(self):
        """
        Check for a spy node stash file.
        If it is present, bespin is already managing a spy node.
        """
        if os.path.isfile(self.spy_stashfile):
            return True
        else:
            return False


    def check_for_yeti(self):
        """
        Does bigfoot exist?

        TODO:
        This one will need more care.
        The user should be able to spawn new yeti nodes
        even if a yeti node already exists.
        """
        if os.path.isfile(self.yeti_stashfile):
            return True
        else:
            return False
    ####################################################
    # VPC Commands

    # Fix this:
    # vpc creates a vpc parser object
    # and hands off all remaining args

    def vpc(self):
        """
        Process subcommands related to the VPC
        """
        parser = argparse.ArgumentParser(
                description = long_strings.vpc_description,
                usage = long_strings.vpc_usage)

        parser.add_argument('vpc_command')

        # ignore first two argvs (command and subcommand)
        args = parser.parse_args(sys.argv[2:])
        print("Received vpc command %s"%(args.vpc_command))

        # offer to help
        if(args.vpc_command=="help"):
            parser.print_help()
            exit(1)

        # use dispatch pattern again, look for method named vpc_command
        vpc_command = "vpc_"+args.vpc_command
        if not hasattr(self, vpc_command):
            print("Unrecognized VPC command: %s\n"%(args.vpc_command))
            parser.print_help()
            exit(1)

        # now invoke the method
        getattr(self, vpc_command)()


    def vpc_build(self):
        """
        Build the VPC
        """
        if(os.path.exists(self.vpc_stashfile)):
            raise Exception("A VPC (stashfile) already exists!")
        else:
            print("argparser: building vpc")
            self.confirm("About to create a VPC and a subnet.")
            vpc = DahakVPC(self.vpc_stashfile)
            vpc.build()


    def vpc_destroy(self):
        """
        Destroy the VPC
        """
        if(os.path.exists(self.vpc_stashfile)):
            print("argparser: destroying vpc")
            vpc = DahakVPC(self.vpc_stashfile)
            vpc.destroy()
        else:
            raise Exception("No VPC exists! Try creating one with the command:\n\t\tbespin vpc build")


    def vpc_info(self):
        """
        Get information about the VPC
        """
        if(os.path.exists(self.vpc_stashfile)):
            print("argparser: getting vpc info")
            subprocess.call(['cat',self.vpc_stashfile])
        else:
            raise Exception("No VPC exists.")


    def vpc_stash(self):
        """
        Print the location of stash files
        for VPC info.
        """
        print("argparser: showing vpc stashfile location")
        print(self.vpc_stashfile)
    ####################################################
    # Security Group Commands

    def security(self):
        """
        Process subcommands related to the security group
        """
        parser = argparse.ArgumentParser(
                description = long_strings.spy_description,
                usage = long_strings.spy_usage)

        parser.add_argument('security_command')

        args = parser.parse_args(sys.argv[2:])
        print("Received security command %s"%(args.security_command))

        if(args.security_command=="help"):
            parser.print_help()
            exit(1)

        security_command = "security_"+args.security_command
        if not hasattr(self, security_command):
            print("Unrecognized security command: %s\n"%(args.security_command))
            parser.print_help()
            exit(1)

        getattr(self, security_command)()


    def security_build(self):
        """
        Build the security group
        """
        if(self.has_security_group):
            raise Exception("A security group already exists!")
        else:
            print("argparser: building security group")


    def security_destroy(self):
        """
        Destroy the security group
        """
        if(self.has_security_group):
            print("argparser: destroying security group")
        else:
            raise Exception("No security group exists! Try creating one with the command:\n\t\tbespin security build")


    def security_info(self):
        """
        Get information about the security group
        """
        if(self.has_security_group):
            print("argparser: getting security group info")
        else:
            raise Exception("No security group exists.")


    def security_stash(self):
        """
        Print the location of stash files
        for security group info.
        """
        print("argparser: showing security group stash")
    ####################################################
    # Spy Node Commands

    def spy(self):
        """
        Process subcommands related to spy
        """
        parser = argparse.ArgumentParser(
                description = long_strings.spy_description,
                usage = long_strings.spy_usage)

        parser.add_argument('spy_command')

        args = parser.parse_args(sys.argv[2:])
        print("Received spy command %s"%(args.spy_command))

        if(args.spy_command=="help"):
            parser.print_help()
            exit(1)

        spy_command = "spy_"+args.spy_command
        if not hasattr(self, spy_command):
            print("Unrecognized spy command: %s\n"%(args.spy_command))
            parser.print_help()
            exit(1)

        getattr(self, spy_command)()


    def spy_build(self):
        """
        Build the spy node
        """
        if(self.has_spy):
            raise Exception("A spy node already exists!")
        else:
            print("argparser: building spy node")


    def spy_destroy(self):
        """
        Destroy the spy node
        """
        if(self.has_spy):
            print("argparser: destroying spy node")
        else:
            raise Exception("No spy node exists! Try creating one with the command:\n\t\tbespin spy build")


    def spy_info(self):
        """
        Get information about the spy node
        """
        if(self.has_spy):
            print("argparser: getting spy node info")
        else:
            raise Exception("No spy node exists.")


    def spy_stash(self):
        """
        Print the location of stash files
        for spy node info.
        """
        print("argparser: showing spy node stash")
    ####################################################
    # Yeti Node Commands

    def yeti(self):
        """
        Process subcommands related to yeti node
        """
        parser = argparse.ArgumentParser(
                description = long_strings.yeti_description,
                usage = long_strings.vpc_usage)

        parser.add_argument('yeti_command')

        args = parser.parse_args(sys.argv[2:])
        print("Received yeti command %s"%(args.yeti_command))

        if(args.yeti_command=="help"):
            parser.print_help()
            exit(1)

        yeti_command = "yeti_"+args.yeti_command
        if not hasattr(self, yeti_command):
            print("Unrecognized yeti command: %s\n"%(args.yeti_command))
            parser.print_help()
            exit(1)

        getattr(self, yeti_command)()


    def yeti_build(self):
        """
        Build the yeti node
        """
        if(self.has_yeti):
            raise Exception("A yeti node already exists!")
        else:
            print("argparser: building yeti node")


    def yeti_destroy(self):
        """
        Destroy the yeti node
        """
        if(self.has_yeti):
            print("argparser: destroying yeti node")
        else:
            raise Exception("No yeti node exists! Try creating one with the command:\n\t\tbespin yeti build")


    def yeti_info(self):
        """
        Get information about the yeti node
        """
        if(self.has_yeti):
            print("argparser: getting yeti node info")
        else:
            raise Exception("No yeti node exists.")


    def yeti_stash(self):
        """
        Print the location of stash files
        for yeti node info.
        """
        print("argparser: showing yeti node stash")


if __name__ == '__main__':
    Bespin()
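The stash-file pattern the script relies on (the mere existence of a dotfile marks a resource as managed) can be exercised in isolation; the file name and contents below are made up for illustration:

```python
import os
import tempfile

def check_for_stash(path):
    # bespin-style check: if the stash file exists, the corresponding
    # cloud resource is considered already managed
    return os.path.isfile(path)

with tempfile.TemporaryDirectory() as d:
    stash = os.path.join(d, ".vpc")
    assert not check_for_stash(stash)    # nothing managed yet
    with open(stash, "w") as f:
        f.write("vpc-id: vpc-12345\n")   # made-up stash contents
    assert check_for_stash(stash)        # a build would now refuse to proceed
```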
cloud_init/spy.sh (new file, +2)
@@ -0,0 +1,2 @@
#!/bin/bash
bash <( curl https://raw.githubusercontent.com/charlesreid1/dahak-spy/master/cloud_init/cloud_init.sh )
cloud_init/yeti.sh (new file, +2)
@@ -0,0 +1,2 @@
#!/bin/bash
bash <( curl https://raw.githubusercontent.com/charlesreid1/dahak-yeti/master/cloud_init/cloud_init.sh )
dahak_aws.py (deleted, -23)
@@ -1,23 +0,0 @@
import boto3
from botocore.exceptions import ClientError


"""
AWS Object (Base Class)

Define a base class that has a reference
to an AWS session, resource, and client.
"""


class AWSObject(object):
    """
    AWSObject defines a session, resource, and client
    that are used to make API calls to AWS.
    """
    def __init__(self):
        print("initializing aws object")
        self.session = boto3.Session(region_name="us-west-1")
        self.resource = self.session.resource('ec2')    # high level interface
        self.client = self.session.client('ec2')        # low level interface
@@ -1,38 +0,0 @@
import boto3
import collections
import string
import random
import os, re
import glob
from botocore.exceptions import ClientError
from pprint import pprint
from datetime import datetime

"""
Dahak Node Base Class

Defines behavior inherited by DahakSpy and DahakYeti.
"""

class DahakNode(object):

    def __init__(self,name):
        self.name = name
        print("initializing node %s"%(self.name))

    def build(self):
        print("building node %s..."%(self.name))
        self._build_node_net_interface()
        self._build_node()
        self._stash_node_info()
        print("done building node.")

    def _build_node_net_interface(self):
        print("building node %s network interface"%(self.name))

    def _build_node(self):
        print("building node %s"%(self.name))

    def _stash_node_info(self):
        print("stashing node %s info"%(self.name))
@@ -1,35 +0,0 @@
from dahak_aws import AWSObject

import collections
import string
import random
from pprint import pprint
from datetime import datetime


"""
Dahak Security Group

Create a security group
that allows access to the
required ports over the VPC.
"""


class DahakSecurity(AWSObject):

    def __init__(self):
        print("initializing dahak security group")

    def build(self):
        print("building security group...")
        self._build_security_group()
        self._stash_security_info()
        print("done building security group.")

    def _build_security_group(self):
        print("making security group")

    def _stash_security_info(self):
        print("stashing security info")
17	dahak_spy.py
@@ -1,17 +0,0 @@
from dahak_node import DahakNode


"""
Dahak Spy

Create a spy node for logging and monitoring.

Add it to the VPC.
"""

class DahakSpy(DahakNode):

    def __init__(self):
        DahakNode.__init__(self,"spy")
191	dahak_vpc.py
@@ -1,191 +0,0 @@
from dahak_aws import AWSObject
from random_labels import random_label, random_ip

import boto3
import collections
import subprocess
import string
import random
import json
from botocore.exceptions import ClientError
from pprint import pprint
from datetime import datetime


"""
Dahak VPC

Create a single VPC with a single subnet,
add an internet gateway, a routing table,
and DHCP+DNS services.

Existence of a stashfile indicates existence of a VPC.
The build command creates a brand-new VPC, and fails if the stashfile exists.
No load command is needed - bespin insists on creating its own infrastructure.
A delete command is needed.
"""


VPCRule = collections.namedtuple("vpc_rule", ["vpc_ip", "subnet_ip"])

class DahakVPC(AWSObject):

    # Constructor:

    def __init__(self, stashfile):
        AWSObject.__init__(self)
        print("initializing dahak vpc")
        self.stashfile = stashfile


    # Public Methods:

    def build(self):
        """
        VPC build process.
        bespin (the caller) has already checked that no stashfile exists.
        """
        print("building vpc...")
        self._build_vpc_network()
        print("done building vpc.")

    def destroy(self):
        """
        VPC destroy process.
        bespin (the caller) has already checked for the stashfile.
        """
        print("destroying vpc...")
        self._destroy_vpc_network()
        print("done destroying vpc.")


    # Private Methods:

    def _build_vpc_network(self):
        """
        Make the necessary API calls
        to create the VPC using boto.
        """
        print("making vpc network")

        self.base_ip = random_ip()
        self.label = random_label()

        print("  label = %s"%(self.label))
        print("  base_ip = %s"%(self.base_ip))

        vpc_cidr = self.base_ip.format(addr=0)+"/16"
        subnet_cidr = self.base_ip.format(addr=0)+"/24"
        vpc_label = self.label + "_vpc"

        # vpc cidr block
        # vpc subnet cidr block
        vpc_rule = VPCRule( vpc_ip = vpc_cidr,
                            subnet_ip = subnet_cidr)

        try:
            # First, create a VPC network
            vpc = self.resource.create_vpc(CidrBlock = vpc_rule.vpc_ip)

            # Enable DNS on the VPC
            response = self.client.modify_vpc_attribute(VpcId=vpc.vpc_id,
                                                        EnableDnsSupport={"Value":True})
            response = self.client.modify_vpc_attribute(VpcId=vpc.vpc_id,
                                                        EnableDnsHostnames={"Value":True})

            # Create VPC subnet
            subnet = vpc.create_subnet(CidrBlock = vpc_rule.subnet_ip,
                                       AvailabilityZone = 'us-west-1a')

            # Create a DHCP options set for the VPC to use
            # (Amazon-provided DHCP)
            dhcp_options = self.resource.create_dhcp_options(
                    DhcpConfigurations = [{
                        'Key':'domain-name-servers',
                        'Values':['AmazonProvidedDNS']
                    },
                    {
                        'Key': 'domain-name',
                        'Values': ['us-west-1.compute.internal']
                    }]
            )
            dhcp_options.associate_with_vpc(VpcId = vpc.id)

            # Create an internet gateway attached to this VPC
            gateway = self.resource.create_internet_gateway()
            gateway.attach_to_vpc(VpcId = vpc.id)

            # Create a route table and add the route
            route_table = self.client.create_route_table(VpcId = vpc.vpc_id)
            route_table_id = route_table['RouteTable']['RouteTableId']
            response = self.client.create_route( DestinationCidrBlock = '0.0.0.0/0',
                                                 RouteTableId = route_table_id,
                                                 GatewayId = gateway.internet_gateway_id )

        except ClientError as e:
            print("\n")
            print(" X"*20)
            print("FATAL ERROR")
            print("Could not create network due to error:")
            print("-"*20)
            print(e)
            print("-"*20)
            print("\n")
            # bail out; the vpc was not created
            return

        # vpc information should be saved in a stash file
        self._stash_vpc_info(vpc.id)

        print("\n")
        print("SUCCESS")
        print("Created VPC with the following information:")
        print("    VPC id: %s"%(vpc.id))
        print("    VPC label: %s"%(vpc_label))
        print("    Subnet: (%s)"%(subnet.id))
        print("\n")

    def _stash_vpc_info(self,vpcid):
        """
        Pass Amazon a VPC ID and ask it for a description,
        and store the resulting JSON in the stash file.
        """
        print("stashing vpc info")

        try:
            response = self.client.describe_vpcs(VpcIds=[vpcid])
            del response['ResponseMetadata']

            with open(self.stashfile,'w') as f:
                json.dump(response, f, indent=4, sort_keys=True)

        except ClientError as e:
            print("Could not stash VPC info due to error:")
            print(e)

    def _destroy_vpc_network(self):
        """
        Make the necessary API calls
        to destroy the VPC using boto.

        https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-dependency-error-delete-vpc/

        If you delete your Amazon VPC using the Amazon VPC console,
        all its components--such as subnets, security groups, network
        ACLs, route tables, internet gateways, VPC peering connections,
        and DHCP options--are also deleted. If you use the AWS Command
        Line Interface (AWS CLI) to delete the Amazon VPC, you must
        terminate all instances, delete all subnets, delete custom
        security groups and custom route tables, and detach any
        internet gateway in the Amazon VPC before you can delete
        the Amazon VPC.
        """
        with open(self.stashfile,'r') as f:
            vpc_info = json.load(f)

        vpc_id = vpc_info['Vpcs'][0]['VpcId']

        response = self.client.delete_vpc(VpcId = vpc_id)
        print(response)

        subprocess.call(['rm','-f',self.stashfile])
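The `random_labels` module is not part of this diff; from the call sites above (`random_ip().format(addr=0) + "/16"`), `random_ip` evidently returns a format-string template for a private 10.X.0.0 block. A hypothetical sketch of what that module might provide (the function names come from the import, but the octet range and label length are assumptions):

```python
import random
import string

def random_label(n=8):
    # Random lowercase alphanumeric label, e.g. "k3xq9f2a" (length is assumed).
    chars = string.ascii_lowercase + string.digits
    return ''.join(random.choice(chars) for _ in range(n))

def random_ip():
    # Template for a private 10.X.0.0 block; the caller fills in the final
    # octet with .format(addr=...) and appends a prefix length to get a CIDR.
    return "10.%d.0.{addr}" % (random.randint(0, 255))

# Usage mirroring _build_vpc_network above:
base_ip = random_ip()
vpc_cidr = base_ip.format(addr=0) + "/16"
subnet_cidr = base_ip.format(addr=0) + "/24"
print(vpc_cidr, subnet_cidr)
```

Keeping the template in one place means the VPC and subnet CIDRs are guaranteed to share the same 10.X prefix.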
@@ -1,15 +0,0 @@
from dahak_node import DahakNode


"""
Dahak Yeti

Create a yeti node for running workflows.

Add it to the VPC.
"""

class DahakYeti(DahakNode):

    def __init__(self):
        DahakNode.__init__(self,"yeti")
1	docs/.gitignore	vendored	Normal file
@@ -0,0 +1 @@
_build
20	docs/Makefile	Normal file
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = bespin
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
6	docs/_static/bootstrap.min.css	vendored	Normal file
File diff suppressed because one or more lines are too long
6	docs/_static/custom.css	vendored	Normal file
@@ -0,0 +1,6 @@
body {
    background-color: #efefef;
}
div.body {
    background-color: #efefef;
}
5	docs/automatedtests.md	Normal file
@@ -0,0 +1,5 @@
# Automated Tests

The ultimate goal of dahak-bespin
is to run automated tests of dahak workflows.
183	docs/conf.py	Normal file
@@ -0,0 +1,183 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/stable/config

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- Project information -----------------------------------------------------

project = 'dahak-bespin'
copyright = '2018'
author = 'DIB Lab'

# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = ''


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.githubpages',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
source_parsers = {
    '.md': 'recommonmark.parser.CommonMarkParser'
}
source_suffix = ['.rst','.md']

# The master toctree document.
master_doc = 'index'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# wow:
# https://alabaster.readthedocs.io/en/latest/customization.html

html_theme_options = {
    'github_user': 'charlesreid1',
    'github_repo': 'dahak-bespin',
    'github_button' : 'true',
    #'analytics_id' : '???',
    'fixed_sidebar' : 'true',
    'github_banner' : 'true',
    'pre_bg' : '#fff'
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}

html_context = {
    # "google_analytics_id" : 'UA-00000000-1',
    "github_base_account" : 'charlesreid1',
    "github_project" : 'dahak-taco',
}

# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'dahak-bespindoc'


# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'dahak-bespin.tex', 'bespin Documentation',
     'charles reid', 'manual'),
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'dahak-bespin', 'dahak-bespin Documentation',
     [author], 1)
]


# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'dahak-bespin', 'dahak-bespin Documentation',
     author, 'dahak-bespin', 'One line description of project.',
     'Miscellaneous'),
]


# -- Extension configuration -------------------------------------------------

def setup(app):
    app.add_stylesheet('bootstrap.min.css')
75	docs/dahakworkflows.md	Normal file
@@ -0,0 +1,75 @@
# Running Dahak Workflows

To run dahak workflows,
we use the following architecture:

```
+------------------------------------------------------------+
| AWS                                                        |
|                                                            |
|  +----------------------------------------------------+    |
|  | AWS VPC                                            |    |
|  |                                                    |    |
|  |  +------------+        +--------+                  |    |
|  |  |            | <----- | yeti1  |                  |    |
|  |  |    spy     |        +--------+                  |    |
|  |  |            |        +--------+                  |    |
|  |  |            | <----- | yeti2  |                  |    |
|  |  |            |        +--------+                  |    |
|  |  |            |        +--------+                  |    |
|  |  |            | <----- | yeti3  |                  |    |
|  |  +------------+        +--------+                  |    |
|  |                                                    |    |
|  +----------------------------------------------------+    |
|                                                            |
+------------------------------------------------------------+
```

## Dahak Infrastructure

Dahak workflows will require:

* VPC to connect nodes
* 1 spy node to monitor and log
* 1+ yeti nodes to run workflows

## Dahak Terraform Files

### VPC

The VPC will allocate an IP address space 10.X.0.0/16.

The VPC subnet will allocate an IP address space 10.X.0.0/24.

The VPC will require AWS-provided DNS/DHCP.

The VPC will require an internet gateway.

The VPC will require a routing table pointing to the gateway.

### Spy Node

The spy node will need to run the cloud init scripts
contained in [dahak-spy](https://github.com/charlesreid1/dahak-spy).

### Yeti Node

The yeti node will need to run the cloud init scripts
contained in [dahak-yeti](https://github.com/charlesreid1/dahak-yeti).
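The VPC requirements above can be sketched as a terraform config fragment (hypothetical resource names; the 10.0.0.0 CIDRs stand in for the 10.X values chosen at build time):

```
resource "aws_vpc" "dahak" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "dahak" {
  vpc_id     = "${aws_vpc.dahak.id}"
  cidr_block = "10.0.0.0/24"
}

resource "aws_internet_gateway" "dahak" {
  vpc_id = "${aws_vpc.dahak.id}"
}

resource "aws_route_table" "dahak" {
  vpc_id = "${aws_vpc.dahak.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.dahak.id}"
  }
}
```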
34	docs/index.rst	Normal file
@@ -0,0 +1,34 @@
.. _index:

================
dahak-bespin
================

dahak-bespin is a framework for allocating
cloud infrastructure to run dahak workflows.

* See source code at `charlesreid1/dahak-bespin on github
  <https://github.com/charlesreid1/dahak-bespin>`_.

* Sphinx documentation hosted at `charlesreid1.github.io/dahak-bespin
  <https://charlesreid1.github.io/dahak-bespin>`_.

* This package uses `terraform <https://www.terraform.io/>`_.

Using Bespin via Terraform
===========================

.. toctree::
    :maxdepth: 2
    :caption: Contents:

    terraformbasics
    dahakworkflows
    automatedtests


Indices and tables
==================

* :ref:`genindex`
* :ref:`search`
174	docs/terraformbasics.md	Normal file
@@ -0,0 +1,174 @@
# Terraform Basics

This covers the basics of terraform,
which is the tool we will use to
automate the deployment of infrastructure
to run dahak workflows.

## Installing Terraform

[terraform binary - link](https://www.terraform.io/downloads.html)

On a Mac:

```
brew install terraform
```

## How Terraform Works

terraform is independent of the particular cloud platform,
but in this example we'll show how to use AWS.

### Configure terraform

We define infrastructure with a config file,
with file extension `.tf`:

`example.tf`

If we leave out the AWS access and secret key,
terraform will look in `~/.aws/credentials`
on the machine running terraform
(the one launching jobs).

This requires setup beforehand
(with boto or aws-cli).

**`example.tf:`**

```
provider "aws" {
  region = "us-west-1"
}

resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
}
```

### Initializing terraform

Start by initializing terraform
and preparing it to run in your
current directory:

```
$ terraform init
```

### Requesting resources

Now, request the resources that are
specified in the `.tf` files
(terraform loads every `.tf` file
in the current directory):

```
$ terraform apply
```

This will examine the resources
inside the `.tf` file and compare to
the current resources deployed,
and will create a plan for what needs
to be implemented or changed.

If the execution plan is successfully created,
terraform will print it and await confirmation.

Type `yes` to proceed.

### Inspect resources

Inspect the current state of
the assets with:

```
$ terraform show
```

### Updating resources

If you want to update the infrastructure
that terraform is deploying and managing,
you can just update the `.tf` file,
and run the apply command:

```
$ terraform apply
```

As mentioned, terraform will examine
the currently deployed resources and
compare them to the resources listed
in the terraform file, and come up with
an execution plan.

### Destroying resources

Once you are ready to get rid of the resources,
use the destroy command:

```
$ terraform destroy
```

## Using Variables in Terraform

### Input Variables

You can define input variables in a file `variables.tf`
and use them to set up infrastructure.

**`variables.tf`:**

```
variable "region" {
  default = "us-west-1"
}
```

Now you can use this variable by
inserting the expression `${var.region}`:

```
provider "aws" {
  region = "${var.region}"
}
```

This can also be set on the command line:

```
$ terraform apply \
    -var 'region=us-east-1'
```

If you keep variable values in a file other than
the default `terraform.tfvars`, pass it with
the `-var-file` command line argument:

```
$ terraform apply \
    -var-file="production.tfvars"
```

### Output Variables

Output variables are defined in
terraform `.tf` files using `output`:

```
output "ip" {
  value = "${aws_instance.example.public_ip}"
}
```

To see the value, check the output of `terraform apply`
or run:

```
$ terraform output ip
```
144	long_strings.py
@@ -1,144 +0,0 @@
logo = """
  ___  ____  __  ___  _  _
 | |_) | |_ ( (` | |_) | | | |\ |
 |_|_) |_|__ _)_) |_|  |_| |_| \|

     cloud infrastructure tool for dahak
"""

# ----------------------

bespin_description = "dahak-bespin uses boto3 to wrangle nodes in the cloud and run dahak workflows"
bespin_usage = '''bespin <command> [<args>]

The most commonly used commands are:
   vpc        Make a VPC for all the dahak nodes
   security   Make a security group for nodes on the VPC
   spy        Make a spy monitoring node
   yeti       Make a yeti worker node

'''

# ----------------------

vpc_description = "Make a VPC and a security group"
vpc_usage = '''bespin vpc <vpc_subcommand>

The vpc subcommands available are:
   vpc build      Build the VPC
   vpc destroy    Tear down the VPC
   vpc info       Print info about the VPC
   vpc stash      Print location of VPC stash files

'''

vpc_build_description = "Build the VPC"
vpc_build_usage = '''bespin vpc build [<options>]

The vpc build subcommand does not offer any options.

'''

vpc_destroy_description = "Tear down the VPC"
vpc_destroy_usage = '''bespin vpc destroy [<options>]

The vpc destroy subcommand does not offer any options.

'''

vpc_info_description = "Get VPC info"
vpc_info_usage = '''bespin vpc info [<options>]

The vpc info subcommand does not offer any options.

'''

vpc_stash_description = "Get the VPC stash files"
vpc_stash_usage = '''bespin vpc stash [<options>]

The vpc stash subcommand does not offer any options.

'''

# ----------------------

spy_description = "Make a spy monitoring node and add it to the VPC"
spy_usage = '''bespin spy <spy_subcommand>

The spy subcommands available are:
   spy build      Build the spy node
   spy destroy    Tear down the spy node
   spy info       Print info about the spy node
   spy stash      Print location of spy stash files

'''

spy_build_description = "Build the spy node"
spy_build_usage = '''bespin spy build [<options>]

The spy build subcommand does not offer any options.

'''

spy_destroy_description = "Tear down the spy node"
spy_destroy_usage = '''bespin spy destroy [<options>]

The spy destroy subcommand does not offer any options.

'''

spy_info_description = "Get spy node info"
spy_info_usage = '''bespin spy info [<options>]

The spy info subcommand does not offer any options.

'''

spy_stash_description = "Get the spy node stash files"
spy_stash_usage = '''bespin spy stash [<options>]

The spy stash subcommand does not offer any options.

'''

# ----------------------

yeti_description = "Make a yeti worker node and add it to the VPC"
yeti_usage = '''bespin yeti <yeti_subcommand>

The yeti subcommands available are:
   yeti build      Build the yeti node
   yeti destroy    Tear down the yeti node
   yeti info       Print info about the yeti node
   yeti stash      Print location of yeti stash files

'''

yeti_build_description = "Build the yeti node"
yeti_build_usage = '''bespin yeti build [<options>]

The yeti build subcommand does not offer any options.

'''

yeti_destroy_description = "Tear down the yeti node"
yeti_destroy_usage = '''bespin yeti destroy [<options>]

The yeti destroy subcommand does not offer any options.

'''

yeti_info_description = "Get yeti node info"
yeti_info_usage = '''bespin yeti info [<options>]

The yeti info subcommand does not offer any options.

'''

yeti_stash_description = "Get the yeti node stash files"
yeti_stash_usage = '''bespin yeti stash [<options>]

The yeti stash subcommand does not offer any options.

'''
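The description/usage strings above suggest a git-style two-level CLI. A minimal sketch of how a driver might dispatch on them with argparse (hypothetical; the actual bespin entry point is not shown in this diff):

```python
import argparse
import sys

bespin_description = "dahak-bespin uses boto3 to wrangle nodes in the cloud and run dahak workflows"
bespin_usage = '''bespin <command> [<args>]'''

class Bespin(object):
    def __init__(self, argv):
        parser = argparse.ArgumentParser(
                description = bespin_description,
                usage = bespin_usage)
        parser.add_argument('command', help='Subcommand to run')
        # Parse only the first token; the subcommand handles the rest.
        args = parser.parse_args(argv[:1])
        if not hasattr(self, args.command):
            parser.print_usage()
            sys.exit(1)
        self.result = getattr(self, args.command)(argv[1:])

    def vpc(self, argv):
        # Would dispatch to build/destroy/info/stash here.
        return ("vpc", argv)

    def spy(self, argv):
        return ("spy", argv)

    def yeti(self, argv):
        return ("yeti", argv)

print(Bespin(["vpc", "build"]).result)
```

Each top-level command is a method, so adding a new command means adding a method and its usage string, with no change to the dispatch logic.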
196	main.tf	Normal file
@@ -0,0 +1,196 @@
# TODO:
# - vpc?
#
# Note:
# - it is the source directive that links the module code with the module block
#
# ============================
# Dahak Workflows Cluster
# ============================
#
# Deploy a VPC and a single cluster
# consisting of a single spy node
# (monitoring and benchmarking)
# and a variable number of yeti
# nodes (worker nodes).

provider "aws" {
  region = "${var.aws_region}"
}

# see https://github.com/hashicorp/terraform/issues/14399
terraform {
  required_version = ">= 0.9.3, != 0.9.5"
}

# ============================
# Allocate Spy Node
# ============================
# Spy node is a simple micro instance.

module "spy_server" {
  # When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
  # to a specific version of the modules, such as the following example:
  #source = "git::git@github.com:hashicorp/terraform-aws-consul.git/modules/consul-cluster?ref=v0.0.1"
  source = "./module"

  cluster_name  = "${var.cluster_name}-spy"
  cluster_size  = "1"
  instance_type = "${var.spy_instance_type}"
  spot_price    = "${var.spot_price}"

  ### # The EC2 Instances will use these tags to automatically discover each other and form a cluster
  ### cluster_tag_key   = "${var.cluster_tag_key}"
  ### cluster_tag_value = "${var.cluster_name}"

  ami_id    = "${var.ami_id}"
  user_data = "${data.template_file.spy_user_data.rendered}"

  vpc_id     = "${data.aws_vpc.dahakvpc.id}"
  subnet_ids = "${data.aws_subnet_ids.default.ids}"

  # To make testing easier, we allow Consul and SSH requests from any IP address here but in a production
  # deployment, we strongly recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.
  allowed_ssh_cidr_blocks = ["0.0.0.0/0"]

  allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
  ssh_key_name                = "${var.ssh_key_name}"

  tags = [
    {
      key                 = "Environment"
      value               = "development"
      propagate_at_launch = true
    },
  ]
}

# ============================
# Deploy Spy Node
# ============================
# Actually deploy the infrastructure
# (apt-get scripts, Python, docker,
# containers, etc.) to spy.

data "template_file" "spy_user_data" {
  template = "${file("${path.module}/dahak-spy/cloud_init/cloud_init.sh")}"

  ### vars {
  ###   cluster_tag_key = "${var.cluster_tag_key}"
|
### cluster_tag_value = "${var.cluster_name}"
|
||||||
|
### }
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
# ============================
|
||||||
|
# Allocate Yeti Node
|
||||||
|
# ============================
|
||||||
|
# Yeti node is a beefy node.
|
||||||
|
|
||||||
|
module "yeti_server" {
|
||||||
|
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
|
||||||
|
# to a specific version of the modules, such as the following example:
|
||||||
|
#source = "git::git@github.com:hashicorp/terraform-aws-consul.git/modules/consul-cluster?ref=v0.0.1"
|
||||||
|
source = "./module"
|
||||||
|
|
||||||
|
cluster_name = "${var.cluster_name}-server"
|
||||||
|
cluster_size = "${var.num_yeti_servers}"
|
||||||
|
instance_type = "${var.yeti_instance_type}"
|
||||||
|
spot_price = "${var.spot_price}"
|
||||||
|
|
||||||
|
### # The EC2 Instances will use these tags to automatically discover each other and form a cluster
|
||||||
|
### cluster_tag_key = "${var.cluster_tag_key}"
|
||||||
|
### cluster_tag_value = "${var.cluster_name}"
|
||||||
|
|
||||||
|
ami_id = "${var.ami_id}"
|
||||||
|
user_data = "${data.template_file.yeti_user_data.rendered}"
|
||||||
|
|
||||||
|
vpc_id = "${data.aws_vpc.dahakvpc.id}"
|
||||||
|
subnet_ids = "${data.aws_subnet_ids.default.ids}"
|
||||||
|
|
||||||
|
# To make testing easier, we allow Consul and SSH requests from any IP address here but in a production
|
||||||
|
# deployment, we strongly recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.
|
||||||
|
allowed_ssh_cidr_blocks = ["0.0.0.0/0"]
|
||||||
|
|
||||||
|
allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
|
||||||
|
ssh_key_name = "${var.ssh_key_name}"
|
||||||
|
|
||||||
|
tags = [
|
||||||
|
{
|
||||||
|
key = "Environment"
|
||||||
|
value = "development"
|
||||||
|
propagate_at_launch = true
|
||||||
|
},
|
||||||
|
]
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================
|
||||||
|
# Deploy Yeti Node
|
||||||
|
# ============================
|
||||||
|
# Actually deploy the infrastructure
|
||||||
|
# (apt-get scripts, Python, snakemake,
|
||||||
|
# singularity, etc.) to yeti.
|
||||||
|
|
||||||
|
data "template_file" "yeti_user_data" {
|
||||||
|
template = "${file("${path.module}/dahak-yeti/cloud_init/cloud_init.sh")}"
|
||||||
|
|
||||||
|
### vars {
|
||||||
|
### cluster_tag_key = "${var.cluster_tag_key}"
|
||||||
|
### cluster_tag_value = "${var.cluster_name}"
|
||||||
|
### }
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================
|
||||||
|
# Deploy VPC
|
||||||
|
# ============================
|
||||||
|
# Assemble the VPC, subnet,
|
||||||
|
# internet gateway, DNS, DHCP,
|
||||||
|
|
||||||
|
# VPC
|
||||||
|
resource "aws_vpc" "dahakvpc" {
|
||||||
|
cidr_block = "10.99.0.0/16"
|
||||||
|
enable_dns_support = true
|
||||||
|
enable_dns_hostnames = true
|
||||||
|
}
|
||||||
|
|
||||||
|
# VPC subnet
|
||||||
|
resource "aws_subnet" "dahaksubnet" {
|
||||||
|
vpc_id = "${aws_vpc.dahakvpc.id}"
|
||||||
|
cidr_block = "10.99.0.0/24"
|
||||||
|
map_public_ip_on_launch = true
|
||||||
|
availability_zone = "us-west-1a"
|
||||||
|
tags {
|
||||||
|
Name = "namedahaksubnet"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Internet gateway
|
||||||
|
resource "aws_internet_gateway" "dahakgw" {
|
||||||
|
vpc_id = "${aws_vpc.dahakvpc.id}"
|
||||||
|
tags {
|
||||||
|
Name = "namedahakgw"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Route
|
||||||
|
resource "aws_route" "internet_access" {
|
||||||
|
route_table_id = "${aws_vpc.dahakvpc.main_route_table_id}"
|
||||||
|
destination_cidr_block = "0.0.0.0/0"
|
||||||
|
gateway_id = "${aws_internet_gateway.dahakgw.id}"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Route table
|
||||||
|
resource "aws_route_table" "private_route_table" {
|
||||||
|
vpc_id = "${aws_vpc.dahakvpc.id}"
|
||||||
|
tags {
|
||||||
|
Name = "Private route table"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Associate route table with subnet
|
||||||
|
# and routing table.
|
||||||
|
resource "aws_route_table_association" "dahaksubnet_association" {
|
||||||
|
subnet_id = "${aws_subnet.dahaksubnet.id}"
|
||||||
|
route_table_id = "${aws_vpc.dahakvpc.main_route_table_id}"
|
||||||
|
}
|
||||||
|
|
46  modules/dahak-cluster/README.md  Normal file
@@ -0,0 +1,46 @@
# dahak cluster

(work in progress)

This folder contains a [Terraform module](https://www.terraform.io/docs/modules/usage.html)
to deploy a dahak cluster consisting of a VPC, a spy monitoring node, and one or more yeti worker nodes.

## using this module

This folder defines a Terraform module, which you can use in your
code by adding a `module` configuration and setting its `source` parameter
to the URL of this folder:

```hcl
module "dahak_cluster" {
  # TODO: update this
  source = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster?ref=v0.0.5"

  # TODO: update this
  # amazon image ID
  ami_id = "ami-abcd1234"

  # Configure and start the nodes
  user_data = <<-EOF
              #!/bin/bash
              /opt/consul/bin/run-consul --server --cluster-tag-key consul-cluster
              EOF

  # ... See variables.tf for the other parameters you must define for the consul-cluster module
}
```

Note the following parameters:

* `source`: Use this parameter to specify the URL of the Terraform module we are using.
  The double slash (`//`) is intentional and required. Terraform uses it to specify subfolders within a Git repo.
  The `ref` parameter specifies a specific Git tag in this repo. It ensures you are using a fixed version of the repo.

* `ami_id`: Use this parameter to specify the Amazon machine image to install on the nodes of the cluster.

* `user_data`: Use this parameter to specify user data (cloud-init scripts).

You can find the other parameters in [variables.tf](variables.tf).

Check out the [consul-cluster example](https://github.com/hashicorp/terraform-aws-consul/tree/master/MAIN.md) for fully-working sample code.
122  modules/dahak-cluster/main.tf  Normal file
@@ -0,0 +1,122 @@
# A modern terraform version is required
terraform {
  required_version = ">= 0.9.3"
}

# This is going to be called
# each time we create a module
# and point to this directory.
#
# In other words, we are calling
# this once for spy and once for
# each yeti node.
#
# The parameters come from the main.tf
# and vars.tf in the parent directory.
#
resource "aws_launch_configuration" "launch_configuration" {
  name_prefix   = "${var.cluster_name}-"
  image_id      = "${var.ami_id}"
  instance_type = "${var.instance_type}"
  user_data     = "${var.user_data}"
  spot_price    = "${var.spot_price}"

  iam_instance_profile        = "${aws_iam_instance_profile.instance_profile.name}"
  key_name                    = "${var.ssh_key_name}"
  security_groups             = ["${aws_security_group.lc_security_group.id}"]
  placement_tenancy           = "${var.tenancy}"
  associate_public_ip_address = "${var.associate_public_ip_address}"

  ebs_optimized = "${var.root_volume_ebs_optimized}"

  root_block_device {
    volume_type           = "${var.root_volume_type}"
    volume_size           = "${var.root_volume_size}"
    delete_on_termination = "${var.root_volume_delete_on_termination}"
  }

  # Important note: whenever using a launch configuration with an auto scaling group, you must set
  # create_before_destroy = true. However, as soon as you set create_before_destroy = true in one resource, you must
  # also set it in every resource that it depends on, or you'll get an error about cyclic dependencies (especially when
  # removing resources). For more info, see:
  #
  # https://www.terraform.io/docs/providers/aws/r/launch_configuration.html
  # https://terraform.io/docs/configuration/resources.html
  lifecycle {
    create_before_destroy = true
  }
}

# Create a security group
resource "aws_security_group" "lc_security_group" {
  name_prefix = "${var.cluster_name}"
  description = "Security group for the ${var.cluster_name} launch configuration"
  vpc_id      = "${var.vpc_id}"

  # aws_launch_configuration.launch_configuration in this module sets create_before_destroy to true, which means
  # everything it depends on, including this resource, must set it as well, or you'll get cyclic dependency errors
  # when you try to do a terraform destroy.
  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "${var.cluster_name}"
  }
}

# Security group rules:

resource "aws_security_group_rule" "allow_ssh_inbound" {
  count       = "${length(var.allowed_ssh_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.ssh_port}"
  to_port     = "${var.ssh_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_ssh_cidr_blocks}"]

  security_group_id = "${aws_security_group.lc_security_group.id}"
}

resource "aws_security_group_rule" "allow_ssh_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_ssh_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.ssh_port}"
  to_port                  = "${var.ssh_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_ssh_security_group_ids, count.index)}"

  security_group_id = "${aws_security_group.lc_security_group.id}"
}

resource "aws_security_group_rule" "allow_all_outbound" {
  type        = "egress"
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]

  security_group_id = "${aws_security_group.lc_security_group.id}"
}

module "security_group_rules" {
  source = "../consul-security-group-rules"

  security_group_id                  = "${aws_security_group.lc_security_group.id}"
  allowed_inbound_cidr_blocks        = ["${var.allowed_inbound_cidr_blocks}"]
  allowed_inbound_security_group_ids = ["${var.allowed_inbound_security_group_ids}"]

  server_rpc_port = "${var.server_rpc_port}"
  cli_rpc_port    = "${var.cli_rpc_port}"
  serf_lan_port   = "${var.serf_lan_port}"
  serf_wan_port   = "${var.serf_wan_port}"
  http_api_port   = "${var.http_api_port}"
  dns_port        = "${var.dns_port}"
}
9  modules/dahak-security-rules/README.md  Normal file
@@ -0,0 +1,9 @@
# dahak cluster security rules

(work in progress)

This directory contains configuration files
that control/set rules for the security group
associated with the dahak cluster.

[also see](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/consul-security-group-rules)
198  modules/dahak-security-rules/main.tf  Normal file
@@ -0,0 +1,198 @@
# CREATE THE SECURITY GROUP RULES THAT CONTROL WHAT TRAFFIC CAN GO IN AND OUT OF A CONSUL CLUSTER
resource "aws_security_group_rule" "allow_server_rpc_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.server_rpc_port}"
  to_port     = "${var.server_rpc_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_cli_rpc_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.cli_rpc_port}"
  to_port     = "${var.cli_rpc_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_lan_tcp_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.serf_lan_port}"
  to_port     = "${var.serf_lan_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_lan_udp_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.serf_lan_port}"
  to_port     = "${var.serf_lan_port}"
  protocol    = "udp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_wan_tcp_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.serf_wan_port}"
  to_port     = "${var.serf_wan_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_wan_udp_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.serf_wan_port}"
  to_port     = "${var.serf_wan_port}"
  protocol    = "udp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_http_api_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.http_api_port}"
  to_port     = "${var.http_api_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_dns_tcp_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.dns_port}"
  to_port     = "${var.dns_port}"
  protocol    = "tcp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_dns_udp_inbound" {
  count       = "${length(var.allowed_inbound_cidr_blocks) >= 1 ? 1 : 0}"
  type        = "ingress"
  from_port   = "${var.dns_port}"
  to_port     = "${var.dns_port}"
  protocol    = "udp"
  cidr_blocks = ["${var.allowed_inbound_cidr_blocks}"]

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_server_rpc_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.server_rpc_port}"
  to_port                  = "${var.server_rpc_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_cli_rpc_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.cli_rpc_port}"
  to_port                  = "${var.cli_rpc_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_lan_tcp_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.serf_lan_port}"
  to_port                  = "${var.serf_lan_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_lan_udp_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.serf_lan_port}"
  to_port                  = "${var.serf_lan_port}"
  protocol                 = "udp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_wan_tcp_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.serf_wan_port}"
  to_port                  = "${var.serf_wan_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_serf_wan_udp_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.serf_wan_port}"
  to_port                  = "${var.serf_wan_port}"
  protocol                 = "udp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_http_api_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.http_api_port}"
  to_port                  = "${var.http_api_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_dns_tcp_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.dns_port}"
  to_port                  = "${var.dns_port}"
  protocol                 = "tcp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}

resource "aws_security_group_rule" "allow_dns_udp_inbound_from_security_group_ids" {
  count                    = "${length(var.allowed_inbound_security_group_ids)}"
  type                     = "ingress"
  from_port                = "${var.dns_port}"
  to_port                  = "${var.dns_port}"
  protocol                 = "udp"
  source_security_group_id = "${element(var.allowed_inbound_security_group_ids, count.index)}"

  security_group_id = "${var.security_group_id}"
}
54  modules/dahak-security-rules/vars.tf  Normal file
@@ -0,0 +1,54 @@
# ---------------------------------------------------------------------------------------------------------------------
# REQUIRED PARAMETERS
# You must provide a value for each of these parameters.
# ---------------------------------------------------------------------------------------------------------------------

variable "security_group_id" {
  description = "The ID of the security group to which we should add the Consul security group rules"
}

variable "allowed_inbound_cidr_blocks" {
  description = "A list of CIDR-formatted IP address ranges from which the EC2 Instances will allow connections to Consul"
  type        = "list"
}

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------

variable "allowed_inbound_security_group_ids" {
  description = "A list of security group IDs that will be allowed to connect to Consul"
  type        = "list"
  default     = []
}

variable "server_rpc_port" {
  description = "The port used by servers to handle incoming requests from other agents."
  default     = 8300
}

variable "cli_rpc_port" {
  description = "The port used by all agents to handle RPC from the CLI."
  default     = 8400
}

variable "serf_lan_port" {
  description = "The port used to handle gossip in the LAN. Required by all agents."
  default     = 8301
}

variable "serf_wan_port" {
  description = "The port used by servers to gossip over the WAN to other servers."
  default     = 8302
}

variable "http_api_port" {
  description = "The port used by clients to talk to the HTTP API"
  default     = 8500
}

variable "dns_port" {
  description = "The port used to resolve DNS queries."
  default     = 8600
}
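Because every port variable above has a default, a caller only has to supply the security group and CIDR list. A minimal invocation might look like this sketch (the relative module path and security group name are assumptions about the caller's layout):

```hcl
module "dahak_security_rules" {
  source = "../dahak-security-rules"

  # required parameters
  security_group_id           = "${aws_security_group.lc_security_group.id}"
  allowed_inbound_cidr_blocks = ["10.99.0.0/16"]

  # optional: override a default, e.g. move the HTTP API off 8500
  http_api_port = 8501
}
```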
@@ -1,31 +0,0 @@
import random
import string


"""
Random Labels

Generate random labels for labeling
all the AWS assets
"""


def random_ip():
    """
    Return a random IP of the form
    10.*.0.0
    """
    block = random.randint(15, 99)
    return "10.%d.0.{addr}" % (block)


def random_label():
    # Generate a random label to uniquely identify this group

    a1 = random.choices(string.ascii_lowercase, k=2)
    a2 = random.choices(string.digits, k=1)
    a3 = random.choices(string.ascii_lowercase, k=2)

    label = ''.join(a1 + a2 + a3)

    return label
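The deleted helper above is small enough to exercise directly. A minimal sketch of how the label generators could be used; filling the `{addr}` placeholder with `str.format` is my assumption about how callers consumed the IP template:

```python
import random
import string


def random_ip():
    """Return an IP template of the form 10.*.0.{addr}, with addr left as a placeholder."""
    block = random.randint(15, 99)
    return "10.%d.0.{addr}" % (block)


def random_label():
    # Two lowercase letters, one digit, two lowercase letters, e.g. "ab3cd".
    a1 = random.choices(string.ascii_lowercase, k=2)
    a2 = random.choices(string.digits, k=1)
    a3 = random.choices(string.ascii_lowercase, k=2)
    return ''.join(a1 + a2 + a3)


# Fill in the host portion of the template (hypothetical usage).
ip = random_ip().format(addr=7)
label = random_label()
print(ip, label)
```

Note that `random.choices` requires Python 3.6 or later.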
44  todo.md
@@ -1,44 +0,0 @@
# todo list

vpc subcommand:
- [x] build
- [ ] info
- [ ] destroy
    - will require expanding stash files
    - check that there is no yeti node
    - check that there is no spy node
    - check that there is no security group
    - delete subnet
    - delete routing table
    - delete internet gateway
    - delete dhcp
    - delete network interface
- [x] stash

security subcommand:
- [ ] build
- [ ] destroy
- [ ] port add
- [ ] port rm
- [ ] ip add
- [ ] ip rm
- [ ] info
- [ ] stash

spy subcommand:
- [ ] spy build
    - like vpc - check if dotfile exists
- [ ] spy destroy
- [ ] spy info
- [ ] spy stash

yeti subcommand:
- [ ] yeti build
    - global label counter
    - yeti1, yeti2, yeti3, etc.
- [ ] yeti destroy
    - grep nodes with yeti in label
- [ ] yeti info
- [ ] yeti stash
57  variables.tf  Normal file
@@ -0,0 +1,57 @@
# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------

# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------

variable "ami_id" {
  description = "The ID of the AMI to run in the cluster."
  default     = ""
}

variable "aws_region" {
  description = "The AWS region to deploy into (e.g. us-east-1)."
  default     = "us-east-1"
}

variable "cluster_name" {
  description = "What to name the dahak cluster and all of its associated resources"
  default     = "dahak-test-cluster"
}

variable "spy_instance_type" {
  description = "The type of instance to deploy for the spy node."
  default     = "t2.micro"
}

variable "num_yeti_servers" {
  description = "The number of yeti workers to deploy."
  default     = 1
}

variable "yeti_instance_type" {
  description = "The type of instance to deploy for the yeti workers."
  default     = "m5.4xlarge"
}

### variable "cluster_tag_key" {
###   description = "The tag the EC2 Instances will look for to automatically discover each other and form a cluster."
###   default     = "consul-servers"
### }

variable "ssh_key_name" {
  description = "The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair."
  default     = ""
}

variable "spot_price" {
  description = "The maximum hourly price to pay for EC2 Spot Instances."
  default     = "0.28"
}
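Since every variable in this file has a default, `terraform apply` can run with no extra input (beyond the two AWS credential environment variables); a `terraform.tfvars` file is only needed to override sizes or names. A sketch with illustrative values:

```hcl
# terraform.tfvars -- illustrative values only
aws_region         = "us-west-1"
cluster_name       = "dahak-prod-cluster"
num_yeti_servers   = 3
yeti_instance_type = "m5.4xlarge"
ssh_key_name       = "my-keypair"
spot_price         = "0.50"
```

Terraform loads `terraform.tfvars` automatically from the working directory, so no extra flags are needed.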