-# setupvm-ose-dev
-Simple, quick, and dirty shell script to provision/configure a local or cloud-based VM instance to run a single-node development cluster from source code for OpenShift/Kubernetes. This
-script automates and configures a complete dev environment based on the normal prereqs (i.e. subscription manager, yum installs, go install, etc...) and allows developers to change/update source and switch between K8 and Origin.
+# Set Up OCP, K8, GlusterFS, CNV, etc... VMs or Cloud Instances
+This repo contains scripts that perform various developer-type functions to help set up bare-metal, local VM, or cloud instances as ready-to-use development environments for testing and whatever else you want to use them for.

-Use this for RHEL 7.x or CentOS instances using a normal dev-type setup (i.e. hack/local-up-cluster.sh for K8 and openshift start for Origin).

+## Some Features
+- supports RHEL and CentOS
+- local-up-cluster.sh K8 node for quick, easy development (local or cloud enabled)
+- kube-up.sh K8 node to help spin up a multi-node cluster
+- base system prereqs and components for OCP production or development instances
+- stand up a fully functional GlusterFS cluster with a Trusted Storage Pool and an initial volume
+- CNV, CRS, CNS, and Halo support
+- and more

-# How To Run

-1. Create a RHEL 7.x AWS instance in us-east-1d (t2.large) or a GCE instance in us-east1-d, or a local VM instance (don't forget to add a 2nd storage volume for the docker registry) - builds will run out of memory without t2.large (or the GCE equivalent), at least in my experience
-2. Create an unattached volume for use with our pod (for a cloud-based setup) - note the volume ID or identifier
-3. scp the attached scripts (SetUpVM.sh and setupvm.config) to your VM (I base everything out of /home/$USER, i.e. /home/ec2-user if running on AWS, typically /root if running a local VM or GCE)
-4. Edit the setupvm.config - these are the params used by the SetUpVM.sh script and let you customize your source paths, gopath, etc... (a hedged sample follows this list)
-5. Run the script

-      ./SetUpVM.sh

-6. The script takes about 8 to 10 minutes total depending on network latency, but about 7 minutes in it will ask to set up the docker registry - watch for that, as it expects input on which block device to use
-7. After completion (a condensed sketch of these commands follows this list):
-   - sudo -s, or exit and log back in, or source the bash profiles (to work as root and also pick up the .bashrc/.bash_profile exports)
-   - Change to the source GOPATH directory specified in setupvm.config (default is /opt/go), i.e. <$GOPATH>/src/k8s.io/kubernetes
-   - ./hack/local-up-cluster.sh (note: this builds and runs the K8 process in this terminal; to stop, Ctrl+C)
-   - Once this is running, open another terminal, navigate to KUBEWORKDIR from setupvm.config (default is /etc/kubernetes), and run ./config-k8.sh
-   - You are now ready to interact with kubectl
-   - To use custom source code, replace /opt/go/src/k8s.io/kubernetes with your forked repo and git checkout <your-branch>

-   or

-   - sudo -s
-   - cd to the source GOPATH directory specified in setupvm.config (default is /opt/go), i.e. <$GOPATH>/src/github.com/openshift/origin
-   - make clean build
-   - After completion, run the start-ose.sh script found in ORIGINWORKDIR from setupvm.config (default is /etc/openshift)
-   - ./start-ose.sh (this runs openshift as a process - logging goes to openshift.log in the home dir or the openshift working dir)
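
For reference, a minimal sketch of the setupvm.config parameters referenced above (the parameter names come from this README, but exact spellings and defaults are assumptions - verify against the real file):

```bash
# Hypothetical excerpt of setupvm.config - check names/values against your copy
GOPATH="/opt/go"                 # source trees land under $GOPATH/src
KUBEWORKDIR="/etc/kubernetes"    # holds config-k8.sh and sample yamls
ORIGINWORKDIR="/etc/openshift"   # holds start-ose.sh/stop-ose.sh and sample yamls
```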
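
Putting the two paths above together, a condensed sketch of the terminal commands, assuming the default paths:

```bash
# K8 path (terminal 1): build and run a local cluster from source (Ctrl+C stops it)
sudo -s
cd /opt/go/src/k8s.io/kubernetes
./hack/local-up-cluster.sh

# K8 path (terminal 2): point kubectl at the running cluster
cd /etc/kubernetes && ./config-k8.sh
kubectl get nodes                     # sanity check

# Origin path: build from source, then start via the helper script
cd /opt/go/src/github.com/openshift/origin
make clean build
cd /etc/openshift && ./start-ose.sh   # logs to openshift.log
```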

-# Some Things To Note:

-1. By default, if you did not change the work directory or gopath parameters in setupvm.config:
-   - GOPATH = /opt/go (/opt/go/src/github.com/openshift/origin and /opt/go/src/k8s.io/kubernetes)
-   - Kube work dir = /etc/kubernetes (config scripts and a dev-configs dir with some sample yamls)
-   - Origin work dir = /etc/openshift (config scripts and a dev-configs dir with some sample yamls)
-   All tasks, scripts, and configurations (openshift in particular) will be located there.

-2. If you will be switching frequently between K8 and Origin, uncomment the last line in the stop-ose.sh script, which removes the /etc/.kube dir (this prevents crossing wires between Origin and Kube); see the sketch just below
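
A hedged sketch of what that last line looks like once uncommented (the exact line may differ in your copy of stop-ose.sh):

```bash
# tail of stop-ose.sh - uncomment when switching between K8 and Origin
rm -rf /etc/.kube
```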

-# Additional Supported Functionality

-## GlusterFS Cluster Setup (RHEL or CentOS):

-1. Prereqs:
-   - Choose a single server as your Gluster master and Heketi-client server (where you will run the scripts from)
-   - Run as root (sudo -s on AWS after logging in as ec2-user)
-   - Set up passwordless ssh from the designated `master/heketi-client` to each node (see the sketch after this section)
-     - generate a public key on the master gluster server: ```ssh-keygen -t rsa```
-     - on AWS, copy /root/.ssh/id_rsa.pub into each host's /root/.ssh/authorized_keys file
-     - on non-AWS, run ssh-copy-id -i /root/.ssh/id_rsa.pub root@server (you will be prompted for a password)

-2. scp the `setupvm.config`, `SetUpGFS.sh`, and `SetUpVM.sh`, or clone this repo, onto the `master` GlusterFS node (pick a single node)

-3. Edit the following variables in `setupvm.config` (everything else in `setupvm.config` can be ignored)
-   - HOSTENV=rhel or centos (however, this has not been tested yet on centos)
-   - RHNUSER=rhn-support-account (only needed for rhel)
-   - RHNPASS=rhn-password (only needed for rhel)
-   - POOLID=the default should be fine (only needed for rhel)
-   - SETUP_TYPE="gluster" (if co-locating a dev instance and the `master` gluster node, use `gluster-dev` for this value)
-   - GFS_LIST="glusterfs1.rhs:glusterfs2.rhs:glusterfs3.rhs:..." (make sure the designated `master` node is first in the list)

-4. Execute SetUpGFS.sh (SetUpVM.sh should call and execute SetUpGFS.sh as well, but again, this has not been tested yet)

-   This will set up a basic GlusterFS cluster (just a vanilla cluster - no partitions or volumes are created; that is manual or can be done by Heketi), a Heketi server, and the Heketi client. Additional config will be required:

-   - configure /etc/heketi/heketi.json (the script will give you values to configure), then restart heketi
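
As referenced in the prereqs above, a minimal sketch of wiring up passwordless ssh from the `master` to every node (hostnames are the GFS_LIST examples from this README; adjust to your own):

```bash
# run on the designated master/heketi-client node as root
ssh-keygen -t rsa                                    # generate the key pair
for node in glusterfs1.rhs glusterfs2.rhs glusterfs3.rhs; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@"$node"  # non-AWS: prompts for password
done
# on AWS, instead paste /root/.ssh/id_rsa.pub into each node's /root/.ssh/authorized_keys
```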
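
And for the final manual step, a sketch of applying the values SetUpGFS.sh prints for /etc/heketi/heketi.json and restarting the service (the heketi-cli check is an assumption, not something the script requires):

```bash
vi /etc/heketi/heketi.json    # plug in the values the script printed
systemctl restart heketi
heketi-cli cluster list       # optional: verify the Heketi server answers
```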

+## How To Run

+1. Create an instance in AWS or GCE (rhel or centos) or a local VM (or multiple)
+2. Clone this repo, or download the desired `setup` scripts from the correct directory, onto your desired install node
+3. Follow the README.md in each sub-directory for more detailed instructions; basically, you configure `setupvm.config` to pass in parameters and control what you want to install, and
+   after that execute the associated shell script (i.e. SetUpGFS.sh, SetUpK8.sh, SetUpOrigin.sh, SetUpVM.sh, etc...); a condensed sketch follows below
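
A condensed sketch of that flow (the repo URL and sub-directory are placeholders - substitute the ones you actually want):

```bash
git clone https://github.com/<org>/<this-repo>.git   # placeholder URL
cd <this-repo>/<setup-subdir>
vi setupvm.config      # set parameters controlling what gets installed
./SetUpVM.sh           # or SetUpGFS.sh, SetUpK8.sh, SetUpOrigin.sh, etc...
```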