sudo kvm-ok
sudo apt install -y qemu qemu-kvm libvirt-daemon libvirt-clients bridge-utils virt-manager
sudo apt install virtualbox
Or install VirtualBox from https://www.virtualbox.org/wiki/Downloads
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vagrant
vagrant plugin install vagrant-vbguest
vagrant plugin install vagrant-hostmanager
vagrant plugin install vagrant-disksize
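To confirm the plugins registered correctly (a quick check, not part of the original steps), you can list them:
vagrant plugin list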
ansible-galaxy collection install ansible.posix
ansible-galaxy collection install community.crypto
ansible-galaxy collection install community.general
ansible-galaxy collection install community.mysql
ansible-galaxy collection install community.aws
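Alternatively, the same collections can be pinned in a requirements file and installed in one shot; a minimal sketch, assuming you keep the file next to the playbooks (the name and location are illustrative):
cat > provisioning/ansible/requirements.yml <<'EOF'
collections:
  - name: ansible.posix
  - name: community.crypto
  - name: community.general
  - name: community.mysql
  - name: community.aws
EOF
ansible-galaxy collection install -r provisioning/ansible/requirements.yml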
In case of using ceph, do:
export VAGRANT_EXPERIMENTAL="disks"
vagrant reload
Deploy the VMs with:
vagrant up
In case of "'/var/run/libvirt/libvirt-sock': Permission denied", try to login/logout or restart the machine
In case of Rocky Linux 8 and the error "Stderr: 0%...VBOX_E_OBJECT_NOT_FOUND", try the following box metadata bugfix. Open a new file called box-metadata.json and write:
{
  "name" : "rockylinux/8",
  "description" : "Rocky Linux 8 7.0.0 Bugfix",
  "versions" : [
    {
      "version" : "7.0.1-20221213.0",
      "providers" : [
        {
          "name" : "virtualbox",
          "url" : "http://dl.rockylinux.org/pub/rocky/8/images/x86_64/Rocky-8-Vagrant-Vbox-8.7-20221213.0.x86_64.box"
        }
      ]
    }
  ]
}
Now apply the patch with:
vagrant box add box-metadata.json
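You can then check that the box with the fixed version is available (a simple verification, not in the original instructions):
vagrant box list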
If you increase a guest's disk size (eg conf.disksize.size = '100GB' with the vagrant-disksize plugin), you must resize the filesystem from within the guest:
vagrant ssh guest
sudo su
parted -l                                   # list disks and partitions
fdisk -l /dev/sda                           # check the current partition table
fdisk /dev/sda                              # create a new partition (eg /dev/sda5) in the extra space
fdisk -l /dev/sda                           # confirm the new partition exists
mkfs -t xfs -f /dev/sda5                    # format the new partition
mkdir /extent
mount -t xfs -o defaults /dev/sda5 /extent  # mount it
blkid                                       # note the UUID of the new partition
vi /etc/fstab                               # add a persistent mount entry (see the sketch below)
exit
exit
vagrant reload guest
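For the /etc/fstab step above, a persistent mount entry typically references the UUID reported by blkid; a sketch with a placeholder UUID (replace it with the real value):
# /etc/fstab entry for the new partition (UUID is a placeholder)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /extent xfs defaults 0 0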
Once all VMs are up (see 'Deploy'), you can launch Ansible without Vagrant. Examples:
ansible -v -i '192.168.56.41,' --key-file .vagrant/machines/lb/virtualbox/private_key -u vagrant -b -m setup all
ansible-playbook -v -i provisioning/ansible/hosts -u vagrant -b provisioning/ansible/playbook.yml
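As a quick connectivity check before running the full playbook, the same inventory can be used with the ping module (an illustrative variation on the examples above, assuming the inventory file already defines the per-host SSH connection details):
ansible -v -i provisioning/ansible/hosts -u vagrant -m ping all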
In your /etc/hosts file, add a line matching the dev Nextcloud domain (defined in your Vagrantfile), eg "nextcloud.test", to the chosen IP, eg "192.168.56.51".
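With the example values above, the line would look like this:
192.168.56.51   nextcloud.test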
Open a browser at "https://nextcloud.test"
Done with:
HOSTS = [
{ :hostname => "db1", :ip => NETWORK+"11", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers" },
{ :hostname => "web.test", :ip => NETWORK+"41", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "web_servers" },
{ :hostname => "lb.test", :ip => NETWORK+"51", :ram => 1024, :cpu => 1, :box => "ubuntu/focal64", :group => "lbal_servers" },
]
OK
TODO: install Keepalived on the DB nodes
HOSTS = [
{ :hostname => "db1", :ip => NETWORK+"11", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers" },
{ :hostname => "db2", :ip => NETWORK+"12", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers" },
{ :hostname => "db3", :ip => NETWORK+"13", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers" },
{ :hostname => "web.test", :ip => NETWORK+"41", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "web_servers" },
{ :hostname => "lb.test", :ip => NETWORK+"51", :ram => 1024, :cpu => 1, :box => "ubuntu/focal64", :group => "lbal_servers" },
]
OK
HOSTS = [
{ :hostname => "db1", :ip => NETWORK+"11", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers", },
{ :hostname => "db2", :ip => NETWORK+"12", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers", },
{ :hostname => "db3", :ip => NETWORK+"13", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers", },
{ :hostname => "lbsql1", :ip => NETWORK+"19", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_lbal_servers", :state => "MASTER", :priority => 101, :vip => NETWORK+"20" },
{ :hostname => "lbsql2", :ip => NETWORK+"18", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_lbal_servers", :state => "BACKUP", :priority => 100, :vip => NETWORK+"20" },
{ :hostname => "web.test", :ip => NETWORK+"41", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "web_servers", :ipdb => NETWORK+"20" },
{ :hostname => "lb.test", :ip => NETWORK+"51", :ram => 1024, :cpu => 1, :box => "ubuntu/focal64", :group => "lbal_servers" },
]
OK
Done with:
HOSTS = [
{ :hostname => "db1", :ip => NETWORK+"11", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "db_servers", },
{ :hostname => "gl1", :ip => NETWORK+"31", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "gluster_servers" },
{ :hostname => "gl2", :ip => NETWORK+"32", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "gluster_servers" },
{ :hostname => "web.test", :ip => NETWORK+"41", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "web_servers", :redisd => "keydb", :redisp => "6380", :redisv => NETWORK+"40", :priority => 101 },
{ :hostname => "web2.test", :ip => NETWORK+"42", :ram => 1024, :cpu => 1, :box => "centos/7", :group => "web_servers", :redisd => "keydb", :redisp => "6380", :redisv => NETWORK+"40", :priority => 100 },
{ :hostname => "lb.test", :ip => NETWORK+"51", :ram => 1024, :cpu => 1, :box => "ubuntu/focal64", :group => "lbal_servers", :state => "MASTER", :priority => 101 },
{ :hostname => "lb2.test", :ip => NETWORK+"52", :ram => 512, :cpu => 1, :box => "ubuntu/focal64", :group => "lbal_servers", :state => "BACKUP", :priority => 100 },
]
TODO
HOSTS = [
]
Examples:
ansible -v -i '192.168.64.68,' --key-file /home/nextcloud/Documents/Secure/Unix/ssh/id_rsa_pedro -u pedro -b -m setup all
ansible-playbook -v -i provisioning/ansible/hosts_openstack -b provisioning/ansible/playbook_openstack.yml