Audience
- This writeup is mainly intended for devstack developers who want to set up Manila in devstack with the GlusterFS Native driver as the backend.
Pre-requisites
- Working knowledge of devstack
- Working knowledge of Manila (File Share service of openstack)
- User knowledge of GlusterFS
Assumptions
- This writeup is based on a Fedora 20 setup, but should work equally well for any other RPM-based distro (CentOS, RHEL etc.) with minor distro-specific changes as needed.
- All commands & screenshots below are taken from my setup. Please change them accordingly for your setup.
Setup environment
- My choice: 2 Fedora 20 VMs, one to set up devstack+manila, the other to set up GlusterFS
- The devstack+manila VM is known as devstack-large-vm
- The GlusterFS VM is known as scratchpad-vm
- You may choose to run GlusterFS inside the devstack VM itself. Having it separate is closer to a real-world setup
- You may choose to use physical system(s) instead of VM(s) to host devstack and GlusterFS
Setting up devstack with Manila
- As of writing this, Manila is incubated into openstack but is not a core project yet! Hence devstack doesn't have built-in support to set up the Manila services.
- Setting up Manila requires a few additional steps before running the regular ./stack.sh step of devstack.
- Follow the steps provided here (one of my earlier write-ups) to set up devstack with the Manila services.
- After step #3 there, you should have your devstack with the Manila services (m-shr, m-sch, m-api) running.
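- A quick sanity check (a minimal sketch; it assumes your python-manilaclient supports service-list and the exact output depends on your Manila version) is to list the registered Manila services and confirm that the share and scheduler services report state 'up':
[stack@devstack-large-vm ~]$ [admin] manila service-list
# Expect rows for manila-scheduler and manila-share with State 'up'.
# You can also attach to devstack's screen session and check the m-api/m-sch/m-shr windows:
[stack@devstack-large-vm ~]$ screen -x stack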
Setting up GlusterFS volume
- The GlusterFS native driver of Manila requires a version of GlusterFS that supports SSL/TLS-based authorization. This was a fairly new feature in GlusterFS (as of writing this), hence I used the latest (not yet released) version of GlusterFS from the glusterfs nightly build page. Since I wrote this, the feature has become available in GlusterFS 3.6.x and above.
- Thus, the minimum supported version for this to work is GlusterFS 3.6.
- CAUTION: Recent changes to the GlusterFS code have changed the SSL/TLS behaviour. For more details (if interested) read this mail thread
- If going for a nightly build of GlusterFS, please use builds done on or before Jan 8th, 2015 from the glusterfs nightly build page
- In other words, the minimum and maximum GlusterFS versions supported for the Manila native driver to work are GlusterFS 3.6.x (released) and glusterfs-3.7dev-0.487.git250944d (nightly build) respectively
- Read https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_ssl.md to understand how GlusterFS SSL/TLS authorization works. It's highly recommended to read this before proceeding further.
- Here is the dump of the repo I added to my GlusterFS VM and the version of GlusterFS installed using that repo.
[root@scratchpad-vm ~]# rpm -qa| grep glusterfs
glusterfs-libs-3.7dev-0.78.gitc77a77e.fc20.x86_64
glusterfs-3.7dev-0.78.gitc77a77e.fc20.x86_64
glusterfs-cli-3.7dev-0.78.gitc77a77e.fc20.x86_64
glusterfs-api-3.7dev-0.78.gitc77a77e.fc20.x86_64
glusterfs-fuse-3.7dev-0.78.gitc77a77e.fc20.x86_64
glusterfs-server-3.7dev-0.78.gitc77a77e.fc20.x86_64

[root@scratchpad-vm ~]# cat /etc/yum.repos.d/gluster-nightly.repo
[gluster-nightly]
name=Fedora $releasever - $basearch
failovermethod=priority
baseurl=http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/fedora-20-x86_64/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
enabled=1
metadata_expire=7d
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False

[gluster-nightly-source]
name=Fedora $releasever - Source
failovermethod=priority
baseurl=http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/fedora-20-x86_64/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
enabled=0
#metadata_expire=7d
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
- Create a new GlusterFS volume to be used as a backend for Manila. In my case, I created gv0 and started it.
[root@scratchpad-vm ~]# gluster volume info gv0

Volume Name: gv0
Type: Distribute
Volume ID: 784d0159-e20d-45c0-8ee0-2ad2f7292584
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: scratchpad-vm:/bricks/gv0-brick0
Options Reconfigured:
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
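- For reference, a volume like gv0 can be created and started roughly as below (a sketch; it assumes a brick directory at /bricks/gv0-brick0 on the GlusterFS VM, and 'force' is only needed if the brick sits on the root partition):
[root@scratchpad-vm ~]# mkdir -p /bricks/gv0-brick0
[root@scratchpad-vm ~]# gluster volume create gv0 scratchpad-vm:/bricks/gv0-brick0 force
[root@scratchpad-vm ~]# gluster volume start gv0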
- As said above, Manila's GlusterFS Native driver uses GlusterFS' SSL/TLS-based authorization feature to allow/deny access to a Manila share. In order for SSL/TLS-based authorization to work, we need to set up SSL/TLS certificates between the client and server. These certificates provide the mutual trust between the client and server, and this trust setup is not handled by Manila; it needs to be done out-of-band of Manila. It's understood that this will be done by the storage admin/deployer in a real-world setup, prior to setting up Manila with the GlusterFS Native driver.
- There are many ways to create SSL/TLS client and server certificates. The easiest way is to create self-signed certificates & use the same on both sides, which is good enough for our devstack/test setup.
- Follow the steps below to create the certificates. Note that it doesn't matter where you create these certificates; what matters is where you put them!
- Create a new private key for the glusterfs server, called glusterfs.key
[root@scratchpad-vm deepakcs]# openssl genrsa -out glusterfs.key 1024
Generating RSA private key, 1024 bit long modulus
.++++++
..............................................................................++++++
e is 65537 (0x10001)
- Create a new public certificate for the glusterfs server using the above private key, called glusterfs.pem. This will be a self-signed certificate, as it was created using the server's private key instead of a CA (Certifying Authority). This certificate will be issued to client.example.com.
[root@scratchpad-vm deepakcs]# openssl req -new -x509 -key glusterfs.key -subj /CN=client.example.com -out glusterfs.pem
[root@scratchpad-vm deepakcs]#
- Lastly, we need to create glusterfs.ca, which holds the list of certificates we trust. For our devstack/test-only setup, we just copy glusterfs.pem as glusterfs.ca. This means we trust ourselves and also any other entity that presents glusterfs.pem as its identity during the SSL/TLS handshake.
[root@scratchpad-vm deepakcs]# cp ./glusterfs.pem ./glusterfs.ca
[root@scratchpad-vm deepakcs]#
- Copy these certificates to the /etc/ssl/ directory on the server; GlusterFS expects them to be there! Once copied, this is how /etc/ssl/ will look:
[root@scratchpad-vm deepakcs]# ls -l /etc/ssl/
total 40
lrwxrwxrwx. 1 root root  16 Dec 12  2013 certs -> ../pki/tls/certs
-rw-r--r--. 1 root root 765 Jan 21 07:23 glusterfs.ca
-rw-r--r--. 1 root root 887 Jan 21 07:23 glusterfs.key
-rw-r--r--. 1 root root 765 Jan 21 07:23 glusterfs.pem
- This closes the server-side setup. These certificates need to be copied to the client system(s) as well, but in Manila the client is the Nova VM! Thus we copy these certificates to the client once we create the Nova VM. In a real-world setup, the storage admin/deployer might copy these certificates into a tenant-specific glance image beforehand and reduce the manual intervention needed to set up gluster on the client.
Setup password-less ssh access between devstack and GlusterFS VMs
- The GlusterFS native driver needs password-less ssh access for the root user to the GlusterFS VM in order to configure and tune the GlusterFS volume for Manila.
- There are enough articles floating around the internet on how to make this happen, so please help yourself :)
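- For completeness, a minimal sketch of one way to do it (assuming the Manila services run as the stack user on the devstack VM and the GlusterFS VM is scratchpad-vm):
[stack@devstack-large-vm ~]$ ssh-keygen -t rsa                 # accept the defaults, empty passphrase
[stack@devstack-large-vm ~]$ ssh-copy-id root@scratchpad-vm
[stack@devstack-large-vm ~]$ ssh root@scratchpad-vm hostname   # should print 'scratchpad-vm' without asking for a password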
Configure Manila to use GlusterFS Native driver
- Below is my /etc/manila/manila.conf with the GlusterFS-specific changes marked with #DPKS
[stack@devstack-large-vm ~]$ [admin] cat /etc/manila/manila.conf
[keystone_authtoken]
signing_dir = /var/cache/manila
admin_password = abc123
admin_user = manila
admin_tenant_name = service
auth_protocol = http
auth_port = 35357
auth_host = 192.168.122.107

[DEFAULT]
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)d TRACE %(name)s %(instance)s
logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s
logging_context_format_string = %(asctime)s.%(msecs)d %(color)s%(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s%(color)s] %(instance)s%(color)s%(message)s
rabbit_userid = stackrabbit
rabbit_password = abc123
rabbit_hosts = 192.168.122.107
rpc_backend = manila.openstack.common.rpc.impl_kombu
#DPKS - create gluster-gv0 backend
enabled_share_backends = gluster-gv0
neutron_admin_password = abc123
cinder_admin_password = abc123
nova_admin_password = abc123
state_path = /opt/stack/data/manila
osapi_share_extension = manila.api.contrib.standard_extensions
rootwrap_config = /etc/manila/rootwrap.conf
api_paste_config = /etc/manila/api-paste.ini
share_name_template = share-%s
scheduler_driver = manila.scheduler.filter_scheduler.FilterScheduler
verbose = True
debug = True
auth_strategy = keystone

[DATABASE]
connection = mysql://root:abc123@127.0.0.1/manila?charset=utf8

[oslo_concurrency]
lock_path = /opt/stack/manila/manila_locks

[generic1]
service_instance_password = ubuntu
service_instance_user = ubuntu
service_image_name = ubuntu_1204_nfs_cifs
path_to_private_key = /opt/stack/.ssh/id_rsa
path_to_public_key = /opt/stack/.ssh/id_rsa.pub
share_backend_name = GENERIC1
share_driver = manila.share.drivers.generic.GenericShareDriver

[generic2]
service_instance_password = ubuntu
service_instance_user = ubuntu
service_image_name = ubuntu_1204_nfs_cifs
path_to_private_key = /opt/stack/.ssh/id_rsa
path_to_public_key = /opt/stack/.ssh/id_rsa.pub
share_backend_name = GENERIC2
share_driver = manila.share.drivers.generic.GenericShareDriver

#DPKS - add gluster-gv0 backend
[gluster-gv0]
share_backend_name = gluster-gv0
share_driver = manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver
glusterfs_targets = root@scratchpad-vm:/gv0
- Restart the m-shr service for the above changes to take effect.
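- In devstack of this era the services run inside a screen session, so restarting m-shr is just a matter of re-running it in its screen window. Roughly (a sketch, assuming the default session name 'stack'):
[stack@devstack-large-vm ~]$ screen -x stack
# Switch to the m-shr window (Ctrl-a then " to pick it from the window list),
# press Ctrl-c to stop the service, then press the Up arrow and Enter to start it again.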
- If all is well, m-shr will now be started with the GlusterFS Native driver, which uses the gv0 GlusterFS volume as the backend. As part of the startup, the GlusterFS native driver enables SSL/TLS mode in gv0, which can be verified as below (see the lines marked with #DPKS)
[root@scratchpad-vm deepakcs]# gluster volume info gv0

Volume Name: gv0
Type: Distribute
Volume ID: 784d0159-e20d-45c0-8ee0-2ad2f7292584
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: scratchpad-vm:/bricks/gv0-brick0
Options Reconfigured:
server.ssl: on             <-- #DPKS
client.ssl: on             <-- #DPKS
nfs.export-volumes: off    <-- #DPKS
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
Setup devstack networking to allow Nova VMs to access the external/provider network
- devstack by default doesn't allow Nova VMs to access the external network. For our use case, we need the Nova VMs to access our GlusterFS server, which is external to devstack.
- devstack by default supplies 2 networks: private and public. private can be used for the tenant network and public is for external connectivity.
[stack@devstack-large-vm ~]$ [admin] neutron net-list +--------------------------------------+------------------------+----------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------------+----------------------------------------------------+ | 5b5873bd-f4c9-42c3-aae9-5a3eccb7ec2f | private | ecc6a3b1-ec93-424a-bd56-83a18bcd1d09 10.0.0.0/24 | | c7e06e15-c69c-4b25-96b9-474e8c394461 | public | e6092271-0a1b-45ad-abf7-71830bbe082c 172.24.4.0/24 | | 3bf77336-9653-4356-aec4-81d2e8e8819d | manila_service_network | | +--------------------------------------+------------------------+----------------------------------------------------+ [stack@devstack-large-vm ~]$ [admin] neutron subnet-list +--------------------------------------+----------------+---------------+------------------------------------------------+ | id | name | cidr | allocation_pools | +--------------------------------------+----------------+---------------+------------------------------------------------+ | ecc6a3b1-ec93-424a-bd56-83a18bcd1d09 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} | | e6092271-0a1b-45ad-abf7-71830bbe082c | public-subnet | 172.24.4.0/24 | {"start": "172.24.4.2", "end": "172.24.4.254"} | +--------------------------------------+----------------+---------------+------------------------------------------------+
- In my case, the devstack VM (which acts as the host system, aka the openstack node) is on the 192.168.122.0/24 network, which is not compatible with 172.24.4.0/24 of the public network. In order for external connectivity to work, the public neutron network should use the same network CIDR as the host/provider network.
- So delete the existing public network and create a new public network that matches our host/provider network.
[stack@devstack-large-vm ~]$ [admin] neutron subnet-delete e6092271-0a1b-45ad-abf7-71830bbe082c Unable to complete operation on subnet e6092271-0a1b-45ad-abf7-71830bbe082c. One or more ports have an IP allocation from this subnet. The above error is because there is a router created by devstack thats connected to this subnet, as can be seen below : [stack@devstack-large-vm ~]$ [admin] ip netns qdhcp-5b5873bd-f4c9-42c3-aae9-5a3eccb7ec2f qrouter-0de42c4e-f69c-472b-85a5-d543ba0a658f [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-0de42c4e-f69c-472b-85a5-d543ba0a658f ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 21: qr-e9b6f516-6f: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether fa:16:3e:e8:1e:1c brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-e9b6f516-6f valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fee8:1e1c/64 scope link valid_lft forever preferred_lft forever 22: qg-6bd144e1-a2: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether fa:16:3e:54:8c:23 brd ff:ff:ff:ff:ff:ff inet 172.24.4.2/24 brd 172.24.4.255 scope global qg-6bd144e1-a2 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe54:8c23/64 scope link valid_lft forever preferred_lft forever [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-0de42c4e-f69c-472b-85a5-d543ba0a658f ip route default via 172.24.4.1 dev qg-6bd144e1-a2 10.0.0.0/24 dev qr-e9b6f516-6f proto kernel scope link src 10.0.0.1 172.24.4.0/24 dev qg-6bd144e1-a2 proto kernel scope link src 172.24.4.2 [stack@devstack-large-vm ~]$ [admin] * Disconnect the router from the subnet [stack@devstack-large-vm ~]$ [admin] neutron router-list Starting new HTTP connection (1): 192.168.122.219 Starting new HTTP connection (1): 192.168.122.219 +--------------------------------------+---------+-----------------------------------------------------------------------------+ | id | name | external_gateway_info | +--------------------------------------+---------+-----------------------------------------------------------------------------+ | 0de42c4e-f69c-472b-85a5-d543ba0a658f | router1 | {"network_id": "c7e06e15-c69c-4b25-96b9-474e8c394461", "enable_snat": true} | +--------------------------------------+---------+-----------------------------------------------------------------------------+ [stack@devstack-large-vm ~]$ [admin] neutron router-gateway-clear router1 Removed gateway from router router1 [stack@devstack-large-vm ~]$ [admin] As we can see below, the gateway device and route entries are cleared [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-0de42c4e-f69c-472b-85a5-d543ba0a658f ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 21: qr-e9b6f516-6f: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether fa:16:3e:e8:1e:1c brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-e9b6f516-6f valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fee8:1e1c/64 scope link valid_lft forever preferred_lft forever 
[stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-0de42c4e-f69c-472b-85a5-d543ba0a658f ip route 10.0.0.0/24 dev qr-e9b6f516-6f proto kernel scope link src 10.0.0.1 * Now try the subnet-delete [stack@devstack-large-vm ~]$ [admin] neutron subnet-delete e6092271-0a1b-45ad-abf7-71830bbe082c Deleted subnet: e6092271-0a1b-45ad-abf7-71830bbe082c [stack@devstack-large-vm ~]$ [admin] neutron subnet-list +--------------------------------------+----------------+-------------+--------------------------------------------+ | id | name | cidr | allocation_pools | +--------------------------------------+----------------+-------------+--------------------------------------------+ | ecc6a3b1-ec93-424a-bd56-83a18bcd1d09 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} | +--------------------------------------+----------------+-------------+--------------------------------------------+ * Now delete the public network itself [stack@devstack-large-vm ~]$ [admin] neutron net-delete public Deleted net: c7e06e15-c69c-4b25-96b9-474e8c394461 * Create a new public network [stack@devstack-large-vm ~]$ [admin] neutron net-create --router:external public-192 * Create a new public subnet that matches our host/provider network [stack@devstack-large-vm ~]$ [admin] neutron subnet-create --name public-192-subnet --disable-dhcp --gateway 192.168.122.1 --allocation-pool start=192.168.122.10,end=192.168.122.20 public-192 192.168.122.0/24 * At the end, this is how the public net and subnet should look like [stack@devstack-large-vm devstack]$ [admin] neutron net-list +--------------------------------------+------------------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------------+-------------------------------------------------------+ | 0ceb9ffd-8ad1-49ae-ad0e-552735b1d7a1 | private | c34c0222-f95b-450d-a131-d13b8fc1d611 10.0.0.0/24 | | 9c33f42b-4d3f-4935-8363-d6c809e88f45 | public-192 | 17bfc586-199c-44cc-ab61-23e01d9db108 192.168.122.0/24 | | 8b33a048-5d3b-47e9-89e5-6b9156c45972 | manila_service_network | | +--------------------------------------+------------------------+-------------------------------------------------------+ [stack@devstack-large-vm devstack]$ [admin] neutron subnet-list +--------------------------------------+-------------------+------------------+------------------------------------------------------+ | id | name | cidr | allocation_pools | +--------------------------------------+-------------------+------------------+------------------------------------------------------+ | c34c0222-f95b-450d-a131-d13b8fc1d611 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} | | 17bfc586-199c-44cc-ab61-23e01d9db108 | public-192-subnet | 192.168.122.0/24 | {"start": "192.168.122.10", "end": "192.168.122.20"} | +--------------------------------------+-------------------+------------------+------------------------------------------------------+ * For instances to get the right nameserver, the instance's pvt subnet need to be updated to have the dns-nameserver set. [stack@devstack-large-vm ~]$ [admin] neutron subnet-update --dns-nameserver 8.8.8.8 c34c0222-f95b-450d-a131-d13b8fc1d611 Updated subnet: c34c0222-f95b-450d-a131-d13b8fc1d611
- Set router1's gateway to the newly created public-192 network
[stack@devstack-large-vm ~]$ [admin] neutron router-show router1 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | admin_state_up | True | | distributed | False | | external_gateway_info | | | ha | False | | id | 92d5b731-4c19-4444-9f38-8f35f5a4f851 | | name | router1 | | routes | | | status | ACTIVE | | tenant_id | 23495ebceb934550a7a26158c29df7f9 | +-----------------------+--------------------------------------+ [stack@devstack-large-vm ~]$ [admin] neutron router-gateway-set router1 9c33f42b-4d3f-4935-8363-d6c809e88f45 Set gateway for router router1 [stack@devstack-large-vm ~]$ [admin] neutron router-show router1 +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | admin_state_up | True | | distributed | False | | external_gateway_info | {"network_id": "9c33f42b-4d3f-4935-8363-d6c809e88f45", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "17bfc586-199c-44cc-ab61-23e01d9db108", "ip_address": "192.168.122.10"}]} | | ha | False | | id | 92d5b731-4c19-4444-9f38-8f35f5a4f851 | | name | router1 | | routes | | | status | ACTIVE | | tenant_id | 23495ebceb934550a7a26158c29df7f9 | +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- If everything above worked well, you should be able to ping the private network gateway and the router gateway from router1's namespace
[stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 8: qr-d89067c9-20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether fa:16:3e:dc:fa:4b brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-d89067c9-20 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fedc:fa4b/64 scope link valid_lft forever preferred_lft forever 12: qg-2e3c241c-6a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether fa:16:3e:07:e0:d8 brd ff:ff:ff:ff:ff:ff inet 192.168.122.10/24 brd 192.168.122.255 scope global qg-2e3c241c-6a valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe07:e0d8/64 scope link valid_lft forever preferred_lft forever [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ping 192.168.122.10 PING 192.168.122.10 (192.168.122.10) 56(84) bytes of data. 64 bytes from 192.168.122.10: icmp_seq=1 ttl=64 time=0.045 ms 64 bytes from 192.168.122.10: icmp_seq=2 ttl=64 time=0.044 ms ^C --- 192.168.122.10 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.044/0.044/0.045/0.006 ms [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ping 10.0.0.1 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms ^C --- 10.0.0.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.122.1 0.0.0.0 UG 0 0 0 qg-2e3c241c-6a 10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-d89067c9-20 192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-2e3c241c-6a
- Kill the dhclient process running on the devstack VM/host, otherwise it thinks that eth0 lost its IP (after we move eth0 into br-ex in the next step) and re-assigns the IP to it, which we don't want.
[stack@devstack-large-vm devstack]$ [admin] ps aux| grep dhclient
root       743  0.0  0.3 102376 14592 ?   Ss   04:34   0:00 /sbin/dhclient -H devstack-large-vm -1 -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid eth0
stack    24964  0.0  0.0 112680  2248 pts/30   S+   07:07   0:00 grep --color=auto dhclient
[stack@devstack-large-vm devstack]$ [admin] sudo kill -9 743
[stack@devstack-large-vm devstack]$ [admin] ps aux| grep dhclient
stack    25707  0.0  0.0 112676  2232 pts/30   S+   07:12   0:00 grep --color=auto dhclient
- For external connectivity to work, we need to put eth0 of the devstack host as a port inside neutron's br-ex OVS bridge. The command below does that. IMPORTANT NOTE: Run it as a single bash command exactly as shown; doing it any other way will cause you to lose connectivity to your devstack VM/Host! Of course, replace the IP address and other values with your setup-specific ones.
[stack@devstack-large-vm ~]$ [admin] sudo ip addr del 192.168.122.107/24 dev eth0 && sudo ip addr add 192.168.122.107/24 dev br-ex && sudo ifconfig br-ex up && sudo ovs-vsctl add-port br-ex eth0 && sudo ip route add default via 192.168.122.1 dev br-ex [stack@devstack-large-vm ~]$ [admin] ifconfig br-ex br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.122.107 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::7cff:f6ff:fe32:8f41 prefixlen 64 scopeid 0x20<link> ether 7e:ff:f6:32:8f:41 txqueuelen 0 (Ethernet) RX packets 25 bytes 1970 (1.9 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 14 bytes 1952 (1.9 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [stack@devstack-large-vm ~]$ [admin] ifconfig eth0 eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::5054:ff:feae:e02e prefixlen 64 scopeid 0x20<link> ether 52:54:00:ae:e0:2e txqueuelen 1000 (Ethernet) RX packets 9116 bytes 619038 (604.5 KiB) RX errors 0 dropped 13 overruns 0 frame 0 TX packets 3765 bytes 1462794 (1.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [stack@devstack-large-vm ~]$ [admin] sudo route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.122.1 0.0.0.0 UG 0 0 0 br-ex 192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ex [stack@devstack-large-vm ~]$ [admin] sudo ovs-vsctl show b06da86b-a3bb-4e11-a3ee-57041e26f6e0 Bridge br-ex Port "qg-2e3c241c-6a" Interface "qg-2e3c241c-6a" type: internal Port br-ex Interface br-ex type: internal Port "eth0" Interface "eth0" Port "qg-5a15a37b-7f" Interface "qg-5a15a37b-7f" type: internal Bridge br-int fail_mode: secure Port "tap7ca9063f-86" tag: 2 Interface "tap7ca9063f-86" type: internal Port "tapb58c0ae7-18" tag: 1 Interface "tapb58c0ae7-18" type: internal Port "qr-d89067c9-20" tag: 2 Interface "qr-d89067c9-20" type: internal Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port br-tun Interface br-tun type: internal ovs_version: "2.3.1" [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ping 192.168.122.10 PING 192.168.122.10 (192.168.122.10) 56(84) bytes of data. 64 bytes from 192.168.122.10: icmp_seq=1 ttl=64 time=0.053 ms 64 bytes from 192.168.122.10: icmp_seq=2 ttl=64 time=0.040 ms ^C --- 192.168.122.10 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.040/0.046/0.053/0.009 ms [stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ping 192.168.122.137 PING 192.168.122.137 (192.168.122.137) 56(84) bytes of data. 64 bytes from 192.168.122.137: icmp_seq=1 ttl=64 time=1.03 ms 64 bytes from 192.168.122.137: icmp_seq=2 ttl=64 time=0.234 ms ^C --- 192.168.122.137 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.234/0.633/1.032/0.399 ms [stack@devstack-large-vm devstack]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=33.2 ms ^C --- 8.8.8.8 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 33.273/33.273/33.273/0.000 ms
- As you can see above, eth0 is added as a br-ex port and we are now able to ping router1's gateway, scratchpad-vm (GlusterFS server IP: 192.168.122.137) and a public DNS server (8.8.8.8). It's very important to get this working; if it doesn't, stop, fix/debug, and don't proceed until it works!
- At the end of this whole exercise, make sure the route entries on your devstack VM/Host look like the below:
[stack@devstack-large-vm devstack]$ [admin] sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 br-ex
- This brings us to the end of the steps needed to enable external connectivity for VMs in devstack. If you reached here successfully, then congrats, and let's create the Nova VM now!
Create a Nova VM
- Let's create a new Nova VM that will act as a tenant VM for us. I used the pre-existing Fedora 20 cloud image in my devstack setup.
[stack@devstack-large-vm devstack]$ [admin] glance image-list +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+ | ee3fc40f-cd32-49fc-a5b8-67a8b7415097 | cirros-0.3.2-x86_64-uec | ami | ami | 25165824 | active | | 1c912eae-98eb-4b46-bd47-28cbdb47da72 | cirros-0.3.2-x86_64-uec-kernel | aki | aki | 4969360 | active | | b251a3b2-29fd-4f1f-b705-4e428a3a9c9b | cirros-0.3.2-x86_64-uec-ramdisk | ari | ari | 3723817 | active | | f189096e-ca30-426b-900d-17872be5bd3f | Fedora-x86_64-20-20140618-sda | qcow2 | bare | 209649664 | active | | b8799c2b-87a5-4067-9aee-8ca7cb52f787 | ubuntu_1204_nfs_cifs | qcow2 | bare | 318701568 | active | +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+ [stack@devstack-large-vm devstack]$ [admin] nova keypair-add --pub_key ~/.ssh/id_rsa.pub dpk_pubkey [stack@devstack-large-vm devstack]$ [admin] nova keypair-list +------------+-------------------------------------------------+ | Name | Fingerprint | +------------+-------------------------------------------------+ | dpk_pubkey | 11:63:cc:58:fd:d6:70:84:0b:70:e6:f1:1d:68:51:7d | +------------+-------------------------------------------------+ [stack@devstack-large-vm devstack]$ [admin] neutron net-list +--------------------------------------+------------------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------------+-------------------------------------------------------+ | 0ceb9ffd-8ad1-49ae-ad0e-552735b1d7a1 | private | c34c0222-f95b-450d-a131-d13b8fc1d611 10.0.0.0/24 | | 9c33f42b-4d3f-4935-8363-d6c809e88f45 | public-192 | 17bfc586-199c-44cc-ab61-23e01d9db108 192.168.122.0/24 | | 8b33a048-5d3b-47e9-89e5-6b9156c45972 | manila_service_network | | +--------------------------------------+------------------------+-------------------------------------------------------+ [stack@devstack-large-vm devstack]$ [admin] nova boot --image f189096e-ca30-426b-900d-17872be5bd3f --key dpk_pubkey --flavor m1.heat --nic net-id=0ceb9ffd-8ad1-49ae-ad0e-552735b1d7a1 --poll dpkvm-f20 +--------------------------------------+----------------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | 2PR5CbVX8hy8 | | config_drive | | | created | 2015-01-22T09:24:31Z | | flavor | m1.heat (451) | | hostId | | | id | 30bbc325-ee30-45ea-84c3-0a39b656cc80 | | image | Fedora-x86_64-20-20140618-sda (f189096e-ca30-426b-900d-17872be5bd3f) | | key_name | dpk_pubkey | | metadata | {} | | name | dpkvm-f20 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 81e2e26383554c8791e6ca68f7502564 | | updated | 
2015-01-22T09:24:31Z | | user_id | 78a08a5bffd74be1bcebe29485f7c217 | +--------------------------------------+----------------------------------------------------------------------+ Server building... 100% complete Finished [stack@devstack-large-vm devstack]$ [admin] nova list +--------------------------------------+-----------+---------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-----------+---------+------------+-------------+------------------+ | 30bbc325-ee30-45ea-84c3-0a39b656cc80 | dpkvm-f20 | ACTIVE | - | Running | private=10.0.0.6 | +--------------------------------------+-----------+---------+------------+-------------+------------------+
- We cannot ping/ssh the VM until we enable these protocols in the neutron security group. Find neutron's default security group and allow these protocols for inbound access.
* Find the default neutron security group that corresponds to my tenant ID (I am logged in as tenant `admin`) [stack@devstack-large-vm devstack]$ [admin] neutron security-group-list +--------------------------------------+----------------+----------------------------+ | id | name | description | +--------------------------------------+----------------+----------------------------+ | 0e800bdb-402b-4a8a-b6a8-fe0bfda7b36d | default | Default security group | | b93b3553-8945-4e51-958b-786a52e3f486 | manila-service | manila-service description | | c83c7575-3111-40e6-8106-2e6702d21f96 | default | Default security group | | cbca0ad6-1594-4b91-80e4-b02009fae088 | default | Default security group | +--------------------------------------+----------------+----------------------------+ [stack@devstack-large-vm devstack]$ [admin] keystone tenant-list +----------------------------------+--------------------+---------+ | id | name | enabled | +----------------------------------+--------------------+---------+ | 81e2e26383554c8791e6ca68f7502564 | admin | True | | f4bf2270b0da428c979a23b6c13f4481 | alt_demo | True | | 23495ebceb934550a7a26158c29df7f9 | demo | True | | 128aeb233d4e4b8da203eccca4cded8a | invisible_to_admin | True | | a76c0528e82340de9a5c55fd9fad2ccb | service | True | +----------------------------------+--------------------+---------+ * It happens to be security group with id `c83c7575-3111-40e6-8106-2e6702d21f96`. * Use security-group-show to find the right one! [stack@devstack-large-vm devstack]$ [admin] neutron security-group-show c83c7575-3111-40e6-8106-2e6702d21f96 +----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | description | Default security group | | id | c83c7575-3111-40e6-8106-2e6702d21f96 | | name | default | | security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv6", "id": "4cf714df-00dd-4961-aa3d-c963bc2e6cd8"} | | | {"remote_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "direction": "ingress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv6", "id": "54c8aefb-9d96-4133-b53b-8cb507e563ee"} | | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp", "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": 22, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": 22, "ethertype": "IPv4", "id": "65010da6-bffb-4945-8112-b154b770b5a5"} | | | {"remote_group_id": 
"c83c7575-3111-40e6-8106-2e6702d21f96", "direction": "ingress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv4", "id": "9ea3d7c4-75a4-43b4-841b-9dc553f5ec8b"} | | | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv4", "id": "a01220f0-13b5-49f1-b9c2-9644664818dc"} | | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": null, "protocol": "icmp", "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv4", "id": "d5322994-6b58-4165-a66c-f50aa2eaf8e1"} | | tenant_id | 81e2e26383554c8791e6ca68f7502564 | +----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ [stack@devstack-large-vm devstack]$ [admin] neutron security-group-rule-create --protocol icmp --direction ingress c83c7575-3111-40e6-8106-2e6702d21f96 Created a new security_group_rule: +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | direction | ingress | | ethertype | IPv4 | | id | d5322994-6b58-4165-a66c-f50aa2eaf8e1 | | port_range_max | | | port_range_min | | | protocol | icmp | | remote_group_id | | | remote_ip_prefix | | | security_group_id | c83c7575-3111-40e6-8106-2e6702d21f96 | | tenant_id | 81e2e26383554c8791e6ca68f7502564 | +-------------------+--------------------------------------+ [stack@devstack-large-vm devstack]$ [admin] neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress c83c7575-3111-40e6-8106-2e6702d21f96 Created a new security_group_rule: +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | direction | ingress | | ethertype | IPv4 | | id | 65010da6-bffb-4945-8112-b154b770b5a5 | | port_range_max | 22 | | port_range_min | 22 | | protocol | tcp | | remote_group_id | | | remote_ip_prefix | | | security_group_id | c83c7575-3111-40e6-8106-2e6702d21f96 | | tenant_id | 81e2e26383554c8791e6ca68f7502564 | +-------------------+--------------------------------------+ [stack@devstack-large-vm devstack]$ [admin] neutron security-group-show c83c7575-3111-40e6-8106-2e6702d21f96 +----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | 
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | description | Default security group | | id | c83c7575-3111-40e6-8106-2e6702d21f96 | | name | default | | security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv6", "id": "4cf714df-00dd-4961-aa3d-c963bc2e6cd8"} | | | {"remote_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "direction": "ingress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv6", "id": "54c8aefb-9d96-4133-b53b-8cb507e563ee"} | | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp", "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": 22, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": 22, "ethertype": "IPv4", "id": "65010da6-bffb-4945-8112-b154b770b5a5"} | | | {"remote_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "direction": "ingress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv4", "id": "9ea3d7c4-75a4-43b4-841b-9dc553f5ec8b"} | | | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv4", "id": "a01220f0-13b5-49f1-b9c2-9644664818dc"} | | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": null, "protocol": "icmp", "tenant_id": "81e2e26383554c8791e6ca68f7502564", "port_range_max": null, "security_group_id": "c83c7575-3111-40e6-8106-2e6702d21f96", "port_range_min": null, "ethertype": "IPv4", "id": "d5322994-6b58-4165-a66c-f50aa2eaf8e1"} | | tenant_id | 81e2e26383554c8791e6ca68f7502564 | +----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- Now login to the VM and try to ping the different IPs. If all is well, you should be able to ping all the IPs successfully.
[stack@devstack-large-vm devstack]$ [admin] nova list +--------------------------------------+-----------+---------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-----------+---------+------------+-------------+------------------+ | da0d20c6-136b-41ea-851d-ccea544fc6ee | dpkvm | SHUTOFF | - | Shutdown | private=10.0.0.3 | | 30bbc325-ee30-45ea-84c3-0a39b656cc80 | dpkvm-f20 | ACTIVE | - | Running | private=10.0.0.6 | +--------------------------------------+-----------+---------+------------+-------------+------------------+ [stack@devstack-large-vm devstack]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 ssh -i ~/.ssh/id_rsa fedora@10.0.0.6 [fedora@dpkvm-f20 ~]$ [fedora@dpkvm-f20 ~]$ hostname dpkvm-f20.novalocal [fedora@dpkvm-f20 ~]$ cat /etc/resolv.conf ; generated by /usr/sbin/dhclient-script search openstacklocal novalocal nameserver 8.8.8.8 [fedora@dpkvm-f20 ~]$ ping google.com PING google.com (64.233.187.101) 56(84) bytes of data. 64 bytes from 64.233.187.101: icmp_seq=1 ttl=39 time=137 ms 64 bytes from 64.233.187.101: icmp_seq=2 ttl=39 time=136 ms ^C --- google.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 136.051/136.972/137.893/0.921 ms [fedora@dpkvm-f20 ~]$ ping 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=63 time=0.837 ms ^C --- 192.168.122.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.837/0.837/0.837/0.000 ms [fedora@dpkvm-f20 ~]$ ping 192.168.122.137 PING 192.168.122.137 (192.168.122.137) 56(84) bytes of data. 64 bytes from 192.168.122.137: icmp_seq=1 ttl=63 time=2.11 ms 64 bytes from 192.168.122.137: icmp_seq=2 ttl=63 time=0.771 ms ^C --- 192.168.122.137 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.771/1.445/2.119/0.674 ms * I was even able to login to GlusterFS server from the VM! [fedora@dpkvm-f20 ~]$ ssh root@192.168.122.137 The authenticity of host '192.168.122.137 (192.168.122.137)' can't be established. RSA key fingerprint is 3b:e3:49:94:0d:84:4f:68:f8:0a:6d:00:80:12:98:55. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '192.168.122.137' (RSA) to the list of known hosts. root@192.168.122.137's password: Last failed login: Thu Jan 22 09:41:16 UTC 2015 from 192.168.122.10 on ssh:notty There was 1 failed login attempt since the last successful login. Last login: Thu Jan 22 05:12:02 2015 from 192.168.122.1 [root@scratchpad-vm ~]# hostname scratchpad-vm [root@scratchpad-vm ~]# * If you reached till here, it means your devstack networking is setup up fine.
Setup Nova VM to act as GlusterFS client
- In order for our newly created Nova VM to act as a glusterfs client, we need to do a few things:
- Install GlusterFS client RPMs
- Copy the SSL/TLS client certificates
- In the real world, the above will be done by the glusterfs admin/deployer as part of creating tenant-specific glance images.
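- For illustration, something like the below could bake the client bits into a glance image ahead of time (a rough sketch using libguestfs' virt-customize; the image name and file locations are made up for this example):
$ virt-customize -a Fedora-x86_64-20-tenant.qcow2 --install glusterfs-fuse \
      --copy-in glusterfs.pem:/etc/ssl --copy-in glusterfs.key:/etc/ssl --copy-in glusterfs.ca:/etc/ssl
$ glance image-create --name fedora-20-glusterfs-client --disk-format qcow2 \
      --container-format bare --file Fedora-x86_64-20-tenant.qcow2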
- Install the GlusterFS client RPMs. Since my VM is an F20 VM, I just installed the glusterfs-fuse rpm using yum install. Note: You might have to use sudo for commands needing root access, since cloud VM images typically don't provide direct root access.
[root@dpkvm-f20 mnt]# rpm -qa| grep glusterfs
glusterfs-libs-3.5.3-1.fc20.x86_64
glusterfs-fuse-3.5.3-1.fc20.x86_64
glusterfs-3.5.3-1.fc20.x86_64
- It's also OK to create a gluster-nightly repo (as we did on the GlusterFS server) and install glusterfs-fuse from it, so that the client and server glusterfs packages are in sync. For me this worked, so I just continued with it.
- Copy the 3 certificate files that we created on the GlusterFS server into the Nova VM. Note: You cannot access the Nova VM from the GlusterFS server, so I copied the 3 files to my devstack VM/Host first and then copied them into the Nova VM as below:
[stack@devstack-large-vm ~]$ [admin] sudo ip netns exec qrouter-92d5b731-4c19-4444-9f38-8f35f5a4f851 scp -i ~/.ssh/id_rsa /etc/ssl/glusterfs.* fedora@10.0.0.6:~/ glusterfs.ca 100% 765 0.8KB/s 00:00 glusterfs.key 100% 887 0.9KB/s 00:00 glusterfs.pem 100% 765 0.8KB/s 00:00 * Note: we need to copy as user fedora so can't copy into /etc/ssl on nova VM directly, as root user is disabled for cloud VMs * Now inside the fedora VM [fedora@dpkvm-f20 ~]$ ls -l total 12 -rw-r--r--. 1 fedora fedora 765 Jan 22 10:20 glusterfs.ca -rw-r--r--. 1 fedora fedora 887 Jan 22 10:20 glusterfs.key -rw-r--r--. 1 fedora fedora 765 Jan 22 10:20 glusterfs.pem [fedora@dpkvm-f20 ~]$ sudo cp glusterfs.* /etc/ssl/ [fedora@dpkvm-f20 ~]$ ls -l /etc/ssl/ total 12 lrwxrwxrwx. 1 root root 16 Jun 18 2014 certs -> ../pki/tls/certs -rw-r--r--. 1 root root 765 Jan 22 10:20 glusterfs.ca -rw-r--r--. 1 root root 887 Jan 22 10:20 glusterfs.key -rw-r--r--. 1 root root 765 Jan 22 10:20 glusterfs.pem
- NOTE: Having the same certificate files on both client and server is the quickest way to set up mutual trust between the two systems. In a real-world setup, the admin/deployer will create separate client and server certificates and set them up accordingly.
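- For illustration, a sketch of what separate, CA-signed certificates could look like (the names here are made up; the per-client common name is what you would later pass to manila access-allow):
# One-time CA, kept by the admin:
$ openssl genrsa -out ca.key 2048
$ openssl req -new -x509 -key ca.key -subj /CN=glusterfs-ca -out ca.pem
# Per client (and similarly per server): own key + CSR, signed by the CA:
$ openssl genrsa -out glusterfs.key 2048
$ openssl req -new -key glusterfs.key -subj /CN=client.example.com -out client.csr
$ openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out glusterfs.pem
# On every node, glusterfs.ca then holds the CA certificate rather than a copy of one's own cert:
$ cp ca.pem glusterfs.ca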
- Edit the /etc/hosts file in the Nova VM and add an entry for the GlusterFS server. It seems the GlusterFS client looks up the hostname (even when an IP is provided) during mount, hence the need for this step. Use sudo or sudo -s bash to get a root shell inside the Nova VM.
[root@dpkvm-f20 mnt]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.137 scratchpad-vm
- Not doing the above will cause your glusterfs mount to fail with this error:
[2015-01-22 10:21:52.748560] E [common-utils.c:223:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Name or service not known)
[2015-01-22 10:21:52.748606] E [name.c:249:af_inet_client_get_remote_sockaddr] 0-gv0-client-0: DNS resolution failed on host scratchpad-vm
Create Manila share and access it from Nova VM
- Create a new Manila share
[stack@devstack-large-vm ~]$ [admin] manila create glusterfs 1 [stack@devstack-large-vm ~]$ [admin] manila list +--------------------------------------+------+------+-------------+-----------+-------------+--------------------+-------------------------------------------+ | ID | Name | Size | Share Proto | Status | Volume Type | Export location | Host | +--------------------------------------+------+------+-------------+-----------+-------------+--------------------+-------------------------------------------+ | 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 | None | 1 | GLUSTERFS | available | None | scratchpad-vm:/gv0 | devstack-large-vm.localdomain@gluster-gv0 | +--------------------------------------+------+------+-------------+-----------+-------------+--------------------+-------------------------------------------+ [stack@devstack-large-vm ~]$ [admin] manila access-list 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 +----+-------------+-----------+-------+ | id | access type | access to | state | +----+-------------+-----------+-------+ +----+-------------+-----------+-------+
- Allow access to the share from the Nova VM we created above
[stack@devstack-large-vm ~]$ [admin] manila access-allow 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 cert client.example.com +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | share_id | 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 | | deleted | False | | created_at | 2015-01-22T10:13:56.292741 | | updated_at | None | | access_type | cert | | access_to | client.example.com | | state | new | | deleted_at | None | | id | 42c49e75-36c1-40e8-9073-a7a8b8bc6660 | +-------------+--------------------------------------+ [stack@devstack-large-vm ~]$ [admin] manila access-list 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 +--------------------------------------+-------------+--------------------+--------+ | id | access type | access to | state | +--------------------------------------+-------------+--------------------+--------+ | 42c49e75-36c1-40e8-9073-a7a8b8bc6660 | cert | client.example.com | active | +--------------------------------------+-------------+--------------------+--------+
- Post a successful manila access-allow, the gluster volume will look like this. Notice that auth.ssl-allow is set to the entity that we want to allow access to.
[root@scratchpad-vm ~]# gluster volume info gv0

Volume Name: gv0
Type: Distribute
Volume ID: 784d0159-e20d-45c0-8ee0-2ad2f7292584
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: scratchpad-vm:/bricks/gv0-brick0
Options Reconfigured:
auth.ssl-allow: client.example.com
nfs.export-volumes: off
client.ssl: on
server.ssl: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
- Mount the Manila share from the Nova VM
[root@dpkvm-f20 mnt]# hostname dpkvm-f20.novalocal [fedora@dpkvm-f20 ~]$ sudo mount -t glusterfs 192.168.122.137:/gv0 /mnt [fedora@dpkvm-f20 ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/vda1 2.0G 943M 1007M 49% / devtmpfs 238M 0 238M 0% /dev tmpfs 246M 0 246M 0% /dev/shm tmpfs 246M 8.3M 238M 4% /run tmpfs 246M 0 246M 0% /sys/fs/cgroup 192.168.122.137:/gv0 50G 927M 49G 2% /mnt [fedora@dpkvm-f20 ~]$ ls -la /mnt/ total 8 drwxr-xr-x. 3 root root 4096 Sep 24 09:37 . dr-xr-xr-x. 18 root root 4096 Jan 22 09:39 .. [fedora@dpkvm-f20 ~]$ mount ... ... 192.168.122.137:/gv0 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
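- If you want the mount to survive reboots of the Nova VM, a line like the below in the VM's /etc/fstab should do it (a sketch; _netdev ensures the network is up before the mount is attempted):
192.168.122.137:/gv0  /mnt  glusterfs  defaults,_netdev  0 0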
- Unmount it, deny access to the share using Manila, and check that the mount now fails
[fedora@dpkvm-f20 ~]$ sudo umount /mnt [fedora@dpkvm-f20 ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/vda1 2.0G 943M 1007M 49% / devtmpfs 238M 0 238M 0% /dev tmpfs 246M 0 246M 0% /dev/shm tmpfs 246M 8.3M 238M 4% /run tmpfs 246M 0 246M 0% /sys/fs/cgroup [stack@devstack-large-vm ~]$ [admin] manila access-list 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 +--------------------------------------+-------------+--------------------+--------+ | id | access type | access to | state | +--------------------------------------+-------------+--------------------+--------+ | 42c49e75-36c1-40e8-9073-a7a8b8bc6660 | cert | client.example.com | active | +--------------------------------------+-------------+--------------------+--------+ [stack@devstack-large-vm ~]$ [admin] manila list +--------------------------------------+------+------+-------------+-----------+-------------+--------------------+-------------------------------------------+ | ID | Name | Size | Share Proto | Status | Volume Type | Export location | Host | +--------------------------------------+------+------+-------------+-----------+-------------+--------------------+-------------------------------------------+ | 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 | None | 1 | GLUSTERFS | available | None | scratchpad-vm:/gv0 | devstack-large-vm.localdomain@gluster-gv0 | +--------------------------------------+------+------+-------------+-----------+-------------+--------------------+-------------------------------------------+ [stack@devstack-large-vm ~]$ [admin] manila access-deny 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 42c49e75-36c1-40e8-9073-a7a8b8bc6660 [stack@devstack-large-vm ~]$ [admin] manila access-list 9aa57dcf-27d9-49dc-8a2d-69a6686c8694 +----+-------------+-----------+-------+ | id | access type | access to | state | +----+-------------+-----------+-------+ +----+-------------+-----------+-------+ * Notice that auth.ssl-allow is now removed! [root@scratchpad-vm ~]# gluster volume info gv0 Volume Name: gv0 Type: Distribute Volume ID: 784d0159-e20d-45c0-8ee0-2ad2f7292584 Status: Started Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: scratchpad-vm:/bricks/gv0-brick0 Options Reconfigured: nfs.export-volumes: off client.ssl: on server.ssl: on snap-max-hard-limit: 256 snap-max-soft-limit: 90 auto-delete: disable [root@scratchpad-vm ~]# [fedora@dpkvm-f20 ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/vda1 2.0G 943M 1007M 49% / devtmpfs 238M 0 238M 0% /dev tmpfs 246M 0 246M 0% /dev/shm tmpfs 246M 8.3M 238M 4% /run tmpfs 246M 0 246M 0% /sys/fs/cgroup [fedora@dpkvm-f20 ~]$ sudo mount -t glusterfs 192.168.122.137:/gv0 /mnt Mount failed. Please check the log file for more details. ← expected!!! 
* Client logs shows mount failed due to Auth failure - which is expected [2015-01-22 10:44:56.782796] I [socket.c:358:ssl_setup_connection] 0-gv0-client-0: peer CN = client.example.com [2015-01-22 10:44:56.784891] I [client-handshake.c:1677:select_server_supported_programs] 0-gv0-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2015-01-22 10:44:56.786702] W [client-handshake.c:1371:client_setvolume_cbk] 0-gv0-client-0: failed to set the volume (Permission denied) [2015-01-22 10:44:56.786742] W [client-handshake.c:1397:client_setvolume_cbk] 0-gv0-client-0: failed to get 'process-uuid' from reply dict [2015-01-22 10:44:56.786758] E [client-handshake.c:1403:client_setvolume_cbk] 0-gv0-client-0: SETVOLUME on remote-host failed: Authentication failed [2015-01-22 10:44:56.786773] I [client-handshake.c:1489:client_setvolume_cbk] 0-gv0-client-0: sending AUTH_FAILED event [2015-01-22 10:44:56.786813] E [fuse-bridge.c:5081:notify] 0-fuse: Server authenication failed. Shutting down. * Server logs shows an incoming authentication from Nova VM was rejected - which is expected [2015-01-22 10:44:58.096347] I [socket.c:299:ssl_setup_connection] 0-tcp.gv0-server: peer CN = client.example.com [2015-01-22 10:44:58.100356] I [login.c:39:gf_auth] 0-auth/login: connecting user name: client.example.com [2015-01-22 10:44:58.100390] E [server-handshake.c:596:server_setvolume] 0-gv0-server: Cannot authenticate client from dpkvm-f20.novalocal-6672-2015/01/22-10:44:56:693822-gv0-client-0-0-0 3.5.3 [2015-01-22 10:44:58.136681] E [socket.c:2397:socket_poller] 0-tcp.gv0-server: error in polling loop
Troubleshooting
- Post reboot of the devstack VM/host, we cannot ping/ssh into the devstack VM
- This happens because we manually executed commands to put eth0 as a port inside br-ex and we didn't persist that configuration.
- Persisting that configuration requires one to play with the ifcfg-{eth0,br-ex} files. Google it! (A rough sketch of such files is shown after this list.)
- Post reboot, eth0 has an IP assigned, br-ex is down, and route -n shows eth0 as the outgoing interface! The manual way of fixing the issue is as below:
- Login to the devstack VM as root using the console and do the following:
- Kill the dhclient process so that eth0 doesn't get an IP again. Once that's done, run: sudo ip addr del 192.168.122.107/24 dev eth0 && sudo ip addr add 192.168.122.107/24 dev br-ex && sudo ifconfig br-ex up && sudo ovs-vsctl add-port br-ex eth0 && sudo ip route add default via 192.168.122.1 dev br-ex
- If you get an error like "default entry could not be added", just run sudo ip route add default via 192.168.122.1 dev br-ex again.
- Now you should be able to ping/ssh your devstack VM/Host again!
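- For reference, a rough sketch of ifcfg files that would persist the br-ex setup mentioned in the troubleshooting list above (values taken from this setup; it assumes the openvswitch ifcfg support is installed, so adapt and double-check before rebooting since a mistake here can leave the host unreachable):
[root@devstack-large-vm ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
[root@devstack-large-vm ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.122.107
NETMASK=255.255.255.0
GATEWAY=192.168.122.1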
Additional Reading
Credits
- Thanks to Sridhar Gaddam and Assaf Muller of Red Hat for helping answer my queries and issues related to neutron networking.