OpenStack for Your Environment

UPDATE: Thurs, 15-09-03 at 11:39PM

REASON: Walk-through has been updated to fix an error in the order in which I instructed users to bring up the network interfaces and router for the OpenStack environment. Glad I caught it!


Well, in Part 1 we installed a working OpenStack environment. You feel good! But once you start working with it, you realize you're a little sandboxed (and being sandboxed unwillingly is never any fun).

Reclaim your SSD/HDD

The first thing we need to do is take back your SSD/HDD space! If you're a curious person (and no doubt you are, otherwise you wouldn't be here), then you've already looked through the OpenStack Dashboard. We're going to start calling this Horizon to get you used to the OpenStack components. One thing you've probably overlooked is that OpenStack (and more specifically Nova) is only using a small fraction of your overall SSD/HDD space. Why? Well, remember that OpenStack is typically a multi-node deployment composed of separate functional resources such as Controller, Compute, Block Storage, and Object Storage nodes. You can read more about each of these resources on the OpenStack Wiki, or follow these links to some of the more popular components (all of which apply to our RDO deployment as well):

Keystone: Identity Management
Nova: Compute
Glance: Image Service
Neutron: Networking
Horizon: Dashboard WebUI
Cinder: Block Storage
Swift: Object Storage
Heat: Orchestration
OpenStackClient: Command-Line Client

(we'll get to each of these later)

In our case, we're using an all-in-one stack on a single server rather than a multi-node architecture. Each service's configuration lives in /etc/[openstack_service].

What happened during our RDO OpenStack deployment is that Packstack configured the Nova Compute resource pool (by default) to read and write data from /var/lib/nova/instances/. To our misfortune (and also by default), CentOS 7.x allocates less space to /var/ in favor of the /home/ directory. So we're going to use that to our advantage. To see what I'm talking about, log into Horizon as "admin," go to System > Hypervisors, and look at your Local Storage (Total). It's probably different from what you were expecting. That's what we're going to change now.

Changes to reclaim space

Use the following commands to reclaim your space:

  1. Create the new location for Nova Compute data: sudo mkdir -p /home/virt/kvm/datastore
  2. Copy any data from the existing location to your new location: sudo cp -r /var/lib/nova/instances/* /home/virt/kvm/datastore/.
  3. Remove the old directory: sudo rm -rf /var/lib/nova/instances/
  4. Symbolically link the data to where Nova "thinks" it should be reading it from: sudo ln -s /home/virt/kvm/datastore/ /var/lib/nova/instances
  5. Next, make sure that Nova has permissions on the new location: sudo chown -R nova:nova /home/virt/kvm/datastore/
  6. For good measure, chown the link as well: sudo chown -R nova:nova /var/lib/nova/
  7. Reboot the system by issuing sudo systemctl reboot

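The steps above boil down to a move-and-symlink pattern: relocate the data, then leave a symlink at the old path so nothing that references it breaks. Here's a minimal, self-contained sketch of the same technique using throwaway paths under temporary directories (the directory names and the sample file are hypothetical stand-ins, not the real Nova paths):

```shell
set -e

old=$(mktemp -d)/instances      # stands in for /var/lib/nova/instances
new=$(mktemp -d)/datastore      # stands in for /home/virt/kvm/datastore

# Simulate existing instance data in the old location
mkdir -p "$old"
echo "disk image" > "$old/instance-0001.img"

# 1. Create the new location
mkdir -p "$new"
# 2. Copy existing data over
cp -r "$old/." "$new/"
# 3. Remove the old directory
rm -rf "$old"
# 4. Symlink the new location into the old path
ln -s "$new" "$old"

# The old path still works, but now resolves to the new storage
cat "$old/instance-0001.img"    # prints: disk image
```

The same idea, with sudo and a chown to the nova user, is all the numbered steps are doing.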
This is a quick-and-dirty method for reclaiming space. Please note that it may not always be supported: OpenStack could change Nova's default data location at any time.

I'll get more in-depth about Nova configuration mappings in another installation article (the details are in the /etc/nova/nova.conf file, if you're curious now). If you're starting with a bare metal instance of OpenStack, you can partition your SSD/HDD up front to include a separate, larger /var/lib/nova partition. But since you've already installed the initial OS, and for the sake of keeping this walkthrough dead simple, we're going to continue with the instructions above (which I know work with RDO Kilo at the time of writing).
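For reference, the setting that controls this location lives in /etc/nova/nova.conf. On a Kilo-era RDO install the relevant defaults look roughly like this (shown for context only; you don't need to change them for this walkthrough):

```ini
[DEFAULT]
# Where Nova keeps its state; instance disks default to
# $state_path/instances, i.e. /var/lib/nova/instances,
# which is exactly why the symlink trick above works.
state_path = /var/lib/nova
instances_path = $state_path/instances
```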

That's it! If you get errors starting up machines, the first place to troubleshoot is permissions (and verify your links). If you're copying and pasting from my write-up, then you're great at following instructions (and are likely problem-free)!

A useful OpenStack deployment

So great! We did all that work, and we still have a sandboxed deployment of OpenStack, which isn't really all that fun. Do you feel led on, or like we'll never get to the point where OpenStack (in your environment) is useful? Well, fret no more!

It really all comes down to bridging the OpenStack Open vSwitch interface to the physical interface (what would be eth0, or the "public" side of your OpenStack all-in-one system). Neutron, the default software-defined networking engine for OpenStack, will then forward packets from the OpenStack hosts onward toward your defined default gateway.

To do this, we're going to create a file in /etc/sysconfig/network-scripts called ifcfg-br-ex and make some changes to your default public OpenStack interface. Let's have a look before we start changing things.

[user@galvatron01 network-scripts]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN  
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000  
    link/ether 52:54:00:b9:f1:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.22/24 brd 192.168.1.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 2606:a000:121b:a03d:5054:ff:feb9:f1cc/64 scope global dynamic 
       valid_lft 86221sec preferred_lft 14221sec
    inet6 fe80::5054:ff:feb9:f1cc/64 scope link 
       valid_lft forever preferred_lft forever
3: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000  
    link/ether 52:54:00:bd:f2:21 brd ff:ff:ff:ff:ff:ff
    inet 10.5.50.22/24 brd 10.5.50.255 scope global ens8
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:febd:f221/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  
    link/ether b6:dc:26:45:f2:24 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  
    link/ether 5e:f7:de:f0:1e:41 brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  
    link/ether d2:bb:70:00:88:45 brd ff:ff:ff:ff:ff:ff
7: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN  
    link/ether 32:e2:01:e8:0c:4d brd ff:ff:ff:ff:ff:ff
[user@galvatron01 network-scripts]# 

Looking at the example above, we're going to configure the bridge interface. (Packstack already created the br-ex OVS bridge, as you can see in the ip addr output, but it has no ifcfg file or IP address yet; that's what we're adding.) Create the file with the following contents, adjusted for your environment:

[user@galvatron01 network-scripts]# sudo vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex  
DEVICETYPE=ovs  
TYPE=OVSBridge  
BOOTPROTO=static  
IPADDR=192.168.1.22  
NETMASK=255.255.255.0  
GATEWAY=192.168.1.1  
DNS1=192.168.1.70  
DNS2=192.168.1.66  
DNS3=192.168.1.35  
ONBOOT=yes  

Note that we've changed DEVICE, DEVICETYPE, and TYPE. The rest is taken from the public interface.

Next (again using the example above), ens3, which has the IP address 192.168.1.22, is going to be my public OpenStack interface. We're going to turn it into an OVS port on the bridge by editing its interface file with the following contents:

[user@galvatron01 network-scripts]# sudo vi /etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3  
HWADDR=52:54:00:b9:f1:cc  
TYPE=OVSPort  
DEVICETYPE=ovs  
OVS_BRIDGE=br-ex  
ONBOOT=yes  

PAY ATTENTION TO THE HWADDR FIELD! This MAC address is taken from ens3. Make sure DEVICE and HWADDR match your own public interface's name and MAC address (ens3 and 52:54:00:b9:f1:cc in my example), and leave everything else as it is.

Next we have to make some changes to the Openstack Neutron configuration. This is the first time we're using the openstack-config command.

[user@galvatron01 ~]# openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex

The command above defines "a logical name for our external physical L2 segment, as 'extnet'; this will be referenced as a provider network when we create the external networks." (That's quoted directly from the RDO developers; I couldn't put it any better myself.)
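To make that concrete, after the command runs, the [ovs] section of ovs_neutron_plugin.ini should contain a line like this:

```ini
[ovs]
# Maps the logical provider network name "extnet" to the br-ex OVS bridge
bridge_mappings = extnet:br-ex
```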

Next, we have a Packstack bug to work around. Again, taken from the RDO folks: "This one will overcome a packstack deployment bug where only vxlan is made available." Why quote someone else about a bug? Because it's better to quote someone who knows more than I do!

[user@galvatron01 ~]# openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
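After that command, the [ml2] section of plugin.ini should read something like:

```ini
[ml2]
# flat and vlan are added alongside the vxlan default, so that our
# flat external provider network (extnet) can be created later
type_drivers = vxlan,flat,vlan
```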

Reboot the system using systemctl: sudo systemctl reboot.

When the platform comes up, you're going to start seeing things take shape, but we're not completely done yet.

Once the system is up, look for the keystonerc_admin file that Packstack created when it finished installing RDO OpenStack. Use cat to review the file and make sure the "admin" password is current. I know some of you changed the password already, so if you have, make the appropriate changes in this file.

[user@galvatron01 ~]# cat keystonerc_admin 
unset OS_SERVICE_TOKEN  
export OS_USERNAME=admin  
export OS_PASSWORD=[keystone_unique_password]  
export OS_AUTH_URL=http://[ip_address]:5000/v2.0  
export PS1='[\u@\h \W(keystone_admin)]\$ '  
export OS_TENANT_NAME=admin  
export OS_REGION_NAME=RegionOne  
[user@galvatron01 ~]# 

If you're at a terminal, the easiest way to load these variables is to source the file: source keystonerc_admin. (Simply making the file executable and running it won't work, because the exports would only apply to a subshell.) Alternatively, you can paste the export lines directly into your terminal.
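If sourcing is new to you, here's the pattern with a stand-in rc file (the file name and contents below are made up for the demo; on your system you'd source keystonerc_admin itself):

```shell
# Create a stand-in rc file like keystonerc_admin (contents are fake)
cat > /tmp/demo_rc <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
EOF

# Running the file in a subshell would NOT set the variables here;
# sourcing it with "." (or "source") runs it in the current shell:
. /tmp/demo_rc

echo "$OS_USERNAME"   # the variable is now set in this shell
```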

When your prompt changes to something like [user@galvatron01 ~(keystone_admin)]#, you know you've done it correctly.

Now we're going to create two networks and add them to a router. This will allow your hosts to route properly to your local LAN and out to the internet. But first, let me show you a legend for our setup:

Legend:

extnet (eth0) - The physical provider network mapped to our external interface (the extnet name we defined earlier with bridge_mappings).
jinkit_net - The logical network connected to your LAN segment.
jinkit_sub (192.168.1.0/24) - The subnet details, which need to be adjusted for your LAN environment.
vlan_70 - The interface for a made-up private network. It can really be anything you choose.
vlan_70_sub (10.7.70.0/24) - Network details for vlan_70.
jinkit_router - The software-defined router connected to your LAN.

Next, enter the following command:

[user@galvatron01 ~]# neutron net-create jinkit_net --provider:network_type flat --provider:physical_network extnet  --router:external --shared

We're telling Neutron to create an external network identified by jinkit_net and map it to the physical host segment named extnet (the same extnet we defined earlier with the bridge_mappings setting).

Now we're going to create the subnet for the public network. In our lab scenario, this corresponds to the interface (perhaps eth0) that has internet access. This requires a little bit of understanding, so you may want to reference the legend above to match the examples in my walkthrough with your environment.

[user@galvatron01 ~]# neutron subnet-create --name jinkit_sub --enable_dhcp=True --allocation-pool=start=192.168.1.50,end=192.168.1.254 --gateway=192.168.1.1 jinkit_net 192.168.1.0/24

It's time to create our internal network and its associated router interface. These commands are similar to the ones for our public network.

[user@galvatron01 ~]# neutron net-create vlan_70 --shared
[user@galvatron01 ~]# neutron subnet-create --name vlan_70_sub --enable_dhcp=True --allocation-pool=start=10.7.70.50,end=10.7.70.245 --gateway=10.7.70.1 vlan_70 10.7.70.0/24

Lastly, we need to create the router and identify the interfaces associated with the router.

[user@galvatron01 ~]# neutron router-create jinkit_router
[user@galvatron01 ~]# neutron router-gateway-set jinkit_router jinkit_net
[user@galvatron01 ~]# neutron router-interface-add jinkit_router vlan_70_sub

Pretty easy stuff. Remember: identify your network first, then create the subnets that contain the useful IP address, DHCP, DNS, and gateway information. Lastly, create the router and add your interfaces. If you keep that order, you shouldn't run into any issues.

You can also enter Neutron's interactive mode by simply entering neutron at the CLI. Once there, you can access the help menu by typing help or ? and pressing [enter].

Now let's download an image for you to test the platform with. Cirros is great because it gives you the username and password at first boot. Nice folks, eh?!

[user@galvatron01 ~]# curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | glance \
         image-create --name='cirros image' --is-public=true  --container-format=bare --disk-format=qcow2

So that's our first Glance command! Man, we're zipping through them in this tutorial. This fetches the Cirros image and imports it into OpenStack for future use (via Horizon or the CLI).

Now, at this point I like to go into Horizon myself and add a user. You could do this on the command-line. Here's how:

[user@galvatron01 ~]# keystone tenant-create --name internal --description "internal tenant" --enabled true
[user@galvatron01 ~]# keystone user-create --name internal --tenant internal --pass "foo" --email bar@corp.com --enabled true

OK, so there's one last thing you'll need to do. When your instances boot, you'll notice an error message in the log along the lines of 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [49/120s] and url error [timed out]. To resolve this, you'll need to adjust your /etc/neutron/dhcp_agent.ini configuration after you have configured your networks and attached them to your project router (as we have already done in our walkthrough). Look for the following lines in the dhcp_agent.ini file and make sure they match the following:

# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only
# be activated when the subnet does not contain any router port. The guest
# instance must be configured to request host routes via DHCP (Option 121).
# enable_isolated_metadata = False
###enable_isolated_metadata = False
enable_isolated_metadata = True

# Allows for serving metadata requests coming from a dedicated metadata
# access network whose cidr is 169.254.169.254/16 (or larger prefix), and
# is connected to a Neutron router from which the VMs send metadata
# request. In this case DHCP Option 121 will not be injected in VMs, as
# they will be able to reach 169.254.169.254 through a router.
# This option requires enable_isolated_metadata = True
# enable_metadata_network = False
###enable_metadata_network = False
enable_metadata_network = True  

Now you'll be able to route beyond your local network, and your OpenStack deployment is ready.

And that's everything folks! Start using your new Openstack environment. If you have any questions, you can always reach out to me at bjozsa@jinkit.com.
