To understand OpenStack Neutron’s Dynamic Routing feature, you must first understand what a BGP Speaker is… and what it isn’t.
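As a taste of what the feature looks like in practice, a minimal sketch using the neutron-dynamic-routing CLI extensions might look like this; the speaker name, network name, peer address, and AS numbers are all hypothetical:

# Create an IPv4 BGP speaker with a hypothetical local AS
openstack bgp speaker create --ip-version 4 --local-as 64512 lab-speaker

# Associate the speaker with an external network so it learns prefixes to advertise
openstack bgp speaker add network lab-speaker public

# Define an upstream peer and attach it to the speaker
openstack bgp peer create --peer-ip 203.0.113.1 --remote-as 64513 lab-peer
openstack bgp speaker add peer lab-speaker lab-peer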
Migrating from one Neutron mechanism driver to another, especially in a production environment, is not a decision one takes lightly. In many cases, the process involves migrating to a “Greenfield” environment, that is, a new environment stood up running the same or similar operating system and cloud service software but configured in a new way, and then moving entire workloads over the course of a weekend (or more). To say this process is tedious is an understatement.
On more than one occasion I have turned to this blog to fix issues that recur weeks, months, or years after the original post was written, and I’m sure this post will serve as one of those reference points in the future. In my OpenStack-Ansible Xena lab running OVN, I’ve twice now come across the following error when performing an openstack network agent list
command:
'Chassis_Private' object has no attribute 'hostname'
What does that even mean?!
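Before digging into the Neutron code, one place to start poking (assuming you have access to the OVN southbound database) is the table named in the error itself:

# Compare what exists in the Chassis table versus the Chassis_Private table
ovn-sbctl list Chassis
ovn-sbctl list Chassis_Private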
My homelab consists of a few random devices, including a Synology NAS that doubles as a home backup system. I use NFS to provide shared storage for Glance images and Cinder volumes, and Synology even has Cinder drivers that leverage iSCSI. All in all, it’s a pretty useful setup for testing a wide range of OpenStack functionality.
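For reference, a minimal Cinder NFS backend sketch looks roughly like the following in cinder.conf; the backend name and share path are made up for illustration:

[DEFAULT]
enabled_backends = synology-nfs

[synology-nfs]
volume_backend_name = synology-nfs
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# File containing one NFS export per line, e.g. 192.168.1.10:/volume1/cinder
nfs_shares_config = /etc/cinder/nfs_shares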
I recently discovered Minio, an open-source object storage solution that provides S3 compatibility. Since it can be installed with Docker, I thought I’d give it a go and test OpenStack’s reintroduced support for S3 backends in Glance.
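As a rough sketch of what I had in mind, something along these lines; the host address, bucket, and credentials are placeholders, and option names can vary by release:

# Run Minio in a container, exposing the S3 API on port 9000
docker run -d --name minio -p 9000:9000 -v /srv/minio:/data \
  -e MINIO_ROOT_USER=glance -e MINIO_ROOT_PASSWORD=supersecret \
  minio/minio server /data

# glance-api.conf, pointing the S3 store at the Minio endpoint
[glance_store]
stores = file,s3
default_store = s3
s3_store_host = http://192.168.1.50:9000
s3_store_access_key = glance
s3_store_secret_key = supersecret
s3_store_bucket = glance-images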
Having used HP iLO 4 for the last few years, you could say I’ve been a bit spoiled by some of the conveniences provided within.
So, imagine my surprise when firing up my recently-acquired Dell R630 for the first time, only to find that HTTP-based virtual media was not an option in the UI!
A recent attempt to move away from IPMI to the native HPE iLO 4 driver in my OpenStack Ironic lab showed just how wrong I was to believe it would be a seamless change. What I found was that while ironic-conductor could communicate with iLO, apparently, it didn’t like what it saw:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: EE certificate key too weak
Providing high availability of cloud resources, whether it be networking or virtual machines, is a topic that comes up often in my corner of the world. I’d heard of Instance-HA in some Red Hat circles, and only recently learned the OpenStack Masakari project was the one to provide that functionality.
While working through the process of installing VMware NSX-T, I have not yet determined whether it is a standalone product or whether it requires the use of vCenter (vSphere Client). I know NSX-T supports both ESXi and KVM hypervisors, so I will have to clear up this confusion later. However, I no longer have ESXi anywhere in my home lab to host a vCenter appliance, so my mission has been to install NSX-T and its supporting resources on my existing OpenStack cloud running OpenStack-Ansible (Ussuri).
For a long time now I’ve been interested in better understanding alternatives to a ‘vanilla’ Neutron deployment, but other than demonstrations and some hacking on OpenContrail a few years ago and Plumgrid years before that, I’ve really kept it simple by sticking to the upstream components and features.
VMware’s NSX-T product has been on my roadmap since it was first introduced as “compatible with All The Clouds™”, and I’m hoping to deploy the NSX-T Manager and other components on my OpenStack cloud as virtual machine instances that in turn manage networking for a yet-to-be-deployed OpenStack-Ansible based OpenStack cloud in the home lab.
My good friend Cody Bunch documented OpenStack instance shelving back in 2017, and I recently revisited the topic for a customer looking to better conserve resources in their busy cloud. They have developers all over the world working around the clock, and a limited set of resources to go around.
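For the unfamiliar, the basic workflow is just a pair of commands; the server name here is hypothetical:

# Shelve an idle instance, snapshotting it to Glance and freeing hypervisor resources
openstack server shelve dev-box-01

# Restore it when the developer needs it again
openstack server unshelve dev-box-01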
I thought I knew ARP. It’s known as that protocol that straddles Layer 2 and Layer 3, at the so-called “Layer 2.5” of the OSI model, and helps map IP addresses (Layer 3) to MAC addresses (Layer 2). Simple, right?
Until it isn’t.
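As a quick refresher before things get weird, the kernel’s view of those IP-to-MAC mappings is easy to inspect; the interface name below is just an example:

# Show the ARP/neighbor cache
ip neigh show

# Watch ARP requests and replies on the wire
tcpdump -ni eth0 arp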
As I branch out of the networking world and into general systems administration duties, I find myself having to learn a lot more about the tools and utilities used to manage said systems. I recently deployed Cinder in my OpenStack-Ansible based homelab, and am attempting to learn and use the tools available to me in a more efficient way.
As part of my effort to “cut the cord”, I recently installed Plex Media Server on a virtual machine instance running in my home OpenStack environment. The compute node is a few generations old, and while perfectly capable of running many different workloads, transcoding Ultra HD (4K) content is not one of them.
To combat this, I recently installed an NVIDIA Quadro P2000 video card with the goal of passing it through to the virtual machine instance to improve the viewing experience.
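At a high level, and hedging a bit since option names differ between releases, the Nova side involves whitelisting the device and exposing it via a flavor; the PCI IDs, alias, and flavor name below are illustrative:

# nova.conf on the compute node (newer releases use [pci]/device_spec instead)
[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1c30" }
alias = { "vendor_id": "10de", "product_id": "1c30", "device_type": "type-PCI", "name": "p2000" }

# Flavor extra spec requesting one of the aliased devices
openstack flavor set plex-gpu --property "pci_passthrough:alias"="p2000:1"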
Over the last few weeks, I’ve been slowly implementing alternative media sources in the house since “cutting the cord” and cancelling DirecTV service. Clients include an AppleTV 4 (4K) in the living room and an AppleTV 4 (HD) in the master bedroom, along with an old Amazon FireTV in the basement and some cell phones and tablets. Quite a variety of things.
We’ve used subscription apps like Netflix and YouTube Red/Premium for a long time now, and I implemented a Plex server that would allow me to convert existing Blu-ray and DVD media as well as leverage an HD Home Run with an HD antenna.
I recently revisited the FD.io virtual switch based on Vector Packet Processing (VPP) with a goal of making it deployable with OpenStack-Ansible, and while I got things working with the Intel X520 NICs I have in my machine, the Mellanox ConnectX-4 LX NICs were a bit trickier.
Mellanox provides a knowledge article here that describes the process of compiling VPP to support the Mellanox DPDK poll-mode driver, but the instructions are geared towards Red Hat 7.5 installations. Given that I run primarily Ubuntu-based systems, the instructions had to be modified (and simplified) accordingly.
The OpenStack-Ansible project provides an All-In-One (AIO) deployment playbook for developers that creates a functional OpenStack cloud on a single host and is perfect for testing most basic functions. For advanced development and testing, however, a Multi-Node AIO deployment can be performed that deploys several virtual machines on a single bare-metal node to closely replicate a production deployment.
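For context, the standard single-node AIO quick start is only a handful of steps; the branch shown is just an example:

# Clone the project and check out a release branch
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/xena

# Bootstrap Ansible and the AIO host, then run the playbooks
scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh
cd playbooks
openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml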
SR-IOV is a specification that allows a PCIe device to appear to be multiple separate physical PCIe devices. A two-port NIC might be broken up into multiple physical functions (PFs) with multiple virtual functions (VFs) per physical function. Instead of attaching to a tuntap interface, a virtual machine instance can be connected to a virtual function and gain better performance and lower latency.
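As a concrete example of the first step, virtual functions are typically carved out of the physical function through sysfs; the interface name and VF count here are examples:

# Create eight VFs on the first port of the NIC
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs

# The VFs then show up as their own PCI devices
lspci | grep -i "virtual function"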
The Lisa computer holds a special place in Apple lore and in the hearts of enthusiasts and collectors around the world. My interest in the Apple Lisa has led me to people and communities looking to breathe new life into the vintage system. This is nothing new, of course. The platform saw CPU and software upgrades introduced years after it was discontinued, and folks brought out hard disk replacements in the early 2000s to replace aging Apple ProFile and Widget drives.
Earlier this year I made it a goal to spend more time on network virtualization technologies and software-defined networking, and have recently found myself attempting to build out an OpenStack environment using the Open vSwitch ML2 mechanism driver with support for DPDK.
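For a flavor of what that entails on the Open vSwitch side, the DPDK-specific knobs look roughly like this; the memory sizes, CPU mask, and bridge name are examples:

# Enable DPDK support in Open vSwitch
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC

# DPDK bridges use the userspace (netdev) datapath instead of the kernel datapath
ovs-vsctl add-br br-provider -- set bridge br-provider datapath_type=netdev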
Tungsten Fabric (formerly OpenContrail) is a “multicloud, multistack” SDN solution sponsored by the Linux Foundation - https://tungsten.io.
In a nutshell, Tungsten Fabric, and Contrail, the commercial product based on TF, can replace and augment many of the networking components of a standard OpenStack cloud and provides features such as:
The forwarding plane supports MPLS over GRE, VXLAN, L2/L3 unicast and L3 multicast for interconnection between virtual and physical networks. A Tungsten Fabric architecture overview can be found at http://www.opencontrail.org/opencontrail-architecture-documentation/.
I recently acquired a second Apple Lisa 2/5 system in non-working condition, and part of its rehabilitation will involve replacing many of its internal components. Back in February, I installed and reviewed a replacement motherboard for the Apple Lisa 1 and 2/5 personal computers. Luckily for me, Todd Meyer and the crew at Sapient Technologies have been hard at work and recently released a dual-port parallel card for the Apple Lisa and Macintosh XL that supports all known Lisa peripherals that utilize a parallel interface, including the Apple Dot Matrix printer and Apple ProFile hard disks as well as the X/ProFile from Sigma Seven Systems and VintageMicros.
If you’ve been following the LisaList Google Group or other Apple Lisa news outlets lately, you may have seen mention of a group known as Sapient Technologies producing new hardware for the Apple Lisa. Sapient Technologies has some very interesting products in the pipeline, including:
If you’re an Apple Lisa owner, there’s a good chance you’re playing with fire every time you turn the machine on. No, not because of the infamous battery leakage issues that can take out your I/O board or worse. But rather, the risk you take of losing data due to the nature of aging magnetic media and tired drive motors and mechanisms.
The vintage computing scene has exploded in recent years, and new developments in hardware have given new life to old beasts like the original Macintosh, Amigas, Apple IIs, and even the Lisa. In this episode of This Old Lisa, I’ll be taking a look at the X/ProFile compact flash-based storage system for the Apple Lisa series of machines, a drop-in replacement for vintage Apple ProFile and Widget hard drives that power Lisa systems.
In part two of this series, Using the Cisco ASR1k Router Service Plugin in OpenStack - Part Two, I left off with having demonstrated how routers created using the Neutron API are implemented as VRFs in a Cisco Aggregated Services Router (ASR). A ping out to Google from a connected instance was successful, and the edge firewall verified traffic was being properly source NAT’d thanks to a NAT overload on the ASR.
In my last post, Using the Cisco ASR1k Router Service Plugin in OpenStack - Part One, I demonstrated the installation of the Cisco ASR1k Router Service Plugin in an OpenStack environment deployed with RDO/Packstack. The latest tagged release of the plugin available here is 5.3.0, which supports a range of OpenStack releases up to Ocata.
In this post, I’d like to cover the essential steps of creating tenant routers using the OpenStack API, attaching interfaces to those routers, and observing how they are implemented on the ASR itself.
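The Neutron-facing side of that workflow is plain OpenStack CLI; the router, subnet, and network names below are examples:

# Create a tenant router and attach a tenant subnet to it
openstack router create demo-router
openstack router add subnet demo-router demo-subnet

# Set the external gateway, which the ASR will use for source NAT
openstack router set demo-router --external-gateway public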
2018 is calling, and I’ve been involved with OpenStack for the better part of six years. I’ve seen the ‘Stack mature greatly over that time, Neutron included. I’m very familiar with the stock Neutron components, including namespace-based routers, the openvswitch and linuxbridge mechanism drivers, DVR, and so on. My overall goal with this series is to wade through various vendor offerings and see how they improve upon those stock Neutron components. First up is one of the plugins offered by Cisco for the Cisco Aggregation Services Router (ASR), known as the Cisco ASR1k Router Service Plugin.
A colleague recently mentioned the Cisco NX-OSv 9000, a virtual platform that simulates the control plane of a Cisco Nexus 9000-based device. Supposedly, it’s the same software image that runs on the Cisco Nexus 9000 series without any sort of hardware emulation. The idea is that engineers can develop scripts, test maintenance plans, and learn device configurations quicker and cheaper against the NX-OSv versus using hardware alone. Not a bad argument.
Like many homelabbers, I built my lab out of machines and equipment procured from eBay or the dark alleys of Craigslist. Even in its heyday, my gear wasn’t breaking performance records, but it’s still useful as an all-purpose virtualization and sometimes-baremetal platform, and that’s exactly what I was looking for.
One of the features I’ve been looking forward to for the last few release cycles of OpenStack and Neutron is VLAN-aware VMs, otherwise known as VLAN trunking support.
In a traditional network, a trunk is a type of interface that carries multiple VLANs and is defined by the IEEE 802.1Q standard. Trunks are often used to connect switches together and allow VLANs to span the network.
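On the Neutron side, a trunk is modeled as a parent port plus tagged subports; a minimal example (port names and VLAN IDs are made up) looks like this:

# Create a trunk using an existing port as the untagged parent
openstack network trunk create \
  --parent-port parent0 \
  --subport port=child0,segmentation-type=vlan,segmentation-id=42 \
  trunk0

# Additional VLANs can be added to the trunk later
openstack network trunk set trunk0 \
  --subport port=child1,segmentation-type=vlan,segmentation-id=43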
Thanks for coming.
It’s been a while since I’ve written a blog post. So long, in fact, that I’ve forgotten how it’s done. I’d like to change that, though. I can’t promise I’ll make a regular thing out of it, but I do feel like I have a lot to share and no outlet for it.