Wednesday, 26 August 2015

GNS3 Docker support pending merge


So the experimental GNS3 support for Docker is done. I'm sure it has a few bugs, but that's why it's experimental. We're discussing what to do with the code: either merge the server and GUI from their respective docker branches into unstable, or ship it with the latest version of GNS3. Because I like to keep users in the loop, here's the transcript of our discussion:
We have two options:
* wait for the unstable branch to become the master branch at the 1.4 release, then make the docker branch the new unstable branch
* merge Docker support into unstable and add an experimental flag in the settings to show this option.


I synced the unstable branch with the docker branch sometime last week, so if nothing major has happened in the last couple of days it should be mergeable.
We should do two things: make sure the rest of GNS3 still works everywhere, and then merge.
Basically, importing the docker branch into the master branch shouldn't cause any problems with the rest of the code IF GNS3 still runs successfully on those platforms. Please note that I haven't tested it on Windows or Mac, since Docker support on those operating systems has a different setup and I'm not sure how docker-py will behave there; docker-py attaches to a socket on Linux, so Linux was my priority. As long as GNS3 starts and you can work with other VMs, you should be fine. If there's an error it should be easily fixable with an if statement or similar, so let me know if it happens; I'd be glad to fix issues and improve Docker support after GSoC. Once we're sure the rest of the code is OK, we're good to merge even if Docker support isn't extensively tested.
 
That being said, I'm not sure what your policy is on users getting errors, but I'm all for pushing new code as soon as possible, as long as it's flagged experimental so users are aware that it might crash and so that it can't crash the rest of the product. This way users will start using and testing it for us and reporting bugs without getting frustrated. I'd say DO IT! :D
Feel free to try it before it gets polished and merged into the GNS3 master branch. Install the latest versions of the GNS3 server and GUI from their docker branches:

  • https://github.com/GNS3/gns3-server
  • https://github.com/GNS3/gns3-gui
Make sure the user running GNS3 has permission to create and manipulate Docker containers. This is usually accomplished by adding that user to the docker group; the popular Linux distributions *should* do the rest, but check the Docker documentation on how to run Docker without root permissions.
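On most distributions that boils down to something like this (assuming the docker group already exists, which the Docker package usually takes care of); you'll need to log out and back in for it to take effect:

# add the current user to the docker group
sudo usermod -aG docker $USER
# after re-login, verify the daemon is reachable without sudo
docker info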
Also, Docker support requires ubridge to manipulate network namespaces and create UDP tunnels. Use the latest version from the master branch:
  • https://github.com/GNS3/ubridge
It's a cool little project in its own right and has some interesting features.
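Building and installing it is straightforward (a sketch assuming the standard Makefile targets; the install step is what sets the Linux capabilities ubridge needs):

git clone https://github.com/GNS3/ubridge.git
cd ubridge
make
sudo make install    # runs setcap so ubridge can manage interfaces without root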
I'll write something more detailed on how to use this new Docker support but until then - try it yourselves!



Thursday, 20 August 2015

Docker in network emulators

Recently GNS3, a popular network emulator, has been adding support for Docker as one of its endpoint devices. What it calls endpoint devices are actually VirtualBox and VMware VMs, and now Docker containers. It also supports other types of virtual nodes, like switches and routers, which are actually Cisco IOS images. This is in contrast to IMUNES, which uses Docker to emulate both what GNS3 would call endpoints (PC and Host nodes) and switches and routers, by configuring each Docker container based on the type of device.

Regardless of the type of device you're trying to emulate, using a generic virtualization technology in a network emulator lets users choose which software is available on their network nodes. True, you can't run Cisco IOS on those nodes, but you can run software like Nginx, Gunicorn, Apache, MongoDB, PostgreSQL and others in a controlled, networked environment. Using Docker as the underlying technology makes sense because it uses host resources efficiently, which lets a network emulator create a hundred network nodes in under a minute while still giving you the flexibility to set up your own brand of virtual node. This is different from the full virtualization that VMware and VirtualBox provide and similar to what LXC does. I won't go into detail on how full and kernel-level virtualization work or the difference between LXC and Docker. This post is about Docker, so first, a little introduction to Docker and how it works.

Linux namespaces

Docker is a lightweight virtualization technology that uses Linux namespaces to isolate resources from one another. Linux provides the following namespaces:

       Namespace   Isolates
       IPC         System V IPC, POSIX message queues
       Network     Network devices, stacks, ports, etc.
       Mount       Mount points
       PID         Process IDs
       User        User and group IDs
       UTS         Hostname and NIS domain name

Why several namespaces? Because this way Linux can finely tune which processes can access which resources. For example, if you run two processes like Apache and PostgreSQL in different network namespaces, they won't see the same network interfaces. However, since nobody told them to use their own mount namespaces, they still see the same mount points. This can be useful: you generally want all processes to see the same root disk but not necessarily share other resources. Tools like Docker and LXC do a pretty good job of putting it all together so you get what looks like a lightweight virtual machine. Because it all happens inside the same kernel it's lightning fast. However, that also limits us to the same kernel: with Linux namespaces you can only create Linux VMs, while with VirtualBox you get full virtualization. In network emulators that's not much of a restriction, because Docker is designed for Linux and Linux provides a wide range of network software. Also, with boot2docker (and Docker Machine) for Mac OS X and the recent port to FreeBSD, the list of OS restrictions keeps getting smaller.
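You can see network namespace isolation for yourself with nothing but iproute2 (the namespace name here is just an example):

# create a new, empty network namespace
sudo ip netns add demo
# only a loopback interface exists inside it; the host's interfaces are invisible
sudo ip netns exec demo ip link show
# clean up
sudo ip netns del demo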
Namespaces can get a bit technical, and if you're interested in how they work there are plenty of good articles out there to get you started.

Docker images

The collections of resources and namespaces that Docker puts together and that act like VMs are called containers; that's probably where the name Docker comes from. Anyway, to start multiple containers Docker uses templates called images. These templates generally don't include any binaries directly; instead, a file called a Dockerfile provides instructions on which software packages to download, which scripts to run, what to configure and so on, and the image is generated from those instructions. Of course, you can do pretty much whatever you want inside a Dockerfile, so if you have scripts or binaries you want to include in the image, you can ship them.
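As a toy illustration of the idea (the package choice and image name are just examples), a Dockerfile and a build could look like this:

# a minimal Dockerfile: start from Debian and add a couple of network tools
cat > Dockerfile <<'EOF'
FROM debian:jessie
RUN apt-get update && apt-get install -y iproute2 tcpdump
CMD ["bash"]
EOF
# build an image called "netnode" from it, then start a container
docker build -t netnode .
docker run -it netnode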

Much like code repositories on GitHub, Docker has something called Docker Hub where images are stored. Organizations like PostgreSQL, Redis, Fedora and Debian all have their own images hosted there, which makes it easy to quickly start a Docker container with their software installed. There's a whole bunch of sites documenting how to install Docker, create Docker images, push them to the Hub and start containers from those images, but here's a short recipe for Fedora 22:

[cetko@nerevar ~]$ sudo dnf install docker
[cetko@nerevar ~]$ sudo systemctl start docker
[cetko@nerevar ~]$ docker pull debian
latest: Pulling from debian
902b87aaaec9: Already exists 
9a61b6b1315e: Already exists 
Digest: sha256:b42e664a14b4ed96f7891103a6d0002a6ae2f09f5380c3171566bc5b6446a8ce
Status: Downloaded newer image for debian:latest
[cetko@nerevar ~]$ docker run -it debian bash
root@9695859bac69:/# 

And that's it! With the last command you've run a Bash shell inside a containerized Debian Jessie installation. You're all set to use any of the multitude of Docker images hosted on the Hub. Now, what if we wanted to create a Docker image with specific Linux network tools and use it as a network node inside an emulator? Think it can't be done? There are already at least two such repositories on the Hub, both based on Debian Jessie and sharing much of the same setup, one used in GNS3 and the other in IMUNES:
  1. https://hub.docker.com/r/gns3/dockervm/
  2. https://hub.docker.com/r/imunes/vroot/
They're actually automated builds created from instructions in Dockerfiles (Docker image configuration files) hosted in GitHub repositories. This is common practice: people push their Docker images to the Hub and keep the Dockerfiles on GitHub so others can fork the repos and build their own images. Give it a try!
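If you just want to poke around in the GNS3 one, pulling and running it looks like this (assuming the Debian-based image ships bash, which it should):

docker pull gns3/dockervm
docker run -it gns3/dockervm bash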
Now, there's a lot more to building a network emulator than just downloading a Docker image that fits in nicely, so stay tuned for more posts in my "Building a network emulator" series.

Wednesday, 19 August 2015

Generic nodes in network emulators

As invaluable tools in networked and distributed systems research, network emulators and simulators offer a viable alternative to live experimental networks. In short, it's easier to draw a network in software and test it there than to build it out of real hardware. But before we start, let's go through one crucial point that will help us understand what we're trying to achieve.

Simulators vs Emulators
All tools that provide a testbed for networking scenarios fall into two categories: emulators and simulators.
Here's what Wikipedia says about the difference between those two:
Emulation differs from simulation in that a network emulator appears to be a network; end-systems such as computers can be attached to the emulator and will behave as if they are attached to a network. A network emulator emulates the network which connects end-systems, not the end-systems themselves. Network simulators are typically programs which run on a single computer, take an abstract description of the network traffic (such as a flow arrival process) and yield performance statistics (such as buffer occupancy as a function of time).
Pure network simulators are mostly used in theoretical network research, for example on new protocol algorithms. ns-3 is an open source network simulator: users can develop their own network protocols and simulate their performance. It also has an animation tool to visually observe how the network operates, and it's used primarily in academic circles for research and education.
GNS3, one of the most popular network tools, began as a Cisco emulation tool but has grown into a multi-vendor network emulator. Every node in GNS3 represents some network device, be it a switch in the form of a Cisco IOS image or a PC using some virtualization technology like VirtualBox or VMware. It's important to note that these nodes emulate real network devices. As the Wikipedia definition says, end-systems can be attached to the emulator just like to a real network. So regardless of what GNS3 calls itself, it's actually an emulator, and a very useful tool for network administrators who design real networks with existing protocols rather than for protocol experiments that end up in research papers.
Some network tools that call themselves emulators are really simulators, while most tools that call themselves simulators are actually emulators. Another example is IMUNES (Integrated Multiprotocol Network Emulator/Simulator), which can't decide whether it's an emulator or a simulator. It's an emulator. One good reason for calling an emulator a simulator is that users tend to google "network simulator" more often than "network emulator", so it's harder to get found if you call yourself an emulator.

The best network emulators generally have graphical interfaces that let users drag and drop network components into a sandbox to set up a network environment and then run experiments on it. Usually those components (the network nodes in the sandbox) represent off-the-shelf network equipment: switches, routers, PCs and so on. Different emulators use different techniques to implement those components. GNS3 uses a combination of Cisco IOS images for switches and routers and various virtualization technologies (VMware, VirtualBox, Docker, VPCS) to implement PCs. IMUNES uses jails on FreeBSD and Docker on Linux to emulate all of its nodes. Basically, every node in IMUNES is either a FreeBSD or a Linux virtual machine, depending on which OS you run it on. So a router in IMUNES is actually a VM with a mainstream OS and a special setup (e.g. some kind of routing software).

Both IMUNES and GNS3 virtual machines can be anything the OS they run on supports. If someone asks what a good general-purpose OS for networking is, the answer will in most cases be Linux. Indeed, Linux is a good choice for a generic network node in an emulator. Let's simplify things and temporarily assume we want to build a network emulator that only uses virtualized Linux machines as nodes, because we can set up a variety of network services on those machines, like Apache, Nginx and dhcpd. We couldn't do that if we only used components like Cisco routers. Sure, we could test a proprietary network protocol, but we couldn't really stress test our new Gunicorn+Nginx setup. So generic nodes do have their uses in the world of network emulators. In the next part we'll talk about how to efficiently create such generic network nodes using some popular virtualization technologies.

Monday, 3 August 2015

Ubridge is great but...

In my last post I talked about ubridge and how it's supposed to work with GNS3. The problem is that users generally need root permissions, because GNS3 is basically creating a new (veth) interface on the host and you can't do that without some kind of special permission. Ubridge gets around this by using Linux capabilities and the setcap command. That's why "make install" during the ubridge installation runs:

sudo setcap cap_net_admin,cap_net_raw=ep /usr/local/bin/ubridge

Setting permissions on a file *once* is not a problem; ubridge is already used for VMware and this doesn't really conflict with how GNS3 works. So in GNS3, ubridge should create the veth interfaces for Docker, not GNS3 itself. That's why the newest version of ubridge has some cool new features like a hypervisor mode and the ability to create veth interfaces and move them to other namespaces. Here's a quick example:

1. Start ubridge in hypervisor mode on port 9000:

./ubridge -H 9000

2. Connect with telnet on port 9000 and ask ubridge to create a veth pair and move one interface into the container's namespace:

telnet localhost 9000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'
docker create_veth guestif hostif
100-veth pair created: guestif and hostif
docker move_to_ns guestif 29326
100-guestif moved to namespace 29326

3. Bridge the hostif interface to a UDP tunnel:

bridge create br0
100-bridge 'br0' created
bridge add_nio_linux_raw br0 hostif
100-NIO Linux raw added to bridge 'br0'
bridge add_nio_udp br0 20000 127.0.0.1 30000
100-NIO UDP added to bridge 'br0'
bridge start br0
100-bridge 'br0' started

That's the general idea of how it should work, but I'm having some problems getting this to work on my Fedora installation with the docker branch. My mentors are being really helpful and are trying to debug this with me. I sent them the output from the various components, so here it is for you to get the wider picture.

Manual hypervisor check:
(gdb) run -H 11111
Starting program: /usr/local/bin/ubridge -H 11111
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Hypervisor TCP control server started (port 11111).
Destination NIO listener thread for bridge0 has started
Source NIO listener thread for bridge0 has started
[New Thread 0x7ffff65b8700 (LWP 4530)]
[New Thread 0x7ffff6db9700 (LWP 4529)]
[New Thread 0x7ffff75ba700 (LWP 3576)]

GNS3 output:
bridge add_nio_linux_raw bridge0 gns3-veth0ext
bridge add_nio_udp bridge0 10000 127.0.0.1 10001
2015-08-01 13:11:22 INFO docker_vm.py:138 gcetusic-vroot-latest-1 has started
bridge add_nio_linux_raw bridge0 gns3-veth1ext
bridge add_nio_udp bridge0 10001 127.0.0.1 10000
2015-08-01 13:11:22 INFO docker_vm.py:138 gcetusic-vroot-latest-2 has started

Tcpdump output:
[cetko@nerevar gns3]$ sudo tcpdump -i gns3-veth0ext
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gns3-veth0ext, link-type EN10MB (Ethernet), capture size 262144 bytes
12:06:51.995942 ARP, Request who-has 10.0.0.2 tell 10.0.0.1, length 28
12:06:52.998217 ARP, Request who-has 10.0.0.2 tell 10.0.0.1, length 28

[cetko@nerevar gns3]$ sudo tcpdump -i gns3-veth1ext
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gns3-veth1ext, link-type EN10MB (Ethernet), capture size 262144 bytes

Netstat output:
udp        0      0 127.0.0.1:10000         127.0.0.1:10001         ESTABLISHED
udp        0      0 127.0.0.1:10001         127.0.0.1:10000         ESTABLISHED

Wednesday, 1 July 2015

GSOC GNS3 Docker support: The road so far

So midterm evaluations are ending soon and I'd like to write about my progress before that. If you remember, my last update was about how to write a new GNS3 module. Probably the biggest issue you'll run into is implementing links between the various nodes. This is because GNS3 is a jungle of different technologies, each with its own way of doing networking. Implementing Docker links is no different.

Docker is a different kind of virtualization than what GNS3 has been using until now: OS-level virtualization. VMware, for instance, uses full virtualization. You can read more about the difference in one of the million articles on the Internet. An important thing to note is that Docker uses namespaces to manage its network interfaces. More on this here: https://docs.docker.com/articles/networking/#container-networking. It's great, go read it!

GNS3 uses UDP tunnels to connect its various VM technologies. This means that after creating a network interface on the virtual machine, it allocates a UDP port on that interface. But this is REALLY not that easy to do with Docker, because unlike a lot of virtualization technologies that have UDP tunnels built in, Docker doesn't. Assuming you've read the article above, this is how it will work (I'm still having trouble with it):

  1. Create a veth pair
  2. Allocate a UDP port on one end of the veth pair
  3. Wait for the container to start and then push the other interface into the container's namespace
  4. Connect the host-side interface to ubridge
If you're wondering what ubridge is: it's a great little piece of technology that lets you connect UDP tunnels and interfaces. Hardly anyone has heard of it, but GNS3 has been using it for its VMware machines for quite some time: https://github.com/GNS3/ubridge
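Stripped of the GNS3 plumbing, the veth part of that workflow boils down to something like this (the interface names and container name are placeholders; steps 2 and 4 happen inside GNS3 and ubridge):

# 1. create a veth pair on the host
sudo ip link add gns3-veth0ext type veth peer name gns3-veth0int
# 3. find the container's PID and move one end into its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' my-container)
sudo ip link set gns3-veth0int netns $PID
# the host-side end (gns3-veth0ext) is what ubridge later bridges to a UDP tunnel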

The biggest problem is that all of this is hidden deep inside the GNS3 code, which makes you constantly ask the question: "Where the hell should I override this?" Also, you have to take into account unforeseen problems like the one I mentioned earlier: you have to actually start the container in order to create the namespace and push the veth interface into it.

Another major problem that was solved is that a Docker container requires a running process, without which it will just terminate. I've decided to make an official Docker image to be used for GNS3 Docker containers: https://github.com/gcetusic/vroot-linux. It's not yet merged as part of GNS3. Basically, it uses a sleep command to act as a dummy init process and also installs packages like ip, tcpdump, netstat and so on. It's a nice piece of code and you can use it independently of GNS3. In the future I expect there'll be a setting, something like "Startup command", so users will be able to use their own Docker images with their own init process.
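The dummy init trick itself is easy to try with any stock image (using plain debian here rather than the GNS3 image):

# without a long-running process the container would exit immediately
docker run -d --name node1 debian sleep infinity
# the container stays up, so you can get a shell in it and look around
docker exec -it node1 bash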

It's been a bumpy road so far, solving problems I hadn't really thought about when I was writing the proposal, but Docker support is slowly getting there.

Monday, 8 June 2015

GNS3 architecture and writing a new VM implementation

Last time I wrote a post I talked about what GNS3 does and how Docker fits into it. What I failed to mention, and what some of you already familiar with GNS3 may know, is that the GNS3 software suite actually comes in two relatively separate parts:

  1. https://github.com/GNS3/gns3-server
  2. https://github.com/GNS3/gns3-gui
The GUI is a Qt-based management interface that sends HTTP requests to specific server endpoints defined in one of its files. These endpoints handle the basic things you'd expect to do with a VM instance: start/stop/suspend/restart/delete. For example, sending a POST request to /projects/{project_id}/virtualbox/vms creates a new VirtualBox instance, handled in virtualbox_handler.py. You might run into some trouble getting the GUI to run, especially if you're using the latest development code like me, because the latest development version replaced Qt4 with Qt5 and a lot of Linux distributions don't have Qt5 in their repositories yet. The installation instructions only cover Qt4 and Ubuntu, so it's up to you to trudge through numerous compile and requirement errors.

Generally, every virtualization technology (Dynamips, VirtualBox, QEMU) has a request handler, a VM manager responsible for the available VM instances, and a VM class that knows how to start/stop/suspend/restart/delete a particular instance. Going back to the request handler: if we wanted to start a previously created VM instance, sending a POST request to /projects/{project_id}/docker/images/{id}/start would do it. Once the request gets routed to a specific method, that method usually fetches the singleton manager object responsible for the particular VM technology, such as VirtualBox or Docker, and the manager can then fetch the Python object representing the VM instance based on the ID in the request. This VM instance object has the methods that do the actual work, and those are specific to VirtualBox, Docker or QEMU.
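For example, with a local server running, starting an existing Docker instance is just a POST with an empty body to that endpoint; the host, port and any API version prefix here are assumptions, so check your server settings (the IDs are placeholders too):

curl -X POST "http://127.0.0.1:8000/projects/<project_id>/docker/images/<id>/start"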

Here are some important files that the current Docker implementation uses; there are equivalent files for the other kinds of virtual devices:

SERVER
  • handlers/docker_handler.py - HTTP request handlers calling the Docker manager
  • modules/docker/__init__.py - Manager class that knows how to fetch and list Docker containers
  • modules/docker/docker_vm.py - Docker VM class whose methods manipulate specific containers
  • modules/docker/docker_error.py - Error class that mostly just overrides a base error class
  • schemas/docker.py - request schemas determining allowed and required arguments in requests
GUI
  • modules/docker/dialogs - folder containing code for GUI Qt dialogs
    • docker_vm_wizard.py - Wizard to create a new VM type, configure it and save it as a template from which the VM instances will be instantiated. Concretely, for Docker you choose from a list of available images from which a container will be created but this really depends on what you're using for virtualization.
  • modules/docker/ui - folder with Qt specifications that generate Python files that are then used to define how the GUI reacts to user interactions
  • modules/docker/pages - GUI pages and interactions are defined here by manipulating the previously generated Python Qt class objects
  • modules/docker/__init__.py - classes that handle the Docker-specific functionality of the GUI, like loading and saving settings
  • modules/docker/docker_vm.py - this part does the actual server requests and handles the responses
  • modules/docker/settings.py - general Docker and also container specific settings
This seems like quite a complicated setup, but the important thing to remember is that if you want to add your own virtualization technology you have to create equivalent files in a new folder. My advice is to copy the files of an already existing, similar VM technology and go from there. All of the classes inherit from base classes that require certain methods to exist, otherwise things fail spectacularly. As with any object-oriented language, you should try to leave most of the work to the base classes by overriding the required methods, but if the methods they use make no sense for your technology, write custom code that circumvents them completely. A lot of the code used for one technology may seem useless and redundant in another. For example, VirtualBox has a lot of boilerplate code that manages the location of its vboxmanage command, which is completely useless in Docker, where docker-py handles all container-related actions. The core of GNS3 is written with modularity in mind, but with all the (very) different virtualization technologies it supports, you're bound to need a hack here or there if you don't want to rework the rest of the code a couple of times.

Cheers until next time when I'll be talking about how to connect vastly different VMs via links.

Monday, 25 May 2015

GNS3 Docker support

So the coding period for GSoC finally began this week. I got accepted with the GNS3 Docker support project, and here are the project introduction and my plan of attack.

GNS3 is a network simulator that faithfully simulates network nodes. Docker is a highly flexible VM platform that uses Linux namespaces and cgroups to isolate processes inside what are effectively virtual machines. Adding Docker support would enable GNS3 users to create their own custom virtual machines and move beyond the limitations of purely network-oriented nodes, and because of Docker's lightweight implementation it would make it possible to run thousands of standalone servers in GNS3.
Right now GNS3 supports QEMU, VirtualBox and Dynamips (a Cisco IOS emulator). The nodes in GNS3 and the links between them can be thought of as virtual machines that have their own network stacks and communicate among themselves like separate machines on any other "real" network. While this is nice in itself, QEMU and VirtualBox are "slow" virtualization technologies because they provide full virtualization: they can run any OS, but that comes at a price. So while QEMU and VirtualBox can run various network services, it's not very efficient. Docker, on the other hand, uses kernel-level virtualization, which means guests share the host kernel while processes are grouped together and different groups are isolated from each other, effectively creating virtual machines. That's why Docker and similar virtualization solutions are extremely fast and can run thousands of GNS3 nodes: there's no translation layer between host and guest systems because they run the same kernel. Docker is also quite versatile when it comes to managing custom-made kernel-level VMs; it takes the load off the programmer, who doesn't have to think about disk space, node startup, process isolation and so on.

Links between Docker containers pose an additional problem. In its current form, GNS3 uses UDP networking (tunnels) for all communication between nodes. The advantage is that this is done in userland; it's very simple and works on all OSes without requiring admin privileges. However, UDP tunnels have made it harder to integrate new emulators into GNS3 because they usually don't support UDP networking out of the box. OpenvSwitch is a production-quality, multilayer virtual switch, and interconnecting Docker containers (and alleviating the problem of new emulators) requires at least basic support for OpenvSwitch ports in GNS3. Additionally, this would allow Docker links in GNS3 to be manipulated with Linux utilities like netem and tc that are specialized for such tasks, something that isn't possible with UDP tunnels.
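On the command line, the OpenvSwitch side of that idea looks roughly like the usual bridge-and-port setup (names are illustrative; GNS3 would drive this programmatically):

# create a virtual switch and attach a container's host-side veth interface to it
sudo ovs-vsctl add-br gns3-br0
sudo ovs-vsctl add-port gns3-br0 gns3-veth0ext
# traffic shaping on that port can then be done with tc/netem
sudo tc qdisc add dev gns3-veth0ext root netem delay 50ms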

Let's start coding!