
Hello World!

August 15, 2015 (updated October 19, 2016) | Blog

First off: welcome to the IO Visor Project! We are excited for the birth of this community and thrilled about the future that lies ahead of us all.

It has been over four years since the conception of what has now become the IO Visor Project, and it has been quite an adventurous journey. We’d like to take you down memory lane and share how a group of end users and vendors got here and what the IO Visor Project is all about.

So … how did the IO Visor Project start?

Several PLUMgrid engineers had a vision: a dream of creating a new type of programmable data plane. This extensible architecture would, for the first time, enable developers to dynamically build IO modules (think of stand-alone “programs” that can manipulate a packet in the kernel and perform all sorts of functions on it), load and unload them in the kernel at run time, and do it all without any disruption to the system.
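To make that concrete, here is a minimal, hypothetical sketch of such an IO module, written with the BCC toolkit from the IO Visor GitHub organization (https://github.com/iovisor/bcc). It compiles a tiny eBPF socket filter, attaches it to a live interface, and counts IPv4 packets in the kernel; the interface name "eth0" is an assumption, and the script assumes bcc is installed and is run as root.

```python
from bcc import BPF
import ctypes
import time

program = r"""
#include <bcc/proto.h>

BPF_ARRAY(pkt_count, u64, 1);

// Runs in the kernel for every packet seen on the attached raw socket.
int count_ipv4(struct __sk_buff *skb) {
    u8 *cursor = 0;
    struct ethernet_t *eth = cursor_advance(cursor, sizeof(*eth));
    if (eth->type == 0x0800) {     // IPv4 ethertype
        int key = 0;
        pkt_count.increment(key);
    }
    return 0;                      // don't copy the packet up to the raw socket
}
"""

b = BPF(text=program)                              # compile and load at run time
fn = b.load_func("count_ipv4", BPF.SOCKET_FILTER)
BPF.attach_raw_socket(fn, "eth0")                  # attach to a live interface

time.sleep(5)
print("IPv4 packets seen in 5s:", b["pkt_count"][ctypes.c_int(0)].value)
# When the process exits, the module is unloaded -- no recompile, no reboot.
```

This is only a sketch, but it captures the point: the module is defined, loaded, attached, and later removed entirely at run time, with the running system left undisturbed.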

We wanted to transform how functions like networking, security, and tracing are designed, implemented, and delivered, and, more importantly, we wanted to build a technology that would future-proof large-scale deployments with easy-to-extend functionality.

Yes, it was an ambitious target, but that is why we contributed the initial IP and code to kickstart the IO Visor Project. Now a diverse and engaged open source community is taking that initial work and running with it: a technology, compilers, a set of developer tools, and real-world use-case examples that can be used to create the next set of IO modules that your applications and users demand.

What is so unique about it?

The developers who work on eBPF (extended Berkeley Packet Filter), the core technology behind the IO Visor Project, refer to it as a universal in-kernel virtual machine with run-time extensibility. IO Visor gives infrastructure developers the ability to create applications, publish them, and deploy them on live systems without having to recompile or reboot an entire data center. IO modules are platform independent, meaning they can run on any hardware that runs Linux.
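As a small illustration of that run-time extensibility, here is a hedged sketch of the classic BCC “hello world”: a tiny program is compiled, verified, and attached to a live kernel probe on the fly, and detached again when the script exits. It assumes bcc is installed and root privileges; the choice of the clone() syscall as the attach point is just an example.

```python
from bcc import BPF

b = BPF(text="""
// Runs in the kernel each time a process calls clone().
int hello(void *ctx) {
    bpf_trace_printk("Hello, IO Visor!\\n");
    return 0;
}
""")

# Attach to a live kernel function at run time; nothing is recompiled or rebooted.
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()   # stream the in-kernel output; Ctrl-C unloads the program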

Running IO and networking functions in-kernel delivers hardware-like performance without layers of software and middleware. With functions running in the kernel of each compute node in a data center, IO Visor enables distributed, scale-out performance, eliminating the hairpinning, tromboning, and bottlenecks that are prevalent in so many implementations today.

Data center operators no longer need to compromise on flexibility and performance.

And finally … why should you care?

  1. This is the first time in the history of the Linux kernel that a developer can envision a new functionality and simply make it happen.

  2. Use cases are constantly changing, and we need an infrastructure that can evolve with them.

  3. Software development cycles should not be longer than hardware cycles.

  4. Single-node implementations won’t cut it in the land of cattle.

Where next?

Browse through iovisor.org, where you will find plenty of resources and information on the project and its components. Although the IO Visor Project was just formed, there is a lively community of developers who have been working together for several years. The community uses GitHub for developer resources at https://github.com/iovisor.

The IO Visor Project is open to all developers, and there is no fee to join or participate, so we hope to see many of you become a part of it!

Welcome again to the IO Visor Project!

