
    Beginning to Understand Neutron Provider and Tenant Networks in OpenStack


    By James Thorne - Posted on Feb 3 2014

    OpenStack is composed of many different projects. The core projects provide compute, storage, and network resources. The Neutron project provides network resources to the OpenStack environment and can be difficult to get started with. To help get the gears turning, I will be discussing some of the functionality Neutron Networking is capable of.

    I would wager that most of us are familiar with virtualized networking through experience running VMware vSphere. Each node in the VMware vSphere Cluster will have physical NICs connected to physical switch ports on a managed switch. Those physical switch ports on the managed switch are configured as a trunk containing all of the particular VLANs you need accessible from your VMware vSphere Cluster. Within the VMware vSphere Client, virtual networks are created mapping to the different VLANs in the trunk. As VMware virtual machines are provisioned, one or more of those virtual networks can be attached to the virtual machine. The virtual network interfaces within the virtual machines can then be assigned IP addresses associated with the subnet on that particular VLAN and the virtual machines can begin communicating.

    OpenStack Neutron Networking has the same capabilities. The controller and compute nodes will have physical NICs connected to physical switch ports on a managed switch. Those physical switch ports are configured as a trunk containing all of the particular VLANs you need accessible from your OpenStack environment. Then, from the command line on one of the OpenStack nodes, Neutron Provider Networks are created mapping to the different VLANs in the trunk. (A Neutron Provider Network always maps to a physical network whose gateway exists on a physical router or firewall.) As OpenStack instances are provisioned, one or more of those Neutron Provider Networks can be attached to each instance. The virtual network interfaces within the instances are then assigned IP addresses from the subnet on that particular VLAN, and the instances can begin communicating.
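    To make the provider-network step concrete, here is a minimal sketch using the Python openstacksdk library (not something from the original post). The cloud name "mycloud", the physical network label "physnet1", VLAN ID 200, and the 192.168.200.0/24 addressing are placeholder values you would replace with your own.

        # Minimal sketch: create a VLAN-backed Neutron Provider Network and
        # subnet with openstacksdk. All names, IDs, and addresses below are
        # placeholders, and admin credentials are assumed in clouds.yaml.
        import openstack

        conn = openstack.connect(cloud="mycloud")

        # The provider attributes tie the network to a real VLAN carried on
        # the switch trunk; router:external allows it to be used later as a
        # router gateway, and is_shared lets other projects attach to it.
        provider_net = conn.network.create_network(
            name="provider-vlan200",
            provider_network_type="vlan",
            provider_physical_network="physnet1",
            provider_segmentation_id=200,
            is_router_external=True,
            is_shared=True,
        )

        # The subnet mirrors the addressing already used on that VLAN; the
        # gateway IP is the existing interface on the physical router or
        # firewall, so instances get addresses that work on that network.
        conn.network.create_subnet(
            network_id=provider_net.id,
            name="provider-vlan200-subnet",
            ip_version=4,
            cidr="192.168.200.0/24",
            gateway_ip="192.168.200.1",
        )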

    These two scenarios are very similar, so what else does Neutron Networking bring to the table? Neutron Tenant Networks.

    First, what is a Tenant (also known as a Project)? OpenStack has been designed to be a multi-tenant environment. User X and User Y can co-exist within the same OpenStack environment and share compute, storage, and network resources or they can have dedicated compute, storage, and network resources within the same OpenStack environment.

    User X can create Neutron Tenant Networks that are completely isolated from any Neutron Tenant Networks created by User Y, even if User X and Y are sharing resources. User X and Y can do this without help from a Systems Administrator (assuming they have the proper permissions). This functionality is possible through the use of Network Namespaces, a feature implemented in the Linux kernel. You can think of Network Namespaces as a chroot jail for the networking stack.

    When User X and User Y create Neutron Tenant Networks, a Network Namespace is created for each. When User X and Y create OpenStack instances and attach those instances to their respective Neutron Tenant Network, only those instances within the same Network Namespace can communicate with each other, even if the instances are spread across OpenStack compute nodes. This is very similar to having two physical Layer 2 networks that have no way of communicating with each other until a router is put between them. And this is exactly how different Neutron Tenant Networks can communicate with each other, by putting a Neutron Router between them.
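    As a sketch of what this looks like from a user's point of view (again using openstacksdk, with placeholder names and addressing), a tenant network is created without any provider attributes; Neutron allocates a segment from whatever tenant network type the operator has configured, and only instances attached to that network can reach each other until a router is added.

        # Minimal sketch: a project user creates an isolated tenant network,
        # a subnet, and an instance on it. Names, image, and flavor are
        # placeholders; credentials come from a clouds.yaml entry.
        import openstack

        conn = openstack.connect(cloud="mycloud")

        # No provider:* attributes, so Neutron picks the tenant network type
        # and segment; the matching network namespace appears on the node
        # running the DHCP/L3 agents.
        tenant_net = conn.network.create_network(name="tenant-net-a")

        conn.network.create_subnet(
            network_id=tenant_net.id,
            name="tenant-net-a-subnet",
            ip_version=4,
            cidr="10.0.10.0/24",
        )

        # Boot an instance attached only to this tenant network; it can talk
        # to other instances on tenant-net-a and nothing else for now.
        image = conn.image.find_image("cirros")
        flavor = conn.compute.find_flavor("m1.tiny")
        conn.compute.create_server(
            name="instance-a1",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": tenant_net.id}],
        )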

    With a Neutron Router between the two Neutron Tenant Networks, the instances in each Neutron Tenant Network can now communicate with each other.
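    A minimal sketch of that step, assuming two tenant subnets like the one above already exist and are visible to the caller (for example, both in the same project or an admin context):

        # Minimal sketch: connect two tenant networks with a Neutron Router.
        # The subnet names are placeholders created as in the earlier sketch.
        import openstack

        conn = openstack.connect(cloud="mycloud")

        subnet_a = conn.network.find_subnet("tenant-net-a-subnet")
        subnet_b = conn.network.find_subnet("tenant-net-b-subnet")

        router = conn.network.create_router(name="tenant-router")

        # Each interface gives the router a port on that subnet (usually the
        # subnet's gateway address), joining the two otherwise isolated
        # Layer 2 networks.
        conn.network.add_interface_to_router(router, subnet_id=subnet_a.id)
        conn.network.add_interface_to_router(router, subnet_id=subnet_b.id)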

    Now, what if those instances need to route out to the internet? One of the Neutron Provider Networks you created above, or possibly a different one, could be attached to the Neutron Router and act as the Neutron Router's default gateway out to the internet. The Neutron Tenant Networks could then be attached to the Neutron Router and those Neutron Tenant Networks could then route out to the internet.
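    Continuing the sketch, setting a provider network (one flagged as router:external) as the router's gateway looks roughly like this; the network and router names are the placeholders used above.

        # Minimal sketch: give the router a default route out through a
        # provider network that was created with router:external enabled.
        import openstack

        conn = openstack.connect(cloud="mycloud")

        external_net = conn.network.find_network("provider-vlan200")
        router = conn.network.find_router("tenant-router")

        # The router gets a gateway port with an address on the provider
        # subnet; traffic from the attached tenant networks is then NATed
        # out of that port by default.
        conn.network.update_router(
            router,
            external_gateway_info={"network_id": external_net.id},
        )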

    There is a lot more to Neutron Networking, and this has simply been a high-level overview to get you thinking.

    If you would like to dive deeper and see how to configure various aspects of Neutron Networking, I encourage you to read the following posts by fellow Racker James Denton:

    Neutron Networking: The Building Blocks of an OpenStack Cloud

    Neutron Networking: Simple Flat Network

    Neutron Networking: VLAN Provider Networks

    Neutron Networking: Neutron Routers and the L3 Agent

    For questions, I encourage you to visit the Rackspace Private Cloud Community Forums.

    For questions and/or comments, feel free to get in touch with me @jameswthorne.
