Notes on obtaining JNCIA-DevOps and JNCIS-DevOps

I recently passed the JNCIA-DevOps and JNCIS-DevOps exams, so here are my notes from that time.
https://www.juniper.net/jp/jp/training/certification/certification-tracks/devops?tab=jnciadevops
https://www.juniper.net/jp/jp/training/certification/certification-tracks/devops?tab=jncis-devops

These certifications belong to the DevOps Track offered by Juniper Networks, and the exams focus on automation specific to Junos.
The core subject matter is the machinery for running operational and configuration commands both on-box (SLAX, Python, etc.) and off-box (PyEZ, Ansible, etc.). That said, quite a few questions also cover configuration extension via YANG and the Junos configuration needed to use scripts, so it may be closer to reality to think of these as Junos exams.
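As a concrete example of the Junos-side configuration these scripts require, an on-box op script must be registered before it can run. A minimal illustrative fragment (the file name here is hypothetical; syntax as I recall it from the documentation):

```
set system scripts language python
set system scripts op file show-inventory.py
set system services netconf ssh
```

After a commit, the script placed under /var/db/scripts/op/ can then be invoked from the CLI with the `op` command.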


The exam scope is as described on the site: for both JNCIA-DevOps and JNCIS-DevOps, Python, Ansible, and similar topics are central, and the scope itself did not seem to differ much between the two levels.
However, the JNCIS adds questions where an actual script is shown and you must pick its output or the error it produces; these are hard to answer without some hands-on practice, so it appears to demand more real-device experience.


Content-wise, many of the topics are general (XML, JSON, YAML, Python, Ansible, and so on), so prior experience with these makes the exams easier to approach.
Still, much of the material is Junos-specific, so trying things out on a real device first is the safer route (most of the scope can be exercised on vMX, vSRX, etc.).
Note: the JNCIA covers, for example, how to configure the Junos REST API Explorer; the JNCIS covers SLAX in general and how to configure op / commit / event / snmp scripts.
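Since XML and JSON handling comes up throughout the exams, here is a minimal sketch of converting a Junos-style XML snippet into JSON using only the Python standard library (the snippet itself is illustrative, not taken from a real device):

```python
import json
import xml.etree.ElementTree as ET

# Illustrative Junos-style configuration snippet (not from a real device)
XML_TEXT = """
<configuration>
  <system>
    <host-name>r1</host-name>
    <services><netconf><ssh/></netconf></services>
  </system>
</configuration>
"""

def to_dict(elem):
    """Recursively convert an Element into nested dicts; leaves become their text."""
    children = list(elem)
    if not children:
        return elem.text.strip() if elem.text and elem.text.strip() else None
    return {child.tag: to_dict(child) for child in children}

root = ET.fromstring(XML_TEXT)
config = {root.tag: to_dict(root)}
print(json.dumps(config, indent=2))
```

Note that real Junos XML can repeat element names (e.g. multiple <interface> nodes), which this naive dict conversion would collapse; libraries such as PyEZ or jxmlease handle that case properly.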

Beyond this, the JNCIS also covers Ruby (RubyEZ), JET, JSNAPy, and the like, so it is worth reading through the manuals and, if possible, trying each of them on a real device at least once.


Official training is also available, and taking it is the surest path if you can. That said, compared with JNCIS-Cloud and similar exams, these felt relatively easy to self-study for.


If you self-study, practice questions for these exams are also available on Junos Genius, and I recommend going through them beforehand.
Note: the JNCIA-DevOps material is free, but the JNCIS-DevOps material is paid and time-limited, so be careful.
https://www.juniper.net/jp/jp/training/junos-genius/

Addendum:
- A comparatively frequent topic was the difference between the NETCONF transport, messages, operations, and content layers, so it is worth reviewing them.
- The roles of mgd and jsd also came up often.
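To review those layers: transport (e.g. SSH) carries messages (<rpc> / <rpc-reply>), which wrap operations (<get-config>, <edit-config>, ...), which in turn carry content (the configuration data). A minimal sketch of how the inner three layers nest, using only the Python standard library (the message-id and payload are illustrative):

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

# Content layer: the configuration data itself
content = "<configuration><system><host-name>r1</host-name></system></configuration>"

# Operations layer: an <edit-config> operation targeting the candidate datastore
operation = (
    "<edit-config><target><candidate/></target>"
    f"<config>{content}</config></edit-config>"
)

# Messages layer: the <rpc> envelope carrying the operation, with a message-id
message = f'<rpc message-id="101" xmlns="{NS}">{operation}</rpc>'
# (The transport layer -- e.g. SSH -- would carry this string; not shown here.)

root = ET.fromstring(message)
print(root.tag)     # the <rpc> element, qualified with the NETCONF base namespace
print(root[0].tag)  # the operation element nested inside it
```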

Notes on obtaining JNCIA-Cloud and JNCIS-Cloud


I recently passed the JNCIA-Cloud and JNCIS-Cloud exams, so here are my notes from that time.
https://www.juniper.net/jp/jp/training/certification/certification-tracks/cloud-track?tab=jncia-cloud
https://www.juniper.net/jp/jp/training/certification/certification-tracks/cloud-track?tab=jncis-cloud


These certifications belong to the Cloud Track offered by Juniper Networks and broadly cover SDN topics centered on the Contrail series.
Note: questions about SDN for MPLS cores, such as NorthStar, also appear (especially on the JNCIA-Cloud), so the overall scope is quite wide.


The exam scope is essentially as described on the site: the JNCIA-Cloud covers Contrail Networking, OpenStack (e.g. what is nova?), NorthStar, Contrail Service Orchestration, and so on. Somewhat unexpectedly, vSRX, Security Director, SkyATP, vMX, and similar products are also in scope, and a fair number of questions on them do appear, so it is worth researching these areas to some extent as well.
Note: the exam is multiple choice, so the difficulty itself is not that high, but a certain number of questions cannot be answered without this knowledge.


The JNCIS-Cloud is almost entirely a Contrail Networking exam: what is SDN?, how do you configure OpenStack in a given case?, how do you configure a service chain?, how do you configure an analyzer VM?, how does the vRouter behave?, and so on, with quite detailed questions about Contrail Networking behavior and configuration. This one is also multiple choice, but it is better to verify things on a real deployment to some extent before taking it, or, if possible, to take the official training first.


If you self-study, practice questions are available on Junos Genius.
Note: the JNCIA-Cloud material is free, but the JNCIS-Cloud material is paid and time-limited, so be careful.
https://www.juniper.net/jp/jp/training/junos-genius/

Installing openstack / Tungsten Fabric with juju charms

Following the documents below, I tried installing openstack / Tungsten Fabric with juju charms.
https://github.com/Juniper/contrail-charms/blob/R5/README.md
https://github.com/Juniper/contrail-charms/blob/R5/manual-deploy.md

For the environment, I used four ubuntu xenial instances (AMI-ID: ami-06c43a7df16e8213c): a juju node, an openstack controller, a Tungsten Fabric controller, and an openstack compute node.
(4 vCPU, 15 GB memory, 60 GB disk each)

For the openstack version, I used ocata.
(I also tried queens and others, but for some reason the keystone v3 connection from Tungsten Fabric did not work, so this time I went with ocata.)

The commands executed are as follows.

(run on the juju node)
# apt-get update
# apt-get install juju

# juju add-cloud

Select cloud type: manual
Enter a name for your manual cloud: manual-cloud-1
Enter the controller's hostname or IP address: (enter the juju node's IP)

# ssh-keygen
# cd .ssh
# cat id_rsa.pub >> authorized_keys
# cd
※ Also append the public key above to /root/.ssh/authorized_keys on every other node

# juju bootstrap manual-cloud-1
 -> takes about 2 minutes

# git clone https://github.com/Juniper/contrail-charms -b R5

# juju add-machine ssh:root@(openstack-controllerのip)
# juju add-machine ssh:root@(openstack-computeのip)
# juju add-machine ssh:root@(TungstenFabric-controllerのip)
※ each takes about 2 minutes


# vi set-juju.sh 
juju deploy cs:xenial/ntp
juju deploy cs:xenial/rabbitmq-server --to lxd:0
juju deploy cs:xenial/percona-cluster mysql --config root-password=contrail123 --config max-connections=1500 --to lxd:0
juju deploy cs:xenial/openstack-dashboard --config openstack-origin=cloud:xenial-ocata --to lxd:0
juju deploy cs:xenial/nova-cloud-controller --config console-access-protocol=novnc --config openstack-origin=cloud:xenial-ocata --config network-manager=Neutron --to lxd:0
juju deploy cs:xenial/neutron-api --config manage-neutron-plugin-legacy-mode=false --config openstack-origin=cloud:xenial-ocata --config neutron-security-groups=true --to lxd:0
juju deploy cs:xenial/glance --config openstack-origin=cloud:xenial-ocata --to lxd:0
juju deploy cs:xenial/keystone --config admin-password=contrail123 --config admin-role=admin --config openstack-origin=cloud:xenial-ocata --to lxd:0

juju deploy cs:xenial/nova-compute --config ./nova-compute-config.yaml --to 1

CHARMS_DIRECTORY=/root
juju deploy --series=xenial $CHARMS_DIRECTORY/contrail-charms/contrail-keystone-auth --to 2
juju deploy --series=xenial $CHARMS_DIRECTORY/contrail-charms/contrail-controller --config auth-mode=rbac --config cassandra-minimum-diskgb=4 --config cassandra-jvm-extra-opts="-Xms1g -Xmx2g" --to 2
juju deploy --series=xenial $CHARMS_DIRECTORY/contrail-charms/contrail-analyticsdb --config cassandra-minimum-diskgb=4 --config cassandra-jvm-extra-opts="-Xms1g -Xmx2g" --to 2
juju deploy --series=xenial $CHARMS_DIRECTORY/contrail-charms/contrail-analytics --to 2
juju deploy --series=xenial $CHARMS_DIRECTORY/contrail-charms/contrail-openstack
juju deploy --series=xenial $CHARMS_DIRECTORY/contrail-charms/contrail-agent

juju expose openstack-dashboard
juju expose nova-cloud-controller
juju expose neutron-api
juju expose glance
juju expose keystone

juju expose contrail-controller
juju expose contrail-analytics

juju add-relation keystone:shared-db mysql:shared-db
juju add-relation glance:shared-db mysql:shared-db
juju add-relation keystone:identity-service glance:identity-service
juju add-relation nova-cloud-controller:image-service glance:image-service
juju add-relation nova-cloud-controller:identity-service keystone:identity-service
juju add-relation nova-cloud-controller:cloud-compute nova-compute:cloud-compute
juju add-relation nova-compute:image-service glance:image-service
juju add-relation nova-compute:amqp rabbitmq-server:amqp
juju add-relation nova-cloud-controller:shared-db mysql:shared-db
juju add-relation nova-cloud-controller:amqp rabbitmq-server:amqp
juju add-relation openstack-dashboard:identity-service keystone

juju add-relation neutron-api:shared-db mysql:shared-db
juju add-relation neutron-api:neutron-api nova-cloud-controller:neutron-api
juju add-relation neutron-api:identity-service keystone:identity-service
juju add-relation neutron-api:amqp rabbitmq-server:amqp

juju add-relation contrail-controller ntp
juju add-relation nova-compute:juju-info ntp:juju-info

juju add-relation contrail-controller contrail-keystone-auth
juju add-relation contrail-keystone-auth keystone
juju add-relation contrail-controller contrail-analytics
juju add-relation contrail-controller contrail-analyticsdb
juju add-relation contrail-analytics contrail-analyticsdb

juju add-relation contrail-openstack neutron-api
juju add-relation contrail-openstack nova-compute
juju add-relation contrail-openstack contrail-controller

juju add-relation contrail-agent:juju-info nova-compute:juju-info
juju add-relation contrail-agent contrail-controller

# vi nova-compute-config.yaml 
nova-compute:
    openstack-origin: cloud:xenial-ocata
    virt-type: qemu 
    enable-resize: True
    enable-live-migration: True
    migration-auth-type: ssh

# bash set-juju.sh

Then check the status periodically until everything completes (it took about 20 minutes):
# juju status
# tail -f /var/log/juju/*log | grep -v -w DEBUG

Two caveats were needed:
1. The openstack controller uses LXD, and those containers must be directly reachable from the Tungsten Fabric controller. I therefore added a /24 route for the LXD subnet to the VPC route table (pointing at the openstack controller instance) and disabled the source/destination check on the openstack controller instance.
2. docker could not start inside the LXD container (it is needed by Tungsten Fabric's neutron-init), so I adjusted the LXD configuration as follows.

juju ssh 0 ## log in to the openstack controller
  sudo su -
  lxc list ## find the id of the LXD container running neutron
  lxc config set juju-cb8047-0-lxd-4 security.nesting true
  lxc config show juju-cb8047-0-lxd-4

Once the installation completes successfully, the openstack / Tungsten Fabric combination should be usable, as shown below.

root@ip-172-31-19-222:~# juju status
Model    Controller      Cloud/Region    Version  SLA
default  manual-cloud-1  manual-cloud-1  2.3.7    unsupported

App                     Version        Status  Scale  Charm                   Store       Rev  OS      Notes
contrail-agent          5.1.0-708.el7  active      1  contrail-agent          local         0  ubuntu  
contrail-analytics      5.1.0-708.el7  active      1  contrail-analytics      local         0  ubuntu  exposed
contrail-analyticsdb    5.1.0-708.el7  active      1  contrail-analyticsdb    local         0  ubuntu  
contrail-controller     5.1.0-708.el7  active      1  contrail-controller     local         0  ubuntu  exposed
contrail-keystone-auth                 active      1  contrail-keystone-auth  local         0  ubuntu  
contrail-openstack      5.1.0-708.el7  active      2  contrail-openstack      local         0  ubuntu  
glance                  14.0.1         active      1  glance                  jujucharms  278  ubuntu  exposed
keystone                11.0.4         active      1  keystone                jujucharms  298  ubuntu  exposed
mysql                   5.6.37-26.21   active      1  percona-cluster         jujucharms  275  ubuntu  
neutron-api             10.0.7         active      1  neutron-api             jujucharms  272  ubuntu  exposed
nova-cloud-controller   15.1.5         active      1  nova-cloud-controller   jujucharms  327  ubuntu  exposed
nova-compute            15.1.5         active      1  nova-compute            jujucharms  299  ubuntu  
ntp                     4.2.8p4+dfsg   active      2  ntp                     jujucharms   32  ubuntu  
openstack-dashboard     11.0.4         active      1  openstack-dashboard     jujucharms  280  ubuntu  exposed
rabbitmq-server         3.5.7          active      1  rabbitmq-server         jujucharms   88  ubuntu  

Unit                       Workload  Agent  Machine  Public address  Ports                       Message
contrail-analytics/0*      active    idle   2        172.31.35.214                               Unit is ready
contrail-analyticsdb/0*    active    idle   2        172.31.35.214                               Unit is ready
contrail-controller/0*     active    idle   2        172.31.35.214   8080/tcp,8082/tcp,8143/tcp  Unit is ready
  ntp/0*                   active    idle            172.31.35.214   123/udp                     ntp: Ready
contrail-keystone-auth/0*  active    idle   2        172.31.35.214                               Unit is ready
glance/0*                  active    idle   0/lxd/5  10.0.206.248    9292/tcp                    Unit is ready
keystone/0*                active    idle   0/lxd/6  10.0.206.215    5000/tcp                    Unit is ready
mysql/0*                   active    idle   0/lxd/1  10.0.206.124    3306/tcp                    Unit is ready
neutron-api/0*             active    idle   0/lxd/4  10.0.206.164    9696/tcp                    Unit is ready
  contrail-openstack/1     active    idle            10.0.206.164                                Unit is ready
nova-cloud-controller/0*   active    idle   0/lxd/3  10.0.206.157    8774/tcp,8778/tcp           Unit is ready
nova-compute/0*            active    idle   1        13.112.122.142                              Unit is ready
  contrail-agent/0*        active    idle            13.112.122.142                              Unit is ready
  contrail-openstack/0*    active    idle            13.112.122.142                              Unit is ready
  ntp/1                    active    idle            13.112.122.142  123/udp                     ntp: Ready
openstack-dashboard/0*     active    idle   0/lxd/2  10.0.206.82     80/tcp,443/tcp              Unit is ready
rabbitmq-server/0*         active    idle   0/lxd/0  10.0.206.50     5672/tcp                    Unit is ready

Machine  State    DNS             Inst id                Series  AZ  Message
0        started  172.31.6.145    manual:172.31.6.145    xenial      Manually provisioned machine
0/lxd/0  started  10.0.206.50     juju-cb8047-0-lxd-0    xenial      Container started
0/lxd/1  started  10.0.206.124    juju-cb8047-0-lxd-1    xenial      Container started
0/lxd/2  started  10.0.206.82     juju-cb8047-0-lxd-2    xenial      Container started
0/lxd/3  started  10.0.206.157    juju-cb8047-0-lxd-3    xenial      Container started
0/lxd/4  started  10.0.206.164    juju-cb8047-0-lxd-4    xenial      Container started
0/lxd/5  started  10.0.206.248    juju-cb8047-0-lxd-5    xenial      Container started
0/lxd/6  started  10.0.206.215    juju-cb8047-0-lxd-6    xenial      Container started
1        started  13.112.122.142  manual:13.112.122.142  xenial      Manually provisioned machine
2        started  172.31.35.214   manual:172.31.35.214   xenial      Manually provisioned machine

Relation provider                          Requirer                                    Interface                       Type         Message
contrail-analytics:analytics-cluster       contrail-analytics:analytics-cluster        contrail-analytics-cluster      peer         
contrail-analytics:contrail-analytics      contrail-controller:contrail-analytics      contrail-analytics              regular      
contrail-analyticsdb:analyticsdb-cluster   contrail-analyticsdb:analyticsdb-cluster    contrail-analyticsdb-cluster    peer         
contrail-analyticsdb:contrail-analyticsdb  contrail-analytics:contrail-analyticsdb     contrail-analyticsdb            regular      
contrail-analyticsdb:contrail-analyticsdb  contrail-controller:contrail-analyticsdb    contrail-analyticsdb            regular      
contrail-controller:contrail-controller    contrail-agent:contrail-controller          contrail-controller             regular      
contrail-controller:contrail-controller    contrail-openstack:contrail-controller      contrail-controller             regular      
contrail-controller:controller-cluster     contrail-controller:controller-cluster      contrail-controller-cluster     peer         
contrail-controller:juju-info              ntp:juju-info                               juju-info                       subordinate  
contrail-keystone-auth:contrail-auth       contrail-controller:contrail-auth           contrail-auth                   regular      
contrail-openstack:cluster                 contrail-openstack:cluster                  contrail-openstack-cluster      peer         
contrail-openstack:neutron-api             neutron-api:neutron-plugin-api-subordinate  neutron-plugin-api-subordinate  subordinate  
contrail-openstack:nova-compute            nova-compute:neutron-plugin                 neutron-plugin                  subordinate  
glance:cluster                             glance:cluster                              glance-ha                       peer         
glance:image-service                       nova-cloud-controller:image-service         glance                          regular      
glance:image-service                       nova-compute:image-service                  glance                          regular      
keystone:cluster                           keystone:cluster                            keystone-ha                     peer         
keystone:identity-admin                    contrail-keystone-auth:identity-admin       keystone-admin                  regular      
keystone:identity-service                  glance:identity-service                     keystone                        regular      
keystone:identity-service                  neutron-api:identity-service                keystone                        regular      
keystone:identity-service                  nova-cloud-controller:identity-service      keystone                        regular      
keystone:identity-service                  openstack-dashboard:identity-service        keystone                        regular      
mysql:cluster                              mysql:cluster                               percona-cluster                 peer         
mysql:shared-db                            glance:shared-db                            mysql-shared                    regular      
mysql:shared-db                            keystone:shared-db                          mysql-shared                    regular      
mysql:shared-db                            neutron-api:shared-db                       mysql-shared                    regular      
mysql:shared-db                            nova-cloud-controller:shared-db             mysql-shared                    regular      
neutron-api:cluster                        neutron-api:cluster                         neutron-api-ha                  peer         
neutron-api:neutron-api                    nova-cloud-controller:neutron-api           neutron-api                     regular      
nova-cloud-controller:cluster              nova-cloud-controller:cluster               nova-ha                         peer         
nova-compute:cloud-compute                 nova-cloud-controller:cloud-compute         nova-compute                    regular      
nova-compute:compute-peer                  nova-compute:compute-peer                   nova                            peer         
nova-compute:juju-info                     contrail-agent:juju-info                    juju-info                       subordinate  
nova-compute:juju-info                     ntp:juju-info                               juju-info                       subordinate  
ntp:ntp-peers                              ntp:ntp-peers                               ntp                             peer         
openstack-dashboard:cluster                openstack-dashboard:cluster                 openstack-dashboard-ha          peer         
rabbitmq-server:amqp                       neutron-api:amqp                            rabbitmq                        regular      
rabbitmq-server:amqp                       nova-cloud-controller:amqp                  rabbitmq                        regular      
rabbitmq-server:amqp                       nova-compute:amqp                           rabbitmq                        regular      
rabbitmq-server:cluster                    rabbitmq-server:cluster                     rabbitmq-ha                     peer         

root@ip-172-31-19-222:~# 

root@ip-172-31-35-214:~# contrail-status 
Pod              Service         Original Name                          State    Id            Status        
                 redis           contrail-external-redis                running  d4d57d26cadf  Up 8 minutes  
analytics        api             contrail-analytics-api                 running  da9de5110f9f  Up 8 minutes  
analytics        collector       contrail-analytics-collector           running  ac04930bc5c1  Up 8 minutes  
analytics        nodemgr         contrail-nodemgr                       running  a48717a004c2  Up 8 minutes  
analytics-alarm  alarm-gen       contrail-analytics-alarm-gen           running  9fe1da20a9e8  Up 8 minutes  
analytics-alarm  kafka           contrail-external-kafka                running  f7e964a49cd7  Up 8 minutes  
analytics-alarm  nodemgr         contrail-nodemgr                       running  607f2ef09c5d  Up 8 minutes  
analytics-snmp   nodemgr         contrail-nodemgr                       running  10bbff7fe1b1  Up 8 minutes  
analytics-snmp   snmp-collector  contrail-analytics-snmp-collector      running  082f6ebcbd37  Up 8 minutes  
analytics-snmp   topology        contrail-analytics-snmp-topology       running  cd3b563f3bbb  Up 8 minutes  
config           api             contrail-controller-config-api         running  3631e5abe9b6  Up 8 minutes  
config           device-manager  contrail-controller-config-devicemgr   running  8eaedcd070ae  Up 8 minutes  
config           nodemgr         contrail-nodemgr                       running  07203da0a748  Up 8 minutes  
config           schema          contrail-controller-config-schema      running  8c6a339dd6d0  Up 8 minutes  
config           svc-monitor     contrail-controller-config-svcmonitor  running  44856f8ea9bc  Up 8 minutes  
config-database  cassandra       contrail-external-cassandra            running  22483d05229e  Up 8 minutes  
config-database  nodemgr         contrail-nodemgr                       running  f7658b9c04af  Up 8 minutes  
config-database  rabbitmq        contrail-external-rabbitmq             running  0225630978a7  Up 8 minutes  
config-database  zookeeper       contrail-external-zookeeper            running  4e3d96385f92  Up 8 minutes  
control          control         contrail-controller-control-control    running  382be60341ce  Up 8 minutes  
control          dns             contrail-controller-control-dns        running  14cb5dda1dc3  Up 8 minutes  
control          named           contrail-controller-control-named      running  67279cdc5385  Up 8 minutes  
control          nodemgr         contrail-nodemgr                       running  0456c3f4ade4  Up 8 minutes  
database         cassandra       contrail-external-cassandra            running  de289b60d667  Up 8 minutes  
database         nodemgr         contrail-nodemgr                       running  8289c2002bca  Up 8 minutes  
database         query-engine    contrail-analytics-query-engine        running  b6fe0b3f6ef4  Up 8 minutes  
webui            job             contrail-controller-webui-job          running  4f4a5c07e1fb  Up 6 minutes  
webui            web             contrail-controller-webui-web          running  f56ff61fef1f  Up 6 minutes  

== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active

== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active

== Contrail analytics ==
nodemgr: active
api: active
collector: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail webui ==
web: active
job: active

== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active

== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active

root@ip-172-31-35-214:~# 

root@ip-172-31-4-230:~# contrail-status 
Pod      Service  Original Name           State    Id            Status        
vrouter  agent    contrail-vrouter-agent  running  b30c790ac0f1  Up 8 minutes  
vrouter  nodemgr  contrail-nodemgr        running  47be0b238f30  Up 7 minutes  

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active

root@ip-172-31-4-230:~# 
root@ip-172-31-4-230:~# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
opencontrailnightly/contrail-vrouter-kernel-build-init   latest              9717147e05b3        18 hours ago        255MB
opencontrailnightly/contrail-vrouter-agent               latest              4b4f4651d8b7        18 hours ago        1.41GB
opencontrailnightly/contrail-status                      latest              fa3a147f3236        18 hours ago        1GB
opencontrailnightly/contrail-openstack-compute-init      latest              ba1e85fdb5bb        18 hours ago        1GB
opencontrailnightly/contrail-nodemgr                     latest              fd743b6a284f        18 hours ago        1.01GB
opencontrailnightly/contrail-node-init                   latest              868186c43bf5        18 hours ago        1GB
opencontrailnightly/contrail-base                        latest              d85a1c331fa3        18 hours ago        979MB
root@ip-172-31-4-230:~# 

root@ip-172-31-35-214:~# cat openstackrc 
export OS_USERNAME=admin
export OS_PASSWORD=contrail123
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://10.0.206.215:5000/v2.0
root@ip-172-31-35-214:~# 

pip install python-openstackclient
source openstackrc

root@ip-172-31-35-214:~# openstack network list
+--------------------------------------+-------------------------+---------+
| ID                                   | Name                    | Subnets |
+--------------------------------------+-------------------------+---------+
| 6d4589ca-eb25-4182-812c-f47f53d0b9d8 | __link_local__          |         |
| cd9b79f0-9b05-4820-865a-fe1ab9446f88 | ip-fabric               |         |
| cf4871f6-35be-4f02-8ad7-04dc21e95440 | default-virtual-network |         |
| 1d36fa0d-90be-42c2-b651-cc147969d152 | dci-network             |         |
+--------------------------------------+-------------------------+---------+
root@ip-172-31-35-214:~# 

root@ip-172-31-35-214:~# ./contrail-introspect-cli/ist.py ctr route summary
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| name                                               | prefixes | paths | primary_paths | secondary_paths | infeasible_paths |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| default-domain:default-                            | 0        | 0     | 0             | 0               | 0                |
| project:__link_local__:__link_local__.inet.0       |          |       |               |                 |                  |
| default-domain:default-project:dci-                | 0        | 0     | 0             | 0               | 0                |
| network:__default__.inet.0                         |          |       |               |                 |                  |
| default-domain:default-project:dci-network:dci-    | 0        | 0     | 0             | 0               | 0                |
| network.inet.0                                     |          |       |               |                 |                  |
| default-domain:default-project:default-virtual-    | 0        | 0     | 0             | 0               | 0                |
| network:default-virtual-network.inet.0             |          |       |               |                 |                  |
| inet.0                                             | 0        | 0     | 0             | 0               | 0                |
| default-domain:default-project:ip-fabric:ip-       | 1        | 1     | 1             | 0               | 0                |
| fabric.inet.0                                      |          |       |               |                 |                  |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
root@ip-172-31-35-214:~# 

curl -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create cirros --disk-format qcow2 --public --container-format bare --file cirros-0.4.0-x86_64-disk.img
openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
openstack network create testvn
openstack subnet create --subnet-range 192.168.100.0/24 --network testvn subnet1
NET_ID=`openstack network list | grep testvn | awk -F '|' '{print $2}' | tr -d ' '`
openstack server create --flavor m1.tiny --image cirros --nic net-id=${NET_ID} vm1
openstack server create --flavor m1.tiny --image cirros --nic net-id=${NET_ID} vm2
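
The NET_ID line above scrapes the ID column out of the `openstack network list` table with grep/awk. The same extraction can be sketched in Python (the sample table is the output shown earlier in this post):

```python
# Sample output of `openstack network list` (as shown earlier in this post)
table = """\
+--------------------------------------+-------------------------+---------+
| ID                                   | Name                    | Subnets |
+--------------------------------------+-------------------------+---------+
| 6d4589ca-eb25-4182-812c-f47f53d0b9d8 | __link_local__          |         |
| cd9b79f0-9b05-4820-865a-fe1ab9446f88 | ip-fabric               |         |
+--------------------------------------+-------------------------+---------+
"""

def network_id(output: str, name: str) -> str:
    """Return the ID column of the row whose Name column matches `name`."""
    for line in output.splitlines():
        cols = [c.strip() for c in line.strip("|").split("|")]
        if len(cols) >= 2 and cols[1] == name:
            return cols[0]
    raise KeyError(name)

print(network_id(table, "ip-fabric"))
```

Note that openstackclient can also emit a single field directly (e.g. `openstack network show testvn -f value -c id`), which should avoid the table scraping altogether.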

root@ip-172-31-35-214:~# openstack server list
+--------------------------------------+------+--------+----------------------+--------+---------+
| ID                                   | Name | Status | Networks             | Image  | Flavor  |
+--------------------------------------+------+--------+----------------------+--------+---------+
| 36970673-a7b7-4248-8ea8-207bfc808beb | vm2  | ACTIVE | testvn=192.168.100.4 | cirros | m1.tiny |
| 7e222583-e37b-4570-a5a8-fda4d2ca7d5b | vm1  | ACTIVE | testvn=192.168.100.3 | cirros | m1.tiny |
+--------------------------------------+------+--------+----------------------+--------+---------+
root@ip-172-31-35-214:~# 

root@ip-172-31-35-214:~# ./contrail-introspect-cli/ist.py ctr route summary
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| name                                               | prefixes | paths | primary_paths | secondary_paths | infeasible_paths |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| default-domain:admin:testvn:testvn.inet.0          | 2        | 2     | 2             | 0               | 0                |
| default-domain:default-                            | 0        | 0     | 0             | 0               | 0                |
| project:__link_local__:__link_local__.inet.0       |          |       |               |                 |                  |
| default-domain:default-project:dci-                | 0        | 0     | 0             | 0               | 0                |
| network:__default__.inet.0                         |          |       |               |                 |                  |
| default-domain:default-project:dci-network:dci-    | 0        | 0     | 0             | 0               | 0                |
| network.inet.0                                     |          |       |               |                 |                  |
| default-domain:default-project:default-virtual-    | 0        | 0     | 0             | 0               | 0                |
| network:default-virtual-network.inet.0             |          |       |               |                 |                  |
| inet.0                                             | 0        | 0     | 0             | 0               | 0                |
| default-domain:default-project:ip-fabric:ip-       | 1        | 1     | 1             | 0               | 0                |
| fabric.inet.0                                      |          |       |               |                 |                  |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
root@ip-172-31-35-214:~# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:admin:testvn:testvn.inet.0

default-domain:admin:testvn:testvn.inet.0: 2 destinations, 2 routes (2 primary, 0 secondary, 0 infeasible)

192.168.100.3/32, age: 0:00:43.784175, last_modified: 2019-May-04 08:35:34.135843
    [XMPP (interface)|ip-172-31-4-230.ap-northeast-1.compute.internal] age: 0:00:43.787824, localpref: 200, nh: 172.31.4.230, encap: ['gre', 'udp'], label: 25, AS path: None

192.168.100.4/32, age: 0:00:25.368270, last_modified: 2019-May-04 08:35:52.551748
    [XMPP (interface)|ip-172-31-4-230.ap-northeast-1.compute.internal] age: 0:00:25.372239, localpref: 200, nh: 172.31.4.230, encap: ['gre', 'udp'], label: 30, AS path: None
root@ip-172-31-35-214:~#

ubuntu@ip-172-31-4-230:~$ ip route
default via 172.31.0.1 dev vhost0 
169.254.0.1 dev vhost0  proto 109  scope link 
169.254.0.3 dev vhost0  proto 109  scope link 
169.254.0.4 dev vhost0  proto 109  scope link 
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 linkdown 
172.31.0.0/20 dev vhost0  proto kernel  scope link  src 172.31.4.230 
ubuntu@ip-172-31-4-230:~$ 
ubuntu@ip-172-31-4-230:~$ ssh ^C
ubuntu@ip-172-31-4-230:~$ 
ubuntu@ip-172-31-4-230:~$ ssh cirros@169.254.0.3
The authenticity of host '169.254.0.3 (169.254.0.3)' can't be established.
ECDSA key fingerprint is SHA256:+dk0gBCbyj52tmf1QHD4J6Lem39S25dqfoIPw1VCzJs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '169.254.0.3' (ECDSA) to the list of known hosts.
cirros@169.254.0.3's password: 
$ 
$ ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000\    link/ether 02:ac:c9:3a:e7:8e brd ff:ff:ff:ff:ff:ff
2: eth0    inet 192.168.100.3/24 brd 192.168.100.255 scope global eth0\       valid_lft forever preferred_lft forever
2: eth0    inet6 fe80::ac:c9ff:fe3a:e78e/64 scope link \       valid_lft forever preferred_lft forever
$ ping 192.168.100.4
PING 192.168.100.4 (192.168.100.4): 56 data bytes
64 bytes from 192.168.100.4: seq=0 ttl=64 time=4.563 ms
64 bytes from 192.168.100.4: seq=1 ttl=64 time=0.857 ms
^C
--- 192.168.100.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.857/2.710/4.563 ms
$ 

Name resolution between two kubernetes clusters

This section checks whether two TungstenFabric-backed kubernetes clusters can resolve the names of, and ping, svc / pod in the other cluster.
The environment was four CentOS 7.5 nodes on AWS (ami-3185744e, t2.medium).

With an ansible-deployer installation, the ip subnets of the two kubernetes clusters would overlap, so this time kubernetes was installed with kubeadm instead.
Since the subnet / service-dns-domain used by each cluster needed to change, the following commands were used for kubeadm init:

Cluster 0:
kubeadm init --pod-network-cidr=10.32.0.0/24 --service-cidr=10.96.0.0/24
Cluster 1:
kubeadm init --pod-network-cidr=10.32.1.0/24 --service-cidr=10.96.1.0/24 --service-dns-domain=cluster1.local
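
Since the whole point of the kubeadm route is keeping the two clusters' subnets disjoint, the chosen CIDRs can be sanity-checked up front. A minimal sketch using Python's ipaddress module (the helper name is mine; the CIDRs are those from the commands above):

```python
import ipaddress

def overlapping(cidrs_a, cidrs_b):
    """Return the pairs of CIDRs that overlap between two clusters."""
    nets_a = [ipaddress.ip_network(c) for c in cidrs_a]
    nets_b = [ipaddress.ip_network(c) for c in cidrs_b]
    return [(str(a), str(b)) for a in nets_a for b in nets_b if a.overlaps(b)]

# pod / service CIDRs passed to kubeadm init above
cluster0 = ["10.32.0.0/24", "10.96.0.0/24"]
cluster1 = ["10.32.1.0/24", "10.96.1.0/24"]
assert overlapping(cluster0, cluster1) == []  # disjoint, so the clusters can be peered
```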

For cluster 1, the svc ip used for coredns was also changed (to match the new service subnet):

# cat /etc/sysconfig/kubelet 
-KUBELET_EXTRA_ARGS=
+KUBELET_EXTRA_ARGS="--cluster-dns=10.96.1.10"
# systemctl restart kubelet


TungstenFabric was installed mostly as described in the article below, except that this time the TungstenFabric controller itself also runs on kubernetes.
http://aaabbb-200904.hatenablog.jp/entry/2019/03/17/222320

Because of this, the yaml used to deploy TungstenFabric changes:

- # ./resolve-manifest.sh contrail-non-nested-kubernetes.yaml > cni-vrouter.yaml
+ # ./resolve-manifest.sh contrail-standalone-kubernetes.yaml > cni-vrouter.yaml 

In addition, the following changes were made while editing cni-vrouter.yaml, and after applying it.

Append the following to cni-vrouter.yaml (choose subnets and an AS number that do not overlap between the clusters):
  KUBERNETES_POD_SUBNETS: 10.32.1.0/24
  KUBERNETES_IP_FABRIC_SUBNETS: 10.64.1.0/24
  KUBERNETES_SERVICE_SUBNETS: 10.96.1.0/24
  JVM_EXTRA_OPTS: "-Xms128m -Xmx1g"
  BGP_ASN: "64513"
※ Also delete the VROUTER_GATEWAY line (if it is left in, the vRouter becomes unreachable once the yaml is applied)

# vi set-label.sh
masternode=$(kubectl get node | grep -w master | awk '{print $1}')
agentnodes=$(kubectl get node | grep -v -w -e master -e NAME | awk '{print $1}')
for i in config configdb analytics webui control
do
kubectl label node ${masternode} node-role.opencontrail.org/${i}=
done

for i in ${agentnodes}
do
kubectl label node ${i} node-role.opencontrail.org/agent=
done

# bash set-label.sh
※ This assigns the controller roles to the master node and the agent role to the vrouter nodes

Once the controller and vrouter came up and each cluster's webui was reachable, two steps were performed: 1. set route-target 64512:11 on k8s-pod-network and k8s-service-network, and 2. configure a bgp peer between the two controllers; after that, pods / svcs in the two clusters could reach each other.
http://aaabbb-200904.hatenablog.jp/entry/2017/11/06/011959

The next step is configuring coredns, but checking the coredns deployment showed that its pods were not being recognized, so livenessProbe and readinessProbe were removed with the following command, after which the pods were picked up. (Without this step, the coredns pods are never added as targets of the Service.)
# kubectl edit deployment -n kube-system coredns

In addition, the following coredns configuration changes were made, to 1. fix slow name resolution, and 2. forward queries for the other cluster based on its service-dns-domain.

# kubectl edit -n kube-system configmap coredns
1.
 -        forward . /etc/resolv.conf
 +        forward . 10.32.0.253
Change as above (the forward destination is set to a service-ip on k8s-pod-network)
2.
    cluster1.local:53 {
        errors
        cache 30
        forward . 10.96.1.10
    }
Append the block above (configured so that the domain matches its forward destination)
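
For reference, combining both edits, cluster 0's Corefile ends up shaped roughly like this (a sketch: everything other than the two forward-related changes is whatever kubeadm generated, which varies by version):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . 10.32.0.253
    cache 30
}
cluster1.local:53 {
    errors
    cache 30
    forward . 10.96.1.10
}
```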

With the above in place, pods in cluster 0 and cluster 1 could resolve and ping pods in the other cluster, as shown below.
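
The names queried below follow kubernetes' dashed-IP pod DNS convention, `<ip-with-dashes>.<namespace>.pod.<cluster-domain>`; a small helper (the function name is mine) illustrating how the cross-cluster names are formed:

```python
def pod_fqdn(ip, namespace, cluster_domain):
    """Build the kubernetes dashed-IP pod DNS name for a pod address."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# e.g. the cluster1 pod resolved from cluster0 below
assert pod_fqdn("10.32.1.249", "default", "cluster1.local") == \
       "10-32-1-249.default.pod.cluster1.local"
```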

cluster0 -> cluster1:

/ # nslookup 10-32-1-249.default.pod.cluster1.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      10-32-1-249.default.pod.cluster1.local
Address 1: 10.32.1.249 ip-10-32-1-249.ap-northeast-1.compute.internal
/ # 

/ # ping 10-32-1-249.default.pod.cluster1.local
PING 10-32-1-249.default.pod.cluster1.local (10.32.1.249): 56 data bytes
64 bytes from 10.32.1.249: seq=0 ttl=63 time=1.025 ms
64 bytes from 10.32.1.249: seq=1 ttl=63 time=0.598 ms
^C
--- 10-32-1-249.default.pod.cluster1.local ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.598/0.811/1.025 ms
/ # 
/ # ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue \    link/ether 02:10:48:88:da:59 brd ff:ff:ff:ff:ff:ff
15: eth0    inet 10.32.0.252/24 scope global eth0\       valid_lft forever preferred_lft forever
15: eth0    inet6 fe80::501c:63ff:fe7e:6166/64 scope link \       valid_lft forever preferred_lft forever
/ # 

cluster1 -> cluster0:

/ # nslookup 10-32-0-252.default.pod.cluster.local
Server:    10.96.1.10
Address 1: 10.96.1.10 kube-dns.kube-system.svc.cluster1.local

Name:      10-32-0-252.default.pod.cluster.local
Address 1: 10.32.0.252 ip-10-32-0-252.ap-northeast-1.compute.internal
/ # 
/ # 
/ # ping 10-32-0-252.default.pod.cluster.local
PING 10-32-0-252.default.pod.cluster.local (10.32.0.252): 56 data bytes
64 bytes from 10.32.0.252: seq=0 ttl=63 time=0.900 ms
64 bytes from 10.32.0.252: seq=1 ttl=63 time=0.535 ms
^C
--- 10-32-0-252.default.pod.cluster.local ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.535/0.717/0.900 ms
/ # 
/ # ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue \    link/ether 02:74:65:28:34:59 brd ff:ff:ff:ff:ff:ff
9: eth0    inet 10.32.1.249/24 scope global eth0\       valid_lft forever preferred_lft forever
9: eth0    inet6 fe80::2c59:7bff:fe92:114c/64 scope link \       valid_lft forever preferred_lft forever
/ #

So even when workloads are split across multiple clusters, as long as they stay within TungstenFabric and use fqdns, they can communicate without much awareness of which cluster they are in.
This may be worth applying when operating multiple clusters.

introspect-cli

TungstenFabric's control process carries a large number of routes, and I had been looking for a way to inspect them from a cli; the following tool turned out to do the job, so example output is included below.
https://github.com/vcheny/contrail-introspect-cli

In particular, the commands

./ist.py ctr nei
./ist.py ctr route summary
./ist.py ctr route tables
./ist.py ctr route show [-t table] [-r] [prefix]
./ist.py vr xmpp
./ist.py vr vn
./ist.py vr vrf
./ist.py vr route
./ist.py (対応するコンポーネント) status

look useful for troubleshooting.

Installation
※ run on the controller node
pip install lxml prettytable
git clone https://github.com/vcheny/contrail-introspect-cli.git
cd contrail-introspect-cli
Example output
Common:
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py -h
usage: ist [-h] [--version] [--debug] [--host HOST] [--port PORT]
           
           {alarm_gen,analytics,cfg_api,cfg_disc,cfg_schema,cfg_svcmon,collector,ctr,dm,dns,nodemgr_analytics,nodemgr_cfg,nodemgr_ctr,nodemgr_db,nodemgr_vr,qe,vr}
           ...

A script to make Contrail Introspect output CLI friendly.

positional arguments:
  {alarm_gen,analytics,cfg_api,cfg_disc,cfg_schema,cfg_svcmon,collector,ctr,dm,dns,nodemgr_analytics,nodemgr_cfg,nodemgr_ctr,nodemgr_db,nodemgr_vr,qe,vr}
    alarm_gen           contrail-alarm-gen
    analytics           contrail-analytics-api
    cfg_api             contrail-api
    cfg_disc            contrail-discovery
    cfg_schema          contrail-schema
    cfg_svcmon          contrail-svc-monitor
    collector           contrail-collector
    ctr                 contrail-control
    dm                  contrail-device-manager
    dns                 contrail-dns
    nodemgr_analytics   contrail-analytics-nodemgr
    nodemgr_cfg         contrail-config-nodemgr
    nodemgr_ctr         contrail-control-nodemgr
    nodemgr_db          contrail-database-nodemgr
    nodemgr_vr          contrail-vrouter-nodemgr
    qe                  contrail-query-engine
    vr                  contrail-vrouter-agent

optional arguments:
  -h, --help            show this help message and exit
  --version             Script version
  --debug               Verbose mode
  --host HOST           Introspect host address. Default: localhost
  --port PORT           Introspect port number
[root@ip-172-31-42-64 contrail-introspect-cli]#


control:
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr -h
usage: ist ctr [-h]
               
               {status,cpu,trace,uve,nei,ri,route,mcast,bgp_stats,xmpp,ifmap,sc,config,rt}
               ...

positional arguments:
  {status,cpu,trace,uve,nei,ri,route,mcast,bgp_stats,xmpp,ifmap,sc,config,rt}
    status              Node/component status
    cpu                 CPU load info
    trace               Sandesh trace buffer
    uve                 Sandesh UVE cache
    nei                 Show BGP/XMPPP neighbors
    ri                  Show routing instances
    route               Show route info
    mcast               Show multicast managers
    bgp_stats           Show BGP server stats
    xmpp                Show XMPP info
    ifmap               Show IFMAP info
    sc                  Show ServiceChain info
    config              Show related config info
    rt                  Show RtGroup info

optional arguments:
  -h, --help            show this help message and exit
[root@ip-172-31-42-64 contrail-introspect-cli]# 


[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr status
module_id: contrail-control
state: Functional
description
+-----------+-----------+---------------------+--------+----------------------------------+
| type      | name      | server_addrs        | status | description                      |
+-----------+-----------+---------------------+--------+----------------------------------+
| Collector | n/a       |   172.31.42.64:8086 | Up     | Established                      |
| Database  | Cassandra |   172.31.42.64:9041 | Up     | Established Cassandra connection |
| Database  | RabbitMQ  |   172.31.42.64:5673 | Up     | RabbitMQ connection established  |
+-----------+-----------+---------------------+--------+----------------------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr nei
+------------------------+---------------+----------+----------+-----------+-------------+------------+------------+-----------+
| peer                   | peer_address  | peer_asn | encoding | peer_type | state       | send_state | flap_count | flap_time |
+------------------------+---------------+----------+----------+-----------+-------------+------------+------------+-----------+
| ip-172-31-18-221.local | 172.31.18.221 | 0        | XMPP     | internal  | Established | in sync    | 0          | n/a       |
| ip-172-31-4-246.local  | 172.31.4.246  | 0        | XMPP     | internal  | Established | in sync    | 0          | n/a       |
+------------------------+---------------+----------+----------+-----------+-------------+------------+------------+-----------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr ri
+--------------------------------------+----------+----------+-------------------------+------------------------+------------------+
| name                                 | vn_index | vxlan_id | import_target           | export_target          | routing_policies |
+--------------------------------------+----------+----------+-------------------------+------------------------+------------------+
| default-domain:default-project:__lin | 3        | 0        |   target:64512:7999999  |                        |                  |
| k_local__:__link_local__             |          |          |   target:172.31.42.64:4 |                        |                  |
| default-domain:default-project:dci-  | 4        | 0        |   target:64512:7999999  |   target:64512:8000001 |                  |
| network:__default__                  |          |          |   target:64512:8000001  |                        |                  |
|                                      |          |          |   target:172.31.42.64:1 |                        |                  |
| default-domain:default-project:dci-  | 4        | 0        |   target:64512:7999999  |   target:64512:8000003 |                  |
| network:dci-network                  |          |          |   target:64512:8000003  |                        |                  |
|                                      |          |          |   target:172.31.42.64:5 |                        |                  |
| default-domain:default-project       | 1        | 0        |   target:64512:7999999  |   target:64512:8000000 |                  |
| :default-virtual-network:default-    |          |          |   target:64512:8000000  |                        |                  |
| virtual-network                      |          |          |   target:172.31.42.64:2 |                        |                  |
| default-domain:default-project:ip-   | 2        | 0        |                         |                        |                  |
| fabric:__default__                   |          |          |                         |                        |                  |
| default-domain:default-project:ip-   | 2        | 0        |   target:64512:7999999  |   target:64512:8000002 |                  |
| fabric:ip-fabric                     |          |          |   target:64512:8000002  |                        |                  |
|                                      |          |          |   target:64512:8000004  |                        |                  |
|                                      |          |          |   target:64512:8000005  |                        |                  |
|                                      |          |          |   target:172.31.42.64:3 |                        |                  |
| default-domain:k8s-default:k8s-      | 5        | 0        |   target:64512:7999999  |   target:64512:8000004 |                  |
| default-pod-network:k8s-default-pod- |          |          |   target:64512:8000002  |                        |                  |
| network                              |          |          |   target:64512:8000004  |                        |                  |
|                                      |          |          |   target:64512:8000005  |                        |                  |
|                                      |          |          |   target:172.31.42.64:6 |                        |                  |
| default-domain:k8s-default:k8s-      | 6        | 0        |   target:64512:7999999  |   target:64512:8000005 |                  |
| default-service-network:k8s-default- |          |          |   target:64512:8000002  |                        |                  |
| service-network                      |          |          |   target:64512:8000004  |                        |                  |
|                                      |          |          |   target:64512:8000005  |                        |                  |
|                                      |          |          |   target:172.31.42.64:7 |                        |                  |
+--------------------------------------+----------+----------+-------------------------+------------------------+------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]#

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr route -h
usage: ist ctr route [-h] {summary,tables,show,static,aggregate} ...

positional arguments:
  {summary,tables,show,static,aggregate}
    summary             Show route summary
    tables              List route table names
    show                Show route
    static              Show static routes
    aggregate           Show aggregate routes

optional arguments:
  -h, --help            show this help message and exit

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr route summary
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| name                                               | prefixes | paths | primary_paths | secondary_paths | infeasible_paths |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
| default-domain:default-                            | 0        | 0     | 0             | 0               | 0                |
| project:__link_local__:__link_local__.inet.0       |          |       |               |                 |                  |
| default-domain:default-project:dci-                | 0        | 0     | 0             | 0               | 0                |
| network:__default__.inet.0                         |          |       |               |                 |                  |
| default-domain:default-project:dci-network:dci-    | 0        | 0     | 0             | 0               | 0                |
| network.inet.0                                     |          |       |               |                 |                  |
| default-domain:default-project:default-virtual-    | 0        | 0     | 0             | 0               | 0                |
| network:default-virtual-network.inet.0             |          |       |               |                 |                  |
| inet.0                                             | 0        | 0     | 0             | 0               | 0                |
| default-domain:default-project:ip-fabric:ip-       | 5        | 5     | 2             | 3               | 0                |
| fabric.inet.0                                      |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-pod-network | 5        | 5     | 2             | 3               | 0                |
| :k8s-default-pod-network.inet.0                    |          |       |               |                 |                  |
| default-domain:k8s-default:k8s-default-service-    | 5        | 5     | 1             | 4               | 0                |
| network:k8s-default-service-network.inet.0         |          |       |               |                 |                  |
+----------------------------------------------------+----------+-------+---------------+-----------------+------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr route tables
name: default-domain:default-project:__link_local__:__link_local__.inet.0
name: default-domain:default-project:dci-network:__default__.inet.0
name: default-domain:default-project:dci-network:dci-network.inet.0
name: default-domain:default-project:default-virtual-network:default-virtual-network.inet.0
name: inet.0
name: default-domain:default-project:ip-fabric:ip-fabric.inet.0
name: default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0
name: default-domain:k8s-default:k8s-default-service-network:k8s-default-service-network.inet.0
[root@ip-172-31-42-64 contrail-introspect-cli]# 

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr route show -h
usage: ist ctr route show [-h]
                          [-f {inet,inet6,evpn,ermvpn,rtarget,inetvpn,l3vpn}]
                          [-l LAST] [-d] [-r]
                          [-p {BGP,XMPP,local,ServiceChain,Static}] [-v VRF]
                          [-s SOURCE] [-t TABLE] [--longer_match]
                          [--shorter_match]
                          [prefix]

positional arguments:
  prefix                Show routes matching given prefix

optional arguments:
  -h, --help            show this help message and exit
  -f {inet,inet6,evpn,ermvpn,rtarget,inetvpn,l3vpn}, --family {inet,inet6,evpn,ermvpn,rtarget,inetvpn,l3vpn}
                        Show routes for given family.
  -l LAST, --last LAST  Show routes modified during last time period (e.g.
                        10s, 5m, 2h, or 5d)
  -d, --detail          Display detailed output
  -r, --raw             Display raw output in text
  -p {BGP,XMPP,local,ServiceChain,Static}, --protocol {BGP,XMPP,local,ServiceChain,Static}
                        Show routes learned from given protocol
  -v VRF, --vrf VRF     Show routes in given routing instance specified as fqn
  -s SOURCE, --source SOURCE
                        Show routes learned from given source
  -t TABLE, --table TABLE
                        Show routes in given table
  --longer_match        Shows more specific routes
  --shorter_match       Shows less specific routes
[root@ip-172-31-42-64 contrail-introspect-cli]# 


[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr route show -t default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0

default-domain:k8s-default:k8s-default-pod-network:k8s-default-pod-network.inet.0: 5 destinations, 5 routes (2 primary, 3 secondary, 0 infeasible)

10.47.255.251/32, age: 0:05:08.042661, last_modified: 2019-Apr-07 10:22:37.597451
    [XMPP (interface)|ip-172-31-4-246.local] age: 0:05:08.045915, localpref: 200, nh: 172.31.4.246, encap: ['gre', 'udp'], label: 30, AS path: None

10.47.255.252/32, age: 0:05:11.002858, last_modified: 2019-Apr-07 10:22:34.637254
    [XMPP (interface)|ip-172-31-4-246.local] age: 0:05:11.006508, localpref: 200, nh: 172.31.4.246, encap: ['gre', 'udp'], label: 25, AS path: None

10.96.0.10/32, age: 0:05:08.042742, last_modified: 2019-Apr-07 10:22:37.597370
    [XMPP (interface)|ip-172-31-4-246.local] age: 0:05:08.046665, localpref: 200, nh: 172.31.4.246, encap: ['gre', 'udp'], label: 37, AS path: None

172.31.4.246/32, age: 0:06:28.376773, last_modified: 2019-Apr-07 10:21:17.263339
    [XMPP (interface)|ip-172-31-4-246.local] age: 0:06:28.380937, localpref: 200, nh: 172.31.4.246, encap: ['gre', 'udp', 'native'], label: 16, AS path: None

172.31.18.221/32, age: 0:06:27.287767, last_modified: 2019-Apr-07 10:21:18.352345
    [XMPP (interface)|ip-172-31-18-221.local] age: 0:06:27.292165, localpref: 200, nh: 172.31.18.221, encap: ['gre', 'udp', 'native'], label: 16, AS path: None
[root@ip-172-31-42-64 contrail-introspect-cli]# 


[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py ctr xmpp conn
+------------------------+---------+---------------------+-------------------+-------------+-----------------------+------------+-----------------------------+-----------+------------------+------------+
| name                   | deleted | remote_endpoint     | local_endpoint    | state       | last_event            | last_state | last_state_at               | receivers | server_auth_type | dscp_value |
+------------------------+---------+---------------------+-------------------+-------------+-----------------------+------------+-----------------------------+-----------+------------------+------------+
| ip-172-31-4-246.local  | false   | 172.31.4.246:34576  | 172.31.42.64:5269 | Established | xmsm::EvXmppKeepalive | Active     | 2019-Apr-07 10:21:17.161634 |   IFMap   | NIL              | 0          |
|                        |         |                     |                   |             |                       |            |                             |   BGP     |                  |            |
| ip-172-31-18-221.local | false   | 172.31.18.221:39769 | 172.31.42.64:5269 | Established | xmsm::EvXmppKeepalive | Active     | 2019-Apr-07 10:21:18.252562 |   IFMap   | NIL              | 0          |
|                        |         |                     |                   |             |                       |            |                             |   BGP     |                  |            |
+------------------------+---------+---------------------+-------------------+-------------+-----------------------+------------+-----------------------------+-----------+------------------+------------+


vrouter:
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr -h
Introspect Host: 172.31.4.246
usage: ist vr [-h]
              
              {status,cpu,trace,uve,intf,vn,vrf,route,sg,acl,hc,ifmap,baas,xmpp,xmpp-dns,stats,service,si,nh,vm,mpls,vrfassign,linklocal,vxlan,mirror}
              ...

positional arguments:
  {status,cpu,trace,uve,intf,vn,vrf,route,sg,acl,hc,ifmap,baas,xmpp,xmpp-dns,stats,service,si,nh,vm,mpls,vrfassign,linklocal,vxlan,mirror}
    status              Node/component status
    cpu                 CPU load info
    trace               Sandesh trace buffer
    uve                 Sandesh UVE cache
    intf                Show vRouter interfaces
    vn                  Show Virtual Network
    vrf                 Show VRF
    route               Show routes
    sg                  Show Security Groups
    acl                 Show ACL info
    hc                  Health Check info
    ifmap               IFMAP info
    baas                Bgp As A Service info
    xmpp                Show Agent XMPP connections (route&config) status
    xmpp-dns            Show Agent XMPP connections (dns) status
    stats               Show Agent stats
    service             Service related info
    si                  Service instance info
    nh                  NextHop info
    vm                  VM info
    mpls                MPLS info
    vrfassign           VrfAssign info
    linklocal           LinkLocal service info
    vxlan               vxlan info
    mirror              mirror info

optional arguments:
  -h, --help            show this help message and exit
[root@ip-172-31-42-64 contrail-introspect-cli]#

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr status
Introspect Host: 172.31.4.246
module_id: contrail-vrouter-agent
state: Functional
description
+-----------+---------------------------+---------------------+--------+-------------+
| type      | name                      | server_addrs        | status | description |
+-----------+---------------------------+---------------------+--------+-------------+
| XMPP      | control-node:172.31.42.64 |   172.31.42.64:5269 | Up     | OpenSent    |
| XMPP      | dns-server:172.31.42.64   |   172.31.42.64:53   | Up     | OpenSent    |
| Collector | n/a                       |   172.31.42.64:8086 | Up     | Established |
+-----------+---------------------------+---------------------+--------+-------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr intf
Introspect Host: 172.31.4.246
+-------+----------------+--------+-------------------+---------------+---------------+---------+--------------------------------------+
| index | name           | active | mac_addr          | ip_addr       | mdata_ip_addr | vm_name | vn_name                              |
+-------+----------------+--------+-------------------+---------------+---------------+---------+--------------------------------------+
| 0     | eth0           | Active | n/a               | n/a           | n/a           | n/a     | n/a                                  |
| 1     | vhost0         | Active | 06:c2:b8:cd:fe:fc | 172.31.4.246  | 169.254.0.1   | n/a     | default-domain:default-project:ip-   |
|       |                |        |                   |               |               |         | fabric                               |
| 3     | tapeth0-1a3aed | Active | 02:c7:14:2f:38:59 | 10.47.255.252 | 169.254.0.3   | n/a     | default-domain:k8s-default:k8s-      |
|       |                |        |                   |               |               |         | default-pod-network                  |
| 4     | tapeth0-1a3bbd | Active | 02:c7:53:a3:fc:59 | 10.47.255.251 | 169.254.0.4   | n/a     | default-domain:k8s-default:k8s-      |
|       |                |        |                   |               |               |         | default-pod-network                  |
| 2     | pkt0           | Active | n/a               | n/a           | n/a           | n/a     | n/a                                  |
+-------+----------------+--------+-------------------+---------------+---------------+---------+--------------------------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr vn
Introspect Host: 172.31.4.246
+--------------------------------------+--------------------------------------+-------------------+-----------------+------------+----------+
| name                                 | uuid                                 | layer2_forwarding | ipv4_forwarding | enable_rpf | bridging |
+--------------------------------------+--------------------------------------+-------------------+-----------------+------------+----------+
| default-domain:k8s-default:k8s-      | 1ca95bc7-2c74-492f-9aa9-05e755752ee5 | false             | true            | true       | false    |
| default-service-network              |                                      |                   |                 |            |          |
| default-domain:k8s-default:k8s-      | ab5a4cc8-1bce-4e68-a24a-72a0053cb711 | false             | true            | true       | false    |
| default-pod-network                  |                                      |                   |                 |            |          |
+--------------------------------------+--------------------------------------+-------------------+-----------------+------------+----------+
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr vrf
Introspect Host: 172.31.4.246
+--------------------------------------+---------+---------+---------+-----------+----------+--------------------------------------+
| name                                 | ucindex | mcindex | brindex | evpnindex | vxlan_id | vn                                   |
+--------------------------------------+---------+---------+---------+-----------+----------+--------------------------------------+
| default-domain:default-project:ip-   | 0       | 0       | 0       | 0         | 0        | N/A                                  |
| fabric:__default__                   |         |         |         |           |          |                                      |
| default-domain:default-project:ip-   | 1       | 1       | 1       | 1         | 2        | default-domain:default-project:ip-   |
| fabric:ip-fabric                     |         |         |         |           |          | fabric                               |
| default-domain:k8s-default:k8s-      | 2       | 2       | 2       | 2         | 5        | default-domain:k8s-default:k8s-      |
| default-pod-network:k8s-default-pod- |         |         |         |           |          | default-pod-network                  |
| network                              |         |         |         |           |          |                                      |
| default-domain:k8s-default:k8s-      | 3       | 3       | 3       | 3         | 6        | default-domain:k8s-default:k8s-      |
| default-service-network:k8s-default- |         |         |         |           |          | default-service-network              |
| service-network                      |         |         |         |           |          |                                      |
+--------------------------------------+---------+---------+---------+-----------+----------+--------------------------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr route ## -v 0 (shows the routes of ip-fabric:__default__)
Introspect Host: 172.31.4.246
0.0.0.0/0
    [Local] pref:100
     nh_index:0 , nh_type:None, nh_policy:, active_label:-1, vxlan_id:0
169.254.0.3/32
    [LinkLocal] pref:100
     to 2:c7:14:2f:38:59 via tapeth0-1a3aed, assigned_label:29, nh_index:26 , nh_type:interface, nh_policy:enabled, active_label:29, vxlan_id:0
169.254.0.4/32
    [LinkLocal] pref:100
     to 2:c7:53:a3:fc:59 via tapeth0-1a3bbd, assigned_label:21, nh_index:16 , nh_type:interface, nh_policy:enabled, active_label:21, vxlan_id:0
172.31.0.0/20
    [LocalVmPort] pref:100
     nh_index:14 , nh_type:resolve, nh_policy:disabled, active_label:-1, vxlan_id:0
172.31.0.1/32
    [Local] pref:100
     via 6:8f:fa:85:cf:16, nh_index:15 , nh_type:arp, nh_policy:disabled, active_label:-1, vxlan_id:0
172.31.0.2/32
    [Local] pref:100
     via 6:8f:fa:85:cf:16, nh_index:39 , nh_type:arp, nh_policy:disabled, active_label:-1, vxlan_id:0
172.31.4.246/32
    [FabricRouteExport] pref:100
     via vhost0, nh_index:10 , nh_type:receive, nh_policy:disabled, active_label:0, vxlan_id:0
172.31.18.221/32
    [Local] pref:100
     nh_index:0 , nh_type:None, nh_policy:, active_label:0, vxlan_id:0
224.0.0.0/8
    [Local] pref:100
     via vhost0, nh_index:11 , nh_type:receive, nh_policy:enabled, active_label:0, vxlan_id:0
[root@ip-172-31-42-64 contrail-introspect-cli]#

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr route -v 2 ## show the k8s-pod-network routes
Introspect Host: 172.31.4.246
10.32.0.0/12
    [Local] pref:100
     nh_index:1 , nh_type:discard, nh_policy:disabled, active_label:-1, vxlan_id:0
10.47.255.251/32
    [172.31.42.64] pref:200
     to 2:c7:53:a3:fc:59 via tapeth0-1a3bbd, assigned_label:21, nh_index:16 , nh_type:interface, nh_policy:enabled, active_label:21, vxlan_id:0
    [LocalVmPort] pref:200
     to 2:c7:53:a3:fc:59 via tapeth0-1a3bbd, assigned_label:21, nh_index:16 , nh_type:interface, nh_policy:enabled, active_label:21, vxlan_id:0
10.47.255.252/32
    [172.31.42.64] pref:200
     to 2:c7:14:2f:38:59 via tapeth0-1a3aed, assigned_label:29, nh_index:26 , nh_type:interface, nh_policy:enabled, active_label:29, vxlan_id:0
    [LocalVmPort] pref:200
     to 2:c7:14:2f:38:59 via tapeth0-1a3aed, assigned_label:29, nh_index:26 , nh_type:interface, nh_policy:enabled, active_label:29, vxlan_id:0
10.47.255.253/32
    [Local] pref:100
     to 0:0:0:0:0:1 via pkt0, assigned_label:-1, nh_index:13 , nh_type:interface, nh_policy:enabled, active_label:-1, vxlan_id:0
10.47.255.254/32
    [Local] pref:100
     to 0:0:0:0:0:1 via pkt0, assigned_label:-1, nh_index:13 , nh_type:interface, nh_policy:enabled, active_label:-1, vxlan_id:0
10.96.0.1/32
    [LinkLocal] pref:100
     via vhost0, nh_index:11 , nh_type:receive, nh_policy:enabled, active_label:0, vxlan_id:0
10.96.0.10/32
    [172.31.42.64] pref:200
     via ['tapeth0-1a3bbd', 'tapeth0-1a3aed'], nh_index:45 , nh_type:ECMP Composite sub nh count: 2, nh_policy:enabled, active_label:-1, vxlan_id:0
172.31.4.246/32
    [172.31.42.64] pref:200
     to 6:c2:b8:cd:fe:fc via vhost0, assigned_label:16, nh_index:5 , nh_type:interface, nh_policy:enabled, active_label:16, vxlan_id:0
172.31.18.221/32
    [172.31.42.64] pref:200
     to 6:8f:fa:85:cf:16 via MPLSoUDP dip:172.31.18.221 sip:172.31.4.246 label:16, nh_index:35 , nh_type:tunnel, nh_policy:disabled, active_label:16, vxlan_id:0
[root@ip-172-31-42-64 contrail-introspect-cli]# 

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr sg
Introspect Host: 172.31.4.246
+-----------+---------+--------------------------------------+----------+--------------------------------------+--------------------------------------+
| ref_count | sg_id   | sg_uuid                              | acl_uuid | egress_acl_uuid                      | ingress_acl_uuid                     |
+-----------+---------+--------------------------------------+----------+--------------------------------------+--------------------------------------+
| 2         | 8000005 | 20bb4785-6cd2-43c2-8160-7fbfb1c18e1d | n/a      | 2d7ab4e6-2758-441b-8743-2df5d9eb4ab8 | 024deaeb-5f79-4268-82b0-595e609d5c28 |
+-----------+---------+--------------------------------------+----------+--------------------------------------+--------------------------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr acl
Introspect Host: 172.31.4.246
+--------------------------------------+--------------------------------------+-------------+
| uuid                                 | name                                 | dynamic_acl |
+--------------------------------------+--------------------------------------+-------------+
| 024deaeb-5f79-4268-82b0-595e609d5c28 | default-domain:k8s-kube-system:k8s-  | false       |
|                                      | kube-system-default-sg:ingress-      |             |
|                                      | access-control-list                  |             |
| 11d8294f-e049-42b9-a0e6-e64eb036fd5f | default-domain:k8s-default:k8s-      | false       |
|                                      | default-service-network:k8s-default- |             |
|                                      | service-network                      |             |
| 21deedf2-2c26-4897-b5a7-b5a0ca060532 | default-domain:k8s-default:k8s-      | false       |
|                                      | default-pod-network:k8s-default-pod- |             |
|                                      | network                              |             |
| 2d7ab4e6-2758-441b-8743-2df5d9eb4ab8 | default-domain:k8s-kube-system:k8s-  | false       |
|                                      | kube-system-default-sg:egress-       |             |
|                                      | access-control-list                  |             |
| b4e48fd4-e75d-4989-bc25-c55a99a998a8 | default-policy-management:k8s-       | false       |
|                                      | denyall                              |             |
| c5552c5f-f588-41f9-bcfd-62799e8483b0 | default-policy-management:k8s-       | false       |
|                                      | Ingress                              |             |
| edc2d263-d0f1-4f0d-ad39-0570153bc674 | default-policy-management:k8s-       | false       |
|                                      | allowall                             |             |
| f527d50b-5f0a-4aa3-8607-7514cb96b30f | default-domain:default-project:ip-   | false       |
|                                      | fabric:ip-fabric                     |             |
+--------------------------------------+--------------------------------------+-------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr xmpp
Introspect Host: 172.31.4.246
+---------------+-------------+-------------------------------------+-------------------+----------------+------------+-----------+
| controller_ip | state       | peer_name                           | peer_address      | cfg_controller | flap_count | flap_time |
+---------------+-------------+-------------------------------------+-------------------+----------------+------------+-----------+
| 172.31.42.64  | Established | network-control@contrailsystems.com | 172.31.42.64:5269 | Yes            | 0          | n/a       |
+---------------+-------------+-------------------------------------+-------------------+----------------+------------+-----------+
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr xmpp-dns
Introspect Host: 172.31.4.246
+-------------------+-------------+---------------------------------+-------------------+------------+-----------------------------+
| dns_controller_ip | state       | peer_name                       | peer_address      | flap_count | flap_time                   |
+-------------------+-------------+---------------------------------+-------------------+------------+-----------------------------+
| 172.31.42.64      | Established | network-dns@contrailsystems.com | 172.31.42.64:8093 | 0          | 1970-Jan-01 00:00:54.080512 |
+-------------------+-------------+---------------------------------+-------------------+------------+-----------------------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr stats
Introspect Host: 172.31.4.246
IpcStatsResp
  ipc_in_msgs: 0
  ipc_out_msgs: 0
PktTrapStatsResp
  exceptions: 1175
  invalid_agent_hdr: 0
  invalid_interface: 8
  no_handler: 0
  pkt_dropped: 8
  pkt_fragments_dropped: 0
FlowStatsResp
  flow_active: 60
  flow_created: 1079
  flow_aged: 1019
  flow_drop_due_to_max_limit: 0
  flow_drop_due_to_linklocal_limit: 0
  flow_max_system_flows: 629760
  flow_max_vm_flows: 0
XmppStatsInfo
  ip: 172.31.42.64
  in_msgs: 43
  out_msgs: 75
  reconnect: 1
  config_in_msgs: 22
SandeshStatsResp
  sandesh_in_msgs: 0
  sandesh_out_msgs: 0
  sandesh_http_sessions: 0
  sandesh_reconnects: 0
ShowIFMapAgentStatsResp
  node_updates_processed: 75
  node_deletes_processed: 0
  link_updates_processed: 88
  link_deletes_processed: 0
  node_update_parse_errors: 0
  link_update_parse_errors: 0
  node_delete_parse_errors: 0
  link_delete_parse_errors: 0
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr service
Introspect Host: 172.31.4.246
PktStats
  total_rcvd: 1183
  dhcp_rcvd: 0
  arp_rcvd: 626
  dns_rcvd: 4
  icmp_rcvd: 0
  flow_rcvd: 545
  dropped: 0
  total_sent: 952
  dhcp_sent: 0
  arp_sent: 948
  dns_sent: 4
  icmp_sent: 0
  dhcp_q_threshold_exceeded: 0
  arp_q_threshold_exceeded: 0
  dns_q_threshold_exceeded: 0
  icmp_q_threshold_exceeded: 0
  flow_q_threshold_exceeded: 0
  mac_learning_msg_rcvd: 0
DhcpStats
  dhcp_discover: 0
  dhcp_request: 0
  dhcp_inform: 0
  dhcp_decline: 0
  dhcp_other: 0
  dhcp_errors: 0
  offers_sent: 0
  acks_sent: 0
  nacks_sent: 0
  relay_request: 0
  relay_response: 0
ArpStats
  arp_entries: 2
  arp_requests: 5
  arp_replies: 624
  arp_gratuitous: 0
  arp_resolved: 2
  arp_max_retries_exceeded: 0
  arp_errors: 0
  arp_invalid_packets: 0
  arp_invalid_interface: 0
  arp_invalid_vrf: 0
  arp_invalid_address: 0
DnsStats
  dns_resolver
      172.31.42.64
  dscp: 0
  dns_requests: 4
  dns_resolved: 0
  dns_retransmit_reqs: 0
  dns_unsupported: 0
  dns_failures: 4
  dns_drops: 0
IcmpStats
  icmp_gw_ping: 0
  icmp_gw_ping_err: 0
  icmp_drop: 0
MetadataResponse
  metadata_server_port: 8097
  metadata_requests: 0
  metadata_responses: 0
  metadata_proxy_sessions: 0
  metadata_internal_errors: 0
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr vm
Introspect Host: 172.31.4.246
+--------------------------------------+----------------+
| uuid                                 | drop_new_flows |
+--------------------------------------+----------------+
| 1a3aedb4-591e-11e9-9fb1-0e78d1b55f1c | false          |
| 1a3bbd2e-591e-11e9-9fb1-0e78d1b55f1c | false          |
+--------------------------------------+----------------+
[root@ip-172-31-42-64 contrail-introspect-cli]# 

[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py --host 172.31.4.246 vr linklocal
Introspect Host: 172.31.4.246
+--------------------------------------+----------------------+------------------------+-------------------+----------------+---------------+
| linklocal_service_name               | linklocal_service_ip | linklocal_service_port | ipfabric_dns_name | ipfabric_ip    | ipfabric_port |
+--------------------------------------+----------------------+------------------------+-------------------+----------------+---------------+
| default-domain-k8s-default-          | 10.96.0.1            | 443                    | n/a               |   172.31.42.64 | 6443          |
| kubernetes-443                       |                      |                        |                   |                |               |
+--------------------------------------+----------------------+------------------------+-------------------+----------------+---------------+
[root@ip-172-31-42-64 contrail-introspect-cli]#

※ nh, mpls, vrfassign, vxlan and mirror are nearly identical to the corresponding CLI commands, so they are omitted here

Others (components other than collector, schema-transformer and svc-monitor only have the common status, cpu, trace and uve subcommands, so they are omitted):
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py collector -h
usage: ist collector [-h] {status,cpu,trace,uve,server,redis} ...

positional arguments:
  {status,cpu,trace,uve,server,redis}
    status              Node/component status
    cpu                 CPU load info
    trace               Sandesh trace buffer
    uve                 Sandesh UVE cache
    server              Show collector server info
    redis               Show redis server UVE info

optional arguments:
  -h, --help            show this help message and exit
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py cfg_schema -h
usage: ist cfg_schema [-h] {status,cpu,trace,uve,vn,ri,sc,object} ...

positional arguments:
  {status,cpu,trace,uve,vn,ri,sc,object}
    status              Node/component status
    cpu                 CPU load info
    trace               Sandesh trace buffer
    uve                 Sandesh UVE cache
    vn                  List Virtual Networks
    ri                  List Routing Instances
    sc                  List Service Chains
    object              List Schema-transformer Ojbects

optional arguments:
  -h, --help            show this help message and exit
[root@ip-172-31-42-64 contrail-introspect-cli]# 
[root@ip-172-31-42-64 contrail-introspect-cli]# ./ist.py cfg_svcmon -h
usage: ist cfg_svcmon [-h] {status,cpu,trace,uve,si} ...

positional arguments:
  {status,cpu,trace,uve,si}
    status              Node/component status
    cpu                 CPU load info
    trace               Sandesh trace buffer
    uve                 Sandesh UVE cache
    si                  List service instances

optional arguments:
  -h, --help            show this help message and exit
[root@ip-172-31-42-64 contrail-introspect-cli]#

Load with 4,872 nodes

Following up on the previous post, I checked the load with 4,872 nodes.
http://aaabbb-200904.hatenablog.jp/entry/2019/03/17/222320

※ Ideally I wanted to test with 5,000 nodes, the documented maximum for a kubernetes cluster, but this was as many nodes as I could actually get running...
https://kubernetes.io/ja/docs/setup/cluster-large/

The environment is GCP, with CentOS7 (centos-7-v20190312, CentOS7.6) as the instance image.
One combined controller/analytics node and one k8s master were prepared, using the n1-highcpu-64 instance type (64vCPU, 58GB mem, 30GB disk).
For the vRouters, n1-standard-1 (1vCPU, 3.75GB mem, 10GB disk) was used.

The procedure is basically the same as last time, with one change: to conserve global IPs, only the controller/analytics node and the k8s master were given global IPs this time, while the vRouter nodes got private IPs only. (The default subnet is a /20, which cannot hold 5,000 IPs, so a separate VPC was created and assigned 10.0.0.0/9.) Since the vRouter nodes still need internet access to install their modules, a Cloud NAT (Network services > Cloud NAT) was also created.
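The subnet arithmetic here can be checked with Python's ipaddress module: a /20 holds only 4,096 addresses, while a /9 has room to spare for 5,000 nodes.

```python
import ipaddress

# The default GCP subnet is a /20 -- too small for ~5,000 instances.
default_subnet = ipaddress.ip_network("10.0.0.0/20")
big_subnet = ipaddress.ip_network("10.0.0.0/9")

print(default_subnet.num_addresses)  # 4096
print(big_subnet.num_addresses)      # 8388608
```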
Also, with the original settings cassandra would slow down partway through, so the maximum heap size was raised to 20GB as follows to work around it.

JVM_EXTRA_OPTS: "-Xms128m -Xmx20g"

Other commands added compared to last time are listed below.

# kubectl label node instance-group-2-m2cq node-role.opencontrail.org/config=
  Run after applying cni-vrouter.yaml, in order to start contrail-kube-manager (replace instance-group-2-m2cq with the node name of the k8s master)
  ※ Needed to follow an upstream change

# pip install google-cloud
$ gcloud init
$ gcloud auth login
$ gcloud --format="value(networkInterfaces[0].networkIP)" compute instances list
  Used to dump the IPs of the GCP instances

※ With parallel -j 5000 the machine running the commands ran out of memory, so the run was split into two passes, -j 3000 and -j 2000
The IP diff between passes was obtained with:
$ cat (list every file of dumped instance IPs) | sort | uniq -c | grep ' 1 ' | awk '{print $2}'
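The sort | uniq -c | grep ' 1 ' pipeline simply keeps the lines that appear in exactly one of the dump files; a minimal Python equivalent (the IP data below is made up):

```python
from collections import Counter

def ips_seen_once(*dumps):
    """Return the lines that appear in exactly one of the given dumps,
    i.e. the IPs not yet covered by a previous pass."""
    counts = Counter(ip for dump in dumps for ip in dump)
    return sorted(ip for ip, n in counts.items() if n == 1)

# Hypothetical data: IPs handled in pass 1 vs. the full instance list.
first_pass = ["10.0.0.1", "10.0.0.2"]
all_instances = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(ips_seen_once(first_pass, all_instances))  # ['10.0.0.3']
```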

After startup, 4,872 vRouters were registered, as shown below.
※ The interface count should really have been 4,878 (one per vRouter, coredns x 2, the cirros x 2 running at the time, and the k8s services created by default: kubernetes API and kube-dns), but at the time of checking the displayed value somehow never moved past what is shown... (the analytics-api response correctly reports 4,878; see below)
f:id:aaabbb_200904:20190402003622p:plain

As for load, on the combined controller/analytics node, control used the most cpu / mem, as shown below.
Memory usage in particular rose sharply compared to last time, reaching 30GB.
Even in this state, operations such as handing out an IP to cirros worked without problems, so basic operation appears to have continued.

top - 16:01:05 up  1:17,  2 users,  load average: 62.04, 44.99, 35.31
Tasks: 572 total,   2 running, 570 sleeping,   0 stopped,   0 zombie
%Cpu(s): 65.6 us,  6.5 sy,  0.0 ni, 27.4 id,  0.0 wa,  0.0 hi,  0.4 si,  0.0 st
KiB Mem : 59192668 total, 11975852 free, 42433520 used,  4783296 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 15865188 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                             
19347 root      20   0   35.2g  31.1g  13688 S  3199 55.1 505:42.00 contrail-contro                                     
21052 root      20   0 7336100   2.2g  10960 S  1020  4.0 336:58.27 contrail-collec                                     
19339 root      20   0 5990856 562944  12160 S 286.5  1.0 110:07.13 contrail-dns                                        
21051 root      20   0  559792 259616   6464 R  92.7  0.4  10:36.47 python                                              
10429 polkitd   20   0  890380 854872   1668 S  52.8  1.4   9:47.83 redis-server                                        
13024 polkitd   20   0   34.5g 161112   3816 S  18.2  0.3  22:18.44 beam.smp                                            
 9538 root      20   0 3179672 113380  35224 S   7.6  0.2   4:27.13 dockerd                                             
19290 root      20   0  246400  40248   5284 S   2.3  0.1   0:42.96 python                                              
21044 root      20   0  246404  40192   5284 S   2.3  0.1   0:40.39 python     

$ free -h
              total        used        free      shared  buff/cache   available
Mem:            56G         40G         11G        9.8M        4.6G         15G
Swap:            0B          0B          0B

$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1         30G  5.2G   25G   18% /

$ curl 172.16.1.18:8081/analytics/uves/vrouters | python -m json.tool | grep -w href | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1065k  100 1065k    0     0  3268k      0 --:--:-- --:--:-- --:--:-- 3279k
4872

$ curl 172.16.1.18:8081/analytics/uves/virtual-machines | python -m json.tool | grep -w href | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   768  100   768    0     0   230k      0 --:--:-- --:--:-- --:--:--  375k
4

$ curl 172.16.1.18:8081/analytics/uves/virtual-machine-interfaces | python -m json.tool | grep -w href | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1495k  100 1495k    0     0  6018k      0 --:--:-- --:--:-- --:--:-- 6006k
4878
※ 4,872 (vRouter vhost0) + 4 (k8s pods: coredns x 2, cirros x 2) + 2 (k8s services created by default: kubernetes, kube-dns)
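The curl | grep -w href | wc -l pipeline is counting entries in the JSON list that analytics-api returns; the same count in Python (the sample payload below is made up, in the same shape as /analytics/uves/vrouters):

```python
import json

def count_uves(payload):
    """Count the UVE entries in an analytics-api list response;
    each entry carries an 'href' link, which is what the grep matched."""
    return sum(1 for entry in json.loads(payload) if "href" in entry)

# Made-up two-entry response.
sample = json.dumps([
    {"name": "vrouter-1", "href": "http://host:8081/analytics/uves/vrouter/vrouter-1"},
    {"name": "vrouter-2", "href": "http://host:8081/analytics/uves/vrouter/vrouter-2"},
])
print(count_uves(sample))  # 2
```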

On the k8s master, as last time, kube-apiserver and etcd used the most cpu / mem.

top - 15:55:01 up  1:11,  2 users,  load average: 27.01, 24.74, 21.02
Tasks: 610 total,   2 running, 608 sleeping,   0 stopped,   0 zombie
%Cpu(s): 31.2 us,  2.5 sy,  0.0 ni, 65.3 id,  0.2 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem : 59192676 total, 41605700 free, 15398320 used,  2188656 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 42949840 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                           
20248 root      20   0   19.0g  10.6g  39724 S  1653 18.7 292:06.02 kube-apiserver                    
 9460 root      20   0   11.0g   1.6g 620208 S 359.3  2.8  54:45.80 etcd                              
20705 root      20   0 1406768   1.1g  30552 S 245.4  2.0  32:05.66 kube-controller                   
20410 root      20   0  385024 105376   5992 S  12.6  0.2   2:20.30 python                            
20257 root      20   0  635832 555136  15836 S   8.9  0.9   8:46.33 kube-scheduler                    
 9107 root      20   0 5875912  92168  17240 S   3.0  0.2   4:51.77 kubelet                           
 3285 root       0 -20       0      0      0 S   1.0  0.0   0:11.79 kworker/0:1H    

# free -h
              total        used        free      shared  buff/cache   available
Mem:            56G         14G         39G         66M        2.1G         40G
Swap:            0B          0B          0B

# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        30G  4.7G   26G  16% /

# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE                    NOMINATED NODE   READINESS GATES
cirros1   1/1     Running   0          77s   10.47.255.250   instance-group-2-4197   <none>           <none>
cirros2   1/1     Running   0          73s   10.47.255.249   instance-group-2-k7sr   <none>           <none>

This time I packed a very large number of nodes under a single controller/analytics. It is debatable whether this many nodes really need to live in one cluster; for separation of management, a kubernetes cluster per application may be the better choice.
However, with multiple kubernetes clusters, operations that coordinate between them become difficult.
In that case, the TungstenFabric approach of keeping many nodes in one cluster and enabling or disabling per-application network isolation as needed (policies can also be used) may work better.

Load with a 1,000-node cluster

To check the load of a cluster containing many nodes, I tried a 1,000-node TungstenFabric cluster on aws.

With this many machines, building everything with ansible-deployer took too long, so this time the kube-manager and vRouter modules were distributed using kubernetes itself, following the link below.
https://github.com/Juniper/contrail-ansible-deployer/wiki/Provision-Contrail-Kubernetes-Cluster-in-Non-nested-Mode

For the AMI, CentOS7.5 (ami-3185744e) was used as before; for instance types, one m3.xlarge (4vCPU, 16GB mem) each for the TungstenFabric controller and the k8s master, and 1,000 m3.medium (1vCPU, 4GB mem) for the k8s nodes.

1. Building the TungstenFabric controller

Install with ansible-deployer, in the same way as the link below.
http://aaabbb-200904.hatenablog.jp/entry/2019/02/10/222958

instance.yaml specifies just one controller node, as follows. (k8s_master and kube-manager are removed)

provider_config:
  bms:
   ssh_user: root
   ssh_public_key: /root/.ssh/id_rsa.pub
   ssh_private_key: /root/.ssh/id_rsa
   domainsuffix: local
   ntpserver: ntp.nict.jp
instances:
  bms1:
   provider: bms
   roles:
      config_database:
      config:
      control:
      analytics:
      webui:
   ip: 172.31.xx.xx
contrail_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
  CONTRAIL_VERSION: latest
  KUBERNETES_CLUSTER_PROJECT: {}
  JVM_EXTRA_OPTS: "-Xms128m -Xmx1g"
global_configuration:

2. Building the k8s master

Build the k8s master using kubeadm, following the link below.
https://github.com/Juniper/contrail-docker/wiki/Provision-Contrail-CNI-for-Kubernetes#faqs

# cd
# cat install-k8s-packages.sh
bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
     https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'
setenforce 0
yum install -y kubelet kubeadm kubectl docker
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a

# bash install-k8s-packages.sh

# kubeadm init
※ A command like the following is printed; note it down
kubeadm join 172.31.18.113:6443 --token we70in.mvy0yu0hnxb6kxip --discovery-token-ca-cert-hash sha256:13cf52534ab14ee1f4dc561de746e95bc7684f2a0355cb82eebdbd5b1e9f3634

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

3. Building the k8s nodes

The script above also needs to be run on each k8s node, followed by kubeadm join.
To run this in parallel, GNU parallel and ssh were used.

First, get the list of private IPs of the nodes in aws. (Remove the IPs of the TungstenFabric controller and the k8s master by hand)

$ pip install awscli
$ aws configure
 (enter access key, secret key, region, etc. (create them from IAM if needed))
$ aws ec2 describe-instances --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text | tr '\t' '\n' > /tmp/aaa.txt
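Instead of deleting the controller and master entries from the dumped list by hand, a small filter works too (a sketch; the addresses below are placeholders):

```python
def node_ips(all_ips, exclude):
    """Drop the TungstenFabric controller / k8s master addresses
    from the dumped instance list, preserving order."""
    exclude = set(exclude)
    return [ip for ip in all_ips if ip not in exclude]

dumped = ["172.31.18.113", "172.31.1.1", "172.31.2.2"]  # placeholder IPs
print(node_ips(dumped, exclude=["172.31.18.113"]))  # ['172.31.1.1', '172.31.2.2']
```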

Then send the list above to the k8s master instance and run the following commands against the k8s nodes. (/tmp/aaa.pem is the pem file specified when launching the EC2 instances)
※ This completed in about 10-15 minutes

yum -y install epel-release 
yum -y install parallel
ulimit -n 4096
cat aaa.txt | parallel -j1000 scp -i /tmp/aaa.pem -o StrictHostKeyChecking=no install-k8s-packages.sh centos@{}:/tmp
cat aaa.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} chmod 755 /tmp/install-k8s-packages.sh
cat aaa.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} sudo /tmp/install-k8s-packages.sh
cat aaa.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} sudo kubeadm join 172.31.18.113:6443 --token we70in.mvy0yu0hnxb6kxip --discovery-token-ca-cert-hash sha256:13cf52534ab14ee1f4dc561de746e95bc7684f2a0355cb82eebdbd5b1e9f3634
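The GNU parallel fan-out above can also be approximated from Python with a thread pool, which makes it easy to collect per-node exit codes (a sketch; run_on_all and the command template are hypothetical, and with real nodes the template would be the ssh/kubeadm-join line above):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_all(ips, cmd_template, max_workers=1000):
    """Run a shell command once per node, like `cat aaa.txt | parallel -jN ...`,
    and return a {ip: exit_code} mapping."""
    def run(ip):
        return ip, subprocess.call(cmd_template.format(ip=ip), shell=True)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run, ips))

# A no-op command stands in for the real ssh invocation here.
results = run_on_all(["10.0.0.1", "10.0.0.2"], "true # {ip}", max_workers=2)
print(results)
```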

4. Deploying the vRouters

Once the above is complete, issue the following on the k8s master to deploy the vRouters. (To avoid unnecessary cost, it may be better to do everything up to just before kubectl apply before launching the k8s nodes)

# cd
# yum -y install git
# git clone https://github.com/Juniper/contrail-container-builder.git
# cd /root/contrail-container-builder/kubernetes/manifests
# vi ../../common.env
(append the following)
CONTRAIL_CONTAINER_TAG=latest
CONTRAIL_REGISTRY=opencontrailnightly

# ./resolve-manifest.sh contrail-non-nested-kubernetes.yaml > cni-vrouter.yaml 
※ Apply the following fixes by hand (1. delete the lines that would end up Null, 2. replace some of the entries that would otherwise get the k8s master's IP with the TungstenFabric controller's IP)
--- cni-vrouter.yaml.orig	2019-03-17 21:17:25.218399040 +0900
+++ cni-vrouter.yaml	2019-03-17 21:19:40.744368162 +0900
@@ -11,36 +11,20 @@
   namespace: kube-system
 data:
   AUTH_MODE: {{ AUTH_MODE }}
-  KEYSTONE_AUTH_HOST: {{ KEYSTONE_AUTH_HOST }}
-  KEYSTONE_AUTH_ADMIN_TENANT: "{{ KEYSTONE_AUTH_ADMIN_TENANT }}"
-  KEYSTONE_AUTH_ADMIN_USER: "{{ KEYSTONE_AUTH_ADMIN_USER }}"
-  KEYSTONE_AUTH_ADMIN_PASSWORD: "{{ KEYSTONE_AUTH_ADMIN_PASSWORD }}"
-  KEYSTONE_AUTH_ADMIN_PORT: "{{ KEYSTONE_AUTH_ADMIN_PORT }}"
-  KEYSTONE_AUTH_URL_VERSION: "{{ KEYSTONE_AUTH_URL_VERSION }}"
-  ANALYTICS_API_VIP: {{ ANALYTICS_API_VIP }}
-  ANALYTICS_NODES: {{ ANALYTICS_NODES }}
-  ANALYTICSDB_NODES: {{ ANALYTICSDB_NODES }}
+  ANALYTICS_NODES: TungstenFabric controller IP
+  ANALYTICSDB_NODES: TungstenFabric controller IP
   CLOUD_ORCHESTRATOR: {{ CLOUD_ORCHESTRATOR }}
-  CONFIG_API_VIP: {{ CONFIG_API_VIP }}
-  CONFIG_NODES: {{ CONFIG_NODES }}
-  CONFIGDB_NODES: {{ CONFIGDB_NODES }}
-  CONTROL_NODES: {{ CONTROL_NODES }}
-  CONTROLLER_NODES: {{ CONTROLLER_NODES }}
+  CONFIG_NODES: TungstenFabric controller IP
+  CONFIGDB_NODES: TungstenFabric controller IP
+  CONTROL_NODES: TungstenFabric controller IP
+  CONTROLLER_NODES: TungstenFabric controller IP
   LOG_LEVEL: {{ LOG_LEVEL }}
   METADATA_PROXY_SECRET: {{ METADATA_PROXY_SECRET }}
-  RABBITMQ_NODES: {{ RABBITMQ_NODES }}
+  RABBITMQ_NODES: TungstenFabric controller IP
   RABBITMQ_NODE_PORT: "{{ RABBITMQ_NODE_PORT }}"
-  ZOOKEEPER_NODES: {{ ZOOKEEPER_NODES }}
+  ZOOKEEPER_NODES: TungstenFabric controller IP
   ZOOKEEPER_PORTS: "{{ ZOOKEEPER_PORTS }}"
   ZOOKEEPER_PORT: "{{ ZOOKEEPER_PORT }}"
-  KUBERNETES_CLUSTER_NETWORK: "{{ KUBERNETES_CLUSTER_NETWORK }}"
-  KUBERNETES_CLUSTER_NAME: {{ KUBERNETES_CLUSTER_NAME }}
-  KUBERNETES_POD_SUBNETS: {{ KUBERNETES_POD_SUBNETS }}
-  KUBERNETES_IP_FABRIC_SUBNETS: {{ KUBERNETES_IP_FABRIC_SUBNETS }}
-  KUBERNETES_SERVICE_SUBNETS: {{ KUBERNETES_SERVICE_SUBNETS }}
-  KUBERNETES_IP_FABRIC_FORWARDING: "{{ KUBERNETES_IP_FABRIC_FORWARDING }}"
-  KUBERNETES_IP_FABRIC_SNAT: "{{ KUBERNETES_IP_FABRIC_SNAT }}"
-  KUBERNETES_PUBLIC_FIP_POOL: "{{ KUBERNETES_PUBLIC_FIP_POOL }}"
 ---
 apiVersion: v1
 kind: ConfigMap

# kubectl apply -f cni-vrouter.yaml
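The manual edit above can also be scripted. A rough sketch: the drop/rewrite key lists are taken from the diff shown and are an assumption about which variables the template leaves unresolved in your build.

```python
import re

DROP_PREFIXES = ("KEYSTONE_", "KUBERNETES_")
DROP_KEYS = {"ANALYTICS_API_VIP", "CONFIG_API_VIP"}
REWRITE_KEYS = {"ANALYTICS_NODES", "ANALYTICSDB_NODES", "CONFIG_NODES",
                "CONFIGDB_NODES", "CONTROL_NODES", "CONTROLLER_NODES",
                "RABBITMQ_NODES", "ZOOKEEPER_NODES"}

def patch_manifest(text, controller_ip):
    """Rewrite the *_NODES keys to the controller IP and drop the
    untemplated '{{ VAR }}' lines that would otherwise become Null."""
    out = []
    for line in text.splitlines():
        m = re.match(r'(\s*)([A-Z_]+): "?\{\{ .* \}\}"?$', line)
        if m:
            indent, key = m.groups()
            if key in REWRITE_KEYS:
                out.append("%s%s: %s" % (indent, key, controller_ip))
                continue
            if key in DROP_KEYS or key.startswith(DROP_PREFIXES):
                continue
        out.append(line)  # keep everything else verbatim
    return "\n".join(out)
```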

If it applies successfully, 1,000 vRouters should be added to the cluster, as below.
f:id:aaabbb_200904:20190317221631p:plain

Cluster state

After the cluster came up I looked at each node's resources; CPU usage was high on both the TungstenFabric controller and the k8s master, as shown below.
※ On the TungstenFabric controller the analytics side (contrail-collector, redis, etc.) carried the highest load; on the k8s master it was kube-apiserver and etcd.
For stable operation, additional measures such as allocating more resources or scaling out the controller/analytics may need to be considered.

TungstenFabric controller, analytics:

top - 12:13:59 up 43 min,  1 user,  load average: 5.77, 12.05, 7.24
Tasks: 153 total,   1 running, 152 sleeping,   0 stopped,   0 zombie
%Cpu(s): 22.0 us, 16.9 sy,  0.0 ni, 60.3 id,  0.0 wa,  0.0 hi,  0.7 si,  0.1 st
KiB Mem : 15233672 total,  7091360 free,  3899712 used,  4242600 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 10779720 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                 
21165 root      20   0 1035220 333448  10996 S  48.2  2.2   3:26.06 contrail-collec                         
18891 root      20   0  906588 249180   7524 S  31.9  1.6   0:09.03 node                                    
12763 polkitd   20   0  243412 189736   1668 S  28.6  1.2   2:45.55 redis-server                            
19410 root      20   0 1375424 108644  12148 S  14.3  0.7   1:40.42 contrail-dns                            
18864 root      20   0  810588 165356   7196 S  13.0  1.1   0:05.14 node                                    
19448 root      20   0 2167860   1.0g  13732 S  10.6  7.0  10:04.95 contrail-contro                         
11985 root      20   0  776036  94764  35204 S   3.0  0.6   2:38.34 dockerd                                 
15803 root      20   0  248324  40180   5332 S   2.3  0.3   0:27.72 python   


k8s master:

top - 12:14:09 up 43 min,  1 user,  load average: 11.84, 12.30, 8.06
Tasks: 133 total,   1 running, 132 sleeping,   0 stopped,   0 zombie
%Cpu(s): 87.2 us,  6.6 sy,  0.1 ni,  3.5 id,  0.2 wa,  0.0 hi,  2.4 si,  0.0 st
KiB Mem : 15233672 total,  8167188 free,  4032788 used,  3033696 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 10702748 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                 
11583 root      20   0 2999520   2.7g  37840 S 309.3 18.6  38:35.47 kube-apiserver                                                                                                          
 5854 root      20   0   10.3g 520460 181348 S  47.5  3.4   1:50.44 etcd                                                                                                                    
18836 root      20   0  481984 350196  27724 S  17.6  2.3   0:19.61 kube-controller                                                                                                         
11211 root      20   0 1572404  74628  31760 S   3.7  0.5   0:52.85 kubelet                                                                                                                 
18460 root      20   0  209628 119648  13428 S   2.7  0.8   0:08.79 kube-scheduler                                                                                                          
10663 root      20   0 1280720  60912  16316 S   0.7  0.4   1:12.93 dockerd-current                                                                                                         
  377 root      20   0  145808  88372  87852 S   0.3  0.6   0:42.88 systemd-journal  

Memory and disk, on the other hand, showed no big difference from a small cluster of just a few nodes.
※ Note that if analyticsdb is enabled, it is also likely to consume substantial resources

TungstenFabric controller, analytics:
[root@ip-172-31-13-135 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            14G        4.3G        6.1G         17M        4.1G        9.7G
Swap:            0B          0B          0B
[root@ip-172-31-13-135 ~]# 
[root@ip-172-31-13-135 ~]# 
[root@ip-172-31-13-135 ~]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  4.3G   16G  22% /
[root@ip-172-31-13-135 ~]# 

k8s master:
[root@ip-172-31-18-113 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            14G        3.5G        7.9G        105M        3.2G         10G
Swap:            0B          0B          0B
[root@ip-172-31-18-113 ~]# 
[root@ip-172-31-18-113 ~]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  3.6G   17G  18% /
[root@ip-172-31-18-113 ~]#