OpenStack demo configuration notes v3


Environment:
[t41a] S11.2 (misc. installed software, IPS) (Nova)(Cinder)   140.83.204.41  eth0
[t41b] S11.2 (OpenStack All-in-One) (EVS Controller)          140.83.204.42  eth0
[t52a] S11.2 (Nova)(Cinder)                                   140.83.204.26  eth3

Software installed on the Controller Node:


The other two machines act as both Compute Node and Block Storage Node; software installed:

env | grep OS_
# export OS_PASSWORD=nova
# export OS_AUTH_URL=http://t41b.oracle.com:5000/v2.0/
# export OS_USERNAME=nova
# export OS_TENANT_NAME=service
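These exports can be kept in a small rc file and sourced before running the CLI tools (a sketch; the file name ~/nova.rc is arbitrary):

# ~/nova.rc -- credentials file matching the exports above
export OS_PASSWORD=nova
export OS_AUTH_URL=http://t41b.oracle.com:5000/v2.0/
export OS_USERNAME=nova
export OS_TENANT_NAME=service

# usage:
# . ~/nova.rc
# env | grep OS_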

1. Oracle Linux 7: disable the firewall
1) Check firewalld status:
[root@ol7 ~]# systemctl status firewalld
2) Stop it temporarily:
[root@ol7 ~]# systemctl stop firewalld
3) Disable firewalld so it does not start again after a reboot:
[root@ol7 ~]# systemctl disable firewalld
After starting the VNC server, turn off power saving/screen lock, so a password is not required after the screen locks.

2. Solaris 11.2 Repo and IPS
Change root from a role to a regular user (this is required, or the OpenStack setup later fails):
# sudo rolemod -K type=normal root
Or edit /etc/user_attr and comment out the line:
#root::::type=role
2.1 Create the S11.2 full repo
Copy the four full-repo zip files into the script directory.


# install-full-repo.ksh -d /var/s11.2repo -I -v
2.2 Apply the SRU to the repo
Copy the SRU zip files into the script directory.
# install-full-repo.ksh -d /var/s11.2repo -I -v

2.3 Install the Solaris desktop and start the VNC server
# pkg set-publisher -G '*' -g /var/s11.2repo -P solaris
# pkg install solaris-desktop
# vncserver
2.4 Update the system
# pkg update
Make sure every system is updated to the same version; after rebooting, check the BE (boot environment) information.

2.5 Serve the IPS repo over HTTP
# svccfg -s application/pkg/server setprop pkg/inst_root=/var/s11.2repo
# svccfg -s application/pkg/server setprop pkg/readonly=true
Check availability:
# svcprop -p pkg/inst_root application/pkg/server
# svccfg -s application/pkg/server setprop pkg/port=8090
# svccfg -s pkg/server editprop
Start the repository service:
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server
On every remote Solaris client, point IPS at the machine serving the repo:
# pkg set-publisher -G '*' -M '*' -g http://140.83.204.41:8090/ solaris
# pkg install solaris-desktop
# vncserver

/etc/hosts:
140.83.204.41   t41a   t41a.oracle.com
140.83.204.42   t41b   t41b.oracle.com
140.83.204.26   t52a   t52a.oracle.com
140.83.204.43   oel7   oel7.oracle.com
140.83.204.48   ovs1   ovs1.oracle.com
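Before pointing clients at the depot, its availability can be checked from any host in the /etc/hosts list above (a sketch using the standard pkg(5) tooling):

# query the depot directly over HTTP
pkgrepo info -s http://140.83.204.41:8090/
# and on a client, confirm the publisher origin
pkg publisher solaris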

3. Solaris 11 hostname setup and root SSH
3.1 Set the hostname and /etc/hosts correctly
The /etc/hosts file must be correct, and short and fully qualified names must be used consistently; otherwise the OpenStack setup fails when connecting to MySQL.
svccfg -s system/identity:node setprop config/nodename="t41a.oracle.com"
svccfg -s system/identity:node setprop config/loopback="t41a.oracle.com"
svccfg -s system/identity:node refresh
svcadm restart system/identity:node
svccfg -s system/identity:node listprop config
hostname
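A quick consistency check (a sketch): the node name and both name forms should agree.

# should print the FQDN configured above
hostname
# short and long names must both resolve to this host's address
getent hosts t41a
getent hosts t41a.oracle.com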

3.2 Configure the root user and SSH
1) Change root to a regular user

#sudo rolemod -K type=normal root

Or edit /etc/user_attr and comment out the line:
#root::::type=role
2) Allow root to log in over SSH (configuring EVS requires remote root SSH access)
Edit /etc/ssh/sshd_config:
vi /etc/ssh/sshd_config
Change "PermitRootLogin no" to "PermitRootLogin yes"
Restart the SSH service:
svcadm restart ssh
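The same edit can be made non-interactively, reusing the sed pattern the zone sections below use (a sketch):

sed '/^PermitRootLogin/s/no$/yes/' < /etc/ssh/sshd_config > /system/volatile/sshd_config.$$
cp /system/volatile/sshd_config.$$ /etc/ssh/sshd_config
svcadm restart ssh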

3.3 Power on and boot via the ILOM
ssh ilom-ip
start /SP/console
boot


4. Linux hostname and DNS setup
# hostname ovs1.oracle.com
# vi /etc/hostname
# vi /etc/sysconfig/network
Configure Linux DNS:
# vi /etc/resolv.conf
Add one or more DNS server IP addresses:
search cn.oracle.com
nameserver 10.182.122.37
nameserver 140.83.207.11

5. MySQL database basics
SQL> show databases;
SQL> select user,host,password from mysql.user;
SQL> use database_name;
SQL> show tables;
SQL> describe table_name;

6. All-in-One controller node configuration

6.1 EVS Controller
# pkg install evs
# pkg install rad-evs-controller
# svcadm restart rad:local
# evsadm set-prop -p controller=ssh://evsuser@evs-controller-hostname-or-ipaddress
Set up SSH trust between the nodes (see section 10.1).

6.2 EVS nodes
# pkg install evs
# evsadm set-prop -p controller=ssh://evsuser@evs-controller-hostname-or-ipaddress
Set up SSH trust between the nodes.


6.3 Controller Node Setup
6.3.1 Install Packages
export CONTROLLER_IP=140.83.204.42
export CONTROLLER_FQDN=t41b.oracle.com
export CONTROLLER_SHORTNAME=t41b
export VOLUME_FQDN=t52a.oracle.com
export BLOCK_STORAGE_IP=140.83.204.26
pkg install openstack mysql-55 mysql-55/client python-mysql rabbitmq markupsafe rad-evs-controller
svcadm enable rabbitmq
svcadm enable mysql
svcadm restart rad:local

6.3.2 Configure NTP
The clocks on all systems must be kept consistent, or later configuration steps will fail.
Install the NTP package:
# pkg install ntp
Install the configuration file:
# cp /etc/inet/ntp.client /etc/inet/ntp.conf
### vi /etc/inet/ntp.conf with:
driftfile /var/ntp/drift
server 140.83.204.42 iburst
# svcadm enable ntp
To sync the clocks across systems manually:
T41b# svcadm enable -r time:stream    // enable the time service on T41b
T52a# rdate t41b                      // set T52a's clock to match T41b
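Once ntp is enabled, the offset to the controller can be checked (a sketch; ntpq ships with the ntp package):

# the peer list should show 140.83.204.42 with a small offset/jitter
ntpq -p
date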

6.3.3 Create MySQL databases
# mysql -u root
DROP DATABASE nova;
DROP DATABASE cinder;
DROP DATABASE glance;
DROP DATABASE keystone;
DROP DATABASE neutron;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';


CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
FLUSH PRIVILEGES;
##########################################################################
##### The resulting users:
# mysql> select user,host,password from mysql.user;
+----------+-----------------------+-------------------------------------------+
| user     | host                  | password                                  |
+----------+-----------------------+-------------------------------------------+
| root     | localhost             |                                           |
| root     | t41b                  |                                           |
| root     | 127.0.0.1             |                                           |
| root     | ::1                   |                                           |
|          | localhost             |                                           |
|          | t41b                  |                                           |
| nova     | $controller_shortname | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| cinder   | $controller_shortname | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | $controller_fqdn      | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | $volume_fqdn          | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| glance   | $controller_shortname | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| keystone | $controller_shortname | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| neutron  | $controller_shortname | *2BF1709B510068A2EA039818644ED187BF2A5E94 |
| nova     | t41a.oracle.com       | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | t41b.oracle.com       | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | t52a.oracle.com       | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| cinder   | t41a.oracle.com       | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | t41b.oracle.com       | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | t52a.oracle.com       | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| glance   | t41b.oracle.com       | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| keystone | t41b.oracle.com       | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| neutron  | t41b.oracle.com       | *2BF1709B510068A2EA039818644ED187BF2A5E94 |
| keystone | t41b                  | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| nova     | t41a                  | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | t41b                  | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | t52a                  | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| cinder   | t41a                  | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | t41b                  | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | t52a                  | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| glance   | t41b                  | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| neutron  | t41b                  | *2BF1709B510068A2EA039818644ED187BF2A5E94 |
+----------+-----------------------+-------------------------------------------+
31 rows in set (0.00 sec)
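Since every service follows the same name/password convention, the same grants can be generated in a loop (a sketch, equivalent to the statements above):

for svc in nova cinder glance keystone neutron; do
  mysql -u root -e "CREATE DATABASE IF NOT EXISTS $svc;
    GRANT ALL PRIVILEGES ON $svc.* TO '$svc'@'localhost' IDENTIFIED BY '$svc';
    GRANT ALL PRIVILEGES ON $svc.* TO '$svc'@'%' IDENTIFIED BY '$svc';"
done
mysql -u root -e "FLUSH PRIVILEGES;"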

6.3.4 Configure Keystone
# openssl rand -hex 10    # random_string
# Use the output as the admin_token parameter in /etc/keystone/keystone.conf:
# admin_token = random_string
### vi /etc/keystone/keystone.conf
[sql]
connection=mysql://keystone:keystone@t41b.oracle.com/keystone
[catalog]
driver = keystone.catalog.backends.sql.Catalog
[token]
provider = keystone.token.providers.pki.Provider
# svcadm enable keystone
# su - keystone -c "keystone-manage pki_setup"
########################################
### The following commands create each service's database tables.
### Run each one at the end of that component's configuration!!!
########################################
su - keystone -c "keystone-manage db_sync"
su - nova -c "nova-manage db sync"
su - cinder -c "cinder-manage db sync"
su - glance -c "glance-manage db_sync"
# getent hosts
export CONTROLLER_PUBLIC_ADDRESS=140.83.204.42
export CONTROLLER_ADMIN_ADDRESS=140.83.204.42
export CONTROLLER_INTERNAL_ADDRESS=140.83.204.42
export SERVICE_TOKEN=ADMIN
export CONTROLLER_PUBLIC_ADDRESS=t41b.oracle.com
export CONTROLLER_ADMIN_ADDRESS=t41b.oracle.com
export CONTROLLER_INTERNAL_ADDRESS=t41b.oracle.com
# The following command seeds the Keystone data in MySQL:
su - keystone -c "env CONTROLLER_ADMIN_ADDRESS=t41b.oracle.com CONTROLLER_INTERNAL_ADDRESS=t41b.oracle.com CONTROLLER_PUBLIC_ADDRESS=t41b.oracle.com /usr/demo/openstack/keystone/sample_data.sh"


###############################################################
##### The command above prints the created endpoints:
+-------------+---------------------------------------------+
| Property    | Value                                       |
+-------------+---------------------------------------------+
| adminurl    | http://t41b.oracle.com:$(admin_port)s/v2.0  |
| id          | 58dbce0560a6c903d0be8e70a0661fb1            |
| internalurl | http://t41b.oracle.com:$(public_port)s/v2.0 |
| publicurl   | http://t41b.oracle.com:$(public_port)s/v2.0 |
| region      | RegionOne                                   |
| service_id  | 9513d2bdbfb147f99db9e842937a41cf            |
+-------------+---------------------------------------------+
+-------------+------------------------------------------------------------+
| Property    | Value                                                      |
+-------------+------------------------------------------------------------+
| adminurl    | http://t41b.oracle.com:$(compute_port)s/v1.1/$(tenant_id)s |
| id          | cd707c1cf32d6e5fab04a913525bd354                           |
| internalurl | http://t41b.oracle.com:$(compute_port)s/v1.1/$(tenant_id)s |
| publicurl   | http://t41b.oracle.com:$(compute_port)s/v1.1/$(tenant_id)s |
| region      | RegionOne                                                  |
| service_id  | a92bd4e81b65c1308b9bf24893cc3c76                           |
+-------------+------------------------------------------------------------+
+-------------+----------------------------------------------+
| Property    | Value                                        |
+-------------+----------------------------------------------+
| adminurl    | http://t41b.oracle.com:8776/v1/$(tenant_id)s |
| id          | a50596a21a5648d383fe867ad339eaf3             |
| internalurl | http://t41b.oracle.com:8776/v1/$(tenant_id)s |
| publicurl   | http://t41b.oracle.com:8776/v1/$(tenant_id)s |
| region      | RegionOne                                    |
| service_id  | d6c56beb4ea94040e5d9843bcb30c3b8             |
+-------------+----------------------------------------------+
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://t41b.oracle.com:9292      |
| id          | 40d272c7978bee6fc2148d107c6e27a2 |
| internalurl | http://t41b.oracle.com:9292      |
| publicurl   | http://t41b.oracle.com:9292      |
| region      | RegionOne                        |
| service_id  | 5ab62a08293be43ebabdf2c6beb9b5b9 |
+-------------+----------------------------------+
+-------------+--------------------------------------------+
| Property    | Value                                      |
+-------------+--------------------------------------------+
| adminurl    | http://t41b.oracle.com:8773/services/Admin |
| id          | c7245630807ce6669c5efdb0ea1b0a06           |
| internalurl | http://t41b.oracle.com:8773/services/Cloud |
| publicurl   | http://t41b.oracle.com:8773/services/Cloud |
| region      | RegionOne                                  |
| service_id  | 53a5e6730f2fc2eec66a898032bf91ff           |
+-------------+--------------------------------------------+
+-------------+---------------------------------------------------+
| Property    | Value                                             |
+-------------+---------------------------------------------------+
| adminurl    | http://t41b.oracle.com:8080/v1                    |
| id          | e83f734109944a70eea4f29991957e54                  |
| internalurl | http://t41b.oracle.com:8080/v1/AUTH_$(tenant_id)s |
| publicurl   | http://t41b.oracle.com:8080/v1/AUTH_$(tenant_id)s |
| region      | RegionOne                                         |
| service_id  | 7eb13ef1c430edee9a8cc65491fe3ae6                  |
+-------------+---------------------------------------------------+
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://t41b.oracle.com:9696/     |
| id          | 6a775b85834eed4bc2fab8200a1f9cd9 |
| internalurl | http://t41b.oracle.com:9696/     |
| publicurl   | http://t41b.oracle.com:9696/     |
| region      | RegionOne                        |
| service_id  | 73d0e2550c5d62c6ef22e7b43d081df5 |
+-------------+----------------------------------+
###############################################################
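Before moving on, Keystone can be smoke-tested with the service token (a sketch; it assumes admin_token in keystone.conf was set to ADMIN, matching SERVICE_TOKEN above):

export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://t41b.oracle.com:35357/v2.0
keystone tenant-list
keystone user-list
keystone endpoint-list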

6.3.5 Configure Glance
### vi /etc/glance/glance-api.conf
[DEFAULT]
sql_connection = mysql://glance:glance@t41b.oracle.com/glance
rabbit_host=t41b.oracle.com
[keystone_authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance
### vi /etc/glance/glance-cache.conf
[DEFAULT]
auth_url = http://140.83.204.42:5000/v2.0/
admin_tenant_name = service


admin_user = glance
admin_password = glance
### vi /etc/glance/glance-registry.conf
[DEFAULT]
sql_connection = mysql://glance:glance@t41b.oracle.com/glance
[keystone_authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance
### vi /etc/glance/glance-api-paste.ini
[pipeline:glance-api]
pipeline = versionnegotiation authtoken context apiv1app
[filter:authtoken]
auth_host = 140.83.204.42
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
### vi /etc/glance/glance-registry-paste.ini
[pipeline:glance-registry]
pipeline = authtoken context apiv1app
[filter:authtoken]
auth_host = 140.83.204.42
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
### vi /etc/glance/glance-scrubber.conf
[DEFAULT]
auth_url = http://140.83.204.42:5000/v2.0/
admin_tenant_name = service
admin_user = glance
admin_password = glance
swift_store_auth_address = 140.83.204.42:5000/v2.0/
swift_store_user = johndoe:johndoe


swift_store_key = a86850deb2742ec3cb41518e26aa2d89
s3_store_host = 140.83.204.42:8080/v1.0/
s3_store_access_key = "<20-char AWS access key>"
s3_store_secret_key = "<40-char AWS secret key>"
s3_store_bucket = "<lowercased 20-char aws key>"
s3_store_create_bucket_on_put = False
### svcadm enable glance-db
### svcadm enable glance-api glance-registry glance-scrubber
svcadm enable -rs glance-api glance-db glance-registry glance-scrubber
### Check which services have problems:
svcs -xv
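A quick Glance smoke test once the services come up (a sketch, reusing the glance credentials configured above):

export OS_AUTH_URL=http://140.83.204.42:5000/v2.0/
export OS_USERNAME=glance
export OS_PASSWORD=glance
export OS_TENANT_NAME=service
glance image-list      # should return an empty list at this point, with no auth errors
svcs glance-api glance-registry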

6.3.6 Configure Nova
### vi /etc/nova/nova.conf
[DEFAULT]
keystone_ec2_url=http://t41b.oracle.com:5000/v2.0/ec2tokens
glance_host=t41b.oracle.com
rabbit_host=t41b.oracle.com
firewall_driver=nova.virt.firewall.NoopFirewallDriver
neutron_url=http://140.83.204.42:9696
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_tenant_name=service
neutron_admin_auth_url=http://140.83.204.42:5000/v2.0
[database]
connection = mysql://nova:nova@t41b.oracle.com/nova
### vi /etc/nova/api-paste.ini
[filter:authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory    ## is this line needed???
####
# svcadm enable nova-conductor
# svcadm enable nova-api-ec2 nova-api-osapi-compute nova-cert nova-objectstore nova-scheduler
svcadm enable -rs nova-conductor
svcadm enable -rs nova-compute


svcadm enable -rs nova-api-osapi-compute nova-scheduler nova-cert
svcs -x

6.3.7 Configure Horizon #gsed -i -e s@SECURE_PROXY_SSL_HEADER@#SECURE_PROXY_SSL_HEADER@ -e s@CSRF_COOKIE_SECURE@#CSRF_COOKIE_SECURE@ -e s@SESSION_COOKIE_SECURE@#SESSION_COOKIE_SECURE@ -e "s@from horizon.utils@#from horizon.utils@" -e s@SECRET_KEY@#SECRET_KEY@ /etc/openstack_dashboard/local_settings.py

# cp /etc/apache2/2.2/samples-conf.d/openstack-dashboard-http.conf /etc/apache2/2.2/conf.d/
svcadm enable apache22
svcs -x
# Horizon can only be reached in a browser once all components are configured and running correctly; otherwise the browser shows an error!
# Browser: http://t41b.oracle.com/horizon
# User admin with a password of secrete.

6.3.8 Configure Cinder
### vi /etc/cinder/cinder.conf
[DEFAULT]
my_ip=140.83.204.42
sql_connection = mysql://cinder:cinder@t41b.oracle.com/cinder
connection = mysql://cinder:cinder@t41b.oracle.com/cinder
scheduler_driver=cinder.scheduler.simple.SimpleScheduler
glance_host=t41b.oracle.com
rabbit_host=t41b.oracle.com
volume_driver=cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver

### vi /etc/cinder/api-paste.ini
[filter:authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service
admin_user = cinder
admin_password = cinder
############### iSCSI target
# svcadm enable iscsi/target stmf
# svcadm enable cinder-db
# svcadm enable cinder-api cinder-scheduler
# svcadm enable cinder-volume:default cinder-volume:setup
svcadm enable -rs cinder-db
svcadm enable -rs cinder-api cinder-scheduler


svcs -x

6.3.9 Configure Neutron
# evsadm set-prop -p controller=ssh://evsuser@t41b.oracle.com
### vi /etc/neutron/neutron.conf
[keystone_authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
rabbit_host=t41b.oracle.com
[database]
connection = mysql://neutron:neutron@t41b.oracle.com/neutron
### vi /etc/neutron/plugins/evs/evs_plugin.ini
[EVS]
evs_controller = ssh://evsuser@t41b.oracle.com
[DATABASE]
sql_connection = mysql://neutron:neutron@t41b.oracle.com/neutron
### vi /etc/neutron/dhcp_agent.ini
evs_controller = ssh://evsuser@t41b.oracle.com
###############################
evsadm set-prop -p controller=ssh://evsuser@t41b.oracle.com
su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys
su - evsuser -c "ssh evsuser@t41b.oracle.com true"
su - neutron -c "ssh evsuser@t41b.oracle.com true"
ssh evsuser@t41b.oracle.com true
chown -R evsuser:evsgroup /var/user/evsuser/.ssh
chown -R neutron:neutron /var/lib/neutron/.ssh
evsadm
evsadm set-controlprop -p l2-type=vlan
evsadm set-controlprop -p uplink-port=net0
evsadm set-controlprop -p vlan-range=200-300


ipadm set-prop -p forwarding=on ipv4
svcadm enable -rs ipfilter
svcadm enable -rs neutron-server neutron-dhcp-agent
svcs -x
#########################################################################
#### Configure neutron-l3-agent as described here: Girish's L3 Agent Blog
#### This step is optional, but strongly recommended.
#########################################################################
### Check which services have problems:
svcs -xv
Horizon is now reachable!
# Horizon can only be reached in a browser once all components are configured and running correctly; otherwise the browser shows an error!
# Browser: http://t41b.oracle.com/horizon
# User admin with a password of secrete.

6.4 Compute Node Setup
6.4.1 Install Packages
# pkg install nova python-mysql mysql-55/client

6.4.2 Configure Nova
### vi /etc/nova/nova.conf
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
neutron_url=http://140.83.204.42:9696
neutron_admin_username=neutron
neutron_admin_password=neutron


neutron_admin_tenant_name=service
neutron_admin_auth_url=http://140.83.204.42:5000/v2.0
rabbit_host=t41b.oracle.com
keystone_ec2_url=http://t41b.oracle.com:5000/v2.0/ec2tokens
glance_host=t41b.oracle.com
[database]
connection = mysql://nova:nova@t41b.oracle.com/nova
### vi /etc/nova/api-paste.ini
[filter:authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova
svcadm restart rad:local
svcadm enable -rs nova-compute

6.5 Block Storage Node Setup
6.5.1 Install Packages
# pkg install cinder python-mysql mysql-55/client

6.5.2 Configure Cinder
### vi /etc/cinder/cinder.conf
[DEFAULT]
my_ip=140.83.204.42    (note: on the block storage node this presumably should be the node's own IP, 140.83.204.26)
sql_connection = mysql://cinder:cinder@t41b.oracle.com/cinder
connection = mysql://cinder:cinder@t41b.oracle.com/cinder
scheduler_driver=cinder.scheduler.simple.SimpleScheduler
glance_host=t41b.oracle.com
rabbit_host=t41b.oracle.com
volume_driver=cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver
### vi /etc/cinder/api-paste.ini
[filter:authtoken]
auth_uri = http://140.83.204.42:5000/v2.0
identity_uri = http://140.83.204.42:35357
admin_tenant_name = service


admin_user = cinder
admin_password = cinder
svcadm enable -rs iscsi/target
svcadm enable -rs cinder-volume:default cinder-volume:setup
############### iSCSI target
# svcadm enable iscsi/target stmf
# svcadm enable cinder-db
# svcadm enable cinder-api cinder-scheduler
# svcadm enable cinder-volume:default cinder-volume:setup

7. Switching Solaris boot environments (BE)
# beadm list
# beadm activate beName

###############################
Check Solaris CPU and memory information
CPU:
# mpstat 3
# uname -snrvmapiX
Memory:
# vmstat 3
# prtconf | grep 'Memory'
IO:
# iostat -xnp 3
// On Solaris 11 and later, OVM for SPARC comes preinstalled, so the ldm command is also available; note that some memory may be taken by LDoms.
# ldm list
# ldm start-reconf primary
# ldm set-vcpu 64 primary
# ldm set-memory 16G primary
# ldm remove-io pci@600 primary
# ldm remove-io pci@700 primary
# ldm add-spconfig initial; reboot;
###############################
Check zone information:
# zoneadm list -cv
# zoneadm list -iv


8. Inspecting the OpenStack deployment
8.1 Commands that can be run on any node
Note: the clocks on all nodes must be exactly in sync, or these commands fail.
export OS_PASSWORD=nova
export OS_AUTH_URL=http://t41b.oracle.com:5000/v2.0/
export OS_USERNAME=nova
export OS_TENANT_NAME=service
[nova]
# nova-manage host list
# nova-manage service list

[cinder]
# cinder-manage host list
# cinder-manage service list

Create a volume:
# cinder create --display_name test03 15    // size is 15 GB
# cinder list


[flavor] #nova flavor-list

#nova flavor-show 'Oracle Solaris non-global zone - xlarge'


8.2 Viewing components in Horizon


Configuration of each server:

9. Creating images
9.1 Create a Solaris zone and a UAR archive
9.1.1 Create a regular zone
## boot a zone:    zoneadm -z zoneName boot
## reboot a zone:  zoneadm -z zoneName reboot
## stop a zone:    zoneadm -z zoneName shutdown
########## Create the zone #############
# zoneadm list -cv
# zonecfg -z osczone create
# zoneadm list -cv
# zoneadm -z osczone install



# zlogin osczone 'sed /^PermitRootLogin/s/no$/without-password/ < /etc/ssh/sshd_config > /system/volatile/sed.$$ ; cp /system/volatile/sed.$$ /etc/ssh/sshd_config'
# zlogin osczone
##### Create the Unified Archive (UAR) template:
# archiveadm create -z osczone /var/tmp/osczone.uar

Check the archive's CPU architecture:
# archiveadm info -p osczone.uar
# archiveadm info -v osczone.uar

Create a new zone from an archive file, then install the zone from it:
# zonecfg -z newzone01 create -a /var/tmp/osczone.uar -z osczone
# zoneadm -z newzone01 install -a /var/tmp/osczone.uar -z osczone


9.1.2 Create a Kernel Zone
## boot a zone:    zoneadm -z zoneName boot
## reboot a zone:  zoneadm -z zoneName reboot
## stop a zone:    zoneadm -z zoneName shutdown
// Creating a KZ from the UAR downloaded from the official site succeeds; a UAR I built myself does not work: it always tries to contact IPS at boot!!!
Create a Kernel Zone. In OpenStack, Cinder volumes can only be attached to kernel zones; regular zones cannot use Cinder volumes.
Oracle Solaris kernel zones can run in guest domains on Oracle VM Server for SPARC. Each Oracle VM Server for SPARC domain has its own limit on the number of kernel zones it can run: 768 on SPARC T4 or SPARC T5 systems, and 512 on SPARC M5 or SPARC M6 systems.
SPARC systems:
- SPARC T4 with at least system firmware 8.5.1.
- SPARC T5, SPARC M5, or SPARC M6 with at least system firmware 9.2.1.
virtinfo    // confirm that kernel-zone virtualization is supported
- Intel CPUs with CPU virtualization (VT-x) enabled in BIOS and with support for Extended Page Tables (EPT), such as Nehalem or newer CPUs
- AMD CPUs with CPU virtualization (AMD-V) enabled in BIOS and with support for Nested Page Tables (NPT), such as Barcelona or newer CPUs
- sun4v CPUs with a "wide" partition register, for example, Oracle's SPARC T4 or SPARC T5 processors running a supported firmware version and Oracle's SPARC M5, SPARC M6, or newer processors

zonecfg -z oscfirstKZ create -t SYSsolaris-kz
zoneadm list -cv
zonecfg -z oscfirstKZ info
zoneadm -z oscfirstKZ install
zoneadm list -cv


zlogin oscfirstKZ 'sed /^PermitRootLogin/s/no$/without-password/ < /etc/ssh/sshd_config > /system/volatile/sed.$$ ; cp /system/volatile/sed.$$ /etc/ssh/sshd_config'


Boot the kernel zone:
zoneadm -z oscfirstKZ boot; zlogin -C oscfirstKZ








#### Create the Unified Archive (UAR) template file:
# archiveadm create -z oscfirstKZ /var/tmp/oscfirstKZ.uar
(Creating a UAR from a kernel zone always fails here; cause unknown.)


9.2 Add the image to the image store
First set the environment variables:
export OS_AUTH_URL=http://t41b.oracle.com:5000/v2.0/
export OS_USERNAME=glance
export OS_PASSWORD=glance
export OS_TENANT_NAME=service

# glance image-create --container-format bare --disk-format raw --is-public true \
    --name "Oracle Solaris 11.2 SPARC GN-Zone" \
    --property architecture=sparc64 \
    --property hypervisor_type=solariszones \
    --property vm_mode=solariszones \
    < /var/tmp/osczone.uar
# glance image-show 'Oracle Solaris 11.2 SPARC Zone'
# glance image-list --human-readable


Inspect the compute-node records in the database:
# mysql -u root
SQL> show databases;
SQL> use nova;
SQL> show tables;
SQL> desc compute_nodes;
SQL> select vcpus,memory_mb,local_gb,disk_available_least,free_ram_mb,free_disk_gb,hypervisor_hostname,hypervisor_type,hypervisor_version,cpu_info from compute_nodes;

Note: cpu_info reports sparc64, so architecture must be set to sparc64 when importing the image; otherwise deploying a VM fails.
Importing an ISO installer:
glance image-create --container-format bare --disk-format iso --is-public true --name "Oracle Solaris 11.2 SPARC ISO" --property architecture=sparc64 --property hypervisor_type=solariszones --property vm_mode=solariszones < /export/home/michael/sol-11_2-text-sparc.iso

9.3 Create a VM instance
A Project corresponds to a tenant; after setting the environment variables, list the tenants with keystone tenant-list.
export OS_PASSWORD=glance
export OS_AUTH_URL=http://t41b.oracle.com:5000/v2.0/
export OS_USERNAME=glance
export OS_TENANT_NAME=service
# keystone tenant-list

Create the instance on the command line, picking the matching IDs (see the sketch below):
# nova boot --image imageID --flavor flavorID --nic net-id=nicID
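The IDs can be captured instead of copied by hand (a sketch; the image name, flavor 6, and network name vNet002 are the ones used elsewhere in these notes):

IMAGE_ID=$(nova image-list | awk '/Oracle Solaris 11.2 SPARC GN-Zone/ {print $2}')
NET_ID=$(neutron net-list | awk '/vNet002/ {print $2}')
nova boot --image "$IMAGE_ID" --flavor 6 --nic net-id="$NET_ID" my01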

9.3.1 Create the virtual network in Horizon
The network and subnet must exist before a VM zone can be launched; otherwise the launch fails with "No valid host was found."
Project -> Networks


#nova net-list

#neutron net-list


9.3.2 Launch a VM instance
Project -> Launch Instance
Flavor -> non-global zone
You must click the small plus sign (+) to attach a network.

9.3.3 Create a volume
# cinder create --display_name test03 15    // size is 15 GB
# cinder list


9.3.4 Create a VM manually
Set the environment variables:
export OS_PASSWORD=nova
export OS_AUTH_URL=http://t41b.oracle.com:5000/v2.0/
export OS_USERNAME=nova
export OS_TENANT_NAME=service
Look up the template image's ID:
# glance image-list


Create the VM instance:
# nova boot my01 --image "fd43e335-1d08-615c-d92c-f1ff9a2449b9" --flavor 6
The adminPass field in the output is the VM's password.

9.3.5 Create a zone VM in Horizon

A zone UAR archive can be used to create either a regular zone or a kernel zone; the flavor may be either a Kernel Zone or an NG (non-global) Zone type. Note, however, that Cinder volumes can only be attached to kernel zones.




#nova list


9.3.6 VM does not start (ERROR)
# nova list          // list the instance names
# nova show Name     // show instance details, including the error message
Check the scheduler log:
/var/log/nova/scheduler.log
An entry was added to /etc/nova/nova.conf (????????):
scheduler_default_filters=ComputeFilter
All the logs live under /var/svc/log:
/var/svc/log# ls | grep openstack
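To scan all of those logs for errors in one pass (a sketch):

cd /var/svc/log
ls | grep openstack | xargs grep -il error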


Inspect the compute-node records in the database:
# mysql -u root
SQL> show databases;
SQL> use nova;
SQL> show tables;
SQL> desc compute_nodes;
SQL> select vcpus,memory_mb,local_gb,disk_available_least,free_ram_mb,free_disk_gb,hypervisor_hostname,hypervisor_type,hypervisor_version,cpu_info from compute_nodes;

9.3.7 Attach storage to a VM




10. EVS configuration
evsadm show-prop          // show the EVS controller
evsadm show-controlprop   // show the EVS controller properties
The neutron user is the EVS Controller administrator account.
Best practice: cross-add the keys of neutron, evsuser, and root on every node to avoid errors.

10.1 Create the keys
The neutron user is the EVS Controller administrator account.
[evs-controller & evs-manager & evs-node combined on one host]
su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys
Copy the evs-node root user's id_rsa.pub and append it to the evsuser authorized_keys on the evs-controller.
chown -R evsuser:evsgroup /var/user/evsuser/.ssh
chown -R neutron:neutron /var/lib/neutron/.ssh
[evs-node]
su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
Copy the evs-controller's evsuser id_rsa.pub and append it to the evsuser authorized_keys on the evs-node.
chown -R evsuser:evsgroup /var/user/evsuser/.ssh
chown -R neutron:neutron /var/lib/neutron/.ssh

10.2 Verify the keys
After exchanging the keys, ssh once in each direction.
From evs-node to evs-controller:
su - evsuser -c "ssh evsuser@evs-controller true"
su - neutron -c "ssh evsuser@evs-controller true"
ssh evsuser@evs-controller true
From evs-controller to evs-node:
su - evsuser -c "ssh evsuser@evs-node true"
su - neutron -c "ssh evsuser@evs-node true"
ssh evsuser@evs-node true
Note: visit each node once by both its short name and its FQDN, because the configuration sometimes uses the short name.
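The short-name/FQDN check can be scripted (a sketch using the hostnames from this setup; StrictHostKeyChecking=no accepts each host key on first contact, which is the point of this pass):

for h in t41b t41b.oracle.com t41a t41a.oracle.com t52a t52a.oracle.com; do
  su - evsuser -c "ssh -o StrictHostKeyChecking=no evsuser@$h true" \
    && echo "evsuser -> $h OK"
done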

10.3 Configure the EVS controller
evsadm set-prop -p controller=ssh://evsuser@t41b.oracle.com
evsadm show-prop
evsadm set-controlprop -p l2-type=vlan
evsadm set-controlprop -p vlan-range=200-300
evsadm set-controlprop -p uplink-port=net0
evsadm show-controlprop
Here t41b is the evs-controller; t41a and t52a are evs-nodes.

10.4 Configure the EVS nodes
evsadm set-prop -p controller=ssh://evsuser@t41b.oracle.com

10.5 Inspect the EVS configuration
evsadm show-prop          // show the EVS controller
evsadm show-controlprop   // show the EVS controller properties
evsadm show-evs -L


evsadm show-evsprop vNet002

# evsadm show-vportprop
# evsadm show-vport -o all
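For reference, an elastic virtual switch can also be created by hand with evsadm (a sketch; Neutron normally drives this, and the names evs0/ipnet0/vport0 and the subnet are made up):

evsadm create-evs evs0
evsadm add-ipnet -p subnet=192.168.10.0/24 evs0/ipnet0
evsadm add-vport evs0/vport0
evsadm show-evs -L
evsadm show-vport -o all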

