Management console catalog setup
Notes before installation
Oplon Management Console is a client program (WebApp) whose supported features depend on the license type you configure.
The Oplon Catalog license, installed in the Oplon Monitor service, enables Oplon Management Console to act as a concentrator for the Oplon S.A.A.I. instances within the datacenters; it provides cluster upgrade capabilities and enables group actions on services.
Since the Oplon S.A.A.I. suite is a product intended for business-critical and mission-critical environments, only staff who have completed the course and passed the exam are authorized to certify the installation and maintenance of the products in operation. All certified persons hold a document issued by Oplon Networks srl.
Oplon Catalog - Preparing to Install the License
To install the Oplon Catalog license, and thus enable the centralized management and cluster features of the Oplon Management Console, you first need to acquire an Oplon Catalog license.
If you do not have an Oplon Catalog license, you can request one from Oplon or an authorized reseller.
Oplon Catalog - License Installation
Once Oplon Management Console is started and you are logged in to the node designated to host Oplon Catalog, perform the following steps:
Select the first level of the job tree. This level identifies the Oplon Catalog node. Right-click and choose "install license":
Select the license file, which must necessarily be called license.xml.
Once the license is confirmed, the result of the upload will be displayed. If it is SUCCESSFUL, move to the next step.
Restart the Oplon Monitor process
Once the license has been successfully installed, you must restart the Oplon Monitor process. To restart it, go to the guest operating system and run:
Linux/Unix:
- Identify the java process with the definition -DLBL_RUNLEVEL=0
- Perform a kill -15 of the process identified in step 1
- Wait for the processes controlled by Oplon Monitor to finish
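The steps above can be sketched as follows. This is an illustrative sketch only: it assumes a POSIX shell with pgrep available and a JVM started with the -DLBL_RUNLEVEL=0 definition mentioned above; it is not an official Oplon script.

```shell
# Assemble the search pattern from pieces so it cannot accidentally
# match this script's own command line.
runlevel_def='-DLBL_RUNLEVEL''=0'

# Step 1: identify the java process carrying the -DLBL_RUNLEVEL=0 definition.
pid=$(pgrep -f -- "java.*$runlevel_def" || true)

if [ -n "$pid" ]; then
    # Step 2: send SIGTERM (kill -15) for a controlled shutdown.
    kill -15 $pid
else
    echo "no java process with $runlevel_def found"
fi
```

After the signal is sent, wait for the processes controlled by Oplon Monitor to terminate before proceeding.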
You can watch the situation evolve through Oplon Management Console, which highlights the controlled shutdown of the managed processes.
Oplon Management Console - disconnect
Once all Oplon S.A.A.I. processes have finished their operation, highlighted by a red X placed before the nodes of the tree, perform the disconnect:
The status of the Oplon Management Console must return to NO LOGGED.
Oplon Monitor - Catalog-licensed Start
Depending on the operating system, start the main Oplon Monitor process; it will take on the new Oplon Catalog license.
Oplon Management Console - Connect to Oplon Catalog
From the management console, select the Connect button:
Type the Login and Login Password; if the login is successful, the connection with Catalog features will appear.
Oplon Catalog - Configuration
The configuration of Oplon Catalog makes it possible to manage, through a single Oplon Management Console, multiple Oplon S.A.A.I. instances deployed in the datacenter or across multiple datacenters.
Oplon Catalog allows you to list, in a very simple way, the nodes of the Oplon S.A.A.I. instances that need to maintain the same configurations (clusters).
To access the list, simply select the first element in the tree (root), which indicates the connection to a CATALOG, and right-click "Properties catalog".
In the right pane you can enter the list of Oplon S.A.A.I. nodes within the datacenter(s). The template provided contains some commented examples.
To start, move the first <monitorID> paragraph outside the comment and edit it with the address of the node that performs the catalog functions.
From:
<monitors>
<!-- example IPv4
    <monitorID address="192.168.46.109:54443" description="Node A"/>
    <monitorID address="192.168.46.110:54443" description="Node B"/>
    <monitorID address="192.168.46.111:54443" description="Node C"/>
-->
</monitors>
To:
<monitors>
    <monitorID address="192.168.46.109:54443" description="Node A"/>
<!-- example IPv4
    <monitorID address="192.168.46.110:54443" description="Node B"/>
    <monitorID address="192.168.46.111:54443" description="Node C"/>
-->
</monitors>
Then upload the catalog parameters:
After uploading, by expanding the root element, the first managed node, which is also the catalog itself, will appear.
You can immediately notice two new symbols of a different color. These symbols indicate that the process under management is a "cluster" process. Even though nothing has yet been indicated in the <clusters> paragraph, Oplon Management Console highlights processes that are clusters by nature, such as Oplon Loadbalancer Standard/Enterprise HA instances or Oplon Commander Decision Engine.
By adding other Oplon S.A.A.I. nodes to the catalog, they will appear, in the order indicated in the Catalog parameters, in the tree on the left.
From:
<monitors>
    <monitorID address="192.168.46.109:54443" description="Node A"/>
<!-- example IPv4
    <monitorID address="192.168.46.110:54443" description="Node B"/>
    <monitorID address="192.168.46.111:54443" description="Node C"/>
-->
</monitors>
To:
<monitors>
    <monitorID address="192.168.46.109:54443" description="Node A"/>
    <monitorID address="192.168.46.110:54443" description="Node B"/>
<!-- example IPv4
    <monitorID address="192.168.46.111:54443" description="Node C"/>
-->
</monitors>
Already from this point, for "spontaneous clusters" you can perform common actions such as keeping configurations aligned or running hot reinits simultaneously.
For example, if we try to upload the balancing system configuration, Oplon Management Console will also ask us to run it on the "twin" node(s) of the cluster.
In this case it will also highlight that the "Catalog" configuration is not aligned with the spontaneous Oplon LoadBalancer Standard HA "Cluster" (Match: false in the cluster nodes table). It is always recommended to keep the definitions of spontaneous clusters aligned with what is in the catalog: in complex environments, having a single point where this information is listed increases documentation order, which is very important for operations. In any case, for "spontaneous clusters", i.e. clusters by the nature of the process, the alignment is still performed on all selected nodes in the cluster.
The first proposed column excludes a cluster node from the update. This can be useful if you want to test one node first and then make the change final with the next upload.
Oplon Catalog - Cluster Declaration
As we have seen in the previous paragraphs, Oplon Management Console recognizes "spontaneous clusters" without needing any other parameters. However, processes that are not clusters by their very nature must also be able to be treated as clusters. One example for all: keeping the CATALOG configurations aligned with a backup CATALOG instance in case the main CATALOG is unavailable.
In this case we are talking about clusters "declared" in the CATALOG to allow simultaneous updating. In the <clusters> paragraph, move the paragraph with description "Catalog" outside of the comments:
From:
<clusters>
<!-- cluster examples
    <clusterGroup description="Catalog, New10">
        <monitorID address="192.168.46.109:54443" processName="CATALOG"/>
        <monitorID address="192.168.46.110:54443" processName="CATALOG"/>
    </clusterGroup>
    <clusterGroup description="Monitor">
        <monitorID address="192.168.46.109:54443" processName="MONITOR"/>
        <monitorID address="192.168.46.110:54443" processName="MONITOR"/>
    </clusterGroup>
    <clusterGroup description="LoadBalancer">
        <monitorID address="192.168.46.109:54443" processName="A10_LBLGo"/>
        <monitorID address="192.168.46.110:54443" processName="A10_LBLGo"/>
    </clusterGroup>
    <clusterGroup description="Commander Decision Engine">
        <monitorID address="192.168.46.109:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.110:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.111:54443" processName="A03_LBLGoSurfaceClusterDE"/>
    </clusterGroup>
-->
</clusters>
To:
<clusters>
    <clusterGroup description="Catalog, New10">
        <monitorID address="192.168.46.109:54443" processName="CATALOG"/>
        <monitorID address="192.168.46.110:54443" processName="CATALOG"/>
    </clusterGroup>
<!-- cluster examples
    <clusterGroup description="Monitor">
        <monitorID address="192.168.46.109:54443" processName="MONITOR"/>
        <monitorID address="192.168.46.110:54443" processName="MONITOR"/>
    </clusterGroup>
    <clusterGroup description="LoadBalancer">
        <monitorID address="192.168.46.109:54443" processName="A10_LBLGo"/>
        <monitorID address="192.168.46.110:54443" processName="A10_LBLGo"/>
    </clusterGroup>
    <clusterGroup description="Commander Decision Engine">
        <monitorID address="192.168.46.109:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.110:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.111:54443" processName="A03_LBLGoSurfaceClusterDE"/>
    </clusterGroup>
-->
</clusters>
If we let a few moments pass (at most 3 seconds by default) and try to perform the upload, we will be asked to update the CATALOG node declared as a cluster:
Normally, processName must declare the name of the process, e.g. A10_LBLGo or A03_LBLGoSurfaceClusterDE. There are two reserved keywords for declaring clusters of the catalog ("CATALOG") and of the monitors ("MONITOR"), to allow simultaneous actions on multiple nodes of these elements as well.
We then repeat the same change to manage Oplon Monitor configuration changes simultaneously.
From:
<clusters>
<!-- cluster examples
    <clusterGroup description="Catalog, New10">
        <monitorID address="192.168.46.109:54443" processName="CATALOG"/>
        <monitorID address="192.168.46.110:54443" processName="CATALOG"/>
    </clusterGroup>
    <clusterGroup description="Monitor">
        <monitorID address="192.168.46.109:54443" processName="MONITOR"/>
        <monitorID address="192.168.46.110:54443" processName="MONITOR"/>
    </clusterGroup>
    <clusterGroup description="LoadBalancer">
        <monitorID address="192.168.46.109:54443" processName="A10_LBLGo"/>
        <monitorID address="192.168.46.110:54443" processName="A10_LBLGo"/>
    </clusterGroup>
    <clusterGroup description="Commander Decision Engine">
        <monitorID address="192.168.46.109:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.110:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.111:54443" processName="A03_LBLGoSurfaceClusterDE"/>
    </clusterGroup>
-->
</clusters>
To:
<clusters>
    <clusterGroup description="Catalog, New10">
        <monitorID address="192.168.46.109:54443" processName="CATALOG"/>
        <monitorID address="192.168.46.110:54443" processName="CATALOG"/>
    </clusterGroup>
    <clusterGroup description="Monitor">
        <monitorID address="192.168.46.109:54443" processName="MONITOR"/>
        <monitorID address="192.168.46.110:54443" processName="MONITOR"/>
    </clusterGroup>
<!-- cluster examples
    <clusterGroup description="LoadBalancer">
        <monitorID address="192.168.46.109:54443" processName="A10_LBLGo"/>
        <monitorID address="192.168.46.110:54443" processName="A10_LBLGo"/>
    </clusterGroup>
    <clusterGroup description="Commander Decision Engine">
        <monitorID address="192.168.46.109:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.110:54443" processName="A03_LBLGoSurfaceClusterDE"/>
        <monitorID address="192.168.46.111:54443" processName="A03_LBLGoSurfaceClusterDE"/>
    </clusterGroup>
-->
</clusters>
We then upload the configuration...
From this moment, also for Oplon Monitor configurations, you will be prompted to align them during upgrades.
In this regard, release 9 introduced the use of variables that characterize the single Oplon S.A.A.I. node and the process within the node. This technology allows you to keep the same configuration files on different nodes. The variables are used to indicate, in the configuration files, symbolic values that are replaced with the value typical of the Oplon Monitor process. You can define variables in the procProperties section.
Oplon Catalog - Using procProperties
procProperties is used to define symbolic values that can be used within configuration files. This technology allows you to describe in the parameters not the value itself but a symbol for it, which varies from one instance to another.
Take, for example, the Oplon Monitor process configuration file. The most important value is certainly the address where Oplon Monitor listens, e.g.:
</copyright>
<monitor>
    <params address="192.168.46.110" port="54443"
        keyManagerFactory="SunX509"
        managementPortRangeMin="5850"
        managementPortRangeMax="5990"
        sysCommandRemoteURL="https://localhost:5992/SysCommand"
        statisticBrokerWebCacheURL="http://localhost:5993">
    </params>
...
This value is certainly different for each individual Oplon Monitor, so propagating the configuration file to another Oplon Monitor would cause a substantial malfunction.
You can then define symbolic names typical of the instance in the procProperties tab (file) of Oplon Monitor, such as the listening address of the monitor:
<variables>
    <var name="LBL_GLOBAL_ADDRESS_MANAGEMENT"
        value="192.168.46.110"/>
Once uploaded, change the address in the monitor parameter file to the symbolic name. Note: you will notice that the upload of the procProperties file did not require updating the other nodes in the cluster. This is important because procProperties are typical of the Oplon Monitor node and/or of the node's processes, and must NOT be propagated to other nodes in the cluster.
</copyright>
<monitor>
    <params address="#LBL_GLOBAL_ADDRESS_MANAGEMENT#"
        port="54443"
        keyManagerFactory="SunX509"
        managementPortRangeMin="5850"
        managementPortRangeMax="5990"
        sysCommandRemoteURL="https://localhost:5992/SysCommand"
        statisticBrokerWebCacheURL="http://localhost:5993">
    </params>
...
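The substitution mechanics can be sketched as follows. This is a minimal illustration of the #NAME# placeholder idea, not Oplon's actual implementation; the variable name is taken from the example above.

```python
import re

def substitute(text: str, variables: dict) -> str:
    """Replace #NAME# placeholders with their defined values;
    unknown names are left untouched."""
    def repl(match):
        return variables.get(match.group(1), match.group(0))
    return re.sub(r"#([A-Za-z0-9_]+)#", repl, text)

params = '<params address="#LBL_GLOBAL_ADDRESS_MANAGEMENT#" port="54443">'
print(substitute(params, {"LBL_GLOBAL_ADDRESS_MANAGEMENT": "192.168.46.110"}))
# -> <params address="192.168.46.110" port="54443">
```

The same parameter file can thus be kept identical on every node, with only the per-node procProperties differing.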
Template files distributed with release 9 already have preset variables to facilitate the initial setup work.
Oplon Monitor, for example, lists the following preset variables that are already available in the template configurations:
<variables>
    <var name="LBL_GLOBAL_ADDRESS_MANAGEMENT"
        value="192.168.46.110"/>
    <var name="LBL_GLOBAL_ADDRESS_BROKERWEBCACHE"
        value="localhost"/>
    <var name="LBL_GLOBAL_URL_BROKERWEBCACHE"
        value="http://#LBL_GLOBAL_ADDRESS_BROKERWEBCACHE#:5993"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_LOADBALANCER"
        value="-XX:-UseParallelGC -XX:-UseParallelOldGC"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_WEB_CACHE_DWH"
        value="-XX:-UseParallelGC -XX:-UseParallelOldGC"/>
    <!-- Please NOTE: G1 garbage collector only with jdk 1.7.0_xx. You can
        find xx in LBL's compatibility matrix -->
    <!-- <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_LOADBALANCER"
        value="-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45"/> -->
    <!-- <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_WEB_CACHE_DWH"
        value="-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45"/> -->
    <var name="LBL_GLOBAL_START_HEAP_LOADBALANCER"
        value="-Xms1g -Xmx1g"/>
</variables>
</copyright>
<monitor>
    <params address="#LBL_GLOBAL_ADDRESS_MANAGEMENT#"
        port="54443"
        keyManagerFactory="SunX509"
        managementPortRangeMin="5850"
        managementPortRangeMax="5990"
        sysCommandRemoteURL="https://localhost:5992/SysCommand"
        statisticBrokerWebCacheURL="#LBL_GLOBAL_URL_BROKERWEBCACHE#">
    </params>
...
Oplon Catalog - procProperties hierarchy, inheritance, and scope
procProperties files define symbolic names that are given values. A symbolic name can be used within the configuration files and is replaced, when the file is loaded, with the value defined for it.
procProperties files are of two types:
- at the Oplon Monitor level
- at the level of a process managed by Oplon Monitor
From a visual point of view, we can set them through the Oplon Management Console properties at the following tree levels:
If you define a variable at the Oplon Monitor node level, it will also be used by the processes it manages. Variables defined at the process level, however, will only be usable in that specific process. There is no difference at the symbolic-name level, but it is still recommended to distinguish Oplon Monitor variables from process-level variables. In the templates provided, an LBL_GLOBAL_xxxxx prefix is used for Oplon Monitor variables, compared to just LBL_xxxx for process variables.
It is possible to override variables that have already been declared simply by declaring them again. The declaration sequence determines the final value.
For example, if we remove the comments from the G1 garbage collector definitions, those variables take the last declared value:
From:
<variables>
    <var name="LBL_GLOBAL_ADDRESS_MANAGEMENT"
        value="192.168.46.110"/>
    <var name="LBL_GLOBAL_ADDRESS_BROKERWEBCACHE"
        value="localhost"/>
    <var name="LBL_GLOBAL_URL_BROKERWEBCACHE"
        value="http://#LBL_GLOBAL_ADDRESS_BROKERWEBCACHE#:5993"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_LOADBALANCER"
        value="-XX:-UseParallelGC -XX:-UseParallelOldGC"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_WEB_CACHE_DWH"
        value="-XX:-UseParallelGC -XX:-UseParallelOldGC"/>
    <!-- Please NOTE: G1 garbage collector only with jdk 1.7.0_xx. You can
        find xx in LBL's compatibility matrix -->
    <!-- <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_LOADBALANCER"
        value="-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45"/> -->
    <!-- <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_WEB_CACHE_DWH"
        value="-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45"/> -->
    <var name="LBL_GLOBAL_START_HEAP_LOADBALANCER"
        value="-Xms1g -Xmx1g"/>
</variables>
To:
<variables>
    <var name="LBL_GLOBAL_ADDRESS_MANAGEMENT"
        value="192.168.46.110"/>
    <var name="LBL_GLOBAL_ADDRESS_BROKERWEBCACHE"
        value="localhost"/>
    <var name="LBL_GLOBAL_URL_BROKERWEBCACHE"
        value="http://#LBL_GLOBAL_ADDRESS_BROKERWEBCACHE#:5993"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_LOADBALANCER"
        value="-XX:-UseParallelGC -XX:-UseParallelOldGC"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_WEB_CACHE_DWH"
        value="-XX:-UseParallelGC -XX:-UseParallelOldGC"/>
    <!-- Please NOTE: G1 garbage collector only with jdk 1.7.0_xx. You can
        find xx in LBL's compatibility matrix -->
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_LOADBALANCER"
        value="-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45"/>
    <var name="LBL_GLOBAL_GARBAGE_COLLECTOR_WEB_CACHE_DWH"
        value="-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45"/>
    <var name="LBL_GLOBAL_START_HEAP_LOADBALANCER"
        value="-Xms1g -Xmx1g"/>
</variables>
In this case, the re-declared variables take the G1 values instead of the ParallelGC ones.
Values at the Oplon Monitor level are inherited by the individual processes. You can then use or override the Oplon Monitor values at the process level. Overriding an Oplon Monitor value at the process level does not change the value for other processes.
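The inheritance and override rules described above can be sketched as follows. This is an illustration of the semantics only, not Oplon's implementation; the variable names and override value are hypothetical examples following the template naming.

```python
def effective_vars(monitor_vars, process_vars):
    """Process-level procProperties inherit the monitor-level ones;
    a re-declared name overrides the inherited value for that process only."""
    merged = dict(monitor_vars)   # inherited from the Oplon Monitor node
    merged.update(process_vars)   # process-local overrides
    return merged

monitor_vars = {"LBL_GLOBAL_ADDRESS_MANAGEMENT": "192.168.46.110",
                "LBL_GLOBAL_START_HEAP_LOADBALANCER": "-Xms1g -Xmx1g"}
process_vars = {"LBL_GLOBAL_START_HEAP_LOADBALANCER": "-Xms2g -Xmx2g"}  # hypothetical override

print(effective_vars(monitor_vars, process_vars))
```

Note that `monitor_vars` itself is never modified, mirroring the rule that a process-level override does not affect other processes.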
Oplon Catalog - Distributed Actions
Once the Oplon S.A.A.I. cluster-process associations are listed in the node catalog, Oplon Management Console reacts to commands by proposing actions distributed over the nodes that make up the cluster. So, for example, if you want to perform a hot "reinit" of a configuration, Oplon Management Console will propose to run it on the cluster's component nodes as well, regardless of whether the cluster is spontaneous or declared.
After selection, you will be asked whether to do so on the other nodes in the cluster:
When confirmed, the cluster services will perform the reinit action at the same time. The action is visible directly in Oplon Management Console if you expand the services of the affected process: you will notice the "reinit" symbol activate on the clustered service in the same way as on the selected node.
For all planned actions, Oplon Management Console will present the operator with the list of nodes involved in the distributed action and give the ability to select the nodes on which to perform the action.