Working with Flexible Scaling
Use the built-in flexible scaling in Analytics Manager to meet the following objectives:
Check whether the existing capacity fulfills the current computation needs and whether an addition or reduction in resources is required.
Decide where, when, and how compute capacity can be added or reduced.
Analytics Manager achieves flexible scaling by following these steps:
1. The Quality of Service (QoS) rules evaluate whether an addition or reduction in resources is required. This evaluation is based on the following parameters:
QoS indicator
Represents the computational activity in Analytics Manager and serves as a performance indicator of user activity execution.
For example: Waiting time, waiting jobs, execution time, number of sessions, session duration, and so on.
Threshold
Specifies a fixed value, or indicates the availability of the system, which can be computed dynamically.
For example: Number of client instances
Synthesizing function
Specifies a mathematical function that is applied to the QoS indicator and the threshold value is computed dynamically.
For example: Count, sum, average, and so on.
Margin
Specifies the value that is used to alter the QoS indicator. Depending on the margin value, the QoS indicator is increased or decreased by an absolute value or by a percentage of the QoS indicator.
Comparison operator
Specifies the comparison operator that is used to compare the altered QoS indicator with the threshold.
Occurrences
Specifies the number of consecutive time periods for which the rule must be met before an addition or reduction in resources is requested.
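The rule parameters above can be sketched as follows. This is an illustrative model only; the class, field names, and units are assumptions, not the actual Analytics Manager API.

```python
# Hypothetical sketch of QoS rule evaluation; names and units are illustrative.
import operator
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class QoSRule:
    synthesize: Callable[[List[float]], float]  # synthesizing function, e.g. sum, average
    margin_pct: float                           # margin: percent by which the indicator is altered
    compare: Callable[[float, float], bool]     # comparison operator, e.g. operator.gt
    threshold: float                            # fixed or dynamically computed threshold
    occurrences: int                            # consecutive periods the rule must hold
    _hits: int = 0

    def evaluate(self, samples: List[float]) -> bool:
        """Return True when the rule has held for `occurrences` consecutive periods."""
        indicator = self.synthesize(samples)
        altered = indicator * (1 + self.margin_pct / 100.0)
        if self.compare(altered, self.threshold):
            self._hits += 1
        else:
            self._hits = 0
        return self._hits >= self.occurrences


# Example: request more resources when the average waiting time, increased by a
# 10% margin, exceeds 30 seconds for 3 consecutive evaluation periods.
rule = QoSRule(
    synthesize=lambda xs: sum(xs) / len(xs),
    margin_pct=10.0,
    compare=operator.gt,
    threshold=30.0,
    occurrences=3,
)
results = [rule.evaluate([40.0, 45.0]) for _ in range(3)]
# results → [False, False, True]
```

Note how the margin alters the indicator before the comparison, and how the occurrence count prevents a single transient spike from triggering a scaling request.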
2. After the QoS evaluation is complete and an action is warranted, the flexible scaling engine determines which agent is available for an increase or decrease in capacity based on the following parameters:
Process metrics
Specifies the performance indicator of the individual compute instance.
For example: CPU used, memory used, threads, paging, and so on.
Platform metrics
Specifies performance indicators that represent the availability of the platform on which the client instances are run.
For example: CPU availability, memory availability, disk availability, and so on.
If an increase in capacity is warranted and a new client must be added, the flexible scaling engine verifies that the host has available at least 70% of the minimum memory that was specified when the provider was created.
Required compute capacity per client instance
Specifies the minimum compute capacity that is required to launch and support a compute instance.
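The 70% available-memory check described above can be expressed as a simple predicate. The function and parameter names here are assumptions for illustration, not the product's actual implementation.

```python
# Illustrative sketch of the 70% available-memory check; names and MB units are assumed.
def can_add_client(host_available_mb: float, provider_min_memory_mb: float) -> bool:
    """A new client instance may be launched only if the host has at least
    70% of the minimum memory specified when the provider was created."""
    return host_available_mb >= 0.70 * provider_min_memory_mb


# Example: with a provider minimum of 2048 MB, at least 1433.6 MB must be free.
print(can_add_client(1500.0, 2048.0))  # True
print(can_add_client(1200.0, 2048.0))  # False
```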
Request for addition in resources
Before a request for addition in resources is made, the flexible scaling engine checks whether either of the following actions is possible:
Add more capacity to an existing host by launching another process.
Deploy the software to a managed host that has available capacity.
If either condition is satisfied, the flexible scaling engine returns the agent where a new instance can be started.
If no suitable agent can satisfy the request, an event is raised for an external component to add more resources.
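The addition-request decision flow might be sketched as follows. The Agent fields, their ordering of preference, and the None-as-event convention are assumptions made for illustration.

```python
# Minimal sketch of the addition-request decision flow; fields are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Agent:
    name: str
    can_launch_process: bool   # existing host can add capacity by launching another process
    has_deploy_capacity: bool  # managed host with capacity for a new deployment


def select_agent_for_addition(agents: List[Agent]) -> Optional[Agent]:
    # Prefer adding capacity on an existing host, then deploying to a managed host.
    for agent in agents:
        if agent.can_launch_process:
            return agent
    for agent in agents:
        if agent.has_deploy_capacity:
            return agent
    return None  # caller raises an event so an external component adds resources


agents = [Agent("a1", False, True), Agent("a2", True, False)]
chosen = select_agent_for_addition(agents)
# chosen.name → "a2" (an existing host that can launch another process)
```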
Request for reduction in resources
When a request for reduction in resources is made, the flexible scaling engine returns the agent where an instance can be shut down; typically, the agent with the least load is returned. When all clients on an agent are shut down, no new jobs or sessions are assigned to that agent, and the node becomes available for other computations.
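The least-load selection described above reduces to picking the minimum over a load map. The load representation here is an assumption for illustration.

```python
# Sketch of selecting the agent for reduction: the least-loaded agent is returned,
# mirroring the behavior described above. The load map is a hypothetical representation.
from typing import Dict


def select_agent_for_reduction(loads: Dict[str, float]) -> str:
    """loads maps agent name -> current load; the least-loaded agent is chosen."""
    return min(loads, key=loads.get)


print(select_agent_for_reduction({"a1": 0.8, "a2": 0.2, "a3": 0.5}))  # a2
```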
The Quality of Service (QoS) rules can be completely customized. For more information about customizing these rules, see Managing Customized Flexible Scaling.
Redundant Agents
The agent is a critical component for using capacity and maintaining operational continuity. An agent must run at all times on any system that runs computations in Analytics Manager. Therefore, the Analytics Manager framework implements the concept of redundant agents.
Redundant agents are used to maintain continuous connectivity to the compute resources. At agent startup, a pair of agents (primary agent and secondary agent) is started and registered with the server.
The primary agent is responsible for running compute operations. The secondary agent restarts the primary agent if the primary agent is killed at run time. Similarly, if the secondary agent is killed at run time, the primary agent restarts the secondary agent.
This feature is optional. To enable this feature, set the value of the UseRedundantAgent parameter in the file of your agent to true. By default, this value is set to false.
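As a sketch, the entry in the agent's configuration file might look like the following (the exact file name and format depend on your installation and are not specified here):

```properties
# Enable the redundant (primary/secondary) agent pair; defaults to false.
UseRedundantAgent=true
```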