About Distributed Computing Technology in Creo Parametric
Creo Parametric uses distributed computing technology to augment your existing hardware so that it can perform computationally intensive tasks.
Distributed computing technology consists of the following:
A session of Creo Parametric (the controller)
Participating networked workstations
A daemon running on each participating workstation
One or more agents running on the workstations (Creo Parametric sessions running as server processes)
A task (a collection of jobs)
The following steps outline how distributed computing technology works:
1. The controller communicates with the daemon on each remote workstation to start an agent process.
2. The agent process establishes a connection with the controller.
3. The controller decides which agent receives the next job, dispatching jobs so that the computing of the task is distributed as efficiently as possible.
4. The daemon on each workstation enables data streaming between the controller and the agents, provides statistics, provides job execution feedback, reports workstation loads, and monitors workstations on the network.
5. As each job is completed, the results are streamed to the controller.
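The dispatch loop in the steps above can be sketched in Python. This is an illustrative model only; the class and function names are hypothetical and do not come from the Creo Parametric API.

```python
# Hypothetical sketch of the controller's dispatch loop: each job in a
# task goes to the least-busy agent, and results flow back to the
# controller. None of these names are part of Creo Parametric.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    jobs_done: list = field(default_factory=list)

    def execute(self, job):
        # Stand-in for an agent session computing a job; in the real
        # system the daemon streams the result back to the controller.
        self.jobs_done.append(job)
        return f"{self.name}:{job}"

def dispatch_task(jobs, agents):
    """Controller loop: hand each job to the least-loaded agent."""
    results = []
    for job in jobs:
        agent = min(agents, key=lambda a: len(a.jobs_done))
        results.append(agent.execute(job))
    return results

agents = [Agent("ws1"), Agent("ws2")]
print(dispatch_task(["j1", "j2", "j3"], agents))
```

Here "load" is approximated by the number of jobs an agent has already handled; the real controller uses load averages and processor counts reported by the daemons.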
Interactions between the Controller and the Agent
Distributed computing technology optimizes task distribution. The controller communicates with the daemon on each workstation and determines load averages. Agent sessions are launched automatically, depending on load averages and the number of processors. These agents are started only once per distributed computing task, not once per job.
Data communication is also optimized. Data (models, information, instructions) are efficiently streamed directly to each agent via the daemon. Files are not copied to workstations prior to executing a job. As subsequent jobs are dispatched to the same agent, only data that is different between the jobs is streamed. If the models involved are the same, they are not streamed again.
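The delta-streaming idea described above can be sketched as a content-addressed cache: the daemon streams a model to an agent only when that agent does not already hold an identical copy. The function and cache layout below are hypothetical, not the actual Creo Parametric mechanism.

```python
# Hypothetical sketch of delta streaming between jobs: only models
# whose content differs from what the agent already holds are sent.
import hashlib

def stream_job(job_models, agent_cache):
    """Stream a job's models to one agent, skipping unchanged ones.

    job_models:  dict mapping model name -> model bytes for this job
    agent_cache: dict mapping model name -> digest of the copy the
                 agent already holds (mutated in place)
    Returns the names of the models actually streamed.
    """
    sent = []
    for name, data in job_models.items():
        digest = hashlib.sha256(data).hexdigest()
        if agent_cache.get(name) != digest:
            # In the real system the daemon would stream `data` here.
            agent_cache[name] = digest
            sent.append(name)
    return sent

cache = {}
stream_job({"bracket.prt": b"v1", "asm.asm": b"v1"}, cache)  # both sent
stream_job({"bracket.prt": b"v2", "asm.asm": b"v1"}, cache)  # only bracket.prt
```

In this sketch, a second job that reuses an unchanged model triggers no transfer at all, which matches the behavior described above: identical models are not streamed again.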