ct_master(3erl) 					     Erlang Module Definition						   ct_master(3erl)

NAME
       ct_master - Distributed test execution control for Common Test.

DESCRIPTION
       Distributed test execution control for Common Test.

       This module exports functions for running Common Test nodes on multiple hosts in parallel.

EXPORTS
       abort() -> ok

              Stops all running tests.

       abort(Nodes) -> ok

              Types:

                 Nodes = atom() | [atom()]

              Stops tests on the specified nodes.

       progress() -> [{Node, Status}]

              Types:

                 Node = atom()
                 Status = finished_ok | ongoing | aborted | {error, Reason}
                 Reason = term()

              Returns test progress. If Status is ongoing, tests are running on the node and have not yet finished.

       run(TestSpecs) -> ok

              Types:

                 TestSpecs = string() | [SeparateOrMerged]
                 SeparateOrMerged = string() | [string()]

              Equivalent to run(TestSpecs, false, [], []).

       run(TestSpecs, InclNodes, ExclNodes) -> ok

              Types:

                 TestSpecs = string() | [SeparateOrMerged]
                 SeparateOrMerged = string() | [string()]
                 InclNodes = [atom()]
                 ExclNodes = [atom()]

              Equivalent to run(TestSpecs, false, InclNodes, ExclNodes).

       run(TestSpecs, AllowUserTerms, InclNodes, ExclNodes) -> ok

              Types:

                 TestSpecs = string() | [SeparateOrMerged]
                 SeparateOrMerged = string() | [string()]
                 AllowUserTerms = bool()
                 InclNodes = [atom()]
                 ExclNodes = [atom()]

              Tests are spawned on the nodes as specified in TestSpecs. Each specification in TestSpecs is handled separately. It is, however, also possible to specify a list of specifications to be merged into one before the tests are executed. Any test without a particular node specification is also executed on the nodes in InclNodes. Nodes in the ExclNodes list are excluded from the test.

       run_on_node(TestSpecs, Node) -> ok

              Types:

                 TestSpecs = string() | [SeparateOrMerged]
                 SeparateOrMerged = string() | [string()]
                 Node = atom()

              Equivalent to run_on_node(TestSpecs, false, Node).

       run_on_node(TestSpecs, AllowUserTerms, Node) -> ok

              Types:

                 TestSpecs = string() | [SeparateOrMerged]
                 SeparateOrMerged = string() | [string()]
                 AllowUserTerms = bool()
                 Node = atom()

              Tests are spawned on Node according to TestSpecs.

       run_test(Node, Opts) -> ok

              Types:

                 Node = atom()
                 Opts = [OptTuples]
                 OptTuples = {config, CfgFiles} | {dir, TestDirs} | {suite, Suites} | {testcase, Cases} | {spec, TestSpecs} | {allow_user_terms, Bool} | {logdir, LogDir} | {event_handler, EventHandlers} | {silent_connections, Conns} | {cover, CoverSpecFile} | {userconfig, UserCfgFiles}
                 CfgFiles = string() | [string()]
                 TestDirs = string() | [string()]
                 Suites = atom() | [atom()]
                 Cases = atom() | [atom()]
                 TestSpecs = string() | [string()]
                 LogDir = string()
                 EventHandlers = EH | [EH]
                 EH = atom() | {atom(), InitArgs} | {[atom()], InitArgs}
                 InitArgs = [term()]
                 Conns = all | [atom()]

              Tests are spawned on Node using ct:run_test/1.
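
       A minimal sketch of a distributed run follows. The specification file base.spec, the node names, and the directories are placeholders, not values defined by this module, and the nodes are assumed to be already reachable from the master node:

          %% base.spec -- an assumed test specification on the master node:
          %%   {node, node1, 'ct1@host1'}.
          %%   {suites, node1, "../test", all}.
          %%   {logdir, master, "../logs"}.

          1> ct_master:run("base.spec", ['ct1@host1', 'ct2@host2'], []).
          ok

          %% From a second shell attached to the master node, the run can
          %% be monitored or stopped while it is in progress:
          2> ct_master:progress().
          [{'ct1@host1',finished_ok},{'ct2@host2',ongoing}]
          3> ct_master:abort('ct2@host2').
          ok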
AUTHORS
       <>

common_test 1.5.3						   ct_master(3erl)

pool(3erl)						     Erlang Module Definition							pool(3erl)

NAME
       pool - Load Distribution Facility

DESCRIPTION
       pool can be used to run a set of Erlang nodes as a pool of computational processors. It is organized as a master and a set of slave nodes and includes the following features:

       * The slave nodes send regular reports to the master about their current load.

       * Queries can be sent to the master to determine which node will have the least load.

       The BIF statistics(run_queue) is used for estimating future loads. It returns the length of the queue of processes that are ready to run in the Erlang runtime system.

       The slave nodes are started with the slave module. This affects tty IO, file IO, and code loading.

       If the master node fails, the entire pool exits.

EXPORTS
       start(Name) ->
       start(Name, Args) -> Nodes

              Types:

                 Name = atom()
                 Args = string()
                 Nodes = [node()]

              Starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started; see section FILES below. The start-up procedure fails if the file is not found.

              The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names, and Args is used to specify command-line arguments. See slave(3erl).

              Access rights must be set so that all nodes in the pool have the authority to access each other.

              The function is synchronous, and all the nodes, as well as all the system servers, are running when it returns a value.

       attach(Node) -> already_attached | attached

              Types:

                 Node = node()

              Ensures that a pool master is running and includes Node in the pool master's pool of nodes.

       stop() -> stopped

              Stops the pool and kills all the slave nodes.

       get_nodes() -> Nodes

              Types:

                 Nodes = [node()]

              Returns a list of the current member nodes of the pool.

       pspawn(Mod, Fun, Args) -> pid()

              Types:

                 Mod = Fun = atom()
                 Args = [term()]

              Spawns a process on the pool node that is expected to have the lowest future load.

       pspawn_link(Mod, Fun, Args) -> pid()

              Types:

                 Mod = Fun = atom()
                 Args = [term()]

              Spawns and links to a process on the pool node that is expected to have the lowest future load.

       get_node() -> node()

              Returns the node with the expected lowest future load.
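
       A minimal sketch of starting and using a pool follows. The pool name, the cookie, and the resulting node names are placeholders, and a valid .hosts.erlang file is assumed to exist (see section FILES below):

          1> pool:start(mypool, "-setcookie secret").
          ['mypool@host1.example.com','mypool@host2.example.com']
          2> pool:get_node().
          'mypool@host1.example.com'
          3> pool:pspawn(io, format, ["hello from the pool~n"]).
          <9111.40.0>
          4> pool:stop().
          stopped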
FILES
       .hosts.erlang is used to pick hosts where nodes can be started. See net_adm(3erl) for information about the format and location of this file.

       $HOME/.erlang.slave.out.HOST is used for all additional IO that may come from the slave nodes on standard IO. If the start-up procedure does not work, this file may indicate the reason.
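
       The host file is a plain list of Erlang terms, one quoted host name per line, each terminated by a full stop. The host names below are placeholders; see net_adm(3erl) for the authoritative description:

          'host1.example.com'.
          'host2.example.com'.
          'host3.example.com'.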
Ericsson AB			     stdlib 1.17.3			      pool(3erl)