Put the rebar executable into a directory on your PATH, for example ~/bin/ or /usr/local/bin/. Create a directory for the project (gpt) and go into it:

$ mkdir gpt && cd gpt
$ mkdir -p apps/gpt && cd apps/gpt
$ rebar create-app appid=gpt
$ ls -1 src
gpt_app.erl
gpt.app.src
gpt_sup.erl
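For reference, the generated gpt_app.erl looks roughly like this (the exact contents vary slightly between rebar versions):

-module(gpt_app).
-behaviour(application).
-export([start/2, stop/1]).

%% Called when the application starts: launch the top-level supervisor.
start(_StartType, _StartArgs) ->
    gpt_sup:start_link().

stop(_State) ->
    ok.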
Add the application description and the dependency on gproc to src/gpt.app.src:

{description, "GProc tutorial"},
...
{applications, [
    kernel,
    stdlib,
    gproc   % <--- Application depends on gproc
]},
...
Go back to the project root, create a rel subdirectory in it and go there:

$ cd ../../
$ mkdir rel && cd rel
Have rebar create a stub for the node by passing its name in the nodeid parameter:

$ rebar create-node nodeid=gptnode

Edit the generated reltool.config:

...
{lib_dirs, ["../deps", "../apps"]},  % <--- reltool will look for dependencies and our application in these directories
{rel, "gptnode", "1",
 [
  kernel,
  stdlib,
  sasl,
  gproc,  % <--- The gproc application
  gpt     % <--- Our application
 ]},
...
We can tweak the files/vm.args file by changing, say, the node name from

-name gptnode@127.0.0.1

to

-sname gptnode@localhost
Go back to the project root:

$ cd ../

Create rebar.config with the following content:

%% Dependencies will be placed here.
{deps_dir, ["deps"]}.

%% Subdirectories that rebar should look at
{sub_dirs, ["rel", "apps/gpt"]}.

%% Compiler options
{erl_opts, [debug_info, fail_on_warning]}.

%% List of dependencies.
%% The master branch of the corresponding git repository will be cloned into the deps/gproc directory.
{deps, [
    {gproc, ".*", {git, "http://github.com/esl/gproc.git", "master"}}
]}.
Now run the following rebar commands (command output omitted):

$ rebar get-deps
$ rebar compile
$ rebar generate
The get-deps command downloads the dependencies; in our case, this is the gproc application. The compile command obviously compiles all the source files, and generate creates a release. The resulting rel/gptnode directory can be safely moved to other hosts (subject to binary compatibility, of course, since the release includes the Erlang virtual machine). After creating the release, run what we got:

$ (cd rel/gptnode && sh bin/gptnode console)

(gptnode@localhost)1> application:which_applications().
[{sasl,"SASL CXC 138 11","2.1.9.2"},
 {gpt,"GProc tutorial","1"},
 {gproc,"GPROC","0.01"},
 {stdlib,"ERTS CXC 138 10","1.17.2"},
 {kernel,"ERTS CXC 138 10","2.14.2"}]
We have figured out rebar and learned how to create a simple project and work with it. Now let's proceed to gproc.

Registering processes with the erlang:register/2 function is only suitable for a small number of long-lived processes whose names never change; the analogue is global variables in imperative programming languages. gproc does not have this limitation: it monitors each registered process via erlang:monitor/2, receives a 'DOWN' message when that process crashes, and then deletes its entry from the ETS table.
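To make that bookkeeping concrete, here is a minimal sketch of the same technique, not gproc's actual implementation: name-to-pid pairs live in an ETS table, every registered process is monitored, and its entry is dropped when the 'DOWN' message arrives.

%% Minimal illustration of the monitor/'DOWN' cleanup pattern (NOT gproc's code).
-module(mini_reg).
-export([start/0, register_name/3, whereis_name/2]).

start() ->
    Tab = ets:new(mini_reg_tab, [set, public]),
    Owner = spawn(fun() -> loop(Tab) end),
    {Tab, Owner}.

register_name({Tab, Owner}, Name, Pid) ->
    ets:insert(Tab, {Name, Pid}),
    Owner ! {monitor, Name, Pid},   % ask the table owner to watch Pid
    ok.

whereis_name({Tab, _Owner}, Name) ->
    case ets:lookup(Tab, Name) of
        [{Name, Pid}] -> Pid;
        []            -> undefined
    end.

loop(Tab) ->
    receive
        {monitor, Name, Pid} ->
            Ref = erlang:monitor(process, Pid),
            put(Ref, Name),               % remember which name this monitor guards
            loop(Tab);
        {'DOWN', Ref, process, _Pid, _Reason} ->
            ets:delete(Tab, erase(Ref)),  % the process died: remove its entry
            loop(Tab)
    end.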
The code of our process is in the gpt_proc.erl file. The gpt_sup.erl file contains the code for the supervisor of this process group. When the gpt_sup:start_worker/1 function is called, our process is launched and registered under the name passed to the function as its only argument; in this case, it is a number.

(gptnode@localhost)1> [gpt_sup:start_worker(Id) || Id <- lists:seq(1,3)].
(gpt_proc:29) Start process: 1
(gpt_proc:29) Start process: 2
(gpt_proc:29) Start process: 3
[{ok,<0.61.0>},{ok,<0.62.0>},{ok,<0.63.0>}]
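The article does not reproduce gpt_sup.erl itself; a plausible sketch, assuming a simple_one_for_one supervisor, might look like this:

%% Hypothetical reconstruction of gpt_sup: start_worker/1 starts one
%% gpt_proc child, passing Id through to gpt_proc:start_link/1.
-module(gpt_sup).
-behaviour(supervisor).
-export([start_link/0, start_worker/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

start_worker(Id) ->
    supervisor:start_child(?MODULE, [Id]).

init([]) ->
    Worker = {gpt_proc, {gpt_proc, start_link, []},
              transient, 2000, worker, [gpt_proc]},
    {ok, {{simple_one_for_one, 5, 10}, [Worker]}}.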
The gproc:add_local_name(Name) function registers the process that calls it under the name Name (this function is simply a wrapper over gproc:reg({n, l, Name}), where n stands for "name" and l for "local"). After that, the gproc:lookup_local_name(Name) function will return the process ID.

handle_info({await, Id}, #state{id = MyId} = State) ->
    gproc:await({n, l, Id}),
    ?DBG("MyId: ~p.~nNewId: ~p.", [MyId, Id]),
    {noreply, State};
In this handle_info clause, the gproc:await/1 function is called with an argument of the form {n, l, Id}; it blocks until a process is registered under that name. For some reason it does not have a wrapper, but oh well.

(gptnode@localhost)2> gproc:lookup_local_name(1) ! {await, 4}.
{await,4}
(gptnode@localhost)3> gpt_sup:start_worker(4).
(gpt_proc:29) Start process: 4
(gpt_proc:45) MyId: 1.
NewId: 4.
{ok,<0.66.0>}
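If this pattern comes up often, a trivial helper can be defined locally (hypothetical, not part of gproc's API):

%% Hypothetical wrapper in the spirit of add_local_name/lookup_local_name:
%% blocks until Name is registered locally.
await_local_name(Name) ->
    gproc:await({n, l, Name}).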
The process terminates when it receives a stop message:

handle_info(stop, State) ->
    {stop, normal, State};

(gptnode@localhost)4> gproc:lookup_local_name(1) ! stop.
stop
(gptnode@localhost)5> gproc:lookup_local_name(1).
undefined

As promised, gproc noticed the process exit and removed its registration, so the lookup now returns undefined.
Global name registration requires a working cluster, and gproc must be configured before the gproc:add_global_name/1 call to allow this action. Consider an example. rebar will help us with this, since it has the ability to create configuration files from a predefined template. When creating a cluster, we have to take care of a few details: each node needs its own name, and the kernel application on each node must be configured. Let's start with files/vm.args:

## Name of the node
-sname {{node}}

## Cookie for distributed erlang
-setcookie gptnode
Here {{node}} is a placeholder that will be filled in when the release is created. The virtual machine's -setcookie flag sets the cookie value for this node; in a cluster, all the nodes must have the same value. Next comes files/app.config, where placeholders are also used:

%% gproc
{gproc, {{gproc_params}}},

%% Kernel
{kernel, {{kernel_params}}},
We tell reltool.config that the previous two files should be treated as templates:

{template, "files/app.config", "etc/app.config"},
{template, "files/vm.args", "etc/vm.args"}
Now create two files with the placeholder values: vars/dev1_vars.config and vars/dev2_vars.config. The dev1_vars.config file will contain the following:

%% etc/app.config
{gproc_params, "[
    {gproc_dist, {['gpt1@localhost'], [{workers, ['gpt2@localhost']}]}}
]"}.

{kernel_params, "[
    {sync_nodes_mandatory, ['gpt2@localhost']},
    {sync_nodes_timeout, 15000}
]"}.

%% etc/vm.args
{node, "gpt1@localhost"}.
In the dev2_vars.config file, the node names in the sync_nodes_mandatory and node parameters are swapped. Let's look at these parameters in more detail.

The gproc_dist parameter belongs to the gproc application and is a tuple of two lists. The first list names the nodes that are able to become the leader (master); the second contains key-value tuples, of which we need only one key for now: workers, which defines the list of nodes that are ordinary cluster members (slaves).

The kernel parameter sync_nodes_mandatory is a list of nodes that are required to be present in the cluster. The second parameter, sync_nodes_timeout, is the time in milliseconds that each node will wait for the nodes from the previous list to appear. If they do not appear in that time, the node stops. Let's set it to 15 seconds so that we have time to start both nodes by hand.
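After template substitution, etc/app.config on the first node should end up looking roughly like this (an illustration assembled from the values above, not shown in the article):

%% etc/app.config on gpt1 after substitution (illustrative)
[
 %% gproc
 {gproc, [
     {gproc_dist, {['gpt1@localhost'], [{workers, ['gpt2@localhost']}]}}
 ]},

 %% Kernel
 {kernel, [
     {sync_nodes_mandatory, ['gpt2@localhost']},
     {sync_nodes_timeout, 15000}
 ]}
].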
The node value will be written into the virtual machine's startup parameters; it is the node's name.

Both releases are generated by the following Makefile rule:

dev1 dev2:
	mkdir -p dev
	(cd rel && rebar generate target_dir=../dev/$@ overlay_vars=vars/$@_vars.config)
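Assuming GNU make (recipe lines must be tab-indented), both releases are then built in one go; $@ expands to the target name, so each release gets its own target directory and variable file:

$ make dev1 dev2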
Go to the dev/dev1 directory, launch a second terminal window (or create a new window in screen), and go to the dev/dev2 directory there. In both windows run ./bin/gptnode console. Let's see the list of available nodes in the first Erlang shell:

(gpt1@localhost)1> nodes().
[gpt2@localhost]
Register a global name on the first node:

(gpt1@localhost)2> gproc:add_global_name({shell, 1}).
true

Look it up from the second node and send a message to the registered process:

(gpt2@localhost)2> gproc:lookup_global_name({shell, 1}).
<3358.70.0>
(gpt2@localhost)3> gproc:lookup_global_name({shell, 1}) ! {the, message}.
{the,message}

Back in the first shell, make sure the message arrived by using the flush() command:

(gpt1@localhost)3> flush().
Shell got {the,message}
ok
Source: https://habr.com/ru/post/112681/