Linux – Run a non-returning script on a remote host and manage its process

Run a non-returning script on a remote host and manage its process

I’m using ssh to start a script on a remote host.
After it runs, I need to get back to the original host.
The problem is that script1, which I want to execute, starts a tool on the remote host, and the session remains on the remote host until that process is terminated.

ssh -Y -l "testlogin" 'path/to/script1'

If I execute the command in a terminal, I can get back by typing CTRL+C.
But now that I want to execute the command from Perl, I can’t simply press CTRL+C:

system(qq{ssh -Y -l "testlogin" 'path/to/script1'});

Now does anyone know how to terminate this process on the remote host without knowing the PID?


I don’t think you want the program to wait for the whole ssh command to finish; rather, you want the operation to be non-blocking, so that the program can start it and move on to other things right away.

There are several ways to do this, depending on what exactly is needed. I want to first show the canonical fork+exec approach. Then there is a small program that also puts a job in the background using system, a thread, a pipe-open, and a module.

A basic approach is to fork and then exec the needed program in the child process.

    my @cmd = ('ssh', '-Y', '-l', 'testlogin', 'path/to/script1');

    local $SIG{CHLD} = 'IGNORE';  # don't care to reap the child here

    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {
        exec @cmd;
        die "Shouldn't return after 'exec', there were errors: $!";
    }

    # the parent (main program) carries on

This assumes that the process does not need to be monitored or terminated at any time.

Next, here is a program that shows several other ways to start a job in a non-blocking way.

For demonstration, it prints to the screen when control returns right after a job is started, and then sleeps for a while so that the command's printout (Job done) can clearly show when it completes. The program was also tested with commands run on remote hosts via ssh.

use warnings;
use strict;
use feature 'say';

# Will use the command as a list of terms when possible, or a string if needed
my @cmd = ('perl', '-E', 'sleep 5; say "Job done"');
my $cmd_str = join ' ', @cmd[0,1], qq('$cmd[-1]');
say "Command: $cmd_str";

# Command shown in question
#my @cmd = ('ssh', '-Y', '-l', 'testlogin', 'path/to/script1');
#my $cmd_str = join ' ', @cmd;

WITH_SYSTEM: {  #last;
    # Uses the shell. Probably simplest
    system("$cmd_str &") == 0  or die "Error with system: $!";

    say "\nRan in background via system. Sleep 10";
    sleep 10;
}

WITH_THREADS: {  #last;
    # Do in the thread whatever is needed
    use threads;

    my $thr = async { system(@cmd) };
    say "\nStarted a thread, id ", $thr->tid, ". Sleep 10";
    sleep 10;

    # At some point "join" the thread (once it completed, or this waits)
    $thr->join;
    # Or, have the thread terminated at program exit if still running
    # $thr->detach;
}

FORK_EXEC: {  #last;
    # Again, have full freedom to do it any way needed
    local $SIG{CHLD} = 'IGNORE';
    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {
        exec @cmd  or die "Can't be here, there were errors: $!";
    }

    say "\nIn parent, forked $pid and exec-ed the program in it. Sleep 10";
    sleep 10;
}

PIPE_OPEN: {  #last;
    # Strictly no shell. Normally used to read the child's output as it comes
    my $pid = open(my $pipe, '-|', @cmd)
        or die "Can't open pipe: $!";

    say "\nPipe-opened a process $pid";

    print while <$pipe>;  # demo, not necessary

    close $pipe or die "Error with pipe-open: $!";
}

PROC_BACKGROUND: {  #last;
    # Uncomment after installing the module, if desired
    #use Proc::Background;
    #my $proc = Proc::Background->new(@cmd);
    #say "\nStarted a process with Proc::Background, ", $proc->pid, ". Sleep 10";
    #sleep 10;
}

Each of these starts a process, after which the main program is free to move on to other things; here it prints to the screen and then, as a demonstration, waits for the process/thread to finish. These methods have different advantages and typical uses.

Brief description

  • system passes its command to the shell if it is given as a string with shell metacharacters. That is used here: the trailing & tells the shell to put the command “in the background”, that is, to fork another process and execute the command in it, returning control right away. This works much like our fork+exec example, which is the age-old way *nix does multiprocessing.

  • A separate thread runs independently, so we can do other work in the main program. The thread needs to be managed in the end, either via join, or via detach, in which case we don’t care how it goes (and it gets terminated at the end of the program).

  • The pipe-open also forks a separate process, freeing the main program to continue with other work. The point of it is that the file handle (which I named $pipe) is used to receive the process’s output, since its STDOUT is hooked to that resource (“file handle”). It avoids the shell altogether, which is generally a good thing. See also perlipc.

If you need actual control over the process, over ssh (on your host), or over jobs on the remote host, then more (or other) tools are needed. If that is the case, please clarify.

The question concludes with

… how to terminate this process on the remote host without knowing the PID?

This seems to be at odds with the comments; above I addressed what seemed to be the task. But if you do need to terminate the remote process, have the command on the remote host print the PID of the tool it starts — all of the methods shown above know the PID of the process they start — and capture it in the script. The script can then ssh to the remote host and kill that process when needed.

Another way to avoid having to reap the child process is to “disown” it via the “double fork”:

    my $pid = fork // die "Can't fork: $!";

    if ($pid == 0) {
        my $grandkid_pid = fork // die "Cannot fork: $!";

        if ($grandkid_pid == 0) {
            # do here what you need
            exec @cmd;
            die "Should never get here, there were errors: $!";
        }

        exit;  # the parent of the second child process exits right away
    }
    waitpid $pid, 0;  # and is reaped here

    # the main program continues

The first child process exits right away after forking and is reaped, so its own child is taken over by init and there is no trouble with zombies. However, done this way the process is not easy to track or query from outside.
