(PHP 4 >= 4.3.0, PHP 5)
proc_open — Execute a command and open file pointers for input/output
resource proc_open ( string $cmd , array $descriptorspec , array &$pipes [, string $cwd [, array $env [, array $other_options ]]] )
proc_open() is similar to popen() but provides a much greater degree of control over the program execution.
cmd
The command to execute
descriptorspec
An indexed array where the key represents the descriptor number and the value represents how PHP will pass that descriptor to the child process. 0 is stdin, 1 is stdout, while 2 is stderr.
Each element can be:
An array describing the pipe to pass to the process. The first element is the descriptor type and the second element is an option for the given type. Valid types are pipe (the second element is either r to pass the read end of the pipe to the process, or w to pass the write end) and file (the second element is a filename).
A stream resource representing a real file descriptor (e.g. an opened file, a socket, STDIN).
The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor number and it will be passed to the child process. This allows your script to interoperate with other scripts that run as "co-processes". In particular, this is useful for passing passphrases to programs like PGP, GPG and openssl in a more secure manner. It is also useful for reading status information provided by those programs on auxiliary file descriptors.
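For illustration, here is a minimal sketch (not one of the official examples) of handing a passphrase to gpg on descriptor 3 instead of the command line; the gpg options shown are an assumption about a typical gpg setup:
<?php
// Sketch only: the child reads the passphrase from descriptor 3.
$descriptorspec = array(
    0 => array("pipe", "r"),  // child's stdin
    1 => array("pipe", "w"),  // child's stdout
    2 => array("pipe", "w"),  // child's stderr
    3 => array("pipe", "r")   // extra descriptor the child reads the passphrase from
);
$process = proc_open('gpg --batch --passphrase-fd 3 --decrypt', $descriptorspec, $pipes);
if (is_resource($process)) {
    fwrite($pipes[3], 'my secret passphrase');
    fclose($pipes[3]);
    // ... write the ciphertext to $pipes[0], read the plaintext from $pipes[1] ...
}
?>
Complete GPG examples along these lines appear in the user notes below.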
pipes
Will be set to an indexed array of file pointers that correspond to PHP's end of any pipes that are created.
cwd
The initial working dir for the command. This must be an absolute directory path, or NULL if you want to use the default value (the working dir of the current PHP process).
env
An array with the environment variables for the command that will be run, or NULL to use the same environment as the current PHP process.
other_options
Allows you to specify additional options. Currently supported options include:
suppress_errors (windows only): suppresses errors generated by this function when set to TRUE
bypass_shell (windows only): bypass cmd.exe shell when set to TRUE
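For illustration, a minimal sketch (assuming Windows and PHP 5.2.1+, per the changelog below) of passing bypass_shell:
<?php
// Sketch only: run the command directly instead of through cmd.exe.
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$other_options = array("bypass_shell" => true);
$process = proc_open('notepad.exe', $descriptorspec, $pipes, null, null, $other_options);
?>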
Returns a resource representing the process, which should be freed using proc_close() when you are finished with it. On failure returns FALSE.
Version | Description |
---|---|
5.2.1 | Added the bypass_shell option to the other_options parameter. |
5.0.0 | Added the cwd, env and other_options parameters. |
Example #1 A proc_open() example
<?php
$descriptorspec = array(
   0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
   1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
   2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);

$cwd = '/tmp';
$env = array('some_option' => 'aeiou');

$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);

if (is_resource($process)) {
    // $pipes now looks like this:
    // 0 => writeable handle connected to child stdin
    // 1 => readable handle connected to child stdout
    // Any error output will be appended to /tmp/error-output.txt

    fwrite($pipes[0], '<?php print_r($_ENV); ?>');
    fclose($pipes[0]);

    echo stream_get_contents($pipes[1]);
    fclose($pipes[1]);

    // It is important that you close any pipes before calling
    // proc_close in order to avoid a deadlock
    $return_value = proc_close($process);

    echo "command returned $return_value\n";
}
?>
The above example will output something similar to:
Array
(
    [some_option] => aeiou
    [PWD] => /tmp
    [SHLVL] => 1
    [_] => /usr/local/bin/php
)
command returned 0
Note:
Windows compatibility: Descriptors beyond 2 (stderr) are made available to the child process as inheritable handles, but since the Windows architecture does not associate file descriptor numbers with low-level handles, the child process does not (yet) have a means of accessing those handles. Stdin, stdout and stderr work as expected.
Note:
If you only need a uni-directional (one-way) process pipe, use popen() instead, as it is much easier to use.
mcuadros at gmail dot com (2013-04-05 10:56:11)
This is an example of how to run a command using the TTY as output, just like crontab -e or git commit do.
<?php
$descriptors = array(
array('file', '/dev/tty', 'r'),
array('file', '/dev/tty', 'w'),
array('file', '/dev/tty', 'w')
);
$process = proc_open('vim', $descriptors, $pipes);
michael dot gross at NOSPAM dot flexlogic dot at (2013-01-03 20:33:46)
Please note that if you plan to spawn multiple processes, you have to save all the results in different variables (in an array, for example). If you call $proc = proc_open(...) multiple times reusing the same variable, the script will block after the second call until the child process exits, because overwriting the resource implicitly calls proc_close().
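For example, a minimal sketch of keeping every resource around (the worker script names are placeholders):
<?php
// Sketch: keep each resource (and its pipes) so none is overwritten and
// implicitly closed before you are done with it.
$spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"), 2 => array("pipe", "w"));
$procs = array();
foreach (array('php worker1.php', 'php worker2.php') as $i => $cmd) {
    $pipes = array();
    $procs[$i] = array('res' => proc_open($cmd, $spec, $pipes), 'pipes' => $pipes);
}
// ... interact with $procs[$i]['pipes'] here ...
foreach ($procs as $p) {
    foreach ($p['pipes'] as $pipe) {
        fclose($pipe);
    }
    proc_close($p['res']);
}
?>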
bilge at boontex dot com (2012-09-05 02:56:02)
$cmd can actually be multiple commands by separating each command with a newline. However, due to this it is not possible to split up one very long command over multiple lines, even when using "\\\n" syntax.
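A minimal illustration of that on a Unix-like shell (the commands are arbitrary):
<?php
// Sketch: two commands separated by a newline, one proc_open() call.
$cmd = "echo first\necho second";
$process = proc_open($cmd, array(1 => array("pipe", "w")), $pipes);
echo stream_get_contents($pipes[1]); // prints "first" and "second" on separate lines
fclose($pipes[1]);
proc_close($process);
?>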
devel at romanr dot info (2012-03-10 08:04:02)
The call works as it should. No bugs.
But in most cases you won't be able to work with the pipes in blocking mode.
When your output pipe (the process' input, $pipes[0]) is blocking, there is a case where you and the process are both blocked on output.
When your input pipe (the process' output, $pipes[1]) is blocking, there is a case where you and the process are both blocked on your own input.
So you should switch the pipes into NONBLOCKING mode (stream_set_blocking).
Then there is a case where you're not able to read anything (fread($pipes[1],...) == "") or write anything (fwrite($pipes[0],...) == 0). In that case, you had better check that the process is alive (proc_get_status) and, if it still is, wait for some time (stream_select). The situation is truly asynchronous; the process may be busy working, processing your data.
Using the shell effectively makes it impossible to know whether the command exists - proc_open always returns a valid resource. You may even write some data into it (into the shell, actually). But eventually it will terminate, so check the process status regularly.
I would advise not using mkfifo pipes, because a filesystem fifo pipe (mkfifo) blocks the open/fopen call (!!!) until somebody opens the other side (unix-related behavior). If the pipe is opened not by the shell and the command crashes or does not exist, you will be blocked forever.
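A minimal sketch of that non-blocking pattern (the command is a placeholder, error handling omitted):
<?php
// Sketch: non-blocking read loop that also checks whether the child is still alive.
$spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"), 2 => array("pipe", "w"));
$process = proc_open('some-long-running-command', $spec, $pipes);
stream_set_blocking($pipes[1], false);
$out = '';
while (true) {
    $out .= fread($pipes[1], 8192);             // returns "" immediately when nothing is there
    $status = proc_get_status($process);
    if (!$status['running']) {
        $out .= stream_get_contents($pipes[1]); // drain whatever is left
        break;
    }
    // Nothing to read right now: wait up to one second for the pipe to become readable.
    $read = array($pipes[1]);
    $write = null;
    $except = null;
    stream_select($read, $write, $except, 1);
}
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);
?>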
toby at globaloptima dot co dot uk (2011-11-14 13:38:03)
If script A spawns script B and script B pushes a lot of data to stdout without script A consuming that data, script B is likely to hang, yet the result of proc_get_status on that process seems to continue to indicate it's running.
So either don't write to stdout in the spawned process (I write to log files instead now), or try to read stdout in a non-blocking way if your script A is spawning many instances of script B; I couldn't get this second option to work, sadly.
PHP 5.3.8 CLI on Windows 7 64-bit.
mattis at xait dot no (2011-02-03 07:41:23)
If you are, like me, tired of the buggy way proc_open handles streams and exit codes, this example demonstrates the power of pcntl, posix and some simple output redirection:
<?php
$outpipe = '/tmp/outpipe';
$inpipe = '/tmp/inpipe';
posix_mkfifo($inpipe, 0600);
posix_mkfifo($outpipe, 0600);
$pid = pcntl_fork();
//parent
if($pid) {
$in = fopen($inpipe, 'w');
fwrite($in, "A message for the inpipe reader\n");
fclose($in);
$out = fopen($outpipe, 'r');
while(!feof($out)) {
echo "From out pipe: " . fgets($out) . PHP_EOL;
}
fclose($out);
pcntl_waitpid($pid, $status);
if(pcntl_wifexited($status)) {
echo "Reliable exit code: " . pcntl_wexitstatus($status) . PHP_EOL;
}
unlink($outpipe);
unlink($inpipe);
}
//child
else {
//parent
if($pid = pcntl_fork()) {
pcntl_exec('/bin/sh', array('-c', "printf 'A message for the outpipe reader' > $outpipe 2>&1 && exit 12"));
}
//child
else {
pcntl_exec('/bin/sh', array('-c', "printf 'From in pipe: '; cat $inpipe"));
}
}
?>
Output:
From in pipe: A message for the inpipe reader
From out pipe: A message for the outpipe reader
Reliable exit code: 12
php at keith tyler dot com (2010-04-16 11:32:28)
Interestingly enough, it seems you actually have to store the return value in order for your streams to exist. You can't throw it away.
In other words, this works:
<?php
$proc=proc_open("echo foo",
array(
array("pipe","r"),
array("pipe","w"),
array("pipe","w")
),
$pipes);
print stream_get_contents($pipes[1]);
?>
prints:
foo
but this doesn't work:
<?php
proc_open("echo foo",
array(
array("pipe","r"),
array("pipe","w"),
array("pipe","w")
),
$pipes);
print stream_get_contents($pipes[1]);
?>
outputs:
Warning: stream_get_contents(): <n> is not a valid stream resource in Command line code on line 1
The only difference is that in the second case we don't save the return value of proc_open to a variable.
Matou Havlena - matous at havlena dot net (2010-04-14 13:03:40)
Here is a small process manager class I created for my application. It can limit the maximum number of simultaneously running processes.
Processmanager class:
<?php
class Processmanager {
public $executable = "C:\\www\\_PHP5_2_10\\php";
public $root = "C:\\www\\parallelprocesses\\";
public $scripts = array();
public $processesRunning = 0;
public $processes = 3;
public $running = array();
public $sleep_time = 2;
function addScript($script, $max_execution_time = 300) {
$this->scripts[] = array("script_name" => $script,
"max_execution_time" => $max_execution_time);
}
function exec() {
$i = 0;
for(;;) {
// Fill up the slots
while (($this->processesRunning<$this->processes) and ($i<count($this->scripts))) {
echo "<span style='color: orange;'>Adding script: ".$this->scripts[$i]["script_name"]."</span><br />";
ob_flush();
flush();
$this->running[] = new Process($this->executable, $this->root, $this->scripts[$i]["script_name"], $this->scripts[$i]["max_execution_time"]);
$this->processesRunning++;
$i++;
}
// Check if done
if (($this->processesRunning==0) and ($i>=count($this->scripts))) {
break;
}
// sleep, this duration depends on your script execution time, the longer execution time, the longer sleep time
sleep($this->sleep_time);
// check what is done
foreach ($this->running as $key => $val) {
if (!$val->isRunning() or $val->isOverExecuted()) {
if (!$val->isRunning()) echo "<span style='color: green;'>Done: ".$val->script."</span><br />";
else echo "<span style='color: red;'>Killed: ".$val->script."</span><br />";
proc_close($val->resource);
unset($this->running[$key]);
$this->processesRunning--;
ob_flush();
flush();
}
}
}
}
}
?>
Process class:
<?php
class Process {
public $resource;
public $pipes;
public $script;
public $max_execution_time;
public $start_time;
function __construct(&$executable, &$root, $script, $max_execution_time) {
$this->script = $script;
$this->max_execution_time = $max_execution_time;
$descriptorspec = array(
0 => array('pipe', 'r'),
1 => array('pipe', 'w'),
2 => array('pipe', 'w')
);
$this->resource = proc_open($executable." ".$root.$this->script, $descriptorspec, $this->pipes, null, $_ENV);
$this->start_time = time();
}
// is still running?
function isRunning() {
$status = proc_get_status($this->resource);
return $status["running"];
}
// execution time too long, process is going to be killed
function isOverExecuted() {
if ($this->start_time + $this->max_execution_time < time()) return true;
else return false;
}
}
?>
Example of using:
<?php
$manager = new Processmanager();
$manager->executable = "C:\\www\\_PHP5_2_10\\php";
$manager->root = "C:\\www\\parallelprocesses\\";
$manager->processes = 3;
$manager->sleep_time = 2;
$manager->addScript("script1.php", 10);
$manager->addScript("script2.php");
$manager->addScript("script3.php");
$manager->addScript("script4.php");
$manager->addScript("script5.php");
$manager->addScript("script6.php");
$manager->exec();
?>
And possible output:
Adding script: script1.php
Adding script: script2.php
Adding script: script3.php
Done: script2.php
Adding script: script4.php
Killed: script1.php
Done: script3.php
Done: script4.php
Adding script: script5.php
Adding script: script6.php
Done: script5.php
Done: script6.php
Luceo (2010-03-28 07:39:34)
It seems that stream_get_contents() on STDOUT blocks infinitely under Windows when STDERR is filled under some circumstances.
The trick is to open STDERR in append mode ("a"), then this will work, too.
<?php
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'a') // stderr
);
?>
cbn at grenet dot org (2009-12-18 07:30:07)
Display output (stdout/stderr) in real time, and get the real exit code in pure PHP (no shell workaround!). It works well on my machines (debian mostly).
#!/usr/bin/php
<?php
/*
* Execute and display the output in real time (stdout + stderr).
*
* Please note this snippet is prepended with an appropriate shebang for the
* CLI. You can re-use only the function.
*
* Usage example:
* chmod u+x proc_open.php
* ./proc_open.php "ping -c 5 google.fr"; echo RetVal=$?
*/
define('BUF_SIZ', 1024);  # max buffer size
define('FD_WRITE', 0);    # stdin
define('FD_READ', 1);     # stdout
define('FD_ERR', 2);      # stderr
/*
* Wrapper for proc_*() functions.
* The first parameter $cmd is the command line to execute.
* Return the exit code of the process.
*/
function proc_exec($cmd)
{
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
);
$ptr = proc_open($cmd, $descriptorspec, $pipes, NULL, $_ENV);
if (!is_resource($ptr))
return false;
while (($buffer = fgets($pipes[FD_READ], BUF_SIZ)) != NULL
|| ($errbuf = fgets($pipes[FD_ERR], BUF_SIZ)) != NULL) {
if (!isset($flag)) {
$pstatus = proc_get_status($ptr);
$first_exitcode = $pstatus["exitcode"];
$flag = true;
}
if (strlen($buffer))
echo $buffer;
if (strlen($errbuf))
echo "ERR: " . $errbuf;
}
foreach ($pipes as $pipe)
fclose($pipe);
/* Get the expected *exit* code to return the value */
$pstatus = proc_get_status($ptr);
if (!strlen($pstatus["exitcode"]) || $pstatus["running"]) {
/* we can trust the retval of proc_close() */
if ($pstatus["running"])
proc_terminate($ptr);
$ret = proc_close($ptr);
} else {
if ((($first_exitcode + 256) % 256) == 255
&& (($pstatus["exitcode"] + 256) % 256) != 255)
$ret = $pstatus["exitcode"];
elseif (!strlen($first_exitcode))
$ret = $pstatus["exitcode"];
elseif ((($first_exitcode + 256) % 256) != 255)
$ret = $first_exitcode;
else
$ret = 0; /* we "deduce" an EXIT_SUCCESS ;) */
proc_close($ptr);
}
return ($ret + 256) % 256;
}
/* __init__ */
if (isset($argv) && count($argv) > 1 && !empty($argv[1])) {
if (($ret = proc_exec($argv[1])) === false)
die("Error: not enough FD or out of memory.\n");
elseif ($ret == 127)
die("Command not found (returned by sh).\n");
else
exit($ret);
}
?>
chris at 2309 dot net (2009-04-29 07:14:10)
Hi,
to start processes under win and continue with the script while being
able to further manage these processes using their PID, i have written
three small functions that use proc_open together with three tools from
the MS sysinternals suite (psexec.exe, pslist.exe, pskill.exe)
(NB: the sysinternal tools work as standalone, but need some registry
additions, otherwise they pop open an EULA window on first run, see below)
string start_proc ( string $command )
-> starts an independent process using $command on the command shell
and returns its PID when successful or else FALSE
boolean proc_isalive ( string $pid )
-> returns TRUE if the process with PID provided is running,
otherwise FALSE
boolean proc_kill ( string $pid )
-> tries to kill the process with PID provided and returns
TRUE if successful
The sysinternals tools are here::
http://technet.microsoft.com/en-us/sysinternals/
The registry settings that need be added:
---START OF REG FILE---
Windows Registry Editor Version 5.00
[HKEY_USERS\.DEFAULT\Software\Sysinternals]
[HKEY_USERS\.DEFAULT\Software\Sysinternals\PsExec]
"EulaAccepted"=dword:00000001
[HKEY_USERS\.DEFAULT\Software\Sysinternals\PsList]
"EulaAccepted"=dword:00000001
[HKEY_USERS\.DEFAULT\Software\Sysinternals\PsKill]
"EulaAccepted"=dword:00000001
---EOF---
The code:
<?php
/********************************************************************/
/********************************************************************/
function start_proc ( $comm ) {
$dn=dirname(__FILE__);
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'w') // stderr
);
$fpr = proc_open('psexec.exe -d '.$comm, $descriptorspec, $pipes, $dn);
fclose($pipes[0]);
fclose($pipes[1]);
$stderr = '';
while(!feof($pipes[2])) { $stderr .= fgets($pipes[2], 128); }
fclose($pipes[2]);
proc_close ($fpr);
$pid =FALSE;
if ( preg_match ( "/process ID ([\d]{1,10})\./im", $stderr, $matches ) )
$pid = $matches[1];
else $pid=FALSE;
return $pid;
}
/********************************************************************/
/********************************************************************/
function proc_isalive ( $pid ){
$alive=FALSE;
$dn=dirname(__FILE__);
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'w') // stderr
);
$fpr = proc_open( 'pslist.exe '.$pid, $descriptorspec, $pipes, $dn );
fclose($pipes[0]);
$stdout = '';
while(!feof($pipes[1])) { $stdout .= fgets($pipes[1], 128); }
fclose($pipes[1]);
$stderr = '';
while(!feof($pipes[2])) { $stderr .= fgets($pipes[2], 128); }
fclose($pipes[2]);
proc_close ($fpr);
if ( strpos($stdout, 'not found') === FALSE ) $alive=TRUE;
return $alive;
}
/********************************************************************/
/********************************************************************/
function proc_kill ( $pid ){
$succ=FALSE;
$dn=dirname(__FILE__);
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'w') // stderr
);
$fpr = proc_open( 'pskill.exe '.$pid, $descriptorspec, $pipes, $dn );
fclose($pipes[0]);
$stdout = '';
while(!feof($pipes[1])) { $stdout .= fgets($pipes[1], 128); }
fclose($pipes[1]);
$stderr = '';
while(!feof($pipes[2])) { $stderr .= fgets($pipes[2], 128); }
fclose($pipes[2]);
proc_close ($fpr);
if ( strpos($stdout, 'killed') !== FALSE ) $succ=TRUE;
return $succ;
}
/********************************************************************/
/********************************************************************/
?>
simeonl at dbc dot co dot nz (2009-03-03 18:39:17)
Note that when you call an external script and retrieve large amounts of data from STDOUT and STDERR, you may need to retrieve from both alternately in non-blocking mode (with appropriate pauses if no data is retrieved), so that your PHP script doesn't lock up. This can happen if you are waiting for activity on one pipe while the external script is waiting for you to empty the other, e.g.:
<?php
$read_output = $read_error = false;
$buffer_len = $prev_buffer_len = 0;
$ms = 10;
$output = '';
$read_output = true;
$error = '';
$read_error = true;
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
// dual reading of STDOUT and STDERR stops one full pipe blocking the other, because the external script is waiting
while ($read_error != false or $read_output != false)
{
if ($read_output != false)
{
if(feof($pipes[1]))
{
fclose($pipes[1]);
$read_output = false;
}
else
{
$str = fgets($pipes[1], 1024);
$len = strlen($str);
if ($len)
{
$output .= $str;
$buffer_len += $len;
}
}
}
if ($read_error != false)
{
if(feof($pipes[2]))
{
fclose($pipes[2]);
$read_error = false;
}
else
{
$str = fgets($pipes[2], 1024);
$len = strlen($str);
if ($len)
{
$error .= $str;
$buffer_len += $len;
}
}
}
if ($buffer_len > $prev_buffer_len)
{
$prev_buffer_len = $buffer_len;
$ms = 10;
}
else
{
usleep($ms * 1000); // sleep for $ms milliseconds
if ($ms < 160)
{
$ms = $ms * 2;
}
}
}
return proc_close($process);
?>
snowleopard at amused dot NOSPAMPLEASE dot com dot au (2008-06-05 07:46:29)
I managed to make a set of functions to work with GPG, since my hosting provider refused to use GPG-ME.
Included below is an example of decryption using a higher descriptor to push a passphrase.
Comments and emails welcome. :)
<?php
function GPGDecrypt($InputData, $Identity, $PassPhrase, $HomeDir="~/.gnupg", $GPGPath="/usr/bin/gpg") {
if(!is_executable($GPGPath)) {
trigger_error($GPGPath . " is not executable",
E_USER_ERROR);
die();
} else {
// Set up the descriptors
$Descriptors = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w"),
3 => array("pipe", "r") // This is the pipe we can feed the password into
);
// Build the command line and start the process
$CommandLine = $GPGPath . ' --homedir ' . $HomeDir . ' --quiet --batch --local-user "' . $Identity . '" --passphrase-fd 3 --decrypt -';
$ProcessHandle = proc_open( $CommandLine, $Descriptors, $Pipes);
if(is_resource($ProcessHandle)) {
// Push passphrase to custom pipe
fwrite($Pipes[3], $PassPhrase);
fclose($Pipes[3]);
// Push input into StdIn
fwrite($Pipes[0], $InputData);
fclose($Pipes[0]);
// Read StdOut
$StdOut = '';
while(!feof($Pipes[1])) {
$StdOut .= fgets($Pipes[1], 1024);
}
fclose($Pipes[1]);
// Read StdErr
$StdErr = '';
while(!feof($Pipes[2])) {
$StdErr .= fgets($Pipes[2], 1024);
}
fclose($Pipes[2]);
// Close the process
$ReturnCode = proc_close($ProcessHandle);
} else {
trigger_error("cannot create resource", E_USER_ERROR);
die();
}
}
if (strlen($StdOut) >= 1) {
if ($ReturnCode <= 0) {
$ReturnValue = $StdOut;
} else {
$ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr . "\n\nStandard Output Follows:\n\n";
}
} else {
if ($ReturnCode <= 0) {
$ReturnValue = $StdErr;
} else {
$ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr;
}
}
return $ReturnValue;
}
?>
radone at gmail dot com (2008-05-26 05:26:51)
To complete the examples below that use proc_open to encrypt a string using GPG, here is a decrypt function:
<?php
function gpg_decrypt($string, $secret) {
$homedir = ''; // path to you gpg keyrings
$tmp_file = '/tmp/gpg_tmp.asc' ; // tmp file to write to
file_put_contents($tmp_file, $string);
$text = '';
$error = '';
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr ?? instead of a file
);
$command = 'gpg --homedir ' . $homedir . ' --batch --no-verbose --passphrase-fd 0 -d ' . $tmp_file . ' ';
$process = proc_open($command, $descriptorspec, $pipes);
if (is_resource($process)) {
fwrite($pipes[0], $secret);
fclose($pipes[0]);
while($s= fgets($pipes[1], 1024)) {
// read from the pipe
$text .= $s;
}
fclose($pipes[1]);
// optional:
while($s= fgets($pipes[2], 1024)) {
$error .= $s . "\n";
}
fclose($pipes[2]);
}
file_put_contents($tmp_file, '');
if (preg_match('/decryption failed/i', $error)) {
return false;
} else {
return $text;
}
}
?>
jonah at whalehosting dot ca (2008-05-02 22:22:15)
@joachimb: The descriptorspec describes the i/o from the perspective of the process you are opening. That is why stdin is read: you are writing, the process is reading. So you want to open descriptor 2 (stderr) in write mode so that the process can write to it and you can read it. In your case where you want all descriptors to be pipes you should always use:
<?php
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'w') // stderr
);
?>
The examples below where stderr is opened as 'r' are a mistake.
I would like to see examples of using higher descriptor numbers than 2. Specifically GPG as mentioned in the documentation.
joachimb at gmail dot com (2008-04-30 08:24:44)
I'm confused by the direction of the pipes. Most of the examples in this documentation open pipe #2 as "r", because they want to read from stderr. That sounded logical to me, and that's what I tried to do. That didn't work, though. When I changed it to "w", as in
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr
);
$process = proc_open(escapeshellarg($scriptFile), $descriptorspec, $pipes, $this->wd);
...
while (!feof($pipes[1])) {
foreach($pipes as $key =>$pipe) {
$line = fread($pipe, 128);
if($line) {
print($line);
$this->log($line);
}
}
usleep(500000); // sleep() only accepts whole seconds
}
...
?>
everything works fine.
jaroslaw at pobox dot sk (2008-03-28 02:15:54)
Some functions stopped proc_open() from working for me.
This is what I made work to communicate between two PHP scripts:
<?php
$abs_path = '/var/www/domain/filename.php';
$spec = array(array("pipe", "r"), array("pipe", "w"), array("pipe", "w"));
$process = proc_open('php '.$abs_path, $spec, $pipes, null, $_ENV);
if (is_resource($process)) {
# wait till something happens on other side
sleep(1);
# send command
fwrite($pipes[0], 'echo $test;');
fflush($pipes[0]);
# wait till something happens on other side
usleep(1000);
# read pipe for result
echo fread($pipes[1],1024).'<hr>';
# close pipes
fclose($pipes[0]);fclose($pipes[1]);fclose($pipes[2]);
$return_value = proc_close($process);
}
?>
filename.php then contains this:
<?php
$test = 'test data generated here<br>';
while(true) {
# read incoming command
if($fh = fopen('php://stdin','rb')) {
$val_in = fread($fh,1024);
fclose($fh);
}
# execute incoming command
if($val_in)
eval($val_in);
usleep(1000);
# prevent neverending cycle
if($tmp_counter++ > 100)
break;
}
?>
chris AT w3style DOT co.uk (2008-02-22 02:57:00)
It took me a long time (and three consecutive projects) to figure this out. Because popen() and proc_open() return valid processes even when the command failed, it's awkward to determine when it really has failed if you're opening a non-interactive process like "sendmail -t".
I had previously guessed that reading from STDERR immediately after starting the process would work, and it does... but when the command is successful PHP just hangs, because STDERR is empty and it's waiting for data to be written to it.
The solution is a simple stream_set_blocking($pipes[2], 0) immediately after calling proc_open().
<?php
$this->_proc = proc_open($command, $descriptorSpec, $pipes);
stream_set_blocking($pipes[2], 0);
if ($err = stream_get_contents($pipes[2]))
{
throw new Swift_Transport_TransportException(
'Process could not be started [' . $err . ']'
);
}
?>
If the process is opened successfully $pipes[2] will be empty, but if it failed the bash/sh error will be in it.
Finally I can drop all my "workaround" error checking.
I realise this solution is obvious and I'm not sure how it took me 18 months to figure it out, but hopefully this will help someone else.
NOTE: Make sure your descriptorSpec has ( 2 => array('pipe', 'w')) for this to work.
Anonymous (2007-12-27 07:40:27)
I needed to emulate a tty for a process (it wouldn't write to stdout or read from stdin), so I found this:
<?php
$descriptorspec = array(0 => array('pty'),
1 => array('pty'),
2 => array('pty'));
?>
The pipes are then bidirectional.
John Wehin (2007-12-06 22:52:35)
STDIN STDOUT example
test.php
<?php
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "r")
);
$process = proc_open('php test_gen.php', $descriptorspec, $pipes, null, null); //run test_gen.php
echo ("Start process:\n");
if (is_resource($process))
{
fwrite($pipes[0], "start\n"); // send start
echo ("\n\nStart ....".fgets($pipes[1],4096)); //get answer
fwrite($pipes[0], "get\n"); // send get
echo ("Get: ".fgets($pipes[1],4096)); //get answer
fwrite($pipes[0], "stop\n"); //send stop
echo ("\n\nStop ....".fgets($pipes[1],4096)); //get answer
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
$return_value = proc_close($process); //stop test_gen.php
echo ("Returned:".$return_value."\n");
}
?>
test_gen.php
<?php
$keys=0;
function play_stop()
{
global $keys;
$stdin_stat_arr=fstat(STDIN);
if($stdin_stat_arr['size']!=0)
{
$val_in=fread(STDIN,4096);
switch($val_in)
{
case "start\n": echo "Started\n";
return false;
break;
case "stop\n": echo "Stopped\n";
$keys=0;
return false;
break;
case "pause\n": echo "Paused\n";
return false;
break;
case "get\n": echo ($keys."\n");
return true;
break;
default: echo("Invalid parameter passed: ".$val_in."\n");
return true;
exit();
}
}else{return true;}
}
while(true)
{
while(play_stop()){usleep(1000);}
while(play_stop()){$keys++;usleep(10);}
}
?>
ashnazg at php dot net (2007-10-05 14:23:47)
It seems that if you configured --enable-sigchild when you compiled PHP (which from my reading is required for you to use Oracle stuff), then return codes from proc_close() cannot be trusted.
Using the code from proc_open()'s Example #1 on the versions of PHP4 (4.4.7) and PHP5 (5.2.4) that I have, the return code is always "-1". This is also the only return code I can produce by running other shell commands, whether they succeed or fail.
I don't see this caveat mentioned anywhere except on this old bug report -- http://bugs.php.net/bug.php?id=29123
Antti Kauppinen (2007-07-27 00:48:22)
missilesilo at gmail dot com had a great example. However, error messages didn't work because of the wrong stderr argument.
I changed the last value 'r' of
<?php
$descriptorspec = array(
0 => array('pipe', 'r'),
1 => array('pipe', 'w'),
2 => array('pipe', 'r')
);
?>
to 'w' so that error messages are actually written.
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a file to write to
);
?>
missilesilo at gmail dot com (2007-03-08 12:30:29)
If you just want to execute a command and get the output of the program, here is a simple object-oriented way to do it:
If there was an error detected from the STDERR output of the program, the open method of the Process class will throw an Exception. Otherwise, it will return the STDOUT output of the program.
<?php
class Process
{
public static function open($command)
{
$retval = '';
$error = '';
$descriptorspec = array(
0 => array('pipe', 'r'),
1 => array('pipe', 'w'),
2 => array('pipe', 'r')
);
$resource = proc_open($command, $descriptorspec, $pipes, null, $_ENV);
if (is_resource($resource))
{
$stdin = $pipes[0];
$stdout = $pipes[1];
$stderr = $pipes[2];
while (! feof($stdout))
{
$retval .= fgets($stdout);
}
while (! feof($stderr))
{
$error .= fgets($stderr);
}
fclose($stdin);
fclose($stdout);
fclose($stderr);
$exit_code = proc_close($resource);
}
if (! empty($error))
throw new Exception($error);
else
return $retval;
}
}
try
{
$output = Process::open('cat example.txt');
// do something with the output
}
catch (Exception $e)
{
echo $e->getMessage() . "\n";
// there was a problem executing the command
}
?>
bzapf at qualiject dot de (2007-01-09 07:12:29)
Whenever the result of proc_open is forgotten (e.g. when the function that called it returns), the process will be terminated immediately. This can be extremely confusing. So always keep a reference to (for example, globalize) the variable that stores the return value of proc_open...
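Alternatively, a minimal sketch of keeping the resource alive by returning it from the function instead of globalizing it (the command is a placeholder):
<?php
// Sketch: return the resource (and pipes) to the caller so the process is not
// terminated when the spawning function returns.
function spawn($cmd, &$pipes)
{
    $spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"), 2 => array("pipe", "w"));
    return proc_open($cmd, $spec, $pipes);
}

$process = spawn('sleep 10', $pipes); // stays alive as long as $process is referenced
?>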
mjutras at beenox dot com (2006-10-16 06:28:20)
The best way on Windows to open a process and then let the PHP script continue is to call your process with the start command, then close the "start" process and let your program keep running.
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr
);
$process = proc_open('start notepad.exe', $descriptorspec, $pipes);
sleep(1);
proc_close($process);
?>
The start command will be called and will open notepad; after 1 second the "start" command will be closed, but notepad will still be open and your PHP script can continue!
Jeff Warner (2006-09-22 15:37:24)
I wanted to proc_open bash and then send a command and read the output multiple times, instead of opening bash each time. There were several "tricks":
- Put a "\n" on the end of each command
- Use fflush($pipes[0]) after each fwrite($pipes[0])
- Put a sleep(1) before you read the output of the command
Once I added all of that I was able to send an arbitrary number of commands to bash, read the output, and close the pipes when I was finished.
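A minimal sketch of those three tricks combined (the commands are placeholders; switching stdout to non-blocking is an extra precaution, not one of the tricks above):
<?php
// Sketch: one long-lived bash process, several commands.
$spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"), 2 => array("pipe", "w"));
$process = proc_open('bash', $spec, $pipes);
stream_set_blocking($pipes[1], false);
foreach (array('pwd', 'ls /tmp') as $cmd) {
    fwrite($pipes[0], $cmd . "\n"); // trick 1: newline after every command
    fflush($pipes[0]);              // trick 2: flush after every write
    sleep(1);                       // trick 3: give bash time to answer
    echo fread($pipes[1], 8192);
}
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);
?>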
Docey (2006-07-24 17:12:31)
If you're writing a function that processes a resource from another function, it's a good idea to check not only whether a resource has been passed to your function, but also whether it's of the right type, like so:
<?php
function workingonit($resource){
if(is_resource($resource)){
if(get_resource_type($resource) == "resource_type"){
// resource is a resource and of the good type. continue
}else{
print("resource is of the wrong type.");
return false;
}
}else{
print("resource passed is not a resource at all.");
return false;
}
// do your stuff with the resource here and return
}
?>
This is extra true when working with files and process pipes, so always check what's being passed to your functions.
Here's a small snippet of a few resource types:
files are of type 'file' in php4 and 'stream' in php5
'process' resources are opened by proc_open.
'pipe' resources are opened by popen.
By the way, the 'process' resource type was not mentioned in the documentation; I filed a bug report for this.
php dot net_manual at reimwerker dot de (2006-06-03 04:47:11)
If you are going to allow data coming from user input to be passed to this function, then you should keep in mind the following warning that also applies to exec() and system():
http://www.php.net/manual/en/function.exec.php
http://www.php.net/manual/en/function.system.php
Warning:
If you are going to allow data coming from user input to be passed to this function, then you should be using escapeshellarg() or escapeshellcmd() to make sure that users cannot trick the system into executing arbitrary commands.
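For instance, a minimal sketch of quoting a user-supplied value before it reaches the shell (the command and parameter name are only examples):
<?php
// Sketch: never interpolate raw user input into the command line.
$userFile = $_GET['file'];                    // untrusted input
$cmd = 'wc -l ' . escapeshellarg($userFile);  // safely quoted for the shell
$process = proc_open($cmd, array(1 => array("pipe", "w")), $pipes);
echo stream_get_contents($pipes[1]);
fclose($pipes[1]);
proc_close($process);
?>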
richard at 2006 dot atterer dot net (2006-04-07 12:14:14)
[Again, please delete my previous comment, the code still contained bugs (sorry). This version now includes Frederick Leitner's fix from below, and also fixes another bug: If an empty file was piped into the process, the loop would hang indefinitely.]
The following code works for piping large amounts of data through a filtering program. I find it very weird that such a lot of code is needed for this task... On entry, $stdin contains the standard input for the program. Tested on Debian Linux with PHP 5.1.2.
<?php
$descriptorSpec = array(0 => array("pipe", "r"),
1 => array('pipe', 'w'),
2 => array('pipe', 'w'));
$process = proc_open($command, $descriptorSpec, $pipes);
$txOff = 0; $txLen = strlen($stdin);
$stdout = ''; $stdoutDone = FALSE;
$stderr = ''; $stderrDone = FALSE;
stream_set_blocking($pipes[0], 0); // Make stdin/stdout/stderr non-blocking
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
if ($txLen == 0) fclose($pipes[0]);
while (TRUE) {
$rx = array(); // The program's stdout/stderr
if (!$stdoutDone) $rx[] = $pipes[1];
if (!$stderrDone) $rx[] = $pipes[2];
$tx = array(); // The program's stdin
if ($txOff < $txLen) $tx[] = $pipes[0];
stream_select($rx, $tx, $ex = NULL, NULL, NULL); // Block til r/w possible
if (!empty($tx)) {
$txRet = fwrite($pipes[0], substr($stdin, $txOff, 8192));
if ($txRet !== FALSE) $txOff += $txRet;
if ($txOff >= $txLen) fclose($pipes[0]);
}
foreach ($rx as $r) {
if ($r == $pipes[1]) {
$stdout .= fread($pipes[1], 8192);
if (feof($pipes[1])) { fclose($pipes[1]); $stdoutDone = TRUE; }
} else if ($r == $pipes[2]) {
$stderr .= fread($pipes[2], 8192);
if (feof($pipes[2])) { fclose($pipes[2]); $stderrDone = TRUE; }
}
}
if (!is_resource($process)) break;
if ($txOff >= $txLen && $stdoutDone && $stderrDone) break;
}
$returnValue = proc_close($process);
?>
Kevin Barr (2006-03-06 12:36:51)
I found that with disabling stream blocking I was sometimes attempting to read a return line before the external application had responded. So, instead, I left blocking alone and used this simple function to add a timeout to the fgets function:
// fgetsPending( $in,$tv_sec ) - Get a pending line of data from stream $in, waiting a maximum of $tv_sec seconds
function fgetsPending(&$in,$tv_sec=10) {
if ( stream_select($read = array($in),$write=NULL,$except=NULL,$tv_sec) ) return fgets($in);
else return FALSE;
}
andrew dot budd at adsciengineering dot com (2005-12-28 21:55:45)
The pty option is actually disabled in the source for some reason via a #if 0 && condition. I'm not sure why it's disabled. I removed the 0 && and recompiled, after which the pty option works perfectly. Just a note.
Enrico (2005-11-24 08:48:16)
If you want to pass an array in $env, you MUST serialize it!
Bad Example:
$env = array('pippo' => 'Hello', 'request' =>$_REQUEST);
$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);
fwrite($pipes[0], '<?php print_r($_ENV["request"]); ?>');
The result is an empty array.
Good Example:
$env = array('pippo' => 'Hello', 'request' =>serialize($_REQUEST));
$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);
fwrite($pipes[0], '<?php print_r(unserialize($_ENV["request"])); ?>');
The result is a proper array!
Bye,
Enrico
mendoza at pvv dot ntnu dot no (2005-10-21 22:42:22)
Since I don't have access to PAM via Apache, don't have suexec on, and have no access to /etc/shadow, I coughed up this way of authenticating users based on the system user's details. It's really hairy and ugly, but it works.
<?php
function authenticate($user,$password) {
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("file","/dev/null", "w") // stderr is a file to write to
);
$process = proc_open("su ".escapeshellarg($user), $descriptorspec, $pipes);
if (is_resource($process)) {
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be discarded (stderr is sent to /dev/null)
fwrite($pipes[0],$password);
fclose($pipes[0]);
fclose($pipes[1]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
$return_value = proc_close($process);
return !$return_value;
}
}
?>
picaune at hotmail dot com (2005-10-16 16:51:41)
The above note on Windows compatibility is not entirely correct.
Windows will dutifully pass on additional handles above 2 onto the child process, starting with Windows 95 and Windows NT 3.5. It even supports this capability (starting with Windows 2000) from the command line using a special syntax (prefacing the redirection operator with the handle number).
These handles will be, when passed to the child, preopened for low-level IO (e.g. _read) by number. The child can reopen them for high-level (e.g. fgets) using the _fdopen or _wfdopen methods. The child can then read from or write to them the same way they would stdin or stdout.
However, child processes must be specially coded to use these handles, and if the end user is not intelligent enough to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program not smart enough to check, it could cause errors or hangs.
Chapman Flack (2005-10-04 12:34:55)
One can learn from the source code in ext/standard/exec.c that the right-hand side of a descriptor assignment does not have to be an array ('file', 'pipe', or 'pty') - it can also be an existing open stream.
<?php
$p = proc_open('myfilter', array( 0 => $infile, ...), $pipes);
?>
I was glad to learn that because it solves the race condition in a scenario like this: you get a file name, open the file, read a little to make sure it's OK to serve to this client, then rewind the file and pass it as input to the filter. Without this feature, you would be limited to <?php array('file', $fname) ?> or passing the name to the filter command. Those choices both involve a race (because the file will be reopened after you have checked it's OK), and the last one invites surprises if not carefully quoted, too.
Kyle Gibson (2005-08-05 00:16:49)
proc_open is hard coded to use "/bin/sh". So if you're working in a chrooted environment, you need to make sure that /bin/sh exists, for now.
mib at post dot com (2005-06-21 18:21:19)
I thought it was highly not recommended to fork from your web server?
Apart from that, one caveat is that the child process inherits anything that is preserved over fork from the parent (apart from the file descriptors which are explicitly closed).
Importantly, it inherits the signal handling setup, which at least with apache means that SIGPIPE is ignored. Child processes that expect SIGPIPE to kill them in order to get sensible pipe handling and not go into a tight write loop will have problems unless they reset SIGPIPE themselves.
Similar caveats probably apply to other signals like SIGHUP, SIGINT, etc.
Other things preserved over fork include shared memory segments, umask and rlimits.
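If the spawned child is itself a PHP CLI script, a minimal sketch of restoring the default SIGPIPE disposition at its start (assuming the pcntl extension is available):
<?php
// Sketch: restore the default SIGPIPE behavior inherited from the web server,
// so a closed reader terminates the writer as expected.
if (function_exists('pcntl_signal')) {
    pcntl_signal(SIGPIPE, SIG_DFL);
}
?>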
falstaff at arragon dot biz (2005-03-20 17:22:43)
Using this function under Windows with large amounts of data is apparently futile.
These functions return 0 but do not appear to do anything useful:
stream_set_write_buffer($pipes[0],0);
stream_set_write_buffer($pipes[1],0);
These functions return false and are also apparently useless under Windows:
stream_set_blocking($pipes[0], FALSE);
stream_set_blocking($pipes[1], FALSE);
The magic max buffer size I found with WinXP is 63488 bytes (62 KB). Anything larger than this results in a system hang.
Andre Caldas (2004-05-25 03:13:42)
About the comment by ch at westend dot com
of 28-Aug-2003 08:46
File streams are buffered; the data is not actually written if you do not flush the buffer. In your case, fclose has the side effect of flushing the buffer you are closing.
The program "hangs" because it tries to read data that was not written (since it is still buffered).
You must do something like:
<?php
fwrite($fp, $data);
fflush($fp);
fread($fp, 8192);
?>
Good luck,
Andre Caldas.
list[at]public[dot]lt (2004-05-11 15:21:04)
If you push a little bit more data through the pipe, it will hang forever. One simple solution on RH Linux was to do this:
stream_set_blocking($pipes[0], FALSE);
stream_set_blocking($pipes[1], FALSE);
This did not work on windows XP though.
ralf at dreesen[*NO*SPAM*] dot net (2004-01-09 11:49:53)
The behaviour described in the following may depend on the system php runs on. Our platform was "Intel with Debian 3.0 linux".
If you pass huge amounts of data (ca. >>10k) to the application you run and the application for example echos them directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (so called pipes) between php and the application you run. The application will put data into the stdout buffer until it is filled, then it blocks waiting for php to read from the stdout buffer. In the meantime Php filled the stdin buffer and waits for the application to read from it. That is the deadlock.
A solution to this problem may be to set the stdout stream to non blocking (stream_set_blocking) and alternately write to stdin and read from stdout.
Just imagine the following example:
<?php
/* assume that strlen($in) is about 30k
*/
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("file", "/tmp/error-output.txt", "a")
);
$process = proc_open("cat", $descriptorspec, $pipes);
if (is_resource($process)) {
fwrite($pipes[0], $in);
/* fwrite writes to stdin, 'cat' will immediately write the data from stdin
* to stdout and blocks, when the stdout buffer is full. Then it will not
* continue reading from stdin and php will block here.
*/
fclose($pipes[0]);
while (!feof($pipes[1])) {
$out .= fgets($pipes[1], 1024);
}
fclose($pipes[1]);
$return_value = proc_close($process);
}
?>
MagicalTux at FF.ST (2003-12-24 04:20:04)
Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.
Stream functions can be used on pipes like :
- pipes from popen, proc_open
- pipes from fopen('php://stdin') (or stdout)
- sockets (unix or tcp/udp)
- many other things probably but the most important is here
More information about streams (you'll find many useful functions there):
http://www.php.net/manual/en/ref.stream.php
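A minimal sketch of such a check (assuming $pipes came from proc_open as in the examples above):
<?php
// Sketch: poll the child's stdout for up to 200 ms instead of blocking on fread.
$read = array($pipes[1]);
$write = null;
$except = null;
if (stream_select($read, $write, $except, 0, 200000) > 0) {
    echo fread($pipes[1], 8192); // data was waiting
} else {
    // nothing yet - keep interacting with the user and poll again later
}
?>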
ch at westend dot com (2003-08-28 08:46:21)
I had trouble with this function as my script always hung like in a deadlock until I figured out that I had to strictly keep the following
order. Trying to close all at the end did not work!
proc_open();
fwrite(pipes[0]); fclose(pipes[0]); # stdin
fread(pipes[1]); fclose(pipes[1]); # stdout
fread(pipes[2]); fclose(pipes[2]); # stderr
proc_close();
daniela at itconnect dot net dot au (2003-04-16 02:01:10)
Just a small note in case it isn't obvious: it's possible to treat the filename as in fopen, thus you can pass through the standard input from PHP like
$descs = array (
0 => array ("file", "php://stdin", "r"),
1 => array ("pipe", "w"),
2 => array ("pipe", "w")
);
$proc = proc_open ("myprogram", $descs, $fp);
joeldegan AT yahoo.com (2002-12-28 17:54:58)
I worked with proc_open for a while before realizing how it works with applications in real time.
This example loads up the eDonkey2000 client and reads data from it and passes in various commands and returns the results.
This is the base for an ncurses gui for edonkey I am writing in PHP.
<?php
define ("DASHES", "-------------------------------------------------\n");
define ("SEP", "=================================================\n"); // separator used by the echo SEP; calls below (value is arbitrary)
function readit($pipes, $len=2, $end="> "){
$retval = '';
stream_set_blocking($pipes[1], FALSE);
while($ret = fread($pipes[1],$len)){
$retval .= $ret;
if(substr_count($ret, $end) > 0){ break; }
}
return $retval;
}//end function
function sendto($pipes, $str){
fwrite($pipes[0], $str."\n");
}//end function
function viewopts($pipes, $opt){
sleep(1);
sendto($pipes, $opt);
return readit($pipes);
}//end function
function sendopts($pipes, $opt){
sendto($pipes, $opt);
usleep(50);
return readit($pipes);
}//end function
$dspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("file", "/tmp/eo.txt", "a"),);
$process = proc_open("donkey", $dspec, $pipes);
if (is_resource($process)) {
readit($pipes);
echo DASHES;
echo viewopts($pipes, "vo");
echo DASHES; echo SEP;echo DASHES;
echo sendopts($pipes, "name test".rand(5,5000));
echo DASHES; echo SEP; echo DASHES;
echo viewopts($pipes, "vo");
echo DASHES; echo SEP; echo DASHES;
echo sendopts($pipes, "temp /tmp");
echo DASHES; echo SEP; echo DASHES;
echo viewopts($pipes, "g");
echo DASHES;
sendto($pipes, "q");
sendto($pipes, "y");
readit($pipes);
fclose($pipes[0]);
fclose($pipes[1]);
$return_value = proc_close($process);
}
?>
returns what looks like the following
-----------------------------------------------------------------
Name: test2555
AdminName: admin
AdminPass: password
AdminPort: 79
Max Download Speed: 0.00
Max Upload Speed: 0.00
Line Speed Down: 0.00
Door Port: 4662
AutoConnect: 1
Verbose: 0
SaveCorrupted: 1
AutoServerRemove: 1
MaxConnections: 45
> ----------------------------------------------------------------
filippo at zirak dot it (2002-04-19 07:50:07)
Example of emulating the press of the special key "F3":
fwrite($pipes[0], chr(27)."[13~");
(for other special keys, use the program 'od -c' on linux)
(NEEDED: a timeout for stdout pipe, otherwise a fgets on $pipes[1] can lag forever...)