
Tranzila – Daily transmission report


Hello friends,

If you use Tranzila as an online merchant, you will need to pay attention to the Daily transmission report:


This Daily transmission report is a TEST SHVA report.

When you move your terminal to production, this report will show the real name, the real terminal ID, and the real SAPAK (merchant) code from the credit card company.

English: Tranzila website

Hebrew: Tranzila website

About Tranzila:

InterSpace Ltd., the owner and creator of TRANZILA™, was established in 1996 to provide companies with reliable, professional and affordable Internet services. Today, InterSpace has become a leading force in the web hosting and e-business solutions market. The company specializes in advanced dedicated servers, data center hosting and e-commerce solutions. Our services utilize our state-of-the-art data center, our premier partnership with the global giant NTT/Verio, and partnerships with Intel, Microsoft, GeoTrust and others.

Since its roll-out in the year 2000, TRANZILA has shown steady and promising growth and has recruited leading e-merchants, service providers and integrators as clients. In addition to the growth in Internet transaction volume, TRANZILA has grown its business by integrating new technologies and by offering value-added services. The latest developments from TRANZILA include the introduction of a 3-D Secure solution in ASP mode, an advanced fraud detection system, a recurring billing and invoice issuing system, and more.


TRANZILA offers its clients a turnkey solution for their entire online infrastructure needs. Our experts can be outsourced for application characterization, application management and development.


Good luck.

WordPress Optimization tutorial guide

This WordPress optimization tutorial is the most comprehensive guide to WordPress optimization, created to help you troubleshoot performance-related issues and to provide guidelines on how to speed up your WordPress site.

If you have ever experienced a slow WordPress admin panel, the “MySQL server has gone away” message, or pages taking forever to load, or if you want to prepare your site for a major increase in traffic (for example, the Digg front page), this is the guide for you.

1. Check the Site stats

Most commonly, the problem with slow-loading sites is simply the sheer size of the page. A typical webpage today is loaded with images, Flash, videos and JavaScript, all of which take up a significant portion of bandwidth.

If you want to start dealing with this issue seriously, you need the Firefox browser, the Firebug extension and the YSlow plugin.

The YSlow module will give you a performance score from 0 to 100. Getting your site to a score of 80+ should be your aim.

Try to keep your page size under 100KB, and under 50KB if possible. If you have a lot of multimedia content, then by all means learn to use YSlow.

Learn about ways to improve the page loading speed.

Another useful Firefox extension worth checking out is Google’s Page Speed.

2. Check your (Vista) System

On rare occasions, when your site and other sites are all loading slowly, it can be your Vista system that is causing the slowdown.

If you are running Vista check this article for a diagnosis and a possible solution.

3. Check the Plugins

Plugins are usually the prime suspect for slowdowns. With so many WordPress plugins around, chances are you have installed a plugin which does not use resources in an optimal way.

For example, plugins that have caused slowdowns in the past include Popularity Contest, aLinks and @Feed.

To check plugins, deactivate all of them and check the critical areas of the site again. If everything runs OK, re-enable the plugins one by one until you find the problematic plugin.

After finding the cause you can either write a message to the plugin author and hope they fix it or search for an alternative.

4. Check your Theme

If it’s not the plugins and you are still troubleshooting a site slowdown, you should check the site with a different theme.

Themes can include code with plugin capabilities inside the theme’s functions.php file, so everything that applies to plugins can apply to the theme as well.

Also, themes may use excessive JavaScript or image files, causing slow loading of the page because of the huge amount of data to transfer and/or the number of HTTP requests used.

WordPress comes with a default theme installed, and it is best used to test the site if your theme is the suspect for poor performance.

If you discover your theme is causing the slowdowns, you can use the excellent Firebug tool for Firefox browser to debug the problem. Learn more about Firebug, your new best friend.

You can also use this site to get general information about your site very quickly.

5. Optimize Database Tables

Database tables should be periodically optimized (and repaired if necessary) for optimum performance.

I recommend using WP-DBManager plugin which provides this functionality as well as database backup, all crucial for any blog installation.

WP-DBManager allows you to schedule and forget, and it will take care of all the work automatically.

Another alternative is manually optimizing and repairing your tables through a tool like phpMyAdmin.

6. Turn off Post Revisions

WordPress 2.6 introduced a post revision tracking mechanism: every time you “Save” a post, a revision is written to the database. If you do not need this feature, you can easily turn it off by adding one line to your wp-config.php file, found in the installation directory of your WordPress site:

define('WP_POST_REVISIONS', false);

If you have run a blog with revisions turned on for a while, chances are you will have a lot of revision posts in your database. If you wish to remove them for good, simply run this query (for example, using the aforementioned WP-DBManager plugin):

DELETE FROM wp_posts WHERE post_type = 'revision';

This will remove all “revision” posts from your database, making it smaller in the process.

NOTE: Do this with care. If you are not sure what you are doing, make sure to at least create a backup of the database first or even better, ask a professional to help you.

7. Implement Caching

Caching is a method of retrieving data from ready storage (a cache) instead of using resources to generate it every time the same information is needed. Using a cache is a much faster way to retrieve information and is a generally recommended practice for most modern applications.

The easiest way to implement caching (and usually the only way if your blog is on shared hosting) is to use a caching plugin.

The most commonly used is WP Super Cache.

A new kid on the block, W3 Total Cache, is a more powerful alternative, maturing every day.

8. MySQL Optimization

MySQL can save the results of a query in its own cache. To enable it, edit the MySQL configuration file (usually /etc/my.cnf) and add these lines:
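The configuration lines were lost from this copy; a typical query-cache setup matching the 16 MB figure mentioned in the next paragraph looks like this (MySQL 5.x-era settings; the query cache was removed in MySQL 8):

```ini
[mysqld]
# enable the query cache and give it 16 MB of memory
query_cache_type = 1
query_cache_size = 16M
```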

This will create a 16 MB query cache after you restart the MySQL server (the amount depends on how much RAM is available; I use around 250MB on a 4GB machine).

To check if it is properly running, run this query:
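The query itself was elided here; the usual check is:

```sql
SHOW VARIABLES LIKE 'query_cache%';
```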

Example result:
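The example output was lost from this copy; with a 16 MB cache enabled, an (abridged, illustrative) result would look something like:

```
+------------------+----------+
| Variable_name    | Value    |
+------------------+----------+
| query_cache_size | 16777216 |
| query_cache_type | ON       |
+------------------+----------+
```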

Further MySQL Optimization:

There are a lot of options you can play with, so here is my MySQL config file instead, tuned for a 4GB, quad-core dedicated machine. It will most probably not work for your machine out of the box; use it just as a general guideline.

Tip #2:
Here is further reading regarding MySQL optimization, and another article here.

The extremely useful mysqlreport tool will help you tweak MySQL like nothing else. MySQL Tuner is one of the best and quickest tools out there to tell you how you can fix up your database. MySQL Tuning Primer and MySQL Activity Report are another two scripts to try out.

Maatkit is an extremely useful toolkit for managing MySQL.

The MySQL slow query log is valuable for getting information about the most problematic queries. To activate it, you can edit your my.cnf:
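The lines were elided here; a typical MySQL 5.0-era setup looks like this (newer versions use slow_query_log and slow_query_log_file instead; the log path is illustrative):

```ini
[mysqld]
# log queries slower than 2 seconds, plus queries not using indexes
log_slow_queries = /var/log/mysql/slow.log
long_query_time = 2
log-queries-not-using-indexes
```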

This will create a log of slow queries and those not using indexes. Now you need to identify the slow ones, for which you can use external slow-log filtering and parsing tools. Using 'EXPLAIN' is an effective way to understand and optimize complex queries.

You can also install mytop, a ‘top’ command clone that works with MySQL.

9. PHP Opcode Cache

PHP is an interpreted language, meaning that every time PHP code is started it is compiled into so-called opcodes, which are then run by the system. This compilation process can be cached by installing an opcode cache such as eAccelerator. There are other caching solutions out there as well.

To install eAccelerator, unpack the archive and go to the eAccelerator folder. Then type:
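The commands were lost from this copy; the typical PHP-extension build sequence (sketch, paths may differ on your system) is:

```shell
phpize
./configure --enable-eaccelerator=shared --with-php-config=/usr/bin/php-config
make
make install
```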

This will install eAccelerator.

Next, create a temp folder for storage:
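The commands were elided; a common choice of cache directory (it must match the cache_dir set in php.ini below) is:

```shell
mkdir /var/cache/eaccelerator
chmod 0777 /var/cache/eaccelerator
```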

Finally, to enable it, add these lines to the end of your php.ini file (usually /etc/php.ini or /usr/lib/php.ini):
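The php.ini lines were lost from this copy; a typical eAccelerator configuration (shm_size in MB; values illustrative) is:

```ini
extension = "eaccelerator.so"
eaccelerator.shm_size = "16"
eaccelerator.cache_dir = "/var/cache/eaccelerator"
eaccelerator.enable = "1"
eaccelerator.optimizer = "1"
```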

The changes will be noticeable at once, as PHP does not need to be ‘restarted’.

Note #1: WP Super Cache and eAccelerator work fine together showing further increase in performance.

Note #2: If you would like even more performance, check out the WP Super Cache and eAccelerator plugin.

Note #3: Unfortunately, eAccelerator won’t work if PHP is run as CGI. You can try using FastCGI, which will work with suExec and eAccelerator.

Note #4: W3 Total Cache, mentioned earlier, already utilizes both memcached and APC, making it amazingly fast.

10. Web Server optimization

Apache optimization is something books have been written about, so I will first forward you to this article here. In-depth Apache compilation tips are here, performance tuning here, VPS tips here and keep-alive tips here.

You can easily test changes in your configuration by running a test from your command prompt
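The command was elided from this copy; the usual tool is ab (Apache Bench), for example (URL and request counts are illustrative):

```shell
ab -n 1000 -c 10 http://www.example.com/
```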

and comparing the results. I get around 200 req/s on a VPS server.

For more flexible testing you can use Autobench, which works in conjunction with httperf, another benchmarking tool.

Using a fast web server like nginx to serve static content (i.e. images) while passing dynamic requests to Apache is another popular technique you can use to improve performance.
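A minimal sketch of this setup, assuming Apache listens on port 8080 and static files live under /var/www/static (both assumptions), might look like:

```nginx
server {
    listen 80;
    server_name example.com;

    # serve static files directly from disk, with long cache lifetimes
    location ~* \.(jpe?g|png|gif|css|js|ico)$ {
        root /var/www/static;
        expires 30d;
    }

    # everything else goes to the Apache backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```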

Note #1: More cool resources. Optimizing Page load time and a great series on website performance.

Note #2: You can find even more tips&tricks on Elliot Back’s site

11. “MySQL server has gone away” workaround

This WordPress database error appears on certain configurations and manifests as a very slow or unresponsive site, usually on your admin pages.

Workaround for this MySQL problem has been best addressed in this article.

This problem evidently exists, but the suggested fix is valid only until you upgrade your WordPress. Hopefully it will be further researched and added into the WordPress core in the future.

Note #1: Sometimes increasing MySQL wait_timeout to 1000 will help with this issue.
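For reference, that setting goes in my.cnf (value in seconds, as mentioned above):

```ini
[mysqld]
wait_timeout = 1000
```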

12. Fixing posting not possible problem

If you experience the WordPress admin panel crawling to a halt, with an inability to post or update certain posts, you are probably hitting the mod_security wall.

ModSecurity is an Apache module for increasing web site security by preventing system intrusions. However, sometimes it may decide that your perfectly normal WordPress MySQL query is trying to do something suspicious and blacklist it, which manifests in a very slow or unresponsive site.

To test if this is the case, check your Apache error log, for example:
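The example command was elided here; given the /usr/local/apache path used later in this section, it would be something like:

```shell
tail -f /usr/local/apache/logs/error_log
```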

and look for something like this:
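The example log line is missing from this copy. A ModSecurity denial in the Apache error log generally looks something like this (illustrative; the client address, URI and elided parts are made up, only the rule id 300013 comes from the text):

```
[error] [client 203.0.113.5] ModSecurity: Access denied with code 406 (phase 2).
Pattern match ... [id "300013"] [uri "/wp-admin/post.php"] ...
```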

It tells you that access to this page was denied because of security rule id 300013. Fixing this involves white-listing this rule for the page in question.

To do that, edit the Apache config file (for example /usr/local/apache/conf/modsec2/exclude.conf) and add these lines:
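The lines themselves are elided in this copy; a typical white-listing stanza uses ModSecurity's SecRuleRemoveById directive scoped to the affected page (the URI below is illustrative):

```apache
<LocationMatch "/wp-admin/post.php">
    SecRuleRemoveById 300013
</LocationMatch>
```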

This will white list the page for the given security rule and your site will continue to work normally.

13. RSS Pings and Pingbacks

Reasons for slow WordPress posting may include RSS ping and pingback timeouts.

By default, WordPress will try to ping the servers listed in your ping list (found in the Settings → Writing panel), and one of them may time out, slowing the entire process.

The second reason is post pingbacks, a mechanism by which WordPress notifies the sites you linked to in your article. You can disable pingbacks in Settings → Discussion by un-checking the option “Attempt to notify any blogs linked to from the article (slows down posting)”.

Try clearing the ping list and disabling pingbacks to see if that helps speed up your posting time.

Following are general rules for optimizing page loading time.

14. Use subdomains to share the load

Most browsers are set to load 2-4 files from a domain in parallel. If you move some files to a different domain (a subdomain will work), the browser will start downloading 2-4 more files in parallel.

It is a good idea to move your theme image files to a subdomain you create. I have created demo.prelovac.com/images and moved my theme images there. I then changed the theme’s style.css to reflect the full URL to the new image files. Job done!

15. Minimize the number of HTTP requests

You can lower the number of HTTP requests by using fewer images (or placing all images in one large image and positioning them with CSS), fewer JavaScript files and fewer CSS files (which usually means fewer plugins).

A good effort has been made by the PHP Speedy plugin, which merges all your JavaScript and CSS files into one big file, really helping to lower the number of HTTP requests. The biggest drawback of PHP Speedy is that it’s not 100% compatible with all plugins.

Also use the CSS Sprite generator to move all your images into one image and then use the CSS background-position property to display them. This will cut your number of HTTP requests significantly.
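As an illustration of the technique (the class names, file name and offsets below are made up), a sprite is displayed by shifting one combined background image:

```css
/* one combined image holds all the icons */
.icon {
    display: inline-block;
    width: 16px;
    height: 16px;
    background-image: url(/images/sprite.png);
}
/* shift the background so the right part of the sprite shows */
.icon-home { background-position: 0 0; }
.icon-mail { background-position: -16px 0; }
```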

16. Compress the content using apache .htaccess

If you have your own server, you can choose to gzip all content sent to browsers. This will lower loading times significantly, as most HTML pages compress very well.

Add this code to your .htaccess
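The code block was lost from this copy; a typical mod_deflate snippet (the MIME-type list is illustrative) is:

```apache
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript
</IfModule>
```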

17. Create expires headers

Expires headers tell the browser how long it should keep content in its cache. Most of the images on your site never change, so it is a good idea to keep them cached locally.

Add this to your .htaccess (make sure mod_expires is loaded in your apache if you have problems)
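The snippet was elided here; a typical mod_expires block (the types and lifetimes are illustrative) looks like:

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/gif  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
</IfModule>
```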

Here is an alternative setting:
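The alternative was also elided; one common variant sets Cache-Control headers via mod_headers instead (2592000 seconds = 30 days; the file-extension list is illustrative):

```apache
<IfModule mod_headers.c>
    <FilesMatch "\.(gif|jpe?g|png|css|js)$">
        Header set Cache-Control "max-age=2592000, public"
    </FilesMatch>
</IfModule>
```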

Use a cacheability engine to check your cache configuration.

18. Cache Gravatars

Many blogs use Gravatars, the little images next to your comments. However, Gravatars have two big flaws with regard to site optimization:

  • Every Gravatar image is a new HTTP request, even if the same image is loaded (a page with 100 comments would make 100 additional HTTP requests)
  • Gravatar images do not contain Expires headers

What we can do is create a local Gravatar cache, where images are cached and served from our own site. Ideally, you would place the Gravatar cache on a separate subdomain (see rule 14 above).

I use a plugin from Zenpax.com which allows all gravatars to be cached locally.

19. Optimize the images with smush.it

It is often overlooked that your images can be optimized (made smaller), which can significantly reduce loading times.

Wouldn’t it be perfect if you could open a site, press a button in your browser, and get all the images on the site optimized and made available in a single zip file? That is possible thanks to Smush.it and its Firefox plugin. It is amazing how effective this is!

20. CSS on top, JavaScript on bottom

It is a golden practice to put CSS files at the top of the page so they are loaded first. JavaScript files should be placed at the bottom of the page (when possible). I have created a simple plugin which moves properly registered JavaScript files to the bottom of your pages. The plugin is called Footer JavaScript.

21. Use CDN

A CDN is a network of servers, usually located at various sites around the world, which cache the static content of a site, such as image, CSS and JavaScript files. The CDN provider copies your site’s static content to its servers, so when someone lands on your site, the static content is delivered from the server closest to them. For a visual look at how this works, check out this handy graphic from GTmetrix.



Modern web servers and websites have grown to depend on many different factors.

This article covered various approaches to optimization, from system-level Apache, PHP and MySQL changes to settings within WordPress itself.

I hope following this guide helps you create a fast and responsive WordPress-based site.


Good luck

P.S. This is a very good guide. I edited it, but most of it comes from the original website.

Original guide: prelovac

Increase the Upload Size for MySQL Database on cPanel with phpMyAdmin using WHM


A cPanel/WHM server imposes a limit on the size of a MySQL database that can be imported through phpMyAdmin. The default size is 50MB.

The best way to work around this limitation is to make some tweaks in the WHM interface; sometimes editing a php.ini file alone doesn’t make a difference.

– Log into your WHM interface and type Tweak in the search bar.


The Tweak Settings page appears; in the find field on the right, type: upload size



Change the cPanel PHP max upload size to what you need and save.

Go back to Tweak Settings and in the find bar type: post


Change the cPanel PHP max POST size to what you need

That’s it. Now you can import a larger database directly into phpMyAdmin. Go back and restore the default settings afterwards if required.


cPanel’s phpMyAdmin uses the php.ini file at /usr/local/cpanel/3rdparty/etc/phpmyadmin/php.ini.

To increase the upload limit, change the values of upload_max_filesize and post_max_size in this php.ini file. Typically, you may set post_max_size to twice the value of upload_max_filesize. For example, to import SQL files up to 250MB in size, set upload_max_filesize to 250M and post_max_size to 500M.
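In php.ini shorthand notation those two settings would read (matching the 250MB example above):

```ini
upload_max_filesize = 250M
post_max_size = 500M
```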

You may also want to change values of max_execution_time and memory_limit.


Good luck.

Scary Steam for Linux bug erases all the personal files on your PC


If you’re a Steam fan running Linux, the last thing you’ll want to do in the next few days is mess with your Steam files. Users on Valve’s GitHub Steam for Linux page are complaining about a nasty bug that has the potential to wipe out every single personal file on your PC. Even worse, users say the bug will even wipe out documents on USB connected drives. So much for local backups.

The impact on you at home: The obvious implication, if you’re running Steam on Linux, is to be wary of the program right now. As a precaution, don’t connect any external hard drives while you’re running Steam. Users complaining of this bug appear to have moved their .steam or ~/.local/share/steam directories, or invoked Steam’s Bash script with the --reset option enabled.

UPDATE: Valve gave us the following statement: “So far we have had a handful of users report this issue, after they manually moved their Steam install. We have not been able to reproduce the reported issue, but we are adding some additional checks to ensure this is not possible while we continue to investigate. If anyone else has experienced this or has more information, they should email [email protected].”


Steam’s bug appears to be caused by a line in the Steam.sh Bash script: rm -rf "$STEAMROOT/"*. This is a basic Bash command that tells the computer to remove everything inside the STEAMROOT directory, including all its sub-directories.

That’s all well and good, but the issue is that if the STEAMROOT variable ends up empty, the computer interprets the command as rm -rf "/"*, as first reported by Bit-Tech. If you’re not familiar with Bash, that command tells the system to delete everything on your hard drive, starting from the root directory.
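For illustration (this is not Valve’s actual fix), Bash’s ${VAR:?} expansion guards against exactly this failure mode: if the variable is unset or empty, the shell aborts instead of letting the path collapse to "/"*:

```shell
#!/bin/bash
# Simulate the dangerous condition: STEAMROOT ends up empty.
STEAMROOT=""

# ${STEAMROOT:?...} aborts the (sub)shell with an error when the
# variable is unset or empty, so rm never sees "/"* as its argument.
( rm -rf "${STEAMROOT:?is empty, refusing to delete}/"* ) 2>/dev/null \
    || echo "aborted safely"
```

Running this prints "aborted safely" instead of wiping anything; with STEAMROOT set to a real directory, the deletion proceeds normally.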

The saving grace for Linux users is that you can only erase files you have write permissions over. That means the system itself can’t be erased, but pretty much all of a user’s files—including photos and personal documents—would be at risk.

Ironically, the instruction at issue is preceded by a comment from the developer: # Scary!



P.S. Wow… that’s quite a failure from the Steam Linux dev team.

Original post: pcworld

Good luck, and be careful with Steam on Linux systems.

Looking for your Facebook Profile ID / Group ID / Fanpage ID …

Type in your Facebook URL to find your Facebook Profile ID.


Link: http://lookup-id.com/

Lookup-ID.com helps you find the Facebook ID for your Profile or a Group. A Facebook ID is a long number, e.g. 10453213456789123.
The Facebook ID is needed for certain Facebook social plugins, like the Like Box, the Like Button or applications.

Good luck

Mod Security is ON in the server and why is it important

Mod_Security is an important web application firewall that gets installed as an Apache module. It provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis.

It is used to block commonly known exploits for CMS’s by use of regular expressions and rule sets.

Mod_Security can potentially block common code injection attacks which strengthens the security of the server.

When coding a dynamic website, developers sometimes forget to write code that helps prevent hacks, such as validating input.

Unfortunately, Mod_Security rules sometimes block valid transactions as well; below you can find some steps to white-list, configure or delete certain rules.

What Can ModSecurity Do?

ModSecurity is a toolkit for real-time web application monitoring, logging, and access control. I like to think about it as an enabler: there are no hard rules telling you what to do; instead, it is up to you to choose your own path through the available features. That’s why the title of this section asks what ModSecurity can do, not what it does.

The freedom to choose what to do is an essential part of ModSecurity’s identity and goes very well with its open source nature. With full access to the source code, your freedom to choose extends to the ability to customize and extend the tool itself to make it fit your needs. It’s not a matter of ideology, but of practicality. I simply don’t want my tools to restrict what I can do.

Back on the topic of what ModSecurity can do, the following is a list of the most important usage scenarios:

      • Real-time application security monitoring and access control

At its core, ModSecurity gives you access to the HTTP traffic stream, in real-time, along with the ability to inspect it. This is enough for real-time security monitoring. There’s an added dimension of what’s possible through ModSecurity’s persistent storage mechanism, which enables you to track system elements over time and perform event correlation. You are able to reliably block, if you so wish, because ModSecurity uses full request and response buffering.

      • Virtual patching

Virtual patching is a concept of vulnerability mitigation in a separate layer, where you get to fix problems in applications without having to touch the applications themselves. Virtual patching is applicable to applications that use any communication protocol, but it is particularly useful with HTTP, because the traffic can generally be well understood by an intermediary device. ModSecurity excels at virtual patching because of its reliable blocking capabilities and the flexible rule language that can be adapted to any need. It is, by far, the activity that requires the least investment, is the easiest activity to perform, and the one that most organizations can benefit from straight away.

      • Full HTTP traffic logging

Web servers traditionally do very little when it comes to logging for security purposes. They log very little by default, and even with a lot of tweaking you are not able to get everything that you need. I have yet to encounter a web server that is able to log full transaction data. ModSecurity gives you that ability to log anything you need, including raw transaction data, which is essential for forensics. In addition, you get to choose which transactions are logged, which parts of a transaction are logged, and which parts are sanitized.

      • Continuous passive security assessment

Security assessment is largely seen as an active scheduled event, in which an independent team is sourced to try to perform a simulated attack. Continuous passive security assessment is a variation of real-time monitoring, where, instead of focusing on the behavior of the external parties, you focus on the behavior of the system itself. It’s an early warning system of sorts that can detect traces of many abnormalities and security weaknesses before they are exploited.


      • Web application hardening

One of my favorite uses for ModSecurity is attack surface reduction, in which you selectively narrow down the HTTP features you are willing to accept (e.g., request methods, request headers, content types, etc.). ModSecurity can assist you in enforcing many similar restrictions, either directly, or through collaboration with other Apache modules. They all fall under web application hardening. For example, it is possible to fix many session management issues, as well as cross-site request forgery vulnerabilities.

    • Something small, yet very important to you

Real life often throws unusual demands to us, and that is when the flexibility of ModSecurity comes in handy where you need it the most. It may be a security need, but it may also be something completely different. For example, some people use ModSecurity as an XML web service router, combining its ability to parse XML and apply XPath expressions with its ability to proxy requests. Who knew?

Guiding Principles

There are four guiding principles on which ModSecurity is based, as follows:

      • Flexibility

I think that it’s fair to say that I built ModSecurity for myself: a security expert who needs to intercept, analyze, and store HTTP traffic. I didn’t see much value in hardcoded functionality, because real life is so complex that everyone needs to do things just slightly differently. ModSecurity achieves flexibility by giving you a powerful rule language, which allows you to do exactly what you need to, in combination with the ability to apply rules only where you need to.

      • Passiveness

ModSecurity will take great care to never interact with a transaction unless you tell it to. That is simply because I don’t trust tools, even the one I built, to make decisions for me. That’s why ModSecurity will give you plenty of information, but ultimately leave the decisions to you.

      • Predictability

There’s no such thing as a perfect tool, but a predictable one is the next best thing. Armed with all the facts, you can understand ModSecurity’s weak points and work around them.

    • Quality over quantity

Over the course of six years spent working on ModSecurity, we came up with many ideas for what ModSecurity could do. We didn’t act on most of them. We kept them for later. Why? Because we understood that we have limited resources available at our disposal and that our minds (ideas) are far faster than our implementation abilities. We chose to limit the available functionality, but do really well at what we decided to keep in.

There are bits in ModSecurity that fall outside the scope of these four principles. For example, ModSecurity can change the way Apache identifies itself to the outside world, confine the Apache process within a jail, and even implement an elaborate scheme to deal with a once-infamous universal XSS vulnerability in Adobe Reader. Although it was I who added those features, I now think that they detract from the main purpose of ModSecurity, which is a reliable and predictable tool that allows for HTTP traffic inspection.


Deployment Options

ModSecurity supports two deployment options: embedded and reverse proxy deployment. There is no one correct way to use them; choose an option based on what best suits your circumstances. There are advantages and disadvantages to both options:

      • Embedded

Because ModSecurity is an Apache module, you can add it to any compatible version of Apache. At the moment that means a reasonably recent Apache version from the 2.0.x branch, although a newer 2.2.x version is recommended. The embedded option is a great choice for those who already have their architecture laid out and don’t want to change it. Embedded deployment is also the only option if you need to protect hundreds of web servers. In such situations, it is impractical to build a separate proxy-based security layer. Embedded ModSecurity not only does not introduce new points of failure, but it scales seamlessly as the underlying web infrastructure scales. The main challenge with embedded deployment is that server resources are shared between the web server and ModSecurity.

    • Reverse proxy

Reverse proxies are effectively HTTP routers, designed to stand between web servers and their clients. When you install a dedicated Apache reverse proxy and add ModSecurity to it, you get a “proper” network web application firewall, which you can use to protect any number of web servers on the same network. Many security practitioners prefer having a separate security layer. With it you get complete isolation from the systems you are protecting. On the performance front, a standalone ModSecurity will have resources dedicated to it, which means that you will be able to do more (i.e., have more complex rules). The main disadvantage of this approach is the new point of failure, which will need to be addressed with a high-availability setup of two or more reverse proxies.

Is Anything Missing?

ModSecurity is a very good tool, but there are a number of features, big and small, that could be added. The small features are those that would make your life with ModSecurity easier, perhaps automating some of the boring work (e.g., persistent blocking, which you now have to do manually). But there are really only two features that I would call missing:

      • Learning

Defending web applications is difficult, because there are so many of them, and they are all different. (I often say that every web application effectively creates its own communication protocol.) It would be very handy to have ModSecurity observe application traffic and create a model that could later be used to generate policy or assist with false positives. While I was at Breach Security, I started a project called ModProfiler [http://www.modsecurity.org/projects/modprofiler/] as a step toward learning, but that project is still as I left it, as version 0.2.

    • Passive mode of deployment

ModSecurity can be embedded only in Apache 2.x, but when you deploy it as a reverse proxy, it can be used to protect any web server. Reverse proxies are not everyone’s cup of tea, however, and sometimes it would be very handy to deploy ModSecurity passively, without having to change anything on the network.

How To Set Up mod_security with Apache on Debian/Ubuntu


ModSecurity is a free web application firewall (WAF) that works with Apache, Nginx and IIS. It supports a flexible rule engine to perform simple and complex operations and comes with a Core Rule Set (CRS) which has rules for SQL injection, cross-site scripting, Trojans, bad user agents, session hijacking and a lot of other exploits. For Apache, it is an additional module, which makes it easy to install and configure.

In order to complete this tutorial, you will need LAMP installed on your server.

Installing mod_security

ModSecurity is available in the Debian/Ubuntu repositories:
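The command was elided here; on Debian/Ubuntu it is typically (the exact package name varies by release; older releases use libapache2-modsecurity):

```shell
apt-get install libapache2-mod-security2
```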

Verify that the mod_security module was loaded:
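The command was lost from this copy; listing loaded Apache modules does the job:

```shell
apachectl -M | grep security
```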

You should see a module named security2_module (shared) which indicates that the module was loaded.

Modsecurity’s installation includes a recommended configuration file which has to be renamed:
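Assuming the default Debian/Ubuntu layout, the rename would look like this:

```shell
sudo mv /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf
```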

Reload Apache
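For example:

```shell
sudo service apache2 reload
```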

You’ll find a new log file for mod_security in the Apache log directory:

Configuring mod_security

Out of the box, modsecurity doesn’t do anything as it needs rules to work. The default configuration file is set to DetectionOnly which logs requests according to rule matches and doesn’t block anything. This can be changed by editing the modsecurity.conf file:

Find this line

and change it to:
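In the stock modsecurity.conf the directive and its changed form look like this:

```apacheconf
# Before:
SecRuleEngine DetectionOnly
# After:
SecRuleEngine On
```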

If you’re trying this out on a production server, change this directive only after testing all your rules.

Another directive to modify is SecResponseBodyAccess. This configures whether response bodies are buffered (i.e. read by modsecurity). This is only necessary if data leakage detection and protection is required. Otherwise, leaving it On will use up droplet resources and also increase the log file size.

Find this

and change it to:
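As with SecRuleEngine, the change in modsecurity.conf looks like this:

```apacheconf
# Before:
SecResponseBodyAccess On
# After:
SecResponseBodyAccess Off
```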

Now we’ll limit the maximum data that can be posted to your web application. Two directives configure these:
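The stock values shown here correspond to the sizes quoted below (12.5 MB and 128 KB):

```apacheconf
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 131072
```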

The SecRequestBodyLimit directive specifies the maximum POST data size. If anything larger is sent by a client the server will respond with a 413 Request Entity Too Large error. If your web application doesn’t have any file uploads this value can be greatly reduced.

The value mentioned in the configuration file is 13107200 bytes, which is 12.5 MB.

Similar to this is the SecRequestBodyNoFilesLimit directive. The only difference is that this directive limits the size of POST data minus file uploads; this value should be “as low as practical.”

The value in the configuration file is 131072 bytes, which is 128 KB.

Along the lines of these directives is another one which affects server performance: SecRequestBodyInMemoryLimit. This directive is pretty much self-explanatory; it specifies how much of the request body (POSTed data) should be kept in memory (RAM); anything more will be placed on the hard disk (just like swapping). Since droplets use SSDs, this is not much of an issue; however, this can be set to a higher value if you have RAM to spare.

The value specified in the configuration file is SecRequestBodyInMemoryLimit 131072, i.e. 128 KB.

Testing SQL Injection

Before going ahead with configuring rules, we will create a PHP script which is vulnerable to SQL injection and try it out. Please note that this is just a basic PHP login script with no session handling. Be sure to change the MySQL password in the script below so that it will connect to the database:


This script will display a login form. Entering the right credentials will display a message “A Secret for you.”

We need credentials in the database. Create a MySQL database and a table, then insert usernames and passwords.

This will take you to the mysql> prompt
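The statements typed at that prompt might look like the following sketch; the database name, table, and credentials are placeholders that must match whatever the login script uses:

```sql
CREATE DATABASE sample;
USE sample;
CREATE TABLE users (username VARCHAR(50), password VARCHAR(50));
INSERT INTO users VALUES ('testuser', 'password123');
```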

Open your browser, navigate to http://yourwebsite.com/login.php and enter the right pair of credentials.

You’ll see a message that indicates successful login. Now come back and enter a wrong pair of credentials; you’ll see the message Invalid username or password.

We can confirm that the script works correctly. The next job is to try our hand at SQL injection to bypass the login page. Enter the following in the username field:
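A typical payload of this kind (assuming, as such scripts usually do, that the username is concatenated directly into the query) is:

```
' or true-- 
```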

Note that there should be a space after --; this injection won’t work without that space. Leave the password field empty and hit the login button.

Voila! The script shows the message meant for authenticated users.

Setting Up Rules

To make your life easier, there are a lot of rules which are already installed along with mod_security. These are called CRS (Core Rule Set) and are located in

The documentation is available at

To load these rules, we need to tell Apache to look into these directories. Edit the mod-security.conf file.

Add the following directives inside <IfModule security2_module> </IfModule>:
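Assuming the Debian CRS package layout under /usr/share/modsecurity-crs, the directives would look roughly like:

```apacheconf
Include "/usr/share/modsecurity-crs/*.conf"
Include "/usr/share/modsecurity-crs/activated_rules/*.conf"
```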

The activated_rules directory is similar to Apache’s mods-enabled directory. The rules are available in directories:

Symlinks must be created inside the activated_rules directory to activate these. Let us activate the SQL injection rules.
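Assuming the same CRS layout (the exact rule file name may differ between CRS versions):

```shell
cd /usr/share/modsecurity-crs/activated_rules
sudo ln -s ../base_rules/modsecurity_crs_41_sql_injection_attacks.conf .
```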

Apache has to be reloaded for the rules to take effect.

Now open the login page we created earlier and try using the SQL injection query in the username field. If you changed the SecRuleEngine directive to On, you’ll see a 403 Forbidden error. If it was left at the DetectionOnly option, the injection will succeed, but the attempt will be logged in the modsec_audit.log file.

Writing Your Own mod_security Rules

In this section, we’ll create a rule chain which blocks the request if certain “spammy” words are entered in an HTML form. First, we’ll create a PHP script which gets the input from a text box and displays it back to the user.


Custom rules can be added to any of the configuration files or placed in modsecurity directories. We’ll place our rules in a separate new file.

Add the following to this file:
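A sketch of such a chain, matching the description further down (the rule id is arbitrary):

```apacheconf
SecRule REQUEST_FILENAME "form.php" "id:'400001',chain,deny,log,msg:'Spam detected'"
SecRule REQUEST_METHOD "POST" "chain"
SecRule REQUEST_BODY "@rx (?i:(pills|insurance|rolex))"
```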

Save the file and reload Apache. Open http://yourwebsite.com/form.php in the browser and enter text containing any of these words: pills, insurance, rolex.

You’ll either see a 403 page and a log entry or only a log entry, depending on the SecRuleEngine setting. The syntax for SecRule is SecRule VARIABLES OPERATOR [ACTIONS].

Here we used the chain action to match the variable REQUEST_FILENAME against form.php, REQUEST_METHOD against POST, and REQUEST_BODY against the regular expression (@rx) string (pills|insurance|rolex). The ?i: does a case-insensitive match. On a successful match of all three rules, the ACTION is to deny and log with the msg “Spam detected.” The chain action simulates a logical AND across the three rules.

Excluding Hosts and Directories

Sometimes it makes sense to exclude a particular directory or a domain name, for example if it runs an application like phpMyAdmin, since ModSecurity will block its SQL queries. It is also better to exclude the admin backends of CMS applications like WordPress.

To disable modsecurity for a complete VirtualHost place the following

inside the <VirtualHost> section.
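For example:

```apacheconf
<IfModule security2_module>
    SecRuleEngine Off
</IfModule>
```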

For a particular directory:
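For example (the path is a placeholder):

```apacheconf
<Directory "/var/www/wp-admin">
    <IfModule security2_module>
        SecRuleEngine Off
    </IfModule>
</Directory>
```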

If you don’t want to completely disable modsecurity, use the SecRuleRemoveById directive to remove a particular rule or rule chain by specifying its ID.
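For example (981173 is just an illustrative CRS rule ID, not a recommendation):

```apacheconf
<LocationMatch "/wp-admin/update.php">
    <IfModule security2_module>
        SecRuleRemoveById 981173
    </IfModule>
</LocationMatch>
```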

Further Reading

Official modsecurity documentation https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual

Official modsecurity website http://www.modsecurity.org/


What is Browsershots?


Hello ,

Browsershots tests your website’s compatibility on different browsers by taking screenshots of your web pages rendered by real browsers on different operating systems.

What we do and why we created this service?

In our dreams, the web looks good for all users. So we let web designers view screenshots of their pages in different browsers, at different screen resolutions and with different plugins. We’re trying to make this service easy to use, open for all (including access to the source code) and 100% free.

The problem: cross-browser incompatibilities: This project is concerned with a favorite problem of web designers: websites look different in other browsers. Testing a new site in many browsers can be quite time-consuming. Not everybody has a farm of legacy machines with older OSes and browsers. There are online services that offer screenshots of websites in different browsers for considerable fees. For the hobbyist and for open source projects, these fees may be prohibitive.

The solution: community cooperation: The idea behind this project is to distribute the work of making browser screenshots among community members. Everybody can add URLs to the job queue on a central server. Volunteers use a small program to automatically make screenshots of web pages in their browser and upload the results to the server.


Link : http://browsershots.org/

Good luck

Download a Free 15-day trial of Parallels Plesk 12 Today!


Attention: Only deploy the Parallels Plesk Trial on servers dedicated for this purpose. Control Panel software is designed to take full control over the server it is allocated to and cannot be deleted without reformatting the server.




Link: http://sp.parallels.com/products/plesk/trial/

Good luck ,

I get "You don’t have permission to access /imp/compose.php on this server" error when trying to send e-mail in horde webmail



I ran into an interesting error caused by mod_security.


When trying to send e-mail, the following error appears:



A similar error may also appear in the Apache error log file:


Very likely you have something like mod_security installed which is improperly flagging the request. There is basically no way this can be something IMP is doing.


Configure mod_security properly or disable it in the Apache configuration.



Link: http://kb.sp.parallels.com/en/5546

Good luck

How to increase the number of sites that can be hosted on a Parallels Plesk Panel for Linux server


Hello ,

I needed to host more than 300 sites on a Linux Parallels Plesk Panel server, and the solution was KB113974.


There are going to be more than 300 sites on a Linux box running with Parallels Plesk Panel. Are there any prerequisites that have to be met?


By default, the Apache server allows the hosting of no more than 300 websites on a single box. This is due to a limitation on the number of files that can be opened by the Apache process at one time, which is usually 1,024. Apache needs to open 2-4 log files for each site hosted on the server, and once it reaches the opened-file limit, it crashes.

  1. The best practice for this case is to recompile Apache with an increased number of allowed file descriptors, as per Knowledge Base article #260: How to recompile Apache, PHP, and IMAP with a value of file descriptors larger than FD_SETSIZE (1024) on a RedHat-like system.
  2. In case you do not want to recompile the Apache package, you may enable the Piped Logs feature, which allows you to have up to 900 sites on one server. More details can be found in article #2066: How do I enable Piped Logs for Apache Web Server?
  3. Alternatively, you may use the trick below to increase the maximum number of allowed open file descriptors: add ulimit -n 65536 at the beginning of the Apache init script, like this:
    Then restart Apache:
    Note: The file name and content may be different on your system. Another example:
    Restart Apache:
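Put together, the edit and restart might look like this (the init script name varies by distribution, as the note above says):

```shell
# At the beginning of /etc/init.d/httpd (or /etc/init.d/apache2):
ulimit -n 65536

# Then restart Apache:
service httpd restart
```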

Additional information

As of the release of Parallels Plesk Panel 11.0, Nginx can be installed as a reverse proxy server in front of Apache. This will help you run more sites on one server.

Such a combination of Nginx and Apache provides the following advantages:

  • The maximum allowed number of concurrent connections to a website increases.
  • The consumption of server CPU and memory resources decreases.
  • The maximum effect will be achieved for websites with a large amount of static content (like photo galleries, video streaming sites, and so on).
  • Efficiency of serving visitors with a slow connection speed (GPRS, EDGE, 3G, and so on) improves. For example, a client with a 10 KB/s connection requests a PHP script, which generates a 100 KB response. If there is no Nginx installed on the server, the response is delivered by Apache. During the 10 seconds required to deliver the response, Apache and PHP continue to consume full system resources for this open connection. If Nginx is installed, Apache forwards the response to Nginx (the Nginx-to-Apache connection is very fast, as both are located on the same server) and releases system resources. As Nginx has a lower memory footprint, the overall load on the system decreases. If you have a large number of such slow connections, using Nginx will significantly improve website performance.

See the Parallels Plesk Panel 11 Administrator’s guide for more details.


Link : http://kb.sp.parallels.com/en/113974

Good luck ,

How to verify an invalid Plesk backup file



Hello ,



  • Parallels Plesk 11.0 for Linux
  • Parallels Plesk 11.5 for Linux
  • Parallels Plesk 12.0 for Windows
  • Parallels Plesk 11.0 for Windows
  • Parallels Plesk 11.5 for Windows
  • Parallels Plesk Automation 11.1


A new backup is shown on the Backup Manager page. However, it is marked with a red circle and the following pop-up message is shown when you hover the mouse over the circle:

The following errors may also be observed:


The backup description contains invalid records, causing the backup to fail XML validation.


The backup XML description file should be validated with the backup XML schema. This will help to find problem objects in the Plesk configuration database and fix inconsistencies, which cause the XML file to become invalid.

  1. Install the xmllint utility to help you validate XML files:
    • Download xmllint.zip
    • Create the C:\xmllint directory
    • Unpack the archive into C:\xmllint
  2. Find the XML backup file in the Parallels Plesk (Plesk) backup repository. For example, if the backup name in the Plesk GUI is test_info_1004281551.xml and Plesk uses the local repository, then the file will be at "%plesk_dir%\Backup\test_info_1004281551.xml".
  3. Using xmllint, reformat the XML file to be more easily readable:
  4. Validate the formatted file:
    If you get an error as below, use the solution from the article #8488 and remove Envelope elements from the file:
    Then validate the backup file again:
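The two xmllint steps might look like the following sketch; the schema file name and its location are assumptions, as Plesk ships its backup schema inside the product directory:

```shell
xmllint --format test_info_1004281551.xml > formatted.xml
xmllint --schema plesk.xsd formatted.xml --noout
```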
Good luck

Let’s Encrypt: Delivering SSL/TLS Everywhere



Hello ,

Vital personal and business information flows over the Internet more frequently than ever, and we don’t always know when it’s happening. It’s clear at this point that encrypting is something all of us should be doing. Then why don’t we use TLS (the successor to SSL) everywhere? Every browser in every device supports it. Every server in every data center supports it. Why don’t we just flip the switch?

The challenge is server certificates. The anchor for any TLS-protected communication is a public-key certificate which demonstrates that the server you’re actually talking to is the server you intended to talk to. For many server operators, getting even a basic server certificate is just too much of a hassle. The application process can be confusing. It usually costs money. It’s tricky to install correctly. It’s a pain to update.

Let’s Encrypt is a new free certificate authority, built on a foundation of cooperation and openness, that lets everyone be up and running with basic server certificates for their domains through a simple one-click process.

Mozilla Corporation, Cisco Systems, Inc., Akamai Technologies, Electronic Frontier Foundation, IdenTrust, Inc., and researchers at the University of Michigan are working through the Internet Security Research Group (“ISRG”), a California public benefit corporation, to deliver this much-needed infrastructure in Q2 2015. The ISRG welcomes other organizations dedicated to the same ideal of ubiquitous, open Internet security.

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain can get a certificate validated for that domain at zero cost.
  • Automatic: The entire enrollment process for certificates occurs painlessly during the server’s native installation or configuration process, while renewal occurs automatically in the background.
  • Secure: Let’s Encrypt will serve as a platform for implementing modern security techniques and best practices.
  • Transparent: All records of certificate issuance and revocation will be available to anyone who wishes to inspect them.
  • Open: The automated issuance and renewal protocol will be an open standard and as much of the software as possible will be open source.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the entire community, beyond the control of any one organization.

If you want to help these organizations in making TLS Everywhere a reality, here’s how you can get involved:

To learn more about the ISRG and our partners, check out our About page.


How It Works

Anyone who has gone through the trouble of setting up a secure website knows what a hassle getting a certificate can be. Let’s Encrypt automates away all this pain and lets site operators turn on HTTPS with a single click or shell command.

When Let’s Encrypt launches in Summer 2015, enabling HTTPS for your site will be as easy as installing a small piece of certificate management software on the server:
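The illustrative commands from the announcement were along these lines (the lets-encrypt client and package name were provisional at the time):

```shell
sudo apt-get install lets-encrypt
lets-encrypt example.com
```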

That’s all there is to it! https://example.com is immediately live.

The Let’s Encrypt management software will:

  • Automatically prove to the Let’s Encrypt CA that you control the website
  • Obtain a browser-trusted certificate and set it up on your web server
  • Keep track of when your certificate is going to expire, and automatically renew it
  • Help you revoke the certificate if that ever becomes necessary.

No validation emails, no complicated configuration editing, no expired certificates breaking your website. And of course, because Let’s Encrypt provides certificates for free, no need to arrange payment.

If you’d like to know more about how this works behind the scenes, check out our technical overview. Or if you really want to dive into the details, read the full protocol specification on Github.


Link : https://www.letsencrypt.org/

Good luck ,


Installing an SSL Certificate error message


Hello all ,

I recently tried to install an SSL certificate, but I got this strange error.



1. When I try to activate the SSL certificate in Plesk for Windows, I get this error message:

2. When I try to assign the certificate to a specific IP address in Plesk for Windows, I get this message:


The cause: a wrong CA bundle for the SSL certificate.


The solution: obtain a valid CA certificate for the SSL installation.

It is recommended to remove the problematic certificate installation and reinstall the certificate with a valid CA bundle in Plesk.


Good luck ,

Best free WordPress Backup: BackUpWordPress


Hello ,

The best app that backs up WP and the DB: BackUpWordPress

BackUpWordPress will back up your entire site including your database and all your files on a schedule that suits you. Try it now to see how easy it is!

This plugin requires PHP version 5.3.2 or later


  • Super simple to use, no setup required.
  • Works in low memory, “shared host” environments.
  • Manage multiple schedules.
  • Option to have each backup file emailed to you.
  • Uses zip and mysqldump for faster backups if they are available.
  • Works on Linux & Windows Server.
  • Exclude files and folders from your backups.
  • Good support should you need help.
  • Translations for Spanish, German, Chinese, Romanian, Russian, Serbian, Lithuanian, Italian, Czech, Dutch, French, Basque.


  1. Install BackUpWordPress either via the WordPress.org plugin directory, or by uploading the files to your server.
  2. Activate the plugin.
  3. Sit back and relax safe in the knowledge that your whole site will be backed up every day.

The plugin will try to use the mysqldump and zip commands via shell if they are available; using them greatly reduces the time it takes to back up your site.





Link : https://wordpress.org/plugins/backupwordpress/

Good luck

How to Analyze a Distributed Denial-of-Service (DDoS) Attack

What is DDoS Attack?

As per Wikipedia, a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users.

In this small post I would like to show a few useful commands to use if someone is experiencing a DDoS attack. In my case, there is an nginx as a front-end server. The access log format looks like this:

In the log file we’ll see something like this:

Analyzing DDoS Attack

This command lets you see the bigger picture: the distribution of unique IPs sending requests, the number of requests from each IP, etc.
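Reconstructed from the command breakdown later in this post (logtop is available from the standard Debian repository):

```shell
tail -f /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | logtop
```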

The main thing here is that all of this operates in real time, so we can monitor the situation and make the necessary changes to the configuration. For example, we can ban the top 20 most active IPs via iptables, or limit the geography of requests for some time in nginx with the help of GeoIP (http://nginx.org/en/docs/http/ngx_http_geoip_module.html).

Once you run the command, it will display (and will update in real-time) something like the following:

3199 elements in 27 seconds (118.48 elements/s)
1 337 12.48/s
2 308 11.41/s
3 304 11.26/s
4 284 10.52/s
5 275 10.19/s
6 275 10.19/s
7 270 10.00/s
8 230 8.52/s
9 182 6.74/s
10 172 6.37/s

In the given case, the columns mean:

  • 1 — the sequence number
  • 2 — the number of requests from the given IP
  • 3 — the number of requests per second from the given IP
  • 4 — the IP itself

At the very top you will see the summary for all of the requests. We can see that the top IP sends 12.48 requests per second; during the last 27 seconds it has sent 337 requests.

Let’s review it in detail:

tail -f /var/log/nginx/nginx.access.log — continuously read the end of the log-file.

cut -d ' ' -f 1 — split the string into “substrings” using the delimiter defined by the -d flag (in this case, a space). The -f 1 flag means that we only want the field with sequence number 1 (in this case, the field containing the IP address that sent the request).

logtop counts the number of identical strings (i.e., IPs), sorts them in descending order, and prints them to the screen as a list, adding statistics at the same time (in Debian it can be installed via aptitude from the standard repository).

That will show the distribution of requests by IP in the log.

In my case, we wanted to gather statistics about the IPs using the &key=… parameter in a request.
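Reconstructed from the breakdown that follows, and demonstrated here on three sample log lines; against the real log the first stage would instead be grep "&key=" /var/log/nginx/nginx.access.log:

```shell
# Count requests per client IP (the first space-delimited field)
# and show the busiest IPs last.
printf '%s\n' \
  '203.0.113.5 - - "GET /?page=1&key=abc HTTP/1.1" 200' \
  '198.51.100.7 - - "GET /?page=2&key=def HTTP/1.1" 200' \
  '203.0.113.5 - - "GET /?page=3&key=ghi HTTP/1.1" 200' \
  | cut -d ' ' -f 1 | sort | uniq -c | sort -n | tail -n 30
```

The sample IPs here are documentation addresses, used only to illustrate the pipeline.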

We are going to see something of the kind:


  • 1st column is the number of string entries (IP in our case)
  • 2nd column is the IP address itself

We can see that one IP has sent 1,878 requests. (Later we will see in Whois that this IP belongs to Google and is not “harmful.”)

Let’s take a closer look at it:

grep "&key=" /var/log/nginx/nginx.access.log — find all the lines in the log that contain the "&key=" substring (no matter where in the line it is located)

cut -d ' ' -f 1 — extract the IP address (see the previous example)

sort — sort lines (it’s necessary for the correct operation of the next command)

uniq -c — show unique lines and count the number of times the line occurred in the input (-c flag)

sort -n — sort by comparing according to string numerical value (-n flag)

tail -n 30 — output the last 30 lines (-n 30 flag; we can choose any number of lines)

All the commands mentioned above are for Debian and Ubuntu, but I think they will look pretty much the same in other Linux distros.

Increase the Upload Size for MySQL Database on cPanel with phpMyAdmin using WHM




A cPanel/WHM server imposes a limit on the size of a MySQL database that can be imported through phpMyAdmin. The default size is 50 MB.

The best way to work around this limitation is to make some tweaks in the WHM interface. Sometimes editing the php.ini file doesn’t make a difference.


Log into your WHM interface and type Tweak in the search bar.


The Tweak Settings page appears. In the Find field on the right, type: upload size




Change the cPanel PHP max upload size to what you need and save.

Go back to Tweak Settings and in the find bar type: post



Change the cPanel PHP max POST size to what you need.

That’s it. Now you can import a larger database directly through phpMyAdmin. Go back and restore the default settings if required.


Good luck