Tranzila – Daily transmission report

Hello friends,

If you are using Tranzila as an online merchant, you will need to pay attention to the daily transmission report:

שם מסוף:                 שים לב מסוף טסט
        0962360               :מספר מסוף
        02/12/2014                :תאריך
        17:55                       :שעה
        1111111        מספר עסק בישראכרט
        0017324            מספר עסק בכאל
        0017324         מספר עסק בדיינרס
        1111111        .מספר עסק באם.אקס
        0929794     מספר עסק בלאומי קארד
        04398                מספר בטולים
        04398                מספר תוספות
        5519                 :דור חסומים
        00:19                זמן התקשרות


 Transmitted from: 5

Or, in English:

Terminal:                 שים לב מסוף טסט
        0962360         :Terminal Number
        17/01/2015                 :Date
        17:55                      :Time
        1111111     Isracard Merchant ID
        0017324          CAL Merchant ID
        0017324       Diners Merchant ID
        1111111         AMEX Merchant ID
        0929794        Leumi Merchant ID
        5558         :Blocked Generation
        00:02       Communication Period

 Transmitted from: 5

This daily transmission report is a TEST SHVA report.

When your terminal goes into production, this report will show the real terminal name, the real terminal ID, and the SAPAK (merchant) codes assigned by the credit card companies.

English: Tranzila web site

Hebrew: Tranzila web site

About Tranzila :

InterSpace Ltd., the owner and creator of TRANZILA™, was established in 1996 in order to provide companies with the opportunity of receiving reliable, professional and affordable Internet services. Today, InterSpace has become a leading force in the web hosting and e-business solution market. The company specializes in advanced dedicated servers, data center hosting and e-commerce solutions. Our services utilize our state-of-the-art data center, our premier partnership with the global giant NTT/Verio, and partnerships with Intel, Microsoft, Geotrust and others.

Since its rollout in the year 2000, TRANZILA has shown steady and promising growth and has recruited leading e-merchants, service providers and integrators as clients. In addition to the growth in Internet transaction volume, TRANZILA has grown its business by integrating new technologies and by offering value-added services. The latest developments from TRANZILA include the introduction and offering of the 3-D Secure solution in an ASP mode, an advanced fraud detection system, a recurring billing and invoice issuing system, and more.


TRANZILA offers its clients a turnkey solution for their entire online infrastructure needs. Our experts can be outsourced for application characterization, application management and development.

 

Good luck.

WordPress Optimization tutorial guide

This WordPress optimization tutorial is the most comprehensive guide to WordPress optimization, created with the intention of helping you troubleshoot performance-related issues and providing you with guidelines on how to speed up your WordPress site.

If you have ever experienced a slow WordPress admin panel, the “MySQL server has gone away” message, or pages taking forever to load, or if you want to prepare your site for a major increase in traffic (for example, the Digg front page), this is the guide for you.

1. Check the Site stats

Most commonly, the problem with slow-loading sites is simply the sheer size of the page. A typical webpage today will be loaded with images, Flash, videos and JavaScript, all of which take a significant portion of bandwidth.

If you want to start dealing with this issue seriously, you need to get the Firefox browser, the Firebug extension and the YSlow plugin.

The YSlow module will give you a performance score from 0 to 100. Getting your site to a score of 80+ should be your aim.

Try to keep your page size under 100 KB, and under 50 KB if possible. If you have a lot of multimedia content then by all means learn to use YSlow.
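
If you want a quick command-line check, curl can report the raw size and load time of the HTML document itself (a minimal sketch; replace the URL with your own site, and note that this does not count the images and scripts the page references):

curl -s -o /dev/null -w "downloaded %{size_download} bytes in %{time_total} seconds\n" http://www.mysite.com/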

Learn about ways to improve the page loading speed.

Another useful Firefox extension worth checking out is Google’s Page Speed.

2. Check your (Vista) System

On rare occasions, when your site and other sites are loading slowly, it can be your Vista system that is causing the slowdown.

If you are running Vista check this article for a diagnosis and a possible solution.

3. Check the Plugins

Plugins are usually the prime suspect for slowdowns. With so many WordPress plugins around, chances are you have installed a plugin which does not use resources in an optimal way.

For example, plugins that have caused slowdowns in the past include Popularity Contest, aLinks and @Feed.

To check plugins, deactivate all of them and check the critical areas of the site again. If everything runs OK, re-enable the plugins one by one until you find the problematic plugin.
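
If you have shell access and WP-CLI installed (an assumption; it is not part of a default WordPress install), you can toggle plugins from the command line instead of clicking through the admin panel. A rough sketch, with akismet as an example plugin name:

wp plugin deactivate --all
# re-test the site, then re-enable plugins one at a time, for example:
wp plugin activate akismet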

After finding the cause you can either write a message to the plugin author and hope they fix it or search for an alternative.

4. Check your Theme

If it’s not the plugins and you are still troubleshooting a slowdown of the site, you should check it with a different theme.

Themes can include code with plugin capabilities inside the theme’s functions.php file, so everything that applies to plugins can apply to the theme as well.

Also, themes may use excessive JavaScript or image files, causing slow loading of the page because of the huge amount of data to transfer and/or the number of HTTP requests used.

WordPress comes with a default theme installed, and it is the best one to use for testing the site if your own theme is the suspect for poor performance.

If you discover your theme is causing the slowdowns, you can use the excellent Firebug tool for Firefox browser to debug the problem. Learn more about Firebug, your new best friend.

You can also use this site to get general information about the site very quickly.

5. Optimize Database Tables

Database tables should be periodically optimized (and repaired if necessary) for optimum performance.

I recommend using the WP-DBManager plugin, which provides this functionality as well as database backups, all crucial for any blog installation.

WP-DBManager allows you to schedule and forget, and it will take care of all the work automatically.

An alternative is to optimize and repair your tables manually through a tool like phpMyAdmin.
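
If you prefer the shell, the mysqlcheck client that ships with MySQL can optimize and repair all tables in one pass; a minimal sketch (it will prompt for the MySQL root password):

mysqlcheck -u root -p --auto-repair --optimize --all-databases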

6. Turn off Post Revisions

WordPress 2.6 introduced a post revision tracking mechanism: every time you “Save” a post, a revision is written to the database. If you do not need this feature you can easily turn it off by adding one line to your wp-config.php file, found in the installation directory of your WordPress site:

define('WP_POST_REVISIONS', false);

If you have run a blog with revisions turned on for a while, chances are you will have a lot of revision posts in your database. If you wish to remove them for good, simply run this query (for example, using the WP-DBManager plugin mentioned above):

DELETE FROM wp_posts WHERE post_type = 'revision';

This will remove all “revision” posts from your database, making it smaller in the process.

NOTE: Do this with care. If you are not sure what you are doing, make sure to at least create a backup of the database first or even better, ask a professional to help you.

7. Implement Caching

Caching is a method of retrieving data from ready storage (a cache) instead of using resources to generate it every time the same information is needed. Using a cache is a much faster way to retrieve information and is a generally recommended practice for most modern applications.

The easiest way to implement caching (and usually the only way if your blog is on shared hosting) is to use a caching plugin.

The most commonly used is WP Super Cache.

A newer kid on the block, W3 Total Cache, is a more powerful alternative that is maturing every day.

8. MySQL Optimization

MySQL can save the results of a query in its own cache. To enable it, edit the MySQL configuration file (usually /etc/my.cnf) and add these lines:

query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 16M

This will create a 16 MB query cache after you restart the MySQL server (the right amount depends on the available RAM; I use around 250 MB on a 4 GB machine).

To check if it is properly running, run this query:

SHOW STATUS LIKE 'Qcache%';

Example result:

Qcache_free_blocks	718
Qcache_free_memory	13004008
Qcache_hits	780759
Qcache_inserts	56292
Qcache_lowmem_prunes	0
Qcache_not_cached	3711
Qcache_queries_in_cache	1715
Qcache_total_blocks	4344
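
To get a rough idea of whether the cache is earning its keep, compare Qcache_hits with the total number of SELECTs, and watch Qcache_lowmem_prunes (a steadily growing value suggests the cache is too small). A quick sketch using the mysql client:

mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('Qcache_hits','Com_select','Qcache_lowmem_prunes');"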

Further MySQL Optimization:

There are a lot of options you can play with, so here is my MySQL config file instead, tuned for a 4 GB, quad-core dedicated machine. It will most probably not work for your machine out of the box; use it just as a general guideline.

[mysqld]
bulk_insert_buffer_size = 8M
connect_timeout=10
interactive_timeout=50
join_buffer=1M
key_buffer=250M
max_allowed_packet=16M
max_connect_errors=10
max_connections=100
max_heap_table_size = 32M
myisam_sort_buffer_size=96M
query_cache_limit = 4M
query_cache_size = 250M
query_cache_type = 1
query_prealloc_size = 65K
query_alloc_block_size = 128K
read_buffer_size=1M
read_rnd_buffer_size=768K
record_buffer=1M
safe-show-database
skip-innodb
skip-locking
skip-networking
sort_buffer=1M
table_cache=4096
thread_cache_size=1024
thread_concurrency=8
tmp_table_size = 32M
wait_timeout=500

# for slow queries, comment when not used
#log-slow-queries=/var/log/mysql-slow.log
#long_query_time=1
#log-queries-not-using-indexes

[mysqld_safe]
nice = -5
open_files_limit = 8192

[mysqldump]
quick
max_allowed_packet = 16M

[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M

Tip #2:
Here is further reading regarding MySQL optimization, and another one here.

The extremely useful mysqlreport tool will help you tweak MySQL like nothing else. MySQL Tuner is one of the best and quickest tools out there to tell you how you can fix up your database. MySQL Tuning Primer and MySQL Activity Report are another two scripts to try out.

Maatkit is an extremely useful toolkit for managing MySQL.

The MySQL slow query log is valuable for getting information about the most problematic queries. To activate it, you can edit your my.cnf:

log-slow-queries=/var/log/mysql-slow.log
long_query_time=1
log-queries-not-using-indexes

This will create a log of slow queries and of those not using indexes. Now you need to be able to identify the slow ones, for which you can use external slow-log filtering and parsing tools. Using ‘EXPLAIN’ is an effective way to understand and optimize complex queries.
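
One of the parsing tools that ships with MySQL is mysqldumpslow, which groups the slow log by query pattern. A sketch showing the ten slowest patterns by total time (the log path matches the configuration above):

mysqldumpslow -s t -t 10 /var/log/mysql-slow.log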

You can also install mytop, a ‘top’ command clone that works with MySQL.

9. PHP Opcode Cache

PHP is an interpreted language, meaning that every time PHP code is run it is compiled into so-called opcodes, which are then executed by the system. This compilation step can be cached by installing an opcode cache such as eAccelerator. There are other caching solutions out there as well.

To install eAccelerator, unpack the archive and go to the eAccelerator folder. Then type:

phpize
./configure
make
make install

This will install eAccelerator.

Next, create a temporary folder for the cache storage:

mkdir /var/cache/eaccelerator

chmod 0777 /var/cache/eaccelerator

Finally, to enable it, add these lines to the end of your php.ini file (usually /etc/php.ini or /usr/lib/php.ini):

extension="eaccelerator.so"
eaccelerator.shm_size="16"
eaccelerator.cache_dir="/var/cache/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"

The changes will be noticeable at once, as PHP does not need to be ‘restarted’.

Note #1: WP Super Cache and eAccelerator work fine together, showing a further increase in performance.

Note #2: If you would like even more performance, check the WP Super Cache and eAccelerator plugin.

Note #3: Unfortunately, eAccelerator won’t work if PHP is run as CGI. You can try using FastCGI, which will work with suExec and eAccelerator.

Note #4: W3 Total Cache, mentioned earlier, already utilizes both memcached and APC, making it amazingly fast.

10. Web Server optimization

Apache optimization is something books have been written about, so I will first forward you to this article here. In-depth Apache compilation tips are here, performance tuning here, VPS tips here and keep-alive tips here.

You can easily test changes in your configuration by running a test from your command prompt

ab -t30 -c5 http://www.mysite.com/

and comparing the results. I get around 200 req/s on a VPS server.

For more flexible testing you can use Autobench, which works in conjunction with httperf, another benchmarking tool.
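
A minimal httperf run looks like the sketch below; the numbers are examples only, so start low to avoid overloading your own server:

httperf --server www.mysite.com --port 80 --uri / --num-conns 1000 --rate 50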

Using a fast web server like nginx to serve static content (i.e. images) while passing dynamic requests to Apache is another popular technique you can use to improve performance.

Note #1: More cool resources. Optimizing Page load time and a great series on website performance.

Note #2: You can find even more tips & tricks on Elliot Back’s site.

11. “MySQL server has gone away” workaround

This WordPress database error appears on certain configurations and manifests as very slow response or no response at all, usually on your admin pages.

Workaround for this MySQL problem has been best addressed in this article.

This problem evidently exists, but the suggested fix is valid only until you upgrade your WordPress. Hopefully it will be further researched and added into the WordPress core in the future.

Note #1: Sometimes increasing MySQL wait_timeout to 1000 will help with this issue.
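
A sketch of how you might try that: set it at runtime (this applies to new connections only), or persist it under the [mysqld] section of my.cnf and restart MySQL.

mysql -u root -p -e "SET GLOBAL wait_timeout = 1000;"
# or add "wait_timeout = 1000" under [mysqld] in /etc/my.cnf and restart the server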

12. Fixing posting not possible problem

If you experience WordPress admin panel crawling to a halt, with inability to post or update certain posts, you are probably hitting the mod_security wall.

ModSecurity is an Apache module for increasing web site security by preventing system intrusions. However, sometimes it may decide that your perfectly normal WordPress MySQL query is trying to do something suspicious and blacklist it, which manifests as a very slow or no response from the site.

To test if this is the case, check your Apache error log, for example:

tail -f /usr/local/apache/logs/error_log

and look for something like this:

ModSecurity: Access denied with code 500 (phase 2) ... [id "300013"] [rev "1"] [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.prelovac.com"] [uri "/vladimir/wp-admin/page.php"]

It tells you that access to this page was denied because of a security rule with id 300013. Fixing this involves whitelisting this rule for the page in question.

To do that, edit the Apache config file (for example /usr/local/apache/conf/modsec2/exclude.conf) and add this line:

SecRuleRemoveById 300013

This will whitelist the rule (to limit the exclusion to a specific page, wrap it in a <LocationMatch> block) and your site will continue to work normally.

13. RSS Pings and Pingbacks

Reasons for slow WordPress posting may include RSS ping and pingback timeouts.

By default, WordPress will try to ping the servers listed in your ping list (found in the Settings -> Writing panel) and one of them may time out, slowing down the entire process.

The second reason is post pingbacks, a mechanism by which WordPress notifies the sites you linked to in your article. You can disable pingbacks in Settings -> Discussion by un-checking the option “Attempt to notify any blogs linked to from the article (slows down posting)”.

Try clearing ping list and disabling pingbacks to see if that helps speed up your posting time.

The following are general rules for optimizing page loading time.

14. Use subdomains to share the load

Most browsers are set to load 2-4 files from a domain in parallel. If you move some files to a different domain (subdomain will work) the browser will start downloading 2-4 more files in parallel.

It is a good idea to move your theme image files to a subdomain you create. I have created demo.prelovac.com/images and moved my theme images there. I then changed the theme’s style.css to point to the full URLs of the new image files. Job done!
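
The substitution itself is a one-off search and replace. A rough sketch, assuming your theme references images with relative url(images/...) paths (the subdomain is the example above; your theme path and URL patterns will differ):

sed -i 's|url(images/|url(http://demo.prelovac.com/images/|g' wp-content/themes/your-theme/style.css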

15. Minimize the number of HTTP requests

You can lower the number of HTTP requests by using fewer images (or placing all images in one large image and positioning them with CSS), fewer JavaScript files and fewer CSS files (which usually means fewer plugins).

A good effort has been made by the PHP Speedy plugin, which will merge all your JavaScript files and all your CSS files into one big file each, which really helps in lowering the number of HTTP requests. The biggest drawback of PHP Speedy is that it’s not 100% compatible with all plugins.

Also use the CSS Sprite generator to move all your images into one image and then use CSS background-position to display them. This will cut your number of HTTP requests significantly.

16. Compress the content using apache .htaccess

If you have your own server you can choose to gzip all content sent to browsers. This will lower the loading time significantly, as most HTML pages compress very well.

Add this code to your .htaccess:

AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript application/x-javascript application/x-httpd-php application/rss+xml application/atom+xml text/javascript
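
You can verify that compression is actually being applied by requesting gzip and checking the response headers; a quick sketch (the URL is a placeholder):

# should print something like: Content-Encoding: gzip
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://www.mysite.com/ | grep -i "content-encoding"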

17. Create expires headers

Expires headers tell the browser how long it should keep content in its cache. Most of the images on your site never change, so it is a good idea to keep them cached locally.

Add this to your .htaccess (make sure mod_expires is loaded in your Apache if you have problems):

<FilesMatch "\.(ico|jpg|jpeg|png|gif|js|css|swf)$">
ExpiresActive on
ExpiresDefault "access plus 30 days"
Header unset ETag
FileETag None
</FilesMatch>

Here is an alternative setting:

Header unset Pragma
FileETag None
Header unset ETag

# 1 YEAR

Header set Cache-Control "public"
Header set Expires "Thu, 15 Apr 2010 20:00:00 GMT"
Header unset Last-Modified

# 2 HOURS

Header set Cache-Control "max-age=7200, must-revalidate"

# CACHED FOREVER
# MOD_REWRITE TO RENAME EVERY CHANGE

Header set Cache-Control "public"
Header set Expires "Thu, 15 Apr 2010 20:00:00 GMT"
Header unset Last-Modified

Use cacheability engine to check your cache configuration.

18. Cache Gravatars

Many blogs use Gravatars, the little images next to your comments. However, gravatars have two big flaws with regard to site optimization:

  • Every gravatar image is a new HTTP request, even if the same image is loaded (a page with 100 comments would make 100 additional HTTP requests)
  • Gravatar images do not contain Expires headers

What we can do is create a local gravatar cache, where the images are cached and served from our own site. Ideally you would place the gravatar cache on a separate subdomain (see the heading on subdomains above).

I use a plugin from Zenpax.com which allows all gravatars to be cached locally.

19. Optimize the images with smush.it

It is often overlooked that your images can be optimized (made smaller) which can significantly reduce loading times.

Wouldn’t it be perfect if you could open a site, press a button in your browser, and get all the images on the site optimized and made available in a single zip file? That is possible thanks to smush.it and their Firefox plugin. It is amazing how effective this is!

20. CSS on top, JavaScript on bottom

It is a golden practice to put CSS files at the top of the page so they are loaded first. JavaScript files should be placed at the bottom of the page (when possible). I have created a simple plugin which will move properly registered JavaScript files to the bottom of your pages. The plugin is called Footer JavaScript.

21. Use CDN

A CDN is a network of servers, usually located at various sites around the world, which cache the static content of a site, such as image, CSS and JavaScript files. The CDN provider copies your site’s static content to its servers, so when someone lands on your site, the static content is delivered from the server closest to them. For a visual look at how this works, check out this handy graphic from GTmetrix.


Conclusion

Modern web servers and websites depend on many different factors for their performance.

This article covered various approaches to optimization, from system-level Apache, PHP and MySQL changes to settings within WordPress itself.

I hope following this guide will help you create a fast and responsive WordPress-based site.

 

Good luck

P.S. This is a very good guide; I edited it, but most of it comes from the original website.

Original guide: prelovac

Increase the Upload Size for MySQL Database on cPanel with phpMyAdmin using WHM

A cPanel/WHM server imposes a limit on the size of a MySQL database dump that can be imported through phpMyAdmin. The default size is 50 MB.

The best way to navigate this limitation is to make some tweaks in the WHM interface. Sometimes editing a php.ini file doesn’t make a difference.

– Log into your WHM interface and type Tweak in the search bar.


The Tweak Settings page appears; in the Find field on the right, type: upload size


 

Change the cPanel PHP max upload size to what you need and save.

Go back to Tweak Settings and in the find bar type: post


Change the cPanel PHP max POST size to what you need.

That’s it; now you can import a larger database directly into phpMyAdmin. Go back and restore the default settings afterwards if required.

 

cPanel’s phpMyAdmin uses the php.ini file /usr/local/cpanel/3rdparty/etc/phpmyadmin/php.ini.

To increase the upload limit, change the values of upload_max_filesize and post_max_size in this php.ini file. Typically, you may set the value of post_max_size to twice the value of upload_max_filesize. For example, to import SQL files up to 250 MB in size, set upload_max_filesize to 250M and post_max_size to 500M.

You may also want to change values of max_execution_time and memory_limit.
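
Here is a sketch of checking and raising those values from the shell; the path is the one quoted above, the sizes are only examples, and you can just as well edit the file by hand:

# show the current limits
grep -E "upload_max_filesize|post_max_size|max_execution_time|memory_limit" /usr/local/cpanel/3rdparty/etc/phpmyadmin/php.ini
# raise the limits for a ~250MB import
sed -i 's/^upload_max_filesize.*/upload_max_filesize = 250M/' /usr/local/cpanel/3rdparty/etc/phpmyadmin/php.ini
sed -i 's/^post_max_size.*/post_max_size = 500M/' /usr/local/cpanel/3rdparty/etc/phpmyadmin/php.ini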

 

Good luck.

Scary Steam for Linux bug erases all the personal files on your PC

If you’re a Steam fan running Linux, the last thing you’ll want to do in the next few days is mess with your Steam files. Users on Valve’s GitHub Steam for Linux page are complaining about a nasty bug that has the potential to wipe out every single personal file on your PC. Even worse, users say the bug will even wipe out documents on USB connected drives. So much for local backups.

The impact on you at home: The obvious implication if you’re running Steam on Linux is to be wary of the program right now. As a precaution, don’t connect any local external hard drives while you’re running Steam. Users complaining of this bug appear to have moved their .steam or ~/.local/share/steam directories, or invoked Steam’s Bash script with the --reset option enabled.

UPDATE: Valve gave us the following statement: “So far we have had a handful of users report this issue, after they manually moved their Steam install. We have not been able to reproduce the reported issue, but we are adding some additional checks to ensure this is not possible while we continue to investigate. If anyone else has experienced this or has more information, they should email [email protected].”

Ouch

Steam’s bug appears to be caused by a line in the Steam.sh Bash script: rm -rf "$STEAMROOT/"*. That command is a basic Bash instruction that tells the computer to remove everything inside the STEAMROOT directory, including all its sub-directories (folders).

That’s all well and good, but the issue is that if the STEAMROOT variable ends up empty (for example, because the Steam folder was moved), the computer interprets the command as rm -rf "/"*, as first reported by Bit-Tech. If you’re not familiar with Bash, that command tells the system to delete everything on your hard drive, starting from the root directory.
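
The general defensive pattern that prevents this class of bug is simple: never feed a variable to rm -rf without first checking that it is non-empty and sane. A minimal sketch (this is an illustration, not Valve’s actual fix):

STEAMROOT="$(cd "$(dirname "$0")" && pwd)"
# refuse to delete anything if the variable is empty or resolves to /
if [ -z "$STEAMROOT" ] || [ "$STEAMROOT" = "/" ]; then
    echo "Refusing to clean up: STEAMROOT is empty or '/'" >&2
    exit 1
fi
rm -rf "$STEAMROOT/"*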

The saving grace for Linux users is that you can only erase files you have write permissions over. That means the system itself can’t be erased, but pretty much all of a user’s files—including photos and personal documents—would be at risk.

Ironically, the instruction at issue is preceded by a comment from the developer: # Scary!.

Indeed.

 

P.S. Wow... it’s a fail from the Steam Linux dev team.

Original post: PCWorld

Good luck, and be careful with Steam on Linux systems.

Looking for your Facebook Profile ID / Group ID / Fanpage ID …

Type your Facebook URL to find your Facebook Profile ID.

 

link : http://lookup-id.com/

Looking for your Facebook Profile ID – Lookup-ID.com helps you find the Facebook ID for your Profile or a Group. A Facebook ID is a many-digit number, e.g. 10453213456789123.
The Facebook ID is needed for certain Facebook social plugins, like the “Like Box”, the Like Button or applications.

Good luck

Mod Security is ON in the server and why is it important

Mod_Security is an important web application firewall that gets installed as an Apache module. It provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis.

It is used to block commonly known exploits for CMSs through regular expressions and rule sets.

Mod_Security can potentially block common code injection attacks which strengthens the security of the server.

When coding a dynamic website, developers sometimes forget to write code that helps prevent hacks, such as validating input.

Unfortunately, Mod_Security rules sometimes block valid transactions as well. Below you can find some steps to whitelist, configure or delete particular rules.

What Can ModSecurity Do?

ModSecurity is a toolkit for real-time web application monitoring, logging, and access control. I like to think about it as an enabler: there are no hard rules telling you what to do; instead, it is up to you to choose your own path through the available features. That’s why the title of this section asks what ModSecurity can do, not what it does.

The freedom to choose what to do is an essential part of ModSecurity’s identity and goes very well with its open source nature. With full access to the source code, your freedom to choose extends to the ability to customize and extend the tool itself to make it fit your needs. It’s not a matter of ideology, but of practicality. I simply don’t want my tools to restrict what I can do.

Back on the topic of what ModSecurity can do, the following is a list of the most important usage scenarios:

      • Real-time application security monitoring and access control

At its core, ModSecurity gives you access to the HTTP traffic stream, in real-time, along with the ability to inspect it. This is enough for real-time security monitoring. There’s an added dimension of what’s possible through ModSecurity’s persistent storage mechanism, which enables you to track system elements over time and perform event correlation. You are able to reliably block, if you so wish, because ModSecurity uses full request and response buffering.

      • Virtual patching

Virtual patching is a concept of vulnerability mitigation in a separate layer, where you get to fix problems in applications without having to touch the applications themselves. Virtual patching is applicable to applications that use any communication protocol, but it is particularly useful with HTTP, because the traffic can generally be well understood by an intermediary device. ModSecurity excels at virtual patching because of its reliable blocking capabilities and the flexible rule language that can be adapted to any need. It is, by far, the activity that requires the least investment, is the easiest activity to perform, and the one that most organizations can benefit from straight away.

      • Full HTTP traffic logging

Web servers traditionally do very little when it comes to logging for security purposes. They log very little by default, and even with a lot of tweaking you are not able to get everything that you need. I have yet to encounter a web server that is able to log full transaction data. ModSecurity gives you that ability to log anything you need, including raw transaction data, which is essential for forensics. In addition, you get to choose which transactions are logged, which parts of a transaction are logged, and which parts are sanitized.

      • Continuous passive security assessment

Security assessment is largely seen as an active scheduled event, in which an independent team is sourced to try to perform a simulated attack. Continuous passive security assessment is a variation of real-time monitoring, where, instead of focusing on the behavior of the external parties, you focus on the behavior of the system itself. It’s an early warning system of sorts that can detect traces of many abnormalities and security weaknesses before they are exploited.

 

      • Web application hardening

One of my favorite uses for ModSecurity is attack surface reduction, in which you selectively narrow down the HTTP features you are willing to accept (e.g., request methods, request headers, content types, etc.). ModSecurity can assist you in enforcing many similar restrictions, either directly, or through collaboration with other Apache modules. They all fall under web application hardening. For example, it is possible to fix many session management issues, as well as cross-site request forgery vulnerabilities.

    • Something small, yet very important to you

Real life often throws unusual demands to us, and that is when the flexibility of ModSecurity comes in handy where you need it the most. It may be a security need, but it may also be something completely different. For example, some people use ModSecurity as an XML web service router, combining its ability to parse XML and apply XPath expressions with its ability to proxy requests. Who knew?

Guiding Principles

There are four guiding principles on which ModSecurity is based, as follows:

      • Flexibility

I think that it’s fair to say that I built ModSecurity for myself: a security expert who needs to intercept, analyze, and store HTTP traffic. I didn’t see much value in hardcoded functionality, because real life is so complex that everyone needs to do things just slightly differently. ModSecurity achieves flexibility by giving you a powerful rule language, which allows you to do exactly what you need to, in combination with the ability to apply rules only where you need to.

      • Passiveness

ModSecurity will take great care to never interact with a transaction unless you tell it to. That is simply because I don’t trust tools, even the one I built, to make decisions for me. That’s why ModSecurity will give you plenty of information, but ultimately leave the decisions to you.

      • Predictability

There’s no such thing as a perfect tool, but a predictable one is the next best thing. Armed with all the facts, you can understand ModSecurity’s weak points and work around them.

    • Quality over quantity

Over the course of six years spent working on ModSecurity, we came up with many ideas for what ModSecurity could do. We didn’t act on most of them. We kept them for later. Why? Because we understood that we have limited resources available at our disposal and that our minds (ideas) are far faster than our implementation abilities. We chose to limit the available functionality, but do really well at what we decided to keep in.

There are bits in ModSecurity that fall outside the scope of these four principles. For example, ModSecurity can change the way Apache identifies itself to the outside world, confine the Apache process within a jail, and even implement an elaborate scheme to deal with a once-infamous universal XSS vulnerability in Adobe Reader. Although it was I who added those features, I now think that they detract from the main purpose of ModSecurity, which is a reliable and predictable tool that allows for HTTP traffic inspection.

 

Deployment Options

ModSecurity supports two deployment options: embedded and reverse proxy deployment. There is no one correct way to use them; choose an option based on what best suits your circumstances. There are advantages and disadvantages to both options:

      • Embedded

Because ModSecurity is an Apache module, you can add it to any compatible version of Apache. At the moment that means a reasonably recent Apache version from the 2.0.x branch, although a newer 2.2.x version is recommended. The embedded option is a great choice for those who already have their architecture laid out and don’t want to change it. Embedded deployment is also the only option if you need to protect hundreds of web servers. In such situations, it is impractical to build a separate proxy-based security layer. Embedded ModSecurity not only does not introduce new points of failure, but it scales seamlessly as the underlying web infrastructure scales. The main challenge with embedded deployment is that server resources are shared between the web server and ModSecurity.

    • Reverse proxy

Reverse proxies are effectively HTTP routers, designed to stand between web servers and their clients. When you install a dedicated Apache reverse proxy and add ModSecurity to it, you get a “proper” network web application firewall, which you can use to protect any number of web servers on the same network. Many security practitioners prefer having a separate security layer. With it you get complete isolation from the systems you are protecting. On the performance front, a standalone ModSecurity will have resources dedicated to it, which means that you will be able to do more (i.e., have more complex rules). The main disadvantage of this approach is the new point of failure, which will need to be addressed with a high-availability setup of two or more reverse proxies.

Is Anything Missing?

ModSecurity is a very good tool, but there are a number of features, big and small, that could be added. The small features are those that would make your life with ModSecurity easier, perhaps automating some of the boring work (e.g., persistent blocking, which you now have to do manually). But there are really only two features that I would call missing:

      • Learning

Defending web applications is difficult, because there are so many of them, and they are all different. (I often say that every web application effectively creates its own communication protocol.) It would be very handy to have ModSecurity observe application traffic and create a model that could later be used to generate policy or assist with false positives. While I was at Breach Security, I started a project called ModProfiler [http://www.modsecurity.org/projects/modprofiler/] as a step toward learning, but that project is still as I left it, as version 0.2.

    • Passive mode of deployment

ModSecurity can be embedded only in Apache 2.x, but when you deploy it as a reverse proxy, it can be used to protect any web server. Reverse proxies are not everyone’s cup of tea, however, and sometimes it would be very handy to deploy ModSecurity passively, without having to change anything on the network.

How To Set Up mod_security with Apache on Debian/Ubuntu

Prelude

Mod security is a free Web Application Firewall (WAF) that works with Apache, Nginx and IIS. It supports a flexible rule engine to perform simple and complex operations and comes with a Core Rule Set (CRS) which has rules for SQL injection, cross site scripting, Trojans, bad user agents, session hijacking and a lot of other exploits. For Apache, it is an additional module which makes it easy to install and configure.

In order to complete this tutorial, you will need LAMP installed on your server.

Installing mod_security

Modsecurity is available in the Debian/Ubuntu repository:

apt-get install libapache2-modsecurity

Verify that the mod_security module was loaded:

apachectl -M | grep --color security

You should see a module named security2_module (shared) which indicates that the module was loaded.

Modsecurity’s installation includes a recommended configuration file which has to be renamed:

mv /etc/modsecurity/modsecurity.conf{-recommended,}

Reload Apache

service apache2 reload

You’ll find a new log file for mod_security in the Apache log directory:

root@droplet:~# ls -l /var/log/apache2/modsec_audit.log
-rw-r----- 1 root root 0 Oct 19 08:08 /var/log/apache2/modsec_audit.log

Configuring mod_security

Out of the box, modsecurity doesn’t do anything as it needs rules to work. The default configuration file is set to DetectionOnly which logs requests according to rule matches and doesn’t block anything. This can be changed by editing the modsecurity.conf file:

nano /etc/modsecurity/modsecurity.conf

Find this line

SecRuleEngine DetectionOnly

and change it to:

SecRuleEngine On

If you’re trying this out on a production server, change this directive only after testing all your rules.

Another directive to modify is SecResponseBodyAccess. This configures whether response bodies are buffered (i.e. read by modsecurity). This is only necessary if data leakage detection and protection is required; otherwise, leaving it On will only use up droplet resources and increase the logfile size.

Find this

SecResponseBodyAccess On

and change it to:

SecResponseBodyAccess Off

Now we’ll limit the maximum data that can be posted to your web application. Two directives configure these:

SecRequestBodyLimit
SecRequestBodyNoFilesLimit

The SecRequestBodyLimit directive specifies the maximum POST data size. If anything larger is sent by a client the server will respond with a 413 Request Entity Too Large error. If your web application doesn’t have any file uploads this value can be greatly reduced.

The value mentioned in the configuration file is

SecRequestBodyLimit 13107200

which is 12.5MB.

Similar to this is the SecRequestBodyNoFilesLimit directive. The only difference is that this directive limits the size of POST data minus file uploads; this value should be “as low as practical.”

The value in the configuration file is

SecRequestBodyNoFilesLimit 131072

which is 128KB.

Along the lines of these directives is another one which affects server performance: SecRequestBodyInMemoryLimit. This directive is pretty much self-explanatory; it specifies how much of the “request body” data (POSTed data) should be kept in memory (RAM), with anything more placed on the hard disk (just like swapping). Since droplets use SSDs, this is not much of an issue; however, it can be set to a decent value if you have RAM to spare.

SecRequestBodyInMemoryLimit 131072

This is the value (128KB) specified in the configuration file.

Testing SQL Injection

Before going ahead with configuring rules, we will create a PHP script which is vulnerable to SQL injection and try it out. Please note that this is just a basic PHP login script with no session handling. Be sure to change the MySQL password in the script below so that it will connect to the database:

/var/www/login.php

<html>
<body>
<?php
    if(isset($_POST['login']))
    {
        $username = $_POST['username'];
        $password = $_POST['password'];
        $con = mysqli_connect('localhost','root','password','sample');
        $result = mysqli_query($con, "SELECT * FROM `users` WHERE username='$username' AND password='$password'");
        if(mysqli_num_rows($result) == 0)
            echo 'Invalid username or password';
        else
            echo '<h1>Logged in</h1><p>A Secret for you....</p>';
    }
    else
    {
?>
        <form action="" method="post">
            Username: <input type="text" name="username"/><br />
            Password: <input type="password" name="password"/><br />
            <input type="submit" name="login" value="Login"/>
        </form>
<?php
    }
?>
</body>
</html>

This script will display a login form. Entering the right credentials will display a message “A Secret for you.”

We need credentials in the database. Create a MySQL database and a table, then insert usernames and passwords.

mysql -u root -p

This will take you to the mysql> prompt

create database sample;
connect sample;
create table users(username VARCHAR(100),password VARCHAR(100));
insert into users values('jesin','pwd');
insert into users values('alice','secret');
quit;

Open your browser, navigate to http://yourwebsite.com/login.php and enter the right pair of credentials.

Username: jesin
Password: pwd

You’ll see a message that indicates successful login. Now come back and enter a wrong pair of credentials– you’ll see the message Invalid username or password.

We can confirm that the script works right. The next job is to try our hand with SQL injection to bypass the login page. Enter the following for the username field:

' or true --

Note that there should be a space after the two dashes; this injection won’t work without that space. Leave the password field empty and hit the login button.

Voila! The script shows the message meant for authenticated users.

Setting Up Rules

To make your life easier, there are a lot of rules which are already installed along with mod_security. These are called CRS (Core Rule Set) and are located in

root@droplet:~# ls -l /usr/share/modsecurity-crs/
total 40
drwxr-xr-x 2 root root  4096 Oct 20 09:45 activated_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 base_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 experimental_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 lua
-rw-r--r-- 1 root root 13544 Jul  2  2012 modsecurity_crs_10_setup.conf
drwxr-xr-x 2 root root  4096 Oct 20 09:45 optional_rules
drwxr-xr-x 3 root root  4096 Oct 20 09:45 util

The documentation is available at

root@droplet1:~# ls -l /usr/share/doc/modsecurity-crs/
total 40
-rw-r--r-- 1 root root   469 Jul  2  2012 changelog.Debian.gz
-rw-r--r-- 1 root root 12387 Jun 18  2012 changelog.gz
-rw-r--r-- 1 root root  1297 Jul  2  2012 copyright
drwxr-xr-x 3 root root  4096 Oct 20 09:45 examples
-rw-r--r-- 1 root root  1138 Mar 16  2012 README.Debian
-rw-r--r-- 1 root root  6495 Mar 16  2012 README.gz

To load these rules, we need to tell Apache to look into these directories. Edit the mod-security.conf file.

nano /etc/apache2/mods-enabled/mod-security.conf

Add the following directives inside <IfModule security2_module> </IfModule>:

Include "/usr/share/modsecurity-crs/*.conf"
Include "/usr/share/modsecurity-crs/activated_rules/*.conf"

The activated_rules directory is similar to Apache’s mods-enabled directory. The rules are available in directories:

/usr/share/modsecurity-crs/base_rules
/usr/share/modsecurity-crs/optional_rules
/usr/share/modsecurity-crs/experimental_rules

Symlinks must be created inside the activated_rules directory to activate these. Let us activate the SQL injection rules.

cd /usr/share/modsecurity-crs/activated_rules/
ln -s /usr/share/modsecurity-crs/base_rules/modsecurity_crs_41_sql_injection_attacks.conf .

Apache has to be reloaded for the rules to take effect.

service apache2 reload

Now open the login page we created earlier and try using the SQL injection query in the username field. If you changed the SecRuleEngine directive to On, you’ll see a 403 Forbidden error. If it was left at the DetectionOnly option, the injection will be successful but the attempt will be logged in the modsec_audit.log file.

Writing Your Own mod_security Rules

In this section, we’ll create a rule chain which blocks the request if certain “spammy” words are entered in an HTML form. First, we’ll create a PHP script which gets the input from a textbox and displays it back to the user.

/var/www/form.php

<html>
    <body>
        <?php
            if(isset($_POST['data']))
                echo $_POST['data'];
            else
            {
        ?>
                <form method="post" action="">
                        Enter something here:<textarea name="data"></textarea>
                        <input type="submit"/>
                </form>
        <?php
            }
        ?>
    </body>
</html>

Custom rules can be added to any of the configuration files or placed in modsecurity directories. We’ll place our rules in a separate new file.

nano /etc/modsecurity/modsecurity_custom_rules.conf

Add the following to this file:

SecRule REQUEST_FILENAME "form.php" "id:'400001',chain,deny,log,msg:'Spam detected'"
SecRule REQUEST_METHOD "POST" chain
SecRule REQUEST_BODY "@rx (?i:(pills|insurance|rolex))"

Save the file and reload Apache. Open http://yourwebsite.com/form.php in the browser and enter text containing any of these words: pills, insurance, rolex.

You’ll either see a 403 page and a log entry, or only a log entry, depending on the SecRuleEngine setting. The syntax for SecRule is

SecRule VARIABLES OPERATOR [ACTIONS]

Here we used the chain action to match the variables REQUEST_FILENAME with form.php, REQUEST_METHOD with POST and REQUEST_BODY with the regular expression (@rx) string (pills|insurance|rolex). The ?i: does a case-insensitive match. On a successful match of all three rules, the ACTION is to deny and log with the msg “Spam detected.” The chain action simulates a logical AND across the three rules.

Excluding Hosts and Directories

Sometimes it makes sense to exclude a particular directory or a domain name, for example if it is running an application like phpMyAdmin, as modsecurity will block its SQL queries. It is also better to exclude the admin backends of CMS applications like WordPress.

To disable modsecurity for a complete VirtualHost, place the following

<IfModule security2_module>
    SecRuleEngine Off
</IfModule>

inside the <VirtualHost> section.

For a particular directory:

<Directory "/var/www/wp-admin">
    <IfModule security2_module>
        SecRuleEngine Off
    </IfModule>
</Directory>

If you don’t want to completely disable modsecurity, use the SecRuleRemoveById directive to remove a particular rule or rule chain by specifying its ID.

<LocationMatch "/wp-admin/update.php">
    <IfModule security2_module>
        SecRuleRemoveById 981173
    </IfModule>
</LocationMatch>

Further Reading

Official modsecurity documentation https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual

Official modsecurity website http://www.modsecurity.org/

Some help sites : 1 2 3 4 5

What is Browsershots?

Hello,

Browsershots tests your website’s compatibility on different browsers by taking screenshots of your web pages rendered by real browsers on different operating systems.

What do we do, and why did we create this service?

In our dreams, the web looks good for all users. So we let web designers view screenshots of their pages in different browsers, at different screen resolutions and with different plugins. We’re trying to make this service easy to use, open for all (including access to the source code) and 100% free.

The problem: cross-browser incompatibilities: This project is concerned with a favorite problem of web designers: websites look different in other browsers. Testing a new site in many browsers can be quite time-consuming. Not everybody has a farm of legacy machines with older OSes and browsers. There are online services that offer screenshots of websites in different browsers for considerable fees. For the hobbyist and for open source projects, these fees may be prohibitive.

The solution: community cooperation: The idea behind this project is to distribute the work of making browser screenshots among community members. Everybody can add URLs to the job queue on a central server. Volunteers use a small program to automatically make screenshots of web pages in their browser and upload the results to the server.

 

Link : http://browsershots.org/

Good luck

Download a Free 15-day trial of Parallels Plesk 12 Today!

Attention: Only deploy the Parallels Plesk Trial on servers dedicated for this purpose. Control Panel software is designed to take full control over the server it is allocated to and cannot be deleted without reformatting the server.


 

 

Link: http://sp.parallels.com/products/plesk/trial/

Good luck,

I get "You don’t have permission to access /imp/compose.php on this server" error when trying to send e-mail in horde webmail

I get “You don’t have permission to access /imp/compose.php on this server” error when trying to send e-mail in horde webmail

Hello,

I had an interesting error that was caused by mod_security.

Symptoms

When trying to send e-mail, the following error appears:

Forbidden
You don't have permission to access /imp/compose.php on this server

or:

Forbidden
You don't have permission to access /imp/basic.php on this server.
Apache Server at webmail.mydomain.com Port 80

 

There can also be a similar error in the Apache error log file:

[error] [client 82.200.65.190] ModSecurity: Access denied with code 403 (phase 2). Match of "eq 0" against "MULTIPART_UNMATCHED_BOUNDARY" required. [file "/etc/httpd/conf.d/mod_security.conf"] [line "70"] [msg "Multipart parser detected a possible unmatched boundary."] [hostname "HOSTNAME"] [uri "/horde/imp/compose.php"] [unique_id "8m0u-n8AAAEAAD7blhoAAAAO"]

Cause

Very likely you have something like mod_security installed which is improperly flagging the request. There is basically no way this can be something IMP is doing.

Resolution

Configure mod_security properly or disable it in the Apache configuration.
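
If you would rather not turn mod_security off server-wide, a middle ground is to disable it only for the Horde location. A sketch for an Apache include file (the file name and paths are assumptions; adjust them for your distribution):

cat >> /etc/httpd/conf.d/zz_modsecurity_horde.conf <<'EOF'
<LocationMatch "^/horde/">
    <IfModule security2_module>
        SecRuleEngine Off
    </IfModule>
</LocationMatch>
EOF
service httpd restart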

 

 

Link: http://kb.sp.parallels.com/en/5546

Good luck

How to increase the number of sites that can be hosted on a Parallels Plesk Panel for Linux server

Hello,

I needed to host more than 300 sites on a Linux Parallels Plesk Panel server, and the solution was KB113974.

Symptoms

There are going to be more than 300 sites on a Linux box running with Parallels Plesk Panel. Are there any prerequisites that have to be met?

Resolution

By default, the Apache server allows the hosting of no more than 300 websites on a single box. This is due to a limitation on the number of files that can be opened by the Apache process at one time, which is usually 1,024. Apache needs to open 2-4 log files for each site hosted on the server, and once it reaches the opened-file limit, it crashes.
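
You can check how close the running Apache process is to that limit before changing anything; a sketch (the PID file path varies between distributions):

# how many file descriptors the main Apache process is allowed to open
grep "open files" /proc/$(cat /var/run/httpd.pid)/limits
# how many it actually has open right now
ls /proc/$(cat /var/run/httpd.pid)/fd | wc -l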

  1. The best practice for this case is to recompile Apache with an increased number of allowed file descriptors, as per our Knowledge Base article: 260 How to recompile Apache, PHP, and IMAP with increased value of file descriptors larger than FD_SETSIZE (1024) on a RedHat-like system
  2. In case you do not want to recompile the Apache package, you may enable the Piped Logs feature, which allows you to have up to 900 sites on one server. More details can be found here: 2066 How do I enable Piped Logs for Apache Web Server?
  3. Alternatively, you may use the trick below in order to increase the maximum number of allowed open file descriptors: add ulimit -n 65536 at the beginning of the Apache init script, like this:
    # head -13 /etc/init.d/apache2
    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides: apache2
    # Required-Start: $local_fs $remote_fs $network $syslog $named
    # Required-Stop: $local_fs $remote_fs $network $syslog $named
    # Default-Start: 2 3 4 5
    # Default-Stop: 0 1 6
    # X-Interactive: true
    # Short-Description: Start/stop apache2 web server
    ### END INIT INFO
    set -e
    ulimit -n 65536
    

    Then restart Apache:

    # /etc/init.d/apache2 restart
    

    Note: The file name and content may be different on your system. Another example:

    # head -16 /etc/init.d/httpd
    #!/bin/bash
    #
    # httpd Startup script for the Apache HTTP Server
    #
    # chkconfig: - 85 15
    # description: Apache is a World Wide Web server. It is used to serve 
    # HTML files and CGI.
    # processname: httpd
    # config: /etc/httpd/conf/httpd.conf
    # config: /etc/sysconfig/httpd
    # pidfile: /var/run/httpd.pid
    # Source function library.
    . /etc/rc.d/init.d/functions
    ulimit -n 65536
    

    Restart Apache:

    # /etc/init.d/httpd restart
    

Additional information

As of the release of Parallels Plesk Panel 11.0, Nginx can be installed as a reverse proxy server in front of Apache. It will help you run more sites on one server.

Such a combination of Nginx and Apache provides the following advantages:

  • The maximum allowed number of concurrent connections to a website increases.
  • The consumption of server CPU and memory resources decreases.
  • The maximum effect will be achieved for websites with a large amount of static content (like photo galleries, video streaming sites, and so on).
  • Efficiency of serving visitors with a slow connection speed (GPRS, EDGE, 3G, and so on) improves. For example, a client with a 10 KB/s connection requests a PHP script, which generates a 100 KB response. If there is no Nginx installed on the server, the response is delivered by Apache. During the 10 seconds required to deliver the response, Apache and PHP continue to consume full system resources for this open connection. If Nginx is installed, Apache forwards the response to Nginx (the Nginx-to-Apache connection is very fast as both of them are located on the same server) and releases system resources. As Nginx has a lower memory footprint, the overall load on the system decreases. If you have a large number of such slow connections, using Nginx will significantly improve website performance.

See the Parallels Plesk Panel 11 Administrator’s guide for more details.

 

Link : http://kb.sp.parallels.com/en/113974

Good luck,

How to verify an invalid Plesk backup file

Hello,

 

APPLIES TO:

  • Parallels Plesk 11.0 for Linux
  • Parallels Plesk 11.5 for Linux
  • Parallels Plesk 12.0 for Windows
  • Parallels Plesk 11.0 for Windows
  • Parallels Plesk 11.5 for Windows
  • Parallels Plesk Automation 11.1

Symptoms

A new backup is shown on the Backup Manager page. However, it is marked with a red circle and the following pop-up message is shown when you hover the mouse over the circle:

This is not a valid backup. Data cannot be restored from this file.

The following errors may also be observed:

info_1309071511-formatted.xml fails to validate

<description>The dump has wrong format!</description>

Cause

The backup description contains invalid records, causing the backup to fail XML validation.

Resolution

The backup XML description file should be validated against the backup XML schema. This will help find problem objects in the Plesk configuration database and fix the inconsistencies that cause the XML file to become invalid.

  1. Install the xmllint utility to help you validate XML files:
    • Download xmllint.zip
    • Create the C:\xmllint directory
    • Unpack the archive into C:\xmllint
  2. Find the XML backup file in the Parallels Plesk (Plesk) backup repository. For example, if the backup name in the Plesk GUI is test_info_1004281551.xml and Plesk uses the local repository, then the file will be at "%plesk_dir%\Backup\test_info_1004281551.xml".
  3. Using xmllint, reformat the XML file to be more easily readable:
    c:\xmllint\xmllint.exe --format --encode UTF-8 test_info_1004281551.xml > test_info_1004281551_formated.xml
    
  4. Validate the formatted file:
    c:\xmllint\xmllint.exe --noout --schema "%plesk_dir%\PMM\plesk.xsd" "%plesk_dir%\Backup\test_info_1004281551_formated.xml"
    

    If you get an error as below, use the solution from the article #8488 and remove Envelope elements from the file:

    element Envelope: Schemas validity error : Element '{urn:envelope}Envelope': No matching global declaration available for the validation root.
    

    Then validate the backup file again:

    C:\xmllint\xmllint --noout --schema "%plesk_dir%\PMM\plesk.xsd" "%plesk_dir%\Backup\test_info_1004281551_formated.xml"
    
Link: http://kb.sp.parallels.com/en/117208
Good luck

Let’s Encrypt: Delivering SSL/TLS Everywhere

Hello,

Vital personal and business information flows over the Internet more frequently than ever, and we don’t always know when it’s happening. It’s clear at this point that encrypting is something all of us should be doing. Then why don’t we use TLS (the successor to SSL) everywhere? Every browser in every device supports it. Every server in every data center supports it. Why don’t we just flip the switch?

The challenge is server certificates. The anchor for any TLS-protected communication is a public-key certificate which demonstrates that the server you’re actually talking to is the server you intended to talk to. For many server operators, getting even a basic server certificate is just too much of a hassle. The application process can be confusing. It usually costs money. It’s tricky to install correctly. It’s a pain to update.

Let’s Encrypt is a new free certificate authority, built on a foundation of cooperation and openness, that lets everyone be up and running with basic server certificates for their domains through a simple one-click process.

Mozilla Corporation, Cisco Systems, Inc., Akamai Technologies, Electronic Frontier Foundation, IdenTrust, Inc., and researchers at the University of Michigan are working through the Internet Security Research Group (“ISRG”), a California public benefit corporation, to deliver this much-needed infrastructure in Q2 2015. The ISRG welcomes other organizations dedicated to the same ideal of ubiquitous, open Internet security.

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain can get a certificate validated for that domain at zero cost.
  • Automatic: The entire enrollment process for certificates occurs painlessly during the server’s native installation or configuration process, while renewal occurs automatically in the background.
  • Secure: Let’s Encrypt will serve as a platform for implementing modern security techniques and best practices.
  • Transparent: All records of certificate issuance and revocation will be available to anyone who wishes to inspect them.
  • Open: The automated issuance and renewal protocol will be an open standard and as much of the software as possible will be open source.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the entire community, beyond the control of any one organization.

If you want to help these organizations in making TLS Everywhere a reality, here’s how you can get involved:

To learn more about the ISRG and its partners, check out the About page on letsencrypt.org.

 

How It Works

Anyone who has gone through the trouble of setting up a secure website knows what a hassle getting a certificate can be. Let’s Encrypt automates away all this pain and lets site operators turn on HTTPS with a single click or shell command.

When Let’s Encrypt launches in Summer 2015, enabling HTTPS for your site will be as easy as installing a small piece of certificate management software on the server:

$ sudo apt-get install lets-encrypt
$ lets-encrypt example.com

That’s all there is to it! https://example.com is immediately live.

The Let’s Encrypt management software will:

  • Automatically prove to the Let’s Encrypt CA that you control the website
  • Obtain a browser-trusted certificate and set it up on your web server
  • Keep track of when your certificate is going to expire, and automatically renew it
  • Help you revoke the certificate if that ever becomes necessary.

No validation emails, no complicated configuration editing, no expired certificates breaking your website. And of course, because Let’s Encrypt provides certificates for free, no need to arrange payment.
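
Until that automation reaches your server, you can already check when any existing certificate expires with a one-line openssl command (example.com is a placeholder for your own domain):

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -dates

The notAfter field in the output is the expiry date that the Let's Encrypt client is meant to track and renew for you automatically.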

If you’d like to know more about how this works behind the scenes, check out our technical overview. Or if you really want to dive into the details, read the full protocol specification on Github.

 

Link : https://www.letsencrypt.org/

Good luck ,

 

Installing an SSL Certificate error message


Hello all ,

I recently tried to install an SSL certificate but got this strange error.

 

Symptoms

1. When I try to activate the SSL certificate in Plesk for Windows, I get an error message.

2. When I try to assign the certificate to a specific IP address in Plesk for Windows, I get the following message:

Unable to update certificate in Web Server: websrvmng failed: A specified logon session does not exist. It may already have been terminated. (Exception from HRESULT: 0x80070520) In Microsoft.Web.Administration module
Exception type: System.Runtime.InteropServices.COMException
   at Microsoft.Web.Administration.Interop.IAppHostMethodInstance.Execute()
   at Microsoft.Web.Administration.Binding.AddSslCertificate(Byte[] certificateHash, String certificateStoreName)
   at Microsoft.Web.Administration.BindingManager.BindingTransaction.Commit()
   at Microsoft.Web.Administration.BindingManager.Save()
   at Microsoft.Web.Administration.ServerManager.CommitChanges()
   at ServerManagerFactory.commit()
   at IIS7ServerManager.commit(IIS7ServerManager* )

Cause

A wrong CA bundle was supplied for the SSL certificate.

Resolution

Obtain a valid CA certificate (bundle) for the SSL installation.

It is recommended to remove the problematic certificate installation and reinstall the certificate with the valid CA bundle in Plesk.
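
Before reinstalling, a quick way to check whether the certificate, the CA bundle, and the private key actually belong together is a few openssl commands. The file names below are placeholders:

# Check that the certificate chains correctly to the CA bundle
openssl verify -CAfile ca-bundle.crt your-domain.crt

# Check that the certificate and the private key match (the two hashes must be identical)
openssl x509 -noout -modulus -in your-domain.crt | openssl md5
openssl rsa -noout -modulus -in your-domain.key | openssl md5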

 

Good luck ,

Best free WordPress Backup: BackUpWordPress


Hello ,

The best plugin for backing up WordPress and its database: BackUpWordPress

BackUpWordPress will back up your entire site including your database and all your files on a schedule that suits you. Try it now to see how easy it is!

This plugin requires PHP version 5.3.2 or later

Features

  • Super simple to use, no setup required.
  • Works in low memory, “shared host” environments.
  • Manage multiple schedules.
  • Option to have each backup file emailed to you.
  • Uses zip and mysqldump for faster backups if they are available.
  • Works on Linux & Windows Server.
  • Exclude files and folders from your backups.
  • Good support should you need help.
  • Translations for Spanish, German, Chinese, Romanian, Russian, Serbian, Lithuanian, Italian, Czech, Dutch, French, Basque.

Installation

  1. Install BackUpWordPress either via the WordPress.org plugin directory, or by uploading the files to your server.
  2. Activate the plugin.
  3. Sit back and relax safe in the knowledge that your whole site will be backed up every day.

The plugin will try to use the mysqldump and zip commands via the shell if they are available; using them greatly reduces the time it takes to back up your site.
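
For reference, this is roughly what such a shell-based backup looks like when done by hand; a minimal sketch, with the database credentials and paths as placeholders:

# Dump the WordPress database (user, password and database name are placeholders)
mysqldump -u wp_user -p'wp_password' wp_database > database.sql

# Archive the site files together with the database dump
zip -r backup-$(date +%F).zip /var/www/html/wordpress database.sql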


 

Link : https://wordpress.org/plugins/backupwordpress/

Good luck

How to Analyze a Distributed Denial-of-Service (DDoS) Attack


What is a DDoS Attack?

As per Wikipedia, a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users.

In this small post I would like to show a few useful commands to use if you are experiencing a DDoS attack. In my case, Nginx is the front-end server. The access log format looks like this:

log_format main '$remote_addr - $remote_user [$time_local] "$host" "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" -> $upstream_response_time';

In the log file we’ll see something like this:

188.142.8.61 - - [14/Sep/2014:22:51:03 +0400] "www.mysite.com" "GET / HTTP/1.1" 200 519 "6wwro6rq35muk.ru" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.191602; .NET CLR 3.5.191602; .NET CLR 3.0.191602)" "-" -> 0.003

Analyzing DDoS Attack

tail -f /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | logtop

This command lets you see the bigger picture: the distribution of unique IPs sending requests, the number of requests from each IP, and so on.

The main thing here is that all of this operates in real time, so we can monitor the situation and make the necessary configuration changes as we go. For example, we can ban the top 20 most active IPs via iptables, or temporarily limit the geography of requests in nginx with the help of GeoIP (http://nginx.org/en/docs/http/ngx_http_geoip_module.html).
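
As a simple example, blocking one of the offending addresses is a single iptables rule (the IP below is just taken from the logtop output shown further down as an illustration):

# Block a single abusive IP
iptables -I INPUT -s 95.65.66.183 -j DROP

# Remove the rule again once the attack is over
iptables -D INPUT -s 95.65.66.183 -j DROP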

Once you run the command, it will display (and will update in real-time) something like the following:

3199 elements in 27 seconds (118.48 elements/s)
1 337 12.48/s 95.65.66.183
2 308 11.41/s 122.29.177.10
3 304 11.26/s 122.18.251.54
4 284 10.52/s 92.98.80.164
5 275 10.19/s 188.239.14.134
6 275 10.19/s 201.87.32.17
7 270 10.00/s 112.185.132.118
8 230 8.52/s 200.77.195.44
9 182 6.74/s 177.35.100.49
10 172 6.37/s 177.34.181.245

In the given case, the columns mean:

  • 1 — the sequence number
  • 2 — the number of requests from the given IP
  • 3 — the number of requests per second from the given IP
  • 4 — the IP itself

At the very top you will see the summary for all of the requests. We can see that IP 95.65.66.183 sends 12.48 requests per second and has sent 337 requests during the last 27 seconds.

Let’s review it in detail:

tail -f /var/log/nginx/nginx.access.log — continuously reads the end of the log file.

cut -d ' ' -f 1 — splits each line into fields using the delimiter given by the -d flag (in this case, a space).
The -f 1 flag means that we only keep field number 1 (in this case, the IP address that sent the request).

logtop — counts the number of identical strings (i.e., IPs), sorts them in descending order and prints them to the screen as a list, adding statistics along the way (on Debian it can be installed via aptitude from the standard repository).

grep "&key=" /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | sort | uniq -c | sort -n | tail -n 30

This shows the distribution of matching log lines by IP address.

In my case, we needed to gather statistics per IP for requests containing the &key=… parameter.

We are going to see something of the kind:

31 66.249.69.246
47 66.249.69.15
51 66.249.69.46
53 66.249.69.30
803 66.249.64.33
822 66.249.64.25
912 66.249.64.29
1856 66.249.64.90
1867 66.249.64.82
1878 66.249.64.86

  • 1st column — how many times the string (the IP address, in our case) occurred
  • 2nd column — the IP address itself

We can see that IP 66.249.64.86 has sent 1,878 requests (later we will see via Whois that this IP belongs to Google and is not “harmful”).
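
To check who an IP address belongs to, a plain whois query is enough; the exact field names vary between registries, so you may also want to read the full output:

whois 66.249.64.86 | grep -i -E 'orgname|netname|descr'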

Let’s take a closer look at it:

grep "&key=" /var/log/nginx/nginx.access.log — finds all lines in the log that contain the "&key=" substring (no matter where in the line it appears)

cut -d ' ' -f 1 — (see the previous example) extracts the IP address

sort — sorts the lines (this is necessary for the correct operation of the next command)

uniq -c — prints unique lines and counts how many times each one occurred in the input (-c flag)

sort -n — sorts the lines by comparing them as numerical values (-n flag)

tail -n 30 — prints the last 30 lines (-n 30 flag; any number of lines can be specified)

All of the commands above were run on Debian and Ubuntu, but they should look pretty much the same on other Linux distributions.

Increase the Upload Size for MySQL Database on cPanel with phpMyAdmin using WHM


 

Hello,

A cPanel/WHM server imposes a limit on the size of a MySQL database dump that can be imported through phpMyAdmin. The default limit is 50 MB.

The best way to work around this limitation is to adjust a couple of settings in the WHM interface; editing a php.ini file alone sometimes makes no difference.

 

Log into your WHM interface and type Tweak in the search bar.

[Screenshot: Tweak Settings in WHM]

The Tweak Settings page appears. In the Find field on the right, type: upload size

[Screenshot: cPanel PHP max upload size setting]

 

 

Change the cPanel PHP max upload size to what you need and save.

Go back to Tweak Settings and in the find bar type: post

[Screenshot: cPanel PHP max POST size setting]

 

Change the cPanel PHP max POST size to what you need
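
For reference, these two Tweak Settings correspond to the standard PHP directives shown below; if you ever need to set them in a php.ini file directly (for example on a non-cPanel server), these are the values to change, with 128M as an arbitrary example size:

; post_max_size should be at least as large as upload_max_filesize
upload_max_filesize = 128M
post_max_size = 128M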

That’s it. You can now import a larger database directly through phpMyAdmin; go back and restore the default settings afterwards if required.

 

Good luck