Saturday, April 26, 2014

Set up and secure Apache web server under CentOS


 

Apache is available in the official CentOS repositories, so you can install it as root with the command yum install httpd. Start the httpd service and make sure that it is added to the system startup settings:
service httpd start
chkconfig httpd on
 
You can verify that Apache is running with the command netstat -tulpn | grep httpd. If it is, you should see output similar to this:
tcp       0      0 :::80                       :::*                       LISTEN      PID/httpd

By default, Apache serves TCP traffic on port 80 for HTTP and port 443 for the secure HTTPS protocol. Apache's initialization script is at /etc/init.d/httpd, while configuration files are stored under /etc/httpd/. The default document root is /var/www/html, and log files are stored under the /var/log/httpd/ directory. We'll store files for our primary site in /var/www/html, and virtual host files in /var/www/site-a and /var/www/site-b.

Before working on the primary site, make sure that the server's host name is defined. Edit /etc/httpd/conf/httpd.conf, look for ServerName, and modify the line:
ServerName www.example.com:80
Save the file and restart the service.
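A syntax check before the restart can save you from taking the site down over a typo:
apachectl configtest
service httpd restart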

Every website needs an index file, which generally contains both text and code written in HTML, PHP, or another web scripting language. For this example just create the index file manually at /var/www/html/index.html. You can then access the primary site by pointing a browser to www.example.com.
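For example, a one-line placeholder page (swap in your real content later):
echo '<html><body><h1>www.example.com works!</h1></body></html>' > /var/www/html/index.html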

Hosting multiple sites

Sometimes you might want to host multiple sites on the same Apache server. For example, if your company needs separate websites for each department or if you want to set up multiple web applications, hosting each site on separate physical servers may not be the best option. In such cases you can host multiple sites on a single Apache server, and each of the sites can run with its own customized settings.

Apache supports name-based and IP-based virtual hosting. Name-based virtual hosts are disabled by default. To enable name-based virtual hosting, edit Apache's httpd.conf configuration file and uncomment the line with NameVirtualHost:
NameVirtualHost *:80

This directive tells Apache to enable name-based hosting and listen on port 80 on all of the server's addresses; you can use a specific IP address instead of the asterisk wildcard character.
Each virtual host needs a valid DNS entry to work. To set up DNS for a production site, you must add the records to the authoritative DNS server for the domain. Generally, the primary website is configured with an A record and the virtual hosts with CNAME records pointing at it.
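Once the records exist, you can confirm that each name resolves to this server (using this article's example hostnames):
dig +short www.example.com
dig +short site-a.example.com
dig +short site-b.example2.com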

Enabling virtual hosts overrides the primary website unless you declare the primary website as a virtual host as well. The first declared virtual host has the highest priority, and any request that doesn't match a defined virtual host is served by it. So if site-a.example.com or site-b.example2.com is not properly configured, or if someone points site-c.example.com at this Apache server, visitors will see www.example.com. Edit /etc/httpd/conf/httpd.conf and make sure that the virtual host with ServerName www.example.com is defined first:
## start of virtual host definition ##
<VirtualHost *:80>
 ServerAdmin admin@example.com
 DocumentRoot /var/www/html/ 
 ServerName www.example.com
 ## Custom log files can be used. Apache will create the log files automatically. ##
 ErrorLog logs/www.example.com-error_log
 CustomLog logs/www.example.com-access_log common
</VirtualHost>
## end of virtual host definition ##

To set up the other virtual hosts, first create index.html files for the sites at /var/www/site-a and /var/www/site-b (a quick way to do that is sketched after the definitions below), then add the virtual host definitions to httpd.conf, and finally restart the httpd service:
## start of virtual host definition ##
<VirtualHost *:80>
 ServerAdmin admin@example.com
 DocumentRoot /var/www/site-a/
 ServerName site-a.example.com
 ## Custom log files can be used. Apache will create the log files automatically. ##
 ErrorLog logs/site-a.example.com-error_log
 CustomLog logs/site-a.example.com-access_log common
</VirtualHost>
## End of virtual host definition ##

## start of virtual host definition ##
<VirtualHost *:8000>
 ServerAdmin admin@example2.com
 DocumentRoot /var/www/site-b/
 ServerName site-b.example2.com
 ## Custom log files can be used. Apache will create the log files automatically. ##
 ErrorLog logs/site-b.example2.com-error_log
 CustomLog logs/site-b.example2.com-access_log common
</VirtualHost>
## End of virtual host definition ##
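For the first step, creating the index files, here's a minimal sketch (placeholder markup only):
mkdir -p /var/www/site-a /var/www/site-b
echo '<html><body><h1>site-a</h1></body></html>' > /var/www/site-a/index.html
echo '<html><body><h1>site-b</h1></body></html>' > /var/www/site-b/index.html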

In some cases, system administrators set up web applications on nonstandard ports to make the services harder to stumble upon, and users have to add the port to the URL manually to reach the site. We've done that here by running site-b on port 8000, so we have to tell Apache to listen on that port by adding a Listen line to httpd.conf:
Listen 80
Listen 8000

Since this is the first virtual host defined under port 8000, any other virtual host running on 8000 that lacks a proper definition will default to site-b.example2.com:8000.

Restart the Apache service for the changes to take effect.
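After the restart, the same netstat check from earlier should show httpd bound to both ports:
service httpd restart
netstat -tulpn | grep httpd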

Hardening the server against flooding attacks

Though they may live behind a firewall, HTTP servers are generally open to the public, which makes them available to attackers as well, who may attempt denial of service (DoS) attacks by flooding a server with requests. Fully hardening Linux and Apache against attacks is beyond the scope of this article, but one way to protect a web server from a flood of requests is to limit the number of connections accepted from each source IP address, which you can do with the iptables packet filter. You should base the limit for a production server on actual traffic; in this tutorial we will cap each source IP address at roughly 250 new connections per five-minute window:
 
service iptables stop
rmmod xt_recent
modprobe xt_recent ip_pkt_list_tot=255
service iptables start

rmmod removes the xt_recent module from the kernel, and modprobe loads it again with a modified parameter, raising ip_pkt_list_tot from its default of 20 to 255. This step is necessary because the --hitcount value used in the rules below cannot exceed ip_pkt_list_tot.
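Note that a parameter set with modprobe lasts only until the next reboot. To make it permanent, you can record it in a modprobe configuration file (the file name below is arbitrary):
echo 'options xt_recent ip_pkt_list_tot=255' > /etc/modprobe.d/xt_recent.conf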

With the updated parameter, we will create a script that modifies iptables to institute some basic security best practices. Feel free to adapt it to your needs, but make sure that the rules are compatible with your organization's security policy. Save the following as firewall-script:
#!/bin/bash
## Flush all old rules so that we can start with a fresh set ##
iptables -F

## Delete the user-defined chain 'HTTP_WHITELIST' in case it exists from a previous run (the error on first run is harmless) ##
iptables -X HTTP_WHITELIST 2>/dev/null

## Create the chain 'HTTP_WHITELIST' ##
iptables -N HTTP_WHITELIST

## Record the source address of each new HTTP connection in the 'recent' list named 'HTTP' ##
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP

## Send all new HTTP connections to the chain 'HTTP_WHITELIST' ##
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j HTTP_WHITELIST

## Log connections that exceed the limit of 250 new connections per five minutes, then drop them ##
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 300 --hitcount 250 --rttl --name HTTP -j ULOG --ulog-prefix HTTP_flood_check
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 300 --hitcount 250 --rttl --name HTTP -j DROP

Make the script executable, then run it:
chmod +x firewall-script
./firewall-script

You might also want some trusted IP addresses or subnets to be excluded from the iptables check. For that, create a whitelisting script (save it as whitelist-script):
#!/bin/bash
## Trusted address (use your own host or subnet here) ##
TRUSTED_HOST=192.168.1.3
iptables -A HTTP_WHITELIST -s $TRUSTED_HOST -m recent --remove --name HTTP -j ACCEPT

Again, make the script executable, then run it:
chmod +x whitelist-script
./whitelist-script
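Like the modprobe parameter, these iptables rules live only in memory. On CentOS you can persist them so that the iptables init script restores them at boot:
service iptables save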

Now the firewall will allow no more than 250 new connections per five-minute window to the Apache server from each source IP address, while trusted addresses are exempt from the limit entirely.

Tuesday, April 8, 2014

OpenSSL heartbeat information disclosure

Overview

OpenSSL 1.0.1 contains a vulnerability that could disclose private information to an attacker.

Description

OpenSSL versions 1.0.1 through 1.0.1f contain a flaw in their implementation of the TLS/DTLS heartbeat functionality (RFC 6520). The flaw allows an attacker to retrieve the private memory of an application that uses the vulnerable OpenSSL libssl library, in chunks of up to 64 KB at a time. Note that an attacker can leverage the vulnerability repeatedly to retrieve as many 64 KB chunks of memory as are necessary to recover the intended secrets. The sensitive information that may be retrieved using this vulnerability includes:
  • Primary key material (secret keys)
  • Secondary key material (user names and passwords used by vulnerable services)
  • Protected content (sensitive data used by vulnerable services)
  • Collateral (memory addresses and content that can be leveraged to bypass exploit mitigations)

Please see the Heartbleed website for more details. Exploit code for this vulnerability is publicly available. Any service that supports STARTTLS (IMAP, SMTP, POP) may also be affected.

Impact

By attacking a service that uses a vulnerable version of OpenSSL, a remote, unauthenticated attacker may be able to retrieve sensitive information, such as secret keys. By leveraging this information, an attacker may be able to decrypt, spoof, or perform man-in-the-middle attacks on network traffic that would otherwise be protected by OpenSSL.

Solution

Apply an update

This issue is addressed in OpenSSL 1.0.1g. Please contact your software vendor to check for availability of updates. Any system that may have exposed this vulnerability should regenerate any sensitive information (secret keys, passwords, etc.) with the assumption that an attacker has already used this vulnerability to obtain those items.
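On CentOS, for example, you can check whether the installed package has been patched and pull in the update. Distributions often backport the fix without bumping the version string, so checking the package changelog for the CVE identifier (CVE-2014-0160) is more reliable than checking the version number alone:
openssl version
rpm -q --changelog openssl | grep CVE-2014-0160
yum update openssl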

Reports indicate that the use of mod_spdy can prevent the updated OpenSSL library from being utilized, as mod_spdy uses its own copy of OpenSSL. Please see https://code.google.com/p/mod-spdy/issues/detail?id=85 for more details.

Disable OpenSSL heartbeat support

This issue can also be addressed by recompiling OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag. Software that uses OpenSSL, such as Apache or Nginx, would need to be restarted for the change to take effect.
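A rough sketch of such a rebuild from the OpenSSL source tree; the exact configure options depend on your platform and packaging, so treat this as illustrative rather than definitive:
./config -DOPENSSL_NO_HEARTBEATS
make
make test
make install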

Use Perfect Forward Secrecy (PFS)

PFS can help minimize the damage from a secret key leak by making it more difficult to decrypt already-captured network traffic. Note, however, that if a session ticket key is leaked, any sessions that used that ticket could be compromised, and ticket keys may only be regenerated by restarting the web server.
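With Apache's mod_ssl, enabling PFS mostly comes down to preferring ephemeral (EC)DHE cipher suites. A configuration sketch along commonly recommended lines (the cipher list is illustrative; tune it against your client base):
SSLProtocol all -SSLv2
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:AES128-SHA:!aNULL:!MD5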

Vendor Information 


Vendor                              Status        Date Notified  Date Updated
Check Point Software Technologies   Affected      07 Apr 2014    08 Apr 2014
Debian GNU/Linux                    Affected      07 Apr 2014    08 Apr 2014
Fedora Project                      Affected      07 Apr 2014    08 Apr 2014
FreeBSD Project                     Affected      07 Apr 2014    08 Apr 2014
Gentoo Linux                        Affected      07 Apr 2014    08 Apr 2014
Mandriva S.A.                       Affected      07 Apr 2014    07 Apr 2014
NetBSD                              Affected      07 Apr 2014    08 Apr 2014
OpenBSD                             Affected      07 Apr 2014    08 Apr 2014
openSUSE                            Affected      -              08 Apr 2014
Red Hat, Inc.                       Affected      07 Apr 2014    08 Apr 2014
Slackware Linux Inc.                Affected      07 Apr 2014    07 Apr 2014
Ubuntu                              Affected      07 Apr 2014    07 Apr 2014
Infoblox                            Not Affected  07 Apr 2014    08 Apr 2014
m0n0wall                            Not Affected  07 Apr 2014    08 Apr 2014
Peplink                             Not Affected  07 Apr 2014    08 Apr 2014

Friday, April 4, 2014

Ubuntu One - Cloud storage replacement


Many of us had hoped it was an April Fools' prank, but Ubuntu One will, in fact, no longer be available as of June 1, 2014, and all data will be wiped on July 31, 2014. This will leave a great number of Ubuntu users without a cloud service. Fear not, intrepid users: there are plenty of cloud services and tools available, each with a native Linux client, ready and willing to take your Ubuntu One data and keep it in the cloud.
But out of the many services, which might best suit your needs? Given what happened with Ubuntu One, many users are growing leery of services hosted by smaller companies, which could easily fold in the coming years. With larger companies making it hard for the small players to compete, the best bet for long-term cloud storage is to go with proven companies that have a track record of keeping the lights on and the data safe. With that in mind, which of the large companies' offerings work best on the Linux platform? Let's take a look.

Google Drive (with Insync)

Because I have become more and more reliant on Google Drive, the obvious solution for me was to shift all of my Ubuntu One data to Google's cloud service. After seeing the prices (100GB for $1.99/month), it was a no-brainer. There was one catch: syncing. For reasons unknown to me, a native Linux client has yet to appear from behind the magic curtain that is Google. No problem; a company called Insync has us covered. With a robust and reliable native client for Linux, Insync easily syncs your Google Drive data to the location of your choice on your Linux box.
The Insync client is one of the best clients available for the Linux desktop. It does have a price associated with it, but for anyone looking to sync their Google Drive account with Linux, it is well worth it. Pricing ranges from a one-time $15.00 for a single consumer account license to a business license at $15.00 per account, per year.
Installation is simple. I will walk you through the process on the Ubuntu 13.10 platform (Insync is available for Ubuntu, Debian, and Fedora). The one requirement for Insync is Python.
  1. Download the .deb file into your ~/Downloads directory
  2. Open a terminal window
  3. Issue the command cd Downloads
  4. Issue the command sudo dpkg -i insync_XXX_xxx.deb (Where XXX is the release number and xxx is the architecture)
  5. Type your sudo password and hit Enter
  6. Hit 'y' when prompted
  7. Allow the installation to complete.
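If step 4 fails with unmet dependency errors, apt can usually resolve them afterwards; the usual pattern looks like this (file name as downloaded):
cd ~/Downloads
sudo dpkg -i insync_XXX_xxx.deb
sudo apt-get install -f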
Once the command line installation is complete, it's time to walk through the GUI wizard. This will open up automatically. In the resulting window, click Start Insync.
You'll be required to log into your Google account to complete the setup and then give Insync permission to work with your Google Drive account. Click Accept when prompted, and then click Start syncing. The synced folder will be named after the address associated with your Gmail account and will reside in your home directory. If you want to relocate this folder, click Advanced setup and you can place it anywhere you like. In the Advanced setup you can also give your synced folder a different name and keep Google Docs files from being automatically converted. Before the wizard completes, you will be asked if you want to integrate Insync with Nautilus. I highly recommend you do this, as it makes it incredibly easy to add folders to Insync with a simple right-click within Nautilus. To integrate, just click Yes when prompted (Figure 1).
[Figure 1: The Insync Nautilus integration prompt]
You will then be prompted for your sudo password, so Insync can download and install the necessary components for Nautilus integration. Finally, click Done in the installer window and, when prompted, click Restart Nautilus.
When Insync is running you'll see an icon in the notification area. From there you can interact with the application to check status, pause, add accounts, and more.

Dropbox

This has been the de facto standard for cloud storage for a long time, with good reason: it works with nearly everything, so you can be sure to have your data synced to all of your devices regardless of platform. The downside of Dropbox is that it's costlier than Google Drive and you're boxed into a single folder. A free account will get you 2 GB of space, and for $9.99/month you get 100 GB. Not much else needs to be said about Dropbox (it has been covered extensively), but the installation of the client is fairly simple. You will first need to sign up for an account, or have a pre-existing account to log into. Once you have that information, do the following (we'll stick with Ubuntu):
  1. Download the appropriate installer file for your platform into your ~/Downloads folder
  2. Open up a terminal window
  3. Issue the command cd Downloads
  4. Issue the command sudo dpkg -i dropbox_XXX_xxx.deb (Where XXX is the release number and xxx is the architecture)
  5. Type your sudo password and hit Enter
  6. Allow the installation to complete
  7. When prompted click the Start Dropbox button (Figure 2)
[Figure 2: The Start Dropbox prompt]
Now it's time to walk through the official Dropbox installation wizard. This is quite simple: it will first ask you for your account credentials (or, if you don't have an account, allow you to set one up). When prompted, log into your Dropbox account from within the wizard and then tell Dropbox where to place the syncing folder. By default, the folder will be ~/Dropbox. You cannot change the name of the folder, and you can only sync that one folder. You do get to choose which Dropbox sub-folders to sync, which can be helpful if you have a large Dropbox folder and a smaller SSD drive.
Once the installation is complete, you'll be prompted to restart Nautilus. Dropbox's Nautilus integration is limited: it lets you move a folder into your Dropbox folder, but you cannot sync folders outside of it.
If you're not using Google Drive, Dropbox is one of the better solutions available for the Linux desktop, especially if you use other platforms and want to sync data across every device.

ownCloud

ownCloud is a bit different from the competition in that it requires you to connect to your own ownCloud server. On the plus side, ownCloud is open source, so anyone can set up their own cloud server. The downside is that you will need an IP address accessible from the outside world in order to make use of it. If you have that available, ownCloud is an incredibly powerful solution that you control. Setting up an ownCloud server is beyond the scope of this article, but installing the client is simple (again, sticking with the Ubuntu platform):
  1. Open up a terminal window
  2. Issue the command sudo sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/desktop/xUbuntu_13.10/ /' >> /etc/apt/sources.list.d/owncloud-client.list"
  3. Issue the command sudo apt-get update
  4. Issue the command sudo apt-get install owncloud-client
  5. Enter your password when prompted and hit Enter.
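apt-get may also warn that the new repository is unsigned. Importing its signing key quiets that warning; the Release.key location below follows the usual openSUSE Build Service layout, so verify it for your release:
wget http://download.opensuse.org/repositories/isv:/ownCloud:/desktop/xUbuntu_13.10/Release.key
sudo apt-key add - < Release.key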
After the install completes, issue the command owncloud and then enter the server address of your ownCloud 5 or 6 server. You will then be prompted for your username and password. Upon successful authentication, you can configure where you want your ownCloud folder to live. Once you set that folder, click Connect, and the ownCloud client will offer to open either the ownCloud folder or the ownCloud web interface. Click Finish and you're done.
The ownCloud client also has a notification icon that allows you to sign in, quit, or open the ownCloud settings. The ownCloud settings window (Figure 3) allows you to:
  • Add a folder
  • Check storage usage
  • Set up ignored files
  • Modify your accounts
  • Check activity
  • Set ownCloud to launch at start
  • Set up a proxy
  • Limit bandwidth
[Figure 3: The ownCloud settings window]
If you want to run your own server, you can download and install it from the ownCloud installer page. NOTE: The web installer is the easiest method for new users. If you don't want to set up your own server, there are plenty of ownCloud service providers available; check out this page for a listing of supported providers. Some of the plans (such as on OwnDrive) are free (1GB of space).
Ubuntu users need not fear the loss of Ubuntu One. With so many cloud services available, most of which offer native Linux clients, there are plenty of choices ready to host your data. Give one of these options a try and see if it doesn't meet your needs.