Tuesday, December 4, 2012

Reverse engineering some JavaScript

A couple of days ago I had to reverse engineer some JavaScript. The classic encodings are hexadecimal or decimal, but this time there was something else. It looked basically like this:

a = ["3f","15","f4","22","4o","4g","17", ... ]

I changed the actual values to make sure I don't give anything away about the actual investigation, so yes, if you try to decode these values they will not make any sense.

It was clear that the nasty stuff was hidden in that array. At the end of the code I found a call to the built-in function parseInt() that actually interacted with the array. It looked like this:
parseInt(a[i], 36)


Again this is altered code, but the real value that was there instead of 36 was hard-coded, and i was a variable used in a loop to run over the array. What is interesting is that the loop was written like this:


The value was hard-coded, but the classical way of looping that you get taught in programming classes was not used.

I had a look at parseInt() and what it basically does is take a string (the value in the array) and turn it into an integer. But some of the values, like "4o", resemble neither hexadecimal nor decimal numbers, so something had to be up with that hard-coded value 36. The 36 is a radix. The radix represents the numerical system you are working in: had it been 16, it would have been hexadecimal; 10 would represent the decimal system, 8 the octal system. To make things a bit more complicated for a human, our attacker chose 36, the maximum, since the radix parameter has to be a value between 2 and 36.
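As an illustration of how the radix changes the result (these values are my own, not taken from the malicious script):

```javascript
// parseInt(string, radix) interprets the string in the given base.
console.log(parseInt("ff", 16)); // 255 (hexadecimal)
console.log(parseInt("10", 8));  // 8   (octal)
console.log(parseInt("4o", 36)); // 168: '4' is 4, 'o' is 24, so 4*36 + 24
console.log(parseInt("z", 36));  // 35, the highest digit in base 36
```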

I wrote some code to turn it into an array of decimal numbers. After this transformation the next step was String.fromCharCode(). This JavaScript function transforms a character code into a string, so I had my script do that and turned my array into characters.

When I made the program print the characters one after the other, the content of the obfuscated malicious JavaScript revealed itself and I could go on with the investigation.
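Putting the pieces together, such a decoder is only a few lines; the array below is my altered sample, so it decodes to nonsense on purpose:

```javascript
// Altered sample array; the real one held the obfuscated payload.
const a = ["3f", "15", "f4", "22", "4o", "4g", "17"];

let decoded = "";
for (const v of a) {
  // Each entry is a base-36 number encoding one character code.
  decoded += String.fromCharCode(parseInt(v, 36));
}
console.log(decoded); // with the real array, this printed the hidden script
```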

Tuesday, November 20, 2012

Blocking Phishing Sites

The other day I got the question how the popular browsers block you from going to malicious websites. Interesting question: that they have to check against a database seems logical, but the actual inner workings were a mystery to me, so I looked it up.

Firefox, Safari and Chrome all use the same technology called the Google Safe Browsing API. Microsoft Internet Explorer uses a technology called SmartScreen Filter.

The Google Safe Browsing API has a complete website with the technical details. Basically it works like this:

Say you type the address of a website into your browser. Your browser then needs to check a number of lookup expressions for that URL against the database: combinations of the host name (and its parent-domain suffixes) with the path (and its prefixes).

As you can see this is quite a lot of strings, and looking up strings in a database is usually slow. The trick used here is that a hash of each string is calculated and only a 4-byte prefix is sent to the database. In case of a match the database returns all entries with that prefix, and the client can then compare the full hash against the returned list.

If the full hash matches, the end user is warned; otherwise the page gets loaded.

As sources for their database they mention Antiphishing.org and Stopbadware.org, but you can be pretty sure those are not the only sources.

At the Windows Live blog I could find more information on how the Microsoft SmartScreen Filter works. The example Microsoft gives is the following:

Let's say a malicious website is hosted at canada-pharmacy.us. This URL gets marked in the database as "bad", and besides the URL, the hosting IP address is marked as "bad" too. SmartScreen will generalize this to IPs in the neighborhood, based on ASN blocks, the way IP address ranges are split up by owner.

DNS server ratings are also part of the SmartScreen technology. A DNS server that seems to know just a little too much about abusive domains is given a lower rating, according to the blog. Unfortunately for a techie this is a rather meaningless description.

SmartScreen's telemetry comes from end-user reports, third parties, traffic from URLs showing up in e-mails, and logs. These feeds go into machine learning algorithms that either flag or pass a URL. When the algorithm is in doubt, the information is handed to an analyst who does the necessary research.

As a conclusion I would say: make sure you use a browser that has such a technology enabled. It is not perfect, but it is free and better than nothing.

Wednesday, November 14, 2012

Changing your SSH port and configuring SVN

A while ago I wrote a post on SSH. Yesterday I was discussing the brute-force attacks you get, and my conversation partner said that he now systematically configures SSH on another port and rarely sees any attacks.

Just for fun, tonight I switched port 22 to another port and will monitor my logs to see if I get the same findings. What I actually did is alter the port forwarding scheme, so the config of the server itself stays rather standard. I expect to see the same thing as my conversation partner.

After the change I had to figure out how this works with SVN, which I use over svn+ssh. Without the necessary modifications that fails, of course, since port 22 is not open anymore.

On the server side you don't have to change anything. On the client side you add a line to the [tunnels] section of ~/.subversion/config:
sshtunnel = ssh -p port_number -q

You can use any name instead of sshtunnel, but I personally like clear naming for when I am tired. The -q at the end is important; otherwise you will get the message "Killed by signal 15."

A last hurdle was that I already had a number of checked-out projects, which created a little problem. I simply renamed the old directory and downloaded a fresh copy, this time using
svn co svn+sshtunnel://server/path/dir dir

To merge the contents of both (the old one and the new one) without overwriting the new svn config, it was simply:
cp -R -n old_dir new_dir
rm -R -f old_dir

Thursday, September 13, 2012

Remote JavaScript Inclusion

Presentation by Steven Van Acker, DistriNet KULeuven at OWASP chapter meeting
Paper: https://lirias.kuleuven.be/bitstream/123456789/354587/1/fp028-nikiforakis.pdf

The basic conclusion of Steven is that browsers don't care what they execute: if they find a script to execute, they will. Browsers have a basic principle called origin separation, which basically means that site A can't mess with the JavaScript from site B. The origin is based on the tuple (protocol, host, port), for example (http, facebook.com, 80).
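That tuple can be seen directly with the WHATWG URL class (built into browsers and Node); the URLs below are just illustrations:

```javascript
// Two URLs share an origin only when protocol, host, and port all match.
const profile = new URL("http://facebook.com:80/profile");
const home    = new URL("http://facebook.com/home");
const secure  = new URL("https://facebook.com/home");

console.log(profile.origin);                   // "http://facebook.com"
console.log(profile.origin === home.origin);   // true: 80 is http's default port
console.log(profile.origin === secure.origin); // false: different protocol
```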

A new problem was created when we wanted to build cross-website applications, for example when Facebook wants to integrate the functionality of Google Maps. There are three options: the first is to build their own, the second is to build a browser whose JavaScript engine doesn't implement origin separation, and the last one is to use the API of googlemaps.com and execute it on facebook.com. In that last case, googlemaps.com code is executed in the facebook.com context.

This is a potential problem. The website of the qTip plugin for the jQuery framework had an incident at the beginning of 2012: an attacker left a manipulated version of the library on the website. The problem was of course that people then executed the malicious script in the context of their own websites.

For his research Steven used a crawler that downloads websites together with the remote JavaScripts they point to, and executes the scripts to catch dynamic inclusions. The crawler is based on HTMLUnit. The list of websites to crawl was the Alexa top 10,000, and in total the crawler downloaded from 3,300,000 sites. A funny anecdote is that one website did 295 includes.

The conclusion of his research was that he found 5 new attack vectors:

  • Cross-User Scripting: src=http://localhost/script.js. This could be used in an attack by dropping the JavaScript upfront on the victim's system.
  • Cross-Network Scripting: src=http://local_network_ip/script.js. This could be used in the same way as Cross-User Scripting, but with the JavaScript dropped on a web server on the local network.
  • Stale IP attack: when the script tag points to an IP address instead of a domain name, one only has to obtain that IP and fire up a web server serving a malicious script under the correct name. Steven even saw IP addresses pointing into DHCP pools.
  • Stale domain name attack: when the script tag points to a domain name that is not registered any more, one only has to register the domain and fire up a web server serving a malicious script under the correct name.
  • Typosquatting attack: when the programmer makes a typo, one registers that 'wrong' domain and fires up a web server serving a malicious script under the correct name.

As mitigation strategies Steven points out that you can't rely on the end user, so it is up to the programmer and maintainer. A solution could be a fine-grained JavaScript sandbox to detect malicious activity, but that isn't mature right now, so we need something better. Another solution he came up with is to download the remote scripts and host them yourself.

To check whether this would be feasible, Steven downloaded the top 1000 libraries he found and checked in 3 consecutive downloads whether they changed; this way he could filter out the dynamically generated libraries. He ended up with a pool of 803 scripts that weren't dynamic. Over a period of a week, 89.79% were never modified and 96.76% were modified at most once. So his conclusion is that hosting your own copy is a real possibility.

This doesn't mean you can't still end up hosting a malicious script, so do your due diligence and check the script you are hosting, and be aware that functionality may break; that is part of using somebody else's API.

Saturday, June 9, 2012

SSH fun - a poor man's VPN

Recently I saw an episode of Hak5 on SSH and using keys to authenticate. I was thinking of implementing a VPN solution at home for when I am on the road, but when I saw the show I liked the simplicity of it, so I got started, and a couple of minutes of fun later I had an up-and-running poor man's VPN.

If you want a quick and easy solution to use an untrusted Internet connection, this might be a solution for you. Here is a description on how to implement it.

Setting up the server

Your server can actually be any system that can run an OpenSSH server. It can be a recent beefy thing, but also an old desktop lying around. To install the OpenSSH server on an Ubuntu system, you do:

apt-get install openssh-server

Configure the firewall

Since we are about to expose this system to the Internet, a firewall is a must-have. Ubuntu comes with ufw built in.

To configure the firewall for the openssh-server, you do:
ufw limit OpenSSH

To start the firewall run:
ufw enable

Configure your server

The configuration can be found in /etc/ssh/sshd_config. Change the following settings:

  • PermitRootLogin no
  • RSAAuthentication yes
  • PasswordAuthentication no
  • UsePAM no
SSH services connected to the Internet are constantly under attack. Since I want to use the attack data, I changed my log level to verbose.

Now that the server is configured, you have to restart the service. You do this with:

service ssh restart

Key generation

The basic idea is to create a key pair. One is the public key, which will be installed on the server; the other is your private key, installed on the device you will connect from, your laptop or touchpad for example.

To generate the key you type on the laptop or touchpad:

ssh-keygen -t rsa

This will generate your key pair. During the generation it will ask where you want to store the keys and for a passphrase to protect the private key. The keys are by default stored in your home directory under the hidden directory ".ssh/". The private key is called id_rsa and the public key is called id_rsa.pub. Another important file in the .ssh directory is known_hosts; SSH records the fingerprints of the servers you connect to in it, so it can warn you when a server's key suddenly changes.

Getting the public key on the server
To get the public key to the server there are a couple of possibilities. 
  • the ssh-copy-id command
  • copy the text from id_rsa.pub into the file authorized_keys in the hidden .ssh directory under your profile on the server.
Since ssh-copy-id is not available on every platform, my preference goes to the second option.

It is always a good idea to keep a copy of your private key id_rsa in a secure place, in case something goes wrong with your system.

Opening up the gates

When your Internet provider gives you a dynamic IP address, you can configure your system to use dynamic DNS. If you have a static IP you don't need dynamic DNS; you just need to know your IP.

Configure your router so that you allow incoming traffic on port 22.

Connecting to your server

A regular SSH session

To build just a regular SSH session you type on the command line:
ssh account@server

Surfing/Skyping/... over SSH session

You can use the SSH session as a SOCKS5 proxy. To do this, you do:
ssh -D port account@server

Then you need to configure your browser to use a SOCKS v5 proxy on localhost with the port you specified.


By default all logging takes place in /var/log/auth.log.

A regular connection looks like this:
timestamp server sshd[pid]: Connection from IP port port_number
timestamp server sshd[pid]: Found matching RSA key: key_in_hex
timestamp server sshd[pid]: Postponed publickey for user from IP port port_number ssh2 [preauth]

A disconnect looks like this:
timestamp server sshd[pid]: Received disconnect from IP ...

When somebody just does "ssh ip_address" it will show up like this: 
timestamp server sshd[pid]: Connection from IP port port_number
timestamp server sshd[pid]: Connection closed by IP [preauth]

Usually brute-force attacks are done like "ssh useraccount@IP", and since the attacker doesn't have the key, it will show up like this:
timestamp server sshd[pid]: Connection from IP port port_number
timestamp server sshd[pid]: Invalid user useraccount from IP
timestamp server sshd[pid]: input_userauth_request: invalid user useraccount [preauth]
timestamp server sshd[pid]: Connection closed by IP [preauth]

Final thoughts on security considerations

If you want to slow down the attacks you can implement a framework called Fail2ban. This Python framework parses logs and uses the iptables firewall to block brute-force attempts.
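A minimal jail sketch for the SSH setup described above; the section and option names follow Fail2ban's jail configuration format, but the jail name varies by version and the values here are illustrative, so check your distribution's defaults:

```ini
# /etc/fail2ban/jail.local -- minimal sketch, values are illustrative
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
```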

Friday, June 8, 2012

NoSQL basics

Like everybody in this world I get confronted with big data, and with my background as a DBA I needed more than "OK, this is MongoDB and you use it like this" or "this is Redis and this is the manual".

It took me a while to find some good resources, but there is one in particular I want to share with you. Ilya Katsov wrote a must-read article on his blog called NoSQL Data Modeling Techniques. I recommend reading it before you start playing with NoSQL. It will influence how you think about the data and how you put it into the database, which has a direct impact on performance.

You can find the blogpost at http://highlyscalable.wordpress.com/2012/03/01/nosql-data-modeling-techniques/

Sunday, April 15, 2012

Do you actually care or do you just want a good feeling?

Since I started working in ITSec, there was this thing that was never clear to me but bugged me often. It looked like most companies understood that they were, or could be, victims of fraud, but their actions to deal with it ranged from weird to good.

The other day, while listening to a Freakonomics podcast on my way home from work, it hit me. The podcast was about hybrid cars and why one car is very popular while the rest are not. The deal was that the popular "green" car was well known because it looks completely different from other cars, so it could be differentiated. What you are buying is mostly image, or how you feel about what others think of you.

Back to ITSec. I wondered why people still buy stuff ranging from cheap to verrrry expensive technology that sometimes does absolutely nothing. The "yes, let's buy this box/app and all our problems will be gone" attitude is, in my opinion, very similar to the popular hybrid car.

Most organizations buy stuff to have a clean conscience. Let me explain that hypothesis.

The chance of being a victim online is quite real for any organization. Most solutions work partially, against old attacks, and only if they are configured correctly. Since organizations constantly change, they are usually not configured correctly. In case of an incident the organization can say: well, it is not our fault, we had antivirus, next-generation firewalls, web application firewalls, IDS/IPS, ... Do you recognize this pattern?

The question actually is: did you spend all that money on those things just to sleep better at night (aka a nice sales girl or smooth-talking guy convinced you when presenting the product)?

If you are the man in charge and reading this, I am not telling you not to buy anything just because you want to sleep better at night. Security starts with small things that you can implement without spending huge amounts of money.

An example for the skeptics: do you think everybody should be able to access the payroll data and change it, or should there be a procedure in place to log who accesses it, when, why, ...? You might think this is a bogus example, but recently there have been cases around the world where fake employees were created and money was stolen from companies.

Another example if you are still not convinced. We live in a world where every event leaves a log trace. Most logs are kept for compliance reasons only, not mined for the value they can give you. If one of your employees is on a mission abroad and there is a login event from one IP, and the next login event comes from the other side of the globe within a time span in which it is impossible to travel between the two places, you know you have a problem. This costs almost nothing: as an employer you know where your employee is, and with some GeoIP data and timetables you should be able to do the math quite easily.
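A sketch of that "impossible travel" math; the coordinates would come from a GeoIP lookup, and the 900 km/h threshold (roughly airliner cruise speed) is my own assumption:

```javascript
// Great-circle distance between two points, via the haversine formula.
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371; // mean Earth radius in km
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Flag two logins when covering the distance would require an
// unrealistic travel speed.
function impossibleTravel(login1, login2, maxKmh = 900) {
  const km = haversineKm(login1.lat, login1.lon, login2.lat, login2.lon);
  const hours = Math.abs(login2.time - login1.time) / 3.6e6; // ms -> hours
  return km / hours > maxKmh;
}

// A Brussels login, then a Sydney login one hour later: flag it.
const first  = { lat: 50.85, lon: 4.35,   time: Date.parse("2012-04-15T08:00Z") };
const second = { lat: -33.87, lon: 151.21, time: Date.parse("2012-04-15T09:00Z") };
console.log(impossibleTravel(first, second)); // true
```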

My advice is to spend your money wisely; it is a scarce resource. Look at the easy stuff first, the things you already have, so that your basics are covered, instead of just buying a good feeling because you got things other people want and will try to take from you.

Oh, and just one more thing: what works today does not necessarily work tomorrow, bad guys adapt too. Review what you do and its success rate, and share the information with your competitors and CSIRTs, because they will give you their information in return, which you can use to build better defenses.

Wednesday, January 25, 2012


It has been more than 2 months at the CERT. My first task was writing a report about what Symantec called the Nitro case.

Usually I will not blog about this but I learned a couple of valuable lessons.

The first thing about this case was that social engineering was used: real-life proof that it happens out there. Awareness training is hard but necessary. I admit I have no easy solution, but I guess starting with explaining to people what social engineering is might be a good thing. I listen to the SE podcast, and one of the items they had on the show was ITsec setting up a fake website and sending out an email with a link, to see how many people can be tricked. It is something worth considering, I think.

The next thing I learned is that the modus operandi was to gather and stage all data on internal servers. It made me think of a DBA problem: a lot of the customers were not monitoring their servers and network. When your hard disk usage changes over a couple of nights from x% to z% while you were expecting y%, a series of bells should go off. The same goes for the network: the traffic on systems should be predictable. Although we have this technology, it is not easy to implement, and it will not stop the attack; it will only let you discover it.

Finally I think the most important lesson is that it can happen to everyone.