On Encryption

Hey all, a bit of news before I get on with what I’ve learned so far. I recently looked into joining some groups in the security industry, and the one that caught my eye the most is the OWASP Foundation. I looked into the Newcastle chapter (where I live, in North East England), messaged the mailing list, and was informed the chapter has been inactive for over a year now. So I applied for chapter leadership rights to get it up and running again, and to help educate both myself and others about application security.

I was recently accepted and have been joined by Mike Goodwin, who has a lot more experience and a much greater network than my own. He also applied for OWASP Newcastle chapter leadership (his application came in just as mine was accepted), so I knew he was passionate about the cause.

So if anybody in the area is interested, send me an email, and if you aren’t in the area, visit the OWASP chapters page to get in touch with your local chapter leader. No chapter in your area? Start your own!

Onto the learning.

So I recently attended the NEBytes Security Bytes event, and it was a great talk by Ben Lee (@bibbleq) about various types of encryption.

I’m going to write about something I learned from Ben’s talk.

When you register on a website you put in your username and your password, and the website needs to store both of these values in one way or another. Now your username (let’s use connor) will generally be stored as-is, but it wouldn’t be a good idea if your password (r0nn0c) were stored in the database as plain r0nn0c, because if the database was ever leaked the attacker would have full access to your credentials. The way around this is to hash the password: run it through a one-way function that turns it into a completely different fixed-length value, and store that instead. Strictly speaking this isn’t encryption, because a hash is not designed to be reversed at all.

An example is the image below, which shows how changing even a single character completely changes the hash. This image uses the SHA-1 hash function.

Cryptographic hash function (image via Wikimedia Commons)

Hashing is not just used for passwords; it can be applied to a string of any length.
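
To make that concrete, here’s a minimal Python sketch using the standard hashlib module. It hashes two passwords that differ by one character and prints the completely different SHA-1 digests (SHA-1 is used here only because it’s the one in the image, not because it’s a good choice for real password storage):

import hashlib

# Two inputs differing by a single character produce unrelated digests.
for password in ("r0nn0c", "r0nn0d"):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
    print(password, "->", digest)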

Hashing seems like a good way to keep your passwords safe, but many hashes can be broken using various methods. I won’t go into these methods in this article, other than to say many of the attacks are simple brute-force attacks, trying huge numbers of guesses per second, which is made especially easy as modern GPUs can calculate hundreds of millions of hashes per second with certain algorithms. There are many other types of attack and ways to get around hashing, so don’t take away from this that brute force is the only or the best way; brute-forcing a strong password hashed with a slow algorithm is usually impractical.
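
As a toy illustration of the brute-force idea, here’s a Python sketch of a dictionary attack against an unsalted SHA-1 hash; the wordlist and the “leaked” digest are both made up for the example:

import hashlib

# A hypothetical leaked digest of an unsalted password.
leaked_digest = hashlib.sha1(b"r0nn0c").hexdigest()

# A tiny stand-in wordlist; real attacks use lists with millions of entries.
wordlist = ["password", "123456", "letmein", "r0nn0c"]

for candidate in wordlist:
    if hashlib.sha1(candidate.encode("utf-8")).hexdigest() == leaked_digest:
        print("Cracked:", candidate)
        break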

On top of the hash function, there is also salting, where a set of random characters gets added to your password (it could be at the beginning of the password, the end or somewhere in between) before it is hashed. Because each user gets a different salt, an attacker can no longer use precomputed tables and has to crack every password individually, which makes the job significantly harder.
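
Here’s a rough sketch of salting in Python, with the salt drawn from os.urandom. Note that a real application should prefer a dedicated password-hashing scheme such as PBKDF2 (hashlib.pbkdf2_hmac) or bcrypt over a single SHA-256 pass like this:

import hashlib
import os

def hash_password(password):
    """Return (salt, digest); the salt must be stored alongside the digest."""
    salt = os.urandom(16)  # 16 random bytes, unique per user
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Re-hash the attempt with the stored salt and compare.
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest() == stored_digest

salt, digest = hash_password("r0nn0c")
print(verify_password("r0nn0c", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False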

That’s all I’m going to write about for today, so thanks for reading. If there’s anything you’d like to add or any feedback you’d like to give, feel free to comment below and educate the masses.

As always, teach yourself something new today, then teach that to someone else!

–Edit

I will make a quick edit from time to time to clarify some things I feel I wasn’t clear enough about.

As much as I have mentioned that your password will be stored as a hash, maybe even a salted hash (which is one of the best-case scenarios), this may not necessarily be true. If you request a password reset or click “forgot password” and receive your password back in plain text, then it is being stored in the database as plain text too.

More information gathering, focusing on domain host and images

It’s been a while since I put some information on the website. That’s because I had to wait for a domain transfer to go through (so I can hide my WHOIS information, which will be covered today), and there have been some personal issues going on, but I’m back writing to you and hopefully we can cover a lot of ground today.

There is a lot of information in this article so I’ll get straight to it.

Finding out domain host details

The first thing covered is WHOIS information: you can use the ‘whois’ command in a Linux terminal or go to a website such as http://www.whoishostingthis.com/. This lets you look up information on who owns a domain. For example, doing a whois check on www.microsoft.com gives me:

Registrant Organization: Microsoft Corporation
Registrant Street: One Microsoft Way,
Registrant City: Redmond
Registrant State/Province: WA
Registrant Postal Code: 98052
Registrant Country: US
Registrant Phone: +1.4258828080
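
If you want to script this lookup, here’s a minimal Python sketch that shells out to the system whois command (so it assumes whois is installed) and filters for the registrant lines:

import subprocess

# Run the system 'whois' command and print only the registrant fields.
result = subprocess.run(["whois", "microsoft.com"], capture_output=True, text=True)
for line in result.stdout.splitlines():
    if line.strip().startswith("Registrant"):
        print(line.strip())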

So you can see how this registrant data would be useful for gathering a target’s address and telephone number, as both are required when registering a domain. The best way to avoid this, from what I have seen, is to use a WHOIS privacy service such as WhoisGuard; for example, if you whois this website you get this information:

Registrant Name: WHOISGUARD PROTECTED
Registrant Organization: WHOISGUARD, INC.
Registrant Street: P.O. BOX 0823-03411
Registrant City: PANAMA
Registrant State/Province: PANAMA
Registrant Postal Code: 00000
Registrant Country: PA
Registrant Phone: +507.8365503

You can see that if I didn’t have WhoisGuard enabled on my domain, you would now have access to my telephone number, my address and far more information than I would like to be publicly available.

To find out the IP address where a web address is hosted, you can use the Linux ‘host’ command, such as ‘host connorcarr.com’, or you can visit getip.com, which does much the same and gives a bit more information about the ISP and location.
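
If you’d rather do this from a script, here’s a minimal Python equivalent using the standard socket module (connorcarr.com is just the example domain from above):

import socket

# Resolve a hostname to its IPv4 address, like the 'host' command does.
hostname = "connorcarr.com"
print(hostname, "has address", socket.gethostbyname(hostname))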

One of my favourite things to do regarding company IP addresses is to use a service such as onsameip.com to search for the domain name, which will then tell you which other websites are hosted on the same IP. This can of course help you find other websites owned by the same company or person.

Finding subdomains and hidden web directories

One of my new favourite things to do when I get to a new domain is to append /robots.txt to it. This is a file that tells web crawlers not to index or display certain directories. Visiting google.com/robots.txt gets us many lines of Disallow/Allow/Sitemap entries, which are hidden from search engines but not from anyone who requests the file directly. These aren’t necessarily always directories they don’t want you finding, but many times I have found an admin login page from browsing these lists, so always keep /robots.txt in mind when visiting websites.
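
Here’s a quick Python sketch that fetches a site’s robots.txt with the standard urllib module and prints just the disallowed paths:

from urllib.request import urlopen

# Fetch robots.txt and list the paths the site asks crawlers to skip.
with urlopen("https://www.google.com/robots.txt") as response:
    for line in response.read().decode("utf-8").splitlines():
        if line.startswith("Disallow:"):
            print(line)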

The last two things I will discuss today are about images. One of the main sources of data we get from images is Exif data. Exif stands for Exchangeable Image File Format, a standard that specifies the formats for images, sound and tags used by digital cameras (yes, this does include smartphones). This is important to us because the metadata can be exploited. For instance, if I use Jeffrey’s Exif Viewer on the image URL http://boingboing.net/assets_mt/2010/06/22/xeniskatepark4.jpg, I get the photographer’s name, the location, the date and time of the shot, and the camera it was shot with. This is useful, of course, for gathering as much data as you can from images.
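
You can also pull Exif data yourself. Here’s a rough Python sketch using the third-party Pillow library (pip install Pillow), where photo.jpg is a placeholder filename:

from PIL import ExifTags, Image

# Open a local image and print whatever Exif tags it carries.
image = Image.open("photo.jpg")  # placeholder path
for tag_id, value in image.getexif().items():
    tag_name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric IDs to names
    print(f"{tag_name}: {value}")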

The final thing is reverse image searching. This is best used to find out whether an image is original or has been uploaded to the internet previously, and the best way to do it is to use a web service such as https://www.tineye.com. My favourite thing about TinEye is that you can sort results by oldest, which helps you find the first time the image hit the web. A more common method is to find an image on Google, right-click it and press the S key or click ‘Search Google for image’, which makes Google do a reverse search for that image. I did, however, struggle to find a way to sort those results by date.

That’s the last of the information in this article. Thanks for staying with me for so long; if you have any feedback for me, feel free to send it along via the form on the contact page. Thanks again for reading, and try to learn something every day.

On information gathering

Going a bit more in-depth with the initial recon phase of penetration testing: I feel that this phase is the most important, and it’s the one you should spend the most time on. My reasoning is that this is the phase where you discover the network infrastructure and architecture, which allows you to plan the best way or ways to attack the system. All of this is done under the radar, and it is the only real chance you have to stay under the radar, which is why I say you should spend the most time on it. After you have done the reconnaissance, you are free to plan and develop the exploits necessary to get through or around any defences.

Since the last time I wrote an article I have learned a little more about information gathering, mainly various ways to discover the system and server types a client is using. The methods I learned are:

Using whatweb (available on Kali Linux) to determine the server, IP address, PHP version, redirect location and various other software information (such as which version of WordPress a site is using); a rough sketch of the underlying idea appears after this list.

A lot of websites can mask their server type from this kind of search. In cases like that we can use httprecon (available at http://www.computec.ch/projekte/httprecon/?s=download&v=unknown; the latest version at the time of writing is 7.3), which runs a set of comparative checks against certain criteria to determine the likelihood of the server being each of various server types.

We can check which servers are running HTTPS by running sslscan, also available on Kali Linux. This program additionally lets you view the supplier of any SSL certificate a site may hold; see the second sketch below.
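
whatweb does far more than this, but a large part of the idea is simply reading identifying HTTP headers. Here’s a minimal Python sketch of that (https://example.com is a placeholder target):

from urllib.request import urlopen

# Fetch a page and print headers that often identify the server stack.
with urlopen("https://example.com") as response:
    for header in ("Server", "X-Powered-By"):
        value = response.headers.get(header)
        if value:
            print(f"{header}: {value}")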
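
And here’s a rough sketch of checking who issued a site’s SSL certificate with Python’s standard ssl module, similar in spirit to part of what sslscan reports (www.google.com is just an example host):

import socket
import ssl

hostname = "www.google.com"  # example target
context = ssl.create_default_context()

# Open a TLS connection and inspect the certificate the server presents.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issuer:", dict(pair[0] for pair in cert["issuer"]))
        print("Valid until:", cert["notAfter"])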

Thanks again for reading. Go and learn something new today, and if you feel like you have, teach it to someone else. Teach it to me!

Introduction

Hey, so I’ve been following Mohamad Ramadan’s Learn the Basics of Ethical Hacking and Penetration Testing (the course is free using the coupon code “meta”) as a basis from which to learn the skills necessary to become a network security professional. I would love for you to be able to take some knowledge away from this, and if you have any advice or knowledge for me I would be happy to take it on board.

Some of the main things I have learned so far are:

Using Dradis for storing information about the client and client systems. An example would be storing a list of IPs that the network uses and a list of open, vulnerable ports.

Information gathering techniques, such as searching the client company’s website for a staff list, finding the head of network security from the company, and searching for him on LinkedIn. If, for example, he lists the technologies he currently works with, we can use this to our advantage: if he says he uses Apache 2.1, then we can prepare research on security exploits for Apache 2.1.

The final (and my personal favourite) thing I have learned so far is Google search techniques, where you enter specific criteria and operators into the Google search bar. An example would be the search "Warning: mysql_connect(): Access denied for user: '*@*'" "on line" -help -forum, which reveals pages that leak failed database login attempts.

Thanks for reading and remember we should always be learning.