Thursday, November 25, 2010

PCI DSS Version 2.0 has landed!

Highlights from the new PCI Data Security Standard

Written by: Tal Morgenstern, QSA,
PCI Practice Manager, Comsec Consulting

The PCI Data Security Standard (DSS) has a three-year lifecycle, over the course of which the PCI Security Standards Council (SSC) follows technological developments and gathers input and requests from QSAs and certified organizations for the purpose of updating the PCI DSS. Until now, the version in use has been 1.2, released in October 2008. Accordingly, at the end of October 2010 the PCI SSC released PCI DSS version 2.0.

As a QSA who has led numerous PCI certification processes, I chose to hold a round table yesterday in Israel for leading enterprises required to comply with the PCI standard, in order to present the significant differences between the versions and assist these organizations in adapting to and preparing for the new requirements. Version 2.0 takes effect in January 2011; however, organizations already certifying against version 1.2 will be able to continue with that version until the end of 2011.

The new PCI DSS version provides:

  • Clarifications regarding Cardholder Data (CHD) environment scoping.
  • Guidelines regarding virtualization with an emphasis on hardening procedures and configurations.
  • Further details on DMZs, namely separation of organizational and internet networks.
  • Guidelines for organizations that issue credit cards regarding the storage of Sensitive Authentication Data (SAD).
  • Increased flexibility regarding cryptographic key management, with regard to change management procedures and dual control.
  • Increased flexibility that enables implementing a risk management program as a short-term alternative to critical patch installation under requirement 6.1, which requires critical patches to be installed within one month of release.
  • Recognition of other leading international standards, such as CWE and CERT, for the purpose of security code review. In addition, version 2.0 requires an application security expert's involvement in security code review processes, in addition to an automated tool.
  • Increased flexibility with regard to saving, storing, and subsequently deleting CHD during remote access, upon business need only; until now, requirement 12.3.10 prohibited saving CHD locally under any circumstances.
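The risk-based patching flexibility described in the bullets above could, purely for illustration, be operationalized as a simple triage routine. The Python sketch below is a hypothetical example only: the scoring weights, field names, and deadline tiers are our own assumptions and are not part of PCI DSS.

```python
from datetime import date, timedelta

# Hypothetical sketch of a risk-based patch triage, in the spirit of the
# added flexibility around requirement 6.1: instead of a flat 30-day
# deadline for every critical patch, each patch is ranked and scheduled
# by risk. Scoring weights and thresholds here are illustrative only.

def patch_deadline(release_date, cvss_score, in_chd_scope, internet_facing):
    """Return a target installation date based on a simple risk ranking."""
    risk = cvss_score
    if in_chd_scope:
        risk += 3          # system stores or processes cardholder data
    if internet_facing:
        risk += 2          # directly reachable from untrusted networks
    if risk >= 12:
        days = 7           # highest risk: patch within a week
    elif risk >= 9:
        days = 30          # the classic "one month" window
    else:
        days = 90          # lower-risk systems follow normal cycles
    return release_date + timedelta(days=days)

print(patch_deadline(date(2010, 11, 1), 9.3, True, True))   # → 2010-11-08
```

Any real program of this kind would of course be built on the organization's own documented risk-ranking methodology and approved by its QSA.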

Click here to download the full presentation.

Thursday, November 4, 2010

Security Alert: Internet Explorer Zero-day Attack Could Allow Remote Code Execution

Microsoft has released an advisory (2458511) about a new zero-day attack vulnerability discovered for Internet Explorer which enables remote code execution.

IE users should stay alert and download the patch upon its release, which is expected in the next few hours.

According to Microsoft,
"The vulnerability exists due to an invalid flag reference within Internet Explorer. It is possible under certain conditions for the invalid flag reference to be accessed after an object is deleted. In a specially-crafted attack, in attempting to access a freed object, Internet Explorer can be caused to allow remote code execution."
Microsoft is continuing to monitor this vulnerability, but expects the patch to mitigate the risks involved with using IE.

Tuesday, September 14, 2010

Product Focus - Secure Islands' IQProtector

At Comsec, we are often exposed early on to innovative information security products and technologies as part of our ongoing work in the field. Comsec prides itself on being "vendor neutral" in its consulting services; this, however, does not change the fact that we frequently encounter products we simply find really neat.

We have chosen to dedicate this blog post to a cutting-edge Data Leakage Prevention (DLP) and Information Protection and Control (IPC) technology called IQProtector, provided by Secure Islands, since it impressed us with its capabilities on a professional level.

While there are numerous DLP and IPC products on the market today, we have recently had the opportunity to experience this technology firsthand, and find that it delivers an advanced and much-needed solution through a unique approach that puts an interesting twist on existing solutions, giving the product an added fresh factor. But feel free to read about it yourself.

Secure Islands Technologies, Ltd. & the Data Leakage Prevention Challenge
Many organizations today have trouble controlling access to data classified as sensitive, since it is often transferred over numerous channels using different technologies. These companies face the continual challenge of ensuring that files classified as sensitive do not lose their classification under today's dynamic and deperimeterized working methods.

The Fresh Factor: Existing DLP solutions usually attempt to forestall the leakage of sensitive data as it tries to pass through the various organizational exit points such as IP networks or exits at the endpoints. Secure Islands' new solution removes channels, exits, and users from the equation – and uniquely embeds persistent protection and classification within the data itself, at the moment of creation or initial organizational access.

How They Do It: Leveraging predefined parameters, group policies and other digital rights management methods, the IQProtector technology performs a rapid content-aware, infrastructure-agnostic, usage-based discovery process, analyzing and classifying data accessed during actual business activity, and creates an enterprise-wide map of where sensitive data resides, who accesses it, where it is sent, and how it is used. This, in effect, supports a number of different processes within an organization, including regulatory compliance, business policy definition, classification, control, compartmentalization, encryption, and ongoing analysis.

If you would like to hear more about this technology, be sure to join us at two upcoming events:

Comsec Consulting will be hosting a breakfast event featuring this ground-breaking security technology on September 16, 2010, at the Van der Valk Hotel, Breukelen, the Netherlands. Attendance is free but requires registration in advance. To register email:

In addition, Comsec will be presenting at the Microsoft Open House dedicated to Secure Islands' technology, October 4, 2010 in Raanana, Israel. To register click here.

Monday, August 9, 2010

From Stuxnet to Web 2.0 - Are we losing our security control?

It has been more than two weeks since the widespread propagation of the Stuxnet malware in mid-July, and experts remain stumped about the origins and perpetrators of this large-scale and damaging attack.

It is undisputed that cybercrime and cyber-espionage are on a constant rise: SonicWall reports that malware instances tripled in the first six months of 2010, from 60 million to approximately 180 million attacks. This can be witnessed in the wide spread of the Zeus botnet we reported on in April, and in the cyber-espionage methods launched daily, such as the recent posting of Facebook user data on the Pirate Bay for public download, an issue Comsec brought forth in our position paper on the Social Networking Corporate Threat.

It’s not surprising, then, that according to a Check Point-sponsored Ponemon study, more than 80% of security professionals expressed concern about the many widely available, virtually OS-independent Web 2.0 platforms that run diverse and complex applications in any common browser, believing they “have a significant [adverse] impact on a company’s security posture”.

However, if until now these types of attacks have largely been intended for the attackers' personal financial gain, this new, sophisticated Stuxnet exploit does not conform to that trend at all. The stealthy malware primarily targets sensitive infrastructure companies (power plants and physical systems) through their SCADA (Supervisory Control and Data Acquisition) systems. It abuses a zero-day vulnerability in the handling of specially crafted shortcut (.lnk) files placed on USB drives: the malware executes automatically as soon as the operating system reads the .lnk file, allowing the attacker to take control of the system.

Leading organizations have dubbed this virus "the most advanced attack seen to date", which has led security researchers to believe that the attack may even originate from a national government or intelligence agency.

While cyber-espionage is no new phenomenon, with the US leading the pack in this area, what is essentially troubling is the general public's consensus that cyber-espionage at the government tier is largely acceptable. A recent Sophos report indicates that 63% of people in the UK think it is OK for their government to spy on other countries using hacking or malware, although 40% added the proviso that this is acceptable only "if the UK is at war with them". This is in spite of the fact that there are no real "rules of the game" that govern this type of spying, pretty much making this territory anyone's game right now.

Comsec, as a veritable hub for projects in both the private business sector and public authorities and regulatory entities, has had a bird’s-eye view of security in both sectors for years, and has long been advocating the importance of combining these two forces. With the unique strengths of each sector, from methodologies, technology, and R&D in the private sector to legislation, regulation and enforcement, and advanced intelligence capabilities in the public sector, the fusion of both worlds is the most promising approach to a comprehensive solution for this arena.

Former NSA Director Michael Hayden, in his keynote speech at the Black Hat convention in Las Vegas on July 29th, focused primarily on the present state of cyber-war and the vagueness and grey areas that dominate this activity. If warfare until now has had four domains, air, ground, water, and space, the cyber world now represents a fifth domain that is for the most part uncharted territory. The current NSA Director, General Keith Alexander, who also spearheads the U.S. Cyber Command, recently discussed the need for the United States to establish a proper framework to guide its responses to cyber-attacks. However, in a rather ambiguous statement, Hayden emphasized that although 90 percent of Cybercom's thinking is about attack, at least 90 percent of its work is actually spent on defense.

Hayden believes that the US is best positioned to lead the initiative and establish an international agreement that will dictate the methods of operation in this realm—from the types of systems that can be tapped through actions that will be defined as against international law, in the same way that there are guiding principles in all other warfare domains.

Until then however, it is largely agreed upon that many more advanced and sophisticated attacks will come into play before these international guidelines are ironed out, and prudence is of the essence.

Comsec's experience has proven that the following measures have been the most effective, with the highest success rates amongst our clients, in dealing with threats of this nature:

  • Assess the threat. Carry out an in-depth risk assessment taking into account all of the potential risk factors involved when using Web 2.0 platforms and other company applications.
  • Devise a defensive countermeasure strategy. Analyze the results and formulate a company policy that takes all of the risk factors into account.
  • Security controls. Assess current company security controls and measures in place, ensure they are modified to conform with the new company policy, and gauge their level of relevance and effectiveness when dealing with these types of threats.
  • Educate. Increase awareness among your employees of the threats these platforms pose through a comprehensive awareness campaign that covers the “dos and don’ts” of Web 2.0.
  • Measure success. Establish measurable KPIs and KSIs to evaluate the steps that have been taken, and the adherence to the new company policy, especially over an extended period of time.
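As a hypothetical illustration of the last step, adherence to the new policy can be tracked with a simple KPI over successive periods. The metric, figures, and thresholds below are assumptions chosen for the sake of example, not a Comsec methodology:

```python
# Hypothetical KPI sketch: track adherence to a new Web 2.0 security
# policy over time. The snapshot data and the 75% target are
# illustrative only.

def adherence_kpi(conforming, total):
    """Percentage of checked items (users, endpoints...) that conform."""
    return round(100.0 * conforming / total, 1)

# Quarterly snapshots: (conforming endpoints, total endpoints)
snapshots = {"Q1": (610, 1000), "Q2": (740, 1000), "Q3": (880, 1000)}

for quarter, (ok, total) in sorted(snapshots.items()):
    kpi = adherence_kpi(ok, total)
    status = "on track" if kpi >= 75 else "needs action"
    print(f"{quarter}: {kpi}% adherent ({status})")
```

The value of such a metric lies less in any single number than in its trend over an extended period, which is exactly what the bullet above calls for.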

For further information contact:

Wednesday, July 28, 2010

New Minimum Requirements for Remote Access to Registered Databases - Israel

On September 1, 2010 a new directive will come into effect that establishes the minimum authentication requirements for remote access to registered databases.

The Israeli Ministry of Justice, Technology, and Information recently released a draft directive that establishes the authentication requirements for remote access to registered databases. The need for such a directive was reinforced of late, after a partial, unauthorized copy of the Population Registry was leaked to the Internet. This registry contains current information about all Israeli citizens (up to the year 2006), including details such as a person’s name, personal identification number, address, relatives, and more.

The Bottom Line
This directive introduces a new requirement: it is prohibited to rely solely on information found in the Population Registry to authenticate an individual for the purpose of remote access to data in any registered database.

Who Does this Directive Apply To?
Any body or organization that maintains a sensitive database that is subject to the Data Protection Act.

What Does Your Organization Need to Do?
Authentication for remote access to this sensitive data must be based on at least one unique piece of information known only to the information seeker (such as a password or another unique detail given personally to the individual) that is not found in the Population Registry or in any other publicly accessible database.

In addition, the higher the sensitivity of the data contained in the database, the more authentication factors should be required, or stronger measures should be demanded, such as a smart card or biometric authentication.
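A minimal sketch of this tiered requirement, assuming hypothetical sensitivity tiers and factor names (the directive itself does not define these), might look like:

```python
# Illustrative sketch of the directive's tiered-authentication idea:
# remote access must rely on at least one secret NOT found in the
# Population Registry, and higher sensitivity demands stronger factors.
# The tier names and factor labels are assumptions, not from the directive.

REQUIRED_FACTORS = {
    "low":    {"password"},
    "medium": {"password", "otp"},
    "high":   {"password", "smart_card_or_biometric"},
}

def access_allowed(sensitivity, presented_factors):
    """True only if every factor required for this tier was presented."""
    return REQUIRED_FACTORS[sensitivity].issubset(presented_factors)

print(access_allowed("high", {"password"}))                             # False
print(access_allowed("high", {"password", "smart_card_or_biometric"})) # True
```

The key property the directive demands is visible in the mapping: no tier is satisfied by Population Registry details alone, since every required factor is a secret issued personally to the individual.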

Is your organization prepared for this change?

Tuesday, July 6, 2010

Cloud computing - White, Fluffy and Safe or a Hazy, Mysterious step into the Unknown? PART II

Security and the Cloud

Cloud advocates will argue that customers stand to benefit from multiple points of replication and defence and from the use of sophisticated technologies that individual companies could not afford on their own; yet others insist that cloud computing is a ‘security nightmare’. Whilst this view may be a little extreme, cloud computing will inevitably have a major impact on the way we think about and react to a variety of information security issues.

For this reason Gartner recently issued a report outlining some of the key information security aspects to be aware of regarding the use of cloud services.

1. Privileged user access: Sensitive data processed outside the enterprise brings with it an inherent level of risk, as outsourced services bypass the "physical, logical and personnel controls" in- house IT teams are able to exert. For this reason it is vital for companies to gather as much information as possible about the people who will be managing their data, including the hiring and control procedures of privileged administrators.

2. Regulatory compliance: Customers are ultimately responsible for the security and integrity of their own data, even when it is held by a service provider. Traditional service providers are subjected to external audits and security certifications. Cloud computing providers must also undergo a similar process before an organization can entrust them with sensitive corporate data.

3. Data location: When using cloud solutions you may not know exactly where, or in which country, your data is hosted. It is important to ask providers whether they will commit to storing and processing data in specific jurisdictions, and whether they will make a contractual commitment to obey local privacy requirements.

4. Data segregation: Typically data in the cloud is held in a shared environment alongside data from other customers. For this reason it is important to understand what measures are taken to effectively encrypt and segregate data.

5. Recovery: It is crucial for the purposes of Business Continuity Management to understand how your data will be recovered in a disaster situation. Ideally your data and application infrastructure should be replicated across multiple sites. It is also critical to understand the restoration process and anticipated recovery times.

6. Investigative support: Investigating inappropriate or illegal activity becomes significantly harder in cloud computing. The very nature of cloud services makes it especially difficult to investigate a security breach or incident, as data logs for multiple customers may be co-located.
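Point 4 above (data segregation) is often implemented with per-tenant encryption keys. As a minimal, standard-library-only sketch, assuming a hypothetical key-derivation scheme (real providers use HSM-backed key management), each tenant's key can be derived so that it never matches another's:

```python
import hashlib
import hmac

# Minimal sketch of cryptographic tenant segregation: each customer's
# data keys are derived from a master key plus a tenant identifier, so
# one tenant's key can never decrypt another tenant's data. The master
# key here is a placeholder; in practice it would live in an HSM.

MASTER_KEY = b"demo-master-key-material"

def tenant_key(tenant_id: str) -> bytes:
    """Derive a per-tenant data-encryption key from the master key."""
    return hmac.new(MASTER_KEY, tenant_id.encode(), hashlib.sha256).digest()

k_a = tenant_key("customer-a")
k_b = tenant_key("customer-b")
print(k_a != k_b)                        # True: keys differ per tenant
print(k_a == tenant_key("customer-a"))   # True: derivation is deterministic
```

The questions a customer should still ask the provider, per the Gartner point, are where this derivation happens, who can reach the master key, and how key material is destroyed when a contract ends.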

Gartner also recommends that smart customers employ a neutral, experienced third party to undertake a security risk assessment mapping the specific threats and security challenges an organization may encounter when moving services to a cloud environment. Comsec can vouch first-hand for the complexities involved in mapping, preparing for, and mitigating the risks of migrating to the cloud. Our extensive experience in this area has enabled us to conduct large-scale projects of this nature, most recently for an organization of 140,000 employees. Cloud computing has unique attributes that require a risk assessment in areas such as data integrity, recovery, testing procedures, privacy, and security policies, in addition to the management, monitoring, alerting and reporting of security vulnerabilities or breaches. It is also important to evaluate the legal ramifications in areas such as regulatory compliance and auditing.

However, despite some of the security concerns, it looks like cloud computing is here to stay. Whether adoption becomes as prevalent and deep as some forecast will depend largely on overcoming fears of the cloud. When considering solutions to the problems raised by cloud computing, it is important to remember that many of these issues are essentially old problems in a new setting. Attacks on server infrastructure and web service vulnerabilities existed long before cloud computing became fashionable. Although some aspects of security will be exacerbated in the cloud, such as data privacy, segregation, access control and governance, others, such as incomplete security patching, will be mitigated. So whilst cloud computing adds a new dimension to the security challenge, it also provides an opportunity for improvement.

Monday, June 28, 2010

Cloud computing - White, Fluffy and Safe or a Hazy, Mysterious step into the Unknown? PART I

Cloud 101: The Basics

Cloud computing is clearly one of today’s most enticing technology areas and is currently one of the top buzzwords in the Hi-Tech industry. Cloud computing is not a new concept; most of us already use this technology on a daily basis through services like Hotmail, Gmail and Facebook. In the simplest of terms, cloud computing is IT-as-a-Service; rather than an organization building its own IT infrastructure to host databases or applications, this is done by a third party with large server farms. The organization then accesses its data and applications over the internet. In other words, under this new procurement model, IT becomes a utility, consumed like water or electricity.

Cloud computing is growing fast: according to Gartner, the market is currently worth about $2.4bn but is predicted to grow to $8.1bn by 2013. Several large companies have already partially adopted the ‘cloud’ approach, including all of the top five software companies. More recently, business services provider Rentokil Initial rolled out a cloud email solution to its 30,000 employees.

It’s not difficult to see the appeal of cloud computing, and enthusiasts are quick to point out its key benefits:

Scalability: Organizations that have grown rapidly, perhaps through acquisitions, often struggle with the complexities required to develop a single coherent enterprise infrastructure. Furthermore, cloud systems are built to cope with sharp increases in workload and seasonal fluctuations. Take for example a tour operator who has to cope with a huge surge in demand during the summer months or a disaster recovery team that requires additional computing power to respond to a large scale emergency.

Cost Effective: Because IT providers host services for multiple companies, sharing complex infrastructure cuts costs and allows organizations to pay only for what they actually use.

Speed: Simple cloud services can be deployed rapidly and work ‘out of the box’. This is a great advantage for small emerging businesses that may need to establish a secure e-commerce website quickly. Equally, for more complex software and database solutions, cloud computing allows organizations to skip the hardware procurement and capital expenditure phase.

Mobility: Many companies today operate a geographically diverse workforce. Cloud services are designed to be used anywhere in the world, so organizations with globally dispersed and mobile employees can access their systems on the move.

However, despite the trumpeted business and technical advantages of cloud computing, many businesses have been relatively slow on the uptake. Major corporations that are cloud users are, for the most part, putting only their less sensitive data in the cloud. There appear to be significant concerns over certain aspects of cloud computing, including reduced control and governance, regulatory requirements, excessive standardization, usability, and fears over connectivity.

However, without a shadow of a doubt, the biggest area of concern is the impact on information security. Will corporate and customer data be safe? What about data protection and legal compliance requirements? What are the corporate risks involved in entrusting a single entity with the data of an entire organization?

To be continued...

Tuesday, June 15, 2010

The Social Networking Corporate Threat

Security Alert

As part of ongoing Comsec R&D, we have published an in-depth study on the effect of social networking on a corporate level.

Until now, studies have primarily focused on individuals' privacy and security when using social media channels. The threats that social media networks pose at a corporate level have largely been overlooked, and are growing rapidly.

In this study, Comsec will demonstrate the relative ease of exploiting these networks to compromise a company's proprietary data and corporate assets.

To download "The Social Networking Corporate Threat" click here.

Sunday, June 13, 2010

Keep one eye on the ball and the other on the NET!

Cybercrime skyrockets in the build-up to the FIFA World Cup as soccer fans fall victim to bogus websites, online fraud, phishing and spam attacks.

With the FIFA World Cup due to kick off in just a few weeks' time, football fever is beginning to spread across the 32 nations involved in this year’s tournament. With more than 1 billion football fans around the world expected to tune in to cheer on their favorite teams, and tens of thousands of travelling fans, the World Cup is undisputedly one of the greatest events on the sporting calendar.

This year the tournament will take place in South Africa, the first African nation ever to host the World Cup. In preparation, South Africa has made a number of large investments, including building five new stadiums, improving the public transport infrastructure and implementing special measures to ensure all aspects of safety and security are planned for.

Unfortunately, as with all global sporting events, cybercriminals have been targeting fans in order to gain large profits. With today's ease of purchasing entrance tickets, organized trips or merchandise online, cybercriminals do not have to exert too much effort in reaching their targets. A well-disguised link or website can lead unsuspecting victims to the predator. Filling out contact or credit-card details on bogus websites can easily result in the loss of money or even to identity theft.

As eager football fans desperately scour the internet to secure tickets to their dream match or to book that last hotel room, a plethora of fake websites has sprung up, all more than happy to collect money from unwary fans. To date, FIFA has identified approximately 100 websites in violation globally, with the largest shares based in the US (approximately 32%), the UK (15%) and South Africa (15%).

Typically, attackers on the internet have tried-and-tested methods to defraud victims. These include compromising legitimate websites to gain sensitive information, or sending spam emails or SMS messages to persuade users to follow links to illegitimate websites where personal information is then harvested. Historically, phishing attacks have skyrocketed around major sporting events; prior to the previous FIFA World Cup in Germany, related spam increased by around 40%, and over 4,000 phishing hosts were discovered every month during the tournament build-up.

These shocking statistics are likely to be further exacerbated this year in South Africa as the country launches an expanded broadband network via two new undersea fibre-optic cables. In the past, threat reports have demonstrated that launching these services causes an immediate increased threat level, as cybercriminals take advantage of breaches and vulnerabilities that arise from inadequate security. This has been seen in countries such as Brazil, Turkey, Egypt and Poland, and South Africa is likely to follow this trend once the new infrastructure is in place providing cheaper, faster and widely available broadband services.

Comsec Consulting has experience working with large sporting organizations and those responsible for the organization of major international events to help assess and counter the risks associated with attacks from cybercriminals. With long-standing experience in this area, Comsec believes that three basic types of information attributes need to be protected around global events: Availability, Integrity and Confidentiality.

The availability of information can be compromised through denial-of-service attacks where users are prevented from accessing legitimate websites. These types of attacks are very common around large-scale sporting events, resulting in lost orders for businesses offering goods and services online. These types of attacks are likely to focus on FIFA and World Cup-related websites and may be politically motivated or kudos related. In more extreme circumstances, cybercriminals may attempt to disrupt the broader World Cup infrastructure by targeting physical security systems/CCTV, mobile applications, transportation networks or ticket terminals.

Organizations also need to secure the integrity of their information, particularly confidential information provided by users accessing websites offering services and products relating to the event. Hackers will attempt to gain access to valuable information through compromising user accounts, and they may also be able to reach customer information held in the databases supporting these websites. These types of attacks are likely to be motivated by financial gain as bank and credit card details can be stolen and sold for large sums of money. Organizations must also take measures to protect themselves from the insider threat, as statistically the majority of fraud related incidents can be traced back to an employee.

The information security threats that global sporting events now attract can’t be ignored. Information security consultancy companies, such as Comsec are often drafted in at the early planning stages to ensure that all aspects relating to information security are properly assessed, mapped and tested in preparation for a major event. As any national football coach will tell you, good preparation and planning is the key to success.

Wednesday, June 9, 2010

A Flash in the Pan?

By: Avi Bashan, Information Security Consultant

Just as we thought that the last aftershocks of the recent Flash debate between Apple and Adobe were yesterday's news, the Flash headlines endure. Last Friday Adobe announced a new critical vulnerability affecting Flash Player versions 10.0.x and 9.0.x, including 9.0.262. The vulnerability can potentially cause the Flash Player to crash and allow an attacker to take control of an affected system.

The vulnerability affects Adobe Reader and Acrobat 9.3.2 and the earlier 9.x versions as well. Adobe reported that the vulnerability is currently being exploited across the web. The security update for the Flash Players is scheduled to be released on June 10th, as for the security update for Adobe Reader, we will have to wait until June 29th!

This brings us to the unavoidable question: was Steve Jobs right in his adamant anti-Flash stance? As we recall, Jobs made several pointed remarks about Flash Player's security and stability. And as of now, Apple still refuses to incorporate Flash into its technology, notably the iPad and iPhone products.

Google, on the other hand, approaches the subject quite differently. Just recently Google announced, and even released, a new beta version of its popular mobile operating system Android (version 2.2), named Froyo, which supports Flash 10.1. Furthermore, the latest version of Google's popular browser, Chrome, adds Flash as a built-in feature.

The question remains open, since as of today there is no real substitute for the Web 2.0 experience that Flash provides. That said, anyone who uses Flash should probably stop using it for now, or browse knowing full well the potential repercussions. If you can't forgo the Flash experience, it would be prudent to follow these mitigations for continued use of Flash until the patch is released. As for Acrobat Reader? It may be high time to start using the alternative free PDF viewers readily available on the web, for now at least.

Tuesday, June 1, 2010

The OWASP ASVS and SAMM Standards

By: Shay Zalalichin, CTO


Catch Shay's presentation on June 7th at InfoSec Israel 2010 - where he will be discussing "Privacy in the Modern Age - On Regulations, Challenges and Technology"!

The ASVS and SAMM are two relatively new, not yet widely adopted initiatives in the information security industry that have now received official OWASP project status.

ASVS stands for Application Security Verification Standard. It was created to define standard terminology for measuring the security level of applications and products. Once everybody is synchronized on the terminology, organizations can buy software knowing it complies with a specific pre-defined security level, and can be sure of that compliance because it was verified against common, standard requirements. Moreover, when verification is performed by external vendors, the well-defined standard makes it easy to benchmark different vendors' performance.

As can be seen from the picture below, the verification level starts from a very basic level of just using an automated tool for verification (e.g. WebInspect scan) to a higher level of manual Design Review and Security Code Review (such as CODEFEND™).

SAMM stands for Software Assurance Maturity Model and is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks the organization faces. The resources provided by SAMM can aid in:

• Evaluating an organization’s existing software security practices
• Building a balanced software security program in well-defined iterations
• Demonstrating concrete improvements for a security assurance program
• Defining and measuring security-related activities within an organization

SAMM was defined with flexibility in mind such that it can be utilized by small, medium, and large organizations using any style of development. Additionally, this model can be applied organization-wide, for a single line-of-business, or even for an individual project.

The picture below exhibits the different activities within the SAMM. Each activity can be performed with a different maturity level allowing the flexibility to be adapted by organizations of different security requirements, budget, maturity etc.

The ASVS is still a new initiative, and most organizations have not yet started implementing it, but it can be an excellent business driver for them: it aims to set a higher security level pan-organizationally, which will help organizations communicate in a uniform manner (across departments and divisions) and with external bodies.

In addition, the ASVS requires Security Code Review, ranging from fully automated review (basic level) to a combination of automated and manual review, which helps ensure the security posture of software - an important initiative, and one Comsec has been tackling with CODEFEND™.

As for the SAMM, from our initial review it seems to be an excellent model that can assist organizations in integrating security within their development lifecycle; in essence, it is very similar to Comsec's in-house developed model. This is especially relevant for SMEs or organizations that lack the maturity to adopt a large-scale, comprehensive model such as the MS-SDL.

It's important to understand that several similar initiatives/standards exist (e.g. MS-SDL, OWASP CLASP), but none of them have acquired enough industry adoption - which hopefully these two standards will begin to change, as the OWASP Top 10 has done.

Ask us about our services towards compliance with these initiatives:
CODEFEND™ Security Code Review • ASVS Gap Analyses • ASVS Product/System Reviews and Security Assessments (ASVS Certification) • ASVS Integration within SDL Policy • SAMM SDL Strategic Planning/Roadmaps • SDL SAMM Assessments • SDL SAMM Compliance

Thursday, May 13, 2010

SAP Auditing Made Easy

By: Medi Karkashon, CISA, CPA
GRC Security Division Manager

As companies migrate to SAP systems, which have in essence replaced diverse legacy systems and isolated islands of information, a hotbed of opportunity has opened up for diverse types of malicious activity, from fraud and embezzlement to information leakage through authorization bypass, in these new data-rich, well-integrated, intricate systems.

Manual processes of the past, such as checking invoices against hard-copy purchase orders or verifying signatures on documents, have evolved into automated processes with automated controls built into SAP systems. Traditional auditing methods are becoming obsolete with the integration of SAP, and internal auditors are finding themselves needing to adapt to these changes, and quickly.

Part and parcel with the advantages of a SAP implementation comes a byproduct of new threats, as with any system of this size and nature; especially threats relating to the integration of data across different internal processes, which did not exist in the past when there was no interaction between the different systems. This new integration has created a crucial need for constant availability, transparency, and integrity of information, as well as real-time data for different departments.

SAP systems are purposely constructed in a manner that makes it easy to get from one place to another, to facilitate business processes. This architecture creates many avenues to reach diverse data very easily, which is where the issues of sensitive authorizations and segregation of duties become critical. These different avenues create myriad new threat vectors, making it increasingly important for an internal auditor to understand how exactly the system works, along with the business logic and flow that come with SAP systems.

Many internal auditors have already recognized this criticality and have enlisted the proper professionals to help mitigate these risks; others choose to work very closely with their IT department on these matters. However, there are still many internal auditors that have not yet addressed the security issues involved with migrating to SAP – whether due to lack of awareness to these issues, or limited company resources.

With this need in mind, Comsec developed a solution to facilitate internal auditing of SAP systems. The SAP security portfolio was designed to create an easily implemented set of advanced rules and reports available to internal auditors at the click of a button, enabling much-needed on-demand and real-time auditing. These reports focus on anomalous and exceptional data that may indicate system irregularities or processes being carried out in discord with company policy. A driving factor in the design process was ensuring that this data would be available for key SAP processes, including:

User management
Authorization management
Human resources
Inventory management

This new package helps the internal auditor slowly learn the ins-and-outs of the system, providing internal auditors with the skills and tools to acclimatize to SAP system auditing, all without requiring the investment of unnecessary internal time and resources.

Thursday, April 29, 2010

Banking virus Zeus strikes back

A new survey conducted by Web security experts has revealed that Zeus, a virus that steals online banking details from infected computers, is back and more powerful than ever.

The Zeus family of malware is the number one botnet online, with an estimated 3.6 million PCs infected in the U.S. alone. The malware infects a system and waits until the user accesses one of a set of predefined banking URLs. Additionally, Zeus has the ability to inject HTML into the pages rendered by the browser, so that its own content is displayed together with (or instead of) the genuine pages from the bank's Web server.

Once the virus is installed on an affected computer, it can record users’ bank details and passwords, credit card numbers, and other personal details such as passwords for email accounts and social networking sites. This sensitive information is then relayed in real time to a remote server to be used and later sold by cyber criminals.

Earlier this year, many parts of the systems used for the Zeus botnet were destroyed when the Kazakhstani ISP that was being used to administer it was cut off. However, it has not taken long for the malware controllers to spring up elsewhere, and the battle between anti-virus software vendors and botnet developers persists.

Zeus 1.6 has the ability to infect computers using both Firefox and Internet Explorer Web browsers. Experts say the new version of Zeus is expected to significantly increase fraud losses, especially with the growing number of users who regularly bank online. What makes this virus especially dangerous is its ability to bypass even up-to-date anti-virus protections.

A recent study conducted by Trusteer, a web security company, which sampled 10,000 users on a single day, showed that 32% were not using anti-virus protection, 6% were using an outdated version, and 71% were using anti-virus with current updates applied. What was particularly striking is that among Zeus-infected systems, 31% had no anti-virus protection, 14% were running outdated anti-virus software, and the majority, 55%, were using current anti-virus software. Essentially, this means that the majority of infections are going undetected, which is bad news for consumers, banks, and anti-virus providers, whose products were only effective at preventing the virus 23% of the time.

So, what can computer users and financial institutions do to reduce the risks of becoming a victim of cyber crime, whilst continuing to utilize the benefits of online banking?

The development of proactive technology is fast becoming an important defense mechanism, and may include faster and smaller updates and global threat detection networks. More technically savvy users can check the registry key that lists software started upon a user's login to their computer. Typically Zeus will add itself to this list as 'ntos', but this name may change at any time.
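As an illustrative sketch only (the helper name and the alias list are our assumptions, drawn from public Zeus write-ups, not an authoritative detector), a check over already-enumerated autorun entries might look like this; on Windows the entries themselves would be read from the registry, e.g. via the standard winreg module:

```python
# Value names reportedly used by Zeus variants; illustrative, not exhaustive.
KNOWN_ZEUS_ALIASES = {"ntos", "sdra64", "twext"}

def find_suspicious_entries(autorun_entries):
    """Return (name, path) pairs whose value name matches a known alias."""
    hits = []
    for name, path in autorun_entries:
        base = name.lower().rsplit(".", 1)[0]   # "ntos.exe" -> "ntos"
        if base in KNOWN_ZEUS_ALIASES:
            hits.append((name, path))
    return hits
```

Run over the parsed contents of the startup keys, this would flag an entry such as `("ntos", r"C:\WINDOWS\system32\ntos.exe")` - bearing in mind the caveat above that the name may change at any time.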

All computer users can reduce risk by installing up-to-date anti-spyware software, keeping programs updated, and disconnecting from the Internet when it is not in use. The virus has the capability to inject additional pages into online banking login screens, so if you are suddenly asked for a secret question, security number, or other unusual items during the login process, abort the login and call your bank, or try the login from another computer. Users should also be careful when opening attachments or following links in emails and on websites, investigate unknown software before downloading it, and ensure that passwords are kept robust and secret.


Thursday, April 22, 2010

Understanding the PCI:DSS - Not Just Complying with It

Upon the establishment of the Payment Card Industry's Data Security Standard (PCI:DSS), many relegated the PCI:DSS to a derivative of the ISO standards and just another "tick in the box" among international standards, without much staying or enforcing power. The PCI:DSS has since grown and taken on a significant role in the evolution and enforcement of real information security standards within organizations around the globe, and has even expanded to include the Payment Application Data Security Standard (PA-DSS), requiring software developers and vendors to comply with a high level of application security standards.

These standards now represent an integral part of information security practices, and have even instigated true change by mandating the integration of best practices in data security in organizations of all sizes and types that store, transmit, or process prohibited credit cardholder data such as full magnetic stripe data, CVV2s, or PIN data.

In addition, the instituting of the PA-DSS places a whole new emphasis and importance on the development of secure code by software vendors – and the maintenance of a secure software development lifecycle. While companies around the globe have all taken on the challenge of PCI compliance, at times requiring large-scale and pan-organizational information security changes within large networks/infrastructure, many companies still find themselves “in the dark” about the actual requirements themselves.

Because of the complex nature of the PCI:DSS and the PA-DSS, the QSA/PA-QSA needs an in-depth understanding of the standard, including all of the intricacies involved, to enable a pragmatic approach and a tailoring of the technological, procedural, and administrative controls to your organization's needs. A QSA/PA-QSA with experience and expertise in PCI and PA-DSS certification will be able to recommend the necessary measures for your company, and to distinguish the most relevant and applicable requirements as they apply to your company's internal business flow. Your organization can then be confident when integrating the QSA/PA-QSA's professional recommendations, knowing that these are the most cost-effective and critical requirements for your organization's compliance with the PCI and PA-DSS.

With much experience under our belt with these compliance processes, Comsec is able to provide a few guidelines on the most prevalent issues and “grey areas” encountered in the different PCI processes by different sized organizations, to facilitate what is sometimes perceived as a complex and resource-draining process:

• Don't be afraid of the standard! Know its most important stipulations so that you can maintain an ongoing dialogue with your QSA/PA-QSA about the solutions best suited for your organization. Only you possess the in-depth knowledge of the internal business processes unique to your organization, and you can suggest the methods of operation most appropriate for your company, contributing to a sense of involvement and control over your compliance process.

• Be aware of where the pitfalls lie for diverse technologies, from legacy systems through Web-based platforms, and the possible constraints they may pose with regards to complying with this standard.

• Correct scoping of your CHD environment is imperative to prevent the inclusion of unnecessary systems in the compliance process. When done properly, this scoping will in essence reduce the timeframe for compliance and the effort that needs to be invested to ensure that all of the systems and networks comply with the PCI standards.

• Assessing the added value of storing sensitive cardholder data in different systems may lead to de-scoping systems that do not require the storage of sensitive CHD for ongoing business processes, significantly reducing the number of systems required to be in compliance with the standards and streamlining the overall compliance process.

• There are less-known "quick wins" available to organizations that make the compliance process much more friendly and attainable. For example:
o Tokenization
o Encryption
o Hashing
o Truncation

Being informed about the different solutions available to your organization will enable you to take the initiative in suggesting the appropriate solution to your QSA.

• There are many solutions available for outsourcing parts of the clearing/payment process. These solutions often serve to greatly facilitate the compliance process for many organizations.

• The implementation of best practices in secure coding will facilitate the PA-DSS compliance process, as security considerations will have already been taken into account during development.

• Be familiar with the different technological solutions involved in the compliance process (e.g. purchasing a web application firewall, or performing code review), and assess which solutions are best-suited for your organization—and are the quickest and most effective to implement.

When it comes down to it, many parts of the PCI and PA-DSS standards are subject to the human interpretation of the QSA/PA-QSA. Complying with the PCI standards does not have to be a complicated and time-consuming process for organizations, and knowing the different options available to you can help in correctly mapping the compliance process and making the right decisions at the different junctures along the way.

It is also important to bear in mind that these standards serve to protect your organization against the many malicious entities who will invest inordinate resources to obtain your clients' sensitive personal information. Complying with these standards will help your organization preserve its long-standing reputation, and will provide your clients with the much-needed assurance that their data is in safe hands when undertaking business transactions with your company.

Do you have questions about PCI:DSS or PA-DSS certification? Feel free to be in touch with our dedicated PCI teams for all of your certification needs.

Sunday, March 28, 2010

Comsec Consulting is Recruiting...and We're Looking for You!

Comsec is looking for an Information Security Client & Project Manager for Relocation to London!

This position will provide the right candidate with an opportunity to tackle new and exciting challenges in the fast-paced and leading UK information security market!

The ideal candidate will possess the following qualifications:

An in-depth knowledge of the Information Security field – Mandatory.
Experience in project management – Mandatory.
A sales and business orientation.
Ability to manage several client portfolios and projects simultaneously.
English at a mother tongue level.
A willingness to relocate to London for a period of 2-3 years.

For further information or to apply for this position (CVs in English only), please contact us!

Thursday, March 25, 2010

OWASP Top 10 – Top Five Thoughts

By: Shay Zalalichin, CISSP
CTO, Comsec Consulting

The Open Web Application Security Project, better known in the AppSec world as OWASP, released the new OWASP Top 10 most critical Web application vulnerabilities for 2010 (which is updated about every three years). Since this list is so highly regarded in the AppSec community, I felt it important to highlight some elements.

OWASP Top 10 – Top Five Thoughts

1. Not just statistics - This time around, OWASP didn't simply rank weaknesses/vulnerabilities by prevalence; it changed the model and decided to rank the Top 10 by risk (of which the vulnerability is only one factor). This new weighted approach gives a better picture of the overall risk.

2. A nice element is the new tool provided in the document that helps the average user assess their risk level. Although this matrix is a bit more complex than the fairly straightforward NIST SP800-30 model (compare Figures 1 and 2), it is quite a useful tool for risk assessment.

Figure 1

Figure 2
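For readers who want to experiment with the weighted approach from point 1, here is a minimal sketch of the risk calculation; the bucket thresholds and severity matrix follow the public OWASP Risk Rating Methodology, while the function name and any sample factor values are ours:

```python
def owasp_risk(likelihood_factors, impact_factors):
    """Average the 0-9 factor scores, bucket each average into
    LOW/MEDIUM/HIGH, and combine the two into an overall severity,
    per the public OWASP Risk Rating Methodology."""
    def bucket(avg):
        if avg < 3:
            return "LOW"
        if avg < 6:
            return "MEDIUM"
        return "HIGH"

    likelihood = bucket(sum(likelihood_factors) / len(likelihood_factors))
    impact = bucket(sum(impact_factors) / len(impact_factors))
    severity = {
        ("LOW", "LOW"): "Note",         ("LOW", "MEDIUM"): "Low",
        ("LOW", "HIGH"): "Medium",      ("MEDIUM", "LOW"): "Low",
        ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
        ("HIGH", "LOW"): "Medium",      ("HIGH", "MEDIUM"): "High",
        ("HIGH", "HIGH"): "Critical",
    }
    return likelihood, impact, severity[(likelihood, impact)]
```

For instance, likelihood factors averaging 8 combined with impact factors averaging above 8 both bucket as HIGH, yielding an overall "Critical" severity.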

3. On to the Top 10 itself - the PHP-oriented Malicious File Execution and Information Leakage vulnerabilities were removed, since their weighted risk potential is lower than that of the two new risks. The Security Misconfigurations vulnerability, which appeared in the 2004 Top 10 and was removed in 2007, has now made a comeback.

4. It should be noted that if the OWASP Top 10 were published more frequently, Unvalidated Redirects, mostly manifested in phishing attacks and authorization bypass, would likely have been included quite a while ago, since it has been relevant for a few years now.

5. In general, this is a good piece for awareness purposes and for teaching about common risks, but organizations should note that there are plenty of other risks (risks that result from the specific technologies integrated within your organization), and that the OWASP T10 is only a starting point. Users should also be cognizant of the fact that the OWASP T10 is only relevant to Web applications; client-server apps and the like are subject to additional risks, such as buffer overflows, that are quite common in languages such as C/C++.
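The Unvalidated Redirects risk mentioned in point 4 is typically mitigated with an allowlist check on the redirect target. A minimal sketch (the function name and host names are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of external hosts we are willing to redirect to.
ALLOWED_HOSTS = {"www.example-bank.com"}

def safe_redirect_target(url, default="/home"):
    """Follow only same-site relative paths or allowlisted hosts;
    anything else (including scheme-relative //host tricks) is
    replaced by a safe default."""
    parsed = urlparse(url)
    if not parsed.scheme and not parsed.netloc \
            and url.startswith("/") and not url.startswith("//"):
        return url
    if parsed.scheme in ("http", "https") and parsed.netloc in ALLOWED_HOSTS:
        return url
    return default
```

The key design point is that the destination supplied in the request is never trusted as-is, which is exactly the behavior phishing campaigns abuse.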

You can download the new OWASP Top 10 document from:

You can download the NIST Risk Management Guide for Information Technology Systems from:

Wednesday, February 17, 2010

Mobile World Congress - New Mobile World? New Threats?

By: Shay Zalalichin, CISSP
CTO, Comsec Consulting

With the Mobile World Congress taking place this week, and the unveiling of a new generation of mobile phones that are becoming more advanced and technologically complex, two clear trends can be identified as potential security threats: the threats that arise from using these advanced phones, which are in essence small computers, and the threats to the data accessed through them, namely social networking sites, which by default is stored within these phones.

New platforms and operating systems are constantly being introduced, from Android and webOS through the new Samsung Bada, Symbian, BlackBerry, and the new MeeGo Nokia/Intel collaboration, all part of what can only be likened to a "gold rush" of opportunity; and many of the companies involved, in order not to miss the profitable boat, do not take the time to properly integrate security measures into their development lifecycle.

The diversity of operating systems and platforms posed something of an obstacle for evildoers until recently, as attackers didn't quite know in which direction to aim their attacks and at which specific platform. Whereas Windows is the leading operating system for desktop and laptop users, there was no clear-cut target among smartphones until the iPhone. The iPhone has presented a whole new ball game. According to Tech Crunchies, iPhone sales have reached a record-breaking 42.5 million handsets across the globe, and another 11 million are expected to be sold by 2011. The iPhone's success was quickly identified by software developers as a platform for every imaginable mobile application, from banking applications to silly games; and, a victim of its own success, the phone has in turn become a target for attackers looking to profit from its popularity.

Until 2009 the mobile world had been much more limited in its capabilities. 2009, however, was the year of the smartphone, with devices enabling much more advanced Web access features and integration options, media capabilities, and millions of applications providing access to nearly limitless data, while users have not yet learned to adapt their perceptions to these highly advanced devices.

In the past, smartphones such as the BlackBerry were designed and built specifically to respond to business and enterprise needs for email use on the go, and as such, security for these more limited capabilities was taken into account from the onset. The iPhone, on the other hand, was targeted at consumers, and security measures were implemented almost as an afterthought; in the meantime, due to its popularity and ease of use, many enterprises adopted this phone without properly considering the threats involved.

Apple targeted a younger population looking to be more socially in sync, providing applications for Facebook and Twitter, Gmail, and myriad games. Thus, with its initial release, the iPhone did not provide any encryption capabilities. Understanding the need for additional security measures in order to become more enterprise-friendly, Apple released encryption fixes with the latest iPhone model in 2009; unfortunately, this was still not enough to prevent the spread of two well-known iPhone viruses: the more popular and less harmful Rick Astley virus, and a second virus that used the same method as the Rick Astley virus but was more cybercrime-oriented, attempting to steal information from these devices much like phishing.

These infections were mostly due to a lack of awareness and to the jailbreaking of handsets by consumers who were not cognizant of the threats involved in performing such actions on their phones - namely, exposing their handsets to remote access over the Internet by installing the SSH service. That said, at the recent Black Hat convention in early February, potential vulnerabilities in the latest iPhone encryption, stemming from a design flaw, were indicated.

A large part of the problem lies in the consumer’s mindset. Although consumers are still accessing the same websites that could have the same potentially threatening content and malware, according to a survey by Trend Micro, many Smartphone users don’t even enable the built-in security options, and are sure that surfing the Web via these devices is safe or at least as safe as surfing from a PC or laptop, despite the fact that there is no virus or malware protection at all.

Another threat in this already threat-laden landscape is social networking. Facebook registered 350 million users in over 180 countries by the end of 2009 and is one of the most popular mobile applications to date; Twitter rose to 75 million registered users by the end of 2009; and new social platforms, such as Google Buzz, launch daily, all needing to be accessed in real time. As a result, many developers are rushing to meet tight launch deadlines without properly considering all security angles. Within three days of the Google Buzz launch, a security breach had already been discovered. Combined with the use of less-protected smartphones and the inordinate number of social network users, this presents potential threats and risks of a formerly unknown caliber and potential fallout. With data access from smartphones expected to reach 30% by 2013, the best advice we can offer to the different parties involved is:

Software Vendors:

Be sure to integrate security throughout your development lifecycle and take the time to perform proper threat modeling to identify all of the potential threat factors. While the business need to release software as quickly as possible is understood, the number of victims that can be harmed by insufficient data security is too large to even fathom, and common breaches such as Cross Site Scripting, cryptographic flaws, and SQL Injection could constitute a veritable disaster.
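To make the SQL Injection point concrete, here is a minimal sketch contrasting string concatenation with a parameterized query (in-memory SQLite, with a hypothetical table and data):

```python
import sqlite3

# In-memory table purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # String concatenation: input such as "' OR '1'='1" rewrites the query
    # and leaks every row in the table.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
```

With the classic payload `' OR '1'='1`, the unsafe version returns the secret while the safe version returns an empty result, which is precisely why parameterization belongs in the development lifecycle rather than being bolted on later.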


Enterprises:

Be aware that your employees are walking around with mini-computers that contain and store sensitive data, from corporate data in emails to pictures and documents. In addition, because these are small computers, these phones can be accessed much like any other disk or flash drive, data can be extracted from them, and viruses can spread into corporate LANs. Enterprises need to start taking this into consideration, establish proper policies and procedures for employee use of these devices, and institute the proper security measures required to cope with these threats.


End Users:

If, until now, a phone was just a phone, with some messaging and camera capabilities, these smartphones now contain much more sensitive and potentially endangering data. In addition, while the perception is the exact opposite, users should be well aware that these devices are by far less secure than a desktop or laptop computer, and should act accordingly. Be aware of the security threats, with the potential to compromise your privacy, when sensitive data is accessed via phone applications and software installed from unknown and unsigned vendors. Much like on your home computer, there are plenty of viruses and malware out there that can cause much damage. With a device that stores all of your personal contact information, potentially exposing pictures, and private passwords to your email and other accounts, identity theft could not be an easier feat.

Sunday, January 31, 2010

Google Inc. Zero Day Exploit Information

Written By: Boris Vaynberg & Avi Bashan

On January 12, 2010, Google announced that its systems were hit by a sophisticated attack that most likely originated in China. This attack was a combination of known attack vectors, known as an Advanced Persistent Threat (APT): an attack backed by large resources and coordinated elements of customized malicious code, capable of targeting a wide range of security software. Google's blog post noted that this was not an isolated incident, but rather a deliberate attack launched on over 30 leading companies, including Adobe, Juniper, and other leading international technology, media, finance, and security companies.

An internal investigation by Google revealed that the attackers used a previously unknown weakness (also known as a Zero Day Exploit) in the Internet Explorer browser (affecting all current versions from version 6.0 onward) and in Adobe Acrobat Reader (affecting all versions prior to 8.0) in order to implant a Trojan horse. This Trojan horse transmitted information over an encrypted channel on port 443.

A little over a week ago (last Thursday), Microsoft released a critical security patch (MS10-002) for this security breach, and it was publicized that the malicious code had been leaked to the Internet and is currently available to the general public. Adobe also released a security update (APSB10-02) for Acrobat Reader.

Upon analysis of the attack files, McAfee discovered that the code was compiled from a folder named Aurora, and it is therefore assumed that this is the name the attackers gave to the operation.

The Aurora attack was divided into two separate vectors which are not necessarily dependent on each other.

The first is the weakness vector which enables the malicious code to be run on any target system; the second is the malicious component installed on the system, in this case the Trojan horse, which provides malicious access to the information stored within the target system.

The first vector exploits two different weaknesses in the target computer to enable the installation of the Trojan. The first is the exploitation of an existing vulnerability in the IE browser, as previously mentioned; the second exploits a weakness in the popular file reader, Adobe Acrobat Reader.

Many companies have treated this series of events quite seriously and have deemed them significant, not because they represent a technological breakthrough in the field of information security (the discovery of previously unknown weaknesses of this kind is prevalent in common, widely used software such as IE, Adobe Acrobat Reader, and others), but because the widespread launching of this attack technique against commercial companies on such a large scale had not been witnessed before. In addition, there is cause for concern that the information stolen as a result of these attacks is sensitive and valuable intellectual property belonging to the companies involved.

These recent events require us to reevaluate the map of threats we help our clients deal with on a daily basis, and raise important questions, such as:

- What is the level of effectiveness of current systems against these types of attacks (Zero Day Exploits)?
- How can we know what is running within our networks at any given time?
- Where and how does internal company information exit our internal networks?


Is this attack relevant to me, and how can I defend myself against it?

Defense against this attack is no different than the various attacks carried out on the Internet on a daily basis.

Defense against this attack is divided into two main elements, which address each of the vectors used in this attack.

1. Defense Against Weaknesses – Your company should regularly install the latest software security patches for all products in use.
• The IE security update can be downloaded by following this link:
• If Adobe Acrobat Reader is used in your organization, version 8.0 of the product should be installed. This installation can be found at the following link:

2. Defense against Trojan horses, or any other malware that has infiltrated your company network, can be performed in the following manner:
• Update the antivirus signatures installed on organizational computers.
• Manually remove the Hydraq Trojan from client computers.
• Monitor the data exit points from the organization's network to prevent data leakage.

For more information contact:

From script kiddies to organised cybercrime – things are getting nasty out there...

Written By: Stuart Okin
Published By: Microsoft Security Newsletter, December 2009

We have heard for some time that hackers have moved from script kiddies trying to raise their profile to the seedy and dark world of organised crime. The indictment of the hacker ring in Atlanta in early November for the $9.4 million RBS Worldpay ATM theft is just one more example of the evolution of organised cybercrime and its sophistication in carrying out complex operations in short timeframes. This ring managed to coordinate an attack on more than 2,000 ATMs within a 12-hour period. Following less than a week of reconnaissance on existing vulnerabilities in RBS Worldpay security, the hackers began taking advantage of those vulnerabilities to acquire encrypted PINs through reverse engineering. This isn't the first large-scale instance of stolen encrypted PINs, which first emerged in the well-publicised TJ Maxx theft, involving 11 hackers and the stolen details of 40 million credit cards. It is believed that in both cases these criminals didn't actually manage to crack the encryption, but rather bypassed security controls and elevated privileges in order to function as super-users in the different systems - enabling these heists.

Organised cybercrime rings are beginning to resemble commercial entities, complete with a range of pricing models, enabling more efficient attacks which bring bigger returns. On some sites cybercrime is offered as a service today (perhaps we can coin the phrase CaaS!), with a menu that includes crimeware - malware which automates cybercrime. Fill in a simple checklist - the operating system, the type of attack (botnet, malware) - and the criminal ‘client’ (i.e. the attacker) receives the capability of utilising infected computers to attack their target, with the ability to track progress through an online web portal. Cybercrime made easy at the click of a button! This is what we are now up against.

According to some pundits, cybercrime may even have surpassed drug trafficking in some countries as the most profitable illegal business. Furthermore, whereas in the past cybercrime was more about the propagation of widespread, highly visible incidents, solely for the purpose of causing disruption and commotion, today these are being replaced by targeted, stealth attacks, often characterised by their invisibility to their victims.

The method of choice employed by cybercriminals today is the combined attack. Take as an example a recent attempted attack on a client of ours, a major international bank, where an internet port was targeted to divert attention from the more critical attack taking place simultaneously on a database holding sensitive account credentials. Vigilance is a critical factor in catching cyber attacks in real time, however the ability to analyse the event in the context of the bigger picture is imperative in defeating these diversionary tactics.

Vulnerabilities which once upon a time could have been relegated to important but not necessarily critical are now posing much more serious risks. As an example, we have been working with another leading financial institution, where we identified a potential race condition in their Forex portal. The portal enables a client to purchase foreign currency up to the amount their bank account holds; however, the vulnerability raised the possibility of multiple valid simultaneous user sessions, so a malicious user could in essence open 100 or 1,000 sessions at the same time and purchase foreign currency far exceeding their bank account limitation. Purchasing an exorbitant amount of a foreign currency artificially elevates the currency’s exchange rate, and if the malicious user simultaneously purchases and sells large amounts of that currency, they will make a great deal of money.
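
The flaw described above is a classic check-then-act race: every session checks the balance before any of them deducts it. A minimal sketch in Python of the pattern and its fix - all class names, amounts and timings here are illustrative, not the bank's actual system:

```python
import threading
import time

class ForexAccount:
    """Toy model of a check-then-act purchase with no locking."""

    def __init__(self, balance):
        self.balance = balance
        self.purchased = 0
        self.lock = threading.Lock()

    def buy_unsafe(self, amount):
        # Vulnerable: the balance check and the deduction are not atomic,
        # so concurrent sessions can all pass the check before any deducts.
        if self.balance >= amount:
            time.sleep(0.001)  # widen the race window for the demo
            self.balance -= amount
            self.purchased += amount

    def buy_safe(self, amount):
        # Mitigation: serialise the check-then-act sequence under a lock
        # (in a real portal, a database transaction or row lock).
        with self.lock:
            if self.balance >= amount:
                self.balance -= amount
                self.purchased += amount

def simulate(method, sessions=50, amount=100):
    account = ForexAccount(balance=100)  # funds for exactly one purchase
    threads = [threading.Thread(target=getattr(account, method), args=(amount,))
               for _ in range(sessions)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return account.purchased

print("unsafe:", simulate("buy_unsafe"))  # typically far more than 100
print("safe:  ", simulate("buy_safe"))    # exactly 100
```

With the unsafe variant, 50 "sessions" sharing funds for a single purchase routinely all succeed - the same effect as the 100 or 1,000 simultaneous portal sessions described above.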

Supporting these targeted attacks, especially in the areas of blackmail, extortion and identity theft, are the increasing number of PCs which have been taken over by organised crime, forming the backbone of botnet armies. The FBI estimates that more than one million computers are “hijacked” annually by malicious botnets. While it is often perceived that hackers are utilising special methods and cutting-edge techniques, this is often far from the case; their power lies in the large number of known attack methods and the associated high probability that applications are not protected against all of them. This is reflected in the continued massive rise in phishing emails and linked spam (it is estimated that nearly 90% of emails today are spam), malware, malicious botnets (more than doubling in number from January to June 2009), worms, Trojans, SQL injection attacks and site defacement attacks - some of the most common methods utilised by cybercriminals, which still succeed many times over.

What makes the landscape increasingly challenging is that confidence in our current security controls and countermeasures is repeatedly undermined, from the bypassing of SSL at the Black Hat convention in July, through to the GSM A5/1 and DNS Kaminsky vulnerabilities - attacks previously believed to be only “academically possible”. While it is virtually impossible to hermetically secure software today, and seeing the incredible lengths criminals will go to in order to obtain valuable financial information, we must not underestimate the existing weapons in our arsenal to combat cybercriminals - education and awareness.

How is it that people still circulate the emails claiming that Microsoft and AOL will give 20p for every person the email is sent to? Poor training and awareness.

How is it that employees manage to download malware and Trojans to their workstations from social networking sites? Improper enforcement of security policies and a lack of awareness of these new threats.

How are cybercriminals succeeding with simple attacks like SQL injection? A failure to align coding with security best practices, such as input validation and parameterised queries, and a lack of security training for software architects and developers.
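
To make the SQL injection point concrete, here is a minimal illustration using Python's built-in sqlite3 module; the table, credentials and payload are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: attacker-controlled input is concatenated into the SQL.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Mitigation: a parameterised query; input is never parsed as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True - authentication bypassed
print(login_safe("alice", payload))    # False - injection neutralised
```

The entire difference between the attack succeeding and failing is whether the input travels as data (placeholders) or as part of the query text - exactly the coding practice the paragraph above is about.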

How is it that we still see site defacement? Poor network monitoring practices and a lack of awareness of the risk.

Every simple attack method has a simple security countermeasure which was evidently improperly implemented, overlooked or taken too lightly. Cybercriminals aren’t carrying out simple attack scenarios because they’re incapable of more complex attacks, it’s because they don’t need to.

Monday, January 18, 2010

The Secure Software Development Lifecycle – Security from the Start

Monday 6 July 2009
France Telecom Open Tribune
Author: Nissim Bar-El, CEO & Chairman

According to a recent University of Michigan study, more than 75% of bank websites surveyed had at least one design flaw that could make customers vulnerable to cyber thieves. Unfortunately, this is not an uncommon phenomenon across the board for all types of sensitive applications today. With credit card data and bank accounts constituting 22% and 21% respectively of the items most “on-demand” for online purchase, it is not surprising that all leading research organizations predicted major incorporation of security within the software development stages by 2009. This is also known as a secure Software Development Lifecycle (SDL).

While developers usually schedule time in security cycles for securing the infrastructure layer, the communication layer and the physical aspects of systems and products, along with ensuring proper work procedures, security at the application level is often not given adequate attention early in the development of the software. This, combined with the fact that most application designers and programmers do not have the expertise required to develop a secure and robust application, even after following secure programming guidelines, makes it virtually unavoidable to encounter security flaws in complex systems. Another contributing factor has been the lack, until recently, of appropriate solutions in the market, such as quick and thorough source code analysis that can also be applied to non-compiled code.

Like all other business-critical organizational processes that require planning and strategizing, it is nearly impossible to deliver a flaw-free application if a proper and organized process is not implemented. What’s more, implementing security without a clear process tends to rely on ad-hoc work and insufficient internal communication between teams regarding relevant security issues.

As a result, ordinary day-to-day products contain many implementation and design decisions that, if handled incorrectly, introduce security flaws and open the door for system users, whether legitimate or malicious, to perform unauthorized operations. Exploiting these security breaches, intentionally or unintentionally, may harm the three crucial information security missions of every organization – preserving the availability, integrity, and confidentiality of the organization’s sensitive data.

Some of the popular and potentially debilitating ways of exploiting breaches of this nature include: privacy breaches through gaining unauthorized access to private and secret information; user impersonation and identity theft, a rapidly growing problem worldwide, which includes performing illegitimate activity against the system backend using the Web frontend; stealing corporate information or altering it; gaining full control of a Web server; site defacement; and even launching Denial of Service (DoS) attacks. Depending on the severity of the breach, the consequences could include financial losses or reputational damage.

Taking into account the severe potential fallout from improper security design and implementation, the inevitable question arises: how is it possible that more organizations are not implementing secure Software Development Lifecycles? This process has proven, time and again, that fixing flaws in the first stages of development can be a hundred times cheaper than fixing them later. Leading organizations like Microsoft, Gartner, and Forrester have endorsed this approach many times, and have demonstrated clearly that organizations that implement an SDL plan integrating security best practices throughout the software development lifecycle greatly reduce the number of defects and vulnerabilities. Doing so saves enterprises an inordinate number of man-hours of future remediation work, which can delay a product’s launch, as well as other financial losses that may result if security breaches go undetected and are exploited by malicious entities.

The SDL provides process-level requirements that define how security should be integrated within each product’s development process, specifying in particular the activities that should be performed at each phase. The objective is to minimize the number and severity of security-related flaws in the initial design, code, and documentation, and to detect and remove these flaws as early in the development lifecycle as possible. An ideal practice that should be instated throughout secure development is ongoing source code analysis, which surfaces in real time the security vulnerabilities that exist in the code, and thereby facilitates mitigation without delaying the production process. In addition, the SDL allows senior management to receive a clear view of the current level of product security, enabling them to reach an objective decision on whether or not to release the product.
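
As a rough illustration of what ongoing source code analysis can look like in its simplest form, the sketch below checks each line of a source snippet against a list of risky patterns. A real analyser parses the code properly rather than pattern-matching text; the patterns, sample code and messages here are purely illustrative:

```python
import re

# Two toy checks of the kind an automated per-commit scan might run.
RISKY_PATTERNS = {
    "possible SQL built by string formatting": re.compile(r"execute\(.*%s"),
    "hard-coded secret": re.compile(r"password\s*=\s*['\"]\w+['\"]", re.I),
}

def scan(source):
    """Return (line number, issue) for every line matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

sample = '''
db.execute("SELECT * FROM users WHERE id = %s" % user_id)
password = "hunter2"
'''
for lineno, issue in scan(sample):
    print("line %d: %s" % (lineno, issue))
```

Run on every commit, even a crude check like this gives developers the real-time feedback the paragraph above describes, long before the flaw reaches testing or production.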

Comsec Consulting has been actively providing its services around many intricate facets of application security for more than two decades. This accumulated experience has enabled Comsec to cultivate a bird’s-eye view of the many complexities in this field. This knowledge has been the foundation for the development of in-house proprietary tools, tried and true methodologies relating to all aspects of Information Risk Management, and even an advanced technology for client-specific, customized source code analysis, all of which contribute to the continuous evolution of SDL. These methodologies and technologies have facilitated the successful implementation of comprehensive end-to-end SDL processes over the past years. These processes have included extensive hands-on practice by Comsec’s multi-disciplinary security consultants with services like: Software Security Policy Development, Procedures and Guidelines Formulation, Design Evaluation, Code Review, Penetration Testing, Security Reviews, along with Software Security Training and Awareness.

The objective of the SDL is to formulate a process in which information security is involved throughout the development lifecycle; a formal SDL tailored to suit an organization’s needs is one of the best methods to realize an enterprise’s security goals.

A typical SDL process is comprised of the following stages:

Stage One: Requirement Analysis
At this stage of the process the team verifies whether a specific system, application, module, feature, or other product complies with predefined security requirements such as industry best practices, leading international compliance standards (ISO 27001, PCI DSS, ITSEC, SOX, and others), policy and procedure definitions, and RFP requirements.

Stage Two: Architecture and Design Evaluation
This evaluation stage includes a study of the application and a review at the conceptual and design level. The review generates a report on problematic security considerations in the application architecture and/or design, as they relate to the application, network, system, or product involved.

Stage Three: Security Code Review
One of the most important stages in the SDL process is the security code review. This is the most effective way to find vulnerabilities at the source code level – one of the main reasons it is required by leading international compliance standards such as PCI DSS. In order to mitigate risks, Comsec provides a unique and professional security code review service, which delivers quick and accurate results, virtually free of false positives. Comsec’s security professionals apply their accumulated, in-depth knowledge of application security to customize scripts to your organization’s specific business context, and uncover breaches that only human logic and intuition are capable of exposing.

Stage Four: Overall Security Testing
Utilizing the user interface, browsing, and the various functional procedures of the application, security testing examines issues such as the following: the user registration process, password management, user management, system timeouts for sessions and dialogs, user login processes, SSL usage, security interfaces and any other components that may influence the security of the product.
All the various scenarios and types of penetration tests aimed at assessing the quality of the operating security mechanisms are carried out. Information Security Product Evaluation provides assurance that the product will comply with security “best practices”, provide the security expected whenever it is required, and not expose its consumers to new security vulnerabilities. The purpose of this stage is to examine the planned information security level of a product with the intention of ensuring the highest standards of information security, while also focusing on product integration with the working environment.

Stage Five: Ongoing Maintenance
This stage includes maintaining security levels across new releases, regression testing to ensure consistent security, and overall process improvement. It ensures that new vulnerabilities are not introduced as a result of faulty planning or ongoing management.

The above model exhibits the different stages of the SDL process, and the necessary phases in each stage to ensure the successful implementation of the secure SDL.

Comsec’s professional experience and research has clearly confirmed that, when executed correctly by a team with the proper in-depth expertise in the SDL process, the integration of an SDL into the product development practice will in effect:

* Reduce overall costs
* Increase development efficiency
* Improve application security
* Reduce time-to-market of new functional features
* Improve the degree of the application security maturity
* Allow senior management a clear view of product security
* Ensure effectiveness of security efforts
* Ensure efficient use of development resources
* Lead to increased customer satisfaction

About Comsec:
Comsec Consulting (TASE: CMSC) is a leading provider of Information Security and Operational Risk services to organizations worldwide. Founded more than two decades ago, Comsec operates offices in the United Kingdom, the Netherlands, Poland, Turkey, France and Israel, and has affiliates in the Far East. Comsec covers all aspects of Information Security, from strategy and architecture, planning, design, ERP security services, and advanced security solutions, to PCI and ISO 27001 compliance, for all market sectors. In addition, Comsec provides deep-level application threat assessment and testing services to leading software development companies and enterprises across the globe.

How much do you spend on IT Security? Do you really know?

By Stuart Okin , managing director of Comsec Consulting UK
Published: May 1 2009 09:37 | Last updated: May 1 2009 09:37

What a business is spending on its IT security could amount to 2 per cent of revenue – a figure that should make everyone sit up and think.

Consider a global hi-tech company, operating in about 20 countries, with a user population of 10,000 and revenues of £850m. Such a company would need to make sure the following areas have all been attended to:

● Process activity – such as risk assessments, audit and penetration tests
● People activity – such as awareness campaigns, security and compliance training
● Development activity – such as re-coding applications with security vulnerabilities
● Technical controls – such as AV, firewalls, intrusion detection systems and patch management
● Operations and incident management – monitoring network and security
● Fraud prevention – including investigation services

The heads of security within companies I have spoken to over the past couple of months do not know how much they spend on IT security. But the cost of IT security could be anywhere between 0.01 and 2 per cent of revenue. In the case of the imaginary company above, this would equate to a potential £17m.
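
The arithmetic behind those figures is simple to sanity-check; the snippet below evaluates the 0.01 to 2 per cent range against the £850m revenue figure, using integer arithmetic to keep the results exact:

```python
# Back-of-the-envelope check of the figures above: IT security spend of
# between 0.01 and 2 per cent of revenue, for revenues of £850m.
revenue = 850_000_000        # £850m

low = revenue * 1 // 10_000  # 0.01 per cent of revenue
high = revenue * 2 // 100    # 2 per cent of revenue

print(f"lower bound: £{low:,}")   # £85,000
print(f"upper bound: £{high:,}")  # £17,000,000 - the potential £17m
```

The spread itself is the point: a 200-fold range between the bounds is exactly why a business case has to start by measuring what is actually spent today.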

I have been working with a large enterprise in pulling together a model to understand the true cost of IT security. Both my sponsor and I believe we can produce huge financial savings, through standardisation, consolidation, better utilisation of what is in place, improved supplier management, and implementing fraud mitigation solutions.

Any change programme requires investment. The first step, therefore, is to work out what is spent today on IT security, in order to make a business case for investment.

One head of security I spoke to at a large financial organisation called their IT security “a cottage industry within the company”. This is because the last few years have seen the security agenda break up into a fragmented model – it has been pushed away from the centre.

This does have some benefits, as it moves security closer to the coal face, resulting in improved awareness and responsiveness, but it can also lead to silo thinking, with the implementation of costly point solutions, localised standards and isolation of good practices.

The concern I have is that if a business does not know what it has across the entire organisation, then how does it know whether it has the appropriate tools and operations in place to minimise risk to the business?

Businesses should use the recession to open up opportunities: for IT security leaders, this means being proactive in order to help the business save money and improve the overall security environment.

However – be warned – this will mean the IT security leader will need to take on responsibility and accountability.

Copyright The Financial Times Limited 2009

Decreasing the Risk of Fraud and Embezzlement in ERP Systems

Written by Medi Karkashon-Mizrahi, ERP Security Division Manager

The gradual implementation of ERP systems has, on the one hand, united all of the business processes into one comprehensive system, and on the other has opened the opportunity to carry out various fraud and embezzlement actions through the system. A number of factors, such as overly general authorizations, non-segregation of duties, incorrect policies and procedures, and inadequate design and maintenance of sensitive databases, constitute breaches through which fraud and embezzlement can be conducted. Identifying and auditing these issues will significantly decrease the risk of a potential attack.

The words “fraud” and “embezzlement” are a concern for diverse organizations and managers. This issue is raised repeatedly due to the variety of fraud and embezzlement cases that have recently been exposed across the globe.

Fraud and embezzlement are carried out when three elements coincide:

1. Pressure (to carry out the act)

2. Attitude (of the malicious party)

3. Opportunity

The first two elements cannot be controlled by the organization. Opportunities to carry out fraud, however, are abundant due to failures to manage information systems properly and the non-implementation of suitable control mechanisms.

The implementation of control mechanisms and correct system management are critical issues in the implementation of an ERP system. This is because these systems store sensitive information on suppliers, clients, employees, sales, budgets, and revenue, along with additional confidential business information. Furthermore, this working environment presents various opportunities for fraud, as described below.

When a product goes into production, a great deal of effort is invested in improving the functionality of the system and in successful adoption amongst the users. As a direct result, many users (including regular employees, consultants and implementers) receive wide system authorizations. These wide authorizations are a common phenomenon in many organizations, because organizations believe they will shorten the time to production.

ERP systems are extremely complex, and there are a number of ways to receive wide authorizations across the entire system. In SAP systems, for example, user management is based on user definitions, profiles, roles and authorization objects. A user can receive wide authorizations from a problematic profile, can be assigned incorrect roles, or can receive objects that are breached.

The removal of wide system authorizations is not a simple task; an organization must recognize the various possibilities, repair them, and manage each separately. Failing to identify wide authorizations increases the organization’s exposure to fraud and embezzlement.

ERP systems register business transactions in real time. This decreases the chance both to prevent fraud and to identify it as soon as possible. For example, if a person stole inventory from a company, and at the same time adjusted the inventory records in the ERP system, it would be extremely difficult to discover the embezzlement. The existence of system audits can prevent such an occurrence.

Companies should identify the problematic focal-points in the system and implement real-time audits that decrease the exposure.

ERP systems work via one database, a fact that improves the flow of information and process integration on the one hand, and on the other causes a situation where the majority of the company’s processes are managed under one system. If in the past procurement was carried out by a buyer on system X and the payment was carried out via system Y, these two actions are now carried out in a single ERP system. Thus, if the company has not implemented a user management scheme that preserves the role distribution principle, there are most likely violations of it. Various embezzlements across the globe could have been avoided had the companies implemented the role distribution principle, which states that no process - from start to finish - is conducted by a single individual; rather, at least one other person is involved for auditing and authorization.
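
The role distribution principle can be audited mechanically: list the role pairs that should never meet in a single user, and scan the user base for them. A toy sketch of such a check - the role names and users are hypothetical, not SAP's actual role model:

```python
# Role pairs that hand one person both ends of a process (illustrative).
CONFLICTING_PAIRS = [
    ("create_purchase_order", "approve_supplier_payment"),
    ("maintain_vendor_master", "post_vendor_invoice"),
]

def sod_violations(user_roles):
    """Return (user, pair) for every user holding a conflicting role pair."""
    violations = []
    for user, roles in user_roles.items():
        for a, b in CONFLICTING_PAIRS:
            if a in roles and b in roles:
                violations.append((user, (a, b)))
    return violations

users = {
    "dana": {"create_purchase_order", "approve_supplier_payment"},
    "omer": {"create_purchase_order"},
    "yael": {"approve_supplier_payment", "maintain_vendor_master"},
}
print(sod_violations(users))  # dana holds both ends of procurement
```

The hard part in practice is not the scan but the pair list: in a real ERP system, conflicting combinations hide behind profiles and composite roles, which is why such checks need to run continuously, not once.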

Companies should identify the violations of the role distribution principle, repair them and manage them regularly.

The implementation of an ERP system usually includes the assimilation of sensitive procedures such as Signatory Authorization, Procurement Authorization, Supplier Payment Authorization, etc. In the past, these procedures were enforced through physical measures; today many organizations have adopted automatic mechanisms instead. The incorrect assimilation of these procedures may allow employees to carry out faulty actions that cannot be prevented.

Companies should implement a suitable audit framework that decreases the occurrence of these situations.

The transfer to an ERP environment is usually conducted from a variety of legacy systems which manage sensitive data such as employees’ bank account numbers and customers’ price terms. Today, all this data is managed in one system with one database. The conversion process is complex and usually requires exporting data from system X, enriching the data, and importing it into the ERP system. If the sensitive data is exposed to hostile parties during any of these three stages, it may be leaked to unauthorized parties or damaged intentionally. The infrastructure for correct and efficient management of processes in the system is based on a trustworthy database of master data (for example, payment to a supplier is carried out in accordance with the payment date, the payment method and the bank account - all master data; collecting payment from a customer is carried out in accordance with the payment method and customer discounts - both master data). Thus, the management of master data is a potential basis for conducting fraudulent activities.

Companies should identify the sensitive data and carry out periodic examinations of the data’s integrity and its maintenance by system users.
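
One simple way to run such periodic examinations is to snapshot a cryptographic hash of each sensitive master data record and re-verify it after each conversion stage. A sketch using Python's standard library - the record fields are invented for illustration:

```python
import hashlib
import json

def record_digest(record):
    """Hash a record in a canonical form so field order does not matter."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Snapshot taken at export time from the legacy system.
exported = {"supplier_id": 4711, "bank_account": "12-345-678",
            "payment_terms": "NET30"}
baseline = record_digest(exported)

# ... export, enrichment and import into the ERP system happen here ...
imported = dict(exported)                 # what landed in the ERP system
imported["bank_account"] = "12-345-999"   # a tampered field

if record_digest(imported) != baseline:
    print("master data changed in transit - investigate before go-live")
```

The same digests, recomputed on a schedule after go-live, give a cheap ongoing check that nobody has quietly edited a supplier's bank account between audits.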

The problematic focal points should be dealt with in two phases: first, identification of these focal points; and second, establishment of a continuous auditing framework that will prevent and decrease the chance of exposure.

ERP systems such as SAP and Oracle are large and complex in terms of the number of processes they implement, the huge amounts of data they store, and their mode of operation. Thus, the identification and proper treatment of the problematic focal points requires skill and deep knowledge of the system.

Comsec Information Security has developed a proprietary methodology for conducting Fraud and Embezzlement Assessments in ERP systems in general, and SAP systems in particular. The uniqueness of this methodology derives from work plans that address the specific problematic focal points in each ERP system, whilst emphasizing a continuous auditing framework.