ISSN 1361-3723 September 2005
Featured this month

Choosing your VPN
VPNs are essential today to connect roaming employees to their company network. There are two very disparate types of VPN to choose from: IPsec protocol VPNs and Secure Sockets Layer (SSL) VPNs. The two take vastly different approaches to doing the same thing, and IT security managers must weigh up which of them fits most snugly in their network. Ray Stanton at BT details what both types of VPN can and can't do. Turn to page 17...

Securing VPNs: comparing SSL and IPsec

  IPsec VPNs                                  SSL VPNs
  Sits at Layer 3 of the network stack        Sits at Layers 4-5 of the network stack
  Connects hosts to entire private networks   Connects users to applications
  Client software must be installed on the    Only requires a standard web browser -
  remote device                               no client software necessary
  Not really interoperable                    Very interoperable
  More mature                                 Legacy systems must be adapted

Contents

NEWS
  Terrorists exploit Internet ........................ 1
  Swedish adulterer betrayed by mobile ............... 2
  UK banks sent out vulnerable PIN numbers .......... 20

FEATURES
  War & Peace in Cyberspace:
    Sarbanes-Oxley: maybe a blessing, maybe a curse .. 4
  BS7799:
    Skimming the technical and legal aspects of
    BS7799 can give a false sense of security ........ 8
  Vulnerability disclosure:
    Cisco Black Hat fiasco reopens old wounds ....... 10
  Biometrics:
    Biometrics - the promise versus the practice .... 12
  VPNs:
    Securing VPNs: comparing SSL and IPsec .......... 17

REGULAR
  News in brief ...................................... 3
  Calendar .......................................... 20

Terrorists exploit Internet
SA Mathieson

Terrorist groups prefer to use the Internet to spread information rather than attack it, said academics at Oxford University's 'Safety and Security in a Networked World' conference on 8 September. Terrorism aims to communicate, so destroying such a powerful means of communication would be counter-productive, said Shyam Tekwani, an assistant professor at Singapore's Nanyang Technological University's school of communications and information. "It's very useful for them to keep it running," he told a session on terrorism.

Prof Tekwani, an expert on the Tamil Tigers of Sri Lanka, said that the group had established its first website in 1993, using a web server at the University of Texas in Austin. In 1997, the site was moved to servers in the UK, after the US banned it. That is where the group's main publicity office remains, although the site has since moved to Australia and Canada. He said that the reason for the years of hosting in Western countries seemed to be that no crimes had been committed in those specific countries.

"Nations are co-operating internationally to destroy terrorist networks on the ground," said Prof Tekwani. "My contention is that online networks are no less threatening." He described the Tamil Tigers' web presence as "very skillfully done", adding that when he asked students to examine it, they were won over to the cause. Turn to page 2...
NEWS
Swedish adulterer betrayed by mobile
S.A. Mathieson

The public is starting to realise that mobile telephones can betray their owners, according to academics speaking on 8 September at an Oxford University infosecurity conference.

Ylva Hård af Segerstad of Gothenburg University and Robert Burnett of Karlstad University said that a 2004 Swedish murder case, in which deleted then reconstructed text messages were vital in convicting a man, has sparked a national debate on privacy. "What you have written and sent can be used against you in ways you can never dream of," Dr af Segerstad told a session on privacy at the Oxford Internet Institute's 'Safety and Security in a Networked World' conference.

The case - which the academics said has generated nearly 12,000 articles in the Swedish press - concerned an au pair in a town called Knutby. She turned herself in to police, admitting she had shot and killed the wife of the pastor for whom she worked, as well as shooting a neighbour, who survived. She did not admit she had an accomplice, but the police used Norwegian IT forensics firm Ibas to recover deleted SMS text messages sent between her and the pastor, a member of an extreme Christian sect. Some of the text messages from the pastor - which were sent anonymously, by use of a code at the start of the message - seemed to encourage the au pair to kill the pastor's wife through passages from the Bible. She said she believed them to be text messages from God. The pastor, who was having affairs with both the au pair and the wife of the neighbour she attempted to murder, was sentenced to life in prison for inciting the murder, while the au pair was sentenced to psychiatric care.

"What we see is the dark side of technology," said Dr af Segerstad. "We know how to use phones to make calls and send messages, but the same technology can be used against us."

Professor Helen Margetts of the Oxford Internet Institute took issue with the idea that the public should be surprised that destroyed messages could be recovered, recalling that in Agatha Christie's 1934 detective novel Murder on the Orient Express, Hercule Poirot reconstructs an apparently destroyed message from a piece of burnt paper. "It seems to me that all is fair in love, war and court cases. I don't really see what's different here with the text message," she said.

The session took place as European justice ministers considered a UK proposal that all EU telecoms and internet service providers be forced to retain three years' worth of communications data, including the numbers called and the base stations used by mobile phone users, and the header information of emails.

Gordon Gow of the London School of Economics' department of media and communications discussed whether pre-paid mobile telephones offer as much anonymity as law enforcement agencies and governments think - a belief that has led countries including Australia to enforce compulsory identification of customers when such phones are sold. Pre-paid phones make up more than 60% of all mobiles in the European Union. Dr Gow said that even if the name and address of the customer is not known, a mobile telephone's traffic data provides strong pointers towards their identity, such as a series of locations, patterns of behaviour and the identity of organisations of which they are customers, such as banks.

Terrorists exploit Internet
Continued from page 1...

"They focus the site on the cause, the purity of the cause," he said, rather than on the group's actions, which include suicide bombings - at least in terms of the English-language sites. One reason is to appeal to the diaspora, for sympathy and for funding. Prof Tekwani said that, since 11 September 2001, the group has given itself a fresh image through its website, with pictures of the leader next to a tiger replaced by images aiming to show him as a man of peace.

Maura Conway, of Trinity College Dublin's department of political science, said that the top uses of the internet by terrorist groups are information provision, financing, networking and planning, recruitment, and information gathering through other websites. She said that financing took a number of forms, including direct solicitation and fake charity appeals, but also exploitation of e-commerce, either through credit card fraud or in some cases through providing genuine IT services. Ms Conway said that recruitment, at least through English-language websites, focuses more on sympathisers who might lobby politicians on the groups' behalf than on potential bombers, particularly for ethnic nationalist organisations. "You can imagine that recruiting via the internet is not so smart," she said, for those intending to carry out attacks.

The session included discussion of whether terrorist websites were widely viewed. Prof Tekwani said that when he asks the Tamil Tigers' UK office for statistics for their site, "every time I am given fantastical figures". However, such sites may have an impact if they are used by mainstream media. Ms Conway said that the websites of Palestinian terrorist groups have been used as a source by the Jerusalem Post since the mid-1990s. Calvert Jones, a doctoral student at the University of California, Berkeley, said that she had found that sites run by the two sides in the Kosovo conflict of the late 1990s were sometimes quoted as news sources by newspapers and magazines. She said that over time the sites appeared to adapt their design to give themselves greater credibility as news sources - while encouraging readers to visit online polls and bulletin boards run by mainstream media websites, such as those of CNN, the Guardian and the New York Times, to add messages and votes.

Abstracts of papers delivered at the conference are available at http://www.oii.ox.ac.uk/research/cybersafety/
In brief

FREE WEBSITE RECORDS SPAM EMAIL BEHAVIOUR
A new free website has been launched that records whether computers have been sending spam. The TrustedSource portal gives information about the reputation of an email sender by domain and IP address, giving email senders a credit rating. The portal allows administrators to troubleshoot, research and analyze the various senders that may be sending mail into their environments. Administrators can view the current and historical reputation and sending patterns of senders, as well as analytical information such as country of origin, network ownership, and hosts for known senders for each domain.

UK GOVERNMENT RATES IT SECURITY PRODUCTS
The UK Government has started to accredit IT security products, so that information security managers get assurance about the quality of products. The CSIA Claims Tested Mark Scheme (CTT) went live on 8 September, and encryption company BeCrypt was the first company to be awarded the Mark at an award ceremony in the UK. SecureWave also had two products awarded the CTT mark of approval.

BRAZIL ARRESTS 85 FOR BANK HACKING
Eighty-five people were arrested in Brazil for hacking into banks and stealing $33 million, according to reports from Reuters. The investigation was code-named Operation Pegasus.

STANDARDS FOR BIOMETRICS LAUNCHED
The international standards body BSI has released standards for biometrics, to which passport and identity card schemes will have to adhere. The standards will ensure interoperability between different biometric products, and will also help prevent mistakes in ID data, claims the BSI.

ZOTOB WORM SPREADS VIA WINDOWS 2000 PLUG AND PLAY; ARRESTS MADE
Two variants of the Zotob worm have spread using TCP port 445, exploiting the Windows 2000 Plug and Play bug to seize control of the operating system. The Zotob worm has caused outages at more than 100 companies, including ABC and the New York Times, and has infected American Express servers. Microsoft says: "A remote code execution vulnerability exists in Plug and Play that could allow an attacker who successfully exploited this vulnerability to take complete control of the affected system." Microsoft has issued a patch. Infected systems are told to await further instructions on an Internet Relay Chat channel, with the danger that they could then be used to attack other systems. Two arrests have been made: 18-year-old Farid Essebar from Morocco and 21-year-old Atilla Ekici from Turkey, who are suspected of creating Zotob as well as the Rbot and Mytob worms.

NEW YORK STATE MAKES REPORTING BREACHES A LEGAL OBLIGATION
New York State has made it a legal requirement for companies and state agencies to reveal data security breaches to customers. The new bill demands that New York customers are told when the security of their data has been compromised. New York's new law, A.4254, applies to information such as Social Security numbers, drivers' license numbers, or credit card numbers that are not encrypted. The legislation is modelled on California's breach notification law, which took effect in 2003 and came to prominence in the wake of the ChoicePoint breach, which exposed information on around 30,000 customers in California. Under the law, companies with customers in New York State have to notify the consumer of any breach as soon as possible. The law also requires local governments in New York to develop a policy on doing the same. The New York Attorney General can prosecute those that fail to comply.

POSSIBLE RISK FOR WINDOWS VISTA BETA TESTERS
Beta testers for Microsoft's Windows Vista (formerly 'Longhorn') have discovered a networking feature in the operating system that could pose a security risk. Windows Vista Beta 1 activates by default a peer-to-peer networking feature called the 'peer name resolution protocol' (PNRP), which could be used for applications such as online gaming. After installing Beta 1, some users have noticed an unusually high level of traffic on their machines. The source is reported to have been found by George Bakos of the Institute for Security Technology Studies at Dartmouth College.

IRISH SHOPPERS GET DISPOSABLE CREDIT CARDS
The bank Permanent TSB is releasing a disposable credit card voucher for Irish shoppers. The voucher will have a set credit limit and a unique credit card number, and can be thrown away after it has been used. It has a maximum credit limit of 350 euros and a minimum of 20 euros. It will be sold in shops from 26 September and doesn't require customers to have a conventional credit card. "This new voucher will enable both sets of people to avail of all the benefits of shopping online or on the telephone in a controlled, prepaid way and without any security issues," said Niall O'Grady, head of marketing at Permanent TSB. The bank has teamed up with Irish company 3V Transaction Services, European company Alphyra, and Visa for the programme.
WAR & PEACE IN CYBERSPACE
Sarbanes-Oxley: maybe a blessing, maybe a curse
Richard Power and Dario Forte

Sarbanes-Oxley can bring both benefits and heartache to IT security managers. This article looks at the advantages and the headaches that the legislation can cause.

In the East, there is a legend, repeated in several cultures with different twists, about a wise man who many villagers thought was just a fool. Wherever he went, whatever befell him, he would simply shrug and say, "Maybe a blessing, maybe a curse." His behaviour perplexed the villagers. But, in time, the wisdom within his foolishness was revealed over and over again.

For example, one day the wise man was given a strong, swift horse. The villagers congratulated him. But he simply shrugged his shoulders and said, "Maybe a blessing, maybe a curse." Then, the magnificent horse disappeared. The villagers attempted to console him. But he just said, "Maybe a blessing, maybe a curse." Then, the horse returned with a herd of strong, swift horses. Surely this turn of events would evoke a more celebratory response from the fool? No: "Maybe a blessing, maybe a curse." Then, his only son was thrown by one of the horses and broke his hip. The villagers were amazed: "Well, how did he know... it was, indeed, a curse." But the villagers were really baffled when the man shrugged again and said, "Maybe a blessing, maybe a curse." Then the war came. All of the villagers' sons went off to die in battle, except of course for the fool's only son, whose hip had been broken. "Maybe a blessing, maybe a curse."

If you are responsible for cyber security in a corporate environment, then you are familiar with how it feels to have your wisdom mistaken for foolishness. You also understand how something that can look and feel like a blessing can turn into its opposite, and vice versa. When it comes to the impact of Sarbanes-Oxley (aka SOX), nothing has changed. Maybe it's a blessing, maybe it's a curse - in other words, probably a bit of both. Let's take a look.
The blessing of SOX (maybe)

In mid-2001, Enron was an energy trading, natural gas, and electric utilities company, listed as the seventh largest company in the US. But by the end of the year, when its unscrupulous accounting techniques were revealed, Enron went bankrupt, decimating personal retirement plans, causing huge losses to state pension funds and resulting in the demise of Arthur Andersen, one of the big five global accounting firms. The Enron scandal also led to sweeping criminal investigations that cast a huge shadow of political scandal over the Bush administration. Some argue that Ken "Kenny Boy" Lay, the man at the top of the Enron pyramid scheme, was both a big contributor to Bush's political campaigns and the mastermind behind Enron's "success." The public's attention was taken away from the Enron scandal by the smoke billowing from the horrific terrorist attack on the World Trade Center in New York on 11 September. But lo and behold, Enron was just the first paroxysm in a series of paroxysms that undermined investor confidence in the financial system itself. For example:
Tyco

Tyco International Ltd., a conglomerate with businesses in electronic components, health care, fire safety, security, and fluid control, was plunged into crisis when its former chairman and chief executive Dennis Kozlowski and former chief financial officer Mark H. Swartz were accused of the theft of US$600 million from the company. During their trial in March 2004, they contended that the board of directors had authorized it as compensation. After a 2005 re-trial, both men were convicted on 29 of the 30 counts against them and sentenced to up to 25 years in prison.

WorldCom

WorldCom, built mostly on mergers and acquisitions, was the second largest long-distance phone company in the US. In 2002, an internal audit discovered that US$3.8 billion had been "miscounted," and the US Securities and Exchange Commission (SEC) opened an investigation. WorldCom filed for Chapter 11 bankruptcy protection in the largest such filing in the history of the United States. Within a few weeks, an additional $3.3 billion in improper accounting since 1999 was announced. By the end of 2003, it was estimated that the company's assets had been inflated by around $12 billion. In 2005, WorldCom founder and former CEO Bernard Ebbers was found guilty on charges of fraud and conspiracy and sentenced to 25 years in prison.

But this unprecedented series of corporate governance scandals isn't limited to US high rollers and their bean counters:

Royal Ahold

In February 2003, Royal Ahold of the Netherlands, the world's third-largest grocer, a global empire that once comprised
more than 5,600 stores on four continents, admitted overstating its 2001 and 2002 profits by at least $500m because some managers at an American offshoot, US Foodservice, had been booking unearned revenues. Ahold also revealed that it had found some possibly illegal transactions at an Argentine subsidiary. Furthermore, it discovered that it did not in fact control its Swedish subsidiary, which had until then been 100% consolidated in its accounts. The forensic investigation, concluded in July 2003, revealed 470 accounting irregularities and 278 internal control weaknesses.
Parmalat SpA

In late 2003, Parmalat SpA, an Italian dairy and food company and Europe's biggest dairy company, with worldwide operations that included almost 140 production centers, over 36,000 workers and 5,000 Italian dairy farms, declared bankruptcy. An 8 billion euro discrepancy was revealed in Parmalat's accounting records. Calisto Tanzi, Parmalat's founder, a college dropout who had become a symbol of stunning Horatio Alger-style success, was arrested and charged with fraud and money laundering. The questionable accounting practices that led to Parmalat's sudden collapse included a scheme in which it sold itself credit-linked notes, in effect placing a bet on its own creditworthiness in order to conjure up an asset out of thin air.
Enter SOX

In July 2002, in the US, Sarbanes-Oxley, officially titled the Public Company Accounting Reform and Investor Protection Act of 2002 and commonly referred to as SOX, was signed into law. It is named after its sponsors, Senator Paul Sarbanes (D-MD) and Representative Michael G. Oxley (R-OH). SOX is aimed at protecting investors, and restoring investor confidence, by improving the accuracy and reliability of corporate disclosures. The act covers issues such as establishing a public company accounting oversight board, auditor independence, corporate responsibility and enhanced financial disclosure. Of great interest to those of us dedicated to promoting cyber security, SOX mandates it -- sort of. Clearly, a blessing, right? Well, maybe.
The curse of SOX (maybe)

What are the challenges and difficulties that SOX presents companies -- in general, and European companies in particular? According to Ben Rothke of ThruPoint (more from him below), there are many, including but not limited to:
• Getting an understanding of the often vague and abstract compliance requirements.
• Dealing with the massive cost of SOX compliance.
• Keeping costs in line.
• Ensuring segregation of duties or critical authorizations, so that compliance activities can be tracked and verified.
• Getting control objectives met in a timely manner.
• Documentation.
• Monitoring and reporting of workflow.

Lawrence D. Dietz, Senior Director of Government Strategies and Solutions for Symantec, has a unique perspective as both a lawyer and an information security professional who has lectured on SOX in Europe as well as the US. "Most companies, whether in Europe or the US," according to Dietz, "have no real idea of what is adequate." He observes that they are subject to varying auditor interpretations, mistaken belief in a silver-bullet 'solution', and a lack of clear, well-understood policies and procedures.

Where do you begin? If you do a Google search you will find articles on how SOX should compel you to implement security technologies for controlling
everything from instant messaging to wireless. How do you get your mind around the real issues? Of course, the confusion is exacerbated by consultants whose own numbers depend on convincing you that you cannot figure it out on your own. Well, so let's start with instant messaging, wireless, and so on. Are such technologies really issues in SOX-related work, especially if SOX is limited to the reliability of financial reporting? The technologies are immaterial. The governance, policies, procedures and perhaps attention to best practices and the normal attributes of common law negligence - i.e., gravity of the harm, likelihood of occurrence, cost to prevent, standard of care, duty of care - are what is material. Organizations that have stopped IM and wireless, or spent money on software or devices to control them, have done so because of the inability to adequately archive, and a general lack of security functionality in, these technologies - not from any compulsion to comply with SOX.

There are many SOX-related IT security controls that European companies answerable to SOX should look at, including:
• Audit logs.
• Access controls.
• Business continuity plans.
• Training and awareness.
• Sanctions for policy non-compliance.
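To make the first of those controls concrete, the sketch below (our own illustration, not drawn from the SOX text or any particular product) shows one way an application might write tamper-evident audit log entries, chaining each record to the previous one so that after-the-fact edits are detectable on verification. A production control would use a cryptographic hash such as SHA-256; the simple FNV-1a hash here merely keeps the example self-contained.

/* Minimal sketch of tamper-evident audit logging: each entry is folded into
 * a running hash, so a later edit to any line breaks the chain. Illustrative
 * only -- a real control would use a cryptographic hash and secured storage. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t fnv1a(uint64_t h, const char *s)
{
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;          /* FNV 64-bit prime */
    }
    return h;
}

/* Append one entry; 'chain' carries the running hash between calls. */
static void audit_log(FILE *f, uint64_t *chain, const char *user,
                      const char *action)
{
    char entry[256];

    snprintf(entry, sizeof entry, "%ld %s %s", (long)time(NULL), user, action);
    *chain = fnv1a(*chain, entry);      /* fold this entry into the chain */
    fprintf(f, "%s chain=%016llx\n", entry, (unsigned long long)*chain);
}

int main(void)
{
    FILE *f = fopen("audit.log", "a");
    if (!f) return 1;

    uint64_t chain = 1469598103934665603ULL;   /* FNV offset basis */
    audit_log(f, &chain, "jsmith", "LOGIN");
    audit_log(f, &chain, "jsmith", "READ general-ledger");
    fclose(f);
    return 0;
}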
Of particular importance is how your logical security tools and techniques are implemented, configured, and administered to enable restriction of access to data and programs. The COSO control objectives provide a comprehensive list. Section 404 of the Sarbanes-Oxley Act mandates processes to protect corporate assets. Because business information, such as that about customers, is clearly an asset, the section could come into play if a company fails to guard the security of that "asset." Some other sections of SOX, in particular Section 101 and Section 302, have
information security implications. For example, SOX mandates that audit committees establish procedures for the confidential and anonymous submission by employees of concerns regarding questionable accounting or auditing matters, and prohibits retaliation against whistleblowers. (Maybe a blessing, maybe a curse?) Protecting whistleblowers is dear to our hearts, and probably yours, but various privacy offices in EU states, for example France, oppose the procedures spelled out in SOX and have stated that they go against their countries' privacy laws and practices.

There is a lot of talk about frameworks, especially COSO and COBIT (particularly from those trying to market their SOX consulting services). COSO is the best practice for defining internal control structures in all industries, and provides internal control criteria in terms of five integrated components. COBIT is modeled on COSO. Many companies are using the COSO framework as a means of meeting the SOX 404 requirements, since COSO is the only framework the SEC has formally recognized for establishing and monitoring internal controls, and many companies are also using the COBIT framework as a means of meeting the SOX 404 technology control requirements. COSO and COBIT have become so ubiquitous that for any firm to try to develop something else would be silly. Some of the "Big 4" firms have developed their own unique frameworks, but everything comes back to COSO/COBIT.

Alternatively, as a more proactive approach, we suggest that you take a good long look at ISO/BS 7799. If you go for ISO/BS 7799 compliance and certification, and do it for real, regulators would be hard-pressed to find you at fault. By the way, a new version of ISO 17799 has just been released, giving more tools, especially from the incident management standpoint.

What are the benefits (beyond fulfilling the 404 requirement) that SOX presents to companies in general, and European companies in particular?
The biggest benefit, according to ThruPoint's Rothke, is that in the process of investigating their infrastructure, many organizations found inefficiencies and redundancies. By identifying these, companies have been able to save huge amounts.
What you need to know

How do European companies know if they are answerable to SOX? Any company that is publicly traded within the US is subject to SOX requirements; that is, any company is required to comply with SOX if it is registered with the SEC as a publicly traded organization, is traded on one of the US stock markets or floats capital in one of the US bond markets. SOX applies to any non-US company registered on US exchanges under either the Securities Act or the Exchange Act, regardless of that company's country of incorporation or corporate domicile.

How should the IT managers of European companies get their minds around what is to be done for SOX? Ben Rothke, Senior Security Consultant for ThruPoint and one of the information security professionals out on the front line, provides some sage advice: "To determine the scope of their SOX efforts, an organization should do the following:
• Start with the business control structure and processes for financial reporting.
• Reduce the processes to a set of procedures.
• Identify the control objectives that are supported by those procedures.
• Identify the applications known to the business to support their controls."

"When this process is completed, the organization will have a list of applications that need to be worked on. An organization must know two things well: their own IT infrastructure and the SOX requirements. If they know both of those, they are way ahead of
the curve in gaining SOX compliance. But far too many organizations simply call a Big 4 firm and say 'get us compliant'. This is like getting into a cab and telling the driver: take me somewhere."

Dietz recommends a seven-step approach to compliance:
1. Establish a compliance committee.
2. Define scope. Conduct an audit to determine where changes need to be made. (Keep in mind that the auditor that signs a company's financial statement cannot also implement recommended changes.) Determine the desired state for the company, with appropriate processes and controls to ensure compliance with goals. Gartner says that CIOs should expect to participate extensively in the audit process, usually as a member of the compliance committee.
3. Perform a gap analysis. Once the audit is completed, companies will be in a position to ask: where are we in relation to Sarbanes-Oxley requirements? The gap analysis will lead to a full list of requirements for meeting compliance.
4. Modify and/or implement controls. Initiatives must be implemented to upgrade systems and processes that are not in compliance with the law. Project timelines are critical here.
5. Measure compliance. Once the desired state of compliance is implemented, all processes need to be regularly assessed over time for their quality and compliance with control requirements.
6. Report/communicate status. After the final audit has been attested, it is time to communicate the status to appropriate management.
7. Learn, adapt and continue work. Recognise the cyclical implications.

In SOX-related work, a lot of emphasis and money is focused on the need for documentation of controls. This aspect is critical. Rothke elaborates:
"From a documentation perspective, the organization should start with the list of control objectives and documentation on associated practices. They should then put together a project plan with the following milestones:
• Identify any application that does not meet control objectives, that is, where documentation on the corresponding practice is missing or inadequate to cover the control objective.
• Notify IT owners of deficient applications that they must change current practice and update documentation.
• Review IT owners' planned practices and implementation timeframes to ensure they are compliant.
• Review completed gap mitigation and update documentation."

Would European companies need to provide this documentation in English? Yes. Regulation S-T Rule 306 governs the treatment of foreign language documents for electronic filings. This rule currently prohibits the filing of foreign language documents in electronic format. It also requires the electronic submission of a fair and accurate English translation of any document in a foreign language that is required as an exhibit or attachment to a filing. Thus, under Rule 306, an electronic filer currently does not have the option afforded to paper filers of submitting an English summary or "version" of a foreign language document instead of an English translation. (http://www.sec.gov/rules/final/33-8099.htm)
Regulate this... attest to that

What is the future of SOX in general, and for European companies in particular? What is the significance of SOX? Has it made a positive impact on cyber security? Has it hurt us?

Rebecca Herold (www.rebeccaherold.com), author of Practical Guide to Compliance and Security Risk and other books on how the complex issues of privacy, security and risk interrelate, envisages more regulation coming down the pipe that in some way codifies information security. "In light of recent security breaches, along with a much greater public awareness of information security issues, it is anticipated the SEC will specifically address information security within the context of SOX. Even if they do not, though, the spotlight now upon information security makes it very important for reporting companies to be aware of the state of security for their networks and systems and ensure they have implemented policies, procedures and tools to protect the systems.

"The significance of SOX can be seen in the new bills being considered. With the numerous recent information security breaches (ChoicePoint, Bank of America, LexisNexis, etc.) there is a call for greater accountability on the part of businesses for information security. For example, at a recent meeting of the Senate Banking Committee, plans were reported for proposed legislation that would regulate data brokers, including a plan by Senator Jon Corzine (D-NJ) that would require the FTC to create information security guidelines for all companies handling personally identifiable information. Corzine borrowed a page from SOX when he included a requirement for executives to sign off on their measures. It is becoming a trend. Regulations will more commonly include this executive sign-off for information security and privacy, and make executives personally accountable. SOX has had a very clear impact on information protection bills and future regulations."

Rothke sees a beneficial maturation process ahead: "SOX will definitely not go away. The good thing is that there will be a lot more people with SOX experience, so that should drive some of the consulting rates down. In addition, the roles and responsibilities of those in the compliance and IT groups will be better
defined, which will make compliance easier. As time goes by, SOX will be ingrained into corporate mind-sets, so the idea of SOX compliance will not be so foreign to so many people. From a product perspective, the software that aids in SOX will certainly mature and get better. This will make compliance easier. Some of the SOX requirements may ease up a bit."

And, of course, we have not even discussed what kind of European law will evolve. Actually, some country laws have started to be "SOX-like". Italian law 231/2001, for example, clarifies the responsibility of top management in cases of fraud such as Enron (and Parmalat). But some technical/security management requirements established by SOX are instead covered by privacy law. Meanwhile, the SEC is trying to apply a SOX-like paradigm to small businesses...

Maybe a blessing, maybe a curse... That's why we stress thinking about an approach that focuses on a comprehensive information security framework, e.g. ISO/BS 7799, if at all feasible for your organization. Within such a context, you will not only be developing what you really need to make a significant impact on risk mitigation, and be able to offer objective corroboration (i.e., certification) -- in the process, you will also be addressing most of what you need to pull together for SOX and its ilk. Indeed, you will even have it documented already.
About the authors

Richard Power (www.wordsofpower.net) is an internationally recognized authority on cyber crime, terrorism and espionage. He has given speeches worldwide and covered the CSI/FBI Survey. His book Tangled Web is considered a must-read.

Dario Forte (www.dflabs.com) is one of the world's leading experts on incident management and digital forensics. A former police officer, he has been a keynote speaker at the Black Hat Briefings and a lecturer at many internationally recognized conferences. He is also a professor at the University of Milan at Crema.
BS7799
Skimming the technical and legal aspects of BS7799 can give a false sense of security
Jacqui Chau, Consultant, Insight Consulting

BS7799 is an information security management standard, and should not be mistaken for a technology standard. It is highly regarded within the security management industry, but how many security professionals have asked the question: "Are we getting the most out of our BS7799 programme?" If a thorough risk assessment is performed, it can recommend particular technology controls, which are then implemented to mitigate particular risks. The technical controls, and the way in which they are implemented, however, can be interpreted differently from person to person. BS7799 provides a framework in which security should be managed, but it is important to explore further the high-level control policies, and in particular those within technical and legal areas. Even a fully managed BS7799 compliant/certified service will not fully protect information, and could fail.

BS7799 certification can often give organizations a false sense of security, as managers often take the BS7799 "certified" or "compliant" status to mean "a secure system" on which nothing further needs to be done. The standard emphasises continuous improvement, which forces organizations to keep thinking about security. However, organizations often do not realise that incorrect configuration or use of technology controls can still - and often does - leave security vulnerabilities in place and lead to incidents. The interpretation of the controls, and the level of diligence with which they are implemented, varies depending on the experience, knowledge, resources and interest of the staff involved. For example, under network access controls, one person could interpret this as a network with the network equipment in a locked room, whereas for another it could mean a network with a hardened firewall, tested and with IDS to detect unusual traffic behaviour. Either may pass for compliance with the standard, as long as there are processes and procedures behind these controls, as the standard is a framework and guidance for the management of security. However, these controls in isolation may not provide the security that organizations expect or require.
The level of effort needed to comply with BS7799 is often underestimated. A familiar requirement in service contracts and hosting provisions today is "The service must comply with BS7799". This requirement can equate to more work and effort on the part of the supplier than first estimated. An important part of BS7799 is performing a risk assessment to determine the risks and vulnerabilities within a service or system. It is only upon completion of a risk assessment that an organization can fully understand the real level of work involved. The way in which the risk assessment outputs are managed may also need to be reconsidered by organizations, as the extent to which organizations pursue each control for it to be deemed 'compliant' or 'installed' can vary. The Statement of Applicability (SoA) supplements the risk assessment and records the status of the 132 controls: where controls are implemented (or not), where they will be implemented, or where other controls address the risk.
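In data terms, an SoA is essentially a table of control records. The sketch below is our own hypothetical illustration of how such records might be represented; the status values, field names and justifications are not taken from the standard.

/* Hypothetical sketch of a Statement of Applicability held as data: one
 * record per BS7799 control, with its status and justification. */
#include <stdio.h>

enum soa_status {
    IMPLEMENTED,        /* control is in place */
    PLANNED,            /* will be implemented */
    NOT_APPLICABLE,     /* excluded, with justification */
    COVERED_ELSEWHERE   /* another control addresses the risk */
};

struct soa_entry {
    const char *control_ref;    /* e.g. "A.8.5.1" (network controls) */
    enum soa_status status;
    const char *justification;  /* why this status was chosen */
};

int main(void)
{
    /* A real SoA records the status of all 132 controls; two shown here,
     * and the second reference is invented for the example. */
    const struct soa_entry soa[] = {
        { "A.8.5.1", IMPLEMENTED, "Perimeter firewall and IDS deployed" },
        { "A.x.y.z", PLANNED,     "VPN rollout scheduled for next quarter" },
    };

    for (size_t i = 0; i < sizeof soa / sizeof soa[0]; i++)
        printf("%-8s status=%d  %s\n", soa[i].control_ref,
               (int)soa[i].status, soa[i].justification);
    return 0;
}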
The amount of work involved in gathering data for the risk assessment, determining what controls are necessary to mitigate the risks identified, and deciding how those controls should be implemented is, again, very easy to underestimate. The technology controls that result from a risk assessment and SoA are often made the responsibility of the IT support teams to manage. Support personnel often do not have the time, resources, knowledge or experience to handle this type of "consulting" work on top of their everyday support duties.
Ineffective security controls cause vulnerabilities

Although technology is continuously changing, there are several basic technologies that may be part of the security of most system types. The BS7799 standard is essentially a toolbox of 132 'recommended' controls. Some of these controls can be further expanded into supporting industry best practice guidelines for particular technology and legal controls. If this interpretation is not performed, the controls identified within the SoA can end up far less effective. Examples include a poorly deployed or managed firewall with open rule bases; operating system builds that have not had security features turned on or patches applied; and even password reset procedures that do not verify the identity of the user making the request - all of which can open opportunities for unauthorized access. There is also little scope for technical assurance within the standard itself, little that asks the question "is this control working as it should, and in the most effective way?" Penetration tests can highlight poorly configured technologies and vulnerabilities. However, the standard does not explicitly state that a health check is required. It only recommends regular security tests, and this may again be interpreted differently by each person implementing the BS7799 controls.
Technology controls are an important aspect of BS7799

It is recognized that, depending on the scope of the BS7799 ISMS, technical controls may be more or less important. However, in most circumstances technology is involved and is critical. There are few departments or organizations that don't rely heavily on technology, whether for Internet access, email or just office applications. Industry must therefore recognise that implementations of the BS7799 standard must optimise the technology controls to achieve an acceptable level of security. It is imperative that organizations consider the full scope of BS7799 requirements. Take, for example, network controls: "A range of controls shall be implemented to achieve and maintain security in networks" (A.8.5.1). This is clearly a policy statement rather than a single line item to be complied with in isolation. Its scope may include firewalls at the network perimeter, the use of VPNs for both site-to-site and RAS traffic, intrusion detection systems and internal network partitioning (or de-perimeterisation). Interpreting these high-level statements into the lower level of detail needed to specify, deploy and configure a particular technical control requires expertise not only in the review and design of technical security solutions, but also an ability to match these to the business requirements. Policies can do only so much - in the modern business environment, technology is often necessary to enforce the policy.
Building a technology control toolbox

Within the security arena there are a number of industry best practice controls and other standards which could be incorporated into a toolbox (or 'code of practice') of technology controls for organizations to draw from. For example:
Network access controls:
• DMZ guidelines as to what should be put in the secure zone, and how it can be protected with firewalls, routers and switches.
• Hardening and locking down ports, services and addresses on a firewall.
• Optional controls such as honeypots and intrusion detection devices.
• Wireless network hardening guides, such as those for the 802.11 family of standards.
• VPN access to prevent internal systems from being accessed from unauthorized systems and to protect data.
• Terminal services to allow secure remote control/support of servers.
Application/operating system access controls:
• Strong authentication using two-factor authentication devices or biometrics.
• Directory services to define group, system and individual/role privileges.
• Password management guidelines and rules covering the sharing of passwords, password aging and password quality.
• File system access controls for securing administrative/system directories, user permissions etc.
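As an illustration of how one of those guidelines hardens into code, the sketch below applies a hypothetical password-quality rule. The thresholds (minimum eight characters, at least three character classes) are our own illustration, not taken from BS7799.

/* A minimal sketch of the kind of password-quality rule a password
 * management guideline might mandate. Thresholds are illustrative. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static int password_ok(const char *pw)
{
    int lower = 0, upper = 0, digit = 0, other = 0;

    if (strlen(pw) < 8)                 /* illustrative minimum length */
        return 0;
    for (; *pw; pw++) {
        if (islower((unsigned char)*pw))      lower = 1;
        else if (isupper((unsigned char)*pw)) upper = 1;
        else if (isdigit((unsigned char)*pw)) digit = 1;
        else                                  other = 1;
    }
    return lower + upper + digit + other >= 3;  /* three classes required */
}

int main(void)
{
    printf("%d\n", password_ok("carrots"));      /* 0: too short, one class */
    printf("%d\n", password_ok("C4rrots!2005")); /* 1: long enough, mixed   */
    return 0;
}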
Protection against malicious software:
• Virus checking on PCs, laptops, PDAs, mobiles and servers.
• Spyware checking on PCs and laptops.
• Intrusion detection/prevention on hosts for known virus/Trojan variants.
Cryptographic controls:
• Hardware Security Modules (HSMs) to protect cryptographic materials.
• SSL and other secure protocols.
System controls:
• Recommendations for hardening standard operating systems, such as locking down unnecessary ports and services, removing unnecessary user accounts, and enforcing the changing of passwords.
• Recommendations for installing the latest patches, best practice for installing and testing security patches, and the use of signed packages.
Legal compliance and BS7799

Section A12 of BS7799 contains a requirement for compliance with legislation.
The high-level objective - "To avoid breaches of any criminal and civil law, statutory, regulatory or contractual obligations and any security requirements" - clearly indicates the need for legal compliance. However, the policy requirements may not be clear to people who are not familiar with the underlying legal issues. Companies are finding it harder to comply with legislation without factoring in their use of technology. Many companies within the financial sectors (not just US-owned ones) are having to comply with the Sarbanes-Oxley (SOX) Act. It is difficult for these organizations to keep a paper audit trail in order to comply with this law, and therefore technical controls have to be installed. Another example is a company's commitment to the duty of care in common law, under which an organization can be sued for damages if it fails to protect its employees from, for example, receipt of illegal material (e.g. pornographic material) through company system resources such as email. Methods by which companies can control this include limitations on Internet access and continuous monitoring of systems for prohibited content. However, it is increasingly difficult to comply with the expanding number of laws and regulations, and when they are expressed in a standard such as BS7799, which mandates high-level statements such as "Identification of applicable legislation", "Intellectual Property Rights" or "Data protection and privacy of personal information", the task can often be underestimated by the implementers of the BS7799 standard if they are not legally experienced. Establishing guidelines on how organizations should comply with different legislative aspects could ease this process. For example:
• Security of system and data files - ensuring that only authorized people can access/modify/view personal information, to give protection under the Data Protection Act 1998 and the Computer Misuse Act 1990.
• Monitoring system access and use - ensuring that employees don't view illicit web pages or send pornographic material, as part of ongoing monitoring
to help comply with the Computer Misuse Act 1990, the Protection of Children Act 1999 and the Protection from Harassment Act 1997.
• Ensuring accountability for all actions, as part of the Computer Misuse Act 1990.

Legal compliance requirements often dictate these additional technical controls, even if the business itself doesn't realise it.
Summary

Organizations need to perform a targeted, independent technical review of the environment, or of specific systems within a BS7799 scope, to ascertain the level of security achieved by the technical controls in place. If undertaken prior to other BS7799 work, this can be used to drive controls that may be required within the SoA, as well as highlighting at an early stage potential holes or weaknesses in policies, procedures or actual systems. When carried out in this way, an organization can assure itself that technical measures have been implemented appropriately, and can also map the findings and recommendations back to BS7799 requirements.

BS7799 provides an information security management framework which gives organizations a standard method of managing their information securely. However, skimming the technical and legal aspects of BS7799 can lead to a false sense of security. The standard has the potential to give assurance that the technical and legal controls required for a system or service are in place, but realising this often requires significant technical, security and legal expertise.

About the author
Jacqui Chau is a consultant in the Technical Risk Assurance team at Insight Consulting.

VULNERABILITY DISCLOSURE

Cisco Black Hat fiasco reopens old wounds
Philip Hunter

The immediate dispute arising from the strange case of Cisco, Black Hat and former Internet Security Systems (ISS) researcher Michael Lynn was settled within a day. Yet the issues aroused rumble on, reopening old wounds and exposing all three parties to criticism, although Lynn himself has the perfect defence of having sacrificed his own job with ISS for the greater good of the Internet. Meanwhile, Cisco and ISS were accused both of bumbling incompetence in their handling of the issue and of attempting to obscure an issue of legitimate public interest. It is certainly true that both companies have been left with their reputations in the security arena at least temporarily sullied.

It all happened at the annual Black Hat security conference in late July, when Lynn gave a presentation on how a well-known flaw in Cisco's IOS router software, which governs the flow of data across the Internet and most of the world's major corporate networks, could be exploited to reveal users' passwords and login scripts.

Under pressure

Under pressure from Cisco, ISS had attempted to stop Lynn giving the presentation and to block distribution of its contents via video or any other means. Lynn then resigned from ISS and gave the presentation anyway. Cisco and ISS then threatened both Lynn and the conference organisers with criminal charges, but a settlement was quickly agreed. Under its terms, Lynn and the conference organisers agreed neither to disseminate the information nor to discuss it. But meanwhile the text of his presentation had been posted on the Internet, although this was subsequently taken down after the site received a fax from the attorney acting for ISS citing an injunction already issued by a Californian court prohibiting further disclosure.

Disclosure

The priority for corporate users is to patch their networks to close this particular loophole. Many have already done so, but there are always stragglers who lag behind, and they are more vulnerable now, given that Lynn's exploit of the IOS vulnerability was quite widely distributed before the prohibition on further disclosure. More hackers know about it than before.

Meanwhile, as the case continues to unfold, the greater interest is in the wider issues raised, which come under four headings: the ongoing debate over open source versus proprietary code; the conflict of interest faced by major vendors like Cisco between commercial and wider public interests; the security implications of the ongoing migration from IP version 4 to the new version 6; and the growing exposure created by continued reliance on passwords. The link with passwords and IPv6 follows from the fact that Lynn revealed how a specially crafted IPv6 packet could be used to attack a corporate network or the Internet by trapping users' password and login information, which can then be used to attack other servers. This exploits the fact that people tend to choose the same user names and passwords for all the services they access.
Open vs proprietary

Taking the open source versus proprietary issue first, there are the inevitable suggestions that if all code were open to scrutiny by anyone, then by definition leaks or theft could not jeopardise security. This harks back to May 2004, when hackers stole portions of Cisco's IOS source code. There was speculation then that further weaknesses in IOS would continue to be identified by hackers, partly as a result of that theft. It should be noted that making code open
source does not entirely abolish the issue of disclosure. There will still be potentially exploitable flaws that come to light after a while, and vendors of security services could still face the dilemma of whether to publicise them, at least before a patch has been developed. Similarly, vendors of the open source products themselves could still seek to hide flaws that only they know about and that have not yet been discovered by others, whether users or hackers.
Conflict of interest

As for the conflict of interest, it can be argued that in the modern era, where secrets tend to leak out quickly, companies are better off adopting an open approach to disclosure. The perceived conflict of interest may not really exist. It may appear that a company has to balance its public image against the benefits of disclosing vulnerabilities as soon as they are discovered, but in Cisco's case its reputation has been damaged anyway. Its heavy-handed treatment of the Lynn case, with its apparent bullying, has made some customers distrust statements made by Cisco on other, wider security issues. Certain governments, as we know, have also found themselves inhaling the exhaust of their own publicity machines. There is no doubt, though, that there is an inevitable confusion between two motives for keeping source code secret: the commercial desire to avoid theft of intellectual property, and the security-related fear of hackers discovering vulnerabilities. It is true that in an open source world neither of these motives exists.
IPv6

The IPv6 implications have also been loudly trumpeted. The Lynn case has been seized upon by IPv4 "protocol zealots" as evidence that migration to IPv6 will make the whole Internet more vulnerable. But this is rather a red herring, as migration to IPv6 has been proceeding for some years now in the Far East, with the US lagging behind partly because of the scale of the task there. It is true that migration to IPv6 need not be rushed, because the main motivation was to acquire more space for IP addresses, and IPv4 address space is not likely to run out for another decade.

The more important security question, though, concerns not IPv6 itself but the nature of the vulnerability, which in this case was a heap overflow attack. Such attacks can be conducted against any application that uses dynamic variables whose names, as well as their values, may change during the course of execution. This includes the IP protocol stack, but also a range of higher-level applications, such as shopping carts where people select items during an online commerce session. A number of anti-virus tools have also been vulnerable: in February 2005 Symantec released details of how its AntiVirus product SAV 9.1 was vulnerable to a heap overflow attack, although the problem was quickly fixed. Such an attack could in that case, in theory, have disabled the anti-virus protection, allowing hackers to launch a variety of attacks that the system in question would no longer be protected against.

Heap overflow attacks are very similar to the more common stack overflow attacks. The difference is that the latter attack the memory allocated to static variables, which are typically defined within subprograms that are part of a larger application, while the former attack memory allocated to dynamic variables, which may have a longer lifespan and be renamed. In either case the attack involves passing the value of the variable in a string that is deliberately made longer than the allocated size. This can then overwrite the portion of memory containing the return address used to resume execution of the program. If the right bits of the string are set correctly, this return address can be reset to the location of a malicious program that performs a function defined by the hacker - which, in Lynn's example, was capturing the passwords and login details of users. However, this process is simpler to conduct with static variables, because they have a fixed amount of memory allocated to them. Dynamic variables are allowed to change the amount of memory allocated to them during the course of an application as circumstances dictate, and this makes them harder for the programmer to use, with greater risk of creating bugs. They are also harder for hackers to
They are also harder for hackers to exploit via heap overflow attacks, because it is necessary to determine where, within a variable-length allocation, the return address sits at the chosen time. For this reason the vulnerabilities are somewhat theoretical, and in Symantec's case no known attack was ever made. But Cisco is a much bigger target, given the ubiquity of its IOS router software, and the potential risk is much greater. This is why any organization with routers exposed to the Internet needs to implement the patch that fixes the loophole. More widely, protection against both stack and heap overflow attacks needs to be built into software from the outset rather than patched in later. While it is impossible to guard against all vulnerabilities during the coding stage, there is no longer any excuse for failing to do so for stack attacks, which have been known about and exploited for some years.
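To make the mechanism concrete, the toy model below simulates the overflow in a memory-safe language. Python itself cannot corrupt memory in this way – real attacks exploit unchecked writes in languages such as C – so the list standing in for a stack frame, the buffer size and the addresses are all illustrative assumptions, not an account of the Cisco flaw itself.

# Toy model of a stack overflow: a list stands in for a frame of memory
# holding a fixed-size buffer followed by the saved return address.
# Purely illustrative - real attacks exploit unchecked writes in C/C++.

BUFFER_SIZE = 8

def make_frame(return_address):
    # Slots 0..7 hold the buffer; slot 8 holds the return address.
    return [0] * BUFFER_SIZE + [return_address]

def unsafe_copy(frame, data):
    # Mimics strcpy(): writes every byte with no bounds check.
    for i, byte in enumerate(data):
        frame[i] = byte          # past index 7 this clobbers the slot
                                 # holding the return address

legit = make_frame(return_address=0x4000)
unsafe_copy(legit, [0x41] * 6)   # 6 bytes: fits, return address intact
print(hex(legit[-1]))            # 0x4000

victim = make_frame(return_address=0x4000)
payload = [0x41] * BUFFER_SIZE + [0xBAD]   # one element too long
unsafe_copy(victim, payload)     # overwrites the return slot
print(hex(victim[-1]))           # 0xbad - execution would be hijacked

Running the sketch shows the final slot – the stand-in for the saved return address – silently replaced once the input exceeds the buffer, which is precisely the effect that building protection in from the outset is meant to prevent.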
Password
The final issue raised by the Lynn saga is the continuing reliance on user names and passwords. This argument is as old as the hills, and in practice all other methods tried so far have been too costly or inconvenient for the general Internet user. The real issue – poor password hygiene – is unchanged, but the proliferation of online services has accentuated it. The average user, both at work and at home, now requires passwords for numerous services, and most people use the same one everywhere, which increases the exposure. Measures that can be taken include the use of smartcards and dynamic passwords that are generated just for a session, but these all increase cost and complexity. It may be that biometric techniques will catch on now that governments are so keen on them for passports and identity cards, and that readers will be built into computers. Meanwhile, all that can be done is to reiterate the advice about adopting good habits, such as rotating passwords and choosing ones that are not easy to guess – like, perhaps, the name of one's least favourite vegetable.
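As a rough illustration of the session-password idea, the sketch below derives a short one-time code from a shared secret and a per-session counter using an HMAC; the secret, digit count and counter scheme are illustrative assumptions rather than a description of any deployed token.

# Minimal sketch of a counter-based one-time password: the server and
# the user's token share a secret; each session uses a fresh counter,
# so a captured password is useless for the next login.
import hashlib
import hmac
import struct

def session_password(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC the big-endian counter with the shared secret...
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # ...and truncate the tag to a short decimal code a user can type.
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"example-shared-secret"   # illustrative value only
for session in range(3):
    print(session, session_password(shared_secret, session))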
BIOMETRICS
Biometrics – The promise versus the practice
Nathan Clarke and Steven Furnell, Network Research Group, School of Computing, Communications & Electronics, University of Plymouth, Plymouth, UK
The term biometrics has been hard to escape recently, with numerous articles being published discussing the advantages and disadvantages of the technology.1,2 Much of this discussion has come about due to the level of research and interest shown in large-scale implementations of the technology by the US and UK Governments3,4 and the European Union.5 However, few articles to date have discussed the fundamental operation of biometrics and the subsequent issues that arise when developing a biometric technique. This article describes the biometric process from a lower level of abstraction, and introduces a number of design features that play an inherent role in the security provided by a biometric approach.
Biometrics form one of three generic approaches to authentication, along with passwords and tokens.6 Biometrics have always generated a good deal of interest, and in recent years they have become much more mainstream technologies. However, although generally regarded as the most secure approach, widespread deployment of biometrics has not occurred to date, with the 2005 CSI/FBI Computer Crime survey reporting that only 15% of organisations use biometrics.7 The reasons for this go beyond simply the security that can be provided: issues such as cost, relevance, effort and usability are key limiting factors. That said, given that biometrics are a security tool for checking the validity of a user, it would appear prudent to ensure that they are capable of providing the level of security required. But how do biometrics achieve this in practice?
The biometric process
The term biometrics is defined as “the automated use of physiological or behavioural characteristics to determine or verify identity”.8 Physiological biometrics rely upon a physical attribute, such as a fingerprint, face or iris, whereas behavioural approaches utilise some characteristic behaviour, such as the way we speak or sign our name.
Biometric systems can be used in two distinct modes, dependent upon whether the system wishes to determine or to verify the identity of a person. The particular choice of biometric will greatly depend upon which of these two methods is required, as performance, usability, privacy and cost will vary. Verification, from a classification perspective, is the simpler of the two methods, as it requires a one-to-one comparison between a recently captured sample and a reference sample, known as a template, from the claimed person. Identification requires a sample to be compared against every reference sample contained within a database – a one-to-many comparison – in order to find whether a match exists. The characteristics used to discriminate between people therefore need to be more distinct for identification than for verification.
Unfortunately, biometrics are not based upon completely unique characteristics. Instead, a compromise exists between the level of security required – and thus how discriminating the characteristics must be – and the complexity, intrusiveness and cost of the system to deploy. In the majority of situations, however, it is unlikely that a choice would exist over which method to implement. Instead, different applications or scenarios tend to lend themselves to a particular method. For instance, PC login access is typically a verification task, as the legitimate user will begin by providing their username. However, when it comes to a scenario such as claiming benefits, an identification system is necessary to ensure that the person has not previously claimed benefits under a pseudonym.
Although the complexity of the system and the uniqueness of the biometric characteristic play an important role in deployment, the underlying mechanism of every biometric technique, whether used for identification or verification, is identical. Figure 1 illustrates the key processes within a biometric technique, ignoring all system-level considerations. The system is built up of a sensor to capture the biometric sample, a data extraction process to extract the relevant characteristic information from the sample, a pattern classification engine that provides a measure of similarity between a known sample and the new sample, and some decision logic to finally decide whether this level of similarity is sufficient or not.
Figure 1: The Generic Biometric Process
At face value, this might not seem an overly complex problem to solve, but unfortunately the devil is always in the detail. Additionally, many of the issues have a knock-on effect on the next process. For instance, the sensor resolution required for capturing samples has a knock-on effect on how much information can be extracted from the resultant sample. That said, a key decision on what and how much information should be extracted depends upon the uniqueness of the data and the capability of the classification process. Utilising too much information will simply overcomplicate the pattern classification engine required, increasing cost, execution time and storage. Using too little information will limit the ability to classify between samples, leading to difficulties when attempting to classify across large sample populations. Finally, having identified some level of similarity between samples, it is necessary to apply some decision logic to determine whether access should be permitted or rejected. But what level of similarity is sufficient? Set the threshold of acceptability too low and security will be compromised; set it too high and usability will be impaired, as the system continually rejects the authorised user. These issues, amongst others, are key to biometric systems and are discussed in the following sections; as a first taste, the sketch below shows both modes of operation and the threshold decision at their simplest.
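The following sketch – with invented templates, a toy Euclidean similarity measure and an arbitrary threshold, none of them drawn from this article – shows verification as a one-to-one comparison and identification as a one-to-many search over the template database.

# Sketch of the two modes of operation: verification compares a fresh
# feature vector against one claimed template; identification searches
# every enrolled template for the best match. All values are invented.
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity(a, b):
    # Map distance into a 0-1 similarity score (1.0 = identical).
    return 1.0 / (1.0 + distance(a, b))

templates = {"alice": [0.61, 0.32, 0.88], "bob": [0.15, 0.74, 0.40]}
THRESHOLD = 0.80   # the decision logic's threshold level

def verify(claimed_id, sample):
    # One-to-one comparison against the claimed user's template.
    return similarity(templates[claimed_id], sample) >= THRESHOLD

def identify(sample):
    # One-to-many comparison: best match wins, if good enough.
    best = max(templates, key=lambda uid: similarity(templates[uid], sample))
    return best if similarity(templates[best], sample) >= THRESHOLD else None

fresh = [0.58, 0.35, 0.90]            # a noisy sample from Alice
print(verify("alice", fresh))          # True - within the threshold
print(identify(fresh))                 # 'alice'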
How much information to extract? Although the sensor is simply a method by which a biometric sample can be captured, the level of complexity and sophistication required is dependent upon the data extraction process. As such, many biometric systems will only operate with specific sensor hardware, whether this is fingerprint sensors, facial recognition cameras or hand geometry hardware. The principal reason for this is to reduce the errors relating to failure to enrol and acquire samples, which would occur when the feature extraction process is unable to extract sufficient
information due to poor sample capture. Of course, in the majority of these cases a specialised biometric sensor is required to capture the sample anyway, but the need to utilise a particular product with an individual biometric limits the availability and choice of hardware, with system designers forced to purchase the hardware suggested – and often provided – by the biometric vendor. Moreover, should the system designer decide to adopt a different (proprietary) algorithm for the biometric process in the future, they would also have to replace all the hardware. More recent efforts have gone into providing algorithms that can perform the biometric process independently of specific hardware, through standardised interfaces, enabling implementations to choose from a range of devices. This point highlights the lack of maturity and standardisation that exists within much of the biometrics industry, which has traditionally focused upon designing bespoke solutions for clients, with few large-scale implementations to date.
The data extraction process is an important step and often determines the complexity of the pattern classification stage. Certain biometric samples lend themselves to little data extraction effort, such as hand geometry, where the sensor provides data from which the data extraction process can calculate the required finger and/or hand measurements.9 Keystroke analysis is another example, where the sensor provides timing information from which the data extraction process calculates inter-keystroke and hold-time latencies10 (a sketch of this appears after the list below). More often than not, however, the data extraction process is far more complex. A facial recognition technique can utilise a standard camera as the sensor, which provides an image to the data extraction process. Although the exact details of the data extraction process (and indeed the pattern classification process) are proprietary, the extraction process needs to identify and compute the unique information from the image. This is often measurements such as the distance between the eyes, the distance between the eyes and nose, and the length of the
mouth.11 In order to extract such information, the relevant features first need to be located within the image, which has given rise to a number of location-based search algorithms. The effectiveness of such algorithms varies depending upon lighting conditions, image resolution, and facial orientation and size, with more sophisticated approaches using three-dimensional modelling. Having such a wide range of factors to consider is not limited to facial recognition; many other biometrics face similar challenges. It is the assumptions that are placed upon these factors that can often degrade the performance and/or usability of a technique. The key criteria for what data to extract from a biometric sample are:
• Feature invariance – to ensure the characteristic features that are extracted are time and environment invariant. If a fingerprint were to change periodically, or a facial recognition system were to require a sunny day in order to capture the image, then the biometric system would have difficulty in maintaining security and usability.
• Maximise information – reduce the amount of data and ensure only features that contribute to “uniqueness” are extracted. This also helps in reducing the number of features the pattern classification process needs to cope with, thus reducing the complexity of the system.
• Liveness test – this is a more recent addition to the criteria, based upon deficiencies experienced in earlier biometric systems, and requires data to be extracted to determine the liveness of the sample. Weaknesses have been identified in fingerprint systems that merely require an unauthorised user to breathe gently upon the sensor to generate a viable sample.12 This was possible due to residual prints from oils in the skin remaining on the sensor after an authorised person had used it.
• Ensure the feature extraction process is a one-way function – biometric systems store the extracted features of
a user during the enrolment process, and it is important for privacy and security reasons that the original sample cannot be reproduced from this information.
The output of the feature extraction process is a feature vector containing all the discriminative information.
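To give a flavour of the simplest kind of data extraction, the sketch below turns keystroke timings into the inter-keystroke and hold-time latencies mentioned earlier; the key events and millisecond values are invented for illustration.

# Sketch of a simple data extraction process for keystroke analysis:
# the "sensor" supplies key-down/key-up timestamps, and the features
# extracted are hold times and inter-keystroke latencies. The sample
# timings below are invented for illustration.

# (key, press_time_ms, release_time_ms) for the word "pass"
raw_sample = [("p", 0, 95), ("a", 180, 260), ("s", 390, 470), ("s", 600, 690)]

def extract_features(events):
    holds = [up - down for _, down, up in events]          # hold times
    inter = [events[i + 1][1] - events[i][1]               # key-down to
             for i in range(len(events) - 1)]              # next key-down
    return holds + inter                                   # feature vector

print(extract_features(raw_sample))
# [95, 80, 80, 90, 180, 210, 210] - 4 hold times, 3 latencies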
Fuzzy classification
Much of the power and capability of a biometric technique comes down to the pattern classification engine and its ability to successfully discriminate between users' samples. The field of pattern classification is certainly not new, nor specific to biometrics, and has been used to solve all manner of problems in a wide range of industries. The task of a pattern classification process for biometrics is to compare two feature vectors and provide a measure of similarity. Figure 2 illustrates a simplistic two-dimensional representation of the problem, with the classification process required to report a high level of similarity for samples that share similar characteristics and to reject those with less similarity. It is not difficult to imagine how the problem increases in complexity with larger feature vectors, which exponentially increase the available feature space.
Figure 2: The Classification Problem
To achieve this, numerous technologies have been utilised, such as minimal-distance techniques, probabilistic methods and neural networks.13 Of these, neural network based approaches are the newer technology and are increasingly implemented, as they are seen to perform better. Essentially, neural networks have the capability to learn from feature vectors given by the authorised user and impostors, so that the network knows which types of feature vector to accept and which to reject. There are, however, a large number of considerations and issues associated with the use of neural networks, such as:
• Network configuration – the size of a neural network determines its classification ability, with the risk that both overly complex and overly simple networks fail to provide the performance required. Issues concerning training, storage and computational capability are all key concerns. Optimising neural networks is a time consuming and computationally intensive task – particularly if applied to large population sizes.
• Availability, suitability and storage of impostor data – in order to optimise the neural network, the best impostor data to utilise is that which closely resembles (but is not identical to) the authorised user's data. For example, given Figure 2, ideal impostor data to train from would be located around the dotted circle of the authorised user.
Using data which holds no relation to the authorised user's data simply increases the complexity and training time of the classification algorithm.
A compounding problem is that the data extraction process is highly unlikely ever to produce an identical feature vector, even when the same person provides the sample. Due to noise in the sensor and in the feature extraction process, the resultant feature vector is effectively unique. It does (or should), however, have closer similarities to the user's own samples than to those of other users. Although this can help to prevent replay attacks – any feature vector that has appeared previously can simply be rejected – it does highlight the problem the classification process has in discriminating between samples. Feature vectors from different users are on occasion similar; the degree to which this occurs depends both on the biometric technique being utilised and on the individual biometric vendors, whose specific proprietary algorithms for pattern classification vary in their performance.
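The point about impostor data can be made concrete. Under the assumption of toy two-dimensional feature vectors (all values invented), the sketch below ranks a pool of impostor samples by distance from the user's template and keeps the near misses that would best train the reject boundary.

# Sketch of choosing "near-miss" impostor data for training: impostor
# vectors closest to (but distinct from) the authorised user's template
# are the most useful, as the text above notes. Data are illustrative.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

user_template = [0.60, 0.30]
impostor_pool = [[0.65, 0.35], [0.10, 0.90], [0.55, 0.28], [0.95, 0.05]]

# Rank impostor samples by their closeness to the user's template and
# keep the nearest ones for training the classifier's reject boundary.
near_misses = sorted(impostor_pool, key=lambda v: distance(v, user_template))[:2]
print(near_misses)   # [[0.55, 0.28], [0.65, 0.35]]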
Similarity – decision time!
Depending upon the biometric process, the decision logic can either be an inherent part of the pattern classification process or an additional, self-contained process. The output of the pattern classification stage can vary, but can typically be evaluated to some numerical value of similarity between 0 and 1. It is the task of the decision logic to determine whether this value is sufficient for access, through the definition of a threshold level: above this level the sample is accepted, while below it the sample is rejected.
Much of the literature discussing biometrics introduces the issue of performance, defining the false acceptance and false rejection error rates (FAR and FRR) and illustrating their relationship using a characteristic plot, as shown in Figure 3a. It is indeed true that the rate at which impostors are accepted into a system (FAR) and the rate at which authorised users are rejected (FRR) tend to share a mutually exclusive relationship. However, the actual relationship and the choice of threshold value are
neither as clear-cut nor as simple as portrayed in Figure 3a. In reality, the FAR and FRR relationships can vary significantly between users. To illustrate this point, let's say the threshold level had been set at the point at which the FAR and FRR curves meet – referred to as the Equal Error Rate (EER) in Figure 3a – which the system designer had deemed appropriate given the trade-off between security and user convenience. If we introduce individual users' characteristic plots, illustrated in Figures 3b and 3c, then it can be seen that the previous threshold setting is not ideal for either user. For the user displaying the characteristics in Figure 3b, this choice of threshold level will result in a high level of user inconvenience and a higher level of security than the system designer deemed appropriate. For the user in Figure 3c, the level of security provided will be far lower than the system designer had set. So
how does this threshold level get set in practice? There are only two choices: either a system-wide setting, where all authentications are compared against the same level for all users, or individual threshold levels for each user (with the latter obviously providing a more optimised configuration than the former). Given appropriate risk analysis and knowledge of the performance characteristics, it would be possible to define a system-wide threshold level that meets the security requirements of the system at a defined level of user inconvenience. Setting such a level on an individual basis is a far larger problem, in terms of the time taken to set the parameter and of who has the authority to set it. It is worth remembering that the threshold level ends up being the key to the biometric system – a poorly selected threshold level can remove any security the biometric technique is deemed to have. Given this problem, time and effort has been put into finding methods of
normalising the output of the pattern classification process – so that an output value of 0.6 means the same across a population of users. Other efforts have gone into methods of automating the threshold decision based on a number of authorised and impostor samples – determining the performance of the biometric technique for each and every user. At present, most systems implement a number of sensitivity levels that the system designer can alter, so that if a user is having difficulty authenticating themselves the sensitivity can be reduced (and, in fact, be continually reduced until the user is able to authenticate themselves). But by doing this, what has been done to the level of security provided by the technique? It has obviously been reduced for that particular user, but by how much? At this stage of implementation it is not possible to refer back to the tidy performance plots described in Figure 3; the system designer is left with little to no idea of the performance they could expect from this system should it be attacked.
Figure 3: The Relationship between key performance parameters
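What the plots in Figure 3 summarise can be computed directly from trial data. The sketch below sweeps a threshold across invented genuine and impostor similarity scores to locate an approximate Equal Error Rate; the score lists are illustrative assumptions, not measurements from any real system.

# Sketch of how FAR, FRR and the Equal Error Rate fall out of genuine
# and impostor similarity scores; the score lists are invented purely
# to illustrate the threshold trade-off described above.

genuine = [0.91, 0.85, 0.78, 0.74, 0.66, 0.60]   # authorised users
impostor = [0.70, 0.62, 0.55, 0.48, 0.40, 0.31]  # attackers

def far(threshold):    # impostors wrongly accepted
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):    # authorised users wrongly rejected
    return sum(s < threshold for s in genuine) / len(genuine)

# Sweep thresholds and take the point where FAR and FRR are closest -
# an approximation of the EER on discrete data.
candidates = [t / 100 for t in range(0, 101)]
eer_threshold = min(candidates, key=lambda t: abs(far(t) - frr(t)))
print(eer_threshold, far(eer_threshold), frr(eer_threshold))

Moving the threshold above or below this crossing point trades one error rate against the other, which is exactly the security-versus-convenience decision the text describes.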
Practical implementations to date
One of the largest practical evaluations of biometric techniques to date has been undertaken by the UK Passport Service (UKPS).14 With the advent of a national identification system and the requirement from the US to incorporate biometric information within passports, the study sought to evaluate the usability of such technology. Although the published material provided by the UKPS explicitly stated that this was not a technology study to assess the performance of biometric techniques, the findings do raise a number of interesting points that reiterate some of the issues highlighted previously. A sample population of 10,016 users took part in the study, which sought to test the processes and assess attitudes and opinions regarding the user experience. The three biometric techniques utilised were facial, iris and fingerprint recognition. Three of the 10 recommendations from the study are highlighted below:
• The camera should be manoeuvrable enough to allow it to be positioned to accommodate wheelchair users and others for whom the current arrangements limit access. Environment design needs to ensure that the camera height can cater for the full height range found in the UK population.
• Applicants need to remove any headwear before facial enrolment.
• The verification process should allow a limited number of further attempts to pass verification when the first attempt fails.
Each of these recommendations highlights a key problem with a stage of the biometric process: the inadequacies of sensor technology in capturing the sample; the inability to extract features from samples with unforeseen additions such as headwear; and, even having extracted the information, the inability of the pattern classification process to correctly authenticate the user.
This final recommendation has another, more worrying, aspect: every time a user is permitted to re-verify themselves after having been rejected, the security of the system diminishes, as they are given another chance.
Concluding thoughts
The aim of this article has been to describe the underlying mechanisms at work within biometrics and to identify the complications that are inherent in such systems. Although biometrics will not provide a panacea for our authentication needs, they will certainly have a strong role to play. Indeed, of the three forms of authentication, biometrics offers one of the most promising possibilities. However, further work needs to focus upon developing more intelligent and robust algorithms that are capable of dealing with the large number of variables that exist. Such issues need to be addressed before widespread adoption of biometrics will provide the level of security and usability that one would expect from such a system.
About the authors Dr Nathan Clarke is a lecturer within the Network Research Group at the University of Plymouth, where he previously completed a PhD on the topic of advanced user authentication for mobile devices. His research has given specific consideration to the use and applicability of biometrics in this context, as well as the practical implementation and evaluation of a range of related techniques. Dr Steven Furnell is the head of the Network Research Group at the University of Plymouth, UK, and an Adjunct Associate Professor with Edith Cowan University, Western Australia. His research has included several projects in the biometrics area, particularly in relation to keystroke analysis on PCs and mobile devices. Related papers can be obtained from www.plymouth.ac.uk/nrg.
References
1 Furnell, S. and Clarke, N. 2005. “Biometrics – No Silver Bullets”, Computer Fraud & Security, August 2005, pp9-14.
2 Fussell, R. 2004. “Authentication: The Development of Biometric Access Control”, The ISSA Journal, July 2004, pp24-27.
3 Home Office. 2005. Identity Cards. United Kingdom Home Office. http://www.homeoffice.gov.uk/comrace/identitycards/
4 US Department of State. “Biometric Passport Procurement Moves Forward”. US Department of State. http://www.state.gov/r/pa/prs/ps/2004/34423.htm
5 IDABC. 2004. “EU Visa Information System gets go-ahead”. eGovernment News. IDABC. http://europa.eu.int/idabc/en/document/2186/330
6 Cope, B. 1990. “Biometric Systems of Access Control”, Electrotechnology, April/May: 71-74.
7 Gordon, L.A., Loeb, M.P., Lucyshyn, W. and Richardson, R. 2005. Tenth Annual CSI/FBI Computer Crime and Security Survey. Computer Security Institute.
8 “How is 'Biometrics' Defined?”, International Biometric Group. http://www.biometricgroup.com/reports/public/reports/biometric_definition.html
9 “Hand Geometry”, BiometricsInfo.org. http://www.biometricsinfo.org/handgeometry.htm
10 Clarke, N.L., Furnell, S.M., Lines, B.M. and Reynolds, P.L. 2003. “Using Keystroke Analysis as a Mechanism for Subscriber Authentication on Mobile Handsets”, in Security and Privacy in the Age of Uncertainty, D. Gritzalis et al. (eds), Kluwer Academic Publishers, pp97-108.
11 Kung, S., Mak, M. and Lin, S. 2004. “Biometric authentication by face recognition”, in Biometric Authentication: A Machine Learning Approach. Prentice Hall, New Jersey, pp241-277.
12 Reid, P. 2004. Biometrics for Network Security. Prentice Hall.
13 Kung, S., Mak, M. and Lin, S. 2004. Biometric Authentication: A Machine Learning Approach. Prentice Hall, New Jersey.
14 UK Passport Service. 2005. Biometric Enrolment Trial. UKPS. http://www.passport.gov.uk/downloads/UKPSBiometrics_Enrolment_Trial_Report.pdf
VPNS
Securing VPNs: comparing SSL and IPsec Ray Stanton, global head of BT’s business continuity, security and governance practice
Whatever the wireless standard being deployed and whatever the latest smart phone currently in vogue, at the heart of most organizations' mobile and flexible working policy is a VPN. Replacing leased lines or dedicated dial-up connections, which were unsuitable for mobile working or expensive to run and re-configure, VPNs use the internet to connect employees back to their company's network. Unlike the private networks of the past, they are accessible around the world and, with the growth of mobile devices and wireless hotspots, users don't even require a socket to get online.
However, despite its advantages over traditional private circuits, the internet is inherently insecure: its accessibility is both its advantage and its disadvantage. With private networks that run over leased lines, the end points are defined and, short of digging up the road and tapping in, only the service provider has access in between. For practical purposes, the risk of attack is limited to a small number of people either in the organization or in the service provider's exchanges. The Internet is another matter. It is a public network – and that means that added security mechanisms must be provided to protect any company information transmitted through it. Typically, this is done by creating a VPN.
Two technologies Until recently, most VPNs have been built using the IP Security (IPsec) protocol, which provided the encryption necessary to protect any data being transmitted. However in the last few years, internet security techniques have developed significantly as a result of E-commerce and, in particular, online banking. As a result, the Secure Socket Layer (SSL) standard is now a realistic alternative to IPsec for some applications.
Although both protocols provide confidentiality, authentication and integrity of data, they do so in different ways, thanks to their different configuration and their role in the overall communications stack. IPsec sits at layer three of the stack and protects IP packets exchanged between remote networks or hosts and an IPsec gateway located at the edge of an organization’s private network. In contrast, SSL straddles the transport and session layers and protects application streams from remote users to an SSL gateway. In other words, IPsec connects hosts to entire private networks, while SSL connects users to services and applications inside or at the edge of those networks. IPsec VPNs use a combination of encryption and tunnelling to secure data traffic, encapsulating it within an IP packet that can be routed via the Internet. When it is received by the network layer gateway, the data is ‘unwrapped’, decrypted and routed to the ultimate recipient. A typical remote access IPsec VPN consists of one or more IPsec gateways and special client software that is installed on each remote access device. The client must be configured, either manually or automatically, to support a
specific security policy, such as defining which packets it should encrypt or with which gateway it should build the VPN tunnel.
SSL, on the other hand, uses a combination of public-key and symmetric-key encryption to secure data transfer. An SSL session begins with an exchange of messages known as a handshake. This allows the server to authenticate itself to the client using public-key techniques, and then enables client and server to cooperate in the creation of symmetric keys that are used for rapid encryption, decryption and integrity checking during the session that follows. As an option, the handshake can also allow the client to authenticate itself to the server. Because SSL VPNs are accessed via the HTTP and HTTPS protocols, all applications must be able to use these to transmit data. This makes deployment of SSL relatively straightforward for Web-based applications, but requires modification for those that are not Web-based.
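The handshake just described is easy to observe from the client side. The sketch below uses Python's standard ssl module; the gateway host name is a hypothetical stand-in, and the snippet illustrates the handshake sequence rather than any particular vendor's client.

# Minimal sketch of the SSL handshake from the client side using
# Python's standard ssl module: the server authenticates itself with a
# certificate, symmetric session keys are negotiated, and application
# data then flows encrypted. The host name is an illustrative stand-in
# for an SSL VPN gateway, not a real endpoint.
import socket
import ssl

context = ssl.create_default_context()        # verifies server certs
host = "vpn-gateway.example.com"              # hypothetical gateway

with socket.create_connection((host, 443)) as raw_sock:
    # wrap_socket performs the handshake described above.
    with context.wrap_socket(raw_sock, server_hostname=host) as tls:
        print(tls.version())                  # negotiated protocol
        print(tls.getpeercert()["subject"])   # server's identity
        # From here, any bytes sent are protected by the session keys.
        tls.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        print(tls.recv(200))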
The benefits of IPsec VPNs
As the elder statesman of VPN security, IPsec has become a mature and largely dependable technology that is widely used around the world. Since it is transparent from a network layer perspective – or, strictly speaking, from the perspective of an IP network – it provides a high degree of flexibility with respect to network configuration and applications. IPsec VPNs also enable IP-based legacy systems to be accessed without further development and reconfiguration and, since IPsec can be configured to make remote machines appear to be locally connected, it provides users with the same, familiar access to the network whether at home or in the office.
However, it's not all good news. There are disadvantages to IPsec, the most obvious of which is that client software must be installed on the remote device. Not only can this be quite large – sometimes as much as 6 MB – it means that anyone who is
required to access the VPN must have a corporate-owned remote device or be using a home computer with the software installed. Furthermore, despite its open standards credentials and its status as the first standardised encryption protocol, IPsec is not really interoperable. In fact, interoperable implementations are largely restricted to site-to-site, or gateway-to-gateway, situations. To counter this, vendors have frequently used their own specific client enhancements to make things simpler and more intuitive for users. Consequently, few IPsec clients from one vendor will work with an IPsec gateway from another. The net result is that IPsec effectively precludes access from Internet kiosks, Internet cafés and public wireless networks – with all the loss of flexibility that this implies. What's more, a large number of IPsec products are dependent on a particular operating system. This affects how the protocol is implemented on the client, and indeed the type of client that can be used. Added to this are potential problems associated with the configuration of the client, and the large numbers of support calls that typically ensue.
Problems can also arise if users need to connect to different vendors' IPsec gateways, for example when two companies have recently merged or are working on a joint venture. The obvious answer is to install two clients on the remote machine, but as IPsec clients generally reside in the same part of the stack and compete for resources, they can only rarely co-exist on the same computer.
There are also concerns that, if the end user's machine is compromised, it could be used as a way into the organization's network: a client PC infected with a Trojan or virus could potentially infect the entire network or give unauthorized access to malicious users.
SSL VPNs
The primary benefit of an SSL-based VPN is simply that no client software is
required other than a standard Web browser. For many applications, a simple HTTPS connection to the Web server is all that is needed to access the services. This increases the flexibility of the VPN: access can be gained from any computer at any time, and authorised users can access resources from any device in any location. And since most web-enabled applications already support SSL, very little configuration is required. Furthermore, as SSL is built into all leading browsers, it is independent of both operating system and browser. This neutrality enables users to access resources from a variety of platforms running different operating systems. Users accessing Web-based applications are also generally comfortable with the familiarity that an SSL VPN offers, and don't need to learn any new systems.
However, this is also the source of the biggest drawback of SSL VPNs: they only really provide VPN access to Web-enabled applications. That means that legacy systems, such as those based on mainframes, must be adapted – potentially at significant cost – to enable access using an SSL VPN. Another issue with SSL VPNs is that multiple key exchanges may be required during one session, and the significant computing demands of intensive cryptographic key processing will affect the performance of the web server. Furthermore, where the organization does not control the client PC – for example, when an internet café is used – the remote computer cannot always be trusted. The company has no control over personal firewall and anti-virus installation and, although it is occasionally possible to check for both of these facilities, the configuration is not controlled by the organization itself. In addition, traces of session information can be left on the remote computer, which may also be a source of hidden spyware. Finally, SSL VPNs are dependent on continuous access to the internet: there is no option to work offline, which can
reduce productivity in the event of faults or limited coverage.
It may be possible to counter many of these issues by dynamically downloading a thin client via the SSL session. But not only do many public internet terminals block such applets, doing so also negates the two main advantages of SSL: mobility and flexibility.
Comparing SSL and IPsec
There are three main areas where the differences between IPsec and SSL VPNs are significant:
• Authentication and access control.
• Defence against attack.
• Remote computer security.
Both types of VPN offer different options when it comes to user identification. IPsec employs Internet Key Exchange (IKE), using digital certificates or pre-shared secrets for two-way authentication. SSL web servers, on the other hand, authenticate themselves to users with digital certificates, regardless of the method used to authenticate the corresponding client. Although both IPsec and SSL support certificate-based user authentication, and each offers less expensive options through individual vendor extensions, they differ significantly in the way these extensions are implemented. IPsec vendors, for example, offer alternatives such as eXtended Authentication (XAUTH), which enables the gateway to prompt a client for additional authentication, such as a SecurID code, before proceeding with the tunnel set-up. Most SSL vendors support password and token-based authentication, while some offer further techniques such as SMS messaging. In general, SSL is the more secure option for organizations that decide to implement non-certificate user authentication.
There are also fundamental differences in the way access control is implemented. SSL is probably the best solution when
per-application access control is required, while IPsec is better suited to giving trusted user groups homogeneous access to entire private servers and subnets. IPsec standards support ‘selectors’ – packet filters that permit, encrypt or block traffic to individual destinations or applications. In practice, however, most organizations grant hosts access to entire subnets rather than creating or modifying selectors for each IP address. But because they operate at the session layer, SSL VPNs can filter on, and make decisions about, user or group access to individual applications, selected URLs, embedded objects, application commands and even content – delivering a more granular and practical level of control.
In addition to authentication, resistance to message replay and other attacks is essential to the protection of any VPN. Both SSL and IPsec support block encryption algorithms, and although SSL VPNs also support the stream encryption algorithms that are often used for web browsing, IPsec offers greater flexibility. Unlike SSL, which is restricted to the algorithms built into standard web browsers, IPsec has been designed in a modular way, so that new algorithms can be implemented.
One of the biggest threats to the security of a VPN is the man-in-the-middle attack. To thwart these, IPsec prevents packet modification. However, this strong security feature also generates operational problems. For example, IPsec does not easily work alongside Network Address Translation (NAT), which works by substituting public IP addresses for the private ones included in data packets. The NAT conflict does not affect SSL, which defends against these attacks by carrying sequence numbers inside encrypted packets, and so prevents packet injection. Unlike IPsec, SSL creates session bindings above the IP layer, so it is not affected by changes of IP address that may occur in transit through a firewall. Provided HTTP/HTTPS is permitted, SSL does not require the firewall’s rule-set
to be changed and, because it uses port 443, no additional ports need be opened.
When it comes to message replay attacks, both IPsec and SSL use sequencing to detect and drop replayed messages. IPsec is more efficient, because it discards out-of-order packets lower in the stack, in system code. In SSL VPNs, the TCP session engine or the SSL proxy engine detects out-of-order packets, which consume more resources before they are discarded.
Whatever type of VPN is used, it is only as secure as the remote computers connected to it. Therefore, any organization with a VPN needs to put in place complementary security measures, such as personal firewalls, malware scanning, intrusion prevention, OS authentication and file encryption. As IPsec VPNs require a client to be loaded onto the remote computer to enable connection, the number of remote computers able to connect to the organization's network is limited – and they tend to be owned by the corporation, which can also install and manage the additional security measures required. Furthermore, some IPsec VPN vendors bundle additional security measures into the client software, providing the organization with the opportunity to pre-configure them prior to installation. Provided that employee-owned devices do not have remote access clients installed, IPsec VPNs can be regarded as the safer option in such cases. SSL VPNs pose a greater threat to organizations, as they potentially allow access to their network from any computer. To counteract this, most vendors of SSL technologies arrange for sessions to begin by downloading a Java applet or ActiveX control, which searches the remote computer for additional security measures and allows organizations to make decisions regarding access. In circumstances where a browser does not permit the loading of applets or ActiveX, organizations must decide whether to allow or deny access from that computer.
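The sequencing defence mentioned above is a simple algorithm to sketch. The snippet below implements a small sliding-window replay check of the general kind both protocols rely on; the window size and packet trace are illustrative assumptions rather than either protocol's actual parameters.

# Sketch of sequence-number replay detection of the kind the text
# describes: packets carry a sequence number, and the receiver keeps a
# sliding window so that duplicated or too-old packets are dropped.
# Window size and the packet trace are illustrative assumptions.

WINDOW = 32

class ReplayChecker:
    def __init__(self):
        self.highest = 0
        self.seen = set()      # sequence numbers inside the window

    def accept(self, seq):
        if seq + WINDOW <= self.highest:
            return False                      # too old: outside window
        if seq in self.seen:
            return False                      # duplicate: replay
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # discard state that has slid out of the window
            self.seen = {s for s in self.seen if s + WINDOW > seq}
        return True

checker = ReplayChecker()
for seq in [1, 2, 3, 3, 50, 2]:
    print(seq, checker.accept(seq))
# 3 and 2 are rejected the second time they appear; 50 advances the window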
Cost of ownership is also a key issue in choosing a VPN technology. SSL VPNs are regarded as cheaper to implement and manage, primarily because there is no need to purchase and support clients. This reduction in costs may be counteracted by the requirement to Web-enable applications to make them accessible from SSL VPNs. As a result, the cost of ownership comparison is dependent upon the number of users and the applications supported. This cost will have to account for the time taken to validate the application within the SSL VPN environment provided by the SSL vendor: accepting the vendor’s word may not be an option for a critical application.
Choosing a solution
Given these differences, it's clear that, as with all aspects of security, there is no one-size-fits-all solution. IPsec is aimed primarily at protecting interconnected networks, while SSL is aimed at connecting remote users to Web-based applications. Consequently, IPsec is probably the most appropriate technology for site-to-site VPNs with connections of long duration, whereas the flexibility of SSL benefits mobile workers. As companies continue to explore the options for facilitating mobile and flexible working, connecting remote sites and liberating their sales and field workers, they are having to make choices between these two security protocols. Both have their strengths and weaknesses, and organizations need to understand these differences, and weigh up the advantages of each, before selecting the most appropriate method. In fact, following such assessments, most organizations will find that both technologies have a role to play, depending on their end users' connectivity requirements. The key is to choose the right combination of options and settings from those on offer in the technology being used. It is these that will ultimately make a significant difference to the level of security achieved.
UK banks sent out vulnerable PIN mailers SA Mathieson
Tamper-evident stationery that has been used by UK banks to distribute millions of personal identification numbers (PINs) can be cracked with standard computer equipment – and in some cases, by the naked eye – according to a paper released earlier this month.
Last autumn, researchers from Cambridge University’s computer laboratory tested a small sample of 16 laser-printed PIN mailer letters, mostly from UK banks. They found that all could be read without removing the layer of either foil or plastic printed
with an interference pattern, using an £80 desktop scanner working at 1,200 dots per inch and a desktop computer running Gimp, an open source imaging software package, to enhance the image. Most PIN mailers could also be read with the aid of sharply angled light, and about a third could be read under normal light. Millions of these mailers have been sent over the last two years with the UK's introduction of Chip and PIN debit and credit cards. Mike Bond, one of the researchers, says he got the idea when he noticed that the PIN on a mailer from the bank Halifax, part of HBOS plc, could be read under normal light before tampering. “One sample that was poorly produced gave me a clue that the technology had problems,” he says.
The researchers alerted the manufacturers, as well as the UK payment association Apacs, before publishing their work. “My colleagues and I subscribe to giving people a reasonable amount of time to deal with the problem,” says Bond. They produced a series of security tests for the maker of Hydalam, the most widely used brand of such stationery in the UK. Apacs says it shared the paper with its members at the start of this year, and that all manufacturers will benefit from the work. “On the back of this research, we're going to introduce a new industry-wide standard,” says a spokesperson, adding that this should be in place by the end of this year. Bond warns that even if the stationery is upgraded, organisations need to laser-print PIN mailers at a high resolution, with 300 dots per inch being inadequate.
EVENTS CALENDAR

21 September 2005
IDC Security Conference
Location: Dublin, Ireland
Website: www.idc.com

27-29 September 2005
ISSE
Location: Budapest, Hungary
Website: http://www.eema.org/static/isse

5-7 October 2005
VIRUS BULLETIN CONFERENCE
Location: Dublin, Ireland
Website: http://www.virusbtn.com/conference/overview/index.xml

9-11 October 2005
ISF ANNUAL WORLD CONGRESS
Location: Munich, Germany
Website: http://www.securityforum.org

17-19 October 2005
RSA Europe
Location: Vienna, Austria
Website: www.rsasecurity.com/conference

19-21 October 2005
BIOMETRICS 2005
Location: London
Website: www.biometrics.elsevier.com/

31 October - 1 November 2005
CRYPTOGRAPHIC HASH WORKSHOP 2005
Location: Gaithersburg, US
Website: http://www.nist.gov/public_affairs/confpage/051031.htm

9-10 November 2005
INFOSECURITY NETHERLANDS
Location: Utrecht
Website: http://www.infosecurity.nl/sites/infosecurity/en/index.asp

14-16 November 2005
CSI 32nd ANNUAL COMPUTER SECURITY CONFERENCE & EXPO
Location: Washington
Website: www.gocsi.com

6-8 December 2005
INFOSECURITY NEW YORK
Location: New York
Website: www.infosecurityevent.com