Computer Fraud & Security

June 2004 ISSN 1361-3723

Corporate governance — 4 Log correlation — 7

Editor: Sarah Hilley Editorial Advisors: Peter Stephenson, US; Silvano Ongetta, Italy; Paul Sanderson, UK; Chris Amery, UK; Jan Eloff, South Africa; Hans Gliss, Germany; David Herson, UK; P. Kraaibeek, Germany; Wayne Madsen, Virginia, USA; Belden Menkus, Tennessee, USA; Bill Murray, Connecticut, USA; Donn B. Parker, California, USA; Peter Sommer, UK; Mark Tantam, UK; Peter Thingsted, Denmark; Hank Wolfe, New Zealand; Charles Cresson Wood; Bill J. Caelli. Editorial Office: Elsevier Advanced Technology, PO Box 150, Kidlington, Oxford OX5 1AS, UK Tel: +44-(0)1865-843645 Fax: +44-(0)1865-843971 Email: [email protected] Subscription price for one year (12 issues): US$833/¥102,240/€769.00, including first class airmail delivery, subject to our prevailing exchange rate. Price valid to end of 2004. Subscription Enquiries, Orders and Payments: For customers residing in the Americas (North, South and Central America): Elsevier Journals Customer Service, 6277 Sea Harbor Drive, Orlando, FL 32887-4800, USA. North American customers: Tel: +1 (877) 839-7126 Fax: +1 (407) 363-1354. Customers outside US: Tel: +1 (407) 345-4020 Fax: +1 (407) 363-1354 Email: [email protected] For customers in the rest of the world: Elsevier Science Customer Support Department, PO Box 211, 1000 AE Amsterdam, The Netherlands Tel: (+31) 20-3853757 Fax: (+31) 20-4853432 Email: [email protected] To order from our website: www.compseconline.com

Publishers of Network Security Computers & Security Computer Fraud & Security Computer Law & Security Report Information Security Technical Report

When you outsource to India, where does your data go?

Analysis

Not where you think… Many outsourced IT services are being subcontracted from Indian providers to countries such as Sudan, Iran and Bulgaria, which increases the security risk. Risk management professionals are warning companies to stop and check that their service provider in India is actually performing contracted offshore services itself and not outsourcing further to other countries.

Some companies in India are faced with a labour shortage and a lack of proper infrastructure to cope with the burst of business from the West. "They can't deliver what they've signed up to deliver," said Samir Kapuria, director of strategic solutions at security consultancy @stake, "so they outsource to other countries where the cost is lower."

Colin Dixon, project manager at the Information Security Forum (ISF), said many ISF members have reported this problem during an ongoing investigation by the elite security club into outsourcing risks. "Contracts should contain a clause banning offshoring companies from further outsourcing without the client's knowledge," said Dixon. Companies are being put in the awkward position of "relying on the Indian provider to perform due diligence on their subcontractors, and you don't know if they are able to do that," he said. The elongated outsourcing chain multiplies the risk. It "leads to a high degree of separation in the development of applications, for example," said Kapuria.

Compliance with corporate governance also becomes more complicated, as the responsibility lies with the company and not the provider. And adherence to regulations gets even harder to control if services are outsourced twice. Most ISF members have identified the issue and stopped it before signing a contract, said Dixon. But Kapuria said that some of @stake's clients did not find out about the double outsourcing until after the contract was signed. Intrusion detection traffic coming from outside India alerted some banks that subcontracting was taking place, said Kapuria. 70% of the blue-chip companies in the ISF are currently outsourcing.

Contents

When you outsource to India, where does your data go? ... 1
Pentagon panel checks privacy in war on terrorism ... 1
Cyber attacks on banks double from 2003 ... 2
News In Brief ... 2, 3
Corporate governance: Crime and punishment ... 4
Log correlation: The "Art" of log correlation, part 1 ... 7
Video encryption: The epic movie battle ... 12
Forensic policy: The question of organizational forensic policy ... 13
Getting the Whole Picture: A FARES baseline analysis case study ... 15
Events ... 20

news

In Brief

RANDEX WORM AUTHOR ARRESTED
A 16-year-old juvenile from Canada has been charged with writing and spreading the Randex worm. The worm, which allows remote intruders to access and control computers, is believed to have spawned more than 20 variants since its release last November. The Royal Canadian Mounted Police announced the arrest of the youth, who is too young to be named, on 28 May 2004, although charges were filed before then.

TAIWANESE TROJAN AUTHOR TRAPPED
Taiwanese police have arrested the author of the Trojan horse known as 'Peep', after it was used by Chinese hackers to steal Taiwanese government information. The 30-year-old programmer, Wang An-ping, is believed to have put the Trojan on hacking websites to advertise his skills; instead the program was used to retrieve and destroy information from infected PCs in schools, companies and government agencies. He denies any knowledge of the attacks, but admits that he wrote the software and was associating with Chinese software developers. If found guilty, he could face up to five years in prison.

Pentagon panel checks privacy in war on terrorism
Wayne Madsen

The US Department of Defense has released a special panel report which shows that data mining is at risk of invading the privacy rights of US citizens. The report, titled "Safeguarding Privacy in the Fight Against Terrorism" and prepared by the Technology and Privacy Advisory Committee (TAPAC), said that although data mining could be a useful tool in the fight against terrorism, its unfettered use, without controls or an adequate predicate, could "run the risk of becoming the 21st-century equivalent of general searches, which the authors of the Bill of Rights were so concerned to protect against." The TAPAC report was sent to Defense Secretary Donald Rumsfeld with a number of recommendations on improving privacy in relation to intelligence-gathering systems on potential terrorist threats. TAPAC stated in its report that there are a number of systems in operation or under development that encourage data mining of the personal information of Americans. The report said that the Total Information Awareness
program (TIA), which has had its funding source blocked, was not unique in its potential for data mining. "TAPAC is aware of many other programs in use or under development both within DOD and elsewhere in the government that make similar uses of personal information concerning US persons to detect and deter terrorist activities, including:"
• DOD programs to determine whether data mining can be used to identify individuals who pose a threat to US forces abroad.
• The intelligence community's Advanced Research and Development Activity center, based in the National Security Agency, to conduct 'advanced research and development related to extracting intelligence from information transmitted or manipulated by electronic means.'
• The Computer-Assisted Passenger Prescreening System in the Department of Homeland Security ('DHS').
• The Treasury Department's Financial Crimes Enforcement Network.
• Federally mandated 'Know Your Customer' rules.

• The 'MATRIX' (Multistate Anti-Terrorism Information Exchange) system to link law enforcement records with other government and private-sector databases in eight states and DHS.
• Congress's mandate in the Homeland Security Act that DHS 'establish and utilize . . . a secure communications and information technology infrastructure, including data mining and other advanced analytical tools,' to 'access, receive, and analyze data to detect and identify threats of terrorism against the United States.'

TAPAC also reported that one Pentagon project, known as Threat Alerts and Locally Observed Notices ("TALON"), allows military installations to share information about threats, suspicious activity, or other anomalous behaviour via a Web-based system. Information about suspicious people who are either denied access to, or who are observed behaving suspiciously around, a military installation can be instantly shared with other military bases and fused with other information maintained on targeted suspicious individuals. TAPAC's report contained a number of proposals for Rumsfeld, including that he
establish a regulatory framework applicable to all data mining conducted by the Pentagon that involves personally identifiable information concerning US persons. Another recommendation is to create a policy-level privacy officer to check that the regulations are carried out. The creation of two panels of external privacy experts to advise the Pentagon and the President on privacy issues is also recommended, and the report calls for oversight, training, ethics, sensitivity to privacy concerns,

and inter-agency dialogue. Rumsfeld appointed TAPAC in February 2003 to examine the use of "advanced information technologies to identify terrorists before they act." Rumsfeld also charged TAPAC "to ensure that the application of such technologies within the Defense Department is carried out in accordance with US law and American values related to privacy."

Cyber attacks on banks double from 2003
Brian McKenna

Online security attacks on the global financial sector have doubled in the last year. Deloitte and Touche's second annual Global Security Survey indicates a dramatic increase in respondents reporting system breaches among financial institutions. The survey, released on 27 May, showed that the proportion of financial institutions whose systems were compromised in the last year has increased from 39% to 83%. Moreover, 40% of victims said they had sustained financial loss. The survey sampled 100 companies, including 31 of the world's top 100 financial services firms, 23 of the top 100 banks, and 10 of the top 50 insurance companies.

A senior banking source at a City of London institution confirmed that cyber attacks and financial losses have increased in the last year. "There can be no doubt of that," he said, "though it is hard to get an overall picture. Some colleagues at other institutions are saying it is 'business as usual', while others do report significant financial losses."

Eighty-seven per cent of the professional services firm's respondents said they had fully deployed anti-virus measures, which is down from 96% in 2003. While this might indicate a loss of faith in traditional AV technology, caused by the success of network worms such as Blaster and Sasser, the City of London banking source expressed scepticism: "It's more that anti-virus is lacking at the customer end, especially regarding phishing attacks."

Deloitte and Touche did not put a figure on the scale of the loss due to the increased volume of attacks. The banking source said that "losses are starting to get on a par with credit card fraud losses, but at present it is more about brand damage and internal disruption".

In Brief

PROOF-OF-CONCEPT VIRUS THREATENS 64-BIT SYSTEMS

FBI INVESTIGATE CISCO CODE THEFT

Symantec has released details of what is thought to be the first threat to 64-bit Windows systems. The virus is a "proof-of-concept" program, written to demonstrate that a certain vulnerability exists rather than as an active and malicious virus. Named W64.Rugrat.3344, it is not believed to be a significant threat due to the relatively small number of 64-bit systems in use; however, more viruses are anticipated as the systems increase in popularity. The program does not work on 32-bit Windows platforms.

The FBI is working with Cisco, as it appears that some of its source code has been stolen. A small amount of the source code, which could be used by hackers to sabotage operating systems, was posted on a Russian website. Few further details are available as the FBI and Cisco continue to investigate the theft and the possible hacking of their corporate network.

WORMS COST ISPS €123 MILLION
European ISPs will suffer financially from worms this year, to the tune of 123 million Euros. The increased traffic from worms can cause an upsurge in support calls to ISPs. The costs associated with dealing with worms, including increased customer support, loss of brand equity and tactical response teams, can create a financial problem that may persist long after the worm has gone.

ZOMBIES DRIVE COMCAST SPAM RECORD
Zombie computers have added 700 million e-mails a day to the 100 million legitimate messages flowing through Comcast, making the US high-speed cable-based internet service provider the world's biggest single source of spam. Zombie computers arise when spammers use bugs in Microsoft Windows to take over PCs and use them to send junk emails, mostly via port 25. PCs on broadband, always-on connections are fast enough that most users do not notice the degraded service. Comcast has over 21 million users.

'GAME' RECORDS KEYSTROKES FOR STUDENT HACKER
A student at the National University of Singapore was recently jailed for hiding a keystroke-logging program in a game on his website. The program, Perfect Keylogger, would install itself and record all keystrokes whenever someone downloaded the game from Nguyen Van Phi Hung's website. Fellow students downloaded the game, and he then used their captured bank account details for online shopping, buying phone cards and magazine subscriptions with the stolen funds. The computer engineering student pleaded guilty to several charges and faces a maximum penalty of 10 years.

HACKERS DEFACE MICROSOFT WEBSITE Hackers calling themselves the "Outlaw Group" sabotaged the UK press area of Microsoft's website on 24 May 2004. Microsoft say that this did not compromise confidential data.


corporate governance

Crime and punishment: corporate governance
Steven Hinde, Group Information Protection Manager, BUPA, UK

With the financial scandals that occurred in the US some years ago, governments are placing increasing emphasis on good corporate governance and financial reporting. They have shown their resolve through the increasing level of fines imposed upon transgressors. A combination of concerns about survival in markets that are dependent upon efficient supply chains, about fail-safe IT, and about reputations susceptible to damage in the face of corporate governance scandals has dramatically exposed vulnerabilities. The Financial Times has defined corporate governance as follows: "Corporate governance — which can be defined narrowly as the relationship of a company to its shareholders or, more broadly, as its relationship to society …." The corporate governance scandals that brought the downfall of Enron and Worldcom and led to the Sarbanes-Oxley legislation in the US forced greater transparency in company accounting, combined with massive shareholder pressure for companies to know who they were dealing with as well as who they were employing.

Named after Senator Paul Sarbanes and Representative Michael Oxley, the Act is also intended to "deter and punish corporate and accounting fraud and corruption, ensure justice for wrongdoers, and protect the interests of workers and shareholders". So wrote President Bush when signing the Sarbanes-Oxley Act into law on 30 July 2002. The Act introduced major legislative changes to financial practice and corporate governance regulation, and introduced stringent new rules with the stated objective "to protect investors by improving the accuracy and reliability of corporate disclosures made pursuant to the securities laws."

Coming out of the US financial scandals, the overall objective of Sarbanes-Oxley is to ensure that financial reporting is consistent in using source data throughout the organization. Although the legislation emanates from the US, companies around the world are affected if their parent company is reporting in the US. It impacts directly on all US-listed public companies. More broadly, it almost certainly will change the way securities regulators around the world approach corporate governance. The Sarbanes-Oxley Act aims to strengthen overall business operations by providing guidelines to help efficiently manage internal controls and enhance financial reporting practices. Most observers would agree that the Sarbanes-Oxley Act is the single most important piece of legislation affecting corporate governance, financial disclosure and the practice of public accounting since the US securities laws of the early 1930s. It is, moreover, a law that came into being in the glare of a very bright and very hot spotlight of massive corporate failures. It has caused much consternation, panic and selling of silver-bullet solutions, not only in the US but also around the world. Gartner predicts that the following standards and regulations will start to have global effects:
• The Sarbanes-Oxley Act in the US.
• The adoption of globally recognized accounting standards in Australia.
• Basel II monitoring and reporting requirements in the global financial services industry.
• The OECD Principles of Corporate Governance.

Section 404: Management Assessment of Internal Controls
(a) RULES REQUIRED - The Commission shall prescribe rules requiring each annual report required by section 13 of the Securities Exchange Act of 1934 (15 USC 78m) to contain an internal control report, which shall: (1) state the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting; and (2) contain an assessment, as of the end of the most recent fiscal year of the issuer, of the effectiveness of the internal control structure and procedures of the issuer for financial reporting.
(b) INTERNAL CONTROL EVALUATION AND REPORTING - With respect to the internal control assessment required by subsection (a), each registered public accounting firm that prepares or issues the audit report for the issuer shall attest to, and report on, the assessment made by the management of the issuer. An attestation made under this subsection shall be made in accordance with standards for attestation engagements issued or adopted by the Board. Any such attestation shall not be the subject of a separate engagement.

Figure 1: Section 404, Sarbanes-Oxley Act

"At what stage is your organization with respect to Sarbanes-Oxley compliance?" 771 respondents to a recent Internet straw poll replied as follows:
Not started: 19%
Just beginning research: 38%
Plan created and ready to go: 16%
Changes being implemented: 17%
Almost compliant: 6%
Fully compliant: 4%
Source: Sarbanes-Oxley Act Community Forum

Figure 2: Are organizations ready for Sarbanes-Oxley?

Although sections 302 (Corporate Responsibility for Financial Reports), 401 (Disclosures in Periodic Reports), 404 (Management Assessment of Internal Controls), 409 (Real Time Issuer Disclosures), 802 (Criminal Penalties for Altering Documents) and 906 (Corporate Responsibility for Financial Reports) are the most significant with respect to compliance and internal control, it is section 404 that seems to be causing most concern (Figure 1). To assist US courts to adjudicate in cases brought under Sarbanes-Oxley, the COSO risk management requirements have been extended (see Figure 3). The implication is that provisions including, but extending beyond, the controls environment stipulated in Section 404 will be included for judicial review.

Sarbanes-Oxley will change the way securities regulators around the world approach corporate governance

Corporate compliance standards for all of the requirements identified in the Sarbanes-Oxley Act will be based on the US Sentencing Commission's Guidelines. The seven original guidelines of the Committee of Sponsoring Organisations (COSO) of the Treadway Commission constitute the minimum requirements for public companies and the foundation of "best practices" for private companies. These are:

1. Establish procedures - companies must establish compliance standards and procedures to be followed by all employees.
2. High-level oversight - at least one "high-level" individual must be assigned overall responsibility for compliance.
3. Use due care - organizations must not delegate "substantial discretionary authority" to individuals who have engaged in illegal activities.
4. Communicate standards - organizations must communicate standards and procedures to all employees.
5. Monitor - organizations must take "reasonable steps to achieve compliance" with their own standards.
6. Enforce consistently - standards must be enforced consistently through disciplinary mechanisms.
7. Response and prevention - organizations must take "all reasonable steps" to respond appropriately to violations, and must act to prevent similar offences.

Ten guidelines have been added to these to fulfil the discrete criteria of Sarbanes-Oxley compliance, to reinforce the recently updated risk management requirements specified by COSO, and to provide US District Courts and legal counsel with a comprehensive set of high-level guidelines in order to adjudicate cases brought forward for their review under the Sarbanes-Oxley Act in 2005. These are:

1. Tone at the top - an organizational culture that encourages a commitment to compliance with the law.
2. Conduct and internal control - standards of conduct and internal control systems that are reasonably capable of "reducing the likelihood of violations of law".
3. Leadership accountability - responsibilities of an organization's governing authority and organizational leadership for compliance.
4. Resources and authority - resources and authority for individuals with the responsibility for implementation of the program.
5. History of violations - an objective requirement for determining whether there is a "history of engaging in violations of law".
6. Conduct training - training and the dissemination of training materials and information within the definition of an "effective program".
7. Evaluate programs - "periodic evaluation of the effectiveness of a program" added to the requirement for monitoring and auditing systems.
8. Whistleblower system - a mechanism for anonymous reporting.
9. Encourage employees - a system for employees not only to report actual violations, but to "seek guidance about potential" violations, in order to more specifically encourage prevention and deterrence of violations.
10. Risk assessment - ongoing risk assessments as part of the implementation of an "effective program."

Source: The Seventeen Ultimate Compliance Guidelines, Lane Leskela, Research Director, Gartner.

Figure 3: Extended COSO risk management requirements

Sarbanes-Oxley requires compliance with a comprehensive reform of accounting procedures for publicly held corporations to promote and improve the quality and transparency of financial reporting by both internal and external independent auditors. A company's system of internal control has a key role in the management of risks that are significant to the fulfilment of its business objectives. Companies must identify and manage any significant risks. Directors must review the system of internal control in managing key risks, and undertake an annual assessment for the purpose of making their statements on internal control in the annual report. Most public companies must meet the financial reporting and certification mandates of Sarbanes-Oxley for any end-of-year financial statements filed after 15 June 2004, while smaller and foreign companies have until 15 April 2005.


Information security governance

Reporting on the system of internal controls is required for almost any organization seeking to meet regulatory compliance, including the Section 404 requirements of Sarbanes-Oxley. In the domain of IS computer operational controls, assurance hinges upon the integrity of the critical underlying IS change and configuration management processes. As security is such a strong theme in the Act, many organizations are addressing this using the ISO/IEC 17799:2000(E) international standard, the international equivalent of British Standard BS 7799 Part 1: 2000, as the benchmark against which to measure the quality of IS controls. The international standard is also being used as a benchmark for demonstrating compliance with the stringent IS security requirements or operational practices of other legislation, and with regulations in particular industries, such as the Gramm-Leach-Bliley Act (US financial services) and the Health Insurance Portability and Accountability Act (US health insurance data).

One has to be grateful that the need to comply, and to prove compliance, with internal controls is bringing with it adherence to ISO/IEC 17799. For organizations in the UK, the Information Commissioner's Office (Data Protection Act 1998) has used adherence to BS 7799 as part of the registration (notification) process and for assessments under the Act.

Indeed, the US National Cyber Security Partnership (NCSP) Corporate Governance Task Force report entitled Information Security Governance: A Call to Action, issued by the US National Cyber Security Summit Task Force under the auspices of the US Department of Homeland Security, is heavily based on ISO 17799 as well as ISACA's COBIT (Control Objectives for Information and Related Technology) guidance and the revised COSO guidelines for internal controls. The report provides a new management framework and a call for action to industry and organizations to integrate effective information security governance (ISG) programmes into their corporate governance processes. The Task Force identifies cyber security roles and responsibilities within corporate management structures, and references and combines industry-accepted standards, best practices, metrics and tool sets that bring accountability to the three key elements of corporate governance programmes and information security systems: people, process and technology. Although information security is often viewed as a technical issue, the task force said it is also a 'governance challenge' that involves risk management, reporting and accountability. As such, it requires the active engagement of executive management and boards of directors. The report went on to recommend that:
• Organizations should adopt the information security governance framework described in the report and embed cyber security into their corporate governance process;
• Organizations should signal their commitment to information security governance by stating on their website that they intend to use the tools developed by the Corporate Governance Task Force to assess their performance and report the results to their board of directors;
• The US Department of Homeland Security should endorse the information security governance framework and core set of principles outlined in the report, and encourage the private sector to make cyber security part of its corporate governance efforts;
• The Committee of Sponsoring Organisations of the Treadway Commission (COSO) should revise the Internal Controls-Integrated Framework so that it explicitly addresses information security governance.

As stated above, this latter point has been addressed in the guidance to US courts on the interpretation of section 404 of Sarbanes-Oxley. Sarbanes-Oxley and other legislation are starting to have an impact on organizations. They give IS security personnel an excellent opportunity to raise again the importance of IS security and control as part of corporate governance, with the added benefit of it being part of compliance costs and reducing the risk of hefty fines, as demonstrated in the Vivendi case.

First court case brought under the Sarbanes-Oxley Act
The US SEC has settled a civil fraud action against Vivendi Universal, its former CEO Jean-Marie Messier and its former CFO Guillaume Hannezo. According to the SEC, the settlements include Vivendi's consent to pay a $50 million civil penalty, Messier's agreement to relinquish his claims to a $21 million severance package negotiated before his resignation, and payment of disgorgement and civil penalties by Messier and Hannezo that total over $1 million. Messier is prohibited from serving as an officer or director of a public company for 10 years. The Commission's complaint describes a course of conduct by Vivendi, Messier and Hannezo that disguised Vivendi's cash flow and liquidity problems and improperly adjusted accounting reserves to meet earnings before interest, taxes, depreciation and amortisation targets. The SEC also stated that Vivendi failed to disclose material financial commitments, all in violation of the antifraud provisions of federal securities laws. The SEC said it wanted to see the penalties paid to defrauded investors, including those who held Vivendi's ordinary shares and American Depository Shares, in accordance with Section 308(a), the Fair Funds provision of the Sarbanes-Oxley Act.

Figure 4: First court case brought under the Sarbanes-Oxley Act

The OECD principles of corporate governance
1 ENSURING THE BASIS FOR AN EFFECTIVE CORPORATE GOVERNANCE FRAMEWORK - The corporate governance framework should promote transparent and efficient markets, be consistent with the rule of law and clearly articulate the division of responsibilities among different supervisory, regulatory and enforcement authorities.
2 THE RIGHTS OF SHAREHOLDERS AND KEY OWNERSHIP FUNCTIONS - The corporate governance framework should protect and facilitate the exercise of shareholders' rights.
3 THE EQUITABLE TREATMENT OF SHAREHOLDERS - The corporate governance framework should ensure the equitable treatment of all shareholders, including minority and foreign shareholders. All shareholders should have the opportunity to obtain effective redress for violation of their rights.
4 THE ROLE OF STAKEHOLDERS IN CORPORATE GOVERNANCE - The corporate governance framework should recognise the rights of stakeholders established by law or through mutual agreements and encourage active co-operation between corporations and stakeholders in creating wealth, jobs, and the sustainability of financially sound enterprises.
5 DISCLOSURE AND TRANSPARENCY - The corporate governance framework should ensure that timely and accurate disclosure is made on all material matters regarding the corporation, including the financial situation, performance, ownership, and governance of the company.
6 THE RESPONSIBILITIES OF THE BOARD - The corporate governance framework should ensure the strategic guidance of the company, the effective monitoring of management by the board, and the board's accountability to the company and the shareholders.

Figure 5: The OECD principles of corporate governance

The "ART" of log correlation: Part 1
Tools and Techniques for Correlating Events and Log Files
Dario Valentino Forte, CFE, CISM, Founder of the IRItaly Project

Log file correlation is related to two distinct activities: intrusion detection and network forensics. It is more important than ever that these two disciplines work together in a mutualistic relationship in order to avoid points of failure. This paper, intended as a tutorial for those dealing with such issues, presents an overview of log analysis and correlation, with special emphasis on the tools and techniques for managing them within a network forensics context. The paper has been split into two parts; part 2 will appear next month in the July edition of Computer Fraud & Security.

Logs: characteristics and requisites for reliability

Every IT and network object, if programmed and configured accordingly, is capable of producing logs. Logs have to have certain fundamental requisites for network forensics purposes. They are:
• Integrity: The log must be unaltered and not admit any tampering or modification by unauthorized operators.
• Time stamping: The log must guarantee reasonable certainty as to the date and hour a certain event was registered. This is absolutely essential for making correlations after an incident.
• Normalization and data reduction: By normalization we mean the ability of the correlation tool to extract a datum from the source format of the log file that can be correlated with others of a different type without having to violate the integrity of the source datum. Data reduction (a.k.a. filtering) is the data extraction procedure for identifying a series of pertinent events and correlating them according to selective criteria.

The need for log integrity: problems and possible solutions

A log must guarantee its integrity right from the moment of registration. Regardless of the point of acquisition (sniffer, agent, daemon, etc.), a log usually flows as demonstrated in Figure 1.

Figure 1: Log Flow

Acquisition occurs the moment a network sniffer, a system agent or a daemon acquires the event and makes it available to a subsequent transmission process directed to a machine that is usually different from the one that is the source of the event. Once the log has reached the destination machine (called the log machine) it may be temporarily memorized in a preassigned slot or inputted to a database for later consultation. Once the policy-determined disc capacity has been reached, the data is stored in a predetermined location and the original logs are deleted to make room for new files from the source object. This method is known as log rotation.

Log file integrity can be violated in several ways. An attacker might take advantage of a non-encrypted transmission channel between the acquisition and destination points to intercept and modify the transiting log. He might also spoof the IP address sending the logs, making the log machine think it is receiving log entries and files that actually come from a different source. The basic configuration of Syslog makes this a real possibility: RFC 3164 states that Syslog transmissions are based on UDP, a connectionless protocol and thus one that is unreliable for network forensic purposes, unless separate LANs are used for the transmission and collection of log files. But even here there might be some cases that are difficult to interpret.

Another integrity problem regards the management of files once they have arrived on the log machine. If the log machine is compromised there is a very high probability of integrity violation. This usually happens to individual files, whose content is modified or even wiped. The integrity issue also regards how the paternity of log files is handled; in many juridical contexts, you have to be certain as to which machine generated the log files and who did the investigation. There are several methods for resolving the problem. The first is specified in RFC 3195, which identifies a possible method for reliable transmission of syslog messages, useful especially in the case of a high number of relays (intermediate record retransmission points between the source and the log repository). The main problem in this case is that RFC 3195 has not been incorporated into enough systems to be considered an established protocol.

Hence, practically speaking, most system administrators and security analysts view SCP (Secure Copy) as a good workaround. The most evident contraindication is the unsuitability of such a workaround for intrusion detection purposes, since there is no real-time assessment of the existence of an intrusion via log file reading. And the problem remains of security in transmission between the acquisition and the collection points. In response to the problem, in UNIX-based architectures the practice of using cryptcat to establish a relatively robust tunnel between the various machines is gaining wider acceptance.

Figure 2: Log architecture with time stamping machine

The procedure is as follows.

On the log-generating host:
1. Edit /etc/syslog.conf so that all messages are also sent to the local host:
   *.*    @localhost
2. Then run:
   # nc -l -u -p 514 | cryptcat 10.2.1.1 9999

On the log-collecting host:
1. Run syslog with the remote reception (-r) flag (for Linux).
2. Then run:
   # cryptcat -l -p 9999 | nc -u localhost 514

The above configuration will establish an encrypted connection among the various transmission nodes.

An alternative would be to use a Syslog replacement such as Syslog-ng, which performs relay operations automatically and with greater security potential. From the practical standpoint, the methods described above offer a good compromise between operational needs and the theory that a hash must be generated for each log entry (something which is impossible in a distributed environment). The objective still remains of achieving transaction atomicity (transactions are done or undone completely) and log file reliability. The latter concept means being sure that the log file does not get altered once it has been closed, for example via interception during the log rotation phase. The most important aspect of this phase is the final-record message, indicating the last record written in the log, which is then closed and hashed. This sequence of processes may turn out to be critical when, after correlation, a whole and trustworthy log has to be provided to the judicial authorities.
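To make the final-record and hashing step concrete, here is a minimal Python sketch (an illustration only, not taken from the article; the marker format and the .sha1 sidecar file are assumptions) that appends a closing record to a log about to be rotated and then stores a SHA-1 digest of the closed file so that any later alteration can be detected.

    import hashlib
    from datetime import datetime, timezone

    def close_and_hash_log(path: str) -> str:
        """Append a final-record marker to a log being rotated, then hash the closed file."""
        # 1. Append the final-record message indicating the last record written.
        stamp = datetime.now(timezone.utc).isoformat()
        with open(path, "a", encoding="utf-8") as log:
            log.write(f"FINAL-RECORD closed_at={stamp}\n")

        # 2. Hash the whole closed file so later alteration can be detected.
        sha1 = hashlib.sha1()
        with open(path, "rb") as log:
            for chunk in iter(lambda: log.read(65536), b""):
                sha1.update(chunk)
        digest = sha1.hexdigest()

        # 3. Store the digest in a sidecar file; in practice it would be kept
        #    on a separate, trusted machine rather than next to the log itself.
        with open(path + ".sha1", "w", encoding="utf-8") as out:
            out.write(f"{digest}  {path}\n")
        return digest

    # Example (hypothetical path): digest = close_and_hash_log("/var/log/archive/syslog.2004-06-01")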

Log time stamp management: problems and possible solutions

Another problem of importance is managing log file time stamping. Each report has to be 100% reliable, not only in terms of its integrity in the strict sense (IP, ports, payloads, etc.), but also in terms of the date and time of the event reported. Time stamping is essential for two reasons:
• Atomicity of the report.
• Correlation.
The most common problems here are the lack of synchronization and the lack of uniformity of time zones. The lack of synchronization occurs when the acquisition points (network sensors and Syslog devices) are not synchronized with an atomic clock but only within small groups. Reliance is usually placed on NTP in these cases, but this may open up a series of noted vulnerabilities, especially in distributed architectures connected to the public network. Furthermore, the use of NTP does not guarantee uniformity unless a series of measures recommended by certain RFCs is adopted for certain types of logs, as we will describe below. Some technology manufacturers have come out with appliances equipped with highly reliable processors that do time stamping for every entry, synchronizing everything with atomic clocks distributed around the world. This sort of solution, albeit offering a certain degree of reliability, increases design costs and obviously makes management more complex. In a distributed architecture, a time stamping scheme administered by an appliance is set up as shown in Figure 2: the appliance interacts with a PKI that authenticates the transaction nodes to prevent the problem of report repudiation. While this type of architecture may be "easily" implemented in an environment with a healthy budget, there are applications for less extensive architectures that may be helpful in guaranteeing a minimum of compliance with best practices. Granted that one of the most commonly used log formats is Libpcap-compatible (used by TcpDump and Ethereal) over TCP connections (hence three-way), it is possible to attribute a further level of time stamping, as per RFCs 1072 and 2018, by enabling the SackOK (Selective Acknowledgement OK) option. This option can return a 32-bit time stamp value in the first 4 bytes of each packet, so that reports among transaction nodes with the SackOK option enabled are synchronized and can be correlated. This approach may be effective provided that the entire system and network is set up for it.

Another factor that is often not taken into consideration is time zones (TZ). In distributed architectures on an international scale, some information security managers believe it is wise to maintain the time zone of the physical location of the system or network object. This choice has the disadvantage of making correlation more complicated and less effective because of time zone fragmentation. We are currently witnessing an increase in time zones simply being based on GMT, which has the plus of simplifying management, even though it still requires that the choice be incorporated into a policy.
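As a small illustration of the GMT-based policy described above, the Python sketch below (not from the article; the sensor names and offsets are invented) converts timestamps recorded in different local time zones to a single UTC timeline before correlation.

    from datetime import datetime, timedelta, timezone

    def to_utc(local_timestamp: str, utc_offset_hours: float) -> datetime:
        """Convert a local 'YYYY-MM-DD HH:MM:SS' timestamp plus its UTC offset to UTC."""
        naive = datetime.strptime(local_timestamp, "%Y-%m-%d %H:%M:%S")
        tz = timezone(timedelta(hours=utc_offset_hours))
        return naive.replace(tzinfo=tz).astimezone(timezone.utc)

    # Events from sensors in different time zones (offsets are illustrative).
    events = [
        {"sensor": "milan-ids",  "ts": "2004-06-01 10:15:07", "utc_offset": 2},
        {"sensor": "london-web", "ts": "2004-06-01 09:15:09", "utc_offset": 1},
        {"sensor": "ny-syslog",  "ts": "2004-06-01 04:15:05", "utc_offset": -4},
    ]

    # Normalize every record to UTC so correlation works on a single time line.
    for e in events:
        e["ts_utc"] = to_utc(e["ts"], e["utc_offset"])

    for e in sorted(events, key=lambda e: e["ts_utc"]):
        print(e["sensor"], e["ts_utc"].isoformat())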

Normalization and data reduction: problems and possible solutions

Normalization is identified in certain cases with the term event unification. There is a physiological need for normalization in distributed architectures. Numerous commercial systems prefer the use of XML for normalization operations. This language provides numerous opportunities for event unification and for the management of digital signatures and hashing. There are two basic types of logs: system logs and network logs. If the reports all had a single format there would be no need for normalization. In heterogeneous architectures it is obvious that this is not the case. Let us imagine, for example, an architecture in which we have to correlate events recorded by a website, by a network sniffer and by a proprietary application. The website will record the events in W3C format, the network sniffer in Libpcap format, while the proprietary application might record the events in a non-standard format. It is clear that unification is necessary here. The solution in this case consists of finding points in common among the various formats involved in the transaction and creating a level of abstraction, as demonstrated in Figure 3.

Figure 3: Normalization

It follows in this case that an attacker can once again seek to violate log integrity by zeroing in on the links between the various acquisition points and the point of normalization. We will discuss this below. Regarding the correlation, the point of normalization (normally an engine) and the point of correlation (an activity that may be carried out by the same module, for example, in an IDS) may be the same machine. It is clear that this becomes a potential point of failure from the perspective of network forensics and thus must be managed both to guarantee integrity and to limit possible losses of data during the process of normalization. For this purpose the state of the art is to use MD5 and SHA-1 to ensure integrity and to perform an in-depth verification of the event unification engine, in order to respond to the data reduction issue while keeping the "source" logs alongside the normalized format. In Figure 4, where each source log is memorized on ad hoc supports, another layer is added to Figure 3.

Figure 4: Multi-Layered Log Architecture

In order to manage the secure repository section and still use a series of "source log files" that guarantee a certain reliability, the machines in the second line of Figure 4 have to be trusted, i.e. hardened, and have cryptosystems that can handle authentication, hashing and reliable transmission, as briefly discussed earlier.
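The abstraction layer of Figure 3 can be pictured with a short Python sketch. It is only an illustration under assumed record layouts (a W3C-style web log line and a text export from a sniffer), not the article's own tool: each source record is mapped to a common schema, and a SHA-1 of the untouched source line is kept so the normalized copy can always be checked against its source, in the spirit of the integrity measures just described.

    import hashlib

    # A minimal common schema: source, time, source IP, event summary, plus a hash of the raw line.
    def normalize_w3c(line: str) -> dict:
        # Assumed W3C extended log field order: date time c-ip cs-method cs-uri sc-status
        date, time_, c_ip, method, uri, status = line.split()
        return {"source": "web", "time": f"{date}T{time_}", "src_ip": c_ip,
                "event": f"{method} {uri} {status}",
                "raw_sha1": hashlib.sha1(line.encode()).hexdigest()}

    def normalize_sniffer(line: str) -> dict:
        # Assumed text export of a libpcap-based tool: time src_ip dst_ip proto
        time_, src, dst, proto = line.split()
        return {"source": "sniffer", "time": time_, "src_ip": src,
                "event": f"{proto} to {dst}",
                "raw_sha1": hashlib.sha1(line.encode()).hexdigest()}

    records = [
        normalize_w3c("2004-06-01 10:15:07 192.0.2.7 GET /login.asp 200"),
        normalize_sniffer("2004-06-01T10:15:08 192.0.2.7 203.0.113.9 TCP"),
    ]
    for r in records:
        print(r["time"], r["source"], r["src_ip"], r["event"], r["raw_sha1"][:12])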

Correlation and filtering: needs and possible solutions

In performing log correlation and filtering, the Security Architect and the Manager have to deal with the architecture problems.

Correlation and filtering: definitions

Correlation: "a causal, complementary, parallel, or reciprocal relationship, especially a structural, functional, or qualitative correspondence between two comparable entities" (source: dictionary.com). In this article we use correlation to mean the activity carried out by one or more engines to reconstruct a given complex event that may be symptomatic of a past or current violation. By filtering we mean an activity that may be carried out by the same engines to extract certain kinds of data and arrange them, for example, by protocol type, time, IP, MAC address and so on.

A fairly complex architecture may be set up as demonstrated in Figure 5.

Figure 5: Correlating Normalized Events

As may be observed from Figure 5, and assuming the necessary precautions indicated earlier have been followed, if data is collected at the individual acquisition points (i.e., before the logs get to the normalization engines) by methods such as SCP, the very use of this method might slow down subsequent operations, since these activities require greater dynamism than the "simple" acquisition and generation of logs. Hence in this phase you have to use a tunnelling and authentication (Tp) system based on a secure communication protocol, which might operate at ISO/OSI layer 3.
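As a toy example of the filtering and correlation activities defined above (the event schema carries over from the normalization sketch earlier; the one-minute window and the threshold are arbitrary assumptions), the code below filters normalized events by source IP and flags bursts that may be symptomatic of a violation.

    from collections import defaultdict
    from datetime import datetime, timedelta

    def parse(ts: str) -> datetime:
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")

    def filter_events(events, src_ip=None, proto=None):
        """Keep only events matching the requested criteria (filtering / data reduction)."""
        for e in events:
            if src_ip and e["src_ip"] != src_ip:
                continue
            if proto and proto not in e["event"]:
                continue
            yield e

    def correlate_bursts(events, window=timedelta(seconds=60), threshold=3):
        """Group events per source IP and flag any IP with >= threshold events in one window."""
        by_ip = defaultdict(list)
        for e in events:
            by_ip[e["src_ip"]].append(parse(e["time"]))
        alerts = []
        for ip, times in by_ip.items():
            times.sort()
            for i in range(len(times) - threshold + 1):
                if times[i + threshold - 1] - times[i] <= window:
                    alerts.append((ip, times[i]))
                    break
        return alerts

    events = [
        {"time": "2004-06-01T10:15:07", "src_ip": "192.0.2.7", "event": "GET /login.asp 401"},
        {"time": "2004-06-01T10:15:20", "src_ip": "192.0.2.7", "event": "GET /login.asp 401"},
        {"time": "2004-06-01T10:15:41", "src_ip": "192.0.2.7", "event": "GET /admin.asp 401"},
    ]
    print(correlate_bursts(filter_events(events, src_ip="192.0.2.7")))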

Interpretation of one or more log files

In most cases the security administrator reads the result of a correlation done by a certain tool, but he only sees the tip of the iceberg: as the figures in this paper show, the set of processes upstream of the GUI display is much more complex. Whatever the case may be, the literature indicates two basic methods, or approaches, for analyzing logs.

Top-down approach

This is the approach most frequently used in network forensics when the examiner is working with an automated log and event correlation tool. While in intrusion detection a top-down approach means starting from an attack to trace back to the point of origin, in network forensics it means starting from a GUI display of the event to get back to the source log, with the dual purpose of:
1. Validating the correlation process used by the engine of the automatic log and event correlation tool and displayed to the Security Administrator.
2. Seeking out the source logs that will then be used as evidence in court or for subsequent analysis.
In reference to Figure 5, we have a top-down approach to get back to the source logs represented in the previous figures. Once retraced, the acquired logs are produced and recorded onto a CD-ROM or DVD, and the operator will append a digital signature.

Bottom-up approach

This approach is applied by the tool starting from the source log. It is a method used by the IDS to identify an ongoing attack through a real-time analysis of events. In a distributed security environment the IDS engine may reside (as hypothesized earlier) in the same machine hosting the normalization engine. In this case the IDS engine will then use the network forensic tool to display the problem on the GUI. You start from an automatic low-level analysis of the events generated by the points of acquisition to arrive at the "presentation" level of the investigative process. Such an approach, furthermore, is followed when log analysis (and the subsequent correlation) is performed manually, i.e., without the aid of automated tools. Here, a category of tools known as log parsers comes to your aid. The purpose of these tools is to analyze source logs for a bottom-up correlation. A parser is usually written in a script language like Perl or Python. There are, however, parsers written in Java to provide a cross-platform approach to network forensics examiners, perhaps on a bootable CD-ROM (see Section 5 for examples).
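A bottom-up analysis of source logs might start with a parser of this kind. The Python sketch below assumes a classic syslog-style line layout and a 'Failed password' message pattern purely for illustration; it is not one of the tools referred to in Section 5.

    import re
    import sys
    from collections import Counter

    # Assumed syslog-style line: "Jun  1 10:15:07 host sshd[123]: Failed password for root from 192.0.2.7"
    LINE = re.compile(
        r"^(?P<month>\w{3})\s+(?P<day>\d+)\s+(?P<time>[\d:]+)\s+(?P<host>\S+)\s+"
        r"(?P<proc>[^:\[]+)(?:\[\d+\])?:\s+(?P<msg>.*)$"
    )

    def parse_file(path: str):
        """Yield one dictionary per parseable line of a source log (the bottom-up starting point)."""
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = LINE.match(line.rstrip("\n"))
                if m:
                    yield m.groupdict()

    if __name__ == "__main__":
        failures = Counter()
        for rec in parse_file(sys.argv[1]):
            if "Failed password" in rec["msg"]:
                # Count authentication failures per reporting host as a simple starting correlation.
                failures[rec["host"]] += 1
        for host, count in failures.most_common():
            print(f"{host}: {count} failed logins")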

About the author

Dario Forte, CISM, CFE, has been involved in information security since 1992. He has worked as an instructor for Intrusion Analysis and Response at the United States DHS. He is currently an instructor in Informatics Incidents Handling at the University of Milan. He was a keynote speaker at the Black Hat Conference 2003 in Las Vegas and is President of the European Chapter of the HTCIA. He can be reached at www.dflabs.com.

Contact: Corso Magenta 43, 20100 Milano, Italy Tel (+39) 340-2624246 Web: www.honeynet.it Email: [email protected]


video encryption

Video encryption - the epic movie battle
Philip Hunter

Security always has to be weighed against cost on the one hand and convenience on the other. But when the stakes are very high this compromise becomes skewed, so that costs soar and user convenience plunges. This is the situation now emerging in the world of digital video, given the scale of potential losses facing Hollywood and the broadcast industry in the emerging video Internet. The Motion Picture Association of America (MPAA) is determined that digital video content be encrypted at any place where it could potentially be copied, including not just interfaces such as USB ports and PCMCIA slots, but also the internal PCI buses of PCs. This will impose a significant cost because it will require PC makers to develop and integrate components capable of handling the encrypted video while still ensuring that it can be watched by the user. But this is good news for makers of dedicated onboard processors, who were rubbing their hands in glee at the recent Embedded Systems Conference in San Francisco. At some point, though, the video does have to be decrypted and converted into analog form so that it can be played. Given the availability of high-speed and accurate digital-to-analog converters, it is relatively easy to tap in at this stage and make bootleg copies. Other techniques such as digital watermarking have therefore been developed to ensure at least that such pirated versions cannot be sold for playback on compliant systems designed to recognize such watermarks. But industry associations such as the MPAA do not believe this goes far enough, because it does nothing to prevent the creation of copies capable of being played on a variety of legacy systems. As a result there is pressure to restrict the user's access to the video stream in any form, analog or digital, although how this will work out in practice remains to be seen. But it looks certain that the user's freedom to manipulate video and to make copies for future use will be curtailed. Certainly if video is always encrypted during storage and transmission within the consumer's PC, it becomes difficult to apply the intelligent software-based mechanisms for manipulating the content that would otherwise emerge. The whole transmission path would be hardware based for maximum security, with

intelligence residing at the centre of the transmission network. It will be harder, and take longer, to allow users to do much more than basic fast forwarding on that basis. Apart from embedded processors, the video Internet is proving a huge boon to the video compression industry, spawning new algorithms at an unprecedented rate.

Video poses unique challenges for encryption, notably the need for real-time transmission with low and predictable delay, the large amount of data, and the fact that it has to operate alongside compression. Even encrypting/decrypting conventional text data takes too much computational time using public key cryptography. This is why the role of public key systems is normally confined to transmission of the private session key used for subsequent encryption of the bulk of the data. But even private key systems such as DES would take too long to encrypt whole MPEG video streams in real time. Fortunately, in the case of broadcast or streamed video it is necessary only to scramble a sufficient amount of the data to make the picture unrecognizable, sometimes even leaving the audio track alone. Therefore schemes have evolved that encrypt just a portion of the data.
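The hybrid pattern described here, public key cryptography used only to exchange a session key and a fast symmetric cipher used for the bulk of the stream, can be sketched in a few lines of Python. This is an illustration only: it uses the third-party cryptography package, and AES stands in for the DES mentioned in the text; none of it is drawn from any specific video product.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Receiver's key pair (in a real system this already exists and is distributed).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: public key cryptography is used only to wrap a random session key...
session_key = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# ...while the bulk of the (compressed) video is encrypted with a fast symmetric cipher.
nonce = os.urandom(16)
encryptor = Cipher(algorithms.AES(session_key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b'compressed video payload goes here') + encryptor.finalize()

# Receiver: unwrap the session key with the private key, then decrypt the stream.
recovered_key = private_key.decrypt(wrapped_key, oaep)
decryptor = Cipher(algorithms.AES(recovered_key), modes.CTR(nonce)).decryptor()
plaintext = decryptor.update(ciphertext) + decryptor.finalize()

The expensive asymmetric operation is performed once per session on 32 bytes, while the large video payload only ever sees the fast symmetric cipher.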

It also makes sense to combine the encryption and compression processes given that both are computationally intensive, both must take place at about the same time, and both must be reversible. Performing them in a standardised way as a single operation makes it easier for different systems to interoperate. But interoperability can itself be a victim of high security, given the creation of new embedded processors that all need to interact. As yet there is no standardisation body charged with the task of ensuring that emerging digital video devices conform to common standards – indeed there is no agreement over what these standards should be. However for many applications the level of video encryption does not have to be particularly strong. It will often be sufficient just to ensure that it is cheaper to buy the item than to break the algorithm used to encrypt it. The situation is different for new release movies when there is no other source of the content in circulation, and the potential to make a mint in circulating the first bootleg copies. But in that case there are other ways to skin the cat anyway – the first bootleg DVDs of “The Passion of the Christ” were cut from illicit recordings made on camcorders during early showings in the cinema. As always protection must be matched to the level of risk. For this reason there is an argument for using existing secret key algorithms such as DES rather than inventing new ones. The point is to confine the encryption to a subset of the digital bits generated by the video. In that way a video encryption algorithm can be created without needing any fundamentally new techniques. All that is needed is a description of which bits of each frame to encrypt. Video frames vary in size, and some can be more readily compressed than others. In a slow moving sequence successive frames may be almost identical and be compressed almost to zero so that there is no need to encrypt any of the data. Indeed this illustrates the virtue of combining compression with encryption.

On the other hand a large frame in a fast moving sequence, say a sporting event, could generate a lot of data even after compression and take so long to encrypt that the delay causes a deterioration in quality. Therefore a good video encryption algorithm should limit itself to a maximum number of encrypted bits per frame, regardless of the frame size. Tests have shown that it is possible to do this while still creating un-viewable encrypted streams. This has been demonstrated for example for RVEA (Realtime Video Encryption Algorithm) developed at Purdue University in the US. The question arises of whether such algorithms operating on a relatively small number of bits in the video stream are sufficiently immune from attack, given that huge computational power is readily available to many hackers these days. If only a limited number of bits are encrypted, it might seem that the chance of decrypting the video through brute force trial and error is increased. But the task is greater than it appears for two reasons. Firstly if the encryption is operating on compressed MPEG transport streams, as is the case with RVEA for example, the cost of any attack would be increased because the MPEG decoding process also has to be performed.
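The idea of capping the number of encrypted bits per frame can also be sketched simply. The following is not RVEA itself, whose selection of bits within the MPEG structure is more sophisticated; it is a minimal, assumed illustration in which only the first few hundred bytes of each compressed frame are run through AES-CTR (again via the Python cryptography package), the rest of the frame is passed through untouched, and the per-frame budget is invented for the example.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

MAX_ENCRYPTED_BYTES = 512   # assumed per-frame budget, purely for illustration

def encrypt_frame(key, frame_index, payload):
    """Encrypt at most MAX_ENCRYPTED_BYTES of one compressed frame (AES-CTR)."""
    head, tail = payload[:MAX_ENCRYPTED_BYTES], payload[MAX_ENCRYPTED_BYTES:]
    # Put the frame index in the high-order bytes of the counter block so the
    # keystream never overlaps between frames.
    initial_counter = frame_index.to_bytes(8, 'big') + bytes(8)
    enc = Cipher(algorithms.AES(key), modes.CTR(initial_counter)).encryptor()
    return enc.update(head) + enc.finalize() + tail

key = os.urandom(32)
frames = [os.urandom(n) for n in (200, 4096, 900)]   # stand-ins for compressed frames
scrambled = [encrypt_frame(key, i, f) for i, f in enumerate(frames)]

Because CTR mode is its own inverse, running encrypt_frame again with the same key and frame index recovers the original payload, which keeps the decoder side just as cheap.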

Secondly, an attack involving trial and error testing would depend on the computer being able to tell when a comprehensible video frame has been produced. Although computers can distinguish individual items such as faces when presented with clearly defined images of the same form, they cannot at present identify whether a video frame is a meaningful image for a human rather than just a random assortment of bits. This after all is a somewhat subjective distinction depending on the context, and cannot be readily defined. It might be possible in some cases to obtain blurred pictures via trial and error by having a human watch a quick replay of candidate images produced by a powerful computer, but it would be a long and tedious process to recover the full video stream this way.

Not all video is encoded via MPEG of course, and video conference software normally uses the H.263 protocol. Encryption of such data will also become more important as the Internet grows as a medium for on-demand video conferencing. In some respects the encryption requirements will be more conventional, being to avoid eavesdropping and maintain privacy during sessions. The focus is on the network and the systems that control the sessions rather than the end point devices and interfaces. The main concern is eavesdropping on what could be confidential sessions involving sensitive or valuable information, rather than theft of the video content itself. In this case there might be an argument for a stronger level of encryption than for broadcast video within the network, although in practice Internet video conferencing is unlikely to be used for high-level interaction. But another difference is that Internet conferencing is relatively low quality at present, running at the same data rates as other applications. Therefore it can be served by the same encryption mechanisms, which could be SSL in conjunction with a private key system such as DES. At present, however, many Web based conferencing services do not support encryption within their client software, which should deter business users. The real interest though concerns digital video broadcasting both to PCs and TVs, given the desire on the part of content owners to lock up as many sources as possible. For consumers this will mean that the great broadcasting revolution may not be as liberating as had been hoped, given an unholy alliance of regulation, litigation, and access control.

The question of organizational forensic policy Hank Wolfe

The objectives of an organization, in combination with its formal policy, underpin the strategic direction the organization will take. We all know that security begins with policy – in other words the rules of play. If policy is sound then the appropriate security measures can be implemented to protect the activities required to achieve the stated objectives, as well as maintain the information assurance requirements – availability, integrity, authentication, confidentiality and non-repudiation. Policy fails when management does not actively support and promote it. The two top executives of a very large local organization have continually refused to accept and adopt the formal policy document proposed

by the professionals responsible for creating it. The reason is that the two individuals have not changed their passwords in eight years and see no reason to be forced to do so regularly – a best practice as proposed in the


new formal policy. Management is either part of the problem or part of the solution. These two should most probably be engaged in a different line of work, because they compromise their organization's security and set a bad example for their subordinates. New policy is now needed to put the appropriate procedures in place so that if and when an incident occurs (criminal, or one that requires internal discipline) there is a mechanism to protect any potential evidence that may be needed to deal with the offender(s). This is no longer a trivial issue and will become even more important in the future. It too will require the active support of top management. This article is about creating a security policy that deals with forensic evidence.


What is forensic evidence?
In computing we are able to collect, forensically, many different facts from computer media that can now be used in a court of law. Much of the data that may qualify as evidence resides on hard drives in obscure places that most users are not aware of. The forensic investigator must first capture an evidentiary copy of the data residing on the computer media associated with the case; that includes hard drives, floppy disks, CDs, DVDs, flash cards, PCMCIA cards, etc. This evidentiary copy must be validated by using hashing algorithms to prove that the original data and the evidentiary copy of it are exactly the same. The success or failure of a prosecution or disciplinary action may rest first on the validity of the capture process and then on maintaining the chain of evidence. Thereafter, the investigator will be engaged in the analysis and reporting of any evidence that is ultimately found, and occasionally in testifying in a court of law as to the specifics of how that evidence was obtained.
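As a minimal illustration of the validation step (not a replacement for a proper imaging tool or for chain-of-custody procedures), the following Python sketch hashes the original image and the evidentiary copy and checks that the digests match. The file names are hypothetical.

import hashlib

def file_digest(path, algo='sha256', chunk=1 << 20):
    """Hash a (potentially very large) image file in chunks."""
    h = hashlib.new(algo)
    with open(path, 'rb') as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

original = file_digest('suspect_drive.dd')      # hypothetical image of the seized media
copy = file_digest('evidentiary_copy.dd')       # working copy used for analysis
print('Copy verified' if original == copy else 'MISMATCH - copy is not identical')

Recording the digest of the original at capture time, and re-checking it whenever the copy is used, is what allows the investigator to demonstrate later that the evidence has not changed.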

Forensic policy
For the purposes of this discussion, every organization of any size should be considering the creation of a policy and a formalized set of procedures that describe and document the actions that will be taken when an incident is discovered. Many will assume that all that is necessary is to assign the task of capturing evidence to the most technical person available within the organization. If that person is trained in forensic evidence gathering AND if that person has the appropriate tools to accomplish the task, then the likelihood of success is enhanced. Merely assigning the task to the most technically competent person assures nothing and will most likely not be successful. Technical competence is not enough. When I first began my career in the computing profession, there were only a handful of people in the entire world who knew how to write a program and operate a computer. Today there are literally tens of millions of people who can do that. It seems that just about everyone who has

pushed a mouse around for six months or a year has become a "computer expert". Be assured that they are not, and that there is much happening underneath the operation of a mouse and the applications that can be launched by it. While these "experts" may know how to get the most from any given software application, it is unlikely that they have any idea of what happens when the computer is turned on – the boot process, the loading of the interrupt table, how that interfaces with the BIOS, etc. Most users and many so-called experts do not know what happens in the registry, or what the swap file is used for, or how data is actually physically stored on any given medium. Many do not even know that a file that has been deleted is not removed from the device and remains there unchanged until its space is overwritten (hard drive, floppy, etc.).

Electronic forensics draws on computer science, police science, cryptography, surveillance and common sense. Forensic investigators must have this basic knowledge and much more to effectively capture and investigate any given case. In fact, merely turning a machine on makes changes to its hard drive and, in so doing, may negate the potential evidence that may be found there. Because of these facts, the procedures to be followed when an incident is discovered need to be closely controlled and formally documented. What is being proposed in this discussion is that this activity be recognized in a series of formal policies and formulated in a set of documented procedures. Most organizations do not need a forensics investigator on staff, and that is not what is being recommended. However, there should be someone who has been

minimally trained in the basics of forensics and is responsible for handling any given incident once it has been discovered. This may be as simple as locking down the suspect machines and ensuring that they are not used or tampered with until a trained forensic investigator takes control of the case.

Conclusion
Every organization of reasonable size should address this issue. It doesn't require huge outlays of capital or a large investment in equipment or software (unless the organization plans to set up its own forensic laboratory). Creating a policy after some investigation is, in the scheme of things, very inexpensive and may mean the difference between a successful and an unsuccessful prosecution. Retaining a forensic investigator is also an option, and their services could be called upon when an incident is discovered. Finally, it must be understood that a proper forensics investigation takes time. It cannot be done in a few minutes or a few hours. Time is money and it must be clear that such investigations cost money AND there is no guarantee that relevant evidence will be found.

About the author
Henry B. Wolfe has a computing career spanning more than 45 years. He currently specializes in cryptographic problems as they relate to forensic investigation, general computer security, surveillance and electronic forensics, teaching these topics to law enforcement (both in New Zealand and internationally) as well as at graduate level at the University of Otago in Dunedin, New Zealand, where he is an associate professor.

Contact: Dr. Henry B. Wolfe University of Otago New Zealand Phone: (+64 3) 479-8141 Email: [email protected]

getting the whole picture

A FARES baseline analysis case study Peter Stephenson

After a one-month hiatus we're back to work on FARES, the Forensic Analysis of Risks in Enterprise Systems. We've seen in previous months how the FARES process works and what it comprises. This month we'll wrap up this topic with a case study of a FARES baseline analysis for a mid-sized financial services company. We begin with a very brief review of the FARES process. There is a specific process for Forensic Analysis of Risks in Enterprise Systems. In simple terms it follows this process flow:
• Network analysis and policy domain identification.
• Modeling the domain in the current operating state.
• Identifying vulnerability groups in the domain.
• Identifying threats against the domain.
• Modeling the threats.
• Simulating the domain's state changes when applying threats.
• Assessing impact.
• Applying countermeasures.
• Rerunning the simulations with countermeasures in place.
The process flow listed above is actually iterative, in that it must be repeated for each policy domain and for the interactions between domains. In a complex network this can be quite tedious the first time. However, understanding the network under test and its policy domains, and testing each component and network segment using traditional tools, is far more time consuming and, therefore, more expensive. The FARES approach is risk-centric instead of vulnerability-centric, so it is reasonable to expect a significantly greater return on the investment of time

than one would expect from a vulnerability assessment that needed to be plugged into a risk analysis model. Risk modeling requires attention to threats, vulnerabilities, impacts and countermeasures, as opposed to vulnerabilities only.

The FARES approach is risk-centric instead of vulnerability-centric

The allocation of resources for a FARES analysis yields a risk management plan, while the larger allocation of resources for a vulnerability assessment/penetration test yields a partial (at best) picture of vulnerabilities only.

Background of the case study
In any new process, especially one that uses such non-traditional approaches as are used in FARES, it is important to field test the process before releasing it into the wild. This case study addresses one such field test. Clearly, due to privacy restrictions, we cannot give enough detail to reveal the test subject. However, the detail here is sufficient for understanding how FARES works in practice. The field instantiation of FARES included a number of data gathering exercises required to implement the process described above and in previous columns. Those data gathering tasks included:
• Interviews with key individuals who could help us understand the network architecture, data flows, locations of critical or sensitive data and supporting devices, etc.
• Capture of forensic detail of key, sensitive or critical hosts and servers.
• Vulnerability scans of the network perimeter to determine its footprint facing the Internet.
• Mapping the network.
• Forensic analysis of other important devices selected at random.
• Preparing appropriate regulatory, policy and legal requirements for mapping against the risks identified.

Gather Data → Analyze Data → Apply Safeguards → Reanalyze → Report

Figure 1: High level FARES process.


getting the whole picture

Field processes
The FARES baseline analysis process as applied in the field is a combination of the theoretical processes listed above and the field instantiations of them. The field instantiations are closely coupled with the data gathering tasks. Simply put, the high level FARES process may be illustrated as shown in Figure 1.

Data gathering and modeling

Figure 2 – Policy Domains and their Ratings

The tool set we used consisted of:
• Solar Winds – an SNMP scanner used to map the internal network.
• EnCase Enterprise Edition – forensic tools used to gather host and server forensic data as well as to examine selected PCs and workstations forensically.
• A simple link analyzer – used to identify authorized and covert inter-domain communications channels; any good link analyzer will do.
• Nessus – a freeware vulnerability scanner used to map the perimeter.
• NMap – a freeware port scanner used to map the perimeter.
• CPNTools – a Colored Petri Net modeling and simulation tool used to model and simulate network security behaviour.

We begin by interviewing key individuals and examining network maps. From this set of data (the output of the interviews and the maps) we determine the security policy domains. In the case study those domains are shown in Figure 2. Once we had selected the policy domains, we rated each of them, based upon interviews with key personnel, in terms of sensitivity (how sensitive is the data to disclosure, modification or destruction?), criticality (what would be the impact of destruction or unreliability?) and the FARES Simple Communications Model (FSCM). We'll address the FSCM later. Note that the FSCM rating does not necessarily track the other two ratings. This is because the basis for the FSCM is the relative nature of allowed communications between domains. Thus some domains are more sensitive to allowing external communications than others. This has nothing to do with the sensitivity of the domain. A very sensitive domain might be quite open to appropriate communications and would have an FSCM rating that appeared to be lower than its sensitivity and criticality might imply. Next, we map the obvious communications channels between domains. We also want to include, but identify explicitly, covert communications channels (those that may exist but are not authorized). Figure 3 shows the case study mapping. Thus we see, as an example from Figure 3, that there is an authorized ("O") channel from the Core domain to the Financial Processing domain.


Figure 3 - Inter-Domain Communications Channels

There is also a potential covert channel from the General User domain to the Financial Processing domain. The nature of the covert channel (C5) is a possible compromise of a server using a worm for the purpose of inserting a back door in the server. Note that we have not said that such is the case – only that it could be. This is important because it helps us place countermeasures. One type of countermeasure is the application of threat inhibitors that prevent a threat agent from accessing a targeted vulnerability. We discover additional covert channels by using link analysis to analyze the authorized and potential covert channels. These authorized channels interconnect, and in that interconnection there is the potential for an extended, or covert, communications channel. For example, there is a potential covert channel from the Public domain to the Core domain, created by passing from Public to the DMZ and thence to the Core by compromising a DMZ Web server

connected to a back-end database in the Core.
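The link-analysis step can be illustrated with a short sketch. The channel set below is invented and heavily simplified, and FARES does not prescribe this code; the point is simply that composing the authorized channels exposes indirect paths such as the Public-to-Core route just described.

from collections import deque

# Hypothetical authorized ("O") channels between policy domains.
channels = {
    'Public': ['DMZ'],
    'DMZ': ['Core'],
    'General User': ['Financial Processing'],
    'Core': ['Financial Processing'],
}

def paths(src, dst):
    """Breadth-first search over authorized channels to expose extended paths."""
    queue, found = deque([[src]]), []
    while queue:
        path = queue.popleft()
        for nxt in channels.get(path[-1], []):
            if nxt in path:
                continue                  # avoid cycles
            if nxt == dst:
                found.append(path + [nxt])
            else:
                queue.append(path + [nxt])
    return found

for p in paths('Public', 'Core'):
    print(' -> '.join(p))   # e.g. Public -> DMZ -> Core, a candidate covert channel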

Vulnerabilities and threats
We next identified threat and vulnerability groups implicit in the inter-domain communications channels and mapped them. We applied some basic threat principles; for example, a threat is credible if:
• There is a credible threat agent.
• There is a potential vulnerability that the threat could exploit.
• There is a law, regulation or policy that defines a security control that could be compromised, resulting in an impact to the organization.
• The required combination of threat factors is in place in the context of vulnerability and impact.
Vulnerabilities are collected in two ways. On the DMZ we use a traditional

scanner to identify the vulnerability footprint as seen from the Internet. This is unreliable as to explicit vulnerabilities, since it is impossible to keep any scanner current. However, it does show us a clear picture of how an external attacker will see the Internet-facing portion of the network. We apply three tools here:
• NMap
• Solar Winds
• EnCase Enterprise Edition
NMap gives us a picture of what is present, what ports are open on visible devices and, finally, a fingerprint of the operating system. The second tool we use is Solar Winds. This performs an SNMP scan and attempts to access devices such as routers for the purpose of mapping the network connections in the router. This is a step in mapping the DMZ and its internal and external routes.

The third tool is EnCase Enterprise Edition (EEE). This allows us to take a forensic snapshot of important computers in the DMZ. EEE lets us see an image of the device's disks, the open ports, open files, users, logins, etc. It provides a very clear picture of potential vulnerabilities without identifying them explicitly. This is important because we are able to predict vulnerabilities against which a threat agent might operate without concern for the attack tool or exploit the attacker might use. On the internal network we use only two of these tools: EEE and Solar Winds. Once we have determined where a threat agent might credibly attempt an exploit of some sort, we map the credible threats between policy domains. We take the threats from the Common Criteria Profiling Knowledge Base and apply them against our known vulnerability classes. The objective is to ensure that the inter-domain communications channels between domains cannot be exploited.

Figure 4 - FSCM Colored Petri Net


If they can, we have determined that a target within the destination domain is potentially exploitable, regardless of whether we know the explicit nature of the exploit or not.
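A minimal sketch of this mapping step, with invented threat entries and vulnerability classes standing in for the Common Criteria Profiling Knowledge Base material actually used, might look like this:

# Hypothetical threat and vulnerability-class data, for illustration only.
threats = [
    {'name': 'T.WORM_BACKDOOR', 'needs': 'unpatched_service'},
    {'name': 'T.CREDENTIAL_THEFT', 'needs': 'weak_authentication'},
]
domain_vuln_classes = {
    'DMZ': {'unpatched_service'},
    'Financial Processing': {'weak_authentication'},
    'Core': set(),
}

def credible_threats(domain):
    """Retain a threat only if the domain exposes a matching vulnerability class."""
    return [t['name'] for t in threats if t['needs'] in domain_vuln_classes[domain]]

for d in domain_vuln_classes:
    print(d, credible_threats(d))

Any domain that retains a credible threat is a candidate target, even though the specific exploit that would be used remains unknown.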

Modeling and simulating
We now have enough information to begin modeling and simulating. For the models we use the FARES Simple Communications Model. This model describes, formally, the allowed communications between security policy domains. We simply apply the FSCM ratings for the source and destination domains to the Colored Petri Net model and simulate. The simulation will tell us if the interdomain communications channel conforms to the FSCM. The FSCM, described in words, applies the following constraints to interdomain communications:
• No subject may read an object of higher confidentiality than itself.
• No subject may write to an object of lower confidentiality than itself.
• No subject may write to an object of higher integrity than itself.
• No subject may execute another subject of higher integrity than itself.

Financial services organizations are not particularly risk tolerant

These are traditional constraints used in the Bell-LaPadula confidentiality model and the Biba integrity model. When applied to the FSCM Colored Petri Net (CPNet) model using CPNTools, we see what is demonstrated in Figure 4. We run a set of simulations for all discovered interdomain communications

and note the FSCM violations. It is possible, of course, to develop a more rigorous model than the FSCM; however, for our purposes in the case study we were satisfied with this model. The benefit of using CPNets is that we can readily see where we need to apply countermeasures to make the simulation fail. A failed simulation means that the communication will fail and the channel is secure. Thus, by modeling and simulating all interdomain communications channels and applying appropriate countermeasures or safeguards, we can prevent an attacker from traversing a path from his or her domain to the destination domain that contains the target. We must, of course, apply this simulation in all cases, including internal-to-external, external-to-internal and internal-to-internal paths.
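Although the case study models these rules as a Colored Petri Net in CPNTools, the four FSCM constraints themselves reduce to simple comparisons. The sketch below assumes hypothetical numeric confidentiality and integrity levels per domain, purely for illustration; it is not the CPN model actually used.

# Hypothetical domain ratings: higher numbers mean higher confidentiality/integrity.
LEVELS = {
    'General User': {'conf': 1, 'integ': 1},
    'Core': {'conf': 3, 'integ': 3},
    'Financial Processing': {'conf': 4, 'integ': 4},
}

def fscm_allows(subject, obj, operation):
    """Apply the four FSCM constraints to a proposed inter-domain operation."""
    s, o = LEVELS[subject], LEVELS[obj]
    if operation == 'read':
        return o['conf'] <= s['conf']        # no reading up (confidentiality)
    if operation == 'write':
        # no writing down (confidentiality) and no writing up (integrity)
        return s['conf'] <= o['conf'] and o['integ'] <= s['integ']
    if operation == 'execute':
        return o['integ'] <= s['integ']      # no executing a higher-integrity subject
    raise ValueError(operation)

print(fscm_allows('General User', 'Financial Processing', 'read'))   # False: read up is denied
print(fscm_allows('Core', 'General User', 'read'))                   # True: reading down is permitted

In the CPN version the same checks are expressed as transition guards, so a channel that violates a constraint simply cannot fire, which is what makes a failed simulation equivalent to a secure channel.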

Final analysis
The last step is to interpret the collected data and models at two levels. The first is quite granular. In this step we analyzed the individual threats, vulnerabilities and anticipated impacts against a set of regulatory requirements peculiar to the financial services community. Additionally at this point we determined whether the nature and cost of possible countermeasures was consistent with the impacts of the granular analysis. This is a qualitative determination and deals with the organization's tolerance for risk. Financial services organizations are not particularly risk tolerant, especially where possible violations of regulatory requirements are concerned. Thus, it is fairly straightforward to determine a relative impact and the advisability of a countermeasure. A major advantage of the FARES approach is that large, expensive, global countermeasures are often not required to manage a significant impact. By analyzing interdomain communications channels carefully we find that it is relatively easy and inexpensive to apply defense in depth at various points in the channel. This is the value of the granular analysis.

The second level of analysis is the risk level. This requires that we combine threats, vulnerabilities and impacts into risks and apply countermeasures as necessary. One of the advantages of this is that while we may have dozens of threats, vulnerabilities and impacts, we will likely have only a few real risks. Viewing an information system in the context of risks instead of vulnerabilities provides a far clearer picture of where the organization should spend limited resources. However, traditional risk analysis uses metrics that are, essentially, unreliable because in information systems they are unquantifiable. Older risk assessment environments (as opposed to the relatively immature information technology environment) use historical data to establish an actuarial process that is reliable in that threats, vulnerabilities and impacts are reliably quantifiable for the purpose of predicting loss. In the information technology world we don’t have enough reliable data to do that. Thus, in the FARES approach, we stick closer to the qualitative analysis than to the quantitative analysis of risk.
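A qualitative treatment of this kind can be as simple as a lookup table that combines likelihood and impact into a risk level. The labels below are invented for illustration and are not part of FARES.

# Illustrative qualitative risk matrix; real programmes tune the labels and bands.
RISK_MATRIX = {
    ('low', 'low'): 'low', ('low', 'high'): 'medium',
    ('high', 'low'): 'medium', ('high', 'high'): 'high',
}

def qualitative_risk(threat_likelihood, impact):
    """Combine a threat/vulnerability likelihood with impact into a risk level."""
    return RISK_MATRIX[(threat_likelihood, impact)]

print(qualitative_risk('high', 'high'))   # 'high' - a candidate for countermeasures

The point is not numerical precision, which the available data cannot support, but a consistent way of ranking where limited resources should go.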

Managing risk using FARES
Using FARES for risk management is far less expensive and difficult than other methods. Relative to this case history, we are in the process of revising the enterprise models we created above because a major change is taking place within the enterprise. By changing the models and resimulating we get a new picture of risks. This is credible because the threats are still there and the impacts are still potentially of concern; however, the changes to the enterprise may have introduced new vulnerabilities. By revisiting the models, revised now for the new enterprise, we can reapply the threats (as well as any new ones that have emerged for whatever reason) to the new models, simulate and gauge the results. If it is necessary and prudent to insert new safeguards, we can also see their impact upon the risk.

Advantages of FARES
• Less resource intensive than a vulnerability assessment, but with a greater return on the investment of time.
• Makes it easy to apply defense in depth at various points in the channel.
• Large, expensive, global countermeasures are not required to manage a significant impact.
• Puts risks in the context of the organization's level of risk tolerance.
• Is capable of identifying the more important risks.

Conclusion
While we cannot, of course, discuss the findings of this case study in detail, we did find, in comparing our findings with those from an earlier traditional vulnerability assessment, that the FARES baseline process was capable of identifying more important risks, putting them in the context of the organization's level of risk tolerance, and addressing countermeasures by using a defense-in-depth strategy instead of a brute force approach. Next month we will begin a new set of discussions on an area of digital investigation and digital forensics.

About the author
Peter Stephenson currently holds the position of Head of Information Assurance, Center for Regional and National Security, at Eastern Michigan University. Prior to this he worked at QinetiQ Trusted Information Management, where he was promoted to director of research and, subsequently, to chief technology officer for US operations. He holds a BSEE and is currently a PhD candidate (degree expected January 2004) at Oxford Brookes University in Oxford, UK, where his research involves the structured investigation of information security incidents in complex computing environments.

events

The Prudential gets smart with spam
The Prudential, a UK-based financial company, has installed a spam intelligence service from Tumbleweed, which clamps down on the number of emails being blocked accidentally by spam filters. Out of the 40,000 emails received by Prudential every day, 14,500 are now blocked as spam by filtering software. Prudential has opted for the Dynamic Anti-spam Service (DAS), an

Internet-based subscription service, which analyses spam and legitimate emails from around the world to help categorise what is and isn't spam. "Since DAS was installed, we see a threefold increase in blocked spam messages," said Nick De Silva, Web hosting and Messaging Manager, Prutech. "Before, we used Tumbleweed MMS lexical scanning (using a manually-

updated word list) to detect spam," he said. Prudential chose Tumbleweed DAS for vendor reputation, high integration with existing Tumbleweed MMS and low admin overhead, said De Silva. Tumbleweed's Dynamic Anti-spam Service allows customers to download new anti-spam heuristics that are published up to six times per day. Prudential is a leading life and pensions provider in the UK with 7,400 employees.

Events Calendar

NETSEC 2004 14-16 June 2004 Location: San Francisco, US Website: www.cmpevents.com

ISACA INTERNATIONAL CONFERENCE 2004 27-30 June 2004 Location: Cambridge, Massachusetts, US Website: www.isaca.org

BLACK HAT BRIEFINGS & TRAINING USA 26-29 July 2004 Location: Las Vegas, US Website: www.blackhat.com

NATIONAL WHITE COLLAR CRIME CENTER ECONOMIC CRIME SUMMIT 17-18 August 2004 Location: Dallas, Texas, US Website: www.summit.nw3c.org/dallas/index.html

19TH IFIP INTERNATIONAL INFORMATION SECURITY CONFERENCE 23-26 August 2004 Location: Toulouse, France Website: www.sec2004.org

INFOSECURITY RUSSIA 21-23 September 2004 Location: Moscow, Russia Website: www.infosecuritymoscow.com

COMPSEC 2004 - BUILDING BUSINESS SECURITY 13-14 October 2004 Location: London, UK Website: www.compsec2004.com Contact: Conference secretariat: Lyn Aitken Tel: +44 (0)1367 718500 Fax: +44 (0)1367 718300 Email: [email protected]

RSA EUROPE 3-5 November 2004 Location: Barcelona, Spain Website: www.rsaconference.com

CSI ANNUAL COMPUTER SECURITY CONFERENCE 7-9 November 2004 Location: Washington, US Website: www.gocsi.com/annual/ Email: [email protected]

INFOSECURITY FRANCE 24-25 November 2004 Location: Paris, France Website: www.infosecurity.com.fr