zondag 28 september 2008

SAS70

AZL Services is now fully SAS-certified as well!

On 31 January of this year, AZL Services received the SAS 70 Type II statement, following AZL Vermogensbeheer. AZL's pension and asset management operations thereby meet the highest international standards for process and risk control. This is an important step in a market where transparency and trust in the service relationship are of great importance.

What does SAS mean?
SAS 70 (Statement on Auditing Standards No. 70: Service Organizations) was developed by the American Institute of Certified Public Accountants (AICPA), the American professional body of accountants. It is an internationally recognized auditing standard and is increasingly regarded as a mark of quality. The SAS 70 report describes how a service organization's internal controls are designed, and how audits are performed to assess the adequacy of those controls and to establish that they actually operate.

Two types of SAS 70 statements: Type I and Type II
The SAS 70 Type I statement describes the service organization's internal controls. An external auditor verifies that the controls actually exist and are complied with, in order to safeguard the quality of the services. The SAS 70 Type II statement gives an opinion on the effective and sustained operation of the internal control measures. For a period of at least six months, it is checked whether all processes and controls are carried out as prescribed. This covers not only the administrative organization; the supporting automated systems are an important part of the review as well. When a service provider holds a SAS 70 statement, the client has the best conceivable assurance that the service is of the highest quality and offers the required transparency. AZL recognized this early on and started, on its own initiative, preparing SAS statements for its clients in the pension sector. De Nederlandsche Bank (DNB) has since linked its outsourcing guidelines to SAS 70 statements.

Asset management and pensions
Because the activities of asset management and pension administration differ greatly in nature, two separate SAS statements were prepared. The process and risk controls of AZL Vermogensbeheer were rigorously tested in 2005, which resulted in the SAS statements. Now Pensions (AZL Services) has received the SAS 70 Type II statement as well, on 31 January of this year.

A SAS statement goes further than a Service Level Agreement: it makes the quality of the outsourced processes transparent, so that pension funds can trust that the service organization properly controls and monitors the outsourced activities.

A SAS 70 report is also far more transparent than an ISO certification, because it comprises an extensive report that includes the external auditor's findings.

Lower audit costs for pension funds?
A SAS 70 report can also be extremely useful to the external auditor of the outsourcing organization when auditing the annual accounts. In the report, AZL describes the processes that ultimately lead to a given balance sheet item, the associated risks, and the corresponding control measures. A general review of these processes can then be omitted from the audit of the pension fund's annual accounts, because the auditor can rely on the files prepared for SAS purposes and already reviewed by AZL's external auditor. The pension fund's auditor is thus able to assess the files and controls more easily and use them in the audit of the annual accounts. AZL advises its pension fund clients to raise this point in negotiations with their external auditors, so that the audit can be completed faster and at lower cost.
26-3-2007


maandag 22 september 2008

On the tracks of medical data: Electronic records pressure

Privacy breaches related to electronic medical records seem to appear in the news regularly. The Walter Reed Army Medical Center started to notify 1,000 patients of a privacy breach in June. A few days earlier, the University of California San Francisco (UCSF) disclosed that it had to notify more than 3,000 patients of a privacy breach in the Department of Pathology.

These news stories bring to mind a Markle Foundation study released at the end of 2007. Though a majority of Americans believed electronic data can improve care, 80 percent were very concerned about the risk of access without their authorization, including access related to marketing, identity theft and fraud. The Association of American Physicians and Surgeons published a similar study earlier in 2007 that found 70 percent of patients asked doctors to suppress information due to privacy concerns, and 50 percent believed control of their records was already lost.

The pressure to move records to an electronic format is stronger than ever, but at the same time the risk and awareness of privacy violations are on the rise. In tandem with the shift in opinion are more frequent HIPAA prosecutions by the Department of Justice. The FBI, for example, published a clear warning on April 15, 2008, after a nurse pleaded guilty to unlawful access to patient information:

“What every HIPAA-covered entity needs to realize and reinforce to its employees is that the privacy provisions of HIPAA are serious and have significant consequences if they are violated,” (Jane W.) Duke (United States Attorney for the Eastern District of Arkansas) stated. […] “We are committed to providing real meaning to HIPAA. We intend to accomplish this through vigorous enforcement of HIPAA's right-to-privacy protections and swift prosecution of those who violate HIPAA for economic or personal gain or malicious harm.”

The rapid adoption and evolution of computer-based processes across dispersed and complex health care environments made the confidentiality, integrity and availability of patient data issues of public policy. Privacy of information in health care is a long-standing tenet of medical ethics, of course, but the move to portable electronic formats brought forward a new generation of security and interoperability challenges.

Log management for HIPAA
Patients, as well as practitioners, want to view and modify medical records more freely, but with electronic records, it is not immediately clear who else might be looking. Preserving established levels of privacy depends on efforts to better report who should or should not, and who did and did not, have access to electronic patient medical records. In other words, effective logging practices can meet the need for electronic personal health information (EPHI) protection and access management. The U.S. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-66 includes the following question about HIPAA integrity (§164.312(c)(1)): “Are current audit, logging, and access control techniques sufficient to address the integrity of the information?”

Effective log management for HIPAA can be described in five steps. First, it should automate the collection and consolidation of log data. Second, it should automate analysis of the data and generate reports related to EPHI control and access. These two automation goals will save considerable time for operations alone and add considerable value to a security team. With data being collected consistently and analyzed regularly, log management should then enable better event management, such as monitoring for unauthorized software, login attempts or other suspicious behavior and discrepancies. Finally, log management should be used to identify and respond to incidents.

A covered entity that intends to maintain the integrity of its EPHI must maintain sufficient security controls to know what happens to EPHI, when it happens, and who (or what) acted on it. Capturing this information is easy: pull together the logs of all the systems that touch EPHI. Unfortunately, this log data usually comes from many large and growing sources that have inconsistent content, time stamps, and formats. Covered entities therefore need top-level commitment to a log management solution if they are to effectively store patient data and review its access, change and movement patterns.

The success of log management depends on several factors:
Senior management support
Clear statements of objectives and procedures
Security training for log administrators and users

Log management requires not only a reasonable amount of data to be collected (which can be archived for future review) but also the ability to detect in real time symptoms of abuse or violations. If Walter Reed and UCSF had had effective log management systems, their IT departments could have immediately alerted management to the presence of file-sharing software or of file traffic going across the network to unauthorized external locations. This is an example of how the use of logs would help satisfy HIPAA's “Protection from Malicious Software,” §164.308(a)(5)(ii)(B). Even if logs revealed file-sharing software in a system not known to keep EPHI, a management system could pinpoint systems with malicious software and thus lead to a more timely investigation. This use of logs would help satisfy “Security Incident Procedures,” §164.308(a)(6)(ii).

The electronic format of data is a double-edged sword. It brings easier access, which makes it popular, but new barriers and controls against attack need to be evolved for the electronic medium. The shift to computers has dramatically increased the need for better information security and access controls, such as robust log management, to preserve even the status quo for privacy in health care.
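As an illustration of the automated collection-and-analysis steps described above, here is a minimal Python sketch that scans a consolidated log file and flags accounts with repeated failed logins. It is only a sketch: the file path, the OpenSSH-style message pattern, and the alert threshold are illustrative assumptions, not values prescribed by HIPAA or NIST SP 800-66.

    import re
    from collections import Counter

    # Matches OpenSSH-style failure lines such as:
    #   "Failed password for invalid user bob from 10.0.0.5 port 4242 ssh2"
    FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+)")

    def flag_suspicious_logins(log_path, threshold=5):
        """Return accounts whose failed-login count meets or exceeds the threshold."""
        failures = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = FAILED_LOGIN.search(line)
                if match:
                    failures[match.group(1)] += 1
        return {user: count for user, count in failures.items() if count >= threshold}

    if __name__ == "__main__":
        for user, count in flag_suspicious_logins("/var/log/auth.log").items():
            print("ALERT: %d failed logins for account %s" % (count, user))

In a real deployment this logic would run continuously against the consolidated log store and feed an alerting pipeline rather than print to a console, but the division of labor is the same: collect once, analyze automatically, alert on discrepancies.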


Feds Tighten Security on .gov

When you file your taxes online, you want to be sure that the Web site you visit -- http://www.irs.gov/ -- is operated by the Internal Revenue Service and not a scam artist. By the end of next year, you can be confident that every U.S. government Web page is being served up by the appropriate agency.
That’s because the feds have launched the largest-ever rollout of a new authentication mechanism for the Internet’s DNS. All federal agencies are deploying DNS Security Extensions (DNSSEC) on the .gov top-level domain, and some expect that once that rollout is complete, banks and other businesses might be encouraged to follow suit for their sites.

DNSSEC prevents hackers from hijacking Web traffic and redirecting it to bogus sites. The Internet standard prevents spoofing attacks by allowing Web sites to verify their domain names and corresponding IP addresses using digital signatures and public-key encryption.
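To make that verification step concrete, the following minimal sketch checks a zone's DNSKEY self-signature using the dnspython library. The library choice, zone name and resolver address are illustrative assumptions; any validating resolver performs the equivalent check internally.

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdatatype

    def dnskey_self_signature_valid(zone, nameserver):
        """Fetch a zone's DNSKEY RRset and verify the RRSIG made with its own keys."""
        name = dns.name.from_text(zone)
        # want_dnssec sets the DNSSEC-OK flag so the server returns RRSIG records.
        request = dns.message.make_query(name, dns.rdatatype.DNSKEY, want_dnssec=True)
        # Large DNSKEY answers may need TCP in practice; UDP keeps the sketch short.
        response = dns.query.udp(request, nameserver, timeout=5)
        dnskeys = next(rs for rs in response.answer if rs.rdtype == dns.rdatatype.DNSKEY)
        rrsigs = next(rs for rs in response.answer if rs.rdtype == dns.rdatatype.RRSIG)
        try:
            dns.dnssec.validate(dnskeys, rrsigs, {name: dnskeys})
            return True
        except dns.dnssec.ValidationFailure:
            return False

    # Sweden's .se, signed since 2005 (see below), is a convenient test zone.
    print(dnskey_self_signature_valid("se.", "8.8.8.8"))

A full validator goes further, chasing DS records up the delegation chain until it reaches a key it already trusts; that chain is why signing the root zone, discussed below, matters so much.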
With DNSSEC deployed, federal Web sites “are less prone to be hacked into, and it means they can offer their services with greater assurances to the public,’’ says Leslie Daigle, Chief Internet Technology Officer for the Internet Society. "DNSSEC means more confidence in government online services.’’

The U.S. government’s DNSSEC mandate is "significant,’’ says Olaf Kolkman, a DNSSEC expert and director of NLnet Labs, a nonprofit R&D foundation in the Netherlands. "First, the tool developers will jump in because there is the U.S. government as a market…. Second, there is suddenly a significant infrastructure to validate against.’’

The White House DNSSEC mandate comes just weeks after the July disclosure of one of the most serious DNS bugs ever found. The Kaminsky bug -- named after security researcher Dan Kaminsky, who discovered it -- allows for cache poisoning attacks, in which a hacker redirects traffic from a legitimate Web site to a fake one without the user knowing.
White House officials said their DNSSEC mandate has been in the works since February 2003, when the Bush Administration released its National Strategy to Secure Cyberspace. The cybersecurity strategy, which was prompted by the Sept. 11, 2001, terrorist attacks, included the goal of securing the DNS.
Under a separate, but related, cybersecurity program called the Trusted Internet Connection initiative, the U.S. government is reducing the number of external Internet connections it operates from more than 8,000 to fewer than 100.

The DNSSEC mandate "was issued as a consequence of agencies having completed the initial consolidation of external network connectivity [through the Trusted Internet Connection initiative],’’ said Karen Evans, administrator for the Office of E-Government and Information Technology at the Office of Management and Budget (OMB), in a statement. "The Kaminsky DNS bug was not a factor.’’

DNS hardware and software vendors that are scrambling to add DNSSEC capabilities to their products predict the one-two punch of the Kaminsky bug followed by the White House mandate will drive DNSSEC deployment across the Internet.
"The timing couldn’t be better right now, with Dan Kaminsky’s vulnerability and the huge spotlight that focused on DNS security,’’ says Mark Beckett, vice president of marketing for Secure64, a DNS vendor that began shipping an automated system for deploying DNSSEC in September. "Even though we have a patch out there for the Kaminsky bug…the only long-term solution to this problem is DNSSEC.’’
The OMB mandate is "significant, but it’s the tip of the iceberg,’’ says Rodney Joffe, senior vice president and senior technologist for NeuStar, which sells the UltraDNS managed services suite and operates several top-level domains (TLDs) including .us and .biz. "All the other TLDs are now scrambling to work on DNSSEC. It’s a sea change. There is no question that 2009 will be the year of DNSSEC.’’

The OMB issued a mandate in August that requires all federal agencies to support DNSSEC.
The memo states that .gov must be cryptographically signed at the top level by January 2009, and that all subdomains under .gov, such as www.irs.gov, must be signed by December 2009.
While the memo focuses on the .gov domain, the U.S. Defense Information Systems Agency says it intends to meet OMB's DNSSEC requirements on the .mil domain, too.
OMB is working with agencies to finalize their plans for deploying DNSSEC on their domains and subdomains, and these plans are expected to be finalized by mid-October.
"The federal government has been working with many organizations regarding DNSSEC and is preparing for deployment,’’ Evans says. "One of the resources available, the Secure Naming Infrastructure Pilot (SNIP), is a testbed available to all government agencies so they can test their DNSSEC operations prior to deployment.’’

To meet the mandate, federal agencies must upgrade their DNS servers to support the new protocol, buy network management tools to support DNSSEC, and provide training to their network management staff.

"The real impact is that you are changing the way the DNS is managed within the .gov domain,’’ says Scott Rose, a computer science with the National Institute for Standards and Technology (NIST) Information Technology Laboratory. "The largest cost in DNSSEC deployment is setting up procedures and software for key management.’’Agencies will pay for DNSSEC out of their existing IT infrastructure budgets, Evans says.
"People who want to enable their domain names, say those of their Web sites, to be validated with DNSSEC have to do some investing. They have to update their infrastructure, and they have to go through a learning curve,’’ says Kolkman, who called the OMB deadline of December 2009 “ambitious.’’
"We think it’s doable,’’ Rose says of the .gov DNSSEC deadline. "We think it sends a strong signal that the U.S. government is committed to DNSSEC and to improving Internet security within the .gov domain.’’
By rolling out DNSSEC on .gov, the federal government is doing what it can to improve the security of the communications it has over the Internet with citizens and contractors.
The U.S. government is "standing up and saying that for all the right reasons, DNSSEC is the path to pursue,’’ Daigle says. "It’s a good move because it’s proactive. They’re trying to address the security of their DNS resources before there is the kind of critical security disaster that many people have posited is needed before DNSSEC would be deployed.’’
Experts say the OMB mandate may encourage ISPs to support DNSSEC, too, as their customers are heavy users of .gov Web sites.
"By the end of the year, a large number of ISPs will all have DNSSEC deployed,’’ Joffe predicts. "There will no longer be an excuse for ISPs not to implement DNSSEC knowing they have customers that go to government Web sites.’’
The U.S. federal government will be among the first organizations in the world to deploy security enhancements to the top-level domain it operates, which is .gov.
Countries that have deployed DNSSEC in their top-level domains include Sweden, Puerto Rico, Bulgaria and Brazil.

DNS vendors hope the federal DNSSEC mandate will lead to broader adoption of the standard across the Internet.
"We’ve seen a fair amount of interest in DNSSEC outside the U.S….but we haven’t had a whole lot of momentum inside the U.S.,’’ says Cricket Liu, vice president of architecture at InfoBlox. "My hope is that this is the beginning of getting the ball rolling in the U.S.’’
What about the root and .com?
While significant, the OMB mandate is missing a few key components that are necessary to drive DNSSEC deployment across the Internet.
First, the OMB memo says nothing about when the Internet’s root servers will support DNSSEC.
Second, the memo doesn’t address whether the U.S. government will require VeriSign, which operates the popular .com and .net top-level domains, to support DNSSEC.
The National Telecommunications and Information Administration (NTIA), the arm of the U.S. government that oversees the Internet’s DNS infrastructure, has not set a deadline for DNSSEC deployment for the root servers, .com or .net.
"NTIA recognizes the potential benefits of a DNSSEC signed root zone file and is actively examining various implementation models in coordination with other U.S. government agencies as well as all the other relevant stakeholders, including [The Internet Corporation for Assigned Names and Numbers] and VeriSign, with whom the Department has existing relevant legal relationships,’’ according to an NTIA statement.
NTIA’s statement said the agency will not take any action that would affect the operational stability or efficiency of the DNS.
"A DNSSEC signed root zone would represent one of the most significant changes to the DNS infrastructure since it was created; therefore any changes cannot be taken lightly considering that the Internet DNS is a global infrastructure on which the global economy relies,’’ the statement said.
VeriSign has been running DNSSEC pilot projects for several years, and it offers free DNSSEC tools on its Web site for developers.
VeriSign operates two of the Internet’s 13 root servers. In March 2008, VeriSign created a DNSSEC testbed for all the root zone operators to use.
"The testbed is going well,’’ says Ken Silva, CTO for VeriSign. "We’ve gathered a lot of data ….This is all part of the process to be ready if and when the full Internet is ready to deploy DNSSEC.’’

VeriSign hasn’t committed to supporting DNSSEC in .com and .net. As of June 2008, .com and .net supported 87.3 million domain names, a figure that is up 20% from the previous year, according to VeriSign.

Silva says .com and .net will not be upgraded with DNSSEC until after the root.
"This is not something that is going to happen overnight,’’ says Silva, who predicts it will be another three years until the root servers support DNSSEC. "For full DNSSEC deployment Internet-wide, you could be talking decades.’’
Experts say full-scale deployment of DNSSEC won’t happen until the root, .com and .net are authenticated with digital signatures.
"Having the root signed is fairly important,’’ Kolkman says. "Obviously, .com is the 300-pound gorilla in the room. If .com were signed, that would pull a lot of people into DNSSEC, but having the root signed gives a more global signal.’’
Chicken-and-egg dilemma
Internet engineers developed DNSSEC in 1997, but the technology hasn’t been widely deployed because it suffers from the classic chicken-and-egg dilemma.
DNSSEC doesn’t protect against spoofing attacks unless it’s widely deployed across the Internet’s DNS infrastructure. Web site operators don’t benefit much from DNSSEC unless it’s deployed at the top-level domain. The top-level domains haven’t supported DNSSEC because there hasn’t been demand from Web site operators.
With the OMB mandate, it appears the egg is cracking. Other top-level domains interested in rolling out DNSSEC include the Public Interest Registry’s .org (http://blog.internetgovernance.org/blog/_archives/2008/4/25/3659794.html) and Poland’s country code, .pl.
One reason DNSSEC has been slow to catch on is that it is difficult to deploy. Network managers will need tools that help them generate and store cryptographic keys in a secure manner, plus they will have to update those keys on a regular basis in order to support DNSSEC.
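As a small example of that tooling burden, the sketch below (again assuming the dnspython library; the zone and resolver are placeholders) reports when the signatures covering a zone's SOA record expire, the kind of check operators schedule so that keys are rolled and records re-signed in time.

    import datetime
    import dns.message
    import dns.query
    import dns.rdatatype

    def soa_rrsig_expiration(zone, nameserver):
        """Return the earliest expiration of the RRSIGs covering the zone's SOA."""
        request = dns.message.make_query(zone, dns.rdatatype.SOA, want_dnssec=True)
        response = dns.query.udp(request, nameserver, timeout=5)
        for rrset in response.answer:
            if rrset.rdtype == dns.rdatatype.RRSIG:
                # RRSIG expiration fields are POSIX timestamps (seconds since epoch).
                earliest = min(sig.expiration for sig in rrset)
                return datetime.datetime.fromtimestamp(earliest, tz=datetime.timezone.utc)
        raise ValueError("no RRSIG returned for %s; is the zone signed?" % zone)

    expires = soa_rrsig_expiration("se.", "8.8.8.8")
    print("SOA signature expires at", expires.isoformat())

A signature that is allowed to lapse makes the zone fail validation outright, which is why key and signature lifecycle management, not cryptography itself, dominates DNSSEC operating costs.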
"It has been a complicated and time-consuming exercise for people to deploy DNSSEC,’’ Beckett says. That’s one reason Secure64 received a $1 million grant from the Department of Homeland Security earlier this year to develop an automated DNSSEC signing solution that became the just-released Secure64 DNS Signer product.
"DHS wanted to prime the pump to get commercial products out there to remove that complexity and to make it possibility to deploy DNSSEC in a matter of days or weeks, rather than the months and months it might take them today,’’ Beckett adds.

OMB says enough commercial DNSSEC products are available to warrant deployment across .gov.
"The U.S. enjoys a robust and dynamic commercial marketplace that will continue to meet our needs,’’ Evans says. "The Department of Homeland Security Science and Technology Directorate has been leading the research and development associated with this initiative. The National Institute for Standards and Technology is responsible for developing DNSSEC standards, and the General Services Administration is ensuring service-based solutions are available.’’
DNSSEC experts are encouraging corporate network managers to view the federal mandate as a sign that DNSSEC is real.
"What I think you should take away from this as corporate IT managers is that DNSSEC is coming. DNNSSEC is real, and it’s out of the experimental stage,’’ Daigle says. "It’s OK to buy products and equipment to support it.’’
Network managers also should take a good look at DNSSEC because of the Kaminsky bug, experts say. This is especially true of industries such as banking and e-commerce that battle phishing attacks.
The Kaminsky bug "is a verifiable and credible business case for actually deploying DNSSEC, not just in the government but in private industry,’’ Joffe says. "The only solution we know of that is 100% correct in solving the problem of DNS cache poisoning is DNSSEC.’’

zaterdag 20 september 2008

New PCI Security Standards: Lock It Down, Lock It Tight

By Jack M. Germain, E-Commerce Times, 09/02/08 4:00 AM PT

New PCI regulations are just around the corner, and retailers dealing with credit cards will need to tighten up their standards in order to comply. For instance, your firewall performance will be reviewed more often, and you'll have to use anti-virus protection even on non-Windows platforms. Also, if you're still using WEP encryption, better get ready to chuck that and move to something better ASAP.

The Payment Card Industry (PCI) regulation changes that take effect Oct. 1 will mean some additional work for IT departments -- and some new spending.
But the PCI Data Security Standard (DSS) version 1.2 will allow the Payment Card Industry a phase-in period to meet the new rules, according to two security firms that provide compliance tools.
The PCI Data Security Standard, first adopted by the PCI Security Standards Council in 2005, contains 12 rules with several sub-sections. The council amended some of those regulations with Version 1.1 in September 2006. PCI DSS is a set of comprehensive requirements for enhancing payment account data security.
The standards were developed by the founding payment brands of the PCI Security Standards Council, including American Express (NYSE: AXP), Discover Financial Services, JCB International, MasterCard Worldwide and Visa International, to help facilitate the broad adoption of consistent data security measures on a global basis.
In version 1.2, "there are two dozen small changes, some with fairly significant implications," Mike Loyd, chief scientist for RedSeal Systems, told the E-Commerce Times.

Mostly Clarifies
The primary purpose of Version 1.2 is to clarify the standards introduced with the previous release. These clarifications remove vague configuration requirements and specify exact time frames.
"Often there is a discrepancy between what you have to do and what you should do. Now the new regulations try to bring those two factors closer together to true best practices," said Loyd.
All 12 rules have clarifications, but only a few of them are real changes, according to Loyd. The most significant changes call for more auditing of the network infrastructure and how security patches are handled.
"The new version is making me trust the PCI standards more. It started as an actual deployment created by the industry for the industry. It is now very straightforward," said Amichia Shulman, CTO of Imperva.
A Growth Process
The latest rules show a good evolutionary process, noted Shulman. Others involved in providing compliance tools to vendors agree.
"The old ways didn't take into effect the priority of security. Now PCI is saying that we can take in other properties," Tom Rabaut, director of product management for RedSeal Systems, told the E-Commerce Times.
The new rules show that the PCI Council really wants to become more than a watchdog. It is becoming one of the top three motivators for compliance, he added.
"It's not just a document on a slide that nobody pays attention to," Loyd noted.
Most Significant
Version 1.2 will require that networks follow firewall rules on perimeter routers. Firewall performance will now be reviewed every six months rather than quarterly, Shulman said.
Two other changes involve security settings and encryption levels. IT cannot use vendor-supplied defaults for passwords and other security parameters. Also, WEP (Wired Equivalent Privacy) will no longer be allowed. IT must configure a stronger encryption vehicle no later than March 1.
Antivirus treatment takes on a more demanding role under the Version 1.2 regulations. Networks trafficking in cardholder information must be protected by an antivirus system regardless of the operating system used.
"Until now, antivirus was only required for Windows," Shulman said. "Now the network protections must address all known types of malware."
This change reflects a shift in computing accesses, he explained. Until about two years ago, few -- if any -- antivirus options were available for other platforms.
Checking Risks
Another key rule change focuses on system security. The existing rules require IT to apply all security patches to operating systems and application software.
The new rules allow IT to perform a risk assessment of the patches before blindly applying them. This will allow IT to determine the relative stability of the patch before it causes other problems.
"This mitigates the risk of faulty patches," Shulman said.
Change Highlights
Other tweaks in the PCI DSS require companies that hold and work with card payment data to apply specific new security and access procedures to their networks. For instance, each worker who has access to the computer system must have a unique ID. Also, the company must test stored passwords to ensure that they are unreadable.
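The standard's exact wording is not quoted here, but one common way to render stored passwords unreadable is salted, iterated hashing. Here is a minimal sketch using Python's standard library; the iteration count is an illustrative work factor, not a PCI-mandated value.

    import hashlib
    import hmac
    import os

    ITERATIONS = 100_000  # illustrative work factor

    def hash_password(password):
        """Return a random salt and the salted PBKDF2-SHA256 digest of the password."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, expected):
        """Recompute the digest and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, expected)

    salt, digest = hash_password("s3cret!")
    assert verify_password("s3cret!", salt, digest)

Only the salt and digest are stored, never the password itself, so even someone who can read the credential table cannot recover the original value directly.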
Additional security will kick in regarding restricting access to cardholder information. This will be accomplished by better tracking and monitoring of all access to network resources and cardholder data.
"Now it will not be enough to produce an audit trail. The audit must be copied to an internal log server and must be immediately available for analysis," Shulman said.
Companies will also be required to visit the off-site facilities storing their sensitive cardholder data at least annually.
Don't Panic
Implementation strategies for version 1.2 rules will closely resemble those for version 1.1, Shulman noted.
"There are not a lot of changes, so don't panic. Wireless networks will need changes, but IT will have a reasonable amount of time to comply. Deploying antivirus across all platforms will be a problem for some," he emphasized.
The two most troublesome areas for many companies having to meet the new PCI standards will be in the areas of wireless and network encryption, he said.
Failure Not an Option
Starting Oct. 1, the new assessment standards must be used in measuring a company's PCI compliance, Rabaut warned. Vendors with a lower priority rating for the type of customer data they handle will only need to have a security scan completed by a licensed company, such as VeriSign, and complete a questionnaire for self-assessment, he said.
"About 80 percent of merchants are higher priority," Loyd added.
Depending on the type of compliance failure, fines could range from US$1,000 to tens of thousands, he said.
"If a data breach occurs, the severity of the fines can be much worse. The credit card companies could stop the offending company's processing rights. It depends on the weight of the vendor," Loyd said.

zondag 7 september 2008

Latest 'lost' laptop holds treasure-trove of unencrypted AT&T payroll data

Submitted by Paul McNamara on Thu, 06/05/2008 - 6:42am.
It's just another in a long line of stolen laptops ... unless you work in management at AT&T and you're worried about your Social Security number falling into the hands of identity thieves. Or you're worried that your coworkers might find out how much -- or how little -- you actually earn.
While AT&T has declined to disclose the number of management employees put at risk by the May 15 theft from an employee's car, one manager who is among them tells me he knows of others located throughout every corner of AT&T's vast empire in the U.S. "I have found one individual who was not impacted," says the manager, who asked not to be named. "This is probably big, but not everyone."
"I'm very disappointed in my company," he adds. "Eight days passed before we were notified ... and it took up to another 10 days to be informed about requesting a fraud alert and to be given instructions for signing up for credit watch."
I've asked AT&T for comment. At the end of this post is a long excerpt from a Q&A the company provided to employees, who learned of the breach via an e-mail, which reads in part:
"This is to alert you to the recent theft of an AT&T employee's laptop computer that contained AT&T management compensation information, including employee names, Social Security numbers, and, in most cases, salary and bonus information. ... We deeply regret this incident. You will soon hear about additional steps we're taking to reinforce our policies to safeguard sensitive personal information and ensure strict compliance in order to avoid incidents like this in the future."
Regrets were not enough to allay the anger of this manager.
"It is pathetic that the largest telecom company in the world -- with more than 100 million customers -- doesn't encrypt basic personal information," he says.
Failure to encrypt and otherwise better protect such data is inexcusable at this point in time, agrees Kelly Todd, a staff member at attrition.org, a security site that maintains a database of data-breach incidents.
"Lack of encryption of personal data is generally troubling, especially when the data is being stored on any mobile device with a 'steal me' bulls-eye on it," says Todd. "According to part of the AT&T e-mail, 'It was not encrypted, but the laptop was password protected. AT&T is currently in the process of encrypting such systems.' Good for them, but larger companies can sometimes have tens of thousands of systems to identify, plan for, and then execute an encryption process. It seems to me that they should have been 'in the process' a year ago.
"Even more troubling is that AT&T mentions that the laptop was password protected in their letter," he adds. "It might make some people feel better, but just password protection alone is generally considered a security joke."
The AT&T manager whose data was exposed sees an even larger issue in play here.
"I receive company internal e-mails reminding me to contact our legislators about relieving the company of the burdens of regulation," he says. "What happened here shows the company isn't ready to have those burdens lifted."

AT&T Security Manager (092008)

AT&T security guru talks DoS attacks, tomorrow’s hackers
Botnets, protection of personal information pose biggest challenges, CSO Edward Amoroso says

Edward Amoroso is the chief security officer at AT&T in Florham Park, N.J., as well as a professor who has written several textbooks on information security. Amoroso spoke with Network World’s Jon Brodkin this week in Boston, where he delivered a keynote about network security during Forrester’s Security Forum.

What are your biggest security challenges at AT&T?
The biggest challenge right now is sensitive personal information being all over the place: Social Security numbers, credit card numbers. It’s an IT problem. I’m not even convinced it’s appropriate to call it a security problem; it’s just that IT infrastructure has developed in a way where that stuff is all over the place. We’re encrypting the whole company. That’s a pretty heavy-handed approach to solving the problem, but that’s really the only option.

Have you lost any sensitive data?
We’ve had some laptops that have been lost just like anybody else. So we report those and move on. That’s been the extent of it, it could be worse.

You also spoke about network security and defending against botnets and denial-of-service attacks in your keynote.
That’s our second-biggest challenge. Keep in mind, we’re a service provider, so the availability threat is way more important than if we were selling software. If Microsoft.com is down for an hour, it wouldn’t be good but it’s not a stock-price-affecting problem. If our network services are down for an hour, that is a very big problem.

Will AT&T be able to successfully defend against these botnets?
We do it now. These things we see, a lot of them are aimed at us all the time. Any carrier that says ‘we’re not under attack’ is lying to you.
Last December, we saw some pretty significant increases in traffic aimed at our host. We think that somebody was aiming big denial of service attacks at our hosting DNS services. We just filter the traffic, we survive it. It’s just the normal course of business for that stuff to be lobbed at you, and you block it.

You’re an adjunct professor of computer science at the Stevens Institute of Technology. What can we expect from the next generation of computer scientists?
They’re good hackers, that’s for sure. They come in and they’ve been reading hacking magazines since they were little kids. There’s a lot of foolishness in youth so a lot of young people do design attack tools. They’re better [than previous generations]. But they’re also better as computer scientists. I would say there’s a general uplift in capability, good and bad. It keeps me sharp. They let me have it if I don’t know the answer to something.

If this new generation of computer scientists is smarter, what kind of impact will they have when they enter the workforce?
I’m in my 40s. When I was growing up technology wasn’t generally available. Young people today are growing up with technology and they speak it fluently the way you speak French in Paris.
My kids, I buy them these complex gaming systems. My son, he’ll go online and buy these hacking devices, and expanded memory and a way of bridging Wi-Fi to our video, and to his camera. There’s no manual, there’s no anything, he’s just sort of natively doing it, and it just works.
When he gets into the workforce, I don’t know if he’s going to be an engineer, a lawyer, a doctor or whatever. But whatever he’s doing he brings that capability to bear. If he goes bad and decides he wants to be a hacker then we’ve got a problem because that’s a kid who knows what he’s doing.

SOX turns 5 (092008)

Sarbanes-Oxley compliance has caused IT headaches for half a decade

Five years after the controversial Sarbanes-Oxley Act was enacted to prevent Enron-like scandals, the law’s financial control requirements are having myriad impacts: large companies have cleaned up their accounting, but at great cost; foreign businesses are dropping out of U.S. stock exchanges to avoid SOX requirements; and many small public companies are scrambling to meet a crucial compliance deadline in December.
Signed into law by President Bush on July 30, 2002, SOX forces public companies to prepare reliable financial statements and bring material weaknesses into public view, with mandated testing for integrity and ethical behavior, IT controls related to financial reporting, whistleblower programs, antifraud provisions and other requirements.



The cost of SOX
A sampling of SOX facts, figures and projections:
  • Spending on SOX compliance will surpass $32 billion by the end of 2008; $6 billion will be spent this year alone.
  • July 30, 2002, is when President Bush signed Sarbanes-Oxley into law.
  • Nov. 15, 2004, is when companies with more than $75 million in market capitalization were expected to comply. 4,862 companies with market caps that high have reported under SOX's Section 404; 1,035 of those have failed to comply at some point.
  • About 7,400 companies with market caps under $75 million face a compliance deadline of Dec. 15, 2007.
SOURCES: AMR Research, Audit Analytics

Compliance has become “pretty much routine” for large companies, which have faced SOX requirements since 2004, says Bob Benoit of Lord & Benoit, which performs SOX research and helps companies comply.

It hasn’t been cheap: spending on SOX compliance was $5.5 billion in 2004 and is now more than $6 billion annually, according to AMR Research.
1,035 large public companies have at some point failed to comply with SOX, out of a total of 4,862 that have reported under the law’s Section 404, Benoit says, citing figures from Audit Analytics.

Yet many individual enterprises spent far more on SOX compliance than they had to because the federal government initially failed to issue clear instructions.
“It was millions of dollars extra that was spent. This was due to people overcomplying, doing far more testing than was necessary,” says Michael Kamens, who was global network and security manager at Thermo Electron when the $2 billion company in Waltham, Mass., had to comply with SOX.

For about a year, companies thought they had to document and put controls in for every business process they have, since almost anything can impact financial statements, says John Hagerty, an analyst for AMR Research. Later it became clear that SOX only required such oversight for matters directly related to financial processes, Hagerty says.

“The biggest pain companies reported was they felt like they were getting conflicted advice,” he says. “People didn’t want to get caught in a situation where they didn’t do enough, so they ended up doing too much.”

Advice from the Public Company Accounting Oversight Board, created by SOX, and from the Big Four auditing firms was excessive at best, says Kamens, who now works for auditing firm Accume Partners. Whereas today companies focus on 31 so-called key controls, in the days after SOX, public firms were testing for as many as 200 controls, he says.

“It was extremely painful for everybody. Nobody really knew how to comply,” Kamens says. “Because there was so much pressure on public companies to pass, everybody was scared and they did exactly whatever auditors told them to do. Failure was not an option.”

Some private companies have decided to comply with SOX even though they don’t have to, either because they think they might be purchased by a public company or go public themselves, or because they want better control over financial accounting.
“There are more people who realize this is just good business practice,” Hagerty says. “The whole concept of control of your financial environment is a bedrock principle of financial accounting.”
But the cost of SOX also has driven foreign businesses out of American stock exchanges. On Wednesday, BG Group, an oil and natural gas company in the United Kingdom, became the 18th non-U.S. company to quit the New York Stock Exchange since the Securities and Exchange Commission (SEC) made it easier to delist in December, the Bloomberg news service reported. The number of foreign companies traded on the NYSE has dropped 9.5% since SOX became law.
BG Group blamed its decision to leave NYSE on “U.S.-specific obligations [that] carry a cost and administrative complexity.”
Benoit doesn’t quite understand what all the fuss is about. “Internal controls have been around a long time. It’s not rocket science. It’s just a matter of doing it,” he says.

Small companies and SOX
Small public companies face just as complex a task as do larger ones, but compliance costs will be relatively higher as a percentage of a smaller company’s revenue, Hagerty says. Smaller public companies — technically those with less than $75 million of stock in the hands of public investors — have been granted numerous extensions allowing them to postpone compliance. Currently, they are scheduled to face the requirements of SOX on Dec. 15.

Benoit criticized the SEC in a Network World interview last December for not issuing specific guidelines to smaller public companies. Now he says the SEC addressed his concerns with guidance issued May 23.
A compliance project approached correctly should cost 50% to 75% less than what companies have been spending, but many businesses insist on an inefficient, bottom-up approach that audits process-level controls like expenditures, payroll and property, Benoit says.
“Accountants are kind of used to that approach, but internal control is the opposite,” he says. “It’s looking at significant items of risk, identifying those and testing those controls. … When you approach it from the risk perspective, which is what the SEC guidance has made very clear, there are definite and huge savings.”
The SEC on Wednesday adopted a new auditing standard that encourages an even less costly approach to SOX compliance.
That’s good news for smaller public companies that may find their backs against the wall come Dec. 15. Benoit says his firm has contacted about 4,000 companies and “far less than 1%” have started the process of SOX compliance.
“We’re starting to see a small population of companies come alive and start their process,” he says. “There’s a small window of opportunity right now. If they start working on the projects now they’ll be OK.”
Small companies face many challenges, according to research by Lord & Benoit. Among them are accounting and disclosure controls, control of treasury functions, competency and training of accountants, revenue recognition, inadequate account reconciliation, consolidations and mergers, and information technology weaknesses.
Software vendors are champing at the bit trying to sell products that automate compliance and reduce cost by taking people out of the process as much as possible. Some built new technology to meet the law’s demands while other vendors took old technologies and repackaged them as SOX compliance tools.

Even former U.S. Attorney General John Ashcroft has gotten in on the game, advising a software company called D2C Solutions, whose products detect internal fraud and make SOX compliance easier.

The “software [industry] has been the primary beneficiary of this automation phase,” Hagerty says.

US mandates DNSSEC for government (082008)

US mandates DNSSEC for government agencies

The White House is introducing DNSSEC to prevent the redirection of web traffic.
The US government is taking measures against the DNS hole: the administration is making the use of DNSSEC mandatory for government agencies.
The White House is requiring American government agencies to adopt DNS security measures. All government bodies must from now on use DNSSEC (Domain Name System Security Extensions) for their internet connections. DNSSEC is an extension of the basic DNS protocol that verifies that the source of a transmission, and the transmitted data itself, are authentic.
With this, the United States government ensures that abuse of the major security hole in DNS can no longer endanger government traffic. The hole, disclosed in July, lets attackers redirect internet traffic without regular security software noticing. That affects not only web traffic but also other applications such as mail, FTP (File Transfer Protocol) and even SSL (Secure Sockets Layer), which is used for online banking.
Not protected
DNSSEC has already been named as the only real solution for the DNS hole. The patches that vendors released when the flaw was disclosed are merely a stopgap: exploitation of the DNS flaw is still possible.


The flaw Kaminsky discovered
Domain Name System servers translate domain names into IP addresses. There are two kinds of DNS servers: authoritative and caching nameservers. Only the second type (also called 'resolving nameservers') is vulnerable to the flaw Kaminsky discovered. Caching nameservers do not know every domain name on the entire internet, so they send lookup requests to authoritative DNS servers. In such a request, a DNS server asks for the IP address that belongs to a given website or mail address.
The current DNS flaw makes it possible for attackers to pose as an authoritative DNS server. They can thereby pollute the cache of DNS servers with false IP addresses. Those wrong addresses can cause mail to end up in the wrong hands and website visitors to be redirected to forged websites. Attackers can 'hijack' a single website (of a bank or a webmail provider, for example), but in principle they can also take over the entire .com domain.
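To make the mechanism concrete, here is a minimal, hypothetical Python sketch of why guessing wins against a bare 16-bit transaction ID. Real resolvers and the actual Kaminsky attack involve more detail (source ports, race timing against the genuine reply), but the core weakness is the same: the first reply with a matching ID is trusted and cached.

```python
import random

# Toy model of a caching (resolving) nameserver, as described above.
# NOT a real resolver; it only shows the transaction-ID weakness.
class CachingResolver:
    def __init__(self):
        self.cache = {}        # name -> IP, served to all later clients
        self.pending = {}      # name -> outstanding transaction ID

    def send_query(self, name):
        txid = random.randrange(65536)   # only 65,536 possible IDs
        self.pending[name] = txid

    def receive_reply(self, name, txid, ip):
        # The flaw: any reply with the right ID is accepted and cached.
        if self.pending.get(name) == txid:
            self.cache[name] = ip
            del self.pending[name]
            return True
        return False

resolver = CachingResolver()
resolver.send_query("www.example-bank.com")

# An attacker floods spoofed "authoritative" replies, guessing the ID;
# with so few possibilities, one guess eventually lands.
for guess in range(65536):
    if resolver.receive_reply("www.example-bank.com", guess, "203.0.113.66"):
        print("cache poisoned:", resolver.cache)
        break
```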

Security ROI is nonsense (September 2008)

Schneier: Security ROI is nonsense

Return on investment, better known as ROI, is the basis on which many companies make decisions, yet in the case of IT security it is nonsense, says security guru Bruce Schneier. Many companies want to see an ROI model so they know their security investment pays off. Vendors, in turn, are only too eager to build models that "prove" their solution delivers the best ROI. "It is a good idea in theory, but in practice mostly nonsense," says Schneier. "ROI in a security context is imprecise. Security is not an investment that provides a return, like a new factory. It is an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about generating revenue." That does not mean companies should invest blindly. "Companies should never spend more on a security problem than the problem is worth. Nor should they ignore problems that are costing money when there are cheap ways to solve them. A smart company approaches security like any other business decision: costs versus benefits."

Data
In the classical approach, the cost of a security incident consists of things such as time, money, reputation and competitive advantage, multiplied by the probability that the incident occurs. That sum determines how much a company should spend to prevent the risk. Making the numbers add up requires a lot of data, and for computer crime that data does not exist. "There are no good crime statistics for cyberspace and little data on how security measures mitigate those risks. We don't even have figures on the cost of incidents." One reason the data is missing is that the threat moves too fast. "The characteristics of the threats we are trying to prevent change so quickly that we cannot gather the data fast enough. By the time the data comes in, there is already a new threat model for which we do not have enough data. So we cannot build all the models." The problem is compounded by the fact that exceptional incidents are impossible to calculate at all.
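The classical sum described here is usually written as an annualized loss expectancy (ALE) calculation. A minimal sketch with invented figures, purely to illustrate the arithmetic Schneier says we lack the data for:

```python
# Annualized loss expectancy (ALE): the classical model described above.
# All figures below are invented for illustration only.

single_loss_expectancy = 250_000   # estimated cost of one incident (time,
                                   # money, reputation, lost advantage)
annual_rate_of_occurrence = 0.05   # estimated probability per year

ale = single_loss_expectancy * annual_rate_of_occurrence
countermeasure_cost = 10_000       # yearly cost of the security measure

print(f"ALE: {ale:,.0f} per year")
if countermeasure_cost < ale:
    print("On this model, the measure pays for itself")
else:
    print("Never spend more on a problem than the problem is worth")
```

The model only works if the loss and probability estimates are grounded in data, which is exactly the input Schneier argues is missing for computer crime.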
Misleading
Schneier is scathing about security companies that use ROI models. "The reason most ROI models you get from security vendors are nonsense is that their model naturally shows that their product or service makes financial sense. They have simply adjusted the numbers to make that claim come out right." Managers should therefore distrust analyses from people with their own agenda and use the results only as a general guideline. "So if you get an ROI model from your vendor, take the framework and fill in your own numbers. Don't show the vendor your improvements; he will not regard changes that make his product less cost-effective as improvements. And use those numbers as a general guideline, in combination with risk management and compliance analyses, when deciding which products and services to buy."

Tax data sold via eBay (September 2008)

Disk with tax data of thousands of Britons sold via eBay

A few days after a redundant hard disk containing the data of a million British bank customers was sold via eBay, a disk holding the tax records of thousands of citizens has now changed hands for less than 10 euros. The data includes thousands of account numbers, bank sort codes, names and e-mail addresses stored on a disk belonging to Charnwood Borough Council in Leicestershire. A Scottish computer expert bought the disk for 7.91 euros on the auction site, intending to use it for "practice". Using recovery software he found not only the tax records but also 35,000 other files, such as invoices, photos of the borough council and internal memos. The Scot tried to contact Charnwood's security officer but got no reply. "From these documents I can see who is in financial trouble and who is about to get a visit from the bailiff." The information on the disk reportedly goes back to 2002 and was still in active use until July. In a response, the council says that hardware is never resold and that all equipment goes through a reputable disposal firm, which is supposed to erase all data to the DOD 5220.22-M standard. Possibly the hard disk, or the machine it was in, was stolen: British police have arrested someone involved in the sale. The Scottish owner of the disk is now cooperating with the police investigation and says he has not distributed the data further.
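For reference, the DOD 5220.22-M wiping the council mentions amounts to overwriting every byte several times before disposal, which is precisely what defeats the recovery software the buyer used. A minimal sketch of the multi-pass idea for a single file; the demo file is hypothetical, and real sanitization must target the whole device, not individual files.

```python
import os
import tempfile

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's bytes in place several times.

    Illustrates the multi-pass idea behind DOD 5220.22-M only; it is
    NOT a substitute for proper whole-disk sanitization.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random pattern over every byte
            f.flush()
            os.fsync(f.fileno())        # force each pass out to disk

# Demo on a throwaway temp file standing in for a council record.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"account 12345678, sort code 00-00-00")
overwrite_file(path)
os.remove(path)
```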

Sunday, 31 August 2008

PCI DSS 1.1

New security rules on tap for credit-card handlers
Next version of Payment Card Industry security standard due out in October
By Ellen Messmer, Network World, 08/28/2008

Companies that handle credit cards can expect to see revised security rules released in early October, according to the group responsible for maintaining the Payment Card Industry security standard for storage and processing of credit and debit cards.
The next version of the 12-part PCI Data Security Standard is aimed at clarifying questions that merchants and service providers had regarding the current PCI DSS 1.1 standard, says Bob Russo, general manager of the PCI Security Standards Council. Some changes in the forthcoming Version 1.2 may prompt merchants and service providers to make adjustments in their security practices to achieve PCI compliance in the future, he adds.
"We're still tweaking this, but we expect to be finished by September 8th," Russo says. DSS 1.2 will be shared with council members including merchants; card association founders, such as Visa and MasterCard; card processors; and vendors certified to perform network scans or audits as part of the PCI compliance process.
The PCI DSS 1.2 document will be presented at the council's upcoming community meetings in Orlando and Brussels. Upon the official October publication of PCI DSS 1.2, the council will set deadlines for supporting the revised standard. Under discussion now is a sunset date of June 30, 2009 for PCI DSS 1.1.
PCI DSS 1.2 is not yet final, but the council is previewing what businesses can expect to see by October.
For one thing, there will be a clarification on the first rule related to using firewalls to protect cardholder data; the revised standard will change the requirement to review firewall rules from every quarter to every six months.
The council also will remove references to Wired Equivalent Privacy (WEP) to emphasize the use of stronger encryption and authentication for wireless networks. Companies using wireless technologies will be expected to implement "industry best practices," such as IEEE 802.11i. Specifically, new implementations of WEP are not expected to be allowed after March 31, 2009, though current implementations could continue longer -- until June of next year, under the council's current thinking.
In addition, the revised standard probably will remove the requirement to disable service-set identifier (SSID) broadcast, because disabling SSID broadcast does not prevent a malicious user from determining the SSID, according to the council.
Among other clarifications, the revised standard will note that the requirement to use antivirus software extends to all operating system types. Software patching revisions will clarify that a "risk-based approach" for prioritization of patch installation is acceptable. In the matter of assigning a unique ID to each person for computer access, the Version 1.2 standard is expected to clarify that both passwords and passphrases — authentication challenges that require answers that the user should know — are acceptable for PCI compliance.
A clarification related to restricting physical access to cardholder data makes it clear that this requirement also pertains to paper-based media containing cardholder data, as well as electronic media.
Some other clarifications are expected to detail the need for a protected environment to preserve an audit trail for network resources related to cardholder data. For instance, revised language will clarify that three months of audit-trail history must be immediately available for analysis or quickly accessible. In addition, the council will seek to clarify that both internal and external penetration tests are required.
After the release of PCI DSS 1.2, the next major change to the PCI security standard isn't likely soon, Russo says. "We're hoping to stick to a two-year cycle after that," he says. PCI DSS 1.2 has been under discussion for more than a year as the council reviewed the 2,500 questions it received.

P&G outsources security to IBM

Procter & Gamble outsources security to IBM, but keeps security staff
Opts for IBM ISS managed services to save money, boost security
By Ellen Messmer, Network World, 08/29/2008

Procter & Gamble, the global manufacturer of household products, Friday said it has selected IBM ISS to provide managed security services.
"By teaming with IBM ISS, our objective is to both strengthen our security systems and improve the efficiency and effectiveness of our security operations," said Willie Alvarado, P&G’s Enterprise Infrastructure Services director in a statement.
While P&G declined to discuss details about the security outsourcing arrangement, IBM ISS Director Peter Evans says the managed services will include internal network protection and perimeter-based defense. "Managed intrusion prevention, firewall, server protection, managed desktop and managed mail are all part of this," he says. (Compare intrusion prevention products.)
IBM ISS earlier this year indicated it was expanding its managed services portfolio, and the 5-year deal with P&G is believed to be the largest single managed services deal the vendor has seen, though it is declining to reveal dollar figures.
P&G will not be shedding security staff in this outsourcing arrangement, according to Evans.
"P&G will still maintain their security staff," says Evans, pointing out that they will now have more time to focus on strategic projects for online collaborative business or in mergers.
IBM ISS operates several security operations centers where remote monitoring and management of customer networking operations takes place. Under the outsourcing arrangement, P&G will partner in what IBM ISS calls a "virtual security operations center" (VSOC) in which P&G will have a Web-based portal that offers vulnerability assessment, data correlation and analysis. The VSOC will consolidate monitoring and maintenance of four IBM ISS Proventia SiteProtector management consoles in North America, Europe and Asia.
IBM ISS also anticipates the services for P&G may be expanded to include identity and access management and data-leakage prevention services in the future.