InfoSec News

Last Thursday the OpenID Foundation announced a serious weakness in the Attribute Exchange extension to OpenID, which permits sites to exchange user attributes between endpoints. Essentially, it is possible to pass information through Attribute Exchange unsigned, which could allow an attacker to modify that information in transit.

There are no known exploits at this time, and the major sites that use OpenID have been contacted and have deployed a fix. For the rest of you with applications using OpenID, the recommendation is to update the OpenID4Java library to 0.9.6 final.
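To illustrate the root cause (this is a sketch, not OpenID4Java code): in an OpenID 2.0 response, only the fields listed in the "openid.signed" parameter are covered by the signature, so a relying party that reads Attribute Exchange values without confirming they are signed can be fed tampered data. The "ext1" namespace alias below is the conventional alias many relying parties assign to AX, but the alias is chosen per-request, so treat it as an assumption:

```python
def unsigned_ax_fields(response_params):
    """Return AX fields present in the response but missing from openid.signed."""
    # openid.signed is a comma-separated list of field names without the
    # "openid." prefix; anything not listed is outside the signature.
    signed = set(response_params.get("openid.signed", "").split(","))
    unsigned = []
    for key in response_params:
        if key.startswith("openid.ext1."):
            if key[len("openid."):] not in signed:
                unsigned.append(key)
    return sorted(unsigned)

# Example: the AX email attribute is transmitted but not signed,
# so a man-in-the-middle could rewrite it undetected.
params = {
    "openid.signed": "op_endpoint,claimed_id,identity,return_to,"
                     "response_nonce,assoc_handle",
    "openid.ext1.mode": "fetch_response",
    "openid.ext1.value.email": "victim@example.com",
}
print(unsigned_ax_fields(params))
# -> ['openid.ext1.mode', 'openid.ext1.value.email']
```

The fix in OpenID4Java 0.9.6 final is, in effect, this kind of check: extension fields are only accepted when the signature covers them.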

Further details are available at the Threatpost blog and the Ping Talk blog.

-- Rick Wanner - rwanner at isc dot sans dot org - Twitter: namedeplume (Protected)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
The other ISC (the Internet Systems Consortium) released a patch for BIND 9.8.0. The new version, 9.8.0-P1 [1], fixes a flaw that can lead to a server crash [2]. Only version 9.8.0 is vulnerable, and only if RPZ (response policy zones) is configured.
RPZ is a new feature introduced in BIND 9.8.0. This feature allows recursive name servers to selectively modify responses according to local policies. Usually, recursive name servers will not modify responses, but just forward them to the host that sent the original request.
In order to use RPZ in BIND 9.8.0, it has to be compiled with the --enable-rpz-nsip or the --enable-rpz-nsdname option. These options make a new configuration directive available: response-policy.
Four different policies can be used:
1. NXDOMAIN: replace all NXDOMAIN responses with a single CNAME record. This can be used to redirect users to a default host.
2. NODATA: similar to NXDOMAIN, but can be used to redirect to a wildcard record.
3. NO-OP: does nothing, and can be used to define exceptions to which NODATA/NXDOMAIN should not apply.
4. CNAME: replaces responses that return actual data other than NXDOMAIN. This can be used to apply block lists.
To configure this feature, you define a response-policy zone, and the zone file then lists the detailed policies. For more details, see the BIND 9.8 Administrator Reference Manual [3].
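As a rough sketch of what such a configuration looks like (the zone name, file name, and record owner names below are all hypothetical, and the exact record encodings should be checked against the ARM for your BIND version):

```
// named.conf fragment: point the server at a response policy zone
options {
    response-policy { zone "rpz.example.com"; };
};

zone "rpz.example.com" {
    type master;
    file "rpz.example.com.db";
};
```

Inside the zone file, the policy for each name is encoded in the CNAME target: a CNAME to the root (".") yields NXDOMAIN, a CNAME to "*." yields NODATA, a CNAME to the queried name itself is a NO-OP exception, and a CNAME to any ordinary host rewrites the answer:

```
; rpz.example.com.db -- illustrative policy records
badhost.example.net   CNAME .                         ; return NXDOMAIN
tracker.example.net   CNAME *.                        ; return NODATA
good.example.net      CNAME good.example.net.         ; NO-OP: leave untouched
evil.example.net      CNAME sinkhole.example.com.     ; redirect to a block page
```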



Johannes B. Ullrich, Ph.D.

SANS Technology Institute

Twitter
In the past month or so, I have had more than one discussion with different friends about the monitoring of virtual machines (VMs). Some of the conversations centered on: What tool(s) should I use? Should I monitor all communications between VMs? What about an IDS? How about firewalls? It seems there are a lot of questions about keeping things secure in a virtual environment.
Virtualization has allowed us to do some wonderful things, but it can also create a nightmare from a security perspective if not done thoughtfully. Why a nightmare? Say an organization has many departments that are securely separated: Financial, Human Resources, Research and Development, Operations, Legal, Security, etc. To consolidate, save money, and make better use of server space, the company decides to use virtual machines. To maximize the use of available resources, some departments end up together on the same server, while others stay on separate servers. However, just because two groups share a server does not mean they are allowed to communicate; some R&D projects, for example, are not allowed access to one another. The real question becomes how you protect and monitor.
Do you invest in tools that monitor traffic between the VMs on the server itself? Or do you only monitor outside the servers, to see what actually leaves? Previously, a single IDS and firewall could monitor and control communications between multiple physical servers; once you collapse those servers into VMs on one host, the visibility that IDS and firewall provided is significantly degraded. I have also encountered the opposite argument: the virtualization software isolates the machines, so there is no need to worry. The worry is that, unless something is done, you have lost the visibility you once had. In this scenario you are relying on the virtualization layer to keep things secure, but what about monitoring to ensure it is actually providing the security you expect?
I believe the answer is a combination of VM-level monitoring and network-level monitoring. How you handle it really depends on the sensitivity of the information stored on, or processed by, the VMs. There may still be a compelling argument for physical segregation. However, if you are in an environment that collapsed servers to save money, you may find yourself having to demonstrate the need to spend more money on security, and to explain why you cannot rely on the existing security architecture. Virtualization has changed the traditional approach to monitoring and introduced variables that an organization moving to a virtual world may not yet have considered. The emphasis needs to be on having the same view into your systems that you had before: the existing security architecture and monitoring efforts were put in place for a reason and need to be carefully preserved.

What approach and techniques have you used to ensure you can monitor and secure the virtual environment?
Chip maker Broadcom said Sunday that it will pay US$41.9 million for SC Square, a move that will bring expertise in smart-card security to the Irvine, California, architect of silicon for communications equipment.
