Tor, the world's largest and most well-known "onion router" network, offers a degree of anonymity that has made it a popular tool of journalists, dissidents, and everyday Internet users who are trying to avoid government or corporate censorship (as well as Internet drug lords and child pornographers). But one thing that it doesn't offer is speed—its complex encrypted "circuits" bring Web browsing and other tasks to a crawl. That means that users seeking to move larger amounts of data have had to rely on virtual private networks—which, while faster, are much less protected than Tor (since VPN providers, and anyone who has access to their logs, can see who users are).

A group of researchers—Chen Chen, Daniele Enrico Asoni, David Barrera, and Adrian Perrig of the Swiss Federal Institute of Technology (ETH) in Zürich and George Danezis of University College London—may have found a new balance between privacy and performance. In a paper published this week, the group described an anonymizing network called HORNET (High-speed Onion Routing at the NETwork layer), an onion-routing network that could become the next generation of Tor. According to the researchers, HORNET moves anonymized Internet traffic at speeds of up to 93 gigabits per second. And because it sheds parts of Tor's network routing management, it can be scaled to support large numbers of users with minimal overhead, they claim.

Like Tor, HORNET encrypts encapsulated network requests in "onions," with each layer being decrypted by each node that passes the traffic along, in order to retrieve instructions on where to send the data next. But HORNET uses two different onion protocols: one for protecting the anonymity of requests to the open Internet, and a modified version of Tor's "rendezvous point" negotiation for communication with a site concealed within the HORNET network.
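The layer-peeling idea behind onion routing is easy to sketch. The toy Python example below is not HORNET's or Tor's actual protocol: it uses a deliberately insecure XOR "cipher" as a stand-in for real per-hop encryption, and the relay names and `DEST` marker are made up. It only illustrates the core property that each node can decrypt exactly one layer and learns nothing but the next hop:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real stream cipher -- NOT secure, illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap_onion(payload: bytes, route) -> bytes:
    """Wrap payload in one encryption layer per hop.

    route is a list of (node_name, key) pairs ordered entry -> exit.
    Each layer, once decrypted, reveals only the next hop's name.
    """
    packet = b"DEST|" + payload               # exit node sees the final marker
    packet = xor_cipher(packet, route[-1][1])  # innermost layer: exit node's key
    for i in range(len(route) - 2, -1, -1):
        header = route[i + 1][0].encode() + b"|"  # tell hop i where to forward
        packet = xor_cipher(header + packet, route[i][1])
    return packet

def peel_layer(packet: bytes, key: bytes):
    """What a single relay does: strip one layer, learn only the next hop."""
    plain = xor_cipher(packet, key)
    next_hop, inner = plain.split(b"|", 1)
    return next_hop.decode(), inner
```

Peeling the three layers in order recovers the next-hop names one at a time, and only the final node ever sees the payload.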



With the non-stop stream of zero-day exploits, website breaches, and criminal hacking enterprises, it's not always easy to know how best to stay safe online. New research from Google highlights three of the most overlooked security practices among security amateurs—installing security updates promptly, using a password manager, and employing two-factor authentication.

The practices are distilled from a comparison of security practices followed by expert and non-expert computer users. A survey found stark discrepancies in the ways the two groups reported keeping themselves secure. Non-expert users listed the top security practice as using antivirus software, followed by using strong passwords, changing passwords frequently, visiting only known websites, and not sharing personal information. Security experts, by contrast, listed the top practice as installing software updates, followed by using unique passwords, using two-factor authentication, choosing strong passwords, and using a password manager.

"Our results show that experts and non-experts follow different practices to protect their security online," the researchers wrote in a research paper being presented at this week's Symposium On Usable Privacy and Security. "The experts' practices are rated as good advice by experts, while those employed by non-experts received mix[ed] ratings from experts. Some non-expert practices were considered 'good' by experts (e.g., install antivirus software, use strong passwords); others were not (e.g. delete cookies, visit only known websites.)"
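Of the expert-recommended practices, two-factor authentication is the easiest to show in code. Here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238), the scheme behind most authenticator apps; it is a bare-bones illustration, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP using HMAC-SHA1 (the common default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)   # 64-bit big-endian time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the phone app share the base32 secret and both compute the same short code for the current 30-second window, so a stolen password alone is not enough to log in.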



In the wake of the demonstration of a vulnerability in the "connected car" software used in a large number of Chrysler and Dodge vehicles in the United States, Fiat Chrysler NV announced today that it was recalling approximately 1.4 million vehicles for emergency security patches.

The company has already issued a patch on its website for drivers, and on Thursday it performed an over-the-air update of some vehicles to block unauthorized remote access, Bloomberg Business reports. The vulnerability, revealed in a report by Wired earlier this week, allowed security researchers Charlie Miller and Chris Valasek to take remote control of a Jeep Cherokee's onboard computer and entertainment system, remotely controlling the throttle of the vehicle while a Wired reporter was driving it at 70mph on a St. Louis-area interstate highway. Miller and Valasek also demonstrated that they could take control of the vehicle's brakes and (in some cases) even its steering, as well as the vehicle's windshield wipers, navigation, and entertainment systems.

The vehicles covered by the recall include the 2015 model year Dodge Ram pickup, Dodge's Challenger and Viper, and the Jeep Cherokee and Grand Cherokee SUVs. While Fiat Chrysler officials said that there was no known real-world use of the vulnerability (outside Miller's and Valasek's proof of concept), they were taking the recall step out of "an abundance of caution."



With all the patching you have been doing lately, I thought it would be opportune to have a look at what can and can't be done within two days. Why two days? Quite a few standards require it, which is one reason, but the more compelling reason is that it takes less and less time for attacks to be weaponised in the modern world. Over the past year or so we have seen vulnerabilities released and, within hours, vulnerable systems being identified and in many cases exploited. That is probably a more compelling reason than "the standard says so"; mind you, to be fair, the standard typically has the requirement in there for exactly that reason.

So why does patching instill such dread in many? It tends to be for a number of reasons; the main objections I come across are:

  • It might break something
  • It is internal, therefore we're OK (AKA "we have a firewall")
  • Those systems are critical and we can't reboot them
  • The vendor won't let us

It might break something: Yes, it could, absolutely. Most vendors have pushed patches that, despite their efforts to test prior to deployment, actually broke something. In reality, however, the occurrences are low, and where possible you should have pushed the patch to test systems prior to production implementation anyway, so ...

It is internal, therefore we're OK (AKA "we have a firewall"): This has to be one of my favourites. Many of us have M&M environments: hard on the outside and nice, gooey, and soft on the inside. That is exactly what attackers are looking for; it is one of the reasons why phishing is so popular. You get your malware executed by someone on the inside. To me this is not really a reason. I will let you use it to prioritise patching, sure, but that assumes you then patch your Internet-facing and other critical devices first.

Those systems are critical and we can't reboot them: Er, OK. Granted, you all have systems that are critical to the organisation, but if standard management functions cannot be performed on a system, that in itself should probably have been raised as a massive risk to the organisation. There are plenty of denial-of-service vulnerabilities that will cause a reboot. If an Internet-facing system can't be rebooted, I suspect you might want to take a look at that on Monday. For internal systems, maybe it is time to segment them from as much of the normal network as you possibly can to reduce the risk to those systems.

The vendor won't let us: Now, it is easy for me to say "get a different vendor," but that doesn't really help you much. I find that when you discuss exactly what you can or can't change, the vendor will often be quite accommodating. In fact, most of the time they may not even be able to articulate why you can't patch. I've had vendors state you couldn't patch the operating system when their entire application was Java. Reliant on Java, sure; reliant on a Windows patch for IE, not so much. Depending on how important you are to them, they may come to the party and start doing things sensibly.
If you still get no joy, then maybe it is time to move the system to a more secure network segment and limit the interaction between it and the rest of the environment, allowing work to continue but reducing the attack surface.

So the two days, achievable?

Tricky, but yes. You will need to have all your ducks in a row and the right tools and processes in place.

Let's have a look at the process first. Generally speaking, the process will be pretty much the same for all of you. A high-level process is below; I suspect it is familiar to most of you.

Evaluate the patch. There are a number of different approaches that organisations take. Some organisations will just accept the vendor recommendation: if the vendor states the patch is critical, then it gets applied, no questions asked. Well, one question: do we have the product/OS that needs to be patched? Yes? Then patch.

Other organisations take a more granular approach. They may not be quite as flexible in applying all patches as they are released, or rebooting systems may be a challenge (we have all come across systems that, when rebooted, have a hard time coming back). In those organisations the patch needs to be evaluated. In those situations I like using the CVSS scores, applying any modifiers that are relevant to the site to get a more accurate picture of the risk to my environment (https://nvd.nist.gov/cvss.cfm). If you go down this path, make sure you have some criteria in place for considering things critical. For example, if your process states that a CVSS score of 10 is critical while scores of 9.9 or lower are high, medium, low, etc., I'd probably be querying the thinking behind that.

Many patches address a number of CVEs. I generally pick the highest-scoring one and evaluate it. If it drops below the critical score we have specified, but is close, I may evaluate a second one; but if the score remains in the critical range I won't evaluate the rest of the CVEs. It is not like you can apply a patch that addresses multiple CVEs for only one of them.

You already know which servers, workstations, and devices are critical to the organisation; if not, that might be a good task for Monday (junior can probably do most of it). Based on the patch you will know what it fixes, and most vendors provide some information on the attack vector and the expected impact.

You might want to check our site as well. On patch Tuesday we publish patching information and break it down into server and client patches as well as priorities (PATCH NOW means exactly that).
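The "rate the patch by its highest-scoring CVE" heuristic described above can be sketched in a few lines of Python. The threshold and the labels here are illustrative assumptions, not an official scheme; tune them to your own criteria:

```python
def patch_priority(cve_scores, critical_at=9.0):
    """Rate a patch by the highest CVSS base score among the CVEs it fixes.

    A toy version of the triage described above. The 9.0 critical cutoff
    and the labels ("PATCH NOW", "high", "scheduled") are illustrative.
    """
    top = max(cve_scores)
    if top >= critical_at:
        return "PATCH NOW"   # must go out within the 48-hour window
    if top >= 7.0:
        return "high"        # next planned patch cycle
    return "scheduled"       # normal maintenance
```

Note that only the maximum matters: once one CVE in the bundle is critical, evaluating the rest does not change the decision, which is exactly the shortcut the diary describes.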
Based on these answers you may be able to reduce the scope of your patching and therefore make the 48 hours more achievable.

Once the patches are known, at least those that must be applied within the 48 hours, push them to your dev/test servers/workstations, assuming you have them. If you do not, you might need to designate some machines and devices (non-critical, of course) as your test machines. Deploy the patches as soon as you can, monitor for odd behaviour, and have people test. Some organisations have scripted the testing, and ops teams can run these scripts to ensure systems are still working. Others, not quite that automated, may need some assistance from the business to validate that the patches didn't break anything.

Once the tests have been completed, the patches can be pushed out to the next stage: a UAT or staging environment. If you do not have this (and you likely won't have it for all your systems), maybe set up a pilot group that is representative of the servers/devices you have in the environment. In the case of workstations, pick those that are representative of the various user groups in the organisation. Again, test that things are still functioning as they should. These tests should be standardised where possible, quick and easy to execute and verify. Once everything is confirmed working, the production rollout can be scheduled.

Schedule the non-critical servers that need patching first, just in case. By now the patches have been applied to a number of machines and have passed at least two sets of tests, and prior to deployment to production you quickly checked the isc.sans.edu diary entry to see if people have experienced issues. Our readers are pretty quick in identifying potential issues (make sure you submit a comment if you come across one). Ready to go.
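The staged rollout above is essentially a gated pipeline: a patch only advances when the previous stage's checks pass. A minimal sketch, where the stage names and the `smoke_tests` callables are placeholders for whatever deployment and validation steps your site actually uses:

```python
STAGES = ["dev", "uat", "prod"]   # or dev -> pilot group -> production

def roll_out(patch, smoke_tests):
    """Advance a patch through the stages, halting at the first failed check.

    smoke_tests maps a stage name to a callable that deploys and tests the
    patch in that stage and returns True on success (details site-specific).
    """
    deployed = []
    for stage in STAGES:
        if not smoke_tests[stage](patch):
            return deployed, "halted at " + stage
        deployed.append(stage)
    return deployed, "complete"
```

The point of the structure is that a bad patch stops in UAT (or the pilot group) and never reaches production, while the list of stages it did reach tells you what to roll back.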

If you have the information, the processes, and the resources to patch, you should be able to patch dev/test within the first four hours after receiving the patch information. Evaluating the information should not take long. Close to lunchtime, UAT/staging/pilot groups can be patched (assuming testing is fine). Leave them overnight, perhaps. Once it is confirmed there are no issues, start patching production and schedule any needed reboots for that evening.

Patched within two days: dreaming? Nope. Possible. Tough, but possible.

For those of you that have patching solutions in place, your risk level and the effort needed are probably lower than for others. By the time the patch has been released to you, it has been packaged and some rudimentary testing has already been done by the vendor; the errors that blow up systems will likely already have been caught. For those of you that do not have patching solutions, have a good look at what WSUS can do for you. With a third-party add-on it can also be used to patch third-party products (http://wsuspackagepublisher.codeplex.com/). You may have to do some additional work to package things up, which may slow you down, but it shouldn't be too much.

It is always good to improve, so if you have hints, tips, good practices, or horrible disasters that you can share so others can avoid them, leave a comment.

Happy patching. We'll be doing more over the next few weeks.

Mark H - Shearwater

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.