Information Security News
Microsoft released its pre-announcement for the upcoming Patch Tuesday. The summary indicates a total of 8 bulletins: 2 Critical with remote code execution, and 6 Important covering a mix of remote code execution, elevation of privilege, denial of service, and security feature bypass. The announcement is available here.
Guy Bruneau, IPSS Inc. (gbruneau at isc dot sans dot edu)
(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Posted by InfoSec News on May 09: http://www.washingtonpost.com/blogs/the-switch/wp/2014/05/09/link-shortener-bitly-disconnects-users-facebook-and-twitter-accounts-over-compromised-credentials/
Posted by InfoSec News on May 09: http://allafrica.com/stories/201405090302.html
Posted by InfoSec News on May 09: http://www.computerworld.com/s/article/9248205/IT_malpractice_Doc_operates_on_server_costs_hospitals_4.8M
Posted by InfoSec News on May 09: http://www.bankinfosecurity.com/ffiec-plans-cybersecurity-assessments-a-6825
Posted by InfoSec News on May 09: http://gcn.com/Articles/2014/05/09/Insight-Hybrid-Cloud-Security.aspx
A former sailor assigned to a US nuclear aircraft carrier and another man have been charged with hacking the computer systems of 30 public and private organizations, including the US Navy, the Department of Homeland Security, AT&T, and Harvard University.
Nicholas Paul Knight, 27, of Chantilly, VA, and Daniel Trenton Krueger, 20, of Salem, IL, were members of a crew that hacked protected computers as part of a scheme to steal personal identities and obstruct justice, according to a criminal complaint unsealed earlier this week in a US District Court in Tulsa, Oklahoma. The gang, which went by the name Team Digi7al, allegedly took to Twitter to boast of the intrusions and publicly disclose sensitive data that was taken. The hacking spree lasted from April 2012 to June 2013, prosecutors said.
At the time of the alleged hacks, Knight was an active duty enlisted member of the Navy assigned to the nuclear aircraft carrier USS Harry S. Truman, prosecutors said. He worked as a systems administrator in the carrier's nuclear reactor department. He is accused of conducting some of his unlawful hacking while aboard. The "self-professed leader of Team Digi7al [and] the primary publicist and Twitter poster," Knight was discharged after trying to hack into a Navy database while at sea, according to court documents. There are no allegations that he hacked any of the carrier's systems. Krueger, meanwhile, was a student studying network administration at an Illinois community college. The two men were allegedly aided by three unnamed minors who were not charged in the complaint.
In recent headlines, we've seen Heartbleed (which was not exclusive to Linux, but was predominately found there) and an IE zero-day that had folks over-reacting with headlines of "stop using IE" - but Firefox and Safari vulnerabilities were not that far back in the news either.
So what is "safe"? And as a System Administrator or CSO, what should you be doing to protect your organization?
It's great to say "Defense in Depth" and "The 20 Critical Controls", but that's easy to say and not so easy to do when you are faced with a zero day in the browser that your business application must have to run. What can you do that's quick and easy, that offers some concrete protection for your community of 20, 200, 2,000 or 20,000 workstations?
I'm starting a list here, but this is by no means a well-researched, exhaustive and complete list, just a starting point:
Inventory your hosts and the software that you run. Know your network, and know what's running on your servers and workstations. (if this sounds like another way of saying "read the 20 controls, start implementing at #1 and work your way down the list", then you get my point). After you inventory, read the list. Expect a story on this in the next week or so.
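If you have nothing at all yet, even a few lines of Python can give you a first rough service inventory of a subnet. This is a sketch, not a replacement for a real inventory tool like nmap; the port list and address range below are illustrative, so adjust both to your own environment:

```python
import ipaddress
import socket

# Ports worth a first look; tune this to the services you actually run.
COMMON_PORTS = [22, 80, 443, 445, 3389]

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

def sweep(cidr, ports=COMMON_PORTS):
    """Map each responding host in a CIDR block to its open ports."""
    inventory = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        ports_open = open_ports(str(ip), ports)
        if ports_open:
            inventory[str(ip)] = ports_open
    return inventory
```

Calling something like `sweep("10.0.0.0/28")` returns a dict of hosts to listening ports - a starting point you can diff over time to spot services that appear without anyone telling you.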
Deploy EMET. In the hype of "don't run IE" headlines, many of the folks who recommended this missed the fact that if you installed EMET, you were nicely protected against last week's 0-day. Expect a story on this later today (yes, I'm on a bit of a tear).
Read your logs. In every incident that I've worked, there's been *some* indication of the attack or compromise in logs somewhere - this is why you log after all. This also holds true for system crashes and problems of any kind. Troubleshooting always starts with your logs, but if you monitor logs for unusual things, you can expect less trouble to troubleshoot, because you'll see it before it gets to be a problem. If there are too many logs (yes, log volumes are insane), deploy a tool (there are tons of free ones) that will help you with this. ELSA (https://code.google.com/p/enterprise-log-search-and-archive/) is a decent starting point for log consolidation and prioritizing, but it's by no means the only solution - find what works for you and use it.
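As a taste of how little code it takes to get started, here is a minimal sketch that counts failed SSH logins per source address from OpenSSH-style log lines. The regex assumes the common "Failed password" format; real logs vary, so treat the pattern as a template to adapt:

```python
import re
from collections import Counter

# Matches the source address in a typical OpenSSH "Failed password" line.
# Illustrative format assumption - adjust the pattern to your own logs.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins_by_ip(lines):
    """Count failed-login attempts per source IP across an iterable of log lines."""
    counts = Counter()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def noisy_sources(lines, threshold=5):
    """Return source IPs exceeding `threshold` failures - candidates for a closer look."""
    return [ip for ip, n in failed_logins_by_ip(lines).most_common() if n > threshold]
```

Point `failed_logins_by_ip(open("/var/log/auth.log"))` at a real log and the top of `most_common()` is usually where the story is.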
Control your network perimeter (if you can define a perimeter). Put an egress filter that allows what you need, then has a deny any/any/log statement at the bottom. The "log" part makes it simpler to add new list entries to satisfy new business requirements as they come up.
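To make that deny/log entry pay off, summarize what it catches so new business requirements surface as patterns rather than one-off tickets. A small sketch, assuming a simplified space-separated deny-log format (real firewall logs vary widely, so the parsing is illustrative):

```python
from collections import Counter

def deny_summary(log_lines):
    """Summarize firewall deny-log lines as (dest_ip, dest_port, proto) counts.

    Assumes a simple illustrative format like:
        DENY TCP 10.1.2.3:51000 -> 198.51.100.9:8443
    Adapt the parsing to whatever your firewall actually emits.
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 5 and parts[0] == "DENY" and parts[3] == "->":
            proto = parts[1]
            dst_ip, dst_port = parts[4].rsplit(":", 1)
            counts[(dst_ip, int(dst_port), proto)] += 1
    return counts
```

A destination that shows up hundreds of times from many internal hosts is either a new business requirement that needs an allow entry, or something you very much want to investigate.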
Also at your perimeter, have the ability to block some of the "problem" applications when you know that things have gone particularly bad. For instance, if there is a Flash zero-day, there's no shame in sending a note to your users to say "we have to block Flash at the firewall for a few days until there's a fix". Ditto that for ActiveX, Java, PDF files and whatever else you'd care to add to the list.
Many of these settings are simple tick-boxes at the firewall, some are IPS signatures or some might need a proxy or web content filter product. The key to all of these is to be prepared, know where the knobs are that you need to tweak, and know what you can and can't do both technically and within your organization. If you're caught by surprise and put a "fix" together in a hurry, document what you did so you can re-use that next time, or improve it when you have an hour to spare.
Talk to your users. Keep a steady flow of communication going, let them know what's going on. Call this "Security Awareness Training" if that scores you points, but the object of the game is to keep your user community in the loop - the more they know about what they should do or not do (and why), the fewer problems will come your way from that direction. Also, the more that IT and the Security Team is seen as helping and advising (in a good way), the more slack they'll cut you when you need it - for instance when you need to block Flash, PDF files or their favourite website for a day or three.
We've been saying this for years, but I still have clients who say "we trust our people, why would we do any of that". My answer remains "so you trust their malware too?".
I'd invite you, our readers to add to this list in the comments. This is meant as just a starting point - what have I missed? What has worked for you?
=============== Rob VandenBrink, Metafore
It started with DNS: simple short DNS queries are easily spoofed, and the replies can be much larger than the request, leading to an amplification of the attack by orders of magnitude. Next came NTP. Same game, different actors: NTP's "monlist" feature allows for small requests (again UDP, so trivially spoofed) and large responses.
Today, we received a packet capture from a reader showing yet another reflective DDoS mode: SNMP. The "reflector" in this case stands nicely in line with our "Internet of Things" theme: it was a video conferencing system. Firewalling these systems is often not practical, as video conferencing systems use a variety of dynamically negotiated UDP ports for video and audio streams. As a result, they are often left "wide open". And of course, these systems not only let attackers spy on your meeting rooms; as it turns out, they also make great SNMP reflectors.
Here is an anonymized traffic capture (the queried reflector is anonymized as 192.0.2.1, the spoofed victim as 198.51.100.1):
198.51.100.1 -> 192.0.2.1 SNMP 87 getBulkRequest [OID elided]
192.0.2.1 -> 198.51.100.1 IPv4 1514 Fragmented IP protocol (proto=UDP 17, off=0, ID=1f48)
192.0.2.1 -> 198.51.100.1 IPv4 1514 Fragmented IP protocol (proto=UDP 17, off=1480, ID=1f48)
... [ additional fragments omitted ] ...
192.0.2.1 -> 198.51.100.1 IPv4 1514 Fragmented IP protocol (proto=UDP 17, off=54760, ID=1f48)
192.0.2.1 -> 198.51.100.1 IPv4 1514 Fragmented IP protocol (proto=UDP 17, off=56240, ID=1f48)
192.0.2.1 -> 198.51.100.1 SNMP 664 get-response [... large response payload ...]
That is right! A simple 87-byte getBulkRequest leads to about 60 kBytes of fragmented data being sent back - and because the request's source address is spoofed, all of that traffic lands on the victim. The user reporting this saw about 5 MBit/sec of traffic.
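The arithmetic behind that "that is right!" is worth spelling out, since the bandwidth amplification factor is what makes these reflectors so attractive to attackers. A quick sketch, using the approximate figures from the capture above:

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification: bytes the reflector returns per byte the attacker sends."""
    return response_bytes / request_bytes

# Figures from the capture above: an 87-byte getBulkRequest triggered
# roughly 60 kBytes of fragmented response traffic.
snmp_factor = amplification_factor(87, 60_000)  # roughly 690x
```

At roughly 690x, an attacker with a modest uplink and a list of open SNMP reflectors can generate traffic volumes far beyond what the DNS and NTP reflection attacks mentioned earlier typically achieve per request.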
Similar to what we have seen with other reflective attacks like this, the fragmentation of the traffic is likely going to make filtering even harder. Prolexic posted a white paper about some of the different DrDoS attacks, including SNMP attacks.
So what to do:
- SNMP should probably not traverse your perimeter. Stop it at the firewall (and I don't think video conferencing systems need it).
- Proper egress/ingress control is a good idea. But you already knew that.