Information Security News
I have been witness to network and system security failures for nearly two decades. While the players change and the tools and methods continue to evolve, it's usually the same story over and over: eventually errors add up and combine to create a situation that someone finds and exploits. The root-cause analysis, or post mortem, or whatever you call it in your environment, describes a constellation of errors, a kill chain, or what have you. It's up to you to develop an environment that both provides the services your business requires and has enough complementary layers of defense to make incidents either a rare occurrence or a non-event.
Working against you are not only a seemingly endless army of humans and automata, but also the following truths, or as I call them, "The Three Axioms of Computer Security":
How do you create a network that can survive under these conditions? Plan on it happening.
In your design, account for the inevitable failure of other tools and layers. Work from the outside in: Firewall, WAF, Webserver, Database server. Work from the bottom up: Hardware, OS, Security Tools, Application.
I'm imagining the typical DMZ layout for this hypothetical design: external firewall, external services (DNS, email, web), internal firewall, internal services (back office, file/print shares, etc.). This is your typical "defense in depth" layout, and it already incorporates the assume-failure philosophy a bit. It attempts to isolate the external servers, which the model assumes are more likely to be compromised, from the internal servers. This isn't the correct assumption for modern exploit scenarios, so we'll fix that as we go through this exercise.
Nowadays, the external firewall basically fails right off the bat in most exploit scenarios. It has to let in DNS, SMTP, and HTTP/HTTPS. So unless it's actively blocking source IPs in response to other triggers in your environment, its only value-add in protecting these services comes from acting as a separate, corroborating log source. But it's still completely necessary: without it, you open your network up to direct attack on services that you might not even be aware you're exposing. So firewalls remain a requirement, but keep in mind that they aren't covering attacks coming in over your exposed applications. A firewall creates a choke point on your network that you can exploit for monitoring and enforcement (it's a good spot for your IDS, which can inform the firewall to block known malicious traffic). This is all good strategy, but it focuses on the threat from the outside.
However, if you plan your firewall strategy with the assumption that other layers will eventually fail, you'll configure the firewall so that outbound traffic is also strongly limited to just the protocols that it needs. Does your webserver really need to send out email to the internet? Does it need to surf the internet? While you may not pay much attention to the various incoming requests that are dropped by the firewall, you need to pay critical attention to any outbound traffic that is being dropped.
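As a sketch of that egress-filtering posture, a default-deny outbound policy for a DMZ web server might look like the following (the resolver address 192.0.2.53 and the rule set itself are hypothetical; adapt to your environment):

```shell
# Hypothetical egress policy for a DMZ web server. Default-deny outbound,
# and log drops so you notice a compromised host trying to phone home.
iptables -P OUTPUT DROP
# Allow established/related traffic so responses to inbound HTTP(S) flow normally.
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# DNS lookups only to the designated internal resolver (placeholder address).
iptables -A OUTPUT -p udp -d 192.0.2.53 --dport 53 -j ACCEPT
# No outbound SMTP, no general web surfing from this host.
# Log everything else before the default policy drops it.
iptables -A OUTPUT -j LOG --log-prefix "EGRESS-DROP: "
```

The LOG rule is the part that answers "pay critical attention to any outbound traffic that is being dropped": every dropped outbound packet leaves a trace you can alert on.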
The limitation of standard firewalls is that they don't perform deep packet inspection and are limited to the realm of IP addresses and ports for most of their decision making. Enter the Application Firewall, or Web Application Firewall. I'll admit that I have very little experience with this technology; however, I do know how it should not be used in your environment. PCI requirement 6.6 states that you should secure your web applications either through application code reviews or application firewalls (https://www.pcisecuritystandards.org/pdfs/infosupp_6_6_applicationfirewalls_codereviews.pdf). It should be both, not either/or, because you have to assume that the WAF will fail or that the code review will fail.
The next device is the application (DNS, email, web, etc.) server. If you're assuming that the firewall and WAF will fail, where should you place the server? In the external DMZ, not inside your internal network "because it's got a WAF in front of it." We'll also take an orthogonal turn in our tour, starting from the bottom of the stack and working our way up. IT managers generally understand hardware failure better than security failures: multiple power supplies and routes, storage redundancy, backups, DR plans, etc. That's the "Availability" in the CIA triad.
Running on your metal is the OS. Failure here is also mostly of the "Availability" variety, but security/vulnerability patching starts to come into play at this layer. Everybody patches, but does everybody confirm that the patches actually deployed? This is where a good internal vulnerability/configuration scanning regimen becomes necessary.
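That "confirm the patches deployed" step can be as simple as diffing what each host reports against a baseline. A minimal sketch (package names, versions, and the baseline format are all made up for illustration; a real scanner would pull the reported versions from your package manager or inventory agent):

```python
def unpatched(expected, reported):
    """Return packages that are missing or below the expected version.

    Both arguments map package name -> version tuple, e.g. (1, 0, 1, 7).
    The result maps package name -> (wanted_version, found_version_or_None).
    """
    gaps = {}
    for pkg, want in expected.items():
        have = reported.get(pkg)
        if have is None or have < want:
            gaps[pkg] = (want, have)
    return gaps

# Hypothetical baseline: this host reports an older openssl than required.
baseline = {"openssl": (1, 0, 1, 7), "bash": (4, 2, 45)}
host = {"openssl": (1, 0, 1, 6), "bash": (4, 2, 45)}
print(unpatched(baseline, host))  # flags openssl only
```

The point is not the ten lines of Python; it's that patch deployment is verified by an independent check rather than trusted because the patch job ran.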
As part of your standard build, you'll have a number of security and management applications that run on top of the OS: your HIDS, AV, and inventory management agents. Plan on these agents failing: what is your plan to detect when an AV agent fails to check in or update properly? Is your inventory management agent itself adequately patched and secured? While you're considering security failure, consider pre-deploying incident-response tools on your servers and workstations; this will speed response time when things eventually go wrong.
Next is the actual application. How will you limit the damage the application process can do when it is usurped by an attacker? A chroot jail is often suggested, but is there any value in jailing the account running BIND if the server's sole purpose is DNS? Consider instead questions like: does httpd really need write access to the webroot? Controls like SELinux or Tripwire come into play here; they're painful, but they can mean the difference between a near-miss and the need to disclose a data compromise.
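One concrete illustration of that least-privilege question (paths and the service account name are hypothetical): the webroot can be owned by root and left read-only to the account that actually runs httpd, so a compromised web process cannot rewrite the content it serves.

```shell
# Sketch: httpd runs as the 'apache' user; content lives in /var/www/html.
# Content is owned by root, so the web server account can read but not modify it.
chown -R root:root /var/www/html
# World-readable, directories traversable, no group/other write anywhere.
chmod -R a+rX,go-w /var/www/html
# If a writable upload directory is unavoidable, isolate it in its own
# location and make sure nothing in it is ever executed or served as code.
```

A defaced page or a dropped webshell then requires a second failure (privilege escalation), not just the first one.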
Assuming that the application server will be compromised also raises the need to ship logs off of the server. This can even reduce load on the server, since network traffic is cheaper than disk writes. Having logs collected and timestamped centrally is a boon to any future investigation. It also allows better monitoring, and it lets you apply indicators found in one investigation across your entire environment more easily.
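A minimal sketch of that, assuming the server runs rsyslog and a central collector exists (the hostname below is a placeholder):

```shell
# /etc/rsyslog.d/remote.conf (sketch; loghost.example.com is a placeholder)
# Forward all facilities/priorities to the central collector.
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@loghost.example.com:514
```

Because the copy leaves the box as events occur, an attacker who later wipes local logs has already lost: the central collector holds the record.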
Now, switching directions again and heading to the next layer: the internal firewall. These days the internal firewall separates two hostile networks from one another. The rules have to enforce a policy that protects the servers from the internal systems as well as the internal systems from the servers, since either is just as likely to be compromised these days (some may argue that internal workstations, etc., are more likely to fall).
Somewhere you're going to have a database. It will likely contain stuff that someone will want to steal or modify. It is critical to limit the privileges on the accounts that interact with it. Does PHP really need read access to the password or credit card column? Or can it get by with just being able to write? It can be painful to work through all of the cases to get the requirements correct, but you'll be glad you did when you watch the injected "SELECT *" requests in your Apache access logs and know they failed.
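In MySQL, for example, that write-but-not-read idea can be expressed with column-level grants. This is a sketch with hypothetical schema, account, and host names; here the web tier records card data for a separate back-office process but never needs to read it back:

```sql
-- Hypothetical schema shop.orders with a card_number column.
-- The web account may read only the non-sensitive columns...
GRANT SELECT (order_id, customer_id, total) ON shop.orders TO 'webapp'@'apphost';
-- ...but may still insert full rows, including card_number.
GRANT INSERT ON shop.orders TO 'webapp'@'apphost';
-- With no SELECT privilege on card_number, an injected "SELECT *"
-- through the web tier fails with a column-privilege error
-- instead of leaking card data.
```

Working out grants like this per account is exactly the painful case-by-case exercise the paragraph above describes, and exactly what makes the eventual SQL injection a near-miss.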
Workstations and other devices require a similar treatment. How much access do you need to do your job? Do you need data on the system, or can you interact with remote servers? Can you get away with application white-listing in your environment? I'm purposefully vague here because there are so many variables, however you have to ask how failures will impact your solution, and what you can do to limit the impact of those failures.
Despite all of these measures, eventually everything will fail. Don't feel glum, that's job security.
Learn from failure: instrument everything, and log everything. Disk is cheap. NetFlow from your routers, logs from your firewalls, alerts from your IDS and AV, and syslogs from your servers will give you a lot of clues when you have to figure out what went wrong. I strongly suggest full packet capture in front of your key servers. Use your WAF or load balancer to terminate the SSL session so you can inspect what's going into your web servers. Put monitors in front of your database servers too. Store as much as you can: use a circular queue, keep whatever history you can afford (two weeks, two days), and have an easy way to freeze the files when you think you have an incident. It's a lot easier to create a system that can store a lot of files than it is to create a system that can recover packets from the past.
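A ring-buffer capture along those lines can be sketched with tcpdump (the interface name, path, and sizes below are illustrative, not a recommendation):

```shell
# Capture full packets (-s 0) on the web-facing tap into a ring of
# 200 files of 100 MB each (roughly a 20 GB window). tcpdump appends
# a file number to the name: web.pcap0 ... web.pcap199, then wraps.
tcpdump -i eth1 -s 0 -C 100 -W 200 -w /cap/web.pcap
```

"Freezing" an incident is then just copying the relevant numbered files out of /cap before the ring overwrites them.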
I know we sound negative all the time, pointing out why things won't work. That shouldn't be taken as an argument to not do something (e.g., AV won't protect you from X, so don't bother); instead, it should be a reminder that you need to plan for what you'll do when it fails. Consider the external firewall: it doesn't block most of the attacks that occur these days, but you wouldn't want to be without one. Planning for failure is a valuable habit.

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Google engineers have squashed several high-impact security vulnerabilities in the company's account recovery system that enabled attackers to hijack user accounts.
A proof-of-concept attack that exploited the bugs required a victim to click a booby-trapped link leading to Google.com contained in a spear phishing e-mail. Behind the scenes, the link briefly redirected to an attacker's website even as it opened a legitimate password-reset page on Google. When a victim clicked on the link while logged in to the targeted Gmail account, the attacker site also performed a cross-site scripting attack. The Web application equivalent of a Jedi mind trick, the two exploits sent both the password entered by the victim and the authentication cookie used to access the account to the attacker's website.
"It's a clever attack," Jeremiah Grossman, CTO of Web-application security firm WhiteHat Security, told Ars. "There's elegance and simplicity." The exploit developer "did a lot of work behind the scenes to make the attack simple. This is what we're meant to do in a browser on Gmail. When we see links, we're meant to click on them."
Larry Seltzer over at ZDNet has noticed that since the release of OS X Mavericks, Apple has stopped updating OS X Mountain Lion. Although Apple is not forthcoming about the reasons, this appears to be a departure from past behavior.
-- Rick Wanner MSISE - rwanner at isc dot sans dot edu - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)
As a security practitioner I try really hard to drink the Kool-Aid, in other words practice what I preach. I have been a strong advocate, for well over a decade, of avoiding password reuse. There is one concession I personally made to password reuse. For years I used one "throwaway" password for services where I didn't care about the account. You know, those annoying sites that make you sign up just to access some mundane capability. In my case, my throwaway password is still a high-quality password, but it was used on literally dozens of sites where there is no data of value, like Adobe. After the Adobe breach I changed my throwaway password on as many sites as I could remember using it at, and developed a better methodology for passwords on these sites (i.e., no more reuse).
Apparently I missed one. Yesterday I got an email from Evernote telling me that I had used the same password at Evernote that I had used at Adobe. The Evernote account probably got my throwaway password before I realized the value of the Evernote service. I now use Evernote nearly every day from my mobile devices, where I don't get prompted for the credentials, but I never log into it over the web, so I didn't remember what the password was set to.
Needless to say I quickly changed my Evernote password and enabled Evernote's two-step authentication.
Shortly after, an ISC reader forwarded an article from The Register about a brute-force authentication attack against GitHub. While there aren't a lot of technical details in the article, this attack is interesting because it is a relatively slow attack from over 40,000 IP addresses, obviously designed to reduce the likelihood of any anti-brute-forcing controls kicking in.
"These addresses were used to slowly brute force weak passwords or passwords used on multiple sites. We are working on additional rate-limiting measures to address this." This suggests it was not your typical brute force employing obvious userids and incredibly inane passwords, but a targeted attack against password reuse.
The article also goes on to speculate: "It strikes us that GitHub's recent bout of probing may stem from crackers using the 38 million user details that were sucked out of Adobe recently to check for duplicate logins on other sites."
Guess I will be looking at all my passwords again, including the ones used by my mobile devices!
-- Rick Wanner - rwanner at isc dot sans dot edu - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)
Posted by InfoSec News on Nov 22: Forwarded from: c7five <c7five (at) thotcon.org>
Posted by InfoSec News on Nov 22: http://www.informationweek.com/security/vulnerabilities-and-threats/i2ninja-trojan-taps-anonymized-darknet/d/d-id/1112730
Posted by InfoSec News on Nov 22: http://www.ibtimes.co.uk/articles/524137/20131121/cryptolocker-ransomware-police-department-pays-bitcoin-ransom.htm
Posted by InfoSec News on Nov 22: http://www.darkreading.com/attacks-breaches/stuxnets-earlier-version-much-more-power/240164120
Posted by InfoSec News on Nov 22: http://www.thisdaylive.com/articles/deji-government-should-invest-in-cyber-security-enlightenment/164895/
Posted by InfoSec News on Nov 22: http://www.miamiherald.com/2013/11/21/3770148/energy-industry-is-on-alert-against.html
Posted by InfoSec News on Nov 22: http://www.jsonline.com/news/milwaukee/dynacare-city-at-odds-over-necessity-of-employee-information-on-stolen-flash-drive-b99147298z1-232745751.html