Information Security News
by Robert Lemos
Office supply retailer Staples is investigating a possible breach of its systems following reports from the banking industry of fraudulent credit and debit card transactions at stores in the northeastern United States.
On Tuesday, the company acknowledged that a breach may have occurred and that it had contacted the appropriate law enforcement agencies. The retailer declined to provide further details.
“Staples is in the process of investigating a potential issue involving credit card data and has contacted law enforcement,” a spokesperson said in a statement sent to Ars. “If Staples discovers an issue, it is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on a timely basis.”
=============== Rob VandenBrink, Metafore. (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Infosec Taylor Swift's cyber-philosophical musings
Do you like your cyberphilosophy delivered via the dulcet voice of America's country music treasure Taylor Swift? Head over to Twitter and follow @SwiftOnSecurity. Below are a few of her most incisive critiques of techno-utopianism. Swift showed how ...
Last week, Ars reported on the story of Anonabox, an effort by a California developer to create an affordable privacy-protecting device based on the open source OpenWRT wireless router software and the Tor Project’s eponymous Internet traffic encryption and anonymization software. Anonabox was pulled from Kickstarter after accusations that the project misrepresented its product and failed to meet some basic security concerns—though its developers still plan to release their project for sale through their own website.
But Anonabox’s brief campaign on Kickstarter has demonstrated demand for a simple, inexpensive way to hide Internet traffic from prying eyes. And there are a number of other projects attempting to do what Anonabox promised. On Kickstarter competitor Indiegogo there’s a project called Invizbox that looks almost identical to Anonabox—except for the approach its team is taking to building and marketing the device.
Based on the Chinese-built WT 3020A—a small wireless router that appears identical to the box that was the basis for the Anonabox—the Invizbox will have similar specs to the cancelled Kickstarter: 64 megabytes of RAM, 16 megabytes of Flash storage, and the Linux-based OpenWRT embedded OS. The main difference, according to the Dublin, Ireland-based team behind Invizbox (Elizabeth Canavan, Paul Canavan, and Chris Monks) is that their Tor router will be locked down better—and they won’t pretend that they’re using custom-built hardware.
Insider Threats: Breaching The Human Barrier
A company can spend all the money it has on technical solutions to protect the perimeter and still not prevent the attack that comes from within. Undoubtedly, every InfoSec professional has heard the argument that the perimeter was broken. That was so ...
5 non-traditional hiring tips for InfoSec
Globally, we're a million people short, according to Cisco's 2014 Annual Security Report. According to Ponemon's 2014 IT Security Jobs Report, 36 percent of staff positions and 58 percent of senior staff positions in IT security went unfilled in 2013.
As part of most vulnerability assessments and penetration tests against a website, we almost always run some kind of scanner. Burp (commercial) and ZAP (free from OWASP) are two commonly used scanners. Once you've done a few website assessments, you start to get a feel for what pages and fields are likely candidates for exploit. But especially if it's a vulnerability assessment, where you're trying to cover as many issues as possible (and exploits might even be out of scope), it's always a safe bet to run a scanner to see what other issues might be in play.
All too often, we see people take these results as-is, and submit them as the actual report. The HUGE problem with this is false positives and false negatives.
False negatives are issues that are real, but are not found by your scanner. For instance, Burp and ZAP aren't the best tools for pointing a big red arrow at software version issues - for instance, vulnerable versions of WordPress or WordPress plugins. You might want to use WPSCAN for something like that. Or browse to the login page; a view of the page source will often give you what you need.
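If you'd rather script that view-source check than eyeball it, a few lines of Python can pull out the generator tag that WordPress (and other CMSes) often leave in page headers. This is an illustrative sketch; the regex and the sample HTML are assumptions for demonstration, not output from any real site:

```python
import re

# Many CMSes advertise themselves in a generator meta tag, e.g.:
#   <meta name="generator" content="WordPress 4.0" />
GENERATOR_RE = re.compile(
    r'<meta\s+name=["\']generator["\']\s+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def find_generator(html: str):
    """Return the generator string (e.g. 'WordPress 4.0') if present, else None."""
    m = GENERATOR_RE.search(html)
    return m.group(1) if m else None

# Made-up sample page source for illustration
sample = '<html><head><meta name="generator" content="WordPress 4.0" /></head></html>'
print(find_generator(sample))  # WordPress 4.0
```

Of course, a hardened site may have stripped the tag, so a missing generator string proves nothing by itself.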
Issues with certificates will also go unnoticed by a dedicated web scanner - NIKTO or WIKTO are good choices for that. Or better yet, you can use openssl to pull the raw cert, or just view it in your browser.
(If you're noticing that much of what the cool tools do is possible with some judicious use of your browser, that's exactly what I'm pointing out!)
NMAP is another great tool for catching what a web scanner might miss. For instance, if you've got a Struts admin page or hypervisor login on the same IP as your target website, but on a different port than the website, NMAP is the go-to tool. Similarly, lots of basic site assessment can be done with the NMAP --version parameters, and the NSE scripts bundled with NMAP are a treasure trove as well! (Check out Manuel's excellent series on NMAP scripts.)
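As a rough illustration of what those version probes are doing under the hood, here's a minimal banner grab in Python. It's a crude stand-in for NMAP's service detection, not a replacement - many services (SSH, FTP, SMTP) volunteer an identifying banner as soon as you connect:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP port and return whatever the service volunteers first.

    A crude stand-in for nmap's version probes: services like SSH, FTP, and
    SMTP typically announce themselves immediately on connect. Returns an
    empty string if the service stays silent.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode("ascii", errors="replace").strip()
        except socket.timeout:
            return ""
```

Remember, as noted further below, banners can lie - treat a banner grab as a hint, not a finding.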
False positives are just as bad - the tool indicates a vulnerability where there is none. If you include blatant false positives in your report, you'll find that the entire report ends up in the trash can, along with your reputation with that client! A few false positives that I commonly see are SQL Injection and OS Command Injection.
SQL Injection is a vulnerability where, from the web interface, you can interact with and get information from the SQL database that's behind the website, often dumping entire tables.
Website assessment tools (Burp in this case, but many other tools use similar methods) commonly test for SQL Injection by injecting a SQL "waitfor delay '0:0:20'" command. If this takes significantly longer to complete than the baseline request, Burp will mark the finding as "Firm" for certainty. Needless to say, I often see this turn up as a false positive. Burp generally runs multiple threads (10 by default) during a scan, so it can really run up the CPU on a website, especially if the site is mainly parametric (where pages are generated on the fly from database input during a session). Also, if a site's error handling routines take longer than they should, you'll see this test get thrown off.
So, how should we verify this initial finding? First of all, Burp's test isn't half bad on a lot of sites. Re-testing Burp's injection with curl or a browser after the scan is complete will sometimes show that the SQL injection is real. Test multiple times, so that you can show consistent and appropriate delays for values of 10, 30, 60, and 120 seconds.
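One way to make that re-test systematic is to script the timing check. The sketch below is a hypothetical helper: `probe` stands in for whatever curl or requests call sends a single request with the injected waitfor delay, and the tolerance value is an arbitrary assumption you'd tune for the site being tested:

```python
import time

def delays_track_injection(probe, delays=(10, 30, 60, 120), tolerance=0.5):
    """Re-test a time-based SQL injection finding after the scan is done.

    `probe` is any callable that sends one request with an injected
    "waitfor delay" of the given number of seconds and returns when the
    response arrives (a placeholder for your actual curl/requests call).

    A real injection point should take roughly d seconds for each injected
    delay d; a loaded server or a slow error handler produces delays that
    do NOT scale with d, which is the signature of a false positive.
    """
    for d in delays:
        start = time.monotonic()
        probe(d)
        elapsed = time.monotonic() - start
        # Allow tolerance*d of slack for network and server jitter
        if abs(elapsed - d) > tolerance * d:
            return False
    return True
```

The point isn't the code - it's that the observed delay must track the injected delay across several values before you believe the finding.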
If that fails - for instance, if they all delay 10 seconds, or show no appreciable delay at all - don't despair. SQLMAP tests much more thoroughly, and should be part of your toolkit anyway - try that. Or test manually - after a few websites you'll find that testing manually might be quicker than an exhaustive SQLMAP run (though maybe not as thorough).
If you use multiple methods (and there are a lot of different methods) and still can't verify the initial scan's finding, quite often it has to go into the false positives section of your report.
OS Command Injection - where you can execute unauthorized operating system commands from the web interface - is another common false positive, and for much the same reason. For this vulnerability, the scanner will often inject "ping -c 20 127.0.0.1" or "ping -n 20 127.0.0.1" - in other words, the injected command tells the webserver to ping itself, in this case 20 times, which on most operating systems creates a delay of roughly 20 seconds. As in the SQL injection example, you'll find that tests that depend on a predictable delay often get thrown off if they are executed during a busy scan. Running them after the scan (again, using your browser or curl) is often all you need to do to prove these findings false. Testing other commands, such as pinging or opening an ftp session to a test host on the internet (one that is monitoring for such traffic using tcpdump or syslog), is another good "sober second thought" test. But be aware that if the website you are testing has an egress filter applied to its traffic, a successful injection might not generate the traffic you are hoping for; it'll be blocked at the firewall. If you have out-of-band access to the site being assessed, creating a test file is another good test.
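For the out-of-band variant, you don't strictly need tcpdump on the test host; a minimal listener will do. The sketch below is an assumption-laden stand-in for that monitoring host: the port is arbitrary, the injected callback command (e.g. having the target fetch http://your-test-host:PORT/) is hypothetical, and as noted above, an egress filter can keep the callback from ever arriving even when the injection worked:

```python
import socket

def wait_for_callback(bind_addr="0.0.0.0", port=8443, timeout=30):
    """Listen for a single inbound TCP connection from the target.

    Run this on your test host, then inject a command on the target that
    connects back (a curl fetch, an ftp session, etc.). Returns the peer
    address if anything phones home, or None on timeout. A None result is
    NOT proof the injection failed - egress filtering can eat the traffic.
    """
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen(1)
        srv.settimeout(timeout)
        try:
            conn, peer = srv.accept()
            conn.close()
            return peer
        except socket.timeout:
            return None
```

A positive result here (a connection from the target's IP) is much stronger evidence than any timing measurement.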
Other tests can similarly produce false positives. For instance, any test that relies only on service banner grabs can be thrown off easily - either by admins putting a false banner in place, or when site updates patch packages and services but don't change that initially installed banner.
Long story short, never never never (never) believe the initial finding that your scanning tool gives you. All of the tools discussed here are good tools - they should all be in your toolbox, and in many cases at the top of your go-to list. Whether a tool is open source or closed, free or very expensive, they will all give you false positives, and every finding needs to be verified as either a true or false positive. In fact, you might not want to believe the results from your second tool either, especially if it tests the same way. Whenever you can, go back to first principles and verify manually. Or if it's in scope, verify with an actual exploit - there's nothing better than getting a shell to prove that you can get a shell!
For false negatives, you'll also want multiple tools and some good manual tests in your arsenal - if one tool misses a vulnerability, you may find that many or all of your tools test for that issue the same way. Often the best way to catch a false negative is to know how the target service runs, and how to test for that specific issue manually. If you are new to assessments and penetration tests, false negatives will be much harder to find, and no matter how good you are, you'll never know if you got all of them.
If you need to discuss false positives and negatives with a non-technical audience, reaching for non-technical tools is a good way to make the point. A hammer is a great tool, but while screws are similar to nails, a hammer isn't always the best way to deal with them.
Please use our comment form to tell us about false positives or false negatives that you've found in vulnerability assessments or penetration tests. Keep in mind that these usually aren't an indicator of a bad tool; they're usually just a case of needing a proper parallax view to get a better look at the situation.
Intalock aims for major growth and new vendors
Other infosec vendors include Rapid7 and Cisco's Sourcefire. McPherson hinted at some new vendor appointments to come. "Forty percent of our business in last financial year was in the security space and we are looking to triple that number; it is fair ...
Posted by InfoSec News on Oct 21: http://www.csoonline.com/article/2836294/data-breach/staples-confirms-data-breach-investigation.html
Posted by InfoSec News on Oct 21: http://www.bloomberg.com/news/2014-10-21/fbi-warns-of-hacks-by-moonlighting-foreign-agents.html
Posted by InfoSec News on Oct 21: http://www.darkreading.com/how-to-become-a-ciso-part-1/d/d-id/1316749
Posted by InfoSec News on Oct 21: http://news.techworld.com/security/3581701/researcher-creates-proof-of-concept-worm-for-network-attached-storage-devices/
Posted by InfoSec News on Oct 21: http://krebsonsecurity.com/2014/10/spike-in-malware-attacks-on-aging-atms/
Posted by InfoSec News on Oct 21: Forwarded from: Vic Vandal <vvandal (@) well com>
Posted by InfoSec News on Oct 21: http://www.nextgov.com/cybersecurity/2014/10/number-industries-getting-classified-cyberthreat-tips-dhs-has-doubled-july/96923/