InfoSec News

Remember back in 2010, when Google was in hot water for some wardriving activities, in which personal information was gathered from unencrypted wireless networks found during its Street View data collection? Deb wrote this up here == https://isc.sans.edu/diary.html?storyid=8794

Well, it looks like the discussion won't die - the FCC has just posted a summary of its findings, along with some good background and a chronology of events in its investigation == http://transition.fcc.gov/DA-12-592A1.pdf

You'll notice that it's heavily redacted. A version with much less redaction can be found here == http://www.scribd.com/fullscreen/91652398

It makes for very interesting reading. A few things in the paper stood out to me:

I thought it was sensible that the engineer didn't write a new tool for this - they used Kismet to collect the data, then massaged Kismet's output during their later analysis. Anyone who's been in almost any SANS class would realize how wrong using the tool that way was, but at least they didn't write something from scratch.
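To make the "massaged Kismet's output" step concrete, here is a minimal sketch of that kind of post-processing: parsing a Kismet log to pull out networks that advertise no encryption. The tag names follow Kismet's legacy netxml layout as I understand it, and the sample data is invented for illustration - none of this comes from the FCC report.

```python
# Illustrative sketch only: list unencrypted networks from a Kismet-style
# netxml log. Tag names ("wireless-network", "SSID", "encryption", "essid",
# "BSSID") are assumptions based on Kismet's legacy log format.
import xml.etree.ElementTree as ET

SAMPLE = """<detection-run>
  <wireless-network type="infrastructure">
    <SSID><encryption>None</encryption><essid>CoffeeShop</essid></SSID>
    <BSSID>00:11:22:33:44:55</BSSID>
  </wireless-network>
  <wireless-network type="infrastructure">
    <SSID><encryption>WPA+TKIP</encryption><essid>HomeNet</essid></SSID>
    <BSSID>66:77:88:99:AA:BB</BSSID>
  </wireless-network>
</detection-run>"""

def open_networks(netxml_text):
    """Return (BSSID, ESSID) pairs for networks advertising no encryption."""
    root = ET.fromstring(netxml_text)
    found = []
    for net in root.iter("wireless-network"):
        ssid = net.find("SSID")
        if ssid is None:
            continue
        # A network may advertise several encryption methods; "None" alone
        # means the traffic is in the clear.
        methods = [e.text for e in ssid.findall("encryption")]
        if methods == ["None"]:
            essid = ssid.findtext("essid", default="<hidden>")
            found.append((net.findtext("BSSID"), essid))
    return found

print(open_networks(SAMPLE))  # only CoffeeShop is unencrypted
```

The point is only that this is a post-processing step on Kismet's logs, not a collection tool in its own right.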
Page 2 outlines the various radio licenses held by Google. This caught my eye mostly because I'm in the process of studying up for my own license.
The suggestion and implementation for the data collection at the heart of the controversy came from unnamed engineers ("Engineer Doe" in the paper). I found it really interesting that the final findings document doesn't name names - I'd have thought that assigning responsibility would be one of the main purposes of this document, but hey, what do I know?
Engineer Doe actually outlined in a design document how the tool would collect payloads (pages 10-11), but then discounted the impact because the Street View cars wouldn't be in close proximity to any given user for an extended period of time. The approval for the activity came from a manager who (as far as this document is concerned) didn't understand the implications of collecting this info, or maybe didn't read the document, or missed the importance of that section - though a rather pointed question about where URL information was coming from was lifted out of one critical email.

Needless to say, violating privacy legislation just a little bit is like being a little bit pregnant - the final data included user IDs, passwords, health information, you name it. As they say, close only counts in horseshoes and hand grenades - NOT in compliance with privacy rules!

Long story short, this document outlines how the manager(s) of the project trusted the engineer's word on the legal implications of the activity. I see this frequently in my day job. Managers often don't know when to seek a legal opinion - in a lot of cases, if it sounds technical, it must be a technical decision, right? So they ask their technical people. Or, if they know they need a legal opinion, they frequently don't have a budget to go down that road, so they're left on their own to take their best shot at a "do the right thing" decision. As you can imagine, if the results of a decision like this ever come back to see the light of day, it seldom ends well. Google, of course, has a legal department on staff, and I'd imagine that one of its primary directives is to keep an eye on privacy legislation, regulations and compliance with said legislation. But you can't fault the legal team if the question never gets directed their way (back to middle management).

From a project management point of view, this nicely outlines how expanding the scope of a project without the approval of the project sponsor is almost always a bad idea. In most cases I've seen, the implications of changing the scope are all about impacts to budget and schedule, but in this case a good idea and a neat project (Google Street View) ended up being associated with activity that was deemed illegal, which is a real shame. From a project manager's perspective, exceeding the project scope is almost as bad a failure as not meeting the scope. Exceeding the scope means that you exceeded the budget or schedule, mis-estimated the budget or schedule, or, in this case, didn't get the legal homework done on the scope overage.

Take a minute to read the FCC doc (either version). It's an interesting chronology of a technical project's development and execution, mixed in with company politics, a legal investigation and a liberal sprinkling of "I don't recall the details of that event" type statements. Not the stuff that blockbuster movies are made of, but interesting nonetheless!

We invite your opinions, or any corrections if I've mis-interpreted any of this - please use our comment form. I've hit the high points, but I'm no more a lawyer than Engineer Doe.

Rob VandenBrink

Metafore

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
I was reading the other night, which, since I've migrated my library, means I was on my iPad.
My kid (he's 11) happened to be in the room, playing a game on one console or another. I'm deep in my book and he's deep in his game, when he pipes up with "Y'know, Dad? You should enable complex passwords on your tablet."

(Really, he said exactly that! I guess he was in Settings / Security and wasn't playing a game after all!)
"Why is that?" I said. (I'm hoping he comes up with a good answer here.)
"Because if somebody takes your tablet, it'll be harder for them to guess your password." (Good answer!)
"Good idea - is there anything else I should know?"
"If they guess your password wrong 10 times, your tablet will get wiped, so they won't get your stuff." (Oh - bonus points!)
So aside from giving me a really proud parent moment, why is this on the ISC page? Because it's really good advice, that's why!
It's surprising how many people use the last four digits of their phone number, their birthday, or, worse yet, their bank card PIN (yes, really) as a password - or have no password at all. And yet we have all kinds of confidential information on our tablets and phones - mostly in the form of corporate emails and sometimes documents.
As is the case in so many things, when we in the security community discuss tablet security, it's usually about the more advanced and interesting topics like remote management, remote data wipe or forensics. These are valuable discussions - but in a lot of cases, basic (and I mean REALLY BASIC) security 101 advice to our user community will go a lot further in improving our security posture. Advice like what I got from my kid:

Set a password!
Make sure it's reasonably complex (letters and numbers).
Make sure it's not a family member's name, phone number, birthday, bank PIN or something that might be found on your Facebook page.
Set a screen saver timeout.
Set the device to lock when you close the cover.
Delete any documents that you are finished with - remember, the doc on your tablet is just an out-of-date copy.
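The password advice above can be sketched as a simple screening function. This is a minimal illustration, not any vendor's actual passcode policy; the personal_info list, the six-character floor and the rules themselves are my assumptions.

```python
# Hedged sketch: screen a proposed tablet passcode against the obvious
# weak choices called out above. personal_info is a hypothetical list of
# names, phone numbers, birthdays and PINs the caller supplies.
import re

def passcode_is_reasonable(passcode, personal_info):
    """Return (ok, reason) for a proposed passcode."""
    if len(passcode) < 6:                       # assumed minimum length
        return False, "too short"
    if not (re.search(r"[A-Za-z]", passcode) and re.search(r"\d", passcode)):
        return False, "use both letters and numbers"
    for item in personal_info:
        if item and item.lower() in passcode.lower():
            return False, "contains personal information"
    return True, "ok"

info = ["smith", "5551234", "19740812", "4321"]  # hypothetical examples
print(passcode_is_reasonable("5551234", info))   # digits only: rejected
print(passcode_is_reasonable("smith99", info))   # family name: rejected
print(passcode_is_reasonable("blue42sky", info)) # passes the screen
```

A real mobile-OS passcode policy does more than this, of course - the point is only that the basics are easy to state and easy to check.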

This may seem like really basic advice, and that's because it is. But in the current wave of BYOD (Bring Your Own Device) policies at many organizations, almost zero attention is being paid to the security of the organization's data. BYOD seems to be about transferring costs to our users on one hand, and keeping them happy by letting them use their tablets and phones at work (or school) on the other.
Good resources for iPad security (as well as Android and other tablets) can be found in the SANS Reading Room ( http://www.sans.org/reading_room/ ).
Vendors also maintain security documentation - Apple has some good (but basic) guidance at == http://www.apple.com/ipad/business/docs/iPad_Security.pdf
NIST has guidance for Android and Apple (though both are a bit out of date):

Please use our comment form to pass along any tablet security tips or links you may have.


Rob VandenBrink

Metafore

SSCC 89 - InfoSec Europe trends, tat and tales
Naked Security
by Chester Wisniewski on April 29, 2012

After returning from BSides Austin for one day, I headed off to InfoSec Europe 2012 in London, UK. As the show was wrapping up, Chris Pace and I recorded a short Chet Chat from the show floor.

The story I am about to tell is similar to the diaries posted by Rob VandenBrink in July 2010, Mark Hofman in May 2011 and Daniel Wesemann in March 2012. This past week I got a call from someone I thought was a regular old telemarketer, until they said they were from a company in Texas providing Microsoft support. The caller had a very thick Indian accent. I played along like a dumb user (the lady kept getting very angry with me when I asked her to repeat things and said I didn't understand).

I got to look at my logs by running eventvwr from the Run prompt. In my application logs, I "learned" that warning and error messages were really viruses and that I should not click on them because they would multiply and destroy my motherboard. I also got to run "inf virus" - which just opens the Windows inf folder and disregards the word "virus" - and was asked if I had downloaded those files. Of course I said no, and she told me they were viruses and all sorts of evil things that had been downloaded to my computer. She then said that Microsoft had developed a very special piece of software that would take care of all of this for me, and she would help me. She asked me to type www.logmein123.com at the Run prompt. At this point, 40 minutes in, I told her I had to go somewhere. I asked if I could call her back, because I sure didn't want all that stuff on my computer. She said I could, gave me the number 773-701-5437, and said her name was Peggy. I didn't have time to finish the call, but I sure would have liked to fire up a VM and see what special software she had for me to install.
After the call, I started researching this type of scam and was surprised to see it seems to date back to the 2009 time frame. However, I could not find any statistics tracking this activity - maybe I am just looking in the wrong place. I saw guidance ranging from "contact your local law enforcement" to "send an email to antiphishing.org." I checked antiphishing.org and could not find any data on this trend, nor is there any mention of it in their report released 26 April 2012 summarizing 2H2011. It states: "This report seeks to understand trends and their significances by quantifying the scope of the global phishing problem. Specifically, this new report examines all the phishing attacks detected in the second half of 2011 (2H2011, July 1, 2011 through December 31, 2011)." This type of phishing is something APWG doesn't appear to track at this time.
I consider these calls to still be phishing attempts, because APWG defines phishing as "a criminal mechanism employing both social engineering and technical subterfuge to steal consumers' personal identity data and financial account credentials." The delivery vector in this case is not email but rather a phone call; the end result is still the same. So, where does that leave us for tracking the trend of fake calls whose target is your computer?
At this point in time, there is no central tracking of this delivery vector. However, stay tuned to the ISC: after discussing this with some of the other handlers, we are going to set up a method for reporting these attempts to us, so we can track and trend this delivery method. More will be posted as soon as the details are worked out.

UPDATE: The page for reporting these types of calls is now available at isc.sans.edu/reportfakecall.html. Please let us know what you think and if we have missed anything.
Of the Macs that have been infected by the Flashback malware, nearly two-thirds are running OS X 10.6, better known as Snow Leopard, a Russian antivirus company said.