(credit: Kaspersky Lab)

For the second time this month, Google has removed Android apps from its Google Play marketplace after a security researcher found the apps contained code that laid the groundwork for attackers to take administrative "root" control of infected devices.

"Magic Browser," as one app was called, was uploaded to Google's official Android app bazaar on May 15 and gained more than 50,000 downloads by the time it was removed, Kaspersky Lab Senior Research Analyst Roman Unuchek said in a blog post published Tuesday. Magic Browser was disguised as a knock-off of the Chrome browser. The other app, "Noise Detector," purported to measure the decibel level of sounds and had been downloaded more than 10,000 times. Both apps belong to a family of Android malware known as Ztorg, which has managed to sneak past Google's automated malware checks almost 100 times since last September.

Most Ztorg apps are notable for their ability to use well-known exploits to root infected phones. Root access gives the apps finer-grained control over the device and makes them much harder to remove. Ztorg apps are also concerning for their large number of downloads. A Ztorg app known as Privacy Lock, for instance, received one million installations before Google removed it last month, while an infected Pokémon Go guide racked up 500,000 downloads before its removal in September.


 

(credit: Aurich Lawson)

A Web-hosting service recently agreed to pay $1 million to a ransomware operation that encrypted data stored on 153 Linux servers and 3,400 customer websites, the company said.

The South Korean Web host, Nayana, said in a blog post published last week that initial ransom demands were for five billion won worth of Bitcoin, which is roughly $4.4 million. Company negotiators later managed to get the fee lowered to 1.8 billion won and ultimately landed a further reduction to 1.2 billion won, or just over $1 million. An update posted Saturday said Nayana engineers were in the process of recovering the data. The post cautioned that the recovery was difficult and would take time.

“It is very frustrating and difficult, but I am really doing my best, and I will do my best to make sure all servers are normalized,” a representative wrote, according to a Google translation.


 

(credit: Victorgrigas)

A raft of Unix-based operating systems—including Linux, OpenBSD, and FreeBSD—contain flaws that let attackers elevate low-level access on a vulnerable computer to unfettered root. Security experts are advising administrators to install patches or take other protective actions as soon as possible.

Stack Clash, as the vulnerability is being called, is most likely to be chained with other vulnerabilities to execute malicious code more effectively, researchers from Qualys, the security firm that discovered the bugs, said in a blog post published Monday. Such local privilege-escalation vulnerabilities can also pose a serious threat to hosting providers, because one customer can exploit the flaw to gain control over other customers' processes running on the same server. Qualys said it's also possible that Stack Clash could be exploited remotely to execute code directly.

"This is a fairly straightforward way to get root after you've already gotten some sort of user-level access," Jimmy Graham, director of product management at Qualys, told Ars. The attack works by causing a region of computer memory known as the stack to collide into separate memory regions that store unrelated code or data. "The concept isn't new, but this specific exploit is definitely new."


 

(credit: Corinne Kuhlmann)

Microsoft has sparked a curious squabble over malware discovery and infection rates. At the start of the month, security firm Check Point reported on a browser hijacker and malware downloader called Fireball. The firm claimed that it had recently discovered the Chinese malware and that it had infected some 250 million systems.

Today, Microsoft said no. Redmond claimed that actually, far from being a recent discovery, it had been tracking Fireball since 2015 and that the number of infected systems was far lower (though still substantial) at perhaps 40 million.

The two companies do agree on some details. They say that the Fireball hijacker/downloader is spread by being bundled with programs that users install deliberately. Microsoft adds that these installations are often media and apps of "dubious origin," such as pirated software and keygens. Check Point says that the software was developed by a Chinese digital marketing firm named Rafotech and fingers similar installation vectors; it piggybacks on (legitimate) Rafotech software and may also be spread through spam, other malware, and other (non-Rafotech) freeware.


 

(credit: S-8500)

The WCry ransomware worm has struck again, this time prompting Honda to halt production in one of its Japan-based factories after the company found infections in a broad swath of its computer networks, according to media reports.

The automaker shut down its Sayama plant northwest of Tokyo on Monday after finding that WCry had affected networks across Japan, North America, Europe, China, and other regions, Reuters reported Wednesday. Discovery of the infection came on Sunday, more than five weeks after the onset of the NSA-derived ransomware worm, which struck an estimated 727,000 computers in 90 countries. The mass outbreak was quickly contained through a major stroke of good luck. A security researcher largely acting out of curiosity registered a mysterious domain name contained in the WCry code that acted as a global kill switch that immediately halted the self-replicating attack.

Honda officials didn't explain why engineers found WCry in their networks 37 days after the kill switch was activated. One possibility is that engineers had mistakenly blocked access to the kill-switch domain. That would have caused the WCry exploit to proceed as normal, as it did in the 12 or so hours before the domain was registered. Another possibility is that the WCry traces in Honda's networks were old and dormant, and the shutdown of the Sayama plant was only a precautionary measure. In any event, the discovery strongly suggests that as of Monday, computers inside the Honda network had yet to install a highly critical patch that Microsoft released in March.


 

A configuration screen found in the Drifting Deadline exploit. (credit: WikiLeaks)

Documents published Thursday purport to show how the Central Intelligence Agency has used USB drives to infiltrate computers so sensitive they are severed from the Internet to prevent them from being infected.

More than 150 pages of materials published by WikiLeaks describe a platform code-named Brutal Kangaroo that includes a sprawling collection of components to target computers and networks that aren't connected to the Internet. Drifting Deadline was a tool that was installed on computers of interest. It, in turn, would infect any USB drive that was connected. When the drive was later plugged into air-gapped machines, the drive would infect them with one or more pieces of malware suited to the mission at hand. A Microsoft representative said none of the exploits described work on supported versions of Windows.

The infected USB drives were at least sometimes able to infect computers even when users didn't open any files. The so-called EZCheese exploit, which was neutralized by a patch Microsoft appears to have released in 2015, worked any time a malicious file icon was displayed by Windows Explorer. A later exploit known as Lachesis used the Windows autorun feature to infect computers running Windows 7. Lachesis didn't require Explorer to display any icons, but the drive letter the thumb drive was mounted on had to be included in a malicious link. The RiverJack exploit, meanwhile, used the Windows library-ms function to infect computers running Windows 7, 8, and 8.1. RiverJack worked only when a library junction was viewed in Explorer.


 
Multiple Pivotal Products CVE-2017-4974 SQL Injection Vulnerability
 
[CVE-2017-8813] Double-Fetch Vulnerability in Linux-4.10.1/drivers/media/pci/saa7164/saa7164-bus.c
 
[SECURITY] [DSA 3893-1] jython security update
 
Sitecore 7.1-7.2 Cross Site Scripting Vulnerability
 
[SECURITY] [DSA 3890-1] spip security update
 

One of our readers (thanks, Gebhard) mailed us a link to an article on what the press is apparently now calling a "Revenge Wipe": a system administrator leaves the organization and, as a last hurrah, deletes or locks out various system or infrastructure components.

In this case, the organization was a hosting company in the Netherlands (Verelox). In the case of cloud providers, a disgruntled admin may have access to delete entire networks, hosts, and associated infrastructure. Where it's a smaller CSP, the administrator may also have access to delete customer servers and infrastructure as well. In Verelox's situation, that seems to have been the case (from their press release, at least).

The classic example of this is the City of San Francisco in 2008, where the main administrator (Terry Childs) refused to give up the credentials to the city's FiberWAN network infrastructure, even after being detained by law enforcement (he eventually did give the credentials directly to the Mayor). I've listed several other examples in the references below - note that this was not a new thing even in 2008; this has been a serious consideration for as long as we've had computers.

So, how should an organization protect themselves from a situation like this?

Back up Job Responsibilities:

Know who has access to what. Have multiple people with access to each system. Having any system with only a single administrator can turn into a real problem in the future. DOCUMENT things. BACKUP your configurations in addition to your data.

Use Authorization:

It can be difficult, but wherever possible use admin accounts with only the rights required. It's very easy to build an "every admin has all rights" infrastructure. It's likely more difficult to build a "why does the VMware admin need the rights to delete an entire LUN on the SAN?" config, but it's important to think along those lines wherever you can.

Use a back-end directory for authentication to network infrastructure:

What this often means is that folks implement NPS (RADIUS) services in Active Directory. This allows you to audit access and changes during regular production, and also allows you to deactivate network administrator accounts in one place.

Where you can, use Two-Factor Authentication:

Use 2FA wherever possible; this makes password attacks much less of a threat. 2FA is a definite easy implementation for VPN and other remote access, and also for administration of almost all cloud services for your organization.

Just as a side note - I am still seeing that many smaller CSPs have not gone forward with 2FA - if you are looking at any new Cloud services, adding Two Factor Authentication as a must-have is a good way to go.
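For what it's worth, the one-time codes generated by most 2FA apps are plain TOTP (RFC 6238) values, so there is nothing exotic about requiring them. A minimal sketch in Python (illustrative only, not a production implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps elapsed."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII seed "12345678901234567890", T=59
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

The server and the token share only the secret; each 30-second window yields a fresh code, which is why a stolen or guessed password alone is no longer enough.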

Deal with Stale Accounts:

Keep track of accounts that are not in use. I posted a PowerShell script for this (targeting AD) in a previous story: https://isc.sans.edu/diary/The+Powershell+Diaries+-+Finding+Problem+User+Accounts+in+AD/19833

Deal with Service Accounts:

Service accounts are used in Windows and other operating systems to run things like Windows Services, or to allow scripts to log in to various systems as they run. The common situation is that these service accounts have Domain Administrator or local root access (depending on the OS).

Know in your heart that the person you are protecting the organization from is the same person who likely created one or all of these accounts.

Be sure that these service accounts are documented as they are created, so that if a mass change is required it can be done quickly.

Ensure that these accounts use a central directory (such as AD or LDAP), so that if you need to change or disable them, there is one place to go.

I posted a PowerShell script in a previous story to inventory service accounts in AD: https://isc.sans.edu/forums/diary/Windows+Service+Accounts+Why+Theyre+Evil+and+Why+Pentesters+Love+them/20029/

Restrict Remote Access:

Be sure that your administrative accounts don't have remote access (VPN, RDP Gateway, Citrix CAG, etc.). This falls into the same category as "don't allow administrators to check mail or browse the Internet while logged in with Domain Admin or root privileges."

On the day:

On the day of termination, be sure that all user accounts available to the departing administrator are deactivated during the HR interview. If you've used a central authentication store, this should be easy (or at least easier).

Also force a global password change for all users (your departing admin has probably done password resets for many of your users), and if you have any stale accounts, simply deactivate them.

For service accounts, update the passwords for all of these. This is a good time to be sure that you aren't following a pattern for these passwords - use long random strings (l33t-speak versions of your company or product name are not good choices here).
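As an illustration of "long random strings", here is a minimal sketch using Python's secrets module (the helper name and character set are my own choices, not a policy recommendation):

```python
import secrets
import string

# Characters drawn from for new service-account passwords (assumed set).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

def service_account_password(length: int = 32) -> str:
    """Cryptographically random string: no company names, no l33t-speak."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(service_account_password())  # a different 32-character string every call
```

The secrets module uses the OS CSPRNG, so the output has no human-guessable pattern to feed a wordlist attack.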

I'm sure that I've missed some important things - please use our comment form to fill out the picture. This is a difficult topic; since many of us are admins for one thing or another, it really hits close to home. But for the same reason, it's important that we deal with it correctly, or as correctly as the situation allows.

References:

https://www.heise.de/newsticker/meldung/Revenge-Wipe-Ex-Admin-loescht-Daten-bei-niederlaendischem-Provider-3740243.html?view=print

https://translate.google.com/translate?sl=auto&tl=en&u=https%3A//www.heise.de/newsticker/meldung/Revenge-Wipe-Ex-Admin-loescht-Daten-bei-niederlaendischem-Provider-3740243.html%3Fview%3Dprint

https://www.schneier.com/blog/archives/2008/07/disgruntled_emp.html

http://www.infoworld.com/article/2653004/misadventures/why-san-francisco-s-network-admin-went-rogue.html

https://www.scmagazine.com/former-system-admin-sentenced-to-34-mo-for-hacking-former-employer/article/640254/

https://www.wired.com/2016/06/admin-faces-felony-deleting-files-flawed-hacking-law/

http://www.independent.co.uk/news/business/news/disgruntled-worker-tried-to-cripple-ubs-in-protest-over-32000-bonus-481515.html

===============
Rob VandenBrink
Compugen

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
 

 
Microsoft Malware Protection Engine CVE-2017-8558 Remote Code Execution Vulnerability
 
Huawei Smart Phones CVE-2017-8143 Local Denial of Service Vulnerability
 
NetBSD CVE-2017-1000378 Arbitrary Code Execution Vulnerability
 

Last month's entertainment for many of us was of course the WannaCry/MS17-010 update. For some of you it was a relaxing time, just like any other month. Unfortunately for the rest of us it was a rather busy period trying to patch systems that in some cases had not been patched in months or even years. Others discovered that whilst security teams had been saying "you want to open what port to the internet?", firewall rules were approved allowing port 445 and in other cases even 139. Another group of users discovered that the firewall that used to be enabled on their laptop was no longer enabled whilst connected to the internet. Anyway, that was last month. On the back of it we all made improvements to our vulnerability management processes. You did, right?

Ok, maybe not yet; people are still hurting. However, when an event like this happens it is a good opportunity to revisit the process that failed, identify why it went wrong for you and make improvements. Not the sexy part of security, but we can't all be threat hunting 24/7.

If you haven't started yet, or the new process isn't quite where it needs to be, where do you start?
Maybe start with how fast or slow you should patch. Various standards suggest that you must be able to patch critical and high-risk issues within 48 hours. Not impossible if you approach it the right way, but you do need to have the right things in place to make this happen.
You will need:

  • Asset information - you need to know what you have, how critical it is and, of course, what is installed on it. Look at each system you have, evaluate the confidentiality, integrity and availability requirements of the system, and categorise systems into those critical and less critical to the organisation.
  • Vulnerability/patch information - you need information from vendors, open source and commercial alike. Subscribe to the various lists, get a local RSS feed, etc. Vendors are generally quite keen to let you know once they have a patch.
  • Assessment method - the information received needs to be evaluated. Review the issue. Are the systems you have vulnerable? Are those systems that are vulnerable flagged as important to the business? If the answer is yes to both questions (you may have more), then they go on the "must patch now" list. The assessment method should contain a step to document your decision. This will keep auditors happy, but also allows you to better manage risk.
  • Testing regime - speed in patching processes comes from the ability to test the required functionality quickly and the reliability of those tests. Having standard tests, or even better automated tests, can speed up the validation process allowing patching to continue.
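The yes/yes assessment rule above can be sketched in a few lines of Python (the field names and fleet below are hypothetical; your asset register will look different):

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    critical: bool                     # from your asset information
    installed: set = field(default_factory=set)

def triage(assets, vulnerable_product, decision_log):
    """Apply the yes/yes rule: vulnerable AND business-critical => patch now.
    Every decision is logged, which documents the call for the auditors."""
    must_patch_now = []
    for asset in assets:
        vulnerable = vulnerable_product in asset.installed
        verdict = "patch now" if (vulnerable and asset.critical) else "defer/track"
        decision_log.append((asset.name, vulnerable, verdict))
        if verdict == "patch now":
            must_patch_now.append(asset.name)
    return must_patch_now

log = []
fleet = [Asset("dc01", True, {"smbv1"}), Asset("kiosk07", False, {"smbv1"})]
print(triage(fleet, "smbv1", log))  # prints ['dc01']
```

The point is not the code but the shape: the decision is mechanical once asset criticality and vulnerability data exist, and the log is produced as a side effect rather than as an afterthought.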

Once you have the four core ingredients you are now in a position to know what vulnerabilities are present and hopefully patchable. You know the systems that are most affected by them and have the highest level of risk to the organisation.

The actual mechanics of patching are individual to each organisation. Most of us, however, will be using something like WSUS, SCCM or third-party patching products, and/or their Linux equivalents like Satellite, Puppet, Chef, etc. In the tool used, define the various categories of systems you have, reflecting their criticality. Ideally have a test group for each; Dev or UAT environments, if you have them, can be great for this. I also often create a "The Rest" group. This category contains servers that have a low criticality and can be rebooted without much notice. For desktops, I often create a test group, a pilot group and a group for all remaining desktops. The pilot group has representatives of most if not all types of desktops/notebooks used in the organisation.

When patches are released they are evaluated and, if they are to be pushed, they are released to the test groups as soon as possible. Basic functionality and security testing is completed to make sure that patches are not causing issues. Depending on the organisation we often push DEV environments first, then UAT after a cycle of testing. Within a few hours of being released you should have some level of confidence that the patches are not going to cause issues. Your timezone may even help you here. In AU, for example, patches are often released during the middle of our night, which means other countries may already have encountered issues and reported them (keep an eye on the ISC site) before we start patching.

The next step is to release the patch to the "The Rest" group and, for desktops, to the pilot group. Again, testing is conducted to gain confidence that the patch is not causing issues. Remember these are low-criticality servers and desktops. Once happy, start scheduling the production releases. Post reboot, run the various tests to restore confidence in the system and you are done.

The biggest challenge in the process is getting a maintenance window to reboot. The best defence against having your window denied is to schedule windows in advance and get the various business areas to agree to them. Patch releases are pretty regular, so they can be scheduled ahead of time. I like working one or even two years in advance.

The second challenge is the testing of systems post patching. This will take the most prep work. Some organisations will need to get people to test systems. Some may be able to automate tests. If you need people, organise test teams and schedule their availability ahead of time to help streamline your process. Anything that can be done to get confidence in the patched system faster will help meet the 48 hour deadline.

If going fast is too daunting, make the improvements in baby steps. If you generally patch every three months, implement your own ideas, or some of the above, and see if you can reduce it to two months. Once that is achieved, try to reduce it further.

If you have your own thoughts on how people can improve their processes, or you have failed (we can all learn from failures), then please share. The next time there is something similar to WannaCry we all want to be able to say "sorted that ages ago".

Mark H - Shearwater

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
 


  1. Introduction

Recently, I was confronted with a scenario where a very suspicious Windows pop-up message was shown to a specific user on a corporate network. It was a default Yes/No Windows dialog box and, although I cannot reveal the message content, I can assure you that it was in the context of what the user was doing on his computer at that moment.

As we were dealing with a major incident on the same network, our first assumption was that someone had compromised that machine and was controlling it remotely through a reverse connection - the type of situation that calls for a rapid response.

However, after a few hours hunting for any piece of malware on that machine, including operating system events, network connections, user Internet history, e-mail attachments, external devices and so on, nothing interesting was found. In fact, the evidence came from a source I'd never imagined could help me in an incident response. It came from Windows Error Reporting (WER), as described in this diary.

  2. The subtle clue

As no malware evidence was found, we decided to get back to the drawing board and, after looking carefully at the strange message, I noticed that whatever application the attacker had used to present the message was hanging. The classic "(Not Responding)" suffix was there in the title bar, as in the sample shown in Figure 1.

Figure 1 - Not Responding application sample

By default, when an application hangs or crashes on a Windows system, the Windows Error Reporting (WER) mechanism [1] automatically gathers detailed debug information, including the application name, loaded modules and, more importantly, a heap dump, which comprises the data that was loaded in the application at the time the memory was collected. All this data is reported to Microsoft, which, in turn, may provide users with solutions for known problems.

As the application used to send the strange message had hung, the chances were that we could find generated WER artifacts to analyze and track the supposed intrusion. Thus, our next step was looking for them.

  3. Collecting WER information

To demonstrate how we found and analyzed WER files related to that hung application without exposing real incident information, we've created a similar scenario and used it for this analysis.

  4. Crashing an application

Using a Windows 10 default installation machine in our lab, the first thing was forcing an application to crash. For this purpose, we used the text editor application Notepad++ as the application to be crashed and Process Explorer tool [2] as the means to cause it.

For further analysis purposes, we typed a simple text in the editor, as seen in Figure 2, and, through Process Explorer, started killing random ntdll.dll threads of the Notepad++ process, as seen in Figure 3.

Figure 2 - Sample text typed in the editor

Figure 3 - Killing application threads

It didn't take long for the application to hang, as seen in Figure 4, and for the corresponding events to appear in the Application event log, as seen in Figure 5.

Figure 4 - Hanging application

Figure 5 - Application event log evidence

Note that the event ID for a crashed application is 1000, while for hanging applications the value is 1002.

The other evidence is the WER files themselves, which, depending on the Windows version, are generated in different paths and can be found through different Control Panel menu options. On Windows 7, for example, WER settings and report access can be found through Action Center, and on Windows 8 through Problem Reports and Solutions.

On Windows 10, used in our demonstration scenario, WER reports can be reached through the menu Control Panel > System and Security > Security and Maintenance, where the specific problem report can be looked up, as seen in Figure 6, and its details viewed, as seen in Figure 7.

Figure 6 - Looking for the specific problem report

Figure 7 - WER problem details

Another way to find WER files is to go directly to the path where they are created on disk. On Windows 10, WER report files can be reached through the path %SystemDrive%\ProgramData\Microsoft\Windows\WER, as seen in Figures 8 and 9.

Figure 8 - WER report path

Figure 9 - WER file list

  5. Analyzing the evidence

Now, drawing a parallel to the real incident case, when we searched for event log evidence, we found that an application had hung on that machine moments before the message screenshot time. Better than that, we also found the WER files associated with that application hang!

You may be thinking right now how I could find WER files on the machine, as they are deleted from disk after being sent to Microsoft. The point is: they weren't.

  • The WER report wasn't successfully sent to Microsoft, as seen in Figure 10.

    Figure 10 - Problem uploading WER during the MITM attack

    Heading back to the real scenario, with WER files in our hands, we could discover the name of the application that possibly generated the suspicious pop-up message and, by inspecting the heap dump file, we could confirm it. It turns out that we found the exact pop-up message content in the memory dump file using a simple strings command, although there exists a more orthodox way to inspect and debug those files using WinDbg [4].
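The strings pass over a heap dump can be reproduced in a few lines of Python - a simplified sketch of what the Unix strings utility does (the sample "dump" bytes below are made up for illustration):

```python
import re

def extract_strings(blob: bytes, min_len: int = 4) -> list:
    """Return printable-ASCII runs of at least min_len bytes, like `strings`."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

# Toy stand-in for a WER heap dump: message text surrounded by binary noise.
dump = b"\x00\xffDo you want to continue?\x00\x07junk\x01notepad++.exe\x00"
print(extract_strings(dump))  # prints ['Do you want to continue?', 'junk', 'notepad++.exe']
```

Running this over the WER heap dump surfaces dialog text, file paths and module names without any debugger at all; WinDbg is only needed once you want structure rather than raw strings.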

    Employing the same strings approach on the dump from our demonstration scenario, we could find the text typed into the editor, as seen in Figure 11.

    Figure 11 - Evidence found

    6. Final words

    As we could see, in addition to helping Windows users deal with application crashes and hangs, this case demonstrated that WER can be extremely useful for post-mortem analysis. Depending on the scenario, it's like having an application memory dump to analyze as part of your DFIR activities without having collected it during the incident.

    On the other hand, it raises some concerns regarding data leakage through the memory dump files. Considering that you have consented to send that information to Microsoft (whether or not you remember having done so [5]), there exists the possibility of that content being accessed by third parties - like an intruder who has escalated privileges on the targeted machine, or simply that new employee who is now using your machine when you thought that removing your user home directory would be enough.

    Things may get worse if we consider that the crashed or hung application is a password manager, for example. We ran experiments on a group of them and privately reported those that allowed us to recover clear-text passwords from WER memory dumps. The Enpass password manager has already published a security bulletin and a new version fixing the vulnerability [6], for which CVE-2017-9733 [7] has been assigned.

    For Windows application developers in general, to prevent sensitive information exfiltration from crash dumps, we recommend either completely disabling WER triggering by using the AddERExcludedApplication or WerAddExcludedApplication functions [8], or excluding the memory region that may contain sensitive information using the function WerRegisterExcludedMemoryBlock [9] (available only on Windows 10 and later).

    A more comprehensive solution would be for Windows itself to protect report files by encrypting them - at least the memory dumps. Interestingly, there is a patent from IBM about exactly this: protecting application core dump files [10]. Today, encryption is employed only while sending WER report files to Microsoft over SSL connections.

    Regarding our case, in the end we fortunately realized that there was no violation or intrusion on that machine. It was, indeed, a misuse of a legitimate tool by an internal employee - one that made us learn a bit more about the importance of WER files to digital forensics and user privacy.

    7. References

    [1] https://msdn.microsoft.com/en-us/library/windows/desktop/bb513613(v=vs.85).aspx

    [2] https://technet.microsoft.com/en-us/sysinternals/processexplorer.aspx

    [3] https://msdn.microsoft.com/pt-br/library/windows/desktop/bb513638(v=vs.85).aspx

    [4] https://blogs.msdn.microsoft.com/johan/2007/11/13/getting-started-with-windbg-part-i/

    [5] https://privacy.microsoft.com/en-US/windows-10-feedback-diagnostics-and-privacy

    [6] https://www.enpass.io/blog/an-update-on-the-reported-vulnerability-regarding-wer-in-enpass-for-windows-pc/

    [7] https://cve.mitre.org/cgi-bin/cvename.cgi?name=2017-9733

    [8] https://msdn.microsoft.com/en-us/library/windows/desktop/bb513635(v=vs.85).aspx

    [9] https://msdn.microsoft.com/en-us/library/windows/desktop/mt492587(v=vs.85).aspx

    [10] https://www.google.com/patents/US20090172409

    Renato Marinho

    Morphus Labs | linkedin.com/in/renatomarinho | @renato_marinho

    (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

    We do continue to receive reports about DDoS extortion e-mails. These e-mails are essentially spammed to the owners of domains based on whois records. They claim to originate from well-known hacker groups like Anonymous that have been known to launch DDoS attacks in the past. The e-mails essentially use the notoriety of the group's name to make the threat sound more plausible. But there is no evidence that these threats originate from these groups, and so far we have not seen a single case of a DDoS being launched after a victim received one of these e-mails. So no reason to pay :)

    Here is an example of such an e-mail (I anonymized some of the details, like the bitcoin address and the domain name):

    We are Anonymous hackers group.
    Your site [domain name] will be DDoS-ed starting in 24 hours if you don't pay only 0.05 Bitcoins @ [bitcoin address]
    Users will not be able to access sites host with you at all.
    If you don't pay in next 24 hours, attack will start, your service going down permanently. Price to stop will increase to 1 BTC and will go up 1 BTC for every day of attack.
    If you report this to media and try to get some free publicity by using our name, instead of paying, attack will start permanently and will last for a long time.
    This is not a joke.
    Our attacks are extremely powerful - over 1 Tbps per second. No cheap protection will help.
    Prevent it all with just 0.05 BTC @ [bitcoin address]
    Do not reply, we will not read. Pay and we will know it's you. AND YOU WILL NEVER AGAIN HEAR FROM US!
    Bitcoin is anonymous, nobody will ever know you cooperated.

    This particular e-mail was rather cheap. Other e-mails asked for up to 10 BTC.

    There is absolutely no reason to pay any of these ransoms. But if you receive an e-mail like this, there are a couple of things you can do:

    • Verify your DDoS plan: Do you have an agreement with an anti-DDoS provider? A contact at your ISP? Try to make sure everything is set up and working right.
    • We have seen these threats being issued against domains that are not in use. It may be best to remove DNS for the domain if this is the case, so your network will not be affected.
    • Attackers often run short tests before launching a DDoS attack. Can you see any evidence of that, such as a brief, unexplained traffic spike? If so, take a closer look: the threat becomes more serious if you can detect an actual test. The purpose of the test is often to assess the firepower needed to DDoS your network.
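The last point can be automated to some degree. As a rough illustration, here is a minimal Python sketch that flags a short burst standing out against a trailing baseline; the per-minute request counts, window size, and threshold factor are all hypothetical values you would replace with your own monitoring data:

```python
# Flag short, unexplained traffic spikes that could be an attacker's
# pre-DDoS "test" run. `counts` is a hypothetical list of per-minute
# request totals; window and factor are arbitrary tuning choices.
def find_spikes(counts, window=10, factor=5):
    spikes = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window  # trailing average
        if baseline > 0 and counts[i] > factor * baseline:
            spikes.append(i)  # minute i stands out against the baseline
    return spikes

# Steady ~100 requests/minute with a brief burst at minute 12.
traffic = [100] * 12 + [900] + [100] * 5
print(find_spikes(traffic))  # -> [12]
```

In practice you would feed this from flow records or web-server logs; the point is simply that a short, isolated spike shortly after receiving such an e-mail deserves a closer look.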

    And please forward any e-mails like this to us. It would be nice to get a few more samples to look for patterns. Like I said above, this isn't new, but people still appear to pay up in response to these fake threats.

    ---
    Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute

    (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
     

    Malicious files are generated and spread over the wild Internet daily (read: hourly). The goal of the attackers is to use files that are:

    • not known to signature-based solutions
    • not easy for the human eye to read

    That's why many obfuscation techniques exist to fool automated tools and security analysts. In most cases, it's just a question of time to decode the obfuscated data. A classic technique is to use the XOR cipher[1]. This is definitely not a new technique (see a previous diary[2] from 2012), but it is still heavily used, and many tools can automate the search for XOR'd strings. Viper, the binary analysis and management framework, is a good example. It can scan a file for well-known strings hidden with XOR:

        viper tmpnYaBJs > xor -a
        [*] Searching for the following strings:
        - This Program
        - GetSystemDirectory
        - CreateFile
        - IsBadReadPtr
        - IsBadWritePtr
        - GetProcAddress
        - LoadLibrary
        - WinExec
        - CreateFile
        - ShellExecute
        - CloseHandle
        - UrlDownloadToFile
        - GetTempPath
        - ReadFile
        - WriteFile
        - SetFilePointer
        - GetProcAddr
        - VirtualAlloc
        - http
        [*] Hold on, this might take a while...
        [*] Searching XOR
        [!] Matched: http with key: 0x74
        [*] Searching ROT

    But XOR is not the only technique. Here is an example of an obfuscated string found in a malicious script:

        var bcacfdfaebbbfDeck = new ActiveXObject(dbdbfaeefccaee(+L+^%^LK%,LpL(KeL^%z%+%u%u
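To show what such an XOR scan is doing under the hood, here is a toy Python sketch (my own illustration, not Viper's actual code) that brute-forces all 256 single-byte XOR keys against a buffer and reports which keys reveal one of a few known plaintext strings:

```python
# Brute-force all 256 single-byte XOR keys and report which keys
# reveal a known string -- a toy version of an automated XOR scan.
STRINGS = [b"http", b"CreateFile", b"LoadLibrary", b"WinExec"]

def xor_scan(data: bytes):
    hits = []
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        for s in STRINGS:
            if s in decoded:
                hits.append((hex(key), s.decode()))
    return hits

# Simulate a sample where the payload was XORed with key 0x74,
# as in the scan output above.
sample = bytes(b ^ 0x74 for b in b"GET http://example.com")
print(xor_scan(sample))  # -> [('0x74', 'http')]
```

Single-byte XOR is trivially brute-forceable precisely because the key space is only 256 values, which is why this kind of scan is a standard first step when triaging a suspicious file.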

    I took some time to check how the obfuscation was performed. How does it work?

    The position of each character is looked up in the $data variable and decreased by one. The character found at that new position is returned, building up a string of hex codes. Finally, the hex codes are converted into the final string. Example with the first two characters of the sample above:

    $data = "SYOm7L-3^ojXtMA2Kbk_FN)GB.$1PJgR"

    • '+' is located at pos 20; take the character at position 19 (20 - 1): '5'
    • 'L' is located at pos 5; take the character at position 4 (5 - 1): '7'
    • "57" is the hex code for 'W'

    The script does this with two small functions:

        // Convert a string from hex chars to string.
        // In:  575363726970742E7368656C6C
        // Out: WScript.shell
        ...

        // Convert the obfuscated string by shifting by 1 char
        function deobfuscate(string, step) {
            var data = "SYOm7L-3^ojXtMA2Kbk_FN)GB.$1PJgR";
            ...
        }

        var s = deobfuscate("%zL(L(Lp^2KNKN^P^z^+Ke^P^+^(Ke^+^KKe^P^p^PKN%u%N%L%NKe%,%0%L");

    The decoded string is:

        hxxp://185.154.52.101/logo.img

      And once you understand how to deobfuscate, it is easy to write the reverse function:

          function obfuscate(string, step) {
              var data = "SYOm7L-3^ojXtMA2Kbk_FN)GB.$1PJgR";
              ...
          }

          var foo = obfuscate("https://isc.sans.edu");

      which returns:

          %zL(L(LpL^^2KNKN%,L^%^KeL^%P%eL^Ke%+%(L+
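The obfuscate/deobfuscate pair can be sketched in Python as follows. This is a toy re-implementation under a simplifying assumption: it substitutes a plain hex-digit alphabet for the malware's $data string, but the mechanics are the same, shift each character of the hex-encoded payload by `step` positions in the alphabet, then shift back and decode the hex pairs when decoding:

```python
# Toy re-implementation of the obfuscate/deobfuscate pair.
# ALPHABET is a hypothetical stand-in for the malware's $data string.
ALPHABET = "0123456789abcdef"

def obfuscate(s: str, alphabet: str = ALPHABET, step: int = 1) -> str:
    hex_str = s.encode().hex()  # hex-encode the payload first
    # Replace each hex char by the one `step` positions further on.
    return "".join(alphabet[(alphabet.index(c) + step) % len(alphabet)]
                   for c in hex_str)

def deobfuscate(obf: str, alphabet: str = ALPHABET, step: int = 1) -> str:
    # Shift every character back by `step`, rebuilding the hex string,
    # then decode the hex pairs into the final string.
    hex_str = "".join(alphabet[(alphabet.index(c) - step) % len(alphabet)]
                      for c in obf)
    return bytes.fromhex(hex_str).decode()

enc = obfuscate("WScript.Shell")
print(deobfuscate(enc))  # -> WScript.Shell
```

With this toy alphabet, 'W' hex-encodes to "57", and shifting each digit forward by one yields "68"; decoding simply reverses the walk through the alphabet.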

      Of course, the method analyzed here is a one-shot! The number of ways to obfuscate data is unlimited...

      [1]https://en.wikipedia.org/wiki/XOR_cipher
      [2]https://isc.sans.edu/forums/diary/Decoding+Common+XOR+Obfuscation+in+Malicious+Code/13354

      Xavier Mertens (@xme)
      ISC Handler - Freelance Security Consultant
      PGP Key

      (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

     