(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Amazon has been experiencing an outage of its S3 service (Simple Storage Service) for a few hours. According to the Amazon status dashboard[1], only the US-EAST-1 region is affected. Because many other Amazon services rely on S3, this outage could have an impact on many websites and web services.


Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant


More than 1 million websites running the WordPress content management system may be vulnerable to hacks that allow visitors to snatch password data and secret keys out of databases, at least under certain conditions.

The vulnerability stems from a "severe" SQL injection bug in NextGEN Gallery, a WordPress plugin with more than 1 million installations. Until the flaw was recently fixed, NextGEN Gallery allowed input from untrusted visitors to be included in WordPress-prepared SQL queries. Under certain conditions, attackers can exploit the weakness to pipe powerful commands to a Web server's backend database.

"This is quite a critical issue," Slavco Mihajloski, a researcher with Web security firm Sucuri, wrote in a blog post published Monday. "If you're using a vulnerable version of this plugin, update as soon as possible."


D-link DI-524 CVE-2017-5633 Multiple Cross Site Request Forgery Vulnerabilities
Sage XRT Treasury CVE-2017-3183 SQL Injection Vulnerability
Amazon Kindle Setup CVE-2017-6189 DLL Loading Local Code Execution Vulnerability
Multiple Intel Ethernet Controller CVE-2016-8105 Denial of Service Vulnerability
Iceni Argus Multiple Security Vulnerabilities
Linux Kernel CVE-2017-6353 Incomplete Fix Local Denial of Service Vulnerability
Multiple F5 BIG-IP Products CVE-2016-9245 Denial of Service Vulnerability
Linux Kernel CVE-2017-6074 Local Denial of Service Vulnerability
Iceni Argus CVE-2016-8715 Remote Code Execution Vulnerability
Advisory X41-2017-001: Multiple Vulnerabilities in X.org

This is a guest diary submitted by Remco Verhoef.

The cloud brings a lot of interesting opportunities, enabling you to scale your server farm up and down depending on the load. Everything is taken care of automatically by auto scaling groups. There is nothing to worry about anymore.

But this brings me to the following point, particularly because IPv4 addresses are harder and harder to come by: how quickly are public IP addresses being reused, and what if I can collect requests, specifically HTTP(S) requests, intended for a prior user of the IP?

Easily said. I've created a server which just stores all requests in an Elasticsearch cluster, a client which does nothing more than listen for HTTP and HTTPS (with an expired self-signed certificate), and a userdata script that automatically starts the client, waits for about 10 minutes, and then shuts it down again. I've configured spot instances in multiple regions and kept them running for a while. When a spot instance is shut down (because of the price, or because the instance shuts itself down), a new instance is automatically started if the price is still within the given range.
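The request-capturing client can be sketched as follows. This is a minimal, hypothetical reconstruction in Python, not the author's actual code: it accepts any request on a reused IP, records the interesting fields, and answers 200 so that callers keep talking.

```python
import http.server
import threading

captured = []  # every request seen on the reused IP ends up here

class CaptureHandler(http.server.BaseHTTPRequestHandler):
    """Accept any request, record it, and answer 200 so probes keep coming."""

    def _record(self):
        captured.append({
            "method": self.command,
            "path": self.path,
            "host": self.headers.get("Host", ""),
            "user_agent": self.headers.get("User-Agent", ""),
        })
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    do_GET = do_POST = _record

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

def start_listener(port=0):
    """Start the listener on a background thread; returns the bound port."""
    server = http.server.HTTPServer(("127.0.0.1", port), CaptureHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]
```

In the real setup, each record would be indexed into Elasticsearch instead of an in-memory list, and an HTTPS socket with the expired self-signed certificate would be added alongside the plain HTTP one.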

About the results: I've loaded the Elasticsearch dataset into R for easy visualisation and querying, and together with good old awk, sed, wc, and grep I've been able to draw some interesting results.

Within the dataset we found several occurrences of Amazon in the User-Agent header, leading to the following Amazon-related user agents:

  • Amazon Route 53 Health Check Service
  • Amazon Simple Notification Service Agent
  • Amazon CloudFront

The health check service checks whether the host is online; if it returns the expected result (status code 200), the server is included in the DNS-based load balancing configuration.

The Simple Notification Service Agent is used to send notifications to the server. We've received several notifications, such as expired cache keys and Varnish cache flushes, but also a load of messages containing user data.

The Amazon CloudFront traffic is traffic destined for the origin web servers, for data that needs to be cached for a longer period of time. This is interesting as well, because you'll be able to poison the cache for a long time.

An analysis of the other user agents gave some interesting results as well. We've seen the following user agents:

  • Stripe (the URL being called was a webhook and contained information about an updated subscription)
  • GitHub-Hookshot (a webhook being called)
  • com.google.* (user agents used by Google applications such as Docs, Gmail, and their iOS apps)
    • com.google.chrome.ios
    • com.google.Classroom
    • com.google.Docs
    • com.google.Gmail
    • com.google.GoogleMobile
    • com.google.hangouts
    • com.google.hangouts_iOS
    • com.google.ios.youtube
    • com.google.Slides
  • Gmail
  • GmailHybrid
  • Google
  • Google.Docs
  • Google.Drive
  • Google.DriveExtension
  • Google.Slides
  • GoogleAnalytics
  • GoogleCsi
  • Hangouts
  • SLGoogleAuthService
  • com.apple.appstored (the user agent used by the Apple App Store application)
  • FBiOSSDK (the Facebook app)
  • FitbitMobile (the Fitbit app)
  • Haiku%20Learning
  • Instagram (the Instagram app)
  • Spotify

In total we've found 159 different user agents (with version numbers stripped off), which is a lot. The complete dataset contained ~80k requests.
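Grouping user agents after stripping version numbers can be done with a simple normalisation pass. A sketch in Python (the exact normalisation rules here are my own assumption, not necessarily what was used for the dataset):

```python
import re
from collections import Counter

def strip_version(user_agent: str) -> str:
    """Drop version-like tokens from a User-Agent string."""
    ua = re.sub(r"/[\w.\-]+", "", user_agent)   # "Name/1.2.3" -> "Name"
    ua = re.sub(r"\b\d+(\.\d+)*\b", "", ua)     # bare version numbers
    return re.sub(r"\s+", " ", ua).strip()      # collapse leftover whitespace

def agent_counts(agents):
    """Count requests per normalised user agent."""
    return Counter(strip_version(ua) for ua in agents)
```

With this, `Stripe/1.0` and `Stripe/2.0` collapse into a single `Stripe` bucket, which is how a figure like "159 different user agents" can be derived from ~80k raw requests.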

Those user agents are strange; this is not the variety I had expected. It looks like normal traffic passing through a proxy or a gateway. Let's look at the hostnames: in total we encountered 578 different hostnames, including a lot of interesting hosts. This is just a subset.

  • Minecraftprod.rtep.msgamestudios.com:443
  • www.googleapis.com:443
  • apis.google.com:443
  • inbox.google.com:443
  • global.bing.com
  • api-global.netflix.com
  • m.baidu.com
  • www.fitbit.com:443
  • audio-sv5-t1-1-v4v6.pandora.com
  • s3.amazonaws.com:443
  • pagead2.googlesyndication.com
  • maps.googleapis.com:443
  • www.google.com.tw:443
  • d1e5xacotudrwg.cloudfront.net
  • www.google.pl
  • www.wikipedia.org
  • admin.brightcove.com
  • crt.usertrust.com
  • www.twitter.com:443
  • oauth.vk.com:443
  • en-US.appex-rf.msn.com
  • 4-edge-chat.facebook.com:443
  • tados-s.westeurope.cloudapp.azure.com:15002
  • google.com:443
  • m.google.com:443
  • www.baidu.com:443
  • www.bing.com
  • s3-us-std-102-prod-contentmover.prod-digitalhub.com.akadns.net
  • i.instagram.com:443
  • localhost
  • mail.google.com:443
  • itunes.apple.com:443

So this is really strange. What about the X-Forwarded-For headers? If the traffic had been destined for a proxy server, we should be able to find those. Looking at this traffic we didn't see really awkward things; most of it came from leftover DNS entries for other hosts. One super weird thing, though, is this traffic destined for www.datapool.vn.


Host: www.datapool.vn

Remote Address:

Where the remote address is somewhere in Vietnam, but the X-Forwarded-For header shows that the request has been forwarded by no fewer than eight different proxy servers. If you look up the netblock owners of these proxy servers, you'll find these organizations:

  • DoD Network Information Center (DNIC)
  • DoD Network Information Center (DNIC)
  • Computer Sciences Corporation (CSC-68)
  • UK Ministry of Defence
  • Airtel Ghana
  • The Swatch Group Ltd

So: traffic from a Vietnamese client address, via my Amazon server, with both the UK Ministry of Defence and the US Department of Defense in the X-Forwarded-For chain, and a Vietnamese website as destination. Interesting. I don't know what to make of this; any ideas are welcome.
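Walking an X-Forwarded-For chain like that is straightforward: each proxy appends the address it received the request from, so the left-most entry is (claimed to be) the original client. A small sketch; the eight-hop header value below is an invented example using documentation addresses, not data from the dataset:

```python
def proxy_chain(xff_header: str) -> list:
    """Split an X-Forwarded-For value into its ordered list of hops.

    The left-most entry is the original client; each intermediate
    proxy appends the address it saw the request come from.
    """
    return [hop.strip() for hop in xff_header.split(",") if hop.strip()]

# Hypothetical 8-hop chain resembling the one described above:
example = ("203.0.113.7, 198.51.100.2, 192.0.2.9, 203.0.113.20, "
           "198.51.100.33, 192.0.2.41, 203.0.113.55, 198.51.100.60")
```

Each hop can then be fed to a whois/netblock lookup to produce an owner list like the one above. Note that X-Forwarded-For is trivially forgeable, which is one possible explanation for the odd chain.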

Now I'm looking at the requesting addresses, retrieving all their netblock owners. This is not 100% solid, because netblocks can change ownership over time, but let's see what it brings us. To add some focus, I've filtered the traffic down to the com.google.* user agents; this is traffic I wouldn't expect to arrive at my systems. The netblock owners are the following:

  • ATT Internet Services (SIS-80)
  • Cellco Partnership DBA Verizon Wireless (CLLC)
  • Gestion de direccionamiento UniNet
  • Hawaiian Telcom Services Company, Inc. (HAWAI-3)
  • McAllen Independent School District (MISD-15)
  • PSINet, Inc. (PSI)
  • San Diego County Office of Education (SDCS)
  • Time Warner Cable Internet LLC (RCSW)
  • Time Warner Cable Internet LLC (RRMA)
  • Time Warner Cable Internet LLC (RRMA)
  • Time Warner Cable Internet LLC (RRSW)
  • Time Warner Cable Internet LLC (RRSW)
  • Time Warner Cable Internet LLC (RRWE)
  • Time Warner Cable Internet LLC (RRWE)
  • T-Mobile USA, Inc. (TMOBI)
  • WideOpenWest Finance LLC (WOPW)

In total there were 95 different originating IP addresses, with 1,446 requests (with user agent com.google.*), which is just a small sample of the complete set. AT&T and Time Warner made the most requests.

So with all the data we've crunched, we've found the following cases:

  • we receive data from client addresses mapped to telecom and internet providers, with the HTTP method CONNECT and traffic destined for Google, Microsoft, and many others.
  • we receive data for Amazon Route 53 health checks, the Amazon Simple Notification Service Agent, and CloudFront. Many of the SNS messages contain privacy-related information.
  • we receive many more requests for leftover IP addresses; these consist of GET requests, but also POST requests containing data such as tokens, keys, and logs.

This leaves us with the following attack scenarios:

  • proxy the traffic on our server to the real destination, for example with a falsely generated certificate. Many apps will reject the certificate, but we all know the trouble CAs have with incorrectly issued certificates. So at least in theory it would be possible to man-in-the-middle traffic with accepted certificates.

  • For the leftover hosts it would even be possible to request a real Let's Encrypt HTTPS certificate with a bit of timing and luck, because the challenge/response works via HTTP.

  • Another important issue to worry about is that it is also possible to get accepted as a server into the Route 53 load balancing configuration, just by returning valid response status codes. As long as we keep returning valid status codes, we remain part of the configuration until someone notices.

  • And last but not least, there is the amount of data we were receiving on our HTTP endpoints, such as the SNS notifications and webhooks. This could be prevented easily by enforcing valid HTTPS certificates.

Disclosure Timeline

We've disclosed our findings to Amazon, and Amazon has implemented an IP cooldown feature, which should fix these issues.

2016/10/13: issue reported to Amazon. AWS confirms receipt of the report, and reaches out requesting additional information
2016/10/27: AWS Security requests additional details to further the investigation, and sets up time for an initial telephone sync
2016/11/08: Conference call confirmed for 2016/11/11
2016/11/11: Telephone sync takes place to provide additional technical information. AWS confirmed that it is possible for traffic configured with a destination IP to be received by any instance with that public IP regardless of attachment. Shared that AWS would be adding a cooldown period before an IP is reassigned to reduce the likelihood of an IP being reused while traffic destined for it was in flight. This would be delivered in early 2017
2017/02/24: IP cooldown feature released

Disclaimer: it is possible that the dataset has been manipulated by forged requests, but as no one knew about this research and the launch of the spot instances was random, I don't expect that to be the case.

If you have any more questions, just let me know. I'll be happy to answer them. You can contact me at:

[email protected]


With the huge attack surface provided by CMSs like Drupal or WordPress, webshells remain a classic attack scenario. A few months ago, I wrote a diary about the power of webshells[1]. A few days ago, a friend of mine asked me for some help with an incident he was investigating. A website was compromised (no magic - a very bad admin password) and a backdoor was dropped. He sent me a copy of the malicious file. It was quite small (only 5250 bytes) and started like this:

?php ${\x47L\x4f\x42\x41L\x53}[\x62u\x6d\x66\x7a\x78]=a\x75\x74h ${\x47LOBAL\x53}[\x71\x70b\x78\x67\x70\x69\x65\x71b\x78]=\x76\x61\x6c\x75\x65 ${GLO\x42\x41\x4c\x53}[e\x6e\x79p\x75\x74\x68d\x6c\x6bk]=k\x65\x79 ${\x47L\x4 ...

The file was already uploaded on VT one month ago[2] and had a detection score of 0/54. I decided to have a deep look at the file and to deobfuscate it. Several steps were required:

Step 1 - A lot of characters are replaced by their hexadecimal encoding (\xNN). A quick Python 2 script decodes them:

    f = open('sample.php')
    w = open('sample-out', 'w')
    d = f.read()
    d2 = d.strip()
    w.write(d2.decode('string-escape'))

The decoded output looks like this:

?php ${GLOBALS}[bumfzx]=auth ${GLOBALS}[qpbxgpieqbx]=value ${GLOBALS}[enyputhdlkk]=key ${GLOBALS}[pwhuehui]=j ${GLOBALS}[pbkqpwkeuthu]=i ${GLOBALS}[tkoqjcwbcj]=value $udborfbq=data ${GLOBALS}[bdylpnwgwuyn]=data_key ${GLOBALS}[knxtwihmugi]=data ...
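Note that the `string-escape` codec only exists in Python 2. On Python 3, the same decoding step can be sketched with the `unicode_escape` codec (my own equivalent, not part of the original write-up):

```python
import codecs

def decode_hex_escapes(obfuscated: str) -> str:
    """Turn \\xNN escape sequences back into plain characters,
    e.g. '\\x47L\\x4f\\x42\\x41L\\x53' -> 'GLOBALS'."""
    return codecs.decode(obfuscated, "unicode_escape")
```

This is good enough for the ASCII-only sample here; for arbitrary binary payloads you would decode bytes instead of text to avoid mangling non-ASCII characters.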

Step 2 - We see that the PHP code references variables through the ${GLOBALS} array, with assignments of the form:

${GLOBALS}[foo] = bar

We can search/replace all occurrences of global variables to make the code more readable.
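This search/replace step can be scripted. A sketch (the alias name is taken from the decoded sample above; the regex itself is my own, simplified to the quote-stripped form shown here):

```python
import re

def resolve_globals(code: str) -> str:
    """Replace ${GLOBALS}[alias] references with the variable each alias
    was assigned to, e.g. ${GLOBALS}[bumfzx] -> $auth."""
    # Collect assignments of the form ${GLOBALS}[alias]=name
    mapping = dict(re.findall(r"\$\{GLOBALS\}\[(\w+)\]=(\w+)", code))
    # Rewrite every reference to the mapped plain variable name
    return re.sub(
        r"\$\{GLOBALS\}\[(\w+)\]",
        lambda m: "$" + mapping.get(m.group(1), m.group(1)),
        code,
    )
```

After this pass, opaque references like `${GLOBALS}[bumfzx]` read as `$auth`, which makes the control flow of the backdoor much easier to follow.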

The last step was to beautify the code to make it human-readable. The final backdoor version is available on Pastebin[4]. In the code, you can see the $auth variable used to encrypt/decrypt the payload passed to the backdoor. Surprisingly, I found many occurrences of the same string on Google. This reveals that the code is not new and was already referenced around July 2015. Practically, what does it do?

Compared to full-featured webshells, there are no nice features here. It just accepts PHP commands that are passed to an eval(). Data is passed through POST HTTP requests or cookies. The following arguments must be passed to the script:

ak is the authentication key, a is the command, and d contains the PHP commands to execute. There are two commands available:

i returns version information (the PHP version and the backdoor version):

Array ( [ak] = [a] = i )
a:2:{s:2:"pv";s:18:"5.3.10-1ubuntu3.26";s:2:"sv";s:5:"1.0-1";}nsdfjk

e executes the code passed in d:

Array ( [ak] = [d] = system('uname -a') [a] = e )
Linux shiva 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

What can we learn from this backdoor?

  • Don't lose time reversing what has already been done. Google some strings from the obfuscated code to find relevant material.
  • Backdoors like this are mostly used by script kiddies who don't even take the time to change the encryption keys.
  • It is not easy to detect with classic log files.

Such a backdoor is stealthy and not easy to spot in classic web server log files: neither POST data nor cookies are logged by default. You can log POST data using mod_security[5]. You can also search for peaks of POST requests in your log files.
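Searching for peaks of POST requests in a common/combined-format access log can be sketched like this (the log lines in the test are invented examples, and the hour-bucketing choice is mine):

```python
import re
from collections import Counter

# Matches the timestamp (down to the hour) and the HTTP method of a
# common/combined log format line, e.g.:
# 192.0.2.1 - - [10/Mar/2017:14:12:01 +0000] "POST /index.php HTTP/1.1" 200 512
LINE = re.compile(
    r'\[(?P<hour>[^\]]+?:\d{2}):\d{2}:\d{2} [^\]]+\] "(?P<method>\w+) '
)

def post_peaks(lines):
    """Count POST requests per hour; a sudden spike may reveal a webshell."""
    hits = Counter()
    for line in lines:
        m = LINE.search(line)
        if m and m.group("method") == "POST":
            hits[m.group("hour")] += 1
    return hits
```

Feeding an access log through `post_peaks` and sorting the resulting counter makes unusual bursts of POST traffic, like repeated eval() submissions to a single PHP file, stand out immediately.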


Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant


Linux Kernel CVE-2015-8962 Memory Corruption Vulnerability
Linux kernel Local Use After Free Multiple Denial of Service Vulnerabilities